HTTP Caching Client


As of Pixlet version 0.26.0, the Starlark http module supports caching natively :tada:. To cache a GET request for 60 seconds, simply add a TTL:

http.get("", ttl_seconds = 60)


Our platform has grown substantially, both in the number of Tidbyts online and in the number of apps available. With the vast majority of apps relying on external HTTP resources, this poses a serious problem: a new app can quickly become popular and leak a large volume of requests to a third party. We need to make it easier to cache and harder to accidentally send a large volume of requests to an external API.

Notable Mentions

Some things you should be aware of:

  • Scope: the HTTP cache is scoped per app
  • Minimum: the client will cache requests for a minimum of 5 seconds
  • Jitter: the client will randomly select a TTL that is +/- 10% of the requested TTL
  • Rate Limits: the client respects 429s and will cache the response for the period recommended by the API
  • Cache Control: if a developer does not request a TTL and the API provides a max-age Cache-Control header, the client will honor it, up to a maximum of 1 hour
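
The minimum and jitter rules above compose: the requested TTL is first clamped to the 5-second floor, then perturbed by up to 10% in either direction. Here is a minimal sketch of that selection logic in Python (the names effective_ttl and MIN_TTL are illustrative, not the actual client internals):

```python
import random

MIN_TTL = 5  # seconds; the documented minimum cache duration

def effective_ttl(requested_ttl):
    # Clamp to the minimum, then apply +/- 10% jitter so that many
    # installations fetching the same resource don't all expire
    # their cache entries at the same instant.
    ttl = max(requested_ttl, MIN_TTL)
    return ttl * random.uniform(0.9, 1.1)
```

Spreading out cache expirations like this smooths request spikes against third-party APIs when an app is installed on many devices.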

Confused? Have a question? Let me know in the comments below or give me a shoutout on Discord.

Next Steps

With the HTTP cache scoped per app, we’ll be looking to scope the cache module per installation. The current per-app scoping makes it too easy to leak PII: a response cached for one user’s installation can be served to another user of the same app. As we continue to streamline the community review process, this change will eliminate some of the bugs we currently have to look out for manually by making them impossible.
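
To make the difference concrete: per-installation scoping would fold the installation ID into every cache key, so entries written for one user can never be read back for another. A minimal Python sketch (the key layout here is a hypothetical illustration, not the actual implementation):

```python
def scoped_key(app_id, installation_id, key):
    # Hypothetical key layout: prefixing with both the app ID and the
    # installation ID means one installation's cached data can never
    # be read back by another installation of the same app.
    return "%s/%s/%s" % (app_id, installation_id, key)
```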

Help Wanted

We need your help! We’re calling on all community members to help us migrate apps to the new client. We’ll be pitching in as well over the next week or so to get all apps migrated over. See something like this?

def get_cachable_data(url, timeout):
    key = base64.encode(url)

    data = cache.get(key)
    if data != None:
        return base64.decode(data)

    res = http.get(url = url)
    if res.status_code != 200:
        fail("request to %s failed with status code: %d - %s" % (url, res.status_code, res.body()))

    cache.set(key, base64.encode(res.body()), ttl_seconds = timeout)

    return res.body()

Instead, this function should now look like the following:

def get_cachable_data(url, timeout):
    res = http.get(url = url, ttl_seconds = timeout)

    if res.status_code != 200:
        fail("request to %s failed with status code: %d - %s" % (url, res.status_code, res.body()))

    return res.body()

Hi there! I had a quick question about some of the details regarding the new http cache mentioned here. I just built my first app, and despite my best efforts, I still seem to be getting rate limited by the API I’m hitting. This is the Marvel API for the Marvel of the Day app. My basic request code is this:

        req = http.get(BASE_URL + "/" + str(characterId), ttl_seconds = 86400, params = params)

The only thing worth noting is that the additional params include a timestamp and a hash generated with that timestamp that obviously changes every time a request is made. Does the new cache method include query params when setting the cache key? If that’s the case, then that would explain why my requests are never getting cached.

Is there a way around this? Do I need to revert to the old caching method? Thanks!

I think your logic is correct. You can either use the old cache or remove the timestamp parameter from the URL.
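
Expanding on the reply above: the cache key covers the full request, including query params, so a per-request timestamp and hash defeat caching entirely. If the API requires those params, one workaround is reverting to the manual cache module and deriving the key only from the stable parts of the URL. A minimal Python sketch of that key derivation (the ts and hash parameter names are assumptions; in Starlark you would pair this with cache.get/cache.set as in the old pattern):

```python
import base64
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

VOLATILE_PARAMS = {"ts", "hash"}  # assumed names of the per-request params

def stable_cache_key(url):
    # Drop params that change on every request, then base64-encode the
    # remainder, mirroring the base64-of-URL key used by the old
    # manual-caching pattern.
    parts = urlparse(url)
    stable = [(k, v) for k, v in parse_qsl(parts.query)
              if k not in VOLATILE_PARAMS]
    stripped = urlunparse(parts._replace(query=urlencode(sorted(stable))))
    return base64.b64encode(stripped.encode()).decode()
```

With a key like this, two requests that differ only in their timestamp and hash map to the same cache entry, while requests for different characters still get distinct entries.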