
Caching in GraphQL: How to prevent excessive and unnecessary requests

Author
Chidume Nnamdi

Learn how caching can be used to cache GraphQL requests to prevent excessive and unnecessary requests.

What is caching?

Let's start by understanding what caching is in general; knowing that will help us fully understand what caching in GraphQL entails.

Caching is a long-standing optimization technique used in many fields. The idea is to avoid performing the same action twice: an action is performed once and its result is stored, so when the same action is requested again, the previously stored result is retrieved and returned without re-performing the action.

The place where the results are stored is called the cache.

The question here is: how do we know when we are performing the same action? How do we establish its uniqueness?

In the cache, results are stored as key-value pairs. Every action has something unique about it, and that uniqueness is what becomes the cache key. When a new action comes in, its key is matched against the stored keys: on a match, the corresponding cache value is returned; on a miss, the action is performed and its result is stored in the cache so that future requests can be served from it.
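This key-value lookup is the same idea behind function memoization. As a generic illustration (not GraphQL-specific), a minimal memoizer in JavaScript might look like this:

```javascript
// Minimal memoizer: the stringified arguments form the cache key,
// and the computed result is stored as the cache value.
function memoize(fn) {
  const cache = new Map();
  return function (...args) {
    const key = JSON.stringify(args); // the "uniqueness" of the action
    if (cache.has(key)) {
      return cache.get(key); // cache hit: skip the computation
    }
    const result = fn(...args); // cache miss: perform the action
    cache.set(key, result);
    return result;
  };
}

// An "expensive" computation that now runs only once per distinct input.
let calls = 0;
const square = memoize((n) => {
  calls += 1;
  return n * n;
});

square(4); // computed and stored
square(4); // returned straight from the cache; calls stays at 1
```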

HTTP Caching

We saw how caching function calls works; an HTTP cache applies the same idea to the HTTP requests our app makes to the server.

Just as memoizing a function improves performance in a JS app, an HTTP cache reduces latency and network traffic on our website.

In HTTP caching, we have browser caches, proxy caches, gateway caches, CDNs, reverse proxy caches, and load balancers.

The browser cache is a private HTTP cache. The browser stores the response of a network request in its internal storage: when a request for a page or an XHR is made, the document is fetched from the server and cached. The next time the same request is made, the browser retrieves the response directly from its cache without a round trip to the server. This makes the document load much faster while saving the user's data at the same time.

This is also helpful when the browser is offline: the document can still be served from the cache.

A proxy network may be serving requests for a huge number of users. It can use a proxy cache to store frequently accessed resources, reducing the time spent retrieving the same resource from the internet.

HTTP caching mainly applies to GET requests, because they only read from the server and do not write to it. Responses to other HTTP methods (POST, PUT, DELETE) are generally not cached, because doing so could lead to data loss or stale information being displayed.

The server can tell us the freshness of a response by using the Cache-Control and Expires headers.

What is this freshness?

Items stored in a cache for a long time might become stale. That is, the value on the server has changed, so the cache needs to be refreshed to bring it back in sync with the server.


Cache-Control: max-age=100

This response stays fresh for 100 seconds; after that it is considered stale or outdated, and the cache must be refreshed with new values.
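As a sketch of how a client could apply this directive, here is a hypothetical freshness check (the helper name and timestamps are assumptions for illustration):

```javascript
// Decide whether a cached response is still fresh, given the
// max-age directive (in seconds) from its Cache-Control header.
function isFresh(cachedAtMs, maxAgeSeconds, nowMs = Date.now()) {
  const ageSeconds = (nowMs - cachedAtMs) / 1000;
  return ageSeconds < maxAgeSeconds;
}

const cachedAt = Date.now() - 50 * 1000; // response cached 50 seconds ago
isFresh(cachedAt, 100); // still within max-age=100: serve from cache
isFresh(cachedAt, 30); // past max-age=30: stale, refresh from the server
```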

GraphQL Caching

From the sections above on RESTful APIs, we learned that GET requests are easy to cache, while requests using other HTTP methods are hard to cache or outright uncacheable.

RESTful APIs use the URL as a globally unique identifier for each GET request. This URL can be leveraged to build a cache: the URL serves as the key in the cache's key-value pair, and the response becomes the value.

Being a globally unique identifier means that the URL is unique and cannot have duplicates.

In GraphQL, the POST HTTP method is used to perform queries, and there are no URL-like endpoints as we have in RESTful APIs.

This makes caching GraphQL queries difficult. Nevertheless, attempts have been made to cache GraphQL queries; we will look at them below.

GraphQL queries are like this:


query {
  foods {
    title
    type
  }
}

Since queries are sent as text in the POST payload, the query text can serve as the identifier and be cached against the response. The query text can then be used to fetch the response from the cache.
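A rough sketch of this idea, with a hypothetical hard-coded response standing in for a real server round trip:

```javascript
// Use the raw query text as the cache key for the response.
const queryCache = new Map();
let serverHits = 0; // counts simulated round trips to the server

function cachedQuery(queryText) {
  if (queryCache.has(queryText)) {
    return queryCache.get(queryText); // identical text: serve from cache
  }
  serverHits += 1; // simulate hitting the GraphQL server
  const response = { data: { foods: [{ title: "Rice", type: "grain" }] } };
  queryCache.set(queryText, response);
  return response;
}

cachedQuery("query { foods { title type } }"); // hits the "server"
cachedQuery("query { foods { title type } }"); // served from the cache
```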

There is a problem with this: a query might seem unique until a variable is passed to it.


query {
  foods(id: 1) {
    title
    type
  }
}


This changes the uniqueness, yet both queries ask for the same shape of data.

Now, imagine the same request where the id is 2. The query becomes:


query {
  foods(id: 2) {
    title
    type
  }
}

The URL remains the same but the POST body changes. This makes HTTP caching very difficult: since the URL has not changed, the cache key for both queries would be the same, and the wrong result would be returned.

We now know that in GraphQL it is the POST payload, the query text, that is unique. It is therefore reasonable to use the POST body as the cache key.

But there are problems with this approach too.

Two queries might look the same without being textually identical. Let's see the queries below:


query {
  food {
    title
  }
}

query {
  food {
    # This contains the name of the food, e.g. Jollof Rice
    title
  }
}

The two queries are semantically the same, but the comment makes the second differ textually from the first. This causes them to be stored under different cache keys.

Another case where two seemingly identical queries end up different is the ordering of fields and arguments.


query {
  food {
    title
    body
  }
}

query {
  food {
    body
    title
  }
}


The two queries are the same, but the field ordering causes them to be stored under different cache keys.

See another example:


query {
  food(id: 100, title: "Rice") {
    body
    title
  }
}

query {
  food(title: "Rice", id: 100) {
    body
    title
  }
}

The above queries are the same: both query a food with an id of 100 and a title of "Rice". Yet they will not be treated as the same query; each gets a different cache key because the arguments are not ordered the same way.

In the first query id comes first, and in the second title comes first; that difference alone causes separate cache keys to be set for them.
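One rough way to blunt such textual differences is to normalize the query text before using it as a key. The regex-based helper below is only a sketch; real clients parse the query into an AST and print it canonically, which also solves field and argument ordering, something regexes cannot do:

```javascript
// Strip GraphQL comments and collapse whitespace so that
// superficially different query texts map to one cache key.
// NOTE: this does NOT normalize field/argument order.
function normalizeQuery(queryText) {
  return queryText
    .replace(/#[^\n]*/g, "") // drop `# ...` comments
    .replace(/\s+/g, " ") // collapse all whitespace runs
    .trim();
}

const plain = normalizeQuery("query { food { title } }");
const commented = normalizeQuery(`query {
  food {
    # This contains the name of the food
    title
  }
}`);
// plain === commented: comments and layout no longer affect the key
```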

Strategies for GraphQL caching

In this section, we will look at the caching approaches provided by Apollo Client and urql for GraphQL queries.

First, we will start with Apollo Client.

Apollo Client

Apollo Client is a GraphQL client library that provides us with powerful methods to query and mutate our GraphQL server.

Apollo Client stores queries and their responses in an in-memory cache. Subsequent queries cause Apollo Client to check the in-memory cache first and, on a hit, return the result without performing a network request.

Caching is configured in Apollo Client by creating an InMemoryCache object and passing it to the ApolloClient constructor in the cache property.


import { InMemoryCache, ApolloClient } from "@apollo/client";

const client = new ApolloClient({
  cache: new InMemoryCache(config),
});

The config is an object that contains the configuration options for the InMemoryCache.

The InMemoryCache normalizes the query response objects before saving them in its internal storage.

According to Apollo Client's docs:

  • The cache generates a unique ID for every identifiable object included in the response.
  • The cache stores the objects by ID in a flat lookup table.
  • Whenever an incoming object is stored with the same ID as an existing object, the fields of those objects are merged.
  • If the incoming object and the existing object share any fields, the incoming object overwrites the cached values for those fields.
  • Fields that appear in only the existing object or only the incoming object are preserved.
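To make the normalization steps above concrete, here is a hypothetical sketch of what the flat lookup table could look like after caching a response with two Food objects (keys such as "Food:1" combine __typename and id, which is what cache.identify computes; this is an illustration, not Apollo's exact internals):

```javascript
// Sketch of a normalized, flat lookup table: each identifiable
// object is stored once, and other entries point to it by reference.
const normalizedCache = {
  ROOT_QUERY: {
    foods: [{ __ref: "Food:1" }, { __ref: "Food:2" }],
  },
  "Food:1": { __typename: "Food", id: 1, title: "Jollof Rice" },
  "Food:2": { __typename: "Food", id: 2, title: "Coconut Rice" },
};
```

Because each object lives in exactly one slot, a later response containing "Food:1" is merged into that single entry, and every query referencing it sees the update.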

Let's look at the ways we can read and write to the Apollo Client cache.

Reading and writing to the cache

Let's say we have a GraphQL server set up and have performed queries against it using the Apollo Client library.


import { InMemoryCache, ApolloClient } from "@apollo/client";

const client = new ApolloClient({
  cache: new InMemoryCache(config),
});

We can use the useQuery hook to perform a GraphQL query like this:


import { useQuery, gql } from "@apollo/client";

const GET_FOODS = gql`
  query Food {
    foods {
      title
      body
    }
  }
`;

useQuery(GET_FOODS);

This query hits the GraphQL server; Apollo Client caches the result, and the result is displayed.

Now we can read and write data to the cache. These operations act only on the cached results and never hit the GraphQL server; they work solely on the data we previously fetched above.

Reading from the cached result

We can read from the cached result using the readQuery API, which lets us execute a GraphQL query against our cached data.


client.readQuery({
  query: gql`
    query QueryFood($id: Int) {
      foods(id: $id) {
        body
      }
    }
  `,
  variables: {
    id: 4,
  },
});

This query returns the food with id 4 from the results returned in the section above. The QueryFood query will not hit the GraphQL server; there won't be any network request at all.

If the cache is missing any of the requested fields, readQuery returns null; it won't attempt to query the GraphQL server.

We can write data to the cached result using the writeQuery API. To write a new food to our cached result, we do this:


client.writeQuery({
  query: gql`
    query WriteFood($id: Int!) {
      foods(id: $id) {
        id
        body
        title
      }
    }
  `,
  data: {
    foods: {
      __typename: "Food",
      id: 51,
      title: "Coconut Rice",
      body: "Coconut Rice",
    },
  },
  variables: {
    id: 51,
  },
});

writeQuery will create a new food object with id of 51 in the cached results.

These changes to the cached results are not made on the GraphQL server; they happen only locally in the cache. Also, if another food object with an id of 51 already exists in the cache, writeQuery will overwrite it with the new food object.

We can combine reading and writing using the cache.modify(...) method.

This lets us select an object from the cached result and modify it, much like calling readQuery to select a particular object and then writeQuery to change it.

Example:


const foodObject = {
  __typename: "Food", // cache.identify derives the ID from typename + id
  id: 5,
  body: "Coconut Rice",
  title: "Coconut rice",
};

cache.modify({
  id: cache.identify(foodObject),
  fields: {
    body(cachedBody) {
      return "White Rice";
    },
  },
});

The id property is the ID of the food object we want to modify in the cached result. It is obtained by passing the object to the cache.identify(...) method.

The fields object contains modifier functions, one for each field to change. In the code above, we change the body field of the food object to "White Rice".

Server-side caching: ApolloServer

Most of the caching we have discussed has been client-side caching, where the cache resides on the client and the response is served from it without the request ever reaching the GraphQL server.

Caching in GraphQL can also be done server-side. This means the GraphQL server can be configured to serve responses from its own cache without running the resolvers.

This kind of caching is supported in ApolloServer, where it is configured on a per-field basis.

What does this mean?

It means that each field in a query/mutation schema can be cached, so the result of that field is not recalculated after the first request.

Caching is enabled by setting a maxAge value on a field. The maxAge is the maximum amount of time a field's value remains valid before it expires and becomes stale.

It is just like the expiry date on a product: it tells the consumer when the product will no longer be safe or effective to use.

The maxAge specifies the same thing for a field's cached value.

Once a maxAge is set on a field, ApolloServer checks on each request whether the maxAge has elapsed. If the cached value is still within its maxAge, the field's response is served from the cache. If the time has passed, a new response is calculated for the field rather than coming from the cache.

To apply this caching in a schema, we use the @cacheControl directive with a maxAge argument of type Int (measured in seconds).


type BlogPost {
  _id: String
  title: String
  body: String
  postImage: String @cacheControl(maxAge: 50)
}


The BlogPost type is the shape of a blog post in our API. The @cacheControl directive is set on its postImage field; maxAge: 50 means the cached value of postImage remains valid for 50 seconds.

So if we make the query request:


query {
  blogPost(id: 12344) {
    title
    body
    postImage
  }
}

On the first request, the value of the postImage field is retrieved from the db; on subsequent requests it comes from the ApolloServer cache. Once the 50 seconds elapse, the value is fetched from the db again.

Now, this cache is shared by all users. Caching can be controlled further to work on a per-user basis too, simply by adding a scope argument to the @cacheControl directive.


type BlogPost {
  _id: String
  title: String
  body: String
  postImage: String @cacheControl(maxAge: 50, scope: PRIVATE)
}

A scope can be either PRIVATE or PUBLIC. Private means the cache is kept separate for every user in the system; public means the cache is global and not specific to any user.


type BlogPost {
  _id: String
  title: String @cacheControl(maxAge: 3600)
  body: String
  postImage: String @cacheControl(maxAge: 50, scope: PRIVATE)
}


Here we are also caching the title of the blog post; it is not re-fetched until 60 minutes have elapsed. The cache control on title is global.

We now have cache control on two of our BlogPost fields.

Cache control can be set on as many fields in the schema as we want, not merely a single field.

Note: Cache control @cacheControl(...) is set on the schema definitions of your queries.

maxAge has a default value of 0. Every field in a schema effectively has a maxAge of 0 set on it; being 0 means the field's value won't be cached because it expires instantly. Setting a maxAge value overrides this behavior.

scope has a default value of PUBLIC, so every cache-controlled field is public unless stated otherwise.

Be mindful of the scope you assign to each field. The nature of the field tells us whether it needs to be cached per user or globally.

For example,


type BlogPost {
  _id: String
  title: String @cacheControl(maxAge: 3600)
  body: String
  postImage: String @cacheControl(maxAge: 50)
  viewedByUser: String @cacheControl(maxAge: 50, scope: PRIVATE)
}

The BlogPost schema has a viewedByUser field, which holds the last time the current user viewed this blog post. Caching this field globally would be the wrong choice because its value applies to a single user only.

So this field should be cached on a per-user basis, because its result depends on the user.

The postImage and title fields are globally cached because they are the same for all users: the blog post's title and image are global and seen by everyone.

Setting cache control for a type

Cache control can be set on all fields of a type at once by placing @cacheControl on the type itself.


type BlogPost @cacheControl(maxAge: 50) {
  _id: String
  title: String
  body: String
  postImage: String
}

The cache control is set on the BlogPost type. This applies cache control to all fields in BlogPost, from _id to postImage: every field is cached for a maximum of 50 seconds.

This caching will also hold if the BlogPost type is used in another type.


type Query {
  blogPosts: [BlogPost]
  blogPost(id: String): BlogPost
  user(id: Int): User
  users: [User]
}


The blogPost field will be cached for 50 seconds, because it returns the BlogPost type that was given cache control at the type level.

The caching we have seen so far is static, i.e. the cache control is set in the schema before any query is sent.

We can also provide cache control dynamically in our resolvers. This is done by referencing cacheControl on the info argument and calling its setCacheHint() method.

Example:


const resolvers = {
  Query: {
    blogPosts(parent, args, context, info) {
      info.cacheControl.setCacheHint({ maxAge: 50 });
      return {
        nodes: blogPosts,
        aggregate: {
          count: blogPosts.length,
        },
      };
    },
    blogPost(parent, args, context, info) {
      info.cacheControl.setCacheHint({ maxAge: 40 });
      return blogPosts.find((r) => r.id == args.id);
    },
  },
};

This caches the results of the blogPosts and blogPost queries for 50 and 40 seconds respectively. Their responses come from the cache until their max ages expire.

We can also set a default maxAge for all fields in the ApolloServer constructor. This is done by adding a cacheControl object to the config object passed to the ApolloServer constructor; inside it, we set a defaultMaxAge property to whatever we want the default max age to be.


const server = new ApolloServer({
  //...
  cacheControl: {
    defaultMaxAge: 2,
  },
});

This sets the default max age of all fields in our schema to 2 seconds.

Caching GET requests

GraphQL queries are mostly sent via POST, but they can also be sent via the GET method.


localhost:3000/graphql?query="query { blogPosts { title body } }"

The problem with sending queries via the GET is that the query body can become large quickly.


localhost:3000/graphql?query="query { blogPosts { title body postImage } blogPost(id: 1232343434) { title body postImage } ... }"

This can lead to high network usage and poor performance on the client side.

Executing large queries via the GET method means sending the whole query body on every request. We can circumvent this by using Apollo's Automatic Persisted Queries (APQ).

With APQ, the client sends a unique hash of the query string instead of the full query text. The first time, the server receives the full query and caches the query string against its hash; from then on, the client can execute the cached query by sending only the hash.

This reduces the size of the queries sent on each request.

If the client sends the hash of a query and the server finds no corresponding query in its cache, the server responds with an error; the client then retries, sending both the query string and its hash.

Apollo executes the query, stores the query string alongside its hash in the cache, and sends the result to the client.

From then on, when the client sends the hash, the server executes the query string already in its cache and sends the result down to the client.

Apollo Server supports APQ by default; the configuration is done on the client, because that's where the hashes are generated.

In Apollo Client, we import createPersistedQueryLink. This function creates a link we can add to Apollo's link chain. According to the Apollo docs:

"The link takes care of generating APQ identifiers, using GET requests for hashed queries, and retrying requests with query strings when necessary."


import { ApolloClient, InMemoryCache, HttpLink } from "@apollo/client";
import { createPersistedQueryLink } from "@apollo/client/link/persisted-queries";

const linkChain = createPersistedQueryLink().concat(
  new HttpLink({ uri: "http://localhost:4000/graphql" })
);

const client = new ApolloClient({
  cache: new InMemoryCache(),
  link: linkChain,
});

createPersistedQueryLink creates the persisted-query link, which is placed first in the link chain, followed by an HttpLink pointing at the server. The cache is set up with InMemoryCache; the link chain goes in the link property of the ApolloClient constructor options, and the cache goes in the cache property.

urql

urql uses the concept of document caching to avoid sending the same requests to the GraphQL server. It does so by caching the result of each query.

According to urql docs:

This works like the cache in a browser. urql creates a key for each request that is sent based on a query and its variables.
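Based on that description, the document-cache key can be sketched as the query text combined with its serialized variables (a simplified illustration, not urql's actual implementation):

```javascript
// A document-cache key: the query text plus its variables
// uniquely identifies each request.
function documentCacheKey(queryText, variables) {
  return queryText + ":" + JSON.stringify(variables || {});
}

const k1 = documentCacheKey("query { foods { title } }", { id: 1 });
const k2 = documentCacheKey("query { foods { title } }", { id: 2 });
// k1 !== k2: same query with different variables gets its own entry
```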

Conclusion

In this article, we have taken a broad tour of GraphQL caching.

We started by learning what caching in general means, saw how it optimizes function calls in JavaScript and HTTP requests, and then how it can be used to cache GraphQL requests, preventing excessive and unnecessary requests and making our app highly performant.

