In this article, I will comprehensively introduce gRPC, covering what it is, its different components, and how it works.
What is gRPC?
gRPC is a robust open-source high-performance RPC framework released by Google in 2015. It is a standardized, general-purpose, and cross-platform RPC infrastructure that provides scalability, performance, and functionality to distributed applications.
RPC (Remote Procedure Call) entails executing a subroutine/method/function that resides on a remote computer.
gRPC builds on that idea and adds flexibility, scalability, and security. In gRPC, a client can execute or call a function on a server application running on a different machine. The gRPC server exposes methods that can be called remotely, and the client uses a stub over a connection to call those methods as if they resided locally on the client machine.
Let's go over how to build a gRPC application.
When developing a gRPC application, the first thing to do is define the service interface. This interface contains information about the methods and their parameter and return types.
Using this service definition, the server generates its code. That is, it implements the methods/functions declared in the service definition interface and exposes them as service methods that can be called.
The client also uses this service definition interface to create a client stub. The client stub maps to the service defined in the definition and is used to call the methods declared in the file. This causes the corresponding method defined in the server code to be executed.
The gRPC framework abstracts away the complexities and inner workings of the communication. From our point of view, the client calls a service method and that method runs on the server; the machinery in between is hidden from us.
A gRPC application has three components:
- Service Definition
- The Server
- The Client
Let's look at them in detail.
Service Definition
As we have learned, gRPC uses a service definition interface to declare the methods that can be called from a remote client. The service definition interface also contains the parameter and return types of the methods.
This service definition is written in an IDL (Interface Definition Language); gRPC uses Protocol Buffers as its IDL.
Protocol Buffers are Google's language-neutral, platform-neutral, extensible mechanism for serializing structured data; think XML, but smaller, faster, and simpler.
It allows us to define the structure of our data and then compile that definition into generated code in the language we want to use. With the generated code, we can read and write data in the structure we defined. Protocol Buffers supports code generation for many languages, including Dart, Go, Ruby, C#, Java, Python, Objective-C, and C++.
The Protocol Buffer language is written in a file with the .proto file extension. For example, say we have a BlogPost service that we want to use to:
- get all blog posts
- get a blog post
- create a new blog post
- delete a blog post
- edit/update an existing blog post
We can define the IDL in a BlogPost.proto file.
The current version of the language is proto3, so that is the syntax we will go over here.
Now, we will define a BlogPostService interface in the file.
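A sketch of what the service could look like at this stage is below; the `...` placeholders are not valid proto3 on their own, they simply stand in for the request and return message types we will define shortly.

```proto
syntax = "proto3";

service BlogPostService {
  rpc addBlogPost(...) returns (...);     // create a new blog post
  rpc getAllBlogPost(...) returns (...);  // return all blog posts
  rpc getBlogPost(...) returns (...);     // get a given blog post
  rpc deleteBlogPost(...) returns (...);  // delete a blog post
  rpc updateBlogPost(...) returns (...);  // update an existing blog post
}
```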
We have an RPC service BlogPostService; it exposes five methods: addBlogPost, getAllBlogPost, getBlogPost, deleteBlogPost, and updateBlogPost.
A service is declared in Protobuf using the service keyword, while methods are defined inside the service using the rpc keyword. Notice that every method is preceded by rpc.
These methods, respectively, create a new blog post, return all blog posts, get a given blog post, delete a blog post, and update a blog post.
The methods will need parameters, especially when a method works on a specific blog post; the parameter tells it the id of the blog post to work on. The methods should also return responses.
We define a BlogPost type to set the message format for a blog post.
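A sketch of that message definition:

```proto
message BlogPost {
  string id = 1;     // globally unique identifier of the blog post
  string title = 2;  // title of the blog post
  string body = 3;   // body/content of the blog post
}
```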
The BlogPost definition describes what a blog post structure will look like. A blog post message has three fields: the first holds the id of the blog post, which is a globally unique identifier; the second is the title of the blog post; and the last is the body/content of the blog post.
All fields in the BlogPost message are of the string type; in Protobuf we can also specify other scalar types like int32, int64, bool, etc., as well as composite types.
Notice the numbering 1, 2, 3?
These are the field numbers. Field numbers are unique to each field in a message. They are used to identify each field in the message binary format, and they affect the size of the encoded message.
So far our proto is looking like this:
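Something along these lines, still with the placeholder types:

```proto
syntax = "proto3";

service BlogPostService {
  rpc addBlogPost(...) returns (...);
  rpc getAllBlogPost(...) returns (...);
  rpc getBlogPost(...) returns (...);
  rpc deleteBlogPost(...) returns (...);
  rpc updateBlogPost(...) returns (...);
}

message BlogPost {
  string id = 1;
  string title = 2;
  string body = 3;
}
```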
We have to define messages for the requests and responses.
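They could be declared like this:

```proto
// Identifies a single blog post by its id.
message Request {
  string id = 1;
}

// Used when a method takes no parameters or returns nothing.
message Empty {}

// A list of blog posts.
message BlogPosts {
  repeated BlogPost blogPosts = 1;
}
```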
We defined three message types: Request, Empty, and BlogPosts.
Request holds the id of the blog post we want to work on, such as the post we want to delete, edit, or return. Empty has no fields, so it is used when no parameter is required; it is just like void in C++.
BlogPosts holds an array/list of BlogPost messages. The repeated keyword on the blogPosts field denotes that the field is an array, and the BlogPost type before it denotes that the elements of the array are of the BlogPost type.
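With those messages in place, the service methods can be given their parameter and return types, along these lines:

```proto
service BlogPostService {
  rpc addBlogPost(BlogPost) returns (BlogPost);
  rpc getAllBlogPost(Empty) returns (BlogPosts);
  rpc getBlogPost(Request) returns (BlogPost);
  rpc deleteBlogPost(Request) returns (Empty);
  rpc updateBlogPost(BlogPost) returns (BlogPost);
}
```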
See that we have added the parameter types and return types to our methods. addBlogPost needs to receive a BlogPost message so the post can be added to the database, and it returns the added blog post as a BlogPost.
getAllBlogPost needs no parameter, so the Empty message is used; when calling the method, an empty object should be passed to it. It returns an array of BlogPost messages.
getBlogPost receives a Request param type and returns a BlogPost type.
deleteBlogPost receives a Request param type and returns an Empty type.
updateBlogPost receives a BlogPost type containing the new values the blog post should be updated with. It returns a BlogPost: the blog post edited with the new values, so the client can confirm it has been successfully updated.
Now, this BlogPost.proto file will serve as a blueprint for the gRPC server and client on how to build and call the methods in the service. It tells them which services are present and the methods in each service.
The server uses this blueprint to set up the services and their method handlers; the client, on the other hand, uses it to know which services are present on a server and which methods it can call/invoke remotely.
In a Protobuf service definition, we have four kinds of service methods:
Unary RPC
These are like the regular methods we defined in our BlogPostService. The client sends a single request to the server and gets a single response back, following the request-response pattern.
Server streaming RPC
Here, a single client request opens a stream on the server. The client then reads a sequence of messages streamed back from the server.
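As an illustration, consider a hypothetical chat service (the ChatService, join, JoinRequest, and ChatMessage names here are made up for this example, not part of our BlogPost proto):

```proto
service ChatService {
  // A single join request from the client opens a server-side
  // stream of ChatMessage data flowing back to the client.
  rpc join(JoinRequest) returns (stream ChatMessage);
}

message JoinRequest {
  string username = 1;
}

message ChatMessage {
  string from = 1;
  string text = 2;
}
```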
On the join({...}) call from the client to the server, the server opens and returns a stream of ChatMessage data. The client listens on the stream to receive the data. We use the stream keyword to indicate that the server will send a stream of messages.
Client streaming RPC
This is the reverse of the above. The client opens a stream and emits messages to the server; the server listens on the stream to receive them.
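Continuing the hypothetical chat example, a client-streaming method could be declared like this (sendMsg and ChatMessage are again illustrative names):

```proto
service ChatService {
  // The client opens a stream and sends ChatMessage data;
  // the server replies once after reading the whole stream.
  rpc sendMsg(stream ChatMessage) returns (Empty);
}
```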
The sendMsg method opens a stream on the client side and sends a stream of ChatMessage messages to the server.
Bi-directional streaming RPC
This is two-way streaming. Both the client and the server open a stream and send a stream of data to each other. The streaming is not sequential; they can stream data in whatever order they want.
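In the same hypothetical chat service, a bidirectional method would mark both the parameter and the return type with stream:

```proto
service ChatService {
  // Both sides stream ChatMessage data to each other, in any order.
  rpc chat(stream ChatMessage) returns (stream ChatMessage);
}
```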
We have built our Protobuf definitions and learned a great deal about Protocol Buffers. Now let's build a server that implements the methods declared in the Protobuf.
gRPC Server
Now that we have our proto definition file, we can use the protoc compiler to compile it to source code in the language we want to use on the server side.
The gRPC server will implement the service definitions in the .proto file along with their methods.
The methods will be callable and will handle whatever action/job each is meant to do.
In our proto example, the gRPC server will implement the BlogPostService service and the methods in it. addBlogPost will be a callable method in whatever language the server is written in and can be invoked when the client stub calls it from its platform.
The gRPC server, being a server, will run like any normal server and listen for requests from clients. From the client's request, the server knows which method of which service to call and invokes it.
gRPC servers can be written in any language, as long as gRPC provides a protoc compiler plug-in for that language. This is one advantage of using gRPC: it is polyglot.
For example, we can build the server in Dart.
First, we have to install the proto compiler and the dart protocol buffer plugin to use gRPC.
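On a typical setup this might look like the following; protoc itself is installed separately (for example from your platform's package manager), and the Dart plugin comes from pub:

```sh
# Install the Dart plugin for the protocol buffer compiler.
dart pub global activate protoc_plugin

# Make sure the globally activated binaries are on your PATH.
export PATH="$PATH:$HOME/.pub-cache/bin"
```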
We use the command below to generate the client and server interfaces from our .proto service definition.
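The exact paths depend on your project layout; assuming the proto lives in a protos/ folder and the generated code should go into lib/src/generated, the command would be roughly:

```sh
protoc --dart_out=grpc:lib/src/generated -Iprotos protos/BlogPost.proto
```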
This generates files containing the protocol buffer code, an interface type (or stub) for clients to call, and an interface type for servers to implement.
The server code will be:
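A sketch of what the implementation could look like; the import paths and the generated base class name (here assumed to be BlogPostServiceBase) depend on the protoc Dart plugin and your project layout, and a simple in-memory list stands in for a real database:

```dart
import 'package:grpc/grpc.dart';

// Code generated by protoc from BlogPost.proto (illustrative paths).
import 'src/generated/BlogPost.pb.dart';
import 'src/generated/BlogPost.pbgrpc.dart';

class BlogPostService extends BlogPostServiceBase {
  // A simple in-memory store standing in for a real database.
  final List<BlogPost> _posts = [];

  @override
  Future<BlogPost> addBlogPost(ServiceCall call, BlogPost request) async {
    _posts.add(request);
    return request;
  }

  @override
  Future<BlogPosts> getAllBlogPost(ServiceCall call, Empty request) async {
    return BlogPosts()..blogPosts.addAll(_posts);
  }

  @override
  Future<BlogPost> getBlogPost(ServiceCall call, Request request) async {
    // Throws if no post with the given id exists; real code would
    // translate this into a proper gRPC error.
    return _posts.firstWhere((post) => post.id == request.id);
  }

  @override
  Future<Empty> deleteBlogPost(ServiceCall call, Request request) async {
    _posts.removeWhere((post) => post.id == request.id);
    return Empty();
  }

  @override
  Future<BlogPost> updateBlogPost(ServiceCall call, BlogPost request) async {
    final index = _posts.indexWhere((post) => post.id == request.id);
    _posts[index] = request;
    return request;
  }
}
```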
See that in the above code, we implemented the logic of each method declared in our BlogPost.proto definition file.
In Dart, the proto compiler generates a *ServiceBase class from the service in the proto file; we extend this *ServiceBase class and implement its methods.
Now, we can run the server.
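A minimal entry point, assuming the BlogPostService class from the previous snippet:

```dart
import 'package:grpc/grpc.dart';

import 'blog_post_service.dart'; // the BlogPostService implementation above (illustrative path)

Future<void> main() async {
  // Register the BlogPostService with the server.
  final server = Server([BlogPostService()]);

  // Start listening for client requests on TCP port 9000.
  await server.serve(port: 9000);
  print('Server listening on port ${server.port}...');
}
```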
A Server instance is created with an instance of BlogPostService passed in a list to the constructor. This registers the BlogPostService with the server.
The server is started on TCP port 9000 and begins listening for requests from clients.
That's it for our server.
gRPC Client
We will use the service definition in our proto file to generate client stubs. The client stub will have the same service methods as the server.
The client will call these methods, which translates into a network call to the server. The server receives the request and calls the method in the specified service.
The network call is made via HTTP/2, much like a normal HTTP/1.1 call from a browser. It contains a URL, and the body of the request carries the payload; the payload is the serialized binary message used as the request parameter or the response return value.
A call to the addBlogPost will look like this:
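Conceptually, it is an HTTP/2 POST to a path derived from the service and method names, roughly like this (shown with HTTP/2 pseudo-headers):

```
:method: POST
:path: /BlogPostService/addBlogPost
:authority: localhost:9000
content-type: application/grpc

<length-prefixed, binary-encoded BlogPost message>
```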
localhost:9000 is the address of the server.
The request payload to be sent to the server as the request parameters will be encoded into the binary message format.
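For illustration, suppose the client sends this (made-up) blog post:

```dart
final blogPost = BlogPost()
  ..id = '1'
  ..title = 'gRPC'
  ..body = 'Intro';
```

Using the field numbers from BlogPost.proto, the Protobuf wire encoding of that message comes out to these bytes (shown in hex):

```
0a 01 31              # field 1 (id), length 1, "1"
12 04 67 52 50 43     # field 2 (title), length 4, "gRPC"
1a 05 49 6e 74 72 6f  # field 3 (body), length 5, "Intro"
```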
On the server, it is decoded and the original message object is read.
It is similar to making HTTP/1.1 API calls.
The response from the server is encoded into the binary message format in the same way. The client receives the response in this binary format and decodes it to read the message.
We built the BlogPost gRPC server in Dart; we could build its client in Java, Go, JavaScript, C++, or C#.
To set up a client in Dart, we need to create a gRPC channel and then instantiate the BlogPostServiceClient provided by the code generated from the BlogPost.proto file. This gives us the client stub, which we then use to call the methods.
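A minimal sketch of such a client, again with illustrative import paths:

```dart
import 'package:grpc/grpc.dart';

import 'src/generated/BlogPost.pb.dart';
import 'src/generated/BlogPost.pbgrpc.dart';

Future<void> main() async {
  // Channel pointed at the gRPC server address.
  final channel = ClientChannel(
    '127.0.0.1',
    port: 9000,
    options: ChannelOptions(credentials: ChannelCredentials.insecure()),
  );

  // Client stub generated from the BlogPost.proto service definition.
  final stub = BlogPostServiceClient(channel);

  // Construct a new blog post message.
  final blogPost = BlogPost()
    ..id = '1'
    ..title = 'gRPC'
    ..body = 'Intro';

  // Remote call: this invokes addBlogPost on the server.
  final added = await stub.addBlogPost(blogPost);
  print('Added blog post: ${added.title}');

  await channel.shutdown();
}
```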
See that the channel points to the server address at 127.0.0.1:9000. Next, we created a BlogPostServiceClient stub in the stub variable. We use BlogPost to construct a new blog post, then we call the addBlogPost method on the stub and pass the blogPost to it.
This will invoke addBlogPost on our server and run its logic.
Whenever the gRPC client calls a gRPC service, the client gRPC library packs the parameters into a message (this is called marshalling) and makes a system call to the client machine's OS; the OS makes a network call to the server via the HTTP/2 protocol. The server's OS receives the packets, the server stub unpacks the message (unmarshalling), and the respective server procedure is executed with the message parameters.
The response from the server follows the same steps, but in reverse.
Conclusion
We learned a lot about gRPC in this tutorial.
Let's tick off the points:
- gRPC is great for building distributed applications.
- gRPC is very fast, as it uses the new HTTP/2, a big upgrade on the old HTTP/1.1.
- gRPC supports streaming out of the box, both one-way and two-way.
- gRPC uses Google's Protobuf to serialize and deserialize the messages passed between the client and server.
- gRPC is platform-agnostic; the client and server can seamlessly communicate with each other no matter what machine they run on.
- gRPC is polyglot, both the server and client can be developed in different programming languages.
gRPC is just great; it is a big improvement on its contemporaries like REST, GraphQL, etc.
References
- A basic tutorial introduction to gRPC in Dart.
- Remote procedure call - Wikipedia
- Building a gRPC Server and Client in Dart
- gRPC docs: Introduction to gRPC
- gRPC: Up and Running: Building Cloud Native Applications with Go and Java for Docker and Kubernetes - Kasun Indrasiri and Danesh Kuruppu
- Protocol Buffers (proto3) - Language guide