Teach your (micro)services speak Protocol Buffers with gRPC.

Formal Metadata

Title: Teach your (micro)services speak Protocol Buffers with gRPC.
Number of Parts: 160
License:
CC Attribution - NonCommercial - ShareAlike 3.0 Unported:
You are free to use, adapt, and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose, as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared, also in adapted form, only under the conditions of this license.

Abstract
Teach your (micro)services speak Protocol Buffers with gRPC. [EuroPython 2017 - Talk - 2017-07-12 - PythonAnywhere Room] [Rimini, Italy] When it comes to microservices, there are a lot of things worth keeping in mind. Designing such fine-grained, loosely coupled services requires paying lots of attention to various patterns and approaches to make them future-proof. A very important thing to consider is the way those services will communicate with each other in production. Usually the communication is done over the network using a technology-agnostic protocol. At the next level, the service should provide an API for its peer services. Then, the data should be serialized without altering its meaning and transferred to the picked endpoint. Nowadays, exposing a REST API that operates with JSON over plain HTTP is a usual way to lay the grounds of communication for the services. It is easy to accomplish, but it has some drawbacks. First of all, JSON is a human-readable format, and it's not as compact as other serialization approaches. Also, with JSON it's not possible to natively enforce a schema, and evolving the API may be painful. This talk's purpose is to describe in detail the benefits of protocol buffers, which give us for free an easy way to define the API messages in the proto format and then reuse them across different services, without even being locked into the same programming language for them. Moreover, with gRPC we can define the API's endpoints easily in the same proto format. All this offers us robust schema enforcement, compact binary serialization, and easy backward compatibility.
Transcript: English (auto-generated)
Hello everybody, and welcome to our journey into the land of microservices. This will be an opinionated talk about the different ways services communicate these days. I'll give an introduction to protocol buffers and gRPC, covering their main concepts.
As I mentioned, I will talk generally about the things needed for communication between microservices. That includes the serialization and deserialization of messages, their transport over the wire, and, of course, the wide diversity of your services in terms of the technology stack they run on, from the variety of languages to the variety of platforms you have to support.
In a real-world use case, you'll have some services that communicate with each other. In this case, service A relies separately on B, C, and D. However, things could also look like this: the essential difference is that there are more interdependencies between those services that are not exposed to the user, denoted by node A. At the same time, the user behind node A should not even care much about those dependencies. In a nutshell, simplifying the view, there is usually client-server communication. The communication can be done over HTTP, and the messages could be serialized to JSON strings.
Also, the communication could go over some proprietary protocol, and instead of serialized objects in the payload, you'll have references to the actual remote objects. We'll come back to that one a bit later. Now, considering JSON for object representation, let's take a brief look at its advantages and disadvantages.
So first of all, it's human readable, so it's easy to perceive and debug. It's also schema-less, so you have the liberty to form your JSON in any way; nothing forces you to follow any particular structure. And it's language agnostic, as serializers and deserializers are available in nearly all programming languages.
Speaking of disadvantages: first of all, it's human readable, and if you ask yourself, isn't that a benefit? Well, not really. Human readable means that it's not compact, so it's more expensive in terms of size. Also, it's schema-less, and you'd ask yourself again, isn't that a benefit as well? Actually not, as you always have to map the contents of your JSON to meaningful objects, not only doing more work, but also introducing lots of repetitive boilerplate code, for instance when you have multiple clients consuming the same endpoints. In effect, you implement an artificial schema at the application level, whereas it could, and should, be done at a lower level. Also, there is no type safety whatsoever within the message, which can lead to serious problems when a value is interpreted differently by different implementations. As an alternative to JSON, we could consider protocol buffers.
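To make the size argument above concrete, here is a rough pure-Python comparison. The record and its field names are purely illustrative, and the struct packing is a stand-in for a compact binary encoding, not the actual protobuf wire format:

```python
import json
import struct

# A hypothetical "person" record, as it might travel between services.
person = {"name": "Ada", "id": 1815, "scores": [98, 99, 100]}

# JSON repeats the field names and punctuation inside every message.
json_payload = json.dumps(person, separators=(",", ":")).encode("utf-8")

# A naive binary packing (a stand-in for a real binary encoding, NOT the
# actual protobuf wire format): only the values travel over the wire; the
# schema lives in the code on both sides.
binary_payload = struct.pack(
    "<3sI3H",
    person["name"].encode("utf-8"),
    person["id"],
    *person["scores"],
)

print(len(json_payload), len(binary_payload))  # the binary form is far smaller
```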
Protocol buffers are Google's language-neutral, platform-neutral, extensible mechanism for serializing structured data. "Think XML, but smaller, faster, and simpler," says the official documentation. I'll add a small correction: think JSON, but smaller, faster, and simpler. I may refer to protocol buffers later using terms like proto, proto messages, or protobufs. The interface definition language looks like this. The entire content of this slide defines a message. It's called Person, and it's denoted by the message keyword. It has three fields. The first one is called name, and it's of type string.
The second one is an integer, and the third one is a repeated field. A repeated field is essentially a list of items of the denoted type. Take a closer look at the number on each field: that's the field identifier. It's used for binary encoding and decoding. It is very important to keep in mind that this identifier should be unique, and it should not be reused when some fields get deprecated. To help you keep track of the deprecated fields, there is the reserved keyword to denote fields that are prohibited from being reused. If by any chance you try to reuse one of the reserved fields, the protoc compiler will give you a specific error. In general, that's the entire theory behind protocol buffers. Let's see some of their benefits. First of all, they're binary encoded. This means that they're very compact, and the encoding and decoding process is very fast. Messages can also be serialized to JSON,
Thrift, or other formats if needed. The schema is enforced from the IDL level, and the messages are strongly typed, which is definitely a benefit over volatile, untyped messages. Moreover, having a single way to define messages, you save lots of boilerplate code for serialization and deserialization. It is also language neutral, as it has official support in ten different languages, and community support could make it even better at that. And it gives us some out-of-the-box backward compatibility features, avoiding this kind of code.
In the real world, you'll have various clients, and you won't be able to guarantee that all of them are running the freshest app. With protocol buffers, it's easy to add new fields, deprecate some existing fields, or even rename fields. And it's generally faster: from the network perspective, smaller RPCs consume less space and are transmitted faster.
Also, the memory and CPU usage is smaller, because less data is read and processed while encoding or decoding a protobuf. Now, this is an IDL message notation, and it's similar to the example that was shown earlier.
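As a sketch, the message described earlier, a string name, an integer, and a repeated field, with unique field identifiers and reserved deprecated numbers, might look like this in proto3. The field names and numbers here are illustrative, not taken from the talk's slide:

```proto
syntax = "proto3";

message Person {
  // Identifiers 3 and 4 once belonged to deprecated fields; the reserved
  // keyword makes protoc reject any attempt to reuse them.
  reserved 3, 4;

  string name = 1;             // field identifier 1, used in binary encoding
  int32 id = 2;                // an integer field
  repeated string emails = 5;  // a repeated field: essentially a list
}
```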
From this IDL, code is auto-generated, and it's rather simple to create new objects like that. Sorry, this is Java. The generated code provides a builder, setters, getters, and so on. In Python, however, things look pretty similar, and for even more elegance, you can use the keyword-arguments constructor to build your objects. Now that you have a rudimentary understanding of the data that's generated and consumed by the services, let's take a brief look at the way these messages are exchanged between the services. In a RESTful-ish API, entities usually have distinct URIs: you fire an HTTP request at them, you get back some plain-text-encoded data, and you parse it and do whatever's needed.
This is how an HTTP/1.1 request looks: you send a bunch of plain-text headers to the socket, and this is just a small subset of them. You'll usually handle authentication with additional headers, and perhaps there are some other parameters. The response will start with a bunch of headers again, again just a small part of them. Surprisingly, sometimes you can receive more bytes of headers than of the payload itself, and then comes your actual response payload. With HTTP/2, you get some performance improvements out of the box. Requests become cheaper
as the average request overhead is reduced with multiplexing, header compression, and so on. Also, even though HTTP/2 doesn't force you to use TLS, TLS is encouraged as the only correct way of doing things in HTTP/2, so you'll gain some extra security features. I won't go any further into details, as it's not in the scope of this talk, but if you're interested to learn more about HTTP/2 and grasp the entire HTTP evolution, I would definitely recommend watching Ana Balica's talk on HTTP history and performance. It is a great talk covering lots of technical aspects. I mentioned the distributed objects earlier,
so let's get back to them for a while. In the late 90s and onwards, the concept of distributed objects became more and more popular. In theory, it looks very nice: you deal with some objects, and you don't care much whether they're local or accessed over a network. The concept of location transparency implies that the remote objects have the same look and feel as the local ones. From my own perspective, the term should rather be opaqueness, as your awareness of what's behind the object is really limited. With this in mind, I'd like to quote Martin Fowler: the first law of distributed objects is, do not distribute your objects. First of all, we should acknowledge the fact
that there is a huge difference between calling some procedure locally and going somewhere remotely to do so. The most obvious difference would be the latency: while a simple local call could take a few nanoseconds to produce its output, a simple network call would be tremendously slower, on the order of tens or even hundreds of milliseconds.
Another difference would be network reliability. A network call may, and eventually will, fail, whereas a local call will always succeed. So hiding these facts from the user behind a transparent object is not the ideal thing we could do.
Also, in the early 90s, a list of fallacies of distributed computing was established by engineers at Sun Microsystems. They are the following: the network is reliable; latency is zero; bandwidth is infinite; the network is secure; topology doesn't change; there is one administrator; transport cost is zero; and the network is homogeneous. Even though networks get faster, bandwidth gets wider, and so on, the entire list is still accurate and still relevant, and this is not going to change in the near future. So let's keep in mind Murphy's law, which says that if something can go wrong,
it will go wrong eventually. So do not ignore any of the possibilities, and always be prepared to handle them. Now, gRPC is an open-source remote procedure call framework that can run anywhere. It enables client and server applications
to communicate transparently, and makes it easier to build connected systems. gRPC promises to solve the issues I mentioned before. gRPC is a recursive acronym: it stands for gRPC Remote Procedure Calls. It is mainly developed by Google as a rework of their internal framework called Stubby.
The first principle of gRPC is to have services and messages instead of references to remote distributed objects. A message is a static container with typed data that respects its schema, and that's pretty much all of it. Messages don't have any behavior whatsoever. The service itself has all of its business logic inside, so you give it an input and expect a static output from it. In this sense, it's pretty similar to RESTful services, with some extra features like streaming, but let's leave that for now. Another important principle is that the stack
should be available on every popular development platform, and it should be easy for someone to build for their platform of choice. It should also be viable on devices with limited CPU and memory. For the full list of the gRPC principles, you can follow the linked article.
A service in gRPC looks pretty similar to a RESTful service. It has some endpoints, and you can pass messages to them. The only difference is that the entity identifier is not part of the endpoint, as it usually is in approaches like REST. In the diagram, the service is implemented in C++,
the clients in Ruby and Java, and that's obviously just an example, as any part can be implemented in any of the supported languages. Now let's design a service that will provide routes between two specified points.
First of all, we need a service. Our service is called RoutePlanner. A service can have multiple RPC endpoints; in this case, it has just one, called GetRoutes. The GetRoutes endpoint takes a GetRoutesRequest message and returns a GetRoutesResponse message. This is the proto definition of the service. It should be kept in proto files, just like any other protocol buffer messages. The request and the response messages are defined just like any other proto messages,
and the Location and Route are some user-defined messages. To generate Python code from a protocol buffers definition, all you need to do is run the protoc compiler, and it can be done through Python as well. We specify the proto path, the output path for the generated messages, the output path for the generated gRPC-specific code, and finally the path to our proto file. This will result in two files: the first one holds the proto-message-specific code, and the second one the gRPC-specific functionality.
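Pulling the description together, the proto file for such a service might look roughly like this. The message bodies are abbreviated and the field layout is assumed, since the talk only names the messages:

```proto
syntax = "proto3";

// User-defined messages; their exact fields are not shown in the talk.
message Location { /* e.g. coordinates */ }
message Route    { /* e.g. an ordered list of locations */ }

message GetRoutesRequest  { Location start = 1; Location end = 2; }
message GetRoutesResponse { repeated Route routes = 1; }

service RoutePlanner {
  // A single RPC endpoint: one request message in, one response message out.
  rpc GetRoutes (GetRoutesRequest) returns (GetRoutesResponse);
}
```

One common protoc invocation through Python is `python -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. route_planner.proto` (from the grpcio-tools package), which produces the two generated files mentioned above.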
As we already have the generated code from the protos, let's dive into implementing the service, using Python this time. This is basically our entire service. The class implements the RoutePlanner servicer that was generated by protoc, and each method of this class implements a specific RPC endpoint. It gets the request and the context as parameters and returns the response message. The context holds RPC metadata like deadlines, cancellations, and so on; I will get to those a bit later.
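A sketch of what the servicer and its bootstrapping might look like. This is not runnable on its own: it assumes grpcio is installed, that `route_planner_pb2` and `route_planner_pb2_grpc` are the protoc-generated modules, and that `plan()` is a hypothetical stand-in for the business logic:

```python
from concurrent import futures

import grpc
import route_planner_pb2        # assumed: protoc-generated messages
import route_planner_pb2_grpc   # assumed: protoc-generated gRPC code


class RoutePlanner(route_planner_pb2_grpc.RoutePlannerServicer):
    def GetRoutes(self, request, context):
        # `context` carries the RPC metadata: deadline, cancellation, etc.
        routes = plan(request.start, request.end)  # hypothetical logic
        return route_planner_pb2.GetRoutesResponse(routes=routes)


server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
route_planner_pb2_grpc.add_RoutePlannerServicer_to_server(RoutePlanner(), server)
server.add_insecure_port("[::]:50051")  # the socket our service listens on
server.start()
server.wait_for_termination()
```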
To actually make use of our implementation, we create a gRPC server with a thread pool executor. The actual implementation has to be bound to the server, and then we specify the socket for our service and start it. And that's basically it. As we already have the service in place,
let's try to implement the client, in Python this time. To access the service, a client must create a channel to the listening socket, then create a stub from the generated code, and form the request message, just like any other protocol buffers object, from the generated code. You then fire the request and get the response in a blocking manner. There is also the possibility to make asynchronous calls when calling the service. It could be done like that: in an asynchronous call, you'll get some sort of a future monad, in the form of a Python 3 Future, from which you can synchronously get the result, add a callback, check if it's done, and so on. The futures module is backported to Python 2 as well, if for any reason you still have to use Python 2.
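A sketch of those client-side steps, assuming the protoc-generated modules `route_planner_pb2` and `route_planner_pb2_grpc` and a server on a hypothetical localhost port; not runnable on its own:

```python
import grpc
import route_planner_pb2        # assumed: protoc-generated messages
import route_planner_pb2_grpc   # assumed: protoc-generated gRPC code

# A channel to the listening socket, and a stub from the generated code.
channel = grpc.insecure_channel("localhost:50051")
stub = route_planner_pb2_grpc.RoutePlannerStub(channel)

request = route_planner_pb2.GetRoutesRequest()  # keyword arguments could
                                                # populate the fields here

# Blocking call: returns the response message (or raises on error/timeout).
response = stub.GetRoutes(request, timeout=0.5)

# Asynchronous call: returns a Future-like object.
call_future = stub.GetRoutes.future(request, timeout=0.5)
call_future.add_done_callback(lambda f: print(f.result()))
```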
Also, you can play with your service using the gRPC command-line tool. To use it, you just issue the grpc_cli call against your socket: after the address, you write the RPC endpoint name, and then goes your request proto.
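Such an invocation might look like the following, assuming the grpc_cli tool built from the gRPC repository, a server on a hypothetical localhost port, and an illustrative text-format request (the Location field names are invented for the example):

```shell
grpc_cli call localhost:50051 GetRoutes "$(cat <<EOF
start: { latitude: 44.06, longitude: 12.56 }
end: { latitude: 44.41, longitude: 12.20 }
EOF
)"
```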
This is just a shell here-document; you could also store the proto request message in text files and provide those to the CLI tool if needed. As a result of the grpc_cli call, you'll get the text representation of your response. In this case, it's a repeated field called routes, with the corresponding data. Let's get back to our route planner service and imagine a specific use case. What if, depending on the time and some external factors,
the service wants to get you rerouted as soon as some newer routes become available? Well, this can be done with streaming. In the proto definition, you just have to add the stream keyword before your response message and implement it accordingly. Also, let's say that our client is accessing our API from a mobile phone while traveling around the city, and their coordinates constantly change. It would be nice, after some threshold, to stream the new location to the service, so the routes can be recalculated as well. This, too, is possible with streaming. Now we have response streaming and request streaming separately; why not have both? Of course we can: we just have to add the stream keyword before both the request and the response, and voila, you have it. You still have to implement it, and unfortunately I won't go into implementation details, as it's a bit out of the scope of this talk, but I hope I managed to give you a feel for those features. Now that we know the basics of RPC definition,
let's try to go into some more sophisticated features of gRPC, and let's keep in mind that things will go wrong, and we should be very well prepared for that. When firing a request, it is not wise to wait indefinitely for a response. There should always be a timeout set, but how do we determine the proper timeout for different calls in a chain? Let's try different approaches. We could put a uniform timeout on all the subsequent calls.
Let's see it in action. We have set this 500-millisecond timeout on all of the subsequent calls in our chain, and the first three calls go pretty well; there is still some time remaining in each of those timeouts. Node B, for some reason, is quite slow, so by the time it responds, the client hits the timeout and fails accordingly. None of the nodes are aware of that, so they continue doing the already useless work until their own timeouts get exceeded, and that's obviously not the best approach. In reality, we have different expectations for different services in terms of their response times. That means we can have custom timeouts, just like that. Again, for the first three calls,
everything looks good so far, but when node B responds, it violates the timeout by a very small amount of time, so the corresponding node fails. We could have done a better job, as the entire chain would probably have succeeded, taking less than the initial 300 milliseconds.
Let's try to somehow adapt the timeout; an adaptive timeout sounds like a better option, adaptive in the sense of cascading the timeout from the first call to all the subsequent ones. Let's try the previous example this way. The initial timeout would be 200 milliseconds, and the first call takes 20 milliseconds. That means the next timeout would be 180 milliseconds. The next call takes 30 milliseconds, so the subsequent call would have a timeout of 150 milliseconds. Now we have all the timeouts derived naturally from the initial one. The responses flow back well, and the client is happy with its result. This looks like a good approach, but it's rather difficult to operate with timeouts.
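The cascading arithmetic from the walkthrough can be sketched in a few lines of plain Python; the numbers mirror the example above, and the function name is ours:

```python
def cascade(initial_timeout_ms, hop_durations_ms):
    """Cascade the remaining time budget down a call chain.

    Each hop subtracts the time it took from the budget and hands the
    remainder to the next call as its timeout.
    """
    remaining = initial_timeout_ms
    budgets = []
    for took in hop_durations_ms:
        remaining -= took
        if remaining <= 0:
            raise TimeoutError("budget exhausted before the next hop")
        budgets.append(remaining)
    return budgets

# 200 ms initial timeout; the first call takes 20 ms, the second 30 ms.
print(cascade(200, [20, 30]))  # -> [180, 150]
```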
The timeout is a relative value, a delta from a specific point, whereas a deadline is an absolute value, say a timestamp in milliseconds. gRPC operates with deadlines; however, they're specified by the user in the form of a timeout, and the deadline is computed internally.
The deadline is propagated automatically within gRPC to all the subsequent calls, and you can access the deadline at any point from the context that I mentioned before. For instance, if you do some heavy lifting before making another RPC call and want to check whether the deadline is still within some normal threshold, or if you make a plain HTTP call and want to manually propagate the deadline there, it's rather simple, as you can simply get it from the context.
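A minimal pure-Python sketch of that conversion: the relative timeout becomes an absolute deadline once, and every hop simply compares the clock against it. gRPC does this bookkeeping internally; in a Python servicer, `context.time_remaining()` exposes the remaining budget:

```python
import time

def deadline_from_timeout(timeout_s):
    # A deadline is an absolute value: "now" plus the relative timeout.
    return time.monotonic() + timeout_s

def remaining_budget(deadline):
    # At every hop, simply compare the current timestamp with the deadline.
    remaining = deadline - time.monotonic()
    if remaining <= 0:
        raise TimeoutError("deadline exceeded")
    return remaining  # hand this down as the next call's timeout

deadline = deadline_from_timeout(0.2)   # a 200 ms budget for the whole chain
print(remaining_budget(deadline) > 0)   # True: there is still time left
```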
Let's try an example with deadlines, and for simplicity let's use a fictitious starting timestamp in milliseconds. The timeout value would be, say, 200, and the deadline would be the sum of those two. Because the deadline is an absolute value, all we need to do is ensure at each step
that the current timestamp is smaller than the deadline. So, we just go, and at any step, we just compare the current timestamp and the deadline. And if the current timestamp is smaller, then we just proceed. And the entire request response chain goes well,
and at the end, the last timestamp is still smaller than the deadline. So, the client is happy with its result. Now, let's see an example when the deadline gets exceeded at some point. We have the exact same setup as in previous example, but probably a bit slower network.
So, by the time the request gets to node A, the deadline is exceeded. A deadline-exceeded error is propagated back to the client. We failed the entire call at an early point, so node B was never touched, which is obviously a good thing. In the same way the deadline-exceeded error was propagated back to the client, a manual cancellation can be propagated, and the cancellation can be initiated by both the client and the server. It immediately terminates the RPC and all the subsequent RPC calls that are pending. Keep in mind that it's not a rollback: if for some reason you made changes to the database or to some state, you have to do the rollback yourself. And it's automatically cascaded, as I mentioned before. When migrating, for instance, a JSON RESTful API service to gRPC, a backward compatibility layer can be provided temporarily with the gRPC gateway.
This will translate the gRPC endpoints to a RESTful API; keep in mind it won't work with streams and other nifty features of gRPC. gRPC has tremendous language support: C++, Python, Java, Go, Ruby, C#, Node.js, Android Java, Objective-C, and PHP. It also supports most of the widely used platforms, like Linux, Mac, Windows, Android, and iOS. Speaking of success stories, I'd like to mention Google, as it's the original creator of gRPC; gRPC is the evolution of Stubby, the Google-internal RPC framework which has been widely used there for quite a long time. Also, external adoption of gRPC is evolving rather fast. Companies like Docker, Square, Netflix, CoreOS, Cisco, Carbon 3D, Juniper Networks, and others are already using it extensively. For instance, Docker's containerd implements a gRPC API,
and their Swarm communication between nodes is done over gRPC only. Another interesting example is Juniper Networks: they mostly do software-defined networking, and they implement SDN OpenConfig on top of gRPC. Now, summing up, let's outline the benefits
of gRPC and protocol buffers. First of all, you focus on the design of your API, and you establish a strong contract with your clients. You have the schema defined in one place, at a different level than your business logic. Also, HTTP/2 is awesome, and it's supported by gRPC out of the box. You get bi-directional streaming for free, and you have the freedom to pick any suitable language for any specific service, and to change that choice for a specific service at any point. It is service-to-service and service-to-mobile friendly. And even more importantly, it's production-ready, so just give it a try. Thanks for your patience. I believe I won't take any questions at the moment, but you can catch me at any point later, and I'll be happy to discuss anything with you. And as I believe the lunch is already served, just feel free to enjoy it.