Introduction
One of the most important features of MSAs (Microservices Architectures) is that services are deployed autonomously from one another and isolated along data boundaries. Each service follows the Single Responsibility principle and is therefore specialized for a very specific purpose. However, even when a micro-service is well designed, limited to certain data domain boundaries, well isolated, and so on, it can be useful to a number of different components inside or outside the system. Remember that, according to the principle of Autonomy, the micro-service must provide a versioned, well-defined set of services for communication, the API, following a specific API contract (what consumers can expect) offered to the API consumers.
For example, suppose we have a micro-service that has to be used by components both internal and external to the network the micro-service is running on. So, we need to expose an API or APIs to be used by these consumers. However, the API contract varies for internal and external API consumers: security, functional restrictions, network restrictions, and so on. Besides, some of the internal components require bi-directional, client/server streaming communication.
The first approach is the most frequently chosen: each micro-service exposes a single REST API for everything. Then, different security contexts are applied to specific endpoints and HTTP methods. What are the main issues with this approach? They are mainly related to the so-called Enterprise API features:
- Cache. Settings related to caching can be different according to the consumer, with different needs of TTL (Time To Live), policies, methods, paths, etc.
- Rate Limits. The values can vary for every consumer and perhaps they don't make sense for internal consumers.
- Security. Security restrictions and access controls (authentication and authorization) applied to a path may not be relevant for all of the consumers.
- Streaming. REST does not support bi-directional communication. Other techniques such as polling, callbacks (the preferred option), and HTML5 WebSockets can be used, but they require extra changes on the API consumer side and have other implications and drawbacks.
Even if these features are managed by the API Gateway, the problem remains: our micro-service exposes only one API, in REST style, to be used by different types of consumers, and it is difficult to design a single API contract that applies different restrictions depending on the API consumer.
For our use case, we propose to define two different API contracts, with different API styles, one for each type of API consumer: external (e.g. web applications) and internal (e.g. other micro-services in the same network).
Besides, there are other technical considerations regarding service-to-service communication like:
- REST can be too verbose.
- REST does not support streaming (as we said above).
- Messages serialized with Protocol Buffers are transmitted faster than JSON. In micro-services architecture projects, JSON is not the best method for data serialization. Protocol Buffers are a great option instead, because they were designed to be faster than JSON and XML by removing many of the responsibilities those formats carry and focusing solely on serializing and deserializing data as fast as possible. Another important optimization is network bandwidth: the transmitted data is kept as small as possible (see the sketch below).
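As a rough illustration of the size difference, consider the same record encoded as JSON and as a Protocol Buffers binary message. This is a hedged sketch with a hypothetical message; exact sizes always depend on the data:

```proto
syntax = "proto3";

// A tiny example message (hypothetical).
message User {
  int32  id   = 1;
  string name = 2;
}

// JSON encoding of {id: 123, name: "Ada"}:
//   {"id":123,"name":"Ada"}   -> 23 bytes
// Protobuf binary encoding of the same values:
//   08 7B 12 03 41 64 61      -> 7 bytes
//   (tag + varint for field 1, tag + length + "Ada" for field 2)
```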
The diagram above shows the use case. The green cube is our micro-service, which has two kinds of consumers: other micro-services (internal to the company) and a public web application. The architectural decision is that the micro-service will expose a gRPC API for the internal micro-services and a REST API for the web application.
The challenge here is: how do we define the internal API contract for a gRPC API? (We assume we have already written a proper API contract for the REST API using OAS.) In this article, we are going to present some ideas and code covering patterns, tools, and technologies used to develop contract-based APIs, which will help teams communicate and ensure better compatibility and usage across different API styles.
We Need a Common Vocabulary
I know it is annoying, but this time we need a common set of terms to understand each other.
Protocol. We mean HTTP, which has different versions. Normally we talk about HTTP/1.1, but HTTP/2 and HTTP/3 are gaining traction and bring more features.
REST. It is an architectural style (or API style) for HTTP web services providing interoperability between systems. APIs have to meet a set of constraints to be considered RESTful, such as being fully stateless, among many others.
OpenAPI Specification (OAS). This is a standard specification to describe the technical layout of REST APIs. There are two main sets of versions right now, 2.x and 3.x. REST API contracts are written using this specification.
Protobuf. It is a data serialization format and also a toolset, which can be used to define services. It is not supported by every programming language, but you can use Protobuf with C++, Go, Python, Java, etc. To write a contract, we have to write a file with the .proto extension. This guide is helpful to learn how.
gRPC. It is based on the idea of defining a service, specifying the methods that can be called remotely with their parameters and return types (the protobuf file). On the server side, the server implements this interface and runs a gRPC server to handle client calls. On the client side, the client has a stub that provides the same methods as the server. It supports bi-directional streaming and fully integrates pluggable authentication with HTTP/2-based transport.
Streaming. We mean the ability to establish mono- or bi-directional communication between the two parties in a communication. The client/server roles are not so important, since data can be pushed between them without a previous request once the communication has been established.
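For example, a gRPC service definition can mix unary and streaming methods. This is a hypothetical sketch, not part of our use case; the service and message names are illustrative:

```proto
syntax = "proto3";

package example.v1;

// Note carried in the stream.
message Note {
  string text = 1;
}

// A service mixing a classic request/response call and a bi-directional stream.
service ChatService {
  // Unary: one request, one response.
  rpc SendNote(Note) returns (Note);
  // Bi-directional streaming: either side can push Notes once the call is established.
  rpc Chat(stream Note) returns (stream Note);
}
```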
It was easy, wasn't it?
The API Contract Using Protocol Buffers
As you know, the API contract is a definition that describes the surface area of each individual API service, how it is offered and used. It is something that both API producer and API consumer can agree upon, get to work developing and delivering, and then integrate and consume. In fact, an API contract is a shared understanding of what the capabilities of a digital interface are, allowing applications to be programmed on top of it.
An API contract should be provided ahead of time for each version, allowing consumers to review the new version of the contract before they ever commit to integrating it and moving to it.
Google Protocol Buffers: Protobuf
In our use case, we leverage a binary protocol such as gRPC and extend the use of Protobuf (as an IDL) to external-facing APIs based on the HTTP protocol. You can find a good description of Protobuf on Google's Protocol Buffers site.
The core concept here is that the .proto file is the source of truth for our service design. The first step when designing our service is to write a .proto file as our API contract, then use it to discuss the design with those who will consume the API before starting the implementation.
It is very important for the microservice architect/developer to understand very clearly that RPC is not REST. RPC-based APIs are great for actions. REST-based APIs are great for modeling a domain and making CRUD available for all of your data. However, REST is not intuitive for defining actions, since it is based on resources, namely types of data.
For service-to-service communications, RPC can be much more convenient thanks to new technologies like gRPC, but most developers are still using REST for external-facing services, which is one of the reasons behind the shape of our example.
Our Use Case
To create the API contract, you first need to create a new .proto file.
Then, you define the messages for each data structure you want to manage and serialize, including a name, a type, and a number for each field.
Our example use case handles communications with a PKI. It exposes a set of messages and services in pure RPC style (do something, give me something). You can read the main rules we followed here. So, we are going to define the services that will be invoked by the API consumers. Basically:
- RegisterNewDID. Our service will create a new certificate. It requires a message of type IDR that has to be validated, etc. It will return an empty message and a 0 response code when successful.
- GetDID. Our service will get an existing certificate. It requires a message of type IDR that has to be checked, validated, etc. It will return a message of type IDRes and a 0 response code when successful.
- CompleteRevokeDID. Revokes an active certificate. It requires a message of type IDR that has to be validated, etc. It will return an empty message and a 0 response code when successful.
In our case, we defined the services first. Then, the messages were defined according to the requirements gathered in the Business Analysis phase. Finally, the first version of our protobuf file looks like this:
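A minimal sketch of such a contract, assuming proto3, an illustrative package name (pki.v1), and assumed fields inside IDR and IDRes; the real messages may carry more fields:

```proto
syntax = "proto3";

package pki.v1;

import "google/protobuf/empty.proto";

// IDR identifies the DID/certificate the operation acts on.
message IDR {
  // Decentralized identifier, e.g. "did:example:123" (illustrative field).
  string did = 1;
}

// IDRes carries the certificate returned by GetDID.
message IDRes {
  // The certificate in PEM format (illustrative field).
  string certificate_pem = 1;
}

// PKIService exposes the RPC-style operations described above.
service PKIService {
  // Creates a new certificate for the given identifier.
  rpc RegisterNewDID(IDR) returns (google.protobuf.Empty);
  // Returns an existing certificate.
  rpc GetDID(IDR) returns (IDRes);
  // Revokes an active certificate.
  rpc CompleteRevokeDID(IDR) returns (google.protobuf.Empty);
}
```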
Checking the API Contract Style in Protobuf
First Phase: Lint
The first check verifies the quality of the protobuf files and their adherence to an agreed-upon, community-tried-and-tested style guide. The chosen tool is Buf. While tools like Spectral are useful to check the compliance of our OAS-based API contract against a set of rules, we can use Buf to verify the correctness of our API contract as a proto file. Use Buf to check that:
- File names adhere to the naming convention.
- Package and directory match.
- Service names end in Service.
- Method names are PascalCase.
- Field names are lower_snake_case.
- Fields and messages have a non-empty comment for documentation.
- Enumerations have a proper zero-value default and enum values are properly named (with a prefix).
In addition to the above checks, Buf automates the detection of breaking changes (i.e. making new interfaces incompatible with existing clients) and reduces the time required to manage and code review proto files, as part of the code review is automated.
There’s no reason to not use the DEFAULT lint category. This is the most “strict” category that encompasses MINIMAL and BASIC categories. A description of the lint categories and styles enforced by the DEFAULT category can be found in the Checkers and Categories section of the Buf documentation.
In addition, to ensure that all message types and elements are documented, the COMMENTS category must also be included:
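A minimal Buf configuration enabling both categories could look like the sketch below; the exact file layout depends on the Buf CLI version in use:

```yaml
# buf.yaml (sketch, v1 configuration format)
version: v1
lint:
  use:
    - DEFAULT
    - COMMENTS
breaking:
  use:
    - FILE
```

Running buf lint in the repository then reports any violations of these rules.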
If the contract is clean, we get exit code 0 with no output; otherwise, we apply the suggested changes. This helps clarity and readability.
Second Phase: Protoc
- Never edit the generated pb files by hand.
- Run the protoc command after every change to the protobuf API contract, and ask your API consumers to do the same (see the example below).
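As a hedged example, assuming Go as the target language, the protoc-gen-go and protoc-gen-go-grpc plugins installed, and an illustrative file path:

```sh
protoc \
  --go_out=. --go_opt=paths=source_relative \
  --go-grpc_out=. --go-grpc_opt=paths=source_relative \
  pki/v1/pki_service.proto
```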
Third Phase: Protobuf API Contract Testing
Generate the Server/Stub Mock
Enable Reflection
grpcurl must know the Protobuf contract of the services before it can call them, so it is easier to use grpcurl with gRPC reflection. gRPC reflection adds a new gRPC service to the application that clients can call to discover the services it exposes. There are two ways to do this:
- Set up gRPC reflection on the server. grpcurl then automatically discovers the service contracts (see the sketch after this list).
- Specify the .proto files as command-line arguments to grpcurl.
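A minimal sketch of the first option in Go; the commented service-registration line uses hypothetical generated names, while reflection.Register is the standard grpc-go helper:

```go
package main

import (
	"log"
	"net"

	"google.golang.org/grpc"
	"google.golang.org/grpc/reflection"
)

func main() {
	lis, err := net.Listen("tcp", ":50051")
	if err != nil {
		log.Fatalf("failed to listen: %v", err)
	}
	s := grpc.NewServer()
	// pb.RegisterPKIServiceServer(s, &server{}) // register the generated service (hypothetical names)
	reflection.Register(s) // expose the reflection service so grpcurl can discover the contract
	if err := s.Serve(lis); err != nil {
		log.Fatalf("failed to serve: %v", err)
	}
}
```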
Test the gRPC API Implementation
grpcurl is a command-line tool that lets you interact with gRPC servers; it is basically curl for gRPC servers. Its features include:
- Calling gRPC services, including streaming services.
- Service discovery using gRPC reflection.
- Listing and describing gRPC services.
- Working with secure (TLS) and insecure (plain-text) servers.
Run the Test
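A hedged example of a test run with grpcurl, assuming reflection is enabled, the mock server listens on localhost:50051, and the service is named pki.v1.PKIService as in the sketch above:

```sh
# List the services exposed by the server (discovered via reflection).
grpcurl -plaintext localhost:50051 list

# Describe the service and call one of its methods with a JSON request body.
grpcurl -plaintext localhost:50051 describe pki.v1.PKIService
grpcurl -plaintext -d '{"did": "did:example:123"}' \
  localhost:50051 pki.v1.PKIService/GetDID
```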
API Contract Guidelines
The main rules for the protobuf API contract are defined in the buf.yaml file and verified with Buf. So, the recommended way to enforce some basic guidelines is to run this check against the buf.yaml. This file should be consistent, version controlled, and reachable by the CI/CD pipelines so that the check can be included as a new phase.
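For example, a CI phase could run the Buf checks against the version-controlled buf.yaml. This is a sketch for a recent Buf CLI; the branch name is an assumption:

```sh
# Lint the contract against the rules in buf.yaml.
buf lint

# Fail the pipeline if the change breaks existing clients, comparing against main.
buf breaking --against '.git#branch=main'
```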
You MUST use the standard guidelines available on Google's protobuf site (version 3) and follow each one of the listed recommendations. Besides, you SHOULD follow the extra rules below for a better description:
1-Unintended Changes in the API Contract
Pay attention to changes in messages. If a message is used in several services, the impact is bigger and API consumers can be unaware of the change. One option is to use a message type ONLY in one service; however, that is not practical. The same type is commonly reused across different services, and we do not want to maintain different versions of the same message and the same protobuf contract across services. Instead, you SHOULD plan your versions and changes, keep your protobuf file accessible to API consumers, communicate changes, and require API consumers to recompile the pb files with every new change to the protobuf file (don't forget it is our API contract).
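When a field does have to go away, Protocol Buffers lets us reserve its number and name so they are never reused by mistake. This is a sketch with hypothetical fields:

```proto
syntax = "proto3";

package pki.v1;

// IDR, after removing a field in a backwards-compatible way.
message IDR {
  reserved 2;            // the removed field's number can never be reused
  reserved "old_alias";  // nor can its name

  string did = 1;
  string label = 3;      // new fields always get fresh numbers
}
```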
2-Add Comments
You SHOULD add comments to the messages and services in the protobuf API Contract. Remember it is the only reference for external and internal consumers.
3-Use the gRPC Response Status Codes and Descriptions
There are areas that are not explicitly part of the protobuf contract, but that have to be followed as implicit rules of the protocol conventions. We are talking about:
- Status codes (you can find the list with explanations here)
- Error details.
You MUST use the status codes in your responses, while you SHOULD use the error descriptions to explain to API consumers what is going on.
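A small Go sketch of how a handler could combine a canonical gRPC status code with a human-readable description; the function and field names are hypothetical, while the codes and status packages are the standard grpc-go ones:

```go
package pki

import (
	"context"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// getDID illustrates returning canonical gRPC status codes with descriptive messages.
func getDID(ctx context.Context, did string) (string, error) {
	if did == "" {
		// InvalidArgument tells the consumer the request itself is wrong.
		return "", status.Error(codes.InvalidArgument, "the 'did' field is required")
	}
	// Lookup omitted; NotFound tells the consumer the identifier does not exist.
	return "", status.Errorf(codes.NotFound, "DID %q is not registered", did)
}
```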
4-Changes in the Protobuf API Contract
The protobuf API contract SHOULD be stable so as not to break API consumers. To ensure backwards compatibility, we have to follow several rules:
- Check for breaking changes with every new Pull Request in the Git repository. We can use Buf for this purpose and configure the breaking rules to be validated with every new change.
- When introducing breaking changes, a new, increased version number should be created for the API, always following the Buf code style (see the sketch after this list).
- Ideally, our internal API consumers should be able to support multiple versions of a gRPC service. In any case, API consumers SHOULD detect any change in the protobuf API contract (accessible in the API Registry) and run the subsequent tasks: protoc and the tests for their gRPC clients.
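Following the Buf style, a new major version can live in its own package (and directory), so v1 and v2 clients can coexist. This is a sketch with an assumed package name:

```proto
// pki/v2/pki_service.proto (sketch)
syntax = "proto3";

// The major version is part of the package, so existing pki.v1 clients keep working.
package pki.v2;
```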
An API Registry is a protocol to facilitate the distribution of API contracts in different API styles (OAS, gRPC, GraphQL, AsyncAPI) to API consumers. It includes an API itself: a service to manage information about the API contracts and enable their distribution.
5-Special Techniques
Issues
1-API Contract Versioning
The adidas main guidelines regarding REST would be applicable here, specifically the section about API Description Versioning and the management of the major-minor-patch version number. This is covered in OAS.
2-Include Information About the API Ownership
Responsibility for a service API lies with the team that maintains the associated microservice(s); this is more or less covered in OAS. API definitions are driven by the needs of the consumer(s) of the API. The contract usually includes information about the following (see the sketch after this list):
- Product Owner
- Support Contact
- Team
- Organization
- Cost Center/Organizational Unit
- Budget Owner
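Protobuf has no standard fields for this metadata; one pragmatic option is to carry it as file-level comments in the contract. This is a sketch with illustrative values:

```proto
// Product Owner:  PKI Product Owner
// Support:        pki-support@example.com
// Team:           Identity & PKI Services
// Organization:   Platform Engineering
// Cost Center:    CC-1234
// Budget Owner:   Platform Engineering Lead
syntax = "proto3";

package pki.v1;
```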
3-API Contract Test
The proposed way of testing the terms of the API contract is far from perfect, as it is strictly based on testing the gRPC server mock. An isolated compliance test of the Protocol Buffers API contract against the actual implementation of the gRPC server would be desirable.
Anyway, integrating the Buf lint test and the gRPC mock test (generated from the Protocol Buffers file) should be enough for now to ensure API contract compliance in every release.
4-Caching
No information about caching can be included in protobuf. Besides, gRPC does not support cache specifications, which are left to the implementation. This is not covered in OAS either.
5-Rate Limiting
No information about rate limiting can be included in protobuf. Besides, gRPC does not support rate-limiting specifications, which are left to the implementation. This is not covered in OAS either.
6-Security
The same applies to authentication and authorization. This is a clear drawback compared to OAS.
7-Environments
The same applies to servers, hosts, etc. This is a clear drawback compared to OAS.
Conclusions
- API contracts cannot depend upon a specific API style.
- Using different API styles is a regular practice in MSAs (Microservices Architectures), choosing the best option for each type of communication and/or purpose (network internal/external, real-time streaming, data boundary isolation, etc.).
- API contracts for gRPC APIs can also be standardized to reflect the requirements of the API owners, using the Protocol Buffers specification.
- We have defined some basic guidelines complementing those already provided by Google. Besides, we added a step to verify the validity of the protobuf file against a set of rules (lint) and to test the API with a grpcurl-based script.
- However, there are some issues with the protobuf specification, and several important API areas are not covered. This is a clear disadvantage compared to OAS, especially in its 3.x versions.
- We still need a super API specification to cover all the aspects that any Enterprise API Contract needs.