This post is the first of a series of four about creating API contracts based on standard API styles. Obviously, once you have collected all the requirements, you will write these mandates down in a standard-format contract. The standard formats are what we call API Specifications (the most popular example is OpenAPI (OAS), but we can consider others like Protocol Buffers, GraphQL and AsyncAPI, for instance). The purpose of an API contract is exactly that of a usual contract: the list of specifications that describe the behavior of a specific API.
It is widely accepted in the IT community that APIs are the best way to connect systems in a flexible and easy way. The different API design standards rely on the features of protocols, protocol versions and certain characteristics of the architectural style (based on Remote Procedure Calls, resources, queries, etc.). It is critically important to know the features of the different protocol versions (e.g. HTTP 1.1, 2, 3, WebSockets, etc.) and the possibilities they open up for building modern software systems.
This post has a very simple goal: to try out some approaches and tools for Contract-Driven RESTful APIs, to test the contract in different phases and, once that's done, to draw some conclusions.
Contract-Driven API Design?
The concept of Contract-Driven API Design is not new. API behaviour is typically described in documentation pages listing available endpoints, request data structures and expected response data structures, along with sample queries and responses (in the case of REST APIs). This documentation is used for integration (by developers building systems that consume our APIs), for understanding our system, and so on.
It is obvious that documentation written separately does not shield the consumers of the APIs from changes in the API. API producers may need to change the response data structure or rename an endpoint altogether to keep up with business requirements.
The problem of incorporating these changes then falls on the consumers of those APIs, who have to keep checking the documentation for any changes. This model does not scale well. Consumers often end up with unexpected bugs because the response they were expecting has changed.
This is where having consumers assert on API contracts becomes useful. Instead of having API producers build a specification on their own, consumers of those APIs can set expectations by letting the producers know what data they want from the API.
The so-called API Contract can then be written down in two main standard ways:
- As an API descriptor file, following a standard specification (OpenAPI Specification, Protobuf, etc.).
- As a description of behavior based on a set of test scenarios (e.g. using Gherkin syntax).
In any case, the purpose of API Contract Testing is to take the contract as the reference and run tests that verify the implementation matches the specifications in the documentation.
For instance, Behavior-Driven Development uses test scenarios as the seed and guide for the implementation of functional tests.
In the case of API Contract testing, there are mainly two possible approaches:
- Following the BDD methodology, with a set of test scenarios written in Gherkin syntax, implement the tests against the actual implementation of the API. As a real use case, imagine a testing app that acts as a client of your API and runs some tests against it, or a JMeter test sending a set of calls to your API and recording the results.
- Alternatively, we can consider that BDD test scenarios are not needed because the technical API specification is enough: it is the actual API Contract. We are going to test some tools and different options in this regard.
The Scope of our Tests (in this post)
Our goals are basically two:
- to test some tools and see what happens
- to find out which scenarios they fit best and their strong and weak points regarding OAS-based API Contract Testing.
The three main areas in these tests are tools, API architectural layouts and the Behaviour-Driven Development (BDD) approach. In other words, the integration of test scenarios written in Gherkin syntax with the test implementation is very important.
Regarding documentation standards, the test tool also has to be able to integrate with standard API descriptor files.
The tools under test are Karate, REST-assured and Dredd.
Other well-known, non-specialized tools can also be used for this purpose: JMeter and Gatling can be effective when testing APIs in a Continuous Delivery pipeline.
Anyway, I want to take a very practical, hands-on approach in this series of posts. So the idea is to actually use the tools and include the outcome of their usage here. This can make reading a bit more difficult and boring, but on the other hand you can see directly what the results look like.
Step 1: Compose the API Contract, OAS file and Test Scenarios
The test scenarios will describe the behavior of our API.
Include a test scenario per endpoint/method of the service we are going to test, according to the contract that has been provided.
Ideally, we should write the API contract terms in Gherkin syntax. Once this is done, some adaptations may be needed for the specific tool; we'll see this shortly with each tool.
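As a minimal, hypothetical sketch (the endpoint and fields are the ones used throughout this post), a contract term for the GET endpoint could look like this in plain, tool-agnostic Gherkin:

Feature: Contract for the Test API

  Scenario: Retrieving a resource
    Given the API is available at the base URL
    When I send a GET request to /test
    Then the response status code is 200
    And the response body contains the non-empty fields "id" and "name"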
Step 2: Get an API Server
The second step is to get a working REST API implemented according to the contract. I used Node and Express for this. It's simple!
// A minimal Express app exposing the /test endpoint for each
// HTTP method defined in the contract.
const app = require('express')();

app.get('/test', function(req, res) {
  res.json({id: "1", name: "John"});
});

app.post('/test', function(req, res) {
  res.json({message: 'Added!'});
});

app.put('/test', function(req, res) {
  res.json({message: 'Updated!'});
});

app.delete('/test', function(req, res) {
  res.json({message: 'Deleted!'});
});

app.listen(3000, () => {
  console.log("Server running on port 3000");
});
Now, we'll start with the tests.
Comparison of REST API Contract Testing Tools
All right! As I said above, we'll start with the most common and widespread architectural style for APIs: REpresentational State Transfer (REST). We'll use three tools to test the contract.
Karate
BDD integration: The Karate test depends upon the feature file that contains the test scenarios.
API Specification descriptor integration: Nope! Karate does not seem to be able to read an OAS file, for instance.
The test is written in Java and it is simple!
import com.intuit.karate.junit4.Karate;
import cucumber.api.CucumberOptions;
import org.junit.runner.RunWith;

// Runs every feature file found on the classpath under the karate folder
@RunWith(Karate.class)
@CucumberOptions(
    features = "classpath:karate",
    monochrome = true,
    strict = true
)
public class APITest {
}
Test Scenarios. The Gherkin syntax used for the test scenarios is specific to the Karate framework.
The base URL is still hard-coded in the feature file, but it can be replaced by a placeholder (e.g. Given url baseUrl) whose value comes from Karate's configuration file, karate-config.js.
The Gherkin/Karate syntax allows you to specify the URL, the method, the expected response status code, request parameters, the body, and so on. It's really flexible.
Feature: Testing a REST API as test scenarios with Karate

  Scenario: Testing valid GET endpoint
    Given url 'http://localhost:3000/test'
    When method GET
    Then status 200
    And match $ == {id:"#notnull",name:"#notnull"}

  Scenario: Testing an invalid GET endpoint - 404
    Given url 'http://localhost:3000/tes'
    When method GET
    Then status 404

  Scenario: Testing valid POST endpoint
    Given url 'http://localhost:3000/test'
    And request { id: "1" , name: "John"}
    When method POST
    Then status 200
    And match $ == {message:"Added!"}

  Scenario: Testing valid PUT endpoint
    Given url 'http://localhost:3000/test'
    And request { id: '1234' , name: 'John Smith'}
    When method PUT
    Then status 200
    And match $ == {message:"Updated!"}

  Scenario: Testing valid DELETE endpoint
    Given url 'http://localhost:3000/test'
    When method DELETE
    Then status 200
    And match $ == {message:"Deleted!"}
Results
They are quite clear in the IDE (IntelliJ).
The results in the console log are standard:
5 Scenarios (5 passed)
21 Steps (21 passed)
0m1.558s
Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.011 sec
Preliminary conclusions:
- Protocol Introspection: No
- Documentation: OK
- Easiness of implementation: OK
- Effectiveness: OK
- BDD integration: OK (via Gherkin)
- API Descriptor Integration: No
- Tool specific features: Acceptable (Gherkin/Karate syntax)
- Integration in regular build time tests: OK
- Integration as separate/runtime tests: OK
REST-assured
BDD Integration: Yes, it supports BDD, and it is really useful. Besides, it does not require any adaptation to the framework. You can compose the test scenarios as you like (as long as you respect the Gherkin syntax basics, of course). In this case the test scenario covers just the GET method of the API. Please note there is nothing like a URL here; the method is mentioned, but it is not interpreted by the framework. The test scenario in the feature file is only the source of information for the implementation of the tests.
@Functional
Feature: Testing a REST API as test scenarios with REST Assured

  Scenario: Testing valid GET endpoint
    Given I access to the base URL
    When using the method GET
    Then I get a response status code 200
    And the field ID is not empty and the field NAME is not empty
The first piece to implement is the test runner. Its mission is to execute the test scenarios in the feature file and propose test methods for the steps that lack an implementation.
import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;
import org.junit.runner.RunWith;

// Runs the feature files and binds each step to its implementation
// (the "glue" code) in the steps package.
@RunWith(Cucumber.class)
@CucumberOptions(
    features = "src/test/resources/restassured",
    plugin = {"pretty"},
    glue = {"steps"},
    monochrome = true,
    strict = true
)
public class TestRunner {
}
Now, run the class. In the terminal you'll see the proposed test methods for implementing the steps of the test scenario (remember, a test scenario step is the statement after a Gherkin keyword: Given, When, Then, etc.).
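Once the steps are implemented with REST-assured, the class could look like the following minimal sketch. It assumes the demo API from Step 2 is running on localhost:3000; the class and method names are illustrative (the snippets Cucumber proposes will differ slightly).

package steps;

import cucumber.api.java.en.Given;
import cucumber.api.java.en.Then;
import cucumber.api.java.en.When;
import io.restassured.RestAssured;
import io.restassured.response.Response;

import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.notNullValue;

public class APISteps {

    private Response response;

    @Given("^I access to the base URL$")
    public void iAccessToTheBaseUrl() {
        // Base URL of the API under test (the demo server from Step 2)
        RestAssured.baseURI = "http://localhost:3000";
    }

    @When("^using the method GET$")
    public void usingTheMethodGet() {
        // Send the actual HTTP request and keep the response for later steps
        response = given().when().get("/test");
    }

    @Then("^I get a response status code (\\d+)$")
    public void iGetAResponseStatusCode(int statusCode) {
        response.then().statusCode(statusCode);
    }

    @Then("^the field ID is not empty and the field NAME is not empty$")
    public void theFieldsAreNotEmpty() {
        // Assert on the JSON payload of the response
        response.then().body("id", notNullValue()).body("name", notNullValue());
    }
}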
Results
Preliminary conclusions:
- Protocol Introspection: OK
- Documentation: OK
- Easiness of implementation: Acceptable
- Effectiveness: OK
- BDD integration: OK
- Tool specific features: OK (Gherkin syntax is generic)
- API descriptor Integration: No
- Integration in regular build time tests: OK
- Integration as separate/runtime tests: OK (managed as an external tool)
Dredd
BDD Integration: Nope!
API Spec descriptor integration: This approach is really interesting, as it is clean and efficient. The API Spec descriptor file can be considered in this case as the set of features the API has to offer. The amazing thing here is that you do not need to write any test implementation at all!
I used a Swagger 2.0 (OAS) example in this case:
---
swagger: "2.0"
info:
  title: "Test API"
  version: "1.0"
  license:
    name: MIT
host: localhost
basePath: /
schemes:
  - http
paths:
  /test:
    get:
      produces:
        - "application/json; charset=utf-8"
      responses:
        "200":
          description: ""
          schema:
            type: object
            required:
              - id
              - name
            properties:
              id:
                type: string
              name:
                type: string
    post:
      produces:
        - "application/json; charset=utf-8"
      responses:
        "200":
          description: ""
          schema:
            type: object
            required:
              - message
            properties:
              message:
                type: string
    put:
      produces:
        - "application/json; charset=utf-8"
      responses:
        "200":
          description: ""
          schema:
            type: object
            required:
              - message
            properties:
              message:
                type: string
    delete:
      produces:
        - "application/json; charset=utf-8"
      responses:
        "200":
          description: ""
          schema:
            type: object
            required:
              - message
            properties:
              message:
                type: string
Dredd is installed with NPM and the test is executed from the CLI: you pass the descriptor file and the base URL as arguments.
dredd api-description.yml http://localhost:3000
Results: The output of the test is useful enough:
pass: DELETE (200) /test duration: 84ms
pass: GET (200) /test duration: 16ms
pass: POST (200) /test duration: 15ms
pass: PUT (200) /test duration: 23ms
complete: 4 passing, 0 failing, 0 errors, 0 skipped, 4 total
complete: Tests took 143ms
Preliminary conclusions:
- Protocol Introspection: No
- Documentation: OK
- Easiness of implementation: OK (no implementation at all!)
- Effectiveness: OK
- BDD integration: No
- Tool specific features: OK (the descriptor files have to follow the standards)
- API descriptor Integration: OK
- Integration in regular build time tests: No
- Integration as separate/runtime tests: OK (managed as an external tool)
Conclusions
- Karate is probably the best option if you're taking a BDD approach that describes the behaviour of the API. It has excellent integration and it is fast.
- However, Dredd is great as it is directly integrated with the API specification. As it runs with Node it is trivial to integrate Dredd with the Delivery pipeline.
- For more customized scenarios, or in case you have to use predefined feature files for your API (and they do not match Karate's dialect), REST-assured is a good option.
- If you need complete information about the HTTP protocol aspects (request/response, headers, payloads), use REST-assured, as sketched below.
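To illustrate that last point, here is a minimal, hypothetical sketch of how REST-assured exposes protocol-level details, again assuming the demo API from Step 2 is running on localhost:3000:

import io.restassured.response.Response;

import static io.restassured.RestAssured.given;

public class ProtocolInspection {

    public static void main(String[] args) {
        // Log the full request and the response headers while the call runs,
        // then extract the Response object for further inspection.
        Response response = given()
                .log().all()
            .when()
                .get("http://localhost:3000/test")
            .then()
                .log().headers()
                .extract().response();

        System.out.println("Status line:  " + response.getStatusLine());
        System.out.println("Content-Type: " + response.getHeader("Content-Type"));
        System.out.println("Body:         " + response.getBody().asString());
    }
}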
Final Note
As you can see, they are all excellent tools that can be used in different scenarios. However, it depends on what you consider the API contract actually to be. A really useful and practical case is when the OAS file itself is the API contract; following that approach, Dredd is an excellent option.
To my understanding, the API Contract is composed of different parts:
- The Standard technical specification (OAS, Proto file, etc)
- The Enterprise API description (basically, all the non-functional features that make your APIs ready for the world at scale; we'll see a post about it!)
- The semantics about the use cases
- The parameters for discovery and consumption
See ya in the next episode!