API Testing Plan Template

INTRODUCTION 

The GovStack initiative is a multi-stakeholder initiative led by the German Federal Ministry for Economic Cooperation and Development, the Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ), Estonia, the International Telecommunication Union (ITU), and the Digital Impact Alliance.

Building blocks are enterprise-ready, reusable software components that provide key functionality facilitating generic workflows across multiple sectors. The APIs of Building Blocks need to be tested for GovStack compliance using API testing and automation tools and services.

OBJECTIVES AND TASKS 

Objectives 

Test Building Blocks' APIs to ensure compliance with the GovStack specification. The Behavior-Driven Development (BDD) technique is used to test the APIs. It is recommended to create a test harness made up of test suites for various testing types, including sanity, smoke, black-box, functional, load, performance, security, and integration tests. Use CI servers such as CircleCI or GitHub Actions to automate execution of the test suites. Finally, provide a dashboard displaying statistics from test-suite runs against specific API endpoints, services, and functionality under test.

Tasks 

List all tasks identified by this Test Plan, i.e., testing, post-testing, problem reporting, etc. 

The following tasks are identified by this Test Plan for the testing and post-testing timelines.

 

Tasks During Testing: 

  1. Develop Cucumber-based Gherkin scripts – to implement Behavior-Driven Development testing of APIs. Feature files define multiple scenarios with different example data for testing individual endpoints/paths of an API for functionality or business logic.

  2. Implement step definitions for feature files – implementation code, in Python or JavaScript, for testing each feature specified in a Gherkin feature file.

  3. Develop different API testing types – different types of testing need to be incorporated to check API compliance at different stages of the API development life cycle.

  4. Test case management – select appropriate tools for storing test cases and API test results.
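As an illustration of task 1, a hypothetical Gherkin feature file for a users endpoint might look like the following. The endpoint, field names, and example data are assumptions for illustration, not taken from any GovStack Building Block specification.

```gherkin
Feature: Users endpoint
  Scenarios exercising GET /users for functionality and business logic.

  Scenario Outline: Get user by username
    Given the API is available
    When I send a GET request to "/users?name=<username>"
    Then the response status code should be <status>
    And the response payload should be valid JSON

    Examples:
      | username | status |
      | alice    | 200    |
      | unknown  | 404    |
```

Each row in the Examples table runs the scenario once, so a single feature file covers multiple positive and negative cases for the same endpoint.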

 

 Post-testing Tasks: 

  1. Design a web dashboard for capturing test statistics – the dashboard captures the results/statistics of the different test types/suites against API endpoints, for better visibility into API quality.

SCOPE

General

Select a particular GovStack Building Block OpenAPI definition/specification file for the first version of the test harness, and select the API testing types required to implement the test harness.

Tactics

Select the Registration Building Block API specification for designing Version 1 of the test harness.

Select the following testing types for qualifying a Building Block API for GovStack compliance:

Sanity Testing 

Smoke Testing

Functional/Unit Testing

 

An API example and a test matrix:  

We can now express everything as a matrix that can be used to write a detailed test plan (for test automation or manual tests). 

Let’s assume a subset of our API is the /users endpoint, which includes the following API calls: 

API Call                                    Action
GET /users                                  List all users
GET /users?name={username}                  Get user by username
GET /users/{id}                             Get user by ID
GET /users/{id}/configurations              Get all configurations for user
POST /users/{id}/configurations             Create a new configuration for user
DELETE /users/{id}/configurations/{id}      Delete configuration for user
PATCH /users/{id}/configurations/{id}       Update configuration for user
Where {id} is a UUID, and all GET endpoints allow optional query parameters filter, sort, skip, and limit for filtering, sorting, and pagination.

Each numbered test scenario category below lists its test action and the validations to perform.

1. Basic positive tests (happy paths)

Test action: Execute API call with valid required parameters.

Validate status code:

  1. All requests should return a 2XX HTTP status code.

  2. Returned status code is according to spec:
– 200 OK for GET requests
– 201 for POST or PUT requests creating a new resource
– 200, 202, or 204 for a DELETE operation, and so on

Validate payload:

  1. Response is a well-formed JSON object.

  2. Response structure is according to the data model (schema validation: field names and field types are as expected, including nested objects; field values are as expected; non-nullable fields are not null, etc.).

Validate state:

  1. For GET requests, verify there is NO STATE CHANGE in the system (idempotence).

  2. For POST, DELETE, PATCH, PUT operations, ensure the action has been performed correctly in the system by:
– performing an appropriate GET request and inspecting the response
– refreshing the UI in the web application and verifying the new state (only applicable to manual testing)

Validate headers:

Verify that HTTP headers are as expected, including content-type, connection, cache-control, expires, access-control-allow-origin, keep-alive, HSTS, and other standard header fields – according to spec. Verify that information is NOT leaked via headers (e.g., the X-Powered-By header is not sent to the user).

Performance sanity:

Response is received in a timely manner (within reasonable expected time) – as defined in the test plan.

2. Positive tests with optional parameters

Test action: Execute API call with valid required parameters AND valid optional parameters. Run the same tests as in #1, this time including the endpoint's optional parameters (e.g., filter, sort, limit, skip, etc.).

Validate status code: As in #1.

Validate payload:

Verify response structure and content as in #1. In addition, check the following parameters:
– filter: ensure the response is filtered on the specified value
– sort: specify the field on which to sort; test ascending and descending options; ensure the response is sorted according to the selected field and sort direction
– skip: ensure the specified number of results from the start of the dataset is skipped
– limit: ensure dataset size is bounded by the specified limit
– limit + skip: test pagination

Check combinations of all optional fields (filter + sort + limit + skip) and verify the expected response.

Validate state: As in #1.

Validate headers: As in #1.

Performance sanity: As in #1.
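The sort, skip, and limit checks above can be expressed as pure helper functions that compare a response page against the full known dataset. A minimal sketch (function names are illustrative):

```python
def check_sorted(items, field, descending=False):
    """Verify the response is sorted on the selected field and direction."""
    values = [item[field] for item in items]
    return values == sorted(values, reverse=descending)

def check_pagination(full_dataset, page, skip, limit):
    """Verify skip/limit: the page equals the expected slice of the dataset
    and its size is bounded by the limit."""
    expected = full_dataset[skip:skip + limit]
    return page == expected and len(page) <= limit

# Example usage against a known dataset:
users = [{"name": "alice"}, {"name": "bob"}, {"name": "carol"}, {"name": "dave"}]
assert check_sorted(users, "name")
assert check_pagination(users, users[1:3], skip=1, limit=2)
```

Keeping these checks as pure functions lets the same logic be reused across manual verification scripts and automated BDD step definitions.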

 

 

 

 

3. Negative testing – valid input

Test action: Execute API calls with valid input that attempts illegal operations, e.g.:
– attempting to create a resource with a name that already exists (e.g., a user configuration with the same name)
– attempting to delete a resource that doesn't exist (e.g., a user configuration with no such ID)
– attempting to update a resource with illegal valid data (e.g., rename a configuration to an existing name)
– attempting an illegal operation (e.g., delete a user configuration without permission)
And so forth.

Validate status code:

  1. Verify that an error HTTP status code is sent (NOT 2XX).

  2. Verify that the HTTP status code is in accordance with the error case as defined in the spec.

Validate payload:

  1. Verify that an error response is received.

  2. Verify that the error format is according to spec, e.g., the error is a valid JSON object or a plain string (as defined in the spec).

  3. Verify that there is a clear, descriptive error message/description field.

  4. Verify that the error description is correct for this error case and in accordance with the spec.

Validate headers: As in #1.

Performance sanity: Ensure the error is received in a timely manner (within reasonable expected time).

 

 

 

 

4. Negative testing – invalid input

Test action: Execute API calls with invalid input, e.g.:
– missing or invalid authorization token
– missing required parameters
– invalid value for endpoint parameters, e.g., an invalid UUID in path or query parameters
– payload with invalid model (violates schema)
– payload with incomplete model (missing fields or required nested entities)
– invalid values in nested entity fields
– invalid values in HTTP headers
– unsupported methods for endpoints
And so on.

Validate status code: As in #3.

Validate payload: As in #3.

Validate headers: As in #1.

Performance sanity: As in #3.

 

 

 

 

5. Destructive testing

Test action: Intentionally attempt to make the API fail in order to check its robustness:
– malformed content in the request
– wrong content-type in the payload
– content with the wrong structure
– overflow parameter values, e.g.:
  – attempt to create a user configuration with a title longer than 200 characters
  – attempt to GET a user with an invalid UUID that is 1,000 characters long
  – overflow payload – huge JSON in the request body
– boundary value testing
– empty payloads
– empty sub-objects in the payload
– illegal characters in parameters or the payload
– incorrect HTTP headers (e.g., Content-Type)
– small concurrency tests – concurrent API calls that write to the same resources (DELETE + PATCH, etc.)
– other exploratory testing

Validate status code: As in #3. The API should fail gracefully.

Validate payload: As in #3. The API should fail gracefully.

Validate headers: As in #3. The API should fail gracefully.

Performance sanity: As in #3. The API should fail gracefully.

 

 

 

 

 

TESTING STRATEGY

The following testing types are designed to check whether the core functionality and business logic of an API work according to the business needs, and to test the integration and compatibility of APIs. Finally, performance and security are critical aspects of an API, so those features are also captured in the test harness.

 

Our first concern is API testing — ensuring that the API functions correctly.

The main objectives in functional testing of the API are: 

  • to ensure that the implementation is working correctly as expected — no bugs!

  • to ensure that the implementation is working as specified according to the requirements specification (which later on becomes our API documentation).

API test actions 

Each test is composed of test actions. These are the individual actions a test needs to take per API test flow. For each API request, the test would need to take the following actions: 

Verify correct HTTP status code.

 For example, creating a resource should return 201 CREATED and unpermitted requests should return 403 FORBIDDEN, etc.

Verify response payload. 

Check valid JSON body and correct field names, types, and values — including in error responses.

Verify response headers. 

HTTP server headers have implications on both security and performance.

Verify correct application state. 

This is optional and applies mainly to manual testing, or when a UI or another interface can be easily inspected.  

Verify basic performance sanity. 

If an operation was completed successfully but took an unreasonable amount of time, the test fails.
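The test actions above can be sketched as small, reusable assertion helpers. The schema format and function names here are assumptions for illustration, not part of the GovStack specification:

```python
import json
import time

def check_status(actual, expected):
    """Action 1: verify the correct HTTP status code."""
    assert actual == expected, f"expected status {expected}, got {actual}"

def check_payload(body, required_fields):
    """Action 2: verify the body is valid JSON containing expected fields."""
    payload = json.loads(body)  # raises ValueError on malformed JSON
    missing = [f for f in required_fields if f not in payload]
    assert not missing, f"missing fields: {missing}"
    return payload

def check_performance(call, max_seconds=1.0):
    """Action 5: fail if the call succeeds but takes unreasonably long."""
    start = time.monotonic()
    result = call()
    elapsed = time.monotonic() - start
    assert elapsed <= max_seconds, f"took {elapsed:.2f}s (limit {max_seconds}s)"
    return result
```

For example, after a successful POST the step definition could run `check_status(code, 201)` and then `check_payload(body, ["id", "name"])` against the created resource.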

Smoke Testing 

Definition:

A smoke test is essentially a quick, preliminary test to validate the API's code and ensure that its basic and critical functionality works. By running a smoke test first rather than starting with a full test suite, major errors and flaws can quickly be spotted and identified for immediate resolution. This can help decrease overall testing time. 

Methodology:

  • Call the API to check if it responds.

  • Call the API using a regular amount of test data to see if it responds with a payload in the correct schema.

  • The same step as above but with a larger amount of test data.

  • Test the API and how it interacts with the other APIs and components it’s supposed to interact with.
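The first two methodology steps can be sketched end-to-end against a locally served mock. The handler, URL, and payload shape below are assumptions made purely so the example is self-contained and runnable; a real smoke test would target the Building Block's mock server instead.

```python
# Smoke-test sketch: call the API, check it responds, check the payload schema.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class MockUsersHandler(BaseHTTPRequestHandler):
    """Stand-in for the API under test (assumption for this sketch)."""
    def do_GET(self):
        body = json.dumps([{"id": "1", "name": "alice"}]).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):
        pass  # silence per-request logging

def smoke_test(base_url):
    """Step 1: call the API and check it responds.
    Step 2: check the payload parses as JSON with the expected shape."""
    with urllib.request.urlopen(f"{base_url}/users", timeout=5) as resp:
        assert resp.status == 200, f"unexpected status {resp.status}"
        payload = json.load(resp)
    assert isinstance(payload, list), "expected a JSON array of users"
    for user in payload:
        assert "id" in user and "name" in user, "user object missing fields"
    return payload

# Serve the mock on an ephemeral port and run the smoke test against it.
server = HTTPServer(("127.0.0.1", 0), MockUsersHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
users = smoke_test(f"http://127.0.0.1:{server.server_port}")
server.shutdown()
```

The later steps (larger test data, interaction with other components) reuse the same pattern with bigger datasets and additional endpoints.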

Sanity Testing  

Definition:

Sanity testing checks whether the results that smoke testing returns make sense in the context of the API's main purpose.

Methodology:

Sanity testing verifies the API is interpreting the results and displaying the required data in the correct manner.

Functional/Unit Testing

Definition: 

Functional testing is a type of software testing that validates the software system against the functional requirements/specifications. The purpose of functional tests is to test each function of the software application by providing appropriate input and verifying the output against the functional requirements.

Methodology:

The main objectives in functional testing of the API are: 

  • to ensure that the implementation is working correctly as expected — no bugs!

  • to ensure that the implementation is working as specified according to the requirements specification (which later on becomes our API documentation).

  • to prevent regressions between code merges and releases.

ENVIRONMENT REQUIREMENTS 

Test Environment:

The test harness must run inside the GovStack Building Block repositories. API testing is executed using Continuous Integration (CI) servers to automate the process. CircleCI configurations are used to set up the test environment and run test cases/suites against all implementation examples of the API specification.

The following is an example folder structure of a BB repository:

  api/                BB OpenAPI spec file (YAML/JSON)
  test/
    features/         Gherkin feature files
    step_defs/        test implementation code
    Dockerfile        sets up the test environment
  examples/
    mock/             docker-compose.yaml, Dockerfile, Caddyfile
    CRM1/             docker-compose.yaml, Dockerfile, Caddyfile
    CRM2/             docker-compose.yaml, Dockerfile, Caddyfile
    CRM3/             docker-compose.yaml, Dockerfile, Caddyfile

 

The mock server for API testing must be compatible with docker-compose so that the testing environment can be set up automatically.
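For illustration only, a minimal docker-compose.yaml for the mock example might look like the following. The image names, ports, and file names are assumptions, not taken from an actual Building Block repository:

```yaml
version: "3.8"
services:
  mock-api:
    build: .                           # Dockerfile in the same folder
    ports:
      - "4010:4010"                    # mock server port (assumed)
  caddy:
    image: caddy:2
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
    ports:
      - "8080:8080"                    # reverse proxy in front of the mock
```

With this layout, `docker-compose up -d` inside the example folder brings up the mock implementation and its reverse proxy before the test suites run.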

TEST SCHEDULE

Task                                                   Members                      Estimated Effort
Develop Cucumber-based Gherkin scripts – test spec     Test Designer                1 week
Step definition implementation code of feature file    Tester, Test Administrator   1 week
Developing different API testing types                 Tester, Test Administrator   1 week
Test case management                                   Tester, Test Administrator
Test reporting – dashboard design                      Tester, Test Administrator
Test harness delivery                                  Testing Team
Total                                                                               1 month

CONTROL PROCEDURES

80% or more of test cases must pass for an API to qualify as compliant with the GovStack specifications.
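The 80% threshold can be computed directly from test-suite results. A trivial sketch (the function name is illustrative):

```python
def is_compliant(passed, total, threshold=0.80):
    """An API qualifies when the pass rate meets or exceeds the threshold."""
    if total == 0:
        return False  # no tests run means no evidence of compliance
    return passed / total >= threshold

assert is_compliant(85, 100)       # 85% pass rate qualifies
assert not is_compliant(79, 100)   # 79% pass rate does not
```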

FEATURES TO BE TESTED

Core functionality and business logic of the API are tested in Version 1.

FEATURES NOT TO BE TESTED

Integration testing of APIs running behind Information Mediators. 

Performance and Security/Authentication Features of API. 

Load Balancing Feature. 

SCHEDULES

The following deliverables are expected from the testing team at the end of Test Harness Version 1: 

  • Test Plan 

  • Test Cases 

  • Test Incident Reports 

  • Test Summary Reports  

TOOLS   

Test suite design tools: Cucumber Gherkin scripts, pytest-bdd, or JavaScript.

Test environment setup: mock server and example implementations set up using Docker and the Caddy server.

Test Execution: CircleCI Servers and Configuration Files.
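For illustration, a hypothetical .circleci/config.yml that runs the BDD suites against the mock example could look like the following. The job name, Docker image, and paths are assumptions based on the folder structure above:

```yaml
version: 2.1
jobs:
  api-tests:
    docker:
      - image: cimg/python:3.11
    steps:
      - checkout
      - setup_remote_docker              # needed to run docker-compose
      - run:
          name: Start mock implementation
          command: docker-compose -f examples/mock/docker-compose.yaml up -d
      - run:
          name: Run BDD test suites
          command: |
            pip install pytest pytest-bdd
            pytest test/step_defs --junitxml=results/junit.xml
      - store_test_results:
          path: results
workflows:
  test:
    jobs:
      - api-tests
```

Storing JUnit XML output lets the CI server surface per-test results, which the planned web dashboard can also consume.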