This document presents the plan for the testing approach. It will be adjusted and updated as the test configuration work moves forward.
INTRODUCTION
The GovStack initiative is a multi-stakeholder initiative led by the Federal Ministry for Economic Cooperation and Development, Gesellschaft für Internationale Zusammenarbeit (GIZ), Estonia, the International Telecommunication Union (ITU) and the Digital Impact Alliance.
Building blocks are enterprise-ready, reusable software components that provide key functionality, facilitating generic workflows across multiple sectors. The APIs of Building Blocks need to be tested for GovStack compliance using API testing and automation tools and services.
OBJECTIVES AND TASKS
Objectives
Testing Building Blocks' APIs ensures compliance with the GovStack Specification. The Behavior-Driven Development (BDD) technique is used to test the APIs. It is recommended to create a test harness made up of test suites for various testing kinds, including sanity, black-box, smoke, functional, load, performance, security, and integration tests. CI servers such as CircleCI or GitHub Actions are used to automate the execution of test suites. Finally, a dashboard displays statistics from test suite results against the specific API endpoints, services, and functionality under test.
Tasks
The following tasks have been identified by this Test Plan for the testing and post-testing timelines.
Tasks During Testing:
Develop Cucumber-based Gherkin scripts – To implement Behavior-Driven Development testing of APIs. Feature files define multiple scenarios with different example data for testing individual endpoints/paths of an API for functionality or business logic.
Step definition implementation code for feature files – Implementation code for testing each feature specified in a Gherkin feature file, using Python or JavaScript.
Developing different API testing types – Different types of testing need to be incorporated to check APIs for compliance at different stages of the API development life cycle.
Test case management – Select appropriate tools for storing test cases and the results of API testing.
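As an illustration of the first two tasks, a hypothetical Gherkin feature file might look like the sketch below. The endpoint, fields, and example IDs are illustrative assumptions, not taken from any Building Block specification; each step would then be backed by a step definition written in Python (pytest-bdd) or JavaScript.

```gherkin
# Hypothetical feature file; endpoint and data are illustrative only.
Feature: User retrieval
  As an API consumer
  I want to fetch users from the Building Block API
  So that I can verify the endpoint's business logic

  Scenario Outline: Get a user by ID
    Given the API is available
    When I send a GET request to "/users/<id>"
    Then the response status code should be <status>
    And the response body should contain the field "name"

    Examples:
      | id                                   | status |
      | 123e4567-e89b-12d3-a456-426614174000 | 200    |
      | 00000000-0000-0000-0000-000000000000 | 404    |
```

The Scenario Outline lets the same steps run against multiple example rows, which matches the plan's goal of testing individual endpoints with different example data.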
Post-testing Tasks:
Design of a web dashboard for capturing test statistics – The dashboard captures the results/statistics of different test types/suites against API endpoints, for better understanding and visibility of the APIs.
SCOPE
General
Select a particular GovStack Building Block Open API definition/specification file for the first version of the test harness, and select the API testing types required for implementing the test harness.
Tactics
Select the Registration Building Block API specification for designing Version 1 of the test harness.
Select the following testing types for qualifying the Building Block API for GovStack compliance:
Sanity Testing
Smoke Testing
Functional/Unit Testing
An API example and a test matrix:
We can now express everything as a matrix that can be used to write a detailed test plan (for test automation or manual tests).
Let’s assume a subset of our API is the /users endpoint, which includes the following API calls:
API Call | Action |
GET /users | List all users |
GET /users?name={username} | Get user by username |
GET /users/{id} | Get user by ID |
GET /users/{id}/configurations | Get all configurations for user |
POST /users/{id}/configurations | Create a new configuration for user |
DELETE /users/{id}/configurations/{id} | Delete configuration for user |
PATCH /users/{id}/configurations/{id} | Update configuration for user |
Where {id} is a UUID, and all GET endpoints allow optional query parameters filter, sort, skip and limit for filtering, sorting, and pagination.
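As a small sketch of how these calls might be exercised from test code, the helper below builds request URLs for the /users endpoints, including the optional query parameters. The base URL is a hypothetical placeholder.

```python
from urllib.parse import urlencode

BASE_URL = "https://api.example.org"  # hypothetical base URL for illustration

def build_users_url(user_id=None, **query):
    """Build a URL for the /users endpoints, appending optional
    query parameters (filter, sort, skip, limit) when given."""
    path = "/users" if user_id is None else f"/users/{user_id}"
    if query:
        return f"{BASE_URL}{path}?{urlencode(query)}"
    return f"{BASE_URL}{path}"

# List all users, sorted by name and paginated
print(build_users_url(sort="name", limit=10, skip=20))
```

A test suite would pass the resulting URLs to its HTTP client of choice; keeping URL construction in one helper makes it easy to cover combinations of the optional parameters, as the matrix below requires.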
# | Test Scenario Category | Test Action Category | Test Action Description |
1 | Basic positive tests (happy paths) | Execute API call with valid required parameters | |
  | | Validate status code | Verify the correct HTTP status code, e.g. 200 OK for GET requests |
  | | Validate payload | Verify a valid JSON body; field names and field types are as expected, including nested objects; field values are as expected; non-nullable fields are not null, etc. |
  | | Validate state | Ensure the action has been performed correctly in the system |
  | | Validate headers | Verify that HTTP headers are as expected, including content-type, connection, cache-control, expires. Verify that information is NOT leaked via headers (e.g. the X-Powered-By header is not sent to the user) |
  | | Performance sanity | Response is received in a timely manner (within reasonable expected time), as defined in the test plan |
2 | Positive + optional parameters | Execute API call with valid required parameters AND valid optional parameters | Run the same tests as in #1, this time including the endpoint's optional parameters (e.g. filter, sort, limit, skip) |
  | | Validate status code | As in #1 |
  | | Validate payload | Verify response structure and content as in #1. In addition, check combinations of all optional fields (filter + sort + limit + skip) and verify the expected response |
  | | Validate state | As in #1 |
  | | Validate headers | As in #1 |
  | | Performance sanity | As in #1 |
3 | Negative testing – valid input | Execute API calls with valid input that attempts illegal operations, e.g.: attempting to create a resource with a name that already exists (e.g. a user configuration with the same name); attempting to delete a resource that doesn't exist; attempting to update a resource with illegal valid data (e.g. rename a configuration to an existing name); attempting an illegal operation (e.g. delete a user configuration without permission); and so forth | |
  | | Validate status code | Verify that an appropriate error status code is returned |
  | | Validate payload | Verify that a clear error response is returned |
  | | Validate headers | As in #1 |
  | | Performance sanity | Ensure the error is received in a timely manner (within reasonable expected time) |
4 | Negative testing – invalid input | Execute API calls with invalid input, e.g.: missing or invalid authorization token; and so on | |
  | | Validate status code | As in #3 |
  | | Validate payload | As in #3 |
  | | Validate headers | As in #3 |
  | | Performance sanity | As in #3 |
5 | Destructive testing | Intentionally attempt to fail the API to check its robustness: wrong content-type in payload; content with wrong structure; overflow parameter values (e.g. attempt to GET a user with an invalid UUID; overflow payload: huge JSON in request body); boundary value testing; empty payloads; empty sub-objects in payload; illegal characters in parameters or payload; using incorrect HTTP headers (e.g. Content-Type); small concurrency tests – concurrent API calls that write to the same resources (DELETE + PATCH, etc.); other exploratory testing | |
  | | Validate status code | As in #3. API should fail gracefully |
  | | Validate payload | As in #3. API should fail gracefully |
  | | Validate headers | As in #3. API should fail gracefully |
  | | Performance sanity | As in #3. API should fail gracefully |
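The small concurrency test in scenario #5 (concurrent DELETE + PATCH on the same resource) can be sketched as below. The in-memory store is a hypothetical stand-in for the API backend, used only to show the shape of the test; in the real harness the two threads would issue actual HTTP calls.

```python
import threading

# Hypothetical in-memory stand-in for the API's configuration store.
class FakeConfigStore:
    def __init__(self):
        self._lock = threading.Lock()
        self._configs = {"cfg-1": {"name": "default"}}

    def delete(self, config_id):
        with self._lock:
            # True would map to 204 No Content, False to 404 Not Found
            return self._configs.pop(config_id, None) is not None

    def patch(self, config_id, changes):
        with self._lock:
            if config_id not in self._configs:
                return False  # would map to 404 Not Found
            self._configs[config_id].update(changes)
            return True  # would map to 200 OK

def run_concurrent_calls(store):
    """Fire DELETE and PATCH at the same resource concurrently."""
    results = {}
    t1 = threading.Thread(target=lambda: results.update(delete=store.delete("cfg-1")))
    t2 = threading.Thread(target=lambda: results.update(patch=store.patch("cfg-1", {"name": "new"})))
    for t in (t1, t2):
        t.start()
    for t in (t1, t2):
        t.join()
    return results

results = run_concurrent_calls(FakeConfigStore())
# Whatever the interleaving, the system must not be corrupted: the
# resource is either gone (DELETE won the race) or was renamed first.
print(results)
```

The assertion for such a test is not a fixed outcome but graceful behavior under either interleaving, which is exactly the "API should fail gracefully" criterion of the matrix.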
TESTING STRATEGY
The following testing types are designed to check whether the core functionality and business logic of an API work according to the business needs, and to test the integration and compatibility of APIs. Performance and security are also critical aspects of an API, so those features are likewise captured in the test harness.
Our first concern is API testing — ensuring that the API functions correctly.
The main objectives in functional testing of the API are:
to ensure that the implementation is working correctly as expected — no bugs!
to ensure that the implementation is working as specified according to the requirements specification (which later on becomes our API documentation).
API test actions
Each test is comprised of test actions. These are the individual actions a test needs to take per API test flow. For each API request, the test would need to take the following actions:
Verify correct HTTP status code.
For example, creating a resource should return 201 CREATED and unpermitted requests should return 403 FORBIDDEN, etc.
Verify response payload.
Check valid JSON body and correct field names, types, and values — including in error responses.
Verify response headers.
HTTP server headers have implications on both security and performance.
Verify correct application state.
This is optional and applies mainly to manual testing, or when a UI or another interface can be easily inspected.
Verify basic performance sanity.
If an operation was completed successfully but took an unreasonable amount of time, the test fails.
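The test actions above can be sketched as one reusable check operating on hypothetical response data (status, headers, JSON body, elapsed time). The expected field names and the timing threshold are assumptions for illustration, not values from the GovStack specification.

```python
def check_api_response(status, headers, body, elapsed_seconds,
                       expected_status=200, max_seconds=1.0):
    """Apply the basic API test actions to one response:
    status code, payload, headers, and performance sanity."""
    errors = []
    # 1. Verify the correct HTTP status code.
    if status != expected_status:
        errors.append(f"expected status {expected_status}, got {status}")
    # 2. Verify the response payload: a dict with the expected fields/types
    #    (an 'id' string field is an assumed example here).
    if not isinstance(body, dict) or not isinstance(body.get("id"), str):
        errors.append("payload missing a string 'id' field")
    # 3. Verify response headers: correct content type, no information leaks.
    if headers.get("Content-Type", "").split(";")[0] != "application/json":
        errors.append("unexpected Content-Type")
    if "X-Powered-By" in headers:
        errors.append("X-Powered-By header leaks server info")
    # 4. Verify basic performance sanity.
    if elapsed_seconds > max_seconds:
        errors.append(f"response took {elapsed_seconds}s (> {max_seconds}s)")
    return errors  # an empty list means the response passed all checks

# A well-behaved hypothetical response passes...
ok = check_api_response(200, {"Content-Type": "application/json"},
                        {"id": "123"}, 0.2)
# ...while a leaky, slow, failing one collects several findings.
bad = check_api_response(500, {"X-Powered-By": "Express"}, {}, 3.0)
print(ok, bad)
```

Verifying application state (the optional action above) is deliberately left out, since it usually requires querying the system through another interface.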
Smoke Testing
Definition:
A smoke test is essentially a quick-and-ready test to validate the API’s code and ensure that its basic and critical functionalities work. By going with a smoke test first rather than starting with a full test, major errors and flaws can quickly be spotted and identified for immediate resolution. This can help decrease overall testing time.
Methodology:
Call the API to check if it responds.
Call the API using a regular amount of test data to see if it responds with a payload in the correct schema.
The same step as above but with a larger amount of test data.
Test the API and how it interacts with the other APIs and components it’s supposed to interact with.
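The first two steps of this methodology might be sketched as follows, with a stubbed call standing in for the real API; the endpoint shape and schema are illustrative assumptions.

```python
# Stand-in for a real HTTP call; in an actual smoke test this would be
# a request to the API under test (the /users schema is hypothetical).
def call_users_api():
    return 200, [{"id": "123e4567-e89b-12d3-a456-426614174000", "name": "alice"}]

def smoke_test():
    """Quick check: does the API respond, and does the payload
    match the expected schema (a list of users with id and name)?"""
    status, payload = call_users_api()
    assert status == 200, "API did not respond successfully"
    assert isinstance(payload, list), "expected a list of users"
    for user in payload:
        assert isinstance(user.get("id"), str), "user missing string 'id'"
        assert isinstance(user.get("name"), str), "user missing string 'name'"
    return True

print(smoke_test())
```

Because a smoke test only gates further testing, it asserts the minimum (reachability and schema shape) and leaves field values and business logic to the functional suite.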
Sanity Testing
Definition:
Sanity testing involves checking whether the results that smoke testing comes back with make sense in the context of the API's main purpose.
Methodology:
Sanity testing verifies the API is interpreting the results and displaying the required data in the correct manner.
Functional/Unit Testing
Definition:
Functional testing is a type of software testing that validates the software system against the functional requirements/specifications. The purpose of functional tests is to test each function of the software application by providing appropriate input and verifying the output against the functional requirements.
Methodology:
The main objectives in functional testing of the API are:
to ensure that the implementation is working correctly as expected — no bugs!
to ensure that the implementation is working as specified according to the requirements specification (which later on becomes our API documentation).
to prevent regressions between code merges and releases.
ENVIRONMENT REQUIREMENTS
Test Environment:
The test harness must run inside the GovStack Building Block repositories. API testing is executed using Continuous Integration (CI) servers to automate the process. CircleCI configurations are used to set up the test environment and run test cases/suites against all implementation examples of the API specification.
Following is an example of the folder structure of a BB repo:
api: BB Open API spec file in YAML/JSON
test:
    features: Gherkin feature files
    step_defs: test implementation code
    Dockerfile – sets up the test environment
examples:
    mock: docker-compose.yaml, Dockerfile, Caddyfile
    CRM1: docker-compose.yaml, Dockerfile, Caddyfile
    CRM2: docker-compose.yaml, Dockerfile, Caddyfile
    CRM3: docker-compose.yaml, Dockerfile, Caddyfile
The mock server for API testing must be compatible with docker-compose, so that the test environment can be set up automatically.
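A docker-compose.yaml for the mock example might look like the sketch below. The service names, images, and ports are illustrative assumptions, not values from any BB repo.

```yaml
# Hypothetical docker-compose.yaml for the mock example.
version: "3.8"
services:
  mock-api:
    build: .                 # Dockerfile providing the API mock
    ports:
      - "4010:4010"
  caddy:
    image: caddy:2           # reverse proxy in front of the mock
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
    ports:
      - "8080:8080"
    depends_on:
      - mock-api
```

The CI job would bring this stack up with `docker compose up -d` before running the test suites against the exposed port.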
TEST SCHEDULE
Task | Members | Estimated Effort |
Develop Cucumber-based Gherkin scripts – test specification | Test Designer | 1 Week |
Step definition implementation code for feature files | Tester, Test Administrator | 1 Week |
Developing different API testing types | Tester, Test Administrator | 1 Week |
Test case management | Tester, Test Administrator | |
Test reporting – dashboard design | Tester, Test Administrator | |
Test harness delivery | Testing Team | |
Total | | 1 Month |
CONTROL PROCEDURES
An API qualifies as compliant with the GovStack specifications when 80% or more of its test cases pass successfully.
FEATURES TO BE TESTED
Core functionality and business logic of the API (Version 1).
FEATURES NOT TO BE TESTED
Integration testing of the API running behind Information Mediators.
Performance and security/authentication features of the API.
Load balancing features.
DELIVERABLES
The following deliverables are expected from the testing team at the end of Test Harness Version 1:
Test Plan
Test Cases
Test Incident Reports
Test Summary Reports
TOOLS
Test suite design tools: Cucumber Gherkin scripts, pytest-bdd, or JavaScript.
Test environment set-up: mock server and example implementations set up using Docker and Caddy server.
Test execution: CircleCI servers and configuration files.