GovStack test harness, report and issues
HTML report on the Cucumber tests for IM-BB, run manually from a developer's computer against the deployment in SandBox (as of 12.10.2023 the test data is set up in the SandBox IM-BB and it can be used as a System Under Test)
An overview of the results shows all tests passing:
However, there are multiple issues with the test harness at different levels.
1. Assumption of Repository Location
The test harness setup assumes that all candidate applications reside within GovStack's working group repository. Furthermore, it's implied that these applications can be packaged into a single folder within the '/examples' directory in the building block specifications folder. They are then deployed using Circle CI and executed without manual input by running the './test_entrypoint.sh' file.
This is not the case with the Information Mediator, and we believe the same applies to other low-level/system-level building blocks, especially since it was previously agreed that achieving a cloud-native, production-ready, and fully automatic deployment of the entire Information Mediator was not feasible within the current scope of our engagement with ITU.
As an alternative, the test harness suggests deploying an adaptor that proxies the functionality from an external system. This approach, however, is similarly not suitable for a low-level building block.
2. Reliance on a Specific CI/CD Tool
The test harness cannot simply be "connected to" an existing pipeline or deployment for validation. Instead, it mandates its own CI/CD tool, Circle CI, for building blocks and assumes a fully automatic deployment of the System Under Test.
Nortal utilizes GitLab CI/CD for software development and SandBox deployment. Additionally, IM-BB depends on several different systems that, while deployed automatically using Helm charts, still require partial manual configuration via their respective user interfaces.
Neither the RFP nor our agreement stipulated support for specific or additional CI/CD tools such as Circle CI, Jenkins, and so on.
3. Test Composition and Assessing Stages of Compliance
The test cases for IM-BB currently cover only a limited selection of X-Road-specific APIs and do not include PubSub.
The tested functionalities include:
Retrieving a list of services and endpoints for a service provider from a security server.
Obtaining the OpenAPI specification of those services.
Fetching a list of clients from a security server.
Retrieving a list of methods provided by a service from a security server.
All these functionalities are core to X-Road and come as standard features. Essentially, in our case, the test harness for IM-BB only examines the foundational technology underlying the BB implementation (which already undergoes rigorous QC, including automated tests), and even then it covers only a very small fraction of that technology.
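For concreteness, the calls in question correspond (as far as we can tell from the feature files) to the standard X-Road REST metaservices. The sketch below shows their approximate shape, assuming Node 18+ with the built-in fetch; the subsystem identifiers are hypothetical placeholders, not the actual SandBox configuration:

```
// Sketch of the metaservice calls the Cucumber features exercise against a
// security server. The identifiers below are illustrative placeholders.
const SS = "https://ss2.im.sandbox-playground.com:8443"; // security server in SandBox
const PROVIDER = "SANDBOX/GOV/12345/provider-subsystem"; // hypothetical provider subsystem id
const CLIENT = "SANDBOX/GOV/12345/consumer-subsystem";   // hypothetical consumer subsystem id

async function metaserviceCalls(): Promise<void> {
  // List of clients known to the security server (answered by the server itself).
  await fetch(`${SS}/listClients`, { headers: { Accept: "application/json" } });

  // List of services/methods offered by a provider; goes through the "r1"
  // message protocol and needs the X-Road-Client header naming the consumer.
  await fetch(`${SS}/r1/${PROVIDER}/listMethods`, {
    headers: { "X-Road-Client": CLIENT, Accept: "application/json" },
  });

  // OpenAPI description of one of those services.
  await fetch(`${SS}/r1/${PROVIDER}/getOpenAPI?serviceCode=some-service`, {
    headers: { "X-Road-Client": CLIENT, Accept: "application/json" },
  });
}

metaserviceCalls().catch(console.error);
```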
The GovStack documentation, when outlining the method for assessing building block compliance, emphasizes integration as a pivotal component. It provides evaluation scenarios in the form of Nonfunctional Requirements for Onboarding Products (6 Onboarding Products | GovStack Specification).
All these scenarios presuppose a functional Information Mediator. Concepts like "Adapters", "API Gateways", or "Native GovStack Implementation" are all predicated on information mediation and are not applicable in the IM deployment context. As currently defined, integration, the primary criterion for compliance testing, cannot be employed to gauge IM compliance. The Information Mediator will always be fully integrated with itself, regardless of its implementation.
4. Test and Test Data Quality
The test cases were not written according to the specifications (for example, they did not provide the protocol version required in the request) and had only been tested against a mock responder that always returned HTTP 200 OK, irrespective of malformed URLs. A functional and compliant system could therefore not be tested and would automatically fail even while conforming to the specifications.
The tests also did not provide the correct headers required by the protocol to retrieve properly formatted responses.
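To illustrate the kind of fix this required: the request helpers need to include the protocol version in the path and send the headers the X-Road REST protocol expects. The helper below is a hypothetical sketch; the function name and identifier values are ours, not the harness's:

```
// Hypothetical helper showing a protocol-compliant request, as opposed to the
// original requests, which lacked the protocol version and the X-Road headers.
async function callProviderService(
  securityServer: string, // security server base URL, e.g. the SandBox one
  clientId: string,       // consumer subsystem id, e.g. "SANDBOX/GOV/12345/consumer"
  serviceId: string,      // provider service id, e.g. "SANDBOX/GOV/67890/provider/service"
  path: string,           // endpoint path within the service, e.g. "/items"
): Promise<unknown> {
  const response = await fetch(`${securityServer}/r1/${serviceId}${path}`, {
    headers: {
      "X-Road-Client": clientId,  // required: identifies the consumer subsystem
      Accept: "application/json", // ask for a properly formatted response
    },
  });
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  return response.json();
}
```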
The tests assumed multiple Information Mediator network instances, which is not the case for either SandBox or our implementation. An instance of the Information Mediator network consists of the whole interoperability framework, including a central server, metrics, multiple security servers, etc.
There was no provision to create a federated network between multiple different GovStack networks, nor will that be relevant in the context of the SandBox or the GovStack use cases implemented there. The network instance itself is a singleton (let's call it SANDBOX), which houses multiple IM-BB instances in the form of Security Servers and accompanying PubSub components.
5. Summary
Integrating our current IM-BB solution with the test harness would necessitate devising an entirely new (and arbitrary) CI/CD process. This is not feasible for two main reasons:
The assumption that an IM-BB candidate application should or could be housed within a folder in the BB specifications repository and integrated via a straightforward merge request is incorrect. The solution consists of four distinct repositories, each with its own structure, dependencies, and deployment mechanisms.
The belief that IM-BB could be deployed as a System Under Test (including preloaded and configured test data) automatically by executing a single script, within the confines of our current agreement, is also mistaken.
The effort required to synchronize these elements would be significant, yielding minimal value. We cannot justify or allocate resources for such an endeavor.
6. The Workaround
Even though we consider testing X-Road itself arbitrary, the tests as delivered do not function, and we cannot justify implementing a process for running them automatically, we have done the following:
Fixed the bugs in the test helper code to create protocol-compliant requests
Fixed test data to properly account for SandBox as the network instance
Configured the tests to run against the following deployment in SandBox: https://ss2.im.sandbox-playground.com:8443/
Set up data in the SandBox deployment so it can perform as the System Under Test
Configured Cucumber to produce an HTML report of the tests (see the configuration sketch after this list)
Ran the Cucumber tests and attached the generated report to this page
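For reference, assuming the tests run on cucumber-js version 8 or later (other runners and older versions expose the equivalent option differently), the HTML report can be enabled with a profile along these lines, where the report path is an arbitrary choice:

```
// cucumber.js — profile sketch enabling an HTML report alongside the normal
// console output. The report path is an arbitrary choice.
module.exports = {
  default: {
    format: [
      "progress",                          // keep the usual console output
      "html:reports/cucumber-report.html", // built-in HTML formatter
    ],
  },
};
```

With such a profile in place, a plain cucumber-js run should write a self-contained HTML report to the configured path.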
We have temporarily uploaded our changes here:
Alternatively, you can access the changes in file format (in the GovStack working group IM-BB format):