GovStack sandbox execution environment

The sandbox environment provides Kubernetes resources for running containers in a shared environment.

Overview

Our implementation follows guidelines from the AWS documentation (AWS Prescriptive Guidance).

The sandbox environment is installed in a single region; we use the eu-central-1 (Frankfurt) region. Within the region we create three public and three private subnets, distributed across three availability zones.

Subnets

In the subnets we use a /24 subnet mask, so each subnet has about 250 usable IP addresses (a /24 gives 256 addresses, of which AWS reserves five per subnet).

The private subnets contain the building blocks and other required components (Kubernetes infrastructure components).

 

At this phase we are not installing building blocks in the public subnets. If a building block requires access outside the private network, a NAT gateway (NAT GW) provides internet access.

Managed node groups

Instead of Fargate, we chose managed node groups. Fargate did not support all the Kubernetes features we needed, so managed node groups were chosen instead.

High-level architecture

 

Provisioning

We use Terraform and Terragrunt for installing/provisioning the sandbox environment.

Provisioning uses a layered approach: the environment is split into separate stacks.

The circleci, eks and kube stacks should be run in that order. The provisioning order of the rest does not matter at this time.

Check out the latest IaC code from the repository: https://github.com/GovStackWorkingGroup/sandbox-infra
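
For example:

git clone https://github.com/GovStackWorkingGroup/sandbox-infra.git
cd sandbox-infra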

Update AWS login settings:

aws configure

 

NOTE: Instead of using AWS's own aws configure, it is recommended to use https://github.com/99designs/aws-vault
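
aws-vault stores credentials in the OS keychain and exposes them to commands per invocation. A minimal sketch, assuming a profile named govstack-sandbox (hypothetical):

# Store long-lived credentials for the profile once
aws-vault add govstack-sandbox

# Run a command with temporary credentials from that profile
aws-vault exec govstack-sandbox -- aws sts get-caller-identity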


Example container installation

Add Helm repo

helm repo add bitnami https://charts.bitnami.com/bitnami

Install MySQL

helm install bitnami/mysql --generate-name

Check installed releases

helm list

 

Example REST application installation

Example app source code can be found at:

Create a Dockerfile, if one does not exist:

FROM --platform=linux/amd64 openjdk:17-alpine
COPY target/ExampleRestApp-0.0.1-SNAPSHOT.jar ExampleRestApp-0.0.1-SNAPSHOT.jar
ENTRYPOINT ["java","-jar","/ExampleRestApp-0.0.1-SNAPSHOT.jar"]

Run docker build

docker build . -t example

Login to AWS ECR

Using eu-central-1 as the region and 814942682479 as the AWS account ID:

aws ecr get-login-password --region eu-central-1 | docker login --username AWS --password-stdin 814942682479.dkr.ecr.eu-central-1.amazonaws.com
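
The image was built locally as example, so tag it with the ECR repository name before pushing (this assumes the govstackecr repository already exists in the account):

docker tag example:latest 814942682479.dkr.ecr.eu-central-1.amazonaws.com/govstackecr:latest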

Push image to ECR

docker push 814942682479.dkr.ecr.eu-central-1.amazonaws.com/govstackecr:latest
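
The pushed image can be verified with, for example:

aws ecr list-images --repository-name govstackecr --region eu-central-1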

 

Create Helm chart

helm create examplerestapp

 

Set the correct values in values.yaml:

image:
  repository: 814942682479.dkr.ecr.eu-central-1.amazonaws.com/govstackecr
  name: latest
  pullPolicy: IfNotPresent

 

Install Helm chart

helm install examplerestapp charts/ExampleRestApp --values charts/ExampleRestApp/values.yaml
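
The release status can also be checked with Helm:

helm status examplerestapp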

 

Verify that the installation executed successfully:

kimmo.hedemaki@G133915 ExampleRestApp % kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
examplerestapp-688c56d859-gthrw    0/1     Running   0          24s
nginx-1669796747-94b8596cc-wkjlg   1/1     Running   0          26h

 

Test application

Create a tunnel to the application, then open http://localhost:18081/api/v1 in a browser:

kubectl port-forward examplerestapp-688c56d859-gthrw 18081:8081
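
With the tunnel open, the endpoint can also be checked from the command line:

curl http://localhost:18081/api/v1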

 

Building block isolation

Building blocks inside the GovStack environment are isolated by Kubernetes namespaces. Every organisation and/or business case gets its own namespace, and the related building blocks are installed “under” that namespace. This ensures that building blocks can interact with other building blocks inside the namespace but cannot directly interact with BBs in other namespaces. Capacity reservations can also be easily adjusted per namespace if needed.
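
A minimal sketch of this setup (the namespace, quota, and chart names here are hypothetical):

# Create a namespace for one organisation/business case
kubectl create namespace example-org

# Reserve/limit capacity for the namespace
kubectl create quota example-org-quota -n example-org --hard=requests.cpu=2,requests.memory=4Gi

# Install a building block "under" that namespace
helm install examplebb ./examplebb-chart -n example-org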

Building block CI/CD

NOTICE: This is an MVP solution for now; in the future there will be portal/automated functionality to handle the CI/CD process.

Every building block should contain Helm charts inside the building block repository.

TODO: Add more information about helm charts

CI/CD pipelines are developed together by the BB developer and the Sandbox admin.

Only the Sandbox admin is allowed to maintain/install BBs in the Sandbox environment.

 

CircleCI

OIDC

For general authentication to AWS resources, CircleCI build jobs use OIDC (OpenID Connect identity) tokens, so we are not storing AWS secrets in CircleCI.
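
Inside a job that has the context attached, CircleCI exposes the token in the $CIRCLE_OIDC_TOKEN environment variable, and the job can exchange it for temporary AWS credentials. A rough sketch (the role ARN is illustrative; account ID elided):

# Exchange the CircleCI OIDC token for temporary AWS credentials
aws sts assume-role-with-web-identity \
  --role-arn arn:aws:iam::<account-id>:role/CircleCIRole \
  --role-session-name circleci-deploy \
  --web-identity-token "$CIRCLE_OIDC_TOKEN"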

For every pipeline, a separate IAM role needs to be created on the AWS side for it to authenticate to Kubernetes. This is done by adding it to env.hcl and then running the terragrunt stacks circleci and eks, in that order.

For more information, see Using OpenID Connect identity tokens to authenticate jobs with cloud providers and Using OpenID Connect tokens in jobs in the CircleCI documentation.

Note!

In CircleCI jobs we need to use a context; CircleCI stores the OIDC token in the context during workflow execution:

jobs:
  - build-and-test
  - aws-job:
      context: aws

 

Contexts provide a mechanism for securing and sharing environment variables across projects. The environment variables are defined as name/value pairs and are injected at runtime. (From the CircleCI documentation on contexts.)

CircleCI auth to AWS

After OIDC is in place, pipelines need an AWS IAM role that authenticates them to deploy to Kubernetes. This is done with the circleci module in Terraform. In the corresponding environment, you need to add the correct values to env.hcl.

Example from the development environment:

# CircleCI
org_id          = "a9a7f9cb-bb2c-4787-b2a7-b7963c3172f8"
ssl_thumbprints = ["9e99a48a9960b14926bb7f3b02e22da2b0ab7280"]
projects = [
  { name = "sandbox-bb-payments", project_id = "cdc99791-edf8-4dd4-8fff-6fea9fb6c302" },
  { name = "sandbox-bb-information-mediator", project_id = "2a76c4ba-3f19-4cf4-a3fd-01d5671b7437" },
  { name = "sandbox-bb-digital-registries", project_id = "36603874-7125-4106-8a22-4df79ede947f" },
  { name = "sandbox-playground", project_id = "e530981a-3801-4366-9181-3371eee0d56a" }
]

org_id is the ID of the CircleCI organisation; it can be found in the project settings.

The SSL thumbprint can be calculated by following Obtain the thumbprint for an OpenID Connect identity provider (AWS Identity and Access Management), or with the tool in the infra repo, tools/ssl_thumbprint.sh, which basically just implements that guide.

In projects, project_id can be found in the pipeline configuration in CircleCI, and the name you can decide yourself.

After applying the changes to the circleci stack, also run the eks stack so that the roles are added to the aws-auth list.
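
A minimal sketch of that sequence with Terragrunt (the stack directory paths are an assumption about the repo layout):

# Apply the circleci stack first, then the eks stack
cd <environment>/circleci && terragrunt apply
cd ../eks && terragrunt apply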

Kubernetes auth

Build jobs also require access to Kubernetes when installing new deployments. Kubernetes controls its own access, so access to AWS services alone is not enough.

Adding permissions for a specific user/role requires modifying the aws-auth ConfigMap inside the kube-system namespace. This can also be done with Terraform, as is done with the CircleCI pipeline roles.

The current settings can be checked with:

kubectl describe -n kube-system configmap/aws-auth

See also: Kubernetes auth (AWS documentation) and Kubernetes RBAC (Kubernetes documentation).

Settings can be added/edited using kubectl edit -n kube-system configmap/aws-auth or using eksctl.

For example, the following command adds the required permissions for a build job:

eksctl create iamidentitymapping --cluster GovStack_sandbox --region=eu-central-1 --arn arn:aws:iam::463471358064:role/CircleCIRole --group system:masters --username system:node:EKSGetTokenAuth

Quality assurance of BBs

Before a BB is installed in the GovStack environment, it must go through the API testing process and the results must be provided to the Sandbox admin. When an image is pushed to ECR, a vulnerability scan is executed; the BB must be at level Medium or lower (see below), otherwise the BB is not allowed to be installed in the Sandbox environment.

  • CVSS scores

    • Critical (9.0-10.0) scores are not allowed

    • High scores (7.0-8.9) must be analysed before installation, with mitigating actions taken where possible

    • Lower scores can be installed
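
Scan findings for a pushed image can be reviewed from the CLI, for example (repository and tag from the example above):

aws ecr describe-image-scan-findings --repository-name govstackecr --image-id imageTag=latest --region eu-central-1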

TODO: specify which level vulnerability issues can be