Backend
Set up Kubernetes on EKS, Docker on ECR, and Bitbucket Pipelines
The deployment flow for the backend is as follows:
Push to master/staging for prod/staging environments respectively
Pipeline will format, lint, and test the code
If these checks pass, another pipeline builds a Docker image
Image is pushed to AWS ECR
Pipeline then runs deployment command on EKS to update the image and perform a blue/green switchover
ECR
AWS's Elastic Container Registry (ECR) provides private repositories used to host our Docker images.
Create two new private repositories to hold the backend Docker images: one for the main image and one for intermediate build stages.
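For example, using the AWS CLI (a sketch; the second repository name is only an illustration for the build-stage images):

```sh
aws ecr create-repository --repository-name com.uplevyl.api --region eu-west-2
aws ecr create-repository --repository-name com.uplevyl.api.build --region eu-west-2
```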


EKS
Setup
Delete the default VPC. This VPC exposes all services in a public subnet. Instead we use a separate VPC for each environment, made up of public and private subnets to restrict access.
cd into the Backend directory (see Backend Template Starter)
Edit the cluster details in /kube/eksctl/cluster.yaml to change the cluster name, node group config, etc.
Run the following command to create a new VPC, cluster and node group
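Assuming the eksctl config file referenced above, the command is along these lines:

```sh
eksctl create cluster -f kube/eksctl/cluster.yaml
```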
Once the cluster has been created, the connection details should be added to your ~/.kube/config file. Ensure kubectl is connected to the correct cluster by running kubectl config current-context.
Switching to the cluster (before Teleport)
The cluster should automatically be added to your kubeconfig, so normally you can skip to the next step. If it hasn't, run this to login to the cluster:
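A sketch, assuming an AWS CLI profile that has access to the cluster; the cluster name, region and profile are placeholders:

```sh
aws eks update-kubeconfig --name <cluster-name> --region eu-west-2 --profile <aws-profile>
```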
Add cluster to Teleport
To connect to and switch between Kubernetes clusters, we use Teleport. To add the new cluster to the CA Teleport cluster:
Ensure you have helm installed on your local machine
SSH into the teleport-host machine using tsh ssh root@teleport-host
Edit the /etc/teleport.yaml file to add a new static token for a kube node, setting the token to a memorable string (such as the cluster name) under the # Add kube nodes here section
Reload the Teleport service using systemctl restart teleport
Run the following command to create a new namespace for Teleport on the cluster, install the Teleport agent, and connect it to the Teleport cluster
Make sure you have the correct context set in kubectl before running helm install
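A hedged sketch using the teleport-kube-agent Helm chart; the proxy address, token, cluster name and namespace are placeholders, and the exact chart values depend on the Teleport version in use:

```sh
helm repo add teleport https://charts.releases.teleport.dev
helm repo update
helm install teleport-agent teleport/teleport-kube-agent \
  --create-namespace --namespace teleport \
  --set roles=kube \
  --set proxyAddr=<teleport-proxy-address>:443 \
  --set authToken=<static-token> \
  --set kubeClusterName=<cluster-name>
```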
If everything went well, the install should complete successfully. If you see could not get token: NoCredentialProviders: no valid providers in chain, see the Error: NoCredentialProviders section below. If you see The SSO session associated with this profile has expired or is otherwise invalid, see the corresponding error section below.
Install Metrics Server
The Metrics Server provides the resource metrics the cluster needs to automatically scale pods (for example via the Horizontal Pod Autoscaler). Install it using:
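The standard install applies the upstream manifest:

```sh
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
```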
Switching clusters
Once the Kubernetes cluster has been added as a node to the Teleport cluster, you should be able to view it:
You can log in to a particular cluster by running:
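For example, with tsh (assuming you are already logged in to the Teleport proxy):

```sh
tsh kube ls                    # list Kubernetes clusters registered with Teleport
tsh kube login <cluster-name>  # switch your kubectl context to that cluster
```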
Cert Manager
First, set the role ARN for the cert-manager Route 53 Role in kube/helm/cert-manager-values.yaml.
Then install cert-manager:
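A minimal sketch using the jetstack chart; the CRD flag and chart version may differ depending on the cert-manager release you target:

```sh
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set installCRDs=true \
  -f kube/helm/cert-manager-values.yaml
```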
Error: NoCredentialProviders
If you get this error when running kubectl commands:
You'll need to edit your ~/.kube/config file to remove the aws-iam-authenticator command and replace it with aws, because aws-iam-authenticator does not work well with SSO.
Open the config file in your editor.
Under the users section, find the user for the context/cluster you're trying to connect to:
Replace the command and args with the following config:
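A hedged example of what the updated user entry might look like; the cluster name, region and profile are placeholders:

```yaml
users:
- name: <user-or-cluster-arn>
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args:
        - eks
        - get-token
        - --cluster-name
        - <cluster-name>
        - --region
        - eu-west-2
      env:
        - name: AWS_PROFILE
          value: <profile-name>
```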
This will allow you to use SSO to authenticate with kubectl.
Error: The SSO session associated with this profile has expired or is otherwise invalid
This error means the SSO login session has expired (the default session length is 4 hours). You'll need to log in again using:
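```sh
aws sso login --profile <profile-name>
```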
Pods per instance type
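The maximum number of pods per node depends on the EC2 instance type (its ENI and IP address limits). One quick way to check the allocatable pod count for the nodes in the current cluster:

```sh
kubectl get nodes -o custom-columns="NAME:.metadata.name,PODS:.status.allocatable.pods"
```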
CI/CD
Enable GitHub Actions to lint, test and deploy the backend.
Create AWS Policies
Three policies are needed:
ECR push image
EKS deploy image
Secrets Manager read secrets
Terraform
These can all be created automatically using the deploy Terraform script in scripts/terraform/deploy/.
First, initialise Terraform
Then, perform a dry run to confirm the changes
Finally, apply the changes using
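Assuming the standard Terraform CLI workflow, run from scripts/terraform/deploy/:

```sh
cd scripts/terraform/deploy
terraform init    # initialise providers and modules
terraform plan    # dry run to review the planned changes
terraform apply   # create the policies and supporting resources
```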
This will ask for a set of variables:
| Name | Description | Example |
| --- | --- | --- |
| aws_region | AWS region to use | eu-west-2 |
| aws_profile | AWS config profile to use | Uplevyl |
| ecr_repository | Name of the ECR repository to allow access to | com.uplevyl.api |
| cluster_names | A list of names of EKS clusters to allow access to | ["uplevyl-prod", "uplevyl-staging"] |
You'll need to add this user to the mapUsers section of aws-auth. This can be done using eksctl, as shown below. Make sure to replace the ARN with the actual ARN of the user.
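A sketch; the account ID, username and group are placeholders, and you should map only the permissions your deploy process actually needs rather than defaulting to system:masters:

```sh
eksctl create iamidentitymapping \
  --cluster uplevyl-prod \
  --region eu-west-2 \
  --arn arn:aws:iam::<account-id>:user/DeployServiceAccount \
  --username deploy-service-account \
  --group system:masters
```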
Repository Secrets
Set the following variables under the repository settings in GitHub.
| Variable | Description | Example |
| --- | --- | --- |
| AWS_ACCESS_KEY_ID | DeployServiceAccount user programmatic access key ID, used to push and deploy images | N/A |
| AWS_SECRET_ACCESS_KEY | DeployServiceAccount user programmatic secret access key, used to push and deploy images | N/A |
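These can also be set from the command line with the GitHub CLI (a sketch; assumes gh is authenticated against the repository):

```sh
gh secret set AWS_ACCESS_KEY_ID --body "<access-key-id>"
gh secret set AWS_SECRET_ACCESS_KEY --body "<secret-access-key>"
```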
Workflow Variables
The Main and Deploy workflows require the following environment variables to be set:
| Name | Description | Example |
| --- | --- | --- |
| AWS_REGION | AWS region the secrets and clusters are hosted in | eu-west-2 |
| AWS_SECRETS_STAGING | Name of the staging environment secret in AWS Secrets Manager | staging/uplevyl |
| AWS_SECRETS_PROD | Name of the production environment secret in AWS Secrets Manager | prod/uplevyl |
| AWS_CLUSTER_STAGING | Name of the EKS staging cluster | uplevyl-staging |
| AWS_CLUSTER_PROD | Name of the EKS production cluster | uplevyl-prod |
| ECR_REPOSITORY | Name of the AWS ECR repository storing the deployment images | com.uplevyl.api |
RDS
Production Database
For production databases, we use AWS RDS Serverless v2 Postgres databases. Set up a new database cluster, setting the user and database name to the project name (e.g. limelight). Generate a password using 1Password.
Security Group
The DB will need to be created in a custom VPC security group to allow access to it from the API server.

Give the SG a name and select the production EKS cluster.

Once created, assign it to the DB cluster.

Teleport - On Hold
RDS is used to host the Postgres database required for the Chelsea Apps backend API. Create a new AWS Aurora Postgres-compatible database, then follow these steps to add it to Teleport so developers can log in to Postgres via Teleport. Make sure to enable "IAM database authentication".
Create a new policy for Teleport
Create a new policy to allow Teleport to connect to RDS. Replace the Account ID and Resource ID with the correct values for the DB:
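A sketch of the kind of policy AWS documents for IAM database authentication; the account ID and DB resource ID are placeholders, and you may want to scope the resource to a specific DB user rather than a wildcard:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "rds-db:connect",
      "Resource": "arn:aws:rds-db:eu-west-2:<account-id>:dbuser:<db-resource-id>/*"
    }
  ]
}
```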

Secrets Manager
Secrets Manager is used to store the ES512 public and private key pair used to sign all JWT tokens generated by the backend.
The keys can be generated using:
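For example, with openssl (ES512 uses the P-521 curve; the file names are placeholders):

```sh
openssl ecparam -genkey -name secp521r1 -noout -out jwt-private.pem
openssl ec -in jwt-private.pem -pubout -out jwt-public.pem
```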
External Secrets
External Secrets Operator (ESO) replaces the deprecated Kubernetes Secrets setup described below.
Create an IAM user with permissions to access Secrets Manager.
Save the account's Access Credentials to a Kubernetes secret:
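A sketch; the namespace, secret name and key names are assumptions and should match whatever your SecretStore configuration expects:

```sh
kubectl create namespace external-secrets
kubectl create secret generic awssm-secret \
  --namespace external-secrets \
  --from-literal=access-key=<AWS_ACCESS_KEY_ID> \
  --from-literal=secret-access-key=<AWS_SECRET_ACCESS_KEY>
```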
Then install ESO using Helm:
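A minimal sketch of the standard ESO Helm install:

```sh
helm repo add external-secrets https://charts.external-secrets.io
helm repo update
helm install external-secrets external-secrets/external-secrets \
  --namespace external-secrets
```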
Kubernetes Secrets (Deprecated)
Environment variables required to run applications can be managed through AWS Secrets Manager, then pulled into Kubernetes and set up as Secrets, which can then be passed to applications.
Configure Secrets Backends
Get the cluster's OIDC issuer using:
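For example (replace the cluster name and region with your own):

```sh
aws eks describe-cluster --name <cluster-name> --region eu-west-2 \
  --query "cluster.identity.oidc.issuer" --output text
```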
Copy this for use in the next step.
Create needed AWS resources
Terraform is used to quickly create the necessary resources in AWS. Make sure you have it installed.
cd into the scripts/terraform folder and run
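For example:

```sh
cd scripts/terraform
terraform init
```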
This will initialise Terraform in that directory. Then, run the following command:
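That is the standard apply step:

```sh
terraform apply
```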
This will ask for a set of inputs (see below) and will then create a policy and a role, and attach the two. It will output the role's ARN, which is needed for the next step.
Inputs:
| Name | Description | Example |
| --- | --- | --- |
| aws_profile | Name of AWS config profile to use | Uplevyl |
| aws_region | AWS region to use | eu-west-2 |
| cluster_name | Name of the EKS cluster | uplevyl-prod |
| oidc_issuer | Cluster OIDC provider. See above or the input description for details | oidc.eks.eu-west-2.amazonaws.com/id/B8CA09CC507EFD3735BA9699D63511F2 |
Deploy kubernetes-external-secrets Controller
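A hedged sketch using the (now archived) kubernetes-external-secrets Helm chart; the region and role ARN are placeholders and the exact values depend on the chart version:

```sh
helm repo add external-secrets https://external-secrets.github.io/kubernetes-external-secrets/
helm install external-secrets external-secrets/kubernetes-external-secrets \
  --set env.AWS_REGION=eu-west-2 \
  --set securityContext.fsGroup=65534 \
  --set "serviceAccount.annotations.eks\.amazonaws\.com/role-arn"=<role-arn-from-terraform>
```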
It will automatically create a service account named external-secrets-kubernetes-external-secrets in Kubernetes.
Deploy ExternalSecret
ExternalSecret app-secrets will generate a Secret object with the same name, and the content would look like:
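For illustration, the generated Secret might look something like this; the key names depend on the entries defined in the ExternalSecret and the values shown are placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
data:
  DATABASE_URL: <base64-encoded value>
  JWT_PRIVATE_KEY: <base64-encoded value>
```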
In the app, add the ExternalSecret under envFrom:
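For example, in the Deployment's container spec (a sketch; the container and image names are placeholders):

```yaml
containers:
  - name: api
    image: <ecr-image>
    envFrom:
      - secretRef:
          name: app-secrets
```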
This will automatically inject the secrets from AWS Secrets Manager, into the Kubernetes ExternalSecret, then into the application itself.
Datadog (Monitoring)
Add the Datadog repo to Helm:
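```sh
helm repo add datadog https://helm.datadoghq.com
helm repo update
```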
Get the Datadog API and App keys from your settings and set them in the following environment variables:
| Key | Value |
| --- | --- |
| DD_API_KEY | API key |
| DD_APP_KEY | App key |
Then install the agent:
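A minimal sketch using the official datadog/datadog chart; the release name is arbitrary and additional values (cluster name, log collection, etc.) will likely be needed:

```sh
helm install datadog-agent datadog/datadog \
  --set datadog.apiKey="$DD_API_KEY" \
  --set datadog.appKey="$DD_APP_KEY"
```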