Backend
Answers & fixes for common issues
Database
How can I manually run migrations when they fail to run on deployment?
When migrations do not run automatically, they can be run manually from the K8s API Pod.
SSH into the pod:
$ kubectl exec --stdin --tty POD_NAME -- sh
From here, run the migrations using:
$ ./node_modules/ts-node/dist/bin.js -r tsconfig-paths/register --transpile-only ./node_modules/typeorm/cli.js --config apps/uplevyl-backend/src/migration.ts migration:run
How can I access client databases?
Staging
Staging databases are publicly accessible and can be accessed directly via any available database client:
Production
Production databases are not accessible from outside the VPC. The easiest way to access is via the AWS Console.
Log into the console, go to RDS, and select Query Editor in the sidebar.
Select the relevant database and credentials, and press Connect. This will take you to a query editor.

Teleport
How can I access Teleport?
Web
Teleport can be accessed via the web interface here:
Use GitHub SSO to log in (speak to Ben to get your GitHub account authorised for access)
CLI
To install Teleport:
$ brew install teleport
Or install for other platforms here.
To log in to tsh, run:
$ tsh login --auth github
This will open a web page where you can log in with your GitHub account and authenticate the CLI.
How can I SSH into the Teleport server?
Sometimes you may need to access the Teleport EC2 instance, such as when adding a new cluster node or debugging Teleport issues.
The best way to access the server is using tsh:
$ tsh ssh root@teleport-host
Backup
If you cannot use Teleport for whatever reason, access the instance through the AWS Console's SSH tool; the SSH private key is not shared within the team for local access.
Kubernetes
How can I access a cluster with Teleport (default)?
When logged into Teleport, you can use it to log into clusters.
To list the available clusters:
To log into a cluster:
This will populate your kubectl config with the cluster credentials. These are only valid for a few hours; after that, you will need to log in again.
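The tsh commands referenced above are not shown; in a typical Teleport setup they are the tsh kube subcommands (CLUSTER_NAME is a placeholder):

```shell
# List the Kubernetes clusters registered with Teleport
$ tsh kube ls

# Log into a cluster; this writes short-lived credentials
# into your kubectl config
$ tsh kube login CLUSTER_NAME
```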
How can I access a cluster without Teleport (only when TP is not working)?
If Teleport is not working, or you do not have access, clusters can be accessed via the eksctl tool (make sure you have this installed).
First, log into the SSO profile required for the project: CLI Access
Then, run:
Replace the cluster name, profile, and region with the relevant values for the cluster.
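As a sketch, assuming the standard AWS SSO and eksctl workflows (PROFILE_NAME, CLUSTER_NAME, and REGION are placeholders):

```shell
# Log into the AWS SSO profile for the project
$ aws sso login --profile PROFILE_NAME

# Write the cluster credentials into your kubectl config
$ eksctl utils write-kubeconfig --cluster CLUSTER_NAME --profile PROFILE_NAME --region REGION
```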
How do I see server logs?
For production logs, the best place is DataDog.
For staging systems, use kubectl (make sure to log in to the cluster first; see the Kubernetes section above):
First, get the pods available:
Copy the name of the relevant pod into this command:
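A minimal sketch of the two kubectl commands above (POD_NAME is a placeholder, copied from the first command's output):

```shell
# List the pods in the current namespace
$ kubectl get pods

# Stream the logs from the relevant pod
$ kubectl logs -f POD_NAME
```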
Deployments
I can't see my changes on the staging/prod APIs
The first place to check is GitHub Actions. Make sure the automated pipeline has run successfully.
If the pipeline is failing to deploy, see the next section.
If the pipeline has passed but you cannot see your changes, it may be that there was an issue starting up the latest server version. If Kubernetes cannot start up the container, it will keep the old version running.
To see what is causing the issue, connect to the cluster and view the logs from the crashed pod (see the Kubernetes section above).
How can I deploy manually?
To skip any checks, use the manual Deploy GitHub Action 
If this is still not working, you will need to perform the rollout locally:
Build locally using Docker and tag correctly
Go to the relevant ECR repository for the project, and view the details to push to the private repo
Log into the AWS CLI
Use the AWS CLI to authenticate Docker with the repository
Push to the repository
Rollout the latest build to the cluster
Connect to the cluster via kubectl
Run:
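The manual rollout steps above can be sketched as follows, assuming a standard ECR push and a Deployment restart (the account ID, region, repository, tag, and deployment name are all placeholders):

```shell
# Authenticate Docker with the private ECR registry
$ aws ecr get-login-password --region REGION | \
    docker login --username AWS --password-stdin ACCOUNT_ID.dkr.ecr.REGION.amazonaws.com

# Build and tag the image for the ECR repository
$ docker build -t ACCOUNT_ID.dkr.ecr.REGION.amazonaws.com/REPO_NAME:latest .

# Push the build to the repository
$ docker push ACCOUNT_ID.dkr.ecr.REGION.amazonaws.com/REPO_NAME:latest

# Roll out the latest build to the cluster
$ kubectl rollout restart deployment/DEPLOYMENT_NAME
```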