AWS Aurora PostgreSQL Setup
This guide covers setting up AWS Aurora PostgreSQL with IAM authentication for zymtrace on Amazon EKS.
Most of the instructions in this guide are specific to AWS setup rather than zymtrace itself. We've documented our own experience setting up AWS Aurora with EKS to make it easier for users who want to use this configuration. The zymtrace-specific configuration is minimal - it's primarily about connecting to your PostgreSQL database once it's properly configured in AWS.
Prerequisites
- AWS EKS cluster with OIDC provider configured
- Aurora PostgreSQL cluster with IAM authentication enabled
- IAM role with RDS connect permissions
- Kubernetes service account annotated with IAM role ARN
Setup Steps
Step 1: Enable IAM Authentication on Aurora
Enable IAM database authentication on your Aurora PostgreSQL cluster:
aws rds modify-db-cluster \
--db-cluster-identifier your-aurora-cluster \
--enable-iam-database-authentication
Or via the AWS Console:
- Navigate to RDS → Databases → Your Aurora Cluster
- Modify → Additional configuration → Database authentication
- Enable "IAM database authentication"
Step 2: Create Database User and Configure Permissions
Connect to Aurora using the master user and create the IAM-enabled database user.
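For example, with psql installed locally and network access to the cluster, the connection looks something like this (the endpoint and master user name are placeholders for your own values):
psql "host=your-cluster.cluster-xxxxx.<REGION>.rds.amazonaws.com port=5432 user=<MASTER_USER> dbname=postgres sslmode=require"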
There are two approaches for setting up database permissions, depending on whether you want to use automatic database creation or manual database creation.
Option 1: Using autoCreateDBs Mode
If you plan to use autoCreateDBs: true in your Helm configuration, you only need to grant the CREATEDB permission. The required databases will be created automatically by zymtrace on startup.
-- Create IAM database user with CREATEDB permission
CREATE USER zymtrace_user LOGIN CREATEDB;
GRANT rds_iam TO zymtrace_user;
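Optionally, confirm the user came out as expected before continuing; a quick check run as the master user (host and master user are placeholders):
# Should show the CREATEDB attribute and membership in rds_iam
psql "host=your-cluster.cluster-xxxxx.<REGION>.rds.amazonaws.com user=<MASTER_USER> dbname=postgres sslmode=require" -c '\du zymtrace_user'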
Option 2: Manual Database Creation
If you prefer to create the databases manually (using autoCreateDBs: false), follow these steps:
-- Create IAM database user
CREATE USER zymtrace_user LOGIN;
GRANT rds_iam TO zymtrace_user;
-- Create the three required databases
CREATE DATABASE zymtrace_identity;
CREATE DATABASE zymtrace_symdb;
CREATE DATABASE zymtrace_web;
-- Grant all privileges on databases
GRANT ALL PRIVILEGES ON DATABASE zymtrace_identity TO zymtrace_user;
GRANT ALL PRIVILEGES ON DATABASE zymtrace_symdb TO zymtrace_user;
GRANT ALL PRIVILEGES ON DATABASE zymtrace_web TO zymtrace_user;
Grant schema privileges for each database:
-- For zymtrace_identity database
\c zymtrace_identity
GRANT ALL ON SCHEMA public TO zymtrace_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON TABLES TO zymtrace_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON SEQUENCES TO zymtrace_user;
-- Repeat for zymtrace_symdb and zymtrace_web
\c zymtrace_symdb
GRANT ALL ON SCHEMA public TO zymtrace_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON TABLES TO zymtrace_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON SEQUENCES TO zymtrace_user;
\c zymtrace_web
GRANT ALL ON SCHEMA public TO zymtrace_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON TABLES TO zymtrace_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON SEQUENCES TO zymtrace_user;
The database user needs ALL privileges on the schema for migrations to work properly. The ALTER DEFAULT PRIVILEGES commands ensure that future tables and sequences created by zymtrace will have the correct permissions.
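If you prefer to script the grants rather than run them interactively, a loop along these lines covers all three databases; the endpoint and master user below are placeholders:
# Apply the schema grants to each zymtrace database in turn
for db in zymtrace_identity zymtrace_symdb zymtrace_web; do
  psql "host=your-cluster.cluster-xxxxx.<REGION>.rds.amazonaws.com user=<MASTER_USER> dbname=$db sslmode=require" <<'SQL'
GRANT ALL ON SCHEMA public TO zymtrace_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON TABLES TO zymtrace_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON SEQUENCES TO zymtrace_user;
SQL
done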
Step 3: Create IAM Policy
Get your Aurora cluster resource ID:
aws rds describe-db-clusters \
--db-cluster-identifier your-aurora-cluster \
--query 'DBClusters[0].DbClusterResourceId' \
--output text
This outputs something like: cluster-ABCDEFGHIJKL01234
Create IAM policy file aurora-connect-policy.json:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "rds-db:connect",
      "Resource": "arn:aws:rds-db:<REGION>:<ACCOUNT_ID>:dbuser:<CLUSTER_RESOURCE_ID>/zymtrace_user"
    }
  ]
}
Replace:
- <REGION> with your AWS region (e.g., us-west-2)
- <ACCOUNT_ID> with your AWS account ID
- <CLUSTER_RESOURCE_ID> with the cluster resource ID from above
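If you'd rather script the substitution, the resource ARN can be assembled from the CLI (a sketch; the cluster identifier is a placeholder):
REGION=us-west-2
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
RESOURCE_ID=$(aws rds describe-db-clusters \
  --db-cluster-identifier your-aurora-cluster \
  --query 'DBClusters[0].DbClusterResourceId' --output text)
echo "arn:aws:rds-db:${REGION}:${ACCOUNT_ID}:dbuser:${RESOURCE_ID}/zymtrace_user"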
Create the policy:
aws iam create-policy \
--policy-name ZymtraceAuroraConnectPolicy \
--policy-document file://aurora-connect-policy.json
Step 4: Create IAM Role with OIDC Trust Policy
The OIDC role is what allows your Kubernetes pods to authenticate as an AWS IAM role without storing any AWS credentials. Here's how it works:
- Some zymtrace pods need to access Aurora with IAM authentication
- IAM authentication requires AWS credentials to generate database auth tokens
- OIDC is the secure bridge that lets Kubernetes service accounts assume AWS IAM roles
- The flow:
  - Pod uses the Kubernetes service account zymtrace-aurora-sa
  - Service account has an annotation pointing to the AWS IAM role
  - EKS OIDC provider tells AWS "this Kubernetes service account is allowed to assume this IAM role"
  - Pod gets temporary AWS credentials to generate RDS IAM auth tokens
  - Pod uses those tokens to connect to Aurora
Get your EKS cluster's OIDC provider:
aws eks describe-cluster \
--name your-cluster-name \
--query "cluster.identity.oidc.issuer" \
--output text
This outputs something like: https://oidc.eks.us-west-2.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE
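The role assumption in the next step only works if this issuer is registered as an IAM OIDC provider in your account; a quick sanity check (using the <OIDC_ID> portion of the URL):
aws iam list-open-id-connect-providers | grep <OIDC_ID>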
Create trust policy file trust-policy.json:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<ACCOUNT_ID>:oidc-provider/oidc.eks.<REGION>.amazonaws.com/id/<OIDC_ID>"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.<REGION>.amazonaws.com/id/<OIDC_ID>:sub": "system:serviceaccount:<NAMESPACE>:zymtrace-aurora-sa",
          "oidc.eks.<REGION>.amazonaws.com/id/<OIDC_ID>:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}
Replace:
- <ACCOUNT_ID> with your AWS account ID
- <REGION> with your AWS region (e.g., us-west-2)
- <OIDC_ID> with the ID from the OIDC issuer URL (the part after /id/)
- <NAMESPACE> with your Kubernetes namespace (e.g., zymtrace)
Create the IAM role:
# Create role
aws iam create-role \
--role-name zymtrace-aurora-role \
--assume-role-policy-document file://trust-policy.json
# Attach policy
aws iam attach-role-policy \
--role-name zymtrace-aurora-role \
--policy-arn arn:aws:iam::<ACCOUNT_ID>:policy/ZymtraceAuroraConnectPolicy
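Optionally, double-check that the role carries the expected trust policy and that the connect policy is attached:
aws iam get-role --role-name zymtrace-aurora-role \
  --query 'Role.AssumeRolePolicyDocument'
aws iam list-attached-role-policies --role-name zymtrace-aurora-role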
Step 5: Choose Authentication Method
You have two options for providing IAM credentials to the zymtrace pods:
- IRSA (Recommended)
- Node-Level IAM
IRSA (IAM Roles for Service Accounts)
This approach provides fine-grained, pod-level IAM permissions using Kubernetes service accounts.
Benefits:
- Granular permissions per service account
- Better security isolation
- Follows AWS best practices
Create the service account with IAM role annotation:
kubectl create serviceaccount zymtrace-aurora-sa -n <NAMESPACE>
kubectl annotate serviceaccount zymtrace-aurora-sa -n <NAMESPACE> \
eks.amazonaws.com/role-arn=arn:aws:iam::<ACCOUNT_ID>:role/zymtrace-aurora-role
Verify the service account:
kubectl get serviceaccount zymtrace-aurora-sa -n <NAMESPACE> -o yaml
Should show the annotation:
metadata:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::<ACCOUNT_ID>:role/zymtrace-aurora-role
In your Helm values, reference the service account:
postgres:
  mode: "aws_aurora"
  aws_aurora:
    host: "your-cluster.cluster-xxxxx.eu-west-2.rds.amazonaws.com"
    user: "zymtrace_user"
    database: "zymtrace"
    region: "eu-west-2"
    autoCreateDBs: false
    serviceAccount: "zymtrace-aurora-sa"
Node-Level IAM
This approach assigns the IAM role directly to your EKS worker nodes. All pods on those nodes inherit the same permissions.
This approach is simple and generally recommended for development/testing environments where all pods can share the same permissions.
Setup steps:
- Get your node group's IAM role:
aws eks describe-nodegroup \
--cluster-name <CLUSTER_NAME> \
--nodegroup-name <NODEGROUP_NAME> \
--query 'nodegroup.nodeRole' \
--output text
This outputs something like: arn:aws:iam::123456789012:role/eksctl-cluster-nodegroup-ng-NodeInstanceRole-ABC123
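The attach-role-policy command in the next step expects the role name rather than the full ARN; you can strip the prefix with a small shell helper (a sketch built on the command above):
NODE_ROLE_ARN=$(aws eks describe-nodegroup \
  --cluster-name <CLUSTER_NAME> \
  --nodegroup-name <NODEGROUP_NAME> \
  --query 'nodegroup.nodeRole' --output text)
NODE_ROLE_NAME=${NODE_ROLE_ARN##*/}
echo "$NODE_ROLE_NAME"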
- Attach the Aurora connect policy to the node role:
aws iam attach-role-policy \
--role-name <NODE_ROLE_NAME> \
--policy-arn arn:aws:iam::<ACCOUNT_ID>:policy/ZymtraceAuroraConnectPolicy
- In your Helm values, leave the serviceAccount field empty:
postgres:
  mode: "aws_aurora"
  aws_aurora:
    host: "your-cluster.cluster-xxxxx.eu-west-2.rds.amazonaws.com"
    user: "zymtrace_user"
    database: "zymtrace"
    region: "eu-west-2"
    autoCreateDBs: false
    serviceAccount: "" # Empty - pods will use node IAM role
This gives ALL pods on the worker nodes access to Aurora, not just zymtrace.
If you assigned the IAM role directly to your worker nodes (the Node-Level IAM option in Step 5), set serviceAccount: "" or omit it entirely. The pods will automatically use the node's IAM role to authenticate to Aurora.
Deploy zymtrace:
helm upgrade --install backend zymtrace/backend \
-f your-values.yaml \
--namespace <NAMESPACE>
Verification
Check the migration job logs:
kubectl logs -l app.kubernetes.io/component=migrate -n <NAMESPACE>
Check service pods:
kubectl get pods -n <NAMESPACE>
All pods should be running and, if you chose IRSA, using the zymtrace-aurora-sa service account.
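You can also exercise IAM authentication by hand from any machine (or pod) whose IAM identity carries the rds-db:connect permission from Step 3; a sketch, with the cluster endpoint as a placeholder:
# Generate a short-lived auth token and use it as the password (SSL is required for IAM auth)
TOKEN=$(aws rds generate-db-auth-token \
  --hostname your-cluster.cluster-xxxxx.<REGION>.rds.amazonaws.com \
  --port 5432 \
  --region <REGION> \
  --username zymtrace_user)
PGPASSWORD="$TOKEN" psql "host=your-cluster.cluster-xxxxx.<REGION>.rds.amazonaws.com port=5432 user=zymtrace_user dbname=zymtrace_web sslmode=require"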
Troubleshooting
Connection fails with "no password supplied"
- Verify service account exists and has IAM role annotation
- Check trust policy matches service account namespace
- Verify pod is using the correct service account:
kubectl get pod POD_NAME -o jsonpath='{.spec.serviceAccountName}'
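If IRSA is working, the EKS webhook injects web-identity environment variables into the pod; their absence usually points at a missing or mis-annotated service account:
kubectl get pod POD_NAME -n <NAMESPACE> -o yaml | grep -E 'AWS_ROLE_ARN|AWS_WEB_IDENTITY_TOKEN_FILE'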
IAM authentication failed
- Verify the database user has the rds_iam role granted:
SELECT r.rolname AS username, m.rolname AS granted_role
FROM pg_auth_members am
JOIN pg_roles r ON r.oid = am.member
JOIN pg_roles m ON m.oid = am.roleid
WHERE r.rolname = 'zymtrace_user';
- Check the IAM policy has the correct cluster resource ID
- Ensure OIDC provider is configured on EKS cluster