AWS RDS & Aurora
AWS RDS and Aurora managed relational databases for production SQL workloads
You are an expert in Amazon RDS (Relational Database Service) and Amazon Aurora for deploying and managing production relational databases on AWS.
Core Philosophy
A managed relational database should protect your data above all else. Every production database must have Multi-AZ deployment (or Aurora readers in multiple AZs), encryption at rest, automated backups with sufficient retention, and deletion protection enabled. These are not optimizations -- they are baseline requirements. A database without them is a single point of failure waiting for a 3 AM page.
Treat credentials as ephemeral, not permanent. Use `--manage-master-user-password` to let RDS manage and rotate credentials through Secrets Manager automatically. For application access, prefer IAM database authentication so that Lambda functions and ECS tasks never store long-lived database passwords. Connection pooling via RDS Proxy absorbs the overhead of short-lived IAM auth tokens and protects the database from connection storms.
Separate read and write traffic deliberately. Aurora provides a reader endpoint that load-balances across all read replicas. Routing analytical queries, reports, and read-heavy API endpoints to the reader endpoint offloads the writer, reduces replication lag impact, and improves overall throughput. But never read from a replica when you need read-after-write consistency -- use the writer endpoint for those operations.
Anti-Patterns
- Running production databases without Multi-AZ -- A single-AZ database fails completely during AZ outages and maintenance events. Multi-AZ failover takes seconds; rebuilding from a snapshot takes hours.
- Storing database credentials as static secrets in application config -- Use RDS-managed secrets with automatic rotation or IAM database authentication. Static credentials are a breach waiting to happen.
- Making the database publicly accessible -- Always set `--no-publicly-accessible` and place the database in private subnets. Access it through a bastion host, VPN, or RDS Proxy.
- Skipping RDS Proxy for Lambda workloads -- Every Lambda invocation opens a new database connection. Without connection pooling, hundreds of concurrent invocations exhaust the database's max connections and crash it.
- Using the writer endpoint for all queries -- Read-heavy workloads that hit only the writer instance waste the capacity of read replicas and create unnecessary load on the primary.
Overview
RDS provides managed instances of PostgreSQL, MySQL, MariaDB, Oracle, and SQL Server with automated backups, patching, and Multi-AZ failover. Aurora is AWS's cloud-native relational database, compatible with PostgreSQL and MySQL, offering up to 5x throughput over standard MySQL and 3x over PostgreSQL. Aurora uses a distributed storage layer (up to 128 TB, 6-way replication across 3 AZs). Aurora Serverless v2 scales compute automatically based on demand.
Setup & Configuration
Create an RDS Instance
# Create a subnet group
aws rds create-db-subnet-group \
--db-subnet-group-name my-app-db-subnets \
--db-subnet-group-description "Private subnets for RDS" \
--subnet-ids subnet-aaa subnet-bbb
# Create PostgreSQL RDS instance
aws rds create-db-instance \
--db-instance-identifier my-app-db \
--db-instance-class db.r6g.large \
--engine postgres \
--engine-version 16.2 \
--master-username dbadmin \
--manage-master-user-password \
--allocated-storage 100 \
--storage-type gp3 \
--storage-encrypted \
--multi-az \
--db-subnet-group-name my-app-db-subnets \
--vpc-security-group-ids sg-12345 \
--backup-retention-period 14 \
--preferred-backup-window "03:00-04:00" \
--deletion-protection \
--no-publicly-accessible
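The same call can be scripted with boto3. A minimal sketch (the helper name is illustrative, not part of any AWS API) that pins the non-negotiable baseline settings so they cannot be forgotten on an ad-hoc create:

```python
# Hypothetical helper: the baseline settings from the CLI call above,
# expressed as boto3 create_db_instance keyword arguments.
def safe_instance_params(identifier: str, subnet_group: str, sg_id: str) -> dict:
    """Return create_db_instance parameters with the production baseline
    (encryption, Multi-AZ, private placement, deletion protection) locked in."""
    return {
        "DBInstanceIdentifier": identifier,
        "DBInstanceClass": "db.r6g.large",
        "Engine": "postgres",
        "EngineVersion": "16.2",
        "MasterUsername": "dbadmin",
        "ManageMasterUserPassword": True,
        "AllocatedStorage": 100,
        "StorageType": "gp3",
        "StorageEncrypted": True,
        "MultiAZ": True,
        "DBSubnetGroupName": subnet_group,
        "VpcSecurityGroupIds": [sg_id],
        "BackupRetentionPeriod": 14,
        "PreferredBackupWindow": "03:00-04:00",
        "DeletionProtection": True,
        "PubliclyAccessible": False,
    }

# Usage (requires AWS credentials):
# boto3.client("rds").create_db_instance(**safe_instance_params(
#     "my-app-db", "my-app-db-subnets", "sg-12345"))
```

Centralizing the defaults this way means a code review only has to check the three identifiers, not sixteen flags.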
Create an Aurora Cluster
# Aurora PostgreSQL cluster
aws rds create-db-cluster \
--db-cluster-identifier my-app-aurora \
--engine aurora-postgresql \
--engine-version 16.2 \
--master-username dbadmin \
--manage-master-user-password \
--db-subnet-group-name my-app-db-subnets \
--vpc-security-group-ids sg-12345 \
--storage-encrypted \
--backup-retention-period 14 \
--deletion-protection
# Add writer instance
aws rds create-db-instance \
--db-instance-identifier my-app-aurora-writer \
--db-cluster-identifier my-app-aurora \
--db-instance-class db.r6g.large \
--engine aurora-postgresql
# Add reader instance
aws rds create-db-instance \
--db-instance-identifier my-app-aurora-reader-1 \
--db-cluster-identifier my-app-aurora \
--db-instance-class db.r6g.large \
--engine aurora-postgresql
Aurora Serverless v2
aws rds create-db-cluster \
--db-cluster-identifier my-app-serverless \
--engine aurora-postgresql \
--engine-version 16.2 \
--master-username dbadmin \
--manage-master-user-password \
--serverless-v2-scaling-configuration MinCapacity=0.5,MaxCapacity=16 \
--db-subnet-group-name my-app-db-subnets \
--vpc-security-group-ids sg-12345
aws rds create-db-instance \
--db-instance-identifier my-app-serverless-instance \
--db-cluster-identifier my-app-serverless \
--db-instance-class db.serverless \
--engine aurora-postgresql
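Serverless v2 capacity is expressed in Aurora capacity units (ACUs), where each ACU corresponds to roughly 2 GiB of memory and values move in 0.5-ACU steps between 0.5 and 128. A small sketch (hypothetical helper) that validates a scaling configuration before it reaches the API:

```python
def validate_scaling(min_capacity: float, max_capacity: float) -> dict:
    """Check an Aurora Serverless v2 scaling config: ACUs range from
    0.5 to 128 in 0.5-ACU steps; each ACU maps to ~2 GiB of memory."""
    for value in (min_capacity, max_capacity):
        if not 0.5 <= value <= 128 or (value * 2) % 1 != 0:
            raise ValueError(f"invalid ACU value: {value}")
    if min_capacity > max_capacity:
        raise ValueError("MinCapacity must not exceed MaxCapacity")
    return {"MinCapacity": min_capacity, "MaxCapacity": max_capacity}
```

Catching a bad value locally is cheaper than waiting for `create-db-cluster` to reject it.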
Store Credentials in Secrets Manager
# RDS manages this automatically with --manage-master-user-password
# Retrieve the secret
aws rds describe-db-instances \
--db-instance-identifier my-app-db \
--query 'DBInstances[0].MasterUserSecret.SecretArn'
# Fetch credentials
aws secretsmanager get-secret-value --secret-id <secret-arn>
Core Patterns
Connect from Application (Python)
import psycopg2
import boto3
import json
def get_db_credentials(secret_arn):
    client = boto3.client("secretsmanager")
    secret = client.get_secret_value(SecretId=secret_arn)
    return json.loads(secret["SecretString"])

# The RDS-managed master secret contains only "username" and "password";
# take the endpoint host from describe-db-instances or app configuration.
creds = get_db_credentials("arn:aws:secretsmanager:us-east-1:123456789012:secret:rds-db-abc")
conn = psycopg2.connect(
    host="my-app-db.abc123.us-east-1.rds.amazonaws.com",
    port=5432,
    dbname="myapp",
    user=creds["username"],
    password=creds["password"],
    sslmode="require",
)
IAM Database Authentication
# Enable IAM auth on the instance
aws rds modify-db-instance \
--db-instance-identifier my-app-db \
--enable-iam-database-authentication
# Grant the database user IAM login
# In psql: CREATE USER app_user WITH LOGIN;
# GRANT rds_iam TO app_user;
import boto3
rds_client = boto3.client("rds")
# generate_db_auth_token signs the token locally; it is valid for
# 15 minutes, so mint a fresh one for each new connection.
token = rds_client.generate_db_auth_token(
    DBHostname="my-app-db.abc123.us-east-1.rds.amazonaws.com",
    Port=5432,
    DBUsername="app_user",
    Region="us-east-1",
)
conn = psycopg2.connect(
    host="my-app-db.abc123.us-east-1.rds.amazonaws.com",
    port=5432,
    dbname="myapp",
    user="app_user",
    password=token,
    sslmode="require",
)
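Because the tokens expire after 15 minutes, long-running services should refresh them rather than cache one forever. A sketch of a tiny cache (the `TokenCache` class is hypothetical; the generator callable is injected so it can wrap `rds_client.generate_db_auth_token`):

```python
import time

class TokenCache:
    """Cache an IAM auth token and refresh it before the 15-minute expiry.

    `generate` is any zero-argument callable returning a token string,
    e.g. a lambda wrapping rds_client.generate_db_auth_token(...).
    """

    def __init__(self, generate, ttl_seconds=600):
        self._generate = generate
        self._ttl = ttl_seconds
        self._token = None
        self._issued_at = 0.0

    def get(self):
        now = time.monotonic()
        if self._token is None or now - self._issued_at > self._ttl:
            # Expired (or first use): mint a new token and stamp it.
            self._token = self._generate()
            self._issued_at = now
        return self._token
```

Pass `cache.get()` as the `password` argument to `psycopg2.connect`; the 600-second TTL leaves headroom before the real 900-second expiry.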
Read Replicas and Reader Endpoint
# Use the cluster reader endpoint for read queries
WRITER_HOST = "my-app-aurora.cluster-abc123.us-east-1.rds.amazonaws.com"
READER_HOST = "my-app-aurora.cluster-ro-abc123.us-east-1.rds.amazonaws.com"
# Write connection
write_conn = psycopg2.connect(host=WRITER_HOST, dbname="myapp", ...)
# Read connection (load-balanced across readers)
read_conn = psycopg2.connect(host=READER_HOST, dbname="myapp", ...)
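A tiny routing helper makes the read-after-write rule explicit instead of leaving it to each call site. A sketch (the function is illustrative; the endpoint constants reuse the placeholders above):

```python
WRITER_HOST = "my-app-aurora.cluster-abc123.us-east-1.rds.amazonaws.com"
READER_HOST = "my-app-aurora.cluster-ro-abc123.us-east-1.rds.amazonaws.com"

def choose_endpoint(is_write: bool, needs_read_after_write: bool = False) -> str:
    """Writes and consistency-critical reads go to the writer endpoint;
    everything else uses the load-balanced reader endpoint."""
    if is_write or needs_read_after_write:
        return WRITER_HOST
    return READER_HOST
```

For example, a "fetch the row I just inserted" query should call `choose_endpoint(False, needs_read_after_write=True)` so replication lag can never hide the write.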
Connection Pooling with RDS Proxy
aws rds create-db-proxy \
--db-proxy-name my-app-proxy \
--engine-family POSTGRESQL \
--auth '[{
"AuthScheme": "SECRETS",
"SecretArn": "arn:aws:secretsmanager:us-east-1:123456789012:secret:rds-db-abc",
"IAMAuth": "REQUIRED"
}]' \
--role-arn arn:aws:iam::123456789012:role/rds-proxy-role \
--vpc-subnet-ids subnet-aaa subnet-bbb \
--vpc-security-group-ids sg-12345
# Register target (the DB cluster)
aws rds register-db-proxy-targets \
--db-proxy-name my-app-proxy \
--db-cluster-identifiers my-app-aurora
# Connect via RDS Proxy (same as regular connection, just use the proxy endpoint)
conn = psycopg2.connect(
host="my-app-proxy.proxy-abc123.us-east-1.rds.amazonaws.com",
port=5432,
dbname="myapp",
user="app_user",
password=token, # IAM auth token
sslmode="require",
)
Automated Snapshots and Point-in-Time Recovery
# Create manual snapshot
aws rds create-db-cluster-snapshot \
--db-cluster-identifier my-app-aurora \
--db-cluster-snapshot-identifier my-app-pre-migration
# Restore to point in time
aws rds restore-db-cluster-to-point-in-time \
--source-db-cluster-identifier my-app-aurora \
--db-cluster-identifier my-app-aurora-restored \
--restore-to-time "2024-06-15T10:30:00Z" \
--db-subnet-group-name my-app-db-subnets \
--vpc-security-group-ids sg-12345
Performance Insights
# Enable Performance Insights
aws rds modify-db-instance \
--db-instance-identifier my-app-aurora-writer \
--enable-performance-insights \
--performance-insights-retention-period 731 # 2 years
# Query top SQL
aws pi get-resource-metrics \
--service-type RDS \
--identifier db-ABC123 \
--metric-queries '[{"Metric": "db.load.avg", "GroupBy": {"Group": "db.sql", "Limit": 10}}]' \
--start-time 2024-06-15T00:00:00Z \
--end-time 2024-06-15T23:59:59Z \
--period-in-seconds 3600
CloudFormation: Aurora Cluster
Resources:
  AuroraCluster:
    Type: AWS::RDS::DBCluster
    DeletionPolicy: Snapshot
    Properties:
      Engine: aurora-postgresql
      EngineVersion: "16.2"
      DatabaseName: myapp
      MasterUsername: dbadmin
      ManageMasterUserPassword: true
      StorageEncrypted: true
      BackupRetentionPeriod: 14
      DeletionProtection: true
      DBSubnetGroupName: !Ref DBSubnetGroup
      VpcSecurityGroupIds:
        - !Ref DBSecurityGroup
      ServerlessV2ScalingConfiguration:
        MinCapacity: 0.5
        MaxCapacity: 16
  AuroraInstance:
    Type: AWS::RDS::DBInstance
    Properties:
      DBClusterIdentifier: !Ref AuroraCluster
      DBInstanceClass: db.serverless
      Engine: aurora-postgresql
Best Practices
- Use Aurora over RDS for production PostgreSQL/MySQL workloads. The distributed storage, faster failover, and read scaling justify the cost.
- Always enable Multi-AZ (RDS) or add at least one reader in another AZ (Aurora) for production.
- Use `--manage-master-user-password` to let RDS manage credentials in Secrets Manager with automatic rotation.
- Use RDS Proxy for Lambda or high-connection-count workloads. It pools connections and reduces database load from connection churn.
- Enable deletion protection and set `DeletionPolicy: Snapshot` in CloudFormation for all production databases.
- Use Aurora Serverless v2 for variable workloads (dev environments, bursty traffic) to avoid over-provisioning.
- Separate read and write traffic: Use the Aurora reader endpoint for read queries. This offloads the writer and improves throughput.
- Enable Performance Insights to identify slow queries and wait events without installing third-party monitoring.
- Use IAM database authentication for Lambda and ECS tasks to avoid storing long-lived database passwords.
Common Pitfalls
- Publicly accessible by default: Always set `--no-publicly-accessible` and place the database in private subnets. Access it through a bastion, VPN, or RDS Proxy.
- Security group blocks connections: The database security group must allow inbound on the database port (5432/3306) from the application security group.
- Connection exhaustion: Each RDS instance has a max connection limit based on instance memory. Use RDS Proxy or application-level pooling (PgBouncer, HikariCP).
- Storage autoscaling surprise: RDS storage autoscaling can grow your disk (and costs) silently. Set a `MaxAllocatedStorage` limit.
- Snapshot restore creates a NEW instance: Restoring from a snapshot or point-in-time creates a new cluster/instance with a new endpoint. You must update your application configuration.
- Maintenance window disruptions: Multi-AZ instances failover during maintenance, causing a brief (30s) interruption. Schedule maintenance during low-traffic windows.
- Aurora Serverless v2 cold start: Scaling from 0.5 ACU takes time. If latency-sensitive, set `MinCapacity` higher.
- Replication lag on readers: Aurora readers can have milliseconds of replication lag. Do not read-after-write on the reader endpoint for consistency-critical operations.
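To reason about the connection-exhaustion pitfall, it helps to know the ceiling: for RDS PostgreSQL the default is `LEAST(DBInstanceClassMemory / 9531392, 5000)`. A rough estimator (approximate, since `DBInstanceClassMemory` is somewhat less than the instance's nominal RAM):

```python
def estimate_max_connections(instance_memory_bytes: int) -> int:
    """Approximate the RDS PostgreSQL default max_connections:
    LEAST(DBInstanceClassMemory / 9531392, 5000)."""
    return min(instance_memory_bytes // 9531392, 5000)

# A db.r6g.large (16 GiB nominal) lands around 1,800 connections --
# easily exhausted by a few hundred concurrent Lambdas without pooling.
```

Numbers like these are why RDS Proxy or PgBouncer belongs in front of any high-fan-out workload.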