
SST (Serverless Stack)

SST framework expertise — infrastructure as code for AWS, Lambda functions, Next.js/Remix/Astro deployment, constructs, live Lambda development, and the SST Console


Core Philosophy

SST is an infrastructure-as-code framework that makes it easy to build full-stack applications on AWS. It provides high-level constructs that compose AWS services (Lambda, API Gateway, S3, DynamoDB, RDS) into patterns developers actually use. The killer feature is Live Lambda Dev — write a Lambda function, save the file, and see changes reflected instantly without redeploying. SST supports deploying Next.js, Remix, Astro, and SolidStart as serverless applications on AWS. Think of it as the missing developer experience layer on top of AWS CDK.

Setup

Project Initialization

// Create a new SST project
// $ npx create-sst@latest my-app
// $ cd my-app && npm install

// sst.config.ts — root configuration
import type { SSTConfig } from "sst";
import { API } from "./stacks/API";
import { Web } from "./stacks/Web";
import { Database } from "./stacks/Database";
import { Storage } from "./stacks/Storage";

export default {
  config(_input) {
    return {
      name: "my-app",
      region: "us-east-1",
    };
  },
  stacks(app) {
    // Remove all resources when not in production
    if (app.stage !== "production") {
      app.setDefaultRemovalPolicy("destroy");
    }

    app
      .stack(Database)
      .stack(Storage)
      .stack(API)
      .stack(Web);
  },
} satisfies SSTConfig;

Live Lambda Development

// Start Live Lambda Dev — connects local code to deployed AWS resources
// $ npx sst dev

// How it works:
// 1. SST deploys a stub Lambda to AWS
// 2. The stub forwards invocations to your local machine via WebSocket
// 3. Your local code runs with real AWS permissions and event payloads
// 4. Changes are reflected instantly — no redeploy needed

// This means you can:
// - Set breakpoints in your IDE
// - See console.log output locally
// - Hot-reload function code
// - Test with real DynamoDB, S3, SQS events
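Because SST handlers are plain exported async functions, they can also be exercised directly with a synthetic event in a unit test, no deployment or stub required. A minimal sketch (the handler and event shape below are illustrative, trimmed to the fields used):

```typescript
// A minimal handler shaped like the SST examples in this skill (logic is illustrative)
type MinimalEvent = { queryStringParameters?: Record<string, string> };
type MinimalResult = { statusCode: number; body: string };

const handler = async (event: MinimalEvent): Promise<MinimalResult> => {
  // Parse the pagination limit from the query string, defaulting to 20
  const limit = parseInt(event.queryStringParameters?.limit ?? "20", 10);
  return { statusCode: 200, body: JSON.stringify({ limit }) };
};

// Invoke it directly with a synthetic event, exactly as a unit test would
handler({ queryStringParameters: { limit: "5" } }).then((res) => {
  console.log(res.statusCode, res.body);
});
```

Live Lambda Dev complements this: the same function runs locally against real AWS event payloads, so breakpoints and direct invocation both work during development.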

Key Techniques

API Routes with Lambda

// stacks/API.ts — define the API infrastructure
import { StackContext, Api, use } from "sst/constructs";
import { Database } from "./Database";

export function API({ stack }: StackContext) {
  const { table } = use(Database);

  const api = new Api(stack, "Api", {
    defaults: {
      function: {
        bind: [table],
        timeout: "30 seconds",
        memorySize: "512 MB",
        runtime: "nodejs20.x",
      },
    },
    routes: {
      "GET    /users":       "packages/functions/src/users/list.handler",
      "GET    /users/{id}":  "packages/functions/src/users/get.handler",
      "POST   /users":       "packages/functions/src/users/create.handler",
      "PUT    /users/{id}":  "packages/functions/src/users/update.handler",
      "DELETE /users/{id}":  "packages/functions/src/users/delete.handler",
    },
  });

  stack.addOutputs({
    ApiEndpoint: api.url,
  });

  return { api };
}
// packages/functions/src/users/list.ts — Lambda function handler
import { APIGatewayProxyHandlerV2 } from "aws-lambda";
import { Table } from "sst/node/table";
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, ScanCommand } from "@aws-sdk/lib-dynamodb";

const client = DynamoDBDocumentClient.from(new DynamoDBClient({}));

export const handler: APIGatewayProxyHandlerV2 = async (event) => {
  const limit = parseInt(event.queryStringParameters?.limit ?? "20", 10);
  const lastKey = event.queryStringParameters?.cursor;

  const result = await client.send(
    new ScanCommand({
      TableName: Table.Users.tableName,
      Limit: limit,
      ExclusiveStartKey: lastKey ? JSON.parse(atob(lastKey)) : undefined,
    })
  );

  return {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      users: result.Items,
      cursor: result.LastEvaluatedKey
        ? btoa(JSON.stringify(result.LastEvaluatedKey))
        : null,
    }),
  };
};
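The opaque `cursor` returned above is just the base64-encoded `LastEvaluatedKey`. A sketch of the round-trip as standalone helpers (the helper names are ours, not SST's; `btoa`/`atob` are globals in the Node 18+ Lambda runtimes):

```typescript
// Encode DynamoDB's LastEvaluatedKey as an opaque cursor for the client.
// Note: base64 contains "+", "/" and "=", so URL-encode it in query strings.
function encodeCursor(lastKey: Record<string, unknown>): string {
  return btoa(JSON.stringify(lastKey));
}

// Decode a client cursor back into an ExclusiveStartKey for the next Scan/Query
function decodeCursor(cursor: string): Record<string, unknown> {
  return JSON.parse(atob(cursor));
}
```

Keeping the cursor opaque means the client never depends on your key schema, so the table layout can change without breaking API consumers.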

DynamoDB Table Construct

// stacks/Database.ts — define DynamoDB tables
import { StackContext, Table } from "sst/constructs";

export function Database({ stack }: StackContext) {
  const table = new Table(stack, "Users", {
    fields: {
      pk: "string",
      sk: "string",
      gsi1pk: "string",
      gsi1sk: "string",
    },
    primaryIndex: { partitionKey: "pk", sortKey: "sk" },
    globalIndexes: {
      gsi1: { partitionKey: "gsi1pk", sortKey: "gsi1sk" },
    },
    stream: "new_and_old_images",
    consumers: {
      consumer: "packages/functions/src/events/user-stream.handler",
    },
  });

  return { table };
}
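The generic `pk`/`sk`/`gsi1` fields above suggest a single-table layout. One way to keep key formats consistent across handlers is a small set of key-builder helpers; the entity prefixes below are illustrative, not part of SST:

```typescript
// Key builders for a single-table design (prefixes are illustrative)
const userKeys = {
  pk: (userId: string) => `USER#${userId}`,
  sk: () => "PROFILE",
  // GSI1 supports lookups by email; normalize case so lookups are stable
  gsi1pk: (email: string) => `EMAIL#${email.toLowerCase()}`,
  gsi1sk: () => "USER",
};
```

Centralizing key construction avoids the subtle bugs that appear when different handlers format the same key slightly differently.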

S3 Bucket and File Uploads

// stacks/Storage.ts — S3 bucket with presigned URL pattern
import { StackContext, Bucket } from "sst/constructs";

export function Storage({ stack }: StackContext) {
  const bucket = new Bucket(stack, "Uploads", {
    cors: [
      {
        allowedMethods: ["GET", "PUT"],
        allowedOrigins: ["*"],
        allowedHeaders: ["*"],
      },
    ],
    notifications: {
      processUpload: {
        function: "packages/functions/src/storage/process.handler",
        events: ["object_created"],
      },
    },
  });

  return { bucket };
}

// packages/functions/src/storage/presign.ts — generate upload URL
import { APIGatewayProxyHandlerV2 } from "aws-lambda";
import { Bucket } from "sst/node/bucket";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({});

export const handler: APIGatewayProxyHandlerV2 = async (event) => {
  const { filename, contentType } = JSON.parse(event.body ?? "{}");
  const key = `uploads/${Date.now()}-${filename}`;

  const url = await getSignedUrl(
    s3,
    new PutObjectCommand({
      Bucket: Bucket.Uploads.bucketName,
      Key: key,
      ContentType: contentType,
    }),
    { expiresIn: 3600 }
  );

  return {
    statusCode: 200,
    body: JSON.stringify({ url, key }),
  };
};
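On the client, the presigned URL is used in two steps: request it from your API, then PUT the file bytes directly to S3. A sketch with an injectable fetch so it stays testable; the `/presign` route path and helper name are assumptions, not part of SST:

```typescript
// Minimal fetch shape this helper needs (subset of the standard Fetch API)
type FetchLike = (
  url: string,
  init?: { method?: string; body?: unknown; headers?: Record<string, string> }
) => Promise<{ ok: boolean; json(): Promise<unknown> }>;

async function uploadViaPresignedUrl(
  apiBase: string,
  file: { name: string; type: string; bytes: Uint8Array },
  fetchImpl: FetchLike = fetch as unknown as FetchLike
): Promise<string> {
  // 1. Ask our API for a presigned PUT URL (route path is an assumption)
  const res = await fetchImpl(`${apiBase}/presign`, {
    method: "POST",
    body: JSON.stringify({ filename: file.name, contentType: file.type }),
  });
  const { url, key } = (await res.json()) as { url: string; key: string };

  // 2. PUT the bytes straight to S3; Content-Type must match what was signed
  const put = await fetchImpl(url, {
    method: "PUT",
    body: file.bytes,
    headers: { "Content-Type": file.type },
  });
  if (!put.ok) throw new Error(`upload failed for ${key}`);
  return key;
}
```

Uploading directly to S3 keeps large payloads off your Lambda functions, which are limited to 6 MB request bodies through API Gateway.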

Next.js Deployment on AWS

// stacks/Web.ts — deploy Next.js as a serverless app on AWS
import { StackContext, NextjsSite, use } from "sst/constructs";
import { API } from "./API";

export function Web({ stack }: StackContext) {
  const { api } = use(API);

  const site = new NextjsSite(stack, "Web", {
    path: "packages/web",
    environment: {
      NEXT_PUBLIC_API_URL: api.url,
      NEXT_PUBLIC_REGION: stack.region,
    },
    customDomain: stack.stage === "production"
      ? {
          domainName: "myapp.com",
          domainAlias: "www.myapp.com",
        }
      : undefined,
    memorySize: "1024 MB",
    timeout: "30 seconds",
  });

  stack.addOutputs({
    SiteUrl: site.customDomainUrl ?? site.url ?? "localhost",
  });

  return { site };
}

// Also supports:
// new RemixSite(stack, "Remix", { path: "packages/remix-app" });
// new AstroSite(stack, "Astro", { path: "packages/astro-app" });
// new SolidStartSite(stack, "Solid", { path: "packages/solid-app" });
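Inside the Next.js app, the injected `NEXT_PUBLIC_API_URL` is inlined at build time and read from `process.env`. A minimal URL-building sketch (the helper name and trailing-slash handling are ours):

```typescript
// Build a full API URL from the environment-injected base (NEXT_PUBLIC_* vars
// are inlined into the client bundle by Next.js at build time)
function apiUrl(path: string, base = process.env.NEXT_PUBLIC_API_URL): string {
  if (!base) throw new Error("NEXT_PUBLIC_API_URL is not set");
  // Strip any trailing slashes so we never emit double slashes
  return `${base.replace(/\/+$/, "")}${path}`;
}
```

Failing fast when the variable is missing surfaces misconfigured stages at startup rather than as broken requests later.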

Authentication

// stacks/Auth.ts — SST Auth construct
import { StackContext, Auth, use } from "sst/constructs";
import { API } from "./API";

export function AuthStack({ stack }: StackContext) {
  const { api } = use(API);

  const auth = new Auth(stack, "Auth", {
    authenticator: {
      handler: "packages/functions/src/auth/authenticator.handler",
    },
  });

  auth.attach(stack, { api });
  return { auth };
}

// packages/functions/src/auth/authenticator.ts
import { AuthHandler, GoogleAdapter, Session } from "sst/node/auth";

export const handler = AuthHandler({
  providers: {
    google: GoogleAdapter({
      mode: "oidc",
      clientID: process.env.GOOGLE_CLIENT_ID!,
      onSuccess: async (tokenset) => {
        const claims = tokenset.claims();

        return Session.parameter({
          redirect: process.env.SITE_URL!,
          type: "user",
          properties: {
            userId: claims.sub,
            email: claims.email!,
          },
        });
      },
    }),
  },
});
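API functions can then check the session on each request (in SST v2, `useSession()` from `sst/node/auth` returns the decoded session inside handlers wrapped with `ApiHandler`). A guard like the one below keeps handlers terse; the session union here is modeled locally for illustration, since real apps declare it via `SessionTypes` augmentation:

```typescript
// Session union modeled locally for illustration; in a real SST app this
// shape comes from your SessionTypes declaration in sst/node/auth
type SessionLike =
  | { type: "public" }
  | { type: "user"; properties: { userId: string; email: string } };

// Narrow to an authenticated user session or fail fast with a 401-style error
function requireUser(session: SessionLike): { userId: string; email: string } {
  if (session.type !== "user") {
    throw new Error("401: not authenticated");
  }
  return session.properties;
}
```

Centralizing the check means every protected route rejects anonymous sessions the same way instead of each handler re-implementing the guard.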

Queue Processing

// stacks/Queue.ts — SQS queue with Lambda consumer
import { StackContext, Queue, use } from "sst/constructs";
import { Duration } from "aws-cdk-lib";
import { Database } from "./Database";

export function QueueStack({ stack }: StackContext) {
  const { table } = use(Database);

  const queue = new Queue(stack, "ProcessingQueue", {
    consumer: {
      function: {
        handler: "packages/functions/src/queues/process.handler",
        bind: [table],
        timeout: "5 minutes",
      },
      cdk: {
        // cdk.eventSource takes raw CDK props, so durations use Duration, not strings
        eventSource: {
          batchSize: 10,
          maxBatchingWindow: Duration.seconds(30),
        },
      },
    },
  });

  return { queue };
}

// packages/functions/src/queues/process.ts
import { SQSHandler } from "aws-lambda";
import { Table } from "sst/node/table";
import { DynamoDBDocumentClient, PutCommand } from "@aws-sdk/lib-dynamodb";
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";

const client = DynamoDBDocumentClient.from(new DynamoDBClient({}));

export const handler: SQSHandler = async (event) => {
  for (const record of event.Records) {
    const payload = JSON.parse(record.body);
    console.log(`Processing job: ${payload.jobId}`);

    // processJob is application-specific and not shown in this skill
    const result = await processJob(payload);

    await client.send(
      new PutCommand({
        TableName: Table.Users.tableName,
        Item: {
          pk: `JOB#${payload.jobId}`,
          sk: "RESULT",
          ...result,
          processedAt: new Date().toISOString(),
        },
      })
    );
  }
};
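One refinement over the loop above: when `reportBatchItemFailures` is enabled on the event source, the handler can return only the failed message IDs so that successfully processed messages are not redelivered with the batch. A sketch with the SQS types trimmed to the fields used and the per-record work injected as a parameter:

```typescript
// Shapes trimmed from aws-lambda's SQS event and batch-response types
type SqsRecord = { messageId: string; body: string };
type BatchResponse = { batchItemFailures: { itemIdentifier: string }[] };

// Process each record independently; report only the failures so SQS
// redelivers just those messages (requires reportBatchItemFailures)
async function handleBatch(
  records: SqsRecord[],
  processOne: (payload: unknown) => Promise<void>
): Promise<BatchResponse> {
  const batchItemFailures: { itemIdentifier: string }[] = [];
  for (const record of records) {
    try {
      await processOne(JSON.parse(record.body));
    } catch {
      batchItemFailures.push({ itemIdentifier: record.messageId });
    }
  }
  return { batchItemFailures };
}
```

Without this pattern, one poison message fails the whole batch and forces every message in it through another delivery cycle.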

SST Console

// The SST Console provides a web-based dashboard for your app
// $ npx sst console

// Features:
// - View and search Lambda logs in real-time
// - Browse DynamoDB tables, S3 buckets, and queues
// - Invoke Lambda functions with custom payloads
// - View API Gateway routes and test endpoints
// - Monitor CloudWatch metrics

// Deploy to different stages
// $ npx sst deploy --stage staging
// $ npx sst deploy --stage production

// Remove a stage
// $ npx sst remove --stage dev

Best Practices

  • Use bind to connect resources — instead of hardcoding ARNs or table names, bind resources to functions. SST injects the right values and IAM permissions automatically.
  • Leverage Live Lambda Dev daily — run sst dev during development for instant feedback. It is dramatically faster than redeploying on every change.
  • Separate stacks by domain — organize infrastructure into stacks (Database, API, Web, Auth) that compose via use(). This keeps configurations manageable.
  • Use stages for environments — deploy sst deploy --stage staging for isolated environments. Each stage gets its own CloudFormation stack and resources.
  • Set setDefaultRemovalPolicy("destroy") for dev stages — avoid accumulating orphaned AWS resources during development. Retain resources only in production stages.
  • Use the Config construct for secrets — store sensitive values with new Config.Secret(stack, "STRIPE_KEY") and bind them to functions. Values are encrypted in SSM Parameter Store.
  • Enable function bundling optimizations — SST uses esbuild by default. Keep functions focused and small to minimize cold starts.
  • Use customDomain for production — configure Route53 domains directly in SST constructs for automatic SSL and DNS setup.

Anti-Patterns

  • Putting all resources in one stack — large stacks are slow to deploy and hard to reason about. Split by domain boundary using separate stack files.
  • Hardcoding AWS resource names — always use SST's resource binding (Table.Users.tableName, Bucket.Uploads.bucketName). Hardcoded names break across stages.
  • Skipping Live Lambda Dev — deploying after every change wastes minutes per iteration. Use sst dev for sub-second feedback during development.
  • Using * IAM permissions — SST auto-generates least-privilege IAM policies when you use bind. Do not override with broad permissions.
  • Deploying to production from a local machine — use CI/CD (GitHub Actions, etc.) for production deploys. Local deploys risk inconsistent state.
  • Not setting function timeouts — SST's default function timeout is 10 seconds. Set explicit timeouts based on expected execution time to avoid hanging functions.
  • Ignoring cold starts — keep Lambda bundles small, use provisioned concurrency for latency-sensitive paths, and consider warming strategies for critical functions.
  • Mixing SST and raw CDK unnecessarily — SST constructs handle most patterns. Drop to raw CDK only when SST does not have a construct for your use case.
