
# WASM Serverside

Running WebAssembly on the server with Wasmtime, Spin, WasmEdge, and other server-side runtimes

## Quick Summary
You are an expert in building server-side WebAssembly applications.

## Key Points

- **Use AOT compilation for production** — `wasmtime compile` pre-compiles modules to native code, eliminating JIT latency. Cold starts drop from milliseconds to microseconds.
- **Limit resource consumption** — configure fuel limits (`Store::set_fuel`), memory caps, and execution timeouts to prevent runaway modules from consuming host resources.
- **Use the component model for polyglot systems** — WIT interfaces let you compose modules across languages. A Rust core can call a Python ML model through typed interfaces.
- **Keep modules small** — server-side Wasm benefits most when modules are focused. A 100KB Wasm module starts in under 1ms; a 10MB module may take 50ms.
- **Pin runtime versions in production** — WASI and the component model are evolving. Pin to specific runtime versions to avoid breaking changes.
- **Don't assume full POSIX compatibility** — WASI is not POSIX. Many syscalls (`mmap`, signals, `fork`, `exec`) are not available, and code relying on them will fail to compile or trap at runtime.
- **Configure pre-opens** — WASI modules have no filesystem access by default. Forgetting to pass `--dir` or `preopened_dir()` causes all file operations to fail.
- **Measure cold starts in benchmarks** — first-invocation time includes compilation. Always measure both cold and warm execution paths.
- **Handle fuel exhaustion** — when fuel runs out, execution traps with `OutOfFuel`. Callers must handle this error and decide whether to retry with more fuel or reject the request.
- **Plan for networking gaps in WASI Preview 1** — there is no standard networking in Preview 1. Use runtime-specific extensions (WasmEdge HTTP plugin, Spin's outbound HTTP) or move to Preview 2.

## Quick Example

```bash
spin build
spin up  # starts on http://127.0.0.1:3000
```

```bash
# Install WasmEdge with HTTP plugin
curl -sSf https://raw.githubusercontent.com/WasmEdge/WasmEdge/master/utils/install.sh | bash -s -- --plugins wasmedge_httpsreq
```

# Server-Side Wasm — WebAssembly

You are an expert in building server-side WebAssembly applications.

## Overview

Server-side WebAssembly runs Wasm modules outside the browser using standalone runtimes like Wasmtime, Wasmer, and WasmEdge. This enables polyglot microservices, edge computing, plugin systems, and serverless functions with strong sandboxing, fast cold starts (microseconds vs. milliseconds for containers), and a fraction of the memory footprint of traditional VMs or containers. Frameworks like Fermyon Spin and wasmCloud build on these runtimes to provide full application platforms.

## Core Concepts

### Runtimes

| Runtime | Focus | Key Features |
| --- | --- | --- |
| Wasmtime | Reference implementation | WASI P1 & P2, component model, Cranelift JIT/AOT |
| Wasmer | Universal runtime | Multiple backends (Cranelift, LLVM, Singlepass), WAPM registry |
| WasmEdge | Cloud-native and edge | TensorFlow, HTTP, and KV extensions; CNCF project |
| wazero | Go-native, zero dependencies | Pure Go, no CGo, WASI Preview 1 |

### Application Frameworks

| Framework | Description |
| --- | --- |
| Fermyon Spin | HTTP trigger-based serverless framework for Wasm |
| wasmCloud | Actor-model platform for distributed Wasm apps |
| Fastly Compute | Edge computing platform running Wasm at CDN nodes |

### Component Model

The WebAssembly Component Model defines how Wasm modules compose. Components use WIT (Wasm Interface Types) to declare typed imports and exports, enabling language-agnostic interoperability. A Rust component can call a Go component through WIT interfaces without either knowing the other's language.
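As an illustrative sketch, a WIT definition for a host/plugin contract might look like the following. The package, interface, and world names here are invented for this example, not taken from any shipped API:

```wit
// plugin.wit — hypothetical interface between a host and its plugins
package example:plugin;

// Functions the host provides to the guest
interface logger {
  log: func(message: string);
}

// The shape of a plugin component: it imports logging, exports an entry point
world plugin {
  import logger;
  export run: func();
}
```

Tools such as `wit-bindgen` generate language-specific bindings from a file like this, so a Rust host and a Go guest each compile against the same typed contract.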

## Implementation Patterns

### Wasmtime CLI

```bash
# Run a WASI module
wasmtime run app.wasm -- arg1 arg2

# Grant filesystem access
wasmtime run --dir ./data::/data app.wasm

# Grant network access (Preview 2)
wasmtime run --wasi tcp app.wasm

# AOT compile for faster startup
wasmtime compile app.wasm -o app.cwasm
wasmtime run --allow-precompiled app.cwasm
```

### Embedding Wasmtime in Rust

```rust
use wasmtime::*;
use wasmtime_wasi::preview1::{self, WasiP1Ctx};
use wasmtime_wasi::WasiCtxBuilder;

fn main() -> anyhow::Result<()> {
    let engine = Engine::default();

    // The engine can also be configured explicitly, e.g.:
    // let engine = Engine::new(Config::new().strategy(Strategy::Cranelift))?;

    let module = Module::from_file(&engine, "plugin.wasm")?;

    let mut linker: Linker<WasiP1Ctx> = Linker::new(&engine);
    preview1::add_to_linker_sync(&mut linker, |s| s)?;

    // Host function: Wasm can call this
    linker.func_wrap("host", "log", |mut caller: Caller<'_, WasiP1Ctx>, ptr: i32, len: i32| {
        // Read the string out of the guest's linear memory
        let mem = caller.get_export("memory").unwrap().into_memory().unwrap();
        let mut buf = vec![0u8; len as usize];
        mem.read(&caller, ptr as usize, &mut buf).unwrap();
        println!("[plugin] {}", String::from_utf8_lossy(&buf));
    })?;

    let wasi = WasiCtxBuilder::new()
        .inherit_stdio()
        .build_p1();

    let mut store = Store::new(&engine, wasi);
    let instance = linker.instantiate(&mut store, &module)?;

    let run = instance.get_typed_func::<(), ()>(&mut store, "run")?;
    run.call(&mut store, ())?;

    Ok(())
}
```
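For context, a guest that matches this host exports `run` and imports `host.log`. The sketch below is hypothetical (the message and build target are illustrative, not from the source); the `cfg` gates exist only so the same file also compiles on native targets:

```rust
// Hypothetical guest plugin for a host that supplies `host.log`.
// Build with: cargo build --target wasm32-wasip1

#[cfg(target_arch = "wasm32")]
#[link(wasm_import_module = "host")]
extern "C" {
    // Provided by the host's `linker.func_wrap("host", "log", ...)`
    fn log(ptr: i32, len: i32);
}

#[no_mangle]
pub extern "C" fn run() {
    let msg = "hello from the plugin";
    // Pass a pointer/length pair into linear memory; the host reads the bytes back.
    #[cfg(target_arch = "wasm32")]
    unsafe {
        log(msg.as_ptr() as i32, msg.len() as i32);
    }
    #[cfg(not(target_arch = "wasm32"))]
    let _ = msg; // no-op when compiled natively
}

fn main() {
    // WASI command entry point; a host can also call the `run` export directly.
    run();
}
```

Strings cross the host boundary as a pointer/length pair because core Wasm functions can only pass integers and floats; this is exactly what the host-side `mem.read` call consumes.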

### Fermyon Spin Application

```bash
# Install Spin
curl -fsSL https://developer.fermyon.com/downloads/install.sh | bash

# Create a new Spin app
spin new -t http-rust my-api
cd my-api
```

```rust
// src/lib.rs — Spin HTTP handler in Rust
use spin_sdk::http::{IntoResponse, Request, Response};
use spin_sdk::http_component;

#[http_component]
fn handle_request(req: Request) -> anyhow::Result<impl IntoResponse> {
    let path = req.uri().path();

    match path {
        "/api/health" => Ok(Response::builder()
            .status(200)
            .header("content-type", "application/json")
            .body(r#"{"status":"ok"}"#)?),

        "/api/compute" => {
            let body = req.body();
            let input: serde_json::Value = serde_json::from_slice(body)?;
            let n = input["n"].as_u64().unwrap_or(10);
            let result = fibonacci(n);

            Ok(Response::builder()
                .status(200)
                .header("content-type", "application/json")
                .body(format!(r#"{{"fib":{},"n":{}}}"#, result, n))?)
        }

        _ => Ok(Response::builder()
            .status(404)
            .body("Not Found")?),
    }
}

fn fibonacci(n: u64) -> u64 {
    let (mut a, mut b) = (0u64, 1u64);
    for _ in 0..n {
        let t = b;
        b = a + b;
        a = t;
    }
    a
}
```

```toml
# spin.toml
spin_manifest_version = 2

[application]
name = "my-api"
version = "0.1.0"

[[trigger.http]]
route = "/api/..."
component = "my-api"

[component.my-api]
source = "target/wasm32-wasip1/release/my_api.wasm"

[component.my-api.build]
command = "cargo build --target wasm32-wasip1 --release"
```

```bash
spin build
spin up  # starts on http://127.0.0.1:3000
```

### Plugin System with Wasmtime

```rust
use wasmtime::*;

struct PluginHost {
    engine: Engine,
    linker: Linker<PluginState>,
}

struct PluginState {
    name: String,
    log_buffer: Vec<String>,
}

impl PluginHost {
    fn new() -> anyhow::Result<Self> {
        let engine = Engine::default();
        let mut linker = Linker::new(&engine);

        linker.func_wrap("host", "get_input", |_caller: Caller<'_, PluginState>| -> i32 {
            // Return a pointer to input data in Wasm memory
            // (simplified — a real implementation would write to Wasm memory)
            42
        })?;

        linker.func_wrap("host", "set_output", |mut caller: Caller<'_, PluginState>, value: i32| {
            caller.data_mut().log_buffer.push(format!("Output: {}", value));
        })?;

        Ok(PluginHost { engine, linker })
    }

    fn run_plugin(&self, wasm_path: &str) -> anyhow::Result<Vec<String>> {
        let module = Module::from_file(&self.engine, wasm_path)?;
        let state = PluginState {
            name: wasm_path.to_string(),
            log_buffer: Vec::new(),
        };
        let mut store = Store::new(&self.engine, state);
        let instance = self.linker.instantiate(&mut store, &module)?;

        let process = instance.get_typed_func::<(), ()>(&mut store, "process")?;
        process.call(&mut store, ())?;

        Ok(store.into_data().log_buffer)
    }
}
```

### WasmEdge with HTTP Extension

```bash
# Install WasmEdge with HTTP plugin
curl -sSf https://raw.githubusercontent.com/WasmEdge/WasmEdge/master/utils/install.sh | bash -s -- --plugins wasmedge_httpsreq
```

```rust
// Rust WASI app using the WasmEdge HTTP extension
use wasmedge_http_req::request;

fn main() {
    let mut writer = Vec::new();
    let res = request::get("https://api.example.com/data", &mut writer).unwrap();
    println!("Status: {}", res.status_code());
    println!("Body: {}", String::from_utf8_lossy(&writer));
}
```

### Running Wasm in Docker (OCI)

```dockerfile
# Dockerfile for Wasm workloads (requires Docker Desktop with Wasm support)
FROM scratch
COPY app.wasm /app.wasm
ENTRYPOINT ["/app.wasm"]
```

```bash
# Build and run with Docker's Wasm runtime
docker buildx build --platform wasi/wasm -t my-wasm-app .
docker run --runtime=io.containerd.wasmtime.v1 --platform=wasi/wasm my-wasm-app
```

## Best Practices

  • Use AOT compilation for production — wasmtime compile pre-compiles modules to native code, eliminating JIT latency. Cold starts drop from milliseconds to microseconds.
  • Limit resource consumption — configure fuel limits (Store::set_fuel), memory caps, and execution timeouts to prevent runaway modules from consuming host resources.
  • Use the component model for polyglot systems — WIT interfaces let you compose modules across languages. A Rust core can call a Python ML model through typed interfaces.
  • Pre-initialize modules — Spin and Wasmtime support snapshot-and-restore. Run initialization code once, snapshot the memory state, and clone it for each request to eliminate per-request init cost.
  • Keep modules small — server-side Wasm benefits most when modules are focused. A 100KB Wasm module starts in under 1ms; a 10MB module may take 50ms.
  • Pin runtime versions in production — WASI and the component model are evolving. Pin to specific runtime versions to avoid breaking changes.

## Common Pitfalls

  • Assuming full POSIX compatibility — WASI is not POSIX. Many syscalls (mmap, signals, fork, exec) are not available. Code relying on them will fail to compile or trap at runtime.
  • Not configuring pre-opens — WASI modules have no filesystem access by default. Forgetting to pass --dir or preopened_dir() causes all file operations to fail.
  • Ignoring cold start in benchmarks — first-invocation time includes compilation. Always measure both cold and warm execution paths.
  • Exceeding fuel limits silently — when fuel runs out, execution traps with OutOfFuel. Callers must handle this error and decide whether to retry with more fuel or reject the request.
  • Network access in WASI Preview 1 — there is no standard networking in Preview 1. Solutions are runtime-specific extensions (WasmEdge HTTP plugin, Spin's outbound HTTP) or require Preview 2.
  • Large memory footprint from many instances — while individual Wasm modules are lightweight, thousands of concurrent instances each with their own linear memory can add up. Use shared-nothing or copy-on-write memory where supported.

## Core Philosophy

Server-side Wasm is not about replacing containers or virtual machines — it is about providing a faster, lighter, more secure execution model for specific workloads. Wasm modules start in microseconds (vs. milliseconds for containers), use a fraction of the memory, and run in a hardware-enforced sandbox. These characteristics make Wasm ideal for serverless functions, plugin systems, and edge compute where cold start latency and multi-tenant isolation matter.

The Component Model is the key to polyglot server-side Wasm. WIT interfaces define typed contracts between components regardless of source language. A Rust module can call a Go module through a typed interface without serialization overhead or language-specific FFI. Design your server-side Wasm architecture around WIT interfaces, and you gain language flexibility, composability, and long-term maintainability.

Keep modules small and focused. A 100KB Wasm module starts in under 1ms; a 10MB module may take 50ms. For serverless and edge workloads where cold start dominates, binary size directly impacts user experience. Use AOT compilation (wasmtime compile) to pre-compile to native code and eliminate JIT latency entirely in production.

## Anti-Patterns

  • Assuming full POSIX compatibility — WASI is not POSIX; mmap, fork, exec, and signals are not available; code relying on these will fail to compile or trap at runtime.

  • Not configuring pre-opens for file access — WASI modules have no filesystem access by default; forgetting to pass --dir or preopened_dir() causes all file operations to fail with permission errors.

  • Ignoring cold start time in benchmarks — first-invocation time includes compilation; measuring only warm execution gives a misleadingly optimistic picture of real-world latency.

  • Running without fuel or resource limits — a malicious or buggy Wasm module can consume unbounded CPU and memory; always configure fuel limits (Store::set_fuel), memory caps, and execution timeouts in multi-tenant environments.

  • Using runtime-specific networking extensions for portable modules — relying on WasmEdge HTTP extensions or Spin's outbound HTTP locks your module to that specific runtime; prefer WASI Preview 2 wasi:http for portable networking.
