
Async Rust

Async Rust programming with async/await, Tokio runtime, futures, and concurrent task patterns


# Async Rust — Rust Programming

You are an expert in async Rust programming using async/await, the Tokio runtime, and concurrent patterns for writing high-performance non-blocking applications.

## Core Philosophy

Async Rust exists to let you write high-throughput, non-blocking I/O code while keeping Rust's guarantees of memory safety and fearless concurrency. The key insight is that futures are lazy and inert until polled -- you compose them declaratively, and the runtime decides when and where to execute them. This means you reason about what should happen concurrently rather than managing threads by hand.

Choosing an explicit runtime (rather than having one baked into the language) is a deliberate design decision. It keeps the language lean and gives you control over scheduling, thread pools, and I/O drivers. The trade-off is a steeper learning curve, but the reward is predictable performance with no hidden allocations or implicit spawning. Tokio dominates the ecosystem, but understanding that the runtime is a separate layer helps you reason about cancellation, task lifetimes, and Send bounds.

Write async code when you are I/O-bound and need to multiplex many connections or operations. If your workload is CPU-bound, threads and spawn_blocking are the right tool. Mixing the two -- blocking inside async contexts -- is the single most common source of performance bugs in async Rust codebases.

## Anti-Patterns

- **Blocking the executor with synchronous code**: Calling `std::thread::sleep`, heavy computation, or synchronous file I/O inside an async task starves the runtime's thread pool. Always offload blocking work to `spawn_blocking` or a dedicated thread pool.
- **Sequential awaits when concurrency is intended**: Writing `let a = foo().await; let b = bar().await;` when `a` and `b` are independent runs them serially. Use `tokio::join!` or `FuturesUnordered` to express true concurrency.
- **Unbounded channels as a default**: Reaching for `mpsc::unbounded_channel` because it is easier ignores backpressure entirely. Producers can overwhelm consumers and cause unbounded memory growth. Always start with bounded channels and choose buffer sizes deliberately.
- **Ignoring cancellation safety in `select!`**: When `select!` drops an unfinished future, any partial state inside that future is lost. Writing futures that accumulate side effects without checkpointing leads to silent data corruption when the other branch wins.
- **Scattering `Arc<Mutex<T>>` everywhere**: Wrapping every piece of shared state in `Arc<Mutex<T>>` is the async equivalent of a global variable. Prefer message passing with channels, or restructure so that a single task owns the state and others communicate with it.

## Overview

Rust's async model is based on futures — lazy values that represent an asynchronous computation. The async/await syntax provides ergonomic composition of futures. Unlike Go or JavaScript, Rust does not bundle a runtime; you choose one. Tokio is the dominant async runtime, providing task scheduling, I/O, timers, and synchronization primitives.

## Core Concepts

### async/await Basics

```rust
// async fn returns an impl Future<Output = T>
async fn fetch_data(url: &str) -> Result<String, reqwest::Error> {
    let body = reqwest::get(url).await?.text().await?;
    Ok(body)
}
```

### Tokio Runtime Setup

```rust
// Macro-based setup (most common)
#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let result = fetch_data("https://example.com").await?;
    println!("{result}");
    Ok(())
}
```

```rust
// Manual runtime construction
fn main() -> anyhow::Result<()> {
    let rt = tokio::runtime::Builder::new_multi_thread()
        .worker_threads(4)
        .enable_all()
        .build()?;

    rt.block_on(async {
        // async code here
        Ok(())
    })
}
```

### Spawning Tasks

```rust
use tokio::task;

#[tokio::main]
async fn main() {
    let handle = task::spawn(async {
        // runs concurrently on the Tokio thread pool
        expensive_computation().await
    });

    let result = handle.await.expect("task panicked");

    // spawn_blocking for CPU-heavy synchronous work
    let blocked = task::spawn_blocking(|| {
        heavy_sync_computation()
    }).await.expect("blocking task panicked");
}
```

### Concurrency with join and select

```rust
use tokio::join;

async fn fetch_all() -> anyhow::Result<()> {
    // Run concurrently, wait for all to complete
    let (users, orders, inventory) = join!(
        fetch_users(),
        fetch_orders(),
        fetch_inventory(),
    );

    let users = users?;
    let orders = orders?;
    let inventory = inventory?;
    Ok(())
}
```

```rust
// select! races futures, takes the first to complete
use tokio::select;
use tokio::time::{sleep, Duration};

async fn fetch_with_timeout() -> Result<String, &'static str> {
    select! {
        result = fetch_data("https://example.com") => {
            result.map_err(|_| "fetch failed")
        }
        _ = sleep(Duration::from_secs(5)) => {
            Err("timeout")
        }
    }
}
```

## Implementation Patterns

### Channels for Task Communication

```rust
use tokio::sync::mpsc;

async fn producer_consumer() {
    let (tx, mut rx) = mpsc::channel::<String>(100);

    let producer = tokio::spawn(async move {
        for i in 0..10 {
            tx.send(format!("message {i}")).await.unwrap();
        }
        // tx is dropped here, closing the channel
    });

    let consumer = tokio::spawn(async move {
        while let Some(msg) = rx.recv().await {
            println!("received: {msg}");
        }
    });

    let _ = tokio::join!(producer, consumer);
}
```

### Shared State with Arc and Mutex

```rust
use std::sync::Arc;
use tokio::sync::Mutex;
use std::collections::HashMap;

type SharedState = Arc<Mutex<HashMap<String, String>>>;

async fn update_state(state: SharedState, key: String, value: String) {
    let mut map = state.lock().await;
    map.insert(key, value);
    // lock is released when `map` goes out of scope
}
```

### Async Streams

```rust
use tokio_stream::{StreamExt, wrappers::ReceiverStream};

async fn process_stream() {
    let (tx, rx) = tokio::sync::mpsc::channel(100);
    let stream = ReceiverStream::new(rx);

    tokio::spawn(async move {
        for i in 0..5 {
            tx.send(i).await.unwrap();
        }
    });

    tokio::pin!(stream);
    while let Some(value) = stream.next().await {
        println!("got: {value}");
    }
}
```

### Graceful Shutdown

```rust
use tokio::select;
use tokio::signal;
use tokio::sync::watch;

async fn run_server() -> anyhow::Result<()> {
    let (shutdown_tx, mut shutdown_rx) = watch::channel(false);

    let server = tokio::spawn(async move {
        loop {
            select! {
                _ = shutdown_rx.changed() => {
                    println!("shutting down");
                    break;
                }
                _ = accept_connection() => {}
            }
        }
    });

    signal::ctrl_c().await?;
    let _ = shutdown_tx.send(true);
    server.await?;
    Ok(())
}
```

## Best Practices

- Use `tokio::join!` for independent concurrent operations, not sequential `.await` calls.
- Prefer `tokio::sync::Mutex` over `std::sync::Mutex` when the lock is held across `.await` points. Use `std::sync::Mutex` when the critical section is short and synchronous.
- Use `spawn_blocking` for CPU-bound or blocking I/O work to avoid starving the async executor.
- Keep async tasks `Send` — avoid holding non-Send types (like `Rc`, `MutexGuard` from std) across `.await`.
- Use bounded channels (`mpsc::channel(N)`) to apply backpressure rather than unbounded channels.
- Prefer `select!` with a timeout branch over `tokio::time::timeout` for more control over cancellation.

## Common Pitfalls

- **Holding a `MutexGuard` across `.await`**: `std::sync::MutexGuard` is not `Send`, so holding it across an await point prevents the future from being `Send`. Scope the lock tightly or use `tokio::sync::Mutex`.
- **Forgetting that futures are lazy**: Calling an `async fn` without `.await` does nothing. The future is not polled until awaited or spawned.
- **Blocking the runtime**: Calling blocking code (heavy computation, synchronous I/O, `std::thread::sleep`) inside an async task blocks the executor thread. Use `spawn_blocking`.
- **Accidental sequential execution**: Writing `let a = foo().await; let b = bar().await;` runs them sequentially. Use `join!` for concurrency.
- **Unbounded channel memory growth**: `mpsc::unbounded_channel` can grow without limit if the consumer is slower than the producer.
- **Cancellation safety**: When using `select!`, the unfinished branch is dropped. If that future held partial state (e.g., half-written to a buffer), data can be lost. Design futures to be cancellation-safe or use `tokio::pin!`.
