Racing Tasks

This page explains how to race multiple tasks using when_any.

Code snippets assume using namespace boost::capy; is in effect.

The Problem

Sometimes you need the result from whichever task finishes first, not all of them. Common scenarios include:

  • Racing requests to multiple servers, using the first response

  • Implementing timeouts by racing against a timer

  • Speculative execution of multiple algorithms

  • Waiting for first available resource from a pool

when_any

The when_any function launches multiple tasks concurrently and completes with the result of whichever task finishes first:

#include <boost/capy/when_any.hpp>

task<void> race()
{
    auto [index, result] = co_await when_any(
        fetch_from_primary(),
        fetch_from_backup()
    );
    // index is 0 or 1 (which task won)
    // result contains the winner's value
}

The winning task’s result is returned immediately. All sibling tasks receive a stop request and are allowed to complete before when_any returns.

Return Value

when_any returns a std::pair containing the winner’s index and result.

Heterogeneous Tasks (Variadic)

When racing tasks with different return types, the result is a variant:

auto [index, result] = co_await when_any(
    task_returning_int(),     // task<int>
    task_returning_string()   // task<std::string>
);
// index is 0 or 1
// result is std::variant<int, std::string>

if (index == 0)
    std::cout << "Got int: " << std::get<int>(result) << "\n";
else
    std::cout << "Got string: " << std::get<std::string>(result) << "\n";

Void Tasks

Void tasks contribute std::monostate to the variant:

auto [index, result] = co_await when_any(
    task_returning_int(),  // task<int>
    task_void()            // task<void>
);
// result is std::variant<int, std::monostate>

if (index == 0)
    std::cout << "Got int: " << std::get<int>(result) << "\n";
else
    std::cout << "Void task completed\n";

Duplicate Types

The variant is deduplicated. When racing tasks with the same return type, use the index to identify which task won:

auto [index, result] = co_await when_any(
    fetch_from_server_a(),  // task<Response>
    fetch_from_server_b(),  // task<Response>
    fetch_from_server_c()   // task<Response>
);
// result is std::variant<Response> (deduplicated)
// index tells you which server responded (0, 1, or 2)

auto response = std::get<Response>(result);
std::cout << "Server " << index << " responded first\n";

Homogeneous Tasks (Vector)

For a dynamic number of tasks with the same type, use the vector overload:

std::vector<task<Response>> requests;
for (auto& server : servers)
    requests.push_back(fetch_from(server));

auto [index, response] = co_await when_any(std::move(requests));
// No variant needed - response is directly Response
std::cout << "Server " << index << " responded: " << response << "\n";

The vector overload returns std::pair<std::size_t, T> directly, without a variant wrapper.

For void tasks in a vector, only the index is returned:

std::vector<task<void>> tasks;
// ... populate tasks

std::size_t winner = co_await when_any(std::move(tasks));
std::cout << "Task " << winner << " completed first\n";

Error Handling

Exceptions are treated as valid completions. If the winning task throws, that exception is rethrown from when_any:

task<void> handle_errors()
{
    try {
        auto [index, result] = co_await when_any(
            might_fail(),
            might_succeed()
        );
        // If we get here, the winner succeeded
    } catch (std::exception const& e) {
        // The winning task threw this exception
        std::cerr << "Winner failed: " << e.what() << "\n";
    }
}

First-Completion Semantics

Unlike when_all (which captures the first error), when_any returns whichever task completes first, whether it succeeds or fails. Exceptions from non-winning tasks are discarded.

Stop Propagation

When a winner is determined, when_any requests stop for all sibling tasks. Tasks that support cancellation can exit early:

task<Response> fetch_with_cancel_support()
{
    auto token = co_await get_stop_token();

    for (auto& chunk : data_source)
    {
        if (token.stop_requested())
            co_return partial_response();  // Exit early
        co_await send_chunk(chunk);
    }
    co_return complete_response();
}

task<void> example()
{
    // When one fetch wins, the other sees stop_requested
    auto [index, response] = co_await when_any(
        fetch_with_cancel_support(),
        fetch_with_cancel_support()
    );
}

Tasks that ignore the stop token will run to completion. when_any always waits for all tasks to finish before returning, ensuring proper cleanup.

Parent Stop Token

when_any forwards the parent’s stop token to children. If the parent is cancelled, all children see the request:

task<void> parent()
{
    auto [index, result] = co_await when_any(
        child_a(),  // Sees parent's stop token
        child_b()   // Sees parent's stop token
    );
}

std::stop_source source;
run_async(ex, source.get_token())(parent());

// Later: cancel everything
source.request_stop();

Execution Model

All child tasks inherit the parent’s executor affinity:

task<void> parent()  // Running on executor ex
{
    auto [index, result] = co_await when_any(
        child_a(),  // Runs on ex
        child_b()   // Runs on ex
    );
}

Children are launched via dispatch() on the executor, which may run them inline or queue them depending on the executor implementation.

True Concurrency

With a multi-threaded executor, tasks race in parallel:

thread_pool pool(4);
run_async(pool.get_executor())(parent());

// Tasks may complete in any order based on actual execution time

With a single-threaded executor, tasks interleave at suspension points but execute sequentially.

Example: Redundant Requests

Race requests to multiple servers for reliability:

task<Response> fetch_with_redundancy(Request req)
{
    auto [index, response] = co_await when_any(
        fetch_from(primary_server, req),
        fetch_from(backup_server, req)
    );

    std::cout << (index == 0 ? "Primary" : "Backup")
              << " server responded\n";
    co_return std::get<Response>(response);
}

Example: Timeout Pattern

Race an operation against a timer:

task<Data> fetch_with_timeout(Request req)
{
    auto [index, result] = co_await when_any(
        fetch_data(req),
        timeout_after<Data>(100ms)
    );

    if (index == 1)
        throw timeout_error{"Request timed out"};

    co_return std::get<Data>(result);
}

// Helper that waits then throws
template<typename T>
task<T> timeout_after(std::chrono::milliseconds ms)
{
    co_await sleep(ms);
    throw timeout_error{"Timeout"};
    co_return T{};  // Never reached
}

Example: First Available Resource

Wait for the first available connection from a pool:

task<Connection> get_connection(std::vector<ConnectionPool>& pools)
{
    std::vector<task<Connection>> attempts;
    for (auto& pool : pools)
        attempts.push_back(pool.acquire());

    auto [index, conn] = co_await when_any(std::move(attempts));

    std::cout << "Got connection from pool " << index << "\n";
    co_return conn;
}

Comparison with when_all

Aspect           when_all                                  when_any

Completion       Waits for all tasks                       Returns on first completion
Return type      Tuple of results                          Pair of (index, variant/value)
Error handling   First exception wins, siblings get stop   Exceptions are valid completions
Use case         Need all results                          Need fastest result

Summary

Feature                     Description

when_any(tasks...)          Race tasks, return first completion
when_any(vector<task<T>>)   Race homogeneous tasks from a vector
Return type (variadic)      pair<size_t, variant<...>> with deduplicated types
Return type (vector)        pair<size_t, T> or size_t for void
Error handling              Winner's exception propagated, others discarded
Stop propagation            Siblings receive stop request on winner
Cleanup                     All tasks complete before returning

Next Steps