Inliner
The inliner option adds the host (main thread) as one extra lane in the pool.
Instead of sending everything to workers, the host participates in execution
too — useful when every task is pure computation and you want to squeeze one
more core out of the machine.
```ts
const pool = createPool({
  threads: 4,
  inliner: { position: "last", batchSize: 16 },
})({ add });
```

When to use it

The inliner shines for math and pure-compute workloads that run on the host without touching the network or filesystem:
- Number crunching, scoring, hashing, matrix ops.
- Batch transforms over arrays of primitives.
- Short, synchronous functions where IPC overhead matters more than the work itself.
- Bursty queues where one extra lane helps drain work faster.
Because inline tasks skip worker IPC entirely (no encode/decode round-trip), they can be significantly faster for tiny payloads.
When NOT to use it
- HTTP / networking — inline tasks run on the main thread, so any I/O blocks the event loop and defeats the isolation that workers provide. If you need request handling, keep it in workers.
- File system or database calls — same problem. Anything that awaits external I/O will stall timers, sockets, and other pools sharing the host.
- Long-running async work — the inliner is designed around fast, ideally synchronous functions. Async tasks that take tens of milliseconds or more will block the batch loop and starve other inline slots.
- Isolation-sensitive code — if a task can throw or corrupt shared state, run it in a worker where a crash stays contained.
Inline tasks run on the main thread. Anything that blocks — network calls, disk reads, heavy async chains — will freeze the event loop. Stick to pure math and transforms.
How it runs
Inline execution is not immediate in the call.*() path.
Calls are queued, then processed when the macro-queue turn runs.
This delay is intentional: by that point, the dispatcher has had a chance to send/receive worker tasks first, then the host drains inline work.
batchSize controls how many inline tasks run per macro-queue turn.
For compute workloads, higher values let the host churn through more work
per tick without yielding back to the event loop unnecessarily.
Options

```ts
createPool({
  threads: number,
  inliner: {
    position?: "first" | "last",
    batchSize?: number,
    dispatchThreshold?: number,
  },
  balancer?:
    | "roundRobin"
    | "robinRound"
    | "firstIdle"
    | "randomLane"
    | "firstIdleOrRandom",
})
```

position

Controls where the inline lane sits relative to worker lanes.
- "first" — host lane is considered before workers. Good when inline work is cheaper than IPC and you want the host to grab tasks first.
- "last" — host lane is considered after workers. Workers get priority; the host only picks up overflow.
For most compute pools, "last" is the safe default.
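The effect of position can be pictured as simple lane ordering. This is a hypothetical sketch (laneOrder and the string lane names are invented for illustration), not the library's internal representation:

```typescript
// The inline (host) lane is prepended or appended to the worker lanes
// depending on `position`, which determines who is considered first.
function laneOrder(workerLanes: string[], position: "first" | "last"): string[] {
  return position === "first"
    ? ["inline", ...workerLanes] // host grabs tasks before workers
    : [...workerLanes, "inline"]; // workers get priority, host takes overflow
}
```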
batchSize
How many inline tasks are processed per macro-queue turn.
- Higher values = better throughput for pure math (fewer yields to the event loop).
- Lower values = more responsive host (other timers and callbacks get a chance to run between batches).
Defaults to 1 when the inliner is enabled. For compute-heavy pools you
typically want a much higher value (16, 64, 128+).
dispatchThreshold
Minimum in-flight calls before the inline lane becomes eligible for scheduling.
- 1 (default) — inline lane is immediately eligible.
- Higher values — host lane stays excluded until concurrency rises past the threshold, then joins to help drain the burst.
This is a pressure-relief valve: at low concurrency, workers handle everything; once a burst builds up, the host pitches in.
Exact behavior
- The scheduler tracks inFlight calls per task invoker.
- On each call, inFlight increments before lane selection.
- If inFlight < dispatchThreshold, scheduling uses worker-only lanes (inline lane excluded).
- If inFlight >= dispatchThreshold, scheduling uses all lanes (workers + inline lane).
- inFlight decrements on resolve, reject, or synchronous throw.
- The configured balancer strategy applies to whichever lane set is currently active.
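The lane-selection rule in these steps can be sketched in a few lines. This is a simplified illustration with invented names (Scheduler, eligibleLanes), not the library's internals:

```typescript
interface Scheduler {
  inFlight: number;          // incremented before lane selection, per the steps above
  workerLanes: number[];     // lane indices backed by worker threads
  inlineLane: number;        // the host (main-thread) lane index
  dispatchThreshold: number; // minimum in-flight calls before the host joins
}

// Pick the candidate lane set for the next call: below the threshold only
// workers are eligible; at or above it, the inline lane joins the set.
function eligibleLanes(s: Scheduler): number[] {
  return s.inFlight >= s.dispatchThreshold
    ? [...s.workerLanes, s.inlineLane] // burst: all lanes, host included
    : s.workerLanes;                   // low concurrency: workers only
}
```

The configured balancer would then choose one lane from whichever set this returns.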
Internals
The inline executor uses typed arrays (Int32Array, Int8Array) to manage
execution slots and a RingQueue for pending work. It coordinates with the
event loop through a MessageChannel (macro-task boundary) and
queueMicrotask (micro-task fast path). The first dispatch in a burst
resolves in microtasks; overflow beyond batchSize defers to the next
macro-task turn.
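To illustrate the kind of structure involved, here is a minimal fixed-capacity ring queue over a typed array — a hypothetical sketch in the spirit of the internals described above, not the library's actual RingQueue class:

```typescript
class RingQueue {
  private buf: Int32Array;
  private head = 0; // next slot to read
  private tail = 0; // next slot to write
  private size = 0;

  constructor(capacity: number) {
    this.buf = new Int32Array(capacity);
  }

  // Append a value; returns false when the buffer is full (no allocation,
  // no growth — backpressure is the caller's problem).
  push(value: number): boolean {
    if (this.size === this.buf.length) return false;
    this.buf[this.tail] = value;
    this.tail = (this.tail + 1) % this.buf.length;
    this.size++;
    return true;
  }

  // Remove and return the oldest value, or undefined when empty.
  shift(): number | undefined {
    if (this.size === 0) return undefined;
    const value = this.buf[this.head];
    this.head = (this.head + 1) % this.buf.length;
    this.size--;
    return value;
  }
}
```

A preallocated typed-array buffer like this avoids per-task object allocation on the hot path, which matters when draining large batches per turn.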
Promise arguments are awaited before execution (unlike thenables, which are
passed through as-is). Timeout specs from task() are applied via a
Promise.race wrapper only when the task returns a Promise.
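A Promise.race timeout wrapper of the kind described can be sketched as follows (withTimeout is a hypothetical helper for illustration; the library applies this internally):

```typescript
function withTimeout<T>(p: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`task timed out after ${ms}ms`)), ms);
  });
  // Whichever settles first wins; clear the timer afterwards so a resolved
  // task doesn't leave a pending timeout keeping the process alive.
  return Promise.race([p, timeout]).finally(() => clearTimeout(timer));
}
```

Note this only rejects the caller's promise; it cannot interrupt a synchronous inline task already running on the host thread.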
Abort signals on inline tasks use a static toolkit where hasAborted() always
returns false — inline tasks cannot be individually aborted since they share
the host thread.
Balancer guidance
Section titled “Balancer guidance”roundRobin— simple rotation across all lanes. Works well when tasks are uniform.robinRound— legacy alias ofroundRobin.firstIdle— picks the first idle lane. Prioritizes workers whenposition: "last".firstIdleOrRandomorrandomLane— useful for pools with many registered tasks or uneven load.
Examples
Math pipeline

High batch size, host joins after workers:
```ts
const { call, shutdown } = createPool({
  threads: 4,
  inliner: { position: "last", batchSize: 64 },
  balancer: "firstIdleOrRandom",
})({ scoreChunk });

const results = await Promise.all(
  chunks.map((chunk) => call.scoreChunk(chunk)),
);
await shutdown();
```

Burst drain with threshold
Host stays out until concurrency spikes:

```ts
const { call, shutdown } = createPool({
  threads: 2,
  inliner: {
    position: "last",
    batchSize: 32,
    dispatchThreshold: 16,
  },
  balancer: "roundRobin",
})({ hash });
```

Single-thread + inliner
Useful when you want one worker for isolation but the host can handle the easy math too:

```ts
const { call, shutdown } = createPool({
  threads: 1,
  inliner: { position: "first", batchSize: 8 },
  balancer: "roundRobin",
})({ add });
```