Worker tuning quick reference

This page provides a quick reference for Worker configuration options and their default values across Temporal SDKs. Use this guide alongside the comprehensive Worker performance documentation for detailed tuning guidance.

Worker performance is constrained by three primary resources:

| Resource | Description |
| --- | --- |
| Compute | CPU-bound operations, concurrent Task execution |
| Memory | Workflow cache, thread pools |
| IO | Network calls to the Temporal Service, polling |

How a Worker works

Workers poll a Task Queue in Temporal Cloud or a self-hosted Temporal Service, execute Tasks, and respond with the result.

┌─────────────────┐     Poll for Tasks       ┌──────────────────┐
│ Worker          │ ◄─────────────────────── │ Temporal Service │
│ - Workflows     │                          │                  │
│ - Activities    │ ───────────────────────► │                  │
└─────────────────┘   Respond with results   └──────────────────┘

Multiple Workers can poll the same Task Queue, providing horizontal scalability.
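As an illustration, a minimal Worker in the Python SDK might look like the following sketch. The Task Queue name, Workflow, and Activity are hypothetical, and a Temporal Service is assumed to be reachable at `localhost:7233`:

```python
# Sketch: a minimal Worker that polls "my-task-queue" for Tasks,
# executes them, and responds with results. Requires a reachable
# Temporal Service; names below are illustrative.
import asyncio
from datetime import timedelta

from temporalio import activity, workflow
from temporalio.client import Client
from temporalio.worker import Worker

@activity.defn
async def compose_greeting(name: str) -> str:
    return f"Hello, {name}!"

@workflow.defn
class GreetingWorkflow:
    @workflow.run
    async def run(self, name: str) -> str:
        return await workflow.execute_activity(
            compose_greeting,
            name,
            start_to_close_timeout=timedelta(seconds=10),
        )

async def main() -> None:
    client = await Client.connect("localhost:7233")
    worker = Worker(
        client,
        task_queue="my-task-queue",
        workflows=[GreetingWorkflow],
        activities=[compose_greeting],
    )
    await worker.run()  # long-polls for Tasks until shut down

if __name__ == "__main__":
    asyncio.run(main())
```

Any number of processes running this same code against the same Task Queue share the work.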

How Worker failure recovery works

When a Worker crashes or experiences a host outage:

  1. The Workflow Task times out
  2. Another available Worker picks up the Task
  3. The new Worker replays the Event History to reconstruct state
  4. Execution continues from where it left off

For more details on Worker architecture, see What is a Temporal Worker?

Compute settings

Compute settings control how many Tasks a Worker can execute concurrently.

Compute configuration options

| Setting | Description |
| --- | --- |
| MaxConcurrentWorkflowTaskExecutionSize | Maximum concurrent Workflow Tasks |
| MaxConcurrentActivityTaskExecutionSize | Maximum concurrent Activity Tasks |
| MaxConcurrentLocalActivityTaskExecutionSize | Maximum concurrent Local Activities |
| MaxWorkflowThreadCount / workflowThreadPoolSize | Thread pool for Workflow execution |

Compute defaults by SDK

| SDK | MaxConcurrentWorkflowTaskExecutionSize | MaxConcurrentActivityTaskExecutionSize | MaxConcurrentLocalActivityTaskExecutionSize | MaxWorkflowThreadCount |
| --- | --- | --- | --- | --- |
| Go | 1,000 | 1,000 | 1,000 | - |
| Java | 200 | 200 | 200 | 600 |
| TypeScript | 40 | 100 | 100 | 1 (reuseV8Context) |
| Python | 100 | 100 | 100 | - |
| .NET | 100 | 100 | 100 | - |
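In the Python SDK, these limits map to `Worker` constructor arguments. A hedged sketch raising the Python defaults for an Activity-heavy Worker (the values are illustrative, and `client`, `MyWorkflow`, and `my_activity` are assumed to exist elsewhere):

```python
# Sketch: overriding Python's compute defaults (100/100/100).
from temporalio.worker import Worker

worker = Worker(
    client,                               # assumed existing Client
    task_queue="my-task-queue",
    workflows=[MyWorkflow],               # hypothetical Workflow class
    activities=[my_activity],             # hypothetical Activity function
    max_concurrent_workflow_tasks=200,    # MaxConcurrentWorkflowTaskExecutionSize
    max_concurrent_activities=400,        # MaxConcurrentActivityTaskExecutionSize
    max_concurrent_local_activities=400,  # MaxConcurrentLocalActivityTaskExecutionSize
)
```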

Resource-based slot suppliers

Instead of fixed slot counts, you can use resource-based slot suppliers that automatically adjust available Task slots based on CPU and memory utilization. For implementation details, see Slot suppliers.
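In the Python SDK, for example, this is done by passing a tuner in place of the fixed `max_concurrent_*` limits; a sketch (the target values are illustrative):

```python
# Sketch: resource-based slot supplier (Python SDK). Slot counts
# adjust automatically toward the target CPU and memory utilization.
from temporalio.worker import Worker, WorkerTuner

tuner = WorkerTuner.create_resource_based(
    target_memory_usage=0.8,   # aim to use up to ~80% of system memory
    target_cpu_usage=0.9,      # aim to use up to ~90% of system CPU
)

worker = Worker(
    client,                    # assumed existing Client
    task_queue="my-task-queue",
    workflows=[MyWorkflow],    # hypothetical
    activities=[my_activity],  # hypothetical
    tuner=tuner,               # replaces fixed max_concurrent_* slot counts
)
```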

Memory settings

Memory settings control the Workflow cache size and thread pool allocation.

Memory configuration options

| Setting | Description |
| --- | --- |
| MaxCachedWorkflows / StickyWorkflowCacheSize | Number of Workflows to keep in cache |
| MaxWorkflowThreadCount | Thread pool size |
| reuseV8Context (TypeScript) | Reuse V8 context for Workflows |

Memory defaults by SDK

| SDK | MaxCachedWorkflows / StickyWorkflowCacheSize |
| --- | --- |
| Go | 10,000 |
| Java | 600 |
| TypeScript | Dynamic (e.g., 2,000 for 4 GiB RAM) |
| Python | 1,000 |
| .NET | 10,000 |

For cache tuning guidance, see Workflow cache tuning.
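In the Python SDK the cache size is the `max_cached_workflows` argument; a sketch trading replay cost for a smaller memory footprint (the value is illustrative, and `client`, `MyWorkflow`, and `my_activity` are assumed to exist):

```python
# Sketch: shrinking the Workflow cache below Python's default of 1,000.
# Evicted Workflows must replay their Event History on the next Task.
from temporalio.worker import Worker

worker = Worker(
    client,                    # assumed existing Client
    task_queue="my-task-queue",
    workflows=[MyWorkflow],    # hypothetical
    activities=[my_activity],  # hypothetical
    max_cached_workflows=500,  # MaxCachedWorkflows / StickyWorkflowCacheSize
)
```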

IO settings

IO settings control the number of pollers and rate limits for Task Queue interactions.

IO configuration options

| Setting | Description |
| --- | --- |
| MaxConcurrentWorkflowTaskPollers | Number of concurrent Workflow pollers |
| MaxConcurrentActivityTaskPollers | Number of concurrent Activity pollers |
| Namespace APS | Actions per second limit for the Namespace |
| TaskQueueActivitiesPerSecond | Activity rate limit per Task Queue |

IO defaults by SDK

| SDK | MaxConcurrentWorkflowTaskPollers | MaxConcurrentActivityTaskPollers | Namespace APS | TaskQueueActivitiesPerSecond |
| --- | --- | --- | --- | --- |
| Go | 2 | 2 | 400 | Unlimited |
| Java | 5 | 5 | - | - |
| TypeScript | 10 | 10 | - | - |
| Python | 5 | 5 | - | - |
| .NET | 5 | 5 | - | - |
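In the Python SDK, for example, the poller counts and the per-Task-Queue Activity rate limit are `Worker` arguments; a sketch (the values are illustrative, and `client`, `MyWorkflow`, and `my_activity` are assumed to exist):

```python
# Sketch: raising poller counts and capping Activity dispatch for
# one Task Queue (Python SDK; defaults are 5 pollers of each type).
from temporalio.worker import Worker

worker = Worker(
    client,                                   # assumed existing Client
    task_queue="my-task-queue",
    workflows=[MyWorkflow],                   # hypothetical
    activities=[my_activity],                 # hypothetical
    max_concurrent_workflow_task_polls=10,    # MaxConcurrentWorkflowTaskPollers
    max_concurrent_activity_task_polls=10,    # MaxConcurrentActivityTaskPollers
    max_task_queue_activities_per_second=50,  # TaskQueueActivitiesPerSecond
)
```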

Poller autoscaling

Use poller autoscaling to automatically adjust the number of concurrent polls based on workload. For configuration details, see Configuring poller options.

Metrics reference by resource type

Use these metrics to identify bottlenecks and guide tuning decisions. For the complete metrics reference, see SDK metrics.
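Before these metrics can be scraped, the SDK must be configured to emit them. In the Python SDK this is done through the telemetry runtime; a sketch (the bind address and port are illustrative):

```python
# Sketch: exposing SDK metrics on a Prometheus scrape endpoint.
from temporalio.client import Client
from temporalio.runtime import PrometheusConfig, Runtime, TelemetryConfig

runtime = Runtime(
    telemetry=TelemetryConfig(
        metrics=PrometheusConfig(bind_address="0.0.0.0:9464")  # illustrative port
    )
)

async def connect() -> Client:
    # Workers created from this Client report metrics such as
    # worker_task_slots_available, sticky_cache_size, and num_pollers.
    return await Client.connect("localhost:7233", runtime=runtime)
```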

| Worker configuration option | SDK metric |
| --- | --- |
| MaxConcurrentWorkflowTaskExecutionSize | worker_task_slots_available {worker_type=WorkflowWorker} |
| MaxConcurrentActivityTaskExecutionSize | worker_task_slots_available {worker_type=ActivityWorker} |
| MaxWorkflowThreadCount | workflow_active_thread_count (Java only) |
| CPU-intensive logic | workflow_task_execution_latency |

Also monitor your machine's CPU consumption (for example, container_cpu_usage_seconds_total in Kubernetes).

| Worker configuration option | SDK metric |
| --- | --- |
| StickyWorkflowCacheSize | sticky_cache_total_forced_eviction, sticky_cache_size, sticky_cache_hit, sticky_cache_miss |

Also monitor your machine's memory consumption (for example, container_memory_usage_bytes in Kubernetes).

| Worker configuration option | SDK metric |
| --- | --- |
| MaxConcurrentWorkflowTaskPollers | num_pollers {poller_type=workflow_task} |
| MaxConcurrentActivityTaskPollers | num_pollers {poller_type=activity_task} |
| Network latency | request_latency {namespace, operation} |

Task Queue metrics

| Metric | Description |
| --- | --- |
| poll_success_sync_count | Sync match rate (Tasks immediately assigned to Workers) |
| approximate_backlog_count | Approximate number of Tasks in a Task Queue |

Task Queue statistics are also available via the DescribeTaskQueue API:

  • ApproximateBacklogCount
  • ApproximateBacklogAge
  • TasksAddRate
  • TasksDispatchRate
  • BacklogIncreaseRate

For more on Task Queue metrics, see Available Task Queue information.
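These statistics can also be inspected ad hoc from the command line; with the Temporal CLI, for example (the Task Queue name is illustrative, and the exact fields reported depend on your CLI and Service versions):

```shell
# Describe a Task Queue, including poller and backlog information
temporal task-queue describe --task-queue my-task-queue
```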

Failure metrics

| Metric | Description |
| --- | --- |
| long_request_failure | Failures for long-running operations (polling, history retrieval) |
| request_failure | Failures for standard operations (Task completion responses) |

Common failure codes:

  • RESOURCE_EXHAUSTED - Rate limits exceeded
  • DEADLINE_EXCEEDED - Operation timeout
  • NOT_FOUND - Resource not found

Worker tuning tips

  1. Scale test before production: Validate your configuration under realistic load.
  2. Infrastructure matters: Workers don't operate in a vacuum. Consider network latency, database performance, and external service dependencies.
  3. Tune and observe: Make incremental changes and monitor metrics before making additional adjustments.
  4. Identify the bottleneck: Use the theory of constraints. Improving non-bottleneck resources won't improve overall throughput.

For detailed tuning guidance, see the Worker performance documentation.