
Swift Concurrency Deep Dive

Why Swift Concurrency

Swift's structured concurrency model with async/await, actors, and Sendable types provides compile-time safety for concurrent code. Unlike traditional thread-based concurrency, structured concurrency makes async lifetimes explicit, prevents data races through the type system, and eliminates callback hell. Actors provide synchronized access to mutable state, while Sendable enforces thread-safe data sharing. This results in concurrent code that's both performant and correct.

Overview

This guide provides an in-depth exploration of Swift's concurrency model, covering structured concurrency foundations, actors for safe state isolation, Sendable protocol for thread-safe types, task hierarchies and cancellation, async sequences for streaming data, main actor isolation for UI updates, and patterns for migrating from completion handlers to async/await.

For basic async/await usage, see our Swift General guidelines. For iOS-specific concurrency patterns, see our iOS Performance guidelines.


Core Principles

  1. Structured Concurrency: Tasks have well-defined lifetimes and hierarchies
  2. Actors: Protect mutable state with automatic synchronization
  3. Sendable Types: Compile-time verification of thread-safe data
  4. Cooperative Cancellation: Explicit cancellation checks, not forced termination
  5. Task Groups: Manage dynamic collections of concurrent tasks
  6. Async Sequences: Process streams of values asynchronously
  7. MainActor: Ensure UI updates happen on the main thread
  8. No Data Races: Type system prevents concurrent access to mutable state
  9. Suspension Points: Explicit await shows where tasks can suspend
  10. Task Priorities: Express importance for scheduling decisions

Structured Concurrency

Structured concurrency means async operations have well-defined scopes and lifetimes. Unlike unstructured approaches (like Grand Central Dispatch or callbacks), tasks in structured concurrency:

  • Are organized in parent-child hierarchies
  • Automatically propagate cancellation to children
  • Complete (or get cancelled) before their scope exits
  • Make it impossible to "fire and forget" tasks that outlive their context

This structure eliminates common bugs: leaked tasks that continue running after they're no longer needed, orphaned tasks with no way to cancel them, and unclear ownership of async work.

Task Hierarchies

Tasks created within an async context become child tasks. Cancellation and priority flow down the hierarchy automatically:

// GOOD: Task hierarchy with automatic cancellation
func processOrder(_ order: Order) async throws {
    // Parent task starts here

    try await withThrowingTaskGroup(of: Void.self) { group in
        // Child task 1
        group.addTask {
            try await validateInventory(order)
        }

        // Child task 2
        group.addTask {
            try await validatePayment(order)
        }

        // Child task 3
        group.addTask {
            try await validateShipping(order)
        }

        // If any child throws, all other children are automatically cancelled
        try await group.waitForAll()
    }

    // Only reaches here if all validations succeeded
    try await confirmOrder(order)
}

// If processOrder is cancelled, all child tasks are automatically cancelled

Unstructured Tasks

Sometimes you need unstructured concurrency - tasks that outlive their creation scope. Task {} creates an unstructured task that inherits the current actor context and priority; Task.detached {} drops even those. In both cases you lose automatic cancellation - nothing cancels the task for you when its creating scope ends:

// GOOD: Unstructured task for background work
class AnalyticsService {
    func logEvent(_ event: String) {
        // Detached task - doesn't inherit priority, actor context, or cancellation
        Task.detached(priority: .background) {
            await self.sendToServer(event)
        }
        // Function returns immediately, task continues in background
    }
}

// USE CAREFULLY: Regular Task inherits context but is still unstructured
class DataSync {
    func syncData() {
        Task {
            // Inherits the caller's priority and actor context,
            // but is NOT cancelled automatically by any parent task
            await performSync()
        }
        // Returns immediately; nothing manages the task's lifetime
    }
}

// BAD: Fire-and-forget can lead to leaks
func processPayment() {
    Task {
        await longRunningOperation()
        // If the view controller is dismissed, this still runs!
    }
}

The difference: Task {} inherits the caller's actor context and priority but is still unstructured - it is not cancelled when the creating task is cancelled. Task.detached {} additionally drops context and priority inheritance. Use unstructured tasks sparingly - they're harder to reason about, and without a stored handle there is no way to cancel them, which can cause resource leaks.
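When unstructured work is unavoidable, you can recover manual control over its lifetime by storing the task handle and cancelling it yourself. A minimal sketch - the controller type, loadItems(), Item, and display(_:) are hypothetical stand-ins:

```swift
// Sketch: keeping a handle to unstructured work so it can be cancelled
final class ItemListController {
    private var loadTask: Task<Void, Never>?

    func startLoading() {
        // Cancel any in-flight load before starting a new one
        loadTask?.cancel()

        loadTask = Task {
            let items = await loadItems()
            guard !Task.isCancelled else { return }
            display(items)
        }
    }

    func viewDidDisappear() {
        // Without this, the unstructured task would outlive the screen
        loadTask?.cancel()
        loadTask = nil
    }

    private func display(_ items: [Item]) { /* update UI */ }
}
```

The stored handle restores, by hand, the cancellation guarantee that structured concurrency would have given for free.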


Actors

Actors are reference types that protect their mutable state from concurrent access. Only one task can execute actor methods at a time - the runtime automatically serializes access. This eliminates data races without manual locking.

Actors solve a fundamental concurrency problem: shared mutable state. Traditional approaches use locks (error-prone, can deadlock) or message passing (verbose, indirect). Actors provide automatic synchronization - you write straightforward code, and the compiler inserts synchronization.

Actor Basics

Actor isolation is enforced at compile time. Accessing actor properties or calling actor methods from outside the actor requires await, creating a suspension point:

// GOOD: Actor protects mutable state
actor PaymentProcessor {
    private var processing: Set<String> = []
    private var completed: [String: PaymentResult] = [:]

    // Actor-isolated methods can access state synchronously
    func startProcessing(_ paymentId: String) -> Bool {
        guard !processing.contains(paymentId) else {
            return false // Already processing
        }
        processing.insert(paymentId)
        return true
    }

    func completeProcessing(_ paymentId: String, result: PaymentResult) {
        processing.remove(paymentId)
        completed[paymentId] = result
    }

    func getResult(_ paymentId: String) -> PaymentResult? {
        return completed[paymentId]
    }
}

// Usage from outside the actor requires await
func processPayment(_ payment: Payment) async {
    let processor = PaymentProcessor()

    // Each access is automatically serialized
    let started = await processor.startProcessing(payment.id)
    guard started else { return }

    let result = await gateway.process(payment)
    await processor.completeProcessing(payment.id, result: result)
}

Actor Reentrancy

Actors are reentrant - when an actor-isolated method awaits, the actor can start executing other methods. This prevents deadlocks but requires careful reasoning:

// REENTRANCY: State can change across await
actor BankAccount {
    private var balance: Decimal = 0

    func withdraw(_ amount: Decimal) async throws {
        guard balance >= amount else {
            throw AccountError.insufficientFunds
        }

        // Suspension point - other methods can run here!
        await logTransaction("Withdrawal: \(amount)")

        // BAD: Balance might have changed!
        balance -= amount
    }

    // GOOD: Re-check condition after await
    func withdrawSafe(_ amount: Decimal) async throws {
        guard balance >= amount else {
            throw AccountError.insufficientFunds
        }

        await logTransaction("Withdrawal: \(amount)")

        // Re-verify condition
        guard balance >= amount else {
            throw AccountError.insufficientFunds
        }

        balance -= amount
    }
}

The key insight: assume actor state can change across any await. Either re-check conditions after suspension points, or structure code to avoid multiple suspension points in critical sections.
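The second strategy - keeping the critical section synchronous - can be sketched by reordering the BankAccount example so the check and the mutation happen with no await between them (AccountError and logTransaction as in the example above):

```swift
actor BankAccount {
    private var balance: Decimal = 0

    // GOOD: check and mutate in one synchronous section,
    // then await only after the state change is committed
    func withdrawThenLog(_ amount: Decimal) async throws {
        guard balance >= amount else {
            throw AccountError.insufficientFunds
        }
        balance -= amount // No suspension between check and mutation

        // Reentrancy here can no longer invalidate the check above
        await logTransaction("Withdrawal: \(amount)")
    }
}
```

Since the only suspension point comes after the balance is updated, no interleaved call can observe or invalidate the precondition.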

Nonisolated Methods

Some actor methods don't need isolation. Mark them nonisolated to allow synchronous access:

actor PaymentProcessor {
    let processorId: String // Immutable
    private var stats: ProcessingStats // Mutable

    init(processorId: String) {
        self.processorId = processorId
        self.stats = ProcessingStats()
    }

    // Immutable properties can be accessed synchronously
    nonisolated var id: String {
        return processorId
    }

    // Methods that only read immutable state
    nonisolated func generateLogPrefix() -> String {
        return "[\(processorId)]"
    }

    // Mutable state requires isolation
    func incrementProcessed() {
        stats.processed += 1
    }
}

// Nonisolated members don't require await
let processor = PaymentProcessor(processorId: "proc-1")
print(processor.id) // No await needed
print(processor.generateLogPrefix()) // No await needed
await processor.incrementProcessed() // Requires await

Sendable Types

The Sendable protocol marks types that can be safely passed across concurrency boundaries. This includes value types (structs, enums with Sendable members), immutable classes (final classes with only let properties), and actors.

Sendable is checked at compile time, catching data race bugs before they happen. When you pass non-Sendable types across tasks or to actors, the compiler errors. This prevents accidentally sharing mutable state between concurrent contexts.

Sendable Conformance

Value types and immutable types are automatically Sendable:

// Automatically Sendable: struct with Sendable properties
struct Payment: Sendable {
    let id: String
    let amount: Decimal
    let timestamp: Date
}

// Sendable: enum with Sendable associated values
enum PaymentResult: Sendable {
    case success(transactionId: String)
    case failure(error: PaymentError)
}

// Sendable: immutable class
final class PaymentConfiguration: Sendable {
    let apiKey: String
    let endpoint: URL

    init(apiKey: String, endpoint: URL) {
        self.apiKey = apiKey
        self.endpoint = endpoint
    }
}

// NOT Sendable: mutable class
class PaymentCache {
    var payments: [String: Payment] = [:] // Mutable state
}

// Sendable: actors are implicitly Sendable (conformance shown for clarity)
actor PaymentProcessor: Sendable {
    private var pending: [Payment] = []
}

Unchecked Sendable

When you have thread-safe types that the compiler can't verify (like types using locks internally), use @unchecked Sendable:

// GOOD: @unchecked when you provide your own synchronization
final class ThreadSafeCache<Key: Hashable, Value>: @unchecked Sendable {
    private let lock = NSLock()
    private var storage: [Key: Value] = [:]

    func get(_ key: Key) -> Value? {
        lock.lock()
        defer { lock.unlock() }
        return storage[key]
    }

    func set(_ key: Key, value: Value) {
        lock.lock()
        defer { lock.unlock() }
        storage[key] = value
    }
}

// The @unchecked assertion: "I guarantee thread safety"
// Compiler trusts you - ensure your synchronization is correct!

Use @unchecked Sendable carefully. It's an escape hatch that disables compiler verification. You're responsible for ensuring thread safety through manual synchronization.

Sendable Closures

Closures are Sendable if they only capture Sendable values:

// Sendable closure: captures only Sendable values
func processPayments(_ payments: [Payment]) async {
    await withTaskGroup(of: PaymentResult.self) { group in
        for payment in payments {
            group.addTask {
                // payment is Sendable (struct)
                return await processPayment(payment)
            }
        }
    }
}

// NOT Sendable: captures non-Sendable value
class PaymentService {
    func processAll(_ payments: [Payment]) async {
        await withTaskGroup(of: Void.self) { group in
            for payment in payments {
                group.addTask {
                    await self.process(payment) // Captures self (non-Sendable class)
                }
            }
        }
    }
}

// Sendable: actor is Sendable
actor PaymentService {
    func processAll(_ payments: [Payment]) async {
        await withTaskGroup(of: Void.self) { group in
            for payment in payments {
                group.addTask {
                    await self.process(payment) // OK: self is actor (Sendable)
                }
            }
        }
    }
}

Task Groups

Task groups manage dynamic collections of concurrent tasks. Unlike async let (fixed number of tasks), task groups handle variable numbers of tasks, like processing arrays of unknown size.

Task groups enforce structured concurrency for dynamic work: all child tasks must complete before the group exits. This prevents orphaned tasks and makes cancellation automatic. For basic concurrency patterns, see our Swift General guidelines.
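For contrast, a fixed, compile-time-known set of child tasks is usually clearer with async let. A sketch, assuming fetchProfile, fetchOrders, and a Dashboard type:

```swift
// async let: fixed number of structured child tasks
func loadDashboard(userId: String) async throws -> Dashboard {
    async let profile = fetchProfile(userId) // child task starts immediately
    async let orders = fetchOrders(userId)   // runs concurrently with profile

    // Awaiting both; if either throws, the other child is cancelled
    return try await Dashboard(profile: profile, orders: orders)
}
```

Reach for a task group only when the number of concurrent children depends on runtime data.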

WithTaskGroup

Use withTaskGroup when child tasks return values you want to collect:

// GOOD: Task group for parallel processing
func processPayments(_ payments: [Payment]) async throws -> [PaymentReceipt] {
    try await withThrowingTaskGroup(of: PaymentReceipt.self) { group in
        // Add task for each payment
        for payment in payments {
            group.addTask {
                try await self.processPayment(payment)
            }
        }

        // Collect results as they complete
        var receipts: [PaymentReceipt] = []
        for try await receipt in group {
            receipts.append(receipt)
        }

        return receipts
    }
}

// GOOD: Process results as they arrive
func fetchUserData(userIds: [String]) async throws -> [User] {
    try await withThrowingTaskGroup(of: User.self) { group in
        for userId in userIds {
            group.addTask {
                try await self.fetchUser(id: userId)
            }
        }

        // Process results in completion order (not submission order)
        var users: [User] = []
        for try await user in group {
            users.append(user)
            print("Loaded user: \(user.name)") // Prints as each completes
        }

        return users
    }
}

Task Group Cancellation

When a task group is cancelled or a child task throws, all remaining tasks are automatically cancelled:

// GOOD: Automatic cancellation on error
func validateOrder(_ order: Order) async throws {
    try await withThrowingTaskGroup(of: Void.self) { group in
        group.addTask {
            try await validateInventory(order)
        }

        group.addTask {
            try await validatePayment(order)
        }

        group.addTask {
            try await validateShipping(order)
        }

        // If any validation fails:
        // 1. That task throws
        // 2. Other tasks are automatically cancelled
        // 3. The group rethrows the error
        try await group.waitForAll()
    }
}

// GOOD: Explicit cancellation with a timeout task
func processWithTimeout(_ items: [Item]) async throws -> [Result] {
    try await withThrowingTaskGroup(of: Result.self) { group in
        for item in items {
            group.addTask {
                try await process(item)
            }
        }

        // Timeout task: if it finishes first, its error cancels the rest
        group.addTask {
            try await Task.sleep(for: .seconds(30))
            throw ProcessError.timeout
        }

        // Collect exactly one result per item; if the timeout task
        // throws first, next() rethrows and the group cancels the rest
        var results: [Result] = []
        for _ in items.indices {
            if let result = try await group.next() {
                results.append(result)
            }
        }

        // All items done - stop the still-sleeping timeout task
        group.cancelAll()
        return results
    }
}

Async Sequences

Async sequences process streams of values asynchronously, similar to how sequences process collections synchronously. They're perfect for network streams, file reading, or any scenario where values arrive over time.

The AsyncSequence protocol mirrors Sequence, but its next() method is async. You iterate with for await, which suspends between values. This makes working with async data streams as natural as working with arrays.
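To make the Sequence parallel concrete, here is a minimal hand-rolled conformance - a countdown that suspends briefly between values:

```swift
// Minimal AsyncSequence conformance: next() is async
struct Countdown: AsyncSequence {
    typealias Element = Int
    let start: Int

    struct AsyncIterator: AsyncIteratorProtocol {
        var current: Int

        mutating func next() async -> Int? {
            guard current > 0 else { return nil } // nil ends iteration
            try? await Task.sleep(for: .milliseconds(100)) // suspension between values
            defer { current -= 1 }
            return current
        }
    }

    func makeAsyncIterator() -> AsyncIterator {
        AsyncIterator(current: start)
    }
}

// Usage: each iteration suspends at the await inside next()
for await n in Countdown(start: 3) {
    print(n) // 3, then 2, then 1
}
```

Most of the time you will consume sequences others provide, but the conformance shows there is no magic: an async sequence is just a factory for an iterator whose next() can suspend.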

Consuming Async Sequences

URLSession, file APIs, and many Foundation APIs return async sequences:

// GOOD: Processing async sequence
func downloadLargeFile(from url: URL) async throws {
    let (bytes, response) = try await URLSession.shared.bytes(from: url)

    guard let httpResponse = response as? HTTPURLResponse,
          httpResponse.statusCode == 200 else {
        throw NetworkError.invalidResponse
    }

    var totalBytes = 0

    // Process bytes as they arrive
    for try await _ in bytes {
        totalBytes += 1

        if totalBytes % 1_000_000 == 0 {
            print("Downloaded \(totalBytes / 1_000_000)MB")
        }
    }

    print("Complete: \(totalBytes) bytes")
}

// GOOD: Async sequence with a collection deadline
func fetchNotifications() async throws -> [Notification] {
    let stream = notificationService.streamNotifications()
    let deadline = ContinuousClock.now.advanced(by: .seconds(5))

    var notifications: [Notification] = []

    for try await notification in stream {
        notifications.append(notification)

        // Note: the deadline is only checked when a value arrives
        if ContinuousClock.now >= deadline {
            break // Stop collecting after roughly 5 seconds
        }
    }

    return notifications
}

Creating Async Sequences

Create custom async sequences with AsyncStream:

// GOOD: Custom async sequence
func monitorPayments() -> AsyncStream<Payment> {
    AsyncStream { continuation in
        // Start monitoring
        let observer = PaymentObserver { payment in
            continuation.yield(payment)
        }

        // Cleanup when sequence is cancelled
        continuation.onTermination = { _ in
            observer.stop()
        }

        observer.start()
    }
}

// Usage
func watchPayments() async {
    let paymentStream = monitorPayments()

    for await payment in paymentStream {
        print("New payment: \(payment.id)")

        if shouldStopMonitoring {
            break // Triggers onTermination
        }
    }
}

// GOOD: Transform async sequences
extension AsyncSequence {
    func map<T>(_ transform: @escaping (Element) async throws -> T) -> AsyncThrowingStream<T, Error> {
        AsyncThrowingStream { continuation in
            Task {
                do {
                    for try await element in self {
                        let transformed = try await transform(element)
                        continuation.yield(transformed)
                    }
                    continuation.finish()
                } catch {
                    continuation.finish(throwing: error)
                }
            }
        }
    }
}

// Usage (the custom map yields a throwing stream, so iteration uses try)
let paymentAmounts = monitorPayments().map { $0.amount }
for try await amount in paymentAmounts {
    print("Amount: \(amount)")
}

MainActor

The MainActor is a global actor that represents the main thread. UI updates must happen on the main thread, and @MainActor provides compile-time enforcement of this requirement.

Before Swift concurrency, you'd use DispatchQueue.main.async {} to update UI. This is error-prone - easy to forget, and violations cause crashes or corruption. @MainActor makes main-thread requirements explicit in the type system, catching mistakes at compile time.
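For one-off hops from non-isolated code - when annotating a whole type is overkill - MainActor.run gives the same compile-checked guarantee. A sketch; feedURL, parse, and statusLabel are hypothetical:

```swift
// One-off hop to the main actor from background work
func refresh() async throws {
    // Network and parsing stay off the main thread
    let (data, _) = try await URLSession.shared.data(from: feedURL)
    let items = parse(data)

    // Explicit, compiler-checked main-thread section
    await MainActor.run {
        statusLabel.text = "Loaded \(items.count) items"
    }
}
```

Unlike DispatchQueue.main.async, a mistake here (touching main-actor state outside the run block) is a compile error, not a runtime crash.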

MainActor Isolation

Mark types, methods, or properties with @MainActor to ensure they're only accessed on the main thread:

// GOOD: MainActor for view model
@MainActor
class PaymentViewModel: ObservableObject {
    @Published var payments: [Payment] = []
    @Published var isLoading = false
    @Published var errorMessage: String?

    private let repository: PaymentRepository

    init(repository: PaymentRepository) {
        self.repository = repository
    }

    // All methods automatically run on main thread
    func loadPayments() async {
        isLoading = true
        defer { isLoading = false }

        do {
            // Repository runs on background thread
            let payments = try await repository.fetchPayments()

            // UI update automatically on main thread
            self.payments = payments
        } catch {
            errorMessage = error.localizedDescription
        }
    }
}

// SwiftUI view automatically on MainActor
struct PaymentListView: View {
    @StateObject private var viewModel: PaymentViewModel

    var body: some View {
        List(viewModel.payments) { payment in
            PaymentRow(payment: payment)
        }
        .task {
            await viewModel.loadPayments()
        }
    }
}

Nonisolated Methods

Background work in @MainActor types needs nonisolated annotation:

@MainActor
class ImageProcessor {
    var processedImages: [UIImage] = []

    // Heavy processing should not block main thread
    nonisolated func processImage(_ data: Data) async -> UIImage? {
        // Runs on background thread
        guard let image = UIImage(data: data) else {
            return nil
        }

        // Expensive operation
        return await applyFilters(to: image)
    }

    // Main-thread method to store result
    func addProcessedImage(_ image: UIImage) {
        processedImages.append(image)
    }

    // Usage combines both
    func processAndStore(_ data: Data) async {
        // Runs on background
        guard let processed = await processImage(data) else {
            return
        }

        // Automatically back on main thread
        addProcessedImage(processed)
    }
}

Task Cancellation

Cancellation in Swift concurrency is cooperative - tasks must explicitly check for cancellation and respond appropriately. Cancellation is signaled, not forced. This prevents undefined state from forcibly terminating tasks mid-operation.

Check Task.isCancelled or call Task.checkCancellation() at appropriate points. Long-running operations should check cancellation periodically to be responsive. For detailed patterns, see our iOS Performance guidelines.

Checking Cancellation

// GOOD: Check cancellation in loops
func processLargeDataset(_ items: [Item]) async throws {
    for item in items {
        // Check cancellation before each iteration
        try Task.checkCancellation()

        await process(item)
    }
}

// GOOD: Respond to cancellation without throwing
func downloadFile() async -> Result<Data, Error> {
    for chunk in chunks {
        if Task.isCancelled {
            return .failure(DownloadError.cancelled)
        }

        await downloadChunk(chunk)
    }

    return .success(completeFile)
}

// GOOD: Cleanup on cancellation
func streamData() async throws {
    let connection = await openConnection()

    defer {
        // Always cleanup, even on cancellation
        Task { await connection.close() }
    }

    for try await data in connection.stream {
        try Task.checkCancellation()
        process(data)
    }
}

Cancellation Handlers

Install cleanup code that runs when a task is cancelled:

// GOOD: Cancellation handler bridging a cancellable task handle
// (uploader and makeUploadTask(for:) are hypothetical stand-ins for
// any API that returns a handle exposing value and cancel())
func uploadFile(_ file: File) async throws {
    let uploadTask = uploader.makeUploadTask(for: file)

    return try await withTaskCancellationHandler {
        try await uploadTask.value
    } onCancel: {
        uploadTask.cancel()
    }
}

// GOOD: Resource cleanup on cancellation
func processWithResource() async throws {
    let resource = await acquireResource()

    return try await withTaskCancellationHandler {
        try await performWork(with: resource)
    } onCancel: {
        Task {
            await resource.release()
        }
    }
}

Actor Isolation Advanced

Actors provide automatic synchronization, but understanding the nuances of actor isolation is essential for building correct concurrent systems. Actor isolation determines what code can access actor state, how methods are scheduled, and when data races can occur.

Understanding Actor Boundaries

Actor boundaries exist wherever code crosses from one execution context to another. When you call an actor method from non-actor code or from a different actor, you cross a boundary. At boundaries, await is required, and the runtime schedules the work.

The key insight: within an actor's methods, you have exclusive access to the actor's state. Across boundaries (marked by await), other methods might execute, potentially changing state. This is actor reentrancy - the same mechanism that prevents deadlocks but requires careful reasoning about state consistency.

Actor Isolation Rules

The Swift compiler enforces actor isolation through the type system. These rules ensure data race freedom without requiring manual synchronization.

Within an actor: You can access mutable state directly. Methods can call other isolated methods synchronously. This is the actor's "isolated context" - you have exclusive access.

Outside an actor: All access requires await. You cannot directly access properties - you must call async methods. This enforces synchronization at compile time.

Sendable requirement: Values crossing actor boundaries must be Sendable. This prevents accidentally sharing mutable state between actors.

// GOOD: Actor isolation patterns
actor PaymentProcessor {
    private var pendingPayments: [String: Payment] = [:]
    private var processedCount: Int = 0

    // Isolated methods access state directly (no await)
    func addPending(_ payment: Payment) {
        pendingPayments[payment.id] = payment
    }

    func removePending(_ paymentId: String) -> Payment? {
        let payment = pendingPayments.removeValue(forKey: paymentId)
        if payment != nil {
            processedCount += 1
        }
        return payment
    }

    // Isolated method calls other isolated method synchronously
    func processPending(_ paymentId: String) async throws {
        // Direct synchronous access - we're inside the actor
        guard let payment = removePending(paymentId) else {
            throw ProcessError.paymentNotFound
        }

        // Suspension point - crossing actor boundary
        try await gateway.process(payment)
        // After await, state might have changed!

        // Re-verify assumptions after suspension
        print("Processed: \(processedCount)")
    }

    // Nonisolated method cannot access mutable state
    nonisolated func getProcessorId() -> String {
        return "processor-\(UUID().uuidString)"
    }
}

// BAD: Attempting to access actor state externally
func badAccess() async {
    let processor = PaymentProcessor()

    // ERROR: Cannot access property directly
    // let count = processor.processedCount

    // Must use method with await
    await processor.addPending(payment)
}

// GOOD: Proper external access
func goodAccess() async {
    let processor = PaymentProcessor()

    // All access requires await
    await processor.addPending(payment)
    try? await processor.processPending(payment.id)
}

The compiler's enforcement is strict. If you try to access actor state without await, you get a compile error. This makes data races impossible - you can't accidentally forget synchronization because the type system requires it.

Isolated Parameters

Methods can accept isolated parameters, allowing synchronous access to actor state when the caller is already in the actor's context. This optimization eliminates unnecessary suspension when the caller is already synchronized.

Isolated parameters are an advanced feature for library authors. They enable APIs that work efficiently whether called from inside or outside the actor, adapting based on calling context.

// GOOD: Isolated parameters for optimization
actor DataStore {
    private var data: [String: Data] = [:]

    func store(_ key: String, data: Data) {
        self.data[key] = data
    }

    func retrieve(_ key: String) -> Data? {
        return data[key]
    }
}

// Free function with an isolated parameter: the function runs in the
// passed-in store's isolation domain, so actor calls are synchronous.
// (An actor method is already isolated to self, so the isolated
// parameter belongs on a function outside the actor.)
func bulkStore(_ items: [(String, Data)], in store: isolated DataStore) {
    for (key, data) in items {
        store.store(key, data: data) // Direct call - no await
    }
}

// Usage from within the actor
extension DataStore {
    func loadFromDisk() async throws {
        let items = try await readFromDisk()
        // 'self' is already isolated - the call stays synchronous
        bulkStore(items, in: self)
    }
}

// GOOD: Generic isolated operations
func performIsolated<A: Actor>(
    on actor: isolated A,
    operation: (isolated A) -> Void
) {
    // Operation executes in the actor's isolation domain
    operation(actor)
}
Isolated parameters are most useful for batch operations where the caller is already in the actor's context. Without isolated parameters, each operation would require await, adding unnecessary overhead. With isolated parameters, the entire batch executes synchronously when possible.


AsyncSequence Advanced Patterns

AsyncSequence is Swift's abstraction for asynchronous streams of values. Beyond basic iteration, AsyncSequence supports transformation operators, buffering strategies, error handling, and backpressure management. Understanding these patterns is essential for building efficient streaming pipelines.

AsyncStream with Buffering

AsyncStream is the primary way to create custom async sequences. It provides a continuation-based API where you control when values are yielded. The buffering policy determines what happens when you yield faster than consumers iterate.

Buffering policies trade memory for completeness. The default, .unbounded, never drops values but lets the buffer grow without limit if the consumer falls behind; yield always returns immediately rather than blocking the producer. The bounded policies, .bufferingOldest(n) and .bufferingNewest(n), cap memory by dropping values once the buffer is full.

// GOOD: AsyncStream with the default (.unbounded) policy
func temperatureStream() -> AsyncStream<Double> {
    AsyncStream { continuation in
        let sensor = TemperatureSensor()

        sensor.onReading = { temperature in
            // .unbounded: every reading is buffered until consumed;
            // yield returns immediately and never blocks the sensor
            continuation.yield(temperature)
        }

        continuation.onTermination = { _ in
            sensor.stop()
        }

        sensor.start()
    }
}

// GOOD: AsyncStream with bounded buffering
func highFrequencyDataStream() -> AsyncStream<DataPoint> {
    AsyncStream(DataPoint.self, bufferingPolicy: .bufferingNewest(100)) { continuation in
        // Buffer up to 100 most recent values
        // Drops oldest when buffer is full

        let source = HighFrequencyDataSource()

        source.onData = { dataPoint in
            continuation.yield(dataPoint)
        }

        continuation.onTermination = { termination in
            source.stop()
            if case .cancelled = termination {
                print("Stream cancelled by consumer")
            }
        }

        source.start()
    }
}

// GOOD: Reacting to drops instead of manual buffering
// (EventSource with pause()/resume() is a hypothetical rate-controllable source)
func smartStream() -> AsyncStream<Event> {
    AsyncStream(bufferingPolicy: .bufferingNewest(50)) { continuation in
        let source = EventSource()

        source.onEvent = { event in
            // yield reports whether the value was buffered or dropped
            switch continuation.yield(event) {
            case .dropped:
                // Buffer full - backpressure: slow down the source
                source.pause()
            case .enqueued:
                source.resume()
            default:
                break // .terminated: consumer is gone
            }
        }

        continuation.onTermination = { _ in
            source.stop()
        }

        source.start()
    }
}

// The real policies (AsyncStream.Continuation.BufferingPolicy):
// .unbounded           // Default: buffer everything, memory can grow
// .bufferingOldest(n)  // Keep the oldest n, drop new values when full
// .bufferingNewest(n)  // Keep the newest n, drop old values when full

The choice of policy depends on your use case. Unbounded never drops data, but a slow consumer lets the buffer grow without limit. Buffering oldest preserves history - useful for logs or events where the earliest data matters. Buffering newest prioritizes recency - useful for sensors or market data where only current values matter.

Async Sequence Operators

AsyncSequence supports transformation, filtering, and combining operations similar to Combine or other reactive libraries. These operators compose to build complex processing pipelines while maintaining async semantics.

Standard library operators include map, compactMap, filter, prefix, dropFirst, and reduce. You can build custom operators by wrapping sequences in new AsyncStreams that apply transformations.

// GOOD: Built-in async sequence operators
func processPayments() async throws {
    let paymentStream = paymentSource.stream()

    // Closures are parenthesized so the chain parses cleanly in a for-in
    for try await amount in paymentStream
        .filter({ $0.amount > 100 })
        .map({ $0.amount })
        .prefix(10) {
        print("Large payment: \(amount)")
    }
}

// GOOD: Custom async sequence operators
// (simplified sketches - production versions would need to synchronize
// the mutable state captured across tasks)
extension AsyncSequence {
    func debounce(
        for duration: Duration
    ) -> AsyncThrowingStream<Element, Error> where Self: Sendable, Element: Sendable {
        AsyncThrowingStream { continuation in
            Task {
                var lastValue: Element?
                var debounceTask: Task<Void, Never>?

                do {
                    for try await element in self {
                        lastValue = element

                        // Cancel previous debounce
                        debounceTask?.cancel()

                        // Start new debounce timer
                        debounceTask = Task {
                            try? await Task.sleep(for: duration)

                            if !Task.isCancelled, let value = lastValue {
                                continuation.yield(value)
                            }
                        }
                    }

                    // Wait for final debounce
                    await debounceTask?.value
                    continuation.finish()
                } catch {
                    continuation.finish(throwing: error)
                }
            }
        }
    }

    func throttle(
        for duration: Duration
    ) -> AsyncThrowingStream<Element, Error> where Self: Sendable, Element: Sendable {
        AsyncThrowingStream { continuation in
            Task {
                var lastEmission = ContinuousClock.now

                do {
                    for try await element in self {
                        let now = ContinuousClock.now

                        if now - lastEmission >= duration {
                            continuation.yield(element)
                            lastEmission = now
                        }
                    }
                    continuation.finish()
                } catch {
                    continuation.finish(throwing: error)
                }
            }
        }
    }
}

// Usage: Compose operators (the custom streams throw, hence try)
func monitorPrices() async throws {
    let priceStream = marketData.prices()

    // Debounce rapid updates, then throttle expensive processing
    for try await price in priceStream
        .debounce(for: .milliseconds(500))
        .throttle(for: .seconds(1)) {
        await updateUI(with: price)
    }
}

// GOOD: Combining async sequences
func mergeStreams<T: Sendable>(
    _ stream1: AsyncStream<T>,
    _ stream2: AsyncStream<T>
) -> AsyncStream<T> {
    AsyncStream { continuation in
        let task = Task {
            await withTaskGroup(of: Void.self) { group in
                group.addTask {
                    for await element in stream1 {
                        continuation.yield(element)
                    }
                }

                group.addTask {
                    for await element in stream2 {
                        continuation.yield(element)
                    }
                }
            }

            // Both sources finished - end the merged stream
            continuation.finish()
        }

        continuation.onTermination = { _ in
            task.cancel()
        }
    }
}

These operators enable declarative stream processing. Instead of manually managing subscriptions and callbacks, you compose operators to express transformations. The async/await syntax keeps everything linear and readable, unlike callback-based reactive frameworks.


Task Groups with Error Handling

Task groups manage collections of concurrent tasks, but real-world usage requires sophisticated error handling. Different scenarios need different strategies: fail-fast (cancel all on first error), collect all errors, partial success with fallbacks, or custom retry logic.

Error Handling Strategies

The default behavior with withThrowingTaskGroup is fail-fast: when any child throws, remaining tasks are cancelled and the first error is rethrown. This is appropriate when tasks are interdependent - if one fails, the others can't succeed meaningfully.

For independent tasks where you want to collect all results (successes and failures), use Result types or custom error aggregation. This "try all, report all" pattern is common for batch operations where partial success is acceptable.

// GOOD: Fail-fast error handling (default)
func validateOrderAllOrNothing(_ order: Order) async throws {
    try await withThrowingTaskGroup(of: Void.self) { group in
        group.addTask { try await validateInventory(order) }
        group.addTask { try await validatePayment(order) }
        group.addTask { try await validateShipping(order) }

        // First error cancels remaining tasks and throws
        try await group.waitForAll()
    }
}

// GOOD: Collect all errors (best-effort)
// Result<Void, Error> keeps each catch exhaustive (a `catch let error as
// ValidationError` alone would not compile in a non-throwing task group);
// narrow to ValidationError at the call site if needed
func validateOrderCollectErrors(_ order: Order) async -> [Error] {
    await withTaskGroup(of: Result<Void, Error>.self) { group in
        group.addTask {
            do {
                try await validateInventory(order)
                return .success(())
            } catch {
                return .failure(error)
            }
        }

        group.addTask {
            do {
                try await validatePayment(order)
                return .success(())
            } catch {
                return .failure(error)
            }
        }

        group.addTask {
            do {
                try await validateShipping(order)
                return .success(())
            } catch {
                return .failure(error)
            }
        }

        var errors: [Error] = []
        for await result in group {
            if case .failure(let error) = result {
                errors.append(error)
            }
        }

        return errors
    }
}

// GOOD: Partial success with fallbacks
func fetchUserDataWithFallbacks(userId: String) async -> UserData {
    await withTaskGroup(of: UserDataComponent.self) { group in
        group.addTask {
            do {
                let profile = try await fetchProfile(userId)
                return .profile(profile)
            } catch {
                return .profile(Profile.default())
            }
        }

        group.addTask {
            do {
                let settings = try await fetchSettings(userId)
                return .settings(settings)
            } catch {
                return .settings(Settings.default())
            }
        }

        group.addTask {
            do {
                let preferences = try await fetchPreferences(userId)
                return .preferences(preferences)
            } catch {
                return .preferences(Preferences.default())
            }
        }

        var userData = UserData()
        for await component in group {
            userData.merge(component)
        }

        return userData
    }
}

// GOOD: Custom error aggregation
struct BatchResult<T, E: Error> {
    let successes: [T]
    let failures: [(index: Int, error: E)]

    var hasFailures: Bool { !failures.isEmpty }
    var allSucceeded: Bool { failures.isEmpty }
}

func processBatchWithDetails<T>(
    _ items: [T],
    process: @escaping (T) async throws -> Void
) async -> BatchResult<T, Error> {
    await withTaskGroup(of: (Int, Result<Void, Error>).self) { group in
        for (index, item) in items.enumerated() {
            group.addTask {
                do {
                    try await process(item)
                    return (index, .success(()))
                } catch {
                    return (index, .failure(error))
                }
            }
        }

        var successes: [T] = []
        var failures: [(index: Int, error: Error)] = []

        for await (index, result) in group {
            switch result {
            case .success:
                successes.append(items[index])
            case .failure(let error):
                failures.append((index, error))
            }
        }

        return BatchResult(successes: successes, failures: failures)
    }
}

// Usage
let result = await processBatchWithDetails(payments) { payment in
    try await processPayment(payment)
}

if result.hasFailures {
    for (index, error) in result.failures {
        print("Payment \(index) failed: \(error)")
    }
}

Choosing an error strategy: Fail-fast for interdependent tasks where partial completion is meaningless. Collect errors for validation where you want to show all problems at once. Fallbacks for resilient systems where degraded service is better than no service. Custom aggregation for batch operations where you need detailed failure reporting.
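The remaining strategy, custom retry logic, can be sketched as a small helper that wraps any async operation. withRetry is a hypothetical utility for illustration, not a standard library API:

```swift
// Hypothetical helper: retry an async operation with exponential backoff.
// A sketch of the "custom retry" strategy, not a library API.
func withRetry<T>(
    attempts: Int = 3,
    initialDelay: Duration = .milliseconds(100),
    operation: () async throws -> T
) async throws -> T {
    var delay = initialDelay
    for attempt in 1...attempts {
        do {
            return try await operation()
        } catch where attempt < attempts {
            // Swallow the error and back off before the next attempt;
            // on the final attempt this catch doesn't match, so the
            // error propagates to the caller
            try await Task.sleep(for: delay)
            delay = delay * 2
        }
    }
    fatalError("unreachable: the loop always returns or throws")
}
```

Inside a task group, each group.addTask body can wrap its work in a helper like this, so individual children retry independently before the group ever sees a failure.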


MainActor Isolation In-Depth

MainActor ensures UI operations execute on the main thread, but understanding isolation semantics, performance implications, and migration patterns requires deeper knowledge. MainActor is a global actor - there's exactly one, representing the main dispatch queue.
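For intuition about "global actor", here is a hedged sketch of a custom one. DatabaseActor and RecordStore are made-up names; annotating a type with a global actor isolates all of its members to that actor, exactly as @MainActor does for the main actor:

```swift
// Sketch: a custom global actor, analogous to MainActor.
// DatabaseActor is a hypothetical name for illustration.
@globalActor
actor DatabaseActor {
    static let shared = DatabaseActor()
}

// Every member of this class is isolated to DatabaseActor,
// just as @MainActor members are isolated to the main actor.
@DatabaseActor
final class RecordStore {
    private(set) var records: [String] = []

    func write(_ record: String) {
        records.append(record)
    }
}
```

Calling RecordStore.write from outside the actor requires await, which hops onto DatabaseActor's executor — the same mechanics the rest of this section describes for MainActor.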

MainActor Semantics

When a type, function, or property is marked @MainActor, all access must occur on the main actor. The compiler enforces this: calling a MainActor method from a background context requires await, which automatically hops to the main thread.

The automatic hopping is both convenient and a performance consideration. Every cross-actor call involves scheduling work and potential thread switching. For hot paths that cross actor boundaries frequently, this overhead can accumulate.
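A minimal sketch of that hop cost (HopCounter is illustrative only): N awaits from a nonisolated context schedule N separate jobs onto the main actor, while a MainActor-isolated caller pays for at most one hop:

```swift
// Sketch: counting main-actor hops. HopCounter is a made-up type.
@MainActor
final class HopCounter {
    private(set) var value = 0
    func increment() { value += 1 }
}

// Nonisolated caller: every `await` is a separate hop onto the main actor
func incrementOneByOne(_ counter: HopCounter, times: Int) async {
    for _ in 0..<times {
        await counter.increment()   // one hop per iteration
    }
}

// MainActor caller: at most one hop to reach it, then the loop
// runs synchronously with no suspensions
@MainActor
func incrementInOneHop(_ counter: HopCounter, times: Int) {
    for _ in 0..<times {
        counter.increment()         // no suspension, no hop
    }
}
```

Both produce the same result; they differ only in how many times work is scheduled onto the main queue, which is the overhead the batching patterns later in this section reduce.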

MainActor Property Wrappers

SwiftUI property wrappers like @StateObject, @ObservedObject, and @Published are implicitly MainActor-isolated. This ensures UI updates happen on the main thread without explicit annotations. However, initialization and some operations have subtle isolation rules.

// GOOD: MainActor view model
@MainActor
class PaymentViewModel: ObservableObject {
    @Published var payments: [Payment] = []
    @Published var isLoading: Bool = false
    @Published var error: String?
    @Published var statisticsText: String = ""

    private let repository: PaymentRepository

    init(repository: PaymentRepository) {
        // The init is MainActor-isolated like the rest of the class;
        // under strict concurrency, creating the view model from a
        // non-main context requires await
        self.repository = repository
    }

    func loadPayments() async {
        // Already on MainActor, no await needed for property updates
        isLoading = true
        error = nil

        do {
            // Repository method runs in its own isolation domain
            let payments = try await repository.fetchPayments()

            // Automatically back on MainActor for UI updates
            self.payments = payments
        } catch {
            self.error = error.localizedDescription
        }

        isLoading = false
    }

    // Heavy computation should be nonisolated
    nonisolated func calculateStatistics(for payments: [Payment]) async -> PaymentStatistics {
        // Runs off the main actor, doesn't block UI
        await withTaskGroup(of: Decimal.self) { group in
            // Offload the reduction to a child task
            group.addTask {
                payments.reduce(0) { $0 + $1.amount }
            }

            var total: Decimal = 0
            for await result in group {
                total = result
            }

            return PaymentStatistics(total: total, count: payments.count)
        }
    }

    func updateStats() async {
        // Call nonisolated method - runs off the main actor
        let stats = await calculateStatistics(for: payments)

        // Back on MainActor for UI update (automatic)
        self.statisticsText = stats.description
    }
}

// GOOD: Nonisolated methods for heavy work
@MainActor
class ImageProcessor: ObservableObject {
    @Published var processedImages: [UIImage] = []

    // Expensive work doesn't block main thread
    nonisolated func processImages(_ images: [UIImage]) async -> [UIImage] {
        await withTaskGroup(of: UIImage.self) { group in
            for image in images {
                group.addTask {
                    await self.applyFilters(to: image)
                }
            }

            var processed: [UIImage] = []
            for await image in group {
                processed.append(image)
            }
            return processed
        }
    }

    nonisolated func applyFilters(to image: UIImage) async -> UIImage {
        // Heavy image processing off the main actor
        // ... (apply filters, resize, etc.)
        return image // placeholder for the filtered result
    }

    func processAndUpdate(_ images: [UIImage]) async {
        // Process on background
        let processed = await processImages(images)

        // Update UI on main thread (automatic)
        self.processedImages = processed
    }
}

The pattern: MainActor for properties and UI coordination logic, nonisolated for expensive computations. This keeps the UI responsive while maintaining the safety guarantees of actor isolation.

MainActor Performance Considerations

Every await crossing into MainActor involves scheduling work on the main dispatch queue. For high-frequency operations, this overhead becomes significant. Profile your code and batch MainActor calls where possible.

// BAD: Excessive MainActor hops
@MainActor
class RealTimeChart: ObservableObject {
    @Published var dataPoints: [DataPoint] = []

    func addDataPoint(_ point: DataPoint) {
        dataPoints.append(point)
    }
}

func monitorDataStream(chart: RealTimeChart) async {
    for await point in dataStream {
        // Each iteration hops to main thread
        await chart.addDataPoint(point) // Expensive!
    }
}

// GOOD: Batch updates to reduce hops
@MainActor
class RealTimeChart: ObservableObject {
    @Published var dataPoints: [DataPoint] = []

    func addDataPoints(_ points: [DataPoint]) {
        dataPoints.append(contentsOf: points)
    }
}

func monitorDataStream(chart: RealTimeChart) async {
    var buffer: [DataPoint] = []

    for await point in dataStream {
        buffer.append(point)

        // Flush once 50 points have accumulated
        if buffer.count >= 50 {
            await chart.addDataPoints(buffer)
            buffer.removeAll()
        }
    }

    // Final update
    if !buffer.isEmpty {
        await chart.addDataPoints(buffer)
    }
}

Batching MainActor calls reduces scheduling overhead and improves UI responsiveness. Instead of hopping to the main thread for every data point, hop once for a batch of points. The UI updates less frequently but each update is cheaper.
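Count-only batching can leave a partial buffer stale if the stream pauses below the threshold. A hedged sketch that also flushes on age, using ContinuousClock; Sink and the Int data points are simplified stand-ins for the MainActor chart:

```swift
// Sketch: batch by count OR elapsed time. `Sink` stands in for the
// MainActor chart; data points are plain Ints for illustration.
actor Sink {
    private(set) var flushes = 0
    private(set) var total = 0

    func add(_ points: [Int]) {
        flushes += 1
        total += points.count
    }
}

func drain(
    _ stream: AsyncStream<Int>,
    into sink: Sink,
    maxCount: Int = 50,
    maxAge: Duration = .milliseconds(100)
) async {
    var buffer: [Int] = []
    let clock = ContinuousClock()
    var lastFlush = clock.now

    for await point in stream {
        buffer.append(point)

        // Flush when the buffer is full or has aged past maxAge.
        // Caveat: the age check only runs when a new point arrives.
        if buffer.count >= maxCount || lastFlush.duration(to: clock.now) >= maxAge {
            await sink.add(buffer)
            buffer.removeAll(keepingCapacity: true)
            lastFlush = clock.now
        }
    }

    // Flush whatever remains when the stream ends
    if !buffer.isEmpty {
        await sink.add(buffer)
    }
}
```

The age check piggybacks on arrival of the next point; a stream that goes completely silent still holds its buffer until it finishes, which is acceptable for charts but worth noting.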


Concurrency Performance Optimization

Swift concurrency is efficient, but understanding performance characteristics helps you write fast concurrent code. Task creation, context switching, actor contention, and suspension overhead all impact performance.

Task Creation Overhead

Creating tasks has overhead - allocating the task structure, setting up context, and scheduling. For fine-grained parallelism (thousands of tiny tasks), this overhead dominates. Prefer coarser-grained tasks or sequential processing for small workloads.

The break-even point depends on the work being done. As a rule of thumb: if an operation takes less than 10 microseconds, concurrent execution probably adds overhead rather than reducing latency. Profile to verify.

// BAD: Task creation overhead dominates
func processSmallItems(_ items: [Int]) async -> Int {
    await withTaskGroup(of: Int.self) { group in
        for item in items {
            group.addTask {
                return item * 2 // Too cheap to justify task creation
            }
        }

        var sum = 0
        for await result in group {
            sum += result
        }
        return sum
    }
}

// GOOD: Sequential processing for cheap operations
func processSmallItemsEfficiently(_ items: [Int]) -> Int {
    items.reduce(0) { $0 + $1 * 2 }
}

// GOOD: Chunked parallelism for balance
func processLargeItemsEfficiently(_ items: [DataPoint]) async -> [ProcessedData] {
    let chunkSize = max(items.count / ProcessInfo.processInfo.activeProcessorCount, 100)

    return await withTaskGroup(of: [ProcessedData].self) { group in
        for chunk in items.chunked(into: chunkSize) {
            group.addTask {
                // Process chunk sequentially
                chunk.map { processExpensively($0) }
            }
        }

        var results: [ProcessedData] = []
        for await chunkResults in group {
            results.append(contentsOf: chunkResults)
        }
        return results
    }
}

extension Array {
    func chunked(into size: Int) -> [[Element]] {
        stride(from: 0, to: count, by: size).map {
            Array(self[$0..<Swift.min($0 + size, count)])
        }
    }
}

The chunking strategy divides work into CPU-count sized pieces, balancing parallelism with overhead. Each task processes multiple items sequentially, amortizing task creation cost across meaningful work.
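To locate the break-even point on your own hardware, time the variants. A minimal sketch using Clock.measure on the sequential doubling workload from above (an async variant can be timed the same way inside a task):

```swift
// Sketch: timing a sequential workload with ContinuousClock.measure.
let clock = ContinuousClock()
let items = Array(1...10_000)

var sum = 0
let elapsed = clock.measure {
    // Same cheap doubling workload as the examples above
    sum = items.reduce(0) { $0 + $1 * 2 }
}
print("sequential: \(elapsed)")
```

Compare the measured durations across input sizes; the task-group version only wins once per-item work is large enough to amortize task creation.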

Actor Contention

Actors serialize access - only one method executes at a time. High contention (many tasks waiting for actor access) reduces effective parallelism. Actor methods should be fast; move expensive work to nonisolated methods.

Contention shows up as tasks spending time waiting rather than working. If profiling shows actor methods blocking frequently, consider redesigning: split the actor into multiple actors with finer-grained locks, or move work outside the actor.

// BAD: Expensive work inside actor blocks other tasks
actor DataProcessor {
    private var data: [String: Data] = [:]

    func processAndStore(_ key: String, rawData: Data) async throws {
        // Expensive synchronous parsing blocks the actor
        let parsed = try parseComplexData(rawData) // Slow!

        // The await suspends and releases the actor, but splitting the
        // work here also invites reentrancy between parse and store
        let transformed = try await transformData(parsed) // Slow!

        // Finally store
        data[key] = transformed
    }
}

// GOOD: Minimize time inside actor
actor DataProcessor {
    private var data: [String: Data] = [:]

    // Isolated method just updates state (fast)
    func store(_ key: String, data: Data) {
        self.data[key] = data
    }

    func retrieve(_ key: String) -> Data? {
        return data[key]
    }
}

// Heavy lifting outside actor
func processAndStore(
    _ key: String,
    rawData: Data,
    processor: DataProcessor
) async throws {
    // Parse outside actor (any thread)
    let parsed = try parseComplexData(rawData)

    // Transform outside actor (any thread)
    let transformed = try await transformData(parsed)

    // Quick actor call to store result
    await processor.store(key, data: transformed)
}

The refactored version keeps actor methods fast - just reading/writing the dictionary. Expensive parsing and transformation happen outside the actor, allowing concurrent processing. Many tasks can parse simultaneously; only the final store is serialized.
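The other redesign mentioned above, splitting one hot actor into several, can be sketched as key-based sharding. ShardedStore, Shard, and the hashing scheme are illustrative, not a library API:

```swift
import Foundation

// Sketch: shard state across several actors to reduce contention.
// Keys map deterministically to one shard; different shards can
// serve callers concurrently.
actor Shard {
    private var data: [String: Data] = [:]

    func store(_ key: String, _ value: Data) { data[key] = value }
    func retrieve(_ key: String) -> Data? { data[key] }
}

struct ShardedStore {
    private let shards: [Shard]

    init(shardCount: Int = 8) {
        shards = (0..<shardCount).map { _ in Shard() }
    }

    private func shard(for key: String) -> Shard {
        // Stable, non-negative mapping from key hash to shard index
        let index = Int(UInt(bitPattern: key.hashValue) % UInt(shards.count))
        return shards[index]
    }

    func store(_ key: String, _ value: Data) async {
        await shard(for: key).store(key, value)
    }

    func retrieve(_ key: String) async -> Data? {
        await shard(for: key).retrieve(key)
    }
}
```

Sharding trades a single serialization point for several smaller ones; it helps only when contention is spread across keys, and cross-shard invariants become harder to maintain.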

Task Priority and QoS

Task priority influences scheduling but doesn't guarantee order. High-priority tasks are more likely to run sooner, but priorities are hints to the scheduler, not hard constraints. Use priorities to express importance, not enforce sequencing.

Quality of Service (QoS) maps to priority levels and affects which thread pool executes the task. User-interactive and user-initiated tasks run on more responsive threads. Utility and background tasks use more energy-efficient scheduling.

// GOOD: Priority for critical operations
func urgentProcessing() async throws {
    try await withThrowingTaskGroup(of: Void.self) { group in
        // Critical task - high priority
        group.addTask(priority: .high) {
            try await validatePayment()
        }

        // Less critical - normal priority
        group.addTask(priority: .medium) {
            try await updateAnalytics()
        }

        // Background work - low priority
        group.addTask(priority: .low) {
            try await archiveOldData()
        }

        try await group.waitForAll()
    }
}

// GOOD: Explicit QoS for system integration
func downloadInBackground(url: URL) {
    Task.detached(priority: .background) {
        let (data, _) = try await URLSession.shared.data(from: url)
        await storeData(data)
    }
}

// User-facing operation - high priority
func refreshUI() {
    Task(priority: .userInitiated) {
        let data = try await fetchLatestData()
        await updateDisplay(with: data)
    }
}

Priorities help the system make intelligent scheduling decisions. High-priority tasks get resources first, low-priority tasks defer to more important work. This improves perceived performance - user-facing operations complete quickly while background work proceeds opportunistically.


Further Reading

Internal Documentation

External Resources


Summary

Key Takeaways

  1. Structured concurrency - Tasks have explicit lifetimes and hierarchies
  2. Actors - Eliminate data races with automatic synchronization
  3. Sendable - Compile-time verification of thread-safe types
  4. Task groups - Manage dynamic collections of concurrent work
  5. Async sequences - Process streams of async values naturally
  6. MainActor - Enforce UI updates on main thread
  7. Cooperative cancellation - Explicit checks, graceful shutdown
  8. Suspension points - await shows where tasks can be interrupted
  9. No callbacks - Linear async code, no completion handlers
  10. Type safety - Compiler prevents concurrency bugs

Next Steps: Review Swift Testing for testing concurrent code and iOS Performance for concurrency optimization patterns.