Modern Swift Concurrency

This week I released a major update for Trellis, and I finally got to use the new Swift concurrency model. I have to say, it's surprisingly simple to understand and a pleasure to use.

Before we start, a shout-out to the official docs. They are rather good. If you have the time, a full read is a solid investment. If you also want a refresher on other concurrency-related topics, read on!

Also, subscribe for the best Swift and SwiftUI war stories, yadda, yadda! No spam, I promise!

We'll be mostly talking about actors and tasks. There are other subtleties, like global actors, custom executors, and asynchronous sequences, to name a few, but this article doesn't explore the full API. Instead, it focuses on the fundamentals, in such a way that the remaining parts become easy to discover.

Asynchronous vs Parallel

These two terms are sometimes (unfortunately) used interchangeably. To understand why asynchronous behavior doesn't mean running things in parallel, let's take a run loop:

func spin(runloop: RunLoop, dispatcher: Dispatcher) {
    while !runloop.shouldExit {
        if runloop.queue.isEmpty {
            runloop.wait()
        }
        let event = runloop.queue.popFront()
        dispatcher.handle(event)
    }
}

func fireEvent(runloop: RunLoop, event: Event)
{
    runloop.queue.pushBack(event)
    runloop.signal()
}

There are some things missing that would need to be addressed before calling this a proper run loop, like handling reentrancy, but this is the main idea: while the queue is empty, wait for events; once the queue has events, run them. Events resolve to sections of code that you register with the dispatcher, pretty similar to what happens when you call DispatchQueue.main.async.

One can schedule events from any thread (using fireEvent in the above example), at any later time, but ultimately, all events will run one after the other (historically on the run loop thread), in the order they were added to the queue, without interleaving their associated instructions. This behavior is asynchronous in nature because it can be suspended until a new event is added to the queue. But ultimately, two events can never be processed in parallel, only sequentially.

Real-life example:

The code below adds a new event at the end of the run loop's queue, calling print("World"). The output will be Hello, then World, as opposed to what we'd expect from a synchronous call (World, then Hello).

DispatchQueue.main.async {
    print("World")
}
print("Hello")

Swift has language-level support for asynchronous code using the (relatively) new async/await API. This means that you don't have to use a run loop directly, but can still suspend code and wait for tasks to complete before resuming, without blocking a thread. And all this within a nice, easy-to-read API:

func getUser() async throws -> User {
    try await checkSession()
    // Suspension happens here ^.
    // It works nicely with throwing functions.
    // -------
    // We resume once checkSession is done.
    // Possibly on a different thread, but
    // within the same actor (more about this later).
    let user = try await fetchUser() // hypothetical helper that actually loads the user
    return user
}

Parallel calls are easier to understand since they mean exactly what everybody expects them to mean: instructions that might execute at literally the same time on different cores, or at least, are free to interleave at the lowest atomic instruction level (note that for some types, even assignment is not atomic). A thread is the kernel's software construct for running code in parallel.
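
To make the interleaving concrete, here's a minimal sketch (the queue and the printed strings are just placeholders): two blocks submitted to a concurrent dispatch queue may run on different cores at the same time, so their outputs are free to interleave in any order.

import Foundation

let queue = DispatchQueue.global()
let group = DispatchGroup()

// Both blocks may run at literally the same time on different cores.
queue.async(group: group) { print("A1"); print("A2") }
queue.async(group: group) { print("B1"); print("B2") }

group.wait()
// Possible outputs: A1 A2 B1 B2, A1 B1 A2 B2, B1 A1 B2 A2, ...
// The order is not guaranteed.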

Critical sections

One crucial thing to understand is that, to some extent, we want operations to interleave when running things in parallel; otherwise, there would be no gain from doing so. But we also want to control when and where that happens. Imagine we have a function that updates a string in multiple steps, like concatenating a sentence from multiple words. During this update, other threads can't be allowed to mutate the string; otherwise, the resulting sentence might not make sense. However, once the updating operation is done, other threads should be able to mutate the string again.

The piece of code that runs between the start and the end of such an uninterruptible mutation is called a critical section.

Shared memory

When using a single thread and a run loop, accessing memory is not a problem. Even if the calls are asynchronous, reads and writes happen sequentially, in different run loop iterations, and each event is a critical section in itself. This applies even when events originate from other threads.

However, when using multiple threads, sharing memory becomes a problem, since two threads can interleave reads and writes of the same shared memory. So we'll have to protect critical sections, making a thread wait before entering one if another thread is already inside.
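
Before actors, that protection typically meant a lock. Here's a rough sketch of the sentence-building example from above (the type and the lock choice are just for illustration): while one thread holds the lock, every other thread that tries to append has to wait.

import Foundation

final class SentenceBuilder {
    private let lock = NSLock()
    private var sentence = ""

    func append(words: [String]) {
        lock.lock()
        // Start of the critical section: no other thread
        // can mutate `sentence` until we unlock.
        for word in words {
            sentence += word + " "
        }
        // End of the critical section.
        lock.unlock()
    }
}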

The Swift Concurrency Model

First, in Swift's terminology, concurrent means asynchronous or parallel. In other words, you don't care if it's one or the other; you let Swift handle threads and other lower-level details and think only in terms of actors, suspension points, and critical sections. The intro above was just to give you a deeper understanding of the problems we're facing when writing parallel and asynchronous code. Truth is, most of the things we've already talked about are abstracted away in the new Swift concurrency model.

Actors

The new actor type ensures that all of its functions are called, and its internal state mutated, by only one thread at any given time. Below, all the code inside register is a critical section. That's why you have to await register(service: service) even though the function is not explicitly marked as async: another caller might be in the middle of it, and we have to wait our turn.

actor Dispatcher {
    private var _services: [Service] = []

    func register(service: Service) {
        // Start of critical section.
        if !_services.contains(service) {
            _services.append(service)
        }
        // End of critical section
    }
}
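
Calling it from the outside could look something like this (the surrounding Task and the service value are placeholders); the await marks the spot where we might have to wait for our turn:

let dispatcher = Dispatcher()
let service = Service()

Task {
    // `register` is not async, but crossing the actor boundary
    // still suspends us until the actor is free.
    await dispatcher.register(service: service)
}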

Suspension points

Actors abstract away the threads. All their functions are critical sections. Their state can only be mutated by one thread at a time. We can stop caring about which thread calls our code as long as we have these guarantees. But there's a catch: suspension points. Once the code reaches an await, the critical section ends. In the example below, we have two critical sections. There's no guarantee that the state won't change between them because of reentrant calls or calls from other threads. This is probably the sneakiest thing when working with actors:

actor Dispatcher {
    private var _services: [Service] = []

    func register(service: Service) async {
        // Start of first critical section on thread one
        if !_services.contains(service) {
            _services.append(service)
        }
        // End of first critical section
        await notifyOthers() // <- suspension point
        // Start of a new critical section, potentially
        // on another thread in some Swift versions
        // (check out SE-0338 for details).
        // `_services` might have been mutated
        // (though not by `notifyOthers()` itself)

        for service in _services {
            service.bootstrap()
        }
        // End of second critical section
    }
}
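
One way to cope with this, sketched below under the same assumptions as the example above, is to re-read the actor's state after every await instead of trusting anything computed before the suspension point:

func register(service: Service) async {
    if !_services.contains(service) {
        _services.append(service)
    }

    await notifyOthers() // <- suspension point

    // Re-read the actor's state: another call might have
    // removed `service` while we were suspended.
    guard _services.contains(service) else { return }

    for service in _services {
        service.bootstrap()
    }
}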

Tasks

We've already been talking about tasks without even knowing it. A task is a unit of work that can be run asynchronously. When we await something, that work runs as part of a task, and we wait for its completion before continuing. We call nesting awaits structured concurrency, because of the emerging tree-like structure, where each parent task depends on the completion of all of its children in order to complete itself.
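
As a small illustration of that tree (Screen and getSettings are hypothetical), loadScreen can only complete once both of its child tasks complete:

func loadScreen() async throws -> Screen {
    // Two child tasks run concurrently; the parent suspends
    // until both of them finish (or one of them throws).
    async let user = getUser()
    async let settings = getSettings()
    return try await Screen(user: user, settings: settings)
}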

One might ask: since spawning a task using await requires the current context to be async, and thus to be awaited itself one level above, what's at the root of the tree? Literally a Task. It turns out you can explicitly create tasks as well. This can be really useful in situations where external factors influence the cancellation of some tasks, or when you'd like a non-async function to call an async one. Creating and managing tasks yourself is called unstructured concurrency.

func dispatch(action: Action) {
    // If we're already processing this action,
    // cancel the old task.
    if let oldTask = _tasks[action] {
        oldTask.cancel()
    }

    let task = Task { [weak self] in
        // Before starting any work, check if
        // the task is not cancelled.
        // We can repeat this check each time
        // we can safely interrupt our work.
        try Task.checkCancellation()

        // Perform some work.

        // Then remove the task from `_tasks`.
        self?._tasks.removeValue(forKey: action)
    }

    // Add the task to `_tasks`
    _tasks[action] = task
}
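
For completeness, the snippet above assumes some enclosing type that owns _tasks; a plausible shape (entirely an assumption, not shown in the original) would be:

final class ActionDispatcher {
    // `Action` is assumed to be Hashable so it can key the dictionary.
    private var _tasks: [Action: Task<Void, Error>] = [:]

    func dispatch(action: Action) {
        // ... the implementation from above ...
    }
}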