Of the two models for asynchronous programs, which one works better for your use case?
This post is a long-overdue follow-up to my earlier post about how blocking code is a leaky abstraction. I’ve written a lot of words on this blog defending one of Rust’s most controversial features: async/await syntax. Many view it as an overcomplication in an otherwise elegant language; I see it as the missing piece that lets you model asynchronous functionality and treat I/O operations as data.
However, in those blog posts I made a significant error: I presented async/await as the only way to write asynchronous programs in Rust. There are many other ways to write programs with non-standard control flow in Rust; in fact, many of these strategies predate async/await entirely.
Most of these strategies rely on a callback-based event loop: you pass a callback to an event source, and the event source invokes that callback when an event happens. This is a tried and true model; libuv is an example of an implementation of callback loops that’s used in millions of applications by way of Node.js. As for Rust, winit and the appropriately named calloop are examples of popular callback event loops.
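To make the model concrete, here’s a minimal sketch of the callback style using calloop (this is my own illustration, assuming a recent calloop release; the timer is just a stand-in for any event source): you register a source along with a closure, and the loop invokes that closure whenever the source fires.

use calloop::timer::{TimeoutAction, Timer};
use calloop::EventLoop;
use std::time::Duration;

fn main() {
    // The event loop carries a "shared data" slot; here it's just a counter.
    let mut event_loop: EventLoop<u32> =
        EventLoop::try_new().expect("failed to create event loop");
    let signal = event_loop.get_signal();

    // "Pass a callback to an event source": this closure runs whenever the
    // timer fires.
    event_loop
        .handle()
        .insert_source(
            Timer::from_duration(Duration::from_secs(1)),
            move |_deadline, _metadata, fired: &mut u32| {
                *fired += 1;
                signal.stop(); // one tick is enough for this sketch
                TimeoutAction::Drop
            },
        )
        .expect("failed to insert timer source");

    // Spin the loop; callbacks are dispatched from inside run().
    let mut fired = 0u32;
    event_loop
        .run(None, &mut fired, |_| {})
        .expect("event loop error");
    assert_eq!(fired, 1);
}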
For this post, I’d like to take a closer look at calloop. I feel as though it’s a pretty good example of a callback event loop, and in fact the Linux backend for winit is built on calloop. I’ll compare calloop to async/await (or, specifically, smol) and see which applications each is a good fit for.
Disclaimer: I maintain an async runtime, smol. However, I also maintain calloop and winit (although I am on hiatus). So I am biased in both directions.
Breakdown Boogie
What you must first understand is that the underlying design philosophies for async/await and calloop have very different goals.
async/await was designed specifically for networking applications, where you have to scale to handling millions of requests at once across dozens of CPU cores. Some writers have asserted that this means async I/O for smaller use cases “should be a weird thing that you resort to for niche use cases”. I argue that it means you can scale with relative ease. If you know your code can handle 5,000,000 concurrent tasks, then it can handle 5 with no issues.
calloop, meanwhile, is designed for single-threaded applications. As in, an application where all of your logic runs on one single CPU core. This does not necessarily mean the application is slower or less useful; Redis notably runs on a single-threaded architecture. However, calloop is explicitly not meant for raw performance, as its documentation makes quite clear:
The main target use of this event loop is thus for apps that expect to spend most of their time waiting for events and wishes to do so in a cheap and convenient way. It is not meant for large scale high performance IO.
async/await is the definite winner when it comes to high-performance applications that expect to scale vertically. However, the calloop model is used more frequently in the GUI ecosystem. This makes sense once you realize that calloop was created by the same person who created the Rust Wayland implementation; calloop is effectively a Rust implementation of the Wayland event loop. As I mentioned earlier, winit is built on calloop.
In the past, I’ve proposed that we could work async/await into the GUI ecosystem and extolled the possible benefits. Needless to say, I think async/await could have its place in the GUI ecosystem, a place that calloop has traditionally occupied.
Advantage Async
To cut this off at the pass: you don’t have to exclusively use async/await or calloop in your program. They are compatible! Best friends! Roommates! Smooching! calloop goes out of its way to add async/await compatibility, making it so you can easily run Futures inside of the event loop. Meanwhile, you can use the async-io crate to poll an EventLoop on certain platforms:
use async_io::Async;
use calloop::EventLoop;

// Wrap the event loop into the `smol` runtime.
let mut event_loop = Async::new(EventLoop::try_new()?)?;

// Dispatch events when needed.
loop {
    event_loop.read_with(|event_loop| {
        event_loop.dispatch(None, &mut ())
    }).await?;
}
Since GUI programs are mostly single-threaded, async/await’s multithreaded advantages don’t really apply. But that doesn’t mean async/await is powerless here; there are single-threaded runtimes. The main disadvantage of these runtimes is that Waker, the backbone of async/await, is thread-safe by design and assumes it’s going to be sent to other threads. This requires that you build all of your async primitives on top of synchronization primitives like Mutex, which adds a performance drag to the program. LocalWaker would fix this issue if it’s ever merged into mainline Rust.
There are two principal advantages to async/await in this model. The first is that async/await brings composability, a property I believe is valuable in GUI applications. calloop is admittedly not composable, preferring multiple event sources over being able to chain event sources together.
I see libraries like accesskit, ui_events_winit and wgpu that are begging for composability. wgpu even has an official middleware pattern. Not to mention how composable GUI widgets need to be, layering on top of each other in order to actually be useful.
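As a rough sketch of what that composability buys you (this is my own illustration, not taken from any of those libraries; futures-lite and the event types here are hypothetical stand-ins), two independent event sources can be chained into a single future:

use futures_lite::future;

struct KeyEvent;
struct NetEvent;

enum AppEvent {
    Key(KeyEvent),
    Net(NetEvent),
}

// Hypothetical sources; in a real program these would come from the window
// system and the network stack.
async fn next_key_event() -> KeyEvent {
    KeyEvent
}
async fn next_net_event() -> NetEvent {
    NetEvent
}

// Two independent sources chained into one future: whichever finishes first
// wins, and the loser is simply dropped and re-created on the next call.
async fn next_app_event() -> AppEvent {
    future::or(
        async { AppEvent::Key(next_key_event().await) },
        async { AppEvent::Net(next_net_event().await) },
    )
    .await
}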
The second advantage is easy integration with the rest of the Rust async ecosystem. Many useful GUI apps end up having to make network calls at some point; imagine being able to seamlessly make network requests from your GUI application without freezing up like every other Win32 application seems to do. Not to mention, the async ecosystem has mature event delivery mechanisms that could solve the “event delivery” problem Rust GUI frequently has. I’ve already written about this at length here, if you’re interested in me going into more detail.
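To make the network-call point a bit more concrete, here’s a hedged sketch (assuming smol and async-channel; fetch_user_name is a hypothetical network call): the request runs on the async side while the GUI loop keeps pumping events and receives the result as just another event.

use async_channel::{unbounded, Receiver};
use smol::Executor;

// Hypothetical network call; a real app would use an async HTTP client here.
async fn fetch_user_name() -> String {
    "Ferris".to_string()
}

// Kick off the request on the async side and hand the GUI loop a receiver it
// can treat as one more event source, instead of blocking on the socket.
fn start_fetch(ex: &Executor<'static>) -> Receiver<String> {
    let (tx, rx) = unbounded();
    ex.spawn(async move {
        let name = fetch_user_name().await;
        let _ = tx.send(name).await;
    })
    .detach();
    rx
}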
Shared State Scenario
However, calloop has one crucial advantage that async/await will never have. One critical strength that gives programmers very good reasons to choose calloop for many use cases. The silver bullet, the kryptonite: calloop allows for shared state in a way that async/await does not.
calloop is designed around having a shared structure that’s passed around to every event source. You pass it some shared state in dispatch…
struct MyState {
    name: String,
    counter: i32,
    foobar: File,
}

let mut state = MyState { /* ... */ };
event_loop.dispatch(None, &mut state);
…and suddenly, every event source has direct &mut access to state. This sharing works because calloop is just calling event sources in order on a single thread in a loop; it can trivially pass &mut MyState to an event source while it’s polling it for completion.
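For illustration, here’s a rough sketch of what that direct access looks like from inside an event source callback, assuming a recent calloop and the MyState struct above; the timer is only a stand-in for any event source:

use calloop::timer::{TimeoutAction, Timer};
use std::time::Duration;

// The last closure argument is the same &mut MyState you passed to dispatch();
// every registered source gets it, with no locks or RefCell involved.
event_loop.handle().insert_source(
    Timer::from_duration(Duration::from_secs(1)),
    |_deadline, _metadata, state: &mut MyState| {
        state.counter += 1;
        state.name.push('!');
        TimeoutAction::Drop
    },
).expect("failed to insert source");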
async/await has no good answer for this. Usually, network applications are designed in accordance with the actor model, where each task has its own specific state and only shares it with other tasks via channels. Many GUI applications are designed like this as well. But many more are designed around a single blob of mutable state that every widget needs access to.
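Here’s a minimal sketch of that actor style, assuming smol and async-channel: the counter actor owns its state outright, and every other task talks to it through messages.

use async_channel::{unbounded, Receiver, Sender};

// Messages the "counter" actor understands.
enum CounterMsg {
    Increment,
    Get(Sender<i32>),
}

// The actor owns its state; everyone else can only reach it over the channel.
async fn counter_actor(inbox: Receiver<CounterMsg>) {
    let mut counter = 0;
    while let Ok(msg) = inbox.recv().await {
        match msg {
            CounterMsg::Increment => counter += 1,
            CounterMsg::Get(reply) => {
                let _ = reply.send(counter).await;
            }
        }
    }
}

fn main() {
    let (tx, rx) = unbounded();
    let ex = smol::LocalExecutor::new();
    ex.spawn(counter_actor(rx)).detach();

    smol::block_on(ex.run(async {
        let _ = tx.send(CounterMsg::Increment).await;
        let (reply_tx, reply_rx) = unbounded();
        let _ = tx.send(CounterMsg::Get(reply_tx)).await;
        assert_eq!(reply_rx.recv().await.unwrap(), 1);
    }));
}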
Granted, async/await has no problem sharing immutable state (via immutable references, with &state). Using primitives like RefCell in a single-threaded setting, it’s possible to turn this into mutable state. But this solution introduces ugly, brittle interior mutability into the Rust program. It’s a pale shadow of what calloop is able to achieve so effortlessly.
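For completeness, here’s what that workaround tends to look like on a single-threaded runtime, assuming smol’s LocalExecutor and a pared-down MyState:

use smol::LocalExecutor;
use std::cell::RefCell;
use std::rc::Rc;

struct MyState {
    counter: i32,
}

fn main() {
    let ex = LocalExecutor::new();
    // Shared mutable state on a single-threaded executor usually ends up
    // wrapped like this.
    let state = Rc::new(RefCell::new(MyState { counter: 0 }));

    let task_state = Rc::clone(&state);
    let task = ex.spawn(async move {
        // Every mutation goes through borrow_mut(); holding a borrow across
        // an .await point panics at runtime instead of failing to compile.
        task_state.borrow_mut().counter += 1;
    });

    smol::block_on(ex.run(task));
    assert_eq!(state.borrow().counter, 1);
}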
There is some form of shared state in async/await, via the Context parameter that is passed to all Futures. But as of the time of writing, this value only holds a Waker, and is often re-created wholesale during things like async task polling.
There is a proposed ext() method that would allow holding arbitrary extension data inside of the Context. However, even this is a shallow imitation. It only holds an &mut dyn Any, a type-erased value that will need to be downcast into whatever type you need. Even if you’re okay with that, it will take some work for ext() to be handled by the Rust async ecosystem.
So yes, async/await has no real answer to this problem.
Conclusion
While async/await holds many advantages for programs that use the actor model, it will take some work and compromise for it to be integrated into programs that use a significant amount of shared state. I cope by saying that shared state is an antipattern in Rust anyway.