Structured Concurrency in C++

By Lucian Radu Teodorescu

Overload, 30(168):9-14, April 2022


Raw threads tend to be unstructured. Lucian Radu Teodorescu applies principles from Structured Programming to concurrency.

If one made a list of the paradigm changes that positively influenced software development, Structured Programming would probably be at the top. It revolutionised the software industry in two main ways: a) it taught us how to structure programs to better fit our mental abilities, and b) it constituted a fundamental step in the development of high-level programming languages.

Work on structured programming began in the late 1950s with the introduction of the ALGOL 58 and ALGOL 60 programming languages, which contained compound statements and code blocks, allowing engineers to impose a structure on programs. In 1966, Böhm and Jacopini proved what is now called the structured program theorem [Böhm66], showing that the principles of structured programming can be applied to solve all problems. Two years later, in 1968, Dijkstra published the highly influential article ‘Go To Statement Considered Harmful’ [Dijkstra68], arguing that we need to lay out code in a way that the human mind can easily follow (and that the GOTO statement works against this principle).

It is said that concurrency became a concern in the software industry in 1965, when the same Dijkstra wrote the article, ‘Solution of a problem in concurrent programming control’ [Dijkstra65]. This article introduced the critical section (i.e., mutex), and provided a method of implementing it – the use of critical sections has remained an important technique for concurrent programming since its introduction. To reiterate the timeline, the critical section concept was introduced before the structured programming theorem, before the influential ‘Go To Statement Considered Harmful’, and before 1968, when the term Software Engineering was coined (at a NATO conference [Naur69]).

While concurrency is an old topic in the software industry, it appears that we still mainly handle concurrency in an unstructured way. Most of the programs that we write today rely heavily on raw threads and mutexes. And it’s not just the software that’s written this way; this is how we teach programmers concurrency. One can open virtually any book on concurrency, or look at most of the concurrency tutorials, and most probably the first chapters start with describing threads and mutexes (or any other form of synchronisation primitive). This is not anybody’s fault; it’s just the state of our industry (at least in C++).

This article aims to show that we can abandon the thread and mutex primitives for building concurrent abstractions, and start using properly structured concurrency with the senders/receivers framework [P2300R4] that may be included in C++23¹. We look at the essential traits of structured programming, and we show that they can be found in the senders/receivers programming model.

Structured programming

The term structured programming is a bit overloaded; different authors use it to denote different things. Thus, it is important for us to define what we mean by it. We have three main sources for our definition of structured programming: a) the article by Böhm and Jacopini introducing the structured program theorem [Böhm66], b) Dijkstra’s article [Dijkstra68], and c) the book Structured Programming by Dahl, Dijkstra, and Hoare [Dahl72]. The book contains references to the previous two articles, so, in a sense, it is the most complete of the three.

Based on these sources, by structured programming we mean the following:

  • use of abstractions as building blocks (both in code and data)
  • recursive decomposition of the program as a method of creation/analysis of a program
  • local reasoning helps understandability, and scope nesting is a good way of achieving local reasoning
  • code blocks should have one entry and one exit point
  • soundness and completeness: all programs can be safely written in a style that enables structured programming

In the following subsections, we will cover all these aspects of Structured Programming. Subsequently, we will use these as criteria for defining Structured Concurrency, and decide whether a concurrency approach is structured or not.

Use of abstractions

Abstraction is the reduction of a collection of entities to the essential parts while ignoring the non-essential ones. It’s the decision to concentrate on some of the similarities between the elements of a set, discarding the differences between them. Variables are abstractions over ‘the current value’. Functions are abstractions that emphasise ‘what it does’ and discard the ‘how it does it’.

As Brooks argues, software is essential complexity [Brooks95]. That is, we cannot fit it into our heads. The only way to reason about it is to ignore parts of it. Abstractions are good ways to ignore the uninteresting details and keep in focus just the important parts. For example, one can easily understand and reason about something that ‘sorts an array’, but it’s much harder to understand the actual instructions that go into that sorting algorithm. And, even instructions are very high-level abstractions compared with the reality of electromagnetic forces and moving electrons.

Use of (good) abstractions is quintessential for structured programming. And to remind us of that, we have the following quote from Dijkstra [Dahl72]:

At this stage I find it hard to be very explicit about the abstraction, partly because it permeates the whole subject.

Out of all the abstractions possible in a programming language, the named abstractions (e.g., functions) are the most useful. Not because they have a name associated with them, but because the engineer can create many of them and differentiate them by name. That is, having the ability to add for loops to the code is good, but having the ability to create different functions, capable of solving any type of problem (including abstracting out for statements), is much more useful.

Using functions, the engineer can abstract out large parts of the program and express them with a simple syntactical unit (a function call). This means that functions play an important role in Structured Programming. When we say that abstractions are at the core of Structured Programming, we mean exactly this: we’re able to abstract away large portions of the program.

Furthermore, it’s worth noting that, in Structured Programming, one can find abstractions that represent the whole program. Just think of the popular main() function.

Recursive decomposition of programs

In the Structured Programming book, Dijkstra spends a fair amount of time on problem decomposition, on how we’re able to build software hierarchically. That is, we’re capable of building programs by:

  • recursively decomposing programs into parts, using a top-down approach
  • making one decision at a time, the decision being local to the currently considered context
  • ensuring that later decisions do not influence earlier decisions (for the majority of the decisions)

This gives us an (almost) linear process for solving software problems. Any process for solving software problems is good, but a linear one is much better. Software being essential complexity, one can imagine that we might have coupling between all parts of the system. If the system has N parts, then the amount of coupling would be in the order of O(N²). That is, a process focused on the interactions between these parts would need at least on the order of O(N²) steps. And yet, Structured Programming gives us a linear process for solving problems.

The process advocated by Structured Programming is based on the Divide et Impera strategy, and our brain is structured in a way that allows us to cope with it easily.

One other key aspect of this method is that the sub-problems have the same general structure as the bigger problems. For example, let’s assume that we have one function that calls two smaller functions. We can reason about all three functions in the same way: they are all abstractions, they all have the same shape (single entry, single exit point), they can all be further decomposed, etc. This reduces the strain on our brains and allows us to reuse the same type of solution in many places.

Local reasoning and nested scopes

To reduce the burden of understanding programs, Dijkstra remarks that we need to shorten the conceptual gap between the text of the program and how this is actually executed at runtime [Dahl72, Dijkstra68]. The closer the execution follows the program text, the easier it is for us to comprehend it. It is best if we can understand the consequences of running a piece of code by understanding just the preconditions and the corresponding instructions.

If we follow the pattern preconditions + instructions ⇒ postconditions, then, to properly have local reasoning (i.e., to focus on the actual instructions), we need the set of preconditions to be small. We cannot speak of local reasoning if the set of preconditions needs to contain information about everything else that happened before in the program. For example, if we want to analyse a block of code that sorts an array of numbers, we should not be concerned with how that array was generated; no part of the sorting algorithm should be coupled with other parts of the program.

Another way to look at local reasoning is by looking at encapsulation. As much as possible, all code blocks need to be fully encapsulated. Parts of a code block should not interact with the world outside the block. Dijkstra imagines that, in the ideal world, every part of the program would run on its dedicated (virtual) machine. This way, different parts will be completely independent.

In this context, the notion of scope is important. A lexical scope isolates the local concerns of a code block (lexically specified) from the rest of the blocks. If I declare a local variable in my block, no other block can interfere with my local variable. And again, the more stuff we have locally defined, the more local reasoning we’re able to do.

Single entry, single exit point for code blocks

Looking at a sequence of regular instructions (i.e., without loops or alternatives) is easy. The preconditions of an instruction directly depend on the postconditions of the previous instruction. This is what Dijkstra calls enumerative reasoning. The conceptual gap between a sequence of instructions and the execution of those instructions in time is minimal.

If we want to treat code blocks or function calls as instructions, we should ensure that they share as many properties as possible with the simple instructions. One of these properties is single entry, single exit point. Every instruction, every block of code and every function should have one single entry point so that we can easily check whether the preconditions are met. Similarly, they should have one single exit point so that we analyse a single set of postconditions.

There is another advantage of using a single entry, single exit point strategy. The blocks and the function calls have the same shape as simple instructions. That allows us to apply the same type of reasoning to code blocks and to function calls, and permits us to have a simpler recursive decomposition.

Soundness and completeness

Having a single entry and a single exit point is a big restriction on the number of programs we can write. We need to make sure that this restriction does not impose a limit on the number of problems that can be solved with Structured Programming. We must have a sound method to ensure that we are able to build all programs with the restrictions imposed by our method.

The structured program theorem [Böhm66] proves that we can write all our programs using 3 simple control structures: sequence, selection and repetition. The Böhm and Jacopini paper has 3 major takeaways:

  • ensuring that Structured Programming can be applied to all types of problems
  • providing a set of primitives that can be used as building blocks for our programs
  • providing alternative ways to visualise the programs – flowcharts
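As a small reminder of how much these three building blocks can express, here is a sketch that uses only sequence, selection and repetition:

// Sequence, selection and repetition are enough to express this computation
// (and, per the theorem, any other computation).
int count_even(const int* values, int n) {
  int count = 0;                 // sequence
  for (int i = 0; i < n; ++i) {  // repetition
    if (values[i] % 2 == 0)      // selection
      ++count;
  }
  return count;
}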

Concurrency with threads and synchronisation primitives

In the classic model of concurrency, one would use raw threads to express parallelism and then use synchronisation primitives to ensure that the threads do not interact in ways that break the correctness of the program. This model of organising concurrency is completely unstructured.

We don’t have a general way of creating higher level abstractions that can represent parts of our concurrent program (and fully handle concurrency concerns inside it). Thus, we cannot use any such abstractions to decompose a program into subparts while keeping the concurrency constraints straight.

In this model, it’s generally hard to do local reasoning. With synchronisation primitives, we almost always need to consider the rest of the program. We cannot simply look at one single function and draw any conclusion about what synchronisation we need.
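As a minimal illustration (the names below are ours, not taken from any particular codebase), consider the following code: the discipline of taking balance_mtx before touching balance is a whole-program convention, so no single function can be verified in isolation.

#include <mutex>
#include <thread>

int balance = 0;
std::mutex balance_mtx;  // must be taken by *every* piece of code touching balance

void deposit(int amount) {
  std::lock_guard<std::mutex> lock(balance_mtx);
  balance += amount;     // correct only if all other code also locks balance_mtx
}

int main() {
  std::thread t1(deposit, 10);
  std::thread t2(deposit, 32);
  t1.join();
  t2.join();
}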

As we don’t have good general-purpose abstractions to encapsulate concurrency constraints, it is hard for us to discuss single-entry and single-exit points for such abstractions. Furthermore, as there is no general composable method for expressing concurrency, it’s hard for us to discuss the soundness of the approach.

Synchronisation primitives act like GOTO commands, breaking local reasoning and encapsulation.

Concurrency with raw tasks

Let’s now turn our attention to another model of dealing with concurrency: the one based on raw tasks (see [Teodorescu20]). We call a task an independent unit of work that is executed on one thread of execution. We call an executor something that dictates how and when tasks are executed. These two concepts are enough for building concurrent systems.
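To make these two concepts concrete, here is a minimal sketch (the type names are illustrative; they are not the exact definitions from [Teodorescu20]):

#include <functional>

// A task: an independent unit of work, executed on one thread of execution.
using task = std::function<void()>;

// An executor: decides how and when the enqueued tasks are executed.
struct executor {
  virtual void enqueue(task t) = 0;
  virtual ~executor() = default;
};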

We have proved before that a task-based system can be used to solve all concurrency problems [Teodorescu20] and that we can compose task-based systems without losing correctness and efficiency [Teodorescu21]. That is a step towards Structured Concurrency. However, task-based systems still don’t fully exhibit the characteristics we’ve taken from Structured Programming.

Let’s first look at the ability to create concurrent abstractions. We may create abstractions that correspond to tasks, but it’s hard to create abstractions of different sizes, spanning multiple threads. Tasks taken as they are, without any workarounds, don’t fully encapsulate concurrency constraints; consequently, they cannot be used for recursively decomposing concurrent programs.

Simple tasks are equivalent to functions, so they allow local reasoning just like Structured Programming does (that is, only if tasks are independent, as we said previously).

In [Teodorescu21], we introduced a technique that allows the decomposition of tasks. First, the concept of a task is extended to also contain a continuation. A continuation is a type-erased function that is executed whenever the task completes. Most of the machinery that can be built on top of tasks can be made to work by using continuations. Secondly, the article introduced a trick that allows tasks to change their continuations while running: more precisely, to exchange the continuation of the current task with the continuation of another task (one that can be executed in the future, or on a different thread). With this trick, we can make tasks represent concurrent work that spans multiple threads, encapsulating the inner details.
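The following is a rough sketch of the idea; it is not the exact code from [Teodorescu21], and the names are illustrative:

#include <functional>
#include <utility>

struct cont_task {
  std::function<void()> work;          // the body of the task
  std::function<void()> continuation;  // type-erased; runs when the task completes

  void execute() {
    work();
    if (continuation) continuation();  // the exit point of the task
  }
};

// The trick: a running task exchanges its continuation with that of another
// task (one that may run later, or on a different thread), so the combined
// work looks like a single unit from the outside.
void exchange_continuations(cont_task& a, cont_task& b) {
  std::swap(a.continuation, b.continuation);
}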

If we apply this trick, we lose the ability for local reasoning. The bigger abstraction (original task + continuation) is not specified in one place; it is distributed between the starting task and all the tasks that exchange continuations with it (directly or indirectly). We lose the lexical scoping and nesting properties.

With raw tasks, it’s often the case that we spawn tasks and continue. This is the fire-and-forget style of creating concurrent work. Each time we do this, the spawn operation has one entry point, but two exit points: one that remains with the current thread, and one that follows the newly spawned task.
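A minimal illustration of the fire-and-forget style, using a raw thread for simplicity:

#include <thread>

void spawn_background_work() {
  std::thread([] {
    // ... the spawned work; nobody waits for it, nobody observes its errors
  }).detach();
  // ... the current thread continues here: conceptually, a second exit point
}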

In conclusion, even if task systems allow us to achieve most of the goals we set for a structured approach, we cannot achieve all the goals at the same time.

Concurrency with senders/receivers

There is a new model in C++ that can handle concurrency in a structured way. This is the model proposed by [P2300R4], informally called senders/receivers. As we shall see, this model satisfies all the important criteria we set for being considered ‘structured’.

A sender is an entity that describes concurrent work. The concurrent work described by the sender has one entry point (starting the work) and one exit point, with three different alternatives: successful completion (possibly with a value), completion with an error (i.e., an exception), or cancellation.

It is worth mentioning that senders just describe work; they don’t encapsulate it. That would be the role of what P2300 calls operation states. However, users will rarely interact directly with operation states. These are hidden behind simple-to-use abstractions.

Listing 1 shows an example of using a sender to describe concurrent work. We specify that the work needs to be executed on pool_scheduler (e.g., an object identifying a pool of threads), we specify what work must happen, and we specify that, if the work is somehow cancelled, the cancellation is transformed into a specific error. While creating the sender object, no actual work happens; we just describe what the work looks like. The call to sync_wait starts the work and waits for it to complete, capturing the result of our concurrent work (or rethrowing the exception if one is thrown).

namespace ex = std::execution;
int compute_something() {...}

ex::sender auto snd =
    ex::schedule(pool_scheduler)
  | ex::then(compute_something)
  | ex::stopped_as_error(my_stopped_error)
  ;
auto [r] = std::this_thread::sync_wait(std::move(snd))
  .value();
			
Listing 1

For the purpose of this article, we define a computation as a chunk of work that can be executed on one or multiple threads, with one entry point and one exit point. The exit point can represent the three alternatives mentioned above: success, error and cancellation. Please note that having multiple completion alternatives doesn’t necessarily imply multiple exit points; this is similar to how function calls are considered to have a single exit point, even though they can exit either with a value or by throwing an exception.
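To make this concrete, the sketch below handles all three completion alternatives of a computation in one place, using the P2300 upon_error and upon_stopped algorithms (pool_scheduler is the scheduler from Listing 1; risky_work and the fallback values are placeholders):

ex::sender auto computation =
    ex::schedule(pool_scheduler)               // one entry point
  | ex::then([] { return risky_work(); })      // success channel (value)
  | ex::upon_error([](std::exception_ptr) {    // error channel
      return -1;
    })
  | ex::upon_stopped([] { return 0; });        // cancellation channel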

A computation is a generalisation of a task. Everything that can be a task can also be a computation, but not the other way around.

The senders/receivers model allows us to describe any computation (i.e., any concurrent chunk of work) as one sender. A proof of this statement would be too lengthy to include here; the reader can find it in [P2504R0]². That paper also shows the following:

  • all programs can be described in terms of senders, without the need for synchronisation primitives
  • any part of a program that has one entry point and one exit point can be described as a sender
  • the entire program can be described as one sender
  • any sufficiently large concurrent chunk of work can be decomposed into smaller chunks of work, which can be described with senders
  • programs can be implemented using senders with maximum efficiency (under certain assumptions)

In summary, all concurrent single-entry, single-exit chunks of work, i.e., computations, can be modelled with senders.

It is important to note that computations fully encapsulate concurrency concerns. Computations are to concurrent programming what functions are to Structured Programming. Computations are the concurrent version of functions.

The above paragraph represents the quintessence of this whole article.

Use of abstractions

Let’s start our analysis of whether the senders/receivers model can be considered structured by looking at abstractions. In this model, the obvious abstraction is the computation, which can be represented by a sender. One can use senders to abstract out simple computations (smaller than typical tasks), to abstract out work that corresponds to a task, or to abstract out a chunk of work that may span multiple threads. The upper limit to how much work can be represented by a sender is the size of the program itself. Actually, the entire program can be represented by a single sender. See Listing 2 for how this may be done.

namespace ex = std::execution;
ex::sender auto whole_program_sender() {...}
int main() {
  auto [r] = std::this_thread::sync_wait
    (whole_program_sender()).value();
  return r;
}
			
Listing 2

Recursive decomposition of programs

As mentioned above, [P2504R0] proves that any program, or any part of a program, can be recursively decomposed with the use of senders. Moreover, within a decomposition, the smaller parts can be made to look like the original part; they can all be senders.

Listing 3 shows an example of how a problem can be decomposed into two smaller parts, using senders to describe both the problem and the sub-problems.

ex::sender auto do_process() {
  ex::sender auto start 
    = ex::schedule(pool_scheduler);
  ex::sender auto split_start = ex::split(start);
  return ex::when_all(
    split_start | ex::let_value(comp_first_half),
    split_start | ex::let_value(comp_second_half)
  );
}
			
Listing 3

Local reasoning and nested scopes

Reasoning can be local with the senders/receivers model. The definition of a sender completely describes the computation from beginning to end. Reasoning about all the aspects of that sender can be done locally, even if the sender contains multiple parts.

Let us look again at the code in Listing 3. The top-level computation contains the elements needed to reason about how the sub-computations start and finish. We’re able to inspect this code and draw the following conclusions:

  • one entry and one exit for the entire process
  • both halves start processing on the pool scheduler
  • if one half ends with error/cancellation, then the other half is cancelled and the whole process ends with the original error/cancellation
  • the process ends when both parts end

The idea of nested scopes was a major point in the design of senders/receivers. One can properly nest senders; operation states and receivers nest as well. Please see [Niebler21a] for more details.

Single entry, single exit point

By definition, senders have one entry point and one exit point. There are multiple alternatives for how the work described by a sender may terminate but, essentially, we may say that there is only one exit point. This is similar to how a function call might exit either with a return value or with an exception.

While one can use the fire-and-forget style with senders, this is not the recommended way of using senders.
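P2300 does provide an explicit escape hatch for this style, execution::start_detached, which starts a sender and abandons its result; it should be used sparingly, as it gives up the single exit point. A sketch, reusing pool_scheduler and compute_something from Listing 1:

// Fire-and-forget with senders: the work is started, its result is discarded.
ex::start_detached(ex::schedule(pool_scheduler)
                   | ex::then(compute_something));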

See also [Niebler21b] for a better illustration of how senders fulfil the single entry, single exit point requirement.

Soundness and completeness

In Structured Programming, the structured program theorem [Böhm66] provides the soundness and completeness guarantees for the method. For the senders/receivers model, as mentioned above, [P2504R0] shows that one can soundly use it to solve all possible concurrency problems.

If one can use senders to describe any chunk of concurrent work (with one entry and one exit point), and if we can decompose any concurrent program into such chunks, then we are covered.

The reader should note that, out of the box, P2300 doesn’t provide primitives for representing repetitive computations or for type-erasing senders; these are needed to describe many concurrent chunks of work. However, the model is general enough to allow the user to create abstractions for these. In fact, there are libraries out there that already support such abstractions; one of the best known is [libunifex].

The senders/receivers model

The senders/receivers model proposed for C++ shares many traits with the async/await model [Wikipedia]. The latter model was introduced in F# 2.0 in 2007 [Syme11], and then spread to multiple programming languages (C#, Haskell, Python, etc.), eventually reaching C++ with the addition of coroutines in C++20. In F#, a type Async<T> represents an asynchronous computation (wording taken from [Syme11]). This is analogous to the task<T> coroutine type we use below. And, as senders are equivalent to such coroutine types, the Async<T> from F# is an abstraction similar to a sender.

However, the senders/receivers model can be implemented more efficiently than coroutines or other similar type-erased abstractions.

Structured Concurrency is not something that can be achieved only with senders/receivers in C++. Other languages can support Structured Concurrency too; the P2300 senders/receivers model just ensures that we get the most performance out of it, as it avoids type-erasure and other performance penalties.

Bonus: coroutines

I felt it would be better not to include coroutines as one of the requirements for structured concurrency. We can write good concurrent code even without the use of coroutines. However, coroutines can help in writing good concurrent (and parallel) code.

Interestingly enough, in his part of the Structured Programming book [Dahl72], Dahl discusses coroutines, a SIMULA feature, as a way of organising programs. If a program uses only subroutines, then, at the function-call level, the program is always organised hierarchically. Using coroutines might help when strict hierarchical organisation is not necessarily needed, and cooperative behaviour is more important.

The senders/receivers proposal guarantees good interoperability between senders and coroutines. According to the proposal, all awaitables are also senders (i.e., can be used in places where senders are used). Moreover, most of the senders (i.e., senders that yield a single value) can be easily made awaitables (by a simple annotation in the coroutine type).

Let us look at the example from Listing 4³. Here, task<T> is a coroutine type. The reference implementation associated with the P2300 paper provides an implementation for this abstraction [P2300RefImpl]. naive_par_fib is a coroutine; it uses co_return twice and co_await once. The res variable inside the coroutine is a sender. We can co_await this sender, as shown in the body of our coroutine; thus, senders can be awaited. Towards the end of the example, we show how one can simply chain the coroutine invocation to other sender algorithms, thus using it just as if it were a sender. Furthermore, the function passed to let_value needs to return a sender; we are returning a task<uint64_t> awaitable.

template <ex::scheduler S>
task<uint64_t> naive_par_fib(S& sched, int n) {
  if (n <= 2)
    co_return 1;
  else {
    ex::sender auto start = ex::schedule(sched);
    ex::sender auto res = ex::when_all(
      start | ex::let_value([&] {
                return naive_par_fib(sched, n - 1);
              }),
      start | ex::let_value([&] {
                return naive_par_fib(sched, n - 2);
              })
    ) | ex::then(std::plus<uint64_t>());
    co_return co_await std::move(res);
  }
}
ex::sender auto snd =
    naive_par_fib(sched, n)
    | ex::then([](uint64_t res) {
        std::cout << res << "\n";
      });
std::this_thread::sync_wait(std::move(snd));
			
Listing 4

Using coroutines might make the user code easier to read. In this particular example, we also showed how coroutines can be used to obtain type-erasure of senders; task<T> acts like a type-erased sender that generates a value of type T.

Discussion

The current senders/receivers proposal, [P2300R4], doesn’t aim to be a comprehensive framework for concurrent programming. For example, it lacks facilities for the following:

  • type-erased senders
  • a coroutine task type (the name is misleading, as it would encapsulate a computation)
  • facilities for repeating computations
  • facilities for stream processing (reactive programming)
  • serialiser-like abstractions
  • etc.

However, the most important part of the proposal is the definition of the general framework needed to express concurrency. Then, all the facilities needed to represent all kinds of concurrent programs can be built on top of this framework. The proposal contains just a few such facilities (sender algorithms), but the users can create their own abstractions on top of this framework.

This article focuses on the framework, showing that it has the needed requirements for Structured Concurrency; we are not arguing that the facilities described in P2300 are enough.

Another important topic to discuss, especially in conjunction with concurrency, is performance. After all, the move towards more parallelism is driven by the performance needs of the applications. Our single-threaded processors are not fast enough, so we add more parallel processing to speed things up, and concurrency is needed to ensure the safety of the entire system.

In task-based systems, we put all the work into tasks. Usually, we type-erase these tasks to be able to pass them around. That means we pay an abstraction cost at the task level, which makes it impractical to use tasks for very small chunks of work. But it is typically also impractical to use tasks with very big chunks of work, as this can lead to situations in which we do not properly fill up all the available cores. Thus, task-based systems have good performance only in a relatively narrow range, and users cannot fully control all performance aspects. Task-based systems can have good performance, but not always the best performance.

In contrast, the senders/receivers model doesn’t imply any abstraction that might incur performance costs. The user is free to introduce type-erasure at any level that is best for the problem at hand. One can write large concurrent chunks of work without needing to pay the costs of type-erasure or memory allocation. Actually, most facilities provided in the current P2300 proposal do not incur any additional type-erasure or memory allocation costs.
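As a sketch of this freedom (sched, work and more_work are placeholders): composing the adaptors yields a single concrete sender type, known at compile time, and type-erasure can be introduced deliberately, for instance by wrapping a piece of work in a coroutine task<T> as in Listing 4, paying its cost only where flexibility is needed.

// One concrete sender type, composed at compile time: no type-erasure and
// no extra memory allocation imposed by the framework.
ex::sender auto fast_path =
    ex::schedule(sched) | ex::then(work) | ex::then(more_work);

// Type-erasure only where it is useful, e.g., behind a coroutine:
// task<int> flexible_path();  // hides the concrete sender types of its body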

The senders/receivers model allows one to model concurrency problems without any performance penalties.

Conclusions

Structured Programming produced a revolution for the good in the software industry. Sadly, that revolution did not also happen in concurrent programming. To improve concurrent programming, we want to apply the core ideas of Structured Programming to concurrency.

After discussing the main ideas that shape Structured Programming, we briefly discussed that classic concurrent programming is not structured at all. We’ve shown that task-based programming is better, but still doesn’t meet all the goals we set to enable Structured Concurrency.

The rest of this article showed how the senders/receivers model fully supports all the important ideas from Structured Programming, thus making concurrent programming with this model worthy of being called structured.

In concurrent programming, we have computations to play the role that functions play in Structured Programming, and any computation can be described by a sender. With the help of [P2504R0], we argued that one can use senders to describe any concurrent program. We discussed how senders can be used as abstractions to describe concurrent work at any level (from the entire program down to small chunks of work). We’ve shown that this model lends itself well to recursive decomposition of concurrent programs and to local reasoning about abstractions. Likewise, we’ve also discussed how senders have the single entry, single exit shape and how one can build complex structures from simple primitives.

All these make the senders/receivers model match the criteria we set to achieve Structured Concurrency. When this proposal gets into the C++ standard, we will finally have a model to write safe, efficient and structured concurrent programs.

References

[Böhm66] Corrado Böhm, Giuseppe Jacopini, ‘Flow Diagrams, Turing Machines and Languages with Only Two Formation Rules’, http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.119.9119&rep=rep1&type=pdf, Communications of the ACM, May 1966

[Brooks95] Frederick P. Brooks Jr., The Mythical Man-Month (anniversary ed.), Addison-Wesley Longman Publishing, 1995

[Dahl72] O.-J. Dahl, E. W. Dijkstra, C. A. R. Hoare, Structured Programming, Academic Press Ltd., 1972

[Dijkstra65] Edsger W. Dijkstra, ‘Solution of a problem in concurrent programming control’, Communications of the ACM, September 1965

[Dijkstra68] Edsger W. Dijkstra, ‘Go To Statement Considered Harmful’, https://homepages.cwi.nl/~storm/teaching/reader/Dijkstra68.pdf, Communications of the ACM, March 1968

[libunifex] Eric Niebler, et al., libunifex, https://github.com/facebookexperimental/libunifex, 2022

[Naur69] Peter Naur, Brian Randell, Software Engineering: Report on a conference sponsored by the NATO SCIENCE COMMITTEE, Garmisch, Germany, 7th to 11th October, 1968, 1969

[Niebler21a] Eric Niebler, ‘Working with Asynchrony Generically: A Tour of C++ Executors (part 1/2)’, CppCon, 2021, https://www.youtube.com/watch?v=xLboNIf7BTg

[Niebler21b] Eric Niebler, ‘Working with Asynchrony Generically: A Tour of C++ Executors (part 2/2)’, CppCon, 2021, https://www.youtube.com/watch?v=6a0zzUBUNW4

[P2300R4] Michał Dominiak et al., ‘std::execution’, 2022, http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2022/p2300r4.html

[P2300RefImpl] Bryce Lelbach, et al., ‘P2300 Reference implementation’, 2022, https://github.com/brycelelbach/wg21_p2300_std_execution

[P2504R0] Lucian Radu Teodorescu, ‘Computations as a global solution to concurrency’, 2021, http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2021/p2504r0.html

[Syme11] Don Syme, Tomas Petricek, Dmitry Lomov. ‘The F# asynchronous programming model’, International Symposium on Practical Aspects of Declarative Languages, Springer, 2011

[Teodorescu20] Lucian Radu Teodorescu, ‘The Global Lockdown of Locks’, Overload 158, August 2020, available online at https://accu.org/journals/overload/28/158/teodorescu/

[Teodorescu21] Lucian Radu Teodorescu, ‘Composition and Decomposition of Task Systems’, Overload 162, April 2021, available online at https://accu.org/journals/overload/29/162/teodorescu/

[Wikipedia] ‘Async/await’ on Wikipedia, https://en.wikipedia.org/wiki/Async/await, 2022

Footnotes

  1. This appears not to be true anymore. While writing this article, the C++ standard committee moved the P2300 proposal target release from C++23 to C++26.
  2. The paper uses the term computation in a different way to this article. There, a computation is defined to be equal to a sender, and then it is proven that it can represent arbitrary chunks of work. It’s the same idea, approached from a different perspective.
  3. This is a bad implementation of a Fibonacci function, for multiple reasons. We use it here just to exemplify the interaction between components, not to showcase how one would implement a parallel version of Fibonacci.

Lucian Radu Teodorescu has a PhD in programming languages and is a Software Architect at Garmin. He likes challenges; understanding the essence of things (if there is one) constitutes the biggest challenge of all.





