Last year saw a proliferation of talks and articles about safety in C++. Lucian Radu Teodorescu gives an overview of these and presents a unified perspective on safety.
Safety was a hot topic in 2023 for the C++ community. Leading experts took clear positions on its significance in the context of C++ and of systems programming languages in general. They explored various aspects, including general safety principles, functional safety, memory safety, and the intersections between safety and security and between safety and correctness. Many of these discussions were influenced by recent reports [NSA22, CR23, WH23a, EC22, CISA23a, CISA23b] that strongly criticised memory-unsafe languages.
In this context, it’s logical to revisit the primary safety discussions from last year and piece together a comprehensive understanding of safety in the context of C++. While experts may find common ground, they also have differing opinions. However, it’s likely that the nuances expressed by the authors hold greater significance than mere agreements or disagreements.
In this article, we will examine key C++ conference talks with a primary focus on safety, along with a brief mention of relevant podcasts. Our selection is limited to talks and podcasts from 2023. Subsequently, we will consolidate the insights and viewpoints of various authors into a unified perspective on safety in system languages.
Safety in 2023: a brief retrospective
Sean Parent, All the Safeties
In his presentation at C++Now [Parent23a], Sean Parent explains why it’s important to discuss safety in the C++ world, tries to define safety, argues that the C++ model needs to improve to achieve safety, and looks at a possible future of software development. The same talk was later delivered as a keynote at C++ on Sea [Parent23b].
Sean argues for the importance of safety by surveying a few recent US and EU reports that have begun to recognise safety as a major concern [NSA22, CR23, WH23a, EC22]. There are a few takeaways from these reports. Firstly, they identify memory safety as a paramount issue. The NSA report [NSA22], for instance, cites a Microsoft study noting that “70 percent of their vulnerabilities were due to memory safety issues”. Secondly, they highlight the inherent safety risks in the C and C++ languages, advocating the adoption of memory-safe languages. Lastly, these documents suggest a paradigm shift in liability towards software vendors. Under this framework, vendors may face accountability for damages resulting from safety lapses in their software.
Building on the reports that underscore the significance of safety, Sean delves into deciphering the meaning of ‘safety’ in the context of software development. After evaluating several inadequate definitions, he adopts a framework conceptualised by Leslie Lamport [Lamport77]. The idea is to express the correctness of the program in terms of two types of properties: safety properties and liveness properties. The safety properties describe what cannot happen, while the liveness properties indicate what needs to happen.
As highlighted in other talks that Sean gave (see, for example, ‘Exceptions the Other Way Round’ [Parent22]), safety composes. If all the operations in a program are safe, then the program is also safe (provided preconditions are not violated). Correctness, on the other hand, doesn’t compose like safety. This is why safety is an (easily) solvable problem.
Sean further elaborates on what constitutes memory safety in a language. Citing ‘The Meaning of Memory Safety’ [Amorim18], he argues that memory safety is what the authors of the paper call the frame rule. This rule is equivalent to the Law of Exclusivity, coined by John McCall [McCall17]. Sean then explains why C++ can never be a safe language. In general, any language that allows undefined behaviour is an unsafe language.
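To make this last point concrete, here is a minimal sketch of my own (not an example from the talk) showing why a single instance of undefined behaviour voids every safety guarantee:

```cpp
#include <cstdio>

int main() {
    int values[4] = {0, 1, 2, 3};
    int i = 4;          // one past the end of 'values'
    values[i] = 42;     // out-of-bounds write: undefined behaviour.
                        // The compiler may assume this never happens,
                        // so from this point on no safety property of
                        // the program can be relied upon.
    std::printf("%d\n", values[0]);
}
```

Because the language accepts this program, no property of the form ‘X cannot happen’ can be established for it; this is the sense in which a language that admits undefined behaviour cannot be safe.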
Herb Sutter, Fill in the blank: _________ for C++
In his C++Now keynote, ‘Fill in the blank: _________ for C++’ [Sutter23a], Herb Sutter presents the latest developments in his cppfront project, envisioned as a successor to C++. He presented this talk again, with minor variations, at CppCon 2023 under the title ‘Cooperative C++ Evolution: Toward a Typescript for C++’ [Sutter23b]. A primary objective of this new language is to significantly enhance safety. Herb sets an ambitious goal of improving safety 50-fold compared to C++. His plan for achieving this goal is to have bounds and null checking by default, guaranteed initialisation before use, and other smaller safety improvements (contracts, const by default, no pointer arithmetic, etc.).
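To give a flavour of the bug classes these defaults target, here is a plain C++ sketch of my own (cppfront’s actual syntax and diagnostics differ):

```cpp
#include <vector>

int main() {
    std::vector<int> v = {1, 2, 3};
    int sum;            // uninitialised: guaranteed initialisation
                        // before use would reject the read below
    sum += v[3];        // out of bounds: bounds checking by default
                        // would turn this into a runtime error
                        // instead of undefined behaviour
    int* p = nullptr;
    sum += *p;          // null dereference: null checking by default
                        // would catch this at the point of use
    return sum;
}
```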
The features that Herb presents in this talk bring fairly small safety improvements (especially compared to his previous talk [Sutter22]). However, I included Herb’s keynote because he advocates a gradual approach to safety and provides a clear metric for measuring progress towards the goal. One might say that this approach is more pragmatic.
As an interesting observation, Herb takes two different approaches to two important features of his language: safety and compatibility. While he advocates gradual adoption for safety, he advocates an all-or-nothing approach to compatibility (a good successor needs to be fully compatible with the previous language from day one).
Bob Steagall, Coding for Safety, Security, and Sustainability (panel discussion)
Safety was an important topic at C++Now, and the conference organised a panel with JF Bastien, Chandler Carruth, Daisy Hollman, Lisa Lippincott, Sean Parent and Herb Sutter [Steagall23].
The panellists disagreed on a definition of safety, and they disagreed on the relation between safety and security. Apart from that, however, there seemed to be a consensus on multiple points: it’s difficult to express safe/correct code in C++; safety is important to the future of C++; safety and performance are not incompatible; C++ experts need to pay more attention to the opinions of security experts; programmers, not just managers, share responsibility for delivering safe and secure code; and regulation of the industry is likely imminent.
One important point that Daisy puts forward is that there shouldn’t be a single answer for safety. She points out that the HPC community is not particularly interested in safety and security, and focuses solely on performance.
Chandler Carruth, Carbon’s Successor Strategy: from C++ interop to memory safety
In his presentation at C++Now 2023, titled ‘Carbon’s Successor Strategy: from C++ interop to memory safety’ [Carruth23], Chandler Carruth delved into the ongoing evolution of the Carbon language. As memory safety is a key objective for Carbon, a significant portion of his talk addressed the Carbon community’s strategy for handling safety.
In his talk, Chandler offers a different type of definition for safety, starting from bugs. Safety, according to Chandler, is the set of guarantees that the program provides in the face of bugs. Also according to Chandler, safety is not a binary state; rather, it can exist in varying degrees. Chandler defines memory safety as a mechanism that “limits program behaviour to only read or write intended memory, even in the face of bugs”. He then makes an important clarification: we may not want the entire language to be memory safe, but we may want a subset of the language to be memory safe. This subset should serve as a practical default, with unsafe constructs being the exception. Additionally, there should be a clear and auditable demarcation between the safe and unsafe elements of the language. Intriguingly, Chandler does not deem data-race safety a strict requirement for this safe subset, although he acknowledges it as an admirable objective to strive for.
The Carbon migration strategy that Chandler presented is a step-by-step process. Initially, the transition involves moving from unsafe C++ code to Carbon, potentially utilising some unsafe constructs. Subsequently, the strategy shifts towards adopting a safe subset of Carbon. This phased approach breaks down the transition from unsafe to safe code into more manageable steps, enabling an incremental migration process.
Throughout his talk, Chandler implicitly advocates the viewpoint that memory safety ought to be a fundamental expectation in programming languages. He suggests that software engineers should have the right to demand safety guarantees in their work.
JF Bastien, Safety & Security: the future of C++
Right from the outset of his C++Now keynote [Bastien23], JF Bastien presents a compelling argument: safety and security represent existential threats to C++. Software is central to modern society, and safety issues can have serious consequences, potentially even leading to loss of life. To reinforce his point, JF cites an extensive array of reports and articles, stressing the message that the C++ community cannot afford to neglect safety [NSA22, CR23, Gaynor18, Black21, Dhalla23, Claburn23, CISA23a, CISA23b].
JF draws a striking analogy in his talk: he compares programming in C++ to driving without seatbelts. He points out that the resistance within the C++ community towards memory safety mirrors the initial reluctance of the automotive industry to adopt seatbelts. His vision is for safe programming languages to become as universally accepted and life-saving as seatbelts. This perspective gradually evolves into an ethical argument. JF suggests that to truly adhere to our duty of avoiding harm, it’s essential to take the necessary steps to mitigate safety issues in programming as much as possible.
Later on, JF argues that we don’t have a common understanding of safety and attempts to provide a definition for what safety means; for him, safety is about type safety, memory safety, thread safety and functional safety.
Type safety “prevents type errors” (“attempts to perform operations on values that are not of the appropriate data type”). He references Robin Milner’s famous quote, “Well-typed programs cannot go wrong” [Milner78], to indicate the importance of type safety. Turning his attention to C++, he argues that it’s really hard in C++ to follow all the best practices regarding type safety; he also argues that it’s difficult to guarantee the absence of undefined behaviour, so C++ cannot be considered a type-safe language.
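As a concrete illustration (my example, not JF’s), C++ happily compiles operations on values that are not of the appropriate type:

```cpp
#include <cstdio>

int main() {
    float f = 3.14f;
    // Reinterpret the bytes of a float as an int. Reading through 'p'
    // violates the strict aliasing rules: a type error that the
    // language accepts, with undefined behaviour as the result.
    int* p = reinterpret_cast<int*>(&f);
    std::printf("%d\n", *p);
}
```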
For memory safety, JF references ‘The Meaning of Memory Safety’ [Amorim18] and defines memory safety as the absence of use-after-free and out-of-bounds accesses (he doesn’t include the use of uninitialised values). To the question of whether C++ has memory safety, the answer is no, C++ is not there yet; however, JF points out a few alternative ways in which C++ could achieve memory safety, each of which comes with trade-offs.
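For illustration (my sketch, not an example from the talk), both categories of memory safety issue are easy to produce in a few lines of C++:

```cpp
#include <vector>

int main() {
    std::vector<int> v = {1, 2, 3};
    int* first = &v[0];
    v.push_back(4);     // may reallocate, invalidating 'first'
    int a = *first;     // potential use-after-free: undefined behaviour
    int b = v[7];       // out-of-bounds read: undefined behaviour
    return a + b;
}
```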
Thread safety is defined as the absence of data races. As with the other types of safety, C++ doesn’t have a way of guaranteeing the absence of data races.
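A minimal data race, again as an illustration of mine:

```cpp
#include <thread>

int counter = 0;    // shared mutable state, no synchronisation

int main() {
    // Two threads modify 'counter' concurrently without any
    // synchronisation: a data race, which is undefined behaviour.
    std::thread t1([] { for (int i = 0; i < 1000; ++i) ++counter; });
    std::thread t2([] { for (int i = 0; i < 1000; ++i) ++counter; });
    t1.join();
    t2.join();
    // 'counter' may hold any value; using std::atomic<int> (or a
    // mutex) would restore a defined result of 2000.
}
```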
Regarding functional safety, he defines functional safety as “the systematic process used to ensure that failure doesn’t occur”. While the programming part of this is important, functional safety extends beyond it, to processes and people; having a “safe culture” (where the boss can hear bad news) is also important for functional safety. JF additionally argues that security is also needed to achieve functional safety.
In the rest of the talk, JF discusses the distinction between the two adversaries behind safety and security, the stochastic vs the smart adversary, and how the smart adversary may have a wide range of resources. He discusses many nuances of preventing and mitigating these types of issues. Towards the end of the talk, the subject of possible regulation is tackled; based on recent reports [OpenSSF22, WH23b, Hubert23], JF believes that our field will soon be regulated.
Andreas Weis, Safety-First: Understanding How To Develop Safety-critical Software
At C++Now, Andreas Weis talked about safety from a slightly different perspective, focusing on software development for safety-critical domains like the automotive industry. He delved into topics such as functional safety, existing regulations, processes and multiple methods of achieving safety [Weis23].
Andreas starts by defining safety. His definition is also inspired by Leslie Lamport [Lamport83]: “something bad does not happen”. As examples, Andreas gives partial correctness (“the program does not produce the wrong answer”), mutual exclusion and deadlock freedom. He also defines safety from the perspective of functional safety, as defined by ISO 26262:2018 [ISO26262]. The ISO standard defines safety as the “absence of unreasonable risk”, unreasonable risk as “risk judged to be unacceptable in a certain context according to valid societal moral concepts”, and risk as the “combination of the probability of occurrence of harm and the severity of that harm”. Crucial to this definition are the notion of ‘unreasonable’ and the probability factor in risk assessment. The ISO processes require a thorough risk evaluation to determine the significance of each risk.
Another important point that Andreas draws attention to is that preventing a safety fault is just one way of dealing with the fault. There are other ways to deal with it (controlling the impact, designing fault-tolerant systems, increasing controllability, etc.).
Andreas also explains that defining the intended functionality of a system is important; there may not be universal guarantees for a system. He also briefly sketches the distinctions and commonalities between safety and security.
Much of Andreas’s talk was dedicated to discussing the processes mandated by ISO for addressing safety concerns. Pertinent to our discussion are the sections where these processes dictate coding requirements. While the standards mention some specific items, they primarily mandate companies to develop coding standards that address various safety concerns.
Bjarne Stroustrup: Approaching C++ Safely
In the Core C++ opening keynote, ‘Approaching C++ Safely’ [Stroustrup23a], Bjarne Stroustrup presented a blend of his own ideas and the standards committee’s work on approaching safety in C++ (referencing [P2759R1, P2739R0, P2816R0, P2687R0, P2410R0]). He tries to capture the many nuances of safety, discusses the evolution of C++ towards safety and also a possible future for C++ regarding safety. He delivered roughly the same talk as a CppCon keynote in October 2023 [Stroustrup23b].
From the beginning of the talk, Bjarne argues that safety is not just one thing but a set of things; he lists some of the things that safety means, but does not define any of the terms discussed. He acknowledges that the recent NSA report [NSA22] is a cause for concern and that C++ can be massively improved in terms of safety. The approach that Bjarne suggests relies on guidelines and tooling; this puts most of the responsibility on the users, not on the committee members. His response to criticism, evident in the first keynote [Stroustrup23a], was marked by a notably sharp tone.
A major part of the talk discusses the evolution of C++ and how, over the decades, it has improved safety compared to C. Most of these ideas were also presented, in different forms, by Bjarne at various conferences before this ‘safety crisis’.
After discussing the evolution of C++ so far, the talk moves on to the C++ Core Guidelines (the present) and safety profiles (the future). The guidelines can help users write safer code, while the profiles can enforce the guidelines with appropriate tooling (static analysers). Using profiles will allow gradual improvements in safety.
Throughout the talk, Bjarne concedes that some problems are hard, and that for some of them we may not get static checks any time soon. The references made in the talk [P2759R1, P2739R0, P2816R0, P2687R0, P2410R0] do not offer clear guarantees that all the safety issues of C++ will be tackled. The references slide included a 2015 paper with the note “we didn’t start yesterday”, underscoring the slow pace of safety improvements. This leaves the impression that fully resolving C++’s safety issues is likely to be a prolonged endeavour.
Timur Doumler, C++ and Safety
Timur Doumler gave a talk called ‘C++ and Safety’ both at C++ on Sea [Doumler23a] and at CppNorth [Doumler23b], explaining his perspective on safety. While his approach is similar to what others said at C++Now 2023, he has some new takes on safety and C++, more specifically on the importance of safety to C++.
In the first part of the talk, he gives a taxonomy around safety, touching on functional safety, language safety, correctness (total and partial), and the relation between undefined behaviour and safety. He focuses on language safety: he defines a program as language safe if it has no undefined behaviour, and considers a programming language safe if it cannot express programs that are not language safe.
He explores different types of safety issues (type safety, bounds safety, lifetime safety, initialisation safety, thread safety, arithmetic safety and definition safety), provides examples, and discusses them in terms of trade-offs. A language can ban undefined behaviour, but, he argues, doing so would have other negative consequences and would break some existing programs.
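For instance (my example, not one of Timur’s), arithmetic safety covers cases like signed integer overflow, where simply defining the behaviour (say, as wrapping) would change the meaning of existing programs and inhibit some optimisations:

```cpp
#include <limits>

int next(int x) {
    return x + 1;   // if x == INT_MAX, signed overflow:
                    // undefined behaviour in standard C++
}

int main() {
    return next(std::numeric_limits<int>::max());
}
```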
Towards the end of the talk, Timur starts to draw some conclusions. First, C++ has too much undefined behaviour for it to become a safe language; however, the industry has developed tools and practices that make this a smaller problem. His second assertion is that compromising on performance might pose a greater threat to C++ than compromising on safety. This leads to his third conclusion: C++ is not doomed if it fails to become a memory-safe language.
To back up these claims, he presents some data. First, he looks at the number of vulnerabilities per language: while C and C++ are often quoted together as having a memory safety problem, there is a large gap between the two languages. Of the total vulnerabilities examined, 46.9% are in C, while only 5.23% are in C++. Other languages, like PHP, Java, JavaScript and Python, have more vulnerabilities than C++; Java, considered a safe language, has an 11.4% share of vulnerabilities, more than twice that of C++.
Then, he presents the results of a survey that he ran to determine the importance of safety for C++ users. The main conclusion is that “today, C++ developers generally do not perceive undefined behaviour as a business-critical problem”.
Robert Seacord, Safety and Security: The Future of C and C++
Another important talk related to safety was Robert Seacord’s keynote at NDC TechTown, entitled ‘Safety and Security: The Future of C and C++’ [Seacord23]. The talk was based on Bastien’s ‘Safety & Security: the future of C++’ [Bastien23], and most of the ideas are repeated. Beyond the ideas presented in Bastien’s talk, Robert, the convenor of the ISO C standardisation working group, added content to cover C as well, not just C++.
Gabor Horvath, Lifetime Safety in C++: Past, Present and Future
In his CppCon talk [Horvath23], Gabor Horvath discusses some possible approaches to improving lifetime safety. He deliberately avoids explaining once more why safety is important, only briefly mentioning a few C++Now talks [Bastien23, Parent23a, Steagall23, Weis23] and a few reports [NSA22, CR23].
What is interesting in Gabor’s talk is the distinction between safe by construction and opportunistic bug finding (Gabor also has a third category, named the hybrid approach, but I failed to understand how it differs from opportunistic bug finding). In a safe-by-construction language, the expressible programs are guaranteed to be safe. This may reject safe programs if the compiler cannot prove that they are safe; for this reason, such languages often allow escape hatches. In an opportunistic bug-finding approach, on the other hand, the language allows all programs, whether they are safe or not; then, compiler warnings, static analysers or other tools may identify safety issues. In this model, we incrementally move to safer code as the tools suggest safer constructs. The major downside is that there will be unsafe programs that we won’t be able to detect.
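To make the distinction concrete, consider this sketch of mine (not an example from the talk):

```cpp
// Returns a reference to a local variable: the reference dangles
// as soon as the function returns.
const int& min_ref(int a, int b) {
    int m = a < b ? a : b;
    return m;       // dangling: 'm' is destroyed here
}

int main() {
    const int& r = min_ref(1, 2);
    return r;       // undefined behaviour: the referent no longer exists
}
```

A safe-by-construction language would reject this program at compile time, because it cannot prove that the reference outlives its referent. An opportunistic approach accepts it and relies on warnings and analysers; most compilers do warn here, but only because this particular pattern is easy to spot.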
Gabor spends a fair amount of time discussing recent improvements in C++ (or, rather, in the MSVC and Clang compilers) for lifetime safety. What I found interesting in this section is not necessarily the recent improvements (although they are great), but the many ways in which we can write unsafe code. My takeaway is that the more features we add to C++, the more ways of expressing unsafe code we create, making it harder for people to reason about their programs.
Podcasts
In terms of podcasts, the safety theme appeared on many episodes. Out of all these episodes, I’ve selected a few where safety plays a central role: CppCast’s ‘Safety Critical C++ with Andreas Weis’ [CppCast356], CppCast’s ‘Safety, Security and Modern C++ with Bjarne Stroustrup’ [CppCast365] and ADSP’s ‘Rust & Safety at Adobe with Sean Parent’ [ADSP160]. They are all worth listening to.
Putting it all together
From correctness to safety
Let’s assume that we have a complete specification for a program we want to build. We consider a program to be functionally correct if for every input given to the program, it produces an output, and that output satisfies the specification; this is sometimes referred to as total correctness. A program is deemed partially correct if, for every input, should the program produce an output, this output must conform to the specification. Note that partial correctness does not guarantee termination, which makes it easier to reason about.
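A small illustration of the difference (my example): the following function is partially correct with respect to the specification ‘returns the number of Collatz steps needed to reach 1’, because whenever it returns, the answer matches the specification. It is not known to be totally correct, since termination for every input is an open problem:

```cpp
// Counts the steps of the Collatz iteration until n reaches 1.
// Partially correct: any returned value is right (ignoring overflow
// for very large n). Totally correct only if the iteration terminates
// for every input, which nobody has proven.
unsigned long collatz_steps(unsigned long n) {
    unsigned long steps = 0;
    while (n != 1) {
        n = (n % 2 == 0) ? n / 2 : 3 * n + 1;
        ++steps;
    }
    return steps;
}
```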
Ideally, programs would be functionally correct. However, ensuring this for most programs is not feasible. In fact, even achieving partial correctness is a challenge for many programs. Furthermore, defining a complete specification for a problem is often as complex as implementing the solution itself. Therefore, having completely correct programs is not practically achievable. But this does not mean we should resign ourselves to letting our programs behave unpredictably. We must constrain them to ensure only a reasonable set of outcomes are possible.
Consider a self-driving car that needs to travel from point A to point B. We cannot guarantee the car will complete its journey since it might break down. However, we aim to ensure that, under normal operating conditions, the car doesn’t, for instance, continuously accelerate uncontrollably or start driving off-road. This leads us to express guarantees in terms of ‘X shouldn’t happen’.
Sean [Parent23a] references a paper by Leslie Lamport [Lamport77] which suggests breaking down correctness into two types of properties: safety properties (what must not happen) and liveness properties (what must happen). Thus, our first definition of safety is that aspect of correctness concerned with what must not happen. In the example above, not continuously accelerating and not driving off-road are safety properties.
Because there is a potentially infinite number of negative properties, a clear definition of safety is not possible. We might have different types of safety, and we should always keep in mind the goals of our programs. As Sean and Timur argue, safety is just an illusion [Parent23a, Doumler23a]. There are always limitations to what safety properties can express.
All safety properties should, in one way or another, contain the condition ‘if operating under intended usage parameters’. There is no guarantee that software can make if the hardware misbehaves. For example, a car may continuously accelerate if cosmic rays cause the hardware to command continuous acceleration (ignoring any input from the software); or a car may drive off-road if it’s teleported off the road at high speed. For practical reasons, we should always assume our safety properties are qualified to exclude unintended usage behaviours.
If we want safety properties (that are not defined in probabilistic terms) to always hold, then this definition of safety excludes the gradual safety adoption that Herb advocates.
Once we address the issue of intended usage, we must consider the implications of any safety property for a program. The presence of even a single non-probabilistic safety property implies that undefined behaviour in our programs is unacceptable. Undefined behaviour means anything can happen, including violations of the safety property. Therefore, discussing safety in a system where undefined behaviour is possible is fundamentally flawed.
Functional safety
There is another path that leads to defining safety, more specifically to what is called functional safety, coming from regulated industries like automotive. Andreas and JF provide a good overview of functional safety [Weis23, Bastien23]. I will slightly alter the definition to make it more general.
We will define harm as physical, moral or financial injury or damage to individuals or companies. In automotive, this is typically defined as “physical injury or damage to the health of persons” [Weis23]. If software leads to moral injury or to customers losing money, by our definition this is called harm. Following Andreas, we will then define risk as the “probability of the occurrence of harm weighted by the severity”, and unreasonable risk as risk that is “unacceptable according to societal moral concepts”. This leads us to define the safety of a system as the process of ensuring the absence of unreasonable risk.
There are several points to notice about this way of defining safety. First, we define safety at the system level; this means that we might have unsafe components in the system if, overall, the system is safe. Second, we are talking about processes; this implies that C code, which may theoretically contain undefined behaviour, can be rendered safe by applying processes that (probabilistically) ensure that undefined behaviour does not occur in practice. Finally, this definition relies on the probability of harm occurring and on “societal moral concepts”, rendering the entire definition subjective.
The subjectivity of this definition of safety makes it harder to work with in practice, and especially if we want to apply safety at the programming language level. Andreas outlines a thorough process by which car manufacturers can certify their systems for functional safety, but this approach may be too heavyweight.
On the other hand, the main thing I like about this way of defining safety is that it revolves around the ‘why?’ question. It tells us why it’s important to have safety guarantees, lets us choose which guarantees we should have, and allows us to prioritise them. While our previous definition of safety allowed us to select any negative property as being part of safety, this definition encourages us to consider the important properties.
The reader should note that this definition of safety is equivalent to the first one if we properly encode the probabilities and the ‘societal moral concepts’ in the negative properties.
Security
While there is general consensus that safety and security are interrelated, the nature of their relationship is a subject of debate among C++ experts [Steagall23]. By the two definitions we’ve listed above, security needs to be a part of safety.
For simplicity, let’s define security as the protection of software systems from malicious attacks that may result in unauthorised information disclosure or other damage to the software system. This definition expresses security as a property of the software system in terms of things that are not allowed to happen; thus, security is part of safety.
Starting from the second definition of safety, we can say that a security attack produces harm, so preventing this harm is part of safety.
To classify security as a subset of safety, the system’s specifications must align with safety principles. For instance, if a program’s specifications permit unauthenticated access to data, this could create a conflict between safety and security. However, if both safety and security are defined solely in the context of the program’s specifications, then there should be no discrepancy between the two.
Another important discussion point, one that also applies to safety, is intended use, discussed above. Sometimes, security issues don’t stem directly from the software itself, but from the surrounding system. For example, a security breach might be feasible due to insufficient security at the hardware level. Such scenarios fall outside the scope of the safety properties that can be ascribed to the software system itself. However, if the software in question amplifies the damage produced (compared to what is reasonably feasible), the software may still be considered unsafe.
Regulation and ethical perspective
Starting from the recent reports on safety and security [NSA22, CR23, WH23a, EC22, CISA23a, CISA23b], many of the authors cited here believe that our software industry needs to be regulated.
Consider the scenario where someone buys a phone that fails to function properly; typically, the customer is entitled to return the phone and receive a replacement, depending on the country’s consumer protection laws. However, if a software update bricks the phone, the software company is often not obligated to rectify the issue, as noted in a story from Conor Hoekstra [ADSP160]. This situation seems unjust, particularly given the ubiquity of software and its increasing significance. Therefore, it is anticipated that the software industry will eventually be held accountable for its failures.
Indeed, the National Cybersecurity Strategy document issued by the White House [WH23a] has, among others, the following strategic objectives: “Hold the Stewards of Our Data Accountable”, “Shift Liability for Insecure Software Products and Services”, and “Leverage Federal Procurement to Improve Accountability”. In Europe, the European Commission produced a proposal [EC22] that states:
It is necessary to improve the functioning of the internal market by laying down a uniform legal framework for essential cybersecurity requirements for placing products with digital elements on the Union market.
The same issue can also be seen from an ethical perspective. Buggy software can produce harm (physical, moral, or financial). The engineers who produced and/or allowed those bugs are morally responsible for the harm they produce. And we can conclude – according to our second definition of safety, i.e., the bugs that create harm are safety issues – that engineers are morally responsible for safety issues.
Safety for programming languages
It’s not obvious how the notion of safety applied to systems or to programs applies to programming languages (we will mainly focus here on system programming languages). The fact that a software system must not exhibit a specific problem doesn’t mean that the language should prevent this problem or that there can’t be sub-systems that have this problem. For example, the software system may detect faults and control their impact.
However, from a practical standpoint, it makes sense to add as many guarantees as possible at the language level, so that we don’t spend too much energy addressing safety issues at the system level. For example, it’s a very hard problem to mitigate undefined behaviour in one component to make the entire system safe; after all, undefined behaviour can mean tricking the rest of the system into believing that everything is fine while hiding a safety issue.
While at the language level we can’t guarantee all the safety properties, there are a few that we can enforce at that level. We can enforce the absence of undefined behaviour (which includes memory safety, thread safety and arithmetic safety).
And, while we are here, I’ll attempt to provide a different definition of memory safety that doesn’t rely on enumerating different types of issues (type safety, bounds safety, lifetime safety, initialisation safety). We can define memory safety as the absence of undefined behaviour caused by accessing memory (for reading or writing). Other authors (for example, see [Doumler23a]) put type safety as part of memory safety. With my definition of memory safety, one can have type safety issues without having memory safety issues.
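Under this definition, the following fragment (my illustration) exhibits a type safety issue with no memory safety issue: every access touches valid memory and is well-defined, yet the bits of an int end up interpreted as a float:

```cpp
#include <cstdio>
#include <cstring>

int main() {
    int i = 1065353216;   // the bit pattern of 1.0f (0x3F800000)
    float f;
    std::memcpy(&f, &i, sizeof f);  // well-defined in C++: no undefined
                                    // behaviour, all accesses in bounds
    std::printf("%f\n", f);         // prints 1.000000: type confusion
                                    // without memory unsafety
}
```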
Avoiding deadlocks is another safety property that some programming languages strive for; there are solutions that provide general concurrency mechanisms while avoiding deadlocks, but these solutions are not widely deployed.
In addition to the absence of undefined behaviour, a safe programming language may also provide mechanisms for dealing with failed preconditions. When preconditions are not satisfied, there is a bug in the program. The most reasonable path forward for the software is to terminate or raise an error; the idea is to immediately get outside the area in which the bug was detected. See [Parent22] for an insightful discussion on this topic.
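A minimal sketch of this idea (the helper name is mine, not a standard API): check the precondition at the function boundary and terminate immediately on violation, so that execution never continues past a detected bug:

```cpp
#include <cstdio>
#include <cstdlib>

// Hypothetical precondition check: report the violation and terminate
// rather than letting a known-buggy program run on.
inline void expects(bool condition, const char* message) {
    if (!condition) {
        std::fprintf(stderr, "precondition violated: %s\n", message);
        std::abort();
    }
}

int divide(int a, int b) {
    expects(b != 0, "divisor must be non-zero");
    return a / b;   // safe: the precondition guarantees b != 0
}
```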
A safe language ensures that safety properties always hold. However, such guarantees can limit the language’s expressiveness; there are safe constructs that the compiler cannot definitively verify as safe. Therefore, a common strategy in language design is to divide the language into two parts: a safe subset (where the compiler verifies safety properties) and an unsafe subset (where compiler guarantees are relaxed).
With all this in mind, we can define a safe programming language as a programming language that:
- has a safe subset of the language that guarantees:
  - no undefined behaviour
  - type safety
  - (optional) absence of deadlocks
  - (optional) safe handling of precondition failures, when detected
- makes the safe subset distinct from the rest of the language (the unsafe subset), with this distinction being visible and auditable
Achieving a safe programming language is feasible. From an ethical perspective, it is also desirable. We are beginning to see examples of C++ components that require efficiency being successfully rewritten in Rust with minimal efficiency loss. Thus, my conclusion is that a larger portion of the systems programming community will likely (and, for ethical reasons, also should) shift towards safe programming languages, whether partially or entirely.
Quo vadis, C++?
C++ has too much undefined behaviour to become a safe programming language in the foreseeable future. One way or another, all the C++ experts cited here agree on that. This means that C++ can only make partial improvements towards the direction of a safe programming language; while it may address some safety issues, it can never guarantee basic safety.
We’ve already seen that companies and products are seriously considering moving parts of their software away from C++ to Rust. The main question is whether C++ can do something to stop this trend, and maybe reverse it. Personally, I am not convinced that C++ can do anything in the near future to stop it. C++ will leak talent to other languages (currently Rust, but perhaps in the future Cppfront, Carbon, Hylo or Swift). If the progress towards safety started in 2015, as Bjarne suggested, the last eight years have seen very little progress in safety improvements. Even with accelerated efforts, the three-year release cycle and the slow adoption of new standards will keep C++ a decade away from addressing major safety concerns.
If safety remains as critical as it is today, then C++ will bleed engineers, reducing its importance in the programming-language landscape. However, after some time, there will be a point at which this bleeding is no longer significant. We still have COBOL and Fortran code being maintained, so we cannot expect C++ to simply disappear. The key questions are: how long will this transition take, and how much of core C++ will remain when the bleeding is over?
The answers depend on C++’s ability to restore credibility among its core user base. Again, all things being equal, I believe the transition will probably take more than a decade; during this time, C++ usage will decrease, especially for new software, potentially relegating C++ to a language for legacy code. If this happens, it could start a negative reinforcement loop, making C++ even less attractive for new projects.
A factor that could halt this process is the inability of other languages to effectively operate in the systems programming space. Rust is relatively young and not fully vetted; if it proves inefficient, this might slow or halt the migration from C++ to Rust. The stability of the language and ecosystem is another crucial factor.
One major selling point for C++ for new projects is efficiency. People are still sceptical that other languages can truly compete with C++ in that space. For example, people who need low latency (game development, trading, etc.) or who work in HPC may never be convinced to switch away from C++ for this reason. We still don’t have enough data to say whether another language that provides safety can compete with C++ in this area.
There is another reason that may slow down the transition away from C++: the inertia of some industries. Just as some industries and companies were very slow to move from C to C++, there may be companies that have invested so heavily in C++ that they can’t afford to switch to another language.
A more likely scenario is the segregation of C++ codebases and the rewriting of parts in other languages. Some components might continue to be written in C++, while others, where safety is more critical than peak performance, could migrate to safer languages. This is similar to the approach taken by Microsoft, Linux, and Adobe, which are starting to migrate parts of their codebase to Rust.
Assuming there is a migration away from C++, another important question is: how would the migration happen? We may see multiple approaches, from migrating entire systems, to incrementally migrating components of a bigger system, to smoother transitions similar to the ones Chandler and Herb discuss [Carruth23, Sutter23a]. I envision that we will soon see tools that help migrate C++ code to safer languages.
Only time will tell how all this will evolve. While a decade may seem long, it’s also just around the corner. Our focus will soon shift to other pressing topics, and before we know it, we may see a predictable safety narrative for C++ and other languages.
Reasonable safety
Safety is no longer a luxury. As the world increasingly depends on software, the importance of software safety cannot be overstated. Therefore, consumers are justified in expecting software safety to be a given. This places a responsibility on us to ensure safety is guaranteed.
To achieve safety for programs written in languages like C++, we rely on heavyweight processes. The more we shift these guarantees left, into the compilers, the easier it will be for us to provide safety.
Programming language safety is no longer difficult to achieve. We have proven experience with languages that are safe by default, i.e., that avoid undefined behaviour, while also being efficient.
Our aspiration is for all our software to be reasonable: easy to understand, reliably good, and free of surprises. We want safety to be reasonable, and we want safety to be the default setting in a programming language. This should be a right of both consumers and of us, the programmers.
References
[ADSP160] Conor Hoekstra, Bryce Adelstein Lelbach, Sean Parent, ‘Rust & Safety at Adobe with Sean Parent’, ADSP: The Podcast, episode 160, Dec 2023, https://adspthepodcast.com/2023/12/15/Episode-160.html
[Amorim18] Arthur Azevedo de Amorim, Cătălin Hriţcu, Benjamin C. Pierce. ‘The meaning of memory safety’, Principles of Security and Trust: 7th International Conference, POST 2018, Held as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2018, Apr 2018.
[Bastien23] JF Bastien, ‘Safety and Security: The Future of C++’, C++Now, May 2023, https://www.youtube.com/watch?v=Gh79wcGJdTg
[Black21] Paul E. Black, Barbara Guttman, Vadim Okun, ‘Guidelines on Minimum Standards for Developer Verification of Software’, National Institute of Standards and Technology, NISTIR 8397, Oct 2021, https://nvlpubs.nist.gov/nistpubs/ir/2021/NIST.IR.8397.pdf
[Carruth23] Chandler Carruth, ‘Carbon’s Successor Strategy: From C++ interop to memory safety’, C++Now, May 2023, https://www.youtube.com/watch?v=1ZTJ9omXOQ0
[CISA23a] Cybersecurity and Infrastructure Security Agency, ‘Shifting the Balance of Cybersecurity Risk: Principles and Approaches for Security-by-Design and -Default’, Apr 2023, https://www.cisa.gov/sites/default/files/2023-10/SecureByDesign_1025_508c.pdf
[CISA23b] Cybersecurity and Infrastructure Security Agency, ‘Secure by Design’, Apr 2023, https://www.cisa.gov/securebydesign
[Claburn23] Thomas Claburn, ‘Memory safety is the new black, fashionable and fit for any occasion’, The Register, Jan 2023, https://www.theregister.com/2023/01/26/memory_safety_mainstream/
[CppCast356] Timur Doumler, Phil Nash, Andreas Weis, ‘Safety Critical C++’, CppCast, episode 356, Mar 2023, https://cppcast.com/safety-critical-cpp/
[CppCast365] Timur Doumler, Phil Nash, Bjarne Stroustrup, ‘Safety, Security and Modern C++, with Bjarne Stroustrup’, CppCast, episode 365, Jul 2023, https://cppcast.com/safety_security_and_modern_cpp-with_bjarne_stroustrup/
[CR23] Yael Grauer (Consumer Reports), ‘Future of Memory Safety: Challenges and Recommendations’, Jan 2023, https://advocacy.consumerreports.org/wp-content/uploads/2023/01/Memory-Safety-Convening-Report.pdf
[Dhalla23] Amira Dhalla, ‘Fireside Chat: The State of Memory Safety’, with Yael Grauer, Alex Gaynor, Josh Aas, USENIX Enigma 2023, Feb 2023, https://www.youtube.com/watch?v=b1I8qGYCx3c
[Doumler23a] Timur Doumler, ‘C++ and Safety’, C++ on Sea, Jun 2023, https://www.youtube.com/watch?v=imtpoc9jtOE
[Doumler23b] Timur Doumler, ‘C++ and Safety’, CppNorth, Jul 2023, https://www.youtube.com/watch?v=iCP2SFsBvaU
[EC22] European Commission, ‘Proposal for a REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL on horizontal cybersecurity requirements for products with digital elements and amending Regulation (EU) 2019/1020’, Document 52022PC0454, Sep 2022, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52022PC0454&qid=1703762955224
[Gaynor18] Alex Gaynor, ‘The Internet Has a Huge C/C++ Problem and Developers Don’t Want to Deal With It’, Vice, 2018, https://www.vice.com/en/article/a3mgxb/the-internet-has-a-huge-cc-problem-and-developers-dont-want-to-deal-with-it
[Horvath23] Gabor Horvath, ‘Lifetime Safety in C++: Past, Present and Future’, CppCon, Oct 2023, https://www.youtube.com/watch?v=PTdy65m_gRE
[Hubert23] Bert Hubert, ‘The EU’s new Cyber Resilience Act is about to tell us how to code’, Mar 2023, https://berthub.eu/articles/posts/eu-cra-secure-coding-solution/
[ISO26262] ISO 26262:2018, ‘Road vehicles – Functional safety’, 2018.
[Lamport77] Leslie Lamport. ‘Proving the correctness of multiprocess programs’, IEEE transactions on software engineering 2, 1977, https://www.microsoft.com/en-us/research/publication/2016/12/Proving-the-Correctness-of-Multiprocess-Programs.pdf
[Lamport83] Leslie Lamport, ‘What good is temporal logic?’ IFIP congress, 1983, http://lamport.azurewebsites.net/pubs/what-good.pdf
[McCall17] John McCall, ‘Swift ownership manifesto’, 2017. https://github.com/apple/swift/blob/main/docs/OwnershipManifesto.md
[Milner78] Robin Milner, ‘A theory of type polymorphism in programming’, Journal of computer and system sciences, 1978, https://www.sciencedirect.com/science/article/pii/0022000078900144/pdf?md5=cdcf7cdb7cfd2e1e4237f4f779ca0df7&pid=1-s2.0-0022000078900144-main.pdf&_valck=1
[NSA22] National Security Agency, ‘Software Memory Safety’, Nov 2022, https://media.defense.gov/2022/Nov/10/2003112742/-1/-1/0/CSISOFTWAREMEMORYSAFETY.PDF
[OpenSSF22] OpenSSF, ‘The Open Source Software Security Mobilization Plan’, May 2022, https://openssf.org/oss-security-mobilization-plan/
[P2410R0] Bjarne Stroustrup, ‘P2410R0: Type-and-resource safety in modern C++’, WG21, Jul 2021, https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2021/p2410r0.pdf
[P2687R0] Bjarne Stroustrup, Gabriel Dos Reis, ‘P2687R0: Design Alternatives for Type-and-Resource Safe C++’, WG21, Oct 2022, https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2022/p2687r0.pdf
[P2739R0] Bjarne Stroustrup, ‘P2739R0: A call to action: Think seriously about “safety”; then do something sensible about it’, WG21, Dec 2022, https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p2739r0.pdf
[P2759R1] H. Hinnant, R. Orr, B. Stroustrup, D. Vandevoorde, M. Wong, ‘P2759R1: DG Opinion on Safety for ISO C++’, WG21, Jan 2023, https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p2759r1.pdf
[P2816R0] Bjarne Stroustrup, ‘P2816R0: Safety Profiles: Type-and-resource Safe programming in ISO Standard C++’, WG21, Feb 2023, https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p2816r0.pdf
[Parent22] Sean Parent, ‘Exceptions the Other Way Round’, C++Now, May 2022, https://www.youtube.com/watch?v=mkkaAWNE-Ig
[Parent23a] Sean Parent, ‘All the Safeties’, C++Now, May 2023, https://www.youtube.com/watch?v=MO-qehjc04s
[Parent23b] Sean Parent, ‘All the Safeties’, C++ on Sea, Jun 2023, https://www.youtube.com/watch?v=BaUv9sgLCPc
[Seacord23] Robert Seacord, ‘Safety and Security: The Future of C and C++’, NDC TechTown, Sep 2023, https://www.youtube.com/watch?v=DRgoEKrTxXY
[Steagall23] Bob Steagall, ‘Coding for Safety, Security, and Sustainability’, panel discussion with JF Bastien, Chandler Carruth, Daisy Hollman, Lisa Lippincott, Sean Parent and Herb Sutter, C++Now, May 2023, https://www.youtube.com/watch?v=jFi5cILjbA4
[Stroustrup23a] Bjarne Stroustrup, ‘Approaching C++ Safely’, Core C++, Aug 2023, https://www.youtube.com/watch?v=eo-4ZSLn3jc
[Stroustrup23b] Bjarne Stroustrup, ‘Delivering Safe C++’, CppCon, Oct 2023, https://www.youtube.com/watch?v=I8UvQKvOSSw
[Sutter22] Herb Sutter, ‘Can C++ be 10× simpler & safer…?’, CppCon, Oct 2022, https://www.youtube.com/watch?v=ELeZAKCN4tY
[Sutter23a] Herb Sutter, ‘Fill in the blank: _________ for C++’, C++Now, May 2023, https://www.youtube.com/watch?v=fJvPBHErF2U
[Sutter23b] Herb Sutter, ‘Cooperative C++ Evolution: Toward a Typescript for C++’, CppCon, Oct 2023, https://www.youtube.com/watch?v=8U3hl8XMm8c
[Weis23] Andreas Weis, ‘Safety-First: Understanding How To Develop Safety-critical Software’, C++Now, May 2023, https://www.youtube.com/watch?v=mUFRDsgjBrE
[WH23a] White House, ‘National Cybersecurity Strategy’, Mar 2023, https://www.whitehouse.gov/wp-content/uploads/2023/03/National-Cybersecurity-Strategy-2023.pdf
[WH23b] White House, ‘National Cybersecurity Strategy’ (press release), Mar 2023, https://www.whitehouse.gov/briefing-room/statements-releases/2023/03/02/fact-sheet-biden-harris-administration-announces-national-cybersecurity-strategy/
Lucian Radu Teodorescu has a PhD in programming languages and is a Staff Engineer at Garmin. He likes challenges; and understanding the essence of things (if there is one) constitutes the biggest challenge of all.