Making sense of software is challenging. Lucian Radu Teodorescu and Sean Parent show how local reasoning can help.
Programs are becoming increasingly complex, while our mental capacities remain constant. We are well past the point where a single person can understand the entirety of an average-sized software system. In this context, it is worth revisiting what Dijkstra called “mental aids” [Dahl72].
One of the most valuable techniques that support our ability to reason about software is local reasoning. While the term itself does not appear explicitly in Dijkstra’s writings, it is almost as though he had the concept in mind when contributing to Structured Programming [Dahl72]. Hoare also explored the idea of reasoning about the correctness of programs by analysing small code blocks [Hoare69], which laid the conceptual groundwork for local reasoning. However, the term local reasoning first appeared in the literature in 2001, in the article ‘Local Reasoning about Programs that Alter Data Structures’ [O’Hearn01].
Local reasoning refers to the ability to analyse and verify a defined unit of code in isolation – without needing to understand all the contexts in which it is used, or the details of the code it depends on. By “unit of code”, we typically mean functions or classes (though the concept can be extended to other organisational constructs, such as namespaces). These units must have well-defined APIs that separate the code into client-side usage (e.g., calling a function or instantiating a class) and implementation-side logic (e.g., the internal details of the function or class). Furthermore, the implementation-side logic can itself be subdivided into the focal unit of code and the dependencies it relies upon.
By applying local reasoning, one can more easily understand code, as it separates the unit being analysed from its clients and its dependencies. Having code segments that can be reasoned about independently provides an excellent mental aid, allowing us to incrementally fit code into our limited cognitive resources.
The aim of this article is to show that local reasoning can also support proving the correctness of code. This is significant because, unlike safety, correctness does not compose. Let us explain. If two components, A and B, are safe, then composing them preserves safety. However, if A and B are correct, we cannot guarantee that combining them will result in a correct program. For example, suppose A produces measurements in imperial units, and B consumes values assuming they are in metric units. Individually, A and B may be correct, but used together, they yield incorrect behaviour. This exact type of mismatch contributed to the crash of NASA’s Mars Climate Orbiter [Wikipedia].
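The unit-mismatch failure can be made concrete with a small sketch (the function names and numbers below are ours, purely illustrative): each function honours its own contract, yet the composition is wrong.

```cpp
#include <cassert>

// Component A: reports a measured distance. Its contract says the
// result is in miles, and the implementation honours that contract.
double measured_distance_miles() { return 100.0; }

// Component B: computes travel time in hours. Its contract says the
// input is in kilometres, and it honours that contract too.
double travel_time_hours(double distance_km) {
  const double speed_km_per_hour = 50.0;
  return distance_km / speed_km_per_hour;
}

// Each component is individually correct, but composing them silently
// treats miles as kilometres: 100 miles is about 160.9 km, so the
// right answer is roughly 3.2 hours, yet this returns 2.0.
double wrong_estimate() {
  return travel_time_hours(measured_distance_miles());
}
```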
Properties
Let us begin with the notion of correctness. Following Leslie Lamport [Lamport77], we define correctness as the combination of liveness and safety properties that describe the behaviour of a program or specification. Liveness properties assert that something desirable eventually happens; safety properties assert that something undesirable never happens. For example, in a program that sorts a list of words, a liveness property might be that the output is a permutation of the input sequence, while a safety property might be that the program does not exhibit undefined behaviour.
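For the sorting example, both kinds of properties can be checked mechanically. The sketch below (the function name is ours, not from any standard library) tests the permutation property together with the ordering property:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// A sketch of checking the essential properties of a sorting routine:
// the output must be a permutation of the input, and it must be ordered.
bool is_correct_sort_result(std::vector<int> input,
                            const std::vector<int>& output) {
  bool permutation = std::is_permutation(input.begin(), input.end(),
                                         output.begin(), output.end());
  bool ordered = std::is_sorted(output.begin(), output.end());
  return permutation && ordered;
}
```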
To use the distinction introduced by Fred Brooks in his well-known ‘No Silver Bullet’ article [Brooks95] – a distinction inspired by Aristotelian terminology – we can divide properties into essential and accidental. In the sorting example, the requirement that the output be a permutation of the input is an essential property: we cannot imagine the program being correct without it. In contrast, accidental properties include the ordering of equivalent elements in a sorted sequence, the number of CPU instructions executed for a given input, the heat generated on a specific machine, or the amount of memory used.
Naturally, when we discuss correctness, we are concerned only with the essential properties; accidental ones can typically be ignored. For the sorting example, two different algorithms may vary in their accidental properties, but they share the same essential properties.
Two programs (or specifications) are considered equivalent whenever they share the same set of essential properties.
When dealing with an entire program specification, the set of essential properties can be interpreted as the program’s requirements. These requirements come in two forms: explicit and implicit. Explicit requirements are those that are clearly stated or documented. Implicit requirements are not usually written down and are often overlooked, yet we still recognise them as essential when asked. For instance, “the program shall not deadlock” is a typical implicit requirement. Other examples include: “the program shall not exceed the system’s memory”, “the program shall complete in under one second when sorting 1,000 words”, or “the output file format shall match the input file format”.
A program is typically composed of multiple parts, ideally structured in some form of hierarchy [Dahl72]. In our simple example, there may be one part that reads the input file, another that writes the output, one that performs the sorting, and a smaller component that compares two words (possibly alongside others). The global properties of the full program may not be directly attributable to individual components. However, just as we decompose programs into smaller parts, we assume there exists a corresponding method to decompose properties into smaller ones applicable to each part. For example, from the program’s specification, we could derive what it means for two words to be considered equal or for one to be ‘less than’ another – key requirements for the comparison component.
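As a sketch of such a decomposed property, suppose the program-level specification required case-insensitive ordering (an assumed requirement, chosen here only for illustration); we could then derive the contract of the comparison component:

```cpp
#include <algorithm>
#include <cassert>
#include <cctype>
#include <string>

// Comparison component derived from a (hypothetical) program-level
// requirement that words be ordered case-insensitively.
bool word_less(const std::string& a, const std::string& b) {
  return std::lexicographical_compare(
      a.begin(), a.end(), b.begin(), b.end(),
      [](unsigned char x, unsigned char y) {
        return std::tolower(x) < std::tolower(y);
      });
}
```

The derived contract ("`word_less(a, b)` iff `a` precedes `b` ignoring case") can then be verified locally, without reference to the rest of the program.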
To summarise, we define correctness as the set of essential properties that apply to both programs and specifications, at both the level of the entire system and its constituent parts. The essential properties of the full specification correspond directly to the program’s requirements.
Abstractions
Functions and classes are abstractions. To use Dijkstra’s words, an abstraction is something for which we can describe “what it does” while completely disregarding “how it works” [Dahl72]. As mentioned earlier, abstractions define an API, and this API separates the client-side logic from the implementation-side logic. The implementation logic can itself be further divided into the core implementation and the logic it depends upon.
The API of an abstraction typically consists of:
- Type information (e.g., function declaration, class definition)
- Contract specifications: preconditions and postconditions
- Semantic properties (often documented as comments, or occasionally expressed in the name of the abstraction)
- Implicit global assumptions (e.g., non-aliasing of function parameters, object lifetime guarantees, etc.)
We refer to the abstraction API properties as the set of essential properties derived from the abstraction’s API.
We say that an abstraction is well defined if its abstraction API properties are equivalent to the essential properties that can be derived from its implementation – considering both the core logic and its dependencies. While it is possible to have correct programs even if some abstractions are not well defined, we still strive to ensure all abstractions are well defined, as this simplifies reasoning about the program.
Listing 1 provides a straightforward example of a well-defined abstraction. In contrast, Listing 2 illustrates two examples of abstractions that are not well defined:
```cpp
//! Adds '2' to 'x'.
// Precondition: x + 2 < INT_MAX
int add_two(int x) {
  return x + 2;
}
```
Listing 1
```cpp
//! Sorts `v` in place.
void my_sort(vector<int>& v) {
  if (v.size() > 2)
    std::sort(v.begin(), v.end());
}

//! Sorts `v` in place.
void my_sort2(vector<int>& v) {
  std::sort(v.begin(), v.end());
  if (v[0] > v[1])
    std::terminate();
}
```
Listing 2
In Listing 2, `my_sort()` fails to perform sorting for a vector with exactly two elements, while `my_sort2()` exhibits undefined behaviour if the input vector contains fewer than two elements.
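For contrast, here is a version whose implementation matches its stated API for every input, including vectors of size 0, 1 and 2 (the `_fixed` name is ours):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

//! Sorts `v` in place. Well defined: the essential properties derivable
//! from the implementation match the API for all inputs.
void my_sort_fixed(std::vector<int>& v) {
  std::sort(v.begin(), v.end());
}
```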
Programs are typically organised hierarchically – more precisely, as directed acyclic graphs (DAGs). The implementation of one abstraction may rely on other abstractions. If we take functions as primary examples of such abstractions, we often encounter functions that call other functions.
For a given abstraction, we refer to the abstractions it directly uses in its implementation as its child abstractions. In other words, the child abstractions are those for which the initial abstraction is a direct client. Using this notion, we can navigate the hierarchical structure of programs more effectively.
We define the local view of an abstraction as the program or specification obtained by replacing, within its implementation, all child abstractions with program fragments or specifications that are equivalent to their APIs. For example, if a function `my_sort()` calls `std::sort()`, the local view of the `my_sort()` abstraction is the program obtained by replacing the call to `std::sort()` with an abstract representation conforming to the API of `std::sort()`, discarding its internal implementation details.
By using local views, we effectively ignore the implementations of child abstractions – i.e. the dependencies of an abstraction’s implementation – and focus solely on the core implementation of the abstraction, assuming that all dependencies behave as specified.
The notion of equivalence in the previous definition is essential. The theorem we will introduce later relies critically on this equivalence. When examining the local view of an abstraction, we are effectively assuming that all of its child abstractions are perfectly implemented. For example, when reasoning about a function, we assume that all the functions it calls behave according to their specified contracts – so we do not need to inspect their implementations directly.
It is worth noting that there may be multiple valid substitutions for each child abstraction when constructing the local view. However, since all these substitutions are equivalent with respect to essential properties, the resulting local views are also equivalent.
Now that we have defined what a local view is, we can define the local properties of an abstraction as the set consisting of:
- Properties derived from the type information of the abstraction
- Properties derived from the preconditions of the abstraction
- Implicit global assumptions
- Essential properties of the abstraction’s local view, assuming the above as preconditions
When comparing the local properties of an abstraction with its API properties, two key differences arise. First, local properties do not include the postconditions or the semantic properties defined by the API. Second, they do include all properties that can be inferred from the local view and its preconditions.
Whereas API properties allow us to decouple an abstraction’s implementation from its clients, local properties decouple the abstraction’s local view from the dependencies of its implementation. The local properties capture all assertions that can be made by applying Hoare logic [Hoare69] to the abstraction’s local view.
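The following sketch (all names hypothetical) illustrates reasoning over a local view: the assertion in `scaled()` is derivable purely from the callee's declared postcondition, without inspecting its body.

```cpp
#include <algorithm>
#include <cassert>

// Child abstraction, used only through its API.
int clamp_to_percent(int x);  // Postcondition: result is in [0, 100].

// In the local view of `scaled()`, the call above is replaced by *any*
// implementation satisfying the postcondition. Hoare-style reasoning on
// this view yields: the result of `scaled()` is in [0, 200].
int scaled(int x) {
  int p = clamp_to_percent(x);
  assert(p >= 0 && p <= 100);  // holds by the callee's contract alone
  return p * 2;
}

// One possible (equivalent-by-API) implementation of the child:
int clamp_to_percent(int x) { return std::clamp(x, 0, 100); }
```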
We say that an abstraction is locally well defined if its local properties form a superset of its API properties. In other words, all the postconditions and semantic properties defined by the abstraction’s API can be derived by analysing its immediate implementation.
In this definition, we allow the set of local properties to be larger than the set of API properties, rather than requiring them to be equivalent. This choice is intentional. Applying Hoare logic to the abstraction’s implementation may yield additional properties, not all of which are relevant at the abstraction level. For instance, consider a sorting routine that always handles more than 10 elements. Its API may specify: “the routine shall not terminate if the input has more than 10 elements”. However, reasoning about its implementation may yield the stronger statement: “the routine will never terminate”. The latter includes the former.
Listing 3 presents two examples of functions that are not locally well defined.
```cpp
// Precondition: abs(x) < 1000
int times_two(int x) { return x + 2; }

bool test(int n);

// Precondition: 2 < n < 2^16.
int count_primes_below(int n) {
  int result = 0;
  for (int i = 1; i < n; i++) {
    if (test(i))
      result++;
  }
  return result;
}
```
Listing 3
- The function `times_two()` is not locally well defined because its local view implies that the function returns the input plus 2, while its semantic intent (as suggested by its name) is to multiply the input by 2.
- In the second case, `count_primes_below()` is not locally well defined because reasoning over its local view does not yield the required property. The problem lies in its dependency on a function named `test()`, whose API is underspecified. There is nothing stating that `test()` must return true when the input is a prime number. Without that assumption, we must treat `test()` as returning arbitrary values. Consequently, the body of `count_primes_below()` cannot be shown to produce the property that it counts the number of primes below the given input – a property implied by the function's name.
There are also cases where a function is (provably) well defined, but not locally well defined. Listing 4 illustrates such a situation. The function `add_three()` behaves correctly: it adds 3 to its input. However, it does so by relying on a helper function `add_one()`, whose API does not match its actual implementation. For `add_three()` to be locally well defined, we must either correct the API of `add_one()` or adjust the implementation of both functions to be consistent with their declared behaviour.
```cpp
// Precondition: abs(x) < 1000
int add_one(int x) { return x + 2; }

// Precondition: abs(x) < 1000
int add_three(int x) { return 1 + add_one(x); }
```
Listing 4
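Taking the second repair route – adjusting both implementations to match their declared behaviour – yields a pair of functions that are each locally well defined:

```cpp
#include <cassert>

// Precondition: abs(x) < 1000
int add_one(int x) { return x + 1; }  // implementation now matches the name/API

// Precondition: abs(x) < 1000
int add_three(int x) { return 2 + add_one(x); }  // 2 + (x + 1) == x + 3
```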
Local reasoning
We say that an abstraction has local reasoning if it is locally well defined. In other words, we can reason about the correctness of the abstraction by focusing solely on its core implementation – without inspecting the implementations it depends on.
Lemma. If an abstraction A has local reasoning and all its child abstractions are well defined, then A is also well defined.
We provide a schematic proof by contradiction.
Assume that there exists an essential property P which belongs to the API properties of A, but does not hold for the implementation of A. From the perspective of Hoare logic, there must exist an instruction Q in the implementation of A which is expected to contribute (directly or indirectly) to ensuring P, but fails to do so. That is, a fault at Q explains the discrepancy between the specification and the implementation.
Now, Q must either belong to the core implementation of A, or to its dependent logic. If Q is in the dependent logic, then the failure lies in one of the child abstractions. But by assumption, all child abstractions are well defined – so this case is impossible.
Therefore, Q must be in the core implementation of A. This implies that P cannot be among the local properties of A, since local properties are derived by applying Hoare logic to the core implementation (including Q). But if A has local reasoning, its local properties form a superset of its API properties, so P should not appear among the API properties either. This contradicts our assumption that P belongs to the API properties of A.
Hence, if A has local reasoning and all of its child abstractions are well defined, then A is also well defined.
To express the next corollary more conveniently, let us denote by tr(A) the transitive closure of the child abstractions of A, including A itself.
Corollary. For a given abstraction A, if all abstractions in tr(A) have local reasoning, and tr(A) forms a DAG (directed acyclic graph), then A is well defined.
Since the abstractions in tr(A) form a DAG, we can arrange them in a sequence such that, for any abstraction A₀, all its child abstractions appear earlier in the sequence (i.e. to the left of A₀). By traversing this sequence from left to right, we can iteratively prove that each abstraction in the sequence is well defined.
For any element A₀ in the sequence, its child abstractions appear earlier and have therefore already been shown to be well defined. Given that A₀ has local reasoning and all its child abstractions are well defined, we can apply the lemma to deduce that A₀ is also well defined.
Thus, by proceeding through the sequence from left to right, we prove that all elements are well defined abstractions. Since abstraction A appears as the final element in the sequence (as per our construction), we conclude that A is well defined.
Hence, the corollary holds.
The reader should note that the corollary does not apply if the abstractions do not form a DAG. Listing 5 presents an example of two mutually dependent functions that are not well defined. According to their API properties, each function is expected to return the value 2, but in practice, they call each other endlessly, resulting in non-termination.
```cpp
// Postcondition: returns 2
int f();
// Postcondition: returns 2
int g();

int f() { return g(); }
int g() { return f(); }
```
Listing 5
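Breaking the cycle restores the corollary's applicability: once the dependency graph is a DAG, each function can be shown well defined by local reasoning alone (the names below mirror Listing 5):

```cpp
#include <cassert>

// Postcondition: returns 2. No dependencies: trivially well defined.
int g() { return 2; }

// Postcondition: returns 2. f depends on g, but g no longer depends on
// f, so the dependency graph is a DAG; by g's contract, f returns 2.
int f() { return g(); }
```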
Now, let us turn our attention to the entire program. We assume the program can be represented by a top-level abstraction. In terms of functions, this typically corresponds to the `main()` function. We also assume that the abstractions within the program can be organised hierarchically as a DAG.
Theorem. If all abstractions in a program support local reasoning, and the program is composed of hierarchically organised abstractions (i.e. forms a DAG), then the entire program is correct.
The proof follows directly from the corollary above. Given the existence of a top-level abstraction for the program, and the assumption that the abstraction hierarchy forms a DAG, we can apply the corollary to conclude that the top-level abstraction is well defined. That is, its API properties match the essential properties derivable from its implementation.
By our definition of correctness – as the equivalence between API-level properties (requirements) and implementation-level properties – we conclude that the program is correct.
Thus, we arrive at the desired result.
Discussions
Local reasoning is good
The average programmer spends significantly more time reading code than writing it – by a factor of more than 10×, according to Robert C. Martin [Martin09]. Local reasoning is one of the ‘mental aids’ (to borrow Dijkstra’s terminology) that helps us read code more effectively, and more broadly, to reason about it.
Much of this article has focused on showing how local reasoning supports proving that code is correct. Turning the problem around, local reasoning can also be invaluable for identifying issues. It is far easier to spot problems when one can analyse a single function or class at a time, checking whether it conforms to its specification, without constantly examining its dependent functions. In a way that parallels our earlier corollary, the process of issue detection becomes linear rather than exponential.
Understanding code also benefits greatly from local reasoning. Rather than attempting to fit an entire program into one’s mental model, a developer can focus on understanding smaller pieces of code, and how those pieces compose into larger abstractions. To draw an analogy: imagine trying to make sense of a Wikipedia article. Like local reasoning, the article should include sufficient context to be readable in isolation. If understanding it required recursively drilling down into every linked page, reading the article would become a Sisyphean task – well beyond our cognitive capacity.
Local reasoning also offers a secondary benefit in tooling: most static analysis tools benefit heavily from it. Code that is easier for humans to understand and reason about is also easier for machines to analyse effectively.
Organizing programs hierarchically
One of the widely applied design principles in software engineering is the Acyclic Dependencies Principle, which states that “the dependency graph of packages or components should have no cycles” [Martin97]. Cyclic dependencies in an application typically lead to tight coupling, hinder reuse, and cause domino effects – where a small change in one module propagates through others in unintended ways.
This article offers an additional perspective on why cyclic dependencies are problematic. Cyclic dependencies can make it significantly harder to reason about correctness when applying local reasoning. Although this may not be the primary reason to avoid cyclic dependencies, it is a consideration worth pondering.
Reasonable software and global reasoning
A well-known maxim in our industry is that we should write code for humans, not for machines. In other words, we should strive to produce reasonable software. We define reasonable software as software that can be easily reasoned about by people, and as software that is decent and fair – meaning that it avoids unexpected surprises.
Local reasoning is perhaps the most effective way to achieve reasonable software. At the same time, it can be viewed as just one aspect of what reasonable software entails. In practice, we also need some form of global reasoning: that is, we expect certain properties to hold across multiple abstractions of a program – albeit with occasional exceptions.
A good example of such a global property is the applicability of local reasoning itself. Local reasoning is of limited value unless it is applied consistently: if one function supports local reasoning but calls other functions that do not adhere to their contracts, reasoning about correctness quickly becomes ineffective.
Other examples of global properties that contribute to reasonable software include:
- Abstractions are organised hierarchically (no cycles)
- The program is well structured
- The Law of Exclusivity [McCall17] is applied consistently
- The program exhibits no undefined behaviour
- Consistent conventions for naming and documentation are followed
- Implicit assumptions across the program are clear and respected
From requirements to abstraction properties
One weakness of the approach presented in this article is the assumption that we can readily translate complete requirements – both explicit and implicit – into properties associated with the top-level abstraction (e.g., the `main()` function), and then systematically distribute those properties to child abstractions. In practice, this rarely occurs in a disciplined or complete manner.
Explicit requirements are typically the known knowns of a project. Implicit requirements are often the unknown knowns. Moreover, as Kevlin Henney has frequently pointed out, most software projects are also plagued by unknown unknowns [Henney21]. The unfortunate reality is that we seldom know the full set of requirements. As a result, it becomes difficult to ensure that all essential properties are properly assigned across abstractions. This makes the top-down application of the approach described here hard to carry out rigorously.
However, this is only partially bad news. In practice, having good abstractions – that is, abstractions with clearly defined and trustworthy contracts – greatly aids in achieving overall correctness.
In contrast to the top-down approach of distributing requirements across abstractions, starting from solid abstraction contracts enables a complementary bottom-up strategy. By applying local reasoning and examining each abstraction individually, we can incrementally infer the properties of the entire program. And even if we do not know all the requirements in advance, we can still verify that the inferred properties align with our expectations of the program’s behaviour.
There is another important way in which the bottom-up strategy proves valuable: by applying local reasoning to individual components, we can improve their correctness in isolation. This, in turn, enhances our ability to reason about the system as a whole. Dave Abrahams and Sean Parent refer to this process as “building islands of correctness”. It can be applied not only to newly developed software but also to existing systems.
Improving local reasoning
There is relatively little literature dedicated to local reasoning. The original article that introduced the term [O’Hearn01] does not offer practical guidance on how to create or improve local reasoning in a program. More recently, the works of Dave Abrahams, Dimi Racordon, and Sean Parent have explored local reasoning as a useful, applicable technique in real-world software [Racordon22a, Abrahams22a, Abrahams22b, Abrahams22c, Racordon22b, Parent23, Parent24].
Local reasoning can be severely impeded by reference semantics in the presence of mutation. If a function holds a reference to a memory location, it becomes difficult to reason locally about that function if other functions can also access and mutate the same memory. For example, the code in Listing 6 may appear correct to the untrained eye – but a call such as `add_twice(x, x)` produces unintuitive results. If `x == 2`, then `x` becomes `8` instead of `6`, because the value of `y` changes between the first and second additions. This is counter-intuitive: nothing in the body of `add_twice` suggests that such behaviour is even possible.
While the example in Listing 6 is intentionally constructed to demonstrate this point, real-world code often contains many such opportunities for spooky action at a distance. This can happen any time a function takes parameters by reference, uses global variables, or accesses shared ownership (e.g. via shared pointers). In all of these cases, unexpected side effects may occur, making local reasoning much harder.
```cpp
// Adds twice the value of 'y' to 'x'.
void add_twice(int& x, const int& y) {
  x += y;
  x += y;
}
```
Listing 6
The most effective mitigation is to enforce the law of exclusivity [McCall17], which states that if a piece of code holds a mutable reference to an object, no other references to that object may exist concurrently. Conversely, multiple references may coexist as long as they are all read-only. Languages such as Rust, Swift, and Hylo enforce this rule at the compiler level. In languages like C++, the law of exclusivity can be approximated by adopting value semantics instead of reference semantics.
By following the law of exclusivity and using value semantics, we significantly improve the applicability of local reasoning.
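In C++, the aliasing problem of Listing 6 disappears if the addend is passed by value – a minimal value-semantics sketch (the `_by_value` name is ours):

```cpp
#include <cassert>

// Adds twice the value of 'y' to 'x'. Taking 'y' by value gives the
// callee an independent copy, so even add_twice_by_value(x, x) cannot
// observe its own writes to 'x' through 'y'.
void add_twice_by_value(int& x, int y) {
  x += y;
  x += y;
}
```

With this signature, the counter-intuitive call now behaves as the body suggests: starting from `x == 2`, the result is 6, not 8.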
Another useful technique for supporting local reasoning is the use of whole-part relationships [Stepanov09, Parent15]. Two objects (commonly referred to as parent and child) are said to be in a whole-part relationship if the following properties hold:
- Connectedness: one can reach the child from the parent.
- Non-circularity: an object cannot be its own parent (i.e. an object cannot be part of itself).
- Logical disjointness: modifying one object does not affect any other distinct object (although shared children are permitted).
- Ownership: it is possible to copy an object, modify the copy, and destroy it without affecting the original object.
The last two properties are particularly important for enabling local reasoning. If a function operates on a value of such an object, then modifications to other objects – including copies – will not affect the original.
Containers in the C++ Standard Template Library (STL) exemplify this relationship: they maintain whole-part semantics with respect to the values they contain. When we build objects through simple composition (i.e. without pointers or references), and all sub-objects respect the whole-part relationship, the composed object also preserves this property.
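The ownership property can be demonstrated with any STL container: copying a `std::vector`, mutating the copy, and destroying it leaves the original untouched (the function name below is ours):

```cpp
#include <vector>

// Whole-part / ownership sketch: the copy is logically disjoint from
// the original, so local reasoning about `original` is unaffected by
// anything done to `copy`.
std::vector<int> mutate_a_copy(const std::vector<int>& original) {
  std::vector<int> copy = original;  // deep copy: an independent whole
  copy.push_back(42);                // modifies only the copy
  return copy;
}
```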
Designing data structures around the whole-part relationship is an effective way to enhance local reasoning.
The third tip we offer for supporting local reasoning is to craft good abstractions and assign clear, well-defined contracts to them. As the saying goes, the devil is in the details – so ensuring that an abstraction’s contract accurately captures its intended semantics is essential for enabling local reasoning. For a useful introduction to this topic, the reader is encouraged to watch the ‘Contracts in C++’ talk by Sean Parent and Dave Abrahams, presented at CppCon 2023 [Parent23].
As for the art of creating good abstractions, there is a wealth of literature on the principles of sound software design. We will just leave the reader with a relevant quote from Edsger W. Dijkstra [Dijkstra72]:
The purpose of abstracting is not to be vague, but to create a new semantic level in which one can be absolutely precise.
Conclusions
Local reasoning is a powerful tool. Since the 1970s, when structured programming first emerged, the core idea of local reasoning has served as a vital mental aid – helping programmers make sense of software, both when reading existing code and writing new code. This benefit alone secures local reasoning a place among the most valuable techniques in software development. But there is more to it. As this article has explored, local reasoning also plays a crucial role in enabling formal reasoning about correctness.
Achieving correct software is a long-term effort, but local reasoning offers a path toward steady, composable progress. By crafting strong contracts, avoiding cyclic dependencies, enforcing the law of exclusivity, and favouring value semantics, we can build systems in which reasoning is tractable – one function, class, or component at a time. This approach applies equally well to greenfield development and to the modernisation of legacy code.
In recent years, the software industry – especially the C++ community – has increasingly focused on safety. Yet, as important as safety is, correctness is the harder goal. As Alexander Stepanov once said:
Understanding why software fails is important, but the real challenge is understanding why software works.
Local reasoning gives us that ability.
If there is one takeaway from this article, it is that local reasoning is not a luxury – it is a necessity for sustainable software. While language and tooling support for enforcing local reasoning remains limited, we as engineers can take deliberate steps to design and structure our systems with this principle in mind. In doing so, we move closer to building software that is not only powerful, but understandable – and ultimately, trustworthy.
References
[Abrahams22a] Dave Abrahams, ‘A Future of Value Semantics and Generic Programming (part 1)’, C++ Now 2022, https://www.youtube.com/watch?v=4Ri8bly-dJs.
[Abrahams22b] Dave Abrahams, Dimi Racordon, ‘A Future of Value Semantics and Generic Programming (part 2)’, C++ Now 2022, https://www.youtube.com/watch?v=GsxYnEAZoNI&list=WL.
[Abrahams22c] Dave Abrahams, ‘Values: Safety, Regularity, Independence, and the Future of Programming’, CppCon 2022, https://www.youtube.com/watch?v=QthAU-t3PQ4.
[Brooks95] Frederick P. Brooks Jr., The Mythical Man-Month (anniversary ed.), Addison-Wesley Longman Publishing, 1995.
[Dahl72] O.-J. Dahl, E. W. Dijkstra, C. A. R. Hoare, Structured Programming, Academic Press Ltd., 1972.
[Dijkstra72] Edsger W. Dijkstra, ‘The Humble Programmer’, ACM Turing Lecture 1972, https://www.cs.utexas.edu/~EWD/transcriptions/EWD03xx/EWD340.html.
[Henney21] Kevlin Henney, ‘Beyond the Known Knowns’, 2021, https://www.youtube.com/watch?v=eNeOzOoipQs.
[Hoare69] C. A. R. Hoare, ‘An axiomatic basis for computer programming’ in Communications of the ACM, 12(10), 1969, http://sunnyday.mit.edu/16.355/Hoare-CACM-69.pdf.
[Lamport77] Leslie Lamport, ‘Proving the correctness of multiprocess programs’ in IEEE transactions on software engineering 2, 1977, https://www.microsoft.com/en-us/research/publication/proving-correctness-multiprocess-programs/.
[Martin97] Robert C. Martin, ‘Granularity: The Acyclic Dependencies Principle (ADP)’, https://web.archive.org/web/20151130032005/http://www.objectmentor.com/resources/articles/granularity.pdf, 1997.
[Martin09] Robert C. Martin, Clean Code: A Handbook of Agile Software Craftsmanship, Prentice Hall, 2009.
[McCall17] John McCall, ‘Swift ownership manifesto’, https://github.com/apple/swift/blob/main/docs/OwnershipManifesto.md, 2017.
[O’Hearn01] Peter W. O’Hearn, John C. Reynolds, and Hongseok Yang, ‘Local Reasoning about Programs that Alter Data Structures’, Lecture Notes in Computer Science, Volume 2142, Springer, 2001, http://www0.cs.ucl.ac.uk/staff/p.ohearn/papers/localreasoning.pdf.
[Parent15] Sean Parent, ‘Better Code: Data Structures’, CppCon 2015, https://www.youtube.com/watch?v=sWgDk-o-6ZE.
[Parent23] Sean Parent, Dave Abrahams, ‘Contracts in C++’, CppCon 2023, https://www.youtube.com/watch?v=OWsepDEh5lQ.
[Parent24] Sean Parent, ‘Local Reasoning in C++’, NDC TechTown 2024, https://www.youtube.com/watch?v=bhizxAXQlWc.
[Racordon22a] Dimi Racordon, Denys Shabalin, Daniel Zheng, Dave Abrahams and Brennan Saeta, ‘Implementation Strategies for Mutable Value Semantics’, https://www.jot.fm/issues/issue_2022_02/article2.pdf.
[Racordon22b] Dimi Racordon, ‘Val Wants To Be Your Friend: The design of a safe, fast, and simple programming language’, CppCon 2022, https://www.youtube.com/watch?v=ws-Z8xKbP4w.
[Stepanov09] Alexander A. Stepanov, Paul McJones, Elements of programming. Addison-Wesley Professional, 2009.
[Wikipedia] Mars Climate Orbiter: https://en.wikipedia.org/wiki/Mars_Climate_Orbiter.
Lucian Radu Teodorescu has a PhD in programming languages and is a Staff Engineer at Garmin. He likes challenges; and understanding the essence of things (if there is one) constitutes the biggest challenge of all.
Sean Parent is a senior principal scientist and software architect in Adobe’s Software Technology Lab. Sean joined Adobe in 1993, working on Photoshop, and is one of the creators of Photoshop Web, Photoshop Mobile, Lightroom Mobile, and Lightroom Web. In 2009, Sean spent a year at Google working on Chrome OS before returning to Adobe. From 1988 to 1993, Sean worked at Apple, where he was part of the system software team that developed the technologies enabling Apple’s successful transition to PowerPC.