
Memory Management Patterns in Business-Level Programs

Overload Journal #148 - December 2018 + Design of applications and programs   Author: Sergey Ignatchenko
There are many memory management patterns. Sergey Ignatchenko considers these from an application level.

Disclaimer: as usual, the opinions within this article are those of ‘No Bugs’ Hare, and do not necessarily coincide with the opinions of the translators and Overload editors; also, please keep in mind that translation difficulties from Lapine (like those described in [Loganberry04]) might have prevented an exact translation. In addition, the translator and Overload expressly disclaim all responsibility from any action or inaction resulting from reading this article.

Discussions on ‘how we should manage memory in our programs’ are ages old; the first discussion which puts together ‘memory management’ and ‘design patterns’ (and which I was able to find) was over 15 years ago, in [Douglass02] – and BTW is still very relevant.

On business-level programming

Still, in spite of the topic being rather ancient, even in [Douglass02] the discussion is a bit cluttered with details which are not directly relevant to an app-level/business-level programmer (sure, pooling is nice – but 99% of the time it should be an implementation detail hidden from the business-level programmer).

In this article, I will try to concentrate on the way memory management is seen by an app-level (more specifically, business-level) developer; this means that I am excluding not only OS kernel programmers, but also developers who are implementing stuff such as lower-level libraries (in other words, if your library is directly calling poll(), this whole thing is not really about you). FWIW, the developers I am trying to represent now are those who are writing business-level logic – more or less similar to the logic which is targeted by programming languages such as C# and Java (while certainly not all Java/C# code qualifies as ‘business-level’, most of it certainly does).

A few properties of a typical business-level code:

  • It changes very often (more often than everything else in sight)
  • More often than not, it is ‘glue’ code
  • Development speed is of paramount importance (time to market rulezzz)
  • Making changes to existing code quickly (~=‘on a whim of some guy in marketing department’) is even more important than ‘paramount’
  • Raw performance of business-level code is usually not that important (it is the way that it calls the pieces of code it glues together which matters for overall performance)

Pattern: Zero Memory Management

With this in mind, we can start discussing the different memory management patterns which are applicable at the business level. As business-level programming has its own concerns to deal with, and memory management is rarely a business requirement, we'll be speaking, more precisely, about the memory management which we cannot avoid using even at a business level.

It may be difficult to believe, but there are (ok, ‘were’) programming languages that didn’t need memory management at all (or, more precisely, with memory management so rudimentary that we can ignore it for our purposes); one example of such a language is FORTRAN77 (though it was expanded with allocatable data in FORTRAN90). Let’s call it Zero Memory Management.

The idea behind Zero Memory Management is simple: as long as we say that all the variables in our program behave ‘as if’ they’re on-stack variables (and any references to them can only go downstream, i.e. from caller to callee), we can easily prove that we don’t need any additional memory management at all, and the program is guaranteed to be correct memory-wise without any additional efforts. In other words:

the best way to avoid problems with pointers is to prohibit them outright.

BTW, Zero Memory Management doesn’t prevent us from using the heap; the only thing I, as a business-level developer, care about is that all the variables behave ‘as if’ they’re on the stack. This means that things such as std::string and std::unique_ptr<> in C++ (as well as any implementation which uses the heap behind the scenes in a similar manner) still qualify as Zero Memory Management (sic!).
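As a minimal illustration (the function and all names are mine, invented for this sketch), here is business-level-style C++ which stays entirely within Zero Memory Management, even though std::string and std::vector allocate from the heap behind the scenes:

```cpp
#include <string>
#include <vector>

// Pure Zero Memory Management style: every variable behaves 'as if' it
// lives on the stack, and all references go strictly downstream.
std::string make_greeting(const std::string& name) {
    std::vector<std::string> parts;  // heap-backed, but scoped like a stack variable
    parts.push_back("Hello, ");
    parts.push_back(name);
    std::string result;
    for (const auto& p : parts)
        result += p;
    return result;  // everything local is destroyed automatically here
}
```

No pointer ever escapes, so there is nothing memory-related to think about: the program is correct memory-wise by construction.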

In a sense, if we could restrict business-level programming to Zero Memory Management, it would be a Holy Grail™ from the memory management point of view: there would be no need to think about anything memory-related – our programs would ‘just work’ memory-wise. Unfortunately, for real-world business programs with complicated data structures, it is not that easy.

Traversing problem

One thing which cannot be expressed easily under the Zero Memory Management model is the traversing of complicated data structures. For example, let’s consider the following data structure:

  //PSEUDO-CODE
  class OrderItem {
    //some data here
  };
  class Order {
    vector<OrderItem> items;
  };

So far so good, and we’re still well within the Zero Memory Management model, until we need, given an OrderItem, to find the Order to which it belongs.

Traditionally, this task is solved by adding a pointer/reference to Order into OrderItem – but in Zero Memory Management there is no such thing as a pointer, so we’re out of luck. Of course, we can always use a pair of (Order, index-in-vector-of-OrderItems) to express the same thing, but with more complicated data structures, with multiple levels of indirection, this will quickly (a) become really ugly and (b) start causing significant performance hits.
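A quick sketch of the index-based workaround (function names are my own): instead of handing out a pointer to an OrderItem, we pass the owning Order plus an index, so all references still go strictly downstream:

```cpp
#include <cstddef>
#include <vector>

struct OrderItem { int quantity; };
struct Order { std::vector<OrderItem> items; };

// Instead of passing an OrderItem* (prohibited under Zero Memory Management),
// we pass the owning Order plus an index; references only go downstream.
int item_quantity(const Order& order, std::size_t item_idx) {
    return order.items[item_idx].quantity;
}
```

With deeper nesting (say, Customer → Order → OrderItem), each extra level adds yet another index parameter, which is exactly where the ugliness and the extra indirection costs come from.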

Let’s call this issue of traversing our complicated data structure a ‘traversing problem’. In practice, it happens to be bad enough to prevent us from using the otherwise very simple and straightforward Zero Memory Management model ☹.

Pattern: Manual Memory Management

The second memory management pattern on our list is the dreaded manual memory management: we allocate something on the heap, obtaining a pointer to it, and then it is our obligation to delete this allocated memory when we’re done with it.

Manual memory management has been known for at least 50 years (IIRC, C’s malloc()/free() was not the first such thing); and the problems with it have been known for all those years too. With manual memory management, it is very easy to make one of the following mistakes:

  • Forget to delete allocated memory (causing a memory leak)
  • Dereference a pointer to already-deleted memory (causing all kinds of trouble, with the worst cases being crashes or data corruption ☹)

On the positive side, we can say that Manual Memory Management does allow us to support arbitrarily-complicated data structures – and very efficiently too. But its lack of protection from silly mistakes is a serious drawback for business-level programming.
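A minimal C++ sketch of both failure modes (the Widget type is made up for illustration); the happy path works, but both mistakes are one deleted line away:

```cpp
struct Widget { int value; };

// Correct manual management: every new has exactly one matching delete.
int use_widget() {
    Widget* w = new Widget{42};
    int v = w->value;
    delete w;        // forget this line -> memory leak
    // reading w->value here would be a use-after-free:
    // the classic dangling-pointer bug, with undefined behavior
    return v;
}
```

Nothing in the language forces the delete to happen, or prevents the dangling access – the compiler happily accepts both buggy variants.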

Pattern: Reference Counting

NB: I won’t engage in a discussion whether reference counting should be considered a flavor of Garbage Collection or not. As with any debate about terminology, it is outright silly; what really matters is how the things work, not how we name them.

One very simple idea to avoid Manual Memory Management is to use Reference Counting: each object has a reference counter in it, and whenever the object is no longer referenced (because the last reference to an object is dropped) – the object is deleted. Unfortunately, this approach can easily cause loops of objects which are referenced only by each other, causing syntactic memory leaks (also, the semantic memory leaks discussed in the section ‘Pattern: (full-scale) Garbage Collection’ below, still apply).

In C++, Reference Counting is represented by std::shared_ptr<>. To help with addressing the loop problem, there is a std::weak_ptr<> which allows us to avoid loops, but it is still the developer’s responsibility to make sure loops don’t occur. In addition, there are certain problems with the implementation of std::shared_ptr<> such as the need for the developer to choose the lifetime of the tombstone (very briefly: if we’re using make_shared(), we’re improving locality, but this is at the cost of the memory for the whole object being kept until the last weak_ptr<> to it is removed, which can take a while ☹).

Overall, Reference Counting is not too popular in the industry (and IMNSHO for a good reason too): on the one hand, it doesn’t provide firm safety guarantees, and on the other, for the vast majority of use cases it is not really necessary, with std::unique_ptr<> (and its ‘definite, predictable lifespan’) generally preferred over reference counting most of the time ([Stroustrup] [Parent13] [Weller14], and, most importantly 😉, my personal experience). IIRC, in my whole career I have needed an equivalent of shared_ptr<> only thrice – and none of those cases was at business level – while the number of cases when unique_ptr<> was sufficient goes into goodness knows how many thousands.
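To make the loop problem concrete, here is a minimal sketch of two mutually-referencing nodes (the Node type is mine, the shared_ptr/weak_ptr mechanics are standard C++):

```cpp
#include <memory>

// If both links were shared_ptr, the two nodes would keep each other alive
// after the external pointers are gone -- a syntactic leak under pure
// reference counting. Making the back link a weak_ptr breaks the loop.
struct Node {
    std::shared_ptr<Node> next;  // owning link: loop-prone
    std::weak_ptr<Node> prev;    // non-owning back link: breaks the loop
};

std::weak_ptr<Node> make_pair_and_drop() {
    auto a = std::make_shared<Node>();
    auto b = std::make_shared<Node>();
    a->next = b;
    b->prev = a;  // weak_ptr: does not keep 'a' alive
    return a;     // lets the caller observe whether 'a' survived
}
```

As the article notes, it is still entirely the developer’s responsibility to decide which links must be weak – nothing stops you from writing prev as a shared_ptr and leaking.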

Pattern: (full-scale) Garbage Collection

After people tried reference counting (which still failed to achieve the Holy Grail™ of memory safety), the next thing which arose to deal with memory management was (full-scale) Garbage Collection. The premise of Garbage Collection is simple:

  • Objects are allocated on the heap (but references can live on the stack).
  • An object is not deleted as long as it is reachable from the stack (or from globals) via a chain of references. In other words, as long as there is at least one way to reach the object, it stays alive.

Garbage Collection (a) does solve the Traversing Problem, and (b) does avoid the mistakes which are typical with Manual Memory Management. But does this mean that we’ve found a new Holy Grail of Memory Management™ with Garbage Collection? Not really.

The Big Fat Problem™ with Garbage Collection is that if we forget to clean a no-longer-needed reference, it leads to a so-called semantic memory leak, which is a well-known plague of garbage-collected programming languages. Moreover, as has been argued in [NoBugs18], for code to be both safe and leak-free, the cleaning of such no-longer-needed references should happen exactly at the same places where we’d call manual free()/delete in Manually Managed languages.
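A semantic leak can be simulated even in C++, with shared_ptr standing in for GC references (Session and the registry are hypothetical names, invented for this sketch):

```cpp
#include <memory>
#include <vector>

// A long-lived registry keeps every session reachable; under GC semantics
// a reachable object is never reclaimed, so 'closed' sessions live on
// unless we remember to erase their registry entries.
struct Session { /* lots of per-user state */ };

std::vector<std::shared_ptr<Session>> g_registry;  // long-lived GC-style root

std::shared_ptr<Session> open_session() {
    auto s = std::make_shared<Session>();
    g_registry.push_back(s);  // forgetting to erase this entry on close
    return s;                 // is exactly the 'semantic memory leak'
}
```

Even after every ‘working’ reference to the session is dropped, the stale registry entry keeps it alive indefinitely – no crash, no diagnostic, just ever-growing memory usage.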

More generally, keeping an object alive as long as there is ‘at least one way to reach’ it (a concept central to garbage collection) means, effectively, loss of control over object lifetimes (they become next-to-impossible to track and control in a sizeable project where anybody can silently store a reference causing all kinds of trouble); this, in turn, creates the potential for Big Fat Memory Leaks™ – and these have to be avoided in quite a few Big Fat Business-Level Programs™.

Note that these semantic leaks could be avoided (and control over object lifetimes claimed back without introducing the risk of a crash) by using ‘weak references’ for all references except those which express ownership. However, such approaches are uncommon in garbage-collected languages, where ‘weak references’ are traditionally seen as a way to implement caches rather than as a way to prevent unwanted extension of an object’s lifetime – and, what’s more important for practical purposes, ‘weak references’ have much bulkier syntax than default references.

Pattern: Rust Memory Management (~=‘proof engine for dumb pointers’)

A new kid on the memory management block is the Rust programming language, which has its own approach to ensuring memory safety. Very, very briefly: [within code which is not marked as ‘unsafe’] Rust tries to prove that your program is memory-safe, and if it cannot prove safety – your program fails to compile.

The problem with such an approach is that the reasoning becomes convoluted, and, what’s much worse, this proof engine becomes exposed to the developer. Moreover, the restriction is fundamental: Rust cannot (and won’t ever be able to) ensure that its proof engine accepts all correct programs (while still rejecting all incorrect ones); in turn, this means that as a Rust programmer, I have to learn not only ‘how to write correct programs’ (which is inherent to any programming language), but also ‘how to write programs which the current Rust compiler considers correct’. In practice, this last thing happens to be a Big Pain In The Ahem Neck™. Personally, I’d compare the complexity and the amount of jumping through hoops when programming in Rust to the complexity of programming templates in C++ (~=‘manageable but really ugly, especially for business-level programming’) – but at least in C++, not all the code is within templates (and code which merely uses templates doesn’t suffer from this additional complexity).

And as an app-level developer, I certainly do NOT want to spend time thinking about the Rust proof engine’s messages (which are cryptic for anyone who doesn’t know the engine well) – such as ‘cannot borrow foo as immutable because it is also borrowed as mutable’ – nor about how to explain my intentions to the proof engine so that it can prove my program correct (in Rust, this corresponds to specifying explicit lifetimes).

Sure, it is possible to learn how the Rust proof engine works – but it is yet another Big Fat Thing™ to remember; and with the cognitive limitation of 7±2 entities we can deal with at any given time, it becomes a Big Fat Burden™ on us (=‘poor defenseless app/business-level programmers’).

This complexity IMNSHO stems from an original intention of Rust which I’d describe as ‘let’s make an equivalent of the C language and try to prove the correctness of such programs’; and as the C language is all about ‘dumb pointers’, this means trying to prove the correctness of an arbitrary program with dumb pointers (things don’t change if we rename ‘pointers’ to ‘references’). And this, as the Rust experience IMNSHO demonstrates, is an insurmountable task ☹ without placing too much burden on app-level developers (though IMHO Rust may still have its uses at system level).

NB: currently, Rust is in the process of improving its proof engine with ‘Non-Lexical Lifetimes’ (NLL), which is supposed to improve Rust’s ability to prove correctness. IMNSHO, while NLL will indeed allow Rust to prove more cases, it won’t make the life of the developer significantly simpler. First, there will still be correct programs for which Rust cannot prove correctness; and second, when the Rust compiler fails to prove correctness, understanding why it has failed will become even more cryptic. As one of the NLL features is to analyze conditional control flow across functions, errors become essentially non-local: by changing my function z() in a million-LoC project, I can cause a compile-time failure in a function a() which calls b() which calls c() … which calls y() which calls my function z(). Under these circumstances, figuring out which of the functions a() to z() has to be fixed becomes a hugely non-trivial task (and one with a huge potential for finger-pointing between different people/teams about the implied contracts between different functions).

Pattern: Zero+ Memory Management

And last, but certainly not least, I want to describe a memory management approach which is successfully used in quite a few projects (without this fancy name, of course), ensuring very straightforward programming (and without semantic memory leaks too).

In essence, it is the Zero Memory Management pattern with some kind of {raw|safe|weak} pointers/references added to get around the Traversing Problem. This allows us to keep the Zero Memory Management pattern, which is very undemanding for the developer, adding only the minimal fix necessary to address the Traversing Problem.

More specifically, this approach relies on expressing data structures via a combination of (a) ‘owning’ pointers and (b) ‘non-owning’ ones. This means that our resulting data structure is a forest, with the trees of the forest built from ‘owning’ pointers, and ‘non-owning’ pointers going pretty much anywhere within this tree (in particular, in our example above, adding a non-owning pointer from OrderItem to Order won’t be a problem).
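Here is how the earlier Order example might look under Zero+ (a sketch; note that the items are held via unique_ptr rather than by value, so that the back pointers stay stable when the vector reallocates):

```cpp
#include <memory>
#include <vector>

struct Order;  // forward declaration for the back pointer

struct OrderItem {
    Order* parent = nullptr;  // non-owning: for traversal only, never deleted through
};

struct Order {
    std::vector<std::unique_ptr<OrderItem>> items;  // owning edges of the tree

    OrderItem* add_item() {
        items.push_back(std::make_unique<OrderItem>());
        items.back()->parent = this;  // the back pointer that solves the
        return items.back().get();    // Traversing Problem
    }
};
```

The ‘owning’ unique_ptr edges form the tree and dictate the lifetimes; the plain parent pointer merely lets us traverse upwards, without claiming any ownership.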

Formally, such a forest can express an arbitrarily complex data structure. An arbitrary data structure (say, in Java) corresponds to a directed graph, all parts of which are reachable from certain entry points. We can then remove those edges which are not necessary for reachability (actually, replacing them with ‘non-owning’ pointers), and we’ll get a forest built out of ‘owning’ pointers.

Less formally, in all my 20+ years of development, I can remember only three times when I have seen a data structure which doesn’t naturally lend itself to the Zero+ Memory Management model above. Just to illustrate this observation, let’s take a look at the data structures we’re routinely using to express things: structures, nested structures, lists, vectors, hash tables, trees – all of them are naturally expressed via some kind of a tree built from ‘owning pointers’ (with an occasional ‘non-owning pointer’ going in the direction opposite to the direction of ‘owning’ ones). Those few cases when this model wasn’t enough in practice were the same cases when shared_ptr<> was indeed necessary, but all such cases weren’t at the business-level to start with.

Pattern: Zero+Something Memory Management

One more thing about Zero+ Memory Management is that we don’t necessarily need to follow this model for all the objects which can arise in our program. Rather, most of the time we can separate our objects into:

  • long-term objects: for example, those representing the state of our state machine – and
  • temporary objects, which are going to be destroyed after we leave our current function.

This separation is also consistent with the so-called ‘generational hypothesis’, which says that young objects are more likely to die than old ones. With this in mind, we can say that Zero+ Memory Management is really important only for long-term objects; for temporary ones, we can live with pretty much any other model which works for us. As one example, we may want to use Zero+ for long-term objects, and classical GC-based Memory Management for temporary ones (as long as the long-term objects are properly structured, GC will indeed take care of all the temporary ones as soon as they go out of scope).
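A sketch of how the split might look in C++ (all names are hypothetical): the long-term state follows the Zero+ rules, while request handling uses plain temporaries which die on return:

```cpp
#include <memory>
#include <string>
#include <vector>

struct Account { std::string name; };  // long-term business object

struct StateMachine {
    // Long-term state: an owning tree, per Zero+ Memory Management.
    std::vector<std::unique_ptr<Account>> accounts;

    std::string handle_request(const std::string& who) {
        // Temporaries: ordinary stack-scoped values; any memory model
        // (including GC in other languages) would handle these fine.
        std::string reply = "unknown";
        for (const auto& a : accounts)
            if (a->name == who)
                reply = "hello, " + a->name;
        return reply;  // all temporaries are gone after return
    }
};
```

Only the accounts vector needs the Zero+ discipline; nothing inside handle_request() can outlive the call, so there is nothing to get wrong there.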

Examples of how Zero+Something Memory Management can be used with different programming languages:

  • C++ with std::unique_ptr<> as ‘owning’ pointers, and naked C-style pointers for ‘non-owning’ ones. This is not safe (there is a risk of dangling pointers), but at least will protect against memory leaks. This is actually an approach which was used for quite a few serious C++ projects (including some of my own, including a G20 stock exchange and an MOG with 500K simultaneous players) – and very successfully too.
  • C++ which uses shared_ptr<> as an implementation of owning_ptr<> (with copying prohibited for owning_ptr<>); this also allows for safe pointers (implemented on top of weak_ptr<>).
  • ‘Safe C++’ along the lines described in [NoBugs18]. Essentially replaces ‘non-owning’ pointers with their safe counterparts – being able to provide safety guarantees (!). (NB: in [NoBugs18] there is also a big chunk of logic aimed to make naked pointers safe for those temporary objects.)
  • Java with usual Java references as ‘owning’ pointers, and WeakReference<> as ‘non-owning’ ones. This will allow us to avoid memory leaks, including most existing semantic memory leaks too (at the very least, it will be much easier to track semantic memory leaks). Zero+Something approach allows us to use the usual Java references for temporary objects (i.e. everywhere except for long-term ones).
  • C# with usual C# references as ‘owning’ pointers, and WeakReference (a ‘short’ kind(!)) as ‘non-owning’ ones. Similar to Java, this will allow us to avoid most existing semantic memory leaks. And also same as with Java, Zero+Something approach allows us to use usual C# references for temporary objects (i.e. everywhere except for long-term ones).
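The second bullet might be sketched along the following lines (owning_ptr<> and safe_ptr<> are hypothetical names, not a standard or library API): shared_ptr with copying prohibited plays the role of the ‘owning’ pointer, and a weak_ptr-based wrapper plays the role of the ‘non-owning’ one:

```cpp
#include <memory>

// 'Owning' pointer: a shared_ptr with copying disabled, so ownership
// stays unique in practice while the shared_ptr control block remains
// available for weak_ptr-based non-owning references.
template <class T>
class owning_ptr {
    std::shared_ptr<T> p_;
public:
    explicit owning_ptr(T* raw) : p_(raw) {}
    owning_ptr(const owning_ptr&) = delete;             // no shared ownership
    owning_ptr& operator=(const owning_ptr&) = delete;
    owning_ptr(owning_ptr&&) = default;                 // moving is fine
    T& operator*() const { return *p_; }
    T* operator->() const { return p_.get(); }
    std::weak_ptr<T> weak() const { return p_; }        // for safe_ptr below
};

// 'Non-owning' pointer: unlike a raw pointer, it can detect that its
// target is gone instead of silently dangling.
template <class T>
class safe_ptr {
    std::weak_ptr<T> w_;
public:
    safe_ptr(const owning_ptr<T>& o) : w_(o.weak()) {}
    bool valid() const { return !w_.expired(); }
};
```

Compared with the first bullet (unique_ptr plus raw pointers), this buys safety at the cost of the control-block overhead and an extra check on each non-owning access.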

Conclusion

We have considered quite a wide range of existing memory management patterns, ranging from Zero Memory Management to Rust Memory Safety. And IMNSHO, at least for business-level programming (which covers most of app-level programming), Zero+Something Memory Management tends to work best. This approach allows us (a) to represent pretty much everything we need, and (b) to avoid the pitfalls which are typical for other models. Moreover, it has quite a significant history, at least in serious C++ projects (my own ones included), with very good results.

References

[Douglass02] Bruce Powel Douglass (2002) Real-Time Design Patterns: Robust Scalable Architecture for Real-Time Systems, Addison-Wesley, https://www.amazon.com/Real-Time-Design-Patterns-Scalable-Architecture/dp/0201699567

[Loganberry04] David ‘Loganberry’ (2004) Frithaes! – An Introduction to Colloquial Lapine!, http://bitsnbobstones.watershipdown.org/lapine/overview.html

[NoBugs18] ‘No Bugs’ Hare, ‘Java vs C++: Trading UB for Semantic Memory Leaks (Same Problem, Different Punishment for Failure)’, http://ithare.com/java-vs-c-trading-ub-for-semantic-memory-leaks-same-problem-different-punishment-for-failure

[Parent13] Sean Parent (2013) C++ Seasoning, GoingNative, https://channel9.msdn.com/Events/GoingNative/2013/Cpp-Seasoning

[Stroustrup] Bjarne Stroustrup, C++11 FAQ, http://www.stroustrup.com/C++11FAQ.html#std-shared_ptr

[Weller14] Jens Weller, shared_ptr addiction, https://www.meetingcpp.com/blog/items/shared_ptr-addiction.html

'No Bugs' Hare Translated from Lapine by Sergey Ignatchenko using the classic dictionary collated by Richard Adams.

Sergey Ignatchenko Sergey Ignatchenko has 15+ years of industry experience, including being a co-architect of a stock exchange, and the sole architect of a game with 400K simultaneous players. He currently holds the position of Security Researcher.
