
To DLL or Not To DLL

Overload Journal #99 - October 2010 + Programming Topics + Design of applications and programs   Author: Sergey Ignatchenko
Shared libraries provide both benefits and problems. Sergey Ignatchenko introduces a rabbit's-eye view.

Hi all! Let me introduce myself. My name is 'No Bugs' Bunny. I've appeared in two previous issues of Overload as the main character in articles about multithreading [Ignatchenko10], and now I've decided to start a writing career of my own. All opinions in my articles are my own, and don't necessarily coincide with the opinion of the translator, let alone the editors of the journal. Most of the time, I will be thinking aloud on more or less controversial issues, so please always take my words with a good pinch of salt. I do not aim to tell absolute truths, but rather to raise questions and invite readers to think about their own answers.

Today I will think aloud about a rather contentious DLL issue. Please keep in mind that for the purposes of this article (unless explicitly specified otherwise) I will use the term 'DLL' both for Windows DLLs and for .so libraries on Linux/*nix.

DLL hell

Whenever I need to link a DLL to my application, the very first thing that comes to my mind is 'DLL Hell'. Dependency problems and crashes caused by DLLs are extremely common, and the more installations an application has, the more likely the problems are to appear on some machine.

I will not elaborate on 'DLL Hell' theory, but will provide a few examples from my personal experience. My very first encounter with DLLs was many years ago, when I was a cute little bunny and the very term 'DLL Hell' hadn't even been coined. I had a third-party application which worked perfectly for me for months, and then I installed another application on my system (I think it was an electronic dictionary application). Bang! The first application started to crash every time I tried to perform some simple operation. Being a curious little bunny with lots of time to spare, I started to research the problem, and eventually found out that the electronic dictionary I'd just installed had replaced the file MFC42.DLL with a 'customized' version; obviously it wasn't 100% compatible, and that was the reason for the first application's crashes. It was my very first practical lesson about DLLs.

During my career I've seen many applications which had millions of installations, and I can tell you that when dealing with DLLs, the famous Murphy's Law ('anything that can go wrong, will go wrong') is not an exaggeration but an understatement. I have seen one very specific build of IE5.5 crash an application which used it merely to show a fancy HTML-based splash-screen (how was QA supposed to test for that? By trying all builds of IE in existence? And the ones that don't exist yet?); I have seen a faulty video card driver (obviously, a DLL too) cause the infamous 'Blue Screen of Death' only when a very specific application was run on a very specific laptop model (the bug has since been fixed by the laptop manufacturer); and I have seen bugs in no less widely used a file than MSVCRT.DLL. But IMHO the most convincing case was when an application with a few million installations started to use the function SetDIBits() to load Windows bitmaps (replacing hand-written BMP file parsing + CreateBitmap() calls, to simplify the code); the result was that about 2% of installations just stopped working (and 2% meant 20,000 frustrated users per million installations, which resulted in many hundreds of complaints to the support department). Investigation revealed that while the function is a system one, some video drivers tried to optimize it, and the 'optimized' version simply crashed for a certain BMP format (a perfectly valid, though not the most common, bitmap variation). This was the last straw for me, and I came to the conclusion: 'if you want your application to run reliably, avoid DLLs for as long as it's possible to avoid them'.

It might not be your fault, but it is your problem

To make things even worse, if your application crashes, the end-user doesn't care whether it happened because some ill-behaved 3rd-party application replaced MFC42.DLL, or because of a faulty version of Internet Explorer installed on their system: for the end-user it is your application which crashes, your application s/he will blame, your support department s/he will call or write to, and it is you who will eventually need to deal with it. When an ill-behaved application installs a faulty MFC42.DLL, 99.99% of users will not go into lengthy investigations of the reasons; they will just blame the application that crashes. An application is perceived as a single product, and DLL dependencies are implementation details which the end-user doesn't care about at all. And if the application crashes because I am using a DLL without a good reason, it is indeed my fault; my job is to deliver a product which works in the best possible way for the end-user, and if that doesn't happen then I didn't do my job properly.


Now that I hope I've described the most compelling disadvantages of DLLs (there are more: I haven't mentioned technical issues like more complicated memory management or messy name mangling), I will try to describe the reasons why one may want to use DLLs despite these disadvantages. Reasons for implementing DLLs are traditionally numerous, but IMHO many of them are not valid on closer inspection.

Reasons which are usually used to justify using DLLs are the following:

  • Using system services that are in a DLL.

    A perfectly valid reason, but it begs the question 'what should we consider to be a system service?' For example, there is no way that file access can be done without using kernel32.dll on Windows (or the equivalent DLLs/.so's on Linux), but in cases where more exotic services are used (such as an 'HTML control' or the SetDIBits() function described above), it becomes less obvious. Usually I stick to the principle 'if you can do it yourself in a reasonable amount of time - do it'.

  • Providing an interface for 3rd-party plug-ins or writing a plug-in.

    Another perfectly valid reason. While you might experience issues with badly behaved plug-ins which can crash your application (and the user won't be able to tell whether it was your application or the plug-in that crashed), DLLs are still the primary method of providing plug-ins without the need to recompile the main application. If stability is of real concern, solutions which run plug-ins within separate processes are preferable, but they're much more complicated and are not always worth the trouble.

  • The library I need exists only as a DLL.

    It indeed happens, but personally in such cases I prefer to ask myself: 'do we really need this library or maybe we can live without it?' Sometimes it helps.

  • It saves memory.

    While it was a valid reason back in the 1980s, these days PCs have at least 256M of RAM, and the size of static library code is about 1000x less than that. This means that any noticeable effect of static linking instead of DLLs on overall system performance is extremely unlikely, and as a user I would definitely prefer a statically linked application which doesn't crash to an application which uses 200K less RAM but has a 2% chance of crashing. On non-PC platforms the analysis can be very different, but for modern PCs I feel the memory savings are negligible. In addition, as described in [Anderson00], on Windows effects related to DLL relocation might reduce the memory savings further.

  • Security reasons.

    With the whole software development industry plagued by security problems, having security-related areas separated and independently updated initially sounds like a good idea. Still, on closer consideration this aspect is not that obvious, and needs careful analysis depending on the specific application. First of all, it depends on your application's life cycle: if it is routinely updated several times a month, the benefit of DLLs being updated independently is not that great, and in extreme cases of large security holes you can easily recompile and update your entire application. Moreover, in some specific cases, when you need to resort to 'security by obscurity' (for example, if you're writing an MMORPG and want to prevent bots from playing and giving an unfair advantage), using well-known DLLs like OpenSSL.DLL provides an additional, relatively easy vector of attack on your communication protocols. On the other hand, if your application is not going to be updated frequently and has nothing to do with 'security by obscurity', using security DLLs can indeed be a rather good idea.

  • Smaller updates.

    One common pro-DLL argument is that if you need to apply a minor fix, in the case of DLLs you only need to update a small number of small files rather than the whole large executable. On the other hand, if updating a large executable starts causing problems, it is always possible to use some kind of 'differential update' algorithm, which calculates the differences between two versions of an executable file and then applies such a patch to the previous version of the file; if checksums like SHA-1 are checked before and after applying such a patch, this method is indeed more reliable than relying on DLL versions (while you can easily produce two different DLLs with the same version number, you will have a really difficult time producing two different executables with the same SHA-1). In addition, the effects of larger updates become less relevant with steadily increasing broadband speeds and falling traffic prices.

  • Static linking is so 1990-ish.

    This argument comes in many forms, including 'static linking is so uncool', 'everybody does it with DLLs these days' etc. etc. As I'm commonly characterized not as a 'cool' Bunny, but as a 'damn hot' Bunny, I really hate 'cool' arguments about technical implementation details, especially when they're causing problems for end-users. Our primary job as developers is to make things work, and arguments about being 'cool' don't have any standing in my book.
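The 'differential update' idea from the 'Smaller updates' point above can be sketched in a few lines. This is a deliberately simplified illustration, not a production design: real updaters use proper binary-diff formats (such as bsdiff or xdelta) and cryptographic digests like SHA-1, whereas this sketch uses a naive byte-wise diff over equal-sized images and an FNV-1a hash as a stand-in checksum. The point it demonstrates is the reliability argument: the patch refuses to apply against the wrong base version, and the result is verified after patching.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// FNV-1a stands in here for a real digest such as SHA-1.
std::uint64_t checksum(const std::vector<std::uint8_t>& data) {
    std::uint64_t h = 14695981039346656037ULL;
    for (std::uint8_t b : data) { h ^= b; h *= 1099511628211ULL; }
    return h;
}

struct Patch {
    struct Delta { std::size_t offset; std::uint8_t value; };
    std::vector<Delta> deltas;     // bytes that differ (same-size images only)
    std::uint64_t oldSum, newSum;  // checked before and after applying
};

// Compute a byte-wise diff between two equal-sized executable images.
Patch makePatch(const std::vector<std::uint8_t>& oldExe,
                const std::vector<std::uint8_t>& newExe) {
    Patch p{{}, checksum(oldExe), checksum(newExe)};
    for (std::size_t i = 0; i < oldExe.size(); ++i)
        if (oldExe[i] != newExe[i]) p.deltas.push_back({i, newExe[i]});
    return p;
}

// Apply the patch only if the on-disk file matches the expected original,
// and verify the result afterwards - unlike a DLL version number, the
// checksum cannot silently refer to two different binaries.
bool applyPatch(std::vector<std::uint8_t>& exe, const Patch& p) {
    if (checksum(exe) != p.oldSum) return false;  // wrong base version
    for (const Patch::Delta& d : p.deltas) exe[d.offset] = d.value;
    return checksum(exe) == p.newSum;             // corrupt/incomplete patch?
}
```

Note that a 2% divergence between two builds yields a patch roughly 2% of the executable's size, which is where the bandwidth saving comes from.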


With the 'DLL Hell' problem being so ubiquitous, numerous ways have been proposed to deal with it:

  • Static linking.

    My favourite. If there are no DLLs, they're not able to cause any problems. If concerned about updates, one will need to use the 'differential updates' described above, but that is still, IMHO, a very minor effort compared to all the headaches originating from DLL use with a large user base.

  • Windows file protection.

    This one happens automagically and essentially simply prevents ill-behaved applications from overwriting system DLLs, aiming to address problems like the one I've described above with MFC42.DLL being overwritten (as well as security attacks). It does indeed provide some mitigation in certain cases, but is not enough to address all the problems arising from DLL Hell.

  • 'private' DLLs.

    If on Windows you put all the DLLs into the same folder where your .EXE resides, your DLLs will become 'private' to your application, and the chances of somebody else messing with them will be minimal. While it indeed helps to deal with some aspects of 'DLL Hell', IMHO this approach doesn't make much sense and should be replaced with static linking, unless (a) a library only exists as a DLL, or (b) this DLL is used very rarely and you load it via LoadLibrary() or dlopen(). One argument for 'private DLLs' [Anderson00] is that they facilitate software updates, but with the existence of 'differential update' algorithms, it doesn't seem to be a strong argument.

  • Allowing different DLL versions to run together.

    In Windows this is known as 'side-by-side assemblies' [Microsoft]. It essentially relies on the ability to specify the version of the DLL needed, which makes your application run reliably provided that the user can obtain the required version of the DLL. On the other hand, if you specify the DLL version explicitly, you're putting yourself in a position which is even worse than with 'private DLLs', taking only the disadvantages with no apparent advantages: if you require a specific DLL version it is unlikely to be shared, and you're not able to benefit from security updates etc.; if you specify a major version but will accept minor versions to catch security updates, you no longer have the assurance that your application will run, and are still contributing to the horrible mess with multiple versions. For further analysis of 'side-by-side assemblies', please refer to an excellent recent article in Dr. Dobb's Journal [Worthmuller10].
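The LoadLibrary()/dlopen() route mentioned under 'private DLLs' above can be sketched as follows. This is a minimal POSIX example, assuming a Linux system with a recent glibc (older systems additionally need linking with -ldl); it loads libm.so.6 at run time purely as a stand-in for a real rarely-used library, and the Windows equivalents are LoadLibrary() and GetProcAddress(). The key property is that a missing or broken library is reported at the moment of use, instead of the loader refusing to start the application at all.

```cpp
#include <cassert>
#include <cstdio>
#include <dlfcn.h>  // POSIX; on Windows use <windows.h> and LoadLibrary()

using cos_fn = double (*)(double);

// Try to load a maths function from libm at run time. Returns nullptr on
// failure, so the application can degrade gracefully - exactly what does
// NOT happen when a load-time DLL dependency is missing or incompatible.
cos_fn load_cos() {
    // RTLD_NOW resolves all symbols immediately; RTLD_LOCAL keeps the
    // library's symbols out of the global namespace. The handle is kept
    // for the lifetime of the process, so it is never dlclose()d here.
    void* lib = dlopen("libm.so.6", RTLD_NOW | RTLD_LOCAL);
    if (!lib) {
        std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return nullptr;
    }
    // dlsym() looks the symbol up by name - no link-time dependency at all.
    return reinterpret_cast<cos_fn>(dlsym(lib, "cos"));
}
```

The same pattern applies to a DLL that only exists in binary form, or one needed so rarely that pulling it in as a load-time dependency isn't justified.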

.so/RPM hell

While most of this article was written about DLLs, it would be a big mistake not to mention that *nix, and especially the Linux world, aren't free of similar problems. In particular, on Linux systems, specifying the exact version of an .so library is traditionally much more common than on Windows, making the hunt for the right version a particularly annoying exercise. Even if it is handled automagically by a package manager it still causes a horrible mess in installation directories and for deployment/maintenance purposes. In particular, incompatibilities between versions required by different subcomponents of the same executable abound (as just one such example, you can see the discussion about including OpenSSL v1.0 on the Apache mailing list [Apache]).

I don't want to say that either Linux or Windows is better in regard to DLLs/.so's; I think that both are a horrible mess, and what I'm really surprised about is that Windows and Linux/*nix are borrowing the very worst features from each other! *nix was the first to do it, borrowing the whole concept of DLLs as opposed to static linking from Windows - to the best of my knowledge, full support for .so's appeared in *nix as late as SVR4 in 1990, while Windows has had DLLs since Windows 1.0 in 1985. On the other hand, the recent Windows 'side-by-side assemblies' seem to borrow from Linux the concept of explicit library version requirements for DLLs/.so's, which has been characterized in [Worthmuller10] as 'We were needing a solution, but we created a monster'.

Bottom line

I know for sure that neither hardcore Windows fans nor hardcore Linux fans will be fascinated by this article, but that wasn't among my goals (as stated above, my goal was to invite people to think, and whoever can think critically is not a 'hardcore fan' in my book). What I've tried to say is that DLLs (or .so's) are full of inherent dangers, and the decision to use them is not to be taken lightly.

Personally I try to avoid them as long as possible, but the question 'how long is "as long as possible"?' still needs to be answered on a case-by-case basis.

Good luck to everybody who needs to tackle DLLs, you'll definitely need it.



[Anderson00] 'The End of DLL Hell', Rick Anderson, Microsoft Corporation, 2000

[Apache] Apache HTTP Server Development Main Discussion List,

[Ignatchenko10] 'Single-Threading: Back to the Future?', Sergey Ignatchenko, Overload #97/#98 (June/August 2010)


[Worthmuller10] 'No End to DLL Hell!', Stefan Worthmuller, Dr. Dobb's Journal, September 2010,
