Metaprogramming is Your Friend

By Thomas Guest

Overload, 13(66), April 2005


Whenever I create a new C++ file using Emacs a simple elisp script executes. This script:

  • places a standard header at the top of the file,

  • works out what year it is and adjusts the Copyright notice accordingly,

  • generates suitable #include guards (for header files),

  • inserts placeholders for Doxygen comments.

In short, the script automates some routine housekeeping for me.
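The same housekeeping could be sketched in any scripting language. Here's a hedged Python 3 sketch of the idea (the file layout, guard style and author name are my assumptions, not the article's elisp):

```python
import datetime
import os

def new_header(path, author="A. Programmer"):
    """Return boilerplate for a new C++ header file: a standard
    comment block carrying the current year's copyright notice,
    #include guards derived from the filename, and a Doxygen
    placeholder. (The exact layout is a guess at a typical house
    style, not the article's.)"""
    year = datetime.date.today().year
    guard = os.path.basename(path).upper().replace('.', '_')
    return '''\
// File: %s
// Copyright (c) %d %s
#ifndef %s
#define %s

/// @brief TODO: describe this file

#endif // %s
''' % (path, year, author, guard, guard, guard)

print(new_header("widget.hpp"))
```

The point is not this particular layout but that the boilerplate is computed, not typed: the year and the guards can never be wrong.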

Nothing extraordinary is going on here. One program (the elisp script) helps me write another program (the C++ program which needs the new file).

By contrast, C++ template-metaprogramming is extraordinary. It inspires cutting-edge C++ software; it fuels articles, newsgroup postings and books [ Abrahams_and_Gurtovoy ]; and it may even influence the future direction of the language.

Despite (or maybe because of) this, this article has little more to say about template-metaprogramming. Instead we shall investigate some ordinary metaprograms. For example, the elisp script - a program to write a program - is a metaprogram. There may be other metaprograms out there which, perhaps, we don't notice. And there may be other metaprogramming techniques which, perhaps, we should be aware of.

What is Metaprogramming?

I like the definition found in the [ Wikipedia ]:

"Metaprogramming is the writing of programs that write or manipulate other programs (or themselves) as their data or that do part of the work that is otherwise done at runtime during compile time."

Actually, it's the first half of this definition I like (everything up to and including "data"). The second seems rather to weaken the concept by being too specific, and in my opinion its presence reflects the current interest in C++ template-metaprogramming - but a Wikipedia is bound to reflect what's in fashion!

Why Metaprogram?

Having established what metaprogramming is, the obvious follow-up is "Why?" Writing programs to manipulate ordinary data is challenging enough for most of us, so writing programs to manipulate programs must surely be either crazy or too clever by half.

Rather than attempt to provide a theoretical answer to "Why?" at this point, let's push the question on the stack and discuss some practical applications of metaprogramming.

Editor Metaprogramming

I've already spoken about programming Emacs to create C++ files in a standard format. We can compare this technique to a couple of common alternatives:

  1. create an empty file then type in the standard header etc.

  2. copy an existing file which does something similar to what we want, then adapt as required.

The first option is tough on the fingers and few of us would fail to introduce a typo or two. The second is better but all too often is executed without due care - maybe because a programmer prefers to concentrate on what she wants to add rather than on what she ought to remove - and all too often leads to a new file which is already slightly broken: perhaps a comment remains which only applies to the original file, perhaps there's an incorrect date stamp.

The elisp solution is an improvement. It addresses the concerns described above and can be tailored to fit our needs most exactly. All decent editors have a macro language, so the technique is portable.

Of course, there is a downside. You have to be comfortable customising your editor. (Or you have to know someone who can do it for you.)

Batch Editing

By "batch editing" I mean the process of creating a program to edit a collection of source files without user intervention. This is closely related to editor metaprogramming - indeed, I often execute simple batch edits without leaving my editor (though the editor itself may shell-out instructions to tools such as find and sed ).

Very early on in my career (we're talking early 80's) I worked with a programmer who preferred to edit source files in batch mode. His desk did not have a computer terminal on it. Instead, he would study printouts, perhaps marking them up in pencil, perhaps using a rubber to undo these edits, before finally writing - by hand - an editor batch file to apply his changes. He then visited a computer terminal to enter and execute this batch file.

Even then, this was an old-fashioned way of working, yet he was clear about its advantages:

  • Recordable: the batch file provides a perfect record of what it has done.

  • Reversible: its effects can therefore be undone, if required.

  • Reflective: by working in this reflective, careful way, he was less likely to introduce errors. When system rebuilds can only be run overnight, this becomes paramount.

These days, builds are quicker and batch editing is more immediate. With a few regular expressions and a script one can alter every file in the system in less time than it takes to check your email. As an example, in another article [ Guest1 ] I describe the development of a simple Python script to relocate source files into a new directory structure, taking care to adjust internal references to #included files.

The benefits of using a script to perform this sort of operation are a superset of those listed above. In addition, a scripted solution beats hand hacking since it is:

  • Reliable: the script can be shown to work by unit tests and by system tests on small data sets. Then it can be left to do its job.

  • Efficient: editing dozens - perhaps hundreds - of files by hand is error prone and tedious. A script can process megabytes of source in minutes.
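A minimal batch edit along these lines might look like the sketch below: it rewrites #include directives in place across a tree of sources. The directory names and file extensions are hypothetical; the shape - walk, match, substitute, rewrite - is the point:

```python
import os
import re

def retarget_includes(root, old_dir, new_dir):
    """Rewrite '#include "old_dir/..."' directives as
    '#include "new_dir/..."' in every .h/.cpp file under
    root. Returns the number of files changed."""
    include_re = re.compile(r'(#include\s+")%s/' % re.escape(old_dir))
    changed = 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(('.h', '.cpp')):
                continue
            path = os.path.join(dirpath, name)
            with open(path) as f:
                text = f.read()
            # Keep the '#include "' prefix, swap only the directory.
            new_text = include_re.sub(r'\g<1>%s/' % new_dir, text)
            if new_text != text:
                with open(path, 'w') as f:
                    f.write(new_text)
                changed += 1
    return changed
```

Because the edit is a program, it can be rehearsed on a copy of the tree and unit tested on small inputs before being let loose on the real thing.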

Again, there is a downside. You have to invest time in writing the script, which may well require a larger investment in learning a new language. Many of us would regard proficiency in other languages as an upside but it may be difficult to make that initial investment under the usual project pressures.

So, once again, it may end up being a team-mate who writes the script for you. Indeed, many software organisations have a dedicated "Tools Group" which specialises in writing and customising tools for internal use during the development of core products. Perhaps this team could equally well be named a "Metaprogramming Group"?


Compilers

The compiler is the canonical example of a metaprogram: it translates a program written in one language (such as C) into an equivalent program written in another language (object code).

Of course, when we invoke a compiler we are not metaprogramming, we are simply using a metaprogram, but it is important to be aware of what's going on. We may prefer to program in higher-level languages but we should remember the compiler's role as our translator.

We lean on compilers: we rely on them to faithfully convert our source code into an executable; we expect different compilers to produce "the same" results on different platforms; and we want them to do all this while tracking language changes.

In some environments these considerations are taken very seriously. For safety critical software, a compiler will be tested systematically to confirm the object code produced from various test cases is correct. In such places, you cannot simply apply the latest patch or tweak optimisation flags. You may even prefer to work in C rather than C++ since C is a smaller language which translates more directly to object code.

In other environments we train ourselves to get along with our compilers. We accept limitations, report defects, find workarounds, upgrade and apply patches. Optimisation settings are fine-tuned. We prefer tried-and-tested and, above all, supported brands. We monitor newsgroups and share our experiences.

One last point before leaving compilers alone: C and C++ provide a hook which allows you to embed assembler code in a source file - that's what the asm keyword is for. I guess this too is metaprogramming in a rather back-to-front form. The asm keyword instructs the compiler to suspend its normal operation and include your handwritten assembler code directly. Its exact operation is implementation dependent, and, fortunately, rarely needed.


Code Generators

The program which follows is a short but non-trivial Python script. It makes use of a couple of text codecs from the Python standard library to generate a C++ function. This C++ function converts a single character from ISO 8859-9 encoding into UTF-8 encoded Unicode.

def warnGenerated():
  '''Return a standard 'generated code' warning.'''
  import sys, time
  return (
    '// generated by %s, %s' %
    (' '.join(sys.argv),
     time.asctime()))

def functionHeader(codec):
  '''Return the decode function header.'''
  return '''/**
* @brief Convert from %(codec)s into UTF-8
* encoded Unicode
* @param %(codec)s An %(codec)s encoded character
* @param it Reference to an output iterator
* @note If the input character is invalid, the
* Unicode replacement character U+FFFD will be
* returned.
*/
template <typename output_iterator>
void %(codec)s_decode(
  unsigned char %(codec)s,
  output_iterator & it)''' % { 'codec' : codec }

def convertCh(ch, codec):
  '''Return the 'case' statement converting
  the input character using the supplied codec'''

  from unicodedata import name

  ucs = chr(ch).decode(codec, 'replace')
  utf = ucs.encode('utf-8')
  ucname = name(ucs, 'Control code')
  action = '; '.join(['*it++ = 0x%02x' % ord(c)
                      for c in utf])

  return '''case 0x%02x: // %s
  %s;
  break;''' % (ch, ucname, action)

def codeBlock(prefix, body, indent = ' ' * 4):
  '''Return an indented code block.
  This code block will be formatted:
      <prefix> {
          <body>
      }
  '''
  import re
  indent_re = re.compile('^', re.MULTILINE)
  return '''%s {
%s
}''' % (prefix, indent_re.sub(indent, body))

codec = 'iso8859_9'
print warnGenerated()
print functionHeader(codec)

print codeBlock(
    'switch(%s)' % codec,
    # iso8859-* encodings are 8-bit
    '\n'.join([convertCh(ch, codec)
               for ch in range(0x100)]),
    indent = '' # don't indent case: labels
    )

By now, it should go without saying that this script is a metaprogram. Before discussing why I think it's a good use of metaprogramming, some notes:

  • The function warnGenerated() is used to place a standard warning in front of the generated C++ function. If users of this C++ function edit it by hand, their changes will be overwritten next time the script is run: hence the warning.

  • The generated code identifies the command which created it (this information appears as part of the standard warning). This is to help users regenerate the code, if required.

  • It is very important that the Python script is both maintained and easy to locate. Ideally, the build system includes a rule to generate the C++ from the script, though this behaviour may be hard to integrate into some IDEs: it may prove more pragmatic to run the script by hand and keep the dependent C++ code checked directly into the repository.

  • Notice how Python's triple quoted strings allow us to create neatly formatted C++ code from neatly formatted Python code without needing lots of escaped characters.

  • It is perhaps ironic that, according to the Python documentation, some of Python's builtin codecs are implemented in C (presumably for reasons of speed). I haven't worked out if this applies to the ones this script uses.

I like this script since it makes use of the standard Python library to create code we can use in a C++ program. The hard work goes on in the calls to encode() and decode() and we don't even have to look at the implementations of these functions, let alone maintain them. Their speed does not affect the speed of our C++ function and I am willing to trust their correctness, meaning I don't have to locate or purchase the ISO 8859 standards.
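The heavy lifting the script delegates to the codecs can be checked interactively. In modern Python 3 syntax (the script itself is Python 2), a single ISO 8859-9 byte decodes and re-encodes like this:

```python
import unicodedata

# 0xE7 is LATIN SMALL LETTER C WITH CEDILLA in ISO 8859-9.
ch = bytes([0xE7]).decode('iso8859_9')
utf = ch.encode('utf-8')

# The character needs two UTF-8 bytes, so the generated case
# statement would read: *it++ = 0xc3; *it++ = 0xa7;
assert utf == b'\xc3\xa7'
assert unicodedata.name(ch) == 'LATIN SMALL LETTER C WITH CEDILLA'
```

Trusting the standard library's codec tables means trusting exactly this round trip, once per input value.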

The second big win is that all the boilerplate code is generated without effort. If, at some point in the future, we need a fuller range of ISO 8859 text converters, then we tweak the script so the final section reads, for example:

codecs = ['iso8859_%d' for n in range(1, 10)]

print warnGenerated()

for codec in codecs:
  print codeBlock(

and let it run. And should we decide on a different strategy for handling invalid input data, again, the metaprogram is our friend.

Preprocessor Metaprogramming

As mentioned in passing, C++ has a sophisticated templating facility which (amongst other things) makes metaprogramming possible without needing to step outside the language.

C++ also inherits the C preprocessor: a rather unsophisticated facility, but one which is equally ready for use by metaprogrammers. In fact, careful use of this preprocessor can allow you to create generic C algorithms and simulate lambda functions.

For example:

#define ALL_ITEMS_IN_LIST(T, first, item, ...) \
do {                                     \
   T * item = first;                     \
   while (item != NULL) {                \
     __VA_ARGS__;                        \
     item = item->next;                  \
   }                                     \
} while(0)

#define ALL_FISH_IN_SEA(first_fish, ...) \
        ALL_ITEMS_IN_LIST(Fish, first_fish, \
                          fish, __VA_ARGS__)

The first macro, ALL_ITEMS_IN_LIST , iterates through the items in a linked list and optionally performs some action on each of them. It requires that list nodes are connected by a pointer called next . The second macro, ALL_FISH_IN_SEA , specialises the first: the node type is set to Fish and the list node iterator is called fish instead of item .

Here's an example of how we might use it:

/**
 * @brief Find Nemos
 * @param fishes Linked list of fish
 * @returns The number of fish in the list called
 * Nemo
 */
int findNemo(Fish * fishes) {
  int count = 0;
  ALL_FISH_IN_SEA(
     fishes,
     if(!strcmp(fish->name, "Nemo")) {
       printf("Found one!\n");
       count++;
     });
  return count;
}

Note how simple it is to plug a code snippet into our generic looping construct. I have used one of C99's variadic macros to do this (these are not yet part of standard C++, but some compilers may support them).

I hesitate to recommend using the preprocessor in this way for all the usual reasons [ Sutter ]. That said:

  • This is a technique I have seen used to good effect in production code.

  • Techniques like these are used in highly respected C software - Perl and Zlib, for example. All C/C++ programmers should be familiar with it.

  • Although the preprocessor can be dangerous, the way it operates is simple and transparent: use your compiler's -E option (or equivalent) to see exactly what the preprocessor is up to. (I sometimes wish I had an equivalent option for working out how the compiler is handling templated code.)

  • Template metaprogramming experts use every preprocessor trick in the book. See, for example, some of Andrei Alexandrescu's publications [ Alexandrescu ], or the Boost preprocessor library [ Boost ]. (This library's documentation includes an excellent introduction to the preprocessor's limitations, and techniques for working round them.)

One final point: the inline keyword (intentionally) does not require the compiler to inline code. The preprocessor can do nothing but!

Reflection and Introspection

Take a look at the following Python function which on my machine lives in <PYTHONROOT>/Lib/

def encode_long(x):
  r"""Encode a long to a two's complement
  little-endian binary string.
  Note that 0L is a special case, returning
  an empty string, to save a byte in the
  LONG1 pickling context.

  >>> encode_long(0L)
  ''
  >>> encode_long(255L)
  '\xff\x00'
  >>> encode_long(32767L)
  '\xff\x7f'
  >>> encode_long(-256L)
  '\x00\xff'
  >>> encode_long(-32768L)
  '\x00\x80'
  >>> encode_long(-128L)
  '\x80'
  >>> encode_long(127L)
  '\x7f'
  >>>
  """

The triple quoted string which follows the function declaration is the function's docstring (and the r which prefixes the string makes this a raw string, ensuring that the backslashes which follow are not used as escape characters). This particular docstring provides a concise description of what the function does, fleshed out with some examples of the function in action. These examples exercise special cases and boundary cases, rather like a unit test might.

Python's doctest module [ Doctest ] enables a user to test that these examples work correctly. Here's how to doctest pickle in an interactive Python session:

>>> import pickle
>>> import doctest
>>> doctest.testmod(pickle)
(0, 14)

The test result, (0, 14) , indicates 14 tests have run with 0 failures. For more details try doctest.testmod(pickle, verbose=True) . In case anyone is confused, 7 of the tests apply to encode_long - and unsurprisingly the other 7 apply to decode_long .

Incidentally, if the module is executed as a script (rather than imported as a library) it runs these tests directly.

The doctest module is a metaprogram - an example of Python being used to both read and execute Python. To see how it works I suggest taking a look at its implementation. The code runs to about 1500 lines of which the majority are documentation and many of the rest are to do with providing flexibility for more advanced use.

In essence, note that docstrings are not comments, they are formal object attributes. Now, Python allows you to list and categorise objects at runtime, so we can collect up the docstrings for classes, class methods and for the module itself. Once we have all these docstrings we can search them to find anything which looks like the output of an interactive session using Python's text parsing capabilities. The remaining twist is Python's ability to dynamically compile and execute source code using the compile and exec commands. So, we can replay the documentation examples, capturing and checking the output.
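A toy example shows the whole loop in miniature. The function below is mine, not from pickle, and the example is in Python 3 syntax; the doctest machinery finds the docstring, extracts what looks like an interactive session, and replays it:

```python
import doctest

def double(x):
    """Return twice x.

    >>> double(2)
    4
    >>> double(-3)
    -6
    """
    return 2 * x

# Find the docstring examples and replay them, capturing
# and checking the output - exactly as described above.
finder = doctest.DocTestFinder()
tests = finder.find(double, name='double', module=False,
                    globs={'double': double})
runner = doctest.DocTestRunner(verbose=False)
for t in tests:
    runner.run(t)
```

After the loop, runner.tries counts the examples replayed and runner.failures the mismatches; documentation and test cases come from the same source text.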

The doctest module provides no more than an introduction to metaprogramming in Python. Given a Python object it is possible to get at the object's class, which is itself an object which can be dynamically queried and even modified at run-time. This isn't the sort of trick which is often required: I haven't tried it myself so I'd better keep quiet and refer you to the experts. See for example [ vanRossum ] or [ Raymond ].
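For a flavour of what this means, here is a small Python 3 illustration (my own example, not taken from the references): every object can hand you its class, and the class itself can be queried - and even modified - while the program runs:

```python
class Greeter:
    def greet(self):
        return "hello"

g = Greeter()
cls = type(g)                # the class is itself an object...
assert cls is Greeter
assert 'greet' in dir(cls)   # ...which can be queried at run-time...

# ...and even modified: every instance, existing or new,
# picks up the method added to the class.
cls.shout = lambda self: self.greet().upper() + "!"
assert g.shout() == "HELLO!"
```

As the article says, this is rarely the right trick to reach for, but it is the mechanism on which tools like doctest quietly rest.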

Domain Specific Extensions

Sometimes the best way to solve a particular family of problems is to create a domain specific language, which may be implemented as an extension to a standard language. For example (and once again, quite early in my career), I worked for an organisation - I'll call it Vector Products - which specialised in solid geometry software. Vector Products developed and actively maintained a proprietary extension to C - I'll call it C-cubed - which provided native support for various domain-specific primitives: vectors (the sort you find in 3D mathematics, not std::vector ), ranges, axis-aligned boxes; and for domain specific operators to work with these primitives.

I should stress that this C extension pre-dated standard C++. C++ classes and operator overloading can now handle much of what C-cubed provided. Nonetheless, Vector Products' investment paid off: C-cubed allowed programmers to write vector mathematics in a clean and legible way, thereby freeing them to concentrate on the real solid geometry problems they needed to solve.

I believe that the earliest incarnations of C++ were essentially domain-specific extensions to C. For early C++, the domain would be "Object Oriented Programming". [ Stroustrup1 ]

This again is metaprogramming, though (particularly with respect to the supplied examples) it is closely related to compilation.


Problems with Metaprogramming

Most of this article puts a positive spin on metaprogramming. I'm happy enough to leave you with this impression, but I should also mention some problems.


Trouble-Shooting

The first problem is to do with trouble-shooting. You have problems with your program but the problem is actually in the metaprogram which generated your program. You are one step removed from fixing it.

I deliberately used the term "trouble-shooting" rather than debugging. When you think about it, debug builds and debuggers are there to help you solve these problems by hooking you back from machine code to source code. This gives the illusion of reversing the effect of the compiler. If you can provide similar hooks in your metaprograms, the fix will likewise be easier to find.

Quote Escape Problems

The second problem I refer to as the "quote-escape" problem. It bit me recently when I converted a regular C++ program into one which was partially generated by another C++ program. For details, I refer you to [ Guest2 ].

For the purposes of this article, look at what happened when I needed to generate C++ code which produces formatted output.

Here's the code I wanted to generate:

context.decodeOut()
  << context.indent()
  << field_name << " "
  << bitwidth
  << " = 0x" << value << "\n";

Here's the code I developed to do the generating:

  << indent()
  << "context.decodeOut() << context.indent() << "
  << quote(field_name
        + " "
        + bitwidth 
        + " = 0x")
  << " << context.readFieldValue("
  << quote(field_name) + ", "
  << value 
  << ") << \"\\n\";\n";

It looks even worse without the helper function, quote, which returns a double-quoted version of the input string.

I was able to defuse this problem with some refactoring but the self-referential nature of metaprogramming will always make it susceptible to these issues.

This is also part of the reason why Python is so popular as a code-generator: as has been shown by some of the preceding examples, its sophisticated string support can subvert most quote-escape problems.
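To make that concrete, here is a Python 3 fragment generating one line of C++ output code in the style shown above. The quote helper is my stand-in for the one in the article, and the field name is hypothetical; note how the triple-quoted template carries its embedded double quotes without a thicket of backslashes:

```python
def quote(s):
    """Return s as a double-quoted C++ string literal
    (a stand-in for the article's helper function)."""
    return '"%s"' % s

# The template is legible C++; only the newline needs escaping.
line = '''context.decodeOut() << %s << "\\n";''' % quote("flags = 0x")
print(line)
```

Generating the same line from within a C++ program would force every one of those double quotes through an escape sequence, which is exactly the quote-escape problem.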

Build Time Complexity

I have already mentioned the problem of integrating code-generators into your build system. Some IDEs don't integrate them very well, and even if they do, we have introduced complexity into this part of the system. In general we prefer to trade complexity at build time for safety at run-time but we should always check that the gains outweigh the costs.

Too Much Code

We're nearing the end of our investigation, and I hope the "Why Metaprogram?" question I posed at the beginning has been addressed. The [ Wikipedia ] answers this question rather more directly:

"[Metaprogramming] ... allows programmers to produce a larger amount of code and get more done in the same amount of time as they would take to write all the code manually."

It's possible to interpret this wrongly. As we all know, we want less code, not more (more software can be good, though). The important point is that the metaprogram is what we develop and maintain and the metaprogram is small: we shouldn't have to worry about the generated code's size.

Unfortunately we do have to worry about the generated code, not least because it has to fit in our system. If we turn a critical eye on the ISO 8859 conversion functions we discussed earlier we can see that the generated code size could be halved: values in the range [ 0, 0x7f ] translate unchanged into UTF-8, and therefore do not require 128 separate cases. Of course, the metaprogram could easily be modified to take advantage of this information, but the point still holds: generated code can be bloated.
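The observation about the [ 0, 0x7f ] range is easy to check (again in Python 3 syntax): for every 7-bit value, ISO 8859-9 and UTF-8 agree byte-for-byte, so a single default case could replace 128 generated ones:

```python
# Each 7-bit value round-trips to the identical single byte,
# so the generated switch needs no individual case for it.
for b in range(0x80):
    assert bytes([b]).decode('iso8859_9').encode('utf-8') == bytes([b])
```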

See [ Brown ] for a more thorough discussion of this issue.

Too Clever

Good programmers use metaprograms because they are lazy. I don't mean lazy in the sense of "can't be bothered to put the right header in a source file", I mean lazy in the sense of "why should I do something a machine could do for me?".

Being lazy in this way requires a certain amount of cleverness and "clever" can be a pejorative every bit as much as "lazy" can. A metaprogram lives at a higher conceptual level than a regular program. It has to be clever.

Experienced C++ programmers are used to selecting the right language features for a particular job. Where possible, simple solutions are preferred: not every class needs to derive from an interface, and not every function needs template-type parameters. Similarly, experienced metaprogrammers do not write metaprograms just because they can: they write them when they choose to.

Concluding Thoughts

This article has touched on metaprogramming in a few of its more common guises. I hope I have persuaded you that metaprogramming is both ubiquitous and useful, and that it shouldn't be left to a select few.

At one time, the aim of computer science seemed to be to come up with a language whose concepts were pitched at such a high level that software development would be simple. Simple enough that people could program machines as easily as they could, say, send a text message [ 1 ] . Compilers would be intelligent and forgiving enough to translate wishes to machine code.

This aim is far from being realised. We do have higher-level languages but their grammars remain decidedly mechanical. Programs written in low-level languages still perform the bulk of processing. Perhaps a more realistic aim is for a framework where languages and programs are compatible, able to communicate with humans and amongst themselves, on a single device or across a network.

In such a framework, metaprogramming is your friend.


Acknowledgements

Thanks to Dan Tallis for reviewing an earlier draft of this article.


References

[Abrahams_and_Gurtovoy] David Abrahams and Aleksey Gurtovoy, C++ Template Metaprogramming: Concepts, Tools, and Techniques from Boost and Beyond

[Alexandrescu] Andrei Alexandrescu's homepage

[Boost] The Boost Preprocessor Library

[Brown] Silas S Brown, "Automatically-Generated Nightmares", CVu 16.6

[Doctest] doctest - Test interactive Python examples

[Guest1] Thomas Guest, "A Python Script to Relocate Source Trees", CVu 16.2 (also available re-titled "From A to B with Python" at [ Homepage ])

[Guest2] Thomas Guest, "A Mini-Project to Decode a Mini-Language - Part 3", available at [ Homepage ]. (The first two parts of this article appeared in Overloads 63 and 64.)


[Homepage] Thomas Guest's homepage

[Raymond] Eric S. Raymond, "Why Python?"

[Stroustrup1] Bjarne Stroustrup, The Design and Evolution of C++

[Stroustrup2] Bjarne Stroustrup, "Did you really say that?", from Bjarne Stroustrup's FAQ

[Sutter] Herb Sutter, "What can and can't macros do?", Guru of the Week 77

[vanRossum] Guido van Rossum, "Unifying Types and Classes in Python 2.2"

[Wikipedia] Wikipedia, a free-content encyclopedia that anyone can edit

[ 1 ] Though maybe we aren't so far off. To quote Bjarne Stroustrup [ Stroustrup2 ]: " I have always wished for my computer to be as easy to use as my telephone; my wish has come true because I can no longer figure out how to use my telephone. "
