It was nice to learn from John Merrells that my last article fired at least one reader to add a contribution. If you remember, one of the main purposes I have in writing these articles is to help me develop my understanding of various patterns. It is in my nature to fire off various ideas when I am in exploration mode. If I am lucky a few of them will be worthwhile and have more than a transient existence. Even the bad ideas have the advantage of exercising a few brain cells; yours as well as mine.
The Adapter Pattern
The fundamental idea behind the Adapter pattern is being able to provide a tailor-made interface to allow reuse of existing code (and a secondary form in which code is tailored to fit an existing interface). Design Patterns limits its discussion to classes (or the equivalent in other OO languages). Let me start with a couple of simpler instances using functions.
Several functions in the Standard C Library are seriously flawed in that they are not const correct. For example the prototype for strpbrk is:
char *strpbrk(char const *s1, char const *s2);
Unless no character from s2 occurs in s1 (in which case strpbrk() returns a null pointer) this function unfortunately removes const qualification. How should we write it in C++? Try as you might, you will find that you cannot implement this C function as a single C++ function. Well, I tell a lie: you could forcibly remove the const qualification with a cast.
If we remove the const qualification from the first parameter we cannot call that version with a traditional const C string (array of char). That will certainly break some legacy code where the programmer passes a const-qualified array of char to strpbrk(). What we need is two functions with the following prototypes:
char const * strpbrk(char const * s1, char const * s2);
char * strpbrk(char * s1, char const * s2);
Overloading will resolve the call to preserve the appropriate qualification. I guess this will still break some C code, but such code will already be broken in that it will try to write to a string that was already (prior to the call to strpbrk()) read only. I am not worried about such breakage; all that a C++ compiler will be doing is highlighting a potential problem in the existing code.
Now any self-respecting programmer is going to feel less than happy with writing two identical function bodies simply to maintain const correctness. There is no need for separate implementations, just different interfaces (prototypes). The maximally const version is all we need to implement. By itself it will work for any string (well, any non-volatile string). The only problem is that passing a string through such a function adds a superfluous const qualification if there was none at entry. Fortunately we can write a simple adapter function to resolve this problem:
inline char * strpbrk(char * s1, char const * s2)
{
    return const_cast<char *>(strpbrk(const_cast<char const *>(s1), s2));
}
In other words we forcibly add const qualification to the first argument to get the all const version selected and then forcibly remove it from the return. As this version will only be called for a writable first argument, removing the const qualification cannot introduce some new danger.
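As a quick check of the intent, a minimal usage sketch (assuming the two overloaded declarations above are in scope) might look like this:

// illustrative only: shows which overload is selected for each argument type
void demo()
{
    char buffer[] = "some mutable text";
    char const * readonly = "read only text";

    char * hit1 = strpbrk(buffer, "xyz");           // non-const overload, result stays writable
    char const * hit2 = strpbrk(readonly, "xyz");   // all-const overload, const preserved
    // char * bad = strpbrk(readonly, "xyz");       // correctly rejected by the compiler
}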
There are many examples where this mechanism makes sense. Consider the common idiom where a member function returns a reference to the object:
MyType & MyType::func()
{
    // functionality
    return *this;
}
Fine, but what happens when this is a const member function? That's right, you get an added const qualification. This can be fixed in the same way:
MyType const & MyType::func() const
{
    // functionality
    return *this;
}
coupled with (almost certainly in class):
MyType & func()
{
    return const_cast<MyType &>(const_cast<MyType const *>(this)->func());
}
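Putting the two overloads together in a single (hypothetical) class definition, the whole idiom looks something like this:

class MyType {
public:
    MyType const & func() const
    {
        // the real functionality lives here, written once
        return *this;
    }
    MyType & func()
    {
        // forward to the const version, then strip the const
        // qualification that the forwarding added
        return const_cast<MyType &>(const_cast<MyType const *>(this)->func());
    }
};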
This mechanism may not yet be an idiom of C++ but if not, it ought to be. It would be nice to turn this into a template but I cannot see any way of doing so. That may just be my lack of insight.
Overloaded C Functions
The coming version of C will include support libraries for both float and long double as well as the already existing support for double. Leaving aside what some will consider an elegant method to avoid overloading in C (others will consider it an atrocious hack) the true names of the maths functions will identify their precision. For example sin() will take and return a double, sinf() will take and return a float and sinl() will take and return a long double. (By the way, if the introduction of these extra identifiers breaks your code, the problem is yours because ISO/IEC 9899:1990 7.13.4 reserved all the names of existing functions in math.h when suffixed with f or l).
The problem for C++ is that C's fancy footwork to allow the compiler to choose what it thinks you want when you write sin(0.12F) is far from how we would do it in C++. What we need is some adapter functions to take the C distinct names and create a C++ overloaded set of functions. So we will have:
double sin(double);   // implemented as in C
inline float sin(float angle) { return sinf(angle); }
inline long double sin(long double angle) { return sinl(angle); }
In other words, if you need a set of overloaded functions that can be used in C you must first write them with distinct names (qualified as extern "C") and then add adapter functions to provide the overloading if you want to use them in C++.
As these adapter functions are plain forwarding functions (sometimes called wrappers) I cannot think of a good reason not to inline them.
As a general principle, when I want a function to be usable in both C and C++ I would give careful consideration to declaring and defining a C version with a distinct name and then using an adapter to provide an equivalent type-safe C++ version. I think that this would reduce surprises when naïve C programmers do not realise that the version of an overloaded function they are calling will have been name mangled in the object code. Please note, I said careful consideration, not that this is the only correct way to do it.
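A minimal sketch of that approach (clampi, clampd and clamp are purely hypothetical names, chosen only to show the shape of the code):

// C-callable versions with distinct, unmangled names
extern "C" int clampi(int value, int low, int high);
extern "C" double clampd(double value, double low, double high);

// C++ adapters providing a single overloaded, type-safe name;
// plain forwarding functions, so inline them
inline int clamp(int value, int low, int high)
{
    return clampi(value, low, high);
}
inline double clamp(double value, double low, double high)
{
    return clampd(value, low, high);
}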
Adapters for Classes
Changing or Enhancing an Existing Interface
Let me start with a very simple adapter (mainly because it includes a nasty gotcha pointed out to me by Michael Ball). Suppose I have a class that I want to use as a base class for a polymorphic hierarchy but the original author failed to provide any virtual functions. In particular it does not include a virtual destructor. A simple solution comes to mind:
class PolyBase : public Base {
public:
    virtual ~PolyBase() {}
};
This is about the simplest class adapter that you can imagine. Of course, in practice, you will add quite a few virtual-qualified forwarding functions (adapter functions) to provide polymorphism for existing functionality. Note that you haven't apparently added any data to PolyBase, so you might think that you could happily use this global/namespace function:
Base & func();
which returns a Base object by reference. You might, for example, try this:
PolyBase & pb = func();
When the compiler declines to compile this on the basis that a Base is not a PolyBase you stop, think for a moment and come to the conclusion that while this is true, a downcast will be OK because you have not added any data. So you amend your code to:
PolyBase & pb = static_cast<PolyBase &>(func());
WRONG. The chances are very high that your program is now irredeemably broken. You did (almost certainly) add data when you derived PolyBase from Base. PolyBase will include a vptr (a pointer to the virtual function table). Worse still, most implementers put this at the beginning of the class object data. Adding polymorphic behaviour to a class changes its data layout. Fixing this problem requires a bit more care. You might try writing an adapter function:
PolyBase pfunc()
{
    return PolyBase(func());
}
to construct a PolyBase object that can be returned by value (the copy constructor will be optimised away by any respectable compiler). Unfortunately that will not work as it stands, because we need a constructor that takes a Base by reference or by value (neither the default nor the copy constructor will do). So we must add:
PolyBase(Base const & data): Base(data){}
to our class interface. That has a side effect in that it will suppress the default constructor, which we must now make explicit:
PolyBase(){}
if we wish to restore that behaviour.
So even our small adaptation to make a class suitable for use as a polymorphic base class (assuming that there are no contra-indications) requires:
class PolyBase : public Base {
public:
    virtual ~PolyBase() {}
    PolyBase(Base const & data) : Base(data) {}
    // duplicate any explicit constructors in Base;
    // if there are none then
    // PolyBase() {}
};
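A short usage sketch, pulling the pieces together (func() is the existing function and pfunc() the adapter function from above):

Base & func();              // the existing function returning a Base by reference

PolyBase pfunc()
{
    return PolyBase(func());
}

void demo()
{
    PolyBase pb = pfunc();  // a genuine PolyBase, complete with its vptr
    // pb can now be used wherever the polymorphic interface is expected
}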
Adding an Existing Class to a Hierarchy
Design Patterns gives an example of this where you might want to add an existing text object into a hierarchy of displayable objects. You certainly wish to reuse the existing code for text but you need to supply an interface that conforms to that for the displayable hierarchy so that you will get appropriate polymorphic behaviour. They propose something such as:
class DisplayText : public Displayable, private Text {
    // implement Displayable's public interface
};
While I understand the rationale for not using public inheritance for Text, I think that using private inheritance is also an error. If you decide to use the multiple inheritance route, I think the interface should be inherited publicly and the implementation mechanism should use protected inheritance. The decision between private and protected inheritance comes down to whether inheritance is being used purely for implementation (when it should be private) or because the implementation choice might be useful in more derived classes. There are benefits and costs both ways. So you should think about it rather than just follow the crowd.
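In sketch form, the multiple-inheritance version I am suggesting differs from the above only in how Text is inherited:

class DisplayText : public Displayable, protected Text {
    // implement Displayable's public interface, forwarding to the
    // protected Text implementation where appropriate; more derived
    // classes can still reach Text if they need to
};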
I can see little if any advantage to using private inheritance rather than simple composition or layering. True, you can import functionality with using declarations, but you can do the same thing with forwarding functions to make functionality of the data object available in the protected or public interfaces.
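For comparison, a sketch of the layering alternative (the forwarded member names append() and length() are purely illustrative; a real Text class will have its own interface):

class DisplayText : public Displayable {
    Text text;   // layered, not inherited
public:
    // implement Displayable's public interface ...

    // forwarding functions exposing selected Text functionality
    void append(char const * s) { text.append(s); }
    unsigned length() const { return text.length(); }
};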
One thought that crossed my mind was that you might want to use a dynamic_cast<> of a Displayable object to determine if it was a Text one and thereby access the full public interface of Text if appropriate. The thing that nags me here (and with the ACCU conference only days away, I have no time to check all the details) is that I do not know how dynamic_cast<> treats protected inheritance, nor do I know how it treats sub-objects that are non-polymorphic (in other words base objects that do not support RTTI). Perhaps someone else might contribute the answers for the next issue. (Some of you are going to have to start providing something pretty soon because Overload is being written by far too few people).
Subsuming a Hierarchy
As well as adopting a single class into your polymorphic hierarchy with an appropriate adapter you might also wish to adopt a whole hierarchy. In this case inheritance definitely is not what you want (I assume that you do not want to adopt the classes on a one-by-one basis). Straight layering does not help either. You have two choices, a pointer or a reference. Which you choose is a design decision. If you are happy for your DisplayText object to be specialised to the specific Text subtype (e.g. ColouredText) at construction time then you can use a reference member. With only the appropriate members listed, it looks something like:
class DisplayText : public Displayable {
    Text & data;
public:
    DisplayText(Text & d) : data(d) {}
};
Note that this uses an existing object. You will need to consider the Observer pattern as well because the display will need updating if changes are made to the referenced object. If you want the Text object contained in the DisplayText one you will need to investigate creational patterns.
If you want to allow your DisplayText to change the Text object to a different one you will need to use a pointer:
class DisplayText : public Displayable {
    Text * data;
public:
    DisplayText(Text & d) : data(&d) {}
};
Now you will need to consider what you want to do with assignment and copy construction. Note that the DisplayText object does not own the Text object and so you do not need to concern yourself with problems of deep copying. However your Text object might go away so you do need to implement some variation on the Observer pattern so that the destruction of your Text results in the pointer to Text in DisplayText being reset to null.
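A minimal sketch of one such variation follows. It is only a sketch: it assumes you are able to add a notification hook to Text (or to a thin wrapper around it), and the member names set_observer() and text_destroyed() are invented for illustration.

class DisplayText;   // forward declaration

class Text {
    DisplayText * observer;   // whoever is currently displaying this text
public:
    Text() : observer(0) {}
    void set_observer(DisplayText * d) { observer = d; }
    ~Text();                  // defined after DisplayText, below
    // ... the rest of Text's interface
};

class DisplayText : public Displayable {
    Text * data;
public:
    DisplayText(Text & d) : data(&d) { d.set_observer(this); }
    void text_destroyed() { data = 0; }
};

inline Text::~Text()
{
    if (observer)
        observer->text_destroyed();
}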
As long as you can get the intercommunication between Text objects and DisplayText working correctly then the pointer mechanism is much better than the reference for handling non-ownership semantics.
To Summarise
If you want to adopt a single object type into a hierarchy use either multiple inheritance with public inheritance of the hierarchy interface and protected inheritance of the adoptee
or
single inheritance of the hierarchy interface and layering for the adoptee.
These mechanisms make the object own the instance of the adoptee. You do not need to consider the complexity of providing communication between the adapter and the adoptee but you may need to consider how, if desirable, you are going to provide access to the adoptee's public interface.
If you want to adopt a whole hierarchy you must consider whether the adapter will own the adoptee (either through a pointer or a reference) or just access a free-standing instance.
Pluggable Adapters
Let me be entirely honest here; I have no idea what the authors of Design Patterns mean by these. I find their text incomprehensible. I would be delighted to read a simple exposition of this variety of adapter with a good clear example with reasonably full implementation in code. I understand the fundamental concept that I might want an adapter that makes minimal assumptions about the class it is adapting but that is as far as I can go. I doubt that I am the only one who is confused.
STL Adapters
There are some interesting examples of adapters in the STL.
We even have examples of adapters that modify behaviour rather than interface. For example the reverse_iterator adapters simply reverse the direction in which a sequence is traversed. As you might expect, these are examples of templates that provide the new behaviour based on the old. The template argument used to instantiate a reverse_iterator must be an appropriate type of iterator.
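A small example of this behaviour-modifying kind of adapter, traversing a standard vector backwards:

#include <iostream>
#include <vector>

int main()
{
    std::vector<int> v;
    for (int i = 1; i <= 5; ++i)
        v.push_back(i);

    // the reverse_iterator adapter walks the same sequence in reverse
    for (std::vector<int>::reverse_iterator it = v.rbegin(); it != v.rend(); ++it)
        std::cout << *it << ' ';   // prints 5 4 3 2 1
    std::cout << std::endl;
    return 0;
}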
Then we have a wide range of other function adapters such as the various binders. For example bind2nd<> is a function template that takes two parameters. The first is an adaptable binary function object (typically, but not necessarily, a predicate returning bool) and the second is a value to bind to the second parameter of that function. Let me give you a simple non-template example.
Suppose I have a perfectly good function that computes the product of two objects and I need a specific function (with its own distinct address) that doubles a value. The prototype of the original function might be:
value_t product(value_t, value_t);
Now I write:
inline value_t times2(value_t val) { return product(val, 2); }
Now this kind of adaptation is common (note that default arguments do not help because I might want to have several different constant values for the second parameter in different parts of my program). I could write a special adapter template such as:
template <int i>
inline value_t times_by(value_t val)
{
    return product(val, i);
}
Now as long as my compiler supports explicit template arguments for functions I can write something like:
value_t val(12.8);
cout << times_by<3>(val) << endl;
to get three times val sent to cout.
However this is far from being general enough. Suppose that I have a function that is something like:
template<typename T, typename S> bool has_a_factor(T t, S s);
and I want to create a unary function that tests BigInt values to see if they are divisible by 37. Because has_a_factor is a plain function, it first needs wrapping with ptr_fun() to give bind2nd the adaptable function object it requires. The function object I want is:
bind2nd(ptr_fun(has_a_factor<BigInt, int>), 37);
For a single instance this may seem pretty trivial but the STL is riddled with algorithms that need some form of test function as a parameter. The existence of such function adapters makes life much easier (once you get used to using them). As in my example, the test function itself is often a template (the STL provides function objects such as greater<> ready to plug and play, as long as your type meets the specified interface requirements).
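For instance, here is a minimal sketch using the standard greater<> functor together with bind2nd and count_if:

#include <algorithm>
#include <functional>
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> v;
    for (int i = 0; i < 10; ++i)
        v.push_back(i);

    // bind2nd adapts the binary greater<int>() into the unary
    // predicate "is greater than 6", which count_if can then use
    long n = std::count_if(v.begin(), v.end(),
                           std::bind2nd(std::greater<int>(), 6));

    std::cout << n << " values are greater than 6" << std::endl;   // prints 3
    return 0;
}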
In addition to a rich collection of function adapters, the STL also includes a variety of class adapters. For example, when we describe a data structure as a FIFO queue we know exactly what interface we expect it to present to the user. However there are several different internal structures (deque, list, etc.) that we might use to implement the necessary behaviour. As long as we stick to appropriate standard STL containers we can create queues with the whole interface generated for us. Indeed we can go further if we wish, by creating our own specialist (either template or plain) data structures that can plug into the STL queue template. We have to check exactly what interface elements the STL queue requires of its template argument, but as long as we meet those we have a queue available on demand.
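A minimal sketch of the queue adapter presenting the same FIFO interface over two different underlying containers:

#include <iostream>
#include <list>
#include <queue>

int main()
{
    std::queue<int> q1;                    // deque is the default underlying container
    std::queue<int, std::list<int> > q2;   // same FIFO interface, list underneath

    q1.push(1); q1.push(2);
    q2.push(1); q2.push(2);

    std::cout << q1.front() << ' ' << q2.front() << std::endl;   // prints 1 1
    return 0;
}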
Much of the power of the STL comes from a wide application of the adapter concept. It is well worth careful and thoughtful study. That way you will broaden the range of your programming skills. Do not waste time re-inventing wheels but do invest time understanding how wheels work and how to build your own.
Remember that the Amerind (Native American to be PC) never invented the wheel because wheels do not work well across the range of snow, ice, pine forest and prairie; runners work much better on that kind of terrain.
Sorry, but this is as far as I have time to go this time out. Please write in, expanding on or correcting what I have written.