
Some Thoughts on the Use of New

Overload Journal #2 - Jun 1993 + Programming Topics   Author: Francis Glassborow

One of the reasons some C++ programmers like to use new to create objects is that the memory required for the object is taken from the heap rather than the stack. In many environments stack space is limited and needs to be carefully nursed. This is certainly the case with PCs running MS-DOS. The problem becomes more severe if you are using a recursive algorithm as part of your solution.

The problem with the standard (as found in most books) method of using new to create objects is that an explicit pointer is used to handle the object. This raises two problems. First, you will often have to explicitly dereference the pointer. Second, you can reassign the pointer without using delete to release its storage back to the heap.

Of course neither problem is fundamental, but both add to the potential for making mistakes. For example, consider the following minimalist program:

#include <iostream.h> 
main()
{
int * a = new int(23);
cout<<a<<endl; // value of a intended
return 0;
}

This simple coding error is easily identified in this context: the program prints the value of the pointer rather than the int it points at. In many other cases the type checking built into the compiler will trap errors caused by failure to dereference a pointer. But what happens if you use a cast, perhaps to resolve an overload ambiguity, as in the following code?

#include <iostream.h>
void fn(char c) { cout<<c<<endl; }
void fn(long l) { cout<<l<<endl; }
main()
{
int a = 23;
fn(a);
return 0;
}

If you try to compile this code you will get an ambiguity error. You can fix that by changing the call to:

fn((char) a);

to make your intentions explicit. So far so good, but look what happens when we decide to use new to create the storage. (Yes, I know this is silly when dealing with builtins, but I do not want to have to implement a user-defined type whose size might make such methods desirable.)

The code now reads as:

#include <iostream.h>
void fn(char c){ cout<<c<<endl; }
void fn(long l){ cout<<l<<endl; }
main()
{
int * a = new int(23);
fn((char)a);
return 0;
}

The cast now does two things: what we wanted (resolving the ambiguity) and something we did not want (converting a pointer to a char).

Furthermore, none of the uses of new above has been matched with a delete. Careless, but it also allows this to happen:

main()
{
int * a = new int(23);
fn((char)a);
a=new int[24];
return 0;
}

Again this is simply bad programming, and in some instances you might turn on the programmer and blame his lack of care. However, it might be better to show how a different approach might eliminate much of the problem. Consider replacing

int * a = new int(23);

with

int & a = *new int(23);

That simple change removes the danger of casting a pointer into something you did not intend. It also allows the compiler to detect any attempt to re-use the identifier to refer to something else. We are still left with a couple of problems. The first is the need to match uses of new with uses of delete. The line

delete a;

generates an error (just as well, else this method would be far too dangerous) but

delete &a;

does exactly what we want. Perhaps you wish that when an object is created via this route it would automatically be destroyed when the reference variable goes out of scope. A little careful thought should convince you that, even if implementors could write compilers that recognised this method, it would be intolerable to implement such an idea. Would you and the compiler always agree? No, the simple rule is best: every use of new requires an explicit corresponding use of delete.

How to Deal with Arrays

Unfortunately the way that C (and hence C++) implements arrays means that we cannot use the above method for them. However, if we understand the C implementation of an array we can duplicate it for a dynamic array and gain reasonable security. C treats an array name as if it were of type T * const, that is, a constant pointer to type T.

So what we need is code like:

#include <iostream.h>
void fn(char c){ cout<<c<<endl; }
void fn(long l){ cout<<l<<endl; }
main()
{
int * const a = new int[23];
for(int i=0;i<23;i++) a[i]=i;
fn((long)a[20]);
delete [] a;
return 0;
}

Once again we get the advantage of automatic dereferencing (i.e. the code we write for our dynamic array is indistinguishable from that used with a static one). Be careful to get the const in the right place when dealing with multi-dimensioned arrays. That is still only a tiny part of the problem: when dealing with such objects you should certainly create an appropriate class. Try it without! Remember that C++ provides you with more powerful tools; if you insist, you can use an ordinary screwdriver on a machine screw.

And Finally

The use of reference variables in a class has some interesting consequences, which certainly require careful writing of copy constructors and destructors. More particularly, it introduces some problems with writing overloads for assignment operators: it torpedoes the default operator=. I think that is beneficial, but others will disagree.

If Mike is agreeable I will write next time about the use of new in class definitions and how to minimise stack use when implementing recursive algorithms.

I am always agreeable to you writing for Overload
Francis.
