One size rarely fits all. Bill Clare considers different approaches to parameterization.
The purpose of this article is to outline a design approach that allows data to be shared where needed and hidden elsewhere. The issue becomes particularly interesting when not all users with visibility of a shared capability are provided with the same implementation of that capability.
Kevlin Henney [Henney07] has recently written a series of articles ('The PfA Papers: Context Matters' among others) about the history and advantages of a pattern termed PfA, or Parameterize from Above. The pattern involves passing contextual information to components as parameters rather than through globals or singletons.
Allan Kelly [Kelly04] addressed many of the same issues with 'The Encapsulated Context Pattern', a pattern for a shared context. A spirited set of responses followed (Overload 65, February 2005).
Many have written about the use and misuse of singletons.
Problem to be addressed
Before considering approaches, it is worth stating the issues carefully.
OO methodology provides strong support for encapsulation of the behaviour of a single or a related set of concepts. By itself, though, it provides little guidance about how these concepts interact and communicate. Various language features, patterns and approaches address this issue.
Here we consider approaches where:
- There are multiple users, or clients, that share data from multiple external capabilities.
- Clients can include components, services, threads, algorithms, objects or functions.
- Capabilities can include services, characteristics, resources, error and exception handlers, and values.
- Capabilities can themselves be clients of higher-level capabilities, which in turn are external to them.
- Different users, under different circumstances, may need a different version or implementation of a particular capability or service.
Here we can partition uses by nested scopes in the function invocation hierarchy, where invocation includes both function calls and routing of work to different servers.
This separates capabilities that depend on their environment from the base functionality of the clients, which is environment independent. The approach allows clients to be adapted to a rich set of environment-based capabilities without changes to client code. It is worth noting, however, that this notion of an 'environment' boundary can be somewhat flexible for many applications.
This notion also supports some of the concepts of Aspect Oriented programming, where common capabilities are 'woven' into client users. The emphasis there is on compile-time binding, while here it is on runtime binding.
The basic objectives here are to suggest a framework where:
- Sharing of a common capability does not create dependencies among the clients.
- Implementations for capabilities can be specified globally to meet overall system requirements and adapted locally to meet specific internal scope requirements.
- New capabilities can be introduced without impacting existing code.
- New implementations of a capability can be introduced without impact to existing code.
- New scopes can be introduced and modified without impacting existing code.
- Code to access capabilities is independent of their implementation or of their tailoring to a scope.
An approach to this can be based on considerations of an overall environment with particular implementations of shared capabilities to be used within certain scopes. Also there are related considerations for capabilities that have interdependencies, for resource control, for establishing concurrent processing, for data sharing, for establishing controls externally and for testing.
An environment is specified through a set of function or function object pointers that provide client access to capabilities.
Capability implementations can be maintained and specified:
- globally and accessed through maps indexed by scope;
- locally to the scope that uses them; or
- in some combination of these.
Ordinarily the external capabilities are orthogonal not only to their clients but also to each other. Where there are interdependencies among the capabilities, it is useful to view them as satisfying a common role. Examples of role based capabilities include:
- A set of mathematical and physical constants that are needed to provide consistent values and precision for a particular algorithm.
- A logging capability that collects, filters, formats, routes and records data; here different clients may need different versions of some of these functions.
Values of a capability access pointer are stacked by scopes within the execution hierarchy. This can occur in two phases:
- Before the scope is invoked. This is useful to establish external consistency and to meet external requirements.
- Within scope initialization. This is useful to establish internal consistency and to meet internal requirements.
In both cases, resources may be obtained and setup, and appropriate configurations initialized when the scope is entered. When the scope is exited, finalization routines release resources, and restore the previous external state.
Access to other capabilities that do not need to be adapted to the scope is left untouched, and those capabilities are thus directly inherited.
Resource allocation and management
Data and other resources for shared capabilities need to be created at some time and within some scope. The scope concept suggests a framework for managing resource instances. In many cases, this can provide more structured support for data sharing and control than the use of so-called smart pointers.
Alternatives here include:
- Resource instances are created as needed, and released after use.
- Resource instances are created when first needed, then cached and reinitialized as needed. Actual release of the resource can occur at some higher level of scope, based on trade-offs of memory and processor usage.
- Resources can simply be initialized in advance of use, possibly at startup, and never explicitly released.
- If it is not known at the scope level whether data or a resource will be needed, then a Singleton can be set up, or the Singleton can register with the scope manager. The resource is released, if necessary, at the scope end.
- If it is not known how many objects will be needed within the scope, then an empty container, possibly with a factory, can be set up and the resources released, if necessary, at the scope end.
Implementation instances can be specified globally through maps indexed by a scope ID to appropriate functions or objects. Alternatively, implementations can be specified through local scopes where they are needed. This is based on design trade-offs between a co-ordinated global environment and configuration management on the one hand, and independent support for lower level functions on the other.
For concurrently executing threads or processes and for remote executions, the current environment is copied to initialize the new thread, process or remote execution environment.
For applications that queue requests to a separate thread or process, the queue manager can propagate access pointer references to the execution environment or environments of the request processing.
Where data needs to be shared for both read and update, the usual issues of data sharing remain, independently of scope management. These issues can be addressed with the usual techniques of:
- Providing separate data instances, possibly using copy-on-write techniques.
- Use of locking.
- Queuing requests to a data owner thread or process.
External environment parameters
Parameters for adaptation can be supplied in environment variables and files that are accessed by initialization routines. With this, adaptation and tailoring of services for particular environments can be accomplished without code modification.
Testing occurs at several levels.
- Capabilities can be tested independently of clients.
- Clients can be unit tested with adapted versions of the capabilities they invoke.
- As always, integration testing is needed to verify interactions among separate capabilities.
Actual access to capabilities needs to be considered from two perspectives:
- Scope managers setting up client access to particular capability implementations.
- Clients actually making use of the capability.
Scope managers set up client access to particular capability implementations. They need visibility of implementations only to create and locate them, and then to set pointers. Several mechanisms are possible for this access.
The most direct is through global variables for the pointers.
In particular, these globals could be isolated in an Environment Namespace.
Singletons are sometimes advocated as an alternative to the use of globals.
Singletons combine concepts of global access and creation when needed. This creation on the fly, when and if the object is needed, is the advantage of a Singleton. However, it is also its disadvantage, in that it leaves open the issue of when the resource should be deleted, if ever.
Here two approaches are possible:
- A single Singleton could provide the basis for access to all implementation instances.
- Singletons could be used to create shared data resources as needed. Again scope management could be used to release the resources at appropriate times.
Ultimately, however, the Singleton creator itself must be global, and unless there is some reason to postpone creation, it does not appear to add value.
An alternative is for scope managers to pass parameters, or parameter blocks, to their immediate functions, which then pass these down the call hierarchy.
This can provide some encapsulation by combining the internal and external capabilities within a single function, but it introduces its own complexity.
None of these techniques is exclusive of the others, and they are independent of the requirements of scope management. Thus, they can be combined as needed, especially for code obtained from different sources.
Actual use of a capability within client code can be much simpler. Here, the actual base application code can look like:
  pi();          // retrieve value with consistent precision for this scope
  math().pi();   // retrieve role and property
The implementation mechanism at the client level can determine if these routines that access pi or math are:
- set up as globals
- defined within the local scope
- functions or properties of the local class.
In particular, this is independent of whether the actual values are derived from globals, singletons or passed parameters.
Turning back to the logging example above, particular scopes may need specific logging capabilities, and so may substitute their own or just add to a global capability. Logging within a particular scope can in turn have specific functions for filtering, formatting, or routing. For test environments, or even as deployed, logging needs may vary considerably for individual scopes and circumstances.
Applications can be instrumented with a considerable amount of internal trace calls. Such code usually has considerable overhead, so it is desirable to be able to dynamically adjust the amount of detailed logging within separate scopes. Each trace call can provide parameters that specify a level of detail at which output should be recorded. The amount of output for specific tests, or to debug specific problems, can then be adjusted through external parameters that specify trace levels for different scopes. With appropriate templates, some of this tailoring can occur at compile time, with calls being completely eliminated through redefinition of the call templates.
A program's environment consists of a set of nested scopes which can be more or less global. Scopes have internal and external capabilities with many of the external capabilities shared to differing degrees with other components. A capability may have different implementations for use in different scopes under different conditions.
Judicious use of scope concepts allows common capabilities to:
- be global where needed, but limited otherwise
- support managed resources
- allow controlled data sharing
- be flexible, in terms of adaptation and of being introduced into an evolving code base.
Within this framework, applications can, without code impact, use capabilities derived from globals, singletons or parameters or, where necessary, combinations of such techniques.