
Pseudo-Incremental Linking for C/C++

Dr. Dobb's Journal October 1999

Avoiding the headaches of long compile/link/run cycles

By William A. Hoffman and Rupert W. Curwen

While working as a researcher for GE Corporate Research and Development Laboratory, William was a primary contributor to the open-source software package TargetJr (http://www.targetjr.org/). He is currently a partner at Kitware and can be contacted at bill.hoffman@kitware.com. Rupert develops applications for Sentient Computing at AT&T Laboratories in Cambridge, England. He can be contacted at rcurwen@uk.research.att.com.

The C++ language has provided developers with a powerful framework for implementing algorithms and building software systems. However, as C++ projects increase in size, the time it takes to test and debug new code also increases. Software engineers have spent years improving the compilers to minimize the development cycle time. The basics of modular organization of software are now well understood. But ironically, one of the major roadblocks to rapid development in C++ has little to do with the compiler -- it is the linking step that frequently causes large-scale projects to grind to a halt.

This is far from the object-oriented utopia of rich, reusable class libraries taking the hard work out of programming. Certainly design time is reduced, but you spend more and more time waiting for the linker to sift its way through all those many richly typed symbols. Such long link times frustrate developers and cause them to reject existing libraries and substitute their own smaller versions, which are almost certainly incompatible with the rest of the system. They are cut out of the feedback loop of code reuse and no longer benefit from fixes and upgrades contributed by their colleagues. Furthermore, the proliferation of incompatible libraries duplicating the same functionality increases the burden of linking for the rest of the system's developers. This "sandbox" mentality will inevitably cause conflicts during integration. The self-defeating progression from rich software libraries to long link times is particularly disturbing because the software environment should help you, not hinder you.


To fulfill the ideal of object-oriented design, the compile/link/run cycle time should not grow rapidly with system size. There are well-known techniques for managing compile time for large systems such as reducing dependencies, forward referencing where possible, and so forth, but less emphasis has been placed on link/run time. In this article, we'll describe a method for managing link/run time that we employ at the General Electric Corporate Research and Development Laboratory. Our method provides fast link/run time during development, without sacrificing run time in the final product, and without the use of any customized, nonstandard linking software.

A Brief History of Incremental Linking

Tool vendors such as SunSoft, Pure-Atria, and others recognize the need for faster link times. They address this need by offering incremental linkers, which replace the standard system linker. These linkers typically work by leaving swathes of empty, unused space in the executable binary. As code is recompiled, the newly generated object is inserted directly into the empty space. After several iterations, the empty space is depleted, and a complete relink becomes necessary. While these schemes do help to decrease link times and increase productivity, incremental linkers are not available on every platform, and may come with a high price tag.

One company (Lucid) went even further and built a system (Energize) that combined an object-oriented database with an incremental linker to provide function-level incremental builds. Lucid attempted to transfer to C++ some of the functionality of Lisp, resulting in a complex, function-level, incremental linker and integrated object database that required the user to read two manuals just to get started. In short, a snorting pig. Energize did not fail for any lack of demand for fast incremental C++ builds; rather, Lucid's vision was one step beyond what was necessary for rapid development, and the benefits of the sophisticated framework did not justify its complexity or cost.

Shared Libraries

The scheme we use for incremental linking requires no software beyond the system compiler and linker. The key to this method is the use of shared (or dynamic) libraries. A shared library is a library that can be loaded into an executable at run time, either automatically or under explicit program control. Support for shared libraries is provided by most modern operating systems, primarily because they reduce the disk footprint of executables: the shared library code need not be linked directly into the executable. Instead, the executable contains only references to the functions defined in the shared library, and these references are resolved at run time.

Of course, a program that uses shared libraries still has a separate linking step. The linking step in this case must locate all the symbols referenced by the program in the various shared libraries used. However, at link time, the exact offsets of the symbols within those shared libraries are not determined, and the corresponding code is not incorporated into the linked executable. Instead, these symbols are referenced indirectly through a symbol table in each shared library. The symbol table contains one entry for each symbol exported from the shared library, and indicates the address of the corresponding code within the library.

At run time, as the executable is loaded, the operating system also loads all the shared libraries required, and maps them into the program's address space. As undefined symbols are encountered within the executable, they are replaced with the actual address where the symbol was mapped. Because the linker has already organized all this information, this final dynamic linking step is fast.

Pseudo-Incremental Linking

Of course there is absolutely no reason that a shared library must remain unchanged between link time and run time. This is the key observation that makes pseudo-incremental linking possible. Obviously, there are some changes that will cause the dynamic link to fail, such as removing symbols that are needed by the executable. However, for the most part, it is perfectly safe to change a shared library after it has been linked into an executable.

Here, then, is the scheme we use: The executable itself is simply a shell that calls a main function located in a shared library. All the functionality of the executable is then spread amongst a set of shared libraries, each of which is relatively small compared to the whole system. Thus it is cheap to rebuild each shared library. All the libraries are relinked into the executable occasionally, perhaps every night. In between these relinks, the programmer modifies code in one or more of the shared libraries.

Figure 1 shows the typical statically linked scheme, in which a change to a single module within the executable forces a relink of the whole executable. Compare this with the pseudo-incremental linking method in Figure 2. Now a change in a module causes only the encompassing shared library to be rebuilt.

Obviously, if the libraries are used in a number of executables, the usual advantages of shared libraries also accrue -- namely, the executables will be much smaller because each one does not need to keep its own copy of the library code. The only cost is an increase in the time taken to load and run the program. This increased load time is considerably shorter than the relink time of the static executable, and can be eliminated in the final release of the system by relinking statically before the program is shipped.

Library Ordering

There is a subtle distinction between static and dynamic libraries that could, if not properly addressed, cause problems when trying to use both methods interchangeably. Under static linking, as each library is processed, only those symbols that have already been referenced are extracted from the library and placed into the executable. This means that if a symbol is in the library but has not yet been referenced, it will not make it into the executable. It is therefore best to avoid circular references between static libraries (library A uses a symbol in library B, and library B uses a symbol in library A), because you would have to include each library in the link line more than once to resolve all required symbols. Correct ordering of static libraries is important.

Shared libraries, in contrast, are much less affected by symbol ordering because all the symbols in a library are loaded together at run time. This makes the run-time image of the executable larger than a statically linked program, but it does mean that symbols will be resolved even if the libraries are not referenced in the correct order in the link line. It also means that if you have duplicate symbols in your libraries, the symbol in the first library loaded will be the one that is used. This can be a great boon during development, as it allows symbols to be redefined without even changing the library in which they are normally to be found. We call this the "TestLib method."


To recap, pseudo-incremental linking enables changes to be made in the modules of an executable without relinking that executable. Instead, only the libraries that have changed need to be rebuilt. This is a great improvement over static linkage. However, the special processing that occurs when shared libraries are loaded at run time allows an even finer grained modification of the executable, under certain circumstances.

Specifically, if you need to change the code for a method of a class -- but without changing the method's signature -- you can use TestLib. The method works by defining a special shared library, called "TestLib," which is normally empty (containing just a dummy object file), but which is linked into the executable before any other libraries. Now if you change TestLib to include a symbol that also occurs in another library, the dynamic loader will get that symbol from TestLib, not from the other library, because TestLib is loaded before that other library.

For example, assume that a class C is defined in library L. Class C has a method M that has a bug, but you don't want to recompile the whole of the class in order to test the fix. Simply create a new source file containing just the method M, compile it, and put the object file into the TestLib shared library. Run the executable and, as if by magic, the fixed method is now used.

This is great for trying out little bug fixes before applying them to the real library. It is particularly useful if the original library is not under your control, so you cannot change it directly. This might occur if the library is owned by another programmer, so you would have to get them to fix it, and that could take valuable time.

This technique does not work if you have to change the method's signature, as declared in the header file, because then your new symbol will be different from the symbols referenced in the other libraries used by your executable; those libraries will still use the original symbol definition. It is also important to remember that the symbol defined in your TestLib will not get linked in when you link statically, because it will not be referenced until later in the link line.

Test Cases, Framework Drivers, and Loadable Shared Libraries

The article "Large-Scale C++ Software Design" (C++ Report, June 1996), by John Lakos, includes an extensive section on reducing dependencies so that test cases can be run without link time dominating the cost of regression tests. Sometimes, however, it is impractical to reduce the interdependencies of a large system. We have found a way of writing test cases that uses shared libraries to reduce the time consumed by regression tests and, on certain operating systems, the disk space used. The savings in our system were significant. Figure 3 shows the savings in link time for 21 test programs from the TargetJr system (http://www.targetjr.org/). Figure 4 shows the savings in disk space used by the same 21 tests.

This test program method uses a small test driver program that links all of the shared libraries used by a set of tests. A single test is not a separately linked program, but is itself a shared library (Example 1). The test driver takes the name of a test, encapsulated in a shared library, on the command line. When run, the test driver opens the dynamic library that contains the test and calls the test function named on the command line. This saves both disk space and link time. There is only one link done for an entire suite of test cases, and there is only one executable, as each test case is just a shared library.

But this method is not limited to test cases. Create a driver program that is linked once with every dynamic library within your entire framework. Now write a small program using code from throughout your framework, with a main function. Compile this and turn it into a small shared library. The driver program can now load your shared library dynamically and execute the main function. Because symbols are resolved as they are encountered, this efficiently loads all the libraries you actually use at run time. In addition, these tiny shared library programs enjoy the same savings in disk space and link time as the dynamic tests described earlier.

With most modern operating systems, the code to implement this is both available and portable. We have implemented the class tjDLoadLib (available electronically; see "Resource Center," page 5) to provide a portable C++ interface to the various ways of accessing dynamic libraries at run time on the major operating systems. It provides the basic functionality used by the class RunDynamicTests (available electronically) to load the test-case shared library and run the test function, generating an output file with a summary of tests passed and failed. The tjrun.C main program (available electronically) is used as the main framework for driver programs. Finally, Listing One contains the TestDriver program, which is similar to tjrun.C but calls RunDynamicTests and looks for a function that has the same name as the file, rather than a main function. This approach is not new to computing; it is similar to the use of loadable modules in UNIX kernels.

Debugging Pseudo-Incremental Programs

Large-scale systems present some unique debugging challenges. Principally, these stem from the size of the symbol tables required to support symbolic debugging. For a C++ system on the order of a million lines of code, these symbol tables can add several hundred megabytes to the size of the object files. Even with the modern glut of memory, only the fastest workstations can cope with such huge debugging processes. What is really needed is the ability to load the symbols for the portions of code being debugged, and not for the whole system. But a single debug session can meander through modules that seemed unconnected with the problem, so it is clearly insufficient to attempt to guess which modules should be compiled with debugging information.

Visual C++ appears to do the right thing, with built-in support for loading only those symbol tables actually used. Indeed, this is the default behavior of the development environment. Debuggers in the UNIX world do not seem so frugal when loading symbol tables, and for a large system will spend several minutes desperately attempting to ingest the whole system before running out of virtual memory. But if you have used the pseudo-incremental system, you will be pleased that it also solves this debugging problem. Most debuggers allow you to turn off the automatic loading of symbols from dynamic libraries. Under gdb, for example, a debugging session might look like Example 2. The only drawback with this method is that a dynamic library must be loaded before breakpoints can be set in the code. In our TestDriver, we include an empty function, Run_DynamicTests_Break(), which is called just after the library has been loaded. Placing a breakpoint in this function will interrupt the program in a suitable state for further breakpoints to be inserted within the library code. In the framework driver example, a function cmain is called prior to entering the shared library.
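
A gdb session in this spirit might look like the following (set auto-solib-add and sharedlibrary are standard gdb commands; the library and function names are hypothetical, and Run_DynamicTests_Break is the hook described above):

```
(gdb) set auto-solib-add off          # don't load shared-library symbols automatically
(gdb) break Run_DynamicTests_Break    # empty hook called just after the library loads
(gdb) run fullpath/to/testlibrary
(gdb) sharedlibrary testlibrary       # now load symbols for just this one library
(gdb) break SomeFunctionInTheLibrary  # further breakpoints can be set in library code
(gdb) continue
```
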

Building and Finding Shared Libraries

Creating shared libraries is almost the same as building an executable program. However, there are usually compiler-specific flags that must be used when compiling to object code to make the resulting object code position independent. Table 1 lists the flags used by several common compilers and operating systems to compile files that can be used in shared library creation.

Once you have built your shared library or DLL, there are several things you must do so that your programs can find the libraries. On most UNIX systems, it is possible to build the path into an executable so that no external environment variables are required (see Table 2 for the most common variable names used). On Windows, you must always have the DLLs in the PATH, as there is no way to build in run-time paths.

Real-World Applications

The ideas presented in this article come out of years of experience with the TargetJr system (http://www.targetjr.org/), a freely available C++ object-oriented environment for image understanding research that has been developed and used over the past eight years. In working with such a large research-oriented C++ system, the techniques presented here have been invaluable and have enabled the software to be a productive resource.

However, both the software and the ideas presented in this article are applicable to any large C++ system. For example, I have taken the tjrun pattern and applied it to the Visualization Toolkit (VTK), a high-end C++ toolkit for scientific visualization software development (http://www.kitware.com/). To use this approach with VTK, all that was required was to link the VTK libraries to the main routine in tjrun.C, creating a program called "vtkrun." Once that's done, you can compile any of the C++ examples that come with VTK into shared objects, pass them to the vtkrun application as command-line arguments, and run them in the same way as a fully linked application. Example 3 is the makefile used to create vtkrun and two example programs.


The techniques and source code presented here let you take full advantage of C++ without the penalty of excessively long compile/link/run cycles. In addition, a framework for test cases that uses less disk space and provides basic test reporting capabilities should be valuable to any C++ system. Finally, methods of debugging applications that use dynamic loading and link-time shared libraries are essential to using pseudo-incremental linking.


Listing One

// -----------------------------------------------------------------------
//                   Copyright (c) 1997 TargetJr Consortium
//               GE Corporate Research and Development (GE CRD)
//                             1 Research Circle
//                            Niskayuna, NY 12309
//                            All Rights Reserved
//              Reproduction rights limited as described below.
// Permission to use, copy, modify, distribute, and sell this software
// and its documentation for any purpose is hereby granted without fee,
// provided that (i) the above copyright notice and this permission
// notice appear in all copies of the software and related documentation,
// (ii) the name TargetJr Consortium (represented by GE CRD), may not be
// used in any advertising or publicity relating to the software without
// the specific, prior written permission of GE CRD, and (iii) any
// modifications are clearly marked and summarized in a change history log.
// ------------------------------------------------------------------------
// Module: TestDriver.C
// Purpose: Generic test driver program to be copied to test 
//   directories by generic.mk
#include "RunDynamicTests.h"
#include <iostream.h>

int main(int argc, char** argv)
{
  if(argc < 2)
  {
      cerr << "Usage: " << argv[0] << " fullpath/to/testlibrary " << endl;
      return -1;
  }
  cout << endl
       << "BEGIN: Generic TargetJr TestDriver: Testing - "
       << argv[1] << endl;
  return RunDynamicTests::RunTests(argc, argv);
}


Copyright © 2000 Dr. Dobb's Journal