
Software Project Management



    This report is a summary of ideas on Software Project Management collected from various sources. The motivation behind this is to get a better understanding of software project management in order to help Runtime projects run smoothly.

    1. Sources
    2. Basics of efficient management
    3. Estimates
    4. Risk management
    5. Classic mistakes
    6. Team management
    7. Development methodology
    Fabrice Retkowsky, October 2000

    1 Sources

    1.1 Rapid Development

    Written by Steve McConnell (Microsoft Press), this over-long book is considered a `must-read'. It is split into two main sections: the first 390 pages define the principles of Rapid Development, the main issues of project management and how to tackle them, and hint at `best practices'. The second part is a catalogue of 27 such `best practices', i.e. techniques that projects could or should use to achieve faster development.

    1.2 Death March

    A short book by Edward Yourdon (Prentice Hall). Two or three chapters (at the beginning and the end) are kind of useless, but the core of the book is more or less a very good summary of Rapid Development, with some extra ideas.

    1.3 Peopleware

    Second edition of a classic by Tom DeMarco and Timothy Lister (Dorset House Publishing). This one is about people and their environment, not about software project management. There are no technicalities here, only general observations of people and places, full of common sense. The authors draw from various sources, incl. some of C. Alexander's work. It is extremely good reading and short (230 p.).

    1.4 ACS 4.0 development notes

    aD has some documentation on the development of ACS 4.0 on its website. Most of it is private, but some is public and shows the kind of management tools/methodology they use or used.

    1.5 Further readings/resources

    Some of the ideas here come from discussions within Runtime. Still to be read: `Code complete', and maybe `The mythical man-month'.


    2 Basics of efficient management

    According to Rapid Development, there are 4 steps to efficient development:

    • avoid classic mistakes
    • follow development fundamentals
    • manage risks actively
    • apply schedule-oriented practices.

    All of these are needed: there is no magic cure, no single practice that will guarantee success.

    If you think about the development speed of a project, there are four dimensions to take into account:

    • people
    • process
    • product
    • technology


    3 Estimates

    3.1 Aims

    Estimates (within the context of a requirements specification, or during the course of a project) have various objectives: knowing how much effort a project will take, how much time it will take, and how many resources it will require.

    3.2 Lack of accuracy

    However good the initial requirements capture is, statistics say that early effort-, size- and schedule-estimates will have a very low accuracy (Rapid Development):

    • at product definition time: the margin of error is +/- 75%
    • after requirements specification: +/- 30% (probably as close as you'll get before the client signs the contract)
    • by the end of the simple design stage: +/- 15%
    • after detailed design: +/- 5%

    Hence we should be honest with clients and give them ranged estimates such as ``this will take between 3 and 6 months, probably 4; we will know more precisely in a month's time when the design is complete''. We should take a `cooperative' tone when talking to the client: ``we are in this together; not having more precise estimates is good for neither of us, but as soon as we know, we will tell you''. Telling the `estimation story' is an important part of the client relationship (Rapid Development p. 172, 198, 201, 325). Rapid Development also gives advice on Principled negotiation (p. 222, 503) and Theory-W management (p. 559). Principled negotiation focuses on interests (not positions), separating problems from people, using objective criteria, and finding options for mutual gain. Theory-W management, which is based on principled negotiation, consists of making ``everyone a winner'': identify the stake-holders and their win criteria, then find win-win conditions where everybody wins (rather than I win - you lose).

    Programmers consistently underestimate, typically by 20% to 30%. They should still always be asked for estimates, as they know best, but with their margin of error kept in mind.

    Finally, resources and feature set never match at the beginning of a project: clients always want more than they are ready to pay for. Both will therefore have to be revised over time so that they converge by the end of the project (or, hopefully, before that).

    3.3 Improving accuracy

    A lot of the inaccuracy of estimates stems from the estimator's ill-founded beliefs:

    • pure wishful thinking (``this usually takes 3 months but I'm sure we can do it in 2'')
    • trust in so-called silver bullets: ``XXX increases productivity by 50%''. This never happens, and it is much better to set up a `tool group', i.e. a group of people solely in charge of finding and testing potentially useful new devices and tools.

    The requirements should be drawn up with strong client involvement. The client is the one who can say what he wants, even if it takes him time to say it. Rapid Development insists on this customer-oriented development (cf. Joint Application Development, p. 449). We should remember to cut down on specifications early (as the client will always want too much).

    There are various techniques for information gathering:

    • structured analysis, data-structured analysis, OO analysis
    • interviews
    • diagrams: class/dataflow/ERD
    • prototyping (particularly for the user-interface)
    • tools, such as Requisite, DOORS, and RTM (see Death March p. 141)

    The requirements specification report can/should include:

    • a short paper specification (10p.), including both general description and detailed requirements
    • user-interface prototypes
    • paper storyboards
    • a product theme (what it is all about)

    Rapid Development gives a general estimation methodology. First, estimate the project size: either in function points (probably too complex and not very reliable) or in man-months/days. Then, from the size, estimate the cost-optimal effort (team size) and schedule (duration); Rapid Development gives a formula and 3 tables for this (p. 183). For example, a 20 man-month project should be done in 8 months by 2 people. Don't forget to always give results in the ``between 3 and 6 months, probably 4'' format, or using quarters (``by 3Q01''). And always account for extra tasks: testing, integration, showing staged releases to the client, modifications, manager time, time on the phone, etc.
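    The size-to-schedule step can be sketched in code. This is a minimal sketch, assuming the widely quoted rule of thumb schedule = 3.0 x effort^(1/3) (months from man-months); the coefficient and exponent here are illustrative defaults, since Rapid Development's tables vary them by project type:

```python
def schedule_estimate(effort_man_months, coefficient=3.0):
    """Rule-of-thumb schedule (months) and team size from effort.

    months = coefficient * effort ** (1/3); the 3.0 default is a commonly
    quoted value, not the exact figure from Rapid Development's tables.
    """
    months = coefficient * effort_man_months ** (1 / 3)
    team_size = effort_man_months / months
    return months, team_size

months, team = schedule_estimate(20)
# A 20 man-month project comes out at roughly 8 months with a team of
# about 2.5 people, in line with the "8 months by 2 people" example.
```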

    For the NEC ecommerce project, what we did was to account for all the coding which was quantifiable in man-days first, then double it because of testing, then add a 2-week buffer at the end of the project for integration purpose, and finally add an estimated manager time (e.g. half a person over the whole project).
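    That buffering approach can be sketched as follows; the 60 man-day figure and the parameter defaults are invented for illustration, not actual NEC numbers:

```python
def project_estimate(coding_man_days, integration_buffer_days=10,
                     manager_fraction=0.5):
    """Sketch of the approach above: double the quantifiable coding
    estimate to cover testing, add a fixed integration buffer, then add
    manager time as a fraction of the result (half a person here)."""
    dev_days = coding_man_days * 2 + integration_buffer_days
    return dev_days * (1 + manager_fraction)

# 60 man-days of raw coding grows to (60*2 + 10) * 1.5 = 195 man-days.
```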

    3.4 Negotiating the costs

    There are 3 dimensions to the negotiation process (Rapid Development): Schedule, Cost, and Product. All three of them have to be balanced. If a client chooses 2 of them, we have to tell them what the 3rd has to be.

    Treating time and cost as interchangeable is a mistake. For a given effort (i.e. a number of man-months), there is an optimal schedule at which cost is minimal (i.e. an optimal team size). Doing the same project in less or more time will make it more expensive.

    Don't hesitate to ask the question: is time really the issue, or is it something else? Planning to an (over-)tight schedule won't make things happen more quickly - rather the opposite: the project will take longer in the end, and be more costly (in money, people, everything). We have to find acceptable trade-offs. Death March gives a few negotiating games (p. 80). And if the terms are still not right... don't hesitate to walk away. Or to resign...

    3.5 Evolving the requirements

    Requirements should be revised and made more accurate over time, in a supervised way, usually by the project manager via a weekly requirements review with associated cost and schedule reviews.

    The client will always want some feature changes. These can be minimised by:

    1. a strong user involvement in the requirements and design phases,
    2. designing in a flexible way, without making too many assumptions,
    3. resisting those changes which would lead to feature creep and delays.

    Any change should be made official, agreed, and planned/budgeted for by a `change board' via the weekly requirements review. Moreover, the `master' requirements/design documents should be kept up to date.

    In the event of a schedule slip, estimate the final slip by multiplying the current slip by the ratio of the whole project to the part already completed: if after 2 months of a 6-month project we are 2 weeks late, we will end up 6 weeks late. Don't think otherwise.
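    The extrapolation is a one-liner; the units just have to be consistent:

```python
def projected_final_slip(current_slip, elapsed, total):
    """Scale the observed slip by how much of the project remains unseen:
    final slip = current slip * (total length / elapsed length)."""
    return current_slip * (total / elapsed)

# 2 weeks late, 2 months into a 6-month project -> 6 weeks late overall.
```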

    Besides, use the principle of triage: categorise features by priority. Some features are must-do, others should-do, could-do, etc. Or use a numeric scale.
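    A minimal sketch of triage as a sortable scale; the feature names and categories are invented for illustration:

```python
# Map MoSCoW-style categories to a numeric scale so features can be
# sorted, and cuts can start from the bottom of the list.
PRIORITY = {"must": 0, "should": 1, "could": 2}

features = [
    ("export to CSV", "could"),
    ("user login", "must"),
    ("email notifications", "should"),
]

triaged = sorted(features, key=lambda f: PRIORITY[f[1]])
# "user login" comes first; "export to CSV" is the first candidate to cut.
```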


    4 Risk Management

    The whole chapter 5 of Rapid Development deals with risk management. Here are some classic risks:

    • feature creep, gold-plating
    • low quality of design/coding
    • overly optimistic schedule, silver bullet
    • too much research
    • weak personnel
    • use of contractors: risky, so we should always keep tight control over them. This extends to any third-party reliance (graphic designers, SSL certificates, ecommerce merchants, etc.)
    • user-programmer frictions

    Risk management consists of both risk assessment (identification, analysis and prioritisation) and risk control (risk-management planning, resolution, monitoring). A list of risks should be drawn up from the start of the project. It should include all potential risks, their probability of occurring, the size of the potential loss (e.g. in man-weeks), and hence the risk exposure (in weeks), as well as ways to prevent and/or tackle them.

    The list should be prioritised, and not simply by exposure. For example, a risk with a 4-month penalty, however unlikely, will come top if the project deadline is of utmost importance and only 2 months away.
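    A minimal sketch of such a list, with exposure = probability x loss; the risks and figures are invented, and (as noted above) raw exposure should not be the only sort key:

```python
def exposure(risk):
    """Risk exposure in weeks: probability of occurring times loss."""
    return risk["probability"] * risk["loss_weeks"]

risks = [
    {"name": "contractor failure", "probability": 0.3, "loss_weeks": 10},
    {"name": "feature creep", "probability": 0.8, "loss_weeks": 3},
    {"name": "key staff leaves", "probability": 0.1, "loss_weeks": 16},
]

# Weekly Top 10 review: sort by exposure, then adjust by hand for risks
# the deadline simply cannot absorb.
top_risks = sorted(risks, key=exposure, reverse=True)
```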

    The list should be reviewed regularly, for example in the form of a weekly Top 10.

    aD had a tiny risk list for ACS 4.0, they didn't seem to take it too seriously.

    A final note: it is important to plan for risks - not to eliminate them. Companies using, e.g., the `Capability Maturity Model' Big-M Methodology end up avoiding risky projects, which are usually the most profitable ones (Peopleware). Better to take on risks - as research or internal projects if need be - and always to be aware of them and budget accordingly.


    5 Classic mistakes

    Rapid Development gives a complete list of classic mistakes to avoid (p. 49). We should learn the most important ones, and go through this list regularly during the whole length of a project.

    People-related mistakes:
    1. Undermined motivation
    2. Weak personnel
    3. Uncontrolled problem employees
    4. Heroics
    5. Adding people to a late project
    6. Noisy, crowded offices
    7. Friction between developers and customers
    8. Unrealistic expectations
    9. Lack of effective project sponsorship
    10. Lack of stake-holder buy-in
    11. Lack of user-input
    12. Politics placed over substance
    13. Wishful thinking

    Process-related mistakes:
    1. Overly optimistic schedules
    2. Insufficient risk management
    3. Contractor failure
    4. Insufficient planning
    5. Abandonment of planning under pressure
    6. Wasted time during the fuzzy front end
    7. Short-changed quality assurance
    8. Insufficient management controls
    9. Premature or overly frequent convergence
    10. Omitting necessary tasks from estimates
    11. Planning to catch up later
    12. Code-like-hell programming

    Product-related mistakes:
    1. Requirements gold-plating
    2. Feature creep
    3. Developer gold-plating
    4. Push-me, pull-me negotiation
    5. Research-oriented development

    Technology-related mistakes:
    1. Silver-bullet syndrome
    2. Overestimated savings from new tools or methods
    3. Switching tools in the middle of a project
    4. Lack of automated source-code control


    6 Team management

    6.1 Individuals

    All books insist on the role of motivation. Rapid Development adds that the motivations of programmers differ from those of other employees (e.g. managers): personal life and achievement matter much more.

    Most books describe overtime as being too common. Peopleware has long sections on workaholics and overtime; the general point is that both are dangerous, for individuals as well as for projects as a whole. The problem with workaholism: ``the loss of a good person is not worth it''. The problems with overtime: the negative aspects can be substantial (errors, burn-out, accelerated turnover, compensatory under-time), plus teamicidal repercussions on otherwise healthy work groups. Finally, overtime should always be tracked: otherwise it will ruin schedule estimates.

    At the same time, Peopleware develops the idea of `intrapreneurs': the (very few) members of staff who are so good on their own that they are left to define their own tasks, and who, whatever they decide to do, always end up benefiting the company.

    Finally, companies should help people retrain completely if they want or need to. All books insist on the overwhelming cost of losing a member of staff and having to replace him/her (anywhere from 3 months' to a year's salary).

    6.2 Teams

    Rapid Development gives guidelines for team selection:

    • get the top talent, not just available people
    • match people to their (new) job
    • give room for everybody's career progression
    • balance the team (complementary without disruption)
    • eliminate misfits
    • keep teams small

    Rapid Development also gives a list of team structures (p. 304), and Death March a list of team roles (i.e. roles of individuals within a team, p. 115). An important aspect of all this is that each team should have at its head a project manager / technical lead pair. The roles are distinct and should be held by different people: the technical lead looks after the project's development from start to finish, while the manager takes care of client relations before, during and after the project.

    Once a team is formed, it should be given one main aim, one priority, one goal towards which all efforts should be focused. During the progress of the project, individuals should be given smaller-scale targets, such as a weekly aim, or even a few daily mini-milestones.

    Good communication between team members should be promoted. Managers should trust people (and show it), particularly within the team they manage. `Someone you don't trust is of no use'.

    Managers should also try to re-inject small amounts of constructive disorder into an activity which is getting more and more ordered and rigid (Peopleware). Examples: pilot projects (but don't experiment with more than one aspect at once), war games, brainstorming (with no evaluation of proposed ideas - that's for later), provocative training experiences, trips, conferences, etc.

    Hopefully new teams will end up developing an identity of their own, or even `jell' (Peopleware). A jelled team is a group of people who work together easily, feel unique and possibly superior to others, develop their own jargon and private jokes - who feel like a team. They should have a cult of quality, and enjoy the feeling of `closure' (the end of a *completed* project). And they cannot be controlled in any way, so managers shouldn't try.

    Finally, Peopleware (again) explains that the adaptivity of a company as a whole is usually based on its strong middle management, i.e. whether middle-managers cooperate towards improvement.

    6.3 Environment

    Peopleware comments largely on office environment:

    • privacy will increase productivity
    • so will silence
    • and fewer interruptions
    • and larger dedicated space

    Bad symptoms: when people hide out to work, when they prefer to work from home or anywhere else than in the office, when ``nothing can be done here between 9 and 5''.

    On interruptions: it takes 15 minutes for someone to get into `flow' mode, where work can be done efficiently. Any interruption (phone call, someone talking, loudspeaker, alarm) will cost you that much. Don't count how many hours you spent working every day; count how many uninterrupted hours you managed, and don't be surprised if it is 0 at first. The ratio of the latter to the former varies from 0.10 to 0.40 between organisations. For one, I changed my cute `you've got mail' animation on my desktop to a very lame text-only thingy - I can ignore it much more easily, and it doesn't interrupt me anymore.
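    Peopleware calls this ratio the E-factor (environmental factor); computing it is trivial, collecting honest numbers is the hard part:

```python
def environment_factor(uninterrupted_hours, body_present_hours):
    """Peopleware's E-factor: uninterrupted hours / body-present hours."""
    return uninterrupted_hours / body_present_hours

# A 40-hour week with only 8 hours of real flow gives 0.2, inside the
# 0.10-0.40 range quoted above.
```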

    On music: people can do our kind of work fairly easily while listening to music, because listening to music and doing our kind of work don't use the same side of the brain. But experiments showed that listening to music kills creativity (or nearly). Don't expect to come up with as many time-saving smart design solutions if there's some music around (Peopleware).

    On office space: let people who fit together put their desks near each other's. Let people organise themselves (organically). Peopleware quotes C. Alexander's work (architectural patterns, The Timeless Way of Building, etc.) and mentions 4 more patterns:

    • tailor the workspace around working groups
    • give windows to everybody
    • provide outdoor work space
    • provide both public space (nearby the entrance) and private space


    7 Development methodology

    Here we're speaking small-m methodology - none of that Big-M standardisation madness (Peopleware).

    7.1 Development strategy

    No-one should really use the rigid waterfall methodology (write the requirements, then stick to them blindly). Rather, use more flexible methodologies where requirements are refined and changed within a constrained process. The most suitable methodology depends on the situation. Examples:

    • the spiral model: a succession of longer and longer cycles, for which you give yourself a strict target (such as requirements or design validation through a prototype)
    • evolutionary prototyping: create an initial prototype, which is then refined and refined and ... until it's complete and released
    • staged delivery: go through requirements and design, then create a first deliverable (i.e. a product which is actually delivered to the client). Then improve this version into a second deliverable, a third one, and so on.
    • evolutionary delivery: similar to staged delivery, except that after each delivery you actually take into account customer feedback to refine and create the next version.

    All these methodologies sound similar, some are delivery-focused, others more prototype-focused. The focus may change during the development process, more prototype-focused at the beginning and delivery-focused at the end. The user/client involvement may change as well, from being feature-oriented to being bug-oriented.

    Rapid Development gives a table which can help choose the most suitable methodology according to various criteria. I ran it for one of our projects; it confirmed first impressions by ranking spiral and evolutionary delivery as the best methodologies, but it also suggested `design-to-tools' (a methodology where the product is limited to what the toolkit can do).

    7.2 Development management

    Death March names a few formal processes such as ISO9000 and SEI (p. 144/147). But Peopleware reminds us to steer clear of Big-M Methodologies.

    In terms of tools, Death March mentions Estimacs (Computer Associates), Checkpoint (Software Productivity Research) and Slim (Quantitative Software Management). aD was using MS Project 2000 at the beginning of the ACS 4.0 project, but stopped after a couple of weeks and moved to Dev-Tracker (by Abante, who don't advertise it on their own website). We should find out whether there is such a tool for Linux, one which outputs HTML if possible.

    It seems all methodologies/tools revolve around the same principle: draw up a list of tasks, each with a duration estimate, a staff assignee, and a percentage of completion (plus how much time it took to get there). With such a table it is possible to know which tasks are on target, whether there is already any delay, etc. And don't forget that overtime should be included.
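    A minimal sketch of such a task table; the field names and figures are assumptions for illustration:

```python
# Each task carries an estimate, an assignee, completion and time spent,
# so slippage can be derived per task (overtime should be logged in
# spent_days too, as noted above).
tasks = [
    {"task": "schema design", "assignee": "anna", "estimate_days": 5,
     "done_pct": 100, "spent_days": 7},
    {"task": "import scripts", "assignee": "ben", "estimate_days": 10,
     "done_pct": 40, "spent_days": 6},
]

def projected_days(t):
    """Extrapolate total cost from time spent and completion so far."""
    return t["spent_days"] * 100 / t["done_pct"]

overruns = [t["task"] for t in tasks
            if t["done_pct"] and projected_days(t) > t["estimate_days"]]
# Both tasks project over their estimates (7 > 5 and 15 > 10).
```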

    Besides having assigned tasks, developers can also be given milestones. Some advocate mini-milestones (a few every day); at the least, it is important to have weekly targets.

    The manager should spend some time every week re-evaluating the project:

    • assess the new requirement changes
    • ask programmers to reevaluate the design
    • re-assess the effort, cost and schedule estimates
    • update the risk list
    • re-assess the win conditions (cf. Theory-W management)
    • identify the win conditions for the next iteration and set targets

    This should maintain high manager visibility, and high client visibility (be customer-focused!).

    Never think that some things are not measurable (Peopleware). Gilb's Law: anything you need to quantify can be measured in some way that is superior to not measuring it at all. This has a cost, of course. And keep managing whatever happens - particularly if the project is not doing well.

    7.3 Development practices

    Using a flexible design will mean less rework. Quality is a means to higher productivity (Peopleware). It has an initial cost, but is cheaper in the end.

    At the same time, it is important to reduce developer gold-plating and to avoid research work within a tightly-scheduled project. Resist client requests for changes in requirements, and cut down on the specifications where possible.

    All other comments on development practices boil down to one thing: testing. Testing, testing, testing, testing and more testing:

    • for large projects, a daily ``build and smoke test'' should be set up (even aD does it for ACS 4.0). It helps get rid of bugs early and is an incentive not to write them in the first place. Penalise the programmer who breaks the build. At the very least, a weekly delivery to the client for client-side evaluation and testing should be made.
    • analyse bug statistics and fix error prone modules first, or get rid of them and replace them.
    • fixing a bug early can be 50% to 100% cheaper, so we should keep an eye on those ticket trackers.
    • make weekly reviews of design by the programmers: with a few days' perspective, they can simplify and clean up things pretty well (as well as keep informed of changes).
    • make regular code review/inspection sessions. These are basically time spent reading someone else's code, and happen to be extremely efficient (Rapid Development). aD does it.
    • write test scripts which perform API and HTTP calls, and run them periodically (Rapid Development). aD uses eTester and a Quality Assurance System (in which at least one test case should be recorded for each requirement) for regression testing: an ACS test harness for each Tcl script, utPLSQL for each PL/SQL procedure and function, and eTester for the user-interface aspect.
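    A smoke-test script in this spirit might look like the sketch below (Python rather than the Tcl/eTester tooling above; the base URL and paths are hypothetical):

```python
from urllib.request import urlopen

def smoke_test(base_url, paths):
    """Fetch each path and return those that did not answer HTTP 200."""
    failures = []
    for path in paths:
        try:
            status = urlopen(base_url + path).status
        except Exception:  # connection refused, timeout, HTTP error...
            status = None
        if status != 200:
            failures.append(path)
    return failures

# Run periodically, e.g. from the daily build:
# smoke_test("http://localhost:8000", ["/", "/register", "/search"])
```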

    One final comment concerns the programming tool set. According to Rapid Development, it should be kept minimal: this lets programmers see which tools are really needed before introducing them. Sufficient learning time should be planned before using a new tool on real projects.

    7.4 Coding practices

    Code complete gives substantial information on coding practices (I haven't read it yet though). I think we should also come up with a list of specific coding practices for each toolkit we use.

    Some suggestions for the ACS:

    • each programmer should use his own server
    • each programmer can make his own personal subdir for personal files
    • pure Tcl scripts shouldn't display any HTML, and should only be used as transitions between adp pages
    • limit as much as possible the use of ns_sets as function parameters/result variables, as they are not native to Tcl, clunky to use, slow, and often a sign of a badly-defined interface (I know we can argue about this, but I stand my ground).
    • ban ns_puts within loops which lock a database handle (append it to a string instead and write the whole string outside the loop)
    • draw control flow representations of all the tcl and adp pages
    • put sufficient comments at the top of each file (incl., maybe, where it should be called from)
    • draw ERD diagrams of all existing and new tables
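    The ns_puts point above, sketched in Python for illustration: accumulate the output in a buffer inside the (hypothetical) result loop and write once afterwards, rather than performing many small writes while a database handle is held:

```python
def render_rows(rows):
    """Build the whole fragment in memory; the single write happens
    outside the loop, after the (imagined) database handle is released."""
    parts = []
    for name, price in rows:  # pretend this cursor locks a db handle
        parts.append(f"<li>{name}: {price}</li>")
    return "<ul>" + "".join(parts) + "</ul>"

html = render_rows([("apple", 10), ("pear", 12)])
```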