What Programmers Want

Most people who have been assigned the unfortunate task of managing programmers have no idea how to motivate them. They believe that the small perks (such as foosball tables) and bonuses that work in more relaxed settings will compensate for more severe hindrances like distracting work environments, low autonomy, poor tools, unreasonable deadlines, and pointless projects. They’re wrong. Yet this is one of the most important things to get right, for two reasons. The first is that programmer output is the product of several factors– fit with tools and project, skill and experience, talent, group cohesion, and motivation– each of which can swing impact by a factor of two or more, and motivation is the one a manager can actually influence. The second is that measuring individual performance among software engineers is very hard. I would say that it’s almost impossible and, in practical terms, economically infeasible. Why do I call it infeasible rather than merely difficult? Because the only people who can reliably measure individual performance in software are so good that it’s almost never worth their time to do so. If the best engineers have time to spend with their juniors, it’s more worthwhile to have them mentoring the others than measuring them– a task they would resent anyway, and one in which their interests align with the employees rather than with the company trying to perform the measurement.

Seasoned programmers can tell very quickly which candidates are smart, capable, and skilled– the all-day technical interviews characteristic of the most selective companies achieve that– but individual performance on the job is almost impossible to assess. Software is too complex for management to reliably separate bad environmental factors and bad projects from bad employees, much less willful underperformance from no-fault lack of fit. So measurement-based HR policies add noise and piss people off but achieve very little, because the measurements on which they rely are impossible to make with any accuracy. This means that the only effective strategy is to motivate engineers, because attempting to measure and control performance after the fact won’t work.

Traditional, intimidation-based management backfires in technology. To see why, consider the difference between 19th-century commodity labor and 21st-century technological work. For commodity labor, there’s a level of output one can expect from a good-faith, hard-working employee of average capability: the standard. Index that to the number 100. There are some who might run at 150-200, but often they are cutting corners or working in unsafe ways, so the best overall performers might produce 125. (If the standard were so low that people could risklessly achieve 150, the company would raise the standard.) The slackers will go all the way down to zero if they can get away with it. In this world, one slacker cancels out four good employees, and intimidation-based management– which annoys the high performers and reduces their effectiveness, but brings the slackers in line, having a performance-middling effect across the board– can often work. Intimidation can pay off, because more is gained by bullying the slacker up to mediocrity than is lost by irritating the best. Technology is different. The best software engineers are not at 125 or even 200, but at 500, 1000, and in some cases, 5000+. Also, their response to a negative environment isn’t mere performance middling. They leave. Engineers don’t object to the firing of genuine problem employees (we end up having to clean up their messes) but typical HR junk science (stack ranking, enforced firing percentages, transfer blocks against no-fault lack-of-fit employees) disgusts us. It’s mean-spirited and it’s not how we like to do things. Intimidation doesn’t work, because we’ll quit. Intrinsic motivation is the only option.

Bonuses rarely motivate engineers either, because the bonuses offered to entice engineers to put up with undesirable circumstances are often, quite frankly, two or three orders of magnitude too low. We value interesting work more than a few thousand dollars, and there are economic reasons for doing so. First, we understand that bad projects entail a wide variety of risks. Even when our work isn’t intellectually stimulating, it’s still often difficult, and unrewarding but difficult work can lead to burnout. Undesirable projects often carry a 20%-per-year attrition rate, counting firings, health-based departures, project failure leading to loss of status, and plain loss of motivation to continue. A $5,000 bonus doesn’t come close to compensating for a 20% chance of losing one’s job in a year. Additionally, there are the career issues associated with taking low-quality work. Engineers who don’t keep current lose ground, and this becomes even more of a problem with age. Software engineers are acutely aware of the need to establish themselves as demonstrably excellent before the age of 40, at which point mediocre engineers (and a great engineer becomes mediocre after too much mediocre experience) start to see their prospects fade.

The truth is that typical HR mechanisms don’t work at all in motivating software engineers. Small bonuses won’t convince them to work differently, and firing middling performers (as opposed to the few who are actively toxic) to instill fear will drive out the best, who will flee the cultural fallout of the firings. There is no way around it: the only thing that will bring peak performance out of programmers is to actually make them happy to go to work. So what do software engineers need?

The approach I’m going to take is based on timeframes. Consider, as an aside, people’s needs for rest and time off. People need breaks at work– say, 10 minutes every two hours. They also need 2 to 4 hours of leisure time each day. They need 2 to 3 days per week off entirely. They need (but, sadly, don’t often get) 4 to 6 weeks of vacation per year. And ideally, they’d have sabbaticals– a year off every 7 or so to focus on something different from the assigned work. There’s a fractal, self-similar nature to people’s need for rest and refreshment, and these needs for breaks tap into Maslovian needs: biological ones for the short-timeframe breaks and higher, holistic needs pertaining to the longer timeframes. I’m going to assert that something similar exists with regard to motivation, and examine six timeframes: minutes, hours, days, weeks, months, and years.

1. O(minutes): Flow

This may be the most important. Flow is a state of consciousness characterized by intense focus on a challenging problem. It’s a large part of what makes, for example, games enjoyable. It impels us toward productive activities, such as writing, cooking, exercise, and programming. It’s something we need for deep-seated psychological reasons, and when people don’t get it, they tend to become bored, bitter, and neurotic. A word of warning: while flow can be productive, it can also be destructive if directed toward the wrong purposes. Gambling and video game addictions are probably reinforced, in part, by the anxiolytic properties of the flow state. In general, however, flow is a good thing, and the ability to get into flow and work is the only thing that makes office existence bearable for most people.

Programming is all about flow. That’s a major part of why we do it. Software engineers get their best work done in a state of flow, but unfortunately, flow isn’t common for most. I would say that the median software engineer spends 15 to 120 minutes per week in a state of flow, and some never get into it at all. The rest of the time is lost to meetings, interruptions, breakages caused by inappropriate tools or bad legacy code, and context switches. Even a short interruption can easily cause a 30-minute loss. I’ve seen managers harass engineers with frequent status pings (2 to 4 per day), resulting in a collapse of productivity (which leads to more aggressive management, creating a death spiral). The typical office environment, in truth, is quite hostile to flow.
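To put rough numbers on that, here is a back-of-envelope sketch. The ping count and the 30-minute recovery figure are the rough estimates from the paragraph above, not measurements.

```python
# Back-of-envelope cost of status pings, using the essay's own rough figures
# (2 to 4 pings per day, ~30 minutes of lost flow per interruption).
PINGS_PER_DAY = 3          # midpoint of the "2 to 4 per day" range
RECOVERY_MINUTES = 30      # approximate flow lost per interruption
WORKDAY_MINUTES = 8 * 60

lost = PINGS_PER_DAY * RECOVERY_MINUTES
print(f"{lost} minutes lost per day, "
      f"roughly {100 * lost / WORKDAY_MINUTES:.0f}% of an 8-hour day")
# Output: 90 minutes lost per day, roughly 19% of an 8-hour day
```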

To achieve flow, engineers have to trust their environment. They have to believe that (barring an issue of very high priority and emergency) they won’t be interrupted by managers or co-workers. They need to have faith that their tools won’t break and that their environment hasn’t changed in a catastrophic way due to a mistake or an ill-advised change by programmers responsible for another component. If they’re “on alert”, they won’t get into flow and won’t be very productive. Since most office cultures grew up in the early 20th century when the going philosophy was that workers had to be intimidated or they would slack off, the result is not much flow and low productivity.

What tasks encourage or break flow is a complex question. Debugging can break flow, or it can be flowful. I enjoy it (and can maintain flow) when the debugging is teaching me something new about a system I care about (especially if it’s my own). It’s rare, though, that an engineer can achieve flow while maintaining badly written code, which is a major reason why engineers tend to prefer new development over maintenance. Trying to understand bad software (and most in-house corporate software is terrible) creates a lot of “pinging” for the unfortunate person who has to absorb several disparate contexts in order to make sense of what’s going on. Reading good code is like reading a well-written academic paper: an opportunity to see how a problem was solved, with some obvious effort put into the presentation and aesthetics of the solution. It’s actually quite enjoyable. Reading bad code, on the other hand, is like reading 100 paragraphs, each clipped from a different source. There’s no coherence or aesthetic, and the flow-inducing “click” (or “aha” experience) that occurs when a person connects two concepts almost never happens. The problem with reading code is that, although good code is educational, very little is learned from reading bad code aside from the parochial idiosyncrasies of a badly-designed system, and there’s a hell of a lot of bad code out there.

Perhaps surprisingly, whether a programmer can achieve “flow”, which will influence her minute-by-minute happiness, has almost nothing to do with the macroscopic interestingness of the project or company’s mission. Programmers, left alone, can achieve flow and be happy writing the sorts of enterprise business apps that they’re “supposed” to hate. And if their environment is so broken that flow is impossible, the most interesting, challenging, or “sexy” work won’t change that. Once, I saw someone leave an otherwise ideal machine learning quant job because of “boredom”, and I’m pretty sure his boredom had nothing to do with the quality of the work (which was enviable) but with the extremely noisy environment of a trading desk.

This also explains why “snobby” elite programmers tend to hate IDEs, the Windows operating system, and anything that forces them to use the mouse when key-combinations would suffice. Using the mouse and fiddling with windows can break flow. Keyboarding doesn’t. Of course, there are times when the mouse and GUI are superior. Web surfing is one example, and writing blog posts (WordPress instead of emacs) is another. Programming, on the other hand, is done using the keyboard, not drag-and-drop menus. The latter are a major distraction.

2. O(hours): Feedback

Flow is the essence here, but what keeps the flow going? The environmental needs are discussed above, but some kinds of work are more conducive to flow than others. People need a quick feedback cycle. One of the reasons “data science” and machine learning projects are so highly desired is that they provide a lot of feedback– objective feedback, on a daily basis– in comparison to enterprise projects, which are developed in one world over months (with no real-world feedback) and released in another. You can run your algorithms against real data and watch your search strategies unfold in front of you while your residual sum of squares (the error) decreases.
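As a concrete illustration of that kind of feedback loop, here is a minimal sketch (the data, learning rate, and step count are invented for the example): fit a toy linear model by gradient descent and watch the residual sum of squares fall every few iterations.

```python
# A toy example of a tight, objective feedback loop: fit a linear model by
# gradient descent and watch the residual sum of squares (RSS) shrink.
# The data, learning rate, and step count are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=200)

w = np.zeros(3)
learning_rate = 0.1
for step in range(1, 51):
    residuals = X @ w - y
    rss = float(residuals @ residuals)
    if step % 10 == 1:
        print(f"step {step:2d}  RSS = {rss:10.4f}")
    gradient = 2 * X.T @ residuals / len(y)
    w -= learning_rate * gradient
```

Each printed line is a small, objective “you’re getting closer” signal, delivered in seconds rather than at the end of a months-long release cycle.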

Feedback needs to be objective or positive in order to keep people enthusiastic about their work. Positive feedback is always pleasant, so long as it’s meaningful. Objective, negative feedback can be useful as well. For example, debugging can be fun, because it points out one’s own mistakes and enables a person to improve the program. The same holds for problems that turn out to be more difficult than originally expected: it’s painful, but something is learned. What never works well is subjective, negative feedback (such as bad performance reviews, or aggressive micromanagement that shows a lack of trust). That pisses people off.

I think it should go without saying that this style of feedback can’t be explicitly provided on an hourly basis, because it’s unreasonable to expect managers to do that much work (or an employee to put up with such a high frequency of managerial interruption). So the feedback has to come organically from the work itself, which means there need to be genuine challenges and achievements involved. Most of this feedback is “passive”, by which I mean there is nothing the company or manager does to inject the feedback into the process. The engineer’s experience of completing the work provides the feedback itself.

One source of frustration and negative feedback that I consider subjective (and therefore place in that “negative, subjective feedback that pisses people off” category) is the jarring experience of working with badly-designed software. Good software is easy to use and makes the user feel more intelligent. Bad software is hard to use, often impossible to read at the source level, and makes the user or reader feel absolutely stupid. When you have this experience, it’s hard to tell if you are rejecting the ugly code (because it is distasteful) or if it is rejecting you (because you’re not smart enough to understand it). Well, I would say that it doesn’t matter. If the code “rejects” competent developers, it’s shitty code. Fix it.

The “feedback rate” is at the heart of many language debates. High-productivity languages like Python, Scala, and Clojure allow programmers to implement significant functionality in mere hours. On my best projects, I’ve written 500 lines of good code in a day (by the corporate standard, that’s about two months of an engineer’s time). That provides a lot of feedback very quickly and establishes a virtuous cycle: feedback leads to engagement, which leads to flow, which leads to productivity, which leads to more feedback. With lower-level languages like C and Java– which are sometimes the absolute right tools for one’s problem, especially when tight control of performance is needed– macroscopic progress is usually a lot slower. This isn’t an issue if the performance metric the programmer cares about lives at a lower level (e.g. speed of execution, limited memory use) and the tools available to her give good indications of her success. Then there is enough feedback. There’s nothing innate that makes Clojure more “flow-ful” than C; it’s just more rewarding to use Clojure if one is focused on macroscopic development, while C is more rewarding (in fact, probably the only appropriate language) when one is focused on a class of performance concerns that requires attention to low-level details. The problem is that when people use inappropriate tools (e.g. C++ for complex, but not latency-sensitive, web applications) they can’t get useful, timely feedback about the performance of their solutions.

Feedback is at the heart of the “gameification” obsession that has grown up of late, but in my opinion, it should be unnecessary. “Gameification” feels, to me, like an after-the-fact patch, if not an apology, offered when fundamental changes are necessary. The problem, in the workplace, is that these “game” mechanisms often evolve into high-stakes performance measurements. Then there is too much anxiety for the “gameified” workplace to be fun.

In Java culture, the feedback issue is a severe problem, because development is often slow and the tools and culture tend to sterilize the development process by eliminating that “cosmic horror” (which elite programmers prefer) known as the command line. While IDEs do a great job of reducing flow-breakage that occurs for those unfortunate enough to be maintaining others’ code, they also create a world in which the engineers are alienated from computation and problem-solving. They don’t compile, build, or run code; they tweak pieces of giant systems that run far away in production and are supported by whoever drew the short straw and became “the 3:00 am guy”.

IDEs have some major benefits but some severe drawbacks. They’re good to the extent that they allow people to read code without breaking flow; they’re bad to the extent that they tend to require use patterns that break flow. The best solution, in my opinion, to the IDE problem is to have a read-only IDE served on the web. Engineers write code using a real editor, work at the command line so they are actually using a computer instead of an app, and do almost all of their work in a keyboard-driven environment. However, when they need to navigate others’ code in volume, the surfing (and possibly debugging) capabilities offered by IDEs should be available to them.

3. O(days): Progress

Flow and feedback are nice, but in the long term, programmers need to feel like they’re accomplishing something, or they’ll get bored. The feedback should show continual improvement and mastery. The day is the scale at which programmers want to see genuine improvement. The same task shouldn’t be repeated more than a couple of times: if it’s dull, automate it away. If the work environment is so constrained and slow that a programmer can’t log, on average, one meaningful accomplishment (feature added, bug fixed) per day, something is seriously wrong. (Of course, most corporate programmers would be thrilled to fix one bug per week.)

The day-by-day level and the need for a sense of progress are where managers and engineers start to understand each other. Both want to see progress on a daily basis, so there’s a meeting point there. Unfortunately, managers have a tendency to pursue this in a counterproductive way, often inadvertently creating a Heisenberg problem (observation corrupts the observed) in their insistence on visibility into progress. I think the increasing prevalence of Jira, for example, is dangerous, because fine-grained managerial oversight creates anxiety and makes flow impossible. I also think that most “agile” practices do more harm than good, and that much of the “scrum” movement is flat-out stupid. I don’t think it’s good for managers to expect detailed progress reports on a daily basis (a daily standup focused on blockers is probably fine)– that’s too much overhead and flow breakage– but this is the cycle at which engineers tend to audit themselves, and they won’t be happy in an environment where they end the day not feeling that they worked a day.

4. O(weeks): Support

Progress is good, but as programmers, we tend toward a trait that the rest of the world sees only in the moody and blue: “depressive realism”. It’s as strong in the mentally healthy among us as in the larger-than-baseline share of us who have mental health issues– and for us, it’s not depressive. Managers are told every day how awesome they are by their subordinates, despite the fact that more than half of the managers in the world are useless. We, on the other hand, have subordinates (computers) that frequently tell us that we fucked up by giving them nonsensical instructions. “Fix this shit because I can’t compile it.” We tend to have an uncanny (by business standards) sense of our own limitations. We also know (on the topic of progress) that we’ll have good days and bad days. We’ll have weeks where we don’t accomplish anything measurable because (a) we were “blocked”, needing someone else to complete work before we could continue, (b) we had to deal with horrendous legacy code or maintenance work– massive productivity destroyers– or (c) the problem we’re trying to solve is extremely hard, or even impossible, and it took us a long time to evaluate the problem fairly and reach that conclusion.

Programmers want an environment that removes work-stopping issues, or “blockers”, and that gives them the benefit of the doubt. Engineers want the counterproductive among them to be mentored or terminated– the really bad ones just have to be let go– but they won’t show any loyalty to a manager if they perceive that he’d give them grief over a slow month. This is why so-called “performance improvement plans” (PIPs)– a bad idea in any case– are disastrous failures with engineers. Even the language is offensive, because it asserts with certainty that an observed productivity problem is a performance problem and not something else (and most corporate engineers have productivity problems, because most corporate software environments are utterly broken and hostile to productivity). An engineer will not give one iota of loyalty to a manager who doesn’t give her the benefit of the doubt.

I choose “weeks” as the order of magnitude for this need because that’s roughly how often an engineer can be expected to hit a blocker, and removing blockers is one thing engineers genuinely need from their managers: resolution of work-stopping issues that may require additional resources or (in rare cases) managerial intervention. That frequency, however, can vary dramatically.

5. O(months): Career Development

This is one that gets a bit sensitive. It becomes crucial on the order of months, which is much sooner than most employers would like to see their subordinates insisting on career advancement. But as programmers, we know we’re worth an order of magnitude more than we’re paid, and we expect to be compensated through our employers investing in our long-term career interests. This is probably the most important of the six items listed here.

Programmers face a job market that’s unusually meritocratic when changing jobs. Within companies, the promotion process is just as political and bizarre as it is for any other profession, but when looking for a new job, programmers are evaluated not on their past job titles and corporate associations, but on what they actually know. This is quite a good thing overall, because it means we can get promotions and raises (often having to change corporate allegiance in order to do so, but that’s a minor cost) just by learning things, but it also makes for an environment that doesn’t allow for intellectual stagnation. Yet most of the work that software engineers have to do is not very educational and, if done for too long, that sort of work leads in the wrong direction.

When programmers say about their jobs, “I’m not learning”, what they often mean is, “The work I am getting hurts my career.” Most employees in most jobs are trained to start asking for career advancement at 18 months, and to speak softly over the first 36. Most people can afford one to three years of dues paying. Programmers can’t. Programmers, if they see a project that can help their career and that is useful to the firm, expect the right to work on it right away. That rubs a lot of managers the wrong way, but it shouldn’t, because it’s a natural reaction to a career environment that requires actual skill and knowledge. In most companies, there really isn’t a required competence for leadership positions, so seniority is the deciding factor. Engineering couldn’t be more different, and the lifetime cost of two years’ dues-paying can be several hundred thousand dollars.

In software, good projects tend to beget good projects, and bad projects beget more crap work. People are quickly typecast to a level of competence based on what they’ve done, and they have a hard time upgrading, even if their level of ability is above what they’ve been assigned. People who do well on grunt work get more of it, people who do poorly get flushed out, and those who manage their performance precisely to the median can get ahead, but only if managers don’t figure out what they’re doing. As engineers, we understand the career dynamic very well, and quickly become resentful of management that isn’t taking this to heart. We’ll do an unpleasant project now and then– we understand that grungy jobs need to be done sometimes– but we expect to be compensated (promotion, visible recognition, better projects) for doing it. Most managers think they can get an undesirable project done just by threatening to fire someone if the work isn’t done, and that results in adverse selection. Good engineers leave, while bad engineers stay, suffer, and do it– but poorly.

Career-wise, the audit frequency for the best engineers is about 2 months. In most careers, people progress by putting in time, being seen, and gradually winning others’ trust, and actual skill growth is tertiary. That’s not true for us, or at least, not in the same way. We can’t afford to spend years paying dues while not learning anything. That will put us one or two full technology stacks behind the curve with respect to the outside world.

There’s a tension employees face between internal (within a company) and external (within their industry) career optimization. Paying dues is an internal optimization– it makes the people near you like you more, and therefore more likely to offer favors in the future– but confers almost no external benefit. It was worthwhile in the era of the paternalistic corporation, lifelong employment, and a huge stigma attached to changing jobs (much less getting fired) more than two or three times in one career. It makes much less sense now, so most people focus on the external game. Engineers who focus on the external objective are said to be “optimizing for learning” (or, sometimes, “stealing an education” from the boss). There are several advantages to focusing on the external game. First, external career advancement is not zero-sum– while jockeying internally for scarce leadership positions is– and what we do is innately cooperative; it works better with the type of people we are. Second, our average job tenure is about 2 to 3 years, too short for internal dues-paying to pay off. Third, people who suffer and pay dues are usually passed over anyway in favor of more skilled candidates from outside. Our industry has figured out that it needs skilled people more than it needs reliable dues-payers (and it’s probably right). This explains, in my view, why software engineers are so aggressive and insistent about optimizing for learning.

There is a solution for this, and although it seems radical, I’m convinced that it’s the only thing that actually works: open allocation. If programmers are allowed to choose the projects best suited to their skills and aspirations, the deep conflict of interest that otherwise exists among their work, their careers, and their educational aspirations will disappear.

6. O(years): Macroscopic Goals

On the timescale of years, macroscopic goals become important. Money and networking opportunities are major concerns here. So are artistic and business visions. Some engineers want to create the world’s best video game, solve hard mathematical problems, or improve the technological ecosystem. Others want to retire early or build a network that will set them up for life.

Many startups lead with “change the world” macroscopic pitches about how their product will connect people, “disrupt” a hated industry, democratize a utility, or achieve some other world-changing ambition. This makes great marketing copy for recruiters, but it doesn’t motivate people on a day-to-day basis. On a year-by-year basis, none of that marketing matters, because people will actually know the character of the organization after that much time. That said, the actual macroscopic character of a business, and the meaning of its work, matter a great deal. Over years and decades, they determine whether people will stick around once they develop the credibility, connections, and resources that would let them move on to something more lucrative, more interesting, or of higher status.

How to win

It’s conventional wisdom in software that hiring the best engineers is an arbitrage, because they’re 10 times as effective but only twice as costly. This is only true if they’re motivated, and if they’re put to work that unlocks their talent. If you assign a great engineer to mediocre work, you’re going to lose money. Software companies put an enormous amount of effort into “collecting” talent, but do a shoddy job of using or keeping it. Often, this is justified in the name of a “tough culture”: turnover is framed as a reflection of failing employees rather than a bad environment. In the long term, this is ruinous. The payoff from top talent compounds with the time and effort put into attracting, retaining, and improving it.

Now that I’ve discussed what engineers need from their work environments in order to remain motivated, the next question is what a company should do. There isn’t a one-size-fits-all managerial solution to this. In most cases, the general best move is to reduce managerial control and to empower engineers: to set up an open-allocation work environment in which technical choices and project direction are set by engineers, and to direct from leadership rather than mere authority. This may seem “radical” in contrast to the typical corporate environment, but it’s the only thing that works.

The Great Discouragement, and how to escape it

I’ve recently taken an interest in the concept of the technological “Singularity”, referring to the acceleration of economic growth and social change brought about by escalating technological progress, and the potential for extreme growth (thousands of times faster than what exists now) in the future. People sometimes use “exponential” to refer to fast growth, but the reality is that (a) exponential curves do not always grow fast, and (b) economic growth to this point has actually been faster than exponential.

Life is estimated to be nearly 4 billion years old, but sexual reproduction and multicellular life are only about a billion years old. In other words, for most of its time in existence, life was relatively primitive, and growth itself was slow. Organisms could reproduce quickly, but they died just as fast, and the overall change was minimal. This was true until the Cambrian Explosion, about 530 million years ago, when it accelerated. Evolution has been speeding up over time. If we represent “growth” in terms such as energy capture, energy efficiency, and neural complexity, we see that biological evolution has a faster-than-exponential “hockey stick” growth pattern. Growth was very slow for a long time, then the rate sped up.

One might model pre-Cambrian life’s growth rate at below 0.0000001% per year (note: these numbers are all rough estimates), but by the age of animals it was closer to 0.000001% per year, or a doubling (of neural sophistication) every 70 million years or so, and several times faster than that in the primate era. Late in the age of animals, creatures such as birds and mammals could adapt rapidly, taking appreciably different forms in a mere few hundred thousand years. With the advent of tools and especially language (which had effects on assortative mating, and created culture) the growth rate, now factoring in culture and organization as well as evolutionary changes, skyrocketed to a blazing 0.00001% per year in the age of hominids. Then came modern humans.

Data on the economic growth of human society paint a similar picture: accelerating exponential growth. Pre-agricultural humans plodded along at about 0.0004% per year (still an order of magnitude faster than evolutionary change) and with the emergence of agriculture around 10,000 B.C.E., that rate sped up, again, to 0.006% per year. This fostered the growth of urban, literate civilization (around 3000 B.C.E.) and that boosted the growth rate to a whopping 0.1% per year, which was the prevailing economic growth rate for the world up until the Renaissance (1400 C.E.).

This level of growth– a doubling every 700 years– is rapid by the standards of most of the Earth’s history. It’s so obscenely fast that many animal and plant species have, unfortunately, been unable to adapt. They’re gone forever, and there’s a credible risk that we do ourselves in as well (although I find that unlikely). Agricultural humans increased their range by miles per year and increased the earth’s carrying capacity by orders of magnitude. Despite this progress, such a rate would be invisible to the people living in this 4,400-year span. No one had the global picture, and human lives aren’t long enough for anyone to have seen the underlying trend of progress, as opposed to the much more severe, local ups and downs. Tribes wiped each other out. Empires rose and fell. Religions were born, died, and were forgotten. Civilizations that grew too fast faced enemies (such as China, which likely would have undergone the Industrial Revolution in the 13th century had it not been susceptible to Mongol invasions). Finally, economic growth that occurred in this era was often absorbed entirely (and then some) by population growth. A convincing case can be made that the average person’s quality of life changed very little from 10000 B.C.E. to 1800 C.E., when economic growth began (for the first time) to outpace population growth.

In the 15th to 17th centuries, growth accelerated to about 0.3 percent per year: triple the baseline agricultural rate. In the 18th century, with the early stages of the Industrial Revolution, the Age of Reason, and the advent of rational government (as observed in the American experiment and French Revolution) it was 0.8 percent per year. By this point, progress was visible. Whether this advancement is desirable has never been without controversy, but by the 18th century, that it was occurring was without question. At that rate of progress, one would see a doubling of the gross world product in a long human life.

Even Malthus, the archetypical futurist pessimist, observed progress in 1798, but he made the mistake of assuming agrarian productivity to be a linear function of time, while correctly observing population growth to be exponential. In fact, economic growth has always been exponential: it was just a very slow (at that time, about 1% per year) exponential function that looked linear. On the other hand, his insight– that population growth would outpace food production capacity, leading to disaster– would have been correct, had the Industrial Revolution (then in its infancy) not accelerated. (Malthusian catastrophes are very common in history.) The gross world product increased more than six-fold in the 19th century, rising at a rate of 1.8 percent per year. Over the 20th, it continued to accelerate, with economic growth at its highest in the 1960s, at 5.7 percent per year– or a doubling every 150 months. We’re now a society that describes lower-than-average but positive growth as a “recession”.
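As a quick sanity check on the doubling times quoted here: under steady exponential growth at annual rate r, the doubling time is ln 2 / r. The two figures below correspond to the rates cited above; the rounding is mine.

```latex
% Doubling time under steady exponential growth at rate r.
\[
  T_{\text{double}} = \frac{\ln 2}{r}, \qquad
  \frac{\ln 2}{0.001} \approx 693 \text{ years (0.1\% per year)}, \qquad
  \frac{\ln 2}{0.057} \approx 12.2 \text{ years} \approx 146 \text{ months (5.7\% per year)}.
\]
```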

In that sense, we’re also “in decline”. We’ve stopped growing at anything near our 1960s peak rate. We’re now plodding along at about 4.2 percent per year, if the last three decades are any indication. Most countries in the developed world would be happy to grow at half that rate.

The above numbers, and the rapid increase in the growth rate itself, describe the data behind the concept of “The Singularity”. Exponential growth emerges as a consequence of the differential equation dy/dt = a * y, whose solution is an exponential function. Logistic growth is derived from the related equation dy/dt = a * y * (1 - y/L), where L is an upper limit or “carrying capacity”. Such limitations always exist, but I think that, with regard to economic growth, the limit is very far away– far enough away that we can ignore it for now. However, what we’ve observed is much faster than exponential growth, since the growth rate itself seems to be accelerating (also at a faster-than-exponential rate). So what is the correct way to model it?

One class of models for such a phenomenon is derived from the differential equation dy/dt = a*y^(1+b), where b > 0. The solution (a power law in time) is of the form y = C*(D - t)^(-1/b), so as t -> D, y becomes infinite. Hence, the name “Singularity”. No one actually believes that economic progress will become literally infinite, but that is a point at which it is assumed we will land comfortably in a post-scarcity, indefinite-lifespan existence. These two concepts are intimately connected; I would consider them identical. Time is the only truly scarce resource in the life of a person who is middle-class or better, and it is extremely scarce as long as our lifespans are short compared to the complexity of the modern world (a person only gets to have one or two careers). Additionally, if people live “forever” (by which I mean millions of years, if they wish) then there will be an easy response to not being able to afford something: wait until you can. There will still be differences in status among post-scarcity people (some being at the end of a five-year waiting list for lunar tourism, and the richest paying a premium for the prestige of having human servants) and probably some people will care deeply about them, but on the whole, I think these differences will be trivial and people will (over time) develop an immunity to the emotional problems of extreme abundance.
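For concreteness, here is a short worked sketch of the three models mentioned above; the derivations are standard separation of variables, and the constants are set by the initial conditions.

```latex
% Exponential growth: constant relative growth rate a.
\[ \frac{dy}{dt} = a\,y \;\Longrightarrow\; y(t) = y_0\, e^{a t} \]

% Logistic growth: exponential at first, flattening out near the carrying capacity L.
\[ \frac{dy}{dt} = a\,y\left(1 - \frac{y}{L}\right)
   \;\Longrightarrow\;
   y(t) = \frac{L}{1 + \left(L/y_0 - 1\right) e^{-a t}} \]

% Super-exponential growth: the relative growth rate itself rises with y,
% and the solution blows up in finite time at t = D (the "singularity").
\[ \frac{dy}{dt} = a\,y^{\,1+b},\; b > 0
   \;\Longrightarrow\;
   y(t) = C\,(D - t)^{-1/b}, \qquad C = (a b)^{-1/b} \]
```

The third form is the one that produces a finite-time blowup: the closer t gets to D, the faster y grows, which is why such fitted models “predict” infinite output at a finite date.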

I should note that there are also dystopian Singularity possibilities, such as in The Matrix, in which machines become sentient and overthrow humans. I find this extremely far-fetched, because most artificial intelligence (to date) is still human intelligence applied to difficult statistical problems. We use machines to do things that we’re bad at, like multiply huge matrices in fractions of a second, and analyze game trees at 40-ply depth. I don’t see machines becoming “like us” because we’ll never have a need for them to be so. We’ll replicate functionality we want in order to solve menial tasks (with an increasingly sophisticated category of tasks being considered “menial”) but we won’t replicate the difficult behaviors and needs of humans. I don’t think we’ll fall into the trap of creating a “strong AI” that overthrows us. Sad to say it, but we’ve been quite skilled, over the millennia, at dehumanizing humans (slavery) in the attempt to make ideal workers. The upshot of this is that we’re unlikely to go to the other extreme and attempt to humanize machines. We’ll make them extremely good at performing our grunt work and leave the “human” stuff to ourselves.

Also, I don’t think a “Singularity” (in the sense of infinite growth) is likely, because I don’t think the model that produces a singularity is correct. I think that economic and technical growth are accelerating, and that we may see a post-scarcity, age-less world as early as 2100. That said, the data show deceleration over the past 50 years (from 5-6 percent to 3-4 percent annual growth), so rather than rocketing toward such a world, we seem to be coasting. I would be willing to call the past 40 years, in the developed world, an era of malaise and cultural decline. It’s the Great Discouragement, culminating in a decade (the 2000s) of severe sociological contraction– despite economic growth in the middle years– that ended with a nightmare recession. What’s going on?

Roughly speaking, I think we can examine, and classify, historical periods by their growth rate, like so:

  • Evolutionary (below 0.0001% per year): 3.6 billion to 1 million BCE. Modern humans not yet on the scene.
  • Pre-Holocene (0.0001% to 0.01% per year): 1 million to 10,000 BCE.
  • Agrarian (0.01 to 1.0% per year): 10,000 BCE to 1800 CE. Most of written human history occurred during this time. Growth was slower than population increase, hence frequent Malthusian conflict. Most labor was coerced.
  • Industrial (1.0 to 10.0% per year): 1800 CE to Present. Following the advent of rational government, increasing scientific literacy, and the curtailment of religious authority, production processes could be measured and improved at rapid rates. Coercive slavery was replaced by semi-coercive wage labor.
  • Technological (10.0 to 100.0+% per year): Future. This rate of growth hasn’t been observed in the world economy as a whole, ever, but we’re seeing it in technology already (Moore’s Law, cost of genome sequencing, data growth, scientific advances). We’re coming into a time when things that were once the domain of wizardry (read: impossible), such as reading other people’s dreams, can now be done. In the technological world, labor will be non-coercive, because the labor of highly motivated people is going to be worth 10 to 100 times more than that of poorly motivated people.

Each of these ages has a certain mentality that prospers in it, and that characterizes successful leadership in such a time. In the agrarian era, the world was approximately zero-sum, and the only way for a person to become rich was to enslave others and capture their labor, or kill them and take their resources. In the early industrial era, growth became real, but not fast enough to accommodate peoples’ material ambitions, creating a sense of continuing necessity for hierarchy, intimidation, and injustice in the working world. In a truly technological era (which we have not yet entered) the work will be so meaningful and rewarding (materially and subjectively) that such control structures won’t be necessary.

In essence, these economic eras diverge radically in their attitudes toward work. Agrarian-era leaders, if they wanted to be rich, could only do so by controlling more people. Kings and warlords were assessed on the size of their armies, chattel, and harems. Industrial-era leaders focused on improving mechanical processes and gaining control of capital. They ended slavery in favor of a freer arrangement, and workplace conditions improved somewhat, but were still coarse. Technological-era leadership doesn’t exist yet, in most of the world, but its focus seems to be on the deployment of human creativity to solve novel problems. In the technological world, a motivated and happy worker isn’t 25 or 50 percent more productive than an average one, but 10 times as effective. As one era evolves into the next, the leadership of the old one proves extremely ineffective.

The clergy and kings of antiquity were quite effective rulers in a world where almost no one could afford books, land was the most important form of wealth, and people needed a literate, historically-aware authority to direct them over what to do with it. Those in authority had a deep understanding of the limitations of the world and the slow rate of human progress: much slower than population growth. They knew that life was pretty close to a zero-sum struggle, and much of religion focuses on humanity’s attempts to come to terms with such a nasty reality. These leaders also knew, in a macabre way, how to handle such a world: control reproduction, gain dominion over land through force, use religion to influence the culture and justify land “ownership”, and curtail population growth in small-scale massacres called “wars” instead of suffering famines or revolutions.

People like Johannes Gutenberg, Martin Luther, John Locke, Adam Smith, and Voltaire, who came late in the agrarian era, changed all that. Books became affordable to middle-class Europeans, and the Reformation followed within a century. This culminated in the philosophical movement known as The Enlightenment, in which Europe and North America disavowed rule based on “divine right” or heredity and began applying principles of science and philosophy to all areas of life. By 1750, the world had become one in which the clerics and landlords of the agrarian era were terrible leaders. They didn’t know the first thing about the industrial world that was appearing right in front of them. Over the next couple hundred years, they were either violently overthrown (as in France) or allowed to decline gracefully out of influence (as in England).

The best political, economic, and scientific minds of that era could see a world growing at industrial rates that had never been seen before. The landowning dinosaurs of the agrarian era died out or lost power. This was not always an attractive picture, of course. One of the foremost conflicts between an industrial and an agrarian society was the American Civil War, an extremely traumatic conflict for both sides. Then there were the nightmarish World Wars of the early 20th century, which established that industrial societies can still be immensely barbaric. That said, the mentalities underlying these wars were not novel, and it wasn’t the industrial era that caused them so much as pre-industrial mentalities combining with industrial power, with very dangerous results.

For example, before Nazism inflamed it, racism in Germany was (although hideous) not unusual by European or world standards, then or at any point up to then. In fact, it was a normal attitude in England, the United States, Japan, and probably all of the other nation-states that were forming around that time. Racism, although I would argue it to be objectively immoral in any era, was a natural byproduct of a world whose leaders saw it necessary, for millennia, to justify dispossession, enslavement, and massacre of strangers. What the 1940s taught us, in an extreme way, is that this hangover from pre-industrial humanity, an execrable pocket of non-Reason that had persisted into industrial time, could not be accepted.

The First Enlightenment began when leading philosophers and statesmen realized that industrial rates of growth were possible in a still mostly agrarian world, and they began to work toward the sort of world in which science and reason could reign. Now we have an industrial economy, but our world is still philosophically, culturally and rationally illiterate, even in the leading ranks. Still, we live on the beginning fringe of what might be (although it is too early to tell) a “Second Enlightenment”. We now have an increasing number of technological thinkers in science and academia. We see such thinking on forums like Hacker News, Quora, and some corners of Reddit. It’s “nerd culture”. However, by and large, the world is still run by industrial minds (and the mentality underlying American religious conservatism is distinctly pre-industrial). This is the malaise that top computer programmers face in their day jobs. They have the talent and inclination to work to turn $1.00 into $2.00 on difficult, “sexy” problems (such as machine learning, bioinformatics, and the sociological problems solved by many startups) but they work for companies and managers that have spent decades perfecting the boring, reliable processes that turn $1.00 into $1.04, and I would guess that this is the kind of work with which 90% of our best technical minds are engaged: boring business bullshit instead of the high-potential R&D work that can actually change the world. The corporate world still thinks in industrial (not technological) terms, and it always will. It’s an industrial-era institution, as much as baronies and totalitarian religion are agrarian-era beasts.

Modern “nerd culture” began in the late 1940s when the U.S. government and various corporations began funding basic research and ambitious engineering and scientific projects. This produced immense prosperity, rapid growth, and an era of optimism and peace. It enabled us to land a man on the moon in 1969. (We haven’t been back since 1972.) It built Silicon Valley. It looked like the transition from industrial to technological society (with 10+ percent annual economic growth) was underway. An American in 1969 might have perceived that the Second Enlightenment was underway, with the Civil Rights Act, enormous amounts of government funding for scientific research, and a society whose leaders were, by and large, focused on ending poverty.

Then… something happened. We forgot where we came from. We took for granted the great infrastructure that a previous generation had built, and let it decay. As the memory of the Gilded Age (brought to us by a parasitic elite) and the Great Depression faded, elitism became sexy again. Woodstock, Civil Rights, NASA and “the rising tide that lifts all boats” gave way to Studio 54 and the Reagan Era. Basic research was cut for its lack of short-term profit, and because the “take charge” executives (read: demented simians) that raided their companies couldn’t understand what those people did all day. (They talk about math over their two-hour lunches? They can’t be doing anything important! Fire ’em all!) Academia melted down entirely, with tenure-track jobs becoming very scarce. America lost its collective vision entirely. The 2001 vision of flying cars and robot maids for all was replaced with a shallow and nihilistic individual vision: get as rich as you can, so you have a goddamn lifeboat when this place burns the fuck down.

The United States entered the post-war era as an industrial leader. It rebuilt Europe and Japan after the war, lifted millions out of poverty, made a concerted (if still woefully incomplete) effort to end its own racism, and had enormous technical accomplishments. Yet now it’s in a disgraceful state, with people dying of preventable illnesses because they lack health insurance, and business innovation stagnant except in a few “star cities” with enormous costs of living, where the only thing that can get funded are curious but inconsequential sociological experiments. Funding for basic research has collapsed, and the political environment has veered to the far right wing. Barack Obama– who clearly has a Second Enlightenment era mind, if a conservative one in such a frame– has done an admirable job of fighting this trend (and he’s accomplished far more than his detractors, on the left and right, give him credit for) but one man alone cannot hold back the waterfall. The 2008 recession may have been the nadir of the Great Discouragement, or the trough may still be ahead of us. Right now, it’s too early to tell. We’re clearly not out of the mess, however.

How do we escape the Great Discouragement? To put it simply, we need different leadership. If the titans of our world and our time are people who can do no better than to turn $1.00 into $1.04, then we can’t expect more of them. If we let such people dominate our politics, then we’ll have a mediocre world. This is why we need the Second Enlightenment. The First brought us the idea of rational government: authority coming from laws and structure rather than charismatic personalities, heredity, or religious claims. In the developed world, it worked! We don’t have an oppressive government in the United States. (We may have an inefficient one, and we have some very irrational politicians, but the system is shockingly robust when one considers the kinds of charismatic morons who are voted into power on a fairly regular basis.) To the extent that the U.S. government is failing, it’s because the system has been corrupted by the unchecked corporate power that has stepped into the power vacuum created by a limited, libertarian government. Solving the nation’s economic and sociological problems, and the cultural residue associated with a lack of available, affordable education, will take us a long way toward fixing the political issues we have.

The Second Enlightenment will focus on a rational economy and a fair society. We need to apply scientific thought and philosophy to these domains, just as we did for politics in the 1700s when we got rid of our kings and vicars. I don’t know what the solution will end up looking like. Neither pure socialism nor pure capitalism will do: the “right answer” is very likely to be a hybrid of the two. It is clear to me, at least in outline, what conditions this achievement will require. We’ll have to eliminate the effects of inherited wealth, accumulated social connection, and the extreme and bizarre tyranny of geography in determining a person’s economic fortune. We’ll have to dismantle the current corporate elite outright; no question on that one. Industrial corporations will still exist, just as agrarian institutions do, but the obscene power held by these well-connected bureaucrats, whose jobs involve no production, will have to disappear. Just as we ended the concept of a king’s “divine right” to rule, turning such people into mere figureheads, we’ll have to do the same with corporate “executives” and their similarly baseless claims to leadership.

We had the right ideas in the Age of Reason, and the victories from that time benefit us to this day, but we have to keep fighting to keep the lights on. If we begin to work at this, we might see post-scarcity humanity in a few generations. If we don’t, we risk driving headlong into another dark age.

Competing to excel vs. competing to suffer

One of the more emotionally charged concepts in our society is competition. Even the word evokes strong feelings, some positive and others adverse. For some, the association is an impressive athletic or intellectual feat encouraged by a contest. For others, the image is one of congestion, scarcity, and degeneracy. The question I intend to examine is: Is competition good or bad? (The obvious answer is, “it depends.” So the real question is, “on what?”)

In economics, competition is regarded as an absolute necessity, and any Time Warner Cable customer will attest to the evils of monopolies: poor service, high costs, and an overall dismal situation that seems unlikely to improve. (I would argue that a monopoly situation has competition: between the sole supplier and the rest of the world. Ending the monopoly doesn’t “add” competition, but makes more fair the competition that already exists intrinsically.) Competition between firms is generally seen as better than any alternative. Competition within firms is generally regarded as corrosive, although this viewpoint isn’t without controversy.

It’s easy to find evidence that competition can be incredibly destructive. Moreover, competition in the general sense is inevitable. In a “non-competitive” business arrangement such as a monopoly or monopsony, competition is very much in force: just a very unfair variety of it. Is competition ever, however, intrinsically good? To answer this, it’s important to examine two drastically different kinds of competition: competing to excel, and competing to suffer.

Competition to excel is about doing something extremely well: possibly better than it has ever been done before. It’s not about beating the other guy. It’s about performing so well that very few people can reach that level. Was Shakespeare motivated by being better than a specific rival, or doing his own thing? Almost certainly, it was the latter. This style of competition can focus people toward goals that they might otherwise not see. When it exists, it can be a powerful motivator.

In a competition to excel, people describe the emotional frame as “competing against oneself” and enter a state comparable to a long-term analogue of flow. Any rivalries become tertiary concerns. This doesn’t mean that people in competitions to excel never care about relative performance. Everyone would rather be first place than second, so they do care about relative standing, even if absolute performance is given more weight. However, in a competition to excel, you’d rarely see someone take an action that deliberately harms other players. That would be bad sportsmanship: so far outside the spirit of the game that most would consider it cheating.

Competition to suffer is about absorbing more pain and making more sacrifices, or creating the appearance of superior sacrifice. It’s about being the last person to leave the office, even when there isn’t meaningful work left to do. It’s about taking on gnarly tasks with a wider smile on one’s face than the other guy. These contests become senseless wars of attrition. In the working world, sacrifice-oriented competitions tend to encourage an enormous amount of cheating, because people can’t realistically absorb that much pain and still perform at a decent level at basic tasks. With very few exceptions, these contests encourage a lot of bad behavior and are horrible for society.

What’s most common in the corporate world? In most companies, the people who advance are the ones who (a) visibly participate in shared suffering, (b) accept subordination the most easily, and (c) retain acceptable performance under the heaviest load (rather than those who perform best under a humane one). People are measured, in most work environments, on their decline curves (taken as a proxy for reliability) rather than their capability. So the corporate ladder is, for the most part, a suffering-oriented competition, not an excellence-oriented one. People wonder why we get so few creative, moral, or innovative people in the upper ranks of large corporations. This is why. The selection process is biased against them.

People who do well in one style of competition tend to perform poorly in the other, and the salient trait is context-sensitivity. Highly context-sensitive people, whose performance is strongly correlated with interest in their work, freedom from managerial damage, and overall health, tend to be the most creative and capable of hitting high notes: they win at excellence-oriented contests, but they fail in the long, pointless slogs of corporate suffering contests. People with low context-sensitivity tend to be the last ones standing in suffering-oriented competitions, but they fail when excellence is required. Corporations are configured in such a way that they load up on the latter type in the upper ranks. Highly context-sensitive, creative people are decried as “not a team player” when their motivation drops (as if it were a conscious choice, when it’s probably a neurological effect) due to an environmental malady.

Suffering-oriented competitions focus on reliability and appearance: how attractively a person can do easy, stupid things. John’s TPS reports are as good as anyone else’s, but he has to be reminded to use the new cover sheet. Tom does his TPS reports with a smile on his face. Tom gets the promotion. Excellence-oriented competitions have a much higher potential payoff. In the workplace, excellence-oriented environments have an R&D flavor: highly autonomous and implicitly trusted workers, and a long-term focus. After the short-sighted and mean-spirited cost-cutting of the past few decades, much of which has targeted R&D departments, there isn’t much excellence-oriented work left in the corporate world.

As businesses become risk-averse, they grow to favor reliability and conformity over creativity and excellence, which are intermittent (and therefore riskier) in nature. Suffering-oriented competitions dominate. Is this good for these companies? I don’t think so. Even in software, the most innovative sector right now, companies struggle so much at nurturing internal creativity that they feel forced to “acq-hire” mediocre startups at exorbitant prices in order to compensate for their own defective internal environments.

The other problem with suffering-oriented competitions is that they’re much easier to cheat at, and antisocial behavior is more common. Excellence can’t be faked, and the best players inspire the others. People are encouraged to learn from the superior players, rather than trying to destroy them. In sacrifice-oriented competitions (in the corporate world, usually centered on perceived effort and conformity) the game frequently devolves into an arrangement where people spend more time trying to trip each other up, and to avoid being tripped, than actually working.

Related to this topic, one of the more interesting financial theories is the Efficient Market Hypothesis. It’s not, of course, literally true. (Arbitrage is quite possible, for those with the computational resources.) It is, however, very close to being true: it provides reliable, excellent approximations of relationships between tradeable securities. At its heart, though, EMH isn’t about financial markets. It’s about competition itself, and about who the prime movers are in a contest. Fair prices do not require all (possibly competing) parties to be maximally informed about the security being traded (since that’s obviously not the case). One well-informed participant (a market-maker) with enough liquidity is often enough to set prices at the fair level based on current knowledge. Competitions, in other words, tend to be dominated by a small set of players: the “prime movers”. Who dominates? Excellence-oriented competitions are dominated by the best: the most skilled, capable, talented, or energetic. Suffering-oriented competitions tend to be dominated by the stupidest, and by “stupidest” I mean something that has nothing to do with intelligence but, rather, “willing to take the shittiest deal”.
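To make the market-maker claim concrete, here is a minimal toy simulation in Python (a sketch of my own, not a model from the finance literature; the fair value, spread, and noise-trader behavior are all invented for illustration). One participant who knows the fair value and quotes a tight bid/ask around it is enough to keep executed prices near that value, even though every other trader submits pure noise:

import random

random.seed(0)

FAIR_VALUE = 100.0   # the true value, known only to the market maker
HALF_SPREAD = 0.05   # the market maker quotes FAIR_VALUE +/- HALF_SPREAD
N_ORDERS = 10_000    # number of noise-trader orders to simulate

def market_maker_quote():
    # The single informed, liquid participant: always a tight band around fair value.
    return FAIR_VALUE - HALF_SPREAD, FAIR_VALUE + HALF_SPREAD

def noise_order():
    # An uninformed trader: random side, limit price off by as much as 20%.
    side = random.choice(("buy", "sell"))
    limit = FAIR_VALUE * random.uniform(0.8, 1.2)
    return side, limit

fills = []
for _ in range(N_ORDERS):
    bid, ask = market_maker_quote()
    side, limit = noise_order()
    if side == "buy" and limit >= ask:
        fills.append(ask)    # the trader lifts the market maker's offer
    elif side == "sell" and limit <= bid:
        fills.append(bid)    # the trader hits the market maker's bid

average = sum(fills) / len(fills)
print(f"{len(fills)} trades executed, average price {average:.3f} "
      f"(fair value {FAIR_VALUE})")

No individual noise trader has any idea what the security is worth, yet the average transaction price lands within a few cents of fair value. That is the sense in which a contest is dominated by its prime movers: one well-informed player sets the terms for everyone else.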

That said, in the real world, the Gervais Principle applies. The stupidest (the Clueless) are a force, but those who can manipulate the competition from outside (i.e. cheat) tend to be the actual winners. They are the Sociopaths. The sociopaths shift blame, take credit, and seem to be the most reliable, best corporate citizens by holding up (socially and intellectually) under immense strain. The reality is that they aren’t suffering at all. Even if they can’t get a managerial role (and they usually can) they will find some way to delegate that shit. They win suffering-oriented competitions by externalizing the suffering, and remain socially pleasant and well-slept enough to take the rewards. So suffering-oriented competitions, if cheating is possible, aren’t really dominated by the stupidest, so much as by the slimiest.

Intuitively, people understand the difference between excellence- and suffering-oriented competition. Consider the controversy associated with doping in sports. Performance-enhancing drugs often have horrific long-term side effects. They take what would otherwise be a quintessential excellence-oriented competition and inject an element of inappropriate sacrifice: willingness to endure long-term health risks. The agent (the performance-enhancing drug) that turns an excellence competition into a sacrifice-oriented one must be disallowed. People have an emotional intuition that it’s cheating to use such a thing. Athletes are discouraged from destroying their long-term health in order to improve short-term performance, with the knowledge that allowing this behavior would require most or all of the top contestants to follow suit. But in the corporate world, no such ethics exist. Even six hours of physical inactivity (sitting at a desk) is bad for a person’s long-term health, but that’s remarkably common, and the use of performance-enhancing drugs that would not be required outside of the office context (such as benzodiazepine and stimulant overuse to compensate for the unhealthy environment, or after-hours “social” drinking) is widespread.

Why does corporate life so quickly devolve into competition to suffer? In truth, companies benefit little from these contests. Excellence has higher expected value than suffering. The issue is that companies don’t allow people to excel. It’s not that they flat-out forbid it, but that almost no one in a modern, so-called “lean” corporate environment has the autonomy that would make excellence even possible. R&D has been closed down, in most companies, for good. That leaves suffering as the single axis of competition. I think most people reading this know what kind of world we get out of this, and can see why it’s not acceptable.