Older blog entries for apenwarr (starting at number 83)

IBMese and People Hacking Revisited

The other day, talking to some people from IBM in Raleigh, I learned a bit about businessspeak. You know, the strange language involving "paradigm shifts" and "issues" and "core competencies" instead of the normal things that normal people talk about. I finally figured out what businessspeak is good for. Yes, I realize that I could be kicked off Advogato for saying that, but I'm going to tell you anyway.

As you might imagine, people from IBM know an awful lot about businessspeak. IBM is also massively rich, big, and powerful, so they're doing something that works, whatever it is. Here are some specific businessspeak lessons they taught me:

1. (You've probably heard of this one.) There is no such thing as a "problem." Problems are bad. Some people would say there are "issues," but that's still a bit negative sounding. The proper term is "challenges." Everyone has challenges. And challenges are, of course, good. What would your job be like without challenges?

2. (An impressive new discovery.) No product can be described as a "something something killer." This is only true after the something something has already been killed, at which time there's no real need to describe it that way. Instead, it can be a "something something fighter."

Why are these two examples interesting? Because I finally found the common theme: "non-arguability." I just made up that word, which in retrospect has about the same meaning as "non-controversial," but I didn't understand what non-controversial really meant until I thought of it this way. It's not that you don't argue with it because everyone agrees; it's that you can't argue with it because the person making the statement wins by default.

Try these:

1. "You know, marketing this software to schoolchildren in Africa, who have neither computers nor electricity, is going to be a real challenge." Even if you think it's possible through some incredible new space-age technique, nobody's ever going to deny that it's a challenge. Conversely, "You're going to have problems selling this software to schoolchildren in Africa" is inviting someone to explain how no, really, it's possible, and it might be hard but it won't be a problem.

2. "This thingy is going to be a real something something fighter!" The person saying that is only saying that your thingy is going to go up against the something something, which is always true in some respect; you're both in the same marketplace, and someone might buy one, the other, both, or neither. If they buy just one or the other, then I suppose one or the other won. Meanwhile, if you say, "This thingy is going to be a real something something killer!" you invite argument. Perhaps the something something will kill the thingy. Wouldn't you look silly then!

Okay, so why would you want to be non-controversial? Well, as an engineer, you wouldn't. That's why engineers who insist on calling their bugs "issues" drive me crazy. I'm sorry, but engineers don't gloss over problems: they state them as clearly as possible, and then they either solve the problem or agree that it's not worth solving.

But sales - and by extension, people hacking - is different. In that case, you're messing with someone's emotions with the goal of getting them to agree with you and eventually do something for you (eg. buy your stuff). And the biggest barrier to sales is (ironically?) defensiveness: the feeling that someone is trying to sell you something. Being non-controversial helps avoid making people defensive.

Here's where I'll add my own bit of spin. The most basic form of non-controversiality is the above: making all your statements more bland in order to keep the recipient from becoming defensive. The good news for salespeople is you actually can do this by memorizing a few simple words and techniques. Unfortunately, blandness also prevents your customers from becoming interested. In fact, controversy results in a lot of emotion, and emotion keeps people interested. Boring, neutral TV news shows don't get any viewers; controversial news shows on either end of the spectrum get lots of viewers.

So here's what you do. If you're really good at this, you can say only controversial things that the person listening will agree with instantly. You have to be really good to pull it off, because you need to really understand your listener. If you misread them and say the wrong thing, they get all defensive, and you're worse off than if they were only bored.

And then, if you're really really sneaky (or perhaps just dumb), you can do what I realized I've done for a long time: you can say mostly non-controversial things, then every now and then throw in a fake controversy. This is either something you can teach them about and they'll like in the end, or something that you can just turn out to be joking about. This is the "just seeing if you were listening!" technique, and used correctly, it can really make a difference in the interest level of your presentation, without causing defensiveness. NITIites who have seen my presentations will probably be able to remember some cases of this from me if they think back.

Now, if you agree with the things I've just said, see if you can find any instances of those techniques in the article I just wrote.

Travel Notes

After visiting Raleigh, NC, which smells nice, I now find myself in Portland, Oregon, home of the Transformers. Take that, mich!

Predictions and Promises

I have been pondering the Great Dichotomy between the two offices of NITI a lot lately. We have one office in Toronto (sales, marketing, support, manufacturing, etc), and another in Montreal (R&D), and their cultures are very different. Neither is exactly wrong, but people from one culture have a lot of trouble understanding the other.

I think I now at least partly understand the difference. And, surely to the joy of the more technical readers here, I can explain it in terms of task scheduling algorithms.

In Montreal we use a system called Schedulator that I originally "designed" (it has since been rewritten). It basically takes Joel Spolsky's Painless Software Schedules essay and automates it.

Schedulator, and Joel's original essay, is about schedule prediction. It helps you predict when your project is going to reach various milestones. Used under the right conditions, it can work very well.

Now what happens if you get some surprise bugs in an earlier release? Joel didn't say anything about that. Well, what happens is your schedule will slip, because high-priority tasks insert themselves in front of everything else. Schedulator manages this automatically, and each day you can look at a pretty graph of your project schedule and watch the end date slip further into the future.

Inside R&D, there are other mini-deadlines as well. We have this concept of a Zero Bug Bounce (yes, stolen directly from Microsoft), where the idea is that all tasks are either complete, found too recently to be reasonably fixed, or shovable into a future release. And it's someone's job to predict when the next bounce will be, then make sure you get done on time.

Hey, what's this "make sure" business? Schedulator updates its predictions in real time, right? So the bounce date might slip, but only for a good reason, right? So there's nothing we can do, right?

Sort of. The problem is, if the last release-critical bug just doesn't seem very important, something more important will always come up, the new thing will jump to the front of the schedule, and the bounce will never get done. Thus, we have to introduce a form of deadline-based prioritization as we get very close to a bounce date. In the deadline-based system, you convert your prediction into a promise: that is, Schedulator tells you, as of right now, when you think you can get it done. You add a bit of time, just in case. And then you promise that you will absolutely have it done by that time, no matter what.
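The conversion from prediction to promise can be sketched in a few lines. This is a hypothetical illustration, not Schedulator's actual logic; the task estimates, the working pace, and the 50% safety buffer are all invented numbers:

```python
# A hypothetical sketch of the prediction-to-promise step (task hours,
# working pace, and buffer are all invented; this is not Schedulator's code).
from datetime import date, timedelta

def predict_finish(remaining_hours, start, hours_per_day=5):
    """Predict a finish date by summing the remaining task estimates."""
    days_needed = -(-sum(remaining_hours) // hours_per_day)  # ceiling division
    return start + timedelta(days=days_needed)

def promise_date(prediction, start, buffer_factor=0.5):
    """Turn a prediction into a promise by adding a bit of safety time."""
    elapsed = (prediction - start).days
    return prediction + timedelta(days=int(elapsed * buffer_factor))

start = date(2005, 11, 1)
pred = predict_finish([10, 8, 12], start)  # 30 hours at 5 hours/day
prom = promise_date(pred, start)           # the date you commit to, no matter what
```

The point of the buffer is that the promise can survive a moderate amount of prediction slippage without being broken.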

The same overall method applies for software final release dates (sooner or later, you just stop fixing bugs - even important-seeming ones - in the old version so you can finally just finish the new one). This method feels wrong; we're raising the priority of obviously lower-priority tasks above obviously important ones. Developers want things to be simple; they like prediction, but they don't like keeping promises that force them to break their priority scheme.

Here's my big insight. Marketing people are exactly the opposite. They don't care about predictions at all. They only care about promises. You can predict bug fix times, bounce dates, or anything else, but in the end, they want you to commit to a release date, and they want to do their work, confident that you'll be done by the date you promised. Exactly what that date is isn't very important; that you guarantee it is what's critical, because otherwise they can't properly do their thing.

And Marketing-type people carry their concern for promises above prediction way too far. Joel's article, above, talks about how Microsoft Project is useful, but not for writing software. I now understand why: it's because the type of work is very different. Some tasks a salesperson might have to do - like arranging a meeting - can take two weeks, but only, say, 5% of their effort during that time. Software developers work mostly linearly (one 100% task at a time), while salespeople do a lot of tasks in parallel. So it's easy for a salesperson to promise, "I'll have this meeting set up within about two weeks, no problem." Even if something new and "more important" comes up, they don't delay the previous task; they just do one more thing in the same amount of time. But after a person gets sufficiently heavily loaded with 5% tasks, each new task does delay the work, the same way as it would for a developer. That's the exception, however, not the rule, because simply not overloading your salespeople can dodge the problem.

A couple of weeks ago, the CEO told me, "I think we need a Schedulator for the rest of the company." This was true, in a way: we have people in sales and marketing who are having trouble making and keeping their due date promises. Schedulator, in contrast, was created to help developers, who were having trouble making predictions. It doesn't help them keep promises (although better predictions are one necessary part of saner promises); in fact, unless you're careful, it gives them an excuse for breaking promises when the predictions change. The design we came up with for a "marketing schedulator" is actually totally different from the original Schedulator. It involves a central, top-down plan from Microsoft Project, and only three bits of information fed back from individuals: notes about each task, % complete, and a checkbox: "in danger of missing the due date." Despite your best intentions, you might still sometimes have to break a promise; but in that (hopefully rare) case, you have to check the checkbox and have the CEO get angry at you.

I suppose the ultimate Schedulator of the future could combine the two concepts. It would predict dates, then help you promise them by auto-increasing the priority and moving tasks around so that future predictions always leave your existing promises intact. Then everyone on both sides of the fence could use it. Meanwhile, I think we need two systems, which will be strangely perpendicular to each other.

19 Nov 2005 (updated 19 Nov 2005 at 22:55 UTC) »
Societal Interstructures and Bigness of Thought

Our company has been working lately at partnering with several different other "ISV" companies who want to use our product as a platform for their product. After being basically screwed around by one of the larger of those ISVs in the last few days (nothing really serious; just super annoying), I came to this conclusion:

    Trust people in your company, but don't expect to trust people in other companies.

And therein lies a very interesting lesson, which I will attempt to relate back to software via a mild digression in the opposite direction.

One of my pet theories of capitalism (or at least, the non-insane variant practiced in Canada) is that, unlike idealistic theories like communism or libertarianism, capitalism tries to combine the best of both worlds:

- It is impossible to centrally control an immensely complex system, like an entire country's economy, so we don't try. We implement a complex, mostly self-organizing system instead, by carefully controlling the rules of the game.

- Simpler systems, like small groups of people, are much more efficient when organized as cooperative, not competitive, groups with a centrally organized set of goals. That's why employees of any particular successful company aren't generally set in cutthroat competition with one another.

Why does it work? Because up to a certain size, a single mind can hold and optimize the entire structure. I don't mean just the person at the top of the pyramid; I mean that, in a proper organization, anyone can see and understand the structure they're working in. That means they can understand other people's goals and how those goals fit in with the big picture. When you can do that, you can resolve your differences of opinion by finding the one "right" non-compromise answer. In other words, you can work efficiently.

In groups that are too large, this is impossible. You really can't understand why the person you disagree with is doing what he's doing; or worse, perhaps he's doing that because his goals are actually different from yours and the system is too complicated to find a non-compromise solution. That's why we have multiple companies competing with each other, and it works better than if they just tried to all get along.

But what exactly is too big?

This is the fun part. The last few decades have seen a huge increase in the number of smaller companies, while simultaneously bigger companies have merged and gotten even bigger (and fewer). This is because of two completely separate effects.

First, big companies get bigger because of technology: with better communication and management technologies, it's possible to centrally organize larger and larger groups. What used to be impossible with pencil-and-abacus accounting systems is now possible with computers. This trend should continue, and big companies will be able to get bigger.

The second effect is weirder. It's also because of technology, but causes exactly the opposite effect. With technology, smaller groups can do more and more complex things. Complex things are difficult to manage centrally.

So which companies get bigger? The ones doing simple, parallelizable things. Kraft mass produces food products in giant vats. Various companies massively drill for oil. GM mass-produces cars. Sony mass-produces electronics.

And IBM and Sun mass-produce and recombine Java objects that go on to mass-produce billable consulting hours. Meanwhile tiny little companies start up, produce and recombine open source tools, and (sometimes profitably) produce complex, not always open source, products in small quantities, and almost never in Java. Aha. And here we are back in the world of software.

Linus's rule for functions: The maximum length of a function is inversely proportional to the complexity and indentation level of that function. Take that to the world of OO, and the maximum size of a class is inversely proportional to its complexity. My claim is that it's also proportional to Bigness of Thought (BoT), that is, the amount of complexity of this type that can be held in a human brain - the particular brains working on your project - at once.
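As a toy illustration of the rule above (all names and data invented): extracting deeply indented inner logic into its own helper keeps every function short and shallow, so no single piece demands too much BoT at once.

```python
# An invented illustration of the function-length rule: the fiddly
# condition moves into its own short function, so the outer function
# stays flat and the inner one stays simple.

def is_interesting(record):
    # The complex logic lives here, one indentation level deep.
    return record.get("active") and record.get("score", 0) > 10

def pick_interesting(groups):
    # The outer function stays shallow enough to read at a glance.
    return [r for group in groups for r in group if is_interesting(r)]

groups = [
    [{"active": True, "score": 15}, {"active": True, "score": 3}],
    [{"active": False, "score": 99}],
]
picked = pick_interesting(groups)  # only the first record qualifies
```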

Java encourages teeny tiny objects that do only one tiny thing, hopefully well. That's because many Java programmers are people like financial analysts, who never really wanted to be programmers, and so their BoT for programming concepts is teeny tiny. These people can program in Java. You might need a horde of them to get anything done, but they can get it done. Pretty neat, really.

Some people have a very high programming BoT. I think I fall into that category. The danger for those people is that they can write amazing programs nobody else can understand, which in the end, doesn't do anyone any good, because it's impossible for someone to maintain it after they move on. Worse, even people with very high programming BoT can't understand the programs, because everyone with high BoT has a different mental structure. I'm great at remembering concepts, but can't remember my postal code; other people can remember dozens of interest rates and formulas with no problem, but they need to draw a picture to see any difficult concept. Both groups have a high BoT, but for totally different things. Both groups can even make very good programmers, but for totally different programs.

What's the point of all this? Well, I could write for hours about the results, but my main point is: the BoT of your group defines the maximum implementation complexity of your objects. When the overall project exceeds your group's BoT (which is virtually always), you need to subdivide your objects into sub-objects with understandable interfaces that reduce the BoT needed to build the object that combines them. And, as technology (eg. programming languages and libraries) improves, the amount of organized complexity you can squeeze into the same BoT increases.

It's just like capitalism. If you can't centrally manage it all, you split it into two companies to make it achievable. If you have two companies but you could centrally manage it, you merge the two companies to make it more efficient.

And if your partner company ("library programmer") has some idiots ("bugs") you're going to have to ask them nicely to get your problems solved, because your poor brain doesn't have the capacity to hold all the details of the whole picture all at once.

Interesting Side Notes

Compromise (solving each person's problems poorly, instead of solving them all perfectly at once) should be necessary only when the problem exceeds the BoT of all affected parties.

Concepts near the limit of your BoT are very difficult for you to explain to anyone else unless they have an exceedingly high BoT. People with a low BoT, but who manage to understand a particular concept anyway, will be good at explaining it.

It is possible to reexplain difficult solutions from one BoT domain such that they make sense to people in another BoT domain. For example, some concepts can be explained using diagrams, even if the person who solved the problem didn't need a diagram to do so. This is sort of like translating from French to English; something is bound to get lost, but it's better than nothing.

By extension, UML is about as useful as, say, Babelfish.

Weasel Words

Adrian had some interesting comments about how if someone doesn't say something in the simplest possible way, there's probably a reason.

Someone asked me today about a comment I made in one of my papers, and I thought about it in those terms.

"Windows, although of course nothing is perfect, makes a great desktop system."

I'm trying to appease two opposite types of people with this sentence: people who like Windows, and people who don't. There are lots of IT people in both categories. I need to bring both of them around to agree that our system is better, at least in this particular case. See all the things I'm doing with only a few words:

- Nothing is perfect. Of course. I've got nothing against Windows, you know, but...

- Windows makes a great desktop system.

- Windows, by implication, doesn't make such a great server system, or I would have said "makes a great system" or something.

- Windows' imperfections are less important than its greatness; that's why it's a subordinate clause instead of the main clause or an ending.

- The end of a sentence sticks with people more than the middle (this is what subordinates subordinate clauses in the first place). We end on a positive note.

- The next sentence is about server vs. desktop, not imperfections, so the end of the sentence leads into the next one.

Okay, so maybe I massively overanalyzed this one. But massive overanalysis is what I do, really. The real question is: was I really thinking all that while I was writing, or did I just make it up afterwards and make it sound good? That answer is the key to my personality, I think, so don't expect me to just give it away. :)

Obligatory Correlation to Coding

For the advogato audience, here's how it all ties into programming. I wrote earlier about how restructuring to simplify a design doesn't really work, because you lose all those hidden details that deal with the many special cases.

Well, there you go. When you're looking at a program, ask yourself why it's so complicated. Give people the benefit of the doubt. Sure, maybe you can find a better way to do it. But make sure you first know what "it" is.

Religion and Non-compromise

Today, while wandering down the street, I accidentally learned about Falun Dafa (aka Falun Gong), a Buddhism-influenced religious movement from China.

I won't try to explain their whole story in as much detail as it was explained to me. But the part that stuck with me (probably due to selection bias, of course) was their explanation of persecution by the Chinese government. Some religions would explain it away as "There's a reason for everything", or "God is punishing us", or whatever. That view never really worked out too well for me. But my local Falun Dafa representative explains it this way: "Suffering is always wrong; there is no good reason for it. But when there must be suffering, the right thing to do is to endure it, not try to avoid it." And then, of course, you launch an international campaign involving millions of people to try to get rid of the cause of the suffering.

Why is this interesting? Because it parallels my earlier comments on stupidity: when stupidity is forced on you, as an individual you have to take the "less stupid" of your available options. But it's the core stupidity itself that is wrong; that's what's forcing you to do something stupid yourself. You have to make the root cause of the stupidity go away.

What does all this have to do with programming? Uh, er, use your imagination. But it all fits together in the end.

On Naming Things

If I had a battleship, I would name it the HMCS Impediment.

Slogan: "It's mostly the name that gets in the way."

Design Requires Persistence

People were teasing me a bit over the last few days because of my insistence that the room where we were relocating my desk was "evil." This wasn't me being a pest and refusing to move - this was the simple fact that the room's layout so violated my sense of aesthetics that I had to, in good conscience, refuse to work there.

Since I finally understood things back in August, one underlying, general rule has become extremely clear to me: good "design" consists of finding a way to satisfy all the constraints at once. I've always known that choosing a side in an argument and violently adhering to it, in opposition to other goals or other points of view, just felt wrong; but I had thought the alternative was compromise. It's not. In a compromise, your solution has each side giving in, so that nobody loses too badly. But the right solution to a problem, in fact, is a solution in which both sides get exactly what they want, despite how the two might seem initially to be opposites.

It is not about sacrificing the good of the one for the good of the many, or vice versa; it's the sacrifice itself that is wrong. You have to find a way that the good of the one is served perfectly at the same time as the good of the many.

In the end, after many hours, with advice from many people around the office (who tried to be helpful, despite visibly tolerating my obsessiveness) we finally found a "right" solution: no desk had to sacrifice itself to an inconvenient orientation, and yet the room at last found its coherency.

Was this worth it, to fuss for hours just for the trivial details of the layout of a single room? No, probably not in itself. But the larger purpose was to test myself, and to prove a point. The most important lesson of my life so far is that you don't have to settle for contradictions. This is not a lesson that has stood unchallenged. Better than that: it has been challenged, and it won.

Implementation Details

Special thanks to mich, whose tolerance was not visible, and, moreover, who actually did the work of "implementing the final design," so to speak.

4 Nov 2005 (updated 4 Nov 2005 at 04:32 UTC) »

It often seems like a good idea to throw out all your code and start again; especially when you just took over someone else's code and it's now your job to maintain it. The reason it seems like such a good idea is that the problem space sounds so simple... but the code looks so complicated. So you throw it away, and write it from scratch. That's when you realize why the old code was so complicated: there were lots of special cases for all the weird bits that you didn't realize were part of the problem space in the first place. So your rewrite gets big and complex too. If you're very smart, at least it eventually gets to be better than the original; but even then, it takes a long time to do. And if you're unlucky, your rewrite is merely bigger and slower, not better.

Companies are the same. Company processes are designed and encoded (eg. through forms or software systems) over a long period of time, and they cover lots of weird special cases. When you look at the problem space, it sounds so simple, and you wonder why the solution has to be so complicated. So you throw it away and start from scratch, redesigning a bunch of things to better align with your way of seeing the world. But it means you've lost the benefit of all the tweaking you've been through over the last few years; perhaps your overall model is better, but the surface details are lumpy, inefficient, and downright wrong at first. If you're very smart, hopefully it eventually gets to be better than the original; but even then, it takes a long time. And if you're unlucky, your redesigned company is merely bigger and slower, not better.

With any great redesign, you have to be constantly on the lookout for mistakes. No new design or implementation is perfect right from the start. The biggest danger is to assume that it is, and find out only very late that you're wrong. (That way is called the "waterfall method," and it's very inefficient.)

Looking for a job in Toronto?

Speaking of restructuring...

NITI has a job posting for a QA Manager in Markham, Ontario. If the corporate version of the job description looks uninspiring, see instead my version: Evil Death Ray. Actually, Evil Death Ray is a "QA Person", while the new job is "QA Manager." But the idea is the same, only more so.

You'd be working on testing our award-winning Nitix distribution of Linux, the only one that has ever been successful in servers for small business. Yes, we are open source friendly. And you'd get to play with our super fun, room-filling Project Death Ray test cluster. As well as annoying me, one of the shoddy developers whose code you'd be breaking.

Roadblock Analysis and the 80/20 Rule

I've written down this theory a few times in a few different places, but I still don't think I've explained it clearly. Here's another try.

For years people have been talking about the magical 80/20 rule of business: that 80% of your revenue comes from 20% of your customers, so you should find out who that 20% is and focus your attention on them. When you do, you magically make more money.

Nobody, of course, has ever offered me any evidence of this; only, "Wherever you look, it always turns out to be true. It did for us." This is suspicious, because it implies a selection bias, in which people have a natural tendency to only look for evidence that supports the specific thing they're trying to prove. For example, in the 80/20 rule, the 80 and the 20 measure different things; they don't have to add up to 100%. The 80/30 rule or the 90/40 rule are just as plausible as the 80/20 rule, lacking any additional evidence.

But I'm willing to accept a weaker formulation: the majority/minority rule. The majority of your revenue comes from a minority of customers. There are all sorts of reasons this might be true, most obviously the fact that most customers are small and therefore a few larger customers add up to more money than many smaller ones.

Roadblock Analysis

I recently learned a rule, obvious in retrospect, that is critical to understanding business. Let's define a "roadblock" as a convincing reason not to buy. In that case the following must be true: no customer will buy your product until you eliminate all of his roadblocks. How do I know? If you invert the statement, it's obvious: if there remains a convincing reason not to buy, the customer will be convinced not to buy, by definition, and so will not buy your product.

This is important: if you solve 90% of the roadblocks for 100% of people, you don't have 90% of people buying your product; you have 0% buying your product. "Roadblock analysis" is my name for a process that I certainly didn't invent: the process of identifying a group of people (a "market segment") and the complete list of their roadblocks.

Now, in reality, people's needs are distributed randomly, and your features are distributed randomly, so some people will have their roadblocks solved just by random luck. But not most of them.

Note that roadblock analysis, unlike the 80/20 rule, is not magic: it's just a simple, logical statement. If you do eliminate all the reasons not to buy your product (remember, many of these reasons are non-technical, such as "I've never heard of your product"), then they will buy your product.
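The all-or-nothing nature of the rule is easy to model. In this invented example, solving most of the roadblocks in a market still converts nobody, because every customer retains at least one convincing reason not to buy:

```python
# A toy model of roadblock analysis (customers and roadblocks invented):
# a customer buys only when *every one* of their roadblocks is eliminated.

def buyers(customers, solved):
    """Return the customers whose roadblocks are all in the solved set."""
    return [name for name, blocks in customers.items() if blocks <= solved]

customers = {
    "alice": {"price", "support"},
    "bob":   {"price", "never heard of it"},
    "carol": {"support"},
}

# Solving only "price" is real progress, but it converts nobody:
nobody = buyers(customers, solved={"price"})
# Solving "price" and "support" converts everyone except bob, who still
# has one convincing reason not to buy:
most = buyers(customers, solved={"price", "support"})
```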

Why 80/20 Works

Once you understand roadblock analysis, you can understand why following the 80/20 rule (whether it's precisely 80% and 20% or not) actually helps. It's like this: the current customers who actually make you the most money are the ones who currently have zero roadblocks for many of their situations. The others are the ones who currently have more than zero roadblocks, at least for most of what they do.

People known to have more than zero roadblocks might in fact have lots of roadblocks; maybe hundreds of them. Who knows? But people who already have zero roadblocks in many cases probably have near-zero roadblocks for a bunch of other related things. It just makes sense; they probably do a lot of similar things, so if there are many situations where your product fits, and some situations where it doesn't, you can probably improve just a few things and solve those problems too. Not so with the other 80% of customers; for those, by default, you should assume you're nowhere close.

80/20 is a Random Process with Convergence

Repeatedly following the 80/20 rule causes you to converge on the closest market segment to the randomly-selected customers you originally chose.

That is, you start off by spamming the market with a technology-driven product that does something cool; you find out who buys it; you optimize it for those people; you find out which of those people buy it; you optimize it for those people; and so on. This is a feedback control system which will eventually converge on the local maximum market segment. Notice how the 80/20 rule, by focussing on a smaller and smaller subset of customers each time through the loop, decreases the "hop size" each time. This is a well-known technique for guaranteeing convergence. As we know from calculus, this kind of method works pretty well: but the local maximum is often very different from the absolute maximum.

Characteristics of 80/20 Solutions

The 80/20 rule is a major management fad at the moment, presumably because it works much better than completely random guessing about which customers are important, which is what most companies would resort to otherwise. After all, a simple mathematical method that gives a very high probability of making an existing product even more profitable is nothing to sneeze at.

But 80/20 solutions will show some very specific tendencies, which you can see all around you by looking at your favourite companies.

- The tendency to annoy about 80% of customers by ignoring or mistreating them. (In this case, the 80% is real, because companies define their strategy by literally choosing the 20% of customers they will care about.)

- The lack of new customers. By focussing on very specific existing customers and never implementing features someone else might want, you limit your ability to attract new ones and slowly get further and further away from other market segments.

- The irresistible tendency to move "upmarket." The one thing this algorithm guarantees is that when you have one big company and one small company as a customer, the big company will always win. There's no way any one small customer can land in the top 20% of your revenues. So you get more successful only as you serve fewer and fewer bigger and bigger customers.

This leaves a badly underserviced 80% vacuum at the low end, which in the software industry is basically the "small business" market.

The Missing Markets

Roadblock analysis is a more general method than 80/20 for finding and servicing a market, based on one important insight: there might be a huge market that you completely cannot serve right now because you left all of the customers in that market with a few, actually rather simple, roadblocks. These customers aren't in your top 20%, because all of them have problems.

The roadblock analysis method is much more risky than 80/20, in fact, because it's hard to know the list of all roadblocks you have to solve. It might look like there are only a couple of them, but after solving those, you might discover a dozen more. 80/20 gives you a virtually guaranteed path to expansion, albeit at an unknown pace; roadblock analysis guarantees nothing, but offers higher potential gain.

The Best of Both Worlds

Finally, the good news: if you explicitly choose a market segment using roadblock analysis, you can then use a variant of 80/20 to improve your performance inside that market segment.

In math, this is like choosing a better initial value for your convergence algorithm; if you give your algorithm a clue where to start from, it's more likely to converge on the "right" local maximum.

So there you go - no more magic.

Veiled Historical Reference

There's a lot of crazy, crazy people in this world. Trust me. I know.
