As you may have noticed lately, a lot of ordinarily sensible citizens tend to develop fairly severe fruitcake syndrome when the calendar turns up a year with a lot of zeros. There's something about 2000, with its three creepy ciphers, that seems important, if not actually ominous.

Of course, it doesn't mean anything. Calling this 2000 A.D. is based on a dating system that uses the wrong year for the start of the A.D. era: Jesus of Nazareth probably was born in 4 B.C. And even then, the 21st century -- and the new millennium -- won't start until 2001. That's because there's no year 0 in our numbering scheme, which dates from the early medieval period.

In fact, zero -- as a number or a concept -- is a shockingly new idea in western civilization. For nearly everyone in Europe, such a thing was unthinkable until the 14th century at the earliest. That's nearly 1,000 years after zero's invention.

Why the delay?

There are several answers, and they begin about 10,000 years ago in the earliest grubby epoch of organized human activity. In those days, people didn't really need a zero. Arithmetic probably arose from the social need for counting things -- number of goats sold, bundles of grain delivered and the like.

So who wanted a symbol for nothing? As the English philosopher Alfred North Whitehead once observed, "No one goes out to buy zero fish."

All that was necessary for commerce or inventory was to make a mark on something to represent each goat or grain bundle or non-zero seafood order.

But as quantities grew larger, societies more complex and human mathematical curiosity a bit friskier, people had to rely on fancier kinds of tallies. There is evidence of such rudimentary numbering systems as long ago as 3,500 B.C., and each culture devised its own.

The differences were sometimes spectacular. You might think that it would be "natural" to base a number system on 10, as we do now, because there are 10 digits on two hands. And in fact, the Sumerians, Egyptians and, later, the Romans did.

But the ancient Babylonians, the same outfit that brought you the 360-degree circle and the 60-minute hour, used a system based, not surprisingly, on 60. The Maya in Central America used a base-20 system. Some early Greek cultures bumbled along with base-5 before the decimal (base-10) system took over.

There are still societies in Brazil that employ a base-2, or binary, system. They count this way: one, two, two-and-one, two-and-two, two-and-two-and-one and so on, as Charles Seife describes in his forthcoming book Zero: The Biography of a Dangerous Idea.
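For the curious, here is a minimal sketch in Python (the function name and word choices are illustrative, not Seife's) of how that kind of pair-by-pair count unwinds:

    # Spell out a positive whole number using only the words for one and two,
    # pairing off twos the way the base-2 counters described above do.
    def two_count(n):
        if n < 1:
            raise ValueError("these counting words start at one")
        parts = []
        while n > 2:
            parts.append("two")
            n -= 2
        parts.append("two" if n == 2 else "one")
        return "-and-".join(parts)

    print([two_count(i) for i in range(1, 6)])
    # ['one', 'two', 'two-and-one', 'two-and-two', 'two-and-two-and-one']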

And lots of computer programmers learn to write in base-16, or hexadecimal, which accords nicely with PC systems that use eight-bit "bytes" and 16-bit "words": each hexadecimal digit stands for exactly four bits, so a byte is two hex digits and a 16-bit word is four.
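A minimal Python illustration (the particular numbers are arbitrary) of how one quantity wears different bases:

    # One quantity, several spellings; two hex digits cover exactly one byte.
    n = 0xB5                 # 181 in base 16
    print(bin(n))            # 0b10110101   (base 2: the byte's eight bits)
    print(n)                 # 181          (base 10)

    # The arithmetic agrees no matter which spelling you compute with.
    assert 0xB5 + 0x40 == 181 + 64 == 245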

In truth, it really doesn't matter what your base is; you'll get the same result if you do your arithmetic correctly. What does matter, in terms of efficiency of computation, is how you represent the numbers. Unfortunately, most of the early systems lacked two characteristics that make modern calculations so easy.

One is called "positional notation," a fancy term for a way of depicting numbers so you can stack them in columns to add or subtract. The other is a number for zero. They arrived in the West in that order. And they were a long time coming.

Some ancient cultures used a different, unique symbol for each number up to some limit. The Greek system, for example, employed dozens of alphabetical characters. Alpha was 1, beta 2, kappa 20, sigma 200 and so forth.

Many numbering schemes left room for a lot of uncertainty. For example, in the Babylonian system, a single wedge-shaped mark meant one. But the same symbol also stood for 60. And also for 60 times 60, or 3,600. So two such marks side by side could mean 2 or 61 or maybe 3,601. (Similar logic seems to persist in Washington today when dealing with the federal budget.) The intended value was determined by context.
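To see the trouble in modern terms, here is a tiny sketch (purely illustrative) of the readings those two adjacent unit marks could support in a base-60 system with no place holder:

    # Two unit wedges, no place holder: the reader must guess which
    # base-60 column each mark was meant to occupy.
    readings = [
        1 + 1,         # both marks in the ones column        ->     2
        1 * 60 + 1,    # one mark in the sixties column       ->    61
        1 * 3600 + 1,  # one mark in the 3,600s column        -> 3,601
    ]
    print(readings)    # [2, 61, 3601]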

But by 300 B.C., the Babylonians had solved this problem -- at least somewhat -- with an innovation. They added a symbol that functioned as a place holder.

This was possible because their numbering system employed positional notation, or place value. That is, the value of a number depended on its position within the whole number, just as it does in our own system. The number 2 signifies something quite different in 12 than it does in 429 or 2,763, and the difference depends on its place within the number.

So the Babylonians stuck in a couple of hash marks to tell readers at a glance what position a symbol was supposed to occupy.

The place holder wasn't a real number and certainly not zero as we know it today. It was more like an empty string on an abacus or an empty row in the typically sand-covered "counting boards" used by various societies. In those, each row represented quantities of a different magnitude.

For example, suppose you wanted to depict the number 315 on a base-10 counting board. You'd put five stones in the farthest-right "singles" row; then one stone in the next, or "tens" row to the left of that; and then three stones in the "hundreds" row to the left of that. So popular was this system that our word "calculate" comes from the Latin word calculus, meaning pebble. Not exactly rocket science, but a clear step in the right direction.
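A minimal counting-board sketch in Python, assuming a base-10 board read from the rightmost row outward (the function is illustrative):

    # Peel off one decimal digit per row, starting with the "singles" row,
    # and report how many pebbles land in each row.
    def counting_board(n, base=10):
        rows = []
        while n > 0:
            rows.append(n % base)   # pebbles in this row
            n //= base              # shift attention one row to the left
        return rows

    print(counting_board(315))   # [5, 1, 3]: five singles, one ten, three hundreds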

Counting boards, and thus empty spaces on counting boards, had been around at least since the Socratic era. But it would be nearly 1,000 years until a genuine zero arose independently within two cultures -- Mayan civilization in the Yucatan peninsula, which was unknown to Europe at the time, and India to the far more accessible east.

The Maya didn't mess around. Not only did they develop positional notation, but they also had a real zero and used it boldly. The first day of each 20-day month was the 0th; the last was the 19th. Unfortunately, Europeans, whose systems eventually would dominate world culture, didn't have a clue about the Maya until the 16th century.

Instead, our zero came from India, which seems to have picked up the rudimentary concept of an empty-value place holder from the invading forces of Alexander the Great in the 4th century B.C. For a fascinating discussion of how this cross-cultural transplant probably took place, see Robert Kaplan's new book, The Nothing That Is: A Natural History of Zero.

Whatever happened, the Hindus eventually elevated zero to the rank of a number. That is, it was not just a place holder. It was a real part of the number system, and it represented a real quantity: nothing.

Nobody knows exactly when the first Indian nullmeister came up with this improved zero and a circular symbol to represent it, but it seems to have been well established by at least the 7th century A.D., when Islamic peoples began pushing toward China.

Without a doubt, zero behaved badly. It scarcely made sense, for example, that a real quantity such as, say, 352 would, if multiplied by zero, simply equal nothing. Where did all those 352 real things go? Dividing by zero or raising a number to the zeroth power yielded equally baffling results. Nonetheless, zero soon was headed west.
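The same worries, replayed in Python with the 352 from above (a modern language simply refuses the worst case):

    print(352 * 0)    # 0: all 352 real things vanish
    print(352 ** 0)   # 1: any nonzero number to the zeroth power is 1
    try:
        print(352 / 0)
    except ZeroDivisionError:
        print("undefined")   # division by zero still has no sensible answer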

As the epic wave of Islamic conquest washed eastward, it swept up the best ideas of local populations on the way. One of those was the Indian zero. In fact, Arab scholars admired the entire base-10 Indian symbol set -- nine numbers and a circle for zero -- and brought it back.

Shortly after zero reached Baghdad, it attracted the attention of the leading Arab mathematician of the 9th century, the great al-Khwarazmi. He may seem rather obscure, but a corruption of his name gives us the modern computer-science word "algorithm," and the word "algebra" comes from the Arabic term al-jabr, which means something like "reassembly" and appeared in the title of one of his major works.

Al-Khwarazmi popularized use of the Indian symbols, which we now incorrectly call "Arabic numerals," as well as the exotic notion of zero -- a symbol for nothing whatsoever.

European sages, in turn, picked up the idea of zero from the Arabs.

At first, the notion of a complete absence (true zero) didn't sit well with western minds. Greco-Roman philosophy, as typified in the teachings of Aristotle, had been hostile to the concept of nothingness. It had no place for the idea of complete emptiness, even in what passed for the idea of "outer space," which in those days of an Earth-centered cosmos included everything above the moon.

You might say that classical thinkers abhorred a vacuum. Certainly, they avoided the void. Indeed, until the end of the 19th century, thousands of highly sophisticated scientists believed that outer space simply could not be empty. They figured it had to be filled with some stuff they called "ether."

Moreover, zero messed up the orderly behavior of arithmetic. Sometimes, it didn't do anything at all: 473 - 0 = 473. Big deal. But sometimes it made a colossal mess: 473 / 0 = ... um, well, infinity. Or something else. Or whatever you want it to be.

But to Eastern minds, nothing was no big deal. After all, many Hindu and Buddhist beliefs were based on the idea that reality actually is illusory, a sort of fictional movie that our brains foolishly project on a screen of cosmic nothingness. In addition, Judeo-Christian and Islamic creation stories involved a deity shaping the Earth from a featureless void.

Philosophy aside, Arabic numerals, with their positional notation and nifty zero, were terrific business tools for those more interested in money than metaphysics.

So by the early 13th century, the Islamic-Indian method of calculating was being advocated by a few Europeans, among them a well-traveled Italian merchant and part-time mathematician named Fibonacci. His book Liber Abaci, or Book of the Abacus, extolled the benefits of the Arabic numbering system.

It made a lot of sense. In those days, Europeans were still doing math with Roman numerals. And you couldn't get very far by placing CDXXXVII over LXIV and adding the columns. But put 103 over 21, and you almost automatically ended up with 124. Indeed, calculating with Arabic numerals seemed to be just as fast as using a counting board and gave the user far greater range.
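A short sketch in Python of why the Roman form resists column-by-column work: before you can add at all, you have to unwind the subtractive notation back into a plain value (the conversion routine is illustrative, not Fibonacci's):

    # Read a Roman numeral left to right, subtracting any symbol that
    # stands in front of a larger one (the C in CD, the I in IV).
    VALUES = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

    def roman_to_int(s):
        total = 0
        for i, ch in enumerate(s):
            v = VALUES[ch]
            if i + 1 < len(s) and VALUES[s[i + 1]] > v:
                total -= v
            else:
                total += v
        return total

    # CDXXXVII + LXIV: no columns to stack and carry...
    print(roman_to_int("CDXXXVII") + roman_to_int("LXIV"))   # 501
    # ...while the Arabic forms line up digit under digit:
    print(437 + 64)                                          # 501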

Nonetheless, acceptance was slow. In 1299, Florence banned use of Arabic numerals, ostensibly because it was so easy to alter them -- for example, by turning a 0 into a 6. Of course, a Roman I could be changed to a V with as little effort, but never mind.

Finally, by 1500, there had been plenty of head-to-head computation competitions between counting boards and Arabic numbers. And the numbers were starting to win every time.

Roman numerals finally began to disappear except for ceremonial purposes. And zero, the numerical embodiment of nothing at all, was here to stay.