Pat Gelsinger, Senior Vice President, Digital Enterprise Group, Intel, patrick.p.gelsinger@intel.com
To truly understand the genius behind Gordon Moore and his famous
“Moore’s Law,” you have to remember what the semiconductor
industry was like in the mid-1960s, when Gordon wrote his article for the
April 1965 edition of Electronics magazine. At that time, the industry
was capable of integrating only tens of transistors on a single silicon die,
and Gordon was projecting the integration of “65,000 components on a
single silicon chip.” Even though that number seems small by today’s standards,
pundits thought his prediction was bold and probably optimistic.
The industry has gradually transformed from those early successes with tens
of transistors to more than a billion transistors on a single microprocessor
today. When Professor Carver Mead of the California Institute of Technology
named Gordon’s prediction “Moore’s Law,” he
wasn’t referring to a law of physics but rather to a discipline for the
electronics industry to follow. Later, Robert Dennard [1] developed the
scaling theory showing how Moore’s Law could be realized in practice.
Simply put, the rest is history. The industry has dutifully followed
Moore’s Law for more than four decades, with no end in sight. To be sure,
the road has not always been easy. Challenges such as yield, design
productivity, lithographic scaling, and power dissipation all seemed
insurmountable in their own time, but one by one they were overcome through hard
work and perseverance. There will surely be more challenges to come, including
in-die variation, power efficiency, and reliability. But again, with hard work,
cooperation across academia and industry, and vigilant perseverance,
these too will be overcome. Now, let’s take a walk through the four
decades since Gordon Moore made his wonderful prediction.
With Moore’s Law in its infancy, the 1970s was the era of
invention. No one was quite sure what to do with the new abundance of
transistors, and so they were used everywhere: integrated static memory, dynamic
memory, microcontrollers, and microprocessors, to name a few. Innovations were
everywhere, and the sky was the limit. Fabrication facilities were relatively
inexpensive, allowing Moore’s Law to march on unabated as integration
capacity rose from tens of transistors to thousands. Wafer diameters doubled from
two inches at the start of the decade to four inches by 1976.
Microprocessor frequencies rose from hundreds of kHz in the early days to tens
of MHz later in the decade. These were still slow compared to discrete logic,
but they provided the foundation for the general-purpose programmable
microprocessor that would dominate the industry in the coming decades. Apple
introduced the first truly personal computer based on an early
microprocessor, bringing compute power into the home. It didn’t match the
performance of the large mainframe computers of the day, but it certainly opened the
door for the revolution that would grip the world in the next decade. Engineers
had little idea at the time that integrated systems technology would later
replace discrete systems. So one has to ask: why was integration so desirable?
Three fundamental factors emerged as the driving forces for integration:
integrated systems 1) provide better cost/performance, 2) take less space,
and 3) are more reliable. Fulfillment of Moore’s Law was the ticket to
realizing these benefits.
If the 1970s was the era of invention, then the 1980s was the era of
scaling and manufacturing science that made the realization of
Moore’s Law viable and affordable. With the success of integrating
thousands of transistors, it soon became clear that achieving the integration
of millions of transistors on a single die was within reach. With this level of
integration, VLSI was born, providing both lucrative opportunities and
unprecedented performance at remarkably low costs. Incremental increases in
wafer size were instrumental in realizing significant cost reductions:
four-inch wafers gave way to six-inch wafers in high-volume
manufacturing during this decade. The three big challenges of yield, design
complexity, and power dissipation began to emerge during this time. In
response, a whole new era of manufacturing science arose to tackle the yield
problem. Through the use of statistical controls and discipline, it became
possible to manufacture VLSI components cost-effectively in very high volumes.
The advent of CAD (Computer-Aided Design) and new technologies such as logic
synthesis made it possible for small numbers of designers to construct these
complex designs. Unfortunately, power was quickly becoming an issue. The
underlying process technology was NMOS, with CMOS on the horizon. CMOS was
known to consume less power, because its gates draw significant current only
while switching rather than statically, but it was used mainly for very low
power applications such as watches and was considered an underperformer for
demanding designs such as microprocessors. Nevertheless, the drive to realize
Moore’s Law continued, and CMOS was the technology brought to bear on the power problem.
High-performance CMOS designs began to emerge and quickly replaced NMOS as the
technology of choice. As a result, the power consumed by these devices
decreased substantially from tens of watts to just a few watts while the
frequency rose to tens of MHz.
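A quick back-of-the-envelope comparison makes the CMOS advantage concrete. The sketch below is my own illustration, not a figure from the article; the gate count, load current, capacitance, and activity factor are all assumed, order-of-magnitude values. It contrasts an NMOS gate with a depletion-mode load, which draws static current whenever its output is held low, against an ideal CMOS gate, which draws significant current only while switching.

# Back-of-the-envelope comparison of NMOS vs. CMOS power for a hypothetical
# 1980s-class chip.  All device values below are assumptions for illustration.
VDD = 5.0          # supply voltage (V), typical for the era
GATES = 100_000    # assumed logic gate count
I_LOAD = 50e-6     # assumed depletion-load current per NMOS gate (A)
C_GATE = 50e-15    # assumed switched capacitance per gate (F)
FREQ = 10e6        # 10 MHz clock
ACTIVITY = 0.1     # assumed fraction of gates switching each cycle

# NMOS: roughly half the gates hold a low output at any moment, and each of
# those draws the load current continuously from Vdd to ground.
nmos_static_power = 0.5 * GATES * I_LOAD * VDD

# CMOS: no static current path (ignoring leakage); power comes only from
# charging and discharging capacitance, P = a * C * Vdd^2 * f.
cmos_dynamic_power = ACTIVITY * GATES * C_GATE * VDD**2 * FREQ

print(f"NMOS static power:  ~{nmos_static_power:.1f} W")   # ~12.5 W
print(f"CMOS dynamic power: ~{cmos_dynamic_power:.2f} W")  # ~0.13 W

With these assumed numbers the static term dominates the NMOS total, while the CMOS figure scales only with clock rate and activity, which is why CMOS left headroom for the frequency increases that followed.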
By the early 1990s, Gordon’s prediction had more than two decades under its
belt, with no end in sight. But what followed over the rest of the decade
surprised even the most optimistic industry leaders. The last decade of the 20th
century is most appropriately labeled the era of manufacturing and
speed. Transistor integration rose from a million transistors in the early
1990s to close to 50 million by the end of the decade. Silicon wafers increased
to eight inches in diameter, carried hundreds of microprocessor die, and were
manufactured in high volumes, making for extraordinary cost reductions.
Advances in CAD and manufacturing science allowed the industry to design and
produce complex chips in very high volume. Innovative manufacturing and design
techniques, such as redundancy in memory, helped improve yields to exceptional
levels. Perhaps the most stunning achievement of the 1990s was the increase in
performance realized in everyday platforms. The quest to deliver higher and
higher performance fueled an exponential growth in frequencies from 25 MHz to
more than 1 GHz, with a corresponding tenfold increase in power. In the 1980s,
the transition from NMOS to CMOS had provided a temporary fix for the problem
of rising power. Unfortunately, no such savior was in sight in the
1990s. Instead, aggressive voltage scaling was employed to allow these
high-performance microprocessors to fit into the common desktop, laptop, and
server form factors. As supply voltages scaled down, transistor threshold
voltages had to scale down with them to keep enabling higher frequencies, and
because subthreshold leakage grows exponentially as the threshold voltage
drops, leakage started increasing at an alarming rate. By the end of the
decade, this leakage power was a substantial component of the total design power.
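The mechanism is easy to see with a first-order model. The sketch below is my own illustration, not data from the article: switching power is taken as proportional to Vdd^2 * f and subthreshold leakage as proportional to exp(-Vt / (n * kT/q)), and the supply, threshold, and frequency values for the scaling step are assumed.

# Illustrative model of a 1990s-style scaling step.  Voltage, threshold, and
# frequency values are assumptions chosen for illustration, not measured data.
import math

THERMAL_V = 0.026   # kT/q at room temperature (V)
N_FACTOR = 1.5      # assumed subthreshold slope factor

def dynamic_power(vdd, freq):
    """Relative switching power, proportional to Vdd^2 * f."""
    return vdd**2 * freq

def leakage_factor(vt):
    """Relative subthreshold leakage, proportional to exp(-Vt / (n*kT/q))."""
    return math.exp(-vt / (N_FACTOR * THERMAL_V))

# Assumed scaling step: Vdd 3.3 V -> 1.8 V, Vt 0.7 V -> 0.45 V, 200 MHz -> 1 GHz.
old_vdd, old_vt, old_f = 3.3, 0.70, 200e6
new_vdd, new_vt, new_f = 1.8, 0.45, 1e9

dyn_growth = dynamic_power(new_vdd, new_f) / dynamic_power(old_vdd, old_f)
leak_growth = leakage_factor(new_vt) / leakage_factor(old_vt)

print(f"Dynamic power grows ~{dyn_growth:.1f}x despite a 5x frequency increase")
print(f"Subthreshold leakage grows ~{leak_growth:.0f}x as the threshold drops")

Lowering the supply voltage keeps switching power in check even as frequency climbs, but the exponential term shows why leakage went from negligible to a substantial share of the power budget.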
As we enter the 21st century, Moore’s Law continues
unabated, with billions of transistors per chip and wafers reaching 12 inches
in diameter. But the industry has shifted its focus to providing
energy-efficient performance across all platforms. The challenge is to exploit
the transistor integration capacity provided by Moore’s Law, deliver
higher and higher performance, and yet stay within the power limits imposed in
each platform segment. For example, cell phones continue to increase in
function, now capable of email, music, and video. Laptops are ubiquitous in the
workplace but face higher and higher consumer demands for performance and
battery life. High-density data centers such as those created by Google are
exploding as the world’s data store grows exponentially. These network
data centers are increasingly challenged by power delivery and thermal
constraints. To increase performance linearly through frequency alone, processors
must dissipate power roughly quadratically, which is unfortunately a poor
trade-off between performance and power. In the last two years, a new paradigm
has arisen that we call “multi-everywhere,” referring to the multiplication of
functions at every level of the platform, from multiple logic blocks on a chip
to multithreading to chip-level multiprocessing. By increasing thread counts
and processing cores, we are able to deliver near-linear performance gains with
only modest increases in frequency while staying within required power levels.
Today, we see the dawn of this “multi-everywhere” era with dual-core and
quad-core mainstream processors. In the coming years, you will see core counts
increase greatly, to the point of possibly hundreds of cores on a single
processor, allowing us to achieve Tera-Scale computing on a single die.
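A simple worked comparison makes the trade-off concrete. The sketch below is my own illustration with assumed models, not Intel data: it takes the premise above that single-core power grows roughly with the square of performance, while adding cores grows power roughly linearly for a workload that parallelizes well.

# Compare two ways to double performance under simple, assumed power models:
# scaling one core's frequency/voltage vs. adding a second baseline core.
def single_core_power(perf):
    """Assumed model: power grows ~quadratically with single-core performance."""
    return perf ** 2

def multi_core_power(perf_per_core, cores):
    """Assumed model: power grows ~linearly with core count."""
    return cores * single_core_power(perf_per_core)

TARGET = 2.0  # want twice the baseline performance

via_frequency = single_core_power(TARGET)        # one core pushed 2x harder
via_cores = multi_core_power(1.0, cores=2)       # two cores at baseline speed

print(f"2x performance from one faster core: ~{via_frequency:.0f}x power")
print(f"2x performance from two baseline cores: ~{via_cores:.0f}x power")
# Caveat: the two-core figure assumes the workload splits cleanly across cores.

The gap only widens as core counts grow, which is why the path to Tera-Scale computing runs through parallelism rather than raw frequency.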
We see no end to Moore’s Law in the coming decade. As before,
challenges are abundant. However, as was true over the last 40 years, there are
large numbers of brilliant scientists and engineers ready to tackle these
challenges. Looking ahead, ever-decreasing transistor geometries will lead to a
significant increase in cross-die variability [2]. Electric fields in transistors
will continue to increase, threatening reliability and the useful lifetime of
the transistors. But there will be billions of them at our disposal. Besides
new features and increased core counts, we will need to find new ways to
utilize these large numbers of transistors to circumvent ever-increasing
reliability and variability concerns. At the same time, with such an abundance
of transistors, we can envision a complete platform integrated on a single chip
featuring hundreds of cores, special-purpose hardware, and memory. Such
processors will need to include new architectural, micro-architectural, and
circuit techniques that will provide for built-in resiliency.
Gordon Moore’s simple prediction has been a guiding principle for an
electronics industry that has far surpassed anything anyone could have dreamed
of in 1965. In each decade since, there have been challenges, creative
solutions, and more challenges. Today, those challenges include reliability,
variability, and power, and they appear daunting. However, history has proven
time and again that the realization of Moore’s Law drives us to new
levels of innovation. Will it ever come to an end? No one can know, but for the
foreseeable future, Moore’s Law appears both intact and as prophetic as
the day it was first penned.
References
1. R.H. Dennard, F.H. Gaensslen, H.-N. Yu, V.L. Rideout, E. Bassous, and A.R. LeBlanc,
“Design of Ion-Implanted MOSFETs with Very Small Physical Dimensions,”
IEEE Journal of Solid-State Circuits, vol. SC-9, no. 5, Oct. 1974, pp. 256-268.
2. S. Borkar, “Designing Reliable Systems from Unreliable Components:
The Challenges of Transistor Variability and Degradation,” IEEE Micro,
vol. 25, no. 6, Nov.-Dec. 2005, pp. 10-16.
3. www.intel.com/pressroom/kits/quickreffam.htm#i486
4. Intel Moore's Law web site:
www.intel.com/technology/magazine/silicon/moores-law-0405.htm
About the Author
Pat Gelsinger is senior vice president and general manager of
Intel Corporation's Digital Enterprise Group.
Gelsinger joined Intel in 1979 and has more than 26 years of experience in
general management and product development positions. Gelsinger led Intel's
Corporate Technology Group, which encompasses many Intel research activities,
including leading Intel Labs and Intel Research, and driving industry alignment
with these technologies and initiatives. As CTO, he coordinated Intel's
longer-term research efforts and helped ensure consistency across Intel's
emerging computing, networking, and communications products and
technologies.
Before his appointment as the company's first CTO, Gelsinger was the chief
technology officer of the Intel Architecture Group. In this position, he led
the organization that researches, develops and designs next-generation hardware
and software technologies for all Intel Architecture platforms for business and
consumer market segments.
Previously, Gelsinger led the Desktop Products Group, where he was
responsible for Intel's desktop processors, chipsets and motherboards for
consumer and commercial OEM customers as well as Intel's desktop technology
initiatives and the Intel Developer Forum. From 1992 to 1996, Gelsinger was
instrumental in defining and delivering the Intel® ProShare® video
conferencing and Internet communications product line. Prior to 1992, he was
general manager of the division responsible for the Pentium® Pro,
IntelDX2™ and Intel486™ microprocessor families. Other positions
Gelsinger has held during his Intel career include director of the Platform
Architecture Group, design manager and chief architect of the original
i486™ microprocessor, manager of CAD methodologies, and key contributor
on the original i386™ and i286 chip design teams.
Gelsinger holds six patents and six patent applications in the areas of VLSI
design, computer architecture, and communications. He has more than 20
publications in these technical fields, including "Programming the
80386," published in 1987 by Sybex Inc. He has received numerous Intel and
industry recognition awards, and his promotion to group vice president at age
32 made him the youngest vice president in the history of the company.
Gelsinger received an associate's degree from Lincoln Technical Institute in
1979, a bachelor's degree, magna cum laude, from Santa Clara University in 1983,
and a master's degree from Stanford University in 1985. All degrees are in
electrical engineering. Gelsinger is married and the father of four
children.