Computing has given us so many good things, and we want more, but greater performance comes with increased energy costs. Granted, taken in context, energy usage in computing is small compared to that of other industries, so what should we do to keep energy costs from becoming the kind of constraint that hardware itself once imposed? Luiz Barroso suggests three broad areas of opportunity: data center energy efficiency, server energy efficiency, and computing efficiency.
Barroso discusses the power provisioning problem: for a 2-megawatt data center, the challenge is to utilize most of that power most of the time. Drawing on his research into the dynamics of energy usage in computing, he then turns to the concept of energy-proportional computing. He explains why a low-power solution works well in small and mobile devices but not in servers, where the ideal is to consume power in proportion to the work being performed. He suggests what is needed to get there and what it could save in energy costs. Barroso concludes with a third energy-saving opportunity that he believes has nearly unbounded potential.
Luiz André Barroso is a Distinguished Engineer at Google, where he has worked across several engineering areas, ranging from applications and software infrastructure to hardware design. Prior to Google, he was a member of the Research Staff at Compaq and Digital Equipment Corporation, where his group did some of the pioneering work on computer architectures for commercial workloads. That work included the design of Piranha, a system based on aggressive chip multiprocessing that helped inspire many of the multi-core CPUs now in the mainstream.
Luiz has a Ph.D. degree in Computer Engineering from the University of Southern California and B.S. and M.S. degrees in Electrical Engineering from the Pontifícia Universidade Católica, Rio de Janeiro.
This free podcast is from our Velocity Conference series.