Data centers are reckless consumers of power.
Since modern processors leak somewhere in the neighborhood of 40% of their peak power consumption even when idle, and since most measurements show that most computers are nearly always idle, that's a lot of wasted power. RMI has some innovative thinking about data center design, and the designers of Via's new chip are trying to limit the effects with some clever design methods. But the basic problem is software.

Processor design has been dominated by the SUV approach: huge and heavy machinery that needs enormous engines to partially offset the weight. Processors and memory have been used to compensate for crappy software – we now have generations of computer science graduates who have been taught that efficiency and good design don't matter as long as you get the damn program out the door. That's why we have so much software that requires 4GHz out-of-order multi-core monsters with multiple gigabytes of memory for peak utilization, while generally doing nothing.

The flight to heavyweight virtualization is, I think, a doomed effort to fix the problem by making it worse. The idea, as far as I can tell, is that since the OS is doing such a bad job of using the hardware, we need a super OS that juggles multiple operating systems to somehow magically keep everything busy (as if all that overhead were efficient!). This is an upside-down version of the old argument that we needed threads because processes were so heavyweight.
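To see why the idle-leakage figure matters, here is a back-of-envelope sketch of the waste for a single server. Only the ~40% idle/peak ratio comes from the text above; the 400 W peak draw and 95% idle time are illustrative assumptions, not measured numbers.

```python
# Back-of-envelope estimate of energy spent idling by one server.
# Assumptions (illustrative): 400 W peak draw, idle 95% of the time.
# The ~40% idle/peak ratio is the figure cited in the text.

PEAK_WATTS = 400.0
IDLE_RATIO = 0.40      # idle draw as a fraction of peak (~40%)
IDLE_FRACTION = 0.95   # fraction of time the machine sits idle (assumed)

idle_watts = PEAK_WATTS * IDLE_RATIO          # power burned while doing nothing
hours_per_year = 24 * 365
wasted_kwh = idle_watts * IDLE_FRACTION * hours_per_year / 1000

print(f"Idle draw: {idle_watts:.0f} W")
print(f"Energy spent idling per year: {wasted_kwh:.0f} kWh")
```

Under those assumptions a single machine burns on the order of 1,300 kWh a year just sitting idle; multiply by tens of thousands of servers in a data center and the scale of the waste becomes obvious.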