Energy-thrifty computer parts are generally more expensive than their mainstream counterparts, but they pay for themselves in lower electricity bills in the long run. As a bonus, you can get a smaller and quieter system. The easiest route is probably to buy a laptop, but there are other options for system builders.
My experiences are mainly from the x86 Linux world. The following is therefore a rather limited look at things. I'm personally interested in other architectures such as PPC, which could work out even better in terms of saving energy. But the practical reality is that x86 systems are much easier and somewhat cheaper to find.
As for operating systems, I have solid experience with a number of them, and Linux has turned out to be the best compromise so far. For these purposes, it's lightweight and configurable enough to stay out of the way, thereby helping save energy. On the other hand, it's popular enough to have robust drivers for lots of hardware, not to mention the loads of excellent software.
I encourage everyone to try out something new, and to draw your own conclusions. But for a starting point, why not take a look at what's worked for me.
[2009-07-08] The ARM architecture is hitting the mainstream with some of the best performance-per-watt ratios. Earlier this year I got myself a Buffalo Linkstation Live, which is sold as a network file server, but turns into a complete Linux server. Its power consumption is mainly due to the hard drive, and the CPU runs at 400 MHz without a heat sink. It is not exactly a new model, but the recent ARMs race with netbooks has made the whole architecture more appealing. Some of these netbooks are already available, but the upcoming Touchbook looks especially promising. Another famous system is the OpenPandora, "an ultra portable open source computer with gaming controls".
However, for a lot of home and office tasks, low-power x86 is still a good choice. For some tasks it is the only affordable option with sufficient processing power. Many people also have a lot of existing x86 infrastructure that can be easily tweaked/upgraded to save some energy.
An increasing proportion of motherboards has 'mobile' chipsets and sockets for 'mobile' processors. Mini-ITX is the most common form factor where you can find these.
A considerable number of Mini-ITX boards come with soldered-on processors, making them easy and inexpensive choices for low-energy experiments. VIA is probably the most famous of these, though there are other vendors out there, including Intel. I've personally owned an EPIA MII10000, while currently I use a Jetway J9F2-Extreme with a discrete CPU.
Mini-ITX.com is a good starting point for finding hardware, though it's not necessarily the easiest/cheapest one to actually buy from.
The Asus Eee and other netbooks are probably worth checking out, as many of them are already designed for Linux, with low power consumption in mind.
The level of integration in Mini-ITX motherboards can pose a few challenges for free software users. The same applies to laptops, and any highly integrated systems where you cannot change or add components such as graphics cards.
Lots of hardware vendors provide some kind of drivers for Linux, and sometimes even specs and drivers for other operating systems. Unfortunately, these are often half-baked attempts at gaining some opensource kudos. In most cases, only Windows drivers expose all of the hardware capabilities, such as accelerated video decoding.
Binary drivers are bad, if only because they are usually tied to some outdated version of the kernel and libraries. Even truly opensource drivers are often hard to compile and install, if they are not well integrated into the mainline software packages. When you update your system, you have to compile and install these drivers again.
My simple advice is to get an Intel system. The drivers are already in the mainline kernel tree and X.org, with nearly all hardware features available. (GMA 500 is a notable exception, which should be avoided for now.)
Here are two basically identical processors. Each one is an Intel Core 2 Duo, with 2 MB cache, 2 GHz clock, and 800 MHz bus. The power numbers refer to thermal design power, i.e. the maximum practical power consumption.
The difference is that one is 'mobile' and the other is 'desktop'. They have the same performance, as far as I know. Which one would you buy?

It used to be that you could choose between a 'smart, fast and expensive' and a 'dumb, slow and cheap' processor for the same motherboard, for example a Pentium III or a Celeron. Unfortunately, today it is the desktop processors that play the role of the Celeron, and you generally cannot use a 'mobile' CPU in a 'desktop' motherboard. So, for green computing, you need a completely green system.
An interesting exception is Socket 478. This connector for most Pentium 4 and related Celeron processors also accepts the Pentium 4-M, the low-power mobile version. It's a nice coincidence, given that the P4 is probably the worst offender against environmentally aware computing, with its high wattage and relatively low performance.
AFAIK, the P4-M is not a particularly good "mobile" processor; these chips are merely the best picks from ordinary P4 production runs, capable of running at a lower voltage for a given frequency. (You may be able to undervolt any processor.) But if you're stuck with a Socket 478 motherboard, the P4-M is a better choice.
There are some caveats, though. The lower voltages may not be available from the motherboard. Also, the P4-M lacks the heat spreader of its wasteful cousins, which means some coolers may not work without slight modification.
The way power consumption scales with CPU frequency is usually superlinear: doubling the frequency gets twice as much work done in the same time, but consumes more than twice the power.
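To see why, recall the usual rule of thumb for dynamic power, P ≈ C·V²·f: a higher frequency also needs a higher voltage to stay stable, so power grows faster than the clock. Here is a rough back-of-the-envelope sketch; the capacitance constant and the voltage figures are made up purely for illustration, not measurements of any real CPU.

```python
# Back-of-the-envelope: dynamic CPU power is roughly P = C * V^2 * f.
# The constant and the voltages below are made-up illustration values,
# not taken from any particular processor.

C = 1.0e-9          # arbitrary "switched capacitance" constant

def power(freq_mhz, volts):
    return C * volts ** 2 * (freq_mhz * 1e6)

low  = power(1000, 0.95)   # 1 GHz at a low voltage
high = power(2000, 1.25)   # 2 GHz needs a higher voltage to stay stable

print("relative power at double the frequency: %.1fx" % (high / low))
# -> about 3.5x, i.e. clearly more than the 2x you might expect
```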
Thus modern CPUs usually have their clock frequency adjustable on the fly. A typical frequency governor will vary the clock according to the workload. You can also peg the CPU at any single frequency. This used to be a specialty of 'mobile' processors, but of course it's beneficial for 'desktop' usage as well, so you can probably find it in any modern CPU. Even the NetBurst (P4 etc.) processors have a 'clock modulation' feature, though without the associated voltage tables.
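On Linux, the cpufreq subsystem exposes all of this through sysfs. Below is a minimal sketch of reading and setting the governor that way; it assumes the standard /sys/devices/system/cpu/.../cpufreq paths, needs root to write, and not every driver exposes scaling_available_frequencies or the userspace governor.

```python
# Minimal sketch of the Linux cpufreq sysfs interface (needs root to write).
# Paths are per logical CPU; cpu0 is used here as an example.
CPUFREQ = "/sys/devices/system/cpu/cpu0/cpufreq/"

def read(name):
    with open(CPUFREQ + name) as f:
        return f.read().strip()

def write(name, value):
    with open(CPUFREQ + name, "w") as f:
        f.write(str(value))

print("available governors:", read("scaling_available_governors"))
print("current governor:   ", read("scaling_governor"))
print("current frequency:  ", read("scaling_cur_freq"), "kHz")

# Let the kernel vary the clock with the workload...
write("scaling_governor", "ondemand")

# ...or peg the CPU at its lowest advertised frequency
# (requires a driver that offers the userspace governor).
freqs = [int(f) for f in read("scaling_available_frequencies").split()]
write("scaling_governor", "userspace")
write("scaling_setspeed", min(freqs))
```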
A CPU usually requires a higher voltage to keep stable at a higher frequency. The frequency scaling driver keeps a table of these, and changes the voltage accordingly, saving further power. If you're undervolting, you need to check each frequency/voltage point separately.
Many newer processors since the Pentium M have software-adjustable voltage tables. The default settings are usually far from optimal, and you can reduce the voltages (and hence power and cooling requirements) with additional software.
Some motherboards also allow voltage adjustments via the BIOS boot menu. These do not require any particular CPU features.
Whichever method you use, it's important to check the stability of your system. A kernel compile is a tried and true method for judging the overall stability of a computer. The Mersenne Prime search is better for finding more subtle CPU glitches, such as rounding errors.
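If you want to walk through the frequency points one by one, as you'd have to when undervolting, something like the following sketch can serve as a quick first pass. It reuses the cpufreq sysfs paths from above and needs root; the little FPU checksum is just my toy stand-in, not a substitute for a proper kernel compile or the Mersenne Prime torture test.

```python
# Rough per-frequency stability check (sketch): step through each available
# frequency with the userspace governor and hammer the FPU for a while,
# comparing results against a reference run.
import math, time

CPUFREQ = "/sys/devices/system/cpu/cpu0/cpufreq/"

def read(name):
    with open(CPUFREQ + name) as f:
        return f.read().strip()

def write(name, value):
    with open(CPUFREQ + name, "w") as f:
        f.write(str(value))

def fpu_checksum(rounds=200000):
    # Deterministic floating-point workload; any deviation hints at a glitch.
    total = 0.0
    for i in range(1, rounds):
        total += math.sin(i) * math.sqrt(i)
    return total

reference = fpu_checksum()
write("scaling_governor", "userspace")

for freq in sorted(int(f) for f in read("scaling_available_frequencies").split()):
    write("scaling_setspeed", freq)
    time.sleep(1)                      # let the voltage/frequency settle
    start = time.time()
    result = fpu_checksum()
    status = "OK" if result == reference else "MISMATCH - back off the voltage!"
    print("%d kHz: %s (%.1f s)" % (freq, status, time.time() - start))
```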
Does lower performance mean lower power consumption? In some cases yes, but often it's the opposite.
With the NetBurst architecture the answer seems to be yes: a Celeron consumes much less power than the related Pentium 4. The explanation lies in the integrated cache, which is a major consumer of power; low-end models simply have part of the cache disabled. This probably applies to many earlier architectures, and explains the massive overclockability of the early 300-MHz Celerons.
With Pentium M and newer Intels, the power-saving mechanisms are extended to turning off unused parts of the cache. However, the Celeron M lacks many of these mechanisms, hence it may consume more power in practice.
Since we're dealing with small, cool and quiet here, forget about those bulky steel boxes with fans. These days you can get a tiny passively cooled PSU that accepts 12 V from a laptop-style adapter. PicoPSU is probably the most famous of these; they didn't exist back in 2004 when I got my Morex which is slightly bigger :)
Thanks to SATA, one arbitrary distinction has been lifted, and you can finally connect a "laptop" drive to a "desktop" system without any adapters. Even the Playstation 3 uses a standard 2.5'' SATA drive. A 2.5'' drive generally consumes less than half of the power of a 3.5'' drive from the same era.
There is a persistent rumour that smaller hard drives are slower. This is easily debunked if you consider the movement of the mechanical read/write head: a smaller platter means a shorter distance for the head to travel, and there is a reason we no longer use 5.25'' HDs. Bulk throughput is usually lower due to the smaller linear velocity of the platter, but in practical use, it is the seek latency that matters. However, 2.5'' drives are generally made with more aggressive power savings in mind, for example with a smaller buffer memory. Nevertheless, if you are concerned about power consumption, 2.5'' drives are well worth considering.
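If you want to put numbers behind that, a crude random-read probe is easy to write. The sketch below assumes a large test file on the drive in question (the path is hypothetical, and the file should be much larger than RAM) and uses the Linux-only posix_fadvise call to drop cached pages, so you measure the disk rather than the page cache.

```python
# Rough seek-latency probe (sketch): read small blocks at random offsets
# of a big file and time them.
import os, random, time

PATH = "/home/user/bigfile.bin"        # hypothetical test file
BLOCK = 4096
ROUNDS = 200

fd = os.open(PATH, os.O_RDONLY)
size = os.fstat(fd).st_size
# Drop this file's cached pages so reads actually hit the disk (Linux-only).
os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)

start = time.time()
for _ in range(ROUNDS):
    offset = random.randrange(0, size - BLOCK)
    os.lseek(fd, offset, os.SEEK_SET)
    os.read(fd, BLOCK)
elapsed = time.time() - start
os.close(fd)

print("average random read: %.1f ms" % (elapsed / ROUNDS * 1000))
```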
For general hard drive replacement, I think SSDs still have a long way to go, as of February 2010. Direct SATA replacements are mostly in the high end of performance, with the associated cost and power consumption. At the highest end, some SSDs attach directly to PCI Express. So these are not something to recommend for low-energy and low-cost enthusiasts.
On the other hand, USB flash sticks and CF/SD cards are relatively inexpensive. I have experimented with Linux on a USB stick, both on my laptop and an iMac, but there are serious speed issues in most uses. Of course, USB sticks are not designed for good latency, throughput or reliability. Daily use as an HD replacement can even be damaging. Nevertheless, I have yet to encounter a worn-out flash drive, so I can recommend these for experimentation, if not for serious uses. My Nokia N800 even uses swap on an SDHC card.
Considering the price per GB for flash, a useful scenario might consist of a few gigs of flash for the OS, and a hard drive for bigger files such as movies and music. This way, the HD could be powered down most of the time.
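For the spin-down part, hdparm can put an idle timer on the data drive. A minimal sketch, assuming the bulk-storage drive shows up as /dev/sdb (adjust to taste) and that you run it as root:

```python
# Sketch: put the bulk-storage drive on an idle spin-down timer with hdparm.
# /dev/sdb is an assumption - point it at the drive holding the big files,
# not at the flash device the OS runs from.
import subprocess

DATA_DRIVE = "/dev/sdb"   # hypothetical: the 'movies and music' drive

# -S 120 = spin down after 10 minutes of inactivity (units of 5 seconds).
subprocess.run(["hdparm", "-S", "120", DATA_DRIVE], check=True)

# -C reports the current power state (active/idle or standby).
subprocess.run(["hdparm", "-C", DATA_DRIVE], check=True)
```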