Energy is increasingly a first-order concern in computer systems design. Battery life is a major constraint in mobile systems, and the cost of power and cooling dominates equipment costs in data centers. More fundamentally, current trends point toward a "utilization wall," in which the fraction of a chip's die area that can be active at once is limited by how much power can be delivered to and dissipated by the chip.
Much of the focus in reducing energy consumption has been on low-power architectures, performance/power trade-offs, and resource management. While those techniques are effective and can be applied without programmer involvement, exposing energy considerations to higher-level software enables a whole new class of energy optimizations. This project develops new programming models, system support, and hardware for energy-aware programming, with the goal of reducing energy consumption in modern computing systems by an order of magnitude or more. More specifically, we are exploring: (a) language and runtime monitoring/control techniques for expressing quality-of-service requirements and identifying where errors can be tolerated in exchange for energy savings, (b) tools that help programmers apply these techniques and compilers that communicate this information to the hardware, and (c) microarchitectures and hardware accelerators that can better exploit this information.
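As a flavor of direction (a), the sketch below shows how error tolerance might be expressed with EnerJ-style type qualifiers. The `@Approx` annotation here is defined locally for illustration and only marks intent; in a real system, the compiler and hardware would map such annotations onto low-energy storage and arithmetic. This is a minimal sketch, not EnerJ's actual implementation.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Target;

public class ApproxDemo {
    // Hypothetical qualifier in the spirit of EnerJ's @Approx:
    // data so marked may be stored or computed with reduced reliability
    // in exchange for energy savings. Defined here only for illustration.
    @Target(ElementType.TYPE_USE)
    @interface Approx {}

    // Pixel luminance tolerates small errors, so its inputs and result
    // are marked approximate; a loop counter or array index would not be.
    static @Approx int luminance(@Approx int r, @Approx int g, @Approx int b) {
        return (r + g + b) / 3;
    }

    public static void main(String[] args) {
        System.out.println(luminance(120, 130, 140)); // prints 130
    }
}
```

The key design point is that approximation is opt-in and type-checked: precise data (control flow, indices) stays exact by default, while explicitly annotated data may flow to energy-saving hardware.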
See the EnerJ project page for publications, downloadable resources, and more details.