By Simon Karpen, Sr. Technical Consultant

Supercomputing is the International Conference for High-Performance Computing (HPC), Networking, Storage and Analysis. Supercomputing ’12 is happening now in Salt Lake City. The primary organizers are IEEE and ACM, and the sponsors include computing and research organizations and companies.

In terms of tools and languages, a great deal of work is going into making Python a first-class option for many applications. Between compilation to LLVM (e.g., Numba), GPU support, and huge optimization efforts, Python can now rival C-family languages for many classes of problems, and there are ongoing efforts to push it toward Fortran-like performance. There's also a great deal of work on faster, simpler, lower-latency communication (upgrading or replacing MPI), as well as better tools for parallelizing code and tasks. Beyond Python, R is also seeing widespread use as something that's "fast enough" while greatly improving developer productivity. Very large-scale applications are of course still written in C, C++, and Fortran, as well as in domain-specific languages.
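For a flavor of what the LLVM route looks like in practice, here is a minimal sketch using Numba's @jit decorator. The pairwise-distance function and the array sizes are purely illustrative, but the pattern of decorating a plain numeric loop and letting the first call compile it to machine code is the core idea.

```python
# Minimal sketch (assumes numba and numpy are installed): JIT-compiling a
# numeric Python loop to machine code via LLVM, bypassing the interpreter.
import numpy as np
from numba import jit

@jit(nopython=True)  # "nopython" mode compiles the whole function natively
def pairwise_dist(points):
    n, dims = points.shape
    out = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            d = 0.0
            for k in range(dims):
                diff = points[i, k] - points[j, k]
                d += diff * diff
            out[i, j] = d ** 0.5
    return out

points = np.random.rand(500, 3)
dists = pairwise_dist(points)  # first call compiles; later calls run at native speed
```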

Another space with a great deal of interest and activity is GPU / co-processor computing. GPU computing (CUDA, OpenCL) has played a significant role in many applications for several years, and Intel has finally entered the fray with the Xeon Phi. Unlike a GPU, the Phi is a many-core co-processor built from simplified general-purpose (CPU) cores (in-order, but 64-bit). Many vendors are offering the same infrastructure with either NVIDIA GPUs (Fermi, Kepler) or the Xeon Phi; I expect a shake-out over the coming years based on performance per watt for major applications. Co-processors and GPUs are no longer just the domain of the hardcore propeller-heads; they can even speed up complex Excel financial models.
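To give a sense of how little ceremony GPU offload can require from Python these days, here is a hedged sketch of a simple SAXPY kernel using Numba's CUDA support. It assumes a CUDA-capable GPU and a CUDA toolkit install; the kernel name, block size, and array sizes are just for illustration, not a recipe from any particular vendor talk.

```python
# Hedged sketch (assumes a CUDA-capable GPU plus numba's CUDA backend):
# offloading an element-wise SAXPY computation to the GPU.
import numpy as np
from numba import cuda

@cuda.jit
def saxpy(a, x, y, out):
    i = cuda.grid(1)          # global thread index across the whole launch grid
    if i < x.size:            # guard against threads past the end of the array
        out[i] = a * x[i] + y[i]

n = 1 << 20
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)
out = np.zeros_like(x)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
saxpy[blocks, threads_per_block](2.0, x, y, out)  # host/device copies handled implicitly
```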

All of this computing would be irrelevant without applications in science, research, industry, manufacturing, finance, and even social media. Biology and biochemistry (everything from genomics to protein folding to simulating the brain) are rapidly growing applications and a significant part of the push toward Big-Data models of computation. Traditional applications such as computational fluid dynamics were of course well represented, frequently with approaches refined enough to eliminate physical testing altogether (e.g., one automaker no longer owns a wind tunnel). Many problems that used to take days or weeks to solve can now be handled in real time; this can be anything from very high-definition rendering of a new car or airplane design to a political heat map built by running natural language processing on a very large fraction of Twitter's data stream during an election.

Finally, there is a huge focus on power and cooling efficiency. DIMMs that use 20% less power are a headline product, a key driver of SSD adoption is IOPS per watt, and even the student cluster competition focused on power efficiency. Liquid cooling may also be making a comeback, with some new twists. There are solutions that bring the heat-transfer fluid (e.g., water, glycol, or mineral oil) to the CPU, the rack, or even the entire system via full immersion. With spinning disk out of the picture, full immersion may save enough in capital and operating costs (no CRAC units, very simple and efficient heat exchangers) to justify the complexity and, for lack of a better term, messiness.