Some videos are now available from Sun’s HPC Consortium, which was held last year in Portland alongside the SC09 conference.
One of the more interesting ones is the presentation by Yan Fisher, Benchmark Lead in Sun’s Technical Marketing Systems Group, which gives an update on benchmarking in HPC.
Posted on January 20, 2010 by Tom Kranz in Technology
Over at the Sun HPC Watercooler there’s a great video from Acumen CTO Professor Erik Hagersten about how to migrate legacy code to multicore architectures, and how to optimise performance for parallel architectures.
Finding single-core processors in servers is almost impossible now, and with processors like Sun’s UltraSPARC T2+ and NVidia’s GPU solutions, parallel processing (and the associated performance issues) is going to be a hot topic over the next few years.
The full video can be viewed here – well worth a watch.
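Migrating legacy code to multicore usually starts with finding work that can be split into independent chunks. As a rough, hypothetical sketch of that idea (not from the video – the function names and chunking scheme here are my own illustration), here’s a serial sum reworked to fan out across worker processes:

```python
from multiprocessing import Pool

def partial_sum(bounds):
    """Sum the integers in [start, stop) -- a stand-in for real per-core work."""
    start, stop = bounds
    return sum(range(start, stop))

def parallel_sum(n, workers=4):
    """Split [0, n) into one chunk per worker and sum the chunks in parallel."""
    step = n // workers
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum(1_000_000))
```

The hard part in real legacy code, of course, is that the chunks are rarely this independent – which is exactly the kind of data-sharing problem the talk is about.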
Posted on December 11, 2009 by Tom Kranz in HPC, SUN
Alongside the recent SC09 show, Sun ran their HPC Consortium, which featured a number of interesting technical presentations from Sun and their customers. Obviously there was a big focus on using technologies within HPC, but discussions on things like file system roadmaps and how to scale performance with multi-chip hardware solutions are just as relevant to business as they are to HPC.
So it’s great to see that Sun have posted PDFs of the presentations, and videos of the discussion panels, up at the HPC Consortium website.
Posted on November 23, 2009 by Tom Kranz in HPC, Technology
Over on their nTersect blog, NVidia have posted an interesting interview with Pat McCormick, a Research Computer Scientist at Los Alamos National Lab (LANL). If you’ve ever wondered exactly how using GPUs for computation would work, or how much of a performance improvement it could bring to your workloads, you should watch this interview.
According to Pat, “Our research challenge is dealing with massive amounts of data, not only from the high performance computing aspect but how to analyze the data from simulations.”
This isn’t just an HPC problem, though – it’s an issue that affects every business today. As storage expands and business needs grow, faster and more efficient methods of data analysis are needed, and GPUs seem to be offering the most cost-efficient way to solve this at the moment.
Posted on November 2, 2009 by Tom Kranz in HPC, Technology
Sun have released a technical report on Transactional Memory, based on their experiences with the (now sadly canned) ROCK processor. “Early Experience with a Commercial Hardware Transactional Memory Implementation” is available as a free download from Sun’s research website – you can grab it at http://research.sun.com/techrep/2009/abstract-180.html
From the abstract:
We report on our experience with the hardware transactional memory (HTM) feature of two revisions of a prototype multicore processor. Our experience includes a number of promising results using HTM to improve performance in a variety of contexts, and also identifies some ways in which the feature could be improved to make it even better. We give detailed accounts of our experiences, sharing techniques we used to achieve the results we have, as well as describing challenges we faced in doing so. This technical report expands on our ASPLOS paper [9], providing more detail and reporting on additional work conducted since that paper was written.
Anyone who’s interested in High Performance Computing (HPC) or performance gains from Transactional Memory should have a read through this paper – it’s interesting stuff.
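Real HTM, as on ROCK, needs processor support, but the programming model the paper describes – run a critical section speculatively, then commit only if no other thread conflicted, otherwise abort and retry – can be sketched in software. This is purely my own illustrative analogy (a version-stamped cell with an optimistic retry loop), not code from the paper:

```python
import threading

class TVar:
    """A single transactional cell: a value plus a version stamp."""
    def __init__(self, value):
        self.value = value
        self.version = 0
        self._lock = threading.Lock()  # stands in for the hardware commit step

    def transact(self, fn):
        """Optimistically apply fn(old) -> new; retry if another writer committed first."""
        while True:
            snapshot_value, snapshot_version = self.value, self.version
            new_value = fn(snapshot_value)      # speculative work, no lock held
            with self._lock:                    # "commit": succeeds only if unchanged
                if self.version == snapshot_version:
                    self.value = new_value
                    self.version += 1
                    return new_value
            # conflict detected -- abort and retry, as HTM would

counter = TVar(0)
threads = [threading.Thread(
               target=lambda: [counter.transact(lambda v: v + 1)
                               for _ in range(1000)])
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter.value)  # 4000: every increment committed exactly once
```

The appeal of doing this in hardware is that the speculation and conflict detection come almost for free, instead of costing a version check and a lock on every commit.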