Some videos are now available from Sun’s HPC Consortium, which was held last year in Portland alongside the SC09 conference.
One of the more interesting ones is the presentation by Yan Fisher, Benchmark Lead in Sun’s Technical Marketing Systems Group, which gives an update on benchmarking in HPC.
In possibly one of the best-named marketing efforts ever, NVidia have announced their “Mad Science” promotion – just in time for Christmas! The deal is simple – buy a Tesla card now, and get a free upgrade to the equivalent Fermi-based card when they start shipping:
When you purchase a Tesla C1060 GPU Computing Processor through this promotional offer, you will qualify for a no penalty upgrade to a Tesla C2050 or a Tesla C2070 GPU Computing Processor. Start experiencing GPU computing today on a Tesla C1060 and be assured to be one of the first to receive the new Fermi-based Tesla C2050/C2070 GPU Computing Processor.
Posted on December 16, 2009 by Tom Kranz in HPC, SUN
DanT has posted up a fantastic introduction to Sun Grid Engine. Most discussions of Grid Engine assume a decent level of knowledge of clustering and distributed load balancing – fine if you know your stuff, not so good if you want to get up to speed with little prior knowledge.
Dan’s post breaks down the concepts behind Grid Engine and provides an excellent explanation of how and why it works. This is a really great resource and well worth a read – even if you’re not planning on deploying a Grid Engine solution, it’s worth understanding the technology behind it.
Posted on December 11, 2009 by Tom Kranz in HPC, SUN
Alongside the recent SC09 show, Sun ran their HPC Consortium, which featured a number of interesting technical presentations from Sun and their customers. Obviously there was a big focus on using these technologies within HPC, but discussions on topics like file system roadmaps and scaling performance with multi-chip hardware solutions are just as relevant to business as they are to HPC.
So it’s great to see that Sun have posted PDFs of the presentations, and videos of the discussion panels, up at the HPC Consortium website.
Posted on November 23, 2009 by Tom Kranz in HPC, Technology
Over on their nTersect blog, NVidia have posted an interesting interview with Pat McCormick, a Research Computer Scientist at Los Alamos National Lab (LANL). If you’ve ever wondered exactly how using GPUs for computation would work, or how much of a performance improvement it could bring to your workloads, you should watch this interview.
According to Pat, “Our research challenge is dealing with massive amounts of data, not only from the high performance computing aspect but how to analyze the data from simulations.”
This isn’t just an HPC problem – it’s an issue that affects every business today. As storage expands and business needs grow, faster and more efficient methods of data analysis are needed – and GPUs seem to offer the most cost-efficient way to solve this at the moment.