UNIX Consulting and Expertise

Comparative benchmarks: here there be dragons

Part of the work I’m doing for a client at the moment involves a migration of databases from a Sun E2900 running Solaris 8 to a cluster of M3000s running the latest build of Solaris 10. This is going to be a long post, but I want to touch on some of the difficulties businesses face when trying to benchmark such migrations.

Initially the migration was sized on the simple (and flawed) premise that we have 24 cores running at 1.9GHz (UltraSPARC IV+), and we're now moving to 8 cores running at 2.9GHz (SPARC64 VII); therefore the box will be X times faster, where X is a nice big number that's acceptable to the business. Project signed off, cheque printed – next!

This is great, but from a technical point of view it’s dangerously misleading – and it also leads to some pain when trying to manage this infrastructure. Could we consolidate more databases onto this machine? What’s the performance implication? What’s the real performance gain we can expect from the migration, and what are the key factors that determine this?

First of all we have the OS. Solaris 8 is the Honda Civic of operating systems. It’s been around for ages, it’s reliable, it’s not fast and flashy, and you can try pimping it out but you’ll end up wasting money and looking foolish.

Solaris 10 has so many improvements in areas that boost performance (as well as reducing administration pain) that they might as well be different OSs. The only way to do a proper comparison is to sit down with the same hardware and the same application, run through performance tests on both 8 and 10, and then compare the results.

The problem for most businesses is that this just isn't possible – the E2900 is a stable production machine. No-one is going to buy another one when they're about to migrate to new tin, and by the time the machine is freed up the migration will already be done – too late.

Then you've got the actual hardware. The UltraSPARC IV was a cracking bit of kit, but it's been out for almost a decade now, and Fujitsu's SPARC64 is faster in every area. There's a huge technology gap and, again, the only fair test would be to run the same OS and the same application on the two different sets of CPUs – but the surrounding server hardware is so radically different that the results would still be out of whack.

Let's add into the mix some performance improvements within the M3000 itself to highlight this. Initially the SPARC64 VII CPU on offer had a clock speed of 2.52GHz, but the kit for this particular project has 2.75GHz CPUs. That's only a 9% performance increase, right?

*bzzzrt* Computer says “No”:

  • faster system clock at 306MHz
  • faster memory – now using 667MHz DIMMs (the memory runs at 612MHz, double the system clock)
  • faster interconnect to the Jupiter System Controller at 1224MHz (four times the system clock)

This actually adds up to a 23% speed increase for the CPU overall. It highlights the effectiveness of balanced RISC computing platforms, compared with the single-minded "clock speed is king" focus of x86. It doesn't half make a mess of your benchmarking figures, though.

Mr. Benchmark has a great post that goes into far more depth about this sort of thing, and I highly recommend reading his blog.

The conclusion? Simple "compare clock speed and number of cores" comparisons are very, very dangerous, and can lead to a massive over-spec of new systems. Often, though, this will be the only comparison available, unless you can get early access to a vendor's test lab. In short: tread carefully, and think about *all* aspects of system performance.

HPC Benchmarking

Some videos are available from Sun's HPC Consortium, which was held last year in Portland alongside the SC09 conference.

One of the more interesting ones is the presentation by Yan Fisher, Benchmark Lead in Sun's Technical Marketing Systems Group, giving an update on benchmarking in HPC.

Head on over to Sun’s HPC Watercooler to watch it.

Optimising performance for parallel processing

Over at the Sun HPC Watercooler there's a great video from Acumem CTO Professor Erik Hagersten on how to migrate legacy code to multicore architectures, and how to optimise performance for parallel architectures.

Finding single-core processors in servers is almost impossible now, and with processors like Sun's UltraSPARC T2+ and NVIDIA's GPU solutions, parallel processing (and the associated performance issues) is going to be a hot topic over the next few years.

The full video can be viewed on the HPC Watercooler – well worth a watch.

Solaris kernel tuning when upgrading to Solaris 10

When migrating to Solaris 10, most people focus on the more obvious changes, such as using SMF to manage services and ZFS for filesystems. Something that seems to catch lots of people by surprise is what has happened to kernel tunables in Solaris 10.

In Solaris, the kernel is largely self-tuning, with each new major release and kernel patch building on this. There's not really that much to tweak, and most of the time you can just leave Solaris to get on with it.

There are always edge cases, though. Oracle is a classic one, and if you’re doing HPC, big compute jobs, or lots of I/O intensive work, you’ll want to start tweaking.

In previous releases of Solaris, this involved digging out the correct syntax from the Solaris documentation, working out what the values should be, then editing /etc/system, followed by a reboot.
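
For reference, the old-style /etc/system entries looked something like this (the values here are purely illustrative, not recommendations):

    * System V IPC tuning, Solaris 8/9 style - illustrative values only
    set shmsys:shminfo_shmmax=4294967296
    set semsys:seminfo_semmns=2048
    set msgsys:msginfo_msgmni=1024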

Oracle DBAs in particular will be distressed to note that Solaris 10 happily ignores the old System V IPC settings in /etc/system. If you're lucky, you'll see some complaints about deprecated settings in /var/adm/messages.

Lots of the defaults have now been changed in Solaris 10, and they’re a lot more sensible – you shouldn’t really need to play with System V IPC tunables like semaphores and message queues.
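
You can check the defaults in force for yourself – prctl will dump the resource controls applying to any process, such as your current shell:

    # show all resource controls for the current shell
    prctl $$

    # or just the one you're interested in
    prctl -n project.max-shm-memory $$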

The Solaris 10 Tunable Parameters Reference Manual can be found at http://download.oracle.com/docs/cd/E19253-01/817-0404/

Things have changed a bit for Oracle databases as well, which is the most common kernel-tuning case most people will encounter. Solaris Resource Controls – aka projects – are now used to set things up for Oracle.

Setting up a project for Oracle, and then defining the kernel settings, is pretty straightforward:

  • Set up a new project for the Oracle user:

    projadd user.oracle

  • Modify the project to increase the number of semaphores:

    projmod -s -K "process.max-sem-nsems=VALUE" 'user.oracle'

    Where VALUE should be the total number of Oracle processes plus 10. Chris Gerhard has an excellent blog entry discussing the pros and cons of fiddling with the number of semaphores.

  • Modify the project to set the maximum shared memory segment size:

    projmod -s -K "project.max-shm-memory=(privileged,VALUE,deny)" 'user.oracle'

    This corresponds to the old shmsys:shminfo_shmmax kernel tunable in /etc/system.

Once you've done that, you're all set. The old semsys:seminfo_semmns tunable is now deprecated, and you should find that the resource controls that replaced the old semsys:seminfo_semopm and msgsys:msginfo_msgmni tunables have defaults high enough that you won't need to tweak them.
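
As a concrete end-to-end example, here's the whole sequence for a database with roughly 490 Oracle processes and an 8GB SGA, followed by a quick sanity check. The project name matches the steps above, but the values are illustrative – size them for your own system:

    # create the project and set the controls
    projadd user.oracle
    projmod -s -K "process.max-sem-nsems=500" 'user.oracle'
    projmod -s -K "project.max-shm-memory=(privileged,8589934592,deny)" 'user.oracle'

    # confirm the settings have taken
    projects -l user.oracle
    prctl -n project.max-shm-memory -i project user.oracle

Bear in mind the controls only apply to processes started under the project, so Oracle needs to come up under it (via a fresh login for the oracle user, or newtask) before the new limits take effect.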

There are still lots of Solaris 8 and Solaris 9 machines out there that are only just coming to the end of their service life. Although Solaris 10 has been out for a while, and Solaris 11 is due for release soon, many clients are only now tackling the challenges that a Solaris 10 migration brings.

Extracting EMC Symmetrix Data with Orca

One of the problems with using big disk arrays is the difficulty of getting meaningful reporting out of them. All the vendors' tools are closed source, and the vendor's own expertise is often missing or seriously lacking when it comes to plotting performance trends.

“Just add more cache” is the same tired refrain vendors always give. No. I’m not going to recommend to clients that they spend a huge sum of money buying more SAN cache until I can prove the SAN actually needs it.

In March 2004 I wrote an article for SysAdmin Magazine showing how to use the symcli command-line tools in conjunction with Orca to plot historic performance graphs, giving the host's view of the Symmetrix array's performance.
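
The gist of the approach is to sample the symcli statistics on a schedule and let Orca graph the resulting columns. A minimal sketch of the collection side (the interval, sample count and log path are illustrative – the article has the real collection script and the Orca configuration):

    # sample Symmetrix stats every 5 minutes, 12 samples per cron run,
    # appending to a dated log for Orca to pick up
    symstat -i 300 -c 12 >> /var/orca/symmetrix/symstat.`date +%Y%m%d`.log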

You can find the original article, complete with diagrams and code, on SysAdmin Magazine’s website at http://www.samag.com/documents/s=9364/sam0403f/0403f.htm
