UNIX Consulting and Expertise
Golden Apple Enterprises Ltd. » Posts in 'SAN and Storage' category

The loss of critical skills in IT

Looking for UNIX and IT expertise? Why not get in touch and see how we can help?

There’s a recurring problem in IT, and although it’s been going on for years, only now is it starting to bite. With the rise of easy-to-use GUIs for managing systems, critical skills are starting to disappear. New starters in all areas of IT can quickly manage complex systems without ever learning the hard stuff underneath – which means that, when things break, the outages are longer and the fixes prove more difficult.

Here’s a great article from Enterprise Storage Forum. It covers the more common RAID levels that are used today, but also touches on how this knowledge is being lost to storage administrators.

No one who cut their teeth on Veritas Volume Manager or Solstice DiskSuite can deny that storage administration is easier now than it’s ever been for new admins. However, the knowledge of how to carve up disks, how storage virtualisation works, and how to eke every last ounce of performance out of your system is being lost. Trust the SAN storage to optimise itself. Buy some more cache. Monitoring tools are so expensive from the vendor, and we don’t really understand what they do – just fit some more disk trays.

On a wider level, this article in Wired highlights some of the concerns in the US from DARPA about the declining numbers of teenagers learning maths, technology, and hard science – which is slowly leading to a shortage of hard-core geeks.

It’s a problem I’m seeing in more and more companies – even big consulting outfits. The people are great, they can learn quickly, and they can manage large, complex solutions – but they don’t understand the underlying technology at a low level. More often than not, this leads to two extremes: underestimating what the technology can do (leading to excess cost for the client, as they spend more on kit and consultancy than they need), or overestimating what it’s capable of (leading to excess cost for the client, as they have to buy yet more kit and consultancy when the solution falls short).

This is clearly a pretty poor state of affairs: clients get a raw deal on their IT projects, consultancy companies get a bad reputation as shysters – and fewer people want to get into a career in IT because, let’s face it, it’s a bit of a mess.

The solution? I don’t know. When I was a teenager we had the Computer Literacy Project, and BBC Micros we had to hack about with to get them to do anything. It was an instructive education and a great time to be involved in IT.

Now, however, hardware hackers are viewed with suspicion. The endless war on terror is making things difficult for someone who carries some homebrew electronics in their pocket – Hack-A-Day has some good editorial coverage here.

Ultimately, I think the hacking scene holds the key to getting more people interested in hardware and software – just as it did 30 years ago.

Solaris SVM metasets

The Solaris disk management/virtualisation tools have gone through many name changes – ODS, SDS, SVM – but the basic tools have remained the same. With the introduction of Sun Cluster, Sun needed a way to share storage between cluster nodes. That functionality had to be added to SVM, and the result was metasets.

Normally, metadevices are local to a host. Before encapsulating disk slices into metadevices, you first have to dedicate a disk slice to store the metadevice state database (with the metadb command). You can then create replicas of the metadb and scatter them across slices, so that a single corrupt copy doesn’t lose all your metadevice information.

metasets are, in a nutshell, a collection of metadevices that have their own metadb state databases.

This works well in a cluster – the metaset metadevices, along with their metadbs, can be moved around, living on whichever node needs to mount their filesystems.

It also makes things easy for us when mounting LUNs from a SAN. If we encapsulate the LUNs within metadevices, in their own metaset, then if our host dies we can simply re-import everything on another host. Think of it as very basic – but very quick and easy – disaster recovery.

Think of metasets as being very similar to disk groups in Veritas Volume Manager.

Creating the metaset is a simple process. First of all we define our metaset and add our host to it.

bash-3.00# metaset -s test -a -h avalon

Syntax is pretty straightforward:

  • -s is used to specify which metaset we’re using
  • -a is the add flag. Guess what -d does?
  • -h specifies the hostname which owns this metaset

All metasets are owned by at least one host (it’s how they track who can access them). If you’re in a cluster environment, multiple hosts will own the metaset, allowing the cluster software to move the metadevices between nodes.

For a single hosted metaset, however, we just need to add one host, and we need to make sure that it will automatically take ownership and import the metaset on boot.

All we have to do to make this happen is enable the autotake flag on the metaset:

bash-3.00# metaset -s test -A enable

And that completes the setup of the metaset. We then just select which LUNs we’re interested in, and add them in to the metaset:

bash-3.00# metaset -s test -a c7t60060E80141189000001118900001A10d0 \
c7t60060E80141189000001118900001A17d0 \
c7t60060E80141189000001118900001A18d0 \
c7t60060E80141189000001118900001A19d0

Note that when we add devices to a metaset (disks or LUNs) we only need to specify the device name – not a slice, and not s2 (the Solaris convention of referencing an entire disk via a single reserved slice).
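If you're scripting this, a minimal sketch of turning a slice name back into the bare device name metaset wants (the `strip_slice` helper is just for illustration, not a real tool):

```shell
#!/bin/sh
# Strip a trailing sN slice suffix from a Solaris ctd device name, so
# "c7t...d0s2" becomes "c7t...d0". A bare device name (ending in dN)
# passes through unchanged, since the pattern only matches "s" + digits.
strip_slice() {
    echo "$1" | sed 's/s[0-9][0-9]*$//'
}

strip_slice "c7t60060E80141189000001118900001A10d0s2"
# prints c7t60060E80141189000001118900001A10d0
```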

Normally, when you create a metadevice, you are encapsulating a slice that already exists on disk, so the data stays intact. This is not the case when importing a disk into a metaset.

The act of importing a disk re-partitions it. All existing partitions are deleted, with a tiny slice on s7 being created to store the metadb replica, and the rest given over to s0. Note that s2 – the usual way of addressing a disk in Solaris – is also removed.

Here’s what the partitions look like on our root disk:

bash-3.00# prtvtoc /dev/dsk/c1t0d0s2
* /dev/dsk/c1t0d0s2 partition map
* Dimensions:
*     512 bytes/sector
*     107 sectors/track
*      27 tracks/cylinder
*    2889 sectors/cylinder
*   24622 cylinders
*   24620 accessible cylinders
* Flags:
*   1: unmountable
*  10: read-only
*                          First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
       0      2    00    8389656  23071554  31461209
       1      3    01          0   8389656   8389655
       2      5    00          0  71127180  71127179
       5      7    00   31461210  20974140  52435349
       6      0    00   52435350  18625383  71060732
       7      0    00   71060733     66447  71127179

And here’s what they look like on a LUN that’s part of the metaset:

bash-3.00# prtvtoc /dev/dsk/c7t60060E80141189000001118900001A10d0s0
* /dev/dsk/c7t60060E80141189000001118900001A10d0s0 partition map
* Dimensions:
*     512 bytes/sector
*     512 sectors/track
*      15 tracks/cylinder
*    7680 sectors/cylinder
*   13653 cylinders
*   13651 accessible cylinders
* Flags:
*   1: unmountable
*  10: read-only
*                          First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
       0      4    00      15360 104824320 104839679
       7      4    01          0     15360     15359
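A quick way to sanity-check that layout from a script is to boil the prtvtoc output down to just the partitions present. This is a hedged sketch (the `list_parts` name is mine, and in real use you'd pipe `prtvtoc /dev/dsk/...s0` straight into it); here it's fed a saved sample so it's self-contained:

```shell
#!/bin/sh
# List the data partitions from prtvtoc output: a metaset member
# should only show s0 (the data) plus a small s7 (the metadb replica).
# Comment lines start with '*', so we only take lines whose first
# field is a partition number; field 5 is the sector count.
list_parts() {
    awk '$1 ~ /^[0-7]$/ { printf "s%s %s sectors\n", $1, $5 }'
}

list_parts <<'EOF'
* /dev/dsk/c7t60060E80141189000001118900001A10d0s0 partition map
*                          First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
       0      4    00      15360 104824320 104839679
       7      4    01          0     15360     15359
EOF
# prints:
#   s0 104824320 sectors
#   s7 15360 sectors
```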

We can query the metaset and have a look at its contents, to check everything is OK:

bash-3.00# metaset -s test

Set name = test, Set number = 5

Host                Owner
  avalon            Yes (auto)

Drive                                            Dbase

/dev/dsk/c7t60060E80141189000001118900001A10d0   Yes
/dev/dsk/c7t60060E80141189000001118900001A17d0   Yes
/dev/dsk/c7t60060E80141189000001118900001A18d0   Yes
/dev/dsk/c7t60060E80141189000001118900001A19d0   Yes
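If you need the member drives in a script – say, to loop over them with prtvtoc – a hedged sketch of pulling the paths out of the metaset output (the `list_drives` name is mine; in real use you'd pipe `metaset -s test` into it, here it's fed a saved sample):

```shell
#!/bin/sh
# Extract the member drive paths from `metaset -s test` output:
# drive lines are the only ones whose first field is a /dev/dsk path.
list_drives() {
    awk '$1 ~ /^\/dev\/dsk\// { print $1 }'
}

list_drives <<'EOF'
Set name = test, Set number = 5

Host                Owner
  avalon            Yes (auto)

Drive                                            Dbase

/dev/dsk/c7t60060E80141189000001118900001A10d0   Yes
/dev/dsk/c7t60060E80141189000001118900001A17d0   Yes
EOF
# prints the two /dev/dsk/... paths, one per line
```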

Once we’ve populated our metaset, we create metadevices as normal. The only extras with the metainit command are that we need to specify which metaset we’re using, and that we’ll always be using s0.

Let’s create a single metadevice striped across all 4 LUNs in our metaset:

bash-3.00# metainit -s test d100 1 4 /dev/dsk/c7t60060E80141189000001118900001A10d0s0 \
/dev/dsk/c7t60060E80141189000001118900001A17d0s0 \
/dev/dsk/c7t60060E80141189000001118900001A18d0s0 \
/dev/dsk/c7t60060E80141189000001118900001A19d0s0

metainit works in the same way it’s always done – we need to specify the full path to the slice we’re using – but with the additional -s flag to tell metainit which metaset we want to add the metadevice to.

We can use the summary flag to metastat (sorry, Solaris 10 only) to show us the summary of what we’ve just configured:

bash-3.00# metastat -c -s test
test/d100     s  199GB /dev/dsk/c7t60060E80141189000001118900001A10d0s0 \
/dev/dsk/c7t60060E80141189000001118900001A17d0s0 \
/dev/dsk/c7t60060E80141189000001118900001A18d0s0 \
/dev/dsk/c7t60060E80141189000001118900001A19d0s0

metasets are an easy way to group together storage and filesystems in Solaris, especially where the storage is external to your host, and you’d like the flexibility of importing it to another host in the future – for example, as part of some DR work if the host fails.

What version of the SAN Foundation Suite is installed?

This is a constant pain that rears its ugly head again and again. You have a Solaris machine with the SAN Foundation Suite installed, and you want to find out what version it is.

Well, you’d do a pkginfo on the SUNWsan package, right? Wrong.

# pkginfo -l SUNWsan
      NAME:  SAN Foundation Kit
  CATEGORY:  system
      ARCH:  sparc
   VERSION:  1.0
   BASEDIR:  /
    VENDOR:  Sun Microsystems, Inc.
      DESC:  This package provides a support for the SAN Foundation Kit.
    PSTAMP:  sanserve-a20031029172438
  INSTDATE:  Nov 24 2008 17:46
   HOTLINE:  Please contact your local service provider
    STATUS:  completely installed

Every version of SUNWsan reports 1.0. This is unforgivably rubbish – why hasn’t it been sorted yet?

To find out the real version of the SFS, you need to do something far more esoteric – look at the versions of the relevant kernel modules.

If you’ve got SFS 4.3 or later, they will be reported as ‘build dates’:

# modinfo | egrep '(SunFC|mpxio|scsi_vhci)'   
  79  138f975  15f10 149   1  fp (SunFC Port v20051027-1.61)
  80  13a50cd   7fa4   -   1  fctl (SunFC Transport v20051027-1.33)
  82  13ab699  5032a 153   1  qlc (SunFC Qlogic FCA v20050926-1.50)
  113 78196000  20313 150   1  fcp (SunFC FCP v20051027-1.80)
  114 781b8000   55fc   -   1  mpxio (MDI Library v20051027-1.17)
  115 781be000   c8cc 189   1  scsi_vhci (SCSI vHCI Driver v20051027-1.40)
  222 780de000   783f 154   1  fcip (SunFC FCIP v20050824-1.28)

If the version of SFS is earlier than 4.3, then the version is directly referenced for each driver:

# modinfo | egrep '(SunFC|mpxio|scsi_vhci)'   
  85 103199bc  10c23 149   1  fp (SunFC Port v5.e-2-1.18)
  86 1032a0c7   6f28   -   1  fctl (SunFC Transport v5.e-2-1.16)
  87 1032f747  2db28 153   1  qlc (SunFC Qlogic FCA v5.e-2-1.16)
  88 7825a000   fe94 150   1  fcp (SunFC FCP v5.e-2-1.17)
  89 7826a000   49ac   -   1  mpxio (MDI Library v5.e-1-1.7)

If you want to decipher what versions of SFS the build dates map to, you need to look at Sunsolve Document ID 216809. It can be found at http://sunsolve.sun.com/search/document.do?assetkey=1-61-216809-1

You seem to need a Sunsolve login (free) to view the document, and as far as I can tell it’s not restricted to contract customers only. If you find out that’s not the case, please let me know, and I’ll stick a copy up here.

Tape Devices in Solaris

Yes, people out there are still using tape – and in fact in certain situations tape still has many advantages over disk backup or site replication. One of the many quirks with Solaris is how tape devices are addressed, so in this post I’m going to quickly cover the options.

Solaris tape devices all live under /dev/rmt, where rmt stands for Raw Magnetic Tape device.

First tape device name: /dev/rmt/0
Second tape device name: /dev/rmt/1

Each tape device also has special characters added after it to specify density and the characteristics of the drive that you want to use.

So the actual format you’d use to address a drive would be /dev/rmt/XY, where:

  • X is the tape drive number (0, 1, etc.)
  • Y can be any one of the following
    • l – Low density
    • m – Medium density
    • h – High density
    • u – Ultra density
    • c – Compressed density
    • n – No rewinding

It’s actually pretty straightforward. If you want to use tar to back up to your first tape drive, using compression, and not rewinding the media afterwards (so you can append to the backup), you’d use the device /dev/rmt/0cn.
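The naming scheme is simple enough to compose in a script. A minimal sketch (the `tape_dev` helper is hypothetical, purely for illustration):

```shell
#!/bin/sh
# Build a /dev/rmt/XY tape device path from a drive number plus zero
# or more option letters, following the scheme described above.
#   tape_dev 0 c n  ->  /dev/rmt/0cn
tape_dev() {
    drive="$1"; shift
    opts=""
    for o in "$@"; do
        opts="${opts}${o}"
    done
    echo "/dev/rmt/${drive}${opts}"
}

tape_dev 0 c n
# prints /dev/rmt/0cn
```

So something like `tar cvf "$(tape_dev 0 c n)" /export/home` would write a compressed, non-rewinding backup of /export/home to the first drive.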

On top of this, Solaris comes with a utility called mt, which is used to carry out some simple tape operations.

mt takes the -f option to specify which device it should talk to, and it’s then mainly used for these three operations:

  1. Rewinding a tape
    # mt -f /dev/rmt/0 rewind
  2. Displaying the status of a tape drive
    # mt -f /dev/rmt/0 status
  3. Retensioning a tape
    # mt -f /dev/rmt/0 retension

Most people will have some sort of front end software to handle their tape backups – Oracle RMAN, Networker, Netbackup or similar – but if you need to do a quick test, or are just using tar or another backup utility, then this post should point you in the right direction.

Solaris 9 can’t import its SVM metasets when booting

I came across this particular issue for a client, and it turned out to be a harsh gotcha in Solaris 9.

Quick recap: SVM metasets are a group of disks (usually from a SAN) that have their own meta state databases. They grew out of Sun Cluster as a way to share storage between cluster nodes, using SVM, and have since become a really handy way of managing SAN volumes.

Anyway, Solaris 9 4/04 introduced the ability to have ‘autotake’ metasets. Basically, one host was the master, and it could automatically import and manage the metaset on boot. This was great, because it finally swept aside the last baggage of Sun Cluster, and meant you could have your metasets referenced in /etc/vfstab and mount them at boot – just like real disks.

And there was much rejoicing across the land.

In this particular case, there was a host running Solaris 9 (for client software reasons) which had many terabytes of SAN LUNs mounted as metasets. I say ‘had’ because, when it rebooted, the machine said it couldn’t autotake the disk set because it wasn’t the owner, before dropping to single-user mode complaining it couldn’t check any of the filesystems.

Odd. A quick check from single user mode, and yes indeed – the metaset was configured for autotake, but the host wasn’t the owner. Comment the (many) filesystems out of /etc/vfstab, continue the boot, and check again once at run level 3. Hang on – now the host is the metaset owner.

Whisky Tango Foxtrot, over. A quick Google threw up far too many suggestions to hack the startup scripts so that the SVM daemons start before the filesystem mounts. Not a great idea.

A very quick dig through Sunsolve turned up Sun BugID 6276747 – “Auto-take fails to work with fabric disks”. It turns out this is an issue with the Solaris 9 SAN Foundation Suite, and how the kernel initialises SAN fabric LUNs, as opposed to FC-AL LUNs.

Adding the following line to /etc/system:

set fcp:ssfcp_enable_auto_configuration = 1

A quick reboot later, and behold! The metasets are imported and mounted correctly, with no further problems. This appears to be purely a Solaris 9 issue, so apart from old client apps I’m hoping we can leave this one behind.
