Xsan 2.3 or later: Deactivating a storage pool
It may be necessary to deactivate (down) an Xsan data storage pool in a volume while retaining access to data on the volume's other storage pools. For example, you may need to do this if a LUN issue in one storage pool prevents Xsan clients from mounting the Xsan volume.
Learn how to disable the affected storage pool to allow Xsan clients to continue using the rest of the Xsan volume.
Note: The metadata and journal storage pool cannot be taken down while retaining access to data on the volume. This procedure works only if the volume contains more than one data storage pool.
If the storage pool will only be temporarily unusable
- On the active metadata controller, enter cvadmin interactive mode by executing this command in Terminal:
sudo cvadmin
- Select the volume:
select Volume_name
- Take the storage pool down:
down Storage_pool_name
- To bring the storage pool back up, execute this command while in cvadmin interactive mode with the volume selected:
up Storage_pool_name
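For example, to take a storage pool named "Video" on a volume named "SanVol" down and later bring it back up (both names are illustrative), the cvadmin session looks like this sketch:

```shell
# Run on the active metadata controller (volume and pool names are examples)
sudo cvadmin
# Then, within cvadmin interactive mode:
#   select SanVol      # attach to the volume
#   down Video         # take the "Video" storage pool down
#   up Video           # later, bring the storage pool back up
#   quit
```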
If the storage pool will be down for an extended period of time
If the storage pool must stay down for an extended period, or be taken down permanently, the files stored on the affected storage pool should be deleted from the volume.
In order to remove the files from the storage pool, you'll first need to determine the storage pool's number. Storage pools, also called stripe groups, are listed in order in the volume's .cfg file. The .cfg file is found at:
/Library/Preferences/Xsan/Volume_name.cfg
Below the line "# A stripe section for defining stripe groups", locate the appropriate storage pool (stripe group). Stripe groups are numbered in order of appearance: the MetadataAndJournal stripe group is always 0, the next stripe group is 1, and so forth.
In this example, Stripe Group "MetadataAndJournal" is 0, StripeGroup "Video" is 1, StripeGroup "Audio" is 2, StripeGroup "Other-1" is 3, and so forth.
[StripeGroup "MetadataAndJournal"]
Status Up
Exclusive Yes
Metadata Yes
Journal Yes
Read Enabled
Write Enabled
MultiPathMethod Rotate
StripeBreadth 16
Node "metalun" 0

[StripeGroup "Video"]
Status Up
Exclusive No
Metadata No
Journal No
Affinity "Video"
Read Enabled
Write Enabled
MultiPathMethod Rotate
StripeBreadth 16
Node "Media1" 0
Node "Media2" 1
Node "Media3" 2
Node "Media4" 3

[StripeGroup "Audio"]
Status Up
Exclusive No
Metadata No
Journal No
Affinity "Audio"
Read Enabled
Write Enabled
MultiPathMethod Rotate
StripeBreadth 16
Node "XsanLUN1" 0
Node "XsanLUN2" 1
Node "XsanLUN3" 2
Node "XsanLUN4" 3

[StripeGroup "Other-1"]
Status Up
Exclusive No
Metadata No
Journal No
Affinity "Other"
Read Enabled
Write Enabled
MultiPathMethod Rotate
StripeBreadth 16
Node "XsanLUN5" 0
Node "XsanLUN6" 1
Node "XsanLUN7" 2
Node "XsanLUN8" 3
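Because the stripe group number is simply the group's position in the file, it can also be read off programmatically. A minimal sketch, run here against a shortened stand-in copy of the config (the /tmp path and entries are illustrative, not a real volume configuration):

```shell
# Create a shortened stand-in for /Library/Preferences/Xsan/Volume_name.cfg
cat > /tmp/Volume_name.cfg <<'EOF'
[StripeGroup "MetadataAndJournal"]
Status Up
[StripeGroup "Video"]
Status Up
[StripeGroup "Audio"]
Status Up
EOF

# Print each stripe group with its number, in file order starting at 0
awk -F'"' '/^\[StripeGroup /{print n++ ": " $2}' /tmp/Volume_name.cfg
# → 0: MetadataAndJournal
#   1: Video
#   2: Audio
```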
The following Terminal command lists all of the files written to storage pool storage_pool_number on the volume Volume_name. The volume must be mounted to run snfsdefrag.
sudo snfsdefrag -r -l -m0 -G storage_pool_number /Volumes/Volume_name
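It can help to save this listing to a file before deleting anything, so the affected files can be reviewed first. A sketch, where the pool number, volume name, and output path are examples:

```shell
# List files on stripe group 2 of volume "SanVol" and save the list for review
sudo snfsdefrag -r -l -m0 -G 2 /Volumes/SanVol > /tmp/pool2_files.txt
# Count how many files are affected
wc -l < /tmp/pool2_files.txt
```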
Copy any files you need to keep to another storage pool or volume, then delete them from the affected storage pool. After the files have been removed, the storage pool's status should be changed to Down in the volume's .cfg file on the metadata controllers. An easy way to accomplish this is:
- Change the role from controller to client on all the metadata controllers except the active one.
- Using these guidelines, edit /Library/Preferences/Xsan/Volume_name.cfg file on the active metadata controller.
- Below the line "# A stripe section for defining stripe groups", locate the appropriate stripe group and change its Status from Up to Down.
- Save the file.
- Change the role back to controller on the desired systems.
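The .cfg edit in the steps above can also be scripted. A minimal awk sketch that flips Status from Up to Down for one stripe group only, run here against a shortened stand-in file (the stripe group names and /tmp paths are illustrative):

```shell
# Stand-in for a volume .cfg with two data stripe groups
cat > /tmp/demo.cfg <<'EOF'
[StripeGroup "Video"]
Status Up
Read Enabled
[StripeGroup "Audio"]
Status Up
EOF

# Change Status to Down only inside the [StripeGroup "Video"] section
awk -v target='Video' '
  /^\[StripeGroup /{ in_target = ($0 ~ ("\"" target "\"")) }
  in_target && /^Status Up/ { sub(/Up/, "Down") }
  { print }
' /tmp/demo.cfg > /tmp/demo_down.cfg
```

Always keep a backup copy of the original .cfg before editing it, and verify the result before changing any system's role back to controller.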
If it's possible to use the LUN again at a later date, the LUN can be added back. This applies, for example, if the LUN was originally a RAID 0 stripe and the problematic drive was replaced and the RAID re-created, or if a LUN of identical size is available. Make sure the LUN is labeled correctly, then change the storage pool status from Down to Up following the steps above.
While the storage pool is down, files stored in the affected stripe group will be visible but will not be usable by Xsan clients. Attempting to access one of these files will generate a log entry similar to this:
macname kernel <Debug>: acfs 'Volume_name': I/O attempt on DOWN/OFFLINE stripe group 4 cookie 0x13