SolarisSSD
Installation of the SSD card
- Some tuning steps applied before insertion of the SSD card :
- added to /kernel/drv/sd.conf :
sd-config-list = "ATA MARVELL SD88SA02","throttle-max:32, disksort:false, cache-nonvolatile:true";
- added to /etc/system :
set zfs:zfs_mdcomp_disable = 1
More information on the tuning side : http://wikis.sun.com/display/Performance/Tuning+ZFS+for+the+F5100 (a quick check of the /etc/system setting is sketched below)
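- After the next reboot, the zfs_mdcomp_disable tunable can be read back from the running kernel (a suggested check, not part of the original procedure; it assumes mdb is available, as it is on a standard Solaris 10 install) :
echo "zfs_mdcomp_disable/D" | mdb -k
A value of 1 means ZFS metadata compression is disabled as intended.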
- Shut down the fileserver
- Insert the F20 PCIe HBA card into the fileserver (screwdriver needed)
- Restart the fileserver. When it is up, type in a shell :
touch /reconfigure
and reboot the fileserver.
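- As an alternative to the touch /reconfigure + reboot sequence (not in the original procedure), a reconfiguration boot can be requested directly, or the device tree rescanned without a further reboot :
reboot -- -r
devfsadm -v
Either way, the goal is simply to have Solaris create device nodes for the new HBA and its flash modules.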
- Now, you should see 4 new devices using the format command :
-bash-3.00# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <DEFAULT cyl 60797 alt 2 hd 255 sec 252>
          /pci@0,0/pci10de,377@a/pci1000,1000@0/sd@0,0
       1. c1t1d0 <DEFAULT cyl 60797 alt 2 hd 255 sec 252>
[...SNIP...]
      46. c6t6d0 <ATA-HITACHI H7220AA3-A28A-1.82TB>
          /pci@3c,0/pci10de,376@f/pci1000,1000@0/sd@6,0
      47. c6t7d0 <ATA-HITACHI H7220AA3-A28A-1.82TB>
          /pci@3c,0/pci10de,376@f/pci1000,1000@0/sd@7,0
There should be 4 more entries here (expected c7t0d0 to c7t3d0, but the names could differ), for example :
      48. c7t0d0 <ATA-MARVELL SD88SA02-D10R-22.89GB>
          /pci@0,0/pci10de,376@e/pci1000,3150@0/sd@18,0
Exit from the format command without selecting anything; it is only used here to view the devices.
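- The new flash modules can also be spotted in the drive inventory (a suggested check, not part of the original procedure) :
iostat -En
The 4 SSD modules should show up with Product MARVELL SD88SA02, in addition to the HITACHI data disks.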
Creation of the new zpool
- Destroy old zpools :
zpool destroy storage
zpool destroy storage2
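- zpool destroy is irreversible, so before running the two commands above it may be worth confirming exactly which pools exist and that nothing still depends on them (a suggested check; it assumes the old pools are named storage and storage2 as shown) :
zpool list
zpool status storage storage2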
- Create the new big zpool :
zpool create -f storage \
  raidz2 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
  raidz2 c2t6d0 c2t7d0 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0 \
  raidz2 c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0 c4t6d0 c4t7d0 c5t0d0 c5t1d0 \
  raidz2 c5t2d0 c5t3d0 c5t4d0 c5t5d0 c5t6d0 c5t7d0 c6t0d0 c6t1d0 c6t2d0 c6t3d0 c6t4d0 c6t5d0
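- A quick sanity check (not in the original procedure) is to confirm that the pool came out as three 10-disk raidz2 vdevs plus one 12-disk raidz2 vdev :
zpool status storage
zpool list storage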
- Add some spares :
zpool add storage spare c6t6d0 c6t7d0
- Configure log and cache to be on the 4 SSD modules :
zpool add storage log mirror c7t0d0 c7t1d0
zpool add storage cache c7t2d0 c7t3d0
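- With the spares, log and cache attached, the complete layout can be reviewed (a verification step using the device names assumed above) :
zpool status storage
The output should list the 4 raidz2 vdevs, the mirrored log (c7t0d0/c7t1d0), the 2 cache devices (c7t2d0/c7t3d0) and the 2 spares (c6t6d0/c6t7d0).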
Creation of the ZFS filesystems
- The ZFS filesystems will be created with quotas :
zfs create storage/sandbox
zfs set quota=500G storage/sandbox
zfs create storage/swmgrs
zfs set quota=500G storage/swmgrs
zfs create storage/localgrid
zfs set quota=1000G storage/localgrid
zfs create storage/user
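- The quotas can be verified with a recursive listing (a suggested check, not part of the original procedure) :
zfs list -o name,quota,used,mountpoint -r storage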
- Creation of the new software areas :
mkdir /storage/sandbox/beappss
mkdir /storage/sandbox/becmss
mkdir /storage/sandbox/betests
mkdir /storage/sandbox/cmss
mkdir /storage/sandbox/dtes
mkdir /storage/sandbox/hones
mkdir /storage/sandbox/opss
chown 22000:22050 /storage/sandbox/beappss
chown 20000:20050 /storage/sandbox/becmss
chown 21000:21050 /storage/sandbox/betests
chown 4000:4050 /storage/sandbox/cmss
chown 5000:5050 /storage/sandbox/dtes
chown 30000:30050 /storage/sandbox/hones
chown 29000:29050 /storage/sandbox/opss
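- Since ownership is set with numeric UID:GID pairs, a listing shows whether they resolve to the expected accounts on this host (a suggested check) :
ls -ln /storage/sandbox
ls -l /storage/sandbox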
Sharing the new filesystems
- Modify /etc/dfs/dfstab :
share -F nfs -o rw /storage/user
share -F nfs -o rw /storage/swmgrs
share -F nfs -o rw /storage/localgrid
share -F nfs -o rw,log=global /storage/sandbox
- Apply the changes and check :
shareall
share
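- From a Solaris NFS client, the exports can then be checked and test-mounted (a suggested check; "fileserver" below is a placeholder for the real server name) :
showmount -e fileserver
mount -F nfs fileserver:/storage/user /mnt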