Category: Storage

Stupid board - zfs could be faster

pci, instance #0
pci1028,20e (driver not attached)
isa, instance #0
motherboard (driver not attached)
pit_beep, instance #0
pci1028,20e (driver not attached)
pci1028,20e (driver not attached)
pci1028,20e, instance #0
keyboard, instance #0
mouse, instance #1
pci1028,20e, instance #0
pci10de,3f3, instance #0
pci-ide, instance #0
ide (driver not attached)
ide (driver not attached)
pci-ide, instance #1
ide, instance #2
cmdk, instance #2
ide, instance #3
cmdk, instance #3
pci-ide, instance #2
ide, instance #4
cmdk, instance #0
ide, instance #5
cmdk, instance #1
pci10de,3e8, instance #0
pci1166,103, instance #2
pci14e4,164c, instance #0
pci10de,3e9 (driver not attached)

Nice, I've been running in IDE mode this whole time: no NCQ without SATA/AHCI. Fuckers.
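Quick way to confirm what the controller is actually bound to (a rough sketch; prtconf -D shows the driver for each node, and the exact node names vary by board):

# if the disks hang off pci-ide, the controller is running in legacy IDE mode
prtconf -D | grep -i ide
# flipping the controller to AHCI/SATA in the BIOS should move the disks under an
# ahci (or nv_sata) node and get NCQ back; device names may change, so expect to
# zpool import afterwards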

EON stuff for VMware, and storage for OpenSolaris

EON rocks. It's kinda hard to set up, but it's freaking rock solid and has dedup.

######### test (won't work yet, will fix later)

# test the NFS export from a Linux client, then watch pool I/O on the EON box
mount -t nfs 192.168.1.19:/zfs-hybrid/vms /mnt
zpool iostat -v zfs-hybrid 5

How do I start NFS server services?
First, import the service manifests
cd /var/svc/manifest/network
svccfg -v import rpc/bind.xml
svccfg -v import nfs/status.xml
svccfg -v import nfs/nlockmgr.xml
svccfg -v import nfs/mapid.xml
svccfg -v import nfs/server.xml
Then, enable them
svcadm enable -r nfs/server
or individually
svcadm enable rpc/bind
svcadm enable nfs/status
svcadm enable nfs/nlockmgr
svcadm enable nfs/mapid
svcadm enable nfs/server
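Quick sanity check afterwards (plain SMF/share commands, nothing EON-specific):

# confirm the services came online
svcs -a | grep -i nfs
# explains anything stuck in maintenance
svcs -x
# list what is actually being shared
share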
##########BASICSSSS

# find all the disks
hd
# make partitions for the slog (ZIL) and L2ARC
format
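If hd isn't around, plain format will also enumerate every disk it sees; piping in an empty line makes it print the list and bail instead of going interactive (a common Solaris trick, nothing EON-specific):

# non-interactive disk listing
echo | format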

zpool create zfs-hybrid raidz c1d0p0 c2d0p0 c5d0p0 c6d0p0

zfs create zfs-hybrid/vms

zfs set compression=on zfs-hybrid
zfs set dedup=on zfs-hybrid

zfs set sharenfs=on zfs-hybrid/vms

# or lock the export down to the ESX/NFS clients (this replaces the plain "on" above)
zfs set sharenfs=rw,nosuid,root=192.168.1.41:192.168.1.42:192.168.1.51 zfs-hybrid/vms
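Worth double-checking before pointing VMs at it (stock status commands, only the pool/dataset names from above are assumed):

# pool layout and health
zpool status zfs-hybrid
# capacity, plus the dedup ratio on dedup-capable builds
zpool list zfs-hybrid
# confirm the properties stuck
zfs get compression,dedup,sharenfs zfs-hybrid/vms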

#########advanced

# attach the SSD partitions: one as the slog (ZIL), one as L2ARC read cache
zpool add zfs-hybrid log /dev/dsk/c3d0p1
zpool add zfs-hybrid cache /dev/dsk/c3d0p2
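The new devices show up as their own "logs" and "cache" sections in the pool status, which is the quickest way to confirm the add took:

zpool status zfs-hybrid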

# 500 GB zvol with a 128k volblocksize to back an iSCSI LU
zfs create -V 500G -b 128k pool/name
# register the Windows node's initiator, create the LU from the zvol,
# then set up the host group and target group
itadm create-initiator iqn.1991-05.com.microsoft:mailbox01-node1.flinetech.ca
sbdadm create-lu /dev/zvol/rdsk/zsan00store/mbx01-node1
stmfadm create-hg zsan00server-hg
stmfadm add-hg-member -g zsan00server-hg iqn.1991-05.com.microsoft:mailbox01-node1.flinetech.ca
stmfadm create-tg tg-mbx0-node0

# rename the auto-generated target to something readable, then add it to the target group
itadm modify-target -n iqn.1986-03.com.sun:02:mbx01-node01 iqn.1986-03.com.sun:02:4b6f1bdc-86ca-611d-92eb-d840016fab80
stmfadm add-tg-member -g tg-mbx0-node0 iqn.1986-03.com.sun:02:mbx01-node01

# map the LU (GUID from sbdadm list-lu) to the host group and target group as LUN 1
stmfadm add-view -h zsan00server-hg -t tg-mbx0-node0 -n 1 600144F0CC12CC0000004A9036D30001
stmfadm list-view -l 600144F0CC12CC0000004A9036D30001
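To see the whole COMSTAR config at a glance (stock list commands):

# targets, LUs, host groups, target groups
itadm list-target -v
sbdadm list-lu
stmfadm list-hg -v
stmfadm list-tg -v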

stmfadm create-hg test-hg

itadm create-initiator iqn.1998-01.com.vmware:esx-02-0b27d1f5
itadm create-initiator iqn.1998-01.com.vmware:esx-01-0d95e7a9

stmfadm add-hg-member -g test-hg iqn.1998-01.com.vmware:esx-02-0b27d1f5
stmfadm add-hg-member -g test-hg iqn.1998-01.com.vmware:esx-01-0d95e7a9
stmfadm create-tg esx

# itadm create-target   (generated target name below)
iqn.1986-03.com.sun:02:21b5da92-8eae-4c74-a214-8a1d29c7ee72

itadm modify-target -n iqn.1986-03.com.sun:02:esx-vol1 iqn.1986-03.com.sun:02:8fba33b4-5e51-ea80-a5fc-d4364afd16b1
itadm modify-target -n iqn.1986-03.com.sun:02:esx-vol1 iqn.1986-03.com.sun:02:21b5da92-8eae-4c74-a214-8a1d29c7ee72

stmfadm add-tg-member -g esx iqn.1986-03.com.sun:02:esx-vol1

stmfadm add-view -h test-hg -t esx -n 1 600144f0998dc30000004bb0ed7d0001

stmfadm list-view -l 600144f0998dc30000004bb0ed7d0001

$#@#@#@##@@
# fix after a power outage: re-import any LU that went missing
stmfadm list-lu -v
stmfadm import-lu /dev/zvol/rdsk/zfs/iscsi
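Since import-lu brings the LU back with its original GUID, the existing views should reattach on their own; a hedged checklist (the GUID placeholder is whatever list-lu reports, and the stmf restart is only a fallback):

# views are keyed by GUID, so they should still be listed
stmfadm list-view -l <GUID-from-list-lu>
# if initiators still don't see the LUN, bounce the STMF service
svcadm restart stmf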

IOPS Testing and Shit

Attaching an SSD as a ZIL (slog) to a ZFS pool, a raidz made up of 4x 500 GB hard drives, resulted in a ~4.89x performance increase under a real-world VMware access pattern (60% random, 65% write); an SSD alone netted a ~10x improvement. With more than one worker assigned, IOPS on the hybrid storage pool stayed at a constant level, whereas the standard pool quickly dropped to unusable levels. When the main OS was moved to a USB device and the other SSD was split into two partitions, one as a read cache (L2ARC) for the ZFS system and one as the ZIL for the raidz pool, with the first SSD in its own pool, a relative ~16x IOPS improvement was seen when running multiple workloads from different NFS datastores.

So anywho, I'm now flooding the fucking pipes and need to LAG the NICs. A VERY cost-effective upgrade ($260 for a hell of an ROI and performance increase).
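A minimal sketch of that LAG on the OpenSolaris side, assuming two e1000g ports and the newer dladm syntax (the interface names are placeholders, the address reuses the NFS server IP from above, and the switch has to speak LACP for the active bit):

# bundle two NICs into one aggregation
dladm create-aggr -l e1000g0 -l e1000g1 aggr0
# optional: run LACP against the switch
dladm modify-aggr -L active aggr0
# bring the storage address up on the aggregation
ifconfig aggr0 plumb 192.168.1.19 netmask 255.255.255.0 up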
