iSCSI, twin-tailed disk, cheap-cheap fibre channel, ???

Vlad Slavoaca shiwan-Re5JQEeQqe8AvxtiuMwx3w at public.gmane.org
Sun Jan 22 08:55:21 UTC 2006


        Hmm. Where to begin...

        To keep this reasonably short, I'll only cover FC-AL2 stuff,
as that's the most prevalent today.
        A Fibre Channel setup will provide the following, with more
items being available as the size of the setup grows:

* 2Gbps full-duplex link. (Lower peak bandwidth than U320 SCSI, but
better real-world I/O.)
* Multihoming to different arrays, through multiport HBAs or multiple HBAs.
* Having multiple systems access the same array (think of an MS SQL
cluster, where the DB resides on a shared medium between the servers
so they can fail over statefully; for truly simultaneous access a
cluster filesystem such as GFS is required).
* Hardware RAID on the disk shelf(ves).
* Multiple redundant links.
* Switched FC infrastructure (the theoretical maximum number of nodes
is in the millions).
* Custom redundancy options from disk-level to shelf-level.
* Custom provisioning and backup (i.e. you can back up the whole
system at the disk level, instead of at the OS or file level).
* Long-haul transport (e.g. an LZ-class extended long-haul mini-GBIC
can handle up to 120km of SMF (single-mode fiber)) - think having
your physical storage in another datacentre.
* Storage virtualization and zoning (you slice and dice a large pool
of disks into custom-sized LUNs and then, basically, VLAN the array;
there's a toy sketch of this right after the list).
* IP and ATM transport over FC as the physical medium (think having a
dual-port HBA and using one port for storage data, and the other for
IP data).
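
        Purely as a mental model (this is not any vendor's API; the
names and numbers below are made up), here's a toy Python sketch of
that zoning/virtualization idea: carve custom-sized LUNs out of one
big pool, then decide which hosts get to see each one.

class StoragePool:
    """Toy model of an FC array: a pool of capacity carved into LUNs,
    with zoning deciding which host WWNs can see which LUNs."""

    def __init__(self, capacity_gb):
        self.free_gb = capacity_gb
        self.luns = {}     # lun_id -> size in GB
        self.zones = {}    # lun_id -> set of host WWNs allowed to see it
        self._next_id = 0

    def carve_lun(self, size_gb):
        # Slice a custom-sized chunk out of the pool.
        if size_gb > self.free_gb:
            raise ValueError("not enough free capacity in the pool")
        lun_id, self._next_id = self._next_id, self._next_id + 1
        self.free_gb -= size_gb
        self.luns[lun_id] = size_gb
        self.zones[lun_id] = set()
        return lun_id

    def zone(self, lun_id, host_wwn):
        # Add a host (by WWN) to the zone for this LUN.
        self.zones[lun_id].add(host_wwn)

    def visible_luns(self, host_wwn):
        # A host only sees the LUNs it has been zoned into.
        return [l for l, hosts in self.zones.items() if host_wwn in hosts]

pool = StoragePool(capacity_gb=2000)
db_lun = pool.carve_lun(500)            # shared LUN for a two-node cluster
pool.zone(db_lun, "wwn:node-a")
pool.zone(db_lun, "wwn:node-b")
print(pool.visible_luns("wwn:node-a"))  # [0]
print(pool.visible_luns("wwn:node-c"))  # [] - zoned out, like a VLAN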


        So with a small setup, you'd end up direct-attaching with a
2Gbit HBA, which gets you up to 2Gbps full-duplex reads and writes
(with, of course, much better I/O than anything else). Unless it was
a dual-port HBA, you wouldn't get redundancy.
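
        To put rough numbers on the U320 comparison (2Gb FC runs at a
2.125 Gbaud line rate with 8b/10b encoding; U320's 320 MB/s is a
half-duplex bus maximum):

# Back-of-the-envelope throughput: 2Gb Fibre Channel vs. Ultra320 SCSI.
line_rate_baud = 2.125e9
payload_bits_per_s = line_rate_baud * 8 / 10          # 8b/10b line coding
fc_mb_per_s_each_way = payload_bits_per_s / 8 / 1e6   # bits -> megabytes

u320_mb_per_s = 320   # U320 SCSI: half duplex, shared by the whole bus

print("2GFC: ~%.0f MB/s of payload each way (usually quoted as "
      "~200 MB/s once frame overhead is counted), both directions "
      "at once" % fc_mb_per_s_each_way)
print("U320 SCSI: %d MB/s total, one direction at a time" % u320_mb_per_s)

Per direction it loses to U320 on paper, but the link runs both ways
at once and isn't a shared parallel bus, which is where the better
real-world I/O comes from.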

        With a large setup, you'd have multiple disk shelves with full
redundancy; multiple FC switches with redundancy; every system
multihomed into the switched infrastructure for redundancy; possibly
running IP transport on it as well, for private high-throughput
networking.
        Then you'd have something like a farm of servers booting off
the SAN and running VMware ESX Server, with the VMs stored entirely
on the SAN (and using VMware's snapshotting technology to do
full-system backups, including the RAM state, to the SAN). You could
provision new VMs almost instantly by cloning a disk image on the
SAN. Also, since multiple systems have simultaneous access to the
same storage pool(s), you can migrate /running/ VMs between physical
servers, thanks to VirtualCenter.
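
        Just to make the provisioning bit concrete, a toy sketch (the
paths and names are invented, and a real array would clone the image
near-instantly rather than copying it byte for byte):

import shutil
from pathlib import Path

# Hypothetical shared datastore that every ESX host is zoned into.
SAN_DATASTORE = Path("/vmfs/volumes/san0")
GOLDEN_IMAGE = SAN_DATASTORE / "golden-image.vmdk"

def provision_vm(name):
    """Create a new VM disk by copying the golden image on shared storage.
    On a real array this would be a snapshot or clone, i.e. near-instant."""
    target = SAN_DATASTORE / (name + ".vmdk")
    if target.exists():
        raise FileExistsError("%s already exists" % target)
    shutil.copyfile(GOLDEN_IMAGE, target)
    return target

# new_disk = provision_vm("web03")
# Because every host sees the same datastore, any of them can run the
# new VM, and a running VM can move between hosts without its disk moving.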

        I hope that's covered most things; I didn't intend to sound so
much like a consultant - those are my typical case studies when
explaining this technology.

        As a side note, about three years ago I ended up building a
small FC setup for under $500. It had about 80GB in three disks, each
with a T-card (tester card - converts SCA2 to 2x shielded DB9 plus
Molex power, or to two UTP/STP RJ45 jacks), shielded DB9 cabling
between the disks, DB9-to-HSSDC cabling into a 12-port managed FC hub
that used HSSDC and SC multimode GBICs, and then about 50 metres of
fiber to my workstation.
        Everything ran at 1Gbps full-duplex, although HSSDC cabling
can do 2Gbps full-duplex.

        Then there's InfiniBand, with its 2.5Gbps, 10Gbps, and 40Gbps
goodness... including 1U IB switches with a 480Gbps switching fabric.
Oh, and FC-to-iSCSI bridges.
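
        For what it's worth, one way to arrive at a number like
480Gbps for a 1U box is plain arithmetic - assuming a 24-port 4X SDR
switch and counting both directions, which is my guess at the
configuration rather than something off a spec sheet:

# Assumed: 24-port 4X SDR InfiniBand switch, bandwidth counted both ways.
ports = 24
lanes_per_port = 4        # a "4X" link is four lanes
gbps_per_lane = 2.5       # SDR signalling rate per lane (2.0 Gbps of data
                          # after 8b/10b, but marketing counts the raw rate)
directions = 2            # full duplex, both directions counted

fabric_gbps = ports * lanes_per_port * gbps_per_lane * directions
print("Aggregate switching fabric: %d Gbps" % fabric_gbps)   # 480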

        Cheers,

         --Vlad

On 1/22/06, Joseph Kubik <josephkubik-Re5JQEeQqe8AvxtiuMwx3w at public.gmane.org> wrote:
> The high end SANs provide for clustering. One LUN will be made
> accessible to multiple hosts (who have to handle their own atomic
> writes).
>
> If anyone knows more about the NBD stuff I'm interested.
> -Joseph-
>
> On 1/22/06, William Park <opengeometry-FFYn/CNdgSA at public.gmane.org> wrote:
> > On Sat, Jan 21, 2006 at 10:57:40PM -0500, Fraser Campbell wrote:
> > > William Park wrote:
> > >
> > > >>More useful?  They are essential, my shared disk (SAN or not) is useless
> > > >>without them.
> > > >
> > > >
> > > >How is SAN different from just a file server?
> > >
> > > Main difference is how a client system accesses the disks
> > >
> > > * a file server serves filesystems over the network, typically using NFS
> > >   or SMB protocol
> > > * a SAN serves block devices, typically over fibre channel
> > >
> > > A client system mounting SAN disks can partition up those disks and
> > > format them however it sees fit, just like local disks effectively.
> >
> > - Is that what Network Block Devices supposed to do?
> > - Looking at kernel options, I also see 'ATA over Ethernet' option
> >   Not sure how you would use them, though.
> >
> > >
> > > The advantage of SANs are mostly from a data management perspective
> > > (backup, failover, redundancy, etc.).
> >
> > --
> > William Park <opengeometry-FFYn/CNdgSA at public.gmane.org>, Toronto, Canada
> > ThinFlash: Linux thin-client on USB key (flash) drive
> >            http://home.eol.ca/~parkw/thinflash.html
> > BashDiff: Super Bash shell
> >           http://freshmeat.net/projects/bashdiff/


--
The Toronto Linux Users Group.      Meetings: http://tlug.ss.org
TLUG requests: Linux topics, No HTML, wrap text below 80 columns
How to UNSUBSCRIBE: http://tlug.ss.org/subscribe.shtml