iSCSI Specification Approved
nasorsan writes "The iSCSI protocol is a means to transport SCSI commands and data using TCP/IP packets. This ratification by the IETF is 'the last major hurdle for iSCSI to become widely supported.' 'Now that it's done, Microsoft Corp. and Novell Inc. will release drivers, and the games will begin,' says Steve Duplessie, senior analyst at Enterprise Storage Group Inc. 'Anyone who doesn't think this is the beginning of a huge market is insane,' he added."
New Virus Opportunity (Score:2)
Re:New Virus Opportunity (Score:5, Informative)
It's like how SCSI is set up today on a single computer: there's really no way to get access to the SCSI bus without first gaining access to the host computer. The LAN and the SCSI bus are two entirely different things, separated by the host computer.
When iSCSI is used, this separation should be preserved. The network that is set up for iSCSI, your SAN, should be kept separate from your main LAN. Think of it as a private network that is visible only from your file servers that have a need to access storage devices directly.
Think of a SAN as equivalent to your IDE or SCSI cable. A SAN typically uses a block-based protocol that reads and writes individual disk sectors without regard to filesystems, access control, and so on. This is designed for maximum speed, not security. It is the job of the file server to translate this into file-based access for outside clients, and to enforce the appropriate file permissions.
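To make that concrete, here's a minimal sketch (Python, with a made-up device path) of what block-level access looks like from the host side: you address raw sectors by number, and nothing in the path knows or cares about files or users:

    import os

    SECTOR_SIZE = 512      # classic disk sector size
    DEVICE = "/dev/sdb"    # hypothetical block device exported over the SAN

    # Open the raw device. No filesystem or per-file permissions are
    # involved; whoever can open the device can touch any sector on it.
    fd = os.open(DEVICE, os.O_RDONLY)
    try:
        # Read sector 2048 directly, addressed purely by number.
        data = os.pread(fd, SECTOR_SIZE, 2048 * SECTOR_SIZE)
        print(data[:16].hex())
    finally:
        os.close(fd)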
And yes, you should definitely have good security on your file servers....
Re:New Virus Opportunity (Score:4, Insightful)
I doubt that's going to be the case over the long term, though. iSCSI is going to be in the LAN space sooner or later. The protocol does give at least some thought to security, and you can run it over IPsec.
There are plenty of insecure protocols run on local LANs today that can be nearly as bad. I know that it's a bad idea to trust your LAN but nevertheless people do it all the time, especially in physically secure environments like machine rooms.
Re:New Virus Opportunity (Score:2)
A good application of iSCSI over a remote link is to bridge multiple iSCSI SAN networks together. This combines storage devices in multiple locations into one logical network. This would be useful for applications such as remote backup, mirroring, database replication, and so on.
http://www.adaptec.com/worldwide/product/marked [adaptec.com]
It is also useful as an interconnect for bridging multiple Fibre Channel SAN networks together, and translating between iSCSI and Fibre Channel.
One thing is certain... (Score:2)
I must be insane.
What is the use case for this, again?
Re:One thing is certain... (Score:1)
Daniel
But think of the innovation! (Score:4, Funny)
I mean, seriously! Who gives a rat's buttocks about low latency and high performance and sanity? I mean, seriously! Who cares about the practicality and usefulness and overall sanity?
I mean, Jesus Tap-dancing Christ. Jump on the bandwagon already! It's all about eInternet now. iTCP/IP. eHTTP. eE-mail. Who cares about design grounded in reality anymore? These days, it's all about XML and TCP/IP and Web Services! Jump on the BANDWAGON! Everything should be implemented in XML: it's a rule! The SCSI protocol is hopelessly outdated, since it doesn't use the latest advancements in XML, TCP/IP, ADO.NET, ASP.NET, SOAP and web services.
I mean, you've GOT TO BE INSANE if you aren't smart enough to get in with the technology!
Re:But think of the innovation! (Score:1)
Re:One thing is certain... (Score:4, Informative)
http://www.adaptec.com/worldwide/common/index.h
iSCSI is useful as an interconnect in a SAN environment. The storage devices exist as their own nodes on the network, independently of the computers they are attached to. This is good for reliability (you can replace a failed disk independently of the computer, and vice versa), configuration flexibility, and many other useful things.
iSCSI is nice because it uses standards that are well understood (Ethernet, TCP/IP) instead of custom networks like Fibre Channel. This should make SAN networks cheaper and more common, as well as providing an easy way to bridge a SAN network over the Internet (firewalled and encrypted, of course).
The only difference between a LAN and a SAN will be what you use it for!
Re:One thing is certain... (Score:1, Funny)
Storage Area Network networks?
Do ATM machines use SAN networks when you enter your PIN number?
Re:One thing is certain... (Score:3, Insightful)
http://searchstorage.techtarget.com/sDefinition [techtarget.com]
It is basically a fast and tight network to connect computers with storage devices. It does exactly what your IDE or SCSI cable does, except over a network.
Re:One thing is certain... (Score:1, Funny)
Re:One thing is certain... (Score:2)
Damn, I *AM* crazy.
Re:One thing is certain... (Score:4, Informative)
iSCSI is similar to SCSI and IDE: it is block-based. Computers do reads and writes of individual disk sectors, addressed by number, and not in terms of files. It is below the filesystem. There is no security in terms of individual users here, because once the disk is opened, it is wide open. It is much faster than file-based access, though, which makes it popular for databases and such.
Your file server does a great job at both file-based and block-based access. The server serves up shares and files over file-based protocols, allowing users to connect. Internally, a filesystem is applied to the disks, and the files are translated into individual low-level accesses to read and write various disk sectors. These reads and writes to the disk take place over a block-based protocol.
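As a toy illustration of that translation step (Python; the block map and file name are invented, where a real filesystem would keep this in inodes or extents on disk):

    SECTOR_SIZE = 512

    # Toy "filesystem": maps a file name to the ordered list of sectors
    # that hold its data.
    block_map = {"report.txt": [1040, 1041, 5120, 5121]}

    def file_read(name, offset, length):
        """Translate a file-level read into (sector, start, end) block reads."""
        reads = []
        while length > 0:
            index, within = divmod(offset, SECTOR_SIZE)
            sector = block_map[name][index]
            chunk = min(length, SECTOR_SIZE - within)
            reads.append((sector, within, within + chunk))
            offset += chunk
            length -= chunk
        return reads

    # Reading 600 bytes at offset 400 touches the tail of one sector and
    # the head of the next:
    print(file_read("report.txt", 400, 600))
    # [(1040, 400, 512), (1041, 0, 488)]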
Everything has its place, and it all fits together... hopefully!
Lets just stick 'i's in front of everything! (Score:3, Insightful)
I bet NCR has a patent that covers this...
"Some nasty reflection attacks were discovered on iSCSI's use of CHAP" I wonder how many more security holes are waiting to be discovered? I would be very careful about how I use this untill it's been tested by fire.
Still, the idea is really pretty fucking cool. Ethernet is cheap and fast (especially gigabit) and doesn't have any of the limitations that traditional SCSI or IDE have as far as devices on a chain. This could be a good replacement for Samba in some situations. The standards document is pretty daunting, so I can't tell if iSCSI will allow multiple connections to a single volume, but even if it doesn't, there are many single-user Samba applications that could be handled better by iSCSI.
Re:Lets just stick 'i's in front of everything! (Score:1)
What is the use of a SCSI target disk formatted as an NTFS volume, shared through iSCSI, to a Linux user?
Re:Lets just stick 'i's in front of everything! (Score:2)
Well, since you asked: there's NTFS read support in the kernel, which could make this a convenient way to share reference data between Windows and Linux boxes without having to deal with the complexity and vagaries of Samba.
Of course, this is bound to bring its own complexities and vagaries, but potentially somebody might use it.
A more likely target is an Oracle-formatted disk shared by multiple clients potentially running many OSes. Oracle (and other commercial databases) have their own disk format in place of a filesystem and have already worked out the shared-disk semantics to run over SANs.
Re:Lets just stick 'i's in front of everything! (Score:2)
It's more likely that they're beginning to apply for said patent today.
Re:Lets just stick 'i's in front of everything! (Score:2)
Ethernet is cheap and fast (especially gigabit)
Well, cheap compared to Fibre Channel, I guess. But the 1000BASE-SX ethernet at my workplace wasn't too cheap to put in, as I recall. I'm sure it will get cheaper faster than FC, though, just due to the economies of scale of Ethernet deployment.
Despite the fantastic potential bandwidth of gigabit ethernet (assuming big frames), I still wonder if latency issues won't become more important in a NAS environment full of iSCSI devices.
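Some back-of-envelope arithmetic on why that worry is reasonable, with purely illustrative numbers:

    # For small synchronous reads, per-request latency can dwarf the raw
    # wire time, even on gigabit ethernet.
    link_bps = 1_000_000_000         # gigabit ethernet
    block_bits = 4096 * 8            # one 4 KB block

    wire_time_us = block_bits / link_bps * 1e6   # ~33 us on the wire
    round_trip_us = 100.0            # assumed switch + TCP stack round trip

    # If every read waits for a full round trip before the next is issued:
    print(f"max sync IOPS: {1e6 / (wire_time_us + round_trip_us):,.0f}")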
More details please (Score:2)
Does this mean that soon we will see SCSI disks w/ ethernet ports?
If so...
Can I take a bunch of disks and plug them into the switch on the Beowulf cluster that I am building and have all the nodes use them?! If so then this is INCREDIBLE!
-OR-
Would I plug a bunch of disks into a separate switch that is accessible to the master node, and then have the compute nodes use NFS or similar to access the master node, just like in a traditional Beowulf?
Either way this would give more flexibility than current solutions, and it is a GOOD thing.
Re:More details please (Score:5, Informative)
iSCSI is really targeted to replace (or augment) Fibre Channel (basically SCSI over fiber optics with 1-2 Gbit data rates). Fibre Channel is very expensive both for the interface cards and for the switches. iSCSI lets everyone leverage the developments of generic ethernet switches, routers, tunnels and bridges rather than having to develop new Fibre Channel ones from scratch.
As to your second question, you could potentially plug all of the disks (or arrays) into an ethernet switch and use them individually, but it's more likely you'd put some kind of front end in place to handle the filesystem tasks. Most filesystems assume they have sole ownership of the disks and can't share partitions between multiple live nodes. You would still gain the ability to partition big disks into smaller chunks per compute node if you connected the disks directly (and maybe some failover capability), but that would probably be offset by the inability to share data.
I think that SGI's XFS filesystem can share partitions between multiple compute nodes but I don't know if that feature made it into the Linux port. For more info on this kind of thing google "clustered filesystems".
It's going to come down to Software (Score:3, Interesting)
Right, you will have boxes of drives on the SAN, just like with current FC-based SANs. From what I've seen, the host OSes have to manage "drive allocation", and as you say, typically this will be a whole drive at a time (important for partitioning the I/O load between spindles anyway). The addition of authentication protocols would probably help in binding a drive to a particular system as well.
Since the other reason you want a SAN is for reliability, you're going to want redundancy in the connections anyway. If the drives themselves are iSCSI, they would probably only have one connection per drive anyway (well, maybe not, FC drives are often dual channel, right?). In any case, you'd have dual channels to each system and storage array as well as redundant switches or routers to eliminate all single points of failure.
There are some hints in the article that compatibility issues could become significant quickly. Since at the most basic level this will be a normal routed TCP/IP network, I'm sure the vendors have all sorts of ideas for "support" protocols to run on the SAN alongside the iSCSI packets. The article states that people are "chomping at the bit" to add more protocols, but the committee wants to hold things stable for at least a year to let things sort out. The whole thing could be sunk by various players doing the "embrace and extend" dance in ways that tend away from full multi-vendor interoperability.
Without reading all the specs and proposals, it is easy to guess that protocols providing automatic device detection and allocation would be very useful from a system design perspective, but would also need to be part of the standard to achieve continued support for multi-vendor SANs. Another likely area is RAID support (configuring, fault detection and reporting, rebuilding and maintenance, etc.). Logically, a RAID controller is just another node on the network, but it lies between the hosts and the storage devices.
Note to people who think this is something like Serial ATA: it isn't. Serial ATA is a point-to-point protocol, and it is probably asymmetric to boot. TCP/IP is a symmetric routed network, so it is a different animal altogether. OTOH, there is no reason why a storage array couldn't be iSCSI on the outside and SATA to the drives (expect products like this from some vendors).
Re:More details please (Score:2, Interesting)
One problem with this is that the performance will be crap using existing ethernet host adapters. There are a few companies working on host adapters with TCP-offload engines. Putting the TCP packets back together and pulling them apart takes a lot of kernel/system CPU cycles, and it severely slows the data transmission rates.
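The scale of that overhead is easy to ballpark with the old rule of thumb of roughly 1 Hz of CPU per 1 bit/s of TCP throughput (illustrative numbers only):

    line_rate_bps = 1_000_000_000    # gigabit ethernet at line rate
    cpu_hz = 2_400_000_000           # a 2.4 GHz CPU

    # ~1 Hz of CPU per bit/s of TCP traffic, per the rule of thumb:
    print(f"CPU eaten by TCP: {line_rate_bps / cpu_hz:.0%}")   # ~42%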
Initially, and probably for the next couple of years, host adapters or other hardware that can offload the TCP overhead from the system CPU will be very expensive (more than the current Fibre Channel HBAs). Overall, not having to buy FC fabric switches from Brocade, because you can use existing IP hardware infrastructure, will be a cost advantage -- but not much of one. If anything, the prices for implementation will be close to the same for the next year or two; after that, it depends on how fast the FC stuff becomes cheaper and how fast the iSCSI stuff gets truly developed by hardware companies (Emulex, Qlogic, Adaptec, LSI Logic, etc.) whose R&D budgets are already squeezed tight by the current economic environment.
We'll see. NAS or SAN or iSCSI????
Re:More details please (Score:2)
It's not that the ethernet adapters will deliver bad performance, it's that they suck a lot of CPU cycles, so you need a faster CPU to get the same overall performance with iSCSI as you get with Fibre Channel. The SCSI market was in this same position years ago when SCSI cards were just dumb electrical interfaces and relied on the driver and host CPU to do the protocol work.
I think iSCSI might catch on faster than you think because there's a potential for cheap software-only implementations on the low to medium end. A few hundred dollars buys a lot of MHz from Intel these days.
It's probably best to reserve judgement until we see what the performance is like for a tuned iSCSI software implementation. A network card that can offload the iSCSI protocol checksum, combined with a zero-copy kernel driver, might be able to deliver quite acceptable performance.
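The checksum in question is CRC32C (a different polynomial from ethernet's CRC32), which the iSCSI spec uses for its optional header and data digests. A table-driven Python sketch, checked against the spec's all-zeros test vector:

    CRC32C_POLY = 0x82F63B78    # reflected Castagnoli polynomial

    # Build the byte-at-a-time lookup table.
    _table = []
    for byte in range(256):
        crc = byte
        for _ in range(8):
            crc = (crc >> 1) ^ CRC32C_POLY if crc & 1 else crc >> 1
        _table.append(crc)

    def crc32c(data):
        crc = 0xFFFFFFFF
        for b in data:
            crc = (crc >> 8) ^ _table[(crc ^ b) & 0xFF]
        return crc ^ 0xFFFFFFFF

    # Known test vector: 32 bytes of zeros -> 0x8A9136AA.
    print(hex(crc32c(b"\x00" * 32)))   # 0x8a9136aa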
serial SCSI (Score:1)
It's too bad even gigabit ethernet won't be as fast as SATA. Not that hard drives can typically go that fast anyway...
Re:serial SCSI (Score:4, Informative)
http://www.scsita.org/sas/FAQ.html [scsita.org]
What's really cool is that SAS and SATA share the same cabling and interface! SAS is a superset of SATA, that adds SCSI's features (multiple devices per port, and so on) to the basic one-device-per-port SATA design. The nice thing is that you can use cheap SATA drives on a SAS setup! This should be good for RAID. Think of SAS as "SATA Plus".
Here's a quote from the link above:
"Serial Attached SCSI complements Serial ATA by adding device addressing, and offers higher reliability and data availability services, along with logical SCSI compatibility. It will continue to enhance these metrics as the specification evolves, including increased device support and better cabling distances. Serial ATA is targeted at cost-sensitive non-mission-critical server and storage environments. Most importantly, these are complementary technologies based on a universal interconnect, where Serial Attached SCSI customers can choose to deploy cost-effective Serial ATA in a Serial Attached SCSI environment."
Re:Combine with 1394 over tcp/ip (Score:3, Interesting)
No, FireWire does not use SCSI. (Score:2)
I'm writing a FireWire driver for QNX. FireWire is a local area network protocol suite designed by hardware people, and it shows. FireWire is a packet oriented LAN; you send and receive packets on the wire. That's the level at which FireWire adapters operate. Above this is a layer that creates the illusion that there's a 64-bit "address space" into which you can store and load, 32 bits at a time. This address space is entirely a software abstraction - both ends are usually faking it. In a driver, you typically make something happen by "storing into" a "device register". The software packages up the store request as a packet and sends it. The receiver gets the packet, looks at the address being "stored into", and does something. It then replies with a reply packet. This is usually completely separate from whatever memory system either end has.
Since this "bus" illusion is too slow for bulk data transfer, it's not used for that. Bulk data is sent as packets. All of this looks like a protocol built on top of UDP. You have to match replies with requests, queue, time out, and retransmit, just like UDP.
SCSI, on the other hand, has "commands" and "responses", which makes sense. If something can't do some command, you get an error status back. With FireWire, you have to go read some address from the "bus" to find out what happened. Status returns aren't standardized across devices, either; there's a separate spec for each class of device, and there may be differences between manufacturers. So generic drivers are hard.
How FireWire uses SCSI (Score:2)
Transport of SCSI command descriptor blocks is provided by Serial Bus Protocol 2 (SBP-2).
If you configure a Linux system to use FireWire storage, you will find that Linux's SCSI layer is a client of the SBP-2 driver, which in turn is a client of the 1394 driver.
SBP-2 is more general than SCSI, although that's its most common use. It can also be used to transport IDE commands.
Otherwise what you say is correct. The SCSI part just comes at a higher layer in the protocol stack.
iSCSI is a SAN replacement... (Score:5, Informative)
Here are some answers/clarifications on some stuff I've already seen in the comments here:
iSCSI is a SAN (Storage Area Network) replacement. It is not a file sharing system like Samba or NFS. The primary advantage of iSCSI over something like Fibre Channel is cost. You can build an iSCSI system with regular Ethernet switches, whereas Fibre Channel requires "special" switches and cabling. I would think that two systems could use the same iSCSI target, but only where it would make sense and where the file system could handle such access.
Yes, there already are adapters [adaptec.com]. (Not quite sure how they are out ahead of the spec, but why let a little thing like that slow you down?) They connect to the Ethernet switch (usually a gigabit switch) and could therefore boot off a volume via iSCSI.
Cisco also makes a device that can bridge legacy SAN networks to iSCSI [cisco.com].
Re:iSCSI is a SAN replacement... (Score:3, Informative)
diskless iSCSI (Score:1)
2x the speed of my old IDE drive, and I'm just using 100 Mbit ethernet.
And my hard drive is backed up now.
Watch the Help Wanted ads... (Score:2)
It's hot! It's NOW!
Block Based Network File Systems (Score:2)
In other news, (Score:1)
go wasabi + ARM (Score:1)
Go to the iSCSI info at www.wasabisystems.com [wasabisystems.com]
serve mine with ARM please
"netBSD is not dead its just on your disk and you dont know about it "
regards
John Jones
How about having everything this way? (Score:1)
What about sEthernet... (Score:1)
I object to the next comment.
We had this in 1993 (sort of) (Score:1)
No, really! Farallon made a SCSI-based ethernet adapter for the PowerBook.
http://www.macworld.com/1994/11/secrets/989.htm
Very Simple Explanation (Score:2)
1. Count the number of PCI slots in your computer that have, or could take, SCSI controllers.
2. Multiply that by 15 (assuming wide SCSI).
3. Whatever that number is, iSCSI can give you more disks.
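With some assumed numbers, that arithmetic looks like this:

    pci_slots = 4              # hypothetical free PCI slots
    devices_per_bus = 15       # wide SCSI, minus the adapter's own ID

    print("parallel SCSI ceiling:", pci_slots * devices_per_bus)   # 60
    # iSCSI targets are just network endpoints, so the ceiling is set by
    # your switches, addressing, and budget, not by your slot count.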
Noise reduction (Score:2)
All the disks for my machines in one place - makes for quiet PCs.
One iSCSI project I did like was the Intel one (they released some beta drivers for their ServerPro cards under Linux, as I recall). They successfully constructed a RAID array on the client machine that consisted entirely of iSCSI devices physically made up of huge RAM disks in other machines - a RAM disk array! Gotta be some performance gains there (network speeds notwithstanding). If you can get, say, 8 GB in each machine, you could construct quite a large array purely out of solid-state devices.