The Internet

iSCSI Specification Approved

nasorsan writes "The iSCSI protocol is a means to transport SCSI commands and data using TCP/IP packets. This ratification by the IETF is 'the last major hurdle for iSCSI to become widely supported.' 'Now that it's done, Microsoft Corp. and Novell Inc. will release drivers, and the games will begin,' says Steve Duplessie, senior analyst at Enterprise Storage Group Inc. 'Anyone who doesn't think this is the beginning of a huge market is insane.'"
  • And you thought that doing a packet flood could just disrupt communications! Disk IO now could get hammered, right? And corrupted? What's the spec say about that?
    • by Krellan ( 107440 ) <krellan@NOspAm.krellan.com> on Thursday February 13, 2003 @03:43PM (#5296506) Homepage Journal
      iSCSI will be used in a SAN environment. This is only between computers and storage devices, and not between computers and the outside world. I think you're confusing SAN with LAN.

      It's like how SCSI is set up today on a single computer: there's really no way to get access to the SCSI bus without first gaining access to the host computer. The LAN and the SCSI bus are two entirely different things, separated by the host computer.

      When iSCSI is used, this separation should be preserved. The network that is set up for iSCSI, your SAN, should be kept separate from your main LAN. Think of it as a private network that is visible only from your file servers that have a need to access storage devices directly.

      Think of a SAN as equivalent to your IDE or SCSI cable. A SAN network typically uses a block-based protocol that reads and writes individual disk sectors without regard to filesystems, access control, and so on. This is designed for maximum speed, not security. It is the job of the file server to translate this into file-based access for outside clients, and enforce the appropriate file permissions.

      And yes, you should definitely have good security on your file servers....
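      The block-vs-file distinction here is easy to sketch in Python. This is a toy illustration only (an ordinary temp file stands in for the raw disk, and the classic 512-byte sector size is assumed): a block protocol only ever sees numbered sectors, with no files or permissions in sight.

```python
import os
import tempfile

SECTOR = 512  # classic disk sector size

def write_sector(dev, lba, data):
    # Block-level write: addressed by logical block number, nothing else.
    assert len(data) == SECTOR
    dev.seek(lba * SECTOR)
    dev.write(data)

def read_sector(dev, lba):
    dev.seek(lba * SECTOR)
    return dev.read(SECTOR)

# An ordinary file stands in for the raw block device.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "r+b") as dev:
    dev.truncate(SECTOR * 16)            # a tiny 16-sector "disk"
    write_sector(dev, 3, b"A" * SECTOR)  # write sector 3 directly
    data_back = read_sector(dev, 3)
os.remove(path)
```

      On a real SAN the device handle would be the exported volume itself, which is exactly why keeping the SAN off the main LAN matters: anyone who can reach it can issue these reads and writes.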
      • by jrstewart ( 46866 ) on Thursday February 13, 2003 @03:49PM (#5296553) Homepage
        iSCSI will be used in a SAN environment. This is only between computers and storage devices, and not between computers and the outside world. I think you're confusing SAN with LAN.

        I doubt that's going to be the case for the very long term. iSCSI is going to be in the LAN space sooner or later. The protocol does give at least some thought to security, and you can run it over IPSec.

        There are plenty of insecure protocols run on LANs today that can be nearly as bad. I know that it's a bad idea to trust your LAN, but nevertheless people do it all the time, especially in physically secure environments like machine rooms.
        • You are right. iSCSI will be used over LAN and WAN links. Hopefully this will be done in a way that is tunneled and secure, similar to how a VPN works today. Allowing outside users to arbitrarily inject packets into a SAN network would be a Bad Thing.

          A good application of iSCSI over a remote link is to bridge multiple iSCSI SAN networks together. This combines storage devices in multiple locations into one logical network. This would be useful for applications such as remote backup, mirroring, database replication, and so on.

          [adaptec.com]
          http://www.adaptec.com/worldwide/product/markeditorial.html?sess=no&prodkey=ipstorage_sra_asic&type=Common&cat=/Common/IP+Storage


          It is also useful as an interconnect for bridging multiple Fibre Channel SAN networks together, and translating between iSCSI and Fibre Channel.
  • Anyone who doesn't think this is the beginning of a huge market is insane.

    I must be insane.

    What is the use case for this, again?
    • I want to know too.

      Daniel
    • by Qwaniton ( 166432 ) on Thursday February 13, 2003 @03:31PM (#5296414)
      iSCSI will be an important leap into the future of technology!!!!1

      I mean, seriously! Who gives a rat's buttocks about low latency and high performance and sanity? I mean seriously? Who cares about the practicality and usefulness and overall sanity?

      I mean, Jesus Tap-dancing Christ. Jump on the bandwagon already! It's all about eInternet now. iTCP/IP. eHTTP. eE-mail. Who cares about design grounded in reality anymore? These days, it's all about XML and TCP/IP and Web Services? Jump on the BANDWAGON! Everything should be implemented in XML: it's a rule! The SCSI protocol is hopelessly outdated, since it doesn't use the latest advancements in XML, TCP/IP, ADO.NET, ASP.NET, SOAP and web services.

      I mean, you've GOT TO BE INSANE if you aren't smart enough to get in with the technology!
    • by Krellan ( 107440 ) <krellan@NOspAm.krellan.com> on Thursday February 13, 2003 @03:34PM (#5296443) Homepage Journal
      [adaptec.com]
      http://www.adaptec.com/worldwide/common/index.html?prodkey=ipstorage_index1


      iSCSI is useful as an interconnect in a SAN environment. The storage devices exist as their own node on the network, independently of the computers they are attached to. This is good for reliability (can replace the disk independently of the computer if it fails, and vice versa), configuration flexibility, and many other useful things.

      iSCSI is nice because it uses standards that are well understood (Ethernet, TCP/IP) instead of custom networks like Fibre Channel. This should make SAN networks cheaper and more common, as well as providing an easy way to bridge a SAN network over the Internet (firewalled and encrypted, of course).

      The only difference between a LAN and a SAN will be what you use it for!
      • by Anonymous Coward
        > SAN networks...

        Sub Area Network networks?

        Do ATM machines use SAN networks when you enter your PIN number?
        • Storage Area Network!

          [techtarget.com]
          http://searchstorage.techtarget.com/sDefinition/0,,sid5_gci212937,00.html


          It is basically a fast and tight network to connect computers with storage devices. It does exactly what your IDE or SCSI cable does, except over a network.
      • I thought that's what [?]FS is for, where ? => N, or A, for example.

        Damn, I *AM* crazy.
        • by Krellan ( 107440 ) <krellan@NOspAm.krellan.com> on Thursday February 13, 2003 @04:25PM (#5296815) Homepage Journal
          NFS, SMB, and all the other file-serving protocols are file-based. Clients open files on the file server, and do reading and writing from/to those files. The file server is responsible for security, making sure the client has proper permission to access their files.
          (cat /etc/passwd)

          iSCSI is similar to SCSI and IDE: it is block-based. Computers do reads and writes of individual disk sectors, addressed by number, and not in terms of files. It is below the filesystem. There is no security in terms of individual users here, because once the disk is opened, it is wide open. It is much faster than file-based access, though, which makes it popular for databases and such.
          (cat /dev/hda)

          Your file server does a great job at both file-based and block-based access. The server serves up shares and files over file-based protocols, allowing users to connect. Internally, a filesystem is applied to the disks, and the files are translated into individual low-level accesses to read and write various disk sectors. These reads and writes to the disk take place over a block-based protocol.

          Everything has its place, and it all fits together... hopefully!
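          The translation described above (file requests in, sector requests out) can be sketched with a toy allocation table. All names here are made up for illustration; real filesystems are vastly more involved.

```python
SECTOR = 512

class ToyFileServer:
    """Maps named files onto numbered sectors of a block device (a bytearray here)."""

    def __init__(self, sectors=64):
        self.disk = bytearray(SECTOR * sectors)  # the "block device"
        self.free = list(range(sectors))         # free sector numbers
        self.table = {}                          # filename -> (sector list, size)

    def write_file(self, name, data):
        # A file-based request comes in; break it into sector-sized block writes.
        needed = -(-len(data) // SECTOR)         # ceiling division
        lbas = [self.free.pop(0) for _ in range(needed)]
        for i, lba in enumerate(lbas):
            chunk = data[i * SECTOR:(i + 1) * SECTOR]
            self.disk[lba * SECTOR:lba * SECTOR + len(chunk)] = chunk
        self.table[name] = (lbas, len(data))

    def read_file(self, name):
        # A file-based read is reassembled from individual sector reads.
        lbas, size = self.table[name]
        out = b"".join(self.disk[l * SECTOR:(l + 1) * SECTOR] for l in lbas)
        return bytes(out[:size])

fs = ToyFileServer()
fs.write_file("hello.txt", b"hello, SAN world")
roundtrip = fs.read_file("hello.txt")
```

          The per-user security checks live up in the file layer; once you are below it, as with iSCSI, the sectors are wide open.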
  • by n1ywb ( 555767 ) on Thursday February 13, 2003 @03:24PM (#5296360) Homepage Journal
    Then we can ALL seem cutting edge!
    I bet NCR has a patent that covers this...

    "Some nasty reflection attacks were discovered on iSCSI's use of CHAP" I wonder how many more security holes are waiting to be discovered? I would be very careful about how I use this until it's been tested by fire.

    Still, the idea is really pretty fucking cool. Ethernet is cheap and fast (especially gigabit) and doesn't have any of the limitations that traditional SCSI or IDE have as far as devices on a chain. This could be a good replacement for Samba in some situations. The standards document is pretty daunting, so I can't tell if iSCSI will allow multiple connections to a single volume, but even if it doesn't there are many single-user Samba applications that could be handled better by iSCSI.

    • This could be a good replacement for Samba in some situations. The standards document is pretty daunting, so I can't tell if iSCSI will allow multiple connections to a single volume, but even if it doesn't there are many single user Samba applications that could be handled better by iSCSI.

      What is the use of a SCSI target disk formatted as an NTFS volume shared through iSCSI to a Linux user?
      • What is the use of a SCSI target disk formatted as an NTFS volume shared through iSCSI to a Linux user?

        Well, since you asked: there's NTFS read support in the kernel, which could make this a convenient way to share reference data between Windows and Linux boxes without having to deal with the complexity and vagaries of Samba.

        Of course, this is bound to bring its own complexities and vagaries, but potentially somebody might use it.

        A more likely target is an Oracle-formatted disk shared by multiple clients potentially running many OSes. Oracle (and other commercial databases) have their own disk format in place of a filesystem and have already worked out the shared-disk semantics to run over SANs.
    • > I bet NCR has a patent that covers this...

      It's more likely that they're beginning to apply for said patent today.

    • Ethernet is cheap and fast (especially gigabit)

      Well, cheap compared to Fibre Channel, I guess. But the 1000BASE-SX Ethernet at my workplace wasn't too cheap to put in, as I recall. I'm sure it will get cheaper faster than FC, though, just due to the economies of scale of Ethernet deployment.

      Despite the fantastic potential bandwidth of gigabit ethernet (assuming big frames), I still wonder if latency issues won't become more important in a NAS environment full of iSCSI devices.

  • This sounds really cool, but I am a little unclear on exactly what it means.
    Does this mean that soon we will see SCSI disks w/ ethernet ports?

    If so...

    Can I take a bunch of disks and plug them into the switch on the beowulf cluster that I am building and have all the nodes use them !!?! If so then this is INCREDIBLE!

    -OR-

    Would I plug a bunch of disks into a separate switch that is accessible to the master node, and then the compute nodes use NFS or similar to access the master node just like in a traditional beowulf?

    Either way this would give more flexibility than current solutions, and it is a GOOD thing.
    • by jrstewart ( 46866 ) on Thursday February 13, 2003 @04:06PM (#5296687) Homepage
      More accurately you'll see SCSI RAID arrays with Ethernet ports. The technology will initially be too expensive to put on individual disks.

      iSCSI is really targeted to replace (or augment) Fibre Channel (basically SCSI over fiber optics with 1-2 Gbit data rates). Fibre Channel is very expensive both for the interface cards and for the switches. iSCSI lets everyone leverage the developments of generic Ethernet switches, routers, tunnels and bridges rather than having to develop new Fibre Channel ones from scratch.

      As to your second question, you could potentially plug all of the disks (or arrays) into an ethernet switch and use them individually, but it's more likely you'd put some kind of front end in place to handle the filesystem tasks. Most filesystems assume they have sole ownership of the disks and can't share partitions between multiple live nodes. You would still gain the ability to partition big disks into smaller chunks per compute node if you connected the disks directly (and maybe some failover capability) but that would probably be offset by the inability to share data.

      I think that SGI's XFS filesystem can share partitions between multiple compute nodes but I don't know if that feature made it into the Linux port. For more info on this kind of thing google "clustered filesystems".
      • As to your second question, You could potentially plug all of the disks (or arrays) into an ethernet switch and use them individually, but it's more likely you'd put some kind of front end in place to handle the filesystem tasks. Most filesystems assume they have sole ownership of the disks and can't share partitions between multiple live nodes. You would still gain the ability to partition big disks into smaller chunks per-compute node if you connected the disks directly (and maybe some failover capability) but that would probably be offset by the inability to share data.

        Right, you will have boxes of drives on the SAN, just like with current FC-based SANs. From what I've seen, the host OSs have to manage 'drive allocation', and as you say, typically this will be whole drive at a time (important for partitioning the I/O load between spindles anyway). The addition of authentication protocols probably would help in binding the drive to a particular system as well.

        Since the other reason you want a SAN is for reliability, you're going to want redundancy in the connections anyway. If the drives themselves are iSCSI, they would probably only have one connection per drive anyway (well, maybe not, FC drives are often dual channel, right?). In any case, you'd have dual channels to each system and storage array as well as redundant switches or routers to eliminate all single points of failure.

        There are some hints in the article that compatibility issues could become significant quickly. Since at the most basic level, this will be a normal routed TCP/IP network, I'm sure the vendors have all sorts of ideas for 'support' protocols to run on the SAN with the iSCSI packets. It states that people are 'chomping at the bit' to add more protocols, but the committee wants to hold things stable for at least a year for things to sort out. The whole thing could be sunk by various players doing the 'embrace and extend' dance in ways that tend away from full multi-vendor interoperability.

        Without reading all the specs and proposals, it is easy to guess that protocols to provide for automatic device detection and allocation would be very useful from a system design perspective, but would also need to be part of the standard to achieve continued support for multi-vendor SANs. Another likely area is RAID support (configuring, fault detection and reporting, rebuilding and maintenance, etc.). Logically, a RAID controller is just another node on the network, but it lies between the hosts and storage devices.

        Note to people who think this is something like Serial ATA: it isn't. Serial ATA is a point-to-point protocol, and it probably is asymmetric to boot. TCP/IP is a symmetric routed network, so it is a different animal altogether. OTOH, there is no reason why a storage array couldn't be iSCSI on the outside and SATA to the drives (expect products like this from some vendors).

      • "iSCSI lets everyone leverage the developments of generic ethernet switches, routers, tunnels and bridges rather than having to develop new Fibre Channel ones from scratch."

        One problem with this is the performance will be crap using existing ethernet host adapters. There are a few companies working on host adapters with TCP-offload engines. Putting the TCP packets back together and pulling them apart takes a lot of kernel/system CPU cycles and it severely slows the data transmission rates.

        Initially, and probably for the next couple of years, host adapters or other hardware that can offload the TCP overhead from the system CPU will be very expensive (more than the current Fibre Channel HBAs). Overall, not having to buy FC fabric switches from Brocade because you can use existing IP hardware infrastructure will be a cost advantage -- but not much. If anything, the prices for implementation will be close to the same for the next year or two, and then it depends on how fast the FC stuff becomes cheaper and how fast the iSCSI stuff gets truly developed by hardware companies (Emulex, QLogic, Adaptec, LSI Logic, etc.) whose R&D budgets are already squeezed tight by the current economic environment.

        We'll see. NAS or SAN or iSCSI????
        • One problem with this is the performance will be crap using existing ethernet host adapters. There are a few companies working on host adapters with TCP-offload engines. Putting the TCP packets back together and pulling them apart takes a lot of kernel/system CPU cycles and it severely slows the data transmission rates.


          It's not that the ethernet adapters will deliver bad performance, it's that they suck a lot of CPU cycles, so you need a faster CPU to get the same overall performance with iSCSI as you get with Fibre Channel. The SCSI market was in this same position years ago when SCSI cards were just dumb electrical interfaces and relied on the driver and host CPU to do the protocol work.

          I think iSCSI might catch on faster than you think because there's a potential for cheap software-only implementations on the low to medium end. A few hundred dollars buys a lot of MHz from Intel these days.

          It's probably best to reserve judgement until we see what the performance is like for a tuned iSCSI software implementation. A network card that can offload the iSCSI protocol checksum combined with a zero-copy kernel driver might be able to deliver quite acceptable performance.
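          For what it's worth, the checksum iSCSI specifies for its header and data digests is CRC32C. A naive bitwise version in Python (a sketch, not the table-driven or hardware-offloaded form real drivers use) makes the per-byte cost under discussion easy to see:

```python
CRC32C_POLY = 0x82F63B78  # reflected Castagnoli polynomial

def crc32c(data: bytes) -> int:
    # Bit-at-a-time CRC32C: 8 shift/xor steps per byte of payload,
    # which is exactly the kind of work an offload engine removes
    # from the host CPU.
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ CRC32C_POLY if crc & 1 else crc >> 1
    return crc ^ 0xFFFFFFFF

# The standard published check value for CRC-32C.
digest = crc32c(b"123456789")
```

          Multiply those inner-loop steps by every byte of a gigabit stream and the appeal of checksum offload is obvious.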
  • So basically this is the SCSI equivalent of Serial ATA. But instead of coming up with some new cabling and hardware, they're just grafting SCSI on top of Ethernet.

    It's too bad even gigabit Ethernet won't be as fast as SATA. Not that hard drives can typically go that fast anyway..
    • Re:serial SCSI (Score:4, Informative)

      by Krellan ( 107440 ) <krellan@NOspAm.krellan.com> on Thursday February 13, 2003 @03:57PM (#5296623) Homepage Journal
      No, the SCSI equivalent of SATA is called "SAS". Serial Attached SCSI.

      [scsita.org]
      http://www.scsita.org/sas/FAQ.html


      What's really cool is that SAS and SATA share the same cabling and interface! SAS is a superset of SATA, that adds SCSI's features (multiple devices per port, and so on) to the basic one-device-per-port SATA design. The nice thing is that you can use cheap SATA drives on a SAS setup! This should be good for RAID. Think of SAS as "SATA Plus".

      Here's a quote from the link above:

      "Serial Attached SCSI complements Serial ATA by adding device addressing, and offers higher reliability and data availability services, along with logical SCSI compatibility. It will continue to enhance these metrics as the specification evolves, including increased device support and better cabling distances. Serial ATA is targeted at cost-sensitive non-mission-critical server and storage environments. Most importantly, these are complementary technologies based on a universal interconnect, where Serial Attached SCSI customers can choose to deploy cost-effective Serial ATA in a Serial Attached SCSI environment."
  • by GeekWithGuns ( 466361 ) on Thursday February 13, 2003 @04:21PM (#5296797) Homepage

    Here are some answers/clarifications on some stuff I've already seen in the comments here:

    iSCSI is a SAN (Storage Area Network) replacement. It is not a file sharing system like Samba or NFS. The primary advantage of iSCSI over something like Fibre Channel is cost. You can build an iSCSI system with regular Ethernet switches, whereas Fibre Channel requires "special" switches and cabling. I would think that two systems could use the same iSCSI target, but only where it would make sense and where the file system could handle such access.

    Yes, there already are adapters [adaptec.com]. (Not quite sure how they are out ahead of the spec, but why would you let a little thing like that slow you down?) They connect to the Ethernet switch (usually a gigabit switch), and therefore a system could boot off a volume via iSCSI.

    Cisco also makes a device that can bridge legacy SAN networks to iSCSI [cisco.com].

    • Actually, booting off iSCSI is only available from IBM using a proprietary protocol. There is now a spec for remote booting via iSCSI, but nobody has any hardware out that supports it.
      • The software iSCSI initiator for Linux can be used with PXE to build a diskless iSCSI system. I've been running a diskless Linux box as my primary workstation for about 2 months.

        2x the speed of my old IDE drive, and I'm just using 100 Mbit Ethernet.

        And my hard drive is backed up now.
  • ...I bet that in a couple of weeks they will be advertising positions that require "three years experience in iSCSI."

    It's hot! It's NOW!
  • One of the lecturers at the local uni did some research into how to have multiple machines interact with a disk over a network without stepping on anyone's toes: A Block-Based Network File System [waikato.ac.nz]
  • Apple lawyers have sent a cease and desist to the IETF for misappropriation of Apple's i* trademark.
  • this is not considered dangerous

    goto the iSCSI on www [wasabisystems.com]

    serve mine with ARM please

    "netBSD is not dead its just on your disk and you dont know about it "

    regards

    John Jones
  • Wouldn't it be cool to just have an Ethernet connector on the computer? And connect everything (mouse, keyboard, webcam and ADSL) into the same hub. By routing the packets it could even be possible to switch which computer the keypresses and mouse signals go to without swapping cables.
  • ...Ethernet over SCSI?

    I object to the next comment.
  • In Soviet Russia Ethernet runs over SCSI.

    No, really! Farallon made a SCSI-based Ethernet adapter for the PowerBook.

    http://www.macworld.com/1994/11/secrets/989.html
  • Very simple explanation of why this is important.

    1. count the number of PCI slots in your computer that have, or could take, SCSI controllers.

    2. multiply that by 15 (assuming wide SCSI)

    3. whatever that number is, iSCSI can give you more disks.
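    A rough version of that arithmetic (the slot count is a made-up example):

```python
pci_slots = 5      # hypothetical: free PCI slots in one server
ids_per_bus = 15   # wide SCSI: 16 IDs per bus, minus one for the controller

parallel_scsi_max = pci_slots * ids_per_bus  # hard ceiling on the parallel bus

# An iSCSI target is just another IP endpoint, so the practical ceiling
# is your address space and switch capacity, not the host's slot count.
```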
  • Could be quite nice to house all those noisy disks in the attic at the end of an IP network. Hell, how long until wireless iSCSI?

    All the disks for my machines in one place makes for quiet PCs.

    One iSCSI project I did like was the Intel one (they released some beta drivers for their ServerPro cards under Linux, as I recall). They successfully constructed a RAID array on the client machine that consisted entirely of iSCSI devices that were physically made up of huge RAM disks in other machines: a RAM disk array! Gotta be some performance gains there (network speeds notwithstanding). If you can get 8GB, say, in each machine, you could construct quite a large array purely out of solid state devices.
