
High Performance Diskless Linux At AX-Div, LLNL

Lee Busby writes "As a co-author, I am biased, but I think that our recent paper describing a diskless Linux deployment at Lawrence Livermore National Lab (PDF) may be of general interest. It's a little different from most diskless systems -- simpler, and designed to be high performance."

  • by 4of12 ( 97621 )

    Thanks for the link. I work in a similar environment where security requirements are more easily met by diskless workstations.

    This project started a few years ago and there have been some interesting new developments since.

    Given the current situation, would you consider using iSCSI in your environment?

    • I would have, but I would have used nbd (the network block device) instead of NFS. You could then proxy it over a secure connection, something like the sketch below. It would have also fixed their swap problem.

      Joe
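
      A minimal sketch of what that might look like, assuming an nbd swap image on the server and an SSH tunnel for the "secure connection" (host names, the port, and the paths are invented, and this uses the older nbd command syntax):

        # on the server: export a per-client swap image on TCP port 2000
        nbd-server 2000 /srv/images/client01-swap.img

        # on the client: tunnel the port, attach the device, use it as swap
        modprobe nbd
        ssh -f -N -L 2000:127.0.0.1:2000 admin@nbdserver
        nbd-client 127.0.0.1 2000 /dev/nbd0
        mkswap /dev/nbd0
        swapon /dev/nbd0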
    • The problem that I see with using iSCSI is that you cannot share the storage for the units - each box needs its own section of disk.

      With NFS, you can have all boxes sharing /home, /usr, /bin, and so on, saving total storage.
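
      For what it's worth, the per-box side of iSCSI with the open-iscsi initiator is only a couple of commands (the portal address and target name here are invented):

        iscsiadm -m discovery -t sendtargets -p 10.0.0.5      # list targets on the storage box
        iscsiadm -m node -T iqn.2003-01.gov.example:client01-root -p 10.0.0.5 --login
        # the LUN then appears as a local SCSI disk dedicated to this one client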
      • Re:iSCSI? (Score:3, Insightful)

        by 4of12 ( 97621 )

        Sharing common resources is great, and NFS performance is increased if they're all mounted read-only on the clients.

        Keeping /home/me protected and partitioned off over iSCSI to a single workstation seems like a good idea to me. Given all the concerns of classified processing, it's not like you and some other workstation are going to be routinely part of a parallel cluster where multiple clients need write access to the same filesystem.

        Then each client would only need a writable /etc for its system config.
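
        A rough sketch of how that might look in /etc/exports on the server -- shared trees read-only, one writable /etc per client (the paths and netgroup name are invented):

          /export/usr            @ax-clients(ro,sync,no_subtree_check)
          /export/bin            @ax-clients(ro,sync,no_subtree_check)
          /export/etc/client01   client01(rw,sync,no_root_squash,no_subtree_check)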

  • NFS? (Score:4, Interesting)

    by ghostlibrary ( 450718 ) on Wednesday November 12, 2003 @08:56AM (#7452593) Homepage Journal
    This has some nice configuration-management ideas. Given that a Linux distribution is small, cloning it for each diskless client is a neat approach to balancing centralized management against varying hardware (a rough sketch of such cloning follows below).

    That said... NFS is woefully insecure, so if subversion by an insider is a concern (as it would be with, say, disk-equipped workstations), NFS may not be the best choice for handling the disk management.
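
    A minimal sketch of how that cloning might be scripted, assuming a golden image under /export/master-root and per-client roots under /export/roots (none of this is from the paper):

      #!/bin/sh
      # clone a golden root image into one NFS root per diskless client
      MASTER=/export/master-root
      for client in client01 client02 client03; do
          rsync -aH --delete "$MASTER/" "/export/roots/$client/"
          echo "$client" > "/export/roots/$client/etc/hostname"   # per-client tweak
      done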
    • How many times do we need to hear the canard about NFS being insecure before we decide to RTFM?

      UNIX auth isn't the only authentication flavor for NFS. It may be appropriate in many settings, it may be the default for many "distros", it may be the only authentication various n00bs are familiar with, but it isn't the only game in town.
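
      For instance, NFSv4 with Kerberos authenticates users with tickets rather than trusting the UID the client claims. A hedged sketch (the hostnames are for illustration only):

        # on the server, /etc/exports
        /export    *.example.gov(rw,sync,sec=krb5p)

        # on a client
        mount -t nfs4 -o sec=krb5p nfsserver.example.gov:/export /mnt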

      • So is there an authentication system for NFS that doesn't trust the client PC? One that is secure against plugging in your own laptop and setting it to have the same IP address and MAC address as an existing machine (which you unplug at the same time)?
    • Why not Linux's remote block device? It might perform better than NFS because (assuming read-only access to the block device on the server, with a smaller device for /var) blocks can be cached on the client. (OTOH, a clever NFS implementation could cache bits of filesystem if you promised it the data was not going to change.)
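
      A small sketch of a read-only export with nbd, which is one way to get the client-side block caching described above (the export name and paths are invented):

        # /etc/nbd-server/config on the server
        [generic]
        [rootfs]
            exportname = /srv/images/rootfs.img
            readonly = true

        # on a client, using the named-export syntax
        nbd-client server.example.gov -N rootfs /dev/nbd0
        mount -o ro /dev/nbd0 /mnt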
  • The more things change, the more they stay the same. This doesn't seem too different from the 'dumb' terminal connected to the corporate mainframe that is still in use today, except that it is running Linux, a much superior OS to IBM's MVS or z/OS.
  • by wowbagger ( 69688 ) on Wednesday November 12, 2003 @09:23AM (#7452785) Homepage Journal
    A quick skim of the PDF leads me to believe they are still using NFS for the operation.

    My question would be, does anybody have any meaningful experience using Coda or InterMezzo?

    Where I work, we have NFS-mounted home directories. When the main server goes down, we all get to twiddle our thumbs because we cannot do anything without a home directory.

    It would seem to me that the caching of Coda and InterMezzo would be better - you still have the centralized management of the disk images, but you also get the speed of local access and the robustness of not having a single point of failure in the server.

    But I've not had time to set up a trial system - has anybody else?
    • At my university there is a fairly large Andrew (AFS) installation which performs well. The only gotcha is that Andrew implements its own non-Unix file permissions, which is a bit confusing. Coda is of course based on Andrew; it differs only slightly in its concurrent write semantics.
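
      To illustrate those non-Unix permissions: AFS directories carry ACLs managed with the fs command instead of chmod bits (the paths and user name here are invented):

        fs listacl /afs/example.edu/user/me                    # show the directory's ACL
        fs setacl /afs/example.edu/user/me/shared pat rl       # grant pat read and lookup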
    • When the main server goes down, we all get to twiddle our thumbs because we cannot do anything without a home directory.

      Why does your main server go down during working hours? Perhaps you need a new administrator or better hardware. If your company is too cheap to get better hardware, perhaps you need a new employer as well.

    • The typical workaround for the risk of a central server failure is a NAS box with redundant "heads" and redundant pathing through two switches.

      That way, if the server dies, the 2nd takes its place immediately. If maintenance needs to be done, you do it on one before the other. If a switch is lost, you have the other path, etc.

      If you're working on a production site where your customer's time means your money, you will have a robust setup. A single server with a single path is just a

  • The company I work for (actually, the company I own) builds and sells specialized systems that run diskless via NFS: one server and up to 10 terminals. Each terminal has its own (very stripped-down) root fs, and they all share a single /usr fs mounted read-only (a sketch of the client-side mounts follows below).

    I still use ext2, despite the other options available. It benchmarks reasonably well across the board. (Personally, I think it's a lot more important to have more than sufficient memory on the server than to quibble about benchmark numbers.. cache cache
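
    A hedged sketch of how the client side of that layout might look in /etc/fstab on one terminal (the server and path names are invented, not the actual product config):

      server:/export/roots/term01   /      nfs   rw,hard,intr   0 0
      server:/export/usr            /usr   nfs   ro,hard,intr   0 0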

  • It seems swap over NFS might cause some bottlenecks. Since cost doesn't seem to be the issue, why not compile IDE and filesystem support into your tagged client kernels? You could then (theoretically?) add a hard disk to the client boxes and set up swap locally. Though I suppose there might be security concerns at LLNL about the theoretical possibility of removing a 'swap' hard drive and inspecting it elsewhere.
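
    A small sketch of that local-swap idea (the device name is a guess and the kernel option names are the 2.4/2.6-era ones):

      # build IDE disk and filesystem support into the client kernel
      CONFIG_IDE=y
      CONFIG_BLK_DEV_IDEDISK=y
      CONFIG_EXT2_FS=y

      # then on each client, put swap on the local drive
      mkswap /dev/hda1
      swapon /dev/hda1
      echo "/dev/hda1 none swap sw 0 0" >> /etc/fstab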
    • Encrypted swap/cache partitions? That may kill performance so much that it's not useful, though. But it's one way to get around the problem of not being able to swap because of security requirements.
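
      One way to do that with dm-crypt is to key the swap mapping from /dev/urandom at every boot, so the key never exists outside RAM (a sketch only; the partition name is a guess):

        cryptsetup open --type plain --key-file /dev/urandom /dev/hda2 cryptswap
        mkswap /dev/mapper/cryptswap
        swapon /dev/mapper/cryptswap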
