News

Interview with Andrew Tridgell 165

Jeremy Allison - Sam writes "See here for a *great* interview with tridge. My favourite quote: 'In 50 years' time I doubt anyone would have ever heard of Samba, but they'll probably be using rsync in one way or another,' Tridgell says. Cheers, Jeremy."


  • We *might* have hologram storage by then.

  • Samba versus rsync (Score:4, Insightful)

    by Ed Avis ( 5917 ) <ed@membled.com> on Friday October 04, 2002 @03:53PM (#4389338) Homepage
    Okay, so how long until Samba is able to use the rsync protocol for file updates? That depends on what Microsoft decide to do I guess.
    • Perhaps Samba could overtake its predecessor, detect the presence of other Samba systems and use optimizations between the two while making Windoze look like the slow horse in the race.
  • i want this sequence (Score:1, Informative)

    by Anonymous Coward
    Does anybody know what this sequence is?
    Tridgell says that he recently discovered a certain combination of data which, when sent down the wire to a Windows server, rebooted it. "Every NT server just completely rebooted. We decided not to emulate that. We contact Microsoft about these bugs, and we get back emails saying, 'Have you got your computer switched on? Are you sure you've got all the latest patches?' Of course, you idiot! Just put me through to someone who knows what they're doing," he says.
    • by Jeremy Allison - Sam ( 8157 ) on Friday October 04, 2002 @03:55PM (#4389357) Homepage
      Yes I do, but I'm not telling :-). Read the Samba source code :-).

      Jeremy.
      • A quick grep of Samba 2.2.5 reveals this:

        if (locktype & (LOCKING_ANDX_CANCEL_LOCK | LOCKING_ANDX_CHANGE_LOCKTYPE)) {
                /* we don't support these - and CANCEL_LOCK makes w2k
                   and XP reboot so I don't really want to be
                   compatible! (tridge) */
                return ERROR_NT(NT_STATUS_NOT_SUPPORTED);
        }
    • by budcub ( 92165 ) on Friday October 04, 2002 @03:59PM (#4389388) Homepage
      Hmmm, maybe he discovered the "SMB-die" [microsoft.com] attack.
    • by Jacco de Leeuw ( 4646 ) on Friday October 04, 2002 @04:55PM (#4389872) Homepage
      Years ago I stumbled into a bug in OS/2 Warp 4. I got the SMB networking process to crash after a sequence of smbclient commands.

      So I downloaded a bug report form from the IBM website, filled in all details and sent it off. After a while I got a response. I could not make heads or tails of it. It was in some kind of IBM speak. (IBM speak really exists. Do they still call a harddisk a "hard file"? :-)

      So I forwarded the message to Timothy Sipples, who had been very active on Usenet and had just started working for IBM. He translated it for me: I was not a big account customer so they would not accept the bug report. Sigh...

      Soon after that, Linux became my main OS.

      (I actually made a patch for smbclient [jacco2.dds.nl] so that it would not kill OS/2, but I never forwarded it to the Samba people).

  • by sootman ( 158191 ) on Friday October 04, 2002 @03:55PM (#4389351) Homepage Journal
    Tridge's software, Samba, lets Torvalds' free operating system Linux co-exist with Bill Gates' Windows.

    Hear that "whirr"? That's Stallman spinning in his grave, and he's not even dead yet!

    • Hell, I'm spinning in my grave, and I am not even RMS (or dead).
    • I think you're making some unwarranted assumptions here. Why should a free implementation of SMB upset RMS? It's better than a non-free implementation of SMB. If you're in a position where you can control what you're running on your organisation's file servers, but you can't control what's on the desktop, using Samba is currently the only ethical course of action available to you.

      What might even be better, or at least ethically equivalent and practically easier, is to have a free software implementation of NFS for non-free platforms like Windows (I'm not aware of any), as you don't have to reverse engineer, and re-reverse engineer every couple of years, a secret, proprietary standard to make it work. And it means that some proprietary networking software on the client machines has been replaced by free software.

      The article is actually quite good (the AFR is the only Australian paper worth reading). It uses the term "free software" several times and doesn't even mention that "open source" fad from a couple of years ago. Whatever happened to that, BTW?

  • What about the doomsday for UNIX? Isn't that in about 2038 [or thereabouts], when the time just runs out, in UNIX's own Y2K bug?
    If we haven't upgraded our systems by then to the next OS, I'll eat my hat. [I suppose a lot of developers ate their hats too, two years ago.]
    • Shouldn't this be fixed before the problem arises as we will have the ability to address more and more memory?
        • Y2K should have been fixed, since we had enough memory, and the foresight to see the problem. How many machines and how much software are in our landfills now because of it?
        • I believe it was fixed, in time to not cause any airplanes to drop out of the sky. Yes, it took insane media coverage and m/billions of dollars to do it. Really, (disregarding the media coverage), it would have taken the same to replace all those machines in the first place, twenty years ago.
      • Shouldn't this be fixed before the problem arises as we will have the ability to address more and more memory?
        From what I understand, it's the ability of the processor to count to higher numbers. UNIX's datetime variable is limited to 32 bits, which gives us our 2038 deadline. Of course, with AMD and Intel struggling to be the first to make a viable 64-bit chip available to the end-user, I doubt this will be a problem for long. By the next major Linux kernel revision, and by the next major BSD release, I'm more than certain we'll have the groundwork in place to migrate to 64-bit systems.

        With the quality of modern computer systems, and the rate at which they're being updated - do you honestly foresee yourself running any of your current machines a decade from now? Certainly not in any form of mission-critical applications, I'd wager. My screaming fast Athlon XP with DDR RAM will likely be relegated to a backup DNS server by that point, provided it's still alive, of course.

        So two decades from now - what will we be running? Likely our 'antiques' will be hardware purchased in or about the year 2012. Judging by AMD's Processor Roadmap [amd.com], we'll be seeing the [Claw/Sledge]Hammer processors within a year or two, and based on the proliferation of current processors (PII/P4, ThunderBird/Athlon/Athlon XP) I'd bet they'll be either commonplace or outdated by 2012.

        There will come a day when 64-bit on the desktop will be the 'norm', and there will be weirdos {cough} still running "Those really old 32-bit processors", just like we now have people running C=64s. :)

        UNIX will be prepared for its D-Day with more than a decade of breathing room; mark my words.

        • It's not that the processor can't count above 32-bits. There are 64-bit (or even higher) long long integers, and Java longs are also 64-bits. The difference is that for 64-bits on a 32-bit computer, the processor actually has to do the addition in two steps, once for each 32-bit dword. Unix programmers knew rightly that this is a little less efficient than straight 32-bit numbers, in addition to the fact that 64-bits takes twice as much memory. So they decided to go the efficient route, instead of the correct route.

          There is nothing about 32-bit processors that prevents 64-bit datatypes from being emulated. Many Unixes are already migrating; the new time_t structures really are 64-bit. Java time, and I'm sure there's lots of other examples, is 64-bit as well.
        • With the quality of modern computer systems, and the rate at which they're being updated - do you honestly forsee yourself running any of your current machines a decade from now?

          At the rate of DRM/Palladium/Whatever being pushed, yeah, maybe!

    • There's already standardisation efforts underway to double the length of the time variables, so I don't think there's any huge issue. We should be finished within 30 years I would think.
      • Given the turn around time of (most) Open source projects, don't you think 30 years is cutting it a bit close?
        • name 1 open source/free software project that's taken 30 years

          just 1

          i dare you.

          (seeing as - to the best of my knowledge - no open/free licence has existed for more than 16 years it would be tough).

          Your problem is that closed projects burst fully formed (although mostly deformed) into the public arena; open projects are kicking around in public from the start of their process.

          interestingly this also applies to security fixes, where in the free world the fix is released on the basis of a theoretical exploit, whereas in the closed world a practical exploit is in the wild before you see a security patch.
    • But in most *nixes, especially of the open source variety, all one would have to do essentially is change the variables a bit and recompile. Granted, it's somewhat more complicated an effort than just that, but you get the idea. This should be a much simpler problem to fix than the y2k bug that never really was a problem.

      I suppose my point is that if we were able to survive the y2k bug without much of a real problem (sure some things were broken, but compared to what we were told was going to happen, it was really smooth), we ought to be able to do the same with *nix, only much easier.
    • by m0rph3us0 ( 549631 ) on Friday October 04, 2002 @04:35PM (#4389662)
      UNIX doomsday: this only applies to 32-bit integers. If you recompile your code with time as a 64-bit integer (like on 64-bit processors), then instead of the 32-bit integer which represents time as seconds since circa 1970 lasting 70-ish years, a 64-bit integer can store 2^32 times more numbers, meaning it will last for 70 * (2^32) years. So as long as all UNIX machines are on 64-bit processors by 2038, doomsday will be avoided until the year 300647712690, in other words approx. 280 billion years. Given that we estimate that the universe is approaching its mid-life crisis, 64 bits should keep time for 9.3 universe lifetimes. I have a feeling my math may be a bit off; can someone double-check this for me? I do know that 64-bit UNIX time will last for the foreseeable future.
      • This totally ignores a more urgent problem than Y2K. I like to call it the "Y10K" problem. Since no one is preparing for it, when the year 9999 rolls around, we are going to have major problems. You see, they only updated most date fields with 4 digits, not nearly enough just a few millennia from now. And I dare you to suggest "they certainly won't be using the same computers they're using now!". That's what they said last time. Worse, all the copies of COBOL for Dummies and The Complete Idiot's Guide to COBOL will have long since rotted.

        If I were you, I'd start stocking up on canned food, and non-electronic forms of currency like rolls of toilet paper.
      • until the year 300647712690. In other words approx. 280 billion years

        Congratulations, you have just been selected for the ultimate geek award! :-D

        Hint: people that don't know about 1024 would have probably said either 300 billion years or 301 billion years. :-D
      • It'd be enough to change integers to unsigned integers in the meanwhile IMHO. We'd count 'till 2106 then ...

        b4n
    • If you've ever worked with UNIX then you might know that stupid mainframe programmers don't program for it. Unix programmers are smarter than everybody else and would never use 2 digits for a date (That's stupid!)

      Besides, Y2K is over!!! Earth to McFly!!!!
      • Wrong, several UNIX variants had (or have) Y2K issues. For some systems it was just user level programs that had problems, for some it was much more serious requiring updates to the system libraries or kernel.

        Back in 1998 I was working for an HP VAR. We had several customers who could not upgrade their systems from HP-UX 9. Unfortunately HP's Y2K "solution" for HP-UX 9 was upgrading to HP-UX 10 or 11. Most of these users were planning on setting the system clocks back 32 years.

        There were a number of vile hacks put into place to get us past Y2K such as pivot dates and setting system clocks back. Hopefully these hacks won't come back to haunt us in a few years.
    • I'm not sure if 64 bits will fix it completely. It is easy to change the vast majority of code with a recompile (unlike Y2K, where you had to change the size of an array from 2 to 3 and there was no easy way to detect it). However, there are still going to be structures that have 32-bit entries in them.

      But actually the true doomsday is not until about 2106, because this is only for *signed* 32-bit integers. If you assume unsigned then you get twice as long from 1970 before it overflows. You can also do "sliding window" hacks like those proposed for Y2K that will allow code that relies on negative values to work as long as the negative value is not too big.

      Another reason that this is not a problem is that the 1-second resolution is increasingly becoming a problem and I expect virtually all uses of time in Unix to be replaced before then with some higher-resolution thing. Hopefully when this is done they will add enough extra bits so there is no overflow problem for many millennia. Probably 64 bits where 65536 units is one second would be a good replacement. 64-bit IEEE floating point might also be good; it would allow short time intervals to be accurate to less than Planck time and allow Universe-age time intervals to be represented with a fraction of a second of accuracy, though the fact that addition is not associative might make people not want to use this.

  • Heh (Score:3, Redundant)

    by Spock the Vulcan ( 196989 ) on Friday October 04, 2002 @04:03PM (#4389424)
    From the article:
    Tridgell says that he recently discovered a certain combination of data which, when sent down the wire to a Windows server, rebooted it.
    "Every NT server just completely rebooted. We decided not to emulate that."
  • "Certainly (the Samba team) knows a lot more about the Microsoft protocol than the people who Microsoft sends to the (annual) CIFS conferences. The people they've sent along haven't had a clue, but I don't know if they were just people who happened to be walking up the corridor when the manager decided he needed someone to go along."

    Good to know that at least somebody understands it...
    • by SurfTheWorld ( 162247 ) on Friday October 04, 2002 @05:09PM (#4390006) Homepage Journal
      Indeed. An oftentimes overlooked benefit of open source is the exposure a product can attain simply by virtue of being open. A closed-source product team has to make an investment in a quality assurance group, which usually works 9 to 5. An open source project (assuming it is highly visible) is capable of leveraging a global supply of quality assurance engineers to test its product 24 hours a day, 7 days a week. The whole world is essentially its beta testers.

      While the Samba folks have done us Linux folk a tremendous favor (reverse engineering *any* protocol is difficult) in encapsulating all of the SMB details via Samba, they have also performed a huge service to Microsoft and the rest of the closed-source world by hammering on the various platforms that come out of Redmond. As the article points out, every new version (or patch release) is put through its paces against Samba. Although their primary goal is to ensure compatibility, the secondary effect is extremely valuable to non-Samba users: bugs in server software from a closed-source vendor are exposed (and hopefully fixed).

      The difficulty is that the rest of the world (and probably Microsoft in particular) either doesn't see, or sees but turns a cold shoulder nonetheless to the open source community.

      Thank you Andrew for your work and the work of your team.
    • You left out my favorite part of this quote -- the sentence immediately before what you quoted:

      ``A lot of the really technical people who really understood the protocol appear to have left Microsoft."

      From the rumors surrounding the release of Win2000, I suspect that this loss of technical expertise is not limited to the SMB protocol alone.

      Geoff
  • by Dannon ( 142147 )
    'In 50 years' time I doubt anyone would have ever heard of Samba'

    Oh, I don't know 'bout that... it's been at least a few centuries since the waltz was invented and I know a few folks who still cut the rug in 3/4 time! *rimshot*
    • Yeah but before 50 years are up, the music industry will have crushed all "non-popular" forms of music expression, and those who play a latin rhythm will have to be "reeducated" by the microsoft police.
  • by geekoid ( 135745 ) <dadinportland&yahoo,com> on Friday October 04, 2002 @04:06PM (#4389460) Homepage Journal
    Isn't rsync supposed to go to the space station?

    • Damn, I am a nerd. I didn't get the joke at all; I started thinking that yes, rsync would make sense on space missions, as the bandwidth to space as well as the propagation delay would necessitate having something like rsync for data transfers.

  • Does this mean all the people who read this article have an expiry date before fifty years are up, or will our memories simply be doctored?
    What does this man know that I don't? No, I don't mean about networked file system protocols either, although if you could give me an exhaustive and comprehensible list of that I'd appreciate it.
  • Thank You. (Score:5, Insightful)

    by DigitalAdrenaline ( 549986 ) on Friday October 04, 2002 @04:28PM (#4389599)
    I'd just like to say a REALLY big thank you for the time and effort you've spent working on Samba. It has been a huge benefit to me both personally and professionally, and I am taking this opportunity to express my sincere gratitude.

    Andrew, thanks for envisioning this project, and getting us all started. Thanks also to your wife for putting up with it; I'm not sure mine would have :)

    The developer list is growing, and I've never even read messages from some of you, but it's worth taking the time to personally express thanks as individually as this forum allows.

    Jeremy Allison
    Andrew Tridgell
    John Terpstra
    Chris Hertel
    John Blair
    Gerald Carter
    Michael Warfield
    Brian Roberson
    Jean Francois Micouleau
    Simo Sorce
    Andrew Bartlett
    Motonobu Takahashi
    Jelmer Vernooij
    Richard Sharpe
    Eckart Meyer
    Herb Lewis
    Dan Shearer
    David Fenwick
    Paul Blackman
    Volker Lendecke
    Alexandre Oliva
    Tim Potter
    Matt Chapman
    David Bannon
    Steve French
    Jim McDonough
    *Luke Leighton
    *Elrond
    *Sander Striker

    Thank You. You have done a great service for us all, and we are very much in your debt.

    Kevin Anderson
  • Anyone know where I can get that reboot code he was talking about? I've got some ideas about hooking that in to my portsentry...
  • by tucay ( 563672 ) on Friday October 04, 2002 @04:44PM (#4389749)
    Samba was our beach head that allowed us to get a footing on Microsoft so we could execute missions in their territory.

    The best thing is that our Samba soldiers will still live on to write other great software to help us rid our lives of Microsoft software.

    Thanks samba team even though I rarely use your Samba software anymore. I use rsync all the time on my Gentoo systems!

  • rsync (Score:5, Funny)

    by dnoyeb ( 547705 ) on Friday October 04, 2002 @04:59PM (#4389912) Homepage Journal
    I have never heard of rsync, but I have a Samba PDC in my basement. I'm not any hotrod Linux hacker or anything. My wife asked me how come she didn't see the same favorites on both computers.

    I made it so.
    I'm a good husband.

    Besides, these things are not just toys, right? It was damn easy. Buying so much as an NT server still costs no less than $500 on eBay; Samba cost about 5 minutes in FTP to get the latest for RedHat. On my K6-233 Asus tx97x it's flawless. Flawless, I say.

    Ramble on.

    Every time I log in I feel a little geekdom. Every time my wife *doesn't* complain about the computer I feel like THE MAN. You see, in my house I am Bill Gates. If Windows breaks, I get the blame. If Linux is too confusing, I get the blame. So what we have here is the best of both worlds. BTW, I used to get pissed at the IT department for taking so long to launch new OSes. Now I am about to take XP off my computer because it's losing faxes and the printer doesn't work on it, etc... It's affecting my love life ;)
    • Ahhh. If only we good husbands could enjoy Bill Gates' salary too. LOL

      Now I am about to take XP off my computer because it's losing faxes and the printer doesn't work on it, etc... It's affecting my love life ;)


      I went through this as well.

      I tried backing up any important stuff and doing a reinstall; I thought that a "bug fix" broke it (like that ever happens ;).

      No go. The lost faxes, they weren't really lost, you just can't see them in the fax app.

      Being a former Windows support tech I know how stupid some of the fixes can be, so I started mucking around.

      It turns out that the fix for this was to move all of the faxes out of the outbox and inbox, then fire up the fax app and import them back in. I had to do this a few times before it imported all of the faxes back in. But it worked and has continued to work, so who cares.

      Good luck.
  • It's likely that the Samba team now spends more time testing Microsoft's networking software than Microsoft itself.

    Gee, now I'm really surprised...
  • by image ( 13487 ) on Friday October 04, 2002 @05:33PM (#4390170) Homepage
    > In 50 years time I doubt anyone would have ever heard of Samba, but they'll probably be using rsync in one way or another

    Think so? The Univac was state of the art in 1952. Considering that the progress of technology is accelerating over time (check out The History of Computing Timeline [computer.org]), do you really think that the ideas behind rsync are going to be relevant? Network throughput is already getting massive. If we could fast-forward to 2052, I imagine we would barely recognize the technologies in use.

    Do you think that Turing could have even fathomed performing a billion operations a second and having almost a terabyte of storage available and (almost) accessible anywhere on the planet at megabit data transfer rates? In our homes? For an inflation-adjusted price of under $100? You have to be kidding me -- it would have blown his mind.

    In 2052 CPU power will be effectively unlimited (imagine doing a billion billion operations per second), storage constraints meaningless, and, if networking trends continue and/or quantum plays out (as it may), effectively instantaneous access to that data.

    Think we'll still be diff-ing data to squeeze the most out of the net? In 2052 that is the last thing we'll be bothering with.

    All this only holds true, of course, if we assume that technology will improve as fast as it historically has and that we don't hit a cataclysmic end to human progress in general (plague, nuclear armageddon, etc). But if the last 50 years have been any indication, what we will see in 2052 will bear little resemblance to what we have in 2002.
    • > Think so? [...]

      Oh, and I forgot to add, Samba rocks, rsync rocks, and Andrew Tridgell rocks. I don't mean at all to take away from the contributions of an amazing individual in the open source movement.
    • by SuperKendall ( 25149 ) on Friday October 04, 2002 @06:19PM (#4390496)
      Is that with all that tremendous increase in power comes equally large increases in volume of data. When getting the weather report means downloading data every second or so from a few million collection devices around the world so that your GPS watch can run a global weather simulation to tell you what weather will be like throughout the day within a 1 mile radius, then yes, rsync (or its distant children) will still be quite useful!!

      Not to mention fully volumetric video feeds.
    • Well, fifty years is a damn long time, so who knows.

      That said, historically raw computing power has increased more rapidly than network bandwidth. Rsync is essentially about using compute power to save bandwidth, using hashes and checksums to avoid transferring unnecessary bytes. So the cost/benefit will likely still hold. The network may be faster, but the files will be bigger and the CPU will be faster still.

      That said, rsync as a command-line utility will almost surely be gone, but the ideas in rsync may well migrate directly into the application layer or even the network stack. At least, it's more likely to be around than samba, which is a fantastic yet special purpose tool for a specialized problem (Windows compatible file-sharing).

      Besides, tridge got his CompSci Ph.D. for his rsync work, so nobody should be surprised he's proud of it. :-) Check out his thesis at http://samba.org/~tridge.

      Matt
    • Slow down turbo.

      Fifty years ago people thought that we all would be flying around in personal airplanes by now.

      It's not usually valid to stretch a trend out beyond a decade. Unlike the last 20 years of computing, we are running into the fundamental limits of physics: the size of the atom and the speed of light. Not saying that we won't come up with something clever. :-)
  • I must take some exception to the poster's suggestion that this was a GREAT interview. Yawn. It left me somewhere between less than satisfied and really, really dissatisfied. This is hard-hitting news: the SMB protocol?

    All this tells me is that we (the computer industry) are still in our infancy if we need to create emulators to share files. We have to create an entire code-base to share files? We need to get way past this and set some sort of standard. Samba's a good product, but it just adds to the complexity: one more thing to break and one more thing to admin.

    Anything with fewer than 100 comments was received less than favorably by the readers. This makes 99.

    "Look. In twentieth-century Old Earth, a fast food chain took dead cow meat, fried it in grease, added carcinogens, wrapped it in petroleum-based foam, and sold nine hundred billion units. Human beings. Go figure."
    • How about that this story got a WHOLE PAGE in the Australian Financial Review (and the picture of tridge was half the page).

      This isn't a tech piece on Linux Orbit.

      this is a mostly technically literate puff piece on linux in the newspaper that the suits of a modern nation read (roughly equivalent to the Wall Street Journal or the Financial Times).

      that's what's newsworthy about it.

      Plus Tridge lives in Canberra so he's all right unlike the rest of you bastards who pick on us (sorry, local grievances there).
    • To begin with, tridge is a great guy--I had the pleasure of working with him recently--and I'm glad to see some press about him, because it's well deserved.

      SMB is a standard protocol (originally designed and created by IBM). SMB is merely a pipe and messaging mechanism (sort of like IPC for networks). CIFS is an RPC mechanism that sits on top of SMB. The SNIA workgroup has published a standard for CIFS that Microsoft has contributed to.

      Unfortunately, one of the big problems is that if the Windows implementation is broken, everyone else has to be too. Furthermore, Windows is always adding new calls to CIFS that of course are undocumented.

      Samba is not an emulator. It is as much of a CIFS server as a Windows machine is.
  • "We observe the interaction between Windows boxes on the network, watch the packets (of data) going past, and then try sending that packet ourselves to see what happens. Sometimes we get a slap in the face, most times we get a coffee," he says.

    It took me a second read to realize that asking for the "wrong thing" from your waitress might get you that proverbial slap in the face!
  • Rsync good (Score:4, Informative)

    by larien ( 5608 ) on Friday October 04, 2002 @06:01PM (#4390362) Homepage Journal
    I have to say rsync is an excellent bit of software. It has a small task, and damn it does it well. I subscribe to the Sun Manager's list and there are several times I've recommended rsync, just because it is the best bit of software around for copying files while retaining all the Unix stuff like:
    • file ownership
    • permissions
    • symlinks
    • special files (devices, etc)
    • hard links
    Great bit of software. Perhaps not as technically excellent as Samba, which is more complex, but very useful.
  • Also (Score:3, Informative)

    by johnburton ( 21870 ) <johnb@jbmail.com> on Friday October 04, 2002 @06:57PM (#4390742) Homepage
    This doesn't mention that he's also the person who first did a lot of the tivo hacks that are out there. How can one person do so many good things?
  • I had the privilege, along with a few other enthusiastic first years, of being informally tutored by tridge. I still regard him as the best lecturer/tutor/university-person I've ever had the pleasure of meeting. He was always quite happy to explain anything, even to lowly first years who weren't even in his unit (he took Operating Systems in that year) and who had no right to rock up to his office, unannounced, and ask long boring questions. Rather than complaining he `didn't have time for it' and `why didn't we go read a textbook', he'd suggest we go and have a cup of (black, strong) coffee in the staff common room and explain patiently, making the whole topic sound interesting.

    In fact, thinking about it now I kinda wish I'd got his autograph... Oh well. :-)

    --

    Tom Rowlands
    (Sorry, I can't sign this.)

  • Whither rproxy? (Score:2, Interesting)

    The article mentioned how great rsync is for HTTP traffic, and left it at that. I've seen rproxy in the rsync source tree, but I wonder how active it is these days, and whether it has a chance for wide adoption. What good is cutting the transfer down by 90% if no one uses it? Also, there's a somewhat dated study of delta-encoding [nec.com] (and rsync/rproxy is in this genre) that raises the issue of how frequently the same data is retrieved repeatedly.

    Does anyone have empirical evaluations of deltas (including, but not necessarily limited to, rproxy) on today's workloads?

  • Is there a better way to let users mount network partitions than using samba?
    • I'll treat this message as if it's not a troll.

      If you're dealing with a free Unix (Linux, BSD etc), the most 'standard' way for mounting network partitions is using NFS (the Network Filesystem. [sourceforge.net])

      Several companies will sell you NFS utilities for Windows. nfsAxe [labf.com] is by the people who make WinaXe, a Win32 X server. A quick search doesn't turn up a standard Windows open-source solution for this.

      SMB has been rebranded by Microsoft as CIFS, the Common Internet File System [samba.org]. Microsoft have all the official docs, but of course samba.org have more information about it than they do.

      Samba is supported by all Windows machines. It doesn't even work too badly for sharing filesystems between unices (I have a public SMB share on a FreeBSD file server machine mounted on my Linux gateway: you wouldn't know it wasn't a local FS.) The permissions model isn't perfect, of course, but for a shared FS, it works good.

      Your question asked "is there a better way." Well, without getting into what's wrong with Samba, it's hard to answer. If you want Windows interoperability (and it's hard to find a situation where it's not a plus), you can't go wrong with Samba. It's a very mature, stable, complete solution.
      • Hi, I wasn't trolling. I like to use samba to let non root users mount and unmount net partitions at will. From what I understand, this isn't possible with NFS.
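        For what it's worth, non-root mounting isn't Samba-specific: the `user` (or `users`) option in /etc/fstab lets mount(8) grant that for NFS entries too, since mount is setuid root and checks fstab. A sketch (the server names and paths here are made up):

```
# /etc/fstab -- 'user,noauto' lets a non-root user run `mount /mnt/home`
# and unmount it again themselves (hypothetical server/paths):
fileserver:/export/home   /mnt/home    nfs    user,noauto,rw               0 0
//fileserver/share        /mnt/share   smbfs  user,noauto,username=guest   0 0
```

        With `user`, only the user who mounted the filesystem may unmount it; `users` relaxes that to anyone.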
