
Linux Kernel To Have Stable Userspace Driver API

liquidat writes "Linus Torvalds has merged patches into the mainline tree that implement a stable userspace driver API in the Linux kernel. The stable driver API was already announced a year ago by Greg Kroah-Hartman; the latest patch to Linus' tree adds the new API elements. The idea is to make life easier for driver developers: 'This interface allows the ability to write the majority of a driver in userspace with only a very small shell of a driver in the kernel itself. It uses a char device and sysfs to interact with a userspace process to process interrupts and control memory accesses.'"
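
The framework in question is Greg Kroah-Hartman's UIO (Userspace I/O) layer: a tiny in-kernel stub registers the device, and everything else runs in an ordinary process through a character device and sysfs. As a rough illustration only, here is a minimal C sketch of what the userspace half might look like, assuming a kernel stub has already registered the hardware as /dev/uio0 and exposed one register window as map0; the region size and the 'acknowledge' register at offset 0 are invented for the example.

    /* Minimal sketch of the userspace half of a UIO driver.
     * Assumes a small kernel-side stub has already registered the
     * device as /dev/uio0 and exposed one memory region (map0). */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define MAP_SIZE 0x1000  /* assumed size of the device's register window */

    int main(void)
    {
        int fd = open("/dev/uio0", O_RDWR);
        if (fd < 0) {
            perror("open /dev/uio0");
            return 1;
        }

        /* Map memory region 0; UIO places region N at offset N * page size. */
        volatile uint32_t *regs = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
                                       MAP_SHARED, fd, 0 * getpagesize());
        if (regs == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        for (;;) {
            uint32_t irq_count;
            /* A blocking read returns the total interrupt count so far. */
            if (read(fd, &irq_count, sizeof(irq_count)) != sizeof(irq_count))
                break;
            regs[0] = 1;  /* acknowledge the (assumed) status register */
            printf("interrupt %u handled\n", irq_count);
        }

        munmap((void *)regs, MAP_SIZE);
        close(fd);
        return 0;
    }

In a real driver the region size would be read from /sys/class/uio/uio0/maps/map0/size rather than hard-coded; the kernel stub stays responsible only for registering the interrupt and the memory regions.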

Comments:
  • Drivers are very OS-centric; they tell the OS how to interact with the hardware and, of course, Linux and Windows have very different ways of interacting with hardware.

    That still doesn't preclude universal drivers. In fact, Linux can already use some Windows drivers via ndiswrapper [sourceforge.net].

    I agree with the GP poster; it would be a wonderful thing for computer users, but I suspect Microsoft would put VERY heavy pressure on any hardware maker who looked like participating in that sort of development.

  • by DriftingDutchman ( 703460 ) on Sunday July 22, 2007 @03:16AM (#19943949)
    If this brings us closer to using (possibly unreliable) Windows drivers, a major reason for using Windows will be gone.
  • High time! (Score:3, Interesting)

    by iamacat ( 583406 ) on Sunday July 22, 2007 @03:28AM (#19944007)
    Just because some code controls a piece of hardware doesn't mean that a runaway pointer in it should cause a panic or even corrupt files by messing up filesystem buffers. This will also enable device drivers to make use of all available userspace libraries, with sophisticated algorithms that would never be used if all code had to be written from scratch and non-pageable.
  • Re:Damnit... (Score:3, Interesting)

    by moro_666 ( 414422 ) <kulminaator@gmai ... Nom minus author> on Sunday July 22, 2007 @03:43AM (#19944069) Homepage
    2.6.20 ? dude !!!

    I know I'm hopelessly outdated and my machine shows:

    martin@hope ~ $ uname -a
    Linux hope 2.6.21-gentoo-r4 #1 Sat Jul 21 22:18:42 EEST 2007 i686 AMD Turion(tm) 64 Mobile Technology MT-30 AuthenticAMD GNU/Linux

    Remove that gentoo notice quickly from your slashdot sig. A man using a kernel 0.0.02 versions old is a stable-version pimp, not a gentoo roller... :p

    As for the article :D
      Userspace drivers are not really a new, groundbreaking idea now, are they :D The lists are full of different proposals and attempts over the years, but it's really nice to see that this thing finally got rolling; it may open up a lot of closed-source drivers for Linux users (a lot of those fancy Windows toys).

      Thumbs up Linus ;)
  • by Anonymous Coward on Sunday July 22, 2007 @05:53AM (#19944527)
    I suppose what I'm really against is technology being used to restrict the freedoms and capabilities of the individual
    In the case you're arguing for, how is the technology restricting the freedoms and capabilities of the individual?
    You can still work out yourself how to communicate with the device, you can still work out the motherboard connections etc. What you really want is a free handout. The companies that make tech products are under no obligation, moral or otherwise, to help you there. The only moral obligation I see them as having is not preventing you from figuring it all out yourself.
  • by Tony Hoyle ( 11698 ) <tmh@nodomain.org> on Sunday July 22, 2007 @06:20AM (#19944603) Homepage
    USB drivers for example. There's no reason for anything using USB to be in kernel space - it just doesn't need the performance.

    Ditto for filesystem drivers, although performance matters there - you'd have to design the driver API to minimise context switching.

    I don't think anyone's expecting userspace IDE or graphics drivers.
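
For the USB case the parent mentions, this already works today through the kernel's generic usbfs support; libraries such as libusb let a normal process drive a device with no device-specific kernel code at all. A hedged sketch using the libusb-1.0 API (the vendor/product IDs and the bulk endpoint below are made up):

    /* Sketch of a userspace USB driver using libusb-1.0, which talks to
     * the hardware through the kernel's generic usbfs support instead of
     * a device-specific kernel module. VID/PID and endpoint are made up. */
    #include <libusb-1.0/libusb.h>
    #include <stdio.h>

    #define VENDOR_ID  0x1234   /* hypothetical device */
    #define PRODUCT_ID 0x5678
    #define EP_IN      0x81     /* hypothetical bulk IN endpoint */

    int main(void)
    {
        libusb_context *ctx = NULL;
        if (libusb_init(&ctx) != 0)
            return 1;

        libusb_device_handle *h =
            libusb_open_device_with_vid_pid(ctx, VENDOR_ID, PRODUCT_ID);
        if (!h) {
            fprintf(stderr, "device not found\n");
            libusb_exit(ctx);
            return 1;
        }

        libusb_claim_interface(h, 0);

        unsigned char buf[64];
        int got = 0;
        /* One blocking bulk read; the whole device protocol lives here,
         * in userspace. */
        if (libusb_bulk_transfer(h, EP_IN, buf, sizeof(buf), &got, 1000) == 0)
            printf("read %d bytes from the device\n", got);

        libusb_release_interface(h, 0);
        libusb_close(h);
        libusb_exit(ctx);
        return 0;
    }

(Build with -lusb-1.0.) Filesystems get the same treatment through FUSE, which is exactly where the context-switch cost mentioned above shows up.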

  • by TheRaven64 ( 641858 ) on Sunday July 22, 2007 @08:29AM (#19945107) Journal
    Safe DMA will be possible in the relatively near future. Modern systems are starting to include an IOMMU, which makes this simpler; you simply set up a mapping so the device can only write to or read from the process's address space, and then it can do any DMA it wants safely. Current AMD chips include something called the 'Device Exclusion Vector'. This isn't a full IOMMU, since it doesn't handle translation, but it does do protection. You can tell the DMA controller that the device is only allowed to access a certain block of memory, and DMAs will fail if they write outside this area. The userspace driver would still need to know about machine addresses, rather than virtual addresses. I would probably do this by having a special process type for drivers that had a 1:1 virtual-to-physical address mapping, and just had holes in its address space where the kernel and other processes were living. Alternatively, the process could just walk its own page tables, but that would be more expensive.

    It would be nice to get a stable and usable userspace driver API, since then other operating systems could use it. DRI has done a lot for getting 3D supported across *NIX variants; the Linux and FreeBSD drivers are almost identical, and just need a slightly different kernel module which handles the very low-level parts.
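
On the machine-addresses point: one way a userspace driver can discover the physical address behind one of its own pages is the kernel's /proc/self/pagemap file (one 64-bit entry per virtual page, with the page frame number in the low 55 bits and bit 63 set when the page is present). A hedged sketch, assuming a kernel that exposes pagemap and a process permitted to read it; for real DMA the buffer would additionally have to be pinned so it cannot move underneath the device.

    /* Hedged sketch: look up the physical address behind one of the
     * process's own virtual addresses via /proc/self/pagemap. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    static uint64_t virt_to_phys(void *vaddr)
    {
        long pagesize = sysconf(_SC_PAGESIZE);
        uint64_t entry = 0;
        int fd = open("/proc/self/pagemap", O_RDONLY);
        if (fd < 0)
            return 0;

        /* One 8-byte entry per page, indexed by virtual page number. */
        off_t offset = ((uintptr_t)vaddr / pagesize) * sizeof(entry);
        if (pread(fd, &entry, sizeof(entry), offset) != sizeof(entry))
            entry = 0;
        close(fd);

        if (!(entry & (1ULL << 63)))        /* page not present */
            return 0;

        uint64_t pfn = entry & ((1ULL << 55) - 1);
        return pfn * pagesize + ((uintptr_t)vaddr % pagesize);
    }

    int main(void)
    {
        int *buf = malloc(sizeof(*buf));
        *buf = 42;  /* touch the page so it is actually backed by a frame */
        printf("virtual %p -> physical 0x%llx\n",
               (void *)buf, (unsigned long long)virt_to_phys(buf));
        free(buf);
        return 0;
    }

With an IOMMU or AMD's Device Exclusion Vector in front of the device, that physical range would also be the only memory the hardware is allowed to touch.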

  • by Jah-Wren Ryel ( 80510 ) on Sunday July 22, 2007 @08:29AM (#19945109)

    Am I missing something here?
    Yeah. Patents.

    They are afraid that by providing documentation on interfaces, it may tip-off a patent holder to start looking for infringement where they might not otherwise have done so.

    After all, when the prevailing legal advice is to actively not look for pre-existing patents, it is inevitable that companies will independently create infringing hardware. It's like we get the worst of both worlds: patents might as well be trade secrets, since reading them is a legal minefield if you are working in the same area, but we also get government-enforced monopolies that stifle competition.

    At least the lawyers get paid for their contributions,
    and that's all that should really matter in the end. Right?
  • by Anonymous Coward on Sunday July 22, 2007 @09:41AM (#19945457)

    This makes no sense whatsoever. Why can't they just release the driver source code with a note adding that it is illegal to use the driver if you remove the restrictions?

    Because that's not good enough to appease the FCC; apparently, the legal barriers aren't enough for them--they insist upon technical ones as well.

    I had exactly this conversation with one of the EMC engineers handling FCC certifications at my prior job; he said that not only does the FCC require binary-blob firmware, but the FCC has even been known to revoke the certification of a device if it becomes known that hackers have reverse engineered the closed firmware enough to change the limits.
  • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Sunday July 22, 2007 @10:08AM (#19945593) Homepage Journal

    More than once, a driver update has come out that has *massively* boosted graphics card speed. I suspect that modern graphics cards are really just ultra-high-speed multicore floating-point coprocessors, and most of the scene logic happens on the CPU.

    I don't know precisely what the situation is here but my understanding is that at least older geforce cards specifically implemented pretty much all of Direct3D directly on-card. The driver basically handled configuration and errata - when they fuck up the hardware, they fix it in software. The driver will be more a map of their failings than a catalog of elite code.

  • by Skapare ( 16644 ) on Sunday July 22, 2007 @10:12AM (#19945633) Homepage

    Keep in mind how much of a modern graphics card's abilities are now located in software.

    Yes. But does that mean software that runs on a CPU on the graphics card, or software that runs on the system CPU, stealing cycles from it? The latter is what some manufacturers are doing, and should not be doing. For software that runs on the graphics card CPU, it doesn't need to run either in kernel space or user space ... that's another space we can call "hardware space". It has no access to kernel APIs or user APIs, nor should it have that.

    Imagine for a moment if X Windows were the universal graphical system on (nearly) all computers. It isn't, due to the likes of Microsoft, but just imagine it were. What we could have is a graphics card with a CPU on the card that implements an X Windows server. Then all you need is a way to shuffle all the messages that go between applications and X across the bus between the system CPU and the graphics card.

    Now, X might not be the best interface design, and certainly would not be for gaming; a better design certainly can be made. But even X would be faster than it is now with the server on the card (quite doable today). And still, the window manager would run in user space.

    I wouldn't be surprised in the least if the interface between CPU and graphics card was a tightly guarded secret - main bus bandwidth, and bandwidth in general, is one of the major bottlenecks on graphics systems right now.

    That interface should be nothing more than the information of what the system and applications expect the graphics card to display, an encapsulation protocol to organize it into messages and responses, and a basic way to stream it across the bus (like PCI-Express x16, for an example with high performance). Those messages may possibly be a reflection of the graphical API calls done by the applications.

    ... those restrictions would either have to be moved into hardware (expensive) or disabled (causes horrific problems with the FCC.)

    What do you think is happening now with quite a number of wireless routers [openwrt.org] being booted (or boosted) with Linux or BSD on them?

    We're well past the point where hardware interfaces can be described in half a dozen pages. We're well past the point where "hardware devices" even exist entirely in hardware. Most interesting hardware devices have complex interfaces that depend on functioning backend software.

    That is certainly something that is happening. But it most certainly is not something that is necessary. What we have in designs these days is the result of companies trying to cut their costs, consumers be damned. These are bad designs, not so much because they steal CPU power from the consumer's computer, but more so because they create these massively complex interfaces that keep changing all the time, and driver code that is so buggy it is frequently the source of systemwide crashes or data corruption. At least if that buggy code is moved into a process, it can do its thing without taking down the whole system.

    But we shouldn't have to be doing that. The hardware-specific code should be inside the hardware, running on the CPU that comes as part of that hardware. Upgrades can be provided by the system CPU as a checksummed and, if necessary, cryptographically signed blob, via a unified firmware upload interface that all devices share and that includes device-match checks to be sure the correct image is loaded. Then maybe we'll start getting some real value add out of things like video cards, instead of getting cards that result in a net loss of CPU power when added in.

  • by Dan Ost ( 415913 ) on Sunday July 22, 2007 @10:51AM (#19945873)
    The demand for linux drivers needs to reach the point at which a given manufacturer perceives that whatever IP they might expose by releasing Linux drivers is less of an impact than losing out on those sales. We are almost certainly at that point already, but most manufacturers don't realize it.

    I was under the impression that Linux had less than 2% of the desktop market. Is that really enough computers to sway the decision making of hardware manufacturers?
  • by ScrewMaster ( 602015 ) on Sunday July 22, 2007 @10:58AM (#19945931)
    Interesting. Now I have a wireless router running alternate firmware, and one of the options is to adjust output power from 1 to 251 mW, which would be way above the legal limit, I understand. As it happens, I used that option to reduce the power from the stock value, since the computers upstairs work fine with the router set to 24 mW. In my case, having that capability reduced my chances of causing unwanted interference. Not everyone wants to blow away their neighbors. Also, since the router is located in the basement and is running at minimum power, it's very hard to pick up a signal outside my house, and that's the way I want it. Encryption is all well and good but if they don't know it's there, so much the better.
  • by level_headed_midwest ( 888889 ) on Sunday July 22, 2007 @11:17AM (#19946049)
    They could, and with the signed drivers being required for at least 64-bit Vista, they could enforce it. But I'd be VERY willing to bet that if Microsoft even hinted at this, the hardware maker would just have to threaten to call the DOJ and Microsoft would backpedal faster than you could say "antitrust."

    And you also mentioned the other solution if Microsoft does make threatening noises. With both the Windows and Linux kernel driver APIs being stable, it should be trivial to make a translation wrapper between the two. MS would have their hands tied in keeping drivers off Linux if that happens, as they'd have to stop their own driver development. MS needs as many good drivers as they can get for even 32-bit Vista, let alone anything 64-bit. If I were Microsoft, I'd be helping out all I could to get a stable wrapper or translation layer so that a "universal" driver for both Linux and Vista could be made by device manufacturers. Vista notoriously lacks drivers, especially 64-bit Vista, while Linux has enough to make most things work, especially for corporate machines and in server rooms. Those areas are much more likely than the ordinary Joe to consider switching from W2K/XP to Linux rather than to Vista, or they will sit on XP like many sat on W2K until around the time XP SP2 came out. Either result would give MS no sales, and since those are also MS's most profitable markets, customers not upgrading to Vista would be a serious blow. So I think the risk that some previous 2K/XP users switch to Linux because of improved driver support is far outweighed by the increased number of Vista sales from better driver support.
  • by wellingj ( 1030460 ) on Sunday July 22, 2007 @11:24AM (#19946085)
    But it does help embedded developers who need access to an SoC's hardware like PWM, GPIO, and A2D.
    A lot of people really are missing the point here: this patch really doesn't do much for x86, but
    it does great things for SuperH, PPC, and ARM.
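
For the embedded case above, the time-honoured (and rather unsafe) alternative to a UIO stub is mapping the SoC's register block straight through /dev/mem. A hedged sketch; the base address, register offsets, and pin bit below are invented, and real values come from the SoC's datasheet:

    /* Hedged sketch: toggle a GPIO pin on an SoC from userspace by
     * mmap()ing its (hypothetical) register block through /dev/mem.
     * A UIO stub would expose the same region with far less risk. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define GPIO_BASE  0x48000000u  /* hypothetical physical base address */
    #define GPIO_DIR   0x00         /* hypothetical direction register */
    #define GPIO_OUT   0x04         /* hypothetical output data register */
    #define LED_BIT    (1u << 5)    /* hypothetical pin */

    int main(void)
    {
        int fd = open("/dev/mem", O_RDWR | O_SYNC);
        if (fd < 0) {
            perror("open /dev/mem");
            return 1;
        }

        volatile uint32_t *gpio = mmap(NULL, getpagesize(),
                                       PROT_READ | PROT_WRITE, MAP_SHARED,
                                       fd, GPIO_BASE);
        if (gpio == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        gpio[GPIO_DIR / 4] |= LED_BIT;   /* configure the pin as an output */
        gpio[GPIO_OUT / 4] ^= LED_BIT;   /* toggle it */

        munmap((void *)gpio, getpagesize());
        close(fd);
        return 0;
    }

The appeal of the new API is that the same register window can be handed to userspace through a proper per-device /dev/uioX node with interrupt delivery, instead of giving the process the keys to all of physical memory.
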
  • by networkBoy ( 774728 ) on Sunday July 22, 2007 @06:59PM (#19949459) Journal
    Also worthy to note,
    USB is framed data and half duplex, while FireWire (IIRC) is streamed and full duplex.
    I implemented a 4-way mesh in FireWire (4 PCs, each with one 4-port FireWire NIC). It rocked. Now I have GigE, but still, it was awesome: full non-blocking access from any PC to any PC.
    -nB

"Ninety percent of baseball is half mental." -- Yogi Berra

Working...