
Introduction to Linux Sound Systems and APIs

UnderScan writes "Linux.com is running an article on the Linux kernel sound subsystems, OSS & ALSA, and their APIs. Insightful commentary from both users and the projects' developers can be found in the OSNews.com comments section."
  • by Otter ( 3800 ) on Tuesday August 10, 2004 @11:17AM (#9929658) Journal
    The fact that users require an explanation illustrates precisely what the problem is with sound on Linux. Do Windows or MacOS have analogous multiple sound servers that are somehow handled better or have those platforms standardized on a single server? I don't have the slightest idea, and as a user (or a hobbyist programmer), that's precisely how it ought to be.
    • Well, OSS is now deprecated and has effectively been relegated to second place behind ALSA for some time. However, there are still issues with what people put on top of ALSA: things like esd, aRts, or whatever.
    • by molo ( 94384 )
      I don't know about MacOS, but Windows has the standard "multimedia" sound API and the "DirectSound" API. Both are, of course, different from the Win16 sound API.

      Windows goes through the same type of API revisions.

      -molo
      • by Otter ( 3800 )
        I know that there are different APIs you can write to, although not exactly how they compare to the Linux sound servers in terms of low-level access.

        But the point is that as a user, it's completely transparent. As a user, it's absurd that I should have to know that sound servers exist, let alone have to kill artsd to hear xmms.

        • by molo ( 94384 )
          Yes, I agree there. There is no reason that any application should ever start up a sound daemon that locks the device. If I wanted to run a sound daemon, I would start it myself.

          -molo
    • Do Windows or MacOS have analogous multiple sound servers

      Yes.

    • I don't think the article was meant for users but rather for developers, since it describes the different APIs; why would a user need to read about that? And as many others mentioned, yes, Windows has different APIs, but exactly as under Linux, the user does not need to think about them as long as the applications don't require it.
  • by Anonymous Coward
    I have never had sound work on any of these machines (NEC, Fujitsu, HPj).

    I used to be a team leader back on the initial Unix (read SCO) team, and one thing that we never would have let happen was letting down the Japanese customers by not supporting their hardware.

    If there is any one thing holding back Linux uptake, it is the lack of driver support for non-mainstream devices.
    • Unfortunately, it's the chicken-and-egg problem (officially known as "network externalities"). Hardware manufacturers won't write drivers until lots of people use Linux, and lots of people won't use Linux until there are drivers. What's really needed is the backing of some major corporations to drive development, like, say, IBM, or HP, or Nov.... oh wait...
    • Sound on Linux is a mess. On my system, getting sound to run under a 2.4 kernel using ALSA was trivial, but some other niceties in the 2.6 kernel didn't work, so I spent an awful lot of time under various revisions trying to get ALSA to work on 2.6.

      One day it did, through a seemingly random combination of modules and kernel options. Today I grabbed an incremental release of 2.6.7 and poof, there goes sound again!

      I don't see these problems in X or the network subsystems, or disk access so why
  • Linux sound is simplicity itself. How does this article demonstrate that Linux is ready for the desktop again?
  • by linuxkrn ( 635044 ) <gwatson AT linuxlogin DOT com> on Tuesday August 10, 2004 @12:25PM (#9930506)
    Being a Gentoo [gentoo.org] user, I've been running ALSA [alsa-project.org] for some time. While ALSA has an OSS compat API [alsa-project.org] that you can load, it doesn't give you full control of the more advanced cards (like the EMU10k1/2 chipsets).

    While the OSS compat API will give you basic sound, mixer controls, etc., sometimes you want to do more advanced things. For example, I use a TV-tuner app and wanted to control detailed mixer channels (Analog Capture Volume and Analog Playback Volume), which just couldn't be done through OSS. Looking at my app, tvtime [sourceforge.net], I found it only had OSS mixer controls. So I took a weekend to learn the ALSA API and write an ALSA version of the mixer code. It wasn't too bad, and the app works great now: I can configure any control (mixer channel) on any card I want. Hopefully the dev will include the patch I sent in the 1.0 release this month. (A minimal sketch of this kind of ALSA mixer call appears after this thread.)

    I know that this isn't an option for everyone. But I think as time goes on, more and more apps should have support for ALSA. Especially since it's in the 1.0.x range and the API has become more stable.
    • I had a lot of problems installing a new card based on the EMU10k. In the end I had to go into the kernel config, turn off ALSA and turn on OSS, and the driver was right there. Compile, reboot, sound works! However, this proves yet again that while Linux is a great desktop OS (I use it on my home machine), it is not "ready for the desktop". This is not to throw blame at anyone, but sound should be seamless, or at least very easy to configure with a GUI tool.

      I would not be against taking some developer resources away from progress on the kernel, etc, and have them work on drivers and configuration applications for sound, video, modem, network, vpn, etc.

      • I would not be against taking some developer resources away from progress on the kernel, etc, and have them work on drivers and configuration applications for sound, video, modem, network, vpn, etc.

        The problem there is that lots of people (I think including myself) see this as a distro problem, beyond the scope of the kernel, which is where most of this work has moved (I think the VPN thing isn't, though). And as you said, making applications for configuring sound, video, etc. is usually done at the distro level.
    • Two problems with ALSA are that it is Linux-specific, and that its drivers have consistently been worse than the OSS ones for me.
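
As a concrete illustration of the ALSA mixer calls described in the tvtime comment above, here is a minimal sketch in C. It is not the actual tvtime patch: the device name "default", the element name "Capture", and the 75% volume target are assumptions, and a card like the EMU10k1 exposes names such as "Analog Capture Volume" instead.

    /* mixer_set.c -- hedged sketch of setting an ALSA simple mixer element.
     * Build: gcc mixer_set.c -o mixer_set -lasound */
    #include <stdio.h>
    #include <alsa/asoundlib.h>

    int main(void)
    {
        snd_mixer_t *mixer;
        snd_mixer_selem_id_t *sid;
        snd_mixer_elem_t *elem;
        long min, max;

        /* Open the mixer on the default card and load its elements. */
        if (snd_mixer_open(&mixer, 0) < 0) return 1;
        if (snd_mixer_attach(mixer, "default") < 0) return 1;
        if (snd_mixer_selem_register(mixer, NULL, NULL) < 0) return 1;
        if (snd_mixer_load(mixer) < 0) return 1;

        /* Look up a simple element by name ("Capture" is an assumption;
         * amixer(1) lists the names your card actually exposes). */
        snd_mixer_selem_id_alloca(&sid);
        snd_mixer_selem_id_set_index(sid, 0);
        snd_mixer_selem_id_set_name(sid, "Capture");
        elem = snd_mixer_find_selem(mixer, sid);
        if (!elem) { fprintf(stderr, "element not found\n"); return 1; }

        /* Set the capture volume to roughly 75% of the element's range. */
        snd_mixer_selem_get_capture_volume_range(elem, &min, &max);
        snd_mixer_selem_set_capture_volume_all(elem, min + (max - min) * 3 / 4);

        snd_mixer_close(mixer);
        return 0;
    }

The "simple element" (selem) layer shown here is the one most applications use; the raw control API underneath it exposes every card-specific control, which is how per-channel settings like the ones mentioned above become reachable.
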
  • by XO ( 250276 ) <blade,eric&gmail,com> on Tuesday August 10, 2004 @01:11PM (#9931120) Homepage Journal
    The audio subsystems are junk. Mixing should be handled intelligently by the drivers, and standard Unix mechanisms should be used to access them: you want to play a file, you dump it to /dev/audio; you want to record something, you open /dev/mic or /dev/linein and record it.

    Additional controls should be handled by ioctls to the special devices. (A sketch of this style of device access follows this thread.)

    The sound system in Linux is a nightmare.
    • by 0x0d0a ( 568518 ) on Tuesday August 10, 2004 @03:21PM (#9932725) Journal
      Ever used a system with multiple sound cards? I have, and I'm not even an audio engineer. That approach wouldn't work very well for it.

      You want to "dump a file to /dev/audio"? What format would be used? Linear or logarithmic encoding? What if the sound card does MP3 decompression onboard -- how do you get MP3 data to it? How do you detect whether to use 44.1 or 48kHz? Am I unable to set bass enhancement from the command line? What if I want to play a MIDI? What about cards that have a front and rear stereo channel -- where does what go?

      I'm not saying that these are insoluble, just that there's a bit more complexity than you're making out.

      How would you implement "mixing should be handled intelligently"? This is something that I've thought and bitched about for a while. The ideal would be to automatically use hardware mixing up to the maximum number of channels (two on an old card I had, 32 on my current Sound Blaster Live), then fall back to software mixing. The problem is that you have to have some buffer space to mix audio, which means adding latency. When you hit 33 channels and that last channel has to be software-mixed, what are you going to do -- suddenly bump up the latency in the audio to add a buffer into the audio output line? Right in the middle of playback?
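
For reference, this is roughly what the grandparent's "dump it to the device" model looks like with the classic OSS interface, as a hedged sketch: even in this simplest case the sample format, channel count, and rate have to be negotiated via ioctls before a single write(), which is the parent's point about hidden complexity. The 440 Hz tone and parameter values are arbitrary.

    /* dsp_tone.c -- sketch of raw OSS-style device access.
     * Build: gcc dsp_tone.c -o dsp_tone -lm */
    #include <fcntl.h>
    #include <math.h>
    #include <sys/ioctl.h>
    #include <sys/soundcard.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/dsp", O_WRONLY);
        if (fd < 0) return 1;

        int fmt = AFMT_S16_LE, channels = 1, rate = 44100;
        /* Each ioctl may silently substitute the nearest value the card
         * supports, so real code has to re-check the arguments after. */
        if (ioctl(fd, SNDCTL_DSP_SETFMT, &fmt) < 0) return 1;
        if (ioctl(fd, SNDCTL_DSP_CHANNELS, &channels) < 0) return 1;
        if (ioctl(fd, SNDCTL_DSP_SPEED, &rate) < 0) return 1;

        /* About one second of a 440 Hz sine wave. */
        short buf[44100];
        for (int i = 0; i < 44100; i++)
            buf[i] = (short)(32000 * sin(2 * M_PI * 440.0 * i / rate));
        write(fd, buf, sizeof(buf));

        close(fd);
        return 0;
    }

Note that none of this addresses the parent's harder questions (multiple cards, onboard MP3 or MIDI, channel routing); it only shows that even the simple path already needs out-of-band negotiation rather than a bare write to /dev/audio.
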
  • As far as Linux sound support, I must be 5 years behind the times; and yet, it seems as though nothing has been happening on that front for the past three. What have I missed (besides the obviousness of ALSA)?
    • by 0x0d0a ( 568518 ) on Tuesday August 10, 2004 @03:14PM (#9932661) Journal
      Oh, let's see:

      * The OpenAL library came around. Does 3D audio, hardware mixing, doppler, etc. Good for games. (A minimal playback sketch appears after this comment.)

      * OSS/Free got deprecated.

      * The plethora of eight million half-assed sound servers has resolved down to just a few -- artsd is probably going away in favor of JACK (if the article is correct), which leaves the (icky) esound -- which with any luck will also give way to JACK -- and JACK itself. Finally, applications can avoid having eight million output plugins.

      * Hardware mixing in drivers became par for the course. Five years ago, everyone used OSS/Free. Today, you can play audio in xmms and *still* hear your "bong" when an error occurs without having to ram everything through a high-latency sound server.

      * Wavetable MIDI is, at long last, reasonably well supported. I remember the early days with my emu10k1-based Sound Blaster Live Value and earlier cards where I had to just use FM synth because I couldn't load soundfonts to my card. Linux was behind for years here.

      * Creative Labs is no longer ignoring Linux users.

      * At least in theory, I can use the DSP on my emu10k1 chip to do things like adjust bass.

      * There are half-decent sound applications out there. Rosegarden doesn't suck, there are synths and trackers and editors. Still not the same as a Windows or MacOS-based sound editing environment, but you can actually do sound work on Linux without coding up your own tools. :-)

      I actually really like Linux as a sound-using environment. I can plonk two or three sound cards into a Linux system and (unlike Windows) all my apps let me choose what device to play out of. I can be playing music going to speakers out of Sound Card A for everyone in a room, but still be listening to what someone's saying on VoIP over my headphones connected to Sound Card B.
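
The OpenAL item in the list above is the most code-facing one, so here is a minimal playback sketch. It is an illustration only: the tone, the NULL default device, and the mono 16-bit format are assumptions, and none of OpenAL's 3D positioning or doppler features are exercised.

    /* openal_beep.c -- hedged sketch of basic OpenAL playback.
     * Build: gcc openal_beep.c -o openal_beep -lopenal -lm */
    #include <AL/al.h>
    #include <AL/alc.h>
    #include <math.h>
    #include <unistd.h>

    int main(void)
    {
        /* Open the default device and make a context current on it. */
        ALCdevice *dev = alcOpenDevice(NULL);
        if (!dev) return 1;
        ALCcontext *ctx = alcCreateContext(dev, NULL);
        alcMakeContextCurrent(ctx);

        /* One second of a 440 Hz mono sine wave as 16-bit PCM. */
        short pcm[44100];
        for (int i = 0; i < 44100; i++)
            pcm[i] = (short)(32000 * sin(2 * M_PI * 440.0 * i / 44100.0));

        ALuint buf, src;
        alGenBuffers(1, &buf);
        alBufferData(buf, AL_FORMAT_MONO16, pcm, sizeof(pcm), 44100);

        /* Sources are positionable in 3D; the defaults leave this one
         * at the listener's position. */
        alGenSources(1, &src);
        alSourcei(src, AL_BUFFER, buf);
        alSourcePlay(src);
        sleep(1);  /* let the second of audio finish */

        alDeleteSources(1, &src);
        alDeleteBuffers(1, &buf);
        alcMakeContextCurrent(NULL);
        alcDestroyContext(ctx);
        alcCloseDevice(dev);
        return 0;
    }
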
  • by tigeba ( 208671 ) on Tuesday August 10, 2004 @01:46PM (#9931567)
    Perhaps Linux developers should take a whack at emulating/copying OS X Core Audio. It might provide an incentive for application developers to port their audio apps to Linux.
    • JACK [sf.net] uses a callback-based API much like Core Audio. (A minimal client sketch follows this thread.)

      Basically every high-end app (e.g. ardour [ardour.org], JAMin [sf.net], Rosegarden [rosegardenmusic.com], Hydrogen [sourceforge.net], etc.) uses it.

      You can get really low latency using it if you have good sound hardware (e.g. RME Hammerfall for extremely low latency or even an M-Audio Delta 1010). Something like an SBLive! (what I have) will need a period size of 2048 bytes with two periods to avoid underrunning (I have a Dual AthlonMP 2800+ so I'm pretty sure it's the sound card...). Stuff like QJackCtl [sf.net] and Jack-Rack [arb.bash.sh] make controlling Jack easy.

      Getting realtime mode working for a normal user can be tricky, but Debian makes it really easy. Just install the realtime-lsm package and build the realtime-lsm-source package for your kernel and all users in the audio group gain the ability to run applications realtime (at least with the default config). It could be made easier (mainly by prebuilding the realtime-lsm modules for the stock kernels) but GNU/Linux pro-audio is still mostly for hackers and adventurous people right now. Stuff like PlanetCCRMA [stanford.edu] and AGNULA [agnula.org] are aiming to make everything work out of the box. I have yet to try either (I use Debian so PlanetCCRMA is useless for me) but it looks like DeMuDi has everything set up for recording out of the box.


      • I agree that low latency is quite important, and anything that furthers that goal is a good thing. Even the really good native systems still aren't quite up to the task of recording lots of live musicians, which is why for now I use Pro Tools HD on OS X.

        I was recommending implementing CoreAudio (or, heck, DirectSound) instead of something merely similar because it would decrease the level of effort for the developers of the applications (and, very importantly, plugin developers). It would just be a case of recompiling.
        • I think CoreAudio could very easily be implemented on top of JACK because the APIs are similar (well, at least as far as being callback-based and realtime-capable).

          CoreAudio would be more worthwhile than DirectSound because OS X apps are more Unixy than Windows apps and the OpenSTEP/Cocoa stuff for the GUI is mostly implemented by GNUStep [gnustep.org]. OS X is way closer to GNU/Linux than Windows and I'm betting it would be tons easier getting an OS X Cocoa app working on GNU/Linux than a Windows app.

          I don't real
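
To make the callback model this thread keeps referring to concrete, here is a minimal JACK client sketch in C. The client name, port name, and 440 Hz tone are arbitrary choices; jack_client_open() is the current entry point (older releases used jack_client_new()). JACK calls process() once per period from its realtime thread, which is what makes the low latencies discussed above possible.

    /* jack_sine.c -- hedged sketch of a callback-based JACK client.
     * Build: gcc jack_sine.c -o jack_sine -ljack -lm */
    #include <jack/jack.h>
    #include <math.h>
    #include <stdio.h>
    #include <unistd.h>

    static jack_port_t *out_port;
    static jack_nframes_t sample_rate = 48000;  /* overwritten in main() */
    static double phase = 0.0;

    /* Realtime callback: runs once per period and must not block,
     * allocate, or otherwise risk missing the deadline. */
    static int process(jack_nframes_t nframes, void *arg)
    {
        jack_default_audio_sample_t *out =
            jack_port_get_buffer(out_port, nframes);
        double step = 2.0 * M_PI * 440.0 / sample_rate;

        for (jack_nframes_t i = 0; i < nframes; i++) {
            out[i] = 0.2f * (float)sin(phase);  /* quiet 440 Hz sine */
            phase += step;
        }
        return 0;
    }

    int main(void)
    {
        jack_client_t *client = jack_client_open("sine", JackNullOption, NULL);
        if (!client) {
            fprintf(stderr, "could not connect to jackd (is it running?)\n");
            return 1;
        }

        sample_rate = jack_get_sample_rate(client);
        jack_set_process_callback(client, process, NULL);
        out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE,
                                      JackPortIsOutput, 0);
        if (jack_activate(client)) return 1;

        /* Audio runs in JACK's thread; wire the port to the hardware
         * outputs with qjackctl or jack_connect to hear it. */
        sleep(10);
        jack_client_close(client);
        return 0;
    }
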
