The Full Story on GStreamer

JigSaw writes "Gnome's Christian Schaller has written an intro/status document on GStreamer, the next-generation multimedia development framework for Unix. Christian explains what it is, why it is important, its use on both the desktop and the server side, its use in embedded Linux, Gnome and even KDE. He also discusses its current competition and the plans for the future."
  • by Telex4 ( 265980 ) on Tuesday January 13, 2004 @06:23PM (#7967609) Homepage
    There was one snippet of news buried in the last page that I think is pretty big:

    "Another interesting development is that we currently got a team of about 7 french students who are going to make a GStreamer-based non-linear video editor as the final year project."

    Seven students running a final-year project suggests it ought to be good. Does that mean we might finally have some really high-quality video editing software other than Cinelerra [heroinewarrior.com]? If so, that's brilliant!

    I like the fact that GStreamer support is now filtering even into non-GNOME apps like JuK (in KDE). Good stuff :)
    • by Goonie ( 8651 ) * <robert,merkel&benambra,org> on Tuesday January 13, 2004 @07:00PM (#7968009) Homepage
      I wouldn't get too excited about final-year student projects.

      They are usually evaluated on their adherence to software development methodologies rather than on the actual quality of the end product, so students spend more time making the paperwork good than making the code good. Whilst this is a necessary part of building really big projects, it's not an optimal method for building small projects staffed by inexperienced part-timers who often have only a very partial understanding of the problem domain.

    • by Anonymous Coward
      Cinelerra is honestly pathetic compared to ANY non-linear video editor available on Windows or Mac.

      Even that piece of shit known as Adobe Premiere -- which Cinelerra tries and fails to imitate -- is light-years ahead. If you want to get into the billions of light-years ahead, then compare it to Final Cut Pro or Vegas Video (which Sony Pictures recently acquired from Sonic Foundry).

      Linux has a loooooooong way to go in this department, and that's no troll.

      • Cinelerra is honestly pathetic compared to ANY non-linear video editor available on Windows or Mac.

        I don't know about that. I tried Avid Free DV (OK, I concede it's a freeware program, and in the Windows world you only get what you pay for, but...) and it couldn't even draw itself on the screen properly. And it's not as if my system isn't up to it: an AMD Athlon XP 1700+ with 1GB of RAM and a fresh install of Windows XP ought to be able to run a low-end NLE.
    • by Hatta ( 162192 ) on Tuesday January 13, 2004 @11:07PM (#7970146) Journal
      "Another interesting development is that we currently got a team of about 7 french students who are going to make a GStreamer-based non-linear video editor as the final year project."

      Sounds good, but don't you think it would be better if they were comp sci students?
  • by Anonymous Coward
    That was a good read. I hope Rhythmbox and Totem will be included in GNOME 2.8 at the latest, once GStreamer 0.8 is out and more mature.

    BeOS had the Media Kit and it was great; it made cool stuff easy to do in apps. Check out Cortex, for example: http://www.bebits.com/search?search=cortex [bebits.com] and its surrounding plugins.

    • Rhythmbox is set for inclusion in GNOME 2.6. My guess is that Rhythmbox 0.7/0.8 will ship at that point (I'm not sure whether 0.7 is their CVS version or will be the next stable release). If you read the article, you will notice that a lot of work right now is going into GStreamer 0.8 for GNOME integration. The current CVS of Rhythmbox also depends on the current development version of GStreamer.
  • Hmm... (Score:5, Informative)

    by My Secondary Account ( 741235 ) on Tuesday January 13, 2004 @06:25PM (#7967642) Homepage
    The "pipeline" he describes is somewhat similar to what you can do with VST plugins in Windows. E.G., you could hook up a microphone, then attach some distortion filters and eventually terminate the pipeline at some output device. All in all, this is a great article in my opinion. For the technically inclined, there are much more in-depth docs here [gstreamer.net], including all the gory API details.

    • Actually, this screenshot [gstreamer.net] shows it's much more than that. Also, MozStreamer [mozdev.org] is another good example of the flexibility offered.
    • Re:Hmm... (Score:5, Interesting)

      by CoolVibe ( 11466 ) on Tuesday January 13, 2004 @06:44PM (#7967865) Journal
      Ever seen artsbuilder? aRts (despised and misunderstood by many) is also more than meets the eye.

      You can plug in modules and synthesise any sound you like through plugins and modules, not unlike the pipeline editor in GStreamer.

      • Despised with a reason: while extremely promising, I never managed to get aRts working properly - and it consumed 20% CPU - nor have I found correct, up-to-date programming documentation for it.

        However, it's not cross-toolkit, so it's difficult to consider it as a goal.
        • I seem to recall that the guy who looks after the Debian packages for MPlayer recently began building them with aRts support, because aRts no longer depended on Qt. So although it doesn't care much about GTK, I don't really think you can say it isn't cross-toolkit. Or am I missing something?
      • Is it something like Jeskola Buzz [buzzmachines.com]? (Buzz is for Win32 though, and not the most stable program around.. save often)
      • Re:Hmm... (Score:5, Insightful)

        by Spy Hunter ( 317220 ) on Tuesday January 13, 2004 @09:21PM (#7969250) Journal
        aRts is despised for several reasons:

        1. It has problems with some Linux sound drivers, causing it to skip or play brief loud screeching sounds at random, even when nothing is playing
        2. High CPU usage by other programs causes sound to skip
        3. It keeps the sound device open, so non-aRts programs display cryptic error messages or freeze
        4. It uses a non-trivial percentage of the CPU even when idle
        5. Video support was tacked on and sucks
        6. It adds a large, very noticeable delay to all sounds

        aRts is a synthesizer; KDE shouldn't have adopted it as a general-purpose sound system. Some of its problems have been partially addressed in newer versions, but it is still a bad choice. The way it SHOULD work is that the kernel provides unlimited virtual sound channels and lets an unlimited number of programs open the sound device at once. It should use hardware mixing if possible and software mixing if it must, but it should keep latencies absolutely as low as possible. Then apps could choose whatever sound framework they want -- aRts, GStreamer, JACK, or plain old ALSA -- and everything would Just Work.
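
        To put "plain old ALSA" in perspective, here is a hypothetical minimal playback program. Whether several copies of it can play at once depends on the card's hardware mixing or on a software mixer (such as the later dmix plugin) sitting behind the "default" device; note that snd_pcm_set_params is a convenience call from alsa-lib releases newer than anything current in 2004. Compile with -lasound -lm.

        #include <alsa/asoundlib.h>
        #include <math.h>

        int main(void)
        {
            snd_pcm_t *pcm;
            static short buf[48000 * 2];   /* one second of 16-bit stereo */
            unsigned int i;

            /* Open the "default" device rather than hw:0 directly, so any
             * mixing layer underneath stays in the picture. */
            if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
                return 1;
            if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                                   SND_PCM_ACCESS_RW_INTERLEAVED,
                                   2, 48000, 1, 500000) < 0)
                return 1;

            for (i = 0; i < 48000; i++) {  /* a quiet 440 Hz test tone */
                short s = (short)(3000 * sin(2 * 3.14159265 * 440 * i / 48000.0));
                buf[2 * i] = buf[2 * i + 1] = s;
            }
            snd_pcm_writei(pcm, buf, 48000);   /* count is in frames */
            snd_pcm_drain(pcm);
            snd_pcm_close(pcm);
            return 0;
        }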

        • Re:Hmm... (Score:3, Interesting)

          by egreB ( 183751 )
          Since I don't know a whole lot about kernels, that sounded like a good idea to me. But aren't OSS and ALSA at least partly kernel-driven? Sound drivers are in the kernel, and I would think /dev/mixer is a kernel thingy. Is the mixing done in userland? Is it supported by hardware mixers? Are there any arguments against doing it in the kernel? Could it be done as a nasty hack? How do other OSes handle this (BeOS, good ol' MacOS, Darwin, Windows, or HURD, if they think about sound at all)? Would this be differ
          • Re:Hmm... (Score:3, Informative)

            by iantri ( 687643 )
            FreeBSD does this -- in fact, that's where his terminology comes from, I believe.

            http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/sound-setup.html [freebsd.org]

          • Re:Hmm... (Score:3, Interesting)

            by Spy Hunter ( 317220 )
            Right now the only mixing done in the ALSA/OSS drivers is done by the sound card hardware itself. All cards can mix together at least one channel of sound output from programs plus MIDI, CD Audio, Mic input, and sometimes other things. Some cards can mix together two or three channels of sound output from programs. But many can't, so the result is that only one program can output sound at a time (which is mixed together with the MIDI/CD/MIC channels on the card, not by the kernel). It is part of the ker
    • flaming skull heads

      you must check out LADSPA they're beautiful
      so beautiful
      I die
  • Video playback I could resize on the fly!
    Call me lazy, but I hate putting in all those switches for mplayer.

  • I'm surprised that KDE users would use something that started w/ a "G" instead of a "K"....and vice versa ;-)
    • But KDE users are (probably) already using GNU/Linux. ;)
    • KDE already uses the gphoto2 library in Kamera. There's no reason to write low-level digital camera support twice.

      Should they develop a new compiler called k++ and license it under the KPL?

  • by Anonymous Coward
    GStreamer, Gnome, and KDE : WHY KDE IS WRONG

    KDE was cooked up in the same country that started both World Wars, embraced philosophies of destruction and hate (such as Nazism and Fascism), and spawned evil murderous maniacs such as Adolf Hitler.

    By using KDE you are implicitly endorsing these hatemongering people and their genocidal dogmas.

    A true patriot uses GNOME, written in the land of the free and the home of the brave. By using Gnome you are re-affirming your American ideals and supporting the open d

  • by vanyel ( 28049 ) * on Tuesday January 13, 2004 @06:28PM (#7967686) Journal
    The article wasn't clear on whether GStreamer addresses this problem, but one of the things I've been looking for is X-server-based audio. I have a variety of types of systems and try to run one or two desktops. Since Windows and Mac won't do remote display natively, they're the ones I'm currently stuck sitting at, and my Unix systems, being capable of it, are off in another room somewhere; I get to them using a local X server or ssh. But that means no Unix multimedia, because there's no audio.
  • Breaking news (Score:2, Interesting)

    by Anonymous Coward
    Multimedia on Linux, Film At 11.

    And here I believed the rabid zealots that told me in no uncertain terms that Linux was a viable multimedia platform... 3 years ago. 3 years ago Linux wouldn't detect most soundcards.

    OT really, but you guys should think more before blathering it up in the trenches. Coming back with a zany "we have that, fucker" and pointing people to a page for a project maintained by a kid in Romania barely out of alpha that's been abandoned for 2 years as an alternative to a mature, sta

    • Then what else is a computer, to Joe Average Windows User?

      And who in the hell was saying that "Linux is a viable multimedia platform" three years ago?

      --grendel drago
    • Re:Breaking news (Score:2, Informative)

      by gnulxusr ( 729574 )

      The computer is not just a browser, office suite and MP3 player.

      No, it's definitely not just that. It's also a SoftSynth [fluidsynth.org], a MIDI sequencer with audio capabilities, a DV video capture system [schirmacher.de] with editing and effects facilities, a multitrack HDR [sourceforge.net], and probably more. No, I see no multimedia this side of the mountain.

  • 100% CPU Usage (Score:4, Interesting)

    by Anonymous Coward on Tuesday January 13, 2004 @06:31PM (#7967720)
    My experience with GStreamer on Red Hat 9 with GNOME has got me annoyed. Nautilus uses GStreamer to make thumbnails, and when browsing directories I'll often see GKrellM screaming about 100% CPU usage. A quick 'top' almost always reveals the culprit as gst-thumbnail.

    One of my terminal windows looks like this:
    killall gst-thumbnail
    killall gst-thumbnail
    killall gst-thumbnail
    killall gst-thumbnail
    killall gst-thumbnail

    KDE has a runaway process killer. Why doesn't GNOME?

  • by plastik55 ( 218435 ) on Tuesday January 13, 2004 @06:37PM (#7967792) Homepage
    But where are the places where GStreamer innovates over the DirectShow APIs? The basic concept seems to be the same. DirectShow even has a filter graph editor which GStreamer's stream editor is eerily reminiscent of.
    • If Microsoft's system works, there's nothing wrong with imitating the good parts of it. Just because it's Microsoft doesn't automatically make it a dumb idea - just most of the time...

      Innovation for innovation's sake is a waste of time.

      • GStreamer was not a simple re-implementation of something which had been done before. I guess we did what Steve Ballmer claims free software never does: we innovated. The basic design and basic idea came from a research project at Portland University, research work in which GStreamer project founder Erik Walthinsen participated. It was loosely modeled on DirectShow.

        I think he was referring to this paragraph... saying that it was not implementing something done before, and then following that by saying the

    • by TheRaven64 ( 641858 ) on Tuesday January 13, 2004 @08:06PM (#7968648) Journal
      I was wondering the exact same thing. I used DirectShow a couple of years ago, and it was a joy to work with. The example code was very clear (for example, it ships with a null-transform filter, which takes any input and then outputs it unchanged. This could easily be specialised to create a new filter). At the time, I looked at GStreamer as a possible alternative, but found that it was nowhere near mature enough for real use.

      I would like to see a cross-platform framework of this kind. DirectShow is Windows only. QuickTime runs on Windows and Mac, but not anything else. Are we going to see GStreamer on other platforms? The transform filters should be relatively easy to port, as should the file readers. The only parts that should require more than a quick recompile (I would imagine) will be source and sink filters that interface directly with hardware (cameras, microphones, speakers, displays).

      Oh, and one more question: DirectShow exposes an interface to graphics hardware allowing filter developers to take advantage of hardware IDCT and MC features relatively easily. Does GStreamer have an equivalent?
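
      On the null-transform point: GStreamer ships a stock pass-through element called "identity" that hands buffers on unchanged, which plays roughly the same teaching role as the DirectShow sample; a real custom filter would normally subclass GstBaseTransform instead. A small illustrative sketch, using element names and calls from current GStreamer releases rather than the 0.8 era under discussion:

      #include <gst/gst.h>

      int main(int argc, char *argv[])
      {
          GstElement *pipe, *src, *passthru, *sink;

          gst_init(&argc, &argv);

          pipe     = gst_pipeline_new("demo");
          src      = gst_element_factory_make("audiotestsrc",  "src");
          passthru = gst_element_factory_make("identity",      "noop");
          sink     = gst_element_factory_make("autoaudiosink", "sink");
          if (!pipe || !src || !passthru || !sink)
              return 1;

          /* Drop the pass-through element between source and sink; swapping
           * "identity" for a real filter element is a one-line change. */
          gst_bin_add_many(GST_BIN(pipe), src, passthru, sink, NULL);
          if (!gst_element_link_many(src, passthru, sink, NULL))
              return 1;

          gst_element_set_state(pipe, GST_STATE_PLAYING);
          g_usleep(3 * G_USEC_PER_SEC);   /* play the test tone for 3 seconds */
          gst_element_set_state(pipe, GST_STATE_NULL);
          gst_object_unref(pipe);
          return 0;
      }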

  • Nice. (Score:4, Interesting)

    by Bluesman ( 104513 ) on Tuesday January 13, 2004 @06:38PM (#7967802) Homepage
    This really has come a long way from when I checked it out a while back.

    It's a fantastic idea, even though it's been around for a while. Being able to apply different filters to an audio stream is really cool -- it's Unix pipes for audio.

    What would be great is if GNOME standardized a bunch of filters like this for everything. Imagine being able to apply a tar and then a gzip filter in this manner, or perhaps a .doc decode filter and a grep, then out to a .csv. All file conversion could be handled by the environment rather than by individual programs, which is messy and inconsistent.

    Gstreamer is a big step in the right direction. Way to go guys.
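
    The "conversion filters provided by the environment" idea does exist in embryonic form outside GStreamer. As a purely illustrative sketch, GIO (from much later GLib releases than anything available here) lets you stack a gzip "filter" onto any output stream, much like piping through gzip; compile with pkg-config's gio-2.0 flags.

    #include <gio/gio.h>
    #include <string.h>

    int main(void)
    {
        const char *text = "hello, filters\n";
        GOutputStream *mem = g_memory_output_stream_new_resizable();
        GZlibCompressor *gz = g_zlib_compressor_new(G_ZLIB_COMPRESSOR_FORMAT_GZIP, -1);
        GOutputStream *gzout = g_converter_output_stream_new(mem, G_CONVERTER(gz));
        gsize written = 0;

        /* Anything written here lands gzip-compressed in `mem`. */
        g_output_stream_write_all(gzout, text, strlen(text), &written, NULL, NULL);
        g_output_stream_close(gzout, NULL, NULL);

        g_print("%" G_GSIZE_FORMAT " compressed bytes\n",
                g_memory_output_stream_get_data_size(G_MEMORY_OUTPUT_STREAM(mem)));

        g_object_unref(gzout);
        g_object_unref(gz);
        g_object_unref(mem);
        return 0;
    }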

    • Before everyone jumps all over me and thinks I'm being facetious: I don't mean applying tar and gzip via the command line, but rather through a GUI, as in the article. :-)
      • Sounds like a KIO slave to me. KDE at least has this support.

        I've never used GNOME, but I'm surprised they don't have something like this -- surprised enough to suggest you look again, because it seems more likely that they have it and you haven't seen it than that they don't.

        • Mmm... not exactly. I have a specific interface in mind. KIO slaves seem to accomplish what I'm talking about, but I'd like to see it in a more general sense.

          For example, imagine if all Unix commands were represented graphically, so that you could arrange them like the screenshot shown in the article. Then say you could highlight multiple input files and send them through the pipe you created. It would save a lot of typing, and be easier for a beginner to understand.

          Further, maybe you could save the sp
        • KIO slaves are now being ported to work with GNOME and the command line, which is great news.
    • um, it's called a |

      If I had to use a graphical interface to apply a gzip filter after tar I would probably shoot myself rather soon.

      • Just a simple example; obviously you could get more complex. I'd imagine the different stages of compilation -- from different languages, to different object file formats, to a final executable -- would be something you could do much more easily in a GUI than in a Makefile.

        And the point isn't to make it easier for people who are already adept at the command line, it's for people who can't be bothered with a CLI. Otherwise there would be no GNOME at all.
  • GStreamer and MPlayer (Score:4, Interesting)

    by JarekC ( 544383 ) on Tuesday January 13, 2004 @06:40PM (#7967819)
    Recently I read a short but interesting discussion [gnome.org] of GStreamer in the context of MPlayer, triggered by the announcement of a Bonobo component wrapping MPlayer.

    I wonder what will happen when MPlayerG2 comes out of the incubator. Will the two projects simply compete, or will they work out some way to integrate with and support each other?

    • by IamTheRealMike ( 537420 ) * on Tuesday January 13, 2004 @06:50PM (#7967922)
      I wonder what will happen when MPlayerG2 comes out of the incubator. Will the two projects simply compete, or will they work out some way to integrate with and support each other?

      To be honest, the two don't really compete... mplayer is purely playback focussed. It has no pretensions as a multimedia framework, or anything of the sort. GStreamer is all about being a powerful multimedia framework.

      It's easy to forget how much code sharing goes on between these projects. They are all liberally licensed, import each other's code all the time, swap codec implementations, etc. This isn't like standard capitalist competition where people constantly reinvent the wheel in order to stay ahead -- if mplayer has a codec the GStreamer guys want, licensing issues notwithstanding, they'll go and take it.

  • GStreamer summarized (Score:3, Interesting)

    by Jeffrey Baker ( 6191 ) on Tuesday January 13, 2004 @06:45PM (#7967870)
    I think gstreamer is a cute hack, but it is also *exactly* what Chris Pirazzi warned against in his "Video I/O on Linux: Lessons Learned from SGI" [lurkertech.com].
    One can build fancy mechanisms which have network transparency, compression/decompression, format conversion, graph-based dataflow management, etc. on top of a well-designed video I/O API, and such mechanisms might be useful for some applications. But SGI's big mistake--one which hampered development of useful audio/video applications for years--was to try to build and offer those fancier mechanisms to developers instead of offering a simple API that worked on multiple video devices.

    Substitute audio for video when necessary.

    • by Vann_v2 ( 213760 ) on Tuesday January 13, 2004 @06:58PM (#7967984) Homepage
      From the article:


      GStreamer provides you with an easy to use API that lets you focus on your actual application instead of worrying about what kinds of things happen at the lower levels.
      • Obviously you didn't read the SGI article. His point is that such encapsulating APIs make the details inaccessible, thereby frustrating attempts to make any decent video applications.
        • by starseeker ( 141897 ) on Tuesday January 13, 2004 @11:02PM (#7970107) Homepage
          There are two ways to read that - a) It's impossible to abstract the details in any useful way or b) all APIs to date have made bad choices in how to abstract things.

          Looking over the SGI article, I get the sense that it isn't really possible to have an abstract API that works on a wide variety of hardware, and that you need to communicate with hardware vendors about a large number of issues. Of course, we all know how well vendors like to pay attention to open source developers, so let's not worry about being able to do that for a while. GStreamer would have to be a force to be reckoned with before they pay any attention at all.

          I don't really understand some of these objections, but I suppose it's because I'm not a graphics guru. For example, the square vs. non-square pixel issue: can't a library be defined that knows what various devices do about that? AFAIK, all one can do with video shot for one pixel aspect when displaying at another is add black bars at the edges to make it size correctly, or do some kind of averaging to add or subtract pixels from a dimension. I suppose if you're combining video from different sources the latter would have to be attempted. Even so, I can't see that this is necessarily a bad thing to have in the API -- if someone has a different method for scaling between systems, just implement it as an option for the resizing part of the API. I know if I were developing a video editor/manipulator I sure wouldn't want to deal with that myself if I could help it. Assuming there are standard ways of dealing with these issues, and maybe even defaults used most of the time, why should all the various video editors out there have to worry about it? Solve it once, define it as a standard option for the auto-adapt API, and move on.

          Am I missing something? I don't know much about GStreamer, but I don't see why they can't do things in a way that allows detailed specification of behavior if the developer is after something specific, and uses the standard or accepted best solution to common video compatibility problems if not told otherwise. Maybe GStreamer could even play a part in standardizing some of the hardware insanity SGI had to deal with, if it becomes successful and powerful enough. It's open source, but you never know. Ogg Vorbis is starting to get hardware support, and I would never have guessed that either.
    • I think gstreamer is a cute hack, but it is also *exactly* what Chris Pirazzi warned against in his "Video I/O on Linux: Lessons Learned from SGI".

      Every difficulty with SGI's Video Library mentioned in this article has been overcome by GStreamer (as well as by most other Linux-based media frameworks), and GStreamer is used in working applications. Many of his observations seem rather quaint and outdated.
  • by Anonymous Coward on Tuesday January 13, 2004 @06:47PM (#7967890)
    Although the main integration isn't planned until 4.0, the upcoming 3.2 will support GStreamer in JuK, the new music player for KDE. It will replace the slow and buggy Noatun. I've tried it, and it's really quite good. It's one of the reasons why KDE 3.2 will rock.
    • JuK doesn't replace Noatun in 3.2; Noatun will still be there. JuK is a media library, Noatun a media player that plays audio and video. They are barely similar applications.

      GStreamer is looking more likely to be adopted by KDE. aRts is a little unmaintained and not well liked. GStreamer is good, has few new dependencies (aRts depends on glib too, as it happens), and is supported by freedesktop.org. All things going for it. But frankly, I don't know who will decide this sort of thing.
  • As in pd [pure-data.org].
  • or licensed in a way that makes them uninteresting to most free software developers (like Helix)

    I used to work as an open-source developer with the helix [helixcommunity.org] engine (still do, in fact), and didn't find the licensing to be that much of a turn-off. It's kinda like the NPL, or the GPL with the special rights for the Licensor outlined in section 3.

    You can read the Helix license mentioned in the article here: RPSL [opensource.org]

    • The Helix license is GPL-incompatible, meaning that Helix can't be linked with GPL code.

      Mozilla fixed that problem with dual licensing, as did Trolltech for Qt. Real should fix it as well.

  • ..read the headline quickly and saw "G-Stringer?"
  • Isn't this similar to JACK [sourceforge.net]? From what I gather, GStreamer extends it to video, also.
    • It is similar to JACK in that you have a signal graph, but there are more differences than similarities. JACK is for communication between different processes: it allows applications to send data to each other, so that you can have a toolbox of applications that all work together. With GStreamer the whole graph is in-process; it is designed to give individual applications the capability to do complex media tasks. The two are complementary.

      • OK, I'm going to take advantage of your post to inquire about this stuff. My question is only sorta on-topic for this story, but it is inspired by it (and by this sub-thread).

        I'm a Linux audio user, not a programmer. I use JACK primarily because JACK is required to use Rosegarden4 (MIDI sequencer), which I use with my soundcard's onboard synth as a composition tool, to play with tunes before trying them out with the band I'm in. And I understand that it's necessary to use Ardour effectively, which I'd

        • by paulbd ( 118132 ) on Tuesday January 13, 2004 @11:03PM (#7970111) Homepage

          I am JACK's primary author. I hope I can explain some of the basics to you.

          1. What the hell is a signal graph (re: your response above)? Of what I've read about JACK, that's the first time I've seen that expression? Or by "signal graph" do you simply mean "a graphical environment for stringing together a sequence of signal processing modules into an overall application"?

          When audio programmers talk about a signal graph, they are using the term to refer to a rather abstract conceptualization of what is happening in software (sometimes in hardware). The model is of a series of "nodes" each of which processes a signal in some way. Each node is connected to one or more other nodes, for input and/or output. You can build a very simple graph, such as some kind of node that reads from a disk file and sends output to another node that delivers it to an audio interface. Or you can build incredibly complex graphs in which the signal is routed all over the place, possibly even including through feedback loops.

          JACK is merely one of many systems that use the model of a signal graph internally; GStreamer is another.

          2. You say that JACK is for communications between different processes. My understanding was that JACK was for communication between different sources/sinks of audio signal. Those could be processes, but they could also be hardware devices. For instance, when I start jackd prior to running rosegarden4, I tell it to use the ALSA driver for output. In fact, I thought that it could really be anything that could provide or accept an audio signal (even files, network URLs, etc.), since some sort of "virtual device" could be specified for them. Is that not correct? And if it is correct, how is that different from Gstreamer then?

          Gstreamer is really a toolbox to be used by a SINGLE program to construct processing pathways (aka "signal graphs"). It offers no facilities (other than connections to JACK) that allow MULTIPLE processes to route data among themselves.

          As to what a JACK client does with the data it receives -- that is entirely up to the client. We have some clients that stream to an icecast server, other people are working on UDP- and RTP-based networking, others write data to disk, etc. But JACK knows nothing about this; it's entirely internal to each JACK client.

          3. What do you mean by "with Gstreamer the whole graph is in-process"? Are you saying that you use the graphical signal path editor to create an application out of modules, but when you're done it links (in the post-compilation sense) the modules together into a single executable which has the capability described by the network? Because otherwise -- if the modules do their work independently and pass data between each other -- that sounds like processes talking to processes, just like with JACK. What am I missing?

          As I mentioned above, Gstreamer is used by a SINGLE application to build processing pathways. It is of no use whatsoever in building multiprocess pathways, other than its connection to JACK.

          4. My understanding of the whole point of JACK is that it's for low-latency audio work. But it sits between processes, or between devices and processes, or whatever; how can that be lower-latency than if JACK wasn't there at all. For example, rosegarden4 uses JACK to pass data to the ALSA driver for my soundcard. How can that be lower-latency than if rosegarden4 just talked to the ALSA driver directly?

          For a situation involving only one process (such as Rosegarden), it's certainly possible for direct access to provide marginally lower latencies than with JACK. But when I say "marginal", I really mean it. On a modern CPU, and with the right kernel, you can basically run JACK as low as your audio interface can handle. The reason that JACK's design matters for latency is two-fold. First of all, it imposes the correct model of int
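
          For readers who have never seen one, here is roughly what a JACK client looks like: you register ports and a process() callback, jackd calls you once per period, and where the output goes is decided by whoever connects your port. This is purely an illustration using the current libjack API (jack_client_open and friends, which postdates parts of this thread); compile with -ljack -lm.

          #include <jack/jack.h>
          #include <math.h>
          #include <unistd.h>

          static jack_port_t *out_port;
          static jack_nframes_t sample_rate;
          static double phase = 0.0;

          /* Called by the JACK server once per period, in its realtime thread. */
          static int process(jack_nframes_t nframes, void *arg)
          {
              jack_default_audio_sample_t *out =
                  jack_port_get_buffer(out_port, nframes);
              jack_nframes_t i;
              for (i = 0; i < nframes; i++) {
                  out[i] = 0.2f * (float)sin(phase);        /* 440 Hz tone */
                  phase += 2.0 * 3.14159265 * 440.0 / sample_rate;
              }
              return 0;
          }

          int main(void)
          {
              jack_client_t *client = jack_client_open("tone", JackNullOption, NULL);
              if (!client)
                  return 1;

              sample_rate = jack_get_sample_rate(client);
              jack_set_process_callback(client, process, NULL);
              out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE,
                                            JackPortIsOutput, 0);

              if (jack_activate(client))    /* the graph starts calling process() */
                  return 1;

              sleep(30);                    /* stay alive while others patch us in */
              jack_client_close(client);
              return 0;
          }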

          • Paul, I would like to use this thread to ask your opinion on a subject: do you think JACK is suitable for processing video?

            I know they are two completely different subjects, but as I think Ardour does video, how hard/useful do you think extending LADSPA/JACK to video would be?

            • It would be fairly easy to extend JACK to work on video. It would just require someone to write a new data type.

              I don't know what the basic data type of video work is. In audio, it's 32-bit floating-point samples, and LADSPA works on those. If someone changed it to work on whatever video uses, then LADSPA could be used too. Someone would have to get the video editors to support JACK and LADSPA, though.

              Ardour doesn't support video yet. It does have a feature to support animatics, which is almost video, but not what
        • Just a few things to add to Paul's post.

          2. You say that JACK is for communications between different processes. My understanding was that JACK was for communication between different sources/sinks of audio signal. Those could be processes, but they could also be hardware devices. For instance, when I start jackd prior to running rosegarden4, I tell it to use the ALSA driver for output. In fact, I thought that it could really be anything that could provide or accept an audio signal (even files, network URL
  • pipe dream? (Score:3, Interesting)

    by Doc Ruby ( 173196 ) on Wednesday January 14, 2004 @01:49AM (#7971019) Homepage Journal
    "GStreamer is that of a pipeline system which your media streams through"

    Linux programs are filters in pipelines with data streaming through them; GStreamer is a special case for media. "Programming" GStreamer can be done through a pipeline editor, a flowchart for GStreamer components. How about a general-purpose flowchart programming tool for Linux?

    Perl, for example, is internally compiled into a graph of primitives. How about a program that parses Perl into graphs, enforces Perl graph grammar in a GUI, and reconstitutes Perl code for saving? The three-tier form has Perl code as the data, Perl graphs as the "business" layer, and flowcharts as the presentation. Is there such a thing? For Python? Ruby (hint ;)?
