KDE GUI

Coding with KParts 119

wrinkledshirt writes "IBM DeveloperWorks has an article here about coding with KParts, KDE's component architecture. It's a little thin, but given that no single component technology has claimed victory yet for Linux, I thought this might be an interesting read for some. It might also lead to some good discussion comparing people's experiences with KParts, ORBit, Bonobo, or Kylix's CLX..."
This discussion has been archived. No new comments can be posted.

  • Truth is, all software houses use each other for R&D.

    KParts, truth must be told, is good ol' ActiveX components, albeit cleaner. The ATL with its COM architecture is one of my favorite Win32 tools, alongside MFC. KParts is an integral part of KOffice, in the same way COM components are part of MS Office.

    No one is bashing the KDE team for going with the devil, just because they have seen the devil put its weight behind component-based software and make it work. Miguel also sees the same thing, and he knows that the devil would never have adopted .NET if it didn't work.
    • Perhaps the biggest complaint with ActiveX was all the overlapping, confusing interfaces and APIs that one had to become familiar with in order to properly code a component (especially during its early days). Couple that with obtuse, confusing documentation (which has improved over the years) and you had (still have?) a real stomach bleed when implementing a new control back then. -- Version 2 is always cleaner, nicer and more consistent ;)
      • ActiveX development is shiteasy today. ATL makes it a breeze. In fact, when I do GUI stuff, I try to make most of it ActiveX components so that I can reuse it later. And the good part is that the overhead of ActiveX components is very small. Sometimes it's even better than "ordinary" windows/window classes, since you can make ActiveX controls windowless.
        • I must admit - for all the MS fliers I get in the mail, I missed the one that was titled: "ActiveX: Shiteasy!".

          I liked the ones with the dining bill that showed how much they overcharge you for licensing... although I think their intent was more along the lines of their antipiracy campaign.

          --
          Evan

    • No one is bashing the KDE team for going with the devil

      As well, nobody is bashing Linus for implementing an operating system, although Microsoft implemented one before.

      There is a slight difference between creating something with a similar scope and a 1-to-1 clone. If you don't get it yet, don't worry - in a few months/years, Microsoft will explain...

    • I agree with most of what you say but don't be too hard on the Linux community for bashing Miguel. There is a distrust of Microsoft that isn't without reason.

      I think that if Microsoft plays fair, .NET could be of great benefit to the Linux community. But I can't help noting that Microsoft has a history of entering into collaborative arrangements when it is advantageous for them and then screwing their partners after they've got what they wanted or needed.

      Miguel is whole-heartedly supporting a technology that is controlled by a company that is, at best, untrustworthy. Before going too far down that thorny path, I would like to see a realistic risk assessment done.
      • Name one time Microsoft has ever played fair. Better yet, name one time that Microsoft released a technology whose sole purpose wasn't to destroy a company or technology.

        I believe the port of MS Office to the Mac was the only thing Microsoft has ever done that wasn't officially intended to destroy a competitor. They release either to destroy or to make money. DirectX is an example of a free SDK whose sole purpose was to knock Apple out of the multimedia market. This is why MS does not like OpenGL and would like to replace it totally with DirectX once it matures enough for CAD users to depend on it. Why? To make it difficult to port CAD apps and games to other OSes.

        I speak not as someone who is paranoid but as someone who judges with a grain of salt. Every, and I mean every, business that has ever done any dealings with Microsoft has been burned. I bet MS is working with Mono now only to knock Linux out of the desktop market by patenting .NET and then suing Mono. Remember that this is Microsoft we are talking about. Go read the article right above this one with the CEO of the EFF. He mentions that those who support it don't really understand what its real intention is.
  • kparts is very cool (Score:5, Interesting)

    by SirSlud ( 67381 ) on Friday February 22, 2002 @02:31PM (#3053043) Homepage
    I just have to say how impressed I am with anything KDE-related. They always seem to make sure they have the horse in front of the cart when it comes to their libraries and subsystems.

    I've done a few things with Qt and KDE (before KParts, unfortunately), and I was blown away by the cleanliness of the architecture of KDE's codebase and subsystems.

    KParts in action is extremely cool.

    BTW, I suppose ActiveX controls are the Windows equivalent (they communicate over COM and DCOM as I recall?) .. while I can't speak for the technical equivalences, I can say that they simply don't seem to get used enough in the Windows world .. ie, that centralized functionality seems to run counter to the competitive software marketplace, which is a real shame.

    I think KParts, technical superiorities/inferiorities notwithstanding, is far more useful because open source developers are far more interested in centralizing functionality and more likely to attempt to reduce redundancy in codebases and application bases. That's why I think KDE is such a winner, and will benefit from a component-based architecture far more in the long run. (IE is a component too, and MS claims they can't even 'unglue' it from the OS .. haha, what's the point of components then? :P)
    • by throx ( 42621 )
      I can say that they [ActiveX Components] simply don't seem to get used enough in the Windows world

      Umm... ActiveX components are impossible to escape in the Windows world. Almost everything you use is an ActiveX component. VB programs are made exclusively of ActiveX components. Windows itself is a massive library of ActiveX components. IE is just a collection of ActiveX components. Office is a collection of ActiveX components.

      Are you sure of what you are saying here?
      • COM, not ActiveX (Score:3, Informative)

        by fm6 ( 162816 )
        I think you're confusing ActiveX with COM. COM is the basic encapsulation API with a client-server model. ActiveX is a component model based on a specific kind of in-process COM server. It's true that VB components are simply ActiveX objects. (Even some basic commands that were once built into the compiler are now methods of ActiveX objects. This is hidden by making the object references a default.) And it's also true that COM is pretty pervasive in Windows apps and Windows itself. But it's mostly other forms of COM. Office, for example, relies heavily on COM Automation [microsoft.com] to support the VBA scripting engine. You can also script Office apps from other languages that support COM.

        ActiveX does enter the picture, though, because it's pretty simple to write ActiveX components that handle a lot of the COM busywork. Delphi [borland.com] and C++ Builder [borland.com] ship with utilities that help generate such components.

        • Read the books again. ActiveX objects are a proper subset of COM objects. Anything that has the first 3 virtual methods as AddRef(), Release() and QueryInterface() is (by definition) an ActiveX object.

          COM Automation (usually called OLE Automation) is based on ActiveX objects which implement specific interfaces. In other words, I do not have ActiveX confused with COM. It seems you do.
          • Read the books again. ActiveX objects are a proper subset of COM objects.
            OK, to be properly technical, I should have said "ActiveX objects are a kind of COM object." (Not as erudite-sounding as "proper subset", but let's not assume everybody in our audience took set theory.) Maybe that's not quite the same as my assertion that ActiveX is implemented using COM, but it doesn't contradict it either.
            Anything that has the first 3 virtual methods as AddRef(), Release() and QueryInterface() is (by definition) an ActiveX object.
            Whose definition is that? And what does it mean? If you mean any COM object that implements reference counting and interface discovery is an ActiveX object, then you're saying that all COM objects are ActiveX objects! These are the two basic features of the whole object model!

            This page [microsoft.com] has the only online copy I could find of the chart that describes the basic parts of COM technology and how they relate. The terms are out of date (ActiveX is still referred to as "OLE Controls") but the basic relationships haven't changed.

            I think there's a certain confusion because MS loves to tweak its terminology (just as they love to tweak everything else). And the marketing wonks don't help by taking every new technical term and finding every use -- and abuse -- possible as a brand. (No .NET peanut butter yet, but it wouldn't surprise me.)

            • First paragraph of "ActiveX Controls" in the MSDN library states:

              An ActiveX control is essentially a simple OLE object that supports the IUnknown interface. It usually supports many more interfaces in order to offer functionality, but all additional interfaces can be viewed as optional and, as such, a container should not rely on any additional interfaces being supported.

              OLE Controls are not the same thing as ActiveX controls. Fundamentally ActiveX is a simplification of the original OLE to the essential IUnknown interface with everything else optional.

              I am indeed saying that all COM objects are ActiveX objects. That is the fundamental point of my argument. In fact, I screwed up in what I wrote: that proper-subset thing was the wrong way around. Teach me for being erudite...
              • First paragraph of "ActiveX Controls" in the MSDN library states:
                I've seen that page, and I nearly quoted it as an example of how MS confuses the issue. The sad fact is there are a lot of bad tech writers out there, and a lot of them work for MS.

                Read that page again very carefully. It describes IUnknown as a defining characteristic of ActiveX controls -- then adds the fact that all other COM objects implement them too!

                • It describes IUnknown as a defining characteristic of ActiveX controls -- then adds the fact that all other COM objects implement them too!

                  Actually, it doesn't say that at all. There is no place on the page (or any other page that I've found) that suggests that there is any type of COM object that is not an ActiveX control. There is no such thing as "all other COM objects" because there are no "other" COM objects.

                  What it does say frequently is that all COM objects are ActiveX controls because they implement IUnknown. There is no confusing of the issue because it's very simple - if a piece of code supports IUnknown then it is an ActiveX control.

                  I believe you are confusing yourself by thinking that "OLE Control" and "ActiveX Control" are the same thing. They aren't. An OLE Control is an ActiveX Control (because it supports IUnknown), but an ActiveX Control isn't necessarily an OLE Control because there is no requirement for an ActiveX Control to support the dozen or so interfaces required to be an OLE Control.
                  • by fm6 ( 162816 )
                    Read this [microsoft.com].
                    • Huh?

                      Nothing on that page suggests anything different to what I've been saying. Perhaps you can help out my obviously limited comprehension skills by pointing out exactly what I'm supposed to be looking at that proves something other than an ActiveX control being anything that implements the IUnknown interface?

                      If you go back to the definition of an ActiveX control from the MSDN library [microsoft.com] rather than trying to find gems among the marketing slides on Microsoft's site you always come back to the core definition:
                      "An ActiveX control is essentially a simple OLE object that supports the IUnknown interface."

                      If you could point me to some site that says there are any more requirements than the implementation of IUnknown and the ability to self-register, then perhaps you may have a case.
                    • We already talked about that page. You're reading everything into a sloppy reading of a sloppily worded sentence. There's no explicit statement there that only ActiveX uses IUnknown. He does explicitly refer to an ActiveX control as "a simple OLE object" (my italics).

                      This was such a bad piece of writing that I didn't go past the first few paragraphs until you trotted it out for the third time. Apparently you didn't either, or you would have found "an ActiveX control--or COM Object for that matter" and "An ActiveX control, by virtue of being a COM object".

                      Also, this web page is just a tutorial. The COM programmer's guide is here [microsoft.com]. It's also pretty sloppy, but it does do a better job of describing the relationship between COM and ActiveX.

                    • We have talked about that page and the best I can understand it, you dislike it because it disagrees with the point you are trying to make (whatever that is). If you could tell me why you think it is technically incorrect and post some other link to a definition of what an "ActiveX Control" is that disputes the definition on this page then please do.

                      You would also do well to actually read what I am saying, because I think you are missing the point. Nowhere have I said that only ActiveX uses IUnknown. All I am saying is that if a piece of code supports IUnknown then it is an ActiveX object (possibly among other things). None of the pages you have linked to have disputed that definition. Just because something is technically an "ActiveX Control" doesn't mean it isn't an OLE Control, an MTS component or anything else.

                      I read the entire page several times. I've also read dozens of books on the subject which concur with the statements on that page. Apparently you simply aren't understanding what I'm saying, or are just trolling because I am in complete agreement that an "ActiveX Control" is a COM Object (it's virtually synonymous). In fact, here's some quotes for you:

                      "It just barely passes as an ActiveX Control because it implements the IUnknown interface" Professional VC++5 ActiveX/COM Control Programming, p73
                      "...an ActiveX Control is any specialized COM object that supports the IUnknown interface and self-registration." The Active Template Library: A Developer's Guide, p291
                      "OC96 changed the definition of a COM-based control from one of those gargantuan COM classes implementing a ton of interfaces to a COM class implementing only IUnknown." MSJ, The Visual Programmer, April 1999

                      (there's plenty more - that's just what I found on my desk at the moment)

                      The link you posted does not address "ActiveX" at all - it was written before the time that Microsoft renamed COM objects to be ActiveX controls in a fit of marketing hysteria. The only place ActiveX is mentioned inside that topic is to link back to the page which we were originally discussing.

                      If you are so adamant that I am mistaken, please educate me and let me know what more an object needs to be an "ActiveX Control" besides IUnknown and the ability to self-register? Please provide a link and a quote from the page that you got the definition from.

                      In the end, I think you'll find I'm correct in that any object that supports IUnknown and is self registering is an "ActiveX Control".
  • ... and TINO (Score:1, Informative)

    by Anonymous Coward
    TINO = TINO is not OLE

    Yet another Linux component thingie (YALCT)

    No kidding - see this article [yahoo.com]

  • by kurisudes ( 258390 )
    The problem with all of this is how heavy it is to run a KDE program. I am a fan of fast, lightweight programs, even though I do have a decent computer. I prefer blackbox, rxvt and so on. While I appreciate the "beauty" of KDE and the work the project does to attract users to Linux on the desktop, I hate how long it takes to load up, and then don't even think about using an app without running the whole KDE environment.

    My only problem with it is how slow it is, but I guess that's a little unfair given the features that it comes with...
    • Re:kde the beast... (Score:4, Informative)

      by EricKrout.com ( 559698 ) on Friday February 22, 2002 @02:51PM (#3053213) Homepage
      I understand where you're coming from, although to be honest, my high-end Athlon chip and DDR RAM don't mind KDE at all.

      There was a survey at dot.kde.org [kde.org] about users' #1 concerns about the desktop environment. About one out of four said they were concerned with its speed.

      That being said, you should definitely read (or at least skim through) this article [www.suse.de] about C++ applications on the desktop.

      Eric Krout
      • Re:kde the beast... (Score:3, Informative)

        by __past__ ( 542467 )
        Would have been a good idea to link to the solution [bottou.com] for the problem described in this article, IMHO.
        • BTW, I understand absolutely nothing about what is done in my RH 7.2 about that!
          Neither prelink nor objprelink seems to be installed or available as a package. Very strange, because the guy making prelink is from Red Hat.

          Even stranger, here is a trace of the exec:
          $ LD_DEBUG=statistics konqueror
          02593:
          02593: runtime linker statistics:
          02593: total startup time in dynamic loader: 197802673 clock cycles
          02593: time needed for relocation: 191792475 clock cycles (96.9%)
          02593: number of relocations: 20241
          02593: number of relocations from cache: 33562
          02593: time needed to load objects: 5807702 clock cycles (2.9%)

          Considering the low number of relocations, I assume there is some (obj)prelinking done somewhere. From what I discussed once with bero, I think it's fully prelinked. But there is no /etc/prelink.conf or .cache, and no /usr/sbin/prelink..

          Moreover, where can I find a proper website about prelink, with a FAQ, HOWTO and so on? All I can get is a mail archive, which is completely clueless.
          To add to my confusion, I found that [freelists.org].
          Well, as you've understood, I'm lost, and I can't even access people.redhat.com to download the damn thing. I don't know why, but the connection's refused. Clues, anyone?

          And, finally, did someone try to compare prelink and objprelink, or the combination of the two, to see which method is the fastest/most efficient?
      • Re:kde the beast... (Score:2, Informative)

        by con ( 149685 )
        You may also be interested to know that KDE CVS now has an experimental simplified malloc implementation which supposedly can give a performance improvement of up to 30%!
      • Re:kde the beast... (Score:1, Informative)

        by Anonymous Coward
        You should also have a look at current KDE3 CVS -- there is a configure switch --enable-fast-malloc=full, which makes applications quite a bit faster!
    • It works fine on a computer that I believe has 64MB of RAM (I don't recall at the moment, though). Running Red Hat 7.1, it works very well as a backup X computer.
    • Also give the betas of KDE3 a shot. They are a LOT snappier on my PIII-500/256RAM box. A lot of work has gone into speeding up KDE, and I believe there's more coming.
  • disadvantages (Score:3, Informative)

    by theCURE ( 551589 ) on Friday February 22, 2002 @02:52PM (#3053219) Homepage
    There is one thing that strikes me in the listing of disadvantages: "A single component can bring down the entire application." That can be annoying; I wonder if they'll find a way to implement something to catch falling components should there be a problem. Personally, I'd rather have 2 windows doing 2 different things, as that's a favorite X feature. Nothing against KDE, I like/use it. I really like the krash handler too. A lot.
    • Re:disadvantages (Score:2, Interesting)

      If you want components isolated, then you'll probably have to run them in a child process, or at the very least in their own thread.

      In the process case, calls to the component and events from it must be marshalled in some way across the process boundaries. Pretty much like RPC, but local (on Windows it's called LPC - Local Procedure Calls; on Solaris it's called doors). This is CPU-intensive. Sharing data between components will also be very time/CPU consuming, since data must be marshalled back and forth. And how can you share data between processes, with all changes visible in all processes, without very high overhead?

      In the thread case, calls to the component and events from it must be synchronized in some way. An event, for instance, cannot be delivered until the receiver is ready. Of course you could use event queues, but you'd have to protect them with mutexes. With a lot of components there will be a lot of lock/unlock traffic, and it will slow things down. And the worst-case scenario is when one component crashes while holding a lock.. who's gonna unlock it?

      Designing a threaded architecture requires a lot of thought and skill. I'd say threading and multiprocessing should only be used when absolutely necessary. Bad use of either technology will punish you later...
    • I don't think that's too big a deal. If you have an application with multiple components, dealing with one of those components failing is very difficult -- you can try to make the best of a bad situation, maybe restart the component, try to continue the application with a hole in it, or something else... but that won't change the fact that a part of the application crashed.

      So I don't think it's a big deal if the component takes down the entire application, as opposed to having the component just leave the application in a semi-usable state (or place undue burden on the main application developer to consider all failure conditions and create recovery procedures for each one).


    • Just C++; that is the main disadvantage for me.

      Nearly everything can interface to C; nearly nothing can interface to C++ :-)

  • by Karma Sucks ( 127136 ) on Friday February 22, 2002 @02:54PM (#3053231)
    After all the hype and noise about .NET, tomorrow's technology tomorrow, I'm glad there is now a little focus on some of the great technologies we have *today*.

    KParts is modest. It doesn't try to solve all the problems of the programming community. But it's *damn* good at what it's good at.

    Like they say, the right tool for the job. Only rarely will you find a one-size-fits-all solution.
  • Also, see... (Score:2, Informative)

    by joeytsai ( 49613 )
    Gnome's gnotices [gnome.org] also has an article about designing and debugging CORBA applications [linuxjournal.com], using the great application Ethereal [ethereal.com] as an example.
  • by DGolden ( 17848 )
    We have another good component architecture - Xt. It's just a shame a lot of young whippersnapper open source developers these days don't take the time to understand the X window system before plunging into wheel-reinvention, then complain how "X is slow", when they've just gone and effectively "misprogrammed" it, treating it like the dumb framebuffer that it isn't.

    • Xt is not the same sort of "component architecture" that KParts is. Xt is for X Window widgets; KParts is for generically embedding entire applications (or parts of applications) within other applications (or KParts). This embedding includes things like transparently merging menu and toolbar entries. It goes far beyond Xt.
    • No way! I wrote my own widget set on Xlib back in 1996. Real men don't need bloat like Xt ;)

      Actually I did. Didn't use anything but Xlib. But it's gone now - you really need a capable and *standard* widget set (eventually comprising dialogs like "file open" etc.) to make apps that are nice (not just usable).

      Today I use Gtk--, which I find is a really neat wrapper around Gtk+ (lets me use widgets from a decent language).

      I suppose someone ought to go thru the code in the used libraries today, rip things out, and make them run efficiently on the sparc20 which will be the only system they would be allowed to use for the task. For some reason, it seems it's a lot more fun to add new features, than to spend a year wading thru piles of rotten code... Wonder why :)

  • by markj02 ( 544487 ) on Friday February 22, 2002 @03:43PM (#3053615)
    no single component technology has claimed victory yet for Linux, just thought this might be an interesting read for some

    And no single component technology will "claim victory". Different applications have different needs. For some applications, CORBA interoperability is absolutely essential.

    KParts in particular is further held back by the fact that it is covered by the GPL: commercial developers do not like being nickel-and-dimed just to put their software on Linux, in particular since the industry standard is free. And KParts is (at least perceived to be) biased towards C++.

    It's nice what the people over at KDE are doing. But don't expect world domination.

    All these "component architectures" are really mostly driven by limitations of C and C++ anyway. In the long run, the issue of component architectures will largely go away, as desktop software development shifts to Java and C#. Yes, Java and C# still require some conventions for components, but they already have most of the hard parts implemented as part of the language.

    • KParts is licensed under the LGPL. You can link to it even from commercial applications, just like the rest of the KDE libraries.
      • That may be technically correct. However, KDE is based on Qt. Can you give any examples of GUI components written using KParts that don't use Qt? Is there an infrastructure for writing KParts components using Gtk+? Because if there isn't (as I believe there isn't), the distinction is academic.
        • Except that if you're doing commercial software, you can buy the commercial qt license. All commercial software costs money to develop.

          I don't see how having to pay for a qt license amounts to nickel and diming....
    • The goal of KParts is certainly not to dominate, but to do the job well in KDE. Without KParts there would be no embedding in Konqueror or KOffice (etc.), no GUI merging, and so on.
      There's nothing in KParts that attempts to dominate the world - or even other component models. Why would there be?

      KParts itself won't dominate. However, if KDE ever dominates, then KParts will by association dominate too ;)

      David (co-designer of KParts).
  • Kylix CLX is QT (Score:1, Interesting)

    by vtechpilot ( 468543 )
    First, one should observe the difference between visual and non-visual components. Non-visual components aren't really relevant to this: they are invisible, so they don't get drawn to the screen. Visual components, on the other hand, are drawn to the display and have their own ways of getting there. Visual CLX components are really just Qt with some code added to make them easy to use from Object Pascal.

    This is why KDE is listed as a requirement for Kylix: if they have KDE, then they have Qt. Also, if you compile a CLX app in Delphi, your Windows EXE will require a Qt DLL (I forget which one.)

    Anyway, the point here is that CLX doesn't really belong in this discussion.

    • by Anonymous Coward
      The CLX framework itself is completely independent of Qt. The Visual CLX components use the Qt runtime to render GUI components, so yes, a CLX TButton ends up calling a Qt button, but it's definitely not Qt or even a Qt wrapper. 90% of CLX classes (everything non-GUI) don't even link with or call Qt in any way. The CLX component framework, architecture, and library is modeled after VCL, which sits above the OS APIs and GUI APIs. VCL originated in Delphi in 1995 and was the basis for the JavaBeans component spec (called Baja, I think) back in 1996. Java didn't have a component spec, and Borland was making one modeled after VCL, so the Baja ideas were "donated" to Java. CLX is most definitely relevant to this conversation, and its ideas and origins are actually the model for even the .NET component framework. Take a look at the .NET component framework and it will look suspiciously like CLX. Almost even spooky. Why? Not too strange when you consider that the designer and architect for .NET was one of the Borland architects of the original Delphi VCL which CLX is based on.

    • Incorrect! (Score:3, Informative)

      by fm6 ( 162816 )
      This is why KDE is listed as a requirement for Kylix because if they have KDE then they have QT.
      I'm pretty sure there's no such requirement listed. Actually, I'm quite sure -- I wrote the release notes for Kylix 1. You need Qt to run CLX GUI applications. You need recent versions of standard Linux libraries to run all other CLX applications. That's it.

      If you think a dependence on Qt means a dependence on KDE, you don't understand what either is. Qt is a cross-platform library. KDE is a desktop environment based on Qt. (Interesting, since KDE is Unix/Linux-only; I gather the KFolk just liked the Qt API.) No Qt application needs KDE to run, unless it specifically uses the KDE API.

      Kylix is itself a CLX application, so it needs Qt to run. It does not require KDE or any other desktop or window manager. When I put this in the release notes, a reviewer objected to the implication that you can run Kylix without a window manager. In point of fact, you can -- I tried it. Not very practical, but it is possible.

      About that Qt DLL. Yes, you need it to run CLX apps under Windows. This is not a precedent! I can't think of any non-trivial Windows application that doesn't require at least one aftermarket library to run. Check your System32 directory. See any .BPL files? These are Borland Package Libraries, a kind of DLL. Their presence means you've installed an application written using Delphi or C++ Builder.

  • by Anonymous Coward

    By the way, despite what someone said earlier, the CLX framework itself is largely independent of Qt. The Visual CLX components use the Qt runtime to render GUI components, so yes, a CLX TButton ends up calling a Qt button, but CLX is definitely not Qt or even a Qt wrapper. The VCL uses the Windows GDI and Win32 APIs to render GUIs, JavaBeans use AWT or Swing, .NET uses WinForms (which uses Win32), and CLX uses Qt. About 90% of CLX classes (everything non-GUI) don't link with or call Qt in any way. The "CLX is Qt" idea is outdated and not accurate.
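    The layering described above (a framework class that owns the high-level state and merely delegates rendering to a toolkit) can be sketched roughly like this. The class names here are purely illustrative, not the actual CLX or Qt API:

    ```cpp
    #include <memory>
    #include <string>

    // Hypothetical toolkit-level button (stands in for a Qt widget).
    struct ToolkitButton {
        std::string caption;
        void render() { /* toolkit-specific drawing would happen here */ }
    };

    // Framework-level button: owns its own state and only delegates
    // rendering to the toolkit, the way Visual CLX classes delegate
    // to Qt. A non-visual framework class would have no toolkit
    // member at all, which is why most of the library never touches Qt.
    class FrameworkButton {
    public:
        explicit FrameworkButton(std::string caption)
            : caption_(std::move(caption)),
              backend_(std::make_unique<ToolkitButton>()) {}

        void show() {
            backend_->caption = caption_;  // push state down to the toolkit
            backend_->render();
        }

        const std::string& caption() const { return caption_; }

    private:
        std::string caption_;                     // framework-owned state
        std::unique_ptr<ToolkitButton> backend_;  // toolkit delegate
    };
    ```

    Swapping `ToolkitButton` for a different backend would leave `FrameworkButton` and its users untouched, which is the whole point of the delegation.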
  • Don't you mean Koding with Kparts?

    Maybe not ;).

  • It seems like IBM is making a real effort to be involved in the development of Linux

    they sure are investing a lot of time..
  • by evilviper ( 135110 ) on Saturday February 23, 2002 @04:55AM (#3056441) Journal
    It's bugged me from day one, and yet it seems nobody else notices... Either everyone in the world is a moron, or people who work in the software field think that they live in a world apart from everyone else...

    Programming is engineering... Some engineers make an engine run, some make web browsers. The basic concepts are the same. Yet, the software industry bastardizes those principles.

    Engineers design things to be cheap and efficient. The software industry designs things to be cheap as hell. It's the equivalent of an engineer designing a plastic car engine: reckless disregard for performance and stability, focusing solely on price.

    It's terrible. That's the reason why Windows is unstable, slow, et al. The developers do everything just so it works... What's the average software engineer's solution to unstable programs? More code on top to do more error checking. Don't even think of coding correctly in the first place.

    KDE developers use C++. Why? Because you can QUICKLY and EASILY write bloated programs, and that's all they care about. They don't care about writing a quick piece of code, just that it works. I shouldn't single out KDE, since this practice is ubiquitous in software design.

    Everyone decides how they can write things easier, not how they can write something that will work better, run faster, use less memory, etc. It's just not how things are done. So we go out and buy new computers that suck up more power and more money out of our pockets because the programs are getting slower. HOW THE HELL DID WE GET HERE? Why is it acceptable to spend thousands every couple years? If you focused that money on good programs, you wouldn't need to upgrade. If you have programs that run nicely in 16 Megs of RAM, why spend thousands to upgrade?

    Software practices like this are going to come to a head. They have to. It's going to reach a point where people either refuse to buy new computers because they don't have the money, or want to spend it on something else. The other possibility is power concerns. We may reach a point where traditional cooling methods are not enough, or power cannot be generated fast enough to supply all these computers. Then the computer dilemma will solve itself. More efficient hardware will accompany more efficient software. Perhaps one company will come forward and say 'Our software will run more quickly on the computers you are throwing away than the software of our competitors on those new computers they said you had to buy.' Then things will change. Then KDE will no longer be a glutton for CPU power. Then people will say, dammit, I'm sick of this upgrade shit. I'm keeping my computer. It works just fine. I'll use the software that runs nicely on it and not touch KDE until they clean up their code.

    Then, we will all use XFce as our desktops, Dillo to browse the web, OpenBSD as our OS, low-power Laptops as our platform, and do away with Mozilla, Windows, Athlons/P4s, Gigantic CRTs, and any programs that eat up more memory or CPU cycles than they need.

    The only question to ask is, how far off are we? How long will it be before the real world invades the computer world and smacks some sense into all the developers, engineers, and geeks? I think we will all be much happier, and who wouldn't be happier when they're spending less money on their PC habit?
    • Of course Mozilla is slower than Dillo, because it has more features. Of course the KDE architecture, which makes life easier for programmers, has a cost in performance. This is obvious. No one is forcing *you* to use them.

      I'll tell you why I'll keep using the KDE framework to write my applications. First, the design is very clean: DCOP, KParts, KIO, etc. are simply awesome. Second, it makes it easier and faster to write applications with the GUI features that most people nowadays want (yes, you are not one of those). Third, I don't really care about the small performance cost I have to pay; it is perfectly acceptable with today's computers, at least with mine and all of my friends'. As has been said many times, CPU time is way cheaper than programmer's time.

      And finally, this is free software. I'm already giving my time to write software for free; if there are some people still using Pentiums at 100 megahertz who will bitch about my "bloated" software, I couldn't care less. There are other programs they can use, anyway.

      Sergio
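      The "clean design" argument for component frameworks like KParts is essentially that an application shell can embed a viewer or editor without compile-time knowledge of the concrete class. A toy sketch of that idea in C++ (none of these names are real KDE API; the interface and registry are invented for illustration):

      ```cpp
      #include <functional>
      #include <map>
      #include <memory>
      #include <string>

      // A toy "part" interface, loosely in the spirit of component
      // frameworks like KParts. Not real KDE API.
      struct Part {
          virtual ~Part() = default;
          virtual std::string mimeType() const = 0;
          virtual std::string open(const std::string& url) = 0;
      };

      // Registry mapping MIME types to part factories, so a shell
      // application can load the right viewer for a document without
      // ever naming the concrete class.
      using Factory = std::function<std::unique_ptr<Part>()>;
      std::map<std::string, Factory>& registry() {
          static std::map<std::string, Factory> r;
          return r;
      }

      // One concrete part; a real one would render actual content.
      struct TextPart : Part {
          std::string mimeType() const override { return "text/plain"; }
          std::string open(const std::string& url) override {
              return "viewing " + url;
          }
      };

      // The shell's only entry point into the component system.
      std::unique_ptr<Part> createPartFor(const std::string& mime) {
          auto it = registry().find(mime);
          return it == registry().end() ? nullptr : it->second();
      }
      ```

      The indirection has a small runtime cost (a map lookup, a virtual call), which is exactly the flexibility-versus-overhead trade the two posters are arguing about.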
      • CPU time is way cheaper than programmer's time

        For some reason many people actually believe this. The fact of the matter is that it's just not true. First off, the salary of the programmer is distributed over several copies of the program. Secondly, your product would be worth far more (people would pay more) if it worked 10x faster than a competing product. The problem with free software is just as you've said: you've got no incentive to improve the program, it's just not fun, is it?
      • Mozilla is slower than Dillo because the Dillo people are trying to keep Dillo small. Netscape 4.x is nearly as featureful as Mozilla but much faster.

        XFce is smaller and faster than CDE while having nearly all of its features, plus many features of its own that CDE doesn't have.

        Good programs are just good programs.
    • by Anonymous Coward
      Either everyone in the world is a moron, or people who work in the software field think that they live in a world apart from everyone else...
      ...or maybe you live in a world apart from everyone else. The point of a component model like KParts or anything else is to have a set of working parts optimized for robustness and rapid development. Sure, maximum efficiency is nice, and if you want to code a nonportable program in assembler to run on a soon-to-be outdated hardware platform, go for it. By the time you've finished with your ultra-efficient "for" loop, only one of a thousand or so routines you'll need to write in order to have software that people will pay for, the rest of us inefficient fools will have written packages that are out the door and keeping our jobs.

      You try to sound like you know what software engineering is, but if you think you can utterly disregard things like reusability, ease (speed) of implementation, and non-techie friendly interface design (all of which come at a cost to efficiency), you have a lot of learning to do. Engineering is about making trade-offs between a lot of conflicting ideals, not just the two or three you happen to decide are most important.

      Speaking in non-programming terms, I'm sure the auto companies can design cars with >60 mpg fuel efficiency, if that were the only criterion they used. Of course, they might not have the pickup that paying customers want; they might be so light they don't meet federal crash safety regulations; maybe they pollute the air worse than current engines do. And maybe they'd be so noisy and inconvenient that people won't buy them. When you factor in real-world (e.g., market) considerations, there's a reason why people don't solely optimize for the one value you think supersedes all others.

      Oh, and one more thing... it's perfectly possible to write highly efficient C++ code if you have to; all the low-level C operators are there, and if you keep a tight rein on the number of virtual functions you call, you can keep processor overhead under control. But that (like a lot of your post) is neither here nor there.
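      The virtual-function point can be made concrete with a small sketch: the same accumulation written once through a virtual interface (one indirect call per element) and once as a plain C-style loop the compiler can trivially inline. The names are made up for illustration; both versions are ordinary C++:

      ```cpp
      #include <cstddef>

      // An abstract interface forces dynamic dispatch on every add().
      struct Accumulator {
          virtual ~Accumulator() = default;
          virtual void add(int x) = 0;
          virtual int total() const = 0;
      };

      struct SumAccumulator : Accumulator {
          int sum = 0;
          void add(int x) override { sum += x; }
          int total() const override { return sum; }
      };

      // Sums through the interface: flexible, but each element costs
      // an indirect call unless the compiler can devirtualize it.
      int sum_virtual(const int* data, std::size_t n) {
          SumAccumulator acc;
          Accumulator& iface = acc;  // force virtual dispatch
          for (std::size_t i = 0; i < n; ++i) iface.add(data[i]);
          return iface.total();
      }

      // The "low-level C operators are there" version: no dispatch,
      // nothing to stop the optimizer from inlining the whole loop.
      int sum_direct(const int* data, std::size_t n) {
          int sum = 0;
          for (std::size_t i = 0; i < n; ++i) sum += data[i];
          return sum;
      }
      ```

      Both compute the same result; which one you want depends on whether the call site needs to be pluggable, which is exactly the trade-off being argued here.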

  • I've worked extensively with Borland's CLX. Delphi 6 (Windows) and Kylix 2 (Linux) support this cross-platform library. The idea is that you maintain a single, cross-platform source code base. There are a few major problems with this:
    1. CLX itself and its structure have MANY bugs and memory leaks. In all fairness, CLX is a first-generation effort, but it is equivalent to an alpha-test edition. I recall reading a post in borland.public.delphi.clx.using where someone had opened and closed a blank form about 100 times and generated a 6-megabyte memory leak.
    2. CLX isn't well supported by Borland. I wonder if they are planning to drop support for CLX altogether. Kylix 2 came out well under a year after Kylix 1 and included some fixes for CLX. Users were pretty steamed because Borland never released CLX fixes for Kylix 1; K2 was really a bug-fix release of K1, and Borland wanted people to pay for the updates. Borland recently released service pack #2 for Delphi 6, but there are still no CLX fixes. Borland also maintains freeclx.sourceforge.net, which is supposed to be a community effort to improve CLX. That site is basically dead, despite the fact that people actually submitted bug reports at one time.
    3. Applications written with CLX require a very large dynamic runtime library, QTINTF.DLL (or .so). This file is several megs compressed, so if your app was previously 500k, it's now 3.5 megs, and this can be a tremendous strain on your web server and bandwidth usage.
    • Almost forgot one of the most important points...

      4. Forms written in the Linux flavor of CLX and the Windows flavor of CLX appear differently at runtime. This defeats the whole purpose of having a single cross-platform code base. The optimal solution is to have separate forms for each OS.
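      One common way to handle "separate forms for each OS" from a single code base is to isolate the per-platform differences behind a small function selected at compile time. A minimal sketch (the function and the layout strings are invented for illustration, not any CLX or Qt API):

      ```cpp
      #include <string>

      // Each platform gets its own implementation of the difference,
      // chosen at compile time, while the rest of the code base stays
      // shared. _WIN32 is the standard predefined macro on Windows.
      std::string defaultButtonOrder() {
      #ifdef _WIN32
          return "OK,Cancel";   // typical Windows dialog convention
      #else
          return "Cancel,OK";   // common Unix/KDE dialog convention
      #endif
      }
      ```

      The same pattern scales up to loading an entirely different form description per platform, which is effectively the "separate forms" workaround described above.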
