Hardware Based XRender Slower than Software Rendering? 297
Neon Spiral Injector writes "Rasterman of Enlightenment fame has finally updated the news page of his personal site. It seems that the behind-the-scenes work for E is coming along. He is investigating rendering backends for Evas. The default backend is a software renderer written by Raster. Trying to gain a little more speed, he ported it to the XRender extension, only to find that it became 20-50 times slower on his NVidia card. He has placed some sample code on the same news page for people to try, to see if this is also experienced on other setups."
2D acceleration using OpenGL? (Score:5, Interesting)
Has anyone done that? Any interesting results? One would think that there's a lot of potential here...
accelerated? (Score:4, Interesting)
duh (Score:3, Interesting)
I'm more interested in using them for specific calculations. Imagine if one of these things was accidentally imbued with the ability to factor gigantic numbers. The AGP slot is just an excuse to keep us from beowulfing them over PCI-X
Re:Not enough details (Score:4, Interesting)
I got weird glitches all over the screen, most notably in the window borders and wherever windows or menus overlapped other things on the screen. There was an increase in speed, however. As you might expect, I disabled it after about 15 minutes. Ugh. I'll have another look at it when it's been fixed.
Graphics cards and computation (Score:5, Interesting)
There has been some work on using graphics cards for computation [att.com]. The tough part is figuring out how to rephrase your algorithm in terms of what the GPU can handle. You'd expect matrix math [cs.sfu.ca] to work out but people have tried to implement more interesting algorithms too. :-)
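The "rephrase your algorithm in terms of what the GPU can handle" idea can be sketched in plain Python: arrays become textures, the inner loop becomes a per-pixel fragment program, and an algorithm like matrix multiply becomes a sequence of render passes that each accumulate one rank-1 update. Everything here is illustrative pseudocode for the mapping, not any real GPU API.

```python
# Sketch: rephrasing a computation as per-pixel operations, the way
# early GPGPU work mapped algorithms onto the graphics pipeline.
# Arrays play the role of "textures"; the inner loop becomes a
# "fragment program" evaluated independently at every pixel.

def fragment_program(a_texel, b_texel):
    """The per-pixel kernel: here, one multiply step."""
    return a_texel * b_texel

def render_pass(tex_a, tex_b):
    """One 'render pass': apply the fragment program at every pixel."""
    h, w = len(tex_a), len(tex_a[0])
    return [[fragment_program(tex_a[y][x], tex_b[y][x])
             for x in range(w)] for y in range(h)]

def matmul_via_passes(A, B):
    """Matrix multiply as repeated render passes: pass k accumulates
    the outer product of column k of A and row k of B."""
    n = len(A)
    out = [[0.0] * n for _ in range(n)]
    for k in range(n):
        # broadcast column k of A and row k of B into full "textures"
        tex_a = [[A[y][k]] * n for y in range(n)]
        tex_b = [B[k][:] for _ in range(n)]
        partial = render_pass(tex_a, tex_b)
        out = [[out[y][x] + partial[y][x] for x in range(n)]
               for y in range(n)]
    return out
```

The awkward broadcasting step is exactly the "tough part" the post mentions: the GPU of that era only ran the same small kernel at every pixel, so the algorithm has to be contorted to fit.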
- Amit [stanford.edu]
Lessons from the ancient (Score:4, Interesting)
In those days of yore (only in the computer industry can one refer to something 20 years ago as "yore"...) there was the Commodore 64. It retains its place as a pioneering home computer in that it offered very good (for the time) graphics and sound capability, and an amazing 64K of RAM, in an inexpensive unit. But then came its bastard son...
The 1541 floppy disk drive. It became the storage option for a home user once they became infuriated enough with the capabilities of cassette-tape backup to pony up for storage on a real medium. Unfortunately, the 1541 was slow. Unbelievably slow. Slow enough to think that, just maybe, there were little dwarven people in your serial interface cable running your bits back and forth by hand.
Now, a unique attribute of the 1541 drive was that it had its own 6502 processor and firmware. In principle, having what amounted to a "disk-drive coprocessor" should have accelerated your data transfer. It did not. Not remotely. Running through a disassembly of the 6502 firmware revealed endless, meandering code to provide what would appear, on the surface, to be a pretty straightforward piece of functionality: send data bits over the data pin and handshake it over the handshake signal pin.
As the market forces of installed base and demand for faster speed imposed themselves, solutions to the 1541 speed problem were found by third-party companies. Software was released which performed such functions as loading from disk and backing up floppies at speeds that were many, many times faster than the 1541's base hardware and firmware could offer.
The top of this particular speed-enhancement heap was a nice strategy involving utilizing both the Commodore 64's and the 1541's processors, and the serial connection, optimally. Literally optimally. Assembly routines were written to run on both the 64 and the 1541 side to exactly synchronize the sending and receiving of bits on a clock-cycle-by-clock-cycle basis. Taking advantage of the fact that both 6502s were running at 1 MHz, the 1541's code would start blasting the data across the serial line to the corresponding 64 code, which would pull it off the serial bus within a 3-clock-cycle window (you could not write the two routines to be any more in sync than a couple of 6502 instructions). This method used no handshaking whatsoever for large blocks of data being sent from the drive to the computer, and so, in an added speed coup, the handshaking line was also used for data, doubling the effective speed.
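A toy model of that trick (the actual 6502 routines are not shown in this thread, so the bit layout here is purely illustrative): once both sides are in lockstep, the handshake line can carry data too, so each synchronized window moves two bits instead of one and a byte takes four windows instead of eight.

```python
# Toy model of a cycle-synchronized fast loader. With drive and
# computer running in lockstep, no handshaking happens mid-byte and
# the clock/handshake line doubles as a second data line.
# The pair ordering is an assumption for illustration.

def drive_send(byte):
    """Drive side: split a byte into four 2-bit pairs, one pair per
    synchronized window: (data line, clock/handshake line)."""
    return [((byte >> (2 * i + 1)) & 1, (byte >> (2 * i)) & 1)
            for i in range(4)]

def c64_receive(windows):
    """Computer side: reassemble the byte, sampling both lines in
    each window -- no handshake in between."""
    byte = 0
    for i, (data_bit, clock_bit) in enumerate(windows):
        byte |= data_bit << (2 * i + 1)
        byte |= clock_bit << (2 * i)
    return byte
```

Any byte round-trips in four windows; with one line and per-bit handshaking the same byte would cost eight windows plus the handshake overhead.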
The 1541 still seems pertinent as an example of a computer function that one would probably think would best be done primarily on a software level (running on the Commodore 64), but was engineered instead to utilize a more-hardware approach (on the 1541), only to be rescued by better software to utilize the hardware (on both).
There's probably still a few design lessons from the "ancient" 1541, for both the hardware and the software guys.
Re:2D acceleration using OpenGL? (Score:5, Interesting)
One problem is that multi-window OpenGL doesn't work that well. Game-oriented graphics boards don't have good support for per-window unsynchronized buffer swapping, so you tend to get one window redraw per frame time under Windows. (How well does Linux do with this?) Try running a few OpenGL apps that don't stress the graphics hardware at the same time. Do they slow down?
One of the neater ways to do graphics is to use Flash for 2D and OpenGL for 3D. Quite a number of games work that way internally. The Flash rendering engine typically isn't Macromedia's, but Macromedia authoring tools are used. This gives the user interface designers great power without having to program.
Well, yes (Score:3, Interesting)
As far as I know, only the Matrox G400 card has good hardware Render acceleration. NVidia's support is still experimental and rather poor. Render is still considered experimental, and speed is not yet considered to be very important. Fully accelerated support is planned for XFree86 5.
Re:Lessons from the ancient (Score:2, Interesting)
IIRC, they originally tried DMA (slave mode -- the only thing available then), and in general it was faster to pump the data out by hand.
Re:Keith IS being paid. (Score:2, Interesting)
Interesting, but how can we fund them? They don't accept donations, and they don't have a way for someone like me, who doesn't have the skills to develop XRender, to pay people who do.
Having only two people on XRender is why it's taking so long.
It takes time to talk to hardware (Score:4, Interesting)
General-purpose graphics libraries such as ours ended up spending more time dealing with the hardware's cool features than those features saved. For example, if a plotter had a 2D perspective transform built in, was it better to do the 3D projection ourselves and just feed it untransformed vectors, or to map the 3D in such a way as to let the plotter's 2D processing help out? This might require pre-computing sample data.
Also, since the plotter had 2D transforms, we had to do a lot more work, including reading the plotter's status and inverting the plotter's transform matrix to make sure that the resulting output didn't end up outside the plotter's viewport.
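The "invert the plotter's transform" step can be sketched concretely. Assuming the device transform is an affine map (2x2 linear part plus a translation -- the layout and names here are illustrative, not the poster's actual library), inverting it lets you pull device-space viewport corners back into user space for clipping:

```python
# Sketch: inverting an affine transform so a device-space viewport
# can be mapped back into user space for clipping.
# Transform: (device_x, device_y) = M * (user_x, user_y) + (tx, ty)
# where M = [[a, b], [c, d]].

def invert_affine(a, b, c, d, tx, ty):
    """Return the inverse transform's coefficients."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("transform is singular; cannot clip against it")
    ia, ib = d / det, -b / det
    ic, id_ = -c / det, a / det
    # inverse translation is -M^-1 * t
    itx = -(ia * tx + ib * ty)
    ity = -(ic * tx + id_ * ty)
    return ia, ib, ic, id_, itx, ity

def device_to_user(inv, x, y):
    """Map a device-space point (e.g. a viewport corner) to user space."""
    ia, ib, ic, id_, itx, ity = inv
    return ia * x + ib * y + itx, ic * x + id_ * y + ity
```

With the viewport corners mapped back into user space, clipping can happen before anything is sent to the device -- which is exactly the extra work the hardware transform forced on the library.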
A code analysis found that over 90% of the code and 90% of the processing time was spent preventing and dealing with input errors and handling compatibility issues.
Nowadays, it's harder in many ways with a wide variety of hardware based texturing and other rendering - do we do the lighting model ourselves, or let the HW do it? It may depend on whether we're going for speed and 'looks' or photometric correctness.
Re:accelerated? (Score:3, Interesting)
The NVidia drivers say something about Render Acceleration, as someone already pointed out. However, there is definitely a glitch somewhere. I tried the benchmark with RenderAccel both turned off and on on my GeForce 3 with the 4496 drivers and perceived no significant difference in the tests except for test 1 (11s for no accel, 2.5s for accel, 0.62s for imlib2). The rest of the tests sucked for the driver (11s, 215s, 183s, 356s for tests 2 to 5 -- both with and without render accel -- as opposed to 0.21s, 4.5s, 2.7s, 5.8s for imlib2).
I use Xinerama with the secondary display on an ATI 98 Pro (yay for college tuitions). One thing I did notice was that even in render-accelerated mode, if I drag the window to the middle, straddling the screen split, the images display on both sides (though ATI's side is scaled down even at the same resolution, for some reason). However, if I use a GL application (glxgears, mplayer -vo gl2, etc.), then straddling the screens gives only half a display, on the GeForce board. So in this case, X is either not using hardware XRender because of the NVidia drivers, or is picking the lower of the two cards' capabilities, or is doing something in between that causes the GeForce and ATI displays to differ.
I wonder if there is any way to explicitly force X to use the hardware for XRender as you can do with GL.
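On NVidia's binary driver of that vintage, the knob for this was the `RenderAccel` option in the Device section of the X config file; something like the following (the identifier is made up, and the option was documented as experimental and off by default in drivers of that era):

```
Section "Device"
    Identifier  "GeForce3"
    Driver      "nvidia"
    # Experimental in drivers of this vintage; defaults to off.
    Option      "RenderAccel" "true"
EndSection
```

Unlike GL, there was no per-application way to force the hardware path; it was a server-wide driver option.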
I actually downloaded and ran his benchmark (Score:3, Interesting)
Here are the test scores from one of the rounds:
*** ROUND 3 ***
Test: Test Xrender doing 2* smooth scaled Over blends
Time: 196.868 sec.
Test: Test Xrender (offscreen) doing 2* smooth scaled Over blends
Time: 196.347 sec.
Test: Test Imlib2 doing 2* smooth scaled Over blends
Time: 6.434 sec.
Now for the strange thing. For the first test, I watched as the program drew the Enlightenment logo thousands of times in the test window, as you would expect. The second test took about the same amount of time but drew offscreen -- again, as the test's name would indicate. However, the imlib2 test also didn't draw anything in the test window.
I got the impression (perhaps wrongly?) that Imlib2 would actually draw to the screen as well. Since it doesn't change the screen, I have no way of telling if imlib2 is doing any drawing at all.
So, I'm digging into the benchmark's code... I'll let you guys know what I find.
Re:Yawn (Score:2, Interesting)
X11 uses unix sockets (or optionally slower, less secure TCP) and shared memory.
Win32 uses shared memory and messaging.
MacOS X uses Mach messages and shared memory.
QNX Photon uses qnx kernel messages and shared memory.
The real difference is the layer at which the windowing system exists. In the case of X11, MacOS X, and Photon, the windowing system is just another process.
In Win32 it's a kernel thread (as far as I know). But still, you're sending messages from one place to another and constructing windows based on them.
Client/Server is the natural way to build a multi-application graphical environment.
Of course, there are "fake" environments which amount to an embedded video driver and some library to draw widgets (most DOS GUI apps are like this).
the usual superficial analyses of X11 (Score:4, Interesting)
It will be a while until XRender beats client-side software implementations. Furthermore, you can't just take a client-side renderer and hack in XRender calls and expect it to run fast--code that works efficiently with a client-server window system like X11 needs to be written differently than something that moves around pixels locally.
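The "needs to be written differently" point comes down to round trips: with a client-server window system, every request has a fixed latency cost, so code that issues one tiny request per pixel (the natural style for a local software renderer) is dominated by that latency, while batched code amortizes it. A toy cost model with made-up numbers:

```python
# Toy illustration of why naively porting a local pixel-pusher to a
# client-server window system is slow. All constants are invented
# for illustration, not measurements of X11.

ROUND_TRIP = 0.1e-3   # assumed: 0.1 ms fixed cost per request
PER_PIXEL = 5e-9      # assumed: 5 ns of actual rendering per pixel

def naive_cost(pixels):
    """One request per pixel: latency-bound."""
    return pixels * (ROUND_TRIP + PER_PIXEL)

def batched_cost(pixels, batch=4096):
    """Many pixels per request: latency amortized over the batch."""
    requests = -(-pixels // batch)  # ceiling division
    return requests * ROUND_TRIP + pixels * PER_PIXEL
```

Under these assumed numbers, a megapixel of naive per-pixel requests is thousands of times slower than the batched version -- which is why "hack in XRender calls" without restructuring the renderer performs so badly.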
Re:2D acceleration using OpenGL? (Score:2, Interesting)
Exactly -- on modern hardware. If you have anything less than an ATI 9700, NVidia's hottest and greatest, or one of the (really expensive, which is why I won't name them) cards from the only third vendor of OpenGL-supporting 3D cards, you are indeed screwed. Big time!
To explain what I'm talking about here: on a 2D card you can often move a full screen per vblank (OK, not true for PCI cards if you're, like me, using 1600x1200x32 ~= 7.32MB/frame -- at just 75Hz that's 549MB/second).
That's obviously impossible, right? But what if you pushed, e.g., MPEG video at 480x480 (plain VCD), not in RGB but in some other format. Let's call it YUV. Oh, I hear you say now. Yes, those are data rates the card can't complain too much about. Furthermore, it has support to stretch the image in hardware.
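The arithmetic behind those figures checks out; here it is worked through, with the video-side numbers based on assumed parameters (YUV 4:2:0 at 12 bits/pixel and 25 fps -- the post doesn't state them):

```python
# Back-of-the-envelope check of the bandwidth figures above.

MIB = 1024 * 1024

def frame_bytes(width, height, bits_per_pixel):
    """Raw size of one frame in bytes."""
    return width * height * bits_per_pixel // 8

# Fullscreen 32bpp RGB at 1600x1200, 75 Hz:
rgb_frame = frame_bytes(1600, 1200, 32)   # 7,680,000 bytes ~= 7.32 MiB
rgb_rate = rgb_frame * 75                 # ~549 MiB/s

# 480x480 video in YUV 4:2:0 (12 bits/pixel assumed), 25 fps assumed:
yuv_frame = frame_bytes(480, 480, 12)     # 345,600 bytes ~= 0.33 MiB
yuv_rate = yuv_frame * 25                 # ~8.2 MiB/s
```

So the YUV path pushes roughly a sixtieth of the data of a fullscreen RGB refresh, which is why the card "can't complain too much" even over PCI.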
But what about what we're talking about here? We're talking about plain RGB, 24/32-bit obviously. If using a 3D API you'd first have to create a quad, then *upload* the image you already have in memory to the card, and finally have the card "paint" that quad, stretched or not, onto the screen.
Let me tell you, the number of games that do this and manage to bring the frame rate down to less than 1 FPS (frame per second) for just shitty 2D work (this *includes* the mouse cursor) has been enough to tell me:
1. The world is full of idiots.
2. The designers (of that software) are not really competent enough to make the decisions they do.
3. Even if *you* happen to have an ATI9800, are you willing to buy such a card for everyone who doesn't have one?
Had these people stayed with a 2D API and only used the available 2D primitives, they'd have had a frame rate an order of magnitude (if not more) higher on all hardware.
In all, it could be fun to look at, running on a machine suitably equipped (meaning, the "baddest" and most expensive money can buy) -- but to actually use on the machines we own? I don't think so...
To even suggest using Flash for gfx IMO warrants a combination of web cameras, pieces of lead forced to a somewhat high velocity by chemical reactions, and lots of cheering people.
Re: Graphics cards and computation (Score:3, Interesting)
I've written MMX versions of algorithms (blending, intensity etc) that are 5 times faster than their C equivalent - I've yet to see that kind of improvement from GCC.
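The poster's MMX routines aren't shown, so here is a plain scalar reference for one of the operations mentioned -- per-channel alpha blending -- which is the kind of inner loop MMX speeds up by processing several 8-bit channels per instruction instead of one at a time:

```python
# Scalar reference for 8-bit alpha blending: result = src*a + dst*(1-a),
# done per channel with integer arithmetic. An MMX version would do
# four (or eight) of these channel operations per instruction.

def blend_pixel(src, dst, alpha):
    """Blend 8-bit (r, g, b) src over dst; alpha in 0..255."""
    return tuple((s * alpha + d * (255 - alpha)) // 255
                 for s, d in zip(src, dst))

def blend_row(src_row, dst_row, alpha):
    """Blend two rows of pixels -- the loop SIMD code vectorizes."""
    return [blend_pixel(s, d, alpha) for s, d in zip(src_row, dst_row)]
```

The 5x claim is plausible for exactly this shape of loop: narrow integer data, no branches, and the same operation repeated across every channel.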
Re:2D acceleration using OpenGL? (Score:2, Interesting)
Frame rates reported by 1, 3, and 5 glxgears instances running at once:
1: 1871
3: 630
5: 372
When you look at them it's obvious that they're not truly running simultaneously, but each gets a little bit of gfx time, stopping and waiting for the other gears to stop moving. Utterly useless!
Re:Lessons from the ancient (Score:3, Interesting)
The way I was told the story, Apple was buying lower-quality components than those on more expensive drives, and to compensate, would do each disk operation (like a read) seven times, and vote on the result.
Several patched drivers came out that merely read 5 or, if you were willing to risk data errors, 3 times. Greatly improved performance.
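The read-N-times-and-vote scheme described above is just a per-byte majority vote; a minimal sketch (the drive details are the poster's anecdote -- this only shows the voting step, and reading fewer times trades reliability for speed exactly as described):

```python
# Sketch of read-N-times-and-vote: read the same sector N times and
# take the per-byte majority. Patched drivers that read 5 or 3 times
# just shrink N.

from collections import Counter

def vote_reads(reads):
    """Given N reads of the same sector (equal-length byte lists),
    return the per-position majority value."""
    return [Counter(values_at).most_common(1)[0][0]
            for values_at in zip(*reads)]
```

With 7 reads, any error that corrupts a byte in at most 3 of them is voted away; with 3 reads, a single bad read can still be outvoted, but two coinciding errors get through.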
Of course, no mention of Apple ][ disks would be complete without the mention that a blank floppy would cause some sort of infinite loop because the directory table couldn't be found. Hence:
On a clear disk you can seek forever!
Re:One word: (Score:4, Interesting)
There are also third-party utilities (Stardock WindowFX, etc.) that create all sorts of nifty transitions, shadows, blending, etc. that are handled by the graphics device.