Carmack on NV30 vs R300
Nexxpert writes "John Carmack has posted his thoughts on the NV30 vs R300 (featured via www.bluesnews.com). Highlights some of the shortcomings of Nvidia's next step as well as pointing out what they've done right. Interesting read." In particular, the ARB2 vs NV30 path differences mean that it's not as simple as saying "ATI roX0rs nVidia" or vice versa. (update: sorry bout the misspelling, don't know how I missed that)
We know that... (Score:5, Informative)
My concern is what it is going to look like on, let's say, a GF4 Ti4200 w/64 megs of RAM, or the GF4 Ti4600 w/128 megs of RAM. Are those of us not willing to spend 400 bucks on a new vid card (or, for those of us stuck with a 4x AGP board, that plus a new mobo) going to have to turn 90% of the features off to run it at a good-looking frame rate?
Thanks goes out to Carmack; it's nice that he takes the time to give us a rundown of the two cards that are battling for supremacy, especially since I, like many of you, am thinking about D3 when I evaluate my system and its need for an upgrade.
Re:We know that... (Score:5, Insightful)
Probably, but only for about 6 months until that $400 video card turns into a $200 video card and the Mobo becomes the $99 special.
God I love this business.
Re:We know that... (Score:1)
Re:We know that... (Score:5, Informative)
leaked alpha != anything useful (Score:4, Interesting)
Okay, I agree with the premise that older cards will do just fine, but the Doom 3 alpha wasn't pushing the limits like I expect the final release to do.
A couple of dynamic lights and some bump mapping.
You can still get Q3 to reach the limits of the Ti4600 128MB. Heck, even the original Unreal still poses a test with everything turned up!
Re:leaked alpha != anything useful (Score:2)
Re:leaked alpha != anything useful (Score:4, Interesting)
3 rooms and 4 mobs does not a stress test make
I thought that doom 3 was supposed to be more of a suspense game than a gorefest - i.e. there won't be hordes of demons after your butt. Instead, maybe you have to sneak around and avoid most creatures.
Re:leaked alpha != anything useful (Score:2)
Wow (Score:2)
"but you should be pretty good to go I would imagine."
Somehow I don't feel any better.
Re:We know that... (Score:2)
Your opinion means soooo much to us here, since you were running an alpha version of a game that won't come out for at least a year.
Doom 3 will be the reason to buy a video card in 2004.
I call B.S... (Score:3, Insightful)
*MAYBE* in an empty room with very few shadows and no combat..
On my 9700 Pro running on a 2.4 GHz Xeon, at that resolution, I normally got ~40-45 fps, but when anything happened, it dipped under 10. Sometimes in the 1-2 fps range -- when I was shooting, and being attacked.
No way in hell any gf2 is going to push 15 fps when anything is going on. It'd be a freaking slideshow slower than the powerpoint presentation at the meeting I just got out of.
Re:We know that... (Score:1)
By the time D3 is out (X-mas?), that card will be in your mom's PC and you'll likely have another generation of cards on the market.
Word is R350 in March and R400 in June. Who knows about NVidia; they seem to be following the Blizzard method of development. I'm sure we will see a "real soon now" press release at the introduction of those products.
OK, I feel a little bit stupider. (Score:5, Funny)
Then my brain went *beep* *beep* *beep*
And I lost everything.
About the only thing I came away with is, if you do it the way a specific vendor wants, it kicks the crap outa the other one, otherwise the ATI may be a wee bit faster.
Re:OK, I feel a little bit stupider. (Score:4, Informative)
Close. I think the big thing he was trying to say was that with both ATI and NVidia in ARB2 mode, ATI kicks the snot out of NVidia. But that's because ATI renders with less precision than NVidia.
So when both cards are in ARB2 mode, NVidia looks better, but ATI is faster.
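For those who haven't followed the extension soup: the different "paths" basically come down to which fragment-program extensions the driver advertises. A purely illustrative C sketch (the extension names are real OpenGL extensions, but the selection logic and the path enum are made up to show the idea, not anything from Doom's actual code):

#include <string.h>
#include <GL/gl.h>

typedef enum { PATH_ARB, PATH_R200, PATH_NV30, PATH_ARB2 } render_path_t;

/* naive substring check; fine for a sketch */
static int has_ext(const char *exts, const char *name)
{
    return exts && strstr(exts, name) != NULL;
}

render_path_t choose_path(void)
{
    const char *exts = (const char *)glGetString(GL_EXTENSIONS);

    if (has_ext(exts, "GL_NV_fragment_program"))
        return PATH_NV30;   /* vendor path: lower precisions available, runs faster on NV30 */
    if (has_ext(exts, "GL_ARB_fragment_program"))
        return PATH_ARB2;   /* the standard path the R300 defaults to */
    if (has_ext(exts, "GL_ATI_fragment_shader"))
        return PATH_R200;   /* older Radeon fallback */
    return PATH_ARB;        /* minimal path: no fragment programs at all */
}

Whether the engine should prefer the NV30 path over ARB2 in that order is exactly the speed-vs-precision tradeoff being argued about here.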
Re:OK, I feel a little bit stupider. (Score:2)
Re:OK, I feel a little bit stupider. (Score:2, Informative)
Re:OK, I feel a little bit stupider. (Score:2)
Re:OK, I feel a little bit stupider. (Score:2)
Re:OK, I feel a little bit stupider. (Score:4, Insightful)
The thing is that the renderer didn't want the higher precision, so the excess precision is mostly wasted. It's like calculating out 1.5000 * 1.5000 and getting 2.2500, and then truncating it back to 2.2. You could've just truncated it to 1.5 and 1.5 initially, and gotten 2.2 as well.
So, I doubt that NVIDIA actually looks better in the ARB2 mode. It just runs slower, because it's calculating things that don't need insane precision to insane precision.
Hence Carmack's note that NVIDIA is confident there's a lot of room to improve, if the driver can figure out which of the three precision modes is most appropriate and shift to it.
Re:OK, I feel a little bit stupider. (Score:3, Informative)
Your example with the truncation is a gross simplification that works in ATI's favour; in reality there's way more precision than just 2 decimals... more like 6 for ATI and 8 for Nvidia... which makes a reasonably big difference in quality.
The numbers above are not entirely spot on (except for Nvidia's 128-bit precision, and I can't be arsed to find the exact other numbers), but the principle remains the same: grandparent had it right; ATI for speed, Nvidia for quality (on the ARB2 path... vendor-specific paths lead to different results, but are you really going to play the game in anything other than true OpenGL?)
Re:OK, I feel a little bit stupider. (Score:3, Insightful)
It's like in freshman-level science classes - you don't take the numbers out to more significant figures than you start with, because the added precision is meaningless. Carmack was talking about fragment path programs, for which added precision probably wouldn't pan out to added image quality.
Re:OK, I feel a little bit stupider. (Score:2)
To put this in engineering terms: whenever you step a simulation at a higher resolution, you increase your accuracy, even if you then only display the results at half the frequency. That gives you more accurate readings than if you had stepped the simulation at half the frequency to begin with.
And that applies to whatever you simulate, be it velocity or colour.
And the thing is that your first sentence doesn't pan out: the rendering path works with whatever it's programmed to do, and Carmack defines that. Thus he'd say that when rendering for ATI the precision would be 96 bits, and for nvidia 128. Why would he give nvidia's path only 96 bits, when doing it at 128 would be just as fast, seeing as it's hardware!? If he gave the nvidia path 96 bits, it would be 96 bits with 32 null bits tacked on to the end anyway! The rendering path doesn't ask for anything... it just gets what it's given, and what it's given depends on how deep the hardware is... 96 for ATI, 128 for nvidia. What makes the difference is that the speed of the 96-bit path is different from that of the 128-bit path... therefore making ATI faster and nvidia looking better.
Re:OK, I feel a little bit stupider. (Score:2)
What you missed from before is that the result of the fragment program is truncated (because it goes into an 8-bit-per-component framebuffer), so doing it in 24-bit vs. 32-bit is completely pointless. (That's what is really going on: the ARB2 path tries to request 24-bit, but the NV30 only has 32-bit, so it has to calculate at a higher precision. I think there's a typo at Beyond3D there...)
The ONLY place it would matter is in a fringe case where a multiply-rounded 24-bit calculation would fall into a different bin than a multiply-rounded 32-bit calculation, which is basically never.
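To see why that binning argument holds, here's a tiny, self-contained C sketch (nothing to do with the actual Doom code; float vs. double just stands in for the 24-bit vs. 32-bit internal formats, since C has no 24-bit float): evaluate the same made-up shading expression at two precisions, quantize both into an 8-bit channel, and count how often they land in different bins.

#include <stdio.h>
#include <math.h>

/* quantize a [0,1] value into an 8-bit framebuffer channel */
static unsigned char to_byte(double x)
{
    return (unsigned char)(x * 255.0 + 0.5);
}

int main(void)
{
    int mismatches = 0;

    for (int i = 0; i < 10000; i++) {
        double d = i / 10000.0;
        /* an arbitrary shading-ish expression, evaluated at two precisions */
        float  lo = powf((float)d, 16.0f) * 0.75f + 0.1f;
        double hi = pow(d, 16.0) * 0.75 + 0.1;

        if (to_byte((double)lo) != to_byte(hi))
            mismatches++;
    }

    printf("mismatched 8-bit bins: %d of 10000\n", mismatches);
    return 0;
}

The mismatch count should be zero or very nearly zero: the extra precision only matters when a value sits right on a rounding boundary of the 8-bit output, which is the fringe case the parent describes.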
Re:OK, I feel a little bit stupider. (Score:2)
Then my brain went *beep* *beep* *beep*
Yeah, I was going to post the same thing. Only read Carmack in the morning, that way you have the rest of the day to bring your ego back up to level 5 ;)
Re:OK, I feel a little bit stupider. (Score:2)
JOhn
Once again... (Score:1, Interesting)
Re:Once again... (Score:2)
Re:Once again... (Score:5, Informative)
The RATIO of bandwidth to calculation speed is going to decrease. It is nothing short of miraculous that RAM bandwidth has made the progress it has, but adding gates is cheaper than adding more pins or increasing the clock on external lines.
Bandwidth will continue to increase, but calculation will likely get faster at an even better pace. If all calculations were still done in 8 bit, we would clearly be there with this generation, but bumping to 24/32-bit calculations while keeping the textures and framebuffer at 8 bit puts the pressure on the calculations.
John Carmack
Mod Parent Down (Score:2)
John Carmack does what needs to be done so he can make money and make a living. That is something each of us does every day and will continue to do.
However, Mr. Carmack is a man who is to be respected, because unlike many others he often takes his hard work and releases it to the general community. How many engine authors can you think of who do this? EXACTLY
Additionally, John Carmack is great for standing on his own two feet and using OpenGL while most other lazy developers have sold their souls to Microsoft's DirectX.
I can think of a billion more reasons why Mr. Carmack will get my highest level of respect. I hope you crawl out from under whatever rock you have been living under.
Sunny Dubey
Whatever works best for Doom3... (Score:3, Funny)
...is what I'm buying! Blood, gore, death, mutilation, horror... that's what I want splattered on my monitor.
Re:Whatever works best for Doom3... (Score:1)
Re:Whatever works best for Doom3... (Score:2, Funny)
Considered a chainsaw? Or a shotgun?
You should be able to get hold of them a bit cheaper than one of those videocards, and it even looks better.
Re:Whatever works best for Doom3... (Score:2)
Real life could in no way imitate the wonderful pixel and vertex shaders in Doom III. It has better graphics than real life.
R300 path (Score:5, Interesting)
One is that ATI has optimized for the standard ARB2 path, and a specific R300 path wouldn't make much difference. In that case, my response would be: kudos to ATI for promoting the standard, but also a positive word for the performance of the NV30.
The other possibility I can think of is that the lack of an R300 path is punishment for ATI leaking the Doom III alpha version. In that case I wonder how much the Radeon 9700 Pro would gain from an R300 specific path.
It certainly isn't a lack of time to develop the ATI path; there is an R200 path for older Radeon cards, and the Radeon 9700 has been available to developers for quite a bit longer than the GeforceFX has.
Re:R300 path (Score:1, Interesting)
NV30 needed its own path to compete (Score:4, Informative)
Half the speed at the moment. This is unfortunate, because when you do an
exact, apples-to-apples comparison using exactly the same API, the R300 looks
twice as fast, but when you use the vendor-specific paths, the NV30 wins.
I'm betting that Carmack assumed the NV30 would also use the ARB2 path, with the NV10/NV20/R200 paths for the older cards. When he found ARB2 ran like shit on the NV30, he had to do a special NV30 path.
He's already dumped vendor-specific vertex programs. I bet ARB2 would have been the only next-generation fragment path if the NV30 could have run it fast enough.
Re:R300 path (Score:5, Informative)
The leaked alpha does support ATI's two-sided stencil extension (ATI_separate_stencil), which is only implemented on the R300.
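For the curious, two-sided stencil is what lets the shadow-volume pass touch front and back faces in a single draw instead of two. A rough, hedged sketch of how the extension gets used (the GL_ATI_separate_stencil calls and EXT_stencil_wrap tokens are real, but the surrounding setup is only illustrative, drawShadowVolume() is a made-up helper, and fetching the extension entry points via *GetProcAddress is omitted):

#include <GL/gl.h>
#include <GL/glext.h>   /* ATI_separate_stencil and EXT_stencil_wrap tokens */

/* Assumes a current GL context and resolved extension function pointers. */
void stencil_shadow_pass(void (*drawShadowVolume)(void))
{
    glEnable(GL_STENCIL_TEST);
    glDisable(GL_CULL_FACE);                 /* submit front and back faces together */
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_FALSE);

    glStencilFuncSeparateATI(GL_ALWAYS, GL_ALWAYS, 0, ~0u);
    /* z-fail counting: back faces increment on depth fail, front faces decrement */
    glStencilOpSeparateATI(GL_BACK,  GL_KEEP, GL_INCR_WRAP_EXT, GL_KEEP);
    glStencilOpSeparateATI(GL_FRONT, GL_KEEP, GL_DECR_WRAP_EXT, GL_KEEP);

    drawShadowVolume();                      /* one pass instead of two */
}

Without the extension you'd cull one face set, draw the volume, flip the stencil ops, cull the other face set, and draw it all again.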
Re:R300 path (Score:2)
thought on noise (Score:5, Interesting)
The current NV30 cards do have some other disadvantages: They take up two slots, and when the cooling fan fires up they are VERY LOUD. I'm not usually one to care about fan noise, but the NV30 does annoy me.
Noise is becoming a big problem. I'm now putting Zalman stuff in all my computers. My recommended setup is a Zalman heatsink, a Zalman power supply, an Athlon 1.2, a Seagate 40 GB HD, an ATI Radeon 9000 (without fan), and a good motherboard with no fan on the chipset.
thank you
louis
Re:thought on noise (Score:2, Informative)
Good advice, but all this will be futile if the system case doesn't reduce the noise that is generated by the internal components (or even amplifies it because of vibrations). Watch out for expensive aluminum cases which look cool but have no provisions against noise. Acoustically well-designed cases, such as this one [silentmaxx.de] from silentmaxx, use several types of dampening materials with different noise absorption properties [silentmaxx.de].
You don't need bluesnews; just standard finger (Score:4, Informative)
[idsoftware.com]
Welcome to id Software's Finger Service V1.5!
Name: John Carmack
Email:
Description: Programmer
Project:
Last Updated: 01/29/2003 18:53:43 (Central Standard Time)
Jan 29, 2003
NV30 vs R300, current developments, etc
At the moment, the NV30 is slightly faster on most scenes in Doom than the
R300, but I can still find some scenes where the R300 pulls a little bit
ahead. The issue is complicated because of the different ways the cards can
choose to run the game.
The R300 can run Doom in three different modes: ARB (minimum extensions, no
specular highlights, no vertex programs), R200 (full featured, almost always
single pass interaction rendering), ARB2 (floating point fragment shaders,
minor quality improvements, always single pass).
The NV30 can run DOOM in five different modes: ARB, NV10 (full featured, five
rendering passes, no vertex programs), NV20 (full featured, two or three
rendering passes), NV30 (full featured, single pass), and ARB2.
The R200 path has a slight speed advantage over the ARB2 path on the R300, but
only by a small margin, so it defaults to using the ARB2 path for the quality
improvements. The NV30 runs the ARB2 path MUCH slower than the NV30 path.
Half the speed at the moment. This is unfortunate, because when you do an
exact, apples-to-apples comparison using exactly the same API, the R300 looks
twice as fast, but when you use the vendor-specific paths, the NV30 wins.
The reason for this is that ATI does everything at high precision all the
time, while Nvidia internally supports three different precisions with
different performances. To make it even more complicated, the exact
precision that ATI uses is in between the floating point precisions offered by
Nvidia, so when Nvidia runs fragment programs, they are at a higher precision
than ATI's, which is some justification for the slower speed. Nvidia assures
me that there is a lot of room for improving the fragment program performance
with improved driver compiler technology.
The current NV30 cards do have some other disadvantages: They take up two
slots, and when the cooling fan fires up they are VERY LOUD. I'm not usually
one to care about fan noise, but the NV30 does annoy me.
I am using an NV30 in my primary work system now, largely so I can test more
of the rendering paths on one system, and because I feel Nvidia still has
somewhat better driver quality (ATI continues to improve, though). For a
typical consumer, I don't think the decision is at all clear cut at the
moment.
For developers doing forward looking work, there is a different tradeoff --
the NV30 runs fragment programs much slower, but it has a huge maximum
instruction count. I have bumped into program limits on the R300 already.
As always, better cards are coming soon.
--------
Doom has dropped support for vendor-specific vertex programs
(NV_vertex_program and EXT_vertex_shader), in favor of using
ARB_vertex_program for all rendering paths. This has been a pleasant thing to
do, and both ATI and Nvidia supported the move. The standardization process
for ARB_vertex_program was pretty drawn out and arduous, but in the end, it is
a just-plain-better API than either of the vendor specific ones that it
replaced. I fretted for a while over whether I should leave in support for
the older APIs for broader driver compatibility, but the final decision was
that we are going to require a modern driver for the game to run in the
advanced modes. Older drivers can still fall back to either the ARB or NV10
paths.
The newly-ratified ARB_vertex_buffer_object extension will probably let me do
the same thing for NV_vertex_array_range and ATI_vertex_array_object.
Reasonable arguments can be made for and against the OpenGL or Direct-X style
of API evolution. With vendor extensions, you get immediate access to new
functionality, but then there is often a period of squabbling about exact
feature support from different vendors before an industry standard settles
down. With central planning, you can have "phasing problems" between
hardware and software releases, and there is a real danger of bad decisions
hampering the entire industry, but enforced commonality does make life easier
for developers. Trying to keep boneheaded-ideas-that-will-haunt-us-for-years
out [...]
Graphics Summit for the past three years, even though I still code for OpenGL.
The most significant functionality in the new crop of cards is the truly
flexible fragment programming, as exposed with ARB_fragment_program. Moving
from the "switches and dials" style of discrete functional graphics
programming to generally flexible programming with indirection and high
precision is what is going to enable the next major step in graphics engines.
It is going to require fairly deep, non-backwards-compatible modifications to
an engine to take real advantage of the new features, but working with
ARB_fragment_program is really a lot of fun, so I have added a few little
tweaks to the current codebase on the ARB2 path:
High dynamic color ranges are supported internally, rather than with
post-blending. This gives a few more bits of color precision in the final
image, but it isn't something that you really notice.
Per-pixel environment mapping, rather than per-vertex. This fixes a pet-peeve
of mine, which is large panes of environment mapped glass that aren't
tessellated enough, giving that awful warping-around-the-triangulation effect
as you move past them.
Light and view vectors normalized with math, rather than a cube map. On
future hardware this will likely be a performance improvement due to the
decrease in bandwidth, but current hardware has the computation and bandwidth
balanced such that it is pretty much a wash. What it does (in conjunction
with floating point math) give you is a perfectly smooth specular highlight,
instead of the pixelish blob that we get on older generations of cards.
There are some more things I am playing around with, that will probably remain
in the engine as novelties, but not supported features:
Per-pixel reflection vector calculations for specular, instead of an
interpolated half-angle. The only remaining effect that has any visual
dependency on the underlying geometry is the shape of the specular highlight.
Ideally, you want the same final image for a surface regardless of if it is
two giant triangles, or a mesh of 1024 triangles. This will not be true if
any calculation done at a vertex involves anything other than linear math
operations. The specular half-angle calculation involves normalizations, so
the interpolation across triangles on a surface will be dependent on exactly
where the vertexes are located. The most visible end result of this is that
on large, flat, shiny surfaces where you expect a clean highlight circle
moving across it, you wind up with a highlight that distorts into an L shape
around the triangulation line.
The extra instructions to implement this did have a noticeable performance
hit, and I was a little surprised to see that the highlights not only
stabilized in shape, but also sharpened up quite a bit, changing the scene
more than I expected. This probably isn't a good tradeoff today for a gamer,
but it is nice for any kind of high-fidelity rendering.
Renormalization of surface normal map samples makes significant quality
improvements in magnified textures, turning tight, blurred corners into shiny,
smooth pockets, but it introduces a huge amount of aliasing on minimized
textures. Blending between the cases is possible with fragment programs, but
the performance overhead does start piling up, and it may require stashing
some information in the normal map alpha channel that varies with mip level.
Doing good filtering of a specularly lit normal map texture is a fairly
interesting problem, with lots of subtle issues.
Bump mapped ambient lighting will give much better looking outdoor and
well-lit scenes. This only became possible with dependent texture reads, and
it requires new designer and tool-chain support to implement well, so it isn't
easy to test globally with the current Doom datasets, but isolated demos are
promising.
The future is in floating point framebuffers. One of the most noticeable
things this will get you without fundamental algorithm changes is the ability
to use a correct display gamma ramp without destroying the dark color
precision. Unfortunately, using a floating point framebuffer on the current
generation of cards is pretty difficult, because no blending operations are
supported, and the primary thing we need to do is add light contributions
together in the framebuffer. The workaround is to copy the part of the
framebuffer you are going to reference to a texture, and have your fragment
program explicitly add that texture, instead of having the separate blend unit
do it. This is intrusive enough that I probably won't hack up the current
codebase, instead playing around on a forked version.
Floating point framebuffers and complex fragment shaders will also allow much
better volumetric effects, like volumetric illumination of fogged areas with
shadows and additive/subtractive eddy currents.
John Carmack
Re:You don't need bluesnews; just standard finger (Score:2, Funny)
Heh, even Carmack isn't sure what the capitalization is.
MOD PARENT DOWN (Score:2)
And posting John Carmack's email was not a Smart Thing To Do (TM). Please follow the Golden Rule of email. I bet you wouldn't want your email address posted without your permission.
Carmak: Negative (Score:2, Funny)
I like my cards quiet (Score:5, Interesting)
I really wanted to go away from ATI this time around, but it appears I'll have to wait a little longer. I'm sure nVidia will [eventually] release a fanless, 1-slot version. I just wonder if it will be too little too late.
Re:I like my cards quiet (Score:3, Interesting)
Re:I like my cards quiet (Score:5, Informative)
From the [H]ard|OCP review:
"Using a decibel meter we tested the sound level of the GFFX at three feet away, directly in front of the exhaust vent. In 2D mode, the reading was 56dB."
I don't know about you, but I find 56 dB to be very noisy.
Re:I like my cards quiet (Score:2, Informative)
Not quite. The FX card actually turns the fans full blast the second 3D is detected in software (i.e., a DirectX/OpenGL call is made). I thought (hoped) it would be based on the running temperature of the card (hardware RPM control), but unfortunately, it isn't.
Re:I like my cards quiet (Score:2)
Hopefully the third-party OEMs of the FX will create more sensible cooling mechanisms.
Re:I like my cards quiet (Score:2)
Unfortunately we all can't get Quadro FX's :-)
Re:I like my cards quiet (Score:2)
A GF2 really won't cut it -- you can run UT2k3 on it with all the eye candy off, but even then it's marginal (yes, I know -- I ran UT2k3 like this for 2-3 months on a GF2). It simply won't have the horsepower to handle D3 in any reasonable capacity.
Re:I like my cards quiet (Score:2)
If you have an HTPC, you should also have a display with VGA (RGB+Hsync+Vsync) or DVI input. Unless you have such a display, you really should be using consumer hardware that has stuff like component output or RGB (RGB+sync). In no situation should you use an HTPC to output anything to a normal TV if you want any quality. (I do have a Radeon and even its S-Video output still sucks big time. All the nvidia-based products I've seen have much worse outputs.)
I'm still hoping I'll have a new DLP projector by the time Doom 3 comes out. All I have is a lousy CRT cannon.
MIB 2 reference (Score:4, Funny)
Not to mention ATi's next card...
I live in Wisconsin... (Score:1)
Ah, Wisconsin... Now if you'll excuse me, it is time for another dose of cheese.
Cowboy Kneal and everyone at Slachdot... (Score:5, Funny)
More Kudos to ATI (Score:5, Insightful)
After we started to get benchmarks showing matched performance, the remaining questions were left to DX9 and the more complex shaders. From Carmack's comments and the shadermark tests that are showing up, it appears that ATI is anywhere from competitive to superior in the DX9 2.0 shaders as well. It does look like the NV30 can indeed run deeper/higher-precision shaders, but we will have to wait and see if games ever ship with shaders deeper than the lowest common denominator between the NV30 and R300.
Carmack does mention that nVidia promised that "compiler improvements" will increase the NV30 shader performance. (Better scheduling of parallel pipes?)
The astounding bottom line is that as of Jan 2003, the 9700 is not shown to be inferior in any way to an as-yet unreleased flagship product from the king of 3D on the mainstream desktop. 3 Cheers for ATI.
Re:More Kudos to ATI (Score:1, Informative)
Have you looked at Matrox [matrox.com]'s product range lately?
If I were building a system for primary 2D operations, that's who I'd be buying from. Their cards are wonderful.
Re:More Kudos to ATI - Boldly go where no card has... (Score:2)
Now, a hardware fractal solution...that
Real geeks know.... (Score:1, Redundant)
That's the only way to read johnc info.
Major breakthrough (Score:1)
finger johnc@idsoftware.com|less
I dunno, these Windoze lusers...
Good stuff (Score:2)
Imagine how much of a market major printer, digital camera, and scanner manufacturers are missing by forcing their tech support people to say "we don't support it under that lee-nooks stuff". If a company, say Canon, would release a universally supported bubblejet driver for older printers, and a universal PPD for their PostScript printers to work in CUPS, they could see massive gains (we do account for like 6% of all users, after all).
Re:Good stuff (Score:2)
How ironic (and actually close to the true meaning of ironic, for once.)
Yes, nvidia "supports linux". But I thought the "grass roots movement" was about "free, opensource software". Nvidia's support certainly does NOT fall into that category.
I worry about NVIDIA (Score:5, Informative)
One thing NVIDIA does seem to have going well is their motherboard chipsets. The new nForce2 really kicks ass by all accounts. I remember a while back hearing about an ATI mobo chipset based on tech they acquired from ArtX, but apparently end-user mobo chipsets aren't ATI's plan.
Good luck, NVIDIA. Hope y'all can keep up the pace.
Re:I worry about NVIDIA (Score:1)
Erm, try six month old radeons.
Re:I worry about NVIDIA (Score:2, Insightful)
It's the mid to low end that generates most of the revenue. You need not worry until Nvidia starts losing OEM business to ATI.
Re:I worry about NVIDIA (Score:2)
Actually, the 9700 Pro has been out for something like six months, and the GeForce FX won't be out for another three, so it's more like nine months' difference.
Can they really afford to lose the lucrative high-end sales right now?
No, but they won't. First of all, there are all the nvidia fanboys, who think that nvidia rocks because nvidia rocks, ergo nvidia rocks. Secondly, there are the ATI anti-fanboys, who think that ATI sucks because ATI sucks, or because their drivers suck (despite not having actually tried any drivers for three years). Thirdly, there are the people who are stupid, or keep their PC in some kind of sound-proof box. Finally, there are people who don't buy the top-of-the-line cards anyway, that fit into one of the above categories.
Nvidia won't die off for a long, long time. They may be new, but fanboys will keep them alive through any tough times they weather... unless ATI can deliver a next-generation card within a few months of nvidia's latest-generation card. If ATI can bring out a new card within, say, six months, one whose performance is to the GeForce FX what the 9700 Pro was to the GF4 Ti, then I think things will begin to go very, very bad. Unless that happens, though, ATI has only won the battle, not the war.
--Dan
nForce/nForce2 and Linux (Score:2)
1: No GART driver for Linux. The GART driver is integrated with nVidia video drivers, so forget about 3D on an ATI on an nForce under Linux. The nForce is effectively tied to nVidia video for Linux 3D.
2: No APIC. At the moment, I have stuff like SATA, firewire, USB2, AC97 modem, and USB2.0 turned off. Even so, I have an IRQ conflict between the ATI video and USB1.1 that so far hasn't bit me. But I suspect future pain, here.
3: Sound works - in stereo, not Dolby 5.1. I've heard of a $30 driver that will give full capability, though I've heard mixed reports of getting the SPDIF working even with these.
4: Binary-only network driver. There's also a 3Com, but something about it requires patching the standard driver to get it recognized. So far nvnet works, so I haven't fussed with the 3Com.
Semi off-topic, except that there is a tie between nForce and nVidia video, so I guess that's relevant to the subject. This is also a concern because it's a really high-performance board, where you'd really like to run an R300 or NV30.
Fortunately my mission for this board was largely Win-based with Linux as a dual-boot, or I would have RMA-ed the thing. But I kept the ATI video, and refused to "reward" nVidia's actions with more money.
Re:nForce/nForce2 and Linux (Score:2)
What the hell are you talking about? IRQ conflicts were a DOS (and thus also Windows 95/98) problem.
Unless you have two old ISA cards that are jumper-configured to use the same IRQ, you won't get a similar problem in Linux.
PCI, and thus AGP, was designed so that different devices can share the same interrupt lines. The only problem I've heard of with this is Creative's sound cards that crap on the PCI bus, but Linux makes sure not to share IRQs between devices that are known to be buggy.
Also check out http://doom.axlegames.com/ (Score:1, Redundant)
Doom.AxleGames.com [axlegames.com]
Has the
Driver differences (Score:5, Informative)
There's a big difference between a driver's theoretical output and the actual achieved output.
In testing at my job, we found that the ATI drivers typically performed very poorly in comparison to those released by nVidia on similar hardware. In addition, we often had more serious issues with bugs in ATI drivers than nVidia. Although the next great thing from nVidia isn't likely to outright dethrone the 9700, nVidia is constantly improving their driver technology, constantly making the layer between software and hardware thinner and thinner.
Re:Driver differences (Score:5, Interesting)
Slashdot heros. (Score:3, Funny)
Shader program limits (Score:2)
Re:Shader program limits (Score:2, Interesting)
If you do more passes, as I understand it you have to upload all the scene geometry again, which is stressful on bus bandwidth and wasteful of processing resources.
Of course, it's entirely possible I'm misinterpreting everything, and I apologise in advance if that's the case.
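Roughly what falling back to multipass means in practice, as a hedged sketch (plain old OpenGL, nothing from the actual engine; bindPassState() and drawSurfaceGeometry() are made-up helpers): the same vertex data goes down the bus once per pass, with later passes summed into the framebuffer.

#include <GL/gl.h>

/* Hypothetical helpers standing in for the engine's own code. */
void bindPassState(int pass);        /* textures / combiners / shader for this pass */
void drawSurfaceGeometry(void);      /* issues the surface's vertex arrays */

void render_surface_multipass(int num_passes)
{
    for (int pass = 0; pass < num_passes; pass++) {
        bindPassState(pass);

        if (pass == 0) {
            glDisable(GL_BLEND);             /* first pass lays down the base */
        } else {
            glEnable(GL_BLEND);              /* later passes add onto it */
            glBlendFunc(GL_ONE, GL_ONE);
            glDepthFunc(GL_EQUAL);           /* only touch pixels already drawn */
        }

        /* This is the cost the parent is talking about: the geometry is
           resubmitted for every pass. */
        drawSurfaceGeometry();
    }

    glDepthFunc(GL_LESS);
    glDisable(GL_BLEND);
}

With a big enough instruction limit, all of those passes collapse into one fragment program and the geometry only has to be sent once.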
Spelling (Score:5, Funny)
psst... it's spelled about...
Re:Spelling (Score:2)
"psst... it's spelled about.."
Man, if you hadn't come along, I woulda thought he was referring to a Celebrity Boxing match between Miss Spelling and Miss Harding.
Fan Replacement (Score:2)
Re:Fan Replacement (Score:2)
The manufacturers are going to be able to come up with their own thermal management solutions if they want, and I'm sure someone is going to come up with something not as loud. (I have watercooling so it's not that big of an issue, I just have to wait for a waterblock for it..)
As for fan replacements, contact the company that made your GeForce card. They can probably sell you one directly or refer you somewhere.
3d card vs. Leaf Blower (Score:2)
Re:3d card vs. Leaf Blower (Score:2)
Linux driver performance?
GeforceFX non Ultra (Score:2, Informative)
Re:Who the fuck is that guy? (Score:3, Funny)
Re:Who the fuck is that guy? (Score:2, Funny)
Re:Who the fuck is that guy? (Score:5, Funny)
The answer (Score:3, Funny)
In other words, he is a god...
While you are a schmuck.
Re:The answer (Score:3, Funny)
Re:The answer (Score:5, Informative)
Re:The answer (Score:3, Informative)
Well, I don't know about that. However, I would be willing to give him kudos for implementing it in his games. We owe SGI a certain debt for making IRIS GL and documenting the API, then making OpenGL "open". I suppose we could also give kudos to Evans and Sutherland for creating frame buffers, PostScript and "GL" as well.
Anyway, the whole reason SGI decided to open it up was pressure from the software developers. In order to be viable, they needed to have a wider variety of hardware choices and told SGI they would drop them if they did not open up the standard.
Currently we also have companies like Apple to thank for aggressively supporting it, integrating and improving OpenGL.
Re:The answer (Score:4, Informative)
Currently we also have companies like Apple to thank for aggressively supporting it, integrating and improving OpenGL.
That goes back to Carmack again. He went to Apple more than once evangelizing OpenGL, and his games have always been the Macworld showpieces on new hardware in the iMac era.
Re:The answer (Score:2)
And he does that so well
Re:Notoriety (Score:1)
He did mention the noise (Score:2)
He did say... (Score:1)
The current NV30 cards do have some other disadvantages: They take up two slots, and when the cooling fan fires up they are VERY LOUD. I'm not usually one to care about fan noise, but the NV30 does annoy me.
Bill Gates eludes custard pie man!! (Score:1, Funny)
This story states that Bill Gates is on the run once again [tiscali.co.uk] from the Belgian custard pie hurler 'Noel Godin', the famous practical joker who has gotten the MS boss before [hedweb.com].
More on the first attack can be found here [epix.net].
Ol' Billy now seems to be overly cautious [planetinternet.be] each time he is forced to visit Belgium.
Its (Score:1, Funny)
It's "It's Carmack.". Why are you such a nitpicker?
Re:I am the great "Carmak" (Score:1)
A guy phoned a Chinese restaurant and asked to speak to John Romero (sp?), and after a few minutes tried to speak to John "Carmak" (I searched on Google for a link to the mp3 but couldn't find it, sorry).
But on a more serious note, I got kinda dizzy reading everything after the long dotted line (and I program for a living).
Re:ATI card is the best choice (Score:2)
four words: better memory bandwidth management
Re:ATI card is the best choice (Score:2)
Not quite sure what those nVidia boys were thinking... but it's not panning out. They've got 1 GHz memory (after DDR)... and it still can't touch the bandwidth of ATI's much slower memory...
Hehe... I should check hotjobs.com and see if there are any computer engineers looking for jobs whose last listed job was nVidia...
Re:huh? 'path differences?' (Score:1)
(Reporter == idiot)
Would be better as
if (Reporter == idiot)
{
misinformPublic();
return ERROR;
}
Re:Um - can anyone explain this? (Score:5, Informative)
OK, I did some 3D imaging math about 10 years ago (when you had to code your own drivers to get SuperVGA mode under DOS), so I think I get what he's talking about: the problem of how to show the reflection of one object (or light source) off another object. I've never heard of "interpolated half-angle" or "specular highlights", or the "triangulation line". Anyone know what he is talking about?
You didn't get much beyond Gouraud shading, did you? :)
Of course, depending on your hardware ten years ago, specularity might not have been feasible if you were doing something big and real-time. Certainly not with the standard PC of that era.
Hope that helps!
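For the parent poster: the "specular highlight" is the bright hot-spot you get from Blinn-Phong style lighting, and the "half-angle" is the vector halfway between the light direction and the view direction. A tiny illustrative C sketch of the math (not Doom code): if you evaluate this per vertex and interpolate, the highlight's shape depends on where the triangle edges fall (the "triangulation line" warping Carmack describes); evaluate it per pixel and it doesn't.

#include <math.h>

typedef struct { float x, y, z; } vec3;

static vec3  add(vec3 a, vec3 b)  { return (vec3){ a.x + b.x, a.y + b.y, a.z + b.z }; }
static float dot(vec3 a, vec3 b)  { return a.x * b.x + a.y * b.y + a.z * b.z; }

static vec3 normalize(vec3 v)
{
    float inv = 1.0f / sqrtf(dot(v, v));   /* the normalization that isn't linear */
    return (vec3){ v.x * inv, v.y * inv, v.z * inv };
}

/* Blinn-Phong specular term.  N = surface normal, L = direction to the light,
   V = direction to the viewer, all unit length. */
float specular(vec3 N, vec3 L, vec3 V, float shininess)
{
    vec3  H  = normalize(add(L, V));       /* the "half-angle" vector */
    float nh = dot(N, H);
    return nh > 0.0f ? powf(nh, shininess) : 0.0f;
}

Because normalize() isn't a linear operation, interpolating H across a big triangle is not the same as computing it at every pixel, which is why the highlight distorts on large, sparsely tessellated surfaces.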