
Carmack on NV30 vs R300 226

Nexxpert writes "John Carmack has posted his thoughts on the NV30 vs R300 (featured via www.bluesnews.com). It highlights some of the shortcomings of Nvidia's next step, as well as pointing out what they've done right. Interesting read." In particular, the ARB2 vs NV30 path differences mean that it's not as simple as saying "ATI roX0rs nVidia" or vice versa. (Update: sorry bout the misspelling, don't know how I missed that)
This discussion has been archived. No new comments can be posted.

Carmack on NV30 vs R300

Comments Filter:
  • We know that... (Score:5, Informative)

    by LordYUK ( 552359 ) <jeffwright821@noSPAm.gmail.com> on Thursday January 30, 2003 @11:58AM (#5189387)
    We already know that the Radeon 9700 pro and the GeForce FX will run Doom III, and they are both going to look good doing it.

    My concern is what is it going to look like on, lets say, a G4Ti4200 w/64 megs of ram, or the G4Ti4600 w/128 megs of ram. Are those of us not willing to spend 400 bucks on a new vid card (or for those of us stuck with a 4x AGP board, that plus a new mobo) going to have to turn 90% of the features off to run it at a good looking frame rate?

    Thanks go out to Carmack; it's nice that he takes the time to give us a rundown of the two cards that are battling for supremacy, especially since I, like many of you, am thinking about D3 when I evaluate my system and its need for being upgraded.
    • Re:We know that... (Score:5, Insightful)

      by nomadicGeek ( 453231 ) on Thursday January 30, 2003 @12:06PM (#5189436)
      Are those of us not willing to spend 400 bucks on a new vid card (or for those of us stuck with a 4x AGP board, that plus a new mobo) going to have to turn 90% of the features off to run it at a good looking frame rate?

      Probably, but only for about 6 months until that $400 video card turns into a $200 video card and the Mobo becomes the $99 special.

      God I love this business.

      • Forget the mobo - AGPx4 Vs AGPx8 doesn't make a huge difference at the moment with games, and even if the NV30 cards drop to $200 I would still like to be able to hear the lovingly created sound track by Trent Reznor [nin.com] - go for the $180 ATI card - it is better for your ears ^_^
    • Re:We know that... (Score:5, Informative)

      by Camulus ( 578128 ) on Thursday January 30, 2003 @12:17PM (#5189511) Journal
      Doom 3 will be playable with a GF4 Ti4200, and you won't have to turn everything off to get a decent frame rate, if by decent you mean at least 60 or above. A friend of mine got a hold of the leaked alpha, and on a GF2 GTS it was running somewhere around 15 fps with all options turned on at 1280x1024 (I think), on an unoptimized release of the program. So, yeah, of course you will get more performance if you plop down some money, but you should be pretty good to go I would imagine.
      • by DrSkwid ( 118965 ) on Thursday January 30, 2003 @01:28PM (#5189870) Journal
        3 rooms and 4 mobs does not a stress test make

        Okay, I agree with the premise that older cards will do just fine, but the Doom 3 alpha wasn't pushing the limits like I expect the final release to.

        A couple of dynamic lights and some bump mapping.

        You can still get Q3 to reach the limits of the Ti4600 128MB. Heck, even the original Unreal still poses a test with everything turned up!

        • Well, first of all, the alpha isn't nearly as optimized as the end product will be. So, while there might not have been a buttload of mobs, it still gives you an idea. I saw it run on a 9700 Pro at QuakeCon last year, and in a couple of really large areas in the beginning cinematic it was chopping, but this was also at full detail. If you play at 1024x768 with most of the options on and have a GF4 Ti4200 like the poster does, then I would think you would be okay the vast majority of the time.
        • by Fulcrum of Evil ( 560260 ) on Thursday January 30, 2003 @04:12PM (#5190684)

          3 rooms and 4 mobs does not a stress test make

          I thought that doom 3 was supposed to be more of a suspense game than a gorefest - i.e. there won't be hordes of demons after your butt. Instead, maybe you have to sneak around and avoid most creatures.

      • by bogie ( 31020 )
        Besides the fact that the leaked alpha ran like crap on your friend's GF2, essentially your proof comes down to this.

        "but you should be pretty good to go I would imagine."

        Somehow I don't feel any better.
      • Thank you JeffK!

        Your opinion means soooo much to us here, since you were running an alpha version of a game that won't come out for at least a year.

        Doom 3 will be the reason to buy a video card in 2004.
      • I call B.S... (Score:3, Insightful)

        by XaXXon ( 202882 )
        on getting 15 FPS at 1280x1024 with all options turned on with a gf2.

        *MAYBE* in an empty room with very few shadows and no combat..

        On my 9700 Pro running on a 2.4 GHz Xeon, at that resolution, I normally got ~40-45 fps, but when anything happened, it dipped under 10. Sometimes in the 1-2 fps range -- when I was shooting, and being attacked.

        No way in hell any gf2 is going to push 15 fps when anything is going on. It'd be a freaking slideshow slower than the powerpoint presentation at the meeting I just got out of.
    • My concern is what is it going to look like on, lets say, a G4Ti4200 w/64 megs of ram, or the G4Ti4600 w/128 megs of ram. Are those of us not willing to spend 400 bucks on a new vid card (or for those of us stuck with a 4x AGP board, that plus a new mobo) going to have to turn 90% of the features off to run it at a good looking frame rate?

      By the time D3 is out (Xmas?), that card will be in your mom's PC and you'll likely have another generation of cards on the market.

      Word is R350 in March and R400 in June. Who knows about NVidia; they seem to be following the Blizzard method of development. I'm sure we will see a "real soon now" press release at the introduction of those products.
  • by 3.5 stripes ( 578410 ) on Thursday January 30, 2003 @11:58AM (#5189391)
    I was doing ok for the first paragraph or so.

    Then my brain went *beep* *beep* *beep*

    And I lost everything.

    About the only thing I came away with is, if you do it the way a specific vendor wants, it kicks the crap outa the other one, otherwise the ATI may be a wee bit faster.
    • by tsetem ( 59788 ) <tsetem@gmai[ ]om ['l.c' in gap]> on Thursday January 30, 2003 @12:02PM (#5189417)
      About the only thing I came away with is, if you do it the way a specific vendor wants, it kicks the crap outa the other one, otherwise the ATI may be a wee bit faster.

      Close. I think the big thing he was trying to say was that with ATI & NVidia both in ARB2 mode, ATI kicks the snot out of NVidia. But that's because ATI renders with less precision than NVidia.

      So when both cards are in ARB2 mode, NVidia looks better, but ATI is faster.
      • I'll take your word for it, you obviously understood more than I did :)

      • He never indicated which looked better; all he said was that ATI was faster in ARB2 mode, which may or may not be due to the differences in precision. Every review I've read indicates that ATI has either a slight or a major advantage in visual quality, depending on settings (ATI acquires a much bigger lead in quality as the settings are lowered), and that ATI acquires a lead in speed as quality settings go up. Notably, the Radeon 9700 Pro also looks as good at 6x AA as the NV30 does at 8x AA.

        • He did say that Nvidia is rendering at a higher precision than ATI when in ARB2. This should mean that the colors are more accurate in the Nvidia render. My guess anyhow is that the ARB2 path on NV30 will quickly move towards the speed of the NV30 path. You know Nvidia driver devs will be highly optimizing for Doom III performance.
          • Yeah, it might, except that every test so far indicates that when it comes to visual quality, the ATI R300 does more with less than the NV30. Calculation accuracy means nothing if the rest of the part can't use it, and the indications so far are that the NV30 needs the more accurate calculations to keep up with the R300's superior display output, especially with AA and anisotropic filtering.

      • by barawn ( 25691 ) on Thursday January 30, 2003 @04:51PM (#5191038) Homepage
        No - this is decidedly not what he said. What he said was that the ATI precision mode that is used doesn't correspond to a precision that NVIDIA uses on the NV30, and the ARB2 precision mode corresponds to what ATI is using, but is "between" two for NV30. So the NV30 has to render it at its highest precision (rather than rendering it at a lower precision and artifacting the hell out of the thing) which slows it down.

        The thing is that the renderer didn't want the higher precision, so the excess precision is mostly wasted. It's like calculating out 1.5000 * 1.5000 and getting 2.2500, and then truncating it back to 2.2. You could've just truncated it to 1.5 and 1.5 initially, and gotten 2.2 as well.

        So, I doubt that NVIDIA actually looks better in the ARB2 mode. It just runs slower, because it's calculating things that don't need insane precision to insane precision.

        Hence the reason that Carmack said that NVIDIA is confident there's a lot of room to improve, if it can realize which of the three precision modes is most ideal and shift to it.
        • Actually, it breaks down more like this: ATI does 96 bit precision, Nvidia can do 32, 64 and 128.

          Your example with the truncation is a gross simplification, which goes into ATI's favour, but in reality there's way more precision than just 2 decimals...more like 6 for ATI and 8 for Nvidia...which makes a reasonably big difference in quality.

          The numbers above are not entirely spot on (except for Nvidia's 128 bit precision, and I can't be arsed to find the exact other numbers), but the principle remains the same: grandparent had it right; ATI for speed, Nvidia for quality (on an ARB2 path...vendor specific paths lead to different results, but are you really going to play the game in anything other than true OpenGL?)
          • I don't think you understood what I said. The thing is that the rendering path only asked for 96 bit precision - however the NV30 had to render it in 128 bit because it doesn't have a 96 bit mode. You're not going to get "free" improved image quality simply by calculating things out to more precision.

            It's like in freshman-level science classes - you don't take the numbers out to more significant figures than you start with, because the added precision is meaningless. Carmack was talking about fragment path programs, for which added precision probably wouldn't pan out to added image quality.
            • Actually, it does: when using the 128 bit pipeline, you calculate everything in 128 bits. This automatically leads to greater accuracy when truncating to the 10 bit colour resolution your monitor can handle...which leads to a more accurate value of your 10 bit number.

              To break this down to engineering terms: whenever you step the simulation program to higher values, you increase your accuracy, even if you then only display at half the frequency. This gives you more accurate readings than if you stepped the simulation at half the frequency to begin with.
              And that applies to whatever you simulate, be it velocity or colour.

              And the thing is that your first sentence doesn't pan out: the rendering path would work with whatever it's programmed to do, and Carmack would define that. Thus he'd say that when rendering for ATI the precision would be 96 bits, and for nvidia 128. Why would he give nvidia's path only 96 bits, when doing it at 128 would be just as fast, seeing as it's hardware!? If he gave the nvidia path 96 bits, it would be 96 bits with 32 null bits tacked on to the end anyway! The rendering path doesn't ask for anything...it just gets what it's given, and what it's given depends on how deep it is...96 for ATI, 128 for nvidia. What makes the difference is that the speed of the 96 bit path is different from the 128 bit path...therefore making ATI faster and nvidia look better.
              • Unfortunately, this isn't right. I was kind of at a loss to prove it until now, hence the late repost, but for a more detailed description, read Carmack's post at www.beyond3d.com. To quote:

                There is no discernable quality difference, because everything is going into an 8 bit per component framebuffer. Few graphics calculations really need 32 bit accuracy.


                What you missed from before is that the result from the fragment path program is truncated (because it goes into an 8-bit-per-component framebuffer) and so doing it in 24-bit vs. 32-bit (which is what is really going on: in the ARB2 path it tries to request 24-bit, but the NV30 only has 32 bit, so it has to calculate at a higher precision. I think there's a typo at Beyond3D there...) is completely pointless.

                The ONLY place it would matter is in a fringe case where a multiply-rounded 24-bit calculation would fall into a different bin than a multiply-rounded 32-bit calculation, which is basically never.
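
                A quick way to check this -- a toy C sketch, entirely made-up lighting math (nothing to do with Doom's shaders), that quantizes the same calculation done at full float precision and at a crudely simulated 16-bit-mantissa precision down to an 8-bit framebuffer value. The two should land in the same bin for all but a few borderline cases:

                /* Toy demonstration: does a fragment color computed at "full" float
                 * precision ever land in a different 8-bit framebuffer bin than the
                 * same math done at a crudely simulated 16-bit-mantissa precision?
                 * The lighting formula here is made up purely for illustration. */
                #include <stdio.h>
                #include <stdint.h>
                #include <string.h>
                #include <math.h>

                static float chop_mantissa(float x, int keep_bits)
                {
                    uint32_t u;                       /* zero out low mantissa bits to */
                    memcpy(&u, &x, sizeof u);         /* fake a lower-precision float  */
                    u &= ~((1u << (23 - keep_bits)) - 1u);
                    memcpy(&x, &u, sizeof x);
                    return x;
                }

                static int to_8bit(float c)           /* quantize [0,1] into 0..255 */
                {
                    if (c < 0.0f) c = 0.0f;
                    if (c > 1.0f) c = 1.0f;
                    return (int)(c * 255.0f + 0.5f);
                }

                int main(void)
                {
                    int i, mismatches = 0;
                    for (i = 0; i < 100000; i++) {
                        float ndotl = (float)i / 100000.0f;       /* fake diffuse term  */
                        float spec  = powf(ndotl, 16.0f);         /* fake specular term */
                        float full  = 0.75f * ndotl + 0.25f * spec;
                        float low   = chop_mantissa(0.75f * chop_mantissa(ndotl, 16), 16)
                                    + chop_mantissa(0.25f * chop_mantissa(spec,  16), 16);
                        if (to_8bit(full) != to_8bit(low))
                            mismatches++;
                    }
                    printf("differing 8-bit results: %d of 100000\n", mismatches);
                    return 0;
                }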
    • I was doing ok for the first paragraph or so.

      Then my brain went *beep* *beep* *beep*

      Yeah, I was going to post the same thing. Only read Carmack in the morning, that way you have the rest of the day to bring your ego back up to level 5 ;)

    • I thought slashdot was news for nerds... are you sure some h4x0r didn't re-direct you from PC World?

      JOhn
  • Once again... (Score:1, Interesting)

    by Anonymous Coward
    it's not as easy as it seems. Carmack makes some well-reasoned and valid points. I can't say I disagree with him about why DX's evolution is somewhat better (essentially central control of the standard vs vendor squabbling leading to dead branches). But he mentioned something about next gen cards having less bandwidth. Does that make sense to anyone?
    • I think he meant that performance would increase because the bandwidth requirements would go down - not that the bandwidth capacity would go down.
    • Re:Once again... (Score:5, Informative)

      by John Carmack ( 101025 ) on Thursday January 30, 2003 @05:26PM (#5191377)
      >But he mentioned something about next gen cards having less bandwidth. Does that make sense to anyone?

      The RATIO of bandwidth to calculation speed is going to decrease. It is nothing short of miraculous that ram bandwidth has made the progress it has, but adding gates is cheaper than adding more pins or increasing the clock on external lines.

      Bandwidth will continue to increase, but calculation will likely get faster at an even better pace. If all calculations were still done in 8 bit, we would clearly be there with this generation, but bumping to 24/32 bit calculations while keeping the textures and framebuffer at 8 bit put the pressure on the calculations.

      John Carmack
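
      (A back-of-the-envelope illustration of that ratio argument; every number below is made up for the sake of the arithmetic, not a measurement.)

      /* Illustrative only: per-fragment memory traffic stays tied to 8-bit
       * textures and an 8-bit framebuffer, while per-fragment math grows with
       * internal precision and shader length, so bytes-per-op keeps shrinking. */
      #include <stdio.h>

      int main(void)
      {
          const double bytes_per_frag = 4.0 /* texel fetch */ + 4.0 /* fb write */;
          const double ops_8bit_path  = 8.0;   /* hypothetical combiner work    */
          const double ops_float_path = 32.0;  /* hypothetical fragment program */

          printf("bytes per op, 8-bit math : %.2f\n", bytes_per_frag / ops_8bit_path);
          printf("bytes per op, float math : %.2f\n", bytes_per_frag / ops_float_path);
          return 0;
      }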
  • by grub ( 11606 ) <slashdot@grub.net> on Thursday January 30, 2003 @12:01PM (#5189405) Homepage Journal

    ...is what I'm buying! Blood, gore, death, mutilation, horror... that's what I want splattered on my monitor.
  • R300 path (Score:5, Interesting)

    by Anonymous Coward on Thursday January 30, 2003 @12:02PM (#5189415)
    Notice that Carmack has no R300 path. Why is that? I can think of two possible explanations instantly.

    One is that ATI has optimized for the standard ARB2 path, and a specific R300 path wouldn't make much difference. In that case, my response would be, kudos ATI for promoting the standard, but speak positively of the performance of NV30.

    The other possibility I can think of is that the lack of an R300 path is punishment for ATI leaking the Doom III alpha version. In that case I wonder how much the Radeon 9700 Pro would gain from an R300 specific path.

    It certainly isn't a lack of time to develop the ATI path; there is an R200 path for older Radeon cards, and the Radeon 9700 has been available to developers for quite a bit longer than the GeforceFX has.
    • Re:R300 path (Score:1, Interesting)

      by Anonymous Coward
      Attention John Carmack, I would love to hear a response from you on this particular post.
    • by scotay ( 195240 ) on Thursday January 30, 2003 @12:50PM (#5189679)
      The NV30 runs the ARB2 path MUCH slower than the NV30 path.
      Half the speed at the moment. This is unfortunate, because when you do an
      exact, apples-to-apples comparison using exactly the same API, the R300 looks
      twice as fast, but when you use the vendor-specific paths, the NV30 wins.


      I'm betting that Carmack assumed the NV30 would also use the ARB2 path, with NV10/NV20/R200 paths for the older cards. When he found ARB2 ran like shit on NV30, he had to do a special NV30 path.

      He's already dumped vendor-specific vertex programs. I bet ARB2 would have been the only next-generation fragment processor if the NV30 could have run it fast enough.
    • Re:R300 path (Score:5, Informative)

      by jpaana ( 35532 ) on Thursday January 30, 2003 @01:00PM (#5189731)
      ATI doesn't have any proprietary extensions for exposing the R300 shader functionality, only "ARB2" (ARB_vertex_program and ARB_fragment_program) so there's no way to do "specific" R300 path.

      The leaked alpha does support ATI's two-sided stencil extension (ATI_separate_stencil) which is only implemented on R300.
    • Even if it weren't for ATI's ARB2 performance, there still probably wouldn't be an R300 path for one simple reason - ATI was the company that leaked the Doom 3 alpha, and folks at iD Software were (and probably still are) a bit pissed off at them.
  • thought on noise (Score:5, Interesting)

    by Anonymous Coward on Thursday January 30, 2003 @12:05PM (#5189435)
    "The current NV30 cards do have some other disadvantages: They take up two
    slots, and when the cooling fan fires up they are VERY LOUD. I'm not usually
    one to care about fan noise, but the NV30 does annoy me."


    Noise is becoming a big problem. I'm now putting Zalman stuff in all my computers. My setup is a Zalman heatsink, a Zalman power supply, an Athlon 1.2, a Seagate 40 Gig HD, an ATI Radeon 9000 (without fan), and a good motherboard without a fan on the chipset.

    thank you
    louis
    • Re:thought on noise (Score:2, Informative)

      by frozenray ( 308282 )
      > My setup is a Zalman heatsink, a Zalman power supply, an Athlon 1.2, a Seagate 40 Gig HD, an ATI Radeon 9000 (without fan), and a good motherboard without a fan on the chipset.

      Good advice, but all this will be futile if the system case doesn't reduce the noise that is generated by the internal components (or even amplifies it because of vibrations). Watch out for expensive aluminum cases which look cool but have no provisions against noise. Acoustically well-designed cases, such as this one [silentmaxx.de] from silentmaxx use several types of dampening materials with different noise absorption properties [silentmaxx.de].
  • by Anonymous Coward on Thursday January 30, 2003 @12:13PM (#5189487)
    $ finger johnc@idsoftware.com [idsoftware.com]
    Welcome to id Software's Finger Service V1.5!

    Name: John Carmack
    Email:
    Description: Programmer
    Project:
    Last Updated: 01/29/2003 18:53:43 (Central Standard Time)

    Jan 29, 2003

    NV30 vs R300, current developments, etc

    At the moment, the NV30 is slightly faster on most scenes in Doom than the
    R300, but I can still find some scenes where the R300 pulls a little bit
    ahead. The issue is complicated because of the different ways the cards can
    choose to run the game.

    The R300 can run Doom in three different modes: ARB (minimum extensions, no
    specular highlights, no vertex programs), R200 (full featured, almost always
    single pass interaction rendering), ARB2 (floating point fragment shaders,
    minor quality improvements, always single pass).

    The NV30 can run DOOM in five different modes: ARB, NV10 (full featured, five
    rendering passes, no vertex programs), NV20 (full featured, two or three
    rendering passes), NV30 ( full featured, single pass), and ARB2.

    The R200 path has a slight speed advantage over the ARB2 path on the R300, but
    only by a small margin, so it defaults to using the ARB2 path for the quality
    improvements. The NV30 runs the ARB2 path MUCH slower than the NV30 path.
    Half the speed at the moment. This is unfortunate, because when you do an
    exact, apples-to-apples comparison using exactly the same API, the R300 looks
    twice as fast, but when you use the vendor-specific paths, the NV30 wins.

    The reason for this is that ATI does everything at high precision all the
    time, while Nvidia internally supports three different precisions with
    different performances. To make it even more complicated, the exact
    precision that ATI uses is in between the floating point precisions offered by
    Nvidia, so when Nvidia runs fragment programs, they are at a higher precision
    than ATI's, which is some justification for the slower speed. Nvidia assures
    me that there is a lot of room for improving the fragment program performance
    with improved driver compiler technology.

    The current NV30 cards do have some other disadvantages: They take up two
    slots, and when the cooling fan fires up they are VERY LOUD. I'm not usually
    one to care about fan noise, but the NV30 does annoy me.

    I am using an NV30 in my primary work system now, largely so I can test more
    of the rendering paths on one system, and because I feel Nvidia still has
    somewhat better driver quality (ATI continues to improve, though). For a
    typical consumer, I don't think the decision is at all clear cut at the
    moment.

    For developers doing forward looking work, there is a different tradeoff --
    the NV30 runs fragment programs much slower, but it has a huge maximum
    instruction count. I have bumped into program limits on the R300 already.

    As always, better cards are coming soon.

    --------

    Doom has dropped support for vendor-specific vertex programs
    (NV_vertex_program and EXT_vertex_shader), in favor of using
    ARB_vertex_program for all rendering paths. This has been a pleasant thing to
    do, and both ATI and Nvidia supported the move. The standardization process
    for ARB_vertex_program was pretty drawn out and arduous, but in the end, it is
    a just-plain-better API than either of the vendor specific ones that it
    replaced. I fretted for a while over whether I should leave in support for
    the older APIs for broader driver compatibility, but the final decision was
    that we are going to require a modern driver for the game to run in the
    advanced modes. Older drivers can still fall back to either the ARB or NV10
    paths.

    The newly-ratified ARB_vertex_buffer_object extension will probably let me do
    the same thing for NV_vertex_array_range and ATI_vertex_array_object.
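
    (Illustrative aside, not part of the .plan: a minimal sketch of what using
    ARB_vertex_buffer_object instead of the vendor array extensions looks like.
    It assumes a current GL context, that the ARB entry points have been
    resolved, and made-up mesh data.)

    #define GL_GLEXT_PROTOTYPES 1
    #include <GL/gl.h>
    #include <GL/glext.h>

    /* Upload static geometry once into a buffer object... */
    static GLuint upload_static_mesh(const GLfloat *verts, int numVerts)
    {
        GLuint vbo;
        glGenBuffersARB(1, &vbo);
        glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
        glBufferDataARB(GL_ARRAY_BUFFER_ARB,
                        numVerts * 3 * sizeof(GLfloat), verts, GL_STATIC_DRAW_ARB);
        return vbo;
    }

    /* ...and draw from it; the vertex pointer is now an offset into the VBO. */
    static void draw_static_mesh(GLuint vbo, int numVerts)
    {
        glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, (const GLvoid *)0);
        glDrawArrays(GL_TRIANGLES, 0, numVerts);
        glBindBufferARB(GL_ARRAY_BUFFER_ARB, 0);
    }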

    Reasonable arguments can be made for and against the OpenGL or Direct-X style
    of API evolution. With vendor extensions, you get immediate access to new
    functionality, but then there is often a period of squabbling about exact
    feature support from different vendors before an industry standard settles
    down. With central planning, you can have "phasing problems" between
    hardware and software releases, and there is a real danger of bad decisions
    hampering the entire industry, but enforced commonality does make life easier
    for developers. Trying to keep boneheaded-ideas-that-will-haunt-us-for-years
    out of Direct-X is the primary reason I have been attending the Windows
    Graphics Summit for the past three years, even though I still code for OpenGL.

    The most significant functionality in the new crop of cards is the truly
    flexible fragment programming, as exposed with ARB_fragment_program. Moving
    from the "switches and dials" style of discrete functional graphics
    programming to generally flexible programming with indirection and high
    precision is what is going to enable the next major step in graphics engines.
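
    (Illustrative aside, not part of the .plan: a rough sketch of how a fragment
    program gets loaded through ARB_fragment_program. The program text is a
    trivial made-up example, not a Doom shader; a current GL context and
    resolved extension entry points are assumed.)

    #define GL_GLEXT_PROTOTYPES 1
    #include <stdio.h>
    #include <string.h>
    #include <GL/gl.h>
    #include <GL/glext.h>

    static const char *fp_src =
        "!!ARBfp1.0\n"
        "TEMP diffuse;\n"
        "TEX  diffuse, fragment.texcoord[0], texture[0], 2D;\n"
        "MUL  result.color, diffuse, fragment.color;\n"  /* modulate by vertex color */
        "END\n";

    static GLuint load_fragment_program(void)
    {
        GLuint prog;
        glGenProgramsARB(1, &prog);
        glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, prog);
        glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                           (GLsizei)strlen(fp_src), fp_src);
        if (glGetError() != GL_NO_ERROR)
            fprintf(stderr, "fragment program error: %s\n",
                    glGetString(GL_PROGRAM_ERROR_STRING_ARB));
        glEnable(GL_FRAGMENT_PROGRAM_ARB);
        return prog;
    }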

    It is going to require fairly deep, non-backwards-compatible modifications to
    an engine to take real advantage of the new features, but working with
    ARB_fragment_program is really a lot of fun, so I have added a few little
    tweaks to the current codebase on the ARB2 path:

    High dynamic color ranges are supported internally, rather than with
    post-blending. This gives a few more bits of color precision in the final
    image, but it isn't something that you really notice.

    Per-pixel environment mapping, rather than per-vertex. This fixes a pet-peeve
    of mine, which is large panes of environment mapped glass that aren't
    tessellated enough, giving that awful warping-around-the-triangulation effect
    as you move past them.

    Light and view vectors normalized with math, rather than a cube map. On
    future hardware this will likely be a performance improvement due to the
    decrease in bandwidth, but current hardware has the computation and bandwidth
    balanced such that it is pretty much a wash. What it does (in conjunction
    with floating point math) give you is a perfectly smooth specular highlight,
    instead of the pixelish blob that we get on older generations of cards.
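
    (Illustrative aside, not part of the .plan: the "normalize with math" idea
    expressed as an ARB_fragment_program snippet, embedded here as a C string.
    The register names are hypothetical.)

    /* Replace a normalization cube map lookup with DP3/RSQ/MUL on the
     * interpolated light vector (hypothetical snippet, not Doom's shader). */
    static const char *normalize_with_math =
        /* was roughly: "TEX lightDir, fragment.texcoord[1], texture[2], CUBE;" */
        "TEMP len2, invLen, lightDir;\n"
        "DP3  len2.w,   fragment.texcoord[1], fragment.texcoord[1];\n" /* |v|^2 */
        "RSQ  invLen.w, len2.w;\n"                                     /* 1/|v| */
        "MUL  lightDir, fragment.texcoord[1], invLen.w;\n";            /* v/|v| */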

    There are some more things I am playing around with, that will probably remain
    in the engine as novelties, but not supported features:

    Per-pixel reflection vector calculations for specular, instead of an
    interpolated half-angle. The only remaining effect that has any visual
    dependency on the underlying geometry is the shape of the specular highlight.
    Ideally, you want the same final image for a surface regardless of if it is
    two giant triangles, or a mesh of 1024 triangles. This will not be true if
    any calculation done at a vertex involves anything other than linear math
    operations. The specular half-angle calculation involves normalizations, so
    the interpolation across triangles on a surface will be dependent on exactly
    where the vertexes are located. The most visible end result of this is that
    on large, flat, shiny surfaces where you expect a clean highlight circle
    moving across it, you wind up with a highlight that distorts into an L shape
    around the triangulation line.

    The extra instructions to implement this did have a noticeable performance
    hit, and I was a little surprised to see that the highlights not only
    stabilized in shape, but also sharpened up quite a bit, changing the scene
    more than I expected. This probably isn't a good tradeoff today for a gamer,
    but it is nice for any kind of high-fidelity rendering.
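
    (Illustrative aside, not part of the .plan: the per-pixel reflection-vector
    idea, R = 2(N.V)N - V, as an ARB_fragment_program snippet with hypothetical
    register names.)

    static const char *reflection_snippet =
        "# assumes TEMPs 'normal' and 'viewDir' already hold the per-fragment\n"
        "# surface normal and view direction (hypothetical names)\n"
        "PARAM two = { 2.0, 2.0, 2.0, 2.0 };\n"
        "TEMP  ndotv, refl;\n"
        "DP3   ndotv.w, normal, viewDir;\n"         /* N . V          */
        "MUL   refl,    normal, ndotv.w;\n"         /* (N.V) * N      */
        "MAD   refl,    refl, two, -viewDir;\n";    /* 2*(N.V)*N - V  */
    /* (refl . lightDir) raised to the specular exponent then replaces the
       interpolated half-angle term. */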

    Renormalization of surface normal map samples makes significant quality
    improvements in magnified textures, turning tight, blurred corners into shiny,
    smooth pockets, but it introduces a huge amount of aliasing on minimized
    textures. Blending between the cases is possible with fragment programs, but
    the performance overhead does start piling up, and it may require stashing
    some information in the normal map alpha channel that varies with mip level.
    Doing good filtering of a specularly lit normal map texture is a fairly
    interesting problem, with lots of subtle issues.

    Bump mapped ambient lighting will give much better looking outdoor and
    well-lit scenes. This only became possible with dependent texture reads, and
    it requires new designer and tool-chain support to implement well, so it isn't
    easy to test globally with the current Doom datasets, but isolated demos are
    promising.

    The future is in floating point framebuffers. One of the most noticeable
    thing this will get you without fundamental algorithm changes is the ability
    to use a correct display gamma ramp without destroying the dark color
    precision. Unfortunately, using a floating point framebuffer on the current
    generation of cards is pretty difficult, because no blending operations are
    supported, and the primary thing we need to do is add light contributions
    together in the framebuffer. The workaround is to copy the part of the
    framebuffer you are going to reference to a texture, and have your fragment
    program explicitly add that texture, instead of having the separate blend unit
    do it. This is intrusive enough that I probably won't hack up the current
    codebase, instead playing around on a forked version.
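
    (Illustrative aside, not part of the .plan: a sketch of the copy-to-texture
    workaround described above, since no blending is available into a floating
    point framebuffer. The names and texture setup are assumptions.)

    #include <GL/gl.h>

    /* Grab the framebuffer region we are about to add light into; 'tex' must
     * already exist at the right size, with a float internal format exposed by
     * a vendor extension (not shown here). */
    static void copy_region_to_texture(GLuint tex, int x, int y, int w, int h)
    {
        glBindTexture(GL_TEXTURE_2D, tex);
        glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, x, y, w, h);
    }

    /* The fragment program then ends with something like:
     *   TEX prev, texcoord, texture[3], 2D;    # previously copied contents
     *   ADD result.color, light, prev;         # instead of the blend unit
     * where 'texcoord' maps the fragment back into the copied region. */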

    Floating point framebuffers and complex fragment shaders will also allow much
    better volumetric effects, like volumetric illumination of fogged areas with
    shadows and additive/subtractive eddy currents.

    John Carmack
    • by Anonymous Coward
      "The R300 can run Doom.." "The NV30 can run DOOM"

      Heh, even the Carmack isn't sure what the capitalization is.
    • Please do not slashdot id's servers. They're trying to build Doom 3, not Duke Nukem Forever.

      And posting John Carmack's email was not a Smart Thing To Do (TM). Please follow the Golden Rule of email. I bet you wouldn't want your email address posted without your permission.

  • (mostly due to the egregious misspelling)
  • by Wag ( 102501 ) on Thursday January 30, 2003 @12:20PM (#5189533)
    I was holding out for nVidia's new card, but now I've given up on that idea. With more and more people using PCs as multimedia devices (to watch DVDs, listen to music, etc.), a fan that puts out almost 60 dB of noise is unacceptable.

    I really wanted to go away from ATI this time around, but it appears I'll have to wait a little longer. I'm sure nVidia will [eventually] release a fanless, 1-slot version. I just wonder if it will be too little too late.
    • The fan only runs at full RPM when the card is doing a lot of 3d work. 2D stuff causes the fan to run a lot slower (not sure if it ever turns off completely tho)...
      • by Elledan ( 582730 ) on Thursday January 30, 2003 @12:47PM (#5189666) Homepage
        "The fan only runs at full RPM when the card is doing a lot of 3d work. 2D stuff causes the fan to run a lot slower (not sure if it ever turns off completely tho)..."

        From the [H]ard|OCP review:

        "Using a decibel meter we tested the sound level of the GFFX at three feet away, directly in front of the exhaust vent. In 2D mode, the reading was 56dB."

        I don't know about you, but I find 56 dB to be very noisy.
      • The fan only runs at full RPM when the card is doing a lot of 3d work.

        Not quite. The FX card actually turns the fans full blast the second 3d is detected in software (ie: a DirectX/OpenGL call is made). I thought (hoped) it would be based on the running temperature of the card (hardware RPM control) but unfortunately, it isn't.
        • This would become a real issue if Microsoft/Linux desktops took a cue from Apple with Quartz Extreme and actually put UI rendering features such as transparencies and shadows onto the video card rather than the CPU. Your card wouldn't be working hard, but it would be noisy all the time.

          Hopefully the third-party OEMs of the FX will create more sensible cooling mechanisms.

  • by BigBir3d ( 454486 ) on Thursday January 30, 2003 @12:25PM (#5189557) Journal
    These are both the "new hotness," but with the noise of the nVidia I foresee it becoming "old and busted" quite soon.

    Not to mention ATi's next card...
  • It's quite likely that I'm the only person here in this hick town that could understand most of the words in that article. I really can't wait until D3 comes out and all the idiots around here wonder why it won't run on a P200 MMX with an ATI Rage Pro. The same sort of thing happened with UT2003, and it's really enjoyable to tell people that I have a GeForce3 with 64 MB of memory, but I'm thinking about upgrading soon, and they go "HOLY SHIT! Why would you need to upgrade?!"

    Ah, Wisconsin... Now if you'll excuse me, it is time for another dose of cheese.

  • by mbbac ( 568880 ) on Thursday January 30, 2003 @12:31PM (#5189585)
    ...really need to concentrate on spelling people's names correctly. When they can't even spell John Carmack's name right, something is seriously wrong.
  • More Kudos to ATI (Score:5, Insightful)

    by scotay ( 195240 ) on Thursday January 30, 2003 @12:33PM (#5189597)
    I've been running ATI cards on my desktop since the mach64 chip days. When I got my 9700 in August, I NEVER thought I'd still have a chip that was competitive with nVidia's best offering in 3D. I never bought ATI cards because they were best in 3D or driver quality - they never were better. ATI did have superior 2D quality (to my eyes) and Video/DVD playback. Given I spend 90% of my time on the desktop, ATI had the right mix of features. Now they finally are competitive with nVidia's 3D.

    After we started to get benchmarks showing matched performance, the remaining questions were left to DX9 and the more complex shaders. From Carmack's comments and the shadermark tests that are showing up, it appears that ATI is anywhere from competitive to superior in the DX9 2.0 shaders as well. It does look like the NV30 can indeed run deeper/higher precision shaders, but we will have to wait to see if games ever ship with shaders deeper than the lowest common denominator between NV30 and R300.

    Carmack does mention that nVidia promised that "compiler improvements" will increase the NV30 shader performance. (Better scheduling of parallel pipes?)

    The astounding bottom line is that as of Jan 2003, the 9700 is not shown to be inferior in any way to an as-yet unreleased flagship product from the king of 3D on the mainstream desktop. 3 cheers for ATI.
    • Re:More Kudos to ATI (Score:1, Informative)

      by Anonymous Coward
      "ATI did have superior 2D quality (to my eyes) and Video/DVD playback. Given I spend 90% on my time on a desktop, ATI had the right mix of features."

      Have you looked at Matrox [matrox.com]'s product range lately?

      If I were building a system for primary 2D operations, that's who I'd be buying from. Their cards are wonderful.
  • by Anonymous Coward
    finger johnc@idsoftware.com | more

    That's the only way to read johnc's info.
  • I have actually heard this same sentiment from graphics and game designers before. It got me thinking about how we Linux users, even when faced with the powerful opinion of Carmack that ATI is likely to be better than Nvidia as a whole, still use Nvidia: because Nvidia supports our grass roots movement more. This usually applies even to Linux-using gamers who use WineX or have a Windows partition (of which I am both).

    Imagine how much of a market major printer, digital camera, and scanner manufacturers are missing by forcing their tech support people to say "we don't support it under that lee-nooks stuff". If a company, say Canon, would release a universally supported bubblejet driver for older printers, and a universal PPD for their postscript printers to work in CUPS, they could see massive gains (we do account for like 6% of all users, after all).

    • we [linux users] still use Nvidia: because Nvidia supports our grass roots movement more.

      How ironic (and actually close to the true meaning of ironic, for once.)

      Yes, nvidia "supports linux". But I thought the "grass roots movement" was about "free, opensource software". Nvidia's support certainly does NOT fall into that category.

  • I worry about NVIDIA (Score:5, Informative)

    by szquirrel ( 140575 ) on Thursday January 30, 2003 @12:45PM (#5189660) Homepage
    NVIDIA got where they are today by beating 3dfx on their own turf: high-end gaming performance. Remember when 3dfx released the Voodoo 4 & 5? More expensive than the GeForce256 but not decisively better performance. Now I'm hearing similar things about the GeForceFX vs. ATI's three month old Radeons. NVIDIA is getting bigger but they still aren't a huge company. Can they really afford to lose the lucrative high-end sales right now?

    One thing NVIDIA does seem to have going well is their motherboard chipsets. The new nForce2 really kicks ass by all accounts. I remember a while back hearing about an ATI mobo chipset based on tech they acquired from ArtX, but apparently end-user mobo chipsets aren't ATI's plan.

    Good luck, NVIDIA. Hope y'all can keep up the pace.
    • > three month old Radeons.

      Erm, try six month old radeons.
    • High end isn't where the money is at... it's the mid to low end that generates most of the revenue.

      You need not worry until Nvidia starts losing OEM business to ATI.

    • Now I'm hearing similar things about the GeForceFX vs. ATI's three month old Radeons.

      Actually, the 9700 Pro has been out for something like six months, and the GeForce FX won't be out for another three, so it's more like nine months' difference.

      Can they really afford to lose the lucrative high-end sales right now?

      No, but they won't. First of all, there are all the nvidia fanboys, who think that nvidia rocks because nvidia rocks, ergo nvidia rocks. Secondly, there are the ATI anti-fanboys, who think that ATI sucks because ATI sucks, or because their drivers suck (despite not having actually tried any drivers for three years). Thirdly, there are the people who are stupid, or keep their PC in some kind of sound-proof box. Finally, there are people who don't buy the top-of-the-line cards anyway, that fit into one of the above categories.

      Nvidia won't die off for a long long time. They may be new, but fanboys will keep them alive through any tough times they weather... unless ATI can deliver a next-generation card within a few months of nvidia's last-generation card. If ATI can bring out a new card within, say, six months, and whose performance is to the GeForce FX what the 9700 Pro was to the GF4Ti, then I think things will begin to go very, very bad. Unless that happens though, ATI has only won the battle, not the war.

      --Dan
    • Watch out with this combination. Works like a charm in Windows, and even works decently well in Linux, so far. But under Linux it's not fully capable, due to nVidia's usual documentation/binary driver issues.

      1: No GART driver for Linux. The GART driver is integrated with nVidia video drivers, so forget about 3D on an ATI on an nForce under Linux. The nForce is effectively tied to nVidia video for Linux 3D.

      2: No APIC. At the moment, I have stuff like SATA, firewire, USB2, AC97 modem, and USB2.0 turned off. Even so, I have an IRQ conflict between the ATI video and USB1.1 that so far hasn't bit me. But I suspect future pain, here.

      3: Sound works - in stereo, not Dolby 5.1. I've heard of a $30 driver that will give full capability, though I've heard mixed reports of getting the SPDIF working even with these.

      4: Binary-only network driver. There's also a 3com, but something about it requires patching the standard driver to get it recognized. So far nvnet works, so I haven't fussed with the 3com.

      Semi off-topic, except that there is a tie between nForce and nVidia video, so I guess that's relevant to the subject. This is also a concern because it's a really high-performance board, where you'd really like to run an R300 or NV30.

      Fortunately my mission for this board was largely Win-based with Linux as a dual-boot, or I would have RMA-ed the thing. But I kept the ATI video, and refused to "reward" nVidia's actions with more money.
      • <i>2: No APIC. At the moment, I have stuff like SATA, firewire, USB2, AC97 modem, and USB2.0 turned off. Even so, I have an IRQ conflict between the ATI video and USB1.1 that so far hasn't bit me. But I suspect future pain, here.</i>

        What the hell are you talking about? IRQ conflicts were a DOS (and thus also Windows 95/98) problem.

        Unless you have two old ISA cards that are jumper-configured to use the same IRQ, you won't get a similar problem in Linux.

        PCI, and thus AGP, was designed so that different devices can share the same interrupt lines. The only problem I've heard of with this is Creative's sound cards that crap on the PCI bus, but Linux makes sure not to share IRQs between devices that are known to be buggy.
  • Just in case bluesnews starts to get chunky: a perfectly non-busy server at
    Doom.AxleGames.com [axlegames.com]
    has the .plan.
  • Driver differences (Score:5, Informative)

    by daVinci1980 ( 73174 ) on Thursday January 30, 2003 @12:50PM (#5189680) Homepage
    Carmack mentioned this, and it's important not to gloss over...

    There's a big difference between a driver's theoretical output and its actual achieved output.

    In testing at my job, we found that the ATI drivers typically performed very poorly in comparison to those released by nVidia on similar hardware. In addition, we often had more serious issues with bugs in ATI drivers than in nVidia's. Although the next great thing from nVidia isn't likely to outright dethrone the 9700, nVidia is constantly improving their driver technology, constantly making the layer between software and hardware thinner and thinner.
    • by bogie ( 31020 ) on Thursday January 30, 2003 @02:22PM (#5190187) Journal
      He's right; ATI has really only gotten its act together with drivers in the past 3-6 months. Up till then it was one buggy driver after another. For most people the nvidia drivers just worked, while it seemed that with every ATI driver release you ended up needing patches for every game to work right. That said, at this point it does seem like ATI finally got things right with the 9700. Up till now I honestly wouldn't even consider an ATI card, but compared to the initial FX I don't see why you'd choose it over the 9700. The more I read about it, the more it seems like cost, noise, and loss of a PCI slot will be keeping me away from the FX. Right now I have a 4200, and by the summer I'll be ready for a new card. If Nvidia doesn't get its FX in line by then, there's no doubt I'll jump to ATI.
  • by Cannelbrae ( 157237 ) on Thursday January 30, 2003 @01:06PM (#5189766)
    It's wonderful to see slashdot celebrating its heroes like John Carmak, Richard Stalman and Linus Torvads. I mean, with names like these contributing, who needs editors?
  • Seems to me the shader limits are more important than the ARB2 path. Nvidia can probably get the ARB2 speeds up with driver optimization. I can't imagine the limits on shader instructions can easily be remedied. Anyone know how this will affect the ATI? Can it swap in more instructions at a performance loss (or no loss), or can it just not run the shader if it goes over the instruction limit? In other words, does Carmack make large shader programs that ATI can't run or runs slower, or does he cap the shaders at ATI's limit and get simpler shader programs for both cards?
    • To my knowledge, all you'd have to do in order to use a longer shader is to break it down into separate passes. What the nVIDIA card does well is extremely long pixel shaders in one pass (1024 vs. 255 instructions, I think?), and also insanely long vertex shaders in one pass (1024x64 loops vs. 255x1 loop).

      If you do more passes, as I understand it you have to upload all the scene geometry again, which is stressful on bus bandwidth and wasteful of processing resources.

      Of course, it's entirely possible I'm misinterpreting everything, and I apologise in advance if that's the case.
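
      (A minimal sketch of the multipass fallback described above, assuming made-up program handles and a shader whose result is a sum of two halves; not from Doom.)

      #define GL_GLEXT_PROTOTYPES 1
      #include <GL/gl.h>
      #include <GL/glext.h>

      static void draw_with_split_shader(GLuint progFirstHalf, GLuint progSecondHalf,
                                         void (*draw_geometry)(void))
      {
          glEnable(GL_FRAGMENT_PROGRAM_ARB);

          /* pass 1: first half of the shader writes its partial result */
          glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, progFirstHalf);
          glDisable(GL_BLEND);
          draw_geometry();

          /* pass 2: second half is added on top; the geometry is submitted
           * again, which is the bandwidth cost mentioned above */
          glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, progSecondHalf);
          glEnable(GL_BLEND);
          glBlendFunc(GL_ONE, GL_ONE);        /* additive combine      */
          glDepthFunc(GL_EQUAL);              /* only touch same pixels */
          draw_geometry();

          glDepthFunc(GL_LESS);
          glDisable(GL_BLEND);
          glDisable(GL_FRAGMENT_PROGRAM_ARB);
      }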
  • Spelling (Score:5, Funny)

    by doc_traig ( 453913 ) on Thursday January 30, 2003 @01:28PM (#5189873) Homepage Journal
    update: sorry bout the misspelling, don't know how I missed that

    psst... it's spelled about...

    • "update: sorry bout the misspelling, don't know how I missed that"


      "psst... it's spelled about.."

      Man, if you hadn't come along, I woulda thought he was referring to Celebrity Boxing match between Miss Spelling and Miss Harding.
  • This may be slightly OT, but have any of you ever tried to replace a fan on a video card before? The big issue with Nvidia's new offering is that it is loud, etc. - but I don't see that as the biggest problem. I have a case that's fairly quiet on the inside, and the only way I would notice that my video card fan died is to take the side off and check, or notice that it burnt up. I have a GeForce 2 GTS, and when I did look inside my case, I heard a grinding sound and traced it back to my stupid VGA fan. This tells me it's about to crap out on me, and I need to replace it.

    I have found a few good places on the net to replace the fans with, yet every OEM manufacturer of the graphics cards seems to use a different method of attaching the fans. Some have pins, some just use adhesive (I think that's a major problem waiting to happen), and even if you do find a fan that works, the pins on it might not line up with the slots on your board. Oh, another thing - 9 times out of 10, even if I do find a fan that has little pins that line up with the holes on the board, I can't find one that also has matching power connectors. With my luck, I'll find one that has a 3 pin connector when I need 2, or vice versa. So, does anybody know of a good place to get VGA fan replacements for these ultra-hot running video cards???
    • This is something I commented on the other day when /. had the article about the FX coming out, and everyone was bitching about the noise level.

      The manufacturers are going to be able to come up with their own thermal management solutions if they want, and I'm sure someone is going to come up with something not as loud. (I have watercooling so it's not that big of an issue, I just have to wait for a waterblock for it..)

      As for fan replacements, contact the person who made your GeForce card. They can probably sell you one directly or refer you somewhere.
  • There's just no comparison. The 3D card by ATI is much better at game graphics than the leaf-blower from nVidia.
  • GeforceFX non Ultra (Score:2, Informative)

    by Ratchet ( 79516 )
    There's also a slower version of the GeforceFX in the works, sans leafblower. It's reported to run at 400MHz core/800MHz memory (as opposed to 500/1000 for the "Ultra" leafblower version). It will likely get trounced by the 9700 and 9700 Pro in most performance and IQ areas, but it will be an alternate solution for those of you who want one of those NV30 based cards but don't want to risk having your cat sucked into the back of your computer.
