Sony Says Nobody Will Ever Use All the Power of a PS3
Tighthead Prop writes "Sony executive Phil Harrison has made some brash comments about the Cell processor and the PlayStation 3. Harrison says that the current PS3 game lineup is using less than half of the machine's power, adding that 'nobody will ever use 100 percent of its capacity.' Is he right? 'The major reason Harrison wants to hype up the "unlimited" potential of the PS3's architecture is to downplay comparisons between games running on Sony's console and Microsoft's Xbox 360. The two systems are not completely dissimilar: they both contain a PowerPC core running at 3.2 GHz, both have similarly-clocked GPUs, and both come with 512 MB of RAM.'"
This sounds familiar... (Score:5, Insightful)
Architecture (Score:2, Insightful)
Conclusion: they are trying to present bad news as good news, business as usual...
Brilliant plan, guys (Score:5, Insightful)
Does this mean that there won't be a PS4? (Score:4, Insightful)
Why make it then? (Score:3, Insightful)
Re:Does this mean that there won't be a PS4? (Score:3, Insightful)
Well duh! (Score:3, Insightful)
That's what happened after the SNES/Genesis days: from the N64 onward, each "next gen" life span has become shorter and shorter, so developers are only just getting familiar with a system when the next-gen system comes out.
I'll sound like an old grandpa, but I liked it more when game generations lasted longer. You could see really nice things done with the technology, and the hardware had more "value" (see, for example, all the NES peripherals) because you *knew* the system would remain active for a long time and more games would likely come.
I won't buy the "eye toy" or the "maracas" or the "bongos" today for any system, because I know that only one or two games would ever be available for them.
Re:Architecture (Score:5, Insightful)
Re:Does this mean that there won't be a PS4? (Score:2, Insightful)
Re:This sounds familiar... (Score:5, Insightful)
Will anyone use 100% of the CPU(s)?
AND 100% of the GPU?
AND 100% of the RAM?
If not, Sony can always say they aren't using 100% of the system, so the game didn't live up to its potential.
Show me a game on any system that uses 100% of the resources, and I'll show you a game that hangs like mad and runs like crap.
Once again Sony comes out with an idiotic statement that they think will impress the public.
(Admittedly, the article was
Re:This sounds familiar... (Score:3, Insightful)
You could just guess...you have reasonably good odds of getting it correct. I bet it's 7.
Re:This sounds familiar... (Score:3, Insightful)
Bragging? (Score:1, Insightful)
Phil Harrison states that developers are, obviously, just getting started at fully tapping the PS3 hardware, just as with every other console ever made. He adds that Sony will be constantly updating the PS3 system software with new capabilities, so no game will likely ever use EVERY SINGLE FEATURE of this monster of a console.
Shocking!
Guy, go away, the console world is tired of the FUD from people like you.
Re:This sounds familiar... (Score:4, Insightful)
Guess that means it's impossible for a game to "live up to its full potential"...
Not News (Score:4, Insightful)
Nobody ever uses 100% of the power of their car, either. Sure, you LIKE having the 250 HP engine, but you only use it for 3 seconds on the on-ramp. And hopefully nobody uses the full power of their 800-watt home theater system. The excess power is there for the momentary condition, not to use all of the time.
I'm really shocked (Score:1, Insightful)
All that the guy is saying in the article is that developers aren't using the full potential of the platform yet, and that games will continue to get more and more out of the console as time progresses.
This is true of any console. Look at PS2 launch titles compared to current PS2 titles. Look at Xbox 360 launch titles compared to current 360 titles. As developers become more familiar with the hardware, the games get better. Sony was developing the PS3 up to the last minute; developers really only got maybe 3 months with final hardware before launch (and with only 3 months to go, no one is going to take a huge risk redesigning how their game engine is structured). Even the developers of the best PS3 launch title, Resistance: Fall of Man, have admitted that they used only 2 of the available 6 SPUs.
The PS3's power does not lie in its core processor; the core processor is fairly slow. Its power lies in the SPUs. Second- and third-generation games are going to use this power more effectively. Yes, the PS3 is harder to program for, but it also has more raw calculation power. It's a trade-off.
There are other architectural differences that developers will address too. Like the PS2, the PS3 has a fairly small amount of texture RAM. But that texture RAM is faster than the 360's, and the bus between main memory and texture memory is huge. So like the PS2, the PS3 is structured to stream textures into texture memory from main memory. In order to hit PS3 launch deadlines, I doubt anyone did this. In the next wave of games they will, and that code will be in their engine for every game after that.
All of the comparisons between the Xbox 360 and PS3 so far have been of multiplatform titles. Developers who do multiplatform titles usually develop the game on Windows and then write hardware-specific low-level code. This code ports fairly well to the Xbox 360 due to its similarity to PC/DirectX, but it will not run very well on the PS3, because it does not take into account the SPUs, which are the core of the PS3's processing power.
PS3-exclusive games will start to appear, and they will really shine. And as more and more middleware companies begin to write PS3-specific code that utilizes the SPUs, you will see more and more ports start swinging in favor of the PS3. As games get bigger, more and more companies are using middleware for physics, sound, graphics, and AI.
This is all Harrison is saying: right now developers are not utilizing the full power of the console, and there will always be new discoveries that pull more power out of it. This is still happening even with the PS2, where new VU techniques are still being discovered.
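The texture- and data-streaming pattern this comment describes (small fast local memory, big chunks DMA'd in from main memory, processed, and written back) can be sketched in plain C. This is only an illustration of the pattern, not the real Cell SDK: `memcpy` stands in for the DMA engine, and all names here are made up.

```c
#include <string.h>

/* Illustrative stand-in for an SPU-style workflow: work arrives in
 * fixed-size chunks copied into a small "local store" buffer, is
 * processed there, then copied back out.  memcpy plays the role of
 * the DMA engine; CHUNK plays the role of the local store budget. */
#define CHUNK 64

static void process_chunk(float *buf, int n)
{
    for (int i = 0; i < n; i++)
        buf[i] *= 2.0f;          /* the per-element "kernel" */
}

void stream_process(float *data, int count)
{
    float local[CHUNK];          /* pretend local store */
    for (int off = 0; off < count; off += CHUNK) {
        int n = (count - off < CHUNK) ? count - off : CHUNK;
        memcpy(local, data + off, n * sizeof(float));   /* "DMA in"  */
        process_chunk(local, n);
        memcpy(data + off, local, n * sizeof(float));   /* "DMA out" */
    }
}
```

The point of structuring code this way is that the chunk size is fixed and known, so the copy of the next chunk can be overlapped with processing of the current one (double buffering), which is exactly what the comment means by engines being "structured to stream."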
He's right but for the wrong reasons. (Score:4, Insightful)
With the GPU doing graphics, one core doing AI/gameplay, another doing physics, and another doing audio/networking/input, you've pretty much got all the processing power you need. If you start spreading a game out across too many cores, it's going to negatively affect the speed of the game, because you'll spend all your time trying to keep threads in sync. I'd argue this is why Sony has it wrong and MS has it right: the GPU can handle graphics, and the 3 cores can be used as mentioned above, which seems the optimal division of work in a game engine. I'm convinced that 4 physical processing units at 4 GHz would be better than 8 physical processing units at 3.2 GHz, so perhaps that would have been a better route for Sony if they really felt the need to beat the 360 on performance.
To me the Cell seems more suited to number-crunching applications: the sort where you can offload large amounts of data to each SPU and let them go on their merry way processing those chunks, without having to worry about whether every few bytes of data are in sync.
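The workload shape this commenter has in mind, independent chunks with no cross-worker synchronization, can be sketched in a few lines of C. The workers run sequentially here purely to keep the sketch portable; the structure (private slice, private accumulator, one cheap combine at the end) is the point, since it is exactly what makes the "no sync needed" case easy and the "threads constantly in sync" case from the comment above expensive.

```c
/* Sketch of an "embarrassingly parallel" reduction: each worker gets
 * its own slice of the input and its own private partial result, so
 * no locking between workers is ever needed.  Only the final combine
 * touches shared data, and it happens after all workers are done. */
#define WORKERS 8

double chunked_sum(const double *data, int count)
{
    double partial[WORKERS] = {0};
    int per = (count + WORKERS - 1) / WORKERS;   /* slice size, rounded up */

    for (int w = 0; w < WORKERS; w++) {          /* one "SPU" per slice */
        int start = w * per;
        int end = (start + per < count) ? start + per : count;
        for (int i = start; i < end; i++)
            partial[w] += data[i];               /* private accumulator */
    }

    double total = 0;                            /* single cheap combine */
    for (int w = 0; w < WORKERS; w++)
        total += partial[w];
    return total;
}
```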
I honestly wonder if Sony management just assumed that the PlayStation 3 would sell like the PS2 and PS1, and hence insisted on using it as the tool to bring down the prices of Cell and Blu-ray, regardless of whether they were fit for purpose or not.
Re:Architecture (Score:3, Insightful)
For publishers like EA, this is absolutely true, and has been for some time (see: Call of Duty). But for the first-party stuff, they will still be leveraging whatever strengths the console has (Gears of War for Xbox, Gran Turismo for PS3, etc.).
Re:PS3 Exclusives - Volume 3 (Score:3, Insightful)
Re:This sounds familiar... (Score:5, Insightful)
I disagree with this 100%. Final Fantasy XII is one of the best-looking games on the PS2 to date, but there's a good argument to be made that Gran Turismo 4 (which runs in 1080i in one way or another, while FFXII is 480i only) surpasses it. But regardless, consoles aren't like PCs: there will ALWAYS be an enterprising developer who comes up with some crazy coding method no one ever considered before and squeezes a little more performance out of the system.
Remember when Shadow of the Colossus was released, and everyone was saying things like "no one ever thought the PS2 was capable of things like this"? Same principle. There's probably a lot of life left in the PS2 that no one will ever get around to tapping, because with the existence of the PS3 it's no longer worth the effort. By the time developers REALLY know their way around the PS3 and are on the verge of squeezing every last ounce out of it, the PS4 will be out on the market, and it simply won't make sense to bust one's ass trying to max out the PS3.
Re:Thank You AC (Score:3, Insightful)
Unlike the fellow above, I actually hope Sony is not paying you, because I would hope they'd get more for their money than a list of "Untitled" exclusives...
Re:Then either (Score:3, Insightful)
Or perhaps the PS3 is simply a poor design, considering that developers have repeatedly shunned the most complex console of each generation.
Even in the 16-bit days, a lot of developers went with the Genesis rather than the SNES, even though the SNES had greater capabilities, because the Genesis had a faster CPU and it was simply easier to develop games for.
In the 32-bit era the Saturn was basically ignored by everyone because it was a nightmare to code for, while the PS1 thrived. The PS1 had roughly half the raw CPU power of the Saturn (the Saturn has two Hitachi SH-2 processors, while the PS1 has one MIPS R3000 at about the same clock rate; SH-2 and R3000 are both pretty pathetic 32-bit RISC designs, although many people will hate on me for saying that about the R3000). The PS1, however, made a lot of things EASIER: you only had to focus on one CPU, and it did transparency in hardware, though you can't use its graphics chip for general-purpose computation. The Saturn is more powerful and you CAN do transparency, but you have to do it in software using the second CPU (in order to do the rendering in a timely fashion), and that is hard. So again the more powerful platform was neglected.
Those who would use Dreamcast as a counterexample should note that it died more because of Sony marketing than because of anything else. Developers abandoned it and waited for PS2, which turned out to have specs about an order of magnitude less powerful than announced.
Re:Thank You AC (Score:5, Insightful)
Re:Then either (Score:5, Insightful)
C) The Cell is a poor general purpose processor.
If you're at all familiar with the fundamentals of CPU design, it should be blindingly obvious that the Cell should be very good at handling streaming vector data, but relatively poor at more general purpose calculations.
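One concrete reason for this claim: the SPEs favor straight-line, branch-free vector code, and branchy general-purpose logic is rewritten into "select" operations instead. The sketch below shows the scalar shape of that idiom in plain C; the function names are made up for illustration, though the mask-and-select pattern itself is the standard one SIMD hardware is built around.

```c
#include <stdint.h>

/* Branchless "select": cond_mask must be all-ones (pick a) or
 * all-zeros (pick b).  Streaming vector hardware prefers this to an
 * if/else, because every lane takes the same straight-line path. */
static int32_t select_i32(int32_t cond_mask, int32_t a, int32_t b)
{
    return (a & cond_mask) | (b & ~cond_mask);
}

/* max() written without a branch: the comparison is turned into a
 * mask (-1 when a > b, 0 otherwise) and fed to the select. */
static int32_t max_i32(int32_t a, int32_t b)
{
    int32_t mask = -(int32_t)(a > b);   /* all-ones if a > b, else 0 */
    return select_i32(mask, a, b);
}
```

On a general-purpose core with a good branch predictor this buys little; on an in-order core with no predictor, avoiding the branch is the difference between streaming through data at full rate and stalling, which is the trade-off the comment is pointing at.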
Re:This sounds familiar... (Score:4, Insightful)
Re:This sounds familiar... (Score:3, Insightful)
Actually, your comment does raise a valid point, and deserves a more thoughtful response.
A console definitely demands a different approach to player interaction than a desktop does, for a variety of reasons. On the desktop you always have a keyboard and always have a display capable of at least moderately high res (800x600 minimum, usually more these days), whereas on the console you pretty much never have a keyboard and are still stuck with a large number of NTSC displays. How you interact with the game at the UI level will be quite different. Even how you draw everything could be rather different once you get above the core world logic.
I'm much less convinced that the underlying physics and modeling will be that much different between the two. How you present the world to the user (the graphics rendering) and how you interact with the user (controllers, UI elements) will certainly need to be different, but how the world operates should be pretty much the same. Otherwise I can see it being very hard to tune the interactions in the virtual world to make a fun, playable game.
Physics engines are complex beasts, and there are companies that specialize in implementing just that aspect of a game engine. [google.com] I can definitely see there being tuned implementations of the same physics engine for different platforms, so that the engine runs optimally on all of its specified targets. What I have a hard time imagining is one target increasing the number of variables and parameters it tracks over another, or running higher-precision calculations on one platform vs. another. As game companies move to commercial physics engines, I see them treating the engine as a black box, with known inputs giving known outputs. The remaining MIPS are then spent on game logic, which I also see as relatively constant across platforms, and on the game interface, which I see as requiring extensive adaptation for each platform. Optimizing the physics engine on a given platform just frees up MIPS for the other pieces, and the pieces with the most flexibility are the rendering and UI.
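The "black box with known inputs giving known outputs" framing can be made concrete with a toy fixed-timestep integrator: the state and parameters are the only inputs, so the same inputs produce the same trajectory regardless of which platform runs the step. The struct and function names below are illustrative, not any real engine's API.

```c
/* Toy deterministic physics "black box": a semi-implicit Euler step.
 * Given the same body state, acceleration, and timestep, it produces
 * the same output on every platform, which is what lets game logic
 * treat the physics layer as interchangeable across targets. */
typedef struct {
    float pos;
    float vel;
} body_t;

void physics_step(body_t *b, float accel, float dt)
{
    /* semi-implicit Euler: update velocity first, then position */
    b->vel += accel * dt;
    b->pos += b->vel * dt;
}
```

A platform-tuned implementation would vectorize or restructure this loop over many bodies, but, per the argument above, it would not change the inputs, outputs, or precision, only how fast the step runs.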
--Joe
Truth Be Told (Score:3, Insightful)
We still haven't even used the full capability of a 300 MHz processor and a 32 MB video card. The bottleneck is not hardware; it's software. Inefficient code, outdated methodologies, and improper application of libraries are a much greater bottleneck than the hardware in any system.
More cycles and more memory don't mean that developers are capable of better graphics and logic; they mean developers can be lazier in their optimization. Games that take up 5 GB of hard drive space do so because they can, not because they must. Developers know the user has 100+ gigabytes available on their hard drive, so no further optimization is necessary. They know the video card has 256 MB or more of memory, so they don't optimize the game any more than they need to. We only need 3 GHz processors because developers can throw away as many cycles as they want. On a needs basis, the actual logic and graphics of the most powerful game available would probably require a 300 MHz processor and 32 MB of video memory. All the rest is a buffer for waste.
This isn't a slight against coders; I'm a professional developer too. I've seen a lot of applications that could be optimized further, but other tasks are much higher up the priority tree, because even though the program could be more efficient, it doesn't need to be.