Oracle Won't Abandon SPARC, Says Ellison 280

fm6 writes "When the Oracle acquisition of Sun Microsystems was announced, it was widely assumed that Oracle was interested only in Sun's software technology, and would sell or discontinue all its hardware businesses. Larry Ellison, in an interview just posted on the Oracle web site, says that's not what's going to happen. In particular, SPARC isn't going anywhere (PDF): 'Once we own Sun we're going to increase the investment in SPARC. We think designing our own chips is very, very important. Even Apple is designing its own chips these days.'"
This discussion has been archived. No new comments can be posted.

  • Designing chips (Score:5, Insightful)

    by flaming error ( 1041742 ) on Thursday May 07, 2009 @09:54PM (#27871507) Journal

    "Even Apple is designing its own chips these days."

    Unlike Oracle, I think Apple is traditionally a hardware company.

    I wish them the best carrying on the Sun baton.

  • Of course (Score:4, Insightful)

    by SultanCemil ( 722533 ) on Thursday May 07, 2009 @09:54PM (#27871511)
    Well, of course he's going to say that - he's not just going to say "well, we're planning on axing 20,000 jobs and kissing bye-bye to the SPARC line". He has to at least maintain the *illusion* that they're going to keep producing SPARC chips.

    I love the line about how "even Apple" is designing its own chips. One could say "even Sun" sells Intel.
  • by phantomfive ( 622387 ) on Thursday May 07, 2009 @10:07PM (#27871655) Journal
    For many years, there were a multitude of different architectures, and all of them were supported by major software developers. Over time the number has gotten smaller and smaller; the only one used in typical desktop computers anymore is x86 (mainly thanks to Intel investing mountains of money into the manufacturing process). Unfortunately for Intel, manufacturing isn't the advantage it once was: AMD is still able to compete with them moderately well even when they've been a generation behind in manufacturing. Other things are coming into play besides raw processing power, things like power consumption and battery life.

    Intel is going to have trouble competing on battery life with ARM, or even PowerPC. Going into the future, we are going to see more ARM based netbooks (and they are going to be more usable), and the already common ARM handheld device is going to become more powerful. Suddenly there is going to be a need for software that runs on more than one architecture again. This is a good thing, in my opinion: it means x86 will not necessarily be the dominant processor forever into the future.
  • Re:Of course (Score:5, Insightful)

    by rackserverdeals ( 1503561 ) on Thursday May 07, 2009 @10:08PM (#27871669) Homepage Journal

    Well, of course he's going to say that - he's not just going to say "well, we're planning on axing 20,000 jobs and kissing bye-bye to the SPARC line". He has to at least maintain the *illusion* that they're going to keep producing SPARC chips.

    I love the line about how "even Apple" is designing its own chips. One could say "even Sun" sells Intel.

    Sure, buy a company and kill off their highest-revenue, highest-margin products, which coincidentally are chosen more than any other platform to deploy your own database product. That's real smart.

    Anyone who thought it would make sense to kill off SPARC doesn't have a clue, or is likely just spreading IBM FUD.

  • Good for routers? (Score:5, Insightful)

    by MBCook ( 132727 ) <foobarsoft@foobarsoft.com> on Thursday May 07, 2009 @10:13PM (#27871705) Homepage

    While Oracle is big, I kind of doubt that they could ever keep up with Intel. Even in turn-key appliance servers (sort of an iMac of databases: a pre-configured computer), Intel/AMD will outstrip them in performance, and they won't be able to stay up to date.

    The only place I can think of where this would be useful is routers. In a turn-key appliance like that, one that does a very specialized job (especially one that requires custom silicon to do the routing fast enough), SPARC could make sense. It would make it harder to steal their software (because you'd have to run it on SPARC). It would give them total control (no need to source processors from external companies). They could even build the SPARC cores into the same chips that hold all the high-speed routing magic.

    SPARC could be useful, but I doubt they'll try and compete in the general market.

    This is just off the top of my head. Is there something special about SPARC that would make it remarkably good at some specific application that Oracle uses?

  • Re:Designing chips (Score:5, Insightful)

    by mabinogi ( 74033 ) on Thursday May 07, 2009 @10:16PM (#27871737) Homepage

    that's not traditionally, that's lately.

    Would you really consider an Apple II to be a fashion accessory?

  • by mako1138 ( 837520 ) on Thursday May 07, 2009 @10:48PM (#27872069)

    People have been saying for years that we're about to reach the end of the line in terms of Moore's law. So far they've all been proven wrong, and scaling continues unabated.

    Dumping processors in a box is "easy", but multicore programming is not easy. The software tools are not there yet. Not to mention, you need deep pockets to roll your own multicore IC and build up the requisite software ecosystem. Just look at how much trouble Sony had with Cell. Everybody is watching to see if Intel will succeed with Larrabee.

    Now Oracle may have good reason to be interested in Sun's Niagara. Database appliances, perhaps.

    And where does Apple come into this, exactly? PA Semi's focus is on a totally different market segment.

  • Re:Designing chips (Score:3, Insightful)

    by pathological liar ( 659969 ) on Thursday May 07, 2009 @11:00PM (#27872185)

    No, but compared to PCs of the era I could probably get away with calling the SE/20 or SE/30 fashion accessories.

    They were certainly great little machines too, but style was key (and that's where you start hearing the anecdotes about Steve micromanaging the UI design of everything.)

  • by phantomfive ( 622387 ) on Thursday May 07, 2009 @11:33PM (#27872451) Journal
    The main thing keeping Crysis from running on the iPhone isn't the processor, it's the video card.
  • by Anonymous Coward on Friday May 08, 2009 @01:33AM (#27872597)

    I'm trying to figure out if that was an insightful speculation or a bunch of words thrown together randomly.

  • Re:Of course (Score:3, Insightful)

    by Eskarel ( 565631 ) on Friday May 08, 2009 @01:42AM (#27872667)

    Apple is starting to design their own chips, more specifically, it appears, for their iPhone and iPod ranges (no news so far on them trying it for PCs, and I don't expect any). They've hired some heavy hitters from AMD and made some noise in the press about it. It's fairly recent, and to the best of my knowledge they haven't released anything from it yet.

    Presumably they're after technology which will provide them with a competitive advantage in the performance and battery-life arenas.

    For the same reasons, the idea of selling a database appliance probably appeals to Oracle. Considering they just bought a company with heavy investment in hardware and operating systems, as well as web and virtualization technologies, this is a rather appealing idea.

    If they can make it work, it's potentially a very profitable one, and they've got a better chance than Apple, since they've just bought a company with all the bits they didn't have, as opposed to trying to start designing and fabbing chips (something Apple has never done), even if it's only for the low-power handheld market.

  • Re:Of course (Score:3, Insightful)

    by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Friday May 08, 2009 @02:04AM (#27872799) Homepage Journal

    For as long as I can remember, Apple has been designing and outsourcing their own chips. Be it in the form of custom ROMs or VLSIs, which Apple is a big user of.

    Nobody is impressed by a "custom ROM" (and nobody uses a non-programmable ROM any more, and few even use a non-electronically-erasable one) and VLSI just means "Very Large Scale Integration" ... the integration of thousands of transistors on a single chip. It's also a company that put together a lot of "custom" silicon for Apple. But in the chip industry nothing is ever a one-off, and SOP is to have a library of cores which are integrated into "custom" solutions for different customers; the custom part is which cores are in the package, and sometimes they just turn off some unused cores in a previous, working design if the customer isn't that picky about die area. Furthermore, that stuff more or less disappeared when Apple went Intel, but of course the iPhone is a whole different ball of wax.

  • by gaspyy ( 514539 ) on Friday May 08, 2009 @02:04AM (#27872807)

    They are especially good at marketing to business. They are also good at knowing what businesses want

    This is just a minor nitpick, but knowing what your customers want is part of the marketing. Marketing is not just advertising, though many seem to forget that.

  • by jcnnghm ( 538570 ) on Friday May 08, 2009 @02:08AM (#27872833)

    People have been saying for years that we're about to reach the end of the line in terms of Moore's law. So far they've all been proven wrong, and scaling continues unabated.

    Unless you know something I don't, you can't make a silicon wire smaller than the width of a single atom, so there is definitely a physical limit, and we aren't that far away from it. I've read that, practically, the limit is 4nm for silicon nanowires. That means that if we're at 45nm today (Intel's 32nm chips are slated for 2009), and we assume feature size shrinks 50% every 18 months, in less than 72 months we'll have reached the practical lower limit for silicon features. Even assuming you can make silicon chips with wires the width of a single atom, given that the atomic radius of silicon is 110 pm, that only gives 144 months.

    In addition to that, at 3.2GHz, light in a vacuum can only travel about 9.36 centimeters per cycle. Given a dielectric constant of 3.9 for the SiO2 used in chip manufacturing, you can calculate the velocity of propagation of electromagnetic waves through the chip as about 50.6% of c. Therefore, at 3.2GHz, the electromagnetic waves inside the chip can only propagate about 4.7 centimeters per cycle. You can also lose a bit depending on the switching speed of the transistors, but they actually become faster the smaller they are, so the real limiter is the propagation speed. (A quick back-of-the-envelope check of these numbers follows this comment.)

    You've probably noticed that we haven't had any really major jumps in the clock speeds of consumer processors since about 2002. Intel originally thought they'd be able to scale the Pentium 4 NetBurst architecture to about 10GHz, but they ran into a frequency ceiling at about 4GHz.

    In short, unless there is a major materials breakthrough, or materials change, I would expect Moore's law to hold for the next five years or so, but not much longer after that. We're rapidly approaching the physical limits.
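
    A quick back-of-the-envelope check of the figures above (the shrink timeline and the signal-propagation distances), written as a small Python script. It only re-derives the comment's own assumptions -- the 50%-per-18-months shrink rate and approximate physical constants -- and is not a process roadmap:

        # Sanity check of the scaling and propagation numbers quoted above.
        import math

        # Feature-size scaling: 50% shrink every 18 months, starting at 45 nm.
        start_nm = 45.0
        period_months = 18

        def months_to_reach(target_nm):
            """Months until a 50%-per-period shrink takes start_nm down to target_nm."""
            periods = math.log2(start_nm / target_nm)
            return periods * period_months

        print(f"~4 nm practical limit: {months_to_reach(4.0):.0f} months")   # ~63, i.e. "less than 72 months"
        # One silicon atom is ~0.22 nm across (atomic radius ~110 pm).
        print(f"~0.22 nm (one atom): {months_to_reach(0.22):.0f} months")    # ~138, near the ~144 quoted

        # Signal propagation per clock cycle.
        c = 2.998e8      # speed of light in vacuum, m/s
        f = 3.2e9        # clock frequency, Hz
        eps_r = 3.9      # relative permittivity of SiO2
        vacuum_per_cycle = c / f                                   # ~9.4 cm
        on_chip_per_cycle = vacuum_per_cycle / math.sqrt(eps_r)    # ~50.6% of that, ~4.7 cm
        print(f"{vacuum_per_cycle*100:.2f} cm/cycle in vacuum, "
              f"{on_chip_per_cycle*100:.2f} cm/cycle on chip")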

  • by LKM ( 227954 ) on Friday May 08, 2009 @03:47AM (#27873337)

    "But the appeal of the Mac and the Lisa was as much or more fashion and style as it was practical."

    That's an interesting statement, and it betrays more about you than about the topic we're discussing. I remember back when I went to school and the schoolwork our teachers handed out suddenly changed from photocopied hand-written stuff to neatly laid-out, professional-looking stuff. That was when the Mac came out and normal people were suddenly able to use computers in a meaningful way.

    You're a geek. You don't care about normal people, because you were perfectly happy with DOS or whatever you were using. To you, all that stuff that made computers usable for everyone else was just "fashion".

    You were as wrong then as you are now.

    To you, the iPod is a fashion statement because you were happy with the MP3 players that came before the iPod. To most people, those were unusable, bulky pieces of crap. You were happy with cell phones before the iPhone came out. Most people hated their cell phones and used them only for the most basic things.

    Perhaps creating things normal people can actually use seems like "fashion" to you, but most people don't use these devices for their own sake; they don't enjoy learning complex stuff just to learn complex stuff. They want to get stuff done, and all of those things that you like, all those ways you can tinker with your toys actually only get in their way.

    Apple's success is not about fashion and style, it is about normal people getting stuff done.

  • by Anonymous Coward on Friday May 08, 2009 @04:51AM (#27873727)

    There is potential, especially in data mining. The equivalent of "run a task 2 million times" is "search through 200 million rows in a table." A speed-up of two or three orders of magnitude is straightforward (SSDs and n-way parallel processing, n>32), but getting the next two is not quite so easy. Specialised hardware might help (4096-bit data paths, anyone?). A rough sketch of the n-way scan idea follows this comment.

    Traditionally, the big issue in databases is disk management. It seems to me Sun has quite a good track record in this area, and in system administration generally. Sun's recent experience with ZFS might be of use to Oracle too - not so much ZFS the product, but the experience and insight gained by the engineers working on it.

    In all, I think there are quite a few possibilities for Oracle. IBM may be kicking themselves in a few years for missing the opportunity of keeping Sun out of Oracle's hands.
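
    For illustration, a minimal Python sketch of the "n-way parallel processing" idea applied to scanning a large table. The in-memory row list, the toy predicate, and the chunking scheme are all hypothetical stand-ins; a real database engine would scan disk or SSD pages in parallel, not Python objects:

        # Minimal n-way parallel scan over a big in-memory "table".
        from multiprocessing import Pool
        import os

        def count_matches(rows):
            """Worker: count rows in this chunk satisfying a toy predicate."""
            return sum(1 for r in rows if r % 97 == 0)

        def parallel_scan(rows, n_workers=None):
            n_workers = n_workers or os.cpu_count() or 4
            size = (len(rows) + n_workers - 1) // n_workers
            chunks = [rows[i:i + size] for i in range(0, len(rows), size)]
            with Pool(n_workers) as pool:
                return sum(pool.map(count_matches, chunks))  # merge partial counts

        if __name__ == "__main__":
            table = list(range(2_000_000))   # small stand-in for "200 million rows"
            print(parallel_scan(table))      # same result as a serial scan, split n ways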

  • by Jah-Wren Ryel ( 80510 ) on Friday May 08, 2009 @06:42AM (#27874355)

    People have been saying for years that we're about to reach the end of the line in terms of Moore's law. So far they've all been proven wrong, and scaling continues unabated.

    That means that if we're at 45nm today (Intel's 32nm chips are slated for 2009), and we're assuming size shrinks 50% every 18 months, in less than 72 months we'll have reached the practical lower limit for silicon features.

    I don't know if you realize it, but you are really just confirming the OP's point -- you are just another person predicting the end of Moore's law based on the technical obstacle du jour.

    Moore's law is solely about the number of transistors on a single IC for a constant cost. Feature size may appear to be a limiting factor, but that doesn't mean it will be one when we get to that point. It's just like how leakage at feature sizes below roughly 100nm was once thought to be an insurmountable obstacle to Moore's law, and then some smart people figured out how to handle it, or how lithography processes were also considered a limiting factor below roughly 60nm -- until they weren't any more.

    So maybe 4nm really is a hard limit, but somebody will come up with something to get around that obstacle - like, say, 3D ICs [74.125.47.132] - add a couple of layers and you've easily doubled the number of transistors on the same-size chip.

    In short, unless there is a major materials breakthrough, or materials change, I would expect Moore's law to hold for the next five years or so, but not much longer after that.

    The smart money is on the breakthrough; we've had plenty of them before, and there is no reason to believe they are going to stop coming.

  • by Ilgaz ( 86384 ) on Friday May 08, 2009 @09:12AM (#27875439) Homepage

    The biggest fight between Apple and IBM came from IBM not keeping its promise of shipping a 3 GHz G5 for Apple, right? They also didn't make something suitable for portables, which are the future. Months later, they shipped an architecture which can scale to 6 GHz (by some exclusive tech) and shipped a real working 4.7 GHz enterprise CPU (POWER6) which they keep selling. So IBM isn't just capable of 3 GHz; they have technology in hand that makes competitors' MHz numbers look funny. It is almost like ultimate justice for years of the MHz myth from x86 vendors.

    Apple didn't design the entire G5. It is actually a scaled-down POWER4 plus Apple design choices plus AltiVec (which practically shouts "I come from Apple").

    IBM wants to stay away from the "end user"; they want to sell CPUs to companies who make consoles, very high-end TVs, Blu-ray players, etc. The Xbox 360, Sony PS3 and Nintendo Wii all use IBM CPUs designed with the respective partners. The Xbox 360 was practically designed around MS engineers' needs; that is why it serves them so well. I was visiting a friend at IBM one day: one line had a 10,000-client network with speed issues, and the other line was a teenager complaining about his FPS performance... That was in the 1990s, so when IBM sold their PC division to the Chinese, I wasn't surprised a bit. Enterprise and end users really don't go together.

    Apple also wanted this situation: consumers should be able to run x86 software and even run Windows as the exclusive OS (if needed). Don't let the comments/rants fool you; there are some amazing download numbers for virtualisation software and Boot Camp updates on sites like VersionTracker, MacUpdate, etc. Only x86 can do that; you won't be emulating a same-generation CPU with something completely different, down to the endianness. I actually run MS Virtual PC 7 (with their exclusive info and undocumented access) on a quad G5 at 2500 MHz. Trust me, x86 isn't easy to emulate even if you are Microsoft itself. For years, before the iPod, people asked "What happens if Apple dies?" If you ship them something that can run Windows even better than generic PCs, you won't have that question asked at all.

    Basically, both companies wanted to end the partnership. Steve Jobs gets his "No 3 GHz for me, damn you IBM" moment, IBM gets to exit the end-user chaos, both are happy, and interestingly, consumers are also happy. The people actually hoping for CPU architecture competition aren't happy, though; Intel has also lost a good reason to push SSE and similar achievements. Who will ship something like AltiVec now? AMD?

  • by tb3 ( 313150 ) on Friday May 08, 2009 @09:57AM (#27875943) Homepage

    No, I think you're missing the main reason Apple dumped IBM. Apple saw the market moving towards laptops, and IBM couldn't bring the operating temperature of the G5 down. Apple never built a G5 laptop, and it was killing them. Meanwhile, Intel was building fast, low power CPUs and chipsets, and in the quantities Apple wanted. Apple could build more powerful portables, and smaller, lighter, more compact desktops like the iMac and Mac Mini, as a side effect.

    The virtualization was just a nice bonus. It's actually easier to emulate an x86 on a RISC chip than the other way around. The Rosetta guys did some amazing things to get PowerPC code running on Intel, and even then it was just a stopgap measure.

  • by MagikSlinger ( 259969 ) on Friday May 08, 2009 @10:47AM (#27876541) Homepage Journal

    With all the talk of container and "lego" [datacenterknowledge.com] data centers, Oracle wants to become fully vertically integrated so that you can go to Oracle and say: "I've got $10 million -- sell me data center blocks".

    Sun's already been developing their own data-center-in-a-shipping-container [sun.com], and Oracle now has all the bits and pieces:

    • Hardware that runs Oracle really well -- Sun SPARC
    • The operating system for big data centers -- Solaris
    • The Java application server -- BEA's WebLogic
    • The Database -- well duh!

    Also, having a horde of hardware engineers is Ellison's wet dream. As I said before, Larry Ellison wakes up every morning and asks himself, "How can I [fsck] Microsoft today?" Larry has stated in the past he wouldn't mind moving beyond databases, and with Sun's hardware and Java, he's poised to do pretty much anything he wants. So he might entertain delusions of mobile, return of the net appliances, home multimedia, etc. In the short term, though, I think he's hoping he can create custom hardware to make Oracle and Java run much faster. Will he succeed? Dunno, but Larry Ellison has a ferocious desire to succeed, and often, that's all you need.

  • by mikael ( 484 ) on Friday May 08, 2009 @11:20AM (#27876881)

    They also built things no one wanted. In fact, they had a really hard time figuring out what people wanted, this was their weakness.

    That was supposed to be the job of their ambassadors and maybe the sales/marketing people: to get feedback from potential customers about what they wanted to see in future products. Problem is, they mostly wanted a solid, reliable OS (one where they wouldn't have to wait for the first service pack before upgrading an entire department) along with a competitive price/performance ratio.

    For SPARC processors like Niagara II [wikipedia.org], the server group would want more cache and hardware support for encryption, but the workstation group would want more floating-point processors. In the end they both get what they want with multi-core chips.

  • Re:Of course (Score:3, Insightful)

    by fm6 ( 162816 ) on Friday May 08, 2009 @11:28AM (#27876943) Homepage Journal

    I work for the part of Sun that makes the blades you're talking about. Two important details: they also run Linux, Solaris, and the ESX hypervisor. And although I'm certainly glad you think they're great (as I do), remember that x64 systems (blades, rack mount systems, and one lonely workstation) are still a relatively small part of our business.

    The future of that business is my biggest concern. I'm encouraged that Ellison has seen fit to debunk the assumption that Oracle wasn't interested in Sun's hardware operations, but frustrated that he hasn't said anything about the x64 systems. He did talk about his partnership with HP. One hopes that preserving that relationship doesn't come at the cost of shutting down Sun's x64 products. If it does, I'm out of a job.

  • Re:Of course (Score:2, Insightful)

    by bsdaemonaut ( 1482047 ) on Friday May 08, 2009 @11:30AM (#27876989)

    Yes, but unlike every other major chip manufacturer, the SPARC/UltraSPARC architectures have stayed largely the same. Relatively recently the chips have gone multi-core and multi-threaded, but they are similar enough to have maintained backwards compatibility for the past 15 years. Intel had just entered the 32-bit arena 15 years ago. Want to talk about wasting money? My god, look at the Itanium; that's Intel's main offering for high-end servers. The Itanium has been, and still is, a complete failure. SUN has stretched its R&D dollar far more than Intel could ever dream of. I'd be willing to bet good money that if you compared R&D spending for the past two decades, SUN's expenditures would be a comparative pittance. SUN made some big mistakes, not the least of which was failing to fully embrace the low-end market -- but when it comes to mid-to-high-end servers they were (and still are) brilliant. Unfortunately, brilliance does not necessarily make one successful.

  • by TheRaven64 ( 641858 ) on Friday May 08, 2009 @12:00PM (#27877317) Journal

    Months later, they shipped an architecture which can scale to 6 GHz (by some exclusive tech) and shipped a real working 4.7 GHz enterprise CPU

    IBM could make a chip that ran at up to 4.7GHz, but did you see its cost and power consumption? IBM didn't have anything that could go in a laptop, and the PowerBook, the best-selling Mac, was stuck shipping a 1.67GHz G4 while the competition was shipping 2GHz+ chips with two cores that used less power for the same or better performance. The G5, even at 2.7GHz, needed massively engineered cooling.

    I actually run MS Virtual PC 7 (with their exclusive info and undocumented access) on a quad G5 at 2500 MHz. Trust me, x86 isn't easy to emulate even if you are Microsoft itself

    What undocumented access? VirtualPC 7 is an incremental improvement on VirtualPC 6, which Microsoft bought from Connectix. It's a fairly good x86 emulator, but it's based on old technology. Microsoft have no incentive to improve a product that makes it easy for you to migrate to a competitor's product.

  • by Hatta ( 162192 ) on Friday May 08, 2009 @12:04PM (#27877379) Journal

    I remember back when I went to school and the schoolwork our teachers handed out suddenly changed from photocopied hand-written stuff to neatly laid-out, professional-looking stuff. That was when the Mac came out and normal people were suddenly able to use computers in a meaningful way.

    My teachers didn't seem to have any problem making handouts in WordPerfect on DOS. The tools are there on either platform. The only reason Macs were more prevalent in schools is that Apple marketed directly to them.

  • by raddan ( 519638 ) on Friday May 08, 2009 @12:50PM (#27878107)
    Yeah, but all of those fancy features found in Atom, like floating point, branch prediction, long pipelines, large caches, and so on... they aren't RISC. They are the antithesis of RISC. I think that's what's making low-power devices difficult for Intel and AMD to achieve.

    "saving grace" is a little bit of a strange for an architecture that essentially dominates the entire embedded space, but I'll bite. The "saving grace" for ARM is that you can make them cheaply. I don't think anyone has ever looked at the Intel architecture and went "wow, that's beautiful". You think, "wow, I can't believe they crammed so much shit into that package!" Intel arch chips are orders of magnitude more expansive.

    ARM is elegant, consistent, very programmer-friendly, and amazingly powerful. A lot of the things that should be handled correctly, like interrupts, are. There are plenty of registers to play with. I think we're only seeing the beginning of what ARM is capable of. Don't need an MMU? No problem. I mean, heck, if you want an ARM that has a Harvard arch for real-time use, you can get one, and you don't need to learn a new architecture.
  • by Dog-Cow ( 21281 ) on Friday May 08, 2009 @12:57PM (#27878223)

    It's actually easier to emulate an x86 on a RISC chip than the other way around.

    The distinction between RISC and CISC is largely meaningless when talking about an x86(-64) CPU. If you can decode the instruction set, you can emulate it. The hard part is emulating the attendant chipsets and their interactions with the emulated and real system.

  • by davecb ( 6526 ) * <davecb@spamcop.net> on Friday May 08, 2009 @02:44PM (#27879947) Homepage Journal

    Oracle wanted the hardware, so they could become the kind of top-to-bottom solution that IBM used to be in the Mainframe days. IBM failed to prevent it, so now they're loudly saying "sour grapes! sour grapes!"

    I suspect the commentators who missed why IBM and Oracle wanted Sun were the same ones who said IBM and Sun were doomed technologies, and that the future was NT 4 on Intel x86-32.

    And to answer the question literally, you put your marketers on marketing the company while you put your lawyers on working on the merger. I assume they're different people (;-))

    --dave

  • by petermgreen ( 876956 ) <plugwash.p10link@net> on Friday May 08, 2009 @04:17PM (#27881367) Homepage

    Afaict the hard part in emulation is doing it fast.

    Simple interpretive emulation is pretty easy and if you only want to run apps from another CPU (rather than a whole OS) you don't need to emulate much in the way of hardware since you only have to emulate the userland environment.

    If you want good performance from your emulation you have to use "dynamic recompilation": converting the machine code from one CPU to another in blocks and then executing the translated blocks natively.

    x86 (remember, the original Intel Macs were NOT x64; that came later) is widely known as a register-starved architecture. PPC, OTOH, has plenty of registers.

    I would imagine translation of code from a register-poor architecture to run on a register-rich architecture would be much simpler than translation of code from a register-rich architecture to run on a register-poor architecture. (A toy sketch of the simple interpretive approach follows this comment.)
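
    To make the "simple interpretive emulation is pretty easy" point from a couple of paragraphs up concrete, here is a toy fetch-decode-execute loop for an invented two-register guest machine, in Python. The opcodes are made up for illustration; a real emulator (never mind a dynamic recompiler) must also model flags, memory, traps and so on:

        # Toy interpreter for an invented 2-register guest machine.
        # A register-poor guest maps trivially onto a host representation with
        # registers to spare (here, a plain dict); the reverse is the hard direction.
        PROGRAM = [
            ("LOADI_A", 5),     # A = 5
            ("LOADI_B", 7),     # B = 7
            ("ADD_A_B", None),  # A = A + B
            ("PRINT_A", None),
            ("HALT", None),
        ]

        def run(program):
            regs = {"A": 0, "B": 0}
            pc = 0                           # guest program counter
            while True:
                op, arg = program[pc]        # fetch
                pc += 1
                if op == "LOADI_A":          # decode + execute
                    regs["A"] = arg
                elif op == "LOADI_B":
                    regs["B"] = arg
                elif op == "ADD_A_B":
                    regs["A"] += regs["B"]
                elif op == "PRINT_A":
                    print(regs["A"])
                elif op == "HALT":
                    return regs
                else:
                    raise ValueError(f"unknown opcode {op}")

        run(PROGRAM)   # prints 12

    Dynamic recompilation replaces that inner loop with host machine code generated per block of guest instructions, which is where most of the real complexity (and the speed) comes from.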
