Intel Confronts a Big Mobile Challenge: Native Compatibility 230

Posted by Soulskill
from the write-once-run-nowhere dept.
smaxp writes: "Intel has solved the problem of ARM-native incompatibility. But will developers bite? App developers now frequently bypass Android's Dalvik VM for parts of their apps in favor of faster, natively compiled C. According to Intel, two-thirds of the top 2,000 apps in the Google Play Store use natively compiled C code, the same language in which Android, the Dalvik VM, and the Android libraries are mostly written.

The natively compiled apps run faster and more efficiently, but at the cost of compatibility. The compiled code is targeted at a particular processor core's instruction set. In the Android universe, that instruction set is almost always ARM's. This is a compatibility problem for Intel, because its Atom mobile processors use the x86 instruction set."
  • Fsck x86 (Score:4, Insightful)

    by Anonymous Coward on Friday June 06, 2014 @09:24AM (#47179233)

    I like compatibility, but I've had it with x86. Let ARM hog the limelight for a while; no reason it shouldn't have its fifteen minutes. And let x86 die: it's way past its BBE date and has outstayed its welcome by several generations.

    • Re:Fsck x86 (Score:5, Insightful)

      by Anonymous Coward on Friday June 06, 2014 @09:53AM (#47179507)

      This person is likely in their 20s, I'm assuming early 20s. With that said, I am in my 30s, somewhat early. My first PC was an 8088 and I've deep-dived into every modern processor since then. Even with the debacles that were Windows 7 and 8, I am still going to stand behind x86 as a great architecture that can withstand the sands of time.

      Scalability: What other architecture has scaled so far that it completely decimated two competing architectures, from the past and the future, at the same time? The original 8088/86 ran at 3 MHz; the latest x86 offering runs at 4 GHz.

      Popularity: Both Apple and Sun saw the writing on the wall, Sun saw it too late, Apple saw it early (or saw what happened to Sun). They both shifted from a proprietary processor and chipset to a more common and popular platform. Both platforms had specific benefits over x86 until x86 scaled far and beyond what they both offered.

      Backwards Compatibility: I know my x86 processor is still going to start in 8-bit mode and I know that I can put it in 16-bit mode and run my 1992 applications. But to that extent, x86-64 just extends the instruction set; e.g. ARM32 code does not run on ARM64.

      Let's face it. I witnessed Y2K. I witnessed every weak architecture under the sun get wiped out because it had shortcomings. Intel designed the best architecture with x86, and naysayers generally harp on it because it's "too big". I, for one, plan to teach my children x86 ASM so they understand the basics.. then let them find MIPS or ARM or whatever-fad-arch-is-current so they too can appreciate the design of x86.

      • Re:Fsck x86 (Score:5, Funny)

        by lagomorpha2 (1376475) on Friday June 06, 2014 @09:57AM (#47179557)

        I, for one, plan to teach my children x86 ASM so they understand the basics.. then let them find MIPS or ARM or whatever-fad-arch-is-current so they too can appreciate the design of x86.

        This is just a guess but you don't actually have children yet, do you?

      • Re:Fsck x86 (Score:4, Interesting)

        by gbjbaanb (229885) on Friday June 06, 2014 @10:20AM (#47179767)

        x86 is good in the same way that a modern police baton is good: it's still a stick you hit people with, and serves its purpose. But there are better weapons available.

        So saying that x86 is great because technology has had to improve to make up for its deficiencies is just stupid. x86 isn't some wonderful architecture: putting 4 cores on a single die isn't anything x86 made happen that others couldn't do, and the fabrication techniques that shrunk the die size have nothing to do with x86 either.

        Consider that the Motorola 68000, way back in the day, was better than the old 286s it competed with. Imagine that the 68000 had taken off instead of the 286: if MS and IBM had built DOS for the 68000 instead of x86, today we'd be in pretty much the same position but with a different chipset. Except it would be faster, cheaper, and more efficient.

        BTW x86 32-bit doesn't run on x86_64 either. The software and chips have emulation routines that allow it to happen. The same happens with A64, which allows old A32 and T32 instructions to still run on the same chip.

        • x86 is good in the same way that a modern police baton is good: it's still a stick you hit people with, and serves its purpose. But there are better weapons available.

          So Intel is essentially crying "Don't ARM me, bro!"...? ;-)

        • BTW x86 32-bit doesn't run on x86_64 either. The software and chips have emulation routines that allow it to happen. The same happens with A64, which allows old A32 and T32 instructions to still run on the same chip.

          Disregarding built-in microcode that converts CISC instructions into simpler RISC-like operations, this statement is not accurate. All x86-64 processors have the same native 32-bit registers and instructions that the original 386 had (some may be deprecated, but IIRC there is 100% compatibility). No hardware emulation is being done.

          You may be confusing the virtual memory translation scheme (Wow64) that Windows uses to run 32-bit processes in Windows x64. Yes, there is some slight overhead, but it isn't c

        • by Type44Q (1233630)

          BTW x86 32-bit doesn't run on x86_64 either. The software and chips have emulation routines that allow it to happen.

          Unless I'm mistaken, that's completely incorrect.

          • by drinkypoo (153816)

            BTW x86 32-bit doesn't run on x86_64 either. The software and chips have emulation routines that allow it to happen.

            Unless I'm mistaken, that's completely incorrect.

            The CPU's functional units don't execute x86 instructions or amd64 instructions. They execute micro-operations into which the x86(_64) decoder decomposes the instructions issued to the CPU.

      • by dave420 (699308)
        Windows 7 was not a debacle. ME & Vista, fair enough, but not 7.
      • Re:Fsck x86 (Score:5, Interesting)

        by OneAhead (1495535) on Friday June 06, 2014 @10:58AM (#47180161)

        First of all, at this point, it is misguided to talk about x86 as an architecture; there is generally little or no architectural overlap between two x86 processors that are a few generations apart. x86 is an instruction set, or, more correctly, a family of instruction sets. The distinction is important, especially in the age of complex instruction decoders; a lot of the more complex x86 instructions are internally decoded into smaller pieces, and one could say that the CPU internally runs its own, different instruction set. Supporting a certain instruction set nowadays says almost nothing about the underlying architecture.

        So we're talking about an instruction set. One that was conceived in an age when manual coding in machine language was far more common than it is today; the original x86 instruction set was designed to be easy for a human bitpusher to handle, whereas newer instruction sets like ARM are more geared toward getting the most out of a decent optimizing compiler. What followed for x86 was extension upon extension, and the instruction set is now so byzantine that x86 is a very difficult market to break into; the design complexity of the decoder can probably be overcome, but all the cross-licensing between Intel and AMD cannot. The complex, organically-grown instruction set also leads to some waste of silicon in having to support all those instructions, and waste of performance/energy efficiency in that the instruction set is not designed from the ground up for efficiency. People on the x86 side make a compelling argument that this has become negligible, but the fact remains that I'm still not seeing any x86 processor getting (unbiased) performance/W scores that are close to common ARM processors.

        My first computer contained a Z80, a true 8-bit processor (your claim that x86 has 8-bit mode is false; the lowest common denominator for x86 is the 16-bit 8086, which you're probably confusing with the 8-bit 8080, which is not x86-compatible). More relevantly, I also have experience running scientific workloads on a whole zoo of processors. I have particularly fond memories of the later Alphas, which wiped the floor with everything up until and including the Pentium 4, and were very competitive even against the Athlon 64 and Core 2 performance-wise (but not price-wise). Repeat after me: x86 has zero inherent architectural advantage!!! The big advantages it has are (1) economies of scale and (2) the higher profits of a mass market that generate more revenue to be pumped into R&D. There hasn't been a kid on the block that could compete on these fronts - not until ARM came around.

        While Intel is sitting on an impressive pile of cash and R&D potential, their attempts to match ARM in performance/W have so far been unsuccessful when looking at non-biased benchmark results, and ARM has profited from this in establishing a mass market of its own. Things are about to get interesting from here onwards. I can't predict whether x86 or ARM will win. The outcome might be coexistence, with x86 keeping its dominance in the server room, workstation, office and hard-core gaming, and everything else becoming ARM. Whatever the outcome, I am firmly rooting for ARM. Technically, it's a leaner, more rationally designed instruction set that is more frugal with resources (die size, cache and memory usage) and better reflects present-day use cases. And the fact that it's relatively straightforward for a newcomer to license it and start making chips will bring some healthy competition onto the stage.

        • by the_B0fh (208483)

          Wow. A factual followup. Wish I had mod points.

          • by tippe (1136385)

            I *do* have mod points, but can't mod him any higher, as he is already at 5. The most I could do is mod him down so that somebody else could come along and show how awesome his post is by modding him back up, but that doesn't seem like a very efficient use of mod points...

        • Re:Fsck x86 (Score:4, Interesting)

          by drinkypoo (153816) <martin.espinoza@gmail.com> on Friday June 06, 2014 @12:09PM (#47180915) Homepage Journal

          First of all, at this point, it is misguided to talk about x86 as an architecture;

          Thank you. I got a raft of shit just recently when I argued that there's no such thing as an x86 instruction set architecture any more, and hasn't been since the Am586 took x86 into RISC-land.

          I'm still not seeing any x86 processor getting (unbiased) performance/W scores that are close to common ARM processors.

          On the other hand, I'm seeing a pretty massive gap between the high-end in x86 (really, amd64) processors and in ARM processors. This doesn't mean ARM can't be scaled up, but I think it's worth remembering what x86 went through in being scaled up. It didn't happen overnight. It may well be that x86-compatible processors embrace low power consumption before the ARM-compatible processors scale up to the high end.

        • Re:Fsck x86 (Score:5, Informative)

          by rahvin112 (446269) on Friday June 06, 2014 @12:35PM (#47181117)

          A typical processor design takes around 4-5 years from concept to production silicon. Intel did not even consider power as a constraint (other than a maximum) until 2008. Haswell was the first ground-up design where power was a constraint, but still not a major one. With Haswell, Intel was within shooting distance of ARM power levels without even compromising computing power.

          In about 2010, power consumption became not just a feature but a required feature, in the watt-to-milliwatt range. Intel should have a processor to meet that requirement later this year or early/mid next year. Intel has already preliminarily released some (un-handicapped) Atoms that have about 75% of the performance of Haswell and are power-competitive with ARM.

          Up until a year or two ago, when the PC market began to crater, Intel wasn't interested in playing in the low-power market because margins were atrocious; but with the rise of high-margin smartphones, and the reality that they will likely replace a significant chunk of the personal PC market, they've begun to take the market seriously. Writing them off as unable to play this game because they haven't bothered in the past would be incredibly stupid. They are the largest CPU designer in the world and have some of the smartest CPU designers in the world working for them; it just takes a while to turn such a big boat. Give it a few more years, then come back and talk about x86 being unable to compete.

            I don't know if Intel will succeed, but if they put their resources into it they will easily outpace ARM, because in the CPU design game it's about design resources and fabs, and Intel has both in spades (in fabs, Intel is one entire process step ahead of everyone else), more than the rest of the ARM market combined, and they won't be designing the same thing 50 times. See, that's the ARM market's biggest handicap: there are dozens of companies reinventing the wheel over and over again. Intel's biggest handicap is their desire not to eat existing markets, and it might be their undoing (a processor with 75% of Haswell's performance at ARM's power use could likely cannibalize much of their Haswell sales, and the tricks to prevent that, i.e. sales restrictions, will also handicap the processor's chances of competing with ARM). IMO, if Intel fails at competing with ARM it will only be because they didn't want to cannibalize sales with lower-margin parts.

      • Re:Fsck x86 (Score:4, Insightful)

        by ezelkow1 (693205) on Friday June 06, 2014 @11:07AM (#47180279)

        then let them find MIPS or ARM or whatever-fad-arch-is-current so they too can appreciate the design of x86.

        MIPS and ARM as fads? You do realize MIPS has been around almost as long as x86 has, and is still widely used. People all too often forget that the majority of devices out there are not full-fledged computers; they are embedded devices, and MIPS and ARM own that space. This is exactly why MIPS is still widely taught in colleges: it is readily accessible, open, and still used in the industry. It also gives a good foundation to build on when looking at other ISAs.

      • by the_B0fh (208483)

        What kind of nonsense are you spouting? Every processor out there, with the exception of that one open-sourced one, *IS* proprietary! *YOU* go try to make an x86 and see if Intel sues the pants off you.

        You also don't understand what scalability and all the other big words mean. You don't even know that your Intel processor is now effectively a RISC core with a CISC layer of microcode on top.

        And who the hell cares if you can run 8bit or 16 mode shit? Those were all design inefficiencies and now you are f

      • by JohnFen (1641097)

        Since we're all playing the age card, I'm 50 and have been actively developing software since I was 12 (using punch cards and the ultra-fast and modern paper tape!)

        The x86 is a fine architecture, despite its numerous warts. However, so is ARM. Each has distinct advantages and disadvantages -- and being able to operate in power-starved situations (such as with smartphones) is one of the main strengths of ARM and one of the main weaknesses of x86. If my experience has taught me anything, it's that there's no

      • by Pepix (84058)

        Popularity: Both Apple and Sun saw the writing on the wall, Sun saw it too late, Apple saw it early (or saw what happened to Sun). They both shifted from a proprietary processor and chipset to a more common and popular platform. Both platforms had specific benefits over x86 until x86 scaled far and beyond what they both offered.

        Did you mean: 'Apple suffered from a PowerPC processor shortage, while Sun added x86-64 to their lines of workstations and servers'?

        Get your facts right: Sun didn't shift to being an x86 shop, and neither has Oracle. In fact, the SPARC architecture is so alive that it is used in some of the Top 500 supercomputers [wikipedia.org]... as is IBM's PowerPC.

      • Re:Fsck x86 (Score:4, Interesting)

        by UnknowingFool (672806) on Friday June 06, 2014 @04:21PM (#47183153)

        They both shifted from a proprietary processor and chipset to a more common and popular platform. Both platforms had specific benefits over x86 until x86 scaled far and beyond what they both offered.

        In my opinion, Apple shifted to x86 mainly for logistical reasons, not architectural ones. The problem with staying with PPC was twofold: 1) Intel offered better laptop chips, and 2) Apple was going to have problems with PPC supplies.

        The first one is fairly self-evident. Intel laptop chips were being updated all the time, and they were better at power efficiency than the G4-based chips Apple was getting from Motorola. IBM never manufactured a mobile G5 chip, probably because of heat dissipation problems. If Apple had stuck with IBM and PPC, they would have fallen further behind Intel-based mobile chips.

        The second one is more complicated. Motorola and then IBM supplied Apple with chips, but even at millions of chips a year, Apple would be a small customer to either company. Compounding this is that Apple's chip was heavily customized, so it was costly for any chip maker to maintain and develop. Apple needed those customizations because other PPC chips by Motorola and IBM were not designed for consumer computers. For example, IBM's server chips don't need any multimedia processing capabilities, as they were designed for servers, not desktops.

        As such, it was a burden for Motorola and IBM to make changes to any supply schedules. If Apple got a sudden upswing in orders, IBM wouldn't be able to keep up. IBM would also have to dedicate more and more resources to developing newer generations with Apple's changes, and IBM was reluctant to spend possibly billions in research for one small customer.

        Intel on the other hand was better logistically. Any development Intel did for Apple could be used for other customers. In fact, the ultrabook specifications came out of Intel's work with Apple on the MacBook Air.

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      You have no concept. Many companies, including Intel, have tried to move away from x86 - the market won't let them. There is too much software out there written to the x86 architecture to move away from it. You are completely underestimating the market forces behind x86.

      Staying with x86 is *not* Intel's choice, or even their desire (they have tried to shift the market off x86). This is where real-world forces and issues trump ivory-tower technical perspectives.

    • Blind hate for an instruction set. Brilliant. /s

      • No-one who hasn't learned assembler for at least a couple of processor architectures would understand it.

        I too have a deep dislike of x86 that dates back to approaching it as my third processor architecture, after the 6502 and 68000, and hating x86 for its inelegance. I've learned a couple more since then, and nothing has shifted that initial distaste for x86. Even though it doesn't matter anymore, as few people, and certainly not me, program CPUs in assembler any more.

    • Can you name several reasons why the x86 ISA has a negative impact on your computing experience?
    • by BitZtream (692029)

      Why?

      LONG mode x86-64 works out most of the major 'problems' with x86 from a programmer's perspective. Thanks, AMD! ARM has its own set of silliness that devs at the assembly level have to deal with (for reference, I fluently speak x86, ARM, and ATmega ASM).

      Furthermore, x86-64 is a language that rides on top of the core; the core doesn't actually speak x86 in pretty much any x86 processor. It has a translation unit in front of it that breaks the CISC instructions up into more RISC-like ones for the core.

      I wou

  • "newsy" bits (Score:5, Informative)

    by bill_mcgonigle (4333) * on Friday June 06, 2014 @09:30AM (#47179287) Homepage Journal

    Somehow missing from TFS...

    Intel has released a beta of its native development environment, the Intel Integrated Native Developer Experience (INDE), and written plugins for Eclipse, the IDE most Android developers use, so that apps can be x86-compatible and execute efficiently on Atom processor-based hardware.

    • I made an app for Android - ported an emulator written in C++. (Link in sig, if you're interested, but this isn't a slashvertisement.)

      So the core of the app, the 'engine', is in C++ and must be natively compiled, while the UI and such is in Java. Naturally, the binary's compiled for ARM first. This actually runs on a lot of Intel Android tablets because they have ARM emulators. But, thanks to a user finally asking, I put in some time and now I can make an Intel version. (Heck, the original source was written for Intel anyway, so it wasn't a big stretch.) The existing tools are sufficient for my purposes. And it runs dramatically faster now on Intel.

      However, for the developer it's mildly painful. The main issue is that you have a choice to make, with drawbacks no matter which way you go. You can include native libraries for multiple platforms, but that makes the APK larger - and according to a Google dev video I saw, users tend to uninstall larger apps first. In my case, it'd nearly double the size. So instead I'm putting together multiple APKs, so that ARM users get the ARM version and Intel users get the Intel version - but only Google Play supports that, not third-party app stores. I haven't looked into other app stores, and now it's less likely I will.
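      For what it's worth, in the NDK build system the fat-vs-split choice above comes down to one variable. A sketch of an Application.mk (the module and library names are whatever your own project uses):

```make
# Application.mk
# Listing several ABIs builds one copy of the native library per
# architecture. Packaging them all yields a single "fat" APK;
# building and packaging one ABI at a time yields the
# per-architecture APKs that Google Play's multiple-APK support
# expects.
APP_ABI := armeabi-v7a x86
APP_OPTIM := release
```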

      Note that native development can be important to apps for a non-technical reason: preventing piracy. An app written purely in Java is relatively easy to decompile and analyze, and pirates have a lot of techniques for removing or disabling licensing code. Adding a native component makes the app much harder to reverse-engineer, at least delaying the day that your app appears on pirate sites.

      • by gbjbaanb (229885)

        what would be even better is if you could submit your source code to the Google store and it would compile it for you on a server farm and produce APKs optimised for each chipset they support.

        i remember the days when sourceforge had such a thing, you supplied your code and it got built for all manner of Linux and (IIRC) windows architectures/platforms.

  • Well-written C can be cross-platform compatible. It's all in how you write things (or the libraries you use).

    • by Atzanteol (99067) on Friday June 06, 2014 @09:36AM (#47179357) Homepage

      A compiled binary doesn't care how well-written your C is if you are running it on the wrong platform.

      • Most people get Android apps from an App Store though, and it can easily select the correct version. This happens already for MIPS-based devices, so there's no reason why it wouldn't work for x86, if it were worth the effort for developers to provide them.
      • Clearly we just need a small set of POSIX apps to do 'git, 'make' and 'gcc' on your phone.
        Download the signed source code from the app store.

        • For Android, I'm shocked this isn't part of the install process: either done server-side and cached (compile once, then cache the stored binary), or done on the phone. If compilation fails for the target, the app dev is notified and the app is made unavailable for those on that platform.

        • Clearly we just need a small set of POSIX apps to do 'git, 'make' and 'gcc' on your phone.
          Download the signed source code from the app store.

          like a gentoodroid

    • by AuMatar (183847)

      In fact one of the main reasons to write your logic in mobile in C is that it will run on any platform- then you only have to rewrite your UI layer. But this isn't what they're talking about- they're talking about multiple processors. However Android allows for fat binary apks with multiple versions of libraries, so it isn't that big a deal.

  • For a while they had their XScale line of ARM processors and SoCs. I think one of the dumber moves they've made was to sell that line of business off to Marvell in 2006 and go "all-in" on x86 before they were ready.
    • by alen (225700)

      I've read that they kept their ARM architecture license. It's one of the few good licenses, where they can make their own CPU and instruction set as long as it's compatible with ARM, like Apple and Qualcomm. Not like the regular licenses, where you simply manufacture whatever ARM designs, with minor changes.

      • With ARMv8, a lot of companies have this kind of license. I think there are six independent ARMv8 implementations that I'm aware of, but there may be more. Well, I say independent - they all had engineers from ARM consult on the design, but they're quite different in pipeline structure. This is the problem Intel is going to face over the next few years. They could compete with AMD by outspending them on R&D: Intel could afford to design 10 processors and only bring 3 to market depending on what cust
        • by Kjella (173770)

          I think you can spin that any way you like: on the one side they're a many-headed beast for Intel to deal with, but on the other they're also in pretty intense competition with each other, so they're not willing to share their secret sauce either. You pool resources, but you are also at the mercy of a third party which may have interests that aren't exactly aligned with yours. The saying is "divide and conquer", not "spread yourself thin and attack from all sides".

          I also wouldn't underestimate the amou

        • by rahvin112 (446269)

          Reinventing the wheel 8 times is not effective use of R&D. The minor differences between these ARM vendors doesn't justify the expensive redesign of the same thing 8 times. If Intel goes all in and chases this market that divided R&D will be a significant handicap in competition against a well orchestrated and heavily financed competitor like Intel, not a benefit.

    • XScale and StrongARM.
  • by hey! (33014) on Friday June 06, 2014 @09:36AM (#47179353) Homepage Journal

    It worked amazingly well, but it still sucked.

    • Yeah, it's nothing new to put such emulation in place. Apple did it twice: when they switched to PowerPC and when they switched to Intel. DEC did it for Windows NT on Alpha. Intel did it for Windows and Linux on Itanium (the Itanium originally had hardware x86 support, but it sucked so much that software emulation was faster, and it was removed in later versions). QEMU can do it for Linux binaries across a wide range of CPU architecture combinations.

      It's doable but there is a significant performance penalty. Th

    • What sucked is when Apple removed compatibility for PPC and all your applications (some of which were rather expensive) stopped working.
  • taking x64 except for one or two instructions to hurt AMD
    or their SSE extensions

  • by edxwelch (600979) on Friday June 06, 2014 @09:47AM (#47179445)

    ARM ran a survey of the top 500 Android apps in the market and found that only 20% are pure Java, 30% are native x86, 42% require binary translation, and 6% do not work at all on Intel's platform. To make matters worse, the level of compatibility is falling. They also found that running an app in binary translation mode takes a huge performance hit.
    http://www.theregister.co.uk/2... [theregister.co.uk]

    • I can't speak for all app developers, but the first time I got an x86 device on my desk at work, I ran machine code for several hours before even realizing it wasn't an ARM device. It was somewhat shocking when I finally ran 'cat /proc/cpuinfo'.
  • Microsoft and Intel spent 20 years building bigger. Intel made bigger more complex silicon and Microsoft bloat happily expanded to fill that bigger silicon.

    I remember times in the 90s where I was upgrading CPUs for clients that were 6 months old - crazy.

    These two companies were wholly unprepared for the mobile revolution that required small and efficient. Neither company could shrink its offerings down fast enough. Unix on ARM was there to fill the need.

    I say to both companies - tough cookies. Had th

    • Are you kidding?

      Atom is now competitive on phones, better than ARM on tablets, and Haswell destroys ARM on larger tablets and everything above.

      The Windows NT kernel runs smoothly on hardware that would choke on Android.

      Don't forget that 90s processors were slower than current mobile processors. Good luck using a Pentium (Pro/II/III) for anything useful these days, regardless of the OS.

      • by JohnFen (1641097)

        I use several PIII machines, and one PII, regularly as servers in my LAN. They perform very well in that role.

        • Serving what? DNS, NTP and DHCP?

          The power savings from retiring something as old as a Pentium II or Pentium III certainly pay for newer hardware.

  • LLVM byte code (Score:4, Interesting)

    by reg (5428) <reg@freebsd.org> on Friday June 06, 2014 @10:12AM (#47179687) Homepage

    I still don't understand why APKs are not just pure LLVM bytecode, with either the store or the phone completing the bytecode-to-native compile, including the final optimization passes...

    Regards,
    -Jeremy

    • by outlaw (59202)

      Those who don't remember the past are doomed to repeat it...

      http://en.wikipedia.org/wiki/A... [wikipedia.org] (one of the earlier UNCOL)

      I'll go back to my cave now

      • The article explains what ANDF is, but it doesn't say what was wrong with it. What was wrong with ANDF?

        I'll go back to my cave now

        I feel that way all the time at work now, every time my manager tells me about some programming technology a company 'invented.'

  • Unless the native code includes ARM-specific inline assembly or uses ARM-specific processor features, the lack of x86 libs is just laziness on the part of developers. All the dev would need to do is compile his native code for x86 and include it in the APK. Devs feel free to be "lazy" in this way because ARM is so prevalent relative to x86.
    • by iggymanz (596061)

      Wrong: different architectures can and do cause problems for pure C code too. Here's the tip of the iceberg for you: find out about the various ARM endian modes.

      • "...or uses ARM-specific processor features..."

        I'll count byte order as a processor feature.

        Basically, there's C code and then there's architecture-optimized C code. The former should be easily ported between architectures. So if an app's native code is architecture-agnostic and the dev doesn't include x86 (and MIPS, for that matter) versions, then he's just being lazy.
  • For a minute I thought we had it bad because Apple is now creating a brand-new language we have to learn just for "Apple development".

    But actually it seems...

    You're the ones getting fucked.

  • Only life is important.

    Oh wait, wrong topic...

    When we talk about mobile devices, efficiency comes before compatibility.

  • Google should have designed Android around C, or possibly C++. It would be more power-efficient but, more importantly, it would be free from involvement with Oracle. There is no reason why apps written in C for Android shouldn't be recompilable for x86 or ARM; it just requires a bit more care. Fat binaries would also work well enough. Any large app is likely to be mostly data anyway.

"Bureaucracy is the enemy of innovation." -- Mark Shepherd, former President and CEO of Texas Instruments
