Intel Confronts a Big Mobile Challenge: Native Compatibility 230
smaxp writes: "Intel has solved the problem of ARM-native incompatibility. But will developers bite? App developers now frequently bypass Android's Dalvik VM for some parts of their apps in favor of the faster native C language. According to Intel, two thirds of the top 2,000 apps in the Google Play Store use natively compiled C code, the same language in which Android, the Dalvik VM, and the Android libraries are mostly written.
The natively compiled apps run faster and more efficiently, but at the cost of compatibility. The compiled code is targeted to a particular processor core's instruction set. In the Android universe, this instruction set is almost always the ARM instruction set. This is a compatibility problem for Intel because its Atom mobile processors use its X86 instruction set."
Fsck x86 (Score:4, Insightful)
I like compatibility, but I've had it with x86. Let ARM hog the limelight for a while; no reason it shouldn't have its fifteen minutes. And let x86 die: it's way past its BBE date and has outstayed its welcome by several generations.
Re:Fsck x86 (Score:5, Insightful)
This person is likely in their 20s, I'm assuming early 20s. With that said, I am in my 30s, somewhat early. My first PC was an 8088 and I've deep-dived into every modern processor since then. Even with the debacle that was Windows 7 and 8, I am still going to stand behind x86 as a great architecture that can stand the test of time.
Scalability: What other architecture has scaled so far that it completely decimated competing architectures from both the past and the future at the same time? The original 8088/86 ran at under 5 MHz; the latest x86 offerings run at 4 GHz.
Popularity: Both Apple and Sun saw the writing on the wall, Sun saw it too late, Apple saw it early (or saw what happened to Sun). They both shifted from a proprietary processor and chipset to a more common and popular platform. Both platforms had specific benefits over x86 until x86 scaled far and beyond what they both offered.
Backwards Compatibility: I know my x86 processor is still going to start in 16-bit real mode, and I know I can run my 1992 applications. And beyond that, x86-64 just extends the instruction set; ARM32, by contrast, does not play on ARM64.
Let's face it. I witnessed Y2K. I witnessed every weak architecture under the sun get wiped out because it had shortcomings. Intel designed the best architecture with x86 and naysayers generally harp because it's "too big". I, for one, plan to teach my children x86 ASM so they understand the basics.. then let them find MIPS or ARM or whatever-fad-arch-is-current so they too can appreciate the design of x86.
Re:Fsck x86 (Score:5, Funny)
I, for one, plan to teach my children x86 ASM so they understand the basics.. then let them find MIPS or ARM or whatever-fad-arch-is-current so they too can appreciate the design of x86.
This is just a guess but you don't actually have children yet, do you?
Re:Fsck x86 (Score:4, Interesting)
x86 is good in the same way that a modern police baton is good - it's still a stick you hit people with, and serves its purpose. But there are better weapons available.
So saying that x86 is great because technology has had to improve to make up for its deficiencies is just stupid. x86 isn't some wonderful architecture; putting 4 cores on a single die isn't anything x86 made happen that others couldn't do, and the fabrication techniques that shrank the die size aren't anything to do with x86 either.
Consider that the Motorola 68000, way back in the day, was better than the old 286s it competed with. Imagine the 68000 had taken off instead of the 286: if MS and IBM had built DOS for the 68000 instead of x86, today we'd be in pretty much the same position but with a different chipset. Except it would be faster, cheaper, and more efficient.
BTW x86 32-bit doesn't run on x86_64 either. The software and chips have emulation routines that allow it to happen. The same as happens with A64 that allows old A32 and T32 instructions to still run on the same chip.
Re: (Score:2)
x86 is good in the same way that a modern police baton is good - it's still a stick you hit people with, and serves its purpose. But there are better weapons available.
So Intel is essentially crying "Don't ARM me, bro!"...? ;-)
Re: (Score:2)
BTW x86 32-bit doesn't run on x86_64 either. The software and chips have emulation routines that allow it to happen. The same as happens with A64 that allows old A32 and T32 instructions to still run on the same chip.
Disregarding built-in microcode that converts CISC instructions into simpler RISC-like operations, this statement is not accurate. All x86-64 processors have the same native 32-bit registers and instructions that the original 386 had (some may be deprecated, but IIRC there is 100% compatibility). No hardware emulation is being done.
You may be confusing this with the translation layer (WoW64) that Windows uses to run 32-bit processes on x64 Windows. Yes, there is some slight overhead, but it isn't c
Re: (Score:3)
BTW x86 32-bit doesn't run on x86_64 either. The software and chips have emulation routines that allow it to happen.
Unless I'm mistaken, that's completely incorrect.
Re: (Score:2)
BTW x86 32-bit doesn't run on x86_64 either. The software and chips have emulation routines that allow it to happen.
Unless I'm mistaken, that's completely incorrect.
The CPU's functional units don't execute x86 instructions or amd64 instructions. They execute micro-operations into which the x86(_64) decoder decomposes the instructions issued to the CPU.
Re: (Score:3)
Comment removed (Score:5, Interesting)
Re: (Score:2)
Wow. A factual followup. Wish I had mod points.
Re: (Score:2)
I *do* have mod points, but can't mod him any higher, as he is already at 5. The most I could do is mod him down so that somebody else could come along and show how awesome his post is by modding him back up, but that doesn't seem like a very efficient use of mod points...
Re:Fsck x86 (Score:4, Interesting)
First of all, at this point, it is misguided to talk about x86 as an architecture;
Thank you. I got a raft of shit when I argued that there was no such thing as an x86 instruction set architecture any more just recently, and hasn't been since the Am586 took x86 into RISC-land.
I'm still not seeing any x86 processor getting (unbiased) performance/W scores that are close to common ARM processors.
On the other hand, I'm seeing a pretty massive gap between the high-end in x86 (really, amd64) processors and in ARM processors. This doesn't mean ARM can't be scaled up, but I think it's worth remembering what x86 went through in being scaled up. It didn't happen overnight. It may well be that x86-compatible processors embrace low power consumption before the ARM-compatible processors scale up to the high end.
Re:Fsck x86 (Score:5, Informative)
A typical processor design takes around 4-5 years from concept to production silicon. Intel did not even consider power as a constraint (other than a maximum) until 2008. Haswell was the first ground-up design where power was a constraint, though still not a major one. With Haswell, Intel came within shooting distance of ARM power levels without even compromising computing power.
In about 2010, power consumption became not just a feature but a required feature, in the low-watt to milliwatt range. Intel should have a processor to meet that requirement later this year or early/mid next year. Intel has already done a preliminary release of some (un-handicapped) Atoms that have about 75% of the performance of Haswell and are power-competitive with ARM.
Up until a year or two ago, when the PC market began to crater, Intel wasn't interested in playing in the low-power market because the margins were atrocious. But with the rise of high-margin smartphones, and the reality that they will likely replace a significant chunk of the personal PC market, Intel has begun to take the market seriously. Writing them off as unable to play this game because they haven't bothered in the past would be incredibly stupid. They are the largest CPU designer in the world, with some of the smartest CPU designers in the world working for them; it just takes a while to turn such a big boat. Give it a few more years, then come back and talk about x86 being unable to compete.
I don't know if Intel will succeed, but if they put their resources into it they will easily outpace ARM, because the CPU design game is about design resources and fabs, and Intel has both in spades (in fabs, Intel is one entire process step ahead of everyone else), more than the rest of the ARM market combined, and they won't be designing the same thing 50 times. See, that's the ARM market's biggest handicap: there are dozens of companies reinventing the wheel over and over again. Intel's biggest handicap is their desire not to eat existing markets, and it might be their undoing: a processor with 75% of Haswell's performance at ARM's power use could likely cannibalize much of their Haswell sales, and the tricks to prevent that (i.e. sales restrictions) will also handicap the processor's chances of competing with ARM. IMO, if Intel fails at competing with ARM, it will only be because they didn't want to cannibalize sales with lower-margin parts.
Re:Fsck x86 (Score:4, Insightful)
then let them find MIPS or ARM or whatever-fad-arch-is-current so they too can appreciate the design of x86.
MIPS and ARM as fads? You do realize MIPS has been around almost as long as x86 has, and is still widely used. People all too often forget that the majority of devices out there are not full-fledged computers; they are embedded devices, a space MIPS and ARM own. This is exactly why MIPS is still widely taught in colleges: it is readily accessible, open, and still used in industry. It also gives a good foundation to build on when looking at other ISAs.
Re: (Score:2)
What kind of nonsense are you spouting? Every processor out there with the exception of that one open sourced one, *IS* proprietary! *YOU* go try and make an x86 and see if Intel sues the pants off you.
You also don't understand what scalability and all the other big words mean. You don't even know that your Intel processor is now effectively a RISC core with a CISC layer of microcode on top.
And who the hell cares if you can run 8-bit or 16-bit mode shit? Those were all design inefficiencies and now you are f
Re: (Score:3)
Since we're all playing the age card, I'm 50 and have been actively developing software since I was 12 (using punch cards and the ultra-fast and modern paper tape!)
The x86 is a fine architecture, despite its numerous warts. However, so is ARM. Each has distinct advantages and disadvantages, and being able to operate in power-starved situations (such as with smartphones) is one of the main strengths of ARM and one of the main weaknesses of x86. If my experience has taught me anything, it's that there's no
Re: (Score:2)
Popularity: Both Apple and Sun saw the writing on the wall, Sun saw it too late, Apple saw it early (or saw what happened to Sun). They both shifted from a proprietary processor and chipset to a more common and popular platform. Both platforms had specific benefits over x86 until x86 scaled far and beyond what they both offered.
Did you mean: 'Apple suffered from a PowerPC processor shortage, while Sun added x86-64 to their lines of workstations and servers'?
Get your facts right: Sun didn't shift to being an x86 shop, and Oracle hasn't either. In fact, the SPARC architecture is so alive that it is used in some of the Top 500 supercomputers [wikipedia.org]... as is IBM's PowerPC.
Re:Fsck x86 (Score:4, Interesting)
They both shifted from a proprietary processor and chipset to a more common and popular platform. Both platforms had specific benefits over x86 until x86 scaled far and beyond what they both offered.
In my opinion, Apple shifted to x86 mainly for logistical reasons, not architectural ones. The problem with staying with PPC was twofold: 1) Intel offered better laptop chips, and 2) Apple was going to have problems with PPC supply.
The first one is fairly self-evident. Intel laptop chips were being updated all the time and were better at power efficiency than the G4-based chips Apple was getting from Motorola. IBM never manufactured a mobile G5 chip, probably because of the heat dissipation problems. If Apple had stuck with IBM and PPC, they would have fallen further behind Intel's mobile chips.
The second one is more complicated. Motorola, then IBM, supplied Apple with chips, but even at millions of chips a year, Apple was a small customer to either company. Compounding this, Apple's chip was heavily customized, making it costly for any chip maker to maintain and develop. Apple needed those customizations because other PPC chips from Motorola and IBM were not designed for consumer computers. For example, IBM's server chips do not need any multimedia processing capabilities, because they were designed for servers, not desktops.
As such, it was a burden for Motorola and IBM to make changes to any supply schedules. If Apple got a sudden upswing in orders, IBM wouldn't be able to keep up. IBM would also have to dedicate more and more resources to developing newer generations with Apple's changes, and IBM was reluctant to spend possibly billions in research for one small customer.
Intel, on the other hand, was better logistically. Any development Intel did for Apple could be reused for other customers. In fact, the ultrabook specifications came out of Intel's work with Apple on the MacBook Air.
Re: (Score:2)
ARM also doesn't scale nicely
You figured that out...how exactly? It's a RISC, RISCs have historically scaled very, very well.
Re:Fsck x86 (Score:5, Interesting)
You figured that out...how exactly? It's a RISC, RISCs have historically scaled very, very well.
What does that even mean any more? Visibly-CISC processors are now internally RISCy (all of them since the Am586), there is actually a benefit to be derived from variable-length instructions, and the x86 decoder is a small portion of a modern CPU. But ARM cores have never gotten up into the big, big clock rates because they've never been designed for them, instead targeting efficiency. That's a much easier goal to reach than bigger-and-shinier if you're on a constrained budget, and it has certainly paid off now that we care about power budgets, but they're still having trouble scaling.
I'd sure like to hear anything insightful anyone has to say about XScale. It was the fastest ARM implementation of its day, but it was also the most power-hungry, and AFAICT Intel never really managed to scale either performance up or power consumption down after their initial release, and then dropped it. Is that how it played out?
Re: (Score:2)
What does that even mean any more? Visibly-CISC processors are now internally-RISCy (All of them since Am586)
How is that relevant to the issue I was commenting on? That doesn't make ARM scale worse, that makes x86 scale better, doesn't it?
and there is actually a benefit to be derived from variable-length instructions and the x86 decoder is a small portion of the modern CPU
Sure, if you're cranking out single thread performance. Which may not be the best thing to do in all application areas, though. Especially if you're doing something like ARM's big.LITTLE configuration, where you may not want the LITTLE cores to have complicated instruction decoders - to keep them actually simple - but you still need them to support the same ISA, otherwise you'd h
Re: (Score:2)
How is that relevant to the issue I was commenting on? That doesn't make ARM scale worse, that makes x86 scale better, doesn't it?
It means ARM isn't going to just magically scale well because it's RISC, and it won't scale any better than x86 because x86 is RISC, too — even if it doesn't look like it from the outside.
Re: (Score:2)
Re: (Score:2)
I never claimed any such thing. Put those things into somebody else's mouth.
Testy today, aren't we? Someone said ARM doesn't scale well; you asked how they figured that out, since RISC designs have historically scaled well. I'm calling the relevance of ARM's RISCiness into question. You can either continue to be defensive, or you can respond to that.
Re: Fsck x86 (Score:2)
Re:Fsck x86 (Score:4, Informative)
x86 is hardly any less proprietary than PowerPC or SPARC. You've got Intel and AMD at the helm. VIA walked the plank ages ago.
Apple ditched PowerPC because Apple's market share was so fucking low that the only company compiling for PowerPC was Adobe. The decision to drop PowerPC had to do with market share and cost, not the architecture itself.
Yes? No? I think this is a misunderstanding of the motivations behind Apple's PowerPC switch. (Source: I wrote PowerPC Mac apps at the time and was in the room at WWDC when Apple announced the switch.)
The PowerPC market was a bit wider than that. Microsoft had Office on PowerPC, Adobe had their suite, and there was a smattering of other apps.
At the time, the future of PowerPC had looked pretty bright. Microsoft's Xbox, Sony's PS3, and Nintendo's Wii were all switching to PowerPC. Within a span of several months, the community was looking at a majority of gaming hardware being PowerPC based. PowerPC was going to be in very high demand, which would mean great things for the Mac PowerPC platform. Far from "the only company compiling for PowerPC was Adobe", Microsoft was buying Power Mac G5 boxes for their dev kits and they were porting Windows to the PowerPC for the Xbox. And in the end, Microsoft, Sony, and Nintendo combined shipped several hundred million units based on the PowerPC (With Nintendo still shipping the Wii U with PowerPC today.)
So why did Apple leave the PowerPC?
At the time, laptop sales were on the rise, but Apple's laptop CPUs were not designed by IBM; they were designed by Motorola. IBM's PowerPC G5 was suitable for the Xbox 360 and desktop machines, but it ran far too hot to go into laptops. This left Motorola with their G4 CPU. And let me tell you, Motorola probably had very smart people working for them, but their execution was incompetent. The G4 had a 133 MHz system bus (slow even for the time), ran very hot (though still cooler than the G5), and worst of all, was much slower than Intel's Pentium M.
Meanwhile the Pentium M was doing very well. It was faster than the G4, more power efficient than the G4, and it actually had a modern chipset and bus. Switching to the Pentium M was a no brainer.
There was speculation that Apple was trying to get IBM to make a mobile G5, but they were never able to get the power consumption down. When Microsoft and Sony entered onto the scene, IBM's interest shifted to getting the PowerPC into larger form factors, and Apple just didn't ship enough units in laptops to balance out the R&D demand that Microsoft and Sony created.
Motorola, in the meantime, just kept sucking with the G4. There was a new design that was basically a modernized G4, and it did eventually end up shipping, but by then Apple was just done with PowerPC.
Intel provided a stability the AIM (Apple, IBM, Motorola) alliance just didn't provide, with a quality chip. PowerPC did end up scaling, but there simply wasn't the same demand for PowerPC machines at the time to make it scale well enough.
So were people not actually writing code for PowerPC? No, lots of people were. I'd actually guess that after Apple left PowerPC, the number of PowerPC developers continued to rise. And with the Xbox 360, Sony PS3, and the Nintendo Wii/Wii U continuing to get new games, there are still a lot of PowerPC developers out there.
Re: (Score:2, Insightful)
You have no concept. Many companies, including Intel, have tried to move away from x86 - the market won't let them. There is too much software out there written to the x86 architecture to move away from it. You are completely underestimating the market forces behind x86.
Staying with x86 is *not* Intel's choice, or even their desire (they have tried to shift the market off x86). This is where real-world forces and issues trump ivory-tower technical perspectives.
Re: (Score:2)
Blind hate for an instruction set. Brilliant. /s
Re: (Score:2)
No-one who hasn't learned assembler for at least a couple of processor architectures would understand it.
I too have a deep dislike of x86 that dates back to approaching it as my third processor architecture, after the 6502 and 68000, and hating it for its inelegance. I've learned a couple more since then, and nothing has shifted that initial distaste for x86. Even though it doesn't matter any more, as few people, and certainly not me, program CPUs in assembler these days.
Re: (Score:2)
The zealous ARMists are mostly those people who bought AMD during the speed-wars of the 90's. Intel won, and they can't stand that.
You're projecting, big time. These are completely different groups of people. They just happen to be 2 groups of people that you, as an Intel fan, dislike.
I like ARM, and as far as I'm concerned there's nothing to choose between Intel and AMD. It's the x86 instruction set that I dislike. And the power inefficiency.
Re: (Score:2)
Re: (Score:2)
Why?
Long-mode x86-64 works out most of the major 'problems' with x86 from a programmer's perspective. Thanks, AMD! ARM has its own set of silliness that devs at the assembly level have to deal with (for reference, I fluently speak x86, ARM, and ATmega ASM).
Furthermore, x86-64 is a language that rides on top of the core; the core doesn't actually speak x86 in pretty much any x86 processor. A translation unit in front of it breaks the CISC instructions up into more RISC-like ones for the core.
I wou
Re:Fsck x86 (Score:4, Insightful)
Re: (Score:2)
I'm hardly counting ARM out. I doubt Intel will ever try to apply themselves to all the areas ARM is in. For phones and tablets, though? There is no doubt that ARM will have some very serious competition in the near future.
I realize we like to root for the underdog here, but realistically, Intel's got a leg up in the long run.
intel and power efficiency (Score:3)
"They've been actively focusing on increasing power efficiency for a number of years now, so I have no doubt they'll be able to bring strong competition."
If Intel wants to, they can bring strong competition. They used to have their own ARM variant, but sold it off. They decided that there was no future in low power. Oops.
When they do get a low-power chip, they seem to lose interest, then crank up its performance and its power budget. Then Steve Jobs would yell at them, and they would produce another low
Re: (Score:3)
You cannot use the TDP (the "power budget" in your post) to compare actual power consumption of the chips. The 35w Haswells will consume less power than your Core 2 chip in actual use, thanks to massive gating and idle power gains that Intel has made. Haswells are also faster, allowing them to go back to idle quicker.
That's the thing about Intel- some chips have higher TDPs, sure, but the performance per watt is unparalleled. You need more ARM cores to do the same things- and many people would like more pe
"newsy" bits (Score:5, Informative)
Somehow missing from TFS...
Not useful to me, but I'll support Intel anyway. (Score:5, Interesting)
So the core of the app, the 'engine', is in C++ and must be natively compiled, while the UI and such is in Java. Naturally, the binary's compiled for ARM first. This actually runs on a lot of Intel Android tablets because they have ARM emulators. But, thanks to a user finally asking, I put in some time and now I can make an Intel version. (Heck, the original source was written for Intel anyway, so it wasn't a big stretch.) The existing tools are sufficient for my purposes. And it runs dramatically faster now on Intel.
However, for the developer it's mildly painful. The main issue is that you have a choice to make, with drawbacks no matter which way you go. You can include native libraries for multiple platforms, but that makes the APK larger - and according to a Google dev video I saw, users tend to uninstall larger apps first. In my case, it'd nearly double the size. So instead I'm putting together multiple APKs, so that ARM users get the ARM version and Intel users get the Intel version - but only Google Play supports that, not third-party app stores. I haven't looked into other app stores, and now it's less likely I will.
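For what it's worth, the fat-APK-vs-multiple-APK choice described above comes down to a single line in the NDK build configuration. A sketch, assuming the classic ndk-build toolchain; the ABI names are the standard NDK ones:

```makefile
# Application.mk (hypothetical app) -- ndk-build configuration.
# Listing both ABIs builds a "fat" APK carrying two copies of every
# native library, roughly doubling the native payload. Building once
# per ABI instead yields the smaller per-architecture APKs that
# Google Play (but not every third-party store) can serve selectively.
APP_ABI := armeabi-v7a x86
```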
Note that native development can be important to apps for a non-technical reason: preventing piracy. An app written purely in Java is relatively easy to decompile and analyze, and pirates have a lot of techniques for removing or disabling licensing code. Adding a native component makes the app much harder to reverse-engineer, at least delaying the day that your app appears on pirate sites.
Re: (Score:2)
What would be even better is if you could submit your source code to the Google store and have it compiled for you on a server farm, producing APKs optimised for each chipset they support.
I remember the days when SourceForge had such a thing: you supplied your code and it got built for all manner of Linux and (IIRC) Windows architectures/platforms.
Multiple APK info. (Score:2)
Not very well written then (Score:2)
Well-written C can be cross-platform compatible. It's all in how you write things (or the libraries you use).
Re:Not very well written then (Score:4, Informative)
A compiled binary doesn't care how well-written your C is if you are running it on the wrong platform.
Re: (Score:2)
Re: (Score:2)
Re: (Score:3)
Clearly we just need a small set of POSIX apps to do 'git, 'make' and 'gcc' on your phone.
Download the signed source code from the app store.
Re: (Score:2)
For Android, I'm shocked this isn't part of the install process. Either done server side and cached(Compile once, then cache the stored binary) or done on the phone. If compilation fails for the target, the app dev is notified and made unavailable for those on that platform.
Re: (Score:2)
Clearly we just need a small set of POSIX apps to do 'git, 'make' and 'gcc' on your phone.
Download the signed source code from the app store.
like a gentoodroid
Re: (Score:2)
In fact, one of the main reasons to write your mobile logic in C is that it will run on any platform; then you only have to rewrite your UI layer. But this isn't what they're talking about; they're talking about multiple processors. However, Android allows fat binary APKs with multiple versions of the libraries, so it isn't that big a deal.
Intel once made ARM processors... (Score:2)
Re: (Score:2)
I've read that they kept their ARM architecture license, one of the few good licenses, where they can make their own CPU and instruction set as long as it's compatible with ARM, like Apple and Qualcomm. Not like the regular licenses, where you simply manufacture whatever ARM designs with minor changes.
Re: (Score:3)
Re: (Score:2)
I think you can spin that any way you like: on the one side, they're a many-headed beast for Intel to deal with, but on the other, they're also in pretty intense competition with each other, so they're not willing to share their secret sauce either. You pool resources, but you are also at the mercy of a third party whose interests may not be exactly aligned with yours. The saying is "divide and conquer", not "spread yourself thin and attack from all sides".
I also wouldn't underestimate the amou
Re: (Score:2)
Reinventing the wheel 8 times is not an effective use of R&D. The minor differences between these ARM vendors don't justify the expensive redesign of the same thing 8 times. If Intel goes all in and chases this market, that divided R&D will be a significant handicap against a well-orchestrated and heavily financed competitor like Intel, not a benefit.
Re: (Score:2)
Apple did this when they switched to PPC. (Score:4)
It worked amazingly well, but it still sucked.
Re: (Score:2)
Yeah, it's nothing new to put such emulation in place. Apple did it twice: when they switched to PowerPC and when they switched to Intel. DEC did it for Windows NT on Alpha. Intel did it for Windows and Linux on Itanium (the Itanium originally had hardware x86 support, but it sucked so much that software emulation was faster, and it was removed in later versions). QEMU can do it for Linux binaries across a wide range of CPU architecture combinations.
It's doable but there is a significant performance penalty. Th
Re: (Score:2)
Re: (Score:2)
Are you (and were you) a Mac user?
Re: (Score:2)
Re:Apple did this when they switched to PPC. (Score:4, Interesting)
They had fat binaries for apps compiled to both PPC and x86, but that wasn't the only solution, since with just that you wouldn't be able to run apps until the developer recompiled and shipped a new version. They also had a binary translator [wikipedia.org] to run unmodified PPC binaries on x86.
Re: (Score:2)
Re: (Score:2)
Out of interest, what apps are big enough for doubling their size to matter, yet most of their storage usage isn't data of some kind that can be shared between both versions?
Games, for example, might be a few megabytes of code with tens or hundreds of megabytes of game data.
Re: (Score:2)
The binaries are a small part of the whole package. Besides, you don't have to download all the binaries.
Re: (Score:2)
Re: (Score:2)
The solution is easy: provide signatures for the various download options.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Similarly there were tools to remove the unwanted architecture code from Universal Binaries during the PowerPC to x86 transition.
Re: (Score:2)
Re: (Score:2)
If you're going to go that far, why not just have developers sign a manifest which lists each file in the original APK, the architecture(s) it's required for, and a hash of the file? Then anyone could repack the APK to strip out some or all of the extra files, and the end user could still verify that they have all the right files for their architecture, and that all the files present correspond to the hashes in the manifest.
Re: (Score:2)
not like intel hasn't done this itself (Score:2)
taking AMD's x64 except for one or two instructions, to hurt AMD
or their SSE extensions
Bigger problem than Intel admits (Score:5, Informative)
ARM ran a survey of the top 500 Android apps in the market and found that only 20% are pure Java, 30% are native x86, 42% require binary translation, and 6% do not work at all on Intel's platform. To make matters worse, the level of compatibility is falling. They also found that running an app in binary translation mode takes a huge performance hit.
http://www.theregister.co.uk/2... [theregister.co.uk]
Re: (Score:2)
Re: (Score:2, Insightful)
Re: (Score:2)
Are you kidding?
Atom is now competitive on phones, better than ARM on tablets and Haswell destroys ARM on larger tablets and everything above.
The Windows NT kernel runs smoothly on hardware that would choke on Android.
Don't forget that 90s processors were slower than current mobile processors. Good luck using a Pentium (Pro/II/III) for anything useful these days, regardless of the OS.
Re: (Score:2)
I use several PIII machines, and one PII, regularly as servers in my LAN. They perform very well in that role.
Re: (Score:2)
Serving what? DNS, NTP and DHCP?
The power savings from retiring something as old as a Pentium II or Pentium III certainly pay for newer hardware.
LLVM byte code (Score:4, Interesting)
I still don't understand why APKs are not just pure LLVM bytecode, with either the store or the phone completing the bytecode-to-native compile, including the final optimization passes...
Regards,
-Jeremy
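The pipeline the parent imagines would look roughly like this with the stock LLVM tools (an illustrative sketch, not a supported Android workflow; the target triples are examples):

```
# On the developer's machine: compile C once to LLVM bitcode
clang -O2 -emit-llvm -c engine.c -o engine.bc

# In the store or on the device: lower the same bitcode per target
llc -mtriple=armv7-linux-androideabi engine.bc -o engine-arm.s
llc -mtriple=i686-linux-android      engine.bc -o engine-x86.s
```

The catch, as the replies below hint, is that LLVM bitcode is not truly target-independent: pointer sizes, struct layout, endianness and any inline assembly are baked in when clang runs, so "compile once, lower anywhere" only holds for code that was already written portably.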
Re: (Score:2)
Those who don't remember the past are doomed to repeat it...
http://en.wikipedia.org/wiki/A... [wikipedia.org] (one of the earlier UNCOL)
I'll go back to my cave now
Re: (Score:2)
I'll go back to my cave now
I feel that way all the time at work now, every time my manager tells me about some programming technology a company 'invented.'
well... (Score:2)
Re: (Score:2)
Wrong, different architectures can and do cause problems for pure C code too. Here's the tip of the iceberg for you: look up the various ARM endian modes.
Re: (Score:2)
I'll count byte order as a processor feature.
Basically there's C code and then there's architecture-optimized C code. The former should be easily ported between architectures. So, if an app's native code is architecture-agnostic and the dev doesn't include x86 (and MIPS, for that matter) versions then he's just being lazy.
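For that architecture-agnostic case, shipping the extra ABIs really is close to a one-line build change. With the NDK's ndk-build workflow, an Application.mk along these lines (a sketch, assuming the standard NDK project layout) produces ARM, x86 and MIPS .so files from the same C source:

```
# Application.mk -- build the same native code for several ABIs
APP_ABI := armeabi-v7a x86 mips
```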
Android vs iOS development (Score:2)
For a minute I thought we had it bad because Apple is now creating a brand new language we have to learn just for "Apple development".
But actually it seems...
You're the ones getting fucked.
Time not important (Score:2)
Only life important.
Oh wait, wrong topic...
When we talk about mobile devices, efficiency comes before compatibility.
Should have been plan A (Score:2)
Re: (Score:2)
I think they do that already; filter apps you see via a device compatibility matrix. Or is this only at a crude level to distinguish (say) Tablets from Phones, etc.
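It's more than a crude tablet/phone split. Play filters on manifest declarations, and a fragment like this (illustrative values) would hide an app from phones and from devices without a touchscreen:

```xml
<!-- AndroidManifest.xml fragment: Play uses these to filter the listing -->
<supports-screens android:requiresSmallestWidthDp="600" />
<uses-feature android:name="android.hardware.touchscreen"
              android:required="true" />
```

The ABI part is implicit, though: the store compares the native-library ABIs bundled in the APK against what the device reports, which is exactly why an x86 device sees an emptier catalogue.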
Price discrimination (Score:2)
I think they do that already; filter apps you see via a device compatibility matrix.
The challenge is to make the filtered list other-than-empty without having to convince developers to recompile. I imagine that some developers are likely to refuse to recompile in order to price discriminate between the desktop market and the mobile market, where people aren't willing to pay as much per app as in the desktop market.
Re: (Score:2)
More to the point, you are asking them to cross-compile rather than just recompile. And if you don't have the hardware, you can't test the result.
Re: (Score:2)
More to the point, you are asking them to cross-compile rather than just recompile. And if you don't have the hardware, you can't test the result.
Because no devs anywhere have any x86 or x86_64 units under their desks. And it's not like we have dozens of emulators and virtual machines for every instruction set under the sun. And it's absolutely impossible to buy a touch screen after Windows 8 was released... oh wait, we have all of those, more so than ever.
Inaccurate emulators (Score:2)
Re: (Score:2)
I don't know how it works precisely, but I do know from experience that selecting a different model can permit you to install an app when you were disqualified for all manner of reasons, including such details as display resolution. Finless ROMs for the MK908 show up as a Samsung device or something, because otherwise lots of apps which work fine don't install.
Actually, it needn't be a technical issue. (Score:2)
Though I do agree that JNI is a serious pain. On the other hand, I've dev
Re: (Score:2)
Of course, how else would one make code portable between platforms?
By writing the app in JS instead of Java or C++.
Re: (Score:3)
Hey Juden, don't make it bad.
Take your sad chip and make it better.
Remember to put it into their phones
Then you can own
The Android market.
Re: (Score:3)
I'm lots older than that, and I don't remember any such complaints. Because originally the ARM chip was made only for the Acorn Archimedes. Which sold to people who certainly didn't want a PC clone. Later it started to be used as the CPU in PDAs, printers and mobile phones. None of which would have benefited at the time from Intel compatibility.
Issues with ARM not being x86 compatible are a recent thing.
Maybe there were complaints in the hobbyist Linux-everywhere crowd before there was an ARM port of Linux?
Re: (Score:3, Insightful)
More to the point, the problem is that x86 is not compatible with ARM. And it's pretty much just a problem for Intel. So not really a problem at all.
Re: (Score:3)
At this point, the tables are reversed, since what the actual Android users want is ARM compatibility, and x86 is the odd man out.
However, even in the ARM universe the compatibility is murky as there are so many varieties; straight up ARM, thumb, thumb-2, Jazelle as the basic instructions sets, plus some models that have multiply instructions, some have DSP instructions, and so on. Most of the Androids though presumably come with high end Cortex-A series and enough memory to use 32-bit ARM instructions whi