Apple announced its new generation of iPhones yesterday: the familiar-looking iPhone 8 and 8 Plus, and the striking iPhone X. They pack new hardware, and one change we have come to expect is a new processor.
The new iPhones ship with the Apple A11 Bionic, the latest and greatest processor Apple has designed for its iOS devices. The A11's CPU features a 6-core design, with 2 performance cores that are 25% faster and 4 efficiency cores that are 70% faster than those in the previous design. The GPU is also 30% faster than that of the A10 Fusion chip, which powered the iPhone 7 and 7 Plus.
When it comes to performance, the A10 Fusion in the iPhone 7 was already among the top performers in the Geekbench 4 CPU test. Geekbench 4 is a cross-platform benchmark that lets us compare CPU performance between mobile chips and PC chips. The A10 Fusion scored around 3,500 in the single-core test, a record for smartphone processors, and around 5,900 in the multi-core test.
Now, with the A11 Bionic inside the new iPhones, Apple has taken that performance to a whole new level, as the benchmark screenshot in the tweet below shows.
Looking at iPhone 8/X benchmarks. Android / Qualcomm left so far back in the dust now it's irrelevant. RIP Intel pic.twitter.com/mlVlGOXhlS
— Jeff Atwood (@codinghorror) September 12, 2017
The Apple A11 delivers astounding performance, as the screenshot shows. Its single-core score is over 4,000 and its multi-core score is almost 10,000, which is remarkable for a chip designed for smartphones rather than laptops.
It is even more impressive when you consider that Geekbench 4 scores are calibrated against a baseline of 4,000 points, set by an Intel Core i7-6600U, a processor we see in laptops. The A11's performance is similar to that of a quad-core Core i5-7300HQ, which should worry Intel, since we now have a smartphone chip performing as well as a laptop CPU.
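To make the baseline comparison concrete, here is a quick sketch. The A10 score is the figure quoted above; the exact A11 single-core score is an assumed placeholder, since the screenshot only shows it as "over 4,000":

```python
# Geekbench 4 calibrates its scores so that an Intel Core i7-6600U
# (a laptop-class CPU) lands at exactly 4,000 points.
BASELINE = 4000

# Approximate single-core scores discussed in this article; the A11
# figure is an assumption standing in for "over 4,000".
single_core = {
    "Apple A10 Fusion": 3500,
    "Apple A11 Bionic": 4200,
}

for chip, score in single_core.items():
    # A ratio above 1.0 means the phone chip beats the laptop baseline.
    ratio = score / BASELINE
    print(f"{chip}: {ratio:.2f}x the i7-6600U baseline")
```

On these numbers the A10 sits just below the laptop baseline while the A11 edges past it, which is the whole reason the comparison made headlines.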
Apple is currently way ahead of the competition in smartphone CPU performance; the top dogs in the Android space (the Snapdragon 835, Exynos 8895 and Kirin 960) lag behind the new iPhones in both single-core and multi-core Geekbench 4 scores.
Smartphone processor makers like Samsung (Exynos), Huawei (Kirin) and Qualcomm (Snapdragon) have their work cut out for them playing catch-up with Apple, and Intel has to make sure its laptop chips keep breaking new performance ground, since Apple is becoming a threat to it as well.
This is interesting, but could such a processor be used to power a laptop that, let's say, runs on a Core i5?
It would have been useful if there were an equivalent of Samsung DeX or Windows Continuum where that computing power could be put to use.
Technically yes, but it would need optimization. Apple's chips run on a different platform from Intel's (ARM vs x86), but I won't be shocked if Apple decides to use its own processors to power MacBooks in the future.
they cannot be used on laptops… first of all because of size… unless with a stripped-down version of a PC OS
Honestly, aside from the Series 3, THIS was the highlight of the event for me.
ML/AI-first SoC? Gimme that!!
And the in-house GPU? If all those things Phil spit on stage yesterday about it are even half true, Adreno should be ashamed.
It’s time for a dock solution!
No matter how fast a smartphone CPU is, it cannot be used to run a full laptop OS… ARM processors in smartphones are built from the ground up to be efficient, consume much less power and handle less heat, hence heavy applications on Mac or Windows cannot run on an ARM processor without burning it up. PC CPUs have a large die area designed to handle heat. The A11 is fast for a smartphone, but it cannot be swapped into any laptop running a full version of the OS, even with drivers. The only way around this is redesigning the PC OS to handle ARM processors carefully.
That’s not really true. I run Ubuntu on an octa-core Exynos with no issues: full GUI, full office suite, software dev suite, no problems. I have a fan on mine that runs a lot if I’m doing those things, so it does need cooling to yield the highest performance. But if I clock it down to about 500 MHz, it’s still quite usable and the fan never runs. If I replaced the fan with a larger heatsink, it could probably reach 1.0–1.2 GHz without overheating.
Do your research well; the A11 cannot replace even an entry-level Intel Atom… read about the RISC and CISC CPU architectures too.
You may want to do some more research. I suggest starting here: https://browser.geekbench.com/processor-benchmarks
Also, the RISC vs CISC argument is pretty much dead. Since the R8000 and PowerPC, RISC has had better performance than its peers. Even Intel switched to RISC cores with a microcode translation layer to support its legacy instruction sets for compatibility. The lines between the two types of instruction sets have blurred since the ’80s, to the point where the distinction isn’t even a good basis for comparison anymore.
Dude, I’m not talking about scores. As I said earlier, no matter how well an ARM-architecture CPU performs on Geekbench, it can never replace even an entry-level Intel Atom processor… You’re arguing about something you just don’t know… It’s like saying a smartphone camera can replace a DSLR… Size would never allow it… That’s literally how PC and smartphone CPUs differ… Again, do your research well… Don’t just argue… Running Linux on Samsung CPUs doesn’t mean they perform better than PC chips…
I suspect it is you who is arguing something that you “just don’t know,” since I have degrees in Computer Science and Nuclear Engineering, have been programming since 1979, and have designed and built prototype embedded computing systems. You spout a few buzzwords hoping to impress, discount any evidence that doesn’t support your position, and then offer no evidence of your own, not even anecdotal evidence. As for your suggestion that the Atom is in any way superior to a high-end ARM chip from ANYONE, I can easily refute that. I have ARM systems that handily outperform Atom-based systems in everything from boot times, interface speeds and compile times to gaming and general usability. Does that mean that all ARMs are better than all Atoms? Of course not. But ARMs cannot be discounted out of hand, and Atoms cannot be assumed to be better. And the new A11 is the cream of the crop, pushing ARM performance to a new level.
😂😂😂 Okay professor, I give up… It’s about the degrees an individual has instead of arguing the facts… I can see you have your ARM “systems,” which you have tested a full Windows OS on and came to a conclusion… Do you understand what die size means…? I don’t need to give evidence for anything… Anyway, with your research, you can ask Dell and HP to replace the Atom CPUs in their entry-level PCs with the Exynos and Apple’s A11.
Once again, you simply dismiss facts, like education and experience, when they don’t support your argument. But I see where you’ve gone wrong: you’ve concluded that ARMs are inferior because entry-level PCs don’t have them. OK, here’s where that line of reasoning doesn’t really work:
– Entry-level PCs need to be able to run Windows and Windows apps. Why? Because that’s what most of the world uses. Therefore, you must use a chip with an x86- or x64-compatible instruction set. That rules out ARM.
– Some people have suggested that Apple should use its own chips in its computers. They won’t. Why? The same reason: compatibility with the x86/x64 world. Apple already abandoned an arguably superior platform (PowerPC) in order to gain the ability to boot and run Windows applications, making their computers a one-size-fits-all solution that kept the platform alive. (The fact that IBM was not investing the resources required to improve PowerPC performance at the rate Apple wanted contributed as well, but that is a different issue.)
– Finally, you seem to think that if the A11 had superior performance in any way, everyone would then rush to replace everything with the A11, and since they don’t, the A11 must therefore not have superior performance. That just isn’t true. Choosing a microprocessor for a platform is a very complicated decision that includes CPU performance, thermal management, energy consumption, compatibility, size, interfaces, cores, threading, pipelining, caching, and many more factors. A chip that met Apple’s performance goals for the iPhone/iPad didn’t exist, so they made it. They can afford to do that because they sell the iPhone in such large volumes and charge a price much higher than that of a netbook or low-end PC.
😂😂😂 okay man, didn’t read all that but you win. I know nothing…go make some bombs or something
Not sure why you’re laughing—he’s right on all counts. Plus, unless ARM were *significantly* outperforming x86 (it currently is not), it might not make sense for a company (such as Apple, with its desktop/server/laptop Macs) to invest all the time and money to do a complete overhaul of its OS and develop another “Rosetta”-style application emulator for the x86 software still on shelves.
Back when Apple went through the pains of transitioning from PPC to x86, x86 CPUs were cheaper, far easier to source in production quantities, and noticeably outperforming the available PPC desktop CPUs because, like he said, IBM had let the line stagnate since the G5 was introduced. IBM also hadn’t been able to deliver the quantity or power envelope of G5 CPUs for the very important laptop market. So Apple’s best laptops were chugging along on old, noncompetitive G4s.
And IBM was really the only developer of desktop PPC at that point, whereas at the very least you had behemoth chipmaker Intel and scrappy competitor AMD making x86 chips, pushing each other with innovation and pricing wars.
Besides, Apple can still play the “we’re thinking of moving to our own ARM chips or maybe AMD x86” card whenever they’re negotiating chip deals with Intel, whether or not they actually intend to do either.
We are talking about the current A11, not what they might develop in the future. As you said yourself, unless ARM is outperforming Intel on all counts, there is no sense in investing time to change the OS to fit the ARM architecture.
It’s all a matter of cost. If it started making fiscal sense for Apple to spend money on transitioning to ARM for everything, you can bet they would. My guess is that we are a ways away from the cost benefit of Apple making its own chips being worth the trouble of ditching Intel. If Intel weren’t offering cut-rate deals on x86 chips to Apple, you can bet one of two things would have happened: 1. Apple-designed ARM chips for everything, since it would really unite their smartphone/tablet lines with the Mac, or 2. AMD CPUs showing up in Macs/MacBooks.
To me, the fact that #2 hasn’t happened tells me that Intel is still offering insane incentives to Tim Cook to keep Core/Xeon in the Mac, since you can bet AMD has been trying to get to Apple ever since Cupertino chose Core over Athlon 64 in 2006.
Do more reading. The main reason Apple never uses AMD processors is TDP, thermal design power. Intel is good at making low-power, high-performance chips, and since Apple needs these types of CPUs to design better laptops and iMacs, that’s why they picked Intel. Last year Apple started making PCs with Radeon GPUs due to their capability in handling multiple screens, which means there is no bad blood between them. In addition, with the rise of Ryzen, Apple is considering incorporating AMD chips in its PCs. However, this whole argument has significantly deviated from the main point, where I was initially arguing why the A11 cannot be used in PCs, not what Apple can develop in the future.
The new Ryzen/Threadripper processors are actually very power efficient—and we’ve still not heard any serious rumblings from Apple about switching to AMD. The next Mac Pro would seem nearly ideal for the new 1950X, as right now the best ‘trashcan’ model you can get from them doesn’t match up. Yet no serious rumors are coming from Apple that this is the case.
“Intel is good at making low-power, high-performance chips”
Not necessarily true. Atom largely failed to put a dent in ARM’s dominance in the ultraportable space, and I’m not sure where you are getting the idea that Apple wouldn’t be able to scale the A11, or a variant of it, to work well in a MacBook or iMac. Single-thread performance is already very competitive with some x64 MacBook CPUs—it would be a matter of scaling the core count to be sufficient for desktop/laptop use.
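For what it's worth, "scaling the core count" has diminishing returns that are easy to quantify with Amdahl's law; the parallel fraction below is illustrative, not a measurement of any real macOS or iOS workload:

```python
def amdahl_speedup(cores: int, parallel_fraction: float) -> float:
    """Upper bound on speedup with `cores` cores when only
    `parallel_fraction` of the work can be parallelized."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Even a 95%-parallel workload gains far less than linearly:
for n in (2, 6, 12):
    print(f"{n:2d} cores -> {amdahl_speedup(n, 0.95):.2f}x")
```

So while adding cores is the obvious lever for desktop-class performance, the single-thread speed the A11 already shows matters at least as much, because the serial fraction of real workloads caps what extra cores can buy.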
Again, it’s about money—is it worth it to Apple to design such a chip, then, after that expense, invest tons of money in yet another macOS/app software architecture transition (the transition from PPC to x86 was not cheap)? And even given that IBM was stagnating with PPC development after the G5, if it had been able to supply a notebook chip before Apple got tired of waiting, there’s a pretty good chance they never go x86/Intel—despite the power-per-watt advantage Core (Merom) had over the G5.
Since Intel is likely never going to come anywhere near the supply trouble IBM was having (its main strength is its massive fabrication and manufacturing resources), it’s also unlikely that any ARM manufacturer will be able to fab a chip that offers the kind of power-consumption advantage that Intel offered over IBM PPC back then. And only then has Apple shown it would be willing to take on the huge cost and upheaval of transitioning the Mac to a new architecture.
You took so much time to explain what I am actually telling you, contradicting yourself over time. Do you know of any MacBook that uses an Atom??? Did I say Apple would not be able to scale their chip to fit Macs??? When you say “it would be a matter of scaling the core count to be sufficient for desktop/laptop use,” what does that mean? Is it not what I am actually explaining to you? In addition, what you are explaining is just your opinion without any proof, and what I am describing here are the technicalities of the current A11 if it ever gets swapped into a PC. I am not talking about what developments Apple may make to fit ARM to their OS. TDP in the A11 is the topic; revise your explanation. FYI, in 2005 Steve Jobs actually said the reason they picked Intel was TDP. The A11 as it is, without any improvements, can’t be swapped into a PC to compete with current power CPUs from Intel. Don’t try to explain the R&D capabilities of Apple. I know they can. But what we are discussing is the current A11 without any modifications.
What on earth are you blabbing about? This whole discussion started because you insisted that Apple couldn’t use the A11 in its desktops/laptops. Sure, AS IS it wouldn’t be a great replacement for x86, which is why it hasn’t happened yet, but you’re really being obtuse if you think that Apple couldn’t replace Intel with an A11 variant should Intel suddenly refuse to give them a good deal on x86 supply. They could unify the entire Apple experience—no more separate macOS and iOS architectures—something that, if it were economically advantageous to do, Apple would love to do.
I’ve been consistent this entire argument—RIGHT NOW, it doesn’t make financial sense for Apple to invest tons of money in rebuilding macOS from x86 to ARM, for a new architecture that doesn’t offer anywhere near the cost-per-watt advantage that Core offered over the G5 when they made that major transition in the mid-2000s.
Actually the best part of your replies was insisting that CISC vs RISC has any bearing on this argument — completely oblivious to the fact that since PPro/K5 all x86 cpus are RISC cores with an x86 decoder stage slapped on top. That was hilarious.
Dude, you don’t even know what you are arguing about… you’re just blubbering here, copy-pasting things you don’t even know the meaning of. I understand. This argument has been closed since the time you said this: “Sure, AS IS it wouldn’t be a great replacement for x86, which is why it hasn’t happened yet…” The rest of what you are saying is just baseless opinion. If you were part of Apple’s hardware development team I would understand. As for RISC and CISC, you are actually very wrong. I would recommend you start here before you start copy-pasting baseless arguments: https://www.allaboutcircuits.com/news/understanding-the-differences-between-arm-and-x86-cores/
“blubbering here, copy-pasting things you don’t even know the meaning of” “As for RISC and CISC, you are actually very wrong”
LOL, oh lord—somebody who didn’t know that every Intel/AMD x86 CPU core since the late ’90s has been a RISC with an x86 micro-op translator on top is telling me I don’t know what I’m writing? That’s pretty basic CPU knowledge there—you might want to reevaluate exactly how smart you think you are.
“This argument has been closed since the time you said this: ‘Sure, AS IS it wouldn’t be a great replacement for x86, which is why it hasn’t happened yet…’ The rest of what you are saying is just baseless opinion.”
No, it isn’t. Use your own brain—there’s a reason it wouldn’t be a good replacement: because it WOULDN’T BE just a SIMPLE chip swap—you’d have to rebuild macOS for ARM, test new motherboard chipsets, and provide a Rosetta-style emulation layer to aid the transition from x86 to ARM for legacy apps.
You seem unable to grasp the one situation where it made sense for Apple to leave its established previous architecture: 2006. The G5 was at that point an overheating, underperforming, barely developed chip versus Intel’s and AMD’s ferocious one-upmanship in the x86 arena. AND—for the burgeoning, extremely important portable market—IBM hadn’t even gotten a competitive G5 to Apple for its PowerBooks.
Then in comes Intel, which had both a far better supply chain for its x86 CPUs, a robust support-chipset portfolio, and—the clincher—its new Core architecture, which both outperformed and was far more energy efficient than the current G5s.
NONE of those conditions exist in the current ARM vs x86 battle for Apple. Which, if you’d think for a second, would explain why Apple hasn’t made any serious attempt to ditch x86 for macOS laptops and desktops/servers. ARM may or may not currently have a decent lead in power efficiency versus a comparable x86 CPU (depending on what test you run), but in any case, it’s nowhere near the gap that existed between the PPC G5 and Intel’s new Core series when Apple pulled the plug in ’05/’06.
😂😂😂😂 Says someone who thinks AMD makes chips with better power management than Intel. Where did you copy-paste this from? 😂😂 Good night, dude. Whoever taught you these things should be jailed. I saw some arguments here that you copy-pasted from The Verge without using your head to reason whether that opinion has any basis. I’m done arguing… Go on, copy-paste more to show how well you know what you are saying 😂😂😂
The Verge? What the hell is The Verge? Now you’re making stuff up? Show me where I copied and pasted anything somebody else said. That stuff is completely off the top of my head—it’s fairly common knowledge, and especially known to me since I have a G5 Quad sitting dormant on a shelf on the other side of this room. The very last of the G5 Power Macs, and it needed a 1000 W PSU to power it.
Geez, learn to lose an argument with some dignity. Oh, and BTW? The very processors we already mentioned from AMD (Ryzen/Threadripper) have shown better power efficiency than their equivalent Intel competitors. Bulldozer was a disaster, no argument there. Oh, and back in the days of the NetBurst architecture (especially Prescott), AMD was indeed more power efficient too. But I wouldn’t expect you to know that either, since your grasp of computing tech seems not to extend before 2010.
Hahah 😂😂 okay dude, now you don’t know The Verge… You can give a better explanation. Please stop embarrassing yourself with these copy-pasted arguments… Damn! Oh, by the way, you have a Ryzen Threadripper too on your “shelf”??? 😂😂😂
Dear God, you are delusional. If there is some guy posting on The Verge with my name on it, it’s some other guy with the name Pat D.
I know—it’s a tough thing to believe that there’s more than one guy on this planet who uses this screen name. If you knew how to use Disqus, you could even backtrack the posts I’ve done on this account and see that I have never, ever posted on that site.
I have been to The Verge once in my entire lifetime—and that was today after you accused me of posting on there, just to see what it was. I frequent a total of 3 tech sites—Anandtech, Tomshardware and Extremetech. The only reason I even found your silly original post was doing a search for A11 geekbench performance yesterday because I saw an article in the NY Times about it.
But, nice deflection—accuse me of being somebody else because you have no other comeback.
Oh, and personally, I don’t care whether you think I actually have a G5 Quad or not. It hasn’t been powered up in like 2 years, so for all intents and purposes it’s just a shiny aluminum box with an Apple logo on it. Why? Because nowadays it feels molasses slow and makes an FX-9590 look like a good power-to-watt bargain.
Hahah, you are the one who needs a better comeback, cuz we already agreed the A11 can’t be used in a PC without modifications… until you said I don’t know that Intel uses RISC in their chips. I actually never said that. I only advised that dude to read about RISC and CISC to understand what we were arguing about. Then you brought your copy-pasted arguments from nowhere to try to act knowledgeable… Dude, just sleep.
Of course it can’t—you’d need a new motherboard and a recompile of macOS to use anything other than an x86 chip in an Apple desktop PC. Nobody ever disputed that. But you are completely wrong that Apple couldn’t adapt the A11 to start a new model line of MacBooks and iMacs. Heck—almost all of what you can do on Apple’s desktop and full laptop lineup can already be done on the ARM-powered iPads. It just DOESN’T MAKE FISCAL SENSE FOR THEM TO DO SO.
Oh, and cut it out with the copy-and-paste nonsense. First of all, you’ve been the only one here posting links to other people’s articles, so you’re only shaming yourself. And I still haven’t seen any proof that *I* copied and pasted anything from anywhere, so I love how you pulled that out of thin air.
Did you read the part where I said I don’t need an explanation about Apple’s R&D? Did I ever say Apple can’t develop anything with the A11? My argument was about swapping this chip into current Macs.
Oh my God, you don’t even remember your own arguments anymore. Here’s a hint: scroll back to the top of this lengthy thread and read what you wrote about the chip not being able to handle desktop computing because its RISC design would cause it to melt. Then scroll down to where the other guy told you he already runs Linux and a window manager on an older ARM chip for everyday computing. Then scroll down to him telling you that x86 chips have been RISCs with an x86 instruction decoder attached—for 20 years now.
Would Apple put a completely unmodified A11 in a Mac Pro tomorrow if they suddenly decided to ditch Intel? Heck no. As it exists in the iPhone, it’s a low-clocked PHONE CPU: no more heat dissipation and power consumption than needed. What doesn’t make sense in your argument is how you seemingly think the A11 couldn’t be adapted for the higher clocks and power consumption of PC or laptop workloads, which is silly because the upcoming (2018?) A11 iPads are already really, really close to being just a laptop with no attached keyboard.
Wow, an iPad becoming a laptop 😂😂😂 If you ain’t the most biased dude I have ever argued with, I don’t know what you are… And what I said was that the die size of ARM mobile CPUs limits them significantly in handling heat… They are built solely to be low-power-usage chips. Do you know what die size means?
“Wow, an iPad becoming a laptop 😂😂😂 If you ain’t the most biased dude I have ever argued with, I don’t know what you are”
Biased toward WHAT? Apple? LMAO—the only Apple product I currently own is that Power Mac G5 Quad, which hasn’t been used in like forever. Yup, just keep slinging these baseless accusations—I’m a guy posting on The Verge, I’m an Apple fanboy, I’m copying and pasting stuff from other people… 0 for 3 so far, keep ’em coming!
“And what I said was that the die size of ARM mobile CPUs limits them significantly in handling heat…”
What you are still not acknowledging to any degree is that the A11 ARM core, like Intel/AMD x86, is not limited to its smallest form-factor adaptation. You seem to think that because Intel made the Atom, they wouldn’t be able to make i3, i5 and i7 chips for desktops. In the Apple world, the greater capabilities, clock speeds, and power usage of the iPad versions of the same chips prove that it is possible. You’re arguing that something isn’t possible when it already exists.
“My argument was about swapping this chip into current Macs”
And nobody in this entire thread told you that Apple would do that—put the *exact* same chip as in a smartphone into a desktop. They don’t even do that for the iPad!
The point, which apparently flew over your head, is that they COULD make an A11 for desktops which wouldn’t have the thermal and clock limits of a smartphone. Think of this in Intel terms—it’s like looking at a smartphone with an Atom in it and concluding that Intel couldn’t possibly put that architecture in a desktop.
We keep circling around the same spot. Okay dude, I concede.😂
Reading these debates is like watching someone argue with a flat-earther. And the flat-earther being DK.
When people think about possibilities and you just insist on thinking about impossibilities, trying to justify your weak idea by laughing, that doesn’t make you smarter, brother. In brief, you clearly lost those debates.
Well, he’s not being nice about it or leading you from point A to point B very easily, but he is right.
x86-64 is just the core instruction set; there are several other functions of a modern Intel or AMD chip: handling huge amounts of PCIe data, DDR4 data, other bridges, etc.—not only handling it but issuing interrupts on it to modify, redirect, multiplex, etc. There is no way an A11 in its current form can do this in 5 W.
Apple’s engineers created a very lean but restricted chip that handles 8 GB of LPDDR4 RAM and 4 lanes of PCIe Gen 3 data at most. It doesn’t have AVX of any sort. Sure, there are other things it handles, like communication with the modem/radio, data storage, etc., but those things can be handled by a full PC CPU—quite easily, really.
I don’t deny that the A11 is an amazing chip that can crunch JavaScript (with a good browser) and handle disk access well (though nowhere near as fast as M.2), but really, guys… there’s no magic in an A11. It’s the world’s best mobile processor, but hardly the best and fastest processor in the world (a toss-up between Xeon, PowerPC and Nvidia Volta, depending on the workload). There are a few AI chips Google has that will probably be among the best in the future, but only time will tell, and that’s also another topic for another day.
Can a version of the A11 someday run a full laptop? Sure—a modified A11 could sit at the heart of a fully blown-up ARM-like chip, but it won’t be 5 W, that’s for sure.
I don’t think so. The instruction set is not relevant, since all major OSes have already supported the ARM architecture for some time. Heating is also not a problem: this chip is already designed to sustain extended stress periods, albeit with throttling, using the very minimal heatsink that fits inside a phone. Pair it with a normal laptop cooler and it will probably run fully stressed indefinitely without any throttling at all.
This is quite impressive, but then again, benchmarks run the CPUs at max speed, while in real-life usage heat and cooling lead to throttling. It would be interesting to see such a chip with active fan cooling, like desktop CPUs have, and see how it performs.
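The cooling point can be sketched with a deliberately crude model: if dynamic power grows roughly with the cube of clock speed, the clock a chip can sustain is capped by how much heat the cooler can remove. Every number here is invented purely for illustration, not a measurement of the A11:

```python
def sustained_clock_ghz(max_clock_ghz: float, power_at_max_w: float,
                        cooling_budget_w: float) -> float:
    """Toy model: power ~ clock**3, so the sustainable clock scales
    with the cube root of the available cooling budget."""
    if cooling_budget_w >= power_at_max_w:
        return max_clock_ghz  # cooler removes all the heat: no throttling
    return max_clock_ghz * (cooling_budget_w / power_at_max_w) ** (1 / 3)

# Hypothetical 2.4 GHz chip drawing 8 W flat out:
print(f"phone chassis (3 W budget):  {sustained_clock_ghz(2.4, 8.0, 3.0):.2f} GHz")
print(f"laptop cooler (15 W budget): {sustained_clock_ghz(2.4, 8.0, 15.0):.2f} GHz")
```

Under these made-up numbers the phone sustains only part of its peak clock while the laptop cooler sustains all of it, which is why burst benchmark scores and sustained performance can tell different stories.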
It’s extremely infuriating that ignorant Android sheep keep saying Apple doesn’t innovate. The A11 Bionic has double the single-core performance of all the other high-end chips save for Apple’s own A10 Fusion. Its multi-core performance is also ahead by a considerable amount. What more does Apple have to do for them to finally admit that Apple innovates?
apples and oranges
one is RISC, one is CISC
There is no point comparing benchmarks across two different architectures; no matter how good a benchmark you write, there won’t be a one-to-one equivalence between points, because of platform-specific optimizations…