Apple Silicon (ARM64 based), aka "ASi"


DT

Thread Starter
Site Champ
Posts
732
Reaction score
1,382
Pretty excited about this. There's an event scheduled on Tuesday (and most who are also on TOP have probably seen it).

I'm on deck to get the little G a new notebook, which could double as a "travel machine" for me too (er, assuming that ever happens again). Looks like the word on the street (WOTS) is a couple of 13" laptop models, something like a Pro and another like an Air. In fact, some people think it's going to be the same FF/chassis/display, but with ASi internals. I tend to believe Apple will want their new SoCs (Systems on a Chip) to be rolled out with a little more fanfare vs. just repurposing most of the existing design (at the very least, a 13" machine with an edge-to-edge display, so it fits into what was previously a 12" chassis/case).

I'm also hoping for a Mac Mini ASi at this event - for those who don't know, Apple distributed development machines so companies/devs could start porting their code over to native ASi, and that was an A12Z-based SoC in a current Mini box.
 

DT

Thread Starter
Site Champ
Posts
732
Reaction score
1,382
Or, at least, by April.

Yeah, I'm not in a huge rush to buy a replacement Mini, I just want it to see some love :) I would expect, since it was the "dev box", it'll definitely see an ASi upgrade in 2021 at the latest - and it's a great option for that chassis (high performance, low power, potentially a much improved GPU).
 

Yoused

Site Champ
Posts
380
Reaction score
686
I could imagine (a stretch, perhaps) that they would build new variants of the SoC, like with a 6-big/2-little core array for the A12N (notebook) and an 8-big in the A12M for the mini. Probably more than they want to spend on major processor variants right now, though.
 

thekev

Power User
Posts
150
Reaction score
301
I might pick one up depending on price, because it would be the first practical dev box I know of that directly supports the Neon and possibly Helium ISAs, even though Helium targets low-power embedded processors. Some of the instructions differ quite a bit from Intel's, particularly in the realm of predication, and auto-vectorization in gcc and clang is still quite limited, particularly by interactions with control-flow logic. This makes me want to run my own tests as soon as something is available.
 

Arkitect

Power User
Posts
229
Reaction score
656
I share the excitement, but I fear my Silicon commitment will be a few months or years away.
My Rhino 3D dependence is total. 😩
 

PearsonX

Site Champ
Posts
927
Reaction score
1,724
Pretty excited about this. There's an event scheduled on Tuesday (and most who are also on TOP have probably seen).

I'm on deck to get the little G a new notebook, that could double up as a "travel machine" for me too (er, assuming that ever happens again). Looks like the WOTS is a couple of 13" laptop models, something like a Pro and another like an Air, in fact, some people think it's going to be the same FF/chassis/display, but with ASi internals. I tend to believe Apple will want their new SOCs (System on a Chip) to be rolled out with a little more fanfare vs. just repurposing most of the existing design (at the very least, a 13" machine with an edge-to-edge display, so it fits into what was previously a 12" chassis/case).

I'm also hoping for a Mac Mini ASi at this event - for those who don't know, Apple distributed development machines so companies/devs could start porting their code over to native ASi, and that was an A12X/Z based SOC in a current Mini box.
I've been wondering what's exciting about this. I foresee a perpetual compatibility nightmare that nullifies the few advantages I personally can imagine. So really curious: why is this more exciting than concerning?
 

thekev

Power User
Posts
150
Reaction score
301
I've been wondering what's exciting about this. I foresee a perpetual compatibility nightmare that nullifies the few advantages I personally can imagine. So really curious: why is this more exciting than concerning?

We'll see how it works out. The low-level aspects of ARM have some nice features that are really missing on x86_64. Intel has increased standard vector widths from 128 bits with the SSE variants to a standard 256-bit width with AVX, and then to 512 bits with AVX512. I don't expect them to go wider at the moment, because 512 bits encompasses an entire cache line, so any load that doesn't hit in L1 requires a minimum of 2 fetches once you go beyond that. Right now it's still 1 fetch if aligned to a 64-byte boundary.

I deleted several paragraphs of nerd stuff here that may not be interesting to anyone but myself... I can go into much greater detail if necessary.

On ARM, we have Neon today, which retains the 128-bit width and is much simpler from a code-generation standpoint. Their charted course seems to favor SVE in general, which moves the problem of handling vectorization edges (commonly done by loop peeling or folding via predication) from the compiler level to the system's runtime.

SVE may not lead to optimal peak performance, but it should be much simpler from a compiler standpoint. Compilers have to deal with optimization concerns both at the level of vectorization and latency hiding in potentially compute bound problems. The issues of scalarizing or predicating loop tails in these things are much more complicated than they should be, due to weird interactions between Intel's favored opcodes and the range of hardware that supports them.

The second thing they released was Helium, which targets low power chips. It's meant to favor pipelining of instructions.

As a TLDR, even if ARM's peak performance numbers do not surpass those attainable on x86_64, the optimization passes required for an ARM back end may be considerably simpler and will hopefully further limit the shrinking number of cases where hand written assembly is still used.
 

PearsonX

Site Champ
Posts
927
Reaction score
1,724
We'll see how it works out. The low-level aspects of ARM have some nice features that are really missing on x86_64. Intel has increased standard vector widths from 128 bits with the SSE variants to a standard 256-bit width with AVX, and then to 512 bits with AVX512. I don't expect them to go wider at the moment, because 512 bits encompasses an entire cache line, so any load that doesn't hit in L1 requires a minimum of 2 fetches once you go beyond that. Right now it's still 1 fetch if aligned to a 64-byte boundary.

I deleted several paragraphs of nerd stuff here that may not be interesting to anyone but myself... I can go into much greater detail if necessary.

On ARM, we have Neon today, which retains the 128-bit width and is much simpler from a code-generation standpoint. Their charted course seems to favor SVE in general, which moves the problem of handling vectorization edges (commonly done by loop peeling or folding via predication) from the compiler level to the system's runtime.

SVE may not lead to optimal peak performance, but it should be much simpler from a compiler standpoint. Compilers have to deal with optimization concerns both at the level of vectorization and latency hiding in potentially compute bound problems. The issues of scalarizing or predicating loop tails in these things are much more complicated than they should be, due to weird interactions between Intel's favored opcodes and the range of hardware that supports them.

The second thing they released was Helium, which targets low power chips. It's meant to favor pipelining of instructions.

As a TLDR, even if ARM's peak performance numbers do not surpass those attainable on x86_64, the optimization passes required for an ARM back end may be considerably simpler and will hopefully further limit the shrinking number of cases where hand written assembly is still used.
Fantastic explanation, too bad I only understood every other word:D
 

thekev

Power User
Posts
150
Reaction score
301
Fantastic explanation, too bad I only understood every other word:D
This might provide a partial lexicon.

One of the guys at the other place actually made me aware of SVE. SVE stands for Scalable Vector Extension. It means that, as opposed to issuing instructions that operate on operands of a fixed data width, you write code that is agnostic to the vector length; the hardware implementation chooses the width, and software queries it at runtime.

SVE

Helium

The other stuff dealt with issues that arise in general programming involving compute-bound problems, where different iterations of a problem can be computed independently. These problems are broken into loops, which execute instructions over a bounded range multiple times. We sometimes compute these in parallel using SIMD logic (vectors or sequences of operations) or superscalar pipelines (issuing instructions on different ports in a single cycle) when different iterations of a loop satisfy certain independence constraints.

Problem sizes aren't always evenly divisible by the number of operands that should be involved in a single loop iteration. To account for this, we can apply a type of mask to control what executes (predication) or fall back to computing the "tail" iterations one at a time (scalarization).

partial disambiguation of instruction models

From my perspective, ARM's direction is better for automated conversion of sequential logic to parallel logic, which reduces the direct involvement of programmers in these problems and may allow them to be more easily folded into generic or system-specific APIs rather than architecture-specific ones. Such optimizations are under-utilized right now, because they're difficult to apply in large-scale software that is built for multiple hardware targets and environments.

It's enough to make me maybe think about buying another Mac in spite of past issues.
 

DT

Thread Starter
Site Champ
Posts
732
Reaction score
1,382
I've been wondering what's exciting about this. I foresee a perpetual compatibility nightmare that nullifies the few advantages I personally can imagine. So really curious: why is this more exciting than concerning?

Without getting into instruction set extensions, microcode analysis, etc., between the two processor families (see @thekev's terrific technical [semi] deep-dive), I'll address the more "consumer facing" particulars :)

1) An SoC provides the CPU, GPU, DSP, ML engine, etc., in the same "chip", allowing for unified memory and task-specific distribution to the most effective subsystem (i.e., __potentially__ much more efficient / faster)

2) This will (read: should) allow Apple to improve their CPUs/SoCs more rapidly; they've spent a lot of time waiting on Intel

3) Improved software to hardware optimization (see 1)

4) Lower power consumption (at the same performance) is outstanding for portables, as are reduced thermals (better sustained performance under load). If we can get a 13" Pro with i7 performance, a much better GPU (vs. the mediocre iGPU in Intel configs), that also runs for 13-15 hours ... yes please.

I'm especially excited about a high-performance Mini that includes a much more powerful GPU (as that's the only real drawback, and eGPUs still seem like a clusterf***).

OK, so downsides: right, apps need to be ported to the new architecture, but __should__ run under Rosetta 2 in the interim.

So obv. the OS and all the "bundled" apps will be native from Day 1: Messages, Photos, Calendar, Mail, Music/Apple TV/Podcasts/Books, Notes, Reminders, you know, all the included productivity apps. I'd also assume all Apple-developed apps - very specifically Xcode, since that was distributed with the "Dev Box" - but also things like Keynote, Numbers, iMovie, GarageBand, and Pro apps like Logic and Final Cut.

Plus, native apps from all the big 3rd party players: MS (Office), Adobe (PS, AI, ID), Google (Chrome) and from what I'm following on several product sites, many other popular products like Evernote, Pixelmator, Day One, VLC.

I expect pretty lightweight apps - for example, two I use, Postman (an API testing app) and Sublime Text 2 (a powerful text editor) - to run just fine under R2 until (assuming it's when, not if ...) they get ported to native ASi.
 

DT

Thread Starter
Site Champ
Posts
732
Reaction score
1,382
All the above being said (including my excitement for a much more powerful GPU in a Mac Mini), I have some specific needs for Winders™, and while some of that I've moved into cloud-based services, some I haven't (and it's pretty critical), so I've got a lot to sort out before I make the leap (outside of a notebook, which wouldn't need my VM apps).
 

PearsonX

Site Champ
Posts
927
Reaction score
1,724
Without getting into instruction set extensions, microcode analysis, etc., between the two processor families (see @thekev's terrific technical [semi] deep-dive), I'll address the more "consumer facing" particulars :)

1) An SoC provides the CPU, GPU, DSP, ML engine, etc., in the same "chip", allowing for unified memory and task-specific distribution to the most effective subsystem (i.e., __potentially__ much more efficient / faster)

2) This will (read: should) allow Apple to improve their CPUs/SoCs more rapidly; they've spent a lot of time waiting on Intel

3) Improved software to hardware optimization (see 1)

4) Lower power consumption (at the same performance) is outstanding for portables, as are reduced thermals (better sustained performance under load). If we can get a 13" Pro with i7 performance, a much better GPU (vs. the mediocre iGPU in Intel configs), that also runs for 13-15 hours ... yes please.

I'm especially excited about a high-performance Mini that includes a much more powerful GPU (as that's the only real drawback, and eGPUs still seem like a clusterf***).

OK, so downsides: right, apps need to be ported to the new architecture, but __should__ run under Rosetta 2 in the interim.

So obv. the OS and all the "bundled" apps will be native from Day 1: Messages, Photos, Calendar, Mail, Music/Apple TV/Podcasts/Books, Notes, Reminders, you know, all the included productivity apps. I'd also assume all Apple-developed apps - very specifically Xcode, since that was distributed with the "Dev Box" - but also things like Keynote, Numbers, iMovie, GarageBand, and Pro apps like Logic and Final Cut.

Plus, native apps from all the big 3rd party players: MS (Office), Adobe (PS, AI, ID), Google (Chrome) and from what I'm following on several product sites, many other popular products like Evernote, Pixelmator, Day One, VLC.

I expect pretty lightweight apps - for example, two I use, Postman (an API testing app) and Sublime Text 2 (a powerful text editor) - to run just fine under R2 until (assuming it's when, not if ...) they get ported to native ASi.
Amazing explanations. You guys are talking to someone who relies very heavily on BASH scripting of software coded by underpaid biomedical engineering students. There's only so deep I can go into the troubleshooting... even last week I bumped into an issue with Catalina messing up old proprietary software. Hence, Apple Silicon may be the final push for me toward a PC or Linux transition.
 

Arkitect

Power User
Posts
229
Reaction score
656
I’m just hoping this isn’t going to be a revisiting of the OS 9 to 10 horse shit with tons of incompatible software that required paid upgrades that in some cases took years.
I think that is inevitable.
Software developers will have to spend, and they'll want to recoup the costs.
 

Chew Toy McCoy

Site Champ
Posts
948
Reaction score
1,632
I think that is inevitable.
Software developers will have to spend, and they'll want to recoup the costs.
I got a new 27” iMac this year and I don’t plan on replacing it anytime soon. As it is, I have a significant amount of music and video software, so it’s not a good idea to upgrade to the newest macOS until at least 6 months after release. There are several developers I can count on to send a “Don’t upgrade yet!” email every year like clockwork. If this chip change requires major new software changes, some developers will just throw in the towel, and there will be some clunky bridging apps released that never quite work well.
 

DT

Thread Starter
Site Champ
Posts
732
Reaction score
1,382
Amazing explanations. You guys are talking to someone who relies very heavily on BASH scripting of software coded by underpaid biomedical engineering students. There's only so deep I can go into the troubleshooting... even last week I bumped into an issue with Catalina messing up old proprietary software. Hence, Apple Silicon may be the final push for me toward a PC or Linux transition.

I've used some *nix flavor for, umm, well, decades, so I'm reasonably comfortable in that space - I've worked with Sun, SCO, SGI, all sorts of variants, later different Linux flavors, etc. - and what I love about Apple is you get the *nix underpinnings, in nicely designed hardware, with a readily available support channel (where the OS and hardware manufacturer, being the same, don't point fingers).

So I'm all in on staying with Apple as my primary computing platform, even though a non-trivial amount of my development work is actually on MS-specific stacks. To the latter point, I'm hoping those tools continue to evolve for macOS (MS has done an amazing job in the last several years of open-sourcing and providing cross-platform solutions, largely driven by current CEO Satya Nadella).
 

thekev

Power User
Posts
150
Reaction score
301
Amazing explanations. You guys are talking to someone who very heavily relies on BASH scripting of software coded by underpaid biomedical engineering students. There's only so deep I can go into the troubleshooting...even last week I bumped into issue by Catalina messing up old proprietary software. Hence, Apple silicon may be the last push for me for a PC or Linux transition.

Post the issue on here or the other place next time. I don't use OSX that much these days, but I can take a look. Apple has increasingly restricted access to sensitive directories where a lot of build systems and installers would conventionally place compiled libraries. It's also possible that Apple deprecated something. I seem to recall needing to grab a brew version of OpenSSL once to resolve some dependency, and other various weird stuff. Apple's versions aren't typically open source, so it's not like anyone can push patches resolving the conflicts.
 

ronntaylor

Power User
Posts
110
Reaction score
269
My current MacBook Air is on its last legs - at least a couple of random shutdowns per day - so I'm getting an M1 MBA. The MBP is too much machine for me, and I can't justify the added cost for better sound and a better display. I'm not worried about any programs, as I left Microsoft behind; if I need something, I'll find a workaround. Of course, I'll sleep on it and will probably buy tomorrow afternoon/early evening. And then my current MBA will probably die just before the new machine arrives. 🤷‍♂️
 

DT

Thread Starter
Site Champ
Posts
732
Reaction score
1,382
My current MacBook Air is on its last legs - at least a couple of random shutdowns per day - so I'm getting an M1 MBA. The MBP is too much machine for me, and I can't justify the added cost for better sound and a better display. I'm not worried about any programs, as I left Microsoft behind; if I need something, I'll find a workaround. Of course, I'll sleep on it and will probably buy tomorrow afternoon/early evening. And then my current MBA will probably die just before the new machine arrives. 🤷‍♂️


Nice! The new MBA is pretty fantastic: upgraded display (same color gamut as the Pro, just slightly lower nits ... er, I think ... :D), INSANE battery life, a very notable performance increase - I mean, the GPU performance is now pretty spectacular (rivaling lower-end discrete GPUs). And wow, no fans, completely silent operation, that will be amazing.
 