Apple chipmaking stumbles led to less impressive iPhone 14 Pro | AppleInsider
A mistake in developing the A16 Bionic may have led Apple to release a less performant processor for the iPhone 14 Pro, which may be indicative of issues within Apple's chip team.
I believe this for not one second. The reporting has discussions of “early prototypes,” and is based on the idea that CPU designers design something, send it out to make a prototype, then find out how much power it consumes. At that point, supposedly, they have time to completely replace the GPU and try again.
Hilarious.
1) the power consumption is highly predictable at the design stage using software. Such software is commercially available, but it’s easy enough to predict that at AMD we got within 5%-10% using a perl script I wrote. You can actually get pretty close by multiplying your total capacitance × your voltage squared × half your clock frequency (see the sketch after these points).
2) you don’t make “prototypes.” When you tape out a chip, the expectation is that it will be the production chip. You may very well have to do additional spins to fix unforeseen problems. A cross-coupling issue your software simulations didn’t account for, a logic bug that didn’t turn up in verification, etc. But your fixes will not include completely ripping out the GPU and then fitting a new GPU into the same chip area (because you certainly aren’t going to completely change the size and shape of the chip - that would require revisiting the design of every other part of the chip). You move some metal around. Maybe, worst case, you have to add new logic gates.
3) If there WERE a power prediction problem, it would have affected the whole chip. The only way it could happen is if capacitance was wildly mispredicted, which would affect the CPU too, and, since gate delay scales with that same capacitance, would also mean that predicted clock speed would be way off (again, see the sketch below).
4) there simply isn’t time in the design cycle to take multiple shots like this.
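Since points 1 and 3 both lean on the same dynamic-power rule of thumb (the standard P ≈ activity × C × V² × f, where the ~0.5 activity factor is the “half your clock frequency” shorthand), here’s a minimal sketch of it in Python. Every number in it is invented for illustration; nothing here is a real A16 or AMD figure.

```python
# Dynamic power rule of thumb from point 1: P ~= activity * C * V^2 * f.
# "Half your clock frequency" is the ~0.5 activity factor: on average,
# roughly half the nodes toggle on a given cycle.
# All numbers below are made up for illustration.

def dynamic_power_watts(switched_cap_f, voltage_v, clock_hz, activity=0.5):
    return activity * switched_cap_f * voltage_v**2 * clock_hz

# e.g. 10 nF of total switched capacitance at 0.9 V and 3.2 GHz:
p = dynamic_power_watts(10e-9, 0.9, 3.2e9)
print(f"predicted power: {p:.1f} W")  # ~13 W

# Point 3: if capacitance were mispredicted by, say, 30%, power (linear
# in C) and RC gate delay (also linear in C) would both be off together,
# so the predicted clock speed would be wrong too, chip-wide.
c_error = 1.3
print(f"power off by {c_error - 1:.0%}, achievable clock down to "
      f"{1 / c_error:.0%} of prediction")
```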
What might have happened?
The design team found that, to hit their performance goals with the architecture they were going with, they’d need too much power, possibly because the new thing was originally intended for 3nm. They would have discovered this at the design stage, not via “prototypes.” So before they taped anything out, they put the new thing on the back burner and went with the old thing this time around.