Benchmarking monolithic vs heterogeneous SoC

tomO2013

Power User
Hello friends :)

I’m interested to get your thoughts on something that I have been mulling over, typed, retyped, and almost posted over at the other place!

I see lots of talk of Geekbench single-core and multi-core tests over at the other place (alongside Cinebench).

I do see and understand the value of benchmarks for getting a general baseline idea of a chip’s characteristics and performance across a plethora of typical cross-platform software tasks. To reiterate: I get the value of testing and benchmarking — some information and insight into a chip’s characteristics is better than nothing!
However, in a world of heterogeneous Apple Silicon SoC design — with unified memory architecture, CPU, GPU, Neural Engine, and additional processors and co-processors that are only accessible when we optimize via the vendor’s provided API frameworks — does that in many ways diminish the traditional value of such benchmarks (even when Apple marketing uses them to suit its own purposes)?

For example, measuring on-CPU ability to encode H.265 is less relevant when developers on a particular platform can tap into a co-processor to accelerate encoding, using the platform’s inherent API stack to do so! Or maybe even encryption/decryption: is it less relevant when Intel accelerates AES workloads with dedicated AES-NI instructions while Apple Silicon handles them with the Arm cores’ own crypto extensions?
I was thinking about this more and more with the recent discussion of Oryon over at the other place. I had to wonder whether comparing SoCs at the CPU-to-CPU level is even going to tell the full picture when we look at each chip in the context of its platform and entire system-on-chip, with the aforementioned accelerators and co-processors.
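The encoding point generalizes: the number a benchmark reports depends on which code path the software actually takes, not just on the silicon. Hardware offload itself can’t be demonstrated in portable code, but the software analogue can. This Python sketch (a hypothetical illustration, not from the thread) hashes the same payload two ways — one call that delegates straight to the optimized library, and a Python-driven chunked loop where interpreter overhead dominates the measurement:

```python
import hashlib
import time

data = b"x" * (1 << 20)  # 1 MiB payload (arbitrary illustrative size)

# Path 1: one-shot hash. hashlib delegates to the platform's optimized
# library, which may in turn use dedicated CPU hash instructions.
start = time.perf_counter()
one_shot = hashlib.sha256(data).hexdigest()
t_one_shot = time.perf_counter() - start

# Path 2: the same nominal task driven from a Python loop in 64-byte
# chunks, so interpreter overhead dominates what gets measured.
start = time.perf_counter()
h = hashlib.sha256()
for i in range(0, len(data), 64):
    h.update(data[i:i + 64])
chunked = h.hexdigest()
t_chunked = time.perf_counter() - start

# Identical answer either way; any timing difference comes from the
# code path taken, not from the hardware changing underneath.
assert one_shot == chunked
print(f"one-shot: {t_one_shot:.5f} s, chunked: {t_chunked:.5f} s")
```

A “CPU hashing benchmark” built on either path would report honest but very different numbers — the same trap as comparing on-CPU encoding against a platform’s accelerated API.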

Then again… I’m probably overthinking this! It was nevertheless something that I’ve been mulling over for a while, and I’m really keen to hear your perspectives here :) Again, to reiterate: I’m not questioning the value of benchmarking or cross-platform benchmarking. I’m asking whether the modern SoC throws gas on the fire of the optimization-versus-general-purpose debate we had going back many years (3DNow! vs SSE, SSE vs the PowerPC G4’s AltiVec, etc.).
For Apple’s full-widget platform… do benchmarks in 2023 hold as much value as they did back in 2003?

Keen to hear your opinions and thoughts.
Tom.
 
Last edited:

leman

Site Champ
Benchmarking is inherently hard, and always has been. Not because of heterogeneous computing or accelerators, but because one rarely stops to think “what is it that I am doing, and what is it that I am comparing, exactly?” And this requires a good understanding of hardware and software, as well as a willingness to pay attention to details. Most benchmark results are discussed in the very vague context of “how fast is stuff” — a context that very quickly breaks apart when one starts to apply some basic scrutiny to it. And it doesn’t help that benchmarks are a topic of popular public debate, where any attempt to be rigorous or do things properly promptly flies out the window in the face of the angry mob’s non-existent attention span and inability to employ critical thinking. And since there is no critical thinking, benchmarks are widely used to obfuscate and manipulate. All in all, it’s a very sad state of affairs.

Much could be improved by better communication, documentation, and result presentation by the benchmark authors themselves. They could be more upfront about what the benchmark does and where its limitations lie. They could report the variance between the individual test runs instead of aggregating everything into a single data point. But then again, where is the fun in that, right? Being vague and non-specific is where the money is.
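The suggestion about reporting variance between runs can be sketched in a few lines. This is a hypothetical illustration (the workload and run counts are made up), showing a measurement that reports the spread across runs rather than collapsing everything into a single data point:

```python
import statistics
import timeit

# A toy stand-in for a benchmark kernel (the workload is made up).
def workload():
    return sum(i * i for i in range(10_000))

# Repeat the measurement and report the spread between runs instead
# of aggregating everything into one number.
runs = [timeit.timeit(workload, number=50) for _ in range(10)]

print(f"min:    {min(runs):.4f} s")
print(f"median: {statistics.median(runs):.4f} s")
print(f"stdev:  {statistics.stdev(runs):.4f} s")
```

Even this much — a minimum, a median, and a standard deviation — tells a reader whether the headline number is stable or whether the machine was busy doing something else during half the runs.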

Regardless, I do think that benchmarks are important and relevant. One just needs to pay attention and look at them critically.
 