Less RAM needed than before?

Chew Toy McCoy

Pleb
Site Donor
Posts
7,559
Reaction score
11,811
First let me say I still agree more is better. I’m fairly tech stupid but I sometimes come across an article saying modern computers can get away with less RAM than was required on older computers to perform the same task. A possible oversimplification could be if you were fine on a 10-year-old computer with 32GB of RAM then you’ll probably be able to get away with 16GB on a modern computer. Is there some truth to that? I thought in a lot of situations RAM and the CPU perform completely different tasks, but do some traditional RAM duties now take place on the CPU?
 

rdrr

Elite Member
Posts
1,229
Reaction score
2,056
I can only speak to Macs and Linux desktops. SSDs make it so you don't need as much memory as you did with the older spinning disks. However, YMMV based on how intensive the workloads are.
 

Yoused

up
Posts
5,623
Reaction score
8,942
Location
knee deep in the road apples of the 4 horsemen
First let me say I still agree more is better. I’m fairly tech stupid but I sometimes come across an article saying modern computers can get away with less RAM than was required on older computers to perform the same task. A possible oversimplification could be if you were fine on a 10-year-old computer with 32GB of RAM then you’ll probably be able to get away with 16GB on a modern computer. Is there some truth to that? I thought in a lot of situations RAM and the CPU perform completely different tasks, but do some traditional RAM duties now take place on the CPU?
To get a bit technical: CPUs have more onboard capacity, which allows routines to communicate with each other without having to use RAM as much. Compilers and OS protocols have moved toward doing more stuff inside the CPU, as it is faster that way.

RAM does not actually do anything beyond hold data and allow the CPU to work on it. If you are doing large work, having the RAM to hold all your data is extremely useful. If you have files that spread out big when loaded (that is, decompress, as the majority of media-type files do), your computer will always function better if you have enough RAM to hold that datasplat, and there is presently just no substitute for having enough RAM. Whether less will be good enough really depends on what kind of work you are doing.
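
For a concrete feel for that "datasplat", here is a rough Python sketch, purely illustrative and not tied to any particular app:

```python
import zlib

# Compressed data on disk can occupy many times more RAM once it is
# decompressed for actual use.
compressed = zlib.compress(b"A" * 1_000_000)   # highly repetitive, compresses well
print(f"compressed size:   {len(compressed):>9,} bytes")

decompressed = zlib.decompress(compressed)
print(f"decompressed size: {len(decompressed):>9,} bytes")
# Real media (JPEG, H.264, ...) behaves the same way: a 5 MB photo can
# easily become a 100+ MB raw pixel buffer in memory.
```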
 

mr_roboto

Site Champ
Posts
288
Reaction score
464
First let me say I still agree more is better. I’m fairly tech stupid but I sometimes come across an article saying modern computers can get away with less RAM than was required on older computers to perform the same task. A possible oversimplification could be if you were fine on a 10-year-old computer with 32GB of RAM then you’ll probably be able to get away with 16GB on a modern computer. Is there some truth to that? I thought in a lot of situations RAM and the CPU perform completely different tasks, but do some traditional RAM duties now take place on the CPU?
IMO, there is very little truth to this idea. The only sliver of truthiness is what @rdrr said: SSDs make swapping faster, so sometimes a low-RAM computer will perform acceptably while swapping where before, with a spinning disk, it would have been horrible. It still won't perform nearly as well as it would if you had enough RAM to avoid swapping at all.
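
If you're curious whether a Mac is leaning on swap right now, there's a quick macOS-specific check (just a sketch; the sysctl is real, but the sample output below is made up):

```python
import subprocess

# Query the kernel's swap statistics (macOS only).
out = subprocess.run(["sysctl", "vm.swapusage"],
                     capture_output=True, text=True)
print(out.stdout.strip())
# Illustrative output: vm.swapusage: total = 2048.00M  used = 1333.25M  free = 714.75M
# A persistently large "used" on a fast SSD can feel fine day to day, but it
# still means the machine is short on RAM for that workload.
```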

Your thought on the roles of CPU and RAM is correct, and more broadly than you knew: it's not merely "a lot" of situations, it's all of them. RAM is storage for both program code and data; CPUs are devices which read and write RAM as they execute the programs stored in RAM.

To get a bit technical: CPUs have more onboard capacity, which allows routines to communicate with each other without having to use RAM as much. Compilers and OS protocols have moved toward doing more stuff inside the CPU, as it is faster that way.
Can you name anything which actually reduces the amount of RAM allocated and used by a process? I mean, sure, modern CPUs have stuff intended to accelerate inter-process communication which tries to avoid round trips all the way out to DRAM chips and back, but in the end it's just caches and similar stuff to make the memory accesses faster and more local rather than anything which reduces the amount of RAM you need to do any given task.
 

Andropov

Site Champ
Posts
620
Reaction score
779
Location
Spain
Well there's another possible (likely, even!) scenario here: maybe the old computer didn't need the 32GB either. Otherwise, yeah, what @mr_roboto said.

To be fair, extra RAM always has some uses, at least on macOS. That RAM would have been used for something: I believe macOS caches copies of commonly used files when there's plenty of RAM available. That's particularly useful with spinning disks or some of the first SSDs, and less important on computers with fast NVMe SSDs. But if you needed more than 16GB of RAM for application memory, no amount of extra CPU, GPU or SSD speed is going to make up for it.
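
The file-cache effect is easy to see for yourself. A minimal sketch (the path is a placeholder for any large local file):

```python
import time

PATH = "/tmp/some_large_file.bin"   # placeholder: any large-ish local file

def timed_read(path):
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(1 << 20):      # read in 1 MiB chunks
            pass
    return time.perf_counter() - start

print(f"cold read: {timed_read(PATH):.3f}s")   # served from disk (if not cached yet)
print(f"warm read: {timed_read(PATH):.3f}s")   # served from RAM by the OS file cache
```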
 

fischersd

Meh
Posts
1,218
Reaction score
862
Location
Coquitlam, BC, Canada
No-one even talked about the days of upgrading the L2 cache on your sysbrd? :)

As others have suggested, you likely needed more RAM because the pagefile on an old spinning disk was painfully slow.

Nowadays, with an SoC (system on a chip), you have all of your RAM closely integrated with your CPU (in the same package), so the pathway is wide and insanely fast compared to the systems of years ago.
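
A crude way to get a feel for that pathway (a Python sketch, nowhere near a proper memory benchmark, so treat the number as illustrative only):

```python
import time

SIZE = 512 * 1024 * 1024      # 512 MiB buffer
src = bytearray(SIZE)

start = time.perf_counter()
dst = bytes(src)              # one read pass + one write pass over the buffer
elapsed = time.perf_counter() - start

# Two passes over SIZE bytes; real bandwidth tests control for caches,
# page faults, and interpreter overhead, which this does not.
print(f"~{2 * SIZE / elapsed / 1e9:.1f} GB/s effective copy rate")
```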

How much RAM you need depends on the apps that you're running. Very large spreadsheets, databases and video editing all still take as much memory as you can throw at them for optimal performance.
 

throAU

Site Champ
Posts
257
Reaction score
275
Location
Perth, Western Australia
The working set in memory is probably larger than before. However, with SSD-based storage the OS can be more proactive about swapping idle things out to SSD without tanking I/O performance, so you effectively have room for a larger working set. Meaning that in some circumstances, sure, you MAY be able to get away with relying on swap more.

but... if you have a large working set, there's no substitute for RAM.
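
One rough way to put a number on "working set" is to watch a process's resident set size. A sketch (note the unit quirk: ru_maxrss is bytes on macOS but kilobytes on Linux):

```python
import resource

def peak_rss():
    # Peak resident set size: bytes on macOS, kilobytes on Linux.
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

print(f"peak RSS at start:   {peak_rss():,}")

blob = bytearray(200 * 1024 * 1024)   # allocate and zero-fill ~200 MB
print(f"peak RSS after blob: {peak_rss():,}")
# Pages only count against the resident set once they are actually touched.
# Anything the OS swaps out stops being resident, but that doesn't make it
# free: touching it again costs a trip to the SSD.
```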
 

Yoused

up
Posts
5,623
Reaction score
8,942
Location
knee deep in the road apples of the 4 horsemen
Can you name anything which actually reduces the amount of RAM allocated and used by a process? I mean, sure, modern CPUs have stuff intended to accelerate inter-process communication which tries to avoid round trips all the way out to DRAM chips and back, but in the end it's just caches and similar stuff to make the memory accesses faster and more local rather than anything which reduces the amount of RAM you need to do any given task.
IPC is a much broader thing that uses however much memory it uses, and it is mostly not accelerated by hardware. More registers allow subroutines to pass parameters internally, somewhat or greatly reducing stack usage. It may not be a large amount of memory, but I suspect it adds up.

And now, of course, Apple Silicon includes instructions that literally compress/decompress memory 1K at a time, which is one way around using the SSD for swap space, but still seems slightly kludgy.
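
A stand-in sketch of the compressed-memory idea (macOS's compressor is reportedly in the WKdm/LZ4 family, not zlib; zlib is used here only because it's in the Python standard library):

```python
import zlib

page = bytes(4096)                    # one idle, mostly-zero 4 KiB page
packed = zlib.compress(page)
print(f"{len(page)} bytes -> {len(packed)} bytes")   # idle pages squash well

# The OS keeps the small compressed blob in RAM and reclaims the original
# page; touching that page later faults and decompresses, which is far
# cheaper than a round trip to swap on disk.
restored = zlib.decompress(packed)
assert restored == page
```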
 

Nycturne

Elite Member
Posts
1,139
Reaction score
1,489
And now, of course, Apple Silicon includes instructions that literally compress/decompress memory 1K at a time, which is one way around using the SSD for swap space, but still seems slightly kludgy.

It kinda is, but how else do you reduce the number of idle pages without writing them out to some form of storage (other than simply purging read-only pages, which may not always be an option)? I wonder if it is more about SSD endurance than it is about improving I/O. I also imagine that these pages need to be locked for compression to take place and that the MMU is updated so that touching the compressed page now faults and triggers decompression, so acceleration is going to be a noticeable win on CPUs that offer such extensions.

The working set in memory is probably larger than before. However, with SSD-based storage the OS can be more proactive about swapping idle things out to SSD without tanking I/O performance, so you effectively have room for a larger working set. Meaning that in some circumstances, sure, you MAY be able to get away with relying on swap more.

While this shouldn’t be an AS vs x64 thing, I do wonder also how much SSDs have changed how developers design their working sets. An SSD with reasonable performance means that there are more scenarios where streaming data is feasible vs pre-loading large chunks.

At least in the apps I write, I tend to start with “load on demand” and then look at enlarging the working set in RAM if that can’t be made good enough. That said, for some of the scenarios I’ve played with, AS seems to handle this sort of usage better than x64. This is comparing an i7 2019 MBP to an M1 Max, so not exactly an even comparison, but still a pretty obvious difference in performance when streaming image files from disk and rendering them during scrolling. Some of that is likely the whole SoC architecture being more efficient at moving everything around for render as well. But this is also the case of comparing streaming (which lets my app have a sub-100MB footprint) vs caching everything needed for a scroll view (which can easily get into the 1GB+ range).
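
Roughly the shape of the "load on demand" pattern, for the curious (a hedged sketch; load_image and on_row_visible are made-up names, not my actual code):

```python
from functools import lru_cache

@lru_cache(maxsize=64)                # bounds the in-RAM working set
def load_image(path: str) -> bytes:
    with open(path, "rb") as f:
        return f.read()               # real code would decode to a bitmap here

def on_row_visible(path: str) -> None:
    pixels = load_image(path)         # hit while scrolling back; miss streams from SSD
    ...                               # hand pixels to the renderer
```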
 

Yoused

up
Posts
5,623
Reaction score
8,942
Location
knee deep in the road apples of the 4 horsemen
It kinda is, but how else do you reduce the number of idle pages without writing them out to some form of storage
My point about kludgy was the thing with putting it in the instruction set. Once the core starts the op, it has to sit and wait for completion before moving on to the next K (which amounts to reissuing the op with the same, updated, registers). Less kludgy would be an SoC subunit that does the whole job for a whole page, without blocking an entire core.
 

throAU

Site Champ
Posts
257
Reaction score
275
Location
Perth, Western Australia
While this shouldn’t be an AS vs x64 thing, I do wonder also how much SSDs have changed how developers design their working sets. An SSD with reasonable performance means that there are more scenarios where streaming data is feasible vs pre-loading large chunks.

Based on personal experience with the transition from Snow Leopard to Lion and the performance differences on SSD vs. rust, I'm pretty sure the OS scheduler/memory management was optimised for the new SSD machines back at that point. Or rather, priorities changed in terms of how memory was managed.
 

mr_roboto

Site Champ
Posts
288
Reaction score
464
IPC is a much broader thing that uses however much memory it uses, and it is mostly not accelerated by hardware. More registers allow subroutines to pass parameters internally, somewhat or greatly reducing stack usage. It may not be a large amount of memory, but I suspect it adds up.

And now, of course, Apple Silicon includes instructions that literally compress/decompress memory 1K at a time, which is one way around using the SSD for swap space, but still seems slightly kludgy.
I would bet good money that reductions in stack memory consumption due to more registers is a sub-1% effect.

Compressed memory isn't an Apple Silicon exclusive feature - Apple first shipped it 10 years ago in 10.9 Mavericks. The custom instructions in AS CPUs do make the feature faster, but shouldn't influence memory use one way or the other (assuming the compression algorithm's the same).
 

Nycturne

Elite Member
Posts
1,139
Reaction score
1,489
My point about kludgy was the thing with putting it in the instruction set.

I think we are saying the same thing here. But part of the issue is that if you are doing this to memory pages, handling state gets tricky. Especially if the (de)compression occurs as the result of a memory interrupt. In many scenarios, moving to a sub-unit just means the CPU core is now busy-waiting instead of busy. Not a huge difference. And in interrupt cases, busy-waiting could very well lead to worse performance of the interrupt which is not good.

I can get the desire to avoid juggling this sort of thing, and that moving it onto an SoC subunit might not actually make anything better, but rather more complicated.

Based on personal experience with the transition from Snow Leopard to Lion and the performance differences on SSD vs. rust, I'm pretty sure the OS scheduler/memory management was optimised for the new SSD machines back at that point. Or rather, priorities changed in terms of how memory was managed.

Agreed, but I was thinking of this more from the angle of app/service development as well. Apps don’t necessarily need the same size/type of working set today under SSDs. And as I said, I don’t really think it would be an AS vs x64 difference.
 

Yoused

up
Posts
5,623
Reaction score
8,942
Location
knee deep in the road apples of the 4 horsemen
There is an old story about a team working on Macintosh/Lisa OS, on the QuickDraw module, who presented their daily production report to their department head, showing that they had written minus 3 lines of code. The point being that development is constantly improving the efficiency of stuff. Even a tweak to a compiler, or back-end generator like LLVM, can yield notable performance gains.

RAM has always been a limited resource, so anything that trims RAM waste is a good thing. You would love to be running in 2 quettabytes of RAM, just for the headroom, but most systems have many orders of magnitude less memory than that, so you tune your application to make the most efficient use of RAM possible. If you have plenty, the waste reduction goes toward making the overall system more responsive (which always benefits your application), and if RAM is snug, your application will still run decently.
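
A tiny Python illustration of the kind of waste-trimming I mean (materialising a whole result at once vs. producing it lazily):

```python
import tracemalloc

tracemalloc.start()

eager = [i * i for i in range(1_000_000)]      # whole result held in RAM at once
_, peak_eager = tracemalloc.get_traced_memory()
del eager
tracemalloc.reset_peak()

total = sum(i * i for i in range(1_000_000))   # generator: one value at a time
_, peak_lazy = tracemalloc.get_traced_memory()

print(f"eager peak: {peak_eager:,} bytes")
print(f"lazy peak:  {peak_lazy:,} bytes")
```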

So, if this claim is reasonably accurate, it probably has to do with code getting better.
 

throAU

Site Champ
Posts
257
Reaction score
275
Location
Perth, Western Australia
So, if this claim is reasonably accurate, it probably has to do with code getting better.
Pretty sure it was Bill Atkinson, and it was quite a few more than that: −2000 lines of code, per the famous folklore.org story.
That said, efficiency isn't always about lines of code, and loops can allocate huge amounts in a few lines, but yes - you have a point.

And so did Bill :D
 

Andropov

Site Champ
Posts
620
Reaction score
779
Location
Spain
The point being that development is constantly improving the efficiency of stuff. Even a tweak to a compiler, or back-end generator like LLVM, can yield notable performance gains.
Not sure about that. Compiler engineers, maybe. But in app development? Not a lot of pressure to make apps faster anymore. Everything is fast enough out of the box. No manager ever prioritizes a performance improvement.

Notable exceptions are when apps become stupidly slow. For instance, Microsoft Teams got an update that halved the app launch time... from 20+ seconds 🫠


But other than those kinds of outliers, I think the performance of software has notably regressed since 10 years ago. If only because of the influence of Electron.
 

throAU

Site Champ
Posts
257
Reaction score
275
Location
Perth, Western Australia
Not sure about that. Compiler engineers, maybe. But in app development? Not a lot of pressure to make apps faster anymore. Everything is fast enough out of the box. No manager ever prioritizes a performance improvement.

The reason for that is pretty clear: software is buggy enough already. Making things WORK in a maintainable way is expensive enough; making them not crash and not be exploitable to own the machine or the user's data is even too much to hope for in most cases.

Performance optimisation, unless it's for a process run millions of times a day with an actual legitimate hardware cost saving (vs. just "oh, it's not very snappy"), is so uncommon these days.

And I'm OK with that, given the above two points which are far more important.

Our software development tools today really do still suck (just in different ways to the 90s, 80s and earlier), and hopefully work goes into AI for bug-finding; because you know damn sure the bad guys will be leveraging AI to figure out new ways to exploit stuff.
 

Yoused

up
Posts
5,623
Reaction score
8,942
Location
knee deep in the road apples of the 4 horsemen
… hopefully work goes into AI for bug-finding; because you know damn sure the bad guys will be leveraging AI to figure out new ways to exploit stuff …

Okay, enough of that. You are filling our heads with the vision of a network inhabited by a few score* LLM-based AI bots which are constantly trying to scam each other without becoming the victim of a scam, and we will all be keeping our heads covered while trying to navigate the internets amidst these perilous behemoths.



* or several hundred, or thousands, tens of thousands, arrrgh
 

Joelist

Power User
Posts
177
Reaction score
168
More RAM is generally never going to be a bad thing. But yes, some things have changed; oddly, it was noticed soon after M1 hit the street that some programs were using less RAM than before. Unsure if the "why" was ever isolated.
 