I have 20 GB of RAM in my current setup and it has never been full. If anything gets swapped in that situation, it just needlessly slows me down.
Not necessarily. Your memory also contains file-backed pages (i.e. the “filesystem cache”). These pages are typically not counted when determining “memory usage”, because they can always be discarded.
It is often advantageous to keep frequently used files in cache in favor of infrequently used memory pages.
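For illustration, you can see this split with standard tools (exact field names and output vary by kernel and distro, so treat this as a sketch):

```sh
# "buff/cache" is file-backed memory the kernel can reclaim at any time;
# it is not counted in the "used" column.
free -h

# The raw counters behind that summary:
grep -E '^(MemTotal|MemAvailable|Buffers|Cached|SwapTotal|SwapFree):' /proc/meminfo
```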
Actually… as a former DBA on large databases, you typically want to minimize swapping on a dedicated database system. Most database engines do a much better job of keeping useful data in memory than the Linux kernel’s file caching, which is agnostic about what your files contain. There are some exceptions, like Elasticsearch, which relies almost entirely on the Linux filesystem cache for buffering I/O.
Anyway, database engines have query optimizers to determine the optimal path to resolve a query, but they rely on the assumption that the buffers they consider to be “in memory” actually reside in physical memory, and are not sitting in a swapfile somewhere.
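One quick way to sanity-check that assumption on a running system: the kernel reports how much of a process’s memory has been swapped out in /proc/&lt;pid&gt;/status. A minimal sketch, assuming a PostgreSQL process (substitute whatever engine you run):

```sh
# VmSwap shows how much of this process's anonymous memory sits in swap;
# on a healthy dedicated database host this should stay near 0 kB.
grep VmSwap /proc/"$(pidof -s postgres)"/status
```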
So typically, on a large database system the vendor recommendation will be to set

```
vm.swappiness=0
```

to minimize memory pressure from filesystem caching, and to size the database buffers to nearly all of the memory in your system, minus a small amount for the operating system.
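As a sketch of what that looks like in practice (a hypothetical host with 64 GB of RAM running MySQL/InnoDB; the file name and sizes are examples, not vendor numbers):

```sh
# Apply the swappiness setting now, and persist it across reboots:
sudo sysctl -w vm.swappiness=0
echo 'vm.swappiness=0' | sudo tee /etc/sysctl.d/99-database.conf

# my.cnf fragment: give the engine most of the RAM, and use O_DIRECT so
# data files bypass the page cache entirely (avoids double buffering):
#   innodb_buffer_pool_size = 56G
#   innodb_flush_method     = O_DIRECT
```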