PVF execution currently has a memory limit of 128 MiB, the same limit Cumulus uses for its block import pipeline. However, while collators execute state transitions by reading parachain data from local disk, parachain validators must load state proofs into memory, so validators need a higher memory limit than collators; otherwise, a parachain block that made full use of memory when executed by a collator could fail to execute on validators. To fix this, we propose raising the memory limit for PVF execution to 512 MiB.
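In executor-parameter terms, the limit is expressed as `MaxMemoryPages`, as in the benchmark-style snippet below: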
```rust
set_config_with_executor_params {}: set_executor_params(
    RawOrigin::Root,
    ExecutorParams::from(&[ExecutorParam::MaxMemoryPages(8192)][..])
)
```
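Since Wasm linear memory pages are 64 KiB each, `MaxMemoryPages(8192)` corresponds exactly to the proposed 512 MiB. A minimal, self-contained check of that conversion (plain Rust, nothing here is Polkadot-specific):

```rust
/// Wasm linear memory pages are 64 KiB each (per the WebAssembly spec).
const WASM_PAGE_SIZE_BYTES: u64 = 64 * 1024;
/// The value proposed for `ExecutorParam::MaxMemoryPages` above.
const MAX_MEMORY_PAGES: u64 = 8192;

fn main() {
    let limit_bytes = MAX_MEMORY_PAGES * WASM_PAGE_SIZE_BYTES;
    // 8192 pages * 64 KiB = 512 MiB, matching the proposed limit.
    assert_eq!(limit_bytes, 512 * 1024 * 1024);
    println!("PVF memory limit: {} MiB", limit_bytes / (1024 * 1024));
}
```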
A PVF is executed by a Polkadot validator during backing, approvals, and disputes, and it's possible for requests from those subsystems to arrive simultaneously. I don't know how many of those requests may come in concurrently; it depends on the configuration. If I were to guess, I would suppose at most 3-4 PVFs execute at the same time. Assuming those numbers, in the pathological case where those PVFs all manage to fill the available memory at approximately the same time, that amounts to at most 512 MiB of committed memory under the current 128 MiB limit. With the new parameters, the memory consumed by PVFs alone jumps to 2 GiB.
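To make that arithmetic explicit, a back-of-the-envelope sketch; the figure of 4 concurrent executions is my guess above, not a configured bound:

```rust
/// Worst-case committed memory if `concurrent` PVF executions each hit the
/// per-execution limit at the same time. Purely illustrative arithmetic;
/// the concurrency figure is a guess, not a protocol constant.
fn worst_case_committed_mib(concurrent: u64, limit_mib: u64) -> u64 {
    concurrent * limit_mib
}

fn main() {
    // Current 128 MiB limit, ~4 concurrent executions.
    assert_eq!(worst_case_committed_mib(4, 128), 512); // 512 MiB
    // Proposed 512 MiB limit.
    assert_eq!(worst_case_committed_mib(4, 512), 2048); // 2 GiB
}
```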
So that makes me wonder: has any testing been done to see what happens with those pathological values on standard hardware?