I'm working on a small video game that includes a drawing loop, and I ran into strange behavior in MS Edge (it also happens, in a different way, in Chrome).
At one particular moment I want to read pixel data, so I call getImageData(), just one time.
This triggers an increase in CPU memory at that precise moment (I know this function is costly)... but on every following iteration of the loop, the painting time dramatically increases.
Here is the link: http://millgraphik.alwaysdata.net/cpu-test/
And the code: http://millgraphik.alwaysdata.net/cpu-test/js/canvas.js
You can test it in MS Edge: start a performance recording just after the page loads. The getImageData() call is triggered about 3 seconds after page load. Here is the result of the performance recording:
(in green: the painting duration)
So what could be the reason for this increase in painting time?
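For context, the shape of my loop is roughly the following. The element id, sizes, and the 3-second delay here are illustrative, not the exact values from the linked canvas.js:

```javascript
// Rough shape of the loop: draw every frame, then perform a single
// getImageData() about 3 s after load.
function makeOneShot(start, delayMs) {
  // Returns a function that is true exactly once, the first time
  // `now` reaches start + delayMs.
  let fired = false;
  return (now) => {
    if (!fired && now - start >= delayMs) { fired = true; return true; }
    return false;
  };
}

// Browser wiring (skipped outside a DOM environment):
if (typeof document !== 'undefined') {
  const canvas = document.getElementById('scene'); // hypothetical id
  const ctx = canvas.getContext('2d');
  const shouldRead = makeOneShot(performance.now(), 3000);
  const frame = (now) => {
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    // ...draw the scene...
    if (shouldRead(now)) {
      ctx.getImageData(0, 0, canvas.width, canvas.height); // the single readback
    }
    requestAnimationFrame(frame);
  };
  requestAnimationFrame(frame);
}
```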
Chrome and Edge are both based on Chromium nowadays, and Chromium uses the GPU to accelerate 2D canvas operations. If you navigate to
chrome://flags (in Chrome) and disable "Accelerated 2D canvas", you'll notice that the slow behavior starts immediately instead of after the
getImageData() call. That is because the canvas is then rendered by your CPU from the start.
But why does it start off fast and then slow down after
getImageData()? When GPU acceleration is enabled, Chrome at least appears to swap to the CPU-based renderer after getImageData() is used.
Here's a quote from a related Chromium bug ticket comment:
The problem in Chrome is that every single call to getImageData is generating a readback from the GPU, which is a slow operation. We have a performance heuristic that switches canvases to software rendering mode (no GPU) when getImageData is use a lot. The problem is that the heuristic looks for cases where getImageData is used in three consecutive animation frames. We need to change that rule so that it will catch cases like this.
So your code is triggering whatever heuristic Chromium currently uses to detect frequent getImageData() usage. A Chromium developer later replied on the ticket:
We moved to using the GPU for all canvas-related operations a few years ago. One issue we have is that the GPU is significantly slower than the CPU at running getImageData, but the GPU is faster than the CPU for all other canvas operations. So we made a decision: the first getImageData call runs on the GPU, and all subsequent getImageData calls run on the CPU. This is because we believe such users may make more getImageData calls, and we want to handle those quickly.
Software rendering (CPU) results differ slightly from GPU rendering results; that is the difference you observed between the two getImageData calls.
The flag you found lets you specify whether getImageData will be used frequently. If it will, the CPU is used; otherwise, the GPU is used (meaning a getImageData call will not cause a renderer switch). So clearly, this would fix the issue!
We are not ready to release this feature yet -- it is still in the testing stage. We plan to launch it in M94-95; this issue will be gone by then.
Thank you for the feedback!
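Given that heuristic, one way to stay on the GPU path from the JS side is simply to avoid calling getImageData() on consecutive animation frames, e.g. by caching the last readback and only refreshing it every N frames. A minimal sketch; the helper and its parameters are my own, not a Chromium API:

```javascript
// Cache getImageData() results so the readback does not happen on
// consecutive animation frames, which is what trips Chromium's
// software-fallback heuristic. Names are illustrative.
function makeThrottledReader(ctx, minFramesBetweenReads) {
  let cached = null;
  let framesSinceRead = Infinity; // force a real readback the first time
  return {
    // Call once per animation frame.
    tick() { framesSinceRead += 1; },
    // Returns cached pixels unless enough frames have passed.
    read(x, y, w, h) {
      if (framesSinceRead >= minFramesBetweenReads) {
        cached = ctx.getImageData(x, y, w, h);
        framesSinceRead = 0;
      }
      return cached;
    },
  };
}
```

Whether stale pixel data is acceptable for a few frames depends on what the game does with it, of course.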
I've created an offscreen canvas hidden with CSS
(display: none;) and I read my pixel data with
getImageData() from that offscreen canvas.
It seems to keep the painting on the GPU.
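The idea, factored so the readback logic is separate from the DOM wiring, looks roughly like this. The element id and sizes are illustrative, not the exact code from the linked test:

```javascript
// Read pixels through a hidden scratch canvas so getImageData() never
// runs against the on-screen (GPU-accelerated) canvas. The readback
// logic is kept DOM-free so it is easy to test.
function readPixelsVia(scratchCtx, sourceCanvas, x, y, w, h) {
  scratchCtx.drawImage(sourceCanvas, 0, 0);   // GPU-friendly blit
  return scratchCtx.getImageData(x, y, w, h); // costly readback, scratch only
}

// Browser wiring (skipped outside a DOM environment):
if (typeof document !== 'undefined') {
  const visible = document.getElementById('game'); // hypothetical id
  const scratch = document.createElement('canvas');
  scratch.width = visible.width;
  scratch.height = visible.height;
  scratch.style.display = 'none'; // hidden with CSS, as described above
  document.body.appendChild(scratch);
  const scratchCtx = scratch.getContext('2d');
  // const pixels = readPixelsVia(scratchCtx, visible, 0, 0, 50, 50);
}
```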
Here is the new test: http://millgraphik.alwaysdata.net/cpu-test2/
EDIT: A better solution with a fragment canvas: http://millgraphik.alwaysdata.net/cpu-test3/js/canvas.js
And the performance test in Edge (similar in Chrome):
What do you think of this new approach?