Mirror of https://github.com/mozilla/gecko-dev.git (synced 2024-10-28 04:35:33 +00:00)
Commit f072d8f9e8
Currently SourceBuffer::ExpectLength allocates a buffer that is a multiple of MIN_CHUNK_CAPACITY (4096) bytes, no matter what the expected size is. While it is true that HTTP servers can lie, and we need to handle that for legacy reasons, it is far more likely that they are telling the truth about the content length. Additionally, images sourced from other locations, such as the file system or data URIs, always carry the correct size information (barring a bug elsewhere in the file system or in our code). We should be able to trust the given size as a good first guess.

While overallocating is a waste of memory in general, SourceBuffer::Compact causes a far worse problem. After all of the data has been written and there are no active readers, we attempt to shrink the allocated buffer(s) into a single contiguous chunk of exactly the length we need (e.g. N allocations to 1, or 1 oversized allocation to 1 perfectly sized one). Since we almost always overallocate, we almost always trigger the logic in SourceBuffer::Compact that reallocates the data into a properly sized buffer. Had we simply trusted the expected size in the first place, we could have avoided this for the majority of images.

In the case that we really do get the wrong size, we allocate additional chunks in multiples of MIN_CHUNK_CAPACITY bytes to fit the remaining data. At most, this increases the number of discrete allocations by 1 and triggers SourceBuffer::Compact to consolidate at the end. Since we almost always did that before, and now we rarely do, this is a significant win.