| author | Marc Mutz <marc.mutz@qt.io> | 2024-12-20 22:00:32 +0100 |
|---|---|---|
| committer | Marc Mutz <marc.mutz@qt.io> | 2024-12-24 23:41:58 +0000 |
| commit | 1a9f8cc0df33195df959cee2e355dde4cbacd754 (patch) | |
| tree | 5320e34e82ac49261260786f789032bd662a93ac | |
| parent | 05b9a4b2deefd586356e1f36d84372b06e74cfe3 (diff) | |
Fix a performance regression in QDataStream
Commit e176dd78fd2f253eb2625585b2bd90b5713e5984 replaced a `new
char[n]` with a `std::make_unique<char[]>(n)`, probably at this
author's insistence.
But the two are not equivalent: make_unique() value-initializes, even
for arrays of built-in type, which means that each buffer resize
writes each byte twice: first zeroing out the whole buffer as part of
value-initialization, then again in the memcpy() and any following
read()s. For buffers several MiB, or even GiB, in size, this is very
costly.
Fix by adding and using a backport of C++20's
make_unique_for_overwrite(), which performs the equivalent of the old
code (i.e. default-, not value-initialization).
Also add q20::is_bounded_array and q20::is_unbounded_array, which are
needed for the implementation of q20::make_unique_for_overwrite().
Amends e176dd78fd2f253eb2625585b2bd90b5713e5984.
Pick-to: 6.9 6.8 6.5
Change-Id: I8865c7369e522ec475df122e3d00d6aba3b24561
Reviewed-by: Thiago Macieira <thiago.macieira@intel.com>
