path: root/reftable/stack.c
Age | Commit message | Author | Files | Lines
2025-11-10 | reftable/stack: add function to check if optimization is required | Karthik Nayak | 1 | -5/+37
The reftable backend performs auto-compaction as part of its regular flow, which is required to keep the number of tables that make up a stack at bay. This allows it to stay optimized. Compaction can also be triggered voluntarily by the user via the 'git pack-refs' or the 'git refs optimize' command. However, there is currently no way for the user to check whether optimization is required without actually performing it. Extract the heuristic logic from 'reftable_stack_auto_compact()' into an internal function 'update_segment_if_compaction_required()'. Then use this to add and expose `reftable_stack_compaction_required()`, which will allow users to check whether the reftable backend can be optimized. Signed-off-by: Karthik Nayak <karthik.188@gmail.com> Acked-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
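A minimal sketch of how a caller might consult the new check; the exact signature of `reftable_stack_compaction_required()` is an assumption based on the description above (non-zero meaning auto-compaction would compact at least one segment):

    /*
     * Hypothetical caller: report whether 'git refs optimize' would do
     * any work, without actually compacting anything.
     */
    static int refs_need_optimize(struct reftable_stack *st)
    {
    	return reftable_stack_compaction_required(st);
    }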
2025-11-10 | reftable/stack: return stack segments directly | Karthik Nayak | 1 | -11/+12
The `stack_table_sizes_for_compaction()` function returns the individual sizes of each reftable table. This function is only called by `reftable_stack_auto_compact()` to decide which tables, if any, need to be compacted. Modify the function to directly return the segments, which avoids the extra step of receiving the sizes only to pass them on to `suggest_compaction_segment()`. A future commit will also add functionality for checking whether auto-compaction is necessary without performing it. This change allows the code to be reused in that context. Signed-off-by: Karthik Nayak <karthik.188@gmail.com> Acked-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2025-10-13 | Merge branch 'kn/reftable-consistency-checks' | Junio C Hamano | 1 | -6/+1
The reftable backend learned to sanity check its on-disk data more carefully.

* kn/reftable-consistency-checks:
  refs/reftable: add fsck check for checking the table name
  reftable: add code to facilitate consistency checks
  fsck: order 'fsck_msg_type' alphabetically
  Documentation/fsck-msgids: remove duplicate msg id
  reftable: check for trailing newline in 'tables.list'
  refs: move consistency check msg to generic layer
  refs: remove unused headers
2025-10-07 | reftable: check for trailing newline in 'tables.list' | Karthik Nayak | 1 | -6/+1
In the reftable format, the 'tables.list' file contains a newline-separated list of tables. While we parse this file, we do not check or care about the last newline. Tighten the parser in `parse_names()` to return an appropriate error if the last newline is missing. This requires modifying `parse_names()` to return the error while accepting the output as a third argument. Signed-off-by: Karthik Nayak <karthik.188@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
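A sketch of what the tightened parse looks like, assuming standard allocators for brevity; the error value and exact signature are illustrative, not the library's literal code:

    #include <stdlib.h>
    #include <string.h>

    static int parse_names_strict(const char *buf, size_t size, char ***out)
    {
    	char **names = NULL;
    	size_t len = 0;
    	const char *p = buf, *end = buf + size;

    	while (p < end) {
    		const char *nl = memchr(p, '\n', end - p);
    		char **grown;

    		if (!nl)
    			goto err; /* last name lacks its trailing newline */

    		grown = realloc(names, (len + 2) * sizeof(*names));
    		if (!grown)
    			goto err;
    		names = grown;
    		names[len] = strndup(p, nl - p);
    		if (!names[len])
    			goto err;
    		names[++len] = NULL;
    		p = nl + 1;
    	}

    	*out = names;
    	return 0;

    err:
    	while (len--)
    		free(names[len]);
    	free(names);
    	return -1;
    }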
2025-08-12 | reftable: don't second-guess errors from flock interface | Patrick Steinhardt | 1 | -29/+8
The `flock` interface is implemented as part of "reftable/system.c" and thus needs to be implemented by the integrator between the reftable library and its parent code base. As such, we cannot rely on any specific implementation thereof. Regardless of that, users of the `flock` subsystem rely on `errno` being set to specific values. This is fragile and not documented anywhere and doesn't really make for a good interface. Refactor the code so that the implementations themselves are expected to return reftable-specific error codes. Our implementation of the `flock` subsystem already knows to do this for all error paths except one. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2025-08-12 | reftable/stack: handle outdated stacks when compacting | Patrick Steinhardt | 1 | -6/+26
When we compact the reftable stack we first acquire the lock for the "tables.list" file and then reload the stack to check that it is still up-to-date. This is done by calling `stack_uptodate()`, which knows to return zero in case the stack is up-to-date, a positive value if it is not and a negative error code on unexpected conditions. We don't do proper error checking though, but instead we only check whether the returned error code is non-zero. If so, we simply bubble it up the calling stack, which means that callers may see an unexpected positive value. Fix this issue by translating to `REFTABLE_OUTDATED_ERROR` instead. Handle this situation in `reftable_addition_commit()`, where we perform a best-effort auto-compaction. All other callsites of `stack_uptodate()` know to handle a positive return value and thus don't need to be fixed. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2025-08-12 | reftable/stack: allow passing flags to `reftable_stack_add()` | Patrick Steinhardt | 1 | -4/+4
The `reftable_stack_add()` function is a simple wrapper to lock the stack, add records to it via a callback and then commit the result. One problem with it though is that it doesn't accept any flags for creating the addition. This makes it impossible to automatically reload the stack in case it was modified before we managed to lock the stack. Add a `flags` field to plug this gap and pass it through accordingly. For now this new flag won't be used by us, but it will be used by libgit2. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2025-08-12 | reftable/stack: fix compiler warning due to missing braces | Patrick Steinhardt | 1 | -7/+7
While perfectly legal, older compiler toolchains complain when zero-initializing structs that contain nested structs with `{0}`:

    /home/libgit2/source/deps/reftable/stack.c:862:35: error: suggest braces around initialization of subobject [-Werror,-Wmissing-braces]
            struct reftable_addition empty = REFTABLE_ADDITION_INIT;
                                             ^~~~~~~~~~~~~~~~~~~~~~
    /home/libgit2/source/deps/reftable/stack.c:707:33: note: expanded from macro 'REFTABLE_ADDITION_INIT'
    #define REFTABLE_ADDITION_INIT {0}
                                    ^

We had the discussion around whether or not we want to handle such bogus compiler errors in the past already [1]. Back then we basically decided that we do not care about such old-and-buggy compilers, so while we could fix the issue by using `{{0}}` instead, this is not the preferred way to handle this in the Git codebase. We have an easier fix though: we can just drop the macro altogether and handle initialization of the struct in `reftable_stack_addition_init()`. Callers are expected to call this function already, so this change even simplifies the calling convention.

[1]: https://lore.kernel.org/git/20220710081135.74964-1-sunshine@sunshineco.com/T/

Suggested-by: Carlo Arenas <carenas@gmail.com>
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
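A sketch of the resulting pattern, with an illustrative stand-in for the real struct (field names are assumptions): the init function zeroes the struct itself, so no caller ever needs a brace initializer for the nested members.

    #include <string.h>

    struct reftable_stack;

    /* Illustrative stand-in for the real struct and its nested members. */
    struct reftable_addition {
    	struct reftable_stack *stack;
    	/* ... lock, new tables, next update index ... */
    };

    static void reftable_stack_addition_init(struct reftable_addition *add,
    					     struct reftable_stack *st)
    {
    	/* Replaces the former REFTABLE_ADDITION_INIT macro. */
    	memset(add, 0, sizeof(*add));
    	add->stack = st;
    }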
2025-08-12 | reftable/stack: reorder code to avoid forward declarations | Patrick Steinhardt | 1 | -188/+176
We have a couple of forward declarations in the stack-related code of the reftable library. These declarations aren't really required, but are simply caused by unfortunate ordering. Reorder the code and remove the forward declarations. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2025-04-07 | reftable/constants: make block types part of the public interface | Patrick Steinhardt | 1 | -4/+4
Now that reftable blocks can be read individually via the public interface it becomes necessary for callers to be able to distinguish the different types of blocks. Expose the relevant constants. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2025-04-07 | reftable/reader: rename data structure to "table" | Patrick Steinhardt | 1 | -84/+84
The `struct reftable_reader` subsystem encapsulates a table that has been read from the disk. As such, the current name of that structure is somewhat hard to understand as it only talks about the fact that we read something from disk, without really giving an indicator _what_ that is. Furthermore, this naming schema doesn't really fit well into how the other structures are named: `reftable_merged_table`, `reftable_stack`, `reftable_block` and `reftable_record` are all named after what they encapsulate. Rename the subsystem to `reftable_table`, which directly gives a hint that the data structure is about handling the individual tables part of the stack. While this change results in a lot of churn, it prepares for us exposing the APIs to third-party callers now that the reftable library is a standalone library that can be linked against by other projects. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2025-04-07 | reftable: fix formatting of the license header | Patrick Steinhardt | 1 | -6/+6
The license headers used across the reftable library don't follow our typical coding style for multi-line comments. Fix them. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2025-02-18 | reftable/stack: stop using `sleep_millisec()` | Patrick Steinhardt | 1 | -1/+1
Refactor our use of `sleep_millisec()` by open-coding it with poll(3p), which is the current implementation of this function. Ideally, we'd use a more direct way to sleep, but there is no equivalent to sleep(3p) that would accept milliseconds as input. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
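Since the commit message states that poll(3p) is the current implementation of this function, the open-coded form is essentially this: polling zero file descriptors simply blocks for the given timeout.

    #include <poll.h>

    /* Sleep for the given number of milliseconds. */
    static void sleep_millisec(int millis)
    {
    	poll(NULL, 0, millis);
    }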
2025-02-18 | reftable/system: introduce `reftable_rand()` | Patrick Steinhardt | 1 | -2/+2
Introduce a new system-level `reftable_rand()` function that generates a single unsigned integer for us. The implementation of this function is to be provided by the calling codebase, which allows us to more easily hook into pre-seeded random number generators. Adapt the two callsites where we generated random data. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2025-02-18 | reftable/stack: stop using `write_in_full()` | Patrick Steinhardt | 1 | -4/+24
Similar to the preceding commit, drop our use of `write_in_full()` and implement a new wrapper `reftable_write_full()` that handles this logic for us. This is done to reduce our dependency on the Git library. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
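A sketch of what a `reftable_write_full()`-style wrapper looks like; the name comes from the commit message, while the body below is the classic retry loop and an assumption about the actual implementation:

    #include <errno.h>
    #include <unistd.h>

    static ssize_t reftable_write_full(int fd, const void *buf, size_t count)
    {
    	const char *p = buf;
    	ssize_t total = 0;

    	while (count > 0) {
    		ssize_t n = write(fd, p, count);
    		if (n < 0) {
    			if (errno == EINTR)
    				continue; /* interrupted, retry */
    			return -1;
    		}
    		/* Short write: advance and keep going. */
    		p += n;
    		count -= n;
    		total += n;
    	}
    	return total;
    }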
2025-02-18 | reftable/stack: stop using `read_in_full()` | Patrick Steinhardt | 1 | -4/+14
There is a single callsite of `read_in_full()` in the reftable library. Open-code the function to reduce our dependency on the Git library. Note that we only partially port over the logic from `read_in_full()` and its underlying `xread()` helper. Most importantly, the latter also knows to handle `EWOULDBLOCK` via `handle_nonblock()`. This logic is irrelevant for us though because the reftable library never sets the `O_NONBLOCK` option in the first place. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
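A sketch of the open-coded read loop under the constraints described above: like `read_in_full()`, but without `xread()`'s `EWOULDBLOCK` handling, which is unnecessary because the library never opens files with `O_NONBLOCK`. The helper name is illustrative.

    #include <errno.h>
    #include <unistd.h>

    static ssize_t read_full(int fd, void *buf, size_t count)
    {
    	char *p = buf;
    	size_t remaining = count;

    	while (remaining > 0) {
    		ssize_t n = read(fd, p, remaining);
    		if (n < 0) {
    			if (errno == EINTR)
    				continue; /* interrupted, retry */
    			return -1;
    		}
    		if (!n)
    			break; /* EOF */
    		p += n;
    		remaining -= n;
    	}
    	return (ssize_t)(count - remaining);
    }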
2025-02-14 | Merge branch 'kn/reflog-migration-fix-followup' | Junio C Hamano | 1 | -2/+4
Code clean-up.

* kn/reflog-migration-fix-followup:
  reftable: prevent 'update_index' changes after adding records
  refs: use 'uint64_t' for 'ref_update.index'
  refs: mark `ref_transaction_update_reflog()` as static
2025-01-28 | Merge branch 'ps/reftable-sign-compare' | Junio C Hamano | 1 | -7/+5
The reftable/ library code has been made -Wsign-compare clean.

* ps/reftable-sign-compare:
  reftable: address trivial -Wsign-compare warnings
  reftable/blocksource: adjust `read_block()` to return `ssize_t`
  reftable/blocksource: adjust type of the block length
  reftable/block: adjust type of the restart length
  reftable/block: adapt header and footer size to return a `size_t`
  reftable/basics: adjust `hash_size()` to return `uint32_t`
  reftable/basics: adjust `common_prefix_size()` to return `size_t`
  reftable/record: handle overflows when decoding varints
  reftable/record: drop unused `print` function pointer
  meson: stop disabling -Wsign-compare
2025-01-22 | reftable: prevent 'update_index' changes after adding records | Karthik Nayak | 1 | -2/+4
The function `reftable_writer_set_limits()` allows updating the 'min_update_index' and 'max_update_index' of a reftable writer. These values are written to both the writer's header and footer. Since the header is written during the first block write, any subsequent changes to the update index would create a mismatch between the header and footer values. The footer would contain the newer values while the header retained the original ones. To protect against this bug, prevent callers from updating these values after any record has been written. To do this, modify the function to return an error whenever the limits are modified after any record is added. Check for record adds within `reftable_writer_set_limits()` by checking the `last_key` and `next` variables. The former is updated after each record is added, but is reset at certain points. The latter is set after writing the first block. Modify all callers of the function to check the return value and handle it accordingly. Add a unit test to also ensure the function returns the error as expected. Helped-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Karthik Nayak <karthik.188@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
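A sketch of the guard, using the two fields named above; the error code and exact field layout are assumptions about the writer's internals:

    int reftable_writer_set_limits(struct reftable_writer *w,
    			           uint64_t min, uint64_t max)
    {
    	/* A record was already added, or the first block was written. */
    	if (w->last_key.len > 0 || w->next > 0)
    		return REFTABLE_API_ERROR;

    	w->min_update_index = min;
    	w->max_update_index = max;
    	return 0;
    }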
2025-01-21 | reftable: address trivial -Wsign-compare warnings | Patrick Steinhardt | 1 | -7/+5
Address the last couple of trivial -Wsign-compare warnings in the reftable library and remove the DISABLE_SIGN_COMPARE_WARNINGS macro that we have in "reftable/system.h". Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2025-01-07 | reftable/stack: accept insecure random bytes | Patrick Steinhardt | 1 | -2/+2
The reftable library uses randomness in two call paths:

- When reading a stack in case some of the referenced tables disappear. The randomness is used to delay the next read by a couple of milliseconds.

- When writing a new table, where the randomness gets appended to the table name (e.g. "0x000000000001-0x000000000002-0b1d8ddf.ref").

In neither of these cases do we need strong randomness. Unfortunately though, we have observed test failures caused by the former case. In t0610 we have a test that spawns 100 processes at once, all of which try to write a new table to the stack. And given that all of the processes will require randomness, it can happen that these processes make the entropy pool run dry, which will then cause us to die:

    + test_seq 100
    + printf %s commit\trefs/heads/branch-%s\n 68d032e9edd3481ac96382786ececc37ec28709e 1
    + printf %s commit\trefs/heads/branch-%s\n 68d032e9edd3481ac96382786ececc37ec28709e 2
    ...
    + git update-ref refs/heads/branch-98 HEAD
    + git update-ref refs/heads/branch-97 HEAD
    + git update-ref refs/heads/branch-99 HEAD
    + git update-ref refs/heads/branch-100 HEAD
    fatal: unable to get random bytes
    fatal: unable to get random bytes
    fatal: unable to get random bytes
    fatal: unable to get random bytes
    fatal: unable to get random bytes
    fatal: unable to get random bytes
    fatal: unable to get random bytes

The report was for NonStop, which uses OpenSSL as the backend for randomness. In the preceding commit we have adapted that backend to also return randomness in case the entropy pool is empty and the caller passes the `CSPRNG_BYTES_INSECURE` flag. Do so to fix the issue.

Reported-by: Randall S. Becker <rsbecker@nexbridge.com>
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2025-01-07 | wrapper: allow generating insecure random bytes | Patrick Steinhardt | 1 | -1/+1
The `csprng_bytes()` function generates randomness and writes it into a caller-provided buffer. It abstracts over a couple of implementations, where the exact one that is used depends on the platform. These implementations have different guarantees: while some guarantee to never fail (arc4random(3)), others may fail. There are two significant failures to distinguish from one another:

- Systemic failure, where e.g. opening "/dev/urandom" fails or when OpenSSL doesn't have a provider configured.

- Entropy failure, where the entropy pool is exhausted, and thus the function cannot guarantee strong cryptographic randomness.

While we cannot do anything about the former, the latter failure can be acceptable in some situations where we don't care whether or not the randomness can be predicted. Introduce a new `CSPRNG_BYTES_INSECURE` flag that allows callers to opt into weak cryptographic randomness. The exact behaviour of the flag depends on the underlying implementation:

- `arc4random_buf()` never returns an error, so it doesn't change.

- `getrandom()` pulls from "/dev/urandom" by default, which never blocks on modern systems even when the entropy pool is empty.

- `getentropy()` seems to block when there is not enough randomness available, and there is no way of changing that behaviour.

- `RtlGenRandom()` doesn't mention anything about its specific failure mode.

- The fallback reads from "/dev/urandom", which also returns bytes in case the entropy pool is drained in modern Linux systems.

That only leaves OpenSSL with `RAND_bytes()`, which returns an error in case the returned data wouldn't be cryptographically safe. This function is replaced with a call to `RAND_pseudo_bytes()`, which can indicate whether or not the returned data is cryptographically secure via its return value. If it is insecure, and if the `CSPRNG_BYTES_INSECURE` flag is set, then we ignore the insecurity and return the data regardless.

It is somewhat questionable whether we really need the flag in the first place, or whether we couldn't just ignore the potential insecurity of the data. But the risk of doing that is that we might have or grow callsites that aren't aware of the potential insecurity of the data in places where it really matters. So using a flag to opt in to that behaviour feels like the more secure choice.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
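A sketch of the OpenSSL branch with the new flag. `RAND_pseudo_bytes()` returns 1 for cryptographically secure output, 0 for predictable output, and -1 when unsupported; the flag value and helper name below are illustrative.

    #include <openssl/rand.h>

    #define CSPRNG_BYTES_INSECURE (1 << 0) /* value is illustrative */

    static int csprng_bytes_openssl(unsigned char *buf, int len,
    				    unsigned flags)
    {
    	int ret = RAND_pseudo_bytes(buf, len);

    	if (ret == 1)
    		return 0; /* cryptographically secure bytes */
    	if (ret == 0 && (flags & CSPRNG_BYTES_INSECURE))
    		return 0; /* predictable bytes, but the caller opted in */
    	return -1;        /* systemic failure, or insecure without opt-in */
    }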
2024-12-28 | reftable: avoid leaks on realloc error | René Scharfe | 1 | -3/+5
When realloc(3) fails, it returns NULL and keeps the original allocation intact. REFTABLE_ALLOC_GROW overwrites both the original pointer and the allocation count variable in that case, simultaneously leaking the original allocation and misrepresenting the number of storable items. parse_names() and reftable_buf_add() avoid leaking by restoring the original pointer value on failure, but all other callers seem to be OK with losing the old allocation. Add a new variant of the macro, REFTABLE_ALLOC_GROW_OR_NULL, which plugs the leak and zeros the allocation counter. Use it for those callers. Signed-off-by: René Scharfe <l.s.r@web.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
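A sketch of the new macro's semantics, with standard allocators standing in for the pluggable reftable ones and the geometric growth policy simplified: on failure, the old array is freed and both pointer and counter are zeroed, so callers just check for NULL.

    #include <stdlib.h>

    #define REFTABLE_ALLOC_GROW_OR_NULL(x, nr, alloc) \
    	do { \
    		void *REFTABLE_tmp = realloc((x), (nr) * sizeof(*(x))); \
    		if (REFTABLE_tmp) { \
    			(x) = REFTABLE_tmp; \
    			(alloc) = (nr); \
    		} else { \
    			free(x); \
    			(x) = NULL; \
    			(alloc) = 0; \
    		} \
    	} while (0)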
2024-12-23 | Merge branch 'ps/reftable-alloc-failures-zalloc-fix' | Junio C Hamano | 1 | -10/+17
Recent reftable updates mistook a NULL return from a request for 0-byte allocation as OOM and died unnecessarily, which has been corrected.

* ps/reftable-alloc-failures-zalloc-fix:
  reftable/basics: return NULL on zero-sized allocations
  reftable/stack: fix zero-sized allocation when there are no readers
  reftable/merged: fix zero-sized allocation when there are no readers
  reftable/stack: don't perform auto-compaction with less than two tables
2024-12-22 | reftable/stack: fix zero-sized allocation when there are no readers | Patrick Steinhardt | 1 | -10/+14
Similar to the preceding commit, we may try to do a zero-sized allocation when reloading a reftable stack that doesn't have any tables. It is implementation-defined whether malloc(3p) returns a NULL pointer in that case or a zero-sized object. In case it does return a NULL pointer though, it causes us to think we have run into an out-of-memory situation, and thus we return an error. Fix this by only allocating arrays when they have at least one entry. Reported-by: Randall S. Becker <rsbecker@nexbridge.com> Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
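A sketch of the pattern (variable names are illustrative): skip the allocation entirely when the stack has no tables, so a NULL return for a zero-sized request is never misread as OOM.

    struct reftable_reader **readers = NULL;

    if (ntables > 0) {
    	readers = reftable_calloc(ntables, sizeof(*readers));
    	if (!readers)
    		return REFTABLE_OUT_OF_MEMORY_ERROR;
    }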
2024-12-22 | reftable/stack: don't perform auto-compaction with less than two tables | Patrick Steinhardt | 1 | -0/+3
In order to compact tables we need at least two tables. Bail out early from `reftable_stack_auto_compact()` in case we have less than two tables. In the original, `stack_table_sizes_for_compaction()` yields an array that has the same length as the number of tables. This array is then passed on to `suggest_compaction_segment()`, which returns an empty segment in case we have less than two tables. The segment is then passed to `segment_size()`, which will return `0` because both start and end of the segment are `0`. And because we only call `stack_compact_range()` in case we have a positive segment size we don't perform auto-compaction at all. Consequently, this change does not result in a user-visible change in behaviour when called with a single table. But when called with no tables this protects us against a potential out-of-memory error: `stack_table_sizes_for_compaction()` would try to allocate a zero-byte object when there aren't any tables, and that may lead to a `NULL` pointer on some platforms like NonStop which causes us to bail out with an out-of-memory error. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-11-26 | reftable/stack: add mechanism to notify callers on reload | Patrick Steinhardt | 1 | -0/+4
Reftable stacks are reloaded in two cases:

- When calling `reftable_stack_reload()`, if the stat-cache tells us that the stack has been modified.

- When committing a reftable addition.

While callers can figure out the second case, they do not have a mechanism to figure out whether `reftable_stack_reload()` led to an actual reload of the on-disk data. All they can do is thus to assume that data is always being reloaded in that case. Improve the situation by introducing a new `on_reload()` callback to the reftable options. If provided, the function will be invoked every time the stack has indeed been reloaded. This allows callers to invalidate data that depends on the current stack data.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
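A sketch of how a caller might wire up the callback; the `on_reload`/`on_reload_payload` member names follow the description above and are assumptions, as is the caller-side cache.

    /* Hypothetical caller-side cache that mirrors stack contents. */
    struct ref_cache { int valid; };
    static struct ref_cache my_cache = { 1 };

    static void invalidate_cache(void *payload)
    {
    	struct ref_cache *cache = payload;
    	cache->valid = 0; /* re-populate lazily on next access */
    }

    struct reftable_write_options opts = {
    	.on_reload = invalidate_cache,
    	.on_reload_payload = &my_cache,
    };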
2024-11-26 | reftable/stack: add accessor for the hash ID | Patrick Steinhardt | 1 | -0/+5
Add an accessor function that allows callers to access the hash ID of a reftable stack. This function will be used in a subsequent commit. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-11-19 | reftable/system: provide thin wrapper for lockfile subsystem | Patrick Steinhardt | 1 | -36/+27
We use the lockfile subsystem to write lockfiles for "tables.list". As with the tempfile subsystem, the lockfile subsystem also hooks into our infrastructure to prune stale locks via atexit(3p) or signal handlers. Furthermore, the lockfile subsystem also handles locking timeouts, which do add quite a bit of logic. Having to reimplement that in the context of Git wouldn't make a whole lot of sense, and it is quite likely that downstream users of the reftable library may have a better idea for how exactly to implement timeouts. So again, provide a thin wrapper for the lockfile subsystem instead such that the compatibility shim is fully self-contained. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-11-19 | reftable/stack: drop only use of `get_locked_file_path()` | Patrick Steinhardt | 1 | -3/+9
We've got a single callsite where we call `get_locked_file_path()`. As we're about to convert our usage of the lockfile subsystem to instead be used via a compatibility shim we'd have to implement more logic for this single callsite. While that would be okay if Git was the only supposed user of the reftable library, it's a bit more awkward when considering that we have to reimplement this functionality for every user of the library eventually. Refactor the code such that we don't call `get_locked_file_path()` anymore. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-11-19 | reftable/system: provide thin wrapper for tempfile subsystem | Patrick Steinhardt | 1 | -34/+23
We use the tempfile subsystem to write temporary tables, but given that we're in the process of converting the reftable library to become standalone we cannot use this subsystem directly anymore. While we could in theory convert the code to use mkstemp(3p) instead, we'd lose access to our infrastructure that automatically prunes tempfiles via atexit(3p) or signal handlers. Provide a thin wrapper for the tempfile subsystem instead. Like this, the compatibility shim is fully self-contained in "reftable/system.c". Downstream users of the reftable library would have to implement their own tempfile shims by replacing "system.c" with a custom version. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-11-19 | reftable/stack: stop using `fsync_component()` directly | Patrick Steinhardt | 1 | -17/+32
We're executing `fsync_component()` directly in the reftable library so that we can fsync data to disk depending on "core.fsync". But as we're in the process of converting the reftable library to become standalone we cannot use that function in the library anymore. Refactor the code such that users of the library can inject a custom fsync function via the write options. This allows us to get rid of the dependency on "write-or-die.h". Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
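A sketch of the injection from the caller's side; the `fsync` member of the write options follows the direction described above and should be treated as an assumption.

    #include <unistd.h>

    /* Route the library's syncing through a caller-provided callback
     * instead of Git's fsync_component(). */
    static int my_fsync(int fd)
    {
    	return fsync(fd);
    }

    struct reftable_write_options opts = {
    	.fsync = my_fsync,
    };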
2024-11-19 | reftable/system: stop depending on "hash.h" | Patrick Steinhardt | 1 | -2/+2
We include "hash.h" in "reftable/system.h" such that we can use hash format IDs as well as the raw size of SHA1 and SHA256. As we are in the process of converting the reftable library to become standalone we of course cannot rely on those constants anymore. Introduce a new `enum reftable_hash` to replace internal uses of the hash format IDs and new constants that replace internal uses of the hash size. Adapt the reftable backend to set up the correct hash function. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-10-17 | reftable: handle trivial `reftable_buf` errors | Patrick Steinhardt | 1 | -16/+33
Convert the reftable library such that we handle failures with the new `reftable_buf` interfaces. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Taylor Blau <me@ttaylorr.com>
2024-10-17 | reftable/stack: adapt `stack_filename()` to handle allocation failures | Patrick Steinhardt | 1 | -17/+45
The `stack_filename()` function cannot pass any errors to the caller as it has a `void` return type. Adapt it and its callers such that we can handle errors and start handling allocation failures. There are two interesting edge cases in `reftable_stack_destroy()` and `reftable_addition_close()`. Both of these are trying to tear down their respective structures, and while doing so they try to unlink some of the tables they have been keeping alive. Any earlier attempts to do that may fail on Windows because it keeps us from deleting such tables while they are still open, and thus we re-try on close. It's okay and even expected that this can fail when the tables are still open by another process, so we handle the allocation failures gracefully and just skip over any file whose name we couldn't figure out. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Taylor Blau <me@ttaylorr.com>
2024-10-17 | reftable/stack: adapt `format_name()` to handle allocation failures | Patrick Steinhardt | 1 | -9/+19
The `format_name()` function cannot pass any errors to the caller as it has a `void` return type. Adapt it and its callers such that we can handle errors and start handling allocation failures. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Taylor Blau <me@ttaylorr.com>
2024-10-17 | reftable: convert from `strbuf` to `reftable_buf` | Patrick Steinhardt | 1 | -60/+60
Convert the reftable library to use the `reftable_buf` interface instead of the `strbuf` interface. This is mostly a mechanical change via sed(1) with some manual fixes where functions for `strbuf` and `reftable_buf` differ. The converted code does not yet handle allocation failures. This will be handled in subsequent commits. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Taylor Blau <me@ttaylorr.com>
2024-10-17 | reftable: stop using `strbuf_addf()` | Patrick Steinhardt | 1 | -6/+12
We're about to introduce our own `reftable_buf` type to replace `strbuf`. One function we'll have to convert is `strbuf_addf()`, which is used in a handful of places. This function uses `snprintf()` internally, which makes porting it a bit more involved: - It is not available on all platforms. - Some platforms like Windows have broken implementations. So by using `snprintf()` we'd also push the burden on downstream users of the reftable library to make available a properly working version of it. Most callsites of `strbuf_addf()` are trivial to convert to not using it. We do end up using `snprintf()` in our unit tests, but that isn't much of a problem for downstream users of the reftable library. While at it, remove a useless call to `strbuf_reset()` in `t_reftable_stack_auto_compaction_with_locked_tables()`. We don't write to the buffer before this and initialize it with `STRBUF_INIT`, so there is no need to reset anything. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Taylor Blau <me@ttaylorr.com>
2024-10-10 | Merge branch 'ps/reftable-alloc-failures' | Junio C Hamano | 1 | -44/+143
The reftable library is now prepared to expect that the memory allocation function given to it may fail to allocate and to deal with such an error.

* ps/reftable-alloc-failures: (26 commits)
  reftable/basics: fix segfault when growing `names` array fails
  reftable/basics: ban standard allocator functions
  reftable: introduce `REFTABLE_FREE_AND_NULL()`
  reftable: fix calls to free(3P)
  reftable: handle trivial allocation failures
  reftable/tree: handle allocation failures
  reftable/pq: handle allocation failures when adding entries
  reftable/block: handle allocation failures
  reftable/blocksource: handle allocation failures
  reftable/iter: handle allocation failures when creating indexed table iter
  reftable/stack: handle allocation failures in auto compaction
  reftable/stack: handle allocation failures in `stack_compact_range()`
  reftable/stack: handle allocation failures in `reftable_new_stack()`
  reftable/stack: handle allocation failures on reload
  reftable/reader: handle allocation failures in `reader_init_iter()`
  reftable/reader: handle allocation failures for unindexed reader
  reftable/merged: handle allocation failures in `merged_table_init_iter()`
  reftable/writer: handle allocation failures in `reftable_new_writer()`
  reftable/writer: handle allocation failures in `writer_index_hash()`
  reftable/record: handle allocation failures when decoding records
  ...
2024-10-02 | reftable: introduce `REFTABLE_FREE_AND_NULL()` | Patrick Steinhardt | 1 | -4/+4
We have several calls to `FREE_AND_NULL()` in the reftable library, which of course uses free(3P). As the reftable allocators are pluggable we should rather call the reftable specific function, which is `reftable_free()`. Introduce a new macro `REFTABLE_FREE_AND_NULL()` and adapt the callsites accordingly. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
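Given the description, the macro's shape is essentially the familiar free-and-clear idiom, routed through the pluggable reftable allocator; a sketch:

    #define REFTABLE_FREE_AND_NULL(p) \
    	do { \
    		reftable_free(p); \
    		(p) = NULL; \
    	} while (0)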
2024-10-02 | reftable: fix calls to free(3P) | Patrick Steinhardt | 1 | -1/+1
There are a small set of calls to free(3P) in the reftable library. As the reftable allocators are pluggable we should rather call the reftable specific function, which is `reftable_free()`. Convert the code accordingly. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-10-02 | reftable: handle trivial allocation failures | Patrick Steinhardt | 1 | -0/+20
Handle trivial allocation failures in the reftable library and its unit tests. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-10-02 | reftable/stack: handle allocation failures in auto compaction | Patrick Steinhardt | 1 | -4/+12
Handle allocation failures in `reftable_stack_auto_compact()`. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-10-02 | reftable/stack: handle allocation failures in `stack_compact_range()` | Patrick Steinhardt | 1 | -2/+17
Handle allocation failures in `stack_compact_range()`. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-10-02 | reftable/stack: handle allocation failures in `reftable_new_stack()` | Patrick Steinhardt | 1 | -8/+26
Handle allocation failures in `reftable_new_stack()`. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-10-02 | reftable/stack: handle allocation failures on reload | Patrick Steinhardt | 1 | -9/+25
Handle allocation failures in `reftable_stack_reload_once()`. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-10-02 | reftable/merged: handle allocation failures in `merged_table_init_iter()` | Patrick Steinhardt | 1 | -11/+23
Handle allocation failures in `merged_table_init_iter()`. While at it, merge `merged_iter_init()` into the function. It only has a single caller and merging them makes it easier to handle allocation failures consistently. This change also requires us to adapt `reftable_stack_init_*_iterator()` to bubble up the new error codes of `merged_table_iter_init()`. Adapt callsites accordingly. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-10-02 | reftable/writer: handle allocation failures in `reftable_new_writer()` | Patrick Steinhardt | 1 | -4/+10
Handle allocation failures in `reftable_new_writer()`. Adapt the function to return an error code to return such failures. While at it, rename it to match our code style as we have to touch up every callsite anyway. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-10-02 | reftable/basics: handle allocation failures in `parse_names()` | Patrick Steinhardt | 1 | -1/+5
Handle allocation failures in `parse_names()` by returning `NULL` in case any allocation fails. While at it, refactor the function to return the array directly instead of assigning it to an out-pointer. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-09-24 | reftable/stack: allow locking of outdated stacks | Patrick Steinhardt | 1 | -7/+13
In `reftable_stack_new_addition()` we first lock the stack and then check whether it is still up-to-date. If it is not we return an error to the caller indicating that the stack is outdated. This is overly restrictive in our ref transaction interface though: we lock the stack right before we start to verify the transaction, so we do not really care whether it is outdated or not. What we really want is that the stack is up-to-date after it has been locked so that we can verify queued updates against its current state while we know that it is locked for concurrent modification. Introduce a new flag `REFTABLE_STACK_NEW_ADDITION_RELOAD` that alters the behaviour of `reftable_stack_init_addition()` in this case: when we notice that it is out-of-date we reload it instead of returning an error to the caller. This logic will be wired up in the reftable backend in the next commit. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-09-24 | refs/reftable: introduce "reftable.lockTimeout" | Patrick Steinhardt | 1 | -6/+12
When multiple concurrent processes try to update references in a repository they may try to lock the same lockfiles. This can happen even when the updates are non-conflicting and can both be applied, so it doesn't always make sense to abort the transaction immediately. Both the "loose" and "packed" backends thus have a grace period during which they wait for the lock to be released, which can be controlled via the config values "core.filesRefLockTimeout" and "core.packedRefsTimeout", respectively. The reftable backend doesn't have such a setting yet and instead fails immediately when it sees such a lock. But the exact same concepts apply here as they do to the other backends. Introduce a new "reftable.lockTimeout" config that controls how long we may wait for a "tables.list" lock to be released. The default value of this config is 100ms, the same default as the "loose" backend's. Note that even though we also lock individual tables, this config really only applies to the "tables.list" file. This is because individual tables are only ever locked when we already hold the "tables.list" lock during compaction. When we observe such a lock we in fact do not want to compact the table at all, because it is already in the process of being compacted by a concurrent process. So applying the same timeout here would not make any sense and would only delay progress. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-08-23 | reftable/stack: fix segfault when reload with reused readers fails | Patrick Steinhardt | 1 | -0/+23
It is expected that reloading the stack fails with concurrent writers, e.g. because a table that we just wanted to read just got compacted. In case we decided to reuse readers this will cause a segfault though because we unconditionally release all new readers, including the reused ones. As those are still referenced by the current stack, the result is that we will eventually try to dereference those already-freed readers. Fix this bug by incrementing the refcount of reused readers temporarily. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-08-23 | reftable/stack: reorder swapping in the reloaded stack contents | Patrick Steinhardt | 1 | -15/+19
The code flow of how we swap in the reloaded stack contents is somewhat convoluted because we switch back and forth between swapping in different parts of the stack. Reorder the code to simplify it. We now first close and unlink the old tables which do not get reused before we update the stack to point to the reloaded contents. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-08-23 | reftable/reader: introduce refcounting | Patrick Steinhardt | 1 | -4/+4
It was recently reported that concurrent reads and writes may cause the reftable backend to segfault. The root cause of this is that we do not properly keep track of reftable readers across reloads. Suppose that you have a reftable iterator and then decide to reload the stack while iterating through the iterator. When the stack has been rewritten since we have created the iterator, then we would end up discarding a subset of readers that may still be in use by the iterator. The consequence is that we now try to reference deallocated memory, which of course segfaults.

One way to trigger this is in t5616, where some background maintenance jobs have been leaking from one test into another. This leads to stack traces like the following one:

    + git -c protocol.version=0 -C pc1 fetch --filter=blob:limit=29999 --refetch origin
    AddressSanitizer:DEADLYSIGNAL
    =================================================================
    ==657994==ERROR: AddressSanitizer: SEGV on unknown address 0x7fa0f0ec6089 (pc 0x55f23e52ddf9 bp 0x7ffe7bfa1700 sp 0x7ffe7bfa1700 T0)
    ==657994==The signal is caused by a READ memory access.
        #0 0x55f23e52ddf9 in get_var_int reftable/record.c:29
        #1 0x55f23e53295e in reftable_decode_keylen reftable/record.c:170
        #2 0x55f23e532cc0 in reftable_decode_key reftable/record.c:194
        #3 0x55f23e54e72e in block_iter_next reftable/block.c:398
        #4 0x55f23e5573dc in table_iter_next_in_block reftable/reader.c:240
        #5 0x55f23e5573dc in table_iter_next reftable/reader.c:355
        #6 0x55f23e5573dc in table_iter_next reftable/reader.c:339
        #7 0x55f23e551283 in merged_iter_advance_subiter reftable/merged.c:69
        #8 0x55f23e55169e in merged_iter_next_entry reftable/merged.c:123
        #9 0x55f23e55169e in merged_iter_next_void reftable/merged.c:172
        #10 0x55f23e537625 in reftable_iterator_next_ref reftable/generic.c:175
        #11 0x55f23e2cf9c6 in reftable_ref_iterator_advance refs/reftable-backend.c:464
        #12 0x55f23e2d996e in ref_iterator_advance refs/iterator.c:13
        #13 0x55f23e2d996e in do_for_each_ref_iterator refs/iterator.c:452
        #14 0x55f23dca6767 in get_ref_map builtin/fetch.c:623
        #15 0x55f23dca6767 in do_fetch builtin/fetch.c:1659
        #16 0x55f23dca6767 in fetch_one builtin/fetch.c:2133
        #17 0x55f23dca6767 in cmd_fetch builtin/fetch.c:2432
        #18 0x55f23dba7764 in run_builtin git.c:484
        #19 0x55f23dba7764 in handle_builtin git.c:741
        #20 0x55f23dbab61e in run_argv git.c:805
        #21 0x55f23dbab61e in cmd_main git.c:1000
        #22 0x55f23dba4781 in main common-main.c:64
        #23 0x7fa0f063fc89 in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58
        #24 0x7fa0f063fd44 in __libc_start_main_impl ../csu/libc-start.c:360
        #25 0x55f23dba6ad0 in _start (git+0xadfad0) (BuildId: 803b2b7f59beb03d7849fb8294a8e2145dd4aa27)

While it is somewhat awkward that the maintenance processes survive tests in the first place, it is totally expected that reftables should work alright with concurrent writers. Seemingly they don't. The only underlying resource that we need to care about in this context is the reftable reader, which is responsible for reading a single table from disk. These readers get discarded immediately (unless reused) when calling `reftable_stack_reload()`, which is wrong. We can only close them once we know that there are no iterators using them anymore. Prepare for a fix by converting the reftable readers to be refcounted.

Reported-by: Jeff King <peff@peff.net>
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-08-23 | reftable/reader: inline `reader_close()` | Patrick Steinhardt | 1 | -4/+1
Same as with the preceding commit, we also provide a `reader_close()` function that allows the caller to close a reader without freeing it. This is unnecessary now that all users will have an allocated version of the reader. Inline it into `reftable_reader_free()`. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-08-23 | reftable/reader: rename `reftable_new_reader()` | Patrick Steinhardt | 1 | -2/+2
Rename the `reftable_new_reader()` function to `reftable_reader_new()` to match our coding guidelines. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-08-23 | reftable/stack: inline `stack_compact_range_stats()` | Patrick Steinhardt | 1 | -12/+4
The only difference between `stack_compact_range_stats()` and `stack_compact_range()` is that the former updates stats on failure, whereas the latter doesn't. There are no callers anymore that do not want their stats updated though, making the indirection unnecessary. Inline the stat updates into `stack_compact_range()`. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-08-22 | reftable/generic: drop interface | Patrick Steinhardt | 1 | -1/+0
The `reftable_table` interface provides a generic infrastructure that can abstract away whether the underlying table is a single table, or a merged table. This abstraction can make it rather hard to reason about the code. We didn't ever use it to implement the reftable backend, and with the preceding patches in this patch series we in fact don't use it at all anymore. Furthermore, it became somewhat useless with the recent refactorings that made it possible to seek reftable iterators multiple times, as these now provide generic access to tables for us. The interface is thus redundant and only brings unnecessary complexity with it. Remove the `struct reftable_table` interface and its associated functions. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-08-22 | t/helper: inline `reftable_stack_print_directory()` | Patrick Steinhardt | 1 | -20/+0
Move `reftable_stack_print_directory()` into the "dump-reftable" helper. This follows the same reasoning as the preceding commit. Note that this requires us to remove the tests for this functionality in `reftable/stack_test.c`. The test does not really add much anyway, because all it verifies is that we do not crash or run into an error, and it specifically doesn't check the outputted data. Also, as the code is now part of the test helper, it doesn't make much sense to have a unit test for it in the first place. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-08-22 | reftable/stack: open-code reading refs | Patrick Steinhardt | 1 | -3/+22
To read a reference for the reftable stack, we first create a generic `reftable_table` from the merged table and then read the reference via a convenience function. We are about to remove these generic interfaces, so let's instead open-code the logic to prepare for this removal. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-08-22 | reftable/merged: stop using generic tables in the merged table | Patrick Steinhardt | 1 | -30/+18
The merged table provides access to a reftable stack by merging the contents of those tables into a virtual table. These subtables are being tracked via `struct reftable_table`, which is a generic interface for accessing either a single reftable or a merged reftable. So in theory, it would be possible for the merged table to merge together other merged tables. This is somewhat nonsensical though: we only ever set up a merged table over normal reftables, and there is no reason to do otherwise. This generic interface thus makes the code way harder to follow and reason about than really necessary. The abstraction layer may also have an impact on performance, even though the extra set of vtable function calls probably doesn't really matter. Refactor the merged tables to use a `struct reftable_reader` for each of the subtables instead, which gives us direct access to the underlying tables. Adjust names accordingly. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-08-22 | reftable/merged: rename `reftable_new_merged_table()` | Patrick Steinhardt | 1 | -2/+2
Rename `reftable_new_merged_table()` to `reftable_merged_table_new()` such that the name matches our coding style. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-08-08 | reftable/stack: handle locked tables during auto-compaction | Patrick Steinhardt | 1 | -10/+49
When compacting tables, it may happen that we want to compact a set of tables which are already locked by a concurrent process that compacts them. In the case where we wanted to perform a full compaction of all tables it is sensible to bail out in this case, as we cannot fulfill the requested action. But when performing auto-compaction it isn't necessarily in our best interest to abort the whole operation. For example, due to the geometric compaction scheme that we use, it may be that process A takes a lot of time to compact the bulk of all tables whereas process B appends a bunch of new tables to the stack. B would in this case also notice that it has to compact the tables that process A is compacting already and thus also try to compact the same range, probably including the new tables it has appended. But because those tables are locked already, it will fail and thus abort the complete auto-compaction. The consequence is that the stack will grow longer and longer while A isn't yet done with compaction, which will lead to a growing performance impact. Instead of aborting auto-compaction altogether, let's gracefully handle this situation by instead compacting tables which aren't locked. To do so, instead of locking from the beginning of the slice-to-be-compacted, we start locking tables from the end of the slice. Once we hit the first table that is locked already, we abort. If we succeed in locking two or more tables, then we simply reduce the slice of tables that we're about to compact to those which we managed to lock. This ensures that we can at least make some progress for compaction in said scenario. It also helps in other scenarios, like for example when a process died and left a stale lockfile behind. In such a case we can at least ensure some compaction on a best-effort basis. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-08-08 | reftable/stack: fix corruption on concurrent compaction | Patrick Steinhardt | 1 | -5/+102
The locking employed by compaction uses the following schema:

1. Lock "tables.list" and verify that it matches the version we have loaded in core.
2. Lock each of the tables in the user-supplied range of tables that we are supposed to compact. These locks prohibit any concurrent process from compacting those tables while we are doing that.
3. Unlock "tables.list". This enables concurrent processes to add new tables to the stack, but also allows them to compact tables outside of the range of tables that we have locked.
4. Perform the compaction.
5. Lock "tables.list" again.
6. Move the compacted table into place.
7. Write the new order of tables, including the compacted table, into the lockfile.
8. Commit the lockfile into place.

Letting concurrent processes modify the "tables.list" file while we are doing the compaction is very much part of the design and thus expected. After all, it may take some time to compact tables in the case where we are compacting a lot of very large tables.

But there is a bug in the code. Suppose we have two processes which are compacting two slices of the table. Given that we lock each of the tables before compacting them, we know that the slices must be disjoint from each other. But regardless of that, compaction performed by one process will always impact what the other process needs to write to the "tables.list" file. Right now, we do not check whether the "tables.list" has been changed after we have locked it for the second time in (5). This has the consequence that we will always commit the old, cached in-core tables to disk without respecting what the other process has written. This scenario would then lead to data loss and corruption.

This can even happen in the simpler case of one compacting process and one writing process. The newly-appended table by the writing process would get discarded by the compacting process because it never sees the new table. Fix this bug by re-checking whether our stack is still up to date after locking for the second time. If it isn't, then we adjust the indices of tables to replace in the updated stack.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-08-08 | reftable/stack: use lock_file when adding table to "tables.list" | Patrick Steinhardt | 1 | -10/+11
When modifying "tables.list", we need to lock the list before updating it to ensure that no concurrent writers modify the list at the same point in time. While we do this via the `lock_file` subsystem when compacting the stack, we manually handle the lock when adding a new table to it. While not wrong, it is at least inconsistent. Refactor the code to consistently lock "tables.list" via the `lock_file` subsytem. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-08-08 | reftable/stack: do not die when fsyncing lock file files | Patrick Steinhardt | 1 | -2/+5
We use `fsync_component_or_die()` when committing an addition to the "tables.list" lock file, which unsurprisingly dies in case the fsync fails. Given that this is part of the reftable library, we should never die and instead let callers handle the error. Adapt accordingly and use `fsync_component()` instead. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-08-08 | reftable/stack: simplify tracking of table locks | Patrick Steinhardt | 1 | -7/+7
When compacting tables, we store the locks of all tables we are about to compact in the `table_locks` array. As we currently only ever compact all tables in the user-provided range or none, we simply track those locks via the indices of the respective tables in the merged stack. This is about to change though, as we will introduce a mode where auto compaction gracefully handles the case of already-locked files. In this case, it may happen that we only compact a subset of the user-supplied range of tables. In this case, the indices will not necessarily match the lock indices anymore. Refactor the code such that we track the number of locks via a separate variable. The resulting code is expected to perform the same, but will make it easier to perform the described change. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-08-08 | reftable/stack: update stats on failed full compaction | Patrick Steinhardt | 1 | -7/+7
When auto-compaction fails due to a locking error, we update the statistics to indicate this failure. We're not doing the same when performing a full compaction. Fix this inconsistency by using `stack_compact_range_stats()`, which handles the stat update for us. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-08-08 | reftable/stack: refactor function to gather table sizes | Patrick Steinhardt | 1 | -5/+6
Refactor the function that gathers table sizes to be more idiomatic. For one, use `REFTABLE_CALLOC_ARRAY()` instead of `reftable_calloc()`. Second, avoid using an integer to iterate through the tables in the reftable stack given that `stack_len` itself is using a `size_t`. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-06-17 | Merge branch 'ps/no-writable-strings' | Junio C Hamano | 1 | -4/+6
Building with "-Werror -Wwrite-strings" is now supported. * ps/no-writable-strings: (27 commits) config.mak.dev: enable `-Wwrite-strings` warning builtin/merge: always store allocated strings in `pull_twohead` builtin/rebase: always store allocated string in `options.strategy` builtin/rebase: do not assign default backend to non-constant field imap-send: fix leaking memory in `imap_server_conf` imap-send: drop global `imap_server_conf` variable mailmap: always store allocated strings in mailmap blob revision: always store allocated strings in output encoding remote-curl: avoid assigning string constant to non-const variable send-pack: always allocate receive status parse-options: cast long name for OPTION_ALIAS http: do not assign string constant to non-const field compat/win32: fix const-correctness with string constants pretty: add casts for decoration option pointers object-file: make `buf` parameter of `index_mem()` a constant object-file: mark cached object buffers as const ident: add casts for fallback name and GECOS entry: refactor how we remove items for delayed checkouts line-log: always allocate the output prefix line-log: stop assigning string constant to file parent buffer ...
2024-06-17 | Merge branch 'ps/ref-storage-migration' | Junio C Hamano | 1 | -6/+2
A new command has been added to migrate a repository that uses the files backend for its ref storage to use the reftable backend, with limitations.

* ps/ref-storage-migration:
  builtin/refs: new command to migrate ref storage formats
  refs: implement logic to migrate between ref storage formats
  refs: implement removal of ref storages
  worktree: don't store main worktree twice
  reftable: inline `merged_table_release()`
  refs/files: fix NULL pointer deref when releasing ref store
  refs/files: extract function to iterate through root refs
  refs/files: refactor `add_pseudoref_and_head_entries()`
  refs: allow to skip creation of reflog entries
  refs: pass storage format to `ref_store_init()` explicitly
  refs: convert ref storage format to an enum
  setup: unset ref storage when reinitializing repository version
2024-06-07 | global: improve const correctness when assigning string constants | Patrick Steinhardt | 1 | -4/+6
We're about to enable `-Wwrite-strings`, which changes the type of string constants to `const char[]`. Fix various sites where we assign such constants to non-const variables. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-06-06 | reftable: inline `merged_table_release()` | Patrick Steinhardt | 1 | -6/+2
The function `merged_table_release()` releases a merged table, whereas `reftable_merged_table_free()` releases a merged table and then also frees its pointer. But all callsites of `merged_table_release()` are in fact followed by `reftable_merged_table_free()`, which is redundant. Inline `merged_table_release()` into `reftable_merged_table_free()` to get rid of this redundancy. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-05-30 | Merge branch 'ps/reftable-reusable-iterator' | Junio C Hamano | 1 | -5/+23
Code clean-up to bring the reftable iterator closer to being reusable.

* ps/reftable-reusable-iterator:
  reftable/merged: adapt interface to allow reuse of iterators
  reftable/stack: provide convenience functions to create iterators
  reftable/reader: adapt interface to allow reuse of iterators
  reftable/generic: adapt interface to allow reuse of iterators
  reftable/generic: move seeking of records into the iterator
  reftable/merged: simplify indices for subiterators
  reftable/merged: split up initialization and seeking of records
  reftable/reader: set up the reader when initializing table iterator
  reftable/reader: inline `reader_seek_internal()`
  reftable/reader: separate concerns of table iter and reftable reader
  reftable/reader: unify indexed and linear seeking
  reftable/reader: avoid copying index iterator
  reftable/block: use `size_t` to track restart point index
2024-05-13reftable/merged: adapt interface to allow reuse of iteratorsPatrick Steinhardt1-5/+9
Refactor the interfaces exposed by `struct reftable_merged_table` and `struct merged_iter` such that they support iterator reuse. This is done by separating initialization of the iterator and seeking on it. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
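A usage sketch of the resulting split between initialization and seeking; the init function name is assumed for illustration, while the seek/next calls follow the public iterator API:

    struct reftable_iterator it = { 0 };
    struct reftable_ref_record ref = { 0 };

    merged_table_init_ref_iterator(mt, &it); /* initialize once (name assumed) */

    /* ... then seek as often as needed, reusing the same iterator ... */
    if (!reftable_iterator_seek_ref(&it, "refs/heads/a"))
        while (!reftable_iterator_next_ref(&it, &ref))
            ; /* consume records for refs/heads/a */
    if (!reftable_iterator_seek_ref(&it, "refs/heads/b"))
        while (!reftable_iterator_next_ref(&it, &ref))
            ; /* reuse: no iterator state is re-allocated */

    reftable_ref_record_release(&ref);
    reftable_iterator_destroy(&it);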
2024-05-13reftable/stack: provide convenience functions to create iteratorsPatrick Steinhardt1-0/+15
There are a bunch of call sites in the reftable backend that want to create iterators for a reftable stack. This is rather convoluted right now, as you always have to go via the merged table. And it is about to become even more convoluted when we split up iterator initialization and seeking in the next commit. Introduce convenience functions that allow the caller to create an iterator from a reftable stack directly without going through the merged table; see the sketch below. Adapt callers accordingly. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
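Sketch of the before/after at a call site (the new function name follows the series subject; treat the details as illustrative):

    /* before: detour via the merged table */
    struct reftable_merged_table *mt = reftable_stack_merged_table(st);
    struct reftable_iterator it = { 0 };
    merged_table_init_ref_iterator(mt, &it); /* name assumed */

    /* after: ask the stack directly */
    reftable_stack_init_ref_iterator(st, &it);
    reftable_iterator_seek_ref(&it, "refs/heads/main");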
2024-05-13reftable: make the compaction factor configurablePatrick Steinhardt1-4/+10
When auto-compacting, the reftable library packs references such that the sizes of the tables form a geometric sequence. The factor for this geometric sequence is hardcoded to 2 right now. We're about to make this factor configurable though, so let's expose it via the write options. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
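A usage sketch, assuming the knob lands as a factor field in `reftable_write_options` (the field name is an assumption for illustration):

    struct reftable_write_options opts = {
        /* neighboring tables must now differ by 4x instead of 2x */
        .auto_compaction_factor = 4,
    };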
2024-05-13reftable: pass opts as constant pointerPatrick Steinhardt1-2/+5
We sometimes pass the reftable write options by value and sometimes by pointer. This is quite confusing and makes the reader wonder whether the options get modified sometimes. In fact, `reftable_new_writer()` does cause the caller-provided options to get updated when some values aren't set up. This is quite unexpected, but didn't cause any harm until now. Adapt the code so that we do not modify the caller-provided values anymore. While at it, refactor the code to consistently pass the options as a constant pointer to clarify that the caller-provided opts will not ever get modified. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-05-13reftable: consistently refer to `reftable_write_options` as `opts`Patrick Steinhardt1-21/+20
Throughout the reftable library the `reftable_write_options` are sometimes referred to as `cfg` and sometimes as `opts`. Unify these to consistently use `opts` to avoid confusion. While at it, touch up the coding style a bit by removing unneeded braces around one-line statements and newlines between variable declarations. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-05-08Merge branch 'ps/reftable-write-optim'Junio C Hamano1-66/+1
Code to write out reftable has seen some optimization and simplification. * ps/reftable-write-optim: reftable/block: reuse compressed array reftable/block: reuse zstream when writing log blocks reftable/writer: reset `last_key` instead of releasing it reftable/writer: unify releasing memory reftable/writer: refactorings for `writer_flush_nonempty_block()` reftable/writer: refactorings for `writer_add_record()` refs/reftable: don't recompute committer ident reftable: remove name checks refs/reftable: skip duplicate name checks refs/reftable: perform explicit D/F check when writing symrefs refs/reftable: fix D/F conflict error message on ref copy
2024-04-08reftable: remove name checksPatrick Steinhardt1-66/+1
In the preceding commit we have disabled name checks in the "reftable" backend. These checks were responsible for verifying multiple things when writing records to the reftable stack: - Detecting file/directory conflicts. Starting with the preceding commits this is now handled by the reftable backend itself via `refs_verify_refname_available()`. - Validating refnames. This is handled by `check_refname_format()` in the generic ref transaction layer. The code in the reftable library is thus not used anymore and likely to bitrot over time. Remove it. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-04-08reftable/stack: use geometric table compactionJustin Tobler1-61/+62
To reduce the number of on-disk reftables, compaction is performed. Contiguous tables with the same binary log value of size are grouped into segments. The segment that has both the lowest binary log value and contains more than one table is set as the starting point when identifying the compaction segment. Since segments containing a single table are not initially considered for compaction, if the table appended to the list does not match the previous table log value, no compaction occurs for the new table. The table list can therefore grow without bound. This can be demonstrated by repeating the following sequence: git branch -f foo git branch -d foo Each operation results in a new table being written with no compaction occurring until a separate operation produces a table matching the previous table log value. Instead, to avoid unbounded growth of the table list, the compaction strategy is updated to ensure tables follow a geometric sequence after each operation by individually evaluating each table in reverse index order. This strategy results in a much simpler and more robust algorithm compared to the previous one while also maintaining a minimal ordered set of tables on-disk. When creating 10 thousand references, the new strategy has no performance impact: Benchmark 1: update-ref: create refs sequentially (revision = HEAD~) Time (mean ± σ): 26.516 s ± 0.047 s [User: 17.864 s, System: 8.491 s] Range (min … max): 26.447 s … 26.569 s 10 runs Benchmark 2: update-ref: create refs sequentially (revision = HEAD) Time (mean ± σ): 26.417 s ± 0.028 s [User: 17.738 s, System: 8.500 s] Range (min … max): 26.366 s … 26.444 s 10 runs Summary update-ref: create refs sequentially (revision = HEAD) ran 1.00 ± 0.00 times faster than update-ref: create refs sequentially (revision = HEAD~) Some tests in `t0610-reftable-basics.sh` assert the on-disk state of tables and are therefore updated to specify the correct new table count. Since compaction is more aggressive in ensuring tables maintain a geometric sequence, the expected table count is reduced in these tests. In `reftable/stack_test.c` tests related to `sizes_to_segments()` are removed because the function is no longer needed. Also, the `test_suggest_compaction_segment()` test is updated to better showcase and reflect the new geometric compaction behavior. Signed-off-by: Justin Tobler <jltobler@gmail.com> Acked-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
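A simplified sketch of the segment-selection logic described above (illustrative, not the verbatim implementation; tables are indexed oldest to newest):

    struct segment {
        size_t start, end; /* half-open range [start, end) of table indices */
    };

    static struct segment suggest_compaction_segment(const uint64_t *sizes,
                                                     size_t n, uint8_t factor)
    {
        struct segment seg = { 0, 0 }; /* empty: nothing to compact */
        uint64_t bytes;
        size_t i;

        if (n <= 1)
            return seg;

        /* Newest-first: find the first neighboring pair that violates
         * the geometric sequence. */
        for (i = n - 1; i > 0; i--)
            if (sizes[i - 1] < factor * sizes[i])
                break;
        if (!i)
            return seg; /* already geometric, nothing to do */

        /* Merge the violating pair ... */
        seg.start = i - 1;
        seg.end = i + 1;
        bytes = sizes[i] + sizes[i - 1];

        /* ... and keep extending towards older tables for as long as the
         * next older table would itself be dwarfed by the merged result. */
        while (seg.start > 0 && sizes[seg.start - 1] < factor * bytes) {
            seg.start--;
            bytes += sizes[seg.start];
        }
        return seg;
    }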
2024-04-08reftable/stack: expose option to disable auto-compactionJustin Tobler1-1/+1
The reftable stack already has a variable to configure whether or not to run auto-compaction, but it is inaccessible to users of the library. There exist use cases where a caller may want to have more control over auto-compaction. Move the `disable_auto_compact` option into `reftable_write_options` to allow external callers to disable auto-compaction. This will be used in a subsequent commit. Signed-off-by: Justin Tobler <jltobler@gmail.com> Acked-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
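Usage sketch (the field name follows the commit subject; its placement is per the series description):

    struct reftable_write_options opts = {
        .disable_auto_compact = 1, /* caller decides when to compact */
    };

    /* ... perform many small transactions, then compact explicitly: */
    err = reftable_stack_compact_all(st, NULL);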
2024-03-25reftable/stack: gracefully handle failed auto-compaction due to locksPatrick Steinhardt1-1/+12
Whenever we commit a new table to the reftable stack we will end up invoking auto-compaction of the stack to keep the total number of tables at bay. This auto-compaction may fail though in case at least one of the tables which we are about to compact is locked. This is indicated by the compaction function returning `REFTABLE_LOCK_ERROR`. We do not handle this case though, and thus bubble that return value up the calling chain, which will ultimately cause a failure. Fix this bug by ignoring `REFTABLE_LOCK_ERROR`. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
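The gist of the fix, sketched:

    err = reftable_stack_auto_compact(st);
    if (err == REFTABLE_LOCK_ERROR)
        err = 0; /* benign: a concurrent writer holds a lock; retry later */
    if (err < 0)
        goto done;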
2024-03-25reftable/stack: use error codes when locking fails during compactionPatrick Steinhardt1-5/+13
Compaction of a reftable stack may fail gracefully when there is a concurrent process that writes to the reftable stack and which has thus locked either the "tables.list" file or one of the tables. This is expected and can be handled gracefully by some of the callers which invoke compaction. Thus, to indicate this situation to our callers, we return a positive return code from `stack_compact_range()` and bubble it up to the caller. This kind of error handling is somewhat awkward though as many callers in the call chain never even think of handling positive return values. Thus, the result is either that such errors are swallowed by accident, or that we abort operations with an unhelpful error message. Make the code more robust by always using negative error codes when compaction fails, with `REFTABLE_LOCK_ERROR` for the described benign error case. Note that only a single callsite knew to handle positive error codes gracefully in the first place. Subsequent commits will touch up some of the other sites to handle those errors better. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-03-25reftable/error: discern locked/outdated errorsPatrick Steinhardt1-3/+3
We currently conflate two different error conditions in a single error code: - Errors when trying to lock the reftable stack. - Errors when trying to write to the reftable stack which has been modified concurrently. This results in unclear error handling and user-visible error messages. Create a new `REFTABLE_OUTDATED_ERROR` so that those error conditions can be clearly told apart from each other. Adjust users of the old `REFTABLE_LOCK_ERROR` to use the new error code as required. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-03-25reftable/stack: fix error handling in `reftable_stack_init_addition()`Patrick Steinhardt1-6/+1
In `reftable_stack_init_addition()` we call `stack_uptodate()` after having created the lockfile to check whether the stack was modified concurrently, which is indicated by a positive return code from the latter function. If so, we return a `REFTABLE_LOCK_ERROR` to the caller and abort the addition. The error handling has an off-by-one though because we check whether the error code is `> 1` instead of `> 0`. Thus, instead of returning the locking error, we would return a positive value. One of the callers of `reftable_stack_init_addition()` works around this bug by repeating the error code check without the off-by-one. But other callers are subtly broken by this bug. Fix this by checking for `err > 0` instead. This has the consequence that `reftable_stack_init_addition()` won't ever return a positive error code anymore, but will instead return `REFTABLE_LOCK_ERROR` now. Thus, we can drop the check for a positive error code in `stack_try_add()` now. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
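In essence, the corrected check (sketch):

    err = stack_uptodate(st);
    if (err < 0)
        goto done;
    if (err > 0) { /* stack modified concurrently; was `err > 1` before */
        err = REFTABLE_LOCK_ERROR;
        goto done;
    }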
2024-03-07reftable/stack: register compacted tables as tempfilesPatrick Steinhardt1-24/+30
We do not register tables resulting from stack compaction with the tempfile API. Those tables will thus not be deleted in case Git gets killed. Refactor the code to register compacted tables as tempfiles. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-03-07reftable/stack: register lockfiles during compactionPatrick Steinhardt1-134/+121
We do not register any of the locks we acquire when compacting the reftable stack via our lockfiles interfaces. These locks will thus not be released when Git gets killed. Refactor the code to register locks as lockfiles. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-03-07reftable/stack: register new tables as tempfilesPatrick Steinhardt1-17/+12
We do not register new tables which we're about to add to the stack with the tempfile API. Those tables will thus not be deleted in case Git gets killed. Refactor the code to register tables as tempfiles. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-02-12Merge branch 'ps/reftable-styles'Junio C Hamano1-81/+72
Code clean-up in various reftable code paths. * ps/reftable-styles: reftable/record: improve semantics when initializing records reftable/merged: refactor initialization of iterators reftable/merged: refactor seeking of records reftable/stack: use `size_t` to track stack length reftable/stack: use `size_t` to track stack slices during compaction reftable/stack: index segments with `size_t` reftable/stack: fix parameter validation when compacting range reftable: introduce macros to allocate arrays reftable: introduce macros to grow arrays
2024-02-06Merge branch 'ps/reftable-compacted-tables-permission-fix'Junio C Hamano1-0/+6
Reftable bugfix. * ps/reftable-compacted-tables-permission-fix: reftable/stack: adjust permissions of compacted tables
2024-02-06Merge branch 'jc/reftable-core-fsync'Junio C Hamano1-3/+21
The write codepath for the reftable data learned to honor core.fsync configuration. * jc/reftable-core-fsync: reftable/stack: fsync "tables.list" during compaction reftable: honor core.fsync
2024-02-06reftable/stack: use `size_t` to track stack lengthPatrick Steinhardt1-11/+10
While the stack length is already stored as `size_t`, we frequently use `int`s to refer to that length throughout the reftable library. Convert those cases to use `size_t` instead to make things consistent. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-02-06reftable/stack: use `size_t` to track stack slices during compactionPatrick Steinhardt1-16/+16
We use `int`s to track reftable slices when compacting the reftable stack, which is considered to be a code smell in the Git project. Convert the code to use `size_t` instead. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-02-06reftable/stack: index segments with `size_t`Patrick Steinhardt1-14/+11
We use `int`s to index into arrays of segments and track the length of them, which is considered to be a code smell in the Git project. Convert the code to use `size_t` instead. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-02-06reftable/stack: fix parameter validation when compacting rangePatrick Steinhardt1-11/+13
The `stack_compact_range()` function receives a "first" and "last" index that indicates which tables of the reftable stack should be compacted. Naturally, "first" must be smaller than "last" in order to identify a proper range of tables to compact, which we indeed also assert in the function. But the validation happens after we have already allocated arrays with a size of `last - first + 1`, leading to an underflow and thus an invalid allocation size. Fix this by reordering the array allocations to happen after we have validated parameters. While at it, convert the array allocations to use the newly introduced macros. Note that the relevant variables pointing into arrays should also be converted to use `size_t` instead of `int`. This is left for a later commit in this series. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
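Shape of the fix, sketched (the struct fields and the exact error code are illustrative):

    /* Validate the range before sizing any allocation from it. */
    if (first > last || last >= st->merged->stack_len)
        return REFTABLE_API_ERROR;

    REFTABLE_CALLOC_ARRAY(table_names, last - first + 1);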
2024-02-06reftable: introduce macros to allocate arraysPatrick Steinhardt1-15/+13
Similar to the preceding commit, let's carry over macros to allocate arrays with `REFTABLE_ALLOC_ARRAY()` and `REFTABLE_CALLOC_ARRAY()`. This requires us to change the signature of `reftable_calloc()`, which only takes a single argument right now and thus puts the burden on the caller to calculate the final array's size. This is a net improvement though as it means that we can now provide proper overflow checks when multiplying the array size with the member size. Convert callsites of `reftable_calloc()` to the new signature and start using the new macros where possible. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
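The macros look roughly like this (sketch; the centralized overflow check on the multiplication is the point):

    /* Allocate `n` zero-initialized members of the pointed-to type. */
    #define REFTABLE_CALLOC_ARRAY(x, n) \
        (x) = reftable_calloc((n), sizeof(*(x)))

    /* reftable_calloc() now takes nmemb and size, calloc(3P)-style, so
     * the nmemb * size multiplication can be overflow-checked once,
     * inside the allocator, instead of at every call site. */
    void *reftable_calloc(size_t nmemb, size_t size);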
2024-02-06reftable: introduce macros to grow arraysPatrick Steinhardt1-17/+12
Throughout the reftable library we have many cases where we need to grow arrays. In order to avoid too many reallocations, we roughly double the capacity of the array on each iteration. The resulting code pattern is duplicated across many sites. We have similar patterns in our main codebase, which is why we have eventually introduced an `ALLOC_GROW()` macro to abstract it away and avoid some code duplication. We cannot easily reuse this macro here though because `ALLOC_GROW()` uses `REALLOC_ARRAY()`, which in turn will call realloc(3P) to grow the array. The reftable code is structured as a library though (even if the boundaries are fuzzy), and one property this brings with it is that it is possible to plug in your own allocators. So instead of using realloc(3P), we need to use `reftable_realloc()` that knows to use the user-provided implementation. So let's introduce two new macros `REFTABLE_REALLOC_ARRAY()` and `REFTABLE_ALLOC_GROW()` that mirror what we do in our main codebase, with two modifications: - They use `reftable_realloc()`, as explained above. - They use a different growth factor of `2 * cap + 1` instead of `(cap + 16) * 3 / 2`. The second change is because we know a bit more about the allocation patterns in the reftable library. In most cases, we end up only having a handful of items in the array and don't end up growing them. The initial capacity that our normal growth factor uses (which is 24) would thus end up over-allocating in a lot of code paths. This effect is measurable: - Before change: HEAP SUMMARY: in use at exit: 671,983 bytes in 152 blocks total heap usage: 3,843,446 allocs, 3,843,294 frees, 223,761,402 bytes allocated - After change with a growth factor of `2 * cap + 1`: HEAP SUMMARY: in use at exit: 671,983 bytes in 152 blocks total heap usage: 3,843,446 allocs, 3,843,294 frees, 223,761,410 bytes allocated - After change with a growth factor of `(cap + 16) * 3 / 2`: HEAP SUMMARY: in use at exit: 671,983 bytes in 152 blocks total heap usage: 3,833,673 allocs, 3,833,521 frees, 4,728,251,742 bytes allocated While the total heap usage is roughly the same, we do end up allocating significantly more bytes with our usual growth factor (in fact, roughly 21 times as many). Convert the reftable library to use these new macros. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
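A sketch of the growth macro with the factor discussed above:

    #define REFTABLE_REALLOC_ARRAY(x, alloc) \
        (x) = reftable_realloc((x), sizeof(*(x)) * (alloc))

    #define REFTABLE_ALLOC_GROW(x, nr, alloc)     \
        do {                                      \
            if ((nr) > (alloc)) {                 \
                (alloc) = 2 * (alloc) + 1;        \
                if ((alloc) < (nr))               \
                    (alloc) = (nr);               \
                REFTABLE_REALLOC_ARRAY(x, alloc); \
            }                                     \
        } while (0)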
2024-01-30reftable/stack: fsync "tables.list" during compactionPatrick Steinhardt1-0/+8
In 1df18a1c9a (reftable: honor core.fsync, 2024-01-23), we have added code to fsync both newly written reftables as well as "tables.list" to disk. But there are two code paths where "tables.list" is being written: - When appending a new table due to a normal ref update. - When compacting a range of tables during compaction. We have only addressed the former code path, but do not yet sync the new "tables.list" file in the latter. Fix this omission. Note that we are not yet adding any tests. These tests will be added once the "reftable" backend has been upstreamed. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-01-29Merge branch 'ps/reftable-optimize-io'Junio C Hamano1-46/+154
Low-level I/O optimization for reftable. * ps/reftable-optimize-io: reftable/stack: fix race in up-to-date check reftable/stack: unconditionally reload stack after commit reftable/blocksource: use mmap to read tables reftable/blocksource: refactor code to match our coding style reftable/stack: use stat info to avoid re-reading stack list reftable/stack: refactor reloading to use file descriptor reftable/stack: refactor stack reloading to have common exit path
2024-01-26reftable/stack: adjust permissions of compacted tablesPatrick Steinhardt1-0/+6
When creating a new compacted table from a range of preexisting ones we don't set the default permissions on the resulting table when specified by the user. This has the effect that the "core.sharedRepository" config will not be honored correctly. Fix this bug and add a test to catch this issue. Note that we only test on non-Windows platforms because Windows does not use POSIX permissions natively. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-01-23reftable: honor core.fsyncJohn Cai1-3/+13
While the reffiles backend honors configured fsync settings, the reftable backend does not. Address this by fsyncing reftable files using the write-or-die API's fsync_component() in two places: when we add additional entries into the table, and when we close the reftable writer. This commit adds a flush function pointer as a new member of reftable_writer because we are not sure that the first argument to the *write function pointer always contains a file descriptor. In the case of strbuf_add_void, the first argument is a buffer. This way, we can pass in a corresponding flush function that knows how to flush depending on which writer is being used. This patch does not contain tests as they will need to wait for another patch to start to exercise the reftable backend. At that point, the tests will be added to observe that fsyncs are happening when the reftable is in use. Signed-off-by: John Cai <johncai86@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
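The flush callback described above might look like this (sketch; the exact writer-creation signature and helper names are assumed from the description):

    static int reftable_fd_flush(void *arg)
    {
        int *fdp = arg;
        /* honors core.fsync: only syncs when the component is enabled */
        return fsync_component(FSYNC_COMPONENT_REFERENCE, *fdp);
    }

    /* the writer now takes a flush callback next to the write callback */
    struct reftable_writer *wr =
        reftable_new_writer(reftable_fd_write, reftable_fd_flush, &fd, &opts);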
2024-01-18reftable/stack: fix race in up-to-date checkPatrick Steinhardt1-7/+92
In 6fdfaf15a0 (reftable/stack: use stat info to avoid re-reading stack list, 2024-01-11) we have introduced a new mechanism to avoid re-reading the table list in case stat(3P) figures out that the stack didn't change since the last time we read it. While this change significantly improved performance when writing many refs, it can unfortunately lead to false negatives in very specific scenarios. Given two processes A and B, there is a feasible sequence of events that causes us to accidentally treat the table list as up-to-date even though it changed: 1. A reads the reftable stack and caches its stat info. 2. B updates the stack, appending a new table to "tables.list". This will both use a new inode and result in a different file size, thus invalidating A's cache in theory. 3. B decides to auto-compact the stack and merges two tables. The file size now matches what A has cached again. Furthermore, the filesystem may decide to recycle the inode number of the file we have replaced in (2) because it is not in use anymore. 4. A reloads the reftable stack. Neither the inode number nor the file size changed. If the timestamps did not change either then we think the cached copy of our stack is up-to-date. In fact, the commit introduced three related issues: - Non-POSIX compliant systems may not report proper `st_dev` and `st_ino` values in stat(3P), which made us rely solely on the file's potentially coarse-grained mtime and ctime. - `stat_validity_check()` and friends may end up not comparing `st_dev` and `st_ino` depending on the "core.checkstat" config, again reducing the signal to the mtime and ctime. - `st_ino` can be recycled, rendering the check moot even on POSIX-compliant systems. Given that POSIX defines that "The st_ino and st_dev fields taken together uniquely identify the file within the system", these issues led to the most important signal to establish file identity to be ignored or become useless in some cases. Refactor the code to stop using `stat_validity_check()`. Instead, we manually fstat(3P) the file descriptors to make relevant information available. On Windows and MSYS2 the result will have both `st_dev` and `st_ino` set to 0, which allows us to address the first issue by not using the stat-based cache in that case. It also allows us to make sure that we always compare `st_dev` and `st_ino`, addressing the second issue. The third issue of inode recycling can be addressed by keeping the file descriptor of "tables.list" open during the lifetime of the reftable stack. As the file will still exist on disk even though it has been unlinked it is impossible for its inode to be recycled as long as the file descriptor is still open. This should address the race in a POSIX-compliant way. The only real downside is that this mechanism cannot be used on non-POSIX-compliant systems like Windows. But we at least have the second-level caching mechanism in place that compares contents of "tables.list" with the currently loaded list of tables.
This new mechanism performs roughly the same as the previous one that relied on `stat_validity_check()`: Benchmark 1: update-ref: create many refs (HEAD~) Time (mean ± σ): 4.754 s ± 0.026 s [User: 2.204 s, System: 2.549 s] Range (min … max): 4.694 s … 4.802 s 20 runs Benchmark 2: update-ref: create many refs (HEAD) Time (mean ± σ): 4.721 s ± 0.020 s [User: 2.194 s, System: 2.527 s] Range (min … max): 4.691 s … 4.753 s 20 runs Summary update-ref: create many refs (HEAD~) ran 1.01 ± 0.01 times faster than update-ref: create many refs (HEAD) Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
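A sketch of the resulting identity check; the helper names are invented for illustration:

    /* Can the stat-based cache be trusted at all? On Windows and MSYS2,
     * st_dev and st_ino are both 0 and carry no identity information. */
    static int stack_stat_usable(const struct stat *st)
    {
        return st->st_dev || st->st_ino;
    }

    static int stack_changed(const struct stat *cached, const struct stat *cur)
    {
        if (!stack_stat_usable(cached) || !stack_stat_usable(cur))
            return 1; /* fall back to re-reading "tables.list" */
        /* the open fd on "tables.list" pins its inode against recycling */
        return cached->st_dev != cur->st_dev ||
               cached->st_ino != cur->st_ino ||
               cached->st_size != cur->st_size ||
               cached->st_mtime != cur->st_mtime;
    }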
2024-01-18reftable/stack: unconditionally reload stack after commitPatrick Steinhardt1-1/+1
After we have committed an addition to the reftable stack we call `reftable_stack_reload()` to reload the stack and thus reflect the changes that were just added. This function will only conditionally reload the stack in case `stack_uptodate()` tells us that the stack needs reloading. This check is wasteful though because we already know that the stack needs reloading. Call `reftable_stack_reload_maybe_reuse()` instead, which will unconditionally reload the stack. This is merely a conceptual fix, the code in question was not found to cause any problems in practice. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-01-11reftable/stack: use stat info to avoid re-reading stack listPatrick Steinhardt1-1/+11
Whenever we call into the refs interfaces we potentially have to reload refs in case they have been concurrently modified, either in-process or externally. While this happens somewhat automatically for loose refs because we simply try to re-read the files, the "packed" backend will reload its snapshot of the packed-refs file in case its stat info has changed since last reading it. In the reftable backend we have a similar mechanism that is provided by `reftable_stack_reload()`. This function will read the list of tables from "tables.list" and, if it has changed from the currently stored list, reload the tables. This is highly inefficient though, as we have to check whether the stack is up-to-date on basically every read and thus keep on re-reading the file all the time even if it didn't change at all. We can do better and use the same stat(3P)-based mechanism that the "packed" backend uses. Instead of reading the file, we will only open the file descriptor, fstat(3P) it, and then compare the info against the cached value from the last time we have updated the stack. This should always work alright because "tables.list" is updated atomically via a rename, so even if the ctime or mtime wasn't granular enough to identify a change, at least the inode number or file size should have changed. This change significantly speeds up operations where many refs are read, like when using git-update-ref(1). The following benchmark creates N refs in an otherwise-empty repository via `git update-ref --stdin`: Benchmark 1: update-ref: create many refs (refcount = 1, revision = HEAD~) Time (mean ± σ): 5.1 ms ± 0.2 ms [User: 2.4 ms, System: 2.6 ms] Range (min … max): 4.8 ms … 7.2 ms 109 runs Benchmark 2: update-ref: create many refs (refcount = 100, revision = HEAD~) Time (mean ± σ): 19.1 ms ± 0.9 ms [User: 8.9 ms, System: 9.9 ms] Range (min … max): 18.4 ms … 26.7 ms 72 runs Benchmark 3: update-ref: create many refs (refcount = 10000, revision = HEAD~) Time (mean ± σ): 1.336 s ± 0.018 s [User: 0.590 s, System: 0.724 s] Range (min … max): 1.314 s … 1.373 s 10 runs Benchmark 4: update-ref: create many refs (refcount = 1, revision = HEAD) Time (mean ± σ): 5.1 ms ± 0.2 ms [User: 2.4 ms, System: 2.6 ms] Range (min … max): 4.8 ms … 7.2 ms 109 runs Benchmark 5: update-ref: create many refs (refcount = 100, revision = HEAD) Time (mean ± σ): 14.8 ms ± 0.2 ms [User: 7.1 ms, System: 7.5 ms] Range (min … max): 14.2 ms … 15.2 ms 82 runs Benchmark 6: update-ref: create many refs (refcount = 10000, revision = HEAD) Time (mean ± σ): 927.6 ms ± 5.3 ms [User: 437.8 ms, System: 489.5 ms] Range (min … max): 919.4 ms … 936.4 ms 10 runs Summary update-ref: create many refs (refcount = 1, revision = HEAD) ran 1.00 ± 0.07 times faster than update-ref: create many refs (refcount = 1, revision = HEAD~) 2.89 ± 0.14 times faster than update-ref: create many refs (refcount = 100, revision = HEAD) 3.74 ± 0.25 times faster than update-ref: create many refs (refcount = 100, revision = HEAD~) 181.26 ± 8.30 times faster than update-ref: create many refs (refcount = 10000, revision = HEAD) 261.01 ± 12.35 times faster than update-ref: create many refs (refcount = 10000, revision = HEAD~) Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
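Conceptually, the fast path looks like this (sketch; field and helper names are illustrative):

    int fd = open(st->list_file, O_RDONLY);
    struct stat cur;

    if (fd >= 0 && !fstat(fd, &cur) &&
        !stat_changed(&st->list_validity, &cur)) { /* hypothetical helper */
        close(fd);
        return 0; /* cached table list still current: skip re-reading */
    }
    /* otherwise read "tables.list" via fd and reload the tables */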
2024-01-11reftable/stack: refactor reloading to use file descriptorPatrick Steinhardt1-3/+18
We're about to introduce a stat(3P)-based caching mechanism to reload the list of tables only when it has changed. In order to avoid race conditions this requires us to have a file descriptor available that we can use to call fstat(3P) on. Prepare for this by converting the code to use `fd_read_lines()` so that we have the file descriptor readily available. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-01-11reftable/stack: refactor stack reloading to have common exit pathPatrick Steinhardt1-44/+42
The `reftable_stack_reload_maybe_reuse()` function is responsible for reloading the reftable list from disk. The function is quite hard to follow though because it has a bunch of different exit paths, many of which have to free the same set of resources. Refactor the function to have a common exit path. While at it, touch up the style of this function a bit to match our usual coding style better. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-01-03reftable/stack: do not auto-compact twice in `reftable_stack_add()`Patrick Steinhardt1-3/+0
In 5c086453ff (reftable/stack: perform auto-compaction with transactional interface, 2023-12-11), we fixed a bug where the transactional interface to add changes to a reftable stack did not perform auto-compaction by calling `reftable_stack_auto_compact()` in `reftable_stack_addition_commit()`. While correct, this change may now cause us to perform auto-compaction twice in the non-transactional interface `reftable_stack_add()`: - It performs auto-compaction by itself. - It now transitively performs auto-compaction via the transactional interface. Remove the first instance so that we only end up doing auto-compaction once. Reported-by: Han-Wen Nienhuys <hanwenn@gmail.com> Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-01-03reftable/stack: do not overwrite errors when compactingPatrick Steinhardt1-12/+8
In order to compact multiple tables we iterate through the merged ref and log records. When there is an error either while reading records from the old merged table or while writing them to the new table, we break out of the respective loop. When breaking out of the loop for the ref records, though, the error code gets overwritten, which may cause us to inadvertently skip over bad ref records. In the worst case, this can lead to a compacted stack that is missing records. Fix the code by using `goto done` instead so that any potential error codes are properly returned to the caller. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
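The gist of the fix, as a sketch:

    while (1) {
        err = reftable_iterator_next_ref(&it, &ref);
        if (err > 0) {
            err = 0; /* end of iteration */
            break;
        }
        if (err < 0)
            goto done; /* was `break`, which let later code clobber err */

        err = reftable_writer_add_ref(wr, &ref);
        if (err < 0)
            goto done;
    }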
2023-12-11reftable/stack: fix use of unseeded randomnessPatrick Steinhardt1-1/+1
When writing a new reftable stack, Git will first create the stack with a random suffix so that concurrent updates will not try to write to the same file. This random suffix is computed via a call to rand(3P). But we never seed the function via srand(3P), which means that the suffix is in fact always the same. Fix this bug by using `git_rand()` instead, which does not need to be initialized. While this function is likely going to be slower depending on the platform, this slowness should not matter in practice as we only use it when writing a new reftable stack. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
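Sketch of the change:

    /* before: rand(3P) is never seeded, so every process computes the
     * same "random" suffix */
    suffix = rand();

    /* after: git_rand() draws from an already-initialized source */
    suffix = git_rand();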
2023-12-11reftable/stack: fix stale lock when dyingPatrick Steinhardt1-32/+15
When starting a transaction via `reftable_stack_init_addition()`, we create a lockfile for the reftable stack itself which we'll write the new list of tables to. But if we terminate abnormally e.g. via a call to `die()`, then we do not remove the lockfile. Subsequent executions of Git which try to modify references will thus fail with an out-of-date error. Fix this bug by registering the lock as a `struct tempfile`, which ensures automatic cleanup for us. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
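Conceptually (sketch using the tempfile API; the path is illustrative and error handling is elided):

    /* Registered tempfiles are unlinked automatically on exit or die(). */
    struct tempfile *lock =
        create_tempfile(".git/reftable/tables.list.lock");

    /* ... write the new table list to the lock ... */

    rename_tempfile(&lock, ".git/reftable/tables.list"); /* on success */
    /* on failure or abnormal termination, the lock is cleaned up for us */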
2023-12-11reftable/stack: reuse buffers when reloading stackPatrick Steinhardt1-8/+4
In `reftable_stack_reload_once()` we iterate over all the tables added to the stack in order to figure out whether any of the tables needs to be reloaded. We use a set of buffers in this context to compute the paths of these tables, but discard those buffers on every iteration. This is quite wasteful given that we do not need to transfer ownership of the allocated buffer outside of the loop. Refactor the code to instead reuse the buffers to reduce the number of allocations we need to do. Note that we do not have to manually reset the buffer because `stack_filename()` does this for us already. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
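The pattern, sketched:

    struct strbuf table_path = STRBUF_INIT;
    size_t i;

    for (i = 0; i < names_len; i++) {
        /* stack_filename() resets the buffer before writing to it */
        stack_filename(&table_path, st, names[i]);
        /* ... inspect the table at table_path.buf ... */
    }

    strbuf_release(&table_path); /* one release instead of one per round */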
2023-12-11reftable/stack: perform auto-compaction with transactional interfacePatrick Steinhardt1-0/+6
Whenever updating references or reflog entries in the reftable stack, we need to add a new table to the stack, thus growing the stack's length by one. The stack can grow to become quite long rather quickly, leading to performance issues when trying to read records. But besides performance issues, this can also lead to exhaustion of file descriptors very rapidly as every single table requires a separate descriptor when opening the stack. While git-pack-refs(1) fixes this issue for us by merging the tables, it runs too irregularly to keep the length of the stack within reasonable limits. This is why the reftable stack has an auto-compaction mechanism: `reftable_stack_add()` will call `reftable_stack_auto_compact()` after it has added the new table, which will auto-compact the stack as required. But while this logic works alright for `reftable_stack_add()`, we do not do the same in `reftable_addition_commit()`, which is the transactional equivalent to the former function that allows us to write multiple updates to the stack atomically. Consequently, we will easily run into file descriptor exhaustion in code paths that use many separate transactions like e.g. non-atomic fetches. Fix this issue by calling `reftable_stack_auto_compact()`. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
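In outline (sketch; the helper name is invented for illustration):

    int reftable_addition_commit(struct reftable_addition *add)
    {
        int err = commit_new_tables_list(add); /* hypothetical helper */
        if (err < 0)
            goto done;

        /* mirror reftable_stack_add(): keep the stack length bounded */
        err = reftable_stack_auto_compact(add->stack);
    done:
        return err;
    }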
2023-12-11reftable: handle interrupted writesPatrick Steinhardt1-3/+3
There are calls to write(3P) where we don't properly handle interrupts. Convert them to use `write_in_full()`. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
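Sketch of the conversion:

    /* before: a single write(3P) may be interrupted or come up short */
    if (write(fd, buf.buf, buf.len) != buf.len)
        return REFTABLE_IO_ERROR;

    /* after: write_in_full() restarts on EINTR and loops on short writes */
    if (write_in_full(fd, buf.buf, buf.len) < 0)
        return REFTABLE_IO_ERROR;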
2023-12-11reftable: handle interrupted readsPatrick Steinhardt1-1/+1
There are calls to pread(3P) and read(3P) where we don't properly handle interrupts. Convert them to use `pread_in_full()` and `read_in_full()`, respectively. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-02-16Merge branch 'hn/reftable-coverity-fixes'Junio C Hamano1-5/+5
Problems identified by Coverity in the reftable code have been corrected. * hn/reftable-coverity-fixes: reftable: add print functions to the record types reftable: make reftable_record a tagged union reftable: remove outdated file reftable.c reftable: implement record equality generically reftable: make reftable-record.h function signatures const correct reftable: handle null refnames in reftable_ref_record_equal reftable: drop stray printf in readwrite_test reftable: order unittests by complexity reftable: all xxx_free() functions accept NULL arguments reftable: fix resource warning reftable: ignore remove() return value in stack_test.c reftable: check reftable_stack_auto_compact() return value reftable: fix resource leak blocksource.c reftable: fix resource leak in block.c error path reftable: fix OOB stack write in print functions
2022-01-20reftable: fix resource warningHan-Wen Nienhuys1-5/+5
This would trigger in the unlikely event that we are compacting, and the next available file handle is 0. Signed-off-by: Han-Wen Nienhuys <hanwen@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-12-23reftable: support preset file mode for writingHan-Wen Nienhuys1-6/+24
Create files with mode 0666, so umask works as intended. Provides an override, which is useful to support shared repos (test t1301-shared-repo.sh). Signed-off-by: Han-Wen Nienhuys <hanwen@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
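Conceptually (sketch; `default_permissions` is assumed to live in the reftable write options):

    /* 0666 leaves the effective mode to the user's umask ... */
    fd = open(path, O_CREAT | O_EXCL | O_WRONLY, 0666);

    /* ... while shared repositories can force a specific mode */
    if (opts->default_permissions &&
        chmod(path, opts->default_permissions) < 0)
        return REFTABLE_IO_ERROR;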
2021-10-08reftable: implement stack, a mutable database of reftable files.Han-Wen Nienhuys1-0/+1396
Signed-off-by: Han-Wen Nienhuys <hanwen@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>