* 'fr_v2.52' of github.com:jnavila/git:
l10n: fr: version 2.52
|
|
* 'l10n-ga-2.52' of github.com:aindriu80/git-po:
l10n: ga.po: Update Irish translation for Git 2.52
|
|
* 'master' of github.com:nafmo/git-l10n-sv:
l10n: sv.po: Update Swedish translation
|
|
Reviewed-by: hms5232 <hms5232@hhming.moe>
Co-authored-by: Lumynous <lumynou5.tw@gmail.com>
Signed-off-by: Yi-Jyun Pan <pan93412@gmail.com>
|
|
Signed-off-by: Vũ Tiến Hưng <newcomerminecraft@gmail.com>
|
|
Signed-off-by: Emir SARI <emir_sari@icloud.com>
|
|
`--branch` and `--long` refer to git-status(1) options but they don’t tell us
what `short-format` and `long-format` are, respectively. And `--null`
mentions “status” but does not link to the command.
Refer to git-config(1) on `--branch` like `--short` does.
`long-format` is the git-status(1) output. So we can just say that
directly.
Replace “status” with a `linkgit` on `--null`.
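For readers unfamiliar with the markup, Git's documentation refers to a
command with the `linkgit` AsciiDoc macro; the reference added here takes
roughly this form (illustrative only, not the exact patch text):

    linkgit:git-status[1]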
Signed-off-by: Kristoffer Haugsbakk <code@khaugsbakk.name>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
git-credential-osxkeychain skips storing a credential if its "get"
action sets "state[]=osxkeychain:seen=1". This behavior was introduced
in e1ab45b2 (osxkeychain: state to skip unnecessary store operations,
2024-05-15), which appeared in v2.46.
However, this state[] persists even if a credential returned by
"git-credential-osxkeychain get" is invalid and a subsequent helper's
"get" operation returns a valid credential. Another subsequent helper
(such as [1]) may expect git-credential-osxkeychain to store the valid
credential, but the "store" operation is incorrectly skipped because it
only checks "state[]=osxkeychain:seen=1".
To solve this issue, "state[]=osxkeychain:seen" needs to contain enough
information to identify whether the current "store" input matches the
output from the previous "get" operation (and not a credential from
another helper).
Set "state[]=osxkeychain:seen" to a value encoding the credential output
by "get", and compare it with a value encoding the credential input by
"store".
[1]: https://github.com/hickford/git-credential-oauth
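A minimal sketch of the idea, with hypothetical helper names rather than
the actual contrib/credential/osxkeychain code:

    #include <stdio.h>
    #include <string.h>

    /* Encode the credential returned by "get" into the value stored as
     * "state[]=osxkeychain:seen=<value>". */
    static void encode_seen(char *buf, size_t len, const char *protocol,
                            const char *host, const char *username)
    {
            snprintf(buf, len, "%s://%s@%s",
                     protocol ? protocol : "",
                     username ? username : "",
                     host ? host : "");
    }

    /* In the "store" action: skip only if the credential being stored is
     * the one this helper itself returned earlier. */
    static int should_skip_store(const char *seen_state, const char *protocol,
                                 const char *host, const char *username)
    {
            char current[1024];
            encode_seen(current, sizeof(current), protocol, host, username);
            return seen_state && !strcmp(seen_state, current);
    }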
Reported-by: Petter Sælen <petter@saelen.eu>
Helped-by: Junio C Hamano <gitster@pobox.com>
Helped-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Koji Nakamaru <koji.nakamaru@gree.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Now "git diff --check" and "git apply --whitespace=warn/fix" learned
incomplete line is a whitespace error, enable them for this project
to prevent patches to add new incomplete lines to our source to both
code and documentation files.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Introduced via aea86cf00f (The nineteenth batch, 2025-10-14).
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Update following components:
- add-patch.c
- builtin/bisect.c
- builtin/describe.c
- builtin/fast-export.c
- builtin/fast-import.c
- builtin/fetch.c
- builtin/for-each-ref.c
- builtin/gc.c
- builtin/log.c
- builtin/pack-refs.c
- builtin/range-diff.c
- builtin/reflog.c
- builtin/refs.c
- builtin/remote.c
- builtin/repo.c
- builtin/sparse-checkout.c
- command-list.h
- config.c
- diff-lib.c
- diff.c
- gpg-interface.c
- midx-write.c
- promisor-remote.c
- range-diff.c
- refs.c
- refs/files-backend.c
- refs/reftable-backend.c
- remote.c
- usage.c
- git-send-email.perl
Translate following new components:
- builtin/last-modified.c
- http.h
Signed-off-by: Bagas Sanjaya <bagasdotme@gmail.com>
|
|
Teach "git diff" to highlight "\ No newline at end of file" message
as a whitespace error when the incomplete-line whitespace error class
is in effect. Thanks to the previous refactoring of the complete-rewrite
code path, we can do this in a single place.
Unlike whitespace errors in the payload, where we need to annotate the
offending line in-line (possibly using colors), here we already have a
dedicated line that can serve as the error message, so paint it as a
whitespace error message.
Also teach "git diff --check" to notice incomplete lines as whitespace
errors and report them when the incomplete-line whitespace error class
is in effect.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The final line of a file that lacks the terminating newline at its
end is called an incomplete line. In general they are frowned upon
for many reasons (imagine concatenating two files with "cat A B" and
what happens when A ends in an incomplete line, for example), and
text-oriented tools often mishandle such a line.
Implement checks in "git apply" for incomplete lines, which is off
by default for backward compatibility's sake, so that "git apply
--whitespace={fix,warn,error}" can notice, warn against, and fix
them.
As one of the new tests shows, if you modify contents on an
incomplete line in the original and leave the resulting line
incomplete, it is still considered a whitespace error, the reasoning
being that "you'd better fix it while at it if you are making a
change on an incomplete line anyway", which may be controversial.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Reserve a few more bits in the diff flags word to be used for future
whitespace rules. Add WS_INCOMPLETE_LINE without implementing the
behaviour (yet).
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
A patch file represents the incomplete line at the end of the file
with two lines, one that is the usual "context" with " " as the
first letter, "added" with "+" as the first letter, or "removed"
with "-" as the first letter that shows the content of the line,
plus an extra "\ No newline at the end of file" line that comes
immediately after it.
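For illustration, a made-up hunk in which a one-line file loses its
final newline looks like this:

    @@ -1 +1 @@
    -old content
    +new content
    \ No newline at end of file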
Ever since the apply machinery was written, it has parsed the
"\ No newline at end of file" line independently, without even knowing
which line the incompleteness applies to, simply because it does not
remember what the previous line was.
This poses a problem if we want to check and warn on an incomplete
line. Revamp the code that parses a fragment to actually drop the
'\n' that terminates a line in the incoming patch file, so that the
check_whitespace() calls made from that code path actually see an
incomplete line as incomplete.
Note that the result of this parsing is not directly used by the
code path that applies the patch. The apply_one_fragment() function
already checks whether each patch line it handles is followed by a
line that begins with a backslash, and if so drops the newline at the
end of the current line it is looking at. In a sense, this patch
harmonizes the behaviour of the parsing side with what is already done
on the application side.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The diff_symbol based output framework uses one DIFF_SYMBOL_* enum
value per kind of output line of "git diff", each of which corresponds
to one output line from the xdiff machinery used internally. Most
notably, DIFF_SYMBOL_PLUS and DIFF_SYMBOL_MINUS that correspond to
"+" and "-" lines are designed to always take a complete line, even
if the output from xdiff machinery may produce "\ No newline at the
end of file" immediately after them.
But this is not true in the rewrite-diff codepath, which completely
bypasses the xdiff machinery. Since the code path feeds the bytes
directly from the payload to the output routines, the output layer
has to deal with an incomplete line with DIFF_SYMBOL_PLUS and
DIFF_SYMBOL_MINUS, which never would see an incomplete line in the
normal code paths. This lack of a final newline is compensated for by
an ugly hack: a fabricated DIFF_SYMBOL_NO_LF_EOF token injects an extra
newline into the output to simulate output coming from the xdiff
machinery.
Revamp the way the complete-rewrite code path feeds the lines to the
output layer by treating the last line of the pre/post image specially
when it is an incomplete line.
This lets us remove the DIFF_SYMBOL_NO_LF_EOF hack and use the usual
DIFF_SYMBOL_CONTEXT_INCOMPLETE code path, which will later learn how
to handle whitespace errors.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Everybody else, except for emit_rewrite_lines(), calls the
emit_callback data ecbdata. Make sure we call the same thing by
the same name for consistency.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Create a helper function that reacts to "\ No newline at the end of
file" in preparation for unifying the incomplete line handling in
the code path that handles xdiff output and the code path that
bypasses xdiff and produces a complete-rewrite patch.
Currently the output from the DIFF_SYMBOL_CONTEXT_INCOMPLETE case
still (ab)uses the same code as what is used for context lines, but
that would change in a later step where we introduce support to treat
an incomplete line as a whitespace error.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The "\ No newline at the end of the file" can come after any of the
"-" (deleted preimage line), " " (unchanged line), or "+" (added
postimage line). In later steps in this series, we will start
treating a change that makes a file end in an incomplete line
as a whitespace error, and we would need to know what the previous
line was when we react to "\ No newline" in the diff output. If
the previous line was a context (i.e., unchanged) line, the file
lacked the final newline before the change, and the change did not
touch that line and left it still incomplete, so we do not want to
warn in such a case.
Teach fn_out_consume() function to keep track of what the previous
line was, and prepare an otherwise empty switch statement to let us
react differently to "\ No newline" based on that.
Note that there is an existing curiosity (read: likely to be a bug)
in the code that increments the line number in the preimage file every
time it sees a line with "\ No newline" on it, regardless of what
the previous line was. I left it as-is, because it does not affect
the main theme of this series, and more importantly, I do not think
it matters, as these numbers are used only to be compared with
blank_at_eof_in_{pre,post}image to issue a warning when more empty
lines were added at the end, but by definition, after we see
"\ No newline at end of file" for an added line, we will not see
another added line for the file.
An independent audit to ensure that this curious increment can be
safely removed would make a good #leftoverbits clean-up (we may even
find code that decrements this counter, or over-increments the other
quantity it is compared with, to compensate for the effect of this
curious increment and hide the bug; in that case we may also need to
remove that code).
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The suppress-blank-empty feature abused the CONTEXT_INCOMPLETE
symbol that was meant to be used only for the "\ No newline at end
of file" code path.
The intent of the feature was to take a context line we receive from
the xdiff machinery (which always uses ' ' for context lines, even an
empty one) and spit it out as a truly empty line.
Perform such a conversion very locally, where a line from xdiff
that begins with ' ' is handled for output; there are many checks
before control reaches that place that look at the first letter
of the diff output line to see if it is a context line, and having
each of them also check for '\n' and treat it as a special case is
error prone.
In order to catch similar hacks in the future, make sure the code
path that is meant for "\ No newline" case checks the first byte is
indeed a backslash.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Apply the simple rule: if you need {} in one arm of the if/else
if/else... cascade, have {} in all of them.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
A comment in diff.c claimed that bits up to 12th (counting from 0th)
are whitespace rules, and 13th thru 15th are for new/old/context,
but it turns out it was miscounting. Correct them, and clarify
where the whitespace rule bits come from in the comment. Extend bit
assignment comments to cover bits used for color-moved, which
weren't described.
Also update the way these bit constants are defined to use (1 << N)
notation, instead of octal constants, as it tends to make it easier
to notice a breakage like this.
Sprinkle a few blank lines between logically distinct groups of CPP
macro definitions to make them easier to read.
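As a generic illustration (made-up macro names, not the actual diff.c
definitions), compare how easily the bit position can be read off:

    /* Octal: the reader has to decode 010000 to find the bit. */
    #define EXAMPLE_FLAG_OCTAL 010000

    /* Shift: the bit position (12) is spelled out directly. */
    #define EXAMPLE_FLAG_SHIFT (1 << 12)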
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Git very often uses the terms "object", "reference", or "index" in its
documentation.
However, it's hard to find a clear explanation of these terms and how
they relate to each other in the documentation. The closest candidates
currently are:
1. `gitglossary`. This makes a good effort, but it's an alphabetically
ordered dictionary and a dictionary is not a good way to learn
concepts. You have to jump around too much and it's not possible to
present the concepts in the order that they should be explained.
2. `gitcore-tutorial`. This explains how to use the "core" Git commands.
This is a nice document to have, but it's not necessary to learn how
`update-index` works to understand Git's data model, and we should
not be requiring users to learn how to use the "plumbing" commands
if they want to learn what the term "index" or "object" means.
3. `gitrepository-layout`. This is a great resource, but it includes a
lot of information about configuration and internal implementation
details which are not related to the data model. It also does
not explain how commits work.
The result of this is that Git users (even users who have been using
Git for 15+ years) struggle to read the documentation because they don't
know what the core terms mean, and it's not possible to add links
to help them learn more.
Add an explanation of Git's data model. Some choices I've made in
deciding what "core data model" means:
1. Omit pseudorefs like `FETCH_HEAD`, because it's not clear to me
if those are intended to be user facing or if they're more like
internal implementation details.
2. Don't talk about submodules other than by mentioning how they
relate to trees. This is because Git has a lot of special features,
and explaining how they all work exhaustively could quickly go
down a rabbit hole which would make this document less useful for
understanding Git's core behaviour.
3. Don't discuss the structure of a commit message
(first line, trailers etc).
4. Don't mention configuration.
5. Don't mention the `.git` directory, to avoid getting too much into
implementation details.
Signed-off-by: Julia Evans <julia@jvns.ca>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
"git last-modified" was optimized by narrowing the set of paths to
follow as it dug deeper in the history.
* tc/last-modified-active-paths-optimization:
last-modified: implement faster algorithm
|
|
Given a set of attribute macros like:
[attr]a1 a2
[attr]a2 a3
...
[attr]a300000 -text
file a1
expanding the attributes for "file" requires expanding "a1" to "a2",
"a2" to "a3", and so on until hitting a non-macro expansion ("-text", in
this case). We implement this via recursion: fill_one() calls
macroexpand_one(), which then recurses back to fill_one(). As a result,
very deep macro chains like the one above can run out of stack space and
cause us to segfault.
The required stack space is fairly small; I needed on the order of
200,000 entries to get a segfault on Linux. So it's unlikely anybody
would hit this accidentally, leaving only malicious inputs. There you
can easily construct a repo which will segfault on clone (we look at
attributes during the checkout step, but you'd see the same trying to do
other operations, like diff in a bare repo). It's mostly harmless, since
anybody constructing such a repo is only preventing victims from cloning
their evil garbage, but it could be a nuisance for hosting sites.
One option to prevent this is to limit the depth of recursion we'll
allow. This is conceptually easy to implement, but it raises other
questions: what should the limit be, and do we need a configuration knob
for it?
The recursion here is simple enough that we can avoid those questions by
just converting it to iteration instead. Rather than iterate over the
states of a match_attr in fill_one(), we'll put them all in a queue, and
the expansion of each can add to the queue rather than recursing. Note
that this is a LIFO queue in order to keep the same depth-first order we
did with the recursive implementation. I've avoided using the word
"stack" in the code because the term is already heavily used to refer to
the stack of .gitattributes files that matches the tree structure of the
repository.
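A minimal, self-contained sketch of such a recursion-to-iteration
conversion (generic code with stand-in types, not the actual attr.c
data structures; error handling omitted for brevity):

    #include <stdlib.h>

    struct macro { int nr_states; int *states; };

    /* Stand-ins for the real fill/expand logic. */
    extern struct macro *lookup_macro(int state);
    extern void apply_state(int state);

    static void push(int **todo, int *nr, int *alloc, int state)
    {
            if (*nr == *alloc) {
                    *alloc *= 2;
                    *todo = realloc(*todo, *alloc * sizeof(**todo));
            }
            (*todo)[(*nr)++] = state;
    }

    /* Expand macros from an explicit LIFO queue instead of recursing, so
     * arbitrarily deep macro chains cannot overflow the call stack. */
    static void expand_iteratively(struct macro *start)
    {
            int alloc = 16, nr = 0;
            int *todo = malloc(alloc * sizeof(*todo));

            /* Seed in reverse so popping preserves depth-first order. */
            for (int i = start->nr_states - 1; i >= 0; i--)
                    push(&todo, &nr, &alloc, start->states[i]);

            while (nr > 0) {
                    int state = todo[--nr];         /* pop (LIFO) */
                    struct macro *m = lookup_macro(state);

                    apply_state(state);
                    if (!m)
                            continue;               /* terminal, nothing to expand */
                    for (int i = m->nr_states - 1; i >= 0; i--)
                            push(&todo, &nr, &alloc, m->states[i]);
            }
            free(todo);
    }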
The test uses a limited stack size so we can trigger the problem with a
much smaller input than the one shown above. The value here (3000) is
enough to trigger the issue on my x86_64 Linux machine.
Reported-by: Ben Stav <benstav@miggo.io>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Building "git contacts" script (in contrib/) left the resulting
file unexecutable, which has been corrected.
* dk/make-git-contacts-executable:
perl: also mark git-contacts executable
|
|
The build procedure based on meson learned to allow builders to
specify the directory to install HTML documents.
* dk/meson-html-dir:
meson: make GIT_HTML_PATH configurable
|
|
Build procedure for Wincred credential helper has been updated.
* tu/credential-wincred-makefile-update:
wincred: align Makefile with other Makefiles in contrib
|
|
Ever since 14f9e128 (Define the project whitespace policy,
2008-02-10) added the whitespace rules to .gitattributes, we spelled
the most general rule like so:
* whitespace=!indent,trail,space
in the top-level .gitattributes file. The intent of this line was
described in the commit log message:
- Unless otherwise specified, indent with SP that could be
replaced with HT are not "bad". But SP before HT in the
indent is "bad", and trailing whitespaces are "bad".
It clearly wanted to disable indent-with-non-tab, so !indent is most
likely a misspelt form of '-indent'. Because indent-with-non-tab
has never been enabled by default, by luck this was not causing any
ill effect.
We could either remove "!indent", or spell it "-indent". The
immediate effect would be the same. It would only start to make a
difference when/if we enable indent-with-non-tab by default in
future versions of Git.
Let's take the former option to remove "!indent" from the list. We
would feel the effect first-hand ourselves before anybody else if we
ever decide to change the built-in default whitespace rules, which
would be hidden from us if we decide to rewrite it to "-indent"
instead.
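In other words, the top-level .gitattributes line changes like this:

    -* whitespace=!indent,trail,space
    +* whitespace=trail,space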
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Detecting renames and copies improves diff's output. This effort is
wasted if we don't show any. Disable detection in that case.
This actually fixes the error code when using the options --cached,
--find-copies-harder, --no-ext-diff and --quiet together:
run_diff_index() indirectly calls diff-lib.c::show_modified(), which
queues even non-modified entries using diff_change() because we need
them for copy detection. diff_change() sets flags.has_changes, though,
which causes diff_can_quit_early() to declare we're done after seeing
only the very first entry -- way too soon.
Using --cached, --find-copies-harder and --quiet together without
--no-ext-diff was not affected even before, as it causes the flag
flags.diff_from_contents to be set, which disables the optimization
in a different way.
Reported-by: D. Ben Knoble <ben.knoble@gmail.com>
Suggested-by: Phillip Wood <phillip.wood@dunelm.org.uk>
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The 'git-maintenance(1)' command provides tooling to run maintenance
tasks over Git repositories. The 'run' subcommand, as the name suggests,
runs the maintenance tasks. When used with the '--auto' flag, it uses
heuristics to determine if the required thresholds are met for running
said maintenance tasks.
There is, however, no way to gain insight into these heuristics on
their own; the checks are tied to actually running the tasks.
Add a new 'is-needed' subcommand to 'git-maintenance(1)' which allows
users to simply check whether running maintenance is needed, without
performing it. Ideally it should be used with the '--auto' flag,
which lets users check whether the required thresholds are met. The
subcommand also supports the '--task' flag, which can be used to check
specific maintenance tasks.
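For example (illustrative invocations; the exact output is not shown
here):

    $ git maintenance is-needed --auto
    $ git maintenance is-needed --auto --task=pack-refs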
While adding the respective tests in 't/t7900-maintenance.sh', remove a
duplicate of the test: 'worktree-prune task with --auto honors
maintenance.worktree-prune.auto'.
Signed-off-by: Karthik Nayak <karthik.188@gmail.com>
Acked-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The 'git-maintenance(1)' command supports an '--auto' flag. Using the
flag ensures that maintenance tasks run only if certain thresholds are
met. The heuristic is defined on a task level, wherein each task defines
an 'auto_condition', which states if the task should be run.
The 'pack-refs' task is hard-coded to return 1 as:
1. There was never a way to check if the reference backend needs to be
optimized without actually performing the optimization.
2. We can pass in the '--auto' flag to 'git-pack-refs(1)' which would
optimize based on heuristics.
The previous commit added a `refs_optimize_required()` function, which
can be used to check if a reference backend requires optimization. Use
this within `pack_refs_condition()`.
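Sketched out, the change amounts to something like this (signatures
simplified and assumed; the real code in builtin/gc.c differs in its
details):

    static int pack_refs_condition(void)
    {
            /* Instead of unconditionally returning 1, ask the reference
             * backend whether optimization would actually do anything. */
            return refs_optimize_required(get_main_ref_store(the_repository));
    }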
This allows us to add a 'git maintenance is-needed' subcommand which can
notify the user if maintenance is needed without actually performing the
optimization. Without this change, the reference backend would always
state that optimization is needed.
Since we import 'revision.h', we need to remove the definition for
'SEEN' which is duplicated in the included header.
Signed-off-by: Karthik Nayak <karthik.188@gmail.com>
Acked-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
To allow users of the refs namespace to check if the reference backend
requires optimization, add a new `optimize_required` field to
`struct ref_storage_be`. This field is of type `optimize_required_fn`,
which is also introduced in this commit.
Modify the debug, files, packed and reftable backends to implement this
field. A following commit will expose this via 'git pack-refs' and 'git
refs optimize'.
Signed-off-by: Karthik Nayak <karthik.188@gmail.com>
Acked-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The reftable backend performs auto-compaction as part of its regular
flow, which is required to keep the number of tables that make up a
stack at bay. This allows it to stay optimized.
Compaction can also be triggered voluntarily by the user via the 'git
pack-refs' or the 'git refs optimize' command. However, currently there
is no way for the user to check if optimization is required without
actually performing it.
Extract out the heuristics logic from 'reftable_stack_auto_compact()'
into an internal function 'update_segment_if_compaction_required()'.
Then use this to add and expose `reftable_stack_compaction_required()`
which will allow users to check if the reftable backend can be
optimized.
Signed-off-by: Karthik Nayak <karthik.188@gmail.com>
Acked-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The `stack_table_sizes_for_compaction()` function returns individual
sizes of each reftable table. This function is only called by
`reftable_stack_auto_compact()` to decide which tables need to be
compacted, if any.
Modify the function to directly return the segments, which avoids the
extra step of receiving the sizes only to pass them to
`suggest_compaction_segment()`.
A future commit will also add functionality for checking whether
auto-compaction is necessary without performing it. This change allows
the code to be reused in that context.
Signed-off-by: Karthik Nayak <karthik.188@gmail.com>
Acked-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Refreshes the Irish translation for Git 2.52, including new strings and
consistency improvements. Verified with `git-po-helper check`.
Signed-off-by: Aindriú Mac Giolla Eoin <aindriu80@gmail.com>
|
|
Signed-off-by: Alexander Shopov <ash@kambanaria.org>
|
|
Signed-off-by: Jean-Noël Avila <jn.avila@free.fr>
|
|
Signed-off-by: Peter Krefting <peter@softwolves.pp.se>
|
|
A recently added configuration variable and command line option
syntax ":(optional)" for values that are of filename type
behaved inconsistently on an empty file (configuration took it
happily, while the command line option pretended it did not
exist), which has been corrected.
* dk/parseopt-optional-filename-fixes:
parseopt: remove unreachable code
parseopt: restore const qualifier to parsed filename
config: use boolean type for a simple flag
parseopt: use boolean type for a simple flag
doc: clarify command equivalence comment
parseopt: fix :(optional) at command line to only ignore missing files
|
|
Messages from fast-import/export are now marked for i18n.
* cc/fast-import-export-i18n-cleanup:
gpg-interface: mark a string for translation
fast-import: mark strings for translation
fast-export: mark strings for translation
gpg-interface: use left shift to define GPG_VERIFY_*
gpg-interface: simplify ssh fingerprint parsing
|
|
CI updates.
* js/ci-github-actions-update:
ci: update {download,upload}-artifact Action versions
|
|
Message fix.
* pk/reflog-migrate-message-fix:
refs: add missing space in messages
|
|
Our Bencher dashboards [1] have recently alerted us about a bunch of
performance regressions when writing references, specifically with the
reftable backend. There is a 3x regression when writing many refs with
preexisting refs in the reftable format, and a 10x regression when
migrating refs between backends in either of the formats.
Bisecting the issue lands us at 6ec4c0b45b (refs: don't store peeled
object IDs for invalid tags, 2025-10-23). The gist of the commit is that
we may end up storing peeled objects in both reftables and packed-refs
for corrupted tags, where the claimed tagged object type is different
than the actual tagged object type. This will then cause us to create
the `struct object *` with a wrong type, as well, and obviously nothing
good comes out of that.
The fix for this issue was to introduce a new flag to `peel_object()`
that causes us to verify the tagged object's type before writing it into
the refdb -- if the tag is corrupt, we skip writing the peeled value.
To verify whether the peeled value is correct we have to look up the
object type via the ODB and compare the actual type with the claimed
type, and that additional object lookup is costly.
This also explains why we see the regression only when writing refs with
the reftable backend, but we see the regression with both backends when
migrating refs:
- The reftable backend knows to store peeled values in the new table
immediately, so it has to try and peel each ref it's about to write
to the transaction. So the performance regression is visible for all
writes.
- The files backend only stores peeled values when writing the
packed-refs file, so it wouldn't hit the performance regression for
normal writes. But on ref migrations we know to write all new values
into the packed-refs file immediately, and that's why we see the
regression for both backends there.
Taking a step back though reveals an oddity in the new verification
logic: we not only verify the _tagged_ object's type, but we also verify
the type of the tag itself. But this isn't really needed, as we wouldn't
hit the bug in such a case anyway, as we only hit the issue with corrupt
tags claiming an invalid type for the tagged object.
The consequence of this is that we now started to look up the target
object of every single reference we're about to write, regardless of
whether it even is a tag or not. And that is of course quite costly.
Fix the issue by only verifying the type of the tagged objects. This
means that we of course still have a performance hit for actual tags.
But this only happens for writes anyway, and I'd claim it's preferable
to not store corrupted data in the refdb than to be fast here. Rename
the flag accordingly to clarify that we only verify the tagged object's
type.
This fix brings performance back to previous levels:
Benchmark 1: baseline
Time (mean ± σ): 46.0 ms ± 0.4 ms [User: 40.0 ms, System: 5.7 ms]
Range (min … max): 45.0 ms … 47.1 ms 54 runs
Benchmark 2: regression
Time (mean ± σ): 140.2 ms ± 1.3 ms [User: 77.5 ms, System: 60.5 ms]
Range (min … max): 138.0 ms … 142.7 ms 20 runs
Benchmark 3: fix
Time (mean ± σ): 46.2 ms ± 0.4 ms [User: 40.2 ms, System: 5.7 ms]
Range (min … max): 45.0 ms … 47.3 ms 55 runs
Summary
update-ref: baseline
1.00 ± 0.01 times faster than fix
3.05 ± 0.04 times faster than regression
[1]: https://bencher.dev/perf/git/plots
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
* ps/ref-peeled-tags:
t7004: do not chdir around in the main process
ref-filter: fix stale parsed objects
ref-filter: parse objects on demand
ref-filter: detect broken tags when dereferencing them
refs: don't store peeled object IDs for invalid tags
object: add flag to `peel_object()` to verify object type
refs: drop infrastructure to peel via iterators
refs: drop `current_ref_iter` hack
builtin/show-ref: convert to use `reference_get_peeled_oid()`
ref-filter: propagate peeled object ID
upload-pack: convert to use `reference_get_peeled_oid()`
refs: expose peeled object ID via the iterator
refs: refactor reference status flags
refs: fully reset `struct ref_iterator::ref` on iteration
refs: introduce `.ref` field for the base iterator
refs: introduce wrapper struct for `each_ref_fn`
|
|
Bumps `actions/upload-artifact` from 4 to 5.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](https://github.com/actions/upload-artifact/compare/v4...v5)
Bumps `actions/download-artifact` from 5 to 6.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](https://github.com/actions/download-artifact/compare/v5...v6)
Originally-authored-by: dependabot[bot] <support@github.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
If a file is renamed between commits and an external diff is started
through gitk on the original or the renamed file name,
gitk is unable to open the renamed file in the external diff editor.
It fails to fetch the renamed file from git, because it fetches it
using its original path instead of the renamed path of the file.
Detect the rename and open the external diff with the original and
the renamed file instead of with no file (fetching the renamed file's
path and name from git), no matter whether the original or the renamed
file is selected in gitk.
Signed-off-by: Tobias Boesch <tobias.boesch@miele.com>
Signed-off-by: Johannes Sixt <j6t@kdbg.org>
|
|
Makefile-based builds can configure Git's internal HTML_PATH by defining
htmldir, which is useful for packagers that put documentation in
different locations. Gentoo, for example, uses version-suffixed
directories like ${prefix}/share/doc/git-2.51 and puts the HTML
documentation in an 'html' subdirectory of the same.
Propagate the same configuration knob to Meson-based builds so that
"git --html-path" on such systems can be configured to output the
correct directory.
Signed-off-by: D. Ben Knoble <ben.knoble+github@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
When installing git-contacts with Meson via -Dcontrib=contacts, the default
Perl generation fails to mark it executable. As a result, "git contacts"
reports "'contacts' is not a git command."
Unlike generate-script.sh, we aren't testing the basename here; so, glob
the script name in the case arm to match wherever the input comes from.
Signed-off-by: D. Ben Knoble <ben.knoble+github@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
* Replace $(LOADLIBES) because it has long been deprecated and is
used nowhere else in the git project.
* Use $(gitexecdir) instead of $(libexecdir) because config.mak defines
$(libexecdir) as $(prefix)/libexec, not as $(prefix)/libexec/git-core.
* Similar to other Makefiles, let the install target rule create
$(gitexecdir) to make sure the directory exists before copying the
executable, and also let it respect $(DESTDIR).
* Shuffle the lines for the default settings to align them with the
other Makefiles in contrib/credential.
* Define .PHONY for all special targets (all, install, clean).
Signed-off-by: Thomas Uhle <thomas.uhle@mailbox.tu-dresden.de>
Acked-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Update the documentation to clearly describe how the server responds when a
client sends an invalid or malformed `want` line during the HTTP protocol
exchange. The server includes the offending object name in its error message.
Signed-off-by: Queen Ediri Jessa <qjessa662@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
When a file is selected in the file list, the diff window scrolls to the
corresponding section. The administrative data needed for this purpose
is extracted from the 'rename from', 'rename to', and 'copy to' lines.
Escaped file names are unescaped for this purpose. However, the lines
shown in the diff window are left in the escaped form. This is not very
pleasing. Replace the escaped form by the unescaped form.
Add a section to treat the 'copy from' case.
Signed-off-by: Johannes Sixt <j6t@kdbg.org>
|
|
When 5de460a2cfdd (gitk: Refactor per-line part of getblobdiffline and
its support) moved the body of a loop into a separate function, several
'continue' statements were changed to 'return'. But one instance was
missed. Fix it now.
Signed-off-by: Johannes Sixt <j6t@kdbg.org>
|
|
Signed-off-by: Peter Krefting <peter@softwolves.pp.se>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The version of the macOS image used in GitHub CI has been updated to
macos-14, as the macos-13 image that we have been using got deprecated.
* jc/ci-use-macos-14:
GitHub CI: macos-13 images are no more
|
|
The help text and manual page of "git bisect" command have been
made consistent with each other.
* rz/t0450-bisect-doc-update:
bisect: update usage and docs to match each other
|
|
Add a configuration variable to control the default behavior of git replay
for updating references. This allows users who prefer the traditional
pipeline output to set it once in their config instead of passing
--ref-action=print with every command.
The config variable uses string values that mirror the behavior modes:
* replay.refAction = update (default): atomic ref updates
* replay.refAction = print: output commands for pipeline
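For example, a user who prefers the pipeline output can set it once
with:

    $ git config replay.refAction print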
Helped-by: Junio C Hamano <gitster@pobox.com>
Helped-by: Elijah Newren <newren@gmail.com>
Helped-by: Christian Couder <christian.couder@gmail.com>
Helped-by: Phillip Wood <phillip.wood123@gmail.com>
Signed-off-by: Siddharth Asthana <siddharthasthana31@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The git replay command currently outputs update commands that can be
piped to update-ref to achieve a rebase, e.g.
git replay --onto main topic1..topic2 | git update-ref --stdin
This separation had advantages for three special cases:
* it made testing easy (when state isn't modified from one step to
the next, you don't need to make temporary branches or have undo
commands, or try to track the changes)
* it provided a natural can-it-rebase-cleanly (and what would it
rebase to) capability without automatically updating refs, similar
to a --dry-run
* it provided a natural low-level tool for the suite of hash-object,
mktree, commit-tree, mktag, merge-tree, and update-ref, allowing
users to have another building block for experimentation and making
new tools
However, it should be noted that all three of these are somewhat
special cases; users, whether on the client or server side, would
almost certainly find it more ergonomic to simply have the updating
of refs be the default.
For server-side operations in particular, the pipeline architecture
creates process coordination overhead. Server implementations that need
to perform rebases atomically must maintain additional code to:
1. Spawn and manage a pipeline between git-replay and git-update-ref
2. Coordinate stdout/stderr streams across the pipe boundary
3. Handle partial failure states if the pipeline breaks mid-execution
4. Parse and validate the update-ref command output
Change the default behavior to update refs directly, and atomically (at
least to the extent supported by the refs backend in use). This
eliminates the process coordination overhead for the common case.
For users needing the traditional pipeline workflow, add a new
--ref-action=<mode> option that preserves the original behavior:
git replay --ref-action=print --onto main topic1..topic2 | git update-ref --stdin
The mode can be:
* update (default): Update refs directly using an atomic transaction
* print: Output update-ref commands for pipeline use
Test suite changes:
All existing tests that expected command output now use
--ref-action=print to preserve their original behavior. This keeps
the tests valid while allowing them to verify that the pipeline workflow
still works correctly.
New tests were added to verify:
- Default atomic behavior (no output, refs updated directly)
- Bare repository support (server-side use case)
- Equivalence between traditional pipeline and atomic updates
- Real atomicity using a lock file to verify all-or-nothing guarantee
- Test isolation using test_when_finished to clean up state
- Reflog messages include replay mode and target
A following commit will add a replay.refAction configuration
option for users who prefer the traditional pipeline output as their
default behavior.
Helped-by: Elijah Newren <newren@gmail.com>
Helped-by: Patrick Steinhardt <ps@pks.im>
Helped-by: Christian Couder <christian.couder@gmail.com>
Helped-by: Phillip Wood <phillip.wood123@gmail.com>
Signed-off-by: Siddharth Asthana <siddharthasthana31@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
In preparation for adding the --ref-action option, convert option
validation to use die_for_incompatible_opt2(). This helper provides
standardized error messages for mutually exclusive options.
The following commit introduces --ref-action which will be incompatible
with certain other options. Using die_for_incompatible_opt2() now means
that commit can cleanly add its validation using the same pattern,
keeping the validation logic consistent and maintainable.
This also aligns git-replay's option handling with how other Git commands
manage option conflicts, using the established die_for_incompatible_opt*()
helper family.
Signed-off-by: Siddharth Asthana <siddharthasthana31@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
As this image was deprecated on Sep 22nd, and will be dropped on Dec
4th, update these jobs to use macos-14 images instead.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
At this point in the code after running skip_prefix() on the
variable and receiving the result in the same variable, the contents
of the variable can never be NULL. The function either (1) updates
the variable to point at a later part of the string it originally
pointed at, or (2) leaves it intact if the string does not have the
prefix. (1) will never make the variable NULL, and (2) cannot be
the source of NULL, because the variable cannot be NULL before
calling skip_prefix(), which would die immediately by dereferencing
the NULL pointer in that case.
Helped-by: Phillip Wood <phillip.wood@dunelm.org.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
This was unintentionally dropped in ccfcaf399f (parseopt: values of
pathname type can be prefixed with :(optional), 2025-09-28). Notably,
continue dropping the const qualifier when free'ing value; see
4049b9cfc0 (fix const issues with some functions, 2007-10-16) or
83838d5c1b (cast variable in call to free() in builtin/diff.c and
submodule.c, 2011-11-06) for more details on why.
Suggested-by: Phillip Wood <phillip.wood@dunelm.org.uk>
Signed-off-by: D. Ben Knoble <ben.knoble+github@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Suggested-by: Phillip Wood <phillip.wood@dunelm.org.uk>
Signed-off-by: D. Ben Knoble <ben.knoble+github@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Suggested-by: Phillip Wood <phillip.wood@dunelm.org.uk>
Signed-off-by: D. Ben Knoble <ben.knoble+github@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Documentation of command parsing for :(optional) includes a terse
comment; expand it to be clearer to readers.
Suggested-by: Phillip Wood <phillip.wood@dunelm.org.uk>
Signed-off-by: D. Ben Knoble <ben.knoble+github@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Unlike the configuration option magic, the parseopt code also ignores
empty files: compare implementations from ccfcaf399f (parseopt: values
of pathname type can be prefixed with :(optional), 2025-09-28) and
749d6d166d (config: values of pathname type can be prefixed with
:(optional), 2025-09-28).
Unify the two by not ignoring empty files, which is less surprising and
matches the intended semantics from the first patch for config.
Suggested-by: Phillip Wood <phillip.wood@dunelm.org.uk>
Signed-off-by: D. Ben Knoble <ben.knoble+github@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The patterns used in the .gitignore files use backslash in the way
documented for fnmatch(3); document as such to reduce confusion.
* jk/doc-backslash-in-exclude:
doc: document backslash in gitignore patterns
|
|
Leakfix.
* jk/test-delete-gpgsig-leakfix:
test-tool: fix leak in delete-gpgsig command
|
|
Test fix.
* eb/t1016-hash-transition-fix:
t1016-compatObjectFormat: really freeze time for reproduciblity
|
|
Doc mark-up fix.
* kh/doc-checkout-markup-fix:
doc: git-checkout: fix placeholder markup
|
|
The "debug" ref-backend was missing a method implementation, which
has been corrected.
* xr/ref-debug-remove-on-disk:
refs: add missing remove_on_disk implementation for debug backend
|
|
The "MyFirstContribution" tutorial tells the reader how to send out
their patches; the section gained a hint to verify the message
reached the mailing list.
* qj/doc-my1stcontrib-email-verify:
MyFirstContribution: add note on confirming patches
|
|
Tests did not set up GNUPGHOME correctly, which has been fixed, but
this exposed some flaky tests in t1016 that need to be addressed
before this topic can move forward.
* tz/test-prepare-gnupghome:
t/lib-gpg: call prepare_gnupghome() in GPG2 prereq
t/lib-gpg: add prepare_gnupghome() to create GNUPGHOME dir
|
|
"git repo structure", a new command.
* jt/repo-structure:
builtin/repo: add progress meter for structure stats
builtin/repo: add keyvalue and nul format for structure stats
builtin/repo: add object counts in structure output
builtin/repo: introduce structure subcommand
ref-filter: export ref_kind_from_refname()
ref-filter: allow NULL filter pattern
builtin/repo: rename repo_info() to cmd_repo_info()
|
|
Contributed credential helpers (obviously in contrib/) now have a "cd
$there && make install" target.
* tu/credential-install:
contrib/credential: add install target
|
|
AI guidelines.
* cc/doc-submitting-patches-with-ai:
SubmittingPatches: add section about AI
|
|
* kn/refs-optim-cleanup:
t/pack-refs-tests: move the 'test_done' to callees
refs: rename 'pack_refs_opts' to 'refs_optimize_opts'
refs: move to using the '.optimize' functions
|
|
* ps/ref-peeled-tags: (23 commits)
t7004: do not chdir around in the main process
ref-filter: fix stale parsed objects
ref-filter: parse objects on demand
ref-filter: detect broken tags when dereferencing them
refs: don't store peeled object IDs for invalid tags
object: add flag to `peel_object()` to verify object type
refs: drop infrastructure to peel via iterators
refs: drop `current_ref_iter` hack
builtin/show-ref: convert to use `reference_get_peeled_oid()`
ref-filter: propagate peeled object ID
upload-pack: convert to use `reference_get_peeled_oid()`
refs: expose peeled object ID via the iterator
refs: refactor reference status flags
refs: fully reset `struct ref_iterator::ref` on iteration
refs: introduce `.ref` field for the base iterator
refs: introduce wrapper struct for `each_ref_fn`
builtin/repo: add progress meter for structure stats
builtin/repo: add keyvalue and nul format for structure stats
builtin/repo: add object counts in structure output
builtin/repo: introduce structure subcommand
...
|
|
In ac0bad0af4 (t0601: refactor tests to be shareable, 2025-09-19), we
refactored 't/t0601-reffiles-pack-refs.sh' to move all of the tests to
't/pack-refs-tests.sh', which became a common test suite which was also
used by 't/t1463-refs-optimize.sh'.
This also moved the 'test_done' directive to 't/pack-refs-tests.sh',
which inhibits additional tests from being added to either of the test
scripts. Let's move the directive back out to both test scripts, so
that we can add additional, specific tests to them. Also, the test flow
logic shouldn't be part of a test library that can be embedded in other
test scripts.
Signed-off-by: Karthik Nayak <karthik.188@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The previous commit removed all references to 'pack_refs()' within
the refs subsystem. Continue this cleanup by also renaming
'pack_refs_opts' to 'refs_optimize_opts' and the respective flags
accordingly. Keeping the naming consistent will make the code easier to
maintain.
Signed-off-by: Karthik Nayak <karthik.188@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The `struct ref_store` structure exposes two ways to optimize a
reference backend:
1. pack_refs
2. optimize
The former was specific to the 'files' + 'packed' refs backend. The
latter is more generic and covers all backends. While the naming is
different, both of these functions perform the same functionality.
Consolidate this code to only maintain the 'optimize' functions. Do this
by modifying the backends so that they implement the `optimize`
callback exclusively. All users of the refs subsystem already use
the 'optimize' function, so there are no changes needed on the caller
side. Finally, clean up all references to the 'pack_refs' field of the
structure and the code around it.
Signed-off-by: Karthik Nayak <karthik.188@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
* ps/ref-peeled-tags: (92 commits)
t7004: do not chdir around in the main process
ref-filter: fix stale parsed objects
ref-filter: parse objects on demand
ref-filter: detect broken tags when dereferencing them
refs: don't store peeled object IDs for invalid tags
object: add flag to `peel_object()` to verify object type
refs: drop infrastructure to peel via iterators
refs: drop `current_ref_iter` hack
builtin/show-ref: convert to use `reference_get_peeled_oid()`
ref-filter: propagate peeled object ID
upload-pack: convert to use `reference_get_peeled_oid()`
refs: expose peeled object ID via the iterator
refs: refactor reference status flags
refs: fully reset `struct ref_iterator::ref` on iteration
refs: introduce `.ref` field for the base iterator
refs: introduce wrapper struct for `each_ref_fn`
builtin/repo: add progress meter for structure stats
builtin/repo: add keyvalue and nul format for structure stats
builtin/repo: add object counts in structure output
builtin/repo: introduce structure subcommand
...
|
|
Move down to the no-contains subdirectory inside a subshell, just like
the previous step that created and used it does.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
In 054f5f457e (ref-filter: parse objects on demand, 2025-10-23) we have
started to skip parsing some objects in case we don't need to access
their values in the first place. This was done by introducing a new
member `struct expand_data::maybe_object` that gets populated on demand
via `get_or_parse_object()`.
This has led to a regression though where the object now gets reused
because we don't reset it properly. The `oi` structure is declared in
global scope, and there is no single place where we reset it before
invoking `get_object()`. The consequence is that the `maybe_object`
member doesn't get reset across calls, so subsequent calls will end up
reusing the same object.
This is only an issue for a subset of retrieved values, as not all of
the infrastructure ends up calling `get_or_parse_object()`. So the
effect is limited, which is probably why the issue wasn't detected
earlier.
Fix the issue by resetting `maybe_object` in `get_object()`.
Reported-by: Junio C Hamano <gitster@pobox.com>
Based-on-patch-by: Jeff King <peff@peff.net>
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
When formatting an arbitrary object we parse that object regardless of
whether or not we actually need any parsed data. In fact, many of the
atoms we have don't require any.
Refactor the code so that we parse the data on demand when we see an
atom that wants to access the objects. This leads to a small speedup,
for example in the Chromium repository with around 40000 refs:
Benchmark 1: for-each-ref --format='%(raw)' (HEAD~)
Time (mean ± σ): 388.7 ms ± 1.1 ms [User: 322.2 ms, System: 65.0 ms]
Range (min … max): 387.3 ms … 390.8 ms 10 runs
Benchmark 2: for-each-ref --format='%(raw)' (HEAD)
Time (mean ± σ): 344.7 ms ± 0.7 ms [User: 287.8 ms, System: 55.1 ms]
Range (min … max): 343.9 ms … 345.7 ms 10 runs
Summary
for-each-ref --format='%(raw)' (HEAD) ran
1.13 ± 0.00 times faster than for-each-ref --format='%(raw)' (HEAD~)
With this change, we now spend ~90% of the time decompressing objects,
which is almost as good as it gets regarding git-for-each-ref(1)'s own
infrastructure.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Users can ask git-for-each-ref(1) to peel tags and return information
about the tagged object by adding an asterisk to the format, for example
"%(*objectname)". If so, git-for-each-ref(1) peels that object to the
first non-tag object and then returns its values.
As mentioned in preceding commits, it can happen that the tagged object
type and the claimed object type differ, effectively resulting in a
corrupt tag. git-for-each-ref(1) would notice this mismatch, print an
error and then bail out when trying to peel the tag.
But we only notice this corruption in some very specific edge cases!
While we have a test in "t/for-each-ref-tests.sh" that verifies the
above scenario, this test is specifically crafted to detect the issue at
hand. Namely, we create two tags:
- One tag points to a specific object with the correct type.
- The other tag points to the *same* object with a different type.
The fact that both tags point to the same object is important here:
`peel_object()` wouldn't notice the corruption if the tagged objects
were different.
The root cause is that `peel_object()` calls `lookup_${type}()`
eventually, where the type is the same type declared in the tag object.
Consequently, when we have two tags pointing to the same object but with
different declared types we'll call two different lookup functions. The
first lookup will store the object with an unverified type A, whereas
the second lookup will try to look up the object with a different
unverified type B. And it is only now that we notice the discrepancy in
object types, even though type A could've already been the wrong type.
Fix the issue by verifying the object type in `populate_value()`. With
this change we'll also notice type mismatches when only dereferencing a
tag once.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Both the "files" and "reftable" backend store peeled object IDs for
references that point to tags:
- The "files" backend stores the value when packing refs, where each
peeled object ID is prefixed with "^".
- The "reftable" backend stores the value whenever writing a new
reference that points to a tag via a special ref record type.
Both of these backends use `peel_object()` to find the peeled object ID.
But as explained in the preceding commit, that function does not detect
the case where the tag's tagged object and its claimed type mismatch.
The consequence of storing these bogus peeled object IDs is that we're
less likely to detect such corruption in other parts of Git.
git-for-each-ref(1) for example does not notice anymore that the tag is
broken when using "--format=%(*objectname)" to dereference tags.
One could claim that this is good, because it still allows us to mostly
use the tag as intended. But the biggest problem here is that we now
have different behaviour for such a broken tag depending on whether or
not we have its peeled value in the refdb.
Fix the issue by verifying the object type when peeling the object. If
that verification fails we simply skip storing the peeled value in
either of the reference formats.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
When peeling a tag to a non-tag object we repeatedly call
`parse_object()` on the tagged object until we find the first object
that isn't a tag. While this feels sensible at first, there is a big
catch here: `parse_object()` doesn't actually verify the type of the
tagged object.
The relevant code path here eventually ends up in `parse_tag_buffer()`.
Here, we parse the various fields of the tag, including the "type". Once
we've figured out the type and the tagged object ID, we call one of the
`lookup_${type}()` functions for whatever type we have found. There are
two possible outcomes in the successful case:
1. The object is already part of our cached objects. In that case we
double-check whether the type we're trying to look up matches the
type that was cached.
2. The object is _not_ part of our cached objects. In that case, we
simply create a new object with the expected type, but we don't
parse that object.
In the first case we might notice type mismatches, but only in the case
where our cache has the object with the correct type. In the second
case, we'll blindly assume that the type is correct and then go with it.
We'll only notice that the type might be wrong when we try to parse the
object at a later point.
Now arguably, we could change `parse_tag_buffer()` to verify the tagged
object's type for us. But that would have the effect that such a tag
cannot be parsed at all anymore, and we have a small bunch of tests for
exactly this case that assert we still can open such tags. So this
change does not feel like something we can retroactively tighten, even
though one shouldn't ever hit such corrupted tags.
Instead, add a new `flags` parameter to `peel_object()` that allows the
caller to opt in to strict object verification. This will be wired up at
a subset of callsites over the next few commits.
Note that this change also inlines `deref_tag_noverify()`. There's only
been two callsites of that function, the one we're changing and one in
our test helpers. The latter callsite can trivially use `deref_tag()`
instead, so by inlining the function we avoid having to pass down the
flag.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Now that the peeled object ID gets propagated via the `struct reference`
there is no need anymore to call into the reference iterator itself to
dereference an object. Remove this infrastructure.
Most of the changes are straight-forward deletions of code. There is one
exception though in `refs/packed-backend.c::write_with_updates()`. Here
we stop peeling the iterator and instead just pass the peeled object ID
of that iterator directly.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
In preceding commits we have refactored all callers of
`peel_iterated_oid()` to instead use `reference_get_peeled_oid()`. This
allows us to thus get rid of the former function.
Getting rid of that function is nice, but even nicer is that this also
allows us to get rid of the `current_ref_iter` hack. This global
variable tracked the currently-active ref iterator so that we can use it
to peel an object ID. Now that the peeled object ID is propagated via
`struct reference` though we don't have to depend on this hack anymore,
which makes for a more robust and easier-to-understand infrastructure.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The git-show-ref(1) command has multiple different modes:
- It knows to show all references matching a pattern.
- It knows to list all references that are an exact match to whatever
the user has provided.
- It knows to check for reference existence.
The first two commands use mostly the same infrastructure to print the
references via `show_one()`. But while the former mode uses a proper
iterator and thus has a `struct reference` available in its context, the
latter calls `refs_read_ref()` and thus doesn't. Consequently, we cannot
easily use `reference_get_peeled_oid()` to print the peeled value.
Adapt the code so that we manually construct a `struct reference` when
verifying refs. We wouldn't ever have the peeled value available anyway
as we're not using an iterator here, so we can simply plug in the values
we _do_ have.
With this change we now have a `struct reference` available at both
callsites of `show_one()` and can thus pass it, which allows us to use
`reference_get_peeled_oid()` instead of `peel_iterated_oid()`.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
When queueing a reference in the "ref-filter" subsystem we end up
creating a new ref array item that contains the reference's info. One
bit of info that we always discard though is the peeled object ID, and
because of that we are forced to use `peel_iterated_oid()`.
Refactor the code to propagate the peeled object ID via the ref array,
if available. This allows us to manually peel tags without having to go
through the object database.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The `write_v0_ref()` callback is invoked from two callsites:
- Once via `send_ref()` which is a callback passed to
`for_each_namespaced_ref_1()` and `refs_head_ref_namespaced()`.
- Once manually to announce capabilities.
When sending references to the client we also send the peeled value of
tags. As we don't have a `struct reference` available in the second
case, we cannot easily peel by calling `reference_get_peeled_oid()`, but
we instead have to depend on global state via `peel_iterated_oid()`.
We do have a reference available though in the first case; it's only the
second case that keeps us from using `reference_get_peeled_oid()`. But
that second case only announces capabilities anyway, so we're not really
handling a reference at all here.
Adapt that case to construct a reference manually and pass that to
`write_v0_ref()`. Start to use `reference_get_peeled_oid()` now that we
always have a `struct reference` available.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Both the "files" and "reftable" backend are able to store peeled values
for tags in the respective formats. This allows for a more efficient
lookup of the target object of such a tag without having to manually
peel via the object database.
The infrastructure to access these peeled object IDs is somewhat funky
though. When iterating through objects, we store a pointer reference to
the current iterator in a global variable. The callbacks invoked by that
iterator are then expected to call `peel_iterated_oid()`, which checks
whether the globally-stored iterator's current reference refers to the
one handed into that function. If so, we ask the iterator to peel the
object, otherwise we manually peel the object via the object database.
Depending on global state like this is somewhat weird and also quite
fragile.
Introduce a new `struct reference::peeled_oid` field that can be
populated by the reference backends. This field can be accessed via a
new function `reference_get_peeled_oid()` that either uses that value,
if set, or alternatively peels via the ODB. With this change we don't
have to rely on global state anymore, but make the peeled object ID
available to the callback functions directly.
Adjust trivial callers that already have a `struct reference` available.
Remaining callers will be adjusted in subsequent commits.
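To illustrate the intended access pattern, a hedged sketch (the helper
name matches the description above; everything else is simplified):

    /* Simplified sketch; the real structure carries more fields. */
    struct reference {
            const char *name;
            struct object_id oid;
            struct object_id peeled_oid; /* null if the backend had none */
            unsigned flags;
    };

    int reference_get_peeled_oid(struct repository *repo,
                                 const struct reference *ref,
                                 struct object_id *peeled)
    {
            /* Prefer the value the ref backend has already stored. */
            if (!is_null_oid(&ref->peeled_oid)) {
                    oidcpy(peeled, &ref->peeled_oid);
                    return 0;
            }
            /* Fall back to peeling via the object database. */
            return peel_object(repo, &ref->oid, peeled, 0);
    }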
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The reference flags encode information like whether or not a reference
is a symbolic reference or whether it may be broken. This information is
stored in a `int flags` bitfield, which is in conflict with our modern
best practices; we tend to use an unsigned integer to store flags.
Change the type of the field to be `unsigned`. While at it, refactor the
individual flags to be part of an `enum` instead of using preprocessor
defines.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
With the introduction of the `struct ref_iterator::ref` field it now is
a whole lot easier to introduce new fields that become accessible to the
caller without having to adapt every single callsite. But there's a
downside: when a new field is introduced we always have to adapt all
backends to set that field.
This isn't something we can avoid in the general case: when the new
field is expected to be populated by all backends we of course cannot
avoid doing so. But new fields may be entirely optional, in which case
we'd still have such churn. And furthermore, it is very easy right now
to leak state from a previous iteration into the next iteration.
Address this issue by ensuring that the reference backends all fully
reset the field on every single iteration. This ensures that no state
from previous iterations can leak into the next one. And it ensures that
any newly introduced fields will be zeroed out by default.
Note that we don't have to explicitly adapt the "files" backend, as it
uses the `cache_ref_iterator` internally. Furthermore, other "wrapping"
iterators like for example the `prefix_ref_iterator` copy around the
whole reference, so these don't need to be adapted either.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The base iterator has a couple of fields that track the name, target,
object ID and flags for the current reference. Due to this design we
have to create a new `struct reference` whenever we want to hand over
that reference to the callback function, which is tedious and not very
efficient.
Convert the structure to instead contain a `struct reference` as member.
This member is expected to be populated by the implementations of the
iterator and is handed over to the callback directly.
While at it, simplify `should_pack_ref()` to take a `struct reference`
directly instead of passing its respective fields.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The `each_ref_fn` callback function type is used across our code base
for several different functions that iterate through references. There's
a bunch of callbacks implementing this type, which makes any changes to
the callback signature extremely noisy. An example of the required churn
is e8207717f1 (refs: add referent to each_ref_fn, 2024-08-09): adding a
single argument required us to change 48 files.
It was already proposed back then [1] that we might want to introduce a
wrapper structure to alleviate the pain going forward. While this of
course requires the same kind of global refactoring as just introducing
a new parameter, it at least allows us to more easily change the callback type
afterwards by just extending the wrapper structure.
One counterargument to this refactoring is that it makes the structure
more opaque. While it is obvious which callsites need to be fixed up
when we change the function type, it's not obvious anymore once we use
a structure. That being said, we only have a handful of sites that
actually need to populate this wrapper structure: our ref backends,
"refs/iterator.c" as well as very few sites that invoke the iterator
callback functions directly.
Introduce this wrapper structure so that we can adapt the iterator
interfaces more readily.
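As a hedged illustration, the change in callback shape is roughly:

    /*
     * Before: every new piece of data meant a new parameter, e.g.
     *
     *   typedef int each_ref_fn(const char *refname, const char *referent,
     *                           const struct object_id *oid, int flags,
     *                           void *cb_data);
     */

    /* After (sketch): new data travels inside the wrapper structure,
     * so the callback signature itself can stay stable. */
    typedef int each_ref_fn(const struct reference *ref, void *cb_data);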
[1]: <ZmarVcF5JjsZx0dl@tanuki>
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
We have two different ways to write an object into the database:
- We either provide the full buffer and write the object all at once.
- Or we provide an input stream that has a `read()` function so that
we can chunk the object.
The latter is especially used for large objects, where it may be too
expensive to hold the complete object in memory all at once.
While we already have `odb_write_object()` at the ODB-layer, we don't
have an equivalent for streaming an object. Introduce a new function
`odb_write_object_stream()` to address this gap so that callers don't
have to be aware of the inner workings of how to stream an object to
disk with a specific object source.
Rename `stream_loose_object()` to `odb_source_loose_write_stream()` to
clarify its scope. This matches our modern best practices around how to
name functions.
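The two ODB-level entry points then look roughly like this (signatures
are illustrative, not necessarily the exact ones):

    /* Whole-buffer write: the caller holds the complete object. */
    int odb_write_object(struct object_database *odb,
                         const void *buf, unsigned long len,
                         enum object_type type, struct object_id *oid);

    /*
     * Streaming write: the caller provides a stream with a read()
     * callback so that large objects never need to be held in memory
     * all at once. Internally this ends up in
     * odb_source_loose_write_stream().
     */
    int odb_write_object_stream(struct object_database *odb,
                                struct input_stream *in, unsigned long len,
                                enum object_type type, struct object_id *oid);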
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Rename `write_object_file()` to `odb_source_loose_write_object()` so
that it becomes clear that this is tied to a specific loose object
source. This matches our modern naming schema for functions.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
When writing an object that already exists in our object database we
skip the write and instead only update mtimes of the object, either in
its packed or loose object format. This logic is wholly contained in
"object-file.c", but that file is really only concerned with loose
objects. So it does not really make sense that it also contains the
logic to freshen a packed object.
Introduce a new `odb_freshen_object()` function that sits on the object
database level and two functions `packfile_store_freshen_object()` and
`odb_source_loose_freshen_object()`. Like this, the format-specific
functions can be part of their respective subsystems, while the backend
agnostic function to freshen an object sits at the object database
layer.
Note that this change also moves the logic that iterates through object
sources from the object source layer into the object database layer.
This change is intentional: object sources should ideally only have to
worry about themselves, and coordination of different sources should be
handled on the object database level.
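A hypothetical sketch of that coordination at the ODB layer (field and
function names are illustrative):

    int odb_freshen_object(struct object_database *odb,
                           const struct object_id *oid)
    {
            struct odb_source *source;

            /* Try to freshen a packed copy of the object first... */
            if (packfile_store_freshen_object(odb->packfiles, oid))
                    return 1;

            /* ...and fall back to a loose copy in any of the sources. */
            for (source = odb->sources; source; source = source->next)
                    if (odb_source_loose_freshen_object(source, oid))
                            return 1;

            return 0;
    }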
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Rename `has_loose_object()` to `odb_source_loose_has_object()` so that
it becomes clear that this is tied to a specific loose object source.
This matches our modern naming schema for functions.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
When reading an object via `loose_object_info()` or `map_loose_object()`
we hand in the whole repository. We then iterate through each of the
object sources to figure out whether that source has the object in
question.
This logic is reversing responsibility though: a specific backend should
only care about one specific source, where the object sources themselves
are then managed by the object database.
Refactor the code accordingly by passing an object source to both of
these functions instead. The different sources are then handled by
either `do_oid_object_info_extended()`, which sits on the object
database level, and by `open_istream_loose()`. The latter function
arguably is still at the wrong level, but this will be cleaned up at a
later point in time.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The loose object map is used to map from the repository's canonical
object hash to the compatibility hash. As the name indicates, this map
is only used for loose objects, and as such it is tied to a specific
loose object source.
Same as with preceding commits, move this map into the loose object
source accordingly.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
There are two different situations where we have to clear the cache of
loose objects:
- When freeing the loose object source itself to avoid memory leaks.
- When repreparing the loose object source so that any potentially-
stale data is getting evicted from the cache.
The former is already handled by `odb_source_loose_free()`. But the
latter case is still done manually in `odb_reprepare()`, so we are
leaking internals into that code.
Introduce a new `odb_source_loose_reprepare()` function as an equivalent
to `packfile_store_prepare()` to hide these implementation details.
Furthermore, while at it, rename the function `odb_clear_loose_cache()`
to `odb_source_loose_clear()`.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Our loose objects use a cache that (optionally) stores all objects for
each of the opened sharding directories. This cache is located in the
`struct odb_source`, but now that we have `struct odb_source_loose` it
makes sense to move it into the latter structure so that all state that
relates to loose objects is entirely self-contained.
Do so. While at it, rename corresponding functions to have a prefix that
relates to `struct odb_source_loose`.
Note that despite this prefix, the functions still accept a `struct
odb_source` as input. This is done intentionally: once we introduce
pluggable object databases, we will continue to accept this struct but
then do a cast inside these functions to `struct odb_source_loose`. This
design is similar to how we do it for our ref backends.
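The cast pattern alluded to might look roughly like this (all names are
hypothetical, mirroring how the ref backends embed their generic part):

    struct odb_source_loose {
            struct odb_source base;      /* generic part comes first */
            struct oidtree *cache;       /* loose-object cache */
            /* ... more loose-specific state ... */
    };

    static void odb_source_loose_clear(struct odb_source *source)
    {
            /* The generic handle is known to be embedded in the
             * loose-specific structure, so downcast to reach it. */
            struct odb_source_loose *loose =
                    (struct odb_source_loose *)source;
            oidtree_clear(loose->cache);
    }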
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Currently, all state that relates to loose objects is held directly by
the `struct odb_source`. Introduce a new `struct odb_source_loose` to
hold the state instead so that it is entirely self-contained.
This structure will eventually morph into the backend for accessing
loose objects. As such, this is part of the refactorings to introduce
pluggable object databases.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The `fetch_if_missing` global variable is declared in "object-file.h"
but defined in "odb.c". The variable relates to the whole object
database instead of only loose objects, so move the declaration into
"odb.h" accordingly.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The functions `free_object_directory()` and `free_object_directories()`
are responsible for freeing a single object source or all object sources
connected to an object database, respectively. The associated structure
has been renamed from `struct object_directory` to `struct odb_source`
in a1e2581a1e (object-store: rename `object_directory` to `odb_source`,
2025-07-01) though, so the names are somewhat stale nowadays.
Rename them to mention the new struct name instead. Furthermore, while
at it, adapt them to our modern naming schema where we first have the
subject followed by a verb.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
We have three different locations where we create a new ODB source.
Deduplicate the logic via a new `odb_source_new()` function.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
When adding an alternate to the object database we first check whether
or not the path is usable. A path is usable if:
- It actually exists.
- We don't have it in our object sources yet.
While the former check is trivial enough, the latter part is somewhat
subtle and prone to bugs. This is because the function doesn't only
check whether or not the given path is usable. But if it _is_ usable, we
also store that path in the map of object sources immediately.
The tricky part here is that the path that gets stored in the map is
_not_ copied. Instead, we rely on the fact that subsequent code uses
`strbuf_detach()` to store the exact same allocated memory in the
created object source. Consequently, the memory is owned by the source
but _also_ stored in the map. This subtlety is easy to miss, so if one
decides to refactor this code one can easily end up breaking this
mechanism.
Make the relationship more explicit by not storing the path as part of
`alt_odb_usable()`. Instead, store the path after we have created the
source so that we can use the source's path pointer directly.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The current implementation of git-last-modified(1) works by doing a
revision walk, and inspecting the diff at each level of that walk to
annotate entries remaining in the hashmap of paths. In other words, if
the diff at some level touches a path which has not yet been associated
with a commit, then that commit becomes associated with the path.
While a perfectly reasonable implementation, it can perform poorly in
either one of two scenarios:
1. There are many entries of interest, in which case there is simply
a lot of work to do.
2. Or, there are (even a few) entries which have not been updated in a
long time, and so we must walk through a lot of history in order to
find a commit that touches that path.
This patch rewrites the last-modified implementation to address the
second point. The idea behind the algorithm is to propagate a set of
'active' paths (a path is 'active' if it does not yet belong to a
commit) up to parents and do a truncated revision walk.
The walk is truncated because it does not produce a revision for every
change in the original pathspec, but rather only for active paths.
More specifically, consider a priority queue of commits sorted by
generation number. First, enqueue the set of boundary commits with all
paths in the original spec marked as interesting.
Then, while the queue is not empty, do the following:
1. Pop an element, say, 'c', off of the queue, making sure that 'c'
isn't reachable by anything in the '--not' set.
2. For each parent 'p' (with index 'parent_i') of 'c', do the
following:
a. Compute the diff between 'c' and 'p'.
b. Pass any active paths that are TREESAME from 'c' to 'p'.
c. If 'p' has any active paths, push it onto the queue.
3. Any path that remains active on 'c' is associated to that commit.
This ends up being equivalent to doing something like 'git log -1 --
$path' for each path simultaneously. But, it allows us to go much faster
than the original implementation by limiting the number of diffs we
compute, since we can avoid parts of history that would have been
considered by the revision walk in the original implementation, but are
known to be uninteresting to us because we have already marked all paths
in that area to be inactive.
To avoid computing many first-parent diffs, add another trick on top of
this and check if all paths active in 'c' are DEFINITELY NOT in c's
Bloom filter. Since the commit-graph only stores first-parent diffs in
the Bloom filters, we can only apply this trick to first-parent diffs.
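In rough pseudo-C the truncated walk looks something like the following
(the prio_queue API is real; the other helpers are hypothetical
shorthands for the steps described above):

    struct commit *c;
    struct commit_list *p;
    int parent_i;

    while ((c = prio_queue_get(&queue))) {
            if (commit_is_in_not_set(c))
                    continue;

            for (parent_i = 0, p = c->parents; p; p = p->next, parent_i++) {
                    /*
                     * The commit-graph only stores first-parent diffs in
                     * its Bloom filters, so the short-circuit applies to
                     * the first parent only: if none of the active paths
                     * can possibly be in the diff, skip computing it and
                     * pass everything along as TREESAME.
                     */
                    if (!parent_i && bloom_excludes_all_active_paths(c))
                            pass_all_active_paths(c, p->item);
                    else
                            pass_treesame_active_paths(c, p->item, parent_i);

                    if (has_active_paths(p->item))
                            prio_queue_put(&queue, p->item);
            }

            /* Whatever stayed active on 'c' was last modified by it. */
            associate_remaining_active_paths(c);
    }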
Comparing the performance of this new algorithm shows about a 2.5x
improvement on git.git:
Benchmark 1: master no bloom
Time (mean ± σ): 2.868 s ± 0.023 s [User: 2.811 s, System: 0.051 s]
Range (min … max): 2.847 s … 2.926 s 10 runs
Benchmark 2: master with bloom
Time (mean ± σ): 949.9 ms ± 15.2 ms [User: 907.6 ms, System: 39.5 ms]
Range (min … max): 933.3 ms … 971.2 ms 10 runs
Benchmark 3: HEAD no bloom
Time (mean ± σ): 782.0 ms ± 6.3 ms [User: 740.7 ms, System: 39.2 ms]
Range (min … max): 776.4 ms … 798.2 ms 10 runs
Benchmark 4: HEAD with bloom
Time (mean ± σ): 307.1 ms ± 1.7 ms [User: 276.4 ms, System: 29.9 ms]
Range (min … max): 303.7 ms … 309.5 ms 10 runs
Summary
HEAD with bloom ran
2.55 ± 0.02 times faster than HEAD no bloom
3.09 ± 0.05 times faster than master with bloom
9.34 ± 0.09 times faster than master no bloom
In short, the existing implementation is comparably fast *with* Bloom
filters as the new implementation is *without* Bloom filters. So, most
repositories should get a dramatic speed-up by just deploying this (even
without computing Bloom filters), and all repositories should get faster
still when computing Bloom filters.
When comparing a more extreme example of
`git last-modified -- COPYING t`, the new implementation is about five times faster:
Benchmark 1: master
Time (mean ± σ): 4.372 s ± 0.057 s [User: 4.286 s, System: 0.062 s]
Range (min … max): 4.308 s … 4.509 s 10 runs
Benchmark 2: HEAD
Time (mean ± σ): 826.3 ms ± 22.3 ms [User: 784.1 ms, System: 39.2 ms]
Range (min … max): 810.6 ms … 881.2 ms 10 runs
Summary
HEAD ran
5.29 ± 0.16 times faster than master
As an added benefit, results are more consistent now. For example, the
implementation in 'master' gives:
$ git log --max-count=1 --format=%H -- pkt-line.h
15df15fe07ef66b51302bb77e393f3c5502629de
$ git last-modified -- pkt-line.h
15df15fe07ef66b51302bb77e393f3c5502629de pkt-line.h
$ git last-modified | grep pkt-line.h
5b49c1af03e600c286f63d9d9c9fb01403230b9f pkt-line.h
With the changes in this patch the results of git-last-modified(1)
always match those of `git log --max-count=1`.
One thing to note, though: the results might be output in a different
order than before. This is not considered to be an issue because the
order is nowhere documented to be guaranteed.
Based-on-patches-by: Derrick Stolee <stolee@gmail.com>
Based-on-patches-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Toon Claes <toon@iotcl.com>
Acked-by: Taylor Blau <me@ttaylorr.com>
[jc: tweaked use of xcalloc() to unbreak coccicheck]
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The code to walk the revision graph to compute merge bases has been
optimized.
* rs/merge-base-optim:
commit-reach: avoid commit_list_insert_by_date()
|
|
Finishing touches to fixes to the recent regression in "git diff -w
--quiet" and anything that needs to internally generate patch to
see if it turns empty.
* jk/diff-patch-dry-run-cleanup:
diff: simplify run_external_diff() quiet logic
diff: drop dry-run redirection to /dev/null
diff: replace diff_options.dry_run flag with NULL file
diff: drop save/restore of color_moved in dry-run mode
diff: send external diff output to diff_options.file
|
|
"git maintenance" command learns the "geometric" strategy where it
avoids doing maintenance tasks that rebuild everything from
scratch.
* ps/maintenance-geometric:
t7900: fix a flaky test due to git-repack always regenerating MIDX
builtin/maintenance: introduce "geometric" strategy
builtin/maintenance: make "gc" strategy accessible
builtin/maintenance: extend "maintenance.strategy" to manual maintenance
builtin/maintenance: run maintenance tasks depending on type
builtin/maintenance: improve readability of strategies
builtin/maintenance: don't silently ignore invalid strategy
builtin/maintenance: make the geometric factor configurable
builtin/maintenance: introduce "geometric-repack" task
builtin/gc: make `too_many_loose_objects()` reusable without GC config
builtin/gc: remove global `repack` variable
|
|
The wildmatch code had a corner case bug that mistakenly makes
"foo**/bar" match with "foobar", which has been corrected.
* jk/match-pathname-fix:
match_pathname(): give fnmatch one char of prefix context
match_pathname(): reorder prefix-match check
|
|
* kh/doc-patch-id-1:
doc: patch-id: convert to the modern synopsis style
|
|
The 'q'(uit) command in "git add -p" has been improved to quit
without doing any meaningless work before leaving, and giving EOF
(typically control-D) to the prompt is made to behave the same way.
* rs/add-patch-quit:
add-patch: quit on EOF
add-patch: quit without skipping undecided hunks
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
"git bisect" command did not react correctly to "git bisect help"
and "git bisect unknown", which has been corrected.
* rz/bisect-help-unknown:
bisect: fix handling of `help` and invalid subcommands
|
|
"git shortlog" knows "--committer" and "--author" options, which
the command line completion (in contrib/) did not handle well,
which has been corrected.
* kf/log-shortlog-completion-fix:
completion: complete some 'git log' options
|
|
Regression fixes for a topic that has already been merged.
* ly/diff-name-only-with-diff-from-content:
diff: stop output garbled message in dry run mode
|
|
Two slightly different ways to get at "all the packfiles" in the API
have been cleaned up.
* ps/remove-packfile-store-get-packs:
packfile: rename `packfile_store_get_all_packs()`
packfile: introduce macro to iterate through packs
packfile: drop `packfile_store_get_packs()`
builtin/grep: simplify how we preload packs
builtin/gc: convert to use `packfile_store_get_all_packs()`
object-name: convert to use `packfile_store_get_all_packs()`
|
|
strbuf_split*() to split a string into multiple strbufs is often the
wrong API to use. A few uses of it have been removed by
simplifying the code.
* ob/gpg-interface-cleanup:
gpg-interface: do not use misdesigned strbuf_split*()
gpg-interface: do not use misdesigned strbuf_split*()
|
|
"Symlink symref" has been added to the list of things that will
disappear at the Git 3.0 boundary.
* ps/symlink-symref-deprecation:
refs/files: deprecate writing symrefs as symbolic links
|
|
A new configuration variable commitGraph.changedPaths allows turning
"--changed-paths" on by default for "git commit-graph".
* ey/commit-graph-changed-paths-config:
commit-graph: add new config for changed-paths & recommend it in scalar
|
|
We track packfiles via two different lists:
- `struct packfile_store::packs` is a list that sorts local packs
first. In addition, these packs are sorted so that younger packs are
sorted towards the front.
- `struct packfile_store::mru` is a list that sorts packs so that
most-recently used packs are at the front.
The reasoning behind the ordering in the `packs` list is that younger
objects stored in the local object store tend to be accessed more
frequently, and that is certainly true for some cases. But there are
going to be lots of cases where that isn't true. Especially when
traversing history it is likely that one needs to access many older
objects, and due to our housekeeping it is very likely that almost all
of those older objects will be contained in one large pack that is
oldest.
So whether or not the ordering makes sense really depends on the use
case at hand. A flexible approach like our MRU list addresses that need,
as it will sort packs towards the front that are accessed all the time.
Intuitively, this approach is thus able to satisfy more use cases more
efficiently.
This reasoning casts some doubt on whether or not it really makes sense
to track packs via two different lists. It causes confusion, and it is
not clear whether there are use cases where the `packs` list really is
such an obvious choice.
Merge these two lists into one most-recently-used list.
Note that there is one important edge case: `for_each_packed_object()`
uses the MRU list to iterate through packs, and then it lists each
object in those packs. This would have the effect that we now sort the
current pack towards the front, thus modifying the list of packfiles we
are iterating over, with the consequence that we'll see an infinite
loop. This edge case is worked around by introducing a new field that
allows us to skip updating the MRU.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
When preparing the packfile store we know to also prepare the MRU list
of packfiles with all packs that are currently loaded in the store via
`packfile_store_prepare_mru()`. So we know that the list of packs in the
MRU list should match the list of packs in the non-MRU list.
But there are some direct or indirect callsites that add a packfile to
the store via `packfile_store_add_pack()` without adding the pack to the
MRU. And while functions that access the MRU (e.g. `find_pack_entry()`)
know to call `packfile_store_prepare()`, which knows to prepare the MRU
via `packfile_store_prepare_mru()`, that operation will be turned into a
no-op because the packfile store is already prepared. So this will not
cause us to add the packfile to the MRU, and consequently we won't be
able to find the packfile in our MRU list.
There are only a handful of callers outside of "packfile.c" that add a
packfile to the store:
- "builtin/fast-import.c" adds multiple packs of imported objects, but
it knows to look up objects via `packfile_store_get_packs()`. This
function does not use the MRU, so we're good.
- "builtin/index-pack.c" adds the indexed pack to the store in case it
needs to perform consistency checks on its objects.
- "http.c" adds the fetched pack to the store so that we can access
its objects.
In all of these cases we actually want to access the contained objects.
And luckily, reading these objects works as expected:
1. We eventually end up in `do_oid_object_info_extended()`.
2. Calling `find_pack_entry()` fails because the MRU list doesn't
contain the newly added packfile.
3. The callers don't pass `OBJECT_INFO_QUICK`, so we end up
repreparing the object database. This will also cause us to
reprepare the MRU list.
4. We now retry reading the object via `find_pack_entry()`, and now we
succeed because the MRU list got populated.
This logic feels quite fragile: we intentionally add the packfile to the
store, but we then ultimately rely on repreparing the entire store only
to make the packfile accessible. While we do the correct thing in
`do_oid_object_info_extended()`, other sites that access the MRU may not
know to reprepare.
But besides being fragile it's also a waste of resources: repreparing
the object database requires us to re-read the alternates file and
discard any caches.
Refactor the code so that we unconditionally add packfiles to the MRU
when adding them to a packfile store. This makes the logic less fragile
and ensures that we don't have to reprepare the store to make the pack
accessible.
Note that this does not allow us to drop `packfile_store_prepare_mru()`
just yet: while the MRU list is already populated with all packs now,
the order in which we add these packs is nondeterministic for the most
part. So by first calling `sort_pack()` on the other packfile list and
then re-preparing the MRU list we inherit its sorting.
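The end state can be pictured roughly as follows (hypothetical sketch;
the real function does more bookkeeping):

    void packfile_store_add_pack(struct packfile_store *store,
                                 struct packed_git *pack)
    {
            packfile_list_append(&store->packs, pack);
            /*
             * Also make the pack visible to MRU-based lookups right
             * away, so that no reprepare of the store is needed before
             * its objects can be found.
             */
            packfile_list_prepend(&store->mru, pack);
    }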
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Move the list of packs into the packfile store. This follows the same
logic as in a previous commit, where we moved the most-recently-used
list of packs, as well.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The function `has_sha1_pack_kept_or_nonlocal()` takes an object ID and
then searches through packed objects to figure out whether the object
exists in a kept or non-local pack. As a performance optimization we
remember the packfile that contains a given object ID so that the next
call to the function first checks that same packfile again.
The way this is written is rather hard to follow though, as the caching
mechanism is intertwined with the loop that iterates through the packs.
Consequently, we need to do some gymnastics to re-start the iteration if
the cached pack does not contain the objects.
Refactor this so that we check the cached packfile at the beginning. We
don't have to re-verify whether the packfile meets the properties as we
have already verified those when storing the pack in `last_found` in the
first place. So all we need to do is to use `find_pack_entry_one()` to
check whether the pack contains the object ID, and to skip the cached
pack in the loop so that we don't search it twice.
Furthermore, stop using the `(void *)1` sentinel value and instead use a
simple `NULL` pointer to indicate that we don't have a last-found pack
yet.
This refactoring significantly simplifies the logic and makes it much
easier to follow.
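The resulting shape is roughly the following (hypothetical sketch that
glosses over how the kept/non-local criteria and the pack iteration are
actually spelled):

    static struct packed_git *last_found; /* NULL until the first hit */

    static int has_object_kept_or_nonlocal(struct repository *r,
                                           const struct object_id *oid)
    {
            struct packed_git *p;

            /*
             * The cached pack already passed the kept/non-local checks
             * when it was stored, so only the containment check is
             * needed here.
             */
            if (last_found && find_pack_entry_one(oid, last_found))
                    return 1;

            for (p = get_all_packs(r); p; p = p->next) {
                    if (p == last_found)
                            continue; /* already checked above */
                    if (!pack_matches_criteria(p))
                            continue;
                    if (find_pack_entry_one(oid, p)) {
                            last_found = p;
                            return 1;
                    }
            }
            return 0;
    }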
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
When approximating the number of objects in a repository we only take
into account two data sources, the multi-pack index and the packfile
indices, as both of these data structures allow us to easily figure out
how many objects they contain.
But the way we currently approximate the number of objects is broken in
the presence of a multi-pack index. This is due to two separate reasons:
- We have recently introduced initial infrastructure for incremental
multi-pack indices. Starting with that series, `num_objects` only
counts the number of objects of a specific layer of the MIDX chain,
so we do not take into account objects from parent layers.
This issue is fixed by adding `num_objects_in_base`, which contains
the sum of all objects in previous layers.
- When using the multi-pack index we may count objects contained in
packfiles twice: once via the multi-pack index, but then we again
count them via the packfile itself.
This issue is fixed by skipping any packfiles that have an MIDX.
Overall, given that we _always_ count the packs, we can only end up
overestimating the number of objects, and the overestimation is limited
to a factor of two at most.
The consequences of those issues are very limited though, as we only
approximate object counts in a small number of cases:
- When writing a commit-graph we use the approximate object count to
display the upper limit of a progress display.
- In `repo_find_unique_abbrev_r()` we use it to specify a lower limit
of how many hex digits we want to abbreviate to. Given that we use
power-of-two here to derive the lower limit we may end up with an
abbreviated hash that is one digit longer than required.
- In `estimate_repack_memory()` we may end up overestimating how much
memory a repack needs to pack objects. Consequently, we may end up
dropping some packfiles from a repack.
None of these are really game-changing. But it's nice to fix those
issues regardless.
While at it, convert the code to use `repo_for_each_pack()`.
Furthermore, use `odb_prepare_alternates()` instead of explicitly
preparing the packfile store. We really only want to prepare the object
database sources, and `get_multi_pack_index()` already knows to prepare
the packfile store for us.
Helped-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The dumb HTTP protocol directly fetches packfiles from the remote server
and temporarily stores them in a list of packfiles. Those packfiles are
not yet added to the repository's packfile store until we finalize the
whole fetch.
Refactor the code to instead use a `struct packfile_list` to store those
packs. This prepares us for a subsequent change where the `->next`
pointer of `struct packed_git` will go away.
Note that this refactoring creates some temporary duplication of code,
as we now have both `packfile_list_find_oid()` and `find_oid_pack()`.
The latter function will be removed in a subsequent commit though.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Packfiles have two lists associated with them:
- A list that keeps track of packfiles in the order that they were
added to a packfile store.
- A list that keeps track of packfiles in most-recently-used order so
that packfiles that are more likely to contain a specific object are
ordered towards the front.
Both of these lists are hosted by `struct packed_git` itself, so to
identify all packfiles in a repository you simply need to grab the first
packfile and then iterate the `->next` pointers or the MRU list. This
pattern has the problem that all packfiles are part of the same list,
regardless of whether or not they belong to the same object source.
With the upcoming pluggable object database effort this needs to change:
packfiles should be contained by a single object source, and reading an
object from any such packfile should use that source to look up the
object. Consequently, we need to break up the global lists of packfiles
into per-object-source lists.
A first step towards this goal is to move those lists out of `struct
packed_git` and into the packfile store. While the packfile store is
currently sitting on the `struct object_database` level, the intent is
to push it down one level into the `struct odb_source` in a subsequent
patch series.
Introduce a new `struct packfile_list` that is used to manage lists of
packfiles and use it to store the list of most-recently-used packfiles
in `struct packfile_store`. For now, the new list type is only used in a
single spot, but we'll expand its usage in subsequent patches.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
To allow fast lookups of a packfile by name we use a hashmap that has
the packfile name as key and the pack itself as value. But while this is
the perfect use case for a `strmap`, we instead use `struct hashmap` and
store the hashmap entry in the packfile itself.
Simplify the code by using a `strmap` instead.
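With a `strmap` the bookkeeping boils down to something like this
(sketch; the field name is hypothetical):

    struct packfile_store {
            struct strmap packs_by_name;
            /* ... */
    };

    /* Registering a pack under its name: */
    strmap_put(&store->packs_by_name, pack->pack_name, pack);

    /* Looking it up again later: */
    struct packed_git *p = strmap_get(&store->packs_by_name, name);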
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Previous commits have marked a number of error or warning messages in
"builtin/fast-export.c" and "builtin/fast-import.c" for translation.
As "gpg-interface.c" code is used by the fast-export and fast-import
code, we should make sure that error or warning messages are also all
marked for translation in "gpg-interface.c".
To ensure that, let's mark for translation an error message in a
die() function.
With this, all the error and warning messages emitted by fast-export
and fast-import can be properly translated.
Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Some error or warning messages in "builtin/fast-import.c" are marked
for translation, but many are not.
To be more consistent and provide a better experience to people using a
translated version, let's mark all the remaining error or warning
messages for translation.
While at it, let's make the following small changes:
- replace "GIT" or "git" in a few error messages to just "Git",
- replace "Expected from command, got %s" to "expected 'from'
command, got '%s'", which makes it clearer that "from" is a command
and should not be translated,
- downcase error and warning messages that start with an uppercase,
- fix test cases in "t9300-fast-import.sh" that broke because an
error or warning message was downcased,
- split error and warning messages that are too long,
- adjust the indentation of some arguments of the error functions.
Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Some error or warning messages in "builtin/fast-export.c" are marked
for translation, but many are not.
To be more consistent and provide a better experience to people using a
translated version, let's mark all the remaining error or warning
messages for translation.
While at it:
- improve how some arguments to some error functions are indented,
- remove "Error:" at the start of an error message,
- downcase error and warning messages that start with an uppercase.
Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
In "gpg-interface.h", the definitions of the GPG_VERIFY_* boolean flags
are currently using 1, 2 and 4 while we often prefer the bitwise left
shift operator, `<<`, for that purpose to make it clearer that they are
boolean.
Let's use the left shift operator here too. Let's also fix an indent
issue with "4" while at it.
Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
In "gpg-interface.c", the 'parse_ssh_output()' function takes a
'struct signature_check *sigc' argument and populates many members of
this 'sigc' using information parsed from 'sigc->output' which
contains the output of an `ssh-keygen -Y ...` command that was used to
verify an SSH signature.
When it populates 'sigc->fingerprint' though, it uses
`xstrdup(strstr(line, "key ") + 4)` while `strstr(line, "key ")` has
already been computed a few lines above and is already available in
the `key` variable.
Let's simplify this.
Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Clean-up "git repack" machinery to prepare for incremental update
of midx files.
* tb/incremental-midx-part-3.1: (49 commits)
builtin/repack.c: clean up unused `#include`s
repack: move `write_cruft_pack()` out of the builtin
repack: move `write_filtered_pack()` out of the builtin
repack: move `pack_kept_objects` to `struct pack_objects_args`
repack: move `finish_pack_objects_cmd()` out of the builtin
builtin/repack.c: pass `write_pack_opts` to `finish_pack_objects_cmd()`
repack: extract `write_pack_opts_is_local()`
repack: move `find_pack_prefix()` out of the builtin
builtin/repack.c: use `write_pack_opts` within `write_cruft_pack()`
builtin/repack.c: introduce `struct write_pack_opts`
repack: 'write_midx_included_packs' API from the builtin
builtin/repack.c: inline packs within `write_midx_included_packs()`
builtin/repack.c: pass `repack_write_midx_opts` to `midx_included_packs`
builtin/repack.c: inline `remove_redundant_bitmaps()`
builtin/repack.c: reorder `remove_redundant_bitmaps()`
repack: keep track of MIDX pack names using existing_packs
builtin/repack.c: use a string_list for 'midx_pack_names'
builtin/repack.c: extract opts struct for 'write_midx_included_packs()'
builtin/repack.c: remove ref snapshotting from builtin
repack: remove pack_geometry API from the builtin
...
|
|
We read the input into a strbuf, so we must free it. Without this, t1016
complains in SANITIZE=leak mode.
The bug was introduced in 7673ecd2dc (t1016-compatObjectFormat: add
tests to verify the conversion between objects, 2023-10-01). But nobody
seems to have noticed, probably because CI did not run these tests until
the fix in 6cd8369ef3 (t/lib-gpg: call prepare_gnupghome() in GPG2
prereq, 2024-07-03).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Because gitignore patterns are passed to fnmatch, the handling of
backslashes is the same as it is there: it can be used to escape
metacharacters. We do reference fnmatch(3) for more details, but it may
be friendlier to point out this implication explicitly (especially for
people who want to know about backslash handling and search the
documentation for that word). There are also two cases that I've seen
some other backslash-escaping systems handle differently, so let's
describe those:
1. A backslash before any character treats that character literally,
even if it's not otherwise a meta-character. As opposed to
including the backslash itself (like "foo\bar" in shell expands to
"foo\bar") or forbidding it ("foo\zar" is required to produce a
diagnostic in C).
2. A backslash at the end of the string is an invalid pattern (and not
a literal backslash).
This second one in particular was a point of confusion between our
implementation and the one in JGit. Our wildmatch behavior matches what
POSIX specifies for fnmatch, so the code and documentation are in line.
But let's add a test to cover this case. Note that the behavior here
differs between wildmatch itself (which is what gitignore will use) and
pathspec matching (which will only turn to wildmatch if a literal match
fails). So we match "foo\" to "foo\" in pathspecs, but not via
gitignore.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The strategy in t1016-compatObjectFormat is to build two trees with
identical commits, one tree encoded in sha1, the other tree encoded
in sha256, and to use the compatibility code to test and see if
the two trees are identical.
GPG signatures include the current time as part of the signature.
To make gpg deterministic I forced the use of gpg --faked-system-time.
Unfortunately I did not look closely enough.
By default gpg still allows time to move forward with --faked-system-time.
So in those rare instances when the system is heavily loaded and gpg runs
slower than other times, signatures over the exact same data differ
due to timestamps with a minuscule difference.
Reading through the gpg documentation with a close eye, time can be
frozen by including an exclamation point at the end of the argument to
--faked-system-time.
Add the exclamation point so gpg really runs with a fixed notion of time,
resulting in the exact same data having identical gpg signatures.
That is enough that I can run "t1016-compatObjectFormat.sh --stress"
and I don't see any failures.
It is possible a future change to gpg will make replay protection more
robust and not provide a way to allow two separate runs of gpg to
produce exactly the same signature for exactly the same data. If that
happens a deeper comparison of the two repositories will need to be
performed. A comparison that simply verifies the signatures and
compares the data for equality. For now that is a lot of work
for no gain so I am just documenting the possibility.
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Update the usage string of `git bisect` and documentation to match each
other. While at it, also:
1. Move the synopsis of `git bisect` subcommands to the synopsis
section, so that the test `t0450-txt-doc-vs-help.sh` can pass.
2. Document the `git bisect next` subcommand, which exists in the code
but is missing from the documentation.
See also: [1].
[1]: https://lore.kernel.org/git/3DA38465-7636-4EEF-B074-53E4628F5355@gmail.com/
Suggested-by: Ben Knoble <ben.knoble@gmail.com>
Signed-off-by: Ruoyu Zhong <zhongruoyu@outlook.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The placeholder markup is underscore (_), not backtick (`) as well.
The inline-verbatim markup (backticks) handles interior formatting. This
means in this case that it applies HTML `<code>` to the underscores and
`<em>` to the placeholder.
That is the effect, anyway; we can see from the rest of 042d6f34 (doc:
git-checkout: clarify `-b` and `-B`, 2025-09-10) that this was probably
an unintended mix-up.
Acked-by: Julia Evans <julia@jvns.ca>
Signed-off-by: Kristoffer Haugsbakk <code@khaugsbakk.name>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
CI improvements to handle the recent Rust integration better.
* ps/ci-rust:
rust: support for Windows
ci: verify minimum supported Rust version
ci: check for common Rust mistakes via Clippy
rust/varint: add safety comments
ci: check formatting of our Rust code
ci: deduplicate calls to `apt-get update`
|
|
"git fast-import" is taught to handle signed tags, just like it
recently learned to handle signed commits, in different ways.
* cc/fast-import-strip-signed-tags:
fast-import: add '--signed-tags=<mode>' option
fast-export: handle all kinds of tag signatures
t9350: properly count annotated tags
lib-gpg: allow tests with GPGSM or GPGSSH prereq first
doc: git-tag: stop focusing on GPG signed tags
|
|
"git sparse-checkout" subcommand learned a new "clean" action to
prune otherwise unused working-tree files that are outside the
areas of interest.
* ds/sparse-checkout-clean:
sparse-index: improve advice message instructions
t: expand tests around sparse merges and clean
sparse-index: point users to new 'clean' action
sparse-checkout: add --verbose option to 'clean'
dir: add generic "walk all files" helper
sparse-checkout: match some 'clean' behavior
sparse-checkout: add basics of 'clean' command
sparse-checkout: remove use of the_repository
|
|
ps/packed-git-in-object-store
* ps/remove-packfile-store-get-packs: (55 commits)
packfile: rename `packfile_store_get_all_packs()`
packfile: introduce macro to iterate through packs
packfile: drop `packfile_store_get_packs()`
builtin/grep: simplify how we preload packs
builtin/gc: convert to use `packfile_store_get_all_packs()`
object-name: convert to use `packfile_store_get_all_packs()`
builtin/repack.c: clean up unused `#include`s
repack: move `write_cruft_pack()` out of the builtin
repack: move `write_filtered_pack()` out of the builtin
repack: move `pack_kept_objects` to `struct pack_objects_args`
repack: move `finish_pack_objects_cmd()` out of the builtin
builtin/repack.c: pass `write_pack_opts` to `finish_pack_objects_cmd()`
repack: extract `write_pack_opts_is_local()`
repack: move `find_pack_prefix()` out of the builtin
builtin/repack.c: use `write_pack_opts` within `write_cruft_pack()`
builtin/repack.c: introduce `struct write_pack_opts`
repack: 'write_midx_included_packs' API from the builtin
builtin/repack.c: inline packs within `write_midx_included_packs()`
builtin/repack.c: pass `repack_write_midx_opts` to `midx_included_packs`
builtin/repack.c: inline `remove_redundant_bitmaps()`
...
|
|
When a supposedly no-op "git repack" runs across a second boundary,
because the command always touches the MIDX file and updates its
timestamp, "ls -l $GIT_DIR/objects/pack/" before and after the
operation can change, which causes such a test to fail. Only
compare the *.pack files in the directory before and after the
operation to work around this flakiness.
Arguably, git-repack(1) should learn to not rewrite the MIDX in case
we know it is already up-to-date. But this is not a new problem
introduced via the new geometric maintenance task, so for now it
should be good enough to paper over the issue.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
[jc: taken from diff to v4 from v3 that was already merged to 'next']
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Add a note after the `git send-email` section explaining how
contributors can confirm that their patches reached the mailing
list by checking https://lore.kernel.org/git/. This helps
contributors verify that their emails were successfully delivered.
Signed-off-by: Queen Ediri Jessa <qjessa662@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The debug ref backend (refs_be_debug) was missing the remove_on_disk
function pointer, which caused a segmentation fault when running
'GIT_TRACE_REFS=1 git refs migrate --ref-format=reftable' commands.
Signed-off-by: Xinyu Ruan <r200981113@gmail.com>
Acked-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Test modernization.
* so/t2401-use-test-path-helpers:
t2401: update path checks using test_path helpers
|
|
CI update.
* js/ci-github-actions-update:
build(deps): bump actions/github-script from 7 to 8
build(deps): bump actions/setup-python from 5 to 6
build(deps): bump actions/checkout from 4 to 5
build(deps): bump actions/download-artifact from 4 to 5
|
|
Doc mark-up fixes.
* kh/doc-continued-paragraph-fix:
doc: fix accidental literal blocks
|
|
Code clean-up.
* js/unreachable-workaround-for-no-symlink-head:
refs: forbid clang to complain about unreachable code
|
|
Recent OpenSSH creates the Unix domain socket to communicate with
ssh-agent under $HOME instead of /tmp, which causes our test to
fail due to an overly long pathname in our test environment, which has
been worked around by using "ssh-agent -T".
* ps/t7528-ssh-agent-uds-workaround:
t7528: work around ETOOMANY in OpenSSH 10.1 and newer
|
|
Unicode width table update.
* tb/unicode-width-table-17:
unicode: update the width tables to Unicode 17
|
|
The "--short" option of "git status" that meant output for humans
and "-z" option to show NUL delimited output format did not mix
well, and colored some but not all things. The command has been
updated to color all elements consistently in such a case.
* jk/status-z-short-fix:
status: make coloring of "-z --short" consistent
|
|
An earlier addition to "git diff --no-index A B" to limit the
output with pathspec after the two directories misbehaved when
these directories were given with a trailing slash, which has been
corrected.
* jk/diff-no-index-with-pathspec-fix:
diff --no-index: fix logic for paths ending in '/'
|
|
Windows "real-time monitoring" interferes with the execution of
tests and affects them negatively in both correctness and performance;
it has been disabled in GitLab CI.
* ps/gitlab-ci-disable-windows-monitoring:
gitlab-ci: disable realtime monitoring to unbreak Windows jobs
|
|
The code to squelch output from "git diff -w --name-status"
etc. for paths that "git diff -w -p" would have stayed silent
leaked output from dry-run patch generation, which has been
corrected.
* jc/diff-from-contents-fix:
diff: make sure the other caller of diff_flush_patch_quietly() is silent
|
|
Recently we attempted to improve "git diff -w" and friends to
handle cases where patch output would be suppressed, but it
introduced a bug that emits unnecessary output, which has been
corrected.
* jk/diff-from-contents-fix:
diff: restore redirection to /dev/null for diff_from_contents
|
|
If we reach the end of the input, e.g. because the user pressed ctrl-D
on Linux, there is no point in showing any more prompts, as we won't get
any reply. Do the same as option 'q' would: Quit.
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
In match_pathname(), which we use for matching .gitignore and
.gitattribute patterns, we are comparing paths with fnmatch patterns
(actually our extended wildmatch, which will be important). There's an
extra optimization there: we pre-compute the number of non-wildcard
characters at the beginning of the pattern and do an fspathncmp() on
that prefix.
That lets us avoid fnmatch entirely on patterns without wildcards, and
shrinks the amount of work we hand off to fnmatch. For a pattern like
"foo*.txt" and a path "foobar.txt", we'd cut away the matching "foo"
prefix and just pass "*.txt" and "bar.txt" to fnmatch().
But this misses a subtle corner case. In fnmatch(), we'll think
"bar.txt" is the start of the path, but it's not. This doesn't matter
for the pattern above, but consider the wildmatch pattern "foo**/bar"
and the path "foobar". These two should not match, because there is no
file named "bar", and the "**" applies only to the containing directory
name. But after removing the "foo" prefix, fnmatch will get "**/bar" and
"bar", which it does consider a match, because "**/" can match zero
directories.
We can solve this by giving fnmatch a bit more context. As long as it
has one byte of the matched prefix, then it will know that "bar" is not
the start of the path. In this example it would get "o**/bar" and
"obar", and realize that they cannot match.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
As an optimization, we use fspathncmp() to match a prefix of the pattern
that does not contain any wildcards, and then pass the remainder to
fnmatch(). If it has matched the whole thing, we can return early.
Let's shift this early-return check to before we tweak the pattern and
name strings. That will gives us more flexibility with that tweaking.
It might also save a few instructions, but I couldn't measure any
improvement in doing so (and I wouldn't be surprised if an optimizing
compiler could figure that out itself).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Add an install target rule to the Makefiles in contrib/credential in the
same manner as in other Makefiles in contrib such as for contacts or
subtree.
Signed-off-by: Thomas Uhle <thomas.uhle@mailbox.tu-dresden.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Option q implies d, i.e., it marks any undecided hunks towards the
bottom of the hunk array as skipped. This is unnecessary; later code
treats undecided and skipped hunks the same: The only functions that
use UNDECIDED_HUNK and SKIP_HUNK are patch_update_file() itself (but
not after its big for loop) and its helpers get_first_undecided() and
display_hunks().
Streamline the handling of option q by quitting immediately.
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Recent OpenSSH creates the Unix domain socket to communicate with
ssh-agent under $HOME instead of /tmp, which causes our test to
fail due to an overly long pathname in our test environment, which has
been worked around by using "ssh-agent -T".
* ps/t7528-ssh-agent-uds-workaround:
t7528: work around ETOOMANY in OpenSSH 10.1 and newer
|
|
Show 'P'ipe command in "git add -p".
* rs/add-patch-document-p-for-pager:
add-patch: fully document option P
|
|
GPG signing test set-up has been broken for a year, which has been
corrected.
* jc/t1016-setup-fix:
t1016: make sure to use specified GPG
|
|
Unicode width table update.
* tb/unicode-width-table-17:
unicode: update the width tables to Unicode 17
|
|
Build procedure for a few credential helpers (in contrib/) has
been updated.
* tu/credential-makefile-updates:
contrib/credential: harmonize Makefiles
|
|
The "--short" option of "git status" that meant output for humans
and "-z" option to show NUL delimited output format did not mix
well, and colored some but not all things. The command has been
updated to color all elements consistently in such a case.
* jk/status-z-short-fix:
status: make coloring of "-z --short" consistent
|
|
Test fix.
* js/t7500-pwd-windows-fix:
t7500: fix tests with absolute path following ":(optional)" on Windows
|
|
Documentation mark-up fixes.
* rj/doc-technical-fixes:
doc: add large-object-promisors.adoc to the docs build
doc: commit-graph.adoc: fix up some formatting
doc: sparse-checkout.adoc: fix asciidoc warnings
doc: remembering-renames.adoc: fix asciidoc warnings
|
|
We have two different repacking strategies in Git:
- The "gc" strategy uses git-gc(1).
- The "incremental" strategy uses multi-pack indices and `git
multi-pack-index repack` to merge together smaller packfiles as
determined by a specific batch size.
The former strategy is our old and trusted default, whereas the latter
has historically been used for our scheduled maintenance. But both
strategies have their shortcomings:
- The "gc" strategy performs regular all-into-one repacks. Furthermore
it is rather inflexible, as it is not easily possible for a user to
enable or disable specific subtasks.
- The "incremental" strategy is not a full replacement for the "gc"
strategy as it doesn't know to prune stale data.
So today, we don't have a strategy that is well-suited for large repos
while being a full replacement for the "gc" strategy.
Introduce a new "geometric" strategy that aims to fill this gap. This
strategy invokes all the usual cleanup tasks that git-gc(1) does like
pruning reflogs and rerere caches as well as stale worktrees. But where
it differs from both the "gc" and "incremental" strategy is that it uses
our geometric repacking infrastructure exposed by git-repack(1) to
repack packfiles. The advantage of geometric repacking is that we only
need to perform an all-into-one repack when the object count in a repo
has grown significantly.
One downside of this strategy is that pruning of unreferenced objects is
not going to happen regularly anymore. Every geometric repack knows to
soak up all loose objects regardless of their reachability, and merging
two or more packs doesn't consider reachability, either. Consequently,
the number of unreachable objects will grow over time.
This is remedied by doing an all-into-one repack instead of a geometric
repack whenever we determine that the geometric repack would end up
merging all packfiles anyway. This all-into-one repack then performs our
usual reachability checks and writes unreachable objects into a cruft
pack. As cruft packs won't ever be merged during geometric repacks we
can thus phase out these objects over time.
Of course, this still means that we retain unreachable objects for far
longer than with the "gc" strategy. But the maintenance strategy is
intended especially for large repositories, where the basic assumption
is that the set of unreachable objects will be significantly dwarfed by
the number of reachable objects.
If this assumption is ever proven to be too disadvantageous we could for
example introduce a time-based strategy: if the largest packfile has not
been touched for longer than $T, we perform an all-into-one repack. But
for now, such a mechanism is deferred into the future as it is not clear
yet whether it is needed in the first place.
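To illustrate, opting into the new strategy is just a matter of setting
the existing configuration knob (the strategy name comes from this
commit; the rest is standard git-maintenance(1) usage):

    git config maintenance.strategy geometric
    git maintenance start                    # register for scheduled runs
    git maintenance run --schedule=daily     # or run a scheduled pass by hand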
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Acked-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
While the user can pick the "incremental" maintenance strategy, it is
not possible to explicitly use the "gc" strategy. This has two
downsides:
- It is impossible to use the default "gc" strategy for a specific
repository when the strategy was globally set to a different strategy.
- It is not possible to use git-gc(1) for scheduled maintenance.
Address these issues by making the "gc" strategy configurable.
Furthermore, extend the strategy so that git-gc(1) runs for both manual
and scheduled maintenance.
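For example (a rough illustration of the above), one could now keep a
global default but opt a single repository back into plain git-gc(1)
behaviour:

    git config --global maintenance.strategy incremental
    git -C /path/to/big-repo config maintenance.strategy gc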
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Acked-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The "maintenance.strategy" configuration allows users to configure how
Git is supposed to perform repository maintenance. The idea is that we
provide a set of high-level strategies that may be useful in different
contexts, like for example when handling a large monorepo. Furthermore,
the strategy can be tweaked by the user by overriding specific tasks.
In its current form though, the strategy only applies to scheduled
maintenance. This creates something of a gap, as scheduled and manual
maintenance will now use _different_ strategies, with the latter
continuing to use git-gc(1) by default. On the one hand, this makes the
strategies far less useful than they could be. But even more importantly,
the two strategies might clash with one another, where one performs
maintenance in such a way that it discards the benefits of the other.
So ideally, it should be possible to pick one strategy that then applies
globally to all the different ways that we perform maintenance. This
doesn't necessarily mean that the strategy always does the _same_ thing
for every maintenance type. But it means that the strategy can configure
the different types to work in tandem with each other.
Change the meaning of "maintenance.strategy" accordingly so that the
strategy is applied to both types, manual and scheduled. As preceding
commits have introduced logic to run maintenance tasks depending on this
type we can tweak strategies so that they perform those tasks depending
on the context.
Note that this raises the question of backwards compatibility: when the
user has configured the "incremental" strategy we would have ignored
that strategy beforehand. Instead, repository maintenance would have
continued to use git-gc(1) by default.
But luckily, we can match that behaviour by:
- Keeping all current tasks of the incremental strategy as
`MAINTENANCE_TYPE_SCHEDULED`. This ensures that those tasks will not
run during manual maintenance.
- Configuring the "gc" task so that it is invoked during manual
maintenance.
Like this, the user shouldn't observe any difference in behaviour.
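As a rough illustration of the intended behaviour (the exact task split
follows the description above):

    git config maintenance.strategy incremental
    git maintenance run                     # manual: still runs the gc task
    git maintenance run --schedule=hourly   # scheduled: incremental tasks only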
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Acked-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
We basically have three different ways to execute repository
maintenance:
1. Manual maintenance via `git maintenance run`.
2. Automatic maintenance via `git maintenance run --auto`.
3. Scheduled maintenance via `git maintenance run --schedule=`.
At the moment, maintenance strategies only have an effect for the last
type of maintenance. This is about to change in subsequent commits, but
to do so we need to be able to skip some tasks depending on how exactly
maintenance was invoked.
Introduce a new maintenance type that discerns between manual (1 & 2) and
scheduled (3) maintenance. Convert the `enabled` field into a bitset so
that it becomes possible to specify which tasks exactly should run in a
specific context.
The types picked for existing strategies match the status quo:
- The default strategy is only ever executed as part of a manual
maintenance run. It is not possible to use it for scheduled
maintenance.
- The incremental strategy is only ever executed as part of a
scheduled maintenance run. It is not possible to use it for manual
maintenance.
The strategies will be tweaked in subsequent commits to make use of this
new infrastructure.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Acked-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Our maintenance strategies are essentially a large array of structures,
where each of the tasks can be enabled and scheduled individually. With
the current layout though all the configuration sits on the same nesting
layer, which makes it a bit hard to discern which initialized fields
belong to what task.
Improve readability of the individual tasks by using nested designated
initializers instead.
Suggested-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Acked-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
When parsing maintenance strategies we completely ignore the
user-configured value in case it is unknown to us. This makes it
basically undiscoverable to the user that scheduled maintenance is
devolving into a no-op.
Change this to instead die when seeing an unknown maintenance strategy.
While at it, pull out the parsing logic into a separate function so that
we can reuse it in a subsequent commit.
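A rough picture of the new failure mode (the exact error wording is
illustrative):

    git config maintenance.strategy incrmental   # note the typo
    git maintenance run --schedule=hourly
    # previously: silently did nothing
    # now: dies, pointing at the unknown strategy value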
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Acked-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The geometric repacking task uses a factor of two for its geometric
sequence, meaning that each next pack must contain at least twice as
many objects as the next-smaller one. In some cases, though, it may be
helpful to tune this factor to reduce the number of packfile merges even
further, e.g. in very big repositories. But while git-repack(1)
itself supports doing this, the maintenance task does not give us a way
to tune it.
Introduce a new "maintenance.geometric-repack.splitFactor" configuration
to plug this gap.
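For example (the value is illustrative), a large repository could relax
the default factor of two:

    git config maintenance.geometric-repack.splitFactor 4
    # roughly the maintenance counterpart of "git repack --geometric=4"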
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Acked-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Introduce a new "geometric-repack" task. This task uses our geometric
repack infrastructure as provided by git-repack(1) itself, which is a
strategy that especially hosting providers tend to use to amortize the
costs of repacking objects.
There is one issue though with geometric repacks, namely that they
unconditionally pack all loose objects, regardless of whether or not
they are reachable. This is done because it means that we can completely
skip the reachability step, which significantly speeds up the operation.
But it has the big downside that we are unable to expire objects over
time.
To address this issue we thus use a split strategy in this new task:
whenever a geometric repack would merge together all packs, we instead
do an all-into-one repack. By default, these all-into-one repacks have
cruft packs enabled, so unreachable objects would now be written into
their own pack. Consequently, they won't be soaked up during geometric
repacking anymore and can be expired with the next full repack, assuming
that their expiry date has passed.
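The task plugs into the usual per-task machinery, so it can also be
exercised directly (the task name is the one introduced here; the config
and command-line knobs are the standard git-maintenance(1) ones):

    git config maintenance.geometric-repack.enabled true
    git maintenance run --task=geometric-repack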
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Acked-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
To decide whether or not a repository needs to be repacked we estimate
the number of loose objects. If the number exceeds a certain threshold
we perform the repack, otherwise we don't.
This is done via `too_many_loose_objects()`, which takes as parameter
the `struct gc_config`. This configuration is only used to determine the
threshold. In a subsequent commit we'll add another caller of this
function that wants to pass a different limit than the one stored in
that structure.
Refactor the function accordingly so that we only take the limit as
parameter instead of the whole structure.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Acked-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The global `repack` variable is used to store all command line arguments
that we eventually want to pass to git-repack(1). It is being appended
to from multiple different functions, which makes it hard to follow the
logic. Besides being hard to follow, it also makes it unnecessarily hard
to reuse this infrastructure in new code.
Refactor the code so that we store this variable on the stack and pass
a pointer to it around as needed. This is done so that we can reuse
`add_repack_all_options()` in a subsequent commit.
The refactoring itself is straight-forward. One function that deserves
attention though is `need_to_gc()`: this function determines whether or
not we need to execute garbage collection for `git gc --auto`, but also
for `git maintenance run --auto`. But besides figuring out whether we
have to perform GC, the function also sets up the `repack` arguments.
For `git gc --auto` it's trivial to adapt, as we already have the
on-stack variable at our fingertips. But for the maintenance condition
it's less obvious what to do.
As it turns out, we can just use another temporary variable there that
we then immediately discard. If we need to perform GC we execute a child
git-gc(1) process to repack objects for us, and that process will have
to recompute the arguments anyway.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Acked-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
We'd sometimes end up in run_external_diff() to do a dry-run diff (e.g.,
to find content-level changes for --quiet). We recognize this quiet mode
by seeing the lack of DIFF_FORMAT_PATCH in the output format.
But since introducing an explicit dry-run check via 3ed5d8bd73 (diff:
stop output garbled message in dry run mode, 2025-10-20), this logic can
never trigger. We can only get to this function by calling
diff_flush_patch(), and that comes from only two places:
1. A dry-run flush comes from diff_flush_patch_quietly(), which is
always in dry-run mode (so the other half of our "||" is true
anyway).
2. A regular flush comes from diff_flush_patch_all_file_pairs(),
which is only called when output_format has DIFF_FORMAT_PATCH in
it.
So we can simplify our "quiet" condition to just checking dry-run mode
(which used to be a specific flag, but recently became just a NULL
"file" pointer). And since it's so simple, we can just do that inline.
This makes the logic about o->file more obvious, since we handle the
NULL and non-stdout cases next to each other.
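(For context, the user-visible mode that all of this dry-run machinery
serves is the familiar one, nothing new here:

    git diff --quiet && echo "no content changes" || echo "changes present"

where only the exit code matters and no patch text may be produced.)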
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
As an added protection against dry-run diffs accidentally producing
output, we redirect diff_options.file to /dev/null. But as of the
previous patch, this now does nothing, since dry-run diffs are
implemented by setting "file" to NULL.
So we can drop this extra code with no change in behavior. This is
effectively a revert of 623f7af284 (diff: restore redirection to
/dev/null for diff_from_contents, 2025-10-17) and 3da4413dbc (diff: make
sure the other caller of diff_flush_patch_quietly() is silent,
2025-10-22), but:
1. We get a conflict because we already dropped the color_moved
handling in an earlier patch. But we just resolve the conflicts to
"theirs" (removing all of the code).
2. We retain the test from 623f7af284.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
We introduced a dry_run flag to diff_options in b55e6d36eb (diff: ensure
consistent diff behavior with ignore options, 2025-08-08), with the idea
that the lower-level diff code could skip output when it is set.
As we saw with the bugs fixed by 3ed5d8bd73 (diff: stop output garbled
message in dry run mode, 2025-10-20), it is easy to miss spots. In the
end, we located all of them by checking where diff_options.file is used.
That suggests another possible approach: we can replace the dry_run
boolean with a NULL pointer for "file", as we know that using "file" in
dry_run mode would always be an error. This turns any missed spots from
producing extra output[1] into a segfault. Which is less forgiving, but
that is the point: this is indicative of a programming error, and
complaining loudly and immediately is good.
[1] We protect ourselves against garbled output as a separate step,
courtesy of 623f7af284 (diff: restore redirection to /dev/null for
diff_from_contents, 2025-10-17). So in that sense this patch can
only introduce user-visible errors (since any "bugs" were going to
/dev/null before), but the idea is to catch them rather than quietly
send garbage to /dev/null.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
When running a dry-run content-level diff to check whether a "--quiet"
diff has any changes, we have always unset the color_moved variable
since the feature was added in 2e2d5ac184 (diff.c: color moved lines
differently, 2017-06-30). The reasoning is not given explicitly there,
but presumably the idea is that since color_moved requires a lot of
extra computation to match lines but does not actually affect the
found_changes flag, we want to skip it.
Later, in 3da4413dbc (diff: make sure the other caller of
diff_flush_patch_quietly() is silent, 2025-10-22) we copied the same
idea for other dry-run diffs.
But neither spot actually needs to reset this flag at all, because
diff_flush_patch() will not ever compute color_moved. Nor could it, as
it is only looking at a single file-pair, and we detect moves across
files. So color_moved is checked only when we are actually doing real
DIFF_FORMAT_PATCH output, and call diff_flush_patch_all_file_pairs().
So we can get rid of these extra lines to save and restore the
color_moved flag without changing the behavior at all. (Note that there
is no "restore" to drop for the second caller, as we know at that point
we are not generating any output and can just leave the feature
disabled).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Diff output usually goes to the process stdout, but it can be redirected
with the "--output" option. We store this in the "file" pointer of
diff_options, and all of the diff code should write there instead of to
stdout.
But there's one spot we missed: running an external diff cmd. We don't
redirect its output at all, so it just defaults to the stdout of the
parent process. We should instead point its stdout at our output file.
There are a few caveats to watch out for when doing so:
- The stdout field takes a descriptor, not a FILE pointer. We can pull
out the descriptor with fileno().
- The run-command API always closes the stdout descriptor we pass to
it. So we must duplicate it (otherwise we break the FILE pointer,
since it now points to a closed descriptor).
- We don't need to worry about closing our dup'd descriptor, since the
point is that run-command will do it for us (even in the case of an
error). But we do need to make sure we skip the dup() if we set
no_stdout (because then run-command will not look at it at all).
- When the output is going to stdout, it would not be wrong to dup()
the descriptor, but we don't need to. We can skip that extra work
with a simple pointer comparison.
- It seems like you'd need to fflush() the stream before handing
off a copy to the child process to prevent out-of-order writes. But
that was true even before this patch! It works because run-command
always calls fflush(NULL) before running the child.
The new test shows the breakage (and fix). The need for duplicating the
descriptor doesn't need a new test; that is covered by the later test
"GIT_EXTERNAL_DIFF with more than one changed files".
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|