path: root/builtin/fetch.c
Age  Commit message  Author  Files  Lines
2022-07-18  Merge branch 'ab/cocci-unused'  (Junio C Hamano, 1 file, -2/+1)
Add Coccinelle rules to detect the pattern of initializing and then finalizing a structure without using it in between at all, which tends to happen after code restructuring and which compilers fail to recognize as an unused variable.

* ab/cocci-unused:
  cocci: generalize "unused" rule to cover more than "strbuf"
  cocci: add and apply a rule to find "unused" strbufs
  cocci: have "coccicheck{,-pending}" depend on "coccicheck-test"
  cocci: add a "coccicheck-test" target and test *.cocci rules
  Makefile & .gitignore: ignore & clean "git.res", not "*.res"
  Makefile: remove mandatory "spatch" arguments from SPATCH_FLAGS
2022-07-11  Merge branch 'ds/branch-checked-out'  (Junio C Hamano, 1 file, -29/+17)
Introduce a helper to see if a branch is already being worked on (hence should not be newly checked out in a working tree), which performs much better than the existing find_shared_symref() and replaces many uses of the latter.

* ds/branch-checked-out:
  branch: drop unused worktrees variable
  fetch: stop passing around unused worktrees variable
  branch: fix branch_checked_out() leaks
  branch: use branch_checked_out() when deleting refs
  fetch: use new branch_checked_out() and add tests
  branch: check for bisects and rebases
  branch: add branch_checked_out() helper
2022-07-06  cocci: add and apply a rule to find "unused" strbufs  (Ævar Arnfjörð Bjarmason, 1 file, -2/+1)
Add a coccinelle rule to remove "struct strbuf" initialization followed by calling "strbuf_release()" function, without any uses of the strbuf in the same function. See the tests in contrib/coccinelle/tests/unused.{c,res} for what it's intended to find and replace. The inclusion of "contrib/scalar/scalar.c" is because "spatch" was manually run on it (we don't usually run spatch on contrib). Per the "buggy code" comment we also match a strbuf_init() before the xmalloc(), but we're not seeking to be so strict as to make checks that the compiler will catch for us redundant. Saying we'll match either "init" or "xmalloc" lines makes the rule simpler. Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
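For readers unfamiliar with the pattern, here is a small illustration of the kind of leftover the rule is meant to find (illustrative C only, not taken from the rule or from git's code):

    static void leftover_after_refactoring(void)
    {
            struct strbuf buf = STRBUF_INIT;   /* flagged: "buf" is never used below */

            /* ... remaining work that no longer touches "buf" ... */

            strbuf_release(&buf);              /* flagged: released without any use */
    }

Applying the rule removes the declaration, the initialization and the strbuf_release() call, leaving only the real work of the function.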
2022-06-21  fetch: stop passing around unused worktrees variable  (Jeff King, 1 file, -15/+9)
In 12d47e3b1f (fetch: use new branch_checked_out() and add tests, 2022-06-14), fetch's update_local_ref() function stopped using its "worktrees" parameter. It doesn't need it, since the branch_checked_out() function examines the global worktrees under the hood. So we can not only drop the unused parameter from that function, but also from its entire call chain. And as we do so all the way up to do_fetch(), we can see that nobody uses it at all, and we can drop the local variable there entirely. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-06-15  fetch: use new branch_checked_out() and add tests  (Derrick Stolee, 1 file, -14/+8)
When fetching refs from a remote, it is possible that the refspec will cause us to overwrite a ref that is checked out in a worktree. The existing logic in builtin/fetch.c uses a possibly-slow mechanism. Update those sections to use the new, more efficient branch_checked_out() helper.

These uses were not previously tested, so add a test case that can be used for these kinds of collisions. There is only one test now, but more tests will be added as other consumers of branch_checked_out() are added.

Note that there are two uses in builtin/fetch.c, but only one of the messages is tested. This is because the tested check is run before completing the fetch, and the untested check is not reachable without concurrent updates to the filesystem. Thus, it is beneficial to keep that extra check for the sake of defense-in-depth. However, we should not attempt to test the check, as the setup required is too complicated to be worth the effort.

This use in update_local_ref() also requires a change in the error message because we no longer have access to the worktree struct, only the path of the worktree. This error is so rare that making a distinction between the two is not critical.

Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
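A hedged sketch of how such a guard can look with the new helper from branch.h; the wrapper name and error wording below are illustrative, not necessarily what builtin/fetch.c ends up with:

    static int refuse_checked_out_update(const struct ref *ref)
    {
            const char *wt_path = branch_checked_out(ref->name);

            if (!wt_path)
                    return 0;       /* not the current branch of any worktree */
            return error(_("refusing to fetch into branch '%s' "
                           "checked out at '%s'"), ref->name, wt_path);
    }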
2022-05-25  Merge branch 'jc/avoid-redundant-submodule-fetch'  (Junio C Hamano, 1 file, -1/+15)
"git fetch --recurse-submodules" from multiple remotes (either from a remote group, or "--all") used to make one extra "git fetch" in the submodules, which has been corrected.

* jc/avoid-redundant-submodule-fetch:
  fetch: do not run a redundant fetch from submodule
2022-05-18  fetch: do not run a redundant fetch from submodule  (Junio C Hamano, 1 file, -1/+15)
When 7dce19d3 (fetch/pull: Add the --recurse-submodules option, 2010-11-12) introduced the "--recurse-submodule" option, the approach taken was to perform fetches in submodules only once, after all the main fetching (it may usually be a fetch from a single remote, but it could be fetching from a group of remotes using fetch_multiple()) succeeded. Later we added "--all" to fetch from all defined remotes, which complicated things even more.

If your project has a submodule, and you try to run "git fetch --recurse-submodule --all", you'd see a fetch for the top-level, which invokes another fetch for the submodule, followed by another fetch for the same submodule. All but the last fetch for the submodule come from a "git fetch --recurse-submodules" subprocess that is spawned via the fetch_multiple() interface for the remotes, and the last fetch comes from the code at the end.

Because recursive fetching from submodules is done in each fetch for the top-level in fetch_multiple(), the last fetch in the submodule is redundant. It only matters when fetch_one() interacts with a single remote at the top-level.

While we are at it, there is one optimization that exists in dealing with a group of remotes, but is missing when "--all" is used. In the former, when the group turns out to be a group of one, instead of spawning "git fetch" as a subprocess via the fetch_multiple() interface, we use the normal fetch_one() code path. Do the same when handling "--all", if it turns out that we have only one remote defined.

Reviewed-by: Glen Choo <chooglen@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-05-16  fetch: limit shared symref check only for local branches  (Orgad Shaneh, 1 file, -0/+1)
This check was introduced in 8ee5d73137f (Fix fetch/pull when run without --update-head-ok, 2008-10-13) in order to protect against replacing the ref of the active branch by mistake, for example by running git fetch origin master:master. It was later extended in 8bc1f39f411 (fetch: protect branches checked out in all worktrees, 2021-12-01) to scan all worktrees. This operation is very expensive (takes about 30s in my repository) when there are many tags or branches, and it is executed on every fetch, even if no local heads are updated at all. Limit it to protect only refs/heads/* to improve fetch performance. Signed-off-by: Orgad Shaneh <orgads@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-04-04  Merge branch 'rc/fetch-refetch'  (Junio C Hamano, 1 file, -2/+32)
"git fetch --refetch" learned to fetch everything without telling the other side what we already have, which is useful when you cannot trust what you have in the local object store.

* rc/fetch-refetch:
  docs: mention --refetch fetch option
  fetch: after refetch, encourage auto gc repacking
  t5615-partial-clone: add test for fetch --refetch
  fetch: add --refetch option
  builtin/fetch-pack: add --refetch option
  fetch-pack: add refetch
  fetch-negotiator: add specific noop initializer
2022-03-28  fetch: after refetch, encourage auto gc repacking  (Robert Coup, 1 file, -1/+18)
After invoking `fetch --refetch`, the object db will likely contain many duplicate objects. If auto-maintenance is enabled, invoke it with appropriate settings to encourage repacking/consolidation.

* gc.autoPackLimit: unless this is set to 0 (disabled), override the value to 1 to force pack consolidation.
* maintenance.incremental-repack.auto: unless this is set to 0, override the value to -1 to force incremental repacking.

Signed-off-by: Robert Coup <robert@coup.net.nz>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
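A sketch of the kind of override described above, using git's internal config API (git_config_get_int() and git_config_push_parameter()); treat it as an approximation rather than the exact upstream hunk, and the "refetched" flag name as an assumption:

    if (refetched) {    /* hypothetical flag set when --refetch was used */
            int opt_val;

            /* force pack consolidation unless the user disabled it (0) */
            if (git_config_get_int("gc.autopacklimit", &opt_val))
                    opt_val = -1;
            if (opt_val != 0)
                    git_config_push_parameter("gc.autoPackLimit=1");

            /* likewise encourage incremental repacking unless disabled */
            if (git_config_get_int("maintenance.incremental-repack.auto", &opt_val))
                    opt_val = -1;
            if (opt_val != 0)
                    git_config_push_parameter("maintenance.incremental-repack.auto=-1");
    }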
2022-03-28  fetch: add --refetch option  (Robert Coup, 1 file, -1/+14)
Teach fetch and transports the --refetch option to force a full fetch without negotiating common commits with the remote. Use when applying a new partial clone filter to refetch all matching objects. Signed-off-by: Robert Coup <robert@coup.net.nz> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-03-25  Merge branch 'gc/recursive-fetch-with-unused-submodules'  (Junio C Hamano, 1 file, -7/+7)
When "git fetch --recurse-submodules" grabbed submodule commits that would be needed to recursively check out newly fetched commits in the superproject, it only paid attention to submodules that are in the current checkout of the superproject. We now do so for all submodules on which "git submodule init" has been run.

* gc/recursive-fetch-with-unused-submodules:
  submodule: fix latent check_has_commit() bug
  fetch: fetch unpopulated, changed submodules
  submodule: move logic into fetch_task_create()
  submodule: extract get_fetch_task()
  submodule: store new submodule commits oid_array in a struct
  submodule: inline submodule_commits() into caller
  submodule: make static functions read submodules from commits
  t5526: create superproject commits with test helper
  t5526: stop asserting on stderr literally
  t5526: introduce test helper to assert on fetches
2022-03-16  Merge branch 'ps/fetch-mirror-optim'  (Junio C Hamano, 1 file, -15/+27)
Various optimizations for "git fetch".

* ps/fetch-mirror-optim:
  refs/files-backend: optimize reading of symbolic refs
  remote: read symbolic refs via `refs_read_symbolic_ref()`
  refs: add ability for backends to special-case reading of symbolic refs
  fetch: avoid lookup of commits when not appending to FETCH_HEAD
  upload-pack: look up "want" lines via commit-graph
2022-03-16  fetch: fetch unpopulated, changed submodules  (Glen Choo, 1 file, -7/+7)
"git fetch --recurse-submodules" only considers populated submodules (i.e. submodules that can be found by iterating the index), which makes "git fetch" behave differently based on which commit is checked out. As a result, even if the user has initialized all submodules correctly, they may not fetch the necessary submodule commits, and commands like "git checkout --recurse-submodules" might fail. Teach "git fetch" to fetch cloned, changed submodules regardless of whether they are populated. This is in addition to the current behavior of fetching populated submodules (which is always attempted regardless of what was fetched in the superproject, or even if nothing was fetched in the superproject). A submodule may be encountered multiple times (via the list of populated submodules or via the list of changed submodules). When this happens, "git fetch" only reads the 'populated copy' and ignores the 'changed copy'. Amend the verify_fetch_result() test helper so that we can assert on which 'copy' is being read. Signed-off-by: Glen Choo <chooglen@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-03-13  Merge branch 'ps/fetch-atomic'  (Junio C Hamano, 1 file, -61/+129)
"git fetch" can make two separate fetches, but ref updates coming from them were in two separate ref transactions under "--atomic", which has been corrected.

* ps/fetch-atomic:
  fetch: make `--atomic` flag cover pruning of refs
  fetch: make `--atomic` flag cover backfilling of tags
  refs: add interface to iterate over queued transactional updates
  fetch: report errors when backfilling tags fails
  fetch: control lifecycle of FETCH_HEAD in a single place
  fetch: backfill tags before setting upstream
  fetch: increase test coverage of fetches
2022-03-01  fetch: avoid lookup of commits when not appending to FETCH_HEAD  (Patrick Steinhardt, 1 file, -15/+27)
When fetching from a remote repository we will by default write what has been fetched into the special FETCH_HEAD reference. The order in which references are written depends on whether the reference is for merge or not, which, despite some other conditions, is also determined based on whether the old object ID the reference is being updated from actually exists in the repository. To write FETCH_HEAD we thus loop through all references thrice: once for the references that are about to be merged, once for the references that are not for merge, and finally for all references that are ignored. For every iteration, we then look up the old object ID to determine whether the referenced object exists so that we can label it as "not-for-merge" if it doesn't exist.

It goes without saying that this can be expensive in cases where we are fetching a lot of references. While this is hard to avoid in the case where we're writing FETCH_HEAD, users can in fact ask us to skip this work via `--no-write-fetch-head`. In that case, we do not care for the result of those lookups at all because we don't have to order writes to FETCH_HEAD in the first place.

Skip this busywork in case we're not writing to FETCH_HEAD. The following benchmark performs a mirror-fetch in a repository with about two million references via `git fetch --prune --no-write-fetch-head +refs/*:refs/*`:

    Benchmark 1: HEAD~
      Time (mean ± σ):     75.388 s ±  1.942 s    [User: 71.103 s, System: 8.953 s]
      Range (min … max):   73.184 s … 76.845 s    3 runs

    Benchmark 2: HEAD
      Time (mean ± σ):     69.486 s ±  1.016 s    [User: 65.941 s, System: 8.806 s]
      Range (min … max):   68.864 s … 70.659 s    3 runs

    Summary
      'HEAD' ran
        1.08 ± 0.03 times faster than 'HEAD~'

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
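The shape of the optimization, as a hedged sketch (the wrapper name and parameter are illustrative; the field and enum value are the ones declared for "struct ref" in remote.h):

    static struct commit *lookup_for_fetch_head(struct ref *rm, int write_fetch_head)
    {
            struct commit *commit;

            if (!write_fetch_head)
                    return NULL;    /* no FETCH_HEAD: the merge/not-for-merge order is moot */

            commit = lookup_commit_reference_gently(the_repository, &rm->old_oid, 1);
            if (!commit)
                    rm->fetch_head_status = FETCH_HEAD_NOT_FOR_MERGE;
            return commit;
    }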
2022-03-01  Merge branch 'ps/fetch-atomic' into ps/fetch-mirror-optim  (Junio C Hamano, 1 file, -61/+129)
* ps/fetch-atomic:
  fetch: make `--atomic` flag cover pruning of refs
  fetch: make `--atomic` flag cover backfilling of tags
  refs: add interface to iterate over queued transactional updates
  fetch: report errors when backfilling tags fails
  fetch: control lifecycle of FETCH_HEAD in a single place
  fetch: backfill tags before setting upstream
  fetch: increase test coverage of fetches
2022-02-25  Merge branch 'ja/i18n-common-messages'  (Junio C Hamano, 1 file, -2/+2)
Unify more messages to help l10n.

* ja/i18n-common-messages:
  i18n: fix some misformated placeholders in command synopsis
  i18n: remove from i18n strings that do not hold translatable parts
  i18n: factorize "invalid value" messages
  i18n: factorize more 'incompatible options' messages
2022-02-23  Merge branch 'ps/fetch-optim-with-commit-graph'  (Junio C Hamano, 1 file, -2/+6)
A couple of optimizations to "git fetch".

* ps/fetch-optim-with-commit-graph:
  fetch: skip computing output width when not printing anything
  fetch-pack: use commit-graph when computing cutoff
2022-02-18  Merge branch 'js/short-help-outside-repo-fix'  (Junio C Hamano, 1 file, -2/+4)
"git cmd -h" outside a repository should error out cleanly for many commands, but instead it hit a BUG(), which has been corrected.

* js/short-help-outside-repo-fix:
  t0012: verify that built-ins handle `-h` even without gitdir
  checkout/fetch/pull/pack-objects: allow `-h` outside a repository
2022-02-18  Merge branch 'ab/release-transport-ls-refs-options'  (Junio C Hamano, 1 file, -1/+1)
* ab/release-transport-ls-refs-options:
  ls-remote & transport API: release "struct transport_ls_refs_options"
2022-02-17  fetch: make `--atomic` flag cover pruning of refs  (Patrick Steinhardt, 1 file, -8/+22)
When fetching with the `--prune` flag we will delete any local references matching the fetch refspec which have disappeared on the remote. This step is not currently covered by the `--atomic` flag: we delete branches even though updating of local references has failed, which means that the fetch is not an all-or-nothing operation.

Fix this bug by passing the global transaction into `prune_refs()`: if one is given, then we'll only queue up deletions and not commit them right away.

This change also improves performance when pruning many branches in a repository with a big packed-refs file: every reference is pruned in its own transaction, which means that we potentially have to rewrite the packed-refs file for every single reference we're about to prune. The following benchmark demonstrates this: it performs a pruning fetch from a repository with a single reference into a repository with 100k references, which causes us to prune all but one reference. This is of course a very artificial setup, but serves to demonstrate the impact of only having to write the packed-refs file once:

    Benchmark 1: git fetch --prune --atomic +refs/*:refs/* (HEAD~)
      Time (mean ± σ):      2.366 s ±  0.021 s    [User: 0.858 s, System: 1.508 s]
      Range (min … max):    2.328 s …  2.407 s    10 runs

    Benchmark 2: git fetch --prune --atomic +refs/*:refs/* (HEAD)
      Time (mean ± σ):      1.369 s ±  0.017 s    [User: 0.715 s, System: 0.641 s]
      Range (min … max):    1.346 s …  1.400 s    10 runs

    Summary
      'git fetch --prune --atomic +refs/*:refs/* (HEAD)' ran
        1.73 ± 0.03 times faster than 'git fetch --prune --atomic +refs/*:refs/* (HEAD~)'

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
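A hedged sketch of the pruning side of this change, simplified to a per-ref helper; the upstream code batches deletions differently, and the helper name here is made up:

    static int prune_one_ref(struct ref_transaction *transaction,
                             const struct ref *ref, struct strbuf *err)
    {
            if (transaction)        /* --atomic: queue only; the commit happens once, later */
                    return ref_transaction_delete(transaction, ref->name,
                                                  &ref->old_oid, 0,
                                                  "fetch: prune", err);
            /* non-atomic path: delete the ref right away, as before */
            return delete_ref("fetch: prune", ref->name, &ref->old_oid, 0);
    }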
2022-02-17  fetch: make `--atomic` flag cover backfilling of tags  (Patrick Steinhardt, 1 file, -26/+66)
When fetching references from a remote we by default also fetch all tags which point into the history we have fetched. This is a separate step performed after updating local references because it requires us to walk over the history on the client-side to determine whether the remote has announced any tags which point to one of the fetched commits. This backfilling of tags isn't covered by the `--atomic` flag: right now, it only applies to the step where we update our local references. This is an oversight at the time the flag was introduced: its purpose is to either update all references or none, but right now we happily update local references even in the case where backfilling failed. Fix this by pulling up creation of the reference transaction such that we can pass the same transaction to both the code which updates local references and to the code which backfills tags. This allows us to only commit the transaction in case both actions succeed. Note that we also have to start passing the transaction into `find_non_local_tags()`: this function is responsible for finding all tags which we need to backfill. Right now, it will happily return tags which have already been updated with our local references. But when we use a single transaction for both local references and backfilling then it may happen that we try to queue the same reference update twice to the transaction, which consequently triggers a bug. We thus have to skip over any tags which have already been queued. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-02-17  fetch: report errors when backfilling tags fails  (Patrick Steinhardt, 1 file, -8/+18)
When the backfilling of tags fails we do not report this error to the caller, but only report it implicitly at a later point when reporting updated references. This leaves callers unable to act upon the information of whether the backfilling succeeded or not. Refactor the function to return an error code and pass it up the callstack. This causes us to correctly propagate the error back to the user of git-fetch(1). Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-02-17  fetch: control lifecycle of FETCH_HEAD in a single place  (Patrick Steinhardt, 1 file, -16/+19)
There are two different locations where we're appending to FETCH_HEAD: first when storing updated references, and second when backfilling tags. Both times we open the file, append to it and then commit it into place, which is essentially duplicate work. Improve the lifecycle of updating FETCH_HEAD by opening and committing it once in `do_fetch()`, where we pass the structure down to the code which wants to append to it. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-02-17  fetch: backfill tags before setting upstream  (Patrick Steinhardt, 1 file, -17/+18)
The fetch code flow is a bit hard to understand right now:

1. We optionally prune all references which have vanished on the remote side.
2. We fetch and update all other references locally.
3. We update the upstream branch in the gitconfig.
4. We backfill tags pointing into the history we have just fetched.

It is quite confusing that we fetch objects and update references in both (2) and (4), which is further stressed by the point that we use a `skip` goto label to jump from (3) to (4) in case we fail to update the gitconfig as expected.

Reorder the code to first update all local references, and only after we have done so update the upstream branch information. This improves the code flow and furthermore makes it easier to refactor the way we update references together.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-02-11  Merge branch 'tg/fetch-prune-exit-code-fix'  (Junio C Hamano, 1 file, -4/+6)
When "git fetch --prune" failed to prune the refs it wanted to prune, the command issued error messages but exited with exit status 0, which has been corrected.

* tg/fetch-prune-exit-code-fix:
  fetch --prune: exit with error if pruning fails
2022-02-11  Merge branch 'rc/negotiate-only-typofix'  (Junio C Hamano, 1 file, -1/+1)
Typofix.

* rc/negotiate-only-typofix:
  fetch: fix negotiate-only error message
2022-02-10  fetch: skip computing output width when not printing anything  (Patrick Steinhardt, 1 file, -2/+6)
When updating references via git-fetch(1), by default we report to the user which references have been changed. This output is formatted in a nice table such that the different columns are aligned. Because the first column contains abbreviated object IDs we thus need to iterate over all refs which have changed and compute the minimum length for their respective abbreviated hashes. While this effort makes sense in most cases, it is wasteful when the user passes the `--quiet` flag: we don't print the summary, but still compute the length.

Skip computing the summary width when the user asked for us to be quiet. This gives us a speedup of nearly 10% when doing a mirror-fetch in a repository with thousands of references being updated:

    Benchmark 1: git fetch --quiet +refs/*:refs/* (HEAD~)
      Time (mean ± σ):     96.078 s ±  0.508 s    [User: 91.378 s, System: 10.870 s]
      Range (min … max):   95.449 s … 96.760 s    5 runs

    Benchmark 2: git fetch --quiet +refs/*:refs/* (HEAD)
      Time (mean ± σ):     88.214 s ±  0.192 s    [User: 83.274 s, System: 10.978 s]
      Range (min … max):   87.998 s … 88.446 s    5 runs

    Summary
      'git fetch --quiet +refs/*:refs/* (HEAD)' ran
        1.09 ± 0.01 times faster than 'git fetch --quiet +refs/*:refs/* (HEAD~)'

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-02-09  Merge branch 'gc/fetch-negotiate-only-early-return'  (Junio C Hamano, 1 file, -3/+38)
"git fetch --negotiate-only" is an internal command used by "git push" to figure out which part of our history is missing from the other side. It should never recurse into submodules even when the fetch.recursesubmodules configuration variable is set, nor should it trigger "gc". The code has been tightened up to ensure it only does common ancestry discovery and nothing else.

* gc/fetch-negotiate-only-early-return:
  fetch: help translators by reusing the same message template
  fetch --negotiate-only: do not update submodules
  fetch: skip tasks related to fetching objects
  fetch: use goto cleanup in cmd_fetch()
2022-02-08  checkout/fetch/pull/pack-objects: allow `-h` outside a repository  (Johannes Schindelin, 1 file, -2/+4)
When we taught these commands about the sparse index, we did not account for the fact that the `cmd_*()` functions _can_ be called without a gitdir, namely when `-h` is passed to show the usage. A plausible approach to address this is to move the `prepare_repo_settings()` calls right after the `parse_options()` calls: The latter will never return when it handles `-h`, and therefore it is safe to assume that we have a `gitdir` at that point, as long as the built-in is marked with the `RUN_SETUP` flag. However, it is unfortunately not that simple. In `cmd_pack_objects()`, for example, the repo settings need to be fully populated so that the command-line options `--sparse`/`--no-sparse` can override them, not the other way round. Therefore, we choose to imitate the strategy taken in `cmd_diff()`, where we simply do not bother to prepare and initialize the repo settings unless we have a `gitdir`. This fixes https://github.com/git-for-windows/git/issues/3688 Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
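In fetch's case, the cmd_diff()-style guard looks roughly like this (a hedged sketch; the sparse-index line reflects what these commands already opt into when a repository is present):

    	/* imitating cmd_diff(): only touch repo settings when a gitdir exists */
    	if (the_repository->gitdir) {
    		prepare_repo_settings(the_repository);
    		the_repository->settings.command_requires_full_index = 0;
    	}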
2022-02-06  ls-remote & transport API: release "struct transport_ls_refs_options"  (Ævar Arnfjörð Bjarmason, 1 file, -1/+1)
Fix a memory leak in codepaths that use the "struct transport_ls_refs_options" API. Since the introduction of the struct in 39835409d10 (connect, transport: encapsulate arg in struct, 2021-02-05) the caller has been responsible for freeing it. That commit in turn migrated code originally added in 402c47d9391 (clone: send ref-prefixes when using protocol v2, 2018-07-20) and b4be74105fe (ls-remote: pass ref prefixes when requesting a remote's refs, 2018-03-15). Only some of those codepaths were releasing the allocated resources of the struct, now all of them will. Mark the "t/t5511-refspec.sh" test as passing when git is compiled with SANITIZE=leak. They'll now be listed as running under the "GIT_TEST_PASSING_SANITIZE_LEAK=true" test mode (the "linux-leaks" CI target). Previously 24/47 tests would fail. Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-02-04  i18n: factorize "invalid value" messages  (Jean-Noël Avila, 1 file, -2/+2)
Use the same message when an invalid value is passed to a command line option or a configuration variable. Signed-off-by: Jean-Noël Avila <jn.avila@free.fr> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-01-31  fetch --prune: exit with error if pruning fails  (Thomas Gummerer, 1 file, -4/+6)
When pruning refs fails, we print an error to stderr, but still exit 0 from 'git fetch'. Since this is a genuine error, fetch should be exiting with some non-zero exit code. Make it so. The --prune option was introduced in f360d844de ("builtin-fetch: add --prune option", 2009-11-10). Unfortunately it's unclear from that commit whether ignoring the exit code was an oversight or intentional, but it feels like an oversight. Helped-by: Johannes Schindelin <johannes.schindelin@gmx.de> Signed-off-by: Thomas Gummerer <t.gummerer@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-01-28  fetch: fix negotiate-only error message  (Robert Coup, 1 file, -1/+1)
The error message when invoking a negotiate-only fetch without providing any tips incorrectly refers to a --negotiate-tip=* argument. Fix this to use the actual argument, --negotiation-tip=*. Signed-off-by: Robert Coup <robert@coup.net.nz> Reviewed-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-01-20  fetch: help translators by reusing the same message template  (Junio C Hamano, 1 file, -1/+2)
Follow the example set by 12909b6b (i18n: turn "options are incompatible" into "cannot be used together", 2022-01-05) and use the same message string to reduce the need for translation. Reported-by: Jiang Xin <worldhello.net@gmail.com> Helped-by: Glen Choo <chooglen@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-01-18  fetch --negotiate-only: do not update submodules  (Glen Choo, 1 file, -1/+23)
`git fetch --negotiate-only` is an implementation detail of push negotiation and, unlike most `git fetch` invocations, does not actually update the main repository. Thus it should not update submodules even if submodule recursion is enabled. This is not just slow, it is wrong e.g. push negotiation with "submodule.recurse=true" will cause submodules to be updated because it invokes `git fetch --negotiate-only`. Fix this by disabling submodule recursion if --negotiate-only was given. Since this makes --negotiate-only and --recurse-submodules incompatible, check for this invalid combination and die. This does not use the "goto cleanup" introduced in the previous commit because we want to recurse through submodules whenever a ref is fetched, and this can happen without introducing new objects. Signed-off-by: Glen Choo <chooglen@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
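A simplified sketch of the two-part fix (the incompatibility check plus the override); the enum values come from submodule-config.h, while the flag used to detect an explicit --recurse-submodules is a placeholder:

    	if (negotiate_only) {
    		if (recurse_submodules_given)	/* hypothetical: user passed --recurse-submodules */
    			die(_("options '%s' and '%s' cannot be used together"),
    			    "--negotiate-only", "--recurse-submodules");
    		/* negotiation never updates the main repository, so never recurse */
    		recurse_submodules = RECURSE_SUBMODULES_OFF;
    	}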
2022-01-18  fetch: skip tasks related to fetching objects  (Glen Choo, 1 file, -0/+11)
cmd_fetch() does the following with the assumption that objects are fetched:

* Run gc
* Write commit graphs (if enabled by fetch.writeCommitGraph=true)

However, neither of these tasks makes sense if objects are not fetched, e.g. `git fetch --negotiate-only` never fetches objects.

Speed up cmd_fetch() by bailing out early if we know for certain that objects will not be fetched. cmd_fetch() can bail out early whenever objects are not fetched, but for now this only considers --negotiate-only. The same optimization does not apply to `git fetch --dry-run` because that actually fetches objects; the dry run refers to not updating refs.

Signed-off-by: Glen Choo <chooglen@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-01-18  fetch: use goto cleanup in cmd_fetch()  (Glen Choo, 1 file, -3/+4)
Replace an early return with 'goto cleanup' in cmd_fetch() so that the string_list is always cleared (the string_list_clear() call is purely cleanup; the string_list is not reused). This makes cleanup consistent so that a subsequent commit can use 'goto cleanup' to bail out early. Signed-off-by: Glen Choo <chooglen@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-01-12  Merge branch 'ps/lockfile-cleanup-fix'  (Junio C Hamano, 1 file, -6/+11)
Some lockfile code called free() in a signal-death code path, which has been corrected.

* ps/lockfile-cleanup-fix:
  fetch: fix deadlock when cleaning up lockfiles in async signals
2022-01-10  Merge branch 'ja/i18n-similar-messages'  (Junio C Hamano, 1 file, -4/+4)
Similar message templates have been consolidated so that translators need to work on fewer messages.

* ja/i18n-similar-messages:
  i18n: turn even more messages into "cannot be used together" ones
  i18n: ref-filter: factorize "%(foo) atom used without %(bar) atom"
  i18n: factorize "--foo outside a repository"
  i18n: refactor "unrecognized %(foo) argument" strings
  i18n: factorize "no directory given for --foo"
  i18n: factorize "--foo requires --bar" and the like
  i18n: tag.c factorize i18n strings
  i18n: standardize "cannot open" and "cannot read"
  i18n: turn "options are incompatible" into "cannot be used together"
  i18n: refactor "%s, %s and %s are mutually exclusive"
  i18n: refactor "foo and bar are mutually exclusive"
2022-01-10  Merge branch 'ds/fetch-pull-with-sparse-index'  (Junio C Hamano, 1 file, -0/+2)
"git fetch" and "git pull" are now declared sparse-index clean. Also "git ls-files" learns the "--sparse" option to help debugging.

* ds/fetch-pull-with-sparse-index:
  test-read-cache: remove --table, --expand options
  t1091/t3705: remove 'test-tool read-cache --table'
  t1092: replace 'read-cache --table' with 'ls-files --sparse'
  ls-files: add --sparse option
  fetch/pull: use the sparse index
2022-01-07  fetch: fix deadlock when cleaning up lockfiles in async signals  (Patrick Steinhardt, 1 file, -6/+11)
When fetching packfiles, we write a bunch of lockfiles for the packfiles we're writing into the repository. In order to not leave behind any cruft in case we exit or receive a signal, we register both an exit handler as well as signal handlers for common signals like SIGINT. These handlers will then unlink the locks and free the data structure tracking them. We have observed a deadlock in this logic though:

    (gdb) bt
    #0  __lll_lock_wait_private () at ../sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:95
    #1  0x00007f4932bea2cd in _int_free (av=0x7f4932f2eb20 <main_arena>, p=0x3e3e4200, have_lock=0) at malloc.c:3969
    #2  0x00007f4932bee58c in __GI___libc_free (mem=<optimized out>) at malloc.c:2975
    #3  0x0000000000662ab1 in string_list_clear ()
    #4  0x000000000044f5bc in unlock_pack_on_signal ()
    #5  <signal handler called>
    #6  _int_free (av=0x7f4932f2eb20 <main_arena>, p=<optimized out>, have_lock=0) at malloc.c:4024
    #7  0x00007f4932bee58c in __GI___libc_free (mem=<optimized out>) at malloc.c:2975
    #8  0x000000000065afd5 in strbuf_release ()
    #9  0x000000000066ddb9 in delete_tempfile ()
    #10 0x0000000000610d0b in files_transaction_cleanup.isra ()
    #11 0x0000000000611718 in files_transaction_abort ()
    #12 0x000000000060d2ef in ref_transaction_abort ()
    #13 0x000000000060d441 in ref_transaction_prepare ()
    #14 0x000000000060e0b5 in ref_transaction_commit ()
    #15 0x00000000004511c2 in fetch_and_consume_refs ()
    #16 0x000000000045279a in cmd_fetch ()
    #17 0x0000000000407c48 in handle_builtin ()
    #18 0x0000000000408df2 in cmd_main ()
    #19 0x00000000004078b5 in main ()

The process was killed with a signal, which caused the signal handler to kick in and try to free the data structures after we have unlinked the locks. It then deadlocks while calling free(3P).

The root cause of this is that it is not allowed to call certain functions in async-signal handlers, as specified by signal-safety(7). Next to most I/O functions, this list of disallowed functions also includes memory-handling functions like malloc(3P) and free(3P) because they may not be reentrant. As a result, if we execute such functions in the signal handler, then they may operate on inconsistent state and fail in unexpected ways.

Fix this bug by not calling non-async-signal-safe functions when running in the signal handler. We're about to re-raise the signal anyway and will thus exit, so it's not much of a problem to keep the string list of lockfiles untouched. Note that it's fine though to call unlink(2), so we'll still clean up the lockfiles correctly.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Reviewed-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
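In sketch form, the fix amounts to giving the shared cleanup helper a "called from a signal handler" mode that restricts itself to async-signal-safe calls; the names below are illustrative, not the upstream identifiers:

    static struct string_list pack_lockfiles = STRING_LIST_INIT_DUP;

    static void unlock_packfiles(int in_signal_handler)
    {
            size_t i;

            for (i = 0; i < pack_lockfiles.nr; i++)
                    unlink(pack_lockfiles.items[i].string); /* unlink(2) is async-signal-safe */

            if (!in_signal_handler)
                    string_list_clear(&pack_lockfiles, 0);  /* free(3) is not, so skip it here */
    }

    static void unlock_pack_atexit(void)
    {
            unlock_packfiles(0);
    }

    static void unlock_pack_on_signal(int signo)
    {
            unlock_packfiles(1);    /* we re-raise and exit right after, leaks are fine */
            sigchain_pop(signo);
            raise(signo);
    }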
2022-01-05  i18n: standardize "cannot open" and "cannot read"  (Jean-Noël Avila, 1 file, -2/+2)
Signed-off-by: Jean-Noël Avila <jn.avila@free.fr> Reviewed-by: Johannes Sixt <j6t@kdbg.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-01-05  i18n: refactor "foo and bar are mutually exclusive"  (Jean-Noël Avila, 1 file, -2/+2)
Use static strings for constant parts of the sentences. They are all turned into "cannot be used together". Signed-off-by: Jean-Noël Avila <jn.avila@free.fr> Reviewed-by: Johannes Sixt <j6t@kdbg.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-12-22  Merge branch 'ab/fetch-set-upstream-while-detached'  (Junio C Hamano, 1 file, -0/+10)
"git fetch --set-upstream" did not check if there is a current branch, leading to a segfault when it is run on a detached HEAD, which has been corrected.

* ab/fetch-set-upstream-while-detached:
  pull, fetch: fix segfault in --set-upstream option
2021-12-22  fetch/pull: use the sparse index  (Derrick Stolee, 1 file, -0/+2)
The 'git fetch' and 'git pull' commands parse the index in order to determine if submodules exist. Without command_requires_full_index=0, this will expand a sparse index, causing slow performance even when there is no new data to fetch. The .gitmodules file will never be inside a sparse directory entry, and even if it was, the index_name_pos() method would expand the sparse index if needed as we search for the path by name. These commands do not iterate over the index, which is the typical thing we are careful about when integrating with the sparse index. Signed-off-by: Derrick Stolee <dstolee@microsoft.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-12-07  pull, fetch: fix segfault in --set-upstream option  (Ævar Arnfjörð Bjarmason, 1 file, -0/+10)
Fix a segfault in the --set-upstream option added in 24bc1a12926 (pull, fetch: add --set-upstream option, 2019-08-19) added in v2.24.0. The code added there did not do the same checking we do for "git branch" itself since 8efb8899cfe (branch: segfault fixes and validation, 2013-02-23), which in turn fixed the same sort of segfault I'm fixing now in "git branch --set-upstream-to", see 6183d826ba6 (branch: introduce --set-upstream-to, 2012-08-20). The warning message I'm adding here is an amalgamation of the error added for "git branch" in 8efb8899cfe, and the error output install_branch_config() itself emits, i.e. it trims "refs/heads/" from the name and says "branch X on remote", not "branch refs/heads/X on remote". I think it would make more sense to simply die() here, but in the other checks for --set-upstream added in 24bc1a12926 we issue a warning() instead. Let's do the same here for consistency for now. There was an earlier submitted alternate way of fixing this in [1], due to that patch breaking threading with the original report at [2] I didn't notice it before authoring this version. I think the more detailed warning message here is better, and we should also have tests for this behavior. The --no-rebase option to "git pull" is needed as of the recently merged 7d0daf3f12f (Merge branch 'en/pull-conflicting-options', 2021-08-30). 1. https://lore.kernel.org/git/20210706162238.575988-1-clemens@endorphin.org/ 2. https://lore.kernel.org/git/CAG6gW_uHhfNiHGQDgGmb1byMqBA7xa8kuH1mP-wAPEe5Tmi2Ew@mail.gmail.com/ Reported-by: Clemens Fruhwirth <clemens@endorphin.org> Reported-by: Jan Pokorný <poki@fnusa.cz> Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-12-01  fetch: protect branches checked out in all worktrees  (Anders Kaseorg, 1 file, -35/+40)
Refuse to fetch into the currently checked out branch of any working tree, not just the current one. Fixes this previously reported bug: https://lore.kernel.org/git/cb957174-5e9a-5603-ea9e-ac9b58a2eaad@mathema.de/ As a side effect of using find_shared_symref, we’ll also refuse the fetch when we’re on a detached HEAD because we’re rebasing or bisecting on the branch in question. This seems like a sensible change. Signed-off-by: Anders Kaseorg <andersk@mit.edu> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-12-01  fetch: lowercase error messages  (Anders Kaseorg, 1 file, -24/+26)
Documentation/CodingGuidelines says “do not end error messages with a full stop” and “do not capitalize the first word”. Clean up existing messages, some of which we will be touching in later steps in the series, that deviate from these rules in this file, as a preparation for the main part of the topic. Signed-off-by: Anders Kaseorg <andersk@mit.edu> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-09-20  Merge branch 'js/run-command-close-packs'  (Junio C Hamano, 1 file, -2/+0)
The run-command API has been updated so that the callers can easily ask the file descriptors open for packfiles to be closed immediately before spawning commands that may trigger auto-gc.

* js/run-command-close-packs:
  Close object store closer to spawning child processes
  run_auto_maintenance(): implicitly close the object store
  run-command: offer to close the object store before running
  run-command: prettify the `RUN_COMMAND_*` flags
  pull: release packs before fetching
  commit-graph: when closing the graph, also release the slab
2021-09-20  Merge branch 'ps/fetch-optim'  (Junio C Hamano, 1 file, -34/+40)
Optimize code that handles a large number of refs in the "git fetch" code path.

* ps/fetch-optim:
  fetch: avoid second connectivity check if we already have all objects
  fetch: merge fetching and consuming refs
  fetch: refactor fetch refs to be more extendable
  fetch-pack: optimize loading of refs via commit graph
  connected: refactor iterator to return next object ID directly
  fetch: avoid unpacking headers in object existence check
  fetch: speed up lookup of want refs via commit-graph
2021-09-10  Merge branch 'ab/retire-advice-config'  (Junio C Hamano, 1 file, -1/+1)
Code clean-up to migrate callers from the older advice_config[] based API to the newer advice_if_enabled() and advice_enabled() API.

* ab/retire-advice-config:
  advice: move advice.graftFileDeprecated squashing to commit.[ch]
  advice: remove use of global advice_add_embedded_repo
  advice: remove read uses of most global `advice_` variables
  advice: add enum variants for missing advice variables
2021-09-10  Merge branch 'ps/fetch-omit-formatting-under-quiet'  (Junio C Hamano, 1 file, -5/+12)
"git fetch --quiet" optimization to avoid useless computation of info that will never be displayed.

* ps/fetch-omit-formatting-under-quiet:
  fetch: skip formatting updated refs with `--quiet`
2021-09-09  run_auto_maintenance(): implicitly close the object store  (Johannes Schindelin, 1 file, -2/+0)
Before spawning the auto maintenance, we need to make sure that we release all open file handles to all the `.pack` files (and MIDX files and commit-graph files and...) so that the maintenance process has the freedom to delete those files. So far, we did this manually every time before calling `run_auto_maintenance()`. With the new `close_object_store` flag, we can do that implicitly in that function, which is more robust because future callers won't be able to forget to close the object store. Note: this changes behavior slightly, as we previously _always_ closed the object store, but now we only close the object store when actually running the auto maintenance. In practice, this should not matter (if anything, it might speed up operations where auto maintenance is disabled). Suggested-by: Junio C Hamano <gitster@pobox.com> Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-09-01  fetch: avoid second connectivity check if we already have all objects  (Patrick Steinhardt, 1 file, -4/+3)
When fetching refs, we are doing two connectivity checks:

- The first one is done such that we can skip fetching refs in the case where we already have all objects referenced by the updated set of refs.
- The second one verifies that we have all objects after we have fetched objects.

We always execute both connectivity checks, but this is wasteful in case the first connectivity check already notices that we have all objects locally available.

Skip the second connectivity check in case we already had all objects available. This gives us a nice speedup when doing a mirror-fetch in a repository with about 2.3M refs where the fetching repo already has all objects:

    Benchmark #1: HEAD~: git-fetch
      Time (mean ± σ):     30.025 s ±  0.081 s    [User: 27.070 s, System: 4.933 s]
      Range (min … max):   29.900 s … 30.111 s    5 runs

    Benchmark #2: HEAD: git-fetch
      Time (mean ± σ):     25.574 s ±  0.177 s    [User: 22.855 s, System: 4.683 s]
      Range (min … max):   25.399 s … 25.765 s    5 runs

    Summary
      'HEAD: git-fetch' ran
        1.17 ± 0.01 times faster than 'HEAD~: git-fetch'

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-09-01  fetch: merge fetching and consuming refs  (Patrick Steinhardt, 1 file, -21/+9)
The functions `fetch_refs()` and `consume_refs()` must always be called together such that we first obtain all missing objects and then update our local refs to match the remote refs. In a subsequent patch, we'll further require that `fetch_refs()` must always be called before `consume_refs()` such that it can correctly assert that we have all objects after the fetch given that we're about to move the connectivity check. Make this requirement explicit by merging both functions into a single `fetch_and_consume_refs()` function. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-09-01  fetch: refactor fetch refs to be more extendable  (Patrick Steinhardt, 1 file, -7/+17)
Refactor `fetch_refs()` code to make it more extendable by explicitly handling error cases. The refactored code should behave the same. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-09-01  connected: refactor iterator to return next object ID directly  (Patrick Steinhardt, 1 file, -4/+3)
The object ID iterator used by the connectivity checks returns the next object ID via an out-parameter and then uses a return code to indicate whether an item was found. This is a bit roundabout: instead of a separate error code, we can just return the next object ID directly and use `NULL` pointers as indicator that the iterator got no items left. Furthermore, this avoids a copy of the object ID.

Refactor the iterator and all its implementations to return object IDs directly. This brings a tiny performance improvement when doing a mirror-fetch of a repository with about 2.3M refs:

    Benchmark #1: 328dc58b49919c43897240f2eabfa30be2ce32a4~: git-fetch
      Time (mean ± σ):     30.110 s ±  0.148 s    [User: 27.161 s, System: 5.075 s]
      Range (min … max):   29.934 s … 30.406 s    10 runs

    Benchmark #2: 328dc58b49919c43897240f2eabfa30be2ce32a4: git-fetch
      Time (mean ± σ):     29.899 s ±  0.109 s    [User: 26.916 s, System: 5.104 s]
      Range (min … max):   29.696 s … 29.996 s    10 runs

    Summary
      '328dc58b49919c43897240f2eabfa30be2ce32a4: git-fetch' ran
        1.01 ± 0.01 times faster than '328dc58b49919c43897240f2eabfa30be2ce32a4~: git-fetch'

While this 1% speedup could be labelled as statistically insignificant, the speedup is consistent on my machine. Furthermore, this is an end to end test, so it is expected that the improvement in the connectivity check itself is more significant.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-09-01  fetch: avoid unpacking headers in object existence check  (Patrick Steinhardt, 1 file, -3/+1)
When updating local refs after the fetch has transferred all objects, we do an object existence test as a safety guard to avoid updating a ref to an object which we don't have. We do so via `oid_object_info()`: if it returns an error, then we know the object does not exist.

One side effect of `oid_object_info()` is that it parses the object's type, and to do so it must unpack the object header. This is completely pointless: we don't care for the type, but only want to assert that the object exists.

Refactor the code to use `repo_has_object_file()`, which both makes the code's intent clearer and is also faster because it does not unpack object headers. In a real-world repo with 2.3M refs, this results in a small speedup when doing a mirror-fetch:

    Benchmark #1: HEAD~: git-fetch
      Time (mean ± σ):     33.686 s ±  0.176 s    [User: 30.119 s, System: 5.262 s]
      Range (min … max):   33.512 s … 33.944 s    5 runs

    Benchmark #2: HEAD: git-fetch
      Time (mean ± σ):     31.247 s ±  0.195 s    [User: 28.135 s, System: 5.066 s]
      Range (min … max):   30.948 s … 31.472 s    5 runs

    Summary
      'HEAD: git-fetch' ran
        1.08 ± 0.01 times faster than 'HEAD~: git-fetch'

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
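The two checks side by side, as a hedged sketch against git's object-store API (wrapper names are illustrative):

    /* before: oid_object_info() inflates the header just to learn a type we ignore */
    static int exists_via_object_info(const struct object_id *oid)
    {
            return oid_object_info(the_repository, oid, NULL) >= 0;
    }

    /* after: a plain existence check, no header decoding required */
    static int exists_via_has_object_file(const struct object_id *oid)
    {
            return repo_has_object_file(the_repository, oid);
    }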
2021-09-01  fetch: speed up lookup of want refs via commit-graph  (Patrick Steinhardt, 1 file, -6/+18)
When updating our local refs based on the refs fetched from the remote, we need to iterate through all requested refs and load their respective commits such that we can determine whether they need to be appended to FETCH_HEAD or not. In cases where we're fetching from a remote with exceedingly many refs, resolving these refs can be quite expensive given that we repeatedly need to unpack object headers for each of the referenced objects.

Speed this up by opportunistically trying to resolve object IDs via the commit graph. We only do so for any refs which are not in "refs/tags": more likely than not, these are going to be a commit anyway, and this lets us avoid having to unpack object headers completely in case the object is a commit that is part of the commit-graph. This significantly speeds up mirror-fetches in a real-world repository with 2.3M refs:

    Benchmark #1: HEAD~: git-fetch
      Time (mean ± σ):     56.482 s ±  0.384 s    [User: 53.340 s, System: 5.365 s]
      Range (min … max):   56.050 s … 57.045 s    5 runs

    Benchmark #2: HEAD: git-fetch
      Time (mean ± σ):     33.727 s ±  0.170 s    [User: 30.252 s, System: 5.194 s]
      Range (min … max):   33.452 s … 33.871 s    5 runs

    Summary
      'HEAD: git-fetch' ran
        1.67 ± 0.01 times faster than 'HEAD~: git-fetch'

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
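A hedged sketch of the opportunistic lookup; lookup_commit_in_graph() comes from commit-graph.h, while the fallback path and the function name are illustrative:

    static struct commit *resolve_want_ref(const struct ref *rm)
    {
            struct commit *commit = NULL;

            /* most non-tag refs point at commits, so try the commit-graph first */
            if (!starts_with(rm->name, "refs/tags/"))
                    commit = lookup_commit_in_graph(the_repository, &rm->old_oid);

            if (!commit) {
                    /* not in the graph (or possibly a tag): fall back to parsing */
                    struct object *obj = parse_object(the_repository, &rm->old_oid);
                    if (obj && obj->type == OBJ_COMMIT)
                            commit = (struct commit *)obj;
            }
            return commit;
    }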
2021-08-30  fetch: skip formatting updated refs with `--quiet`  (Patrick Steinhardt, 1 file, -5/+12)
When fetching, Git will by default print a list of all updated refs in a nicely formatted table. In order to come up with this table, Git needs to iterate refs twice: first to determine the maximum column width, and a second time to actually format these changed refs.

While this table will not be printed in case the user passes `--quiet`, we still go out of our way and do all these steps. In fact, we even do more work compared to not passing `--quiet`: without the flag, we will skip all references in the column width computation which have not been updated, but if it is set we will now compute widths for all refs.

Fix this issue by completely skipping both preparation of the format and formatting data for display in case the user passes `--quiet`, improving performance especially with many refs. The following benchmark shows a nice speedup for a quiet mirror-fetch in a repository with 2.3M refs:

    Benchmark #1: HEAD~: git-fetch
      Time (mean ± σ):     26.929 s ±  0.145 s    [User: 24.194 s, System: 4.656 s]
      Range (min … max):   26.692 s … 27.068 s    5 runs

    Benchmark #2: HEAD: git-fetch
      Time (mean ± σ):     25.189 s ±  0.094 s    [User: 22.556 s, System: 4.606 s]
      Range (min … max):   25.070 s … 25.314 s    5 runs

    Summary
      'HEAD: git-fetch' ran
        1.07 ± 0.01 times faster than 'HEAD~: git-fetch'

While at it, this patch also fixes `adjust_refcol_width()` such that it skips unchanged refs in case the user passed `--quiet`, where verbosity will be negative. While this function won't be called anymore if so, this brings the comment in line with actual code.

Furthermore, needless `verbosity >= 0` checks are now removed in `store_updated_refs()`: we never print to the `note` buffer anymore in case `verbosity < 0`, so we won't end up in that code block anyway.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-08-25  advice: remove read uses of most global `advice_` variables  (Ben Boeckel, 1 file, -1/+1)
In c4a09cc9ccb (Merge branch 'hw/advise-ng', 2020-03-25), a new API for accessing advice variables was introduced and deprecated `advice_config` in favor of a new array, `advice_setting`. This patch ports all but two uses which read the status of the global `advice_` variables over to the new `advice_enabled` API. We'll deal with advice_add_embedded_repo and advice_graft_file_deprecated separately. Signed-off-by: Ben Boeckel <mathstuf@gmail.com> Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-08-24  Merge branch 'jt/push-negotiation-fixes'  (Junio C Hamano, 1 file, -1/+3)
Bugfix for the common ancestor negotiation recently introduced in the "git push" code path.

* jt/push-negotiation-fixes:
  fetch: die on invalid --negotiation-tip hash
  send-pack: fix push nego. when remote has refs
  send-pack: fix push.negotiate with remote helper
2021-07-16  Merge branch 'ab/fetch-negotiate-segv-fix'  (Junio C Hamano, 1 file, -0/+3)
Code recently added to support common ancestry negotiation during "git push" did not sanity check its arguments carefully enough.

* ab/fetch-negotiate-segv-fix:
  fetch: fix segfault in --negotiate-only without --negotiation-tip=*
  fetch: document the --negotiate-only option
  send-pack.c: move "no refs in common" abort earlier
2021-07-15  fetch: die on invalid --negotiation-tip hash  (Jonathan Tan, 1 file, -1/+3)
If a full hexadecimal hash is given as a --negotiation-tip to "git fetch", and that hash does not correspond to an object, "git fetch" will segfault if --negotiate-only is given and will silently ignore that hash otherwise. Make these cases fatal errors, just like the case when an invalid ref name or abbreviated hash is given. While at it, mark the error messages as translatable. Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-07-08  fetch: fix segfault in --negotiate-only without --negotiation-tip=*  (Ævar Arnfjörð Bjarmason, 1 file, -0/+3)
The recent --negotiate-only option would segfault in the call to oid_array_for_each() in negotiate_using_fetch() unless one or more --negotiation-tip=* options were provided. All of the other tests for the feature combine both, but nothing was checking this assumption, let's do that and add a test for it. Fixes a bug in 9c1e657a8fd (fetch: teach independent negotiation (no packfile), 2021-05-04). Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-05-20  fetch: improve grammar of "shallow roots" message  (Alex Henrie, 1 file, -1/+1)
Signed-off-by: Alex Henrie <alexhenrie24@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-05-16  Merge branch 'jt/push-negotiation'  (Junio C Hamano, 1 file, -1/+26)
"git push" learns to discover common ancestor with the receiving end over protocol v2.

* jt/push-negotiation:
  send-pack: support push negotiation
  fetch: teach independent negotiation (no packfile)
  fetch-pack: refactor command and capability write
  fetch-pack: refactor add_haves()
  fetch-pack: refactor process_acks()
2021-05-05  fetch: teach independent negotiation (no packfile)  (Jonathan Tan, 1 file, -1/+26)
Currently, the packfile negotiation step within a Git fetch cannot be done independent of sending the packfile, even though there is at least one application wherein this is useful. Therefore, make it possible for this negotiation step to be done independently. A subsequent commit will use this for one such application - push negotiation. This feature is for protocol v2 only. (An implementation for protocol v0 would require a separate implementation in the fetch, transport, and transport helper code.) In the protocol, the main hindrance towards independent negotiation is that the server can unilaterally decide to send the packfile. This is solved by a "wait-for-done" argument: the server will then wait for the client to say "done". In practice, the client will never say it; instead it will cease requests once it is satisfied. In the client, the main change lies in the transport and transport helper code. fetch_refs_via_pack() performs everything needed - protocol version and capability checks, and the negotiation itself. There are 2 code paths that do not go through fetch_refs_via_pack() that needed to be individually excluded: the bundle transport (excluded through requiring smart_options, which the bundle transport doesn't support) and transport helpers that do not support takeover. If or when we support independent negotiation for protocol v0, we will need to modify these 2 code paths to support it. But for now, report failure if independent negotiation is requested in these cases. Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-04-16  fetch: add --prefetch option  (Derrick Stolee, 1 file, -1/+58)
The --prefetch option will be used by the 'prefetch' maintenance task instead of sending refspecs explicitly across the command-line. The intention is to modify the refspec to place all results in refs/prefetch/ instead of anywhere else.

Create helper method filter_prefetch_refspec() to modify a given refspec to fit the rules expected of the prefetch task:

* Negative refspecs are preserved.
* Refspecs without a destination are removed.
* Refspecs whose source starts with "refs/tags/" are removed.
* Other refspecs are placed within "refs/prefetch/".

Finally, we add the 'force' option to ensure that prefetch refs are replaced as necessary.

There are some interesting cases that are worth testing. An earlier version of this change dropped the "i--" from the loop that deletes a refspec item and shifts the remaining entries down. This allowed some refspecs to not be modified. The subtle part about the first --prefetch test is that the "refs/tags/*" refspec appears directly before the "refs/heads/bogus/*" refspec. Without that "i--", this ordering would remove the "refs/tags/*" refspec and leave the last one unmodified, placing the result in "refs/heads/*".

It is possible to have an empty refspec. This is typically the case for remotes other than the origin, where users want to fetch a specific tag or branch. To correctly test this case, we need to further remove the upstream remote for the local branch. Thus, we are testing a refspec that will be deleted, leaving nothing to fetch.

Helped-by: Tom Saeger <tom.saeger@oracle.com>
Helped-by: Ramsay Jones <ramsay@ramsayjones.plus.com>
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
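The per-refspec decision can be summarized with a small classifier (illustrative only; the enum and helper below are not the upstream implementation, which rewrites the destination and sets the force bit in place):

    enum prefetch_action { PREFETCH_KEEP, PREFETCH_DROP, PREFETCH_REDIRECT };

    static enum prefetch_action classify_for_prefetch(const struct refspec_item *item)
    {
            if (item->negative)
                    return PREFETCH_KEEP;       /* negative refspecs pass through untouched */
            if (!item->dst || !*item->dst)
                    return PREFETCH_DROP;       /* no destination, nothing to store */
            if (item->src && starts_with(item->src, "refs/tags/"))
                    return PREFETCH_DROP;       /* tags are not prefetched */
            return PREFETCH_REDIRECT;           /* place the result under refs/prefetch/, forced */
    }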
2021-02-17  Merge branch 'jt/clone-unborn-head'  (Junio C Hamano, 1 file, -7/+11)
"git clone" tries to locally check out the branch pointed at by HEAD of the remote repository after it is done, but the protocol did not convey the information necessary to do so when copying an empty repository. The protocol v2 learned how to do so.

* jt/clone-unborn-head:
  clone: respect remote unborn HEAD
  connect, transport: encapsulate arg in struct
  ls-refs: report unborn targets of symrefs
2021-02-05connect, transport: encapsulate arg in structJonathan Tan1-7/+11
In a future patch we plan to return the name of an unborn current branch from deep in the callchain to a caller via a new pointer parameter that points at a variable in the caller when the caller calls get_remote_refs() and transport_get_remote_refs(). In preparation for that, encapsulate the existing ref_prefixes parameter into a struct. The aforementioned unborn current branch will go into this new struct in the future patch. Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-01-12fetch: implement support for atomic reference updatesPatrick Steinhardt1-5/+41
When executing a fetch, Git currently allocates one reference transaction per reference update and directly commits it. This means that fetches are non-atomic: even if some of the reference updates fail, others may still succeed and modify local references. This is fine in many scenarios, but this strategy has its downsides. - The view of remote references may be inconsistent and may show a bastardized state of the remote repository. - Batching together updates may improve performance in certain scenarios. While the impact probably isn't as pronounced with loose references, the upcoming reftable backend may benefit as it needs to write fewer files in case the update is batched. - The reference-transaction hook is currently being executed twice per updated reference. While this doesn't matter when there is no such hook, we have seen severe performance regressions when doing a git-fetch(1) with a reference-transaction hook when the remote repository has hundreds of thousands of references. Similar to `git push --atomic`, this commit thus introduces atomic fetches. Instead of allocating one reference transaction per updated reference, it causes us to only allocate a single transaction and commit it as soon as all updates have been received. If locking of any reference fails, then we abort the complete transaction and don't update any reference, which gives us an all-or-nothing fetch. Note that this may not completely fix the first of the above downsides, as the consistent view also depends on the server side. If the server doesn't have a consistent view of its own references during the reference negotiation phase, then the client would get the same inconsistent view the server has. This is a separate problem though and, if it actually exists, can be fixed at a later point. This commit also changes the way we write FETCH_HEAD in case `--atomic` is passed. Instead of writing changes as we go, we need to accumulate all changes first and only commit them at the end when we know that all reference updates succeeded. Ideally, we'd just do so via a temporary file so that we don't need to carry all updates in-memory. This isn't trivially doable though considering the `--append` mode, where we do not truncate the file but simply append to it. And given that we support concurrent processes appending to FETCH_HEAD at the same time without any loss of data, seeding the temporary file with current contents of FETCH_HEAD initially and then doing a rename wouldn't work either. So this commit implements the simple strategy of buffering all changes and appending them to the file on commit. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
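As a rough illustration of the all-or-nothing strategy, the sketch below queues every update into one reference transaction and commits it only at the end. It is not the actual builtin/fetch.c change; `struct ref_update_sketch` is a made-up container for the illustration, and while the ref_transaction_*() calls are git's refs.h API, their exact signatures may have evolved since this series:

```
#include "cache.h"
#include "refs.h"

/* made-up container for the illustration */
struct ref_update_sketch {
	const char *refname;
	struct object_id old_oid, new_oid;
};

/*
 * Queue every update into a single transaction and commit it only once
 * all updates have been received; if anything fails, no ref is touched.
 */
static int atomic_fetch_sketch(struct ref_update_sketch *updates, size_t nr)
{
	struct strbuf err = STRBUF_INIT;
	struct ref_transaction *transaction;
	size_t i;
	int ret = -1;

	transaction = ref_transaction_begin(&err);
	if (!transaction)
		goto out;

	for (i = 0; i < nr; i++) {
		if (ref_transaction_update(transaction, updates[i].refname,
					   &updates[i].new_oid,
					   &updates[i].old_oid,
					   0, "fetch", &err))
			goto out; /* abort: nothing has been committed yet */
	}

	ret = ref_transaction_commit(transaction, &err);
out:
	if (ret)
		error("%s", err.buf);
	if (transaction)
		ref_transaction_free(transaction);
	strbuf_release(&err);
	return ret;
}
```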
2021-01-12fetch: allow passing a transaction to `s_update_ref()`Patrick Steinhardt1-20/+31
Ref updates are handled entirely by `s_update_ref()`, which manages the complete lifecycle of the reference transaction. This is fine right now given that git-fetch(1) does not support atomic fetches, so each reference gets its own transaction. It is quite inflexible though, as `s_update_ref()` only knows about a single reference update at a time, so it doesn't allow us to alter the strategy. This commit prepares `s_update_ref()` and its only caller `update_local_ref()` to allow passing an external transaction. If none is given, then the existing behaviour is triggered which creates a new transaction and directly commits it. Otherwise, if the caller provides a transaction, then we only queue the update but don't commit it. This optionally allows the caller to manage when a transaction will be committed. Given that `update_local_ref()` is always called with a `NULL` transaction for now, no change in behaviour is expected from this change. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-01-12fetch: refactor `s_update_ref` to use common exit pathPatrick Steinhardt1-19/+26
The cleanup code in `s_update_ref()` is currently duplicated for both successful and erroneous exit paths. This commit refactors the function to have a shared exit path for both cases to remove the duplication. Suggested-by: Christian Couder <christian.couder@gmail.com> Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
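The shape of such a refactoring is the common C single-exit pattern, sketched below with hypothetical helpers step_one() and step_two(); this illustrates the pattern only and is not the literal s_update_ref() code:

```
#include "cache.h"

/* hypothetical helpers standing in for the real work */
extern int step_one(struct strbuf *err);
extern int step_two(struct strbuf *err);

/* One cleanup block, reached from every path via "goto out". */
static int shared_exit_sketch(void)
{
	char *msg = xstrfmt("fetch update");
	struct strbuf err = STRBUF_INIT;
	int ret = 0;

	if (step_one(&err)) {
		ret = -1;
		goto out;
	}
	if (step_two(&err)) {
		ret = -1;
		goto out;
	}

out:
	/* success and failure release their resources in one place */
	strbuf_release(&err);
	free(msg);
	return ret;
}
```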
2021-01-12fetch: use strbuf to format FETCH_HEAD updatesPatrick Steinhardt1-5/+11
This commit refactors `append_fetch_head()` to use a `struct strbuf` for formatting the update which we're about to append to the FETCH_HEAD file. While the refactoring doesn't have much of a benefit right now, it serves as a preparatory step to implement atomic fetches where we need to buffer all updates to FETCH_HEAD and only flush them out if all reference updates succeeded. No change in behaviour is expected from this commit. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
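A minimal sketch of what formatting a FETCH_HEAD line into a strbuf looks like; the three-field layout (object name, optional not-for-merge marker, note) matches FETCH_HEAD's documented format, while the function itself is illustrative rather than the actual append_fetch_head() implementation:

```
#include "cache.h"

/*
 * Illustration only: build one FETCH_HEAD line in a strbuf instead of
 * writing its pieces straight to the file.  The layout is the usual
 * "<object name> TAB <not-for-merge marker> TAB <note>".
 */
static void format_fetch_head_line(struct strbuf *out,
				   const struct object_id *oid,
				   int not_for_merge,
				   const char *note)
{
	strbuf_addf(out, "%s\t%s\t%s\n",
		    oid_to_hex(oid),
		    not_for_merge ? "not-for-merge" : "",
		    note);
}
```

Buffering the formatted line this way is what later lets `--atomic` hold all lines back until every reference update has succeeded.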
2021-01-12fetch: extract writing to FETCH_HEADPatrick Steinhardt1-29/+79
When performing a fetch with the default `--write-fetch-head` option, we write all updated references to FETCH_HEAD while the updates are performed. Given that updates are not performed atomically, we write to FETCH_HEAD even if some or all of the reference updates fail. Given that we simply update FETCH_HEAD ad-hoc with each reference, the logic is completely contained in `store_updated_refs` and thus quite hard to extend. This can already be seen by the way we skip writing to FETCH_HEAD: instead of having a conditional which simply skips writing, we instead open "/dev/null" and needlessly write all updates there. We are about to extend git-fetch(1) to accept an `--atomic` flag which will make the fetch an all-or-nothing operation with regards to the reference updates. This will also require us to make the updates to FETCH_HEAD an all-or-nothing operation, but as explained doing so is not easy with the current layout. This commit thus refactors the way we write to FETCH_HEAD and pulls out the logic to open, append to, commit and close the file. While this may seem rather over-the-top at first, pulling out this logic will make it a lot easier to update the code in a subsequent commit. It also allows us to easily skip writing completely in case `--no-write-fetch-head` was passed. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-11-02hashmap: provide deallocation function namesElijah Newren1-3/+3
hashmap_free(), hashmap_free_entries(), and hashmap_free_() have existed for a while, but aren't necessarily the clearest names, especially with hashmap_partial_clear() being added to the mix and lazy-initialization now being supported. Peff suggested we adopt the following names[1]: - hashmap_clear() - remove all entries and de-allocate any hashmap-specific data, but be ready for reuse - hashmap_clear_and_free() - ditto, but free the entries themselves - hashmap_partial_clear() - remove all entries but don't deallocate table - hashmap_partial_clear_and_free() - ditto, but free the entries This patch provides the new names and converts all existing callers over to the new naming scheme. [1] https://lore.kernel.org/git/20201030125059.GA3277724@coredump.intra.peff.net/ Signed-off-by: Elijah Newren <newren@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-10-05Merge branch 'jk/refspecs-negative'Junio C Hamano1-0/+10
"git fetch" and "git push" support negative refspecs. * jk/refspecs-negative: refspec: add support for negative refspecs
2020-10-05Merge branch 'jt/keep-partial-clone-filter-upon-lazy-fetch'Junio C Hamano1-1/+1
The lazy fetching done internally to make missing objects available in a partial clone incorrectly made permanent damage to the partial clone filter in the repository, which has been corrected. * jt/keep-partial-clone-filter-upon-lazy-fetch: fetch: do not override partial clone filter promisor-remote: remove unused variable
2020-09-30refspec: add support for negative refspecsJacob Keller1-0/+10
Both fetch and push support pattern refspecs which allow fetching or pushing references that match a specific pattern. Because these patterns are globs, they have somewhat limited ability to express more complex situations. For example, suppose you wish to fetch all branches from a remote except for a specific one. To allow this, you must set up a set of refspecs which match only the branches you want. Because refspecs are either explicit name matches, or simple globs, many patterns cannot be expressed. Add support for a new type of refspec, referred to as "negative" refspecs. These are prefixed with a '^' and mean "exclude any ref matching this refspec". They can only have one "side" which always refers to the source. During a fetch, this refers to the name of the ref on the remote. During a push, this refers to the name of the ref on the local side. With negative refspecs, users can express more complex patterns. For example: git fetch origin refs/heads/*:refs/remotes/origin/* ^refs/heads/dontwant will fetch all branches on origin into remotes/origin, but will exclude fetching the branch named dontwant. Refspecs today are commutative, meaning that order doesn't expressly matter. Rather than forcing an implied order, negative refspecs will always be applied last. That is, in order to match, a ref must match at least one positive refspec, and match none of the negative refspecs. This is similar to how negative pathspecs work. Signed-off-by: Jacob Keller <jacob.keller@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
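The matching rule - a ref must match at least one positive refspec and none of the negative ones - can be sketched as follows; pattern_matches() is a hypothetical stand-in for git's actual refspec matching, and this is not the code added by the patch:

```
#include <stddef.h>

/* hypothetical stand-in for git's actual refspec matching */
extern int pattern_matches(const char *refspec_side, const char *refname);

static int ref_is_wanted(const char *refname,
			 const char **positive, size_t npositive,
			 const char **negative, size_t nnegative)
{
	size_t i;
	int matched = 0;

	for (i = 0; i < npositive; i++) {
		if (pattern_matches(positive[i], refname)) {
			matched = 1;
			break;
		}
	}
	if (!matched)
		return 0;

	/* negative refspecs are applied last, like negative pathspecs */
	for (i = 0; i < nnegative; i++)
		if (pattern_matches(negative[i], refname))
			return 0;

	return 1;
}
```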
2020-09-28fetch: do not override partial clone filterJonathan Tan1-1/+1
When a fetch with the --filter argument is made, the configured default filter is set even if one already exists. This change was made in 5e46139376 ("builtin/fetch: remove unique promisor remote limitation", 2019-06-25) - in particular, changing from: * If this is the FIRST partial-fetch request, we enable partial * on this repo and remember the given filter-spec as the default * for subsequent fetches to this remote. to: * If this is a partial-fetch request, we enable partial on * this repo if not already enabled and remember the given * filter-spec as the default for subsequent fetches to this * remote. (The given filter-spec is "remembered" even if there is already an existing one.) This is problematic whenever a lazy fetch is made, because lazy fetches are made using "git fetch --filter=blob:none", but this will also happen if the user invokes "git fetch --filter=<filter>" manually. Therefore, restore the behavior prior to 5e46139376, which writes a filter-spec only if the current fetch request is the first partial-fetch one (for that remote). Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-09-25Merge branch 'ds/maintenance-part-1'Junio C Hamano1-2/+4
A "git gc"'s big brother has been introduced to take care of more repository maintenance tasks, not limited to the object database cleaning. * ds/maintenance-part-1: maintenance: add trace2 regions for task execution maintenance: add auto condition for commit-graph task maintenance: use pointers to check --auto maintenance: create maintenance.<task>.enabled config maintenance: take a lock on the objects directory maintenance: add --task option maintenance: add commit-graph task maintenance: initialize task array maintenance: replace run_auto_gc() maintenance: add --quiet option maintenance: create basic maintenance runner
2020-09-22Merge branch 'ar/fetch-ipversion-in-all'Junio C Hamano1-1/+4
"git fetch --all --ipv4/--ipv6" forgot to pass the protocol options to instances of the "git fetch" that talk to individual remotes, which has been corrected. * ar/fetch-ipversion-in-all: fetch: pass --ipv4 and --ipv6 options to sub-fetches
2020-09-22Merge branch 'os/fetch-submodule-optim'Junio C Hamano1-1/+3
Optimization around submodule handling. * os/fetch-submodule-optim: fetch: do not look for submodule changes in unchanged refs
2020-09-17maintenance: replace run_auto_gc()Derrick Stolee1-2/+4
The run_auto_gc() method is used in several places to trigger a check for repo maintenance after some Git commands, such as 'git commit' or 'git fetch'. To allow for extra customization of this maintenance activity, replace the 'git gc --auto [--quiet]' call with one to 'git maintenance run --auto [--quiet]'. As we extend the maintenance builtin with other steps, users will be able to select different maintenance activities. Rename run_auto_gc() to run_auto_maintenance() to be clearer what is happening on this call, and to expose all callers in the current diff. Rewrite the method to use a struct child_process to simplify the calls slightly. Since 'git fetch' already allows disabling the 'git gc --auto' subprocess, add an equivalent option with a different name to be more descriptive of the new behavior: '--[no-]maintenance'. Update the documentation to include these options at the same time. Signed-off-by: Derrick Stolee <dstolee@microsoft.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
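A hedged sketch of the child_process-based helper described above; the run-command.h and strvec APIs are real, but the exact body of run_auto_maintenance() may differ from this illustration:

```
#include "git-compat-util.h"
#include "run-command.h"

/* Sketch: spawn "git maintenance run --auto [--quiet]" as a child process. */
static int run_auto_maintenance_sketch(int quiet)
{
	struct child_process child = CHILD_PROCESS_INIT;

	child.git_cmd = 1; /* run a git builtin rather than an external command */
	strvec_pushl(&child.args, "maintenance", "run", "--auto", NULL);
	if (quiet)
		strvec_push(&child.args, "--quiet");

	return run_command(&child);
}
```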
2020-09-15fetch: pass --ipv4 and --ipv6 options to sub-fetchesAlex Riesen1-0/+4
The options indicate user intent for the whole fetch operation, and ignoring them in sub-fetches (i.e. "--all" and recursive fetching of submodules) is quite unexpected when, for instance, it is intended to limit all of the communication to a specific transport protocol for some reason. Signed-off-by: Alex Riesen <alexander.riesen@cetitec.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-09-06refspec: add and use refspec_appendf()René Scharfe1-5/+2
Add a function for building a refspec using printf-style formatting. It frees callers from managing their own buffer. Use it throughout the tree to shorten and simplify its callers. Signed-off-by: René Scharfe <l.s.r@web.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
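The before/after shape of such a conversion, as a sketch that assumes the printf-style signature described above (refspec_appendf(rs, fmt, ...)); the branch-refspec example itself is made up for illustration:

```
#include "git-compat-util.h"
#include "refspec.h"
#include "strbuf.h"

/* before: manage a scratch buffer by hand */
static void add_branch_refspec_old(struct refspec *rs, const char *branch)
{
	struct strbuf buf = STRBUF_INIT;

	strbuf_addf(&buf, "refs/heads/%s:refs/remotes/origin/%s",
		    branch, branch);
	refspec_append(rs, buf.buf);
	strbuf_release(&buf);
}

/* after: let the printf-style helper do the formatting */
static void add_branch_refspec_new(struct refspec *rs, const char *branch)
{
	refspec_appendf(rs, "refs/heads/%s:refs/remotes/origin/%s",
			branch, branch);
}
```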
2020-09-06fetch: do not look for submodule changes in unchanged refsOrgad Shaneh1-1/+3
When fetching recursively with submodules, for each ref in the superproject, we call check_for_new_submodule_commits() which collects all the objects that have to be checked for submodule changes in calculate_changed_submodule_paths(). On the first call, it also collects all the existing refs for excluding them from the scan. calculate_changed_submodule_paths() creates an argument array with all the collected new objects, followed by --not and all the old objects. This argv is passed to setup_revisions, which parses each argument, converts it back to an oid and resolves the object. The parsing itself also does redundant work, because it is treated like user input, while in fact it is a full oid. So it needlessly attempts to look it up as a ref (checks if it has ^, ~ etc.), checks if it is a file name etc. For a repository with many refs, all of this is expensive. But if the fetch in the superproject did not update the ref (i.e. the objects that are required to exist in the submodule did not change), there is no need to include it in the list. Before commit be76c212 (fetch: ensure submodule objects fetched, 2018-12-06), submodule reference changes were only detected for refs that were changed, but not for new refs. That commit covered this case as well, but what it did was to just include every ref. This change should reduce the number of scanned refs by about half (except the case of a no-op fetch, which will not scan any ref), because all the existing refs will still be listed after --not. The regression was reported here: https://public-inbox.org/git/CAGHpTBKSUJzFSWc=uznSu2zB33qCSmKXM-iAjxRCpqNK5bnhRg@mail.gmail.com/ Signed-off-by: Orgad Shaneh <orgads@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-09-03Merge branch 'jt/lazy-fetch'Junio C Hamano1-9/+41
Updates to on-demand fetching code in lazily cloned repositories. * jt/lazy-fetch: fetch: no FETCH_HEAD display if --no-write-fetch-head fetch-pack: remove no_dependents code promisor-remote: lazy-fetch objects in subprocess fetch-pack: do not lazy-fetch during ref iteration fetch: only populate existing_refs if needed fetch: avoid reading submodule config until needed fetch: allow refspecs specified through stdin negotiator/noop: add noop fetch negotiator
2020-09-02fetch: no FETCH_HEAD display if --no-write-fetch-headJonathan Tan1-1/+7
887952b8c6 ("fetch: optionally allow disabling FETCH_HEAD update", 2020-08-18) introduced the ability to disable writing to FETCH_HEAD during fetch, but did not suppress the "<source> -> FETCH_HEAD" message when this ability is used. This message is misleading in this case, because FETCH_HEAD is not written. Also, because "fetch" is used to lazy-fetch missing objects in a partial clone, this significantly clutters up the output in that case since the objects to be fetched are potentially numerous. Therefore, suppress this message when --no-write-fetch-head is passed (but not when --dry-run is set). Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-08-27Merge branch 'jk/leakfix'Junio C Hamano1-1/+1
Code clean-up. * jk/leakfix: submodule--helper: fix leak of core.worktree value config: fix leak in git_config_get_expiry_in_days() config: drop git_config_get_string_const() config: fix leaks from git_config_get_string_const() checkout: fix leak of non-existent branch names submodule--helper: use strbuf_release() to free strbufs clear_pattern_list(): clear embedded hashmaps
2020-08-18fetch: only populate existing_refs if neededJonathan Tan1-4/+9
In "fetch", get_ref_map() iterates over all refs to populate "existing_refs" in order to populate peer_ref->old_oid in the returned refmap, even if the refmap has no peer_ref set - which is the case when only literal hashes (i.e. no refs by name) are fetched. Iterating over refs causes the targets of those refs to be checked for existence. Avoiding this is especially important when we use "git fetch" to perform lazy fetches in a partial clone because a target of such a ref may need to be itself lazy-fetched (and otherwise causing an infinite loop). Therefore, avoid populating "existing_refs" until necessary. With this patch, because Git lazy-fetches objects by literal hashes (to be done in a subsequent commit), it will then be able to guarantee avoiding reading targets of refs. Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-08-18fetch: avoid reading submodule config until neededJonathan Tan1-2/+8
In "fetch", there are two parameters submodule_fetch_jobs_config and recurse_submodules that can be set in a variety of ways: through .gitmodules, through .git/config, and through the command line. Currently "fetch" handles this by first reading .gitmodules, then reading .git/config (allowing it to overwrite existing values), then reading the command line (allowing it to overwrite existing values). Notice that we can avoid reading .gitmodules if .git/config and/or the command line already provides us with what we need. In addition, if recurse_submodules is found to be "no", we do not need the value of submodule_fetch_jobs_config. Avoiding reading .gitmodules is especially important when we use "git fetch" to perform lazy fetches in a partial clone because the .gitmodules file itself might need to be lazy fetched (and otherwise causing an infinite loop). In light of all this, avoid reading .gitmodules until necessary. When reading it, we may only need one of the two parameters it provides, so teach fetch_config_from_gitmodules() to support NULL arguments. With this patch, users (including Git itself when invoking "git fetch" to lazy-fetch) will be able to guarantee avoiding reading .gitmodules by passing --recurse-submodules=no. Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-08-18fetch: allow refspecs specified through stdinJonathan Tan1-2/+17
In a subsequent patch, partial clones will be taught to fetch missing objects using a "git fetch" subprocess. Because the number of objects fetched may be too numerous to fit on the command line, teach "fetch" to accept refspecs passed through stdin. Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
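A minimal sketch of reading one refspec per line from standard input with the strbuf API; the real option handling in builtin/fetch.c is more involved, so treat this purely as an illustration:

```
#include "git-compat-util.h"
#include "refspec.h"
#include "strbuf.h"

/* Illustration: append one refspec per line read from standard input. */
static void read_refspecs_from_stdin_sketch(struct refspec *rs)
{
	struct strbuf line = STRBUF_INIT;

	while (strbuf_getline(&line, stdin) != EOF) {
		if (line.len)
			refspec_append(rs, line.buf);
	}
	strbuf_release(&line);
}
```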
2020-08-18fetch: optionally allow disabling FETCH_HEAD updateJunio C Hamano1-3/+12
If you run fetch but record the result in remote-tracking branches, and either do nothing with the fetched refs (e.g. you are merely mirroring) or always work from the remote-tracking refs (e.g. you fetch and then merge origin/branchname separately), you can get away with having no FETCH_HEAD at all. Teach "git fetch" a command line option "--[no-]write-fetch-head". The default is to write FETCH_HEAD, and the option is primarily meant to be used with the "--no-" prefix to override this default, because there is no matching fetch.writeFetchHEAD configuration variable to flip the default to off (in which case, the positive form may become necessary to defeat it). Note that under "--dry-run" mode, FETCH_HEAD is never written; otherwise you'd see a list of objects in the file that you do not actually have. Passing `--write-fetch-head` does not force `git fetch` to write the file. Signed-off-by: Derrick Stolee <dstolee@microsoft.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-08-14config: fix leaks from git_config_get_string_const()Jeff King1-1/+1
There are two functions to get a single config string: - git_config_get_string() - git_config_get_string_const() One might naively think that the first one allocates a new string and the second one just points us to the internal configset storage. But in fact they both allocate a new copy; the second one exists only to avoid having to cast when using it with a const global which we never intend to free. The documentation for the function explains that clearly, but it seems I'm not alone in being surprised by this. Of 17 calls to the function, 13 of them leak the resulting value. We could obviously fix these by adding the appropriate free(). But it would be simpler still if we actually had a non-allocating way to get the string. There's git_config_get_value() but that doesn't quite do what we want. If the config key is present but is a boolean with no value (e.g., "[foo]bar" in the file), then we'll get NULL (whereas the string versions will print an error and die). So let's introduce a new variant, git_config_get_string_tmp(), that behaves as these callers expect. We need a new name because we have new semantics but the same function signature (so even if we converted the four remaining callers, topics in flight might be surprised). The "tmp" is because this value should only be held onto for a short time. In practice it's rare for us to clear and refresh the configset, invalidating the pointer, but hopefully the "tmp" makes callers think about the lifetime. In each of the converted cases here the value only needs to last within the local function or its immediate caller. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
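A usage sketch of the non-allocating variant, assuming the signature int git_config_get_string_tmp(const char *key, const char **dest) returning 0 when the key is found; the config key used here is just an example:

```
#include "git-compat-util.h"
#include "config.h"

/*
 * Borrow the configset's copy of the value instead of receiving (and
 * having to free, or leak) a freshly allocated one.
 */
static const char *negotiation_algorithm_sketch(void)
{
	const char *algo;

	if (!git_config_get_string_tmp("fetch.negotiationalgorithm", &algo))
		return algo; /* valid only until the configset is refreshed */
	return "default";
}
```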
2020-07-30strvec: rename struct fieldsJeff King1-3/+3
The "argc" and "argv" names made sense when the struct was argv_array, but now they're just confusing. Let's rename them to "nr" (which we use for counts elsewhere) and "v" (which is rather terse, but reads well when combined with typical variable names like "args.v"). Note that we have to update all of the callers immediately. Playing tricks with the preprocessor is hard here, because we wouldn't want to rewrite unrelated tokens. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-07-28strvec: fix indentation in renamed callsJeff King1-1/+1
Code which split an argv_array call across multiple lines, like: argv_array_pushl(&args, "one argument", "another argument", "and more", NULL); was recently mechanically renamed to use strvec, which results in mis-matched indentation like: strvec_pushl(&args, "one argument", "another argument", "and more", NULL); Let's fix these up to align the arguments with the opening paren. I did this manually by sifting through the results of: git jump grep 'strvec_.*,$' and liberally applying my editor's auto-format. Most of the changes are of the form shown above, though I also normalized a few that had originally used a single-tab indentation (rather than our usual style of aligning with the open paren). I also rewrapped a couple of obvious cases (e.g., where previously too-long lines became short enough to fit on one), but I wasn't aggressive about it. In cases broken to three or more lines, the grouping of arguments is sometimes meaningful, and it wasn't worth my time or reviewer time to ponder each case individually. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-07-28strvec: convert builtin/ callers away from argv_array nameJeff King1-27/+27
We eventually want to drop the argv_array name and just use strvec consistently. There's no particular reason we have to do it all at once, or care about interactions between converted and unconverted bits. Because of our preprocessor compat layer, the names are interchangeable to the compiler (so even a definition and declaration using different names is OK). This patch converts all of the files in builtin/ to keep the diff to a manageable size. The conversion was done purely mechanically with: git ls-files '*.c' '*.h' | xargs perl -i -pe ' s/ARGV_ARRAY/STRVEC/g; s/argv_array/strvec/g; ' and then selectively staging files with "git add builtin/". We'll deal with any indentation/style fallouts separately. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-07-28strvec: rename files from argv-array to strvecJeff King1-1/+1
This requires updating #include lines across the code-base, but that's all fairly mechanical, and was done with: git ls-files '*.c' '*.h' | xargs perl -i -pe 's/argv-array.h/strvec.h/' Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-06-29Merge branch 'xl/upgrade-repo-format'Junio C Hamano1-3/+0
Allow runtime upgrade of the repository format version, which needs to be done carefully. There is a rather unpleasant backward compatibility worry with the last step of this series, but it is the right thing to do in the longer term. * xl/upgrade-repo-format: check_repository_format_gently(): refuse extensions for old repositories sparse-checkout: upgrade repository to version 1 when enabling extension fetch: allow adding a filter after initial clone repository: add a helper function to perform repository format upgrade
2020-06-17Merge branch 'js/reflog-anonymize-for-clone-and-fetch'Junio C Hamano1-2/+7
The reflog entries for "git clone" and "git fetch" did not anonymize the URL they operated on. * js/reflog-anonymize-for-clone-and-fetch: clone/fetch: anonymize URLs in the reflog
2020-06-05fetch: allow adding a filter after initial cloneXin Li1-3/+0
Retroactively adding a filter can be useful for existing shallow clones as they allow users to see earlier change histories without downloading all git objects in a regular --unshallow fetch. Without this patch, users can make a clone partial by editing the repository configuration to convert the remote into a promisor, like: "git config core.repositoryFormatVersion 1", "git config extensions.partialClone origin" and then "git fetch --unshallow --filter=blob:none origin". Since the hard part of making this work is already in place and such edits can be error-prone, teach Git to perform the required configuration change automatically instead. Note that this change does not modify the existing git behavior which recognizes setting extensions.partialClone without changing repositoryFormatVersion. Signed-off-by: Xin Li <delphij@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-06-04clone/fetch: anonymize URLs in the reflogJohannes Schindelin1-2/+7
Even if we strongly discourage putting credentials into the URLs passed via the command-line, there _is_ support for that, and users _do_ do that. Let's scrub them before writing them to the reflog. Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-05-13Merge branch 'jc/auto-gc-quiet'Junio C Hamano1-8/+2
Teach "am", "commit", "merge" and "rebase", when they are run with the "--quiet" option, to pass "--quiet" down to "gc --auto". * jc/auto-gc-quiet: auto-gc: pass --quiet down from am, commit, merge and rebase auto-gc: extract a reusable helper from "git fetch"
2020-05-13Merge branch 'tb/shallow-cleanup'Junio C Hamano1-0/+1
Code cleanup. * tb/shallow-cleanup: shallow: use struct 'shallow_lock' for additional safety shallow.h: document '{commit,rollback}_shallow_file' shallow: extract a header file for shallow-related functions commit: make 'commit_graft_pos' non-static
2020-05-07auto-gc: extract a reusable helper from "git fetch"Junio C Hamano1-8/+2
Back in 1991006c (fetch: convert argv_gc_auto to struct argv_array, 2014-08-16), we taught "git fetch --quiet" to pass the "--quiet" option down to "gc --auto". This issue, however, is not limited to "fetch": $ git grep -e 'gc.*--auto' \*.c finds hits in "am", "commit", "merge", and "rebase" and these commands do not pass "--quiet" down to "gc --auto" when they themselves are told to be quiet. As a preparatory step, let's introduce a helper function run_auto_gc(), that the caller can pass a boolean "quiet", and redo the fix to "git fetch" using the helper. Signed-off-by: Junio C Hamano <gitster@pobox.com> Reviewed-by: Taylor Blau <me@ttaylorr.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-04-30shallow: extract a header file for shallow-related functionsTaylor Blau1-0/+1
There are many functions in commit.h that are more related to shallow repositories than they are to any sort of generic commit machinery. Likely this began when there were only a few shallow-related functions, and commit.h seemed a reasonable enough place to put them. But, now there are a good number of shallow-related functions, and placing them all in 'commit.h' doesn't make sense. This patch extracts a 'shallow.h', which takes all of the declarations from 'commit.h' for functions which already exist in 'shallow.c'. We will bring the remaining shallow-related functions defined in 'commit.c' in a subsequent patch. For now, move only the ones that already are implemented in 'shallow.c', and update the necessary includes. Signed-off-by: Taylor Blau <me@ttaylorr.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-04-28Use OPT_CALLBACK and OPT_CALLBACK_FDenton Liu1-6/+6
In the codebase, there are many options which use OPTION_CALLBACK in a plain ol' struct definition. However, we have the OPT_CALLBACK and OPT_CALLBACK_F macros which are meant to abstract these plain struct definitions away. These macros are useful as they semantically signal to developers that these are just normal callback option with nothing fancy happening. Replace plain struct definitions of OPTION_CALLBACK with OPT_CALLBACK or OPT_CALLBACK_F where applicable. The heavy lifting was done using the following (disgusting) shell script: #!/bin/sh do_replacement () { tr '\n' '\r' | sed -e 's/{\s*OPTION_CALLBACK,\s*\([^,]*\),\([^,]*\),\([^,]*\),\([^,]*\),\([^,]*\),\s*0,\(\s*[^[:space:]}]*\)\s*}/OPT_CALLBACK(\1,\2,\3,\4,\5,\6)/g' | sed -e 's/{\s*OPTION_CALLBACK,\s*\([^,]*\),\([^,]*\),\([^,]*\),\([^,]*\),\([^,]*\),\([^,]*\),\(\s*[^[:space:]}]*\)\s*}/OPT_CALLBACK_F(\1,\2,\3,\4,\5,\6,\7)/g' | tr '\r' '\n' } for f in $(git ls-files \*.c) do do_replacement <"$f" >"$f.tmp" mv "$f.tmp" "$f" done The result was manually inspected and then reformatted to match the style of the surrounding code. Finally, using `git grep OPTION_CALLBACK \*.c`, leftover results which were not handled by the script were manually transformed. Signed-off-by: Denton Liu <liu.denton@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
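The shape of the conversion on a made-up option; the macro and struct layout follow parse-options.h, while the `--depth` option and its callback are invented for the illustration:

```
#include "git-compat-util.h"
#include "parse-options.h"

/* invented callback for the illustration */
static int parse_depth_cb(const struct option *opt, const char *arg, int unset)
{
	return 0;
}

/* before: a bare OPTION_CALLBACK struct literal */
static struct option old_style[] = {
	{ OPTION_CALLBACK, 0, "depth", NULL, N_("depth"),
	  N_("limit history depth"), 0, parse_depth_cb },
	OPT_END()
};

/* after: the intent-revealing macro */
static struct option new_style[] = {
	OPT_CALLBACK(0, "depth", NULL, N_("depth"),
		     N_("limit history depth"), parse_depth_cb),
	OPT_END()
};
```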
2020-04-22Merge branch 'jt/connectivity-check-optim-in-partial-clone'Junio C Hamano1-7/+0
Simplify the commit ancestry connectedness check in a partial clone repository in which "promised" objects are assumed to be obtainable lazily on-demand from promisor remote repositories. * jt/connectivity-check-optim-in-partial-clone: connected: always use partial clone optimization
2020-03-29connected: always use partial clone optimizationJonathan Tan1-7/+0
With 50033772d5 ("connected: verify promisor-ness of partial clone", 2020-01-30), the fast path (checking promisor packs) in check_connected() now passes a subset of the slow path (rev-list) - if all objects to be checked are found in promisor packs, both the fast path and the slow path will pass; otherwise, the fast path will definitely not pass. This means that we can always attempt the fast path whenever we need to do the slow path. The fast path is currently guarded by a flag; therefore, remove that flag. Also, make the fast path fallback to the slow path - if the fast path fails, the failing OID and all remaining OIDs will be passed to rev-list. The main user-visible benefit is the performance of fetch from a partial clone - specifically, the speedup of the connectivity check done before the fetch. In particular, a no-op fetch into a partial clone on my computer was sped up from 7 seconds to 0.01 seconds. This is a complement to the work in 2df1aa239c ("fetch: forgo full connectivity check if --filter", 2020-01-30), which is the child of the aforementioned 50033772d5. In that commit, the connectivity check *after* the fetch was sped up. The addition of the fast path might cause performance reductions in these cases: - If a partial clone or a fetch into a partial clone fails, Git will fruitlessly run rev-list (it is expected that everything fetched would go into promisor packs, so if that didn't happen, it is most likely that rev-list will fail too). - Any connectivity checks done by receive-pack, in the (in my opinion, unlikely) event that a partial clone serves receive-pack. I think that these cases are rare enough, and the performance reduction in this case minor enough (additional object DB access), that the benefit of avoiding a flag outweighs these. Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Reviewed-by: Josh Steadmon <steadmon@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-03-02Merge branch 'ds/partial-clone-fixes'Junio C Hamano1-5/+5
Fix for a bug revealed by a recent change to make the protocol v2 the default. * ds/partial-clone-fixes: partial-clone: avoid fetching when looking for objects partial-clone: demonstrate bugs in partial fetch
2020-02-22partial-clone: avoid fetching when looking for objectsDerrick Stolee1-5/+5
When using partial clone, find_non_local_tags() in builtin/fetch.c checks each remote tag to see if its object also exists locally. There is no expectation that the object exist locally, but this function nevertheless triggers a lazy fetch if the object does not exist. This can be extremely expensive when asking for a commit, as we are completely removed from the context of the non-existent object and thus supply no "haves" in the request. 6462d5eb9a (fetch: remove fetch_if_missing=0, 2019-11-05) removed a global variable that prevented these fetches in favor of a bitflag. However, some object existence checks were not updated to use this flag. Update find_non_local_tags() to use OBJECT_INFO_SKIP_FETCH_OBJECT in addition to OBJECT_INFO_QUICK. The _QUICK option only prevents repreparing the pack-file structures. We need to be extremely careful about supplying _SKIP_FETCH_OBJECT when we expect an object to not exist due to updated refs. This resolves a broken test in t5616-partial-clone.sh. Signed-off-by: Derrick Stolee <dstolee@microsoft.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-02-14Merge branch 'tb/commit-graph-object-dir'Junio C Hamano1-1/+1
The code to compute the commit-graph has been taught to use a more robust way to tell if two object directories refer to the same thing. * tb/commit-graph-object-dir: commit-graph.h: use odb in 'load_commit_graph_one_fd_st' commit-graph.c: remove path normalization, comparison commit-graph.h: store object directory in 'struct commit_graph' commit-graph.h: store an odb in 'struct write_commit_graph_context' t5318: don't pass non-object directory to '--object-dir'
2020-02-04commit-graph.h: store an odb in 'struct write_commit_graph_context'Taylor Blau1-1/+1
There are lots of places in 'commit-graph.h' where a function either has (or almost has) a full 'struct object_directory *', accesses '->path', and then throws away the rest of the struct. This can cause headaches when comparing the locations of object directories across alternates (e.g., in the case of deciding if two commit-graph layers can be merged). These paths are normalized with 'normalize_path_copy()' which mitigates some comparison issues, but not all [1]. Replace usage of 'char *object_dir' with 'odb->path' by storing a 'struct object_directory *' in the 'write_commit_graph_context' structure. This is an intermediate step towards getting rid of all path normalization in 'commit-graph.c'. Resolving a user-provided '--object-dir' argument now requires that we compare it to the known alternates for equality. Prior to this patch, an unknown '--object-dir' argument would silently exit with status zero. This can clearly lead to unintended behavior, such as verifying commit-graphs that aren't in a repository's own object store (or one of its alternates), or causing a typo to mask a legitimate commit-graph verification failure. Make this error non-silent by 'die()'-ing when the given '--object-dir' does not match any known alternate object store. [1]: In my testing, for example, I can get one side of the commit-graph code to fill object_dir with "./objects" and the other with just "objects". Signed-off-by: Taylor Blau <me@ttaylorr.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-01-30fetch: forgo full connectivity check if --filterJonathan Tan1-1/+10
If a filter is specified, we do not need a full connectivity check on the contents of the packfile we just fetched; we only need to check that the objects referenced are promisor objects. This significantly speeds up fetches into repositories that have many promisor objects, because during the connectivity check, all promisor objects are enumerated (to mark them UNINTERESTING), and that takes a significant amount of time. Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Reviewed-by: Jonathan Nieder <jrnieder@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-01-06Merge branch 'ds/commit-graph-set-size-mult'Junio C Hamano1-3/+1
The code to write split commit-graph file(s) upon fetching computed bogus value for the parameter used in splitting the resulting files, which has been corrected. * ds/commit-graph-set-size-mult: commit-graph: prefer default size_mult when given zero
2020-01-02commit-graph: prefer default size_mult when given zeroDerrick Stolee1-3/+1
In 50f26bd ("fetch: add fetch.writeCommitGraph config setting", 2019-09-02), the fetch builtin added the capability to write a commit-graph using the "--split" feature. This feature creates multiple commit-graph files, and those can merge based on a set of "split options" including a size multiple. The default size multiple is 2, which intends to provide a log_2 N depth of the commit-graph chain where N is the number of commits. However, I noticed during dogfooding that my commit-graph chains were becoming quite large when left only to builds by 'git fetch'. It turns out that in split_graph_merge_strategy(), we default the size_mult variable to 2 except we override it with the context's split_opts if they exist. In builtin/fetch.c, we create such a split_opts, but do not populate it with values. This problem is due to two failures: 1. It is unclear that we can add the flag COMMIT_GRAPH_WRITE_SPLIT with a NULL split_opts. 2. If we have a non-NULL split_opts, then we override the default values even if a zero value is given. Correct both of these issues. First, do not override size_mult when the options provide a zero value. Second, stop creating a split_opts in the fetch builtin. Signed-off-by: Derrick Stolee <dstolee@microsoft.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-12-06Merge branch 'rs/use-skip-prefix-more'Junio C Hamano1-9/+3
Code cleanup. * rs/use-skip-prefix-more: name-rev: use skip_prefix() instead of starts_with() push: use skip_prefix() instead of starts_with() shell: use skip_prefix() instead of starts_with() fmt-merge-msg: use skip_prefix() instead of starts_with() fetch: use skip_prefix() instead of starts_with()
2019-12-01Merge branch 'jt/fetch-remove-lazy-fetch-plugging'Junio C Hamano1-3/+2
"git fetch" codepath had a big "do not lazily fetch missing objects when I ask if something exists" switch. This has been corrected by marking the "does this thing exist?" calls with "if not please do not lazily fetch it" flag. * jt/fetch-remove-lazy-fetch-plugging: promisor-remote: remove fetch_if_missing=0 clone: remove fetch_if_missing=0 fetch: remove fetch_if_missing=0
2019-12-01Merge branch 'en/doc-typofix'Junio C Hamano1-1/+1
Docfix. * en/doc-typofix: Fix spelling errors in no-longer-updated-from-upstream modules multimail: fix a few simple spelling errors sha1dc: fix trivial comment spelling error Fix spelling errors in test commands Fix spelling errors in messages shown to users Fix spelling errors in names of tests Fix spelling errors in comments of testcases Fix spelling errors in code comments Fix spelling errors in documentation outside of Documentation/ Documentation: fix a bunch of typos, both old and new
2019-12-01Merge branch 'js/fetch-multi-lockfix'Junio C Hamano1-2/+8
Fetching from multiple remotes into the same repository in parallel had a bad interaction with the recent change to (optionally) update the commit-graph after a fetch job finishes, as these parallel fetches compete with each other. Which has been corrected. * js/fetch-multi-lockfix: fetch: avoid locking issues between fetch.jobs/fetch.writeCommitGraph fetch: add the command-line option `--write-commit-graph`
2019-12-01Merge branch 'rt/fetch-message-fix'Junio C Hamano1-1/+1
A small message update. * rt/fetch-message-fix: fetch.c: fix typo in a warning message
2019-11-27fetch: use skip_prefix() instead of starts_with()René Scharfe1-9/+3
Get rid of magic numbers by letting skip_prefix() set the pointer "what". Signed-off-by: René Scharfe <l.s.r@web.de> Acked-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
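The general pattern behind this cleanup, sketched on an invented example; skip_prefix() and starts_with() are the real helpers from git-compat-util.h, while the surrounding functions are illustrative only:

```
#include "git-compat-util.h"

/* before: test the prefix, then skip it with a magic length */
static const char *strip_heads_old(const char *refname)
{
	if (starts_with(refname, "refs/heads/"))
		return refname + 11;    /* 11 == strlen("refs/heads/") */
	return refname;
}

/* after: skip_prefix() checks and advances in one step, no magic number */
static const char *strip_heads_new(const char *refname)
{
	const char *what;

	if (skip_prefix(refname, "refs/heads/", &what))
		return what;
	return refname;
}
```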
2019-11-10Fix spelling errors in code commentsElijah Newren1-1/+1
Reported-by: Jens Schleusener <Jens.Schleusener@fossies.org> Signed-off-by: Elijah Newren <newren@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-11-08fetch: remove fetch_if_missing=0Jonathan Tan1-3/+2
In fetch_pack() (and all functions it calls), pass OBJECT_INFO_SKIP_FETCH_OBJECT whenever we query an object that could be a tree or blob that we do not want to be lazy-fetched even if it is absent. Thus, the only lazy-fetches occurring for trees and blobs are when resolving deltas. Thus, we can remove fetch_if_missing=0 from builtin/fetch.c. Remove this, and also add a test ensuring that such objects are not lazy-fetched. (We might be able to remove fetch_if_missing=0 from other places too, but I have limited myself to builtin/fetch.c in this commit because I have not written tests for the other commands yet.) Note that commits and tags may still be lazy-fetched. I limited myself to objects that could be trees or blobs here because Git does not support creating such commit- and tag-excluding clones yet, and even if such a clone were manually created, Git does not have good support for fetching a single commit (when fetching a commit, it and all its ancestors would be sent). Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-11-06fetch: avoid locking issues between fetch.jobs/fetch.writeCommitGraphJohannes Schindelin1-1/+2
When both `fetch.jobs` and `fetch.writeCommitGraph` are set, we currently try to write the commit graph in each of the concurrent fetch jobs, which frequently leads to error messages like this one: fatal: Unable to create '.../.git/objects/info/commit-graphs/commit-graph-chain.lock': File exists. Let's avoid this by holding off from writing the commit graph until all fetch jobs are done. Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-11-06fetch: add the command-line option `--write-commit-graph`Johannes Schindelin1-1/+6
This option overrides the config setting `fetch.writeCommitGraph`, if both are set. Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-11-02fetch.c: fix typo in a warning messageRalf Thielow1-1/+1
Signed-off-by: Ralf Thielow <ralf.thielow@gmail.com> Reviewed-by: Jonathan Nieder <jrnieder@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-10-24fetch: delay fetch_if_missing=0 until after configJonathan Tan1-2/+2
Suppose, from a repository that has ".gitmodules", we clone with --filter=blob:none: git clone --filter=blob:none --no-checkout \ https://kernel.googlesource.com/pub/scm/git/git Then we fetch: git -C git fetch This will cause a "unable to load config blob object", because the fetch_config_from_gitmodules() invocation in cmd_fetch() will attempt to load ".gitmodules" (which Git knows to exist because the client has the tree of HEAD) while fetch_if_missing is set to 0. fetch_if_missing is set to 0 too early - ".gitmodules" here should be lazily fetched. Git must set fetch_if_missing to 0 before the fetch because as part of the fetch, packfile negotiation happens (and we do not want to fetch any missing objects when checking existence of objects), but we do not need to set it so early. Move the setting of fetch_if_missing to the earliest possible point in cmd_fetch(), right before any fetching happens. Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-10-15Merge branch 'js/trace2-fetch-push'Junio C Hamano1-7/+15
Dev support. * js/trace2-fetch-push: transport: push codepath can take arbitrary repository push: add trace2 instrumentation fetch: add trace2 instrumentation
2019-10-15Merge branch 'ew/hashmap'Junio C Hamano1-14/+18
Code clean-up of the hashmap API, both users and implementation. * ew/hashmap: hashmap_entry: remove first member requirement from docs hashmap: remove type arg from hashmap_{get,put,remove}_entry OFFSETOF_VAR macro to simplify hashmap iterators hashmap: introduce hashmap_free_entries hashmap: hashmap_{put,remove} return hashmap_entry * hashmap: use *_entry APIs for iteration hashmap_cmp_fn takes hashmap_entry params hashmap_get{,_from_hash} return "struct hashmap_entry *" hashmap: use *_entry APIs to wrap container_of hashmap_get_next returns "struct hashmap_entry *" introduce container_of macro hashmap_put takes "struct hashmap_entry *" hashmap_remove takes "const struct hashmap_entry *" hashmap_get takes "const struct hashmap_entry *" hashmap_add takes "struct hashmap_entry *" hashmap_get_next takes "const struct hashmap_entry *" hashmap_entry_init takes "struct hashmap_entry *" packfile: use hashmap_entry in delta_base_cache_entry coccicheck: detect hashmap_entry.hash assignment diff: use hashmap_entry_init on moved_entry.ent
2019-10-15Merge branch 'js/fetch-jobs'Junio C Hamano1-17/+107
"git fetch --jobs=<n>" allowed <n> parallel jobs when fetching submodules, but this did not apply to "git fetch --multiple" that fetches from multiple remote repositories. It now does. * js/fetch-jobs: fetch: let --jobs=<n> parallelize --multiple, too
2019-10-07Merge branch 'ms/fetch-follow-tag-optim'Junio C Hamano1-8/+10
The code used in following tags in "git fetch" has been optimized. * ms/fetch-follow-tag-optim: fetch: use oidset to keep the want OIDs for faster lookup
2019-10-07hashmap_entry: remove first member requirement from docsEric Wong1-1/+1
Comments stating that "struct hashmap_entry" must be the first member in a struct are no longer valid. Suggested-by: Phillip Wood <phillip.wood123@gmail.com> Signed-off-by: Eric Wong <e@80x24.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-10-07hashmap: introduce hashmap_free_entriesEric Wong1-3/+3
`hashmap_free_entries' behaves like `container_of' and passes the offset of the hashmap_entry struct to the internal `hashmap_free_' function, allowing the function to free any struct pointer regardless of where the hashmap_entry field is located. `hashmap_free' no longer takes any arguments aside from the hashmap itself. Signed-off-by: Eric Wong <e@80x24.org> Reviewed-by: Derrick Stolee <stolee@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-10-07hashmap_cmp_fn takes hashmap_entry paramsEric Wong1-4/+5
Another step in eliminating the requirement of hashmap_entry being the first member of a struct. Signed-off-by: Eric Wong <e@80x24.org> Reviewed-by: Derrick Stolee <stolee@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-10-07hashmap_get{,_from_hash} return "struct hashmap_entry *"Eric Wong1-4/+7
Update callers to use hashmap_get_entry, hashmap_get_entry_from_hash or container_of as appropriate. This is another step towards eliminating the requirement of hashmap_entry being the first field in a struct. Signed-off-by: Eric Wong <e@80x24.org> Reviewed-by: Derrick Stolee <stolee@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-10-07hashmap_add takes "struct hashmap_entry *"Eric Wong1-1/+1
This is less error-prone than "void *" as the compiler now detects invalid types being passed. Signed-off-by: Eric Wong <e@80x24.org> Reviewed-by: Derrick Stolee <stolee@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-10-07hashmap_entry_init takes "struct hashmap_entry *"Eric Wong1-1/+1
C compilers do type checking to make life easier for us. So rely on that and update all hashmap_entry_init callers to take "struct hashmap_entry *" to avoid future bugs while improving safety and readability. Signed-off-by: Eric Wong <e@80x24.org> Reviewed-by: Derrick Stolee <stolee@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-10-06fetch: let --jobs=<n> parallelize --multiple, tooJohannes Schindelin1-17/+107
So far, `--jobs=<n>` only parallelizes submodule fetches/clones, not `--multiple` fetches, which is unintuitive, given that the option's name does not say anything about submodules in particular. Let's change that. With this patch, also fetches from multiple remotes are parallelized. For backwards-compatibility (and to prepare for a use case where submodule and multiple-remote fetches may need different parallelization limits), the config setting `submodule.fetchJobs` still only controls the submodule part of `git fetch`, while the newly-introduced setting `fetch.parallel` controls both (but can be overridden for submodules with `submodule.fetchJobs`). Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-10-03fetch: add trace2 instrumentationJosh Steadmon1-7/+15
Add trace2 regions to fetch-pack.c and builtin/fetch.c to better track time spent in the various phases of a fetch: * listing refs * negotiation for protocol versions v0-v2 * fetching refs * consuming refs Signed-off-by: Josh Steadmon <steadmon@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
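A sketch of what such instrumentation looks like with the trace2 API; the region names below are illustrative and not necessarily the ones this patch added:

```
#include "git-compat-util.h"
#include "repository.h"
#include "trace2.h"

static void fetch_phases_sketch(struct repository *r)
{
	trace2_region_enter("fetch", "remote_refs", r);
	/* ... list refs from the remote ... */
	trace2_region_leave("fetch", "remote_refs", r);

	trace2_region_enter("fetch", "consume_refs", r);
	/* ... update local refs, write FETCH_HEAD ... */
	trace2_region_leave("fetch", "consume_refs", r);
}
```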
2019-09-30Merge branch 'ds/commit-graph-on-fetch'Junio C Hamano1-0/+15
A configuration variable tells "git fetch" to write the commit graph after finishing. * ds/commit-graph-on-fetch: fetch: add fetch.writeCommitGraph config setting
2019-09-18Merge branch 'md/list-objects-filter-combo'Junio C Hamano1-6/+3
The list-objects-filter API (used to create a sparse/lazy clone) learned to take a combined filter specification. * md/list-objects-filter-combo: list-objects-filter-options: make parser void list-objects-filter-options: clean up use of ALLOC_GROW list-objects-filter-options: allow mult. --filter strbuf: give URL-encoding API a char predicate fn list-objects-filter-options: make filter_spec a string_list list-objects-filter-options: move error check up list-objects-filter: implement composite filters list-objects-filter-options: always supply *errbuf list-objects-filter: put omits set in filter struct list-objects-filter: encapsulate filter components
2019-09-18Merge branch 'cc/multi-promisor'Junio C Hamano1-19/+10
Teach the lazy clone machinery that there can be more than one promisor remote and consult them in order when downloading missing objects on demand. * cc/multi-promisor: Move core_partial_clone_filter_default to promisor-remote.c Move repository_format_partial_clone to promisor-remote.c Remove fetch-object.{c,h} in favor of promisor-remote.{c,h} remote: add promisor and partial clone config to the doc partial-clone: add multiple remotes in the doc t0410: test fetching from many promisor remotes builtin/fetch: remove unique promisor remote limitation promisor-remote: parse remote.*.partialclonefilter Use promisor_remote_get_direct() and has_promisor_remote() promisor-remote: use repository_format_partial_clone promisor-remote: add promisor_remote_reinit() promisor-remote: implement promisor_remote_get_direct() Add initial support for many promisor remotes fetch-object: make functions return an error code t0410: remove pipes after git commands
2019-09-16fetch: use oidset to keep the want OIDs for faster lookupMasaya Suzuki1-8/+10
During git-fetch, the client checks if the advertised tags' OIDs are already in the fetch request's want OID set. This check is done in a linear scan. For a repository that has a lot of refs, repeating this scan takes 15+ minutes. In order to speed this up, create an oidset for the other refs' OIDs. Signed-off-by: Masaya Suzuki <masayasuzuki@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
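A minimal sketch of the oidset-based membership test (oidset.h); in the real code the set would be built once and reused for every advertised tag, whereas this illustration builds it inline for brevity:

```
#include "cache.h"
#include "oidset.h"

/* Is this advertised tag's OID already among the ones we are asking for? */
static int tag_already_wanted_sketch(const struct object_id *wants,
				     size_t nr_wants,
				     const struct object_id *tag_oid)
{
	struct oidset want = OIDSET_INIT;
	size_t i;
	int found;

	for (i = 0; i < nr_wants; i++)
		oidset_insert(&want, &wants[i]);

	found = oidset_contains(&want, tag_oid); /* O(1) instead of a rescan */

	oidset_clear(&want);
	return found;
}
```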
2019-09-03fetch: add fetch.writeCommitGraph config settingDerrick Stolee1-0/+15
The commit-graph feature is now on by default, and is being written during 'git gc' by default. Typically, Git only writes a commit-graph when a 'git gc --auto' command passes the gc.auto setting to actually do work. This means that a commit-graph will typically fall behind the commits that are being used every day. To stay updated with the latest commits, add a step to 'git fetch' to write a commit-graph after fetching new objects. The fetch.writeCommitGraph config setting enables writing a split commit-graph, so on average the cost of writing this file is very small. Occasionally, the commit-graph chain will collapse to a single level, and this could be slow for very large repos. For additional use, adjust the default to be true when feature.experimental is enabled. Signed-off-by: Derrick Stolee <dstolee@microsoft.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-08-19pull, fetch: add --set-upstream optionCorentin BOMPARD1-1/+50
Add the --set-upstream option to git pull/fetch which lets the user set the upstream configuration (branch.<current-branch-name>.merge and branch.<current-branch-name>.remote) for the current branch. A typical use-case is: git clone http://example.com/my-public-fork git remote add main http://example.com/project-main-repo git pull --set-upstream main master or, instead of the last line: git fetch --set-upstream main master git merge # or git rebase This is mostly equivalent to cloning project-main-repo (which sets upstream) and then "git remote add" my-public-fork, but may feel more natural for people using a hosting system which allows forking from the web UI. This functionality is analogous to "git push --set-upstream". Signed-off-by: Corentin BOMPARD <corentin.bompard@etu.univ-lyon1.fr> Signed-off-by: Nathan BERBEZIER <nathan.berbezier@etu.univ-lyon1.fr> Signed-off-by: Pablo CHABANNE <pablo.chabanne@etu.univ-lyon1.fr> Signed-off-by: Matthieu Moy <git@matthieu-moy.fr> Patch-edited-by: Matthieu Moy <git@matthieu-moy.fr> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-08-06l10n: reformat some localized strings for v2.23.0Jean-Noël Avila1-4/+11
Signed-off-by: Jean-Noël Avila <jn.avila@free.fr> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-07-09Merge branch 'ds/fetch-disable-force-notice'Junio C Hamano1-1/+33
"git fetch" and "git pull" reports when a fetch results in non-fast-forward updates to let the user notice unusual situation. The commands learned "--no-shown-forced-updates" option to disable this safety feature. * ds/fetch-disable-force-notice: pull: add --[no-]show-forced-updates passthrough fetch: warn about forced updates in branch listing fetch: add --[no-]show-forced-updates argument
2019-07-09Merge branch 'nd/fetch-multi-gc-once'Junio C Hamano1-6/+11
"git fetch" that grabs from a group of remotes learned to run the auto-gc only once at the very end. * nd/fetch-multi-gc-once: fetch: only run 'gc' once when fetching multiple remotes
2019-07-09Merge branch 'ds/close-object-store'Junio C Hamano1-1/+1
The commit-graph file is now part of the "files that the runtime may keep open file descriptors on, all of which would need to be closed when done with the object store", and the file descriptor to an existing commit-graph file now is closed before "gc" finalizes a new instance to replace it. * ds/close-object-store: packfile: rename close_all_packs to close_object_store packfile: close commit-graph in close_all_packs commit-graph: use raw_object_store when closing
2019-07-09Merge branch 'fc/fetch-with-import-fix'Junio C Hamano1-10/+18
Code restructuring during 2.20 period broke fetching tags via "import" based transports. * fc/fetch-with-import-fix: fetch: fix regression with transport helpers fetch: make the code more understandable fetch: trivial cleanup t5801 (remote-helpers): add test to fetch tags t5801 (remote-helpers): cleanup refspec stuff
2019-06-28list-objects-filter-options: make filter_spec a string_listMatthew DeVore1-6/+3
Make the filter_spec string a string_list rather than a raw C string. The list of strings must be concatenated together to make a complete filter_spec. A future patch will use this capability to build "combine:" filter specs gradually. A strbuf would seem to be a more natural choice for this object, but it unfortunately requires initialization besides just zero'ing out the memory. This results in all container structs, and all containers of those structs, etc., also requiring initialization. Initializing them all would be more cumbersome than simply using a string_list, which behaves properly when its contents are zero'd. For the purposes of code simplification, change behavior in how filter specs are conveyed over the protocol: do not normalize the tree:<depth> filter specs since there should be no server in existence that supports tree:# but not tree:#k etc. Helped-by: Junio C Hamano <gitster@pobox.com> Signed-off-by: Matthew DeVore <matvore@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
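A sketch of the concatenation step, assuming git's string_list and strbuf APIs; an illustration rather than the list-objects-filter-options code itself:

    #include "cache.h"
    #include "string-list.h"

    /* Join the stored pieces back into one complete filter spec. */
    static char *expand_filter_spec(const struct string_list *parts)
    {
            struct strbuf buf = STRBUF_INIT;
            const struct string_list_item *item;

            for_each_string_list_item(item, parts)
                    strbuf_addstr(&buf, item->string);
            return strbuf_detach(&buf, NULL);
    }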
2019-06-25builtin/fetch: remove unique promisor remote limitationChristian Couder1-15/+5
As the infrastructure for more than one promisor remote has been introduced in previous patches, we can remove code that forbids the registration of more than one promisor remote. Signed-off-by: Christian Couder <chriscool@tuxfamily.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-06-25promisor-remote: parse remote.*.partialclonefilterChristian Couder1-1/+1
This makes it possible to specify a different partial clone filter for each promisor remote. Signed-off-by: Christian Couder <chriscool@tuxfamily.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-06-25Use promisor_remote_get_direct() and has_promisor_remote()Christian Couder1-5/+6
Instead of using the repository_format_partial_clone global and fetch_objects() directly, let's use has_promisor_remote() and promisor_remote_get_direct(). This way all the configured promisor remotes will be taken into account, not only the one specified by extensions.partialClone. Also when cloning or fetching using a partial clone filter, remote.origin.promisor will be set to "true" instead of setting extensions.partialClone to "origin". This makes it possible to use many promisor remotes just by fetching from them. Signed-off-by: Christian Couder <chriscool@tuxfamily.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-06-21fetch: warn about forced updates in branch listingDerrick Stolee1-1/+24
The --[no-]show-forced-updates option in 'git fetch' can be confusing for some users, especially if it is enabled via config setting and not by argument. Add advice to warn the user that the (forced update) messages were not listed. Additionally, warn users when the forced update check takes longer than ten seconds, and recommend that they disable the check. These messages can be disabled by the advice.fetchShowForcedUpdates config setting. Signed-off-by: Derrick Stolee <dstolee@microsoft.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-06-21fetch: add --[no-]show-forced-updates argumentDerrick Stolee1-1/+10
After updating a set of remote refs during a 'git fetch', we walk the commits in the new ref value and not in the old ref value to discover if the update was a forced update. This results in two things happening during the command: 1. The line including the ref update has an additional "(forced-update)" marker at the end. 2. The ref log for that remote branch includes a bit saying that the update is a forced update. For many situations, this forced-update message happens infrequently, or is a small bit of information among many ref updates. Many users ignore these messages, but the calculation required here slows down their fetches significantly. Keep in mind that they do not have the opportunity to calculate a commit-graph file containing the newly-fetched commits, so these comparisons can be very slow. Add a '--[no-]show-forced-updates' option that allows a user to skip this calculation. The only permanent result is dropping the forced-update bit in the reflog. Include a new fetch.showForcedUpdates config setting that allows this behavior without including the argument in every command. The config setting is overridden by the command-line arguments. Signed-off-by: Derrick Stolee <dstolee@microsoft.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
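The check being made optional is essentially an ancestry test; a minimal sketch assuming in_merge_bases() from commit-reach.h, not the literal builtin/fetch.c code:

    #include "cache.h"
    #include "commit.h"
    #include "commit-reach.h"

    /* A forced update is one where the old tip is not an ancestor of the
     * new tip; this walk is what --no-show-forced-updates skips. */
    static int is_forced_update(struct commit *old_tip, struct commit *new_tip)
    {
            return !in_merge_bases(old_tip, new_tip);
    }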
2019-06-19fetch: only run 'gc' once when fetching multiple remotesNguyễn Thái Ngọc Duy1-6/+11
In multiple remotes mode, git-fetch is launched for n-1 remotes and the last remote is handled by the current process. Each of these processes will in turn run 'gc' at the end. This is not really a problem because even if multiple 'gc --auto' processes run at the same time we still handle it correctly. It does show multiple "auto packing in the background" messages though. And we may waste some resources when gc actually runs because we still do some stuff before checking the lock and moving it to background. So let's try to avoid that. We should only need one 'gc' run after all objects and references are added anyway. Add a new option --no-auto-gc that will be used by those n-1 processes. 'gc --auto' will always run on the main fetch process (*). (*) even if we fetch remotes in parallel at some point in the future, this should still be fine because we should "join" all those processes before this step. Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Acked-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
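A rough sketch of the division of labor, assuming the argv-array and run-command APIs of that era; the helper below is hypothetical, not the patch itself. The n-1 child fetches pass --no-auto-gc, and only the main process runs 'gc --auto' at the very end.

    #include "cache.h"
    #include "argv-array.h"
    #include "run-command.h"

    /* Fetch one remote in a child process, suppressing its auto-gc. */
    static int fetch_one_child(const char *remote)
    {
            struct argv_array argv = ARGV_ARRAY_INIT;
            int ret;

            argv_array_pushl(&argv, "fetch", "--no-auto-gc", remote, NULL);
            ret = run_command_v_opt(argv.argv, RUN_GIT_CMD);
            argv_array_clear(&argv);
            return ret;
    }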
2019-06-12packfile: rename close_all_packs to close_object_storeDerrick Stolee1-1/+1
The close_all_packs() method is now responsible for more than just pack-files. It also closes the commit-graph and the multi-pack-index. Rename the function to be more descriptive of its larger role. The name also fits because the input parameter is a raw_object_store. Signed-off-by: Derrick Stolee <dstolee@microsoft.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-06-04fetch: fix regression with transport helpersFelipe Contreras1-2/+3
Commit e198b3a740 changed the behavior of fetch with regard to tags. Before, null oids were not ignored; now they are, regardless of whether the refs have been explicitly cleared or not. e198b3a740 (fetch: replace string-list used as a look-up table with a hashmap) When using a transport helper the oids can certainly be null. So now tags are ignored and fetching them is impossible. This patch fixes that by having a specific flag that is set only when we explicitly want to ignore the refs, restoring the original behavior. Signed-off-by: Felipe Contreras <felipe.contreras@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-06-04fetch: make the code more understandableFelipe Contreras1-7/+9
The comment makes it seem as if the condition is the other way around. The exception is when the oid is null, so check for that. Signed-off-by: Felipe Contreras <felipe.contreras@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-06-04fetch: trivial cleanupFelipe Contreras1-3/+8
Create a helper function to clear an item. The way items are cleared has changed, and will change again soon. No functional changes. Signed-off-by: Felipe Contreras <felipe.contreras@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-03-20Merge branch 'jk/no-sigpipe-during-network-transport'Junio C Hamano1-0/+2
On platforms where "git fetch" is killed with SIGPIPE (e.g. OSX), the upload-pack that runs on the other end that hangs up after detecting an error could cause "git fetch" to die with a signal, which led to a flakey test. "git fetch" now ignores SIGPIPE during the network portion of its operation (this is not a problem as we check the return status from our write(2)s). * jk/no-sigpipe-during-network-transport: fetch: ignore SIGPIPE during network operation fetch: avoid calling write_or_die()
2019-03-05fetch: ignore SIGPIPE during network operationJeff King1-0/+2
The default SIGPIPE behavior can be useful for a command that generates a lot of output: if the receiver of our output goes away, we'll be notified asynchronously to stop generating it (typically by killing the program). But for a command like fetch, which is primarily concerned with receiving data and writing it to disk, an unexpected SIGPIPE can be awkward. We're already checking the return value of all of our write() calls, and dying due to the signal takes away our chance to gracefully handle the error. On Linux, we wouldn't generally see SIGPIPE at all during fetch. If the other side of the network connection hangs up, we'll see ECONNRESET. But on OS X, we get a SIGPIPE, and the process is killed. This causes t5570 to racily fail, as we sometimes die by signal (instead of the expected die() call) when the server side hangs up. Let's ignore SIGPIPE during the network portion of the fetch, which will cause our write() to return EPIPE, giving us consistent behavior across platforms. This fixes the test flakiness, but note that it stops short of fixing the larger problem. The server side hit a fatal error, sent us an "ERR" packet, and then hung up. We notice the failure because we're trying to write to a closed socket. But by dying immediately, we never actually read the ERR packet and report its content to the user. This is a (racy) problem on all platforms. So this patch lays the groundwork from which that problem might be fixed consistently, but it doesn't actually fix it. Note the placement of the SIGPIPE handling. The absolute minimal change would be to ignore SIGPIPE only when we're writing. But twiddling the signal handler for each write call is inefficient and a maintenance burden. On the opposite end of the spectrum, we could simply declare that fetch does not need SIGPIPE handling, since it doesn't generate a lot of output, and we could just ignore it at the start of cmd_fetch(). This patch takes a middle ground. It ignores SIGPIPE during the network operation (which is admittedly most of the program, since the actual network operations are all done under the hood by the transport code). So it's still pretty coarse. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
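The shape of the fix, assuming git's sigchain helpers from sigchain.h and transport_fetch_refs() from transport.h; a sketch, not the exact hunk:

    #include "cache.h"
    #include "sigchain.h"
    #include "transport.h"

    static int fetch_over_network(struct transport *transport, struct ref *refs)
    {
            int ret;

            sigchain_push(SIGPIPE, SIG_IGN);   /* write() now fails with EPIPE */
            ret = transport_fetch_refs(transport, refs);
            sigchain_pop(SIGPIPE);             /* restore previous handling */
            return ret;
    }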
2019-02-11Fix typos in translatable strings for v2.21.0Jean-Noël Avila1-1/+1
Signed-off-by: Jean-Noël Avila <jn.avila@free.fr> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-02-06Merge branch 'jk/loose-object-cache-oid'Junio C Hamano1-4/+3
Code clean-up. * jk/loose-object-cache-oid: prefer "hash mismatch" to "sha1 mismatch" sha1-file: avoid "sha1 file" for generic use in messages sha1-file: prefer "loose object file" to "sha1 file" in messages sha1-file: drop has_sha1_file() convert has_sha1_file() callers to has_object_file() sha1-file: convert pass-through functions to object_id sha1-file: modernize loose header/stream functions sha1-file: modernize loose object file functions http: use struct object_id instead of bare sha1 update comment references to sha1_object_info() sha1-file: fix outdated sha1 comment references
2019-02-05Merge branch 'nd/fetch-compact-update'Junio C Hamano1-2/+6
"git fetch" output cleanup. * nd/fetch-compact-update: fetch: prefer suffix substitution in compact fetch.output
2019-02-05Merge branch 'js/filter-options-should-use-plain-int'Junio C Hamano1-1/+6
Update the protocol message specification to allow only the limited use of scaled quantities. This is to ensure potential compatibility issues will not get out of hand. * js/filter-options-should-use-plain-int: filter-options: expand scaled numbers tree:<depth>: skip some trees even when collecting omits list-objects-filter: teach tree:# how to handle >0
2019-01-29Merge branch 'sb/submodule-recursive-fetch-gets-the-tip'Junio C Hamano1-9/+2
"git fetch --recurse-submodules" may not fetch the necessary commit that is bound to the superproject, which is getting corrected. * sb/submodule-recursive-fetch-gets-the-tip: fetch: ensure submodule objects fetched submodule.c: fetch in submodules git directory instead of in worktree submodule: migrate get_next_submodule to use repository structs repository: repo_submodule_init to take a submodule struct submodule: store OIDs in changed_submodule_names submodule.c: tighten scope of changed_submodule_names struct submodule.c: sort changed_submodule_names before searching it submodule.c: fix indentation sha1-array: provide oid_array_filter
2019-01-29Merge branch 'cc/fetch-error-message-fix'Junio C Hamano1-2/+4
Error message fix. * cc/fetch-error-message-fix: fetch: fix extensions.partialclone name in error message
2019-01-27fetch: prefer suffix substitution in compact fetch.outputNguyễn Thái Ngọc Duy1-2/+6
I have a remote named "jch" and it has a branch with the same name. And fetch.output is set to "compact". Fetching this remote looks like this From https://github.com/gitster/git + eb7fd39f6b...835363af2f jch -> */jch (forced update) 6f11fd5edb..59b12ae96a nd/config-move-to -> jch/* * [new branch] nd/diff-parseopt -> jch/* * [new branch] nd/the-index-final -> jch/* Notice that the local side of branch jch starts with "*" instead of ending with it like the rest. It's not exactly wrong. It just looks weird. This patch changes the find-and-replace code a bit to try finding prefix first before falling back to strstr() which finds a substring from left to right. Now we have something less OCD From https://github.com/gitster/git + eb7fd39f6b...835363af2f jch -> jch/* (forced update) 6f11fd5edb..59b12ae96a nd/config-move-to -> jch/* * [new branch] nd/diff-parseopt -> jch/* * [new branch] nd/the-index-final -> jch/* Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
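A plain-C illustration of the substitution order shown in the example output above (not the builtin/fetch.c code): prefer replacing the trailing path component of the local name, and only fall back to the first substring hit.

    #include <stdio.h>
    #include <string.h>

    /* Compact "jch/jch" against remote ref "jch": prefer "jch/*" over "*/jch". */
    static void compact(char *out, size_t outsz,
                        const char *local, const char *remote)
    {
            size_t llen = strlen(local), rlen = strlen(remote);

            if (llen > rlen && !strcmp(local + llen - rlen, remote) &&
                local[llen - rlen - 1] == '/') {
                    snprintf(out, outsz, "%.*s*", (int)(llen - rlen), local);
            } else {
                    const char *p = strstr(local, remote);
                    if (p)
                            snprintf(out, outsz, "%.*s*%s",
                                     (int)(p - local), local, p + rlen);
                    else
                            snprintf(out, outsz, "%s", local);
            }
    }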
2019-01-15filter-options: expand scaled numbersJosh Steadmon1-1/+6
When communicating with a remote server or a subprocess, use expanded numbers rather than numbers with scaling suffix in the object filter spec (e.g. "limit:blob=1k" becomes "limit:blob=1024"). Update the protocol docs to note that clients should always perform this expansion, to allow for more compatibility between server implementations. Signed-off-by: Josh Steadmon <steadmon@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
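A plain-C sketch of the expansion itself; illustration only, and the exact set of accepted suffixes here is an assumption rather than git's actual parser:

    #include <stdlib.h>

    /* "1k" -> 1024, "2m" -> 2097152, plain digits pass through unchanged. */
    static unsigned long expand_scaled(const char *s)
    {
            char *end;
            unsigned long v = strtoul(s, &end, 10);

            switch (*end) {
            case 'k': case 'K': return v << 10;
            case 'm': case 'M': return v << 20;
            case 'g': case 'G': return v << 30;
            default: return v;
            }
    }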
2019-01-14fetch: fix extensions.partialclone name in error messageChristian Couder1-2/+4
There is "extensions.partialclone" and "core.partialCloneFilter", but not "core.partialclone". Only "extensions.partialclone" is meant to contain a remote name. While at it, let's wrap the relevant code lines to keep them at a reasonable length. Signed-off-by: Christian Couder <chriscool@tuxfamily.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-01-08convert has_sha1_file() callers to has_object_file()Jeff King1-4/+3
The only remaining callers of has_sha1_file() actually have an object_id already. They can use the "object" variant, rather than dereferencing the hash themselves. The code changes here were completely generated by the included coccinelle patch. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
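The shape of the mechanical conversion, as a hedged before/after sketch; both functions existed in git at the time, and the wrapper functions below are only illustrative:

    #include "cache.h"

    static int have_it_before(const struct object_id *oid)
    {
            return has_sha1_file(oid->hash);   /* peeks at the raw hash */
    }

    static int have_it_after(const struct object_id *oid)
    {
            return has_object_file(oid);       /* uses the object_id directly */
    }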
2018-12-09fetch: ensure submodule objects fetchedStefan Beller1-9/+2
Currently when git-fetch is asked to recurse into submodules, it dispatches a plain "git-fetch -C <submodule-dir>" (with some submodule related options such as prefix and recursing strategy, but) without any information of the remote or the tip that should be fetched. But this default fetch is not sufficient, as a newly fetched commit in the superproject could point to a commit in the submodule that is not in the default refspec. This is common in workflows like Gerrit's. When fetching a Gerrit change under review (from refs/changes/??), the commits in that change likely point to submodule commits that have not been merged to a branch yet. Fetch a submodule object by id if the object that the superproject points to cannot be found. For now this object is fetched from the 'origin' remote as we defer getting the default remote to a later patch. A list of new submodule commits is already generated in certain conditions (by check_for_new_submodule_commits()); this new feature invokes that function in more situations. The submodule checks were done only when a ref in the superproject changed; these checks were extended to also be performed when fetching into FETCH_HEAD for completeness, and a test for that is added too. Signed-off-by: Stefan Beller <sbeller@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-11-18Merge branch 'jk/unused-parameter-fixes'Junio C Hamano1-0/+2
Various functions have been audited for "-Wunused-parameter" warnings and bugs in them got fixed. * jk/unused-parameter-fixes: midx: double-check large object write loop assert NOARG/NONEG behavior of parse-options callbacks parse-options: drop OPT_DATE() apply: return -1 from option callback instead of calling exit(1) cat-file: report an error on multiple --batch options tag: mark "--message" option with NONEG show-branch: mark --reflog option as NONEG format-patch: mark "--no-numbered" option with NONEG status: mark --find-renames option with NONEG cat-file: mark batch options with NONEG pack-objects: mark index-version option as NONEG ls-files: mark exclude options as NONEG am: handle --no-patch-format option apply: mark include/exclude options as NONEG
2018-11-13Merge branch 'jc/war-on-string-list'Junio C Hamano1-46/+100
Replace three string-list instances used as look-up tables in "git fetch" with hashmaps. * jc/war-on-string-list: fetch: replace string-list used as a look-up table with a hashmap
2018-11-06assert NOARG/NONEG behavior of parse-options callbacksJeff King1-0/+2
When we define a parse-options callback, the flags we put in the option struct must match what the callback expects. For example, a callback which does not handle the "unset" parameter should only be used with PARSE_OPT_NONEG. But since the callback and the option struct are not defined next to each other, it's easy to get this wrong (as earlier patches in this series show). Fortunately, the compiler can help us here: compiling with -Wunused-parameter can show us which callbacks ignore their "unset" parameters (and likewise, ones that ignore "arg" expect to be triggered with PARSE_OPT_NOARG). But after we've inspected a callback and determined that all of its callers use the right flags, what do we do next? We'd like to silence the compiler warning, but do so in a way that will catch any wrong calls in the future. We can do that by actually checking those variables and asserting that they match our expectations. Because this is such a common pattern, we'll introduce some helper macros. The resulting messages aren't as descriptive as we could make them, but the file/line information from BUG() is enough to identify the problem (and anyway, the point is that these should never be seen). Each of the annotated callbacks in this patch triggers -Wunused-parameter, and was manually inspected to make sure all callers use the correct options (so none of these BUGs should be triggerable). Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
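A sketch of the resulting pattern, assuming the BUG_ON_OPT_NEG()/BUG_ON_OPT_ARG() helpers this series introduces in parse-options.h; the callback itself is illustrative, not one from builtin/fetch.c:

    #include "cache.h"
    #include "parse-options.h"

    /* Registered with PARSE_OPT_NOARG | PARSE_OPT_NONEG, so neither "arg"
     * nor "unset" should ever be seen; assert that instead of ignoring them. */
    static int option_parse_example(const struct option *opt,
                                    const char *arg, int unset)
    {
            int *flag = opt->value;

            BUG_ON_OPT_NEG(unset);
            BUG_ON_OPT_ARG(arg);
            *flag = 1;
            return 0;
    }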
2018-11-01fetch: replace string-list used as a look-up table with a hashmapJunio C Hamano1-46/+100
In the find_non_local_tags() helper function (used to implement "follow tags"), we use string_list_has_string() on two string lists as a way to see if a refname has already been processed, etc. All this code predates more modern in-core lookup APIs like hashmap; replace them with two hashmaps and one string list---the hashmaps are used for look-ups and the string list is to keep the order of items in the returned result stable (which is the one thing a hashmap does worse than a string-list). Similarly, get_ref_map() uses another string-list as a look-up table for existing refs. Replace it with a hashmap. Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-10-19Merge branch 'jt/avoid-ls-refs'Junio C Hamano1-6/+26
Over some transports, fetching objects with an exact commit object name can be done without first seeing the ref advertisements. The code has been optimized to exploit this. * jt/avoid-ls-refs: fetch: do not list refs if fetching only hashes transport: list refs before fetch if necessary transport: do not list refs if possible transport: allow skipping of ref listing
2018-10-07fetch: do not list refs if fetching only hashesJonathan Tan1-6/+26
If only hash literals are given on a "git fetch" command-line, tag following is not requested, and the fetch is done using protocol v2, a list of refs is not required from the remote. Therefore, optimize by invoking transport_get_remote_refs() only if we need the refs. Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-09-21fetch: in partial clone, check presence of targetsJonathan Tan1-2/+13
When fetching an object that is known as a promisor object to the local repository, the connectivity check in quickfetch() in builtin/fetch.c succeeds, causing object transfer to be bypassed. However, this should not happen if that object is merely promised and not actually present. Because this happens, when a user invokes "git fetch origin <sha-1>" on the command-line, the <sha-1> object may not actually be fetched even though the command returns an exit code of 0. This is a similar issue (but with a different cause) to the one fixed by a0c9016abd ("upload-pack: send refs' objects despite "filter"", 2018-07-09). Therefore, update quickfetch() to also directly check for the presence of all objects to be fetched. Its documentation and name are also updated to better reflect what it does. Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-09-17Merge branch 'ab/fetch-tags-noclobber'Junio C Hamano1-7/+13
The rules used by "git push" and "git fetch" to determine if a ref can or cannot be updated were inconsistent; specifically, fetching to update existing tags was allowed even though tags are supposed to be unmoving anchoring points. "git fetch" was taught to forbid updates to existing tags without the "--force" option. * ab/fetch-tags-noclobber: fetch: stop clobbering existing tags without --force fetch: document local ref updates with/without --force push doc: correct lies about how push refspecs work push doc: move mention of "tag <tag>" later in the prose push doc: remove confusing mention of remote merger fetch tests: add a test for clobbering tag behavior push tests: use spaces in interpolated string push tests: make use of unused $1 in test description fetch: change "branch" to "reference" in --force -h output
2018-09-17Merge branch 'jk/cocci'Junio C Hamano1-3/+3
spatch transformation to replace boolean uses of !hashcmp() to newly introduced oideq() is added, and applied, to regain performance lost due to support of multiple hash algorithms. * jk/cocci: show_dirstat: simplify same-content check read-cache: use oideq() in ce_compare functions convert hashmap comparison functions to oideq() convert "hashcmp() != 0" to "!hasheq()" convert "oidcmp() != 0" to "!oideq()" convert "hashcmp() == 0" to hasheq() convert "oidcmp() == 0" to oideq() introduce hasheq() and oideq() coccinelle: use <...> for function exclusion
2018-09-17Merge branch 'ds/reachable'Junio C Hamano1-0/+1
The code for computing history reachability has been shuffled, gained a bunch of new tests to cover it, and is being improved. * ds/reachable: commit-reach: correct accidental #include of C file commit-reach: use can_all_from_reach commit-reach: make can_all_from_reach... linear commit-reach: replace ref_newer logic test-reach: test commit_contains test-reach: test can_all_from_reach_with_flags test-reach: test reduce_heads test-reach: test get_merge_bases_many test-reach: test is_descendant_of test-reach: test in_merge_bases test-reach: create new test tool for ref_newer commit-reach: move can_all_from_reach_with_flags upload-pack: generalize commit date cutoff upload-pack: refactor ok_to_give_up() upload-pack: make reachable() more generic commit-reach: move commit_contains from ref-filter commit-reach: move ref_newer from remote.c commit.h: remove method declarations commit-reach: move walk methods from commit.c
2018-08-31fetch: stop clobbering existing tags without --forceÆvar Arnfjörð Bjarmason1-6/+12
Change "fetch" to treat "+" in refspecs (aka --force) to mean we should clobber a local tag of the same name. This changes the long-standing behavior of "fetch" added in 853a3697dc ("[PATCH] Multi-head fetch.", 2005-08-20). Before this change, all tag fetches effectively had --force enabled. See the git-fetch-script code in fast_forward_local() with the comment: > Tags need not be pointing at commits so there is no way to > guarantee "fast-forward" anyway. That commit and the rest of the history of "fetch" shows that the "+" (--force) part of refspecs was only conceived for branch updates, while tags have accepted any changes from upstream unconditionally and clobbered the local tag object. Changing this behavior has been discussed as early as 2011[1]. The current behavior doesn't make sense to me; it easily results in local tags accidentally being clobbered. We could namespace our tags per-remote and not locally populate refs/tags/*, but as with my 97716d217c ("fetch: add a --prune-tags option and fetch.pruneTags config", 2018-02-09) it's easier to work around the current implementation than to fix the root cause. So this change implements suggestion #1 from Jeff's 2011 E-Mail[1]: "fetch" now only clobbers the tag if either "+" is provided as part of the refspec, or if "--force" is provided on the command-line. This also makes it nicely symmetrical with how "tag" itself works when creating tags. I.e. we refuse to clobber any existing tags unless "--force" is supplied. Now we can refuse all such clobbering, whether it would happen by clobbering a local tag with "tag", or by fetching it from the remote with "fetch". Ref updates outside refs/{tags,heads}/* are still not symmetrical with how "git push" works, as discussed in the recently changed pull-fetch-param.txt documentation. This change brings the two divergent behaviors more into line with one another. I don't think there's any reason "fetch" couldn't fully converge with the behavior used by "push", but that's a topic for another change. One of the tests added in 31b808a032 ("clone --single: limit the fetch refspec to fetched branch", 2012-09-20) is being changed to use --force where a clone would clobber a tag. This changes nothing about the existing behavior of the test. 1. https://public-inbox.org/git/20111123221658.GA22313@sigill.intra.peff.net/ Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-08-31fetch: change "branch" to "reference" in --force -h outputÆvar Arnfjörð Bjarmason1-1/+1
The -h output has been referring to the --force option as forcing the overwriting of local branches, but since "fetch" more generally fetches all sorts of references in all refs/ namespaces, let's talk about forcing the update of a "reference" instead. This wording was initially introduced in 8320199873 ("Rewrite builtin-fetch option parsing to use parse_options().", 2007-12-04). Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-08-29convert "hashcmp() == 0" to hasheq()Jeff King1-1/+1
This is the partner patch to the previous one, but covering the "hash" variants instead of "oid". Note that our coccinelle rule is slightly more complex to avoid triggering the call in hasheq(). I didn't bother to add a new rule to convert: - hasheq(E1->hash, E2->hash) + oideq(E1, E2) Since these are new functions, there won't be any such existing callers. And since most of the code is already using oideq, we're not likely to introduce new ones. We might still see "!hashcmp(E1->hash, E2->hash)" from topics in flight. But because our new rule comes after the existing ones, that should first get converted to "!oidcmp(E1, E2)" and then to "oideq(E1, E2)". Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-08-29convert "oidcmp() == 0" to oideq()Jeff King1-2/+2
Using the more restrictive oideq() should, in the long run, give the compiler more opportunities to optimize these callsites. For now, this conversion should be a complete noop with respect to the generated code. The result is also perhaps a little more readable, as it avoids the "zero is equal" idiom. Since it's so prevalent in C, I think seasoned programmers tend not to even notice it anymore, but it can sometimes make for awkward double negations (e.g., we can drop a few !!oidcmp() instances here). This patch was generated almost entirely by the included coccinelle patch. This mechanical conversion should be completely safe, because we check explicitly for cases where oidcmp() is compared to 0, which is what oideq() is doing under the hood. Note that we don't have to catch "!oidcmp()" separately; coccinelle's standard isomorphisms make sure the two are treated equivalently. I say "almost" because I did hand-edit the coccinelle output to fix up a few style violations (it mostly keeps the original formatting, but sometimes unwraps long lines). Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
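The conversion in miniature; oidcmp() and oideq() are both real git helpers, while the wrapper functions below are purely illustrative:

    #include "cache.h"

    static int same_object_before(const struct object_id *a,
                                  const struct object_id *b)
    {
            return !oidcmp(a, b);   /* the "zero is equal" idiom */
    }

    static int same_object_after(const struct object_id *a,
                                 const struct object_id *b)
    {
            return oideq(a, b);     /* intent is explicit */
    }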
2018-08-15Merge branch 'jt/connectivity-check-after-unshallow'Junio C Hamano1-24/+4
"git fetch" sometimes failed to update the remote-tracking refs, which has been corrected. * jt/connectivity-check-after-unshallow: fetch-pack: unify ref in and out param
2018-08-15Merge branch 'jt/tag-following-with-proto-v2-fix'Junio C Hamano1-1/+1
The wire-protocol v2 relies on the client to send "ref prefixes" to limit the bandwidth spent on the initial ref advertisement. "git fetch $remote branch:branch", which asks for tags that point into the history leading to "branch" to be automatically followed, sent too narrow a prefix and broke the tag following, which has been fixed. * jt/tag-following-with-proto-v2-fix: fetch: send "refs/tags/" prefix upon CLI refspecs t5702: test fetch with multiple refspecs at a time
2018-08-02Merge branch 'jt/fetch-nego-tip'Junio C Hamano1-0/+43
"git fetch" learned a new option "--negotiation-tip" to limit the set of commits it tells the other end as "have", to reduce wasted bandwidth and cycles, which would be helpful when the receiving repository has a lot of refs that have little to do with the history at the remote it is fetching from. * jt/fetch-nego-tip: fetch-pack: support negotiation tip whitelist
2018-08-02Merge branch 'sb/object-store-lookup'Junio C Hamano1-3/+6
lookup_commit_reference() and friends have been updated to find in-core object for a specific in-core repository instance. * sb/object-store-lookup: (32 commits) commit.c: allow lookup_commit_reference to handle arbitrary repositories commit.c: allow lookup_commit_reference_gently to handle arbitrary repositories tag.c: allow deref_tag to handle arbitrary repositories object.c: allow parse_object to handle arbitrary repositories object.c: allow parse_object_buffer to handle arbitrary repositories commit.c: allow get_cached_commit_buffer to handle arbitrary repositories commit.c: allow set_commit_buffer to handle arbitrary repositories commit.c: migrate the commit buffer to the parsed object store commit-slabs: remove realloc counter outside of slab struct commit.c: allow parse_commit_buffer to handle arbitrary repositories tag: allow parse_tag_buffer to handle arbitrary repositories tag: allow lookup_tag to handle arbitrary repositories commit: allow lookup_commit to handle arbitrary repositories tree: allow lookup_tree to handle arbitrary repositories blob: allow lookup_blob to handle arbitrary repositories object: allow lookup_object to handle arbitrary repositories object: allow object_as_type to handle arbitrary repositories tag: add repository argument to deref_tag tag: add repository argument to parse_tag_buffer tag: add repository argument to lookup_tag ...
2018-08-01fetch-pack: unify ref in and out paramJonathan Tan1-24/+4
When a user fetches: - at least one up-to-date ref and at least one non-up-to-date ref, - using HTTP with protocol v0 (or something else that uses the fetch command of a remote helper) some refs might not be updated after the fetch. This bug was introduced in commit 989b8c4452 ("fetch-pack: put shallow info in output parameter", 2018-06-28) which allowed transports to report the refs that they have fetched in a new out-parameter "fetched_refs". If they do so, transport_fetch_refs() makes this information available to its caller. Users of "fetched_refs" rely on the following 3 properties: (1) it is the complete list of refs that was passed to transport_fetch_refs(), (2) it has shallow information (REF_STATUS_REJECT_SHALLOW set if relevant), and (3) it has updated OIDs if ref-in-want was used (introduced after 989b8c4452). In an effort to satisfy (1), whenever transport_fetch_refs() filters the refs sent to the transport, it re-adds the filtered refs to whatever the transport supplies before returning it to the user. However, the implementation in 989b8c4452 unconditionally re-adds the filtered refs without checking if the transport refrained from reporting anything in "fetched_refs" (which it is allowed to do), resulting in an incomplete list, no longer satisfying (1). An earlier effort to resolve this [1] solved the issue by readding the filtered refs only if the transport did not refrain from reporting in "fetched_refs", but after further discussion, it seems that the better solution is to revert the API change that introduced "fetched_refs". This API change was first suggested as part of a ref-in-want implementation that allowed for ref patterns and, thus, there could be drastic differences between the input refs and the refs actually fetched [2]; we eventually decided to only allow exact ref names, but this API change remained even though its necessity was decreased. Therefore, revert this API change by reverting commit 989b8c4452, and make receive_wanted_refs() update the OIDs in the sought array (like how update_shallow() updates shallow information in the sought array) instead. A test is also included to show that the user-visible bug discussed at the beginning of this commit message no longer exists. [1] https://public-inbox.org/git/20180801171806.GA122458@google.com/ [2] https://public-inbox.org/git/86a128c5fb710a41791e7183207c4d64889f9307.1485381677.git.jonathantanmy@google.com/ Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-07-24Merge branch 'jt/connectivity-check-after-unshallow'Junio C Hamano1-56/+94
"git fetch" failed to correctly validate the set of objects it received when making a shallow history deeper, which has been corrected. * jt/connectivity-check-after-unshallow: fetch-pack: write shallow, then check connectivity fetch-pack: implement ref-in-want fetch-pack: put shallow info in output parameter fetch: refactor to make function args narrower fetch: refactor fetch_refs into two functions fetch: refactor the population of peer ref OIDs upload-pack: test negotiation with changing repository upload-pack: implement ref-in-want test-pkt-line: add unpack-sideband subcommand
2018-07-24fetch: send "refs/tags/" prefix upon CLI refspecsJonathan Tan1-1/+1
When performing tag following, in addition to using the server's "include-tag" capability to send tag objects (and emulating it if the server does not support that capability), "git fetch" relies upon the presence of refs/tags/* entries in the initial ref advertisement to locally create refs pointing to the aforementioned tag objects. When using protocol v2, refs/tags/* entries in the initial ref advertisement may be suppressed by a ref-prefix argument, leading to the tag object being downloaded, but the ref not being created. Commit dcc73cf7ff ("fetch: generate ref-prefixes when using a configured refspec", 2018-05-18) ensured that "refs/tags/" is always sent as a ref prefix when "git fetch" is invoked with no refspecs, but not when "git fetch" is invoked with refspecs. Extend that functionality to make it work in both situations. This also necessitates a change to another test, which tested ref advertisement filtering using tag refs - since tag refs are sent by default now, the test has been switched to using branch refs instead. Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>