The diff machinery has two ways to detect changes to set the exit code:
Just comparing hashes and comparing blob contents. The latter is needed
if certain changes have to be ignored, e.g. with --ignore-space-change
or --ignore-matching-lines. It's enabled by the diff_options flag
diff_from_contents.
The code for handling binary files added by 1aaf69e669 (diff: shortcut
for diff'ing two binary SHA-1 objects, 2014-08-16) always uses a quick
hash-only comparison, even when the slow mode is in effect. In that
mode we still need it to report a hash difference as a change for the
purpose of setting the exit code, but it never did. Fix that.
d7b97b7185 (diff: let external diffs report that changes are
uninteresting, 2024-06-09) set diff_from_contents if external diff
programs are allowed. This is the default e.g. for git diff, and so
that change exposed the inconsistency much more widely.
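A hedged illustration of the exit code this fix restores in the slow
mode (the file name is made up; blob.bin stands for a tracked binary
file whose contents were modified):
$ git diff --exit-code --ignore-space-change -- blob.bin >/dev/null; echo $?
1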
Reported-by: Kohei Shibata <shiba200712@gmail.com>
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The diff machinery has two ways to detect changes to set the exit code:
Just comparing hashes and comparing blob contents. The latter is needed
if certain changes have to be ignored, e.g. with --ignore-space-change
or --ignore-matching-lines. It's enabled by the diff_options flag
diff_from_contents.
The slower mode has never considered submodules (and subrepos) to be
changes with --submodule=diff or --submodule=log, which is inconsistent
with --submodule=short (the default). Fix it.
d7b97b7185 (diff: let external diffs report that changes are
uninteresting, 2024-06-09) set diff_from_contents if external diff
programs are allowed. This is the default e.g. for git diff, and so
that change exposed the inconsistency much more widely.
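A hedged illustration of the behaviour after the fix, assuming a
superproject whose submodule has new commits checked out:
$ git diff --exit-code --submodule=short >/dev/null; echo $?   # 1 even before the fix
$ git diff --exit-code --submodule=log   >/dev/null; echo $?   # now 1 as well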
Reported-by: David Hull <david.hull@friendbuy.com>
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The diff machinery has two ways to detect changes to set the exit code:
Just comparing hashes and comparing blob contents. The latter is needed
if certain changes have to be ignored, e.g. with --ignore-space-change
or --ignore-matching-lines. It's enabled by the diff_options flag
diff_from_contents.
The slower mode has never considered copies and renames to be changes,
which is inconsistent with the quicker one. Fix it. Even if we ignore
the file contents (because they are empty or contain only ignored
lines), there is still the metadata change of adding or changing a
filename, so
we need to report it in the exit code.
d7b97b7185 (diff: let external diffs report that changes are
uninteresting, 2024-06-09) set diff_from_contents if external diff
programs are allowed. This is the default e.g. for git diff, and so
that change exposed the inconsistency much more widely.
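A hedged sketch (file names are made up) of the case being fixed; a
pure rename must still yield exit code 1 even when the content
comparison finds nothing to report:
$ git mv old.c new.c
$ git diff --cached --exit-code --ignore-space-change >/dev/null; echo $?
1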
Reported-by: Jorge Luis Martinez Gomez <jol@jol.dev>
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
l10n-2.46.0-rnd2
* tag 'l10n-2.46.0-rnd2' of https://github.com/git-l10n/git-po:
l10n: zh_CN: updated translation for 2.46
l10n: sv.po: Update Swedish translation
l10n: zh_TW: Git 2.46
l10n: Update German translation
l10n: vi: Updated translation for 2.46
l10n: uk: v2.46 update
l10n: bg.po: Updated Bulgarian translation (5734t)
l10n: fr: v2.46.0
l10n: tr: Update Turkish translations
l10n: po-id for 2.46
|
|
Signed-off-by: Teng Long <dyroneteng@gmail.com>
Co-authored-by: 依云 <lilydjwg@gmail.com>
Reviewed-by: 依云 <lilydjwg@gmail.com>
Signed-off-by: Jiang Xin <worldhello.net@gmail.com>
|
|
Signed-off-by: Peter Krefting <peter@softwolves.pp.se>
|
|
* 'l10n/zh-TW/2024-07-24' of github.com:l10n-tw/git-po:
l10n: zh_TW: Git 2.46
|
|
* 'l10n-de-2.46' of github.com:ralfth/git:
l10n: Update German translation
|
|
* 'vi-2.46' of github.com:Nekosha/git-po:
l10n: vi: Updated translation for 2.46
|
|
* '2.46-uk-update' of github.com:arkid15r/git-ukrainian-l10n:
l10n: uk: v2.46 update
|
|
* 'master' of github.com:alshopov/git-po:
l10n: bg.po: Updated Bulgarian translation (5734t)
|
|
* 'l10N_fr_2.46' of github.com:jnavila/git:
l10n: fr: v2.46.0
|
|
* 'tr-l10n' of github.com:bitigchi/git-po:
l10n: tr: Update Turkish translations
|
|
* 'po-id' of github.com:bagasme/git-po:
l10n: po-id for 2.46
|
|
Co-authored-by: Lumynous <lumynou5.tw@gmail.com>
Co-authored-by: Ngoo Ka-iu <willy04wu69@gmail.com>
Co-authored-by: Nightfeather Chen <slat@nightfeather.me>
Co-authored-by: Kisaragi Hiu <mail@kisaragi-hiu.com>
Co-authored-by: hms5232 <hms5232@hhming.moe>
Signed-off-by: Yi-Jyun Pan <pan93412@gmail.com>
|
|
Reviewed-by: Matthias Rüster <matthias.ruester@gmail.com>
Signed-off-by: Ralf Thielow <ralf.thielow@gmail.com>
|
|
Signed-off-by: Vũ Tiến Hưng <newcomerminecraft@gmail.com>
|
|
Co-authored-by: Kate Golovanova <kate@kgthreads.com>
Signed-off-by: Arkadii Yakovets <ark@cho.red>
Signed-off-by: Kate Golovanova <kate@kgthreads.com>
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Hotfix for a topic already in -rc.
* ps/ref-storage-migration-fix:
refs: fix format migration on Cygwin
|
|
Work around asciidoctor's css that renders `monospace` material
in the SYNOPSIS section of manual pages as block elements.
* js/doc-markup-updates-fix:
Doc: fix Asciidoctor css workaround
asciidoctor: fix `synopsis` rendering
|
|
Fix documentation mark-up regression in 2.45.
* ja/doc-markup-updates-fix:
doc: git-clone fix discrepancy between asciidoc and asciidoctor
|
|
Repacking a repository with multi-pack index started making stupid
pack selections in Git 2.45, which has been corrected.
* ds/midx-write-repack-fix:
midx-write: revert use of --stdin-packs
t5319: add failing test case for repack/expire
|
|
The previous step introduced docinfo.html to be used to tweak the
CSS used by asciidoctor, which by default renders <code> inside
<pre> as a block element, breaking the SYNOPSIS section of a few
pages that adopted a new convention we use since Git 2.45.
But in this project, HTML files are all generated. We do not force
any human to write HTML by hand, which is an unusual and cruel
punishment. "*.html" is in the .gitignore file, and "make clean"
removes them. Having a tracked .html file makes "make clean" make
the tree dirty by removing the tracked docinfo.html file.
Let's do an obvious, minimum and stupid workaround to generate that
file at runtime instead. The mark-up is being rethought in a major
way for the next development cycle, and the CSS workaround we added
in the previous step may have to be adjusted, possibly in a large way,
anyway.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
It was reported that t1460-refs-migrate.sh fails when using Cygwin with
errors like the following:
error: could not link file '.git/ref_migration.sr9pEF/reftable' to '.git/reftable': Permission denied
As some debugging surfaced, the root cause of this is that some files of
the newly-initialized ref store are still open when the target format is
the "reftable" format, and Cygwin refuses to rename open files.
Fix this issue by closing the new ref store before renaming its files
into place. This is a slight change in behaviour compared to before,
where we kept the new ref store open and then updated the repository's
ref store to point to it.
While we could re-open the new ref store after we have moved files
around, this is ultimately unnecessary. We know that the only user of
`repo_migrate_ref_storage_format()` is the git-refs(1) command, and it
won't access the ref store after it has been migrated anyway. So
reinitializing the ref store would be a waste of time. Regardless of
that, it is still sensible to leave the repository in a consistent state.
But instead of reinitializing the ref store, we can simply unset the
repo's ref store altogether and let `get_main_ref_store()` lazily
initialize the new ref store as required.
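For reference, the migration that t1460-refs-migrate.sh exercises is
driven by git-refs(1), for example:
$ git refs migrate --ref-format=reftable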
Reported-by: Ramsay Jones <ramsay@ramsayjones.plus.com>
Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Since 76880f0510c (doc: git-clone: apply new documentation formatting
guidelines, 2024-03-29), the synopsis of `git clone`'s manual page is
rendered differently than before; its parent commit did the same for
`git init`.
The result looks quite nice. When rendered with AsciiDoc, that is. When
rendered using AsciiDoctor and displayed in a graphical web browser such
as Firefox, Chrome, Edge, etc, the result is quite unpleasant to my eye,
reading something like this:
SYNOPSIS
git clone
[
--template=
<template-directory>]
[
-l
] [
-s
] [
--no-hardlinks
] [
-q
] [
[... continuing like this ...]
The reason is that AsciiDoctor's default style sheet contains this (see
https://github.com/asciidoctor/asciidoctor/blob/854923b15533/src/stylesheets/asciidoctor.css#L519-L521
for context):
pre > code {
display: block;
}
It is this `display: block` that forces the parts that are enclosed in
`<code>` tags (such as the `git clone` or the `--template=` part) to be
rendered on their own line.
Side note: This seems not to affect console web browsers like `lynx` or
`w3m`, most likely because most style sheet directions cannot be
respected in text terminals and therefore they seem to punt on style
sheets altogether.
To fix this, let's apply the method recommended by AsciiDoctor in
https://docs.asciidoctor.org/asciidoctor/latest/html-backend/default-stylesheet/#customize-docinfo
to partially override AsciiDoctor's default style sheet so that the
`<code>` sections of the synopsis are no longer each rendered on their
own, individual lines.
This fixes https://github.com/git-for-windows/git/issues/5063.
Even on the Git home page, where AsciiDoctor's default stylesheet is
_not_ used, this change resulted in some unpleasant rendering where not
only the font is changed for the `<code>` sections of the synopsis, but
padding and a different background color make the visual impression
quite uneven. This has been addressed in the meantime, via
https://github.com/git/git-scm.com/commit/a492d0565512.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Alexander Shopov <ash@kambanaria.org>
|
|
Asciidoc.py does not have the concept of generalized roles, whereas
asciidoctor interprets [foo]`blah` as blah with role foo in the
synopsis, in effect making foo disappear from the output. Note that
square brackets not directly followed by an inline markup do not
define a role, which is why we do not have the issue on other parts of
the documentation.
In order to get a consistent result across asciidoctor and
asciidoc.py, the hack is to use the {empty} entity
to split the bracket part from the inline format part.
Signed-off-by: Jean-Noël Avila <jn.avila@free.fr>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Jean-Noël Avila <jn.avila@free.fr>
|
|
This reverts b7d6f23a171 (midx-write.c: use `--stdin-packs` when
repacking, 2024-04-01) and then marks the test created in the previous
change as passing.
The fundamental issue with the reverted change is that the focus on
pack-files separates the object selection from how the multi-pack-index
selects a single pack-file for an object ID with multiple copies among
the tracked pack-files.
The change was made with the intention of improving delta compression in
the resulting pack-file, but that can be resolved with the existing
object list mechanism. There are other potential pitfalls of doing an
object walk at this time if the repository is a blobless partial clone,
and that will require additional testing on top of the one that changes
here.
Signed-off-by: Derrick Stolee <stolee@gmail.com>
Acked-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Emir SARI <emir_sari@icloud.com>
|
|
Update following components:
* builtin/clone.c
* builtin/config.c
* builtin/for-each-repo.c
* builtin/refs.c
* command-list.h
* commit-graph.c
* http.c
* pack-bitmap-write.c
* pack-bitmap.c
* promisor-remote.c
* refs.c
* sequencer.c
Translate following new components:
* pseudo-merge.c
* refs/files-backend.c
Signed-off-by: Bagas Sanjaya <bagasdotme@gmail.com>
|
|
Git 2.45.0 included the change b7d6f23a171 (midx-write.c: use
`--stdin-packs` when repacking, 2024-04-01) which caused the 'git
multi-pack-index repack' command to use 'git pack-objects --stdin-packs'
instead of listing the objects to repack. While this change was
motivated by efficient cross-process communication and the ability to
improve delta compression, it breaks a fundamental function of the
'incremental-repack' task that is enabled by default in Scalar clones or
Git repositories that run 'git maintenance start'.
The 'incremental-repack' task performs a two-step process of the
'expire' and 'repack' subcommands of the 'git multi-pack-index' builtin.
The 'expire' command removes any pack-files that are listed in the
multi-pack-index but contain no referenced objects. The 'repack' task
then finds a batch of pack-files to repack and sends their objects to
'git pack-objects'. Both the pack-files chosen for the batch and the
objects chosen to repack are based on the ones that the multi-pack-index
references. Objects that appear in a pack-file but have a duplicate copy
in a newer pack-file are not considered in this case. Since the
multi-pack-index references only the newest copy of an object, this
allows the next 'incremental-repack' task to remove the pack-files in
the next 'expire' task. This delay is intentional because open read
handles on Windows may block deletion of files.
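The two steps of the task correspond roughly to the following commands
(the batch size shown is only illustrative):
$ git multi-pack-index expire
$ git multi-pack-index repack --batch-size=2g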
However, the mentioned commit changed this behavior to divorce the set
of objects referenced by the multi-pack-index and instead use a set of
"included" and "excluded" pack-files in the 'git pack-objects' builtin.
When a pack-file is selected as "included", only the objects that it
contains but that are not in any "excluded" pack-files are considered
for repacking.
This has led to client repositories failing to remove old pack-files as
they still have some referenced objects. This grows over time until the
point that Git is trying to repack the same pack-files over and over.
For now, create a test case that demonstrates the expected behavior, but
also fails in its final line. The setup here is attempting to recreate a
typical situation for a repository that uses a blobless partial clone.
There would be a large initial pack-file from the clone that is never
selected in the 'repack' batch. There are other pack-files that have a
combination of new objects from incremental fetches and possibly blobs
that are not connected to those incremental fetches; these blobs could
be filled in from commands like 'git checkout' or 'git blame'. The
pack-files also have some overlap on purpose so test-1 has some
duplicates in test-2 and test-2 has some duplicates in test-3.
At the end of the test, the test-2 pack-file still exists though it
should have been expired. This test will pass when reverting the
offending commit.
Signed-off-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Test fix as a follow-up to an already graduated topic.
* jk/am-retry:
t4153: stop redirecting input from /dev/zero
|
|
Doc update.
* tb/pseudo-merge-reachability-bitmap:
Documentation/gitpacking: make sample configs listing blocks
|
|
Doc update.
* ps/pseudo-ref-terminology:
Documentation/glossary: fix double word
|
|
Doc update.
* tb/doc-max-tree-depth-fix:
Documentation: fix default value for core.maxTreeDepth
|
|
Comment fix.
* ch/refs-without-the-repository-fix:
refs: correct the version numbers in a comment
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Build fix.
* js/unit-test-oidtree-cmake-fix:
cmake: fix build of `t-oidtree`
|
|
"git var GIT_SHELL_PATH" should report the path to the shell used
to spawn external commands, but it didn't do so on Windows, which
has been corrected.
* js/var-git-shell-path:
var(win32): do report the GIT_SHELL_PATH that is actually used
run-command: declare the `git_shell_path()` function globally
run-command(win32): resolve the path to the Unix shell early
mingw(is_msys2_sh): handle forward slashes in the `sh.exe` path, too
win32: override `fspathcmp()` with a directory separator-aware version
strvec: declare the `strvec_push_nodup()` function globally
run-command: refactor getting the Unix shell path into its own function
|
|
What happens when http.cookieFile gets the special value "" has
been clarified in the documentation.
* ps/doc-http-empty-cookiefile:
doc: update http.cookieFile with in-memory cookie processing
|
|
"git push '' HEAD:there" used to hit a BUG(); it has been corrected
to die with "fatal: bad repository ''".
* kn/push-empty-fix:
builtin/push: call set_refspecs after validating remote
|
|
The http.cookieFile and http.saveCookies configuration variables
have a few values that need to be avoided, which are now ignored
with warning messages.
* jc/http-cookiefile:
http.c: cookie file tightening
|
|
The test framework learned to take the test body not as a single
string but as a here-document.
* jk/test-body-in-here-doc:
t/.gitattributes: ignore whitespace in chainlint expect files
t: convert some here-doc test bodies
test-lib: allow test snippets as here-docs
chainlint.pl: add tests for test body in heredoc
chainlint.pl: recognize test bodies defined via heredoc
chainlint.pl: check line numbers in expected output
chainlint.pl: force CRLF conversion when opening input files
chainlint.pl: do not spawn more threads than we have scripts
chainlint.pl: only start threads if jobs > 1
chainlint.pl: add test_expect_success call to test snippets
|
|
Tests that use GIT_TEST_SANITIZE_LEAK_LOG feature got their exit
status inverted, which has been corrected.
* rj/test-sanitize-leak-log-fix:
test-lib: GIT_TEST_SANITIZE_LEAK_LOG enabled by default
test-lib: fix GIT_TEST_SANITIZE_LEAK_LOG
|
|
When `core.maxTreeDepth` was originally introduced via be20128bfa (add
core.maxTreeDepth config, 2023-08-31), its default value was 4096.
There have since been a couple of updates to its default value that were
not reflected in the documentation for `core.maxTreeDepth`:
- 4d5693ba05 (lower core.maxTreeDepth default to 2048, 2023-08-31)
- b64d78ad02 (max_tree_depth: lower it for MSVC to avoid stack
overflows, 2023-11-01)
Commit 4d5693ba05 lowers the default to 2048 for platforms with smaller
stack sizes, and commit b64d78ad02 lowers the default even further when
Git is compiled with MSVC.
Neither of these changes was reflected in the documentation, which I
noticed while merging newer releases back into GitHub's private fork
(which contained the original implementation of `core.maxTreeDepth`).
Update the documentation to reflect what the platform-specific default
values are.
Noticed-by: Keith W. Campbell <keithc@ca.ibm.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Remove a spurious "that".
Signed-off-by: Martin Ågren <martin.agren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
This document contains a few sample config snippets. At least with
Asciidoctor, the section headers are rendered *more* indented than the
variables that follow:
[bitmapPseudoMerge "all"]
pattern = "refs/"
...
To address this, wrap these listings in AsciiDoc listing blocks. Remove
the indentation from the section headings. This is similar to how we
handle such sample config elsewhere, e.g., in config.txt.
While we're here, fix the nearby "wiht" typo.
Signed-off-by: Martin Ågren <martin.agren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Commit 852a171018 (am: let command-line options override saved options,
2015-08-04) redirected a few "git am" invocations from /dev/zero, even
though it did not expect "am" to read the input. This was necessary at
the time because those tests used test_terminal, and as described in
18d8c26930 (test_terminal: redirect child process' stdin to a pty,
2015-08-04):
Note that due to the way the code is structured, the child's stdin
pseudo-tty will be closed when we finish reading from our stdin. This
means that in the common case, where our stdin is attached to /dev/null,
the child's stdin pseudo-tty will be closed immediately. Some operations
like isatty(), which git-am uses, require the file descriptor to be
open, and hence if the success of the command depends on such functions,
test_terminal's stdin should be redirected to a source with large amount
of data to ensure that the child's stdin is not closed, e.g.
test_terminal git am --3way </dev/zero
But we later dropped the use of test_terminal in 53ce2e3f0a (am: add
explicit "--retry" option, 2024-06-06). That commit dropped one of the
redirections from /dev/zero but not the other.
In theory the remaining one should not cause any problems, but it turns
out that at least one platform (NonStop) does not have /dev/zero at all.
We never noticed before because it also did not pass the TTY prereq,
meaning these tests were not run at all there until 53ce2e3f0a.
So let's drop the useless /dev/zero mention. There are others in the
test suite, but they are run only for tests marked with EXPENSIVE (so
not typically by default).
Reported-by: Randall S. Becker <rsbecker@nexbridge.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
A handful of entries are added to the GitFAQ document.
* bc/gitfaq-more:
doc: mention that proxies must be completely transparent
gitfaq: add entry about syncing working trees
gitfaq: give advice on using eol attribute in gitattributes
gitfaq: add documentation on proxies
|
|
The http transport can now be told to send request with
authentication material without first getting a 401 response.
* bc/http-proactive-auth:
http: allow authenticating proactively
|
|
Shell script clean-up.
* jc/where-is-bash-for-ci:
ci: unify bash calling convention
|
|
A new warning message is issued when a command has to expand a
sparse index to handle working tree cruft that is outside of the
sparse checkout.
* ds/advice-sparse-index-expansion:
advice: warn when sparse index expands
|
|
Address-looking strings found on the trailer are now placed on the
Cc: list after running through sanitize_address by "git send-email".
* cb/send-email-sanitize-trailer-addresses:
git-send-email: use sanitized address when reading mbox body
|
|
The "ort" merge backend saw one bugfix for a crash that happens
when inner merge gets killed, and assorted code clean-ups.
* en/ort-inner-merge-error-fix:
merge-ort: fix missing early return
merge-ort: convert more error() cases to path_msg()
merge-ort: upon merge abort, only show messages causing the abort
merge-ort: loosen commented requirements
merge-ort: clearer propagation of failure-to-function from merge_submodule
merge-ort: fix type of local 'clean' var in handle_content_merge ()
merge-ort: maintain expected invariant for priv member
merge-ort: extract handling of priv member into reusable function
|
|
The paragraph talks about a change made in c8f815c2 (refs: remove
functions without ref store, 2024-05-07), which is v2.46.0-rc0~119^2
and will be published as part of v2.46, not v2.45.
Signed-off-by: Christian Hesse <mail@eworm.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
A test in reftable library has been rewritten using the unit test
framework.
* cp/unit-test-reftable-record:
t-reftable-record: add tests for reftable_log_record_compare_key()
t-reftable-record: add tests for reftable_ref_record_compare_name()
t-reftable-record: add index tests for reftable_record_is_deletion()
t-reftable-record: add obj tests for reftable_record_is_deletion()
t-reftable-record: add log tests for reftable_record_is_deletion()
t-reftable-record: add ref tests for reftable_record_is_deletion()
t-reftable-record: add comparison tests for obj records
t-reftable-record: add comparison tests for index records
t-reftable-record: add comparison tests for ref records
t-reftable-record: add reftable_record_cmp() tests for log records
t: move reftable/record_test.c to the unit testing framework
|
|
"git push" that pushes only deletion gave an unnecessary and
harmless error message when push negotiation is configured, which
has been corrected.
* jc/disable-push-nego-for-deletion:
push: avoid showing false negotiation errors
|
|
Docfix.
* ri/doc-show-branch-fix:
doc: fix the max number of branches shown by "show-branch"
|
|
Developer build procedure fix.
* tb/dev-build-pedantic-fix:
config.mak.dev: fix typo when enabling -Wpedantic
|
|
Custom control structures we invented more recently have been
taught to the clang-format file.
* rs/clang-format-updates:
clang-format: include kh_foreach* macros in ForEachMacros
|
|
GitWeb update to use committer date consistently in rss/atom feeds.
* am/gitweb-feed-use-committer-date:
gitweb: rss/atom change published/updated date to committer date
|
|
Test suite has been taught not to unnecessarily rely on DNS failing
a bogus external name.
* jk/tests-without-dns:
t/lib-bundle-uri: use local fake bundle URLs
t5551: do not confirm that bogus url cannot be used
t5553: use local url for invalid fetch
|
|
An existing test of oidmap API has been rewritten with the
unit-test framework.
* gt/unit-test-oidmap:
t: migrate helper/test-oidmap.c to unit-tests/t-oidmap.c
|
|
"git describe --dirty --broken" forgot to refresh the index before
seeing if there is any change ("git describe --dirty" correctly did
so), which has been corrected.
* as/describe-broken-refresh-index-fix:
describe: refresh the index when 'broken' flag is used
|
|
A test that no longer leaks has been marked as such.
* rj/t0613-no-longer-leaks:
t0613: mark as leak-free
|
|
A test that no longer leaks has been marked as such.
* rj/t0612-no-longer-leaks:
t0612: mark as leak-free
|
|
On Windows, Unix-like paths like `/bin/sh` make very little sense. In
the best case, they simply don't work, in the worst case they are
misinterpreted as absolute paths that are relative to the drive
associated with the current directory.
To that end, Git does not actually use the path `/bin/sh` that is
recorded e.g. when `run_command()` is called with a Unix shell
command-line. Instead, as of 776297548e (Do not use SHELL_PATH from
build system in prepare_shell_cmd on Windows, 2012-04-17), it
re-interprets `/bin/sh` as "look up `sh` on the `PATH` and use the
result instead".
This is the logic users expect to be followed when running `git var
GIT_SHELL_PATH`.
However, when 1e65721227 (var: add support for listing the shell,
2023-06-27) introduced support for `git var GIT_SHELL_PATH`, Windows was
not special-cased as above, which is why it outputs `/bin/sh` even
though that disagrees with what Git actually uses.
Let's fix this by using the exact same logic as `prepare_shell_cmd()`,
adjusting the Windows-specific `git var GIT_SHELL_PATH` test case to
verify that it actually finds a working executable.
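After the fix, the output on a typical Git for Windows setup would
resemble the following (the exact path is illustrative and depends on
the installation):
$ git var GIT_SHELL_PATH
C:/Program Files/Git/usr/bin/sh.exe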
Reported-by: Phillip Wood <phillip.wood123@gmail.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The intention is to use it in `git var GIT_SHELL_PATH`, therefore we
need this function to stop being file-local only.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
In 776297548e (Do not use SHELL_PATH from build system in
prepare_shell_cmd on Windows, 2012-04-17), the hard-coded path to the
Unix shell was replaced by passing `sh` instead when executing Unix
shell scripts in Git.
This was done because the hard-coded path to the Unix shell is incorrect
on Windows: not only is it a Unix-style absolute path instead of a
Windows one, but Git also uses the runtime prefix feature on Windows,
i.e. the correct path cannot be hard-coded.
Naturally, the `sh` argument will be resolved to the full path of said
executable eventually.
To help fixing the bug where `git var GIT_SHELL_PATH` currently does not
reflect that logic, but shows that incorrect hard-coded Unix-style
absolute path, let's resolve the full path to the `sh` executable early
in the `git_shell_path()` function so that we can use it in `git var`,
too, and be sure that the output is equivalent to what `run_command()`
does when it is asked to execute a command-line using a Unix shell.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Whether the full path to the MSYS2 Bash is specified using backslashes
or forward slashes, in either case the command-line arguments need to be
quoted in the MSYS2-specific manner instead of using regular Win32
command-line quoting rules.
In preparation for `prepare_shell_cmd()` to use the full path to
`sh.exe` (with forward slashes for consistency), let's teach the
`is_msys2_sh()` function about this; otherwise 5580.4 'clone with
backslashed path' would fail once `prepare_shell_cmd()` uses the full
path instead of merely `sh`.
This patch relies on the just-introduced fix where `fspathcmp()` handles
backslashes and forward slashes as equivalent on Windows.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
On Windows, the backslash is the directory separator, even if the
forward slash can be used, too, at least since Windows NT.
This means that the paths `a/b` and `a\b` are equivalent, and
`fspathcmp()` needs to be made aware of that fact.
Note that we have to override both `fspathcmp()` and `fspathncmp()`, and
the former cannot be a mere pre-processor constant that transforms calls
to `fspathcmp(a, b)` into `fspathncmp(a, b, (size_t)-1)` because the
function `report_collided_checkout()` in `unpack-trees.c` wants to
assign `list.cmp = fspathcmp`.
Also note that `fspatheq()` does _not_ need to be overridden because it
calls `fspathcmp()` internally.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
This function differs from `strvec_push()` in that it takes ownership of
the allocated string that is passed as second argument.
This is useful when appending elements to the string array that have
been freshly allocated and serve no further purpose after that.
Without declaring this function globally, call sites would allocate the
memory, only to have `strvec_push()` duplicate the string, and then the
first copy would need to be released. Having this function globally
avoids that kind of unnecessary work.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
This encapsulates the platform-specific logic better.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
When the `oidtree` test helper was turned into a unit test, a new
`lib-oid` source file was added as dependency. This was only done in the
Makefile so far, but also needs to be done in the CMake definition.
This is a companion of ed548408723d (t/: migrate helper/test-oidtree.c
to unit-tests/t-oidtree.c, 2024-06-08).
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
When an end-user runs "git push" with an empty string for the remote
repository name, e.g.
$ git push '' main
"git push" fails with a BUG(). Even though this is a nonsense request
that we want to fail, we shouldn't hit a BUG(). Instead we want to give
a sensible error message, e.g., "fatal: bad repository ''".
This is because since 9badf97c42 (remote: allow resetting url list,
2024-06-14), we reset the remote URL if the provided URL is empty. When
a user of 'remotes_remote_get' tries to fetch a remote with an empty
repo name, the function initializes the remote via 'make_remote'. But
the remote is still not a valid remote, since the URL is empty, so it
tries to add the URL alias using 'add_url_alias'. This in turn will call
'add_url', but since the URL is empty we call 'strvec_clear' on the
`remote->url`. Back in 'remotes_remote_get', we again check if the
remote is valid, which fails, so we return 'NULL' for the 'struct
remote *' value.
The 'builtin/push.c' code, calls 'set_refspecs' before validating the
remote. This worked with empty repo names earlier since we would get a
remote, albeit with an empty URL. With the new changes, we get a 'NULL'
remote value, this causes the check for remote to fail and raises the
BUG in 'set_refspecs'.
Do a simple fix by doing remote validation first. Also add a test to
validate the bug fix. With this, we can also now directly pass remote to
'set_refspecs' instead of it trying to lazily obtain it.
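With the fix in place, the invocation from above fails cleanly instead
of hitting the BUG():
$ git push '' main
fatal: bad repository ''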
Helped-by: Jeff King <peff@peff.net>
Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Karthik Nayak <karthik.188@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Code clean-up.
* rs/simplify-submodule-helper-super-prefix-invocation:
submodule--helper: use strvec_pushf() for --super-prefix
|
|
Typofix.
* as/pathspec-h-typofix:
pathspec: fix typo "glossary-context.txt" -> "glossary-content.txt"
|
|
Documentation only mentions how to read cookies from the given file
and how to save them to the file using http.saveCookies.
But the underlying libcURL also allows HTTP cookies to be kept only in
memory: by using an empty string as the filename, cookies from the
server will be accepted and sent back in successive requests within the
same connection. Document this.
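For example, the in-memory behaviour can be enabled for a single
operation like this (the URL is a placeholder):
$ git -c http.cookieFile="" clone https://example.com/project.git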
Signed-off-by: Piotr Szlazak <piotr.szlazak@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
As we currently describe in t/README, it can happen that:
Some tests run "git" (or "test-tool" etc.) without properly checking
the exit code, or git will invoke itself and fail to ferry the
abort() exit code to the original caller.
Therefore, GIT_TEST_SANITIZE_LEAK_LOG=true needs to be set to
capture all memory leaks triggered by our tests.
It seems unnecessary to force users to remember this option, as
forgetting it could lead to missed memory leaks.
We could solve the problem by making it "true" by default, but that
might suggest we think "false" makes sense, which isn't the case.
Therefore, the best approach is to remove the option entirely while
maintaining the capability to detect memory leaks in blind spots of our
tests.
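In other words, an invocation that previously needed the extra knob,
roughly like this (exact build flags may vary):
$ GIT_TEST_SANITIZE_LEAK_LOG=true make SANITIZE=leak test
now captures the same leak logs with a plain "make SANITIZE=leak test".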
Signed-off-by: Rubén Justo <rjusto@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The ".expect" files in t/chainlint/ are snippets of expected output from
the chainlint script, and do not necessarily conform to our usual code
style. Especially with the recent change to retain line numbers, blank
lines in the input script end up with trailing whitespace as we print
"3 " for line 3, for example. The point of these files is to match the
output verbatim, so let's not complain about the trailing spaces.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The t1404 script checks a lot of output from Git which contains single
quotes. Because the test snippets are themselves wrapped in the same
single-quotes, we have to resort to using $SQ to match them. This is
error-prone and makes the tests harder to read.
Instead, let's use the new here-doc feature added in the previous
commit, which lets us write anything in the test body we want (except
the here-doc end marker on a line by itself, of course).
Note that we do use "\" in our marker to avoid interpolation (which is
the whole point). But we don't use "<<-", as we want to preserve
whitespace in the snippet (and running with "-v" before and after shows
that we produce the exact same output, except with the ugly $SQ
references fixed).
I just converted every test here, even though only some of them use
$SQ. But it would be equally correct to mix-and-match styles if we don't
mind the inconsistency.
I've also converted a few tests in t0600 which were moved from t1404 (I
had written this patch before they were moved, but it seemed worth
porting over the changes rather than losing them).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Most test snippets are wrapped in single quotes, like:
test_expect_success 'some description' '
do_something
'
This sometimes makes the snippets awkward to write, because you can't
easily use single quotes within them. We sometimes work around this with
$SQ, or by loosening regexes to use "." instead of a literal quote, or
by using double quotes when we'd prefer to use single-quotes (and just
adding extra backslash-escapes to avoid interpolation).
This commit adds another option: feeding the snippet via the function's
stdin. This doesn't conflict with anything the snippet would want to do,
because we always redirect its stdin from /dev/null anyway (which we'll
continue to do).
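With this change, the same test can instead be written with its body fed
as a here-document on stdin (the syntax shown later in this series):
test_expect_success 'some description' - <<\EOT
	do_something
EOT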
A few notes on the implementation:
- it would be nice to push this down into test_run_, but we can't, as
test_expect_success and test_expect_failure want to see the actual
script content to report it for verbose-mode. A helper function
limits the amount of duplication in those callers here.
- The helper function is a little awkward to call, as you feed it the
name of the variable you want to set. The more natural thing in
shell would be command substitution like:
body=$(body_or_stdin "$2")
but that loses trailing whitespace. There are tricks around this,
like:
body=$(body_or_stdin "$2"; printf .)
body=${body%.}
but we'd prefer to keep such tricks in the helper, not in each
caller.
- I implemented the helper using a sequence of "read" calls. Together
with "-r" and unsetting the IFS, this preserves incoming whitespace.
An alternative is to use "cat" (which then requires the gross "."
trick above). But this saves us a process, which is probably a good
thing. The "read" builtin does use more read() syscalls than
necessary (one per byte), but that is almost certainly a win over a
separate process.
Both are probably slower than passing a single-quoted string, but
the difference is lost in the noise for a script that I converted as
an experiment.
- I handle test_expect_success and test_expect_failure here. If we
like this style, we could easily extend it to other spots (e.g.,
lazy_prereq bodies) on top of this patch.
- even though we are using "local", we have to be careful about our
variable names. Within test_expect_success, any variable we declare
with local will be seen as local by the test snippets themselves (so
it wouldn't persist between tests like normal variables would).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The chainlint.pl script recently learned about the upcoming:
test_expect_success 'some test' - <<\EOT
TEST_BODY
EOT
syntax, where TEST_BODY should be checked in the usual way. Let's make
sure this works by adding a few tests. The "here-doc-body" file tests
the basic syntax, including an embedded here-doc which we should still
be able to recognize.
Likewise the "here-doc-body-indent" checks the same thing, but using the
"<<-" operator. We wouldn't expect this to be used normally, but we
would not want to accidentally miss a body that uses it. The
"pathological" variant checks the opposite: we don't get confused by an
indented tag within the here-doc body.
The "here-doc-double" tests the handling of two here-doc tags on the
same line. This is not something we'd expect anybody to do in practice,
but the code was written defensively to handle this, so let's make sure
it works.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
In order to check tests for semantic problems, chainlint.pl scans test
scripts, looking for tests defined as:
test_expect_success [prereq] title '
body
'
where `body` is a single string which is then treated as a standalone
chunk of code and "linted" to detect semantic issues. (The same happens
for `test_expect_failure` definitions.)
The introduction of test definitions in which the test body is instead
presented via a heredoc rather than as a single string creates a blind
spot in the linting process since such invocations are not recognized by
chainlint.pl.
Prepare for this new style by also recognizing tests defined as:
test_expect_success [prereq] title - <<\EOT
body
EOT
A minor complication is that chainlint.pl has never considered heredoc
bodies significant since it doesn't scan them for semantic problems,
thus it has always simply thrown them away. However, with the new
`test_expect_success` calling sequence, heredoc bodies become
meaningful, thus need to be captured.
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
While working on chainlint.pl recently, we introduced some bugs that
showed incorrect line numbers in the output. But it was hard to notice,
since we sanitize the output by removing all of the line numbers! It
would be nice to retain these so we can catch any regressions.
The main reason we sanitize is for maintainability: we concatenate all
of the test snippets into a single file, so it's hard for each ".expect"
file to know at which offset its test input will be found. We can handle
that by storing the per-test line numbers in the ".expect" files, and
then dynamically offsetting them as we build the concatenated test and
expect files together.
The changes to the ".expect" files look like tedious boilerplate, but it
actually makes adding new tests easier. You can now just run:
perl chainlint.pl chainlint/foo.test |
tail -n +2 >chainlint/foo.expect
to save the output of the script minus the comment headers (after
checking that it is correct, of course). Whereas before you had to strip
the line numbers. The conversions here were done mechanically using
something like the script above, and then spot-checked manually.
It would be possible to do all of this in shell via the Makefile, but it
gets a bit complicated (and requires a lot of extra processes). Instead,
I've written a short perl script that generates the concatenated files
(we already depend on perl, since chainlint.pl uses it). Incidentally,
this improves a few other things:
- we incorrectly used $(CHAINLINTTMP_SQ) inside a double-quoted
string. So if your test directory required quoting, like:
make "TEST_OUTPUT_DIRECTORY=/tmp/h'orrible"
we'd fail the chainlint tests.
- the shell in the Makefile didn't handle &&-chaining correctly in its
loops (though in practice the "sed" and "cat" invocations are not
likely to fail).
- likewise, the sed invocation to strip numbers was hiding the exit
code of chainlint.pl itself. In practice this isn't a big deal;
since there are linter violations in the test files, we expect it to
exit non-zero. But we could later use exit codes to distinguish
serious errors from expected ones.
- we now use a constant number of processes, instead of scaling with
the number of test scripts. So it should be a little faster (on my
machine, "make check-chainlint" goes from 133ms to 73ms).
There are some alternatives to this approach, but I think this is still
a good intermediate step:
1. We could invoke chainlint.pl individually on each test file, and
compare it to the expected output (and possibly using "make" to
avoid repeating already-done checks). This is a much bigger change
(and we'd have to figure out what to do with the "# LINT" lines in
the inputs). But in this case we'd still want the "expect" files to
be annotated with line numbers. So most of what's in this patch
would be needed anyway.
2. Likewise, we could run a single chainlint.pl and feed it all of the
scripts (with "--jobs=1" to get deterministic output). But we'd
still need to annotate the scripts as we did here, and we'd still
need to either assemble the "expect" file, or break apart the
script output to compare to each individual ".expect" file.
So we may pursue those in the long run, but this patch gives us more
robust tests without too much extra work or moving in a useless
direction.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The lexer in chainlint.pl can't handle CRLF line endings; it complains
about an internal error in scan_token() if we see one. For example, in
our Windows CI environment:
$ perl chainlint.pl chainlint/for-loop.test | cat -v
Thread 2 terminated abnormally: internal error scanning character '^M'
This doesn't break "make check-chainlint" (yet), because we assemble a
concatenated input by passing the contents of each file through "sed".
And the "sed" we use will strip out the CRLFs. But the next patch is
going to rework this a bit, which does break check-chainlint on Windows.
Plus it's probably nicer to folks on Windows who might work on chainlint
itself and write new tests.
In theory we could fix the parser to handle this, but it's not really
worth the trouble. We should be able to ask the input layer to translate
the line endings for us. In fact, I'd expect this to happen by default,
as perl's documentation claims Win32 uses the ":unix:crlf" PERLIO layer
by default ("unix" here just refers to using read/write syscalls, and
then "crlf" layers the translation on top). However, this doesn't seem
to be the case in our Windows CI environment. I didn't dig into the
exact reason, but it is perhaps because we are using an msys build of
perl rather than a "true" Win32 build.
At any rate, it is easy-ish to just ask explicitly for the conversion.
In the above example, setting PERLIO=crlf in the environment is enough
to make it work. Curiously, though, this doesn't work when invoking
chainlint via "make". Again, I didn't dig into it, but it may have to do
with msys programs calling Windows programs or vice versa.
We can make it work consistently by just explicitly asking for CRLF
translation when we open the files. This will even work on non-Windows
platforms, though we wouldn't really expect to find CRLF files there.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The chainlint.pl script spawns worker threads to check many scripts in
parallel. This is good if you feed it a lot of scripts. But if you give
it few (or one), then the overhead of spawning the threads dominates. We
can easily notice that we have fewer scripts than threads and scale back
as appropriate.
This patch reduces the time to run:
time for i in chainlint/*.test; do
perl chainlint.pl $i
done >/dev/null
on my system from ~4.1s to ~1.1s, where I have 8+8 cores.
As with the previous patch, this isn't the usual way we run chainlint
(we feed many scripts at once, which is why it supports threading in the
first place). So this won't make a big difference in the real world, but
it may help us out in the future, and it makes experimenting with and
debugging the chainlint tests a bit more pleasant.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
If the system supports threads, chainlint.pl will always spawn worker
threads to do the real work. But when --jobs=1, this is pointless, since
we could just do the work in the main thread. And spawning even a single
thread has a high overhead. For example, on my Linux system, running:
for i in chainlint/*.test; do
perl chainlint.pl --jobs=1 $i
done >/dev/null
takes ~1.7s without this patch, and ~1.1s after. We don't usually spawn
a bunch of individual chainlint.pl processes (instead we feed several
scripts at once, and the parallelism outweighs the setup cost). But it's
something we've considered doing, and since we already have fallback
code for systems without thread support, it's pretty easy to make this
work.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The chainlint tests are a series of individual files, each holding a
test body. The "make check-chainlint" target assembles them into a
single file, adding a "test_expect_success" function call around each.
Let's instead include that function call in the files themselves. This
is a little more boilerplate, but has several advantages:
1. You can now run chainlint manually on snippets with just "perl
chainlint.pl chainlint/foo.test". This can make developing and
debugging a little easier.
2. Many of the tests implicitly relied on the syntax of the lines
added by the Makefile (in particular the use of single-quotes).
This assumption is much easier to see when the single-quotes are
alongside the test body.
3. We had no way to test how the chainlint program handled
various test_expect_success lines themselves. Now we'll be able to
check variations.
The change to the .test files was done mechanically, using the same
test names they would have been assigned by the Makefile (this is
important to match the expected output). The Makefile has the minimal
change to drop the extra lines; there are more cleanups possible but a
future patch in this series will rewrite this substantially anyway.
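For illustration, a converted .test file now carries its own wrapper,
roughly like this (the file name and body are made up):
# chainlint/example.test
test_expect_success 'example' '
	command1 &&
	command2
'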
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The http.cookiefile configuration variable is used to call
curl_easy_setopt() to set CURLOPT_COOKIEFILE and if http.savecookies
is set, the same value is used for CURLOPT_COOKIEJAR. The former is
used only to read cookies at startup, the latter is used to write
cookies at the end.
The manual pages https://curl.se/libcurl/c/CURLOPT_COOKIEFILE.html
and https://curl.se/libcurl/c/CURLOPT_COOKIEJAR.html talk about two
interesting special values.
* "" (an empty string) given to CURLOPT_COOKIEFILE means not to
read cookies from any file upon startup.
* It is not specified what "" (an empty string) given to
CURLOPT_COOKIEJAR does; presumably open a file whose name is an
empty string and write cookies to it? In any case, that is not
what we want to see happen, ever.
* "-" (a dash) given to CURLOPT_COOKIEFILE makes cURL read cookies
from the standard input, and given to CURLOPT_COOKIEJAR makes
cURL write cookies to the standard output. Neither of which we
want ever to happen.
So, let's make sure we avoid these nonsense cases. Specifically,
when http.cookiefile is set to "-", ignore it with a warning, and when
it is set to "" and http.savecookies is set, ignore http.savecookies
with a warning.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
When making a request over HTTP(S), Git only sends authentication if it
receives a 401 response. Thus, if a repository is open to the public
for reading, Git will typically never ask for authentication for fetches
and clones.
However, there may be times when a user would like to authenticate
nevertheless. For example, a forge may give higher rate limits to users
who authenticate because they are easier to contact in case of excessive
use. Or it may be useful for a known heavy user, such as an internal
service, to proactively authenticate so its use can be monitored and, if
necessary, throttled.
Let's make this possible with a new option, "http.proactiveAuth". This
option specifies a type of authentication which can be used to
authenticate against the host in question. This is necessary because we
lack the WWW-Authenticate header to provide us details; similarly, we
cannot accept certain types of authentication because we require
information from the server, such as a nonce or challenge, to
successfully authenticate.
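A sketch of how this might be enabled; the "auto" and "basic" values
follow from the discussion below, so treat the exact spellings as
assumptions rather than a definitive reference:
$ git config --global http.proactiveAuth auto
# or, to force a specific scheme:
$ git config --global http.proactiveAuth basic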
If we're in auto mode and we got a username and password, set the
authentication scheme to Basic. libcurl will not send authentication
proactively unless there's a single choice of allowed authentication,
and we know in this case we didn't get an authtype entry telling us what
scheme to use, or we would have taken a different codepath and written
the header ourselves. In any event, of the other schemes that libcurl
supports, Digest and NTLM require a nonce or challenge, which means that
they cannot work with proactive auth, and GSSAPI does not use a username
and password at all, so Basic is the only logical choice among the
built-in options.
Note that the existing http_proactive_auth variable signifies proactive
auth if there are already credentials, which is different from the
functionality we're adding, which always seeks credentials even if none
are provided. Nonetheless, t5540 tests the existing behavior for
WebDAV-based pushes to an open repository without credentials, so we
preserve it. While at first this may seem an insecure and bizarre
decision, it may be that authentication is done with TLS certificates,
in which case it might actually provide a quite high level of security.
Expand the variable to use an enum to handle the additional cases and a
helper function to distinguish our new cases from the old ones.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
We already document in the FAQ that proxies must be completely
transparent and not modify the request or response in any way, but add
similar documentation to the http.proxy entry. We know that while the
FAQ is very useful, users sometimes are less likely to read in favor of
the documentation specific to an option or command, so adding it in both
places will help users be adequately informed.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Users very commonly want to sync their working tree with uncommitted
changes across machines, often to carry across in-progress work or
stashes. Despite this not being a recommended approach, users want to
do it and are not dissuaded by suggestions not to, so let's recommend a
sensible technique.
The technique that many users are using is their preferred cloud syncing
service, which is a bad idea. Users have reported problems where they
end up with duplicate files that won't go away (with names like "file.c
2"), broken references, oddly named references that have date stamps
appended to them, missing objects, and general corruption and data loss.
That's because almost all of these tools sync file by file, which is a
great technique if your project is a single word processing document or
spreadsheet, but is utterly abysmal for Git repositories because they
don't necessarily snapshot the entire repository correctly. They also
tend to sync the files immediately instead of when the repository is
quiescent, so writing multiple files, as occurs during a commit or a gc,
can confuse the tools and lead to corruption.
We know that the old standby, rsync, is up to the task, provided that
the repository is quiescent, so let's suggest that and dissuade people
from using cloud syncing tools. Let's tell people about common things
they should be aware of before doing this and that this is still
potentially risky. Additionally, let's tell people that Git's security
model does not permit sharing working trees across users in case they
planned to do that. While we'd still prefer users didn't try to do
this, hopefully this will lead them in a safer direction.
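A minimal sketch of the kind of rsync invocation meant here, assuming
the repository is quiescent (paths and host are placeholders):
$ rsync -a --delete /path/to/repo/ othermachine:/path/to/repo/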
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
In the FAQ, we tell people how to use the text attribute, but we fail to
explain what to do with the eol attribute. As we ourselves have
noticed, most shell implementations do not care for carriage returns,
and as such, people will practically always want them to use LF endings.
Similar things can be said for batch files on Windows, except with CRLF
endings.
Since these are common things to have in a repository, let's help users
make a good decision by recommending that they use the gitattributes
file to correctly check out the endings.
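A minimal sketch of such a .gitattributes setup (the patterns are only
illustrative):
$ cat >>.gitattributes <<\EOF
*.sh  text eol=lf
*.bat text eol=crlf
EOF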
In addition, let's correct the cross-reference to this question, which
originally referred to "the following entry", even though a new entry
has been inserted in between. The cross-reference notation should
prevent this from occurring and provide a link in formats, such as HTML,
which support that.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Many corporate environments and local systems have proxies in use. Note
the situations in which proxies can be used and how to configure them.
At the same time, note what standards a proxy must follow to work with
Git. Explicitly call out certain classes that are known to routinely
have problems reported in various places online, including in the Git for
Windows issue tracker and on Stack Overflow, and recommend against the
use of such software, noting that they are associated with myriad
security problems (including, for example, breaking sandboxing and image
integrity[0], and, for TLS middleboxes, the use of insecure protocols
and ciphers and lack of certificate verification[1]). Don't mention the
specific nature of these security problems in the FAQ entry because they
are extremely numerous and varied and we wish to keep the FAQ entry
relatively brief.
[0] https://issues.chromium.org/issues/40285192
[1] https://faculty.cc.gatech.edu/~mbailey/publications/ndss17_interception.pdf
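As a hedged example of the kind of configuration the entry refers to
(the proxy host and port are placeholders):
    git config --global http.proxy http://proxy.example.com:3128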
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Under ci/ hierarchy, we run scripts under either "sh" (any Bourne
compatible POSIX shell would work) or specifically "bash" (as they
require features from bash, e.g., ${parameter/pattern/string}
expansion). As we have the CI environment under our control, we can
expect that /bin/sh will always be fine to run the scripts that only
require a Bourne shell, but we may not know where "bash" is
installed depending on the distro used.
So let's make sure we start these scripts with either one of these:
#!/bin/sh
#!/usr/bin/env bash
Yes, the latter has to assume that everybody installs "env" at that
path and not as /bin/env or /usr/local/bin/env, but this currently
is the best we could do.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The code to deal with modified paths that are out-of-cone in a
sparsely checked out working tree has been optimized.
* ds/sparse-lstat-caching:
sparse-index: improve lstat caching of sparse paths
sparse-index: count lstat() calls
sparse-index: use strbuf in path_found()
sparse-index: refactor path_found()
sparse-checkout: refactor skip worktree retry logic
|
|
When bundleURI interface fetches multiple bundles, Git failed to
take full advantage of all bundles and ended up slurping duplicated
objects.
* xx/bundle-uri-fixes:
unbundle: extend object verification for fetches
fetch-pack: expose fsckObjects configuration logic
bundle-uri: verify oid before writing refs
|
|
More memory leaks have been plugged.
* ps/leakfixes-more: (29 commits)
builtin/blame: fix leaking ignore revs files
builtin/blame: fix leaking prefixed paths
blame: fix leaking data for blame scoreboards
line-range: plug leaking find functions
merge: fix leaking merge bases
builtin/merge: fix leaking `struct cmdnames` in `get_strategy()`
sequencer: fix memory leaks in `make_script_with_merges()`
builtin/clone: plug leaking HEAD ref in `wanted_peer_refs()`
apply: fix leaking string in `match_fragment()`
sequencer: fix leaking string buffer in `commit_staged_changes()`
commit: fix leaking parents when calling `commit_tree_extended()`
config: fix leaking "core.notesref" variable
rerere: fix various trivial leaks
builtin/stash: fix leak in `show_stash()`
revision: free diff options
builtin/log: fix leaking commit list in git-cherry(1)
merge-recursive: fix memory leak when finalizing merge
builtin/merge-recursive: fix leaking object ID bases
builtin/difftool: plug memory leaks in `run_dir_diff()`
object-name: free leaking object contexts
...
|
|
The Bloom filter used for path limited history traversal was broken
on systems whose "char" is unsigned; update the implementation and
bump the format version to 2.
* tb/path-filter-fix:
bloom: introduce `deinit_bloom_filters()`
commit-graph: reuse existing Bloom filters where possible
object.h: fix mis-aligned flag bits table
commit-graph: new Bloom filter version that fixes murmur3
commit-graph: unconditionally load Bloom filters
bloom: prepare to discard incompatible Bloom filters
bloom: annotate filters with hash version
repo-settings: introduce commitgraph.changedPathsVersion
t4216: test changed path filters with high bit paths
t/helper/test-read-graph: implement `bloom-filters` mode
bloom.h: make `load_bloom_filter_from_graph()` public
t/helper/test-read-graph.c: extract `dump_graph_info()`
gitformat-commit-graph: describe version 2 of BDAT
commit-graph: ensure Bloom filters are read with consistent settings
revision.c: consult Bloom filters for root commits
t/t4216-log-bloom.sh: harden `test_bloom_filters_not_used()`
|
|
date parser updates to be more careful about underflowing epoch
based timestamp.
* db/date-underflow-fix:
date: detect underflow/overflow when parsing dates with timezone offset
t0006: simplify prerequisites
|
|
When GIT_PAGER failed to spawn, depending on the code path taken,
we failed immediately (correct) or just spew the payload to the
standard output (incorrect). The code now always fail immediately
when GIT_PAGER fails.
* rj/pager-die-upon-exec-failure:
pager: die when paging to non-existing command
|
|
Doc update.
* ss/doc-eol-attr-fix:
doc: fix case error of eol attribute in example
|
|
"git archive --add-virtual-file=<path>:<contents>" never paid
attention to the --prefix=<prefix> option but the documentation
said it would. The documentation has been corrected.
* jc/archive-prefix-with-add-virtual-file:
archive: document that --add-virtual-file takes full path
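For illustration, with the corrected documentation the user is
expected to spell out the prefixed path themselves (a hypothetical
invocation):
    git archive --prefix=proj/ --add-virtual-file=proj/VERSION:1.0 HEAD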
|
|
Typically, forcing a sparse index to expand to a full index means that
Git could not determine the status of a file outside of the
sparse-checkout and needed to expand sparse trees into the full list of
sparse blobs. This operation can be very slow when the sparse-checkout
is much smaller than the full tree at HEAD.
When users are in this state, there is usually a modified or untracked
file outside of the sparse-checkout mentioned by the output of 'git
status'. There are a number of reasons why this is insufficient:
1. Users may not have a full understanding of which files are inside or
outside of their sparse-checkout. This is more common in monorepos
that manage the sparse-checkout using custom tools that map build
dependencies into sparse-checkout definitions.
2. In some cases, an empty directory could exist outside the
sparse-checkout and these empty directories are not reported by 'git
status' and friends.
3. If the user has '.gitignore' or 'exclude' files, then 'git status'
will squelch the warnings and not demonstrate any problems.
In order to help users who are in this state, add a new advice message
to indicate that a sparse index is expanded to a full index. This
message should be written at most once per process, so add a static
global 'give_advice_on_expansion' to sparse-index.c. Further, there is a
case in 'git sparse-checkout set' that uses the sparse index as an
in-memory data structure (even when writing a full index) so we need to
disable the message in that kind of case.
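For reference, the advice can be toggled like any other advice.*
knob; assuming the new key is named sparseIndexExpanded, disabling it
would look like:
    git config advice.sparseIndexExpanded false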
The t1092-sparse-checkout-compatibility.sh test script compares the
behavior of several Git commands across full and sparse repositories,
including sparse repositories with and without a sparse index. We need
to disable the advice in the sparse-index repo to avoid differences in
stderr. By leaving the advice on in the sparse-checkout repo (without
the sparse index), we can test the behavior of disabling the advice in
convert_to_sparse(). (Indeed, these tests are how that necessity was
discovered.) Add a test that reenables the advice and demonstrates that
the message is output.
The advice message is defined outside of expand_index() to avoid super-
wide lines. It is also defined as a macro to avoid compile issues with
-Werror=format-security.
Signed-off-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The number to be displayed is calculated by the following defined in
object.h:
#define REV_SHIFT 2
#define MAX_REVS (FLAG_BITS - REV_SHIFT)
FLAG_BITS is currently 28, so 26 is the correct number.
Signed-off-by: Rikita Ishikawa <lagrange.resolvent@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The author date is used for published/updated date in the rss/atom
feed stream. Change it to the committer date that reflects the
"published/updated" definition better and makes rss/atom feeds more
linear. Gitlab/Github rss/atom feeds use the committer date.
Additionally, to be consistent, also use the committer date to
determine the date of the last commit to send in the feed
instead of the author date.
Signed-off-by: Jesús Ariel Cabello Mateos <080ariel@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
* https://github.com/j6t/git-gui:
git-gui: fix inability to quit after closing another instance
git-gui: sv.po: Update Swedish translation (576t0f0u)
git-gui: note the new maintainer
Makefile(s): do not enforce "all indents must be done with tab"
Makefile(s): avoid recipe prefix in conditional statements
doc: switch links to https
doc: update links to current pages
git-gui: po: fix typo in French "aperçu"
|
|
The problem can be reproduced on Linux with this sequence:
1. Run git gui from a terminal.
2. Edit the commit message and wait for at least 2 seconds.
3. Terminate the instance from the terminal, for example with Ctrl-C,
to simulate crash. This leaves the file .git/GITGUI_BCK behind.
4. Start two instances of git gui &.
At this point the first instance can be closed (it renames
.git/GITGUI_BCK to .git/GITGUI_MSG), but the second brings an error
message about the absent file and cannot be closed thereafter and must
be killed from the command line.
The renaming performed by the first instance is the correct action
and need not be repeated by the second instance, so it is correct to
ignore the failed renaming.
On the other hand, the second instance could just edit the commit
message again, wait 2 seconds to write GITGUI_BCK, and then can be
closed without failing. At this point, since the user has edited the
message, it is again correct to preserve the edited version in
GITGUI_MSG.
* os/catch-rename:
git-gui: fix inability to quit after closing another instance
|
|
The command for generating the list of ForEachMacros searches for
macros whose name contains the string "for_each". Include those whose
name contains "foreach" as well. That brings in kh_foreach and
kh_foreach_value from khash.h.
Regenerating the list also brings in hashmap-based macros added by
87571c3f71 (hashmap: use *_entry APIs for iteration, 2019-10-06),
f0e63c4113 (hashmap: use *_entry APIs to wrap container_of, 2019-10-06),
4fa1d501f7 (strmap: add functions facilitating use as a string->int map,
2020-11-05), b70c82e6ed (strmap: add more utility functions,
2020-11-05), and 1201eb628a (strmap: add a strset sub-type, 2020-11-06).
for_each_abbrev is no longer found because its definition was removed by
d850b7a545 (cocci: apply the "cache.h" part of "the_repository.pending",
2023-03-28). Note that it had been a false positive, though, as it had
been a function wrapper, not a for-like macro.
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
In ebd2e4a13a (Makefile: restrict -Wpedantic and -Wno-pedantic-ms-format
better, 2021-09-28), we tightened our Makefile's behavior to only enable
-Wpedantic when compiling with either gcc5/clang4 or greater as older
compiler versions did not have support for -Wpedantic.
Commit ebd2e4a13a was looking for either "gcc5" or "clang4" to appear in
the COMPILER_FEATURES variable, combining the two "$(filter ...)"
searches with an "$(or ...)".
But ebd2e4a13a has a typo where instead of writing:
ifneq ($(or $(filter ...),$(filter ...)),)
we wrote:
ifneq (($or $(filter ...),$(filter ...)),)
Causing our Makefile (when invoked with DEVELOPER=1, and a sufficiently
recent compiler version) to barf:
$ make DEVELOPER=1
config.mak.dev:13: extraneous text after 'ifneq' directive
[...]
Correctly combine the results of the two "$(filter ...)" operations by
using "$(or ...)", not "$or".
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
One of the conversions in f19b9165 (merge-ort: convert more error()
cases to path_msg(), 2024-06-19) accidentally lost the early return.
Restore it.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
helper/test-oidmap.c along with t0016-oidmap.sh test the oidmap.h
library which is built on top of hashmap.h.
Migrate them to the unit testing framework for better performance,
concise code and better debugging. Along with the migration also plug
memory leaks and make the test logic independent for all the tests.
The migration removes 'put' tests from t0016, because it is used as
setup to all the other tests, so testing it separately does not yield
any benefit.
Mentored-by: Christian Couder <chriscool@tuxfamily.org>
Mentored-by: Kaartic Sivaraam <kaartic.sivaraam@gmail.com>
Reviewed-by: Josh Steadmon <steadmon@google.com>
Helped-by: Phillip Wood <phillip.wood123@gmail.com>
Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Ghanshyam Thakkar <shyamthakkar001@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
When "git push" is configured to use the push negotiation, a push of
deletion of a branch (without pushing anything else) may end up not
having anything to negotiate for the common ancestor discovery.
In such a case, we end up making an internal invocation of "git
fetch --negotiate-only" without any "--negotiation-tip" parameters
that stops the negotiate-only fetch from being run, which by itself
is not a bad thing (one fewer round-trip), but the end-user sees a
"fatal: --negotiate-only needs one or more --negotiation-tip=*"
message that the user cannot act upon.
Teach "git push" to notice the situation and omit performing the
negotiate-only fetch to begin with. One fewer process spawned, one
fewer "alarming" message given the user.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
"git diff --no-ext-diff" when diff.external is configured ignored
the "--color-moved" option.
* rs/diff-color-moved-w-no-ext-diff-fix:
diff: allow --color-moved with --no-ext-diff
|
|
Leakfix.
* ew/object-convert-leakfix:
object-file: fix leak on conversion failure
|
|
Memory ownership rules for the in-core representation of
remote.*.url configuration values have been straightened out, which
resulted in a few leak fixes and code clarification.
* jk/remote-wo-url:
remote: drop checks for zero-url case
remote: always require at least one url in a remote
t5801: test remote.*.vcs config
t5801: make remote-testgit GIT_DIR setup more robust
remote: allow resetting url list
config: document remote.*.url/pushurl interaction
remote: simplify url/pushurl selection
remote: use strvecs to store remote url/pushurl
remote: transfer ownership of memory in add_url(), etc
remote: refactor alias_url() memory ownership
archive: fix check for missing url
|
|
CI job to build minimum fuzzers learned to pass NO_CURL=NoThanks to
the build procedure, as its build environment does not offer, nor does
the rest of the build need, anything cURL.
* jc/fuzz-sans-curl:
fuzz: minimum fuzzers environment lacks libcURL
|
|
"git version --build-options" reports the version information of
OpenSSL and other libraries (if used) in the build.
* rb/build-options-w-lib-versions:
version: teach --build-options to report zlib version information
version: teach --build-options to report libcurl version information
version: --build-options reports OpenSSL version information
|
|
A CPP macro USE_THE_REPOSITORY_VARIABLE is introduced to help
transition the codebase to rely less on the availability of the
singleton the_repository instance.
* ps/use-the-repository:
hex: guard declarations with `USE_THE_REPOSITORY_VARIABLE`
t/helper: remove dependency on `the_repository` in "proc-receive"
t/helper: fix segfault in "oid-array" command without repository
t/helper: use correct object hash in partial-clone helper
compat/fsmonitor: fix socket path in networked SHA256 repos
replace-object: use hash algorithm from passed-in repository
protocol-caps: use hash algorithm from passed-in repository
oidset: pass hash algorithm when parsing file
http-fetch: don't crash when parsing packfile without a repo
hash-ll: merge with "hash.h"
refs: avoid include cycle with "repository.h"
global: introduce `USE_THE_REPOSITORY_VARIABLE` macro
hash: require hash algorithm in `empty_tree_oid_hex()`
hash: require hash algorithm in `is_empty_{blob,tree}_oid()`
hash: make `is_null_oid()` independent of `the_repository`
hash: convert `oidcmp()` and `oideq()` to compare whole hash
global: ensure that object IDs are always padded
hash: require hash algorithm in `oidread()` and `oidclr()`
hash: require hash algorithm in `hasheq()`, `hashcmp()` and `hashclr()`
hash: drop (mostly) unused `is_empty_{blob,tree}_sha1()` functions
|
|
The output from "git cat-file --batch-check" and "--batch-command
(info)" should not be unbuffered, for which some tests have been
added.
* ew/cat-file-unbuffered-tests:
t1006: ensure cat-file info isn't buffered by default
Git.pm: use array in command_bidi_pipe example
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Code clean-up.
* rs/remove-unused-find-header-mem:
commit: remove find_header_mem()
|
|
Code cleanup.
* jc/worktree-git-path:
worktree_git_path(): move the declaration to path.h
|
|
"git fetch-pack -k -k" without passing "--lock-pack" (which we
never do ourselves) did not work at all, which has been corrected.
* jk/fetch-pack-fsck-wo-lock-pack:
fetch-pack: fix segfault when fscking without --lock-pack
|
|
A helper function shared between two tests had a copy-paste bug,
which has been corrected.
* jk/t5500-typofix:
t5500: fix mistaken $SERVER reference in helper function
|
|
An unused extern declaration for mingw has been removed to prevent
it from causing build failure.
* js/mingw-remove-unused-extern-decl:
mingw: drop bogus (and unneeded) declaration of `_pgmptr`
|
|
Earlier we stopped using the tree of HEAD as the default source of
attributes in a bare repository, but failed to document it. This
has been corrected.
* jc/no-default-attr-tree-in-bare:
attr.tree: HEAD:.gitattributes is no longer the default in a bare repo
|
|
We forgot to normalize the result of getcwd() to NFC on macOS where
all other paths are normalized, which has been corrected. This still
does not address the case where core.precomposeUnicode configuration
is not defined globally.
* tb/precompose-getcwd:
macOS: ls-files path fails if path of workdir is NFD
|
|
When the user adds an instruction to "pick" a merge commit to the
"git rebase -i" todo list, the error experience is not pleasant. Such
an error is now
caught earlier in the process that parses the todo list.
* pw/rebase-i-error-message:
rebase -i: improve error message when picking merge
rebase -i: pass struct replay_opts to parse_insn_line()
|
|
The "-k" and "--rfc" options of "format-patch" will now error out
when used together, as one tells us not to add anything to the
title of the commit, and the other one tells us to add "RFC" in
addition to "PATCH".
* ds/format-patch-rfc-and-k:
format-patch: ensure that --rfc and -k are mutually exclusive
|
|
reftable_log_record_compare_key() is a function defined by
reftable/record.{c, h} and is used to compare the keys of two
log records when sorting multiple log records using 'qsort'.
In the current testing setup, this function is left unexercised.
Add a testing function for the same.
Mentored-by: Patrick Steinhardt <ps@pks.im>
Mentored-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Chandra Pratap <chandrapratap3519@gmail.com>
Acked-by: Karthik Nayak <karthik.188@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
reftable_ref_record_compare_name() is a function defined by
reftable/record.{c, h} and is used to compare the refname of two
ref records when sorting multiple ref records using 'qsort'.
In the current testing setup, this function is left unexercised.
Add a testing function for the same.
Mentored-by: Patrick Steinhardt <ps@pks.im>
Mentored-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Chandra Pratap <chandrapratap3519@gmail.com>
Acked-by: Karthik Nayak <karthik.188@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
reftable_record_is_deletion() is a function defined in
reftable/record.{c, h} that determines whether a record is of
type deletion or not. In the current testing setup, this function
is left untested for index records.
Add tests for this function in the case of index records.
Note that since index records cannot be of type deletion, this function
must always return '0' when called on an index record.
Mentored-by: Patrick Steinhardt <ps@pks.im>
Mentored-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Chandra Pratap <chandrapratap3519@gmail.com>
Acked-by: Karthik Nayak <karthik.188@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
reftable_record_is_deletion() is a function defined in
reftable/record.{c, h} that determines whether a record is of
type deletion or not. In the current testing setup, this function
is left untested for two of the four record types (obj, index).
Add tests for this function in the case of obj records.
Note that since obj records cannot be of type deletion, this function
must always return '0' when called on an obj record.
Mentored-by: Patrick Steinhardt <ps@pks.im>
Mentored-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Chandra Pratap <chandrapratap3519@gmail.com>
Acked-by: Karthik Nayak <karthik.188@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
reftable_record_is_deletion() is a function defined in
reftable/record.{c, h} that determines whether a record is of
type deletion or not. In the current testing setup, this function
is left untested for three of the four record types (log, obj, index).
Add tests for this function in the case of log records.
Mentored-by: Patrick Steinhardt <ps@pks.im>
Mentored-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Chandra Pratap <chandrapratap3519@gmail.com>
Acked-by: Karthik Nayak <karthik.188@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
reftable_record_is_deletion() is a function defined in
reftable/record.{c, h} that determines whether a record is of
type deletion or not. In the current testing setup, this function
is left untested for all the four record types (ref, log, obj, index).
Add tests for this function in the case of ref records.
Mentored-by: Patrick Steinhardt <ps@pks.im>
Mentored-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Chandra Pratap <chandrapratap3519@gmail.com>
Acked-by: Karthik Nayak <karthik.188@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
In the current testing setup for obj records, the comparison
functions for obj records, reftable_obj_record_cmp_void() and
reftable_obj_record_equal_void() are left untested.
Add tests for the same by using the wrapper functions
reftable_record_cmp() and reftable_record_equal() for
reftable_obj_record_cmp_void() and reftable_obj_record_equal_void()
respectively.
Mentored-by: Patrick Steinhardt <ps@pks.im>
Mentored-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Chandra Pratap <chandrapratap3519@gmail.com>
Acked-by: Karthik Nayak <karthik.188@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
In the current testing setup for index records, the comparison
functions for index records, reftable_index_record_cmp() and
reftable_index_record_equal() are left untested.
Add tests for the same by using the wrapper functions
reftable_record_cmp() and reftable_record_equal() for
reftable_index_record_cmp() and reftable_index_record_equal()
respectively.
Mentored-by: Patrick Steinhardt <ps@pks.im>
Mentored-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Chandra Pratap <chandrapratap3519@gmail.com>
Acked-by: Karthik Nayak <karthik.188@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
In the current testing setup for ref records, the comparison
functions for ref records, reftable_ref_record_cmp_void() and
reftable_ref_record_equal() are left untested.
Add tests for the same by using the wrapper functions
reftable_record_cmp() and reftable_record_equal() for
reftable_ref_record_cmp_void() and reftable_ref_record_equal()
respectively.
Mentored-by: Patrick Steinhardt <ps@pks.im>
Mentored-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Chandra Pratap <chandrapratap3519@gmail.com>
Acked-by: Karthik Nayak <karthik.188@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
In the current testing setup for log records, only
reftable_log_record_equal() among log record's comparison functions
is tested.
Modify the existing tests to exercise reftable_log_record_cmp_void()
(using the wrapper function reftable_record_cmp()) alongside
reftable_log_record_equal().
Note that to achieve this, we'll need to replace instances of
reftable_log_record_equal() with the wrapper function
reftable_record_equal().
Rename the now modified test to reflect its nature of exercising
all comparison operations, not just equality.
Mentored-by: Patrick Steinhardt <ps@pks.im>
Mentored-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Chandra Pratap <chandrapratap3519@gmail.com>
Acked-by: Karthik Nayak <karthik.188@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
reftable/record_test.c exercises the functions defined in
reftable/record.{c, h}. Migrate reftable/record_test.c to the
unit testing framework. Migration involves refactoring the tests
to use the unit testing framework instead of reftable's test
framework, and renaming the tests to fit unit-tests' naming scheme.
While at it, change the type of index variable 'i' to 'size_t'
from 'int'. This is because 'i' is used in comparison against
'ARRAY_SIZE(x)' which is of type 'size_t'.
Also, use set_hash() which is defined locally in the test file
instead of set_test_hash() which is defined by
reftable/test_framework.{c, h}. This is fine to do as both these
functions are similarly implemented, and
reftable/test_framework.{c, h} is not #included in the ported test.
Get rid of reftable_record_print() from the tests as well, because
it clutters the test framework's output and we have no way of
verifying the output.
Mentored-by: Patrick Steinhardt <ps@pks.im>
Mentored-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Chandra Pratap <chandrapratap3519@gmail.com>
Acked-by: Karthik Nayak <karthik.188@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
A quick test tells us that t0612 does not trigger any leak:
$ make SANITIZE=leak test GIT_TEST_PASSING_SANITIZE_LEAK=check GIT_TEST_SANITIZE_LEAK_LOG=true GIT_TEST_OPTS=-i T=t0612-reftable-jgit-compatibility.sh
[...]
*** t0612-reftable-jgit-compatibility.sh ***
in GIT_TEST_PASSING_SANITIZE_LEAK=check mode, setting --invert-exit-code for TEST_PASSES_SANITIZE_LEAK != true
ok 1 - CGit repository can be read by JGit
ok 2 - JGit repository can be read by CGit
ok 3 - mixed writes from JGit and CGit
ok 4 - JGit can read multi-level index
# passed all 4 test(s)
1..4
# faking up non-zero exit with --invert-exit-code
make[2]: *** [Makefile:75: t0612-reftable-jgit-compatibility.sh] Error 1
Let's mark it as leak-free to silence the machinery activated by
`GIT_TEST_PASSING_SANITIZE_LEAK=check`.
Reported-by: Jeff King <peff@peff.net>
Signed-off-by: Rubén Justo <rjusto@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
When a test that leaks runs with GIT_TEST_SANITIZE_LEAK_LOG=true,
the test returns zero, which is not what we want.
In the if-else's chain we have in "check_test_results_san_file_", we
consider three variables: $passes_sanitize_leak, $sanitize_leak_check
and, implicitly, GIT_TEST_SANITIZE_LEAK_LOG (always set to "true" at
that point).
For the first two variables we have different considerations depending
on the value of $test_failure, which makes sense. However, for the
third, GIT_TEST_SANITIZE_LEAK_LOG, we don't; regardless of
$test_failure, we use "invert_exit_code=t" to produce a non-zero
return value.
That assumes "$test_failure" is always zero at that point. But it
may not be:
$ git checkout v2.40.1
$ make test SANITIZE=leak T=t3200-branch.sh # this fails
$ make test SANITIZE=leak GIT_TEST_SANITIZE_LEAK_LOG=true T=t3200-branch.sh # this succeeds
[...]
With GIT_TEST_SANITIZE_LEAK_LOG=true, our logs revealed a memory leak, exiting with a non-zero status!
# faked up failures as TODO & now exiting with 0 due to --invert-exit-code
We need to use "invert_exit_code=t" only when "$test_failure" is zero.
Let's add the missing conditions in the if-else's chain to make it work
as expected.
Helped-by: Eric Sunshine <sunshine@sunshineco.com>
Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Rubén Justo <rjusto@gmail.com>
Acked-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
We can mark t0613 as leak-free:
$ make test SANITIZE=leak GIT_TEST_PASSING_SANITIZE_LEAK=check GIT_TEST_SANITIZE_LEAK_LOG=true T=t0613-reftable-write-options.sh
[...]
*** t0613-reftable-write-options.sh ***
in GIT_TEST_PASSING_SANITIZE_LEAK=check mode, setting --invert-exit-code for TEST_PASSES_SANITIZE_LEAK != true
ok 1 - default write options
ok 2 - disabled reflog writes no log blocks
ok 3 - many refs results in multiple blocks
ok 4 - tiny block size leads to error
ok 5 - small block size leads to multiple ref blocks
ok 6 - small block size fails with large reflog message
ok 7 - block size exceeding maximum supported size
ok 8 - restart interval at every single record
ok 9 - restart interval exceeding maximum supported interval
ok 10 - object index gets written by default with ref index
ok 11 - object index can be disabled
# passed all 11 test(s)
1..11
# faking up non-zero exit with --invert-exit-code
make[2]: *** [Makefile:75: t0613-reftable-write-options.sh] Error 1
Do it.
Signed-off-by: Rubén Justo <rjusto@gmail.com>
Acked-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The pathspec syntax is explained in the file "glossary-content.txt".
Moreover, no file named "glossary-context.txt" exists in the repository.
Signed-off-by: Abhijeet Sonar <abhijeet.nkt@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Use the strvec_pushf() call that already appends a slash to also produce
the stuck form of the option --super-prefix instead of adding the option
name in a separate call of strvec_push() or strvec_pushl(). This way we
can more easily see that these parts make up a single option with its
argument and save a function call.
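A hedged before/after sketch of the idea (the variable names are
illustrative, not the actual call sites):
    /* before: option name and its argument pushed separately */
    strvec_push(&args, "--super-prefix");
    strvec_pushf(&args, "%s/", prefix);
    /* after: one strvec_pushf() emits the stuck form, slash included */
    strvec_pushf(&args, "--super-prefix=%s/", prefix);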
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Addresses that are mentioned on the trailers in the commit log
messages (e.g., "Reviewed-by") are added to the "Cc:" list by "git
send-email". These hand-written addresses, however, may be
malformed (e.g., having unquoted "." and other punctuation marks in
the display-name part) and can upset the MTA.
The code does use the sanitize_address() helper on these
address-looking strings to turn them into valid addresses, but it is
used only to see if the address should be suppressed. The original
string taken from the message is added to the @cc list if the code
decides the address is not suppressed.
Because the addresses on trailer lines are hand-written and more
likely to contain malformed addresses, when adding to the @cc list,
use the result from sanitize_address, not the original. Note that
we do not modify the behaviour for addresses taken from the e-mail
headers, as they are more likely to be machine generated and
well-formed.
Signed-off-by: Csókás, Bence <csokas.bence@prolan.hu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
If you open 2 git gui instances in the same directory, then close one
of them and try to close the other, an error message pops up, saying:
'error renaming ".git/GITGUI_BCK": no such file or directory', and it
is no longer possible to close the window ever.
Fix by catching this error, and proceeding even if the file no longer
exists.
Signed-off-by: Orgad Shaneh <orgads@gmail.com>
|
|
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Fix for a progress bar.
* ds/ahead-behind-fix:
commit-graph: increment progress indicator
|
|
Doc update.
* ds/doc-add-interactive-singlekey:
doc: interactive.singleKey is disabled by default
|
|
Varargs functions that are unannotated as printf-like or execl-like
have been annotated as such.
* jc/varargs-attributes:
__attribute__: add a few missing format attributes
__attribute__: mark some functions with LAST_ARG_MUST_BE_NULL
__attribute__: remove redundant attribute declaration for git_die_config()
__attribute__: trace2_region_enter_printf() is like "printf"
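As a generic illustration (not a declaration taken from the tree), a
printf-like variadic function is annotated along these lines so the
compiler can type-check its format arguments:
    __attribute__((format (printf, 2, 3)))
    void report(int level, const char *fmt, ...);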
|
|
Fix for an embarrassing typo that prevented Python2 tests from running
anywhere.
* ps/ci-fix-detection-of-ubuntu-20:
ci: fix check for Ubuntu 20.04
|
|
Overly large ".gitignore" files are now rejected silently.
* jk/cap-exclude-file-size:
dir.c: reduce max pattern file size to 100MB
dir.c: skip .gitignore, etc larger than INT_MAX
|
|
The safe.directory configuration knob has been updated to
optionally allow leading path matches.
* jc/safe-directory-leading-path:
safe.directory: allow "lead/ing/path/*" match
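As a sketch of what the new matching allows (the path is a
placeholder), a value ending in "/*" now covers everything under that
leading path:
    [safe]
        directory = /srv/repositories/*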
|
|
Code simplification.
* rs/difftool-env-simplify:
difftool: add env vars directly in run_file_diff()
|
|
"git init" in an already created directory, when the user
configuration has includeIf.onbranch, started to fail recently,
which has been corrected.
* ps/fix-reinit-includeif-onbranch:
setup: fix bug with "includeIf.onbranch" when initializing dir
|
|
The chainlint script (invoked during "make test") did nothing when
it failed to detect the number of available CPUs. It now falls
back to 1 CPU to avoid the problem.
* es/chainlint-ncores-fix:
chainlint.pl: latch CPU count directly reported by /proc/cpuinfo
chainlint.pl: fix incorrect CPU count on Linux SPARC
chainlint.pl: make CPU count computation more robust
|
|
Doc update.
* jc/rev-parse-fatal-doc:
rev-parse: document how --is-* options work outside a repository
|
|
The documentation for "git diff --name-only" has been clarified to
say that it is about showing the names in the post-image tree.
* jc/doc-diff-name-only:
diff: document what --name-only shows
|
|
Test fix.
* mt/t0211-typofix:
t/t0211-trace2-perf.sh: fix typo patern -> pattern
|
|
Code clean-up to remove an unused struct definition.
* dg/fetch-pack-code-cleanup:
fetch-pack: remove unused 'struct loose_object_iter'
|
|
Doc fix.
* dm/update-index-doc-fix:
documentation: git-update-index: add --show-index-version to synopsis
|
|
Scalar fix.
* ds/scalar-reconfigure-all-fix:
scalar: avoid segfault in reconfigure --all
|
|
Doc update.
* vd/doc-merge-tree-x-option:
Documentation/git-merge-tree.txt: document -X
|
|
P4 update.
* fa/p4-error:
git-p4: show Perforce error to the user
|
|
The maximum size of attribute files is enforced more consistently.
* tb/attr-limits:
attr.c: move ATTR_MAX_FILE_SIZE check into read_attr_from_buf()
|
|
Code clean-up to remove code that is now a noop.
* rs/diff-parseopts-cleanup:
diff-lib: stop calling diff_setup_done() in do_diff_cache()
|
|
Command line completion support for zsh (in contrib/) has been
updated to stop exposing internal state to end-user shell
interaction.
* dk/zsh-git-repo-path-fix:
completion: zsh: stop leaking local cache variable
|
|
zsh can pretend to be a normal shell pretty well except for some
glitches that we tickle in some of our scripts. Work them around
so that "vimdiff" and our test suite works well enough with it.
* bc/zsh-compatibility:
vimdiff: make script and tests work with zsh
t4046: avoid continue in &&-chain for zsh
|
|
A scheduled "git maintenance" job is expected to work on all
repositories it knows about, but it stopped at the first one that
errored out. Now it keeps going.
* js/for-each-repo-keep-going:
maintenance: running maintenance should not stop on errors
for-each-repo: optionally keep going on an error
|
|
"git stash -S" did not handle binary files correctly, which has
been corrected.
* aj/stash-staged-fix:
stash: fix "--staged" with binary files
|
|
The procedure to build multi-pack-index got confused by the
replace-refs mechanism, which has been corrected by disabling the
latter.
* xx/disable-replace-when-building-midx:
midx: disable replace objects
|
|
"git rebase --signoff" used to forget that it needs to add a
sign-off to the resulting commit when told to continue after a
conflict stops its operation.
* pw/rebase-m-signoff-fix:
rebase -m: fix --signoff with conflicts
sequencer: store commit message in private context
sequencer: move current fixups to private context
sequencer: start removing private fields from public API
sequencer: always free "struct replay_opts"
|
|
The clear_skip_worktree_from_present_files() method was first introduced
in af6a51875a (repo_read_index: clear SKIP_WORKTREE bit from files
present in worktree, 2022-01-14) to allow better interaction with the
working directory in the presence of paths outside of the
sparse-checkout. The initial implementation would lstat() every single
SKIP_WORKTREE path to see if it existed; if it ran across a sparse
directory that existed (when a sparse index was in use), then it would
expand the index and then check every SKIP_WORKTREE path.
Since these lstat() calls were very expensive, this was improved in
d79d299352 (Accelerate clear_skip_worktree_from_present_files() by
caching, 2022-01-14) by caching directories that do not exist so it
could avoid lstat()ing any files under such directories. However, there
are some inefficiencies in that caching mechanism.
The caching mechanism stored only the parent directory as not existing,
even if a higher parent directory also does not exist. This means that
wasted lstat() calls would occur when the paths passed to path_found()
change immediate parent directories but within the same parent directory
that does not exist.
To create an example repository that demonstrates this problem, it helps
to have a directory outside of the sparse-checkout that contains many
deep paths. In particular, the first paths (in lexicographic order)
underneath the sparse directory should have deep directory structures,
maximizing the difference between the old caching algorithm that looks
to a single parent and the new caching algorithm that looks to the
top-most missing directory.
The performance test script p2000-sparse-operations.sh takes the sample
repository and copies its HEAD to several copies nested in directories
of the form f<i>/f<j>/f<k> where i, j, and k are numbers from 1 to 4.
The sparse-checkout cone is then selected as "f2/f4/". Creating "f1/f1/"
will trigger the behavior and also lead to some interesting cases for
the caching algorithm since "f1/f1/" exists but "f1/f2/" and "f3/" do
not.
This is difficult to notice when running performance tests using the Git
repository (or a blow-up of the Git repository, as in
p2000-sparse-operations.sh) because Git has a very shallow directory
structure.
This change reorganizes the caching algorithm to focus on storing the
highest level leading directory that does not exist; specifically this
means that that directory's parent _does_ exist. By doing a little extra
work on a path passed to path_found(), we can short-circuit all of the
paths passed to path_found() afterwards that match a prefix with that
non-existing directory. When in a repository where the first sparse file
is likely to have a much deeper path than the first non-existing
directory, this can realize significant gains.
The details of this algorithm require careful attention, so the new
implementation of path_found() has detailed comments, including the use
of a new max_common_dir_prefix() method that may be of independent
interest.
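A minimal sketch of the idea behind such a helper (not the actual
implementation): report how much of the leading directory path two
entries share, so a cached non-existing directory can short-circuit
later lookups.
    static size_t max_common_dir_prefix(const char *a, const char *b)
    {
            size_t common = 0;
            size_t i;
            for (i = 0; a[i] && a[i] == b[i]; i++)
                    if (a[i] == '/')
                            common = i + 1;
            return common;
    }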
It's worth noting that this is not universally positive, since we are
doing extra lstat() calls to establish the exact path to cache. In the
blow-up of the Git repository, we can see that the lstat count
_increases_ from 28 to 31. However, these numbers were already
artificially low.
Contributor Elijah Newren created a publicly-available test repository
that demonstrates the difference in these caching algorithms in the most
extreme way. To test, follow these steps:
git clone --sparse https://github.com/newren/gvfs-like-git-bomb
cd gvfs-like-git-bomb
./runme.sh # NOTE: check scripts before running!
At this point, assuming you do not have index.sparse=true set globally,
the index has one million paths with the SKIP_WORKTREE bit and they will
all be sent to path_found() in the sparse loop. You can measure this by
running 'git status' with GIT_TRACE2_PERF=1:
Sparse files in the index: 1,000,000
sparse_lstat_count (before): 200,000
sparse_lstat_count (after): 2
And here are the performance numbers:
Benchmark 1: old
Time (mean ± σ): 397.5 ms ± 4.1 ms
Range (min … max): 391.2 ms … 404.8 ms 10 runs
Benchmark 2: new
Time (mean ± σ): 252.7 ms ± 3.1 ms
Range (min … max): 249.4 ms … 259.5 ms 11 runs
Summary
'new' ran
1.57 ± 0.02 times faster than 'old'
By modifying this example further, we can demonstrate a more realistic
example and include the sparse index expansion. Continue by creating
this directory, confusing both caching algorithms somewhat:
mkdir -p bomb/d/e/f/a/a
Then re-run the 'git status' tests to see these statistics:
Sparse files in the index: 1,000,000
sparse_lstat_count (before): 724,010
sparse_lstat_count (after): 106
Benchmark 1: old
Time (mean ± σ): 753.0 ms ± 3.5 ms
Range (min … max): 749.7 ms … 760.9 ms 10 runs
Benchmark 2: new
Time (mean ± σ): 201.4 ms ± 3.2 ms
Range (min … max): 196.0 ms … 207.9 ms 14 runs
Summary
'new' ran
3.74 ± 0.06 times faster than 'old'
Note that if this repository had a sparse index enabled, the additional
cost of expanding the sparse index affects the total time of these
commands by over four seconds, significantly diminishing the benefit of
the caching algorithm. Having existing paths outside of the
sparse-checkout is a known performance issue for the sparse index and is
a known trade-off for the performance benefits given when no such paths
exist.
Using an internal monorepo with over two million paths at HEAD and a
typical sparse-checkout cone such that the sparse index contains
~190,000 entries (including over two thousand sparse trees), I was able
to measure these lstat counts when one sparse directory actually exists
on disk:
Sparse files in expanded index: 1,841,997
full_lstat_count (before): 1,188,161
full_lstat_count (after): 4,404
This resulted in this absolute time change, on a warm disk:
Time in full loop (before): 13.481 s
Time in full loop (after): 0.081 s
(These times were calculated on a Windows machine, where lstat() is
slower than a similar Linux machine.)
Helped-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Derrick Stolee <stolee@gmail.com>
Reviewed-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The clear_skip_worktree.. methods already report some statistics about
how many cache entries are checked against path_found() due to having
the skip-worktree bit set. However, due to path_found() performing some
caching, this isn't the only information that would be helpful to
report.
Add a new lstat_count member to the path_found_data struct to count the
number of times path_found() calls lstat(). This will be helpful to help
explain performance problems in this method as well as to demonstrate
future changes to the caching algorithm in a more concrete way than
end-to-end timings.
Signed-off-by: Derrick Stolee <stolee@gmail.com>
Reviewed-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The path_found() method previously reused strings from the cache entries
the calling methods were using. This prevents string manipulation in
place and causes some odd reallocation before the final lstat() call in
the method.
Refactor the method to use strbufs and copy the path into the strbuf,
but also only the parent directory and not the whole path. This looks
like extra copying when assigning the path to the strbuf, but we save an
allocation by dropping the 'tmp' string, and we are "reusing" the copy
from 'tmp' to put the data in the strbuf.
Signed-off-by: Derrick Stolee <stolee@gmail.com>
Reviewed-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
In advance of changing the behavior of path_found(), take all of the
intermediate data values and group them into a single struct. This
simplifies the method prototype as well as the initialization. Future
changes can be made directly to the struct and method without changing
the callers with this approach.
Note that the clear_path_found_data() method is currently empty, as
there is nothing to free. This method is a placeholder for future
changes that require a non-trivial implementation. Its stub is created
now so consumers could call it now and not change in future changes.
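A hypothetical sketch of the shape such a struct could take (the
field names are illustrative, not the actual ones):
    struct path_found_data {
            const char *dirname;   /* parent dir of the last path checked */
            size_t dir_len;        /* length of that parent directory */
            int dir_found;         /* whether that parent directory existed */
    };
    #define PATH_FOUND_DATA_INIT { 0 }
    static void clear_path_found_data(struct path_found_data *data)
    {
            /* nothing to release yet; placeholder for later changes */
    }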
Signed-off-by: Derrick Stolee <stolee@gmail.com>
Reviewed-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The clear_skip_worktree_from_present_files() method was introduced in
af6a51875a (repo_read_index: clear SKIP_WORKTREE bit from files present
in worktree, 2022-01-14) to help cases where sparse-checkout is enabled
but some paths outside of the sparse-checkout also exist on disk. This
operation can be slow as it needs to check path existence in a way not
stored in the index, so caching was introduced in d79d299352 (Accelerate
clear_skip_worktree_from_present_files() by caching, 2022-01-14).
This check is particularly confusing in the presence of a sparse index,
as a sparse tree entry corresponding to an existing directory must first
be expanded to a full index before examining the paths within. This is
currently implemented using a 'goto' and a boolean variable to ensure we
restart only once.
Even with that caching, it was noticed that this could take a long time
to execute. 89aaab11a3 (index: add trace2 region for clear skip
worktree, 2022-11-03) introduced trace2 regions to measure this time.
Further, the way the loop repeats itself was slightly confusing and
prone to breakage, so a BUG() statement was added in 8c7abdc596 (index:
raise a bug if the index is materialised more than once, 2022-11-03) to
be sure that the second run of the loop does not hit any sparse trees.
One thing that can be confusing about the current setup is that the
trace2 regions nest and it is not clear that a second loop is running
after a sparse index is expanded. Here is an example of what the regions
look like in a typical case:
| region_enter | ... | label:clear_skip_worktree_from_present_files
| region_enter | ... | ..label:update
| region_leave | ... | ..label:update
| region_enter | ... | ..label:ensure_full_index
| region_enter | ... | ....label:update
| region_leave | ... | ....label:update
| region_leave | ... | ..label:ensure_full_index
| data | ... | ..sparse_path_count:1
| data | ... | ..sparse_path_count_full:269538
| region_leave | ... | label:clear_skip_worktree_from_present_files
One thing that is particularly difficult to understand about these
regions is that most of the time is spent between the close of the
ensure_full_index region and the reporting of the end data. This is
because of the restart of the loop being within the same region as the
first iteration of the loop.
This change refactors the method into two separate methods that are
traced separately. This will be more important later when we change
other features of the methods, but for now the only functional change is
the difference in the structure of the trace regions.
After this change, the same telemetry section is split into three
distinct chunks:
| region_enter | ... | label:clear_skip_worktree_from_present_files_sparse
| data | ... | ..sparse_path_count:1
| region_leave | ... | label:clear_skip_worktree_from_present_files_sparse
| region_enter | ... | label:update
| region_leave | ... | label:update
| region_enter | ... | label:ensure_full_index
| region_enter | ... | ..label:update
| region_leave | ... | ..label:update
| region_leave | ... | label:ensure_full_index
| region_enter | ... | label:clear_skip_worktree_from_present_files_full
| data | ... | ..full_path_count:269538
| region_leave | ... | label:clear_skip_worktree_from_present_files_full
Here, we see the sparse loop terminating early with its first sparse
path being a sparse directory containing a file. Then, that loop's
region terminates before ensure_full_index begins (in this case, the
cache-tree must also be computed). Then, _after_ the index is expanded,
the full loop begins with its own region.
Signed-off-by: Derrick Stolee <stolee@gmail.com>
Reviewed-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
"git fetch-pack -k -k" without passing "--lock-pack" (which we
never do ourselves) did not work at all, which has been corrected.
* jk/fetch-pack-fsck-wo-lock-pack:
fetch-pack: fix segfault when fscking without --lock-pack
|
|
Code clean-up.
* rs/remove-unused-find-header-mem:
commit: remove find_header_mem()
|
|
A helper function shared between two tests had a copy-paste bug,
which has been corrected.
* jk/t5500-typofix:
t5500: fix mistaken $SERVER reference in helper function
|
|
An unused extern declaration for mingw has been removed to prevent
it from causing build failure.
* js/mingw-remove-unused-extern-decl:
mingw: drop bogus (and unneeded) declaration of `_pgmptr`
|
|
When "git merge" sees that the index cannot be refreshed (e.g. due
to another process doing the same in the background), it died, but
only after writing MERGE_HEAD and other state files, which did not
help the user recover from the failure.
* kz/merge-fail-early-upon-refresh-failure:
merge: avoid write merge state when unable to write index
|
|
A few of the bundle URI tests point config at a fake bundle; they care
only that the client has been configured with _some_ bundle, but it
doesn't have to actually contain objects.
For the file:// tests, we use "$BUNDLE_URI_REPO_URI/fake.bdl", a
non-existent file inside the actual remote repo. But for git:// and
http:// tests, we use "https://example.com/fake.bdl". This works OK in
practice, but it means we actually make a request to example.com (which
returns a placeholder HTML response). That can be annoying when running
the test suite on a spotty network (it doesn't produce a wrong result,
since we expect it to fail, but it may introduce delays).
We can reduce our dependency on the outside world by using a local URL.
It would work to just do "file://$PWD/fake.bdl" here, since the bundle
code does not care about the actual location. But in the long run I
suspect we may have more restrictions on which protocols can be passed
around as bundle URIs. So instead, let's stick with the file:// repo's
pattern and just point to a bogus name based on the remote repo's URL.
For http this makes perfect sense; we'll make a request to the local
http server and find that there's nothing there. For git:// it's a
little weird, as you wouldn't normally access a bundle file over git://
at all. But it's probably the most reasonable guess we can make for now,
and anybody who tightens protocol selection later will know better
what's the best path forward.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
t5551 tries to access a URL with a bogus hostname and confirms that
http.curloptResolve lets us use this otherwise unresolvable name.
Before doing so, though, we confirm that trying to access the bogus
hostname without http.curloptResolve fails as expected. This isn't
testing Git at all, but is confirming the test's assumptions. That's
often a good thing to do, but in this case it means that we'll actually
try to resolve the external name. Even though it's unlikely that
"gitbogusexamplehost.invalid" would ever resolve, the DNS lookup itself
may take time.
It's probably reasonable to just assume that this obviously-bogus name
would not actually resolve in practice, which lets us reduce our test
suite's dependency on the outside world.
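For context, http.curloptResolve takes entries in libcurl's
HOST:PORT:ADDRESS form, e.g. (the values here are placeholders):
    git config http.curloptResolve "gitbogusexamplehost.invalid:443:127.0.0.1"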
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
We test how "fetch --set-upstream" behaves when given an invalid URL,
using the bogus URL "http://nosuchdomain.example.com". But finding out
that it is invalid requires an actual DNS lookup.
Reduce our dependency on external factors by using an invalid local
filesystem URL, which works just as well for our purposes.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|