In the early days of Git, Perl was used quite prominently throughout the
project. This has changed significantly as almost all of the executables
we ship nowadays have eventually been rewritten in C. Only a handful of
subsystems remain that require Perl:
- gitweb, a read-only web interface.
- A couple of scripts that allow importing repositories from GNU Arch,
CVS and Subversion.
- git-send-email(1), which can be used to send mails.
- git-request-pull(1), which is used to request somebody to pull from
a URL by sending an email.
- git-filter-branch(1), which uses Perl with the `--state-branch`
option. This command is typically recommended against nowadays in
favor of git-filter-repo(1).
- Our Perl bindings for Git.
- The netrc Git credential helper.
None of these subsystems can really be considered to be part of the
"core" of Git, and an installation without them is fully functional.
It is more likely than not that an end user wouldn't even notice that
any features are missing if those tools weren't installed. But while
Perl nowadays is very much an optional dependency of Git, there is a
significant limitation when Perl isn't available: developers cannot run
our test suite.
Preceding commits have started to lift this restriction by removing the
strict dependency on Perl in many central parts of the test library. But
there are still many tests that rely on small Perl helpers to do various
different things.
Introduce a new PERL_TEST_HELPERS prerequisite that guards all tests
that require Perl. This prerequisite is explicitly different from the
preexisting PERL prerequisite:
- PERL records whether or not features depending on the Perl
interpreter are built.
- PERL_TEST_HELPERS records whether or not a Perl interpreter is
available for our tests.
By having these two separate prerequisites we can thus distinguish
between tests that inherently depend on Perl because the underlying
feature does, and those tests that depend on Perl because the test
itself is using Perl.
Adapt all tests to set the PERL_TEST_HELPERS prerequisite as needed.
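As an illustration, a test that shells out to Perl would now be guarded
roughly like this (a hypothetical test case, using the test suite's
usual prerequisite syntax):

    test_expect_success PERL_TEST_HELPERS 'munge data with Perl helper' '
        echo foo >input &&
        perl -pe "s/foo/bar/" input >actual &&
        echo bar >expect &&
        test_cmp expect actual
    '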
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Git 2.47.2
# -----BEGIN PGP SIGNATURE-----
#
# iQIzBAABCAAdFiEE4fA2sf7nIh/HeOzvsLXohpav5ssFAmdkT1sACgkQsLXohpav
# 5svdhRAAq0WoZIg+33vYNNVSTm3Ux9RJslmXs3lQuhuUJ61hK/28drSLU29GH7x7
# 3nmmjp1cegnXRVLBAfoYDdzPprNNrQFQEHQEzgG/GDZw0OXn+WTZuNyrrUYoa+sd
# QSLlElRj2qrpHIMOsMIBKBSNB+qjJHOMGdxcBAS768TfnQpGIpc1KJa24TxsVBzC
# ScP4uvrFfPyQrqFUgiUhCeqLnO/6T5i/QAn/8cS5a1+zor5ZHSlw28TZTOxN2odo
# Rulp/FtehiDEzmRowgD3M4fImAPY6Ib6VORCYASqpJFFla30tu2bQqEi6raOMTec
# hg5Ibkmj6fHFONaYvoTMRkYHmtUnNgIPU/CYPwswNk8w1+PPQfJ+TYjBXOQgdTLW
# F0azHBHh7NRmEHVydiF9CqjgNVRzjO4IEZfGqXNFPPMvR6UUzDaIkrpYbwXBFMin
# GNPV3QISeXj9ROjJoCv0nclXETwWemykjZlD6b5krXn5TaJlFb+69qJvXrCLq5WY
# EoevSqKkB9HVK9si7P8Sh1cPGOr3kfiFPmMNKFVI8l0+iDFgBywOomWNS/JEzqu1
# nN142DKdL1W/rkeMUhbX2h11CZNvHKIOy3iaA4MTOing8/eMzyUUQ73Ck7odYs4f
# rZ0tTXKJhxojPvBpTxYe9SxM0bDLREiOv0zX76+sIuhbAQCmk0o=
# =MNNf
# -----END PGP SIGNATURE-----
# gpg: Signature made Thu 19 Dec 2024 08:52:43 AM PST
# gpg: using RSA key E1F036B1FEE7221FC778ECEFB0B5E88696AFE6CB
# gpg: Good signature from "Junio C Hamano <gitster@pobox.com>" [ultimate]
# gpg: aka "Junio C Hamano <junio@pobox.com>" [ultimate]
# gpg: aka "Junio C Hamano <jch@google.com>" [ultimate]
* tag 'v2.47.2':
Git 2.47.2
Git 2.46.3
Git 2.45.3
Git 2.44.3
Git 2.43.6
Git 2.42.4
Git 2.41.3
Git 2.40.4
credential: disallow Carriage Returns in the protocol by default
credential: sanitize the user prompt
credential_format(): also encode <host>[:<port>]
t7300: work around platform-specific behaviour with long paths on MinGW
compat/regex: fix argument order to calloc(3)
mingw: drop bogus (and unneeded) declaration of `_pgmptr`
ci: remove 'Upload failed tests' directories' step from linux32 jobs
|
|
* maint-2.46:
Git 2.46.3
Git 2.45.3
Git 2.44.3
Git 2.43.6
Git 2.42.4
Git 2.41.3
Git 2.40.4
credential: disallow Carriage Returns in the protocol by default
credential: sanitize the user prompt
credential_format(): also encode <host>[:<port>]
t7300: work around platform-specific behaviour with long paths on MinGW
compat/regex: fix argument order to calloc(3)
mingw: drop bogus (and unneeded) declaration of `_pgmptr`
ci: remove 'Upload failed tests' directories' step from linux32 jobs
|
|
* maint-2.45:
Git 2.45.3
Git 2.44.3
Git 2.43.6
Git 2.42.4
Git 2.41.3
Git 2.40.4
credential: disallow Carriage Returns in the protocol by default
credential: sanitize the user prompt
credential_format(): also encode <host>[:<port>]
t7300: work around platform-specific behaviour with long paths on MinGW
compat/regex: fix argument order to calloc(3)
mingw: drop bogus (and unneeded) declaration of `_pgmptr`
ci: remove 'Upload failed tests' directories' step from linux32 jobs
|
|
* maint-2.43:
Git 2.43.6
Git 2.42.4
Git 2.41.3
Git 2.40.4
credential: disallow Carriage Returns in the protocol by default
credential: sanitize the user prompt
credential_format(): also encode <host>[:<port>]
t7300: work around platform-specific behaviour with long paths on MinGW
compat/regex: fix argument order to calloc(3)
mingw: drop bogus (and unneeded) declaration of `_pgmptr`
ci: remove 'Upload failed tests' directories' step from linux32 jobs
|
|
* maint-2.42:
Git 2.42.4
Git 2.41.3
Git 2.40.4
credential: disallow Carriage Returns in the protocol by default
credential: sanitize the user prompt
credential_format(): also encode <host>[:<port>]
t7300: work around platform-specific behaviour with long paths on MinGW
compat/regex: fix argument order to calloc(3)
mingw: drop bogus (and unneeded) declaration of `_pgmptr`
ci: remove 'Upload failed tests' directories' step from linux32 jobs
|
|
* maint-2.40:
Git 2.40.4
credential: disallow Carriage Returns in the protocol by default
credential: sanitize the user prompt
credential_format(): also encode <host>[:<port>]
t7300: work around platform-specific behaviour with long paths on MinGW
compat/regex: fix argument order to calloc(3)
mingw: drop bogus (and unneeded) declaration of `_pgmptr`
ci: remove 'Upload failed tests' directories' step from linux32 jobs
|
|
When asking the user interactively for credentials, we want to avoid
misleading them e.g. via control sequences that pretend that the URL
targets a trusted host when it does not.
While Git learned, over the course of the preceding commits, to disallow
URLs containing URL-encoded control characters by default, credential
helpers are still allowed to specify values very freely (apart from Line
Feed and NUL characters, anything is allowed). This would allow, say, a
username containing control characters to be specified and then
displayed in the interactive terminal prompt asking the user for the
password, potentially sending those control characters directly to the
terminal. This is undesirable because control characters can be used to
mislead users into divulging secret information to untrusted sites.
To prevent such an attack vector, let's add a `git_prompt()` that forces
the displayed text to be sanitized, i.e. displaying question marks
instead of control characters.
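To illustrate the problem (hypothetical helper output; the escape
sequence is just one example of a terminal title sequence):

    # a credential helper may legally print a username like this; before
    # this change, the escape sequence would have reached the terminal
    # verbatim when the username is echoed back in the password prompt
    printf 'username=\033]0;trusted.example.com\agit\n'
    printf 'password=secret\n'

With the sanitized prompt, such bytes are now displayed as question
marks instead of being interpreted by the terminal.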
Note: While this commit's diff changes a lot of `user@host` strings to
`user%40host`, which may look suspicious on the surface, there is a good
reason for that: this string specifies a user name, not a
<username>@<hostname> combination! In the context of t5541, the actual
combination looks like this: `user%40@127.0.0.1:5541`. Therefore, these
string replacements document a net improvement introduced by this
commit, as `user@host@127.0.0.1` could have left readers wondering where
the user name ends and where the host name begins.
Hinted-at-by: Jeff King <peff@peff.net>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
|
|
Now that the default value for TEST_PASSES_SANITIZE_LEAK is `true`,
there is no longer a need to have that variable declared in all of our
tests. Drop it.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Leakfixes.
* jk/http-leakfixes: (28 commits)
http-push: clean up local_refs at exit
http-push: clean up loose request when falling back to packed
http-push: clean up objects list
http-push: free xml_ctx.cdata after use
http-push: free remote_ls_ctx.dentry_name
http-push: free transfer_request strbuf
http-push: free transfer_request dest field
http-push: free curl header lists
http-push: free repo->url string
http-push: clear refspecs before exiting
http-walker: free fake packed_git list
remote-curl: free HEAD ref with free_one_ref()
http: stop leaking buffer in http_get_info_packs()
http: call git_inflate_end() when releasing http_object_request
http: fix leak of http_object_request struct
http: fix leak when redacting cookies from curl trace
transport-helper: fix leak of dummy refs_list
fetch-pack: clear pack lockfiles list
fetch: free "raw" string when shrinking refspec
transport-helper: fix strbuf leak in push_refs_with_push()
...
|
|
The new_http_object_request() function allocates a struct on the heap,
along with some fields inside the struct. But the matching function to
clean it up, release_http_object_request(), only frees the interior
fields without freeing the struct itself, causing a leak.
The related http_pack_request new/release pair gets this right, and at
first glance we should be able to do the same thing and just add a
single free() call. But there's a catch.
These http_object_request structs are typically embedded in the
object_request struct of http-walker.c. And when we clean up that parent
struct, it sanity-checks the embedded struct to make sure we are not
leaking descriptors. Which means a use-after-free if we simply free()
the embedded struct.
I have no idea how valuable that sanity-check is, or whether it can
simply be deleted. This all goes back to 5424bc557f (http*: add helper
methods for fetching objects (loose), 2009-06-06). But the obvious way
to make it all work is to be sure we set the pointer to NULL after
freeing it (and our freeing process closes the descriptor, so we know
there is no leak).
To make sure we do that consistently, we'll switch the pointer we take
in release_http_object_request() to a pointer-to-pointer, and we'll set
it to NULL ourselves. And then the compiler can help us find each caller
which needs to be updated.
Most cases will just pass "&obj_req->req", which will obviously do the
right thing. In a few cases, like http-push's finish_request(), we are
working with a copy of the pointer, so we don't NULL the original. But
it's OK because the next step is to free the struct containing the
original pointer anyway.
This lets us mark t5551 as leak-free. Ironically this is the "smart"
http test, and the leak here only affects dumb http. But there's a
single dumb-http invocation in there. The full dumb tests are in t5550,
which still has some more leaks.
This also makes t5559 leak-free, as it's just an HTTP/2 variant of
t5551. But we don't need to mark it as such, since it inherits the flag
from t5551.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
When scripts or background maintenance wish to perform HTTP(S) requests,
there is a risk that our stored credentials might be invalid. At the
moment, this causes the credential helper to ping the user and block the
process. Even if the credential helper does not ping the user, Git falls
back to the 'askpass' method, which includes a direct ping to the user
via the terminal.
Even setting the 'core.askPass' config to something like 'echo' will
cause Git to fall back to a terminal prompt. It uses
git_terminal_prompt(), which finds the terminal from the environment and
ignores whether stdin has been redirected. This can also block the
process awaiting input.
Create a new config option to prevent user interaction, favoring a
failure over a blocked process.
The chosen name, 'credential.interactive', is taken from the config
option used by Git Credential Manager to already avoid user
interactivity, so there is already one credential helper that integrates
with this option. However, older versions of Git Credential Manager also
accepted other string values, including 'auto', 'never', and 'always'.
The modern usage is a boolean value, but we should still be
careful that some users could have these non-boolean values. Further, we
should respect 'never' the same as 'false'. This is respected by the
implementation and test, but not mentioned in the documentation.
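For example, a script that must never block on credential input can now
do something like this (a sketch; the URL is illustrative):

    # fail fast instead of prompting when stored credentials are missing
    # or invalid; 'false' and 'never' behave the same
    git -c credential.interactive=false fetch https://example.com/repo.git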
The implementation for the Git interactions takes place within
credential_getpass(). The method prototype is modified to return an
'int' instead of 'void'. This allows us to detect that no attempt was
made to fill the given credential, changing the single caller slightly.
Also, a new trace2 region is added around the interactive portion of the
credential request. This provides a way to measure the amount of time
spent in that region for commands that _are_ interactive. It also makes
a convenient way to test that the config option works with
'test_region'.
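A sketch of such a test, assuming the region category/label pair is
"credential"/"interactive" and using test_region's '!' form to check
that the region was never entered:

    printf "protocol=https\nhost=example.com\n" >cred-input &&
    test_must_fail env GIT_TRACE2_EVENT="$(pwd)/trace.txt" \
        git -c credential.interactive=false credential fill <cred-input &&
    test_region ! credential interactive trace.txt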
Signed-off-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
t5551 tries to access a URL with a bogus hostname and confirms that
http.curloptResolve lets us use this otherwise unresolvable name.
Before doing so, though, we confirm that trying to access the bogus
hostname without http.curloptResolve fails as expected. This isn't
testing Git at all, but is confirming the test's assumptions. That's
often a good thing to do, but in this case it means that we'll actually
try to resolve the external name. Even though it's unlikely that
"gitbogusexamplehost.invalid" would ever resolve, the DNS lookup itself
may take time.
It's probably reasonable to just assume that this obviously-bogus name
would not actually resolve in practice, which lets us reduce our test
suite's dependency on the outside world.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Fetching via protocol v0 over Smart HTTP transport sometimes failed
to correctly auto-follow tags.
* jk/fetch-auto-tag-following-fix:
transport-helper: re-examine object dir after fetching
|
|
This patch fixes a bug where fetch over http (or any helper) using the
v0 protocol may sometimes fail to auto-follow tags. The bug comes from
61c7711cfe (sha1-file: use loose object cache for quick existence check,
2018-11-12). But to explain why (and why this is the right fix), let's
take a step back.
After fetching a pack, the object database has changed, but we may still
hold in-memory caches that are now out of date. Traditionally this was
just the packed_git list, but 61c7711cfe started using a loose-object
cache, as well.
Usually these caches are invalidated automatically. When an expected
object cannot be found, the low-level object lookup routines call
reprepare_packed_git(), which re-scans the set of packs (and thanks to
some preparatory patches ahead of 61c7711cfe, throws away the loose
object cache). But not all calls do this! In some cases we expect that
the object might not exist, and pass OBJECT_INFO_QUICK to tell the
low-level routines not to bother re-scanning. And the tag auto-following
code is one such caller, since we are asking about oids that the other
side has (but we might not have locally).
To deal with this, we explicitly call reprepare_packed_git() ourselves
after fetching a pack; this goes all the way back to 48ec3e5c07
(Incorporate fetched packs in future object traversal, 2008-06-15). But
that only helps if we call fetch_pack() in the main fetch process. When
we're using a transport helper, it happens in a separate sub-process,
and the parent process is left with old values. So this is only a
problem with protocols which require a separate helper process (like
http).
This patch fixes it by teaching the parent process in the transport
helper relationship to make that same reprepare call after the helper
finishes fetching.
You might be left with some lingering questions, like:
1. Why only the v0 protocol, and not v2? It's because in v2 the child
helper doesn't actually run fetch_pack(); it merely establishes a
tunnel over which the main process can talk to the remote side (so
the fetch_pack() and reprepare happen in the main process).
2. Wouldn't we have the same bug even before the 61c7711cfe added
the loose object cache? For example, when we store the fetch as a
pack locally, wouldn't our packed_git list still be out of date?
If we store a pack, everything works because other parts of the
fetch process happen to trigger a call to reprepare_packed_git().
In particular, before storing whatever ref was originally
requested, we'll make sure we have the pointed-to object, and that
call happens without the QUICK flag. So in that case we'll see that
we don't know about it, reprepare, and then repeat our lookup. And
now we _do_ know about the pack, and further calls with QUICK will
find its contents.
Whereas when we unpack the result into loose objects, we never get
that same invalidation trigger. We didn't have packs before, and we
don't after. But when we do the loose object lookup, we find the
object. There's no way to realize that we didn't have the object
before the pack, and that having it now means things have changed
(in theory we could do a superfluous cache lookup to see that it
was missing from the old cache; but depending on the tags the other
side showed us, we might not even have filled in that part of the
cache earlier).
3. Why does the included test use "--depth 1"? This is important
because without it, we happen to invalidate the cache as a side
effect of other parts of the fetch process. What happens in a
non-shallow fetch is something like this:
1. we call find_non_local_tags() once before actually getting the
pack, to see if there are any tags we can fill in from what we
already have. This fills in the cache (which is obviously
missing objects we're about to fetch).
2. before fetching the actual pack, fetch_and_consume_refs()
calls check_exist_and_connected(), to see if we even need to
fetch a pack at all. This doesn't use QUICK (though arguably
it could, as it's purely an optimization). And since it sees
there are objects we are indeed missing, that triggers a
reprepare_packed_git() call, which throws out the loose object
cache.
3. after fetching, now we call find_non_local_tags() again. And
since step (2) invalidated our loose object cache, we find
the new objects and create the tags.
So everything works, but mostly due to luck. Whereas in a fetch
with --depth, we skip step 2 entirely, and thus the out-of-date
cache is still in place for step 3, giving us the wrong answer.
So the test works with a small "--depth 1" fetch, which makes sure that
we don't store the pack from the other side, and that we don't trigger
the accidental cache invalidation. And of course it forces the use of
v0 along with using the http protocol.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
We have multiple tests in t5551 that write thousands of tags. To do so
efficiently we generate the tags by writing the `packed-refs` file
directly, which of course assumes that the reference database is backed
by the files backend.
Refactor the code to instead use a single `git update-ref --stdin`
command to write the tags. While the on-disk end result is not the same
as we now have a bunch of loose refs instead of a single packed-refs
file, the distinction shouldn't really matter for any of the tests that
use this helper.
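A sketch of the resulting helper (names illustrative; $commit stands for
some existing commit object ID):

    create_tags () {
        for i in $(test_seq 1 2000)
        do
            echo "create refs/tags/tag-$i $commit" || return 1
        done | git update-ref --stdin
    }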
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
They are equivalent, and the former still exists, so as long as the
only change this commit makes is to rewrite test_i18ngrep to
test_grep, there won't be any new bug, even if there still are
callers of test_i18ngrep remaining in the tree, or when merged to
other topics that add new uses of test_i18ngrep.
This patch was produced more or less with
git grep -l -e 'test_i18ngrep ' 't/t[0-9][0-9][0-9][0-9]-*.sh' |
xargs perl -p -i -e 's/test_i18ngrep /test_grep /'
and a good way to sanity check the result yourself is to run the
above in a checkout of c4603c1c (test framework: further deprecate
test_i18ngrep, 2023-10-31) and compare the resulting working tree
contents with the result of applying this patch to the same commit.
You'll see that test_i18ngrep in a few t/lib-*.sh files was corrected
as well, in addition to the mechanical conversion above.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
When cloning an empty repository, protocol versions 0 and 1 currently
offer nothing but the header and flush packets for the /info/refs
endpoint. This means that no capabilities are provided, so the client
side doesn't know what capabilities are present.
However, this does pose a problem when working with SHA-256
repositories, since we use the capabilities to know the remote side's
object format (hash algorithm). As of 8b214c2e9d ("clone: propagate
object-format when cloning from void", 2023-04-05), this has been fixed
for protocol v2, since there we always read the hash algorithm from the
remote.
Fortunately, the push version of the protocol already offers a clue
for how to solve this. When the /info/refs endpoint is accessed for a
push and the remote is empty, we include a dummy "capabilities^{}" ref
pointing to the all-zeros object ID. The protocol documentation already
indicates this should _always_ be sent, even for fetches and clones, so
let's just do that, which means we'll properly announce the hash
algorithm as part of the capabilities. This just works with the
existing code because we share the same ref code for fetches and clones,
and libgit2, JGit, and dulwich do so as well.
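For illustration, in the v0 advertisement of an empty SHA-256
repository, the first (and only) ref line now looks roughly like this
inside the pkt-lines (capability list abbreviated and illustrative):

    <64 zero hex digits> capabilities^{}\0multi_ack ... object-format=sha256

i.e. the all-zeros object ID in the repository's hash format, followed
by the dummy ref name and the capability list, which now includes the
object-format capability.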
There is one minor issue to fix, though. If we called send_ref with
namespaces, we would return NULL with the capabilities entry, which
would cause a crash. Instead, let's refactor out a function to print
just the ref itself without stripping the namespace and use it for our
special capabilities entry.
Add several sets of tests for HTTP as well as for local clones. The
behavior can be slightly different for HTTP versus a local or SSH clone
because of the stateless-rpc functionality, so it's worth testing both.
Signed-off-by: brian m. carlson <bk2204@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
One test needs to be tweaked in order for t5559 to pass with SSL/TLS set
up. When we make our initial clone, we check that the curl trace of
requests is what we expected. But we need to fix two things:
- along with ignoring "data" lines from the trace, we need to ignore
"SSL data" lines
- when TLS is used, the server is able to tell the client (via ALPN)
that it supports HTTP/2 before the first HTTP request is made. So
rather than request an upgrade using an HTTP header, it can just
speak HTTP/2 immediately
With this patch, running:
LIB_HTTPD_SSL=1 ./t5559-http-fetch-smart-http2.sh
works, whereas it did not before.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
We pick apart a curl trace, looking for "=> Send header:" and so on, and
matching against an expected set of requests and responses. We remove
"== Info" lines entirely. However, our parser is fooled when running the
test with LIB_HTTPD_SSL on Ubuntu 20.04 (as found in our linux-gcc CI
job), as curl hands us an "Info" buffer with a newline, and we get:
== Info: successfully set certificate verify locations:
== Info: CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
=> Send SSL data[...]
which results in the "CApath" line ending up in the cleaned-up output,
causing the test to fail.
Arguably the tracing code should detect this and put it on two separate
"== Info" lines. But this is actually a curl bug, fixed by their
80d73bcca (tls: provide the CApath verbose log on its own line,
2020-08-18). It's simpler to just work around it here.
Since we are using GIT_TRACE_CURL, every line should just start with one
of "<=", "==", or "=>", and we can throw away anything else. In fact, we
can just replace the pattern for deleting "*" lines. Those were from the
old GIT_CURL_VERBOSE output, but we switched over in 14e24114d9
(t5551-http-fetch-smart.sh: use the GIT_TRACE_CURL environment var,
2016-09-05).
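In other words, something like this (a sketch of the tightened filter):

    # keep only trace records; anything else (like curl's multi-line
    # Info output) is residue that we want to drop
    grep -E "^(<=|==|=>)" trace >trace.cleaned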
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
After making a request, we check that it stored the expected cookies.
This depends on the protocol version, because the cookies we store
depend on the exact requests we made (and for ls-remote, v2 will always
hit /git-upload-pack to get the refs, whereas v0 is happy with the
initial ref advertisement).
As a result, hardly anybody runs this test, as you'd have to manually
set GIT_TEST_PROTOCOL_VERSION=0 to do so.
Let's teach it to handle both protocol versions. One way to do this
would be to make the expectation conditional on the protocol used. But
there's a simpler solution. The reason that v0 doesn't hit
/git-upload-pack is that ls-remote doesn't fetch any objects. If we
instead do a fetch (making sure there's an actual object to grab), then
both v0 and v2 will hit the same endpoints and set the same cookies.
Note that we do have to clean up our new tag here; otherwise it confuses
the later "clone 2,000 tags" test.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
After making an HTTP request that should store cookies, we check that
the expected values are in the cookie file. We don't want to look at the
whole file, because it has noisy comments at the top that we shouldn't
depend on. But we strip out the interesting bits using "tail -3", which
is brittle. It requires us to put an extra blank line in our expected
output, and it would fail to notice any reordering or extra content in
the cookie file.
Instead, let's just grep for non-blank lines that are not comments,
which more directly describes what we're interested in.
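That is, something like (sketch):

    # keep non-blank, non-comment lines of the cookie file
    grep -v "^#" cookies.txt | grep . >actual &&
    test_cmp expect actual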
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
We perform a clone and a fetch, and then check that we saw the expected
requests in Apache's access log. In the v2 protocol, there will be one
extra request to /git-upload-pack for each operation (since the initial
/info/refs probe is just used to upgrade the protocol).
As a result, this test is a noop unless the use of the v0 protocol is
forced. Which means that hardly anybody runs it, since you have to do so
manually.
Let's update it to handle v2 and run it always. We could do this by just
conditionally adding in the extra POST lines. But if we look at the
origin of the test in 7da4e2280c (test smart http fetch and push,
2009-10-30), the point is really just to make sure that the smart
git-upload-pack service was used at all. So rather than counting up the
individual requests, let's just make sure we saw each of the expected
types. This is a bit looser, but makes maintenance easier.
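A sketch of the looser check (the repository path is illustrative):

    # confirm each expected request type was seen at least once, instead
    # of matching the exact request sequence
    grep '"GET /smart/repo.git/info/refs?service=git-upload-pack HTTP/' access.log &&
    grep '"POST /smart/repo.git/git-upload-pack HTTP/' access.log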
Since we're now matching with grep, we can also loosen the HTTP/1.1
match, which allows this test to pass when run with HTTP/2 via t5559.
That lets:
GIT_TEST_PROTOCOL_VERSION=0 ./t5559-http-fetch-smart-http2.sh
run to completion, which previously failed (and of course it works if
you use v2, as well).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
After cloning an http repository, we check the curl trace to make sure
the expected requests were made. But since the expected trace was never
updated to handle v2, it is only run when you ask the test suite to run
in v0 mode (which hardly anybody does).
Let's update it to handle both protocols. This isn't too hard since v2
just sends an extra header and an extra request. So we can just annotate
those extra lines and strip them out for v0 (and drop the annotations
for v2). I didn't bother handling v1 here, as it's not really of
practical interest (it would drop the extra v2 request, but still have
the "git-protocol" lines).
There's a similar tweak needed at the end. Since we check the
"accept-encoding" value loosely, we grep for it rather than finding it
in the verbatim trace. This grep insists that there are exactly 2
matches, but of course in v2 with the extra request there are 3. We
could tweak the number, but it's simpler still to just check that we saw
at least one match. The verbatim check already confirmed how many
instances of the header we have; we're really just checking here that
"gzip" is in the value (it's possible, of course, that the headers could
have different values, but that seems like an unlikely bug).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
In the "clone http repository" test, we check the curl trace to make
sure the expected requests were made. This whole script was marked to
handle only the v0 protocol in d790ee1707 (tests: fix protocol version
for overspecifications, 2019-02-25). That makes sense, since v2 requires
an extra request, so tests as specific as this would fail unless
modified.
Later, in preparation for v2 becoming the default, this was tweaked by
8a1b0978ab (test: request GIT_TEST_PROTOCOL_VERSION=0 when appropriate,
2019-12-23). There we run the trace check only if the user has
explicitly asked to test protocol version 0. But it also forced the
clone itself to run with the v0 protocol.
This makes the check for "can we expect a v0 trace" silly; it will
always be v0. But much worse, it means that the clone we are testing is
not like the one that normal users would run. They would use the
defaults, which are now v2. And since this is supposed to be a basic
check of clone-over-http, we should do the same.
Let's fix this by dropping the extra v0 override. The test still passes
because the trace checking only kicks in if we asked to use v0
explicitly (this is the same as before; even though we were running a v0
clone, unless you specifically set GIT_TEST_PROTOCOL_VERSION=0, the
trace check was always skipped).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
We check that the curl trace of a clone has the lines we expect, but
this won't work when we run the test under t5559, because a few details
are different under HTTP/2 (but nobody noticed because it only happens
when you manually set GIT_TEST_PROTOCOL_VERSION to "0").
We can handle both HTTP protocols with a few tweaks:
- we'll drop the HTTP "101 Switching Protocols" response, as well as
various protocol upgrade headers. These details aren't interesting
to us. We just want to make sure the correct protocol was used (and
we do in the main request/response lines).
- successful HTTP/2 responses just say "200" and not "200 OK"; we can
normalize these
- replace HTTP/1.1 with a variable in the request/response lines. We
can use the existing $HTTP_PROTO for this, as it's already set to
"HTTP/2" when appropriate. We do need to tweak the fallback value to
"HTTP/1.1" to match what curl will write (prior to this patch, the
fallback value didn't matter at all; we only checked if it was the
literal string "HTTP/2").
Note that several lines still expect HTTP/1.1 unconditionally. The first
request does so because the client requests an upgrade during the
request. The POST request and response do so because you can't do an
upgrade if there is a request body. (This will all be different if we
trigger HTTP/2 via ALPN, but the tests aren't yet capable of that).
This is enough to let:
GIT_TEST_PROTOCOL_VERSION=0 ./t5559-http-fetch-smart-http2.sh
pass the "clone http repository" test (but there are some other failures
later on).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
There's a test in t5551 which checks the curl trace (after simplifying
it a bit). It doesn't work with HTTP/2, because in that case curl
outputs all of the headers in lower-case. Even though this test is run
with HTTP/2 by t5559, nobody has noticed because checking the trace only
happens if GIT_TEST_PROTOCOL_VERSION is manually set to "0".
Let's fix this by lower-casing all of the header names in the trace, and
then checking for those in our expected code (this is easier than making
HTTP/2 traces look like HTTP/1.1, since HTTP/1.1 uses title-casing).
Sadly, we can't quite do this in our existing sed script. This works if
you have GNU sed:
s/^\\([><]\\) \\([A-Za-z0-9-]*:\\)/\1 \L\2\E/
but \L is a GNU-ism, and I don't think there's a portable solution. We
could just "tr A-Z a-z" on the way in, of course, but that makes the
non-header parts harder to read (e.g., lowercase "post" requests). But
to paraphrase Baron Munchausen, I have learned from experience that a
modicum of Perl can be most efficacious.
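For instance, something like this Perl equivalent of the sed command
above (a sketch):

    perl -pe 's/^([><]) ([A-Za-z0-9-]*:)/"$1 " . lc($2)/e' trace >trace.lowered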
Note that this doesn't quite get the test passing with t5559; there are
more fixes needed on top.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Commit b0c4adcdd7 (remote-curl: send Accept-Language header to server,
2022-07-11) added tests to make sure the header is sent via HTTP.
However, it checks in two places:
1. In the expected trace output, we check verbatim for the header and
its value.
2. Afterwards, we grep for the header again in the trace file.
This (2) is probably cargo-culted from the earlier grep for
Accept-Encoding. It is needed for the encoding because we smudge the
value of that header when doing the verbatim check; see 1a53e692af
(remote-curl: accept all encodings supported by curl, 2018-05-22).
But we don't do so for the language header, so any problem that the
"grep" would catch in (2) would already have been caught by (1).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Commit 9ee6bcd398 (t5541-http-push: add test for URLs with trailing
slash, 2010-04-08) added a test that clones a URL with a trailing slash,
and confirms that we don't send a doubled slash (like "$url//info/refs")
to the server.
But this test makes no sense in t5541, which is about pushing. It should
have been added in t5551. Let's move it there.
But putting it at the end is tricky, since it checks the entire contents
of the Apache access log. We could get around this by clearing the log
before our test. But there's an even simpler solution: just make sure no
doubled slashes appear in the log (fortunately, "http://" does not
appear in the log itself).
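In other words (sketch):

    # "http://" never appears in the log itself, so any "//" would have
    # to come from a doubled slash in a request path
    ! grep "//" access.log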
As a bonus, this also lets us drop the check for the v0 protocol (which
is otherwise necessary since v2 makes multiple requests, and
check_access_log insists on exactly matching the number of requests,
even though we don't care about that here).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
With GIT_TRACE_CURL=1 or GIT_CURL_VERBOSE=1, sensitive headers like
"Authorization" and "Cookie" get redacted. However, since [1], curl's
h2h3 module (invoked when using HTTP/2) also prints headers in its
"info", which don't get redacted. For example,
echo 'github.com TRUE / FALSE 1698960413304 o foo=bar' >cookiefile &&
GIT_TRACE_CURL=1 GIT_TRACE_CURL_NO_DATA=1 git \
-c 'http.cookiefile=cookiefile' \
-c 'http.version=' \
ls-remote https://github.com/git/git refs/heads/main 2>output &&
grep 'cookie' output
produces output like:
23:04:16.920495 http.c:678 == Info: h2h3 [cookie: o=foo=bar]
23:04:16.920562 http.c:637 => Send header: cookie: o=<redacted>
Teach http.c to check for h2h3 headers in info and redact them using the
existing header redaction logic. This fixes the broken redaction logic
that we noted in the previous commit, so mark the redaction tests as
passing under HTTP2.
[1] https://github.com/curl/curl/commit/f8c3724aa90472c0e617ddbbc420aa199971eb77
Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Glen Choo <chooglen@google.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
|
|
We have occasionally seen bugs that affect Git running only against an
HTTP/2 web server, not an HTTP one. For instance, b66c77a64e (http:
match headers case-insensitively when redacting, 2021-09-22). But since
we have no test coverage using HTTP/2, we only uncover these bugs in the
wild.
That commit gives a recipe for converting our Apache setup to support
HTTP/2, but:
- it's not necessarily portable
- we don't want to just test HTTP/2; we really want to do a variety of
basic tests for _both_ protocols
This patch handles both problems by running a duplicate of t5551
(labeled as t5559 here) with an alternate-universe setup that enables
HTTP/2. So we'll continue to run t5551 as before, but run the same
battery of tests again with HTTP/2. If HTTP/2 isn't supported on a given
platform, then t5559 should bail during the webserver setup, and
gracefully skip all tests (unless GIT_TEST_HTTPD has been changed from
"auto" to "yes", where the point is to complain when webserver setup
fails).
In theory other http-related test scripts could benefit from the same
duplication, but doing t5551 should give us a reasonable check of basic
functionality, and would have caught both bugs we've seen in the wild
with HTTP/2.
A few notes on the implementation:
- a script enables the server-side config by calling enable_http2
before starting the webserver (see the sketch after this list). This
avoids even trying to load any HTTP/2 config for t5551 (which is what
lets it keep working with regular HTTP even on systems that don't
support it). This also sets a prereq which can be used by individual
tests.
- As discussed in b66c77a64e, the http2 module isn't compatible with
the "prefork" mpm, so we need to pick something else. I chose
"event" here, which works on my Debian system, but it's possible
there are platforms which would prefer something else. We can adjust
that later if somebody finds such a platform.
- The test "large fetch-pack requests can be sent using chunked
encoding" makes sure we use a chunked transfer-encoding by looking
for that header in the trace. But since HTTP/2 has its own streaming
mechanisms, we won't find such a header. We could skip the test
entirely by marking it with !HTTP2. But there's some value in making
sure that the fetch itself succeeded. So instead, we'll confirm that
either we're using HTTP2 _or_ we saw the expected chunked header.
- the redaction tests fail under HTTP/2 with recent versions of curl.
This is a bug! I've marked them with !HTTP2 here to skip them under
t5559 for the moment. Using test_expect_failure would be more
appropriate, but would require a bunch of boilerplate. Since we'll
be fixing them momentarily, let's just skip them for now to keep the
test suite bisectable, and we can re-enable them in the commit that
fixes the bug.
- one alternative layout would be to push most of t5551 into a
lib-t5551.sh script, then source it from both t5551 and t5559.
Keeping t5551 intact seemed a little simpler, as it's one less level
of indirection for people fixing bugs/regressions in the non-HTTP/2
tests.
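For illustration, the server-side setup in t5559 then boils down to
something like this (a sketch following the description above):

    # t5559: same tests as t5551, but with HTTP/2 enabled server-side
    . ./test-lib.sh
    . "$TEST_DIRECTORY"/lib-httpd.sh
    enable_http2
    start_httpd

and the chunked-encoding check loosens to, roughly:

    # HTTP/2 has its own streaming, so the chunked header only appears
    # for HTTP/1.1
    test_have_prereq HTTP2 ||
    grep "Transfer-Encoding: chunked" trace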
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
|
|
It is unclear _why_, but under certain circumstances the warning
about credentials being passed as part of the URL seems to be swallowed
by the `git remote-https` helper in the Windows jobs of Git's CI builds.
Since it is not actually important how many times Git prints the
warning/error message, as long as it prints it at least once, let's just
make the test a bit more lenient and test for the latter instead of the
former, which works around these CI issues.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
|
|
Commit 6dcbdc0d66 (remote: create fetch.credentialsInUrl config,
2022-06-06) added tests for our handling of passwords in URLs. Since the
obvious URL to be affected is git-over-http, the tests use http. However
they don't set up a test server; they just try to access
https://localhost, assuming it will fail (because the nothing is
listening there).
This causes some possible problems:
- There might be a web server running on localhost, and we do not
actually want to connect to that.
- The DNS resolver, or the local firewall, might take a substantial
amount of time (or forever, whichever comes first) to fail to
connect, slowing down the test cases unnecessarily.
- Since there's no server, our tests for "allow" and "warn" still
expect the clone/fetch/push operations to fail, even though in the
real world we'd expect these to succeed. We scrape stderr to see
what happened, but it's not as robust as a more realistic test.
Let's instead move these to t5551, which is all about testing http and
where we have a real server. That eliminates any issues with contacting
a strange URL, and lets the "allow" and "warn" tests confirm that the
operation actually succeeds.
It's not quite a verbatim move for a few reasons:
- we can drop the LIBCURL dependency; it's already part of
lib-httpd.sh
- we'll use HTTPD_URL_USER_PASS, etc, instead of our fake URL. To
avoid repetition, we'll add a few extra variables.
- the "https://username:@localhost" test uses a funny URL that
lib-httpd.sh doesn't provide. We'll similarly construct it in a
variable. Note that we're hard-coding the lib-httpd username here,
but t5551 already does that everywhere.
- for the "domain:port" test, the URL provided by lib-httpd is fine,
since our test server will always be on an exotic port. But we'll
confirm in the test that this is so.
- since our message-matching is done via grep, I simplified it to use
a regex, rather than trying to massage lib-httpd's variables.
Arguably this makes it more readable, too, while retaining the bits
we care about: the fatal/warning distinction, the "uses plaintext"
message, and the fact that the password was redacted.
- we'll use the /auth/ path for the repo, which shows that we are
indeed making use of the auth information when needed.
- we'll also use /smart/; most of these tests could be done via /dumb/
in t5550, but setting up pushes there requires extra effort and
dependencies. The smart protocol is what most everyone is using
these days anyway.
This patch is my own, but I stole the analysis and a few bits of the
commit message from a patch by Johannes Schindelin.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
|
|
Continue the move of existing Documentation/technical/* protocol and
file-format documentation into our main documentation space. By moving
the things that discuss the protocol we can properly link from
e.g. lsrefs.unborn and protocol.version documentation to a manpage we
build by default.
So far we have been using the "gitformat-" prefix for the
documentation we've been moving over from Documentation/technical/*,
but for protocol documentation let's use "gitprotocol-*".
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The Git server's ability to accept the Accept-Language header was
introduced in f18604bbf2 (http: add Accept-Language header if possible,
2015-01-28), but it is only used in a very early phase of the transfer:
the HTTP GET request that discovers references. For other phases, like
the POST requests of the smart HTTP protocol, the server does not know
what language the client speaks.
Teach the Git client to learn the end-user's preferred language and send
an Accept-Language header to the server for those requests, too. Once
the server gets this header, it has the ability to talk to the end-user
in a language they understand. This is very helpful for many non-English
speakers.
Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Li Linchao <lilinchao@oschina.cn>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Libcurl has a CURLOPT_RESOLVE easy option that allows
the result of hostname resolution in the following
format to be passed:
[+]HOST:PORT:ADDRESS[,ADDRESS]
This way, redirects and everything operating against the
HOST+PORT will use the provided ADDRESS(es).
The following format is also allowed to stop using
hostname resolutions that have already been passed:
-HOST:PORT
See https://curl.se/libcurl/c/CURLOPT_RESOLVE.html for
more details.
Let's add a corresponding "http.curloptResolve" config
option that takes advantage of CURLOPT_RESOLVE.
Each value configured for the "http.curloptResolve" key
is passed "as is" to libcurl through CURLOPT_RESOLVE, so
it should be in one of the above 2 formats. This keeps
the implementation simple and makes us consistent with
libcurl's CURLOPT_RESOLVE, and with curl's corresponding
`--resolve` command line option.
The implementation uses CURLOPT_RESOLVE only in
get_active_slot() which is called by all the HTTP
request sending functions.
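For example (hypothetical host and address):

    # make https://example.com/ connect to 203.0.113.5 without touching
    # DNS resolution
    git -c http.curloptResolve="example.com:443:203.0.113.5" \
        ls-remote https://example.com/repo.git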
Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Sensitive data in the HTTP trace were supposed to be redacted, but
we failed to do so in HTTP/2 requests.
* jk/http-redact-fix:
http: match headers case-insensitively when redacting
|
|
When HTTP/2 is in use, we fail to correctly redact "Authorization" (and
other) headers in our GIT_TRACE_CURL output.
We get the headers in our CURLOPT_DEBUGFUNCTION callback, curl_trace().
It passes them along to curl_dump_header(), which in turn checks
redact_sensitive_header(). We see the headers as a text buffer like:
Host: ...
Authorization: Basic ...
After breaking it into lines, we match each header using skip_prefix().
This is case-sensitive, even though HTTP headers are case-insensitive.
This has worked reliably in the past because these headers are generated
by curl itself, which is predictable in what it sends.
But when HTTP/2 is in use, instead we get a lower-case "authorization:"
header, and we fail to match it. The fix is simple: we should match with
skip_iprefix().
Testing is more complicated, though. We do have a test for the redacting
feature, but we don't hit the problem case because our test Apache setup
does not understand HTTP/2. You can reproduce the issue by applying this
on top of the test change in this patch:
diff --git a/t/lib-httpd/apache.conf b/t/lib-httpd/apache.conf
index afa91e38b0..19267c7107 100644
--- a/t/lib-httpd/apache.conf
+++ b/t/lib-httpd/apache.conf
@@ -29,6 +29,9 @@ ErrorLog error.log
LoadModule setenvif_module modules/mod_setenvif.so
</IfModule>
+LoadModule http2_module modules/mod_http2.so
+Protocols h2c
+
<IfVersion < 2.4>
LockFile accept.lock
</IfVersion>
@@ -64,8 +67,8 @@ LockFile accept.lock
<IfModule !mod_access_compat.c>
LoadModule access_compat_module modules/mod_access_compat.so
</IfModule>
-<IfModule !mod_mpm_prefork.c>
- LoadModule mpm_prefork_module modules/mod_mpm_prefork.so
+<IfModule !mod_mpm_event.c>
+ LoadModule mpm_event_module modules/mod_mpm_event.so
</IfModule>
<IfModule !mod_unixd.c>
LoadModule unixd_module modules/mod_unixd.so
diff --git a/t/t5551-http-fetch-smart.sh b/t/t5551-http-fetch-smart.sh
index 1c2a444ae7..ff74f0ae8a 100755
--- a/t/t5551-http-fetch-smart.sh
+++ b/t/t5551-http-fetch-smart.sh
@@ -24,6 +24,10 @@ test_expect_success 'create http-accessible bare repository' '
git push public main:main
'
+test_expect_success 'prefer http/2' '
+ git config --global http.version HTTP/2
+'
+
setup_askpass_helper
test_expect_success 'clone http repository' '
but this has a few issues:
- it's not necessarily portable. The http2 apache module might not be
available on all systems. Further, the http2 module isn't compatible
with the prefork mpm, so we have to switch to something else. But we
don't necessarily know what's available. It would be nice if we
could have conditional config, but IfModule only tells us if a
module is already loaded, not whether it is available at all.
This might be a non-issue. The http tests are already optional, and
modern-enough systems may just have both of these. But...
- if we do this, then we'd no longer be testing HTTP/1.1 at all. I'm
not sure how much that matters since it's all handled by curl under
the hood, but I'd worry that some detail leaks through. We'd
probably want two scripts running similar tests, one with HTTP/2 and
one with HTTP/1.1.
- speaking of which, a later test fails with the patch above! The
problem is that it is making sure we used a chunked
transfer-encoding by looking for that header in the trace. But
HTTP/2 doesn't support that, as it has its own streaming mechanisms
(the overall operation works fine; we just don't see the header in
the trace).
Furthermore, even with the changes above, this test still does not
detect the current failure, because we see _both_ HTTP/1.1 and HTTP/2
requests, which confuse it. Quoting only the interesting bits from the
resulting trace file, we first see:
=> Send header: GET /auth/smart/repo.git/info/refs?service=git-upload-pack HTTP/1.1
=> Send header: Connection: Upgrade, HTTP2-Settings
=> Send header: Upgrade: h2c
=> Send header: HTTP2-Settings: AAMAAABkAAQCAAAAAAIAAAAA
<= Recv header: HTTP/1.1 401 Unauthorized
<= Recv header: Date: Wed, 22 Sep 2021 20:03:32 GMT
<= Recv header: Server: Apache/2.4.49 (Debian)
<= Recv header: WWW-Authenticate: Basic realm="git-auth"
So the client asks for HTTP/2, but Apache does not do the upgrade for
the 401 response. Then the client repeats with credentials:
=> Send header: GET /auth/smart/repo.git/info/refs?service=git-upload-pack HTTP/1.1
=> Send header: Authorization: Basic <redacted>
=> Send header: Connection: Upgrade, HTTP2-Settings
=> Send header: Upgrade: h2c
=> Send header: HTTP2-Settings: AAMAAABkAAQCAAAAAAIAAAAA
<= Recv header: HTTP/1.1 101 Switching Protocols
<= Recv header: Upgrade: h2c
<= Recv header: Connection: Upgrade
<= Recv header: HTTP/2 200
<= Recv header: content-type: application/x-git-upload-pack-advertisement
So the client does properly redact there, because we're speaking
HTTP/1.1, and the server indicates it can do the upgrade. And then the
client will make further requests using HTTP/2:
=> Send header: POST /auth/smart/repo.git/git-upload-pack HTTP/2
=> Send header: authorization: Basic dXNlckBob3N0OnBhc3NAaG9zdA==
=> Send header: content-type: application/x-git-upload-pack-request
And there we can see that the credential is _not_ redacted. This part of
the test is what gets confused:
# Ensure that there is no "Basic" followed by a base64 string, but that
# the auth details are redacted
! grep "Authorization: Basic [0-9a-zA-Z+/]" trace &&
grep "Authorization: Basic <redacted>" trace
The first grep does not match the un-redacted HTTP/2 header, because
it insists on an uppercase "A". And the second one does find the
HTTP/1.1 header. So as far as the test is concerned, everything is OK,
but it failed to notice the un-redacted lines.
We can make this test (and the other related ones) more robust by adding
"-i" to grep case-insensitively. This isn't really doing anything for
now, since we're not actually speaking HTTP/2, but it future-proofs the
tests for a day when we do (either we add explicit HTTP/2 test support,
or it's eventually enabled by default by our Apache+curl test setup).
And it doesn't hurt in the meantime for the tests to be more careful.
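With that change, the checks quoted above become (sketch):

    ! grep -i "Authorization: Basic [0-9a-zA-Z+/]" trace &&
    grep -i "Authorization: Basic <redacted>" trace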
The change to use "grep -i", coupled with the changes to use HTTP/2
shown above, causes the test to fail with the current code, and pass
after this patch is applied.
And finally, there's one other way to demonstrate the issue (and how I
actually found it originally). Looking at GIT_TRACE_CURL output against
github.com, you'll see the unredacted output, even if you didn't set
http.version. That's because setting it is only necessary for curl to
send the extra headers in its HTTP/1.1 request that say "Hey, I speak
HTTP/2; upgrade if you do, too". But for a production site speaking
https, the server advertises via ALPN, a TLS extension, that it supports
HTTP/2, and the client can immediately start using it.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Since we use the v2 protocol by default, the connection of a v2 client
to a v2 server is well covered by the test suite. And with the
GIT_TEST_PROTOCOL_VERSION knob, we can easily test a v0 client
connecting to a v2-aware server (which will then just speak v0). But we
have no regular tests that a v2 client, when encountering a non-v2-aware
server, will correctly fall back to using v0.
In theory this is a job for the cross-version tests in t/interop, but:
- they cover only git:// and file:// clones
- they are not part of the usual test suite, so nobody ever runs them
anyway
Since using v2 over http requires configuring the web server to pass
along the Git-Protocol header, we can easily create a situation where
the server does not respect the v2 probe, and the conversation falls
back to v0.
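For reference, the test setup normally forwards the probe with an
Apache rule along these lines (as in Git's t/lib-httpd/apache.conf; the
fallback endpoint for this test simply omits it):

    SetEnvIf Git-Protocol ".*" GIT_PROTOCOL=$0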
This works just fine. This new test is not about fixing any particular
bug, but just making sure that the system works (and continues to work)
as expected.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
This reverts commit 1b0d9545bb85912a16b367229d414f55d140d3be.
That commit does fix the situation it intended to (avoiding Negotiate
even when the credentials were provided in the URL), but it creates a
more serious regression: we now never hit the conditional for "we had a
username and password, tried them, but the server still gave us a 401".
That has two bad effects:
1. we never call credential_reject(), and thus a bogus credential
stored by a helper will live on forever
2. we never return HTTP_NOAUTH, so the error message the user gets is
"The requested URL returned error: 401", instead of "Authentication
failed".
Doing this correctly seems non-trivial, as we don't know whether the
Negotiate auth was a problem. Since this is a regression in the upcoming
v2.23.0 release (for which we're in -rc0), let's revert for now and work
on a fix separately.
(Note that this isn't a pure revert; the previous commit added a test
showing the regression, so we can now flip it to expect_success).
Reported-by: Ben Humphreys <behumphreys@atlassian.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
We test authentication with http, and we independently test that
credential helpers work, but we don't have any tests that cover the
two features working together. Let's add two:
1. Make sure that a successful request asks the helper to save the
credential. This works as expected.
2. Make sure that a failed request asks the helper to forget the
credential. This is marked as expect_failure, as it was recently
regressed by 1b0d9545bb (remote-curl: fall back to basic auth if
Negotiate fails, 2021-03-22). The symptom here is that the second
request should prompt the user, but doesn't.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
This trick was performed via
$ (cd t &&
sed -i -e 's/master/main/g' -e 's/MASTER/MAIN/g' \
-e 's/Master/Main/g' -e 's/retsam/niam/g' \
-- t55[4-9]*.sh t556x*)
This allows us to define `GIT_TEST_DEFAULT_INITIAL_BRANCH_NAME=main`
for those tests.
Note that t5541 uses the reversed `master` name: `retsam`. We replace it
by the equivalent for `main`: `niam`.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
In addition to the manual adjustment to let the `linux-gcc` CI job run
the test suite with `master` and then with `main`, this patch makes sure
that GIT_TEST_DEFAULT_INITIAL_BRANCH_NAME is set in all test scripts
that currently rely on the initial branch name being `master` by default.
To determine which test scripts to mark up, the first step was to
force-set the default branch name to `master` in
- all test scripts that contain the keyword `master`,
- t4211, which expects `t/t4211/history.export` with a hard-coded ref to
initialize the default branch,
- t5560 because it sources `t/t556x_common` which uses `master`,
- t8002 and t8012 because both source `t/annotate-tests.sh` which also
uses `master`.
This trick was performed by this command:
$ sed -i '/^ *\. \.\/\(test-lib\|lib-\(bash\|cvs\|git-svn\)\|gitweb-lib\)\.sh$/i\
GIT_TEST_DEFAULT_INITIAL_BRANCH_NAME=master\
export GIT_TEST_DEFAULT_INITIAL_BRANCH_NAME\
' $(git grep -l master t/t[0-9]*.sh) \
t/t4211*.sh t/t5560*.sh t/t8002*.sh t/t8012*.sh
After that, careful, manual inspection revealed that some of the test
scripts containing the needle `master` do not actually rely on a
specific default branch name: either they mention `master` only in a
comment, or they initialize that branch specifically, or they do not
actually refer to the current default branch. Therefore, the
aforementioned modification was undone in those test scripts thusly:
$ git checkout HEAD -- \
t/t0027-auto-crlf.sh t/t0060-path-utils.sh \
t/t1011-read-tree-sparse-checkout.sh \
t/t1305-config-include.sh t/t1309-early-config.sh \
t/t1402-check-ref-format.sh t/t1450-fsck.sh \
t/t2024-checkout-dwim.sh \
t/t2106-update-index-assume-unchanged.sh \
t/t3040-subprojects-basic.sh t/t3301-notes.sh \
t/t3308-notes-merge.sh t/t3423-rebase-reword.sh \
t/t3436-rebase-more-options.sh \
t/t4015-diff-whitespace.sh t/t4257-am-interactive.sh \
t/t5323-pack-redundant.sh t/t5401-update-hooks.sh \
t/t5511-refspec.sh t/t5526-fetch-submodules.sh \
t/t5529-push-errors.sh t/t5530-upload-pack-error.sh \
t/t5548-push-porcelain.sh \
t/t5552-skipping-fetch-negotiator.sh \
t/t5572-pull-submodule.sh t/t5608-clone-2gb.sh \
t/t5614-clone-submodules-shallow.sh \
t/t7508-status.sh t/t7606-merge-custom.sh \
t/t9302-fast-import-unpack-limit.sh
We excluded one set of test scripts in these commands, though: the range
of `git p4` tests. The reason? `git p4` stores the (foreign) remote
branch in the branch called `p4/master`, which is obviously not the
default branch. Manual analysis revealed that only five of these tests
actually require a specific default branch name to pass; they were
modified thusly:
$ sed -i '/^ *\. \.\/lib-git-p4\.sh$/i\
GIT_TEST_DEFAULT_INITIAL_BRANCH_NAME=master\
export GIT_TEST_DEFAULT_INITIAL_BRANCH_NAME\
' t/t980[0167]*.sh t/t9811*.sh
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
In trace output (when GIT_TRACE_CURL is true), redact the values of all
HTTP cookies by default. Now that auth headers (since the implementation
of GIT_TRACE_CURL in 74c682d3c6 ("http.c: implement the GIT_TRACE_CURL
environment variable", 2016-05-24)) and cookie values (since this
commit) are redacted by default in these traces, also allow the user to
inhibit these redactions through an environment variable.
Since values of all cookies are now redacted by default,
GIT_REDACT_COOKIES (which previously allowed users to select individual
cookies to redact) now has no effect.
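For example (hedged: GIT_TRACE_REDACT is the inhibiting variable this
change adds; treat the exact spelling as an assumption):

	# cookie and auth values appear as <redacted> (the default)
	$ GIT_TRACE_CURL="$PWD/trace" git ls-remote "$URL"
	# full values, for debugging against a trusted server
	$ GIT_TRACE_REDACT=0 GIT_TRACE_CURL="$PWD/trace" git ls-remote "$URL"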
Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Whenever GIT_CURL_VERBOSE is set, teach Git to behave as if
GIT_TRACE_CURL=1 and GIT_TRACE_CURL_NO_DATA=1 are set, instead of setting
CURLOPT_VERBOSE.
This is to prevent inadvertent revelation of sensitive data. In
particular, GIT_CURL_VERBOSE redacts neither the "Authorization" header
nor any cookies specified by GIT_REDACT_COOKIES.
Unifying the tracing mechanism also has the future benefit that any
improvements to the tracing mechanism will benefit both users of
GIT_CURL_VERBOSE and GIT_TRACE_CURL, and we do not need to remember to
implement any improvement twice.
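In other words, after this change the following two invocations produce
the same trace output:

	$ GIT_CURL_VERBOSE=1 git fetch origin
	$ GIT_TRACE_CURL=1 GIT_TRACE_CURL_NO_DATA=1 git fetch origin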
Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Verify that when GIT_TRACE_CURL is set, Git prints out "Authorization:
Basic <redacted>" instead of the base64-encoded authorization details.
Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Since 8cbeba0632 (tests: define GIT_TEST_PROTOCOL_VERSION,
2019-02-25), it has been possible to run tests with a newer protocol
version by setting the GIT_TEST_PROTOCOL_VERSION envvar to a version
number. Tests that assume protocol v0 handle this by explicitly
setting
GIT_TEST_PROTOCOL_VERSION=
or similar constructs like 'test -z "$GIT_TEST_PROTOCOL_VERSION" ||
return 0' to declare that they only handle the default (v0) protocol.
The emphasis there is a bit off: it would be clearer to specify
GIT_TEST_PROTOCOL_VERSION=0 to inform the reader that these tests are
specifically testing and relying on details of protocol v0. Do so.
This way, a reader does not need to know what the default protocol
version is, and the tests can continue to work when the default
protocol version used by Git advances past v0.
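Schematically, the change in each affected test is just (command
illustrative):

	# before: says only "not the overridden version"
	GIT_TEST_PROTOCOL_VERSION= git ls-remote .

	# after: names the protocol the test actually depends on
	GIT_TEST_PROTOCOL_VERSION=0 git ls-remote .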
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Update smart-http test.
* jt/t5551-test-chunked:
t5551: test usage of chunked encoding explicitly
|
|
When run using GIT_TEST_PROTOCOL_VERSION=2, a test in t5551 fails
because 4 POSTs (probe, ls-refs, probe, fetch) are sent instead of 2
(probe, fetch).
One way to resolve this would be to relax the condition (from "= 2" to
greater than 1, say), but upon further inspection, the test probably
shouldn't be counting the number of POSTs. This test states that large
requests are split across POSTs, but this is not correct; the main
change is that chunked transfer encoding is used, but the request is
still contained within one POST. (The test coincidentally works because
Git indeed sends 2 POSTs in the case of a large request, but that is
because, as stated above, the first POST is a probing RPC - see
post_rpc() in remote-curl.c for more information.)
Therefore, instead of counting POSTs, check that chunked transfer
encoding is used. This also has the desirable side effect of passing
with GIT_TEST_PROTOCOL_VERSION=2.
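A hedged sketch of the revised check (URL and buffer size are
illustrative): inspect the curl trace for the header instead of
counting requests:

	GIT_TRACE_CURL="$PWD/trace" git -c http.postbuffer=65536 \
		clone "$HTTPD_URL/smart/repo.git" large-clone &&
	grep "Transfer-Encoding: chunked" trace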
Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Acked-by: Derrick Stolee <dstolee@microsoft.com>
Acked-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The two tests 'invalid Content-Type rejected' and 'server-side error
detected' in 't5551-http-fetch-smart.sh' use "plain" 'grep' to check
that 'git clone' failed with the expected error message, but the
messages they are checking are translated, and, consequently, these
tests fail when the test script is run with GIT_TEST_GETTEXT_POISON
enabled.
Use 'test_i18ngrep' instead.
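The change is mechanical; with an illustrative message:

	# fails under GIT_TEST_GETTEXT_POISON: the message is translated
	grep "server-side error" stderr
	# passes: test_i18ngrep skips the string match when poison is active
	test_i18ngrep "server-side error" stderr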
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Test framework update to more robustly clean up leftover files and
processes after tests are done.
* sg/test-atexit:
t9811-git-p4-label-import: fix pipeline negation
git p4 test: disable '-x' tracing in the p4d watchdog loop
git p4 test: simplify timeout handling
git p4 test: clean up the p4d cleanup functions
git p4 test: use 'test_atexit' to kill p4d and the watchdog process
t0301-credential-cache: use 'test_atexit' to stop the credentials helper
tests: use 'test_atexit' to stop httpd
git-daemon: use 'test_atexit' to stop 'git-daemon'
test-lib: introduce 'test_atexit'
t/lib-git-daemon: make sure to kill the 'git-daemon' process
test-lib: fix interrupt handling with 'dash' and '--verbose-log -x'
|
|
Test update.
* jt/t5551-protocol-v2-does-not-have-half-auth:
t5551: mark half-auth no-op fetch test as v0-only
|
|
Help developers by making it easier to run most of the tests under
different versions of over-the-wire protocols.
* jt/test-protocol-version:
t5552: compensate for v2 filtering ref adv.
tests: fix protocol version for overspecifications
t5700: only run with protocol version 1
t5512: compensate for v0 only sending HEAD symrefs
t5503: fix overspecification of trace expectation
tests: always test fetch of unreachable with v0
t5601: check ssh command only with protocol v0
tests: define GIT_TEST_PROTOCOL_VERSION
|
|
When using protocol v0, upload-pack over HTTP permits a "half-auth"
configuration in which, at the web server layer, the info/refs path is
not protected by authentication but the git-upload-pack path is, so that
a user can perform fetches that do not download any objects without
authentication, but still needs authentication to download objects.
But protocol v2 does not support this, because both ref and pack are
obtained from the git-upload-pack path.
Mark the test verifying this behavior as protocol v0-only, with a
description of what needs to be done to make v2 support this.
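For reference, such a "half-auth" layout can be sketched in the test
Apache configuration roughly like this (directives and paths are
illustrative, not the actual t/lib-httpd setup):

	cat >>"$HTTPD_ROOT_PATH/apache.conf" <<-\EOF
	# only the upload-pack endpoint requires auth; .../info/refs stays
	# open, so the ref advertisement still works anonymously
	<LocationMatch "^/half-auth/.*/git-upload-pack$">
		Require valid-user
	</LocationMatch>
	EOF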
Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Acked-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Use 'test_atexit' to run cleanup commands to stop httpd at the end of
the test script or upon interrupt or failure, as it is shorter,
simpler, and more robust than registering such cleanup commands in the
trap on EXIT in the test scripts.
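That is, instead of each script hand-rolling something like

	trap 'stop_httpd' EXIT

the setup code now simply registers the cleanup once:

	test_atexit stop_httpd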
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
These tests are also marked with a NEEDSWORK comment.
Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Some tests check that fetching an unreachable object fails, but protocol
v2 allows such fetches. Unset GIT_TEST_PROTOCOL_VERSION so that these
tests are always run using protocol v0.
Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Define a GIT_TEST_PROTOCOL_VERSION environment variable meant to be used
from tests. When set, this ensures protocol.version is at least the
given value, allowing the entire test suite to be run as if this
configuration is in place for all repositories.
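For example, one script or the whole suite can be run as if every
repository had protocol.version=2 configured:

	$ GIT_TEST_PROTOCOL_VERSION=2 ./t5551-http-fetch-smart.sh
	$ GIT_TEST_PROTOCOL_VERSION=2 make test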
As of this patch, all tests pass whether GIT_TEST_PROTOCOL_VERSION is
unset or set to 0. Some tests fail when GIT_TEST_PROTOCOL_VERSION is set
to 1 or 2, but this will be dealt with in subsequent patches.
This is based on work by Ævar Arnfjörð Bjarmason.
Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
When a smart HTTP server sends an error message via pkt-line, we detect
the error due to using PACKET_READ_DIE_ON_ERR_PACKET. This case was
added by 2d103c31c2 (pack-protocol.txt: accept error packets in any
context, 2018-12-29), but not covered by tests.
Signed-off-by: Josh Steadmon <steadmon@google.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Over some transports, fetching objects with an exact commit object
name can be done without first seeing the ref advertisements. The
code has been optimized to exploit this.
* jt/avoid-ls-refs:
fetch: do not list refs if fetching only hashes
transport: list refs before fetch if necessary
transport: do not list refs if possible
transport: allow skipping of ref listing
|
|
If only hash literals are given on a "git fetch" command-line, tag
following is not requested, and the fetch is done using protocol v2, a
list of refs is not required from the remote. Therefore, optimize by
invoking transport_get_remote_refs() only if we need the refs.
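For example, a fetch like the following (object name illustrative) no
longer needs the ref advertisement at all:

	$ git fetch --no-tags origin e83c5163316f89bfbde7d9ab23ca2e25604af290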
Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
In t5551 we check that we save cookies correctly to a file when
http.cookiefile and http.savecookies are set. To do so we create an
expect file that expects the cookies in a certain order.
However after e2ef8d6fa ("cookies: support creation-time attribute for
cookies", 2018-08-28) in curl.git (released in curl 7.61.1) that order
changed.
We document the file format as "Netscape/Mozilla cookie file
format (see curl(1))", so any format produced by libcurl should be
fine here. Sort the files to be agnostic of the order of the cookies,
making the test pass with curl versions both newer and older than
7.61.1.
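A hedged sketch of the fix (file names illustrative):

	sort <expect_cookies >expect_cookies.sorted &&
	sort <actual_cookies >actual_cookies.sorted &&
	test_cmp expect_cookies.sorted actual_cookies.sorted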
Reported-by: Todd Zullinger <tmz@pobox.com>
Helped-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Thomas Gummerer <t.gummerer@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Move setup code inside test_expect blocks, to catch unexpected
failures in the setup steps, and bring the test scripts in line with
our modern test style.
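Schematically (illustrative):

	# before: a failure of the setup command goes unnoticed
	git init repo

	# after: the framework checks the setup step like any other
	test_expect_success 'setup' '
		git init repo
	'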
Suggested-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Thomas Gummerer <t.gummerer@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
"git fetch" sometimes failed to update the remote-tracking refs,
which has been corrected.
* jt/connectivity-check-after-unshallow:
fetch-pack: unify ref in and out param
|
|
httpd tests saw occasional breakage due to the way its access log
gets inspected by the tests, which has been updated to make them
less flaky.
* sg/httpd-test-unflake:
t/lib-httpd: avoid occasional failures when checking access.log
t/lib-httpd: add the strip_access_log() helper function
t5541: clean up truncating access log
|
|
When a user fetches:
- at least one up-to-date ref and at least one non-up-to-date ref,
- using HTTP with protocol v0 (or something else that uses the fetch
command of a remote helper)
some refs might not be updated after the fetch.
This bug was introduced in commit 989b8c4452 ("fetch-pack: put shallow
info in output parameter", 2018-06-28) which allowed transports to
report the refs that they have fetched in a new out-parameter
"fetched_refs". If they do so, transport_fetch_refs() makes this
information available to its caller.
Users of "fetched_refs" rely on the following 3 properties:
(1) it is the complete list of refs that was passed to
transport_fetch_refs(),
(2) it has shallow information (REF_STATUS_REJECT_SHALLOW set if
relevant), and
(3) it has updated OIDs if ref-in-want was used (introduced after
989b8c4452).
In an effort to satisfy (1), whenever transport_fetch_refs()
filters the refs sent to the transport, it re-adds the filtered refs to
whatever the transport supplies before returning it to the user.
However, the implementation in 989b8c4452 unconditionally re-adds the
filtered refs without checking if the transport refrained from reporting
anything in "fetched_refs" (which it is allowed to do), resulting in an
incomplete list, no longer satisfying (1).
An earlier effort to resolve this [1] solved the issue by re-adding the
filtered refs only if the transport did not refrain from reporting in
"fetched_refs", but after further discussion, it seems that the better
solution is to revert the API change that introduced "fetched_refs".
This API change was first suggested as part of a ref-in-want
implementation that allowed for ref patterns and, thus, there could be
drastic differences between the input refs and the refs actually fetched
[2]; we eventually decided to only allow exact ref names, but this API
change remained even though its necessity was decreased.
Therefore, revert this API change by reverting commit 989b8c4452, and
make receive_wanted_refs() update the OIDs in the sought array (like how
update_shallow() updates shallow information in the sought array)
instead. A test is also included to show that the user-visible bug
discussed at the beginning of this commit message no longer exists.
[1] https://public-inbox.org/git/20180801171806.GA122458@google.com/
[2] https://public-inbox.org/git/86a128c5fb710a41791e7183207c4d64889f9307.1485381677.git.jonathantanmy@google.com/
Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The last test of 't5561-http-backend.sh', 'server request log matches
test results' may fail occasionally, because the order of entries in
Apache's access log doesn't match the order of requests sent in the
previous tests, although all the right requests are there. I saw it
fail on Travis CI five times in the span of about half a year, when
the order of two subsequent requests was flipped, and could trigger
the failure with a modified Git. However, I was unable to trigger it
with stock Git on my machine. Three tests in
't5541-http-push-smart.sh' and 't5551-http-fetch-smart.sh' check
requests in the log the same way, so they might be prone to a similar
occasional failure as well.
When a test sends a HTTP request, it can continue execution after
'git-http-backend' fulfilled that request, but Apache writes the
corresponding access log entry only after 'git-http-backend' exited.
Some time inevitably passes between fulfilling the request and writing
the log entry, and, under unfavourable circumstances, enough time
might pass for the subsequent request to be sent and fulfilled by a
different Apache thread or process, which may then write its access log
entry first, out of order.
This effect can be exacerbated by adding a bit of variable delay after
the request is fulfilled but before 'git-http-backend' exits, e.g.
like this:
diff --git a/http-backend.c b/http-backend.c
index f3dc218b2..bbf4c125b 100644
--- a/http-backend.c
+++ b/http-backend.c
@@ -709,5 +709,7 @@ int cmd_main(int argc, const char **argv)
max_request_buffer);
cmd->imp(&hdr, cmd_arg);
+ if (getpid() % 2)
+ sleep(1);
return 0;
}
This delay considerably increases the chances of log entries being
written out of order, and in turn makes t5561's last test fail almost
every time. Alas, it doesn't seem to be enough to trigger a similar
failure in t5541 and t5551.
So, since we can't just rely on the order of access log entries always
corresponding to the order of requests, make checking the access log more
deterministic by sorting (simply lexicographically) both the stripped
access log entries and the expected entries before the comparison with
'test_cmp'. This way the order of log entries won't matter and
occasional out-of-order entries won't trigger a test failure, but the
comparison will still notice any unexpected or missing log entries.
OTOH, this sorting will make it harder to identify which test an
unexpected log entry came from or which test's request went missing.
Therefore, in case of an error include the comparison of the unsorted
log entries in the test output as well.
And since all this should be performed in four tests in three test
scripts, put this into a new helper function 'check_access_log' in
't/lib-httpd.sh'.
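A hedged sketch of that helper (details illustrative, but following the
description above):

	check_access_log () {
		sort "$1" >"$1".sorted &&
		strip_access_log >access.log.stripped &&
		sort access.log.stripped >access.log.sorted &&
		if ! test_cmp "$1".sorted access.log.sorted
		then
			# the sorted comparison failed; repeat it unsorted,
			# purely to make the diff easier to attribute
			test_cmp "$1" access.log.stripped
		fi
	}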
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Four tests in three httpd-related test scripts check the contents of
Apache's 'access.log', and they all do so by running 'sed' with the
exact same script consisting of four s/// commands to strip
uninteresting log fields and to vertically align the requested URLs.
Extract this into a common helper function 'strip_access_log' in
'lib-httpd.sh', and use it in all of those tests.
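A hedged sketch of the helper (the s/// commands are illustrative of
the kind of stripping involved):

	strip_access_log () {
		sed -e 's/^.* "//' \
		    -e 's/"//' \
		    -e 's/ [1-9][0-9]*$//' \
		    -e 's/^GET /GET  /' \
		    <"$HTTPD_ROOT_PATH"/access.log
	}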
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Configure curl to accept all encodings which curl supports instead of
only accepting gzip responses.
This fixes an issue when using an installation of curl which is built
without the "zlib" feature. Since aa90b9697 (Enable info/refs gzip
decompression in HTTP client, 2012-09-19) we end up requesting "gzip"
encoding anyway despite libcurl not being able to decode it. Worse,
instead of getting a clear error message indicating so, we end up
falling back to "dumb" http, producing a confusing and difficult to
debug result.
Since curl doesn't do any checking to verify that it supports a
requested encoding, instead set the curl option `CURLOPT_ENCODING` with
an empty string indicating that curl should send an "Accept-Encoding"
header containing only the encodings supported by curl.
Reported-by: Anton Golubev <anton.golubev@gmail.com>
Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
GIT_TRACE_CURL provides a way to debug what is being sent and received
over HTTP, with automatic redaction of sensitive information. But it
also logs data transmissions, which significantly increases the log file
size, sometimes unnecessarily. Add an option "GIT_TRACE_CURL_NO_DATA" to
allow the user to omit such data transmissions.
Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
When using GIT_TRACE_CURL, Git already redacts the "Authorization:" and
"Proxy-Authorization:" HTTP headers. Extend this redaction to a
user-specified list of cookies, specified through the
"GIT_REDACT_COOKIES" environment variable.
Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
When the http server gives an incomplete response to a smart-http
rpc call, it could lead to client waiting for a full response that
will never come. Teach the client side to notice this condition
and abort the transfer.
An improvement counterproposal has failed.
cf. <20161114194049.mktpsvgdhex2f4zv@sigill.intra.peff.net>
* dt/smart-http-detect-server-going-away:
upload-pack: optionally allow fetching any sha1
remote-curl: don't hang when a server dies before any output
|
|
Transport with dumb http can be fooled into following foreign URLs
that the end user does not intend to, especially with the server
side redirects and http-alternates mechanism, which can lead to
security issues. Tighten the redirection and make it more obvious
to the end user when it happens.
* jk/http-walker-limit-redirect-2.9:
http: treat http-alternates like redirects
http: make redirects more obvious
remote-curl: rename shadowed options variable
http: always update the base URL for redirects
http: simplify update_url_from_redirect
|
|
If a malicious server redirects the initial ref
advertisement, it may be able to leak sha1s from other,
unrelated servers that the client has access to. For
example, imagine that Alice is a git user, she has access to
a private repository on a server hosted by Bob, and Mallory
runs a malicious server and wants to find out about Bob's
private repository.
Mallory asks Alice to clone an unrelated repository from her
over HTTP. When Alice's client contacts Mallory's server for
the initial ref advertisement, the server issues an HTTP
redirect for Bob's server. Alice contacts Bob's server and
gets the ref advertisement for the private repository. If
there is anything to fetch, she then follows up by asking
the server for one or more sha1 objects. But who is the
server?
If it is still Mallory's server, then Alice will leak the
existence of those sha1s to her.
Since commit c93c92f30 (http: update base URLs when we see
redirects, 2013-09-28), the client usually rewrites the base
URL such that all further requests will go to Bob's server.
But this is done by textually matching the URL. If we were
originally looking for "http://mallory/repo.git/info/refs",
and we got pointed at "http://bob/other.git/info/refs", then
we know that the right root is "http://bob/other.git".
If the redirect appears to change more than just the root,
we punt and continue to use the original server. E.g.,
imagine the redirect adds a URL component that Bob's server
will ignore, like "http://bob/other.git/info/refs?dummy=1".
We can solve this by aborting in this case rather than
silently continuing to use Mallory's server. In addition to
protecting from sha1 leakage, it's arguably safer and more
sane to refuse a confusing redirect like that in general.
For example, part of the motivation in c93c92f30 is
avoiding accidentally sending credentials over clear http,
just to get a response that says "try again over https". So
even in a non-malicious case, we'd prefer to err on the side
of caution.
The downside is that it's possible this will break a
legitimate but complicated server-side redirection scheme.
The setup given in the newly added test does work, but it's
convoluted enough that we don't need to care about it. A
more plausible case would be a server which redirects a
request for "info/refs?service=git-upload-pack" to just
"info/refs" (because it does not do smart HTTP, and for some
reason really dislikes query parameters). Right now we
would transparently downgrade to dumb-http, but with this
patch, we'd complain (and the user would have to set
GIT_SMART_HTTP=0 to fetch).
Reported-by: Jann Horn <jannh@google.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
It seems a little silly to do a reachability check in the case where we
trust the user to access absolutely everything in the repository.
Also, it's racy in a distributed system -- perhaps one server
advertises a ref, but another has since had a force-push to that ref,
and perhaps the two HTTP requests end up directed to these different
servers.
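Concretely, the escape hatch is a server-side configuration option
(assuming the uploadpack.allowAnySHA1InWant name this series uses):

	$ git config uploadpack.allowAnySHA1InWant true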
Signed-off-by: David Turner <dturner@twosigma.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
In the event that an HTTP server closes the connection after giving a
200 but before giving any packets, we don't want to hang forever
waiting for a response that will never come. Instead, we should die
immediately.
One case where this happens is when attempting to fetch a dangling
object by its object name. In this case, the server dies before
sending any data. Prior to this patch, fetch-pack would wait for
data from the server, and remote-curl would wait for fetch-pack,
causing a deadlock.
Despite this patch, there is other possible malformed input that could
cause the same deadlock (e.g. a half-finished pktline, or a pktline but
no trailing flush). There are a few possible solutions to this:
1. Allowing remote-curl to tell fetch-pack about the EOF (so that
fetch-pack could know that no more data is coming until it says
something else). This is tricky because an out-of-band signal would
be required, or the http response would have to be re-framed inside
another layer of pkt-line or something.
2. Make remote-curl understand some of the protocol. It turns out
that in addition to understanding pkt-line, it would need to watch for
ack/nak. This is somewhat fragile, as information about the protocol
would end up in two places. Also, pkt-lines which are already at the
length limit would need special handling.
Both of these solutions would require a fair amount of work, whereas
this hack is easy and solves at least some of the problem.
Still to do: it would be good to give a better error message
than "fatal: The remote end hung up unexpectedly".
Signed-off-by: David Turner <dturner@twosigma.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Use the new GIT_TRACE_CURL environment variable instead
of the deprecated GIT_CURL_VERBOSE.
Signed-off-by: Elia Pinto <gitter.spiros@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
To support this developer's use case of allowing build agents token-based
access to private repositories, we introduced the http.extraheader
feature, allowing extra HTTP headers to be sent along with every HTTP
request.
This patch verifies that we can configure these extra HTTP headers via the
command-line for use with `git submodule update`, too. Example:
	git -c http.extraheader="Secret: Sauce" submodule update --init
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
To test that extra HTTP headers are passed correctly, t5551 verifies that
a fetch succeeds when two required headers are passed, and that the fetch
does not succeed when those headers are not passed.
However, this test would also succeed if the configuration required only
one header. As Apache's configuration is notoriously tricky (this
developer frequently requires StackOverflow's help to understand Apache's
documentation), especially when still supporting the 2.2 line, let's just
really make sure that the test verifies what we want it to verify.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
We introduce a way to send custom HTTP headers with all requests.
This allows us, for example, to send an extra token from build agents
for temporary access to private repositories. (This is the use case that
triggered this patch.)
This feature can be used like this:
git -c http.extraheader='Secret: sssh!' fetch $URL $REF
Note that `curl_easy_setopt(..., CURLOPT_HTTPHEADER, ...)` takes only
a single list, overriding any previous call. This means we have to
collect _all_ of the headers we want to use into a single list, and
feed it to cURL in one shot. Since we already unconditionally set a
"pragma" header when initializing the curl handles, we can add our new
headers to that list.
For callers which override the default header list (like probe_rpc),
we provide `http_copy_default_headers()` so they can do the same
trick.
Big thanks to Jeff King and Junio Hamano for their outstanding help and
patient reviews.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Communication between the HTTP server and http_backend process can
lead to a dead-lock when relaying a large ref negotiation request.
Diagnose the situation better, and mitigate it by reading such a
request first into core (to a reasonable limit).
* jk/http-backend-deadlock:
http-backend: spool ref negotiation requests to buffer
t5551: factor out tag creation
http-backend: fix die recursion with custom handler
|
|
* jk/http-backend-deadlock-2.3:
http-backend: spool ref negotiation requests to buffer
t5551: factor out tag creation
http-backend: fix die recursion with custom handler
|
|
* jk/http-backend-deadlock-2.2:
http-backend: spool ref negotiation requests to buffer
t5551: factor out tag creation
http-backend: fix die recursion with custom handler
|
|
When http-backend spawns "upload-pack" to do ref
negotiation, it streams the http request body to
upload-pack, who then streams the http response back to the
client as it reads. In theory, git can go full-duplex; the
client can consume our response while it is still sending
the request. In practice, however, HTTP is a half-duplex
protocol. Even if our client is ready to read and write
simultaneously, we may have other HTTP infrastructure in the
way, including the webserver that spawns our CGI, or any
intermediate proxies.
In at least one documented case[1], this leads to deadlock
when trying a fetch over http. What happens is basically:
1. Apache proxies the request to the CGI, http-backend.
2. http-backend gzip-inflates the data and sends
the result to upload-pack.
3. upload-pack acts on the data and generates output over
the pipe back to Apache. Apache isn't reading because
it's busy writing (step 1).
This works fine most of the time, because the upload-pack
output ends up in a system pipe buffer, and Apache reads
it as soon as it finishes writing. But if both the request
and the response exceed the system pipe buffer size, then we
deadlock (Apache blocks writing to http-backend,
http-backend blocks writing to upload-pack, and upload-pack
blocks writing to Apache).
We need to break the deadlock by spooling either the input
or the output. In this case, it's ideal to spool the input,
because Apache does not start reading either stdout _or_
stderr until we have consumed all of the input. So until we
do so, we cannot even get an error message out to the
client.
The solution is fairly straight-forward: we read the request
body into an in-memory buffer in http-backend, freeing up
Apache, and then feed the data ourselves to upload-pack. But
there are a few important things to note:
1. We limit the in-memory buffer to prevent an obvious
denial-of-service attack. This is a new hard limit on
requests, but it's unlikely to come into play. The
default value is 10MB, which covers even the ridiculous
100,000-ref negotiation in the included test (that
actually caps out just over 5MB). But it's configurable
on the off chance that you don't mind spending some
extra memory to make even ridiculous requests work (see
the example after this list).
2. We must take care only to buffer when we have to. For
pushes, the incoming packfile may be of arbitrary
size, and we should connect the input directly to
receive-pack. There's no deadlock problem here, though,
because we do not produce any output until the whole
packfile has been read.
For upload-pack's initial ref advertisement, we
similarly do not need to buffer. Even though we may
generate a lot of output, there is no request body at
all (i.e., it is a GET, not a POST).
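A hedged example of raising the limit from point 1 (both knobs are the
ones this change documents for git-http-backend; treat the exact names
as assumptions):

	# as configuration for the served repository...
	$ git config http.maxRequestBuffer 100M
	# ...or via the environment the webserver passes to the CGI
	$ export GIT_HTTP_MAX_REQUEST_BUFFER=100M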
[1] http://article.gmane.org/gmane.comp.version-control.git/269020
Test-adapted-from: Dennis Kaarsemaker <dennis@kaarsemaker.net>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Test clean-up.
* jk/skip-http-tests-under-no-curl:
tests: skip dav http-push tests under NO_EXPAT=NoThanks
t/lib-httpd.sh: skip tests if NO_CURL is defined
|
|
One of our tests in t5551 creates a large number of tags,
and jumps through some hoops to do it efficiently. Let's
factor that out into a function so we can make other similar
tests.
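A hedged sketch of such a helper (name, ref format and the update-ref
batching are illustrative):

	create_tags () {
		for i in $(test_seq "$1" "$2")
		do
			echo "create refs/tags/many-$i HEAD" || return 1
		done | git update-ref --stdin
	}
	# e.g. the original test's 50,000 tags:
	create_tags 1 50000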
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
If we built git without curl, we can't actually test against
an http server. In fact, all of the test scripts which
include lib-httpd.sh already perform this check, with one
exception: t5540. For those scripts, this is a noop, and for
t5540, this is a bugfix (it used to fail when built with
NO_CURL, though it could go unnoticed if you had a stale
git-remote-https in your build directory).
Noticed-by: Junio C Hamano <junio@pobox.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
People often forget to chain the commands in their test together
with &&, leaving a failure from an earlier command in the test go
unnoticed. The new GIT_TEST_CHAIN_LINT mechanism allows you to
catch such a mistake more easily.
* jk/test-chain-lint: (36 commits)
t9001: drop save_confirm helper
t0020: use test_* helpers instead of hand-rolled messages
t: simplify loop exit-code status variables
t: fix some trivial cases of ignored exit codes in loops
t7701: fix ignored exit code inside loop
t3305: fix ignored exit code inside loop
t0020: fix ignored exit code inside loops
perf-lib: fix ignored exit code inside loop
t6039: fix broken && chain
t9158, t9161: fix broken &&-chain in git-svn tests
t9104: fix test for following larger parents
t4104: drop hand-rolled error reporting
t0005: fix broken &&-chains
t7004: fix embedded single-quotes
t0050: appease --chain-lint
t9001: use test_when_finished
t4117: use modern test_* helpers
t6034: use modern test_* helpers
t1301: use modern test_* helpers
t0020: use modern test_* helpers
...
|
|
Test fixes.
* jk/test-annoyances:
t5551: make EXPENSIVE test cheaper
t5541: move run_with_cmdline_limit to test-lib.sh
t: pass GIT_TRACE through Apache
t: redirect stderr GIT_TRACE to descriptor 4
t: translate SIGINT to an exit
|
|
These are tests which are missing a link in their &&-chain,
but during a setup phase. We may fail to notice failure in
commands that build the test environment, but these are
typically not expected to fail at all (but it's still good
to double-check that our test environment is what we
expect).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
These are tests which are missing a link in their &&-chain,
but in a way that probably does not affect the outcome of
the test. Most of these are of the form:
some_cmd >actual
test_cmp expect actual
The main point of the test is to verify the output, and a
failure in some_cmd would probably be noticed by bogus
output. But it is good for the tests to also confirm that
"some_cmd" does not die unexpectedly after producing its
output.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
These are tests which are missing a link in their &&-chain,
in a location which causes a significant portion of the test
to be missed (e.g., the test effectively does nothing, or
consists of a long string of actions and output comparisons,
and we throw away the exit code of at least one part of the
string).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
We create 50,000 tags to check that we don't overflow the
command-line of fetch-pack. But by using run_with_cmdline_limit,
we can get the same effect with a much smaller number of
tags. This makes the test fast enough that we can drop the
EXPENSIVE prereq, which means people will actually run it.
It was not documented to do so, but this test was also the
only test of a clone-over-http that requires multiple POSTs
during the conversation. We can continue to test that by
dropping http.postbuffer to its minimum size, and checking
that we get two POSTs.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
When upload-pack advertises the refs (either for a normal,
non-stateless request, or for the initial contact in a
stateless one), we call for_each_ref with the send_ref
function as its callback. send_ref, in turn, calls
mark_our_ref, which checks whether the ref is hidden, and
sets OUR_REF or HIDDEN_REF on the object as appropriate. If
it is hidden, mark_our_ref also returns "1" to signal
send_ref that the ref should not be advertised.
If we are not advertising refs (i.e., the follow-up
invocation by an http client to send its "want" lines), we
use mark_our_ref directly as a callback to for_each_ref. Its
marking does the right thing, but when it then returns "1"
to for_each_ref, the latter interprets this as an error and
stops iterating. As a result, we skip marking all of the
refs that come lexicographically after it. Any "want" lines
from the client asking for those objects will fail, as they
were not properly marked with OUR_REF.
To solve this, we introduce a wrapper callback around
mark_our_ref which always returns 0 (even if the ref is
hidden, we want to keep iterating). We also tweak the
signature of mark_our_ref to exclude unnecessary parameters
that were present only to conform to the callback interface.
This should make it less likely for somebody to accidentally
use it as a callback in the future.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
* 'jc/test-lazy-prereq' (early part):
t3419: drop unnecessary NOT_EXPENSIVE pseudo-prerequisite
t3302: drop unnecessary NOT_EXPENSIVE pseudo-prerequisite
t3302: do not chdir around in the primary test process
t3302: coding style updates
test: turn USR_BIN_TIME into a lazy prerequisite
test: turn EXPENSIVE into a lazy prerequisite
|
|
* tb/t5551-clone-notice-to-stderr:
t5551: fix the 50,000 tag test
|
|
Make clear from their file names which one is for the dumb protocol
and which one is for the smart protocol.
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|