path: root/t/lib-httpd
Age  Commit message  Author  Files/lines changed
2025-04-07  t/lib-httpd: refactor "one-time-perl" CGI script to not depend on Perl  Patrick Steinhardt  3 files changed, -30/+29
Our Apache HTTPD setup exposes a "one_time_perl" endpoint to access repositories. If used, we execute the "apply-one-time-perl.sh" CGI script that checks whether we have a "one-time-perl" script. If so, that script gets executed so that it can munge what would be served. Once done, the script gets removed so that it doesn't execute a second time.

As the name says, this functionality expects the user to pass a Perl script. This isn't really necessary though: we can just as easily implement the same thing with arbitrary scripts. Refactor the code so that we instead expect an arbitrary script to exist and rename the functionality to "one-time-script". Adapt callers to use shell utilities instead of Perl so that we can drop the PERL_TEST_HELPERS prerequisite.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
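For illustration, the munging hook can now be any executable; a hedged sketch follows (the endpoint path, helper names, and the script's calling convention here are assumptions, not the committed code):

    # one-shot filter: the CGI is expected to run it over the response once,
    # then delete it so it never fires again
    write_script "$HTTPD_ROOT_PATH/one-time-script" <<-\EOF &&
    sed "s/master/mangled/"
    EOF
    git clone "$HTTPD_URL/one_time_script/repo.git" munged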
2024-04-16  t5563: refactor for multi-stage authentication  brian m. carlson  1 file changed, -4/+13
Some HTTP authentication schemes, such as NTLM- and Kerberos-based options, require more than one round trip to authenticate. Currently, these can only be supported in libcurl, since Git does not have support for this in the credential helper protocol. However, in a future commit, we'll add support for this functionality into the credential helper protocol and Git itself. Because we don't really want to implement either NTLM or Kerberos, both of which are complex protocols, we'll want to test this using a fake credential authentication scheme. In order to do so, update t5563 and its backend to allow us to accept multiple sets of credentials and respond with different behavior in each case. Since we can now provide any number of possible status codes, provide a non-specific reason phrase so we don't have to generate a more specific one based on the response. The reason phrase is mandatory according to the status-line production in RFC 7230, but clients SHOULD ignore it, and curl does (except to print it). Each entry in the authorization and challenge fields contains an ID, which indicates a corresponding credential and response. If the response is a 200 status, then we continue to execute git-http-backend. Otherwise, we print the corresponding status and response. If no ID is matched, we use the default response with a status of 401. Note that there is an implicit order to the parameters. The ID is always first and the creds or response value is always last, and therefore may contain spaces, equals signs, or other arbitrary data. Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-11-11  t/lib-httpd: stop using legacy crypt(3) for authentication  Patrick Steinhardt  2 files changed, -2/+2
When setting up httpd for our tests, we also install a passwd and proxy-passwd file that contain the test user's credentials. These credentials currently use crypt(3) as the password encryption schema. This schema can be considered deprecated nowadays as it is not safe anymore. Quoting Apache httpd's documentation [1]:

    > Unix only. Uses the traditional Unix crypt(3) function with a
    > randomly-generated 32-bit salt (only 12 bits used) and the first 8
    > characters of the password. Insecure.

This is starting to cause issues in modern Linux distributions. glibc has deprecated its libcrypt library that used to provide crypt(3) in favor of the libxcrypt library. This newer replacement provides a compile time switch to disable insecure password encryption schemata, which causes crypt(3) to always return `EINVAL`. The end result is that httpd tests that exercise authentication will fail on distros that use libxcrypt without these insecure encryption schemata.

Regenerate the passwd files to instead use the default password encryption schema, which is md5. While it feels kind of funny that an MD5-based encryption schema should be more secure than anything else, it is the current default and supported by all platforms. Furthermore, it really doesn't matter all that much given that these files are only used for testing purposes anyway.

[1]: https://httpd.apache.org/docs/2.4/misc/password_encryptions.html

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
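For reference, an entry in Apache's default MD5-based format can be regenerated with htpasswd; the account name and password below are placeholders, not necessarily the values in the shipped files:

    htpasswd -m -b -c passwd user@host pass@host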
2023-09-21  test-lib: set UBSAN_OPTIONS to match ASan  Jeff King  1 file changed, -0/+1
For a long time we have used ASAN_OPTIONS to set abort_on_error. This is important because we want to notice detected problems even in programs which are expected to fail. But we never did the same for UBSAN_OPTIONS. This means that our UBSan test suite runs might silently miss some cases. It also causes a more visible effect, which is that t4058 complains about unexpected "fixes" (and this is how I noticed the issue):

    $ make SANITIZE=undefined CC=gcc && (cd t && ./t4058-*)
    ...
    ok 8 - git read-tree does not segfault # TODO known breakage vanished
    ok 9 - reset --hard does not segfault # TODO known breakage vanished
    ok 10 - git diff HEAD does not segfault # TODO known breakage vanished

The tests themselves aren't that interesting. We have a known bug where these programs segfault, and they do when compiled without sanitizers. With UBSan, when the test runs:

    test_might_fail git read-tree --reset base

it gets:

    cache-tree.c:935:9: runtime error: member access within misaligned address 0x5a5a5a5a5a5a5a5a for type 'struct cache_entry', which requires 8 byte alignment

So that's garbage memory which would _usually_ cause us to segfault, but UBSan catches it and complains first about the alignment. That makes sense, but the weird thing is that UBSan then exits instead of aborting, so our test_might_fail call considers that an acceptable outcome and the test "passes".

Curiously, this historically seems to have aborted, because I've run "make test" with UBSan many times (and so did our CI) and we never saw the problem. Even more curiously, I see an abort if I use clang with ASan and UBSan together, like:

    # this aborts!
    make SANITIZE=undefined,address CC=clang

But not with just UBSan, and not with both when used with gcc:

    # none of these do
    make SANITIZE=undefined CC=gcc
    make SANITIZE=undefined CC=clang
    make SANITIZE=undefined,address CC=gcc

Likewise moving to older versions of gcc (I tried gcc-11 and gcc-12 on my Debian system) doesn't abort. Nor does moving around in Git's history. Neither this test nor the relevant code have been touched in a while, and going back to v2.41.0 produces the same outcome (even though many UBSan CI runs have passed in the meantime).

So _something_ changed on my system (and likely will soon on other people's, since this is stock Debian unstable), but I didn't track it further. I don't know why it ever aborted in the past, but we definitely should be explicit here and tell UBSan what we want to happen.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
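In shell terms, the change amounts to giving UBSan the same abort-on-error default that ASan already gets; a sketch, not the literal test-lib.sh hunk:

    # make UBSan abort (like ASan), unless the user already set their own options
    : "${UBSAN_OPTIONS:=abort_on_error=1}"
    export UBSAN_OPTIONS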
2023-05-19  Merge branch 'jk/http-test-cgipassauth-unavailable-in-older-apache'  Junio C Hamano  1 file changed, -0/+2
We started unconditionally testing with CGIPassAuth directive but it is unavailable in older Apache that ships with CentOS 7 that has about a year of shelf-life still left. The test has conditionally been disabled when running with an ancient Apache. This was a fix for a recent regression caught before the release, so no need to mention it in the release notes. * jk/http-test-cgipassauth-unavailable-in-older-apache: t/lib-httpd: make CGIPassAuth support conditional
2023-05-18  t/lib-httpd: make CGIPassAuth support conditional  Jeff King  1 file changed, -0/+2
Commit 988aad99b4 (t5563: add tests for basic and anoymous HTTP access, 2023-02-27) added tests that require Apache to support the CGIPassAuth directive, which was added in Apache 2.4.13. This is fairly old (~8 years), but recent enough that we still encounter it in the wild (e.g., RHEL/CentOS 7, which is not EOL until June 2024). We can live with skipping the new tests on such a platform. But unfortunately, since the directive is used unconditionally in our apache.conf, it means the web server fails to start entirely, and we cannot run other HTTP tests at all (e.g., the basic ones in t5551). We can fix that by making the config conditional, and only triggering it for t5563. That solves the problem for t5551 (which then ignores the directive entirely). For t5563, we'd see apache complain in start_httpd; with the default setting of GIT_TEST_HTTPD, we'd then skip the whole script. But that leaves one small problem: people may set GIT_TEST_HTTPD=1 explicitly, which instructs the tests to fail (rather than skip) when we can't start the webserver (to avoid accidentally missing some tests). This could be worked around by having the user manually set GIT_SKIP_TESTS on a platform with an older Apache. But we can be a bit friendlier by doing the version check ourselves and setting an appropriate prereq. We'll use the (lack of) prereq to then skip the rest of t5563. In theory we could use the prereq to skip individual tests, but in practice this whole script depends on it. Reported-by: Todd Zullinger <tmz@pobox.com> Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
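A sketch of one way such a directive can be guarded so an older Apache never parses it; the define name and location are assumptions rather than the committed config (the test script would start httpd with "-D USE_CGIPASSAUTH" only when it needs the feature):

    <IfDefine USE_CGIPASSAUTH>
    <Location /custom_auth/>
            CGIPassAuth on
    </Location>
    </IfDefine>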
2023-04-11  Merge branch 'jk/use-perl-path-consistently'  Junio C Hamano  2 files changed, -1/+3
Tests had a few places where we ignored PERL_PATH and blindly used /usr/bin/perl, which have been corrected. * jk/use-perl-path-consistently: t/lib-httpd: pass PERL_PATH to CGI scripts
2023-04-06  t/lib-httpd: pass PERL_PATH to CGI scripts  Jeff King  2 files changed, -1/+3
As discussed in t/README, tests should aim to use PERL_PATH rather than straight "perl". We usually do this automatically with a "perl" function in test-lib.sh, but a few cases need to be handled specially. One such case is the apply-one-time-perl.sh CGI, which invokes plain "perl". It should be using $PERL_PATH, but to make that work, we must also instruct Apache to pass through the variable.

Prior to this patch, doing:

    mv /usr/bin/perl /usr/bin/my-perl
    make PERL_PATH=/usr/bin/my-perl test

would fail t5702, t5703, and t5616. After this it passes.

This is a pretty extreme case, as even if you install perl elsewhere, you'd likely still have it in your $PATH. A more realistic case is that you don't want to use the perl in your $PATH (because it's older, broken, etc) and expect PERL_PATH to consistently override that (since that's what it's documented to do). Removing it completely is just a convenient way of completely breaking it for testing purposes.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
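The server-side half of this is a pass-through directive of roughly this shape in the test Apache config, with the CGI then invoking "$PERL_PATH" instead of a bare perl:

    PassEnv PERL_PATH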
2023-03-17  Merge branch 'mc/credential-helper-www-authenticate'  Junio C Hamano  2 files changed, -0/+45
Allow information carried on the WWW-Authenticate header to be passed to the credential helpers.

* mc/credential-helper-www-authenticate:
  credential: add WWW-Authenticate header to cred requests
  http: read HTTP WWW-Authenticate response headers
  t5563: add tests for basic and anoymous HTTP access
2023-02-28  Merge branch 'jk/http-test-fixes'  Junio C Hamano  1 file changed, -1/+1
Various fix-ups on HTTP tests.

* jk/http-test-fixes:
  t5559: make SSL/TLS the default
  t5559: fix test failures with LIB_HTTPD_SSL
  t/lib-httpd: enable HTTP/2 "h2" protocol, not just h2c
  t/lib-httpd: respect $HTTPD_PROTO in expect_askpass()
  t5551: drop curl trace lines without headers
  t5551: handle v2 protocol in cookie test
  t5551: simplify expected cookie file
  t5551: handle v2 protocol in upload-pack service test
  t5551: handle v2 protocol when checking curl trace
  t5551: stop forcing clone to run with v0 protocol
  t5551: handle HTTP/2 when checking curl trace
  t5551: lower-case headers in expected curl trace
  t5551: drop redundant grep for Accept-Language
  t5541: simplify and move "no empty path components" test
  t5541: stop marking "used receive-pack service" test as v0 only
  t5541: run "used receive-pack service" test earlier
2023-02-27  t5563: add tests for basic and anoymous HTTP access  Matthew John Cheetham  2 files changed, -0/+45
Add a test showing simple anonymous HTTP access to an unprotected repository that results in no credential helper invocations. Also add a test demonstrating simple basic authentication with simple credential helper support.

Leverage a no-parsed headers (NPH) CGI script so that we can directly control the HTTP responses to simulate a multitude of good, bad and ugly remote server implementations around auth.

Signed-off-by: Matthew John Cheetham <mjcheetham@outlook.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
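A minimal sketch of the no-parsed-headers idea (the real helper is richer and driven per test): because the script's name starts with "nph-", Apache relays its output verbatim, so the test controls the status line and challenge directly.

    #!/bin/sh
    printf 'HTTP/1.1 401 Unauthorized\r\n'
    printf 'WWW-Authenticate: Basic realm="git-test"\r\n'
    printf 'Content-Length: 0\r\n'
    printf '\r\n'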
2023-02-23  t/lib-httpd: enable HTTP/2 "h2" protocol, not just h2c  Jeff King  1 file changed, -1/+1
Commit 73c49a4474 (t: run t5551 tests with both HTTP and HTTP/2, 2022-11-11) added Apache config to enable HTTP/2. However, it only enabled the "h2c" protocol, which allows cleartext HTTP/2 (generally based on an upgrade header during an HTTP/1.1 request). This is what t5559 is generally testing, since by default we don't set up SSL/TLS.

However, it should be possible to run t5559 with LIB_HTTPD_SSL set. In that case, Apache will advertise support for HTTP/2 via ALPN during the TLS handshake. But we need to tell it to support "h2" (the non-cleartext version) to do so. Without that, curl does not even try to do the HTTP/1.1 upgrade (presumably because after seeing that we did TLS but didn't get the ALPN indicator, it assumes it would be fruitless).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
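The relevant Apache knob here is mod_http2's Protocols directive; advertising both HTTP/2 variants alongside HTTP/1.1 looks roughly like:

    Protocols h2 h2c http/1.1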
2023-02-16  add basic http proxy tests  Jeff King  2 files changed, -0/+17
We do not test our http proxy functionality at all in the test suite, so this is a pretty big blind spot. Let's at least add a basic check that we can go through an authenticating proxy to perform a clone.

A few notes on the implementation:

  - I'm using a single apache instance to proxy to itself. This seems to work fine in practice, and we can check with a test that this rather unusual setup is doing what we expect.

  - I've put the proxy tests into their own script, and it's the only one which loads the apache proxy config. If any platform can't handle this (e.g., doesn't have the right modules), the start_httpd step should fail and gracefully skip the rest of the script (but all the other http tests in existing scripts will continue to run).

  - I used a separate passwd file to make sure we don't ever get confused between proxy and regular auth credentials. It's using the antiquated crypt() format. This is a terrible choice security-wise in the modern age, but it's what our existing passwd file uses, and should be portable. It would probably be reasonable to switch both of these to bcrypt, but we can do that in a separate patch.

  - On the client side, we test two situations with credentials: when they are present in the url, and when the username is present but we prompt for the password. I think we should be able to handle the case that _neither_ is present, but an HTTP 407 causes us to prompt for them. However, this doesn't seem to work. That's either a bug, or at the very least an opportunity for a feature, but I punted on it for now. The point of this patch is just getting basic coverage, and we can explore possible deficiencies later.

  - this doesn't work with LIB_HTTPD_SSL. This probably would be valuable to have, as https over an http proxy is totally different (it uses CONNECT to tunnel the session). But adding in mod_proxy_connect and some basic config didn't seem to work for me, so I punted for now. Much of the rest of the test suite does not currently work with LIB_HTTPD_SSL either, so we shouldn't be making anything much worse here.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
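A rough shape of such a self-proxying setup; the module paths, realm, and account here are placeholders rather than the committed config:

    LoadModule proxy_module modules/mod_proxy.so
    LoadModule proxy_http_module modules/mod_proxy_http.so
    ProxyRequests On
    <Proxy "*">
            AuthType Basic
            AuthName "proxy"
            AuthUserFile proxy-passwd
            Require valid-user
    </Proxy>

On the client side, the clone is then pointed at the proxy via http.proxy, e.g.:

    git -c http.proxy=http://proxuser:proxpass@127.0.0.1:$LIB_HTTPD_PORT \
            clone "$HTTPD_URL/smart/repo.git" proxied-clone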
2023-02-01  t/lib-httpd: increase ssl key size to 2048 bits  Jeff King  1 file changed, -1/+1
Recent versions of openssl will refuse to work with 1024-bit RSA keys, as they are considered insecure. I didn't track down the exact version in which the defaults were tightened, but the Debian-packaged openssl 3.0 on my system yields:

    $ LIB_HTTPD_SSL=1 ./t5551-http-fetch-smart.sh -v -i
    [...]
    SSL Library Error: error:0A00018F:SSL routines::ee key too small
    1..0 # SKIP web server setup failed

This could probably be overcome with configuration, but that's likely to be a headache (especially if it requires touching /etc/openssl). Let's just pick a key size that's less outrageously out of date.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
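Regenerating a self-signed test certificate with a 2048-bit key is a one-liner; the file names and subject below are illustrative:

    openssl req -new -x509 -nodes -newkey rsa:2048 -days 3650 \
            -keyout test-key.pem -out test-cert.pem -subj "/CN=127.0.0.1"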
2023-02-01  t/lib-httpd: drop SSLMutex config  Jeff King  1 file changed, -1/+0
The SSL config enabled by setting LIB_HTTPD_SSL does not work with Apache versions greater than 2.2, as more recent versions complain about the SSLMutex directive. According to https://httpd.apache.org/docs/current/upgrading.html:

    Directives AcceptMutex, LockFile, RewriteLock, SSLMutex,
    SSLStaplingMutex, and WatchdogMutexPath have been replaced with a
    single Mutex directive. You will need to evaluate any use of these
    removed directives in your 2.2 configuration to determine if they
    can just be deleted or will need to be replaced using Mutex.

Deleting this line will just use the system default, which seems sensible. The original came as part of faa4bc35a0 (http-push: add regression tests, 2008-02-27), but no specific reason is given there (or on the mailing list) for its presence.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-02-01  t/lib-httpd: bump required apache version to 2.4  Jeff King  1 file changed, -18/+4
Apache 2.4 has been out since early 2012, almost 11 years. And its predecessor, 2.2, has been out of support since its last release in 2017, over 5 years ago. The last mention on the mailing list was from around the same time, in this thread: https://lore.kernel.org/git/20171231023234.21215-1-tmz@pobox.com/ We can probably assume that 2.4 is available everywhere. And the stakes are fairly low, as the worst case is that such a platform would skip the http tests. This lets us clean up a few minor version checks in the config file, but also revert f1f2b45be0 (tests: adjust the configuration for Apache 2.2, 2016-05-09). Its technique isn't _too_ bad, but certainly required a bit more explanation than the 2.4 version it replaced. I manually confirmed that the test in t5551 still behaves as expected (if you replace "cadabra" with "foo", the server correctly rejects the request). It will also help future patches which will no longer have to deal with conditional config for this old version. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-02-01  t/lib-httpd: bump required apache version to 2.2  Jeff King  1 file changed, -8/+0
Apache 2.2 was released in 2005, almost 18 years ago. We can probably assume that people are running a version at least that old (and the stakes for removing it are fairly low, as the worst case is that they would not run the http tests against their ancient version). Dropping support for the older versions cleans up the config file a little, and will also enable us to bump the required version further (with more cleanups) in a future patch. Note that the file actually checks for version 2.1. In apache's versioning scheme, odd numbered versions are for development and even numbers are for stable releases. So 2.1 and 2.2 are effectively the same from our perspective. Older versions would just fail to start, which would generally cause us to skip the tests. However, we do have version detection code in lib-httpd.sh which produces a nicer error message, so let's update that, too. I didn't bother handling the case of "3.0", etc. Apache has been on 2.x for 21 years, with no signs of bumping the major version. And if they eventually do, I suspect there will be enough breaking changes that we'd need to update more than just the numeric version check. We can worry about that hypothetical when it happens. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-11-14  t: run t5551 tests with both HTTP and HTTP/2  Jeff King  1 file changed, -3/+16
We have occasionally seen bugs that affect Git running only against an HTTP/2 web server, not an HTTP one. For instance, b66c77a64e (http: match headers case-insensitively when redacting, 2021-09-22). But since we have no test coverage using HTTP/2, we only uncover these bugs in the wild.

That commit gives a recipe for converting our Apache setup to support HTTP/2, but:

  - it's not necessarily portable

  - we don't want to just test HTTP/2; we really want to do a variety of basic tests for _both_ protocols

This patch handles both problems by running a duplicate of t5551 (labeled as t5559 here) with an alternate-universe setup that enables HTTP/2. So we'll continue to run t5551 as before, but run the same battery of tests again with HTTP/2. If HTTP/2 isn't supported on a given platform, then t5559 should bail during the webserver setup, and gracefully skip all tests (unless GIT_TEST_HTTPD has been changed from "auto" to "yes", where the point is to complain when webserver setup fails).

In theory other http-related test scripts could benefit from the same duplication, but doing t5551 should give us a reasonable check of basic functionality, and would have caught both bugs we've seen in the wild with HTTP/2.

A few notes on the implementation:

  - a script enables the server side config by calling enable_http2 before starting the webserver. This avoids even trying to load any HTTP/2 config for t5551 (which is what lets it keep working with regular HTTP even on systems that don't support it). This also sets a prereq which can be used by individual tests.

  - As discussed in b66c77a64e, the http2 module isn't compatible with the "prefork" mpm, so we need to pick something else. I chose "event" here, which works on my Debian system, but it's possible there are platforms which would prefer something else. We can adjust that later if somebody finds such a platform.

  - The test "large fetch-pack requests can be sent using chunked encoding" makes sure we use a chunked transfer-encoding by looking for that header in the trace. But since HTTP/2 has its own streaming mechanisms, we won't find such a header. We could skip the test entirely by marking it with !HTTP2. But there's some value in making sure that the fetch itself succeeded. So instead, we'll confirm that either we're using HTTP2 _or_ we saw the expected chunked header.

  - the redaction tests fail under HTTP/2 with recent versions of curl. This is a bug! I've marked them with !HTTP2 here to skip them under t5559 for the moment. Using test_expect_failure would be more appropriate, but would require a bunch of boilerplate. Since we'll be fixing them momentarily, let's just skip them for now to keep the test suite bisectable, and we can re-enable them in the commit that fixes the bug.

  - one alternative layout would be to push most of t5551 into a lib-t5551.sh script, then source it from both t5551 and t5559. Keeping t5551 intact seemed a little simpler, as it's one less level of indirection for people fixing bugs/regressions in the non-HTTP/2 tests.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-10-06  t/lib-httpd: pass LANG and LC_ALL to Apache  René Scharfe  1 file changed, -0/+2
t5411 starts a web server with no explicit language setting, so it uses the system default. Ten of its tests expect it to return error messages containing the prefix "fatal: ", emitted by die(). This prefix can be localized since a1fd2cf8cd (i18n: mark message helpers prefix for translation, 2022-06-21), however. As a result these ten tests break for me on a system with LANG="de_DE.UTF-8" because the web server sends localized messages with "Schwerwiegend: " instead of "fatal: ". Fix these tests by passing LANG and LC_ALL to the web server, which are set to "C" by t/test-lib.sh, to get untranslated messages on both sides. Helped-by: Junio C Hamano <gitster@pobox.com> Signed-off-by: René Scharfe <l.s.r@web.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
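The fix boils down to letting those two variables through to the server side, i.e. test Apache configuration of this shape:

    PassEnv LANG
    PassEnv LC_ALL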
2021-10-29  Merge branch 'jk/http-push-status-fix'  Junio C Hamano  2 files changed, -0/+10
"git push" client talking to an HTTP server did not diagnose the lack of the final status report from the other side correctly, which has been corrected. * jk/http-push-status-fix: transport-helper: recognize "expecting report" error from send-pack send-pack: complain about "expecting report" with --helper-status
2021-10-18  send-pack: complain about "expecting report" with --helper-status  Jeff King  2 files changed, -0/+10
When pushing to a server which erroneously omits the final ref-status report, the client side should complain about the refs for which we didn't receive the status (because we can't just assume they were updated). This works over most transports like ssh, but for http we'll print a very misleading "Everything up-to-date".

It works for ssh because send-pack internally sets the status of each ref to REF_STATUS_EXPECTING_REPORT, and then if the server doesn't tell us about a particular ref, it will stay at that value. When we print the final status table, we'll see that we're still on EXPECTING_REPORT and complain then.

But for http, we go through remote-curl, which invokes send-pack with "--stateless-rpc --helper-status". The latter option causes send-pack to return a machine-readable list of ref statuses to the remote helper. But ever since its inception in de1a2fdd38 (Smart push over HTTP: client side, 2009-10-30), the send-pack code has simply omitted mention of any ref which ended up in EXPECTING_REPORT.

In the remote helper, we then take the absence of any status report from send-pack to mean that the ref was not even something we tried to send, and thus it prints "Everything up-to-date". Fortunately it does detect the eventual non-zero exit from send-pack, and propagates that in its own non-zero exit code. So at least a careful script invoking "git push" would notice the failure. But sending the misleading message on stderr is certainly confusing for humans (not to mention the machine-readable "push --porcelain" output, though again, any careful script should be checking the exit code from push, too).

Nobody seems to have noticed because the server in this instance has to be misbehaving: it has promised to support the ref-status capability (otherwise the client will not set EXPECTING_REPORT at all), but didn't send us any. If the connection were simply cut, then send-pack would complain about getting EOF while trying to read the status. But if the server actually sends a flush packet (i.e., saying "now you have all of the ref statuses" without actually sending any), then the client ends up in this confused situation.

The fix is simple: we should return an error message from "send-pack --helper-status", just like we would for any other per-ref error condition (in the test I included, the server simply omits all ref status responses, but a more insidious version of this would skip only some of them).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-09-10  http-backend: handle HTTP_GIT_PROTOCOL CGI variable  Jeff King  1 file changed, -2/+0
When a client requests the v2 protocol over HTTP, they set the Git-Protocol header. Webservers will generally make that available to our CGI as HTTP_GIT_PROTOCOL in the environment. However, that's not sufficient for upload-pack, etc, to respect it; they look in GIT_PROTOCOL (without the HTTP_ prefix). Either the webserver or the CGI is responsible for relaying that HTTP header into the GIT_PROTOCOL variable. Traditionally, our tests have configured the webserver to do so, but that's a burden on the server admin. We can make this work out of the box by having the http-backend CGI copy the contents of HTTP_GIT_PROTOCOL to GIT_PROTOCOL. There are no new tests here. By removing the SetEnvIf line from our test Apache config, we're now relying on this behavior of http-backend to trigger the v2 protocol there (and there are numerous tests that fail if this doesn't work). There is one subtlety here: we copy HTTP_GIT_PROTOCOL only if there is no existing GIT_PROTOCOL variable. That leaves the webserver admin free to override the client's decision if they choose. This is unlikely to be useful in practice, but is more flexible. And indeed, it allows the v2-to-v0 fallback test added in the previous commit to continue working. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
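http-backend implements this in C, but the rule it applies is equivalent in spirit to the following shell fragment (illustration only, not the actual code):

    # respect an explicit GIT_PROTOCOL from the server configuration;
    # otherwise fall back to what the webserver captured from the
    # Git-Protocol request header
    GIT_PROTOCOL=${GIT_PROTOCOL:-$HTTP_GIT_PROTOCOL}
    export GIT_PROTOCOL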
2021-09-10  t5551: test v2-to-v0 http protocol fallback  Jeff King  1 file changed, -0/+5
Since we use the v2 protocol by default, the connection of a v2 client to a v2 server is well covered by the test suite. And with the GIT_TEST_PROTOCOL_VERSION knob, we can easily test a v0 client connecting to a v2-aware server (which will then just speak v0). But we have no regular tests that a v2 client, when encountering a non-v2-aware server, will correctly fall back to using v0.

In theory this is a job for the cross-version tests in t/interop, but:

  - they cover only git:// and file:// clones

  - they are not part of the usual test suite, so nobody ever runs them anyway

Since using v2 over http requires configuring the web server to pass along the Git-Protocol header, we can easily create a situation where the server does not respect the v2 probe, and the conversation falls back to v0. This works just fine. This new test is not about fixing any particular bug, but just making sure that the system works (and continues to work) as expected.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-05-24  remote-curl: error on incomplete packet  Denton Liu  3 files changed, -0/+14
Currently, remote-curl acts as a proxy and blindly forwards packets between an HTTP server and fetch-pack. In the case of a stateless RPC connection where the connection is terminated with a partially written packet, remote-curl will blindly send the partially written packet before waiting on more input from fetch-pack. Meanwhile, fetch-pack will read the partial packet and continue reading, expecting more input. This results in a deadlock between the two processes.

For a stateless connection, inspect packets before sending them and error out if a packet line is incomplete.

Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Denton Liu <liu.denton@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-02-27  t/lib-httpd: avoid using macOS' sed  Johannes Schindelin  3 files changed, -27/+30
Among other differences relative to GNU sed, macOS' sed always ends its output with a trailing newline, even if the input did not have such a trailing newline.

Surprisingly, this makes three httpd-based tests fail on macOS: t5616, t5702 and t5703. ("Surprisingly" because those tests have been around for some time, but apparently nobody runs them on macOS with a working Apache2 setup.)

The reason is that we use `sed` in those tests to filter the response of the web server. Apart from the fact that we use GNU constructs (such as using a space after the `c` command instead of a backslash and a newline), we have another problem: macOS' sed uses LF-only newlines while webservers are supposed to use CR/LF ones.

Even worse, t5616 uses `sed` to replace a binary part of the response with a new binary part (kind of hoping that the replaced binary part does not contain a 0x0a byte which would be interpreted as a newline). To that end, it calls on Perl to read the binary pack file and hex-encode it, then calls on `sed` to prefix every hex digit pair with a `\x` in order to construct the text that the `c` statement of the `sed` invocation is supposed to insert. So we call Perl and sed to construct a sed statement. The final nail in the coffin is that macOS' sed does not even interpret those `\x<hex>` constructs.

Let's just replace all of that by Perl snippets. With Perl, at least, we do not have to deal with GNU vs macOS semantics, we do not have to worry about unwanted trailing newlines, and we do not have to spawn commands to construct arguments for other commands to be spawned (i.e. we can avoid a whole lot of shell scripting complexity).

The upshot is that this fixes t5616, t5702 and t5703 on macOS with Apache2.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-11-29  apply-one-time-sed.sh: modernize style  Denton Liu  1 file changed, -3/+5
Convert `[ ... ]` to use `test` and test for the existence of a regular file (`-f`) instead of any file (`-e`). Move the `then`s onto their own lines so that it conforms with the general test style. Instead of redirecting input into sed, allow it to open its own input. Use `cmp -s` instead of `diff` since we only care about whether the two files are equal and `diff` is overkill for this. Signed-off-by: Denton Liu <liu.denton@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-05-08  t/lib-httpd: pass LSAN_OPTIONS through apache  Jeff King  1 file changed, -0/+1
Just as we instruct Apache to pass through ASAN_OPTIONS (so that server-side Git programs it spawns will respect our options while running the tests), we should do the same with LSAN_OPTIONS. Otherwise trying to collect a list of leaks like:

    export LSAN_OPTIONS=exitcode=0:log_path=/tmp/lsan
    make SANITIZE=leak test

won't work for http tests (the server-side programs won't log their leaks to the right place, and they'll prematurely die, producing a spurious test failure).

Signed-off-by: Jeff King <peff@peff.net>
Acked-by: Josh Steadmon <steadmon@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
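As with ASAN_OPTIONS, the Apache side of this is a single pass-through directive (sketch):

    PassEnv LSAN_OPTIONS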
2019-02-06  t5551: test server-side ERR packet  Josh Steadmon  2 files changed, -0/+7
When a smart HTTP server sends an error message via pkt-line, we detect the error due to using PACKET_READ_DIE_ON_ERR_PACKET. This case was added by 2d103c31c2 (pack-protocol.txt: accept error packets in any context, 2018-12-29), but not covered by tests. Signed-off-by: Josh Steadmon <steadmon@google.com> Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-02-05  Merge branch 'jt/fetch-v2-sideband'  Junio C Hamano  1 file changed, -0/+1
"git fetch" and "git upload-pack" learned to send all exchange over the sideband channel while talking the v2 protocol. * jt/fetch-v2-sideband: tests: define GIT_TEST_SIDEBAND_ALL {fetch,upload}-pack: sideband v2 fetch response sideband: reverse its dependency on pkt-line pkt-line: introduce struct packet_writer pack-protocol.txt: accept error packets in any context Use packet_reader instead of packet_read_line
2019-01-17  tests: define GIT_TEST_SIDEBAND_ALL  Jonathan Tan  1 file changed, -0/+1
Define a GIT_TEST_SIDEBAND_ALL environment variable meant to be used from tests. When set to true, this overrides uploadpack.allowsidebandall to true, allowing the entire test suite to be run as if this configuration is in place for all repositories. As of this patch, all tests pass whether GIT_TEST_SIDEBAND_ALL is unset or set to 1. Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-01-10  test: test GIT_CURL_VERBOSE=1 shows an error  Masaya Suzuki  1 file changed, -0/+1
This tests that GIT_CURL_VERBOSE shows an error when a URL returns 500. This exercises the code in remote-curl.

Signed-off-by: Masaya Suzuki <masayasuzuki@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-06-28  upload-pack: test negotiation with changing repository  Brandon Williams  2 files changed, -0/+30
Add tests to check the behavior of fetching from a repository which changes between rounds of negotiation (for example, when different servers in a load-balancing agreement participate in the same stateless RPC negotiation). This forms a baseline of comparison to the ref-in-want functionality (which will be introduced to the client in subsequent commits), and ensures that subsequent commits do not change existing behavior. As part of this effort, a mechanism to substitute strings in a single HTTP response is added. Signed-off-by: Brandon Williams <bmwill@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
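Conceptually, a test arms the substitution by dropping a sed expression next to the served repository and then fetching once; the file and endpoint names below are illustrative rather than the exact test code:

    # munge only the next response, then the hook removes itself
    printf 's/%s/%s/' "$oid_a" "$oid_b" >"$HTTPD_ROOT_PATH/one-time-sed" &&
    git -C client fetch "$HTTPD_URL/one_time_sed/server"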
2018-01-04  http: fix v1 protocol tests with apache httpd < 2.4  Todd Zullinger  1 file changed, -6/+4
The apache config used by tests was updated to use the SetEnvIf directive to set the Git-Protocol header in 19113a26b6 ("http: tell server that the client understands v1", 2017-10-16). Setting the Git-Protocol header is restricted to httpd >= 2.4, but mod_setenvif and the SetEnvIf directive work with lower versions, at least as far back as 2.0, according to the httpd documentation: https://httpd.apache.org/docs/2.0/mod/mod_setenvif.html Drop the restriction. Tested with httpd 2.2 and 2.4. Signed-off-by: Todd Zullinger <tmz@pobox.com> Acked-by: Brandon Williams <bmwill@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-10-17  http: tell server that the client understands v1  Brandon Williams  1 file changed, -0/+7
Tell a server that protocol v1 can be used by sending the http header 'Git-Protocol' with 'version=1' indicating this. Also teach the apache http server to pass through the 'Git-Protocol' header as an environment variable 'GIT_PROTOCOL'. Signed-off-by: Brandon Williams <bmwill@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
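For Apache, the pass-through described here is a single mod_setenvif line of this shape:

    SetEnvIf Git-Protocol ".*" GIT_PROTOCOL=$0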
2017-02-28  http: attempt updating base URL only if no error  Jonathan Tan  1 file changed, -0/+9
http.c supports HTTP redirects of the form http://foo/info/refs?service=git-upload-pack -> http://anything -> http://bar/info/refs?service=git-upload-pack (that is to say, as long as the Git part of the path and the query string is preserved in the final redirect destination, the intermediate steps can have any URL). However, if one of the intermediate steps results in an HTTP exception, a confusing "unable to update url base from redirection" message is printed instead of a Curl error message with the HTTP exception code. This was introduced by 2 commits. Commit c93c92f ("http: update base URLs when we see redirects", 2013-09-28) introduced a best-effort optimization that required checking if only the "base" part of the URL differed between the initial request and the final redirect destination, but it performed the check before any HTTP status checking was done. If something went wrong, the normal code path was still followed, so this did not cause any confusing error messages until commit 6628eb4 ("http: always update the base URL for redirects", 2016-12-06), which taught http to die if the non-"base" part of the URL differed. Therefore, teach http to check the HTTP status before attempting to check if only the "base" part of the URL differed. This commit teaches http_request_reauth to return early without updating options->base_url upon an error; the only invoker of this function that passes a non-NULL "options" is remote-curl.c (through "http_get_strbuf"), which only uses options->base_url for an informational message in the situations that this commit cares about (that is, when the return value is not HTTP_OK). The included test checks that the redirect scheme at the beginning of this commit message works, and that returning a 502 in the middle of the redirect scheme produces the correct result. Note that this is different from the test in commit 6628eb4 ("http: always update the base URL for redirects", 2016-12-06) in that this commit tests that a Git-shaped URL (http://.../info/refs?service=git-upload-pack) works, whereas commit 6628eb4 tests that a non-Git-shaped URL (http://.../info/refs/foo?service=git-upload-pack) does not work (even though Git is processing that URL) and is an error that is fatal, not silently swallowed. Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Acked-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-12-19  Merge branch 'jk/http-walker-limit-redirect-2.9'  Junio C Hamano  1 file changed, -0/+14
Transport with dumb http can be fooled into following foreign URLs that the end user does not intend to, especially with the server side redirects and http-alternates mechanism, which can lead to security issues. Tighten the redirection and make it more obvious to the end user when it happens.

* jk/http-walker-limit-redirect-2.9:
  http: treat http-alternates like redirects
  http: make redirects more obvious
  remote-curl: rename shadowed options variable
  http: always update the base URL for redirects
  http: simplify update_url_from_redirect
2016-12-06  http: make redirects more obvious  Jeff King  1 file changed, -0/+6
We instruct curl to always follow HTTP redirects. This is convenient, but it creates opportunities for malicious servers to create confusing situations. For instance, imagine Alice is a git user with access to a private repository on Bob's server. Mallory runs her own server and wants to access objects from Bob's repository.

Mallory may try a few tricks that involve asking Alice to clone from her, build on top, and then push the result:

  1. Mallory may simply redirect all fetch requests to Bob's server. Git will transparently follow those redirects and fetch Bob's history, which Alice may believe she got from Mallory. The subsequent push seems like it is just feeding Mallory back her own objects, but is actually leaking Bob's objects. There is nothing in git's output to indicate that Bob's repository was involved at all. The downside (for Mallory) of this attack is that Alice will have received Bob's entire repository, and is likely to notice that when building on top of it.

  2. If Mallory happens to know the sha1 of some object X in Bob's repository, she can instead build her own history that references that object. She then runs a dumb http server, and Alice's client will fetch each object individually. When it asks for X, Mallory redirects her to Bob's server. The end result is that Alice obtains objects from Bob, but they may be buried deep in history. Alice is less likely to notice.

Both of these attacks are fairly hard to pull off. There's a social component in getting Mallory to convince Alice to work with her. Alice may be prompted for credentials in accessing Bob's repository (but not always, if she is using a credential helper that caches). Attack (1) requires a certain amount of obliviousness on Alice's part while making a new commit. Attack (2) requires that Mallory knows a sha1 in Bob's repository, that Bob's server supports dumb http, and that the object in question is loose on Bob's server.

But we can probably make things a bit more obvious without any loss of functionality. This patch does two things to that end.

First, when we encounter a whole-repo redirect during the initial ref discovery, we now inform the user on stderr, making attack (1) much more obvious.

Second, the decision to follow redirects is now configurable. The truly paranoid can set the new http.followRedirects to false to avoid any redirection entirely. But for a more practical default, we will disallow redirects only after the initial ref discovery. This is enough to thwart attacks similar to (2), while still allowing the common use of redirects at the repository level. Since c93c92f30 (http: update base URLs when we see redirects, 2013-09-28) we re-root all further requests from the redirect destination, which should generally mean that no further redirection is necessary.

As an escape hatch, in case there really is a server that needs to redirect individual requests, the user can set http.followRedirects to "true" (and this can be done on a per-server basis via http.*.followRedirects config).

Reported-by: Jann Horn <jannh@google.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
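Hedged examples of the resulting knob (the config key and values are the ones named above; the URLs are made up):

    # refuse redirects entirely for a single command
    git -c http.followRedirects=false clone https://example.com/repo.git

    # or re-enable full redirect following for one trusted host
    git config http.https://trusted.example.com/.followRedirects true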
2016-12-06  http: always update the base URL for redirects  Jeff King  1 file changed, -0/+8
If a malicious server redirects the initial ref advertisement, it may be able to leak sha1s from other, unrelated servers that the client has access to. For example, imagine that Alice is a git user, she has access to a private repository on a server hosted by Bob, and Mallory runs a malicious server and wants to find out about Bob's private repository. Mallory asks Alice to clone an unrelated repository from her over HTTP. When Alice's client contacts Mallory's server for the initial ref advertisement, the server issues an HTTP redirect for Bob's server. Alice contacts Bob's server and gets the ref advertisement for the private repository. If there is anything to fetch, she then follows up by asking the server for one or more sha1 objects. But who is the server? If it is still Mallory's server, then Alice will leak the existence of those sha1s to her. Since commit c93c92f30 (http: update base URLs when we see redirects, 2013-09-28), the client usually rewrites the base URL such that all further requests will go to Bob's server. But this is done by textually matching the URL. If we were originally looking for "http://mallory/repo.git/info/refs", and we got pointed at "http://bob/other.git/info/refs", then we know that the right root is "http://bob/other.git". If the redirect appears to change more than just the root, we punt and continue to use the original server. E.g., imagine the redirect adds a URL component that Bob's server will ignore, like "http://bob/other.git/info/refs?dummy=1". We can solve this by aborting in this case rather than silently continuing to use Mallory's server. In addition to protecting from sha1 leakage, it's arguably safer and more sane to refuse a confusing redirect like that in general. For example, part of the motivation in c93c92f30 is avoiding accidentally sending credentials over clear http, just to get a response that says "try again over https". So even in a non-malicious case, we'd prefer to err on the side of caution. The downside is that it's possible this will break a legitimate but complicated server-side redirection scheme. The setup given in the newly added test does work, but it's convoluted enough that we don't need to care about it. A more plausible case would be a server which redirects a request for "info/refs?service=git-upload-pack" to just "info/refs" (because it does not do smart HTTP, and for some reason really dislikes query parameters). Right now we would transparently downgrade to dumb-http, but with this patch, we'd complain (and the user would have to set GIT_SMART_HTTP=0 to fetch). Reported-by: Jann Horn <jannh@google.com> Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-08-08  Merge branch 'ew/git-svn-http-tests'  Junio C Hamano  1 file changed, -2/+2
Tests for "git svn" have been taught to reuse the lib-httpd test infrastructure when testing the subversion integration that interacts with subversion repositories served over the http:// protocol. * ew/git-svn-http-tests: git svn: migrate tests to use lib-httpd t/t91*: do not say how to avoid the tests
2016-07-25  git svn: migrate tests to use lib-httpd  Eric Wong  1 file changed, -2/+2
This allows us to use common test infrastructure and parallelize the tests. For now, GIT_SVN_TEST_HTTPD=true needs to be set to enable the SVN HTTP tests because we reuse the same test cases for both file:// and http:// SVN repositories. SVN_HTTPD_PORT is no longer honored. Tested under Apache 2.2 and 2.4 on Debian 7.x (wheezy) and 8.x (jessie), respectively. Cc: Clemens Buchacher <drizzd@aon.at> Cc: Michael J Gruber <git@drmicha.warpmail.net> Signed-off-by: Eric Wong <e@80x24.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-05-17  Merge branch 'js/http-custom-headers'  Junio C Hamano  1 file changed, -4/+12
Update tests for "http.extraHeaders=<header>" to be portable back to Apache 2.2 (the original depended on <RequireAll/> which is a more recent feature). * js/http-custom-headers: submodule: ensure that -c http.extraheader is heeded t5551: make the test for extra HTTP headers more robust tests: adjust the configuration for Apache 2.2
2016-05-10  tests: adjust the configuration for Apache 2.2  Johannes Schindelin  1 file changed, -4/+12
Lars Schneider noticed that the configuration introduced to test the extra HTTP headers cannot be used with Apache 2.2 (which is still actively maintained, as pointed out by Junio Hamano). To let the tests pass with Apache 2.2 again, let's substitute the offending <RequireAll> and `expr` by using old school RewriteCond statements. As RewriteCond does not allow testing for *non*-matches, we simply match the desired case first and let it pass by marking the RewriteRule as '[L]' ("last rule, do not process any other matching RewriteRules after this"), and then have another RewriteRule that matches all other cases and lets them fail via '[F]' ("fail"). Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de> Tested-by: Lars Schneider <larsxschneider@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
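The resulting 2.2-compatible rules have roughly the following shape; the header names and path are placeholders, not necessarily the committed ones:

    RewriteCond %{HTTP:x-magic-one} =abra
    RewriteCond %{HTTP:x-magic-two} =cadabra
    RewriteRule ^/smart_headers/ - [L]
    RewriteRule ^/smart_headers/ - [F]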
2016-05-06  Merge branch 'js/http-custom-headers'  Junio C Hamano  1 file changed, -0/+8
HTTP transport clients learned to throw extra HTTP headers at the server, specified via http.extraHeader configuration variable. * js/http-custom-headers: http: support sending custom HTTP headers
2016-04-27  http: support sending custom HTTP headers  Johannes Schindelin  1 file changed, -0/+8
We introduce a way to send custom HTTP headers with all requests. This allows us, for example, to send an extra token from build agents for temporary access to private repositories. (This is the use case that triggered this patch.)

This feature can be used like this:

    git -c http.extraheader='Secret: sssh!' fetch $URL $REF

Note that `curl_easy_setopt(..., CURLOPT_HTTPHEADER, ...)` takes only a single list, overriding any previous call. This means we have to collect _all_ of the headers we want to use into a single list, and feed it to cURL in one shot. Since we already unconditionally set a "pragma" header when initializing the curl handles, we can add our new headers to that list.

For callers which override the default header list (like probe_rpc), we provide `http_copy_default_headers()` so they can do the same trick.

Big thanks to Jeff King and Junio Hamano for their outstanding help and patient reviews.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
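Beyond the one-shot -c form shown above, the same header can also be configured persistently; http.extraHeader is multi-valued, so --add can be repeated for additional headers:

    git config --add http.extraHeader "Secret: sssh!"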
2016-04-14  Merge branch 'jk/test-httpd-config-nosystem' into maint  Junio C Hamano  1 file changed, -0/+1
The tests that involve running httpd leaked the system-wide configuration in /etc/gitconfig to the tested environment. * jk/test-httpd-config-nosystem: t/lib-httpd: pass through GIT_CONFIG_NOSYSTEM env
2016-04-06  Merge branch 'jk/test-httpd-config-nosystem'  Junio C Hamano  1 file changed, -0/+1
The tests that involve running httpd leaked the system-wide configuration in /etc/gitconfig to the tested environment. * jk/test-httpd-config-nosystem: t/lib-httpd: pass through GIT_CONFIG_NOSYSTEM env
2016-03-18  t/lib-httpd: pass through GIT_CONFIG_NOSYSTEM env  Jeff King  1 file changed, -0/+1
We set GIT_CONFIG_NOSYSTEM in our test scripts so that we do not accidentally read /etc/gitconfig and have it influence the outcome of the tests. But when running smart-http tests, Apache will clean the environment, including this variable, and the "server" side of our http operations will read it.

You can see this breakage by doing something like:

    make
    ./git config --system http.getanyfile false
    make test

which will cause t5561 to fail when it tests the fallback-to-dumb operation.

We can fix this by instructing Apache to pass through the variable. Unlike with other variables (e.g., 89c57ab3's GIT_TRACE), we don't need to set a dummy value to prevent warnings from Apache. test-lib.sh already makes sure that GIT_CONFIG_NOSYSTEM is set and exported.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-02-25  t/lib-httpd: load mod_unixd  Michael J Gruber  1 file changed, -0/+3
In contrast to apache 2.2, apache 2.4 does not load mod_unixd in its default configuration (because there are choices). Thus, with the current config, apache 2.4.10 will not be started and the httpd tests will not run on distros with default apache config (RedHat type). Enable mod_unixd to make the httpd tests run. This does not affect distros negatively which have that config already in their default (Debian type). httpd tests will run on these before and after this patch. Signed-off-by: Michael J Gruber <git@drmicha.warpmail.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-09-25  http: limit redirection depth  Blake Burkhart  1 file changed, -0/+3
By default, libcurl will follow circular http redirects forever. Let's put a cap on this so that somebody who can trigger an automated fetch of an arbitrary repository (e.g., for CI) cannot convince git to loop infinitely. The value chosen is 20, which is the same default that Firefox uses. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-09-25  http: limit redirection to protocol-whitelist  Blake Burkhart  1 file changed, -0/+1
Previously, libcurl would follow redirection to any protocol it was compiled for support with. This is desirable to allow redirection from HTTP to HTTPS. However, it would even successfully allow redirection from HTTP to SFTP, a protocol that git does not otherwise support at all. Furthermore git's new protocol-whitelisting could be bypassed by following a redirect within the remote helper, as it was only enforced at transport selection time. This patch limits redirects within libcurl to HTTP, HTTPS, FTP and FTPS. If there is a protocol-whitelist present, this list is limited to those also allowed by the whitelist. As redirection happens from within libcurl, it is impossible for an HTTP redirect to a protocol implemented within another remote helper. When the curl version git was compiled with is too old to support restrictions on protocol redirection, we warn the user if GIT_ALLOW_PROTOCOL restrictions were requested. This is a little inaccurate, as even without that variable in the environment, we would still restrict SFTP, etc, and we do not warn in that case. But anything else means we would literally warn every time git accesses an http remote. This commit includes a test, but it is not as robust as we would hope. It redirects an http request to ftp, and checks that curl complained about the protocol, which means that we are relying on curl's specific error message to know what happened. Ideally we would redirect to a working ftp server and confirm that we can clone without protocol restrictions, and not with them. But we do not have a portable way of providing an ftp server, nor any other protocol that curl supports (https is the closest, but we would have to deal with certificates). [jk: added test and version warning] Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
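The whitelist itself comes from the GIT_ALLOW_PROTOCOL environment variable, a colon-separated list of allowed schemes; for example (URL made up):

    GIT_ALLOW_PROTOCOL=http:https git clone https://example.com/repo.git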
2015-03-12  t: pass GIT_TRACE through Apache  Jeff King  1 file changed, -0/+1
Apache removes GIT_TRACE from the environment before running git-http-backend. This can make it hard to debug the server side of an http session. Let's let it through. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-12-11  t: support clang/gcc AddressSanitizer  Jeff King  1 file changed, -0/+1
When git is compiled with "-fsanitize=address" (using clang or gcc >= 4.8), all invocations of git will check for buffer overflows. This is similar to running with valgrind, except that it is more thorough (because of the compiler support, function-local buffers can be checked, too) and runs much faster (making it much less painful to run the whole test suite with the checks turned on).

Unlike valgrind, the magic happens at compile-time, so we don't need the same infrastructure in the test suite that we did to support --valgrind. But there are two things we can help with:

  1. On some platforms, the leak-detector is on by default, and causes every invocation of "git init" (and thus every test script) to fail. Since running git with the leak detector is pointless, let's shut it off automatically in the tests, unless the user has already configured it.

  2. When apache runs a CGI, it clears the environment of unknown variables. This means that the $ASAN_OPTIONS config doesn't make it to git-http-backend, and it dies due to the leak detector. Let's mark the variable as OK for apache to pass.

With these two changes, running

    make CC=clang CFLAGS=-fsanitize=address test

works out of the box.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-09-17  signed push: teach smart-HTTP to pass "git push --signed" around  Junio C Hamano  1 file changed, -0/+1
The "--signed" option received by "git push" is first passed to the transport layer, which the native transport directly uses to notice that a push certificate needs to be sent. When the transport-helper is involved, however, the option needs to be told to the helper with set_helper_option(), and the helper needs to take necessary action. For the smart-HTTP helper, the "necessary action" involves spawning the "git send-pack" subprocess with the "--signed" option. Once the above all gets wired in, the smart-HTTP transport now can use the push certificate mechanism to authenticate its pushes. Add a test that is modeled after tests for the native transport in t5534-push-signed.sh to t5541-http-push-smart.sh. Update the test Apache configuration to pass GNUPGHOME environment variable through. As PassEnv would trigger warnings for an environment variable that is not set, export it from test-lib.sh set to a harmless value when GnuPG is not being used in the tests. Note that the added test is deliberately loose and does not check the nonce in this step. This is because the stateless RPC mode is inevitably flaky and a nonce that comes back in the actual push processing is one issued by a different process; if the two interactions with the server crossed a second boundary, the nonces will not match and such a check will fail. A later patch in the series will work around this shortcoming. Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-06-17http: fix charset detection of extract_content_type()Yi EungJun1-0/+4
extract_content_type() could not extract a charset parameter if the parameter is not the first one and there is whitespace and a following semicolon just before the parameter. For example:

	text/plain; format=fixed ;charset=utf-8

It also could not correctly handle some other cases, such as:

	text/plain; charset=utf-8; format=fixed
	text/plain; some-param="a long value with ;semicolons;"; charset=utf-8

Thanks-to: Jeff King <peff@peff.net> Signed-off-by: Yi EungJun <eungjun.yi@navercorp.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-05-27remote-curl: reencode http error messagesJeff King1-0/+4
We currently recognize an error message with a content-type "text/plain; charset=utf-16" as text, but we ignore the charset parameter entirely. Let's encode it to log_output_encoding, which is presumably something the user's terminal can handle. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-05-27http: extract type/subtype portion of content-typeJeff King1-1/+7
When we get a content-type from curl, we get the whole header line, including any parameters, and without any normalization (like downcasing or whitespace removal) applied. If we later try to match it with strcmp() or even strcasecmp(), we may get false negatives. This could cause two visible behaviors: 1. We might fail to recognize a smart-http server by its content-type. 2. We might fail to relay text/plain error messages to users (especially if they contain a charset parameter). This patch teaches the http code to extract and normalize just the type/subtype portion of the string. This is technically passing out less information to the callers, who can no longer see the parameters. But none of the current callers cares, and a future patch will add back an easier-to-use method for accessing those parameters. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
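Conceptually (this is not the actual C code), the normalization keeps only the token before the first ';', drops whitespace, and downcases the result:

	printf 'TEXT/Plain; charset=UTF-8\n' |
	cut -d';' -f1 | tr -d ' \t' | tr 'A-Z' 'a-z'
	# prints: text/plain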
2014-05-23t5550: test display of remote http error messagesJeff King2-0/+21
Since commit 426e70d (remote-curl: show server content on http errors, 2013-04-05), we relay any text/plain error messages from the remote server to the user. However, we never tested it. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-05-23t/lib-httpd: use write_script to copy CGI scriptsJeff King1-1/+0
Using write_script will set our shebang line appropriately with $SHELL_PATH. The script that is there now is quite simple and likely to succeed even with a non-POSIX /bin/sh, but it does not hurt to be defensive. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
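For example, an auxiliary CGI can be generated like this (the file name and body are just an illustration); write_script prepends the "#!$SHELL_PATH" line and marks the result executable:

	write_script "$HTTPD_ROOT_PATH/error.sh" <<-\EOF
		echo "Status: 500 Intentional Breakage"
		echo "Content-Type: text/plain"
		echo
		echo "this is the error message"
	EOF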
2014-01-02use distinct username/password for http auth testsJeff King1-1/+1
The httpd server we set up to test git's http client code knows about a single account, in which both the username and password are "user@host" (the unusual use of the "@" here is to verify that we handle the character correctly when URL escaped). This means that we may miss a certain class of errors in which the username and password are mixed up internally by git. We can make our tests more robust by having distinct values for the username and password. In addition to tweaking the server passwd file and the client URL, we must teach the "askpass" harness to accept multiple values. As a bonus, this makes the setup of some tests more obvious; when we are expecting git to ask only about the password, we can seed the username askpass response with a bogus value. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
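A sketch of the sort of askpass harness this implies (the file names and canned values are illustrative, not the ones the test suite really uses):

	write_script askpass <<-\EOF
		echo "askpass: $*" >>"$TRASH_DIRECTORY/askpass-query"
		case "$*" in
		*Username*) echo bogus-user ;;
		*) echo pass@host ;;
		esac
	EOF
	GIT_ASKPASS="$PWD/askpass" && export GIT_ASKPASS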
2013-10-30Merge branch 'jk/http-auth-redirects'Junio C Hamano1-0/+2
Better handle the case where the http transport gets redirected during the authorization request.

* jk/http-auth-redirects:
  http.c: Spell the null pointer as NULL
  remote-curl: rewrite base url from info/refs redirects
  remote-curl: store url as a strbuf
  remote-curl: make refs_url a strbuf
  http: update base URLs when we see redirects
  http: provide effective url to callers
  http: hoist credential request out of handle_curl_result
  http: refactor options to http_get_*
  http_request: factor out curlinfo_strbuf
  http_get_file: style fixes
2013-10-14remote-curl: rewrite base url from info/refs redirectsJeff King1-0/+2
For efficiency and security reasons, an earlier commit in this series taught http_get_* to re-write the base url based on redirections we saw while making a specific request. This commit wires that option into the info/refs request, meaning that a redirect from http://example.com/foo.git/info/refs to https://example.com/bar.git/info/refs will behave as if "https://example.com/bar.git" had been provided to git in the first place. The tests bear some explanation. We introduce two new hierarchies into the httpd test config: 1. Requests to /smart-redir-limited will work only for the initial info/refs request, but not any subsequent requests. As a result, we can confirm whether the client is re-rooting its requests after the initial contact, since otherwise it will fail (it will ask for "repo.git/git-upload-pack", which is not redirected). 2. Requests to /smart-redir-auth will redirect, and require auth after the redirection. Since we are using the redirected base for further requests, we also update the credential struct, in order not to mislead the user (or credential helpers) about which credential is needed. We can therefore check the GIT_ASKPASS prompts to make sure we are prompting for the new location. Because we have neither multiple servers nor https support in our test setup, we can only redirect between paths, meaning we need to turn on credential.useHttpPath to see the difference. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
2013-07-30http: add http.savecookies option to write out HTTP cookiesDave Borowitz1-0/+8
HTTP servers may send Set-Cookie headers in a response and expect them to be set on subsequent requests. By default, libcurl's behavior is to store such cookies in memory and reuse them across requests within a single session. However, it may also make sense, depending on the server and the cookies, to store them across sessions. Provide users with an option to enable this behavior, writing cookies out to the same file specified in http.cookiefile. Signed-off-by: Dave Borowitz <dborowitz@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
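Typical usage of the new option looks like this (the cookie file path is whatever the user prefers):

	git config http.cookiefile "$HOME/.cache/git-cookies"
	git config http.savecookies true
	# cookies received during this fetch are now written back to the file
	git fetch origin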
2013-06-23Merge branch 'jk/apache-test-for-2.4'Junio C Hamano1-1/+19
* jk/apache-test-for-2.4:
  lib-httpd/apache.conf: check version only after mod_version loads
  t/lib-httpd/apache.conf: configure an MPM module for apache 2.4
  t/lib-httpd/apache.conf: load compat access module in apache 2.4
  t/lib-httpd/apache.conf: load extra auth modules in apache 2.4
  t/lib-httpd/apache.conf: do not use LockFile in apache >= 2.4
2013-06-21lib-httpd/apache.conf: check version only after mod_version loadsJeff King1-3/+4
Commit 0442743 introduced an <IfVersion> directive near the top of the apache config file. However, at that point we have not yet checked for and loaded the mod_version module. This means that the directive will behave oddly if mod_version is dynamically loaded, failing to match when it should. We can fix this by moving the whole block below the LoadModule directive for mod_version. Reported-by: Brian Gernhardt <brian@gernhardtsoftware.com> Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-06-14t/lib-httpd/apache.conf: configure an MPM module for apache 2.4Jeff King1-0/+3
Versions of Apache before 2.4 always had a "MultiProcessing Module" (MPM) statically built in, which manages the worker threads/processes. We do not care which one, as it is largely a performance issue, and we put only a light load on the server during our testing. As of Apache 2.4, the MPM module is loadable just like any other module, but exactly one such module must be loaded. On a system where the MPMs are compiled dynamically (e.g., Debian unstable), this means that our test Apache server will not start unless we provide the appropriate configuration. Unfortunately, we do not actually know which MPM modules are available or appropriate for the system on which the tests are running. This patch picks the "prefork" module, as it is likely to be available on all Unix-like systems. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-06-14t/lib-httpd/apache.conf: load compat access module in apache 2.4Jeff King1-0/+3
In apache 2.4, the "Order" directive has gone away in favor of a new system in mod_authz_host. However, since we want our config file to remain compatible across multiple Apache versions, we can use mod_access_compat to keep using the older style. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-06-14t/lib-httpd/apache.conf: load extra auth modules in apache 2.4Jeff King1-0/+9
In apache 2.4, the "Auth*" and "Require" directives have moved into the authn_core and authz_core modules, respectively. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-06-14t/lib-httpd/apache.conf: do not use LockFile in apache >= 2.4Jeff King1-0/+2
The LockFile directive from earlier versions of apache has been replaced by the Mutex directive. The latter seems to give sane defaults and does not need any specific customization, so we can get away with just adding a version check to the use of LockFile. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-04-21Merge branch 'jk/doc-http-backend'Junio C Hamano1-0/+18
Improve documentation to illustrate "push authenticated, fetch anonymous" configuration for smart HTTP servers.

* jk/doc-http-backend:
  doc/http-backend: match query-string in apache half-auth example
  doc/http-backend: give some lighttpd config examples
  doc/http-backend: clarify "half-auth" repo configuration
2013-04-13doc/http-backend: match query-string in apache half-auth exampleJeff King1-0/+18
When setting up a "half-auth" repository in which reads can be done anonymously but writes require authentication, it is best if the server can require authentication for both the ref advertisement and the actual receive-pack POSTs. This alleviates the need for the admin to set http.receivepack in the repositories, and means that the client is challenged for credentials immediately, instead of partway through the push process (and git clients older than v1.7.11.7 had trouble handling these challenges). Since detecting a push during the ref advertisement requires matching the query string, and this is non-trivial to do in Apache, we have traditionally punted and instructed users to just protect "/git-receive-pack$". This patch provides the mod_rewrite recipe to actually match the ref advertisement, which is preferred. While we're at it, let's add the recipe to our test scripts so that we can be sure that it works, and doesn't get broken (either by our changes or by changes in Apache). Signed-off-by: Jeff King <peff@peff.net> Acked-by: Jakub Narębski <jnareb@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-04-09http-backend: respect GIT_NAMESPACE with dumb clientsJohn Koleszar1-0/+5
Filter the list of refs returned via the dumb HTTP protocol according to the active namespace, consistent with other clients of the upload-pack service. Signed-off-by: John Koleszar <jkoleszar@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-02-04Verify Content-Type from smart HTTP serversShawn Pearce2-0/+15
Before parsing a suspected smart-HTTP response, verify that the returned Content-Type matches the standard. This protects a client from attempting to process a payload that merely smells like a smart-HTTP server response but is not actually one. JGit has been doing this check on all responses since the dawn of time. I mistakenly failed to include it in git-core when smart HTTP was introduced. At the time I didn't know how to get the Content-Type from libcurl. I punted, meant to circle back and fix this, and just plain forgot about it. Signed-off-by: Shawn Pearce <spearce@spearce.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-11-20Merge branch 'jk/maint-http-half-auth-fetch'Junio C Hamano1-0/+7
Fixes fetch from servers that ask for auth only during the actual packing phase. This is not really a recommended configuration, but supporting it cleans up the code at the same time.

* jk/maint-http-half-auth-fetch:
  remote-curl: retry failed requests for auth even with gzip
  remote-curl: hoist gzip buffer size to top of post_rpc
2012-10-31remote-curl: retry failed requests for auth even with gzipJeff King1-0/+7
Commit b81401c taught the post_rpc function to retry the http request after prompting for credentials. However, it did not handle two cases: 1. If we have a large request, we do not retry. That's OK, since we would have sent a probe (with retry) already. 2. If we are gzipping the request, we do not retry. That was considered OK, because the intended use was for push (e.g., listing refs is OK, but actually pushing objects is not), and we never gzip on push. This patch teaches post_rpc to retry even a gzipped request. This has two advantages: 1. It is possible to configure a "half-auth" state for fetching, where the set of refs and their sha1s are advertised, but one cannot actually fetch objects. This is not a recommended configuration, as it leaks some information about what is in the repository (e.g., an attacker can try brute-forcing possible content in your repository and checking whether it matches your branch sha1). However, it can be slightly more convenient, since a no-op fetch will not require a password at all. 2. It future-proofs us should we decide to ever gzip more requests. Signed-off-by: Jeff King <peff@peff.net>
2012-09-07Merge branch 'jk/maint-http-half-auth-push'Junio C Hamano1-10/+15
Pushing to a smart HTTP server with recent Git fails without the username in the URL to force authentication, if the server is configured to allow GET anonymously while requiring authentication for POST.

* jk/maint-http-half-auth-push:
  http: prompt for credentials on failed POST
  http: factor out http error code handling
  t: test http access to "half-auth" repositories
  t: test basic smart-http authentication
  t/lib-httpd: recognize */smart/* repos as smart-http
  t/lib-httpd: only route auth/dumb to dumb repos
  t5550: factor out http auth setup
  t5550: put auth-required repo in auth/dumb
2012-08-27t: test http access to "half-auth" repositoriesJeff King1-0/+7
Some sites set up http access to repositories such that fetching is anonymous and unauthenticated, but pushing is authenticated. While there are multiple ways to do this, the technique advertised in the git-http-backend manpage is to block access to locations matching "/git-receive-pack$". Let's emulate that advice in our test setup, which makes it clear that this advice does not actually work. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
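In Apache terms, the advertised technique amounts to something like this (the location prefix is an example and the auth directives are abbreviated):

	<LocationMatch "^/half-auth/.*/git-receive-pack$">
		AuthType Basic
		AuthName "git repository"
		AuthUserFile passwd
		Require valid-user
	</LocationMatch>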
2012-08-27t/lib-httpd: recognize */smart/* repos as smart-httpJeff King1-9/+7
We do not currently test authentication for smart-http repos at all. Part of the infrastructure to do this is recognizing that auth/smart is indeed a smart-http repo. The current apache config recognizes only "^/smart/*" as smart-http. Let's instead treat anything with /smart/ in the URL as smart-http. This is obviously a stupid thing to do for a real production site, but for our test suite we know that our repositories will not have this magic string in the name. Note that we will route /foo/smart/bar.git directly to git-http-backend/bar.git; in other words, everything before the "/smart/" is irrelevant to finding the repo on disk (but may impact apache config, for example by triggering auth checks). Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-08-27t/lib-httpd: only route auth/dumb to dumb reposJeff King1-1/+1
Our test apache config points all of auth/ directly to the on-disk repositories via an Alias directive. This works fine because everything authenticated is currently in auth/dumb, which is a subset. However, this would conflict with a ScriptAlias for auth/smart (which will come in future patches), so let's narrow the Alias. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-07-24t/lib-httpd: handle running under --valgrindJeff King1-1/+4
Running the http tests with valgrind does not work for two reasons: 1. Apache complains about following the symbolic link from git-http-backend to valgrind.sh. 2. Apache does not pass through the GIT_VALGRIND variable to the backend CGI. This patch fixes both problems. Unfortunately, we need a slight hack to handle passing environment variables through Apache. If we just tell it "PassEnv GIT_VALGRIND", then Apache will complain when GIT_VALGRIND is not set. If we try "SetEnv GIT_VALGRIND ${GIT_VALGRIND}", then when GIT_VALGRIND is not set, it will pass through the literal "${GIT_VALGRIND}". Instead, we now unconditionally pass through GIT_VALGRIND from lib-httpd.sh into apache, even if it is empty. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-03-30http-backend: respect existing GIT_COMMITTER_* variablesJeff King1-0/+7
The http-backend program sets default GIT_COMMITTER_NAME and GIT_COMMITTER_EMAIL variables based on the REMOTE_USER and REMOTE_ADDR variables provided by the webserver. However, it unconditionally overwrites any existing GIT_COMMITTER variables, which may have been customized by site-specific code in the webserver (or in a script wrapping http-backend). Let's leave those variables intact if they already exist, assuming that any such configuration was intentional. There is a slight chance of a regression if somebody has set GIT_COMMITTER_* for the entire webserver, not intending it to leak through http-backend. We could protect against this by passing the information in alternate variables. However, it seems unlikely that anyone will care about that regression, and there is value in the simplicity of using the common variable names that are used elsewhere in git. While we're tweaking the environment-handling in http-backend, let's switch it to use argv_array to handle the list of variables. That makes the memory management much simpler. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
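In shell terms, the new behavior is the classic "set only if unset" idiom; this is only an analogy for what the C code does, not the code itself:

	# keep whatever the webserver (or a wrapper script) already provided
	: "${GIT_COMMITTER_NAME:=$REMOTE_USER}"
	export GIT_COMMITTER_NAME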
2011-12-13t5540: test DAV push with authenticationJeff King1-0/+3
We don't currently test this case at all, and instead just test the DAV mechanism over an unauthenticated push. That isn't very realistic, as most people will want to authenticate pushes. Two of the tests are marked expect_failure, as they reveal bugs: 1. Pushing without a username in the URL fails to ask for credentials when we get an HTTP 401. This has always been the case, but it would be nice if it worked like smart-http. 2. Pushing with a username fails to ask for the password since 986bbc0 (http: don't always prompt for password, 2011-11-04). This is a severe regression in v1.7.8, as authenticated push-over-DAV is now totally unusable unless you have credentials in your .netrc. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-12-08Merge branch 'gc/http-with-non-ascii-username-url'Junio C Hamano2-0/+30
* gc/http-with-non-ascii-username-url:
  Fix username and password extraction from HTTP URLs
  t5550: test HTTP authentication and userinfo decoding

Conflicts:
	t/lib-httpd/apache.conf
2010-11-17t5550: test HTTP authentication and userinfo decodingGabriel Corona2-0/+30
Add a test for HTTP authentication and proper percent-decoding of the userinfo (username and password) part of the URL. Signed-off-by: Gabriel Corona <gabriel.corona@enst-bretagne.fr> Acked-by: Tay Ray Chuan <rctay89@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
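A sketch of the kind of URL being tested (the repository path and credentials are illustrative); both '@' characters in the userinfo arrive as %40 and must be decoded before use:

	git clone "http://user%40host:user%40host@$HTTPD_DEST/auth/repo.git"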
2010-09-27smart-http: Don't change POST to GET when following redirectTay Ray Chuan1-0/+7
For a long time (29508e1 "Isolate shared HTTP request functionality", Fri Nov 18 11:02:58 2005), we've followed HTTP redirects with CURLOPT_FOLLOWLOCATION. However, when the remote HTTP server returns a redirect, libcurl's default action is to change the POST request into a GET request while following it, which the remote http-backend does not expect. Fix this by telling libcurl to always keep the request as type POST with CURLOPT_POSTREDIR. For users of libcurl older than 7.19.1, use CURLOPT_POST301 instead, which preserves POST only across 301 redirects rather than across both 301s and 302s. Signed-off-by: Andreas Schwab <schwab@linux-m68k.org> Signed-off-by: Tay Ray Chuan <rctay89@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
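For illustration, the command-line analogue of the new behavior (git itself sets the corresponding CURLOPT_POSTREDIR or CURLOPT_POST301 option from C):

	# keep the request a POST across 301/302 responses instead of letting
	# the redirect silently turn it into a GET
	curl --post301 --post302 --data-binary @request.bin \
		"$HTTPD_URL/smart/repo.git/git-upload-pack"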
2010-01-06Smart-http: check if repository is OK to export before serving itTarmigan Casebolt1-0/+5
Similar to how git-daemon checks whether a repository is OK to be exported, smart-http should also check. This check can be satisfied in two different ways: the environment variable GIT_HTTP_EXPORT_ALL may be set to export all repositories, or the individual repository may have the file git-daemon-export-ok. Acked-by: Shawn O. Pearce <spearce@spearce.org> Signed-off-by: Tarmigan Casebolt <tarmigan+git@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
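In test terms a repository is made reachable either per repository or globally (paths follow lib-httpd's usual layout):

	# per-repository opt-in ...
	touch "$HTTPD_DOCUMENT_ROOT_PATH/repo.git/git-daemon-export-ok"
	# ... or, alternatively, the server environment can carry
	# GIT_HTTP_EXPORT_ALL (e.g. via a SetEnv directive in apache.conf)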
2009-11-04test smart http fetch and pushShawn O. Pearce1-0/+17
The top level directory "/smart/" of the test Apache server is mapped through our git-http-backend CGI, but uses the same underlying repository space as the server's document root. This is the most simple installation possible. Server logs are checked to verify the client has accessed only the smart URLs during the test. During fetch testing the headers are also logged from libcurl to ensure we are making a reasonably sane HTTP request, and getting back reasonably sane response headers from the CGI. When validating the request headers used during smart fetch we munge away the actual Content-Length and replace it with the placeholder "xxx". This avoids unnecessary varability in the test caused by an unrelated change in the requested capabilities in the first want line of the request. However, we still want to look for and verify that Content-Length was used, because smaller payloads should be using Content-Length and not "Transfer-Encoding: chunked". When validating the server response headers we must discard both Content-Length and Transfer-Encoding, as Apache2 can use either format to return our response. During development of this test I observed Apache returning both forms, depending on when the processes got CPU time. If our CGI returned the pack data quickly, Apache just buffered the whole thing and returned a Content-Length. If our CGI took just a bit too long to complete, Apache flushed its buffer and instead used "Transfer-Encoding: chunked". Signed-off-by: Shawn O. Pearce <spearce@spearce.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-11-04http tests: use /dumb/ URL prefixShawn O. Pearce1-1/+6
To clarify what part of the HTTP transport is being tested, we change the URLs used by existing tests to include /dumb/ at the start, indicating that they use the non-Git-aware code paths. Signed-off-by: Shawn O. Pearce <spearce@spearce.org> CC: Tay Ray Chuan <rctay89@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-03-20http tests: Darwin is not that specialJunio C Hamano1-6/+1
We already have a PidFile definition in the file, and we recently added the necessary LoadModule for log_config_module. This patch ends up giving LockFile to everybody, not just Darwin, but why not? Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-03-11test: do not LoadModule log_config_module unconditionallyJohannes Schindelin1-1/+3
The LoadModule directive for log_config_module will not work if the module is built in. Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-03-10Include log_config module in apache.confDaniel Barkalow1-0/+1
The log_config module is needed for at least some versions of apache to support the LogFormat directive. Signed-off-by: Daniel Barkalow <barkalow@iabervon.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-02-25Allow HTTP tests to run on DarwinJay Soffian1-0/+6
This patch allows the HTTP tests to run on OS X 10.5. It is not sufficient to be able to pass in LIB_HTTPD_PATH and LIB_HTTPD_MODULE_PATH alone, as the apache.conf also needs a couple of tweaks. These changes are put into an <IfDefine> to keep them Darwin-specific, but this means lib-httpd.sh needs to be modified to pass -DDarwin to apache when running on Darwin. As long as we're making this change to lib-httpd.sh, we may as well set LIB_HTTPD_PATH and LIB_HTTPD_MODULE_PATH to appropriate default values for the platform. Note that we now pass HTTPD_PARA to apache at shutdown as well. Otherwise apache will emit a harmless but noisy warning that LogFormat is an unknown directive. Signed-off-by: Jay Soffian <jaysoffian@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-01-17http-push: when making directories, have a trailing slash in the path nameJohannes Schindelin1-0/+2
The function lock_remote() sends MKCOL requests to make leading directories. However, if it does not put a forward slash '/' at the end of the path, the server sends a 301 redirect. By leaving the '/' in place, we can avoid this additional step. Incidentally, at least one version of Curl (7.16.3) does not resend credentials when it follows a 301 redirect, so this commit also fixes a bug. Original patch by Tay Ray Chuan <rctay89@gmail.com>. Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2008-07-08Avoid apache complaining about lack of server's FQDNMike Hommey1-0/+1
On some setups, apache will say: "apache2: Could not reliably determine the server's fully qualified domain name, using $(IP_address) for ServerName". Avoid this message polluting the test output by setting a ServerName in the apache configuration. Signed-off-by: Mike Hommey <mh@glandium.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2008-02-27http-push: add regression testsClemens Buchacher2-0/+42
http-push tests require a web server with WebDAV support. This commit introduces an HTTPD test library, which can be configured using the following environment variables:

	GIT_TEST_HTTPD         enable HTTPD tests
	LIB_HTTPD_PATH         web server path
	LIB_HTTPD_MODULE_PATH  web server modules path
	LIB_HTTPD_PORT         listening port
	LIB_HTTPD_DAV          enable DAV
	LIB_HTTPD_SVN          enable SVN
	LIB_HTTPD_SSL          enable SSL

Signed-off-by: Clemens Buchacher <drizzd@aon.at> Signed-off-by: Junio C Hamano <gitster@pobox.com>
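Running the new tests against a local Apache then looks roughly like this; the paths and the test script name depend on the platform and are only examples:

	GIT_TEST_HTTPD=1 \
	LIB_HTTPD_PATH=/usr/sbin/apache2 \
	LIB_HTTPD_MODULE_PATH=/usr/lib/apache2/modules \
	LIB_HTTPD_PORT=8111 \
	LIB_HTTPD_DAV=1 \
	./t5540-http-push.sh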