path: root/remote-curl.c
Age | Commit message | Author | Files | Lines
2022-04-04Merge branch 'rc/fetch-refetch'Junio C Hamano1-0/+6
"git fetch --refetch" learned to fetch everything without telling the other side what we already have, which is useful when you cannot trust what you have in the local object store. * rc/fetch-refetch: docs: mention --refetch fetch option fetch: after refetch, encourage auto gc repacking t5615-partial-clone: add test for fetch --refetch fetch: add --refetch option builtin/fetch-pack: add --refetch option fetch-pack: add refetch fetch-negotiator: add specific noop initializer
2022-03-28builtin/fetch-pack: add --refetch optionRobert Coup1-0/+6
Add a refetch option to fetch-pack to force a full fetch. Use when applying a new partial clone filter to refetch all matching objects. Signed-off-by: Robert Coup <robert@coup.net.nz> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-03-04remote-curl.c: free memory in cmd_main()Ævar Arnfjörð Bjarmason1-4/+8
Plug a trivial memory leak in code added in a2d725b7bdf (Use an external program to implement fetching with curl, 2009-08-05). To do this, have cmd_main() use a "goto cleanup" pattern and return an error of 1 unless we can fall through to the http_cleanup() at the end.

Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-11-25run-command API users: use strvec_pushv(), not argv assignmentÆvar Arnfjörð Bjarmason1-1/+1
Migrate those run-command API users that assign directly to the "argv" member to use a strvec_pushv() of "args" instead. In these cases it did not make sense to further refactor these callers; e.g. daemon.c could be made to construct the arguments closer to handle(), but that would require moving the construction from its cmd_main() and passing "argv" through two intermediate functions.

It would be possible for a change like this to introduce a regression if we were doing:

    cp.argv = argv;
    argv[1] = "foo";

and changed the code, as is being done here, to:

    strvec_pushv(&cp.args, argv);
    argv[1] = "foo";

But as viewing this change with the "-W" flag reveals, none of these functions modify the variable that's being pushed afterwards in a way that would introduce such a logic error.

Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-10-25Merge branch 'ab/pkt-line-cleanup'Junio C Hamano1-1/+1
Code clean-up.

* ab/pkt-line-cleanup:
  pkt-line.[ch]: remove unused packet_read_line_buf()
  pkt-line.[ch]: remove unused packet_buf_write_len()
2021-10-15pkt-line.[ch]: remove unused packet_read_line_buf()Ævar Arnfjörð Bjarmason1-1/+1
This function was added in 4981fe750b1 (pkt-line: share buffer/descriptor reading implementation, 2013-02-23), but in 01f9ec64c8a (Use packet_reader instead of packet_read_line, 2018-12-29) the code that was using it was removed.

Since it's being removed we can in turn remove the "src" and "src_len" arguments to packet_read(); all the remaining users just passed a NULL/NULL pair to it. That function is only a thin wrapper for packet_read_with_status(), which still needs those arguments, but for the thin packet_read() convenience wrapper we can do away with them for now.

Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-10-11Merge branch 'ab/http-pinned-public-key-mismatch'Junio C Hamano1-0/+4
HTTPS error handling updates. * ab/http-pinned-public-key-mismatch: http: check CURLE_SSL_PINNEDPUBKEYNOTMATCH when emitting errors
2021-09-28string-list.[ch]: remove string_list_init() compatibility functionÆvar Arnfjörð Bjarmason1-2/+2
Remove this function, left over to accommodate in-flight changes; see 770fedaf9fb (string-list.[ch]: add a string_list_init_{nodup,dup}(), 2021-07-01) for the recent change to add "string_list_init_{nodup,dup}()" initializers.

There was only one user of the old API left in remote-curl.c. I don't know why I didn't include this change to remote-curl.c in bc40dfb10a0 (string-list.h users: change to use *_{nodup,dup}(), 2021-07-01); perhaps I just missed it.

In any case, let's change that one user to use the new API. As of writing this there are no in-flight changes that use the compatibility function, so this seems like a good time to drop it before we get any new users.

Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
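For illustration, the call being converted looks roughly like this (a sketch; the variable name and option string are made up, only the string_list API calls are real):

    struct string_list push_options;

    string_list_init_dup(&push_options);      /* was: string_list_init(&push_options, 1) */
    string_list_append(&push_options, "ci.skip");
    string_list_clear(&push_options, 0);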
2021-09-27http: check CURLE_SSL_PINNEDPUBKEYNOTMATCH when emitting errorsÆvar Arnfjörð Bjarmason1-0/+4
Change the error shown when an http.pinnedPubkey doesn't match to point at the http.pinnedPubkey variable added in aeff8a61216 (http: implement public key pinning, 2016-02-15), e.g.:

    git -c http.pinnedPubKey=sha256/someNonMatchingKey ls-remote https://github.com/git/git.git
    fatal: unable to access 'https://github.com/git/git.git/' with http.pinnedPubkey configuration: SSL: public key does not match pinned public key!

Before this we'd emit the exact same thing without the " with http.pinnedPubkey configuration". The advantage of doing this is that we're going to get a translated message (everything after the ":" is hardcoded in English in libcurl), and we've got a reference to the git-specific configuration variable that's causing the error.

Unfortunately we can't test this easily, as there are no tests that require https:// in the test suite, and t/lib-httpd.sh doesn't know how to set up such tests. See [1] for the start of a discussion about what it would take to have divergent "t/lib-httpd/apache.conf" test setups. #leftoverbits

1. https://lore.kernel.org/git/YUonS1uoZlZEt+Yd@coredump.intra.peff.net/

Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
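For context, the libcurl side of public key pinning looks roughly like this (a sketch, not the actual http.c code; the "curl" handle and "url" string are assumed to be set up elsewhere):

    curl_easy_setopt(curl, CURLOPT_PINNEDPUBLICKEY, "sha256/someNonMatchingKey");
    res = curl_easy_perform(curl);
    if (res == CURLE_SSL_PINNEDPUBKEYNOTMATCH)
            /* the case the reworded message above is aimed at */
            die("unable to access '%s' with http.pinnedPubkey configuration: %s",
                url, curl_easy_strerror(res));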
2021-08-24Merge branch 'ab/http-drop-old-curl'Junio C Hamano1-9/+2
Support for ancient versions of the cURL library (pre 7.19.4) has been dropped.

* ab/http-drop-old-curl:
  http: rename CURLOPT_FILE to CURLOPT_WRITEDATA
  http: drop support for curl < 7.19.3 and < 7.17.0 (again)
  http: drop support for curl < 7.19.4
  http: drop support for curl < 7.16.0
  http: drop support for curl < 7.11.1
2021-07-30http: rename CURLOPT_FILE to CURLOPT_WRITEDATAÆvar Arnfjörð Bjarmason1-2/+2
The CURLOPT_FILE name is an alias for CURLOPT_WRITEDATA; the CURLOPT_WRITEDATA name has been preferred since curl 7.9.7, released in May 2002[1].

1. https://curl.se/libcurl/c/CURLOPT_WRITEDATA.html

Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-07-30http: drop support for curl < 7.16.0Jeff King1-4/+0
In the last commit we dropped support for curl < 7.11.1; let's continue that and drop support for versions older than 7.16.0. This allows us to get rid of some now-obsolete #ifdefs.

Choosing 7.16.0 is a somewhat arbitrary cutoff:

  1. It came out in October of 2006, almost 15 years ago. Besides being a nice round number, around 10 years is a common end-of-life support period, even for conservative distributions.

  2. That version introduced the curl_multi interface, which gives us a lot of bang for the buck in removing #ifdefs.

RHEL 5 came with curl 7.15.5[1] (released in August 2006). RHEL 5's extended life cycle program ended on 2020-11-30[1]. RHEL 6 comes with curl 7.19.7 (released in November 2009), and RHEL 7 comes with 7.29.0 (released in February 2013).

1. http://lore.kernel.org/git/873e1f31-2a96-5b72-2f20-a5816cad1b51@jupiterrise.com

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-07-30http: drop support for curl < 7.11.1Jeff King1-3/+0
Drop support for this ancient version of curl and simplify the code by allowing us to get rid of some "#ifdef"s.

Git will not build with vanilla curl older than 7.11.1 due to our use of CURLOPT_POSTFIELDSIZE in 37ee680d9b (http.postbuffer: allow full range of ssize_t values, 2017-04-11). This field was introduced in curl 7.11.1.

We could solve these compilation problems with more #ifdefs, but it's not worth the trouble. Version 7.11.1 came out in March of 2004, over 17 years ago. Let's declare that too old and drop any existing ifdefs that go further back. One obvious benefit is that we'll have fewer conditional bits cluttering the code.

This patch drops all #ifdefs that reference older versions (note that curl's preprocessor macros are in hex, so we're looking for 070b01, not 071101).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
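The shape of the clean-up is dropping guards like the following hypothetical example (the option and version shown here are illustrative, not a specific hunk from the patch):

    /* before: guarded because very old curl lacked the option */
    #if LIBCURL_VERSION_NUM >= 0x070b01 /* 7.11.1 */
    curl_easy_setopt(slot->curl, CURLOPT_POSTFIELDSIZE, (long)buf.len);
    #endif

    /* after: the #if/#endif pair is deleted and the call is unconditional */
    curl_easy_setopt(slot->curl, CURLOPT_POSTFIELDSIZE, (long)buf.len);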
2021-07-28Merge branch 'dl/packet-read-response-end-fix'Junio C Hamano1-1/+1
Error message update. * dl/packet-read-response-end-fix: pkt-line: replace "stateless separator" with "response end"
2021-07-09pkt-line: replace "stateless separator" with "response end"Denton Liu1-1/+1
In 0181b600a6 (pkt-line: define PACKET_READ_RESPONSE_END, 2020-05-19), the Response End packet was defined for Git's network protocol. When the patch was sent, it included an oversight where the error messages referenced "stateless separator", the work-in-progress name, over "response end", the final name chosen. Correct these error messages by having them correctly reference a "response end" packet. Signed-off-by: Denton Liu <liu.denton@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-05-12remote-curl: fix clone on sha256 reposEric Wong1-0/+2
The remote-https process needs to update its own instance of `the_repository' when it sees an HTTP(S) remote is using sha256. Without this, parse_oid_hex() fails to handle sha256 OIDs when it's eventually called by parse_fetch().

Tested with:

    git clone https://yhbt.net/sha256test.git
    GIT_SMART_HTTP=0 git clone https://yhbt.net/sha256test.git

(plain http:// also works)

Cloning the URL via git:// required no changes.

Signed-off-by: Eric Wong <e@80x24.org>
Acked-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
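A minimal sketch of what the fix amounts to, assuming the server's algorithm was detected as SHA-256 (hash_algo_by_name() and repo_set_hash_algo() are the real APIs; the surrounding detection code is omitted):

    int algo = hash_algo_by_name("sha256");    /* assumed detection result */

    if (algo != GIT_HASH_UNKNOWN)
            repo_set_hash_algo(the_repository, algo);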
2020-10-03push: parse and set flag for "--force-if-includes"Srinidhi Kaushik1-1/+13
The previous commit added the necessary machinery to implement the "--force-if-includes" protection, when "--force-with-lease" is used without giving the exact object the remote still ought to have. Surface the feature by adding a command line option and a configuration variable to enable it.

  - Add a flag: "TRANSPORT_PUSH_FORCE_IF_INCLUDES" to indicate that the new option was passed from the command line or via configuration settings; update command line and configuration parsers to set the new flag accordingly.

  - Introduce a new configuration option "push.useForceIfIncludes", which is equivalent to setting "--force-if-includes" in the command line.

  - Update "remote-curl" to recognize and pass this option to "send-pack" when enabled.

  - Update "advise" to catch the reject reason "REJECT_REF_NEEDS_UPDATE", set when the ref status is "REF_STATUS_REJECT_REMOTE_UPDATED", and (optionally) print a help message when the push fails.

  - The new option is a "no-op" in the following scenarios:
      * When used without "--force-with-lease".
      * When used with "--force-with-lease", and if the expected commit on the remote side is specified as an argument.

Signed-off-by: Srinidhi Kaushik <shrinidhi.kaushik@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-09-03Merge branch 'jt/lazy-fetch'Junio C Hamano1-6/+0
Updates to on-demand fetching code in lazily cloned repositories.

* jt/lazy-fetch:
  fetch: no FETCH_HEAD display if --no-write-fetch-head
  fetch-pack: remove no_dependents code
  promisor-remote: lazy-fetch objects in subprocess
  fetch-pack: do not lazy-fetch during ref iteration
  fetch: only populate existing_refs if needed
  fetch: avoid reading submodule config until needed
  fetch: allow refspecs specified through stdin
  negotiator/noop: add noop fetch negotiator
2020-08-24(various): document from_promisor parameterJonathan Tan1-0/+3
88e2f9ed8e ("introduce fetch-object: fetch one promisor object", 2017-12-05) plumbed through the from_promisor parameter but did not document it everywhere it appeared. Add the documentation. (It also plumbed through the no_dependents parameter, but I have left that alone because it is being removed in a commit under review [1].) [1] https://lore.kernel.org/git/e8f16d69089a5011c355d5939c56fa53b7a1eb2d.1597184949.git.jonathantanmy@google.com/ Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-08-18fetch-pack: remove no_dependents codeJonathan Tan1-6/+0
Now that Git has switched to using a subprocess to lazy-fetch missing objects, remove the no_dependents code as it is no longer used. Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-08-10Merge branch 'jk/strvec'Junio C Hamano1-51/+51
The argv_array API is useful for not just managing argv but any "vector" (NULL-terminated array) of strings, and has seen adoption to a certain degree. It has been renamed to "strvec" to reduce the barrier to adoption.

* jk/strvec:
  strvec: rename struct fields
  strvec: drop argv_array compatibility layer
  strvec: update documention to avoid argv_array
  strvec: fix indentation in renamed calls
  strvec: convert remaining callers away from argv_array name
  strvec: convert more callers away from argv_array name
  strvec: convert builtin/ callers away from argv_array name
  quote: rename sq_dequote_to_argv_array to mention strvec
  strvec: rename files from argv-array to strvec
  argv-array: rename to strvec
  argv-array: use size_t for count and alloc
2020-07-30strvec: rename struct fieldsJeff King1-3/+3
The "argc" and "argv" names made sense when the struct was argv_array, but now they're just confusing. Let's rename them to "nr" (which we use for counts elsewhere) and "v" (which is rather terse, but reads well when combined with typical variable names like "args.v"). Note that we have to update all of the callers immediately. Playing tricks with the preprocessor is hard here, because we wouldn't want to rewrite unrelated tokens. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-07-30Merge branch 'bc/push-cas-cquoted-refname' into masterJunio C Hamano1-1/+5
Pushing a ref whose name contains non-ASCII character with the "--force-with-lease" option did not work over smart HTTP protocol, which has been corrected. * bc/push-cas-cquoted-refname: remote-curl: make --force-with-lease work with non-ASCII ref names
2020-07-28strvec: fix indentation in renamed callsJeff King1-4/+4
Code which split an argv_array call across multiple lines, like:

    argv_array_pushl(&args, "one argument",
                     "another argument", "and more",
                     NULL);

was recently mechanically renamed to use strvec, which results in mis-matched indentation like:

    strvec_pushl(&args, "one argument",
                     "another argument", "and more",
                     NULL);

Let's fix these up to align the arguments with the opening paren. I did this manually by sifting through the results of:

    git jump grep 'strvec_.*,$'

and liberally applying my editor's auto-format. Most of the changes are of the form shown above, though I also normalized a few that had originally used a single-tab indentation (rather than our usual style of aligning with the open paren). I also rewrapped a couple of obvious cases (e.g., where previously too-long lines became short enough to fit on one), but I wasn't aggressive about it. In cases broken to three or more lines, the grouping of arguments is sometimes meaningful, and it wasn't worth my time or reviewer time to ponder each case individually.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-07-28strvec: convert remaining callers away from argv_array nameJeff King1-43/+43
We eventually want to drop the argv_array name and just use strvec consistently. There's no particular reason we have to do it all at once, or care about interactions between converted and unconverted bits. Because of our preprocessor compat layer, the names are interchangeable to the compiler (so even a definition and declaration using different names is OK).

This patch converts all of the remaining files, as the resulting diff is reasonably sized. The conversion was done purely mechanically with:

    git ls-files '*.c' '*.h' |
      xargs perl -i -pe '
        s/ARGV_ARRAY/STRVEC/g;
        s/argv_array/strvec/g;
      '

We'll deal with any indentation/style fallouts separately.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-07-28strvec: rename files from argv-array to strvecJeff King1-1/+1
This requires updating #include lines across the code-base, but that's all fairly mechanical, and was done with:

    git ls-files '*.c' '*.h' |
      xargs perl -i -pe 's/argv-array.h/strvec.h/'

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-07-20remote-curl: make --force-with-lease work with non-ASCII ref namesbrian m. carlson1-1/+5
When we invoke a remote transport helper and pass an option with an argument, we quote the argument as a C-style string if necessary. This is the case for the cas option, which implements the --force-with-lease command-line flag, when we're passing a non-ASCII refname.

However, the remote curl helper isn't designed to parse such an argument, meaning that if we try to use --force-with-lease with an HTTP push and a non-ASCII refname, we get an error like this:

    error: cannot parse expected object name '0000000000000000000000000000000000000000"'

Note the double quote, which get_oid has reminded us is not valid in a hex object ID. Even if we had been able to parse it, we would send the wrong data to the server: we'd send an escaped ref, which would not behave as the user wanted and might accidentally result in updating or deleting a ref we hadn't intended.

Since we need to expect a quoted C-style string here, just check if the first argument is a double quote, and if so, unquote it. Note that if the refname contains a double quote, then we will have double-quoted it already, so there is no ambiguity.

We test for this case only in the smart protocol, since the DAV-based protocol is not capable of handling this capability. We use UTF-8 because this is nicer in our tests and friendlier to Windows, but the code should work for all non-ASCII refs.

While we're at it, since the name of the option is now well established and isn't going to change, let's inline it instead of using the #define constant.

Reported-by: Frej Bjon <frej.bjon@nemit.fi>
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
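A sketch of the unquoting step described above (the variable name and error message are illustrative, not the exact remote-curl code; unquote_c_style() is the real helper):

    /* "value" is the raw argument of an "option cas <refname>:<expect>" directive */
    if (*value == '"') {
            struct strbuf unquoted = STRBUF_INIT;

            if (unquote_c_style(&unquoted, value, NULL))
                    die("invalid C-style quoting in option value");
            value = strbuf_detach(&unquoted, NULL);
    }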
2020-07-06Merge branch 'bc/sha-256-part-2'Junio C Hamano1-4/+42
SHA-256 migration work continues.

* bc/sha-256-part-2: (44 commits)
  remote-testgit: adapt for object-format
  bundle: detect hash algorithm when reading refs
  t5300: pass --object-format to git index-pack
  t5704: send object-format capability with SHA-256
  t5703: use object-format serve option
  t5702: offer an object-format capability in the test
  t/helper: initialize the repository for test-sha1-array
  remote-curl: avoid truncating refs with ls-remote
  t1050: pass algorithm to index-pack when outside repo
  builtin/index-pack: add option to specify hash algorithm
  remote-curl: detect algorithm for dumb HTTP by size
  builtin/ls-remote: initialize repository based on fetch
  t5500: make hash independent
  serve: advertise object-format capability for protocol v2
  connect: parse v2 refs with correct hash algorithm
  connect: pass full packet reader when parsing v2 refs
  Documentation/technical: document object-format for protocol v2
  t1302: expect repo format version 1 for SHA-256
  builtin/show-index: provide options to determine hash algo
  t5302: modernize test formatting
  ...
2020-06-19remote-curl: avoid truncating refs with ls-remotebrian m. carlson1-1/+3
Normally, the remote-curl transport helper is aware of the hash algorithm we're using because we're in a repo with the appropriate hash algorithm set. However, when using git ls-remote outside of a repository, we won't have initialized the hash algorithm properly, so use hash_to_hex_algop to print the ref corresponding to the algorithm we've detected. Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-06-19remote-curl: detect algorithm for dumb HTTP by sizebrian m. carlson1-2/+21
When reading the info/refs file for a repository, we have no explicit way to detect which hash algorithm is in use because the file doesn't provide one. Detect the hash algorithm in use by the size of the first object ID. If we have an empty repository, we don't know what the hash algorithm is on the remote side, so default to whatever the local side has configured. Without doing this, we cannot clone an empty repository since we don't know its hash algorithm. Test this case appropriately, since we currently have no tests for cloning an empty repository with the dumb HTTP protocol. We anonymize the URL like elsewhere in the function in case the user has decided to include a secret in the URL. Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
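Conceptually the detection looks like this (a simplified sketch of the idea, not the actual parsing code in remote-curl.c):

    const char *tab = strchr(line, '\t');   /* info/refs lines are "<oid>\t<refname>" */
    size_t hexlen = tab ? (size_t)(tab - line) : 0;
    const struct git_hash_algo *algo;

    if (hexlen == GIT_SHA1_HEXSZ)
            algo = &hash_algos[GIT_HASH_SHA1];
    else if (hexlen == GIT_SHA256_HEXSZ)
            algo = &hash_algos[GIT_HASH_SHA256];
    else
            algo = the_hash_algo;   /* empty repository: fall back to the local setting */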
2020-05-27remote-curl: implement object-format extensionsbrian m. carlson1-1/+18
Implement the object-format extensions that let us determine the hash algorithm in use when pushing, pulling, and fetching. Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-05-24stateless-connect: send response end packetDenton Liu1-0/+5
Currently, remote-curl acts as a proxy and blindly forwards packets between an HTTP server and fetch-pack. In the case of a stateless RPC connection where the connection is terminated before the transaction is complete, remote-curl will blindly forward the packets before waiting on more input from fetch-pack. Meanwhile, fetch-pack will read the transaction and continue reading, expecting more input to continue the transaction. This results in a deadlock between the two processes.

This can be seen in the following command which does not terminate:

    $ git -c protocol.version=2 clone https://github.com/git/git.git --shallow-since=20151012
    Cloning into 'git'...

whereas the v1 version does terminate as expected:

    $ git -c protocol.version=1 clone https://github.com/git/git.git --shallow-since=20151012
    Cloning into 'git'...
    fatal: the remote end hung up unexpectedly

Instead of blindly forwarding packets, make remote-curl insert a response end packet after proxying the responses from the remote server when using stateless_connect(). On the RPC client side, ensure that each response ends as described.

A separate control packet is chosen because we need to be able to differentiate between what the remote server sends and remote-curl's control packets. By ensuring in the remote-curl code that a server cannot send response end packets, we prevent a malicious server from being able to perform a denial of service attack in which they spoof a response end packet and cause the described deadlock to happen.

Reported-by: Force Charlie <charlieio@outlook.com>
Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Denton Liu <liu.denton@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-05-24pkt-line: define PACKET_READ_RESPONSE_ENDDenton Liu1-0/+2
In a future commit, we will use PACKET_READ_RESPONSE_END to separate messages proxied by remote-curl. To prepare for this, add the PACKET_READ_RESPONSE_END enum value. In switch statements that need a case added, die() or BUG() when a PACKET_READ_RESPONSE_END is unexpected. Otherwise, mirror how PACKET_READ_DELIM is implemented (especially in cases where packets are being forwarded). Signed-off-by: Denton Liu <liu.denton@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
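In a packet_reader loop the new status is handled alongside the existing ones, roughly like this (a sketch; the surrounding forwarding logic is omitted):

    switch (packet_reader_read(&reader)) {
    case PACKET_READ_NORMAL:
            /* forward the payload */
            break;
    case PACKET_READ_FLUSH:
    case PACKET_READ_DELIM:
            /* forward the control packet */
            break;
    case PACKET_READ_RESPONSE_END:
            /* remote-curl's own marker: this proxied response is finished */
            break;
    case PACKET_READ_EOF:
            die("unexpected EOF");
    }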
2020-05-24remote-curl: error on incomplete packetDenton Liu1-3/+56
Currently, remote-curl acts as a proxy and blindly forwards packets between an HTTP server and fetch-pack. In the case of a stateless RPC connection where the connection is terminated with a partially written packet, remote-curl will blindly send the partially written packet before waiting on more input from fetch-pack. Meanwhile, fetch-pack will read the partial packet and continue reading, expecting more input. This results in a deadlock between the two processes.

For a stateless connection, inspect packets before sending them and error out if a pkt-line packet is incomplete.

Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Denton Liu <liu.denton@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-05-18remote-curl: remove label indentationDenton Liu1-1/+1
In the codebase, labels are aligned to the leftmost column. Remove the space-indentation from `free_specs:` to conform to this. Signed-off-by: Denton Liu <liu.denton@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-05-18remote-curl: fix typoDenton Liu1-1/+1
Signed-off-by: Denton Liu <liu.denton@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-03-30oid_array: rename source file from sha1-arrayJeff King1-1/+1
We renamed the actual data structure in 910650d2f8 (Rename sha1_array to oid_array, 2017-03-31), but the file is still called sha1-array. Besides being slightly confusing, it makes it more annoying to grep for leftover occurrences of "sha1" in various files, because the header is included in so many places. Let's complete the transition by renaming the source and header files (and fixing up a few comment references). I kept the "-" in the name, as that seems to be our style; cf. fc1395f4a4 (sha1_file.c: rename to use dash in file name, 2018-04-10). We also have oidmap.h and oidset.h without any punctuation, but those are "struct oidmap" and "struct oidset" in the code. We _could_ make this "oidarray" to match, but somehow it looks uglier to me because of the length of "array" (plus it would be a very invasive patch for little gain). Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-03-09Merge branch 'rs/show-progress-in-dumb-http-fetch'Junio C Hamano1-0/+1
"git fetch" over HTTP walker protocol did not show any progress output. We inherently do not know how much work remains, but still we can show something not to bore users. * rs/show-progress-in-dumb-http-fetch: remote-curl: show progress for fetches over dumb HTTP
2020-03-03remote-curl: show progress for fetches over dumb HTTPRené Scharfe1-0/+1
Fetching over dumb HTTP transport doesn't show any progress, even with the option --progress. If the connection is slow or there is a lot of data to get then this can take a long time while the user is left to wonder if git got stuck. We don't know the number of objects to fetch at the outset, but we can count the ones we got. Show an open-ended progress indicator based on that number if the user asked for it. Signed-off-by: René Scharfe <l.s.r@web.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-01-31C: use skip_prefix() to avoid hardcoded string lengthJunio C Hamano1-2/+3
We often skip an optional prefix in a string with a hardcoded constant, e.g.

    if (starts_with(string, "prefix"))
        string += 6;

which is less error prone when written

    skip_prefix(string, "prefix", &string);

Note that this changes a few error messages from "git reflog expire --expire=nonsense.timestamp", which used to complain by saying

    '--expire=nonsense.timestamp' is not a valid timestamp

but with this change, we say

    'nonsense.timestamp' is not a valid timestamp

which is more technically correct (the string with --expire= as a prefix obviously cannot be a valid timestamp, but the error is about the part of the input without that prefix).

Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-10-23Merge branch 'bc/smart-http-atomic-push'Junio C Hamano1-1/+12
The atomic push over smart HTTP transport did not work, which has been corrected. * bc/smart-http-atomic-push: remote-curl: pass on atomic capability to remote side
2019-10-17remote-curl: pass on atomic capability to remote sidebrian m. carlson1-1/+12
When pushing more than one reference with the --atomic option, the server is supposed to perform a single atomic transaction to update the references, leaving them either all to succeed or all to fail. This works fine when pushing locally or over SSH, but when pushing over HTTP, we fail to pass the atomic capability to the remote side. In fact, we have not reported this capability to any remote helpers during the life of the feature. Now normally, things happen to work nevertheless, since we actually check for most types of failures, such as non-fast-forward updates, on the client side, and just abort the entire attempt. However, if the server side reports a problem, such as the inability to lock a ref, the transaction isn't atomic, because we haven't passed the appropriate capability over and the remote side has no way of knowing that we wanted atomic behavior. Fix this by passing the option from the transport code through to remote helpers, and from the HTTP remote helper down to send-pack. With this change, we can detect if the server side rejects the push and report back appropriately. Note the difference in the messages: the remote side reports "atomic transaction failed", while our own checking rejects pushes with the message "atomic push failed". Document the atomic option in the remote helper documentation, so other implementers can implement it if they like. Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-10-15remote-curl: use argv_array in parse_push()René Scharfe1-13/+9
Use argv_array to build an array of strings instead of open-coding it. This simplifies the code a bit. We also need to make the specs parameter of push(), push_dav() and push_git() const to match the argv member of the argv_array. That's fine, as all three only actually read from the specs array anyway. Signed-off-by: René Scharfe <l.s.r@web.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
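The conversion replaces a hand-grown char ** with something along these lines (the refspecs are examples; the argv_array calls are the real pre-strvec API):

    struct argv_array specs = ARGV_ARRAY_INIT;

    argv_array_push(&specs, "refs/heads/topic:refs/heads/topic");
    argv_array_push(&specs, "+refs/tags/*:refs/tags/*");

    /* ...then hand specs.argc / specs.argv to the push helpers */
    argv_array_clear(&specs);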
2019-06-03i18n: fix typos found during l10n for git 2.22.0Jiang Xin1-2/+2
Fix two typos introduced by the following commits: + 31fba9d3b4 (diff-parseopt: convert --[src|dst]-prefix, 2019-03-24) + ed8b4132c8 (remote-curl: mark all error messages for translation, 2019-03-05) Signed-off-by: Jiang Xin <worldhello.net@gmail.com> Acked-by: Johannes Schindelin <johannes.schindelin@gmx.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-04-25Merge branch 'bc/hash-transition-16'Junio C Hamano1-5/+6
Conversion from unsigned char[20] to struct object_id continues.

* bc/hash-transition-16: (35 commits)
  gitweb: make hash size independent
  Git.pm: make hash size independent
  read-cache: read data in a hash-independent way
  dir: make untracked cache extension hash size independent
  builtin/difftool: use parse_oid_hex
  refspec: make hash size independent
  archive: convert struct archiver_args to object_id
  builtin/get-tar-commit-id: make hash size independent
  get-tar-commit-id: parse comment record
  hash: add a function to lookup hash algorithm by length
  remote-curl: make hash size independent
  http: replace sha1_to_hex
  http: compute hash of downloaded objects using the_hash_algo
  http: replace hard-coded constant with the_hash_algo
  http-walker: replace sha1_to_hex
  http-push: remove remaining uses of sha1_to_hex
  http-backend: allow 64-character hex names
  http-push: convert to use the_hash_algo
  builtin/pull: make hash-size independent
  builtin/am: make hash size independent
  ...
2019-04-16Merge branch 'js/remote-curl-i18n'Junio C Hamano1-25/+25
Error messages given from the http transport have been updated so that they can be localized. * js/remote-curl-i18n: remote-curl: mark all error messages for translation
2019-04-16Merge branch 'js/anonymize-remote-curl-diag'Junio C Hamano1-6/+13
remote-http transport did not anonymize URLs reported in its error messages at places. * js/anonymize-remote-curl-diag: curl: anonymize URLs in error messages and warnings
2019-04-01remote-curl: make hash size independentbrian m. carlson1-5/+6
Change one hard-coded use of the constant 40 to a reference to the_hash_algo. In addition, switch a use of get_oid_hex to parse_oid_hex to avoid the need to use a constant. Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
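The pattern being converted to is roughly the following (an illustrative snippet, not the exact hunk):

    struct object_id oid;
    const char *p;

    /* before: get_oid_hex(line, &oid); name = line + 40; */
    if (parse_oid_hex(line, &oid, &p))
            die("invalid object ID in response");
    /* "p" now points just past the hex digits, whatever size the_hash_algo uses */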
2019-03-07Merge branch 'jt/http-auth-proto-v2-fix'Junio C Hamano1-205/+179
Unify RPC code for smart http in protocol v0/v1 and v2, which fixes a bug in the latter (lack of authentication retry) and generally improves the code base.

* jt/http-auth-proto-v2-fix:
  remote-curl: use post_rpc() for protocol v2 also
  remote-curl: refactor reading into rpc_state's buf
  remote-curl: reduce scope of rpc_state.result
  remote-curl: reduce scope of rpc_state.stdin_preamble
  remote-curl: reduce scope of rpc_state.argv
2019-03-06remote-curl: mark all error messages for translationJohannes Schindelin1-25/+25
Suggested by Jeff King. Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-03-05curl: anonymize URLs in error messages and warningsJohannes Schindelin1-6/+13
Just like 47abd85ba0 (fetch: Strip usernames from url's before storing them, 2009-04-17) and later 882d49ca5c (push: anonymize URL in status output, 2016-07-13), this change anonymizes URLs (read: strips them of user names and especially passwords) in user-facing error messages and warnings. Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de> Reviewed-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-03-03remote-curl: use post_rpc() for protocol v2 alsoJonathan Tan1-184/+149
When transmitting and receiving POSTs for protocol v0 and v1, remote-curl uses post_rpc() (and associated functions), but when doing the same for protocol v2, it uses a separate set of functions (proxy_rpc() and others). Besides duplication of code, this has caused at least one bug: the auth retry mechanism that was implemented in v0/v1 was not implemented in v2.

To fix this issue and avoid it in the future, make remote-curl also use post_rpc() when handling protocol v2. Because line lengths are written to the HTTP request in protocol v2 (unlike in protocol v0/v1), this necessitates changes in post_rpc() and some of the functions it uses; perform these changes too.

A test has been included to ensure that the code for both the unchunked and chunked variants of the HTTP request is exercised.

Note: stateless_connect() has been updated to use the lower-level packet reading functions instead of struct packet_reader. The low-level control is necessary here because we cannot change the destination buffer of struct packet_reader while it is being used; struct packet_reader has a peeking mechanism which relies on the destination buffer being present in between a peek and a read.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-02-22trace2: create new combined trace facilityJeff Hostetler1-0/+7
Create a new unified tracing facility for git. The eventual intent is to replace the current trace_printf* and trace_performance* routines with a unified set of git_trace2* routines.

In addition to the usual printf-style API, trace2 provides higher-level event verbs with fixed fields allowing structured data to be written. This makes post-processing and analysis easier for external tools.

Trace2 defines 3 output targets. These are set using the environment variables "GIT_TR2", "GIT_TR2_PERF", and "GIT_TR2_EVENT". These may be set to "1" or to an absolute pathname (just like the current GIT_TRACE).

  * GIT_TR2 is intended to be a replacement for GIT_TRACE and logs command summary data.

  * GIT_TR2_PERF is intended as a replacement for GIT_TRACE_PERFORMANCE. It extends the output with columns for the command process, thread, repo, absolute and relative elapsed times. It reports events for child process start/stop, thread start/stop, and per-thread function nesting.

  * GIT_TR2_EVENT is a new structured format. It writes event data as a series of JSON records.

Calls to trace2 functions log to any of the 3 output targets enabled without the need to call different trace_printf* or trace_performance* routines.

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-02-22remote-curl: refactor reading into rpc_state's bufJonathan Tan1-9/+24
Currently, whenever remote-curl reads pkt-lines from its response file descriptor, only the payload is written to its buf, not the 4 characters denoting the length. A future patch will require the ability to also write those 4 characters, so in preparation for that, refactor this read into its own function. Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-02-14remote-curl: reduce scope of rpc_state.resultJonathan Tan1-12/+13
The result field in struct rpc_state is only used in rpc_service(), and not in any functions it directly or indirectly calls. Refactor it to become an argument of rpc_service() instead. Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-02-14remote-curl: reduce scope of rpc_state.stdin_preambleJonathan Tan1-9/+4
The stdin_preamble field in struct rpc_state is only used in rpc_service(), and not in any functions it directly or indirectly calls. Refactor it to become an argument of rpc_service() instead. An observation of all callers of rpc_service() shows that the preamble is always set, so we no longer need the "if (preamble)" check. Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-02-14remote-curl: reduce scope of rpc_state.argvJonathan Tan1-7/+5
The argv field in struct rpc_state is only used in rpc_service(), and not in any functions it directly or indirectly calls. Refactor it to become an argument of rpc_service() instead. Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-02-06remote-curl: tighten "version 2" check for smart-httpJeff King1-1/+1
In a v2 smart-http conversation, the server should reply to our initial request with a pkt-line saying "version 2". We check that with starts_with(), but really that should be the only thing in that packet. A response of "version 20" should not match. Let's tighten this check to use strcmp(). Note that we don't need to worry about a trailing newline here, because the pkt-line code will have chomped it for us already.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-02-06remote-curl: refactor smart-http discoveryJeff King1-43/+57
After making initial contact with an http server, we have to decide if the server supports smart-http, and if so, which version. Our rules are a bit inconsistent:

  1. For v0, we require that the content-type indicates a smart-http response. We also require the response to look vaguely like a pkt-line starting with "#". If one of those does not match, we fall back to dumb-http.

     But according to our http protocol spec[1]:

       Dumb servers MUST NOT return a return type starting with `application/x-git-`.

     If we see the expected content-type, we should consider it smart-http. At that point we can parse the pkt-line for real, and complain if it is not syntactically valid.

  2. For v2, we do not actually check the content-type. Our v2 protocol spec says[2]:

       When using the http:// or https:// transport a client makes a "smart" info/refs request as described in `http-protocol.txt`[...]

     and the http spec is clear that for a smart-http response[3]:

       The Content-Type MUST be `application/x-$servicename-advertisement`.

     So it is required according to the spec.

These inconsistencies were easy to miss because of the way the original code was written as an inline conditional. Let's pull it out into its own function for readability, and improve a few things:

  - we now predicate the smart/dumb decision entirely on the presence of the correct content-type

  - we do a real pkt-line parse before deciding how to proceed (and die if it isn't valid)

  - use skip_prefix() for comparing service strings, instead of constructing expected output in a strbuf; this avoids dealing with memory cleanup

Note that this _is_ tightening what the client will allow. It's all according to the spec, but it's possible that other implementations might violate these. However, violating these particular rules seems like an odd choice for a server to make.

[1] Documentation/technical/http-protocol.txt, l. 166-167
[2] Documentation/technical/protocol-v2.txt, l. 63-64
[3] Documentation/technical/http-protocol.txt, l. 247

Helped-by: Josh Steadmon <steadmon@google.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
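A condensed sketch of the decision the refactored code makes (the helper name below is made up; skip_prefix() and the content-type format are real):

    static int looks_like_smart_http(const struct strbuf *type, const char *service)
    {
            const char *p;

            /* smart if and only if the advertised content-type matches the spec */
            return skip_prefix(type->buf, "application/x-", &p) &&
                   skip_prefix(p, service, &p) &&
                   !strcmp(p, "-advertisement");
    }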
2019-02-05Merge branch 'jt/fetch-v2-sideband'Junio C Hamano1-9/+20
"git fetch" and "git upload-pack" learned to send all exchange over the sideband channel while talking the v2 protocol. * jt/fetch-v2-sideband: tests: define GIT_TEST_SIDEBAND_ALL {fetch,upload}-pack: sideband v2 fetch response sideband: reverse its dependency on pkt-line pkt-line: introduce struct packet_writer pack-protocol.txt: accept error packets in any context Use packet_reader instead of packet_read_line
2019-01-29Merge branch 'ms/http-no-more-failonerror'Junio C Hamano1-5/+24
Debugging help for http transport.

* ms/http-no-more-failonerror:
  test: test GIT_CURL_VERBOSE=1 shows an error
  remote-curl: unset CURLOPT_FAILONERROR
  remote-curl: define struct for CURLOPT_WRITEFUNCTION
  http: enable keep_error for HTTP requests
  http: support file handles for HTTP_KEEP_ERROR
2019-01-10remote-curl: unset CURLOPT_FAILONERRORMasaya Suzuki1-0/+10
By not setting CURLOPT_FAILONERROR, curl parses the HTTP response headers even if the response is an error. This allows GIT_CURL_VERBOSE to show the HTTP headers, which is useful for debugging.

Signed-off-by: Masaya Suzuki <masayasuzuki@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
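The libcurl side of this is a single option, shown here out of context ("slot" stands in for git's usual http request slot):

    /* keep parsing response headers even on 4xx/5xx so GIT_CURL_VERBOSE can
     * show them; callers now check the HTTP status themselves */
    curl_easy_setopt(slot->curl, CURLOPT_FAILONERROR, 0);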
2019-01-10remote-curl: define struct for CURLOPT_WRITEFUNCTIONMasaya Suzuki1-4/+14
In order to pass more values for rpc_in, define a struct and pass it as an additional value. Signed-off-by: Masaya Suzuki <masayasuzuki@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
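The shape of the change, as a sketch (the struct and field names follow the description above but are illustrative; "rpc" and "slot" are assumed to exist in the caller):

    struct rpc_in_data {
            struct rpc_state *rpc;              /* what rpc_in() already needed */
            struct active_request_slot *slot;   /* extra context it needs now */
    };

    struct rpc_in_data data = { rpc, slot };
    curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, rpc_in);
    curl_easy_setopt(slot->curl, CURLOPT_WRITEDATA, &data);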
2019-01-10http: enable keep_error for HTTP requestsMasaya Suzuki1-1/+0
curl stops parsing a response when it sees a bad HTTP status code and it has CURLOPT_FAILONERROR set. This prevents GIT_CURL_VERBOSE from showing HTTP headers on error. keep_error is an option to receive the HTTP response body for those error responses. By enabling this option, curl will process the HTTP response headers, and they're shown if GIT_CURL_VERBOSE is set.

Signed-off-by: Masaya Suzuki <masayasuzuki@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-01-02pack-protocol.txt: accept error packets in any contextMasaya Suzuki1-3/+6
In the Git pack protocol definition, an error packet may appear only in a certain context. However, servers can face a runtime error (e.g. I/O error) at an arbitrary timing. This patch changes the protocol to allow an error packet to be sent instead of any packet. Without this protocol spec change, when a server cannot process a request, there's no way to tell that to a client. Since the server cannot produce a valid response, it would be forced to cut a connection without telling why. With this protocol spec change, the server can be more gentle in this situation. An old client may see these error packets as an unexpected packet, but this is not worse than having an unexpected EOF. Following this protocol spec change, the error packet handling code is moved to pkt-line.c. Implementation wise, this implementation uses pkt-line to communicate with a subprocess. Since this is not a part of Git protocol, it's possible that a packet that is not supposed to be an error packet is mistakenly parsed as an error packet. This error packet handling is enabled only for the Git pack protocol parsing code considering this. Signed-off-by: Masaya Suzuki <masayasuzuki@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-01-02Use packet_reader instead of packet_read_lineMasaya Suzuki1-7/+15
By using and sharing a packet_reader while handling a Git pack protocol request, the same reader option is used throughout the code. This makes it easy to set a reader option to the request parsing code. Signed-off-by: Masaya Suzuki <masayasuzuki@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-12-10style: the opening '{' of a function is in a separate lineNguyễn Thái Ngọc Duy1-1/+2
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-11-21Merge branch 'en/double-semicolon-fix' into maintJunio C Hamano1-1/+1
Code clean-up. * en/double-semicolon-fix: Remove superfluous trailing semicolons
2018-11-12remote-curl.c: xcurl_off_t is not portable (on 32 bit platfoms)Torsten Bögershausen1-3/+4
When setting

    DEVELOPER = 1
    DEVOPTS = extra-all

"gcc (Raspbian 6.3.0-18+rpi1+deb9u1) 6.3.0 20170516" errors out with

    "comparison is always false due to limited range of data type"
    "[-Werror=type-limits]"

It turns out that the function xcurl_off_t() has 2 flavours:

  - It gives a warning on 32 bit systems, like Linux

  - It takes the signed ssize_t as a parameter, but the only caller is using a size_t (which is typically unsigned these days)

The original motivation of this function is to make sure that sizes > 2GiB are handled correctly. The curl documentation says: "For any given platform/compiler curl_off_t must be typedef'ed to a 64-bit wide signed integral data type".

On a 32 bit system "size_t" can be promoted into a 64 bit signed value without loss of data, and therefore we may see the "comparison is always false" warning. On a 64 bit system it may happen, at least in theory, that size_t is > 2^63, and then the promotion from an unsigned "size_t" into a signed "curl_off_t" may be a problem.

One solution to suppress a possible compiler warning could be to remove the function xcurl_off_t(). However, to be on the very safe side, we keep it and improve it:

  - The len parameter is changed from ssize_t to size_t

  - A temporary variable "size" is used, promoted into uintmax_t and then compared with "maximum_signed_value_of_type(curl_off_t)". Thanks to Junio C Hamano for this hint.

Signed-off-by: Torsten Bögershausen <tboegi@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
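Roughly the shape of the improved helper described above (a sketch of the approach, not necessarily the exact committed code):

    static curl_off_t xcurl_off_t(size_t len)
    {
            uintmax_t size = len;

            if (size > maximum_signed_value_of_type(curl_off_t))
                    die("cannot handle pushes this big");
            return (curl_off_t)size;
    }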
2018-09-24Merge branch 'en/double-semicolon-fix'Junio C Hamano1-1/+1
Code clean-up. * en/double-semicolon-fix: Remove superfluous trailing semicolons
2018-09-05Remove superfluous trailing semicolonsElijah Newren1-1/+1
Signed-off-by: Elijah Newren <newren@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-08-20Merge branch 'js/typofixes'Junio C Hamano1-1/+1
Comment update. * js/typofixes: remote-curl: remove spurious period git-compat-util.h: fix typo
2018-08-08remote-curl: remove spurious periodJohannes Schindelin1-1/+1
We should not interrupt. sentences in the middle. Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-05-23remote-curl: accept compressed responses with protocol v2Brandon Williams1-0/+1
Configure curl to accept compressed responses when using protocol v2 by setting `CURLOPT_ENCODING` to "", which indicates that curl should send an "Accept-Encoding" header with all supported compression encodings. Signed-off-by: Brandon Williams <bmwill@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-05-23remote-curl: accept all encodings supported by curlBrandon Williams1-1/+1
Configure curl to accept all encodings which curl supports instead of only accepting gzip responses.

This fixes an issue when using an installation of curl which is built without the "zlib" feature. Since aa90b9697 (Enable info/refs gzip decompression in HTTP client, 2012-09-19) we end up requesting "gzip" encoding anyway despite libcurl not being able to decode it. Worse, instead of getting a clear error message indicating so, we end up falling back to "dumb" http, producing a confusing and difficult-to-debug result.

Since curl doesn't do any checking to verify that it supports a requested encoding, instead set the curl option `CURLOPT_ENCODING` with an empty string indicating that curl should send an "Accept-Encoding" header containing only the encodings supported by curl.

Reported-by: Anton Golubev <anton.golubev@gmail.com>
Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
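The resulting option setting is a one-liner; an empty string tells libcurl to advertise every encoding it was actually built with ("slot" stands in for git's http request slot):

    curl_easy_setopt(slot->curl, CURLOPT_ENCODING, "");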
2018-05-08Merge branch 'ma/http-walker-no-partial'Junio C Hamano1-3/+0
"git http-fetch" (deprecated) had an optional and experimental "feature" to fetch only commits and/or trees, which nobody used. This has been removed. * ma/http-walker-no-partial: walker: drop fields of `struct walker` which are always 1 http-fetch: make `-a` standard behaviour
2018-05-08Merge branch 'bw/protocol-v2'Junio C Hamano1-4/+276
The beginning of the next-gen transfer protocol.

* bw/protocol-v2: (35 commits)
  remote-curl: don't request v2 when pushing
  remote-curl: implement stateless-connect command
  http: eliminate "# service" line when using protocol v2
  http: don't always add Git-Protocol header
  http: allow providing extra headers for http requests
  remote-curl: store the protocol version the server responded with
  remote-curl: create copy of the service name
  pkt-line: add packet_buf_write_len function
  transport-helper: introduce stateless-connect
  transport-helper: refactor process_connect_service
  transport-helper: remove name parameter
  connect: don't request v2 when pushing
  connect: refactor git_connect to only get the protocol version once
  fetch-pack: support shallow requests
  fetch-pack: perform a fetch using v2
  upload-pack: introduce fetch server command
  push: pass ref prefixes when pushing
  fetch: pass ref prefixes when fetching
  ls-remote: pass ref prefixes when requesting a remote's refs
  transport: convert transport_get_remote_refs to take a list of ref prefixes
  ...
2018-04-24walker: drop fields of `struct walker` which are always 1Martin Ågren1-3/+0
After the previous commit, both users of `struct walker` set `get_tree`, `get_history` and `get_all` to 1. Drop those fields and simplify the walker implementation accordingly. Let's hope that any out-of-tree users will not mind this change. They should notice that the compilation fails as they try to set these fields. (If they do not set them, note that `get_http_walker()` leaves them undefined, so the behavior will have been undefined all the time.) Signed-off-by: Martin Ågren <martin.agren@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-04-11exec_cmd: rename to use dash in file nameStefan Beller1-1/+1
This is more consistent with the project style. The majority of Git's source files use dashes in preference to underscores in their file names. Signed-off-by: Stefan Beller <sbeller@google.com>
2018-03-15remote-curl: don't request v2 when pushingBrandon Williams1-1/+10
In order to be able to ship protocol v2 with only fetch supported, we need clients to not issue a request to use protocol v2 when pushing (since the client currently doesn't know how to push using protocol v2). This allows a client to have protocol v2 configured in `protocol.version`, take advantage of using v2 for fetch, and fall back to using v0 when pushing while v2 for push is being designed.

We could run into issues if we didn't fall back from protocol v2 when pushing right now. This is because currently a server will ignore a request to use v2 when contacting the 'receive-pack' endpoint and fall back to using v0, but when push v2 is rolled out to servers, the 'receive-pack' endpoint will start responding using v2. So we don't want to get into a state where a client is requesting to push with v2 before it actually knows how to push using v2.

Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-03-15remote-curl: implement stateless-connect commandBrandon Williams1-1/+206
Teach remote-curl the 'stateless-connect' command which is used to establish a stateless connection with servers which support protocol version 2. This allows remote-curl to act as a proxy, allowing the git client to communicate natively with a remote end, simply using remote-curl as a pass through to convert requests to http. Signed-off-by: Brandon Williams <bmwill@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-03-15http: eliminate "# service" line when using protocol v2Brandon Williams1-0/+3
When an http info/refs request is made, requesting that protocol v2 be used, don't send a "# service" line since this line is not part of the v2 spec. Signed-off-by: Brandon Williams <bmwill@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-03-15http: don't always add Git-Protocol headerBrandon Williams1-0/+33
Instead of always sending the Git-Protocol header with the configured version with every http request, explicitly send it when discovering refs and then only send it on subsequent http requests if the server understood the version requested. Signed-off-by: Brandon Williams <bmwill@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-03-15remote-curl: store the protocol version the server responded withBrandon Williams1-1/+3
Store the protocol version the server responded with when performing discovery. This will be used in a future patch to either change the 'Git-Protocol' header sent in subsequent requests or to determine if a client needs to fallback to using a different protocol version. Signed-off-by: Brandon Williams <bmwill@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-03-15remote-curl: create copy of the service nameBrandon Williams1-2/+3
Make a copy of the service name being requested instead of relying on the buffer pointed to by the passed in 'const char *' to remain unchanged. Currently, all service names are string constants, but a subsequent patch will introduce service names from external sources. Signed-off-by: Brandon Williams <bmwill@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-03-14protocol: introduce enum protocol_version value protocol_v2Brandon Williams1-0/+3
Introduce protocol_v2, a new value for 'enum protocol_version'. Subsequent patches will fill in the implementation of protocol_v2. Signed-off-by: Brandon Williams <bmwill@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-03-14connect: discover protocol version outside of get_remote_headsBrandon Williams1-2/+18
In order to prepare for the addition of protocol_v2 push the protocol version discovery outside of 'get_remote_heads()'. This will allow for keeping the logic for processing the reference advertisement for protocol_v1 and protocol_v0 separate from the logic for protocol_v2. Signed-off-by: Brandon Williams <bmwill@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-02-28Merge branch 'jk/push-options-via-transport-fix'Junio C Hamano1-1/+10
"git push" over http transport did not unquote the push-options correctly. * jk/push-options-via-transport-fix: remote-curl: unquote incoming push-options t5545: factor out http repository setup
2018-02-27Merge branch 'js/packet-read-line-check-null'Junio C Hamano1-0/+2
Some low level protocol codepath could crash when they get an unexpected flush packet, which is now fixed.

* js/packet-read-line-check-null:
  always check for NULL return from packet_read_line()
  correct error messages for NULL packet_read_line()
2018-02-20remote-curl: unquote incoming push-optionsJeff King1-1/+10
The transport-helper protocol c-style quotes the value of any options passed to the helper via the "option <key> <value>" directive. However, remote-curl doesn't actually unquote the push-option values, meaning that we will send the quoted version to the other side (whereas git-over-ssh would send the raw value).

The pack-protocol.txt documentation defines the push-options as a series of VCHARs, which excludes most characters that would need quoting. But:

  1. You can still see the bug with a valid push-option that starts with a double-quote (since that triggers quoting).

  2. We do currently handle any non-NUL characters correctly in git-over-ssh. So even though the spec does not say that we need to handle most quoted characters, it's nice if our behavior is consistent between protocols.

There are two new tests: the "direct" one shows that this already works in the non-http case, and the http one covers this bugfix.

Reported-by: Jon Simons <jon@jonsimons.org>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-02-08always check for NULL return from packet_read_line()Jon Simons1-0/+2
The packet_read_line() function will die if it sees any protocol or socket errors. But it will return NULL for a flush packet; some callers which are not expecting this may dereference NULL if they get an unexpected flush. This would involve the other side breaking protocol, but we should flag the error rather than segfault. Signed-off-by: Jon Simons <jon@jonsimons.org> Reviewed-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
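(A minimal sketch of the defensive pattern this change adds; the error message and surrounding context are illustrative.)

    char *line = packet_read_line(fd, NULL);
    if (!line)
        /* flush packet where a ref was expected: flag broken protocol instead of dereferencing NULL */
        die("remote server sent an unexpected flush packet");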
2017-12-08fetch: support filtersJeff Hostetler1-0/+6
Teach fetch to support filters. This is only allowed for the remote configured in extensions.partialcloneremote. Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-12-05introduce fetch-object: fetch one promisor objectJonathan Tan1-1/+13
Introduce fetch-object, providing the ability to fetch one object from a promisor remote. This uses fetch-pack. To do this, the transport mechanism has been updated with 2 flags, "from-promisor" to indicate that the resulting pack comes from a promisor remote (and thus should be annotated as such by index-pack), and "no-dependents" to indicate that only the objects themselves need to be fetched (but fetching additional objects is nevertheless safe). Whenever "no-dependents" is used, fetch-pack will refrain from using any object flags, because it is most likely invoked as part of a dynamic object fetch by another Git command (which may itself use object flags). An alternative to this is to leave fetch-pack alone, and instead update the allocation of flags so that fetch-pack's flags never overlap with any others, but this will end up shrinking the number of flags available to nearly every other Git command (that is, every Git command that accesses objects), so the approach in this commit was used instead. This will be tested in a subsequent commit. Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-06-15config: don't include config.h by defaultBrandon Williams1-0/+1
Stop including config.h by default in cache.h. Instead only include config.h in those files which require use of the config system. Signed-off-by: Brandon Williams <bmwill@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-04-23Merge branch 'dt/http-postbuffer-can-be-large'Junio C Hamano1-3/+9
Allow the http.postbuffer configuration variable to be set to a size that can be expressed in size_t, which can be larger than ulong on some platforms. * dt/http-postbuffer-can-be-large: http.postbuffer: allow full range of ssize_t values
2017-04-19Merge branch 'bc/object-id'Junio C Hamano1-2/+2
Conversion from unsigned char [40] to struct object_id continues. * bc/object-id: Documentation: update and rename api-sha1-array.txt Rename sha1_array to oid_array Convert sha1_array_for_each_unique and for_each_abbrev to object_id Convert sha1_array_lookup to take struct object_id Convert remaining callers of sha1_array_lookup to object_id Make sha1_array_append take a struct object_id * sha1-array: convert internal storage for struct sha1_array to object_id builtin/pull: convert to struct object_id submodule: convert check_for_new_submodule_commits to object_id sha1_name: convert disambiguate_hint_fn to take object_id sha1_name: convert struct disambiguate_state to object_id test-sha1-array: convert most code to struct object_id parse-options-cb: convert sha1_array_append caller to struct object_id fsck: convert init_skiplist to struct object_id builtin/receive-pack: convert portions to struct object_id builtin/pull: convert portions to struct object_id builtin/diff: convert to struct object_id Convert GIT_SHA1_RAWSZ used for allocation to GIT_MAX_RAWSZ Convert GIT_SHA1_HEXSZ used for allocation to GIT_MAX_HEXSZ Define new hash-size constants for allocating memory
2017-04-13http.postbuffer: allow full range of ssize_t valuesDavid Turner1-3/+9
Unfortunately, in order to push some large repos where a server does not support chunked encoding, the http postbuffer must sometimes exceed two gigabytes. On a 64-bit system, this is OK: we just malloc a larger buffer. This means that we need to use CURLOPT_POSTFIELDSIZE_LARGE to set the buffer size. Signed-off-by: David Turner <dturner@twosigma.com> Reviewed-by: Jonathan Nieder <jrnieder@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
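(Sketch of the curl call the message describes, with a hypothetical rpc->len size; CURLOPT_POSTFIELDSIZE takes a long, so request bodies over two gigabytes need the _LARGE variant with a curl_off_t.)

    /* allow POST bodies larger than what fits in a long */
    curl_easy_setopt(slot->curl, CURLOPT_POSTFIELDSIZE_LARGE,
                     (curl_off_t)rpc->len);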
2017-03-31Rename sha1_array to oid_arraybrian m. carlson1-1/+1
Since this structure handles an array of object IDs, rename it to struct oid_array. Also rename the accessor functions and the initialization constant. This commit was produced mechanically by providing non-Documentation files to the following Perl one-liners: perl -pi -E 's/struct sha1_array/struct oid_array/g' perl -pi -E 's/\bsha1_array_/oid_array_/g' perl -pi -E 's/SHA1_ARRAY_INIT/OID_ARRAY_INIT/g' Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-03-28sha1-array: convert internal storage for struct sha1_array to object_idbrian m. carlson1-1/+1
Make the internal storage for struct sha1_array use an array of struct object_id internally. Update the users of this struct which inspect its internals. Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-03-22remote-curl: allow push optionsBrandon Williams1-0/+8
Teach remote-curl to understand push options and to be able to convey them across HTTP. Signed-off-by: Brandon Williams <bmwill@google.com> Reviewed-by: Jonathan Nieder <jrnieder@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-01-10Merge branch 'dt/smart-http-detect-server-going-away'Junio C Hamano1-0/+8
When the http server gives an incomplete response to a smart-http rpc call, it could lead to the client waiting for a full response that will never come. Teach the client side to notice this condition and abort the transfer. An improvement counterproposal has failed. cf. <20161114194049.mktpsvgdhex2f4zv@sigill.intra.peff.net> * dt/smart-http-detect-server-going-away: upload-pack: optionally allow fetching any sha1 remote-curl: don't hang when a server dies before any output
2016-12-19Merge branch 'jk/http-walker-limit-redirect-2.9'Junio C Hamano1-9/+13
Transport with dumb http can be fooled into following foreign URLs that the end user does not intend to visit, especially with server-side redirects and the http-alternates mechanism, which can lead to security issues. Tighten the redirection and make it more obvious to the end user when it happens. * jk/http-walker-limit-redirect-2.9: http: treat http-alternates like redirects http: make redirects more obvious remote-curl: rename shadowed options variable http: always update the base URL for redirects http: simplify update_url_from_redirect
2016-12-06http: make redirects more obviousJeff King1-0/+4
We instruct curl to always follow HTTP redirects. This is convenient, but it creates opportunities for malicious servers to create confusing situations. For instance, imagine Alice is a git user with access to a private repository on Bob's server. Mallory runs her own server and wants to access objects from Bob's repository. Mallory may try a few tricks that involve asking Alice to clone from her, build on top, and then push the result: 1. Mallory may simply redirect all fetch requests to Bob's server. Git will transparently follow those redirects and fetch Bob's history, which Alice may believe she got from Mallory. The subsequent push seems like it is just feeding Mallory back her own objects, but is actually leaking Bob's objects. There is nothing in git's output to indicate that Bob's repository was involved at all. The downside (for Mallory) of this attack is that Alice will have received Bob's entire repository, and is likely to notice that when building on top of it. 2. If Mallory happens to know the sha1 of some object X in Bob's repository, she can instead build her own history that references that object. She then runs a dumb http server, and Alice's client will fetch each object individually. When it asks for X, Mallory redirects her to Bob's server. The end result is that Alice obtains objects from Bob, but they may be buried deep in history. Alice is less likely to notice. Both of these attacks are fairly hard to pull off. There's a social component in getting Mallory to convince Alice to work with her. Alice may be prompted for credentials in accessing Bob's repository (but not always, if she is using a credential helper that caches). Attack (1) requires a certain amount of obliviousness on Alice's part while making a new commit. Attack (2) requires that Mallory knows a sha1 in Bob's repository, that Bob's server supports dumb http, and that the object in question is loose on Bob's server. But we can probably make things a bit more obvious without any loss of functionality. This patch does two things to that end. First, when we encounter a whole-repo redirect during the initial ref discovery, we now inform the user on stderr, making attack (1) much more obvious. Second, the decision to follow redirects is now configurable. The truly paranoid can set the new http.followRedirects to false to avoid any redirection entirely. But for a more practical default, we will disallow redirects only after the initial ref discovery. This is enough to thwart attacks similar to (2), while still allowing the common use of redirects at the repository level. Since c93c92f30 (http: update base URLs when we see redirects, 2013-09-28) we re-root all further requests from the redirect destination, which should generally mean that no further redirection is necessary. As an escape hatch, in case there really is a server that needs to redirect individual requests, the user can set http.followRedirects to "true" (and this can be done on a per-server basis via http.*.followRedirects config). Reported-by: Jann Horn <jannh@google.com> Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
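(Simplified sketch of how such a boolean knob could be read in a git config callback; the option described above additionally supports an initial-ref-discovery-only default, so treat this as illustrative rather than the actual implementation.)

    static int http_follow_config = 1; /* hypothetical default: follow redirects */

    static int http_options(const char *var, const char *value, void *cb)
    {
        if (!strcmp(var, "http.followredirects")) {
            http_follow_config = git_config_bool(var, value);
            return 0;
        }
        return git_default_config(var, value, cb);
    }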
2016-12-06remote-curl: rename shadowed options variableJeff King1-9/+9
The discover_refs() function has a local "options" variable to hold the http_get_options we pass to http_get_strbuf(). But this shadows the global "struct options" that holds our program-level options, which cannot be accessed from this function. Let's give the local one a more descriptive name so we can tell the two apart. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-11-18remote-curl: don't hang when a server dies before any outputDavid Turner1-0/+8
In the event that an HTTP server closes the connection after giving a 200 but before giving any packets, we don't want to hang forever waiting for a response that will never come. Instead, we should die immediately. One case where this happens is when attempting to fetch a dangling object by its object name. In this case, the server dies before sending any data. Prior to this patch, fetch-pack would wait for data from the server, and remote-curl would wait for fetch-pack, causing a deadlock. Despite this patch, there is other possible malformed input that could cause the same deadlock (e.g. a half-finished pktline, or a pktline but no trailing flush). There are a few possible solutions to this: 1. Allowing remote-curl to tell fetch-pack about the EOF (so that fetch-pack could know that no more data is coming until it says something else). This is tricky because an out-of-band signal would be required, or the http response would have to be re-framed inside another layer of pkt-line or something. 2. Make remote-curl understand some of the protocol. It turns out that in addition to understanding pkt-line, it would need to watch for ack/nak. This is somewhat fragile, as information about the protocol would end up in two places. Also, pkt-lines which are already at the length limit would need special handling. Both of these solutions would require a fair amount of work, whereas this hack is easy and solves at least some of the problem. Still to do: it would be good to give a better error message than "fatal: The remote end hung up unexpectedly". Signed-off-by: David Turner <dturner@twosigma.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-10-10Merge branch 'nd/shallow-deepen'Junio C Hamano1-31/+49
The existing "git fetch --depth=<n>" option was hard to use correctly when making the history of an existing shallow clone deeper. A new option, "--deepen=<n>", has been added to make this easier to use. "git clone" also learned "--shallow-since=<date>" and "--shallow-exclude=<tag>" options to make it easier to specify "I am interested only in the recent N months worth of history" and "Give me only the history since that version". * nd/shallow-deepen: (27 commits) fetch, upload-pack: --deepen=N extends shallow boundary by N commits upload-pack: add get_reachable_list() upload-pack: split check_unreachable() in two, prep for get_reachable_list() t5500, t5539: tests for shallow depth excluding a ref clone: define shallow clone boundary with --shallow-exclude fetch: define shallow boundary with --shallow-exclude upload-pack: support define shallow boundary by excluding revisions refs: add expand_ref() t5500, t5539: tests for shallow depth since a specific date clone: define shallow clone boundary based on time with --shallow-since fetch: define shallow boundary with --shallow-since upload-pack: add deepen-since to cut shallow repos based on time shallow.c: implement a generic shallow boundary finder based on rev-list fetch-pack: use a separate flag for fetch in deepening mode fetch-pack.c: mark strings for translating fetch-pack: use a common function for verbose printing fetch-pack: use skip_prefix() instead of starts_with() upload-pack: move rev-list code out of check_non_tip() upload-pack: make check_non_tip() clean things up on error upload-pack: tighten number parsing at "deepen" lines ...
2016-07-06Merge branch 'jk/common-main-2.8' into jk/common-mainJunio C Hamano1-4/+1
* jk/common-main-2.8: mingw: declare main()'s argv as const common-main: call git_setup_gettext() common-main: call restore_sigpipe_to_default() common-main: call sanitize_stdfds() common-main: call git_extract_argv0_path() add an extra level of indirection to main()
2016-07-01common-main: call git_setup_gettext()Jeff King1-2/+0
This should be part of every program, as otherwise users do not get translated error messages. However, some external commands forgot to do so (e.g., git-credential-store). This fixes them, and eliminates the repeated code in programs that did remember to use it. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-07-01common-main: call git_extract_argv0_path()Jeff King1-1/+0
Every program which links against libgit.a must call this function, or risk hitting an assert() in system_path() that checks whether we have configured argv0_path (though only when RUNTIME_PREFIX is defined, so essentially only on Windows). Looking at the diff, you can see that putting it into the common main() saves us having to do it individually in each of the external commands. But what you can't see are the cases where we _should_ have been doing so, but weren't (e.g., git-credential-store, and all of the t/helper test programs). This has been an accident-waiting-to-happen for a long time, but wasn't triggered until recently because it involves one of those programs actually calling system_path(). That happened with git-credential-store in v2.8.0 with ae5f677 (lazily load core.sharedrepository, 2016-03-11). The program: - takes a lock file, which... - opens a tempfile, which... - calls adjust_shared_perm to fix permissions, which... - lazy-loads the config (as of ae5f677), which... - calls system_path() to find the location of /etc/gitconfig On systems with RUNTIME_PREFIX, this means credential-store reliably hits that assert() and cannot be used. We never noticed in the test suite, because we set GIT_CONFIG_NOSYSTEM there, which skips the system_path() lookup entirely. But if we were to tweak git_config() to find /etc/gitconfig even when we aren't going to open it, then the test suite shows multiple failures (for credential-store, and for some other test helpers). I didn't include that tweak here because it's way too specific to this particular call to be worth carrying around what is essentially dead code. The implementation is fairly straightforward, with one exception: there is exactly one caller (git.c) that actually cares about the result of the function, and not the side-effect of setting up argv0_path. We can accommodate that by simply replacing the value of argv[0] in the array we hand down to cmd_main(). Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-07-01add an extra level of indirection to main()Jeff King1-1/+1
There are certain startup tasks that we expect every git process to do. In some cases this is just to improve the quality of the program (e.g., setting up gettext()). In others it is a requirement for using certain functions in libgit.a (e.g., system_path() expects that you have called git_extract_argv0_path()). Most commands are builtins and are covered by the git.c version of main(). However, there are still a few external commands that use their own main(). Each of these has to remember to include the correct startup sequence, and we are not always consistent. Rather than just fix the inconsistencies, let's make this harder to get wrong by providing a common main() that can run this standard startup. We basically have two options to do this: - the compat/mingw.h file already does something like this by adding a #define that replaces the definition of main with a wrapper that calls mingw_startup(). The upside is that the code in each program doesn't need to be changed at all; it's rewritten on the fly by the preprocessor. The downside is that it may make debugging of the startup sequence a bit more confusing, as the preprocessor is quietly inserting new code. - the builtin functions are all of the form cmd_foo(), and git.c's main() calls them. This is much more explicit, which may make things more obvious to somebody reading the code. It's also more flexible (because of course we have to figure out _which_ cmd_foo() to call). The downside is that each of the builtins must define cmd_foo(), instead of just main(). This patch chooses the latter option, preferring the more explicit approach, even though it is more invasive. We introduce a new file common-main.c, with the "real" main. It expects to call cmd_main() from whatever other objects it is linked against. We link common-main.o against anything that links against libgit.a, since we know that such programs will need to do this setup. Note that common-main.o can't actually go inside libgit.a, as the linker would not pick up its main() function automatically (it has no callers). The rest of the patch is just adjusting all of the various external programs (mostly in t/helper) to use cmd_main(). I've provided a global declaration for cmd_main(), which means that all of the programs also need to match its signature. In particular, many functions need to switch to "const char **" instead of "char **" for argv. This effect ripples out to a few other variables and functions, as well. This makes the patch even more invasive, but the end result is much better. We should be treating argv strings as const anyway, and now all programs conform to the same signature (which also matches the way builtins are defined). Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-06-13fetch, upload-pack: --deepen=N extends shallow boundary by N commitsNguyễn Thái Ngọc Duy1-1/+13
In git-fetch, the --depth argument is always relative to the latest remote refs. This makes it a bit difficult to cover this use case, where the user wants to make the shallow history, say 3 levels deeper. It would work if remote refs have not moved yet, but nobody can guarantee that, especially when that use case is performed a couple of months after the last clone or "git fetch --depth". Also, modifying the shallow boundary using --depth does not work well with clones created by --since or --not. This patch fixes that. A new argument --deepen=<N> will add <N> more (*) parent commits to the current history regardless of where remote refs are. Have/Want negotiation is still respected. So if remote refs move, the server will send two chunks: one between "have" and "want" and another to extend shallow history. In theory, the client could send no "want"s in order to get the second chunk only. But the protocol does not allow that. Either you send no want lines, which means ls-remote; or you have to send at least one want line that carries deep-relative to the server. The main work was done by Dongcan Jiang. I fixed it up here and there. And of course all the bugs belong to me. (*) We could even support --deepen=<N> where <N> is negative. In that case we can cut some history from the shallow clone. This operation (and --depth=<shorter depth>) does not require interaction with the remote side (and is more complicated to implement as a result). Helped-by: Duy Nguyen <pclouds@gmail.com> Helped-by: Eric Sunshine <sunshine@sunshineco.com> Helped-by: Junio C Hamano <gitster@pobox.com> Signed-off-by: Dongcan Jiang <dongcan.jiang@gmail.com> Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-06-13fetch: define shallow boundary with --shallow-excludeNguyễn Thái Ngọc Duy1-0/+9
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-06-13fetch: define shallow boundary with --shallow-sinceNguyễn Thái Ngọc Duy1-2/+9
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-06-13remote-curl.c: convert fetch_git() to use argv_arrayNguyễn Thái Ngọc Duy1-28/+18
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-04-27http: support sending custom HTTP headersJohannes Schindelin1-2/+2
We introduce a way to send custom HTTP headers with all requests. This allows us, for example, to send an extra token from build agents for temporary access to private repositories. (This is the use case that triggered this patch.) This feature can be used like this: git -c http.extraheader='Secret: sssh!' fetch $URL $REF Note that `curl_easy_setopt(..., CURLOPT_HTTPHEADER, ...)` takes only a single list, overriding any previous call. This means we have to collect _all_ of the headers we want to use into a single list, and feed it to cURL in one shot. Since we already unconditionally set a "pragma" header when initializing the curl handles, we can add our new headers to that list. For callers which override the default header list (like probe_rpc), we provide `http_copy_default_headers()` so they can do the same trick. Big thanks to Jeff King and Junio Hamano for their outstanding help and patient reviews. Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de> Reviewed-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
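(Sketch of the single-list pattern the message describes: curl replaces, rather than appends to, the header list on each CURLOPT_HTTPHEADER call, so everything has to be collected first. The "Secret:" header mirrors the example above; error handling is omitted.)

    struct curl_slist *headers = NULL;

    headers = curl_slist_append(headers, "Pragma: no-cache");
    headers = curl_slist_append(headers, "Secret: sssh!"); /* from http.extraheader */
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);   /* one shot, overrides any earlier list */
    /* ... perform the request ... */
    curl_slist_free_all(headers);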
2016-02-26Merge branch 'jk/tighten-alloc'Junio C Hamano1-13/+13
Update various codepaths to avoid manually-counted malloc(). * jk/tighten-alloc: (22 commits) ewah: convert to REALLOC_ARRAY, etc convert ewah/bitmap code to use xmalloc diff_populate_gitlink: use a strbuf transport_anonymize_url: use xstrfmt git-compat-util: drop mempcpy compat code sequencer: simplify memory allocation of get_message test-path-utils: fix normalize_path_copy output buffer size fetch-pack: simplify add_sought_entry fast-import: simplify allocation in start_packfile write_untracked_extension: use FLEX_ALLOC helper prepare_{git,shell}_cmd: use argv_array use st_add and st_mult for allocation size computation convert trivial cases to FLEX_ARRAY macros use xmallocz to avoid size arithmetic convert trivial cases to ALLOC_ARRAY convert manual allocations to argv_array argv-array: add detach function add helpers for allocating flex-array structs harden REALLOC_ARRAY and xcalloc against size_t overflow tree-diff: catch integer overflow in combine_diff_path allocation ...
2016-02-24Merge branch 'sp/remote-curl-ssl-strerror'Junio C Hamano1-2/+14
Help those who debug http(s) part of the system. * sp/remote-curl-ssl-strerror: remote-curl: include curl_errorstr on SSL setup failures
2016-02-24Merge branch 'ew/force-ipv4'Junio C Hamano1-0/+13
"git fetch" and friends that make network connections can now be told to only use ipv4 (or ipv6). * ew/force-ipv4: connect & http: support -4 and -6 switches for remote operations
2016-02-22convert trivial cases to ALLOC_ARRAYJeff King1-1/+2
Each of these cases can be converted to use ALLOC_ARRAY or REALLOC_ARRAY, which has two advantages: 1. It automatically checks the array-size multiplication for overflow. 2. It always uses sizeof(*array) for the element-size, so that it can never go out of sync with the declared type of the array. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
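(Illustration of the before/after shape of such a conversion, with hypothetical names.)

    /* before: the multiplication can overflow and the sizeof can drift from the type */
    targets = xmalloc(nr_heads * sizeof(char *));

    /* after: overflow-checked, and sizeof(*targets) always matches the declared type */
    ALLOC_ARRAY(targets, nr_heads);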
2016-02-22convert manual allocations to argv_arrayJeff King1-12/+11
There are many manual argv allocations that predate the argv_array API. Switching to that API brings a few advantages: 1. We no longer have to manually compute the correct final array size (so it's one less thing we can screw up). 2. In many cases we had to make a separate pass to count, then allocate, then fill in the array. Now we can do it in one pass, making the code shorter and easier to follow. 3. argv_array handles memory ownership for us, making it more obvious when things should be free()d and when not. Most of these cases are pretty straightforward. In some, we switch from "run_command_v" to "run_command", which lets us directly use the argv_array embedded in "struct child_process". Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
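(Illustrative use of the argv_array embedded in "struct child_process", as mentioned above; the fetch-pack arguments and the depth/url variables are hypothetical.)

    struct child_process client = CHILD_PROCESS_INIT;

    argv_array_push(&client.args, "fetch-pack");
    argv_array_pushf(&client.args, "--depth=%d", depth);
    argv_array_push(&client.args, url);
    client.git_cmd = 1;
    /* the argv_array owns its strings and is NULL-terminated for us */
    if (run_command(&client))
        die("fetch-pack failed");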
2016-02-15remote-curl: include curl_errorstr on SSL setup failuresShawn Pearce1-2/+14
For curl error 35 (CURLE_SSL_CONNECT_ERROR) users need the additional text stored in CURLOPT_ERRORBUFFER to debug why the connection did not start. This is curl_errorstr inside of http.c, so include that in the message if it is non-empty. Sometimes HTTP response codes aren't yet available, such as when the SSL setup fails. Don't include HTTP 0 in the message. Signed-off-by: Shawn Pearce <spearce@spearce.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-02-12connect & http: support -4 and -6 switches for remote operationsEric Wong1-0/+13
Sometimes it is necessary to force IPv4-only or IPv6-only operation on networks where name lookups may return a non-routable address and stall remote operations. The ssh(1) command has equivalent switches which we may pass when we run it. There may be old ssh(1) implementations out there which do not support these switches; they should report the appropriate error in that case. rsync support is untouched for now since it is deprecated and scheduled to be removed. Signed-off-by: Eric Wong <normalperson@yhbt.net> Reviewed-by: Torsten Bögershausen <tboegi@web.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
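(Hedged sketch of the curl side of such a switch; opt_ipv4 and opt_ipv6 are hypothetical flags, and the ssh transport would instead append -4 or -6 to the ssh command line.)

    if (opt_ipv4)
        curl_easy_setopt(result, CURLOPT_IPRESOLVE, CURL_IPRESOLVE_V4);
    else if (opt_ipv6)
        curl_easy_setopt(result, CURLOPT_IPRESOLVE, CURL_IPRESOLVE_V6);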
2016-01-15strbuf: introduce strbuf_getline_{lf,nul}()Junio C Hamano1-3/+3
The strbuf_getline() interface allows a byte other than LF or NUL as the line terminator, but this is only because I wrote these codepaths anticipating that there might be a value other than NUL and LF that could be useful when I introduced line_termination a long time ago. No useful caller that uses any other value has emerged. By now, it is clear that the interface is overly broad without a good reason. Many codepaths have a hardcoded preference to read either LF-terminated or NUL-terminated records from their input, and then call strbuf_getline() with LF or NUL as the third parameter. This step introduces two thin wrappers around strbuf_getline(), namely, strbuf_getline_lf() and strbuf_getline_nul(), and mechanically rewrites these call sites to call either one of them. The changes contained in this patch are: * introduction of these two functions in strbuf.[ch] * mechanical conversion of all callers to strbuf_getline() with either '\n' or '\0' as the third parameter to instead call the respective thin wrapper. After this step, output from "git grep 'strbuf_getline('" would become a lot smaller. An interim goal of this series is to make this an empty set, so that we can have strbuf_getline_crlf() take over the shorter name strbuf_getline(). Signed-off-by: Junio C Hamano <gitster@pobox.com>
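(Roughly what such thin wrappers look like, given the three-argument strbuf_getline() described above; a sketch, not the exact patch text.)

    static inline int strbuf_getline_lf(struct strbuf *sb, FILE *fp)
    {
        return strbuf_getline(sb, fp, '\n');
    }

    static inline int strbuf_getline_nul(struct strbuf *sb, FILE *fp)
    {
        return strbuf_getline(sb, fp, '\0');
    }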
2015-11-20parse_fetch: convert to use struct object_idbrian m. carlson1-6/+6
Convert the parse_fetch function to use struct object_id. Remove the strlen check as get_oid_hex will fail safely on receiving a too-short NUL-terminated string. Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Jeff King <peff@peff.net>
2015-11-20Convert struct ref to use object_id.brian m. carlson1-5/+5
Use struct object_id in three fields in struct ref and convert all the necessary places that use it. Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Jeff King <peff@peff.net>
2015-10-05use alloc_ref rather than hand-allocating "struct ref"Jeff King1-4/+1
This saves us some manual computation, and eliminates a call to strcpy. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-08-19push: support signing pushes iff the server supports itDave Borowitz1-5/+11
Add a new flag --sign=true (or --sign=false), which means the same thing as the original --signed (or --no-signed). Give it a third value --sign=if-asked to tell push and send-pack to send a push certificate if and only if the server advertised a push cert nonce. If not, warn the user that their push may not be as secure as they thought. Signed-off-by: Dave Borowitz <dborowitz@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-03-17Merge branch 'rs/deflate-init-cleanup'Junio C Hamano1-1/+0
Code simplification. * rs/deflate-init-cleanup: zlib: initialize git_zstream in git_deflate_init{,_gzip,_raw}
2015-03-05zlib: initialize git_zstream in git_deflate_init{,_gzip,_raw}René Scharfe1-1/+0
Clear the git_zstream variable at the start of git_deflate_init() etc. so that callers don't have to do that. Signed-off-by: Rene Scharfe <l.s.r@web.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-02-18Merge branch 'ye/http-accept-language'Junio C Hamano1-0/+2
Using environment variable LANGUAGE and friends on the client side, HTTP-based transports now send Accept-Language when making requests. * ye/http-accept-language: http: add Accept-Language header if possible
2015-02-17Merge branch 'jk/remote-curl-an-array-in-struct-cannot-be-null'Junio C Hamano1-1/+1
Fix a misspelled conditional that is always true. * jk/remote-curl-an-array-in-struct-cannot-be-null: do not check truth value of flex arrays
2015-01-28do not check truth value of flex arraysJeff King1-1/+1
There is no point in checking "!ref->name" when ref is a "struct ref". The name field is a flex-array, and therefore always has a non-zero address. This is almost certainly not hurting anything, but it does cause clang-3.6 to complain. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
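(Why the check is a no-op, sketched with an abbreviated struct layout.)

    struct ref {
        /* ... */
        char name[FLEX_ARRAY]; /* flexible array member at the end of the struct */
    };

    /* ref->name decays to &ref->name[0], which can never be NULL,
     * so this condition is always false (and clang-3.6 warns about it): */
    if (!ref->name)
        continue;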
2015-01-28http: add Accept-Language header if possibleYi EungJun1-0/+2
Add an Accept-Language header which indicates the user's preferred languages defined by $LANGUAGE, $LC_ALL, $LC_MESSAGES and $LANG. Examples: LANGUAGE= -> "" LANGUAGE=ko:en -> "Accept-Language: ko, en;q=0.9, *;q=0.1" LANGUAGE=ko LANG=en_US.UTF-8 -> "Accept-Language: ko, *;q=0.1" LANGUAGE= LANG=en_US.UTF-8 -> "Accept-Language: en-US, *;q=0.1" This gives git servers a chance to display remote error messages in the user's preferred language. Limit the number of languages to 1,000 because a q-value must not be smaller than 0.001, and limit the length of the Accept-Language header to 4,000 bytes because some HTTP servers cannot accept such a long header. Signed-off-by: Yi EungJun <eungjun.yi@navercorp.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-10-08Merge branch 'jc/push-cert'Junio C Hamano1-1/+12
Allow "git push" request to be signed, so that it can be verified and audited, using the GPG signature of the person who pushed, that the tips of branches at a public repository really point the commits the pusher wanted to, without having to "trust" the server. * jc/push-cert: (24 commits) receive-pack::hmac_sha1(): copy the entire SHA-1 hash out signed push: allow stale nonce in stateless mode signed push: teach smart-HTTP to pass "git push --signed" around signed push: fortify against replay attacks signed push: add "pushee" header to push certificate signed push: remove duplicated protocol info send-pack: send feature request on push-cert packet receive-pack: GPG-validate push certificates push: the beginning of "git push --signed" pack-protocol doc: typofix for PKT-LINE gpg-interface: move parse_signature() to where it should be gpg-interface: move parse_gpg_output() to where it should be send-pack: clarify that cmds_sent is a boolean send-pack: refactor inspecting and resetting status and sending commands send-pack: rename "new_refs" to "need_pack_data" receive-pack: factor out capability string generation send-pack: factor out capability string generation send-pack: always send capabilities send-pack: refactor decision to send update per ref send-pack: move REF_STATUS_REJECT_NODELETE logic a bit higher ...
2014-09-19Merge branch 'da/styles'Junio C Hamano1-1/+1
* da/styles: stylefix: asterisks stick to the variable, not the type
2014-09-19Merge branch 'jk/send-pack-many-refspecs'Junio C Hamano1-1/+7
The number of refs that can be pushed at once over smart HTTP was limited by the command line length. The limitation has been lifted by passing these refs from the standard input of send-pack. * jk/send-pack-many-refspecs: send-pack: take refspecs over stdin
2014-09-17signed push: teach smart-HTTP to pass "git push --signed" aroundJunio C Hamano1-1/+12
The "--signed" option received by "git push" is first passed to the transport layer, which the native transport directly uses to notice that a push certificate needs to be sent. When the transport-helper is involved, however, the option needs to be told to the helper with set_helper_option(), and the helper needs to take necessary action. For the smart-HTTP helper, the "necessary action" involves spawning the "git send-pack" subprocess with the "--signed" option. Once the above all gets wired in, the smart-HTTP transport now can use the push certificate mechanism to authenticate its pushes. Add a test that is modeled after tests for the native transport in t5534-push-signed.sh to t5541-http-push-smart.sh. Update the test Apache configuration to pass GNUPGHOME environment variable through. As PassEnv would trigger warnings for an environment variable that is not set, export it from test-lib.sh set to a harmless value when GnuPG is not being used in the tests. Note that the added test is deliberately loose and does not check the nonce in this step. This is because the stateless RPC mode is inevitably flaky and a nonce that comes back in the actual push processing is one issued by a different process; if the two interactions with the server crossed a second boundary, the nonces will not match and such a check will fail. A later patch in the series will work around this shortcoming. Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-09-02stylefix: asterisks stick to the variable, not the typeDavid Aguilar1-1/+1
Signed-off-by: David Aguilar <davvid@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-08-26send-pack: take refspecs over stdinJeff King1-1/+7
Pushing a large number of refs works over most transports, because we implement send-pack as an internal function. However, it can sometimes fail when pushing over http, because we have to spawn "git send-pack --stateless-rpc" to do the heavy lifting, and we pass each refspec on the command line. This can cause us to overflow the OS limits on the size of the command line for a large push. We can solve this by giving send-pack a --stdin option and using it from remote-curl. We already dealt with this on the fetch-pack side in 078b895 (fetch-pack: new --stdin option to read refs from stdin, 2012-04-02). The stdin option (and in particular, its use of packet-lines for stateless-rpc input) is modeled after that solution. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-08-20run-command: introduce CHILD_PROCESS_INITRené Scharfe1-2/+1
Most struct child_process variables are cleared using memset first after declaration. Provide a macro, CHILD_PROCESS_INIT, that can be used to initialize them statically instead. That's shorter, doesn't require a function call and is slightly more readable (especially given that we already have STRBUF_INIT, ARGV_ARRAY_INIT etc.). Helped-by: Johannes Sixt <j6t@kdbg.org> Signed-off-by: Rene Scharfe <l.s.r@web.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
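(The before/after pattern the message describes, as a minimal sketch.)

    /* before */
    struct child_process cp;
    memset(&cp, 0, sizeof(cp));

    /* after */
    struct child_process cp = CHILD_PROCESS_INIT;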
2014-07-21Merge branch 'jk/remote-curl-squelch-extra-errors'Junio C Hamano1-8/+6
* jk/remote-curl-squelch-extra-errors: remote-curl: mark helper-protocol errors more clearly remote-curl: use error instead of fprintf(stderr) remote-curl: do not complain on EOF from parent git
2014-07-10remote-curl: mark helper-protocol errors more clearlyJeff King1-4/+4
When we encounter an error in remote-curl, we generally just report it to stderr. There is no need for the user to care that the "could not connect to server" error was generated by git-remote-https rather than a function in the parent git-fetch process. However, when the error is in the protocol between git and the helper, it makes sense to clearly identify which side is complaining. These cases shouldn't ever happen, but when they do, we can make them less confusing by being more verbose. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-07-10remote-curl: use error instead of fprintf(stderr)Jeff King1-5/+5
We usually prefix our error messages with "error: ", but many error messages from remote-curl are simply printed with fprintf. This can make the output a little harder to read (especially because such messages may be intermingled with errors from the parent git process). There is no reason to avoid error(), as we are already calling it in many places (in addition to libgit.a functions which use it). While we're adjusting the messages, we can also drop the capitalization which makes them unlike other git error messages. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-07-10remote-curl: do not complain on EOF from parent gitJeff King1-2/+0
The parent git process is supposed to send us an empty line to indicate that the conversation is over. However, the parent process may die() if there is a problem with the operation (e.g., we try to fetch a ref that does not exist). In this case, it produces a useful message, but then remote-curl _also_ produces an unhelpful message: $ git pull origin matser fatal: couldn't find remote ref matser Unexpected end of command stream The "right" way to fix this is to teach the parent git to always cleanly close the connection to the helper, letting it know that we are done. Implementing that is rather clunky, though, as it would involve either replacing die() operations with returning errors up the stack (until we disconnect the transport), or adding an atexit handler to clean up any transport helpers left open. It's much simpler to just suppress the EOF message in remote-curl. It was not added to address any real-world situation in the first place, but rather a "we should probably report unexpected things" suggestion[1]. It is the parent git which drives the operation, and whose exit value actually matters. If the parent dies, then the helper has no need to complain (except as a debugging aid). In the off chance that the pipe is closed without the parent dying, it can still notice the non-zero exit code. [1] http://article.gmane.org/gmane.comp.version-control.git/176036 Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-06-20use skip_prefix to avoid repeating stringsJeff King1-7/+8
It's a common idiom to match a prefix and then skip past it with strlen, like: if (starts_with(foo, "bar")) foo += strlen("bar"); This avoids magic numbers, but means we have to repeat the string (and there is no compiler check that we didn't make a typo in one of the strings). We can use skip_prefix to handle this case without repeating ourselves. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
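(Sketch of the idiom; the "option " prefix and the set_option() caller are hypothetical.)

    const char *arg;

    /* before: the prefix is spelled twice and the copies can drift apart */
    if (starts_with(line, "option "))
        set_option(line + strlen("option "));

    /* after: match and skip in one step */
    if (skip_prefix(line, "option ", &arg))
        set_option(arg);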
2014-05-27remote-curl: reencode http error messagesJeff King1-7/+10
We currently recognize an error message with a content-type "text/plain; charset=utf-16" as text, but we ignore the charset parameter entirely. Let's encode it to log_output_encoding, which is presumably something the user's terminal can handle. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-05-27http: extract type/subtype portion of content-typeJeff King1-1/+1
When we get a content-type from curl, we get the whole header line, including any parameters, and without any normalization (like downcasing or whitespace) applied. If we later try to match it with strcmp() or even strcasecmp(), we may get false negatives. This could cause two visible behaviors: 1. We might fail to recognize a smart-http server by its content-type. 2. We might fail to relay text/plain error messages to users (especially if they contain a charset parameter). This patch teaches the http code to extract and normalize just the type/subtype portion of the string. This is technically passing out less information to the callers, who can no longer see the parameters. But none of the current callers cares, and a future patch will add back an easier-to-use method for accessing those parameters. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-02-18http: never use curl_easy_performJeff King1-4/+1
We currently don't reuse http connections when fetching via the smart-http protocol. This is bad because the TCP handshake introduces latency, and especially because SSL connection setup may be non-trivial. We can fix it by consistently using curl's "multi" interface. The reason is rather complicated: Our http code has two ways of being used: queuing many "slots" to be fetched in parallel, or fetching a single request in a blocking manner. The parallel code is built on curl's "multi" interface. Most of the single-request code uses http_request, which is built on top of the parallel code (we just feed it one slot, and wait until it finishes). However, one could also accomplish the single-request scheme by avoiding curl's multi interface entirely and just using curl_easy_perform. This is simpler, and is used by post_rpc in the smart-http protocol. It does work to use the same curl handle in both contexts, as long as it is not at the same time. However, internally curl may not share all of the cached resources between both contexts. In particular, a connection formed using the "multi" code will go into a reuse pool connected to the "multi" object. Further requests using the "easy" interface will not be able to reuse that connection. The smart http protocol does ref discovery via http_request, which uses the "multi" interface, and then follows up with the "easy" interface for its rpc calls. As a result, we make two HTTP connections rather than reusing a single one. We could teach the ref discovery to use the "easy" interface. But it is only once we have done this discovery that we know whether the protocol will be smart or dumb. If it is dumb, then our further requests, which want to fetch objects in parallel, will not be able to reuse the same connection. Instead, this patch switches post_rpc to build on the parallel interface, which means that we use it consistently everywhere. It's a little more complicated to use, but since we have the infrastructure already, it doesn't add any code; we can just factor out the relevant bits from http_request. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-01-17Merge branch 'nd/shallow-clone'Junio C Hamano1-5/+30
Fetching from a shallow-cloned repository used to be forbidden, primarily because the codepaths involved were not carefully vetted and we did not bother supporting such usage. This attempts to allow object transfer out of a shallow-cloned repository in a controlled way (i.e. the receiver become a shallow repository with truncated history). * nd/shallow-clone: (31 commits) t5537: fix incorrect expectation in test case 10 shallow: remove unused code send-pack.c: mark a file-local function static git-clone.txt: remove shallow clone limitations prune: clean .git/shallow after pruning objects clone: use git protocol for cloning shallow repo locally send-pack: support pushing from a shallow clone via http receive-pack: support pushing to a shallow clone via http smart-http: support shallow fetch/clone remote-curl: pass ref SHA-1 to fetch-pack as well send-pack: support pushing to a shallow clone receive-pack: allow pushes that update .git/shallow connected.c: add new variant that runs with --shallow-file add GIT_SHALLOW_FILE to propagate --shallow-file to subprocesses receive/send-pack: support pushing from a shallow clone receive-pack: reorder some code in unpack() fetch: add --update-shallow to accept refs that update .git/shallow upload-pack: make sure deepening preserves shallow roots fetch: support fetching from a shallow repository clone: support remote shallow repository ...
2013-12-17Merge branch 'cc/starts-n-ends-with'Junio C Hamano1-7/+7
Remove a few duplicate implementations of prefix/suffix comparison functions, and rename them to starts_with and ends_with. * cc/starts-n-ends-with: replace {pre,suf}fixcmp() with {starts,ends}_with() strbuf: introduce starts_with() and ends_with() builtin/remote: remove postfixcmp() and use suffixcmp() instead environment: normalize use of prefixcmp() by removing " != 0"
2013-12-10smart-http: support shallow fetch/cloneNguyễn Thái Ngọc Duy1-4/+28
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-12-10remote-curl: pass ref SHA-1 to fetch-pack as wellNguyễn Thái Ngọc Duy1-1/+2
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-12-10connect.c: teach get_remote_heads to parse "shallow" linesNguyễn Thái Ngọc Duy1-1/+1
No callers pass a non-empty pointer as shallow_points at this stage. As a result, all clients still refuse to talk to a shallow repository on the other end. Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-12-05replace {pre,suf}fixcmp() with {starts,ends}_with()Christian Couder1-7/+7
Leaving only the function definitions and declarations so that any new topic in flight can still make use of the old functions, replace existing uses of the prefixcmp() and suffixcmp() with new API functions. The change can be recreated by mechanically applying this: $ git grep -l -e prefixcmp -e suffixcmp -- \*.c | grep -v strbuf\\.c | xargs perl -pi -e ' s|!prefixcmp\(|starts_with\(|g; s|prefixcmp\(|!starts_with\(|g; s|!suffixcmp\(|ends_with\(|g; s|suffixcmp\(|!ends_with\(|g; ' on the result of preparatory changes in this series. Signed-off-by: Christian Couder <chriscool@tuxfamily.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-12-05Merge branch 'bc/http-100-continue'Junio C Hamano1-10/+21
Issue "100 Continue" responses to help use of GSS-Negotiate authentication scheme over HTTP transport when needed. * bc/http-100-continue: remote-curl: fix large pushes with GSSAPI remote-curl: pass curl slot_results back through run_slot http: return curl's AUTHAVAIL via slot_results
2013-10-31remote-curl: fix large pushes with GSSAPIBrian M. Carlson1-2/+9
Due to an interaction between the way libcurl handles GSSAPI authentication over HTTP and the way git uses libcurl, large pushes (those over http.postBuffer bytes) would fail due to an authentication failure requiring a rewind of the curl buffer. Such a rewind was not possible because the data did not fit into the entire buffer. Enable the use of the Expect: 100-continue header for large requests where the server offers GSSAPI authentication to avoid this issue, since the request would otherwise fail. This allows git to get the authentication data right before sending the pack contents. Existing cases where pushes would succeed, including small requests using GSSAPI, still disable the use of 100 Continue, as it causes problems for some remote HTTP implementations (servers and proxies). Signed-off-by: Brian M. Carlson <sandals@crustytoothpaste.net> Signed-off-by: Jeff King <peff@peff.net>
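(Hedged sketch of the header toggle described above; use_gssapi and large_request are hypothetical conditions. Appending a bare "Expect:" header is the usual way to suppress curl's automatic 100-continue behavior.)

    if (use_gssapi && large_request)
        headers = curl_slist_append(headers, "Expect: 100-continue");
    else
        headers = curl_slist_append(headers, "Expect:");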
2013-10-31remote-curl: pass curl slot_results back through run_slotJeff King1-9/+13
Some callers may want to know more than just the integer error code we return. Let them optionally pass a slot_results struct to fill in (or NULL if they do not care). In either case we continue to return the integer code. We can also give probe_rpc the same treatment (since it builds directly on run_slot). Signed-off-by: Jeff King <peff@peff.net>
2013-10-30Merge branch 'jk/http-auth-redirects'Junio C Hamano1-28/+41
Handle the case where http transport gets redirected during the authorization request better. * jk/http-auth-redirects: http.c: Spell the null pointer as NULL remote-curl: rewrite base url from info/refs redirects remote-curl: store url as a strbuf remote-curl: make refs_url a strbuf http: update base URLs when we see redirects http: provide effective url to callers http: hoist credential request out of handle_curl_result http: refactor options to http_get_* http_request: factor out curlinfo_strbuf http_get_file: style fixes
2013-10-14remote-curl: rewrite base url from info/refs redirectsJeff King1-0/+4
For efficiency and security reasons, an earlier commit in this series taught http_get_* to re-write the base url based on redirections we saw while making a specific request. This commit wires that option into the info/refs request, meaning that a redirect from http://example.com/foo.git/info/refs to https://example.com/bar.git/info/refs will behave as if "https://example.com/bar.git" had been provided to git in the first place. The tests bear some explanation. We introduce two new hierarchies into the httpd test config: 1. Requests to /smart-redir-limited will work only for the initial info/refs request, but not any subsequent requests. As a result, we can confirm whether the client is re-rooting its requests after the initial contact, since otherwise it will fail (it will ask for "repo.git/git-upload-pack", which is not redirected). 2. Requests to smart-redir-auth will redirect, and require auth after the redirection. Since we are using the redirected base for further requests, we also update the credential struct, in order not to mislead the user (or credential helpers) about which credential is needed. We can therefore check the GIT_ASKPASS prompts to make sure we are prompting for the new location. Because we have neither multiple servers nor https support in our test setup, we can only redirect between paths, meaning we need to turn on credential.useHttpPath to see the difference. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
2013-10-14remote-curl: store url as a strbufJeff King1-19/+19
We use a strbuf to generate the string containing the remote URL, but then detach it to a bare pointer. This makes it harder to later manipulate the URL, as we have forgotten the length (and the allocation semantics are not as clear). Let's instead keep the strbuf around. As a bonus, this eliminates a confusing double-use of the "buf" strbuf in main(). Prior to this, it was used both for constructing the url, and for reading commands from stdin. The downside is that we have to update each call site to refer to "url.buf" rather than just "url" when they want the C string. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
2013-10-14remote-curl: make refs_url a strbufJeff King1-8/+7
In the discover_refs function, we use a strbuf named "buffer" for multiple purposes. First we build the info/refs URL in it, and then detach that to a bare pointer. Then, we use the same strbuf to store the result of fetching the refs. Let's instead keep a separate refs_url strbuf. This is less confusing, as the "buffer" strbuf is now used for only one thing. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
2013-10-14http: hoist credential request out of handle_curl_resultJeff King1-1/+6
When we are handling a curl response code in http_request or in the remote-curl RPC code, we use the handle_curl_result helper to translate curl's response into an easy-to-use code. When we see an HTTP 401, we do one of two things: 1. If we already had a filled-in credential, we mark it as rejected, and then return HTTP_NOAUTH to indicate to the caller that we failed. 2. If we didn't, then we ask for a new credential and tell the caller HTTP_REAUTH to indicate that they may want to try again. Rejecting in the first case makes sense; it is the natural result of the request we just made. However, prompting for more credentials in the second step does not always make sense. We do not know for sure that the caller is going to make a second request, and nor are we sure that it will be to the same URL. Logically, the prompt belongs not to the request we just finished, but to the request we are (maybe) about to make. In practice, it is very hard to trigger any bad behavior. Currently, if we make a second request, it will always be to the same URL (even in the face of redirects, because curl handles the redirects internally). And we almost always retry on HTTP_REAUTH these days. The one exception is if we are streaming a large RPC request to the server (e.g., a pushed packfile), in which case we cannot restart. It's extremely unlikely to see a 401 response at this stage, though, as we would typically have seen it when we sent a probe request, before streaming the data. This patch drops the automatic prompt out of case 2, and instead requires the caller to do it. This is a few extra lines of code, and the bug it fixes is unlikely to come up in practice. But it is conceptually cleaner, and paves the way for better handling of credentials across redirects. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
2013-09-30http: refactor options to http_get_*Jeff King1-2/+7
Over time, the http_get_strbuf function has grown several optional parameters. We now have a bitfield with multiple boolean options, as well as an optional strbuf for returning the content-type of the response. And a future patch in this series is going to add another strbuf option. Treating these as separate arguments has a few downsides: 1. Most call sites need to add extra NULLs and 0s for the options they aren't interested in. 2. The http_get_* functions are actually wrappers around 2 layers of low-level implementation functions. We have to pass these options through individually. 3. The http_get_strbuf wrapper learned these options, but nobody bothered to do so for http_get_file, even though it is backed by the same function that does understand the options. Let's consolidate the options into a single struct. For the common case of the default options, we'll allow callers to simply pass a NULL for the options struct. The resulting code is often a few lines longer, but it ends up being easier to read (and to change as we add new options, since we do not need to update each call site). Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
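(Illustrative call under the consolidated-options scheme; the exact field names and the refs_url/buffer variables are assumptions based on the description above.)

    struct http_get_options options;
    struct strbuf type = STRBUF_INIT;

    memset(&options, 0, sizeof(options));
    options.content_type = &type;  /* fill in only the options this caller cares about */
    options.no_cache = 1;

    if (http_get_strbuf(refs_url, &buffer, &options) != HTTP_OK)
        die("unable to access '%s'", refs_url);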
2013-09-09Merge branch 'jc/push-cas'Junio C Hamano1-0/+12
Allow a safer "rewind of the remote tip" push than blind "--force", by requiring that the overwritten remote ref to be unchanged since the new history to replace it was prepared. The machinery is more or less ready. The "--force" option is again the big red button to override any safety, thanks to J6t's sanity (the original round allowed --lockref to defeat --force). The logic to choose the default implemented here is fragile (e.g. "git fetch" after seeing a failure will update the remote-tracking branch and will make the next "push" pass, defeating the safety pretty easily). It is suitable only for the simplest workflows, and it may hurt users more than it helps them. * jc/push-cas: push: teach --force-with-lease to smart-http transport send-pack: fix parsing of --force-with-lease option t5540/5541: smart-http does not support "--force-with-lease" t5533: test "push --force-with-lease" push --force-with-lease: tie it all together push --force-with-lease: implement logic to populate old_sha1_expect[] remote.c: add command line option parser for "--force-with-lease" builtin/push.c: use OPT_BOOL, not OPT_BOOLEAN cache.h: move remote/connect API out of it
2013-09-09Merge branch 'nd/clone-connectivity-shortcut'Junio C Hamano1-1/+14
* nd/clone-connectivity-shortcut: smart http: use the same connectivity check on cloning
2013-08-02push: teach --force-with-lease to smart-http transportJunio C Hamano1-1/+15
We have been passing enough information to enable the compare-and-swap logic down to the transport layer, but the transport helper was not passing it to smart-http transport. Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-07-23smart http: use the same connectivity check on cloningNguyễn Thái Ngọc Duy1-1/+14
This is an extension of c6807a4 (clone: open a shortcut for connectivity check - 2013-05-26) to reduce the cost of the connectivity check at clone time, this time for the smart http protocol. Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-07-09remote-http: use argv-arrayJunio C Hamano1-16/+16
Instead of using a hand-managed argument array, use the argv-array API to manage the dynamically formulated command line. Signed-off-by: Junio C Hamano <gitster@pobox.com>
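A small before/after illustration of the change described here; the specific arguments are made up for the example, and the calls use the argv-array API of that era (ARGV_ARRAY_INIT, argv_array_push() and friends).

    static void build_args_old_style(const char *url)
    {
            const char *argv[20];
            int argc = 0;

            argv[argc++] = "fetch-pack";
            argv[argc++] = "--stateless-rpc";
            argv[argc++] = url;
            argv[argc] = NULL;      /* easy to get wrong as options are added */
    }

    static void build_args_with_argv_array(const char *url)
    {
            struct argv_array args = ARGV_ARRAY_INIT;

            argv_array_push(&args, "fetch-pack");
            argv_array_push(&args, "--stateless-rpc");
            argv_array_push(&args, url);
            /* args.argv is NULL-terminated; release it with argv_array_clear() */
            argv_array_clear(&args);
    }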
2013-04-06remote-curl: die directly with http error messagesJeff King1-2/+1
When we encounter an unknown http error (e.g., a 403), we hand the error code to http_error, which then prints it with error(). After that we die with the redundant message "HTTP request failed". Instead, let's just drop http_error entirely, which does nothing but pass arguments to error(), and instead die directly with a useful message. So before: $ git clone https://example.com/repo.git Cloning into 'repo'... error: unable to access 'https://example.com/repo.git': The requested URL returned error: 403 Forbidden fatal: HTTP request failed and after: $ git clone https://example.com/repo.git Cloning into 'repo'... fatal: unable to access 'https://example.com/repo.git': The requested URL returned error: 403 Forbidden Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-04-06http: simplify http_error helper functionJeff King1-1/+1
This helper function should really be a one-liner that prints an error message, but it has ended up unnecessarily complicated: 1. We call error() directly when we fail to start the curl request, so we must later avoid printing a duplicate error in http_error(). It would be much simpler in this case to just stuff the error message into our usual curl_errorstr buffer rather than printing it ourselves. This means that http_error does not even have to care about curl's exit value (the interesting part is in the errorstr buffer already). 2. We return the "ret" value passed in to us, but none of the callers actually cares about our return value. We can just drop this entirely. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
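After both simplifications the helper can be little more than the following sketch (the real function lives in http.c and its exact signature may differ):

    /* sketch: curl's message is already in curl_errorstr, so just report it */
    static void http_error(const char *url)
    {
            error("%s while accessing %s", curl_errorstr, url);
    }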
2013-04-06remote-curl: consistently report repo url for http errorsJeff King1-2/+2
When we report http errors in fetching the initial ref advertisement, we show the full URL we attempted to use, including "info/refs?service=git-upload-pack". While this may be useful for debugging a broken server, it is unnecessarily verbose and confusing for most cases, in which the client user is not even the same person as the owner of the repository. Let's just show the repository URL; debugging can happen with GIT_CURL_VERBOSE, which shows way more useful information, anyway. At the same time, let's also make sure to mention the repository URL when we report failed authentication (previously we said only "Authentication failed"). Knowing the URL can help the user realize why authentication failed (e.g., they meant to push to remote A, not remote B). Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-04-06remote-curl: always show friendlier 404 messageJeff King1-4/+2
When we get an http 404 trying to get the initial list of refs from the server, we try to be helpful and remind the user that update-server-info may need to be run. This looks like: $ git clone https://github.com/non/existent Cloning into 'existent'... fatal: https://github.com/non/existent/info/refs?service=git-upload-pack not found: did you run git update-server-info on the server? Suggesting update-server-info may be a good suggestion for users who are in control of the server repo and who are planning to set up dumb http. But for users of smart http, and especially users who are not in control of the server repo, the advice is useless and confusing. Since most people are expected to use smart http these days, it does not make sense to keep the update-server-info hint. We not only drop the mention of update-server-info, but also show only the main repo URL, not the full "info/refs" and service parameter. These elements may be useful for debugging a broken server configuration, but in the majority of cases, users are not fetching from their own repositories, but rather from other people's repositories; they have neither the power nor interest to fix a broken configuration, and the extra components just make the message more confusing. Users who do want to debug can and should use GIT_CURL_VERBOSE to get more complete information on the actual URLs visited. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-04-06remote-curl: let servers override http 404 adviceJeff King1-1/+2
When we get an http 404 trying to get the initial list of refs from the server, we try to be helpful and remind the user that update-server-info may need to be run. This looks like: $ git clone https://github.com/non/existent Cloning into 'existent'... fatal: https://github.com/non/existent/info/refs?service=git-upload-pack not found: did you run git update-server-info on the server? Suggesting update-server-info may be a good suggestion for users who are in control of the server repo and who are planning to set up dumb http. But for users of smart http, and especially users who are not in control of the server repo, the advice is useless and confusing. The previous patch taught remote-curl to show custom advice from the server when it is available. When we have shown messages from the server, we can also drop our custom advice; what the server has to say is likely to be more accurate and helpful. We not only drop the mention of update-server-info, but also show only the main repo URL, not the full "info/refs" and service parameter. These elements may be useful for debugging a broken server configuration, but again, anything the server has provided is likely to be more useful (and one can still use GIT_CURL_VERBOSE to get much more complete debugging information). Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-04-06remote-curl: show server content on http errorsJeff King1-1/+32
If an http request to a remote git server fails, we show only the http response code, or sometimes a custom message for particular codes. This gives the server no opportunity to offer a more detailed explanation of the reason for the failure, or to give extra advice. This patch teaches remote-curl to record and display the body content of a failed http response. We only display such responses when the content-type is advertised as text/plain, as it is the most likely to look presentable on the user's terminal (and it is hoped to be a good indication that the message is intended for git clients, and not for a web browser). Each line of the new output is prepended with "remote:". Example output may look like this (assuming the server is configured to display such a helpful message): $ GIT_SMART_HTTP=0 git clone https://example.com/some/repo.git Cloning into 'repo'... remote: Sorry, fetching via dumb http is forbidden. remote: Please upgrade your git client to v1.6.6 or greater remote: and make sure that smart-http is enabled. error: The requested URL returned error: 403 while accessing http://localhost:5001/some/repo.git/info/refs fatal: HTTP request failed For the sake of simplicity, we only record and display these errors during the initial fetch of the ref list, as that is the initial contact with the server and where the most common, interesting errors happen (and there is already precedent, as that is the only place we currently massage http error codes into more helpful messages). Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
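Conceptually the new error path does something like the sketch below; show_http_message() is a hypothetical name, and starts_with() stands in for whatever prefix check the real code uses.

    static void show_http_message(const char *content_type,
                                  const char *body, size_t len)
    {
            const char *p = body, *end = body + len;

            /* only show bodies the server marked as plain text */
            if (!content_type || !starts_with(content_type, "text/plain"))
                    return;

            while (p < end) {
                    const char *eol = memchr(p, '\n', end - p);
                    size_t linelen = eol ? (size_t)(eol - p) : (size_t)(end - p);

                    fprintf(stderr, "remote: %.*s\n", (int)linelen, p);
                    p += linelen + 1;
            }
    }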
2013-02-24remote-curl: always parse incoming refsJeff King1-9/+13
When remote-curl receives a list of refs from a server, it keeps the whole buffer intact. When we get a "list" command, we feed the result to get_remote_heads, and when we get a "fetch" or "push" command, we feed it to fetch-pack or send-pack, respectively. If the HTTP response from the server is truncated for any reason, we will get an incomplete ref advertisement. If we then feed this incomplete list to fetch-pack, one of a few things may happen: 1. If the truncation is in a packet header, fetch-pack will notice the bogus line and complain. 2. If the truncation is inside a packet, fetch-pack will keep waiting for us to send the rest of the packet, which we never will. 3. If the truncation is at a packet boundary, fetch-pack will keep waiting for us to send the next packet, which we never will. As a result, fetch-pack hangs, waiting for input. However, remote-curl believes it has sent all of the advertisement, and therefore waits for fetch-pack to speak. The two processes end up in a deadlock. We do notice the broken ref list if we feed it to get_remote_heads. So if git asks the helper to do a "list" followed by a "fetch", we are safe; we'll abort during the list operation, which parses the refs. This patch teaches remote-curl to always parse and save the incoming ref list when we read the ref advertisement from a server. That means that we will always verify and abort before even running fetch-pack (or send-pack) when reading a corrupted list, even if we do not run the "list" command explicitly. Since we save the result, in the common case of running "list" then "fetch", we do not do any extra parsing at all. In the case of just a "fetch", we do an extra round of parsing, but only once. Note also that the "fetch" case will now also initialize server_capabilities from the remote (in remote-curl; we already would do so inside fetch-pack). Doing "list+fetch" already does this. It doesn't actually matter now, but the new behavior is arguably more correct, should remote-curl ever start caring about the server's capability list. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-02-24remote-curl: move ref-parsing code up in fileJeff King1-59/+59
The ref-parsing functions are static. Let's move them up in the file to be available to more functions, which will help us with later refactoring. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-02-24remote-curl: pass buffer straight to get_remote_headsJeff King1-24/+2
Until recently, get_remote_heads only knew how to read refs from a file descriptor. To hack around this, we spawned a thread (or forked a process) to write the buffer back to us. Now that we can just pass it our buffer directly, we don't have to use this hack anymore. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-02-24teach get_remote_heads to read from a memory bufferJeff King1-1/+1
Now that we can read packet data from memory as easily as a descriptor, get_remote_heads can take either one as a source. This will allow further refactoring in remote-curl. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-02-24pkt-line: share buffer/descriptor reading implementationJeff King1-12/+10
The packet_read function reads from a descriptor. The packet_get_line function is similar, but reads from an in-memory buffer, and uses a completely separate implementation. This patch teaches the generic packet_read function to accept either source, and we can do away with packet_get_line's implementation. There are two other differences to account for between the old and new functions. The first is that we used to read into a strbuf, but now read into a fixed size buffer. The only two callers are fine with that, and in fact it simplifies their code, since they can use the same static-buffer interface as the rest of the packet_read_line callers (and we provide a similar convenience wrapper for reading from a buffer rather than a descriptor). This is technically an externally-visible behavior change in that we used to accept arbitrary sized packets up to 65532 bytes, and now cap out at LARGE_PACKET_MAX, 65520. In practice this doesn't matter, as we use it only for parsing smart-http headers (of which there is exactly one defined, and it is small and fixed-size). And any extension headers would be breaking the protocol to go over LARGE_PACKET_MAX anyway. The other difference is that packet_get_line would return on error rather than dying. However, both callers of packet_get_line are actually improved by dying. The first caller does its own error checking, but we can drop that; as a result, we'll actually get more specific reporting about protocol breakage when packet_read dies internally. The only downside is that packet_read will not print the smart-http URL that failed, but that's not a big deal; anybody not debugging can already see the remote's URL already, and anybody debugging would want to run with GIT_CURL_VERBOSE anyway to see way more information. The second caller, which is just trying to skip past any extra smart-http headers (of which there are none defined, but which we allow to keep room for future expansion), did not error check at all. As a result, it would treat an error just like a flush packet. The resulting mess would generally cause an error later in get_remote_heads, but now we get error reporting much closer to the source of the problem. Brown-paper-bag-fixes-by: Ramsay Jones <ramsay@ramsay1.demon.co.uk> Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
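The resulting API shape is roughly the following; the wrapper names and the options flag are examples of the pattern described above, not necessarily the exact declarations in pkt-line.h.

    /*
     * One reader for both sources: pass a file descriptor (with src_buf == NULL),
     * or an in-memory buffer whose pointer/length are advanced as data is consumed.
     */
    int packet_read(int fd, char **src_buf, size_t *src_len,
                    char *buffer, unsigned size, int options);

    /* thin convenience wrappers built on top of it */
    char *packet_read_line(int fd, int *len_out);
    char *packet_read_line_buf(char **src_buf, size_t *src_len, int *len_out);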
2013-02-20pkt-line: teach packet_read_line to chomp newlinesJeff King1-3/+3
The packets sent during ref negotiation are all terminated by newline; even though the code to chomp these newlines is short, we end up doing it in a lot of places. This patch teaches packet_read_line to auto-chomp the trailing newline; this lets us get rid of a lot of inline chomping code. As a result, some call-sites which are not reading line-oriented data (e.g., when reading chunks of packfiles alongside sideband) transition away from packet_read_line to the generic packet_read interface. This patch converts all of the existing callsites. Since the function signature of packet_read_line does not change (but its behavior does), there is a possibility of new callsites being introduced in later commits, silently introducing an incompatibility. However, since a later patch in this series will change the signature, such a commit would have to be merged directly into this commit, not to the tip of the series; we can therefore ignore the issue. This is an internal cleanup and should produce no change of behavior in the normal case. However, there is one corner case to note. Callers of packet_read_line have never been able to tell the difference between a flush packet ("0000") and an empty packet ("0004"), as both cause packet_read_line to return a length of 0. Readers treat them identically, even though Documentation/technical/protocol-common.txt says we must not; it also says that implementations should not send an empty pkt-line. By stripping out the newline before the result gets to the caller, we will now treat the newline-only packet ("0005\n") the same as an empty packet, which in turn gets treated like a flush packet. In practice this doesn't matter, as neither empty nor newline-only packets are part of git's protocols (at least not for the line-oriented bits, and readers who are not expecting line-oriented packets will be calling packet_read directly, anyway). But even if we do decide to care about the distinction later, it is orthogonal to this patch. The right place to tighten would be to stop treating empty packets as flush packets, and this change does not make doing so any harder. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
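The practical effect at line-oriented call sites is a before/after along these lines (illustrative fragments, not the literal diff):

    /* before: every line-oriented caller trimmed the trailing newline itself */
    len = packet_read_line(fd, buf, sizeof(buf));
    if (len && buf[len - 1] == '\n')
            buf[--len] = 0;

    /* after: the newline is already chomped when the call returns */
    len = packet_read_line(fd, buf, sizeof(buf));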
2013-02-20pkt-line: drop safe_write functionJeff King1-2/+2
This is just write_or_die by another name. The one distinction is that write_or_die will treat EPIPE specially by suppressing error messages. That's fine, as we die by SIGPIPE anyway (and in the off chance that it is disabled, write_or_die will simulate it). Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-02-04Verify Content-Type from smart HTTP serversShawn Pearce1-5/+12
Before parsing a suspected smart-HTTP response verify the returned Content-Type matches the standard. This protects a client from attempting to process a payload that smells like a smart-HTTP server response. JGit has been doing this check on all responses since the dawn of time. I mistakenly failed to include it in git-core when smart HTTP was introduced. At the time I didn't know how to get the Content-Type from libcurl. I punted, meant to circle back and fix this, and just plain forgot about it. Signed-off-by: Shawn Pearce <spearce@spearce.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
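The check amounts to building the expected media type from the service name and comparing it against what the server sent; a sketch under the assumption of a helper like the one below (the real code in remote-curl.c is structured differently):

    static void check_smart_content_type(const char *content_type,
                                         const char *service)
    {
            struct strbuf expected = STRBUF_INIT;

            /* a smart server must advertise exactly this type for its response */
            strbuf_addf(&expected, "application/x-%s-advertisement", service);
            if (!content_type || strcmp(content_type, expected.buf))
                    die("invalid content-type %s from server",
                        content_type ? content_type : "(none)");
            strbuf_release(&expected);
    }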
2012-11-21Merge branch 'jk/maint-http-half-auth-fetch'Junio C Hamano1-1/+1
Finishing touches to squelch a compiler warning. * jk/maint-http-half-auth-fetch: remote-curl.c: Fix a compiler warning
2012-11-21remote-curl.c: Fix a compiler warningRamsay Jones1-1/+1
In particular, gcc issues an "'gzip_size' might be used uninitialized" warning (-Wuninitialized). However, this warning is a false positive, since the 'gzip_size' variable would not, in fact, be used uninitialized. In order to suppress the warning, we simply initialise the variable to zero in its declaration. Signed-off-by: Ramsay Jones <ramsay@ramsay1.demon.co.uk> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-11-20Merge branch 'jk/maint-http-half-auth-fetch'Junio C Hamano1-8/+17
Fixes fetch from servers that ask for auth only during the actual packing phase. This is not really a recommended configuration, but it cleans up the code at the same time. * jk/maint-http-half-auth-fetch: remote-curl: retry failed requests for auth even with gzip remote-curl: hoist gzip buffer size to top of post_rpc
2012-10-31remote-curl: retry failed requests for auth even with gzipJeff King1-1/+10
Commit b81401c taught the post_rpc function to retry the http request after prompting for credentials. However, it did not handle two cases: 1. If we have a large request, we do not retry. That's OK, since we would have sent a probe (with retry) already. 2. If we are gzipping the request, we do not retry. That was considered OK, because the intended use was for push (e.g., listing refs is OK, but actually pushing objects is not), and we never gzip on push. This patch teaches post_rpc to retry even a gzipped request. This has two advantages: 1. It is possible to configure a "half-auth" state for fetching, where the set of refs and their sha1s are advertised, but one cannot actually fetch objects. This is not a recommended configuration, as it leaks some information about what is in the repository (e.g., an attacker can try brute-forcing possible content in your repository and checking whether it matches your branch sha1). However, it can be slightly more convenient, since a no-op fetch will not require a password at all. 2. It future-proofs us should we decide to ever gzip more requests. Signed-off-by: Jeff King <peff@peff.net>
2012-10-31remote-curl: hoist gzip buffer size to top of post_rpcJeff King1-7/+7
When we gzip the post data for a smart-http rpc request, we compute the gzip body and its size inside the "use_gzip" conditional. We keep track of the body after the conditional ends, but not the size. Let's remember both, which will enable us to retry failed gzip requests in a future patch. Signed-off-by: Jeff King <peff@peff.net>
2012-10-29Merge branch 'jk/maint-http-init-not-in-result-handler'Jeff King1-8/+9
Further clean-up to the http codepath that picks up results after cURL library is done with one request slot. * jk/maint-http-init-not-in-result-handler: http: do not set up curl auth after a 401 remote-curl: do not call run_slot repeatedly
2012-10-16Merge branch 'jk/maint-http-half-auth-push'Junio C Hamano1-1/+1
Fixes a regression in maint-1.7.11 (v1.7.11.7), maint (v1.7.12.1) and master (v1.8.0-rc0). * jk/maint-http-half-auth-push: http: fix segfault in handle_curl_result
2012-10-12http: do not set up curl auth after a 401Jeff King1-1/+1
When we get an http 401, we prompt for credentials and put them in our global credential struct. We also feed them to the curl handle that produced the 401, with the intent that they will be used on a retry. When the code was originally introduced in commit 42653c0, this was a necessary step. However, since dfa1725, we always feed our global credential into every curl handle when we initialize the slot with get_active_slot. So every further request already feeds the credential to curl. Moreover, accessing the slot here is somewhat dubious. After the slot has produced a response, we don't actually control it any more. If we are using curl_multi, it may even have been re-initialized to handle a different request. It just so happens that we will reuse the curl handle within the slot in such a case, and that because we only keep one global credential, it will be the one we want. So the current code is not buggy, but it is misleading. By cleaning it up, we can remove the slot argument entirely from handle_curl_result, making it much more obvious that slots should not be accessed after they are marked as finished. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-10-12remote-curl: do not call run_slot repeatedlyJeff King1-7/+8
Commit b81401c (http: prompt for credentials on failed POST) taught post_rpc to call run_slot in a loop in order to retry a request after asking the user for credentials. However, after a call to run_slot we will have called finish_active_slot. This means we have released the slot, and we should no longer look at it. As it happens, this does not cause any bugs in the current code, since we know that we are not using curl_multi in this code path, and therefore nobody will have taken over our slot in the meantime. However, it is good form to actually call get_active_slot again. It also future proofs us against changes in the http code. We can do this by jumping back to a retry label at the top of our function. We just need to reorder a few setup lines that should not be repeated; everything else within the loop is either idempotent, needs to be repeated, or in a path we do not follow (e.g., we do not even try when large_request is set, because we don't know how much data we might have streamed from our helper program). Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
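The resulting control flow is the classic label-and-goto retry shape, heavily condensed here; the function and field names follow remote-curl.c loosely and are not the exact code.

    static int post_rpc_retry_sketch(struct rpc_state *rpc, int large_request)
    {
            struct active_request_slot *slot;
            struct slot_results results;
            int err;

    retry:
            slot = get_active_slot();       /* take a fresh slot for every attempt */
            curl_easy_setopt(slot->curl, CURLOPT_URL, rpc->service_url);
            /* ... the rest of the per-attempt request setup is repeated here ... */

            err = run_slot(slot, &results);
            if (err == HTTP_REAUTH && !large_request) {
                    credential_fill(&http_auth);    /* ask the user, then try again */
                    goto retry;
            }
            return err;
    }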
2012-10-12http: fix segfault in handle_curl_resultJeff King1-1/+1
When we create an http active_request_slot, we can set its "results" pointer back to local storage. The http code will fill in the details of how the request went, and we can access those details even after the slot has been cleaned up. Commit 8809703 (http: factor out http error code handling) switched us from accessing our local results struct directly to accessing it via the "results" pointer of the slot. That means we're accessing the slot after it has been marked as finished, defeating the whole purpose of keeping the results storage separate. Most of the time this doesn't matter, as finishing the slot does not actually clean up the pointer. However, when using curl's multi interface with the dumb-http revision walker, we might actually start a new request before handing control back to the original caller. In that case, we may reuse the slot, zeroing its results pointer, and leading the original caller to segfault while looking for its results inside the slot. Instead, we need to pass a pointer to our local results storage to the handle_curl_result function, rather than relying on the pointer in the slot struct. This matches what the original code did before the refactoring (which did not use a separate function, and therefore just accessed the results struct directly). Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
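In code terms the fix is to keep the results storage local and hand it to the handler directly instead of re-reading it through the slot; a simplified before/after sketch, not the literal diff:

    struct slot_results results;
    struct active_request_slot *slot = get_active_slot();

    slot->results = &results;
    run_active_slot(slot);

    /* before: the slot may already have been reused by the time we look */
    ret = handle_curl_result(slot);

    /* after: our own copy stays valid no matter what happens to the slot */
    ret = handle_curl_result(&results);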
2012-09-29Merge branch 'jk/smart-http-switch'Junio C Hamano1-4/+5
Allows users to turn off smart-http when talking to dumb-only servers. * jk/smart-http-switch: remote-curl: let users turn off smart http remote-curl: rename is_http variable
2012-09-29Merge branch 'sp/maint-http-enable-gzip'Junio C Hamano1-2/+2
Allows a more common 'gzip' Accept-Encoding to be used. * sp/maint-http-enable-gzip: Enable info/refs gzip decompression in HTTP client
2012-09-21remote-curl: let users turn off smart httpJeff King1-1/+2
Usually there is no need for users to specify whether an http remote is smart or dumb; the protocol is designed so that a single initial request is made, and the client can determine the server's capability from the response. However, some misconfigured dumb-only servers may not like the initial request by a smart client, as it contains a query string. Until recently, commit 703e6e7 worked around this by making a second request. However, that commit was recently reverted due to its side effect of masking the initial request's error code. Since git has had that workaround for several years, we don't know exactly how many such misconfigured servers are out there. The reversion of 703e6e7 assumes they are rare enough not to worry about. Still, that reversion leaves somebody who does run into such a server with no escape hatch at all. Let's give them an environment variable they can tweak to perform the "dumb" request. This is intentionally not a documented interface. It's overly simple and is really there for debugging in case somebody does complain about git not working with their server. A real user-facing interface would entail a per-remote or per-URL config variable. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
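The escape hatch boils down to consulting the environment variable before deciding whether to send the smart probe; a minimal sketch, assuming the usual git_env_bool() helper:

    static int use_smart_http(void)
    {
            /* unset => smart http as usual; GIT_SMART_HTTP=0 forces the dumb request */
            return git_env_bool("GIT_SMART_HTTP", 1);
    }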
2012-09-20remote-curl: rename is_http variableJeff King1-3/+3
We don't actually care whether the connection is http or not; what we care about is whether it might be smart http. Rename the variable to be more accurate, which will make it easier to later make smart-http optional. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-09-20Enable info/refs gzip decompression in HTTP clientShawn O. Pearce1-2/+2
Some HTTP servers try to use gzip compression on the /info/refs request to save transfer bandwidth. Repositories with many tags may find the /info/refs request can be gzipped to be 50% of the original size due to the few but often repeated bytes used (hex SHA-1 and commonly digits in tag names). For most HTTP requests enable "Accept-Encoding: gzip" ensuring the /info/refs payload can use this encoding format. Only request gzip encoding from servers. Although deflate is supported by libcurl, most servers have standardized on gzip encoding for compression as that is what most browsers support. Asking for deflate increases request sizes by a few bytes, but is unlikely to ever be used by a server. Disable the Accept-Encoding header on probe RPCs as response bodies are supposed to be exactly 4 bytes long, "0000". The HTTP headers requesting and indicating compression use more space than the data transferred in the body. Signed-off-by: Shawn O. Pearce <spearce@spearce.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
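With libcurl this is essentially one option per request, switched off for the probe; a sketch using the CURLOPT_ENCODING option of that era (exact placement in http.c and remote-curl.c differs):

    /* normal requests: let the server gzip the response body */
    curl_easy_setopt(slot->curl, CURLOPT_ENCODING, "gzip");

    /* probe RPCs: the body is just "0000", so skip the Accept-Encoding header */
    curl_easy_setopt(slot->curl, CURLOPT_ENCODING, NULL);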
2012-09-20Revert "retry request without query when info/refs?query fails"Shawn O. Pearce1-16/+2
This reverts commit 703e6e76a14825e5b0c960d525f34e607154b4f7. Retrying without the query parameter was added as a workaround for a single broken HTTP server at git.debian.org[1]. The server was misconfigured to route every request with a query parameter into gitweb.cgi. Admins fixed the server's configuration within 16 hours of the bug report to the Git mailing list, but we still patched Git with this fallback and have been paying for it since. Most Git hosting services configure the smart HTTP protocol and the retry logic confuses users when there is a transient HTTP error as Git dropped the real error from the smart HTTP request. Removing the retry makes root causes easier to identify. [1] http://thread.gmane.org/gmane.comp.version-control.git/137609 Signed-off-by: Shawn O. Pearce <spearce@spearce.org> Acked-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-08-27http: prompt for credentials on failed POSTJeff King1-8/+15
All of the smart-http GET requests go through the http_get_* functions, which will prompt for credentials and retry if we see an HTTP 401. POST requests, however, do not go through any central point. Moreover, it is difficult to retry in the general case; we cannot assume the request body fits in memory or is even seekable, and we don't know how much of it was consumed during the attempt. Most of the time, this is not a big deal; for both fetching and pushing, we make a GET request before doing any POSTs, so typically we figure out the credentials during the first request, then reuse them during the POST. However, some servers may allow a client to get the list of refs from receive-pack without authentication, and then require authentication when the client actually tries to POST the pack. This is not ideal, as the client may do a non-trivial amount of work to generate the pack (e.g., delta-compressing objects). However, for a long time it has been the recommended example configuration in git-http-backend(1) for setting up a repository with anonymous fetch and authenticated push. This setup has always been broken without putting a username into the URL. Prior to commit 986bbc0, it did work with a username in the URL, because git would prompt for credentials before making any requests at all. However, post-986bbc0, it is totally broken. Since it has been advertised in the manpage for some time, we should make sure it works. Unfortunately, it is not as easy as simply calling post_rpc again when it fails, due to the input issue mentioned above. However, we can still make this specific case work by retrying in two specific instances: 1. If the request is large (bigger than LARGE_PACKET_MAX), we will first send a probe request with a single flush packet. Since this request is static, we can freely retry it. 2. If the request is small and we are not using gzip, then we have the whole thing in-core, and we can freely retry. That means we will not retry in some instances, including: 1. If we are using gzip. However, we only do so when calling git-upload-pack, so it does not apply to pushes. 2. If we have a large request, the probe succeeds, but then the real POST wants authentication. This is an extremely unlikely configuration and not worth worrying about. While it might be nice to cover those instances, doing so would be significantly more complex for very little real-world gain. In the long run, we will be much better off when curl learns to internally handle authentication as a callback, and we can cleanly handle all cases that way. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
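The retry decision described above can be summed up in a small predicate; can_retry_post() is a hypothetical helper for illustration, not a function in remote-curl.c.

    static int can_retry_post(int is_probe, size_t body_len, int use_gzip)
    {
            if (is_probe)
                    return 1;       /* the probe is a single static flush packet */
            /* small, un-gzipped bodies are still fully in core and can be resent */
            return body_len <= LARGE_PACKET_MAX && !use_gzip;
    }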
2012-05-10Merge branch 'jk/maint-push-progress' into maintJunio C Hamano1-0/+1
"git push" over smart-http lost progress output a few releases ago. By Jeff King * jk/maint-push-progress: t5541: test more combinations of --progress teach send-pack about --[no-]progress send-pack: show progress when isatty(2)