path: root/fetch-pack.c
Age  Commit message  Author  Files  Lines
2019-01-10  upload-pack: teach deepen-relative in protocol v2  (Jonathan Tan, 1 file, -0/+2)
Commit 685fbd3291 ("fetch-pack: perform a fetch using v2", 2018-03-15) attempted to teach Git deepen-relative in protocol v2 (among other things), but it didn't work:

(1) fetch-pack.c needs to emit "deepen-relative".
(2) upload-pack.c needs to ensure that the correct deepen_relative variable is passed to deepen() (there are two - the static variable and the one in struct upload_pack_data).
(3) Before deepen() computes the list of reachable shallows, it first needs to mark all "our" refs as OUR_REF. v2 currently does not do this, because unlike v0, it is not needed otherwise.

Fix all this and include a test demonstrating that it works now. For (2), the static variable deepen_relative is also eliminated, removing a source of confusion.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Reviewed-by: Josh Steadmon <steadmon@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
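For illustration, the client-side half of item (1) is tiny; a minimal sketch of emitting the capability when building the v2 request (the helper name and surrounding buffer handling are assumptions, not the verbatim patch):

    #include "cache.h"
    #include "pkt-line.h"
    #include "fetch-pack.h"

    /*
     * Sketch only: ask the server to interpret the deepen request
     * relative to the client's existing shallow boundary.
     */
    static void add_deepen_relative_arg(struct strbuf *req_buf,
                                        const struct fetch_pack_args *args)
    {
        if (args->deepen_relative)
            packet_buf_write(req_buf, "deepen-relative\n");
    }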
2019-01-10  fetch-pack: do not take shallow lock unnecessarily  (Jonathan Tan, 1 file, -2/+9)
When fetching using protocol v2, the remote may send a "shallow-info" section if the client is shallow. If so, Git as the client currently takes the shallow file lock, even if the "shallow-info" section is empty. This is not a problem except that Git does not support taking the shallow file lock after modifying the shallow file, because is_repository_shallow() stores information that is never cleared. And this take-after-modify occurs when Git does a tag-following fetch from a shallow repository on a transport that does not support tag following (since in this case, 2 fetches are performed). To solve this issue, take the shallow file lock (and perform all other shallow processing) only if the "shallow-info" section is non-empty; otherwise, behave as if it were empty. A full solution (probably, ensuring that any action of committing shallow file locks also includes clearing the information stored by is_repository_shallow()) would solve the issue without need for this patch, but this patch is independently useful (as an optimization to prevent writing a file in an unnecessary case), hence why I wrote it. I have included a NEEDSWORK outlining the full solution. Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-01-04  Merge branch 'jk/loose-object-cache'  (Junio C Hamano, 1 file, -37/+2)
Code clean-up with optimization for the codepath that checks (non-)existence of loose objects.

* jk/loose-object-cache:
  odb_load_loose_cache: fix strbuf leak
  fetch-pack: drop custom loose object cache
  sha1-file: use loose object cache for quick existence check
  object-store: provide helpers for loose_objects_cache
  sha1-file: use an object_directory for the main object dir
  handle alternates paths the same as the main object dir
  sha1_file_name(): overwrite buffer instead of appending
  rename "alternate_object_database" to "object_directory"
  submodule--helper: prefer strip_suffix() to ends_with()
  fsck: do not reuse child_process structs
2019-01-02  pack-protocol.txt: accept error packets in any context  (Masaya Suzuki, 1 file, -4/+4)
In the Git pack protocol definition, an error packet may appear only in certain contexts. However, servers can hit a runtime error (e.g. an I/O error) at any time. This patch changes the protocol to allow an error packet to be sent in place of any packet. Without this protocol spec change, when a server cannot process a request, there is no way to tell the client why; since the server cannot produce a valid response, it is forced to cut the connection without an explanation. With this change, the server can report the failure instead. An old client may treat such an error packet as an unexpected packet, but that is no worse than an unexpected EOF.

Following this protocol spec change, the error packet handling code is moved to pkt-line.c. Note that Git also uses pkt-line to communicate with subprocesses, which is not part of the Git protocol, so there a packet that is not supposed to be an error packet could be mistakenly parsed as one. For that reason, the error packet handling is enabled only for the Git pack protocol parsing code.

Signed-off-by: Masaya Suzuki <masayasuzuki@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
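To make "treating an error packet specially" concrete, here is a hedged sketch of the kind of check that ends up centralized in pkt-line.c; the function name and exact message are placeholders, not the real implementation:

    #include "cache.h"
    #include "pkt-line.h"

    /*
     * Sketch: after reading a pkt-line payload of `len` bytes into `buf`,
     * recognize the "ERR <message>" convention and surface it to the user.
     */
    static void maybe_die_on_err_packet(const char *buf, int len)
    {
        if (len >= 4 && starts_with(buf, "ERR "))
            die(_("remote error: %s"), buf + 4);
    }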
2019-01-02  Use packet_reader instead of packet_read_line  (Masaya Suzuki, 1 file, -27/+34)
By using and sharing a packet_reader while handling a Git pack protocol request, the same reader option is used throughout the code. This makes it easy to set a reader option to the request parsing code. Signed-off-by: Masaya Suzuki <masayasuzuki@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
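A rough sketch of the packet_reader pattern this change moves fetch-pack toward; the option flags shown are typical-caller assumptions, not the exact ones used at every call site:

    #include "cache.h"
    #include "pkt-line.h"

    /* Sketch: read pkt-lines from `fd` through a shared packet_reader. */
    static void read_section(int fd)
    {
        struct packet_reader reader;

        packet_reader_init(&reader, fd, NULL, 0,
                           PACKET_READ_CHOMP_NEWLINE |
                           PACKET_READ_DIE_ON_ERR_PACKET);

        while (packet_reader_read(&reader) == PACKET_READ_NORMAL)
            trace_printf("received: %s\n", reader.line);   /* consume reader.line */

        /* reader.status is now PACKET_READ_FLUSH, PACKET_READ_DELIM or PACKET_READ_EOF */
    }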
2018-11-13  fetch-pack: drop custom loose object cache  (Jeff King, 1 file, -37/+2)
Commit 024aa4696c (fetch-pack.c: use oidset to check existence of loose object, 2018-03-14) added a cache to avoid calling stat() for a bunch of loose objects we don't have. Now that OBJECT_INFO_QUICK handles this caching itself, we can drop the custom solution.

Note that this might perform slightly differently, as the original code stopped calling readdir() when we saw more loose objects than there were refs. So:

  1. The old code might have spent work on readdir() to fill the cache, but then decided there were too many loose objects, wasting that effort.

  2. The new code might spend a lot of time on readdir() if you have a lot of loose objects, even though there are very few objects to ask about.

In practice it probably won't matter either way; see the previous commit for some discussion of the tradeoff.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
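The "quick" existence check that replaces the custom cache boils down to a call of the following shape; this is a hedged sketch of the call, not the exact hunk:

    #include "cache.h"
    #include "object-store.h"

    /*
     * Sketch: check whether we already have `oid` locally without
     * re-scanning pack directories on every miss; OBJECT_INFO_QUICK
     * now also consults the shared loose-object cache.
     */
    static int have_object_quick(const struct object_id *oid)
    {
        return has_object_file_with_flags(oid, OBJECT_INFO_QUICK);
    }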
2018-11-01  fetch-pack: be more precise in parsing v2 response  (Jonathan Tan, 1 file, -0/+12)
Each section in a protocol v2 response is followed by either a DELIM packet (indicating more sections to follow) or a FLUSH packet (indicating none to follow). When parsing the "acknowledgments" section, do_fetch_pack_v2() is liberal in accepting both, and it determines whether to continue reading based solely on the contents of the "acknowledgments" section, not on whether DELIM or FLUSH was read. There is no issue with a protocol-compliant server, but this can result in confusing error messages when communicating with a server that serves unexpected additional sections. Consider a server that sends "new-section" after "acknowledgments":

 - client writes request
 - client reads the "acknowledgments" section which contains no "ready", then DELIM
 - since there was no "ready", client needs to continue negotiation, and writes request
 - client reads "new-section", and reports to the end user "expected 'acknowledgments', received 'new-section'"

For the person debugging the involved Git implementation(s), the error message is confusing in that "new-section" was not received in response to the latest request, but to the first one. One solution is to always continue reading after DELIM, but in this case, we can do better. We know from the protocol that "ready" means at least the packfile section is coming (hence, DELIM) and that no "ready" means that no sections are to follow (hence, FLUSH). So teach process_acks() to enforce this.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
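A hedged sketch of the enforcement described above (helper name and error wording are illustrative rather than the exact process_acks() code):

    #include "cache.h"
    #include "pkt-line.h"

    /*
     * Sketch: after reading the "acknowledgments" section, require the
     * terminating packet to match what the contents promised:
     * "ready"   => a packfile section follows (DELIM),
     * no "ready" => nothing follows (FLUSH).
     */
    static int check_ack_termination(struct packet_reader *reader, int received_ready)
    {
        if (received_ready && reader->status != PACKET_READ_DELIM)
            die(_("expected packfile to be sent after 'ready'"));
        if (!received_ready && reader->status != PACKET_READ_FLUSH)
            die(_("expected no other sections to be sent after no 'ready'"));
        return received_ready;
    }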
2018-10-19  Merge branch 'tb/filter-alternate-refs'  (Junio C Hamano, 1 file, -2/+1)
When pushing into a repository that borrows its objects from an alternate object store, "git receive-pack" that responds to the push request on the other side lists the tips of refs in the alternate to reduce the amount of objects transferred. This sometimes is detrimental when the number of refs in the alternate is absurdly large, in which case the bandwidth saved in potentially fewer objects transferred is wasted in excessively large ref advertisement. The alternate refs that are advertised are now configurable with a pair of configuration variables.

* tb/filter-alternate-refs:
  transport.c: introduce core.alternateRefsPrefixes
  transport.c: introduce core.alternateRefsCommand
  transport.c: extract 'fill_alternate_refs_command'
  transport: drop refnames from for_each_alternate_ref
2018-10-19  Merge branch 'jt/avoid-ls-refs'  (Junio C Hamano, 1 file, -1/+1)
Over some transports, fetching objects with an exact commit object name can be done without first seeing the ref advertisements. The code has been optimized to exploit this.

* jt/avoid-ls-refs:
  fetch: do not list refs if fetching only hashes
  transport: list refs before fetch if necessary
  transport: do not list refs if possible
  transport: allow skipping of ref listing
2018-10-19  Merge branch 'jt/non-blob-lazy-fetch'  (Junio C Hamano, 1 file, -42/+73)
A partial clone that is configured to lazily fetch missing objects will on-demand issue a "git fetch" request to the originating repository to fill not-yet-obtained objects. The request has been optimized for requesting a tree object (and not the leaf blob objects contained in it) by telling the originating repository that no blobs are needed.

* jt/non-blob-lazy-fetch:
  fetch-pack: exclude blobs when lazy-fetching trees
  fetch-pack: avoid object flags if no_dependents
2018-10-09  transport: drop refnames from for_each_alternate_ref  (Jeff King, 1 file, -2/+1)
None of the current callers use the refname parameter we pass to their callbacks. In theory somebody _could_ do so, but it's actually quite weird if you think about it: it's a ref in somebody else's repository. So the name has no meaning locally, and in fact there may be duplicates if there are multiple alternates. The users of this interface really only care about seeing some ref tips, since that promises that the alternate has the full commit graph reachable from there. So let's keep the information we pass back to the bare minimum. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Taylor Blau <me@ttaylorr.com> Acked-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-10-07  transport: do not list refs if possible  (Jonathan Tan, 1 file, -1/+1)
When all refs to be fetched are exact OIDs, it is possible to perform a fetch without requiring the remote to list refs if protocol v2 is used. Teach Git to do this. This currently has an effect only for lazy fetches done from partial clones. The change necessary to likewise optimize "git fetch <remote> <sha-1>" will be done in a subsequent patch. Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-10-04  fetch-pack: load tip_oids eagerly iff needed  (René Scharfe, 1 file, -21/+15)
tip_oids_contain() lazily loads refs into an oidset at its first call. It abuses the internal (sub)member .map.tablesize of that oidset to check if it has done that already. Determine if the oidset needs to be populated upfront and then do that instead. This duplicates a loop, but simplifies the existing one by separating concerns between the two. Signed-off-by: Rene Scharfe <l.s.r@web.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
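For readers unfamiliar with oidset, the "populate upfront, then query" shape the patch moves to looks roughly like this; the helper names are hypothetical, not the patch itself:

    #include "cache.h"
    #include "oidset.h"
    #include "remote.h"

    /*
     * Sketch: decide up front that the tip oidset is needed, fill it in
     * one pass over the refs, and then do only lookups afterwards --
     * instead of lazily populating it inside tip_oids_contain().
     */
    static void populate_tip_oids(struct oidset *tip_oids, struct ref *refs)
    {
        struct ref *ref;

        for (ref = refs; ref; ref = ref->next)
            oidset_insert(tip_oids, &ref->old_oid);
    }

    static int is_tip(struct oidset *tip_oids, const struct object_id *oid)
    {
        return oidset_contains(tip_oids, oid);
    }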
2018-10-04  fetch-pack: factor out is_unmatched_ref()  (René Scharfe, 1 file, -8/+11)
Move the code to determine if a request is unmatched to its own little helper. This allows us to reuse it in a subsequent patch. Signed-off-by: Rene Scharfe <l.s.r@web.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-10-04  fetch-pack: exclude blobs when lazy-fetching trees  (Jonathan Tan, 1 file, -0/+14)
A partial clone with missing trees can be obtained using "git clone --filter=tree:none <repo>". In such a repository, when a tree needs to be lazily fetched, any tree or blob it directly or indirectly references is fetched as well, regardless of whether the original command required those objects, or if the local repository already had some of them. This is because the fetch protocol, which the lazy fetch uses, does not allow clients to request that only the wanted objects be sent, which would be the ideal solution. This patch implements a partial solution: specify the "blob:none" filter, somewhat reducing the fetch payload. This change has no effect when lazily fetching blobs (due to how filters work). And if lazily fetching a commit (such repositories are difficult to construct and is not a use case we support very well, but it is possible), referenced commits and trees are still fetched - only the blobs are not fetched. The necessary code change is done in fetch_pack() instead of somewhere closer to where the "filter" instruction is written to the wire so that only one part of the code needs to be changed in order for users of all protocol versions to benefit from this optimization. Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-10-04  fetch-pack: avoid object flags if no_dependents  (Jonathan Tan, 1 file, -42/+59)
When fetch_pack() is invoked as part of another Git command (due to a lazy fetch from a partial clone, for example), it uses object flags that may already be used by the outer Git command. The commit that introduced the lazy fetch feature (88e2f9ed8e ("introduce fetch-object: fetch one promisor object", 2017-12-05)) tried to avoid this overlap, but it did not avoid it totally. It was successful in avoiding writing COMPLETE, but did not avoid reading COMPLETE, and did not avoid writing and reading ALTERNATE. Ensure that no flags are written or read by fetch_pack() in the case where it is used to perform a lazy fetch. To do this, it is sufficient to avoid checking completeness of wanted refs (unnecessary in the case of lazy fetches), and to avoid negotiation-related work (in the current implementation, already, no negotiation is performed). After that was done, the lack of overlap was verified by checking all direct and indirect usages of COMPLETE and ALTERNATE - that they are read or written only if no_dependents is false. There are other possible solutions to this issue: (1) Split fetch-pack.{c,h} into a flag-using part and a non-flag-using part, and whenever no_dependents is set, only use the non-flag-using part. (2) Make fetch_pack() be able to be used with arbitrary repository objects. fetch_pack() should then create its own repository object based on the given repository object, with its own object hashtable, so that the flags do not conflict. (1) is possible but invasive - some functions would need to be split; and such invasiveness would potentially be unnecessary if we ever were to need (2) anyway. (2) would be useful if we were to support, say, submodules that were partial clones themselves, but I don't know when or if the Git project plans to support those. Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-08-29  convert "oidcmp() != 0" to "!oideq()"  (Jeff King, 1 file, -1/+1)
This is the flip side of the previous two patches: checking for a non-zero oidcmp() can be more strictly expressed as inequality. Like those patches, we write "!= 0" in the coccinelle transformation, which covers by isomorphism the more common: if (oidcmp(E1, E2)) As with the previous two patches, this patch can be achieved almost entirely by running "make coccicheck"; the only differences are manual line-wrap fixes to match the original code. There is one thing to note for anybody replicating this, though: coccinelle 1.0.4 seems to miss the case in builtin/tag.c, even though it's basically the same as all the others. Running with 1.0.7 does catch this, so presumably it's just a coccinelle bug that was fixed in the interim. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-08-17  Merge branch 'ab/fsck-transfer-updates'  (Junio C Hamano, 1 file, -2/+30)
The test performed at the receiving end of "git push" to prevent bad objects from entering the repository can be customized via receive.fsck.* configuration variables; we now have gained a counterpart to do the same on the "git fetch" side, with fetch.fsck.* configuration variables.

* ab/fsck-transfer-updates:
  fsck: test and document unknown fsck.<msg-id> values
  fsck: add stress tests for fsck.skipList
  fsck: test & document {fetch,receive}.fsck.* config fallback
  fetch: implement fetch.fsck.*
  transfer.fsckObjects tests: untangle confusing setup
  config doc: elaborate on fetch.fsckObjects security
  config doc: elaborate on what transfer.fsckObjects does
  config doc: unify the description of fsck.* and receive.fsck.*
  config doc: don't describe *.fetchObjects twice
  receive.fsck.<msg-id> tests: remove dead code
2018-08-15  Merge branch 'jt/connectivity-check-after-unshallow'  (Junio C Hamano, 1 file, -15/+15)
"git fetch" sometimes failed to update the remote-tracking refs, which has been corrected. * jt/connectivity-check-after-unshallow: fetch-pack: unify ref in and out param
2018-08-15  Merge branch 'bw/fetch-pack-i18n'  (Junio C Hamano, 1 file, -8/+8)
i18n updates. * bw/fetch-pack-i18n: fetch-pack: mark die strings for translation
2018-08-02  Merge branch 'jt/fetch-negotiator-skipping'  (Junio C Hamano, 1 file, -2/+5)
Add a server-side knob to skip commits in exponential/Fibonacci stride in an attempt to cover a wider swath of history with a smaller number of iterations, potentially accepting a larger packfile transfer, instead of going back one commit at a time during common ancestor discovery during the "git fetch" transaction.

* jt/fetch-negotiator-skipping:
  negotiator/skipping: skip commits during fetch
2018-08-02  Merge branch 'jt/fetch-nego-tip'  (Junio C Hamano, 1 file, -2/+18)
"git fetch" learned a new option "--negotiation-tip" to limit the set of commits it tells the other end as "have", to reduce wasted bandwidth and cycles, which would be helpful when the receiving repository has a lot of refs that have little to do with the history at the remote it is fetching from. * jt/fetch-nego-tip: fetch-pack: support negotiation tip whitelist
2018-08-02  Merge branch 'sb/object-store-lookup'  (Junio C Hamano, 1 file, -15/+21)
lookup_commit_reference() and friends have been updated to find in-core object for a specific in-core repository instance.

* sb/object-store-lookup: (32 commits)
  commit.c: allow lookup_commit_reference to handle arbitrary repositories
  commit.c: allow lookup_commit_reference_gently to handle arbitrary repositories
  tag.c: allow deref_tag to handle arbitrary repositories
  object.c: allow parse_object to handle arbitrary repositories
  object.c: allow parse_object_buffer to handle arbitrary repositories
  commit.c: allow get_cached_commit_buffer to handle arbitrary repositories
  commit.c: allow set_commit_buffer to handle arbitrary repositories
  commit.c: migrate the commit buffer to the parsed object store
  commit-slabs: remove realloc counter outside of slab struct
  commit.c: allow parse_commit_buffer to handle arbitrary repositories
  tag: allow parse_tag_buffer to handle arbitrary repositories
  tag: allow lookup_tag to handle arbitrary repositories
  commit: allow lookup_commit to handle arbitrary repositories
  tree: allow lookup_tree to handle arbitrary repositories
  blob: allow lookup_blob to handle arbitrary repositories
  object: allow lookup_object to handle arbitrary repositories
  object: allow object_as_type to handle arbitrary repositories
  tag: add repository argument to deref_tag
  tag: add repository argument to parse_tag_buffer
  tag: add repository argument to lookup_tag
  ...
2018-08-02  Merge branch 'jt/fetch-pack-negotiator'  (Junio C Hamano, 1 file, -168/+87)
Code restructuring and a small fix to transport protocol v2 during fetching.

* jt/fetch-pack-negotiator:
  fetch-pack: introduce negotiator API
  fetch-pack: move common check and marking together
  fetch-pack: make negotiation-related vars local
  fetch-pack: use ref adv. to prune "have" sent
  fetch-pack: directly end negotiation if ACK ready
  fetch-pack: clear marks before re-marking
  fetch-pack: split up everything_local()
2018-08-01  fetch-pack: unify ref in and out param  (Jonathan Tan, 1 file, -15/+15)
When a user fetches: - at least one up-to-date ref and at least one non-up-to-date ref, - using HTTP with protocol v0 (or something else that uses the fetch command of a remote helper) some refs might not be updated after the fetch. This bug was introduced in commit 989b8c4452 ("fetch-pack: put shallow info in output parameter", 2018-06-28) which allowed transports to report the refs that they have fetched in a new out-parameter "fetched_refs". If they do so, transport_fetch_refs() makes this information available to its caller. Users of "fetched_refs" rely on the following 3 properties: (1) it is the complete list of refs that was passed to transport_fetch_refs(), (2) it has shallow information (REF_STATUS_REJECT_SHALLOW set if relevant), and (3) it has updated OIDs if ref-in-want was used (introduced after 989b8c4452). In an effort to satisfy (1), whenever transport_fetch_refs() filters the refs sent to the transport, it re-adds the filtered refs to whatever the transport supplies before returning it to the user. However, the implementation in 989b8c4452 unconditionally re-adds the filtered refs without checking if the transport refrained from reporting anything in "fetched_refs" (which it is allowed to do), resulting in an incomplete list, no longer satisfying (1). An earlier effort to resolve this [1] solved the issue by readding the filtered refs only if the transport did not refrain from reporting in "fetched_refs", but after further discussion, it seems that the better solution is to revert the API change that introduced "fetched_refs". This API change was first suggested as part of a ref-in-want implementation that allowed for ref patterns and, thus, there could be drastic differences between the input refs and the refs actually fetched [2]; we eventually decided to only allow exact ref names, but this API change remained even though its necessity was decreased. Therefore, revert this API change by reverting commit 989b8c4452, and make receive_wanted_refs() update the OIDs in the sought array (like how update_shallow() updates shallow information in the sought array) instead. A test is also included to show that the user-visible bug discussed at the beginning of this commit message no longer exists. [1] https://public-inbox.org/git/20180801171806.GA122458@google.com/ [2] https://public-inbox.org/git/86a128c5fb710a41791e7183207c4d64889f9307.1485381677.git.jonathantanmy@google.com/ Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-07-27  fetch: implement fetch.fsck.*  (Ævar Arnfjörð Bjarmason, 1 file, -2/+30)
Implement support for fetch.fsck.* corresponding with the existing receive.fsck.*. This allows for pedantically cloning repositories with specific issues without turning off fetch.fsckObjects. One such repository is https://github.com/robbyrussell/oh-my-zsh.git which before this change will emit this error when cloned with fetch.fsckObjects:

    error: object 2b7227859263b6aabcc28355b0b994995b7148b6: zeroPaddedFilemode: contains zero-padded file modes
    fatal: Error in object
    fatal: index-pack failed

Now with fetch.fsck.zeroPaddedFilemode=warn we'll warn about that issue, but the clone will succeed:

    warning: object 2b7227859263b6aabcc28355b0b994995b7148b6: zeroPaddedFilemode: contains zero-padded file modes
    warning: object a18c4d13c2a5fa2d4ecd5346c50e119b999b807d: zeroPaddedFilemode: contains zero-padded file modes
    warning: object 84df066176c8da3fd59b13731a86d90f4f1e5c9d: zeroPaddedFilemode: contains zero-padded file modes

The motivation for this is to be able to turn on fetch.fsckObjects globally across a fleet of computers but still be able to manually clone various legacy repositories by either white-listing specific issues, or better yet whitelist specific objects. The use of --git-dir=* instead of -C in the tests could be considered somewhat archaic, but the tests I'm adding here are duplicating the corresponding receive.* tests with as few changes as possible.

Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-07-24  Merge branch 'jt/connectivity-check-after-unshallow'  (Junio C Hamano, 1 file, -10/+74)
"git fetch" failed to correctly validate the set of objects it received when making a shallow history deeper, which has been corrected. * jt/connectivity-check-after-unshallow: fetch-pack: write shallow, then check connectivity fetch-pack: implement ref-in-want fetch-pack: put shallow info in output parameter fetch: refactor to make function args narrower fetch: refactor fetch_refs into two functions fetch: refactor the population of peer ref OIDs upload-pack: test negotiation with changing repository upload-pack: implement ref-in-want test-pkt-line: add unpack-sideband subcommand
2018-07-23  fetch-pack: mark die strings for translation  (Brandon Williams, 1 file, -8/+8)
Signed-off-by: Brandon Williams <bmwill@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-07-18  Merge branch 'sb/object-store-grafts'  (Junio C Hamano, 1 file, -7/+8)
The conversion to pass "the_repository" and then "a_repository" throughout the object access API continues.

* sb/object-store-grafts:
  commit: allow lookup_commit_graft to handle arbitrary repositories
  commit: allow prepare_commit_graft to handle arbitrary repositories
  shallow: migrate shallow information into the object parser
  path.c: migrate global git_path_* to take a repository argument
  cache: convert get_graft_file to handle arbitrary repositories
  commit: convert read_graft_file to handle arbitrary repositories
  commit: convert register_commit_graft to handle arbitrary repositories
  commit: convert commit_graft_pos() to handle arbitrary repositories
  shallow: add repository argument to is_repository_shallow
  shallow: add repository argument to check_shallow_file_for_update
  shallow: add repository argument to register_shallow
  shallow: add repository argument to set_alternate_shallow_file
  commit: add repository argument to lookup_commit_graft
  commit: add repository argument to prepare_commit_graft
  commit: add repository argument to read_graft_file
  commit: add repository argument to register_commit_graft
  commit: add repository argument to commit_graft_pos
  object: move grafts to object parser
  object-store: move object access functions to object-store.h
2018-07-16  negotiator/skipping: skip commits during fetch  (Jonathan Tan, 1 file, -2/+5)
Introduce a new negotiation algorithm used during fetch that skips commits in an effort to find common ancestors faster. The skips grow similarly to the Fibonacci sequence as the commit walk proceeds further away from the tips. The skips may cause unnecessary commits to be included in the packfile, but the negotiation step typically ends more quickly. Usage of this algorithm is guarded behind the configuration flag fetch.negotiationAlgorithm. Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-07-03  fetch-pack: support negotiation tip whitelist  (Jonathan Tan, 1 file, -2/+18)
During negotiation, fetch-pack eventually reports as "have" lines all commits reachable from all refs. Allow the user to restrict the commits sent in this way by providing a whitelist of tips; only the tips themselves and their ancestors will be sent. Both globs and single objects are supported. This feature is only supported for protocols that support connect or stateless-connect (such as HTTP with protocol v2). This will speed up negotiation when the repository has multiple relatively independent branches (for example, when a repository interacts with multiple repositories, such as with linux-next [1] and torvalds/linux [2]), and the user knows which local branch is likely to have commits in common with the upstream branch they are fetching. [1] https://kernel.googlesource.com/pub/scm/linux/kernel/git/next/linux-next/ [2] https://kernel.googlesource.com/pub/scm/linux/kernel/git/torvalds/linux/ Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-07-03  fetch-pack: write shallow, then check connectivity  (Jonathan Tan, 1 file, -0/+31)
When fetching, connectivity is checked after the shallow file is updated. There are 2 issues with this: (1) the connectivity check is only performed up to ancestors of existing refs (which is not thorough enough if we were deepening an existing ref in the first place), and (2) there is no rollback of the shallow file if the connectivity check fails. To solve (1), update the connectivity check to check the ancestry chain completely in the case of a deepening fetch by refraining from passing "--not --all" when invoking rev-list in connected.c. To solve (2), have fetch_pack() perform its own connectivity check before updating the shallow file. To support existing use cases in which "git fetch-pack" is used to download objects without much regard as to the connectivity of the resulting objects with respect to the existing repository, the connectivity check is only done if necessary (that is, the fetch is not a clone, and the fetch involves shallow/deepen functionality). "git fetch" still performs its own connectivity check, preserving correctness but sometimes performing redundant work. This redundancy is mitigated by the fact that fetch_pack() reports if it has performed a connectivity check itself, and if the transport supports connect or stateless-connect, it will bubble up that report so that "git fetch" knows not to perform the connectivity check in such a case. This was noticed when a user tried to deepen an existing repository by fetching with --no-shallow from a server that did not send all necessary objects - the connectivity check as run by "git fetch" succeeded, but a subsequent "git fsck" failed. Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-06-29  tag: add repository argument to deref_tag  (Stefan Beller, 1 file, -3/+6)
Add a repository argument to allow the callers of deref_tag to be more specific about which repository to act on. This is a small mechanical change; it doesn't change the implementation to handle repositories other than the_repository yet. As with the previous commits, use a macro to catch callers passing a repository other than the_repository at compile time. Signed-off-by: Jonathan Nieder <jrnieder@gmail.com> Signed-off-by: Stefan Beller <sbeller@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
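The "macro to catch callers" trick used throughout this series looks roughly like the following; treat it as an illustrative sketch of the pattern rather than the exact lines added to tag.h:

    #include "cache.h"

    struct object;

    /*
     * Sketch of the transition pattern: the implementation is renamed to
     * carry an explicit repository, while a macro keeps the old spelling
     * working -- but only when literally passed `the_repository`, so any
     * caller passing another repository fails to compile until the
     * function is truly converted.
     */
    struct object *deref_tag_the_repository(struct object *o, const char *warn, int warnlen);
    #define deref_tag(r, o, warn, warnlen) deref_tag_##r(o, warn, warnlen)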
2018-06-29  commit: add repository argument to lookup_commit  (Stefan Beller, 1 file, -2/+3)
Add a repository argument to allow callers of lookup_commit to be more specific about which repository to handle. This is a small mechanical change; it doesn't change the implementation to handle repositories other than the_repository yet. As with the previous commits, use a macro to catch callers passing a repository other than the_repository at compile time. Signed-off-by: Stefan Beller <sbeller@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-06-29  object: add repository argument to lookup_object  (Stefan Beller, 1 file, -6/+7)
Add a repository argument to allow callers of lookup_object to be more specific about which repository to handle. This is a small mechanical change; it doesn't change the implementation to handle repositories other than the_repository yet. As with the previous commits, use a macro to catch callers passing a repository other than the_repository at compile time. Signed-off-by: Stefan Beller <sbeller@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-06-29  object: add repository argument to parse_object  (Stefan Beller, 1 file, -8/+10)
Add a repository argument to allow the callers of parse_object to be more specific about which repository to act on. This is a small mechanical change; it doesn't change the implementation to handle repositories other than the_repository yet. As with the previous commits, use a macro to catch callers passing a repository other than the_repository at compile time. Signed-off-by: Jonathan Nieder <jrnieder@gmail.com> Signed-off-by: Stefan Beller <sbeller@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-06-29  Merge branch 'sb/object-store-grafts' into sb/object-store-lookup  (Junio C Hamano, 1 file, -7/+8)
* sb/object-store-grafts:
  commit: allow lookup_commit_graft to handle arbitrary repositories
  commit: allow prepare_commit_graft to handle arbitrary repositories
  shallow: migrate shallow information into the object parser
  path.c: migrate global git_path_* to take a repository argument
  cache: convert get_graft_file to handle arbitrary repositories
  commit: convert read_graft_file to handle arbitrary repositories
  commit: convert register_commit_graft to handle arbitrary repositories
  commit: convert commit_graft_pos() to handle arbitrary repositories
  shallow: add repository argument to is_repository_shallow
  shallow: add repository argument to check_shallow_file_for_update
  shallow: add repository argument to register_shallow
  shallow: add repository argument to set_alternate_shallow_file
  commit: add repository argument to lookup_commit_graft
  commit: add repository argument to prepare_commit_graft
  commit: add repository argument to read_graft_file
  commit: add repository argument to register_commit_graft
  commit: add repository argument to commit_graft_pos
  object: move grafts to object parser
  object-store: move object access functions to object-store.h
2018-06-28  Merge branch 'jk/fetch-all-peeled-fix'  (Junio C Hamano, 1 file, -4/+4)
"git fetch-pack --all" used to unnecessarily fail upon seeing an annotated tag that points at an object other than a commit. * jk/fetch-all-peeled-fix: fetch-pack: test explicitly that --all can fetch tag references pointing to non-commits fetch-pack: don't try to fetch peel values with --all
2018-06-28  fetch-pack: implement ref-in-want  (Brandon Williams, 1 file, -3/+35)
Implement ref-in-want on the client side so that when a server supports the "ref-in-want" feature, a client will send "want-ref" lines for each reference the client wants to fetch. This feature allows clients to tolerate inconsistencies that exist when a remote repository's refs change during the course of negotiation. It allows a client to request a particular ref without specifying the OID of the ref. This means that instead of hitting an error when a ref no longer points at the OID it did at the beginning of negotiation, negotiation can continue and the value of that ref will be sent at the termination of negotiation, just before a packfile is sent. More information on the ref-in-want feature can be found in Documentation/technical/protocol-v2.txt.

Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-06-28  fetch-pack: put shallow info in output parameter  (Brandon Williams, 1 file, -7/+8)
Expand the transport fetch method signature, by adding an output parameter, to allow transports to return information about the refs they have fetched. Then communicate shallow status information through this mechanism instead of by modifying the input list of refs. This does require clients to sometimes generate the ref map twice: once from the list of refs provided by the remote (as is currently done) and potentially once from the new list of refs that the fetch mechanism provides. Signed-off-by: Brandon Williams <bmwill@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-06-15  fetch-pack: introduce negotiator API  (Jonathan Tan, 1 file, -166/+39)
Introduce the new files fetch-negotiator.{h,c}, which contains an API behind which the details of negotiation are abstracted. Currently, only one algorithm is available: the existing one. This patch is written to be easily reviewed: static functions are moved verbatim from fetch-pack.c to negotiator/default.c, and it can be seen that the lines replaced by negotiator->X() calls are present in the X() functions respectively. Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
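The shape of the API is a small vtable; roughly as below, though the exact field list should be taken as a sketch rather than the authoritative fetch-negotiator.h:

    #include "cache.h"

    struct commit;
    struct fetch_negotiator;

    /*
     * Sketch of the negotiator vtable: fetch-pack feeds it known-common
     * commits and tips, then repeatedly asks for the next "have" to send,
     * reporting server ACKs back until negotiation ends.
     */
    struct fetch_negotiator {
        void (*known_common)(struct fetch_negotiator *, struct commit *);
        void (*add_tip)(struct fetch_negotiator *, struct commit *);
        const struct object_id *(*next)(struct fetch_negotiator *);
        int (*ack)(struct fetch_negotiator *, struct commit *);
        void (*release)(struct fetch_negotiator *);
        void *data;
    };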
2018-06-15  fetch-pack: move common check and marking together  (Jonathan Tan, 1 file, -2/+4)
When receiving 'ACK <object-id> continue' for a common commit, check if the commit was already known to be common and mark it as such if not up front. This should make future refactoring of how the information about common commits is stored more straightforward. No visible change intended. Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-06-15  fetch-pack: make negotiation-related vars local  (Jonathan Tan, 1 file, -47/+69)
Reduce the number of global variables by making the priority queue and the count of non-common commits in it local, passing them as a struct to various functions where necessary. This also helps in the case that fetch_pack() is invoked twice in the same process (when tag following is required when using a transport that does not support tag following), in that different priority queues will now be used in each invocation, instead of reusing the possibly non-empty one. Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-06-15  fetch-pack: use ref adv. to prune "have" sent  (Jonathan Tan, 1 file, -3/+3)
In negotiation using protocol v2, fetch-pack sometimes does not make full use of the information obtained in the ref advertisement: specifically, that if the server advertises a commit that the client also has, the client never needs to inform the server that it has the commit's parents, since it can just tell the server that it has the advertised commit and it knows that the server can and will infer the rest. This is because, in do_fetch_pack_v2(), rev_list_insert_ref_oid() is invoked before mark_complete_and_common_ref(). This means that if we have a commit that is both our ref and their ref, it would be enqueued by rev_list_insert_ref_oid() as SEEN, and since it is thus already SEEN, mark_complete_and_common_ref() would not enqueue it. If mark_complete_and_common_ref() were invoked first, as it is in do_fetch_pack() for protocol v0, then mark_complete_and_common_ref() would enqueue it with COMMON_REF | SEEN. The addition of COMMON_REF ensures that its parents are not sent as "have" lines. Change the order in do_fetch_pack_v2() to be consistent with do_fetch_pack(), and to avoid sending unnecessary "have" lines. Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-06-15  fetch-pack: directly end negotiation if ACK ready  (Jonathan Tan, 1 file, -4/+5)
When "ACK %s ready" is received, find_common() clears rev_list in an attempt to stop further "have" lines from being sent [1]. It is much more readable to explicitly break from the loop instead. So explicitly break from the loop, and make the clearing of the rev_list happen unconditionally. [1] The rationale is further described in the originating commit f2cba9299b ("fetch-pack: Finish negotation if remote replies "ACK %s ready"", 2011-03-14). Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-06-14  fetch-pack: clear marks before re-marking  (Jonathan Tan, 1 file, -3/+3)
If tag following is required when using a transport that does not support tag following, fetch_pack() will be invoked twice in the same process, necessitating a clearing of the object flags used by fetch_pack() sometime during the second invocation. This is currently done in find_common(), which means that the invocation of mark_complete_and_common_ref() in do_fetch_pack() is useless. (This cannot be reproduced with Git alone, because all transports that come with Git support tag following.) Therefore, move this clearing from find_common() to its parent function do_fetch_pack(), right before it calls mark_complete_and_common_ref(). This has been occurring since the commit that introduced the clearing of marks, 420e9af498 ("Fix tag following", 2008-03-19). The corresponding code for protocol v2 in do_fetch_pack_v2() does not have this problem, as the clearing of flags is done before any marking (whether by rev_list_insert_ref_oid() or mark_complete_and_common_ref()). Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-06-14  fetch-pack: split up everything_local()  (Jonathan Tan, 1 file, -9/+30)
The function everything_local(), despite its name, also (1) marks commits as COMPLETE and COMMON_REF and (2) invokes filter_refs() as important side effects. Extract (1) into its own function (mark_complete_and_common_ref()) and remove (2). The restoring of save_commit_buffer, which was introduced in a1c6d7c1a7 ("fetch-pack: restore save_commit_buffer after use", 2017-12-08), is a concern of the parse_object() call in mark_complete_and_common_ref(), so it has been moved from the end of everything_local() to the end of mark_complete_and_common_ref(). Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-06-11  fetch-pack: don't try to fetch peel values with --all  (Jeff King, 1 file, -4/+4)
When "fetch-pack --all" sees a tag-to-blob on the remote, it tries to fetch both the tag itself ("refs/tags/foo") and the peeled value that the remote advertises ("refs/tags/foo^{}"). Asking for the object pointed to by the latter can cause upload-pack to complain with "not our ref", since it does not mark the peeled objects with the OUR_REF (unless they were at the tip of some other ref). Arguably upload-pack _should_ be marking those peeled objects. But it never has in the past, since clients would generally just ask for the tag and expect to get the peeled value along with it. And that's how "git fetch" works, as well as older versions of "fetch-pack --all". The problem was introduced by 5f0fc64513 (fetch-pack: eliminate spurious error messages, 2012-09-09). Before then, the matching logic was something like: if (refname is ill-formed) do nothing else if (doing --all) always consider it matched else look through list of sought refs for a match That commit wanted to flip the order of the second two arms of that conditional. But we ended up with: if (refname is ill-formed) do nothing else look through list of sought refs for a match if (--all and no match so far) always consider it matched That means tha an ill-formed ref will trigger the --all conditional block, even though we should just be ignoring it. We can fix that by having a single "else" with all of the well-formed logic, that checks the sought refs and "--all" in the correct order. Reported-by: Kirill Smelkov <kirr@nexedi.com> Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-05-30  Merge branch 'jt/partial-clone-proto-v2'  (Junio C Hamano, 1 file, -4/+19)
Transfer protocol v2 learned to support the partial clone.

* jt/partial-clone-proto-v2:
  {fetch,upload}-pack: support filter in protocol v2
  upload-pack: read config when serving protocol v2
  upload-pack: fix error message typo
2018-05-23  Merge branch 'bw/server-options'  (Junio C Hamano, 1 file, -0/+7)
The transport protocol v2 is getting updated further.

* bw/server-options:
  fetch: send server options when using protocol v2
  ls-remote: send server options when using protocol v2
  serve: introduce the server-option capability
2018-05-18  path.c: migrate global git_path_* to take a repository argument  (Stefan Beller, 1 file, -1/+1)
Migrate all git_path_* functions that are defined in path.c to take a repository argument. Unlike other patches in this series, do not use the #define trick, as we rewrite the whole function, which is rather small. This doesn't migrate all the functions, as other builtins have their own local path functions defined using GIT_PATH_FUNC. So keep that macro around to serve the other locations. Signed-off-by: Stefan Beller <sbeller@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-05-18  shallow: add repository argument to is_repository_shallow  (Stefan Beller, 1 file, -2/+2)
Add a repository argument to allow callers of is_repository_shallow to be more specific about which repository to handle. This is a small mechanical change; it doesn't change the implementation to handle repositories other than the_repository yet. As with the previous commits, use a macro to catch callers passing a repository other than the_repository at compile time. Signed-off-by: Stefan Beller <sbeller@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-05-18  shallow: add repository argument to register_shallow  (Stefan Beller, 1 file, -1/+1)
Add a repository argument to allow callers of register_shallow to be more specific about which repository to handle. This is a small mechanical change; it doesn't change the implementation to handle repositories other than the_repository yet. As with the previous commits, use a macro to catch callers passing a repository other than the_repository at compile time. Signed-off-by: Stefan Beller <sbeller@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-05-16  object-store: move object access functions to object-store.h  (Stefan Beller, 1 file, -0/+1)
This should make these functions easier to find and cache.h less overwhelming to read. In particular, this moves:

  - read_object_file
  - oid_object_info
  - write_object_file

As a result, most of the codebase needs to #include object-store.h. In this patch the #include is only added to files that would fail to compile otherwise. It would be better to #include wherever identifiers from the header are used. That can happen later when we have better tooling for it.

Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-05-08  Merge branch 'bw/protocol-v2'  (Junio C Hamano, 1 file, -6/+333)
The beginning of the next-gen transfer protocol.

* bw/protocol-v2: (35 commits)
  remote-curl: don't request v2 when pushing
  remote-curl: implement stateless-connect command
  http: eliminate "# service" line when using protocol v2
  http: don't always add Git-Protocol header
  http: allow providing extra headers for http requests
  remote-curl: store the protocol version the server responded with
  remote-curl: create copy of the service name
  pkt-line: add packet_buf_write_len function
  transport-helper: introduce stateless-connect
  transport-helper: refactor process_connect_service
  transport-helper: remove name parameter
  connect: don't request v2 when pushing
  connect: refactor git_connect to only get the protocol version once
  fetch-pack: support shallow requests
  fetch-pack: perform a fetch using v2
  upload-pack: introduce fetch server command
  push: pass ref prefixes when pushing
  fetch: pass ref prefixes when fetching
  ls-remote: pass ref prefixes when requesting a remote's refs
  transport: convert transport_get_remote_refs to take a list of ref prefixes
  ...
2018-05-06  {fetch,upload}-pack: support filter in protocol v2  (Jonathan Tan, 1 file, -4/+19)
The fetch-pack/upload-pack protocol v2 was developed independently of the filter parameter (used in partial fetches), thus it did not include support for it. Add support for the filter parameter. Like in the legacy protocol, the server advertises and supports "filter" only if uploadpack.allowfilter is configured. Like in the legacy protocol, the client continues with a warning if "--filter" is specified, but the server does not advertise it. Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-05-02  Merge branch 'bw/protocol-v2' into jt/partial-clone-proto-v2  (Junio C Hamano, 1 file, -6/+333)
The beginning of the next-gen transfer protocol.

* bw/protocol-v2: (35 commits)
  remote-curl: don't request v2 when pushing
  remote-curl: implement stateless-connect command
  http: eliminate "# service" line when using protocol v2
  http: don't always add Git-Protocol header
  http: allow providing extra headers for http requests
  remote-curl: store the protocol version the server responded with
  remote-curl: create copy of the service name
  pkt-line: add packet_buf_write_len function
  transport-helper: introduce stateless-connect
  transport-helper: refactor process_connect_service
  transport-helper: remove name parameter
  connect: don't request v2 when pushing
  connect: refactor git_connect to only get the protocol version once
  fetch-pack: support shallow requests
  fetch-pack: perform a fetch using v2
  upload-pack: introduce fetch server command
  push: pass ref prefixes when pushing
  fetch: pass ref prefixes when fetching
  ls-remote: pass ref prefixes when requesting a remote's refs
  transport: convert transport_get_remote_refs to take a list of ref prefixes
  ...
2018-04-24  fetch: send server options when using protocol v2  (Brandon Williams, 1 file, -0/+7)
Teach fetch to optionally accept server options by specifying them on the cmdline via '-o' or '--server-option'. These server options are sent to the remote end when performing a fetch communicating using protocol version 2. If communicating using a protocol other than v2 the provided options are ignored and not sent to the remote end. Signed-off-by: Brandon Williams <bmwill@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-04-24  Merge branch 'bw/protocol-v2' into HEAD  (Junio C Hamano, 1 file, -6/+333)
* bw/protocol-v2: (35 commits)
  remote-curl: don't request v2 when pushing
  remote-curl: implement stateless-connect command
  http: eliminate "# service" line when using protocol v2
  http: don't always add Git-Protocol header
  http: allow providing extra headers for http requests
  remote-curl: store the protocol version the server responded with
  remote-curl: create copy of the service name
  pkt-line: add packet_buf_write_len function
  transport-helper: introduce stateless-connect
  transport-helper: refactor process_connect_service
  transport-helper: remove name parameter
  connect: don't request v2 when pushing
  connect: refactor git_connect to only get the protocol version once
  fetch-pack: support shallow requests
  fetch-pack: perform a fetch using v2
  upload-pack: introduce fetch server command
  push: pass ref prefixes when pushing
  fetch: pass ref prefixes when fetching
  ls-remote: pass ref prefixes when requesting a remote's refs
  transport: convert transport_get_remote_refs to take a list of ref prefixes
  ...
2018-04-11  exec_cmd: rename to use dash in file name  (Stefan Beller, 1 file, -1/+1)
This is more consistent with the project style. The majority of Git's source files use dashes in preference to underscores in their file names. Signed-off-by: Stefan Beller <sbeller@google.com>
2018-04-11  Merge branch 'sb/packfiles-in-repository'  (Junio C Hamano, 1 file, -1/+2)
Refactoring of the internal global data structure continues.

* sb/packfiles-in-repository:
  packfile: keep prepare_packed_git() private
  packfile: allow find_pack_entry to handle arbitrary repositories
  packfile: add repository argument to find_pack_entry
  packfile: allow reprepare_packed_git to handle arbitrary repositories
  packfile: allow prepare_packed_git to handle arbitrary repositories
  packfile: allow prepare_packed_git_one to handle arbitrary repositories
  packfile: add repository argument to reprepare_packed_git
  packfile: add repository argument to prepare_packed_git
  packfile: add repository argument to prepare_packed_git_one
  packfile: allow install_packed_git to handle arbitrary repositories
  packfile: allow rearrange_packed_git to handle arbitrary repositories
  packfile: allow prepare_packed_git_mru to handle arbitrary repositories
2018-04-10  Merge branch 'ti/fetch-everything-local-optim'  (Junio C Hamano, 1 file, -3/+42)
A "git fetch" from a repository with insane number of refs into a repository that is already up-to-date still wasted too many cycles making many lstat(2) calls to see if these objects at the tips exist as loose objects locally. These lstat(2) calls are optimized away by enumerating all loose objects beforehand. It is unknown if the new strategy negatively affects existing use cases, fetching into a repository with many loose objects from a repository with small number of refs. * ti/fetch-everything-local-optim: fetch-pack.c: use oidset to check existence of loose object
2018-03-26  packfile: add repository argument to reprepare_packed_git  (Stefan Beller, 1 file, -1/+2)
See previous patch for explanation. Signed-off-by: Stefan Beller <sbeller@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com> Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-03-15  fetch-pack: support shallow requests  (Brandon Williams, 1 file, -1/+70)
Enable shallow clones and deepen requests using protocol version 2 if the server 'fetch' command supports the 'shallow' feature. Signed-off-by: Brandon Williams <bmwill@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
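A hedged sketch of what adding shallow support to the v2 request writer involves; the helper name and surrounding context are assumptions, and only the pkt-line keywords come from the protocol:

    #include "cache.h"
    #include "repository.h"
    #include "pkt-line.h"
    #include "fetch-pack.h"
    #include "commit.h"

    /*
     * Sketch: if the server's 'fetch' command advertises the "shallow"
     * feature, the client describes its own shallow boundary and any
     * deepening request as part of the fetch arguments.
     */
    static void add_shallow_section(struct strbuf *req_buf,
                                    const struct fetch_pack_args *args)
    {
        /* emits one "shallow <oid>" pkt-line per shallow boundary commit */
        if (is_repository_shallow(the_repository))
            write_shallow_commits(req_buf, 1, NULL);

        if (args->depth > 0)
            packet_buf_write(req_buf, "deepen %d", args->depth);
    }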
2018-03-15  fetch-pack: perform a fetch using v2  (Brandon Williams, 1 file, -6/+264)
When communicating with a v2 server, perform a fetch by requesting the 'fetch' command. Signed-off-by: Brandon Williams <bmwill@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
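The v2 fetch is easiest to picture as a small state machine driven from fetch_pack(); the enum below follows the shape of do_fetch_pack_v2(), though the exact names should be taken as illustrative:

    #include "cache.h"

    /*
     * Sketch: one round trip per loop iteration -- check what we already
     * have, send wants/haves, read the server's acks, and finally read
     * the packfile once the server says it is ready.
     */
    enum fetch_state {
        FETCH_CHECK_LOCAL = 0,   /* mark complete refs; maybe nothing to fetch */
        FETCH_SEND_REQUEST,      /* write "fetch" command args (wants, haves, ...) */
        FETCH_PROCESS_ACKS,      /* parse the acknowledgments section */
        FETCH_GET_PACK,          /* parse shallow-info/wanted-refs, then the pack */
        FETCH_DONE,
    };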
2018-03-15  fetch-pack: do not check links for partial fetch  (Jonathan Tan, 1 file, -2/+11)
When doing a partial clone or fetch with transfer.fsckobjects=1, use the --fsck-objects instead of the --strict flag when invoking index-pack so that links are not checked, only objects. This is because incomplete links are expected when doing a partial clone or fetch. Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-03-14  fetch-pack.c: use oidset to check existence of loose object  (Takuto Ikuta, 1 file, -3/+42)
When fetching from a repository with a large number of refs, 'git fetch' checks the existence of each advertised ref against both packed and loose objects in the local repository, and so ends up doing a lot of lstat(2) calls for loose-object paths that do not exist, which makes it slow.

Instead of making as many lstat(2) calls as there are refs advertised by the remote side to see if these objects exist in loose form, first enumerate all the existing loose objects into a hashmap beforehand and use it to check their existence, if the number of refs is larger than the number of loose objects.

With this patch, the number of lstat(2) calls in `git fetch` is reduced from 411412 to 13794 for the chromium repository, which has more than 480000 remote refs.

I took time stats of `git fetch` (when fetch-pack happens) for the chromium repository, 3 times on Linux with an SSD:

    with this patch: 8.105s 8.309s 7.640s (avg: 8.018s)
    master:          12.287s 11.175s 12.227s (avg: 11.896s)

On my MacBook Air, which has slower lstat(2):

    with this patch: 14.501s
    master:          1m16.027s

`git fetch` on slow disks will be improved largely.

Signed-off-by: Takuto Ikuta <tikuta@chromium.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
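A hedged sketch of the enumerate-then-query idea; the helper names are hypothetical, and the real patch wires the check into fetch-pack.c's completeness check rather than standalone functions:

    #include "cache.h"
    #include "oidset.h"
    #include "object-store.h"

    /* Collect every loose object's oid into the oidset passed as cb data. */
    static int add_loose_oid(const struct object_id *oid, const char *path, void *data)
    {
        oidset_insert(data, oid);
        return 0;
    }

    /*
     * Sketch: one readdir() pass over the object directories replaces one
     * lstat(2) per advertised ref when the ref count dwarfs the number of
     * loose objects.
     */
    static void build_loose_cache(struct oidset *loose)
    {
        for_each_loose_object(add_loose_oid, loose, 0);
    }

    static int probably_have_loose(struct oidset *loose, const struct object_id *oid)
    {
        return oidset_contains(loose, oid);
    }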
2018-02-27  Merge branch 'js/packet-read-line-check-null'  (Junio C Hamano, 1 file, -2/+2)
Some low level protocol codepaths could crash when they get an unexpected flush packet, which is now fixed.

* js/packet-read-line-check-null:
  always check for NULL return from packet_read_line()
  correct error messages for NULL packet_read_line()
2018-02-13  Merge branch 'jh/partial-clone'  (Junio C Hamano, 1 file, -0/+17)
The machinery to clone & fetch, which in turn involves packing and unpacking objects, has been told how to omit certain objects using the filtering mechanism introduced by the jh/object-filtering topic, and also mark the resulting pack as a promisor pack to tolerate missing objects, taking advantage of the mechanism introduced by the jh/fsck-promisors topic.

* jh/partial-clone:
  t5616: test bulk prefetch after partial fetch
  fetch: inherit filter-spec from partial clone
  t5616: end-to-end tests for partial clone
  fetch-pack: restore save_commit_buffer after use
  unpack-trees: batch fetching of missing blobs
  clone: partial clone
  partial-clone: define partial clone settings in config
  fetch: support filters
  fetch: refactor calculation of remote list
  fetch-pack: test support excluding large blobs
  fetch-pack: add --no-filter
  fetch-pack, index-pack, transport: partial clone
  upload-pack: add object filtering for partial clone
2018-02-13  Merge branch 'jh/fsck-promisors'  (Junio C Hamano, 1 file, -21/+27)
In preparation for implementing narrow/partial clone, the machinery for checking object connectivity used by gc and fsck has been taught that a missing object is OK when it is referenced by a packfile specially marked as coming from a trusted repository that promises to make them available on-demand and lazily.

* jh/fsck-promisors:
  gc: do not repack promisor packfiles
  rev-list: support termination at promisor objects
  sha1_file: support lazily fetching missing objects
  introduce fetch-object: fetch one promisor object
  index-pack: refactor writing of .keep files
  fsck: support promisor objects as CLI argument
  fsck: support referenced promisor objects
  fsck: support refs pointing to promisor objects
  fsck: introduce partialclone extension
  extension.partialclone: introduce partial clone extension
2018-02-08  correct error messages for NULL packet_read_line()  (Jeff King, 1 file, -2/+2)
The packet_read_line() function dies if it gets an unexpected EOF. It only returns NULL if we get a flush packet (or technically, a zero-length "0004" packet, but nobody is supposed to send those, and they are indistinguishable from a flush in this interface). Let's correct error messages which claim an unexpected EOF; it's really an unexpected flush packet. While we're here, let's also check "!line" instead of "!len" in the second case. The two events should always coincide, but checking "!line" makes it more obvious that we are not about to dereference NULL. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
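Concretely, the pattern the corrected messages describe looks like this (a sketch; the message text is illustrative):

    #include "cache.h"
    #include "pkt-line.h"

    /*
     * packet_read_line() dies on a real EOF; a NULL return therefore means
     * we saw a flush packet, and the error message should say so.
     */
    static char *read_expected_line(int fd)
    {
        char *line = packet_read_line(fd, NULL);

        if (!line)
            die(_("expected a response line, got a flush packet"));
        return line;
    }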
2017-12-08fetch-pack: restore save_commit_buffer after useJonathan Tan1-0/+4
In fetch-pack, the global variable save_commit_buffer is set to 0, but not restored to its original value after use. In particular, if show_log() (in log-tree.c) is invoked after fetch_pack() in the same process, show_log() will return before printing out the commit message (because the invocation to get_cached_commit_buffer() returns NULL, because the commit buffer was not saved). I discovered this when attempting to run "git log -S" in a partial clone, triggering the case where revision walking lazily loads missing objects. Therefore, restore save_commit_buffer to its original value after use. An alternative to solve the problem I had is to replace get_cached_commit_buffer() with get_commit_buffer(). That invocation was introduced in commit a97934d ("use get_cached_commit_buffer where appropriate", 2014-06-13) to replace "commit->buffer" introduced in commit 3131b71 ("Add "--show-all" revision walker flag for debugging", 2008-02-13). In the latter commit, the commit author seems to be deciding between not showing an unparsed commit at all and showing an unparsed commit without the message (which is what the commit does), and did not mention parsing the unparsed commit, so I prefer to preserve the existing behavior. Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
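A minimal sketch of the save-and-restore pattern described above (stand-in names, not the actual fetch-pack.c code):

    /* Stand-in for Git's global; fetch-pack temporarily disables it. */
    static int save_commit_buffer = 1;

    static void fetch_pack_body(void)
    {
        /* ... negotiate with the server, receive and index the pack ... */
    }

    static void fetch_pack_sketch(void)
    {
        int saved = save_commit_buffer;  /* remember the caller's setting */
        save_commit_buffer = 0;          /* commit buffers are not needed here */
        fetch_pack_body();
        save_commit_buffer = saved;      /* a later show_log() sees buffers again */
    }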
2017-12-08fetch-pack, index-pack, transport: partial cloneJeff Hostetler1-0/+13
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-12-06Merge branch 'jk/fewer-pack-rescan'Junio C Hamano1-1/+2
Internally we use 0{40} as a placeholder object name to signal the codepath that there is no such object (e.g. the fast-forward check while "git fetch" stores a new remote-tracking ref says "we know there is no 'old' thing pointed at by the ref, as we are creating it anew" by passing 0{40} for the 'old' side), and expect a codepath that locates an in-core object to return NULL as a sign that the object does not exist. A look-up for an object that does not exist, however, is quite costly in a repository with a large number of packfiles. This access pattern has been optimized. * jk/fewer-pack-rescan: sha1_file: fast-path null sha1 as a missing object everything_local: use "quick" object existence check p5551: add a script to test fetch pack-dir rescans t/perf/lib-pack: use fast-import checkpoint to create packs p5550: factor out nonsense-pack creation
2017-12-05introduce fetch-object: fetch one promisor objectJonathan Tan1-21/+27
Introduce fetch-object, providing the ability to fetch one object from a promisor remote. This uses fetch-pack. To do this, the transport mechanism has been updated with 2 flags, "from-promisor" to indicate that the resulting pack comes from a promisor remote (and thus should be annotated as such by index-pack), and "no-dependents" to indicate that only the objects themselves need to be fetched (but fetching additional objects is nevertheless safe). Whenever "no-dependents" is used, fetch-pack will refrain from using any object flags, because it is most likely invoked as part of a dynamic object fetch by another Git command (which may itself use object flags). An alternative to this is to leave fetch-pack alone, and instead update the allocation of flags so that fetch-pack's flags never overlap with any others, but this will end up shrinking the number of flags available to nearly every other Git command (that is, every Git command that accesses objects), so the approach in this commit was used instead. This will be tested in a subsequent commit. Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-11-21everything_local: use "quick" object existence checkJeff King1-1/+2
In b495697b82 (fetch-pack: avoid repeatedly re-scanning pack directory, 2013-01-26), we noticed that everything_local() could waste time trying to find and parse objects which we _expect_ to be missing. The solution was to put has_sha1_file() in front of parse_object() to skip the more-expensive parse attempt. That optimization was negated later when has_sha1_file() learned to do the same re-scan in 45e8a74873 (has_sha1_file: re-check pack directory before giving up, 2013-08-30). We can restore it by using the "quick" flag to tell has_sha1_file (actually has_object_file these days) that we prefer speed to thoroughness for this call. See also the fixes in 5827a0354 and 0eeb077be7 for prior art and discussion on using the "quick" flag for these cases. The recently-added performance regression test in p5551 demonstrates the problem. You can see the original fix: Test b495697b82^ b495697b82 -------------------------------------------------------- 5551.4: fetch 1.68(1.33+0.35) 0.87(0.69+0.18) -48.2% and then the regression: Test 45e8a74873^ 45e8a74873 --------------------------------------------------------- 5551.4: fetch 0.96(0.77+0.19) 2.55(2.04+0.50) +165.6% and now our fix: Test HEAD^ HEAD -------------------------------------------------------- 5551.4: fetch 7.21(6.58+0.63) 5.47(5.04+0.43) -24.1% You can also see that other things have gotten a lot slower since 2013. We'll deal with those in separate patches. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
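The shape of the fix, as a simplified stand-alone sketch (the flag name and helper functions below are stand-ins, not Git's has_object_file() API): a miss with the "quick" flag set returns immediately instead of triggering the expensive pack-directory re-scan.

    #define LOOKUP_QUICK 1

    /* Stand-ins for the cheap lookups against already-known packs/loose objects. */
    static int find_in_known_packs(const unsigned char *sha1) { (void)sha1; return 0; }
    static int find_loose(const unsigned char *sha1) { (void)sha1; return 0; }
    static void rescan_pack_directory(void) { /* expensive readdir() + reprepare */ }

    static int has_object_sketch(const unsigned char *sha1, unsigned flags)
    {
        if (find_in_known_packs(sha1) || find_loose(sha1))
            return 1;
        if (flags & LOOKUP_QUICK)
            return 0;                /* caller prefers speed to thoroughness */
        rescan_pack_directory();         /* a new pack may have appeared */
        return find_in_known_packs(sha1);
    }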
2017-10-01oidmap: map with OID as keyJonathan Tan1-1/+1
This is similar to using the hashmap in hashmap.c, but with an easier-to-use API. In particular, custom entry comparisons no longer need to be written, and lookups can be done without constructing a temporary entry structure. This is implemented as a thin wrapper over the hashmap API. In particular, this means that there is an additional 4-byte overhead due to the fact that the first 4 bytes of the hash are redundantly stored. For now, I'm taking the simpler approach, but if need be, we can reimplement oidmap without affecting the callers significantly. oidset has been updated to use oidmap. Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
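The central trick is that an object ID is already uniformly distributed, so its leading bytes can double as the hashmap hash; a tiny, simplified illustration (not the actual oidmap.c code):

    #include <stdint.h>
    #include <string.h>

    struct oid_sketch { unsigned char hash[20]; };

    /* Reuse the first four bytes of the (already well-mixed) object ID as the
     * hash value; this is the redundant 4-byte overhead mentioned above. */
    static uint32_t oid_hash_sketch(const struct oid_sketch *oid)
    {
        uint32_t h;
        memcpy(&h, oid->hash, sizeof(h));
        return h;
    }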
2017-08-23pack: move {,re}prepare_packed_git and approximate_object_countJonathan Tan1-0/+1
Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-06-24Merge branch 'bw/config-h'Junio C Hamano1-0/+1
Fix the configuration codepath to pay proper attention to the commondir that is used in multi-worktree situations, and isolate the config API into its own header file. * bw/config-h: config: don't implicitly use gitdir or commondir config: respect commondir setup: teach discover_git_directory to respect the commondir config: don't include config.h by default config: remove git_config_iter config: create config.h
2017-06-15config: don't include config.h by defaultBrandon Williams1-0/+1
Stop including config.h by default in cache.h. Instead only include config.h in those files which require use of the config system. Signed-off-by: Brandon Williams <bmwill@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-05-30Merge branch 'jt/fetch-allow-tip-sha1-implicitly'Junio C Hamano1-2/+38
There is no good reason why "git fetch $there $sha1" should fail when the $sha1 names an object at the tip of an advertised ref, even when the other side hasn't enabled allowTipSHA1InWant. * jt/fetch-allow-tip-sha1-implicitly: fetch-pack: always allow fetching of literal SHA1s
2017-05-29Merge branch 'bc/object-id'Junio C Hamano1-48/+49
Conversion from uchar[20] to struct object_id continues. * bc/object-id: (53 commits) object: convert parse_object* to take struct object_id tree: convert parse_tree_indirect to struct object_id sequencer: convert do_recursive_merge to struct object_id diff-lib: convert do_diff_cache to struct object_id builtin/ls-tree: convert to struct object_id merge: convert checkout_fast_forward to struct object_id sequencer: convert fast_forward_to to struct object_id builtin/ls-files: convert overlay_tree_on_cache to object_id builtin/read-tree: convert to struct object_id sha1_name: convert internals of peel_onion to object_id upload-pack: convert remaining parse_object callers to object_id revision: convert remaining parse_object callers to object_id revision: rename add_pending_sha1 to add_pending_oid http-push: convert process_ls_object and descendants to object_id refs/files-backend: convert many internals to struct object_id refs: convert struct ref_update to use struct object_id ref-filter: convert some static functions to struct object_id Convert struct ref_array_item to struct object_id Convert the verify_pack callback to struct object_id Convert lookup_tag to struct object_id ...
2017-05-16Merge branch 'js/larger-timestamps'Junio C Hamano1-4/+4
Some platforms have a ulong that is smaller than time_t, and our historical use of ulong for timestamps means they cannot represent some timestamps that the platform allows. Invent a separate and dedicated timestamp_t (so that we can distinguish timestamps from vanilla ulongs, which alone is already a good move), and then declare that uintmax_t is the type to be used as the timestamp_t. * js/larger-timestamps: archive-tar: fix a sparse 'constant too large' warning use uintmax_t for timestamps date.c: abort if the system time cannot handle one of our timestamps timestamp_t: a new data type for timestamps PRItime: introduce a new "printf format" for timestamps parse_timestamp(): specify explicitly where we parse timestamps t0006 & t5000: skip "far in the future" test when time_t is too limited t0006 & t5000: prepare for 64-bit timestamps ref-filter: avoid using `unsigned long` for catch-all data type
2017-05-16fetch-pack: always allow fetching of literal SHA1sJonathan Tan1-2/+38
fetch-pack, when fetching a literal SHA-1 from a server that is not configured with uploadpack.allowtipsha1inwant (or similar), always returns an error message of the form "Server does not allow request for unadvertised object %s". However, it is sometimes the case that such an object is advertised. This situation would occur, for example, if a user or a script was provided a SHA-1 instead of a branch or tag name for fetching, and wanted to invoke "git fetch" or "git fetch-pack" using that SHA-1. Teach fetch-pack to also check the SHA-1s of the refs in the received ref advertisement if a literal SHA-1 was given by the user. Helped-by: Jeff King <peff@peff.net> Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Reviewed-by: Jonathan Nieder <jrnieder@gmail.com> Reviewed-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-05-08object: convert parse_object* to take struct object_idbrian m. carlson1-7/+7
Make parse_object, parse_object_or_die, and parse_object_buffer take a pointer to struct object_id. Remove the temporary variables inserted earlier, since they are no longer necessary. Transform all of the callers using the following semantic patch: @@ expression E1; @@ - parse_object(E1.hash) + parse_object(&E1) @@ expression E1; @@ - parse_object(E1->hash) + parse_object(E1) @@ expression E1, E2; @@ - parse_object_or_die(E1.hash, E2) + parse_object_or_die(&E1, E2) @@ expression E1, E2; @@ - parse_object_or_die(E1->hash, E2) + parse_object_or_die(E1, E2) @@ expression E1, E2, E3, E4, E5; @@ - parse_object_buffer(E1.hash, E2, E3, E4, E5) + parse_object_buffer(&E1, E2, E3, E4, E5) @@ expression E1, E2, E3, E4, E5; @@ - parse_object_buffer(E1->hash, E2, E3, E4, E5) + parse_object_buffer(E1, E2, E3, E4, E5) Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-05-08Convert lookup_commit* to struct object_idbrian m. carlson1-1/+1
Convert lookup_commit, lookup_commit_or_die, lookup_commit_reference, and lookup_commit_reference_gently to take struct object_id arguments. Introduce a temporary in parse_object buffer in order to convert this function. This is required since in order to convert parse_object and parse_object_buffer, lookup_commit_reference_gently and lookup_commit_or_die would need to be converted. Not introducing a temporary would therefore require that lookup_commit_or_die take a struct object_id *, but lookup_commit would take unsigned char *, leaving a confusing and hard-to-use interface. parse_object_buffer will lose this temporary in a later patch. This commit was created with manual changes to commit.c, commit.h, and object.c, plus the following semantic patch: @@ expression E1, E2; @@ - lookup_commit_reference_gently(E1.hash, E2) + lookup_commit_reference_gently(&E1, E2) @@ expression E1, E2; @@ - lookup_commit_reference_gently(E1->hash, E2) + lookup_commit_reference_gently(E1, E2) @@ expression E1; @@ - lookup_commit_reference(E1.hash) + lookup_commit_reference(&E1) @@ expression E1; @@ - lookup_commit_reference(E1->hash) + lookup_commit_reference(E1) @@ expression E1; @@ - lookup_commit(E1.hash) + lookup_commit(&E1) @@ expression E1; @@ - lookup_commit(E1->hash) + lookup_commit(E1) @@ expression E1, E2; @@ - lookup_commit_or_die(E1.hash, E2) + lookup_commit_or_die(&E1, E2) @@ expression E1, E2; @@ - lookup_commit_or_die(E1->hash, E2) + lookup_commit_or_die(E1, E2) Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-05-08shallow: convert shallow registration functions to object_idbrian m. carlson1-2/+2
Convert register_shallow and unregister_shallow to take struct object_id. register_shallow is a caller of lookup_commit, which we will convert later. It doesn't make sense for the registration and unregistration functions to have incompatible interfaces, so convert them both. Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-05-02fetch-pack: convert to struct object_idbrian m. carlson1-44/+45
Convert all uses of unsigned char [20] to struct object_id. Switch one use of get_sha1_hex to parse_oid_hex to avoid the need for a constant. This change is necessary in order to convert parse_object. Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-04-27timestamp_t: a new data type for timestampsJohannes Schindelin1-3/+3
Git's source code assumes that unsigned long is at least as precise as time_t. Which is incorrect, and causes a lot of problems, in particular where unsigned long is only 32-bit (notably on Windows, even in 64-bit versions). So let's just use a more appropriate data type instead. In preparation for this, we introduce the new `timestamp_t` data type. By necessity, this is a very, very large patch, as it has to replace all timestamps' data type in one go. As we will use a data type that is not necessarily identical to `time_t`, we need to be very careful to use `time_t` whenever we interact with the system functions, and `timestamp_t` everywhere else. Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
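A compact sketch of the pattern (the definitions are illustrative; as described in this series, timestamp_t eventually becomes uintmax_t, with PRItime as its matching printf format):

    #include <inttypes.h>
    #include <stdio.h>

    typedef uintmax_t timestamp_t;      /* dedicated type, decoupled from ulong/time_t */
    #define PRItime PRIuMAX             /* one place to keep the printf format in sync */

    int main(void)
    {
        timestamp_t t = 1234567890;
        printf("committer time: %" PRItime "\n", t);
        return 0;
    }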
2017-04-23Merge branch 'dt/xgethostname-nul-termination'Junio C Hamano1-2/+2
gethostname(2) may not NUL-terminate the buffer if the hostname does not fit; unfortunately there is no easy way to see if our buffer was too small, but at least this will make sure we will not end up using garbage past the end of the buffer. * dt/xgethostname-nul-termination: xgethostname: handle long hostnames use HOST_NAME_MAX to size buffers for gethostname(2)
2017-04-23Merge branch 'jt/fetch-pack-error-reporting'Junio C Hamano1-0/+2
"git fetch-pack" was not prepared to accept ERR packet that the upload-pack can send with a human-readable error message. It showed the packet contents with ERR prefix, so there was no data loss, but it was redundant to say "ERR" in an error message. * jt/fetch-pack-error-reporting: fetch-pack: show clearer error message upon ERR
2017-04-23PRItime: introduce a new "printf format" for timestampsJohannes Schindelin1-1/+1
Currently, Git's source code treats all timestamps as if they were unsigned longs. Therefore, it is okay to write "%lu" when printing them. There is a substantial problem with that, though: at least on Windows, time_t is *larger* than unsigned long, and hence we will want to switch away from the ill-specified `unsigned long` data type. So let's introduce the pseudo format "PRItime" (currently simply being defined to "lu") to make it easier to change the data type used for timestamps. Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-04-18xgethostname: handle long hostnamesDavid Turner1-1/+1
If the full hostname doesn't fit in the buffer supplied to gethostname, POSIX does not specify whether the buffer will be null-terminated, so to be safe, we should do it ourselves. Introduce new function, xgethostname, which ensures that there is always a \0 at the end of the buffer. Signed-off-by: David Turner <dturner@twosigma.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
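A minimal sketch of such a wrapper, assuming only POSIX gethostname(); Git's real xgethostname() lives in its wrapper code, and this stand-in only shows the forced termination:

    #include <stddef.h>
    #include <unistd.h>

    static int xgethostname_sketch(char *buf, size_t len)
    {
        int ret = gethostname(buf, len);
        if (!ret && len)
            buf[len - 1] = '\0';    /* POSIX may leave the result unterminated */
        return ret;
    }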
2017-04-18use HOST_NAME_MAX to size buffers for gethostname(2)René Scharfe1-1/+1
POSIX limits the length of host names to HOST_NAME_MAX. Export the fallback definition from daemon.c and use this constant to make all buffers used with gethostname(2) big enough for any possible result and a terminating NUL. Inspired-by: David Turner <dturner@twosigma.com> Signed-off-by: Rene Scharfe <l.s.r@web.de> Signed-off-by: David Turner <dturner@twosigma.com> Reviewed-by: Jonathan Nieder <jrnieder@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-04-17fetch-pack: show clearer error message upon ERRJonathan Tan1-0/+2
Currently, fetch-pack prints a confusing error message ("expected ACK/NAK") when the server it's communicating with sends a pkt-line starting with "ERR". Replace it with a less confusing error message. Also update the documentation describing the fetch-pack/upload-pack protocol (pack-protocol.txt) to indicate that "ERR" can be sent in the place of "ACK" or "NAK". In practice, this has been done for quite some time by other Git implementations (e.g. JGit sends "want $id not valid") and by Git itself (since commit bdb31ea: "upload-pack: report "not our ref" to client", 2017-02-23) whenever a "want" line references an object that it does not have. (This is uncommon, but can happen if a repository is garbage-collected during a negotiation.) Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Reviewed-by: Jonathan Nieder <jrnieder@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
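Roughly, the parsing change has the following shape (a stand-alone sketch; the helper names and the local die() shim are stand-ins for fetch-pack's internals):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static void die_sketch(const char *msg)
    {
        fprintf(stderr, "fatal: %s\n", msg);
        exit(128);
    }

    static void check_ack_line(const char *line)
    {
        if (!strncmp(line, "ERR ", 4))
            die_sketch(line + 4);            /* surface the server's own message */
        if (strncmp(line, "ACK ", 4) && strcmp(line, "NAK"))
            die_sketch("expected ACK/NAK");  /* the old, confusing catch-all */
        /* ... otherwise parse the ACK/NAK as before ... */
    }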
2017-03-31Rename sha1_array to oid_arraybrian m. carlson1-13/+13
Since this structure handles an array of object IDs, rename it to struct oid_array. Also rename the accessor functions and the initialization constant. This commit was produced mechanically by providing non-Documentation files to the following Perl one-liners: perl -pi -E 's/struct sha1_array/struct oid_array/g' perl -pi -E 's/\bsha1_array_/oid_array_/g' perl -pi -E 's/SHA1_ARRAY_INIT/OID_ARRAY_INIT/g' Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-03-31Make sha1_array_append take a struct object_id *brian m. carlson1-4/+4
Convert the callers to pass struct object_id by changing the function declaration and definition and applying the following semantic patch: @@ expression E1, E2; @@ - sha1_array_append(E1, E2.hash) + sha1_array_append(E1, &E2) @@ expression E1, E2; @@ - sha1_array_append(E1, E2->hash) + sha1_array_append(E1, E2) Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-03-28sha1-array: convert internal storage for struct sha1_array to object_idbrian m. carlson1-6/+6
Make the internal storage for struct sha1_array use an array of struct object_id internally. Update the users of this struct which inspect its internals. Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-03-14Merge branch 'mm/fetch-show-error-message-on-unadvertised-object'Junio C Hamano1-13/+38
"git fetch" that requests a commit by object name, when the other side does not allow such an request, failed without much explanation. * mm/fetch-show-error-message-on-unadvertised-object: fetch-pack: add specific error for fetching an unadvertised object fetch_refs_via_pack: call report_unmatched_refs fetch-pack: move code to report unmatched refs to a function
2017-03-02fetch-pack: add specific error for fetching an unadvertised objectMatt McCutchen1-15/+27
Enhance filter_refs (which decides whether a request for an unadvertised object should be sent to the server) to record a new match status on the "struct ref" when a request is not allowed, and have report_unmatched_refs check for this status and print a special error message, "Server does not allow request for unadvertised object". Signed-off-by: Matt McCutchen <matt@mattmccutchen.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-03-02fetch-pack: move code to report unmatched refs to a functionMatt McCutchen1-0/+13
Prepare to reuse this code in transport.c for "git fetch". While we're here, internationalize the existing error message. Signed-off-by: Matt McCutchen <matt@mattmccutchen.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-02-08fetch-pack: cache results of for_each_alternate_refJeff King1-10/+42
We may run for_each_alternate_ref() twice, once in find_common() and once in everything_local(). This operation can be expensive, because it involves running a sub-process which must freshly load all of the alternate's refs from disk. Let's cache and reuse the results between the two calls. We can make some optimizations based on the particular use pattern in fetch-pack to keep our memory usage down. The first is that we only care about the sha1s, not the refs themselves. So it's OK to store only the sha1s, and to suppress duplicates. The natural fit would therefore be a sha1_array. However, sha1_array's de-duplication happens only after it has read and sorted all entries. It still stores each duplicate. For an alternate with a large number of refs pointing to the same commits, this is a needless expense. Instead, we'd prefer to eliminate duplicates before putting them in the cache, which implies using a hash. We can further note that fetch-pack will call parse_object() on each alternate sha1. We can therefore keep our cache as a set of pointers to "struct object". That gives us a place to put our "already seen" bit with an optimized hash lookup. And as a bonus, the object stores the sha1 for us, so pointer-to-object is all we need. There are two extra optimizations I didn't do here: - we actually store an array of pointer-to-object. Technically we could just walk the obj_hash table looking for entries with the ALTERNATE flag set (because our use case doesn't care about the order here). But that hash table may be mostly composed of non-ALTERNATE entries, so we'd waste time walking over them. So it would be a slight win in memory use, but a loss in CPU. - the items we pull out of the cache are actual "struct object"s, but then we feed "obj->sha1" to our sub-functions, which promptly call parse_object(). This second parse is cheap, because it starts with lookup_object() and will bail immediately when it sees we've already parsed the object. We could save the extra hash lookup, but it would involve refactoring the functions we call. It may or may not be worth the trouble. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
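A simplified, self-contained sketch of the caching scheme described above (the flag bit and structs are stand-ins for Git's object flags and in-core objects; error handling omitted): a per-object "already seen" bit de-duplicates entries before anything is appended to the cache array.

    #include <stdlib.h>

    #define ALTERNATE_SEEN (1u << 20)       /* hypothetical flag bit */

    struct obj_sketch {
        unsigned flags;
        unsigned char sha1[20];
    };

    struct alt_cache_sketch {
        struct obj_sketch **items;
        size_t nr, alloc;
    };

    static void cache_one_alternate_sketch(struct alt_cache_sketch *cache,
                                           struct obj_sketch *obj)
    {
        if (obj->flags & ALTERNATE_SEEN)
            return;                  /* many refs may point at the same object */
        obj->flags |= ALTERNATE_SEEN;
        if (cache->nr == cache->alloc) {
            cache->alloc = cache->alloc ? 2 * cache->alloc : 64;
            cache->items = realloc(cache->items,
                                   cache->alloc * sizeof(*cache->items));
        }
        cache->items[cache->nr++] = obj;
    }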
2017-02-08for_each_alternate_ref: pass name/oid instead of ref structJeff King1-4/+8
Breaking down the fields in the interface makes it easier to change the backend of for_each_alternate_ref to something that doesn't use "struct ref" internally. The only field that callers actually look at is the oid, anyway. The refname is kept in the interface as a plausible thing for future code to want. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-11-11fetch-pack.c: correct command at the beginning of an error messageRalf Thielow1-1/+1
One error message in fetch-pack.c uses 'git fetch_pack' at the beginning which is not a git command. Use 'git fetch-pack' instead. Signed-off-by: Ralf Thielow <ralf.thielow@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-10-10Merge branch 'nd/shallow-deepen'Junio C Hamano1-73/+96
The existing "git fetch --depth=<n>" option was hard to use correctly when making the history of an existing shallow clone deeper. A new option, "--deepen=<n>", has been added to make this easier to use. "git clone" also learned "--shallow-since=<date>" and "--shallow-exclude=<tag>" options to make it easier to specify "I am interested only in the recent N months worth of history" and "Give me only the history since that version". * nd/shallow-deepen: (27 commits) fetch, upload-pack: --deepen=N extends shallow boundary by N commits upload-pack: add get_reachable_list() upload-pack: split check_unreachable() in two, prep for get_reachable_list() t5500, t5539: tests for shallow depth excluding a ref clone: define shallow clone boundary with --shallow-exclude fetch: define shallow boundary with --shallow-exclude upload-pack: support define shallow boundary by excluding revisions refs: add expand_ref() t5500, t5539: tests for shallow depth since a specific date clone: define shallow clone boundary based on time with --shallow-since fetch: define shallow boundary with --shallow-since upload-pack: add deepen-since to cut shallow repos based on time shallow.c: implement a generic shallow boundary finder based on rev-list fetch-pack: use a separate flag for fetch in deepening mode fetch-pack.c: mark strings for translating fetch-pack: use a common function for verbose printing fetch-pack: use skip_prefix() instead of starts_with() upload-pack: move rev-list code out of check_non_tip() upload-pack: make check_non_tip() clean things up on error upload-pack: tighten number parsing at "deepen" lines ...
2016-10-10Merge branch 'rs/qsort'Junio C Hamano1-1/+1
We call "qsort(array, nelem, sizeof(array[0]), fn)", and most of the time third parameter is redundant. A new QSORT() macro lets us omit it. * rs/qsort: show-branch: use QSORT use QSORT, part 2 coccicheck: use --all-includes by default remove unnecessary check before QSORT use QSORT add QSORT
2016-09-29use QSORTRené Scharfe1-1/+1
Apply the semantic patch contrib/coccinelle/qsort.cocci to the code base, replacing calls of qsort(3) with QSORT. The resulting code is shorter and supports empty arrays with NULL pointers. Signed-off-by: Rene Scharfe <l.s.r@web.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-09-23fetch-pack: do not reset in_vain on non-novel acksJonathan Tan1-2/+9
The MAX_IN_VAIN mechanism was introduced in commit f061e5f ("fetch-pack: give up after getting too many "ack continue"", 2006-05-24) to stop ref negotiation if a number of consecutive "have"s have been sent with no corresponding new acks. This is to stop the client from digging too deep in an irrelevant side branch in vain without ever finding a common ancestor. A use case (as described in that commit) is the scenario in which the local repository has more roots than the remote repository. However, during a negotiation in which stateless RPCs are used, MAX_IN_VAIN will (almost) never trigger (in the more-roots scenario above and others) because in each new request, the client has to inform the server of objects it already has and knows the server has (to remind the server of the state), which the server then acks. Make fetch-pack only consider, as new acks for the purpose of MAX_IN_VAIN, acks for objects for which the client has never received an ack before in this session. Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-07-19fetch-pack: grow stateless RPC windows exponentiallyJonathan Tan1-7/+12
When updating large repositories, the LARGE_FLUSH limit (that is, the limit at which the window growth strategy switches from exponential to linear) is reached quite quickly. Use a conservative exponential growth strategy when that limit is reached instead (and increase LARGE_FLUSH so that there is no regression in window size). This optimization is only applied during stateless RPCs to avoid the issue raised and fixed in commit 44d8dc54 (Fix potential local deadlock during fetch-pack, 2011-03-29). Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Reviewed-by: Jonathan Nieder <jrnieder@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
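The resulting policy is essentially the following sketch (the constants are illustrative rather than fetch-pack's exact values): exponential growth over stateless RPC until LARGE_FLUSH, then conservative ~10% growth, while the bidirectional case keeps the exponential-then-linear behaviour to avoid the deadlock mentioned above.

    #define PIPESAFE_FLUSH 32
    #define LARGE_FLUSH 16384                /* illustrative values */

    static int next_flush_sketch(int stateless_rpc, int count)
    {
        if (stateless_rpc) {
            if (count < LARGE_FLUSH)
                count <<= 1;             /* exponential while the window is small */
            else
                count = count * 11 / 10; /* conservative growth afterwards */
        } else {
            if (count < PIPESAFE_FLUSH)
                count <<= 1;
            else
                count += PIPESAFE_FLUSH; /* linear, to avoid local deadlock */
        }
        return count;
    }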
2016-06-13fetch, upload-pack: --deepen=N extends shallow boundary by N commitsNguyễn Thái Ngọc Duy1-0/+3
In git-fetch, the --depth argument is always relative to the latest remote refs. This makes it a bit difficult to cover the use case where the user wants to make the shallow history, say, 3 levels deeper. It would work if remote refs have not moved yet, but nobody can guarantee that, especially when that use case is performed a couple of months after the last clone or "git fetch --depth". Also, modifying the shallow boundary using --depth does not work well with clones created by --since or --not. This patch fixes that. A new argument --deepen=<N> will add <N> more (*) parent commits to the current history regardless of where remote refs are. Have/Want negotiation is still respected. So if remote refs move, the server will send two chunks: one between "have" and "want" and another to extend shallow history. In theory, the client could send no "want"s in order to get the second chunk only. But the protocol does not allow that. Either you send no want lines, which means ls-remote; or you have to send at least one want line that carries deep-relative to the server. The main work was done by Dongcan Jiang. I fixed it up here and there. And of course all the bugs belong to me. (*) We could even support --deepen=<N> where <N> is negative. In that case we can cut some history from the shallow clone. This operation (and --depth=<shorter depth>) does not require interaction with the remote side (and is more complicated to implement as a result). Helped-by: Duy Nguyen <pclouds@gmail.com> Helped-by: Eric Sunshine <sunshine@sunshineco.com> Helped-by: Junio C Hamano <gitster@pobox.com> Signed-off-by: Dongcan Jiang <dongcan.jiang@gmail.com> Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-06-13fetch: define shallow boundary with --shallow-excludeNguyễn Thái Ngọc Duy1-1/+14
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-06-13fetch: define shallow boundary with --shallow-sinceNguyễn Thái Ngọc Duy1-1/+11
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-06-13fetch-pack: use a separate flag for fetch in deepening modeNguyễn Thái Ngọc Duy1-6/+8
The shallow repo could be deepened or shortened when the user gives --depth. But in the future that won't be the only way to deepen/shorten a repo. Stop relying on args->depth in this mode. Future deepening methods can simply turn this flag on instead of updating all these if expressions. The new name "deepen" was chosen after the command to define the shallow boundary in the pack protocol. New commands also follow this tradition. Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-06-13fetch-pack.c: mark strings for translatingNguyễn Thái Ngọc Duy1-38/+37
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-06-13fetch-pack: use a common function for verbose printingNguyễn Thái Ngọc Duy1-46/+42
This reduces the number of "if (verbose)" which makes it a bit easier to read imo. It also makes it easier to redirect all these printouts, to a file for example. Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-04-20fetch-pack: isolate sigpipe in demuxer threadJeff King1-3/+1
In commit 9ff18fa (fetch-pack: ignore SIGPIPE in sideband demuxer, 2016-02-24), we started using sigchain_push() to ignore SIGPIPE in the async demuxer thread. However, this is rather clumsy, as it ignores SIGPIPE for the entire process, including the main thread. At the time we didn't have any per-thread signal support, but now we do. Let's use it. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-02-25fetch-pack: ignore SIGPIPE in sideband demuxerJeff King1-1/+5
If the other side feeds us a bogus pack, index-pack (or unpack-objects) may die early, before consuming all of its input. As a result, the sideband demuxer may get SIGPIPE (racily, depending on whether our data made it into the pipe buffer or not). If this happens and we are compiled with pthread support, it will take down the main thread, too. This isn't the end of the world, as the main process will just die() anyway when it sees index-pack failed. But it does mean we don't get a chance to say "fatal: index-pack failed" or similar. And it also means that we racily fail t5504, as we sometimes die() and sometimes are killed by SIGPIPE. So let's ignore SIGPIPE while demuxing the sideband. We are already careful to check the return value of write(), so we won't waste time writing to a broken pipe. The caller will notice the error return from the async thread, though in practice we don't even get that far, as we die() as soon as we see that index-pack failed. The non-sideband case is already fine; we let index-pack read straight from the socket, so there is no SIGPIPE at all. Technically the non-threaded async case is also OK without this (the forked async process gets SIGPIPE), but it's not worth distinguishing from the threaded case here. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
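A minimal sketch of the shape of the fix using plain POSIX signal handling (Git itself uses sigchain_push()/sigchain_pop(), and the later commit above moves this to a per-thread mechanism):

    #include <signal.h>

    static void demux_sideband(void)
    {
        /* ... copy sideband data, carefully checking write() return values ... */
    }

    static void demux_ignoring_sigpipe(void)
    {
        void (*old)(int) = signal(SIGPIPE, SIG_IGN);
        demux_sideband();        /* a dying index-pack now yields EPIPE, not SIGPIPE death */
        signal(SIGPIPE, old);
    }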
2015-11-20Remove get_object_hash.brian m. carlson1-2/+2
Convert all instances of get_object_hash to use an appropriate reference to the hash member of the oid member of struct object. This provides no functional change, as it is essentially a macro substitution. Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Jeff King <peff@peff.net>
2015-11-20Convert struct object to object_idbrian m. carlson1-1/+1
struct object is one of the major data structures dealing with object IDs. Convert it to use struct object_id instead of an unsigned char array. Convert get_object_hash to refer to the new member as well. Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Jeff King <peff@peff.net>
2015-11-20Add several uses of get_object_hash.brian m. carlson1-2/+2
Convert most instances where the sha1 member of struct object is dereferenced to use get_object_hash. Most instances that are passed to functions that have versions taking struct object_id, such as get_sha1_hex/get_oid_hex, or instances that can be trivially converted to use struct object_id instead, are not converted. Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Jeff King <peff@peff.net>
2015-11-20Convert struct ref to use object_id.brian m. carlson1-9/+9
Use struct object_id in three fields in struct ref and convert all the necessary places that use it. Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Jeff King <peff@peff.net>
2015-10-05fetch-pack: use argv_array for index-pack / unpack-objectsJeff King1-29/+27
This cleans up a magic number that must be kept in sync with the rest of the code (the number of argv slots). It also lets us drop some fixed buffers and an sprintf (since we can now use argv_array_pushf). We do still have to keep one fixed buffer for calling gethostname, but at least now the size computations for it are much simpler. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-08-10memoize common git-path "constant" filesJeff King1-1/+1
One of the most common uses of git_path() is to pass a constant, like git_path("MERGE_MSG"). This has two drawbacks: 1. The return value is a static buffer, and the lifetime is dependent on other calls to git_path, etc. 2. There's no compile-time checking of the pathname. This is OK for a one-off (after all, we have to spell it correctly at least once), but many of these constant strings appear throughout the code. This patch introduces a series of functions to "memoize" these strings, which are essentially globals for the lifetime of the program. We compute the value once, take ownership of the buffer, and return the cached value for subsequent calls. cache.h provides a helper macro for defining these functions as one-liners, and defines a few common ones for global use. Using a macro is a little bit gross, but it does nicely document the purpose of the functions. If we need to touch them all later (e.g., because we learned how to change the git_dir variable at runtime, and need to invalidate all of the stored values), it will be much easier to have the complete list. Note that the shared-global functions have separate, manual declarations. We could do something clever with the macros (e.g., expand it to a declaration in some places, and a declaration _and_ a definition in path.c). But there aren't that many, and it's probably better to stay away from too-magical macros. Likewise, if we abandon the C preprocessor in favor of generating these with a script, we could get much fancier. E.g., normalizing "FOO/BAR-BAZ" into "git_path_foo_bar_baz". But the small amount of saved typing is probably not worth the resulting confusion to readers who want to grep for the function's definition. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
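The pattern boils down to something like this sketch (the macro and helper are illustrative stand-ins, not path.c's exact definitions): compute once, take ownership of the buffer, and return the same string on every later call.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Stand-in for Git's path construction: build ".git/<name>" in a fresh buffer. */
    static char *compute_git_path_sketch(const char *name)
    {
        char buf[4096];
        snprintf(buf, sizeof(buf), ".git/%s", name);
        return strdup(buf);
    }

    #define GIT_PATH_FUNC_SKETCH(func, filename) \
        static const char *func(void) \
        { \
            static char *ret; \
            if (!ret) \
                ret = compute_git_path_sketch(filename); \
            return ret; \
        }

    GIT_PATH_FUNC_SKETCH(git_path_merge_msg_sketch, "MERGE_MSG")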
2015-07-01Merge branch 'me/fetch-into-shallow-safety'Junio C Hamano1-1/+1
"git fetch --depth=<depth>" and "git clone --depth=<depth>" issued a shallow transfer request even to an upload-pack that does not support the capability. * me/fetch-into-shallow-safety: fetch-pack: check for shallow if depth given
2015-06-17fetch-pack: check for shallow if depth givenMike Edgar1-1/+1
When a repository is first fetched as a shallow clone, either by git-clone or by fetching into an empty repo, the server's capabilities are not currently consulted. The client will send shallow requests even if the server does not understand them, and the resulting error may be unhelpful to the user. This change pre-emptively checks so we can exit with a helpful error if necessary. Signed-off-by: Mike Edgar <adgar@google.com> Reviewed-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-06-05Merge branch 'bc/object-id'Junio C Hamano1-8/+21
for_each_ref() callback functions were taught to name the objects not with "unsigned char sha1[20]" but with "struct object_id". * bc/object-id: (56 commits) struct ref_lock: convert old_sha1 member to object_id warn_if_dangling_symref(): convert local variable "junk" to object_id each_ref_fn_adapter(): remove adapter rev_list_insert_ref(): remove unneeded arguments rev_list_insert_ref_oid(): new function, taking an object_oid mark_complete(): remove unneeded arguments mark_complete_oid(): new function, taking an object_oid clear_marks(): rewrite to take an object_id argument mark_complete(): rewrite to take an object_id argument send_ref(): convert local variable "peeled" to object_id upload-pack: rewrite functions to take object_id arguments find_symref(): convert local variable "unused" to object_id find_symref(): rewrite to take an object_id argument write_one_ref(): rewrite to take an object_id argument write_refs_to_temp_dir(): convert local variable sha1 to object_id submodule: rewrite to take an object_id argument shallow: rewrite functions to take object_id arguments handle_one_ref(): rewrite to take an object_id argument add_info_ref(): rewrite to take an object_id argument handle_one_reflog(): rewrite to take an object_id argument ...
2015-05-25rev_list_insert_ref(): remove unneeded argumentsMichael Haggerty1-3/+3
Now that the function is not being used as an each_ref_sha1_fn, we can delete the unused arguments in its signature. Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu> Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-05-25rev_list_insert_ref_oid(): new function, taking an object_oidMichael Haggerty1-3/+7
This function can be used with for_each_ref() without having to be wrapped. Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu> Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-05-25mark_complete(): remove unneeded argumentsMichael Haggerty1-3/+3
Now that the function is not being used as an each_ref_sha1_fn, we can delete the unused arguments in its signature. Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu> Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-05-25mark_complete_oid(): new function, taking an object_oidMichael Haggerty1-4/+7
This function can be used with for_each_ref() without having to be wrapped. Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu> Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-05-25clear_marks(): rewrite to take an object_id argumentMichael Haggerty1-8/+5
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu> Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-05-25each_ref_fn: change to take an object_id parameterMichael Haggerty1-4/+13
Change typedef each_ref_fn to take a "const struct object_id *oid" parameter instead of "const unsigned char *sha1". To aid this transition, implement an adapter that can be used to wrap old-style functions matching the old typedef (which is now called "each_ref_sha1_fn"), and make such functions callable via the new interface. This requires the old function and its cb_data to be wrapped in a "struct each_ref_fn_sha1_adapter", and that object to be used as the cb_data for an adapter function, each_ref_fn_adapter(). This is an enormous diff, but most of it consists of simple, mechanical changes to the sites that call any of the "for_each_ref" family of functions. Subsequent to this change, the call sites can be rewritten one by one to use the new interface. Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu> Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
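In outline, the adapter works like this self-contained sketch (shortened stand-in types, not the real refs.c definitions):

    struct oid_sk { unsigned char hash[20]; };

    typedef int each_ref_fn_sk(const char *refname, const struct oid_sk *oid,
                               int flags, void *cb_data);
    typedef int each_ref_sha1_fn_sk(const char *refname, const unsigned char *sha1,
                                    int flags, void *cb_data);

    /* Wraps an old-style callback plus its cb_data ... */
    struct sha1_adapter_sk {
        each_ref_sha1_fn_sk *fn;
        void *cb_data;
    };

    /* ... and is itself passed as cb_data to this new-style callback. */
    static int each_ref_fn_adapter_sk(const char *refname, const struct oid_sk *oid,
                                      int flags, void *cb_data)
    {
        struct sha1_adapter_sk *a = cb_data;
        return a->fn(refname, oid->hash, flags, a->cb_data);
    }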
2015-05-22upload-pack: optionally allow fetching reachable sha1Fredrik Medley1-1/+9
With the uploadpack.allowReachableSHA1InWant configuration option set on the server side, "git fetch" can make a request with a "want" line that names an object that has not been advertised (likely to have been obtained out of band or from a submodule pointer). Only objects reachable from the branch tips, i.e. the union of advertised branches and branches hidden by transfer.hideRefs, will be processed. Note that there is an associated cost of having to walk back the history to check the reachability. This feature can be used when obtaining the content of a certain commit, for which the sha1 is known, without the need to clone the whole repository, especially if a shallow fetch is used. Useful cases are e.g. repositories containing large files in the history, fetching only the needed data for a submodule checkout, sharing a sha1 without telling which exact branch it belongs to, and, in Gerrit, thinking in terms of commits instead of change numbers. (The Gerrit case has already been solved through allowTipSHA1InWant as every Gerrit change has a ref.) Signed-off-by: Fredrik Medley <fredrik.medley@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-05-22upload-pack: prepare to extend allow-tip-sha1-in-wantFredrik Medley1-3/+6
To allow future extensions, e.g. allowing non-tip sha1, replace the boolean allow_tip_sha1_in_want variable with the flag-style allow_request_with_bare_object_name variable. Signed-off-by: Fredrik Medley <fredrik.medley@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-03-19fetch-pack: remove dead assignment to ref->new_sha1Jeff King1-3/+0
In everything_local(), we used to assign the current ref's value found in ref->old_sha1 to ref->new_sha1 when we already have all the necessary objects to complete the history leading to that commit. This copying was broken at 49bb805e (Do not ask for objects known to be complete., 2005-10-19) and ever since we instead stuffed random bytes into ref->new_sha1 here. No code complained or failed due to this breakage. It turns out that no code path that comes after this assignment even looks at ref->new_sha1 at all. - The only caller of everything_local(), do_fetch_pack(), returns this list of refs, whose elements have bogus new_sha1 values, to its caller. It does not look at the elements itself, but does pass them to find_common, which looks only at the name and old_sha1 fields. - The only caller of do_fetch_pack(), fetch_pack(), returns this list to its caller. It does not look at the elements nor act on them. - One of the two callers of fetch_pack() is cmd_fetch_pack(), the top-level that implements "git fetch-pack". The only thing it looks at in the elements of the returned ref list is the old_sha1 and name fields. - The other caller of fetch_pack() is fetch_refs_via_pack() in the transport layer, which is a helper that implements "git fetch". It only cares about whether the returned list is empty (i.e. failed to fetch anything). Just drop the bogus assignment, which is not even necessary. The remote-tracking refs are updated based on a different list and not using the ref list being manipulated by this code path; the caller do_fetch_pack() created a copy of that real ref list and passed the copy down to this function, and modifying the elements here does not affect anything. Noticed-by: Kyle J. McKay <mackyle@gmail.com> Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-03-19filter_ref: make a copy of extra "sought" entriesJeff King1-3/+2
If the server supports allow_tip_sha1_in_want, we add any unmatched raw-sha1 entries in our "sought" list of refs to the list of refs we will ask the other side for. We do so by inserting the original "struct ref" directly into our list, rather than making a copy. This has several problems. The most minor problem is that one cannot ever free the resulting list; it contains structs that are copies of the remote refs (made earlier by fetch_pack) along with sought refs that are referenced elsewhere. But more importantly, we set the ref->next pointer to NULL, chopping off the remainder of any existing list that the ref was a part of. We get the set of "sought" refs in an array rather than a linked list, but that array is often in turn generated from a list. The test modification in t5516 demonstrates this. Rather than fetching just an exact sha1, we fetch that sha1 plus another ref: - we build a linked list of refs to fetch when do_fetch calls get_ref_map; the exact sha1 is first, followed by the named ref ("refs/heads/extra" in this case). - we pass that linked list to transport_fetch_ref, which squashes it into an array of pointers - that array goes to fetch_pack, which calls filter_ref. There we generate the want list from a mix of what the remote side has advertised, and the "sought" entry for the exact sha1. We set the sought entry's "next" pointer to NULL. - after we return from transport_fetch_refs, we then try to update the refs by following the linked list. But our list is now truncated, and we do not update refs/heads/extra at all. We can fix this by making a copy of the ref. There's nothing that fetch_pack does to it that must be reflected in the original "sought" list (and indeed, if that were the case we would have a serious bug, because it is only exact-sha1 entries which are treated this way). Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-03-19filter_ref: avoid overwriting ref->old_sha1 with garbageJeff King1-1/+5
If the server supports allow_tip_sha1_in_want, then fetch-pack's filter_refs function tries to check whether a ref is a request for a straight sha1 by running: if (get_sha1_hex(ref->name, ref->old_sha1)) ... I.e., we are using get_sha1_hex to ask "is this ref name a sha1?". If it is true, then the contents of ref->old_sha1 will end up unchanged. But if it is false, then get_sha1_hex makes no guarantees about what it has written. With a ref name like "abcdefoo", we would overwrite 3 bytes of ref->old_sha1 before realizing that it was not a sha1. This is likely not a problem in practice, as anything in refs->name (besides a sha1) will start with "refs/", meaning that we would notice on the first character that there is a problem. Still, we are making assumptions about the state left in the output when get_sha1_hex returns an error (e.g., it could start from the end of the string, or error check the values only once they were placed in the output). It's better to be defensive. We could just check that we have exactly 40 characters of sha1. But let's be even more careful and make sure that we have a 40-char hex refname that matches what is in old_sha1. This is perhaps overly defensive, but spells out our assumptions clearly. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
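The check amounts to something like this self-contained sketch (helper and parameter names are stand-ins for fetch-pack's): only a full 40-hex-digit name that decodes back to the recorded old_sha1 is treated as a literal object-name request.

    #include <string.h>

    static int hexval_sketch(char c)
    {
        if (c >= '0' && c <= '9') return c - '0';
        if (c >= 'a' && c <= 'f') return c - 'a' + 10;
        if (c >= 'A' && c <= 'F') return c - 'A' + 10;
        return -1;
    }

    static int is_literal_sha1_request(const char *name, const unsigned char old_sha1[20])
    {
        unsigned char parsed[20];
        int i;

        if (strlen(name) != 40)          /* must be exactly one full object name */
            return 0;
        for (i = 0; i < 20; i++) {
            int hi = hexval_sketch(name[2 * i]);
            int lo = hexval_sketch(name[2 * i + 1]);
            if (hi < 0 || lo < 0)
                return 0;
            parsed[i] = (unsigned char)((hi << 4) | lo);
        }
        return !memcmp(parsed, old_sha1, 20);   /* ... and it must match old_sha1 */
    }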
2014-10-01lockfile.h: extract new header file for the functions in lockfile.cMichael Haggerty1-0/+1
Move the interface declaration for the functions in lockfile.c from cache.h to a new file, lockfile.h. Add #includes where necessary (and remove some redundant includes of cache.h by files that already include builtin.h). Move the documentation of the lock_file state diagram from lockfile.c to the new header file. Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-09-11Merge branch 'rs/child-process-init'Junio C Hamano1-2/+1
Code clean-up. * rs/child-process-init: run-command: inline prepare_run_command_v_opt() run-command: call run_command_v_opt_cd_env() instead of duplicating it run-command: introduce child_process_init() run-command: introduce CHILD_PROCESS_INIT
2014-08-20run-command: introduce CHILD_PROCESS_INITRené Scharfe1-2/+1
Most struct child_process variables are cleared using memset first after declaration. Provide a macro, CHILD_PROCESS_INIT, that can be used to initialize them statically instead. That's shorter, doesn't require a function call and is slightly more readable (especially given that we already have STRBUF_INIT, ARGV_ARRAY_INIT etc.). Helped-by: Johannes Sixt <j6t@kdbg.org> Signed-off-by: Rene Scharfe <l.s.r@web.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
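The idea in miniature, with a stand-in struct rather than Git's struct child_process:

    struct child_sketch {
        const char **argv;
        int in, out, err;
        unsigned no_stdin : 1;
    };

    #define CHILD_SKETCH_INIT { 0 }      /* zero-initialize every member statically */

    static void spawn_something(void)
    {
        struct child_sketch cp = CHILD_SKETCH_INIT;  /* no memset() needed */
        (void)cp;                                    /* ... fill in argv and run ... */
    }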
2014-08-07fetchpack.c: replace `git_config()` with `git_config_get_*()` familyTanay Abhra1-27/+8
Use the `git_config_get_*()` family instead of `git_config()` to take advantage of the config-set API, which provides a cleaner control flow. Signed-off-by: Tanay Abhra <tanayabh@gmail.com> Reviewed-by: Matthieu Moy <Matthieu.Moy@imag.fr> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-07-09Merge branch 'jk/skip-prefix'Junio C Hamano1-10/+14
* jk/skip-prefix: http-push: refactor parsing of remote object names imap-send: use skip_prefix instead of using magic numbers use skip_prefix to avoid repeated calculations git: avoid magic number with skip_prefix fetch-pack: refactor parsing in get_ack fast-import: refactor parsing of spaces stat_opt: check extra strlen call daemon: use skip_prefix to avoid magic numbers fast-import: use skip_prefix for parsing input use skip_prefix to avoid repeating strings use skip_prefix to avoid magic numbers transport-helper: avoid reading past end-of-string fast-import: fix read of uninitialized argv memory apply: use skip_prefix instead of raw addition refactor skip_prefix to return a boolean avoid using skip_prefix as a boolean daemon: mark some strings as const parse_diff_color_slot: drop ofs parameter
2014-06-20fetch-pack: refactor parsing in get_ackJeff King1-6/+9
There are several uses of the magic number "line+45" when parsing ACK lines from the server, and it's rather unclear why 45 is the correct number. We can make this more clear by keeping a running pointer as we parse, using skip_prefix to jump past the first "ACK ", then adding 40 to jump past get_sha1_hex (which is still magical, but hopefully 40 is less magical to readers of git code). Note that this actually puts us at line+44. The original required some character between the sha1 and further ACK flags (it is supposed to be a space, but we never enforced that). We start our search for flags at line+44, which means we are slightly more liberal than the old code. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
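A self-contained sketch of the resulting parsing style (skip_prefix here is a local stand-in with the same return-a-boolean contract described in the jk/skip-prefix series; length validation is omitted):

    #include <string.h>

    static int skip_prefix_sketch(const char *str, const char *prefix, const char **out)
    {
        size_t len = strlen(prefix);
        if (strncmp(str, prefix, len))
            return 0;
        *out = str + len;
        return 1;
    }

    /* Returns a pointer just past "ACK <40-hex-id>", or NULL for a non-ACK line. */
    static const char *ack_flags_sketch(const char *line)
    {
        const char *p;
        if (!skip_prefix_sketch(line, "ACK ", &p))
            return NULL;
        return p + 40;   /* 40 = length of the hex object name; still magic, but obvious */
    }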
2014-06-20use skip_prefix to avoid magic numbersJeff King1-4/+5
It's a common idiom to match a prefix and then skip past it with a magic number, like: if (starts_with(foo, "bar")) foo += 3; This is easy to get wrong, since you have to count the prefix string yourself, and there's no compiler check if the string changes. We can use skip_prefix to avoid the magic numbers here. Note that some of these conversions could be much shorter. For example: if (starts_with(arg, "--foo=")) { bar = arg + 6; continue; } could become: if (skip_prefix(arg, "--foo=", &bar)) continue; However, I have left it as: if (skip_prefix(arg, "--foo=", &v)) { bar = v; continue; } to visually match nearby cases which need to actually process the string. Like: if (skip_prefix(arg, "--foo=", &v)) { bar = atoi(v); continue; } Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-06-09Use starts_with() for C strings instead of memcmp()René Scharfe1-1/+1
Convert three cases of checking for a constant prefix using memcmp() to starts_with(). This way there is no need for magic string length constants and we avoid running over the end of the string should it be shorter than the prefix. Signed-off-by: Rene Scharfe <l.s.r@web.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-04-03Merge branch 'nd/log-show-linear-break'Junio C Hamano1-0/+1
Attempts to show where a single strand-of-pearls breaks in "git log" output. * nd/log-show-linear-break: log: add --show-linear-break to help see non-linear history object.h: centralize object flag allocation
2014-03-25object.h: centralize object flag allocationNguyễn Thái Ngọc Duy1-0/+1
While the field "flags" is mainly used by the revision walker, it is also used in many other places. Centralize the whole flag allocation to one place for a better overview (and easier to move flags if we have too). Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-03-21Merge branch 'jk/shallow-update-fix'Junio C Hamano1-11/+0
Serving objects from a shallow repository needs to write a new file to hold the temporary shallow boundaries, but that file was not cleaned up when we exited due to die() or a signal. * jk/shallow-update-fix: shallow: verify shallow file after taking lock shallow: automatically clean up shallow tempfiles shallow: use stat_validity to check for up-to-date file
2014-02-27shallow: automatically clean up shallow tempfilesJeff King1-11/+0
We sometimes write tempfiles of the form "shallow_XXXXXX" during fetch/push operations with shallow repositories. Under normal circumstances, we clean up the result when we are done. However, we do not take steps to clean up after ourselves when we exit due to die() or signal death. This patch teaches the tempfile creation code to register handlers to clean up after ourselves. To handle this, we change the ownership semantics of the filename returned by setup_temporary_shallow. It now keeps a copy of the filename itself, and returns only a const pointer to it. We can also do away with explicit tempfile removal in the callers. They all exit not long after finishing with the file, so they can rely on the auto-cleanup, simplifying the code. Note that we keep things simple and maintain only a single filename to be cleaned. This is sufficient for the current caller, but we future-proof it with a die("BUG"). Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-02-10fetch-pack: fix deepen shallow over smart http with no-done capNguyễn Thái Ngọc Duy1-1/+2
In smart http, upload-pack adds new shallow lines at the beginning of each rpc response. Only shallow lines from the first rpc call are useful. After that they are thrown away. It's designed this way because upload-pack is stateless and has no idea when its shallow lines are helpful or not. So after refs are negotiated with multi_ack_detailed and the server thinks it learned enough, it sends "ACK obj-id ready", terminates the rpc call and waits for the final rpc round. The client sends "done". The server sends another response, which also has shallow lines at the beginning, and the last "ACK obj-id" line. When no-done is active, the last round is cut out, the server sends "ACK obj-id ready" and "ACK obj-id" in the same rpc response. fetch-pack is updated to recognize this and not send "done". However it still tries to consume shallow lines, which are never sent. Update the code, make sure to skip consuming shallow lines when no-done is enabled. Reported-by: Jeff King <peff@peff.net> Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-01-27Merge branch 'jk/allow-fetch-onelevel-refname'Junio C Hamano1-1/+1
"git clone" would fail to clone from a repository that has a ref directly under "refs/", e.g. "refs/stash", because different validation paths do different things on such a refname. Loosen the client side's validation to allow such a ref. * jk/allow-fetch-onelevel-refname: fetch-pack: do not filter out one-level refs
2014-01-17Merge branch 'nd/shallow-clone'Junio C Hamano1-12/+119
Fetching from a shallow-cloned repository used to be forbidden, primarily because the codepaths involved were not carefully vetted and we did not bother supporting such usage. This attempts to allow object transfer out of a shallow-cloned repository in a controlled way (i.e. the receiver becomes a shallow repository with truncated history). * nd/shallow-clone: (31 commits) t5537: fix incorrect expectation in test case 10 shallow: remove unused code send-pack.c: mark a file-local function static git-clone.txt: remove shallow clone limitations prune: clean .git/shallow after pruning objects clone: use git protocol for cloning shallow repo locally send-pack: support pushing from a shallow clone via http receive-pack: support pushing to a shallow clone via http smart-http: support shallow fetch/clone remote-curl: pass ref SHA-1 to fetch-pack as well send-pack: support pushing to a shallow clone receive-pack: allow pushes that update .git/shallow connected.c: add new variant that runs with --shallow-file add GIT_SHALLOW_FILE to propagate --shallow-file to subprocesses receive/send-pack: support pushing from a shallow clone receive-pack: reorder some code in unpack() fetch: add --update-shallow to accept refs that update .git/shallow upload-pack: make sure deepening preserves shallow roots fetch: support fetching from a shallow repository clone: support remote shallow repository ...
2014-01-15fetch-pack: do not filter out one-level refsJeff King1-1/+1
Currently fetching a one-level ref like "refs/foo" does not work consistently. The outer "git fetch" program filters the list of refs, checking each against check_refname_format. Then it feeds the result to do_fetch_pack to actually negotiate the haves/wants and get the pack. The fetch-pack code does its own filter, and it behaves differently. The fetch-pack filter looks for refs in "refs/", and then feeds everything _after_ the slash (i.e., just "foo") into check_refname_format. But check_refname_format is not designed to look at a partial refname. It complains that the ref has only one component, thinking it is at the root (i.e., alongside "HEAD"), when in reality we just fed it a partial refname. As a result, we omit a ref like "refs/foo" from the pack request, even though "git fetch" then tries to store the resulting ref. If we happen to get the object anyway (e.g., because the ref is contained in another ref we are fetching), then the fetch succeeds. But if it is a unique object, we fail when trying to update "refs/foo". We can fix this by just passing the whole refname into check_refname_format; we know the part we were omitting is "refs/", which is acceptable in a refname. This at least makes the checks consistent with each other. This problem happens most commonly with "refs/stash", which is the only one-level ref in wide use. However, our test does not use "refs/stash", as we may later want to restrict it specifically (not because it is one-level, but because of the semantics of stashes). We may also want to do away with the multiple levels of filtering (which can cause problems when they are out of sync), or even forbid one-level refs entirely. However, those decisions can come later; this fixes the most immediate problem, which is the mismatch between the two. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
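The shape of the fix is simply to stop stripping "refs/" before validating; a rough sketch (variable names and flags are illustrative, not a quote of the patch):

    #include "refs.h"

    /*
     * Before: validating only the part after "refs/" makes "refs/foo" look
     * like a bogus one-level name, so it gets filtered out of the request.
     */
    if (check_refname_format(name + strlen("refs/"), 0))
            continue;   /* mistakenly skips "refs/foo" */

    /* After: validate the full refname; a leading "refs/" is acceptable. */
    if (check_refname_format(name, 0))
            continue;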
2014-01-06shallow: remove unused codeRamsay Jones1-1/+0
Commit 58babfff ("shallow.c: the 8 steps to select new commits for .git/shallow", 05-12-2013) added a function to implement step 5 of the quoted eight steps, namely 'remove_nonexistent_ours_in_pack()'. This function implements an optional optimization step in the new shallow commit selection algorithm. However, this function has no callers. (The commented out call sites would need to change, in order to provide information required by the function.) Signed-off-by: Ramsay Jones <ramsay@ramsay1.demon.co.uk> Acked-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-12-17Merge branch 'cc/starts-n-ends-with'Junio C Hamano1-6/+6
Remove a few duplicate implementations of prefix/suffix comparison functions, and rename them to starts_with and ends_with. * cc/starts-n-ends-with: replace {pre,suf}fixcmp() with {starts,ends}_with() strbuf: introduce starts_with() and ends_with() builtin/remote: remove postfixcmp() and use suffixcmp() instead environment: normalize use of prefixcmp() by removing " != 0"
2013-12-10fetch: add --update-shallow to accept refs that update .git/shallowNguyễn Thái Ngọc Duy1-0/+27
The same steps are done as when --update-shallow is not given. The only difference is that we now add all shallow commits in "ours" and "theirs" to .git/shallow (aka "step 8"). Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-12-10fetch: support fetching from a shallow repositoryNguyễn Thái Ngọc Duy1-2/+30
This patch just puts together pieces from the 8 steps patch. We stop at step 7 and reject refs that require new shallow commits. Note that, by rejecting refs that require new shallow commits, we leave dangling objects in the repo, which become "object islands" by the next "git fetch" of the same source. If at the first fetch our "ours" set is empty and we do practically nothing at step 7, "ours" is full at the next fetch and we may need to walk through commits for the reachability test. Room for improvement. Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-12-10clone: support remote shallow repositoryNguyễn Thái Ngọc Duy1-3/+51
Cloning from a shallow repository does not follow the "8 steps for new .git/shallow" because if it does we need to get through step 6 for all refs. That means commit walking down to the bottom. Instead the rule to create .git/shallow is simpler and, more importantly, cheap: if a shallow commit is found in the pack, it's probably used (i.e. reachable from some refs), so we add it. Others are dropped. One may notice this method seems flawed by the word "probably". A shallow commit may not be reachable from any refs at all if it's attached to an object island (a group of objects that are not reachable by any refs). If that object island is not complete, a new fetch request may send more objects to connect it to some ref. At that time, because we incorrectly installed the shallow commit in this island, the user will not see anything after that commit (fsck is still ok). This is not desired. Given that object islands are rare (C Git never sends such islands for security reasons) and do not really harm the repository integrity, a tradeoff is made to surprise the user occasionally but work faster everyday. A new option --strict could be added later that follows exactly the 8 steps. "git prune" can also learn to remove dangling objects _and_ the shallow commits that are attached to them from .git/shallow. Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-12-10fetch-pack.c: move shallow update code out of fetch_pack()Nguyễn Thái Ngọc Duy1-9/+13
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-12-10shallow.c: extend setup_*_shallow() to accept extra shallow commitsNguyễn Thái Ngọc Duy1-2/+3
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-12-05replace {pre,suf}fixcmp() with {starts,ends}_with()Christian Couder1-6/+6
Leaving only the function definitions and declarations so that any new topic in flight can still make use of the old functions, replace existing uses of the prefixcmp() and suffixcmp() with new API functions. The change can be recreated by mechanically applying this:

    $ git grep -l -e prefixcmp -e suffixcmp -- \*.c |
      grep -v strbuf\\.c |
      xargs perl -pi -e '
        s|!prefixcmp\(|starts_with\(|g;
        s|prefixcmp\(|!starts_with\(|g;
        s|!suffixcmp\(|ends_with\(|g;
        s|suffixcmp\(|!ends_with\(|g;
      '

on the result of preparatory changes in this series.

Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-12-05Merge branch 'jk/robustify-parse-commit'Junio C Hamano1-5/+3
* jk/robustify-parse-commit: checkout: do not die when leaving broken detached HEAD use parse_commit_or_die instead of custom message use parse_commit_or_die instead of segfaulting assume parse_commit checks for NULL commit assume parse_commit checks commit->object.parsed log_tree_diff: die when we fail to parse a commit
2013-10-30Merge branch 'jl/pack-transfer-avoid-double-close'Junio C Hamano1-0/+4
The codepath in which send_pack() calls pack_objects() mistakenly closed the same file descriptor twice, potentially closing a wrong file descriptor that was opened in the meantime. * jl/pack-transfer-avoid-double-close: Clear fd after closing to avoid double-close error
2013-10-24assume parse_commit checks commit->object.parsedJeff King1-5/+3
The parse_commit function will check the "parsed" flag of the object and do nothing if it is set. There is no need for callers to check the flag themselves, and doing so only clutters the code. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
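In code, the cleanup amounts to dropping the redundant guard (an illustrative fragment, not a specific hunk from the patch):

    #include "commit.h"

    /* Before: the caller re-checks what parse_commit() already checks. */
    if (!commit->object.parsed)
            parse_commit(commit);

    /* After: parse_commit() returns immediately if the object is parsed. */
    parse_commit(commit);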
2013-10-23Merge branch 'nd/fetch-into-shallow' into maintJunio C Hamano1-52/+1
When there is no sufficient overlap between old and new history during a "git fetch" into a shallow repository, objects that the sending side knows the receiving end has were unnecessarily sent. * nd/fetch-into-shallow: Add testcase for needless objects during a shallow fetch list-objects: mark more commits as edges in mark_edges_uninteresting list-objects: reduce one argument in mark_edges_uninteresting upload-pack: delegate rev walking in shallow fetch to pack-objects shallow: add setup_temporary_shallow() shallow: only add shallow graft points to new shallow file move setup_alternate_shallow and write_shallow_commits to shallow.c
2013-10-23Clear fd after closing to avoid double-close errorJens Lindstrom1-0/+4
In send_pack(), clear the fd passed to pack_objects() by setting it to -1, since pack_objects() closes the fd (via a call to run_command()). Likewise, in get_pack(), clear the fd passed to run_command(). Not doing so risks having git_transport_push(), caller of send_pack(), closing the fd again, possibly incorrectly closing some other open file; or similarly with fetch_refs_from_pack(), indirect caller of get_pack(). Signed-off-by: Jens Lindström <jl@opera.com> Acked-by: Jeff King <peff@peff.net> Acked-by: Duy Nguyen <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
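The pattern being applied looks roughly like the following sketch; the function name and pack-objects arguments are placeholders, not the exact send_pack()/get_pack() hunks:

    #include "cache.h"
    #include "run-command.h"

    static int send_pack_data(int fd[2])
    {
            const char *argv[] = { "pack-objects", "--stdout", NULL };
            struct child_process cmd;

            memset(&cmd, 0, sizeof(cmd));
            cmd.argv = argv;
            cmd.git_cmd = 1;
            cmd.in = fd[0];      /* run_command() will close this descriptor */

            if (run_command(&cmd))
                    return error("pack-objects died");

            fd[0] = -1;          /* forget it, so the caller cannot close it again */
            return 0;
    }

Setting the descriptor to -1 is what prevents a later close() in the caller from hitting a descriptor that has since been reused for an unrelated file.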
2013-09-24Merge branch 'nd/fetch-pack-error-reporting-fix'Jonathan Nieder1-5/+5
* nd/fetch-pack-error-reporting-fix: fetch-pack.c: show correct command name that fails
2013-09-20Merge branch 'nd/fetch-into-shallow'Junio C Hamano1-52/+1
When there is no sufficient overlap between old and new history during a fetch into a shallow repository, we unnecessarily sent objects the sending side knows the receiving end has. * nd/fetch-into-shallow: Add testcase for needless objects during a shallow fetch list-objects: mark more commits as edges in mark_edges_uninteresting list-objects: reduce one argument in mark_edges_uninteresting upload-pack: delegate rev walking in shallow fetch to pack-objects shallow: add setup_temporary_shallow() shallow: only add shallow graft points to new shallow file move setup_alternate_shallow and write_shallow_commits to shallow.c
2013-09-18fetch-pack.c: show correct command name that failsNguyễn Thái Ngọc Duy1-5/+5
When --shallow-file is added to the command line, it has to come before the subcommand name, so the first argument won't be the command name any more. Stop assuming that and keep track of the command name explicitly. Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-09-09Merge branch 'jc/push-cas'Junio C Hamano1-0/+1
Allow a safer "rewind of the remote tip" push than blind "--force", by requiring that the overwritten remote ref to be unchanged since the new history to replace it was prepared. The machinery is more or less ready. The "--force" option is again the big red button to override any safety, thanks to J6t's sanity (the original round allowed --lockref to defeat --force). The logic to choose the default implemented here is fragile (e.g. "git fetch" after seeing a failure will update the remote-tracking branch and will make the next "push" pass, defeating the safety pretty easily). It is suitable only for the simplest workflows, and it may hurt users more than it helps them. * jc/push-cas: push: teach --force-with-lease to smart-http transport send-pack: fix parsing of --force-with-lease option t5540/5541: smart-http does not support "--force-with-lease" t5533: test "push --force-with-lease" push --force-with-lease: tie it all together push --force-with-lease: implement logic to populate old_sha1_expect[] remote.c: add command line option parser for "--force-with-lease" builtin/push.c: use OPT_BOOL, not OPT_BOOLEAN cache.h: move remote/connect API out of it
2013-09-05Merge branch 'nd/fetch-pack-shallow-fix' into maintJunio C Hamano1-1/+3
The recent "short-cut clone connectivity check" topic broke a shallow repository when a fetch operation tries to auto-follow tags. * nd/fetch-pack-shallow-fix: fetch-pack: do not remove .git/shallow file when --depth is not specified
2013-08-30Merge branch 'nd/fetch-pack-shallow-fix'Junio C Hamano1-1/+3
The recent "short-cut clone connectivity check" topic broke a shallow repository when a fetch operation tries to auto-follow tags. * nd/fetch-pack-shallow-fix: fetch-pack: do not remove .git/shallow file when --depth is not specified
2013-08-25fetch-pack: do not remove .git/shallow file when --depth is not specifiedNguyễn Thái Ngọc Duy1-1/+3
fetch_pack() can remove the .git/shallow file when a shallow repository becomes a full one again. This behavior is triggered incorrectly when tags are also fetched, because fetch_pack() will be called twice. At the first fetch_pack() call:

 - shallow_lock is set up
 - alternate_shallow_file points to shallow_lock.filename, which is "shallow.lock"
 - commit_lock_file is called, which sets shallow_lock.filename to "". alternate_shallow_file also becomes "" because it points to the same memory.

At the second call, setup_alternate_shallow() is not called and alternate_shallow_file remains "". It's mistaken for the unshallow case and .git/shallow is removed. The end result is a broken repository. Fix this by always initializing alternate_shallow_file when fetch_pack() is called. As an extra measure, check that args->depth > 0 before committing or rolling back the shallow file.

Reported-by: Kacper Kornet <kornet@camk.edu.pl>
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
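The extra guard described above looks roughly like this; a sketch that follows the wording of the message, not a verbatim quote of the patch:

    if (args->depth > 0 && alternate_shallow_file) {
            if (*alternate_shallow_file == '\0') {
                    /* --unshallow: the repository is no longer shallow */
                    unlink_or_warn(git_path("shallow"));
                    rollback_lock_file(&shallow_lock);
            } else {
                    commit_lock_file(&shallow_lock);
            }
    }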
2013-08-18move setup_alternate_shallow and write_shallow_commits to shallow.cNguyễn Thái Ngọc Duy1-52/+1
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-07-08cache.h: move remote/connect API out of itJunio C Hamano1-0/+1
The definition of "struct ref" in "cache.h", a header file so central to the system, always confused me. This structure is not about the local ref used by sha1-name API to name local objects. It is what refspecs are expanded into, after finding out what refs the other side has, to define what refs are updated after object transfer succeeds to what values. It belongs to "remote.h" together with "struct refspec". While we are at it, also move the types and functions related to the Git transport connection to a new header file connect.h Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-07-02fetch-pack: avoid quadratic behavior in rev_list_pushJeff King1-7/+6
When we call find_common to start finding common ancestors with the remote side of a fetch, the first thing we do is insert the tip of each ref into our rev_list linked list. We keep the list sorted the whole time with commit_list_insert_by_date, which means our insertion ends up doing O(n^2) timestamp comparisons. We could teach rev_list_push to use an unsorted list, and then sort it once after we have added each ref. However, in get_rev, we process the list by popping commits off the front and adding parents back in timestamp-sorted order. So that procedure would still operate on the large list. Instead, we can replace the linked list with a heap-based priority queue, which can do O(log n) insertion, making the whole insertion procedure O(n log n). As a result of switching to the prio_queue struct, we fix two minor bugs:

 1. When we "pop" a commit in get_rev, and when we clear the rev_list in find_common, we do not take care to free the "struct commit_list", and just leak its memory. With the prio_queue implementation, the memory management is handled for us.

 2. In get_rev, we look at the head commit of the list, possibly push its parents onto the list, and then "pop" the front of the list off, assuming it is the same element that we just peeked at. This is typically going to be the case, but would not be in the face of clock skew: the parents are inserted by date, and could potentially be inserted at the head of the list if they have a timestamp newer than their descendant. In this case, we would accidentally pop the parent, and never process it at all. The new implementation pulls the commit off of the queue as we examine it, and so does not suffer from this problem.

With this patch, a fetch of a single commit into a repository with 50,000 refs went from:

    real    0m7.984s
    user    0m7.852s
    sys     0m0.120s

to:

    real    0m2.017s
    user    0m1.884s
    sys     0m0.124s

Before this patch, a larger case with 370K refs still had not completed after tens of minutes; with this patch, it completes in about 12 seconds.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
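The prio_queue API that replaces the sorted commit_list can be used like this; a minimal sketch, since the real find_common()/get_rev() code carries more state:

    #include "commit.h"
    #include "prio-queue.h"

    static struct prio_queue rev_list = { compare_commits_by_commit_date };

    static void rev_list_push(struct commit *commit, int mark)
    {
            if (!(commit->object.flags & mark)) {
                    commit->object.flags |= mark;
                    if (parse_commit(commit))
                            return;
                    prio_queue_put(&rev_list, commit);   /* O(log n) insert */
            }
    }

    static struct commit *get_rev(void)
    {
            /*
             * Popping removes the element we examine, so clock skew among
             * the parents cannot make us drop the wrong entry.
             */
            return prio_queue_get(&rev_list);
    }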
2013-07-02fetch-pack: avoid quadratic list insertion in mark_completeJeff King1-1/+2
We insert the commit pointed to by each ref one-by-one into the "complete" commit_list using insert_by_date. Because each insertion is O(n), we end up with O(n^2) behavior. This typically doesn't matter, because the number of refs is reasonably small. And even if there are a lot of refs, they often point to a smaller set of objects (in which case the optimization in commit ea5f220 keeps our "n" small). However, in pathological repositories (hundreds of thousands of refs, each pointing to a unique commit), this quadratic behavior can make a difference. Since we do not care about the list order until we have finished building it, we can simply keep it unsorted during the insertion phase, then sort it afterwards. On a repository like the one described above, this dropped the time to do a no-op fetch from 2.0s to 1.7s. On normal repositories, it probably does not matter at all, but it does not hurt to protect ourselves from pathological cases. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
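The same "build unsorted, sort once" idea in isolation (illustrative fragment; in fetch-pack.c this happens around mark_complete()):

    #include "commit.h"

    static struct commit_list *complete;

    static void mark_complete(struct commit *commit)
    {
            commit_list_insert(commit, &complete);   /* O(1) prepend, unsorted */
    }

    static void finish_marking(void)
    {
            commit_list_sort_by_date(&complete);     /* one O(n log n) sort */
    }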
2013-05-28clone: open a shortcut for connectivity checkNguyễn Thái Ngọc Duy1-1/+10
In order to make sure the cloned repository is good, we run "rev-list --objects --not --all $new_refs" on the repository. This is expensive on large repositories. This patch attempts to mitigate the impact in this special case. In the "good" clone case, we only have one pack. If all of the following are met, we can be sure that all objects reachable from the new refs exist, which is the intention of running "rev-list ...":

 - all refs point to an object in the pack
 - there are no dangling pointers in any object in the pack
 - no objects in the pack point to objects outside the pack

The second and third checks can be done with the help of index-pack as a slight variation of the --strict check (which introduces a new condition for the shortcut: pack transfer must be used and the number of objects must be large enough to call index-pack). The first is checked in check_everything_connected after we get an "ok" from index-pack. "index-pack + new checks" is still faster than the current "index-pack + rev-list", which is the whole point of this patch. If any of the conditions fail, we fall back to the good old but expensive "rev-list ..". In that case it's even more expensive because we have to pay for the new checks in index-pack. But that should only happen when the other side is either buggy or malicious.

Cloning linux-2.6 over file://

                before        after
        real    3m25.693s     2m53.050s
        user    5m2.037s      4m42.396s
        sys     0m13.750s     0m16.574s

A more realistic test with ssh:// over wireless

                before        after
        real    11m26.629s    10m4.213s
        user    5m43.196s     5m19.444s
        sys     0m35.812s     0m37.630s

This shortcut is not applied to shallow clones, partly because shallow clones should have no more objects than a usual fetch and the cost of rev-list is acceptable, partly to avoid dealing with corner cases when grafting is involved. This shortcut does not apply to the unpack-objects code path either because the number of objects must be small in order to trigger that code path.

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-05-28fetch-pack: prepare updated shallow file before fetching the packNguyễn Thái Ngọc Duy1-36/+37
index-pack --strict looks up and follows parent commits. If shallow information is not ready by the time index-pack is run, index-pack may be led to non-existent objects. Make fetch-pack save the shallow file to disk before invoking index-pack. git learns a new global option, --shallow-file, to pass on the alternate shallow file path. It is undocumented (and does not even support the --shallow-file= syntax) because it's unlikely to be used again elsewhere. Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-04-01Merge branch 'jk/pkt-line-cleanup'Junio C Hamano1-9/+9
Clean up pkt-line API, implementation and its callers to make them more robust. * jk/pkt-line-cleanup: do not use GIT_TRACE_PACKET=3 in tests remote-curl: always parse incoming refs remote-curl: move ref-parsing code up in file remote-curl: pass buffer straight to get_remote_heads teach get_remote_heads to read from a memory buffer pkt-line: share buffer/descriptor reading implementation pkt-line: provide a LARGE_PACKET_MAX static buffer pkt-line: move LARGE_PACKET_MAX definition from sideband pkt-line: teach packet_read_line to chomp newlines pkt-line: provide a generic reading function with options pkt-line: drop safe_write function pkt-line: move a misplaced comment write_or_die: raise SIGPIPE when we get EPIPE upload-archive: use argv_array to store client arguments upload-archive: do not copy repo name send-pack: prefer prefixcmp over memcmp in receive_status fetch-pack: fix out-of-bounds buffer offset in get_ack upload-pack: remove packet debugging harness upload-pack: do not add duplicate objects to shallow list upload-pack: use get_sha1_hex to parse "shallow" lines
2013-03-21Merge branch 'jc/fetch-raw-sha1'Junio C Hamano1-32/+69
Allows requests to fetch objects at any tip of refs (including hidden ones). It seems that there may be use cases even outside Gerrit (e.g. $gmane/215701). * jc/fetch-raw-sha1: fetch: fetch objects by their exact SHA-1 object names upload-pack: optionally allow fetching from the tips of hidden refs fetch: use struct ref to represent refs to be fetched parse_fetch_refspec(): clarify the codeflow a bit
2013-02-20pkt-line: provide a LARGE_PACKET_MAX static bufferJeff King1-6/+6
Most of the callers of packet_read_line just read into a static 1000-byte buffer (callers which handle arbitrary binary data already use LARGE_PACKET_MAX). This works fine in practice, because: 1. The only variable-sized data in these lines is a ref name, and refs tend to be a lot shorter than 1000 characters. 2. When sending ref lines, git-core always limits itself to 1000 byte packets. However, the only limit given in the protocol specification in Documentation/technical/protocol-common.txt is LARGE_PACKET_MAX; the 1000 byte limit is mentioned only in pack-protocol.txt, and then only describing what we write, not as a specific limit for readers. This patch lets us bump the 1000-byte limit to LARGE_PACKET_MAX. Even though git-core will never write a packet where this makes a difference, there are two good reasons to do this: 1. Other git implementations may have followed protocol-common.txt and used a larger maximum size. We don't bump into it in practice because it would involve very long ref names. 2. We may want to increase the 1000-byte limit one day. Since packets are transferred before any capabilities, it's difficult to do this in a backwards-compatible way. But if we bump the size of buffer the readers can handle, eventually older versions of git will be obsolete enough that we can justify bumping the writers, as well. We don't have plans to do this anytime soon, but there is no reason not to start the clock ticking now. Just bumping all of the reading bufs to LARGE_PACKET_MAX would waste memory. Instead, since most readers just read into a temporary buffer anyway, let's provide a single static buffer that all callers can use. We can further wrap this detail away by having the packet_read_line wrapper just use the buffer transparently and return a pointer to the static storage. That covers most of the cases, and the remaining ones already read into their own LARGE_PACKET_MAX buffers. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-02-20pkt-line: teach packet_read_line to chomp newlinesJeff King1-2/+0
The packets sent during ref negotiation are all terminated by newline; even though the code to chomp these newlines is short, we end up doing it in a lot of places. This patch teaches packet_read_line to auto-chomp the trailing newline; this lets us get rid of a lot of inline chomping code. As a result, some call-sites which are not reading line-oriented data (e.g., when reading chunks of packfiles alongside sideband) transition away from packet_read_line to the generic packet_read interface. This patch converts all of the existing callsites. Since the function signature of packet_read_line does not change (but its behavior does), there is a possibility of new callsites being introduced in later commits, silently introducing an incompatibility. However, since a later patch in this series will change the signature, such a commit would have to be merged directly into this commit, not to the tip of the series; we can therefore ignore the issue. This is an internal cleanup and should produce no change of behavior in the normal case. However, there is one corner case to note. Callers of packet_read_line have never been able to tell the difference between a flush packet ("0000") and an empty packet ("0004"), as both cause packet_read_line to return a length of 0. Readers treat them identically, even though Documentation/technical/protocol-common.txt says we must not; it also says that implementations should not send an empty pkt-line. By stripping out the newline before the result gets to the caller, we will now treat the newline-only packet ("0005\n") the same as an empty packet, which in turn gets treated like a flush packet. In practice this doesn't matter, as neither empty nor newline-only packets are part of git's protocols (at least not for the line-oriented bits, and readers who are not expecting line-oriented packets will be calling packet_read directly, anyway). But even if we do decide to care about the distinction later, it is orthogonal to this patch. The right place to tighten would be to stop treating empty packets as flush packets, and this change does not make doing so any harder. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
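After this change a typical line-oriented caller no longer needs its own chomping; a sketch (handle_shallow() and the "shallow " prefix are illustrative, not from a specific call site):

    #include "cache.h"
    #include "pkt-line.h"

    static void read_one_shallow_line(int fd)
    {
            int len;
            char *line = packet_read_line(fd, &len);  /* NULL on flush packet */

            if (!line)
                    return;
            /*
             * Before this patch, callers had to strip the newline themselves:
             *
             *     if (line[len-1] == '\n')
             *             line[--len] = '\0';
             */
            if (starts_with(line, "shallow "))
                    handle_shallow(line + strlen("shallow "));
    }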
2013-02-20pkt-line: drop safe_write functionJeff King1-1/+1
This is just write_or_die by another name. The one distinction is that write_or_die will treat EPIPE specially by suppressing error messages. That's fine, as we die by SIGPIPE anyway (and in the off chance that it is disabled, write_or_die will simulate it). Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-02-20fetch-pack: fix out-of-bounds buffer offset in get_ackJeff King1-0/+2
When we read acks from the remote, we expect either: ACK <sha1> or ACK <sha1> <multi-ack-flag> We parse the "ACK <sha1>" bit from the line, and then start looking for the flag strings at "line+45"; if we don't have them, we assume it's of the first type. But if we do have the first type, then line+45 is not necessarily inside our string at all! It turns out that this works most of the time due to the way we parse the packets. They should come in with a newline, and packet_read puts an extra NUL into the buffer, so we end up with: ACK <sha1>\n\0 with the newline at offset 44 and the NUL at offset 45. We then strip the newline, putting a NUL at offset 44. So when we look at "line+45", we are looking past the end of our string; but it's OK, because we hit the terminator from the original string. This breaks down, however, if the other side does not terminate their packets with a newline. In that case, our packet is one character shorter, and we start looking through uninitialized memory for the flag. No known implementation sends such a packet, so it has never come up in practice. This patch tightens the check by looking for a short, flagless ACK before trying to parse the flag. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
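In outline, the tightened parsing does something like the following simplification of get_ack(); the enum mirrors the values used in fetch-pack.c, and the exact helper names are illustrative:

    enum ack_type { NAK = 0, ACK, ACK_continue, ACK_common, ACK_ready };

    static enum ack_type parse_ack_line(char *line, int len,
                                        unsigned char *result_sha1)
    {
            if (!strcmp(line, "NAK"))
                    return NAK;
            if (starts_with(line, "ACK ") &&
                !get_sha1_hex(line + 4, result_sha1)) {
                    if (len < 45)
                            return ACK;     /* short, flagless "ACK <sha1>" */
                    if (strstr(line + 45, "continue"))
                            return ACK_continue;
                    if (strstr(line + 45, "common"))
                            return ACK_common;
                    if (strstr(line + 45, "ready"))
                            return ACK_ready;
                    return ACK;
            }
            die("git fetch-pack: expected ACK/NAK, got '%s'", line);
    }

The "len < 45" check is the new part: "ACK " plus 40 hex digits is 44 bytes, so anything shorter than 45 bytes cannot carry a flag, and we must not peek at line+45.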
2013-02-07fetch: fetch objects by their exact SHA-1 object namesJunio C Hamano1-1/+21
Teach "git fetch" to accept an exact SHA-1 object name the user may obtain out of band on the LHS of a pathspec, and send it on a "want" message when the server side advertises the allow-tip-sha1-in-want capability. Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-02-07fetch: use struct ref to represent refs to be fetchedJunio C Hamano1-31/+48
Even though "git fetch" has full infrastructure to parse refspecs to be fetched and match them against the list of refs to come up with the final list of refs to be fetched, the list of refs that are requested to be fetched were internally converted to a plain list of strings at the transport layer and then passed to the underlying fetch-pack driver. Stop this conversion and instead pass around an array of refs. Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-02-01Merge branch 'jk/gc-auto-after-fetch'Junio C Hamano1-0/+3
Help "fetch only" repositories that do not trigger "gc --auto" often enough. * jk/gc-auto-after-fetch: fetch-pack: avoid repeatedly re-scanning pack directory fetch: run gc --auto after fetching
2013-01-26Merge branch 'jk/maint-gc-auto-after-fetch' into jk/gc-auto-after-fetchJunio C Hamano1-0/+3
* jk/maint-gc-auto-after-fetch: fetch-pack: avoid repeatedly re-scanning pack directory fetch: run gc --auto after fetching
2013-01-03Merge branch 'mk/qnx'Junio C Hamano1-2/+1
Port to QNX. * mk/qnx: Port to QNX Make lock local to fetch_pack
2012-10-29fetch-pack: move core code to libgit.aNguyễn Thái Ngọc Duy1-0/+951
fetch_pack() is used by transport.c, which is part of libgit.a, while the function itself stays in builtin/fetch-pack.c. Move it to fetch-pack.c so that we won't get an undefined reference if a program that uses libgit.a happens to pull it in. Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Jeff King <peff@peff.net>
2007-09-19Make fetch-pack a builtin with an internal APIDaniel Barkalow1-789/+0
Signed-off-by: Daniel Barkalow <barkalow@iabervon.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2007-06-07War on whitespaceJunio C Hamano1-1/+1
This uses "git-apply --whitespace=strip" to fix whitespace errors that have crept in to our source files over time. There are a few files that need to have trailing whitespaces (most notably, test vectors). The results still passes the test, and build result in Documentation/ area is unchanged. Signed-off-by: Junio C Hamano <gitster@pobox.com>
2007-05-16connect: display connection progressMichael S. Tsirkin1-1/+1
Make git notify the user about host resolution/connection attempts. This is useful both as a progress indicator on slow links, and helps reassure the user there are no firewall problems. Signed-off-by: Michael S. Tsirkin <mst@dev.mellanox.co.il> Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-03-04Merge branch 'js/fetch-progress' (early part)Junio C Hamano1-3/+9
* 'js/fetch-progress' (early part): Fixup no-progress for fetch & clone fetch & clone: do not output progress when not on a tty Conflicts: git-fetch.sh
2007-02-24Fixup no-progress for fetch & cloneJohannes Schindelin1-7/+3
The intent of the commit 'fetch & clone: do not output progress when not on a tty' was to make fetching and cloning less chatty when output was not going to a terminal (such as in a cron job). However, there was a serious thinko in that commit. It assumed that the client _and_ the server got this update at the same time. But this is obviously not the case, and therefore upload-pack died on seeing the option "--no-progress". This patch fixes that issue by making it a protocol option. So, until your server is updated, you still see the progress, but once the server has this patch, it will be quiet. A minor issue was also fixed: when cloning, the checkout did not heed no_progress. Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de> Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-02-20prefixcmp(): fix-up mechanical conversion.Junio C Hamano1-5/+5
The previous step mechanically converted uses of strncmp() with a literal string, even when the result is only used as a boolean:

    if (!strncmp("foo", arg, 3)) ==> if (!(-prefixcmp(arg, "foo")))

This step manually cleans them up to read:

    if (!prefixcmp(arg, "foo"))

Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-02-20Mechanical conversion to use prefixcmp()Junio C Hamano1-6/+6
This mechanically converts strncmp() to use prefixcmp(), but only when the parameters match specific patterns, so that they can be verified easily. Leftover from this will be fixed in a separate step, including idiotic conversions like:

    if (!strncmp("foo", arg, 3)) => if (!(-prefixcmp(arg, "foo")))

This was done by using this script in px.perl:

    #!/usr/bin/perl -i.bak -p
    if (/strncmp\(([^,]+), "([^\\"]*)", (\d+)\)/ && (length($2) == $3)) {
            s|strncmp\(([^,]+), "([^\\"]*)", (\d+)\)|prefixcmp($1, "$2")|;
    }
    if (/strncmp\("([^\\"]*)", ([^,]+), (\d+)\)/ && (length($1) == $3)) {
            s|strncmp\("([^\\"]*)", ([^,]+), (\d+)\)|(-prefixcmp($2, "$1"))|;
    }

and running:

    $ git grep -l strncmp -- '*.c' | xargs perl px.perl

Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-02-19fetch & clone: do not output progress when not on a ttyJohannes Schindelin1-3/+13
This adds the option "--no-progress" to fetch-pack and upload-pack, and makes fetch and clone pass this option when stdout is not a tty. While at documenting that option, also document --strict and --timeout options for upload-pack. Signed-off-by: Johannes Schindelin <Johannes.Schindelin@gmx.de> Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-01-28Don't force everybody to call setup_ident().Junio C Hamano1-1/+0
Back when the handful of commands that created commits and tags were the only users of committer identity information, it made sense to explicitly call setup_ident() to pre-fill the default value from the gecos information. But these days it is much simpler to make the call automatic when get_ident() is called, since many more programs want to use the information when updating the reflog. Signed-off-by: Junio C Hamano <junkio@cox.net>