We have a number of git repositories which have grown to an unmanageable size due to the historical inclusion of binary test files and Java .jar files.
We are just about to go through the exercise of running git filter-branch on these repositories and re-cloning them everywhere they are used (from dozens to hundreds of deployments each, depending on the repo). Given the problems with rewriting history, I was wondering whether there might be any other solutions.
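For reference, this is roughly the filter-branch route we are considering, sketched here in a throwaway repository (big.jar is just a stand-in for our real offenders):

```shell
#!/bin/sh
set -e
# Demo in a throwaway repo; file names are hypothetical examples.
dir=$(mktemp -d)
cd "$dir"
git init -q repo
cd repo
git config user.email demo@example.com
git config user.name demo

# Commit a "large" binary alongside normal source.
dd if=/dev/zero of=big.jar bs=1024 count=64 2>/dev/null
echo 'hello' > main.txt
git add .
git commit -qm 'initial commit with big.jar'

# Rewrite every commit, deleting big.jar from each tree.
FILTER_BRANCH_SQUELCH_WARNING=1 git filter-branch \
  --index-filter 'git rm --cached --ignore-unmatch big.jar' \
  --prune-empty --tag-name-filter cat -- --all

# Drop the backup refs and repack so the blob is actually freed.
git for-each-ref --format='%(refname)' refs/original |
  xargs -n1 git update-ref -d
git reflog expire --expire=now --all
git gc --prune=now --quiet

git ls-files
```

The rewrite itself is quick; the pain is that every clone downstream now has divergent history and must be re-cloned or hard-reset, which is exactly what we are trying to avoid.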
Ideally I would like to externalise problem files without rewriting the history of each repository. In theory this should be possible because you are checking out the same files, with the same sizes and the same hashes, just sourcing them from a different place (a remote rather than the local object store). Alas none of the potential solutions I have found so far appear to allow me to do this.
Starting with git-annex, the closest I could find to a solution to my problem was How to retroactively annex a file already in a git repo, but as with just removing the large files, this requires the history to be rewritten to convert the original git add into a git annex add.
Moving on from there, I started looking at other projects listed on what git-annex is not, so I examined git-bigfiles, git-media and git-fat. Unfortunately we can't use the git-bigfiles fork of git since we are an Eclipse shop and use a mixture of git and EGit. It doesn't look like git-media or git-fat can do what I want either, since while you could replace existing large files with the external equivalents, you would still need to rewrite the history in order to remove large files which had already been committed.
So, is it possible to slim a .git repository without rewriting history, or should we go back to the plan of using git filter-branch and a whole load of redeployments?
As an aside, I believe that this should be possible, but it is probably tied to the same limitations as those of git's current shallow clone implementation.
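To illustrate that aside: a shallow clone is the closest built-in mechanism for not downloading old history, though it does nothing for large files still present at the tip. A small sketch using a throwaway local repository (--depth only takes effect over a transport, hence the file:// URL):

```shell
#!/bin/sh
set -e
# Throwaway repo with two commits; paths here are made up for the demo.
dir=$(mktemp -d)
cd "$dir"
git init -q src
cd src
git config user.email demo@example.com
git config user.name demo
echo one > f && git add f && git commit -qm c1
echo two > f && git commit -qam c2
cd ..

# Shallow clone: only the tip commit's objects are transferred.
git clone -q --depth 1 "file://$dir/src" shallow
cd shallow
git rev-list --count HEAD   # prints 1
```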
Git already supports multiple possible locations for the same blob, since any given blob could be in the loose object store (.git/objects) or in a pack file (.git/objects/pack), so theoretically you would just need something like git-annex to be hooked in at that level rather than higher up (i.e. have the concept of a download-on-demand remote blob, if you like). Unfortunately I can't find anyone who has implemented or even suggested anything like this.
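A small sketch of that last point: the same blob id resolves through the same interface whether the object happens to be loose or packed, so the physical location is already abstracted away (the repository and file here are throwaway examples):

```shell
#!/bin/sh
set -e
# Demonstrate that a blob's storage location is transparent to git.
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email demo@example.com
git config user.name demo
echo 'payload' > file.bin
git add file.bin
git commit -qm 'add file.bin'
oid=$(git rev-parse HEAD:file.bin)

# Freshly added objects are loose: .git/objects/<first 2 chars>/<rest>.
test -f ".git/objects/$(echo "$oid" | cut -c1-2)/$(echo "$oid" | cut -c3-)"

# After a repack the same object lives in .git/objects/pack instead,
# yet git cat-file reads it back through exactly the same interface.
git repack -adq
test ! -f ".git/objects/$(echo "$oid" | cut -c1-2)/$(echo "$oid" | cut -c3-)"
git cat-file -p "$oid"   # prints "payload"
```

In principle a third backend, "fetch this object from a remote on demand", could slot in at the same level, which is exactly the hook I was hoping someone had built.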