After being burned a few times using Dropbox to store private git repos, I moved them all to a private server on AWS. And for quite a while now I've been corruption free and have had no problems. However, while cloning a rather old repo recently I ran into the strangest problem.
$ git clone git@someplace.com:/Project.git
remote: Counting objects: 100, done.
remote: Compressing objects: 100% (100/100), done.
error: git upload-pack: git-pack-objects died with error.
remote: fatal: Out of memory, malloc failed (tried to allocate 1000000000 bytes)
remote: aborting due to possible repository corruption on the remote side.
fatal: early EOF
fatal: git upload-pack: aborting due to possible repository corruption on the remote side.
fatal: index-pack failed
Great! I tried a git fetch on another machine that had some, but not all, of the repo. When that failed with the same error I did some research. The problem is that git-pack-objects tries to pack everything into memory and send the entire thing as one giant pack. So, how do I get my repo synced to this new machine? The first step is to copy the entire directory down using scp and clone that copy:
$ scp -r git@someplace.com:/Project.git Project.git
$ git clone Project.git Project
$ cd Project
$ git fetch
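Before trusting the copied repo, it doesn't hurt to sanity-check it. A quick sketch, run from inside the freshly cloned Project directory (git fsck and git count-objects are standard git commands; the flags below are just the ones I'd reach for):

$ git fsck --full          # verify object connectivity and integrity
$ git count-objects -vH    # see how much pack data is actually there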
So now I have a local copy of the repo with most of the data I need. To connect the remote origin back to the AWS repo:
$ git remote remove origin
$ git remote add origin git@someplace.com:/Project.git
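To confirm the remote is wired up the way you expect (plain git, nothing specific to this setup):

$ git remote -v
origin  git@someplace.com:/Project.git (fetch)
origin  git@someplace.com:/Project.git (push)
$ git fetch origin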
And so far so good. The project seems pretty stable and I can push/pull/fetch from the repo stored on AWS. If, however, you don't have the ability to download the entire bare repo, there are several config variables you can play with on the server-side repo, though I haven't tried them yet:
[core]
	packedGitLimit = 128m
	packedGitWindowSize = 128m
[pack]
	deltaCacheSize = 128m
	packSizeLimit = 128m
	windowMemory = 128m
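If you'd rather not edit the config file by hand, the same settings can be applied with git config from inside the bare repo on the server. The path below is hypothetical and the 128m values are the same untested guesses as above:

$ cd ~/Project.git                          # wherever the bare repo lives on the server
$ git config core.packedGitLimit 128m
$ git config core.packedGitWindowSize 128m
$ git config pack.deltaCacheSize 128m
$ git config pack.packSizeLimit 128m
$ git config pack.windowMemory 128m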