checkout@v2 unable to write pack file
Occasionally, the checkout@v2 step fails with the output below.
Once the runner gets into this state, it will continue to fail until we remote into our self-hosted runner and execute git gc by hand (a sketch of automating this is included after the log):
Fetching the repository
...
Resolving deltas: 99% (402/405)
Resolving deltas: 100% (405/405)
Resolving deltas: 100% (405/405), completed with 100 local objects.
##[error]error: unable to write file .git/objects/pack/pack-6d70b7a79f3ca1bb7921c7e8f6b60365e85b3bea.pack: Permission denied
##[error]fatal: cannot store pack file
##[error]fatal: index-pack failed
The process 'C:\Program Files\Git\cmd\git.exe' failed with exit code 128
...
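Not part of the original issue, but one way to avoid remoting in would be to run the same git gc as an explicit step before checkout. A minimal sketch, assuming the persistent workspace is the step's default working directory on the self-hosted runner and that running gc there is acceptable:

steps:
  # Hypothetical pre-checkout step: prune/repack whatever an interrupted
  # fetch left behind in .git/objects/pack, mirroring the manual workaround.
  - name: Clean up stale pack files
    run: git gc
    continue-on-error: true   # the very first run has no repository yet
  - uses: actions/checkout@v2
    # ...existing checkout inputs...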
Our configuration looks like:
steps:
- uses: actions/checkout@v2
  with:
    fetch-depth: 0
    lfs: 'true'
Not sure if this is somehow related to checkout@v2 automatically disabling garbage collection?
https://github.com/actions/checkout/blob/main/src/git-source-provider.ts#L91
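For context, the linked line in git-source-provider.ts disables automatic garbage collection in the checked-out repository (roughly git config --local gc.auto 0), so leftover pack files from an interrupted fetch never get cleaned up on their own. A small diagnostic step you could add to confirm the state of an existing workspace; the step name and placement are illustrative, not from the issue:

  - name: Inspect workspace gc state (diagnostic only)
    continue-on-error: true   # skip gracefully if the workspace has no repository yet
    run: |
      # 0 means checkout disabled auto-gc in this clone
      git config --local --get gc.auto
      # counts loose objects and pack files accumulating in .git/objects
      git count-objects -v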
Issue Analytics
- State:
- Created 3 years ago
- Reactions: 5
- Comments: 7
Top Results From Across the Web

Cannot do a git pull, unable to write to pack directory
Check if you have permissions, i.e. if you execute the git pull command via CLI, make sure the CLI instance has proper authorization...

Git - Packfiles - Git SCM
The packfile is a single file containing the contents of all the objects that were removed from your filesystem. The index is a...

Git checkouts fail on Windows with "Filename too long" error
Cause. According to the msysgit wiki on GitHub and the related fix, this error, Filename too long, comes from a Windows API limitation...

Help: getting rid of giant git packs - Google Groups
Were many of the files named tmp*pack*? In those cases it was results of failed pack operations left behind for debugging. For instance...

jenkins git does not release pack files, prevents wiping ...
The *.pack file cannot be deleted until the javaw.exe process (JNLP connection to jenkins) is killed. The only way I have been able...
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
If you want to be part of the project and reach the results you're after, keep working at it with full force so that in the near future you receive a good reward and you'll have done something for the people of the world. I'm keeping an eye on everyone, rest assured.
Found the root cause (for me at least).
What was happening was someone renamed a branch and the pipeline was not set to clean up its workspace before starting. This means it was doing git fetch on top of an already existing, previously fetched folder, which caused the lock to happen. The same lock could be reproduced locally. The fix was to add the following workspace setting to the job, which forces the job to clean the work folder before starting. Doing this eliminates the scenario where you are fetching on top of an already fetched git repository, since we don't have gc on. Info can be found here on how this workspace setting works.
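The snippet the commenter refers to is not included above. Assuming an Azure Pipelines-style job (a job-level workspace setting is not a GitHub Actions concept), it was presumably something like the following; on GitHub Actions the closest equivalents are checkout's clean input (already true by default) or an explicit git clean -ffdx step before checkout:

# Hypothetical reconstruction -- the commenter's actual snippet is not shown.
jobs:
  - job: build
    workspace:
      clean: all   # wipe sources/outputs from previous runs before this job starts
    steps:
      - checkout: self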