Failure message: cp skipping file, as it was replaced while being copied
Workflow that I’m testing:
- Mount a google storage bucket with gcsfuse
- Copy a binary from that bucket to the current working directory
- Make that binary executable
- Run that binary
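The steps above can be exercised locally with a hedged sketch that uses a temporary directory in place of the real gcsfuse mount; the directory and file names (`mount_dir`, `work_dir`, `my-executable`) are placeholders, not the actual bucket layout:

```shell
set -eu

# Stand-in for the gcsfuse mount point (the real step would be
# something like: gcsfuse my_bucket "$mount_dir").
mount_dir=$(mktemp -d)
work_dir=$(mktemp -d)

# A dummy "binary" sitting in the (fake) bucket.
printf '#!/bin/sh\necho ok\n' > "$mount_dir/my-executable"

# Copy the binary out of the mount into the working directory.
cp "$mount_dir/my-executable" "$work_dir/my-executable"

# Make it executable, then run it.
chmod +x "$work_dir/my-executable"
out=$("$work_dir/my-executable")
echo "$out"
```

Running the copy out of the mount and executing the local copy (rather than executing in place) is what makes the intermittent cp error visible only during the copy step.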
However, when I’m doing this for multiple simultaneous jobs, I occasionally get error messages like the following:
command--james--190429-094127-78.6 (attempt 1) failed. Retrying. Failure message: cp: skipping file '/mnt/data/mount/gs/my_bucket/projects/jamesp/bin/my-executable', as it was replaced while being copied
One possibility is that my working directory is inside the mounted bucket, in which case another simultaneous job copying the file onto itself could trigger this. But that isn’t what I’d expect just from mounting a bucket. To clarify: what is the default working directory when a job is launched in dsub?
- Created 4 years ago
- Comments: 9 (2 by maintainers)
Top GitHub Comments
If you are able to create a reproducible test case that demonstrates clearly that the file is in fact not being modified while you are trying to copy it, then it seems worth filing an issue with gcsfuse.
I’m going to leave this issue open for now, as I think we should update the dsub documentation to more clearly indicate that the bucket --mount flag should be used only when you’ve really proved out that the standard --output mechanisms are insufficient.
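As a hedged sketch of that alternative, the binary can be localized with dsub’s --input flag instead of being read through a mount; the project, bucket, provider, and environment-variable names below are placeholders, and the exact flags should be checked against the dsub documentation for your version:

```shell
# With --input, dsub copies the gs:// object to local disk inside the
# task and exposes its local path via the named environment variable.
dsub \
  --provider google-batch \
  --project my-project \
  --logging gs://my_bucket/logs \
  --input BIN=gs://my_bucket/projects/jamesp/bin/my-executable \
  --image ubuntu \
  --command 'chmod +x "${BIN}" && "${BIN}"' \
  --wait
```

Because localization is done by dsub’s own delocalization machinery rather than by cp over a gcsfuse mount, the stat-cache race described below never comes into play.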
You must have recreated the source file from another gcsfuse mount (or another writer).
This error actually occurs because gcsfuse keeps a cache of file metadata, and that cache has a default timeout of 1 second. When you run
cp file1 file2, cp first stats
file1, which may hit the cache, then opens file1 and stats it again. By that point the cache entry may have timed out, so the second stat fetches file1’s info from the backend storage. cp then compares the two stat results; if they differ, it concludes that file1 may have been recreated, and returns the error message “skipping file …, as it was replaced while being copied”.
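The stat-compare-stat check described above can be illustrated on a plain local file, with a rename standing in for what a cache-expiry re-stat on gcsfuse can appear to show (assumes GNU `stat`; all file names are temporaries):

```shell
set -eu

# Stat the source once, as cp does before copying.
src=$(mktemp)
echo v1 > "$src"
ino_before=$(stat -c %i "$src")

# Simulate the file being recreated mid-copy: write a new file and
# rename it over the source, which gives it a new inode.
tmp=$(mktemp)
echo v2 > "$tmp"
mv "$tmp" "$src"

# Stat again and compare identity, as cp does; a changed inode is what
# produces "skipping file ... as it was replaced while being copied".
ino_after=$(stat -c %i "$src")
if [ "$ino_before" != "$ino_after" ]; then
  echo "replaced"
fi
```

If the file genuinely is not being rewritten, lengthening gcsfuse’s metadata cache (older releases expose a `--stat-cache-ttl` flag; check the flags for your gcsfuse version) may narrow the window in which the two stats can disagree.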