File automatically removed after uploading
Give an expressive description of what went wrong
A file that is uploaded successfully can’t be found when examining the destination folder. In one case I could even see it appear briefly before it went away again.
Version of Meteor-Files you’re experiencing this issue
My versions file tells me that I’m using ostrio:files@1.10.2
Version of Meteor you’re experiencing this issue
Haven’t checked any other - I’m currently on METEOR@1.8
Where this issue appears? OS (Mac/Win/Linux)? Browser name and its version?
I’m effectively running on a mix of Mac and Linux for this example. I have a host system (a Mac) holding the folder where the files should be stored. This folder is mounted into several VirtualBox machines, all running Ubuntu, each of which runs a Docker container holding the app. I’m using Meteor Up to deploy them. I set this up to get closer to my staging environment, where I was seeing this issue before; there, multiple Docker containers running the app share a single server that holds the folder.
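For illustration, the folder mapping in the mup config looks roughly like this (hosts, credentials, names and the database URL are placeholders, not my exact config):

```js
// mup.js (sketch with placeholder hosts/credentials)
module.exports = {
  servers: {
    one: { host: '192.168.1.168', username: 'ubuntu', pem: '~/.ssh/id_rsa' },
    two: { host: '192.168.1.169', username: 'ubuntu', pem: '~/.ssh/id_rsa' }
  },
  app: {
    name: 'myapp',
    path: '../',
    servers: { one: {}, two: {} },
    // the host folder (shared into each VirtualBox VM) is mapped
    // into every app container at the same path
    volumes: { '/data/imports': '/data/imports' },
    env: { ROOT_URL: 'http://myapp.local', MONGO_URL: 'mongodb://192.168.1.1/myapp' }
  }
};
```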
Is it Client or Server issue?
Looks like a server issue to me.
Post Client and/or Server logs with enabled debug option, you can enable “debug” mode in Constructor
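Debug is on; the collection is constructed roughly like this (a minimal sketch, the collection name is a placeholder, the storage path matches the logs below):

```js
import { FilesCollection } from 'meteor/ostrio:files';

// Minimal sketch of my server-side collection with debug enabled.
// 'imports' is a placeholder name; /data/imports is the shared, mounted folder.
export const Imports = new FilesCollection({
  collectionName: 'imports',
  storagePath: '/data/imports',
  debug: true // produces the [FilesCollection] log lines below
});
```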
[192.168.1.168]2019-02-05T12:03:17.702178628Z [FilesCollection] [File Start Method] Minimal.xlsx - wf9H2R63JSXNrhYGR
[192.168.1.168]2019-02-05T12:03:17.703847806Z [FilesCollection] [Upload] [DDP Start Method] Got #-1/1 chunks, dst: Minimal.xlsx
[192.168.1.168]2019-02-05T12:03:17.770289643Z [FilesCollection] [Upload] [DDP] Got #1/1 chunks, dst: Minimal.xlsx
[192.168.1.168]2019-02-05T12:03:17.819228020Z [FilesCollection] [Upload] [DDP] Got #-1/1 chunks, dst: Minimal.xlsx
[192.168.1.168]2019-02-05T12:03:17.852270100Z [FilesCollection] [Upload] [finish(ing)Upload] -> /data/imports/wf9H2R63JSXNrhYGR.xlsx
[192.168.1.168]2019-02-05T12:03:17.924674024Z [FilesCollection] [Upload] [finish(ed)Upload] -> /data/imports/wf9H2R63JSXNrhYGR.xlsx
[192.168.1.169]2019-02-05T12:03:17.600118615Z [FilesCollection] [_preCollectionCursor.observe] [changed]: wf9H2R63JSXNrhYGR
[192.168.1.169]2019-02-05T12:03:17.635168287Z [FilesCollection] [_preCollectionCursor.observe] [removed]: wf9H2R63JSXNrhYGR
[192.168.1.168]2019-02-05T12:03:18.036400123Z [FilesCollection] [_preCollectionCursor.observe] [removed]: wf9H2R63JSXNrhYGR
[192.168.1.168]2019-02-05T12:03:18.037418058Z [FilesCollection] [_preCollectionCursor.observe] [removeUnfinishedUpload]: wf9H2R63JSXNrhYGR
And I have to admit it looks a lot like the problem described in #324, but unfortunately I can’t go back as far as version 1.7.5 of this plugin.
It could also be related to #666, even though the author of that issue claims the file still exists and doesn’t show removeUnfinishedUpload being triggered.
My problem is that I can’t reproduce this on a regular basis, only when running at least 3 of the instances, and even then it happens only occasionally. I guess I’m hitting a race condition here.
Occasionally it produces this log and leaves the file in place:
[192.168.1.168]2019-02-05T12:38:36.180408137Z [FilesCollection] [File Start Method] Minimal.xlsx - deAzpMh8QbhKpgeJe
[192.168.1.168]2019-02-05T12:38:36.182031228Z [FilesCollection] [Upload] [DDP Start Method] Got #-1/1 chunks, dst: Minimal.xlsx
[192.168.1.168]2019-02-05T12:38:36.229334437Z [FilesCollection] [Upload] [DDP] Got #1/1 chunks, dst: Minimal.xlsx
[192.168.1.168]2019-02-05T12:38:36.238273219Z [FilesCollection] [Upload] [DDP] Got #-1/1 chunks, dst: Minimal.xlsx
[192.168.1.168]2019-02-05T12:38:36.246799668Z [FilesCollection] [Upload] [finish(ing)Upload] -> /data/imports/deAzpMh8QbhKpgeJe.xlsx
[192.168.1.168]2019-02-05T12:38:36.286452749Z [FilesCollection] [Upload] [finish(ed)Upload] -> /data/imports/deAzpMh8QbhKpgeJe.xlsx
[192.168.1.169]2019-02-05T12:38:36.304769136Z [FilesCollection] [_preCollectionCursor.observe] [changed]: deAzpMh8QbhKpgeJe
[192.168.1.168]2019-02-05T12:38:36.314845185Z [FilesCollection] [_preCollectionCursor.observe] [changed]: deAzpMh8QbhKpgeJe
[192.168.1.169]2019-02-05T12:38:36.332151635Z [FilesCollection] [_preCollectionCursor.observe] [removed]: deAzpMh8QbhKpgeJe
[192.168.1.168]2019-02-05T12:38:36.346839057Z [FilesCollection] [_preCollectionCursor.observe] [removed]: deAzpMh8QbhKpgeJe
To nail it down, I’ve also tried coupling a version of the application running on my Mac in dev mode to the same database these 3 instances use, and I got it to fail there as well, just far less often than in the Docker instances; all of which points to a race condition …
@JonathanLehner May I know where to change the stream setting? I have the same problem.
OK, I’ve dug into it and I think I now understand what the problem is and what a solution could look like that would not require a change to either the Meteor core or the redis-oplog extension. In both cases the underlying problem remains valid, and in both cases it remains a problem of the respective implementation.
Let’s take the case where we have two servers, one of which (let’s call it the first) is closer to the database than the other (the second).
Also take a look at the hook described here: https://github.com/VeliovGroup/Meteor-Files/blob/a5149baff36dde6d9a415f4ef261fc27402fe4af/server.js#L313-L336
In this example, the first will trigger the update. It will also be the one picking the change up first, triggering the delete and running the delete hook.
Now, finally, the second one receives the information that the document has changed, but (due to https://github.com/meteor/meteor/issues/10443 or https://github.com/cult-of-coders/redis-oplog/issues/311, depending on whether you use `redis-oplog` or not) it is unable to fetch the document, because the first server has already deleted it. It therefore skips the update hook; alright, that’s fine. Now it receives the delete hook, which it wants to execute. Here, too, it is unable to fetch the document, which means the property `isFinished` will not exist, which in turn results in it starting to remove the file.
~~My suggestion: set `isFinished` to false and do a strict check for false, which will be negative in case the document has already been deleted. In that case another server should already have made the right choice at this condition.~~ I sadly have to admit that the strict check doesn’t help us here 😢
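To make the race easier to see, here is a standalone sketch of the pattern (plain Meteor collections and placeholder names, not the library’s actual code):

```js
import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';

// Placeholder collection standing in for the pre-collection of in-flight uploads.
const Uploads = new Mongo.Collection('uploads_demo');

Meteor.startup(() => {
  Uploads.find({}).observe({
    changed(doc) {
      // The first server sees isFinished === true here and deletes the
      // document straight away.
      if (doc.isFinished) {
        Uploads.remove({ _id: doc._id });
      }
    },
    removed(doc) {
      // The second server can get here without ever having received the
      // final update: the document is already gone, so doc still lacks
      // isFinished. A check like this then wrongly classifies the upload
      // as unfinished and would go on to delete the file on disk.
      if (!doc.isFinished) {
        console.log('would treat upload as unfinished and remove the file', doc._id);
      }
    }
  });
});
```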
EDIT: Found a different way to prove that the file was uploaded completely. Created a PR to make it easy 😊
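For illustration, the kind of check I mean could look roughly like this (my own sketch comparing the file on disk against the recorded size; the actual PR may do it differently):

```js
import fs from 'fs';

// Sketch only: before treating an upload as unfinished, check whether the
// file already exists on the shared disk with the expected size.
// fileRef.path and fileRef.size are assumed to be the stored path and size.
function looksFullyUploaded(fileRef) {
  try {
    return fs.statSync(fileRef.path).size === fileRef.size;
  } catch (err) {
    return false; // file not (yet) visible on this instance's disk
  }
}
```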