Having more than 128 files in the root directory corrupts the filesystem when reading or writing
Steps to reproduce (from the original GitHub issue):

- Format a USB stick to FAT32.
- Create 129 (or more) files in the root directory of the USB stick. The files can be anything, even empty ones, and the filenames can be anything, e.g. file1, file2, …
- Plug the USB stick into an Android device.
- Initialize the `UsbMassStorageDevice` as described in the documentation and print `storageDevice.partitions[0].fileSystem.rootDirectory.listFiles().size`. Instead of the real number of files, it prints that number modulo 128 (e.g., with 129 files on the stick it prints 1, and so on). If you then try to read or write any files, the filesystem on the USB stick gets corrupted.
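As a back-of-the-envelope check of the "modulo 128" symptom: assuming 4 KiB clusters and one 32-byte FAT directory entry per file (short 8.3 names, no long-filename entries), a single root-directory cluster holds exactly 4096 / 32 = 128 entries, so keeping only one cluster's worth of entries would produce the reported counts. A minimal sketch, where the geometry values are illustrative assumptions rather than figures taken from the issue:

```kotlin
// Assumed FAT32 geometry: 4096-byte clusters, 32-byte directory entries,
// one entry per file. These values are illustrative assumptions.
const val CLUSTER_SIZE = 4096
const val DIR_ENTRY_SIZE = 32
const val ENTRIES_PER_CLUSTER = CLUSTER_SIZE / DIR_ENTRY_SIZE  // 128

// The count listFiles() would report if all but one cluster of entries is lost.
fun reportedCount(actualFiles: Int): Int = actualFiles % ENTRIES_PER_CLUSTER

fun main() {
    for (n in listOf(129, 200, 300)) {
        println("$n files -> listFiles() reports ${reportedCount(n)}")
    }
}
```

Note that 300 % 128 = 44, which happens to line up with the 44 intact files mentioned in the comment below.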
Issue Analytics

- State:
- Created: 2 years ago
- Comments: 9 (2 by maintainers)
Top GitHub Comments
The bug is here: https://github.com/tracmap/libaums/blob/develop/libaums/src/main/java/com/github/mjdev/libaums/driver/scsi/ScsiBlockDevice.kt#L258

`inBuffer` may already contain data from previous reads (e.g., when we call `rootDirectory.listFiles()` and filenames span multiple clusters). `inBuffer.clear()` wipes all that data.

@magnusja, I also faced the issue reported by @stbelousov. If we put more than 100 files on the USB stick, we get a lower file count, and the rest of the files get deleted. The number of surviving files is not fixed, so it may also depend on the size of the files or on how they are stored on the stick. I also found that most of the time the name of the last file remaining on the stick gets changed. I used 300 small images from this zip: small-size_300_images.zip. After a scan, only 45 images remained on the stick: "gen_257_1604472283911.jpg" through "gen_300_1604472289002.jpg" (44 files), plus "gen_2~6.jpg" (the original name of that file was "gen_256_1604472283828.jpg"). This supports this comment: https://github.com/magnusja/libaums/issues/298#issuecomment-849216577

Finally, I used the fix provided by @stbelousov in his fork (https://github.com/tracmap/libaums/commit/7b9219106419142dec6b1a179067037a9e73c18d), and it solved the issue for me. I suggest integrating his fix into the repo ASAP. Thanks in advance, and thanks to @stbelousov for providing the fix.
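To make the failure mode concrete, here is a minimal, self-contained sketch of the pattern described above. The names `readAllClusters` and `dest` are hypothetical and not libaums APIs: a shared destination buffer accumulates data across multiple reads, and calling `clear()` at the start of each read would discard everything gathered so far.

```kotlin
import java.nio.ByteBuffer

// Hypothetical sketch of the bug pattern, NOT the actual libaums code:
// a multi-cluster read that accumulates into one shared buffer.
fun readAllClusters(clusters: List<ByteArray>, dest: ByteBuffer) {
    for (cluster in clusters) {
        // BUG (analogous to the clear() at ScsiBlockDevice.kt:258):
        // dest.clear() here would reset position to 0 and make the next
        // put() overwrite the data from all previous clusters.
        dest.put(cluster)  // fix: append without clearing
    }
}

fun main() {
    val clusters = listOf(ByteArray(4) { 1 }, ByteArray(4) { 2 })
    val buf = ByteBuffer.allocate(8)
    readAllClusters(clusters, buf)
    println(buf.position())  // prints 8: both clusters were retained
}
```

With `dest.clear()` uncommented, only the last cluster would survive, which is exactly the symptom in the directory-listing case: only the final cluster of directory entries is seen, and writing that truncated state back corrupts the filesystem.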