Memory problems and long load times for multiframe ultrasound data
Hello @dannyrb, you seem most active on this problem. After several days of debugging, I too am having issues loading multiframe ultrasound data. I have compiled a list of related issues from this repo at the bottom.
In my case I have RGB-encoded, 400 MB, 204-frame ultrasound data in a single multiframe DCM file with transfer syntax 1.2.840.10008.1.2.1 (Explicit VR Little Endian).
Config:
```javascript
var config = {
  maxWebWorkers: 1,
  startWebWorkersOnDemand: false,
  taskConfiguration: {
    decodeTask: {
      initializeCodecsOnStartup: false,
      loadCodecsOnStartup: false,
      usePDFJS: false,
      strict: false,
    },
  },
  webWorkerTaskPaths: [
    'https://unpkg.com/cornerstone-wado-image-loader@4.1.0/dist/610.bundle.min.worker.js',
    'https://unpkg.com/cornerstone-wado-image-loader@4.1.0/dist/888.bundle.min.worker.js',
  ],
};
```
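For reference, this config is applied through the loader's worker manager at startup; a minimal sketch of the standard cornerstone-wado-image-loader wiring:

```javascript
// Standard cornerstone-wado-image-loader setup: link the loader to
// cornerstone, then apply the web worker config above.
cornerstoneWADOImageLoader.external.cornerstone = cornerstone;
cornerstoneWADOImageLoader.webWorkerManager.initialize(config);
```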
A few observations:
- The per-frame load time (after the network call finishes) appears to scale with the size of the source file (dataset): near-instant loads with a 12-frame file, intermediate per-frame load times for a 100 MB fluoroscopy file, and ~1 sec/frame for the 400 MB file described above.
- If I throttle decoding with the configuration above (loadCodecsOnStartup/initializeCodecsOnStartup: false, maxWebWorkers: 1) and request only one frame at a time, waiting for each before requesting the next (so nothing queues in webWorkerManager), then I can usually load all frames, though it takes several minutes. Sometimes, however, garbage collection seems to stop working and memory quickly climbs until it consumes everything on my machine (16 GB available; it crashes around 15 GB).
- After crashing, the memory never gets cleared.
- Manually hitting the garbage collection button in Chrome DevTools wipes this 15 GB down to under 1 GB.
- Running a profile of the loading indicates that ~97% of the time is spent in postMessage, which seems like a lot of overhead for using web workers.
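The throttling workaround in the second observation can be sketched in isolation. This is a hedged, self-contained sketch: `loadFrame` is a hypothetical stand-in for whatever per-frame loader is in use (e.g. a call to `cornerstone.loadAndCacheImage` with a per-frame imageId), not an API from the loader itself.

```javascript
// Request exactly one frame at a time so no work queues up in the
// web worker manager: each decode finishes before the next is requested,
// keeping memory bounded at roughly one in-flight frame.
async function loadFramesSequentially(frameCount, loadFrame) {
  const frames = [];
  for (let i = 0; i < frameCount; i++) {
    frames.push(await loadFrame(i));
  }
  return frames;
}

// Usage with a dummy loader that just echoes the frame index:
loadFramesSequentially(3, async (i) => i).then((frames) => {
  console.log(frames); // [ 0, 1, 2 ]
});
```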
Conclusions/deductions from the above observations:
- For each frame request, a full copy of the source dataset is being made.
- The data is freed, but for some reason it is not being properly garbage collected, even after a very long time.
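The copy hypothesis is consistent with how postMessage behaves by default: it structured-clones (copies) its payload, whereas listing a buffer in the transfer list moves it with zero copying. A hedged illustration of that difference, using `structuredClone` (which exercises the same clone/transfer machinery as `worker.postMessage(data, [buffer])`):

```javascript
// Stand-in for one frame's pixel data (1 MB).
const pixelData = new Uint8Array(1024 * 1024);
const buffer = pixelData.buffer;

// Default behavior: a full copy is made, doubling memory per message.
const copied = structuredClone(buffer);

// Transfer: zero-copy move; the source buffer is detached afterwards.
const moved = structuredClone(buffer, { transfer: [buffer] });

console.log(copied.byteLength); // 1048576
console.log(moved.byteLength);  // 1048576
console.log(buffer.byteLength); // 0 (detached: nothing left behind to copy)
```

If the loader copied the whole multiframe dataset into the worker per frame request, the postMessage-dominated profile and the memory growth above would both follow.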
What does a solution look like?
- A fix to reduce the memory allocated and the time taken to load/decode a single frame
- A bulk decode method that decodes all of the frames, even if it is blocking
- A way to decode without web workers (and without allocating new memory)
Related issues
#235, #425, #411, #373
https://github.com/cornerstonejs/cornerstoneWADOImageLoader/issues/156#issuecomment-794628050
https://github.com/cornerstonejs/cornerstoneWADOImageLoader/issues/428
https://github.com/cornerstonejs/cornerstone/issues/576
https://github.com/cornerstonejs/cornerstone/issues/519
Issue Analytics
- State:
- Created: a year ago
- Reactions: 2
- Comments: 16 (4 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
@nyacoub
This seems to help fix the runaway memory problems, thanks!
Might need to use the Chrome memory profiler to take a look.