Reader memory usage when wrapping vtkDataSets
See the original GitHub issue. As pointed out in https://github.com/pyvista/pyvista-support/issues/500#issuecomment-921108703 and https://github.com/pyvista/pyvista-support/issues/500#issuecomment-921106561, readers appear to duplicate memory usage.
The following code shows that the mesh data is copied from the reader into the mesh object; the extra copy is only released when the reader object is deleted.
import pyvista as pv
from pyvista import examples
from memory_profiler import profile

@profile
def run():
    filename = examples.download_parched_canal_4k(load=False)
    reader = pv.get_reader(filename)
    mesh = reader.read()
    del reader

if __name__ == "__main__":
    run()
$ python test.py
Filename: test.py
Line # Mem usage Increment Occurrences Line Contents
=============================================================
6 110.2 MiB 110.2 MiB 1 @profile
7 def run():
8 110.2 MiB 0.0 MiB 1 filename = examples.download_parched_canal_4k(load=False)
9 110.4 MiB 0.2 MiB 1 reader = pv.get_reader(filename)
10 302.8 MiB 192.3 MiB 1 mesh = reader.read()
11 206.8 MiB -96.0 MiB 1 del reader
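The lifetime behaviour in the profile above can be sketched with plain Python objects. `FakeReader` and `Blob` below are hypothetical stand-ins, not pyvista API: the point is that a reader keeps its own reference to the data it produced, so the duplicate copy survives until the reader itself is deleted.

```python
import copy
import gc
import weakref

class Blob:
    """Stands in for a large VTK dataset (hypothetical, for illustration)."""
    def __init__(self, n):
        self.data = bytearray(n)

class FakeReader:
    """Mimics a reader that keeps its own reference to the data it read."""
    def __init__(self):
        self._output = Blob(1024)

    def read(self):
        # The returned mesh is a deep copy, so while the reader is alive
        # two copies of the underlying data exist.
        return copy.deepcopy(self._output)

reader = FakeReader()
mesh = reader.read()

released = []
weakref.finalize(reader._output, released.append, True)

gc.collect()
print(released)   # [] -- the reader still holds its copy

del reader
gc.collect()
print(released)   # [True] -- deleting the reader frees the duplicate
```

This mirrors the profile: `del reader` drops the last reference to the reader's internal copy, which is why roughly one dataset's worth of memory is returned at that line.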
Issue Analytics
- Created: a year ago
- Reactions: 3
- Comments: 8 (8 by maintainers)
Top GitHub Comments
It turns out that some datasets, like PolyData, shallow copy by default, with an option to deep copy. Others only allow deep copy. This seems like a straightforward PR at this point.
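The shallow-versus-deep distinction mentioned above can be sketched with the standard library `copy` module (`Mesh` is a hypothetical stand-in, not a pyvista class): a shallow copy duplicates only the wrapper and shares the underlying array, while a deep copy duplicates the array too and thus doubles memory.

```python
import copy

class Mesh:
    """Hypothetical stand-in for a dataset wrapping a large points array."""
    def __init__(self):
        self.points = [0.0] * 1000  # stands in for a big VTK array

original = Mesh()

shallow = copy.copy(original)   # new wrapper, shared points array
deep = copy.deepcopy(original)  # points array duplicated as well

print(shallow.points is original.points)  # True  -- no extra array memory
print(deep.points is original.points)     # False -- memory doubled
```

Defaulting to the shallow form where the dataset supports it avoids the duplicate copy the profile in this issue shows.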
It fixes the issue raised in this PR, which is that wrapping certain VTK datasets doubles memory and can appear to leak it unless the VTK dataset is garbage collected. This manifests in the reader, since the reader keeps a reference to the VTK dataset object.