Issue running parametric umap on large data - proto limit
I’m trying to run a basic parametric UMAP embedding on my dataset. Each image is (193, 229, 193), so flattened out that is 8,530,021 dimensions.
This is the error when trying to run embedding = embedder.fit_transform(train_images):

raise ValueError(
ValueError: Tried to convert 'params' to a tensor and failed. Error: Cannot create a tensor proto whose content is larger than 2GB.
It seems like this is just a limitation of TensorFlow, so I am not sure what can be done about it on your end. But I was wondering if anyone has faced something similar with large datasets, and whether there is a workaround for this issue?
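For reference, here is a minimal sketch of the failing call. The sample count, random data, and random seed are illustrative placeholders, not the original dataset; only the per-image shape comes from the issue.

```python
import numpy as np
from umap.parametric_umap import ParametricUMAP

n_samples = 100                       # hypothetical count; ~3.4 GB of float32 data
dim = 193 * 229 * 193                 # 8,530,021 features per flattened image
train_images = np.random.default_rng(0).random((n_samples, dim), dtype=np.float32)

embedder = ParametricUMAP()
# With more than 2 GB of data this reportedly fails with:
#   ValueError: Cannot create a tensor proto whose content is larger than 2GB.
embedding = embedder.fit_transform(train_images)
```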
Issue Analytics
- State:
- Created 3 years ago
- Comments: 8
Top Results From Across the Web
Tried Parametric UMAP but its performance does not seem to ...
Problem: I am trying to perform text clustering using Sentence Transformers embedding of 748 dimensions. Method: I have supervised data of around...

Frequently Asked Questions — umap 0.5 documentation
If your dataset is not especially large but you have found that UMAP runs out of memory when operating on it consider using...

A review of UMAP in population genetics - Nature
UMAP allows for specification of a minimum distance between nearest neighbours in low-dimensional space: higher values are useful for ...

Use of “default” parameter settings when analyzing single cell ...
The pipeline for performing unbiased cell clustering within the Seurat pipeline is: (1) filter the dataset based on minimum/maximum cut-offs for genes/cell, ...

Processing single-cell RNA-seq datasets using SingCellaR
This protocol describes a method for analyzing single-cell RNA sequencing (scRNA-seq) datasets using an R package called SingCellaR. In addition ...
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
It looks like the issue here is the tensorflow dataset / iterator being created from a numpy array, which has an upper limit of 2GB. We could try a different iterator, which would move the upper limit to the amount of data that fits in RAM.
[1] https://stackoverflow.com/a/53382823/200663
[2] https://stackoverflow.com/a/55126482/200663
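A minimal sketch of the generator-based pattern the linked StackOverflow answers describe, assuming the general tf.data approach rather than ParametricUMAP’s internal API; the array name, shapes, and batch size below are placeholders. The idea is that from_tensor_slices embeds the whole array in the graph as a constant proto (hitting the 2 GB cap), whereas a generator streams rows from host RAM.

```python
import numpy as np
import tensorflow as tf

# Placeholder data; in practice this would be the flattened image stack.
big_array = np.random.rand(256, 10_000).astype("float32")

def row_generator():
    # Rows are yielded one at a time from host RAM, so no single
    # tensor proto ever has to hold the full array.
    for row in big_array:
        yield row

dataset = tf.data.Dataset.from_generator(
    row_generator,
    output_signature=tf.TensorSpec(shape=(big_array.shape[1],), dtype=tf.float32),
).batch(64)
```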
Have you tried decreasing batch_size?
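For reference, a hedged sketch of passing a smaller batch_size to ParametricUMAP; the value 256 is illustrative, and whether a smaller batch alone avoids the 2 GB proto limit depends on how the input array is converted internally.

```python
from umap.parametric_umap import ParametricUMAP

# batch_size controls how many samples go into each training step;
# 256 is an illustrative value, not a recommendation from the thread.
embedder = ParametricUMAP(batch_size=256)
embedding = embedder.fit_transform(train_images)  # train_images: the flattened image array
```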