TypeError when fitting on >= 4096 samples
Hi, I systematically get a TypeError when I run the fit() method on a dataset with >= 4096 samples. E.g.:
import umap
import numpy as np

reducer = umap.UMAP()
np.random.seed(0)
reducer.fit(np.random.rand(4096, 16))
produces the error, while
reducer = umap.UMAP()
np.random.seed(0)
reducer.fit(np.random.rand(4095,16))
runs fine.
I have installed umap with:
pip3 install umap-learn[plot]
This is the traceback of the error:
TypeError                                 Traceback (most recent call last)
<ipython-input-98-8feab5de4b07> in <module>
      1 reducer = umap.UMAP(random_state=41, init='random', force_approximation_algorithm=True)
----> 2 reducer.fit(np.random.rand(4096,16))

~/anaconda2/envs/ecpackage3/lib/python3.8/site-packages/umap/umap_.py in fit(self, X, y)
   1831         self._search_graph.data = _data
   1832         self._search_graph = self._search_graph.maximum(
-> 1833             self._search_graph.transpose()
   1834         ).tocsr()
   1835

~/anaconda2/envs/ecpackage3/lib/python3.8/site-packages/scipy/sparse/lil.py in transpose(self, axes, copy)
    435
    436     def transpose(self, axes=None, copy=False):
--> 437         return self.tocsr(copy=copy).transpose(axes=axes, copy=False).tolil(copy=False)
    438
    439     transpose.__doc__ = spmatrix.transpose.__doc__

~/anaconda2/envs/ecpackage3/lib/python3.8/site-packages/scipy/sparse/lil.py in tocsr(self, copy)
    460         indptr = np.empty(M + 1, dtype=idx_dtype)
    461         indptr[0] = 0
--> 462         _csparsetools.lil_get_lengths(self.rows, indptr[1:])
    463         np.cumsum(indptr, out=indptr)
    464         nnz = indptr[-1]

_csparsetools.pyx in scipy.sparse._csparsetools.lil_get_lengths()

~/anaconda2/envs/ecpackage3/lib/python3.8/site-packages/scipy/sparse/_csparsetools.cpython-38-darwin.so in View.MemoryView.memoryview_cwrapper()

~/anaconda2/envs/ecpackage3/lib/python3.8/site-packages/scipy/sparse/_csparsetools.cpython-38-darwin.so in View.MemoryView.memoryview.__cinit__()

TypeError: a bytes-like object is required, not 'list'
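The failing call is umap's symmetrization of its k-NN search graph: the elementwise `maximum` of the graph and its transpose, invoked here on a `scipy.sparse` LIL matrix, whose `transpose` path blows up inside `_csparsetools`. As an illustration (not UMAP's actual code), the same symmetrization pattern can be sketched on CSR matrices, which avoids the `lil.py` transpose path shown in the traceback; the `graph`/`sym` names below are purely illustrative:

```python
import numpy as np
import scipy.sparse as sp

# Build a small random sparse matrix in LIL format, standing in for
# the k-NN search graph from the traceback.
graph = sp.random(100, 100, density=0.05, format="lil", random_state=0)

# Symmetrize via the elementwise maximum of the matrix and its
# transpose, converting to CSR first instead of transposing the LIL.
sym = graph.tocsr().maximum(graph.tocsr().transpose()).tocsr()

# The result is symmetric by construction.
assert np.allclose(sym.toarray(), sym.toarray().T)
```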
Issue Analytics
- Created 3 years ago
- Reactions: 2
- Comments: 7 (2 by maintainers)
Top GitHub Comments
Update: after installing pynndescent the error disappeared. Not sure if this is the expected behavior, though…
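For anyone trying the same workaround, a generic way to check whether the optional pynndescent package is importable in the current environment (this is a plain stdlib check, not part of umap's API; the `has_module` helper is hypothetical):

```python
import importlib.util

def has_module(name: str) -> bool:
    """Return True if `name` can be imported in this environment."""
    return importlib.util.find_spec(name) is not None

# e.g. has_module("pynndescent") reports whether the workaround
# package from this thread is installed; "json" is a stdlib example.
print(has_module("json"))
```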
Should be fixed in master; I’ll try to roll up any other fixes/patches and make a release soon.