Consistent failure on OSX in draft 0.20.2
While trying to build wheels for 0.20.2, all Mac builds have failed (e.g. https://travis-ci.org/MacPython/scikit-learn-wheels/jobs/469752588) with:
___________________ test_pca_dtype_preservation[randomized] ____________________

svd_solver = 'randomized'

    @pytest.mark.parametrize('svd_solver', solver_list)
    def test_pca_dtype_preservation(svd_solver):
>       check_pca_float_dtype_preservation(svd_solver)

svd_solver = 'randomized'

../venv/lib/python2.7/site-packages/sklearn/decomposition/tests/test_pca.py:707:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

svd_solver = 'randomized'

    def check_pca_float_dtype_preservation(svd_solver):
        # Ensure that PCA does not upscale the dtype when input is float32
        X_64 = np.random.RandomState(0).rand(1000, 4).astype(np.float64)
        X_32 = X_64.astype(np.float32)

        pca_64 = PCA(n_components=3, svd_solver=svd_solver,
                     random_state=0).fit(X_64)
        pca_32 = PCA(n_components=3, svd_solver=svd_solver,
                     random_state=0).fit(X_32)

        assert pca_64.components_.dtype == np.float64
        assert pca_32.components_.dtype == np.float32
        assert pca_64.transform(X_64).dtype == np.float64
        assert pca_32.transform(X_32).dtype == np.float32

        assert_array_almost_equal(pca_64.components_, pca_32.components_,
>                                  decimal=5)
E       AssertionError:
E       Arrays are not almost equal to 5 decimals
E
E       (mismatch 16.6666666667%)
E        x: array([[ 0.62022,  0.15983, -0.38317, -0.66555],
E               [ 0.26318,  0.24085,  0.90801, -0.21966],
E               [-0.12498, -0.88109,  0.16727, -0.42437]])
E        y: array([[ 0.62022,  0.15983, -0.38317, -0.66555],
E               [ 0.26318,  0.24084,  0.90801, -0.21967],
E               [-0.12498, -0.88109,  0.16726, -0.42436]], dtype=float32)

X_32       = array([[ 0.54881352,  0.71518934,  0.60276335,  0.54488319],
       [ 0.423654...],
       [ 0.43487364,  0.83000296,  0.93280619,  0.30833843]], dtype=float32)
X_64       = array([[ 0.5488135 ,  0.71518937,  0.60276338,  0.54488318],
       [ 0.423654...91,  0.34963937],
       [ 0.43487363,  0.83000295,  0.93280618,  0.30833843]])
pca_32     = PCA(copy=True, iterated_power='auto', n_components=3, random_state=0,
    svd_solver='randomized', tol=0.0, whiten=False)
pca_64     = PCA(copy=True, iterated_power='auto', n_components=3, random_state=0,
    svd_solver='randomized', tol=0.0, whiten=False)
svd_solver = 'randomized'
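For anyone without access to the Travis builders, the comparison can be reproduced outside pytest with a short standalone script. This is only a minimal sketch of the failing comparison (the upstream test above is the authoritative version), assuming NumPy and scikit-learn 0.20.2 are installed locally:

```python
# Minimal repro sketch: fit PCA on the same data in float64 and float32
# and report how far apart the resulting components are.
import numpy as np
from sklearn.decomposition import PCA

X_64 = np.random.RandomState(0).rand(1000, 4).astype(np.float64)
X_32 = X_64.astype(np.float32)

pca_64 = PCA(n_components=3, svd_solver='randomized', random_state=0).fit(X_64)
pca_32 = PCA(n_components=3, svd_solver='randomized', random_state=0).fit(X_32)

# The test fails when any entry differs in the 5th decimal place.
abs_diff = np.abs(pca_64.components_ - pca_32.components_)
rel_diff = abs_diff / np.abs(pca_64.components_)
print("max abs diff:", abs_diff.max())
print("max rel diff:", rel_diff.max())
```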
I can’t see any change in 0.20.2 that could have caused this new failure.
Top GitHub Comments
I guess this is likely to be a scipy issue, since we always use the latest version of scipy and scipy just released 1.2.0, but I don’t have a Mac.
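To test that hypothesis on an affected machine, a quick first step is to confirm which SciPy/NumPy versions (and which BLAS/LAPACK build) the failing environment actually picked up. The snippet below is only an illustrative check, not part of the wheel build:

```python
# Print the versions and the BLAS/LAPACK configuration SciPy was built against,
# to confirm whether the newly released SciPy 1.2.0 is in use on the failing builder.
import numpy
import scipy

print("numpy :", numpy.__version__)
print("scipy :", scipy.__version__)
scipy.show_config()
```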
I don’t think there’s any need for further investigation. We can’t expect the same level of accuracy when comparing against float32. The default rtol we use to compare float64 is 1e-7. It’s reasonable to use an rtol between 1e-3 and 1e-4 for float32, since its precision is roughly half that of float64.
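A sketch of how the assertion could be relaxed along these lines, using `numpy.testing.assert_allclose` with an rtol suited to float32. The helper name and the exact tolerance are illustrative assumptions, not the committed fix:

```python
# Sketch: compare float64 and float32 PCA components with a dtype-appropriate
# relative tolerance instead of a fixed 5-decimal check.
import numpy as np
from numpy.testing import assert_allclose

def check_components_close(components_64, components_32, rtol=1e-4):
    # rtol=1e-4 reflects the roughly halved precision of float32 suggested
    # in the comment above; 1e-7 remains appropriate for float64-vs-float64.
    assert_allclose(components_64, components_32.astype(np.float64), rtol=rtol)
```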