
Possible speedups using Cython

See original GitHub issue

Hello! I saw this project on Reddit and was skeptical about whether compiling code that mainly uses NumPy with Cython provides any speedup.

It seems that it doesn’t. I ran a couple of tests using this function: https://github.com/anish-lakkapragada/SeaLion/blob/1439a12880cf71888e6b858228973cbe91bc8293/sealion/cython_knn.pyx#L11-L15
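For context, the linked function is essentially a vectorized NumPy R² computation. A hedged reconstruction (the exact code is at the link above; this is just a sketch of the same formula) looks like this:

```python
import numpy as np

def r2_score_python(y_pred, y_test):
    # R^2 = 1 - SS_res / SS_tot, fully vectorized:
    # each operation here allocates a fresh temporary array
    num = np.sum((y_test - y_pred) ** 2)              # residual sum of squares
    denum = np.sum((y_test - np.mean(y_test)) ** 2)   # total sum of squares
    return 1 - num / denum
```

Since every line is a single call into NumPy's compiled C routines, there is almost no Python-level work left for Cython to speed up.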

I created two versions: one compiled with Cython (r2_score) and another pasted straight into Python (r2_score_python). Both contain the same exact code. I then ran a simple test in a Jupyter notebook:

y1, y2 = np.random.rand(2, 10_000_000)

%timeit r2_score(y1, y2)  # compiled with Cython
# 902 ms ± 4.82 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

%timeit r2_score_python(y1, y2)  # just pasted into Python
# 897 ms ± 1.91 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

Not much of a difference, it seems (902 ms vs 897 ms). If anything, plain Python + NumPy (r2_score_python) comes out marginally faster than Cython + NumPy (r2_score), though the gap is within the measurement noise.

Another solution: ugly but fast

I quickly hacked up this code:

%%cython -a --verbose
# cython: language_level=3, boundscheck=False, wraparound=False

import numpy as np
cimport numpy as np

cdef arr_sub_arr(double[:] dst, double[:] x, double[:] y):
    for i in range(x.shape[0]):
        dst[i] = x[i] - y[i]
        
cdef arr_sub_float(double[:] dst, double[:] x, double y):
    for i in range(x.shape[0]):
        dst[i] = x[i] - y
        
cdef arr_square(double[:] dst, double[:] src):    
    for i in range(src.shape[0]):
        dst[i] = src[i] * src[i]
        
cdef arr_mean(double[:] src):
    cdef Py_ssize_t n_elem = src.shape[0]
    
    return arr_sum(src) / n_elem
        
cdef arr_sum(double[:] src):
    cdef double _sum = 0.0
    
    for i in range(src.shape[0]):
        _sum += src[i]
        
    return _sum

cdef __r2_score_cython(double[:] y_pred, double[:] y_test):
    arr_sub_arr(y_pred, y_pred, y_test)
    arr_square(y_pred, y_pred)
    
    cdef double num = arr_sum(y_pred)
    cdef double y_test_mean = arr_mean(y_test)
    
    arr_sub_float(y_pred, y_test, y_test_mean)
    arr_square(y_pred, y_pred)
    cdef double denum = arr_sum(y_pred)
    
    return 1 - num / denum
    

cpdef r2_score_cython(np.ndarray y_pred, np.ndarray y_test):
    assert y_pred.ndim == y_test.ndim == 1, "Invalid number of dimensions"
    
    cdef Py_ssize_t sh1 = y_pred.shape[0]
    cdef Py_ssize_t sh2 = y_test.shape[0]
    assert sh1 == sh2, f"Shape mismatch: {sh1} vs {sh2}"
    
    return __r2_score_cython(
        np.array(y_pred),  # make a copy!
        y_test
    )

Nothing fancy: just for loops and a copy of y_pred used to store the intermediate computations, so it's also memory-efficient; beyond that single copy, it doesn't allocate any new arrays at all.

Here, code like double[:] is a typed memoryview, not a NumPy array; see the Cython memoryview tutorial.
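To make the in-place trick explicit, the same algorithm can be mirrored in plain NumPy (the name r2_inplace is hypothetical; the `out=` arguments keep every step writing into the single scratch buffer, just like the Cython helpers above):

```python
import numpy as np

def r2_inplace(y_pred, y_test):
    # Mirror of __r2_score_cython: one scratch copy of y_pred,
    # then every subsequent step overwrites that same buffer.
    scratch = np.array(y_pred)                   # the single copy
    np.subtract(scratch, y_test, out=scratch)    # y_pred - y_test
    np.multiply(scratch, scratch, out=scratch)   # squared residuals
    num = scratch.sum()

    mean = y_test.sum() / y_test.shape[0]        # arr_mean
    np.subtract(y_test, mean, out=scratch)       # y_test - mean, reusing scratch
    np.multiply(scratch, scratch, out=scratch)
    denum = scratch.sum()
    return 1 - num / denum
```

Note that, like the Cython version, this only clobbers the copy of y_pred, so the caller's arrays are left untouched.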

The speedup

%timeit r2_score_cython(y1, y2)
# 85.2 ms ± 153 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

Your original code ran in 897 ms. The code I quickly wrote with simple loops (but with types and compiled via Cython!) ran in 85 ms, which is 10.5 times faster!

That is to say, you can get far more speed out of Cython than you're currently getting. The trade-offs: you now have to think about memory allocation and write the for loops yourself, and this version only works for floating-point numbers, not integers or any other data types (I think this can be solved using fused types).
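On the fused-types point: a hedged sketch of how the float-only restriction could be lifted (the names here are hypothetical, following Cython's fused types feature; Cython generates one specialization of the function per type listed):

```cython
# cython: language_level=3, boundscheck=False, wraparound=False

ctypedef fused numeric:
    float
    double
    long

cdef double arr_sum_fused(numeric[:] src):
    # one compiled specialization per member of `numeric`
    cdef double _sum = 0.0
    cdef Py_ssize_t i
    for i in range(src.shape[0]):
        _sum += src[i]
    return _sum
```

The other helpers (arr_sub_arr, arr_square, etc.) could be generalized the same way.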

So, if you’re into Cython, you could probably think of other ways to speed up your code.

Issue Analytics

  • State: closed
  • Created 3 years ago
  • Comments:12 (7 by maintainers)

Top GitHub Comments

2 reactions
ForceBru commented, Sep 3, 2021

Hello! I saw this project on Reddit and was skeptical about whether compiling code that mainly uses NumPy with Cython provides any speedup.

Just curious, because I think I'm just as skeptical: what is the point of using this weird syntax? I mean, wouldn't it be better if performance-critical components were written in C/C++ and provided a Python API?

Is the syntax that weird, though? This is basically Python syntax with slight modifications to let you specify types. (I think Cython should switch to Python’s type annotations like cdef my_var: double = 3.141 at some point) Writing something in C or C++ means that you’ll need to know C or C++, which are quite different from Python (and also each other).

Furthermore, Cython already compiles to C, so before rewriting in C or C++ manually, you need to be sure that it’d be worth it, that your code can be faster than code generated by Cython. IMO, Cython is a middle ground between Python and “proper” compiled and statically typed languages like C, C++ and Rust. Which is great, because it’s almost as simple as Python and not nearly as difficult as C++. The advantage of Cython is that it’s similar to Python and lets you jump right in and get noticeable speedups without too much trouble, whereas using C or C++ implies learning C or C++ first and being able to wield it confidently, which is pretty difficult.

0 reactions
ghost commented, Sep 3, 2021

It’s not that weird, I’m just using weird because I wrote a few things in C++, which makes it more familiar to me than Cython. Generally I agree, learning C/C++ is more difficult than simply changing defs to cdefs, declaring additional stuff every now and then, and so on.
