Performance
What can we do to make this fast? At the moment, calling `j.sin` is 10x slower than `math.sin` or `numpy.sin`.
I’m guessing that this is due to type inference issues with the PyCall.jl callback. Is there a way we could provide annotations or hints for individual functions?
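The kind of per-call overhead being described is easy to reproduce without Julia at all: wrapping `math.sin` in a Python-level shim that inspects and converts its argument on every call shows how a thin bridge layer multiplies the cost of a cheap function. This is an illustrative sketch, not a measurement of PyCall itself; `converting_sin` is a hypothetical stand-in for the conversion layer.

```python
import math
import timeit

def converting_sin(x):
    # Hypothetical stand-in for a bridge layer: inspect the argument's
    # type and "convert" it before calling the underlying function.
    if isinstance(x, float):
        arg = x
    elif isinstance(x, int):
        arg = float(x)
    else:
        raise TypeError(f"unsupported type: {type(x)!r}")
    return math.sin(arg)

direct = timeit.timeit("math.sin(0.5)", globals=globals(), number=100_000)
wrapped = timeit.timeit("converting_sin(0.5)", globals=globals(), number=100_000)
print(f"direct:  {direct:.4f}s")
print(f"wrapped: {wrapped:.4f}s ({wrapped / direct:.1f}x the direct cost)")
```

The exact ratio depends on the interpreter and machine, but the pattern is the same as the `j.sin` case: the trig call itself is nearly free, so any per-call bookkeeping dominates.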
Issue Analytics
- State:
- Created: 6 years ago
- Comments: 8 (5 by maintainers)

My (possibly quixotic) aim was to think about how we can make Julia an easy-to-use replacement for C for writing Python backend code. Where is the overhead, and what could we do to make it slimmer?
@simonbyrne, but users normally only write Python backend code in C or Cython or whatever for big, expensive functions. That’s why everything in NumPy is vectorized, after all. And for big, expensive functions, the function-call overhead shouldn’t matter.
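The amortization argument can be sketched in plain Python: if the bridge checks and converts the whole container once and then runs a tight inner loop, the per-call cost is paid once per batch rather than once per element. `checked_sin` and `vectorized_sin` below are hypothetical illustrations, not PyCall API.

```python
import math

def checked_sin(x):
    # Per-call type check: the overhead under discussion.
    if not isinstance(x, float):
        x = float(x)
    return math.sin(x)

def vectorized_sin(xs):
    # Convert the whole batch in one pass, then run a tight loop.
    # The "bridge" is crossed once per batch, not once per element.
    xs = [float(x) for x in xs]
    return [math.sin(x) for x in xs]

data = [0.1, 0.2, 0.3]
assert vectorized_sin(data) == [checked_sin(x) for x in data]
```

This is exactly why NumPy-style vectorized entry points make per-call overhead tolerable: the expensive work dominates the single call that crosses the boundary.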
Part of the overhead comes from converting the arguments to Julia types. Since PyCall does not know what kind of Python object you are passing, it has to call `convert(PyAny, args)...` on the tuple of arguments. This, in turn, needs to check which Julia type it should try to convert to. Right now this is done with a naive list of `if` statements … it could be improved a lot if we used e.g. some kind of hash table to memoize the results based on the `PyObject_Type`. Of course, something like `sin` could be made faster (but much less flexible) if we only checked for one or two possible types. And you could avoid the dynamic dispatch on the Julia side too.
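Both ideas can be sketched in pure Python: a converter cache keyed on the concrete type (the analogue of hashing on `PyObject_Type`) replaces a linear chain of `isinstance` checks, and a specialized fast path skips conversion entirely for the one expected type. Everything here (`convert_memoized`, `fast_sin`, the supported types) is a hypothetical illustration, not PyCall's actual code.

```python
import math

# Cache of converters, keyed on the concrete Python type -- the
# analogue of memoizing on PyObject_Type instead of re-walking a
# chain of if statements on every call.
_converters = {}

def convert_memoized(x):
    try:
        conv = _converters[type(x)]
    except KeyError:
        # First time we see this type: resolve the converter once
        # with the slow isinstance chain, then remember it.
        if isinstance(x, float):
            conv = lambda v: v
        elif isinstance(x, (int, str)):
            conv = float
        else:
            raise TypeError(f"unsupported type: {type(x)!r}")
        _converters[type(x)] = conv
    return conv(x)

def fast_sin(x):
    # Specialized fast path, per the last suggestion: check a single
    # expected type and skip conversion entirely in the common case.
    if type(x) is float:
        return math.sin(x)
    # Fallback for everything else, via the memoized converter.
    return math.sin(convert_memoized(x))
```

The trade-off is exactly the one described above: the fast path is rigid (it only recognizes `float`), while the memoized dispatch stays general but pays one dictionary lookup per call.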