Can't Async/Await yahoo_fin api calls
I am using FastAPI, and it seems I can't directly await the yahoo_fin API calls because I receive this error:
object numpy.float64 can't be used in 'await' expression
It would be very helpful to be able to await these:
price = await si.get_live_price(symbol)
chain = await options.get_options_chain(symbol, expiration)[optionType]
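The error comes from awaiting the return value of an ordinary blocking function: get_live_price() returns a numpy.float64, which is not awaitable. A minimal sketch of one workaround, assuming the yahoo_fin calls shown above, is to run the blocking calls in a worker thread via asyncio.to_thread (Python 3.9+) so the FastAPI event loop stays free; the route below is illustrative, not code from yahoo_fin itself:

```python
# Minimal sketch (not part of yahoo_fin): wrap the blocking calls in a thread
# so they can be awaited from an async FastAPI route.
import asyncio

from fastapi import FastAPI
from yahoo_fin import stock_info as si
from yahoo_fin import options

app = FastAPI()

@app.get("/quote/{symbol}")
async def quote(symbol: str, expiration: str, option_type: str = "calls"):
    # Each blocking yahoo_fin call runs in a worker thread; its result is awaited.
    price = await asyncio.to_thread(si.get_live_price, symbol)
    chain = await asyncio.to_thread(options.get_options_chain, symbol, expiration)
    return {
        "symbol": symbol,
        "price": float(price),  # numpy.float64 -> plain float for JSON
        "options": chain[option_type].to_dict(orient="records"),
    }
```

On older Pythons, FastAPI's run_in_threadpool (or loop.run_in_executor) achieves the same effect.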

Hey, it was incredibly fast. In fact, too fast lol. I remember I was using asyncio and making these calls, but I believe I was hitting some rate limiting on Yahoo. Basically, the pages wouldn't all finish loading.
I decided to keep it sync as it was, but use RabbitMQ and batch the requests out to a queue. This way I was able to get better performance while still having the response content load completely so it could be parsed.
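A minimal sketch of that kind of setup (the queue name and the pika usage are my own illustration, not code from this project): the web tier publishes symbols to RabbitMQ, and a worker drains the queue with the plain synchronous yahoo_fin calls, so Yahoo sees a steady, throttled request rate.

```python
# Illustrative sketch of batching yahoo_fin requests through RabbitMQ.
import json

import pika
from yahoo_fin import stock_info as si

QUEUE = "price_requests"  # hypothetical queue name

def publish(symbols):
    """Producer side: batch symbols onto the queue."""
    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue=QUEUE, durable=True)
    for symbol in symbols:
        channel.basic_publish(exchange="", routing_key=QUEUE,
                              body=json.dumps({"symbol": symbol}))
    connection.close()

def consume():
    """Worker side: pull one message at a time and fetch the price synchronously."""
    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue=QUEUE, durable=True)
    channel.basic_qos(prefetch_count=1)  # throttle: one in-flight request per worker

    def handle(ch, method, properties, body):
        symbol = json.loads(body)["symbol"]
        price = si.get_live_price(symbol)  # ordinary blocking call
        print(f"{symbol}: {price}")
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(queue=QUEUE, on_message_callback=handle)
    channel.start_consuming()
```

The prefetch_count of 1 is what keeps each worker from hammering Yahoo; scaling is then a matter of adding or removing workers.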
On Sun, Sep 18, 2022, 9:41 AM msingh00 @.***> wrote:
The data is usually about 20 years of daily data.
I compared the performance of plain Python multiprocessing, modin, and Ray.

I have settled on the last, as it's more stable and yields better performance than plain Python multiprocessing. I do development on Windows (though my servers are hosted on Linux). Though it's great for dealing with large data sets, modin has some stability issues on Windows right now, so doing development on Windows became a deal breaker. Ray allowed me to get better scaling and performance than standard Python multiprocessing. It achieves this, I believe, by doing something you seem to be doing: using a centralized cache to deal with performance issues related to cross-process serialization. At one point they were using Redis, but right now I believe they have gotten rid of Redis and implemented a custom GCS with optional backing storage (which could still use Redis if you like).
It's a very interesting project that I hope gains traction and support.
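For illustration, a minimal sketch of that Ray pattern for the historical downloads (the ticker list and start date are placeholders, and yahoo_fin's si.get_data is assumed for the per-ticker fetch; this is not code from either project):

```python
# Illustrative sketch: fan the daily-history downloads out across Ray workers.
import ray
from yahoo_fin import stock_info as si

ray.init()  # starts a local Ray cluster; results move via Ray's shared object store

@ray.remote
def fetch_history(ticker: str):
    # Each task runs in its own worker process; the returned DataFrame lands in
    # Ray's object store instead of being serialized back by hand.
    return si.get_data(ticker, start_date="2003-01-01", interval="1d")

tickers = ["AAPL", "MSFT", "GOOG"]  # placeholder symbols
futures = [fetch_history.remote(t) for t in tickers]
frames = ray.get(futures)  # blocks until all downloads finish
for ticker, df in zip(tickers, frames):
    print(ticker, len(df), "rows")
```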