[performance] feed_to_graph_path is slow on larger feeds
See original GitHub issue

test_feed_to_graph_path itself is the slowest test by far. Create benchmarks and identify which steps are slowest. Find ways to speed up operations and make the graph creation process as fast as possible.
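Since the ask is to benchmark and find the slowest steps, here is a minimal profiling sketch (mine, not from the issue). It assumes peartree's get_representative_feed / load_feed_as_graph entry points and uses a hypothetical GTFS zip path and time window; swap in a real feed to reproduce.

```python
# Minimal profiling sketch: times the graph-creation path and reports the
# hottest internal calls. The GTFS zip path and time window below are
# hypothetical placeholders.
import cProfile
import pstats
import time

import peartree as pt

GTFS_ZIP = "data/some_agency_gtfs.zip"   # hypothetical path, replace with a real feed
START, END = 7 * 60 * 60, 10 * 60 * 60   # 7:00-10:00 AM window, in seconds

feed = pt.get_representative_feed(GTFS_ZIP)

profiler = cProfile.Profile()
t0 = time.perf_counter()
profiler.enable()
G = pt.load_feed_as_graph(feed, START, END)
profiler.disable()

print(f"Graph built in {time.perf_counter() - t0:.1f}s "
      f"({G.number_of_nodes()} nodes, {G.number_of_edges()} edges)")

# Show the 15 functions with the most cumulative time to see which steps dominate.
stats = pstats.Stats(profiler).sort_stats("cumulative")
stats.print_stats(15)
```

Sorting by cumulative time shows which internal calls dominate, which is enough to decide where optimization effort should go before making any code changes.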
Issue Analytics
- State:
- Created 6 years ago
- Comments: 11 (10 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Huge performance gain found right here: https://github.com/kuanb/peartree/issues/87
(Thank you @yiyange)
LA Metro (without digging around for the exact numbers) used to take 12-15 minutes. It now takes:
- Without MP: 231s
- With MP: 229s
So, no observable improvement from MP. Of course, it's running in a Docker environment that only has access to 2 CPUs on my 2016 MacBook Pro. A better test would be to use a virtual machine on AWS / GCloud or wherever and see what gains are achieved there.
That said, we can observe that the gains from MP are essentially unobservable for the typical user/use case (local machine, in a notebook-like environment). This is something that should be addressed long term.
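The flat MP result above is attributed to the container only seeing 2 CPUs. Before re-testing on a cloud VM, a quick sanity check like the following sketch (not from the thread) confirms how much parallelism worker processes actually get inside the container.

```python
# Sanity check: how many CPUs are actually available to worker processes
# inside the container? On the 2-CPU Docker setup described above, this
# would likely report 2, which is consistent with MP showing no gain.
import os

print("os.cpu_count():", os.cpu_count())

# sched_getaffinity is Linux-only (fine inside a Linux container); it reports
# the CPUs this process is allowed to run on, which can be fewer than the
# host total when the container is CPU-restricted.
if hasattr(os, "sched_getaffinity"):
    print("usable CPUs:", len(os.sched_getaffinity(0)))
```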