Add benchmarks
Branched off of https://github.com/dedupeio/dedupe/issues/965#issuecomment-1046319666.
EDIT: See next comment for using ASV instead of @profile
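For reference, an ASV suite is just a Python file of benchmark classes whose `time_*` and `peakmem_*` methods ASV discovers and measures. A minimal sketch (the class name and workload are hypothetical, not from dedupe's codebase):

```python
# benchmarks/benchmarks.py -- minimal ASV benchmark sketch.
# The time_/peakmem_ method-name prefixes are ASV conventions;
# the pairing workload here is a made-up stand-in.

class TimeDedupe:
    def setup(self):
        # Build a small synthetic dataset before each benchmark run.
        self.records = [{"name": f"record {i}"} for i in range(1000)]

    def time_pairing(self):
        # ASV times the body of every time_* method.
        [(a, b) for a in self.records[:50] for b in self.records[:50]]

    def peakmem_pairing(self):
        # ASV reports peak memory usage for peakmem_* methods.
        [(a, b) for a in self.records[:50] for b in self.records[:50]]
```

Running `asv run` would then time and memory-profile these without any decorators in the library code itself.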
Place @profile decorators from memory_profiler on bottleneck functions.
List this dependency as extra, so that most users don’t need to install it.
Also, to prevent overhead from @profile getting run always (even when we don’t want profiling), wrap it in our own custom decorator that is usually a noop:
import os

def dd_profile(func):
    # Maybe a better way to configure this? Would have to be at import time
    if os.environ.get("DEDUPE_PROFILE"):
        # Actually add the profiler wrapper; memory_profiler is an optional extra
        from memory_profiler import profile
        return profile(func)
    else:
        # noop
        return func
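Applied to a function, the decorator is a pass-through unless the environment variable is set. A quick self-contained check (the decorated function here is hypothetical):

```python
import os

def dd_profile(func):
    # No-op unless profiling is explicitly enabled via the environment.
    if os.environ.get("DEDUPE_PROFILE"):
        from memory_profiler import profile  # optional extra dependency
        return profile(func)
    return func

@dd_profile
def pair_records(a, b):
    return list(zip(a, b))

# With DEDUPE_PROFILE unset, the original function runs with zero overhead.
print(pair_records([1, 2], [3, 4]))  # [(1, 3), (2, 4)]
```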
The next step is probably to make a new branch and apply this to some of the examples.
Issue Analytics
- Created 2 years ago
- Comments: 6 (6 by maintainers)
Top GitHub Comments
I think precision and recall are really still the best ones.
Look at canonical.py in tests to see how precision and recall are calculated there.
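The pairwise precision/recall computation referred to above boils down to comparing sets of predicted and true duplicate pairs. A sketch under that assumption (the function name and sample data are illustrative, not copied from canonical.py):

```python
def evaluate(found_dupes, true_dupes):
    """Pairwise precision and recall over sets of duplicate pairs."""
    true_positives = found_dupes & true_dupes
    # Precision: fraction of predicted pairs that are real duplicates.
    precision = len(true_positives) / len(found_dupes)
    # Recall: fraction of real duplicate pairs that were found.
    recall = len(true_positives) / len(true_dupes)
    return precision, recall

# Pairs stored as frozensets so (a, b) and (b, a) compare equal.
found = {frozenset({"a", "b"}), frozenset({"a", "c"})}
true = {frozenset({"a", "b"}), frozenset({"b", "c"})}
print(evaluate(found, true))  # (0.5, 0.5)
```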