Batching `./corrections` for performance
Performance is obviously great 😄 but just wanted to start a discussion.
I noticed a lot of time is spent on the corrections. unTag is called surprisingly often (6th place in nlp, and 1st place in lookups). I'm probably wrong, but is this called because compromise changes its mind about tags after they were added on the first pass?

Obviously these times are pretty tiny, but curious if there's any more room to improve!
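For context, unTag is also part of compromise's public API: retagging a term generally means removing the old tag and then adding the new one, and the internal correction pass presumably does something similar for every rule that fires, which would explain why unTag shows up so high in the profile. A minimal sketch using the public API (the sentence and tags here are purely illustrative):

```ts
import nlp from 'compromise'

// Parse a sentence where a word is ambiguous between Noun and Verb.
const doc = nlp('we saw her duck behind the car')

// Retagging by hand mirrors what a correction rule has to do:
// drop the tag from the first pass, then apply the corrected one.
doc.match('duck')
  .unTag('Noun')
  .tag('Verb')

console.log(doc.match('#Verb').out('array'))
```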
Issue Analytics
- Created: 4 years ago
- Comments: 9 (9 by maintainers)
Top GitHub Comments
efrt got me thinking in this direction! Feels really close to compress all 1s and 0s to 0…then worry about uncompressing 😂
Yeh I think it’s a question of: optimise the matching at build, optimise before running (so again, the data has to be stored in a way so we can cut things out fast), then maybe what we’re left with we do more directly, rather than using the user-focused .match.

Yeh that would make sense, haven’t had a chance to look yet. But that might be a good place to start - store the conflicts in a map so we can skip some steps.
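Here's one way the "store the conflicts in a map" idea could look, sketched generically rather than against compromise's actual internals: precompute, for each tag, the set of tags it conflicts with, so that applying a tag clears the conflicting ones with a single lookup and batches the removals, instead of issuing a separate unTag-style call per conflict. The names conflicts and applyTag (and the tag relationships) are hypothetical.

```ts
// Hypothetical sketch of a precomputed conflict map; not compromise's real data model.
type TagName = string

// Built once at startup from the tag set, e.g. Verb conflicts with Noun/Adjective.
const conflicts: Map<TagName, Set<TagName>> = new Map([
  ['Verb', new Set(['Noun', 'Adjective'])],
  ['Noun', new Set(['Verb'])],
])

interface Term {
  text: string
  tags: Set<TagName>
}

// Apply a tag and drop everything it conflicts with in one pass,
// instead of calling an unTag routine once per conflicting tag.
function applyTag(term: Term, tag: TagName): void {
  const clash = conflicts.get(tag)
  if (clash) {
    for (const t of clash) {
      term.tags.delete(t)
    }
  }
  term.tags.add(tag)
}

const term: Term = { text: 'duck', tags: new Set(['Noun']) }
applyTag(term, 'Verb')
console.log([...term.tags]) // ['Verb']
```

The win, if any, would come from doing the conflict lookup once per tag application and skipping the general-purpose matching path entirely for these known cases.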