Coursier could cache jar validation more aggressively
Related to https://github.com/sbt/sbt/issues/5508, where I described a problem with sbt+coursier in a multi-module scenario. Basically, running sbt update in a multi-module project takes a lot of time because coursier validates the same dependencies several times (once per module, AFAICT).
One approach to fixing this would be to cache validation results more aggressively in coursier and skip validation of already-validated files…
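A rough illustration of the idea, for discussion only: remember which files have already passed checksum validation so that later resolutions in the same build can skip re-hashing them. The names here (ValidationCache, validate) are hypothetical and not part of coursier's API.

```scala
// Hypothetical sketch: cache successful checksum validations per JVM so that
// repeated resolutions (one per module) don't re-hash the same jars.
import java.io.File
import java.nio.file.Files
import java.security.MessageDigest
import java.util.concurrent.ConcurrentHashMap

object ValidationCache {
  // absolute path -> hex SHA-1 that was already verified
  private val validated = new ConcurrentHashMap[String, String]()

  private def sha1Hex(file: File): String =
    MessageDigest.getInstance("SHA-1")
      .digest(Files.readAllBytes(file.toPath))
      .map("%02x".format(_))
      .mkString

  /** Returns true if `file` matches `expectedSha1`, hashing each file at most once per JVM. */
  def validate(file: File, expectedSha1: String): Boolean = {
    val key    = file.getAbsolutePath
    val cached = validated.get(key)
    if (cached != null) cached == expectedSha1
    else {
      val actual = sha1Hex(file)
      if (actual == expectedSha1) validated.put(key, actual)
      actual == expectedSha1
    }
  }
}
```

In an actual fix this would presumably live in coursier's cache layer, or be persisted next to the jar as a maintainer suggests below, rather than in a process-wide map; the sketch only shows the shape of the optimization.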
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Oh yeah sorry, I haven’t had as much time as I anticipated. I implemented this but wanted to run some benchmarks before I PRed it, especially with different locking mechanisms. I’ll PR it so you can have a look
@oyvindberg There’s already a concept of “auxiliary files” in FileCache, which correspond to files kept alongside another one. For example, if we get a SHA-1 via an HTTP header when downloading foo/name, it’s kept in foo/.name.sha1. This particular auxiliary file is written around here. When a changing artifact (snapshots, typically) is re-downloaded, its auxiliary files are wiped.

Checksums are currently computed around here. You may want to write an auxiliary file there (with an extension along the lines of .sha1.computed, for example), or read an existing one.

One thing to pay attention to is concurrency: you need to hold a lock on the main file when writing an auxiliary file.