Compilation performance bottleneck
See original GitHub issue

I have disabled formatOnCompile, yet if I do a no-op compile of my project in a warmed-up sbt, this is the bottleneck (backtraces, in CSV format), taking around 20 seconds across all projects / test configs.
Is anybody able to confirm?
"Reverse Call Tree","Time (ms)","Level"
"com.lucidchart.scalafmt.impl.ScalafmtFactory.fromConfig(String) ScalafmtFactory.scala","76016","1"
"com.lucidchart.scalafmt.impl.ScalafmtFactory.fromConfig(String) ScalafmtFactory.scala:9","","2"
"com.lucidchart.sbt.scalafmt.ScalafmtCorePlugin$autoImport$.$anonfun$scalafmtCoreSettings$26(Tuple4) ScalafmtCorePlugin.scala:154","","3"
"com.lucidchart.sbt.scalafmt.ScalafmtCorePlugin$autoImport$$$Lambda$4957.apply(Object)","","4"
"scala.Function1$$Lambda$610.apply(Object)","","5"
"sbt.internal.util.$tilde$greater.$anonfun$$u2219$1($tilde$greater, Function1, Predef$$less$colon$less, Object) TypeFunctions.scala:42","","6"
"sbt.internal.util.$tilde$greater$$Lambda$2274.apply(Object)","","7"
"sbt.std.Transform$$anon$4.work(Object) System.scala:64","","8"
"sbt.Execute.$anonfun$submit$2(Node, Object) Execute.scala:257","","9"
"sbt.Execute$$Lambda$2291.apply()","","10"
"sbt.internal.util.ErrorHandling$.wideConvert(Function0) ErrorHandling.scala:16","","11"
"sbt.Execute.work(Object, Function0, CompletionService) Execute.scala:266","","12"
"sbt.Execute.$anonfun$submit$1(Execute, Object, CompletionService, Node, Object) Execute.scala:257","","13"
"sbt.Execute$$Lambda$2282.apply()","","14"
"sbt.ConcurrentRestrictions$$anon$4.$anonfun$submitValid$1(ConcurrentRestrictions$$anon$4, Object, Function0) ConcurrentRestrictions.scala:167","","15"
"sbt.ConcurrentRestrictions$$anon$4$$Lambda$2289.apply()","","16"
"sbt.CompletionService$$anon$2.call() CompletionService.scala:32","","17"
"java.lang.Thread.run() Thread.java:748","","18"
Issue Analytics
- State:
- Created 6 years ago
- Comments: 10 (10 by maintainers)
Top GitHub Comments
Seems pretty trivial to add a cache here: https://github.com/lucidsoftware/neo-sbt-scalafmt/blob/master/scalafmt-impl/src/scalafmt-1.0/main/scala/com/lucidchart/scalafmt/impl/ScalafmtFactory.scala#L11
Does sbt have some general-purpose cache implementation that could be used? Or maybe Guava is already a dependency; its CacheBuilder is pretty comprehensive, if a little untyped.
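To make that suggestion concrete, here is a minimal sketch of memoizing a config-parsing call with Guava's CacheBuilder, keyed on the raw configuration string. ParsedConfig, parse, and ConfigCache are hypothetical placeholders; the real expensive call is ScalafmtFactory.fromConfig in the file linked above.

```scala
import com.google.common.cache.{CacheBuilder, CacheLoader, LoadingCache}

// Hypothetical stand-in for the result of the expensive parse the profile points at.
final case class ParsedConfig(raw: String)

object ConfigCache {
  // Placeholder for the real work (in the plugin this would be the Scalafmt
  // config parse that currently runs on every compile).
  private def parse(config: String): ParsedConfig = ParsedConfig(config)

  // Small LoadingCache keyed on the config string: repeated no-op compiles
  // with an unchanged .scalafmt.conf reuse the previously parsed result.
  private val cache: LoadingCache[String, ParsedConfig] =
    CacheBuilder.newBuilder()
      .maximumSize(8)
      .build[String, ParsedConfig](new CacheLoader[String, ParsedConfig] {
        override def load(config: String): ParsedConfig = parse(config)
      })

  def fromConfig(config: String): ParsedConfig = cache.get(config)
}
```

A plain java.util.concurrent.ConcurrentHashMap with computeIfAbsent would also do, but CacheBuilder gives size bounds and eviction for free if several configurations are in play across projects.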
I think caching that method (if possible) is the way to go. I cannot add much more to this discussion; I don’t have experience with this library. Maybe someone can have a look at this in our next Scala Center spree at Lambda World if you don’t have time to open a PR.