Unable to build repo with co-dependent workspaces
Describe the bug
When I run yarn build, I get no output and at some point Node crashes due to OOM.
╰─ time yarn build
<--- Last few GCs --->
[322:0x609d110] 892070 ms: Scavenge (reduce) 4090.3 (4095.9) -> 4089.3 (4096.9) MB, 7.0 / 0.0 ms (average mu = 0.294, current mu = 0.276) allocation failure
[322:0x609d110] 892078 ms: Scavenge (reduce) 4090.3 (4095.9) -> 4089.3 (4096.9) MB, 6.6 / 0.0 ms (average mu = 0.294, current mu = 0.276) allocation failure
[322:0x609d110] 892141 ms: Scavenge (reduce) 4090.3 (4095.9) -> 4089.3 (4096.9) MB, 7.1 / 0.0 ms (average mu = 0.294, current mu = 0.276) allocation failure
<--- JS stacktrace --->
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
1: 0xa04200 node::Abort() [/usr/bin/node]
2: 0x94e4e9 node::FatalError(char const*, char const*) [/usr/bin/node]
3: 0xb7860e v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [/usr/bin/node]
4: 0xb78987 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/usr/bin/node]
5: 0xd33215 [/usr/bin/node]
6: 0xd33d9f [/usr/bin/node]
7: 0xd41e2b v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/usr/bin/node]
8: 0xd444e5 v8::internal::Heap::HandleGCRequest() [/usr/bin/node]
9: 0xceab27 v8::internal::StackGuard::HandleInterrupts() [/usr/bin/node]
10: 0x1059cca v8::internal::Runtime_StackGuardWithGap(int, unsigned long*, v8::internal::Isolate*) [/usr/bin/node]
11: 0x1400039 [/usr/bin/node]
yarn build 930.00s user 22.41s system 106% cpu 14:55.61 total
To Reproduce
I’m currently assuming this is caused by workspace A depending on workspace B and vice versa.
I assume this could be reproduced with the example in this repo. I haven’t had time to try that yet.
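For reference, the co-dependency I mean looks roughly like this (hypothetical workspace names; the `workspace:` protocol is an assumption about the setup, not copied from my repo):

```json
// packages/pkg-a/package.json
{
  "name": "pkg-a",
  "version": "1.0.0",
  "dependencies": { "pkg-b": "workspace:*" }
}

// packages/pkg-b/package.json
{
  "name": "pkg-b",
  "version": "1.0.0",
  "dependencies": { "pkg-a": "workspace:*" }
}
```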
Expected behavior
Even with circular dependencies, each project should still be built only once.
In my case, I don’t even need projects to be built in topological order at all. I only compile TypeScript in my builds, and every build reads only the current source of the other workspaces; the build artifacts are only relevant at runtime.
So it would be fine to just run a build in all “dirty” workspaces at the same time.
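Neither of these is a fix, but two stopgaps I’m considering (assumptions: the OOM happens in the build orchestrator process itself, and Yarn 2+ with the workspace-tools plugin is available for the second command):

```shell
# 1) Raise Node's V8 heap limit so the current build can finish.
#    8192 MB is an arbitrary value; size it to the machine's RAM.
NODE_OPTIONS="--max-old-space-size=8192" yarn build

# 2) Run every workspace's build in parallel, without topological
#    ordering, bypassing yarn.build's scheduler entirely.
yarn workspaces foreach --parallel run build
```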
Desktop (please complete the following information):
- OS: Windows/WSL
Issue Analytics
- State: closed
- Created: 3 years ago
- Comments: 5 (5 by maintainers)

I’m going to close this for now (though feel free to reopen if need be).
yarn.build expects your dependencies to form a directed acyclic graph. In general, if you have control over it, keeping your dependencies a DAG is much easier in the long run.
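To illustrate why a cycle is a problem for any topological scheduler (a sketch of the general technique, not yarn.build’s actual implementation): a Kahn-style scheduler only ever starts packages whose dependencies are all already built, and in a cycle no such package ever exists, so the scheduler can never make progress.

```javascript
// Sketch of topological build scheduling with cycle detection.
// deps maps each package to the packages it depends on.
function buildOrder(deps) {
  const order = [];
  const remaining = new Map(
    Object.entries(deps).map(([pkg, ds]) => [pkg, new Set(ds)]));
  while (remaining.size > 0) {
    // Packages whose dependencies have all been built are "ready".
    const ready = [...remaining.keys()].filter(
      pkg => [...remaining.get(pkg)].every(d => !remaining.has(d)));
    if (ready.length === 0) {
      // Every remaining package waits on another remaining one: a cycle.
      return { order, stuck: [...remaining.keys()] };
    }
    for (const pkg of ready) {
      order.push(pkg);
      remaining.delete(pkg);
    }
  }
  return { order, stuck: [] };
}

// A DAG schedules cleanly:
console.log(buildOrder({ a: ["b"], b: [] }));
// → { order: [ 'b', 'a' ], stuck: [] }

// Co-dependent workspaces never become ready:
console.log(buildOrder({ a: ["b"], b: ["a"] }));
// → { order: [], stuck: [ 'a', 'b' ] }
```

A real scheduler would likely detect the cycle up front rather than stalling; the point is only that the build graph offers no valid starting point once a cycle exists.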
While I also commented on the other issue, I wanted to add a thought here too. I realized late last night that the topological build is pretty core to yarn.build’s behavior. I assume this is due to the bundling that yarn.build also provides, which probably implies a reliance on build output.
In my scenario, where I build purely for local debugging, the build output is not relevant to the dependents, so a topological build actually slows the process down unnecessarily.
I’d understand if a fully parallel build of all targets doesn’t fit the design of yarn.build, but I’d consider it very beneficial to be able to control this aspect of the build.
I’m not sure whether this information could be deduced from the projects themselves, whether it would require a switch, or whether it’s even feasible to support both.