force:source:push -- ERROR: Maximum size of request reached. Maximum size of request is 52428800 bytes
Summary
Trying to use force:source:push to deliver metadata and receiving an error that the request is too large.
Steps To Reproduce:
Add a significant amount of metadata, most easily static resource files. Static resources have an individual max of 5MB so you’ll need a few of them. The compressed total size should be > 40MB to be safe.
Try to deliver that metadata to an org using sfdx force:source:push.
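The setup step above can be scripted. A minimal sketch, assuming a standard SFDX project layout; the file names and count are illustrative, and random bytes are used because the 50MB limit applies to the compressed package, so the payload must resist zip compression:

```python
import os

# Hypothetical sketch: generate ~45MB of static resources in a standard
# SFDX project layout so the deploy package exceeds the request limit.
RES_DIR = "force-app/main/default/staticresources"
os.makedirs(RES_DIR, exist_ok=True)

META = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<StaticResource xmlns="http://soap.sforce.com/2006/04/metadata">\n'
    "    <cacheControl>Private</cacheControl>\n"
    "    <contentType>application/octet-stream</contentType>\n"
    "</StaticResource>\n"
)

for i in range(9):  # 9 files x 5MB = 45MB, past the ~39MB on-disk ceiling
    with open(os.path.join(RES_DIR, f"big{i}.resource"), "wb") as f:
        f.write(os.urandom(5 * 1024 * 1024))  # random data barely compresses
    with open(os.path.join(RES_DIR, f"big{i}.resource-meta.xml"), "w") as f:
        f.write(META)

# Then run: sfdx force:source:push
```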
Expected result
The push would succeed, handling the various size limitations without crashing.
DX already pre-compresses the static resources into zip files and stores them in the temp directory. It might be appropriate to have a flag or other configuration that authorizes delivering the static resources in appropriately sized chunks BEFORE the main deployment. Static resources have no external dependencies that I know of, so they are an ideal candidate to pre-load before the remainder of the metadata.
This does open the scratch org up to partial success and a somewhat unpredictable state. I think that is a reasonable trade-off, and also why I recommend this be a behavioral flag instead of default behavior.
Actual result
The push fails and offers no reasonable recourse for using this mechanism to deliver configuration.
Additional information
This is a specific, documented limit (https://developer.salesforce.com/docs/atlas.en-us.api_meta.meta/api_meta/meta_deploy.htm): the base64-encoded size of the deploy package cannot exceed 50MB, which works out to roughly 39MB on disk.
In my case I have roughly 4MB of metadata configuration files (apex, aura, lwc, object, etc…) and 36MB of static resources.
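The arithmetic behind the ~39MB on-disk figure, and why the ~40MB total above fails: base64 encodes every 3 input bytes as 4 output bytes, so the largest zip that fits under the 52,428,800-byte request limit is three quarters of it. A quick check:

```python
import base64

REQUEST_LIMIT = 52_428_800  # the 50MB limit from the error message

# base64 turns every 3 input bytes into 4 output characters,
# so the largest zip that fits is 3/4 of the request limit.
max_zip_bytes = REQUEST_LIMIT * 3 // 4
print(max_zip_bytes)  # 39321600 bytes, i.e. ~39MB on disk

# Sanity check: encoding exactly that many bytes lands on the limit.
encoded = base64.b64encode(b"\x00" * max_zip_bytes)
print(len(encoded))  # 52428800
```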
SFDX CLI Version (to find the version of the CLI engine, run sfdx --version):
sfdx-cli/7.8.1-8f830784cc win32-x64 node-v10.15.3
SFDX plugin Version (to find the version of the CLI plugin, run sfdx plugins --core):
@oclif/plugin-commands 1.2.2 (core)
@oclif/plugin-help 2.1.6 (core)
@oclif/plugin-not-found 1.2.2 (core)
@oclif/plugin-plugins 1.7.8 (core)
@oclif/plugin-update 1.3.9 (core)
@oclif/plugin-warn-if-update-available 1.7.0 (core)
@oclif/plugin-which 1.0.3 (core)
@salesforce/sfdx-trust 3.0.2 (core)
analytics 1.1.2 (core)
generator 1.1.0 (core)
salesforcedx 45.16.0 (core)
├─ force-language-services 45.12.0 (core)
└─ salesforce-alm 45.18.0 (core)
sfdx-cli 7.8.1 (core)
sfdx-typegen 0.6.0 (link) C:\Users\aheber\dev\sfdx-typegen
OS and version: Windows 10
Issue Analytics
- Created 4 years ago
- Comments: 7 (2 by maintainers)
Top GitHub Comments
For any future readers: I’ve built an SFDX plugin to help handle this. You can deploy all static resources via the Tooling API. This lets us avoid pushing ALL content at once and bypass the error.
https://github.com/aheber/sfdx-heber#sfdx-heberstaticresourcesdeploy--c--r--v-string--u-string---apiversion-string---json---loglevel-tracedebuginfowarnerrorfataltracedebuginfowarnerrorfatal
From there we .forceignore all static resources temporarily during the initial push, and that gets us off the ground. As an added bonus, we went from a 7+ minute static resource deployment to ~30 seconds.
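The forceignore step described above can be sketched as follows. This is a hypothetical outline only: the ignore pattern is illustrative, and the plugin command name is inferred from the linked README rather than confirmed:

```python
# Hypothetical sketch of the workaround: temporarily ignore static
# resources during the initial push, then deploy them separately.
IGNORE_FILE = ".forceignore"
PATTERN = "**/staticresources/**"  # illustrative ignore pattern

with open(IGNORE_FILE, "a") as f:
    f.write(PATTERN + "\n")

# 1. sfdx force:source:push              (everything except static resources)
# 2. sfdx heber:staticresources:deploy   (command inferred from the linked plugin)
# 3. remove the pattern from .forceignore afterwards
```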
@clairebianchi I want to make a case for this. DX should do something other than fall over because I have too much code, static resources, etc. #110 still only helps as a workaround to the limit described here.
Is this a case of “the team doesn’t consider this a problem,” or of “they are unable to take on the work to fix this as an unsupported edge case”? If the latter, could this be a backlog item somewhere instead of being closed?