bug: Badly formatted response/Unable to transform data from server errors in TRPC client
Provide environment information
System:
OS: macOS 12.6
CPU: (8) arm64 Apple M1 Pro
Memory: 167.42 MB / 16.00 GB
Shell: 5.8.1 - /bin/zsh
Binaries:
Node: 16.17.0 - ~/.nvm/versions/node/v16.17.0/bin/node
Yarn: 1.22.19 - ~/.nvm/versions/node/v16.17.0/bin/yarn
npm: 9.2.0 - ~/.nvm/versions/node/v16.17.0/bin/npm
Browsers:
Brave Browser: 108.1.46.134
Safari: 16.0
Describe the bug
I have discussed the issue a bit here: https://github.com/trpc/trpc/discussions/3437
I am using an Expo React Native frontend and an AWS Lambda backend with the aws-lambda adapter.
I occasionally receive `Badly formatted Response from Server` errors when using the TRPC client.
Furthermore, I noticed that when hitting at least two different endpoints in one component (e.g. two different `useQuery` calls), the error occurs almost 100% of the time.
I have verified that the data from the server is actually correct by hitting the endpoint without trpc.
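For context, a typical tRPC v10 React Query client for an Expo app pointed at a Lambda endpoint looks roughly like the sketch below; the router import path and URL are placeholders, not copied from the repro.

```ts
import { createTRPCReact } from '@trpc/react-query';
import { httpBatchLink } from '@trpc/client';
// Placeholder import; the actual AppRouter type lives in the repro's server package.
import type { AppRouter } from '../server/router';

export const trpc = createTRPCReact<AppRouter>();

export const trpcClient = trpc.createClient({
  links: [
    httpBatchLink({
      // Placeholder URL; in the repro this is the deployed API Gateway endpoint.
      url: 'https://<api-id>.execute-api.<region>.amazonaws.com/prod',
    }),
  ],
});
```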
Link to reproduction
https://github.com/jacksonludwig/trpc-repro
To reproduce
The reproduction is a bare-bones CDK app using TRPC’s lambda adapter, so unfortunately you need an AWS account set up to test it unless you can figure out how to run it locally. (A sketch of the adapter handler follows these steps.)
- Run `git clone https://github.com/jacksonludwig/trpc-repro.git`.
- Run `npm i` in `server` and `npm i --legacy-peer-deps` in `client`.
- Run `npx cdk deploy --all --require-approval never` in `server`.
- Replace the API endpoint in `App.tsx` with the REST API endpoint from the server’s deploy output.
- Run `npm start` in the client.
- Open the app in a simulator, or on a real device by scanning the QR code with the Expo Go app.
- Observe that some requests fail with `Badly formatted Response from Server` or `Could not transform Response from server`.
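For reference, the server side of a setup like this (tRPC’s aws-lambda adapter behind API Gateway) looks roughly like the following; the router and procedure here are hypothetical, not the repro’s actual code.

```ts
import { initTRPC } from '@trpc/server';
import { awsLambdaRequestHandler } from '@trpc/server/adapters/aws-lambda';

const t = initTRPC.create();

// Hypothetical procedure; the repro's actual router will differ.
const appRouter = t.router({
  greeting: t.procedure.query(() => 'hello from lambda'),
});

export type AppRouter = typeof appRouter;

// Lambda entry point that API Gateway invokes (wired up by the CDK stack).
export const handler = awsLambdaRequestHandler({
  router: appRouter,
  createContext: () => ({}),
});
```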
Additional information
The error in the repro is `Unable to transform data from server` instead of `Badly formatted response from server` as in my original discussion. I’m not sure why this is, but it looks similar enough that I would assume the cause is the same.
👨👧👦 Contributing
- 🙋♂️ Yes, I’d be down to file a PR fixing this bug!
Issue Analytics
- Created 9 months ago
- Comments: 8 (4 by maintainers)
My advice would be to use a single endpoint with batching until you hit any actual problems with that setup.
The overall load will be smaller since you only need to resolve context etc. once per batch, and you can do smart optimizations like deduping DB queries and using dataloaders in your backend.
If you ever hit problems with this, you can split services into their own API endpoints; there’s little to no actual value in doing this on a per-procedure basis. You’ll have more cold starts, heavier load on your DB (which is usually the real bottleneck), and additionally it’s more work to glue the infra together.
Less is more.
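To illustrate the “split by service, not by procedure” suggestion, a client could route a subset of procedures to a second endpoint with `splitLink` while keeping batching everywhere; the condition and URLs below are hypothetical.

```ts
import { httpBatchLink, splitLink } from '@trpc/client';

// Default: one batched endpoint for everything. Only the (hypothetical)
// "billing" service gets its own endpoint, and it is still batched.
const links = [
  splitLink({
    condition: (op) => op.path.startsWith('billing.'),
    true: httpBatchLink({ url: 'https://billing.example.com/trpc' }),
    false: httpBatchLink({ url: 'https://api.example.com/trpc' }),
  }),
];
```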
I no longer think this is TRPC’s issue. It’s just that batching does not make sense when creating an API using multiple lambdas on API Gateway.
Thanks for your help; I think this issue can be closed in that case.
For the record, this is the sort of architecture that I am trying to achieve but that I think does not allow batching (unless I am misunderstanding how it works).
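For a multi-Lambda setup like the one described above, the usual way to opt out of batching is `httpLink` instead of `httpBatchLink`, so each procedure call becomes its own HTTP request. A minimal sketch, with a placeholder URL:

```ts
import { httpLink } from '@trpc/client';

// With one Lambda per service behind API Gateway, a batched request would
// only ever reach a single function, so send one request per call instead.
const links = [
  httpLink({
    // Placeholder endpoint.
    url: 'https://<api-id>.execute-api.<region>.amazonaws.com/prod',
  }),
];
```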