CRT engine fails to read objects larger than the window size with localstack
Describe the bug
Hey, this is strange, and is making me worry that I'm just doing something really dumb, but given that this SDK is in beta, perhaps there is some genuine weirdness going on here.
I am trying to set up a simple read/write S3 service in Spring Boot using the Kotlin AWS SDK, and it seems like there is some really odd truncation going on with the ByteStream class, but only when the ByteStream comes from a downloaded object.
I have two very simple tests:
describe("S3 File Upload Service") {
it("can upload and download a file from S3") {
// arrange
val fileContent = javaClass.classLoader.getResource("all_star.txt")?.readText()
?: error("Where's ma txt file 😡")
val fileKey = "swamp"
// act
fileStorageService.uploadFile(fileKey, fileContent.encodeToByteArray().inputStream())
val result = fileStorageService.downloadFile(fileKey)
// assert
result.contentLength!! shouldBeExactly ByteStream.fromString(result.decodeToString()).contentLength!!
}
it("Pure bytestream test") {
val fileContent = javaClass.classLoader.getResource("all_star.txt")?.readText()
?: error("Where's ma txt file 😡")
val bs = ByteStream.fromString(fileContent)
bs.contentLength!! shouldBeExactly ByteStream.fromString(bs.decodeToString()).contentLength!!
}
}
where fileStorageService implements a very minimal interface:
interface IFileStorageService {
    suspend fun uploadFile(key: String, stream: InputStream): Boolean
    suspend fun downloadFile(key: String): ByteStream
}
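For context, the service is little more than a thin wrapper around the S3 client. The actual implementation isn't included in the issue; the following is a minimal sketch of what it might look like, assuming the Kotlin SDK's block-scoped getObject API and a hypothetical bucketName property (exact API shapes have shifted across the beta releases):

import aws.sdk.kotlin.services.s3.S3Client
import aws.sdk.kotlin.services.s3.model.GetObjectRequest
import aws.sdk.kotlin.services.s3.model.PutObjectRequest
import aws.smithy.kotlin.runtime.content.ByteStream
import aws.smithy.kotlin.runtime.content.fromBytes
import aws.smithy.kotlin.runtime.content.toByteArray
import java.io.InputStream

class S3FileStorageService(
    private val s3: S3Client,       // assumed to be configured elsewhere (e.g. pointed at localstack)
    private val bucketName: String  // hypothetical, not taken from the issue
) : IFileStorageService {

    override suspend fun uploadFile(key: String, stream: InputStream): Boolean {
        val request = PutObjectRequest {
            bucket = bucketName
            this.key = key
            body = ByteStream.fromBytes(stream.readBytes())
        }
        s3.putObject(request)
        return true
    }

    override suspend fun downloadFile(key: String): ByteStream {
        val request = GetObjectRequest {
            bucket = bucketName
            this.key = key
        }
        // The streaming response body is only valid inside the getObject block,
        // so it is materialized to bytes there and re-wrapped before returning.
        val bytes = s3.getObject(request) { response ->
            response.body?.toByteArray() ?: ByteArray(0)
        }
        return ByteStream.fromBytes(bytes)
    }
}

Materializing the body inside the block is a deliberate choice here, since the response stream is scoped to the getObject call; whatever the real service does, the test only sees the ByteStream it returns.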
The weirdness is that in the first test, result.contentLength!! shouldBeExactly ByteStream.fromString(result.decodeToString()).contentLength!! fails, while in the second test, it succeeds (as expected).
In the first test, I get an error:
79823 should be equal to 16384
java.lang.AssertionError: 79823 should be equal to 16384
at my.service.file.storage.S3FileStorageServiceIT$3$1.invokeSuspend(S3FileStorageServiceIT.kt:55)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.internal.ScopeCoroutine.afterResume(Scopes.kt:33)
at kotlinx.coroutines.AbstractCoroutine.resumeWith(AbstractCoroutine.kt:102)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:46)
at kotlinx.coroutines.UndispatchedCoroutine.afterResume(CoroutineContext.kt:142)
at kotlinx.coroutines.AbstractCoroutine.resumeWith(AbstractCoroutine.kt:102)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:46)
at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:106)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
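For what it's worth, 16384 is exactly 16 KiB, which lines up with the window size mentioned in the title: the decoded content appears to be cut off at the first read window even though contentLength reports the full object size. One way to check that directly is to fully drain the downloaded stream and compare the actual byte count against the advertised length; this is a hedged sketch using the smithy-kotlin toByteArray extension, and reportActualSize is a made-up helper, not something from the issue:

import aws.smithy.kotlin.runtime.content.ByteStream
import aws.smithy.kotlin.runtime.content.toByteArray

// Hypothetical diagnostic helper: report how many bytes a stream actually yields
// versus what its contentLength claims.
suspend fun reportActualSize(stream: ByteStream) {
    val bytes = stream.toByteArray()
    println("contentLength=${stream.contentLength}, bytes actually read=${bytes.size}")
}

Run against the result of downloadFile in the failing test, this should show whether the stream really stops at 16384 bytes even though contentLength reports 79823.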
Expected behavior
Decoding a byte stream and then encoding it back to a byte stream should result in, if not completely identical streams, then at least streams with identical content lengths.
Current behavior
It seems to work, except in the case that a ByteStream has been pulled from a getObject request.
Steps to Reproduce
Pretty simple: upload a file to S3 (in my case I'm using localstack to try all of this locally).
The file I'm using to test is the full transcript of Shrek, link here. This isn't a hard requirement for reproducing, but at the same time… it totally is.
Then just try to upload it, download it, and compare the decoded ByteStream to the expected body.
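For completeness, the tests above run against localstack rather than a real bucket, so the client has to be pointed at the local endpoint. A rough sketch of that configuration, assuming a recent SDK where the endpoint can be overridden with endpointUrl (property names and import paths have changed across the beta releases, so treat this as illustrative rather than exact):

import aws.sdk.kotlin.services.s3.S3Client
import aws.smithy.kotlin.runtime.net.url.Url

// Illustrative only: point the client at a local localstack container instead of AWS.
// localstack's default edge port is 4566; credentials are assumed to come from the
// environment (localstack accepts dummy values).
fun localstackS3Client(): S3Client = S3Client {
    region = "us-east-1"
    endpointUrl = Url.parse("http://localhost:4566")
    forcePathStyle = true
}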
Possible Solution
No real idea. I'm wondering if there is any possibility that it has something to do with using localstack, but I would really rather not provision an actual bucket just to compare against my failed local testing.
Context
I’m just a simple man, trying to write to his bucket and read it back.
AWS Kotlin SDK version used
0.11.0-beta
Platform (JVM/JS/Native)
JVM
Operating System and version
macOS Big Sur
Top GitHub Comments
Thanks for confirming. I'll open a ticket with the CRT team to see if they can track down anything, but since it's specific to localstack, no further action will be taken on this right now.
Can confirm that the workaround you posted works for me. Only caveat is I had to use aws.smithy.kotlin:http-client-engine-ktor:0.7.6, as I'm not sure where the snapshot jars get published 😅 I will follow up with info on the real bucket attempt soon.
Really appreciate the help! Very excited to see this SDK moving towards stable 😃
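For reference, the workaround itself isn't quoted in this thread, but the gist from the comments is swapping the default CRT HTTP engine for the Ktor-based one. The dependency side of that, in Gradle Kotlin DSL using the coordinate mentioned in the comment above, would look roughly like the sketch below; how the engine is then passed to the client config depends on the SDK version, so that part is left out.

// build.gradle.kts -- sketch of the dependency side of the workaround: pull in the
// Ktor-based HTTP engine instead of relying on the default CRT engine.
dependencies {
    implementation("aws.smithy.kotlin:http-client-engine-ktor:0.7.6")
}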