Allow custom object key for artifacts instead of md5 hash
Describe your idea/feature/enhancement
I don’t know if it’s a bug or not, but the S3 object key of the artifacts generated by `sam package` is the MD5 hash of the zip file. According to the documentation, the name of the uploaded zip file should be the same as the source folder, but I can’t get it to work that way.
I need this because I want to reference the uploaded object from a data source in Terraform (right now I’m parsing the output of `sam package`).
Furthermore, since the two keys are different (different hashes), `sam package` creates a new object instead of creating a new version of the previous artifact.
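For context, the key assigned by `sam package` appears to be the hex MD5 digest of the zipped artifact's bytes. A minimal sketch of how that key could be reproduced locally (the `prefix` parameter and function name here are illustrative, not part of SAM):

```python
import hashlib


def artifact_key(zip_path: str, prefix: str = "") -> str:
    """Reproduce the S3 key that `sam package` appears to use:
    the hex MD5 digest of the zipped artifact's bytes."""
    md5 = hashlib.md5()
    with open(zip_path, "rb") as f:
        # Read in chunks so large zip files don't load fully into memory.
        for chunk in iter(lambda: f.read(8192), b""):
            md5.update(chunk)
    digest = md5.hexdigest()
    return f"{prefix}/{digest}" if prefix else digest
```

Because the digest changes whenever the zip's bytes change, any rebuild that alters the archive produces a brand-new key, which is why S3 versioning never kicks in.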
Issue Analytics
- State:
- Created 2 years ago
- Reactions: 4
- Comments: 6 (2 by maintainers)
Actually I use SAM only for building and uploading artifacts, and Terraform to attach them to a function, so I don’t think `sam delete` would work in my case since I have no stack. What about storing the `name` (or something else that doesn’t change between deploys) of the object/package in `build.toml`? When `sam package` is run, it would get the latest version of the object with key `prefix/name` (I think that currently `name` is the hash of the built package) and compare its MD5 hash with the one stored in `build.toml`. If they are equal, skip the upload; otherwise do it.

Hey @rpf3, sorry for the late reply. Unfortunately I didn’t come up with any solution. I only created a script in my `package.json` that deletes all the artifacts with a given prefix using the AWS CLI, and I run it manually.
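The skip-upload idea proposed above could be sketched roughly like this, assuming some record of the previously uploaded hash (the function names and the idea of reading such a value from `build.toml` are hypothetical here; SAM does not actually store it):

```python
import hashlib
from typing import Optional


def file_md5(path: str) -> str:
    """Hex MD5 digest of a file's contents, read in chunks."""
    md5 = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            md5.update(chunk)
    return md5.hexdigest()


def should_upload(zip_path: str, recorded_hash: Optional[str]) -> bool:
    """Skip the upload when the local package hash matches the hash
    recorded from the previous deploy; upload otherwise (including
    on the very first deploy, when no hash has been recorded yet)."""
    return recorded_hash is None or file_md5(zip_path) != recorded_hash
```

With a stable `prefix/name` key plus this check, an unchanged package would neither be re-uploaded nor create a new S3 object, and a changed one would become a new version of the same key.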