
Unable to run AWS CLI command from Dockerfile

See original GitHub issue

Troubleshooting:

I have the following Dockerfile:

FROM node:14.17.0-slim as backend-assets

RUN apt-get update ***

# aws cli
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" && unzip awscliv2.zip
RUN ./aws/install && aws --version

ARG AWS_ACCESS_KEY_ID
ARG AWS_SECRET_ACCESS_KEY
ARG AWS_SESSION_TOKEN

WORKDIR /root/backend

RUN aws s3 cp s3://XXXX assets --recursive
....

I used to build it locally by injecting the AWS credential ARGs as follows:

DOCKER_BUILDKIT=1 docker build --target backend-assets -t testing \
 --build-arg AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
 --build-arg AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
 --build-arg AWS_SESSION_TOKEN=$AWS_SESSION_TOKEN --rm --progress=plain .

and then the following instruction works well, so all files are downloaded from the S3 bucket.

RUN aws s3 cp s3://XXXX assets --recursive

But when I tried to automate it in GitHub Actions using docker/build-push-action@v2 as follows:

- name: Build and Push backend-assets image
  uses: docker/build-push-action@v2
  with:
    context: ./backend
    push: true
    target: backend-assets
    tags: ${{ steps.login-ecr.outputs.registry }}/backend-assets:${{ steps.set-commit-sha.outputs.sha_short }}
    cache-from: type=registry,ref=${{ steps.login-ecr.outputs.registry }}/backend-assets:latest
    cache-to: type=inline
    build-args: |
      "AWS_ACCESS_KEY_ID=${{ env.AWS_ACCESS_KEY_ID }}"
      "AWS_SECRET_ACCESS_KEY=${{ env.AWS_SECRET_ACCESS_KEY }}"
      "AWS_SESSION_TOKEN=${{ env.AWS_SESSION_TOKEN }}"

the action raised the error below.

Behavior:

Expected behavior:

It should download files from s3 bucket.

Actual behavior:

error: failed to solve: process "/bin/sh -c aws s3 cp s3://XXXX assets --recursive --debug" did not complete successfully: exit code: 255
Error: buildx failed with: error: failed to solve: process "/bin/sh -c aws s3 cp s3://XXXX assets --recursive " did not complete successfully: exit code: 255

Logs:

This is the command the action ran, taken from the GitHub logs:

/usr/bin/docker buildx build \
        --build-arg AWS_ACCESS_KEY_ID=*** \
        --build-arg AWS_SECRET_ACCESS_KEY=*** \
        --build-arg AWS_SESSION_TOKEN=XXXX \
        --tag ***.dkr.ecr.***.amazonaws.com/backend-assets:XXXX \
        --target backend-assets \
        --iidfile /tmp/docker-build-push-qsrsq2/iidfile \
        --metadata-file /tmp/docker-build-push-qsrsq2/metadata-file \
        --cache-from type=registry,ref=***.dkr.ecr.***.amazonaws.com/backend-assets:latest \
        --cache-to type=inline --push ./backend

Thanks in advance.

Issue Analytics

  • State: closed
  • Created 2 years ago
  • Comments: 6 (4 by maintainers)

Top GitHub Comments

1 reaction
crazy-max commented, Oct 21, 2021

I think you’re missing the AWS_DEFAULT_REGION, as I see in some blog posts out there.
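
If that is the cause, a minimal sketch of the fix, assuming the region gets wired through the same way as the other credentials (the env.AWS_DEFAULT_REGION name below is an assumption, not taken from the original workflow):

# Dockerfile: declare the region next to the existing build args
ARG AWS_ACCESS_KEY_ID
ARG AWS_SECRET_ACCESS_KEY
ARG AWS_SESSION_TOKEN
ARG AWS_DEFAULT_REGION

# ARGs declared in this stage are visible to the RUN step below
RUN aws s3 cp s3://XXXX assets --recursive

# workflow: add the region to the existing build-args list
build-args: |
  "AWS_ACCESS_KEY_ID=${{ env.AWS_ACCESS_KEY_ID }}"
  "AWS_SECRET_ACCESS_KEY=${{ env.AWS_SECRET_ACCESS_KEY }}"
  "AWS_SESSION_TOKEN=${{ env.AWS_SESSION_TOKEN }}"
  "AWS_DEFAULT_REGION=${{ env.AWS_DEFAULT_REGION }}"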

“It doesn’t work using secrets”

The secret works as shown in https://github.com/docker/buildx/issues/809#issuecomment-947915521. It’s just the usage of the AWS command, so I don’t think this is related to the action or the Dockerfile itself. Anyway, I suggest opening a thread on Stack Overflow about your issue.
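
For the secrets route, a rough sketch of what that BuildKit pattern looks like with docker/build-push-action, assuming the secret ids below are placeholders of your choosing and that the # syntax directive is the first line of the Dockerfile:

# syntax=docker/dockerfile:1
# Dockerfile: mount the values only for this RUN step; they are not written into any image layer
RUN --mount=type=secret,id=aws_access_key_id \
    --mount=type=secret,id=aws_secret_access_key \
    --mount=type=secret,id=aws_session_token \
    AWS_ACCESS_KEY_ID=$(cat /run/secrets/aws_access_key_id) \
    AWS_SECRET_ACCESS_KEY=$(cat /run/secrets/aws_secret_access_key) \
    AWS_SESSION_TOKEN=$(cat /run/secrets/aws_session_token) \
    aws s3 cp s3://XXXX assets --recursive

# workflow: pass the values through the action's secrets input instead of build-args
- uses: docker/build-push-action@v2
  with:
    secrets: |
      "aws_access_key_id=${{ env.AWS_ACCESS_KEY_ID }}"
      "aws_secret_access_key=${{ env.AWS_SECRET_ACCESS_KEY }}"
      "aws_session_token=${{ env.AWS_SESSION_TOKEN }}"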

0 reactions
akramfstg commented, Oct 21, 2021

“Also ARG is not an env var so it will not be visible to the AWS CLI command. You can do:”

Aren’t they equivalent (during build), i.e., both available as environment variables?

I already tried the Dockerfile locally on my Mac and it worked… why can’t it interpret that command?
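
For anyone hitting the same ARG-versus-environment question, one way to see what that RUN step actually has available is a throwaway debug step like the sketch below (the sed just keeps the values out of the build log):

# List which AWS_* variables are set at this point, with values redacted
RUN env | grep '^AWS_' | sed 's/=.*/=***/'
# Ask the CLI itself where it is picking up credentials and region from
RUN aws configure list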

