How does the mlflow artifact proxy server configure AWS credentials?
I am trying to use an MLflow server with proxied artifact storage set up in S3.
https://www.mlflow.org/docs/latest/tracking.html#logging-to-a-tracking-server
The MLflow server is running in its own container on an EC2 instance. So far it either cannot access the S3 bucket (invalid access token) or I have had to configure client-side credentials. With client-side credentials configured, I am able to load a model into the registry, and I can use the MLflow client to list registered models just fine.
Since the MLflow server is running in a container, do you need to map a folder so that it can access AWS credentials? Do the credentials need to be stored in environment variables when the server is started? How else would the MLflow server in the container get the AWS credentials to access the S3 bucket?
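With `--serve-artifacts`, the upload to S3 happens inside the server process, so the container itself must be able to resolve AWS credentials through boto3's standard chain (environment variables, the shared credentials file, or an EC2 instance profile). A minimal sketch of the two container-level options as a docker-compose fragment — the service and image names are placeholders, not from this issue:

```yaml
# Hypothetical compose fragment; "mlflow" and "my-mlflow-image" are placeholders.
services:
  mlflow:
    image: my-mlflow-image
    ports:
      - "5000:5000"
    environment:
      # Option A: pass credentials through from the host environment
      - AWS_ACCESS_KEY_ID
      - AWS_SECRET_ACCESS_KEY
      - AWS_DEFAULT_REGION
    volumes:
      # Option B: mount the host's AWS config/credentials read-only
      - ~/.aws:/root/.aws:ro
```

On EC2 there is a third option that avoids both: attach an IAM instance role to the instance, and boto3 inside the container will pick up credentials from the instance metadata service with nothing exported or mounted.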
This is what I’m running to start the mlflow server:
CMD mlflow server \
    --backend-store-uri ${BACKEND_URI} \
    --default-artifact-root ${ARTIFACT_ROOT} \
    --host 0.0.0.0 --port 5000 \
    --artifacts-destination ${ARTIFACT_ROOT} \
    --serve-artifacts
Issue Analytics
- Created: a year ago
- Comments: 7 (4 by maintainers)
Created a new experiment through the GUI and tried to upload artifacts, and also created a new experiment via the code snippet you listed above. Tried to log artifacts and got

NoCredentialsError: Unable to locate credentials

though in doing some other tests there may be some other admin settings preventing access. For some reason the boto3 access, which was working, does not work now. Feel free to close this ticket and I will reopen it if MLflow appears to be the source of the error again. Thank you.

@zstern Any updates here?
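`NoCredentialsError` is raised by boto3 when none of the standard credential sources are visible to the process doing the upload. A quick sketch for checking which sources a given shell can see — `check_aws_creds` is a hypothetical helper name, not part of MLflow or the AWS CLI:

```shell
#!/bin/sh
# Sketch: report which standard AWS credential sources this shell can see.
# Does not validate the credentials, only whether a source is present.
check_aws_creds() {
  if [ -n "$AWS_ACCESS_KEY_ID" ]; then
    echo "env AWS_ACCESS_KEY_ID: set"
  else
    echo "env AWS_ACCESS_KEY_ID: missing"
  fi
  # boto3 honors AWS_SHARED_CREDENTIALS_FILE, defaulting to ~/.aws/credentials
  if [ -f "${AWS_SHARED_CREDENTIALS_FILE:-$HOME/.aws/credentials}" ]; then
    echo "shared credentials file: found"
  else
    echo "shared credentials file: missing"
  fi
}
check_aws_creds
```

Running this inside the server container (e.g. via `docker exec`) shows what the MLflow server process actually inherits; if both sources are missing there, only an EC2 instance profile could still supply credentials.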