Prefect User Config not working with Prefect cloud
Description
Working with user config files locally works great, as described in this document. However, I've tried using a user config file (`.toml`) with Prefect Cloud (we're running the Fargate agent), and it fails. I deploy by copying the file into Docker storage using the `files` arg, then setting the environment variable `PREFECT__USER_CONFIG_PATH` to the path of this `.toml` file.
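For reference, the deployment looks roughly like this (a sketch only: the registry URL and file paths are illustrative, and the Prefect 0.x Docker storage API is assumed):

```python
from prefect.environments.storage import Docker

# Hypothetical registry and paths -- adjust to your setup.
storage = Docker(
    registry_url="my-registry.example.com",
    files={
        # copy the local user config into the image
        "/local/path/uat.toml": "/root/.prefect/uat.toml",
    },
    env_vars={
        # point Prefect at the copied config file
        "PREFECT__USER_CONFIG_PATH": "/root/.prefect/uat.toml",
    },
)
```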
The flows get deployed successfully, the file gets copied over, and the env var gets set. However, when the flow is executed on Cloud and we call `prefect.config.SFK_USER`, we get the following error:
```
2019-11-9 3:38pm | prefect.CloudTaskRunner | DEBUG | Task 'execute_snowflake_query[0]': Calling task.run() method…
```
As an alternative way of deploying the config, rather than setting `PREFECT__USER_CONFIG_PATH`, I tried parsing the `.toml` file during deployment, prefixing every key with `PREFECT__`, and adding the resulting env vars to the `env_vars` arg of Docker storage. This produced the same error as above.
Expected Behavior
I would expect the value of `prefect.config.SFK_USER` to be returned, just as it is when I run this locally.
Reproduction
A minimal example that exhibits the behavior.
1. Create a file called `uat.toml`. Inside this file put `SFK_USER = "PREFECT_READ_ONLY"` (note that TOML string values must be quoted).
2. In your flow code, call `prefect.config.SFK_USER` within one of your tasks.

Deploy the Flow

3. When deploying your flow to Docker storage, use the `files` arg with the source and destination of this file to copy it into the Docker container.
4. In the `env_vars` arg, set `PREFECT__USER_CONFIG_PATH` to the destination file path of `uat.toml`.

Run the Flow

5. From Prefect Cloud, trigger the newly deployed flow.
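A minimal flow for the reproduction might look like this (task and flow names are illustrative; the Prefect 0.x API is assumed):

```python
import prefect
from prefect import task, Flow


@task
def read_config_value():
    # At runtime this should resolve SFK_USER from the user config
    return prefect.config.SFK_USER


with Flow("user-config-repro") as flow:
    read_config_value()
```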
Environment
We're running the Fargate agent.
Issue Analytics
- Created: 4 years ago
- Comments: 11 (1 by maintainers)
Thanks, @cicdw. This makes sense. We will definitely use the `prefect.*` pattern going forward. I'll go ahead and close this issue.
Hi @mhmcdonald! I've been lurking and I think I know what's going on. It's a little complicated, but ultimately boils down to how Flows are serialized and stored (using `cloudpickle`).

Whenever you import `config` directly (e.g. `from prefect import config`) and reference it in a task, `cloudpickle` freezes the `config` object as-is. However, whenever you use `prefect.config`, `cloudpickle` recognizes `config` as something it can access as a module attribute, and therefore at runtime it dynamically loads the configuration object (as you were expecting). In general we recommend the `prefect.*` pattern for both config and context for this reason.

You can see this in action if you call `cloudpickle.dumps(...)` on each of the above tasks – you'll notice that one of them produces a large output with your entire config file, whereas the other one is very minimal.
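A quick way to see the difference (hypothetical task bodies; run this as a script so `cloudpickle` serializes the functions by value, as it does when storing a Flow):

```python
import cloudpickle
import prefect
from prefect import config  # direct import: captured by value at pickle time


def task_frozen():
    # `config` is a global of this script, so it is pickled along with the function
    return config.SFK_USER


def task_dynamic():
    # only a reference to the `prefect` module is pickled;
    # `config` is looked up as a module attribute at runtime
    return prefect.config.SFK_USER


# The first payload embeds the entire config; the second stays small.
print(len(cloudpickle.dumps(task_frozen)))
print(len(cloudpickle.dumps(task_dynamic)))
```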