Vertex AI Pipelines - Container OP `set_cpu_limit` does not work with parameter_values or at runtime
Hello Kubeflow Team, hello Google Team,
The container op method `.set_cpu_limit`
only works when the value is set explicitly; it does not work when the value is supplied via parameter_values or set at runtime:
https://github.com/kubeflow/pipelines/blob/4906ab2f1142043517249a62b9f22bc122971fdf/sdk/python/kfp/dsl/_container_op.py#L378
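The root cause is that `set_cpu_limit` expects a plain Kubernetes CPU quantity string at pipeline-definition time, while a pipeline parameter is still an unresolved placeholder at that point. A simplified sketch of this kind of validation (illustrative only, not the exact SDK code at the line linked above):

```python
import re

def set_cpu_limit(cpu: str) -> None:
    # A CPU limit must be a static Kubernetes quantity such as "16" or "500m".
    # (Simplified from the validation in kfp/dsl/_container_op.py.)
    if not re.match(r'^[0-9]+(m)?$', cpu):
        raise ValueError(f'Invalid cpu string: {cpu}')

set_cpu_limit("16")    # a static value passes
set_cpu_limit("500m")  # the millicpu form also passes

# A pipeline parameter is only a placeholder when the pipeline is defined,
# so it can never satisfy a static check like this.
try:
    set_cpu_limit("{{pipelineparam:op=;name=cpu_limit}}")
except ValueError as exc:
    print(exc)  # Invalid cpu string: {{pipelineparam:op=;name=cpu_limit}}
```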
Reproduce
- parameter_values: see steps to reproduce
- runtime: see https://github.com/kubeflow/pipelines/blob/master/samples/core/resource_spec/runtime_resource_request.py
Environment
- How did you deploy Kubeflow Pipelines (KFP)? Vertex AI Pipelines
- kfp 1.8.4
- kfp-pipeline-spec 0.1.11
- kfp-server-api 1.7.0
Steps to reproduce
Not working

from kfp.v2 import compiler
from kfp.v2.dsl import pipeline
from kfp.v2.google.client import AIPlatformClient

# train is a component defined elsewhere
@pipeline(name="reproduction",
          pipeline_root="ADD PIPELINE ROOT")
def pipeline(cpu_limit: str):
    train_op = train().set_cpu_limit(cpu_limit)

compiler.Compiler().compile(pipeline_func=pipeline,
                            package_path='pipeline.json')

api_client = AIPlatformClient(
    project_id="ADD PROJECT",
    region="us-central1"
)

response = api_client.create_run_from_job_spec(
    'pipeline.json',
    parameter_values={
        'cpu_limit': "16"
    }
)
Working

from kfp.v2 import compiler
from kfp.v2.dsl import pipeline
from kfp.v2.google.client import AIPlatformClient

# train is a component defined elsewhere
@pipeline(name="reproduction",
          pipeline_root="ADD PIPELINE ROOT")
def pipeline():
    train_op = train().set_cpu_limit("16")

compiler.Compiler().compile(pipeline_func=pipeline,
                            package_path='pipeline.json')

api_client = AIPlatformClient(
    project_id="ADD PROJECT",
    region="us-central1"
)

response = api_client.create_run_from_job_spec(
    'pipeline.json'
)
Expected result
The CPU limit can be set via parameter_values.
Looking forward to your feedback!
Issue Analytics
- State:
- Created 2 years ago
- Reactions: 2
- Comments: 13
Top GitHub Comments
Hi, are there any updates? This would be a huge benefit for re-using pipelines without the need to re-compile them.
+++
I agree with @SaschaHeyer. We are building reusable pipeline templates where only the data changes, and depending on the data size we want to be able to configure the CPU and memory for each of the components through pipeline params or some other mechanism.
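For the reusable-template case described above, one possible workaround (at the cost of re-compiling once per configuration) is to bind the limit at compile time through a pipeline factory function. The sketch below stubs out kfp entirely to show only the closure-binding idea; all names are illustrative:

```python
def make_pipeline(cpu_limit: str):
    # Bind the limit as a static string at definition time. In real kfp code
    # this inner function would call train().set_cpu_limit(cpu_limit) and the
    # result would be compiled once per configuration (e.g. pipeline-4cpu.json,
    # pipeline-16cpu.json) instead of parameterized at runtime.
    def configured_pipeline():
        return {"cpu_limit": cpu_limit}  # stand-in for the configured train step
    return configured_pipeline

small = make_pipeline("4")
large = make_pipeline("16")
print(small()["cpu_limit"], large()["cpu_limit"])  # 4 16
```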