Can't import mlflow due to protobuf update to version 4.21 [BUG]
System information
- mlflow version: 1.26
- python version: 3.7
Describe the problem
mlflow cannot be imported when the latest protobuf release (4.21) is installed.
Tracking information
No response
Code to reproduce issue
import mlflow
Other info / logs
TypeError Traceback (most recent call last)
<command-4092054516117478> in <module>
----> 1 import mlflow
/local_disk0/pythonVirtualEnvDirs/virtualEnv-b0db65a3-9256-450f-9467-e817ece7ad9e/lib/python3.7/importlib/_bootstrap.py in _find_and_load(name, import_)
/local_disk0/pythonVirtualEnvDirs/virtualEnv-b0db65a3-9256-450f-9467-e817ece7ad9e/lib/python3.7/importlib/_bootstrap.py in _find_and_load_unlocked(name, import_)
/local_disk0/pythonVirtualEnvDirs/virtualEnv-b0db65a3-9256-450f-9467-e817ece7ad9e/lib/python3.7/importlib/_bootstrap.py in _load_unlocked(spec)
/local_disk0/pythonVirtualEnvDirs/virtualEnv-b0db65a3-9256-450f-9467-e817ece7ad9e/lib/python3.7/importlib/_bootstrap.py in _load_backward_compatible(spec)
/local_disk0/tmp/1653558472209-0/PostImportHook.py in load_module(self, fullname)
187
188 def load_module(self, fullname):
--> 189 module = self.loader.load_module(fullname)
190 notify_module_loaded(module)
191
/databricks/python/lib/python3.7/site-packages/mlflow/__init__.py in <module>
30 from mlflow.version import VERSION as __version__ # pylint: disable=unused-import
31 from mlflow.utils.logging_utils import _configure_mlflow_loggers
---> 32 import mlflow.tracking._model_registry.fluent
33 import mlflow.tracking.fluent
34
/databricks/python/lib/python3.7/site-packages/mlflow/tracking/__init__.py in <module>
6 """
7
----> 8 from mlflow.tracking.client import MlflowClient
9 from mlflow.tracking._tracking_service.utils import (
10 set_tracking_uri,
/local_disk0/pythonVirtualEnvDirs/virtualEnv-b0db65a3-9256-450f-9467-e817ece7ad9e/lib/python3.7/importlib/_bootstrap.py in _find_and_load(name, import_)
/local_disk0/pythonVirtualEnvDirs/virtualEnv-b0db65a3-9256-450f-9467-e817ece7ad9e/lib/python3.7/importlib/_bootstrap.py in _find_and_load_unlocked(name, import_)
/local_disk0/pythonVirtualEnvDirs/virtualEnv-b0db65a3-9256-450f-9467-e817ece7ad9e/lib/python3.7/importlib/_bootstrap.py in _load_unlocked(spec)
/local_disk0/pythonVirtualEnvDirs/virtualEnv-b0db65a3-9256-450f-9467-e817ece7ad9e/lib/python3.7/importlib/_bootstrap.py in _load_backward_compatible(spec)
/local_disk0/tmp/1653558472209-0/PostImportHook.py in load_module(self, fullname)
187
188 def load_module(self, fullname):
--> 189 module = self.loader.load_module(fullname)
190 notify_module_loaded(module)
191
/databricks/python/lib/python3.7/site-packages/mlflow/tracking/client.py in <module>
14 from typing import Any, Dict, Sequence, List, Optional, Union, TYPE_CHECKING
15
---> 16 from mlflow.entities import Experiment, Run, RunInfo, Param, Metric, RunTag, FileInfo, ViewType
17 from mlflow.store.entities.paged_list import PagedList
18 from mlflow.entities.model_registry import RegisteredModel, ModelVersion
/databricks/python/lib/python3.7/site-packages/mlflow/entities/__init__.py in <module>
4 """
5
----> 6 from mlflow.entities.experiment import Experiment
7 from mlflow.entities.experiment_tag import ExperimentTag
8 from mlflow.entities.file_info import FileInfo
/databricks/python/lib/python3.7/site-packages/mlflow/entities/experiment.py in <module>
1 from mlflow.entities._mlflow_object import _MLflowObject
----> 2 from mlflow.entities.experiment_tag import ExperimentTag
3 from mlflow.protos.service_pb2 import (
4 Experiment as ProtoExperiment,
5 ExperimentTag as ProtoExperimentTag,
/databricks/python/lib/python3.7/site-packages/mlflow/entities/experiment_tag.py in <module>
1 from mlflow.entities._mlflow_object import _MLflowObject
----> 2 from mlflow.protos.service_pb2 import ExperimentTag as ProtoExperimentTag
3
4
5 class ExperimentTag(_MLflowObject):
/databricks/python/lib/python3.7/site-packages/mlflow/protos/service_pb2.py in <module>
16
17
---> 18 from .scalapb import scalapb_pb2 as scalapb_dot_scalapb__pb2
19 from . import databricks_pb2 as databricks__pb2
20
/databricks/python/lib/python3.7/site-packages/mlflow/protos/scalapb/scalapb_pb2.py in <module>
33 message_type=None, enum_type=None, containing_type=None,
34 is_extension=True, extension_scope=None,
---> 35 serialized_options=None, file=DESCRIPTOR)
36 MESSAGE_FIELD_NUMBER = 1020
37 message = _descriptor.FieldDescriptor(
/databricks/python/lib/python3.7/site-packages/google/protobuf/descriptor.py in __new__(cls, name, full_name, index, number, type, cpp_type, label, default_value, message_type, enum_type, containing_type, is_extension, extension_scope, options, serialized_options, has_default_value, containing_oneof, json_name, file, create_key)
558 has_default_value=True, containing_oneof=None, json_name=None,
559 file=None, create_key=None): # pylint: disable=redefined-builtin
--> 560 _message.Message._CheckCalledFromGeneratedFile()
561 if is_extension:
562 return _message.default_pool.FindExtensionByName(full_name)
TypeError: Descriptors cannot not be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
1. Downgrade the protobuf package to 3.20.x or lower.
2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates
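Workaround 1 above can be checked for programmatically before importing mlflow. A minimal sketch, where `protobuf_needs_downgrade` is a hypothetical helper (not part of mlflow or protobuf) that applies the rule from the error message:

```python
def protobuf_needs_downgrade(version: str) -> bool:
    """Return True if this protobuf runtime triggers the descriptor error
    for code generated with protoc < 3.19 (i.e. any 4.x release)."""
    major = int(version.split(".")[0])
    return major >= 4

# The failing environment above reported protobuf 4.21:
print(protobuf_needs_downgrade("4.21.0"))  # True  -> downgrade to 3.20.x or lower
print(protobuf_needs_downgrade("3.20.1"))  # False -> import mlflow should work
```

In a real environment the version string would come from `google.protobuf.__version__` after importing protobuf itself.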
What component(s) does this bug affect?
- area/artifacts: Artifact stores and artifact logging
- area/build: Build and test infrastructure for MLflow
- area/docs: MLflow documentation pages
- area/examples: Example code
- area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
- area/models: MLmodel format, model serialization/deserialization, flavors
- area/projects: MLproject format, project running backends
- area/scoring: MLflow Model server, model deployment tools, Spark UDFs
- area/server-infra: MLflow Tracking server backend
- area/tracking: Tracking Service, tracking client APIs, autologging
What interface(s) does this bug affect?
- area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- area/docker: Docker use across MLflow’s components, such as MLflow Projects and MLflow Models
- area/sqlalchemy: Use of SQLAlchemy in the Tracking Service or Model Registry
- area/windows: Windows support
What language(s) does this bug affect?
- language/r: R APIs and clients
- language/java: Java APIs and clients
- language/new: Proposals for new client languages
What integration(s) does this bug affect?
- integrations/azure: Azure and Azure ML integrations
- integrations/sagemaker: SageMaker integrations
- integrations/databricks: Databricks integrations
Issue Analytics
- State:
- Created: a year ago
- Reactions: 7
- Comments: 14 (7 by maintainers)
Semantic versioning requires that packages only make breaking changes when incrementing the major version.
I think the mlflow package should, at all times, place an upper limit on the major version of each direct dependency. When a new version of a direct dependency comes out, the MLflow team can bump the upper limit and test for regressions.
The chance of breakage like in this case would then be an order of magnitude smaller than it is now.
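The upper-bound policy described here could be expressed in the package metadata. A hypothetical sketch of what such a cap on protobuf might look like in a `setup.py` (version numbers are illustrative, not mlflow's actual constraints):

```python
# setup.py (excerpt) -- hypothetical illustration of capping a direct
# dependency at the current major version. The cap is bumped and the
# test suite re-run when a new protobuf major is released.
from setuptools import setup

setup(
    name="mlflow",
    install_requires=[
        # allow 3.x minor/patch updates, block the breaking 4.x line
        "protobuf>=3.12.0,<4.0.0",
    ],
)
```

With a cap like this in place, `pip install mlflow` would have resolved protobuf to a 3.x release and the import error above could not have occurred.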
Same for mlflow version 1.11.0. I have pinned protobuf to 3.2.0 and it works. But I think mlflow should freeze all of its requirements to specific versions.