Python 3 native logging does not work properly with the Azure ML SDK
- Package Name: azureml
- Package Version: 1.12 (but the issue has been observed since 1.0.62)
- Operating System: macOS & Linux
- Python Version: Python 3
Describe the bug
The native `logging` package does not log anything when the azureml SDK is used. This has been an issue for me for a while, and I recently found the cause: it is related to `from azureml.pipeline.steps import PythonScriptStep`.
Tested with Python 3 and azureml-sdk 1.12, but I have been observing this since azureml-sdk 1.0.62.
```
azureml-core==1.12.0.post2
azureml-dataprep==2.2.3
azureml-dataprep-native==22.0.0
azureml-dataprep-rslex==1.0.1
azureml-dataset-runtime==1.12.0
azureml-defaults==1.14.0
azureml-model-management-sdk==1.0.1b6.post1
azureml-monitoring==0.1.0a21
azureml-pipeline==1.12.0
azureml-pipeline-core==1.12.0
azureml-pipeline-steps==1.12.0
```
To Reproduce
```python
import logging

from azureml.core import Workspace, Dataset, Environment, ScriptRunConfig, Experiment
from azureml.core.runconfig import RunConfiguration
from azureml.pipeline.steps import PythonScriptStep

if __name__ == '__main__':
    subscription_id = 'xxxxxxxxxx'
    resource_group = 'yyyyyyyy'
    workspace_name = 'zzzzzzzz-ws'

    logging.basicConfig(level=logging.INFO, format='[%(asctime)s][%(name)s:'
                        '%(filename)s:%(lineno)d]%(levelname)s: %(message)s')
    logging.info('Logging starts ....')

    workspace = Workspace(subscription_id, resource_group, workspace_name)
    run_config = RunConfiguration(framework='python')
    logging.debug('workspace loaded {}'.format(workspace))

    dataset = Dataset.get_by_name(workspace, name='iris_dataset')
    logging.info(dataset.to_pandas_dataframe().head())

    # Enable Docker
    run_config.environment = Environment(name='LoggingEnvironment')
    run_config.environment.docker.enabled = True
    run_config.environment.python.user_managed_dependencies = False

    experiment = Experiment(workspace=workspace,
                            name='logging-test')

    src = ScriptRunConfig(source_directory='.',
                          script='helloworld.py',
                          run_config=run_config)
```
stdout:
```
python3 logging_test.py
Failure while loading azureml_run_type_providers. Failed to load entrypoint hyperdrive = azureml.train.hyperdrive:HyperDriveRun._from_run_dto with exception (azureml-telemetry 1.14.0 (/home/xxxx/.virtualenvmlops/lib/python3.6/site-packages), Requirement.parse('azureml-telemetry~=1.12.0')).
Failure while loading azureml_run_type_providers. Failed to load entrypoint automl = azureml.train.automl.run:AutoMLRun._from_run_dto with exception (azureml-telemetry 1.14.0 (/home/xxxx/.virtualenvmlops/lib/python3.6/site-packages), Requirement.parse('azureml-telemetry~=1.12.0')).
```
Expected behavior
When I commented out the `from azureml.pipeline.steps import PythonScriptStep` line, the native log messages were produced:
```
Failure while loading azureml_run_type_providers. Failed to load entrypoint hyperdrive = azureml.train.hyperdrive:HyperDriveRun._from_run_dto with exception (azureml-telemetry 1.14.0 (/home/xxxx/.virtualenvmlops/lib/python3.6/site-packages), Requirement.parse('azureml-telemetry~=1.12.0')).
Failure while loading azureml_run_type_providers. Failed to load entrypoint automl = azureml.train.automl.run:AutoMLRun._from_run_dto with exception (azureml-telemetry 1.14.0 (/home/xxxx/.virtualenvmlops/lib/python3.6/site-packages), Requirement.parse('azureml-telemetry~=1.12.0')).
[2020-09-30 08:57:26,969][root:logging_test.py:15]INFO: Logging starts ....
[2020-09-30 08:57:28,306][azureml.core.run:run.py:297]INFO: Could not load the run context. Logging offline
[2020-09-30 08:57:30,301][azureml.core.run:run.py:297]INFO: Could not load the run context. Logging offline
[2020-09-30 08:57:34,103][root:logging_test.py:21]INFO:    sepal_length  sepal_width  petal_length  petal_width species   species_a   species_b
0           5.1          3.5           1.4          0.2  setosa   virginica  versicolor
1           4.9          3.0           1.4          0.2  setosa      setosa  versicolor
2           4.7          3.2           1.3          0.2  setosa  versicolor  versicolor
3           4.6          3.1           1.5          0.2  setosa  versicolor      setosa
4           5.0          3.6           1.4          0.2  setosa      setosa   virginica
[2020-09-30 08:57:34,119][azureml._restclient.clientbase:clientbase.py:179]INFO: Created a worker pool for first use
```
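A likely explanation for why the `basicConfig()` call stops working: `logging.basicConfig()` does nothing when the root logger already has handlers, and Python 3.6 has no `force=` argument to override that. The sketch below (an assumption inferred from the symptom, not from the SDK source) simply prints the root logger's handlers and level before and after the `azureml.pipeline.steps` import, to check whether the import configures logging as a side effect:

```python
import logging

# logging.basicConfig() is a no-op when the root logger already has handlers
# (Python 3.6 has no force= argument to override this), so a handler added as
# an import side effect turns a later basicConfig() call into a no-op.
print('root handlers before import:', logging.getLogger().handlers)
print('root level before import:', logging.getLogger().level)

# Assumption, based on the symptom above rather than the SDK source:
# importing the pipeline steps module configures logging as a side effect.
from azureml.pipeline.steps import PythonScriptStep  # noqa: F401

print('root handlers after import:', logging.getLogger().handlers)
print('root level after import:', logging.getLogger().level)
```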
@storesund Thanks, your solution solved the issue. I just put the
`logging.basicConfig(level=logging.INFO, format='[%(asctime)s][%(name)s:%(filename)s:%(lineno)d]%(levelname)s: %(message)s')`
call before importing any azure library, and I can now control the logging level. #please-close
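
For reference, a minimal sketch of the ordering workaround from the comment above, applied to the repro script: the only change is that `basicConfig()` runs before any azureml import. On Python 3.8+, `logging.basicConfig(..., force=True)` would be another option, but the import ordering is what works on the Python 3.6 environment shown in the logs.

```python
import logging

# Configure the root logger FIRST, before any azureml import, so the SDK's own
# logging setup cannot turn this basicConfig() call into a no-op.
logging.basicConfig(level=logging.INFO,
                    format='[%(asctime)s][%(name)s:'
                           '%(filename)s:%(lineno)d]%(levelname)s: %(message)s')

# azureml imports only after logging has been configured.
from azureml.core import Workspace, Dataset, Environment, ScriptRunConfig, Experiment  # noqa: F401,E402
from azureml.core.runconfig import RunConfiguration  # noqa: F401,E402
from azureml.pipeline.steps import PythonScriptStep  # noqa: F401,E402

logging.info('Logging starts ....')  # now emitted as expected
```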