TypeError with confusion matrix
Describe the bug
The training data is the Twitter airline sentiment dataset. My model definition YAML is:
```yaml
input_features:
    -
        name: text
        type: text
output_features:
    -
        name: airline_sentiment
        type: category
```
After training, I used `ludwig test` to produce `test_statistics.json`. When I try to visualize a confusion matrix of the output with `ludwig visualize --visualization confusion_matrix -tes ./results_0/test_statistics.json`, I get a TypeError:
```
Traceback (most recent call last):
  File "C:\Users\huuhi\Anaconda3\Scripts\ludwig-script.py", line 11, in <module>
    load_entry_point('ludwig==0.2.1', 'console_scripts', 'ludwig')()
  File "C:\Users\huuhi\Anaconda3\lib\site-packages\ludwig-0.2.1-py3.6.egg\ludwig\cli.py", line 108, in main
  File "C:\Users\huuhi\Anaconda3\lib\site-packages\ludwig-0.2.1-py3.6.egg\ludwig\cli.py", line 64, in __init__
  File "C:\Users\huuhi\Anaconda3\lib\site-packages\ludwig-0.2.1-py3.6.egg\ludwig\cli.py", line 94, in visualize
  File "C:\Users\huuhi\Anaconda3\lib\site-packages\ludwig-0.2.1-py3.6.egg\ludwig\visualize.py", line 3119, in cli
  File "C:\Users\huuhi\Anaconda3\lib\site-packages\ludwig-0.2.1-py3.6.egg\ludwig\visualize.py", line 650, in confusion_matrix_cli
  File "C:\Users\huuhi\Anaconda3\lib\site-packages\ludwig-0.2.1-py3.6.egg\ludwig\utils\data_utils.py", line 76, in load_json
TypeError: expected str, bytes or os.PathLike object, not NoneType
```
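The final frame shows `load_json` receiving `None` instead of a file path: the CLI was invoked without a ground-truth metadata file, so a `None` path is presumably handed down to `open()`. The same TypeError can be reproduced with nothing but the standard library:

```python
# Passing None where open() expects a file path raises the exact
# error shown in the traceback above.
try:
    open(None)
except TypeError as err:
    print(err)  # expected str, bytes or os.PathLike object, not NoneType
```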
Environment:
- OS: Win 10
- Python 3
- Ludwig version 0.2
Issue Analytics
- Created 4 years ago
- Comments: 5 (1 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
I had faced similar problems while training on a custom dataset as well. Two changes worked for me; let me know if they work for you too.
The command I used: `ludwig visualize --visualization confusion_matrix --normalize --top_n_classes 8 --test_statistics ./results_1/test_statistics.json --ground_truth_metadata train1.json`
Just to clarify for people who may end up reading this issue: the `--ground_truth_metadata` parameter accepts the metadata JSON file that is created the first time a dataset is used, named after the dataset with a `.json` extension. If saving of preprocessed data is skipped, the same information is also present in `results/experiment_run_0/model/train_set_metadata.json` (the results and experiment run directories may differ in your case).
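Both candidate locations described above can be checked before running the visualization; a minimal sketch (the file names are examples taken from this thread, so adjust them to your own dataset and results directory):

```python
import os

# Candidate locations for the ground-truth metadata JSON
# (example names from this thread; adjust to your setup).
candidates = [
    "train1.json",  # written next to the dataset on first use
    os.path.join("results", "experiment_run_0", "model",
                 "train_set_metadata.json"),
]

# Pick the first candidate that actually exists, or None if neither does.
metadata_path = next((p for p in candidates if os.path.exists(p)), None)
print(metadata_path)
```

If this prints `None`, passing the missing path to `ludwig visualize` would reproduce the TypeError from the traceback.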