
Issues with some logger metrics

Hi, I defined the logger object as shown in the Continuum tutorials for computing the CL metrics. While some metrics work (accuracy and bwt, for instance), others don't, in particular average_incremental_accuracy and online_cumulative_performance. Below is the part of my code where I iterate through the epochs of a specific task/experience. I'm probably making a mistake somewhere.
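
For reference, the logger is created as in the tutorial, roughly like this (a sketch; the exact arguments may differ from my script):

from continuum.metrics import Logger

# track both train and test predictions so per-subset metrics are available
logger = Logger(list_subsets=['train', 'test'])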

print(f"Start training for {epochs} epochs")
    for epoch in range(epochs):
            model.train()
            for x,y,t in train_loader:
                x = x.to(device)
                y = y.to(device)
                
                optimizer.zero_grad()
                predictions = model(x)['logits']
                loss = criterion(predictions,y[:,0])
                loss.backward()
                optimizer.step()
                
                
                logger.add([predictions.cpu().argmax(dim=1),y[:,0].cpu(),t], subset= 'train')
                
                # Test phase
            if args.eval_every and epoch % args.eval_every == 0:
                model.eval()
                test_loss = 0.
                with torch.inference_mode(): 
                    for x_test, y_test, t_test in test_loader:
                        predic_test = model(x_test)['logits']
                        test_loss += criterion(predic_test,y_test[:,0].cuda())
                       
                        
                        logger.add([predic_test.cpu().argmax(dim=1),y_test[:,0].cpu(),t], subset = 'test')
                     test_loss /= len(test_loader)
                    print(f"Train accuracy: {logger.online_accuracy}")
                    print(f"Test accuracy: {logger.accuracy}")
            
            print(logger.average_incremental_accuracy)
            print([round(100 * acc_t, 2) for acc_t in logger.accuracy_per_task])
            
            logger.end_epoch()
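
The outer loop over tasks is not shown above; it follows the usual Continuum pattern, roughly like this (a sketch assuming a standard scenario object such as ClassIncremental):

from torch.utils.data import DataLoader

for task_id, train_taskset in enumerate(scenario):
    train_loader = DataLoader(train_taskset, batch_size=32, shuffle=True)
    # ... the epoch loop above runs here for this task ...
    logger.end_task()  # mark the task boundary for the metrics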

During the second task, I got this error from average_incremental_accuracy:

Traceback (most recent call last):
  File "main.py", line 263, in <module>
    main(args)
  File "main.py", line 222, in main
    print(logger.average_incremental_accuracy)
  File "/home/stek/.local/lib/python3.8/site-packages/continuum/metrics/utils.py", line 11, in wrapper2
    return func(self)
  File "/home/stek/.local/lib/python3.8/site-packages/continuum/metrics/logger.py", line 134, in average_incremental_accuracy
    return statistics.mean([
  File "/home/stek/.local/lib/python3.8/site-packages/continuum/metrics/logger.py", line 135, in <listcomp>
    accuracy(all_preds[t], all_targets[t])
  File "/home/stek/.local/lib/python3.8/site-packages/continuum/metrics/metrics.py", line 14, in accuracy
    assert task_preds.size > 0
AssertionError

It seems that task_preds for one of the seen tasks is empty in the logger.
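
For now I just guard the call so training keeps going (a workaround sketch on my side, not a fix):

try:
    print(logger.average_incremental_accuracy)
except AssertionError:
    # some seen task has no test predictions recorded yet
    print("average_incremental_accuracy not computable yet")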

I'd appreciate any suggestions about this issue. If this code is not sufficient, I can include the entire script, but since the logger only relies on the predicted and true labels, I don't think the rest of the code is relevant. Thanks in advance.

Issue Analytics

  • State: closed
  • Created: a year ago
  • Comments: 13

Top GitHub Comments

1 reaction
TLESORT commented, May 27, 2022

I just merged the fluentspeech branch to master, so you can use the fix_metrics branch to get the updated logger. Thanks for reporting the errors! 😃
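
To try that branch before a release, an install from the branch should work along these lines (repository path assumed):

pip install git+https://github.com/Continvvm/continuum.git@fix_metrics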

0 reactions
umbertocappellazzo commented, May 30, 2022

@arthurdouillard I upgraded continuum to version 1.2.3, but it says ImportError: cannot import name 'FluentSpeech' from 'continuum.datasets' (/home/stek/.local/lib/python3.8/site-packages/continuum/datasets/__init__.py). In fact, the file is not present. Can you take a look? Thanks.
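
A quick way to check what the installed build actually ships (a generic diagnostic, not from the thread):

from importlib.metadata import version
from continuum import datasets

print(version('continuum'))  # confirm which continuum release is actually installed
print([name for name in dir(datasets) if 'Speech' in name])  # FluentSpeech should appear here if present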
