ValueError: Targets should be binary (0 or 1).


I am facing this issue: ValueError: For binary cases, y must be comprised of 0's and 1's.

The task is multilabel, and I am converting to binary with:

from ignite.utils import convert_tensor


def custom_prepare_batch(batch, device, non_blocking):
    # Move images and labels to the target device
    x, y = batch["img"], batch["lab"]
    return (
        convert_tensor(x, device=device, non_blocking=non_blocking),
        convert_tensor(y, device=device, non_blocking=non_blocking),
    )


import torch

### model update function
def process_function(engine, batch):
    model.train()

    images, targets = custom_prepare_batch(batch, device=device, non_blocking=True)

    optimizer.zero_grad()
    outputs = model(images)

    # Compute the loss per task, masking out examples whose label is NaN
    for task in range(targets.shape[1]):
        task_output = outputs[:, task]
        task_target = targets[:, task]
        mask = ~torch.isnan(task_target)
        task_output = task_output[mask]
        task_target = task_target[mask]
        if len(task_target) > 0:
            if agreement_threshold > 0.0:
                mean_loss, masks = and_mask_utils.get_grads(
                    agreement_threshold=agreement_threshold,
                    batch_size=1,
                    loss_fn=criterion,
                    n_agreement_envs=batch_size,
                    params=optimizer.param_groups[0]['params'],
                    output=task_output,
                    target=task_target,
                    method="and_mask",
                    scale_grad_inverse_sparsity=scale_grad_inverse_sparsity,
                )
            else:
                mean_loss = criterion(task_output, task_target)
                mean_loss.backward()

    optimizer.step()

    return {
        # "batchloss": mean_loss.item()
    }
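
For reference, a minimal sketch (not from the issue) of how an update function like this plugs into an ignite trainer; model, optimizer, criterion, and the AND-mask settings are assumed to be defined elsewhere:

from ignite.engine import Engine

# ignite calls process_function(engine, batch) once per batch
trainer = Engine(process_function)
# trainer.run(train_loader, max_epochs=5)  # train_loader is an assumed DataLoader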

I used this from the ignite docs:

def activated_output_transform(output):
    y_pred, y = output
    y_pred = torch.sigmoid(y_pred)
    return y_pred, y

metrics = {
    "roc_auc": ROC_AUC(activated_output_transform),
}

And now I am getting ValueError: Targets should be binary (0 or 1).
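
For illustration (not part of the original report), a minimal sketch of how the raw targets trip this check: the multilabel target matrix is float-valued and contains NaNs for missing labels, so it is not strictly 0/1 when ROC_AUC sees it. The NaN below is a stand-in for an unlabeled task.

import torch
from ignite.contrib.metrics import ROC_AUC

roc_auc = ROC_AUC()
y_pred = torch.sigmoid(torch.randn(4))
# A NaN target (missing label) is not strictly 0 or 1, so the metric's
# binary check fails; depending on the ignite version the error is raised
# when the metric is updated or computed
y = torch.tensor([1.0, 0.0, float("nan"), 1.0])
roc_auc.update((y_pred, y))
roc_auc.compute()  # ValueError: Targets should be binary (0 or 1).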

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 16

Top GitHub Comments

1 reaction
etetteh commented, Jan 20, 2021

Thank you so much. This is exactly what my task looks like, and I can work with this.

1 reaction
vfdev-5 commented, Jan 20, 2021

Right, in this case you have to split the target with an output_transform function and create 16 metrics. Maybe something like this could work in your case:

from functools import partial

import torch

from ignite.engine import Engine
from ignite.contrib.metrics import ROC_AUC


torch.manual_seed(0)
num_tasks = 16
batch_size = 4
roc_auc_per_task = {}


def ot_per_task(output, task_index):
    # Select one task's column and remove NaN targets with a mask
    y_pred, y = output
    task_output = y_pred[:, task_index]
    task_target = y[:, task_index]
    mask = ~torch.isnan(task_target)
    task_output = torch.sigmoid(task_output[mask])
    task_target = task_target[mask].long()
    return task_output, task_target


# One ROC_AUC metric per task, each reading only its own column
for i in range(num_tasks):
    roc_auc_per_task["auc_{}".format(i)] = ROC_AUC(output_transform=partial(ot_per_task, task_index=i))


def processing_fn(e, b):
    # Let's generate random predictions and binary targets
    y_true = torch.randint(0, 2, size=(batch_size, num_tasks)).double()
    # Add NaNs to simulate missing labels
    for _ in range(int(batch_size * num_tasks * 0.33)):
        i = torch.randint(0, batch_size, size=(1, )).item()
        j = torch.randint(0, num_tasks, size=(1, )).item()
        y_true[i, j] = float("nan")
    y_preds = (torch.rand(batch_size, num_tasks) - 0.5) * 10
    return y_preds, y_true


evaluator = Engine(processing_fn)

# Attach all 16 per-task metrics to the evaluator
for n, m in roc_auc_per_task.items():
    m.attach(evaluator, name=n)

evaluator.run([0, 1, 3, 4, 5])
evaluator.state.metrics

HTH
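
A possible follow-up (my addition, not from the thread): once the run finishes, the 16 per-task values sit in evaluator.state.metrics and can be averaged into a single summary score.

# Aggregate the per-task AUCs computed above
aucs = [evaluator.state.metrics["auc_{}".format(i)] for i in range(num_tasks)]
print("mean ROC AUC over {} tasks: {:.4f}".format(num_tasks, sum(aucs) / len(aucs)))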
