
How to find suitable LRs with FastaiLRFinder when the optimizer has multiple groups?

import torch.optim as optim

# `model` here is assumed to be a module exposing `conv` and `linear` submodules
optimizer = optim.SGD([
    {'params': model.conv.parameters(), 'lr': 1},
    {'params': model.linear.parameters(), 'lr': 0.1},
], lr=3e-4, momentum=0.9)

In a case like this, the optimizer has two different parameter groups. Can anyone give an example of how to use FastaiLRFinder here?

Issue Analytics

  • State: closed
  • Created: a year ago
  • Comments: 5

Top GitHub Comments

1 reaction
xiaoye-hhh commented, Sep 8, 2022

Thanks a lot. It’s helpful.

1 reaction
vfdev-5 commented, Sep 7, 2022

@xiaoye-hhh thanks for the question. You can check https://pytorch-ignite.ai/how-to-guides/04-fastai-lr-finder/#with-lr-finder and adapt it to multiple groups.

Our FastaiLRFinder can accept multiple groups, but it sweeps a single lr range without respecting the groups' initial lrs (e.g. 1.0 and 0.1 in your case). That means it will most probably suggest the same lr for both groups.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
from torchvision.models import resnet18
from torchvision.transforms import Compose, Normalize, ToTensor

from ignite.engine import create_supervised_trainer
from ignite.handlers import FastaiLRFinder

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")


class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()

        self.model = resnet18(num_classes=20)
        self.model.conv1 = nn.Conv2d(
            1, 64, kernel_size=3, padding=1, bias=False
        )
        self.linear = nn.Linear(20, 10)

    def forward(self, x):
        # the linear head maps the 20-dim backbone output to the 10 MNIST classes
        return self.linear(self.model(x))


model = Net().to(device)

data_transform = Compose([ToTensor(), Normalize((0.1307,), (0.3081,))])

train_loader = DataLoader(
    MNIST(download=True, root=".", transform=data_transform, train=True),
    batch_size=128,
    shuffle=True,
)

test_loader = DataLoader(
    MNIST(download=True, root=".", transform=data_transform, train=False),
    batch_size=256,
    shuffle=False,
)


# optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-06)
optimizer = torch.optim.SGD([
    {'params': model.model.parameters(), 'lr': 0.1},
    {'params': model.linear.parameters(), 'lr': 0.01},
], momentum=0.9)

criterion = nn.CrossEntropyLoss()

trainer = create_supervised_trainer(model, optimizer, criterion, device=device)
lr_finder = FastaiLRFinder()

# To restore the model's and optimizer's states after running the LR Finder
to_save = {"model": model, "optimizer": optimizer}

with lr_finder.attach(trainer, to_save, end_lr=1.0) as trainer_with_lr_finder:
    trainer_with_lr_finder.run(train_loader)

print("Suggested LR", lr_finder.lr_suggestion())
> Suggested LR [0.10451768106330113, 0.10451768106330113]
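
To actually use the result, the suggested value(s) can be written back into the optimizer's param groups. Below is a minimal sketch, assuming the lr_finder and optimizer from the example above and an ignite version that exposes FastaiLRFinder.apply_suggested_lr() (shown in the how-to guide linked earlier); the manual per-group loop is only an equivalent illustration.

# Write the suggested learning rate(s) back into the optimizer.
suggested = lr_finder.lr_suggestion()  # one value per param group, e.g. [0.1045..., 0.1045...]

# Option 1: let the finder update every param group.
lr_finder.apply_suggested_lr(optimizer)

# Option 2: equivalent manual assignment, group by group.
if not isinstance(suggested, (list, tuple)):
    suggested = [suggested] * len(optimizer.param_groups)
for param_group, lr in zip(optimizer.param_groups, suggested):
    param_group["lr"] = lr

print([pg["lr"] for pg in optimizer.param_groups])

If you want to keep the original 10x ratio between the two groups, rescale the per-group values after the manual assignment yourself; as noted above, the finder will not do that for you.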

Top Results From Across the Web

How to use FastaiLRFinder with Ignite
This how-to guide demonstrates how we can leverage the FastaiLRFinder handler to find an optimal learning rate to train our model on.

Source code for ignite.handlers.lr_finder - PyTorch
class FastaiLRFinder: "Learning rate finder handler for supervised trainers. While attached, the handler increases the learning rate in between two ..."

PyTorch using LR-Scheduler with param groups of different LR's
You are right, a learning rate scheduler should update each group's learning rate one by one. After a bit of testing, it looks like ...

Source code for monai.optimizers.lr_finder

ignite - bytemeta
gradient_accumulation_steps influences scale of the loss · How to find suitable LRs with FastaiLRFinder when the optimizer has multiple groups?
