
Add support for `OptimizerParamGroup`

See original GitHub issue

OptimizerParamGroup is used to set different learning rates, weight decay rates, and other optimizer properties for different groups of model parameters.

Example: `AdamW(vector<OptimizerParamGroup> param_groups, AdamWOptions defaults)`
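
For a sense of what this would enable on the TorchSharp side, here is a hypothetical sketch. The `AdamW.ParamsGroup` type, its `Parameters`/`Options` properties, and the param-group overload of the factory method are assumptions, modelled on the `ASGD.ParamsGroup` shape proposed in the comments below; only `nn.Linear`, `parameters()`, and the `torch.optim.AdamW` factory are existing TorchSharp API.

// Hypothetical sketch only: `AdamW.ParamsGroup` mirrors the `ASGD.ParamsGroup`
// shape proposed in the comments below; it is not an existing type here.
using TorchSharp;
using static TorchSharp.torch;

var lin1 = nn.Linear(100, 50);   // e.g., a pre-trained layer that should move slowly
var lin2 = nn.Linear(50, 10);    // e.g., a fresh head that can use a larger rate

// Give each group its own learning rate.
var optimizer = torch.optim.AdamW(new AdamW.ParamsGroup[]
{
    new () { Parameters = lin1.parameters(), Options = { LearningRate = 0.0001f } },
    new () { Parameters = lin2.parameters(), Options = { LearningRate = 0.001f } },
});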

Issue Analytics

  • State: closed
  • Created 2 years ago
  • Reactions: 1
  • Comments: 16 (16 by maintainers)

Top GitHub Comments

1 reaction
NiklasGustafsson commented, Feb 11, 2022

@dsyme, @lostmsu – we can have it both ways:

var pgs = new ASGD.ParamsGroup[]
{
    new (lin1.parameters(), new () { LearningRate = 0.005f }),
    new (lin2.parameters())
};

var optimizer = torch.optim.ASGD(new ASGD.ParamsGroup[]
{
    new () { Parameters = lin1.parameters(), Options = { LearningRate = 0.005f } },
    new () { Parameters = lin2.parameters() }
});

let pgs = [|
    SGD.ParamsGroup(Parameters = model.parameters(), Options = SGD.Options(momentum = 1.0, dampening = 0.5));
     SGD.ParamsGroup(model.parameters(), momentum = 1.5, dampening = 0.1)
|]

let optimizer = SGD([|
    SGD.ParamsGroup(model.parameters(), momentum = 1.0, dampening = 0.5);
    SGD.ParamsGroup(model.parameters(), momentum = 1.5, dampening = 0.1)
|], lr)
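
Either construction style yields an ordinary optimizer instance, so the rest of the training loop stays the same. A rough usage sketch in C#, assuming `lin1` and `lin2` are `Linear` modules whose shapes compose (e.g., 100 -> 50 and 50 -> 10) and that the param-group overload above lands as proposed:

// Standard TorchSharp training-loop calls; only the construction above is new.
// Assumed shapes: lin1: 100 -> 50, lin2: 50 -> 10.
var x = torch.randn(32, 100);
var y = torch.randn(32, 10);

optimizer.zero_grad();
var loss = torch.nn.functional.mse_loss(lin2.forward(lin1.forward(x)), y);
loss.backward();
optimizer.step();   // lin1's parameters step with LearningRate = 0.005f, lin2's with the defaults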

1 reaction
NiklasGustafsson commented, Feb 9, 2022

Which can be simplified further by renaming the optimizer class. That should be alright, since the name of the factory method, which should be the natural way of creating an optimizer, doesn't change:

var pgs = new ParamsGroup<ASGD.Options>[]
{
    new () { Parameters = lin1.parameters(), Options = { LearningRate = 0.005f } },
    new () { Parameters = lin2.parameters() }
};
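
For completeness, the generic form would then feed into the same factory method as before; a sketch, still assuming the overload shape from the previous comment rather than a shipped API:

var optimizer = torch.optim.ASGD(pgs);   // factory name unchanged; only the group type is generic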

I’ll need to check how this looks in F#, too. Just as important to get the F# UX right…

Read more comments on GitHub.

