
Couple of suggestions for `nn.Sequential`.

See original GitHub issue

Hi @patrick-kidger,

I was wondering if the following modifications are worth adding to `nn.Sequential`:

  1. Supporting `OrderedDict`s. They have an ordering, but also make it easier to access specific entries by name rather than by index (a PyTorch-style sketch follows this list). Possible bug: at the moment it is possible to initialize a `Sequential` with an `OrderedDict`, but only to initialize it; arguably it shouldn't be accepted at all?

  2. Supporting `delete`, `insert`, `update`, and `append` to edit the layers of an existing model. Given that `__getitem__` already exists, these would also be quality-of-life additions.
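
For reference, here is a minimal sketch of the kind of usage PyTorch's `torch.nn.Sequential` already supports; the layer names and sizes are invented purely for illustration, and this is PyTorch rather than Equinox:

```python
from collections import OrderedDict

import torch.nn as nn

# Named layers via an OrderedDict; insertion order is preserved.
model = nn.Sequential(OrderedDict([
    ("fc1", nn.Linear(10, 20)),
    ("act", nn.ReLU()),
    ("fc2", nn.Linear(20, 1)),
]))

print(model.fc1)            # access a layer by name
print(model[0])             # access a layer by index
model.append(nn.Sigmoid())  # in-place editing (recent PyTorch versions)
```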

Thanks.

Issue Analytics

  • State: open
  • Created a year ago
  • Comments: 5 (5 by maintainers)

Top GitHub Comments

2 reactions
patrick-kidger commented, Aug 31, 2022

> Currently the `layers` attribute of `Sequential` can be a list – which of course supports insert, append, etc. However it may make sense to force this to be a tuple to keep it immutable.

Yeah, that’d make sense.
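
A minimal sketch of what that could look like (purely illustrative, not the actual Equinox implementation; the class name is made up):

```python
from typing import Tuple

import equinox as eqx


class TupleSequential(eqx.Module):
    """Illustrative only: store the layers as a tuple rather than a list."""

    layers: Tuple[eqx.Module, ...]

    def __init__(self, layers):
        # Coerce whatever sequence is passed in (list, generator, ...) to a
        # tuple, so the stored structure can't be mutated in place afterwards.
        self.layers = tuple(layers)
```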

> This is pretty common with transfer learning – i.e. doing “model surgery” on a pretrained model to adapt it to a new task. This is where the mutability of PyTorch Modules is really convenient. It would be pretty cool if equinox Modules could be unfrozen and refrozen to allow PyTorch-like manipulation.

For performing model surgery - or indeed any kind of PyTree manipulation - I’d recommend `eqx.tree_at`. For example:

```python
import equinox as eqx

mlp = eqx.nn.MLP(...)
new_linear = eqx.nn.Linear(...)
# Returns a copy of `mlp` with its last layer replaced by `new_linear`.
mlp2 = eqx.tree_at(lambda m: m.layers[-1], mlp, new_linear)
```

This replaces the last layer of the MLP.
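
The same pattern applies to `Sequential` as well, e.g. for the transfer-learning “surgery” quoted above. A rough sketch, in which the layer sizes and keys are made up for illustration:

```python
import equinox as eqx
import jax.random as jr

key1, key2, key3 = jr.split(jr.PRNGKey(0), 3)

# A toy "pretrained" model: a backbone plus a 10-way classification head.
model = eqx.nn.Sequential([
    eqx.nn.Linear(784, 128, key=key1),
    eqx.nn.Linear(128, 10, key=key2),
])

# Model surgery: swap in a new 2-way head without mutating the original model.
new_head = eqx.nn.Linear(128, 2, key=key3)
finetuned = eqx.tree_at(lambda m: m.layers[-1], model, new_head)
```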

Does this generally solve your use cases here?

I’ve been thinking of adding an “advanced tricks” section to the docs describing this kind of thing.

0 reactions
jenkspt commented, Aug 31, 2022

> Is it common to need to change which layers you have in a module after-the-fact?

This is pretty common with transfer learning – i.e. doing “model surgery” on a pretrained model to adapt it to a new task. This is where the mutability of PyTorch Modules is really convenient. It would be pretty cool if equinox Modules could be unfrozen and refrozen to allow PyTorch-like manipulation.

Read more comments on GitHub >

Top Results From Across the Web

  • PyTorch - Neural Network I - nn.Sequential - YouTube: In this video I cover nn.Sequential API and details of training a sequential network.
  • A simple extension of nn.Sequential - vision - PyTorch Forums: Hi there! I'm working through some Udacity courses on PyTorch and decided to go the extra mile to extend the nn.Sequential class.
  • When should I use nn.ModuleList and when ... - Stack Overflow: I am new to Pytorch and one thing that I don't quite understand is the usage of nn.ModuleList and nn.Sequential. Can I...
  • Three Ways to Build a Neural Network in PyTorch: Having said this, the goal of this article is to illustrate a few different ways that one can create a neural network in...
  • CS231n/PyTorch.ipynb at master - GitHub: Module to define arbitrary neural network architecture. PyTorch Sequential API: we will use nn.Sequential to define a linear feed-forward network very ...
