
Why do I have to initialize the deformation gradient with zeros?


Why do I have to initialize the deformation gradient with zeros, add ones to the diagonal terms and finally add the displacement gradient? I mean, I tried to write

F = grad(u)
for i in range(3):
    F[i, i] += 1.

instead of

F = zeros_like(grad(u))
for i in range(3):
    F[i, i] += 1.
F += grad(u)

but this seems to give a different result. Honestly, I do not understand why. Do you have any explanation for that? (I took the code snippet from example 36.)

In addition to that, I find the different import statements for math helpers a bit confusing, although I understand why they are necessary. I mean these imports:

from numpy import (einsum, linalg as nla, zeros,
                   zeros_like, concatenate, split as npsplit,
                   hstack, abs as npabs, arange, sqrt)
from skfem.helpers import grad, transpose, det, inv

At first I did not understand why I can't use det and inv from numpy.linalg, but then I realized the special structure of the arrays: (3, 3, n, m). I think this shape is used to obtain reasonable performance with Python/NumPy 😄
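The axis convention is the crux: numpy.linalg.det and numpy.linalg.inv broadcast over the *leading* axes and expect the matrix axes *last*, while the arrays here carry the matrix axes first. A minimal sketch with assumed shapes (5 elements, 7 quadrature points), showing that plain NumPy still works once the axes are moved:

```python
import numpy as np

# Hypothetical skfem-style field: matrix axes first, shape (3, 3, n, m).
A = np.random.rand(3, 3, 5, 7)

# np.linalg.det wants the matrix axes LAST, so move them there first:
detA = np.linalg.det(np.moveaxis(A, (0, 1), (-2, -1)))   # shape (5, 7)

# Cross-check via the explicit 3x3 cofactor expansion, which works
# directly on the (3, 3, n, m) layout:
detA2 = (A[0, 0] * (A[1, 1] * A[2, 2] - A[1, 2] * A[2, 1])
         - A[0, 1] * (A[1, 0] * A[2, 2] - A[1, 2] * A[2, 0])
         + A[0, 2] * (A[1, 0] * A[2, 1] - A[1, 1] * A[2, 0]))

print(np.allclose(detA, detA2))  # True
```

The skfem.helpers versions take care of this convention for you; the cofactor expansion above is only to show the shapes involved.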

Anyway, I'll experiment with the examples and I hope GitHub Issues are the right place to ask. Thanks and with best regards, Andreas

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 8 (8 by maintainers)

Top GitHub Comments

1 reaction
kinnala commented, Mar 25, 2021

This is a classic mistake when using NumPy. I did not notice it at first, but now it's obvious.

One should write

F = grad(u).copy()

instead. Otherwise you’ll modify the contents of w.
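A minimal sketch of the fix, where the shared array (named `w` here, as in the comment above) and its shape are placeholders:

```python
import numpy as np

w = np.zeros((3, 3, 4))      # placeholder for the shared gradient storage

F = w.copy()                 # independent array: the += below cannot touch w
for i in range(3):
    F[i, i] += 1.0

print(w[0, 0, 0], F[0, 0, 0])   # 0.0 1.0 -- w is untouched
```

With the copy, F can be modified freely while whatever else reads w keeps seeing the original values.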

1 reaction
kinnala commented, Mar 24, 2021

In the beginning I did not understand why I can’t use det and inv from numpy.linalg but then I realized the special structure of the arrays (3,3,n,m). I think this shape is used in order to obtain a reasonable performance with Python/NumPy

This is correct. Here is some information related to the forms: https://scikit-fem.readthedocs.io/en/latest/forms.html#helpers-are-useful-but-not-necessary


