Set alpha value to 0 for RunningAverage
See original GitHub issue · ❓ Questions/Help/Support
Looking at the source code of the `RunningAverage` class, I see a smoothing factor, `alpha`, which defaults to 0.98. As far as I know, it smooths the metric value like this:
```python
def compute(self) -> Union[torch.Tensor, float]:
    if self._value is None:
        self._value = self._get_src_value()
    else:
        self._value = self._value * self.alpha + (1.0 - self.alpha) * self._get_src_value()
    return self._value
```
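The update rule above is a standard exponential moving average (EMA). A minimal standalone sketch of that rule, in plain Python with no ignite dependency (the helper name `ema_update` is mine, not the library's):

```python
# Standalone demonstration of the EMA update used by RunningAverage.compute().
# `ema_update` is an illustrative helper, not part of ignite.

def ema_update(prev, new, alpha=0.98):
    """Return the smoothed value: prev * alpha + (1 - alpha) * new."""
    if prev is None:          # first observation: nothing to smooth yet
        return new
    return prev * alpha + (1.0 - alpha) * new

smoothed = None
for v in [1.0, 2.0, 3.0]:
    smoothed = ema_update(smoothed, v, alpha=0.98)
# With alpha close to 1 the result tracks the history, not the latest value:
# 1.0 -> 1.02 -> 1.0596
```

This shows why a high `alpha` makes the reported value lag behind the raw metric.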
My use case is that I want to log metrics during validation (using `contrib.metrics.ProgressBar`), and these metrics should not be smoothed. Another problem is that the already-smoothed value gets smoothed a second time when plotted to TensorBoard, which makes the plot quite unreliable. To disable the smoothing factor, I set `alpha` to 0, and it raised an exception:
```python
if not (0.0 < alpha <= 1.0):
    raise ValueError("Argument alpha should be a float between 0.0 and 1.0.")
```
I wonder why the check is not `0.0 <= alpha <= 1.0`. Is this a mistake, or is there a reason behind it?
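For what the question is asking, note that in the limit `alpha = 0` the EMA update reduces to the raw source value, i.e. smoothing is fully disabled. A quick sketch (pure Python, reusing the update rule from the snippet above; `ema_update` is an illustrative name, not ignite API):

```python
# With alpha = 0:  prev * 0 + (1 - 0) * new  ==  new
# i.e. the "average" is always just the latest raw value.

def ema_update(prev, new, alpha):
    if prev is None:
        return new
    return prev * alpha + (1.0 - alpha) * new

smoothed = None
for v in [10.0, 20.0, 30.0]:
    smoothed = ema_update(smoothed, v, alpha=0.0)
# smoothed is now 30.0: no history is retained
```

So allowing `alpha = 0` would be mathematically well defined; the question is whether the library intends to permit it.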
Issue Analytics
- Created: 3 years ago
- Comments: 11 (2 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
@VinhLoiIT we just merged PR #979 with `MetricUsage`; it will be available today or tomorrow in the nightly releases. If you can give it a try and leave us feedback, that would be awesome! Thanks! Meanwhile, I'll close the issue as solved. Please open another issue if you need more support.
@vfdev-5 I think it's worth mentioning that the new metric (e.g., `Running`) should have an option to reset the metric every `reset_interval` (similar to the behavior of `running_loss` in the PyTorch tutorial), so it can be used in both training and validation. Then, to compute running metrics while training or validating, we could use the `Running` class to get the raw value and `RunningAverage` to get the smoothed value. What do you think?
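The periodically-resetting metric proposed in this comment could be sketched roughly as follows. This is purely illustrative: the class name `RunningMean` and the `reset_interval` parameter are my assumptions, not ignite API, and a real ignite metric would instead subclass `Metric` and hook into the engine's events:

```python
# Hypothetical sketch of a "running" metric that resets every
# `reset_interval` updates, mimicking the running_loss pattern from the
# PyTorch tutorials. Not ignite API; names are illustrative only.

class RunningMean:
    def __init__(self, reset_interval=100):
        self.reset_interval = reset_interval
        self.total = 0.0
        self.count = 0

    def update(self, value):
        # Drop the accumulated window once it reaches reset_interval samples.
        if self.count == self.reset_interval:
            self.total, self.count = 0.0, 0
        self.total += value
        self.count += 1

    def compute(self):
        return self.total / self.count

m = RunningMean(reset_interval=2)
for v in [1.0, 3.0, 10.0]:
    m.update(v)
# the window reset before the third update, so only 10.0 is accumulated
```

Resetting the window keeps the reported value responsive during long epochs, which is exactly what the EMA with a high `alpha` does not do.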