Unify metrics output type
🚀 Feature
The idea is to verify the output type of every metric (the output of the `compute()` function) and update the docs accordingly.
In general, a metric's output should be a float. In some particular cases, like `Recall`/`Precision` with `average=False`, the output is a torch tensor. So, let's review the options and decide which signature the `compute()` method should have:
- `def compute() -> float`
- `def compute() -> Union[float, torch.Tensor]`, with the tensor on CPU
- `def compute() -> torch.Tensor`, with the tensor on CPU
To address this FR, we have to check, for each metric, what type it is supposed to return and update the docs accordingly.
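To illustrate why the current situation forces callers to branch on the return type, here is a minimal sketch in plain Python. `MockPrecision` is a hypothetical stand-in for a `Precision`-like metric (not ignite's actual implementation, and it returns a list where ignite would return a `torch.Tensor`):

```python
from typing import List, Union


class MockPrecision:
    """Hypothetical stand-in for a Precision-like binary-class metric,
    showing how `average` changes the return type of compute()."""

    def __init__(self, average: bool = True) -> None:
        self.average = average
        self._tp = [0, 0]         # per-class true positives
        self._predicted = [0, 0]  # per-class predicted-positive counts

    def update(self, y_pred: int, y: int) -> None:
        self._predicted[y_pred] += 1
        if y_pred == y:
            self._tp[y_pred] += 1

    def compute(self) -> Union[float, List[float]]:
        # Per-class precision; in ignite this would be a torch.Tensor.
        per_class = [
            tp / p if p > 0 else 0.0
            for tp, p in zip(self._tp, self._predicted)
        ]
        if self.average:
            return sum(per_class) / len(per_class)  # a single float
        return per_class  # per-class values, tensor-like


metric = MockPrecision(average=False)
for y_pred, y in [(0, 0), (1, 0), (1, 1), (0, 0)]:
    metric.update(y_pred, y)

result = metric.compute()
# Callers must branch on the type -- the motivation for unifying it:
if isinstance(result, list):
    print("per-class:", result)   # [1.0, 0.5]
else:
    print("averaged:", result)
```

With a unified return type, the `isinstance` branch at the end would disappear from downstream code.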
Issue Analytics
- State:
- Created 3 years ago
- Comments: 12 (7 by maintainers)
IMHO, handling a unique type could simplify the underlying code. If no performance issue occurs, I would prefer to have `torch.Tensor` (compatible with `float`) rather than a union of types. But maybe it's because I'm too lazy 😊
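In support of the single-`torch.Tensor` option, a 0-dimensional CPU tensor interoperates with plain floats in most contexts, so callers expecting a float often keep working unchanged. A short sketch (assumes torch is installed):

```python
import torch

# A scalar metric value returned as a 0-dim CPU tensor.
value = torch.tensor(0.85)

# It behaves like a float in comparisons and arithmetic...
assert value > 0.5
doubled = value * 2       # still a tensor, holding 1.7

# ...and converts explicitly when a true Python float is required.
as_float = float(value)   # equivalently: value.item()
print(as_float, doubled.item())
```

The explicit `float(...)`/`.item()` call is the main cost of this option for callers that strictly need a Python float.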
Didn't know this, thanks. Since the day I switched to `idist`, I turned everything related into `idist.xxx`. 😅