Timestamp resolution not granular enough
Hey team,
I have been experiencing a problem where logs in Azure Log Analytics arrive out of order. This is due to the timestamp resolution not being high enough (e.g. if you log twice, a few nanoseconds apart, both records get sent with the same timestamp).
This is shown by the screenshot below, produced by the following code:
import logging
import time

from opencensus.ext.azure.log_exporter import AzureLogHandler

logger = logging.getLogger(__name__)

if app_insights_connection_str:
    logger.addHandler(AzureLogHandler(
        connection_string=app_insights_connection_str,
        export_interval=EXPORT_INTERVAL_SECS,
    ))

for i in range(10000):
    for j in range(10):
        logger.info(f'log loop {i}, {j}')
    time.sleep(1)
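To make the underlying behaviour easy to inspect without an Azure connection, here is a minimal self-contained sketch (the `CapturingHandler` class and the `"timestamp_demo"` logger name are my own illustration, not part of any library) that shows the timestamp Python attaches to each record:

```python
import logging

class CapturingHandler(logging.Handler):
    """Illustrative handler that keeps LogRecord objects in memory
    so their timestamps can be inspected."""
    def __init__(self):
        super().__init__()
        self.records = []

    def emit(self, record):
        self.records.append(record)

logger = logging.getLogger("timestamp_demo")
logger.setLevel(logging.INFO)
handler = CapturingHandler()
logger.addHandler(handler)

for i in range(5):
    logger.info("log loop %d", i)

# LogRecord.created is a float set from time.time() in
# logging/__init__.py, so back-to-back calls can produce ties.
created = [r.created for r in handler.records]
print(created)
```

Sorting solely on these `created` values cannot distinguish records that landed in the same clock tick, which is why ordering breaks downstream.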
I think this is due to the code in logging/__init__.py around line 287, which produces millisecond-level timestamps:
def __init__(self, name, level, pathname, lineno,
             msg, args, exc_info, func=None, sinfo=None, **kwargs):
    """
    Initialize a logging record with interesting information.
    """
    ct = time.time()
    self.name = name
    self.msg = msg
Perhaps this could be fixed by logging at nanosecond resolution using time.time_ns()? Alternatively, we could add a separate counter that counts up and is submitted as part of the default envelope, giving us ordering information.
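The counter idea can be sketched with a stock `logging.Filter` that stamps each record with a strictly increasing sequence number; a custom exporter would then have to copy it into the envelope. The `SequenceFilter` class and the `seq` attribute name here are hypothetical, not part of opencensus:

```python
import itertools
import logging

class SequenceFilter(logging.Filter):
    """Attach a process-wide, strictly increasing sequence number
    to every record so timestamp ties can be broken by ordering."""
    _counter = itertools.count()

    def filter(self, record):
        record.seq = next(self._counter)  # hypothetical attribute name
        return True

class CapturingHandler(logging.Handler):
    """Illustrative in-memory handler for inspecting records."""
    def __init__(self):
        super().__init__()
        self.records = []

    def emit(self, record):
        self.records.append(record)

logger = logging.getLogger("seq_demo")
logger.setLevel(logging.INFO)
handler = CapturingHandler()
logger.addHandler(handler)
logger.addFilter(SequenceFilter())

for i in range(3):
    logger.info("message %d", i)

# Sorting by (created, seq) restores submission order even when
# several records share an identical timestamp.
print([r.seq for r in handler.records])  # [0, 1, 2]
```

The filter runs before handlers see the record, so every downstream exporter observes the same sequence number.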
Thanks!
Alex
Issue Analytics
- State: Closed
- Created 2 years ago
- Reactions: 1
- Comments: 9 (4 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Fair points. Though note that it’s not even the ingestion service; we can’t seem to get high enough precision from Python to denote the difference between timestamps in some cases.
In other words, time() and time_ns() return the same thing on multiple calls back to back. So even if we had nanosecond granularity, it wouldn’t necessarily help since multiple log calls could have the same nanosecond.
A counter solves this, though I tend to agree that if no one else seems to need this, it’s probably not worth injecting it in the code for all.
Thanks anyway.
Closing for now. Feel free to reopen if workaround does not work.