gRPC client + botocore + Datadog exporter instrumentation error
This one is a bit complex: when using the gRPC client instrumentation to make a call to an instrumented gRPC service that then makes a call via boto to the DynamoDB API, the combination of instrumentation leads to this error:
2020-12-17 14:19:39,666 DEBUG [botocore.httpsession:298] Exception received when sending urllib3 HTTP request
Traceback (most recent call last):
  File "/home/michael/work/myapp/.venv/lib/python3.8/site-packages/botocore/httpsession.py", line 254, in send
    urllib_response = conn.urlopen(
  File "/home/michael/work/myapp/.venv/lib/python3.8/site-packages/urllib3/connectionpool.py", line 699, in urlopen
    httplib_response = self._make_request(
  File "/home/michael/work/myapp/.venv/lib/python3.8/site-packages/urllib3/connectionpool.py", line 394, in _make_request
    conn.request(method, url, **httplib_request_kw)
  File "/home/michael/work/myapp/.venv/lib/python3.8/site-packages/urllib3/connection.py", line 234, in request
    super(HTTPConnection, self).request(method, url, body=body, headers=headers)
  File "/usr/lib64/python3.8/http/client.py", line 1255, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/home/michael/work/myapp/.venv/lib/python3.8/site-packages/botocore/awsrequest.py", line 91, in _send_request
    rval = super(AWSConnection, self)._send_request(
  File "/usr/lib64/python3.8/http/client.py", line 1296, in _send_request
    self.putheader(hdr, value)
  File "/home/michael/work/myapp/.venv/lib/python3.8/site-packages/urllib3/connection.py", line 219, in putheader
    _HTTPConnection.putheader(self, header, *values)
  File "/usr/lib64/python3.8/http/client.py", line 1232, in putheader
    if _is_illegal_header_value(values[i]):
TypeError: expected string or bytes-like object
Without either client instrumentation, this call works perfectly.
Steps to reproduce
As described above. I don’t have a simplified sample, but I’ll try to produce one.
What is the expected behavior?
No exception 😃
What is the actual behavior?
The above exception.
Additional context
I’m definitely working on this, but any suggestions would certainly be welcome; I’m not quite sure what the invalid header is or where it might get set.
Issue Analytics
- Created: 3 years ago
- Comments: 18 (18 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Have you tried adding a pdb statement to this snippet, within a try/except?
You should be able to extract all the information you need from that pdb session. Prints work too, just a bit more time-consuming.
That looks a lot like the PR for this which I closed because it didn’t fix the problem at the source; it just prevented it from propagating:
https://github.com/open-telemetry/opentelemetry-python-contrib/pull/272
If you think this is the best way to fix it, that works for me.
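For context, a defensive fix in that vein (a hypothetical sketch, not necessarily what the linked PR did) would coerce values to str at injection time, so downstream header validation never sees an unexpected type, at the cost of masking whichever instrumentation produced the bad value:

```python
def coercing_setter(carrier: dict, key: str, value) -> None:
    # Hypothetical defensive carrier setter: guarantee str values so that
    # http.client header validation cannot raise on an unexpected type.
    # This hides the root cause rather than fixing it at the source.
    carrier[key] = value if isinstance(value, str) else str(value)

headers = {}
coercing_setter(headers, "traceparent", "00-abc-def-01")  # stored as-is
coercing_setter(headers, "x-bad", ["oops"])               # coerced to str
```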