Revisit use of read_namespaced_pod_log
Right now we use the k8s `read_namespaced_pod_log` to extract the GraphQL response printed by dagster-graphql to stdout (see here).

We should probably find a less fragile way to accomplish this; it's not clear to me that this API is designed for durability / reliable reads of pod outputs.
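A minimal sketch of the pattern the issue describes, assuming the GraphQL response is printed as a single JSON line mixed into the pod's stdout. `read_namespaced_pod_log(name=..., namespace=...)` is the real `kubernetes.client.CoreV1Api` method; the function name, log format, and the idea of taking the last parseable JSON line are hypothetical illustrations of why this is fragile:

```python
import json


def fetch_graphql_response(api, pod_name: str, namespace: str) -> dict:
    """Read a pod's stdout via the k8s API and parse out a JSON payload.

    `api` is expected to provide read_namespaced_pod_log(name=..., namespace=...)
    returning the log text as a string, as kubernetes.client.CoreV1Api does;
    in real use you would pass CoreV1Api(). This mirrors the fragility the
    issue describes: the GraphQL response is assumed to be the last
    well-formed JSON line among arbitrary interleaved log output.
    """
    raw = api.read_namespaced_pod_log(name=pod_name, namespace=namespace)
    # Scan from the end, since the response is expected to be printed last.
    for line in reversed(raw.splitlines()):
        line = line.strip()
        if not line:
            continue
        try:
            return json.loads(line)
        except json.JSONDecodeError:
            continue
    raise ValueError("no JSON payload found in pod logs")
```

Any extra log line, truncation, or log rotation between the print and the read breaks this, which is the durability concern raised above.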
Issue Analytics
- State:
- Created 4 years ago
- Comments: 8 (8 by maintainers)
Top GitHub Comments
per conversation this morning with @catherinewu, we could instead just have the containers write directly to the instance database and remove this `parse_raw_log_lines` entirely. If the containers had access to the instance, we could also remove passing tags over the graphql interface; see https://dagster.phacility.com/D4017

We've solved this with the `k8s_job_executor`; it persists with the `celery_k8s_job_executor`.
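The alternative suggested in the comments, writing the result straight to a database the containers share instead of scraping it back out of pod logs, can be sketched as follows. This uses `sqlite3` purely as a stand-in for the Dagster instance database, and the table and function names are hypothetical:

```python
import json
import sqlite3


def record_response(conn: sqlite3.Connection, run_id: str, response: dict) -> None:
    """Container side: persist the GraphQL response keyed by run id,
    instead of printing it to stdout for a log-scraping reader."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS graphql_responses "
        "(run_id TEXT PRIMARY KEY, payload TEXT)"
    )
    conn.execute(
        "INSERT OR REPLACE INTO graphql_responses VALUES (?, ?)",
        (run_id, json.dumps(response)),
    )
    conn.commit()


def load_response(conn: sqlite3.Connection, run_id: str) -> dict:
    """Reader side: fetch the response directly, no pod-log parsing needed."""
    row = conn.execute(
        "SELECT payload FROM graphql_responses WHERE run_id = ?", (run_id,)
    ).fetchone()
    if row is None:
        raise KeyError(run_id)
    return json.loads(row[0])
```

The trade-off is the one noted above: the containers need access to the instance database, but the stdout round-trip (and `parse_raw_log_lines`) disappears.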