GCP fleet collection improvements
See original GitHub issue for status.
- `--kill` prompts Y/N for each deletion, which slows the process down when there are lots of instances.
- The output scrolled too fast when I ran the script, but I noticed some instances were not created. I think the error was “CPU limit” or something similar, maybe “compute limit”.
- I don’t really care about the trace data; is there a way to make collecting or downloading it optional?
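A minimal sketch of how the per-instance prompt could be avoided, assuming a simple `prefix-N` naming scheme for the fleet (the naming, the `fleet_names`/`delete_fleet` helpers, and the zone handling are illustrative assumptions, not the script's real API; `--quiet` is gcloud's standard flag for suppressing confirmation prompts):

```shell
#!/usr/bin/env bash
# Sketch only: the fleet naming scheme (prefix-0 .. prefix-(N-1)) and these
# helper names are assumptions, not taken from the actual collection script.

# Emit the instance names for a fleet of N nodes, one per line.
fleet_names() {
  local prefix="$1" count="$2"
  local i
  for i in $(seq 0 $((count - 1))); do
    printf '%s-%d\n' "$prefix" "$i"
  done
}

# Delete the whole fleet without a per-instance Y/N prompt.
# --quiet is gcloud's global flag to suppress confirmation prompts;
# add --zone if the instances are not in the configured default zone.
delete_fleet() {
  fleet_names "$1" "$2" | xargs gcloud compute instances delete --quiet
}

# The "CPU limit" creation failures are most likely regional quota; the
# current CPUS quota and usage appear in the output of:
#   gcloud compute regions describe REGION
```

For example, `delete_fleet fleet-node 50` would issue one non-interactive delete for `fleet-node-0` through `fleet-node-49` instead of confirming each instance by hand.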
Issue Analytics
- State:
- Created: 3 years ago
- Comments: 6
Top Results From Across the Web
- How fleets work | Fleet management: This page provides a deeper dive into how fleets help you manage multi-cluster deployments, including some key fleet terminology and concepts.
- Overview | Last Mile Fleet Solution: Google Maps Platform Last Mile Fleet Solution is a development toolkit for building applications to power first and last mile delivery fleets.
- Plan and operate a fleet of shared runners - GitLab Docs: Each CI/CD job is executed in a Google Cloud Platform (GCP) n1-standard-1 VM. ... An essential step in operating a runner fleet at...
- Managing large fleets of Compute Engine VM fleets - YouTube: Speaker: Ravi Kiran Chintalapudi. Watch more: Google Cloud Next '20: OnAir → https://goo.gle/next2020
- Silent Data Corruption - Google Cloud Platform Console Help: As part of our ongoing fleet maintenance, we regularly screen our fleet on a ... against potentially erroneous garbage collection of files and...
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
So I’ve been using the scripts a lot over the last couple of days and have already started to make a couple of changes. I’ll put up a PR soon that also includes new scripts for extracting the data and doing some quick analysis on it.
Yeah, that could work. I was thinking of implementing some sort of timeout, although I’m not very proficient in Bash, so I can’t say how feasible or difficult that would be.
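One way to sidestep writing timeout logic in Bash is coreutils’ `timeout(1)`, which kills a command after a time limit and exits with status 124 when the limit is hit. A hedged sketch (the `run_with_timeout` wrapper and its messages are illustrative, not part of the existing scripts):

```shell
#!/usr/bin/env bash
# Sketch: wrap a long-running step in coreutils' `timeout` so one stuck
# instance doesn't block the whole collection run.
run_with_timeout() {
  local limit="$1"; shift
  if timeout "$limit" "$@"; then
    echo "ok"
  else
    local rc=$?
    if [ "$rc" -eq 124 ]; then
      # timeout(1) exits with status 124 when the time limit was reached
      echo "timed out after ${limit}s"
    else
      echo "failed (rc=$rc)"
    fi
  fi
}
```

For example, `run_with_timeout 300 some_collection_step` would kill the step after five minutes and report it, rather than hanging the run indefinitely.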