Stuck on an issue?

Lightrun Answers was designed to reduce the constant googling that comes with debugging third-party libraries. It collects links to all the places you might be looking while hunting down a tough bug.

And, if you’re still stuck at the end, we’re happy to hop on a call to see how we can help out.

Optimal solution for reducing GetMetricStatistics requests?

See original GitHub issue

What is the recommended solution for reducing CloudWatch API calls? When scraping the following metrics, we are currently seeing over 4 million GetMetricStatistics requests in about two days, which is really expensive:

  - aws_namespace: AWS/SQS
    aws_metric_name: ApproximateAgeOfOldestMessage
    aws_dimensions: [QueueName]
    aws_statistics: [Average, Maximum]
  - aws_namespace: AWS/SQS
    aws_metric_name: NumberOfMessagesDeleted
    aws_dimensions: [QueueName]
    aws_statistics: [Sum]
  - aws_namespace: AWS/SQS
    aws_metric_name: NumberOfMessagesSent
    aws_dimensions: [QueueName]
    aws_statistics: [Sum, Average]
  - aws_namespace: AWS/SQS
    aws_metric_name: ApproximateNumberOfMessagesVisible
    aws_dimensions: [QueueName]
    aws_statistics: [Sum]
  - aws_namespace: AWS/SNS
    aws_metric_name: NumberOfMessagesPublished
    aws_dimensions: [TopicName]
    aws_statistics: [Sum]
  - aws_namespace: AWS/ApplicationELB
    aws_metric_name: HealthyHostCount
    aws_dimensions: [TargetGroup, LoadBalancer]
    aws_statistics: [Average]
  - aws_namespace: AWS/ApplicationELB
    aws_metric_name: UnHealthyHostCount
    aws_dimensions: [TargetGroup, LoadBalancer]
    aws_statistics: [Average]
  - aws_namespace: AWS/ApplicationELB
    aws_metric_name: TargetConnectionErrorCount
    aws_dimensions: [TargetGroup, LoadBalancer]
    aws_statistics: [Sum]
  - aws_namespace: AWS/ApplicationELB
    aws_metric_name: TargetResponseTime
    aws_dimensions: [TargetGroup, LoadBalancer]
    aws_statistics: [Average]
    aws_extended_statistics: [p75, p90, p95]
  - aws_namespace: AWS/ApplicationELB
    aws_metric_name: RequestCount
    aws_dimensions: [LoadBalancer]
    aws_statistics: [Sum]
  - aws_namespace: AWS/ApplicationELB
    aws_metric_name: HTTPCode_ELB_4XX_Count
    aws_dimensions: [LoadBalancer]
    aws_statistics: [Sum]
  - aws_namespace: AWS/ApplicationELB
    aws_metric_name: HTTPCode_ELB_5XX_Count
    aws_dimensions: [LoadBalancer]
    aws_statistics: [Sum]
  - aws_namespace: AWS/ApplicationELB
    aws_metric_name: HTTPCode_Target_2XX_Count
    aws_dimensions: [TargetGroup, LoadBalancer]
    aws_statistics: [Sum]
  - aws_namespace: AWS/ApplicationELB
    aws_metric_name: HTTPCode_Target_3XX_Count
    aws_dimensions: [TargetGroup, LoadBalancer]
    aws_statistics: [Sum]
  - aws_namespace: AWS/ApplicationELB
    aws_metric_name: HTTPCode_Target_4XX_Count
    aws_dimensions: [TargetGroup, LoadBalancer]
    aws_statistics: [Sum]
  - aws_namespace: AWS/ApplicationELB
    aws_metric_name: HTTPCode_Target_5XX_Count
    aws_dimensions: [TargetGroup, LoadBalancer]
    aws_statistics: [Sum]
  - aws_namespace: "AWS/Billing"
    aws_dimensions: [Currency,ServiceName]
    aws_dimension_select:
      Currency: [USD]
    aws_metric_name: EstimatedCharges
    aws_statistics: [Sum, Average]
    set_timestamp: false

We are looking for a solution other than simply reducing the number of metrics scraped. Could you provide a “scrape_delay” option that waits a predefined interval after one scrape completes before the next one begins?
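
Rough arithmetic (an editor's estimate, not from the issue itself): the exporter makes roughly one GetMetricStatistics call per metric/dimension combination on every scrape, and 4 million calls in ~2 days works out to about 1,400 calls per minute. At a 60s scrape interval, that implies on the order of 1,400 metric/dimension combinations across the 17 entries above, so call volume scales with both the metric list and the scrape frequency. One lever that avoids dropping metrics is scraping the exporter less often. A minimal sketch, assuming a plain Prometheus scrape_configs entry (the job name and target are placeholders; 9106 is the exporter's default port):

scrape_configs:
  - job_name: cloudwatch-exporter      # placeholder job name
    scrape_interval: 300s              # 5 minutes instead of a typical 60s default
    scrape_timeout: 60s                # CloudWatch scrapes can be slow with many metrics
    static_configs:
      - targets: ['cloudwatch-exporter:9106']   # default exporter port

Each doubling of the interval roughly halves the API call count, at the cost of coarser time resolution.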

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Comments: 8 (1 by maintainers)

Top GitHub Comments

0 reactions
andy-han commented, May 6, 2019

@learnlalit I have a YAML file that overrides some of the default values in the prometheus-operator values file. In it, we specify an additionalServiceMonitor for the cloudwatch-exporter, and that is where I changed the scrape interval, i.e. how often Prometheus calls the exporter for metrics:

prometheus:
  additionalServiceMonitors:
    - name: cloudwatch-exporter
      namespace: monitoring
      labels:
        app: cloudwatch-exporter
      jobLabel: cloudwatch-exporter
      endpoints:
        - port: http
          path: /metrics
          interval: "300s"

Top Results From Across the Web

Understand CloudWatch charges and reduce future charges
To reduce costs: Make ListMetrics calls through the console for free rather than making them through the AWS CLI.

A Hidden Cost to Monitoring AWS with 3rd Party Tools
Pulling data via CloudWatch GetMetricData API calls is expensive, not only because of the price of $10.00/million metrics requested, but because ...

[cloudwatch exporter] API usage optimization - Google Groups
So are you intending on trying to move to using the bulk multi-request API call where possible to reduce the number of calls...

CloudWatch billing and cost - 亚马逊云科技
However, GetMetricStatistics is included under the Amazon Free Tier limit for up to one million API requests, which can help you reduce costs...

CloudWatch billing increase - New Relic Documentation
Cause: New Relic Infrastructure Amazon integrations leverage CloudWatch to gather metrics. AWS charges joint customers for requests that exceed the first one ...
