Increase TABLE_COUNT_LIMIT or make it configurable for postgres integration
Additional environment details (Operating System, Cloud provider, etc):
Datadog Agent running as a Docker container (6.10.2), monitoring PostgreSQL 10 on AWS (RDS).
schema has more than 200 tables:
$ grep -c '^CREATE TABLE ' SCHEMA_SQL.sql
202
Steps to reproduce the issue:
- load DB schema
- start datadog agent
- observe metric postgresql.table.count (with schema tag)
Describe the results you received: not all schemas are reported correctly for the table count per schema; some tables are omitted from the count.
Describe the results you expected: postgresql.table.count reports the full number of tables in every schema.
Additional information you deem important (e.g. issue happens only occasionally):
Because the inner SELECT limits the total number of tables returned, the overall metric is incorrect whenever the database contains more than 200 tables. This can lead to unexpected monitoring behavior, for example if an alarm fires when the number of tables changes.
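The truncation described above can be illustrated with a toy sketch (hypothetical schema names and table distribution; only the 200-row limit comes from the issue): with 202 tables split across two schemas, counting only the first 200 rows shortchanges whichever schema's tables fall past the cutoff.

```python
from collections import Counter

# Hypothetical inventory: 202 tables across two schemas, mirroring the
# reporter's setup of just over 200 tables.
tables = [("public", "t_%d" % i) for i in range(150)] + \
         [("reporting", "t_%d" % i) for i in range(52)]

LIMIT = 200  # the hardcoded inner-SELECT limit from the issue

true_counts = Counter(schema for schema, _ in tables)
limited_counts = Counter(schema for schema, _ in tables[:LIMIT])

print(true_counts["reporting"])     # 52
print(limited_counts["reporting"])  # 50 -- two tables silently dropped
```

Which schema loses tables depends on row ordering, so the error can look intermittent even though the underlying data is stable.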
The limit is currently hardcoded to 200 (TABLE_COUNT_LIMIT in postgres.py); it would be good to make it configurable via the database's configuration file.
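A minimal sketch of what making the limit configurable might look like. Only TABLE_COUNT_LIMIT is taken from the issue; the option name `table_count_limit`, the helper functions, and the query shape are assumptions for illustration, not the actual integration code:

```python
TABLE_COUNT_LIMIT = 200  # current hardcoded default in postgres.py


def get_table_count_limit(instance):
    # Hypothetical: read an optional per-instance override from the
    # integration's YAML config, falling back to the hardcoded default.
    return int(instance.get("table_count_limit", TABLE_COUNT_LIMIT))


def build_count_query(limit):
    # Hypothetical query shape: interpolate the resolved limit into the
    # inner SELECT instead of a fixed 200.
    return (
        "SELECT schemaname, count(*) FROM "
        "(SELECT schemaname FROM pg_stat_user_tables LIMIT {}) AS sub "
        "GROUP BY schemaname".format(limit)
    )


# A deployment with 500+ tables could then set table_count_limit: 1000
# in postgres.d/conf.yaml and get accurate per-schema counts.
limit = get_table_count_limit({"table_count_limit": 1000})
print(build_count_query(limit))
```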
Issue Analytics
- Created 4 years ago
- Comments: 6 (2 by maintainers)
Top GitHub Comments
- PR created: https://github.com/DataDog/integrations-core/pull/3729
- Feature is now available in 6.14.x. Thanks @fischaz