Inconsistent reporting of the number of compactions
I am doing some testing on a branch and I noticed an inconsistency in how we report the number of compactions. The Monitor reports 4 active compactions, the `fate print`
command shows 1 FATE transaction, and the `listcompactions`
command shows one compaction per compacted tablet.
```
root@uno> listcompactions
2022-02-25T13:20:25,037 [Shell.audit] INFO : root@uno> listcompactions
 SERVER                | AGE    | TYPE | REASON | READ | WROTE | TABLE | TABLET                              | INPUT | OUTPUT                            | ITERATORS | ITERATOR OPTIONS
ip-10-113-12-25:9997   | 33m19s | FULL | USER   | 388K | 388K  | ci    | 2;59999999999999a2;533333333333333b | 1     | /2/t-00001by/A0000214.rf_tmp      | []        | {}
ip-10-113-12-25:9997   | 33m13s | FULL | USER   | 387K | 387K  | ci    | 2;533333333333333b;4cccccccccccccd4 | 1     | /2/t-00001bx/A0000215.rf_tmp      | []        | {}
ip-10-113-12-25:9997   | 23m44s | FULL | USER   | 276K | 276K  | ci    | 2<;79999999999999a5                 | 1     | /2/default_tablet/A000021e.rf_tmp | []        | {}
ip-10-113-12-25:9997   | 23m44s | FULL | USER   | 276K | 276K  | ci    | 2;6000000000000009;59999999999999a2 | 1     | /2/t-00001bz/A000021f.rf_tmp      | []        | {}
ip-10-113-12-25:10000  | 33m17s | FULL | USER   | 388K | 388K  | ci    | 2;6cccccccccccccd7;666666666666667  | 1     | /2/t-00001c1/A0000250.rf_tmp      | []        | {}
ip-10-113-12-25:10000  | 33m15s | FULL | USER   | 388K | 388K  | ci    | 2;666666666666667;6000000000000009  | 1     | /2/t-00001c0/A0000251.rf_tmp      | []        | {}
ip-10-113-12-25:10000  | 23m43s | FULL | USER   | 276K | 276K  | ci    | 2;79999999999999a5;733333333333333e | 1     | /2/t-00001c3/A0000254.rf_tmp      | []        | {}
ip-10-113-12-25:10000  | 23m38s | FULL | USER   | 275K | 275K  | ci    | 2;733333333333333e;6cccccccccccccd7 | 1     | /2/t-00001c2/A0000255.rf_tmp      | []        | {}
root@uno> fate print
2022-02-25T13:20:29,504 [Shell.audit] INFO : root@uno> fate print
2022-02-25T13:20:29,506 [conf.SiteConfiguration] INFO : Found Accumulo configuration on classpath at /home/mike/workspace/uno/install/accumulo-2.1.0-SNAPSHOT/conf/accumulo.properties
2022-02-25T13:20:29,520 [zookeeper.ZooSession] DEBUG: Connecting to localhost:2181 with timeout 30000 with auth
txid: 73ecbda03cea6dc3  status: IN_PROGRESS  op: CompactRange  locked: [R:+default, R:2]  locking: []  top: org.apache.accumulo.manager.tableOps.TraceRepo@61126fdf  created: 2022-02-25T17:13:59.413Z
 1 transactions
```
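To make the discrepancy concrete, here is an illustrative tally (plain Python, not Accumulo code) of the eight tablet-level rows in the `listcompactions` output above. Grouping the rows by tablet server gives 4 per server, which may be where the Monitor's figure of 4 comes from, while all eight rows ultimately belong to the single `CompactRange` FATE transaction. The row data is copied from the session above; everything else is an assumption for illustration.

```python
from collections import Counter

# (server, tablet) pairs mirroring the listcompactions rows above.
rows = [
    ("ip-10-113-12-25:9997",  "2;59999999999999a2;533333333333333b"),
    ("ip-10-113-12-25:9997",  "2;533333333333333b;4cccccccccccccd4"),
    ("ip-10-113-12-25:9997",  "2<;79999999999999a5"),
    ("ip-10-113-12-25:9997",  "2;6000000000000009;59999999999999a2"),
    ("ip-10-113-12-25:10000", "2;6cccccccccccccd7;666666666666667"),
    ("ip-10-113-12-25:10000", "2;666666666666667;6000000000000009"),
    ("ip-10-113-12-25:10000", "2;79999999999999a5;733333333333333e"),
    ("ip-10-113-12-25:10000", "2;733333333333333e;6cccccccccccccd7"),
]

# Per-server tally: 4 tablet-level compactions on each of the two servers.
per_server = Counter(server for server, _tablet in rows)
print(per_server)

# Total tablet-level compactions: 8, all fanned out from 1 FATE transaction.
print(sum(per_server.values()))
```

So the same work is counted three different ways: 1 transaction, 4 per server, 8 tablets.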
Issue Analytics
- Created 2 years ago
- Comments: 6 (6 by maintainers)
Fate is listing transactions though, not compactions.
I don’t have any ideas for how to make this better.