Stuck on an issue?

Lightrun Answers was designed to reduce the constant googling that comes with debugging third-party libraries. It collects links to all the places you might be looking while hunting down a tough bug.

And, if you’re still stuck at the end, we’re happy to hop on a call to see how we can help out.

[core] workload D reports read failures when used with client side batched inserts

See original GitHub issue

Hi, I am using PG 13 (the same failure is seen on older PG versions too). I cloned YCSB last week and am using the JDBC binding 0.16. Irrespective of what I set operationcount and recordcount to, I always observe the same failure.

The command I use (I have already run the load phase of workload A):
./bin/ycsb run jdbc -P workloads/workloadd -P jdbc/src/main/conf/ -p recordcount=100 -p operationcount=100 -cp jdbc/target/postgresql-9.0-802.jdbc4.jar


# ./bin/ycsb run jdbc -P workloads/workloadd -P jdbc/src/main/conf/ -p recordcount=100 -p operationcount=100 -cp jdbc/target/postgresql-9.0-802.jdbc4.jar
[WARN]  Running against a source checkout. In order to get our runtime dependencies we'll have to invoke Maven. Depending on the state of your system, this may take ~30-45 seconds
[DEBUG]  Running 'mvn -pl -am package -DskipTests dependency:build-classpath -DincludeScope=compile -Dmdep.outputFilterFile=true'
java -cp jdbc/target/postgresql-9.0-802.jdbc4.jar:/home/cc/YCSB/jdbc/conf:/home/cc/YCSB/jdbc/target/jdbc-binding-0.16.0-SNAPSHOT.jar:/home/cc/YCSB/jdbc/target/jdbc-binding-0.17.0-SNAPSHOT.jar:/root/.m2/repository/org/apache/geronimo/specs/geronimo-jta_1.1_spec/1.1.1/geronimo-jta_1.1_spec-1.1.1.jar:/root/.m2/repository/org/apache/htrace/htrace-core4/4.1.0-incubating/htrace-core4-4.1.0-incubating.jar:/root/.m2/repository/net/sourceforge/serp/serp/1.13.1/serp-1.13.1.jar:/root/.m2/repository/org/hdrhistogram/HdrHistogram/2.1.4/HdrHistogram-2.1.4.jar:/root/.m2/repository/org/apache/openjpa/openjpa-jdbc/2.1.1/openjpa-jdbc-2.1.1.jar:/root/.m2/repository/org/apache/geronimo/specs/geronimo-jms_1.1_spec/1.1.1/geronimo-jms_1.1_spec-1.1.1.jar:/home/cc/YCSB/core/target/core-0.17.0-SNAPSHOT.jar:/root/.m2/repository/org/apache/openjpa/openjpa-kernel/2.1.1/openjpa-kernel-2.1.1.jar:/root/.m2/repository/org/codehaus/jackson/jackson-core-asl/1.9.4/jackson-core-asl-1.9.4.jar:/root/.m2/repository/commons-collections/commons-collections/3.2.1/commons-collections-3.2.1.jar:/root/.m2/repository/commons-lang/commons-lang/2.4/commons-lang-2.4.jar:/root/.m2/repository/org/codehaus/jackson/jackson-mapper-asl/1.9.4/jackson-mapper-asl-1.9.4.jar:/root/.m2/repository/org/apache/openjpa/openjpa-lib/2.1.1/openjpa-lib-2.1.1.jar:/root/.m2/repository/commons-pool/commons-pool/1.5.4/commons-pool-1.5.4.jar -db -P workloads/workloadd -P jdbc/src/main/conf/ -p recordcount=100 -p operationcount=100 -t
Command line: -db -P workloads/workloadd -P jdbc/src/main/conf/ -p recordcount=100 -p operationcount=100 -t
YCSB Client 0.17.0-SNAPSHOT

Loading workload...
Starting test.
Adding shard node URL: jdbc:postgresql://localhost:5432/abc?reWriteBatchedInserts=true
Using shards: 1, batchSize:1000, fetchSize: 1000
DBWrapper: report latency for each error is false and specific error codes to track for latency are: []
[OVERALL], RunTime(ms), 203
[OVERALL], Throughput(ops/sec), 492.61083743842363
[TOTAL_GCS_PS_Scavenge], Count, 0
[TOTAL_GC_TIME_PS_Scavenge], Time(ms), 0
[TOTAL_GC_TIME_%_PS_Scavenge], Time(%), 0.0
[TOTAL_GCS_PS_MarkSweep], Count, 0
[TOTAL_GC_TIME_PS_MarkSweep], Time(ms), 0
[TOTAL_GC_TIME_%_PS_MarkSweep], Time(%), 0.0
[TOTAL_GCs], Count, 0
[TOTAL_GC_TIME], Time(ms), 0
[TOTAL_GC_TIME_%], Time(%), 0.0
[READ], Operations, 71
[READ], AverageLatency(us), 862.4084507042254
[READ], MinLatency(us), 301
[READ], MaxLatency(us), 27935
[READ], 95thPercentileLatency(us), 922
[READ], 99thPercentileLatency(us), 1168
[READ], Return=OK, 71
[READ], Return=NOT_FOUND, 21
[CLEANUP], Operations, 1
[CLEANUP], AverageLatency(us), 7366.0
[CLEANUP], MinLatency(us), 7364
[CLEANUP], MaxLatency(us), 7367
[CLEANUP], 95thPercentileLatency(us), 7367
[CLEANUP], 99thPercentileLatency(us), 7367
[INSERT], Operations, 8
[INSERT], AverageLatency(us), 573.0
[INSERT], MinLatency(us), 311
[INSERT], MaxLatency(us), 1881
[INSERT], 95thPercentileLatency(us), 1881
[INSERT], 99thPercentileLatency(us), 1881
[READ-FAILED], Operations, 21
[READ-FAILED], AverageLatency(us), 285.57142857142856
[READ-FAILED], MinLatency(us), 210
[READ-FAILED], MaxLatency(us), 409
[READ-FAILED], 95thPercentileLatency(us), 388
[READ-FAILED], 99thPercentileLatency(us), 409

This is my file:


I don’t see failures in any other workload. Let me know if you need any further information. I could reproduce the same failure on different machines as well.

And I have made no changes to the workload D file:


Issue Analytics

  • State: open
  • Created: 4 years ago
  • Comments: 5

Top GitHub Comments

busbey commented, Oct 2, 2019

Oh! It’s probably because the batched insert hasn’t actually finished inserting by the time the shared state in workload D has updated the key range to include the new records.

I’ll add this to the known issues on the next release.
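The race described above can be sketched as follows. This is a simplified, hypothetical model (the class and method names are not YCSB code): it assumes the JDBC binding buffers inserts client-side until batchSize rows accumulate, while workload D's key chooser advances as soon as insert() returns OK.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Simplified model of the race between client-side insert batching and
// workload D's "read latest" key selection. Hypothetical names, not YCSB code.
public class BatchedInsertRace {
    static class BatchingClient {
        final int batchSize;
        final List<String> buffer = new ArrayList<>();   // rows not yet sent
        final Set<String> committed = new HashSet<>();   // rows visible to reads

        BatchingClient(int batchSize) { this.batchSize = batchSize; }

        // insert() returns immediately, but the row is only buffered;
        // nothing reaches the database until the batch fills (or cleanup runs).
        void insert(String key) {
            buffer.add(key);
            if (buffer.size() >= batchSize) flush();
        }

        void flush() { committed.addAll(buffer); buffer.clear(); }

        boolean read(String key) { return committed.contains(key); }
    }

    public static void main(String[] args) {
        BatchingClient db = new BatchingClient(1000); // batchSize=1000, as in the log

        int maxKey = 0;
        db.insert("user" + maxKey);
        maxKey++; // workload D advances the key range as soon as insert() returns OK

        // "Read latest" picks the just-inserted key, but the batch never flushed:
        boolean found = db.read("user" + (maxKey - 1));
        System.out.println(found ? "OK" : "NOT_FOUND"); // prints NOT_FOUND
    }
}
```

With 100 operations and a batch size of 1000, the batch never fills during the run, so every "read latest" that lands on an unflushed key returns NOT_FOUND — consistent with the 21 NOT_FOUND reads in the log.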

VinayBanakar commented, Oct 2, 2019

Okay, removing reWriteBatchedInserts from db.url seems to fix the issue. So there is likely a problem with batched inserts, but I haven’t been able to debug it further.
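For reference, the workaround amounts to dropping the reWriteBatchedInserts parameter from the connection URL in the JDBC binding's db.properties. The URL below is taken from the log output; the driver, user, and password values are illustrative assumptions, since the reporter's actual file was not shown.

```properties
# Hypothetical db.properties for the YCSB JDBC binding (illustrative values).
# Before (triggers the NOT_FOUND reads in workload D):
#   db.url=jdbc:postgresql://localhost:5432/abc?reWriteBatchedInserts=true
# After (works around the race):
db.url=jdbc:postgresql://localhost:5432/abc
db.driver=org.postgresql.Driver
db.user=postgres
db.passwd=postgres
```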


