[core] workload D reports read failures when used with client-side batched inserts
Hi, I am using PG 13 (the same behavior is seen on older PG versions too). I cloned YCSB last week and am using the jdbc binding 0.16. Regardless of what I set operationcount and recordcount to, I always observe the same failure.
The command I use (I have already run the load for workload A at this point):
./bin/ycsb run jdbc -P workloads/workloadd -P jdbc/src/main/conf/db.properties -p recordcount=100 -p operationcount=100 -cp jdbc/target/postgresql-9.0-802.jdbc4.jar
Result:
# ./bin/ycsb run jdbc -P workloads/workloadd -P jdbc/src/main/conf/db.properties -p recordcount=100 -p operationcount=100 -cp jdbc/target/postgresql-9.0-802.jdbc4.jar
[WARN] Running against a source checkout. In order to get our runtime dependencies we'll have to invoke Maven. Depending on the state of your system, this may take ~30-45 seconds
[DEBUG] Running 'mvn -pl com.yahoo.ycsb:jdbc-binding -am package -DskipTests dependency:build-classpath -DincludeScope=compile -Dmdep.outputFilterFile=true'
java -cp jdbc/target/postgresql-9.0-802.jdbc4.jar:/home/cc/YCSB/jdbc/conf:/home/cc/YCSB/jdbc/target/jdbc-binding-0.16.0-SNAPSHOT.jar:/home/cc/YCSB/jdbc/target/jdbc-binding-0.17.0-SNAPSHOT.jar:/root/.m2/repository/org/apache/geronimo/specs/geronimo-jta_1.1_spec/1.1.1/geronimo-jta_1.1_spec-1.1.1.jar:/root/.m2/repository/org/apache/htrace/htrace-core4/4.1.0-incubating/htrace-core4-4.1.0-incubating.jar:/root/.m2/repository/net/sourceforge/serp/serp/1.13.1/serp-1.13.1.jar:/root/.m2/repository/org/hdrhistogram/HdrHistogram/2.1.4/HdrHistogram-2.1.4.jar:/root/.m2/repository/org/apache/openjpa/openjpa-jdbc/2.1.1/openjpa-jdbc-2.1.1.jar:/root/.m2/repository/org/apache/geronimo/specs/geronimo-jms_1.1_spec/1.1.1/geronimo-jms_1.1_spec-1.1.1.jar:/home/cc/YCSB/core/target/core-0.17.0-SNAPSHOT.jar:/root/.m2/repository/org/apache/openjpa/openjpa-kernel/2.1.1/openjpa-kernel-2.1.1.jar:/root/.m2/repository/org/codehaus/jackson/jackson-core-asl/1.9.4/jackson-core-asl-1.9.4.jar:/root/.m2/repository/commons-collections/commons-collections/3.2.1/commons-collections-3.2.1.jar:/root/.m2/repository/commons-lang/commons-lang/2.4/commons-lang-2.4.jar:/root/.m2/repository/org/codehaus/jackson/jackson-mapper-asl/1.9.4/jackson-mapper-asl-1.9.4.jar:/root/.m2/repository/org/apache/openjpa/openjpa-lib/2.1.1/openjpa-lib-2.1.1.jar:/root/.m2/repository/commons-pool/commons-pool/1.5.4/commons-pool-1.5.4.jar com.yahoo.ycsb.Client -db com.yahoo.ycsb.db.JdbcDBClient -P workloads/workloadd -P jdbc/src/main/conf/db.properties -p recordcount=100 -p operationcount=100 -t
Command line: -db com.yahoo.ycsb.db.JdbcDBClient -P workloads/workloadd -P jdbc/src/main/conf/db.properties -p recordcount=100 -p operationcount=100 -t
YCSB Client 0.17.0-SNAPSHOT
Loading workload...
Starting test.
Adding shard node URL: jdbc:postgresql://localhost:5432/abc?reWriteBatchedInserts=true
Using shards: 1, batchSize:1000, fetchSize: 1000
DBWrapper: report latency for each error is false and specific error codes to track for latency are: []
[OVERALL], RunTime(ms), 203
[OVERALL], Throughput(ops/sec), 492.61083743842363
[TOTAL_GCS_PS_Scavenge], Count, 0
[TOTAL_GC_TIME_PS_Scavenge], Time(ms), 0
[TOTAL_GC_TIME_%_PS_Scavenge], Time(%), 0.0
[TOTAL_GCS_PS_MarkSweep], Count, 0
[TOTAL_GC_TIME_PS_MarkSweep], Time(ms), 0
[TOTAL_GC_TIME_%_PS_MarkSweep], Time(%), 0.0
[TOTAL_GCs], Count, 0
[TOTAL_GC_TIME], Time(ms), 0
[TOTAL_GC_TIME_%], Time(%), 0.0
[READ], Operations, 71
[READ], AverageLatency(us), 862.4084507042254
[READ], MinLatency(us), 301
[READ], MaxLatency(us), 27935
[READ], 95thPercentileLatency(us), 922
[READ], 99thPercentileLatency(us), 1168
[READ], Return=OK, 71
[READ], Return=NOT_FOUND, 21
[CLEANUP], Operations, 1
[CLEANUP], AverageLatency(us), 7366.0
[CLEANUP], MinLatency(us), 7364
[CLEANUP], MaxLatency(us), 7367
[CLEANUP], 95thPercentileLatency(us), 7367
[CLEANUP], 99thPercentileLatency(us), 7367
[INSERT], Operations, 8
[INSERT], AverageLatency(us), 573.0
[INSERT], MinLatency(us), 311
[INSERT], MaxLatency(us), 1881
[INSERT], 95thPercentileLatency(us), 1881
[INSERT], 99thPercentileLatency(us), 1881
[INSERT], Return=BATCHED_OK, 8
[READ-FAILED], Operations, 21
[READ-FAILED], AverageLatency(us), 285.57142857142856
[READ-FAILED], MinLatency(us), 210
[READ-FAILED], MaxLatency(us), 409
[READ-FAILED], 95thPercentileLatency(us), 388
[READ-FAILED], 99thPercentileLatency(us), 409
This is my db.properties file:
db.driver=org.postgresql.Driver
jdbc.fetchsize=1000
#db.url=jdbc:postgresql://localhost:5432/abc?reWriteBatchedInserts=true&sslmode='verify-ca'&ssl='true'&sslcert='/home/cc/postFix/pg13Data/data/client/postgresql.crt'&sslkey='/home/cc/postFix/pg13Data/data/client/postgresql.key'&sslrootcert='/home/cc/postFix/pg13Data/data/server.crt'
db.url=jdbc:postgresql://localhost:5432/abc?reWriteBatchedInserts=true
db.user=abc
db.passwd=PASSWORD
db.batchsize=1000
jdbc.batchupdateapi=true
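For context on the batching settings above: with jdbc.batchupdateapi=true and db.batchsize=1000, the jdbc binding buffers inserts on the client and sends them to PostgreSQL in bulk, and a row that is still sitting in an unflushed batch is not visible to reads. Below is a minimal JDBC sketch of that visibility gap; the table, column, and key names are illustrative rather than YCSB's exact schema.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class BatchVisibilitySketch {
  public static void main(String[] args) throws Exception {
    // Connection parameters mirror the db.properties above.
    Connection conn = DriverManager.getConnection(
        "jdbc:postgresql://localhost:5432/abc?reWriteBatchedInserts=true", "abc", "PASSWORD");
    try (PreparedStatement insert = conn.prepareStatement(
             "INSERT INTO usertable (ycsb_key, field0) VALUES (?, ?)");
         PreparedStatement read = conn.prepareStatement(
             "SELECT field0 FROM usertable WHERE ycsb_key = ?")) {
      // The row is only queued on the client; nothing has been sent to PostgreSQL yet.
      insert.setString(1, "user101");
      insert.setString(2, "some value");
      insert.addBatch();

      // A read issued now cannot see the buffered row -- this is what surfaces as NOT_FOUND.
      read.setString(1, "user101");
      try (ResultSet rs = read.executeQuery()) {
        System.out.println("visible before executeBatch(): " + rs.next()); // false
      }

      // Only when the batch is flushed does the row actually reach the database.
      insert.executeBatch();
      try (ResultSet rs = read.executeQuery()) {
        System.out.println("visible after executeBatch(): " + rs.next()); // true
      }
    } finally {
      conn.close();
    }
  }
}

With operationcount=100 and db.batchsize=1000 the batch never fills during the run, so the handful of new rows presumably only reaches the database when the batch is flushed at cleanup.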
I don't see these failures with any other workload. I have verified the same failure on different machines as well. Let me know if you need any further information.
And I have made no changes to the workload D file:
recordcount=1000
operationcount=1000
workload=com.yahoo.ycsb.workloads.CoreWorkload
readallfields=true
readproportion=0.95
updateproportion=0
scanproportion=0
insertproportion=0.05
requestdistribution=latest
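The setting that matters most here is requestdistribution=latest: workload D deliberately skews reads toward the most recently inserted keys, so the keys that were just handed to the insert path are exactly the ones reads go looking for. Below is a rough sketch of that skew, illustrative only and not YCSB's actual generator implementation.

import java.util.Random;
import java.util.concurrent.atomic.AtomicLong;

// Toy "latest" key chooser: reads cluster near the newest insert key.
public class LatestSkewSketch {
  private final AtomicLong newestKey;
  private final Random rng = new Random();

  public LatestSkewSketch(long loadedRecords) {
    newestKey = new AtomicLong(loadedRecords);
  }

  public long nextInsertKey() {               // ~5% of operations under workload D
    return newestKey.incrementAndGet();
  }

  public long nextReadKey() {                 // ~95% of operations, skewed toward new keys
    long offset = (long) (-Math.log(1.0 - rng.nextDouble()) * 2.0);  // small random distance
    return Math.max(1, newestKey.get() - offset);
  }

  public static void main(String[] args) {
    LatestSkewSketch keys = new LatestSkewSketch(100);  // the recordcount used on the command line
    keys.nextInsertKey();                               // one new insert -> key 101
    for (int i = 0; i < 5; i++) {
      System.out.println("read targets key " + keys.nextReadKey());  // mostly at or just below 101
    }
  }
}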
Top GitHub Comments
Oh! It's probably because the batched insert hasn't actually reached the database yet by the time the shared state in workload D has updated the key range to include the new records.
I’ll add this to the known issues on the next release.
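To make that concrete, here is a simplified model of the flow described above: the new key is acknowledged to the workload's shared key sequence as soon as the insert call returns, and a batched insert reports BATCHED_OK while the row is still only buffered, so a "latest" read can immediately pick a key the database has not seen yet. The names below are illustrative, not verbatim YCSB source.

import java.util.concurrent.atomic.AtomicLong;

// Toy model of the acknowledge-before-flush race; not verbatim YCSB code.
public class AckBeforeFlushSketch {
  public static void main(String[] args) {
    // Shared insert-key sequence: the "latest" read distribution draws keys up to this value.
    AtomicLong newestAckedKey = new AtomicLong(100);  // records committed by the load phase
    boolean batchFlushed = false;                     // the new row is still in the client batch

    // Transaction insert: the JDBC binding only buffers the row and reports BATCHED_OK,
    // yet the workload acknowledges the key right away.
    long keynum = newestAckedKey.incrementAndGet();

    // A "latest" read may now select exactly this key, but the database does not have it yet.
    boolean found = (keynum <= 100) || batchFlushed;
    System.out.println("read of user" + keynum + " -> " + (found ? "OK" : "NOT_FOUND"));
  }
}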
Okay, so removing reWriteBatchedInserts from db.url seems to fix the issue. So there is likely a problem with batched inserts, but I haven't been able to debug it further.