Does not work with Cassandra 3.0
Cassandra 3.0 reorganized the internal schema tables (keyspace metadata moved from `system.schema_keyspaces` to `system_schema.keyspaces`), so older driver versions that still query the old tables crash when they try to connect. Both tools fail for this reason. Please update the Java driver.
```
# ./cassandra-unloader -f xxx.csv -host 126.96.36.199 -schema xxx.foo
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Exception in thread "main" com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /188.8.131.52:9042 (com.datastax.driver.core.exceptions.InvalidQueryException: unconfigured table schema_keyspaces))
	at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:223)
	at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:78)
	at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1272)
	at com.datastax.driver.core.Cluster.init(Cluster.java:158)
	at com.datastax.driver.core.Cluster.connect(Cluster.java:248)
	at com.datastax.loader.CqlDelimUnload.setup(CqlDelimUnload.java:329)
	at com.datastax.loader.CqlDelimUnload.run(CqlDelimUnload.java:350)
	at com.datastax.loader.CqlDelimUnload.main(CqlDelimUnload.java:444)
```
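The `unconfigured table schema_keyspaces` error happens because drivers built for pre-3.0 clusters read schema metadata from `system.schema_keyspaces`, while Cassandra 3.0 moved that metadata to `system_schema.keyspaces`. A minimal Python sketch of the version-dependent lookup (the helper name and version parsing are illustrative, not part of cassandra-loader or the driver):

```python
def schema_keyspaces_table(cassandra_version: str) -> str:
    """Return the system table that holds keyspace metadata for a
    given Cassandra version (illustrative helper, not real driver code)."""
    major, minor = (int(p) for p in cassandra_version.split(".")[:2])
    if (major, minor) < (3, 0):
        # Pre-3.0: schema metadata lives in the 'system' keyspace
        return "system.schema_keyspaces"
    # 3.0+: schema metadata moved to the dedicated 'system_schema' keyspace
    return "system_schema.keyspaces"
```

An old driver hard-codes the first form, so against a 3.0 node its control-connection query hits a table that no longer exists and the server replies with `InvalidQueryException`, which the driver surfaces as `NoHostAvailableException`.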
- Created 8 years ago
- Comments:8 (1 by maintainers)
Top GitHub Comments
I loaded 62 million rows of data into a table with 150 columns. The load rate was about 8200 rows per second on a 4-node cluster with 16 cores (32 logical)/SSDs/264GB memory. In a small cluster of 4 VM nodes, my load rate was about 1000 rows per second with 1 thread. You want to create smaller CSV files so you can run your job in parallel. If compaction is falling behind, you may want to lower the number of threads. You need to monitor where the bottleneck is. It’s a tuning process.
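Back-of-the-envelope math for the numbers above: 62 million rows at roughly 8,200 rows/s works out to about two hours on a single loader, which is why splitting the input into smaller CSV files and running loaders in parallel pays off. A small sketch (the function names and the linear-scaling assumption are mine; real throughput flattens once compaction falls behind):

```python
import math

def load_time_hours(total_rows: int, rows_per_second: float,
                    parallel_loaders: int = 1) -> float:
    """Estimated wall-clock hours, assuming throughput scales
    linearly with the number of loaders (optimistic)."""
    return total_rows / (rows_per_second * parallel_loaders) / 3600

def files_needed(total_rows: int, rows_per_file: int) -> int:
    """How many smaller CSV files to split the input into."""
    return math.ceil(total_rows / rows_per_file)
```

For example, `load_time_hours(62_000_000, 8200)` comes to about 2.1 hours on one loader, and splitting into 2-million-row files gives 31 files that can be fed to parallel loader processes.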
I didn’t close this issue, but yes we support 3.0 now.