“Must have sharding column with subquery” error occurs when using “exists”.
When executing the following SQL:
select * from ts_order td where exists (select 1 from ts_order_address tda where tda.order_id = 1);
I get the error: Must have sharding column with subquery.
Which version of ShardingSphere did you use?
4.0.0-RC1
Which project did you use? Sharding-JDBC or Sharding-Proxy?
Sharding-JDBC
Expected behavior
When I use version 3.0.1, I get the result without error:
[main] INFO ShardingSphere-SQL - Actual SQL: ds_0 ::: select * from ts_order_0000 td where exists (select 1 from ts_order_address_0000 tda where tda.order_id = 1);
[main] INFO ShardingSphere-SQL - Actual SQL: ds_0 ::: select * from ts_order_0001 td where exists (select 1 from ts_order_address_0001 tda where tda.order_id = 1);
[main] INFO ShardingSphere-SQL - Actual SQL: ds_1 ::: select * from ts_order_0002 td where exists (select 1 from ts_order_address_0002 tda where tda.order_id = 1);
[main] INFO ShardingSphere-SQL - Actual SQL: ds_1 ::: select * from ts_order_0003 td where exists (select 1 from ts_order_address_0003 tda where tda.order_id = 1);
------------------------------------------------------------
id
1
------------------------------------------------------------
Actual behavior
Exception in thread "main" java.lang.IllegalStateException: Must have sharding column with subquery.
Reason analysis (if you can)
Steps to reproduce the behavior, such as: the SQL to execute, the sharding rule configuration, when the exception occurs, etc.
Dependency
<dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>druid</artifactId>
    <version>1.0.9</version>
</dependency>
<dependency>
    <groupId>org.apache.shardingsphere</groupId>
    <artifactId>sharding-jdbc-core</artifactId>
    <version>4.0.0-RC1</version>
</dependency>
SQL
-- prepare schemas
create database test_d_0;
create database test_d_1;
-- prepare tables
-- prepare table ts_order
CREATE TABLE test_d_0.`ts_order_0000` (`id` BIGINT NOT NULL AUTO_INCREMENT,PRIMARY KEY (`id`));
CREATE TABLE test_d_0.`ts_order_0001` (`id` BIGINT NOT NULL AUTO_INCREMENT,PRIMARY KEY (`id`));
CREATE TABLE test_d_1.`ts_order_0002` (`id` BIGINT NOT NULL AUTO_INCREMENT,PRIMARY KEY (`id`));
CREATE TABLE test_d_1.`ts_order_0003` (`id` BIGINT NOT NULL AUTO_INCREMENT,PRIMARY KEY (`id`));
-- prepare table ts_order_address
CREATE TABLE `test_d_0`.`ts_order_address_0000` ( `id` BIGINT NOT NULL AUTO_INCREMENT, `order_id` BIGINT NOT NULL, `address` VARCHAR(255) NULL, PRIMARY KEY (`id`));
CREATE TABLE `test_d_0`.`ts_order_address_0001` ( `id` BIGINT NOT NULL AUTO_INCREMENT, `order_id` BIGINT NOT NULL, `address` VARCHAR(255) NULL, PRIMARY KEY (`id`));
CREATE TABLE `test_d_1`.`ts_order_address_0002` ( `id` BIGINT NOT NULL AUTO_INCREMENT, `order_id` BIGINT NOT NULL, `address` VARCHAR(255) NULL, PRIMARY KEY (`id`));
CREATE TABLE `test_d_1`.`ts_order_address_0003` ( `id` BIGINT NOT NULL AUTO_INCREMENT, `order_id` BIGINT NOT NULL, `address` VARCHAR(255) NULL, PRIMARY KEY (`id`));
-- prepare init data
INSERT INTO `test_d_0`.`ts_order_0001` (`id`) VALUES ('1');
INSERT INTO `test_d_0`.`ts_order_address_0001` (`id`, `order_id`, `address`) VALUES ('1', '1', 'china');
Java Code
// ignore package and import
public static void main(String[] args) throws Exception {
    queryTest("sharding-config-subquery-test2.yaml",
        "select * from ts_order td where exists (select 1 from ts_order_address tda where tda.order_id = 1);");
}

private static void queryTest(String configResourcePath, String querySql) throws Exception {
    String shardingConfig = loadFromResource(configResourcePath);
    DataSource dataSource = YamlShardingDataSourceFactory.createDataSource(shardingConfig.getBytes("utf-8"));
    try (Connection connection = dataSource.getConnection();
         Statement st = connection.createStatement();
         ResultSet rs = st.executeQuery(querySql)) {
        showData(rs);
    }
}

private static String loadFromResource(String path) throws Exception {
    try (BufferedReader reader = new BufferedReader(new InputStreamReader(
            Class.class.getResourceAsStream("/" + path), "utf-8"))) {
        StringBuilder sb = new StringBuilder();
        CharBuffer charBuffer = CharBuffer.allocate(1024);
        while (reader.read(charBuffer) > 0) {
            charBuffer.flip();     // switch the buffer to read mode
            sb.append(charBuffer); // append the characters just read
            charBuffer.clear();    // reset the buffer for the next read
        }
        return sb.toString();
    }
}

private static void showData(ResultSet rs) throws SQLException {
    System.out.println("------------------------------------------------------------");
    int colCount = rs.getMetaData().getColumnCount();
    // header row: column names
    for (int i = 0; i < colCount; i++) {
        System.out.print(String.format("%-20s", rs.getMetaData().getColumnName(i + 1)));
    }
    System.out.println();
    int dataCount = 0;
    while (rs.next()) {
        dataCount++;
        for (int i = 0; i < colCount; i++) {
            System.out.print(String.format("%-20s", rs.getObject(i + 1) + " "));
        }
        System.out.println();
    }
    System.out.println("------------------------------------------------------------");
    System.out.println("Get Data rows " + dataCount);
}
Sharding Rule sharding-config-subquery-test2.yaml
dataSources:
  ds_0: !!com.alibaba.druid.pool.DruidDataSource
    driverClassName: com.mysql.jdbc.Driver
    url: jdbc:mysql://localhost:3306/test_d_0
    username: root
    password: root135
  ds_1: !!com.alibaba.druid.pool.DruidDataSource
    driverClassName: com.mysql.jdbc.Driver
    url: jdbc:mysql://localhost:3306/test_d_1
    username: root
    password: root135
shardingRule:
  tables:
    ts_order:
      actualDataNodes: ds_0.ts_order_000${0..1},ds_1.ts_order_000${2..3}
      databaseStrategy:
        inline:
          shardingColumn: id
          algorithmExpression: ds_${new BigDecimal(id).abs().divideAndRemainder(4)[1].longValue().intdiv(2)}
      tableStrategy:
        inline:
          shardingColumn: id
          algorithmExpression: ts_order_${String.format("%04d",new BigDecimal(id).abs().divideAndRemainder(4)[1].longValue())}
    ts_order_address:
      actualDataNodes: ds_0.ts_order_address_000${0..1},ds_1.ts_order_address_000${2..3}
      databaseStrategy:
        inline:
          shardingColumn: order_id
          algorithmExpression: ds_${new BigDecimal(order_id).abs().divideAndRemainder(4)[1].longValue().intdiv(2)}
      tableStrategy:
        inline:
          shardingColumn: order_id
          algorithmExpression: ts_order_address_${String.format("%04d",new BigDecimal(order_id).abs().divideAndRemainder(4)[1].longValue())}
  bindingTables:
    - ts_order, ts_order_address
props:
  sql.show: true
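As a sanity check on the configuration above, here is a minimal sketch (not part of the original reproduction code) of what the inline routing expressions compute for the inserted row id = 1 / order_id = 1, written in plain Java as a stand-in for the Groovy expressions:

// Minimal sketch of the inline routing arithmetic above, checked against the test row id = 1.
// Plain Java stand-in for the Groovy expressions; intdiv(2) becomes integer division.
public class RoutingSketch {
    public static void main(String[] args) {
        long id = 1L;
        long tableIndex = Math.abs(id) % 4;  // divideAndRemainder(4)[1] -> 1
        long dsIndex = tableIndex / 2;       // intdiv(2)                -> 0
        System.out.println("ds_" + dsIndex);                            // ds_0
        System.out.println(String.format("ts_order_%04d", tableIndex)); // ts_order_0001
    }
}

Both tables apply the same arithmetic to their sharding columns (id and order_id), which is why they are declared as binding tables and why the 3.0.1 logs above pair ts_order_000x with ts_order_address_000x.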
Example codes for reproducing this issue (such as a GitHub link).

@AlbertTao A SELECT with a subquery is only executed successfully when it carries a sharding condition that routes to a single sharding node. This restriction was added after version 3.0.1. The reasons are: 1. When more sharding tables are involved in the subquery, the execution result may be wrong. 2. When there is a lot of data in the table, the execution time would be very long.
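For anyone blocked by this restriction, below is a hedged sketch of a rewrite that keeps the sharding columns in both the outer query and the subquery, so the route can resolve to a single node; this is an assumption to verify against 4.0.0-RC1, not a workaround confirmed by the maintainers. It reuses queryTest(...) from the reproduction code above.

// Hedged sketch, not confirmed: keep the sharding columns (ts_order.id and
// ts_order_address.order_id) in both the outer query and the subquery so that
// routing can resolve to one node. Reuses queryTest(...) from above.
public static void main(String[] args) throws Exception {
    queryTest("sharding-config-subquery-test2.yaml",
        "select * from ts_order td where td.id = 1 "
      + "and exists (select 1 from ts_order_address tda where tda.order_id = td.id);");
}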
It seems this scenario was not considered: a table joining itself, which also throws this “must have” exception. For example, we optimize LIMIT SQL by using "select a.* from table a join (select id from table limit n, m) r using(id)". I think this is a bug.
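To make that scenario concrete, here is a sketch of the self-join pagination pattern above applied to the tables from this issue; since the derived table carries no sharding condition, the expectation (based on the comment, not verified here) is that 4.0.0-RC1 raises the same exception.

// Hypothetical reproduction of the "table joins itself" pagination pattern,
// written against ts_order; expected (assumption) to throw
// "Must have sharding column with subquery" on 4.0.0-RC1.
public static void main(String[] args) throws Exception {
    queryTest("sharding-config-subquery-test2.yaml",
        "select a.* from ts_order a join (select id from ts_order limit 0, 10) r using (id);");
}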