No valid solution found using the tutorial example
See original GitHub issue

Source Code:
import tvm

def gemm(A, B):
    k = tvm.reduce_axis((0, B.shape[0]))
    return tvm.compute((A.shape[0], B.shape[1]), lambda i, j: tvm.sum(A[i, k] * B[k, j], axis=k))

def wrap_gemm(N, K, M):
    A = tvm.placeholder((N, K))
    B = tvm.placeholder((K, M))
    Output = gemm(A, B)
    return [Output.op], [A, B, Output]
from flextensor.task import register_task, Task
from flextensor.model import WalkerGroup
from flextensor.scheduler import schedule
if __name__ == '__main__':
    task = Task(
        "gemm",
        "gemm",
        wrap_gemm,
        (1024, 1024, 1024),
        "llvm",
        0)
    register_task(task)
    s, bufs, configs = schedule(
        task.key,  # give the key of target task
        slevel=4,
        rlevel=3,
        op_trial=100,
        timeout=10,
        op_stop=30,
        method="searching",
        parallel=4,
    )
Output:
graph space size 1
op 0 space size: 43188288
[Warning] Directory lib is not empty, but reusing it
warm up [1584960963.652893] [ inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf ]
warm up [1584960977.754150] [ inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf ]
warm up [1584960991.611287] [ inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf ]
warm up [1584961005.486223] [ inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf ]
warm up [1584961019.563355] [ inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf ]
warm up [1584961033.576784] [ inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf ]
warm up [1584961047.659202] [ inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf ]
warm up [1584961061.669718] [ inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf ]
warm up [1584961075.572861] [ inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf ]
warm up [1584961089.390276] [ inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf ]
warm up [1584961103.236112] [ inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf ]
warm up [1584961117.260888] [ inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf ]
warm up [1584961131.503041] [ inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf ]
warm up [1584961145.477987] [ inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf ]
warm up [1584961159.549252] [ inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf ]
warm up [1584961173.469501] [ inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf ]
warm up [1584961187.413991] [ inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf ]
warm up [1584961201.389768] [ inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf ]
warm up [1584961215.701637] [ inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf ]
warm up [1584961230.179169] [ inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf ]
Warning: No valid schedule found in warm up process, please use more trials
Now automatically use more trials, increase 20
warm up [1584961244.064868] [ inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf ]
Warning: No valid schedule found in warm up process, please use more trials
Now automatically use more trials, increase 20
warm up [1584961257.944109] [ inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf ]
Warning: No valid schedule found in warm up process, please use more trials
Now automatically use more trials, increase 20
warm up [1584961271.798598] [ inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf ]
Warning: No valid schedule found in warm up process, please use more trials
Now automatically use more trials, increase 20
warm up [1584961285.662753] [ inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf inf ]
Warning: No valid schedule found in warm up process, please use more trials
...
...
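For context: each inf entry in the warm-up output means the corresponding candidate schedule did not produce a valid measurement, which is why the tuner keeps warning that no valid schedule was found and automatically adds more trials. A quick way to rule out the compute definition itself, and narrow the problem down to FlexTensor's search and measurement step, is to build the same GEMM with TVM's default schedule outside FlexTensor. The sketch below is not from the issue; it assumes the pre-0.7 top-level TVM API used in the report, and the sizes and tensor names are illustrative only.

# Minimal sanity check (a sketch, independent of FlexTensor): build the GEMM
# compute with TVM's default schedule and compare the result against NumPy.
import numpy as np
import tvm

N, K, M = 64, 64, 64  # small sizes, chosen only for a quick check
A = tvm.placeholder((N, K), name="A")
B = tvm.placeholder((K, M), name="B")
k = tvm.reduce_axis((0, K), name="k")
C = tvm.compute((N, M), lambda i, j: tvm.sum(A[i, k] * B[k, j], axis=k), name="C")

s = tvm.create_schedule(C.op)           # default (unoptimized) schedule
func = tvm.build(s, [A, B, C], "llvm")  # same target as the failing task

ctx = tvm.cpu(0)
a = tvm.nd.array(np.random.uniform(size=(N, K)).astype("float32"), ctx)
b = tvm.nd.array(np.random.uniform(size=(K, M)).astype("float32"), ctx)
c = tvm.nd.array(np.zeros((N, M), dtype="float32"), ctx)
func(a, b, c)
np.testing.assert_allclose(c.asnumpy(), a.asnumpy() @ b.asnumpy(), rtol=1e-4)
print("default-schedule GEMM builds and matches NumPy")

If this check passes, the compute definition is fine and the failures come from the candidate schedules generated and measured during the search.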
This is a small bug and I have checked in a fix for it. BTW, if you want to try some more advanced searching, please take a look at the flextensor/optimize directory. For example, you can try optimize_conv2d.py with the following command:

The meanings of the different options are:
Great. Thanks for your suggestion.