[Tensorize] Tensorize failed after reorder (Issue #1625)
Hi all, the case below reproduces this issue. We explored a little and found that the CHECK in tensorize still uses stale axis info from before the reorder; when these CHECKs are disabled, tensorize gives the expected result. We are raising this issue to find a better solution for this case.
TensorEngine/src/op/tensorize.cc:238: Check failed: is_one(e.region[i]->extent) Tensorize tensor_intrin: Input dimension mismatch with tensor intrin expected shape=[16], given region=[range(min=0, ext=16), range(min=(j + 0), ext=1), range(min=(k + 0), ext=1)]
Thanks,
import tvm
import numpy as np

def intrin_vadd(n):
    x = tvm.placeholder((n,), name='vx')
    y = tvm.placeholder((n,), name='vy')
    z = tvm.compute(x.shape, lambda i: x[i] + y[i], name='z')

    def intrin_func(ins, outs):
        xx, yy = ins
        zz = outs[0]
        return tvm.call_packed("vadd", xx, yy, zz)

    with tvm.build_config(offset_factor=16):
        return tvm.decl_tensor_intrin(z.op, intrin_func)

def test_tensorize_vadd():
    m = 16
    n = 16
    l = 16
    x = tvm.placeholder((m, n, l), name='x')
    y = tvm.placeholder((m, n, l), name='y')
    z = tvm.compute(x.shape, lambda i, j, k: x[i, j, k] + y[i, j, k], name='z')

    def check(factor):
        s = tvm.create_schedule(z.op)
        xa, xb, xc = s[z].op.axis
        s[z].reorder(xb, xc, xa)  # move the first axis innermost
        vadd = intrin_vadd(factor)
        s[z].tensorize(xa, vadd)  # tensorize the reordered axis -> CHECK fails
        s = s.normalize()
        print(tvm.lower(s, [x, y, z], simple_mode=True))

    check(16)

test_tensorize_vadd()
Created 5 years ago · Comments: 6 (6 by maintainers)
Top GitHub Comments
For this situation, you need to bind a buffer with strides in the tensor intrinsic.
Understood, I'll make a copy to discuss.