
[Tensorize] Tensorize failed after reorder


Hi all, the case below reproduces this issue. We explored a little and found that the CHECK still uses stale axis info after the reorder; when these CHECKs are disabled, tensorize gives the expected result. We are raising this issue in the hope of a better solution for this case.

TensorEngine/src/op/tensorize.cc:238: Check failed: is_one(e.region[i]->extent) Tensorize tensor_intrin: Input dimension mismatch with tensor intrin expected shape=[16], given region=[range(min=0, ext=16), range(min=(j + 0), ext=1), range(min=(k + 0), ext=1)]

Thanks,

import tvm
import numpy as np

def intrin_vadd(n):
    x = tvm.placeholder((n,), name='vx')
    y = tvm.placeholder((n,), name='vy')
    z = tvm.compute(x.shape, lambda i: x[i] + y[i], name='z')
    def intrin_func(ins, outs):
        xx, yy = ins
        zz = outs[0]
        return tvm.call_packed("vadd", xx, yy, zz)
    with tvm.build_config(offset_factor=16):
        return tvm.decl_tensor_intrin(z.op, intrin_func)


def test_tensorize_vadd():
    m = 16
    n = 16
    l = 16
    x = tvm.placeholder((m,n,l), name='x')
    y = tvm.placeholder((m,n,l), name='y')
    z = tvm.compute(x.shape, lambda i,j,k: x[i,j,k] + y[i,j,k], name='z')

    def check(factor):
        s = tvm.create_schedule(z.op)
        xa, xb, xc = s[z].op.axis
        s[z].reorder(xb, xc, xa)  # make the tensorized axis xa innermost
        vadd = intrin_vadd(factor)
        s[z].tensorize(xa, vadd)  # fails the is_one(extent) check with stale axis info
        s = s.normalize()
        print(tvm.lower(s, [x, y, z], simple_mode=True))

    check(16)

test_tensorize_vadd()
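The failing check can be sketched with a toy model (plain Python, not TVM code; the matching rule is paraphrased from the error message, not copied from tensorize.cc): when the matched region has more dimensions than the intrinsic's shape, the leading extra dimensions must all have extent 1.

```python
# Toy model of the dimension check that fires in src/op/tensorize.cc.
def region_matches(intrin_shape, region_extents):
    extra = len(region_extents) - len(intrin_shape)
    # the leading `extra` dimensions must be unit-extent
    if any(ext != 1 for ext in region_extents[:extra]):
        return False
    # the remaining dimensions must line up with the intrinsic shape
    return list(region_extents[extra:]) == list(intrin_shape)

# Without the reorder, the tensorized axis is the last tensor dimension:
print(region_matches([16], [1, 1, 16]))   # True
# After reorder(xb, xc, xa) the region keeps tensor-dimension order, so the
# extent-16 range sits first, exactly as in the error message:
print(region_matches([16], [16, 1, 1]))   # False
```

This illustrates why the error reports `expected shape=[16], given region=[range(ext=16), range(ext=1), range(ext=1)]`: the non-unit extent lands in a leading slot.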

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Comments: 6 (6 by maintainers)

Top GitHub Comments

1 reaction
ZihengJiang commented, Sep 2, 2018

For this situation, you need to bind the buffers with strides in the tensor intrinsic.

import tvm
import numpy as np

def intrin_vadd(n):
    x = tvm.placeholder((n, 1, 1), name='vx')
    y = tvm.placeholder((n, 1, 1), name='vy')
    z = tvm.compute(x.shape, lambda i, j, k: x[i, j, k] + y[i, j, k], name='z')
    def intrin_func(ins, outs):
        xx, yy = ins
        zz = outs[0]
        return tvm.call_packed("vadd", xx, yy, zz)

    # Bind each tensor to a buffer whose strides are symbolic variables,
    # so the intrinsic can match a non-compact sub-region of the tensors.
    strides = [tvm.var('so'), tvm.var('si'), 1]
    offset_factor = 1
    xb = tvm.decl_buffer(x.shape, x.dtype,
                         name="xb",
                         offset_factor=offset_factor,
                         strides=strides)
    yb = tvm.decl_buffer(y.shape, y.dtype,
                         name="yb",
                         offset_factor=offset_factor,
                         strides=strides)
    zb = tvm.decl_buffer(z.shape, z.dtype,
                         name="zb",
                         offset_factor=offset_factor,
                         strides=strides)
    binds = {x: xb, y: yb, z: zb}
    return tvm.decl_tensor_intrin(z.op, intrin_func, binds=binds)


def test_tensorize_vadd():
    m = 16
    n = 16
    l = 16
    x = tvm.placeholder((m, n, l), name='x')
    y = tvm.placeholder((m, n, l), name='y')
    z = tvm.compute(x.shape, lambda i, j, k: x[i, j, k] + y[i, j, k], name='z')

    def check(factor):
        s = tvm.create_schedule(z.op)
        xa, xb, xc = s[z].op.axis
        s[z].reorder(xb, xc, xa)
        print(tvm.lower(s, [x, y, z], simple_mode=True))
        vadd = intrin_vadd(factor)
        s[z].tensorize(xa, vadd)
        s = s.normalize()
        print(tvm.lower(s, [x, y, z], simple_mode=True))

    check(16)

test_tensorize_vadd()
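Why symbolic strides? After the reorder, the region handed to the intrinsic is a non-compact slice of the (16, 16, 16) output, so the intrinsic cannot assume a dense layout. A toy offset calculation (plain Python; the stride values are just the row-major strides of this example's shape) shows the spacing:

```python
# Toy model of why the buffer needs symbolic strides: consecutive elements
# along the tensorized axis i of a (16, 16, 16) row-major tensor are
# 16 * 16 = 256 elements apart in memory, not adjacent.
def flat_index(indices, strides):
    return sum(i * s for i, s in zip(indices, strides))

strides = (16 * 16, 16, 1)  # row-major strides of the full (16, 16, 16) tensor
# two consecutive elements along the tensorized axis i, at fixed (j, k):
a = flat_index((0, 3, 5), strides)
b = flat_index((1, 3, 5), strides)
print(b - a)  # 256 -> a compact-layout assumption would compute wrong addresses
```

Declaring the buffers with `strides=[tvm.var('so'), tvm.var('si'), 1]` lets TVM fill in these step sizes at match time instead of assuming a dense layout.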
0 reactions
xqdan commented, Aug 23, 2018

Understood, I'll make a copy to discuss.


