No outputs with the IPEX optimized model on Colab
I am on Colab, and here are the CPU stats (lscpu output):
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
Stepping: 0
CPU MHz: 2199.998
BogoMIPS: 4399.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 56320K
NUMA node0 CPU(s): 0,1
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat md_clear arch_capabilities
I installed the extension with
!python -m pip install intel_extension_for_pytorch -f https://software.intel.com/ipex-whl-stable
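After installing, a quick sanity check (my addition, assuming the wheel exposes the usual __version__ attribute) is to import the extension and print both versions, to confirm the import itself doesn’t crash and that the wheel matches the installed PyTorch build:

import torch
import intel_extension_for_pytorch as ipex

# If this import or these prints fail, the problem is the environment, not the model code.
print("torch:", torch.__version__)
print("ipex: ", ipex.__version__)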
When I run the following:
import torch
import torchvision.models as models
model = models.resnet50(pretrained=True)
model.eval()
data = torch.rand(1, 3, 224, 224)
model = model.to(memory_format=torch.channels_last)
data = data.to(memory_format=torch.channels_last)
#################### code changes ####################
import intel_extension_for_pytorch as ipex
model = ipex.optimize(model)
######################################################
with torch.no_grad():
    print(model(data))
It doesn’t do anything. In fact, when run interactively it makes Colab restart the runtime, and when run from the Colab terminal (or from a shell with python <script_name>.py) it doesn’t print anything.
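One way to tell whether the script is exiting abnormally rather than simply printing nothing (a diagnostic sketch, not part of the original report; repro.py is a placeholder name for the script above) is to run it as a child process and inspect the return code; a negative return code from subprocess means the process was killed by a signal:

import subprocess

# Run the reproduction script in a child process and report how it exited.
result = subprocess.run(["python", "repro.py"], capture_output=True, text=True)
print("return code:", result.returncode)  # negative => killed by a signal (e.g. SIGILL)
print("stdout:", result.stdout)
print("stderr:", result.stderr)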
@rahulunair thanks for providing the Colab.
I further extended it to include the following:
@EikanWang I can confirm that with IPEX (version 1.12.100) it works.
With the vanilla PyTorch model the average latency is 0.1710120957200047 seconds, and with the optimized model it’s 0.13993526895998912 seconds.
Here’s my extended Colab: https://colab.research.google.com/gist/sayakpaul/2d31764c53da3c959c9294d457a3c0eb/scratchpad.ipynb.
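For reference, a minimal version of such a latency comparison might look like the sketch below; the warmup and iteration counts are illustrative placeholders, not necessarily what the notebook uses:

import time
import torch
import torchvision.models as models
import intel_extension_for_pytorch as ipex

def average_latency(m, data, n_iter=100, n_warmup=10):
    # Warm up first so one-time costs don't skew the average.
    with torch.no_grad():
        for _ in range(n_warmup):
            m(data)
        start = time.time()
        for _ in range(n_iter):
            m(data)
    return (time.time() - start) / n_iter

model = models.resnet50(pretrained=True).eval().to(memory_format=torch.channels_last)
data = torch.rand(1, 3, 224, 224).to(memory_format=torch.channels_last)

print("vanilla:", average_latency(model, data))
print("ipex:   ", average_latency(ipex.optimize(model), data))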
I believe the runtime boosts would be more significant on an AVX-512-capable machine; the free tier of Colab doesn’t seem to provide one.
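One illustrative way to check this on a given runtime (not something from the thread) is to look for the AVX2 / AVX-512 flags in /proc/cpuinfo:

# Check the CPU flags of the current runtime for AVX2 and AVX-512 support.
with open("/proc/cpuinfo") as f:
    flags = next(line for line in f if line.startswith("flags")).split()
print("avx2:   ", "avx2" in flags)
print("avx512f:", "avx512f" in flags)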
Thanks for the information. We are working on this issue and will keep you posted if there are any findings.