Initialize MTGP on GPU
It looks like get_MTGP initializes the model on CPU by default, and it does not have a device argument like the other model bridges. Is there another way to run the multi-task model with EI on the GPU? Thanks a lot!
Issue Analytics
- Created: a year ago
- Comments: 5
Top Results From Across the Web
- Reliable Initialization of GPU-enabled Parallel Stochastic ...: Our work provides empirically checked statuses designed to initialize a particular ... Twister for Graphics Processors (MTGP) that has just been released.
- Device API Overview - NVIDIA Documentation Center: This function initializes n states, based on the specified parameter set and ... The following example uses the cuRAND host MTGP setup API, ...
- Reliable Initialization of GPU-enabled Parallel ... - arXiv Vanity: In this manner, we first intend to analyze the features of a particular generator designed for GPU hardware architectures: MTGP. The second purpose ...
- Mersenne Twister for Graphic Processors (MTGP): MTGP is a new variant of Mersenne Twister (MT) introduced by Mutsuo Saito and Makoto Matsumoto in 2009. MTGP is designed with some ...
- [PDF] Reliable Initialization of GPU-enabled Parallel Stochastic ...: A type of pseudorandom number generator, Mersenne Twister for Graphic Processor (MTGP), for efficient generation on graphics processing units (GPUs), ...
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
I didn't realize there were other factory functions taking a device arg. Made a PR. Also, feel free to submit PRs for this kind of thing in the future.
Thanks for the quick response. I just tried this and ran into some issues. For context, I'm basically trying to reproduce the multi-task optimization tutorial (https://ax.dev/tutorials/multi_task.html). This means that I need a multi-type model, correct? I couldn't find something like
Models.MT_MTGP
, or is there yet another workaround?

What ended up working for me was to adapt get_MTGP to pass the device argument through, like the other factory functions do.
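The workaround described above can be sketched generically. This is an illustrative stand-in, not Ax's actual code: the names `get_mtgp` and `MultiTaskModel` below are hypothetical, and the point is only the pattern of a factory function that previously hard-coded its backend now accepting and forwarding a `device` argument, mirroring what the other factory functions already do.

```python
# Illustrative sketch of the fix (hypothetical names, not Ax's real API):
# the factory gains a `device` parameter and forwards it to the model
# constructor instead of always building the model on the CPU.

class MultiTaskModel:
    """Stand-in for a model that can live on either CPU or GPU."""

    def __init__(self, device="cpu"):
        self.device = device


def get_mtgp(device="cpu"):
    # Before the fix, the factory ignored the device and always built
    # the model on CPU; the fix is simply passing the argument through.
    return MultiTaskModel(device=device)


model = get_mtgp(device="cuda")
print(model.device)  # -> cuda
```

In the real library, the same one-line change (accepting `device` and threading it into the underlying model bridge constructor) is what the PR mentioned in the comments above implements.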