
saasbo_nehvi.ipynb fails when PyTorch tries to factorize a non-positive-definite matrix, on both CPU and GPU

See original GitHub issue

PyTorch 1.10 + cu102, Ubuntu 20.04, GPU: RTX 6000; also reproduced on CPU, and on Windows CPU.

```
[INFO 11-23 16:03:40] ax.models.torch.fully_bayesian:
                 mean    std  median   5.0%  95.0%   n_eff  r_hat
mean            -0.10   0.82   -0.07  -1.19   1.44  194.46   1.01
outputscale      4.83   4.12    3.55   0.77   9.44  172.53   1.00
lengthscale[0]   1.20   0.83    1.01   0.21   2.43  111.66   1.00
lengthscale[1]   9.63  15.33    6.16   1.06  15.24  169.07   1.00
lengthscale[2]   6.15   4.81    4.76   0.68  12.08  220.23   1.00
lengthscale[3]   6.48   5.05    5.24   1.06  11.37  215.25   1.00
lengthscale[4]   4.65   3.28    3.76   0.99   7.92  179.06   1.00
lengthscale[5]   2.68   2.72    1.51   0.36   6.37   94.22   1.03
lengthscale[6]   7.65  11.52    4.93   1.80  13.97  134.99   1.00
lengthscale[7]   5.88   5.04    4.29   0.77  12.71  171.61   1.00
lengthscale[8]   6.81   5.45    5.31   1.12  12.40  232.39   1.00
lengthscale[9]   6.97   7.43    4.73   1.00  14.96  105.17   1.01

[INFO 11-23 16:03:40] ax.models.torch.fully_bayesian: MCMC elapsed time: 37.0998637676239
Sample: 100%|████████████████████| 768/768 [00:35, 21.45it/s, step size=3.51e-01, acc. prob=0.925]

[INFO 11-23 16:04:16] ax.models.torch.fully_bayesian:
                 mean    std  median   5.0%  95.0%   n_eff  r_hat
mean            -0.16   0.79   -0.17  -1.28   1.19  308.53   1.00
outputscale      5.30   4.53    3.87   0.66  12.11  186.09   1.01
lengthscale[0]   0.97   0.55    0.79   0.29   1.75  172.10   1.02
lengthscale[1]  11.15  18.79    7.34   1.03  20.34  166.06   1.00
lengthscale[2]   7.47  10.60    5.15   1.13  13.23  183.21   1.01
lengthscale[3]   9.06  11.70    5.16   1.00  19.29  181.14   1.00
lengthscale[4]   7.57  16.39    4.83   0.71  13.13  146.31   1.00
lengthscale[5]   6.99  13.35    3.76   0.86  14.55  209.26   1.00
lengthscale[6]   8.88   9.48    6.12   1.40  17.37  124.39   1.00
lengthscale[7]   4.50   4.05    3.41   0.77   9.05  162.64   1.00
lengthscale[8]   9.89  10.98    6.63   1.58  20.14  124.57   1.00
lengthscale[9]   5.13   4.50    3.75   0.57   9.31   70.10   1.00

[INFO 11-23 16:04:16] ax.models.torch.fully_bayesian: MCMC elapsed time: 35.84380340576172
[WARNING 11-23 16:05:24] ax.service.utils.report_utils: Ignoring user-specified deduplicate_on_map_keys = True since exp.fetch_data().map_keys is empty or does not exist. Check that at least one element of metrics (or exp.metrics if metrics is None) inherits from MapMetric.
Iteration: 0, HV: 0.053266771382096754
Warmup:  18%|███████▏            | 139/768 [00:14, 7.02it/s, step size=5.49e-01, acc. prob=0.780]
```

```
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
C:\Users\H00633~1\AppData\Local\Temp/ipykernel_2128/2495104591.py in <module>
      3 model = None
      4 for i in range(N_BATCH):
----> 5     model = Models.FULLYBAYESIANMOO(
      6         experiment=experiment,
      7         data=data,

c:\users\h00633314\miniconda3\lib\site-packages\ax\modelbridge\registry.py in __call__(self, search_space, experiment, data, silently_filter_kwargs, **kwargs)
    318
    319     # Create model bridge with the consolidated kwargs.
--> 320     model_bridge = bridge_class(
    321         search_space=search_space or not_none(experiment).search_space,
    322         experiment=experiment,

c:\users\h00633314\miniconda3\lib\site-packages\ax\modelbridge\multi_objective_torch.py in __init__(self, experiment, search_space, data, model, transforms, transform_configs, torch_dtype, torch_device, status_quo_name, status_quo_features, optimization_config, fit_out_of_design, objective_thresholds, default_model_gen_options)
    101         optimization_config = mooc
    102
--> 103     super().__init__(
    104         experiment=experiment,
    105         search_space=search_space,

c:\users\h00633314\miniconda3\lib\site-packages\ax\modelbridge\torch.py in __init__(self, experiment, search_space, data, model, transforms, transform_configs, torch_dtype, torch_device, status_quo_name, status_quo_features, optimization_config, fit_out_of_design, default_model_gen_options)
     69     self.device = torch_device
     70     self._default_model_gen_options = default_model_gen_options or {}
---> 71     super().__init__(
     72         experiment=experiment,
     73         search_space=search_space,

c:\users\h00633314\miniconda3\lib\site-packages\ax\modelbridge\base.py in __init__(self, search_space, model, transforms, experiment, data, transform_configs, status_quo_name, status_quo_features, optimization_config, fit_out_of_design, fit_abandoned)
    178     self.model = model
    179     try:
--> 180         self._fit(
    181             model=model,
    182             search_space=search_space,

c:\users\h00633314\miniconda3\lib\site-packages\ax\modelbridge\torch.py in _fit(self, model, search_space, observation_features, observation_data)
    101     ) -> None:  # pragma: no cover
    102         self._validate_observation_data(observation_data)
--> 103         super()._fit(
    104             model=model,
    105             search_space=search_space,

c:\users\h00633314\miniconda3\lib\site-packages\ax\modelbridge\array.py in _fit(self, model, search_space, observation_features, observation_data)
     98     )
     99     # Fit
--> 100     self._model_fit(
    101         model=model,
    102         Xs=Xs_array,

c:\users\h00633314\miniconda3\lib\site-packages\ax\modelbridge\torch.py in _model_fit(self, model, Xs, Ys, Yvars, search_space_digest, metric_names, candidate_metadata)
    137     Yvars: List[Tensor] = self._array_list_to_tensors(Yvars)
    138     # pyre-fixme[16]: Optional has no attribute fit.
--> 139     self.model.fit(
    140         Xs=Xs,
    141         Ys=Ys,

c:\users\h00633314\miniconda3\lib\site-packages\ax\models\torch\botorch.py in fit(self, Xs, Ys, Yvars, search_space_digest, metric_names, candidate_metadata)
    293     )
    294     self.metric_names = metric_names
--> 295     self.model = self.model_constructor(  # pyre-ignore [28]
    296         Xs=Xs,
    297         Ys=Ys,

c:\users\h00633314\miniconda3\lib\site-packages\ax\models\torch\fully_bayesian.py in get_and_fit_model_mcmc(Xs, Ys, Yvars, task_features, fidelity_features, metric_names, state_dict, refit_model, use_input_warping, use_loocv_pseudo_likelihood, num_samples, warmup_steps, thinning, max_tree_depth, disable_progbar, gp_kernel, verbose, **kwargs)
    158     if state_dict is None or refit_model:
    159         for X, Y, Yvar, m in zip(Xs, Ys, Yvars, models):
--> 160             samples = run_inference(
    161                 pyro_model=pyro_model,  # pyre-ignore [6]
    162                 X=X,

c:\users\h00633314\miniconda3\lib\site-packages\ax\models\torch\fully_bayesian.py in run_inference(pyro_model, X, Y, Yvar, num_samples, warmup_steps, thinning, use_input_warping, max_tree_depth, disable_progbar, gp_kernel, verbose)
    439         disable_progbar=disable_progbar,
    440     )
--> 441     mcmc.run(
    442         X,
    443         Y,

~\AppData\Roaming\Python\Python39\site-packages\pyro\poutine\messenger.py in _context_wrap(context, fn, *args, **kwargs)
     10 def _context_wrap(context, fn, *args, **kwargs):
     11     with context:
---> 12         return fn(*args, **kwargs)
     13
     14

~\AppData\Roaming\Python\Python39\site-packages\pyro\infer\mcmc\api.py in run(self, *args, **kwargs)
    561     # requires_grad", which happens with jit_compile under PyTorch 1.7
    562     args = [arg.detach() if torch.is_tensor(arg) else arg for arg in args]
--> 563     for x, chain_id in self.sampler.run(*args, **kwargs):
    564         if num_samples[chain_id] == 0:
    565             num_samples[chain_id] += 1

~\AppData\Roaming\Python\Python39\site-packages\pyro\infer\mcmc\api.py in run(self, *args, **kwargs)
    221     logger = initialize_logger(logger, "", progress_bar)
    222     hook_w_logging = _add_logging_hook(logger, progress_bar, self.hook)
--> 223     for sample in _gen_samples(
    224         self.kernel,
    225         self.warmup_steps,

~\AppData\Roaming\Python\Python39\site-packages\pyro\infer\mcmc\api.py in _gen_samples(kernel, warmup_steps, num_samples, hook, chain_id, *args, **kwargs)
    148     yield {name: params[name].shape for name in save_params}
    149     for i in range(warmup_steps):
--> 150         params = kernel.sample(params)
    151         hook(
    152             kernel,

~\AppData\Roaming\Python\Python39\site-packages\pyro\infer\mcmc\nuts.py in sample(self, params)
    435         direction == 1
    436     ):  # go to the right, start from the right leaf of current tree
--> 437         new_tree = self._build_tree(
    438             z_right,
    439             r_right,

~\AppData\Roaming\Python\Python39\site-packages\pyro\infer\mcmc\nuts.py in _build_tree(self, z, r, z_grads, log_slice, direction, tree_depth, energy_current)
    279         r = half_tree.r_left
    280         z_grads = half_tree.z_left_grads
--> 281         other_half_tree = self._build_tree(
    282             z, r, z_grads, log_slice, direction, tree_depth - 1, energy_current
    283         )

[the frame above repeats twice more as the NUTS tree recursion descends]

~\AppData\Roaming\Python\Python39\site-packages\pyro\infer\mcmc\nuts.py in _build_tree(self, z, r, z_grads, log_slice, direction, tree_depth, energy_current)
    252     ):
    253         if tree_depth == 0:
--> 254             return self._build_basetree(
    255                 z, r, z_grads, log_slice, direction, energy_current
    256             )

~\AppData\Roaming\Python\Python39\site-packages\pyro\infer\mcmc\nuts.py in _build_basetree(self, z, r, z_grads, log_slice, direction, energy_current)
    197 def _build_basetree(self, z, r, z_grads, log_slice, direction, energy_current):
    198     step_size = self.step_size if direction == 1 else -self.step_size
--> 199     z_new, r_new, z_grads, potential_energy = velocity_verlet(
    200         z,
    201         r,

~\AppData\Roaming\Python\Python39\site-packages\pyro\ops\integrator.py in velocity_verlet(z, r, potential_fn, kinetic_grad, step_size, num_steps, z_grads)
     30     r_next = r.copy()
     31     for _ in range(num_steps):
---> 32         z_next, r_next, z_grads, potential_energy = _single_step_verlet(
     33             z_next, r_next, potential_fn, kinetic_grad, step_size, z_grads
     34         )

~\AppData\Roaming\Python\Python39\site-packages\pyro\ops\integrator.py in _single_step_verlet(z, r, potential_fn, kinetic_grad, step_size, z_grads)
     52         z[site_name] = z[site_name] + step_size * r_grads[site_name]  # z(n+1)
     53
---> 54     z_grads, potential_energy = potential_grad(potential_fn, z)
     55     for site_name in r:
     56         r[site_name] = r[site_name] + 0.5 * step_size * (-z_grads[site_name])  # r(n+1)

~\AppData\Roaming\Python\Python39\site-packages\pyro\ops\integrator.py in potential_grad(potential_fn, z)
     81         return grads, z_nodes[0].new_tensor(float("nan"))
     82     else:
---> 83         raise e
     84
     85     grads = grad(potential_energy, z_nodes)

~\AppData\Roaming\Python\Python39\site-packages\pyro\ops\integrator.py in potential_grad(potential_fn, z)
     74         node.requires_grad_(True)
     75     try:
---> 76         potential_energy = potential_fn(z)
     77     # deal with singular matrices
     78     except RuntimeError as e:

~\AppData\Roaming\Python\Python39\site-packages\pyro\infer\mcmc\util.py in _potential_fn_jit(self, skip_jit_warnings, jit_options, params)
    292
    293     if self._compiled_fn:
--> 294         return self._compiled_fn(*vals)
    295
    296     with pyro.validation_enabled(False):

RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
  C:\Users\h00633314\AppData\Roaming\Python\Python39\site-packages\torch\distributions\multivariate_normal.py(151): __init__
  C:\Users\h00633314\AppData\Roaming\Python\Python39\site-packages\pyro\distributions\distribution.py(18): __call__
  c:\users\h00633314\miniconda3\lib\site-packages\ax\models\torch\fully_bayesian.py(400): pyro_model
  C:\Users\h00633314\AppData\Roaming\Python\Python39\site-packages\pyro\poutine\messenger.py(12): _context_wrap
  C:\Users\h00633314\AppData\Roaming\Python\Python39\site-packages\pyro\poutine\messenger.py(12): _context_wrap
  C:\Users\h00633314\AppData\Roaming\Python\Python39\site-packages\pyro\poutine\messenger.py(12): _context_wrap
  C:\Users\h00633314\AppData\Roaming\Python\Python39\site-packages\pyro\poutine\trace_messenger.py(174): __call__
  C:\Users\h00633314\AppData\Roaming\Python\Python39\site-packages\pyro\poutine\trace_messenger.py(198): get_trace
  C:\Users\h00633314\AppData\Roaming\Python\Python39\site-packages\pyro\infer\mcmc\util.py(278): _potential_fn
  C:\Users\h00633314\AppData\Roaming\Python\Python39\site-packages\pyro\infer\mcmc\util.py(305): _pe_jit
  c:\users\h00633314\miniconda3\lib\contextlib.py(79): inner
  C:\Users\h00633314\AppData\Roaming\Python\Python39\site-packages\torch\jit\_trace.py(786): trace
  C:\Users\h00633314\AppData\Roaming\Python\Python39\site-packages\pyro\infer\mcmc\util.py(309): _potential_fn_jit
  C:\Users\h00633314\AppData\Roaming\Python\Python39\site-packages\pyro\ops\integrator.py(76): potential_grad
  C:\Users\h00633314\AppData\Roaming\Python\Python39\site-packages\pyro\infer\mcmc\hmc.py(328): setup
  C:\Users\h00633314\AppData\Roaming\Python\Python39\site-packages\pyro\infer\mcmc\api.py(144): _gen_samples
  C:\Users\h00633314\AppData\Roaming\Python\Python39\site-packages\pyro\infer\mcmc\api.py(223): run
  C:\Users\h00633314\AppData\Roaming\Python\Python39\site-packages\pyro\infer\mcmc\api.py(563): run
  C:\Users\h00633314\AppData\Roaming\Python\Python39\site-packages\pyro\poutine\messenger.py(12): _context_wrap
  c:\users\h00633314\miniconda3\lib\site-packages\ax\models\torch\fully_bayesian.py(441): run_inference
  c:\users\h00633314\miniconda3\lib\site-packages\ax\models\torch\fully_bayesian.py(160): get_and_fit_model_mcmc
  c:\users\h00633314\miniconda3\lib\site-packages\ax\models\torch\botorch.py(295): fit
  c:\users\h00633314\miniconda3\lib\site-packages\ax\modelbridge\torch.py(139): _model_fit
  c:\users\h00633314\miniconda3\lib\site-packages\ax\modelbridge\array.py(100): _fit
  c:\users\h00633314\miniconda3\lib\site-packages\ax\modelbridge\torch.py(103): _fit
  c:\users\h00633314\miniconda3\lib\site-packages\ax\modelbridge\base.py(180): __init__
  c:\users\h00633314\miniconda3\lib\site-packages\ax\modelbridge\torch.py(71): __init__
  c:\users\h00633314\miniconda3\lib\site-packages\ax\modelbridge\multi_objective_torch.py(103): __init__
  c:\users\h00633314\miniconda3\lib\site-packages\ax\modelbridge\registry.py(320): __call__
  C:\Users\H00633~1\AppData\Local\Temp/ipykernel_2128/2495104591.py(5): <module>
  [remaining frames: IPython / ipykernel / tornado / asyncio / runpy event-loop machinery]
RuntimeError: torch.linalg.cholesky: The factorization could not be completed because the input is not positive-definite (the leading minor of order 26 is not positive-definite).
```
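For context, the final error is easy to reproduce in isolation, and a common workaround when building a covariance/kernel matrix by hand is to add a small diagonal "jitter" before factorizing. A minimal sketch (the rank-deficient matrix here is illustrative, not the one Ax constructs):

```python
import torch

# A rank-deficient matrix is not positive definite, so Cholesky fails:
A = torch.ones(3, 3, dtype=torch.float64)  # rank 1
try:
    torch.linalg.cholesky(A)
except RuntimeError as err:
    print("cholesky failed:", err)

# Common workaround: add a small diagonal "jitter" so the
# matrix becomes (numerically) positive definite.
jitter = 1e-6
L = torch.linalg.cholesky(A + jitter * torch.eye(3, dtype=torch.float64))
print(L.shape)  # torch.Size([3, 3])
```

Note this only addresses hand-built matrices; in the Ax/Pyro pipeline above the factorization happens inside the jit-compiled potential function, which is why the fix discussed below lives in Pyro rather than in user code.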

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 10 (6 by maintainers)

Top GitHub Comments

1 reaction
hanochk commented, Nov 24, 2021

Hi David, thanks for pointing out the fix you made; it works just fine.

On Tue, Nov 23, 2021, 23:22 David Eriksson @.***> wrote:

I ran the tutorial notebook a few times with Pyro (master) and couldn’t reproduce the issue.


1 reaction
hanochk commented, Nov 23, 2021

Many thanks, David, I just finished reviewing your paper 😃

On Tue, Nov 23, 2021, 20:35 David Eriksson @.***> wrote:

It looks like Pyro hasn’t released a new version since that PR was committed, so you probably have to install the latest version from source (see more here: https://github.com/pyro-ppl/pyro).
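The mechanism behind the fix is visible in the traceback's `potential_grad` frames: Pyro catches the `RuntimeError` raised by a singular / non-positive-definite factorization and returns a NaN potential energy, so NUTS rejects that step instead of crashing; the referenced change extended which errors are handled. A minimal sketch of that pattern, where `safe_potential_grad` and the toy `potential_fn` are illustrative stand-ins, not Pyro's API:

```python
import torch

def safe_potential_grad(potential_fn, z):
    """Sketch of the error-handling pattern in pyro.ops.integrator.potential_grad:
    if evaluating the potential hits a singular / non-positive-definite matrix,
    return a NaN energy so the sampler can reject the step instead of crashing."""
    z = {name: t.detach().requires_grad_(True) for name, t in z.items()}
    nodes = list(z.values())
    try:
        potential_energy = potential_fn(z)
    except RuntimeError as e:
        if "singular" in str(e) or "positive-definite" in str(e):
            # Zero gradients + NaN energy -> proposal is rejected downstream.
            grads = {name: torch.zeros_like(t) for name, t in z.items()}
            return grads, nodes[0].new_tensor(float("nan"))
        raise
    grads = torch.autograd.grad(potential_energy, nodes)
    return dict(zip(z, grads)), potential_energy.detach()

# A toy potential whose covariance is rank-1 (not positive definite),
# so torch.linalg.cholesky raises during evaluation:
def potential_fn(z):
    cov = z["x"].sum() * torch.ones(3, 3)
    torch.linalg.cholesky(cov)  # fails: rank-deficient covariance
    return z["x"].pow(2).sum()

grads, energy = safe_potential_grad(potential_fn, {"x": torch.ones(3)})
print(energy)  # tensor(nan): the step is rejected rather than raising
```

With a JIT-compiled potential the error surfaces as a TorchScript `RuntimeError` with a different message, which is why older Pyro releases re-raised it as seen in the traceback.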

