
Proposal: separate Predictor objects

Feature request

model.predict_f() currently recomputes everything (kernel matrix, Cholesky, triangular_solve) from scratch at each call. Instead of recomputing these quantities, we should provide some kind of predictor object that only does those computations that touch the new input points, and caches everything that can be precomputed.
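
For example (illustrative usage only; the model and data below are made up), every call in a prediction loop like this repeats the expensive work that depends only on the training data:

    import numpy as np
    import gpflow

    X = np.random.rand(1000, 1)
    Y = np.sin(10 * X) + 0.1 * np.random.randn(1000, 1)
    model = gpflow.models.GPR((X, Y), kernel=gpflow.kernels.SquaredExponential())

    for Xnew in np.split(np.linspace(0, 1, 10_000)[:, None], 100):
        # Each call rebuilds K(X, X), re-runs the O(N^3) Cholesky factorisation
        # and repeats the triangular solves, even though none of that work
        # depends on Xnew.
        mean, var = model.predict_f(Xnew)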

Motivation

Why does this matter? We want GPflow to be as fast as it can be! Recomputing lots of cacheable quantities from scratch is inefficient and unnecessarily slow.

The slowness of predictions (particularly in a loop) is a well-known, recurring issue, e.g. https://github.com/GPflow/GPflow/issues/1554, https://github.com/GPflow/GPflow/issues/1189, https://github.com/GPflow/GPflow/issues/1030, https://github.com/GPflow/GPflow/issues/1003. We recently merged https://github.com/GPflow/GPflow/pull/1528, which was added so that a special-cased version of this proposal could be implemented downstream. (Note that there are further efficiency gains possible that even #1528 does not allow for.) It would be much better to implement this properly and make it available to all users out of the box.

Proposal

Predictions should be efficient out of the box for GPflow models. Details are open for discussion!

One option could be something like a model.predictor() method which returns an (optionally tf.function-decorated) closure that takes new input points Xnew and computes the prediction efficiently (having already precomputed and cached all that can be computed without access to the test inputs).
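
One possible sketch of such a predictor for a GPR model (hypothetical code, not an existing GPflow API; it assumes a single-output gpflow.models.GPR and only returns marginal variances, i.e. the full_cov=False case):

    import tensorflow as tf
    import gpflow

    def make_predictor(model: gpflow.models.GPR, use_tf_function: bool = True):
        # Hypothetical helper: precompute everything that depends only on the
        # training data, then close over it.
        X, Y = model.data
        err = Y - model.mean_function(X)
        K = model.kernel(X) + model.likelihood.variance * tf.eye(
            tf.shape(X)[0], dtype=X.dtype
        )
        L = tf.linalg.cholesky(K)                 # O(N^3), computed once
        alpha = tf.linalg.cholesky_solve(L, err)  # K^{-1} (Y - m(X)), computed once

        def predict_f(Xnew):
            # Only these computations depend on the test inputs:
            Kxs = model.kernel(X, Xnew)                         # [N, M]
            mean = tf.matmul(Kxs, alpha, transpose_a=True) + model.mean_function(Xnew)
            A = tf.linalg.triangular_solve(L, Kxs, lower=True)
            var = model.kernel(Xnew, full_cov=False) - tf.reduce_sum(tf.square(A), axis=0)
            return mean, var[:, None]

        return tf.function(predict_f) if use_tf_function else predict_f

With something along these lines, a loop over many batches of test points pays the cubic Cholesky cost once rather than on every call; the per-call work is only the cross-covariance, one triangular solve, and a few matrix products.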

Questions:

  • Should this be extended to the other predict functions (predict mean and var, predict log density)?
  • How will we be able to cache all the quantities needed within the various conditional branches? (E.g. not just cholesky(Kuu), but also the solve against q_mu; see the sketch after this list for the unwhitened case.)
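
In the unwhitened parametrisation, not only cholesky(Kuu) but also the solve against q_mu and the matrix Kuu^{-1} - Kuu^{-1} S Kuu^{-1} depend only on the inducing inputs and the variational parameters. A rough sketch of that split (hypothetical code, not GPflow's conditional implementation; assumes a single latent GP, a full [M, M] q_sqrt, a GPflow-style kernel callable, and no mean function):

    import tensorflow as tf

    def precompute_svgp(kernel, Z, q_mu, q_sqrt, jitter=1e-6):
        # Everything here depends only on the inducing inputs Z and the
        # variational parameters, so it could be cached between predictions.
        M = tf.shape(Z)[0]
        Kuu = kernel(Z) + jitter * tf.eye(M, dtype=Z.dtype)
        Luu = tf.linalg.cholesky(Kuu)
        alpha = tf.linalg.cholesky_solve(Luu, q_mu)       # Kuu^{-1} q_mu
        S = tf.matmul(q_sqrt, q_sqrt, transpose_b=True)   # covariance of q(u)
        Kuu_inv = tf.linalg.cholesky_solve(Luu, tf.eye(M, dtype=Z.dtype))
        # Forming the explicit inverse keeps the sketch short; a real
        # implementation might prefer to keep triangular factors instead.
        Qinv = Kuu_inv - Kuu_inv @ S @ Kuu_inv            # Kuu^{-1} - Kuu^{-1} S Kuu^{-1}
        return alpha, Qinv

    def predict_svgp(kernel, Z, alpha, Qinv, Xnew):
        # Only this part touches the test inputs.
        Kus = kernel(Z, Xnew)                             # [M, N*]
        mean = tf.matmul(Kus, alpha, transpose_a=True)
        var = kernel(Xnew, full_cov=False) - tf.reduce_sum(Kus * (Qinv @ Kus), axis=0)
        return mean, var[:, None]

The whitened representation and the other conditional branches would each need their own set of cached terms, which is exactly the design question raised above.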

Issue Analytics

  • State: open
  • Created: 3 years ago
  • Reactions: 5
  • Comments: 7 (7 by maintainers)

Top GitHub Comments

1 reaction
joelberkeley-secondmind commented, Oct 22, 2020

This ticket is introduced as: here’s a problem, implement this solution. Can I suggest instead: here’s a problem, solve the problem, and here’s one possible way of doing it? That way we avoid biasing towards particular solutions.

0 reactions
st-- commented, Oct 27, 2021

@joelberkeley-secondmind we’re caching the quantities that depend on the training data. There’s no caching of test points.
