
Inverted prediction with regressor?


Hi all,

First of all, thanks for your work on this awesome repository!

I’m having some issues with an ‘exploding’ forecast that seems to be kind of inverted. I’m creating a Prophet model for ‘Object A’ that doesn’t have a lot of data. Therefore, there’s no seasonality found in the data and thus only a trend available. To compensate for this, I’m extracting the seasonality component from a model that was trained on ‘Group A’ to which ‘Object A’ belongs (and which has more data available). I’m then adding the seasonality output from the ‘Group A’ model as a regressor to the model of ‘Object A’.

So to summarise: the seasonality of ‘Group A’ (to which ‘Object A’ belongs) is added as a regressor to ‘Object A’. You can find a small schematic overview of this below. (schematic image)

You can see the prediction of ‘Object A’, together with the components, in the screenshot below.

(Screenshot: the top-left graph is the trend of ‘Object A’; the bottom-left graph is the regressor, i.e. the seasonality of ‘Group A’; the right graph is the prediction of ‘Object A’.)

The issue here is that the prediction is kind of ‘exploding’ in the wrong direction. You can see that the extra regressor is going up around 2021-03/2021-04, but the actual prediction is going down. Around 2021-06, the regressor is going down again, while the prediction is actually going up.

I don’t really understand how this is happening as this way of working (extracting seasonality data from a model and using it in another model) is working fine for all my other models. I suppose it has something to do with the negative trend, but I still don’t feel like it is expected behaviour?
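A minimal numeric sketch of my suspicion (illustrative toy numbers, not Prophet internals): in multiplicative mode the regressor effect scales the trend, roughly yhat = trend * (1 + beta * x), so once the trend turns negative, a positive bump in the regressor pushes the forecast further down:

```python
import numpy as np

# Illustrative only (not Prophet internals): in multiplicative mode a
# regressor scales the trend, roughly yhat = trend * (1 + beta * x).
trend = np.array([2.0, 1.0, -1.0, -2.0])  # trend turning negative
x = np.array([0.0, 0.5, 0.5, 0.0])        # regressor, e.g. group seasonality
beta = 1.0                                 # made-up coefficient

yhat = trend * (1 + beta * x)
# While the trend is positive the regressor lifts the forecast (1.0 -> 1.5),
# but once the trend is negative it drags it further down (-1.0 -> -1.5).
print(yhat.tolist())  # [2.0, 1.5, -1.5, -2.0]
```

Under this reading, the steeper the negative trend, the more an upswing in the regressor drags the forecast down, which matches the inverted shape I'm seeing.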

The regressor is added to the model using the following code:

model.add_regressor('Regressor Name', mode='multiplicative')

The model itself is simply created with the following code:

model = Prophet()

Extra remarks

  • The data itself is in weekly format.
  • The data cannot go negative (it should always be positive). I’m currently using the ‘clipping’ approach from #1668, where I simply clip predicted negative values to 0.
  • I’m using Prophet 0.7.1 together with Python 3.7.
  • The seasonality of ‘Group A’ that I’m using is shown in the attached screenshot.
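For reference, the clipping I apply is just a post-processing step on the forecast frame, something like the following (toy numbers; the column list is illustrative, following Prophet's usual forecast column names):

```python
import pandas as pd

# Post-process the forecast frame: floor negative predictions at 0
# (the clipping approach from #1668; toy numbers for illustration).
forecast = pd.DataFrame({"yhat": [3.2, 1.1, -0.4, -2.0]})
for col in ("yhat", "yhat_lower", "yhat_upper"):
    if col in forecast.columns:
        forecast[col] = forecast[col].clip(lower=0)

print(forecast["yhat"].tolist())  # [3.2, 1.1, 0.0, 0.0]
```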

Thank you in advance for any input and/or solution!

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Reactions: 1
  • Comments: 12 (4 by maintainers)

Top GitHub Comments

2 reactions
JeremyKeustersML6 commented, Nov 12, 2020

> I think in these lines:
>
>         while min(trend[indx_future:]) < 0:
>             indx_neg = indx_future + np.argmax(trend[indx_future:] < 0)
>
> if you replace 0 with something greater than 0 that would do it. However, the trend is fit in a scaled space, where everything has been divided by m.y_scale. So if you want the trend to saturate at 5, then you would need to set it to 5 / m.y_scale in the trend code above (except m isn’t being passed into that function, so you’d need to hack it in somewhere). I think the combination of this class with an additive regressor may be worth giving a shot too.

Thanks again for your answer! I indeed tried something like this in the past, but the program would get stuck in some kind of infinite loop, so that probably has to do with the m.y_scale you’re talking about. I’ll try to hack m into that function, thanks a lot for this!


On the other hand, I was pleasantly surprised by the results that I got when using the growth='flat' solution from @hansukyang!

For Object A: (screenshot)

For Object B: (screenshot)

The results make sense, as I wanted more ‘weight’ on the regressor. It also didn’t really make sense to fit a real trend based on only a few data points, so that is kind of solved here. I will still experiment with adding e.g. the trend of ‘Group A’ and ‘Group B’ respectively (instead of only adding their seasonalities) to still have some kind of ‘real trend’ in there!
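For intuition on why the flat trend behaves better here (a toy sketch with made-up numbers, not Prophet internals): with a constant positive trend level c, the multiplicative combination yhat = c * (1 + beta * x) always moves in the same direction as the regressor, so the sign flip from the negative trend disappears:

```python
import numpy as np

# Toy sketch: a flat (constant, positive) trend times the regressor effect.
c = 2.0                                    # flat trend level
x = np.array([0.0, 1.0, 2.0, 1.0, 0.0])   # group seasonality regressor
beta = 0.5                                 # made-up coefficient

yhat = c * (1 + beta * x)
# The forecast now rises and falls together with the regressor.
print(yhat.tolist())  # [2.0, 3.0, 4.0, 3.0, 2.0]
```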


Thank you again to @bletham and @hansukyang! You’ve both been really helpful! I might still update this issue in the future with additional findings.

1 reaction
bletham commented, Nov 17, 2020

Ah, yes, I should have noticed that sooner. The

            k_t[indx_neg:] -= k_t[indx_neg]
            m_t[indx_neg:] -= m_t[indx_neg]

is resetting the trend to have 0 slope (k_t) and 0 offset (m_t) at indx_neg, so that the trend is flat at 0. To have the trend be flat at 1/y_scale, what we really want is 0 slope and an offset of 1/y_scale. So

            k_t[indx_neg:] -= k_t[indx_neg]
            m_t[indx_neg:] = m_t[indx_neg:] - m_t[indx_neg] + 1/y_scale

should do the job. I would worry a little bit about relying on < for the float comparison with 1/y_scale, so I’d probably add a little tolerance in there, like add an extra 1e-6 or something to m_t.
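Putting the snippets above together, a self-contained sketch of the floored trend might look like this (names and the piecewise-linear form trend = k_t * t + m_t are illustrative, not actual Prophet internals; in Prophet the floor would be the desired value divided by m.y_scale, and the tolerance guards the float comparison):

```python
import numpy as np

def flatten_trend_at_floor(t, k_t, m_t, indx_future, floor=0.0, tol=1e-6):
    """Flatten a piecewise-linear trend (trend = k_t * t + m_t) wherever
    it would drop below `floor` in the future. Illustrative sketch of the
    #1668 approach with the fixes above; not actual Prophet code.
    """
    trend = k_t * t + m_t
    while trend[indx_future:].min() < floor - tol:
        # first future index where the trend dips below the floor
        indx_neg = indx_future + np.argmax(trend[indx_future:] < floor - tol)
        # zero the slope from that point on and pin the offset at the floor
        k_t[indx_neg:] -= k_t[indx_neg]
        m_t[indx_neg:] = m_t[indx_neg:] - m_t[indx_neg] + floor
        trend = k_t * t + m_t
    return trend

# Toy example: a trend falling from 3 with slope -1, floored at 0.
t = np.arange(8, dtype=float)
trend = flatten_trend_at_floor(t, np.full(8, -1.0), np.full(8, 3.0), 0)
print(trend.tolist())  # [3.0, 2.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0]
```

The while loop repeats because, with later changepoints, flattening at one index can still leave a later segment that dips below the floor.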
