
Doubts in Demand Forecasting using TFT


Hello, I am really thankful for the work you have put into pytorch-forecasting. In the demand forecasting with TFT example:

  1. In the plotting code:

         for idx in range(10):  # plot 10 examples
             best_tft.plot_prediction(new_x, new_raw_predictions, idx=idx, show_future_observed=False)

     I am not sure what these 10 examples are from; are they for different batch sizes?
  2. In the Actuals vs Predictions section, the tutorial says:

     "Checking how the model performs across different slices of the data allows us to detect weaknesses. Plotted below are the means of predictions vs actuals across each variable, divided into 100 bins, using the calculate_prediction_actual_by_variable() and plot_prediction_actual_by_variable() methods. The gray bars denote the frequency of the variable by bin, i.e. they are a histogram."

     Does this mean that we are trying to tell how much these individual variables help in prediction?

  3. In:

         new_raw_predictions, new_x = best_tft.predict(new_prediction_data, mode="raw", return_x=True)

     What is the difference between mode="raw" and mode="prediction"?

  4. I also checked the values in val_dataloader using x, y in iter(dataloader). Here x['decoder_target'] already contains the target values that we want to predict. So:
     - during training, is this used for teacher forcing?
     - when predicting on val_dataloader, is this part still used, or is it ignored? (Does mode='prediction' in .predict(...) ensure it is not used, or does validation.to_dataloader(train=False, batch_size=batch_size * 10, num_workers=0) handle this?)

Thank you.

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 9 (4 by maintainers)

Top GitHub Comments

jdb78 commented, Jun 5, 2021 (1 reaction)
  1. By ranges, I mean that you can see in your examples that the model tends not to capture low predictions (which could be outliers). Generally, there is no range of the discount_in_percent variable that it seems to struggle with.
  2. Yes (the right axis is frequency).
  3. It looks like it is OK with the discount. Whether your model makes good predictions depends on the metrics and what you deem good. But there are also other considerations. I would check the dependence plots, as well as good and bad examples as in the tutorial, to get an idea.
jdb78 commented, Jun 5, 2021 (1 reaction)

First of all: Great to hear you like the package! Let me try to answer your questions:

I am not sure what these 10 examples are from, are they for different batch sizes?

These are just the first 10 examples. If they come from the training dataset, they will be random selections.
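In other words, idx indexes individual samples within the returned prediction batch, not different batch sizes. A toy illustration with plain numpy (shapes are made up for the sketch; the real object is new_raw_predictions["prediction"]):

```python
import numpy as np

# Hypothetical stand-in for the prediction tensor: one forecast of
# length 6 for each of 32 samples in the batch.
predictions = np.random.rand(32, 6)

# idx selects the idx-th sample of the batch, exactly as in the tutorial loop.
first_ten = [predictions[idx] for idx in range(10)]

print(len(first_ten), first_ten[0].shape)  # 10 samples, each a 6-step forecast
```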

Does this mean to say that we are trying to tell how much these individual variables help in prediction?

No, this helps to understand in which variable ranges the model struggles to make good predictions. The importance of variables for prediction is a different analysis.
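The slicing the answer describes can be sketched conceptually in plain numpy (a toy, not the pytorch-forecasting implementation; the variable name and 10-bin count are illustrative, the tutorial uses 100 bins):

```python
import numpy as np

# Slice the data by one variable (here a toy "discount_in_percent"),
# then compare mean prediction vs mean actual inside each bin.
rng = np.random.default_rng(0)
discount = rng.uniform(0, 50, size=1000)                 # variable to slice by
actuals = 100 - discount + rng.normal(0, 5, size=1000)   # toy target
predictions = actuals + rng.normal(0, 3, size=1000)      # toy model output

bins = np.linspace(discount.min(), discount.max(), 11)   # 10 bins
bin_idx = np.digitize(discount, bins[1:-1])              # bin index 0..9 per row

mean_actual = np.array([actuals[bin_idx == b].mean() for b in range(10)])
mean_pred = np.array([predictions[bin_idx == b].mean() for b in range(10)])
frequency = np.bincount(bin_idx, minlength=10)           # the gray histogram bars

# A large gap between mean_pred and mean_actual in a bin flags a value
# range where the model struggles.
print(np.abs(mean_pred - mean_actual).max())
```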

What is the difference between mode="raw" and mode="prediction"?

"raw" returns the raw output of the network, which contains the predictions, target scales, etc., while "prediction" returns just the predicted target values.
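A toy illustration of the difference (this mimics the shape of the behavior, it is not the real pytorch-forecasting API; the key names besides "prediction" are made up for the sketch):

```python
import numpy as np

def toy_predict(mode="prediction"):
    # mode="raw": a dict-like bundle of everything the network produced.
    raw = {
        "prediction": np.zeros((32, 6)),         # forecasts per sample
        "target_scale": np.ones((32, 2)),        # scaling info (illustrative)
        "encoder_attention": np.zeros((32, 4)),  # extras for interpretation (illustrative)
    }
    if mode == "raw":
        return raw
    # mode="prediction": only the predicted target values.
    return raw["prediction"]

raw_out = toy_predict(mode="raw")
pred_out = toy_predict(mode="prediction")
print(sorted(raw_out.keys()), pred_out.shape)
```

Plotting helpers such as plot_prediction need the extra entries, which is why the tutorial uses mode="raw" there.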

Here x['decoder_target'] already contains the target values that we want to predict. So:

It is actually not needed for teacher forcing; it is mostly there for plotting.
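A toy sketch of the point (not the real model code): the decoder targets ride along in the batch so that actuals can be plotted next to forecasts, but the forecast itself never reads them.

```python
import numpy as np

def toy_forward(x):
    # Uses only the observed history; ignores x["decoder_target"] entirely.
    last = x["encoder_target"][:, -1:]
    return np.repeat(last, 6, axis=1)  # naive "repeat last value" forecast

x = {
    "encoder_target": np.arange(32 * 24, dtype=float).reshape(32, 24),
    "decoder_target": np.full((32, 6), -999.0),  # sentinel: would be obvious if used
}
forecast = toy_forward(x)
print(forecast.shape, (forecast == -999.0).any())  # the sentinel never leaks in
```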

