I share your concerns about having too little data, but here is how you can do it.
First, it's a good idea to keep your values between -1 and +1, so I'd normalize the data before anything else.
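A minimal sketch of that scaling, assuming your raw series is a NumPy array (rawData here is just a placeholder for your real data):

import numpy as np

rawData = np.random.rand(100, 52, 1) * 500  # placeholder, shape (samples, 52, 1)

# scale linearly into [-1, +1]; keep dataMin/dataMax to invert the transform later
dataMin = rawData.min()
dataMax = rawData.max()
entireData = 2 * (rawData - dataMin) / (dataMax - dataMin) - 1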
For the LSTM model, you must make sure you're using return_sequences=True.
There is nothing "wrong" with your model, but it may need more or fewer layers or units to achieve what you want (there is no clear answer to this, though).
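For reference, one possible starting point (a sketch only; the number of layers and units is arbitrary, and the names are mine, not from your code):

from keras.models import Sequential
from keras.layers import LSTM, Dense

model = Sequential()
model.add(LSTM(64, return_sequences=True, input_shape=(None, 1)))  # None allows variable sequence lengths
model.add(LSTM(64, return_sequences=True))
model.add(Dense(1, activation='tanh'))  # tanh keeps outputs within [-1, +1]
model.compile(optimizer='adam', loss='mse')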
Training the model to predict the next step:
All you need is to pass y as a shifted X:
entireData = arrayWithShape((samples, 52, 1))  # pseudocode: your (normalized) data
X = entireData[:, :-1, :]   # steps 0 to 50 as input
y = entireData[:, 1:, :]    # steps 1 to 51 as targets (shifted one step ahead)
Train the model using these.
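For example (the batch size and number of epochs here are arbitrary):

model.fit(X, y, epochs=100, batch_size=8)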
Predicting the future:
Now, for predicting the future, since we need to use predicted elements as input for more predicted elements, we are going to use a loop and make the model stateful=True.
Create a model equal to the previous one, with these changes:
- All LSTM layers must have stateful=True
- The batch input shape must be (batch_size, None, 1) (the None allows variable lengths)
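Continuing the sketch above, the new model could look like this (the layer sizes must match the trained model; batch size 1 assumes you predict one sequence at a time, as described below):

newModel = Sequential()
newModel.add(LSTM(64, return_sequences=True, stateful=True,
                  batch_input_shape=(1, None, 1)))
newModel.add(LSTM(64, return_sequences=True, stateful=True))
newModel.add(Dense(1, activation='tanh'))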
Copy the weights of the previously trained model:
newModel.set_weights(model.get_weights())
Predict only one sample at a time, and never forget to call newModel.reset_states() before starting any sequence.
First, predict with the sequence you already know (this makes sure the model builds up its internal states properly before predicting the future):

newModel.reset_states()
predictions = newModel.predict(entireData)
Because of the way we trained, the last step in predictions will be the first future element:
futureElement = predictions[:,-1:,:]
futureElements = []
futureElements.append(futureElement)
Now we make a loop where this element is the input. (Because the model is stateful, it will understand this as a new step of the previous sequence instead of a new sequence.)
for i in range(howManyPredictions):
    futureElement = newModel.predict(futureElement)
    futureElements.append(futureElement)
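If you want the predicted future as a single array, you can stack the collected steps along the time axis and, if you normalized earlier, invert the scaling (dataMin and dataMax are the values kept from the normalization sketch):

import numpy as np

futurePredictions = np.concatenate(futureElements, axis=1)

# undo the [-1, +1] scaling (skip this if you didn't normalize)
futurePredictions = (futurePredictions + 1) * (dataMax - dataMin) / 2 + dataMin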
This link contains a complete example predicting the future of two features: https://github.com/danmoller/TestRepo/blob/master/TestBookLSTM.ipynb