How to prepare training data, especially the size? #2
Comments
Hi! The input to the LiESN/ESN should be a 3D tensor of size "batch size" x "time length" x "input dimension". The input dimension is set when the LiESN is created. To use a batch size greater than one, all the input time series must have the same length; if that is not the case, use batch_size = 1. Hope this helps. Nils
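For concreteness, a minimal sketch of that layout. The constructor argument names follow the ones mentioned in this thread (input_dim, hidden_dim, output_dim), and the train-then-finalize pattern follows EchoTorch's examples; details may differ between versions:

```python
import torch
import echotorch.nn as etnn

batch_size, time_length, input_dim = 1, 10000, 30

esn = etnn.LiESN(input_dim=input_dim, hidden_dim=100, output_dim=1)

u = torch.randn(batch_size, time_length, input_dim)  # batch x time x features
y = torch.randn(batch_size, time_length, 1)          # targets in the same layout

esn(u, y)        # training pass: accumulate reservoir states and targets
esn.finalize()   # solve the ridge regression for the output weights
y_hat = esn(u)   # after finalize(), forward() returns predictions
```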
Hi Nils, thank you for your reply. However, I am afraid your guess is not exactly correct. In the example above, what should I do? Could you please help me find a way to reshape the input data? Thanks a lot.
Hi, so your input data is a 30-dim time series of length 10000, right? The class TensorDataset takes samples along the first dimension of x_tr, so the tensor you give to the ESN is probably of size "batch_size" x 30. If x_tr is a single dataset, you can give it directly to the ESN after adding a batch dimension:
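Something like this (a sketch; unsqueeze(0) is the standard PyTorch way to add a leading batch dimension):

```python
u = x_tr.unsqueeze(0)   # 10000 x 30  ->  1 x 10000 x 30
y = y_tr.unsqueeze(0)   # same for the targets
esn(u, y)
```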
Can you show me the complete code? Regards, Nils
Dear Nils, My code is as follows:
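A rough reconstruction of that code, based on the shapes and classes described in this thread (the arrays and hyper-parameters here are guesses):

```python
import numpy as np
import torch
from torch.utils.data import TensorDataset, DataLoader
import echotorch.nn as etnn

x_tr = np.random.randn(10000, 30).astype(np.float32)  # 30 features per timestamp
y_tr = np.random.randn(10000, 1).astype(np.float32)

trainset = TensorDataset(torch.from_numpy(x_tr), torch.from_numpy(y_tr))
trainloader = DataLoader(trainset, batch_size=15, shuffle=False)

esn = etnn.LiESN(input_dim=30, hidden_dim=100, output_dim=1)

for u, y in trainloader:
    # u comes out as 15 x 30 (2D): TensorDataset slices along time,
    # so each "batch" is 15 timestamps, not 15 sequences -> shape error
    esn(u, y)
```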
Since the raw input data of
Thank you very much for your patience. Regards,
Hi, I'm trying to replicate the examples provided (https://github.com/nschaetti/EchoTorch/blob/master/examples/timeserie_prediction/narma10_esn.py). The same kind of issue appears:
The line that makes this issue appear is:
It seems related to the issue you ran into, @FajunChen. Also, when trying on a custom dataset, using the … @nschaetti, maybe one way to address this issue in the future would be to include a util to import and automatically convert tabular data like .csv?
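Such a util could be quite small. A sketch, assuming pandas is acceptable as a dependency (csv_to_esn_input is a hypothetical name, not an EchoTorch function):

```python
import pandas as pd
import torch

def csv_to_esn_input(path):
    """Hypothetical helper: read tabular data (rows = timestamps,
    columns = features) and return a 1 x time_length x input_dim tensor."""
    df = pd.read_csv(path)
    u = torch.tensor(df.values, dtype=torch.float32)
    return u.unsqueeze(0)

# usage: u = csv_to_esn_input("series.csv"), then esn(u, targets)
```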
Hi, I have the same error while trying to compile the Switch Attractor example. The input_dim is equal to 1 by default, but if I try to change it I get a size mismatch error. Can you help? Thanks in advance!
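For reference, input_dim has to agree with the last dimension of the tensor fed to the network; raising one without the other is what produces the mismatch. An illustration (not the switch-attractor code itself; shapes are made up):

```python
import torch
import echotorch.nn as etnn

esn = etnn.LiESN(input_dim=3, hidden_dim=100, output_dim=1)

u_bad = torch.randn(1, 500, 1)  # 1 feature but input_dim=3 -> size mismatch
u_ok = torch.randn(1, 500, 3)   # last dimension matches input_dim
esn(u_ok, torch.randn(1, 500, 1))
```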
Sorry, it was a long time ago. I cannot remember the details now.
Then how do I fix it?
I've become interested in solving the problem that gives rise to this bug (having an input dimension greater than 1). Coupled nonlinear systems are, on their own, pretty cool, and having accurate (over relatively short timescales) models would be extraordinarily useful. I've brute-forced my way through every Windows bug and edited Nils' code so it can be called without producing pickling errors. But now I can't even figure out where w_in.mv is defined.
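(For what it's worth, mv is most likely not defined in EchoTorch at all: it is PyTorch's built-in matrix-vector product, torch.Tensor.mv. A quick demonstration with made-up shapes:)

```python
import torch

w_in = torch.randn(100, 30)  # e.g. a hidden_dim x input_dim input-weight matrix
u_t = torch.randn(30)        # one input vector at a single timestep
x = w_in.mv(u_t)             # matrix-vector product, result has size 100
```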
If I'm not mistaken, it was corrected two months ago by fix 6e1ea94. I believe @nschaetti may want to review this issue and decide whether to close it! :)
Interesting. I'm using the corrected code.
I have a batch of time series data for regression analysis. Every timestamp has 30 features. At the beginning, the data are prepared as numpy ndarrays. Then I transform them into tensor datasets and set batch_size=15 for the data loader, just like this:
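Roughly the following (a reconstruction from the description; the arrays here are placeholders):

```python
import numpy as np
import torch
from torch.utils.data import TensorDataset, DataLoader

x_tr = np.random.randn(10000, 30).astype(np.float32)  # 10000 timestamps x 30 features
y_tr = np.random.randn(10000, 1).astype(np.float32)

trainset = TensorDataset(torch.from_numpy(x_tr), torch.from_numpy(y_tr))
trainloader = DataLoader(trainset, batch_size=15, shuffle=False)
```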
However, I got an error as follows.
It looks like the forward method needs the parameter "u" to be a 3-D tensor, and time_length needs to be set explicitly. Does time_length mean the number of reservoirs? But we already have hidden_dim.
I am quite confused about how to prepare the training data for LiESN. Could you please help me?