Details reproduced from email correspondence in November 2019.
There are slight discrepancies between the output of guarantee_preprocessed.py in the current version of the code and the figures reported in Kates-Harbeck et al. (2019) for the JET dataset.
When using the full all_signals dictionary defined in data/signals.py for the 0D FRNN model with JET carbon wall (CW) training -> ITER-like wall (ILW) testing (jet_data_0D), I end up with 5502 processed shots out of the 5524 raw input files, and 479 of the 488 raw disruptive shots.
This amounts to 8 fewer shots overall than the 5510 published in Extended Data Table 2. Specifically, there are 3 more CW disruptive shots in the (train + validate) set in my results, which, to balance out to 8 fewer overall, means there must also be 4 fewer nondisruptive CW shots and 7 fewer nondisruptive ILW shots (+3 - 4 - 7 = -8).
One of @ge-dong's .npz files of processed JET shots from October 2019 exactly matches my numbers. I have looked through the Git history of the relevant preprocessing files and cannot find any change to the "omit" criteria that would account for the difference. I assume that the raw input shot lists and data for the 5524 JET candidates have not changed since the paper was published.
I have long suspected that the shot counts and some of the early JET results in the paper predate the addition of the two radiated-power signals, Prad,core and Prad,edge, to the JET datasets in the code (even though they do appear in Extended Data Table 1). They were not in our original 8-9 signal set, and the files in /tigress/jk7/best_performance/deep_jet/ do not list them.
If you remove pradedge and pradcore from the all_signals dictionary and run guarantee_preprocessed.py, you end up with 5514 total processed shots, and the ILW set counts exactly match the test set numbers from the paper: 1191 (174).
In this case there are only 4 extra disruptive CW shots vs. the numbers from the paper, and the split of disruptive shots between the train and validate sets is different, but the latter could simply be due to a change in the random number sequence.
So perhaps the Nature paper numbers predate the addition of the two extra radiated-power signals on JET, and some change in the preprocessing since publication now allows 4 additional disruptive shots to escape omission. A toy sketch of the signal dictionary and of the exclusion described above follows.
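For concreteness only: the Signal class, signal keys, and node paths below are placeholders, not the actual contents of data/signals.py, and in practice the entries would simply be removed or commented out in that file rather than filtered like this.

```python
# Toy illustration only: not the real data/signals.py. The Signal class,
# signal keys, and node paths are placeholders for the kind of mapping that
# the preprocessing consumes.
class Signal:
    def __init__(self, description, paths):
        self.description = description
        self.paths = paths  # per-machine database node paths (placeholders)

all_signals = {
    "q95":      Signal("q95 safety factor",   ["path/to/q95"]),
    "ip":       Signal("plasma current",      ["path/to/ip"]),
    "li":       Signal("internal inductance", ["path/to/li"]),
    "pradcore": Signal("core radiated power", ["path/to/pradcore"]),
    "pradedge": Signal("edge radiated power", ["path/to/pradedge"]),
}

# The "remove pradedge, pradcore" experiment, expressed as a dict filter:
EXCLUDED = {"pradcore", "pradedge"}
reduced_signals = {name: sig for name, sig in all_signals.items()
                   if name not in EXCLUDED}
print("kept", len(reduced_signals), "of", len(all_signals), "signals")
```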
To-do
The lesson from this is that guarantee_preprocessed.py should always write the details of the omitted shots to another .txt file (or add them to the existing processed_shotlists/d3d_0D/shot_lists_signal_group_X.npz); a rough sketch of such a dump follows the list below:
omitted shot numbers (so that the set of all input/raw shot numbers could be reconstructed in conjunction with the included shot numbers)
criterion for omission
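A rough sketch of the proposed omitted-shot dump (the function name, file layout, and example entries are suggestions, not existing guarantee_preprocessed.py behaviour):

```python
# Hypothetical sketch of the proposed plaintext dump of omitted shots.
import os

def write_omitted_shots(omitted, out_dir, group_name):
    """Write one line per omitted shot: shot number and the criterion that excluded it."""
    path = os.path.join(out_dir, "omitted_shots_{}.txt".format(group_name))
    with open(path, "w") as f:
        f.write("# shot_number\treason_for_omission\n")
        for shot_number, reason in sorted(omitted):
            f.write("{}\t{}\n".format(shot_number, reason))
    return path

# Hypothetical usage (made-up shot numbers and reasons):
# write_omitted_shots([(81234, "missing signal: pradcore"),
#                      (82345, "signal too short after clipping")],
#                     "processed_shotlists/jet_data_0D", "signal_group_X")
```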
Plaintext would be good for reproducibility; see #41. Even though this info is contained in the .npz file and/or the codebase, it would be good to dump the following to .txt as well (a second sketch follows this list):
Exact shot numbers for the included shots in each of the train/validate/test sets
Precise signal path info (in the MDSplus database) for all signals in this signal group
Details about resampling, clipping, causal shifting, etc.
This will be useful for the real-time inference model (cc @mdboyer).
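A similarly hedged sketch of the plaintext summary proposed in the list above: included shot numbers per split, signal paths for the group, and the resampling/clipping/shifting settings. All field names and the layout are illustrative, not an existing output format.

```python
# Hypothetical sketch of a per-signal-group plaintext summary.
def write_dataset_summary(path, splits, signal_paths, preprocessing):
    """splits: split name -> iterable of shot numbers;
    signal_paths: signal name -> database path string;
    preprocessing: setting name -> value (dt, clip limits, causal shift, ...)."""
    with open(path, "w") as f:
        for name, shots in splits.items():
            f.write("[{}] {} shots\n".format(name, len(shots)))
            f.write(" ".join(str(s) for s in sorted(shots)) + "\n")
        f.write("[signals]\n")
        for name, db_path in signal_paths.items():
            f.write("{}\t{}\n".format(name, db_path))
        f.write("[preprocessing]\n")
        for key, value in preprocessing.items():
            f.write("{} = {}\n".format(key, value))

# Hypothetical usage:
# write_dataset_summary("processed_shotlists/jet_data_0D/summary.txt",
#                       {"train": train_shots, "validate": val_shots, "test": test_shots},
#                       {"q95": "path/to/q95", "ip": "path/to/ip"},
#                       {"dt": 0.001, "causal_shift": 0, "clip_outliers": True})
```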