diff --git a/README b/README
index 64836ad..ef94497 100644
--- a/README
+++ b/README
@@ -38,20 +38,33 @@ The file structure of the WRF Test Framework is as follows:
     Runs/                    -- All WRF tests are performed here
     clean                    -- Script for cleaning out old tests. run `./clean` or view the script's comments for usage instructions
+    run_all_Cheyenne.csh     -- Script that submits the GNU and Intel tests via qsub and runs the PGI tests in a screen session.
     run_from_github.py       -- Script for running the test directly from code on Github. See below for instructions.
     qsub_this_to_run_all_compilers.pbs.csh  -- Script for running tests for all compilers on Cheyenne. See below for instructions.
+    qsub_*.pbs               -- Scripts for running the tests for one specific compiler on Cheyenne.
 
 =================================================================================================
-                             TO RUN DIRECTLY FROM GITHUB ON CHEYENNE
+                    TO RUN DIRECTLY FROM GITHUB ON A SUPERCOMPUTER (CHEYENNE)
 =================================================================================================
 
-1) View the script `scripts/run_all_for_qsub.csh` and make modifications if necessary. The most
-   common modification that may need to be made is to change the compiler versions. Note that
-   this script sets up each compiler as necessary, then submits each individual test as
-   described below.
+1) If you are running on Cheyenne, view the following files and make modifications if necessary:
+   *) `qsub_intel.pbs` and `scripts/run_intel.csh`
+   *) `qsub_gnu.pbs` and `scripts/run_gnu.csh`
+   in order to change the account to be charged. Another common modification that may be needed is
+   changing the compiler versions.
+   Note that the script `run_all_Cheyenne.csh` sets up each compiler as necessary, then submits each
+   individual test as described below using the scripts above. It is also important to view the files:
+   *) `regTest_*_Cheyenne.wtf`
+   and make modifications if necessary. These files must end in "*.wtf" (case sensitive).
+   Lengthy explanations are given in the existing WRF Test Files for each configuration.
+   The most important parameters to check are near the top of each file.
+
+   If you are not running on Cheyenne, view `qsub_this_to_run_all_compilers.pbs.csh`,
+   `scripts/run_all_for_qsub1.csh`, `scripts/run_all_for_qsub2.csh`, and regTest_*_*.wtf
+   for similar modifications.
 
 2) In the top-level directory, run the script:
 
@@ -60,35 +73,57 @@ The file structure of the WRF Test Framework is as follows:
    This will prompt you to enter the URL of the repository you want to test (for example,
    https://github.com/wrf-model/WRF), and the name of the branch you want to test (for example,
    master). It will download the code, pack it into a tar file, and submit a test job using the
-   `qsub_this_to_run_all_compiler.csh` script described below.
+   `qsub_this_to_run_all_compilers.pbs.csh` script described below. Finally, it will ask if you are on
+   Cheyenne. If the answer is yes, it submits the GNU and Intel tests via qsub and runs the PGI tests
+   in a screen session using the script run_all_Cheyenne.csh. If not, it submits everything via qsub
+   using the script qsub_this_to_run_all_compilers.pbs.csh.
 
-3) When the test has completed, results will be summarized in the directory Runs/RESULTS.
+3) To follow the progress of the runs, check the foo_* log files for each compiler. When the test
+   has completed, results will be summarized in the directory Runs/RESULTS.
 
 =================================================================================================
-                              TO RUN ON A LOCAL TAR FILE ON CHEYENNE
+                     TO RUN ON A LOCAL TAR FILE ON A SUPERCOMPUTER (CHEYENNE)
 =================================================================================================
 
 1) Place one or more WRF source tar files in the "tarballs" directory. Each file must end in
    "*.tar" (case sensitive), and the tarfile names should not contain spaces. Each WRF tarfile
-   must create the directory "WRFV3" when unpacked. Otherwise, the script will fail. NOTE: IT
+   must create the directory "WRF" when unpacked. Otherwise, the script will fail. NOTE: IT
    IS NOT RECOMMENDED TO RUN FOR MORE THAN ONE TAR FILE AT A TIME, unless you are running
    overnight, as the process can take a long time and use a significant fraction of Cheyenne's
    resources.
 
-2) View the script `scripts/run_all_for_qsub.csh` and make modifications if necessary. Note
-   that this script sets up each compiler as necessary, then submits each individual test as
-   described belowcompiler This file must end in "*.wtf" (case sensitive).
+2) If you are running on Cheyenne, view the following files and make modifications if necessary:
+   *) `qsub_intel.pbs` and `scripts/run_intel.csh`
+   *) `qsub_gnu.pbs` and `scripts/run_gnu.csh`
+   For instance, the account usually has to be changed. Note that the script `run_all_Cheyenne.csh`
+   sets up each compiler as necessary, then submits each individual test as described below using
+   the scripts above. It is also important to view the files:
+   *) `regTest_*_Cheyenne.wtf`
+   and make modifications if necessary. These files must end in "*.wtf" (case sensitive).
    Lengthy explanations are given in the existing WRF Test Files for each configuration.
    The most important parameters to check are near the top of the file.
+
+   If you are not running on Cheyenne, view `qsub_this_to_run_all_compilers.pbs.csh`,
+   `scripts/run_all_for_qsub1.csh`, `scripts/run_all_for_qsub2.csh`, and regTest_*_*.wtf
+   for similar modifications.
+
+3) If you are on Cheyenne, in the top-level directory, issue the command:
+
+      "./run_all_Cheyenne.csh >& cheyenne.log &"
+
+   This will run separate tests simultaneously, using qsub for the GNU and Intel compilers and
+   screen for the PGI compiler, as well as separate tests for WRFDA 4DVAR (since the configuration
+   option numbers are different).
 
-3) In the top-level directory, issue the command:
+   If you are not on Cheyenne, in the top-level directory, issue the command:
 
       "qsub < qsub_this_to_run_all_compilers.pbs.csh"
 
-   This will run separate tests simultaneously for GNU, Intel, and PGI compilers, as well as
-   separate tests for WRFDA 4DVAR (since the configuration option numbers are different).
+   This will run separate tests simultaneously via qsub for the GNU, Intel, and PGI compilers,
+   as well as separate tests for WRFDA 4DVAR (since the configuration option numbers are different).
 
-4) When the test has completed, results will be summarized in the directory Runs/RESULTS.
+4) To follow the progress of the runs, check the foo_* log files for each compiler. When the test
+   has completed, results will be summarized in the directory Runs/RESULTS.
 
 
 =================================================================================================
                              TO RUN MANUALLY FOR A SINGLE COMPILER
 =================================================================================================
@@ -118,7 +153,8 @@ The file structure of the WRF Test Framework is as follows:
    WRF runs have completed when all the WRF output files are checked for invalid values (NaNs).
-5) When the test has completed, results will be summarized in the directory Runs/RESULTS.
+5) To follow the progress of the runs, check the foo_* log files for each compiler. When the test
+   has completed, results will be summarized in the directory Runs/RESULTS.
 
 
 ------------------------------------------------------------------------------------------------------
diff --git a/qsub_gnu.pbs b/qsub_gnu.pbs
new file mode 100644
index 0000000..d334da8
--- /dev/null
+++ b/qsub_gnu.pbs
@@ -0,0 +1,46 @@
+#!/bin/csh -f
+#
+# PBS batch script
+#
+# Job name
+#PBS -N gnu_WTF_v04.08
+# queue: share, regular, economy
+#PBS -q economy
+# Combine error and output files
+#PBS -j oe
+# output filename
+#PBS -o regtest_gnu.out
+# error filename
+#PBS -e regtest_gnu.out
+# wallclock time hh:mm:ss
+#PBS -l walltime=12:00:00
+# Project charge code
+#PBS -A UCUD0004
+# Claim 18 cores for this script (individual jobs created by this script can use way more)
+#PBS -l select=1:ncpus=18:mem=60GB
+# Send email on abort or end of main job
+#PBS -m ae
+
+if ( ! -e Data ) then
+  echo "ERROR ERROR ERROR"
+  echo ""
+  echo "'Data' directory not found"
+  echo "If on Cheyenne, link /glade/p/wrf/Data into your WTF directory"
+  echo ""
+  echo "Otherwise, it can be downloaded from http://www2.mmm.ucar.edu/wrf/tmp/"
+  exit 1
+endif
+
+if ( ! -e Data/v04.08 ) then
+  echo "ERROR ERROR ERROR"
+  echo ""
+  echo "'Data' directory is not the correct version (v04.08)"
+  echo "If on Cheyenne, link /glade/p/wrf/Data into your WTF directory"
+  echo ""
+  echo "Otherwise, it can be downloaded from http://www2.mmm.ucar.edu/wrf/tmp/data_v04.08.tar"
+  exit 2
+endif
+
+setenv TMPDIR /gpfs/fs1/scratch/$USER/tmp   # CISL-recommended hack for Cheyenne builds
+
+scripts/run_gnu.csh
diff --git a/qsub_intel.pbs b/qsub_intel.pbs
new file mode 100644
index 0000000..ae55a54
--- /dev/null
+++ b/qsub_intel.pbs
@@ -0,0 +1,46 @@
+#!/bin/csh -f
+#
+# PBS batch script
+#
+# Job name
+#PBS -N intel_WTF_v04.08
+# queue: share, regular, economy
+#PBS -q economy
+# Combine error and output files
+#PBS -j oe
+# output filename
+#PBS -o regtest_intel.out
+# error filename
+#PBS -e regtest_intel.out
+# wallclock time hh:mm:ss
+#PBS -l walltime=12:00:00
+# Project charge code
+#PBS -A UCUD0004
+# Claim 18 cores for this script (individual jobs created by this script can use way more)
+#PBS -l select=1:ncpus=18:mem=60GB
+# Send email on abort or end of main job
+#PBS -m ae
+
+if ( ! -e Data ) then
+  echo "ERROR ERROR ERROR"
+  echo ""
+  echo "'Data' directory not found"
+  echo "If on Cheyenne, link /glade/p/wrf/Data into your WTF directory"
+  echo ""
+  echo "Otherwise, it can be downloaded from http://www2.mmm.ucar.edu/wrf/tmp/"
+  exit 1
+endif
+
+if ( ! -e Data/v04.08 ) then
+  echo "ERROR ERROR ERROR"
+  echo ""
+  echo "'Data' directory is not the correct version (v04.08)"
+  echo "If on Cheyenne, link /glade/p/wrf/Data into your WTF directory"
+  echo ""
+  echo "Otherwise, it can be downloaded from http://www2.mmm.ucar.edu/wrf/tmp/data_v04.08.tar"
+  exit 2
+endif
+
+setenv TMPDIR /gpfs/fs1/scratch/$USER/tmp   # CISL-recommended hack for Cheyenne builds
+
+scripts/run_intel.csh
diff --git a/qsub_pgi.pbs b/qsub_pgi.pbs
new file mode 100644
index 0000000..a759362
--- /dev/null
+++ b/qsub_pgi.pbs
@@ -0,0 +1,48 @@
+#!/bin/csh -f
+#
+# PBS batch script
+#
+# Job name
+#PBS -N pgi_WTF_v04.08
+# queue: share, regular, economy
+#PBS -q economy
+# Combine error and output files
+#PBS -j oe
+# output filename
+#PBS -o regtest_pgi.out
+# error filename
+#PBS -e regtest_pgi.out
+# wallclock time hh:mm:ss
+#PBS -l walltime=12:00:00
+# Project charge code
+#PBS -A UCUD0004
+# Claim 18 cores for this script (individual jobs created by this script can use way more)
+#PBS -l select=1:ncpus=18:mem=60GB
+# Send email on abort or end of main job
+#PBS -m ae
+# login-node environment
+#PBS -l inception=login
+
+if ( ! -e Data ) then
+  echo "ERROR ERROR ERROR"
+  echo ""
+  echo "'Data' directory not found"
+  echo "If on Cheyenne, link /glade/p/wrf/Data into your WTF directory"
+  echo ""
+  echo "Otherwise, it can be downloaded from http://www2.mmm.ucar.edu/wrf/tmp/"
+  exit 1
+endif
+
+if ( ! -e Data/v04.08 ) then
+  echo "ERROR ERROR ERROR"
+  echo ""
+  echo "'Data' directory is not the correct version (v04.08)"
+  echo "If on Cheyenne, link /glade/p/wrf/Data into your WTF directory"
+  echo ""
+  echo "Otherwise, it can be downloaded from http://www2.mmm.ucar.edu/wrf/tmp/data_v04.08.tar"
+  exit 2
+endif
+
+setenv TMPDIR /gpfs/fs1/scratch/$USER/tmp   # CISL-recommended hack for Cheyenne builds
+
+scripts/run_pgi.csh
diff --git a/qsub_this_to_run_all_compilers.pbs.csh b/qsub_this_to_run_all_compilers.pbs.csh
index 637860f..4b550d7 100644
--- a/qsub_this_to_run_all_compilers.pbs.csh
+++ b/qsub_this_to_run_all_compilers.pbs.csh
@@ -5,7 +5,7 @@
 # Job name
 #PBS -N WTF_v04.08
 # queue: share, regular, economy
-#PBS -q share
+#PBS -q economy
 # Combine error and output files
 #PBS -j oe
 # output filename
@@ -13,9 +13,9 @@
 # error filename
 #PBS -e regtest.out
 # wallclock time hh:mm:ss
-#PBS -l walltime=06:00:00
+#PBS -l walltime=12:00:00
 # Project charge code
-#PBS -A P64000400
+#PBS -A UCUD0004
 # Claim 18 cores for this script (individual jobs created by this script can use way more)
 #PBS -l select=1:ncpus=18:mem=60GB
 # Send email on abort or end of main job
@@ -42,7 +42,7 @@ if ( ! -e Data/v04.08 ) then
   exit 2
 endif
 
-setenv TMPDIR /glade/scratch/$USER/tmp      # CISL-recommended hack for Cheyenne builds
+setenv TMPDIR /gpfs/fs1/scratch/$USER/tmp   # CISL-recommended hack for Cheyenne builds
 
 scripts/run_all_for_qsub1.csh
 scripts/run_all_for_qsub2.csh
diff --git a/regTest_gnu_Cheyenne.wtf b/regTest_gnu_Cheyenne.wtf
index 9c8edb9..6fc121d 100644
--- a/regTest_gnu_Cheyenne.wtf
+++ b/regTest_gnu_Cheyenne.wtf
@@ -36,7 +36,7 @@ fi
 #   idealized cases, and if any are specified, then "em_real" or "em_real8" (respectively) must also be included earlier in the list.
 export BUILD_TYPES="em_real em_real8 em_hill2d_x nmm_nest em_chem em_chem_kpp em_b_wave em_quarter_ss em_quarter_ss8 nmm_hwrf wrfda_3dvar em_move em_fire"
-export BUILD_TYPES="em_real em_real8 em_hill2d_x nmm_nest em_chem em_b_wave em_quarter_ss em_quarter_ss8 nmm_hwrf em_move em_fire"
+export BUILD_TYPES="em_real em_real8 em_hill2d_x em_chem em_b_wave em_quarter_ss em_quarter_ss8 em_move em_fire"
 
 # Select one of the directories below. This is the parent directory for all namelist.input files to be used for
 # the tests. Below this directory should be subdirectories "em_real", "nmm_nest", etc with namelist.input files
@@ -86,13 +86,13 @@ export BATCH_QUEUE_TYPE=PBS
 
 # Cheyenne: share, regular, economy
 # Ignored when BATCH_COMPILE and BATCH_TEST are false.
-export BUILD_QUEUE=share
-export TEST_QUEUE=share
+export BUILD_QUEUE=regular
+export TEST_QUEUE=regular
 #export TEST_QUEUE=economy
 
 
 # Account charge code string for queue-managed computers. Ignored when BATCH_COMPILE and BATCH_TEST are false.
-export BATCH_ACCOUNT=NMMM0054
+export BATCH_ACCOUNT=UCUD0004
 
 # Number of processors/tasks to use for building WRF.
 # No reason to set larger than 6 due to diminishing returns on compile time.
diff --git a/regTest_gnu_Cheyenne_WRFDA.wtf b/regTest_gnu_Cheyenne_WRFDA.wtf
index 710da72..2c209bb 100644
--- a/regTest_gnu_Cheyenne_WRFDA.wtf
+++ b/regTest_gnu_Cheyenne_WRFDA.wtf
@@ -85,13 +85,13 @@ export BATCH_QUEUE_TYPE=PBS
 
 # Cheyenne: share, regular, economy
 # Ignored when BATCH_COMPILE and BATCH_TEST are false.
-export BUILD_QUEUE=share
-export TEST_QUEUE=share
+export BUILD_QUEUE=regular
+export TEST_QUEUE=regular
 #export TEST_QUEUE=economy
 
 
 # Account charge code string for queue-managed computers. Ignored when BATCH_COMPILE and BATCH_TEST are false.
-export BATCH_ACCOUNT=NMMM0054
+export BATCH_ACCOUNT=UCUD0004
 
 # Number of processors/tasks to use for building WRF.
 # No reason to set larger than 6 due to diminishing returns on compile time.
diff --git a/regTest_intel_Cheyenne.wtf b/regTest_intel_Cheyenne.wtf
index b7105a6..1de0c0a 100644
--- a/regTest_intel_Cheyenne.wtf
+++ b/regTest_intel_Cheyenne.wtf
@@ -37,7 +37,7 @@ fi
 export BUILD_TYPES="em_real"
 export BUILD_TYPES="em_real em_real8 em_hill2d_x nmm_nest em_chem em_chem_kpp em_b_wave em_quarter_ss em_quarter_ss8 nmm_hwrf wrfda_3dvar em_move em_fire"
-export BUILD_TYPES="em_real em_real8 em_hill2d_x nmm_nest em_chem em_b_wave em_quarter_ss em_quarter_ss8 nmm_hwrf em_move em_fire"
+export BUILD_TYPES="em_real em_real8 em_hill2d_x em_chem em_b_wave em_quarter_ss em_quarter_ss8 em_move em_fire"
 
 
 # Select one of the directories below. This is the parent directory for all namelist.input files to be used for
@@ -88,13 +88,13 @@ export BATCH_QUEUE_TYPE=PBS
 
 # Cheyenne: share, regular, economy
 # Ignored when BATCH_COMPILE and BATCH_TEST are false.
-export BUILD_QUEUE=share
-export TEST_QUEUE=share
+export BUILD_QUEUE=regular
+export TEST_QUEUE=regular
 #export TEST_QUEUE=economy
 
 
 # Account charge code string for queue-managed computers. Ignored when BATCH_COMPILE and BATCH_TEST are false.
-export BATCH_ACCOUNT=NMMM0054
+export BATCH_ACCOUNT=UCUD0004
 
 # Number of processors/tasks to use for building WRF.
 # No reason to set larger than 6 due to diminishing returns on compile time.
diff --git a/regTest_intel_Cheyenne_WRFDA.wtf b/regTest_intel_Cheyenne_WRFDA.wtf
index 3222193..a97f174 100644
--- a/regTest_intel_Cheyenne_WRFDA.wtf
+++ b/regTest_intel_Cheyenne_WRFDA.wtf
@@ -85,13 +85,13 @@ export BATCH_QUEUE_TYPE=PBS
 
 # Cheyenne: share, regular, economy
 # Ignored when BATCH_COMPILE and BATCH_TEST are false.
-export BUILD_QUEUE=share
-export TEST_QUEUE=share
+export BUILD_QUEUE=regular
+export TEST_QUEUE=regular
 #export TEST_QUEUE=economy
 
 
 # Account charge code string for queue-managed computers. Ignored when BATCH_COMPILE and BATCH_TEST are false.
-export BATCH_ACCOUNT=NMMM0054
+export BATCH_ACCOUNT=UCUD0004
 
 # Number of processors/tasks to use for building WRF.
 # No reason to set larger than 6 due to diminishing returns on compile time.
diff --git a/regTest_pgi_Cheyenne.wtf b/regTest_pgi_Cheyenne.wtf
index fc0f8a3..44685b0 100644
--- a/regTest_pgi_Cheyenne.wtf
+++ b/regTest_pgi_Cheyenne.wtf
@@ -36,7 +36,7 @@ fi
 #   idealized cases, and if any are specified, then "em_real" or "em_real8" (respectively) must also be included earlier in the list.
 export BUILD_TYPES="em_real em_real8 em_hill2d_x nmm_nest em_chem em_chem_kpp em_b_wave em_quarter_ss em_quarter_ss8 nmm_hwrf wrfda_3dvar em_move em_fire"
-export BUILD_TYPES="em_real em_real8 em_hill2d_x nmm_nest em_chem em_b_wave em_quarter_ss em_quarter_ss8 nmm_hwrf em_move em_fire"
+export BUILD_TYPES="em_real em_real8 em_hill2d_x em_chem em_b_wave em_quarter_ss em_quarter_ss8 em_move em_fire"
 
 # Select one of the directories below. This is the parent directory for all namelist.input files to be used for
 # the tests. Below this directory should be subdirectories "em_real", "nmm_nest", etc with namelist.input files
@@ -72,7 +72,7 @@ export CONFIGURE_MPI="54"
 
 # WRF tests are performed consecutively on the local machine.
 # Recommended values: both false for personal computers, both true for mainframe computers with batch queues.
-export BATCH_COMPILE=true
+export BATCH_COMPILE=false
 export BATCH_TEST=true
 
 # Can be set to "LSF", "PBS", or "NQS". Right now, "NQS" has been tested on janus.colorado.edu only.
@@ -86,13 +86,13 @@ export BATCH_QUEUE_TYPE=PBS
 
 # Cheyenne: share, regular, economy
 # Ignored when BATCH_COMPILE and BATCH_TEST are false.
-export BUILD_QUEUE=share
-export TEST_QUEUE=share
+export BUILD_QUEUE=regular
+export TEST_QUEUE=regular
 #export TEST_QUEUE=economy
 
 
 # Account charge code string for queue-managed computers. Ignored when BATCH_COMPILE and BATCH_TEST are false.
-export BATCH_ACCOUNT=NMMM0054
+export BATCH_ACCOUNT=UCUD0004
 
 # Number of processors/tasks to use for building WRF.
 # No reason to set larger than 6 due to diminishing returns on compile time.
diff --git a/regTest_pgi_Cheyenne_WRFDA.wtf b/regTest_pgi_Cheyenne_WRFDA.wtf
index 4fe6d34..0a7828a 100644
--- a/regTest_pgi_Cheyenne_WRFDA.wtf
+++ b/regTest_pgi_Cheyenne_WRFDA.wtf
@@ -85,13 +85,13 @@ export BATCH_QUEUE_TYPE=PBS
 
 # Cheyenne: share, regular, economy
 # Ignored when BATCH_COMPILE and BATCH_TEST are false.
-export BUILD_QUEUE=share
-export TEST_QUEUE=share
+export BUILD_QUEUE=regular
+export TEST_QUEUE=regular
 #export TEST_QUEUE=economy
 
 
 # Account charge code string for queue-managed computers. Ignored when BATCH_COMPILE and BATCH_TEST are false.
-export BATCH_ACCOUNT=NMMM0054
+export BATCH_ACCOUNT=UCUD0004
 
 # Number of processors/tasks to use for building WRF.
 # No reason to set larger than 6 due to diminishing returns on compile time.
diff --git a/run_all_Cheyenne.csh b/run_all_Cheyenne.csh
new file mode 100755
index 0000000..2721b4c
--- /dev/null
+++ b/run_all_Cheyenne.csh
@@ -0,0 +1,26 @@
+#!/bin/csh
+
+
+################### GNU
+echo submit gnu WTF
+qsub qsub_gnu.pbs
+echo Waiting 10 seconds to submit next job ...
+echo
+
+sleep 10
+
+################### Intel
+echo submit intel WTF
+qsub qsub_intel.pbs
+echo Waiting 10 seconds to submit next job ...
+echo
+
+sleep 10
+
+################### PGI
+echo submit PGI WTF
+qsub qsub_pgi.pbs
+echo Waiting for all the jobs ...
+echo
+
+wait
diff --git a/run_from_github.py b/run_from_github.py
index 8c324c4..b00de15 100755
--- a/run_from_github.py
+++ b/run_from_github.py
@@ -39,7 +39,7 @@ def main():
     branch=None
 
     # Keep track of version number for Data directory
-    version="v04.01"
+    version="v04.08"
 
     # First things first: check if user has a "Data" directory, quit with helpful message if they don't
    if not os.path.isdir("Data"):
@@ -145,9 +145,9 @@ def main():
 
     os.chdir(tardir)
 
-    if os.path.isdir("WRFV3"):
-        cont = ''
-        print("\nWARNING: \n" + tardir + "/WRFV3 already exists.\nIf you continue, this directory will be deleted and overwritten.\n")
+    if os.path.isdir("WRF"):
+        cont = ''
+        print("\nWARNING: \n" + tardir + "/WRF already exists.\nIf you continue, this directory will be deleted and overwritten.\n")
         while not cont:
             cont = raw_input("Do you wish to continue? (y/n) ")
             if re.match('y', cont, re.IGNORECASE) is not None:
@@ -158,10 +158,10 @@ def main():
             else:
                 print("Unrecognized input: " + cont)
                 cont=''
-        shutil.rmtree("WRFV3")
+        shutil.rmtree("WRF")
 
-    os.system("git clone " + url + " WRFV3")
-    os.chdir("WRFV3")
+    os.system("git clone " + url + " WRF")
+    os.chdir("WRF")
 
     # We have to check the exit status of git checkout, otherwise we may not get the right code!
     err = subprocess.call(["git", "checkout", branch])
@@ -212,14 +212,22 @@ def main():
     out = tarfile.open(tarname, mode='w')
 
     try:
-        out.add('WRFV3')   # Adding "WRFV3" directory to tar file
+        out.add('WRF')   # Adding "WRF" directory to tar file
     finally:
         out.close()   # Close tar file
 
     os.chdir("../")
-
-    os.system('qsub < qsub_this_to_run_all_compilers.pbs.csh')
-
+
+    cont=''
+    while not cont:
+        cont = raw_input("Are you using Cheyenne? (y/n) ")
+        if re.match('y', cont, re.IGNORECASE) is not None:
+            os.system('./run_all_Cheyenne.csh >& cheyenne.log &')
+        elif re.match('n', cont, re.IGNORECASE) is not None:
+            os.system('qsub < qsub_this_to_run_all_compilers.pbs.csh')
+        else:
+            print("Unrecognized input: " + cont)
+            cont=''
 
 if __name__ == "__main__":
     main()
diff --git a/scripts/allCheck.ksh b/scripts/allCheck.ksh
index 6bd7930..f0bd699 100755
--- a/scripts/allCheck.ksh
+++ b/scripts/allCheck.ksh
@@ -31,10 +31,11 @@ writeTestSummary()
    variation_wt=$4     # e.g. Standard, Quilting, Adaptive
    testType_wt=$5      # e.g. FCAST, BFB, COMPILE
    outcome_wt=$6       # e.g. PASS, FAIL, PASS_MANUAL
+   summary_file=$7     # e.g. github_openwfm_fuel-moisture-model_intel_13_14_15.2019-02-14_13:49:19
 
    # Fixed-format output thanks to ksh printf
    printf "%-13s %-22s %-10s %-7s %-8s %-11s\n" \
-          ${wrfFlavor_wt} ${namelistFile_wt} ${variation_wt} ${parType_wt} ${testType_wt} ${outcome_wt} >> $SUMMARY_FILE
+          ${wrfFlavor_wt} ${namelistFile_wt} ${variation_wt} ${parType_wt} ${testType_wt} ${outcome_wt} >> $summary_file
 }
 
 
@@ -91,6 +92,7 @@ writeForecastResult()
    nlist_wf=$4
    vtion_wf=$5
    testtype_wf=$6
+   summary_file=$7
 
    if [ "$vtion_wf" = "WRFDA" ]; then
      if [[ ! -f $testDir_wf/da_wrfvar.exe ]]; then
@@ -131,7 +133,7 @@ writeForecastResult()
      fi
    fi
 
    # Write a formatted line to the "results file".
-   writeTestSummary $wrftype_wf $nlist_wf $parallelType_wf $vtion_wf $testtype_wf $result
+   writeTestSummary $wrftype_wf $nlist_wf $parallelType_wf $vtion_wf $testtype_wf $result $summary_file
 }
 
 
@@ -152,6 +154,7 @@ writeBitForBit()
    parType_wb=$5
    variation_wb=$6
    testType_wb=$7
+   summary_file=$8
 
    checkDir_wb=`dirname $checkFile_wb`
    compareDir_wb=`dirname $compareFile_wb`
@@ -196,7 +199,7 @@ writeBitForBit()
    fi
 
    # Write a formatted line to the "results file".
-   writeTestSummary $wrfType_wb $namelist_wb $parType_wb $variation_wb $testType_wb $result_wb
+   writeTestSummary $wrfType_wb $namelist_wb $parType_wb $variation_wb $testType_wb $result_wb $summary_file
 }
 
 
@@ -215,8 +218,8 @@ done
 
 # The path to the file containing a summary of all test results
 #export SUMMARY_FILE=${TEST_DIR}/RESULTS/${TEST_ID}_${COMPILER_WTF}.`date +"%Y-%m-%d_%T"`
-export SUMMARY_FILE=${TEST_DIR}/RESULTS/${TEST_ID}_${COMPILER_WTF}_${CONFIGURE_CHOICES}.`date +"%Y-%m-%d_%T"`
-export SUMMARY_FILE=`echo $SUMMARY_FILE | sed 's/ /_/g'`
+SUMMARY_FILE=${TEST_DIR}/RESULTS/${TEST_ID}_${COMPILER_WTF}_${CONFIGURE_CHOICES}.`date +"%Y-%m-%d_%T"`
+SUMMARY_FILE=`echo $SUMMARY_FILE | sed 's/ /_/g'`
 
 echo "Test results will be summarized in '$SUMMARY_FILE'."
 
@@ -279,7 +282,7 @@ for configOpt in $CONFIGURE_CHOICES; do
               \rm -f $checkDir/*.tst
               \rm -f $checkDir/fort.*
 
-              writeForecastResult $parType $checkDir $wrfType $namelist $variation $testType &
+              writeForecastResult $parType $checkDir $wrfType $namelist $variation $testType $SUMMARY_FILE &
               testCounter=$((testCounter + 1))
               echo testCounter == $testCounter
 
@@ -367,12 +370,12 @@ for configOpt in $WRF_COMPARE_PLATFORMS; do
            if [[ $variation = "WRFDA" ]]; then
               checkFile=$checkDir/wrfvar_output
               compareFile=$compareDir/wrfvar_output
-              writeBitForBit $checkFile $compareFile $wrfType $namelist $parType $variation $testType &
+              writeBitForBit $checkFile $compareFile $wrfType $namelist $parType $variation $testType $SUMMARY_FILE &
               testCounter=$((testCounter + 1))
            else
               checkFile=$checkDir/wrfout_d01*
               compareFile=$compareDir/wrfout_d01*
-              writeBitForBit $checkFile $compareFile $wrfType $namelist $parType $variation $testType &
+              writeBitForBit $checkFile $compareFile $wrfType $namelist $parType $variation $testType $SUMMARY_FILE &
               testCounter=$((testCounter + 1))
            fi
         else
diff --git a/scripts/buildWrf.ksh b/scripts/buildWrf.ksh
index e25f3fc..165a864 100755
--- a/scripts/buildWrf.ksh
+++ b/scripts/buildWrf.ksh
@@ -471,13 +471,13 @@ fi
 
 OS_NAME=`uname`
-if [ "$OS_NAME" -eq "Darwin" ]; then
+if [ "$OS_NAME" = "Darwin" ]; then
    TMPDIR=/Volumes/sysdisk1/$thisUser/tmp
    mkdir $TMPDIR
    TMPDIR=$TMPDIR/$BUILD_STRING
    mkdir $TMPDIR
-elif [ "$OS_NAME" -eq "Linux" -a -d /glade ]; then
-   TMPDIR=/glade/scratch/$thisUser/tmp/$BUILD_STRING
+elif [ "$OS_NAME" = "Linux" -a -d /glade ]; then
+   TMPDIR=/gpfs/fs1/scratch/$thisUser/tmp/$BUILD_STRING
    mkdir -p $TMPDIR
 else
    TMPDIR=/tmp/$BUILD_STRING
@@ -494,7 +494,7 @@ if $RUN_COMPILE; then
     LSF) BSUB="bsub -K -q $BUILD_QUEUE -P $BATCH_ACCOUNT -n $NUM_PROCS -a poe -W $wallTime -J $BUILD_STRING -o build.out -e build.err"
          ;;
     PBS) BSUB="qsub -Wblock=true -q $BUILD_QUEUE -A $BATCH_ACCOUNT -l select=1:ncpus=$NUM_PROCS:mem=${MEM_BUILD}GB -l walltime=${wallTime} -N $BUILD_STRING -o build.out -e build.err"
-         TMPDIR=/glade/scratch/$thisUser/tmp/$BUILD_STRING
+         TMPDIR=/gpfs/fs1/scratch/$thisUser/tmp/$BUILD_STRING
         cat > build.sh << EOF
 export TMPDIR="$TMPDIR"        # CISL-recommended hack for Cheyenne builds
 export MPI_DSM_DISTRIBUTE=0    # CISL-recommended hack for distributing jobs properly in share queue
@@ -547,24 +547,28 @@ EOF
    #For some reason wait isn't working, so let's use batchWait to ensure this job is done
    batchWait $BATCH_QUEUE_TYPE $BUILD_STRING 60
 else
-   export J="-j ${NUM_PROCS}"
-   date > StartTime_${COMPILE_TYPE}.txt
    #Clean
-   ./clean -a
+   if $UNPACK_WRF; then
+      ./clean -a
+   fi
    #Configure
-   $CONFIGURE_COMMAND << EOT
+   if $RUN_CONFIGURE; then
+      $CONFIGURE_COMMAND << EOT
 $CONFIG_OPTION
 $NEST_OPTION
 EOT
-   cp configure.wrf configure.wrf.core=${COMPILE_TYPE}_build=${CONFIG_OPTION}
-   # The configure.wrf file needs to be adjusted as to whether we are requesting real*4 or real*8
-   # as the default floating precision.
-   if $REAL8; then
-      sed -e '/^RWORDSIZE/s/\$(NATIVE_RWORDSIZE)/8/' \
-          -e '/^PROMOTION/s/#//' configure.wrf > foo ; /bin/mv foo configure.wrf
+      cp configure.wrf configure.wrf.core=${COMPILE_TYPE}_build=${CONFIG_OPTION}
+      # The configure.wrf file needs to be adjusted as to whether we are requesting real*4 or real*8
+      # as the default floating precision.
+      if $REAL8; then
+         sed -e '/^RWORDSIZE/s/\$(NATIVE_RWORDSIZE)/8/' \
+             -e '/^PROMOTION/s/#//' configure.wrf > foo ; /bin/mv foo configure.wrf
+      fi
    fi
-
+
    #Compile
+   export J="-j ${NUM_PROCS}"
+   date > StartTime_${COMPILE_TYPE}.txt
    \rm -f *COMPILE.tst    # Remove previous compile test results
    if [ "$COMPATIBLE_BUILD" = "em_real" ]; then
      sed -e 's/WRF_USE_CLM/WRF_USE_CLM -DCLWRFGHG/' configure.wrf 2>&1 .foofoo
diff --git a/scripts/run_all_for_qsub1.csh b/scripts/run_all_for_qsub1.csh
index 9b1b04c..1bce6db 100755
--- a/scripts/run_all_for_qsub1.csh
+++ b/scripts/run_all_for_qsub1.csh
@@ -1,10 +1,10 @@
 #!/bin/csh
 
 set RUN_DA = NO
-set RUN_DA = YEPPERS
+#set RUN_DA = YEPPERS
 
 set INTELversion = 17.0.1
-set PGIversion = 17.9
+set PGIversion = 16.5
 set GNUversion = 6.3.0
 
 echo
@@ -65,6 +65,7 @@ sleep 10
 ################### PGI
 echo submit PGI WTF
 module swap intel pgi/${PGIversion}
+module load netcdf
 module list
 ( nohup scripts/run_WRF_Tests.ksh -R regTest_pgi_Cheyenne.wtf ) >&! foo_pgi &
 if ( $RUN_DA != NO ) then
diff --git a/scripts/run_all_for_qsub2.csh b/scripts/run_all_for_qsub2.csh
index f97a27e..cdca146 100755
--- a/scripts/run_all_for_qsub2.csh
+++ b/scripts/run_all_for_qsub2.csh
@@ -1,7 +1,7 @@
 #!/bin/csh
 
 set RUN_DA = NO
-set RUN_DA = YEPPERS
+#set RUN_DA = YEPPERS
 
 set INTELversion = 17.0.1
 set PGIversion = 17.9
diff --git a/scripts/run_gnu.csh b/scripts/run_gnu.csh
new file mode 100755
index 0000000..783c583
--- /dev/null
+++ b/scripts/run_gnu.csh
@@ -0,0 +1,44 @@
+#!/bin/csh
+
+set RUN_DA = NO
+#set RUN_DA = YEPPERS
+
+set GNUversion = 6.3.0
+
+echo
+echo
+echo Script will submit GNU WTF jobs to Cheyenne.
+echo
+
+scripts/checkModules intel >&! /dev/null
+set OK_intel = $status
+
+scripts/checkModules pgi >&! /dev/null
+set OK_pgi = $status
+
+scripts/checkModules gnu >&! /dev/null
+set OK_gnu = $status
+
+if ( $OK_gnu == 0 ) then
+  echo Already set up for gnu environment
+  module swap gnu gnu/${GNUversion}
+else if ( $OK_pgi == 0 ) then
+  echo Changing from pgi to gnu environment
+  module swap pgi gnu/${GNUversion}
+else if ( $OK_intel == 0 ) then
+  echo Changing from intel to gnu environment
+  module swap intel gnu/${GNUversion}
+endif
+module list
+echo
+
+################### GNU
+echo submit gnu WTF
+
+( nohup scripts/run_WRF_Tests.ksh -R regTest_gnu_Cheyenne.wtf ) >&! foo_gnu &
+if ( $RUN_DA != NO ) then
+  echo submit gnu WRFDA WTF
+  ( nohup scripts/run_WRF_Tests.ksh -R regTest_gnu_Cheyenne_WRFDA.wtf ) >&! foo_gnu_WRFDA &
+endif
+
+wait
diff --git a/scripts/run_intel.csh b/scripts/run_intel.csh
new file mode 100755
index 0000000..992fbed
--- /dev/null
+++ b/scripts/run_intel.csh
@@ -0,0 +1,47 @@
+#!/bin/csh
+
+set RUN_DA = NO
+#set RUN_DA = YEPPERS
+
+set INTELversion = 17.0.1
+
+echo
+echo
+echo Script will submit Intel WTF jobs to Cheyenne.
+echo
+
+scripts/checkModules intel >&! /dev/null
+set OK_intel = $status
+
+scripts/checkModules pgi >&! /dev/null
+set OK_pgi = $status
+
+scripts/checkModules gnu >&! /dev/null
+set OK_gnu = $status
+
+if ( $OK_gnu == 0 ) then
+  echo Changing from gnu to intel environment
+  module swap gnu intel/${INTELversion}
+else if ( $OK_pgi == 0 ) then
+  echo Changing from pgi to intel environment
+  module swap pgi intel/${INTELversion}
+else if ( $OK_intel == 0 ) then
+  echo Already set up for intel environment
+  module swap intel intel/${INTELversion}
+endif
+module list
+echo
+
+################### Intel
+echo submit intel WTF
+module swap gnu intel/${INTELversion}
+module list
+( nohup scripts/run_WRF_Tests.ksh -R regTest_intel_Cheyenne.wtf ) >&! foo_intel &
+if ( $RUN_DA != NO ) then
+# WRFPLUS GIVES INTERNAL COMPILER ERROR, SKIP FOR NOW
+  echo submit intel WRFDA WTF
+  ( nohup scripts/run_WRF_Tests.ksh -R regTest_intel_Cheyenne_WRFDA.wtf ) >&! foo_intel_WRFDA &
+endif
+echo
+
+wait
diff --git a/scripts/run_pgi.csh b/scripts/run_pgi.csh
new file mode 100755
index 0000000..7d20fc1
--- /dev/null
+++ b/scripts/run_pgi.csh
@@ -0,0 +1,46 @@
+#!/bin/csh
+
+set RUN_DA = NO
+#set RUN_DA = YEPPERS
+
+set PGIversion = 17.9
+
+echo
+echo
+echo Script will submit PGI WTF jobs to Cheyenne.
+echo
+
+scripts/checkModules intel >&! /dev/null
+set OK_intel = $status
+
+scripts/checkModules pgi >&! /dev/null
+set OK_pgi = $status
+
+scripts/checkModules gnu >&! /dev/null
+set OK_gnu = $status
+
+if ( $OK_gnu == 0 ) then
+  echo Changing from gnu to pgi environment
+  module swap gnu pgi/${PGIversion}
+else if ( $OK_pgi == 0 ) then
+  echo Already set up for pgi environment
+  module swap pgi pgi/${PGIversion}
+else if ( $OK_intel == 0 ) then
+  echo Changing from intel to pgi environment
+  module swap intel pgi/${PGIversion}
+endif
+module list
+echo
+
+################### PGI
+echo submit PGI WTF
+module swap intel pgi/${PGIversion}
+module load netcdf
+module list
+( nohup scripts/run_WRF_Tests.ksh -R regTest_pgi_Cheyenne.wtf ) >&! foo_pgi &
+if ( $RUN_DA != NO ) then
+  echo submit pgi WRFDA WTF
+  ( nohup scripts/run_WRF_Tests.ksh -R regTest_pgi_Cheyenne_WRFDA.wtf ) >&! foo_pgi_WRFDA &
+endif
+
+wait
diff --git a/scripts/testWrf.ksh b/scripts/testWrf.ksh
index 8d11b07..bd8b09a 100755
--- a/scripts/testWrf.ksh
+++ b/scripts/testWrf.ksh
@@ -200,7 +200,7 @@ if [ "$BATCH_COMPILE" = false -a "$BATCH_TEST" = false ]; then
 else
    NUM_PROC=`grep NUM_PROCESSORS ${NAMELIST_PATH} | cut -d '=' -f 2`
    if [ -z "$NUM_PROC" ]; then
-      if [[ $BATCH_QUEUE = "share" ]] || [[ $BATCH_QUEUE = "caldera" ]]; then
+      if [[ $BATCH_QUEUE = "share" ]] || [[ $BATCH_QUEUE = "regular" ]] || [[ $BATCH_QUEUE = "caldera" ]]; then
         NUM_PROC=$NUM_PROC_TEST
      else
        case $BATCH_QUEUE_TYPE in
@@ -510,7 +510,7 @@ EOF
    fi
 fi   # if $CREATE_DIR
-
+maxMem=100   # Max memory in GB per test job
 
 # To allow many tests to be done in parallel, put all tests (even serial jobs) in a processing queue.
 if $BATCH_TEST; then
    case $BATCH_QUEUE_TYPE in
@@ -543,15 +543,24 @@ if $BATCH_TEST; then
         openmp) if [ -z "$runMem" ]; then
                    (( totalMem = NUM_PROC * BATCH_MEM ))
                    runMem=$totalMem
+                   if [ $runMem -gt $maxMem ]; then
+                      runMem=$maxMem
+                   fi
                 fi
                 BSUB="qsub -q $BATCH_QUEUE -A $BATCH_ACCOUNT -l select=1:ncpus=$NUM_PROC:ompthreads=$NUM_PROC:mem=${runMem}GB -l walltime=$runTime -N $jobString -o test.out -e test.err"
                 ;;
         mpi)    if [ -z "$runMem" ]; then
                    (( totalMem = NUM_PROC * BATCH_MEM ))
                    runMem=$totalMem
+                   if [ $runMem -gt $maxMem ]; then
+                      runMem=$maxMem
+                   fi
                 fi
                 BSUB="qsub -q $BATCH_QUEUE -A $BATCH_ACCOUNT -l select=1:ncpus=$NUM_PROC:mpiprocs=$NUM_PROC:mem=${runMem}GB -l walltime=$runTime -N $jobString -o test.out -e test.err"
                 ;;
+        *)      echo "Error: Unknown parallel type $PARALLEL_TYPE!"
+                exit 2
+                ;;
      esac
      cd $testDir
      echo $BSUB > submitCommand