
Maintaining Regression Tests


Download and configure JFrog CLI

  1. You need the JFrog CLI to talk to Artifactory. Get it here:

    https://jfrog.com/getcli/

    Install it in $HOME/bin, making sure to chmod u+x it and that $HOME/bin is in your $PATH in your .bash_profile. The first time you run it, it will ask for configuration. Our Artifactory server is

    https://bytesalad.stsci.edu/artifactory/

    and it accepts your AD credentials. It will save these to a profile in $HOME/.jfrog/ so you don't have to enter them every time you use it. To configure:

    $ jfrog rt config
    Artifactory server ID [https://bytesalad.stsci.edu/artifactory]: 
    JFrog Artifactory URL [https://bytesalad.stsci.edu/artifactory/]: 
    JFrog Distribution URL (Optional): 
    Access token (Leave blank for username and password/API key): 
    User [<YOUR_USERNAME>]: 
    Password/API key: <YOUR_AD_PASSWORD>
    [Info] Encrypting password...
    

    When you update your AD password, you will need to run jfrog rt config again and update just the password, which it will cache (encrypted). Be sure to leave "Access token" blank at the prompt so that it asks for your AD username and password instead.
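    To confirm the configuration works, you can ping the server and search for a single artifact. A minimal sketch (jfrog rt ping and jfrog rt s are standard JFrog CLI commands; remember that bytesalad is only reachable on VPN):

    # check connectivity and authentication
    $ jfrog rt ping
    OK
    # confirm you can read the repo (--limit caps the output)
    $ jfrog rt s "jwst-pipeline/dev/*" --limit 1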

Regression Tests in the Time of Cholera

Lots of us are running regression tests from home right now. Even on VPN, running the whole suite requires downloading ~50GB of data, and even a single test can spend more time downloading data files than running. To make iteration a bit quicker, you can cache the whole test suite's input and truth data locally. Here's how:

Make sure you have the JFrog CLI installed and in your $PATH, say in ~/bin/. Download it via curl -fL https://getcli.jfrog.io | sh and make sure you've configured it to point to https://bytesalad.stsci.edu/artifactory using jfrog rt config (see above).
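Putting those install steps together, a minimal sketch (assuming bash, and that the installer script drops a jfrog binary into the current directory):

# create ~/bin and download the JFrog CLI into it
$ mkdir -p ~/bin && cd ~/bin
$ curl -fL https://getcli.jfrog.io | sh
$ chmod u+x jfrog

# make sure ~/bin is on your PATH in future shells
$ echo 'export PATH="$HOME/bin:$PATH"' >> ~/.bash_profile

# point the CLI at bytesalad (prompts shown above)
$ jfrog rt config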

All interactions with https://bytesalad.stsci.edu/artifactory need to be done on VPN. It is only available on the internal network.

If you want to store the test suite in $HOME for example, make a directory and then do the sync:

$ mkdir ~/test_bigdata
$ mkdir ~/test_bigdata/jwst-pipeline
$ cd ~/test_bigdata/jwst-pipeline
$ jfrog rt dl "jwst-pipeline/dev/*" ./

That will keep your local cache in sync with what is on bytesalad, in much the same way as rsync -av. The first time you do it you'll download ~50GB of data, but each subsequent run only fetches what has changed.
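If you want to preview what a sync would transfer before committing to the download, you can pass the CLI's standard --dry-run flag (a sketch; nothing is downloaded):

$ cd ~/test_bigdata/jwst-pipeline
$ jfrog rt dl "jwst-pipeline/dev/*" ./ --dry-run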

Then add

export TEST_BIGDATA=$HOME/test_bigdata/

to your .bash_profile and you're good to go. Just remember to do

cd ~/test_bigdata/jwst-pipeline
jfrog rt dl "jwst-pipeline/dev/*" ./ --sync-deletes

at least once before you run the tests.
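With the cache in place and TEST_BIGDATA set, you can run the regression tests against the local data. A minimal sketch, assuming a local checkout of the jwst repository and the ci-watson pytest plugin that provides the --bigdata option:

# run a single regression test module against the local cache
$ pytest jwst/regtest/test_nircam_tsgrism.py --bigdata

# or run the whole regression suite (slow)
$ pytest jwst/regtest --bigdata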

Okify via interactive script

This is how we currently update truth files for our regression tests.

The okify_regtests script prompts the user to okify or skip each failed test. The script relies on the JFrog CLI (see above for instructions on installing and configuring it).

To okify one or more tests, run the script like so:

$ okify_regtests <build number>

For each failed test, the script shows the assertion error and traceback, then asks for a decision:

Downloading test okify artifacts to local directory /var/folders/jg/by5st33j7ps356dgb4kn8w900001n5/T/tmpd5rxjrx0
24 failed tests to okify
----------------------------------------------------------------- test_name -----------------------------------------------------------------
run_pipelines = {'input': '/data1/jenkins/workspace/RT/JWST/clone/test_outputs/popen-gw28/test_nircam_tsgrism_run_pipelines0/jw0072101...h_remote': 'jwst-pipeline/dev/truth/test_nircam_tsgrism_stages/jw00721012001_03103_00001-seg001_nrcalong_calints.fits'}
fitsdiff_default_kwargs = {'atol': 1e-07, 'ignore_fields': ['DATE', 'CAL_VER', 'CAL_VCS', 'CRDS_VER', 'CRDS_CTX', 'NAXIS1', ...], 'ignore_hdus': ['ASDF'], 'ignore_keywords': ['DATE', 'CAL_VER', 'CAL_VCS', 'CRDS_VER', 'CRDS_CTX', 'NAXIS1', ...], ...}
suffix = 'calints'

    @pytest.mark.bigdata
    @pytest.mark.parametrize("suffix", ["calints", "extract_2d", "flat_field",
        "o012_crfints", "srctype", "x1dints"])
    def test_nircam_tsgrism_stage2(run_pipelines, fitsdiff_default_kwargs, suffix):
        """Regression test of tso-spec2 pipeline performed on NIRCam TSO grism data."""
        rtdata = run_pipelines
        rtdata.input = "jw00721012001_03103_00001-seg001_nrcalong_rateints.fits"
        output = "jw00721012001_03103_00001-seg001_nrcalong_" + suffix + ".fits"
        rtdata.output = output
    
        rtdata.get_truth("truth/test_nircam_tsgrism_stages/" + output)
    
        diff = FITSDiff(rtdata.output, rtdata.truth, **fitsdiff_default_kwargs)
>       assert diff.identical, diff.report()
E       AssertionError: 
E          fitsdiff: 4.0
E          a: /data1/jenkins/workspace/RT/JWST/clone/test_outputs/popen-gw28/test_nircam_tsgrism_run_pipelines0/jw00721012001_03103_00001-seg001_nrcalong_calints.fits
E          b: /data1/jenkins/workspace/RT/JWST/clone/test_outputs/popen-gw28/test_nircam_tsgrism_run_pipelines0/truth/jw00721012001_03103_00001-seg001_nrcalong_calints.fits
E          HDU(s) not to be compared:
E           ASDF
E          Keyword(s) not to be compared:
E           CAL_VCS CAL_VER CRDS_CTX CRDS_VER DATE NAXIS1 TFORM*
E          Table column(s) not to be compared:
E           CAL_VCS CAL_VER CRDS_CTX CRDS_VER DATE NAXIS1 TFORM*
E          Maximum number of different data values to be reported: 10
E          Relative tolerance: 1e-05, Absolute tolerance: 1e-07
E         
E         Extension HDU 1:
E         
E            Headers contain differences:
E              Headers have different number of cards:
E               a: 49
E               b: 48
E              Extra keyword 'SRCTYPE' in a: 'POINT'
E         
E       assert False
E        +  where False = <astropy.io.fits.diff.FITSDiff object at 0x7fb1d9ed2950>.identical

../../../jwst/regtest/test_nircam_tsgrism.py:48: AssertionError
---------------------------------------------------------------------------------------------------------------------------------------------
OK: jwst-pipeline-results/2020-02-11_jenkins-RT-JWST-540_0/test_nircam_tsgrism_stage2/jw00721012001_03103_00001-seg001_nrcalong_calints.fits
--> jwst-pipeline/dev/truth/test_nircam_tsgrism_stages/jw00721012001_03103_00001-seg001_nrcalong_calints.fits
---------------------------------------------------------------------------------------------------------------------------------------------
Enter 'o' to okify, 's' to skip:

Choosing 'o' causes the script to overwrite the Artifactory truth file with the result file from the failed test run; 's' skips this diff and moves on to the next one.

Okify by hand on Artifactory

If the okify script above does not work for you, you may have to update the truth files by hand using the following method.

  1. You need the JFrog CLI to talk to Artifactory. See above.

  2. Find out which truth files need updating by looking at the test results. For example, take this failed result:

    https://plwishmaster.stsci.edu:8081/job/RT/job/JWST/9/testReport/jwst.tests_nightly.general.miri.test_miri_lrs_bkgnod/TestSpec2Pipeline/test_mrs2pipeline1/
    

    Look at the Standard Output to find

    Renamed jw80500018001_02101_00002_MIRIFUSHORT_s3d.fits as new 'truth' file: jw80500018001_02101_00002_MIRIFUSHORT_s3d_ref.fits
    

    Then, looking further up at the start of the error message, you can find where this file has been dumped on jwcalibdev:

    /data1/jenkins/workspace/RT/JWST/test_outputs/test_mrs2pipeline10/jw80500018001_02101_00002_MIRIFUSHORT_x1d_ref.fits
    
  3. Find the results for the failed tests here:

    https://bytesalad.stsci.edu/artifactory/jwst-pipeline-results/
    

    There is a directory of test results for each build, ordered by date. Only the most recent builds are retained.

  4. Copy these files from jwst-pipeline-results to jwst-pipeline/dev using the dropdown menu in the upper right, making sure the correct paths and file names are used. This can also be done from the command line, as sketched below.
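    If you prefer the command line to the web UI, the copy can be done server-side with jfrog rt cp. A sketch using the example paths from the okify output above (jfrog rt cp and its --flat flag are standard JFrog CLI features; double-check your own source and destination paths):

    # copy the new result over the old truth file, dropping the source path hierarchy
    $ jfrog rt cp "jwst-pipeline-results/2020-02-11_jenkins-RT-JWST-540_0/test_nircam_tsgrism_stage2/jw00721012001_03103_00001-seg001_nrcalong_calints.fits" "jwst-pipeline/dev/truth/test_nircam_tsgrism_stages/" --flat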
