More rewording of cached data docs warnings
bernstei committed Jun 8, 2023
1 parent cf9a5fb commit f366bed
Showing 2 changed files with 8 additions and 5 deletions.
5 changes: 3 additions & 2 deletions docs/source/overview.configset.rst
@@ -22,8 +22,9 @@ Input and output of atomic structures
 To efficiently restart interrupted operations, if the ``OutputSpec`` object specifies storing the output
 data in a file, autoparallelized workflow operations will use the existing file instead of redoing the calculation.
 If the workflow code (or any functions that are called by it, directly or indirectly) are changed, this will not
-be detected, the the previous, perhaps no longer correct, output will still be used.
-The user is required to manually delete output files from operations that have been changed.
+be detected, and the previous, perhaps no longer correct, output will still be used.
+The user must manually delete output files from operations that have been changed to force
+the calculation to be redone.
 
 Users should consult the simple example in :doc:`first_example`, or the documentation of the two classes at
 :meth:`wfl.configset.ConfigSet` and :meth:`wfl.configset.OutputSpec`
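To make the reworded warning concrete, here is a minimal sketch of the caching behavior it describes. `my_relax_op` is a hypothetical autoparallelized operation standing in for any real wfl operation that takes the usual `(inputs, outputs)` pair; the `ConfigSet`/`OutputSpec` calls follow the documented classes, but treat the exact argument forms as assumptions rather than a definitive usage:

```python
import os

from wfl.configset import ConfigSet, OutputSpec

inputs = ConfigSet("input_structures.xyz")
outputs = OutputSpec("relaxed.xyz")  # output data cached in this file

# First run: the operation executes and writes relaxed.xyz.
# Any rerun that finds relaxed.xyz present returns its contents
# instead of recomputing -- even if my_relax_op's code (or code it
# calls) has changed, since such changes are not detected.
relaxed = my_relax_op(inputs, outputs)  # hypothetical wfl operation

# To force the calculation to be redone after editing the code,
# manually delete the cached output file first:
#   os.remove("relaxed.xyz")
```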
8 changes: 5 additions & 3 deletions docs/source/overview.queued.md
@@ -111,14 +111,16 @@ duplicate submission is indeed correct). All functions already ignore the
 The hashing mechanism is only designed for interrupted runs, and does
 not detect changes to the called function (or to any functions that
 function calls). If the code is being modified, the user should erase the
-`ExPyRe` staged job directories, and clean up the sqlite database file,
+`ExPyRe` staged job directories, and clean up the `sqlite` database file,
 before rerunning. Using a per-project `_expyre` directory makes this
 easier, since the database file can simply be erased, otherwise the `xpr` command
 line tool needs to be used to delete the previously created jobs.
 Note that this is only relevant to incomplete autoparallelized
-operations, since any completed operation no longer depends on anything
-`ExPyRe`-related. See the corresponding warning in :doc:`overview.configset`.
+operations, since any completed operation (once all the remote job outputs have
+been gathered into the location specified in the `OutputSpec`) no longer depends on
+anything `ExPyRe`-related. See also the warning in the
+`OutputSpec` [documentation](overview.configset).
 ```
 
 ## WFL\_EXPYRE\_INFO syntax
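A correspondingly minimal sketch of the cleanup this warning calls for, assuming the per-project `_expyre` directory recommended in the docs (with a shared `~/.expyre` directory, the stale jobs would instead have to be deleted via the `xpr` command-line tool):

```python
import shutil

# Wipe the per-project ExPyRe state: this erases both the staged job
# directories and the sqlite database file, so the next run re-stages
# and resubmits jobs instead of reusing stale cached ones.
# Only relevant for *incomplete* operations -- completed operations
# have already gathered their results into the OutputSpec location
# and no longer depend on anything ExPyRe-related.
shutil.rmtree("_expyre", ignore_errors=True)
```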
