diff --git a/404.html b/404.html index e468cb1..a64cb32 100644 --- a/404.html +++ b/404.html @@ -52,6 +52,7 @@ + diff --git a/code-of-conduct.html b/code-of-conduct.html index 94e677e..137814a 100644 --- a/code-of-conduct.html +++ b/code-of-conduct.html @@ -52,6 +52,7 @@ + @@ -445,16 +446,17 @@

2.5 Scope

2.6 Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be -reported to the community leaders responsible for enforcement anonymously using this [form] (https://forms.gle/2DnPXuCKKCN3qp7ZA).

-

Reports will be -reviewed by a member of the NOAA Fisheries Office of Science and -Technology who is not participating in the FIMS Project [Patrick Lynch], +reported to the community leaders responsible for enforcement anonymously using +this form.

+

Reports will be reviewed by a member of the NOAA Fisheries Office of Science +and Technology who is not participating in the FIMS Project [Patrick Lynch] but has the full support of FIMS Community Leaders. All reports will be reviewed promptly and fairly.

All community leaders are obligated to respect the privacy and security of the reporter of any incident whenever possible; however, please note behaviors that -meet the official criteria for harassment must be reported by supervisors under -NOAA policy.

+meet the official +criteria +for harassment must be reported by supervisors under NOAA policy.

2.7 Enforcement guidelines

diff --git a/contributor-guidelines.html b/contributor-guidelines.html index 19f255a..478e792 100644 --- a/contributor-guidelines.html +++ b/contributor-guidelines.html @@ -52,6 +52,7 @@ + @@ -387,18 +388,18 @@

Chapter 7 Contributor Guidelines

External contributions and feedback are important to the development and future maintenance of FIMS and are welcome. This section provides guidelines and workflows for FIMS developers and collaborators on how to contribute to the project.

7.1 Style Guide

-

The FIMS project uses style guides to ensure our code is consistent, easy to use (e.g. read, share, and verify), and ultimately easier to write. We use the Google C++ Style Guide and the tidyverse style guide for R code.

+

The FIMS project uses style guides to ensure our code is consistent, easy to use (e.g., read, share, and verify), and ultimately easier to write. We use the Google C++ Style Guide and the tidyverse style guide for R code.

7.2 Naming Conventions

-

The FIMS implementation team has chosen to use typename instead of class when defining templates for consistency with the TMB package. While types may be defined in many ways, for consistency developers are asked to use Type instead of T to define Types within FIMS.

+

The FIMS implementation team has chosen to use typename instead of class when defining templates for consistency with the TMB package. While types may be defined in many ways, for consistency developers are asked to use Type instead of T to define types within FIMS.

7.3 Coding Good Practices

Following good software development and coding practices simplifies collaboration, improves readability, and streamlines testing and review. The following are industry-accepted standards:

@@ -500,7 +501,7 @@

7.5.2 Branch Protection

7.5.3 GitHub cloning and branching

For contributors with write access to the FIMS repo, changes should be made on a feature branch after cloning the repo. The FIMS repo can be cloned to a local machine by running the following on the command line:

-
git clone https://github.com/NOAA-FIMS/FIMS.git
+
git clone https://github.com/NOAA-FIMS/FIMS.git

7.5.4 Outside collaborators and forks

@@ -536,10 +537,10 @@

7.7.0.2 How Do I Submit A (Good) Bug Report?

Bugs are tracked as GitHub issues. Create an issue on the toolbox GitHub repository and provide the following information by following the steps outlined in the reprex package. Explain the problem and include additional details to help maintainers reproduce the problem using the Bug Report issue template.

Provide more context by answering these questions:

    -
  • Did the problem start happening recently (e.g. after updating to a new version of R) or was this always a problem?
  • +
  • Did the problem start happening recently (e.g., after updating to a new version of R) or was this always a problem?
  • If the problem started happening recently, can you reproduce the problem in an older version of R? What’s the most recent version in which the problem doesn’t happen?
  • Can you reliably reproduce the issue? If not, provide details about how often the problem happens and under which conditions it normally happens.
  • -
  • If the problem is related to working with files (e.g. reading in data files), does the problem happen for all files and projects or only some? Does the problem happen only when working with local or remote files (e.g. on network drives), with files of a specific type (e.g. only JavaScript or Python files), with large files or files with very long lines, or with files in a specific encoding? Is there anything else special about the files you are using?
  • +
  • If the problem is related to working with files (e.g., reading in data files), does the problem happen for all files and projects or only some? Does the problem happen only when working with local or remote files (e.g., on network drives), with files of a specific type (e.g., only JavaScript or Python files), with large files or files with very long lines, or with files in a specific encoding? Is there anything else special about the files you are using?

Include details about your configuration and environment:

    @@ -588,10 +589,10 @@

7.9 Branch Workflow

7.9.1 Branching Good Practices

    The following suggestions will help ensure optimal performance of the trunk-based branching strategy:

      -
    1. Branches and commits should be kept small (e.g. a couple commits, a few lines of code) to allow for rapid merges and deployments.
    2. +
3. Branches and commits should be kept small (e.g., a couple of commits, a few lines of code) to allow for rapid merges and deployments.
    4. Use feature flags to wrap new changes in an inactive code path for later activation (rather than creating a separate repository feature branch).
5. Delete branches after they are merged to the trunk; avoid repositories with a large number of “active” branches.
    6. -
    7. Merge branches to the trunk frequently (e.g. at least every few days; tag as a release commit) to avoid merge conflicts.
    8. +
    9. Merge branches to the trunk frequently (e.g., at least every few days; tag as a release commit) to avoid merge conflicts.
    10. Use caching layers where appropriate to optimize build and test execution times.

@@ -608,23 +609,23 @@

7.9.3 git workflow
  • Use the following commands to create a branch:
  • -
    $ git checkout -b <branchname> main           #creates a local branch
    -$ git push origin <branchname>                #pushes branch back to gitHub
    +
    $ git checkout -b <branchname> main           #creates a local branch
+$ git push origin <branchname>                #pushes branch back to GitHub
    1. Periodically merge changes from main into branch
    -
    $ git merge main                              #merges changes from main into branch
    +
    $ git merge main                              #merges changes from main into branch
1. While editing code, commit regularly, following the commit message guidelines
    -
    $ git add <filename>                          #stages file for commit
    -$ git commit -m"Commit Message"               #commits changes
    +
    $ git add <filename>                          #stages file for commit
    +$ git commit -m"Commit Message"               #commits changes
1. To push changes to GitHub, first set the upstream location:
    -
    $ git push --set-upstream origin <branchname> #pushes change to feature branch on gitHub
    +
+$ git push --set-upstream origin <branchname> #pushes change to feature branch on GitHub

Afterwards, changes can be pushed as:

    -
    $ git push                 #pushes change to feature branch on gitHub
    +
+$ git push                 #pushes change to feature branch on GitHub
    1. When finished, create a pull request to the main branch following pull request guidelines
    2. @@ -651,7 +652,7 @@

7.11 Commit Messages @@ -723,8 +724,8 @@

      7.14 Code Review - +
      +

      7.14.1 Assigning Reviewers

      Reviewers of PRs for changes to the codebase in FIMS should be @@ -812,8 +813,8 @@

      7.15 Clean up local branches -
      $ git checkout main           //switches back to main branch
      -$ git branch -d <branchname>  //deletes branch from local repository
      +
      $ git checkout main           //switches back to main branch
      +$ git branch -d <branchname>  //deletes branch from local repository

      7.16 GitHub Actions

      diff --git a/developer-software-guide.html b/developer-software-guide.html index ed7df28..c52b4dd 100644 --- a/developer-software-guide.html +++ b/developer-software-guide.html @@ -52,6 +52,7 @@ + @@ -421,10 +422,10 @@

6.0.3 vscode setup

Install the vscDebugger package using the command:

      -
      remotes::install_github("ManuelHentschel/vscDebugger")
      +
      remotes::install_github("ManuelHentschel/vscDebugger")

      To improve the plot viewer when creating plots in R, install the httpgd package:

      -
      install.packages("httpgd")
      +
      install.packages("httpgd")

To add syntax highlighting and other features to the R terminal, radian can be installed. Note that Python needs to be installed first in order to install radian.

      @@ -434,50 +435,50 @@

      6.0.3 vscode setup

      +
      {
      +    // Associate .RMD files with markdown:
      +    "files.associations": {
      +        "*.Rmd": "markdown",
      +    },
      +    // A cmake setting
      +    "cmake.configureOnOpen": true, 
      +    // Set where the rulers are, needed for Rewrap. 72 is the default we have
+    // decided on for FIMS repositories.
      +    "editor.rulers": [
      +        72
      +    ],
      +    // Should the editor suggest inline edits?
      +    "editor.inlineSuggest.enabled": true,
      +    // Settings for github copilot and which languages to use it with or not.
      +    "github.copilot.enable": {
      +        "*": true,
      +        "yaml": false,
      +        "plaintext": false,
      +        "markdown": false,
      +        "latex": false,
      +        "r": false
      +    }, 
      +    // Setting for sending R code from the editor to the terminal
      +    "r.alwaysUseActiveTerminal": true,
      +    // Needed to send large chunks of code to the r terminal when using radian
      +    "r.bracketedPaste": true,
      +    // Needed to use httpgd for plotting in vscode
      +    "r.plot.useHttpgd": true,
      +    // path to the r terminal (in this case, radian). Necessary to get the terminal to use radian.
      +    "r.rterm.windows": "C://Users//my.name//AppData//Local//Programs//Python//Python310//Scripts//radian.exe", //Use this only for Windows 
      +    // options for the r terminal
      +    "r.rterm.option": [
      +        "--no-save",
      +        "--no-restore",
      +        "max.print=500"
      +    ],
      +    // Setting for whether to allow linting of documents or not
      +    "r.lsp.diagnostics": true,
+    // When looking at diffs under the version control tab, should whitespace be ignored?
      +    "diffEditor.ignoreTrimWhitespace": false,
      +    // What is the max number of lines that are printed as output to the terminal?
      +    "terminal.integrated.scrollback": 10000
      +}

      Some suggested R shortcuts could be helpful.

      @@ -513,19 +514,19 @@

      6.0.4 C++ compiler
gcc: fatal error: no input files
compilation terminated.

      If not, you will need to check that the compiler is on the path. The easiest way to do so is by creating a text file .Renviron in your Documents folder which contains the following line:

      PATH="${RTOOLS44_HOME}\usr\bin;${PATH}"

      You can do this with a text editor, or from R like so (note that in R code you need to escape backslashes):

      -
      write('PATH="${RTOOLS44_HOME}\\usr\\bin;${PATH}"', file = "~/.Renviron", append = TRUE)
      +
      write('PATH="${RTOOLS44_HOME}\\usr\\bin;${PATH}"', file = "~/.Renviron", append = TRUE)

      Restart R, and verify that make can be found, which should show the path to your Rtools installation.

      -
      Sys.which("make") ##
      -"C:\\rtools44\\usr\\bin\\make.exe" 
      +
      Sys.which("make") ##
      +"C:\\rtools44\\usr\\bin\\make.exe" 

      6.0.5 GoogleTest

      @@ -570,8 +571,8 @@

      6.0.7 Doxygen
cmake -S . -B build -G Ninja
cmake --build build
      diff --git a/documentation-template.html b/documentation-template.html index f07b14e..b34398f 100644 --- a/documentation-template.html +++ b/documentation-template.html @@ -52,6 +52,7 @@ + diff --git a/fims-project-management-process.html b/fims-project-management-process.html index 65ab7bd..eb1a9f1 100644 --- a/fims-project-management-process.html +++ b/fims-project-management-process.html @@ -52,6 +52,7 @@ + @@ -552,24 +553,16 @@

3.2.1 Issue lifecycle -Flow chart that describes above process visually, e.g. how an issue moves from creation, to activation, to response or resolution, and is finally closed. +Flow chart that describes the above process visually, e.g., how an issue moves from creation, to activation, to response or resolution, and is finally closed.

-Figure 3.2: Flow chart that describes above process visually, e.g. how an issue moves from creation, to activation, to response or resolution, and is finally closed. +Figure 3.2: Flow chart that describes the above process visually, e.g., how an issue moves from creation, to activation, to response or resolution, and is finally closed.

      3.2.2 M2 development workflow

      -
      graph TD
      -    A -- R interface group --> C[short-lived feature branch];
      -    A -- Documentation group --> D[short-lived feature branch];
      -    A -- NLL group --> E[development branch];
      -    A -- More features group --> E[development branch];
      -    E --> F[short-lived feature branch];
      -    C --> G[Merge into main];
      -    D --> G;
      -    F --> E;
      -    E --> G;
      +
      +

      3.2.3 Feature validation

      diff --git a/glossary.html b/glossary.html index 19c9cae..8dd7667 100644 --- a/glossary.html +++ b/glossary.html @@ -52,6 +52,7 @@ + diff --git a/hpp-template-for-c-modules.html b/hpp-template-for-c-modules.html index 7db2194..00283d6 100644 --- a/hpp-template-for-c-modules.html +++ b/hpp-template-for-c-modules.html @@ -52,6 +52,7 @@ + @@ -385,157 +386,157 @@

      Chapter 8 .hpp template for C++ modules

      In this section we will describe how to structure a new .hpp file in FIMS.

      +
//  template.hpp
      +//  Fisheries Integrated Modeling System (FIMS)
      +
+// define the header guard
      +#ifndef template_hpp 
      +#define template_hpp
      +
      +//inherit from model_base
      +#include "../common.hpp" 
      +#include <iostream>
      +
      +/**
+ * In this example, we utilize the concept of inheritance and 
      + * polymorphism (https://www.geeksforgeeks.org/polymorphism-in-c/). All
      + * classes inherit from model_base. Name1 and Name2 inherit from NameBase.
+ * Classes Name1 and Name2 must implement their own version of 
      + * "virtual T evaluate(const T& t)", which will have unique logic. 
      + */
      +
      +
      +/*
      + * fims namespace
      + */
      +namespace fims{
      +
      +/**
      + * NameBase class. Inherits from model_base.
      + */
      +template <class T>
      +class NameBase: public model_base<T>{ //note that model_base gets template parameter T.
      +protected:
      +
      +public:
      +  virtual T Evaluate(const T& t)=0; //"= 0;" means this must be implemented in child.
      +};
      +  
      +/* 
      +* Template class inherits from  NameBase
      +*/
      +template <class T>
      +class Name1: public NameBase<T>{
      +
      +public:
      + 
      +    
      +  /*
      +   *Default constructor
      +   *Initialize any memory here.
      +   */
      +  Name1(){
      +  }
      +  
      + 
      +  /**
      +   * Destructor; this method destructs Name1 object.
      +   * Delete any allocated memory here.
      +   */
      +  ~ Name1(){
      +    std::cout <<"I just deleted Name1 object" << std::endl;
      +  }
      +  
      +   /**
      +    * Note: this function must have the same signature as evaluate in NameBase.
      +    * Overloaded virtual function. This is polymorphism, meaning the 
      +    * signature has the same appearance, but the function itself has unique logic.
      +    * 
      +    * @param t
      +    * @return t+1
      +    */
      +   virtual T Evaluate(const T& t) {
      +     std::cout<<"evaluate in Name1 received "<<t<<
+     " as a method parameter, returning "<<(t+1)<<std::endl;
      +     return t+1; //unique logic for Name1 class
      +   }
      +
      +};
      +
      +  /* 
      +* Template class inherits from  NameBase
      +*/
      +template <class T>
      +class Name2: public NameBase<T>{
      +
      +public:
      + 
      +    
      +  /*
      +   *Default constructor.
      +   *Initialize any memory here.
      +   */
      +  Name2(){
      +  }
      +  
      + 
      +  /**
      +   * Destructor; this method destructs the Name2 object.
      +   * Delete any allocated memory here.
      +   */
      +  ~ Name2(){
      +    std::cout <<"I just deleted Name2 object" << std::endl;
      +  }
      +  
      +   /**
      +    * Note: this function must have the same signature as evaluate in NameBase.
      +    * Overloaded virtual function. This is polymorphism, meaning the 
      +    * signature has the same appearance, but the function itself has unique logic.
      +    * 
      +    * @param t
      +    * @return t^2
      +    */
      +   virtual T Evaluate(const T& t) {
      +     std::cout<<"evaluate in Name2 received "<<t<<
+     " as a method parameter, returning "<<(t*t)<<std::endl;
      +     return t*t; //unique logic for Name2 class
      +   }
      +
      +};
      +  
      +/**
      + * Add additional implementations below.
      + */
      +  
      +
      +
      +
      +} //end namespace
      +
      +/**
      + *Example usage:
      + *
+ * int main(int argc, char** argv){
      + *    NameBase<double>* name = NULL; //pointer to a NameBase object
      + *    Name1<double> n1; //inherits from NameBase
      + *    Name2<double> n2; //inherits from NameBase
      + *
      + *    name = &n1; //name now points to n1
+ *    name->Evaluate(2.0); //unique logic for n1
      + *
      + *    name = &n2; //name now points to n2
+ *    name->Evaluate(2.0); //unique logic for n2
      + * }
      + *
      + * Output:
      + * evaluate in Name1 received 2 as a method parameter, returning 3
      + * evaluate in Name2 received 2 as a method parameter, returning 4
      + *
      + */
      +
      +
      +
      +#endif /*template_hpp */
      diff --git a/index.html b/index.html index 7a30600..7c56784 100644 --- a/index.html +++ b/index.html @@ -52,6 +52,7 @@ + diff --git a/libs/htmltools-fill-0.5.8.1/fill.css b/libs/htmltools-fill-0.5.8.1/fill.css new file mode 100644 index 0000000..841ea9d --- /dev/null +++ b/libs/htmltools-fill-0.5.8.1/fill.css @@ -0,0 +1,21 @@ +@layer htmltools { + .html-fill-container { + display: flex; + flex-direction: column; + /* Prevent the container from expanding vertically or horizontally beyond its + parent's constraints. */ + min-height: 0; + min-width: 0; + } + .html-fill-container > .html-fill-item { + /* Fill items can grow and shrink freely within + available vertical space in fillable container */ + flex: 1 1 auto; + min-height: 0; + min-width: 0; + } + .html-fill-container > :not(.html-fill-item) { + /* Prevent shrinking or growing of non-fill items */ + flex: 0 0 auto; + } +} diff --git a/m1-model-specification.html b/m1-model-specification.html index 6cfcbe3..53640f1 100644 --- a/m1-model-specification.html +++ b/m1-model-specification.html @@ -52,6 +52,7 @@ + @@ -533,7 +534,7 @@

      4.5 Modeling loops4.5 Modeling loops diff --git a/search_index.json b/search_index.json index b587167..5a7687d 100644 --- a/search_index.json +++ b/search_index.json @@ -1 +1 @@ -[["index.html", "FIMS Developer Handbook Chapter 1 Contributing to this book 1.1 Description 1.2 Edit and preview book changes", " FIMS Developer Handbook FIMS Implementation Team 2024-06-15 Chapter 1 Contributing to this book This is a book written in Markdown describing the FIMS development workflow for FIMS developers and contributors. It is intended as a living document and will change over time as the FIMS project matures. Some sections may be incomplete or missing entirely. Suggestions or contributions may be made via the FIMS collaborative workflow github site https://github.com/NOAA-FIMS/collaborative_workflow. This section describes how to edit and contribute to the book. 1.1 Description Each bookdown chapter is an .Rmd file, and each .Rmd file can contain one or more chapters. A chapter must start with a first-level heading: # A good chapter, and can contain one (and only one) first-level heading. Use second-level and higher headings within chapters like: ## A short section or ### An even shorter section. The index.Rmd file is required, and is also your first book chapter. It will be the homepage when you render the book. 1.2 Edit and preview book changes When you want to make a change to this book, follow the below steps: 1. Create a new feature branch either from the issue requesting the change or from the repo on Github. 2. Pull the remote branch into your local branch and make your changes to the .Rmd files locally. 3. When you are done editing, do not render the book locally, but push your changes to the remote feature branch. 4. Pushing to the remote feature branch initiates a Github action that creates a .zip file you should download and unzip. Open the file index.html in a browser to preview the rendered .html content. 
If the action fails, this means the bookdown could not be rendered. Use the Github action log to determine what the problem is. 5. When the book can be rendered and you are satisfied with the changes, submit a pull request to merge the feature branch into main. "],["code-of-conduct.html", "Chapter 2 Code of conduct 2.1 FIMS contributor conduct 2.2 Our pledge 2.3 Our standards 2.4 Enforcement responsibilities 2.5 Scope 2.6 Enforcement 2.7 Enforcement guidelines 2.8 Supporting good conduct 2.9 Attribution", " Chapter 2 Code of conduct 2.1 FIMS contributor conduct 2.2 Our pledge We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, or sexual identity and orientation. We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community. 
2.3 Our standards Examples of behavior that contributes to a positive environment for our community include: Demonstrating empathy and kindness toward other people Being respectful of differing opinions, viewpoints, and experiences Giving and gracefully accepting constructive feedback Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience Focusing on what is best not just for us as individuals, but for the overall community Examples of unacceptable behavior include: The use of sexualized language or imagery, and sexual attention or advances of any kind Trolling, insulting or derogatory comments, and personal or political attacks Public or private harassment Publishing others’ private information, such as a physical or email address, without their explicit permission Other conduct which could reasonably be considered inappropriate in a professional setting 2.4 Enforcement responsibilities Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful. Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate. 2.5 Scope This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. 
2.6 Enforcement Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement anonymously using this [form] (https://forms.gle/2DnPXuCKKCN3qp7ZA). Reports will be reviewed by a member of the NOAA Fisheries Office of Science and Technology who is not participating in the FIMS Project [Patrick Lynch], but has the full support of FIMS Community Leaders. All reports will be reviewed promptly and fairly. All community leaders are obligated to respect the privacy and security of the reporter of any incident whenever possible; however, please note behaviors that meet the official criteria for harrassment must be reported by supervisors under NOAA policy. 2.7 Enforcement guidelines Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct: 2.7.1 1. Correction Community Impact: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community. Consequence: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested. 2.7.2 2. Warning Community Impact: A violation through a single incident or series of actions. Consequence: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban. 2.7.3 3. Temporary ban Community Impact: A serious violation of community standards, including sustained inappropriate behavior. 
Consequence: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban. 2.7.4 4. Permanent ban Community Impact: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals. Consequence: A permanent ban from any sort of public interaction within the community. 2.8 Supporting good conduct FIMS Community leaders will create default community health files (e.g. CONTRIBUTING, CODE_OF_CONDUCT) to be used in all repositories owned by FIMS. 2.9 Attribution This Code of Conduct is copied from the Contributor Covenant, version 2.1, available at https://www.contributor-covenant.org/version/2/1/code_of_conduct.html. Community Impact Guidelines were inspired by Mozilla’s code of conduct enforcement ladder. For answers to common questions about this code of conduct, see the FAQ at https://www.contributor-covenant.org/faq. Translations are available at https://www.contributor-covenant.org/translations. "],["fims-project-management-process.html", "Chapter 3 FIMS project management process 3.1 FIMS governance 3.2 FIMS development cycle", " Chapter 3 FIMS project management process 3.1 FIMS governance The FIMS Terms of Reference describes the high level organization of the FIMS Project. Additional details on roles and responsibilities are provided here. 3.1.1 Developers Developers are expected to adhere to the principles and guidelines outlined within this handbook, including the Code of Conduct, Contributor Guidelines, Style Guide, Issue Tracking, and Testing. 3.1.2 C++ developers The C++ developer's responsibilities include: Writing the module code. 
Creating documentation for the module and building the documentation in doxygen to ensure it is error-free. Run cmake --build build and review the generated doxygen HTMLs locally to ensure they are error-free. Implementing the suite of required test cases in Google Test for the module. Run cmake --build build and ctest --test-dir build locally and make sure the C++ tests pass before pushing tests to the remote feature branch. If there are failing tests, run ctest --test-dir build --rerun-failed --output-on-failure to re-run the failed tests verbosely. Ensuring the run-clang-tidy and run-googletest Github Actions workflows pass on the remote feature branch. 3.1.3 R developers The R developers' responsibilities include: Writing the Rcpp interface to the C++ code. Writing Roxygen documentation for any R functions. Run devtools::document() locally before pushing changes to the remote branch. Writing testthat() test cases for any R functionality. Run devtools::test() locally before pushing tests to the remote feature branch. Running styler::style_pkg() to style R code locally and then push changes to the remote feature branch. If there are many changes, please do this in a separate commit. Running devtools::check() locally and making sure the package can be compiled and R tests pass. If there are failing tests, run devtools::test(filter = \"file_name\") (where “test-file_name.R” is the testthat file containing failing tests) and edit code/tests to troubleshoot tests. During development, run devtools::build() locally to build the package more frequently and faster. Ensuring the code passes the call-r-cmd-check GitHub Action workflow on the remote feature branch. 3.1.4 All developers Once these are complete, the developer should create a pull request according to the correct template and assign the issue tracking the completion of the bug fix and/or feature to the assigned review team. 
The developer must resolve any issues arising from the review and get confirmation from the review team before the pull request is merged into the upstream branch. 3.1.5 Reviewers The reviewers are responsible for adhering to documented guidelines in the Code Review section. Reviewers should confirm that the new code is able to build and run within their own development environment as well as via the Github actions on the repository. Reviewers should clearly document which components of the code need to be improved to be accurate, comply with project guidelines and style, or do not work, in the pull request thread so that the developer knows what they need to fix. 3.1.6 Project Lead The Project Lead is responsible for driving decisions on FIMS features, user interfaces, and project guidelines and standards based on the vision and objectives of FIMS and discussions with the OST development team and regional product representatives. The project lead ensures the FIMS product satisfies user and business requirements, incorporates feedback, and iterates on the design and development as needed. The Project Lead will triage issues and pull requests weekly and ensure development and code review occur in a timely manner and according to project guidelines, priorities, and standards. The Project Lead is also responsible for communicating project status via maintenance of the Github projects and scheduling tasks and managing change requests. 3.1.7 Lead Software Architect The Lead Software Architect is responsible for advising on the design of the FIMS product architecture to maximize portability and extensibility, managing technical risks and opportunities, mentoring development and implementation team members, advising the project lead on software design, refactoring, and implementation decisions, scheduling of tasks, managing change requests, and guaranteeing quality of deliveries via code review. 
The Lead Software Architect also educates the team on technical best practices. 3.1.8 Lead Test Engineer The Lead Test Engineer is responsible for designing and driving test objectives, test strategies, and test plans of the FIMS product at subsequent milestones. The Lead Test Engineer will identify the tools for test reporting, management and automation, guide and monitor the design, implementation, and execution of test cases and test procedures. The Lead Test Engineer will train and mentor implementation team members on how to effectively write and debug tests. 3.1.9 Lead Statistical Computing Engineer The Lead Statistical Computing Engineer is responsible for designing the FIMS statistical architecture that maximizes statistical accuracy and ensures the implementation of statistical good practices. The Lead Statistical Computing Engineer will advise the Project Lead on design and implementation decisions and will work closely with the Lead Software Architect to ensure a balance between computation and statistical efficiency, and with the Lead Test Engineer to develop tests that check the statistical accuracy of model design. 3.1.10 Outreach and Transition Coordinator The Outreach and Transition Coordinator communicates with policy-makers, NOAA leadership, and regional offices on transition plans from existing assessment models and processes to FIMS. This coordinator works with academic partners to develop and coordinate training on using FIMS. 3.1.11 Lead of Workflows, Accessibility, and Integration The Lead of Workflows, Accessibility, and Integration is responsible for designing and driving workflows and automation to support the reliability and robustness of the FIMS. The Lead of Workflows, Accessibility, and Integration ensures FIMS aligns with expected standards for accessibility and quality control in accordance with guidelines set by the Fisheries Integrated Toolbox. 
This lead coordinates with the Lead Test Engineer to ensure test cases are automated and successfully run by GitHub Actions and coordinates with the Lead Statistical Computing Engineer to identify opportunities to expand FIMS across related disciplines. 3.1.12 Regional representatives Regional representatives are expected to assist in FIMS implementation through design, development, and testing of FIMS. They also communicate FIMS progress and design to their respective regions and teammates. Representatives serve as power users who provide basic training and outreach within their centers on transitioning to FIMS. These representatives are also responsible for relaying feedback, questions, and training requests that they cannot complete back to the NSAP development team and Project Lead. Regional representatives are expected to introduce their partner fishery management organizations to FIMS to assist transition of FIMS from research to operations. 3.1.13 Code of conduct enforcement The code of conduct enforcer is responsible for responding to allegations of code of conduct violations in an appropriate manner. This could range from a conversation with the violator or their manager up to and including expulsion from the FIMS development team. If the violator is an external collaborator, they can be banned from contributing to the FIMS Github resources in the future. 3.1.14 External collaborators External collaborators interested in contributing to FIMS development are required to clone or fork the FIMS repository, make changes, and submit a pull request. However, collaborators are strongly encouraged to submit an issue via the main FIMS repository for discussion prior to development. In general, forks are discouraged for development that is intended for integration into FIMS as it becomes difficult to keep track of multiple forks. 
If collaborators wish to use FIMS as a starting point for a brand new project that they do not intend to merge back into the main branch, they can start a fork. However, if they intend to create a pull request, they should clone the repository and use a branch. Pull requests from forks will be reviewed the same as a pull request submitted from a branch. Users will need to conform to the same standards and all contributions must pass the standard tests as well as provide tests that check the new feature. 3.2 FIMS development cycle FIMS is structured as an agile software development process with live development on the web and Github. The development process cycles through a planning, analysis, and design phase leading to the establishment of a developmental Milestone. The implementation phase is made up of several development sprints that meet the objectives of the established Milestone. This is followed by testing & integration and a maintenance phase before the cycle starts over again. FIMS is currently in the implementation phase of Milestone 1. See M1 model specification for a description of the model. Figure 3.1: FIMS Development Cycle. Current development stage is the implementation phase of Milestone 1 3.2.1 Issue lifecycle FIMS development will adhere to a lifecycle for issues that makes it clear which issues can be resolved when. Creation — The event that marks the creation of an issue. An issue is not Active when it is Created. Issues that are opened are assigned to the FIMS Project Lead with the label: needs-triage. An issue is not considered Active until this label is removed. * Activation — When the needs-triage label is removed and the issue is assigned to a developer, the issue becomes Active. This event happens once in the lifecycle of an issue. Activation usually is not undone but it can be undone if an issue needs additional discussion; in this case, the needs-triage label is applied again. 
An issue is Active from the time it is Activated until it reaches Resolution. * Response — This event only happens if the triage team deems an issue a wont-fix or delayed. This requires communication with the party who opened the issue as to why this will not be addressed or will be moved to a later milestone. Resolution — The event that marks the resolution of an issue. This event happens once in the lifetime of an issue. This event can be undone if an issue transitions from a resolved status to an unresolved status, in which case the system considers the issue as never having been resolved. A resolution involves a code check-in and pull request, at which point someone must review and approve the pull request before the issue can transition states. In Review - The issue is “in review” after a code solution has been proposed and is being considered via a pull request. If this is approved, the issue can move into the “Closed” state. * Closure — The event that marks the closure of an Issue. This event happens once in the lifetime of an issue. The issue can enter the Closed state from either the “In Review” or “Response” state. Figure 3.2: Flow chart that describes the above process visually, e.g. how an issue moves from creation, to activation, to response or resolution, and is finally closed. 3.2.2 M2 development workflow graph TD A -- R interface group --> C[short-lived feature branch]; A -- Documentation group --> D[short-lived feature branch]; A -- NLL group --> E[development branch]; A -- More features group --> E[development branch]; E --> F[short-lived feature branch]; C --> G[Merge into main]; D --> G; F --> E; E --> G; 3.2.3 Feature validation FIMS uses a standardized set of criteria to prioritize and determine which features will be incorporated into the next development milestone. 
TODO: add criteria (to be defined) used to prioritize features for future milestones "],["m1-model-specification.html", "Chapter 4 M1 model specification 4.1 Inherited functors from TMB 4.2 Beverton-Holt recruitment function 4.3 Logistic function with extensions 4.4 Catch and fishing mortality 4.5 Modeling loops 4.6 Expected numbers and quantities 4.7 Initial values 4.8 Likelihood calculations 4.9 Statistical Inference:", " Chapter 4 M1 model specification This section describes the implementation of the modules in FIMS in milestone 1. For the first milestone, we implemented enough complexity to adequately test a very standard population model. For this reason, we implemented the minimum structure that can run the model described in Li et al. 2021. The FIMS at the end of milestone 1 is an age-structured integrated assessment model with two fleets (one survey, one fishery) and two sexes. 4.1 Inherited functors from TMB 4.1.1 Atomic functions Wherever possible, FIMS avoids reinventing atomic functions with extant definitions in TMB. If there is a need for a new atomic function, the development team can add it to TMB using the TMB_ATOMIC_VECTOR_FUNCTION() macro following the instructions here. Prototype atomic functions under development for future FIMS milestones are stored in the fims_math.hpp file in the m1-prototypes repository. 4.1.2 Statistical distributions All of the statistical distributions needed for the first milestone of FIMS are implemented in TMB and need not be replicated. Code can be found here. Distribution Name FIMS wrapper Normal dnorm FIMS code Multinomial dmultinom FIMS code Lognormal uses dnorm FIMS code 4.1.2.1 Normal Distribution \\[f(x) = \\frac{1}{\\sigma\\sqrt{2\\pi}}\\mathrm{exp}\\Bigg(-\\frac{(x-\\mu)^2}{2\\sigma^2} \\Bigg),\\] where \\(\\mu\\) is the mean of the distribution and \\(\\sigma^2\\) is the variance. 4.1.2.2 Multinomial Distribution For \\(k\\) categories and sample size, \\(n\\), \\[f(\\underline{y}) = \\frac{n!}{y_{1}!... 
y_{k}!}p^{y_{1}}_{1}...p^{y_{k}}_{k},\\] where \\(\\sum^{k}_{i=1}y_{i} = n\\), \\(p_{i} > 0\\), and \\(\\sum^{k}_{i=1}p_{i} = 1\\). The mean and variance of \\(y_{i}\\) are respectively: \\(\\mu_{i} = np_{i}\\), \\(\\sigma^{2}_{i} = np_{i}(1-p_{i})\\) 4.1.2.3 Lognormal Distribution \\[f(x) = \\frac{1.0}{ x\\sigma\\sqrt{2\\pi} }\\mathrm{exp}\\Bigg(-\\frac{(\\mathrm{ln}(x) - \\mu)^{2}}{2\\sigma^{2}}\\Bigg),\\] where \\(\\mu\\) is the mean of the distribution of \\(\\mathrm{ln(x)}\\) and \\(\\sigma^2\\) is the variance of \\(\\mathrm{ln}(x)\\). 4.2 Beverton-Holt recruitment function For parity with existing stock assessment models, the first recruitment option in FIMS is the steepness parameterization of the Beverton-Holt model (Beverton and Holt, 1957), \\[R_t(S_{t-1}) =\\frac{0.8R_0hS_{t-1}}{0.2R_0\\phi_0(1-h) + S_{t-1}(h-0.2)}\\] where \\(R_t\\) and \\(S_t\\) are mean recruitment and spawning biomass at time \\(t\\), \\(h\\) is steepness, and \\(\\phi_0\\) is the unfished spawning biomass per recruit. The initial FIMS model implements a static spawning biomass-per-recruit function, with the ability to overload the method in the future to allow for time-variation in spawning biomass per recruit that results from variation in life-history characteristics (e.g., natural mortality, maturity, or weight-at-age). Recruitment deviations (\\(r_t\\)) are assumed to be normally distributed in log space with standard deviation \\(\\sigma_R\\), \\[r_t \\sim N(0,\\sigma_R^2)\\] Because \\(r_t\\) are applied as multiplicative, lognormal deviations, predictions of realized recruitment include a term for bias correction (\\(\\sigma^2_R/2\\)). However, true \\(r_t\\) values are not known, but rather estimated (\\(\\hat{r}_t\\)), and thus the bias correction applies an adjustment factor, \\(b_t=\\frac{E[SD(\\hat{r}_{t})]^2}{\\sigma_R^2}\\) (Methot and Taylor, 2011). 
The adjusted bias correction, mean recruitment, and recruitment deviations are then used to compute realized recruitment (\\(R^*_t\\)), \\[R^*_t=R_t\\cdot\\mathrm{exp}\\Bigg(\\hat{r}_{t}-b_t\\frac{\\sigma_R^2}{2}\\Bigg)\\] The recruitment function should take as input the values of \\(S_t\\), \\(h\\), \\(R_0\\), \\(\\phi_0\\), \\(\\sigma_R\\), and \\(\\hat{r}_{t}\\), and return mean-unbiased (\\(R_t\\)) and realized (\\(R^*_t\\)) recruitment. 4.3 Logistic function with extensions \\[y_i=\\frac{1}{1+\\mathrm{exp}(-s \\cdot(x_i-\\nu))}\\] Where \\(y_i\\) is the quantity of interest (proportion mature, selected, etc.), \\(x_i\\) is the index (can be age or size or any other quantity), \\(\\nu\\) is the median (inflection point), and \\(s\\) is the slope parameter from an alternative parameterization. Logistic functions for maturity and selectivity should inherit and extend upon the base logistic function implementation. The parameterization for the double logistic curve is specified as \\[y_i=\\frac{1.0}{ 1.0 + \\mathrm{exp}(-1.0 \\cdot s_1(x_i - \\nu_1))} \\left(1-\\frac{1.0}{ 1.0 + \\mathrm{exp}(-1.0 \\cdot s_2 (x_i - \\nu_2))} \\right)\\] Where \\(s_1\\) and \\(\\nu_1\\) are the slope and median (50%) parameters for the ascending limb of the curve, and \\(s_2\\) and \\(\\nu_2\\) are the slope and median parameters for the descending limb of the curve. This is currently only implemented for the selectivity module. 4.4 Catch and fishing mortality The Baranov catch equation relates catch to instantaneous fishing and natural mortality. \\[ C_{f,a,t}=\\frac{F_{f,a,t}}{F_{f,a,t}+M}\\Bigg[1-\\mathrm{exp}(-F_{f,a,t}-M)\\Bigg]N_{a,t}\\] Where \\(C_{f,a,t}\\) is the catch at age \\(a\\) at time \\(t\\) for fleet \\(f\\), \\(F_{f,a,t}\\) is instantaneous fishing mortality, \\(M\\) is assumed constant over ages and time in the minimum viable assessment model, and \\(N_{a,t}\\) is the number of age \\(a\\) fish at time \\(t\\). 
\\[F_{f,a,t}=s_{f,a}F_{f,t}\\] where \\(F_{f,t}\\) is the fully selected fishing mortality for fleet \\(f\\) at time \\(t\\) and \\(s_{f,a}\\) is selectivity at age \\(a\\) for fleet \\(f\\). Selectivity-at-age is constant over time. Catch is in metric tons and survey is in number, so calculating catch weight (\\(CW_t\\)) is done as follows: \\[ CW_t=\\sum_{a=0}^A C_{a,t}w_a \\] Survey numbers are calculated as follows \\[I_t=q_t\\sum_{a=0}^AN_{a,t}\\] Where \\(I_t\\) is the survey index and \\(q_t\\) is survey catchability at time \\(t\\). 4.5 Modeling loops This step associates the expected values for each population section associated with a data source to that data source using a likelihood function. These likelihood functions are then combined into an objective function that is passed to TMB. The population loop is initialized at a user-specified age, time increment, and seasonal structure, rather than assuming ages, years, or seasons follow any pre-defined structure. Population categories will be described flexibly, such that subpopulations such as unique sexes, stocks, species, or areas can be handled identically to reduce duplication. Each subpopulation will have a unique set of attributes assigned to it, such that each subpopulation can share or have a different functional process (e.g. recruitment function, size-at-age) than a different category. Spawning time and recruitment time are user-specified and can occur more than once per year. For the purposes of replicating model comparison project outputs, in milestone 1, all processes including spawning and recruitment occur January 1, but these should be specified via the spawn_time and recruit_time inputs into FIMS to allow for future flexibility. Spawning and recruitment timing can be input as a scalar or vector to account for multiple options. Within the population loop, matrices denoting population properties at different partitions (age, season, sex) are translated into a single, dimension-folded index. 
A lookup table is computed at model start so that the dimension-folded index can be mapped to its corresponding population partition or time partition (e.g. population(sex, area, age, species, time, …)) so the programmer can understand what is happening. The model steps through each specified timestep to match the data to expected values, and population processes occur in the closest specified timestep to the user-input process timing (e.g. recruitment) across a small timestep that is a predefined constant. 4.6 Expected numbers and quantities The expected values are calculated as follows in the population.hpp file: \\[ B_t=\\sum_{a=0}^AN_{a,t}w_a\\] where \\(B_t\\) is total biomass in time \\(t\\), \\(N\\) is total numbers, \\(w_a\\) is weight-at-age \\(a\\) in kilograms. \\[N_t=\\sum_{a=0}^AN_{a,t}\\] where \\(N_{a,t}\\) is the total number of fish at age \\(a\\) in time \\(t\\). \\[UnfishedNAA_{t,0} = R0_{t}\\] Annual unfished numbers at age and unfished spawning biomass are tracked in the model assuming annual recruitment at rzero and only natural mortality. This provides a dynamic reference point that accounts for time-varying rzero and M. This does not currently include recruitment deviations. \\[UnfishedNAA_{t,0} = R0_{t}\\] \\[UnfishedNAA_{t,a} = UnfishedNAA_{t-1,a-1}exp(-M_{t-1,a-1})\\] for all t>0, and numbers at age at the start of the first year are model parameter inputs. \\[ UnfishedSSB_t=\\sum_{a=0}^AUnfishedNAA_{a,t}w_aFracFemale_aFracMature_a\\] All spawning stock biomass values are currently calculated at January 1 each year. This will be updated in future milestones. 4.7 Initial values The initial equilibrium recruitment (\\(R_{eq}\\)) is calculated as follows: \\[R_{eq} = \\frac{R_{0}(4h\\phi_{F} - (1-h)\\phi_{0})}{(5h-1)\\phi_{F}} \\] where \\(\\phi_{F}\\) is the initial spawning biomass per recruit given the initial fishing mortality \\(F\\). 
The initial population structure at the start of the first model year is input as an estimated parameter vector of numbers at age. This allows maximum flexibility for the model to estimate non-equilibrium starting conditions. Future milestones could add an option to input a single F value used to calculate an equilibrium starting structure. 4.8 Likelihood calculations Age composition likelihood links proportions at age from data to model using a multinomial likelihood function. The multinomial and lognormal distributions, including atomic functions, are provided within TMB. Survey index likelihood links estimated CPUE to input data CPUE in biomass using a lognormal distribution. (model.hpp) Catch index likelihood links estimated catch to input data catch in biomass using a lognormal distribution. (model.hpp) Age composition likelihoods link catch-at-age to expected catch-at-age using a multinomial distribution. Recruitment deviations control the difference between expected and realized recruitment, and they follow a lognormal distribution. (recruitment_base.hpp) 4.9 Statistical Inference: TODO: Add description detailing the statistical inference used in M1 "],["user-guide.html", "Chapter 5 User guide 5.1 User Installation Guide 5.2 Installing the package from Github 5.3 Installing from R 5.4 Running the model", " Chapter 5 User guide This section details installation guides for users. See the developer installation guide. 5.1 User Installation Guide This section describes how to install the FIMS R package and dependencies. 5.2 Installing the package from Github The following software is required: - R version 4.0.0 or newer (or RStudio version 1.2.5042 or newer) - the remotes R package - TMB (install instructions are here.) 
5.2.1 Windows users Rtools4.4 (available from here); this likely requires IT support to install it on NOAA computers (or any without administrative accounts) 5.3 Installing from R remotes::install_github("NOAA-FIMS/FIMS") library(FIMS) 5.4 Running the model This section describes how to set up and run the model. 5.4.1 Specifying the model 5.4.1.1 Naming conventions TODO: add description and link to naming conventions 5.4.1.2 Structuring data input You can add components to the model using S4 classes. #TODO: add script to demonstrate how to structure data input 5.4.1.3 Defining model specifications #TODO: add scripts detailing how to set up different components of the model 5.4.2 How to run the model #TODO: add script with examples on how to run the model 5.4.3 Extracting model output Here is how you get the model output. #TODO: add code for how to extract model output "],["developer-software-guide.html", "Chapter 6 Developer Software Guide", " Chapter 6 Developer Software Guide This section describes the software you will need to contribute to this project. This is in addition to the software dependencies described in the user installation guide, which you should ensure are installed first. 6.0.1 git You will need git installed locally, and you may prefer to use an additional git GUI client such as GitKraken or GitHub Desktop. If your preferred git client is the RStudio IDE, you can configure Git and RStudio integration following these instructions. To install git, please follow the instructions on this page for your operating system. You can find the downloads for your operating system on the left-hand navigation bar on that page. 6.0.2 Development environment An integrated development environment is recommended to organize code files, outputs, and build and debug environments. The most popular IDEs on the development team are RStudio and Visual Studio Code. You are welcome to use another IDE or a text-editor based workflow if you strongly prefer that. 
6.0.3 vscode setup Please follow the instructions here to set up vscode for use with R. For those migrating from RStudio to VS Code, this post on migrating to VS Code may be helpful. In addition to those instructions, you may need to install the vscDebugger package here using the command: remotes::install_github("ManuelHentschel/vscDebugger") To improve the plot viewer when creating plots in R, install the httpgd package: install.packages("httpgd") To add syntax highlighting and other features to the R terminal, radian can be installed. Note that Python needs to be installed first in order to install radian. A number of optional settings can be added to the user settings (settings.json) file in vscode to improve the usability of R in VS Code. For example, the settings for interacting with R terminals can be adjusted. Here are some that you may want to use with FIMS: { // Associate .RMD files with markdown: "files.associations": { "*.Rmd": "markdown", }, // A cmake setting "cmake.configureOnOpen": true, // Set where the rulers are, needed for Rewrap. 72 is the default we have // decided on for FIMS repositories. "editor.rulers": [ 72 ], // Should the editor suggest inline edits? "editor.inlineSuggest.enabled": true, // Settings for github copilot and which languages to use it with or not. "github.copilot.enable": { "*": true, "yaml": false, "plaintext": false, "markdown": false, "latex": false, "r": false }, // Setting for sending R code from the editor to the terminal "r.alwaysUseActiveTerminal": true, // Needed to send large chunks of code to the r terminal when using radian "r.bracketedPaste": true, // Needed to use httpgd for plotting in vscode "r.plot.useHttpgd": true, // path to the r terminal (in this case, radian). Necessary to get the terminal to use radian. 
"r.rterm.windows": "C://Users//my.name//AppData//Local//Programs//Python//Python310//Scripts//radian.exe", //Use this only for Windows // options for the r terminal "r.rterm.option": [ "--no-save", "--no-restore", "max.print=500" ], // Setting for whether to allow linting of documents or not "r.lsp.diagnostics": true, // When looking at diffs under the version control tab, should whitspace be ignored? "diffEditor.ignoreTrimWhitespace": false, // What is the max number of lines that are printed as output to the terminal? "terminal.integrated.scrollback": 10000 } Some suggested R shortcuts could be helpful. To set up C++ with vscode, instructions are here. Other helpful extensions that can be found in the VScode marketplace are: - Github Copilot: An AI tool that helps with line completion - Live Share: Collaborate on the same file remotely with other developers - Rewrap: Helps rewrapping comments and text lines at a specified character count. Note that to get it working it will be necessary to add rulers - There are a number of keymap packages that import key mappings from commonly used text editors (e.g., Sublime, Notepad++, atom, etc.). Searching “keymap” in the marketplace should help find these. - GitLens (or GitLess): Adds more Git functionality. Note that some of the GitLens functionality is not free, and GitLess is a fork before the addition of these premium features. Note that the keybindings.json and settings.json could be copied from one computer to another to make it easier to set up VS code with the settings needed. Note that the settings.json location differs depending on the operating system. Typically, it is good practice to not restore old sessions after shutting down the IDE. 
To avoid restoring old sessions in the VS Code terminals (including R terminal), in the Setting User Interface within VS Code (get to this by opening the command palette and searching for Preferences: Open Settings (UI)), under Features > Terminal, uncheck the option “Enable Persistent Sessions.” Rstudio addins can be accessed by searching for Rstudio addin in the command palette. Clicking on “R: Launch Rstudio Addin” should provide a list of addin options. 6.0.4 C++ compiler Windows users who installed Rtools should have a C++ compiler (gcc) as part of the bundle. To ensure the C++ compiler is on your path, open a command prompt and type gcc. If you get the below message, you are all set: gcc: fatal error: no input files compilation terminated. If not, you will need to check that the compiler is on the path. The easiest way to do so is by creating a text file .Renviron in your Documents folder which contains the following line: PATH="${RTOOLS44_HOME}\\usr\\bin;${PATH}" You can do this with a text editor, or from R like so (note that in R code you need to escape backslashes): write('PATH="${RTOOLS44_HOME}\\\\usr\\\\bin;${PATH}"', file = "~/.Renviron", append = TRUE) Restart R, and verify that make can be found, which should show the path to your Rtools installation. Sys.which("make") ## "C:\\\\rtools44\\\\usr\\\\bin\\\\make.exe" 6.0.5 GoogleTest You will need to install CMake and ninja and validate you have the correct setup by following the steps outlined in the test case template. 6.0.6 GDB debugger Windows users who use GoogleTest may need GDB debugger to see what is going on inside of the program while it executes, or what the program is doing at the moment it crashed. rtools44 includes the GDB debugger. The steps below help install 64-bit version gdb.exe. Open Command Prompt and type gdb. If you see details of usage, GDB debugger is already in your PATH. If not, follow the instructions below to install GDB debugger and add it to your PATH. 
Install Rtools following the instructions here Open ~/rtools44/mingw64.exe to run commands in the mingw64 shell. Run the command pacman -Sy mingw-w64-x86_64-gdb to install the 64-bit version (more information can be found in the R on Windows FAQ) Type Y in the mingw64 shell to proceed with installation Check whether ~/rtools44/mingw64/bin/gdb.exe exists or not Add rtools44 to the PATH and you can check that the path is working by running which gdb in a command window 6.0.7 Doxygen To build the C++ documentation website for FIMS, the documentation generator Doxygen needs to be installed. Doxygen automates the generation of documentation from source code comments. To install Doxygen, please follow the instructions here to install Doxygen on various operating systems. Below are steps to install the 64-bit version of Doxygen 1.11.0 on Windows. Download doxygen-1.11.0.windows.x64.bin.zip and extract the applications to Documents\\Apps\\Doxygen or another preferred folder. Add Doxygen to the PATH by following similar instructions here. Open a command window and run where doxygen to check if Doxygen is added to the PATH. Two commands on the command line are needed to generate the C++ documentation for FIMS locally: cmake -S. -B build -G Ninja cmake --build build "],["contributor-guidelines.html", "Chapter 7 Contributor Guidelines 7.1 Style Guide 7.2 Naming Conventions 7.3 Coding Good Practices 7.4 Roadmap to FIMS File Structure and Organization 7.5 GitHub Collaborative Environment 7.6 Issue Tracking 7.7 Reporting Bugs 7.8 Suggesting Features 7.9 Branch Workflow 7.10 Code Development 7.11 Commit Messages 7.12 Merge Conflicts 7.13 Pull Requests 7.14 Code Review 7.15 Clean up local branches 7.16 GitHub Actions", " Chapter 7 Contributor Guidelines External contributions and feedback are important to the development and future maintenance of FIMS and are welcome. This section provides guidelines and workflows for FIMS developers and collaborators on how to contribute to the project. 
7.1 Style Guide The FIMS project uses style guides to ensure our code is consistent, easy to use (e.g. read, share, and verify), and ultimately easier to write. We use the Google C++ Style Guide and the tidyverse style guide for R code. 7.2 Naming Conventions The FIMS implementation team has chosen to use typename instead of class when defining templates for consistency with the TMB package. While types may be defined in many ways, for consistency developers are asked to use Type instead of T to define Types within FIMS. 7.3 Coding Good Practices Following good software development and coding practices simplifies collaboration, improves readability, and streamlines testing and review. The following are industry-accepted standards:
- Adhere to the FIMS Project style guide
- Avoid rework - take the time to check for existing options (e.g. in-house, open source, etc.) before writing code
- Keep code as simple as possible
- Use meaningful variable names that are easy to understand and clearly represent the data they store
- Use descriptive titles and consistent conventions for class and function names
- Use consistent names for temporary variables that have the same kind of role
- Add clear and concise coding comments
- Use consistent formatting and indentation to improve readability and organization
- Group code into separate blocks for individual tasks
- Avoid hard-coded values to ensure portability of code
- Follow the DRY principle - “Don’t Repeat Yourself” (or your code)
- Avoid deep nesting
- Limit line length (wrap ~72 characters)
- Capitalize SQL queries so they are readily distinguishable from table/column names
- Lint your code
7.4 Roadmap to FIMS File Structure and Organization 7.4.1 Files that go in inst/include 7.4.1.1 common This folder includes files that are shared between the interface, the TMB objective function, and the mathematics and population dynamics components of the package. 7.4.1.2 interface This includes the R interface files. 
7.4.1.3 population dynamics There are subfolders underneath this folder that correspond to the different components of the population dynamics model. Each of the modules will need a .hpp file that only consists of #include statements for the files under the subfolders. In the subfolder, there will need to be one file called _base.hpp that defines the base class for the module type. The base class should only need a constructor method and a number of methods (e.g. evaluate()) that are not specific to the type of functions available under the subfolders but are reused for all objects of that class type. 7.4.2 Files that go in src/ 7.4.2.1 FIMS.cpp This is the TMB objective function. 7.5 GitHub Collaborative Environment Communication is managed via the NOAA-FIMS GitHub organization. New feature requests and bugs should be submitted as issues to the FIMS development repo. For guidelines on submitting issues, see Issue Tracking. GitProjects TODO: add description * GitHub Teams TODO: add description * All contributors, both internal and external, are required to abide by the Code of Conduct 7.5.1 FIMS Branching Strategy There are several branching strategies available that will work within the Git environment and other version control systems. However, it is important to find a strategy that works well for both current and future contributors. Branching strategies provide guidance for how, when, and why branches are created and named, which also ties into necessary guidance surrounding issue tracking. The FIMS Project uses a Scaled Trunk Based Development branching strategy to make tasks easier without compromising quality. 
Scaled Trunk Based Development; image credit: https://reviewpad.com/blog/github-flow-trunk-based-development-and-code-reviews/ This strategy is required for continuous integration and facilitates knowledge of steps that must be taken prior to, during, and after making changes to the code, while still allowing anyone interested in the code to read it at any time. Additionally, trunk-based development captures the following needs without being overly complicated: * Short-lived branches to minimize stale code and merge conflicts * Fast release times, especially for bug fixes * Ability to release bug fixes without new features 7.5.2 Branch Protection Branch protection allows for searching branch names with grep functionality to apply merging rules (i.e., protection). This will be helpful to protect the main/trunk branch such that pull requests cannot be merged in prior to passing various checks or by individuals without the authority to do so. 7.5.3 GitHub cloning and branching For contributors with write access to the FIMS repo, changes should be made on a feature branch after cloning the repo. The FIMS repo can be cloned to a local machine by running the following on the command line: git clone https://github.com/NOAA-FIMS/FIMS.git 7.5.4 Outside collaborators and forks Outside collaborators without write access to the FIMS repos will be required to fork the repository, make changes, and submit a pull request. Forks are discouraged for everyday development because it becomes difficult to keep track of all of the forks. Thus, it will be important for those working on forks to be active in the issue tracker in the main repository prior to working on their fork — just like any member of the organization would do if they were working within the organization. Knowledge of future projects, ideas, concerns, etc. should always be documented in an issue before the code is altered. Pull requests from forks will be reviewed the same as a pull request submitted from a branch. 
Users will need to conform to the same standards and all contributions must pass the standard tests as well as have tests that check the new feature. To fork and then clone a repository, follow the GitHub Documentation for forking a repo. Once cloned, changes can be made on a feature branch. When ready to submit changes, follow the GitHub Documentation on creating a pull request from a fork 7.6 Issue Tracking Use of the GitHub issue tracker is key to keeping everyone informed and prioritizing key tasks. All future projects, ideas, concerns, development, etc. must be documented in an issue before the code is altered. Issues should be filed and tagged prior to any code changes whether the change pertains to a bug or the development of a feature. Issues are automatically tagged with the status: triage_needed tag and placed on the Issue Triage Board. Issues will subsequently be labeled and given an assignee and milestone by whoever is in charge of the Triage Board. 7.7 Reporting Bugs This section guides you through submitting a bug report for any toolbox tool. Following these guidelines helps maintainers and the community understand your report, reproduce the behavior, and find related reports. 7.7.0.1 Before Submitting A Bug Report Check if it is related to version. We recommend using sessionInfo() within your R console and submitting the results in your bug report. Also, please check your R version against the required R version in the DESCRIPTION file and update if needed to see if that fixes the issue. Perform a cursory search of issues to see if the problem has already been reported. If it has and the issue is still open, add a comment to the existing issue instead of opening a new one. If it has and the issue is closed, open a new issue and include a link to the original issue in the body of your new one. 7.7.0.2 How Do I Submit A (Good) Bug Report? Bugs are tracked as GitHub issues. 
Create an issue on the toolbox GitHub repository and provide the following information by following the steps outlined in the reprex package. Explain the problem and include additional details to help maintainers reproduce the problem using the Bug Report issue template. Provide more context by answering these questions: Did the problem start happening recently (e.g. after updating to a new version of R) or was this always a problem? If the problem started happening recently, can you reproduce the problem in an older version of R? What’s the most recent version in which the problem doesn’t happen? Can you reliably reproduce the issue? If not, provide details about how often the problem happens and under which conditions it normally happens. If the problem is related to working with files (e.g. reading in data files), does the problem happen for all files and projects or only some? Does the problem happen only when working with local or remote files (e.g. on network drives), with files of a specific type (e.g. only JavaScript or Python files), with large files or files with very long lines, or with files in a specific encoding? Is there anything else special about the files you are using? Include details about your configuration and environment: Which version of the tool are you using? What’s the name and version of the OS you’re using? Which packages do you have installed? You can get that list by running sessionInfo(). 7.8 Suggesting Features This section guides you through submitting a feature suggestion for toolbox packages, including completely new features and minor improvements to existing functionality. Following these guidelines helps maintainers and the community understand your suggestion and find related suggestions. Before creating enhancement suggestions, please check the issues list as you might find out that you don’t need to create one. When you are creating an enhancement suggestion, please include an “enhancement” tag in the issues. 
7.8.0.1 Before Submitting A Feature Suggestion Check that you have the latest version of the package. Check if the development branch has that enhancement in the works. Perform a cursory search of the issues and enhancement tags to see if the enhancement has already been suggested. If it has, add a comment to the existing issue instead of opening a new one. 7.8.0.2 How Do I Submit A (Good) Feature Suggestion? Feature suggestions are tracked as GitHub issues. Create an issue on the repository and use the Feature Request issue template. 7.8.1 Issue Labels Utilize labels on issues: To describe the kind of work to be done: bug, enhancement, task, discussion, question, suitable for beginners To indicate the state of the issue: urgent, current, next, eventually, won’t fix, duplicate 7.8.2 Issue Templates Templates are available and stored within each repository to guide users through the process of submitting a new issue. Example templates for issues can be found on GitHub Docs. Use these references and existing templates stored in .github/ISSUE_TEMPLATE for reference when creating a new template. 7.9 Branch Workflow This section details the workflow to create a branch in order to contribute to FIMS. 7.9.1 Branching Good Practices The following suggestions will help ensure optimal performance of the trunk-based branching strategy: Branches and commits should be kept small (e.g. a couple of commits, a few lines of code) to allow for rapid merges and deployments. Use feature flags to wrap new changes in an inactive code path for later activation (rather than creating a separate repository feature branch). Delete branches after they are merged to the trunk; avoid repositories with a large number of “active” branches. Merge branches to the trunk frequently (e.g. at least every few days; tag as a release commit) to avoid merge conflicts. Use caching layers where appropriate to optimize build and test execution times. 
7.9.2 Branch Naming Conventions Example: R-pkg-skeleton Keep it brief Use a hyphen as separators 7.9.3 git workflow Use the following commands to create a branch:
$ git checkout -b <branchname> main #creates a local branch
$ git push origin <branchname> #pushes branch back to GitHub
Periodically merge changes from main into branch
$ git merge main #merges changes from main into branch
While editing code, commit regularly following commit message guidelines
$ git add <filename> #stages file for commit
$ git commit -m "Commit Message" #commits changes
To push changes to GitHub, first set the upstream location:
$ git push --set-upstream origin <branchname> #pushes change to feature branch on GitHub
After which, changes can be pushed as:
$ git push #pushes change to feature branch on GitHub
When finished, create a pull request to the main branch following pull request guidelines 7.10 Code Development Code is written following the Style Guide, FIMS Naming Conventions, and Coding Good Practices 7.11 Commit Messages FIMS Project contributors should provide clear, descriptive commit messages to communicate to collaborators details about changes that have occurred and improve team efficiency. Good commit messages follow the following practices: Include a short summary of the change for the subject/title (<50 characters) Include a blank line in between the ‘subject’ and ‘body’ Specify the type of commit: * fix: bug fix * feat: new feature * test: testing * docs: documentation * chore: regular code maintenance (e.g. 
updating dependencies) * refactor: refactoring codebase * style: changes that do not affect the meaning of the code; instead address code styling/formatting * perf: performance improvements * revert: reverts a previous commit * build: changes that affect the build system If the commit addresses an issue, indicate the issue# in the title Provide a brief explanatory description of the change, addressing what was changed and why Wrap to ~72 characters Write in the imperative (e.g. “Fix bug”, not “Fixed bug”) If necessary, separate paragraphs by blank lines Utilize BREAKING CHANGE: <description> to provide explanation or further context about the issue being addressed. If the commit closes an issue, include a footer to note that (e.g. “Closes #19”) 7.12 Merge Conflicts 7.12.1 What is a merge conflict? A merge conflict happens when changes have occurred to the same piece of code on the two branches being merged. This means Git cannot automatically determine which version of the change should be kept. Most merge conflicts are small and easy to figure out. See the GitHub Documentation on merge conflicts for more information. 7.12.2 How to prevent merge conflicts Merge in small changes often rather than making many changes on a branch that is kept separate from the main branch for a long time. Avoid refactoring the same piece of code in different ways on separate branches. Avoid working in the same files on separate branches. 7.12.3 How to resolve merge conflicts Merge conflicts can be resolved on GitHub or locally using Git. An additional helpful resource is this guide to merge conflicts. 7.13 Pull Requests Once development of a module is complete, the contributor must initiate a pull request. GitHub will automatically start an independent review process before the branch can be merged back into the main development branch. Pull requests are used to identify changes pushed to development branches. 
Open pull requests allow the FIMS Development Team to discuss and review the changes, as well as add follow-up commits before merging to the main branch. As noted in the branching strategy section, branches, commits, and pull requests should be kept small to enable rapid review and reduce the chance of merge conflicts. Any pull requests for the FIMS Project must be fully tested and reviewed before being merged into the main branch. Use the pull request template to create pull requests. Pull requests without this template attached will not be approved. 7.14 Code Review Code review ensures the health and continuous improvement of the FIMS codebase, while simultaneously helping FIMS developers become familiar with the codebase and ensuring there is a diverse team of knowledgeable collaborators to support the continued development and maintenance of FIMS. CI/CD requires rapid review of all new/modified code, so processes must be in place to support this pace. FIMS code review will utilize tools available via GitHub, which allows reviewers to analyze code changes, provide inline comments, and view change histories. The Google code review developer guide provides a useful set of guidelines for both reviewers and code authors. Below is a flowchart for the FIMS code review process. The author starts by submitting a pull request (PR), ensuring documentation, tests, and CI checks are complete, then proposes a reviewer. The reviewer receives the review request and either executes the review independently or pairs with another team representative if assistance is needed. Based on the review, changes may be requested, which the author must address before approval. Once the PR is approved, the author merges it into the main branch. 7.14.1 Assigning Reviewers Reviewers of PRs for changes to the codebase in FIMS should be suggested by the author of the PR. 
For those FIMS Implementation Team Members who keep their status in GitHub current (see “Setting a status” for more information), authors can use the status information to prevent assigning a reviewer who is known to be “Busy”. If a review has been assigned to you and you don’t feel like you have the expertise to address it properly, please respond directly to the PR so a different reviewer can be found promptly. 7.14.2 Automated Testing Automated testing provides an initial layer of quality assurance and lets reviewers know that the code meets certain standards. For more on FIMS testing, see Testing and GitHub Actions. 7.14.3 Review Checklist While automated testing can assure the code structure and logic pass quality checks, human reviewers are required to evaluate things like functionality, readability, etc. Every pull request is accompanied by an automatically generated checklist of major considerations for code reviews; additional guidance is provided below for reviewers to evaluate when providing feedback on code: Design (Is the code in the proper location? Are files organized intuitively? Are components divided up in a sensible way? Does the pull request include an appropriate number of changes, or would the code changes be better broken into more focused parts? Is the code focused on only requirements within the current scope? Does the code follow object-oriented design principles? Will changes be easy to maintain? Is the code more complex than it needs to be?) Functionality (Does the code function as it is expected to? Are changes, including to the user interface (if applicable), good for users? Does parallel computing remain functional? How will the change impact other parts of the system? Are there any unhandled edge cases? Are there other code improvements possible?) Testing (Does the code have appropriate unit tests? Are tests well-designed? Have dependencies been appropriately tested? Does automated testing cover the code change adequately? 
Could the test structure be improved?) Readability (Are the code and data flow easy to understand? Are there any parts of the code that are confusing or commented out? Are names clear? Does the code include any errors, repeats, or incomplete sections? Does the code adhere to the FIMS Style Guide?) Documentation (Are there clear and useful comments explaining why the code has been implemented as it has been? Is the code appropriately documented (doxygen and roxygen)? Is the README file complete and current, and does it adequately describe the project/changes?) Security (Does using this code open the software to possible security violations or vulnerabilities?) Performance (Are there ways to improve on the code’s performance? Is there any complex logic that could be simplified? Could any of the code be replaced with built-in functions? Will this change have any impacts on system performance? Is there any debugging code that could be removed? Are there any optimizations that could be removed and still maintain system performance?) 7.14.4 Review Good Practices Good reviews require good review habits. Try to follow these suggestions: Review in short sessions (< 60 minutes) to maintain focus and attention to detail Don’t try to review more than 400 lines of code in a single session Provide constructive and supportive feedback Ask open-ended questions and offer alternatives or possible workarounds Avoid strong/opinionated statements Applaud good solutions Don’t say “you” Be clear about which questions/comments are non-blocking or unimportant; likewise, be explicit when approving a change or requesting follow-up Aim to minimize the number of nitpicks (if there are a lot, suggest a team-level resolution) Use the FIMS Style Guide to settle any style arguments 7.15 Clean up local branches If a code reviewer approves the pull request, FIMS workflow managers will merge the feature/bug branch back into the main repository and delete the branch. 
At this stage, the contributor should also delete the branch from the local repository using the following commands:
$ git checkout main #switches back to main branch
$ git branch -d <branchname> #deletes branch from local repository
7.16 GitHub Actions FIMS uses GitHub Actions to automate routine tasks. These tasks include: Backup checks for developers Routine GitHub workflow tasks (not important for developers to monitor) Currently, the GitHub Actions in the FIMS repository include:
GitHub Action Name | Description | Type | Runs a Check on PRs? | Runs on:
call-r-cmd-check | Runs R CMD Check | Backup Check | Yes | Push to any branch
run-clang-tidy | Checks for C++ code | Backup Check | Yes | Push to any branch
run-googletest | Runs the Google C++ unit tests | Backup Check | Yes | Push to any branch
run-doxygen | Builds the C++ documentation | Backup Check | No | Push to main branch
run-clang-format | Styles C++ code | Routine workflow task | No | Push to main branch
call-doc-and-style-r | Documents and styles R code | Routine workflow task | No | Push to main branch
pr-checklist | Generates a checklist as a comment for reviewers on PRs | Routine workflow task | No | Opening a PR
YAML files in a subdirectory of the FIMS repository specify the setup for the GitHub Actions. Some of the actions depend on reusable workflows available in {ghactions4r}. Runs of the GitHub Actions can be viewed by navigating to the Actions tab of the FIMS repository. The status of GitHub Action runs can also be viewed on pull requests or next to commits throughout the FIMS repository. 7.16.1 Details on Backup Checks Developers must make sure that the checks on their pull requests pass, as typically changes will not be merged into the main branch until all GitHub Actions are passing (the exception is if there are known reasons for the GitHub Actions to fail that are not related to the pull request). Other responsibilities of developers are listed in the Code Development section. 
Additional details about the backup check GitHub Actions: call-r-cmd-check runs R CMD Check on the FIMS package using the current version of R. Three runs occur simultaneously, on three operating systems: Windows, Linux (Ubuntu), and OSX. R CMD Check ensures that the FIMS package can be downloaded without error. An error means that the package cannot be downloaded successfully on the operating system for the run that failed. Developers should investigate the failing runs and make fixes. To replicate the GitHub Actions workflow locally, use devtools::check(). run-clang-tidy runs checks while compiling the C++ code. If this run fails, fixes need to be made to the C++ code to address the issue identified. run-googletest Runs the GoogleTest C++ unit tests and benchmarking. If this run fails, then fixes need to be made to the C++ code and/or the GoogleTest C++ unit tests. To replicate this GitHub Actions workflow locally, follow instructions in the testing section. 7.16.2 Debugging Broken Runs GitHub Actions can fail for many reasons, so debugging is necessary to find the cause of the failing run. Some steps that can help with debugging are: Ask for help as needed! Some members of the FIMS team who have experience debugging GitHub Actions are Bai, Kathryn, and Ian. Investigate why the run failed by looking in the log. Try to replicate the problem locally. For example, if the call-r-cmd-check run fails during the testthat tests, try running the testthat tests locally (e.g., using devtools::test()). If the problem can be replicated, try to fix locally by fixing one test or issue at a time. Then push the changes up to GitHub and monitor the new GitHub Action run. If the problem cannot be replicated locally, it could be an operating-system-specific issue; for example, if using Windows locally, it may be an issue specific to Mac or Linux. 
Sometimes, runs may fail because a particular dependency wasn’t available at the exact point in time needed for the run (e.g., maybe R didn’t install because the R executable couldn’t be downloaded); if that is the case, wait a few hours to a day and try to rerun. If it continues to fail for more than a day, a change in the GitHub Action YAML file may be needed. 7.16.3 How do I request a new GitHub Action workflow? Routine actions and checks should be captured in a GitHub Action workflow in order to improve efficiency of the development process and/or improve automated checks on the FIMS codebase. New GitHub Action workflows can be requested by opening an issue in the FIMS repository. "],["hpp-template-for-c-modules.html", "Chapter 8 .hpp template for C++ modules", " Chapter 8 .hpp template for C++ modules In this section we will describe how to structure a new .hpp file in FIMS.

// template.hpp
// Fisheries Integrated Modeling System (FIMS)

// define the header guard
#ifndef template_hpp
#define template_hpp

// inherit from model_base
#include "../common.hpp"
#include <iostream>

/**
 * In this example, we utilize the concepts of inheritance and
 * polymorphism (https://www.geeksforgeeks.org/polymorphism-in-c/). All
 * classes inherit from model_base. Name1 and Name2 inherit from NameBase.
 * Classes Name1 and Name2 must implement their own version of
 * "virtual T Evaluate(const T& t)", which will have unique logic.
 */

/*
 * fims namespace
 */
namespace fims{

/**
 * NameBase class. Inherits from model_base.
 */
template <class T>
class NameBase: public model_base<T>{ //note that model_base gets template parameter T.
protected:
public:
virtual T Evaluate(const T& t)=0; //"= 0;" means this must be implemented in the child.
};

/*
 * Template class inherits from NameBase
 */
template <class T>
class Name1: public NameBase<T>{
public:
/*
 * Default constructor.
 * Initialize any memory here.
 */
Name1(){
}

/**
 * Destructor; this method destructs the Name1 object. 
 * Delete any allocated memory here.
 */
~Name1(){
std::cout << "I just deleted Name1 object" << std::endl;
}

/**
 * Note: this function must have the same signature as Evaluate in NameBase.
 * Overloaded virtual function. This is polymorphism, meaning the
 * signature has the same appearance, but the function itself has unique logic.
 *
 * @param t
 * @return t+1
 */
virtual T Evaluate(const T& t) {
std::cout << "evaluate in Name1 received " << t << " as a method parameter, returning " << (t+1) << std::endl;
return t+1; //unique logic for Name1 class
}
};

/*
 * Template class inherits from NameBase
 */
template <class T>
class Name2: public NameBase<T>{
public:
/*
 * Default constructor.
 * Initialize any memory here.
 */
Name2(){
}

/**
 * Destructor; this method destructs the Name2 object.
 * Delete any allocated memory here.
 */
~Name2(){
std::cout << "I just deleted Name2 object" << std::endl;
}

/**
 * Note: this function must have the same signature as Evaluate in NameBase.
 * Overloaded virtual function. This is polymorphism, meaning the
 * signature has the same appearance, but the function itself has unique logic.
 *
 * @param t
 * @return t^2
 */
virtual T Evaluate(const T& t) {
std::cout << "evaluate in Name2 received " << t << " as a method parameter, returning " << (t*t) << std::endl;
return t*t; //unique logic for Name2 class
}
};

/**
 * Add additional implementations below. 
 */
} //end namespace

/**
 * Example usage:
 *
 * int main(int argc, char** argv){
 *   NameBase<double>* name = NULL; //pointer to a NameBase object
 *   Name1<double> n1; //inherits from NameBase
 *   Name2<double> n2; //inherits from NameBase
 *
 *   name = &n1; //name now points to n1
 *   name->Evaluate(2.0); //unique logic for n1
 *
 *   name = &n2; //name now points to n2
 *   name->Evaluate(2.0); //unique logic for n2
 * }
 *
 * Output:
 * evaluate in Name1 received 2 as a method parameter, returning 3
 * evaluate in Name2 received 2 as a method parameter, returning 4
 */
#endif /*template_hpp */
"],["documentation-template.html", "Chapter 9 Documentation Template 9.1 Writing function reference 9.2 Writing a vignette 9.3 Step by step documentation update process", " Chapter 9 Documentation Template In this section we will describe how to document your code. For more information about code documentation in general, please see the toolbox blog post on code documentation. This post describes the differences between the types of documentation, while below we give specific, brief instructions on developer responsibilities for FIMS. 9.1 Writing function reference Function reference can be written inline in comments above the function in either C++ or R. The tools you can use to generate reference from comments are called Doxygen and Roxygen in C++ and R, respectively. Both can include LaTeX syntax to denote equations, and both use @ tags to name components of the function reference:

/**
 * @brief This function calculates the von Bertalanffy growth curve.
 * \\f$
 *
 * length\\_at\\_age = lmin + (lmax - lmin)*\\frac{(1.0 - c^{(age - a\\_min)})}{(1.0 - c^{(a\\_max - a\\_min)})}
 *
 * \\f$
 *
 * @param age
 * @param sex
 * @return length\\_at\\_age
 */

The only difference between the syntax for R and C++ code is how comments are denoted in the language.

#' This function calculates the von Bertalanffy growth curve. 
#'
#' @param age
#' @param sex
#' @return length_at_age

You should, at minimum, include the tags @param, @return, and @examples in your function reference if it is an exported function. Functions that are only called internally do not require an @examples tag. Other useful tags include @seealso and @export for Roxygen chunks. 9.2 Writing a vignette If this is an exported function, a vignette can be a helpful tool for showing users how to use your function. For now, a rough approximation of the “get started” vignette is written in the software user guide page of this book. If you include a vignette for your function, you can link to it in the Roxygen documentation with the following code. #' \\code{vignette("help", package = "mypkg")} 9.3 Step by step documentation update process Write the function reference in either R or C++ as described above. Check the software user guide and check that any changes you have made to the code are reflected in the code snippets on that page. Push to the feature branch. Ensure that the documentation created by the automated workflow is correct and that any test cases execute successfully before merging into main. "],["testing.html", "Chapter 10 Testing 10.1 Introduction 10.2 C++ unit testing and benchmarking 10.3 Templates for GoogleTest testing 10.4 R testing 10.5 Test case documentation template and examples", " Chapter 10 Testing This section describes testing for FIMS. FIMS uses Google Test for C++ unit testing and testthat for R unit testing. 10.1 Introduction The FIMS testing framework will include different types of testing to make sure that changes to FIMS code are working as expected. The unit and functional tests will be developed during the initial development stage when writing individual functions or modules. After completing development of multiple modules, integration testing will be developed to verify that different modules work well together. 
Checks will be added in the software to catch user input errors when conducting run-time testing. Regression testing and platform compatibility testing will be executed before pre-releasing FIMS. Beta-testing will be used to gather feedback from users (i.e., members of the FIMS implementation team and other users) during the pre-release stage. After releasing the first version of FIMS, the development team will go back to the beginning of the testing cycle and write unit tests when a new feature needs to be implemented. One-off testing will be used for testing new features and fixing user-reported bugs when maintaining FIMS. More details of each type of test can be found in the Glossary section. FIMS will use GoogleTest to build a C++ unit testing framework and R testthat to build an R testing framework. FIMS will use Google Benchmark to measure the real time and CPU time used for running the produced binaries. 10.2 C++ unit testing and benchmarking 10.2.1 Requirements To use GoogleTest, you will need: A compatible operating system (e.g., Windows, macOS, or Linux). A C++ compiler that supports at least the C++11 standard (e.g., gcc 5.0+, clang 5.0+, or MSVC 2015+). For macOS users, Xcode 9.3+ provides clang 5.0. For R users, rtools4 includes gcc. A build system for building the testing project. CMake and a compatible build tool such as Ninja are approved by NMFS HQ. 10.2.2 Setup for Windows users Download CMake 3.22.1 (cmake-3.22.1-windows-x86_64.zip) and put the extracted folder in Documents\\Apps or another preferred folder. Download ninja v1.10.2 (ninja-win.zip) and put the application in Documents\\Apps or another preferred folder. Open your Command Prompt and type cmake. If you see details of usage, cmake is already in your PATH. If not, follow the instructions below to add cmake to your PATH. In the same command prompt, type ninja.
If you see a message that starts with ninja:, even if it is an error about not finding build.ninja, this means that ninja is already in your PATH. If ninja is not found, follow the instructions below to add ninja to your path. 10.2.3 Adding cmake and ninja to your PATH on Windows In the Windows search bar next to the start menu, search for Edit environment variables for your account and open the Environment Variables window. Click Edit... under the User variables for firstname.lastname section. Click New, add path to cmake, if needed (e.g., cmake-3.22.1-windows-x86_64\\bin or C:\\Program Files\\CMake\\bin are common paths), and click OK. Click New, add path to the location of the Ninja executable, if needed (e.g., Documents\\Apps\\ninja-win or C:\\Program Files\\ninja-win), and click OK. You may need to restart your computer to update the environment variables. You can check that the path is working by running where cmake or where ninja in a command terminal. Note that in certain Fisheries centers, NOAA employees do not have administrative privileges enabled to edit the local PATH environment variable. In this situation it is necessary to create a ticket with IT to add cmake and ninja to your PATH on Windows. 10.2.4 Setup for Linux and Mac users See CMake installation instructions for installing CMake on other platforms. Add cmake to your PATH. You can check that the path is working by running which cmake in a command window. Download ninja v1.10.2 (ninja-linux.zip or ninja-mac.zip) and put the binary in your preferred location. Add Ninja to your PATH. You can check that the path is working by running which ninja in a command window. Open a command window and type cmake. If you see usage, cmake is found. If not, cmake may still need to be added to your PATH. Open a command window and type ninja. If you see a message starting with ninja:, ninja is found. Otherwise, try changing the permissions or adding to your path.
10.2.5 How to edit your PATH and change file permissions for Linux and Mac To check if the binary is in your path, assuming the binary is named ninja: open a Terminal window, type which ninja, and hit enter. If you get nothing returned, then ninja is not in your path. The easiest way to fix this is to move the ninja binary to a folder that’s already in your path. To find existing path folders, type echo $PATH in the terminal and hit enter. Now move the ninja binary to one of these folders. For example, in a Terminal window type: sudo cp ~/Downloads/ninja /usr/bin/ to move ninja from the Downloads folder to /usr/bin. You will need to use sudo and enter your password afterwards to have permission to move a file to a folder like /usr/bin/. Also note that you may need to add executable permissions to the ninja binary after downloading it. You can do that by switching to the folder where you placed the binary (cd /usr/bin/ if you followed the instructions above) and running the command: sudo chmod +x ninja Check that ninja is now executable and in your path: which ninja If you followed the instructions above, you will see the following line returned: /usr/bin/ninja 10.2.6 Set up the FIMS testing project Clone the FIMS repository on the command line using: git clone https://github.com/NOAA-FIMS/FIMS.git cd FIMS There is a file called CMakeLists.txt in the top level of the directory. This file instructs CMake on how to create the build files, including setting up Google Test. The Google Test testing code is in the tests/gtest subdirectory. Within this subdirectory is a file called CMakeLists.txt. This file contains additional specifications for CMake, in particular instructions on how to register the individual tests. 10.2.7 Build and run the tests Three commands on the command line are needed to build the tests: cmake -S . -B build -G Ninja This generates the build system using Ninja as the generator. Note there is now a subfolder called build.
Next, in the same command window, use cmake to build in the build subfolder: cmake --build build Finally, run the C++ tests: ctest --test-dir build The output from running the tests should look something like: Internal ctest changing into directory: C:/github_repos/NOAA-FIMS_org/FIMS/build Test project C:/github_repos/NOAA-FIMS_org/FIMS/build Start 1: dlognorm.use_double_inputs 1/5 Test #1: dlognorm.use_double_inputs ....... Passed 0.04 sec Start 2: dlognorm.use_int_inputs 2/5 Test #2: dlognorm.use_int_inputs .......... Passed 0.04 sec Start 3: modelTest.eta 3/5 Test #3: modelTest.eta .................... Passed 0.04 sec Start 4: modelTest.nll 4/5 Test #4: modelTest.nll .................... Passed 0.04 sec Start 5: modelTest.evaluate 5/5 Test #5: modelTest.evaluate ............... Passed 0.04 sec 100% tests passed, 0 tests failed out of 5 10.2.8 Adding a C++ test Create a file dlognorm.hpp within the src subfolder that contains a simple function: #include <cmath> template<class Type> Type dlognorm(Type x, Type meanlog, Type sdlog){ Type resid = (log(x)-meanlog)/sdlog; Type logres = -log(sqrt(2*M_PI)) - log(sdlog) - Type(0.5)*resid*resid - log(x); return logres; } Then, create a test file dlognorm-unit.cpp in the tests/gtest subfolder that has a test suite for the dlognorm function: #include "gtest/gtest.h" #include "../../src/dlognorm.hpp" // # R code that generates true values for the test // dlnorm(1.0, 0.0, 1.0, TRUE) = -0.9189385 // dlnorm(5.0, 10.0, 2.5, TRUE) = -9.07679 namespace { // TestSuiteName: dlognormTest; TestName: DoubleInput and IntInput // Test dlognorm with double input values TEST(dlognormTest, DoubleInput) { EXPECT_NEAR( dlognorm(1.0, 0.0, 1.0) , -0.9189385 , 0.0001 ); EXPECT_NEAR( dlognorm(5.0, 10.0, 2.5) , -9.07679 , 0.0001 ); } // Test dlognorm with integer input values TEST(dlognormTest, IntInput) { EXPECT_NEAR( dlognorm(1, 0, 1) , -0.9189385 , 0.0001 ); } } EXPECT_NEAR(val1, val2, absolute_error) verifies that the difference between val1 
and val2 does not exceed the absolute error bound absolute_error. EXPECT_NE(val1, val2) verifies that val1 is not equal to val2. Please see GoogleTest assertions reference for more EXPECT_ macros. 10.2.9 Add tests to tests/gtest/CMakeLists.txt and run a binary To build the code, add the following contents to the end of the tests/gtest/CMakeLists.txt file: add_executable(dlognorm_test dlognorm-unit.cpp ) target_include_directories(dlognorm_test PUBLIC ${CMAKE_SOURCE_DIR}/../ ) target_link_libraries(dlognorm_test gtest_main ) include(GoogleTest) gtest_discover_tests(dlognorm_test) The above configuration enables testing in CMake, declares the C++ test binary you want to build (dlognorm_test), and links it to GoogleTest (gtest_main). Now you can build and run your test. Open a command window in the FIMS repo (if not already opened) and type: cmake -S . -B build -G Ninja This generates the build system using Ninja as the generator. Next, in the same command window, use cmake to build: cmake --build build Finally, run the tests in the same command window: ctest --test-dir build The output when running ctest might look like this. Note there is a failing test: Internal ctest changing into directory: C:/Users/Kathryn.Doering/Documents/testing/FIMS/build Test project C:/Users/Kathryn.Doering/Documents/testing/FIMS/build Start 1: dlognorm.use_double_inputs 1/7 Test #1: dlognorm.use_double_inputs ....... Passed 0.04 sec Start 2: dlognorm.use_int_inputs 2/7 Test #2: dlognorm.use_int_inputs .......... Passed 0.04 sec Start 3: modelTest.eta 3/7 Test #3: modelTest.eta .................... Passed 0.04 sec Start 4: modelTest.nll 4/7 Test #4: modelTest.nll .................... Passed 0.04 sec Start 5: modelTest.evaluate 5/7 Test #5: modelTest.evaluate ............... Passed 0.04 sec Start 6: dlognormTest.DoubleInput 6/7 Test #6: dlognormTest.DoubleInput ......... 
Passed 0.04 sec Start 7: dlognormTest.IntInput 7/7 Test #7: dlognormTest.IntInput ............***Failed 0.04 sec 86% tests passed, 1 tests failed out of 7 Total Test time (real) = 0.28 sec The following tests FAILED: 7 - dlognormTest.IntInput (Failed) Errors while running CTest Output from these tests are in: C:/Users/Kathryn.Doering/Documents/testing/FIMS/build/Testing/Temporary/LastTest.log Use "--rerun-failed --output-on-failure" to re-run the failed cases verbosely. 10.2.10 Debugging a C++ test There are two ways to debug a C++ test, interactively using gdb or via print statements. To use gdb, make sure it is installed and on your path. Debug C++ code (e.g., segmentation error/memory corruption) using gdb: cmake -S . -B build -G Ninja -DCMAKE_BUILD_TYPE=Debug cmake --build build --parallel 16 ctest --test-dir build --parallel 16 gdb ./build/tests/gtest/population_dynamics_population.exe c // to continue without paging run // to see which line of code is broken print this->log_naa // for example, print this->log_naa to see the value of log_naa; print i // for example, print i from the broken for loop bt // backtrace q // to quit Debug C++ code without using gdb: Update code in a .hpp file by calling std::ofstream out(“file_name.txt”) Then use out << variable; to print out values of the variable nfleets = fleets.size(); std::ofstream out("debug.txt"); out <<nfleets; More complex examples with text identifying the quantities out <<" fleet_index: "<<fleet_index<<" index_yaf: "<<index_yaf<<" index_yf: "<<index_yf<<"\\n"; out <<" population.Fmort[index_yf]: "<<population.Fmort[index_yf]<<"\\n"; Git Bash cmake -S . -B build -G Ninja cmake --build build --parallel 16 ctest --test-dir build --parallel 16 The output of the print statements will be in this test file: FIMS/build/tests/gtest/debug.txt 10.2.11 Benchmark example Google Benchmark measures the real time and CPU time used for running the produced binary. We will continue using the dlognorm.hpp example. 
Create a benchmark file dlognorm_benchmark.cpp and put it in the tests/gtest subfolder: #include "benchmark/benchmark.h" #include "../../src/dlognorm.hpp" void BM_dlgnorm(benchmark::State& state) { for (auto _ : state) dlognorm(5.0, 10.0, 2.5); } BENCHMARK(BM_dlgnorm); This file runs the dlognorm function and uses BENCHMARK to see how long it takes. A more comprehensive feature overview of benchmarking is available in the Google Benchmark GitHub repository. 10.2.12 Add benchmarks to tests/gtest/CMakeLists.txt and run the benchmark To build the code, add the following contents to the end of your tests/gtest/CMakeLists.txt file: FetchContent_Declare( googlebenchmark URL https://github.com/google/benchmark/archive/refs/tags/v1.6.0.zip ) FetchContent_MakeAvailable(googlebenchmark) add_executable(dlognorm_benchmark dlognorm_benchmark.cpp ) target_include_directories(dlognorm_benchmark PUBLIC ${CMAKE_SOURCE_DIR}/../ ) target_link_libraries(dlognorm_benchmark benchmark_main ) To run the benchmark, open the command line in the FIMS repo (if not already open) and run cmake, sending output to the build subfolder: cmake --build build Then run the dlognorm_benchmark executable created: build/tests/gtest/dlognorm_benchmark.exe The output from dlognorm_benchmark.exe might look like this: Run on (8 X 2112 MHz CPU s) CPU Caches: L1 Data 32 KiB (x4) L1 Instruction 32 KiB (x4) L2 Unified 256 KiB (x4) L3 Unified 8192 KiB (x1) ***WARNING*** Library was built as DEBUG. Timings may be affected. ----------------------------------------------------- Benchmark Time CPU Iterations ----------------------------------------------------- BM_dlgnorm 153 ns 153 ns 4480000 10.2.12.1 Remove files produced by this example If you don’t want to keep any of the files produced by this example and want to completely clear any uncommitted changes and files from the git repo, use git restore . to get rid of uncommitted changes in git tracked files.
To get rid of all untracked files in the repo, use: git clean -fd 10.2.13 Clean up after running C++ tests 10.2.13.1 Clean up CMake-generated files and re-run tests After running the examples above, the build generates files (i.e., the source code, libraries, and executables) and saves the files in the build subfolder. The example above demonstrates an “out-of-source” build which puts generated files in a completely separate directory, so that the source tree is unchanged after running tests. Using a separate source and build tree reduces the need to delete files that differ between builds. If you still would like to delete CMake-generated files, just delete the build folder, and then build and run tests by repeating the commands below. The files from the build folder are included in the FIMS repository’s .gitignore file, so should not be pushed to the FIMS repository. 10.2.13.2 Clean up individual tests For simple C++ functions like the examples above, we do not need to clean up the tests. Clean up is only necessary in a few situations. If memory for an object was allocated during testing and not deallocated - The object needs to be deleted (e.g., delete object). If you used a test fixture from GoogleTest to use the same data configuration for multiple tests, TearDown() can be used to clean up the test and then the test fixture will be deleted. Please see more details from GoogleTest user’s guide. 10.3 Templates for GoogleTest testing This section includes templates for creating unit tests and benchmarks. This is the code that would go into the .cpp files in tests/gtest. 10.3.1 Unit test template #include "gtest/gtest.h" #include "../../src/code.hpp" // # R code that generates true values for the test namespace { // Description of Test 1 TEST(TestSuiteName, Test1Name) { ... test body ... } // Description of Test 2 TEST(TestSuiteName, Test2Name) { ... test body ... 
} } 10.3.2 Benchmark template #include "benchmark/benchmark.h" #include "../../src/code.hpp" void BM_FunctionName(benchmark::State& state) { for (auto _ : state) // This code gets timed Function(); } // Register the function as a benchmark BENCHMARK(BM_FunctionName); 10.3.3 tests/gtest/CMakeLists.txt template These lines are added each time a new test suite (all tests in a file) is added: # Add test suite 1 add_executable(TestSuiteName1 test1.cpp ) target_link_libraries(TestSuiteName1 gtest_main ) gtest_discover_tests(TestSuiteName1) These lines are added each time a new benchmark file is added: # Add benchmark 1 add_executable(benchmark1 benchmark1.cpp ) target_link_libraries(benchmark1 benchmark_main ) 10.4 R testing FIMS uses {testthat} for writing R tests. You can install the packages following the instructions on the testthat website. If you are not familiar with testthat, the testing chapter in R packages gives a good overview of the testing workflow, along with structure explanations and concrete examples. 10.4.1 Testing FIMS locally To test FIMS R functions interactively and locally, use devtools::install() rather than devtools::load_all(). This is because using load_all() will turn on the debugger, bloating the .o file, and may lead to a compilation error (e.g., Fatal error: can't write 326 bytes to section .text of FIMS.o: 'file too big' as: FIMS.o: too many sections (35851)). Note that useful interactive tests should be converted into {testthat} or googletest tests. 10.4.2 Testing using gdbsource You can interactively debug C++ code using TMB::gdbsource() in RStudio.
Just add these two lines to the top of the test-fims-estimation.R file: require(testthat) devtools::load_all("C:\\\\Users\\\\chris\\\\noaa-git\\\\FIMS") 10.4.3 R testthat naming conventions and file organization We try to group functions and their helpers together (the “main function plus helpers” approach). Always name the test file the same as the R file, but with test- prepended (e.g., test-myfunction.R contains testthat tests for the R code in R/myfunction.R). This is the convention in the tidyverse style guide. testthat tests of Rcpp code should be called test-rcpp-[description].R. Integration tests which do not have a corresponding .R file should use the convention test-integration-[description].R. 10.4.4 R testthat template The format for an individual testthat test is: test_that("TestName", { ...test body... }) Multiple testthat tests can be put in the same file if they are related to the same .R file (see naming conventions above). 10.5 Test case documentation template and examples A testing plan must be developed while designing (i.e., before coding) new FIMS features or Rcpp modules. Please update the test cases in the FIMS/tests/milestoneX_test_cases.md file (e.g., FIMS/tests/milestone1_test_cases.md). This testing plan is documented using the test case documentation template below. 10.5.1 Test case documentation template Individual functional or integration test cases will be designed following the template below. Test ID. Create a meaningful name for the test case. Features to be tested. Provide a brief statement of test objectives and description of the features to be tested. (Identify the test items following the FIMS software design specification document and identify all features that will not be tested and the rationale for exclusion.) Approach. Specify the approach that will ensure that the features are adequately tested and specify which type of test is used in this case. Evaluation criteria.
Provide a list of expected results and acceptance criteria. Pass/fail criteria. Specify the criteria used to determine whether each feature has passed or failed testing. In addition to setting pass/fail criteria with specific tolerance values, documentation that simply records the outputs of some tests may be useful if the tests require additional computations, simulations, and comparisons. Test deliverables. Identify all information that is to be delivered by the test activity, such as test logs and automated status reports. 10.5.2 Test case documentation examples 10.5.2.1 General test case documentation The test case documentation below is a general case that applies to many functions/modules. For individual functions/modules, please make detailed test cases for specific options, noting “same as the general test case” where appropriate. Test ID Features to be tested General test case The function/module returns correct output values given different input values The function/module returns error messages when users give wrong types of inputs The function/module raises an error if the input value is outside the bound of the input parameter Approach Prepare expected true values using R Run tests in R using testthat and compare output values with expected values Push tests to the working repository and run tests using GitHub Actions Run tests in different OS environments (Windows latest, macOS latest, and Ubuntu latest) using GitHub Actions Submit a pull request for code review Evaluation Criteria The tests pass if the output values equal the expected true values The tests pass if the function/module returns error messages when users give wrong types of inputs The tests pass if the function/module returns error messages when a user provides an input value that is outside the bound of the input parameter Test deliverables Test logs on GitHub Actions. Document results of logs in the feature pull request.
10.5.2.2 Functional test example: TMB probability mass function of the multinomial distribution Test ID Probability mass function of the multinomial distribution Features to be tested Same as the general test case Approach Functional test Prepare expected true values using the R function dmultinom from package ‘stats’ Evaluation Criteria Same as the general test case Test deliverables Same as the general test case 10.5.2.3 Integration test example: Li et al. 2021 age-structured stock assessment model comparison Test ID Age-structured stock assessment comparison (Li et al. 2021) Features to be tested Null case (update standard deviation of the log of recruitment from 0.2 to 0.5 based on the Siegfried et al. 2016 snapper-grouper complex) Recruitment variability Stochastic fishing mortality (F) F patterns (e.g., roller coaster: up then down and down then up; constant Flow, FMSY, and Fhigh) Selectivity patterns Recruitment bias adjustment Initial condition (unit of catch: number or weight) Model misspecification (e.g., growth, natural mortality, steepness, catchability, etc.) Approach Integration test Prepare expected true values from an operating model using R functions from the Age_Structured_Stock_Assessment_Model_Comparison GitHub repository Evaluation Criteria Summarize the median absolute relative error (MARE) between true values from the operating model and the FIMS estimation model If all MAREs from the null case are less than 10% and all MAREs are less than 15%, the tests pass. If the MAREs are greater than 15%, a closer examination is needed. Test deliverables In addition to the test logs on GitHub Actions, a document that includes comparison figures from various cases (e.g., Fig 5 and 6 from Li et al. 2021) will be automatically generated A table that shows median absolute relative errors in unfished recruitment, catchability, spawning stock biomass, recruitment, fishing mortality, and reference points (e.g., Table 6 from Li et al.
2021) will be automatically generated 10.5.2.4 Simulation testing: challenges and solutions One thing that might be challenging when comparing simulation results is that changes to the order of calls to simulate will change the simulated values. Tests may fail simply because different random numbers are used or the order of the simulation changes through model development. Several solutions could be used to address the simulation testing issue. Please see discussions on the FIMS-planning issue page for details. Once we start developing simulation modules, we can use these two ways to compare simulated data from FIMS and a test: Add a TRUE/FALSE parameter in each FIMS simulation module for setting up the testing seed. When testing the module, set the parameter to TRUE to fix the seed number in R and conduct tests. If adding a TRUE/FALSE parameter does not work as expected, then carefully check simulated data from each component and make sure it is not a model coding error. FIMS will use set.seed() from R to set the seed. The {rstream} package will be investigated if one of the requirements of the FIMS simulation module is to generate multiple streams of random numbers to associate distinct streams of random numbers with different sources of randomness. {rstream} was specifically designed to address the issue of needing very long streams of pseudo-random numbers for parallel computations. Please see the rstream paper and RngStreams for more details. "],["glossary.html", "Glossary Testing Glossary 10.6 C++ Glossary", " Glossary In this section we will define terms that come up throughout this handbook. Testing Glossary Unit testing Description: It tests individual methods and functions of the classes, components, or modules used by the software independently. It executes only small portions of the test cases during the development process.
Writer: Developer Advantages: It finds problems early and helps trace the bugs in the development cycle; cheap to automate when a method has clear input parameters and output; can be run quickly. Limitations: Tedious to create; it won’t catch integration errors if a method or a function has interactions with something external to the software. Examples: A recruitment module may consist of a few stock-recruit functions. We could use a set of unit test cases that ensure each stock-recruit function is correct and meets its design as intended while developing the function. Reference: Wikipedia description Functional testing Description: It checks the software’s performance with respect to its specified requirements. Testers do not need to examine the internal structure of the piece of software tested but just test a slice of functionality of the whole system after it has been developed. Writer: Tester Advantages: It verifies that the functionalities of the software are working as defined; it leads to reduced developer bias since the tester has not been involved in the software’s development. Limitations: Need to create input data and determine output based on each function’s specifications; need to know how to compare actual and expected outputs and how to check whether the software works as the requirements specified. Examples: The software requires development of catch-based projection. We could use a set of functional test cases that help verify whether the model produces correct output given specified catch input after catch-based projection has been implemented in the system. Reference: Wikipedia description; WHAM testthat examples Integration testing Description: A group of software modules are coupled together and tested. Integrate software modules all together and verify the interfaces between modules against the software design. Testing continues until the software works as a system.
Writer: Tester Advantages: It builds a working version of the system by putting the modules together. It assembles a software system and helps detect errors associated with interfacing. Limitations: The tests can only be executed after all the modules are developed. It may be difficult to locate errors because all components are integrated together. Examples: After developing all the modules, we could set up a few stock assessment test models and check if the software can read the input file, run the stock assessment models, and provide desired output. Reference: Wikipedia description Run-time testing Description: Checks added in the software that catch user input errors. The developer will add in checks to the software; the user will trigger these checks if there are input errors. Writer: Developer Advantages: Provides guidance to the user while using the software Limitations: Adding many checks can cause the software to run more slowly, and the messages need to be helpful so the user can fix the input error. Examples: A user inputs a vector of values when they only need to input a single integer value. When running the software, they get an error message telling them that they should use a single integer value instead. Reference: Testing R code book Regression testing Description: Re-running tests to ensure that previously developed and tested software still performs after a change. Testers can execute regression testing after adding a new feature to the software or whenever a previously discovered issue has been fixed. Testers can run all tests or a part of the test suite to check the correctness or quality of the software. Writer: Tester Advantages: It ensures that the changes made to the software have not affected the existing functionalities or correctness of the software. Limitations: If the team makes changes to the software often, it may be difficult to run all tests from the test suite frequently.
In that case, it’s a good idea to have a regression testing schedule. For example, run a part of the test suite that is higher in priority after every change and run the full test suite weekly or monthly. Examples: Set up a test suite like the Stock Synthesis test-models repository. The test cases can be based on real stock assessment models, but may not be the final model version or may have been altered for testing purposes. Test the final software by running this set of models and seeing whether the results for key model quantities remain the same relative to a “reference run” (e.g., the last release of the software). Reference: Wikipedia description Platform compatibility testing Description: It checks whether the software is capable of running on different operating systems and versions of other software. Testers need to define a set of environments or platforms the application is expected to work on. Testers can test the software on different operating systems or platforms and report the bugs. Writer: Tester Advantages: It ensures that the developed software works under different configurations and is compatible with the client’s environment. Limitations: Testers need to have knowledge of the testing environment and platforms to understand the expected software behavior under different configurations. It may be difficult to figure out why the software produces different results when using different operating systems. Examples: Set up an automated workflow and see if the software is compatible with different operating systems, such as Windows, macOS, and Linux. Also, testers can check if the software is compatible with different versions of R (e.g., the release version and version 3.6, etc.). Reference: International Software Testing Qualification Board Beta testing Description: It is a form of external user acceptance testing, and feedback from users can ensure the software has fewer bugs.
The software is released to a limited set of end-users outside of the implementation team, and the end-users (beta testers) can report issues with the beta software to the implementation team after further testing. Writer: Members of the implementation team and other users Advantages: It helps in uncovering unexpected errors that happen in the client’s environment. The implementation team can receive direct feedback from users before shipping the software to all users. Limitations: The testing environment is not under the control of the implementation team, and it may be hard to reproduce the bugs. Examples: Prepare a document that describes the new features of the software and share it with selected end-users. Send a pre-release of the software to selected users for further testing and gather feedback from users. Reference: Wikipedia description; SS prerelease example One-off testing Description: It is for replicating and fixing user-reported bugs. It is a special type of testing that needs to be completed outside of the ordinary routine. Testers write a test that replicates the bug and run the test to check if the test is failing as expected. After fixing the bug, the testers can run the test again and check if the test is passing. Writer: Developer and tester Advantages: The test is simple, fast, and efficient for fixing bugs. Limitations: The tests are specific to bugs and may require manual testing. Examples: A bug is found in the code and the software does not work properly. A tester can create a test to replicate the bug, and the test would fail as expected. After the developer fixes the bug, the tester can run the test and see if the issue is resolved. Reference: International Software Testing Qualification Board; SS bug fix example 10.6 C++ Glossary Some C++ vocabulary that is used within FIMS that will be helpful for novice C++ programmers to understand. 10.6.1 singleton Defines a class that is only used to create an object one time. This is a design pattern.
See more information 10.6.2 class Provides the “recipe” for the structure of an object, including the data members and functions. Like data structures (structs), but also includes functions. See more information. 10.6.3 functor A functor is a class that acts like a function. See more details about functors. 10.6.4 constructor A special method that is called when a new object is created and usually initializes data members of the object. See the definition of constructor. 10.6.5 destructor The last method of an object, called automatically before an object is destroyed. See the definition of destructor. 10.6.6 header guards Prevent a header from being included more than once in the same compilation unit. Details are available. 10.6.7 preprocessing macros/directives These begin with a # in the code and tell the preprocessor (not the compiler) what to do. These directives are processed before compiling. See more info on preprocessing 10.6.8 struct Similar to a class, but only contains data members and not functions. All members are public. Comes from C. See details on struct "],["404.html", "Page not found", " Page not found The page you requested cannot be found (perhaps it was moved or renamed). You may want to try searching to find the page's new location, or use the table of contents to find the page you are looking for. "]] [["index.html", "FIMS Developer Handbook Chapter 1 Contributing to this book 1.1 Description 1.2 Edit and preview book changes", " FIMS Developer Handbook FIMS Implementation Team 2024-06-15 Chapter 1 Contributing to this book This is a book written in Markdown describing the FIMS development workflow for FIMS developers and contributors. It is intended as a living document and will change over time as the FIMS project matures. Some sections may be incomplete or missing entirely. Suggestions or contributions may be made via the FIMS collaborative workflow GitHub site https://github.com/NOAA-FIMS/collaborative_workflow. 
This section describes how to edit and contribute to the book. 1.1 Description Each bookdown chapter is an .Rmd file, and each .Rmd file can contain one or more chapters. A chapter must start with a first-level heading: # A good chapter, and can contain one (and only one) first-level heading. Use second-level and higher headings within chapters like: ## A short section or ### An even shorter section. The index.Rmd file is required, and is also your first book chapter. It will be the homepage when you render the book. 1.2 Edit and preview book changes When you want to make a change to this book, follow the steps below: 1. Create a new feature branch either from the issue requesting the change or from the repo on GitHub. 2. Pull the remote branch into your local branch and make your changes to the .Rmd files locally. 3. When you are done editing, do not render the book locally, but push your changes to the remote feature branch. 4. Pushing to the remote feature branch initiates a GitHub Action that creates a .zip file you should download and unzip. Open the file index.html in a browser to preview the rendered .html content. If the action fails, this means the book could not be rendered. Use the GitHub Action log to determine what the problem is. 5. When the book can be rendered and you are satisfied with the changes, submit a pull request to merge the feature branch into main. 
"],["code-of-conduct.html", "Chapter 2 Code of conduct 2.1 FIMS contributor conduct 2.2 Our pledge 2.3 Our standards 2.4 Enforcement responsibilities 2.5 Scope 2.6 Enforcement 2.7 Enforcement guidelines 2.8 Supporting good conduct 2.9 Attribution", " Chapter 2 Code of conduct 2.1 FIMS contributor conduct 2.2 Our pledge We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, or sexual identity and orientation. We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community. 2.3 Our standards Examples of behavior that contributes to a positive environment for our community include: Demonstrating empathy and kindness toward other people Being respectful of differing opinions, viewpoints, and experiences Giving and gracefully accepting constructive feedback Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience Focusing on what is best not just for us as individuals, but for the overall community Examples of unacceptable behavior include: The use of sexualized language or imagery, and sexual attention or advances of any kind Trolling, insulting or derogatory comments, and personal or political attacks Public or private harassment Publishing others’ private information, such as a physical or email address, without their explicit permission Other conduct which could reasonably be considered inappropriate in a professional setting 2.4 Enforcement responsibilities Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they 
deem inappropriate, threatening, offensive, or harmful. Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate. 2.5 Scope This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. 2.6 Enforcement Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement anonymously using this form. Reports will be reviewed by a member of the NOAA Fisheries Office of Science and Technology who is not participating in the FIMS Project [Patrick Lynch] but has the full support of FIMS Community Leaders. All reports will be reviewed promptly and fairly. All community leaders are obligated to respect the privacy and security of the reporter of any incident whenever possible; however, please note that behaviors that meet the official criteria for harassment must be reported by supervisors under NOAA policy. 2.7 Enforcement guidelines Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct: 2.7.1 1. Correction Community Impact: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community. Consequence: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested. 2.7.2 2. 
Warning Community Impact: A violation through a single incident or series of actions. Consequence: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban. 2.7.3 3. Temporary ban Community Impact: A serious violation of community standards, including sustained inappropriate behavior. Consequence: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban. 2.7.4 4. Permanent ban Community Impact: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals. Consequence: A permanent ban from any sort of public interaction within the community. 2.8 Supporting good conduct FIMS Community leaders will create default community health files (e.g. CONTRIBUTING, CODE_OF_CONDUCT) to be used in all repositories owned by FIMS. 2.9 Attribution This Code of Conduct is copied from the Contributor Covenant, version 2.1, available at https://www.contributor-covenant.org/version/2/1/code_of_conduct.html. Community Impact Guidelines were inspired by Mozilla’s code of conduct enforcement ladder. For answers to common questions about this code of conduct, see the FAQ at https://www.contributor-covenant.org/faq. Translations are available at https://www.contributor-covenant.org/translations. 
"],["fims-project-management-process.html", "Chapter 3 FIMS project management process 3.1 FIMS governance 3.2 FIMS development cycle", " Chapter 3 FIMS project management process 3.1 FIMS governance The FIMS Terms of Reference describes the high level organization of the FIMS Project. Additional details on roles and responsibilities are provided here. 3.1.1 Developers Developers are expected to adhere to the principles and guidelines outlined within this handbook, including the Code of Conduct, Contributor Guidelines, Style Guide, Issue Tracking, and Testing. 3.1.2 C++ developers The C++ developer responsibilities include: Writing the module code. Creating documentation for the module and building the documentation with Doxygen to ensure it is error-free. Run cmake --build build and review the generated Doxygen HTML files locally to ensure they are error-free. Implementing the suite of required test cases in Google Test for the module. Run cmake --build build and ctest --test-dir build locally and make sure the C++ tests pass before pushing tests to the remote feature branch. If there are failing tests, run ctest --test-dir build --rerun-failed --output-on-failure to re-run the failed tests verbosely. Ensuring the run-clang-tidy and run-googletest GitHub Actions workflows pass on the remote feature branch 3.1.3 R developers The R developer responsibilities include: Writing the Rcpp interface to the C++ code. Writing Roxygen documentation for any R functions. Run devtools::document() locally before pushing changes to the remote branch. Writing testthat() test cases for any R functionality. Run devtools::test() locally before pushing tests to the remote feature branch. Running styler::style_pkg() to style R code locally and then push changes to the remote feature branch. If there are many changes, please do this in a separate commit. Running devtools::check() locally and making sure the package can be compiled and R tests pass. 
If there are failing tests, run devtools::test(filter = \"file_name\") (where “test-file_name.R” is the testthat file containing failing tests) and edit code/tests to troubleshoot tests. During development, run devtools::build() locally for faster, more frequent package builds. Ensuring the code passes the call-r-cmd-check GitHub Action workflow on the remote feature branch. 3.1.4 All developers Once these are complete, the developer should create a pull request according to the correct template and assign the issue tracking the completion of the bug fix and/or feature to the assigned review team. The developer must resolve any issues arising from the review and get confirmation from the review team before the pull request is merged into the upstream branch. 3.1.5 Reviewers The reviewers are responsible for adhering to documented guidelines in the Code Review section. Reviewers should confirm that the new code is able to build and run within their own development environment as well as via the GitHub Actions on the repository. Reviewers should clearly document in the pull request thread which components of the code are inaccurate, do not comply with project guidelines and style, or do not work, so that the developer knows what they need to fix. 3.1.6 Project Lead The Project Lead is responsible for driving decisions on FIMS features, user interfaces, and project guidelines and standards based on the vision and objectives of FIMS and discussions with the OST development team and regional product representatives. The project lead ensures the FIMS product satisfies user and business requirements, incorporates feedback, and iterates on the design and development as needed. The Project Lead will triage issues and pull requests weekly and ensure development and code review occur in a timely manner and according to project guidelines, priorities, and standards. 
The Project Lead is also responsible for communicating project status via maintenance of the GitHub projects and scheduling tasks and managing change requests. 3.1.7 Lead Software Architect The Lead Software Architect is responsible for designing the FIMS product architecture to maximize portability and extensibility, managing technical risks and opportunities, mentoring development and implementation team members, advising the project lead on software design, refactor, and implementation decisions, scheduling of tasks, managing change requests, and guaranteeing quality of deliveries via code review. The Lead Software Architect also educates the team on technical best practices. 3.1.8 Lead Test Engineer The Lead Test Engineer is responsible for designing and driving test objectives, test strategies, and test plans of the FIMS product at subsequent milestones. The Lead Test Engineer will identify the tools for test reporting, management and automation, and guide and monitor the design, implementation, and execution of test cases and test procedures. The Lead Test Engineer will train and mentor implementation team members on how to effectively write and debug tests. 3.1.9 Lead Statistical Computing Engineer The Lead Statistical Computing Engineer is responsible for designing the FIMS statistical architecture that maximizes statistical accuracy and ensures the implementation of statistical good practices. The Lead Statistical Computing Engineer will advise the Project Lead on design and implementation decisions and will work closely with the Lead Software Architect to ensure a balance between computational and statistical efficiency, and with the Lead Test Engineer to develop tests that check the statistical accuracy of model design. 
3.1.10 Outreach and Transition Coordinator The Outreach and Transition Coordinator communicates with policy-makers, NOAA leadership, and regional offices on transition plans from existing assessment models and processes to FIMS. This coordinator works with academic partners to develop and coordinate training on using FIMS. 3.1.11 Lead of Workflows, Accessibility, and Integration The Lead of Workflows, Accessibility, and Integration is responsible for designing and driving workflows and automation to support the reliability and robustness of FIMS. The Lead of Workflows, Accessibility, and Integration ensures FIMS aligns with expected standards for accessibility and quality control in accordance with guidelines set by the Fisheries Integrated Toolbox. This lead coordinates with the Lead Test Engineer to ensure test cases are automated and successfully run by GitHub Actions and coordinates with the Lead Statistical Computing Engineer to identify opportunities to expand FIMS across related disciplines. 3.1.12 Regional representatives Regional representatives are expected to assist in FIMS implementation through design, development, and testing of FIMS. They also communicate FIMS progress and design to their respective regions and teammates. Representatives serve as power users who provide basic training and outreach within their centers on transitioning to FIMS. These representatives are also responsible for relaying feedback, questions, and training requests that they cannot complete back to the NSAP development team and Project Lead. Regional representatives are expected to introduce their partner fishery management organizations to FIMS to assist transition of FIMS from research to operations. 3.1.13 Code of conduct enforcement The code of conduct enforcer is responsible for responding to allegations of code of conduct violations in an appropriate manner. 
Responses could range from a conversation with the violator or their manager up to and including expulsion from the FIMS development team. If the violator is an external collaborator, they can be banned from contributing to the FIMS GitHub resources in the future. 3.1.14 External collaborators External collaborators interested in contributing to FIMS development are required to clone or fork the FIMS repository, make changes, and submit a pull request. However, collaborators are strongly encouraged to submit an issue via the main FIMS repository for discussion prior to development. In general, forks are discouraged for development that is intended for integration into FIMS as it becomes difficult to keep track of multiple forks. If collaborators wish to use FIMS as a starting point for a brand new project that they do not intend to merge back into the main branch, they can start a fork. However, if they intend to create a pull request, they should clone the repository and use a branch. Pull requests from forks will be reviewed in the same way as pull requests submitted from a branch. Users will need to conform to the same standards and all contributions must pass the standard tests as well as provide tests that check the new feature. 3.2 FIMS development cycle FIMS is structured as an agile software development process with live development on the web and GitHub. The development process cycles through a planning, analysis, and design phase leading to the establishment of a developmental Milestone. The implementation phase is made up of several development sprints that meet the objectives of the established Milestone. This is followed by testing & integration and a maintenance phase before the cycle starts over again. FIMS is currently in the implementation phase of Milestone 1. See M1 model specification for a description of the model. Figure 3.1: FIMS Development Cycle. 
Current development stage is the implementation phase of Milestone 1 3.2.1 Issue lifecycle FIMS development will adhere to a lifecycle for issues that makes it clear which issues can be resolved when. Creation — The event that marks the creation of an issue. An issue is not Active when it is Created. Issues that are opened are assigned to the FIMS Project Lead with the label: needs-triage. An issue is not considered Active until this label is removed. Activation — When the needs-triage label is removed and the issue is assigned to a developer, the issue becomes Active. This event happens once in the lifecycle of an issue. Activation usually is not undone but it can be undone if an issue needs additional discussion; in this case, the needs-triage label is applied again. An issue is Active from the time it is Activated until it reaches Resolution. Response — This event only happens if the triage team deems an issue wont-fix or delayed. This requires communication with the party who opened the issue as to why this will not be addressed or will be moved to a later milestone. Resolution — The event that marks the resolution of an issue. This event happens once in the lifetime of an issue. This event can be undone if an issue transitions from a resolved status to an unresolved status, in which case the system considers the issue as never having been resolved. A resolution involves a code check-in and pull request, at which point someone must review and approve the pull request before the issue can transition states. In Review — The issue is “in review” after a code solution has been proposed and is being considered via a pull request. If this is approved, the issue can move into the “Closed” state. Closure — The event that marks the closure of an issue. This event happens once in the lifetime of an issue. The issue can enter the Closed state from either the “In Review” or “Response” state. 
Figure 3.2: Flow chart that describes the above process visually, e.g., how an issue moves from creation, to activation, to response or resolution, and is finally closed. 3.2.2 M2 development workflow 3.2.3 Feature validation FIMS uses a standardized set of criteria to prioritize and determine which features will be incorporated into the next development milestone. TODO: add criteria (to be defined) used to prioritize features for future milestones "],["m1-model-specification.html", "Chapter 4 M1 model specification 4.1 Inherited functors from TMB 4.2 Beverton-Holt recruitment function 4.3 Logistic function with extensions 4.4 Catch and fishing mortality 4.5 Modeling loops 4.6 Expected numbers and quantities 4.7 Initial values 4.8 Likelihood calculations 4.9 Statistical Inference:", " Chapter 4 M1 model specification This section describes the implementation of the modules in FIMS in milestone 1. For the first milestone, we implemented enough complexity to adequately test a very standard population model. For this reason, we implemented the minimum structure that can run the model described in Li et al. 2021. FIMS at the end of milestone 1 is an age-structured integrated assessment model with two fleets (one survey, one fishery) and two sexes. 4.1 Inherited functors from TMB 4.1.1 Atomic functions Wherever possible, FIMS avoids reinventing atomic functions with extant definitions in TMB. If there is a need for a new atomic function, the development team can add it to TMB using the TMB_ATOMIC_VECTOR_FUNCTION() macro following the instructions here. Prototype atomic functions under development for future FIMS milestones are stored in the fims_math.hpp file in the m1-prototypes repository. 4.1.2 Statistical distributions All of the statistical distributions needed for the first milestone of FIMS are implemented in TMB and need not be replicated. Code can be found here. 
Distribution Name FIMS wrapper Normal dnorm FIMS code Multinomial dmultinom FIMS code Lognormal uses dnorm FIMS code 4.1.2.1 Normal Distribution \\[f(x) = \\frac{1}{\\sigma\\sqrt{2\\pi}}\\mathrm{exp}\\Bigg(-\\frac{(x-\\mu)^2}{2\\sigma^2} \\Bigg),\\] where \\(\\mu\\) is the mean of the distribution and \\(\\sigma^2\\) is the variance. 4.1.2.2 Multinomial Distribution For \\(k\\) categories and sample size, \\(n\\), \\[f(\\underline{y}) = \\frac{n!}{y_{1}!... y_{k}!}p^{y_{1}}_{1}...p^{y_{k}}_{k},\\] where \\(\\sum^{k}_{i=1}y_{i} = n\\), \\(p_{i} > 0\\), and \\(\\sum^{k}_{i=1}p_{i} = 1\\). The mean and variance of \\(y_{i}\\) are respectively: \\(\\mu_{i} = np_{i}\\), \\(\\sigma^{2}_{i} = np_{i}(1-p_{i})\\) 4.1.2.3 Lognormal Distribution \\[f(x) = \\frac{1.0}{ x\\sigma\\sqrt{2\\pi} }\\mathrm{exp}\\Bigg(-\\frac{(\\mathrm{ln}(x) - \\mu)^{2}}{2\\sigma^{2}}\\Bigg),\\] where \\(\\mu\\) is the mean of the distribution of \\(\\mathrm{ln(x)}\\) and \\(\\sigma^2\\) is the variance of \\(\\mathrm{ln}(x)\\). 4.2 Beverton-Holt recruitment function For parity with existing stock assessment models, the first recruitment option in FIMS is the steepness parameterization of the Beverton-Holt model (Beverton and Holt, 1957), \\[R_t(S_{t-1}) =\\frac{0.8R_0hS_{t-1}}{0.2R_0\\phi_0(1-h) + S_{t-1}(h-0.2)}\\] where \\(R_t\\) and \\(S_t\\) are mean recruitment and spawning biomass at time \\(t\\), \\(h\\) is steepness, and \\(\\phi_0\\) is the unfished spawning biomass per recruit. The initial FIMS model implements a static spawning biomass-per-recruit function, with the ability to overload the method in the future to allow for time-variation in spawning biomass per recruit that results from variation in life-history characteristics (e.g., natural mortality, maturity, or weight-at-age). 
Recruitment deviations (\(r_t\)) are assumed to be normally distributed in log space with standard deviation \(\sigma_R\), \[r_t \sim N(0,\sigma_R^2)\] Because \(r_t\) are applied as multiplicative, lognormal deviations, predictions of realized recruitment include a term for bias correction (\(\sigma^2_R/2\)). However, true \(r_t\) values are not known, but rather estimated (\(\hat{r}_t\)), and thus the bias correction applies an adjustment factor, \(b_t=\frac{E[SD(\hat{r}_{t})]^2}{\sigma_R^2}\) (Methot and Taylor, 2011). The adjusted bias correction, mean recruitment, and recruitment deviations are then used to compute realized recruitment (\(R^*_t\)), \[R^*_t=R_t\cdot\mathrm{exp}\Bigg(\hat{r}_{t}-b_t\frac{\sigma_R^2}{2}\Bigg)\] The recruitment function should take as input the values of \(S_t\), \(h\), \(R_0\), \(\phi_0\), \(\sigma_R\), and \(\hat{r}_{t}\), and return mean-unbiased (\(R_t\)) and realized (\(R^*_t\)) recruitment. 4.3 Logistic function with extensions \[y_i=\frac{1}{1+\mathrm{exp}(-s \cdot(x_i-\nu))}\] Where \(y_i\) is the quantity of interest (proportion mature, selected, etc.), \(x_i\) is the index (can be age or size or any other quantity), \(\nu\) is the median (inflection point), and \(s\) is the slope parameter from an alternative parameterization. Logistic functions for maturity and selectivity should inherit and extend upon the base logistic function implementation. The parameterization for the double logistic curve is specified as \[y_i=\frac{1.0}{ 1.0 + \mathrm{exp}(-1.0 \cdot s_1(x_i - \nu_1))} \left(1-\frac{1.0}{ 1.0 + \mathrm{exp}(-1.0 \cdot s_2 (x_i - \nu_2))} \right)\] Where \(s_1\) and \(\nu_1\) are the slope and median (50%) parameters for the ascending limb of the curve, and \(s_2\) and \(\nu_2\) are the slope and median parameters for the descending limb of the curve. This is currently only implemented for the selectivity module. 
4.4 Catch and fishing mortality The Baranov catch equation relates catch to instantaneous fishing and natural mortality. \[ C_{f,a,t}=\frac{F_{f,a,t}}{F_{f,a,t}+M}\Bigg[1-\mathrm{exp}(-F_{f,a,t}-M)\Bigg]N_{a,t}\] Where \(C_{f,a,t}\) is the catch at age \(a\) at time \(t\) for fleet \(f\), \(F_{f,a,t}\) is instantaneous fishing mortality, \(M\) is natural mortality (assumed constant over ages and time in the minimum viable assessment model), and \(N_{a,t}\) is the number of age \(a\) fish at time \(t\). Fishing mortality at age is the product of selectivity and fleet-level fishing mortality, \[F_{f,a,t}=s_{f,a}F_{f,t}\] where \(s_{f,a}\) is selectivity at age \(a\) for fleet \(f\) and \(F_{f,t}\) is the fully selected fishing mortality for fleet \(f\) at time \(t\). Selectivity-at-age is constant over time. Catch is in metric tons and the survey is in numbers, so catch weight (\(CW_t\)) is calculated as follows: \[ CW_t=\sum_{a=0}^A C_{a,t}w_a \] Survey numbers are calculated as follows \[I_t=q_t\sum_{a=0}^AN_{a,t}\] Where \(I_t\) is the survey index and \(q_t\) is survey catchability at time \(t\). 4.5 Modeling loops This layer associates the expected values for each population component with its corresponding data source using a likelihood function. These likelihood functions are then combined into an objective function that is passed to TMB. The population loop is initialized at a user-specified age, time increment, and seasonal structure, rather than assuming ages, years, or seasons follow any pre-defined structure. Population categories will be described flexibly, such that subpopulations such as unique sexes, stocks, species, or areas can be handled identically to reduce duplication. Each subpopulation will have a unique set of attributes assigned to it, such that each subpopulation can share or have a different functional process (e.g., recruitment function, size-at-age) than a different category. Spawning time and recruitment time are user-specified and can occur more than once per year. 
For the purposes of replicating model comparison project outputs, in milestone 1, all processes including spawning and recruitment occur January 1, but these should be specified via the spawn_time and recruit_time inputs into FIMS to allow for future flexibility. Spawning and recruitment timing can be input as a scalar or vector to account for multiple options. Within the population loop, matrices denoting population properties at different partitions (age, season, sex) are translated into a single, dimension-folded index. A lookup table is computed at model start so that the dimension-folded index can be mapped to its corresponding population partition or time partition (e.g., population(sex, area, age, species, time, …)) so the programmer can understand what is happening. The model steps through each specified timestep to match the data to expected values, and population processes occur in the closest specified timestep to the user-input process timing (e.g., recruitment) across a small timestep that is a predefined constant. 4.6 Expected numbers and quantities The expected values are calculated as follows in the population.hpp file: \[ B_t=\sum_{a=0}^AN_{a,t}w_a\] where \(B_t\) is total biomass at time \(t\), \(N_{a,t}\) is numbers at age, and \(w_a\) is weight-at-age \(a\) in kilograms. \[N_t=\sum_{a=0}^AN_{a,t}\] where \(N_{a,t}\) is the total number of fish at age \(a\) in time \(t\). Annual unfished numbers at age and unfished spawning biomass are tracked in the model assuming annual recruitment at rzero and only natural mortality. This provides a dynamic reference point that accounts for time varying rzero and M. This does not currently include recruitment deviations. \[UnfishedNAA_{t,0} = R0_{t}\] \[UnfishedNAA_{t,a} = UnfishedNAA_{t-1,a-1}\mathrm{exp}(-M_{t-1,a-1})\] for all \(t>0\); numbers at age at the start of the first year are model parameter inputs. 
\[ UnfishedSSB_t=\sum_{a=0}^AUnfishedNAA_{a,t}w_aFracFemale_aFracMature_a\] All spawning stock biomass values are currently calculated on January 1 each year. This will be updated in future milestones. 4.7 Initial values The initial equilibrium recruitment (\(R_{eq}\)) is calculated as follows: \[R_{eq} = \frac{R_{0}(4h\phi_{F} - (1-h)\phi_{0})}{(5h-1)\phi_{F}} \] where \(\phi_{F}\) is the initial spawning biomass per recruit given the initial fishing mortality \(F\). The initial population structure at the start of the first model year is input as an estimated parameter vector of numbers at age. This allows maximum flexibility for the model to estimate non-equilibrium starting conditions. Future milestones could add an option to input a single F value used to calculate an equilibrium starting structure. 4.8 Likelihood calculations Age composition likelihood links proportions at age from data to model using a multinomial likelihood function. The multinomial and lognormal distributions, including atomic functions, are provided within TMB. Survey index likelihood links estimated CPUE to input data CPUE in biomass using a lognormal distribution. (model.hpp) Catch index likelihood links estimated catch to input data catch in biomass using a lognormal distribution. (model.hpp) Age composition likelihoods link catch-at-age to expected catch-at-age using a multinomial distribution. Recruitment deviations control the difference between expected and realized recruitment, and they follow a lognormal distribution. (recruitment_base.hpp) 4.9 Statistical Inference: TODO: Add description detailing the statistical inference used in M1 "],["user-guide.html", "Chapter 5 User guide 5.1 User Installation Guide 5.2 Installing the package from Github 5.3 Installing from R 5.4 Running the model", " Chapter 5 User guide This section details installation guides for users. See the developer installation guide. 
5.1 User Installation Guide This section describes how to install the FIMS R package and dependencies. 5.2 Installing the package from Github The following software is required: - R version 4.0.0 or newer (or RStudio version 1.2.5042 or newer) - the remotes R package - TMB (install instructions are here) 5.2.1 Windows users Rtools 4.4 (available from here); installing it on NOAA computers (or any without administrative accounts) likely requires IT support 5.3 Installing from R remotes::install_github("NOAA-FIMS/FIMS") library(FIMS) 5.4 Running the model This section describes how to set up and run the model. 5.4.1 Specifying the model 5.4.1.1 Naming conventions TODO: add description and link to naming conventions 5.4.1.2 Structuring data input You can add components to the model using S4 classes. #TODO: add script to demonstrate how to structure data input 5.4.1.3 Defining model specifications #TODO: add scripts detailing how to set up different components of the model 5.4.2 How to run the model #TODO: add script with examples on how to run the model 5.4.3 Extracting model output Here is how you get the model output. #TODO: add code for how to extract model output "],["developer-software-guide.html", "Chapter 6 Developer Software Guide", " Chapter 6 Developer Software Guide This section describes the software you will need to contribute to this project. This is in addition to the software dependencies described in the user installation guide, which you should ensure are installed first. 6.0.1 git You will need git installed locally, and you may prefer to use an additional git GUI client such as GitKraken or GitHub Desktop. If your preferred git client is the RStudio IDE, you can configure Git and RStudio integration following these instructions. To install git, please follow the instructions on this page for your operating system. You can find the downloads for your operating system on the left-hand navigation bar on that page. 
6.0.2 Development environment An integrated development environment is recommended to organize code files, outputs, and build and debug environments. The most popular IDEs on the development team are RStudio and Visual Studio Code. You are welcome to use another IDE or a text-editor based workflow if you strongly prefer that. 6.0.3 vscode setup Please follow the instructions here to set up VS Code for use with R. For those migrating from RStudio to VS Code, this post on migrating to VS Code may be helpful. In addition to those instructions, you may need to install the vscDebugger package here using the command: remotes::install_github("ManuelHentschel/vscDebugger") To improve the plot viewer when creating plots in R, install the httpgd package: install.packages("httpgd") To add syntax highlighting and other features to the R terminal, radian can be installed. Note that Python needs to be installed first in order to download radian. A number of optional settings can be added to the user settings (settings.json) file in VS Code to improve the usability of R. For example, the settings for interacting with R terminals can be adjusted. Here are some that you may want to use with FIMS: { // Associate .RMD files with markdown: "files.associations": { "*.Rmd": "markdown", }, // A cmake setting "cmake.configureOnOpen": true, // Set where the rulers are, needed for Rewrap. 72 is the default we have // decided on for FIMS repositories. "editor.rulers": [ 72 ], // Should the editor suggest inline edits? "editor.inlineSuggest.enabled": true, // Settings for github copilot and which languages to use it with or not.
"github.copilot.enable": { "*": true, "yaml": false, "plaintext": false, "markdown": false, "latex": false, "r": false }, // Setting for sending R code from the editor to the terminal "r.alwaysUseActiveTerminal": true, // Needed to send large chunks of code to the r terminal when using radian "r.bracketedPaste": true, // Needed to use httpgd for plotting in vscode "r.plot.useHttpgd": true, // path to the r terminal (in this case, radian). Necessary to get the terminal to use radian. "r.rterm.windows": "C://Users//my.name//AppData//Local//Programs//Python//Python310//Scripts//radian.exe", //Use this only for Windows // options for the r terminal "r.rterm.option": [ "--no-save", "--no-restore", "max.print=500" ], // Setting for whether to allow linting of documents or not "r.lsp.diagnostics": true, // When looking at diffs under the version control tab, should whitespace be ignored? "diffEditor.ignoreTrimWhitespace": false, // What is the max number of lines that are printed as output to the terminal? "terminal.integrated.scrollback": 10000 } Some suggested R shortcuts could be helpful. To set up C++ with VS Code, instructions are here. Other helpful extensions that can be found in the VS Code marketplace are: - Github Copilot: An AI tool that helps with line completion - Live Share: Collaborate on the same file remotely with other developers - Rewrap: Helps rewrap comments and text lines at a specified character count. Note that to get it working, it will be necessary to add rulers - There are a number of keymap packages that import key mappings from commonly used text editors (e.g., Sublime, Notepad++, atom, etc.). Searching “keymap” in the marketplace should help find these. - GitLens (or GitLess): Adds more Git functionality. Note that some of the GitLens functionality is not free, and GitLess is a fork before the addition of these premium features.
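As one illustrative example of such a shortcut (the key choice and inserted text are hypothetical; adjust to taste), an entry in keybindings.json can insert R's native pipe operator in R files:

```json
[
  {
    "key": "ctrl+shift+m",
    "command": "type",
    "args": { "text": " |> " },
    "when": "editorTextFocus && editorLangId == r"
  }
]
```

The built-in `type` command simply types the given text at the cursor, mirroring the familiar RStudio pipe shortcut.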
Note that the keybindings.json and settings.json could be copied from one computer to another to make it easier to set up VS Code with the settings needed. Note that the settings.json location differs depending on the operating system. Typically, it is good practice to not restore old sessions after shutting down the IDE. To avoid restoring old sessions in the VS Code terminals (including the R terminal), in the Settings User Interface within VS Code (get to this by opening the command palette and searching for Preferences: Open Settings (UI)), under Features > Terminal, uncheck the option “Enable Persistent Sessions.” RStudio addins can be accessed by searching for RStudio addin in the command palette. Clicking on “R: Launch Rstudio Addin” should provide a list of addin options. 6.0.4 C++ compiler Windows users who installed Rtools should have a C++ compiler (gcc) as part of the bundle. To ensure the C++ compiler is on your path, open a command prompt and type gcc. If you get the message below, you are all set: gcc: fatal error: no input files compilation terminated. If not, you will need to check that the compiler is on the path. The easiest way to do so is by creating a text file .Renviron in your Documents folder that contains the following line: PATH="${RTOOLS44_HOME}\\usr\\bin;${PATH}" You can do this with a text editor, or from R like so (note that in R code you need to escape backslashes): write('PATH="${RTOOLS44_HOME}\\\\usr\\\\bin;${PATH}"', file = "~/.Renviron", append = TRUE) Restart R and verify that make can be found; the call below should show the path to your Rtools installation. Sys.which("make") ## "C:\\\\rtools44\\\\usr\\\\bin\\\\make.exe" 6.0.5 GoogleTest You will need to install CMake and Ninja and validate you have the correct setup by following the steps outlined in the test case template.
6.0.6 GDB debugger Windows users who use GoogleTest may need the GDB debugger to see what is going on inside the program while it executes, or what the program is doing at the moment it crashed. rtools44 includes the GDB debugger. The steps below help install the 64-bit version of gdb.exe. Open Command Prompt and type gdb. If you see details of usage, the GDB debugger is already in your PATH. If not, follow the instructions below to install the GDB debugger and add it to your PATH. Install Rtools following the instructions here. Open ~/rtools44/mingw64.exe to run commands in the mingw64 shell. Run the command pacman -Sy mingw-w64-x86_64-gdb to install the 64-bit version (more information can be found in the R on Windows FAQ). Type Y in the mingw64 shell to proceed with the installation. Check whether ~/rtools44/mingw64/bin/gdb.exe exists. Add rtools44 to the PATH; you can check that the path is working by running which gdb in a command window 6.0.7 Doxygen To build the C++ documentation website for FIMS, the documentation generator Doxygen needs to be installed. Doxygen automates the generation of documentation from source code comments. To install Doxygen, please follow the instructions here to install Doxygen on various operating systems. Below are steps to install the 64-bit version of Doxygen 1.11.0 on Windows. Download doxygen-1.11.0.windows.x64.bin.zip and extract the applications to Documents\\Apps\\Doxygen or another preferred folder. Add Doxygen to the PATH by following similar instructions here. Open a command window and run where doxygen to check if Doxygen is added to the PATH. Two commands on the command line are needed to generate the C++ documentation for FIMS locally: cmake -S.
-B build -G Ninja cmake --build build "],["contributor-guidelines.html", "Chapter 7 Contributor Guidelines 7.1 Style Guide 7.2 Naming Conventions 7.3 Coding Good Practices 7.4 Roadmap to FIMS File Structure and Organization 7.5 GitHub Collaborative Environment 7.6 Issue Tracking 7.7 Reporting Bugs 7.8 Suggesting Features 7.9 Branch Workflow 7.10 Code Development 7.11 Commit Messages 7.12 Merge Conflicts 7.13 Pull Requests 7.14 Code Review 7.15 Clean up local branches 7.16 GitHub Actions", " Chapter 7 Contributor Guidelines External contributions and feedback are important to the development and future maintenance of FIMS and are welcome. This section provides guidelines and workflows for FIMS developers and collaborators on how to contribute to the project. 7.1 Style Guide The FIMS project uses style guides to ensure our code is consistent, easy to use (e.g., read, share, and verify), and ultimately easier to write. We use the Google C++ Style Guide and the tidyverse style guide for R code. 7.2 Naming Conventions The FIMS implementation team has chosen to use typename instead of class when defining templates for consistency with the TMB package. While types may be defined in many ways, for consistency developers are asked to use Type instead of T to define types within FIMS. 7.3 Coding Good Practices Following good software development and coding practices simplifies collaboration, improves readability, and streamlines testing and review. The following are industry-accepted standards: Adhere to the FIMS Project style guide Avoid rework - take the time to check for existing options (e.g., in-house, open source, etc.)
before writing code Keep code as simple as possible Use meaningful variable names that are easy to understand and clearly represent the data they store Use descriptive titles and consistent conventions for class and function names Use consistent names for temporary variables that have the same kind of role Add clear and concise coding comments Use consistent formatting and indentation to improve readability and organization Group code into separate blocks for individual tasks Avoid hard-coded values to ensure portability of code Follow the DRY principle - “Don’t Repeat Yourself” (or your code) Avoid deep nesting Limit line length (wrap ~72 characters) Capitalize SQL queries so they are readily distinguishable from table/column names Lint your code 7.4 Roadmap to FIMS File Structure and Organization 7.4.1 Files that go in inst/include 7.4.1.1 common This folder includes files that are shared between the interface, the TMB objective function, and the mathematics and population dynamics components of the package. 7.4.1.2 interface This includes the R interface files. 7.4.1.3 population dynamics There are subfolders underneath this folder that correspond to the different components of the population dynamics model. Each of the modules will need a .hpp file that only consists of #include statements for the files under the subfolders. In the subfolder, there will need to be one file called _base.hpp that defines the base class for the module type. The base class should only need a constructor method and a number of methods (e.g., evaluate()) that are not specific to the type of functions available under the subfolders but are reused for all objects of that class type. 7.4.2 Files that go in src/ 7.4.2.1 FIMS.cpp This is the TMB objective function. 7.5 GitHub Collaborative Environment Communication is managed via the NOAA-FIMS Github organization. New feature requests and bugs should be submitted as issues to the FIMS development repo.
For guidelines on submitting issues, see Issue Tracking. GitProjects TODO: add description * GitHub Teams TODO: add description * All contributors, both internal and external, are required to abide by the Code of Conduct. 7.5.1 FIMS Branching Strategy There are several branching strategies available that will work within the Git environment and other version control systems. However, it is important to find a strategy that works well for both current and future contributors. Branching strategies provide guidance for how, when, and why branches are created and named, which also ties into necessary guidance surrounding issue tracking. The FIMS Project uses a Scaled Trunk Based Development branching strategy to make tasks easier without compromising quality. Scaled Trunk Based Development; image credit: https://reviewpad.com/blog/github-flow-trunk-based-development-and-code-reviews/ This strategy is required for continuous integration and facilitates knowledge of steps that must be taken prior to, during, and after making changes to the code, while still allowing anyone interested in the code to read it at any time. Additionally, trunk-based development captures the following needs without being overly complicated: Short-lived branches to minimize stale code and merge conflicts * Fast release times, especially for bug fixes * Ability to release bug fixes without new features 7.5.2 Branch Protection Branch protection allows for searching branch names with grep functionality to apply merging rules (i.e., protection). This will be helpful to protect the main/trunk branch such that pull requests cannot be merged prior to passing various checks or by individuals without the authority to do so. 7.5.3 GitHub cloning and branching For contributors with write access to the FIMS repo, changes should be made on a feature branch after cloning the repo.
The FIMS repo can be cloned to a local machine by running the following on the command line: git clone https://github.com/NOAA-FIMS/FIMS.git 7.5.4 Outside collaborators and forks Outside collaborators without write access to the FIMS repos will be required to fork the repository, make changes, and submit a pull request. Forks are discouraged for everyday development because it becomes difficult to keep track of all of the forks. Thus, it will be important for those working on forks to be active in the issue tracker in the main repository prior to working on their fork — just like any member of the organization would do if they were working within the organization. Knowledge of future projects, ideas, concerns, etc. should always be documented in an issue before the code is altered. Pull requests from forks will be reviewed the same as a pull request submitted from a branch. Users will need to conform to the same standards and all contributions must pass the standard tests as well as have tests that check the new feature. To fork and then clone a repository, follow the Github Documentation for forking a repo. Once cloned, changes can be made on a feature branch. When ready to submit changes, follow the Github Documentation on creating a pull request from a fork. 7.6 Issue Tracking Use of the GitHub issue tracker is key to keeping everyone informed and prioritizing key tasks. All future projects, ideas, concerns, development, etc. must be documented in an issue before the code is altered. Issues should be filed and tagged prior to any code changes whether the change pertains to a bug or the development of a feature. Issues are automatically tagged with the status: triage_needed tag and placed on the Issue Triage Board. Issues will subsequently be labeled and given an assignee and milestone by whoever is in charge of the Triage Board. 7.7 Reporting Bugs This section guides you through submitting a bug report for any toolbox tool.
Following these guidelines helps maintainers and the community understand your report, reproduce the behavior, and find related reports. 7.7.0.1 Before Submitting A Bug Report Check if it is related to your R version. We recommend using sessionInfo() within your R console and submitting the results in your bug report. Also, please check your R version against the required R version in the DESCRIPTION file and update if needed to see if that fixes the issue. Perform a cursory search of issues to see if the problem has already been reported. If it has and the issue is still open, add a comment to the existing issue instead of opening a new one. If it has and the issue is closed, open a new issue and include a link to the original issue in the body of your new one. 7.7.0.2 How Do I Submit A (Good) Bug Report? Bugs are tracked as GitHub issues. Create an issue on the toolbox Github repository and provide the following information by following the steps outlined in the reprex package. Explain the problem and include additional details to help maintainers reproduce the problem using the Bug Report issue template. Provide more context by answering these questions: Did the problem start happening recently (e.g., after updating to a new version of R) or was this always a problem? If the problem started happening recently, can you reproduce the problem in an older version of R? What’s the most recent version in which the problem doesn’t happen? Can you reliably reproduce the issue? If not, provide details about how often the problem happens and under which conditions it normally happens. If the problem is related to working with files (e.g., reading in data files), does the problem happen for all files and projects or only some? Does the problem happen only when working with local or remote files (e.g., on network drives), with files of a specific type (e.g., only JavaScript or Python files), with large files or files with very long lines, or with files in a specific encoding?
Is there anything else special about the files you are using? Include details about your configuration and environment: Which version of the tool are you using? What’s the name and version of the OS you’re using? Which packages do you have installed? You can get that list by running sessionInfo(). 7.8 Suggesting Features This section guides you through submitting a feature suggestion for toolbox packages, including completely new features and minor improvements to existing functionality. Following these guidelines helps maintainers and the community understand your suggestion and find related suggestions. Before creating enhancement suggestions, please check the issues list as you might find out that you don’t need to create one. When you are creating an enhancement suggestion, please include an “enhancement” tag in the issues. 7.8.0.1 Before Submitting A Feature Suggestion Check that you have the latest version of the package. Check if the development branch has that enhancement in the works. Perform a cursory search of the issues and enhancement tags to see if the enhancement has already been suggested. If it has, add a comment to the existing issue instead of opening a new one. 7.8.0.2 How Do I Submit A (Good) Feature Suggestion? Feature suggestions are tracked as GitHub issues. Create an issue on the repository and use the Feature Request issue template. 7.8.1 Issue Labels Utilize labels on issues: To describe the kind of work to be done: bug, enhancement, task, discussion, question, suitable for beginners To indicate the state of the issue: urgent, current, next, eventually, won’t fix, duplicate 7.8.2 Issue Templates Templates are available and stored within each repository to guide users through the process of submitting a new issue. Example templates for issues can be found on GitHub Docs. Use these references and existing templates stored in .github/ISSUE_TEMPLATE for reference when creating a new template.
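As a purely illustrative sketch of the format such templates use (the field values here are invented, not an official FIMS template), a minimal markdown issue template in .github/ISSUE_TEMPLATE uses YAML front matter followed by prompts for the reporter:

```markdown
---
name: Bug report
about: Report a problem so maintainers can reproduce and fix it
labels: bug
---

**Describe the bug**
A clear, concise description of the problem.

**To reproduce**
A minimal reproducible example (e.g., built with the reprex package)
and the output of `sessionInfo()`.

**Expected behavior**
What you expected to happen instead.
```

The `name`, `about`, and `labels` fields control how the template appears in GitHub's "New issue" chooser and which labels are applied automatically.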
7.9 Branch Workflow This section details the workflow to create a branch in order to contribute to FIMS. 7.9.1 Branching Good Practices The following suggestions will help ensure optimal performance of the trunk-based branching strategy: Branches and commits should be kept small (e.g., a couple commits, a few lines of code) to allow for rapid merges and deployments. Use feature flags to wrap new changes in an inactive code path for later activation (rather than creating a separate repository feature branch). Delete branches after they are merged to the trunk; avoid repositories with a large number of “active” branches. Merge branches to the trunk frequently (e.g., at least every few days; tag as a release commit) to avoid merge conflicts. Use caching layers where appropriate to optimize build and test execution times. 7.9.2 Branch Naming Conventions Example: R-pkg-skeleton Keep it brief Use hyphens as separators 7.9.3 git workflow Use the following commands to create a branch: $ git checkout -b <branchname> main #creates a local branch $ git push origin <branchname> #pushes branch back to GitHub Periodically merge changes from main into branch $ git merge main #merges changes from main into branch While editing code, commit regularly following commit message guidelines $ git add <filename> #stages file for commit $ git commit -m "Commit Message" #commits changes To push changes to GitHub, first set the upstream location: $ git push --set-upstream origin <branchname> #pushes change to feature branch on GitHub After which, changes can be pushed as: $ git push #pushes change to feature branch on GitHub When finished, create a pull request to the main branch following pull request guidelines 7.10 Code Development Code is written following the Style Guide, FIMS Naming Conventions, and Coding Good Practices. 7.11 Commit Messages FIMS Project contributors should provide clear, descriptive commit messages to communicate to collaborators details about changes that have
occurred and improve team efficiency. Good commit messages follow these practices: Include a short summary of the change for the subject/title (<50 characters) Include a blank line in between the ‘subject’ and ‘body’ Specify the type of commit: * fix: bug fix * feat: new feature * test: testing * docs: documentation * chore: regular code maintenance (e.g., updating dependencies) * refactor: refactoring codebase * style: changes that do not affect the meaning of the code; instead address code styling/formatting * perf: performance improvements * revert: reverts a previous commit * build: changes that affect the build system If the commit addresses an issue, indicate the issue# in the title Provide a brief explanatory description of the change, addressing what was changed and why Wrap to ~72 characters Write in the imperative (e.g., “Fix bug”, not “Fixed bug”) If necessary, separate paragraphs by blank lines Utilize BREAKING CHANGE: <description> to provide explanation or further context about the issue being addressed. If the commit closes an issue, include a footer to note that (i.e., “Closes #19”) 7.12 Merge Conflicts 7.12.1 What is a merge conflict? A merge conflict happens when changes have occurred to the same piece of code on the two branches being merged. This means Git cannot automatically determine which version of the change should be kept. Most merge conflicts are small and easy to figure out. See the Github Documentation on merge conflicts for more information. 7.12.2 How to prevent merge conflicts Merge in small changes often rather than making many changes on a branch that is kept separate from the main branch for a long time. Avoid refactoring the same piece of code in different ways on separate branches. Avoid working in the same files on separate branches. 7.12.3 How to resolve merge conflicts Merge conflicts can be resolved on Github or locally using Git. An additional helpful resource is this guide to merge conflicts.
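When resolving locally, the sequence is: attempt the merge, edit the conflicted file to the content you want, stage it, and commit. The self-contained sketch below (a throwaway repository; all file names, identities, and messages are invented) creates a conflict and resolves it:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo
git config user.email "dev@example.com"
git config user.name "Dev"

# Shared starting point.
echo "original" > notes.txt
git add notes.txt && git commit -qm "feat: add notes"

# Edit the same line on a feature branch...
git checkout -qb feature
echo "feature edit" > notes.txt
git commit -qam "docs: edit notes on feature branch"

# ...and differently on the original branch.
git checkout -q -
echo "trunk edit" > notes.txt
git commit -qam "docs: edit notes on trunk"

# Merging now conflicts; git marks notes.txt with conflict markers.
git merge feature || echo "conflict detected"

# Resolve by writing the content you want, then stage and commit.
printf "trunk edit\nfeature edit\n" > notes.txt
git add notes.txt
git commit -qm "fix: resolve conflicting edits to notes.txt"
```

Committing after staging the resolved file completes the merge and records a merge commit.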
7.13 Pull Requests Once development of a module is complete, the contributor must initiate a pull request. Github will automatically start an independent review process before the branch can be merged back into the main development branch. Pull requests are used to identify changes pushed to development branches. Open pull requests allow the FIMS Development Team to discuss and review the changes, as well as add follow-up commits before merging to the main branch. As noted in the branching strategy section, branches, commits, and pull requests should be kept small to enable rapid review and reduce the chance of merge conflicts. Any pull requests for the FIMS Project must be fully tested and reviewed before being merged into the main branch. Use the pull request template to create pull requests. Pull requests without this template attached will not be approved. 7.14 Code Review Code review ensures the health and continuous improvement of the FIMS codebase, while simultaneously helping FIMS developers become familiar with the codebase and ensuring there is a diverse team of knowledgeable collaborators to support the continued development and maintenance of FIMS. CI/CD requires rapid review of all new/modified code, so processes must be in place to support this pace. FIMS code review will utilize tools available via GitHub, which allows reviewers to analyze code changes, provide inline comments, and view change histories. The Google code review developer guide provides a useful set of guidelines for both reviewers and code authors. Below is a flowchart for the FIMS code review process. The author starts by submitting a pull request (PR), ensuring documentation, tests, and CI checks are complete, then proposes a reviewer. The reviewer receives the review request and either executes the review independently or pairs with another team representative if assistance is needed. Based on the review, changes may be requested, which the author must address before approval.
Once the PR is approved, the author merges it into the main branch. 7.14.1 Assigning Reviewers Reviewers of PRs for changes to the codebase in FIMS should be suggested by the author of the PR. For those FIMS Implementation Team Members that keep their status in Github current (see “Setting a status” for more information), authors can use the status information to prevent assigning a reviewer who is known to be “Busy”. If a review has been assigned to you and you don’t feel like you have the expertise to address it properly, please respond directly to the PR so a different reviewer can be found promptly. 7.14.2 Automated Testing Automated testing provides an initial layer of quality assurance and lets reviewers know that the code meets certain standards. For more on FIMS testing, see Testing and GitHub Actions. 7.14.3 Review Checklist While automated testing can assure the code structure and logic pass quality checks, human reviewers are required to evaluate things like functionality, readability, etc. Every pull request is accompanied by an automatically generated checklist of major considerations for code reviews; additional guidance is provided below for reviewers to evaluate when providing feedback on code: Design (Is the code in the proper location? Are files organized intuitively? Are components divided up in a sensible way? Does the pull request include an appropriate number of changes, or would the code changes be better broken into more focused parts? Is the code focused on only requirements within the current scope? Does the code follow object-oriented design principles? Will changes be easy to maintain? Is the code more complex than it needs to be?) Functionality (Does the code function as it is expected to? Are changes, including to the user interface (if applicable), good for users? Does parallel computing remain functional? How will the change impact other parts of the system? Are there any unhandled edge cases?
Are there other code improvements possible?) Testing (Does the code have appropriate unit tests? Are tests well-designed? Have dependencies been appropriately tested? Does automated testing cover the code change adequately? Could the test structure be improved?) Readability (Is the code and data flow easy to understand? Are there any parts of the code that are confusing or commented out? Are names clear? Does the code include any errors, repeats, or incomplete sections? Does the code adhere to the FIMS Style Guide?) Documentation (Are there clear and useful comments explaining why the code has been implemented as it has been? Is the code appropriately documented (doxygen and roxygen)? Is the README file complete and current, and does it adequately describe the project/changes?) Security (Does using this code open the software to possible security violations or vulnerabilities?) Performance (Are there ways to improve on the code’s performance? Is there any complex logic that could be simplified? Could any of the code be replaced with built-in functions? Will this change have any impacts on system performance? Is there any debugging code that could be removed? Are there any optimizations that could be removed and still maintain system performance?)
Try to follow these suggestions: Review in short sessions (< 60 minutes) to maintain focus and attention to detail Don’t try to review more than 400 lines of code in a single session Provide constructive and supportive feedback Ask open-ended questions and offer alternatives or possible workarounds Avoid strong/opinionated statements Applaud good solutions Don’t say “you” Be clear about which questions/comments are non-blocking or unimportant; likewise, be explicit when approving a change or requesting follow-up Aim to minimize the number of nitpicks (if there are a lot, suggest a team-level resolution) Use the FIMS Style Guide to settle any style arguments 7.15 Clean up local branches If a code reviewer approves the pull request, FIMS workflow managers will merge the feature/bug branch back into the main repository and delete the branch. At this stage, the contributor should also delete the branch from the local repository using the following commands: $ git checkout main #switches back to main branch $ git branch -d <branchname> #deletes branch from local repository 7.16 GitHub Actions FIMS uses GitHub Actions to automate routine tasks. These tasks include: Backup checks for developers Routine GitHub workflow tasks (not important for developers to monitor) Currently, the GitHub Actions in the FIMS repository include: GitHub Action Name Description Type Runs a Check on PRs?
Runs on: call-r-cmd-check Runs R CMD Check Backup Check Yes Push to any branch run-clang-tidy Checks for C++ code Backup Check Yes Push to any branch run-googletest Runs the GoogleTest C++ unit tests Backup Check Yes Push to any branch run-doxygen Builds the C++ documentation Backup Check No Push to main branch run-clang-format Styles C++ code Routine workflow task No Push to main branch call-doc-and-style-r Documents and styles R code Routine workflow task No Push to main branch pr-checklist Generates a checklist as a comment for reviewers on PRs Routine workflow task No Opening a PR YAML files in a subdirectory of the FIMS repository specify the setup for the GitHub Actions. Some of the actions depend on reusable workflows available in {ghactions4r}. Runs of the GitHub Actions can be viewed by navigating to the Actions tab of the FIMS repository. The status of GitHub Action runs can also be viewed on pull requests or next to commits throughout the FIMS repository. 7.16.1 Details on Backup Checks Developers must make sure that the checks on their pull requests pass, as typically changes will not be merged into the main branch until all GitHub Actions are passing (the exception is if there are known reasons for the GitHub Actions to fail that are not related to the pull request). Other responsibilities of developers are listed in the Code Development section. Additional details about the backup check GitHub Actions: call-r-cmd-check runs R CMD Check on the FIMS package using the current version of R. Three runs occur simultaneously, on three operating systems: Windows, Linux (Ubuntu), and OSX. R CMD Check ensures that the FIMS package can be downloaded without error. An error means that the package cannot be downloaded successfully on the operating system for the run that failed. Developers should investigate the failing runs and make fixes. To replicate the GitHub Actions workflow locally, use devtools::check(). run-clang-tidy runs checks while compiling the C++ code.
If this run fails, fixes need to be made to the C++ code to address the issue identified. run-googletest Runs the GoogleTest C++ unit tests and benchmarking. If this run fails, then fixes need to be made to the C++ code and/or the GoogleTest C++ unit tests. To replicate this GitHub Actions workflow locally, follow the instructions in the testing section. 7.16.2 Debugging Broken Runs GitHub Actions can fail for many reasons, so debugging is necessary to find the cause of the failing run. Some steps that can help with debugging are: Ask for help as needed! Some members of the FIMS team who have experience debugging GitHub Actions are Bai, Kathryn, and Ian. Investigate why the run failed by looking in the log. Try to replicate the problem locally. For example, if the call-r-cmd-check run fails during the testthat tests, try running the testthat tests locally (e.g., using devtools::test()). If the problem can be replicated, try to fix locally by fixing one test or issue at a time. Then push the changes up to GitHub and monitor the new Github Action run. If the problem cannot be replicated locally, it could be an operating-system-specific issue; for example, if using Windows locally, it may be an issue specific to Mac or Linux. Sometimes, runs may fail because a particular dependency wasn’t available at the exact point in time needed for the run (e.g., maybe R didn’t install because the R executable couldn’t be downloaded); if that is the case, wait a few hours to a day and try to rerun. If it continues to fail for more than a day, a change in the GitHub Action YAML file may be needed. 7.16.3 How do I request a new Github Action workflow? Routine actions and checks should be captured in a GitHub Action workflow in order to improve efficiency of the development process and/or improve automated checks on the FIMS codebase. New GitHub Action workflows can be requested by opening an issue in the FIMS repository.
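For reference, a caller workflow of the kind described above is only a few lines of YAML. The sketch below is hypothetical — the reusable workflow path and ref are assumptions that must be checked against the {ghactions4r} documentation before use:

```yaml
# .github/workflows/call-r-cmd-check.yml (illustrative sketch only)
name: call-r-cmd-check
on:
  push:
  pull_request:
jobs:
  call-workflow:
    # The path to the reusable workflow is an assumption; verify it
    # against the {ghactions4r} repository.
    uses: nmfs-fish-tools/ghactions4r/.github/workflows/call-r-cmd-check.yml@main
```

Because the heavy lifting lives in the reusable workflow, updates to the check logic happen upstream without edits to each caller file.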
"],["hpp-template-for-c-modules.html", "Chapter 8 .hpp template for C++ modules", " Chapter 8 .hpp template for C++ modules In this section we will describe how to structure a new .hpp file in FIMS. // tmplate.hpp // Fisheries Integrated Modeling System (FIMS) //define the header gaurd #ifndef template_hpp #define template_hpp //inherit from model_base #include "../common.hpp" #include <iostream> /** * In this example, we utilize the concept of inheritence and * polymorphism (https://www.geeksforgeeks.org/polymorphism-in-c/). All * classes inherit from model_base. Name1 and Name2 inherit from NameBase. * Classes Name1 and Name2 must implement they're own version of * "virtual T evaluate(const T& t)", which will have unique logic. */ /* * fims namespace */ namespace fims{ /** * NameBase class. Inherits from model_base. */ template <class T> class NameBase: public model_base<T>{ //note that model_base gets template parameter T. protected: public: virtual T Evaluate(const T& t)=0; //"= 0;" means this must be implemented in child. }; /* * Template class inherits from NameBase */ template <class T> class Name1: public NameBase<T>{ public: /* *Default constructor *Initialize any memory here. */ Name1(){ } /** * Destructor; this method destructs Name1 object. * Delete any allocated memory here. */ ~ Name1(){ std::cout <<"I just deleted Name1 object" << std::endl; } /** * Note: this function must have the same signature as evaluate in NameBase. * Overloaded virtual function. This is polymorphism, meaning the * signature has the same appearance, but the function itself has unique logic. * * @param t * @return t+1 */ virtual T Evaluate(const T& t) { std::cout<<"evaluate in Name1 received "<<t<< "as a method parameter, returning "<<(t+1)<<std::endl; return t+1; //unique logic for Name1 class } }; /* * Template class inherits from NameBase */ template <class T> class Name2: public NameBase<T>{ public: /* *Default constructor. *Initialize any memory here. 
*/ Name2(){ } /** * Destructor; this method destructs the Name2 object. * Delete any allocated memory here. */ ~Name2(){ std::cout <<"I just deleted Name2 object" << std::endl; } /** * Note: this function must have the same signature as Evaluate in NameBase. * Overridden virtual function. This is polymorphism, meaning the * signature has the same appearance, but the function itself has unique logic. * * @param t * @return t^2 */ virtual T Evaluate(const T& t) { std::cout<<"evaluate in Name2 received "<<t<<" as a method parameter, returning "<<(t*t)<<std::endl; return t*t; //unique logic for Name2 class } }; /** * Add additional implementations below. */ } //end namespace /** * Example usage: * * int main(int argc, char** argv){ * NameBase<double>* name = NULL; //pointer to a NameBase object * Name1<double> n1; //inherits from NameBase * Name2<double> n2; //inherits from NameBase * * name = &n1; //name now points to n1 * name->Evaluate(2.0); //unique logic for n1 * * name = &n2; //name now points to n2 * name->Evaluate(2.0); //unique logic for n2 * } * * Output: * evaluate in Name1 received 2 as a method parameter, returning 3 * evaluate in Name2 received 2 as a method parameter, returning 4 * */ #endif /*template_hpp */ "],["documentation-template.html", "Chapter 9 Documentation Template 9.1 Writing function reference 9.2 Writing a vignette 9.3 Step by step documentation update process", " Chapter 9 Documentation Template In this section we will describe how to document your code. For more information about code documentation in general, please see the toolbox blog post on code documentation. This post describes the differences between the types of documentation, while below we give specific, brief instructions on developer responsibilities for FIMS. 9.1 Writing function reference Function reference can be written inline in comments above the function in either C++ or R.
The tools you can use to generate reference from comments are called Doxygen and Roxygen for C++ and R, respectively. Both can include LaTeX syntax to denote equations, and both use @ tags to name components of the function reference: /** * @brief This function calculates the von Bertalanffy growth curve. * \\f$ * * length\\_at\\_age = lmin + (lmax - lmin)*\\frac{1.0 - c^{(age - a\\_min)}}{1.0 - c^{(a\\_max - a\\_min)}} * * \\f$ * * @param age * @param sex * @return length\\_at\\_age */ The only difference between the syntax for R and C++ code is how comments are denoted in each language. #' This function calculates the von Bertalanffy growth curve. #' #' @param age #' @param sex #' @return length_at_age You should, at minimum, include the tags @param, @return, and @examples in your function reference if it is an exported function. Functions that are only called internally do not require an @examples tag. Other useful tags include @seealso and @export for Roxygen chunks. 9.2 Writing a vignette If this is an exported function, a vignette can help users learn how to use your function. For now, a rough approximation of the “get started” vignette is written in the software user guide page of this book. If you include a vignette for your function, you can link to it in the Roxygen documentation with the following code. #' \\code{vignette("help", package = "mypkg")} 9.3 Step by step documentation update process Write the function reference in either R or C++ as described above. Review the software user guide and confirm that any changes you have made to the code are reflected in the code snippets on that page. Push to the feature branch. Ensure that the documentation created by the automated workflow is correct and that any test cases execute successfully before merging into main.
"],["testing.html", "Chapter 10 Testing 10.1 Introduction 10.2 C++ unit testing and benchmarking 10.3 Templates for GoogleTest testing 10.4 R testing 10.5 Test case documentation template and examples", " Chapter 10 Testing This section describes testing for FIMS. FIMS uses Google Test for C++ unit testing and testthat for R unit testing. 10.1 Introduction FIMS testing framework will include different types of testing to make sure that changes to FIMS code are working as expected. The unit and functional tests will be developed during the initial development stage when writing individual functions or modules. After completing development of multiple modules, integration testing will be developed to verify that different modules work well together. Checks will be added in the software to catch user input errors when conducting run-time testing. Regression testing and platform compatibility testing will be executed before pre-releasing FIMS. Beta-testing will be used to gather feedback from users (i.e., members of FIMS implementation team and other users) during the pre-release stage. After releasing the first version of FIMS, the development team will go back to the beginning of the testing cycle and write unit tests when a new feature needs to be implemented. One-off testing will be used for testing new features and fixing user-reported bugs when maintaining FIMS. More details of each type of test can be found in the Glossary section. FIMS will use GoogleTest to build a C++ unit testing framework and R testthat to build an R testing framework. FIMS will use Google Benchmark to measure the real time and CPU time used for running the produced binaries. 10.2 C++ unit testing and benchmarking 10.2.1 Requirements To use GoogleTest, you will need: A compatible operating system (e.g., Windows, masOS, or Linux). A C++ compiler that supports at least C++ 11 standard or newer (e.g. gcc 5.0+, clang 5.0+, or MSVC 2015+). For macOS users, Xcode 9.3+ provides clang 5.0. 
For R users, rtools4 includes gcc. A build system for building the testing project. CMake and a compatible build tool such as Ninja are approved by NMFS HQ. 10.2.2 Setup for Windows users Download CMake 3.22.1 (cmake-3.22.1-windows-x86_64.zip) and unzip the folder to Documents\\Apps or another preferred folder. Download ninja v1.10.2 (ninja-win.zip) and put the executable in Documents\\Apps or another preferred folder. Open your Command Prompt and type cmake. If you see details of usage, cmake is already in your PATH. If not, follow the instructions below to add cmake to your PATH. In the same command prompt, type ninja. If you see a message that starts with ninja:, even if it is an error about not finding build.ninja, this means that ninja is already in your PATH. If ninja is not found, follow the instructions below to add ninja to your PATH. 10.2.3 Adding cmake and ninja to your PATH on Windows In the Windows search bar next to the start menu, search for Edit environment variables for your account and open the Environment Variables window. Click Edit... under the User variables for firstname.lastname section. Click New, add the path to cmake, if needed (e.g., cmake-3.22.1-windows-x86_64\\bin or C:\\Program Files\\CMake\\bin are common paths), and click OK. Click New, add the path to the location of the Ninja executable, if needed (e.g., Documents\\Apps\\ninja-win or C:\\Program Files\\ninja-win), and click OK. You may need to restart your computer to update the environment variables. You can check that the path is working by running where cmake or where ninja in a command terminal. Note that in certain Fisheries centers, NOAA employees do not have administrative privileges enabled to edit the local environment PATH. In this situation, it is necessary to create a ticket with IT to add cmake and ninja to your PATH on Windows.
You can check that the path is working by running which cmake in a command window. Download ninja v1.10.2 (use the archive for your platform, e.g., ninja-linux.zip or ninja-mac.zip) and put the binary in your preferred location. Add Ninja to your PATH. You can check that the path is working by running which ninja in a command window. Open a command window and type cmake. If you see usage, cmake is found. If not, cmake may still need to be added to your PATH. Open a command window and type ninja. If you see a message starting with ninja:, ninja is found. Otherwise, try changing the permissions or adding it to your PATH. 10.2.5 How to edit your PATH and change file permissions for Linux and Mac To check if the binary is in your path, assuming the binary is named ninja: open a Terminal window, type which ninja, and hit enter. If you get nothing returned, then ninja is not in your path. The easiest way to fix this is to move the ninja binary to a folder that’s already in your path. To find existing path folders, type echo $PATH in the terminal and hit enter. Now move the ninja binary to one of these folders. For example, in a Terminal window type: sudo cp ~/Downloads/ninja /usr/bin/ to move ninja from the downloads folder to /usr/bin. You will need to use sudo and enter your password to have permission to move a file to a folder like /usr/bin/. Also note that you may need to add executable permissions to the ninja binary after downloading it. You can do that by switching to the folder where you placed the binary (cd /usr/bin/ if you followed the instructions above), and running the command: sudo chmod +x ninja Check that ninja is now executable and in your path: which ninja If you followed the instructions above, you will see the following line returned: /usr/bin/ninja 10.2.6 Set up FIMS testing project Clone the FIMS repository on the command line using: git clone https://github.com/NOAA-FIMS/FIMS.git cd FIMS There is a file called CMakeLists.txt in the top level of the directory.
This file instructs CMake on how to create the build files, including setting up Google Test. The Google Test testing code is in the tests/gtest subdirectory. Within this subdirectory is a file called CMakeLists.txt. This file contains additional specifications for CMake, in particular, instructions on how to register the individual tests. 10.2.7 Build and run the tests Three commands on the command line are needed to build the tests: cmake -S . -B build -G Ninja This generates the build system using Ninja as the generator. Note there is now a subfolder called build. Next, in the same command window, use cmake to build in the build subfolder: cmake --build build Finally, run the C++ tests: ctest --test-dir build The output from running the tests should look something like: Internal ctest changing into directory: C:/github_repos/NOAA-FIMS_org/FIMS/build Test project C:/github_repos/NOAA-FIMS_org/FIMS/build Start 1: dlognorm.use_double_inputs 1/5 Test #1: dlognorm.use_double_inputs ....... Passed 0.04 sec Start 2: dlognorm.use_int_inputs 2/5 Test #2: dlognorm.use_int_inputs .......... Passed 0.04 sec Start 3: modelTest.eta 3/5 Test #3: modelTest.eta .................... Passed 0.04 sec Start 4: modelTest.nll 4/5 Test #4: modelTest.nll .................... Passed 0.04 sec Start 5: modelTest.evaluate 5/5 Test #5: modelTest.evaluate ...............
Passed 0.04 sec 100% tests passed, 0 tests failed out of 5 10.2.8 Adding a C++ test Create a file dlognorm.hpp within the src subfolder that contains a simple function: #include <cmath> template<class Type> Type dlognorm(Type x, Type meanlog, Type sdlog){ Type resid = (log(x)-meanlog)/sdlog; Type logres = -log(sqrt(2*M_PI)) - log(sdlog) - Type(0.5)*resid*resid - log(x); return logres; } Then, create a test file dlognorm-unit.cpp in the tests/gtest subfolder that has a test suite for the dlognorm function: #include "gtest/gtest.h" #include "../../src/dlognorm.hpp" // # R code that generates true values for the test // dlnorm(1.0, 0.0, 1.0, TRUE) = -0.9189385 // dlnorm(5.0, 10.0, 2.5, TRUE) = -9.07679 namespace { // TestSuiteName: dlognormTest; TestName: DoubleInput and IntInput // Test dlognorm with double input values TEST(dlognormTest, DoubleInput) { EXPECT_NEAR( dlognorm(1.0, 0.0, 1.0) , -0.9189385 , 0.0001 ); EXPECT_NEAR( dlognorm(5.0, 10.0, 2.5) , -9.07679 , 0.0001 ); } // Test dlognorm with integer input values TEST(dlognormTest, IntInput) { EXPECT_NEAR( dlognorm(1, 0, 1) , -0.9189385 , 0.0001 ); } } EXPECT_NEAR(val1, val2, absolute_error) verifies that the difference between val1 and val2 does not exceed the absolute error bound absolute_error. EXPECT_NE(val1, val2) verifies that val1 is not equal to val2. Please see GoogleTest assertions reference for more EXPECT_ macros. 10.2.9 Add tests to tests/gtest/CMakeLists.txt and run a binary To build the code, add the following contents to the end of the tests/gtest/CMakeLists.txt file: add_executable(dlognorm_test dlognorm-unit.cpp ) target_include_directories(dlognorm_test PUBLIC ${CMAKE_SOURCE_DIR}/../ ) target_link_libraries(dlognorm_test gtest_main ) include(GoogleTest) gtest_discover_tests(dlognorm_test) The above configuration enables testing in CMake, declares the C++ test binary you want to build (dlognorm_test), and links it to GoogleTest (gtest_main). Now you can build and run your test. 
Open a command window in the FIMS repo (if not already opened) and type: cmake -S . -B build -G Ninja This generates the build system using Ninja as the generator. Next, in the same command window, use cmake to build: cmake --build build Finally, run the tests in the same command window: ctest --test-dir build The output when running ctest might look like this. Note there is a failing test: Internal ctest changing into directory: C:/Users/Kathryn.Doering/Documents/testing/FIMS/build Test project C:/Users/Kathryn.Doering/Documents/testing/FIMS/build Start 1: dlognorm.use_double_inputs 1/7 Test #1: dlognorm.use_double_inputs ....... Passed 0.04 sec Start 2: dlognorm.use_int_inputs 2/7 Test #2: dlognorm.use_int_inputs .......... Passed 0.04 sec Start 3: modelTest.eta 3/7 Test #3: modelTest.eta .................... Passed 0.04 sec Start 4: modelTest.nll 4/7 Test #4: modelTest.nll .................... Passed 0.04 sec Start 5: modelTest.evaluate 5/7 Test #5: modelTest.evaluate ............... Passed 0.04 sec Start 6: dlognormTest.DoubleInput 6/7 Test #6: dlognormTest.DoubleInput ......... Passed 0.04 sec Start 7: dlognormTest.IntInput 7/7 Test #7: dlognormTest.IntInput ............***Failed 0.04 sec 86% tests passed, 1 tests failed out of 7 Total Test time (real) = 0.28 sec The following tests FAILED: 7 - dlognormTest.IntInput (Failed) Errors while running CTest Output from these tests are in: C:/Users/Kathryn.Doering/Documents/testing/FIMS/build/Testing/Temporary/LastTest.log Use "--rerun-failed --output-on-failure" to re-run the failed cases verbosely. 10.2.10 Debugging a C++ test There are two ways to debug a C++ test, interactively using gdb or via print statements. To use gdb, make sure it is installed and on your path. Debug C++ code (e.g., segmentation error/memory corruption) using gdb: cmake -S . 
-B build -G Ninja -DCMAKE_BUILD_TYPE=Debug cmake --build build --parallel 16 ctest --test-dir build --parallel 16 gdb ./build/tests/gtest/population_dynamics_population.exe c // to continue without paging run // to see which line of code is broken print this->log_naa // for example, print this->log_naa to see the value of log_naa; print i // for example, print i from the broken for loop bt // backtrace q // to quit Debug C++ code without using gdb: Update code in a .hpp file by calling std::ofstream out("file_name.txt"). Then use out << variable; to print out values of the variable. nfleets = fleets.size(); std::ofstream out("debug.txt"); out <<nfleets; More complex examples with text identifying the quantities: out <<" fleet_index: "<<fleet_index<<" index_yaf: "<<index_yaf<<" index_yf: "<<index_yf<<"\\n"; out <<" population.Fmort[index_yf]: "<<population.Fmort[index_yf]<<"\\n"; Then rebuild and rerun the tests (e.g., in Git Bash): cmake -S . -B build -G Ninja cmake --build build --parallel 16 ctest --test-dir build --parallel 16 The output of the print statements will be in this test file: FIMS/build/tests/gtest/debug.txt 10.2.11 Benchmark example Google Benchmark measures the real time and CPU time used for running the produced binary. We will continue using the dlognorm.hpp example. Create a benchmark file dlognorm_benchmark.cpp and put it in the tests/gtest subfolder: #include "benchmark/benchmark.h" #include "../../src/dlognorm.hpp" void BM_dlgnorm(benchmark::State& state) { for (auto _ : state) dlognorm(5.0, 10.0, 2.5); } BENCHMARK(BM_dlgnorm); This file runs the dlognorm function and uses BENCHMARK to see how long it takes. A more comprehensive feature overview of benchmarking is available in the Google Benchmark GitHub repository.
10.2.12 Add benchmarks to tests/gtest/CMakeLists.txt and run the benchmark To build the code, add the following contents to the end of your tests/gtest/CMakeLists.txt file: FetchContent_Declare( googlebenchmark URL https://github.com/google/benchmark/archive/refs/tags/v1.6.0.zip ) FetchContent_MakeAvailable(googlebenchmark) add_executable(dlognorm_benchmark dlognorm_benchmark.cpp ) target_include_directories(dlognorm_benchmark PUBLIC ${CMAKE_SOURCE_DIR}/../ ) target_link_libraries(dlognorm_benchmark benchmark_main ) To run the benchmark, open the command line in the FIMS repo (if not already open) and run cmake, sending output to the build subfolder: cmake --build build Then run the dlognorm_benchmark executable created: build/tests/gtest/dlognorm_benchmark.exe The output from dlognorm_benchmark.exe might look like this: Run on (8 X 2112 MHz CPU s) CPU Caches: L1 Data 32 KiB (x4) L1 Instruction 32 KiB (x4) L2 Unified 256 KiB (x4) L3 Unified 8192 KiB (x1) ***WARNING*** Library was built as DEBUG. Timings may be affected. ----------------------------------------------------- Benchmark Time CPU Iterations ----------------------------------------------------- BM_dlgnorm 153 ns 153 ns 4480000 10.2.12.1 Remove files produced by this example If you don’t want to keep any of the files produced by this example and want to completely clear any uncommitted changes and files from the git repo, use git restore . to get rid of uncommitted changes in git-tracked files. To get rid of all untracked files in the repo, use: git clean -fd 10.2.13 Clean up after running C++ tests 10.2.13.1 Clean up CMake-generated files and re-run tests After running the examples above, the build generates files (i.e., the source code, libraries, and executables) and saves the files in the build subfolder. The example above demonstrates an “out-of-source” build which puts generated files in a completely separate directory, so that the source tree is unchanged after running tests.
Using a separate source and build tree reduces the need to delete files that differ between builds. If you still would like to delete CMake-generated files, just delete the build folder, and then build and run tests by repeating the commands below. The files from the build folder are included in the FIMS repository’s .gitignore file, so should not be pushed to the FIMS repository. 10.2.13.2 Clean up individual tests For simple C++ functions like the examples above, we do not need to clean up the tests. Clean up is only necessary in a few situations. If memory for an object was allocated during testing and not deallocated - The object needs to be deleted (e.g., delete object). If you used a test fixture from GoogleTest to use the same data configuration for multiple tests, TearDown() can be used to clean up the test and then the test fixture will be deleted. Please see more details from GoogleTest user’s guide. 10.3 Templates for GoogleTest testing This section includes templates for creating unit tests and benchmarks. This is the code that would go into the .cpp files in tests/gtest. 10.3.1 Unit test template #include "gtest/gtest.h" #include "../../src/code.hpp" // # R code that generates true values for the test namespace { // Description of Test 1 TEST(TestSuiteName, Test1Name) { ... test body ... } // Description of Test 2 TEST(TestSuiteName, Test2Name) { ... test body ... 
} } 10.3.2 Benchmark template #include "benchmark/benchmark.h" #include "../../src/code.hpp" void BM_FunctionName(benchmark::State& state) { for (auto _ : state) // This code gets timed Function(); } // Register the function as a benchmark BENCHMARK(BM_FunctionName); 10.3.3 tests/gtest/CMakeLists.txt template These lines are added each time a new test suite (all tests in a file) is added: # Add test suite 1 add_executable(TestSuiteName1 test1.cpp ) target_link_libraries(TestSuiteName1 gtest_main ) gtest_discover_tests(TestSuiteName1) These lines are added each time a new benchmark file is added: # Add benchmark 1 add_executable(benchmark1 benchmark1.cpp ) target_link_libraries(benchmark1 benchmark_main ) 10.4 R testing FIMS uses {testthat} for writing R tests. You can install the package following the instructions on the testthat website. If you are not familiar with testthat, the testing chapter in R packages gives a good overview of the testing workflow, along with structure explanations and concrete examples. 10.4.1 Testing FIMS locally To test FIMS R functions interactively and locally, use devtools::install() rather than devtools::load_all(). This is because using load_all() will turn on the debugger, bloating the .o file, and may lead to a compilation error (e.g., Fatal error: can't write 326 bytes to section .text of FIMS.o: 'file too big' as: FIMS.o: too many sections (35851)). Note that useful interactive tests should be converted into {testthat} or GoogleTest tests. 10.4.2 Testing using gdbsource You can interactively debug C++ code using TMB::gdbsource() in RStudio.
Just add these two lines to the top of the test-fims-estimation.R file: require(testthat) devtools::load_all("C:\\\\Users\\\\chris\\\\noaa-git\\\\FIMS") 10.4.3 R testthat naming conventions and file organization We try to group functions and their helpers together (the “main function plus helpers” approach). Always name the test file the same as the R file, but with test- prepended (e.g., test-myfunction.R contains testthat tests for the R code in R/myfunction.R). This is the convention in the tidyverse style guide. testthat tests of Rcpp code should be called test-rcpp-[description].R. Integration tests which do not have a corresponding .R file should use the convention test-integration-[description].R. 10.4.4 R testthat template The format for an individual testthat test is: test_that("TestName", { ...test body... }) Multiple testthat tests can be put in the same file if they are related to the same .R file (see naming conventions above). 10.5 Test case documentation template and examples A testing plan must be developed while designing (i.e., before coding) new FIMS features or Rcpp modules. Please update the test cases in the FIMS/tests/milestoneX_test_cases.md file (e.g., FIMS/tests/milestone1_test_cases.md). This testing plan is documented using the test case documentation template below. 10.5.1 Test case documentation template Individual functional or integration test cases will be designed following the template below. Test ID. Create a meaningful name for the test case. Features to be tested. Provide a brief statement of test objectives and a description of the features to be tested. (Identify the test items following the FIMS software design specification document and identify all features that will not be tested and the rationale for exclusion.) Approach. Specify the approach that will ensure that the features are adequately tested and specify which type of test is used in this case. Evaluation criteria.
Provide a list of expected results and acceptance criteria. Pass/fail criteria. Specify the criteria used to determine whether each feature has passed or failed testing. In addition to setting pass/fail criteria with specific tolerance values, a document that simply records the outputs of some tests may be useful if the tests require additional computations, simulations, and comparisons. Test deliverables. Identify all information that is to be delivered by the test activity, such as test logs and automated status reports. 10.5.2 Test case documentation examples 10.5.2.1 General test case documentation The test case documentation below is a general case to apply to many functions/modules. For individual functions/modules, please make detailed test cases for specific options, noting “same as the general test case” where appropriate. Test ID Features to be tested General test case The function/module returns correct output values given different input values The function/module returns error messages when users give wrong types of inputs The function/module raises an error if the input value is outside the bounds of the input parameter Approach Prepare expected true values using R Run tests in R using testthat and compare output values with expected values Push tests to the working repository and run tests using GitHub Actions Run tests in different OS environments (Windows latest, macOS latest, and Ubuntu latest) using GitHub Actions Submit a pull request for code review Evaluation Criteria The tests pass if the output values equal the expected true values The tests pass if the function/module returns error messages when users give wrong types of inputs The tests pass if the function/module returns error messages when a user provides an input value that is outside the bounds of the input parameter Test deliverables Test logs on GitHub Actions. Document results of logs in the feature pull request.
10.5.2.2 Functional test example: TMB probability mass function of the multinomial distribution Test ID Probability mass function of the multinomial distribution Features to be tested Same as the general test case Approach Functional test Prepare expected true values using the R function dmultinom from the ‘stats’ package Evaluation Criteria Same as the general test case Test deliverables Same as the general test case 10.5.2.3 Integration test example: Li et al. 2021 age-structured stock assessment model comparison Test ID Age-structured stock assessment comparison (Li et al. 2021) Features to be tested Null case (update standard deviation of the log of recruitment from 0.2 to 0.5 based on the Siegfried et al. 2016 snapper-grouper complex) Recruitment variability Stochastic fishing mortality (F) F patterns (e.g., roller coaster: up then down and down then up; constant Flow, FMSY, and Fhigh) Selectivity patterns Recruitment bias adjustment Initial condition (unit of catch: number or weight) Model misspecification (e.g., growth, natural mortality, steepness, catchability, etc.) Approach Integration test Prepare expected true values from an operating model using R functions from the Age_Structured_Stock_Assessment_Model_Comparison GitHub repository Evaluation Criteria Summarize the median absolute relative error (MARE) between true values from the operating model and the FIMS estimation model If all MAREs from the null case are less than 10% and all MAREs are less than 15%, the tests pass. If the MAREs are greater than 15%, a closer examination is needed. Test deliverables In addition to the test logs on GitHub Actions, a document that includes comparison figures from various cases (e.g., Figs. 5 and 6 from Li et al. 2021) will be automatically generated A table that shows median absolute relative errors in unfished recruitment, catchability, spawning stock biomass, recruitment, fishing mortality, and reference points (e.g., Table 6 from Li et al.
2021) will be automatically generated 10.5.2.4 Simulation testing: challenges and solutions One thing that might be challenging when comparing simulation results is that changes to the order of calls to simulate will change the simulated values. Tests may fail simply because different random numbers are used or the order of the simulation changes through model development. Several solutions could be used to address the simulation testing issue. Please see discussions on the FIMS-planning issue page for details. Once we start developing simulation modules, we can use these two ways to compare simulated data from FIMS and a test: Add a TRUE/FALSE parameter in each FIMS simulation module for setting up the testing seed. When testing the module, set the parameter to TRUE to fix the seed number in R and conduct tests. If adding a TRUE/FALSE parameter does not work as expected, then carefully check the simulated data from each component and make sure it is not a model coding error. FIMS will use set.seed() from R to set the seed. The {rstream} package will be investigated if one of the requirements of the FIMS simulation module is to generate multiple streams of random numbers to associate distinct streams of random numbers with different sources of randomness. {rstream} was specifically designed to address the issue of needing very long streams of pseudo-random numbers for parallel computations. Please see the rstream paper and RngStreams for more details. "],["glossary.html", "Glossary Testing Glossary 10.6 C++ Glossary", " Glossary In this section we will define terms that come up throughout this handbook. Testing Glossary Unit testing Description: It tests individual methods and functions of the classes, components, or modules used by the software independently. It executes only small portions of the test cases during the development process.
Writer: Developer Advantages: It finds problems early and helps trace bugs in the development cycle; cheap to automate when a method has clear input parameters and output; can be run quickly. Limitations: Tedious to create; it won’t catch integration errors if a method or a function has interactions with something external to the software. Examples: A recruitment module may consist of a few stock-recruit functions. We could use a set of unit test cases that ensure each stock-recruit function is correct and meets its design as intended while developing the function. Reference: Wikipedia description Functional testing Description: It checks the software’s performance with respect to its specified requirements. Testers do not need to examine the internal structure of the piece of software tested but just test a slice of functionality of the whole system after it has been developed. Writer: Tester Advantages: It verifies that the functionalities of the software are working as defined; leads to reduced developer bias since the tester has not been involved in the software’s development. Limitations: Need to create input data and determine output based on each function’s specifications; need to know how to compare actual and expected outputs and how to check whether the software works as the requirements specified. Examples: The software requires development of catch-based projection. We could use a set of functional test cases that help verify if the model produces correct output given specified catch input after catch-based projection has been implemented in the system. Reference: Wikipedia description; WHAM testthat examples Integration testing Description: A group of software modules is coupled together and tested. Modules are integrated and the interfaces between them are verified against the software design. Testing continues until the software works as a system.
Writer: Tester Advantages: It builds a working version of the system by putting the modules together. It assembles a software system and helps detect errors associated with interfacing. Limitations: The tests can only be executed after all the modules are developed. It may be difficult to locate errors because all components are integrated together. Examples: After developing all the modules, we could set up a few stock assessment test models and check if the software can read the input file, run the stock assessment models, and provide the desired output. Reference: Wikipedia description Run-time testing Description: Checks added in the software that catch user input errors. The developer adds checks to the software; the user will trigger these checks if there are input errors. Writer: Developer Advantages: Provides guidance to the user while using the software. Limitations: Adding many checks can cause the software to run more slowly, and the messages need to be helpful so the user can fix the input error. Examples: A user inputs a vector of values when they only need to input a single integer value. When running the software, they get an error message telling them that they should use a single integer value instead. Reference: Testing R code book Regression testing Description: Re-running tests to ensure that previously developed and tested software still performs after a change. Testers can execute regression testing after adding a new feature to the software or whenever a previously discovered issue has been fixed. Testers can run all tests or a part of the test suite to check the correctness or quality of the software. Writer: Tester Advantages: It ensures that the changes made to the software have not affected the existing functionalities or correctness of the software. Limitations: If the team makes changes to the software often, it may be difficult to run all tests from the test suite frequently. 
In that case, it’s a good idea to have a regression testing schedule. For example, run a part of the test suite that is higher in priority after every change and run the full test suite weekly or monthly. Examples: Set up a test suite like the Stock Synthesis test-models repository. The test cases can be based on real stock assessment models, but may not be the final model version or may have been altered for testing purposes. Test the final software by running this set of models and checking that key model quantities remain the same relative to a “reference run” (e.g., the last release of the software). Reference: Wikipedia description Platform compatibility testing Description: It checks whether the software is capable of running on different operating systems and versions of other software. Testers need to define a set of environments or platforms the application is expected to work on. Testers can test the software on different operating systems or platforms and report the bugs. Writer: Tester Advantages: It ensures that the developed software works under different configurations and is compatible with the client’s environment. Limitations: Testers need to have knowledge of the testing environment and platforms to understand the expected software behavior under different configurations. It may be difficult to figure out why the software produces different results when using different operating systems. Examples: Set up an automated workflow and check that the software is compatible with different operating systems, such as Windows, macOS, and Linux. Also, testers can check if the software is compatible with different versions of R (e.g., the release version and version 3.6). Reference: International Software Testing Qualification Board Beta testing Description: It is a form of external user acceptance testing and the feedback from users can ensure the software has fewer bugs. 
The software is released to a limited set of end-users outside of the implementation team, and the end-users (beta testers) can report issues with the beta software to the implementation team after further testing. Writer: Members of the implementation team and other users Advantages: It helps in uncovering unexpected errors that happen in the client’s environment. The implementation team can receive direct feedback from users before shipping the software to users. Limitations: The testing environment is not under the control of the implementation team and it may be hard to reproduce the bugs. Examples: Prepare a document that describes the new features of the software and share it with selected end-users. Send a pre-release of the software to selected users for further testing and gather feedback from users. Reference: Wikipedia description; SS prerelease example One-off testing Description: It is for replicating and fixing user-reported bugs. It is a special kind of testing that is completed outside of the ordinary routine. Testers write a test that replicates the bug and run the test to check that it fails as expected. After fixing the bug, the testers can run the test again and check that it passes. Writer: Developer and tester Advantages: The test is simple, fast, and efficient for fixing bugs. Limitations: The tests are specific to bugs and may require manual testing. Examples: A bug is found in the code and the software does not work properly. The tester can create a test to replicate the bug, and the test would fail as expected. After the developer fixes the bug, the tester can run the test and see if the issue is resolved. Reference: International Software Testing Qualification Board; SS bug fix example 10.6 C++ Glossary Some C++ vocabulary that is used within FIMS and that will be helpful for novice C++ programmers to understand. 10.6.1 singleton Defines a class that is only used to create an object one time. This is a design pattern. 
See more information 10.6.2 class Provides the “recipe” for the structure of an object, including the data members and functions. Like data structures (structs), but also includes functions. See more information. 10.6.3 functor A functor is a class that acts like a function. See more details about functors. constructor A special method that is called when a new object is created and usually initializes the data members of the object. See the definition of constructor. 10.6.4 destructor The last method of an object, called automatically before an object is destroyed. See the definition of destructor. 10.6.5 header guards Ensure that a header is not included multiple times within a file. Details are available. 10.6.6 preprocessing macros/directives Begin with a # in the code; these tell the preprocessor (not the compiler) what to do. These directives are processed before compiling. See more info on preprocessing 10.6.7 struct Similar to a class, but only contains data members and not functions. All members are public. Comes from C. See details on struct "],["404.html", "Page not found", " Page not found The page you requested cannot be found (perhaps it was moved or renamed). You may want to try searching to find the page's new location, or use the table of contents to find the page you are looking for. "]] diff --git a/testing.html b/testing.html index 5130f54..73c27e7 100644 --- a/testing.html +++ b/testing.html @@ -52,6 +52,7 @@

      10.2 C++ unit testing and benchma

      10.2.1 Requirements

      To use GoogleTest, you will need:

        -
      • A compatible operating system (e.g. Windows, masOS, or Linux).

      • +
      • A compatible operating system (e.g., Windows, macOS, or Linux).

      • A C++ compiler that supports at least the C++11 standard (e.g., gcc 5.0+, clang 5.0+, or MSVC 2015+). For macOS users, Xcode 9.3+ provides clang 5.0. For R users, @@ -507,8 +508,8 @@

        10.2.5 How to edit your PATH and

        10.2.6 Set up FIMS testing project

        Clone the FIMS repository on the command line using:

        -
        git clone https://github.com/NOAA-FIMS/FIMS.git
        -cd FIMS
        +
        git clone https://github.com/NOAA-FIMS/FIMS.git
        +cd FIMS

        There is a file called CMakeLists.txt in the top level of the directory. This file instructs CMake on how to create the build files, including setting up GoogleTest.

        @@ -520,71 +521,71 @@

        10.2.6 Set up FIMS testing projec

        10.2.7 Build and run the tests

        Three commands on the command line are needed to build the tests:

        -
        cmake -S . -B build -G Ninja
        +
        cmake -S . -B build -G Ninja

        This generates the build system using Ninja as the generator. Note there is now a subfolder called build.

        Next, in the same command window, use cmake to build in the build subfolder:

        -
        cmake --build build
        +
        cmake --build build

        Finally, run the C++ tests:

        -
        ctest --test-dir build
        +
        ctest --test-dir build

        The output from running the tests should look something like:

        -
        Internal ctest changing into directory: C:/github_repos/NOAA-FIMS_org/FIMS/build
        -Test project C:/github_repos/NOAA-FIMS_org/FIMS/build
        -    Start 1: dlognorm.use_double_inputs
        -1/5 Test #1: dlognorm.use_double_inputs .......   Passed    0.04 sec
        -    Start 2: dlognorm.use_int_inputs
        -2/5 Test #2: dlognorm.use_int_inputs ..........   Passed    0.04 sec
        -    Start 3: modelTest.eta
        -3/5 Test #3: modelTest.eta ....................   Passed    0.04 sec
        -    Start 4: modelTest.nll
        -4/5 Test #4: modelTest.nll ....................   Passed    0.04 sec
        -    Start 5: modelTest.evaluate
        -5/5 Test #5: modelTest.evaluate ...............   Passed    0.04 sec
        -
        -100% tests passed, 0 tests failed out of 5
        +
        Internal ctest changing into directory: C:/github_repos/NOAA-FIMS_org/FIMS/build
        +Test project C:/github_repos/NOAA-FIMS_org/FIMS/build
        +    Start 1: dlognorm.use_double_inputs
        +1/5 Test #1: dlognorm.use_double_inputs .......   Passed    0.04 sec
        +    Start 2: dlognorm.use_int_inputs
        +2/5 Test #2: dlognorm.use_int_inputs ..........   Passed    0.04 sec
        +    Start 3: modelTest.eta
        +3/5 Test #3: modelTest.eta ....................   Passed    0.04 sec
        +    Start 4: modelTest.nll
        +4/5 Test #4: modelTest.nll ....................   Passed    0.04 sec
        +    Start 5: modelTest.evaluate
        +5/5 Test #5: modelTest.evaluate ...............   Passed    0.04 sec
        +
        +100% tests passed, 0 tests failed out of 5

        10.2.8 Adding a C++ test

        Create a file dlognorm.hpp within the src subfolder that contains a simple function:

        -
        #include <cmath>
        -
        -template<class Type>
        -Type dlognorm(Type x, Type meanlog, Type sdlog){
        -  Type resid = (log(x)-meanlog)/sdlog;
        -  Type logres = -log(sqrt(2*M_PI)) - log(sdlog) - Type(0.5)*resid*resid - log(x);
        -  return logres;
        -}
        +
        #include <cmath>
        +
        +template<class Type>
        +Type dlognorm(Type x, Type meanlog, Type sdlog){
        +  Type resid = (log(x)-meanlog)/sdlog;
        +  Type logres = -log(sqrt(2*M_PI)) - log(sdlog) - Type(0.5)*resid*resid - log(x);
        +  return logres;
        +}

        Then, create a test file dlognorm-unit.cpp in the tests/gtest subfolder that has a test suite for the dlognorm function:

        -
        #include "gtest/gtest.h"
        -#include "../../src/dlognorm.hpp"
        -
        -// # R code that generates true values for the test
        -// dlnorm(1.0, 0.0, 1.0, TRUE) = -0.9189385
        -// dlnorm(5.0, 10.0, 2.5, TRUE) = -9.07679
        -
        -namespace {
        -
        -  // TestSuiteName: dlognormTest; TestName: DoubleInput and IntInput
        -  // Test dlognorm with double input values
        -
        -  TEST(dlognormTest, DoubleInput) {
        -
        -    EXPECT_NEAR( dlognorm(1.0, 0.0, 1.0) , -0.9189385 , 0.0001 );
        -    EXPECT_NEAR( dlognorm(5.0, 10.0, 2.5) , -9.07679 , 0.0001 );
        -
        -  }
        -
        -  // Test dlognorm with integer input values
        -
        -  TEST(dlognormTest, IntInput) {
        -
        -    EXPECT_NEAR( dlognorm(1, 0, 1) , -0.9189385 , 0.0001 );
        -
        -  }
        -
        -}
        +
        #include "gtest/gtest.h"
        +#include "../../src/dlognorm.hpp"
        +
        +// # R code that generates true values for the test
        +// dlnorm(1.0, 0.0, 1.0, TRUE) = -0.9189385
        +// dlnorm(5.0, 10.0, 2.5, TRUE) = -9.07679
        +
        +namespace {
        +
        +  // TestSuiteName: dlognormTest; TestName: DoubleInput and IntInput
        +  // Test dlognorm with double input values
        +
        +  TEST(dlognormTest, DoubleInput) {
        +
        +    EXPECT_NEAR( dlognorm(1.0, 0.0, 1.0) , -0.9189385 , 0.0001 );
        +    EXPECT_NEAR( dlognorm(5.0, 10.0, 2.5) , -9.07679 , 0.0001 );
        +
        +  }
        +
        +  // Test dlognorm with integer input values
        +
        +  TEST(dlognormTest, IntInput) {
        +
        +    EXPECT_NEAR( dlognorm(1, 0, 1) , -0.9189385 , 0.0001 );
        +
        +  }
        +
        +}

        EXPECT_NEAR(val1, val2, absolute_error) verifies that the difference between val1 and val2 does not exceed the absolute error bound absolute_error. EXPECT_NE(val1, val2) verifies that val1 is not equal to val2. @@ -595,88 +596,88 @@

        10.2.8 Adding a C++ test

        10.2.9 Add tests to tests/gtest/CMakeLists.txt and run a binary

        To build the code, add the following contents to the end of the tests/gtest/CMakeLists.txt file:

        -
        
        -add_executable(dlognorm_test
        -  dlognorm-unit.cpp
        -)
        -
        -target_include_directories(dlognorm_test
        -  PUBLIC
        -    ${CMAKE_SOURCE_DIR}/../
        -)
        -
        -target_link_libraries(dlognorm_test
        -  gtest_main
        -)
        -
        -include(GoogleTest)
        -gtest_discover_tests(dlognorm_test)
        +
        
        +add_executable(dlognorm_test
        +  dlognorm-unit.cpp
        +)
        +
        +target_include_directories(dlognorm_test
        +  PUBLIC
        +    ${CMAKE_SOURCE_DIR}/../
        +)
        +
        +target_link_libraries(dlognorm_test
        +  gtest_main
        +)
        +
        +include(GoogleTest)
        +gtest_discover_tests(dlognorm_test)

        The above configuration enables testing in CMake, declares the C++ test binary you want to build (dlognorm_test), and links it to GoogleTest (gtest_main). Now you can build and run your test. Open a command window in the FIMS repo (if not already opened) and type:

        -
        cmake -S . -B build -G Ninja
        +
        cmake -S . -B build -G Ninja

        This generates the build system using Ninja as the generator.

        Next, in the same command window, use cmake to build:

        -
        cmake --build build
        +
        cmake --build build

        Finally, run the tests in the same command window:

        -
        ctest --test-dir build
        +
        ctest --test-dir build

        The output when running ctest might look like this. Note there is a failing test:

        -
        Internal ctest changing into directory: C:/Users/Kathryn.Doering/Documents/testing/FIMS/build
        -Test project C:/Users/Kathryn.Doering/Documents/testing/FIMS/build
        -    Start 1: dlognorm.use_double_inputs
        -1/7 Test #1: dlognorm.use_double_inputs .......   Passed    0.04 sec
        -    Start 2: dlognorm.use_int_inputs
        -2/7 Test #2: dlognorm.use_int_inputs ..........   Passed    0.04 sec
        -    Start 3: modelTest.eta
        -3/7 Test #3: modelTest.eta ....................   Passed    0.04 sec
        -    Start 4: modelTest.nll
        -4/7 Test #4: modelTest.nll ....................   Passed    0.04 sec
        -    Start 5: modelTest.evaluate
        -5/7 Test #5: modelTest.evaluate ...............   Passed    0.04 sec
        -    Start 6: dlognormTest.DoubleInput
        -6/7 Test #6: dlognormTest.DoubleInput .........   Passed    0.04 sec
        -    Start 7: dlognormTest.IntInput
        -7/7 Test #7: dlognormTest.IntInput ............***Failed    0.04 sec
        -
        -86% tests passed, 1 tests failed out of 7
        -
        -Total Test time (real) =   0.28 sec
        -
        -The following tests FAILED:
        -          7 - dlognormTest.IntInput (Failed)
        -Errors while running CTest
        -Output from these tests are in: C:/Users/Kathryn.Doering/Documents/testing/FIMS/build/Testing/Temporary/LastTest.log
        -Use "--rerun-failed --output-on-failure" to re-run the failed cases verbosely.
        +
        Internal ctest changing into directory: C:/Users/Kathryn.Doering/Documents/testing/FIMS/build
        +Test project C:/Users/Kathryn.Doering/Documents/testing/FIMS/build
        +    Start 1: dlognorm.use_double_inputs
        +1/7 Test #1: dlognorm.use_double_inputs .......   Passed    0.04 sec
        +    Start 2: dlognorm.use_int_inputs
        +2/7 Test #2: dlognorm.use_int_inputs ..........   Passed    0.04 sec
        +    Start 3: modelTest.eta
        +3/7 Test #3: modelTest.eta ....................   Passed    0.04 sec
        +    Start 4: modelTest.nll
        +4/7 Test #4: modelTest.nll ....................   Passed    0.04 sec
        +    Start 5: modelTest.evaluate
        +5/7 Test #5: modelTest.evaluate ...............   Passed    0.04 sec
        +    Start 6: dlognormTest.DoubleInput
        +6/7 Test #6: dlognormTest.DoubleInput .........   Passed    0.04 sec
        +    Start 7: dlognormTest.IntInput
        +7/7 Test #7: dlognormTest.IntInput ............***Failed    0.04 sec
        +
        +86% tests passed, 1 tests failed out of 7
        +
        +Total Test time (real) =   0.28 sec
        +
        +The following tests FAILED:
        +          7 - dlognormTest.IntInput (Failed)
        +Errors while running CTest
        +Output from these tests are in: C:/Users/Kathryn.Doering/Documents/testing/FIMS/build/Testing/Temporary/LastTest.log
        +Use "--rerun-failed --output-on-failure" to re-run the failed cases verbosely.

        10.2.10 Debugging a C++ test

        There are two ways to debug a C++ test: interactively using gdb, or via print statements. To use gdb, make sure it is installed and on your path.

        Debug C++ code (e.g., segmentation error/memory corruption) using gdb:

        -
        cmake -S . -B build -G Ninja -DCMAKE_BUILD_TYPE=Debug
        -cmake --build build --parallel 16
        -ctest --test-dir build --parallel 16
        -gdb ./build/tests/gtest/population_dynamics_population.exe
        -c // to continue without paging
        -run // to see which line of code is broken
        -print this->log_naa // for example, print this->log_naa to see the value of log_naa; 
        -print i // for example, print i from the broken for loop
        -bt // backtrace
        -q // to quit
        +
        cmake -S . -B build -G Ninja -DCMAKE_BUILD_TYPE=Debug
        +cmake --build build --parallel 16
        +ctest --test-dir build --parallel 16
        +gdb ./build/tests/gtest/population_dynamics_population.exe
        +c // to continue without paging
        +run // to see which line of code is broken
        +print this->log_naa // for example, print this->log_naa to see the value of log_naa; 
        +print i // for example, print i from the broken for loop
        +bt // backtrace
        +q // to quit

        Debug C++ code without using gdb: update the code in a .hpp file by creating an output stream with std::ofstream out("file_name.txt"), then use out << variable; to print values of the variable:

        -
        nfleets = fleets.size();
        -std::ofstream out("debug.txt");
        -out <<nfleets;
        +
        nfleets = fleets.size();
        +std::ofstream out("debug.txt");
        +out <<nfleets;

        More complex examples include text identifying the quantities:

        -
        out <<" fleet_index: "<<fleet_index<<" index_yaf: "<<index_yaf<<" index_yf: "<<index_yf<<"\n";
        -out <<" population.Fmort[index_yf]: "<<population.Fmort[index_yf]<<"\n";
        +
        out <<" fleet_index: "<<fleet_index<<" index_yaf: "<<index_yaf<<" index_yf: "<<index_yf<<"\n";
        +out <<" population.Fmort[index_yf]: "<<population.Fmort[index_yf]<<"\n";

        Git Bash

        -
        cmake -S . -B build -G Ninja 
        -cmake --build build --parallel 16
        -ctest --test-dir build --parallel 16
        +
        cmake -S . -B build -G Ninja 
        +cmake --build build --parallel 16
        +ctest --test-dir build --parallel 16

        The output of the print statements will be in this test file: FIMS/build/tests/gtest/debug.txt

        @@ -685,15 +686,15 @@

        10.2.11 Benchmark example -
        #include "benchmark/benchmark.h"
        -#include "../../src/dlognorm.hpp"
        -
        -void BM_dlgnorm(benchmark::State& state)
        -{
        -  for (auto _ : state)
        -    dlognorm(5.0, 10.0, 2.5);
        -}
        -BENCHMARK(BM_dlgnorm);
        +
        #include "benchmark/benchmark.h"
        +#include "../../src/dlognorm.hpp"
        +
        +void BM_dlgnorm(benchmark::State& state)
        +{
        +  for (auto _ : state)
        +    dlognorm(5.0, 10.0, 2.5);
        +}
        +BENCHMARK(BM_dlgnorm);

        This file runs the dlognorm function and uses BENCHMARK to see how long it takes.

        A more comprehensive feature overview of benchmarking is available in @@ -703,51 +704,51 @@

        10.2.11 Benchmark example10.2.12 Add benchmarks to tests/gtest/CMakeLists.txt and run the benchmark

        To build the code, add the following contents to the end of your tests/gtest/CMakeLists.txt file:

        -
        
        -FetchContent_Declare(
        -  googlebenchmark
        -  URL https://github.com/google/benchmark/archive/refs/tags/v1.6.0.zip
        -)
        -FetchContent_MakeAvailable(googlebenchmark)
        -
        -add_executable(dlognorm_benchmark
        -  dlognorm_benchmark.cpp
        -)
        -
        -target_include_directories(dlognorm_benchmark
        -  PUBLIC
        -    ${CMAKE_SOURCE_DIR}/../
        -)
        -
        -target_link_libraries(dlognorm_benchmark
        -  benchmark_main
        -)
        +
        
        +FetchContent_Declare(
        +  googlebenchmark
        +  URL https://github.com/google/benchmark/archive/refs/tags/v1.6.0.zip
        +)
        +FetchContent_MakeAvailable(googlebenchmark)
        +
        +add_executable(dlognorm_benchmark
        +  dlognorm_benchmark.cpp
        +)
        +
        +target_include_directories(dlognorm_benchmark
        +  PUBLIC
        +    ${CMAKE_SOURCE_DIR}/../
        +)
        +
        +target_link_libraries(dlognorm_benchmark
        +  benchmark_main
        +)

        To run the benchmark, open the command line in the FIMS repo (if not already open) and run cmake, sending output to the build subfolder:

        -
        cmake --build build
        +
        cmake --build build

        Then run the dlognorm_benchmark executable created:

        -
        build/tests/gtest/dlognorm_benchmark.exe
        +
        build/tests/gtest/dlognorm_benchmark.exe

        The output from dlognorm_benchmark.exe might look like this:

        -
        Run on (8 X 2112 MHz CPU s)
        -CPU Caches:
        -  L1 Data 32 KiB (x4)
        -L1 Instruction 32 KiB (x4)
        -L2 Unified 256 KiB (x4)
        -L3 Unified 8192 KiB (x1)
        -***WARNING*** Library was built as DEBUG. Timings may be affected.
        ------------------------------------------------------
        -  Benchmark           Time             CPU   Iterations
        ------------------------------------------------------
        -  BM_dlgnorm        153 ns          153 ns      4480000
        +
        Run on (8 X 2112 MHz CPU s)
        +CPU Caches:
        +  L1 Data 32 KiB (x4)
        +L1 Instruction 32 KiB (x4)
        +L2 Unified 256 KiB (x4)
        +L3 Unified 8192 KiB (x1)
        +***WARNING*** Library was built as DEBUG. Timings may be affected.
        +-----------------------------------------------------
        +  Benchmark           Time             CPU   Iterations
        +-----------------------------------------------------
        +  BM_dlgnorm        153 ns          153 ns      4480000

        10.2.12.1 Remove files produced by this example

        If you don’t want to keep any of the files produced by this example and want to completely clear any uncommitted changes and files from the git repo, use

        -
        git restore .
        +
        git restore .

        to get rid of uncommitted changes in git-tracked files. To get rid of all untracked files in the repo, use:

        -
        git clean -fd
        +
        git clean -fd
        @@ -796,67 +797,67 @@

        10.3 Templates for GoogleTest tes This is the code that would go into the .cpp files in tests/gtest.

        10.3.1 Unit test template

        -
        #include "gtest/gtest.h"
        -#include "../../src/code.hpp"
        -
        -// # R code that generates true values for the test
        -
        -namespace {
        -
        -  // Description of Test 1
        -  TEST(TestSuiteName, Test1Name) {
        -
        -    ... test body ...
        -
        -  }
        -
        -  // Description of Test 2
        -  TEST(TestSuiteName, Test2Name) {
        -
        -    ... test body ...
        -
        -  }
        -
        -}
        +
        #include "gtest/gtest.h"
        +#include "../../src/code.hpp"
        +
        +// # R code that generates true values for the test
        +
        +namespace {
        +
        +  // Description of Test 1
        +  TEST(TestSuiteName, Test1Name) {
        +
        +    ... test body ...
        +
        +  }
        +
        +  // Description of Test 2
        +  TEST(TestSuiteName, Test2Name) {
        +
        +    ... test body ...
        +
        +  }
        +
        +}

        10.3.2 Benchmark template

        -
        #include "benchmark/benchmark.h"
        -#include "../../src/code.hpp"
        -
        -void BM_FunctionName(benchmark::State& state)
        -{
        -  for (auto _ : state)
        -    // This code gets timed
        -    Function()
        -}
        -
        -// Register the function as a benchmark
        -BENCHMARK(BM_FunctionName);
        +
        #include "benchmark/benchmark.h"
        +#include "../../src/code.hpp"
        +
        +void BM_FunctionName(benchmark::State& state)
        +{
        +  for (auto _ : state)
        +    // This code gets timed
        +    Function()
        +}
        +
        +// Register the function as a benchmark
        +BENCHMARK(BM_FunctionName);

        10.3.3 tests/gtest/CMakeLists.txt template

        These lines are added each time a new test suite (all tests in a file) is added:

        -
        // Add test suite 1
        -add_executable(TestSuiteName1
        -  test1.cpp
        +
        // Add test suite 1
        +add_executable(TestSuiteName1
        +  test1.cpp
        +)
        +
        +target_link_libraries(TestSuiteName1
        +  gtest_main
        +)
        +
        +gtest_discover_tests(TestSuiteName1)
        +

        These lines are added each time a new benchmark file is added:

        +
        // Add benchmark 1
        +add_executable(benchmark1
        +  benchmark1.cpp
         )
         
        -target_link_libraries(TestSuiteName1
        -  gtest_main
        -)
        -
        -gtest_discover_tests(TestSuiteName1)
        -

        These lines are added each time a new benchmark file is added:

        -
        // Add benchmark 1
        -add_executable(benchmark1
        -  benchmark1.cpp
        -)
        -
        -target_link_libraries(benchmark1
        -  benchmark_main
        -)
        +target_link_libraries(benchmark1
        +  benchmark_main
        +)

        @@ -874,8 +875,8 @@

        10.4.1 Testing FIMS locally

        10.4.2 Testing using gdbsource

        You can interactively debug C++ code using TMB::gdbsource() in RStudio. Just add these two lines to the top of the test-fims-estimation.R file

        -
        require(testthat)
        -devtools::load_all("C:\\Users\\chris\\noaa-git\\FIMS")
        +
        require(testthat)
        +devtools::load_all("C:\\Users\\chris\\noaa-git\\FIMS")

        10.4.3 R testthat naming conventions and file organization

        @@ -889,11 +890,11 @@

        10.4.3 R testthat naming conventi

        10.4.4 R testthat template

        The format for an individual testthat test is:

        -
        test_that("TestName", {
        -
        -  ...test body...
        -
        -})
        +
        test_that("TestName", {
        +
        +  ...test body...
        +
        +})

        Multiple testthat tests can be put in the same file if they are related to the same .R file (see naming conventions above).

        diff --git a/user-guide.html b/user-guide.html index 967fbc3..46ff8d3 100644 --- a/user-guide.html +++ b/user-guide.html @@ -52,6 +52,7 @@ + @@ -411,8 +412,8 @@

        5.2.1 Windows users

        5.3 Installing from R

        -
        remotes::install_github("NOAA-FIMS/FIMS")
        -library(FIMS)
        +
        remotes::install_github("NOAA-FIMS/FIMS")
        +library(FIMS)

      5.4 Running the model

      @@ -426,21 +427,21 @@

      5.4.1.1 Naming conventions

      5.4.1.2 Structuring data input

      You can add components to the model using S4 classes.

      -
      #TODO: add script to demonstrate how to structure data input
      +
      #TODO: add script to demonstrate how to structure data input

      5.4.1.3 Defining model specifications

      -
      #TODO: add scripts detailing how to set up different components of the model
      +
      #TODO: add scripts detailing how to set up different components of the model

      5.4.2 How to run the model

      -
      #TODO: add script with examples on how to run the model
      +
      #TODO: add script with examples on how to run the model

      5.4.3 Extracting model output

      Here is how you get the model output.

      -
      #Todo add code for how to extract model output
      +
      #TODO: add code for how to extract model output