diff --git a/.codacy.yml b/.codacy.yml new file mode 100644 index 0000000..58d5262 --- /dev/null +++ b/.codacy.yml @@ -0,0 +1,7 @@ +--- + +# For options see: https://support.codacy.com/hc/en-us/articles/115002130625-Codacy-Configuration-File + +duplication: + exclude_paths: + - tests/* diff --git a/.gitignore b/.gitignore index 5d789b4..e6020b7 100644 --- a/.gitignore +++ b/.gitignore @@ -16,3 +16,6 @@ genDoc/call_graph_uml/* # Exclude PyCharm config files but include inspection profiles .idea/* !.idea/inspectionProfiles/ + +# Exclude the coverage reports +tests/other_files/coverage/reports/* diff --git a/.idea/inspectionProfiles/profiles_settings.xml b/.idea/inspectionProfiles/profiles_settings.xml index 105ce2d..dd4c951 100644 --- a/.idea/inspectionProfiles/profiles_settings.xml +++ b/.idea/inspectionProfiles/profiles_settings.xml @@ -1,5 +1,6 @@ + diff --git a/.travis.yml b/.travis.yml index d0853b2..e97bcd3 100644 --- a/.travis.yml +++ b/.travis.yml @@ -42,13 +42,14 @@ git: submodules: false script: - - tree --charset unicode ../../ + - tree -a -I .git ../ - find . 
-type f -name "*.py" -exec pylint -j 0 --exit-zero {} \; - - python -m unittest test - - pushd ./tests - - coverage run --branch test__cmd-line.py + - cd tests/functional_tests + - python -m unittest -v + - cd ../unit_tests + - coverage run -m unittest -v - tree -a -I .git ../ - - coverage report + - coverage report --omit="/home/travis/virtualenv/*","/home/travis/build/bmwcarit/Emma/tests/*" - coverage xml after_success: diff --git a/CONTRIBUTORS b/CONTRIBUTORS index 98c9a99..1d42130 100644 --- a/CONTRIBUTORS +++ b/CONTRIBUTORS @@ -7,7 +7,7 @@ N: Full name I: Initials (besides the name initials can be used (alternatively to someones name) to identify someones comments) E: Email address K: PGP key ID and fingerprint -F: Files and directories with globbing patterns patterns +F: Files and directories with globbing patterns L: Link providing more information about this individual D: Description C: Country @@ -21,7 +21,7 @@ An entry ends with two newlines. ```````````````````` N: Marcel Schmalzl I: MSc -E: marcel.schmalzl@partner.bmw.de +E: marcel.schmalzl( .at)partner.bmw.de F: * D: Emma originator L: https://github.com/holzkohlengrill/ @@ -38,7 +38,8 @@ C: Germany ```````````````````` N: András Gergő Kocsis I: AGK -E: kocsisandrasgergo@gmail.com +E: kocsisandrasgergo F: * D: Emma maintainer +L: https://github.com/KGergo88 C: Germany diff --git a/README.html b/README.html index a0ce59e..cf33df2 100644 --- a/README.html +++ b/README.html @@ -94,6 +94,7 @@ +

Codacy Badge Codacy Badge Build Status License: GPL v3

Emma

Emma Memory and Mapfile Analyser (Emma)

@@ -5427,7 +5428,7 @@

Contents

  • Contribute
  • Dependencies & Licences
  • -

    Install dependencies: pip3 install Pygments Markdown matplotlib pandas pypiscout

    +

    Install dependencies: Python 3.6 or higher; pip3 install Pygments Markdown matplotlib pandas pypiscout

    General Workflow

    The following figure shows a possible workflow using Emma:

    Project files are already present
  • Create intermediate .csv from mapfiles with Emma:
  • -
    python emma.py -p .\MyProjectFolder --map .\MyProjectFolder\mapfiles --dir .\MyProjectFolder\analysis --subdir Analysis_1
    -
    +
    python emma.py -p .\MyProjectFolder --map .\MyProjectFolder\mapfiles --dir .\MyProjectFolder\analysis --subdir Analysis_1
    1. Generate reports and graphs with Emma Visualiser:
    -
    python emma_vis.py -p .\MyProjectFolder --dir .\MyProjectFolder\analysis --subdir Analysis_1 -q 
    -
    +
    python emma_vis.py -p .\MyProjectFolder --dir .\MyProjectFolder\analysis --subdir Analysis_1 -q 

    Project files that have to be created

    @@ -8588,14 +8587,13 @@

    Project files that have to be cre " alt="./doc/images/globalConfigScheme.png" width="60%" />

    A globalConfig.json could look like this:

    -
    {
    -    "configID1": {
    -        "addressSpacesPath": "addressSpaces.json",
    -        "sectionsPath": "sections.json",
    -        "patternsPath": "patterns.json"
    -    }
    -}
    -
    +
    {
    +    "configID1": {
    +        "addressSpacesPath": "addressSpaces.json",
    +        "sectionsPath": "sections.json",
    +        "patternsPath": "patterns.json"
    +    }
    +}

    Full documentation

    diff --git a/README.md b/README.md index 1c14207..65d9a29 100644 --- a/README.md +++ b/README.md @@ -23,12 +23,12 @@ Holding the aforementioned **augmented data** makes it easy to **detect issues i The Emma visualiser helps you to create nice plots and reports in a `.png` and `.html` and markdown file format. -The whole Emma tool suite contains command line options making it convenient to be **run on a build server** like `-Werror` (treat all warnings as errors) or `--no-prompt` (exit and fail on user prompts; user prompts can happen when ambiguous configurations appear such as multiple matches for one configured map files). +The whole Emma tool suite contains command line options making it convenient to be **run on a build server** like `--Werror` (treat all warnings as errors) or `--no-prompt` (exit and fail on user prompts; user prompts can happen when ambiguous configurations appear such as multiple matches for one configured map files). ------------------------
    -
    +
    ------------------------ # Contents @@ -46,7 +46,7 @@ Install dependencies: Python 3.6 or higher; `pip3 install Pygments Markdown matp # General Workflow The following figure shows a possible workflow using Emma: -
    +
    **Emma** - as the core component - produces an intermediate `.csv` file. Inputs are mapfiles and JSON files (for configuration (memory layout, sizes, ...)). From this point you are very flexible to choose your own pipeline. You could @@ -96,7 +96,7 @@ A basic configuration can be short per file. For complex systems you can choose One main concept includes the `globalConfig.json`. You can see this as meta-config. Each configuration ID (configID) is a separately conducted analysis. Per configID you state individually the configuration files you want to use for this exact analysis. Herewith you can mix and match any combination of subconfigs you prefer. -
    +
    A `globalConfig.json` could look like this: diff --git a/doc/contribution.html b/doc/contribution.html new file mode 100644 index 0000000..ba824c5 --- /dev/null +++ b/doc/contribution.html @@ -0,0 +1,1883 @@ + + + + + + + + +

    Contribution guide

    +

    This guide will assist you in successfully submitting contributions to the Emma project.

    +

    Obtain the current version of the source code

    +

    Assure that you are working with the latest stable version.

    +

    Describe your changes

    +

    Always describe your changes, regardless of what problem you solved or how complex your changes are. Keep the following points in mind:

    +

    General

    +
      +
    • What is your motivation to make this change?
    • +
    • Why is it worth fixing?
    • +
    • How will it affect the end user?
    • +
    • If optimisations were done - quantify them. Present trade-offs (e.g. run-time vs. memory).
    • +
    • Evaluate your contribution objectively. What are the pros and cons?
    • +
    • One contribution should solve only one problem. If it does not, split it.
    • +
    • Your description should be self-explanatory (avoiding external resources).
    • +
    +

    Linking, referencing & documentation

    +
      +
    • If you refer to tickets, mailing lists etc., link to them and summarise their principal outcome.
    • +
    • When referencing specific commits: state the commit ID (use the long hash) and the commit message in order to make it more readable for the reviewer.
    • +
    +

    Implementation specific

    +
      +
    • How did you solve the problem?
    • +
    • If your solution is complex: provide an introduction before going into details.
    • +
    • If your patch reworks an existing solution, describe what you have done and why.
    • +
    +

    Test and review your code

    +

    Test it in a clean environment.

    +

    Review your code with regard to our coding guidelines.

    +

    Check and act on the review process

    +

    You may receive comments regarding your submission. For your contribution to be considered, you must respond to them.

    +

    Sign your work

    +

    You must sign the Developer Certificate of Origin (DCO) for any submission. We use this to keep track of who contributed what. Additionally, you certify that the contribution is your own work or that you have the right to pass it on as an open-source patch.

    +

    To do so, read the DCO and agree to it by adding a "sign-off" line at the end of the explanation of the patch, as in the example below:

    +
    Signed-off-by: Joe Contrib <joe.contrib@somedomain.com>
    + + +

    Fill in your full name (no pseudonyms) and your email address surrounded by angle brackets.

    +

    Small or formal changes can be noted in square-bracket notation:

    +
    Signed-off-by: Joe Contrib <joe.contrib@somedomain.com>
    +[alice.maintain@somedomain.com: struct foo moved from foo.c to foo.h]
    +Signed-off-by: Alice Maintain <alice.maintain@somedomain.com>
    + + +

    Create the pull request

    +
    git request-pull master git://repo-url.git my-signed-tag
    + + +

    See also here for more information about pull requests in general on GitHub.

    +
    +

    Coding guidelines

    +

    Generally, PEP-8 or the Google style guide applies. However, we deviate slightly from these (see the following sections).

    +

    Style

    +
      +
    • Naming conventions
        +
      • mixedCase (camelCase and PascalCase are used)
      • +
      • Methods/functions and variables start with a minuscule (camelCase)
      • +
      • Class names start with a majuscule (PascalCase)
      • +
      • Global variables: CAPS_WITH_UNDER
      • +
      +
    • +
    • The max line length rule is ignored since it decreases readability; use line breaks where appropriate
    • +
    • Imports in Python
        +
      • The imports need to be separated from the other parts of the file with two blank lines above and two below
      • +
      • They need to be grouped into the following three groups:
      • +
      • Python Standard Library Imports
      • +
      • 3rd Party Imports
      • +
      • Emma Imports
      • +
      • The groups need to be in the same order as in the previous list and they need to be separated from each other with a single blank line
      • +
      • Only packages and modules shall be imported, not individual classes or functions
      • +
      • Imports shall not use renaming
      • +
      • There are some exceptions:
      • +
      • Importing from shared_libs.stringConstants shall be done in the following way: from shared_libs.stringConstants import *
      • +
      • Importing the pypiscout library shall be done in the following way: import pypiscout as sc
      • +
      +
    • +
    • British English shall be used in function names and comments
        +
      • Exceptions:
          +
        • map file is always written as mapfile
        • +
        +
      • +
      +
    • +
    • TODO, FIXME and similar tags should be in the format: # TODO: This is my TODO (<author>)
        +
      • The first letter of the comment is a majuscule
      • +
      • The comment ends with the name of the author or with a unique and consistent abbreviation/alias/username/pseudonym (preferably your initials if still available; if you are unsure check the CONTRIBUTORS file)
      • +
      +
    • +
    +
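The import grouping and naming rules above can be sketched in a short module (all names here are illustrative, not taken from the Emma codebase):

```python
# Python Standard Library Imports
import os
import sys

# 3rd Party Imports (illustrative; Emma uses e.g. pandas)
# import pandas

# Emma Imports (illustrative module name)
# import shared_libs.emma_helper


# Global variables: CAPS_WITH_UNDER
DEFAULT_ERROR_CODE = -10


# Functions and variables start with a minuscule (camelCase)
def calculateMemoryUsage(usedBytes, totalBytes):
    """Return the used memory as a percentage."""
    return 100 * usedBytes / totalBytes


# Class names start with a majuscule (PascalCase)
class MemoryReport:
    def __init__(self, reportName):
        self.reportName = reportName
```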

    Path handling

    +

    Use os.path.normpath() where appropriate. Using / and \ can cause problems between OSes (even on Windows with WSL) and strange things could happen. Also prefer joinPath() (in shared_libs.Emma_helper) instead of os.path.join().

    +
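A minimal sketch of the preferred joining, assuming joinPath in shared_libs.Emma_helper wraps os.path.join with a normalisation step (the helper body below is an illustration, not the actual Emma implementation):

```python
import os


def joinPath(*paths):
    # Illustrative sketch of a joinPath-style helper: join the parts
    # and normalise the separators for the current OS.
    return os.path.normpath(os.path.join(*paths))


# Mixed or redundant path components are normalised instead of
# silently producing odd, OS-dependent paths.
print(joinPath("MyProjectFolder", "analysis", "..", "mapfiles"))
```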

    Raising exceptions

    +
      +
    • Exceptions should be avoided where possible. Instead, print a descriptive error message using SCout and exit with sys.exit(<err-code>)
    • +
    • The default error code is -10
    • +
    • For the user it is hard to distinguish whether an exception was caused by a bug or by wrong user input
    • +
    +
    +
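The error-reporting rule above can be sketched like this (the helper name is illustrative, and in Emma the message would go through SCout rather than a plain print):

```python
import sys

DEFAULT_ERROR_CODE = -10  # the project-wide default exit code


def exitWithError(message, errorCode=DEFAULT_ERROR_CODE):
    # Illustrative helper: print a descriptive message and exit with a
    # known error code, instead of raising an exception whose traceback
    # the user cannot interpret.
    print("[Error] " + message)  # in Emma this would be a SCout call
    sys.exit(errorCode)
```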

    Adding compiler support

    +

    The Emma tool was created in order to collect information from mapfiles and write it to CSV documents. +These mapfiles are created by the compiler during the compilation of software, and their format is specific to the compiler used.

    +

    Emma was designed in a way that adding support for new compilers is possible with only minor changes to the existing codebase. +This chapter explains which new components need to be added and how to integrate them into Emma.

    +

    Compiler handling

    +

    The globalConfig.json file of a configuration lists the configId-s of the project (see doc/readme for more info on Emma configurations). +Every configId has a key "compiler" (for example see the doc/test_project configuration). +The value belonging to this key determines the classes used during runtime for the compiler specific tasks.

    +

    New classes that need to be added

    +

    To implement the support for a new compiler, new classes need to be developed. +These will be responsible for reading the compiler specific parts of the configuration and for processing the mapfiles. +These classes should be located in the emma_libs folder, and it is recommended that they contain the name of the compiler they belong to as a prefix, for example: emma_libs/ghsConfiguration.py

    +

    These new classes need to provide a specific interface. For this reason the following classes need to be subclassed:

    +
      +
    • From emma_libs/specificConfiguration.py: SpecificConfiguration
    • +
    • From emma_libs/mapfileProcessor.py: MapfileProcessor
    • +
    +
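A skeleton of such a pair of classes might look like the following. This is a hedged sketch: the base-class module paths, class names and method names come from this guide, but the exact method signatures are illustrative and should be checked against the docstrings in emma_libs before implementing; stand-in base classes are defined so the sketch is self-contained.

```python
class SpecificConfiguration:   # stand-in for emma_libs/specificConfiguration.py
    pass


class MapfileProcessor:        # stand-in for emma_libs/mapfileProcessor.py
    pass


# Illustrative skeleton for a hypothetical "acme" compiler support,
# e.g. emma_libs/acmeConfiguration.py and emma_libs/acmeMapfileProcessor.py.
class AcmeConfiguration(SpecificConfiguration):
    def readConfiguration(self, configuration, configurationPath, configId):
        """Read the compiler specific config files and collect the mapfiles."""

    def checkConfiguration(self, configuration, configId):
        """Return True if mapfile processing is possible with this config."""
        return True


class AcmeMapfileProcessor(MapfileProcessor):
    def processMapfiles(self, configId, configuration):
        """Return two lists of MemEntry objects: sections and objects."""
        return [], []
```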

    Changes that need to be made to the existing codebase

    +

    In order to integrate these new classes, the following changes need to be made:

    +
      +
    • Adding a name string constant for the new compiler
        +
      • The file shared_libs/stringConstants.py contains the names of the supported compilers
      • +
      • For consistency the name strings should follow the format defined in the CMake documentation
      • +
      • Example: COMPILER_NAME_GHS = "GHS"
      • +
      +
    • +
    • Extending the SpecificConfiguration factory
        +
      • The file emma_libs/specificConfigurationFactory.py contains the function createSpecificConfiguration(compiler, **kwargs)
      • +
      • This function is responsible for creating objects of subclasses of the SpecificConfiguration class that are specific to a compiler
      • +
      • Here, a new entry has to be made using the previously defined compiler name constant
      • +
      +
    • +
    • Extending the MapfileProcessor factory
        +
      • The file emma_libs/mapfileProcessorFactory.py contains the function createSpecificMapfileProcesor(compiler, **kwargs)
      • +
      • This function is responsible for creating objects of subclasses of the MapFileProcessor class that are specific to a compiler
      • +
      • Here, a new entry has to be made using the previously defined compiler name constant
      • +
      +
    • +
    +
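The factory extension described above can be sketched as follows. This is a simplified stand-in: the real createSpecificConfiguration lives in emma_libs/specificConfigurationFactory.py, its error handling may differ, and the "ACME" compiler with its classes is hypothetical.

```python
# Name constants (shared_libs/stringConstants.py); COMPILER_NAME_ACME is
# the hypothetical new entry.
COMPILER_NAME_GHS = "GHS"
COMPILER_NAME_ACME = "ACME"


class GhsConfiguration:        # stand-ins for the real subclasses
    pass


class AcmeConfiguration:
    pass


def createSpecificConfiguration(compiler, **kwargs):
    # Dispatch on the compiler name constant; supporting a new compiler
    # only needs one extra entry in this mapping.
    factories = {
        COMPILER_NAME_GHS: GhsConfiguration,
        COMPILER_NAME_ACME: AcmeConfiguration,
    }
    if compiler not in factories:
        raise ValueError("Unsupported compiler: " + compiler)
    return factories[compiler](**kwargs)
```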

    Expected functionality of the new components

    +

    The previously mentioned abstract classes, from which the new classes need to inherit, describe in their docstrings the prototypes of the methods that need to be implemented. +During runtime these methods (the ones with a green background) will be called in the following order:

    +
    images/emmaAddingCompilerSupportActivityDiagram.png
    + +

    The methods need to implement the following functionality:

    +
      +
    • +

      readConfiguration()

      +
        +
      • Process all the config files under the configurationPath that belong to the configId the function was called with
      • +
      • Extend the configuration dictionary that was given to the function with the data extracted (this already contains the compiler independent configuration)
      • +
      • Collect the mapfiles from the mapfilesPath based on the configuration
      • +
      +
    • +
    • +

      checkConfiguration()

      +
        +
      • Check whether the configuration belonging to the configId is valid, meaning that the mapfile processing will be possible with it
      • +
      • Return True for a valid configuration, False otherwise (in the False case, Emma might skip the mapfile processing for this configId)
      • +
      +
    • +
    • +

      processMapfiles()

      +
        +
      • Create and return two lists of MemEntry objects representing the sections and objects found in the mapfiles, respectively
      • +
      • The lists need to be ordered in ascending order based on the addressStart of the MemEntry objects
      • +
      • The following members need to be filled out in the MemEntry objects:
          +
        • configId
        • +
        • mapfileName
        • +
        • addressStart
        • +
        • addressLength or addressEnd
        • +
        • sectionName
        • +
        • objectName
        • +
        • compilerSpecificData
        • +
        +
      • +
      • Before returning, call the MapfileProcessor::fillOutMemoryRegionsAndMemoryTypes() on the created lists to fill out the memTypeTag and memType members
      • +
      +
    • +
    +
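The ascending ordering requirement can be sketched with a minimal stand-in for MemEntry (only addressStart matters for the ordering; the real class also carries the members listed above, such as configId and mapfileName):

```python
class MemEntry:
    # Minimal stand-in: the real MemEntry also carries configId,
    # mapfileName, sectionName, objectName, compilerSpecificData, ...
    def __init__(self, addressStart, addressLength):
        self.addressStart = addressStart
        self.addressLength = addressLength


def sortMemEntries(entries):
    # processMapfiles() must return its lists sorted ascending by addressStart.
    return sorted(entries, key=lambda entry: entry.addressStart)


sections = [MemEntry(0x8000, 0x100), MemEntry(0x1000, 0x40), MemEntry(0x4000, 0x20)]
print([hex(entry.addressStart) for entry in sortMemEntries(sections)])
# → ['0x1000', '0x4000', '0x8000']
```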

    Expected changes to project and documentation after adding a new compiler support

    +

    To make it possible for users and other contributors to easily get started with the newly added support, the following non-code-related changes need to be made:

    +
      +
    • Add a test project with a step-by-step introduction to the doc folder. Take the doc/test_project as an example.
    • +
    • Add a chapter to the doc/readme.md describing the configuration of the new compiler. Take the chapter Formal Definition of the GHS compiler specific configuration as an example.
    • +
    • After changing the Markdown files, please re-generate the HTML files with the genDoc/genReadmeHtmlFromMd.py script, using the --no_graphs command line argument.
    • +
    +
    +

    Colour palette

    +

    The following colour palette is used for the documentation:

    + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    ColourHSL(A)RGB(A)RGBA (hex)
    Yellow34, 255, 192255, 230, 128ffe680ff
    Light orange17, 255, 213255, 204, 170ffccaaff
    Dark orange17, 145, 204233, 198, 175e9c6afff
    Light blue136, 145, 204175, 221, 233afdde9ff
    Green68, 145, 204198, 233, 175c6e9afff
    Light grey0, 0, 236236, 236, 236ecececff
    Grey0, 0, 204204, 204, 204ccccccff
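The RGBA hex column follows directly from the RGB values; a quick sketch of the conversion, assuming full opacity (alpha of 255):

```python
def rgbToHex(r, g, b, a=255):
    # Each channel becomes two lowercase hex digits, with alpha last.
    return "{:02x}{:02x}{:02x}{:02x}".format(r, g, b, a)


print(rgbToHex(255, 230, 128))  # → ffe680ff (the "Yellow" row above)
```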
    + + + + \ No newline at end of file diff --git a/doc/contribution.md b/doc/contribution.md index 501a9c9..afffc06 100644 --- a/doc/contribution.md +++ b/doc/contribution.md @@ -1,6 +1,6 @@ # Contribution guide -This guide will assist you in order to sucessfully submit contributions for the Emma project. +This guide will assist you in order to successfully submit contributions for the Emma project. ## Obtain the current version of the source code Assure that you are working with the latest stable version. @@ -97,12 +97,88 @@ Generally [PEP-8](https://github.com/python/peps/blob/master/pep-0008.txt) or th ## Path handling Use `os.path.normpath()` where appropriate. Using `/` and `\` can cause problems between OSes (even on Windows with WSL) and strange things could happen. Also prefer `joinPath()` (in `shared_libs.Emma_helper`) instetad of `os.path.join()`. - + ## Raising exceptions * Exceptions should be avoided where possible. Instead use a descriptive error message using `SCout` and exit with `sys.exit()` * the default error code is `-10` * For the user it is hard to distinquish whether an exception was caused by a bug or wrong user input +---------------------------------------------------------------- +# Adding compiler support +The Emma tool was created in order to collect information from mapfiles and write them to CSV documents. +These mapfiles are created by the compiler during the compilation of a software and their format is specific to the compiler used. + +Emma was designed in a way, that adding support for new compilers is possible with only minor changes to the existing codebase. +This chapter explains what are the new components that need to be added and how to integrate them into Emma. + +## Compiler handling +The globalConfig.json file of a configuration lists the configId-s of the project (see doc/readme for more info on Emma configurations). +Every configId has a key "compiler" (for example see the **doc/test_project** configuration). 
+The value belonging to this key determines the classes used during runtime for the compiler specific tasks. + +## New classes that need to be added +To implement the support for a new compiler, new classes need to be developed. +These will be responsible for reading the compiler specific parts of the configuration and for processing the mapfiles. +These classes should be located in the **emma_libs** folder, and it is recommended that they contain the name of the compiler they belong to as a prefix, for example: **emma_libs/ghsConfiguration.py** + +These new classes need to provide a specific interface. For this reason the following classes need to be subclassed: + +* From **emma_libs/specificConfiguration.py**: `SpecificConfiguration` +* From **emma_libs/mapfileProcessor.py**: `MapfileProcessor` + +## Changes need to be done on the existing codebase +In order to integrate these new classes, the following changes need to be made: + +* Adding name string constant for the new compiler + * The file **shared_libs/stringConstants.py** contains the names of the supported compiler + * For consistency the name strings should follow the format defined in the [CMake documentation](https://cmake.org/cmake/help/v3.0/variable/CMAKE_LANG_COMPILER_ID.html) + * Example: `COMPILER_NAME_GHS = "GHS"` +* Extending the SpecificConfiguration factory + * The file **emma_libs/specificConfigurationFactory.py** contains the function `createSpecificConfiguration(compiler, **kwargs)` + * This function is responsible for creating objects of subclasses of the `SpecificConfiguration` class that are specific to a compiler + * Here, using the previously defined compiler name constant a new entry has to be made +* Extending the MapfileProcessor factory + * The file **emma_libs/mapfileProcessorFactory.py** contains the function `createSpecificMapfileProcesor(compiler, **kwargs)` + * This function is responsible for creating objects of subclasses of the `MapFileProcessor` class that are specific to a 
compiler + * Here, using the previously defined compiler name constant a new entry has to be made + +## Expected functionality of the new components +The previously mentioned abstract classes from which the new classes need to inherit, describe the prototypes of the methods that need to be implemented in their docstrings. +During runtime these methods (the ones with green background) will be called in the following order: + +
    + +The methods need to implement the following functionality: + +* `readConfiguration()` + * Process all the config files under the `configurationPath` that are belonging to the `configId`, the function was called with + * Extend the `configuration` dictionary that was given to the function with the data extracted (this already contains the compiler independent configuration) + * Collect the mapfiles from the `mapfilesPath` based on the configuration + +* `checkConfiguration()` + * Check whether the `configuration` belonging to the `configId` is valid, meaning that the mapfile processing will be possible with it + * Return `True` for a valid `configuration`, `False` otherwise (in the `False` case, Emma might skip the mapfile processing for this `configId`) + +* `processMapfiles()` + * Create and return **two** lists of `MemEntry` objects that are representing the sections and objects found in the mapfiles respectively + * The lists need to be ordered in an ascending order based on the `addressStart` of the `MemEntry` objects + * The following members need to be filled out in the `MemEntry` objects: + * `configId` + * `mapfileName` + * `addressStart` + * `addressLength` or `addressEnd` + * `sectionName` + * `objectName` + * `compilerSpecificData` + * Before returning, call the `MapfileProcessor::fillOutMemoryRegionsAndMemoryTypes()` on the created lists to fill out the `memTypeTag` and `memType` members + +## Expected changes to project and documentation after adding a new compiler support +To make it possible for users and other contributors to easily get started with the added new support, the following non code related changes need to made: + +* Add a test project with a step-by-step introduction to the **doc** folder. Take the **doc/test_project** as example. +* Add a chapter to the **doc/readme.md** describing the configuration of the new compiler. Take the chapter **Formal Definition of the GHS compiler specific configuration** as example. 
+* After changing the Markdown files, please re-generate the HTML files with the **genDoc/genReadmeHtmlFromMd.py** script, using the `--no_graphs` command line argument. + ---------------------------------------------------------------- # Colour palette The following colour palette is used for the documentation: diff --git a/doc/images/configDependencies.png b/doc/images/configDependencies.png index f1525db..f27152b 100644 Binary files a/doc/images/configDependencies.png and b/doc/images/configDependencies.png differ diff --git a/doc/images/configDependencies.svg b/doc/images/configDependencies.svg index c1cf1c9..61ff768 100644 --- a/doc/images/configDependencies.svg +++ b/doc/images/configDependencies.svg @@ -5,35 +5,32 @@ xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:svg="http://www.w3.org/2000/svg" xmlns="http://www.w3.org/2000/svg" + xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd" xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape" - fill-opacity="1" - color-rendering="auto" - color-interpolation="auto" - text-rendering="auto" - stroke="black" - stroke-linecap="square" - width="808" + width="808.42651" stroke-miterlimit="10" - shape-rendering="auto" - stroke-opacity="1" - fill="black" - stroke-dasharray="none" font-weight="normal" - stroke-width="1" - height="562" - font-family="'Dialog'" + height="532.56421" font-style="normal" - stroke-linejoin="miter" font-size="12px" - stroke-dashoffset="0" - image-rendering="auto" version="1.1" id="svg254" sodipodi:docname="configDependencies.svg" - inkscape:version="0.92.3 (2405546, 2018-03-11)"> + inkscape:version="0.92.3 (2405546, 2018-03-11)" + 
style="font-style:normal;font-weight:normal;font-size:12px;font-family:Dialog;color-interpolation:auto;fill:#000000;fill-opacity:1;stroke:#000000;stroke-width:1;stroke-linecap:square;stroke-linejoin:miter;stroke-miterlimit:10;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1;color-rendering:auto;image-rendering:auto;shape-rendering:auto;text-rendering:auto" + inkscape:export-filename="C:\0-repos\Emma\doc\images\configDependencies.png" + inkscape:export-xdpi="200" + inkscape:export-ydpi="200"> + id="metadata258"> + + + + + + + inkscape:zoom="1.5956923" + inkscape:cx="451.2148" + inkscape:cy="266.2821" + inkscape:window-x="1912" + inkscape:window-y="32" + inkscape:window-maximized="1" + inkscape:current-layer="svg254" + fit-margin-top="0" + fit-margin-left="0" + fit-margin-right="0" + fit-margin-bottom="0" /> - + id="genericDefs"> + + + + + + + + + + + + y2="229" + y1="189" + x2="465" + gradientUnits="userSpaceOnUse" + x1="385"> + stop-color="rgb(232,238,247)" + stop-opacity="1" /> + stop-color="rgb(183,201,227)" + stop-opacity="1" /> + y2="229" + y1="189" + x2="771" + gradientUnits="userSpaceOnUse" + x1="691"> + stop-color="rgb(232,238,247)" + stop-opacity="1" /> + stop-color="rgb(183,201,227)" + stop-opacity="1" /> + y2="484.25" + y1="444.25" + x2="512.25983" + gradientUnits="userSpaceOnUse" + x1="432.2598"> + stop-color="rgb(232,238,247)" + stop-opacity="1" /> + stop-color="rgb(183,201,227)" + stop-opacity="1" /> + y2="489.00711" + y1="449.00711" + x2="517" + gradientUnits="userSpaceOnUse" + x1="437"> + stop-color="rgb(232,238,247)" + stop-opacity="1" /> + stop-color="rgb(183,201,227)" + stop-opacity="1" /> + y2="493.76431" + y1="453.76431" + x2="521.77411" + gradientUnits="userSpaceOnUse" + x1="441.77411"> + stop-color="rgb(232,238,247)" + stop-opacity="1" /> + stop-color="rgb(183,201,227)" + stop-opacity="1" /> + y2="493.76431" + y1="453.76431" + x2="627" + gradientUnits="userSpaceOnUse" + x1="547"> + stop-color="rgb(232,238,247)" + stop-opacity="1" /> 
+ stop-color="rgb(183,201,227)" + stop-opacity="1" /> + y2="229" + y1="189" + x2="159" + gradientUnits="userSpaceOnUse" + x1="29"> + stop-color="rgb(232,238,247)" + stop-opacity="1" /> + stop-color="rgb(183,201,227)" + stop-opacity="1" /> + y2="83" + y1="43" + x2="481.5" + gradientUnits="userSpaceOnUse" + x1="368.5"> + stop-color="rgb(232,238,247)" + stop-opacity="1" /> + stop-color="rgb(183,201,227)" + stop-opacity="1" /> + y2="46" + y1="6" + x2="806.70001" + gradientUnits="userSpaceOnUse" + x1="553.70001"> + stop-color="rgb(232,238,247)" + stop-opacity="1" /> + stop-color="rgb(183,201,227)" + stop-opacity="1" /> + y2="365.45001" + y1="295.04999" + x2="359.7402" + gradientUnits="userSpaceOnUse" + x1="174.1402"> + stop-color="rgb(232,238,247)" + stop-opacity="1" /> + stop-color="rgb(183,201,227)" + stop-opacity="1" /> + y2="357.25" + y1="303.25" + x2="627" + gradientUnits="userSpaceOnUse" + x1="519"> + stop-color="rgb(232,238,247)" + stop-opacity="1" /> + stop-color="rgb(183,201,227)" + stop-opacity="1" /> + id="clipPath1" + clipPathUnits="userSpaceOnUse"> + id="path58" + d="M 0,0 H 808 V 562 H 0 Z" + inkscape:connector-curvature="0" /> + id="clipPath2" + clipPathUnits="userSpaceOnUse"> + id="path61" + d="M 14,-9 H 822 V 553 H 14 Z" + inkscape:connector-curvature="0" /> - - - - image/svg+xml - - mytitle - - - - - - - - - - patterns*.json - - - - - - - sections*.json - - - - - - - - - mapfiles - - - - - - - - - - - - - - - - - - - - mapfile - - - - - - - monolith file - - - - - - - addressSpaces*.json - - - - - - - globalConfig.json - - - - - - - = meta config: - globalConfig basically loads all sub configs - - - - - - - if memRegionExcludes - defined - - - - - - - if VAS defined - - - - - ref - - - ref - - - ref - - - ref - - - - - - - Exclude certain - memory regions - - - - - yes: - - VAS names must be - consistent between files - - - - + + + + patterns*.json + + + sections*.json + + + mapfiles + + + + + + + + mapfile + + + monolith file + + + addressSpaces*.json 
+ + + globalConfig.json + + + = meta config: + globalConfig basically loads all sub configs + + + if memRegionExcludes + defined + + + if VAS defined + + + ref + + + ref + + + ref + + + ref + + + + + + + Exclude certain + memory regions + + + + + yes: + + VAS names must be + consistent between files + + diff --git a/doc/images/emmaAddingCompilerSupportActivityDiagram.png b/doc/images/emmaAddingCompilerSupportActivityDiagram.png new file mode 100644 index 0000000..df7d448 Binary files /dev/null and b/doc/images/emmaAddingCompilerSupportActivityDiagram.png differ diff --git a/doc/images/emmaAddingCompilerSupportActivityDiagram.svg b/doc/images/emmaAddingCompilerSupportActivityDiagram.svg new file mode 100644 index 0000000..7c23561 --- /dev/null +++ b/doc/images/emmaAddingCompilerSupportActivityDiagram.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/doc/images/emmaClassDiagram.png b/doc/images/emmaClassDiagram.png new file mode 100644 index 0000000..c34ef38 Binary files /dev/null and b/doc/images/emmaClassDiagram.png differ diff --git a/doc/images/emmaClassDiagram.svg b/doc/images/emmaClassDiagram.svg new file mode 100644 index 0000000..f0cd3f1 --- /dev/null +++ b/doc/images/emmaClassDiagram.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/doc/readme-vis.html b/doc/readme-vis.html index 3ed70e0..82adb95 100644 --- a/doc/readme-vis.html +++ b/doc/readme-vis.html @@ -102,29 +102,23 @@

    Emma Visualiser


    Contents

      +
    1. Emma Visualiser
    2. +
    3. Contents
    4. Requirements
    5. Process
    6. Usage
    7. -
    8. Arguments
        -
      1. Required Arguments:
      2. -
      3. Optional Arguments:
      4. +
      5. Arguments in detail
      6. +
      7. Optional Arguments
      8. Quiet Mode
      9. Overview
      10. Append Mode
      11. -
      -
    9. -
    10. Project Configuration
        -
      1. budgets.json
      2. -
      3. [supplement]
      4. -
      -
    11. -
    12. Input Files
    13. -
    14. Output Folder and Files
    15. -
    16. Examples
        +
      1. Project Configuration
      2. +
      3. budgets.json
      4. +
      5. [supplement]
      6. +
      7. Input/Output Files
      8. +
      9. Examples
      10. Calling Graph Emma Visualiser
      -
    17. -

    Requirements

      @@ -133,74 +127,105 @@

      Requirements

  • Python libraries
      -
    • pypiscout 1.7 or higher: (pip3 install pypiscout)
    • +
    • pypiscout 2.0 or higher: (pip3 install pypiscout)
    • Pandas 0.22 or higher: (pip3 install pandas)
    • Matplotlib 2.2.0 or higher: (pip3 install matplotlib)
    • Markdown 3.0.1 or higher: (pip3 install Markdown)
    • Pygments 2.3.1 or higher: (pip3 install Pygments)
  • -
  • Tested on Windows but should also work on Linux systems
  • +
  • Tested on Windows and Linux systems
  • Process

    After analysing the mapfiles with the emma.py script, one can visualise them using emma_vis.py.

    Usage

    -
    $ python emma_vis.py --help
    -usage: Emma Visualiser [-h] [--version] --project PROJECT [--quiet] [--append]
    -                       [--dir DIR] [--subdir SUBDIR] [--overview]
    -                       [--categorised_image_csv] [--noprompt]
    +
    $ python emma_vis.py --help
    +usage: Emma Visualiser [-h] [--version] --projectDir PROJECTDIR [--quiet]
    +                    [--append] [--inOutDir INOUTDIR] [--subDir SUBDIR]
    +                    [--overview] [--categorised_image_csv] [--noprompt]
     
    -Data aggregation and visualisation tool for Emma Memory and Mapfile Analyser (Emma).
    +Data aggregation and visualisation tool for Emma Memory and Mapfile Analyser
    +(Emma).
     
     optional arguments:
    -  -h, --help            show this help message and exit
    -  --version             Display the version number.
    -  --project PROJECT, -p PROJECT
    -                        Path of directory holding the configs files. The
    -                        project name will be derived from the root folder
    -                        (default: None)
    -  --quiet, -q           Automatically accepts last modified .csv file in
    -                        ./memStats folder (default: False)
    -  --append, -a          Append reports to file in ./results folder (default:
    -                        False)
    -  --dir DIR, -d DIR     User defined path to the statistics root directory.
    -                        (default: None)
    -  --subdir SUBDIR       User defined subdirectory in results folder. (default:
    -                        )
    -  --overview, -ovw      Create a .html overview. (default: False)
    -  --categorised_image_csv, -cat_img
    +-h, --help            show this help message and exit
    +--version             Display the version number.
    +--projectDir PROJECTDIR, -p PROJECTDIR
    +                        Path to directory holding the config files. The
    +                        project name will be derived from this folder name,
    +                        (default: None)
    +--quiet, -q           Automatically accepts last modified .csv file in
    +                        ./memStats folder (default: False)
    +--append              Append reports to file in ./results folder (default:
    +                        False)
    +--inOutDir INOUTDIR, -i INOUTDIR
    +                        Path containing the memStats directory (-> Emma
    +                        output). If not given the `project` directory will be
    +                        used. (default: None)
    +--subDir SUBDIR       Sub-directory of `inOutDir` where the Emma Visualiser
    +                        results will be stored. If not given results will be
    +                        stored in `inOutDir`. (default: None)
    +--overview            Create a .html overview. (default: False)
    +--categorised_image_csv, -cat_img
                             Save a .csv of categories found inside the image
    -                        summary (default: False)
    -  --noprompt            Exit fail on user prompt. (default: False)
    +                        summary (default: False)
    +--noprompt            Exit program with an error if a user prompt occurs;
    +                        useful for CI systems (default: False)
     
    -********* Marcel Schmalzl, Felix Mueller, Gergo Kocsis - 2017-2019 *********
    -
    +Copyright (C) 2019 The Emma authors License GPL-3.0: GNU GPL version 3 +<https://gnu.org/licenses/gpl.html>. This is free software: you are free to +change and redistribute it. There is NO WARRANTY, to the extent permitted by +law. -

    Arguments

    -

    Required Arguments:

    +

    Arguments in detail

    +

    Optional Arguments

      -
    • --project PROJECT, -p PROJECT
    • +
    • --inOutDir INOUTDIR, -i INOUTDIR
    • +
    • --subDir SUBDIR
    -

    Specify the project root directory (the folder holding the .json files). The project name will be derived from the root folder.

    -

    Optional Arguments:

    +

    User defined path for the folder ./memStats holding generated statistics from Emma. If not specified the schema below will be followed:

| Argument -> | --projectDir | --inOutDir | --subDir | I/O path                |
| ----------- | ------------ | ---------- | -------- | ----------------------- |
| Given?      | x            |            |          | projectDir              |
| Given?      | x            | x          |          | inOutDir                |
| Given?      | x            | x          | x        | join(inOutDir + subDir) |
    +

    I/O path denotes the path containing memStats. In the same path the results folder will be created.

    +

    By defining SUBDIR a folder with the given name is created in the results directory. This option makes it easier to distinguish between different development stages when batch analysing mapfiles.
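The path resolution in the table can be sketched in Python; the function name and argument handling below are illustrative only, not Emma's actual implementation:

```python
import os

def resolve_io_dir(project_dir, in_out_dir=None, sub_dir=None):
    """Resolve the I/O path as described in the table (illustrative sketch)."""
    if in_out_dir is None:
        return project_dir                    # only --projectDir given
    if sub_dir is None:
        return in_out_dir                     # --projectDir and --inOutDir given
    return os.path.join(in_out_dir, sub_dir)  # all three given

print(resolve_io_dir("MyProject"))                  # -> MyProject
print(resolve_io_dir("MyProject", "out"))           # -> out
print(resolve_io_dir("MyProject", "out", "v1.2"))   # -> out/v1.2 (platform separator)
```

The resolved path is where memStats is expected and where the results folder is created.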

      -
    • --help, -h
    • -
    -

    Show the help message.

    -
      -
    • --append, -a
    • +
    • --append

    Additional reports in .csv format will be created in the ./results directory.

      -
    • --directory DIRECTORY, --dir DIRECTORY, -d DIRECTORY
    • -
    -

    User defined path for the folder ./memStats holding generated statistics from Emma (default: ./memStats). If not specified the program will ask you to confirm the default path.

    -
      -
    • --res_subdir RES_SUBDIR
    • -
    -

    User defined subdirectory in results folder. By defining RES_SUBDIR a folder with the given name is created in the results directory. This option makes it easier to distinguish between different development stages when batch analysing mapfiles.

    -
    • --categorised_image_csv, -cat_img Save a .csv of categories found inside the image summary (default: False).
    @@ -208,15 +233,15 @@

    Quiet Mode

    • --quiet, -q
    -

    Automatically accepts last modified .csv file in ./memStats folder (default: False). If not specified the program will ask you to confirm the default path.

    +

Automatically accepts the last modified .csv file in the ./memStats folder (default: False). Without this option the program will ask you to confirm the path if it is ambiguous.

    Overview

      -
    • --overview, -ovw
    • +
    • --overview

    This creates a .md and .html output containing an overview of the memory usage.

    Append Mode

      -
    • --append, -a
    • +
    • --append

    Appends analyses to .csv files. This can be used to visualise memory usage over different versions.
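Conceptually, append mode turns the report into a time series: each run appends one row per analysed build instead of overwriting the file. A rough sketch of that idea (the file name and columns are invented for illustration, not Emma's actual report format):

```python
import csv
import os

def append_report_row(csv_path, row,
                      header=("version", "configID", "memType", "used_bytes")):
    """Append one analysis row; write the header only when the file is new."""
    is_new = not os.path.exists(csv_path)
    with open(csv_path, "a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(header)
        writer.writerow(row)
```

Running this once per analysed version yields a CSV that can be plotted directly as memory usage over time.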

    Project Configuration

    @@ -229,14 +254,13 @@

    budgets.json

    The config file needs to have the following format:

{ "Project Threshold in %": <THRESHOLD_IN_PERCENT>,

    -
    "Budgets": [
    +
    "Budgets": [
         [<CONFIG_ID>, <MEMORY_TYPE>, <AVAILABLE_MEMORY>],
         .
         .
         .
         [<CONFIG_ID>, <MEMORY_TYPE>, <AVAILABLE_MEMORY>]
    -]
    -
    +]

    }

    @@ -262,9697 +286,10869 @@

    budgets.json

  • The <AVAILABLE_MEMORY>s are defining the available memory for a <MEMORY_TYPE> of a <CONFIG_ID> in bytes
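A minimal budgets.json following these rules might look like the sketch below; the configIDs (MCU, SOC) mirror the example test project mentioned in this documentation, while the memory-type names and byte counts are made up for illustration. Loading it is plain JSON parsing:

```python
import json

BUDGETS_JSON = """
{
    "Project Threshold in %": 80,
    "Budgets": [
        ["MCU", "INT_RAM", 262144],
        ["MCU", "INT_FLASH", 1048576],
        ["SOC", "EXT_RAM", 33554432]
    ]
}
"""

budgets = json.loads(BUDGETS_JSON)
# Each entry: [<CONFIG_ID>, <MEMORY_TYPE>, <AVAILABLE_MEMORY> in bytes]
for config_id, memory_type, available_bytes in budgets["Budgets"]:
    print(f"{config_id}/{memory_type}: {available_bytes} B "
          f"(threshold at {budgets['Project Threshold in %']} %)")
```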
  • [supplement]

    -

    .md files in this directory will be appended to the report created by the --overview command. +

    .md files in this directory will be appended to the report created by the --overview command. This can be used to append additional remarks to the overview. -This is completely user defined, Emma and it´s components are not relying on these files in any way.

    -

    Input Files

    -

    If not specified otherwise with the --quiet and --directory commands, the visualiser will choose the last modified image and module summary .csv files in the ./[PROJECT]/memStats directory. If there is no module summary present the visualisation of the modules will be skipped.

    -

    Output Folder and Files

    +This is completely user defined, Emma and its components are not relying on these files in any way.

    +

    Input/Output Files

    All output files will be saved to ./[PROJECT]/results.

    +

If not specified otherwise using the --quiet and --inOutDir arguments, the visualiser will choose the last modified section and object summary .csv files in the ./[PROJECT]/memStats directory. If there is no object summary present the visualisation of the objects will be skipped.

    Output files are:

      -
    • .png's of all plots
    • -
    • Overview mode creates .md and .html files of the overview
    • -
    • A .csv file showing which section contains which modules.
    • +
    • .png's of all plots
    • +
    • Overview mode creates .md and .html files of the overview
    • +
    • A .csv file showing which section contains which modules

    Examples

    After the Image Summary has been created with emma.py and the memStats CSV files were saved to the directory ../[PROJECT]/results/memStats, it can be visualised using:

    -
    python emma_vis.py \
    ---project ..\[PROJECT] \
    ---dir ..\[PROJECT]\results \
    ---quiet \
    ---overview
    -
    +
    python emma_vis.py \
+--projectDir ..\[PROJECT] \
+--inOutDir ..\[PROJECT]\results \
    +--quiet \
    +--overview

    Calling Graph Emma Visualiser

    -

    ../genDoc/call_graph_uml/emma_vis_filtered.profile.png

    +

    ../genDoc/call_graph_uml/emma_vis_filtered.profile.png

    diff --git a/doc/readme-vis.md b/doc/readme-vis.md index c84e9bb..2b94219 100644 --- a/doc/readme-vis.md +++ b/doc/readme-vis.md @@ -5,22 +5,22 @@ ------------------------ # Contents -1. [Requirements](#requirements) -1. [Process](#process) -1. [Usage](#usage) -1. [Arguments](#arguments) - 1. [Required Arguments:](#required-arguments) - 1. [Optional Arguments:](#optional-arguments) - 1. [Quiet Mode](#quiet-mode) - 1. [Overview](#overview) - 1. [Append Mode](#append-mode) -1. [Project Configuration](#project-configuration) - 1. [budgets.json](#budgets.json) - 1. [[supplement]](#supplement) -1. [Input Files](#input-files) -1. [Output Folder and Files](#output-folder-and-files) -1. [Examples](#examples) - 1. [Calling Graph Emma Visualiser](#calling-graph-emma-visualiser) +1. [Emma Visualiser](#emma-visualiser) +2. [Contents](#contents) +3. [Requirements](#requirements) +4. [Process](#process) +5. [Usage](#usage) +6. [Arguments in detail](#arguments-in-detail) + 1. [Optional Arguments](#optional-arguments) + 2. [Quiet Mode](#quiet-mode) + 3. [Overview](#overview) + 4. [Append Mode](#append-mode) +7. [Project Configuration](#project-configuration) + 1. [`budgets.json`](#budgetsjson) + 2. [`[supplement]`](#supplement) +8. [Input/Output Files](#inputoutput-files) +9. [Examples](#examples) + 1. 
[Calling Graph Emma Visualiser](#calling-graph-emma-visualiser) ------------------------ @@ -28,12 +28,12 @@ * Python 3.6 or higher * Tested with 3.6.1rc1; 3.7.0 * Python libraries - * pypiscout 1.7 or higher: (`pip3 install pypiscout`) + * pypiscout 2.0 or higher: (`pip3 install pypiscout`) * Pandas 0.22 or higher: (`pip3 install pandas`) * Matplotlib 2.2.0 or higher: (`pip3 install matplotlib`) * Markdown 3.0.1 or higher: (`pip3 install Markdown`) * Pygments 2.3.1 or higher: (`pip3 install Pygments`) -* Tested on Windows but should also work on Linux systems +* Tested on Windows and Linux systems @@ -45,83 +45,87 @@ After analysing the mapfiles with the `emma.py` script, one can visualise them u # Usage $ python emma_vis.py --help - usage: Emma Visualiser [-h] [--version] --project PROJECT [--quiet] [--append] - [--dir DIR] [--subdir SUBDIR] [--overview] - [--categorised_image_csv] [--noprompt] + usage: Emma Visualiser [-h] [--version] --projectDir PROJECTDIR [--quiet] + [--append] [--inOutDir INOUTDIR] [--subDir SUBDIR] + [--overview] [--categorised_image_csv] [--noprompt] - Data aggregation and visualisation tool for Emma Memory and Mapfile Analyser (Emma). + Data aggregation and visualisation tool for Emma Memory and Mapfile Analyser + (Emma). optional arguments: - -h, --help show this help message and exit - --version Display the version number. - --project PROJECT, -p PROJECT - Path of directory holding the configs files. The - project name will be derived from the root folder + -h, --help show this help message and exit + --version Display the version number. + --projectDir PROJECTDIR, -p PROJECTDIR + Path to directory holding the config files. 
The + project name will be derived from this folder name, (default: None) - --quiet, -q Automatically accepts last modified .csv file in + --quiet, -q Automatically accepts last modified .csv file in ./memStats folder (default: False) - --append, -a Append reports to file in ./results folder (default: + --append Append reports to file in ./results folder (default: False) - --dir DIR, -d DIR User defined path to the statistics root directory. - (default: None) - --subdir SUBDIR User defined subdirectory in results folder. (default: - ) - --overview, -ovw Create a .html overview. (default: False) - --categorised_image_csv, -cat_img + --inOutDir INOUTDIR, -i INOUTDIR + Path containing the memStats directory (-> Emma + output). If not given the `project` directory will be + used. (default: None) + --subDir SUBDIR Sub-directory of `inOutDir` where the Emma Visualiser + results will be stored. If not given results will be + stored in `inOutDir`. (default: None) + --overview Create a .html overview. (default: False) + --categorised_image_csv, -cat_img Save a .csv of categories found inside the image summary (default: False) - --noprompt Exit fail on user prompt. (default: False) - - ********* Marcel Schmalzl, Felix Mueller, Gergo Kocsis - 2017-2019 ********* - + --noprompt Exit program with an error if a user prompt occurs; + useful for CI systems (default: False) + Copyright (C) 2019 The Emma authors License GPL-3.0: GNU GPL version 3 + . This is free software: you are free to + change and redistribute it. There is NO WARRANTY, to the extent permitted by + law. -# Arguments -## Required Arguments: -* ``` --project PROJECT, -p PROJECT ``` +# Arguments in detail +## Optional Arguments -Specify the project root directory (the folder holding the .json files). The project name will be derived from the root folder. +* `--inOutDir INOUTDIR, -i INOUTDIR` +* `--subDir SUBDIR` +User defined path for the folder `./memStats` holding generated statistics from Emma. 
If not specified the schema below will be followed: -## Optional Arguments: +| Argument -> | `--projectDir` | `--inOutDir` | `--subDir` | I/O path | +| ----------- | -------------- | ------------ | ---------- | ----------------------- | +| Given? | x | | | projectDir | +| Given? | x | x | | inOutDir | +| Given? | x | x | x | join(inOutDir + subDir) | -* ```--help, -h``` +I/O path denotes the path containing `memStats`. In the same path the `results` folder will be created. -Show the help message. +By defining `SUBDIR` a folder with the given name is created in the results directory. This option makes it easier to distinguish between different development stages when batch analysing mapfiles. -* ```--append, -a ``` +* `--append` Additional reports in .csv format will be created in the ./results directory. -* ```--directory DIRECTORY, --dir DIRECTORY, -d DIRECTORY``` - -User defined path for the folder ./memStats holding generated statistics from Emma (default: ./memStats). If not specified the program will ask you to confirm the default path. - -* ```--res_subdir RES_SUBDIR``` - -User defined subdirectory in results folder. By defining `RES_SUBDIR` a folder with the given name is created in the results directory. This option makes it easier to distinguish between different development stages when batch analysing mapfiles. -* ```--categorised_image_csv, -cat_img``` +* `--categorised_image_csv, -cat_img` Save a .csv of categories found inside the image summary (default: False). ## Quiet Mode -* ```--quiet, -q``` +* `--quiet, -q` -Automatically accepts last modified .csv file in ./memStats folder (default: False). If not specified the program will ask you to confirm the default path. +Automatically accepts last modified `.csv` file in `./memStats` folder (default: False). If not specified the program will ask you to confirm the default path if not given or ambiguous. 
## Overview -* ```--overview, -ovw``` +* `--overview` This creates a .md and .html output containing an overview of the memory usage. ## Append Mode -* ```--append, -a``` +* `--append` Appends analyses to .csv files. This can be used to visualise memory usage over different versions. @@ -131,7 +135,7 @@ Appends analyses to .csv files. This can be used to visualise memory usage over There are several configuration files needed in order to analyze your project. Most of them are described in the Emma documentation. Here, only the ones described that are used by the Emma Visualiser exclusively. -## ```budgets.json``` +## `budgets.json` This config file is used to define the available memory for every memory area of every configID. Besides this it defines a threshold value as well that will be displayed on the diagrams. This threshold can be for example @@ -169,30 +173,25 @@ The following rules apply: * The ``s are defining the available memory for a `` of a `` in bytes ## `[supplement]` - -.md files in this directory will be appended to the report created by the ```--overview``` command. +`.md` files in this directory will be appended to the report created by the `--overview` command. This can be used to append additional remarks to the overview. -This is completely user defined, Emma and it´s components are not relying on these files in any way. - - -# Input Files +This is completely user defined, Emma and its components are not relying on these files in any way. -If not specified otherwise with the ```--quiet``` and ```--directory``` commands, the visualiser will choose the last modified image and module summary .csv files in the ./[PROJECT]/memStats directory. If there is no module summary present the visualisation of the modules will be skipped. +# Input/Output Files +All output files will be saved to `./[PROJECT]/results`. 
-# Output Folder and Files +If not specified otherwise using the `--quiet` and `--inOutDir` commands, the visualiser will choose the last modified section and object summary .csv files in the `./[PROJECT]/memStats` directory. If there is no module summary present the visualisation of the modules will be skipped. -All output files will be saved to `./[PROJECT]/results`. Output files are: -* .png's of all plots -* Overview mode creates .md and .html files of the overview -* A .csv file showing which section contains which modules. +* `.png`'s of all plots +* Overview mode creates `.md` and `.html` files of the overview +* A `.csv` file showing which section contains which modules # Examples - After the Image Summary has been created with emma.py and the memStats CSV files were saved to the directory `../[PROJECT]/results/memStats`, it can be visualised using: :::bash @@ -203,4 +202,4 @@ After the Image Summary has been created with emma.py and the memStats CSV files --overview ## Calling Graph Emma Visualiser -
    +
    diff --git a/doc/readme.html b/doc/readme.html index 64774c6..5da3a3b 100644 --- a/doc/readme.html +++ b/doc/readme.html @@ -97,56 +97,53 @@

    Emma

    Emma Memory and Mapfile Analyser

    -

    Conduct static (i.e. worst case) memory consumption analyses based on arbitrary linker map files (Green Hills map files are the default but others - like GCC - are supported via configuration options). This tool creates a summary/overview about static memory usage in form of a comma separated values file.

    +

Conduct static (i.e. worst case) memory consumption analyses based on linker map files (currently only Green Hills map files are supported).
+This tool creates a summary/overview about static memory usage in the form of a comma separated values (CSV) file.


    Contents

    -
      + -
    1. Configuration File Dependencies
    2. -
    +
  • Formal Definition of the GHS compiler specific configuration
  • -
  • Output Files
      -
    1. Image Summary
    2. -
    3. Module Summary
    4. +
    5. Output Files
    6. +
    7. Section Summary
    8. +
    9. Object Summary
    10. Objects in Sections
    11. CSV header
    12. -
    -
  • -
  • Terminology
  • -
  • Examples
      -
    1. Matching module name and category using categoriesKeywords.json
    2. -
    3. Removing not needed module names from categories.json
    4. -
    -
  • -
  • General Information on Mapfiles and the Build Chain
  • -
  • Technical Details
      +
    1. Terminology
    2. +
    3. Examples
    4. +
    5. Matching object name and category using categoriesKeywords.json
    6. +
    7. Removing not needed object names from categoriesObjects.json
    8. +
    9. General Information About Map Files and Build Chains
    10. +
    11. Technical Details
    12. GHS Monolith file generation
    13. Class diagram Emma
    14. Calling Graph Emma
    15. -
    -
  • - +

    Requirements

      @@ -155,14 +152,14 @@

      Requirements

  • Python libraries
      -
    • pypiscout (pip3 install pypiscout)
    • +
    • pypiscout 2.0 or higher: (pip3 install pypiscout)
  • -
  • Tested on Windows but should also work on Linux systems
  • +
  • Tested on Windows and Linux systems
  • Process

    Using the Mapfile Analyser is a two step process. The first step is to extract the required information from the mapfiles and save it to .csv files. -This is done with the emma.py script. The second step is to visualise the data. The documentation can be found in the Emma visualiser readme document.

+This is done with the emma.py script. The second step is to visualise the data. This document explains the first part only; the visualisation is documented in the Emma visualiser readme document.

    Limitations

Emma is only suitable for analyzing projects where the devices have a single linear physical address space:

      @@ -172,1052 +169,130 @@

      Limitations

      Devices based on architectures like this can be analyzed with Emma.

    Usage

    -

    Image and module summaries of the specified mapfiles will be created.

    -
    $ python emma.py --help
    -usage: Emma Memory and Mapfile Analyser (Emma) [-h] [--version] --project PROJECT
    -                                           --mapfiles MAPFILES [--dir DIR]
    -                                           [--subdir SUBDIR] [--analyse_debug]
    -                                           [--create_categories]
    -                                           [--remove_unmatched] [--noprompt]
    -                                           [-Werror]
    +

    Section and object summaries of the specified mapfiles will be created.

    +
    $ python emma.py --help
    +usage: Emma Memory and Mapfile Analyser (Emma) [-h] [--version] --project PROJECT
    +                                           --mapfiles MAPFILES [--dir DIR]
    +                                           [--subdir SUBDIR] [--analyse_debug]
    +                                           [--create_categories]
    +                                           [--remove_unmatched] [--noprompt]
    +                                           [--Werror]
     
    -Analyser for mapfiles from Greens Hills Linker (other files are supported via
    -configuration options).It creates a summary/overview about static memory usage
+Analyser for mapfiles from the Green Hills Linker (other files are supported via
+configuration options). It creates a summary/overview about static memory usage
     in form of a comma separated values file.
     
     optional arguments:
    -  -h, --help           show this help message and exit
    +  -h, --help           show this help message and exit
       --version            Display the version number.
       --project PROJECT    Path of directory holding the configuration.The project
                            name will be derived from the the name of this folder.
    -                       (default: None)
    -  --mapfiles MAPFILES  The folder containing the mapfiles that needs to be
    -                       analyzed. (default: None)
    -  --dir DIR            Output folder holding the statistics. (default: None)
    +                       (default: None)
    +  --mapfiles MAPFILES  The folder containing the map files that need to be
    +                       analysed. (default: None)
    +  --dir DIR            Output folder holding the statistics. (default: None)
       --subdir SUBDIR      User defined subdirectory name in the --dir folder.
    -                       (default: None)
    -  --analyse_debug      Include DWARF debug sections in analysis (default:
    -                       False)
    -  --create_categories  Create categories.json from keywords. (default: False)
    +                       (default: None)
    +  --analyse_debug      Include DWARF debug sections in analysis (default:
    +                       False)
    +  --create_categories  Create categories.json from keywords. (default: False)
       --remove_unmatched   Remove unmatched modules from categories.json.
    -                       (default: False)
    -  --noprompt           Exit fail on user prompt. (default: False)
    -  -Werror              Treat all warnings as errors. (default: False)
    +                       (default: False)
    +  --noprompt           Exit fail on user prompt. (default: False)
    +  --Werror             Treat all warnings as errors. (default: False)
     
    -********* Marcel Schmalzl, Felix Mueller, Gergo Kocsis - 2017-2019 *********
    -
    +********* Marcel Schmalzl, Felix Mueller, Gergo Kocsis - 2017-2019 *********

    Arguments

    Required Arguments

    -
    --project PROJECT, -p PROJECT
    +
    --project PROJECT, -p PROJECT
     
    ---mapfiles MAPFILES, --map MAPFILES
    -
    +--mapfiles MAPFILES, --map MAPFILES

    Optional Arguments

    -
    --dir
    -    User defined path for the top folder holding the `memStats`/output files.
    -    Per default it uses the same directory as the config files.
    -
    ---stats_dir
    -    User defined path inside the folder given in the `--dir` argument.
    -    This is usefull when batch analysing mapfiles from various development stages.
    -    Every analysis output gets it's own directory.
    -
    ---create_categories, -ctg
    -    Create categories.json from keywords.json for easier module categorisation.
    -
    ---remove_unmatched, -ru
    -    Remove unmatched entries from categories.json. This is usefull when a categories.json from another project is used.
    -
    ---help, -h
    -
    ---dir STATS_DIR, -d STATS_DIR
    -    User defined path for the folder `memStats` holding generated statistics from Emma (default: ./memStats).
    -
    ---analyse_debug, --dbg
    -    Normally we remove DWARF debug sections from the analysis to show the relevant information for a possible release software.
    -    This can be prevented if this argument is set. DWARF section names are defined in stringConstants.py
    -    `.unused_ram` is always excluded (regardless of this flag)
    -
    ---noprompt
    -    Exit and fail on user prompt. Normally this happens when some files or configurations are ambiguous.
    -    This is useful when running Emma on CI systems.
    -
    --Werror
    -
    - - +
      +
    • --dir
        +
• User defined path for the top folder holding the memStats/output files. By default it uses the same directory as the config files.
      • +
      +
    • +
    • --stats_dir
    • +
• User defined path inside the folder given in the --dir argument. This is useful when batch analysing mapfiles from various development stages. Every analysis output gets its own directory.
    • +
    • --create_categories
    • +
    • Create categories*.json from categories*Keywords.json for easier categorisation.
    • +
• --remove_unmatched
    • +
    • Remove unmatched entries from categories*.json. This is useful when a categories*.json from another project is used.
    • +
    • --analyse_debug, --dbg
    • +
• Normally DWARF debug sections are removed from the analysis so that the results reflect a possible release build. This can be prevented if this argument is set. DWARF section names are defined in stringConstants.py. .unused_ram is always excluded (regardless of this flag)
    • +
    • --noprompt
    • +
    • Exit and fail on user prompt. Normally this happens when some files or configurations are ambiguous. This is useful when running Emma on CI systems.
    • +
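The effect of --analyse_debug can be pictured as a simple section filter; the DWARF section names below are common examples, not Emma's actual list (which lives in stringConstants.py):

```python
# Hypothetical DWARF section names; Emma keeps its real list in stringConstants.py.
DWARF_SECTIONS = {".debug_info", ".debug_line", ".debug_str"}

def keep_section(name, analyse_debug=False):
    if name == ".unused_ram":           # always excluded, regardless of the flag
        return False
    if not analyse_debug and name in DWARF_SECTIONS:
        return False
    return True

sections = [".text", ".debug_info", ".unused_ram", ".data"]
print([s for s in sections if keep_section(s)])                      # ['.text', '.data']
print([s for s in sections if keep_section(s, analyse_debug=True)])  # ['.text', '.debug_info', '.data']
```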

    Project Configuration

    -

    The memory analysis will be executed based on the project configuration. In order to be able to use Emma with your project, -you need to create a configuration matching your project's hardware and software. -The configuration has to be established correctly, because errors made in it, can distort the memory analysis results or it will be refused by Emma.

    +

The memory analysis will be executed based on the project configuration. In order to be able to use Emma with your project, you need to create a configuration matching your project's hardware and software. Configure Emma with high diligence since errors may lead to incorrect analysis results. During the analysis Emma performs some sanity checks which help you detect misconfigurations.

    This chapter explains the role and functionality of each part of the configuration and illustrates all the settings that can be used. Based on this description the user will have to create his/her own configuration.

Creating a configuration is done by writing several JSON files (if you are not familiar with JSON, please visit https://www.json.org). This chapter will go through the topic by formally defining the format, rules and the functionality of the config files.
-There are practical example projects available in the doc folder available. These projects will lead you step by step trough the process of
+There are practical example projects available in the doc folder. These projects will lead you step by step through the process of
creating a configuration and they also contain map files that can be analyzed.

    -

    The following example projects are available:

    +

    Currently the following example projects are available:

    • doc/test_project - A Test Project that illustrates a system with a hardware that consists of two devices: an MCU and an SOC. -The mapfiles of this project are using the default mapfile format of Emma.
    • +Both of the devices have a GHS compiler specific configuration and mapfiles.
    -

    Formal Definition

    -

    An Emma project configuration contains several JSON files:

    -
    +--[<PROJECT>]
    +

    An Emma project configuration consists of two parts: the generic configuration and the compiler specific configuration.

    +

    Formal definition of the generic configuration

    +

    The generic part of the configuration contains the following files:

    +
    +-- [<PROJECT>]
     |   +-- [supplement]
     |   +-- globalConfig.json
     |   +-- addressSpaces*.json
    -|   +-- patterns*.json
    -|   +-- virtualSections*.json
     |   +-- budgets.json
     |   +-- categoriesObjects.json
     |   +-- categoriesObjectsKeywords.json
     |   +-- categoriesSections.json
     |   +-- categoriesSectionsKeywords.json
    -
    +| +-- <COMPILER_SPECIFIC_CONFIGURATION_FILES>

The file names marked with an asterisk can be chosen freely by the user because the actual file names will have to be listed in the globalConfig.json.

    -

    [<PROJECT>]

    +

    PROJECT

The configuration has to be contained in a folder; the name of this folder will be the name of the configuration. Of the files marked with a * symbol, the configuration can contain more than one, but at most as many as the number of configIDs defined in globalConfig.json.

    -

    [supplement]

    -

    You can add .md files into this folder with mark down syntax to add information regarding your project that will be contained by the .html overview. +

    supplement

    +

You can add .md files with Markdown syntax into this folder to add information regarding your project; this information will be included in the .html overview. For more information please refer to the Emma Visualiser's documentation.

    globalConfig.json

    -

    The globalConfig.json is the starting point of the configurations. -It defines the memory configurations of the system and defines the names of the config files that belong to these. -A memory configuration is describes a unit that has memory associated to it, for example an MCU, MPU or an SOC. -During the analysis, it will be examined to which extent the memory resources that are available for these units are used.

    -

    In Emma, a memory configuration is called a configID. For each configID the the following config files need to be defined:

    -
    ./images/globalConfigScheme.png
    - +

The globalConfig.json is the starting point of a configuration; this file defines the configIDs. +The configIDs are the hardware units of the system that have memory associated with them, for example an MCU, MPU or an SOC. +During the analysis, it will be examined to which extent these memory resources are used.

    +

For each configID, globalConfig.json assigns a compiler. This means that the mapfiles belonging to the configID were created by the selected compiler. +This is important, since the format of these files is specific to the compiler. For each configID an addressSpaces*.json config file will be assigned. +Furthermore the globalConfig.json assigns compiler specific config files to each configID that need to be consistent with the selected compiler. +For example if the GHS compiler was selected for the configID, then the compiler specific configuration part of this configID has to fulfill the requirements +described in the Formal Definition of the GHS compiler specific configuration chapter.

    The globalConfig.json has to have the following format:

    -
    {
    -    <CONFIG_ID>: {
    -        "addressSpacesPath": <CONFIG_FILE>,
    -        "patternsPath": <CONFIG_FILE>,
    -        "virtualSectionsPath": <CONFIG_FILE>,
    -        "ignoreConfigID": <BOOL>
    -    },
    -    .
    -    .
    -    .
    -    <CONFIG_ID>: {
    -        "addressSpacesPath": <CONFIG_FILE>,
    -        "patternsPath": <CONFIG_FILE>,
    -        "virtualSectionsPath": <CONFIG_FILE>,
    -        "ignoreConfigID": <BOOL>
    -    }
    -}
    -
    +
    {
    +    <CONFIG_ID>: {
    +        "compiler": <COMPILER_NAME>,
    +        "addressSpacesPath": <CONFIG_FILE>,
    +        "mapfiles": <MAPFILES_REL_PATH>,
    +        "ignoreConfigID": <BOOL>,
    +        <COMPILER_SPECIFIC_KEY_VALUE_PAIRS>
    +    },
    +    .
    +    .
    +    .
    +    <CONFIG_ID>: {
    +        "compiler": <COMPILER_NAME>,
    +        "addressSpacesPath": <CONFIG_FILE>,
    +        "mapfiles": <MAPFILES_REL_PATH>,
    +        "ignoreConfigID": <BOOL>,
    +        <COMPILER_SPECIFIC_KEY_VALUE_PAIRS>
    +    }
    +}
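To make the placeholder format concrete, here is a purely illustrative sketch of a globalConfig.json with two configIDs. All names (MCU, SOC, the file and folder names) are invented for this example, and the compiler specific key-value pairs required by the selected compiler are left out for brevity (a real configuration needs them; for GHS see the Formal Definition of the GHS compiler specific configuration chapter):

```json
{
    "MCU": {
        "compiler": "GHS",
        "addressSpacesPath": "addressSpaces_MCU.json",
        "mapfiles": "MCU",
        "ignoreConfigID": false
    },
    "SOC": {
        "compiler": "GHS",
        "addressSpacesPath": "addressSpaces_SOC.json",
        "ignoreConfigID": true
    }
}
```

In this sketch the optional mapfiles key is only set for MCU, so the mapfiles of SOC would be searched for directly in the path given via --mapfiles, and SOC is excluded from the analysis via ignoreConfigID.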

    The following rules apply:

    @@ -1225,17 +300,28 @@

    globalConfig.json

  • The file contains a single unnamed JSON object
  • The types used in the description:
    • <CONFIG_ID> is a string
    • +
    • <COMPILER_NAME> is a string
    • <CONFIG_FILE> is a string
    • +
    • <MAPFILES_REL_PATH> is a string, with the special characters escaped in it
    • <BOOL> is a boolean value containing either true or false
    • +
    • <COMPILER_SPECIFIC_KEY_VALUE_PAIRS> are the key-value pairs that are required by the selected compiler
  • There has to be at least one configID defined
  • -
  • You must assign three config files for each configID by defining the following key, value pairs:
      +
• You must select a compiler for every configID by defining the compiler key. The possible values are:
        +
      • "GHS" - Green Hills Compiler
      • +
      +
    • +
• You must assign the following config files for each configID by defining the following key-value pairs:
      • by defining addressSpacesPath, the config file that defines the address spaces is assigned
      • -
      • by defining patternsPath, the config file that defines the patterns is assigned
      • -
      • by defining virtualSectionsPath, the config file that listing the sections of the virtual address spaces is assigned
      • The config files have to be in the same folder as the globalConfig.json
      • -
      • The config files don't need to be different for each configID (for example you can use the same sections config file for all the configIDs)
      • +
      • The config files don't need to be different for each configID (for example you can use the same address spaces config file for all the configIDs)
      • +
      +
    • +
    • The mapfiles:
        +
• specifies a folder relative to the one given via the --mapfiles command line argument
• +
• is optional; if it is defined for a configID, then the mapfiles belonging to this configID will be searched for within this folder
      • +
• otherwise the mapfiles will be searched for in the root path given via --mapfiles
    • The ignoreConfigID:
        @@ -1244,31 +330,30 @@

        globalConfig.json

    -

    address spaces*.json

    +

    addressSpaces*.json

The address spaces config files define the existing memory areas for the configIDs they were assigned to in the globalConfig.json.

    These config files have to have the following format:

    -
    {
    -    "offset": <ADDRESS>,
    -    "memory": {
    -        <MEMORY_AREA>: {
    -            "start": <ADDRESS>,
    -            "end": <ADDRESS>,
    -            "type": <MEMORY_TYPE>
    -        },
    -        .
    -        .
    -        .
    -        <MEMORY_AREA>: {
    -            "start": <ADDRESS>,
    -            "end": <ADDRESS>,
    -            "type": <MEMORY_TYPE>
    -        }
    -    },
    -    "ignoreMemory": [
    -        <MEMORY_AREA>, ... <MEMORY_AREA>
    -    ]
    -}
    -
    +
    {
    +    "offset": <ADDRESS>,
    +    "memory": {
    +        <MEMORY_AREA>: {
    +            "start": <ADDRESS>,
    +            "end": <ADDRESS>,
    +            "type": <MEMORY_TYPE>
    +        },
    +        .
    +        .
    +        .
    +        <MEMORY_AREA>: {
    +            "start": <ADDRESS>,
    +            "end": <ADDRESS>,
    +            "type": <MEMORY_TYPE>
    +        }
    +    },
    +    "ignoreMemory": [
    +        <MEMORY_AREA>, ... <MEMORY_AREA>
    +    ]
    +}
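As a purely illustrative sketch of the format above (the memory area names, addresses and type strings below are invented, not taken from a real project), an addressSpaces*.json could look like this:

```json
{
    "offset": "0x0",
    "memory": {
        "Code_Flash": {
            "start": "0x00000000",
            "end": "0x003FFFFF",
            "type": "INT_FLASH"
        },
        "SRAM": {
            "start": "0x20000000",
            "end": "0x2007FFFF",
            "type": "INT_RAM"
        },
        "Debug_RAM": {
            "start": "0x30000000",
            "end": "0x3000FFFF",
            "type": "INT_RAM"
        }
    },
    "ignoreMemory": [
        "Debug_RAM"
    ]
}
```

Here the hypothetical Debug_RAM area is listed in ignoreMemory, so it would be excluded from the analysis.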

    The following rules apply:

    @@ -1302,37 +387,3040 @@

    address spaces*.json

    budgets.json

    The budgets config file belongs to the Emma Visualiser. For a description, please see: doc\readme-vis.md.

    +

    categoriesObjects.json and categoriesSections.json

    +

The categories config files are used to categorize objects and sections into user defined categories by using their full names. +These files are optional; if no categorization is needed, these config files do not need to be created. +This function can be used for example to group the software components of one company together, which makes the results easier to understand.

    +

The categoriesObjects.json is used for the object and the categoriesSections.json for the section categorization. +Emma first tries to categorize the objects and sections by these files. If they could not be categorized, then the software will try +to categorize them based on the categoriesObjectsKeywords.json and categoriesSectionsKeywords.json files.

    +

    These config files have to have the following format:

    +
    {
    +    <CATEGORY>: [
    +        <NAME>,
    +        .
    +        .
    +        .
    +        <NAME>
    +    ],
    +    .
    +    .
    +    .
    +    <CATEGORY>: [
    +        <NAME>,
    +        .
    +        .
    +        .
    +        <NAME>
    +    ]
    +}
    + + +
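For illustration only, a hypothetical categoriesObjects.json grouping a few invented object names into two categories might look like this:

```json
{
    "AudioDriver": [
        "audio_mixer.o",
        "audio_buffer.o"
    ],
    "ThirdParty": [
        "libvendor_crc.o"
    ]
}
```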

    The following rules apply:

    +
      +
    • The file contains a single unnamed JSON object
    • +
    • The types used in the description:
        +
      • <CATEGORY> is a string containing a unique category name
      • +
      • <NAME> is a string
      • +
      +
    • +
    • The categorisation can be done either by hand or with the --create_categories command line argument (for usage see there)
    • +
• Each <NAME> has to be the full name of a section or an object
    • +
    +

    categoriesObjectsKeywords.json and categoriesSectionsKeywords.json

    +

The categories keywords config files are used to categorize objects and sections into user defined categories by using only substrings of their names. +These files are optional; if no categorization is needed, these config files do not need to be created. +This function can be used for example to group the software components of one company together, which makes the results easier to understand.

    +

The categoriesObjectsKeywords.json is used for the object and the categoriesSectionsKeywords.json for the section categorization. +Objects and sections are only categorized by these files if the categorization by the categoriesObjects.json and categoriesSections.json files failed. +If they could not be categorized, then the software will assign them to a category called <Emma_UnknownCategory> so they will be more visible in the results.

    +

    These config files have to have the following format:

    +
    {
    +    <CATEGORY>: [
    +        <KEYWORD>,
    +        .
    +        .
    +        .
    +        <KEYWORD>
    +    ],
    +    <CATEGORY>: [
    +        <KEYWORD>,
    +        .
    +        .
    +        .
    +        <KEYWORD>
    +    ]
    +}
    + + +
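A hypothetical categoriesObjectsKeywords.json might look like the sketch below. Since the keywords are regex patterns, the invented entries would match any object name starting with audio_ or containing libvendor:

```json
{
    "AudioDriver": [
        "^audio_"
    ],
    "ThirdParty": [
        "libvendor"
    ]
}
```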

    The following rules apply:

    +
      +
    • The file contains a single unnamed JSON object
    • +
    • The types used in the description:
        +
      • <CATEGORY> is a string containing a unique category name
      • +
      • <KEYWORD> is a string
      • +
      +
    • +
    • The categorisation has to be done by hand
    • +
    • The <KEYWORD> contains a regex pattern for the names of the sections or objects
    • +
    +

    Formal Definition of the GHS compiler specific configuration

    +

    The GHS compiler specific part of the configuration contains the following files:

    +
    +-- [<PROJECT>]
    +|   +-- <GENERIC_CONFIGURATION_FILES>
    +|   +-- patterns*.json
    +|   +-- virtualSections*.json
    + + +

The following dependencies exist within this type of configuration:

    +
    ./images/configDependencies.png
    + +

    In globalConfig.json, you need to reference (ref relations on the picture):

    +
      +
    1. addressSpaces*.json
    2. +
    3. patterns*.json
    4. +
5. virtualSections*.json
    6. +
    +

    memRegionExcludes: You can exclude certain memory regions with this keyword in patterns*.json. In order to do this the memory regions/tags must match with those defined in addressSpaces*.json.

    +

If you have virtual address spaces (VASes) defined, you need a "monolith file" pattern defined in patterns*.json in order to be able to translate virtual addresses back to physical addresses. In the same file you give each VAS a name. This name is later used to identify which section belongs to which VAS (defined in virtualSections*.json). The VAS names must match between those two files. This is needed in order to avoid name clashes of section names between different VASes.

    +

    Extensions to the globalConfig.json

    +

The globalConfig.json has to have the following format for configIDs that have "GHS" selected as their compiler:

    +
    {
    +    <CONFIG_ID>: {
    +        <GENERIC_KEY_VALUE_PAIRS>,
    +        "patternsPath": <CONFIG_FILE>,
    +        "virtualSectionsPath": <CONFIG_FILE>
    +    },
    +    .
    +    .
    +    .
    +    <CONFIG_ID>: {
    +        <GENERIC_KEY_VALUE_PAIRS>,
    +        "patternsPath": <CONFIG_FILE>,
    +        "virtualSectionsPath": <CONFIG_FILE>
    +    }
    +}
    + + +
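Combined with the generic keys, a configID using the GHS compiler could thus look like the following sketch (all file and folder names are invented for illustration):

```json
{
    "MCU": {
        "compiler": "GHS",
        "addressSpacesPath": "addressSpaces_MCU.json",
        "mapfiles": "MCU",
        "ignoreConfigID": false,
        "patternsPath": "patterns_MCU.json",
        "virtualSectionsPath": "virtualSections_MCU.json"
    }
}
```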

    The following rules apply:

    +
      +
    • The types used in the description: +
    • +
    • You must assign a patterns config file for each configID by defining the patternsPath key
    • +
• If the configID contains virtual address spaces, you must assign a config file describing them by defining the virtualSectionsPath key
    • +
    • The assigned config files have to be in the same folder as the globalConfig.json
    • +
    • The config files don't need to be different for each configID (for example you can use the same virtual sections config file for all the configIDs)
    • +

    patterns*.json

The patterns config files define regex patterns for finding the mapfiles and monolith files and for processing their content. -They belong to the configID they were assigned to in the globalConfigs.json.

+They belong to the configID they were assigned to in the globalConfig.json.

    These config files have to have the following format:

    -
    {
    -    "mapfiles": {
    -        <SW_NAME>: {
    -            "regex": [<REGEX_PATTERN>, ... <REGEX_PATTERN>],
    -            "VAS": <VAS_NAME>,
    -            "UniquePatternSections": <REGEX_PATTERN>,
    -            "UniquePatternObjects": <REGEX_PATTERN>,
    -            "memRegionExcludes": [<MEMORY_AREA>, ... <MEMORY_AREA>]
    -        },
    -        .
    -        .
    -        .
    -        <SW_NAME>: {
    -            "regex": [<REGEX_PATTERN>, ... <REGEX_PATTERN>],
    -            "VAS": <VAS_NAME>,
    -            "UniquePatternSections": <REGEX_PATTERN>,
    -            "UniquePatternObjects": <REGEX_PATTERN>,
    -            "memRegionExcludes": [<MEMORY_AREA>, ... <MEMORY_AREA>]
    -        },
    -    },
    -    "monoliths": {
    -        "<MONILITH_NAME>": {
    -            "regex": [<REGEX_PATTERN>, ... <REGEX_PATTERN>]
    -        }
    -    }
    -}
    -
    +
    {
    +    "mapfiles": {
    +        <SW_NAME>: {
    +            "regex": [<REGEX_PATTERN>, ... <REGEX_PATTERN>],
    +            "VAS": <VAS_NAME>,
    +            "UniquePatternSections": <REGEX_PATTERN>,
    +            "UniquePatternObjects": <REGEX_PATTERN>,
    +            "memRegionExcludes": [<MEMORY_AREA>, ... <MEMORY_AREA>]
    +        },
    +        .
    +        .
    +        .
    +        <SW_NAME>: {
    +            "regex": [<REGEX_PATTERN>, ... <REGEX_PATTERN>],
    +            "VAS": <VAS_NAME>,
    +            "UniquePatternSections": <REGEX_PATTERN>,
    +            "UniquePatternObjects": <REGEX_PATTERN>,
    +            "memRegionExcludes": [<MEMORY_AREA>, ... <MEMORY_AREA>]
    +        },
    +    },
    +    "monoliths": {
+        "<MONOLITH_NAME>": {
    +            "regex": [<REGEX_PATTERN>, ... <REGEX_PATTERN>]
    +        }
    +    }
    +}

    The following rules apply:

    @@ -1389,26 +3477,25 @@

    virtualSections*.json

The virtual sections config files are used to assign the sections of the virtual address spaces to a virtual address space defined in the patterns*.json file. This is needed because the mapfiles can contain physical and virtual sections as well and Emma needs to identify the virtual ones and assign them to a specific virtual address space. -If your configuration does not use virtual address spaces, the virtualSections*.json files are not needed.

    +If your configuration does not use virtual address spaces, the virtualSections*.json file is not needed.

This config file has to have the following format:

    -
    {
    -    <VAS_NAME>: [
    -        <SECTION_NAME>,
    -        .
    -        .
    -        .
    -        <SECTION_NAME>
    -    ],
    -    ...
    -    <VAS_NAME>: [
    -        <SECTION_NAME>,
    -        .
    -        .
    -        .
    -        <SECTION_NAME>
    -    ]
    -}
    -
    +
    {
    +    <VAS_NAME>: [
    +        <SECTION_NAME>,
    +        .
    +        .
    +        .
    +        <SECTION_NAME>
    +    ],
    +    ...
    +    <VAS_NAME>: [
    +        <SECTION_NAME>,
    +        .
    +        .
    +        .
    +        <SECTION_NAME>
    +    ]
    +}
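A hypothetical virtualSections*.json is sketched below. The VAS name used as key (app_vas here) would have to match a "VAS" value given in the patterns*.json, while the section names are invented for illustration:

```json
{
    "app_vas": [
        ".vas_text",
        ".vas_data",
        ".vas_bss"
    ]
}
```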

    The following rules apply:

    @@ -1423,2923 +3510,17 @@

    virtualSections*.json

  • Every <VAS_NAME> key has an array as value that lists the sections that belong to the virtual address space
  • There are no rules for the assignment, this needs to be done intuitively based on the project being analyzed
  • -

    categoriesObjects.json and categoriesSections.json

    -

    The categories config files are used to categorize objects and sections to user defined categories by using their full names. -These files are optional, if no categorization needed, these config files do not need to be created. -This function can be used for example to group the software components of one company together which will make the results easier to understand.

    -

    The categoriesObjects.json is used for the objects and the categoriesSections.json is used for the section categorization. -The objects and sections will be first tried to be categorized by these files. If they could not be categorized, then the software will try -to categorize them based on the categoriesObjectsKeywords.json and categoriesSectionsKeywords.json files.

    -

    These config files have to have the following format:

    -
    {
    -    <CATEGORY>: [
    -        <NAME>,
    -        .
    -        .
    -        .
    -        <NAME>
    -    ],
    -    .
    -    .
    -    .
    -    <CATEGORY>: [
    -        <NAME>,
    -        .
    -        .
    -        .
    -        <NAME>
    -    ]
    -}
    -
    - - -

    The following rules apply:

    -
      -
    • The file contains a single unnamed JSON object
    • -
    • The types used in the description:
        -
      • <CATEGORY> is a string containing a unique category name
      • -
      • <NAME> is a string
      • -
      -
    • -
    • The categorisation can be done either by hand or with the --create_categories command line argument (for usage see there)
    • -
    • The <NAME> has to contain full names of the sections or objects
    • -
    -

    categoriesObjectsKeywords.json and categoriesSectionsKeywords.json

    -

    The categories keywords config files are used to categorize objects and sections to user defined categories by using only substrings of their names. -These files are optional, if no categorization needed, these config files do not need to be created. -This function can be used for example to group the software components of one company together which will make the results easier to understand.

    -

    The categoriesObjectsKeywords.json is used for the objects and the categoriesSectionsKeywords.json is used for the section categorization. -The objects and sections will only tried to be categorized by these files if the categorization by the categoriesObjects.json and categoriesSections.json files failed. -If they could not be categorized, then the software will assign them to a category called <unspecified> so they will be more visible in the results.

    -

    These config files have to have the following format:

    -
    {
    -    <CATEGORY>: [
    -        <KEYWORD>,
    -        .
    -        .
    -        .
    -        <KEYWORD>
    -    ],
    -    <CATEGORY>: [
    -        <KEYWORD>,
    -        .
    -        .
    -        .
    -        <KEYWORD>
    -    ]
    -}
    -
    - - -

    The following rules apply:

    -
      -
    • The file contains a single unnamed JSON object
    • -
    • The types used in the description:
        -
      • <CATEGORY> is a string containing a unique category name
      • -
      • <KEYWORD> is a string
      • -
      -
    • -
    • The categorisation has to be done by hand
    • -
    • The <KEYWORD> contains a regex pattern for the names of the sections or objects
    • -
    -

    Configuration File Dependencies

    -

    In order to work correctly Emma expects certain relations between configuration files. This section shall provide an overview:

    -
    ./images/configDependencies.png
    - -

    Since globalConfig.json is "just" a meta-config you reference filenames of

    -
      -
    1. addressSpaces*.json
    2. -
    3. patterns*.json and
    4. -
    5. sections*.json.
    6. -
    -

    These filenames must (obviously) match with the real filenames - there are referenced (ref).

    -

    memRegionExcludes: You can exclude certain memory regions with this keyword in patterns*.json. In order to do this the memory regions/tags must match with those defined in addressSpaces*.json.

    -

    If you have virtual address spaces (VASes) defined. We need a "monolith file" pattern defined in patterns*.json in order to be able to translate virtual addresses back to physical addresses. In the same file you give each VAS a name/tag. This tag is later used to identify which section belongs to which VAS (defined in sections*.json). Again, the VAS names must match between those two files. We do this since you may have name clashes of sections names between different VASes.

    Output Files

The output files will be saved to the memStats folder of the respective project. The filenames will have this form:

    -
    <PROJECT_NAME>_Image_Summary_TIMESTAMP.csv
    -<PROJECT_NAME>_Module_Summary_TIMESTAMP.csv
    -<PROJECT_NAME>_Objects_in_Sections_TIMESTAMP.csv
    -
    +
    <PROJECT_NAME>_Section_Summary_TIMESTAMP.csv
    +<PROJECT_NAME>_Object_Summary_TIMESTAMP.csv
    +<PROJECT_NAME>_Objects_in_Sections_TIMESTAMP.csv
    -

    Image Summary

    -

    The file <PROJECT_NAME>_Image_Summary_<TIMESTAMP>.csv contains the sections from the mapfiles.

    -

    Module Summary

    -

    The file <PROJECT_NAME>_Module_Summary_<TIMESTAMP>.csv contains the objects from the mapfiles.

    +

    Section Summary

    +

    The file <PROJECT_NAME>_Section_Summary_<TIMESTAMP>.csv contains the sections from the mapfiles.

    +

    Object Summary

    +

    The file <PROJECT_NAME>_Object_Summary_<TIMESTAMP>.csv contains the objects from the mapfiles.

    Objects in Sections

"Objects in sections" provides ways to obtain a finer granularity of the categorisation result. Categorised sections containing (smaller) objects of a different category are split up, which results in a more accurate categorisation. The output is a .csv file which makes later processing of this data easy. In this file additional information is added, such as:

      @@ -7768,7 +6949,7 @@

      Objects in Sections

./images/objectsInSections.png

      The file <PROJECT_NAME>_Objects_in_Sections_<TIMESTAMP>.csv is the result of the "merge" of the objects and the sections file.

Objects/sections that are "touching" or overlapping each other in some way (see the above figure) are resolved in this step. The "weaker" section/object is split (and therefore has a reduced size after each step).

      @@ -7796,9 +6977,9 @@

      CSV header

      The CSV file has the following columns:

      • The address start, end and sizes: addrStartHex; addrEndHex; sizeHex; addrStartDec; addrEndDec; sizeDec; sizeHumanReadable
      • -
      • The section and module name: section; moduleName Note: If the image summary contains only sections, the column moduleName will be left empty.
      • +
• The section and object name: sectionName; moduleName. Note: If the section summary contains only sections, the column moduleName will be left empty.
      • configID, memType and tag are from the config files.
      • -
      • vasName is the virtual address space name defined in sections.json. The DMA field indicates whether a section/module is in a VAS.
      • +
• vasName is the virtual address space name defined in virtualSections*.json. The DMA field indicates whether a section/object is in a VAS.
      • category: The category evaluated from categories*.json
• mapfile: The mapfile the entry originates from.
      • overlapFlag: Indicates whether one section overlaps with another.
      • @@ -7813,23 +6994,22 @@

        Terminology

      Examples

Create a Mapfile Summary for <PROJECT>:

      -
      emma.py --project ..\<PROJECT> \
      ---mapfiles ..\MyMapfiles \
      ---dir ..\MyMapfiles\results
      -
      +
      emma.py --project ..\<PROJECT> \
      +--mapfiles ..\MyMapfiles \
      +--dir ..\MyMapfiles\results
      -

      Matching module name and category using categoriesKeywords.json

      -

      categoriesKeywords.json can be used to match module names with catgories by user defined keywords.

      +

      Matching object name and category using categoriesKeywords.json

      +

categoriesObjectsKeywords.json can be used to match object names with categories by user defined keywords.

      • Arguments required: --create_categories
      • This step will append the newly categorised modules to categories.json. The program will ask you to confirm to overwrite the file.
      -

      Removing not needed module names from categories.json

      -

      Not needed module names can be removed from categories.json, for example when categories.json from another project is used.

      +

      Removing not needed object names from categoriesObjects.json

      +

      Not needed object names can be removed from categoriesObjects.json, for example when categoriesObjects.json from another project is used.

      • Arguments required: --remove_unmatched
      • -
      • This step will remove never matching module names from categories.json. Some modules never match because e.g. the module got removed or is not present in the current release. The program will ask you to confirm to overwrite the file.
      • +
• This step will remove never matching object names from categoriesObjects.json. Some objects never match because e.g. the object got removed or is not present in the current release. The program will ask you to confirm to overwrite the file.

      General Information About Map Files and Build Chains

        @@ -7844,17243 +7024,21605 @@

        General Informatio

        Technical Details

        GHS Monolith file generation

        Execute this to generate the monolith files (you need to have the ELF file for this step).

        -
        gdump.exe -virtual_mapping -no_trunc_sec_names Application.elf >> monolith.map
        -gdump.exe -map             -no_trunc_sec_names Application.elf >> monolith.map
        -
        +
        gdump.exe -virtual_mapping -no_trunc_sec_names Application.elf >> monolith.map
        +gdump.exe -map             -no_trunc_sec_names Application.elf >> monolith.map

        By default long names will be truncated. This can lead to inaccurate results. In order to prevent this use -no_trunc_sec_names.

        Class diagram Emma

        -

        ../genDoc/call_graph_uml/classes_mapfileRegexes.png
        -
        ../genDoc/call_graph_uml/classes_memoryEntry.png
        -
        ../genDoc/call_graph_uml/classes_memoryManager.png

        +

        images/emmaClassDiagram.png

        Calling Graph Emma

        -

        ../genDoc/call_graph_uml/emma_filtered.profile.png

        +

        ../genDoc/call_graph_uml/emma_filtered.profile.png

        diff --git a/doc/readme.md b/doc/readme.md index b4b3853..f968ebf 100644 --- a/doc/readme.md +++ b/doc/readme.md @@ -1,45 +1,47 @@ # Emma **Emma Memory and Mapfile Analyser** -> Conduct static (i.e. worst case) memory consumption analyses based on arbitrary linker map files (Green Hills map files are the default but others - like GCC - are supported via configuration options). This tool creates a summary/overview about static memory usage in form of a comma separated values file. +> Conduct static (i.e. worst case) memory consumption analyses based on linker map files (currently only Green Hills map files are supported). +This tool creates a summary/overview about static memory usage in form of a comma separated values (CSV) file. ------------------------ # Contents -1. [Emma](#emma) -2. [Contents](#contents) -3. [Requirements](#requirements) -4. [Process](#process) -5. [Limitations](#limitations) -6. [Usage](#usage) -7. [Arguments](#arguments) - 1. [Required Arguments](#required-arguments) - 2. [Optional Arguments](#optional-arguments) -8. [Project Configuration](#project-configuration) - 1. [Formal Definition](#formal-definition) - 1. [`[]`](#project) - 2. [`[supplement]`](#supplement) - 3. [`globalConfig.json`](#globalconfigjson) - 4. [`address spaces*.json`](#address-spacesjson) - 5. [`budgets.json`](#budgetsjson) - 6. [`patterns*.json`](#patternsjson) - 7. [`virtualSections*.json`](#virtualsectionsjson) - 8. [`categoriesObjects.json` and `categoriesSections.json`](#categoriesobjectsjson-and-categoriessectionsjson) - 9. [`categoriesObjectsKeywords.json` and `categoriesSectionsKeywords.json`](#categoriesobjectskeywordsjson-and-categoriessectionskeywordsjson) - 2. [Configuration File Dependencies](#configuration-file-dependencies) -9. [Output Files](#output-files) - 1. [Image Summary](#image-summary) - 2. [Module Summary](#module-summary) - 3. [Objects in Sections](#objects-in-sections) - 4. [CSV header](#csv-header) -10. [Terminology](#terminology) -11. 
[Examples](#examples) - 1. [Matching module name and category using `categoriesKeywords.json`](#matching-module-name-and-category-using-categorieskeywordsjson) - 2. [Removing not needed module names from `categories.json`](#removing-not-needed-module-names-from-categoriesjson) -12. [General Information About Map Files and Build Chains](#general-information-about-map-files-and-build-chains) -13. [Technical Details](#technical-details) - 1. [GHS Monolith file generation](#ghs-monolith-file-generation) - 2. [Class diagram Emma](#class-diagram-emma) - 3. [Calling Graph Emma](#calling-graph-emma) +- [Emma](#emma) +- [Contents](#contents) +- [Requirements](#requirements) +- [Process](#process) +- [Limitations](#limitations) +- [Usage](#usage) +- [Arguments](#arguments) + - [Required Arguments](#required-arguments) + - [Optional Arguments](#optional-arguments) +- [Project Configuration](#project-configuration) + - [Formal definition of the generic configuration](#formal-definition-of-the-generic-configuration) + - [`PROJECT`](#project) + - [`supplement`](#supplement) + - [`globalConfig.json`](#globalconfigjson) + - [`addressSpaces*.json`](#addressspacesjson) + - [`budgets.json`](#budgetsjson) + - [`categoriesObjects.json` and `categoriesSections.json`](#categoriesobjectsjson-and-categoriessectionsjson) + - [`categoriesObjectsKeywords.json` and `categoriesSectionsKeywords.json`](#categoriesobjectskeywordsjson-and-categoriessectionskeywordsjson) + - [Formal Definition of the GHS compiler specific configuration](#formal-definition-of-the-ghs-compiler-specific-configuration) + - [Extensions to the `globalConfig.json`](#extensions-to-the-globalconfigjson) + - [`patterns*.json`](#patternsjson) + - [`virtualSections*.json`](#virtualsectionsjson) +- [Output Files](#output-files) + - [Section Summary](#section-summary) + - [Object Summary](#object-summary) + - [Objects in Sections](#objects-in-sections) + - [CSV header](#csv-header) +- [Terminology](#terminology) +- 
[Examples](#examples) + - [Matching object name and category using `categoriesKeywords.json`](#matching-object-name-and-category-using-categorieskeywordsjson) + - [Removing not needed object names from `categoriesObjects.json`](#removing-not-needed-object-names-from-categoriesobjectsjson) +- [General Information About Map Files and Build Chains](#general-information-about-map-files-and-build-chains) +- [Technical Details](#technical-details) + - [GHS Monolith file generation](#ghs-monolith-file-generation) + - [Class diagram Emma](#class-diagram-emma) + - [Calling Graph Emma](#calling-graph-emma) ------------------------ @@ -47,13 +49,13 @@ * Python 3.6 or higher * Tested with 3.6.1rc1; 3.7.0 * Python libraries - * pypiscout (`pip3 install pypiscout`) -* Tested on Windows but should also work on Linux systems + * pypiscout 2.0 or higher (`pip3 install pypiscout`) +* Tested on Windows and Linux systems # Process Using the Mapfile Analyser is a two step process. The first step is to extract the required information from the mapfiles and save it to .csv files. -This is done with the `emma.py` script. The second step is to visualise the data. The documentation can be found in the Emma visualiser readme document. +This is done with the `emma.py` script. The second step is to visualise the data. This document explains the first part only; the visualisation is documented in the Emma visualiser readme document. # Limitations The Emma is only suitable for analyzing projects where the devices have a single linear physical address space: @@ -64,7 +66,7 @@ The Emma is only suitable for analyzing projects where the devices have a single Devices based on architectures like this can be analyzed with Emma. # Usage -Image and module summaries of the specified mapfiles will be created. +Section and object summaries of the specified mapfiles will be created. 
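For example, the test project shipped with Emma can be analysed by running the following command from the Emma top-level folder (the same invocation is shown in the test project's own readme):

```shell
# Analyse the example project: read its mapfiles and write the
# CSV summaries to the results folder of the test project
python emma.py --project doc/test_project --mapfiles doc/test_project/mapfiles --dir doc/test_project/results
```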
$ python emma.py --help usage: Emma Memory and Mapfile Analyser (Emma) [-h] [--version] --project PROJECT @@ -72,7 +74,7 @@ Image and module summaries of the specified mapfiles will be created. [--subdir SUBDIR] [--analyse_debug] [--create_categories] [--remove_unmatched] [--noprompt] - [-Werror] + [--Werror] Analyser for mapfiles from Greens Hills Linker (other files are supported via configuration options).It creates a summary/overview about static memory usage @@ -95,7 +97,7 @@ Image and module summaries of the specified mapfiles will be created. --remove_unmatched Remove unmatched modules from categories.json. (default: False) --noprompt Exit fail on user prompt. (default: False) - -Werror Treat all warnings as errors. (default: False) + --Werror Treat all warnings as errors. (default: False) ********* Marcel Schmalzl, Felix Mueller, Gergo Kocsis - 2017-2019 ********* @@ -110,13 +112,13 @@ Image and module summaries of the specified mapfiles will be created. ## Optional Arguments * `--dir` - * User defined path for the top folder holding the `memStats`/output files. Per default it uses the same directory as the config files. + * User defined path for the top folder holding the `memStats`/output files. Per default it uses the same directory as the config files. * `--stats_dir` - * User defined path inside the folder given in the `--dir` argument. This is usefull when batch analysing mapfiles from various development stages. Every analysis output gets it's own directory. + * User defined path inside the folder given in the `--dir` argument. This is useful when batch analysing mapfiles from various development stages. Every analysis output gets its own directory. * `--create_categories` - * Create `categories.json` from `keywords.json` for easier module categorisation. + * Create `categories*.json` from `categories*Keywords.json` for easier categorisation. * `--remove_unmatched`, - * Remove unmatched entries from categories.json. 
This is usefull when a `categories.json` from another project is used. + * Remove unmatched entries from `categories*.json`. This is useful when a `categories*.json` from another project is used. * `--analyse_debug`, `--dbg` * Normally we remove DWARF debug sections from the analysis to show the relevant information for a possible release software. This can be prevented if this argument is set. DWARF section names are defined in `stringConstants.py`. `.unused_ram` is always excluded (regardless of this flag) * `--noprompt` @@ -130,72 +132,74 @@ The memory analysis will be executed based on the project configuration. In orde This chapter explains the role and functionality of each part of the configuration and illustrates all the settings that can be used. Based on this description the user will have to create his/her own configuration. -Creating a configuration file is done in the JSON format (if you are not familiar with JSON, please visit https://www.json.org). - -This chapter will go trough the topic by formally defining the format, rules and the functionality of the config files. There are practical example projects available in the doc folder available. These projects will lead you step by step trough the process of +Creating a configuration is done by writing several JSON files (if you are not familiar with JSON, please visit https://www.json.org). +This chapter will go through the topic by formally defining the format, rules and the functionality of the config files. +There are practical example projects available in the **doc** folder. These projects will lead you step by step through the process of creating a configuration and they also contain map files that can be analyzed. -The following example projects are available: +Currently the following example projects are available: * **doc/test_project** - A Test Project that illustrates a system with a hardware that consists of two devices: an MCU and an SOC. 
-The mapfiles of this project are using the default mapfile format of Emma. +Both of the devices have a GHS compiler specific configuration and mapfiles. -## Formal Definition +An Emma project configuration consists of two parts: the generic configuration and the compiler specific configuration. -An Emma project configuration contains several JSON files: +## Formal definition of the generic configuration +The generic part of the configuration contains the following files: - +--[] + +-- [] | +-- [supplement] | +-- globalConfig.json | +-- addressSpaces*.json - | +-- patterns*.json - | +-- virtualSections*.json | +-- budgets.json | +-- categoriesObjects.json | +-- categoriesObjectsKeywords.json | +-- categoriesSections.json | +-- categoriesSectionsKeywords.json + | +-- The files containing the asterisk symbol can be freely named by the user because the actual file names will have to be listed in the globalConfig.json. -### `[]` +### `PROJECT` The configuration has to be contained by a folder. The name of the folder will be the name of the configuration. From the files ending with a * symbol, the configuration can contain more than one but maximum up to the number of configIDs defined in globalConfig.json. -### `[supplement]` -You can add .md files into this folder with mark down syntax to add information regarding your project that will be contained by the .html overview. +### `supplement` +You can add .md files into this folder with Markdown syntax to add information regarding your project that will be contained by the .html overview. For more information please refer to the Emma Visualiser's documentation. ### `globalConfig.json` -The globalConfig.json is the starting point of the configurations. -It defines the memory configurations of the system and defines the names of the config files that belong to these. -A memory configuration is describes a unit that has memory associated to it, for example an MCU, MPU or an SOC. 
-During the analysis, it will be examined to which extent the memory resources that are available for these units are used. - -In Emma, a memory configuration is called a **configID**. For each `configID` the the following config files need to be defined: +The globalConfig.json is the starting point of a configuration; it defines the **configId**s. +The configIds are the hardware units of the system that have memory associated to them, for example an MCU, MPU or an SOC. +During the analysis, it will be examined to which extent these memory resources are used. -
+For each configId, globalConfig.json assigns a compiler. This means that the mapfiles belonging to the configId were created by the selected compiler. +This is important, since the format of these files is specific to the compiler. For each configId an addressSpaces*.json config file will be assigned. +Furthermore the globalConfig.json assigns compiler specific config files to each configId that need to be consistent with the selected compiler. +For example if a GHS compiler was selected for the configId, then the compiler specific configuration part of this configId has to fulfill the requirements +described in the [Formal Definition of the GHS compiler specific configuration](#formal-definition-of-the-ghs-compiler-specific-configuration) chapter. The globalConfig.json has to have the following format: :::json { : { + "compiler": , "addressSpacesPath": , - "patternsPath": , - "virtualSectionsPath": , + "mapfiles": , "ignoreConfigID": , - "mapfiles": + }, . . . : { + "compiler": , "addressSpacesPath": , - "patternsPath": , - "virtualSectionsPath": , - "ignoreConfigID": + "mapfiles": , + "ignoreConfigID": , + } } @@ -204,26 +208,27 @@ The following rules apply: * The file contains a single unnamed JSON object * The types used in the description: * `` is a string + * `` is a string * `` is a string + * `` is a string, with the special characters escaped in it * `` is a boolean value containing either **true** or **false** - * `` is a relative path of type string (special charcaters must be escaped) + * `` are the key-value pairs that are required by the selected compiler * There has to be at least one **configID** defined -* You must assign three config files for each `configID` by defining the following key, value pairs: - * by defining **`addressSpacesPath`**, the config file that defines the address spaces is assigned - * by defining **`patternsPath`**, the config file that defines the patterns is assigned - * by defining **`virtualSectionsPath`**, the 
config file that listing the sections of the virtual address spaces is assigned +* You must select a compiler for every configID by defining the **compiler** key. The possible values are: + * "GHS" - Green Hills Compiler +* You must assign the following config files for each configID by defining the following key, value pairs: + * by defining **addressSpacesPath**, the config file that defines the address spaces is assigned * The config files have to be in the same folder as the globalConfig.json - * The config files don't need to be different for each `configID` (for example you can use the same sections config file for all the configIDs) -* `ignoreConfigID`: - * can be used to mark a `configID` as ignored, which means that this will not be processed during the analysis + * The config files don't need to be different for each configID (for example you can use the same address spaces config file for all the configIDs) +* The mapfiles: + * specifies a folder **relative** to the one given via **--mapfiles** command line argument + * is optional, if it is defined for a configID, then the map files belonging to this configId will be searched for within this folder + * Otherwise the mapfiles will be searched for in the **--mapfiles** root map file path +* The ignoreConfigID: + * can be used to mark a configID as ignored, which means that this will not be processed during the analysis + * is optional, it does not need to be included in every configID, leaving it out has the same effect as including it with false -* `mapfiles`: - * specifies a path **relative** to the one given via command line argument (-> root map file path). - * If `mapfiles` is specified for a `configID` map files will be searched within this **relative** path. - * Otherwise the root map file path will be used. - -### `address spaces*.json` +### `addressSpaces*.json` The address spaces config files define the existing memory areas for the configIDs they were assigned to in the globalConfigs.json. 
These config files have to have the following format: @@ -275,9 +280,141 @@ The following rules apply: ### `budgets.json` The budgets config file belongs to the Emma Visualiser. For a description, please see: **doc\readme-vis.md**. +### `categoriesObjects.json` and `categoriesSections.json` +The categories config files are used to categorize objects and sections to user defined categories by using their full names. +These files are optional; if no categorization is needed, these config files do not need to be created. +This function can be used for example to group the software components of one company together which will make the results easier to understand. + +The `categoriesObjects.json` is used for the objects and the `categoriesSections.json` is used for the section categorization. +Emma will first try to categorize the objects and sections based on these files. If they could not be categorized, then the software will try +to categorize them based on the `categoriesObjectsKeywords.json` and `categoriesSectionsKeywords.json` files. + +These config files have to have the following format: + + :::json + { + : [ + , + . + . + . + + ], + . + . + . + : [ + , + . + . + . + + ] + } + +The following rules apply: + +* The file contains a single unnamed JSON object +* The types used in the description: + * `` is a string containing a unique category name + * `` is a string +* The categorisation can be done either by hand or with the **--create_categories** command line argument (for usage see there) +* The `` has to contain full names of the sections or objects + +### `categoriesObjectsKeywords.json` and `categoriesSectionsKeywords.json` +The categories keywords config files are used to categorize objects and sections to user defined categories by using only substrings of their names. +These files are optional; if no categorization is needed, these config files do not need to be created. 
+This function can be used for example to group the software components of one company together which will make the results easier to understand. + +The `categoriesObjectsKeywords.json` is used for the objects and the `categoriesSectionsKeywords.json` is used for the section categorization. +The objects and sections will only be categorized by these files if the categorization by the `categoriesObjects.json` and `categoriesSections.json` files failed. +If they could not be categorized, then the software will assign them to a category called `` so they will be more visible in the results. + +These config files have to have the following format: + + :::json + { + : [ + , + . + . + . + + ], + : [ + , + . + . + . + + ] + } + +The following rules apply: + +* The file contains a single unnamed JSON object +* The types used in the description: + * `` is a string containing a unique category name + * `` is a string +* The categorisation has to be done by hand +* The `` contains a regex pattern for the names of the sections or objects + +## Formal Definition of the GHS compiler specific configuration +The GHS compiler specific part of the configuration contains the following files: + + +-- [] + | +-- + | +-- patterns*.json + | +-- virtualSections*.json + +The following dependencies exist within this type of a configuration: + +
+ +In `globalConfig.json`, you need to reference (ref relations on the picture): + +1. `addressSpaces*.json` +2. `patterns*.json` +3. `sections*.json` + +`memRegionExcludes`: You can exclude certain memory regions with this keyword in `patterns*.json`. In order to do this the memory regions/tags must match with those defined in `addressSpaces*.json`. + +If you have virtual address spaces (VASes) defined, you need a `"monolith file"` pattern defined in `patterns*.json` in order to be able to translate virtual addresses back to physical addresses. In the same file you give each VAS a name. This name is later used to identify which section belongs to which VAS (defined in `virtualSections*.json`). The VAS names must match between those two files. This is needed in order to avoid name clashes of section names between different VASes. + +### Extensions to the `globalConfig.json` + +The globalConfig.json has to have the following format **for configIds that have selected "GHS" as compiler**: + + :::json + { + : { + , + "patternsPath": , + "virtualSectionsPath": + }, + . + . + . 
+ : { + , + "patternsPath": , + "virtualSectionsPath": + } + } + +The following rules apply: + +* The types used in the description: + * `` are the key-value pairs discussed in the [Formal definition of the generic configuration](#formal-definition-of-the-generic-configuration) chapter + * `` is a string +* You must assign a patterns config file for each configID by defining the **patternsPath** key +* If the configId contains virtual address spaces, you must assign a config file describing them by defining the **virtualSectionsPath** key +* The assigned config files have to be in the same folder as the globalConfig.json +* The config files don't need to be different for each configID (for example you can use the same virtual sections config file for all the configIDs) + ### `patterns*.json` The patterns config files define regex patterns for finding the mapfiles, monolith files and processing their content. -They belong to the `configID` they were assigned to in the globalConfigs.json. +They belong to the `configID` they were assigned to in the `globalConfigs.json`. These config files have to have the following format: @@ -318,7 +455,7 @@ The following rules apply: * `` is a string * `` is a string containing a unique name * The **mapfiles** object must be present in the file with at least one entry: - * Each entry describes a SW unit of the `configID` (eg. 
a bootloader or application if an MCU is used or a process if an OS, like Linux is used): + * Each entry describes a SW unit of the configId (e.g. a bootloader or application if an MCU is used or a process if an OS like Linux is used): * The **regex** defines one ore more regex pattern to find the mapfile that contains the data for this SW unit: * It is possible to give more than one regex patterns in case of non-uniform mapfile names * If more than one map file will be found for the SW unit, a warning will be thrown @@ -335,7 +472,7 @@ The following rules apply: * The **memRegionExcludes** lists the memory areas that needs to be ignored during the analysis of the mapfile * The sections and objects of the mapfile that belong to the memory areas listed here will be ignored * The memory areas can be selected from the elements defined in the "memory" object of address spaces config file -* The **monoliths** object is optional, it is only needed if the `configID` has virtual address spaces +* The **monoliths** object is optional; it is only needed if the configId has virtual address spaces * If one the of the mapfiles object has a VAS key, then a monolith is needed * It is possible to give more than one regex patterns in case of non-uniform monolith file names * If more than one monolith file will be found for the SW unit, a warning will be thrown @@ -345,7 +482,7 @@ The following rules apply: The virtual sections config files are used to assign the sections of the virtual address spaces to a virtual address spaces defined in the `patterns*.json`file. This is needed because the mapfiles can contain physical and virtual sections as well and Emma needs to identify the virtual ones and assign them to a specific virtual address space. -If your configuration does not use virtual address spaces, the `virtualSections*.json` files are not needed. +If your configuration does not use virtual address spaces, the `virtualSections*.json` file is not needed. 
This config file have to have the following format: @@ -378,118 +515,21 @@ The following rules apply: * Every `` key has an array as value that lists the sections that belong to the virtual address space * There are no rules for the assignment, this needs to be done intuitively based on the project being analyzed -### `categoriesObjects.json` and `categoriesSections.json` -The categories config files are used to categorize objects and sections to user defined categories by using their full names. -These files are optional, if no categorization needed, these config files do not need to be created. -This function can be used for example to group the software components of one company together which will make the results easier to understand. - -The `categoriesObjects.json` is used for the objects and the `categoriesSections.json` is used for the section categorization. -The objects and sections will be first tried to be categorized by these files. If they could not be categorized, then the software will try -to categorize them based on the `categoriesObjectsKeywords.json` and `categoriesSectionsKeywords.json` files. - -These config files have to have the following format: - - :::json - { - : [ - , - . - . - . - - ], - . - . - . - : [ - , - . - . - . - - ] - } - -The following rules apply: - -* The file contains a single unnamed JSON object -* The types used in the description: - * `` is a string containing a unique category name - * `` is a string -* The categorisation can be done either by hand or with the **--create_categories** command line argument (for usage see there) -* The `` has to contain full names of the sections or objects - -### `categoriesObjectsKeywords.json` and `categoriesSectionsKeywords.json` -The categories keywords config files are used to categorize objects and sections to user defined categories by using only substrings of their names. -These files are optional, if no categorization needed, these config files do not need to be created. 
-This function can be used for example to group the software components of one company together which will make the results easier to understand. - -The `categoriesObjectsKeywords.json` is used for the objects and the `categoriesSectionsKeywords.json` is used for the section categorization. -The objects and sections will only tried to be categorized by these files if the categorization by the `categoriesObjects.json` and `categoriesSections.json` files failed. -If they could not be categorized, then the software will assign them to a category called `` so they will be more visible in the results. - -These config files have to have the following format: - - :::json - { - : [ - , - . - . - . - - ], - : [ - , - . - . - . - - ] - } - -The following rules apply: - -* The file contains a single unnamed JSON object -* The types used in the description: - * `` is a string containing a unique category name - * `` is a string -* The categorisation has to be done by hand -* The `` contains a regex pattern for the names of the sections or objects - -## Configuration File Dependencies -In order to work correctly Emma expects certain relations between configuration files. This section shall provide an overview: - -
        - -Since `globalConfig.json` is "just" a meta-config you reference filenames of - -1. `addressSpaces*.json` -2. `patterns*.json` and -3. `sections*.json`. - -These filenames must (obviously) match with the real filenames - there are referenced (`ref`). - -`memRegionExcludes`: You can exclude certain memory regions with this keyword in `patterns*.json`. In order to do this the memory regions/tags must match with those defined in `addressSpaces*.json`. - -If you have virtual address spaces (VASes) defined. We need a "monolith file" pattern defined in `patterns*.json` in order to be able to translate virtual addresses back to physical addresses. In the same file you give each VAS a name/tag. This tag is later used to identify which section belongs to which VAS (defined in `sections*.json`). Again, the VAS names must match between those two files. We do this since you may have name clashes of sections names between different VASes. - - # Output Files The output Files will be saved to the memStats folder of the respective project. The filename will have this form: :::bash - _Image_Summary_TIMESTAMP.csv - _Module_Summary_TIMESTAMP.csv + _Section_Summary_TIMESTAMP.csv + _Object_Summary_TIMESTAMP.csv _Objects_in_Sections_TIMESTAMP.csv -## Image Summary +## Section Summary -The file `_Image_Summary_.csv` contains the sections from the mapfiles. +The file `_Section_Summary_.csv` contains the sections from the mapfiles. -## Module Summary +## Object Summary -The file `_Module_Summary_.csv` contains the objects from the mapfiles. +The file `_Object_Summary_.csv` contains the objects from the mapfiles. ## Objects in Sections "Objects in sections" provides ways to obtain a finer granularity of the categorisation result. Therefore categorised sections containing (smaller) objects of a different category got split up and result into a more accurate categorisation. 
As a result you will get output files in form of a `.csv` file which sets you up to do later processing on this data easily. In this file additional information is added like: @@ -500,7 +540,7 @@ The file `_Module_Summary_.csv` contains the objects fr * All meta data about the origin of each section/object (mapfile, addess space, ...) * ... -
        +
        The file `_Objects_in_Sections_.csv` is the result of the "merge" of the objects and the sections file. @@ -534,9 +574,9 @@ Section names for section reserves and entries are `` and ` The CSV file has the following columns: * The address start, end and sizes: `addrStartHex; addrEndHex; sizeHex; addrStartDec; addrEndDec; sizeDec; sizeHumanReadable` -* The section and module name: `section; moduleName` Note: If the image summary contains only sections, the column moduleName will be left empty. +* The section and object name: `sectionName; moduleName` Note: If the image summary contains only sections, the column moduleName will be left empty. * `configID`, `memType` and `tag` are from the config files. -* `vasName` is the virtual address space name defined in sections.json. The `DMA` field indicates whether a section/module is in a VAS. +* `vasName` is the virtual address space name defined in sections.json. The `DMA` field indicates whether a section/object is in a VAS. * `category`: The category evaluated from categories*.json * `mapfile`: The mapfile, the entry originates from. * `overlapFlag`: Indicates whether one section overlaps with another. @@ -557,17 +597,17 @@ Create a Mapfile Summary for : --mapfiles ..\MyMapfiles \ --dir ..\MyMapfiles\results -## Matching module name and category using `categoriesKeywords.json` -`categoriesKeywords.json` can be used to match module names with catgories by user defined keywords. +## Matching object name and category using `categoriesKeywords.json` +`categoriesObjectsKeywords.json` can be used to match object names with catgories by user defined keywords. * Arguments required: ```--create_categories``` * This step will append the newly categorised modules to `categories.json`. The program will ask you to confirm to overwrite the file. 
-## Removing not needed module names from `categories.json` -Not needed module names can be removed from `categories.json`, for example when `categories.json` from another project is used. +## Removing not needed object names from `categoriesObjects.json` +Not needed object names can be removed from `categoriesObjects.json`, for example when `categoriesObjects.json` from another project is used. * Arguments required: ```--remove_unmatched``` -* This step will remove never matching module names from `categories.json`. Some modules never match because e.g. the module got removed or is not present in the current release. The program will ask you to confirm to overwrite the file. +* This step will remove never matching object names from `categoriesObjects.json`. Some objects never match because e.g. the object got removed or is not present in the current release. The program will ask you to confirm to overwrite the file. # General Information About Map Files and Build Chains @@ -591,9 +631,7 @@ Execute this to generate the monolith files (you need to have the ELF file for t By default long names will be truncated. This can lead to inaccurate results. In order to prevent this use `-no_trunc_sec_names`. ## Class diagram Emma -
        -
        -
        +
        ## Calling Graph Emma -
        +
        diff --git a/doc/test_project/globalConfig.json b/doc/test_project/globalConfig.json index 48d7148..a073b88 100644 --- a/doc/test_project/globalConfig.json +++ b/doc/test_project/globalConfig.json @@ -1,9 +1,11 @@ { "MCU": { + "compiler": "GHS", "addressSpacesPath": "addressSpaces_MCU.json", "patternsPath": "patterns_MCU.json" }, "SOC": { + "compiler": "GHS", "addressSpacesPath": "addressSpaces_SOC.json", "patternsPath": "patterns_SOC.json", "virtualSectionsPath": "virtualSections_SOC.json" diff --git a/doc/test_project/readme.html b/doc/test_project/readme.html index ef92360..83191a3 100644 --- a/doc/test_project/readme.html +++ b/doc/test_project/readme.html @@ -100,15 +100,14 @@

        Test Project

        The goal of the project is to present a simple system without any complicated parts in order to introduce the new users step-by-step to creating an Emma configuration.

        The project can be analyzed by running the following commands from the Emma top-level folder:

        -
        Emma:
        +
        Emma:
             $ python emma.py --project doc/test_project --mapfiles doc/test_project/mapfiles --dir doc/test_project/results
         Emma Visualier:
        -    $ python emma_vis.py --project doc/test_project --dir doc/test_project/results --overview --quiet
        -
        + $ python emma_vis.py --project doc/test_project --dir doc/test_project/results --overview --quiet

        The folder structure of the Test Project:

        -
        +--[test_project]
        +
        +--[test_project]
         |   +-- [mapfiles]                          # The mapfiles of the project are stored here
         |   +-- [readme]                            # The folder of this description
         |   +-- [results]                           # The results of the analyse will be stored here by the commands above
        @@ -122,8 +121,7 @@ 

        Test Project

        | +-- categoriesObjects.json | +-- categoriesSections.json | +-- categoriesObjectsKeywords.json -| +-- categoriesSectionsKeywords.json -
        +| +-- categoriesSectionsKeywords.json

        Project description

        @@ -1331,7 +1329,7 @@

        MCU

        7H/KXxvwjDHGGPv/gmtHFxljjDH2X4UDnjHGGHNDHPCMMcaYG+KAZ4wxxtwQBzxjjDHmhjjgGWOM MTfEAc8YY4y5IQ54xhhjzA1xwDPGGGNuiAOeMcYYc0Mc8Iwxxpgb4oBnjDHG3A7w/wBqP/ZIDXZM KAAAAABJRU5ErkJggg== -" alt="../../images/test_project/MCU_MemoryLayout.png" width="25%" /> +" alt="../images/test_project/MCU_MemoryLayout.png" width="25%">

        SOC

        The SOC is running a complex operating system that utilizes the memory management unit (MMU) of the processor. @@ -1921,33 +1919,35 @@

        SOC

        ihcWVmh4HmHBxHsmOs2RVJWV19ar1ehMz87B1cXTzYlr+bHc/6+Mm4unq3WmDYPauEYJsUZHNy/3 ljU2+09lMLWioby8VNJIZNy93MXWGfASeOr/HgYA0BZccQJAAhQGABKgMACQAIUBgAQoDAAkQGEA eG4Uyv8DOnZFrL7fHLwAAAAASUVORK5CYII= -" alt="../../images/test_project/SOC_MemoryLayout.png" width="25%" /> +" alt="../images/test_project/SOC_MemoryLayout.png" width="25%">

        Creating the configuration

        This chapter will explain creating the configuration for the project by explaining every step the user should take.

        globalConfig.json

        Creating the project configuration starts with creating the globalConfig.json. This file lists the programmable devices of the system and assigns the further configuration files to configuration.

        -We will add our two devices (MCU and SOC) as JSON objects. For every device, at least two config files must be assigned,
        -the addressSpaces and the patterns. For devices using virtual address spaces, a third one, the sections is needed as well.

        +We will add our two devices (MCU and SOC) as JSON objects. For every device, the used compiler and at least two config files must be assigned:
        +the addressSpaces and the patterns. For devices using virtual address spaces, a third one, the sections is needed as well.
        +As we are using the Green Hills compiler, the mapfiles will be in this format, so we will define the value "GHS" for both devices.

        Since our system contains different devices with different memory layouts, we have assigned two different addressSpaces config files to them. If your system contains multiple devices of the same type, you can use the same addressSpaces config file for these devices.

        The patterns config files are usually unique for each configID.

        Since the MCU does not have an MMU, it does not have virtual address spaces, so we will not assign a sections config file to it. In contrast to the MCU, the SOC does use virtual address spaces, so we will assign a sections config file to it.

        -
        {
        -    "MCU": {
        -        "addressSpacesPath": "addressSpaces_MCU.json",
        -        "patternsPath": "patterns_MCU.json"
        -    },
        -    "SOC": {
        -        "addressSpacesPath": "addressSpaces_SOC.json",
        -        "patternsPath": "patterns_SOC.json",
        -        "virtualSectionsPath": "virtualSections_SOC.json",
        -    }
        -}
        -
        +
        {
        +    "MCU": {
        +        "compiler": "GHS",
        +        "addressSpacesPath": "addressSpaces_MCU.json",
        +        "patternsPath": "patterns_MCU.json"
        +    },
        +    "SOC": {
        +        "compiler": "GHS",
        +        "addressSpacesPath": "addressSpaces_SOC.json",
        +        "patternsPath": "patterns_SOC.json",
        +        "virtualSectionsPath": "virtualSections_SOC.json"
        +    }
        +}

        In the following, the config files assigned to the MCU will be explained.

        @@ -1960,29 +1960,28 @@

        addressSpaces_MCU.json

        The "SRAM" area is an internal RAM area, where data is stored. The last area is the "Device", where the SFRs of the controller are located. Since our software will have neither data nor code in this area, we will add it to the "ignoreMemory" array so it will not be analyzed by Emma.

        -
        {
        -    "memory": {
        -        "Code": {
        -            "start": "0x00000000",
        -            "end": "0x1FFFFFFF",
        -            "type": "INT_FLASH"
        -        },
        -        "SRAM": {
        -            "start": "0x20000000",
        -            "end": "0x3FFFFFFF",
        -            "type": "INT_RAM"
        -        },
        -        "Device": {
        -            "start": "0x40000000",
        -            "end": "0x5FFFFFFF",
        -            "type": "INT_RAM"
        -        }
        -    },
        -    "ignoreMemory": [
        -        "Device"
        -    ]
        -}
        -
        +
        {
        +    "memory": {
        +        "Code": {
        +            "start": "0x00000000",
        +            "end": "0x1FFFFFFF",
        +            "type": "INT_FLASH"
        +        },
        +        "SRAM": {
        +            "start": "0x20000000",
        +            "end": "0x3FFFFFFF",
        +            "type": "INT_RAM"
        +        },
        +        "Device": {
        +            "start": "0x40000000",
        +            "end": "0x5FFFFFFF",
        +            "type": "INT_RAM"
        +        }
        +    },
        +    "ignoreMemory": [
        +        "Device"
        +    ]
        +}
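Each memory area above is given by an inclusive start and end address. To sanity-check such a config, the area sizes can be derived with a few lines of Python (a hypothetical helper, not part of Emma; the `areaSizes` name is made up for illustration):

```python
import json

# The addressSpaces_MCU.json content from above (areas with inclusive start/end)
ADDRESS_SPACES_MCU = json.loads("""
{
    "memory": {
        "Code":   {"start": "0x00000000", "end": "0x1FFFFFFF", "type": "INT_FLASH"},
        "SRAM":   {"start": "0x20000000", "end": "0x3FFFFFFF", "type": "INT_RAM"},
        "Device": {"start": "0x40000000", "end": "0x5FFFFFFF", "type": "INT_RAM"}
    },
    "ignoreMemory": ["Device"]
}
""")

def areaSizes(config):
    """Return {areaName: sizeInBytes} for every area that is not ignored."""
    ignored = set(config.get("ignoreMemory", []))
    return {name: int(area["end"], 16) - int(area["start"], 16) + 1
            for name, area in config["memory"].items()
            if name not in ignored}

sizes = areaSizes(ADDRESS_SPACES_MCU)
print(sizes)  # "Code" and "SRAM" each span 0x20000000 B of address space; "Device" is ignored
```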

        patterns_MCU.json

        @@ -1999,24 +1998,22 @@

        patterns_MCU.json

        is that the Application and the Bootloader will never run at the same time. During the Application runtime, the data of the Bootloader is not present in the memory, only the code, so the SRAM memory area needs to be ignored to get the correct results.

        -
        {
        -    "mapfiles": {
        -        "MCU_Application": {
        -            "regex": ["\\bMCU_Application\\.map"]
        -        },
        -        "MCU_Bootloader": {
        -            "regex": ["\\bMCU_Bootloader\\.map"]
        -            "memRegionExcludes": ["SRAM"]                
        -        }
        -    }
        -}
        -
        +
        {
        +    "mapfiles": {
        +        "MCU_Application": {
        +            "regex": ["\\bMCU_Application\\.map"]
        +        },
        +        "MCU_Bootloader": {
        +            "regex": ["\\bMCU_Bootloader\\.map"],
        +            "memRegionExcludes": ["SRAM"]
        +        }
        +    }
        +}
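The regex values are ordinary regular expressions that are matched against the mapfile names; `\b` is a word boundary and `\.` escapes the dot, so only the intended file name matches. A quick, hypothetical illustration of the pattern semantics (not Emma code):

```python
import re

# The pattern from the "MCU_Application" entry above
pattern = re.compile(r"\bMCU_Application\.map")

print(bool(pattern.search("MCU_Application.map")))      # True: the intended mapfile
print(bool(pattern.search("MCU_Bootloader.map")))       # False: a different mapfile
print(bool(pattern.search("MCU_Application_map.txt")))  # False: the dot is escaped
```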

        At this point the absolute minimum configuration for the MCU is done. You can try it out by adding the following line to the "SOC" object in the globalConfig.json:

        -
        "ignoreConfigID": true
        -
        +
        "ignoreConfigID": true

        This will lead to Emma completely ignoring the "SOC" configID and not throwing errors

        @@ -2038,36 +2035,35 @@

        addressSpaces_SOC.json

        but it will not be present after the booting process anymore, so it will not be part of the Emma analysis.

        Based on all this information, we will ignore the "Boot", "SRAM" and "Peripherals" memory areas by adding them to the "ignoreMemory" array.

        -
        {
        -    "memory": {
        -        "Boot": {
        -            "start": "0x00000000",
        -            "end": "0x0FFFFFFF",
        -            "type": "INT_FLASH"
        -        },
        -        "SRAM": {
        -            "start": "0x10000000",
        -            "end": "0x1FFFFFFF",
        -            "type": "INT_RAM"
        -        },
        -        "Peripherals": {
        -            "start": "0x40000000",
        -            "end": "0x5FFFFFFF",
        -            "type": "INT_RAM"
        -        },
        -        "DDR": {
        -          "start": "0xC0000000",
        -          "end": "0xFFFFFFFF",
        -          "type": "EXT_RAM"
        -        }
        -    },
        -    "ignoreMemory": [
        -      "Boot",
        -      "SRAM",
        -      "Peripherals"
        -    ]
        -}
        -
        +
        {
        +    "memory": {
        +        "Boot": {
        +            "start": "0x00000000",
        +            "end": "0x0FFFFFFF",
        +            "type": "INT_FLASH"
        +        },
        +        "SRAM": {
        +            "start": "0x10000000",
        +            "end": "0x1FFFFFFF",
        +            "type": "INT_RAM"
        +        },
        +        "Peripherals": {
        +            "start": "0x40000000",
        +            "end": "0x5FFFFFFF",
        +            "type": "INT_RAM"
        +        },
        +        "DDR": {
        +          "start": "0xC0000000",
        +          "end": "0xFFFFFFFF",
        +          "type": "EXT_RAM"
        +        }
        +    },
        +    "ignoreMemory": [
        +      "Boot",
        +      "SRAM",
        +      "Peripherals"
        +    ]
        +}

        patterns_SOC.json

        @@ -2091,27 +2087,26 @@

        patterns_SOC.json

        the Emma documentation).

        For the SOC_monolith.map, we need to create the "monoliths" object. This object will contain a user-defined name for the monolith, in this case "SOC_monolith", with a regex pattern just like the ones for the other mapfiles.

        -
        {
        -    "mapfiles": {
        -        "SOC_OperatingSystem": {
        -            "regex": ["\\bSOC_OperatingSystem\\.map"]
        -        },
        -        "SOC_Application": {
        -            "regex": ["\\bSOC_Application\\.map"],
        -            "VAS": "APP"
        -        },
        -        "SOC_NetLogger": {
        -            "regex": ["\\bSOC_NetLogger\\.map"],
        -            "VAS": "NETLOG"
        -        }
        -    },
        -    "monoliths": {
        -        "SOC_monolith": {
        -            "regex": ["\\bSOC_monolith\\.map"]
        -        }
        -    }
        -}
        -
        +
        {
        +    "mapfiles": {
        +        "SOC_OperatingSystem": {
        +            "regex": ["\\bSOC_OperatingSystem\\.map"]
        +        },
        +        "SOC_Application": {
        +            "regex": ["\\bSOC_Application\\.map"],
        +            "VAS": "APP"
        +        },
        +        "SOC_NetLogger": {
        +            "regex": ["\\bSOC_NetLogger\\.map"],
        +            "VAS": "NETLOG"
        +        }
        +    },
        +    "monoliths": {
        +        "SOC_monolith": {
        +            "regex": ["\\bSOC_monolith\\.map"]
        +        }
        +    }
        +}

        virtualSections_SOC.json

        @@ -2120,25 +2115,24 @@

        virtualSections_SOC.json

        sections as well and Emma needs to identify the virtual ones and assign them to a specific virtual address space. In this configuration only the SOC has virtual address spaces. The MCU does not need a config file like this.

        -
        {
        -    "APP": [
        -        ".app_text",
        -        ".app_rodata",
        -        ".app_data",
        -        ".app_bss",
        -        ".app_heap",
        -        ".app_stack"
        -    ],
        -    "NETLOG": [
        -        ".netlog_text",
        -        ".netlog_rodata",
        -        ".netlog_data",
        -        ".netlog_bss",
        -        ".netlog_heap",
        -        ".netlog_stack"
        -    ]
        -}
        -
        +
        {
        +    "APP": [
        +        ".app_text",
        +        ".app_rodata",
        +        ".app_data",
        +        ".app_bss",
        +        ".app_heap",
        +        ".app_stack"
        +    ],
        +    "NETLOG": [
        +        ".netlog_text",
        +        ".netlog_rodata",
        +        ".netlog_data",
        +        ".netlog_bss",
        +        ".netlog_heap",
        +        ".netlog_stack"
        +    ]
        +}
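This config effectively defines a reverse lookup: given a section name from a mapfile, Emma can decide which virtual address space (VAS) it belongs to. A minimal sketch of that lookup (hypothetical; `vasOfSection` is not Emma's actual function):

```python
# The virtualSections_SOC.json content from above, as a Python dict
VIRTUAL_SECTIONS_SOC = {
    "APP":    [".app_text", ".app_rodata", ".app_data",
               ".app_bss", ".app_heap", ".app_stack"],
    "NETLOG": [".netlog_text", ".netlog_rodata", ".netlog_data",
               ".netlog_bss", ".netlog_heap", ".netlog_stack"],
}

def vasOfSection(sectionName, virtualSections):
    """Return the VAS a section belongs to, or None for physical sections."""
    for vas, sections in virtualSections.items():
        if sectionName in sections:
            return vas
    return None

print(vasOfSection(".app_text", VIRTUAL_SECTIONS_SOC))     # APP
print(vasOfSection(".kernel_text", VIRTUAL_SECTIONS_SOC))  # None -> physical section
```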

        Categorization

        @@ -2162,96 +2156,93 @@

        Categorization

        have a group, they will be assigned to the default group.

        categoriesSections.json

        This config file is used for grouping sections with their full name.

        -
        {
        -  "ReservedArea": [
        -    "bootloader",
        -    "application"
        -  ]
        -}
        -
        +
        {
        +  "ReservedArea": [
        +    "bootloader",
        +    "application"
        +  ]
        +}

        categoriesSectionsKeywords.json

        This config file is used for grouping sections with name patterns.

        -
        {
        -  "InterruptVectors": [
        -    "vectors"
        -  ],
        -  "Code": [
        -    "text",
        -    "os"
        -  ],
        -  "ConstantData": [
        -    "rodata"
        -  ],
        -  "StaticData": [
        -    "data",
        -    "bss"
        -  ],
        -  "DynamicData": [
        -    "stack",
        -    "heap"
        -  ]
        -}
        -
        +
        {
        +  "InterruptVectors": [
        +    "vectors"
        +  ],
        +  "Code": [
        +    "text",
        +    "os"
        +  ],
        +  "ConstantData": [
        +    "rodata"
        +  ],
        +  "StaticData": [
        +    "data",
        +    "bss"
        +  ],
        +  "DynamicData": [
        +    "stack",
        +    "heap"
        +  ]
        +}
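In other words, a section is assigned to a category as soon as one of the category's keywords occurs somewhere in the section name, rather than requiring an exact name match. A rough, simplified sketch of the idea (hypothetical; the default group name and the first-match order are assumptions, not Emma's actual implementation):

```python
# Simplified, hypothetical sketch of keyword-based categorisation:
# a section gets the first category whose keyword appears in its name.
CATEGORIES_SECTIONS_KEYWORDS = {
    "InterruptVectors": ["vectors"],
    "Code": ["text", "os"],
    "ConstantData": ["rodata"],
    "StaticData": ["data", "bss"],
    "DynamicData": ["stack", "heap"],
}

def categoriseSection(sectionName, keywordCategories, default="DefaultGroup"):
    # "DefaultGroup" is a made-up name for the default group mentioned above
    for category, keywords in keywordCategories.items():
        if any(keyword in sectionName for keyword in keywords):
            return category
    return default

print(categoriseSection(".app_text", CATEGORIES_SECTIONS_KEYWORDS))  # Code
print(categoriseSection(".app_heap", CATEGORIES_SECTIONS_KEYWORDS))  # DynamicData
```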

        categoriesObjects.json

        This config file is used for grouping objects with their full name.

        -
        {
        -  "Identifiers": [
        -    "identifiers.o"
        -  ],
        -  "Globals": [
        -    "globals.o"
        -  ],
        -  "MCU_RTOS": [
        -    "os_scheduler.o",
        -    "os_tick.o",
        -    "os_heap.o",
        -    "os_queue.o",
        -    "os_diagnostics.o"
        -    ],
        -    "MCU_CAN_Stack": [
        -    "can_driver.o",
        -    "can_prot_frame.o",
        -    "can_prot_transfer.o",
        -    "can_prot_message.o",
        -    "can_prot_connection.o",
        -    "can_prot_control.o",
        -    "can_prot_firmware.o"
        -  ],
        -  "SOC_OperatingSystem": [
        -    "kernel.o",
        -    "kernel_api.o",
        -    "scheduler.o",
        -    "memory_manager.o",
        -    "network_manager.o",
        -    "process_manager.o",
        -    "ethernet_driver.o",
        -    "tcp_ip_stack.o",
        -    "display_driver.o",
        -    "touch_driver.o"
        -  ],
        -  "SOC_ApplicationLogging": [
        -    "netlog_lib.o",
        -    "logging.o"
        -  ],
        -  "SOC_ApplicationGraphics": [
        -    "gfx_lib.o",
        -    "touch_screen.o",
        -    "gui_main.o",
        -    "gui_animations.o"
        -  ],
        -  "SOC_NetLog": [
        -    "netlog_driver.o",
        -    "netlog_filter.o",
        -    "netlog_network_handler.o",
        -    "netlog_transfer.o",
        -    "netlog_connection.o"
        -  ]
        -}
        -
        +
        {
        +  "Identifiers": [
        +    "identifiers.o"
        +  ],
        +  "Globals": [
        +    "globals.o"
        +  ],
        +  "MCU_RTOS": [
        +    "os_scheduler.o",
        +    "os_tick.o",
        +    "os_heap.o",
        +    "os_queue.o",
        +    "os_diagnostics.o"
        +    ],
        +    "MCU_CAN_Stack": [
        +    "can_driver.o",
        +    "can_prot_frame.o",
        +    "can_prot_transfer.o",
        +    "can_prot_message.o",
        +    "can_prot_connection.o",
        +    "can_prot_control.o",
        +    "can_prot_firmware.o"
        +  ],
        +  "SOC_OperatingSystem": [
        +    "kernel.o",
        +    "kernel_api.o",
        +    "scheduler.o",
        +    "memory_manager.o",
        +    "network_manager.o",
        +    "process_manager.o",
        +    "ethernet_driver.o",
        +    "tcp_ip_stack.o",
        +    "display_driver.o",
        +    "touch_driver.o"
        +  ],
        +  "SOC_ApplicationLogging": [
        +    "netlog_lib.o",
        +    "logging.o"
        +  ],
        +  "SOC_ApplicationGraphics": [
        +    "gfx_lib.o",
        +    "touch_screen.o",
        +    "gui_main.o",
        +    "gui_animations.o"
        +  ],
        +  "SOC_NetLog": [
        +    "netlog_driver.o",
        +    "netlog_filter.o",
        +    "netlog_network_handler.o",
        +    "netlog_transfer.o",
        +    "netlog_connection.o"
        +  ]
        +}

        categoriesObjectsKeywords.json

        @@ -2269,23 +2260,22 @@

        budgets.json

        memory is smaller than those. This config file shall contain the memory sizes that physically exist in the device.

        We have to include every memory type: "INT_RAM", "EXT_RAM", "INT_FLASH", "EXT_FLASH". If your project has more than one non-ignored memory area of the same type, simply add their sizes together and include the sum in this file.
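For example, the 80 % project threshold turns each budget value into an absolute warning limit. A small hypothetical calculation with the values of this test project (not Emma code):

```python
# Hypothetical sketch: derive absolute warning limits (in bytes) from the
# project threshold and the non-zero budget values of this test project.
PROJECT_THRESHOLD_PERCENT = 80
BUDGETS = [  # [configID, memory type, physically available bytes]
    ["MCU", "INT_RAM",    262144],
    ["MCU", "INT_FLASH",  524288],
    ["SOC", "EXT_RAM",  33554432],
]

for configId, memType, sizeBytes in BUDGETS:
    limit = sizeBytes * PROJECT_THRESHOLD_PERCENT // 100
    print(f"{configId}/{memType}: budget {sizeBytes} B -> warn above {limit} B")
```

E.g. the 256 KiB (262144 B) of MCU internal RAM yields a warning limit of 209715 B.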

        -
        {
        -    "Project Threshold in %": 80,
        +
        {
        +    "Project Threshold in %": 80,
         
         
        -    "Budgets": [
        -        ["MCU", "INT_RAM",    262144],
        -        ["MCU", "EXT_RAM",         0],
        -        ["MCU", "INT_FLASH",  524288],
        -        ["MCU", "EXT_FLASH",       0],
        +    "Budgets": [
        +        ["MCU", "INT_RAM",    262144],
        +        ["MCU", "EXT_RAM",         0],
        +        ["MCU", "INT_FLASH",  524288],
        +        ["MCU", "EXT_FLASH",       0],
         
        -        ["SOC", "INT_RAM",          0],
        -        ["SOC", "EXT_RAM",   33554432],
        -        ["SOC", "INT_FLASH",        0],
        -        ["SOC", "EXT_FLASH",        0]
        -    ]
        -}
        -
        +        ["SOC", "INT_RAM",          0],
        +        ["SOC", "EXT_RAM",   33554432],
        +        ["SOC", "INT_FLASH",        0],
        +        ["SOC", "EXT_FLASH",        0]
        +    ]
        +}

diff --git a/doc/test_project/readme.md b/doc/test_project/readme.md
index 610b8e9..62ded32 100644
--- a/doc/test_project/readme.md
+++ b/doc/test_project/readme.md
@@ -55,7 +55,7 @@ simple RTOS running on it called OS.

The memory layout of the MCU:

-
        +
### SOC

@@ -74,7 +74,7 @@ to the NetLogger framework if available on the system.

The memory layout of the SOC:

-
        +
## Creating the configuration

@@ -85,8 +85,9 @@ This chapter will explain creating the configuration for the project by explaini

Creating the project configuration starts with creating the globalConfig.json. This file lists the programmable devices of the system and assigns the further configuration files to configuration.

-We will add our two devices (MCU and SOC) as JSON objects. For every device, at least two config files must be assigned,
+We will add our two devices (MCU and SOC) as JSON objects. For every device, the used compiler and at least two config files must be assigned:
the addressSpaces and the patterns. For devices using virtual address spaces, a third one, the sections is needed as well.
+As we are using Green Hills Compiler, so the mapfiles will be in this format, we will define the value "GHS" for both of the devices.

Since in our system, we have different devices with different memory layout, we have assigned two different addressSpaces config files to them. If your system contains multiple devices of the same type, you can use the same
@@ -100,16 +101,19 @@ In contrast to the MCU, the SOC does use virtual address spaces, so we will assi

    :::json
    {
        "MCU": {
+            "compiler": "GHS",
            "addressSpacesPath": "addressSpaces_MCU.json",
            "patternsPath": "patterns_MCU.json"
        },
        "SOC": {
+            "compiler": "GHS",
            "addressSpacesPath": "addressSpaces_SOC.json",
            "patternsPath": "patterns_SOC.json",
-            "virtualSectionsPath": "virtualSections_SOC.json",
+            "virtualSectionsPath": "virtualSections_SOC.json"
        }
    }
+

In the following, the config files assigned to the MCU will be explained.
### addressSpaces_MCU.json diff --git a/emma.py b/emma.py index f367ff3..30a73d6 100644 --- a/emma.py +++ b/emma.py @@ -17,58 +17,23 @@ """ +import sys import timeit import datetime import argparse -import pypiscout as sc +from pypiscout.SCout_Logger import Logger as sc from shared_libs.stringConstants import * # pylint: disable=unused-wildcard-import,wildcard-import import shared_libs.emma_helper import emma_libs.memoryManager -import emma_libs.memoryMap -def main(args): - sc.header("Preparing image summary", symbol=".") - - # Create MemoryManager instance with Variables for image summary - sectionSummary = emma_libs.memoryManager.SectionParser(args) # String identifier for outfilenames - - # FIXME: Something before importData() takes quite a lot of processing time (MSc) - numAnalyzedConfigIDs = sectionSummary.importData() # Read Data - - if numAnalyzedConfigIDs >= 1: - sectionSummary.resolveDuplicateContainmentOverlap() - if not args.create_categories and not args.remove_unmatched: - # Normal run; write csv report - sectionSummary.writeSummary() - else: - # Categorisation-only run: do not write a csv report - pass - - sc.header("Preparing module summary", symbol=".") - - objectSummary = emma_libs.memoryManager.ObjectParser(args) # String identifier for outfilenames - - # FIXME: Something before importData() takes quite a lot of processing time (MSc) - numAnalyzedConfigIDs = objectSummary.importData() # Read Data - if numAnalyzedConfigIDs >= 1: - objectSummary.resolveDuplicateContainmentOverlap() - - if args.create_categories: - fileChanged = objectSummary.createCategoriesJson() # Create categories.json from keywords - if fileChanged: - objectSummary.importData() # Re-read Data - elif args.remove_unmatched: - objectSummary.removeUnmatchedFromCategoriesJson() - else: - objectSummary.writeSummary() - - sc.header("Preparing objects in sections summary", symbol=".") - - objectsInSections = 
emma_libs.memoryMap.calculateObjectsInSections(sectionSummary.consumerCollection, objectSummary.consumerCollection) - emma_libs.memoryMap.memoryMapToCSV(args.dir, args.subdir, args.project, objectsInSections) +def main(arguments): + memoryManager = emma_libs.memoryManager.MemoryManager(*processArguments(arguments)) + memoryManager.readConfiguration() + memoryManager.processMapfiles() + memoryManager.createReports() def parseArgs(arguments=""): @@ -120,12 +85,6 @@ def parseArgs(arguments=""): help="User defined subdirectory name in the --dir folder.", default=None ) - # parser.add_argument( # TODO: Currently not implemented (useful?) (MSc) - # "--verbose", - # "-v", - # help="Generate verbose output", - # action="count" # See: https://docs.python.org/3/library/argparse.html#action - # ) parser.add_argument( "--analyse_debug", help="Include DWARF debug sections in analysis", @@ -151,41 +110,66 @@ def parseArgs(arguments=""): default=False ) parser.add_argument( - "-Werror", + "--Werror", help="Treat all warnings as errors.", action="store_true", default=False ) - if "" == arguments: - args = parser.parse_args() + # We will either parse the arguments string if it is not empty, + # or (in the default case) the data from sys.argv + if arguments == "": + parsedArguments = parser.parse_args() else: - args = parser.parse_args(arguments) + # Arguments were passed to this function (e.g. for unit testing) + parsedArguments = parser.parse_args(arguments) - if args.dir is None: - args.dir = args.project - else: - # Get paths straight (only forward slashes) - args.dir = shared_libs.emma_helper.joinPath(args.dir) + return parsedArguments + + +def processArguments(arguments): + """ + Function to extract the settings values from the command line arguments. + :param arguments: The command line arguments, that is the result of the parser.parse_args(). + :return: The setting values. 
+ """ + projectName = shared_libs.emma_helper.projectNameFromPath(shared_libs.emma_helper.joinPath(arguments.project)) + configurationPath = shared_libs.emma_helper.joinPath(arguments.project) + mapfilesPath = shared_libs.emma_helper.joinPath(arguments.mapfiles) - if args.subdir is not None: + # If an output directory was not specified then the result will be stored to the project folder + if arguments.dir is None: + directory = arguments.project + else: # Get paths straight (only forward slashes) - args.subdir = shared_libs.emma_helper.joinPath(args.subdir) + directory = shared_libs.emma_helper.joinPath(arguments.dir) + # Get paths straight (only forward slashes) or set it to empty if it was empty + subDir = shared_libs.emma_helper.joinPath(arguments.subdir) if arguments.subdir is not None else "" - args.mapfiles = shared_libs.emma_helper.joinPath(args.mapfiles) + outputPath = shared_libs.emma_helper.joinPath(directory, subDir, OUTPUT_DIR) + analyseDebug = arguments.analyse_debug + createCategories = arguments.create_categories + removeUnmatched = arguments.remove_unmatched + noPrompt = arguments.noprompt - return args + return projectName, configurationPath, mapfilesPath, outputPath, analyseDebug, createCategories, removeUnmatched, noPrompt if __name__ == "__main__": - args = parseArgs() + # Parsing the arguments + ARGS = parseArgs() + + sc(invVerbosity=-1, actionWarning=(lambda: sys.exit(-10) if ARGS.Werror is not None else None), actionError=lambda: sys.exit(-10)) - sc.header("Emma Memory and Mapfile Analyser", symbol="/") + sc().header("Emma Memory and Mapfile Analyser", symbol="/") - timeStart = timeit.default_timer() - sc.info("Started processing at", datetime.datetime.now().strftime("%H:%M:%S")) + # Starting the time measurement of Emma + TIME_START = timeit.default_timer() + sc().info("Started processing at", datetime.datetime.now().strftime("%H:%M:%S")) - main(args) + # Executing Emma + main(ARGS) - timeEnd = timeit.default_timer() - sc.info("Finished job 
at:", datetime.datetime.now().strftime("%H:%M:%S"), "(duration: " "{0:.2f}".format(timeEnd - timeStart) + "s)") + # Stopping the time measurement of Emma + TIME_END = timeit.default_timer() + sc().info("Finished job at:", datetime.datetime.now().strftime("%H:%M:%S"), "(duration: " "{0:.2f}".format(TIME_END - TIME_START) + "s)") diff --git a/emma_delta_libs/Delta.py b/emma_delta_libs/Delta.py index 56c26ce..51ecd99 100644 --- a/emma_delta_libs/Delta.py +++ b/emma_delta_libs/Delta.py @@ -48,8 +48,8 @@ def __buildDelta(self) -> pandas.DataFrame: DELTA_PERCENTAGE = "Delta %" DELTA_HUMAN_READABLE = "Delta" - lhs = self.__lhs.reset_index().set_index([CONFIG_ID, MEM_TYPE, TAG, MAPFILE, SECTION_NAME]) - rhs = self.__rhs.reset_index().set_index([CONFIG_ID, MEM_TYPE, TAG, MAPFILE, SECTION_NAME]) + lhs = self.__lhs.reset_index().set_index([CONFIG_ID, MEM_TYPE, MEM_TYPE_TAG, MAPFILE, SECTION_NAME]) + rhs = self.__rhs.reset_index().set_index([CONFIG_ID, MEM_TYPE, MEM_TYPE_TAG, MAPFILE, SECTION_NAME]) delta = lhs.join(rhs, lsuffix=LHS_SUFFIX, rsuffix=RHS_SUFFIX) delta[DELTA_SIZE_DEC] = delta[SIZE_DEC + LHS_SUFFIX] - delta[SIZE_DEC + RHS_SUFFIX] diff --git a/emma_delta_libs/FilePresenter.py b/emma_delta_libs/FilePresenter.py index 87293f5..d16913d 100644 --- a/emma_delta_libs/FilePresenter.py +++ b/emma_delta_libs/FilePresenter.py @@ -21,7 +21,7 @@ import os import typing -import pypiscout as sc +from pypiscout.SCout_Logger import Logger as sc from shared_libs.stringConstants import * # pylint: disable=unused-wildcard-import,wildcard-import import emma_delta_libs.FileSelector @@ -45,10 +45,9 @@ def chooseCandidates(self) -> typing.List[str]: # TODO: Validate all inputs (FM) self.__printFileType() try: - filetype: str = self.__filetypes[int(input("Choose File type >"))] + filetype: str = self.__filetypes[int(input("Choose File type >\n"))] except KeyError: - sc.error("Select valid Summary.") - sys.exit(-10) + sc().error("Select valid Summary.\n") candidates: typing.List[str] = 
self.__fileSelector.getCandidates() self.__printCandidates(candidates) @@ -56,21 +55,19 @@ def chooseCandidates(self) -> typing.List[str]: indices: typing.List[str] = indices.split(" ") indices: typing.List[int] = [int(i) for i in indices] if len(indices) <= 1: - sc.error("Select more than one file.") - sys.exit(-10) + sc().error("Select more than one file.") selectedFiles: typing.List[str] = self.__fileSelector.selectFiles(indices, filetype) self.__printSelectedFiles(selectedFiles) return selectedFiles def __printCandidates(self, candidates: typing.List[str]) -> None: - print("") for i, candidate in enumerate(candidates): string = " " + str(i) + ": " + candidate print(string) def __printSelectedFiles(self, paths: typing.List[str]) -> None: - sc.info("Selected files:") + sc().info("Selected files:") for i, path in enumerate(paths): pathSplit: typing.List[str] = os.path.split(path) version: str = os.path.split(os.path.split(pathSplit[0])[0])[1] @@ -79,6 +76,5 @@ def __printSelectedFiles(self, paths: typing.List[str]) -> None: print(string) def __printFileType(self) -> None: - print("") for i, file in self.__filetypes.items(): print(" " + str(i) + ": " + file) diff --git a/emma_delta_libs/FileSelector.py b/emma_delta_libs/FileSelector.py index f94ebd1..e287fb2 100644 --- a/emma_delta_libs/FileSelector.py +++ b/emma_delta_libs/FileSelector.py @@ -21,7 +21,7 @@ import os import typing -import pypiscout as sc +from pypiscout.SCout_Logger import Logger as sc from shared_libs.stringConstants import * # pylint: disable=unused-wildcard-import,wildcard-import import shared_libs.emma_helper @@ -53,8 +53,7 @@ def __fileToUse(self, subStringIdentifier: str, path: str) -> str: lastModifiedFiles: typing.List[str] = shared_libs.emma_helper.lastModifiedFilesInDir(path, ".csv") # Newest/youngest file is last element if not lastModifiedFiles: - sc.error("No matching Files in: " + path) - sys.exit(-10) + sc().error("No matching Files in: " + path) # Backwards iterate over file list 
(so newest file will be first) for i in range(len(lastModifiedFiles) - 1, -1, -1): diff --git a/emma_delta_libs/RootSelector.py b/emma_delta_libs/RootSelector.py index 01893dc..695e6e4 100644 --- a/emma_delta_libs/RootSelector.py +++ b/emma_delta_libs/RootSelector.py @@ -19,7 +19,7 @@ import os -import pypiscout as sc +from pypiscout.SCout_Logger import Logger as sc from shared_libs.stringConstants import * # pylint: disable=unused-wildcard-import,wildcard-import import shared_libs.emma_helper @@ -33,7 +33,7 @@ def selectRoot() -> str: deltaConfigPath: str = shared_libs.emma_helper.joinPath("./", DELTA_CONFIG) if os.path.isfile(deltaConfigPath): rootpath = shared_libs.emma_helper.readJson(deltaConfigPath)[DELTA_LATEST_PATH] - sc.info("Using " + rootpath + " as project.") + sc().info("Using " + rootpath + " as project.") else: rootpath = input("Enter project root path >") diff --git a/emma_deltas.py b/emma_deltas.py index 35e7146..511a5d1 100644 --- a/emma_deltas.py +++ b/emma_deltas.py @@ -22,7 +22,7 @@ import datetime import argparse -import pypiscout as sc +from pypiscout.SCout_Logger import Logger as sc from shared_libs.stringConstants import * # pylint: disable=unused-wildcard-import,wildcard-import @@ -33,7 +33,12 @@ import emma_delta_libs.RootSelector -def parseArgs(): +def parseArgs(arguments=""): + """ + Argument parser + :param arguments: List of strings specifying the arguments to be parsed (default: "" (-> meaning that arguments from the command line should be parsed) + :return: Argparse object + """ # Argument parser parser = argparse.ArgumentParser( prog="Emma Delta Analyser", @@ -72,7 +77,15 @@ def parseArgs(): action="store_true" ) - return parser.parse_args() + # We will either parse the arguments string if it is not empty, + # or (in the default case) the data from sys.argv + if "" == arguments: + parsedArguments = parser.parse_args() + else: + # Arguments were passed to this function (e.g. 
for unit testing) + parsedArguments = parser.parse_args(arguments) + + return parsedArguments def main(args): @@ -90,23 +103,24 @@ def main(args): filePresenter = emma_delta_libs.FilePresenter.FilePresenter(fileSelector=fileSelector) candidates = filePresenter.chooseCandidates() else: - sc.error("No matching arguments.") - sys.exit(-10) + sc().error("No matching arguments.") delta = emma_delta_libs.Delta.Delta(files=candidates, outfile=args.outfile) delta.tocsv() - sc.info("Saved delta to " + args.outfile) + sc().info("Saved delta to " + args.outfile) if __name__ == "__main__": args = parseArgs() - sc.header("Emma Memory and Mapfile Analyser - Deltas", symbol="/") + sc(invVerbosity=-1, actionWarning=(lambda : sys.exit(-10) if args.Werror is not None else None), actionError=lambda : sys.exit(-10)) + + sc().header("Emma Memory and Mapfile Analyser - Deltas", symbol="/") timeStart = timeit.default_timer() - sc.info("Started processing at", datetime.datetime.now().strftime("%H:%M:%S")) + sc().info("Started processing at", datetime.datetime.now().strftime("%H:%M:%S")) main(args) timeEnd = timeit.default_timer() - sc.info("Finished job at:", datetime.datetime.now().strftime("%H:%M:%S"), "(duration: " + "{0:.2f}".format(timeEnd - timeStart) + "s)") + sc().info("Finished job at:", datetime.datetime.now().strftime("%H:%M:%S"), "(duration: " + "{0:.2f}".format(timeEnd - timeStart) + "s)") diff --git a/emma_libs/__init__.py b/emma_libs/__init__.py index 8b15f92..5e32dae 100644 --- a/emma_libs/__init__.py +++ b/emma_libs/__init__.py @@ -15,4 +15,3 @@ You should have received a copy of the GNU General Public License along with this program. 
If not, see """ - diff --git a/emma_libs/categorisation.py b/emma_libs/categorisation.py new file mode 100644 index 0000000..099b190 --- /dev/null +++ b/emma_libs/categorisation.py @@ -0,0 +1,331 @@ +""" +Emma - Emma Memory and Mapfile Analyser +Copyright (C) 2019 The Emma authors + +This program is free software: you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation, either version 3 of the License, or +(at your option) any later version. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. + +You should have received a copy of the GNU General Public License +along with this program. If not, see +""" + + +import os +import re + +from pypiscout.SCout_Logger import Logger as sc + +from shared_libs.stringConstants import * # pylint: disable=unused-wildcard-import,wildcard-import +import shared_libs.emma_helper +import emma_libs.memoryEntry + + +class Categorisation: + # pylint: disable=too-many-instance-attributes + # Rationale: The class needs to store the paths for the categorisation files, this leads to the amount of class members. + + """ + Class to implement functionality that are related to categorisation of MemEntry objects. + """ + def __init__(self, categoriesObjectsPath, categoriesObjectsKeywordsPath, categoriesSectionsPath, categoriesSectionsKeywordsPath, noPrompt): + # pylint: disable=too-many-arguments + # Rationale: The categorisation paths and the settings needs to be set-up by this function. + + self.noprompt = noPrompt + # These are list of sections and objects that are categorised by keywords (these will be used for updating the categories*.json files) + self.keywordCategorisedSections = [] + self.keywordCategorisedObjects = [] + # Storing the paths to the categories files. 
+ self.categoriesObjectsPath = categoriesObjectsPath + self.categoriesObjectsKeywordsPath = categoriesObjectsKeywordsPath + self.categoriesSectionsPath = categoriesSectionsPath + self.categoriesSectionsKeywordsPath = categoriesSectionsKeywordsPath + # Loading the categories files. These files are optional, if they are not present we will store None instead. + self.categoriesObjects = Categorisation.__readCategoriesJson(self.categoriesObjectsPath) + self.categoriesObjectsKeywords = Categorisation.__readCategoriesJson(self.categoriesObjectsKeywordsPath) + self.categoriesSections = Categorisation.__readCategoriesJson(self.categoriesSectionsPath) + self.categoriesSectionsKeywords = Categorisation.__readCategoriesJson(self.categoriesSectionsKeywordsPath) + + def fillOutCategories(self, sectionCollection, objectCollection): + """ + Organisational function to call sub-functions that fill out categories in a sectionCollection and objectCollection. + :param sectionCollection: List of MemEntry objects that represent sections. The categories will be filled out in these. + :param objectCollection: List of MemEntry objects that represent objects. The categories will be filled out in these. + :return: None + """ + self.__fillOutSectionCategories(sectionCollection) + self.__fillOutObjectCategories(objectCollection) + + def manageCategoriesFiles(self, updateCategoriesFromKeywordMatches, removeUnmatchedCategories, sectionCollection, objectCollection): + """ + Organisational function to call sub-functions that updates the categoriesSections and categoriesObjects files. + This function needs to be called after running the collections with the fillOutCategories(). + :param updateCategoriesFromKeywordMatches: True if the categoriesSections and categoriesObjects needs to be updated, + from the matches found during categorisation with the categoriesSectionsKeywords + and categoriesObjectsKeywords respectively, False otherwise. 
+ :param removeUnmatchedCategories: True if the categories that did not match need to be removed from + categoriesSections and categoriesObjects, False otherwise. + :param sectionCollection: List of MemEntry objects that represent sections. The categories need to already be filled out in these by the fillOutCategories(). + :param objectCollection: List of MemEntry objects that represent objects. The categories need to already be filled out in these by the fillOutCategories(). + :return: None + """ + self.__manageSectionCategoriesFiles(updateCategoriesFromKeywordMatches, removeUnmatchedCategories, sectionCollection) + self.__manageObjectCategoriesFiles(updateCategoriesFromKeywordMatches, removeUnmatchedCategories, objectCollection) + + @staticmethod + def __readCategoriesJson(path): + """ + Function to read in a categorisation json file. + :param path: The path of the file that needs to be read. + :return: Content of the json file. + """ + if os.path.exists(path): + categoriesJson = shared_libs.emma_helper.readJson(path) + else: + categoriesJson = None + sc().warning("There was no " + os.path.basename(path) + " file found, the categorization based on this will be skipped.") + return categoriesJson + + def __fillOutSectionCategories(self, sectionCollection): + """ + Function to fill out the categories in a section collection. + :param sectionCollection: List of MemEntry objects representing sections. + :return: None + """ + # Filling out sections + for consumer in sectionCollection: + consumerName = consumer.sectionName + consumer.category = Categorisation.__evalCategoryOfAnElement(consumerName, self.categoriesSections, self.categoriesSectionsKeywords, self.keywordCategorisedSections) + + def __fillOutObjectCategories(self, objectCollection): + """ + Function to fill out the categories in an object collection. + :param objectCollection: List of MemEntry objects representing objects.
+ :return: None + """ + # Filling out objects + for consumer in objectCollection: + consumerName = consumer.objectName + consumer.category = Categorisation.__evalCategoryOfAnElement(consumerName, self.categoriesObjects, self.categoriesObjectsKeywords, self.keywordCategorisedObjects) + + def __manageSectionCategoriesFiles(self, updateCategoriesFromKeywordMatches, removeUnmatchedCategories, sectionCollection): + """ + Function that updates the categoriesSections file based on an already categorised section collection. + :param updateCategoriesFromKeywordMatches: True if the categoriesSections needs to be updated, from the matches + found during categorisation with the categoriesSectionsKeywords, + False otherwise. + :param removeUnmatchedCategories: True if the categories that did not match needs to be removed from + categoriesSections, False otherwise. + :param sectionCollection: List of MemEntry objects that represent sections. + :return: None + """ + # Updating the section categorisation file + if updateCategoriesFromKeywordMatches: + # Asking the user whether a file shall be updated. If no prompt is on we will overwrite by default. + sc().info("Merge categoriesSections.json with categorised modules from " + CATEGORIES_KEYWORDS_SECTIONS_JSON + "?\nIt will be overwritten.\n`y` to accept, any other key to discard.") + if self.noprompt: + sc().wwarning("No prompt is active. Chose to overwrite file.") + text = "y" + else: + text = input("> ") + # If an update is allowed + if text == "y": + Categorisation.__updateCategoriesJson(self.categoriesSections, self.keywordCategorisedSections, self.categoriesSectionsPath) + # Re-categorize sections if the categorisation file have been changed + self.__fillOutSectionCategories(sectionCollection) + sc().info("The " + self.categoriesSectionsPath + " was updated.") + else: + sc().info(text + " was entered, aborting the update. 
The " + self.categoriesSectionsPath + " was not changed.") + # Do we need to remove the unmatched categories? + if removeUnmatchedCategories: + # Asking the user whether the unmatched categories shall be removed. If no prompt is on we will remove by default. + sc().info("Remove unmatched modules from " + CATEGORIES_SECTIONS_JSON + "?\nIt will be overwritten.\n`y` to accept, any other key to discard.") + if self.noprompt: + sc().wwarning("No prompt is active. Chose `y` to remove unmatched categories.") + text = "y" + else: + text = input("> ") + if text == "y": + Categorisation.__removeUnmatchedFromCategoriesJson(self.categoriesSections, sectionCollection, emma_libs.memoryEntry.SectionEntry, self.categoriesSectionsPath) + else: + sc().info(text + " was entered, aborting the removal. The " + self.categoriesSectionsPath + " was not changed.") + + def __manageObjectCategoriesFiles(self, updateCategoriesFromKeywordMatches, removeUnmatchedCategories, objectCollection): + """ + Function that updates the categoriesObjects file based on an already categorised object collection. + :param updateCategoriesFromKeywordMatches: True if the categoriesObjects needs to be updated, from the matches + found during categorisation with the categoriesObjectsKeywords, + False otherwise. + :param removeUnmatchedCategories: True if the categories that did not match need to be removed from + categoriesObjects, False otherwise. + :param objectCollection: List of MemEntry objects that represent objects. + :return: None + """ + if updateCategoriesFromKeywordMatches: + # Updating the object categorisation file + sc().info("Merge categoriesObjects.json with categorised modules from " + CATEGORIES_KEYWORDS_OBJECTS_JSON + "?\nIt will be overwritten.\n`y` to accept, any other key to discard.") + if self.noprompt: + sc().wwarning("No prompt is active.
Chose `y` to overwrite.") + text = "y" + else: + text = input("> ") + # If an update is allowed + if text == "y": + Categorisation.__updateCategoriesJson(self.categoriesObjects, self.keywordCategorisedObjects, self.categoriesObjectsPath) + sc().info("The " + self.categoriesObjectsPath + " was updated.") + # Re-categorize objects if the categorisation file has been changed + self.__fillOutObjectCategories(objectCollection) + else: + sc().info(text + " was entered, aborting the update. The file " + self.categoriesObjectsPath + " was not changed.") + # Do we need to remove the unmatched categories? + if removeUnmatchedCategories: + # Asking the user whether the unmatched categories shall be removed. If no prompt is on we will remove by default. + sc().info("Remove unmatched modules from " + CATEGORIES_OBJECTS_JSON + "?\nIt will be overwritten.\n`y` to accept, any other key to discard.") + if self.noprompt: + sc().wwarning("No prompt is active. Chose `y` to remove unmatched.") + text = "y" + else: + text = input("> ") + if text == "y": + Categorisation.__removeUnmatchedFromCategoriesJson(self.categoriesObjects, objectCollection, emma_libs.memoryEntry.ObjectEntry, self.categoriesObjectsPath) + else: + sc().info(text + " was entered, aborting the removal. The " + self.categoriesObjectsPath + " was not changed.") + + @staticmethod + def __evalCategoryOfAnElement(nameString, categories, categoriesKeywords, keywordCategorisedElements): + """ + Function to find the category of an element. First the categorisation will be tried with the categories file, + and if that fails with the categoriesKeywords file. If this still fails a default value will be set for the category. + If the element was categorised by a keyword then the element will be added to the keywordCategorisedElements list. + :param nameString: The name string of the element that needs to be categorised. + :param categories: Content of the categories file. + :param categoriesKeywords: Content of the categoriesKeywords file. + :param keywordCategorisedElements: List of elements that were categorised by keywords.
+ :return: Category string + """ + foundCategory = Categorisation.__searchCategoriesJson(nameString, categories) + if foundCategory is None: + # If there is no match check for keyword specified in categoriesKeywordsJson + foundCategory = Categorisation.__categoriseByKeyword(nameString, categoriesKeywords, keywordCategorisedElements) + if foundCategory is None: + # If there is still no match then we will assign the default constant + foundCategory = UNKNOWN_CATEGORY + return foundCategory + + @staticmethod + def __searchCategoriesJson(nameString, categories): + """ + Function to search categories for a name in a categories file. + :param nameString: String that categories needs to be searched for. + :param categories: File the categories needs to be searched in. + :return: String that contains the categories comma separated that were found for the nameString, else None. + """ + result = None + + # Did we get a file? + if categories is not None: + categoriesFoundForTheName = [] + # Iterating trough the categories + for category in categories: + # Look through elements that shall be ordered to this category + for categoryElementName in categories[category]: + # If the element name matches the nameString then we will add this category as found + if nameString == categoryElementName: + categoriesFoundForTheName.append(category) + # If we have found categories then we will sort them and return them comma separated + if categoriesFoundForTheName: + categoriesFoundForTheName.sort() + result = ", ".join(categoriesFoundForTheName) + return result + + @staticmethod + def __categoriseByKeyword(nameString, categoriesKeywords, keywordCategorisedElements): + """ + Function to search a category for a name in a categoriesKeywords file. + :param nameString: String that categories needs to be searched for. + :param categoriesKeywords: File the categories needs to be searched in. 
+ :param keywordCategorisedElements: List of pairs that contains elements that were categorised by keywords as (name, category). + :return: String that contains the category that was found for the nameString, else None. + """ + result = None + + # If a categoriesKeywords file was received + if categoriesKeywords is not None: + # For all the categories + for category in categoriesKeywords: + # For all the keywords belonging to this category + for keyword in categoriesKeywords[category]: + # Creating a regex pattern from the keyword + pattern = r"""\w*""" + keyword + r"""\w*""" + # Finding the first occurrence of the pattern in the nameString + if re.search(pattern, nameString) is not None: + # Adding the element to the list of elements that were keyword categorised as a pair of (name, category) + keywordCategorisedElements.append((nameString, category)) + result = category + return result + + @staticmethod + def __updateCategoriesJson(categoriesToUpdate, newCategories, outputPath): + """ + Updates a categories file with new categories. + :param categoriesToUpdate: This is the categories file that needs to be updated. + :param newCategories: These are the new categories that will be added to the categories file. + :param outputPath: This is the path where the updated file will be written. + :return: None. 
+ """ + # Format newCategories to {Categ1: [ObjectName1, ObjectName2, ...], Categ2: [...]} + formattedNewCategories = {} + for key, value in dict(newCategories).items(): + formattedNewCategories[value] = formattedNewCategories.get(value, []) + formattedNewCategories[value].append(key) + + # Merge categories from keyword search with categories from categories.json + mergedCategories = {**formattedNewCategories, **categoriesToUpdate} + + # Sort moduleCategories case-insensitive in alphabetical order + for key in mergedCategories.keys(): + mergedCategories[key].sort(key=lambda s: s.lower()) + + # Write the data to the outputPath + shared_libs.emma_helper.writeJson(outputPath, mergedCategories) + + @staticmethod + def __removeUnmatchedFromCategoriesJson(categoriesToRemoveFrom, consumerCollection, memEntryHandler, outputPath): + """ + Removes categories from the categories files the categories for those no matches were found. + :param categoriesToRemoveFrom: This is the categories file from which we remove the unmatched categories. + :param consumerCollection: This is the consumer collection based on we will decide which category has matched. + :param memEntryHandler: This is a subclass of the MemEntryHandler. + :param outputPath: This is the path where the categories file will be written to. 
+ """ + + # Make a dict of {name : category} from consumerCollection + rawCategorisedConsumerCollection = {memEntryHandler.getName(memEntry): memEntry.category for memEntry in consumerCollection} + + # Format rawCategorisedConsumerCollection to {Categ1: [ObjectName1, ObjectName2, ...], Categ2: [...]} + categorisedElements = {} + for key, value in rawCategorisedConsumerCollection.items(): + categorisedElements[value] = categorisedElements.get(value, []) + categorisedElements[value].append(key) + + for category in list(categoriesToRemoveFrom): # For every category in categories.json (list of keys required so we can pop entries while iterating) + if category not in categorisedElements: + # If category is in categories.json but has never occurred in the mapfiles (hence not present in consumerCollection) + # Remove the not occurring category entirely + categoriesToRemoveFrom.pop(category) + else: + # Category occurs in consumerCollection, hence is present in mapfiles, + # overwrite old category object list with the ones actually occurring in mapfiles + categoriesToRemoveFrom[category] = categorisedElements[category] + + # Sort self.categories case-insensitive in alphabetical order + for key in categoriesToRemoveFrom.keys(): + categoriesToRemoveFrom[key].sort(key=lambda s: s.lower()) + + # Write the data to the outputPath + shared_libs.emma_helper.writeJson(outputPath, categoriesToRemoveFrom) diff --git a/emma_libs/configuration.py b/emma_libs/configuration.py new file mode 100644 index 0000000..22ca924 --- /dev/null +++ b/emma_libs/configuration.py @@ -0,0 +1,133 @@ +""" +Emma - Emma Memory and Mapfile Analyser +Copyright (C) 2019 The Emma authors + +This program is free software: you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation, either version 3 of the License, or +(at your option) any later version.
+ +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. + +You should have received a copy of the GNU General Public License +along with this program. If not, see +""" + + +from pypiscout.SCout_Logger import Logger as sc + +from shared_libs.stringConstants import * # pylint: disable=unused-wildcard-import,wildcard-import +import shared_libs.emma_helper +import emma_libs.specificConfigurationFactory + + +class Configuration: + # pylint: disable=too-few-public-methods + # Rationale: This class does not need to provide more functionality than reading in the configuration. + + """ + Class for handling the configuration reading and processing. + """ + def __init__(self): + self.specificConfigurations = dict() + self.globalConfig = None + + def readConfiguration(self, configurationPath, mapfilesPath, noPrompt): + """ + Function to read in the configuration and process its data. + :param configurationPath: This is the path of the folder where the configuration files are. + :param mapfilesPath: This is the path of the folder where the mapfiles are. + :param noPrompt: True if no user prompts shall be made, False otherwise.
+ :return: None + """ + # Check whether the configurationPath exists + shared_libs.emma_helper.checkIfFolderExists(configurationPath) + + # Processing the globalConfig.json + globalConfigPath = shared_libs.emma_helper.joinPath(configurationPath, "globalConfig.json") + self.globalConfig = Configuration.__readGlobalConfigJson(globalConfigPath) + sc().info("Imported " + str(len(self.globalConfig)) + " global config entries:" + str(list(self.globalConfig.keys()))) + + # Processing the generic configuration parts for all the configId + for configId in self.globalConfig: + # Processing the addressSpaces*.json + if "addressSpacesPath" in self.globalConfig[configId]: + addressSpacesPath = shared_libs.emma_helper.joinPath(configurationPath, self.globalConfig[configId]["addressSpacesPath"]) + self.globalConfig[configId]["addressSpaces"] = Configuration.__readAddressSpacesJson(addressSpacesPath) + else: + sc().error("The " + configId + " does not have the key: " + "addressSpacesPath") + + # Setting up the mapfile search paths for the configId + # TODO: add option for recursive search (MSc) + if MAPFILES in self.globalConfig[configId]: + mapfilesPathForThisConfigId = shared_libs.emma_helper.joinPath(mapfilesPath, self.globalConfig[configId][MAPFILES]) + else: + mapfilesPathForThisConfigId = mapfilesPath + shared_libs.emma_helper.checkIfFolderExists(mapfilesPathForThisConfigId) + + # Creating the SpecificConfiguration object + if "compiler" in self.globalConfig[configId]: + usedCompiler = self.globalConfig[configId]["compiler"] + self.specificConfigurations[configId] = emma_libs.specificConfigurationFactory.createSpecificConfiguration(usedCompiler, noPrompt=noPrompt) + # Processing the compiler dependent parts of the configuration + sc().info("Processing the mapfiles of the configID \"" + configId + "\"") + self.specificConfigurations[configId].readConfiguration(configurationPath, mapfilesPathForThisConfigId, configId, self.globalConfig[configId]) + # Validating the the 
configuration + if not self.specificConfigurations[configId].checkConfiguration(configId, self.globalConfig[configId]): + sc().warning("The specificConfiguration object of the configId \"" + + configId + "\" reported that the configuration is invalid!\n" + + "The configId \"" + configId + "\" will not be analysed!") + else: + sc().error("The configuration of the configID \"" + configId + "\" does not contain a \"compiler\" key!") + + @staticmethod + def __readGlobalConfigJson(path): + """ + Function to read in and process the globalConfig. + :param path: Path of the globalConfig file. + :return: The content of the globalConfig. + """ + # Load the globalConfig file + globalConfig = shared_libs.emma_helper.readJson(path) + + # Loading the config files of the defined configID-s + for configId in list(globalConfig.keys()): # List of keys required so we can remove the ignoreConfigID entrys + # Skip configID if ["ignoreConfigID"] is True + if IGNORE_CONFIG_ID in globalConfig[configId].keys(): + # Check that flag has the correct type + if not isinstance(globalConfig[configId][IGNORE_CONFIG_ID], bool): + sc().error("The " + IGNORE_CONFIG_ID + " of " + configId + " has a type " + + str(type(globalConfig[configId][IGNORE_CONFIG_ID])) + " instead of bool. " + + "Please be sure to use correct JSON syntax: boolean constants are written true and false.") + elif globalConfig[configId][IGNORE_CONFIG_ID] is True: + globalConfig.pop(configId) + + # Check whether the globalConfig is empty + if not globalConfig: + sc().warning("No configID was defined or all of them were ignored.") + + return globalConfig + + @staticmethod + def __readAddressSpacesJson(path): + """ + Function to read in and process the addressSpaces config file. + :param path: Path of the addressSpaces config file. + :return: The content of the addressSpaces config file. 
+ """ + # Load the addressSpaces file + addressSpaces = shared_libs.emma_helper.readJson(path) + + # Removing the imported memory entries if they are listed in the IGNORE_MEMORY + if IGNORE_MEMORY in addressSpaces.keys(): + for memoryToIgnore in addressSpaces[IGNORE_MEMORY]: + if addressSpaces["memory"][memoryToIgnore]: + addressSpaces["memory"].pop(memoryToIgnore) + sc().info("The memory entry \"" + memoryToIgnore + "\" of the \"" + path + "\" is marked to be ignored...") + else: + sc().error("The key " + memoryToIgnore + " which is in the ignore list, does not exist in the memory object of " + path) + + return addressSpaces diff --git a/emma_libs/ghsConfiguration.py b/emma_libs/ghsConfiguration.py new file mode 100644 index 0000000..0a5f175 --- /dev/null +++ b/emma_libs/ghsConfiguration.py @@ -0,0 +1,308 @@ +""" +Emma - Emma Memory and Mapfile Analyser +Copyright (C) 2019 The Emma authors + +This program is free software: you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation, either version 3 of the License, or +(at your option) any later version. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. + +You should have received a copy of the GNU General Public License +along with this program. If not, see +""" + + +import os +import sys +import re + +from pypiscout.SCout_Logger import Logger as sc + +from shared_libs.stringConstants import * # pylint: disable=unused-wildcard-import,wildcard-import +import shared_libs.emma_helper +import emma_libs.specificConfiguration +import emma_libs.ghsMapfileRegexes + + +class GhsConfiguration(emma_libs.specificConfiguration.SpecificConfiguration): + """ + Class to handle a GHS compiler specific configuration. 
+ """ + def __init__(self, noPrompt): + super().__init__(noPrompt) + self.noPrompt = noPrompt + + def readConfiguration(self, configurationPath, mapfilesPath, configId, configuration) -> None: + """ + Function to read in the GHS compiler specific part of the configuration and extend the already existing configuration with it. + :param configurationPath: Path of the directory where the configuration is located. + :param mapfilesPath: Path of the directory where the mapfiles are located. + :param configId: ConfigId to which the configuration belongs. + :param configuration: The configuration dictionary that needs to be extended with the compiler specific data. + :return: None + """ + # Loading the patterns*.json + if "patternsPath" in configuration: + patternsPath = shared_libs.emma_helper.joinPath(configurationPath, configuration["patternsPath"]) + configuration["patterns"] = shared_libs.emma_helper.readJson(patternsPath) + else: + sc().error("Missing patternsPath definition in the globalConfig.json for the configId: " + configId + "!") + + # Loading the virtualSections*.json if the file is present (only needed in case of VAS-es) + if "virtualSectionsPath" in configuration: + virtualSectionsPath = shared_libs.emma_helper.joinPath(configurationPath, configuration["virtualSectionsPath"]) + configuration["virtualSections"] = shared_libs.emma_helper.readJson(virtualSectionsPath) + + # Loading the mapfiles + GhsConfiguration.__addMapfilesToConfiguration(mapfilesPath, configuration) + + # Loading the monolith file + # Flag to load a monolith file only once; we don't do it here since we might work with DMA only + configuration["monolithLoaded"] = False + # Flag to sort monolith table only once; we don't do it here since we might work with DMA only + configuration["sortMonolithTabularised"] = False + self.__addMonolithsToConfiguration(mapfilesPath, configuration) + + def checkConfiguration(self, configId, configuration) -> bool: + """ + Function to check the GHS compiler
specific part of the configuration. + :param configId: The configId the configuration belongs to. + :param configuration: The configuration dictionary that needs to be checked. + :return: True if the configuration is correct, False otherwise. + """ + result = False + if GhsConfiguration.__checkNumberOfFoundMapfiles(configId, configuration): + if GhsConfiguration.__checkMonolithSections(configuration, self.noPrompt): + result = True + return result + + @staticmethod + def __addMapfilesToConfiguration(mapfilesPath, configuration): + """ + Function to add the mapfiles to the configuration. + :param mapfilesPath: Path of the folder where the mapfiles are located. + :param configuration: Configuration to which the mapfiles need to be added. + :return: None + """ + if os.path.isdir(mapfilesPath): + GhsConfiguration.__addFilesToConfiguration(mapfilesPath, configuration, "mapfiles") + else: + sc().error("The mapfiles folder (\"" + mapfilesPath + "\") does not exist!") + + def __addMonolithsToConfiguration(self, mapfilesPath, configuration): + """ + Function to add monolith files to the configuration. + :param mapfilesPath: Path of the folder where the mapfiles (and the monolith files) are located. + :param configuration: Configuration to which the monolith files need to be added. + :return: None + """ + + def ifAnyNonDMA(configuration): + """ + Function to check whether a configuration has any non-DMA mapfiles. + If a VAS is defined for a mapfile, then it is considered to be non-DMA. + :param configuration: Configuration whose mapfiles needs to be checked. + :return: True if any non-DMA mapfile was found, False otherwise. 
+ """ + result = False + for entry in configuration["patterns"]["mapfiles"]: + if "VAS" in configuration["patterns"]["mapfiles"][entry]: + result = True + break + return result + + if os.path.isdir(mapfilesPath): + if ifAnyNonDMA(configuration): + numMonolithMapFiles = GhsConfiguration.__addFilesToConfiguration(mapfilesPath, configuration, "monoliths") + if numMonolithMapFiles > 1: + sc().warning("More than one monolith file found; result may be non-deterministic") + elif numMonolithMapFiles < 1: + sc().error("No monolith file was detected but some mapfiles require address translation (VASes used)") + self.__addTabularisedMonoliths(configuration) + else: + sc().error("The mapfiles folder (\"" + mapfilesPath + "\") does not exist!") + + @staticmethod + def __addFilesToConfiguration(path, configuration, fileType): + """ + Function to add a specific file type to the configuration. + :param path: Path where the files need to be searched for. + :param configuration: Configuration to which the files need to be added. + :param fileType: Filetype that needs to be searched for. + :return: Number of files found.
+ """ + # For every file in the received path + for file in os.listdir(path): + # For every entry for the received fileType + for entry in configuration["patterns"][fileType]: + foundFiles = [] + # For every regex pattern defined for this entry + for regex in configuration["patterns"][fileType][entry]["regex"]: + # We will try to match the current file with its path and add it to the found files if it matched + searchCandidate = shared_libs.emma_helper.joinPath(path, file) + if re.search(regex, searchCandidate): + foundFiles.append(os.path.abspath(searchCandidate)) + # If we have found any file for this file type + if foundFiles: + # We will add it to the configuration and also check whether more than one file was found to this pattern + configuration["patterns"][fileType][entry]["associatedFilename"] = foundFiles[0] + sc().info(LISTING_INDENT + "Found " + fileType + ": ", foundFiles[0]) + if len(foundFiles) > 1: + sc().warning("Ambiguous regex pattern in '" + configuration["patternsPath"] + "'. Selected '" + foundFiles[0] + "'. Regex matched: " + "".join(foundFiles)) + + # Check for found files in patterns and do some clean-up + # We need to convert the keys into a temporary list in order to avoid iterating on the original which may be changed during the loop, that causes a runtime error + for entry in list(configuration["patterns"][fileType]): + if "associatedFilename" not in configuration["patterns"][fileType][entry]: + sc().warning("No file found for ", str(entry).ljust(20), "(pattern:", ''.join(configuration["patterns"][fileType][entry]["regex"]) + " );", "skipping...") + del configuration["patterns"][fileType][entry] + + return len(configuration["patterns"][fileType]) + + def __addTabularisedMonoliths(self, configuration): + """ + Manages Monolith selection, parsing and converting to a list. + :param configuration: Configuration to which the monoliths need to be added. 
+ :return: None + """ + + def loadMonolithFile(configuration, noprompt): + """ + Function to Load monolith file. + :param configuration: Configuration to which the monoliths need to be added. + :param noprompt: True if no user prompts shall be made, False otherwise, in which case a program exit will be made. + :return: Content of the monolith file if it could be read, else None. + """ + result = None + mapfileIndexChosen = 0 # Take the first monolith file in list (default case) + numMonolithFiles = len(configuration["patterns"]["monoliths"]) + keyMonolithMapping = {} + + # Update globalConfig with all monolith filenames + for key, monolith in enumerate(configuration["patterns"]["monoliths"]): + keyMonolithMapping.update({str(key): configuration["patterns"]["monoliths"][monolith]["associatedFilename"]}) + + if numMonolithFiles > 1: + sc().info("More than one monolith file detected:") + # Display files + for key, monolith in keyMonolithMapping.items(): + sc().info(LISTING_INDENT, f"{key}: {monolith}") + if noprompt: + sc().wwarning("No prompt is active. 
Using first monolith file in list: " + str(keyMonolithMapping['0'])) + mapfileIndexChosen = 0 + else: + # Let the user choose the index of the correct file + sc().info("Choose the index of the desired monolith file") + mapfileIndexChosen = shared_libs.emma_helper.Prompt.idx() + while not 0 <= mapfileIndexChosen < numMonolithFiles: # Accept only values within the range [0, numMonolithFiles) + sc().warning("Invalid value; try again:") + mapfileIndexChosen = shared_libs.emma_helper.Prompt.idx() + elif numMonolithFiles < 1: + sc().error("No monolith file found but needed for processing") + + # Finally load the file + configuration["monolithLoaded"] = True + with open(keyMonolithMapping[str(mapfileIndexChosen)], "r") as fp: + result = fp.readlines() + + return result + + def tabulariseAndSortMonolithContent(monolithContent): + """ + Parses the monolith file and returns a "table" (addresses are int's) of the following structure: + table[n-th_entry][0] = virtual(int), ...[1] = physical(int), ...[2] = offset(int), ...[3] = size(int), ...[4] = section(str) + Offset = physical - virtual + :param monolithContent: Content from monolith as text (all lines) + :return: list of lists + """ + table = [] # "headers": virtual, physical, size, section + monolithPattern = emma_libs.ghsMapfileRegexes.UpperMonolithPattern() + for line in monolithContent: + match = re.search(emma_libs.ghsMapfileRegexes.UpperMonolithPattern().pattern, line) + if match: + table.append([ + int(match.group(monolithPattern.Groups.virtualAdress), 16), + int(match.group(monolithPattern.Groups.physicalAdress), 16), + (int(match.group(monolithPattern.Groups.physicalAdress), 16) - int(match.group(monolithPattern.Groups.virtualAdress), 16)), + int(match.group(monolithPattern.Groups.size), 16), + match.group(monolithPattern.Groups.section) + ]) + return table + + # Load and register Monoliths + monolithContent = loadMonolithFile(configuration, self.noPrompt) + configuration["sortMonolithTabularised"] = 
tabulariseAndSortMonolithContent(monolithContent) + + @staticmethod + def __checkNumberOfFoundMapfiles(configId, configuration): + """ + Function to check the number of found mapfiles in a configuration. + :param configId: The configId the configuration belongs to. + :param configuration: The configuration in which the found mapfiles need to be checked. + :return: True if at least one mapfile was found, False otherwise. + """ + result = False + # Checking the number of the mapfiles that were found with the regexes + if configuration["patterns"]["mapfiles"]: + # If there is at least one, then the check was passed + result = True + else: + sc().warning("No mapfiles found for configID: \"" + configId + "\"!") + return result + + @staticmethod + def __checkMonolithSections(configuration, noprompt): + # pylint: disable=too-many-locals + # Rationale: The code quality would not increase significantly from fewer local variables. + """ + The function collects the VAS sections from the monolith files and from the global config and checks them for consistency. + :param configuration: The configuration whose monolith sections need to be checked. + :param noprompt: True if no user prompts shall be made, False otherwise, in which case a program exit will be made. + :return: True if the monolith sections were ok, False otherwise. 
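The consistency check performed by `__checkMonolithSections` reduces to a set difference between the sections found in the monolith file and the sections assigned to a VAS in the configuration. A minimal standalone sketch (the section names are hypothetical):

```python
def findUnassignedSections(foundInMonolith, foundInConfigId):
    """Sections that appear in the monolith file but are not assigned to any VAS in the config."""
    return sorted(set(foundInMonolith) - set(foundInConfigId))

# Hypothetical section names for illustration
missing = findUnassignedSections(
    [".vas_text", ".vas_data", ".vas_bss"],
    [".vas_text", ".vas_data"],
)
print(missing)  # ['.vas_bss'] -> the user is warned about these sections
```

When the difference is non-empty, the real implementation lists the sections and asks the user whether to continue.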
+ """ + result = False + foundInConfigID = [] + foundInMonolith = [] + + # Extract sections from monolith file + monolithPattern = emma_libs.ghsMapfileRegexes.UpperMonolithPattern() + + # Check if a monolith was loaded to this configID that can be checked + # In case there was no monolith loaded, the configuration does not need it so the check is passed + if configuration["monolithLoaded"]: + for entry in configuration["patterns"]["monoliths"]: + with open(configuration["patterns"]["monoliths"][entry]["associatedFilename"], "r") as monolithFile: + monolithContent = monolithFile.readlines() + for line in monolithContent: + lineComponents = re.search(monolithPattern.pattern, line) + if lineComponents: # if match + foundInMonolith.append(lineComponents.group(monolithPattern.Groups.section)) + + for vas in configuration["virtualSections"]: + foundInConfigID += configuration["virtualSections"][vas] + + # Compare sections from configID with found in monolith file + sectionsNotInConfigID = set(foundInMonolith) - set(foundInConfigID) + if sectionsNotInConfigID: + sc().warning("Monolith File has the following sections. You might want to add it them the respective VAS in " + configuration["virtualSectionsPath"] + "!") + for section in sectionsNotInConfigID: + sc().info(LISTING_INDENT, section) + sc().warning("Still continue? (y/n)") + if noprompt: + sc().wwarning("No prompt is active. 
Chose `y` to continue.") + text = "y" + else: + text = input("> ") + if text != "y": + sys.exit(-10) + else: + result = True + else: + result = True + + return result diff --git a/emma_libs/ghsMapfileProcessor.py b/emma_libs/ghsMapfileProcessor.py new file mode 100644 index 0000000..946e433 --- /dev/null +++ b/emma_libs/ghsMapfileProcessor.py @@ -0,0 +1,242 @@ +""" +Emma - Emma Memory and Mapfile Analyser +Copyright (C) 2019 The Emma authors + +This program is free software: you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation, either version 3 of the License, or +(at your option) any later version. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. + +You should have received a copy of the GNU General Public License +along with this program. If not, see +""" + + +import os +import re +import bisect +import collections + +from pypiscout.SCout_Logger import Logger as sc + +from shared_libs.stringConstants import * # pylint: disable=unused-wildcard-import,wildcard-import +import shared_libs.emma_helper +import emma_libs.mapfileProcessor +import emma_libs.ghsMapfileRegexes +import emma_libs.memoryEntry + + +class GhsMapfileProcessor(emma_libs.mapfileProcessor.MapfileProcessor): + """ + A class to handle mapfile processing for GHS specific mapfiles. + """ + def __init__(self): + self.analyseDebug = None + + def processMapfiles(self, configId, configuration, analyseDebug): + """ + Function to process mapfiles. + :param configId: ConfigId the configuration belongs to. + :param configuration: The configuration that contains the information about the mapfiles that needs to be processed. + :param analyseDebug: True if the debug sections and objects need to be analysed as well, False otherwise. 
+ :return: A tuple of two lists containing MemEntry objects representing the sections and objects that were extracted from the mapfiles. + """ + self.analyseDebug = analyseDebug + + sectionCollection = self.__importData(configId, configuration, emma_libs.ghsMapfileRegexes.ImageSummaryPattern()) + objectCollection = self.__importData(configId, configuration, emma_libs.ghsMapfileRegexes.ModuleSummaryPattern()) + + return sectionCollection, objectCollection + + def __importData(self, configId, configuration, defaultRegexPattern): + # pylint: disable=too-many-locals + # Rationale: This is legacy code, it will not be changed. + + """ + Function to import data from the mapfiles. + :param configId: The configId to which the configuration belongs. + :param configuration: A configuration that contains the information about the mapfiles. + :param defaultRegexPattern: The default regex pattern that shall be used for the data extraction. + :return: A list of MemEntry objects created from the extracted mapfile data. + """ + result = [] + memoryRegionsToExcludeFromMapfiles = {} + + # Reading the hexadecimal offset value from the addressSpaces*.json. This value is optional; if it is not defined, we assume 0. 
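The optional offset lookup described in the comment above can be isolated into a small helper. A sketch, assuming the JSON layout used by the surrounding code (an `addressSpaces` dictionary with an optional hexadecimal `"offset"` key):

```python
def readOptionalOffset(addressSpaces):
    """The hexadecimal "offset" key is optional; when it is absent, 0 is assumed."""
    return int(addressSpaces["offset"], 16) if "offset" in addressSpaces else 0

print(readOptionalOffset({"offset": "0x100"}))  # 256
print(readOptionalOffset({}))                   # 0
```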
+ offset = int(configuration["addressSpaces"]["offset"], 16) if "offset" in configuration["addressSpaces"].keys() else 0 + # Defining a list of sections that will be excluded (including the objects residing in it) from the analysis based on the value that was loaded from the arguments + listOfExcludedSections = [".unused_ram"] if self.analyseDebug else SECTIONS_TO_EXCLUDE + + # Importing every mapfile that was found + for mapfile in configuration["patterns"]["mapfiles"]: + # Opening the mapfile and reading in its content + with open(configuration["patterns"]["mapfiles"][mapfile]["associatedFilename"], "r") as mapfileFileObject: + mapfileContent = mapfileFileObject.readlines() + + # Storing the name of the mapfile + mapfileName = os.path.split(configuration["patterns"]["mapfiles"][mapfile]["associatedFilename"])[-1] + + # Storing the list of ignored memory areas to this mapfile + # This will be a necessary parameter for the MapfileProcessor::fillOutMemoryRegionsAndMemoryTypes() + if "memRegionExcludes" in configuration["patterns"]["mapfiles"][mapfile]: + memoryRegionsToExcludeFromMapfiles[mapfileName] = configuration["patterns"]["mapfiles"][mapfile]["memRegionExcludes"] + + # If there is a VAS defined for the mapfile, then the addresses found in it are virtual addresses, otherwise they are physical addresses + mapfileContainsVirtualAddresses = ("VAS" in configuration["patterns"]["mapfiles"][mapfile]) + # Loading the regex pattern that will be used for this mapfile + regexPatternData = self.__getRegexPattern(defaultRegexPattern, configuration["patterns"]["mapfiles"][mapfile]) + + # Analysing the mapfile with the loaded regex line-by-line + lineNumber = 0 + for line in mapfileContent: + lineNumber += 1 + + # Extracting the components from the line with the regex, if there was no match, we will continue with the next line + lineComponents = re.search(regexPatternData.pattern, line) + if lineComponents: + # If the section name of this element is in the list that we want 
to exclude then we can continue with the next line + if lineComponents.group(regexPatternData.Groups.section).rstrip() in listOfExcludedSections: + continue + # If this mapfile contains virtual addresses then we need to translate the address of this element + vasName = None + vasSectionName = None + if mapfileContainsVirtualAddresses: + # Name of the virtual address space to which the elements of this mapfile belong + vasName = configuration["patterns"]["mapfiles"][mapfile]["VAS"] + # List of the virtual sections that belong to this mapfile. The address translation is done with the help of these sections. + virtualSectionsOfThisMapfile = configuration["virtualSections"][vasName] + # The part of the monolith file that contains the address translation data + monolithFileContent = configuration["sortMonolithTabularised"] + # Calculating the physical address and getting the name of the virtual section based on which the translation was done + physicalAddress, vasSectionName = self.__translateAddress(lineComponents.group(regexPatternData.Groups.origin), + lineComponents.group(regexPatternData.Groups.size), + virtualSectionsOfThisMapfile, + monolithFileContent) + # Check whether the address translation failed + if physicalAddress is None: + warningSectionName = lineComponents.group(regexPatternData.Groups.section).rstrip() + warningObjectName = ("::" + lineComponents.group(regexPatternData.Groups.module).rstrip()) if hasattr(regexPatternData.Groups, "module") else "" + sc().warning("The address translation failed for the element: \"" + mapfile + "(line " + str(lineNumber) + ")::" + + warningSectionName + warningObjectName + " (size: " + str(int(lineComponents.group(regexPatternData.Groups.size), 16)) + " B)\" of the configID \"" + + configId + "\"!") + # We will not store this element and continue with the next one + continue + # In case the mapfile contains physical addresses, no translation is needed, we are just reading the address that is in the mapfile + 
else: + physicalAddress = int(lineComponents.group(regexPatternData.Groups.origin), 16) - offset + + # Determining the addressLength + addressLength = int(lineComponents.group(regexPatternData.Groups.size), 16) + # Check whether the address is valid + if addressLength < 0: + sc().warning("Negative addressLength found.") + + # Creating the compiler specific data that we will store in the memEntry + # This will be a collections.OrderedDict as the MemEntry requires it + compilerSpecificData = collections.OrderedDict() + compilerSpecificData["DMA"] = (vasName is None) + compilerSpecificData["vasName"] = vasName + compilerSpecificData["vasSectionName"] = vasSectionName + + # Creating a MemEntry object from the data that we got from the mapfile + memEntry = emma_libs.memoryEntry.MemEntry(configID=configId, + mapfileName=mapfileName, + addressStart=physicalAddress, + addressLength=addressLength, + sectionName=lineComponents.group(regexPatternData.Groups.section).rstrip(), + objectName=regexPatternData.getModuleName(lineComponents), + compilerSpecificData=compilerSpecificData) + + # Finding the index, where we need to insert the memEntry + index = bisect.bisect_right(result, memEntry) + # Inserts at index, elements to right will be pushed "one index up" + result.insert(index, memEntry) + + # Filling out the memory regions and memory types and ignoring the entries that did not have a match + super().fillOutMemoryRegionsAndMemoryTypes(result, configuration, True, memoryRegionsToExcludeFromMapfiles) + + return result + + @staticmethod + def __getRegexPattern(defaultPattern: emma_libs.ghsMapfileRegexes.RegexPatternBase, mapfileEntry): + """ + Function to determine whether the default regex patterns can be used for the mapfile processing or a unique pattern was configured in the configuration. + :param defaultPattern: The default regex patterns that shall be used if no unique pattern was defined for the mapfile. 
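The sorted insertion used in `__importData` (`bisect.bisect_right` followed by `insert`) keeps the result list ordered as entries arrive. A standalone sketch with a minimal stand-in for `MemEntry` (ordering by start address is an assumption for illustration):

```python
import bisect

class Entry:
    """Minimal stand-in for MemEntry, ordered by its start address."""
    def __init__(self, addressStart):
        self.addressStart = addressStart
    def __lt__(self, other):
        return self.addressStart < other.addressStart

result = []
for start in (0x400, 0x100, 0x200):
    entry = Entry(start)
    # bisect_right finds the insertion point; elements to the right are pushed "one index up"
    result.insert(bisect.bisect_right(result, entry), entry)

starts = [e.addressStart for e in result]  # [0x100, 0x200, 0x400]
```

Compared with appending everything and sorting once at the end, this keeps the collection usable in sorted order at every point of the loop.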
+ :param mapfileEntry: The mapfile entry of the configuration that may contain unique regex patterns defined for this mapfile. + :return: The regex pattern that shall be used during the mapfile processing. + """ + regexPattern = defaultPattern + + if isinstance(defaultPattern, emma_libs.ghsMapfileRegexes.ImageSummaryPattern): + if UNIQUE_PATTERN_SECTIONS in mapfileEntry.keys(): + # Create new instance of pattern class when a unique pattern is needed + sectionPattern = emma_libs.ghsMapfileRegexes.ImageSummaryPattern() + # If a unique regex pattern is needed, e.g. when the mapfile has a different format and cannot be parsed with the default pattern + # Overwrite default pattern with unique one + sectionPattern.pattern = mapfileEntry[UNIQUE_PATTERN_SECTIONS] + regexPattern = sectionPattern + elif isinstance(defaultPattern, emma_libs.ghsMapfileRegexes.ModuleSummaryPattern): + if UNIQUE_PATTERN_OBJECTS in mapfileEntry.keys(): + # Create new instance of pattern class when a unique pattern is needed + objectPattern = emma_libs.ghsMapfileRegexes.ModuleSummaryPattern() + # If a unique regex pattern is needed, e.g. when the mapfile has a different format and cannot be parsed with the default pattern + # Overwrite default pattern with unique one + objectPattern.pattern = mapfileEntry[UNIQUE_PATTERN_OBJECTS] + regexPattern = objectPattern + else: + sc().error("Unexpected default regex pattern (" + type(defaultPattern).__name__ + ")!") + + return regexPattern + + @staticmethod + def __translateAddress(elementVirtualStartAddress, elementSize, virtualSectionsOfThisMapfile, monolithFileContent): + # pylint: disable=too-many-locals + # Rationale: This is legacy code, it will not be changed. + + """ + Calculates the physical address for an element (= section or object). + The patterns config file can assign a VAS to a mapfile. Every VAS has VAS sections that are defined in the + virtualSections file. 
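The translation loop described in this docstring can be sketched against the tabularised monolith rows. A simplified version (all addresses and section names are hypothetical; unlike the real method, it returns `(None, None)` outright when no containing section is found):

```python
def translateAddress(elementStart, elementSize, vasSections, monolithTable):
    """Return (physicalStart, sectionName), or (None, None) if no virtual section contains the element."""
    VIRTUAL, OFFSET, SIZE, SECTION = 0, 2, 3, 4  # column indexes of one monolith row
    for row in monolithTable:
        if row[SECTION] not in vasSections:
            continue
        # Closed intervals; zero-length entries collapse to their start address
        sectionStart = row[VIRTUAL]
        sectionEnd = sectionStart + (row[SIZE] - 1 if row[SIZE] > 0 else 0)
        elementEnd = elementStart + (elementSize - 1 if elementSize > 0 else 0)
        if sectionStart <= elementStart <= elementEnd <= sectionEnd:
            return elementStart + row[OFFSET], row[SECTION]
    return None, None

# row layout: [virtual, physical, offset, size, section]; offset = physical - virtual
table = [[0x1000, 0x8000, 0x7000, 0x200, ".vas_text"]]
physical, section = translateAddress(0x1010, 0x10, [".vas_text"], table)
# physical == 0x8010, section == ".vas_text"
```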
The monolith file contains all the virtual sections of all the VAS-es with data + based on which the address translation can be done. + In order to do the translation we loop through the entries in the monolith file and see whether the entry belongs + to the VAS of this element. If so, we need to make sure that the element resides within the virtual section. + If that is also true, the address translation can be easily done with the data found in the monolith file. + :param elementVirtualStartAddress: The start address of the element in the VAS + :param elementSize: The size of the element in bytes + :param virtualSectionsOfThisMapfile: List of virtual sections that belong to the VAS of the element + :param monolithFileContent: List of all the virtual sections from the monolith file. + :return: Physical start address of the element and the name of the virtual section the translation was done with. + """ + # These are the indexes used for accessing the elements of one monolith file entry + monolithIndexVirtual = 0 + monolithIndexOffset = 2 + monolithIndexSize = 3 + monolithIndexSectionName = 4 + # Converting the received start address and size to decimal + _, elementVirtualStartAddress = shared_libs.emma_helper.unifyAddress(elementVirtualStartAddress) + _, elementSize = shared_libs.emma_helper.unifyAddress(elementSize) + # Setting up the return values with default values + elementPhysicalStartAddress = None + virtualSectionName = None + + # We will go through all the entries in the monolith file to find the virtual section this element belongs to + for entry in monolithFileContent: + # If the element belongs to this virtual section we will try to do the address translation + virtualSectionName = entry[monolithIndexSectionName] + if virtualSectionName in virtualSectionsOfThisMapfile: + # Setting up data for the translation (for the end addresses we need to be careful in case we have zero lengths) + virtualSectionStartAddress = entry[monolithIndexVirtual] + 
virtualSectionEndAddress = virtualSectionStartAddress + (entry[monolithIndexSize] - 1) if entry[monolithIndexSize] > 0 else virtualSectionStartAddress + elementVirtualEndAddress = elementVirtualStartAddress + (elementSize - 1) if elementSize > 0 else elementVirtualStartAddress + # If the element is contained by this virtual section then we will use this one for the translation + if virtualSectionStartAddress <= elementVirtualStartAddress <= elementVirtualEndAddress <= virtualSectionEndAddress: + addressTranslationOffset = entry[monolithIndexOffset] + elementPhysicalStartAddress = elementVirtualStartAddress + addressTranslationOffset + # FIXME: maybe it should be displayed/captured if we get more than one match! (It should never happen but still...) (MSc) + break + return elementPhysicalStartAddress, virtualSectionName diff --git a/emma_libs/mapfileRegexes.py b/emma_libs/ghsMapfileRegexes.py similarity index 75% rename from emma_libs/mapfileRegexes.py rename to emma_libs/ghsMapfileRegexes.py index bf10ff2..bc95906 100644 --- a/emma_libs/mapfileRegexes.py +++ b/emma_libs/ghsMapfileRegexes.py @@ -24,10 +24,13 @@ import re -import pypiscout as sc +from pypiscout.SCout_Logger import Logger as sc class Groups: + # pylint: disable=too-few-public-methods + # Rationale: This is a special class to be used for mapfile processing, it does not have to have more public methods. + """ Helper class for regex groups """ @@ -44,15 +47,21 @@ def __init__(self): class RegexPatternBase: + # pylint: disable=too-few-public-methods + # Rationale: This is a special class to be used for mapfile processing, it does not have to have more public methods. 
+ """ Base class for module/image summary """ - def __init__(self, defaultSectionOffset=None): + def __init__(self): self.pattern = None self.Groups = Groups() class ModuleSummaryPattern(RegexPatternBase): + # pylint: disable=too-few-public-methods + # Rationale: This is a special class to be used for mapfile processing, it does not have to have more public mehtods. + """ Class holding the regex pattern for module summary """ @@ -76,6 +85,9 @@ def getModuleName(self, lineComponents): class ImageSummaryPattern(RegexPatternBase): + # pylint: disable=too-few-public-methods + # Rationale: This is a special class to be used for mapfile processing, it does not have to have more public mehtods. + """ Class holding the regex pattern for image summary """ @@ -96,8 +108,8 @@ def __init__(self): self.Groups.size = "sizeHex" self.Groups.sectionOffset = "sectionOffset" - # TODO : Why do we have this here and why dont we have a "getImageName" instead? (AGK) - def getModuleName(self, lineComponents): + def getModuleName(self, lineComponents): # pylint: disable=unused-argument, no-self-use + # Rationale: Sections do not have object names. This function has to have the same prototype as the other subclasses of the RegexPatternBase. """ :param lineComponents: A mapfile line - here not needed (image has no module names (>> thus empty string)) :return: Empty string (because image has no module names) @@ -106,6 +118,9 @@ def getModuleName(self, lineComponents): class UpperMonolithPattern(RegexPatternBase): + # pylint: disable=too-few-public-methods + # Rationale: This is a special class to be used for mapfile processing, it does not have to have more public mehtods. 
+ """ Class holding regex pattern for virtual <-> physical section mapping in monolith file """ @@ -126,11 +141,14 @@ def __init__(self): class LowerMonolithPattern(RegexPatternBase): + # pylint: disable=too-few-public-methods + # Rationale: This is a special class to be used for mapfile processing, it does not have to have more public mehtods. + """ Class holding the regex pattern for the lower part of the monolith file """ def __init__(self): - super().__init__(defaultSectionOffset=0) + super().__init__() self.pattern = re.compile(r""" # ^\s{4}[\.*\w+]+\s+0x[0-9a-f]+\s+[0x]*[0-9a-f]+\s\(\w+\s*/[\w+,\s]+\) (?:\s{4})(?P
        [\.*\w+]+) # Section @@ -143,9 +161,10 @@ def __init__(self): self.Groups.origin = "address" # address is equvialent to base Address self.Groups.size = "size" - sc.info("Preparing lower monolith summary...") + sc().info("Preparing lower monolith summary...") - def getModuleName(self, lineComponents): + def getModuleName(self, lineComponents): # pylint: disable=unused-argument, no-self-use + # Rationale: Sections do not have object names. This function has to have the same prototype as the other subclasses of the RegexPatternBase. """ :param lineComponents: A mapfile line - here not needed (image has no module names (>> thus empty string)) :return: Empty string (because image has no module names) diff --git a/emma_libs/mapfileProcessor.py b/emma_libs/mapfileProcessor.py new file mode 100644 index 0000000..0414299 --- /dev/null +++ b/emma_libs/mapfileProcessor.py @@ -0,0 +1,125 @@ +""" +Emma - Emma Memory and Mapfile Analyser +Copyright (C) 2019 The Emma authors + +This program is free software: you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation, either version 3 of the License, or +(at your option) any later version. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. + +You should have received a copy of the GNU General Public License +along with this program. If not, see +""" + + +import abc + +from pypiscout.SCout_Logger import Logger as sc + +from shared_libs.stringConstants import * # pylint: disable=unused-wildcard-import,wildcard-import + + +class MapfileProcessor(abc.ABC): + """ + A partly abstract parent class for the compiler specific mapfile processors. + Defining interfaces and common functionality for subclasses that will be used for mapfile processing. 
+ """ + @abc.abstractmethod + def processMapfiles(self, configId, configuration, analyseDebug): + """ + Abstract function to process mapfiles. + :param configId: The configId to which the configuration belongs to. + :param configuration: The configuration based on which the mapfiles can be processed. + :param analyseDebug: True if the debug sections and objects need to be analysed as well, False otherwise. + :return: A tuple of two lists of MemEntry objects representing the sections and objects. + Illustration: (sectionCollection, objectCollection), where sectionCollection is list(MemEntry) and objectCollection is list(MemEntry). + """ + + @staticmethod + def fillOutMemoryRegionsAndMemoryTypes(listOfMemEntryObjects, configuration, removeElementsWithoutMemoryRegionOrType, memoryRegionsToExcludeFromMapfiles=None): + """ + Fills out the memory type and the memory regions in a list of MemEntry objects. + This function needs to be called by the subclasses of this class during the mapfile processing, + after creating a list of MemEntry objects that have the following data filled out in them: + - configID + - mapfileName + - addressStart + - addressLength + - sectionName + - objectName + - compilerSpecificData + :param listOfMemEntryObjects: The list or MemEntry objects that will be updated. + :param configuration: That belongs to the same configId as the MemEntry objects. + :param removeElementsWithoutMemoryRegionOrType: True if elements to which no memory type or region was found shall be removed, False otherwise. + :param memoryRegionsToExcludeFromMapfiles: Dictionary, based on which MemEntry objects can be excluded if they belong to a memory region that is ignored for the mapfile, the object was created from. + The dictionary contains mapfile names as keys and lists of strings with memory region names as values. + If the functionality is not needed, then it shall be set to None. 
+ :return: None + """ + def printElementRemovalMessage(memEntry, loggerLevel, reason): + """ + Function to print out a message informing the user that an element will be removed. + :param memEntry: MemEntry that will be removed. + :param loggerLevel: Loglevel with which the message needs to be printed. + :param reason: Reason why the element will be removed. + :return: None + """ + objectName = ("::" + memEntry.objectName) if memEntry.objectName else "" + loggerLevel("The element: \"" + memEntry.mapfile + "::" + memEntry.sectionName + objectName + + " (@" + memEntry.addressStartHex() + ", size: " + str(memEntry.addressLength) + " B)\" of the configID \"" + + memEntry.configID + "\" was removed. Reason: " + reason) + + def isElementMarkedAsExcluded(excludedMemoryRegionsFromMapfiles, memEntry): + """ + Function to check whether the element was marked as excluded. + :param excludedMemoryRegionsFromMapfiles: See docstring of fillOutMemoryRegionsAndMemoryTypes(). + :param memEntry: MemEntry object for which it should be decided whether it is excluded or not. + :return: True if element was marked as excluded, False otherwise. 
+ """ + result = False + # If there is an exclusion list for memory regions per mapfiles + if excludedMemoryRegionsFromMapfiles is not None: + # If there is an excluded memory region for the mapfile of the memory entry + if memEntry.mapfile in excludedMemoryRegionsFromMapfiles.keys(): + # If the memory region of the mem entry is excluded + if memEntry.memTypeTag in excludedMemoryRegionsFromMapfiles[memEntry.mapfile]: + result = True + return result + + listOfElementsToKeep = [] + memoryCandidates = configuration["addressSpaces"]["memory"] + + # For every memEntryObject + for element in listOfMemEntryObjects: + # For every defined memory region + for memoryRegion in memoryCandidates: + # If the element is in this memoryRegion + # For elements that do not have addressEnd the addressStart comparison is enough + if int(memoryCandidates[memoryRegion]["start"], 16) <= element.addressStart: + if element.addressEnd() is None or (element.addressEnd() <= int(memoryCandidates[memoryRegion]["end"], 16)): + # Then we store the memoryRegion data in the element + element.memTypeTag = memoryRegion + element.memType = memoryCandidates[memoryRegion]["type"] + # If this region is not excluded for the mapfile the element belongs to then we will keep it + if not isElementMarkedAsExcluded(memoryRegionsToExcludeFromMapfiles, element): + listOfElementsToKeep.append(element) + else: + printElementRemovalMessage(element, sc().debug, "Its memory region was excluded for this mapfile!") + break + # If we have reached this point, then we did not find a memory region + else: + # If we do not have to remove elements without a memory region then we will fill it out with the default values and keep it + if not removeElementsWithoutMemoryRegionOrType: + element.memTypeTag = UNKNOWN_MEM_REGION + element.memType = UNKNOWN_MEM_TYPE + listOfElementsToKeep.append(element) + # If we have to remove it, then we will print a report + else: + printElementRemovalMessage(element, sc().warning, "It does not 
belong to any of the memory regions!") + # Overwriting the content of the list of memory entry objects with the elements that we will keep + listOfMemEntryObjects[:] = listOfElementsToKeep diff --git a/emma_libs/mapfileProcessorFactory.py b/emma_libs/mapfileProcessorFactory.py new file mode 100644 index 0000000..f28e1df --- /dev/null +++ b/emma_libs/mapfileProcessorFactory.py @@ -0,0 +1,41 @@ +""" +Emma - Emma Memory and Mapfile Analyser +Copyright (C) 2019 The Emma authors + +This program is free software: you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation, either version 3 of the License, or +(at your option) any later version. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. + +You should have received a copy of the GNU General Public License +along with this program. If not, see +""" + + +from pypiscout.SCout_Logger import Logger as sc + +from shared_libs.stringConstants import * # pylint: disable=unused-wildcard-import,wildcard-import +import emma_libs.ghsMapfileProcessor + + +def createSpecificMapfileProcesor(compiler, **kwargs): + """ + A factory for creating an object of one of the subclasses of the MapfileProcessor class. + The concrete subclass is selected based on the received compiler name. + :param compiler: The compiler name. + :param kwargs: The arguments that will be forwarded to the constructor during the object creation. + :return: An object of the selected subclass of the MapfileProcessor. 
+ """ + + mapfileProcessor = None + if COMPILER_NAME_GHS == compiler: + mapfileProcessor = emma_libs.ghsMapfileProcessor.GhsMapfileProcessor(**kwargs) + else: + sc().error("Unexpected compiler value: " + compiler) + + return mapfileProcessor diff --git a/emma_libs/memoryEntry.py b/emma_libs/memoryEntry.py index 04f99b6..4067604 100644 --- a/emma_libs/memoryEntry.py +++ b/emma_libs/memoryEntry.py @@ -16,74 +16,78 @@ along with this program. If not, see """ -# This file contains the parser and internal data structures for holding the mapfile information. -# The memEntry class stores a mapfile-element. -# The MemoryManager class handles parsing, categorisation and overlap/containment flagging. +import abc +import collections +from pypiscout.SCout_Logger import Logger as sc -import sys - -import pypiscout as sc - +from shared_libs.stringConstants import * # pylint: disable=unused-wildcard-import,wildcard-import import shared_libs.emma_helper class MemEntry: - def __init__(self, tag, vasName, vasSectionName, section, moduleName, mapfileName, configID, memType, category, addressStart, addressLength=None, addressEnd=None): - """ - Class storing one memory entry + meta data - Chose addressLength or addressEnd (one and only one of those two must be given) - - Meta data arguments - :param tag: [string] Arbitrary name (f.ex. IO_RAM, CM_RAM, ...) in order to distinguish not only by memType - :param memType: [string] {int, ext} flash, {int, ext} RAM - :param vasName: [string] name of the corresponding virtual address space (VAS) - :param vasSectionName [string] name of the vasSection the address translation was done for this element with - :param section: [string] section name; i.e.: `.text`, `.debug_abbrev`, `.rodata`, ... 
- :param moduleName: [string] name of the module - :param mapfileName: [string] name of the mapfile where we found this entry - :param configID: [class/string(=members)] defining the micro system - :param category: [string] = classifier (/ logical grouping of known modules) - # dma: [bool] true if we can use physical addresses directly (false for address translation i.e. via monolith file) - - Address related arguments - :param addressStart: [string(hex) or int(dec)] start address - :param addressLength: [string(hex) or int(dec) or nothing(default = None)] address length - :param addressEnd: [string(hex) or int(dec) or nothing(default = None)] end address - """ - - # Check if we got hex or dec addresses and decide how to convert those - # Start address - self.addressStartHex, self.addressStart = shared_libs.emma_helper.unifyAddress(addressStart) - - if addressLength is None: - self.__setAddressesGivenEnd(addressEnd) - elif addressEnd is None: - self.__setAddressesGivenLength(addressLength) - elif addressLength is None and addressEnd is None: - sc.error("Either addressLength or addressEnd must be given!") - sys.exit(-10) - else: - # TODO: Add verbose output here (MSc) - # TODO: if self.args.verbosity <= 1: - sc.warning("MemEntry: addressLength AND addressEnd were both given. The addressLength will be used and the addressEnd will be recalculated based on it.") - self.__setAddressesGivenLength(addressLength) - - self.memTypeTag = tag # Differentiate in more detail between memory sections/types - self.vasName = vasName - self.vasSectionName = vasSectionName - if vasName is None: # Probably we can just trust that a VAS name of `None` or "" is give; Anyway this seems more safe to me - # Direct memory access - self.dma = True - else: - self.dma = False + # pylint: disable=too-many-instance-attributes + # Rationale: This class needs to store all the attributes of an entry. + """ + A class to represent an entry in the memory. 
This is a generic class; it can represent both sections and objects.
+ """ - self.section = section # Section type; i.e.: `.text`, `.debug_abbrev`, `.rodata`, ... - self.moduleName = moduleName # Module name (obj files, ...) - self.mapfile = mapfileName # Shows mapfile association (belongs to mapfile `self.mapfile`) self.configID = configID + self.mapfile = mapfileName + + # Converting the address related parameters to int + if addressStart is not None: + _, addressStart = shared_libs.emma_helper.unifyAddress(addressStart) + if addressLength is not None: + _, addressLength = shared_libs.emma_helper.unifyAddress(addressLength) + if addressEnd is not None: + _, addressEnd = shared_libs.emma_helper.unifyAddress(addressEnd) + + self.addressStart = addressStart + + # Initializing the length to None. This will be later overwritten, but the member has to be created in __init__() + self.addressLength = None + if addressLength is None and addressEnd is None: + sc().error("Either addressLength or addressEnd must be given!") + elif addressEnd is not None and addressLength is None: + self.setAddressesGivenEnd(addressEnd) + elif addressLength is not None and addressEnd is None: + self.setAddressesGivenLength(addressLength) + else: + sc().warning("MemEntry: addressLength AND addressEnd were both given. 
The addressLength will be used.") + self.setAddressesGivenLength(addressLength) + + self.sectionName = sectionName + self.objectName = objectName + self.memType = memType - self.category = category # = classifier (/grouping) + self.memTypeTag = memTypeTag + self.category = category + + self.compilerSpecificData = None + if isinstance(compilerSpecificData, collections.OrderedDict): + self.compilerSpecificData = compilerSpecificData + else: + sc().error("The compilerSpecificData has to be of type " + type(collections.OrderedDict).__name__ + " instad of " + type(compilerSpecificData).__name__ + "!") # Flags for overlapping, containment and duplicate self.overlapFlag = None @@ -92,42 +96,17 @@ def __init__(self, tag, vasName, vasSectionName, section, moduleName, mapfileNam self.containingOthersFlag = None self.overlappingOthersFlag = None - # Original values. These are stored in case the element is moved later. Then the original values will be still accessible. - self.addressStartOriginal = self.addressStartHex + # Original values. These are stored in case the element is changed during processing later later. Then the original values will be still visible in the report. 
+ self.addressStartOriginal = self.addressStart self.addressLengthOriginal = self.addressLength - self.addressLengthHexOriginal = self.addressLengthHex - self.addressEndOriginal = self.addressEndHex - - def __setAddressesGivenEnd(self, addressEnd): - # Set addressEnd and addressEndHex - self.addressEndHex, self.addressEnd = shared_libs.emma_helper.unifyAddress(addressEnd) - # Calculate addressLength - self.addressLength = self.addressEnd - self.addressStart + 1 - self.addressLengthHex = hex(self.addressLength) - - def __setAddressesGivenLength(self, addressLength): - # Set addressLength - self.addressLengthHex, self.addressLength = shared_libs.emma_helper.unifyAddress(addressLength) - # Calculate addressEnd - self.addressEnd = (self.addressStart + self.addressLength - 1) if 0 < self.addressLength else self.addressStart - self.addressEndHex = hex(self.addressEnd) - def equalConfigID(self, other): + def __eq__(self, other): """ - Function to evaluate whether two sections have the same config ID - :return: + This is not implemented because we shall compare MemEntry objects trough the subclasses of the MemEntryHandler class. + :param other: another MemEntry object. 
+        :return: Never returns; a NotImplementedError is raised instead.
+ """ + return self.__calculateAddressEnd(self.addressStart, self.addressLength) -# TODO : Evaluate, whether we could delete this class and only have the MemEntry (AGK) -class SectionEntry(MemEntry): - def __init__(self, tag, vasName, vasSectionName, section, moduleName, mapfileName, configID, memType, category, addressStart, addressLength=None, addressEnd=None): - super().__init__(tag, vasName, vasSectionName, section, moduleName, mapfileName, configID, memType, category, addressStart, addressLength, addressEnd) + def addressEndHex(self): + """ + Function to get the addressEnd in hex. + :return: The addressEnd in hex as string if addressLength is not 0, "" otherwise. + """ + return hex(self.addressEnd()) if self.addressEnd() is not None else "" - def __eq__(self, other): - if isinstance(other, MemEntry): - return self.addressStart == other.addressStart and self.addressEnd == other.addressEnd and self.section == other.section and self.configID == other.configID and self.mapfile == other.mapfile and self.vasName == other.vasName + def addressStartHexOriginal(self): + """ + Function to get the original addressStart in hex. + This means the addressStart value that was given to the object at construction. + """ + return hex(self.addressStartOriginal) + + def addressLengthHexOriginal(self): + """ + Function to get the original addressLength in hex. + This means the addressLength value that was given to (or calculated) the object at construction. + """ + return hex(self.addressLengthOriginal) + + def addressEndOriginal(self): + """ + Function to get the original addressEnd. + This means the addressEnd value calculated from the addressStart and addressLength that was given to (or calculated) the object at construction. + """ + return self.__calculateAddressEnd(self.addressStartOriginal, self.addressLengthOriginal) + + def addressEndHexOriginal(self): + """ + Function to get the original addressEnd in hex. 
+        :return: The addressEndOriginal in hex as a string if addressLengthOriginal is not 0, "" otherwise.
+ if addressLength > 0: + result = addressStart + addressLength - 1 + return result -# TODO : Evaluate, whether we could delete this class and only have the MemEntry (AGK) -class ObjectEntry(MemEntry): - def __init__(self, tag, vasName, vasSectionName, section, moduleName, mapfileName, configID, memType, category, addressStart, addressLength=None, addressEnd=None): - super().__init__(tag, vasName, vasSectionName, section, moduleName, mapfileName, configID, memType, category, addressStart, addressLength, addressEnd) +class MemEntryHandler(abc.ABC): + """ + Abstract class describing an interface that a class that´s purpose is the handling of MemEntry objects shall have. + """ def __eq__(self, other): - if isinstance(other, MemEntry): - return self.addressStart == other.addressStart and \ - self.addressEnd == other.addressEnd and \ - self.section == other.section and \ - self.moduleName == other.moduleName and \ - self.configID == other.configID and \ - self.mapfile == other.mapfile and \ - self.vasName == other.vasName and \ - self.vasSectionName == other.vasSectionName - else: - return NotImplemented + raise NotImplementedError("This member shall not be used, use the isEqual() instead!") + + @staticmethod + @abc.abstractmethod + def isEqual(first, second): + """ + Function to check whether two MemEntry objects are equal. + :param first: MemEntry object. + :param second: MemEntry object. + :return: True if the first and second objects are equal, false otherwise. + """ - def __hash__(self): - return hash((self.addressStart, self.addressEnd, self.section, self.moduleName, self.configID, self.mapfile, self.vasName, self.vasSectionName)) + @staticmethod + @abc.abstractmethod + def getName(memEntry): + """ + A name getter for MemEntry objects. + :param memEntry: The MemEntry object that´s name want to be get. + :return: A string representing the name created from the MemEntry object. 
+ """ + + +class SectionEntry(MemEntryHandler): + """ + A MemEntryHandler for handling MemEntries that represents sections. + """ + def __eq__(self, other): + raise NotImplementedError("This member shall not be used, use the isEqual() instead!") + + @staticmethod + def isEqual(first, second): + """ + Function to decide whether two sections are equal. + :param first: MemEntry object representing a section. + :param second: MemEntry object representing a section. + :return: True if the object first and second are equal, False otherwise. + """ + if not isinstance(first, MemEntry) or not isinstance(second, MemEntry): + raise TypeError("The argument needs to be of a type MemEntry.") + + return ((first.addressStart == second.addressStart) and + (first.addressLength == second.addressLength) and + (first.sectionName == second.sectionName) and + (first.configID == second.configID) and + (first.mapfile == second.mapfile) and + (first.compilerSpecificData == second.compilerSpecificData)) + + @staticmethod + def getName(memEntry): + """ + A name getter for MemEntry object representing a section. + :param memEntry: The MemEntry object that´s name want to be get. + :return: A string representing the name created from the MemEntry object. + """ + return memEntry.sectionName + + +class ObjectEntry(MemEntryHandler): + """ + A MemEntryHandler for handling MemEntries that represents objects. + """ + def __eq__(self, other): + raise NotImplementedError("This member shall not be used, use the isEqual() instead!") + + @staticmethod + def isEqual(first, second): + """ + Function to decide whether two object are equal. + :param first: MemEntry object representing an object. + :param second: MemEntry object representing an object. + :return: True if the object first and second are equal, False otherwise. 
+ """ + if not isinstance(first, MemEntry) or not isinstance(second, MemEntry): + raise TypeError("The argument needs to be of a type MemEntry.") + + return ((first.addressStart == second.addressStart) and + (first.addressLength == second.addressLength) and + (first.sectionName == second.sectionName) and + (first.objectName == second.objectName) and + (first.configID == second.configID) and + (first.mapfile == second.mapfile) and + (first.compilerSpecificData == second.compilerSpecificData)) + + @staticmethod + def getName(memEntry): + """ + A name getter for MemEntry object representing an object. + :param memEntry: The MemEntry object that´s name want to be get. + :return: A string representing the name created from the MemEntry object. + """ + return memEntry.sectionName + "::" + memEntry.objectName diff --git a/emma_libs/memoryManager.py b/emma_libs/memoryManager.py index 4dcd08c..9718d9b 100644 --- a/emma_libs/memoryManager.py +++ b/emma_libs/memoryManager.py @@ -17,888 +17,135 @@ """ -import sys import os -import re -import bisect -import csv -import datetime -import pypiscout as sc +from pypiscout.SCout_Logger import Logger as sc from shared_libs.stringConstants import * # pylint: disable=unused-wildcard-import,wildcard-import import shared_libs.emma_helper -import emma_libs.mapfileRegexes import emma_libs.memoryEntry - - -# Global timestamp -# (parsed from the .csv file from ./memStats) -timestamp = datetime.datetime.now().strftime("%Y-%m-%d - %Hh%Ms%S") +import emma_libs.configuration +import emma_libs.mapfileProcessorFactory +import emma_libs.memoryMap +import emma_libs.categorisation class MemoryManager: - # TODO : After discussions with MSc, this class could be cut up into more parts. (AGK) """ - Class for reading and saving/sorting the mapfiles + A class to organize the processing of the configuration and the mapfiles and the storage of the created reports. 
""" - def __init__(self, args, categoriesPath, categoriesKeywordsPath, fileIdentifier, regexData): - """ - Memory manager for mapfile information aggregation. - :param args: command line arguments - :param categoriesPath: Path to a projects categories JSON - :param categoriesKeywordsPath: Path to a projects categoriesKeywords JSON - :param fileIdentifier: String for the summary filename written in writeSummary() - :param regexData: An object derived from class RegexPatternBase holding the regex patterns - """ - def readGlobalConfigJson(configPath): - """ - :param configPath: file path - :return: the globalConfig dict - """ - # Load global config file - globalConfig = shared_libs.emma_helper.readJson(configPath) - - # Loading the config files of the defined configID-s - for configID in list(globalConfig.keys()): # List of keys required so we can remove the ignoreConfigID entrys - if IGNORE_CONFIG_ID in globalConfig[configID].keys(): - # Skip configID if ["ignoreConfigID"] is True - if type(globalConfig[configID][IGNORE_CONFIG_ID]) is not bool: - # Check that flag has the correct type - sc.error("The " + IGNORE_CONFIG_ID + " of " + configID + " has a type " + str(type(globalConfig[configID][IGNORE_CONFIG_ID])) + " instead of bool. 
" + - "Please be sure to use correct JSON syntax: boolean constants are written true and false.") - sys.exit(-10) - elif globalConfig[configID][IGNORE_CONFIG_ID] is True: - globalConfig.pop(configID) - if len(globalConfig) < 1 and self.args.verbosity <= 1: - sc.warning("No configID left; all were ignored.") - continue - - # Load and check mapfiles - # TODO: add option for recursive search (MSc) - if MAPFILES in globalConfig[configID].keys(): - absMapfilesDirPath = os.path.abspath(shared_libs.emma_helper.joinPath(self.mapfileRootPath, globalConfig[configID][MAPFILES])) - shared_libs.emma_helper.checkIfFolderExists(absMapfilesDirPath) - globalConfig[configID]["individualConfigIdMapfilePath"] = absMapfilesDirPath - - # Loading the addressSpaces - if "addressSpacesPath" in globalConfig[configID].keys(): - globalConfig[configID]["addressSpaces"] = shared_libs.emma_helper.readJson(shared_libs.emma_helper.joinPath(self.projectPath, globalConfig[configID]["addressSpacesPath"])) - - # Removing the imported memory entries if they are listed in IGNORE_MEM_TYPE key - if IGNORE_MEMORY in globalConfig[configID]["addressSpaces"].keys(): - for memoryToIgnore in globalConfig[configID]["addressSpaces"][IGNORE_MEMORY]: - if globalConfig[configID]["addressSpaces"]["memory"][memoryToIgnore]: - globalConfig[configID]["addressSpaces"]["memory"].pop(memoryToIgnore) - sc.info("The memory entry \"" + memoryToIgnore + "\" of the configID \"" + configID + "\" is marked to be ignored...") - else: - sc.error("The key " + memoryToIgnore + " which is in the ignore list, does not exist in the memory object of " + globalConfig[configID]["addressSpacesPath"]) - sys.exit(-10) - else: - sc.error("The " + CONFIG_ID + " does not have the key: " + "addressSpacesPath") - sys.exit(-10) - - # Loading the sections if the file is present (only needed in case of VAS-es) - if "virtualSectionsPath" in globalConfig[configID].keys(): - globalConfig[configID]["virtualSections"] = 
shared_libs.emma_helper.readJson(shared_libs.emma_helper.joinPath(self.projectPath, globalConfig[configID]["virtualSectionsPath"])) - - # Loading the patterns - if "patternsPath" in globalConfig[configID].keys(): - globalConfig[configID]["patterns"] = shared_libs.emma_helper.readJson(shared_libs.emma_helper.joinPath(self.projectPath, globalConfig[configID]["patternsPath"])) - else: - sc.error("The " + CONFIG_ID + " does not have the key: " + "patternsPath") - sys.exit(-10) - - # Flag to load a monolith file only once; we don't do it here since we might work with DMA only - globalConfig[configID]["monolithLoaded"] = False - # Flag to sort monolith table only once; we don't do it here since we might work with DMA only - globalConfig[configID]["sortMonolithTabularised"] = False - return globalConfig - - self.args = args - # Attributes for file paths - self.analyseDebug = args.analyse_debug - self.projectPath = args.project - self.project = shared_libs.emma_helper.projectNameFromPath(shared_libs.emma_helper.joinPath(args.project)) - self.mapfileRootPath = shared_libs.emma_helper.joinPath(args.mapfiles) - self.__categoriesFilePath = categoriesPath - self.__categoriesKeywordsPath = categoriesKeywordsPath - shared_libs.emma_helper.checkIfFolderExists(self.projectPath) - - # Regex data, has to be a sub-class of RegexPatternBase - self.regexPatternData = regexData - - # Load local config JSONs from global config - self.globalConfig = readGlobalConfigJson(configPath=shared_libs.emma_helper.joinPath(self.projectPath, "globalConfig.json")) - sc.info("Imported " + str(len(self.globalConfig)) + " global config entries") - - # Loading the categories config files. These files are optional, if they are not present we will store None instead. 
- if os.path.exists(self.__categoriesFilePath): - self.categoriesJson = shared_libs.emma_helper.readJson(self.__categoriesFilePath) + class Settings: + # pylint: disable=too-many-instance-attributes, too-many-arguments, too-few-public-methods + # Rationale: This class´s only purpose is to store settings, thus having too many members and parameters is not an error. + """ + Settings that influence the operation of the MemoryManager object. + """ + def __init__(self, projectName, configurationPath, mapfilesPath, outputPath, analyseDebug, createCategories, removeUnmatched, noPrompt): + self.projectName = projectName + self.configurationPath = configurationPath + self.mapfilesPath = mapfilesPath + self.outputPath = outputPath + self.analyseDebug = analyseDebug + self.createCategories = createCategories + self.removeUnmatched = removeUnmatched + self.noPrompt = noPrompt + + def __init__(self, projectName, configurationPath, mapfilesPath, outputPath, analyseDebug, createCategories, removeUnmatched, noPrompt): + # pylint: disable=too-many-arguments + # Rationale: We need to initialize the Settings, so the number of arguments are needed. + + # Processing the command line arguments and storing it into the settings member + self.settings = MemoryManager.Settings(projectName, configurationPath, mapfilesPath, outputPath, analyseDebug, createCategories, removeUnmatched, noPrompt) + # Check whether the configuration and the mapfiles folders exist + shared_libs.emma_helper.checkIfFolderExists(self.settings.mapfilesPath) + # The configuration is empty at this moment, it can be read in with another method + self.configuration = None + # The memory content is empty at this moment, it can be loaded with another method + self.memoryContent = None + # The categorisation object does not exist yet, it can be created after reading in the configuration + self.categorisation = None + + def readConfiguration(self): + """ + A method to read the configuration. 
+ :return: None + """ + # Reading in the configuration + self.configuration = emma_libs.configuration.Configuration() + self.configuration.readConfiguration(self.settings.configurationPath, self.settings.mapfilesPath, self.settings.noPrompt) + # Creating the categorisation object + self.categorisation = emma_libs.categorisation.Categorisation(shared_libs.emma_helper.joinPath(self.settings.configurationPath, CATEGORIES_OBJECTS_JSON), + shared_libs.emma_helper.joinPath(self.settings.configurationPath, CATEGORIES_KEYWORDS_OBJECTS_JSON), + shared_libs.emma_helper.joinPath(self.settings.configurationPath, CATEGORIES_SECTIONS_JSON), + shared_libs.emma_helper.joinPath(self.settings.configurationPath, CATEGORIES_KEYWORDS_SECTIONS_JSON), + self.settings.noPrompt) + + def processMapfiles(self): + """ + A method to process the mapfiles. + :return: None + """ + # If the configuration was already loaded + if self.configuration is not None: + + # We will create an empty memory content that will be filled now + self.memoryContent = {} + + # Processing the mapfiles for every configId + for configId in self.configuration.globalConfig: + + # Creating the configId in the memory content + self.memoryContent[configId] = {} + + sc().info("Importing Data for \"" + configId + "\", this may take some time...") + + # Creating a mapfile processor based on the compiler that was defined for the configId + usedCompiler = self.configuration.globalConfig[configId]["compiler"] + mapfileProcessor = emma_libs.mapfileProcessorFactory.createSpecificMapfileProcesor(usedCompiler) + + # Importing the mapfile contents for the configId with the created mapfile processor + sectionCollection, objectCollection = mapfileProcessor.processMapfiles(configId, self.configuration.globalConfig[configId], self.settings.analyseDebug) + + # Filling out the categories in the consumerCollections + self.categorisation.fillOutCategories(sectionCollection, objectCollection) + + # Updating the categorisation files from the 
categorisation keywords and removing the unmatched ones, based on the settings
self.__addMapfilesToGlobalConfig() - self.__validateConfigIDs() - - # Filename for csv file - self.outputPath = createMemStatsFilepath(args.dir, args.subdir, self.__fileIdentifier, self.project) - - def checkMonolithSections(): - """ - The function collects the VAS sections from the monolith files and from the global config and from the monolith mapfile - :return: nothing - """ - foundInConfigID = [] - foundInMonolith = [] - - # Extract sections from monolith file - monolithPattern = emma_libs.mapfileRegexes.UpperMonolithPattern() - for configID in self.globalConfig: - # Check if a monolith was loaded to this configID that can be checked - if self.globalConfig[configID]["monolithLoaded"]: - for entry in self.globalConfig[configID]["patterns"]["monoliths"]: - with open(self.globalConfig[configID]["patterns"]["monoliths"][entry]["associatedFilename"], "r") as monolithFile: - monolithContent = monolithFile.readlines() - for line in monolithContent: - lineComponents = re.search(monolithPattern.pattern, line) - if lineComponents: # if match - foundInMonolith.append(lineComponents.group(monolithPattern.Groups.section)) - - for vas in self.globalConfig[configID]["virtualSections"]: - foundInConfigID += self.globalConfig[configID]["virtualSections"][vas] - - # Compare sections from configID with found in monolith file - sectionsNotInConfigID = set(foundInMonolith) - set(foundInConfigID) - if sectionsNotInConfigID: - sc.warning("Monolith File has the following sections. You might want to add it to the respective VAS in " + self.globalConfig[configID]["virtualSectionsPath"] + "!") - print(sectionsNotInConfigID) - sc.warning("Still continue? 
(y/n)") if not self.args.Werror else sys.exit(-10) - text = input("> ") if not self.args.noprompt else sys.exit(-10) - if text != "y": - sys.exit(-10) - return - - checkMonolithSections() - - def __addTabularisedMonoliths(self, configID): - """ - Manages Monolith selection, parsing and converting to a list - :param configID: configID - :return: nothing - """ - def loadMonolithMapfileOnce(configID): - """ - Load monolith mapfile (only once) - :return: Monolith file content - """ - - if not self.globalConfig[configID]["monolithLoaded"]: - mapfileIndexChosen = 0 # Take the first monolith file in list (default case) - numMonolithFiles = len(self.globalConfig[configID]["patterns"]["monoliths"]) - keyMonolithMapping = {} - - # Update globalConfig with all monolith filenames - for key, monolith in enumerate(self.globalConfig[configID]["patterns"]["monoliths"]): - keyMonolithMapping.update({str(key): self.globalConfig[configID]["patterns"]["monoliths"][monolith]["associatedFilename"]}) - - if numMonolithFiles > 1: - sc.info("More than one monolith file detected. 
Which to chose (1, 2, ...)?") - # Display files - for key, monolith in keyMonolithMapping.items(): - print(" ", key.ljust(10), monolith) - # Ask for index which file to chose - mapfileIndexChosen = shared_libs.emma_helper.Prompt.idx() if not self.args.noprompt else sys.exit(-10) - while not (0 <= mapfileIndexChosen < numMonolithFiles): - sc.warning("Invalid value; try again:") - if self.args.Werror: - sys.exit(-10) - mapfileIndexChosen = shared_libs.emma_helper.Prompt.idx() if not self.args.noprompt else sys.exit(-10) - elif numMonolithFiles < 1: - sc.error("No monolith file found but needed for processing") - sys.exit(-10) - - # Finally load the file - self.globalConfig[configID]["monolithLoaded"] = True - with open(keyMonolithMapping[str(mapfileIndexChosen)], "r") as fp: - content = fp.readlines() - return content - - def tabulariseAndSortOnce(monolithContent, configID): - """ - Parses the monolith file and returns a "table" (addresses are int's) of the following structure: - table[n-th_entry][0] = virtual(int), ...[1] = physical(int), ...[2] = offset(int), ...[3] = size(int), ...[4] = section(str) - Offset = physical - virtual - :param monolithContent: Content from monolith as text (all lines) - :return: list of lists - """ - table = [] # "headers": virtual, physical, size, section - monolithPattern = emma_libs.mapfileRegexes.UpperMonolithPattern() - for line in monolithContent: - match = re.search(emma_libs.mapfileRegexes.UpperMonolithPattern().pattern, line) - if match: - table.append([ - int(match.group(monolithPattern.Groups.virtualAdress), 16), - int(match.group(monolithPattern.Groups.physicalAdress), 16), - int(match.group(monolithPattern.Groups.physicalAdress), 16) - int(match.group(monolithPattern.Groups.virtualAdress), 16), - int(match.group(monolithPattern.Groups.size), 16), - match.group(monolithPattern.Groups.section) - ]) - self.globalConfig[configID]["sortMonolithTabularised"] = True - return table - - # Load and register Monoliths - 
monolithContent = loadMonolithMapfileOnce(configID) - if not self.globalConfig[configID]["sortMonolithTabularised"]: - self.globalConfig[configID]["sortMonolithTabularised"] = tabulariseAndSortOnce(monolithContent, configID) - - # TODO : This could be renamed to "__addFileTypesForAConfigId" (AGK) - def __addFilesPerConfigID(self, configID, fileType): - """ - Adds the found full absolute path of the found file as new element to `self.globalConfig[configID]["patterns"]["mapfiles"]["associatedFilename"]` - Deletes elements from `self.globalConfig[configID]["patterns"]["mapfiles"]` which were not found and prints a warning for each - :param fileType: Either "patterns" (=default) for mapfiles or "monoliths" for monolith files - :return: Number of files detected - """ - if MAPFILES in self.globalConfig[configID].keys(): - mapfilePath = self.globalConfig[configID]["individualConfigIdMapfilePath"] - else: - mapfilePath = self.mapfileRootPath - - # Find mapfiles - for file in os.listdir(mapfilePath): - for entry in self.globalConfig[configID]["patterns"][fileType]: - foundFiles = [] - for regex in self.globalConfig[configID]["patterns"][fileType][entry]["regex"]: - searchCandidate = shared_libs.emma_helper.joinPath(self.mapfileRootPath, file) - match = re.search(regex, searchCandidate) - if match: - foundFiles.append(os.path.abspath(searchCandidate)) - if foundFiles: - self.globalConfig[configID]["patterns"][fileType][entry]["associatedFilename"] = foundFiles[0] - print("\t\t\t Found " + fileType + ": ", foundFiles[0]) - if len(foundFiles) > 1: - sc.warning("Ambiguous regex pattern in '" + self.globalConfig[configID]["patternsPath"] + "'. Selected '" + foundFiles[0] + "'. 
Regex matched: " + "".join(foundFiles)) - if self.args.Werror: - sys.exit(-10) - - # Check for found files in patterns and do some clean-up - # We need to convert the keys into a temporary list in order to avoid iterating on the original which may be changed during the loop, that causes a runtime error - for entry in list(self.globalConfig[configID]["patterns"][fileType]): - if "associatedFilename" not in self.globalConfig[configID]["patterns"][fileType][entry]: - sc.warning("No file found for ", str(entry).ljust(20), "(pattern:", ''.join(self.globalConfig[configID]["patterns"][fileType][entry]["regex"]) + " );", "skipping...") - if self.args.Werror: - sys.exit(-10) - del self.globalConfig[configID]["patterns"][fileType][entry] - return len(self.globalConfig[configID]["patterns"][fileType]) - - def __addMonolithsToGlobalConfig(self): - - def ifAnyNonDMA(configID): - """ - Checks if non-DMA files were found - Do not run this before mapfiles are searched (try: `findMapfiles` and evt. `findMonolithMapfiles` before) - :return: True if files which require address translation were found; otherwise False - """ - entryFound = False - for entry in self.globalConfig[configID]["patterns"]["mapfiles"]: - if "VAS" in self.globalConfig[configID]["patterns"]["mapfiles"][entry]: - entryFound = True - # TODO : here, a break could improve the performance (AGK) - return entryFound - - for configID in self.globalConfig: - sc.info("Processing configID \"" + configID + "\" for monolith files") - if ifAnyNonDMA(configID): - numMonolithMapFiles = self.__addFilesPerConfigID(configID, fileType="monoliths") - if numMonolithMapFiles > 1: - sc.warning("More than one monolith file found; Result may be non-deterministic") - if self.args.Werror: - sys.exit(-10) - elif numMonolithMapFiles < 1: - sc.error("No monolith files was detected but some mapfiles require address translation (VASes used)") - sys.exit(-10) - self.__addTabularisedMonoliths(configID) - - def __addMapfilesToGlobalConfig(self): - 
for configID in self.globalConfig: - sc.info("Processing configID \"" + configID + "\"") - numMapfiles = self.__addFilesPerConfigID(configID=configID, fileType="mapfiles") - sc.info(str(numMapfiles) + " mapfiles found") - - def __validateConfigIDs(self): - configIDsToDelete = [] - - # Search for invalid configIDs - for configID in self.globalConfig: - numMapfiles = len(self.globalConfig[configID]["patterns"]["mapfiles"]) - if numMapfiles < 1: - if self.args.Werror: - sc.error("No mapfiles found for configID: '" + configID + "'") - sys.exit(-10) - else: - sc.warning("No mapfiles found for configID: '" + configID + "', skipping...") - configIDsToDelete.append(configID) - - # Remove invalid configIDs separately (for those where no mapfiles were found) - # Do this in a separate loop since we cannot modify and iterate in the same loop - for invalidConfigID in configIDsToDelete: - del self.globalConfig[invalidConfigID] - if self.args.verbosity <= 2: - sc.warning("Removing the configID " + invalidConfigID + " because no mapfiles were found for it...") - else: - if 0 == len(self.globalConfig): - sc.error("No mapfiles were found for any of the configIDs...") - sys.exit(-10) - - def __evalMemRegion(self, physAddr, configID): - """ - Search within the memory regions to find the address given from a line - :param configID: Configuration ID from globalConfig.json (referenced in patterns.json) - :param physAddr: input address in hex or dec - :return: None if nothing was found; otherwise the unique name of the memory region defined in addressSpaces*.json (DDR, ...) - """ - address = shared_libs.emma_helper.unifyAddress(physAddr)[1] # we only want dec >> [1] - memRegion = None - memType = None - memoryCandidates = self.globalConfig[configID]["addressSpaces"]["memory"] - - # Find the address in the memory map and set its type (int, ext RAM/ROM) - for currAddrSpace in memoryCandidates: - # TODO : This is wrong here, because it does not take into account that we have a size as well. 
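Stripped of the config plumbing, the region lookup in `__evalMemRegion` is a linear scan of the configured memory regions for a `[start, end]` range containing the address. A simplified sketch (the key names mirror the addressSpaces*.json structure as used in the surrounding code; unlike the original, which keeps scanning, this returns on the first hit — the behaviour the TODO above suggests):

```python
def eval_mem_region(address, memory_candidates):
    """Return (regionName, memType) of the first region containing address, else (None, None)."""
    for name, region in memory_candidates.items():
        # start/end are stored as hex strings in the config, hence the base-16 conversion
        if int(region["start"], 16) <= address <= int(region["end"], 16):
            return name, region["type"]
    return None, None

regions = {"DDR": {"start": "0x40000000", "end": "0x4FFFFFFF", "type": "EXT_RAM"}}
hit = eval_mem_region(0x40000010, regions)
miss = eval_mem_region(0x10, regions)
```

As the TODO notes, this containment check ignores the element's size, so an element starting inside a region can still reach beyond its end.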
- # TODO : This means that it can be that the start of the element is inside of the memory region but it reaches trough it´s borders. - # TODO : This could mean an error in either in the config or the mapfiles. Because of this info needs to be noted. (AGK) - if int(memoryCandidates[currAddrSpace]["start"], 16) <= address <= int(memoryCandidates[currAddrSpace]["end"], 16): - memRegion = currAddrSpace - memType = memoryCandidates[currAddrSpace]["type"] - # TODO : Maybe we should break here to improve performance. (AGK) - # TODO: Do we want to save mapfile entrys which don't fit into the pre-configured adresses from addressSpaces*.json? If so add the needed code here (FM) - - # # Debug Print - # if memRegion is None: - # if address != 0: - # print("<<<>>>>", - # currAddrSpace, ">>>>>", memoryCandidates[currAddrSpace]["start"], "<=", hex(address), "<=", memoryCandidates[currAddrSpace]["end"]) - return memRegion, memType - - def __translateAddress(self, elementVirtualStartAddress, elementSize, virtualSectionsOfThisMapfile, monolithFileContent): - """ - Calculates the physical address for an element (= section or object). - The patterns config file can assign a VAS to a mapfile. Every VAS has VAS sections that are defined in the - virtualSections file. The monolith file contains all the virtual sections of all the VAS-es with data - based on which the address translation can be done. - In order to do the translation we loop trough the entries in the monolith file and see whether the entry belongs - to the VAS of this element. If so, when we need to make sure that the element resides within the virtual section. - If that is also true, the address translation can be easily done with the data found in the monolith file. 
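Once a virtual section of the element's VAS is found that fully contains the element, the translation itself reduces to adding that section's precomputed offset to the virtual address. A hedged sketch over the tabularised monolith rows — the row layout and index constants are assumptions matching the description above, not the real implementation:

```python
# Assumed row layout: [virtual, physical, offset, size, section]
IDX_VIRTUAL, IDX_OFFSET, IDX_SIZE, IDX_SECTION = 0, 2, 3, 4

def translate_address(virt_start, size, sections_of_vas, monolith_table):
    """Return (physical_start, section_name), or (None, None) if no section contains the element."""
    for row in monolith_table:
        if row[IDX_SECTION] not in sections_of_vas:
            continue
        sec_start = row[IDX_VIRTUAL]
        # Zero-length ranges "end" at their own start, so the end address needs care
        sec_end = sec_start + (row[IDX_SIZE] - 1) if row[IDX_SIZE] > 0 else sec_start
        elem_end = virt_start + (size - 1) if size > 0 else virt_start
        # The element must lie entirely within the virtual section
        if sec_start <= virt_start <= elem_end <= sec_end:
            return virt_start + row[IDX_OFFSET], row[IDX_SECTION]
    return None, None

table = [[0x80000000, 0x40000000, -0x40000000, 0x1000, ".text_vas1"]]
phys, sec = translate_address(0x80000010, 0x20, {".text_vas1"}, table)
```

A failed lookup yields `(None, None)`, which the caller can treat the same way the original treats a `None` physical address.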
- :param elementVirtualStartAddress: The start address of the element in the VAS - :param elementSize: The size of the element in bytes - :param virtualSectionsOfThisMapfile: List of virtual sections that belong to the VAS of the element - :param monolithFileContent: List of all the virtual sections from the monolith file. - :return: Physical start address of the element and the name of the virtual section the translation was done with. - """ - # This are indexes used for accessing the elements of one monolith file entry - monolithIndexVirtual = 0 - monolithIndexOffset = 2 - monolithIndexSize = 3 - monolithIndexSectionName = 4 - # Converting the received start address and size to decimal - _, elementVirtualStartAddress = shared_libs.emma_helper.unifyAddress(elementVirtualStartAddress) - _, elementSize = shared_libs.emma_helper.unifyAddress(elementSize) - # Setting up the return values with default values - elementPhysicalStartAddress = None - virtualSectionName = None - - # We will go trough all the entries in the monolith file to find the virtual section this element belongs to - for entry in monolithFileContent: - # If the element belongs to this virtual section we will try to do the address translation - virtualSectionName = entry[monolithIndexSectionName] - if virtualSectionName in virtualSectionsOfThisMapfile: - # Setting up data for the translation (for the end addresses we need to be careful in case we have zero lengths) - virtualSectionStartAddress = entry[monolithIndexVirtual] - virtualSectionEndAddress = virtualSectionStartAddress + (entry[monolithIndexSize] - 1) if 0 < entry[monolithIndexSize] else virtualSectionStartAddress - elementVirtualEndAddress = elementVirtualStartAddress + (elementSize - 1) if 0 < elementSize else elementVirtualStartAddress - # If the element is contained by this virtual section then we will use this one for the translation - if virtualSectionStartAddress <= elementVirtualStartAddress <= elementVirtualEndAddress <= 
virtualSectionEndAddress: - addressTranslationOffset = entry[monolithIndexOffset] - elementPhysicalStartAddress = elementVirtualStartAddress + addressTranslationOffset - # FIXME: maybe it should be displayed/captured if we got more than one matches! (It should never happen but still...) (MSc) - break - return elementPhysicalStartAddress, virtualSectionName - - def __addMemEntry(self, tag, vasName, vasSectionName, section, moduleName, mapfileName, configID, memType, category, addressStart, addressLength): - """ - Add an entry to the consumerCollection list with insort algorithm. - Params are constructor arguments for memEntry object. - :param tag: - :param vasName: - :param vasSectionName - :param section: - :param moduleName: - :param mapfileName: - :param configID: - :param memType: - :param category: - :param addressStart: - :param addressLength: - :return: nothing - """ - - # TODO: Fix this architectural error (FM) - mapfileEntryToAdd = None - if isinstance(self.regexPatternData, emma_libs.mapfileRegexes.ImageSummaryPattern): - # Create Section entry - mapfileEntryToAdd = emma_libs.memoryEntry.SectionEntry(tag, vasName, vasSectionName, section, moduleName, mapfileName, configID, memType, category, addressStart, addressLength) - elif isinstance(self.regexPatternData, emma_libs.mapfileRegexes.ModuleSummaryPattern): - # Create Object entry - mapfileEntryToAdd = emma_libs.memoryEntry.ObjectEntry(tag, vasName, vasSectionName, section, moduleName, mapfileName, configID, memType, category, addressStart, addressLength) - - # Insert elements by address in progressing order - i = bisect.bisect_right(self.consumerCollection, mapfileEntryToAdd) - self.consumerCollection.insert(i, mapfileEntryToAdd) # Inserts at i, elements to right will be pushed "one inedx up" - - def __categoriseByKeyword(self, nameString): - """ - Categorise a nameString by a keyword specified in categoriesKeywords.json - :param nameString: The name-string of the module to categorise - :return: The 
category string, None if no matching keyword found or the categoriesKeywords.json was not loaded - """ - if self.categorisedKeywordsJson is not None: - for category in self.categorisedKeywordsJson: - for keyword in range(len(self.categorisedKeywordsJson[category])): - pattern = r"""\w*""" + self.categorisedKeywordsJson[category][keyword] + r"""\w*""" # Search module name for substring specified in categoriesKeywords.json - if re.search(pattern, nameString) is not None: # Check for first occurence - self.categorisedFromKeywords.append((nameString, category)) # Append categorised module, is list of tuples because dict doesn't support duplicate keys - return category - return None - - def __searchCategoriesJson(self, nameString): - if self.categoriesJson is not None: - categoryEval = [] - for category in self.categoriesJson: # Iterate through keys - for i in range(len(self.categoriesJson[category])): # Look through category array - if nameString == self.categoriesJson[category][i]: # If there is a match append the category - categoryEval.append(category) - - if categoryEval: - categoryEval.sort() - return ", ".join(categoryEval) - else: - return None + sc().error("The configuration needs to be loaded before processing the mapfiles!") + + def createReports(self): + """ + A method to create the reports. 
+ :return: None + """ + # If the mapfiles were already processed + if self.memoryContent is not None: + # Putting the same consumer collection types together + # (At this point the collections are grouped by configId then by their types) + consumerCollections = {} + for configId in self.memoryContent: + for collectionType in self.memoryContent[configId]: + if collectionType not in consumerCollections: + consumerCollections[collectionType] = [] + consumerCollections[collectionType].extend(self.memoryContent[configId][collectionType]) + + # Creating reports from the consumer collections + for collectionType in consumerCollections: + reportPath = emma_libs.memoryMap.createReportPath(self.settings.outputPath, self.settings.projectName, collectionType) + emma_libs.memoryMap.writeReportToDisk(reportPath, consumerCollections[collectionType]) + sc().info("A report was stored:", os.path.abspath(reportPath)) else: - return None - - def __evalCategory(self, nameString): - """ - Returns the category of a module. This function calls __categoriseModuleByKeyword() - and __searchCategoriesJson() to evaluate a matching category. - :param nameString: The name string of the module/section to categorise. 
- :return: Category string - """ - category = self.__searchCategoriesJson(nameString) - - if category is None: - # If there is no match check for keyword specified in categoriesKeywordsJson - category = self.__categoriseByKeyword(nameString) # FIXME: add a training parameter; seems very dangerous for me having wildcards as a fallback option (MSc) - - if category is None: - # If there is still no match - category = "" - # Add all unmatched module names so they can be appended to categories.json under "" - self.categorisedFromKeywords.append((nameString, category)) - - return category - - def __evalRegexPattern(self, configID, entry): - """ - Method to determine if the default pattern can be used or if a unique patter is configured - :param configID: Needed to navigate to the correct configID entry - :param entry: See param configID - :return: regex pattern object - """ - if isinstance(self.regexPatternData, emma_libs.mapfileRegexes.ImageSummaryPattern): - if UNIQUE_PATTERN_SECTIONS in self.globalConfig[configID]["patterns"]["mapfiles"][entry].keys(): - # If a unique regex pattern is needed, e.g. when the mapfile has a different format and cannot be parsed with the default pattern - sectionPattern = emma_libs.mapfileRegexes.ImageSummaryPattern() # Create new instance of pattern class when a unique pattern is needed - sectionPattern.pattern = self.globalConfig[configID]["patterns"]["mapfiles"][entry][UNIQUE_PATTERN_SECTIONS] # Overwrite default pattern with unique one - else: - # When there is no unique pattern needed the default pattern object can be used - sectionPattern = self.regexPatternData - - return sectionPattern - - elif isinstance(self.regexPatternData, emma_libs.mapfileRegexes.ModuleSummaryPattern): - if UNIQUE_PATTERN_OBJECTS in self.globalConfig[configID]["patterns"]["mapfiles"][entry].keys(): - # If a unique regex pattern is needed, e.g. 
when the mapfile has a different format and cannot be parsed with the default pattern - objectPattern = emma_libs.mapfileRegexes.ModuleSummaryPattern() # Create new instance of pattern class when a unique pattern is needed - objectPattern.pattern = self.globalConfig[configID]["patterns"]["mapfiles"][entry][UNIQUE_PATTERN_OBJECTS] # Overwrite default pattern with unique one - else: - # When there is no unique pattern needed the default pattern object can be used - objectPattern = self.regexPatternData - - return objectPattern - - def importData(self): - """ - Processes all input data and adds it to our container (`consumerCollection`) - :return: number of configIDs - """ - - # Importing for every configID - for configID in self.globalConfig: - sc.info("Importing Data for \"" + configID + "\", this may take some time...") - - # Reading the hexadecimal offset value from the addresSpaces*.json. This value is optional, in case it is not defined, we will assume that it is 0. - offset = int(self.globalConfig[configID]["addressSpaces"]["offset"], 16) if "offset" in self.globalConfig[configID]["addressSpaces"].keys() else 0 - # Defining a list of sections that will be excluded (including the objects residing in it) from the analysis based on the value that was loaded from the arguments - listOfExcludedSections = [".unused_ram"] if self.analyseDebug else SECTIONS_TO_EXCLUDE - - # Importing every mapfile that was found - for mapfile in self.globalConfig[configID]["patterns"]["mapfiles"]: - # Opening the mapfile and reading in its content - with open(self.globalConfig[configID]["patterns"]["mapfiles"][mapfile]["associatedFilename"], "r") as mapfile_file_object: - mapfileContent = mapfile_file_object.readlines() - # If there is a VAS defined for the mapfile, then the addresses found in it are virtual addresses, otherwise they are physical addresses - mapfileContainsVirtualAddresses = True if "VAS" in self.globalConfig[configID]["patterns"]["mapfiles"][mapfile] else False - # 
Loading the regex pattern that will be used for this mapfile - regexPatternData = self.__evalRegexPattern(configID, mapfile) - - # Analysing the mapfile with the loaded regex line-by-line - lineNumber = 0 - for line in mapfileContent: - lineNumber += 1 - - # Extracting the components from the line with the regex, if there was no match, we will continue with the next line - lineComponents = re.search(regexPatternData.pattern, line) - if lineComponents: - # If the section name of this element is in the list that we want to exclude then we can continue with the next line - if lineComponents.group(regexPatternData.Groups.section).rstrip() in listOfExcludedSections: - continue - # If this mapfile contains virtual addresses then we need to translate the address of this element - vasName = None - vasSectionName = None - if mapfileContainsVirtualAddresses: - # Name of the Virtual address space to which the elements of this mapfile belongs - vasName = self.globalConfig[configID]["patterns"]["mapfiles"][mapfile]["VAS"] - # List of the virtual sections that were belong to this mapfile. The address translation is done with the help of these sections. 
- virtualSectionsOfThisMapfile = self.globalConfig[configID]["virtualSections"][vasName] - # The part of the monolith file that contains the address translation data - monolithFileContent = self.globalConfig[configID]["sortMonolithTabularised"] - # Calculating the physical address and getting the name of the virtual section based on which the translation was done - physicalAddress, vasSectionName = self.__translateAddress(lineComponents.group(regexPatternData.Groups.origin), - lineComponents.group(regexPatternData.Groups.size), - virtualSectionsOfThisMapfile, - monolithFileContent) - # Check whether the address translation was successful - if physicalAddress is None: - if self.args.verbosity <= 2: - warning_section_name = lineComponents.group(regexPatternData.Groups.section).rstrip() - warning_object_name = ("::" + lineComponents.group(regexPatternData.Groups.module).rstrip()) if hasattr(regexPatternData.Groups, "module") else "" - sc.warning("The address translation failed for the element: \"" + mapfile + "(line " + str(lineNumber) + ")::" + - warning_section_name + warning_object_name + " (size: " + str(int(lineComponents.group(regexPatternData.Groups.size), 16)) + " B)\" of the configID \"" + - configID + "\"!") - if self.args.Werror: - sys.exit(-10) - continue - # In case the mapfile contains phyisical addresses, no translation is needed, we are just reading the address that is in the mapfile - else: - physicalAddress = int(lineComponents.group(regexPatternData.Groups.origin), 16) - offset - - # Finding the memory region and memory type this element belongs to - memoryRegion, memType = self.__evalMemRegion(physicalAddress, configID) - - # If a memory region was NOT found, we will continue with the next line - if memoryRegion is not None: - # Finding the category this element belongs to - category = self.__evalCategory(lineComponents.group(regexPatternData.Groups.name)) - # Skip memTypes to exclude - # TODO : We could write a function to replace this often 
executed code to make the program to be readable (AGK) - # TODO : def checkAndGetStuffFromDictionary(stuff, dictionary): - # TODO : result = None - # TODO : if stuff in dictionary.keys(): - # TODO : result = dictionary[stuff] - # TODO : return result - memoryRegionsToExclude = [] - if MEM_REGION_TO_EXCLUDE in self.globalConfig[configID]["patterns"]["mapfiles"][mapfile].keys(): - # If a memory types should be excluded on a mapfile basis - memoryRegionsToExclude = self.globalConfig[configID]["patterns"]["mapfiles"][mapfile][MEM_REGION_TO_EXCLUDE] - if memoryRegion in memoryRegionsToExclude: - continue - - # Determining the addressLength - addressLength = int(lineComponents.group(regexPatternData.Groups.size), 16) - # Check whether the address is valid - if 0 > addressLength: - if self.args.verbosity <= 2: - sc.warning("Negative addressLength found.") - if self.args.Werror: - sys.exit(-10) - - # Add the memory entry to the collection - self.__addMemEntry( - tag=memoryRegion, - vasName=vasName if mapfileContainsVirtualAddresses else None, - vasSectionName=vasSectionName if mapfileContainsVirtualAddresses else None, - section=lineComponents.group(regexPatternData.Groups.section).rstrip(), - moduleName=regexPatternData.getModuleName(lineComponents), - mapfileName=os.path.split(self.globalConfig[configID]["patterns"]["mapfiles"][mapfile]["associatedFilename"])[-1], - configID=configID, - memType=memType, - category=category, - addressStart=physicalAddress, - addressLength=addressLength) - else: - if self.args.verbosity <= 1: - warning_section_name = lineComponents.group(regexPatternData.Groups.section).rstrip() - warning_object_name = ("::" + lineComponents.group(regexPatternData.Groups.module).rstrip()) if hasattr(regexPatternData.Groups, "module") else "" - sc.warning("The element: \"" + mapfile + "(line " + str(lineNumber) + ")::" + - warning_section_name + warning_object_name + " (size: " + str(int(lineComponents.group(regexPatternData.Groups.size), 16)) + " B)\" of 
the configID \"" + - configID + "\" does not belong to any of the memory regions!") - if self.args.Werror: - sys.exit(-1) - continue - - return len(self.globalConfig) - - def writeSummary(self): - """ - Wrapper for writing consumerCollection to file - :return: nothing - """ - consumerCollectionToCSV(self.outputPath, self.consumerCollection) - - def removeUnmatchedFromCategoriesJson(self): - """ - Removes unused module names from categories.json. - The function prompts the user to confirm the overwriting of categories.json - :return: Bool if file has been overwritten - """ - sc.info("Remove unmatched modules from" + self.__categoriesFilePath + "?\n" + self.__categoriesFilePath + " will be overwritten.\n`y` to accept, any other key to discard.") - text = input("> ") if not self.args.noprompt else sys.exit(-10) - if text == "y": - # Make a dict of {modulename: category} from consumerCollection - # Remember: consumerCollection is a list of memEntry objects - rawCategorisedModulesConsumerCollection = {memEntry.moduleName: memEntry.category for memEntry in self.consumerCollection} - - # Format rawCategorisedModulesConsumerCollection to {Categ1: [Modulename1, Modulename2, ...], Categ2: [...]} - categorisedModulesConsumerCollection = {} - for k, v in rawCategorisedModulesConsumerCollection.items(): - categorisedModulesConsumerCollection[v] = categorisedModulesConsumerCollection.get(v, []) - categorisedModulesConsumerCollection[v].append(k) - - for category in self.categoriesJson: # For every category in categories.json - if category not in categorisedModulesConsumerCollection: - # If category is in categories.json but has never occured in the mapfiles (hence not present in consumerCollection) - # Remove the not occuring category entirely - self.categoriesJson.pop(category) - else: # Category occurs in consumerCollection, hence is present in mapfiles, - # overwrite old category module list with the ones acutally occuring in mapfiles - self.categoriesJson[category] = 
categorisedModulesConsumerCollection[category] - - # Sort self.categories case-insensitive in alphabetical order - for x in self.categoriesJson.keys(): - self.categoriesJson[x].sort(key=lambda s: s.lower()) - - shared_libs.emma_helper.writeJson(self.__categoriesFilePath, self.categoriesJson) - - return True - - else: - sc.warning(self.__categoriesFilePath + " not changed.") - if self.args.Werror: - sys.exit(-10) - return False - - def createCategoriesJson(self): - """ - Updates/overwrites the present categories.json - :return Bool if CategoriesJson has been created - """ - # FIXME: Clearer Output (FM) - sc.info("Merge categories.json with categorised modules from categoriesKeywords.json?\ncategories.json will be overwritten.\n`y` to accept, any other key to discard.") - text = input("> ") if not self.args.noprompt else sys.exit(-10) - if text == "y": - # Format moduleCategories to {Categ1: [Modulename1, Modulename2, ...], Categ2: [...]} - formatted = {} - for k, v in dict(self.categorisedFromKeywords).items(): - formatted[v] = formatted.get(v, []) - formatted[v].append(k) - - # Merge categories from keyword search with categories from categories.json - moduleCategories = {**formatted, **self.categoriesJson} - - # Sort moduleCategories case-insensitive in alphabetical order - for x in moduleCategories.keys(): - moduleCategories[x].sort(key=lambda s: s.lower()) - - shared_libs.emma_helper.writeJson(self.__categoriesKeywordsPath, self.__categoriesKeywordsPath) - - return True - else: - sc.warning(self.__categoriesFilePath + " not changed.") - if self.args.Werror: - sys.exit(-10) - return False - - def resolveDuplicateContainmentOverlap(self, nameGetter): - """ - Goes trough the consumerCollection and checks all the elements for the following situations: - 1 - Duplicate - 2 - Containment - 3 - Overlap - - Assumptions: - - The consumerCollection is a list of MemEntry objects: - - It is ordered based on the startAddress attribute - - :param nameGetter: A function to get 
the name of the element. This is solved in this abstract way so it can work for section and object resolving as well. - """ - for actualElement in self.consumerCollection: - for otherElement in self.consumerCollection: - - # Don't compare element with itself and only compare the same configID - if actualElement.equalConfigID(otherElement) and not actualElement == otherElement: - - # Case 0: actualElement and otherElement are completely separated - if actualElement.addressEnd < otherElement.addressStart or actualElement.addressStart > otherElement.addressEnd: - # There is not much to do here... - pass - else: - # Case 1: actualElement and otherElement are duplicates - if actualElement.addressStart == otherElement.addressStart and actualElement.addressEnd == otherElement.addressEnd: - # Setting the actualElement´s duplicateFlag if it was not already set - if actualElement.duplicateFlag is None: - actualElement.duplicateFlag = "Duplicate of (" + nameGetter(otherElement) + ", " + otherElement.configID + ", " + otherElement.mapfile + ")" - # Setting the actualElement to zero addressLength if this was not the first element of the duplicates - # This is needed to include only one of the duplicate elements with the real size in the report and not to distort the results - if otherElement.duplicateFlag is not None: - actualElement.addressLength = 0 - actualElement.addressLengthHex = hex(actualElement.addressLength) - else: - # Case 2: actualElement contains otherElement - if actualElement.addressStart <= otherElement.addressStart and actualElement.addressEnd >= otherElement.addressEnd: - actualElement.containingOthersFlag = True - else: - # Case 3: actualElement is contained by otherElement - if actualElement.addressStart >= otherElement.addressStart and actualElement.addressEnd <= otherElement.addressEnd: - # Setting the actualElement´s containmentFlag if it was not already set - if actualElement.containmentFlag is None: - actualElement.containmentFlag = "Contained by (" + 
nameGetter(otherElement) + ", " + otherElement.configID + ", " + otherElement.mapfile + ")" - # Setting the actualElement to zero addressLength because this was contained by the otherElement - # This is needed to include only one of these elements with the real size in the report and not to distort the results - actualElement.addressLength = 0 - actualElement.addressLengthHex = hex(actualElement.addressLength) - else: - # Case 4: actualElement overlaps otherElement: otherElement starts inside and ends outside actualElement - if actualElement.addressStart < otherElement.addressStart and actualElement.addressEnd < otherElement.addressEnd: - actualElement.overlappingOthersFlag = True - else: - # Case 5: actualElement is overlapped by otherElement: otherElement starts before and ends inside actualElement - if actualElement.addressStart > otherElement.addressStart and actualElement.addressEnd > otherElement.addressEnd: - actualElement.overlapFlag = "Overlapped by (" + nameGetter(otherElement) + ", " + otherElement.configID + ", " + otherElement.mapfile + ")" - # Adjusting the addresses and length of the actualElement: reducing its size by the overlapping part - actualElement.addressStart = otherElement.addressEnd + 1 - actualElement.addressStartHex = hex(actualElement.addressStart) - actualElement.addressLength = actualElement.addressEnd - actualElement.addressStart + 1 - actualElement.addressLengthHex = hex(actualElement.addressLength) - # Case X: SW error, unhandled case... 
- else: - sc.error("MemoryManager::resolveOverlap(): Case X: SW error, unhandled case...") - - -def createMemStatsFilepath(rootdir, subdir, csvFilename, projectName): - memStatsRootPath = shared_libs.emma_helper.joinPath(rootdir, subdir, OUTPUT_DIR) - shared_libs.emma_helper.mkDirIfNeeded(memStatsRootPath) - memStatsFileName = projectName + "_" + csvFilename + "_" + timestamp.replace(" ", "") + ".csv" - return shared_libs.emma_helper.joinPath(memStatsRootPath, memStatsFileName) - - -def consumerCollectionToCSV(filepath, consumerCollection): - """ - Writes the consumerCollection containig MemoryEntrys to CSV - :param filepath: Absolute path to the csv file - :param consumerCollection: List containing memEntrys - """ - with open(filepath, "w") as fp: - writer = csv.writer(fp, delimiter=";", lineterminator="\n") - header = [ADDR_START_HEX, ADDR_END_HEX, SIZE_HEX, ADDR_START_DEC, ADDR_END_DEC, SIZE_DEC, SIZE_HUMAN_READABLE, - SECTION_NAME, MODULE_NAME, CONFIG_ID, VAS_NAME, VAS_SECTION_NAME, MEM_TYPE, TAG, CATEGORY, DMA, MAPFILE, OVERLAP_FLAG, - CONTAINMENT_FLAG, DUPLICATE_FLAG, CONTAINING_OTHERS_FLAG, ADDR_START_HEX_ORIGINAL, - ADDR_END_HEX_ORIGINAL, SIZE_HEX_ORIGINAL, SIZE_DEC_ORIGINAL] - writer.writerow(header) - for row in consumerCollection: - writer.writerow([ - row.addressStartHex, - row.addressEndHex, - row.addressLengthHex, - row.addressStart, - row.addressEnd, - row.addressLength, - shared_libs.emma_helper.toHumanReadable(row.addressLength), - row.section, - row.moduleName, - row.configID, - row.vasName, - row.vasSectionName, - row.memType, - row.memTypeTag, - row.category, - row.dma, - row.mapfile, - row.overlapFlag, - row.containmentFlag, - row.duplicateFlag, - row.containingOthersFlag, - # Addresses are modified in case of overlapping so we will post the original values so that the changes can be seen - row.addressStartOriginal if (row.overlapFlag is not None) else "", - row.addressEndOriginal if (row.overlapFlag is not None) else "", - # Lengths are 
modified in case of overlapping, containment and duplication so we will post the original values so that the changes can be seen - row.addressLengthHexOriginal if ((row.overlapFlag is not None) or (row.containmentFlag is not None) or (row.duplicateFlag is not None)) else "", - row.addressLengthOriginal if ((row.overlapFlag is not None) or (row.containmentFlag is not None) or (row.duplicateFlag is not None)) else "" - ]) - - sc.info("Summary saved in:", os.path.abspath(filepath)) - sc.info("Filename:", os.path.split(filepath)[-1]) - print("\n") - - -class SectionParser(MemoryManager): - def __init__(self, args): - regexData = emma_libs.mapfileRegexes.ImageSummaryPattern() # Regex Data containing the groups - categoriesPath = shared_libs.emma_helper.joinPath(args.project, CATEGORIES_SECTIONS_JSON) # The file path to categories.JSON - categoriesKeywordsPath = shared_libs.emma_helper.joinPath(args.project, CATEGORIES_KEYWORDS_SECTIONS_JSON) # The file path to categoriesKeyowrds.JSON - fileIdentifier = FILE_IDENTIFIER_SECTION_SUMMARY - super().__init__(args, categoriesPath, categoriesKeywordsPath, fileIdentifier, regexData) - - def resolveDuplicateContainmentOverlap(self): - nameGetter = lambda target: target.section - super().resolveDuplicateContainmentOverlap(nameGetter) - - -class ObjectParser(MemoryManager): - def __init__(self, args): - regexData = emma_libs.mapfileRegexes.ModuleSummaryPattern() # Regex Data containing the groups - categoriesPath = shared_libs.emma_helper.joinPath(args.project, CATEGORIES_OBJECTS_JSON) # The filepath to categories.JSON - categoriesKeywordsPath = shared_libs.emma_helper.joinPath(args.project, CATEGORIES_KEYWORDS_OBJECTS_JSON) # The filepath to categoriesKeyowrds.JSON - fileIdentifier = FILE_IDENTIFIER_OBJECT_SUMMARY - super().__init__(args, categoriesPath, categoriesKeywordsPath, fileIdentifier, regexData) - - def resolveDuplicateContainmentOverlap(self): - nameGetter = lambda target: target.section + "::" + target.moduleName - 
super().resolveDuplicateContainmentOverlap(nameGetter) + sc().error("The mapfiles need to be processed before creating the reports!") diff --git a/emma_libs/memoryMap.py b/emma_libs/memoryMap.py index 8ea516b..17c65ef 100644 --- a/emma_libs/memoryMap.py +++ b/emma_libs/memoryMap.py @@ -17,59 +17,155 @@ """ -import os +import csv import bisect import copy +import datetime + +from pypiscout.SCout_Logger import Logger as sc from shared_libs.stringConstants import * # pylint: disable=unused-wildcard-import,wildcard-import -import emma_libs.memoryManager +import shared_libs.emma_helper + + +# Timestamp for the report file names +TIMESTAMP = datetime.datetime.now().strftime("%Y-%m-%d-%Hh%Ms%S") + + +def resolveDuplicateContainmentOverlap(consumerCollection, memEntryHandler): + # pylint: disable=too-many-nested-blocks, too-many-branches + # Rationale: Because of the complexity of the task this function implements, reducing the number of nested blocks and branches is not possible. + + """ + Goes through the consumerCollection and checks and resolves all the elements for the following situations: + 1 - Duplicate + 2 - Containment + 3 - Overlap + + :param consumerCollection: A list of MemEntry objects. It must be ordered increasingly based on the startAddress attribute of the elements. + The elements of the list will be changed during the processing. + :param memEntryHandler: A subclass of the MemEntryHandler class. 
+ :return: None + """ + for actualElement in consumerCollection: + for otherElement in consumerCollection: + + # Don't compare element with itself and only compare the same configID + if actualElement.equalConfigID(otherElement) and not memEntryHandler.isEqual(actualElement, otherElement): + + # Case 0: actualElement and otherElement are completely separated: the otherElement begins only after the actualElement or the actualElement begins only after the otherElement + if (actualElement.addressStart + actualElement.addressLength) <= otherElement.addressStart or actualElement.addressStart >= (otherElement.addressStart + otherElement.addressLength): + # There is not much to do here... + pass + else: + # Case 1: actualElement and otherElement are duplicates + if actualElement.addressStart == otherElement.addressStart and actualElement.addressLength == otherElement.addressLength: + # Setting the actualElement's duplicateFlag if it was not already set + if actualElement.duplicateFlag is None: + actualElement.duplicateFlag = "Duplicate of (" + memEntryHandler.getName(otherElement) + ", " + otherElement.configID + ", " + otherElement.mapfile + ")" + # Setting the actualElement to zero addressLength if this was not the first element of the duplicates + # This is needed to include only one of the duplicate elements with the real size in the report and not to distort the results + if otherElement.duplicateFlag is not None: + actualElement.addressLength = 0 + else: + # Case 2: actualElement contains otherElement + if actualElement.addressStart <= otherElement.addressStart and (actualElement.addressStart + actualElement.addressLength) >= (otherElement.addressStart + otherElement.addressLength): + actualElement.containingOthersFlag = True + else: + # Case 3: actualElement is contained by otherElement + if actualElement.addressStart >= otherElement.addressStart and (actualElement.addressStart + actualElement.addressLength) <= (otherElement.addressStart + 
otherElement.addressLength): + # Setting the actualElement's containmentFlag if it was not already set + if actualElement.containmentFlag is None: + actualElement.containmentFlag = "Contained by (" + memEntryHandler.getName(otherElement) + ", " + otherElement.configID + ", " + otherElement.mapfile + ")" + # Setting the actualElement to zero addressLength because this was contained by the otherElement + # This is needed to include only one of these elements with the real size in the report and not to distort the results + actualElement.addressLength = 0 + else: + # Case 4: actualElement overlaps otherElement: otherElement starts inside and ends outside actualElement + if actualElement.addressStart < otherElement.addressStart and (actualElement.addressStart + actualElement.addressLength) < (otherElement.addressStart + otherElement.addressLength): + actualElement.overlappingOthersFlag = True + else: + # Case 5: actualElement is overlapped by otherElement: otherElement starts before and ends inside actualElement + if actualElement.addressStart > otherElement.addressStart and (actualElement.addressStart + actualElement.addressLength) > (otherElement.addressStart + otherElement.addressLength): + actualElement.overlapFlag = "Overlapped by (" + memEntryHandler.getName(otherElement) + ", " + otherElement.configID + ", " + otherElement.mapfile + ")" + # Adjusting the addresses and length of the actualElement: reducing its size by the overlapping part + newAddressStart = otherElement.addressStart + otherElement.addressLength + sizeOfOverlappingPart = newAddressStart - actualElement.addressStart + actualElement.addressStart = newAddressStart + actualElement.addressLength -= sizeOfOverlappingPart + # Case X: SW error, unhandled case... 
+ else: + sc().error("MemoryManager::resolveOverlap(): Case X: SW error, unhandled case...") def calculateObjectsInSections(sectionContainer, objectContainer): """ - Assumptions: - - The sectionCollection is a list of MemEntry objects: - - It is ordered based on the startAddress attribute - - The overlapping sections are already edited and the addresses corrected - - The objectCollection is a list of MemEntry objects - - It is ordered based on the startAddress attribute - - The overlapping objects are already edited and the addresses corrected + Creating a list of MemEntry objects from two lists of MemEntry objects representing the sections and objects. + These two lists will be merged together. + From sections, new elements will be created: + - Section entry: A MemEntry object that describes the section but does not use memory space. + - Section reserve: A MemEntry object that describes the unused part of a section that was not filled up with objects. + + :param sectionContainer: A list of MemEntry objects. It must be ordered increasingly based on the startAddress attribute of the elements. + The overlapping, containing, duplicate sections must already be edited and the addresses and lengths corrected. + :param objectContainer: A list of MemEntry objects. It must be ordered increasingly based on the startAddress attribute of the elements. + The overlapping, containing, duplicate objects must already be edited and the addresses and lengths corrected. + :return: A list of MemEntry objects that contains all the elements of the sectionContainer and the objectContainer. """ objectsInSections = [] - def createASectionEntry(): - sectionEntry = copy.deepcopy(sectionContainerElement) - sectionEntry.moduleName = OBJECTS_IN_SECTIONS_SECTION_ENTRY + def createASectionEntry(sourceSection): + """ + Function to create a sectionEntry based on the sourceSection. + :param sourceSection: MemEntry object to create a section entry from. 
+ :return: None + """ + sectionEntry = copy.deepcopy(sourceSection) + sectionEntry.objectName = OBJECTS_IN_SECTIONS_SECTION_ENTRY sectionEntry.addressLength = 0 - sectionEntry.addressLengthHex = hex(sectionEntry.addressLength) objectsInSections.append(sectionEntry) def createASectionReserve(sourceSection, addressEnd=None): + """ + Function to create a sectionReserveEntry based on the sourceSection. + :param sourceSection: MemEntry object to create a section reserve from. + :param addressEnd: The end address that the sourceSection shall have after the reserve creation. + If it is None, then the whole source section will be converted to a reserve. + :return: None + """ # If we have received a specific addressEnd then we will use that one and recalculate the size of the section # In this case we need to make a deepcopy of the sourceSection because the SW will continue to work with it if addressEnd is not None: sourceSectionCopy = copy.deepcopy(sourceSection) - sourceSectionCopy.moduleName = OBJECTS_IN_SECTIONS_SECTION_RESERVE - sourceSectionCopy.addressEnd = addressEnd - sourceSectionCopy.addressEndHex = hex(sourceSectionCopy.addressEnd) - sourceSectionCopy.addressLength = sourceSectionCopy.addressEnd - sourceSectionCopy.addressStart + 1 - sourceSectionCopy.addressLengthHex = hex(sourceSectionCopy.addressLength) + sourceSectionCopy.objectName = OBJECTS_IN_SECTIONS_SECTION_RESERVE + sourceSectionCopy.setAddressesGivenEnd(addressEnd) objectsInSections.append(sourceSectionCopy) # If not, then the whole sourceSection will be stored as a reserve # In this case no copy needed because the SW does not need it anymore else: - sourceSection.moduleName = OBJECTS_IN_SECTIONS_SECTION_RESERVE + sourceSection.objectName = OBJECTS_IN_SECTIONS_SECTION_RESERVE objectsInSections.append(sourceSection) def cutOffTheBeginningOfTheSection(sectionToCut, newAddressStart): - sectionToCut.addressStart = newAddressStart - sectionToCut.addressStartHex = hex(sectionToCut.addressStart) - 
sectionToCut.addressLength = sectionToCut.addressEnd - sectionToCut.addressStart + 1 - sectionToCut.addressLengthHex = hex(sectionToCut.addressLength) + """ + Function to cut off the beginning of a section. + :param sectionToCut: MemEntry object that will have its beginning cut off. + :param newAddressStart: The new start address value the section will have after the cut. + :return: None + """ + # We need to make sure that the newAddressStart does not cause a cut bigger than the available length + lengthThatWillBeCutOff = newAddressStart - sectionToCut.addressStart + if lengthThatWillBeCutOff <= sectionToCut.addressLength: + sectionToCut.addressStart = newAddressStart + sectionToCut.addressLength -= lengthThatWillBeCutOff + else: + sc().error("memoryMap.py::calculateObjectsInSections::cutOffTheBeginningOfTheSection(): " + + sectionToCut.configID + "::" + sectionToCut.sectionName + ": The newAddressStart(" + + str(newAddressStart) + ") would cause a cut that is bigger than the addressLength! (" + str(lengthThatWillBeCutOff) + " vs " + str(sectionToCut.addressLength) + ")") for sectionContainerElement in sectionContainer: # Creating a section entry - createASectionEntry() + createASectionEntry(sectionContainerElement) # We will skip the sections that are contained by other sections or have a zero length if sectionContainerElement.containmentFlag is not None: @@ -86,11 +182,11 @@ def cutOffTheBeginningOfTheSection(sectionToCut, newAddressStart): # - is not belonging to the same configID as the section # - or have a zero length or # - if it ends before this section, because it means that this object is outside the section. 
- if sectionCopy.configID != objectContainerElement.configID or objectContainerElement.addressLength == 0 or sectionCopy.addressStart > objectContainerElement.addressEnd: + if sectionCopy.configID != objectContainerElement.configID or objectContainerElement.addressLength == 0 or sectionCopy.addressStart >= (objectContainerElement.addressStart + objectContainerElement.addressLength): continue # Case 0: The object is completely overlapping the section - if objectContainerElement.addressStart <= sectionCopy.addressStart and sectionCopy.addressEnd <= objectContainerElement.addressEnd: + if objectContainerElement.addressStart <= sectionCopy.addressStart and (sectionCopy.addressStart + sectionCopy.addressLength) <= (objectContainerElement.addressStart + objectContainerElement.addressLength): # This object is overlapping the section completely. This means that all the following objects will be # outside the section, so we can continue with the next section. # We also need to mark that the section is fully loaded with the object @@ -98,19 +194,21 @@ def cutOffTheBeginningOfTheSection(sectionToCut, newAddressStart): break # Case 1: The object is overlapping the beginning of the section - elif objectContainerElement.addressStart <= sectionCopy.addressStart and objectContainerElement.addressEnd < sectionCopy.addressEnd: + elif objectContainerElement.addressStart <= sectionCopy.addressStart and (objectContainerElement.addressStart + objectContainerElement.addressLength) < (sectionCopy.addressStart + sectionCopy.addressLength): # Cutting off the beginning of the section - cutOffTheBeginningOfTheSection(sectionCopy, objectContainerElement.addressEnd + 1) + newSectionAddressStart = objectContainerElement.addressStart + objectContainerElement.addressLength + cutOffTheBeginningOfTheSection(sectionCopy, newSectionAddressStart) # Case 2: The object is overlapping a part in the middle of the section - elif sectionCopy.addressStart < objectContainerElement.addressStart and 
objectContainerElement.addressEnd < sectionCopy.addressEnd: + elif sectionCopy.addressStart < objectContainerElement.addressStart and (objectContainerElement.addressStart + objectContainerElement.addressLength) < (sectionCopy.addressStart + sectionCopy.addressLength): # Creating a sectionReserve createASectionReserve(sectionCopy, objectContainerElement.addressStart - 1) - # Setting up the remaining section part - cutOffTheBeginningOfTheSection(sectionCopy, objectContainerElement.addressEnd + 1) + # Setting up the remaining section part: the section will start after the object + newSectionAddressStart = objectContainerElement.addressStart + objectContainerElement.addressLength + cutOffTheBeginningOfTheSection(sectionCopy, newSectionAddressStart) # Case 3: The object is overlapping the end of the section - elif sectionCopy.addressStart < objectContainerElement.addressStart <= sectionCopy.addressEnd < objectContainerElement.addressEnd: + elif sectionCopy.addressStart < objectContainerElement.addressStart <= (sectionCopy.addressStart + sectionCopy.addressLength) <= (objectContainerElement.addressStart + objectContainerElement.addressLength): # Creating the sectionReserve createASectionReserve(sectionCopy, objectContainerElement.addressStart - 1) # This object is overlapping the end of the section. This means that all the following objects will be @@ -121,7 +219,7 @@ def cutOffTheBeginningOfTheSection(sectionToCut, newAddressStart): # Case 4: The object is only starting after this section, it means that the following objects will also be outside the section. # So we have to create a reserve for the remaining part of the section and we can exit the object loop now and continue with the next section. 
- elif sectionCopy.addressEnd < objectContainerElement.addressStart: + elif (sectionCopy.addressStart + sectionCopy.addressLength) <= objectContainerElement.addressStart: # There is not much to do here, the reserve will be created after the object loop break @@ -139,9 +237,90 @@ def cutOffTheBeginningOfTheSection(sectionToCut, newAddressStart): return objectsInSections -def memoryMapToCSV(argsDir, argsSubdir, argsProject, memoryMap): +def createReportPath(outputPath, projectName, reportName): + """ + Function to create a string representing the path of a report. + :param outputPath: The folder where the report will be. + :param projectName: The name of the project. + :param reportName: The name of the report. + :return: The created path string. + """ + shared_libs.emma_helper.mkDirIfNeeded(outputPath) + memStatsFileName = projectName + "_" + reportName + "_" + TIMESTAMP + ".csv" + return shared_libs.emma_helper.joinPath(outputPath, memStatsFileName) + + +def collectCompilerSpecificHeaders(consumerCollection): + """ + Function to create a list of the headers that the compiler specific data of a consumer collection has. + :param consumerCollection: The consumer collection that has elements with compiler specific data. + :return: List of strings. + """ + collectedHeaders = [] + + for element in consumerCollection: + for key in element.compilerSpecificData.keys(): + if key not in collectedHeaders: + collectedHeaders.append(key) + + return collectedHeaders + + +def writeReportToDisk(reportPath, consumerCollection): """ - Writes the memoryMap created in calculateObjectsInSections(...) to CSV + Writes the consumerCollection containing MemEntry objects to a CSV file. + :param reportPath: A path of the CSV that needs to be created. + :param consumerCollection: A list of MemEntry objects. 
""" - filepath = emma_libs.memoryManager.createMemStatsFilepath(argsDir, argsSubdir, FILE_IDENTIFIER_OBJECTS_IN_SECTIONS, os.path.split(os.path.normpath(argsProject))[-1]) - emma_libs.memoryManager.consumerCollectionToCSV(filepath, memoryMap) + + # Opening the file + with open(reportPath, "w") as fp: + # The writer object that will be used for creating the CSV data + writer = csv.writer(fp, delimiter=";", lineterminator="\n") + + # Creating the list with the first part of the static headers + headers = [ADDR_START_HEX, ADDR_END_HEX, SIZE_HEX, ADDR_START_DEC, ADDR_END_DEC, + SIZE_DEC, SIZE_HUMAN_READABLE, SECTION_NAME, OBJECT_NAME, CONFIG_ID] + + # Extending it with the compiler specific headers + compilerSpecificHeaders = collectCompilerSpecificHeaders(consumerCollection) + headers.extend(compilerSpecificHeaders) + + # Collecting the rest of the static headers + headers.extend([MEM_TYPE, MEM_TYPE_TAG, CATEGORY, MAPFILE, + OVERLAP_FLAG, CONTAINMENT_FLAG, DUPLICATE_FLAG, CONTAINING_OTHERS_FLAG, + ADDR_START_HEX_ORIGINAL, ADDR_END_HEX_ORIGINAL, SIZE_HEX_ORIGINAL, SIZE_DEC_ORIGINAL]) + + # Writing the headers to the CSV file + writer.writerow(headers) + + # Writing the data lines to the file + for row in consumerCollection: + # Collecting the first part of the static data for the current row + rowData = [row.addressStartHex(), row.addressEndHex(), row.addressLengthHex(), row.addressStart, row.addressEnd(), + row.addressLength, shared_libs.emma_helper.toHumanReadable(row.addressLength), row.sectionName, row.objectName, row.configID] + + # Extending it with the data part of the compiler specific data pairs of this MemEntry object + for compilerSpecificHeader in compilerSpecificHeaders: + rowData.append(row.compilerSpecificData[compilerSpecificHeader] if compilerSpecificHeader in row.compilerSpecificData else "") + + # Collecting the rest of the static data for the current row + rowData.extend([ + row.memType, + row.memTypeTag, + row.category, + row.mapfile, + 
row.overlapFlag, + row.containmentFlag, + row.duplicateFlag, + row.containingOthersFlag, + # Addresses are modified in case of overlapping so we will post the original values so that the changes can be seen + row.addressStartHexOriginal() if (row.overlapFlag is not None) else "", + row.addressEndHexOriginal() if (row.overlapFlag is not None) else "", + # Lengths are modified in case of overlapping, containment and duplication so we will post the original values so that the changes can be seen + row.addressLengthHexOriginal() if ((row.overlapFlag is not None) or (row.containmentFlag is not None) or (row.duplicateFlag is not None)) else "", + row.addressLengthOriginal if ((row.overlapFlag is not None) or (row.containmentFlag is not None) or (row.duplicateFlag is not None)) else "" + ]) + + # Writing the data to the file + writer.writerow(rowData) diff --git a/emma_libs/specificConfiguration.py b/emma_libs/specificConfiguration.py new file mode 100644 index 0000000..31bb788 --- /dev/null +++ b/emma_libs/specificConfiguration.py @@ -0,0 +1,58 @@ +""" +Emma - Emma Memory and Mapfile Analyser +Copyright (C) 2019 The Emma authors + +This program is free software: you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation, either version 3 of the License, or +(at your option) any later version. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. + +You should have received a copy of the GNU General Public License +along with this program. If not, see +""" + + +import abc + + +class SpecificConfiguration(abc.ABC): + """ + Abstract parent class for classes that represent the compiler specific configuration. 
+ """ + @abc.abstractmethod + def __init__(self, noPrompt): + """ + Constructor of the SpecificConfiguration class. The functionality that is required to be implemented is the + storage of settings that are the parameters of the constructor. + :param noPrompt: False if the calls to the methods may contain user prompts. + If True, then no user prompts shall be made. It is suggested that the program fails in case + it cannot decide the necessary action. + """ + + @abc.abstractmethod + def readConfiguration(self, configurationPath, mapfilesPath, configId, configuration) -> None: + """ + This function will be called to read in and process the compiler specific parts of the configuration. + These parts of the configuration will be used during the compiler specific mapfile processing. + :param configurationPath: Path of the directory that contains the configuration files. + :param mapfilesPath: Path of the directory that contains the mapfiles. + :param configId: The configId to which the configuration belongs. + :param configuration: The configuration dictionary to which the configuration elements need to be added. + :return: None + """ + + @abc.abstractmethod + def checkConfiguration(self, configId, configuration) -> bool: + """ + This function will be called after readConfiguration() to check whether the configuration is correct. + The checks are fully compiler specific, there are no requirements for them. Based on the result of this function, + the configuration may not be analysed if it was marked incorrect. + :param configId: The configId to which the configuration belongs. + :param configuration: The configuration that needs to be checked. + :return: True if the configuration is correct, False otherwise. 
+ """ diff --git a/emma_libs/specificConfigurationFactory.py b/emma_libs/specificConfigurationFactory.py new file mode 100644 index 0000000..e4e148c --- /dev/null +++ b/emma_libs/specificConfigurationFactory.py @@ -0,0 +1,41 @@ +""" +Emma - Emma Memory and Mapfile Analyser +Copyright (C) 2019 The Emma authors + +This program is free software: you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation, either version 3 of the License, or +(at your option) any later version. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. + +You should have received a copy of the GNU General Public License +along with this program. If not, see +""" + + +from pypiscout.SCout_Logger import Logger as sc + +from shared_libs.stringConstants import * # pylint: disable=unused-wildcard-import,wildcard-import +import emma_libs.ghsConfiguration + + +def createSpecificConfiguration(compiler, **kwargs): + """ + A factory for creating an object of one of the subclasses of the SpecificConfiguration class. + The concrete subclass is selected based on the received compiler name. + :param compiler: The compiler name. + :param kwargs: The arguments that will be forwarded to the constructor during the object creation. + :return: An object of the selected subclass of the SpecificConfiguration. 
+ """ + + configuration = None + if COMPILER_NAME_GHS == compiler: + configuration = emma_libs.ghsConfiguration.GhsConfiguration(**kwargs) + else: + sc().error("Unexpected compiler value: " + compiler) + + return configuration diff --git a/emma_vis.py b/emma_vis.py index 33acf9c..0b65603 100644 --- a/emma_vis.py +++ b/emma_vis.py @@ -25,9 +25,9 @@ import argparse import pandas -import pypiscout as sc +from pypiscout.SCout_Logger import Logger as sc -from shared_libs.stringConstants import * # pylint: disable=unused-wildcard-import,wildcard-import # pylint: disable=unused-wildcard-import +from shared_libs.stringConstants import * # pylint: disable=unused-wildcard-import,wildcard-import import shared_libs.emma_helper import emma_vis_libs.dataVisualiserSections import emma_vis_libs.dataVisualiserObjects @@ -44,56 +44,37 @@ def main(args): - # Evaluate files and directories and make directories if necessary - projectPath = args.project + imageFile = emma_vis_libs.helper.getLastModFileOrPrompt(FILE_IDENTIFIER_SECTION_SUMMARY, args.inOutPath, args.quiet, args.append, args.noprompt) + moduleFile = emma_vis_libs.helper.getLastModFileOrPrompt(FILE_IDENTIFIER_OBJECT_SUMMARY, args.inOutPath, args.quiet, args.append, args.noprompt) + objectsInSectionsFile = emma_vis_libs.helper.getLastModFileOrPrompt(FILE_IDENTIFIER_OBJECTS_IN_SECTIONS, args.inOutPath, args.quiet, args.append, args.noprompt) - # Check project path before results path (=dir); this is our results path default if nothing was given - if projectPath is None: - sc.error("No project path given. 
Exiting...") - sys.exit(-10) - else: - shared_libs.emma_helper.checkIfFolderExists(projectPath) - - # Set default value for results path - resultsPath = projectPath - - # Overwrite results if path is given - if args.dir is not None: - resultsPath = shared_libs.emma_helper.joinPath(args.dir, args.subdir, OUTPUT_DIR_VISUALISER) - shared_libs.emma_helper.mkDirIfNeeded(resultsPath) - - imageFile = emma_vis_libs.helper.getLastModFileOrPrompt(FILE_IDENTIFIER_SECTION_SUMMARY, args) - moduleFile = emma_vis_libs.helper.getLastModFileOrPrompt(FILE_IDENTIFIER_OBJECT_SUMMARY, args) - objectsInSectionsFile = emma_vis_libs.helper.getLastModFileOrPrompt(FILE_IDENTIFIER_OBJECTS_IN_SECTIONS, args) + resultsPath = shared_libs.emma_helper.joinPath(args.inOutPath, OUTPUT_DIR_VISUALISER) # We don't have to check the existence of this path since this was done during parseArgs + shared_libs.emma_helper.mkDirIfNeeded(resultsPath) # Init classes for summaries - consumptionObjectsInSections = emma_vis_libs.dataVisualiserMemoryMap.MemoryMap(projectPath=projectPath, - args=args, + consumptionObjectsInSections = emma_vis_libs.dataVisualiserMemoryMap.MemoryMap(projectPath=args.projectDir, fileToUse=objectsInSectionsFile, resultsPath=resultsPath) consumptionObjectsInSections.plotPieChart(plotShow=False) # Image Summary object - sc.info("Analysing", imageFile) - consumptionImage = emma_vis_libs.dataVisualiserSections.ImageConsumptionList(projectPath=projectPath, - args=args, + sc().info("Analysing", imageFile) + consumptionImage = emma_vis_libs.dataVisualiserSections.ImageConsumptionList(projectPath=args.projectDir, fileToUse=imageFile, resultsPath=resultsPath) # Module Summary object - sc.info("Analysing", moduleFile) + sc().info("Analysing", moduleFile) try: - consumptionModule = emma_vis_libs.dataVisualiserObjects.ModuleConsumptionList(projectPath=projectPath, - args=args, + consumptionModule = emma_vis_libs.dataVisualiserObjects.ModuleConsumptionList(projectPath=args.projectDir, 
fileToUse=moduleFile, - resultsPath=resultsPath) + resultsPath=args.inOutPath) except ValueError: - sc.error("Data does not contain any module/object entry - exiting...") - sys.exit(-10) + sc().error("Data does not contain any module/object entry - exiting...") # Object for visualisation fo image and module summary categorisedImage = emma_vis_libs.dataVisualiserCategorisedSections.CategorisedImageConsumptionList(resultsPath=resultsPath, - projectPath=projectPath, + projectPath=args.projectDir, statsTimestamp=consumptionImage.statsTimestamp, imageSumObj=consumptionImage, moduleSumObj=consumptionModule) @@ -111,23 +92,24 @@ def main(args): # Write each report to file if append mode in args is selected if args.append: - sc.info("Appending reports...") + sc().info("Appending reports...") consumptionImage.writeReportToFile() - report = emma_vis_libs.dataReports.Reports(projectPath=projectPath) + report = emma_vis_libs.dataReports.Reports(projectPath=args.projectDir) report.plotNdisplay(plotShow=False) # Create a Markdown overview document and add all parts to it elif args.overview: - markdown_file_path = consumptionImage.createMarkdownOverview() - consumptionModule.appendModuleConsumptionToMarkdownOverview(markdown_file_path) - consumptionImage.appendSupplementToMarkdownOverview(markdown_file_path) - shared_libs.emma_helper.convertMarkdownFileToHtmlFile(markdown_file_path, (os.path.splitext(markdown_file_path)[0] + ".html")) + markdownFilePath = consumptionImage.createMarkdownOverview() + consumptionModule.appendModuleConsumptionToMarkdownOverview(markdownFilePath) + consumptionImage.appendSupplementToMarkdownOverview(markdownFilePath) + shared_libs.emma_helper.convertMarkdownFileToHtmlFile(markdownFilePath, (os.path.splitext(markdownFilePath)[0] + ".html")) def parseArgs(arguments=""): """ Argument parser - :return: nothing + :param arguments: List of strings specifying the arguments to be parsed + :return: Argparse object """ parser = argparse.ArgumentParser( @@ 
-143,10 +125,10 @@ def parseArgs(arguments=""): version="%(prog)s, Version: " + EMMA_VISUALISER_VERSION ) parser.add_argument( - "--project", + "--projectDir", "-p", required=True, - help="Path of directory holding the configs files. The project name will be derived from the root folder", + help="Path to directory holding the config files. The project name will be derived from this folder name,", ) parser.add_argument( "--quiet", @@ -161,15 +143,15 @@ def parseArgs(arguments=""): default=False ) parser.add_argument( - "--dir", - "-d", - help="User defined path to the statistics root directory.", + "--inOutDir", + "-i", + help="Path containing the memStats directory (-> Emma output). If not given the `project` directory will be used.", default=None ) parser.add_argument( - "--subdir", - help="User defined subdirectory in results folder.", - default="" + "--subDir", + help="Sub-directory of `inOutDir` where the Emma Visualiser results will be stored. If not given results will be stored in `inOutDir`.", + default=None ) parser.add_argument( "--overview", @@ -191,32 +173,58 @@ def parseArgs(arguments=""): default=False ) + # We will either parse the arguments string if it is not empty, + # or (in the default case) the data from sys.argv if "" == arguments: - args = parser.parse_args() + parsedArguments = parser.parse_args() else: - args = parser.parse_args(arguments) + # Arguments were passed to this function (e.g. for unit testing) + parsedArguments = parser.parse_args(arguments) - if args.dir is None: - args.dir = args.project + # Prepare final paths + parsedArguments.inOutPath = "" + + # Check given paths + if parsedArguments.projectDir is None: # This should not happen since it is a required argument + sc().error("No project path given. 
Exiting...") + else: + parsedArguments.projectDir = shared_libs.emma_helper.joinPath(parsedArguments.projectDir) # Unify path + shared_libs.emma_helper.checkIfFolderExists(parsedArguments.projectDir) + + parsedArguments.inOutPath = parsedArguments.projectDir + if parsedArguments.inOutDir is None: + parsedArguments.inOutDir = parsedArguments.projectDir else: - args.dir = shared_libs.emma_helper.joinPath(args.dir) + parsedArguments.inOutDir = shared_libs.emma_helper.joinPath(parsedArguments.inOutDir) # Unify path + shared_libs.emma_helper.checkIfFolderExists(parsedArguments.inOutDir) + + parsedArguments.inOutPath = parsedArguments.inOutDir + if parsedArguments.subDir is None: + parsedArguments.subDir = "" + else: + parsedArguments.subDir = shared_libs.emma_helper.joinPath(parsedArguments.subDir) # Unify path + + joinedInputPath = shared_libs.emma_helper.joinPath(parsedArguments.inOutDir, parsedArguments.subDir) + shared_libs.emma_helper.checkIfFolderExists(joinedInputPath) + parsedArguments.inOutPath = joinedInputPath - # Get paths straight (only forward slashes) - args.dir = shared_libs.emma_helper.joinPath(args.dir) - args.subdir = shared_libs.emma_helper.joinPath(args.subdir) + # Clean-up paths + del parsedArguments.subDir + del parsedArguments.inOutDir - return args + return parsedArguments if __name__ == "__main__": - args = parseArgs() + parsedArguments = parseArgs() + sc(invVerbosity=-1, actionWarning=(lambda : sys.exit(-10) if parsedArguments.Werror is not None else None), actionError=lambda : sys.exit(-10)) - sc.header("Emma Memory and Mapfile Analyser - Visualiser", symbol="/") + sc().header("Emma Memory and Mapfile Analyser - Visualiser", symbol="/") timeStart = timeit.default_timer() - sc.info("Started processing at:", datetime.datetime.now().strftime("%H:%M:%S")) + sc().info("Started processing at:", datetime.datetime.now().strftime("%H:%M:%S")) - main(args) + main(parsedArguments) timeEnd = timeit.default_timer() - sc.info("Finished job at:", 
datetime.datetime.now().strftime("%H:%M:%S"), "(duration: " + "{0:.2f}".format(timeEnd - timeStart) + "s)") + sc().info("Finished job at:", datetime.datetime.now().strftime("%H:%M:%S"), "(duration: " + "{0:.2f}".format(timeEnd - timeStart) + "s)") diff --git a/emma_vis_libs/__init__.py b/emma_vis_libs/__init__.py index 8b15f92..5e32dae 100644 --- a/emma_vis_libs/__init__.py +++ b/emma_vis_libs/__init__.py @@ -15,4 +15,3 @@ You should have received a copy of the GNU General Public License along with this program. If not, see """ - diff --git a/emma_vis_libs/dataVisualiser.py b/emma_vis_libs/dataVisualiser.py index 16f2b14..6783b5a 100644 --- a/emma_vis_libs/dataVisualiser.py +++ b/emma_vis_libs/dataVisualiser.py @@ -27,6 +27,8 @@ import matplotlib import matplotlib.style +from pypiscout.SCout_Logger import Logger as sc + from shared_libs.stringConstants import * # pylint: disable=unused-wildcard-import,wildcard-import import shared_libs.emma_helper @@ -89,20 +91,20 @@ def __init__(self, fileToUse, resultsPath, projectPath): # default header self.header = [ # TODO: Remove the header list and replace it with constants from stringConstants.py # Later indexes are used (therefore the numbers commented inline) - ADDR_START_HEX, # 0 + ADDR_START_HEX, # 0 ADDR_END_HEX, SIZE_HEX, # 2 ADDR_START_DEC, ADDR_END_DEC, SIZE_DEC, # 5 - MODULE_NAME, + OBJECT_NAME, CONFIG_ID, # 7 VAS_NAME, MEM_TYPE, - TAG, # 10 + MEM_TYPE_TAG, # 10 CATEGORY, DMA, - MAPFILE # 13 + MAPFILE # 13 ] self.data = pandas.DataFrame(columns=self.header) matplotlib.style.use("ggplot") # Pycharm might claim there is no reference 'style' in `__init__.py` (you can ignore this)(https://stackoverflow.com/a/23839976/4773274) @@ -118,11 +120,17 @@ def __readBudgets(self): Reads the budgets.json file :return: nothing """ - filepath = shared_libs.emma_helper.joinPath(self.projectPath, "budgets.json") - with open(filepath, "r") as fp: - budgets = json.load(fp) - self.budgets = budgets["Budgets"] - self.projectThreshold = 
budgets["Project Threshold in %"] + self.budgets = None + self.projectThreshold = None + + try: + filepath = shared_libs.emma_helper.joinPath(self.projectPath, "budgets.json") + with open(filepath, "r") as fp: + budgets = json.load(fp) + self.budgets = budgets["Budgets"] + self.projectThreshold = budgets["Project Threshold in %"] + except FileNotFoundError: + sc().error("The budgets.json file was not found in the project folder (" + self.projectPath + ")!") def __readMemStatsFile(self): """ diff --git a/emma_vis_libs/dataVisualiserCategorisedSections.py b/emma_vis_libs/dataVisualiserCategorisedSections.py index e2f4896..dcc0636 100644 --- a/emma_vis_libs/dataVisualiserCategorisedSections.py +++ b/emma_vis_libs/dataVisualiserCategorisedSections.py @@ -24,7 +24,7 @@ import pandas import matplotlib.pyplot -import pypiscout as sc +from pypiscout.SCout_Logger import Logger as sc from shared_libs.stringConstants import * # pylint: disable=unused-wildcard-import,wildcard-import import shared_libs.emma_helper @@ -58,18 +58,17 @@ def __categoriseImage(self): module is stored in the memory units. 
:return: A grouped dataframe containing the grouped image and modules """ - # Prepare image summary data self.imageData.reset_index() - self.imageData = self.imageData.drop([CATEGORY, ADDR_START_HEX, ADDR_END_HEX, SIZE_HEX, ADDR_END_DEC, VAS_NAME, DMA, MAPFILE, MODULE_NAME], 1) + self.imageData = self.imageData.drop([CATEGORY, ADDR_START_HEX, ADDR_END_HEX, SIZE_HEX, ADDR_END_DEC, VAS_NAME, DMA, MAPFILE, OBJECT_NAME], 1) self.imageData = self.imageData.groupby([CONFIG_ID, MEM_TYPE, SECTION_NAME]).sum() self.imageData = self.imageData.rename(index=str, columns={SIZE_DEC: SECTION_SIZE_BYTE}) self.imageData = self.imageData.reset_index() # Prepare module summary data self.moduleData = self.moduleData.reset_index() - self.moduleData = self.moduleData.drop([TAG, ADDR_START_DEC, ADDR_START_HEX, ADDR_END_HEX, SIZE_HEX, ADDR_END_DEC, VAS_NAME, DMA, MAPFILE], 1) - self.moduleData = self.moduleData.groupby([CONFIG_ID, MEM_TYPE, SECTION_NAME, MODULE_NAME, CATEGORY]).sum() + self.moduleData = self.moduleData.drop([MEM_TYPE_TAG, ADDR_START_DEC, ADDR_START_HEX, ADDR_END_HEX, SIZE_HEX, ADDR_END_DEC, VAS_NAME, DMA, MAPFILE], 1) + self.moduleData = self.moduleData.groupby([CONFIG_ID, MEM_TYPE, SECTION_NAME, OBJECT_NAME, CATEGORY]).sum() self.moduleData = self.moduleData.rename(index=str, columns={SIZE_DEC: MODULE_SIZE_BYTE}) self.moduleData = self.moduleData.reset_index() @@ -79,7 +78,7 @@ def __categoriseImage(self): on=[CONFIG_ID, MEM_TYPE, SECTION_NAME]) # Aggregate categorisedImage to desired form - categorisedImage = categorisedImage.groupby([CONFIG_ID, MEM_TYPE, SECTION_NAME, SECTION_SIZE_BYTE, CATEGORY, MODULE_NAME]).sum() + categorisedImage = categorisedImage.groupby([CONFIG_ID, MEM_TYPE, SECTION_NAME, SECTION_SIZE_BYTE, CATEGORY, OBJECT_NAME]).sum() return categorisedImage @@ -109,7 +108,7 @@ def __groupCategorisedImage(self): :return: The grouped dataframe """ groupedImage = self.__categorisedImage.reset_index() - groupedImage = groupedImage.groupby([CONFIG_ID, MEM_TYPE, 
CATEGORY, SECTION_NAME, SECTION_SIZE_BYTE, MODULE_NAME]).sum() + groupedImage = groupedImage.groupby([CONFIG_ID, MEM_TYPE, CATEGORY, SECTION_NAME, SECTION_SIZE_BYTE, OBJECT_NAME]).sum() return groupedImage def displayUsedByModulesInImage(self): @@ -184,7 +183,7 @@ def appendCategorisedImageToMarkdownOverview(self, markdownFilePath): :param markdownFilePath: The path of the Markdown file to which the data will be appended to. :return: nothing """ - sc.info("Appending module summary to overview...") + sc().info("Appending object summary to overview...") with open(markdownFilePath, 'a') as markdown: markdown.write("\n# Modules included in allocated Memory\n") @@ -198,16 +197,14 @@ def plotNdisplay(self, plotShow=True): :return: nothing """ - myfigure = self.displayUsedByModulesInImage() + myFigure = self.displayUsedByModulesInImage() filename = self.project + MEMORY_ESTIMATION_BY_MODULES_PICTURE_NAME_FIX_PART + self.statsTimestamp.replace(" ", "") + "." + MEMORY_ESTIMATION_PICTURE_FILE_EXTENSION - shared_libs.emma_helper.saveMatplotlibPicture(myfigure, shared_libs.emma_helper.joinPath(self.resultsPath, filename), MEMORY_ESTIMATION_PICTURE_FILE_EXTENSION, MEMORY_ESTIMATION_PICTURE_DPI, False) + shared_libs.emma_helper.saveMatplotlibPicture(myFigure, shared_libs.emma_helper.joinPath(self.resultsPath, filename), MEMORY_ESTIMATION_PICTURE_FILE_EXTENSION, MEMORY_ESTIMATION_PICTURE_DPI, False) if plotShow: matplotlib.pyplot.show() # Show plots after results in console output are shown - return - def categorisedImagetoCSV(self): """ Function to write the info from categorised image to file @@ -225,8 +222,8 @@ def createCategoriesSections(self): .reset_index()\ .drop([SECTION_SIZE_BYTE, MODULE_SIZE_BYTE], 1)\ .groupby([CONFIG_ID, MEM_TYPE, SECTION_NAME, CATEGORY])\ - .agg({MODULE_NAME: "count"}) - categoriesCSV.sort_values(MODULE_NAME, ascending=False, inplace=True) + .agg({OBJECT_NAME: "count"}) + categoriesCSV.sort_values(OBJECT_NAME, ascending=False, inplace=True) 
categoriesCSV = {k: list(categoriesCSV.ix[k].index) for k in categoriesCSV.index.levels[0]} # Extract desired data for section -> category matching diff --git a/emma_vis_libs/dataVisualiserMemoryMap.py b/emma_vis_libs/dataVisualiserMemoryMap.py index 2cf85a2..e3067a7 100644 --- a/emma_vis_libs/dataVisualiserMemoryMap.py +++ b/emma_vis_libs/dataVisualiserMemoryMap.py @@ -31,11 +31,10 @@ class MemoryMap(emma_vis_libs.dataVisualiser.Visualiser): - def __init__(self, projectPath, args, fileToUse, resultsPath): + def __init__(self, projectPath, fileToUse, resultsPath): super().__init__(fileToUse, resultsPath, projectPath) self.projectPath = projectPath self.project = os.path.split(projectPath)[-1] - self.args = args def plotPieChart(self, plotShow=True): data = emma_vis_libs.dataVisualiser.removeDataWithFlags(sourceData=self.data, rmContained=True, rmDuplicate=True) @@ -69,8 +68,6 @@ def plotPieChart(self, plotShow=True): filename = self.project + "_" + configID + "_" + memType + "_" + self.statsTimestamp + ".png" matplotlib.pyplot.savefig(shared_libs.emma_helper.joinPath(self.resultsPath, filename), dpi=MEMORY_ESTIMATION_PICTURE_DPI, transparent=False, bbox_inches="tight") - if plotShow: matplotlib.pyplot.show() - return diff --git a/emma_vis_libs/dataVisualiserObjects.py b/emma_vis_libs/dataVisualiserObjects.py index 1fb41cc..08f4437 100644 --- a/emma_vis_libs/dataVisualiserObjects.py +++ b/emma_vis_libs/dataVisualiserObjects.py @@ -24,7 +24,7 @@ import pandas import matplotlib.pyplot -import pypiscout as sc +from pypiscout.SCout_Logger import Logger as sc from shared_libs.stringConstants import * # pylint: disable=unused-wildcard-import,wildcard-import import shared_libs.emma_helper @@ -37,26 +37,25 @@ class ModuleConsumptionList(emma_vis_libs.dataVisualiser.Visualiser): does not have categories or the like they need to be added here. 
""" - def __init__(self, projectPath, args, fileToUse, resultsPath): + def __init__(self, projectPath, fileToUse, resultsPath): super().__init__(fileToUse, resultsPath, projectPath) self.projectPath = projectPath self.project = os.path.split(projectPath)[-1] - self.args = args self.consumptionByCategorisedModules = self.calcConsumptionByCategorisedModules() def printCategorisedModules(self): print(self.consumptionByCategorisedModules) def plotByCategorisedModules(self, plotShow=True): - myfigure = self.displayConsumptionByCategorisedModules(self.consumptionByCategorisedModules) + myFigure = self.displayConsumptionByCategorisedModules(self.consumptionByCategorisedModules) filename = self.project + MEMORY_ESTIMATION_PARTITION_OF_ALLOCATED_MEMORY_PICTURE_NAME_FIX_PART + self.statsTimestamp.replace(" ", "") + "." + MEMORY_ESTIMATION_PICTURE_FILE_EXTENSION - shared_libs.emma_helper.saveMatplotlibPicture(myfigure, os.path.join(self.resultsPath, filename), MEMORY_ESTIMATION_PICTURE_FILE_EXTENSION, MEMORY_ESTIMATION_PICTURE_DPI, False) + shared_libs.emma_helper.saveMatplotlibPicture(myFigure, os.path.join(self.resultsPath, filename), MEMORY_ESTIMATION_PICTURE_FILE_EXTENSION, MEMORY_ESTIMATION_PICTURE_DPI, False) if plotShow: matplotlib.pyplot.show() def calcConsumptionByCategorisedModules(self): """ - Calculate and group the module data by categoriy in percent + Calculate and group the module data by category in percent :return: dataframe of grouped memStats """ # Resolve the containment/overlap/duplicate flags @@ -148,7 +147,7 @@ def appendModuleConsumptionToMarkdownOverview(self, markdownFilePath): :return: nothing """ - sc.info("Appending module summary to overview...") + sc().info("Appending object summary to overview...") # TODO: This should be better explained (AGK) self.plotByCategorisedModules(plotShow=False) # Re-write .png to ensure up-to-date overview diff --git a/emma_vis_libs/dataVisualiserSections.py b/emma_vis_libs/dataVisualiserSections.py index 
4092294..8a981d7 100644 --- a/emma_vis_libs/dataVisualiserSections.py +++ b/emma_vis_libs/dataVisualiserSections.py @@ -34,11 +34,10 @@ class ImageConsumptionList(emma_vis_libs.dataVisualiser.Visualiser): Class holding the image data from .csv Memstats, plus methods for printing/plotting, file writing and .md/.html creation """ - def __init__(self, projectPath, args, fileToUse, resultsPath): + def __init__(self, projectPath, fileToUse, resultsPath): super().__init__(fileToUse, resultsPath, projectPath) self.projectPath = projectPath self.project = os.path.split(projectPath)[-1] - self.args = args self.consumptionByMemType = self.calcConsumptionByMemType() self.consumptionByMemTypeDetailed = self.calcConsumptionByMemTypeDetailed() self.consumptionByMemTypePerMap = self.calcConsumptionByMemTypePerMap() @@ -99,7 +98,7 @@ def groupDataByMemTypeDetailed(self, indices): groupedByMemType = groupedByMemType[[SIZE_DEC] + [self.header[i] for i in indices]] # Get only columns we need # Grouping - groupedByMemTypeAcc = groupedByMemType.groupby([CONFIG_ID, TAG]).sum() + groupedByMemTypeAcc = groupedByMemType.groupby([CONFIG_ID, MEM_TYPE_TAG]).sum() # Set formats and cast type pandas.options.display.float_format = '{:14,.0f}'.format @@ -146,7 +145,7 @@ def calcConsumptionByMemType(self): groupedByMemTypeAcc[self.headerAvailable] = (1 - groupedByMemTypeAcc[self.headerUsed]) * 100 # Check for negative value in next line for i in range(groupedByMemTypeAcc[self.headerAvailable].size): if groupedByMemTypeAcc[self.headerAvailable][i] < 0: - # Set available header to 0 if previously caluclated value is negative (hence no more memory left) + # Set available header to 0 if previously calculated value is negative (hence no more memory left) groupedByMemTypeAcc[self.headerAvailable][i] = 0 groupedByMemTypeAcc[self.headerUsed] *= 100 @@ -253,7 +252,6 @@ def plotByMemType(self, plotShow=True): myfigure = self.displayConsumptionByMemType() filename = self.project + 
MEMORY_ESTIMATION_BY_PERCENTAGES_PICTURE_NAME_FIX_PART + self.statsTimestamp.replace(" ", "") + "." + MEMORY_ESTIMATION_PICTURE_FILE_EXTENSION shared_libs.emma_helper.saveMatplotlibPicture(myfigure, os.path.join(self.resultsPath, filename), MEMORY_ESTIMATION_PICTURE_FILE_EXTENSION, MEMORY_ESTIMATION_PICTURE_DPI, False) - if plotShow: matplotlib.pyplot.show() # Show plots after results in console output are shown diff --git a/emma_vis_libs/helper.py b/emma_vis_libs/helper.py index 082e28e..9c37b2b 100644 --- a/emma_vis_libs/helper.py +++ b/emma_vis_libs/helper.py @@ -21,30 +21,30 @@ import os import sys -import pypiscout as sc +from pypiscout.SCout_Logger import Logger as sc import shared_libs.emma_helper import shared_libs.stringConstants -# TODO: Update the Docstring and rename the function to give it a name that explains the functionality more (AGK) -def getLastModFileOrPrompt(subStringIdentifier, args): +def getLastModFileOrPrompt(subStringIdentifier: str, inOutPath: str, quiet: bool, append: bool, noprompt: bool) -> str: """ - If args.quiet: Evaluates the file to use listing all files in "/MemStats", then matching + If quiet: Evaluates the file to use listing all files in "/MemStats", then matching the substring given in summaryTypes and returns the newest file matching the substring - :param subStringIdentifier: Substring the list of files in the memStats directory is mnatched - :param args: Command line arguments + :param subStringIdentifier: Substring the list of files in the memStats directory is matched + :param inOutPath: [string] + :param quiet: [bool] + :param append: [bool] + :param noprompt: [bool] :return: file name to use """ - fileToUse = None - path = shared_libs.emma_helper.joinPath(args.dir, args.subdir, shared_libs.stringConstants.OUTPUT_DIR) - lastModifiedFiles = shared_libs.emma_helper.lastModifiedFilesInDir(path, ".csv") # Latest file is last element + path = shared_libs.emma_helper.joinPath(inOutPath, shared_libs.stringConstants.OUTPUT_DIR) 
+ lastModifiedFiles = shared_libs.emma_helper.lastModifiedFilesInDir(path, ".csv") # Newest file is last element # Check if no files were found if len(lastModifiedFiles) < 1: - sc.error("No files in the specified directory:", os.path.abspath(path)) - sys.exit(-10) + sc().error("No files in the specified directory:", os.path.abspath(path)) # Get last modified file (we NOT ONLY need this for the quiet mode) # Backwards iterate over file list (so newest file will be first) @@ -55,30 +55,33 @@ def getLastModFileOrPrompt(subStringIdentifier, args): # Exit in first match which is the newest file as we are backwards iterating break - if args.quiet: + if quiet: # Just use the last found file (we did this before) pass - elif not args.append: + elif noprompt: + sc().warning("No prompt is active. Using last modified file named:", fileToUse) + pass + elif not append: # If nothing specified AND append mode is OFF ask which file to use - sc.info("Last modified file:") + sc().info("Last modified file:") print("\t" + fileToUse) - sc.info("`y` to accept; otherwise specify an absolute path (without quotes)") + sc().info("`y` to accept; otherwise specify an absolute path (without quotes)") while True: - text = input("> ") if not args.noprompt else sys.exit(-10) + text = input("> ") + if text == "y": break if text is not None and text != "" and os.path.isfile(text) and text.endswith(".csv"): fileToUse = text break else: - sc.warning("Invalid input.") + sc().warning("Invalid input.") # Check if the fixed file name portions are within the found file name if shared_libs.stringConstants.FILE_IDENTIFIER_OBJECT_SUMMARY is shared_libs.emma_helper.evalSummary(lastModifiedFiles[-1]): - sc.warning("Last modified file is a " + shared_libs.stringConstants.FILE_IDENTIFIER_OBJECT_SUMMARY + "\n") + sc().warning("Last modified file is a " + shared_libs.stringConstants.FILE_IDENTIFIER_OBJECT_SUMMARY + "\n") if fileToUse is None: - sc.error("No file containing '" + subStringIdentifier + "' found in "
+ path) - sys.exit(-10) + sc().error("No file containing '" + subStringIdentifier + "' found in " + path) return fileToUse diff --git a/genDoc/_genCallGraphs.py b/genDoc/_genCallGraphs.py index 806b6c6..b84c2ba 100644 --- a/genDoc/_genCallGraphs.py +++ b/genDoc/_genCallGraphs.py @@ -19,12 +19,12 @@ import sys import os -import argparse import pstats import subprocess -import pypiscout as sc -import gprof2dot # Not directly used, but later we do a sys-call wich needs the library. This is needed to inform the user to install the package. +from pypiscout.SCout_Logger import Logger as sc +import gprof2dot # pylint: disable=unused-import + # Rationale: Not directly used, but later we do a sys-call which needs the library. This is needed to inform the user to install the package. from shared_libs.stringConstants import * # pylint: disable=unused-wildcard-import,wildcard-import import shared_libs.emma_helper @@ -32,38 +32,17 @@ sys.path.append("..") -def ParseArguments(): - """ - Argument parser - :return: argparse object containing the parsed options - """ - parser = argparse.ArgumentParser( - prog="Emma - Call graph generator", - description="Script to generate call graphs that can be used in the documentation or to examine the run of Emma and the Emma Visualiser.", - formatter_class=argparse.ArgumentDefaultsHelpFormatter - ) - parser.add_argument( - "--graphviz_bin_folder", - help=r"The bin subfolder of the Graphviz software. Example: c:\Program Files (x86)\Graphviz2.38\bin", - required=False - ) - parser.add_argument( - "--verbose", - help="Prints out more info during run.", - default=False - ) - return parser.parse_args() - - -EmmaRootFolderRelative = r".."
-FilteredProfileSuffix = r"_filtered.profile" -EmmaExecutionString = r"..\emma.py --project ..\doc\test_project --mapfiles ..\doc\test_project\mapfiles --dir ..\doc\test_project\results" -EmmaProfileFile = README_CALL_GRAPH_AND_UML_FOLDER_NAME + r"\emma.profile" -EmmaVisualiserExecutionString = r"..\emma_vis.py --project ..\doc\test_project --dir ..\doc\test_project\results --overview --quiet" -EmmaVisualiserProfileFile = README_CALL_GRAPH_AND_UML_FOLDER_NAME + r"\emma_vis.profile" - - -class ProfilerFilter: +EMMA_ROOT_FOLDER_RELATIVE = r".." +FILTERED_PROFILE_SUFFIX = r"_filtered.profile" +EMMA_EXECUTION_STRING = r"../emma.py --project ../doc/test_project --mapfiles ../doc/test_project/mapfiles --dir ../doc/test_project/results" +EMMA_PROFILE_FILE_PATH = README_CALL_GRAPH_AND_UML_FOLDER_NAME + r"\emma.profile" +EMMA_VIS_EXECUTION_STRING = r"../emma_vis.py --projectDir ../doc/test_project --inOutDir ../doc/test_project/results --overview --quiet" +EMMA_VIS_PROFILE_FILE_PATH = README_CALL_GRAPH_AND_UML_FOLDER_NAME + r"\emma_vis.profile" + + +class ProfilerFilter: # pylint: disable=too-few-public-methods + # Rationale: There is no other functionality that shall be realized by this class. There is no need for more public methods. + """ This is a filtering class for the profiler data of the Lib/pstats.py library. 
The typical use case is when a after profiling a python code (with the Lib/profile.py library) the output data contains a lot of calls from our code to @@ -101,109 +80,125 @@ class ProfilerFilter: 3 - Call the filterProfilerData with the profiler data """ - def __init__(self, root_folder, list_of_filters=None): - if os.path.isdir(root_folder): - self.root_folder = root_folder - self.source_file_list = [] - if None is list_of_filters: + def __init__(self, rootFolder, listOfFilters=None): + if os.path.isdir(rootFolder): + self.rootFolder = rootFolder + self.sourceFileList = [] + if None is listOfFilters: self.__collectSourceFiles() - elif all(isinstance(element, str) for element in list_of_filters): - self.source_file_list = list_of_filters + elif all(isinstance(element, str) for element in listOfFilters): + self.sourceFileList = listOfFilters else: raise ValueError("The list_of_filters needs to be a list of strings or set to None for a default filter based on the root_folder argument!") else: raise ValueError("The root_folder parameter must be a valid path to the project root folder!") def __collectSourceFiles(self): - for root, directories, files in os.walk(self.root_folder): + for root, _, files in os.walk(self.rootFolder): for file in files: if os.path.splitext(file)[1] == ".py": - self.source_file_list.append(shared_libs.emma_helper.joinPath(root, file)) + self.sourceFileList.append(shared_libs.emma_helper.joinPath(root, file)) - def __isThisOurCode(self, file_name): + def __isThisOurCode(self, fileName): result = False - for source_file in self.source_file_list: - if -1 != file_name.find(source_file): + for sourceFile in self.sourceFileList: + if -1 != fileName.find(sourceFile): result = True break return result def filterProfilerData(self, tree): - # Initialization of variables - do_not_cut_this_off = False # Reason is that we can get a tree that is empty and in that case the variable would be uninitialized - list_of_keys = list(tree.keys()) # This needs to be 
done like this because the keys() method returns an iterator not a list - for key in list_of_keys: - this_is_our_code = self.__isThisOurCode(key[0]) - we_have_code_below_this = False + """ + Function to filter the profile data so that it contains only code that is ours or is called by our code. + This function is recursive; its return value does not have any meaning for the external caller. + :param tree: Profile tree that needs to be filtered. + :return: True if this element shall not be cut off, False otherwise. + """ + doNotCutThisOff = False # Reason is that we can get a tree that is empty and in that case the variable would be uninitialized + listOfKeys = list(tree.keys()) # This needs to be done like this because the keys() method returns an iterator not a list + + for key in listOfKeys: + thisIsOurCode = self.__isThisOurCode(key[0]) + weHaveCodeBelowThis = False value = tree.get(key) for element in value: - if type(element) is dict: + if isinstance(element, dict): if self.filterProfilerData(element): - we_have_code_below_this |= True - do_not_cut_this_off = this_is_our_code or we_have_code_below_this - if not do_not_cut_this_off: + weHaveCodeBelowThis |= True + doNotCutThisOff = thisIsOurCode or weHaveCodeBelowThis + if not doNotCutThisOff: del tree[key] - return do_not_cut_this_off + return doNotCutThisOff -def generateCallGraph(profile_file, execution_string, verbose): - sc.info("Generating call graphs for: " + execution_string) - sc.info("The results will be stored in: " + shared_libs.emma_helper.joinPath(os.getcwd(), README_CALL_GRAPH_AND_UML_FOLDER_NAME)) - sc.info("Analyzing the program and creating the .profile file...") - subprocess.run("python -m cProfile -o " + profile_file + " " + execution_string) +def generateCallGraph(profileFile, executionString, verbose): + """ + Function to generate call graphs for an executable. + :param profileFile: This is the path where the profile file will be stored.
+ :param executionString: This is the string that will be given to subprocess.run to be executed. + :param verbose: If True, then more info will be printed during execution. + :return: None + """ - profiler_data = pstats.Stats(profile_file) - profiler_data.sort_stats(pstats.SortKey.CUMULATIVE) + sc().info("Generating call graphs for: " + executionString) + sc().info("The results will be stored in: " + shared_libs.emma_helper.joinPath(os.getcwd(), README_CALL_GRAPH_AND_UML_FOLDER_NAME)) + + sc().info("Analysing the program and creating the .profile file...\n") + subprocess.run("python -m cProfile -o " + profileFile + " " + executionString, shell=True) + + profilerData = pstats.Stats(profileFile) + profilerData.sort_stats(pstats.SortKey.CUMULATIVE) if verbose: - sc.info("The content of the profile file:") - profiler_data.print_stats() + sc().info("The content of the profile file:") + profilerData.print_stats() - sc.info("Filtering the profiler data...") - profiler_filter = ProfilerFilter(EmmaRootFolderRelative) - profiler_filter.filterProfilerData(profiler_data.stats) + sc().info("Filtering the profiler data...") + profilerFilter = ProfilerFilter(EMMA_ROOT_FOLDER_RELATIVE) + profilerFilter.filterProfilerData(profilerData.stats) - filtered_profile_file = os.path.splitext(profile_file)[0] + FilteredProfileSuffix - sc.info("Storing the filtered profile file to:", filtered_profile_file) - profiler_data.dump_stats(filtered_profile_file) + filteredProfileFile = os.path.splitext(profileFile)[0] + FILTERED_PROFILE_SUFFIX + sc().info("Storing the filtered profile file to:", filteredProfileFile) + profilerData.dump_stats(filteredProfileFile) - sc.info("Creating the .dot file from the .profile file...") - subprocess.run("gprof2dot -f pstats " + profile_file + " -o " + profile_file + ".dot") + sc().info("Creating the .dot file from the .profile file...") + subprocess.run("gprof2dot -f pstats " + profileFile + " -o " + profileFile + ".dot", shell=True) - sc.info("Creating the 
.dot file from the filtered .profile file...") - subprocess.run("gprof2dot -f pstats " + filtered_profile_file + " -o " + filtered_profile_file + ".dot") + sc().info("Creating the .dot file from the filtered .profile file...") + subprocess.run("gprof2dot -f pstats " + filteredProfileFile + " -o " + filteredProfileFile + ".dot", shell=True) - sc.info("Creating the .png file from the .dot file...") - subprocess.run("dot -T" + README_PICTURE_FORMAT + " -Gdpi=" + str(DPI_DOCUMENTATION) + " " + profile_file + ".dot -o" + profile_file + "." + README_PICTURE_FORMAT) + sc().info("Creating the .png file from the .dot file...") + subprocess.run("dot -T" + README_PICTURE_FORMAT + " -Gdpi=" + str(DPI_DOCUMENTATION) + " " + profileFile + ".dot -o" + profileFile + "." + README_PICTURE_FORMAT, shell=True) - sc.info("Creating the .png file from the filtered .dot file...") - subprocess.run("dot -T" + README_PICTURE_FORMAT + " -Gdpi=" + str(DPI_DOCUMENTATION) + " " + filtered_profile_file + ".dot -o" + filtered_profile_file + "." + README_PICTURE_FORMAT) + sc().info("Creating the .png file from the filtered .dot file...") + subprocess.run("dot -T" + README_PICTURE_FORMAT + " -Gdpi=" + str(DPI_DOCUMENTATION) + " " + filteredProfileFile + ".dot -o" + filteredProfileFile + "." + README_PICTURE_FORMAT, shell=True) print("") def main(arguments): + """ + Main function of the script. + :param arguments: Dictionary that contains the arguments that influence the execution. + Currently available arguments:
+ - verbose : Extra info will be printed during execution. + :return: None + """ # Store original path variables - path_old_value = os.environ["PATH"] - if not("Graphviz" in os.environ["PATH"]): - graphviz_bin_abspath = os.path.abspath(arguments.graphviz_bin_folder) + pathOldValue = os.environ["PATH"] + if "Graphviz" not in os.environ["PATH"]: + graphvizBinAbspath = os.path.abspath(arguments.graphviz_bin_folder) # Add to path - os.environ["PATH"] += (graphviz_bin_abspath + ";") + os.environ["PATH"] += (graphvizBinAbspath + ";") try: - generateCallGraph(EmmaProfileFile, EmmaExecutionString, arguments.verbose) - generateCallGraph(EmmaVisualiserProfileFile, EmmaVisualiserExecutionString, arguments.verbose) + generateCallGraph(EMMA_PROFILE_FILE_PATH, EMMA_EXECUTION_STRING, arguments.verbose) + generateCallGraph(EMMA_VIS_PROFILE_FILE_PATH, EMMA_VIS_EXECUTION_STRING, arguments.verbose) - except Exception as e: - sc.error("An exception was caught:", e) + except Exception as exception: # pylint: disable=broad-except + # Rationale: We are not trying to catch a specific exception type here. + # The purpose of this is that the PATH environment variable will be set back in case of an error.
+ sc().error("An exception was caught:", exception) # Get back initial path config - os.environ["PATH"] = path_old_value - - -if __name__ == "__main__": - arguments = ParseArguments() - if not os.path.isdir(README_CALL_GRAPH_AND_UML_FOLDER_NAME): - sc.info("The folder \"" + README_CALL_GRAPH_AND_UML_FOLDER_NAME + "\" was created because it did not exist...") - os.makedirs(README_CALL_GRAPH_AND_UML_FOLDER_NAME) - main(arguments) + os.environ["PATH"] = pathOldValue diff --git a/genDoc/_genUmlDiagrams.py b/genDoc/_genUmlDiagrams.py index 1060923..fb220ff 100644 --- a/genDoc/_genUmlDiagrams.py +++ b/genDoc/_genUmlDiagrams.py @@ -18,20 +18,28 @@ import os -import argparse import subprocess -import pypiscout as sc -import gprof2dot # Not directly used; later we do a sys-call wich needs the library +from pypiscout.SCout_Logger import Logger as sc +import gprof2dot # pylint: disable=unused-import + # Rationale: Not directly used, but later we do a sys-call which needs the library. This is needed to inform the user to install the package.
from shared_libs.stringConstants import * # pylint: disable=unused-wildcard-import,wildcard-import -list_of_source_file_paths = [ # "../../*" instead of "../*" since we change the working directory within the system call - "../../emma_libs/mapfileRegexes.py", +LIST_OF_SOURCE_FILE_PATHS = [ # "../../*" instead of "../*" since we change the working directory within the system call + "../../emma_libs/categorisation.py", + "../../emma_libs/configuration.py", + "../../emma_libs/ghsConfiguration.py", + "../../emma_libs/ghsMapfileProcessor.py", + "../../emma_libs/ghsMapfileRegexes.py", + "../../emma_libs/mapfileProcessor.py", + "../../emma_libs/mapfileProcessorFactory.py", "../../emma_libs/memoryEntry.py", "../../emma_libs/memoryManager.py", "../../emma_libs/memoryMap.py", + "../../emma_libs/specificConfiguration.py", + "../../emma_libs/specificConfigurationFactory.py", "../../emma_delta_libs/Delta.py", "../../emma_delta_libs/FilePresenter.py", "../../emma_delta_libs/FileSelector.py", @@ -49,42 +57,11 @@ ] -def ParseArguments(): - """ - Argument parser - :return: argparse object containing the parsed options - """ - parser = argparse.ArgumentParser( - prog="Emma - Call graph generator", - description="Script to generate call graphs that can be used in the documentation or to examine the run of Emma and the Emma visualiser.", - formatter_class=argparse.ArgumentDefaultsHelpFormatter - ) - parser.add_argument( - "--graphviz_bin_folder", - help=r"The bin subfolder of the Graphviz software. 
Example: c:\Program Files (x86)\Graphviz2.38\bin", - required=True - ) - parser.add_argument( - "--verbose", - help="Prints out more info during run.", - default=False - ) - return parser.parse_args() - - -def main(arguments): - sc.info("Generating UML Class diagrams from the source files...") - for source_file_path in list_of_source_file_paths: - source_file_name = os.path.splitext(os.path.basename(source_file_path))[0] - subprocess.run("pyreverse -AS -o " + README_PICTURE_FORMAT + " " + source_file_path + " -p " + source_file_name, cwd=README_CALL_GRAPH_AND_UML_FOLDER_NAME, shell=True) - # Note that pyreverse must be called via subprocess (do NOT import it as a module) +def main(): + sc().info("Generating UML Class diagrams from the source files...") + for sourceFilePath in LIST_OF_SOURCE_FILE_PATHS: + sourceFileName = os.path.splitext(os.path.basename(sourceFilePath))[0] + subprocess.run("pyreverse -AS -o " + README_PICTURE_FORMAT + " " + sourceFilePath + " -p " + sourceFileName, cwd=README_CALL_GRAPH_AND_UML_FOLDER_NAME, shell=True) + # Note that pyreverse MUST be called via subprocess (do NOT import it as a module) # The main reason are licencing issues (GPLv2 is incompatible with GPLv3) (https://softwareengineering.stackexchange.com/questions/110380/call-gpl-software-from-non-gpl-software) # See also: https://github.com/TeamFlowerPower/kb/wiki/callGraphsUMLdiagrams - - -if __name__ == "__main__": - arguments = ParseArguments() - if not os.path.isdir(README_CALL_GRAPH_AND_UML_FOLDER_NAME): - sc.info("The folder \"" + README_CALL_GRAPH_AND_UML_FOLDER_NAME + "\" was created because it did not exist...") - os.makedirs(README_CALL_GRAPH_AND_UML_FOLDER_NAME) - main(arguments) diff --git a/genDoc/call_graph_uml/classes_Delta.png b/genDoc/call_graph_uml/classes_Delta.png index 7d7d0e0..5c7fbea 100644 Binary files a/genDoc/call_graph_uml/classes_Delta.png and b/genDoc/call_graph_uml/classes_Delta.png differ diff --git a/genDoc/call_graph_uml/classes_categorisation.png 
b/genDoc/call_graph_uml/classes_categorisation.png new file mode 100644 index 0000000..3464f0f Binary files /dev/null and b/genDoc/call_graph_uml/classes_categorisation.png differ diff --git a/genDoc/call_graph_uml/classes_configuration.png b/genDoc/call_graph_uml/classes_configuration.png new file mode 100644 index 0000000..320df2b Binary files /dev/null and b/genDoc/call_graph_uml/classes_configuration.png differ diff --git a/genDoc/call_graph_uml/classes_dataVisualiser.png b/genDoc/call_graph_uml/classes_dataVisualiser.png index 51dfc6a..a227835 100644 Binary files a/genDoc/call_graph_uml/classes_dataVisualiser.png and b/genDoc/call_graph_uml/classes_dataVisualiser.png differ diff --git a/genDoc/call_graph_uml/classes_dataVisualiserMemoryMap.png b/genDoc/call_graph_uml/classes_dataVisualiserMemoryMap.png index a8cee43..1736296 100644 Binary files a/genDoc/call_graph_uml/classes_dataVisualiserMemoryMap.png and b/genDoc/call_graph_uml/classes_dataVisualiserMemoryMap.png differ diff --git a/genDoc/call_graph_uml/classes_dataVisualiserObjects.png b/genDoc/call_graph_uml/classes_dataVisualiserObjects.png index c3054c2..57ab2eb 100644 Binary files a/genDoc/call_graph_uml/classes_dataVisualiserObjects.png and b/genDoc/call_graph_uml/classes_dataVisualiserObjects.png differ diff --git a/genDoc/call_graph_uml/classes_dataVisualiserSections.png b/genDoc/call_graph_uml/classes_dataVisualiserSections.png index bfcb7fe..c0229c3 100644 Binary files a/genDoc/call_graph_uml/classes_dataVisualiserSections.png and b/genDoc/call_graph_uml/classes_dataVisualiserSections.png differ diff --git a/genDoc/call_graph_uml/classes_ghsConfiguration.png b/genDoc/call_graph_uml/classes_ghsConfiguration.png new file mode 100644 index 0000000..77a00e3 Binary files /dev/null and b/genDoc/call_graph_uml/classes_ghsConfiguration.png differ diff --git a/genDoc/call_graph_uml/classes_ghsMapfileProcessor.png b/genDoc/call_graph_uml/classes_ghsMapfileProcessor.png new file mode 100644 index 
0000000..2cf78d9 Binary files /dev/null and b/genDoc/call_graph_uml/classes_ghsMapfileProcessor.png differ diff --git a/genDoc/call_graph_uml/classes_ghsMapfileRegexes.png b/genDoc/call_graph_uml/classes_ghsMapfileRegexes.png new file mode 100644 index 0000000..25b6347 Binary files /dev/null and b/genDoc/call_graph_uml/classes_ghsMapfileRegexes.png differ diff --git a/genDoc/call_graph_uml/classes_mapfileProcessor.png b/genDoc/call_graph_uml/classes_mapfileProcessor.png new file mode 100644 index 0000000..76bcfbb Binary files /dev/null and b/genDoc/call_graph_uml/classes_mapfileProcessor.png differ diff --git a/genDoc/call_graph_uml/classes_mapfileProcessorFactory.png b/genDoc/call_graph_uml/classes_mapfileProcessorFactory.png new file mode 100644 index 0000000..bf75304 Binary files /dev/null and b/genDoc/call_graph_uml/classes_mapfileProcessorFactory.png differ diff --git a/genDoc/call_graph_uml/classes_mapfileRegexes.png b/genDoc/call_graph_uml/classes_mapfileRegexes.png deleted file mode 100644 index f2dc861..0000000 Binary files a/genDoc/call_graph_uml/classes_mapfileRegexes.png and /dev/null differ diff --git a/genDoc/call_graph_uml/classes_memoryEntry.png b/genDoc/call_graph_uml/classes_memoryEntry.png index 4ef2e55..24dca1d 100644 Binary files a/genDoc/call_graph_uml/classes_memoryEntry.png and b/genDoc/call_graph_uml/classes_memoryEntry.png differ diff --git a/genDoc/call_graph_uml/classes_memoryManager.png b/genDoc/call_graph_uml/classes_memoryManager.png index 80caa0e..5ebfb5b 100644 Binary files a/genDoc/call_graph_uml/classes_memoryManager.png and b/genDoc/call_graph_uml/classes_memoryManager.png differ diff --git a/genDoc/call_graph_uml/classes_specificConfiguration.png b/genDoc/call_graph_uml/classes_specificConfiguration.png new file mode 100644 index 0000000..13a02aa Binary files /dev/null and b/genDoc/call_graph_uml/classes_specificConfiguration.png differ diff --git a/genDoc/call_graph_uml/classes_specificConfigurationFactory.png 
b/genDoc/call_graph_uml/classes_specificConfigurationFactory.png new file mode 100644 index 0000000..bf75304 Binary files /dev/null and b/genDoc/call_graph_uml/classes_specificConfigurationFactory.png differ diff --git a/genDoc/call_graph_uml/emma.profile.png b/genDoc/call_graph_uml/emma.profile.png index e69de29..56bebb6 100644 Binary files a/genDoc/call_graph_uml/emma.profile.png and b/genDoc/call_graph_uml/emma.profile.png differ diff --git a/genDoc/call_graph_uml/emma_filtered.profile.png b/genDoc/call_graph_uml/emma_filtered.profile.png index 53d174b..06426f0 100644 Binary files a/genDoc/call_graph_uml/emma_filtered.profile.png and b/genDoc/call_graph_uml/emma_filtered.profile.png differ diff --git a/genDoc/call_graph_uml/emma_vis.profile.png b/genDoc/call_graph_uml/emma_vis.profile.png index e69de29..d8604de 100644 Binary files a/genDoc/call_graph_uml/emma_vis.profile.png and b/genDoc/call_graph_uml/emma_vis.profile.png differ diff --git a/genDoc/call_graph_uml/emma_vis_filtered.profile.png b/genDoc/call_graph_uml/emma_vis_filtered.profile.png index 0504303..4a22d04 100644 Binary files a/genDoc/call_graph_uml/emma_vis_filtered.profile.png and b/genDoc/call_graph_uml/emma_vis_filtered.profile.png differ diff --git a/genDoc/genReadmeHtmlFromMd.py b/genDoc/genReadmeHtmlFromMd.py index e750e42..5114473 100644 --- a/genDoc/genReadmeHtmlFromMd.py +++ b/genDoc/genReadmeHtmlFromMd.py @@ -21,10 +21,14 @@ import os import argparse -import pypiscout as sc -import gprof2dot # Not directly used, but later we do a sys-call wich needs the library. This is needed to inform the user to install the package. +from pypiscout.SCout_Logger import Logger as sc +import gprof2dot # pylint: disable=unused-import + # Rationale: Not directly used, but later we do a sys-call which needs the library. This is needed to inform the user to install the package. 
sys.path.append("../") +# pylint: disable=wrong-import-position +# Rationale: This module needs to access modules that are above them in the folder structure. + from shared_libs.stringConstants import * # pylint: disable=unused-wildcard-import,wildcard-import import shared_libs.emma_helper import genDoc._genCallGraphs @@ -61,69 +65,80 @@ def ParseArguments(): def main(arguments): - sc.header("Generating the Readme documents", symbol="/") + """ + Main function. + :param arguments: Processed command line arguments. + :return: None + """ + sc(invVerbosity=-1, actionWarning=lambda: sys.exit(-10), actionError=lambda: sys.exit(-10)) + + sc().header("Generating the Readme documents", symbol="/") # Give a hint on python sys-call - sc.info("A `python` system call is going to happen. If any errors occur please check the following first:") + sc().info("A `python` system call is going to happen. If any errors occur please check the following first:") if sys.platform == "win32": - sc.info("Windows OS detected. Make sure `python` refers to the Python3 version targeted for this application (-> dependencies; e.g. WSL comes with its own Python).") + sc().info("Windows OS detected. Make sure `python` refers to the Python3 version targeted for this application (-> dependencies; e.g. 
WSL comes with its own Python).") + else: - sc.info("Make sure `python` refers to a Python 3 installation.") + sc().info("Make sure `python` refers to a Python 3 installation.") # Store original path variables - path_old_value = os.environ["PATH"] + pathOldValue = os.environ["PATH"] if not("Graphviz" in os.environ["PATH"] or "graphviz" in os.environ["PATH"]): if arguments.graphviz_bin_folder is not None: - graphviz_bin_abspath = os.path.abspath(arguments.graphviz_bin_folder) + graphvizBinAbspath = os.path.abspath(arguments.graphviz_bin_folder) # Add to path - os.environ["PATH"] += (graphviz_bin_abspath + ";") + os.environ["PATH"] += (graphvizBinAbspath + ";") else: - sc.error("The \"graphviz_bin_folder\" was not found in PATH nor was given in the argument --graphviz_bin_folder") - sys.exit(-1) + sc().error("The \"graphviz_bin_folder\" was not found in PATH nor was given in the argument --graphviz_bin_folder") try: if not os.path.isdir(README_CALL_GRAPH_AND_UML_FOLDER_NAME): - sc.info("The folder \"" + README_CALL_GRAPH_AND_UML_FOLDER_NAME + "\" was created because it did not exist...") + sc().info("The folder \"" + README_CALL_GRAPH_AND_UML_FOLDER_NAME + "\" was created because it did not exist...") os.makedirs(README_CALL_GRAPH_AND_UML_FOLDER_NAME) if not arguments.no_graphs: + # pylint: disable=protected-access + # Rationale: These modules are private so that the users will not use them directly. They are meant to be used through this script. 
genDoc._genCallGraphs.main(arguments) - genDoc._genUmlDiagrams.main(arguments) - - print("") - sc.info("Storing Emma readme as a .html file...") - markdown_file_path = r"../doc/readme.md" - shared_libs.emma_helper.convertMarkdownFileToHtmlFile(markdown_file_path, (os.path.splitext(markdown_file_path)[0] + ".html")) - sc.info("Done.") - - print("") - sc.info("Storing Emma Visualiser readme as a .html file...") - markdown_file_path = r"../doc/readme-vis.md" - shared_libs.emma_helper.convertMarkdownFileToHtmlFile(markdown_file_path, (os.path.splitext(markdown_file_path)[0] + ".html")) - sc.info("Done.") - - print("") - sc.info("Storing the test_project readme as a .html file...") - markdown_file_path = r"../doc/test_project/readme/readme.md" - shared_libs.emma_helper.convertMarkdownFileToHtmlFile(markdown_file_path, (os.path.splitext(markdown_file_path)[0] + ".html")) - sc.info("Done.") - - print("") - sc.info("Storing the top level README as a .html file...") + genDoc._genUmlDiagrams.main() + + sc().info("Storing Emma readme as a .html file...") + markdownFilePath = r"../doc/readme.md" + shared_libs.emma_helper.convertMarkdownFileToHtmlFile(shared_libs.emma_helper.joinPath(os.path.dirname(__file__), markdownFilePath), (os.path.splitext(markdownFilePath)[0] + ".html")) + sc().info("Done.\n") + + sc().info("Storing Emma Visualiser readme as a .html file...") + markdownFilePath = r"../doc/readme-vis.md" + shared_libs.emma_helper.convertMarkdownFileToHtmlFile(shared_libs.emma_helper.joinPath(os.path.dirname(__file__), markdownFilePath), (os.path.splitext(markdownFilePath)[0] + ".html")) + sc().info("Done.\n") + + sc().info("Storing Emma contribution as a .html file...") + markdownFilePath = r"../doc/contribution.md" + shared_libs.emma_helper.convertMarkdownFileToHtmlFile(shared_libs.emma_helper.joinPath(os.path.dirname(__file__), markdownFilePath), (os.path.splitext(markdownFilePath)[0] + ".html")) + sc().info("Done.\n") + + sc().info("Storing the test_project readme as 
a .html file...") + markdownFilePath = r"../doc/test_project/readme.md" + shared_libs.emma_helper.convertMarkdownFileToHtmlFile(shared_libs.emma_helper.joinPath(os.path.dirname(__file__), markdownFilePath), (os.path.splitext(markdownFilePath)[0] + ".html")) + sc().info("Done.\n") + + + sc().info("Storing the top level README as a .html file...") # Change the working directory; otherwise we get errors about the relative image import paths in emma_helper.changePictureLinksToEmbeddingInHtmlData() os.chdir("..") - markdown_file_path = r"README.md" - shared_libs.emma_helper.convertMarkdownFileToHtmlFile(markdown_file_path, (os.path.splitext(markdown_file_path)[0] + ".html")) - sc.info("Done.") + markdownFilePath = r"../README.md" + shared_libs.emma_helper.convertMarkdownFileToHtmlFile(shared_libs.emma_helper.joinPath(os.path.dirname(__file__), markdownFilePath), (os.path.splitext(markdownFilePath)[0] + ".html")) + sc().info("Done.") os.chdir("doc") # Change working directory back - except Exception as e: - sc.error("An exception was caught:", e) + except Exception as exception: # pylint: disable=broad-except + # Rationale: We are not trying to catch a specific exception type here. + # The purpose of this is, that the PATH environment variable will be set back in case of an error. + sc().error("An exception was caught:", exception) # Get back initial path config - os.environ["PATH"] = path_old_value + os.environ["PATH"] = pathOldValue -if "__main__" == __name__: - arguments = ParseArguments() - main(arguments) +if __name__ == "__main__": + main(ParseArguments()) diff --git a/shared_libs/__init__.py b/shared_libs/__init__.py index 8b15f92..5e32dae 100644 --- a/shared_libs/__init__.py +++ b/shared_libs/__init__.py @@ -15,4 +15,3 @@ You should have received a copy of the GNU General Public License along with this program. 
If not, see """ - diff --git a/shared_libs/emma_helper.py b/shared_libs/emma_helper.py index 1636870..7c5aa1b 100644 --- a/shared_libs/emma_helper.py +++ b/shared_libs/emma_helper.py @@ -19,14 +19,14 @@ # Emma Memory and Mapfile Analyser - helpers -import sys import os import re import json import hashlib import base64 -import pypiscout as sc +from pypiscout.SCout_Logger import Logger as sc + import markdown import markdown.extensions.codehilite import markdown.extensions.fenced_code @@ -42,8 +42,7 @@ def checkIfFolderExists(folderName): :param folderName: Project to check """ if not os.path.isdir(folderName): - sc.error("Given directory (" + os.path.abspath(folderName) + ") does not exist; exiting...") - sys.exit(-10) + sc().error("Given directory (" + os.path.abspath(folderName) + ") does not exist; exiting...") def checkIfFileExists(filePath): @@ -52,8 +51,7 @@ def checkIfFileExists(filePath): :param filePath: File path to check """ if not os.path.exists(filePath): - sc.error("Given file (" + filePath + ") does not exist; exiting...") - sys.exit(-10) + sc().error("Given file (" + filePath + ") does not exist; exiting...") def mkDirIfNeeded(path): @@ -63,7 +61,7 @@ def mkDirIfNeeded(path): """ if not os.path.isdir(path): os.makedirs(path) - sc.info("Directory " + path + " created since not present") + sc().info("Directory " + path + " created since not present") def readJson(jsonInFilePath): @@ -92,29 +90,33 @@ def unifyAddress(address): :param address: hex or dec address :return: [addressHex, addressDec) """ - if type(address) == str and address is not None: + if isinstance(address, str) and address is not None: address = int(address, 16) addressHex = hex(address) - elif type(address) == int and address is not None: + elif isinstance(address, int) and address is not None: addressHex = hex(address) else: - sc.error("Address must be either of type int or str") + sc().error("unifyAddress(): Address must be either of type int or str!") raise TypeError return 
addressHex, address def getTimestampFromFilename(filename): """ - Get the timestamp from the summary - :param filename: summary filename in ./memstats - :return: The timestamp in string form + Get the timestamp from a report file name. + :param filename: Name of the report file. + :return: The timestamp in string form if found in the filename, else None. """ + result = None + pattern = re.compile(r"\d{4}-\d{2}-\d{2}-\d{2}h\d{2}s\d{2}") # Matches timestamps of the following format: `2017-11-06-14h56s52` match = re.search(pattern, filename) if match: - return match.group() + result = match.group() else: - sc.error("Could not match the given filename:", filename) + sc().error("Could not match the given filename:", filename) + + return result def getColourValFromString(inputString): @@ -134,28 +136,39 @@ def lastModifiedFilesInDir(path, extension): :param extension: Only files with a specified extension are included :return: Sorted list of modified files """ - directory = os.listdir(path) - fileTimestamps = [] + result = [] + + if os.path.isdir(path): + directory = os.listdir(path) + fileTimestamps = [] - for file in directory: - file = joinPath(path, file) - if os.path.isfile(file) and file.endswith(extension): - time = os.path.getmtime(file) - fileTimestamps.append([time, file]) + for file in directory: + file = joinPath(path, file) - return [item[1] for item in sorted(fileTimestamps)] # python sorts always by first element for nested lists; we only need the last element (last change) and only its filename (>> [1]) + if os.path.isfile(file) and file.endswith(extension): + time = os.path.getmtime(file) + fileTimestamps.append([time, file]) + + # python sorts always by first element for nested lists; we only need the last element (last change) and only its filename (>> [1]) + result = [item[1] for item in sorted(fileTimestamps)] + + return result def evalSummary(filename): """ - Function to check whether current memStats file is image or module summary - :param 
filename: Filename to check - :return: "Image_Summary" or "Module_Summary" + Function to check whether current report file is section or object summary. + :param filename: Filename of a report. + :return: FILE_IDENTIFIER_SECTION_SUMMARY if it's a section report, FILE_IDENTIFIER_OBJECT_SUMMARY if it's an object report, else None. """ + result = None + if FILE_IDENTIFIER_SECTION_SUMMARY in filename: - return FILE_IDENTIFIER_SECTION_SUMMARY + result = FILE_IDENTIFIER_SECTION_SUMMARY elif FILE_IDENTIFIER_OBJECT_SUMMARY in filename: - return FILE_IDENTIFIER_OBJECT_SUMMARY + result = FILE_IDENTIFIER_OBJECT_SUMMARY + + return result def projectNameFromPath(path): @@ -168,10 +181,14 @@ def joinPath(*paths): + """ + Join paths together maintaining one slash direction. This is especially important when using multiple operating systems (use forward slashes only). + :param paths: [List(string)] List of paths which are going to be joined together + :return: [string] The joined path + """ # Removing the elements that are None because these can be optional path elements and they would cause an exception listOfReceivedPaths = [i for i in paths if i is not None] - # FIXME : Docstring or comment pls, and what about the commented out code? - return os.path.normpath(os.path.join(*listOfReceivedPaths)) # .replace("\\", "/") + return os.path.normpath(os.path.join(*listOfReceivedPaths)) def changePictureLinksToEmbeddingInHtmlData(htmlData, sourceDataPath=""): @@ -183,23 +200,25 @@ :param sourceDataPath: This is the path of the file from which the htmlData comes from. It is needed during the search for the picture files. :return: The modified htmlData. 
""" - list_of_linked_pictures = re.findall(r"\"" '157.4 GiB' + Converts a number into a human readable format: humanReadableSize(168963795964) -> ' 157.36 GiB' Note: we use binary prefixes (-> 1kiB = 1024 Byte) + + MIT License toHumanReadable + Copyright (c) 2019 Marcel Schmalzl, Steve Göring + https://github.com/TeamFlowerPower/kb/wiki/humanReadable + :param num: Number to convert :param suffix: The suffix that will be added to the quantifier :return: Formatted string @@ -297,31 +318,20 @@ def toHumanReadable(num, suffix='B'): class Prompt: + # pylint: disable=too-few-public-methods + # Rationale: This is legacy code, changing it into a function would require changes in other code parts. + """ + Class that contains functions that help handling of user prompts. + """ @staticmethod def idx(): """ - Prompt for an index [0,inf[ and return it if in this range otherwise return `None` - :return: + Prompt for an index [0, inf) and return it if in this range. + :return: The index entered by the user, None otherwise. 
""" + result = -1 text = input("> ") - if text is None or text == "": - return -1 - else: - return int(text) - - @staticmethod - def txt(): - # TODO: implement this method (Msc) - raise NotImplementedError - + if text is not None and text != "": + result = int(text) -def checkIfHelpWasCalled(): - """ - Checks if --help or -h is within the command line argument list - This is an argparse limitation - :return: False if it is inside; else True - """ - if "--help" in sys.argv or "-h" in sys.argv: - return False - else: - return True + return result diff --git a/shared_libs/stringConstants.py b/shared_libs/stringConstants.py index 9b9b29e..19c6629 100644 --- a/shared_libs/stringConstants.py +++ b/shared_libs/stringConstants.py @@ -20,8 +20,8 @@ # Version ################################################ -VERSION_MAJOR = "2" -VERSION_MINOR = "1" +VERSION_MAJOR = "3" +VERSION_MINOR = "0" EMMA_VERSION = ".".join([VERSION_MAJOR, VERSION_MINOR]) EMMA_VISUALISER_VERSION = EMMA_VERSION EMMA_DELTAS_VERSION = EMMA_VERSION @@ -54,20 +54,19 @@ License GPL-3.0: GNU GPL version 3 . This is free software: you are free to change and redistribute it. 
There is NO WARRANTY, to the extent permitted by law.""" -# TODO: This shall be changed to Object summary (AGK) -FILE_IDENTIFIER_OBJECT_SUMMARY = "Module_Summary" -# TODO: This shall be changed to Section summary (AGK) -FILE_IDENTIFIER_SECTION_SUMMARY = "Image_Summary" +FILE_IDENTIFIER_SECTION_SUMMARY = "Section_Summary" +FILE_IDENTIFIER_OBJECT_SUMMARY = "Object_Summary" FILE_IDENTIFIER_OBJECTS_IN_SECTIONS = "Objects_in_Sections" IGNORE_CONFIG_ID = "ignoreConfigID" IGNORE_MEMORY = "ignoreMemory" +LISTING_INDENT = "\t\t\t\t " # This is used if you want to list something; in this case you normally don't use pypiscout but a `print()` MAPFILE = "mapfile" MAPFILES = "mapfiles" OUTPUT_DIR = "memStats" OUTPUT_DIR_VISUALISER = "results" MEM_TYPE = "memType" MEM_REGION_TO_EXCLUDE = "memRegionExcludes" -MODULE_NAME = "moduleName" +OBJECT_NAME = "object" MODULE_SIZE_BYTE = "Module Size [Byte]" MODULE_SIZE_PERCENT = "Module Size [%]" OVERLAP_FLAG = "overlapFlag" @@ -80,7 +79,7 @@ SIZE_HEX = "sizeHex" SIZE_HEX_ORIGINAL = "sizeHexOriginal" SIZE_HUMAN_READABLE = "sizeHumanReadable" -TAG = "tag" +MEM_TYPE_TAG = "tag" TIMESTAMP = "timestamp" TOTAL_USED_PERCENT = "Total used [%]" UNIQUE_PATTERN_SECTIONS = "UniquePatternSections" @@ -99,6 +98,9 @@ README_PICTURE_FORMAT = "png" OBJECTS_IN_SECTIONS_SECTION_ENTRY = "" OBJECTS_IN_SECTIONS_SECTION_RESERVE = "" +UNKNOWN_MEM_REGION = "" +UNKNOWN_MEM_TYPE = "" +UNKNOWN_CATEGORY = "" # The HTML Template that will be used during the conversion of .md files to .html # The CSS in the header has two parts: @@ -138,7 +140,7 @@ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
""" # The body placeholder is placed into the template and then later it can be searched for and replaced by the actual body content. HTML_TEMPLATE_BODY_PLACEHOLDER = "__BODY__" @@ -286,3 +288,5 @@ ".mr_rw_NandFlashDataBuffer" } )) + +COMPILER_NAME_GHS = "GHS" diff --git a/tests/functional_tests/__init__.py b/tests/functional_tests/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/tests/functional_tests/test__cmd_line.py b/tests/functional_tests/test__cmd_line.py new file mode 100644 index 0000000..e7e79c8 --- /dev/null +++ b/tests/functional_tests/test__cmd_line.py @@ -0,0 +1,268 @@ +""" +Emma - Emma Memory and Mapfile Analyser +Copyright (C) 2019 The Emma authors + +This program is free software: you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation, either version 3 of the License, or +(at your option) any later version. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. + +You should have received a copy of the GNU General Public License +along with this program. If not, see +""" + +# Emma Memory and Mapfile Analyser - command-line tests + + +import unittest +import sys +import os +import shutil + +from pypiscout.SCout_Logger import Logger as sc +from matplotlib import pyplot as plt + +sys.path.append(os.path.join(os.path.dirname(__file__), "..", "..")) +# pylint: disable=wrong-import-position +# Rationale: This module needs to access modules that are above them in the folder structure. + +import emma +import emma_vis + + +class TestHelper(unittest.TestCase): + """ + Helper class for tests that creates an environment basic setup. + This includes making a copy of the test_project so we can work on a separate copy and not modify the original one. 
+ This also ensures that all the tests can work on a clean test_project state. + """ + def init(self, testCaseName): + # pylint: disable=attribute-defined-outside-init + # Rationale: This class does not have an __init__() member so the member variables will be created here. + + """ + Creating the environment of the test. + :param testCaseName: The name of the test case. This will be used to create the output folder with the name of the test case. + :return: None + """ + # Setting up the logger + # This syntax will default init it and then change the settings with the __call__() + # This is needed so that the unit tests can have different settings and not interfere with each other + sc()(invVerbosity=4, actionWarning=None, actionError=lambda: sys.exit(-10)) + + # Switching to the Emma root folder + os.chdir(os.path.join(os.path.dirname(__file__), "..", "..")) + + # Defining the paths of the folders used during the tests + self.cmdLineTestRootFolder = os.path.join("tests", "other_files", "test__cmd_line") + # Defining a path that shall contain the project files + self.cmdLineTestProjectFolder = os.path.join(self.cmdLineTestRootFolder, "test_project") + # Defining a path that shall contain the mapfiles + self.cmdLineTestProjectMapfilesfolder = os.path.join(self.cmdLineTestProjectFolder, "mapfiles") + # Defining a path that shall contain the output + self.cmdLineTestOutputFolder = os.path.join("tests", "other_files", "test__cmd_line", testCaseName) + # Defining a path that shall not exist + self.nonExistingPath = os.path.join(self.cmdLineTestRootFolder, "this", "directory", "does", "not", "exist") + + # Checking whether the root folder still exists from the previous run; if it does, we shall not erase it, but ask the user to do it manually + self.assertFalse(os.path.isdir(self.cmdLineTestRootFolder), "The temporary folder (\"" + self.cmdLineTestRootFolder + "\") still exists!
Please delete it manually.") + + # Defining the location of the source test_project + sourceTestProjectFolder = os.path.join("doc", "test_project") + + # Creating the root folder + os.makedirs(self.cmdLineTestProjectFolder) + # Copying the project files + for file in os.listdir(sourceTestProjectFolder): + if os.path.splitext(file)[-1].lower() == ".json": + shutil.copy(os.path.join(sourceTestProjectFolder, file), self.cmdLineTestProjectFolder) + # Copying the mapfiles + shutil.copytree(os.path.join(sourceTestProjectFolder, "mapfiles"), os.path.join(self.cmdLineTestProjectFolder, "mapfiles")) + # Creating the output folder for the results with the test case name + os.makedirs(self.cmdLineTestOutputFolder) + + def deInit(self): + """ + Cleaning up the environment of the test. + :return: None + """ + # Checking whether the non-existing path exists. If it does, then it was created by the software, which is an error. We will delete it so it does not influence the other tests. + nonExistingPathErrorDetected = os.path.isdir(self.nonExistingPath) + # Deleting the output folder of this test case + shutil.rmtree(self.cmdLineTestRootFolder) + self.assertFalse(nonExistingPathErrorDetected, "\nThe path (\"" + self.nonExistingPath + "\") that is used to simulate a non-existing path given as a command line argument exists at tearDown!\nThis means that this path was somehow created during the test execution by the software.\nThe path was now deleted by the TestHelper::deInit() to avoid effects on other tests.") + + +class CmdEmma(TestHelper): + # pylint: disable=invalid-name + # Rationale: Tests need to have the following method names in order to be discovered: test_(). + + """ + Class containing tests for testing the command line argument processing for Emma. 
+ """ + def setUp(self): + plt.clf() + self.init("CmdEmma") + + def tearDown(self): + plt.clf() + self.deInit() + + def test_normalRun(self): + """ + Check that an ordinary run is successful + """ + try: + args = emma.parseArgs(["--project", self.cmdLineTestProjectFolder, "--mapfiles", self.cmdLineTestProjectMapfilesfolder, "--dir", self.cmdLineTestOutputFolder]) + emma.main(args) + except Exception as e: # pylint: disable=broad-except + # Rationale: The purpose here is to catch any exception. + self.fail("Unexpected exception: " + str(e)) + + def test_help(self): + """ + Check that `--help` does not raise an exception but exits with SystemExit(0) + """ + with self.assertRaises(SystemExit) as context: + args = emma.parseArgs(["--help"]) + emma.main(args) + self.assertEqual(context.exception.code, 0) + + def test_unrecognisedArgs(self): + """ + Check that an unexpected argument does raise an exception + """ + with self.assertRaises(SystemExit) as context: + args = emma.parseArgs(["--project", self.cmdLineTestProjectFolder, "--mapfiles", self.cmdLineTestProjectMapfilesfolder, "--dir", self.cmdLineTestOutputFolder, "--blahhhhhh"]) + emma.main(args) + self.assertEqual(context.exception.code, 2) + + def test_noProjDir(self): + """ + Check run with non-existing project folder + """ + with self.assertRaises(SystemExit) as context: + args = emma.parseArgs(["--project", self.nonExistingPath, "--mapfiles", self.cmdLineTestProjectMapfilesfolder, "--dir", self.cmdLineTestOutputFolder]) + emma.main(args) + self.assertEqual(context.exception.code, -10) + + def test_noMapfileDir(self): + """ + Check run with non-existing mapfile folder + """ + with self.assertRaises(SystemExit) as context: + args = emma.parseArgs(["--project", self.cmdLineTestProjectFolder, "--mapfiles", self.nonExistingPath, "--dir", self.cmdLineTestOutputFolder]) + emma.main(args) + self.assertEqual(context.exception.code, -10) + + def test_noDirOption(self): + """ + Check run without a --dir parameter + """ + 
try: + args = emma.parseArgs(["--project", self.cmdLineTestProjectFolder, "--mapfiles", self.cmdLineTestProjectMapfilesfolder]) + emma.main(args) + except Exception as e: # pylint: disable=broad-except + # Rationale: The purpose here is to catch any exception. + self.fail("Unexpected exception: " + str(e)) + + +class CmdEmmaVis(TestHelper): + # pylint: disable=invalid-name + # Rationale: Tests need to have the following method names in order to be discovered: test_(). + + """ + Class containing tests for testing the command line argument processing for Emma-Vis. + """ + + def setUp(self): + self.init("CmdEmmaVis") + self.runEmma(self.cmdLineTestOutputFolder) + + def tearDown(self): + self.deInit() + + def runEmma(self, outputFolder=None): + """ + Function to run the Emma. + :param outputFolder: The output folder that will be given as the --dir parameter. If it is None, the --dir parameter will not be given to Emma. + :return: None + """ + if outputFolder is not None: + args = emma.parseArgs(["--project", self.cmdLineTestProjectFolder, "--mapfiles", self.cmdLineTestProjectMapfilesfolder, "--dir", outputFolder, "--noprompt"]) + else: + args = emma.parseArgs(["--project", self.cmdLineTestProjectFolder, "--mapfiles", self.cmdLineTestProjectMapfilesfolder, "--noprompt"]) + emma.main(args) + + def test_normalRun(self): + """ + Check that an ordinary run is successful + """ + try: + argsEmmaVis = emma_vis.parseArgs(["--project", self.cmdLineTestProjectFolder, "--overview", "--inOutDir", self.cmdLineTestOutputFolder, "--noprompt", "--quiet"]) + emma_vis.main(argsEmmaVis) + except Exception as e: # pylint: disable=broad-except + # Rationale: The purpose here is to catch any exception. 
+ self.fail("Unexpected exception: " + str(e)) + + def test_help(self): + """ + Check that `--help` does not raise an exeption but exits with SystemExit(0) + """ + with self.assertRaises(SystemExit) as context: + args = emma_vis.parseArgs(["--help"]) + emma_vis.main(args) + self.assertEqual(context.exception.code, 0) + + def test_unrecognisedArgs(self): + """ + Check that an unexpected argument does raise an exeption + """ + with self.assertRaises(SystemExit) as context: + args = emma_vis.parseArgs(["--project", self.cmdLineTestProjectFolder, "overview", "--dir", self.cmdLineTestOutputFolder, "--blahhhhhh", "--noprompt", "--quiet"]) + emma_vis.main(args) + self.assertEqual(context.exception.code, 2) + + def test_noProjDir(self): + """ + Check run with non-existing project folder + """ + with self.assertRaises(SystemExit) as context: + args = emma_vis.parseArgs(["--project", self.nonExistingPath, "--overview", "--inOutDir", self.cmdLineTestOutputFolder, "--noprompt", "--quiet"]) + emma_vis.main(args) + self.assertEqual(context.exception.code, -10) + + def test_noMemStats(self): + """ + Check run with non-existing memStats folder + """ + with self.assertRaises(SystemExit) as context: + args = emma_vis.parseArgs(["--project", self.cmdLineTestProjectFolder, "--overview", "--inOutDir", self.nonExistingPath, "--noprompt", "--quiet"]) + emma_vis.main(args) + self.assertEqual(context.exception.code, -10) + + def test_noDirOption(self): + """ + Check run without a --dir parameter + """ + try: + # This a is a specific case, the default Emma results will not work here. Because of this, we will delete it and run the Emma again. + shutil.rmtree(self.cmdLineTestOutputFolder) + self.runEmma() + args = emma_vis.parseArgs(["--project", self.cmdLineTestProjectFolder, "--overview", "--noprompt", "--quiet"]) + emma_vis.main(args) + plt.close('all') + except Exception as e: # pylint: disable=broad-except + # Rationale: The purpose here is to catch any exception. 
+ self.fail("Unexpected exception: " + str(e)) + + +if __name__ == '__main__': + if len(sys.argv) > 1: + sys.argv.pop() + unittest.main() diff --git a/tests/functional_tests/test__test_project.py b/tests/functional_tests/test__test_project.py new file mode 100644 index 0000000..7ffbbb6 --- /dev/null +++ b/tests/functional_tests/test__test_project.py @@ -0,0 +1,281 @@ +""" +Emma - Emma Memory and Mapfile Analyser +Copyright (C) 2019 The Emma authors + +This program is free software: you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation, either version 3 of the License, or +(at your option) any later version. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. + +You should have received a copy of the GNU General Public License +along with this program. If not, see +""" + +# Emma Memory and Mapfile Analyser - command-line tests + + +import os +import sys +import shutil +import unittest +import datetime +import pandas + + +sys.path.append(os.path.join(os.path.dirname(__file__), "..", "..")) +# pylint: disable=wrong-import-position +# Rationale: This module needs to access modules that are above them in the folder structure. + +from shared_libs.stringConstants import * # pylint: disable=unused-wildcard-import,wildcard-import +import emma + +class EmmaTestProject(unittest.TestCase): + # pylint: disable=invalid-name + # Rationale: Tests need to have the following method names in order to be discovered: test_(). + + """ + A test case to test the Emma with the test_project. + """ + def setUp(self): + # pylint: disable=too-many-locals + # Rationale: This is only a test file, it does not need to have production grade quality. 
+ """ + A function to setup the variables used in the tests and to run the Emma on the test_project. + :return: None + """ + # Changing the working directory to the scripts path + os.chdir(os.path.dirname(os.path.abspath(__file__))) + + # Setting up the variables + emmaRootFolder = os.path.join("..", "..") + testProjectFolder = os.path.join(emmaRootFolder, "doc", "test_project") + mapfilesFolder = os.path.join(testProjectFolder, "mapfiles") + self.resultsFolder = os.path.join("..", "other_files", "test__test_project") + self.memStatsFolder = os.path.join(self.resultsFolder, OUTPUT_DIR) + + # Checking whether the result folder exists, deleting it if it does, + # and then creating it again, so we have a clean results folder + if os.path.isdir(self.resultsFolder): + shutil.rmtree(self.resultsFolder) + os.mkdir(self.resultsFolder) + + # Running the test_project to create the CSV tables + arguments = emma.parseArgs(["--project", testProjectFolder, "--mapfile", mapfilesFolder, "--dir", self.resultsFolder]) + emma.main(arguments) + + for _, directories, files in os.walk(self.memStatsFolder): + # The result folder shall have 0 subdirectories and three summary files + self.assertEqual(len(directories), 0) + self.assertEqual(len(files), 3) + + # Setting up the file name related variables + projectName = "test_project" + timeStampLength = len(datetime.datetime.now().strftime("%Y-%m-%d-%Hh%Ms%S")) + reportFileExtension = ".csv" + reportFileExtensionLength = len(reportFileExtension) + imageSummaryFileNameFixPart = projectName + "_" + FILE_IDENTIFIER_SECTION_SUMMARY + "_" + moduleSummaryFileNameFixPart = projectName + "_" + FILE_IDENTIFIER_OBJECT_SUMMARY + "_" + objectsInSectionsFileNameFixPart = projectName + "_" + FILE_IDENTIFIER_OBJECTS_IN_SECTIONS + "_" + + # Checking whether the expected report names are there and setting up the variables with their paths + for file in os.listdir(self.memStatsFolder): + if imageSummaryFileNameFixPart == file[:-(timeStampLength + 
reportFileExtensionLength)]: + self.imageSummaryPath = os.path.join(self.memStatsFolder, file) + elif moduleSummaryFileNameFixPart == file[:-(timeStampLength + reportFileExtensionLength)]: + self.moduleSummaryPath = os.path.join(self.memStatsFolder, file) + elif objectsInSectionsFileNameFixPart == file[:-(timeStampLength + reportFileExtensionLength)]: + self.objectsInSectionsPath = os.path.join(self.memStatsFolder, file) + else: + raise EnvironmentError("Unexpected file: " + os.path.join(self.memStatsFolder, file)) + + # Setting up the variable with the expected column values + self.expectedColumns = [ADDR_START_HEX, ADDR_END_HEX, SIZE_HEX, ADDR_START_DEC, ADDR_END_DEC, + SIZE_DEC, SIZE_HUMAN_READABLE, SECTION_NAME, OBJECT_NAME, CONFIG_ID, + DMA, VAS_NAME, VAS_SECTION_NAME, + MEM_TYPE, MEM_TYPE_TAG, CATEGORY, MAPFILE, + OVERLAP_FLAG, CONTAINMENT_FLAG, DUPLICATE_FLAG, CONTAINING_OTHERS_FLAG, + ADDR_START_HEX_ORIGINAL, ADDR_END_HEX_ORIGINAL, SIZE_HEX_ORIGINAL, SIZE_DEC_ORIGINAL] + + def tearDown(self): + """ + A function to clean up after the tests. + :return: None + """ + # Checking whether the result folder exists, deleting it if it does + if os.path.isdir(self.resultsFolder): + shutil.rmtree(self.resultsFolder) + + class ExpectedDataTypeData: + # pylint: disable=too-few-public-methods + # Rationale: This class does not need methods, only data members to contain the expected data. + + """ + A class that contains the expected data of a data type. + """ + def __init__(self, name, numberOfRows, totalSizeDec): + self.name = name + self.numberOfRows = numberOfRows + self.totalSizeDec = totalSizeDec + + class ExpectedConfigIdData: + # pylint: disable=too-few-public-methods + # Rationale: This class does not need methods, only data members to contain the expected data. + + """ + A class that contains the expected data of a configId. 
+ """ + def __init__(self, name, numberOfRows, listOfDataTypeData): + self.name = name + self.numerOfRows = numberOfRows + self.listOfDataTypeData = listOfDataTypeData + + class ExpectedReportData: + # pylint: disable=too-few-public-methods + # Rationale: This class does not need methods, only data members to contain the expected data. + + """ + A class that contains the expected data of a report. + """ + def __init__(self, numberOfRows, listOfColumns, listOfConfigIdData): + self.numberOfRows = numberOfRows + self.listOfColumns = listOfColumns + self.listOfConfigIdData = listOfConfigIdData + + def checkDataTypeData(self, dataTypeData: pandas.DataFrame, expectedDataTypeData: ExpectedDataTypeData): + """ + A function to test a data type. + :param dataTypeData: The data frame containing the data of a data type. + :param expectedDataTypeData: The expected data of the data type. + :return: None + """ + dataTypeNumberOfRows, _ = dataTypeData.shape + dataTypeTotalSizeDec = dataTypeData[SIZE_DEC].sum() + + # Checking the number of elements + self.assertEqual(dataTypeNumberOfRows, expectedDataTypeData.numberOfRows) + + # Checking the total consumption + self.assertEqual(dataTypeTotalSizeDec, expectedDataTypeData.totalSizeDec) + + def checkConfigIdData(self, configIdData: pandas.DataFrame, expectedConfigIdData: ExpectedConfigIdData): + """ + A function to test a configId. + :param configIdData: The data frame containing the data of the configId. + :param expectedConfigIdData: The expected data of the configId. 
+ :return: None + """ + configIdNumberOfRows, _ = configIdData.shape + + # Checking the number of the elements + self.assertEqual(configIdNumberOfRows, expectedConfigIdData.numerOfRows) + + # Checking the data types + for expectedDataTypeData in expectedConfigIdData.listOfDataTypeData: + dataTypeData = configIdData[configIdData.memType == expectedDataTypeData.name] + self.checkDataTypeData(dataTypeData, expectedDataTypeData) + + def checkReport(self, reportData: pandas.DataFrame, expectedReportData: ExpectedReportData): + """ + A functon to test a report. + :param reportData: The data frame containing the data of the report. + :param expectedReportData: The expected data of the report. + :return: None + """ + # Checking the type of the imageSummary + self.assertEqual(type(reportData).__name__, "DataFrame") + numberOfRows, numberOfColumns = reportData.shape + + # Checking the number of rows + self.assertEqual(numberOfRows, expectedReportData.numberOfRows) + + # Checking the number of columns + self.assertEqual(numberOfColumns, len(expectedReportData.listOfColumns)) + + # Checking the column values and their order + self.assertEqual(reportData.columns.tolist(), expectedReportData.listOfColumns) + uniqueConfigIdValues = reportData.configID.unique().tolist() + + # Checking the configIDs and their data + self.assertEqual(len(uniqueConfigIdValues), len(expectedReportData.listOfConfigIdData)) + for expectedConfigIdData in expectedReportData.listOfConfigIdData: + self.assertIn(expectedConfigIdData.name, uniqueConfigIdValues) + configIdData = reportData[reportData.configID == expectedConfigIdData.name] + self.checkConfigIdData(configIdData, expectedConfigIdData) + + def test_imageSummaryReport(self): + """ + A function to test the Image Summary report of the test_project. 
+ :return: None + """ + # Loading the report data + reportData = pandas.read_csv(self.imageSummaryPath, sep=";") + + # Define the expected data of the MCU configId + expectedMcuIntFlashData = self.ExpectedDataTypeData("INT_FLASH", 9, 393336) + expectedMcuIntRamData = self.ExpectedDataTypeData("INT_RAM", 4, 134656) + expectedMcuData = self.ExpectedConfigIdData("MCU", 13, [expectedMcuIntFlashData, expectedMcuIntRamData]) + + # Define the expected data of the SOC configId + expectedSocExtRamData = self.ExpectedDataTypeData("EXT_RAM", 18, 6164824) + expectedSocData = self.ExpectedConfigIdData("SOC", 18, [expectedSocExtRamData]) + + # Define the expected data of the report + expectedReportData = self.ExpectedReportData(31, self.expectedColumns, [expectedMcuData, expectedSocData]) + + # Checking the report data + self.checkReport(reportData, expectedReportData) + + def test_objectSummaryReport(self): + """ + A function to test the Object Summary report of the test_project. + :return: None + """ + # Loading the report data + reportData = pandas.read_csv(self.moduleSummaryPath, sep=";") + + # Define the expected data of the MCU configId + expectedMcuIntFlashData = self.ExpectedDataTypeData("INT_FLASH", 20, 89280) + expectedMcuIntRamData = self.ExpectedDataTypeData("INT_RAM", 2, 3584) + expectedMcuData = self.ExpectedConfigIdData("MCU", 22, [expectedMcuIntFlashData, expectedMcuIntRamData]) + + # Define the expected data of the SOC configId + expectedSocExtRamData = self.ExpectedDataTypeData("EXT_RAM", 26, 5181784) + expectedSocData = self.ExpectedConfigIdData("SOC", 26, [expectedSocExtRamData]) + + # Define the expected data of the report + expectedReportData = self.ExpectedReportData(48, self.expectedColumns, [expectedMcuData, expectedSocData]) + + # Checking the report data + self.checkReport(reportData, expectedReportData) + + def test_objectsInSectionsReport(self): + """ + A function to test the Objects In Sections report of the test_project. 
+ :return: None + """ + # Loading the report data + reportData = pandas.read_csv(self.objectsInSectionsPath, sep=";") + + # Define the expected data of the MCU configId + expectedMcuIntFlashData = self.ExpectedDataTypeData("INT_FLASH", 32, 393336) + expectedMcuIntRamData = self.ExpectedDataTypeData("INT_RAM", 8, 134656) + expectedMcuData = self.ExpectedConfigIdData("MCU", 40, [expectedMcuIntFlashData, expectedMcuIntRamData]) + + # Define the expected data of the SOC configId + expectedSocExtRamData = self.ExpectedDataTypeData("EXT_RAM", 59, 6164824) + expectedSocData = self.ExpectedConfigIdData("SOC", 59, [expectedSocExtRamData]) + + # Define the expected data of the report + expectedReportData = self.ExpectedReportData(99, self.expectedColumns, [expectedMcuData, expectedSocData]) + + # Checking the report data + self.checkReport(reportData, expectedReportData) + + +if __name__ == "__main__": + unittest.main() diff --git a/tests/other_files/__init__.py b/tests/other_files/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/tests/test__cmd-line.py b/tests/test__cmd-line.py deleted file mode 100644 index 28901dd..0000000 --- a/tests/test__cmd-line.py +++ /dev/null @@ -1,138 +0,0 @@ -""" -Emma - Emma Memory and Mapfile Analyser -Copyright (C) 2019 The Emma authors - -This program is free software: you can redistribute it and/or modify -it under the terms of the GNU General Public License as published by -the Free Software Foundation, either version 3 of the License, or -(at your option) any later version. - -This program is distributed in the hope that it will be useful, -but WITHOUT ANY WARRANTY; without even the implied warranty of -MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -GNU General Public License for more details. - -You should have received a copy of the GNU General Public License -along with this program. 
If not, see -""" - -# Emma Memory and Mapfile Analyser - command-line unit tests - - -import unittest -import sys -import os -import shutil - -sys.path.append("../") -import emma -import emma_vis - -import shared_libs.emma_helper - - -class CmdEmma(unittest.TestCase): - def test_help(self): - """ - Check that `--help` does not raise an exeption - """ - # with self.assertRaises(SystemExit): - try: - args = emma.parseArgs(["--help"]) - emma.main(args) - except SystemExit as e: - if e.code != 0: - print("Exit code", e.code) - raise e - - def test_unrecognisedArgs(self): - """ - Check that `--help` does not raise an exeption - """ - # with self.assertRaises(SystemExit): - try: - args = emma.parseArgs(["--project", "doc/test_project", "--mapfiles", "doc/test_project/mapfiles", "--blahhhhhh"]) - emma.main(args) - except SystemExit as e: - if e.code != 2: - print("Exit code", e.code) - raise e - - def test_noProjMapfileDir(self): - with self.assertRaises(SystemExit): - args = emma.parseArgs(["--project", "this/directory/does/not/exist", "--mapfiles", "this/directory/does/not/exist"]) - emma.main(args) - - @staticmethod - def test_normalRun(): - """ - Check that an ordinary run is successful - """ - os.chdir("../") - args = emma.parseArgs(["--project", "doc/test_project", "--mapfiles", "doc/test_project/mapfiles"]) - emma.main(args) - os.chdir("tests") - - -class CmdEmmaVis(unittest.TestCase): - def test_help(self): - """ - Check that `--help` does not raise an exeption - """ - # with self.assertRaises(SystemExit): - try: - args = emma_vis.parseArgs(["--help"]) - emma_vis.main(args) - except SystemExit as e: - if e.code != 0: - print("Exit code", e.code) - raise e - - @staticmethod - def test_normalRun(): - """ - Check that an ordinary run is successful - """ - os.chdir("../") - # args = emma.parseArgs(["--project", "doc/test_project", "--mapfiles", "doc/test_project/mapfiles"]) - # emma.main(args) - - args = emma_vis.parseArgs(["--project", "doc/test_project", 
"--overview", "--quiet"]) - emma_vis.main(args) - os.chdir("tests") - - @staticmethod - def test_runWithNoMemStats(): - """ - No memStats should not raise an exception but exit with sys.exit(-10) - """ - os.chdir("../") - tempMemStatsPath = "temp" - tempMemStatsPathMemStats = shared_libs.emma_helper.joinPath(tempMemStatsPath, "memStats") - if os.path.isdir(tempMemStatsPathMemStats): - os.removedirs(tempMemStatsPathMemStats) - os.makedirs(tempMemStatsPathMemStats) - - # Make sure the directory is empty - assert len(os.listdir(tempMemStatsPathMemStats)) == 0 - - def cleanUp(): - shutil.rmtree(tempMemStatsPath) - os.chdir("tests") - try: - args = emma_vis.parseArgs(["--project", "doc/test_project", "--overview", "--dir", tempMemStatsPath]) - emma_vis.main(args) - except SystemExit as e: - if -10 != e.code: - cleanUp() - print("Exit code", e.code) - raise e - else: - # All good; program ends with sys.exit(-10) since no file was found - cleanUp() - - -if __name__ == '__main__': - if len(sys.argv) > 1: - sys.argv.pop() - unittest.main() diff --git a/tests/unit_tests/__init__.py b/tests/unit_tests/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/tests/unit_tests/test_emma_helper.py b/tests/unit_tests/test_emma_helper.py new file mode 100644 index 0000000..40291fe --- /dev/null +++ b/tests/unit_tests/test_emma_helper.py @@ -0,0 +1,140 @@ +""" +Emma - Emma Memory and Mapfile Analyser +Copyright (C) 2019 The Emma authors + +This program is free software: you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation, either version 3 of the License, or +(at your option) any later version. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. 
+ +You should have received a copy of the GNU General Public License +along with this program. If not, see +""" + + +import os +import sys +import unittest +import platform + +from pypiscout.SCout_Logger import Logger as sc + +sys.path.append(os.path.join(os.path.dirname(__file__), "..", "..")) +# pylint: disable=wrong-import-position +# Rationale: This module needs to access modules that are above them in the folder structure. + +from shared_libs.stringConstants import * # pylint: disable=unused-wildcard-import,wildcard-import +import shared_libs.emma_helper + + +class EmmaHelperTestCase(unittest.TestCase): + # pylint: disable=invalid-name, missing-docstring + # Rationale: Tests need to have the following method names in order to be discovered: test_(). It is not necessary to add a docstring for every unit test. + + """ + Unit tests for the emma_helper module. + """ + def setUp(self): + """ + Setting up the logger + This syntax will default init it and then change the settings with the __call__() + This is needed so that the unit tests can have different settings and not interfere with each other + :return: None + """ + sc()(invVerbosity=4, actionWarning=lambda: sys.exit("warning"), actionError=lambda: sys.exit("error")) + + def test_checkIfFolderExists(self): + try: + shared_libs.emma_helper.checkIfFolderExists(os.path.dirname(__file__)) + except Exception: # pylint: disable=broad-except + # Rationale: The goal here is to catch any exception types. + self.fail("Unexpected exception!") + with self.assertRaises(SystemExit) as contextManager: + shared_libs.emma_helper.checkIfFolderExists("DefinitelyNonExistingFolder") + self.assertEqual(contextManager.exception.code, "error") + + def test_checkIfFileExists(self): + try: + shared_libs.emma_helper.checkIfFileExists(__file__) + except Exception: # pylint: disable=broad-except + # Rationale: The goal here is to catch any exception types. 
+ self.fail("Unexpected exception!") + with self.assertRaises(SystemExit) as contextManager: + shared_libs.emma_helper.checkIfFileExists("DefinitelyNonExisting.file") + self.assertEqual(contextManager.exception.code, "error") + + def test_mkDirIfNeeded(self): + directoryName = "TestDirectoryNameThatShouldNotExist" + self.assertFalse(os.path.isdir(directoryName)) + shared_libs.emma_helper.mkDirIfNeeded(directoryName) + self.assertTrue(os.path.isdir(directoryName)) + os.rmdir(directoryName) + + def test_readJsonWriteJson(self): + jsonTestFilePath = os.path.join(os.path.dirname(__file__), "..", "other_files", "testJson.json") + self.assertFalse(os.path.exists(jsonTestFilePath)) + + jsonContentToWrite = {"TestDictionary": {}} + jsonContentToWrite["TestDictionary"]["test_passed"] = True + shared_libs.emma_helper.writeJson(jsonTestFilePath, jsonContentToWrite) + self.assertTrue(os.path.exists(jsonTestFilePath)) + + jsonContentReadIn = shared_libs.emma_helper.readJson(jsonTestFilePath) + self.assertIn("TestDictionary", jsonContentReadIn) + self.assertIn("test_passed", jsonContentReadIn["TestDictionary"]) + self.assertEqual(type(jsonContentReadIn["TestDictionary"]["test_passed"]), bool) + + os.remove(jsonTestFilePath) + self.assertFalse(os.path.exists(jsonTestFilePath)) + + def test_unifyAddress(self): + hexResult, decResult = shared_libs.emma_helper.unifyAddress("0x16") + self.assertEqual(hexResult, "0x16") + self.assertEqual(decResult, 22) + hexResult, decResult = shared_libs.emma_helper.unifyAddress(22) + self.assertEqual(hexResult, "0x16") + self.assertEqual(decResult, 22) + with self.assertRaises(ValueError) as contextManager: + hexResult, decResult = shared_libs.emma_helper.unifyAddress("Obviously not a number...") + with self.assertRaises(SystemExit) as contextManager: + hexResult, decResult = shared_libs.emma_helper.unifyAddress(0.123) + self.assertEqual(contextManager.exception.code, "error") + + def test_getTimestampFromFilename(self): + timestamp = 
shared_libs.emma_helper.getTimestampFromFilename("MyFile_2017-11-06-14h56s52.csv") + self.assertEqual(timestamp, "2017-11-06-14h56s52") + with self.assertRaises(SystemExit) as contextManager: + shared_libs.emma_helper.getTimestampFromFilename("MyFileWithoutTimeStamp.csv") + self.assertEqual(contextManager.exception.code, "error") + + def test_toHumanReadable(self): + self.assertEqual(" 0.00 B", shared_libs.emma_helper.toHumanReadable(0)) + self.assertEqual(" 10.00 B", shared_libs.emma_helper.toHumanReadable(10)) + self.assertEqual(" 1024.00 B", shared_libs.emma_helper.toHumanReadable(1024)) + self.assertEqual(" 1.00 KiB", shared_libs.emma_helper.toHumanReadable(1025)) + self.assertEqual(" 1.01 KiB", shared_libs.emma_helper.toHumanReadable(1035)) + self.assertEqual(" 1.10 KiB", shared_libs.emma_helper.toHumanReadable(1126)) + self.assertEqual(" 157.36 GiB", shared_libs.emma_helper.toHumanReadable(168963795964)) + + def test_evalSummary(self): + self.assertEqual(shared_libs.emma_helper.evalSummary("Projectname_" + FILE_IDENTIFIER_SECTION_SUMMARY + "_2017-11-06-14h56s52.csv"), FILE_IDENTIFIER_SECTION_SUMMARY) + self.assertEqual(shared_libs.emma_helper.evalSummary("Projectname_" + FILE_IDENTIFIER_OBJECT_SUMMARY + "_2017-11-06-14h56s52.csv"), FILE_IDENTIFIER_OBJECT_SUMMARY) + self.assertIsNone(shared_libs.emma_helper.evalSummary("Projectname_" + "_2017-11-06-14h56s52.csv")) + + def test_projectNameFromPath(self): + self.assertEqual("MyProject", shared_libs.emma_helper.projectNameFromPath(os.path.join("C:", "GitRepos", "Emma", "MyProject"))) + + def test_joinPath(self): + if platform.system() == "Windows": + self.assertEqual(r"c:Documents\Projects\Emma", shared_libs.emma_helper.joinPath("c:", "Documents", "Projects", "Emma")) + self.assertEqual(r"..\..\Emma\tests\other_files", shared_libs.emma_helper.joinPath("..", "..", "Emma", "tests", "other_files")) + elif platform.system() == "Linux": + self.assertEqual(r"Documents/Projects/Emma", 
shared_libs.emma_helper.joinPath("Documents", "Projects", "Emma")) + self.assertEqual(r"../../Emma/tests/other_files", shared_libs.emma_helper.joinPath("..", "..", "Emma", "tests", "other_files")) + else: + raise EnvironmentError("Unexpected platform value: " + platform.system()) diff --git a/tests/unit_tests/test_memoryEntry.py b/tests/unit_tests/test_memoryEntry.py new file mode 100644 index 0000000..ebace09 --- /dev/null +++ b/tests/unit_tests/test_memoryEntry.py @@ -0,0 +1,344 @@ +""" +Emma - Emma Memory and Mapfile Analyser +Copyright (C) 2019 The Emma authors + +This program is free software: you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation, either version 3 of the License, or +(at your option) any later version. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. + +You should have received a copy of the GNU General Public License +along with this program. If not, see +""" + + +import os +import sys +import unittest +import collections + +from pypiscout.SCout_Logger import Logger as sc + +sys.path.append(os.path.join(os.path.dirname(__file__), "..", "..")) +# pylint: disable=wrong-import-position +# Rationale: This module needs to access modules that are above them in the folder structure. + +import emma_libs.memoryEntry + + +class TestData: + # pylint: disable=too-many-instance-attributes, too-few-public-methods + # Rationale: This class needs to contain all the data needed for the tests as attributes and it does not need to have any methods. + + """ + Class to define data for the MemEntry unit tests. 
+ """ + def __init__(self): + self.configID = "MCU" + self.mapfileName = "MapFile.map" + self.addressStart = 0x1000 + self.addressLength = 0x100 + self.addressEnd = 0x1000 + 0x100 - 0x01 + self.sectionName = "SectionName" + self.objectName = "ObjectName" + self.memType = "INT_RAM" + self.memTypeTag = "Tag" + self.category = "MyCategory" + self.vasName = "Vas" + self.vasSectionName = "VasSectionName" + + self.compilerSpecificData = collections.OrderedDict() + self.compilerSpecificData["DMA"] = (self.vasName is None) + self.compilerSpecificData["vasName"] = self.vasName + self.compilerSpecificData["vasSectionName"] = self.vasSectionName + + self.basicMemEntry = emma_libs.memoryEntry.MemEntry(configID=self.configID, mapfileName=self.mapfileName, + addressStart=self.addressStart, addressLength=None, addressEnd=self.addressEnd, + sectionName=self.sectionName, objectName=self.objectName, + memType=self.memType, memTypeTag=self.memTypeTag, category=self.category, + compilerSpecificData=self.compilerSpecificData) + + +class MemEntryTestCase(unittest.TestCase, TestData): + # pylint: disable=invalid-name, missing-docstring + # Rationale: Tests need to have the following method names in order to be discovered: test_(). It is not necessary to add a docstring for every unit test. 
+ + def setUp(self): + TestData.__init__(self) + + # Setting up the logger + # This syntax will default init it and then change the settings with the __call__() + # This is needed so that the unit tests can have different settings and not interfere with each other + sc()(4, actionWarning=None, actionError=self.actionError) + self.actionWarningWasCalled = False + self.actionErrorWasCalled = False + + def actionWarning(self): + self.actionWarningWasCalled = True + + def actionError(self): + self.actionErrorWasCalled = True + + def test_constructorBasicCase(self): + # Do not use named parameters here so that the order of parameters are also checked + self.assertEqual(self.basicMemEntry.addressStart, self.addressStart) + self.assertEqual(self.basicMemEntry.addressLength, self.addressLength) + self.assertEqual(self.basicMemEntry.addressEnd(), self.addressEnd) + self.assertEqual(self.basicMemEntry.addressStartHex(), hex(self.addressStart)) + self.assertEqual(self.basicMemEntry.addressLengthHex(), hex(self.addressLength)) + self.assertEqual(self.basicMemEntry.addressEndHex(), hex(self.addressEnd)) + self.assertEqual(self.basicMemEntry.memTypeTag, "Tag") + self.assertEqual(self.basicMemEntry.compilerSpecificData["DMA"], (self.vasName is None)) + self.assertEqual(self.basicMemEntry.compilerSpecificData["vasName"], self.vasName) + self.assertEqual(self.basicMemEntry.compilerSpecificData["vasSectionName"], self.vasSectionName) + self.assertEqual(self.basicMemEntry.sectionName, self.sectionName) + self.assertEqual(self.basicMemEntry.objectName, self.objectName) + self.assertEqual(self.basicMemEntry.mapfile, self.mapfileName) + self.assertEqual(self.basicMemEntry.configID, self.configID) + self.assertEqual(self.basicMemEntry.memType, self.memType) + self.assertEqual(self.basicMemEntry.category, self.category) + self.assertEqual(self.basicMemEntry.overlapFlag, None) + self.assertEqual(self.basicMemEntry.containmentFlag, None) + self.assertEqual(self.basicMemEntry.duplicateFlag, 
None) + self.assertEqual(self.basicMemEntry.containingOthersFlag, None) + self.assertEqual(self.basicMemEntry.overlappingOthersFlag, None) + self.assertEqual(self.basicMemEntry.addressStartOriginal, self.addressStart) + self.assertEqual(self.basicMemEntry.addressLengthOriginal, self.addressLength) + self.assertEqual(self.basicMemEntry.addressEndOriginal(), self.addressEnd) + self.assertEqual(self.basicMemEntry.addressStartHexOriginal(), hex(self.addressStart)) + self.assertEqual(self.basicMemEntry.addressLengthHexOriginal(), hex(self.addressLength)) + self.assertEqual(self.basicMemEntry.addressEndHexOriginal(), hex(self.addressEnd)) + + def test_constructorAddressLengthAndAddressEnd(self): + # Modifying the self.addressEnd to make sure it is wrong + self.addressEnd = self.addressStart + self.addressLength + 0x100 + entryWithLengthAndAddressEnd = emma_libs.memoryEntry.MemEntry(configID=self.configID, mapfileName=self.mapfileName, + addressStart=self.addressStart, addressLength=self.addressLength, addressEnd=self.addressEnd, + sectionName=self.sectionName, objectName=self.objectName, + memType=self.memType, memTypeTag=self.memTypeTag, category=self.category, + compilerSpecificData=self.compilerSpecificData) + # We expect that only the addressLength will be used and the addressEnd will be recalculated based on this + self.assertEqual(entryWithLengthAndAddressEnd.addressStart, self.addressStart) + self.assertEqual(entryWithLengthAndAddressEnd.addressLength, self.addressLength) + self.assertEqual(entryWithLengthAndAddressEnd.addressEnd(), (self.addressStart + self.addressLength - 1)) + + def test_constructorNoAddressLengthNorAddressEnd(self): + self.assertFalse(self.actionErrorWasCalled) + entryWithoutLengthAndAddressEnd = emma_libs.memoryEntry.MemEntry(configID=self.configID, mapfileName=self.mapfileName, + addressStart=self.addressStart, addressLength=None, addressEnd=None, + sectionName=self.sectionName, objectName=self.objectName, + memType=self.memType, 
memTypeTag=self.memTypeTag, category=self.category, + compilerSpecificData=self.compilerSpecificData) + self.assertTrue(self.actionErrorWasCalled) + self.assertIsNone(entryWithoutLengthAndAddressEnd.addressLength) + + def test___setAddressesGivenEnd(self): + entry = emma_libs.memoryEntry.MemEntry(configID=self.configID, mapfileName=self.mapfileName, + addressStart=self.addressStart, addressLength=None, addressEnd=self.addressEnd, + sectionName=self.sectionName, objectName=self.objectName, + memType=self.memType, memTypeTag=self.memTypeTag, category=self.category, + compilerSpecificData=self.compilerSpecificData) + self.assertEqual(entry.addressStart, self.addressStart) + self.assertEqual(entry.addressLength, self.addressLength) + self.assertEqual(entry.addressLengthHex(), hex(self.addressLength)) + self.assertEqual(entry.addressEnd(), self.addressEnd) + self.assertEqual(entry.addressEndHex(), hex(self.addressEnd)) + + EXTENSION = 0x1000 + self.addressEnd = self.addressEnd + EXTENSION + self.addressLength = self.addressLength + EXTENSION + entry.setAddressesGivenEnd(self.addressEnd) + + self.assertEqual(entry.addressStart, self.addressStart) + self.assertEqual(entry.addressLength, self.addressLength) + self.assertEqual(entry.addressLengthHex(), hex(self.addressLength)) + self.assertEqual(entry.addressEnd(), self.addressEnd) + self.assertEqual(entry.addressEndHex(), hex(self.addressEnd)) + + def test___setAddressesGivenLength(self): + entry = emma_libs.memoryEntry.MemEntry(configID=self.configID, mapfileName=self.mapfileName, + addressStart=self.addressStart, addressLength=self.addressLength, addressEnd=None, + sectionName=self.sectionName, objectName=self.objectName, + memType=self.memType, memTypeTag=self.memTypeTag, category=self.category, + compilerSpecificData=self.compilerSpecificData) + self.assertEqual(entry.addressStart, self.addressStart) + self.assertEqual(entry.addressLength, self.addressLength) + self.assertEqual(entry.addressLengthHex(), 
hex(self.addressLength)) + self.assertEqual(entry.addressEnd(), self.addressEnd) + self.assertEqual(entry.addressEndHex(), hex(self.addressEnd)) + + EXTENSION = 0x1000 + self.addressEnd = self.addressEnd + EXTENSION + self.addressLength = self.addressLength + EXTENSION + entry.setAddressesGivenLength(self.addressLength) + + self.assertEqual(entry.addressStart, self.addressStart) + self.assertEqual(entry.addressLength, self.addressLength) + self.assertEqual(entry.addressLengthHex(), hex(self.addressLength)) + self.assertEqual(entry.addressEnd(), self.addressEnd) + self.assertEqual(entry.addressEndHex(), hex(self.addressEnd)) + + def test_compilerSpecificDataWrongType(self): + self.assertFalse(self.actionErrorWasCalled) + otherMemEntry = emma_libs.memoryEntry.MemEntry(configID=self.configID, mapfileName=self.mapfileName, + addressStart=self.addressStart, addressLength=None, addressEnd=self.addressEnd, + sectionName=self.sectionName, objectName=self.objectName, + memType=self.memType, memTypeTag=self.memTypeTag, category=self.category, + compilerSpecificData="This is obviously not a correct CompilerSpecificData here...") + self.assertTrue(self.actionErrorWasCalled) + self.assertIsNone(otherMemEntry.compilerSpecificData) + + def test_equalConfigID(self): + otherMemEntry = emma_libs.memoryEntry.MemEntry(configID=self.configID, mapfileName=self.mapfileName, + addressStart=self.addressStart, addressLength=None, addressEnd=self.addressEnd, + sectionName=self.sectionName, objectName=self.objectName, + memType=self.memType, memTypeTag=self.memTypeTag, category=self.category, + compilerSpecificData=self.compilerSpecificData) + self.assertEqual(self.basicMemEntry.equalConfigID(otherMemEntry), True) + otherMemEntry.configID = "ChangedConfigId" + self.assertEqual(self.basicMemEntry.equalConfigID(otherMemEntry), False) + + def test___lt__(self): + otherMemEntry = emma_libs.memoryEntry.MemEntry(configID=self.configID, mapfileName=self.mapfileName, + addressStart=self.addressStart, 
addressLength=None, addressEnd=self.addressEnd, + sectionName=self.sectionName, objectName=self.objectName, + memType=self.memType, memTypeTag=self.memTypeTag, category=self.category, + compilerSpecificData=self.compilerSpecificData) + self.assertEqual(self.basicMemEntry < otherMemEntry, False) + self.assertEqual(self.basicMemEntry > otherMemEntry, False) + otherMemEntry.addressStart += otherMemEntry.addressLength + self.assertEqual(self.basicMemEntry < otherMemEntry, True) + self.assertEqual(self.basicMemEntry > otherMemEntry, False) + self.assertEqual(otherMemEntry < self.basicMemEntry, False) + self.assertEqual(otherMemEntry > self.basicMemEntry, True) + + def test___calculateAddressEnd(self): + # pylint: disable=protected-access + # Rationale: This test was specifically written to access this private method. + + self.assertEqual(emma_libs.memoryEntry.MemEntry._MemEntry__calculateAddressEnd(self.addressStart, self.addressLength), self.addressEnd) + self.assertEqual(emma_libs.memoryEntry.MemEntry._MemEntry__calculateAddressEnd(self.addressStart, 1), self.addressStart) + self.assertIsNone(emma_libs.memoryEntry.MemEntry._MemEntry__calculateAddressEnd(self.addressStart, 0), self.addressStart) + + def test___eq__(self): + with self.assertRaises(NotImplementedError): + self.assertEqual(self.basicMemEntry, self.basicMemEntry) + + def test_setAddressesGivenEnd(self): + # Basic case + self.assertEqual(self.basicMemEntry.addressStart, self.addressStart) + self.assertEqual(self.basicMemEntry.addressLength, self.addressLength) + + # End == Start + self.basicMemEntry.setAddressesGivenEnd(self.addressStart) + self.assertEqual(self.basicMemEntry.addressStart, self.addressStart) + self.assertEqual(self.basicMemEntry.addressLength, 1) + + # Going back to the basic case + self.basicMemEntry.setAddressesGivenEnd(self.addressEnd) + self.assertEqual(self.basicMemEntry.addressStart, self.addressStart) + self.assertEqual(self.basicMemEntry.addressLength, self.addressLength) + + # End <
Start (We expect no change but a call to sc().error!) + self.assertFalse(self.actionErrorWasCalled) + self.basicMemEntry.setAddressesGivenEnd(self.addressStart - 1) + self.assertTrue(self.actionErrorWasCalled) + self.assertEqual(self.basicMemEntry.addressStart, self.addressStart) + self.assertEqual(self.basicMemEntry.addressLength, self.addressLength) + + def test_setAddressesGivenLength(self): + # Basic case + self.assertEqual(self.basicMemEntry.addressStart, self.addressStart) + self.assertEqual(self.basicMemEntry.addressLength, self.addressLength) + + # Negative length (We expect no change but a call to sc().error!) + self.assertFalse(self.actionErrorWasCalled) + self.basicMemEntry.setAddressesGivenLength(-1) + self.assertTrue(self.actionErrorWasCalled) + self.assertEqual(self.basicMemEntry.addressStart, self.addressStart) + self.assertEqual(self.basicMemEntry.addressLength, self.addressLength) + + +class MemEntryHandlerTestCase(unittest.TestCase): + # pylint: disable=invalid-name, missing-docstring + # Rationale: Tests need to have the following method names in order to be discovered: test_(). It is not necessary to add a docstring for every unit test. + + def setUp(self): + # Setting up the logger + sc(invVerbosity=4, actionWarning=None, actionError=lambda: sys.exit(-10)) + + def test_abstractness(self): + # pylint: disable=abstract-class-instantiated, unused-variable + # Rationale: This test was specifically written to test whether it is possible to instantiate the MemEntryHandler class. Since the instantiation shall fail, the instance will not be used. + + with self.assertRaises(TypeError): + memEntryHandler = emma_libs.memoryEntry.MemEntryHandler() + + +class SectionEntryTestCase(unittest.TestCase, TestData): + # pylint: disable=invalid-name, missing-docstring + # Rationale: Tests need to have the following method names in order to be discovered: test_(). It is not necessary to add a docstring for every unit test.
+ + def setUp(self): + TestData.__init__(self) + + # Setting up the logger + # This syntax will default init it and then change the settings with the __call__() + # This is needed so that the unit tests can have different settings and not interfere with each other + sc()(4, actionWarning=None, actionError=self.actionError) + self.actionWarningWasCalled = False + self.actionErrorWasCalled = False + + def actionWarning(self): + self.actionWarningWasCalled = True + + def actionError(self): + self.actionErrorWasCalled = True + + def test_isEqual(self): + self.assertTrue(emma_libs.memoryEntry.SectionEntry.isEqual(self.basicMemEntry, self.basicMemEntry)) + with self.assertRaises(TypeError): + emma_libs.memoryEntry.SectionEntry.isEqual(self.basicMemEntry, "This is obviously not a MemEntry object!") + + def test_getName(self): + name = emma_libs.memoryEntry.SectionEntry.getName(self.basicMemEntry) + self.assertEqual(name, self.sectionName) + + +class ObjectEntryTestCase(unittest.TestCase, TestData): + # pylint: disable=invalid-name, missing-docstring + # Rationale: Tests need to have the following method names in order to be discovered: test_(). It is not necessary to add a docstring for every unit test. 
+ + def setUp(self): + TestData.__init__(self) + + # Setting up the logger + # This syntax will default init it and then change the settings with the __call__() + # This is needed so that the unit tests can have different settings and not interfere with each other + sc()(4, actionWarning=None, actionError=self.actionError) + self.actionWarningWasCalled = False + self.actionErrorWasCalled = False + + def actionWarning(self): + self.actionWarningWasCalled = True + + def actionError(self): + self.actionErrorWasCalled = True + + def test_isEqual(self): + self.assertTrue(emma_libs.memoryEntry.ObjectEntry.isEqual(self.basicMemEntry, self.basicMemEntry)) + with self.assertRaises(TypeError): + emma_libs.memoryEntry.ObjectEntry.isEqual(self.basicMemEntry, "This is obviously not a MemEntry object!") + + def test_getName(self): + name = emma_libs.memoryEntry.ObjectEntry.getName(self.basicMemEntry) + self.assertEqual(name, (self.sectionName + "::" + self.objectName)) + + +if __name__ == "__main__": + unittest.main() diff --git a/tests/unit_tests/test_memoryMap.py b/tests/unit_tests/test_memoryMap.py new file mode 100644 index 0000000..ebd5bb8 --- /dev/null +++ b/tests/unit_tests/test_memoryMap.py @@ -0,0 +1,903 @@ +""" +Emma - Emma Memory and Mapfile Analyser +Copyright (C) 2019 The Emma authors + +This program is free software: you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation, either version 3 of the License, or +(at your option) any later version. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. + +You should have received a copy of the GNU General Public License +along with this program. 
If not, see <https://www.gnu.org/licenses/>. +""" + + +import os +import sys +import collections +import unittest + +sys.path.append(os.path.join(os.path.dirname(__file__), "..", "..")) +# pylint: disable=wrong-import-position +# Rationale: This module needs to access modules that are above it in the folder structure. + +from shared_libs.stringConstants import * # pylint: disable=unused-wildcard-import,wildcard-import +import emma_libs.memoryEntry +import emma_libs.memoryMap + + +class MemEntryData: + # pylint: disable=too-many-instance-attributes, too-few-public-methods + # Rationale: This class needs to contain all the data needed for the creation of MemEntry objects as attributes and it does not need to have any methods. + + """ + The purpose of this class is that from its objects we can generate MemEntry objects that + only differ from each other by the address and length values. + If the addressEnd is None, then an entry with zero length will be created. + In this case the addressEnd needs to have the same value as addressStart. + The addressEnd is used here instead of the addressLength because the specific cases are easier to define with the addressEnd. + The MemEntry objects can be generated with the createMemEntryObjects() function from these objects. + """ + + def __init__(self, addressStart, addressEnd, configId=None, section=None, moduleName=None): + # pylint: disable=too-many-arguments + # Rationale: The objects need to be fully set up with data during construction. + self.addressStart = addressStart + self.addressEnd = addressEnd + self.configId = configId + self.section = section + self.moduleName = moduleName + + +def createMemEntryObjects(sectionDataContainer=None, objectDataContainer=None): + # pylint: disable=invalid-name, missing-docstring + # Rationale: Tests need to have the following method names in order to be discovered: test_(). It is not necessary to add a docstring for every unit test.
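(Editor's aside, outside the diff:) The zero-length convention described in the MemEntryData docstring above follows the inclusive address arithmetic exercised earlier by test___calculateAddressEnd: addressEnd = addressStart + addressLength - 1, and a zero length yields no end address. A minimal standalone sketch of that rule (an illustration, not the Emma implementation):

```python
def address_end(address_start, address_length):
    """Inclusive end address of a memory range; None for zero-length ranges."""
    if address_length == 0:
        return None  # zero-length entries have no end address
    return address_start + address_length - 1

# Mirrors the expectations in test___calculateAddressEnd:
assert address_end(0x0100, 0x0100) == 0x01FF  # start + length - 1
assert address_end(0x0100, 1) == 0x0100       # one-byte range: end == start
assert address_end(0x0100, 0) is None         # zero length: no end address
```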
+ + def createMemEntryObject(memEntryData): + compilerSpecificData = collections.OrderedDict() + compilerSpecificData["DMA"] = True + compilerSpecificData["vasName"] = "" + compilerSpecificData["vasSectionName"] = "" + return emma_libs.memoryEntry.MemEntry(configID="MCU" if memEntryData.configId is None else memEntryData.configId, + mapfileName="mapfile.map", + addressStart=memEntryData.addressStart, + addressLength=0 if memEntryData.addressEnd is None else None, + addressEnd=memEntryData.addressEnd, + sectionName=".text" if memEntryData.section is None else memEntryData.section, + objectName="" if memEntryData.moduleName is None else memEntryData.moduleName, + memType="INT_FLASH", + memTypeTag="", + category="", + compilerSpecificData=compilerSpecificData) + + sectionContainer = [] + if sectionDataContainer is not None: + for element in sectionDataContainer: + sectionContainer.append(createMemEntryObject(element)) + + objectContainer = [] + if objectDataContainer is not None: + for element in objectDataContainer: + objectContainer.append(createMemEntryObject(element)) + + return sectionContainer, objectContainer + + +class ResolveDuplicateContainmentOverlapTestCase(unittest.TestCase): + # pylint: disable=invalid-name, missing-docstring + # Rationale: Tests need to have the following method names in order to be discovered: test_(). It is not necessary to add a docstring for every unit test. 
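(Editor's aside, outside the diff:) The ResolveDuplicateContainmentOverlapTestCase below exercises four relations between two sections: separate, duplicate, containment, and overlap. A hedged sketch of how two inclusive [start, end] ranges fall into these categories, matching the scenarios the tests set up (illustrative only, not Emma's resolution algorithm):

```python
def classify(a_start, a_end, b_start, b_end):
    """Relation of inclusive range A to inclusive range B (illustrative only)."""
    if (a_start, a_end) == (b_start, b_end):
        return "duplicate"
    if a_start <= b_start and b_end <= a_end:
        return "contains"    # A fully contains B
    if b_start <= a_start and a_end <= b_end:
        return "contained"   # A lies fully inside B
    if a_start <= b_end and b_start <= a_end:
        return "overlap"     # partial overlap
    return "separate"

# The four scenarios exercised by the test cases below:
assert classify(0x0100, 0x01FF, 0x0200, 0x02FF) == "separate"
assert classify(0x0100, 0x01FF, 0x0100, 0x01FF) == "duplicate"
assert classify(0x0000, 0x02FF, 0x0100, 0x01FF) == "contains"
assert classify(0x0100, 0x01FF, 0x0180, 0x027F) == "overlap"
```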
+ + def assertEqualSections(self, firstSection, secondSection): + self.assertTrue(emma_libs.memoryEntry.SectionEntry.isEqual(firstSection, secondSection)) + + def assertEqualObjects(self, firstObject, secondObject): + self.assertTrue(emma_libs.memoryEntry.ObjectEntry.isEqual(firstObject, secondObject)) + + def checkFlags(self, memEntry, memEntryHandler, expectedDuplicate=None, expectedContainingOthers=None, expectedContainedBy=None, expectedOverlappingOthers=None, expectedOverlappedBy=None): # pylint: disable=too-many-arguments + # Rationale: This function needs to be able to check all kinds of flags, which is why these arguments are needed. + if expectedDuplicate is not None: + self.assertEqual(memEntry.duplicateFlag, "Duplicate of (" + memEntryHandler.getName(expectedDuplicate) + ", " + expectedDuplicate.configID + ", " + expectedDuplicate.mapfile + ")") + if expectedContainingOthers is not None: + self.assertEqual(memEntry.containingOthersFlag, expectedContainingOthers) + if expectedContainedBy is not None: + self.assertEqual(memEntry.containmentFlag, "Contained by (" + memEntryHandler.getName(expectedContainedBy) + ", " + expectedContainedBy.configID + ", " + expectedContainedBy.mapfile + ")") + if expectedOverlappingOthers is not None: + self.assertEqual(memEntry.overlappingOthersFlag, expectedOverlappingOthers) + if expectedOverlappedBy is not None: + self.assertEqual(memEntry.overlapFlag, "Overlapped by (" + memEntryHandler.getName(expectedOverlappedBy) + ", " + expectedOverlappedBy.configID + ", " + expectedOverlappedBy.mapfile + ")") + + def checkAddressChanges(self, resolvedMemEntry, originalMemEntry, expectedAddressStart=None, expectedAddressLength=None, expectedAddressEnd=None): # pylint: disable=too-many-arguments + # Rationale: This function needs to be able to check all kinds of address-related data, which is why these arguments are needed.
+ # If we expect an address change + if expectedAddressStart is not None or expectedAddressLength is not None or expectedAddressEnd is not None: + # Then we will check it with the expected values + self.assertEqual(resolvedMemEntry.addressStart, expectedAddressStart) + if expectedAddressLength is not None and expectedAddressEnd is None: + self.assertEqual(resolvedMemEntry.addressLength, expectedAddressLength) + elif expectedAddressEnd is not None and expectedAddressLength is None: + self.assertEqual(resolvedMemEntry.addressEnd(), expectedAddressEnd) + else: + raise AttributeError("Either expectedAddressLength or expectedAddressEnd shall be provided!") + else: + # Otherwise the addresses should be the same as in the original + self.assertEqual(resolvedMemEntry.addressStart, originalMemEntry.addressStart) + self.assertEqual(resolvedMemEntry.addressLength, originalMemEntry.addressLength) + # Also check whether the original values were stored correctly + self.assertEqual(resolvedMemEntry.addressStartOriginal, originalMemEntry.addressStart) + self.assertEqual(resolvedMemEntry.addressLengthOriginal, originalMemEntry.addressLength) + + def test__singleSection(self): + """ + S |---| + """ + # Creating the sections and objects for the test + ADDRESS_START = 0x0100 + ADDRESS_END = 0x01FF + listOfMemEntryData = [MemEntryData(ADDRESS_START, ADDRESS_END)] + memEntryHandler = emma_libs.memoryEntry.SectionEntry + originalSectionContainer, _ = createMemEntryObjects(listOfMemEntryData) + resolvedSectionContainer, _ = createMemEntryObjects(listOfMemEntryData) + # Running the resolveDuplicateContainmentOverlap list + emma_libs.memoryMap.resolveDuplicateContainmentOverlap(resolvedSectionContainer, emma_libs.memoryEntry.SectionEntry) + + # Check the number of elements: no elements shall be removed or added to the container + self.assertEqual(len(resolvedSectionContainer), len(originalSectionContainer)) + # Check whether the section stayed the same +
self.assertEqualSections(resolvedSectionContainer[0], originalSectionContainer[0]) + # Check whether the flags were set properly + self.checkFlags(resolvedSectionContainer[0], memEntryHandler) + # Check whether the addresses were set properly + self.checkAddressChanges(resolvedSectionContainer[0], originalSectionContainer[0]) + + def test__separateSections(self): + """ + S |---| + S |---| + """ + # Creating the sections and objects for the test + FIRST_SECTION_ADDRESS_START = 0x0100 + FIRST_SECTION_ADDRESS_END = 0x01FF + SECOND_SECTION_ADDRESS_START = 0x0200 + SECOND_SECTION_ADDRESS_END = 0x02FF + listOfMemEntryData = [MemEntryData(FIRST_SECTION_ADDRESS_START, FIRST_SECTION_ADDRESS_END, section="first"), + MemEntryData(SECOND_SECTION_ADDRESS_START, SECOND_SECTION_ADDRESS_END, section="second")] + memEntryHandler = emma_libs.memoryEntry.SectionEntry + originalSectionContainer, _ = createMemEntryObjects(listOfMemEntryData) + resolvedSectionContainer, _ = createMemEntryObjects(listOfMemEntryData) + # Running the resolveDuplicateContainmentOverlap list + emma_libs.memoryMap.resolveDuplicateContainmentOverlap(resolvedSectionContainer, emma_libs.memoryEntry.SectionEntry) + + # Check the number of elements: no elements shall be removed or added to the container + self.assertEqual(len(resolvedSectionContainer), len(originalSectionContainer)) + + # Check the non changed parts + for i, _ in enumerate(originalSectionContainer): + # Check whether the sections stayed the same + self.assertEqualSections(resolvedSectionContainer[i], originalSectionContainer[i]) + # Check whether the flags were set properly + self.checkFlags(resolvedSectionContainer[i], memEntryHandler) + # Check whether the addresses were set properly + self.checkAddressChanges(resolvedSectionContainer[i], originalSectionContainer[i]) + + def test__duplicateSections(self): + """ + S |---| + S |---| + """ + # Creating the sections and objects for the test + FIRST_SECTION_ADDRESS_START = 0x0100 + 
FIRST_SECTION_ADDRESS_END = 0x01FF + SECOND_SECTION_ADDRESS_START = 0x0100 + SECOND_SECTION_ADDRESS_END = 0x01FF + listOfMemEntryData = [MemEntryData(FIRST_SECTION_ADDRESS_START, FIRST_SECTION_ADDRESS_END, section="first"), + MemEntryData(SECOND_SECTION_ADDRESS_START, SECOND_SECTION_ADDRESS_END, section="second")] + memEntryHandler = emma_libs.memoryEntry.SectionEntry + originalSectionContainer, _ = createMemEntryObjects(listOfMemEntryData) + resolvedSectionContainer, _ = createMemEntryObjects(listOfMemEntryData) + # Running the resolveDuplicateContainmentOverlap list + emma_libs.memoryMap.resolveDuplicateContainmentOverlap(resolvedSectionContainer, emma_libs.memoryEntry.SectionEntry) + + # Check the number of elements: no elements shall be removed or added to the container + self.assertEqual(len(resolvedSectionContainer), len(originalSectionContainer)) + + # Check whether the addresses were set properly (it shall be the second section that loses its length) + self.checkAddressChanges(resolvedSectionContainer[0], originalSectionContainer[0]) + self.checkAddressChanges(resolvedSectionContainer[1], originalSectionContainer[1], originalSectionContainer[1].addressStart, 0x00) + # Check whether the flags were set properly + self.checkFlags(resolvedSectionContainer[0], memEntryHandler, expectedDuplicate=resolvedSectionContainer[1]) + self.checkFlags(resolvedSectionContainer[1], memEntryHandler, expectedDuplicate=resolvedSectionContainer[0]) + + def test__containmentSections(self): + """ + S |-----| + S |-| + """ + # Creating the sections and objects for the test + FIRST_SECTION_ADDRESS_START = 0x0000 + FIRST_SECTION_ADDRESS_END = 0x02FF + SECOND_SECTION_ADDRESS_START = 0x0100 + SECOND_SECTION_ADDRESS_END = 0x01FF + listOfMemEntryData = [MemEntryData(FIRST_SECTION_ADDRESS_START, FIRST_SECTION_ADDRESS_END, section="first"), + MemEntryData(SECOND_SECTION_ADDRESS_START, SECOND_SECTION_ADDRESS_END, section="second")] + memEntryHandler = emma_libs.memoryEntry.SectionEntry +
originalSectionContainer, _ = createMemEntryObjects(listOfMemEntryData) + resolvedSectionContainer, _ = createMemEntryObjects(listOfMemEntryData) + # Running the resolveDuplicateContainmentOverlap list + emma_libs.memoryMap.resolveDuplicateContainmentOverlap(resolvedSectionContainer, emma_libs.memoryEntry.SectionEntry) + + # Check the number of elements: no elements shall be removed or added to the container + self.assertEqual(len(resolvedSectionContainer), len(originalSectionContainer)) + + # Check whether the addresses were set properly (it shall be the second section that loses its length) + self.checkAddressChanges(resolvedSectionContainer[0], originalSectionContainer[0]) + self.checkAddressChanges(resolvedSectionContainer[1], originalSectionContainer[1], originalSectionContainer[1].addressStart, 0x00) + # Check whether the flags were set properly + self.checkFlags(resolvedSectionContainer[0], memEntryHandler, expectedContainingOthers=True) + self.checkFlags(resolvedSectionContainer[1], memEntryHandler, expectedContainedBy=resolvedSectionContainer[0]) + + def test__overlapSections(self): + """ + S |---| + S |---| + """ + # Creating the sections and objects for the test + FIRST_SECTION_ADDRESS_START = 0x0100 + FIRST_SECTION_ADDRESS_END = 0x01FF + SECOND_SECTION_ADDRESS_START = 0x0180 + SECOND_SECTION_ADDRESS_END = 0x027F + listOfMemEntryData = [MemEntryData(FIRST_SECTION_ADDRESS_START, FIRST_SECTION_ADDRESS_END, section="first"), + MemEntryData(SECOND_SECTION_ADDRESS_START, SECOND_SECTION_ADDRESS_END, section="second")] + memEntryHandler = emma_libs.memoryEntry.SectionEntry + originalSectionContainer, _ = createMemEntryObjects(listOfMemEntryData) + resolvedSectionContainer, _ = createMemEntryObjects(listOfMemEntryData) + # Running the resolveDuplicateContainmentOverlap list + emma_libs.memoryMap.resolveDuplicateContainmentOverlap(resolvedSectionContainer, emma_libs.memoryEntry.SectionEntry) + + # Check the number of elements: no elements shall be removed or
added to the container + self.assertEqual(len(resolvedSectionContainer), len(originalSectionContainer)) + + # Check whether the addresses were set properly (the second section shall lose its beginning) + self.checkAddressChanges(resolvedSectionContainer[0], originalSectionContainer[0], + expectedAddressStart=originalSectionContainer[0].addressStart, + expectedAddressEnd=originalSectionContainer[0].addressEnd()) + self.checkAddressChanges(resolvedSectionContainer[1], originalSectionContainer[1], + expectedAddressStart=(originalSectionContainer[0].addressEnd() + 1), + expectedAddressEnd=originalSectionContainer[1].addressEnd()) + # Check whether the flags were set properly + self.checkFlags(resolvedSectionContainer[0], memEntryHandler, expectedOverlappingOthers=True) + self.checkFlags(resolvedSectionContainer[1], memEntryHandler, expectedOverlappedBy=originalSectionContainer[0]) + + +class CalculateObjectsInSectionsTestCase(unittest.TestCase): + # pylint: disable=invalid-name, missing-docstring + # Rationale: Tests need to have the following method names in order to be discovered: test_(). It is not necessary to add a docstring for every unit test. + # pylint: disable=too-many-public-methods + # Rationale: The tests have many repetitive checks that were grouped into these functions. + + def checkSectionNonChangingData(self, sectionToCheck, sourceSection): + """ + Checks the data of a section that shall never be changed during calculateObjectsInSections(). + :param sectionToCheck: This is the section that was calculated. + :param sourceSection: This is the section from which the sectionToCheck was calculated.
+ :return: None + """ + self.assertEqual(sectionToCheck.memTypeTag, sourceSection.memTypeTag) + self.assertEqual(sectionToCheck.compilerSpecificData, sourceSection.compilerSpecificData) + self.assertEqual(sectionToCheck.sectionName, sourceSection.sectionName) + self.assertEqual(sectionToCheck.mapfile, sourceSection.mapfile) + self.assertEqual(sectionToCheck.configID, sourceSection.configID) + self.assertEqual(sectionToCheck.memType, sourceSection.memType) + self.assertEqual(sectionToCheck.category, sourceSection.category) + self.assertEqual(sectionToCheck.overlapFlag, sourceSection.overlapFlag) + self.assertEqual(sectionToCheck.containmentFlag, sourceSection.containmentFlag) + self.assertEqual(sectionToCheck.duplicateFlag, sourceSection.duplicateFlag) + self.assertEqual(sectionToCheck.containingOthersFlag, sourceSection.containingOthersFlag) + self.assertEqual(sectionToCheck.overlappingOthersFlag, sourceSection.overlappingOthersFlag) + self.assertEqual(sectionToCheck.addressStartOriginal, sourceSection.addressStartOriginal) + self.assertEqual(sectionToCheck.addressLengthOriginal, sourceSection.addressLengthOriginal) + self.assertEqual(sectionToCheck.addressLengthHexOriginal(), sourceSection.addressLengthHexOriginal()) + self.assertEqual(sectionToCheck.addressEndOriginal(), sourceSection.addressEndOriginal()) + + def checkSectionEntry(self, sectionEntry, sourceSection): + """ + Checks a section entry, whether it was created correctly. + :param sectionEntry: This is the section entry that was calculated. 
+ :param sourceSection: This is the section from which the section entry was calculated + :return: None + """ + self.checkSectionNonChangingData(sectionEntry, sourceSection) + self.assertEqual(sectionEntry.addressStart, sourceSection.addressStart) + self.assertEqual(sectionEntry.addressLength, 0) + self.assertIsNone(sectionEntry.addressEnd()) + self.assertEqual(sectionEntry.addressStartHex(), hex(sourceSection.addressStart)) + self.assertEqual(sectionEntry.addressLengthHex(), hex(0)) + self.assertEqual(sectionEntry.addressEndHex(), "") + self.assertEqual(sectionEntry.objectName, OBJECTS_IN_SECTIONS_SECTION_ENTRY) + + def checkSectionReserve(self, sectionReserve, sourceSection, expectedAddressStart, expectedAddressEnd): + """ + Checks a section reserve, whether it was created correctly. + :param sectionReserve: This is the section reserve that was calculated. + :param sourceSection: This is the section from which the section reserve was calculated. + :param expectedAddressStart: The AddressStart value that the section reserve must have. + :param expectedAddressEnd: The AddressEnd value that the section reserve must have. 
+ :return: None + """ + self.checkSectionNonChangingData(sectionReserve, sourceSection) + self.assertEqual(sectionReserve.addressStart, expectedAddressStart) + self.assertEqual(sectionReserve.addressLength, (expectedAddressEnd - expectedAddressStart + 1)) + self.assertEqual(sectionReserve.addressEnd(), expectedAddressEnd) + self.assertEqual(sectionReserve.addressStartHex(), hex(expectedAddressStart)) + self.assertEqual(sectionReserve.addressLengthHex(), hex((expectedAddressEnd - expectedAddressStart + 1))) + self.assertEqual(sectionReserve.addressEndHex(), hex(expectedAddressEnd)) + self.assertEqual(sectionReserve.objectName, OBJECTS_IN_SECTIONS_SECTION_RESERVE) + + def assertEqualSections(self, firstSection, secondSection): + self.assertTrue(emma_libs.memoryEntry.SectionEntry.isEqual(firstSection, secondSection)) + + def assertEqualObjects(self, firstObject, secondObject): + self.assertTrue(emma_libs.memoryEntry.ObjectEntry.isEqual(firstObject, secondObject)) + + def test_singleSection(self): + """ + S |---| + O + """ + # Creating the sections and objects for the test + ADDRESS_START = 0x0100 + ADDRESS_END = 0x01FF + sectionContainer, objectContainer = createMemEntryObjects([MemEntryData(ADDRESS_START, ADDRESS_END)], []) + # Calculating the objectsInSections list + objectsInSections = emma_libs.memoryMap.calculateObjectsInSections(sectionContainer, objectContainer) + + # Check the number of created elements: sectionEntry + sectionReserve + self.assertEqual(len(objectsInSections), 2) + # Check whether the sectionEntry was created properly + self.checkSectionEntry(objectsInSections[0], sectionContainer[0]) + # Check whether the sectionReserve was created properly + self.checkSectionReserve(objectsInSections[1], sectionContainer[0], ADDRESS_START, ADDRESS_END) + + def test_singleSectionWithZeroObjects(self): + """ + S |-------| + O | | | | | + """ + # Creating the sections and objects for the test + SECTION_ADDRESS_START = 0x0200 + SECTION_ADDRESS_END = 0x03FF + 
FIRST_OBJECT_ADDRESS_START = 0x00100 + SECOND_OBJECT_ADDRESS_START = 0x0200 + THIRD_OBJECT_ADDRESS_START = 0x0300 + FOURTH_OBJECT_ADDRESS_START = 0x03FF + FIFTH_OBJECT_ADDRESS_START = 0x0500 + sectionContainer, objectContainer = createMemEntryObjects([MemEntryData(SECTION_ADDRESS_START, SECTION_ADDRESS_END)], + [MemEntryData(FIRST_OBJECT_ADDRESS_START, None), + MemEntryData(SECOND_OBJECT_ADDRESS_START, None), + MemEntryData(THIRD_OBJECT_ADDRESS_START, None), + MemEntryData(FOURTH_OBJECT_ADDRESS_START, None), + MemEntryData(FIFTH_OBJECT_ADDRESS_START, None)]) + # Calculating the objectsInSections list + objectsInSections = emma_libs.memoryMap.calculateObjectsInSections(sectionContainer, objectContainer) + + # Check the number of created elements: firstObject + + # sectionEntry + + # secondObject + + # thirdObject + + # fourthObject + + # sectionReserve + + # fifthObject + self.assertEqual(len(objectsInSections), 7) + # Check whether the firstObject was created properly + self.assertEqualObjects(objectsInSections[0], objectContainer[0]) + # Check whether the sectionEntry was created properly + self.checkSectionEntry(objectsInSections[1], sectionContainer[0]) + # Check whether the sectionReserve was created properly + self.checkSectionReserve(objectsInSections[2], sectionContainer[0], SECTION_ADDRESS_START, SECTION_ADDRESS_END) + # Check whether the secondObject was created properly + self.assertEqualObjects(objectsInSections[3], objectContainer[1]) + # Check whether the thirdObject was created properly + self.assertEqualObjects(objectsInSections[4], objectContainer[2]) + # Check whether the fourthObject was created properly + self.assertEqualObjects(objectsInSections[5], objectContainer[3]) + # Check whether the fifthObject was created properly + self.assertEqualObjects(objectsInSections[6], objectContainer[4]) + + def test_multipleSectionsAndObjectsWithZeroLengths(self): + """ + S | | | | | + O | | |---| + """ + # Creating the sections and objects for the test + 
FIRST_SECTION_ADDRESS_START = 0x0150 + SECOND_SECTION_ADDRESS_START = 0x0200 + THIRD_SECTION_ADDRESS_START = 0x0300 + FOURTH_SECTION_ADDRESS_START = 0x0350 + FIFTH_SECTION_ADDRESS_START = 0x03FF + FIRST_OBJECT_ADDRESS_START = 0x0100 + SECOND_OBJECT_ADDRESS_START = 0x0200 + THIRD_OBJECT_ADDRESS_START = 0x0300 + THIRD_OBJECT_ADDRESS_END = 0x03FF + sectionContainer, objectContainer = createMemEntryObjects([MemEntryData(FIRST_SECTION_ADDRESS_START, None), + MemEntryData(SECOND_SECTION_ADDRESS_START, None), + MemEntryData(THIRD_SECTION_ADDRESS_START, None), + MemEntryData(FOURTH_SECTION_ADDRESS_START, None), + MemEntryData(FIFTH_SECTION_ADDRESS_START, None)], + [MemEntryData(FIRST_OBJECT_ADDRESS_START, None), + MemEntryData(SECOND_OBJECT_ADDRESS_START, None), + MemEntryData(THIRD_OBJECT_ADDRESS_START, THIRD_OBJECT_ADDRESS_END)]) + # Calculating the objectsInSections list + objectsInSections = emma_libs.memoryMap.calculateObjectsInSections(sectionContainer, objectContainer) + + # Check the number of created elements: firstObject + + # firstSectionEntry + + # secondSectionEntry + + # secondObject + + # thirdSectionEntry + + # thirdObject + + # fourthSectionEntry + + # fifthSectionEntry + self.assertEqual(len(objectsInSections), 8) + # Check whether the firstObject was created properly + self.assertEqualObjects(objectsInSections[0], objectContainer[0]) + # Check whether the firstSectionEntry was created properly + self.checkSectionEntry(objectsInSections[1], sectionContainer[0]) + # Check whether the secondSectionEntry was created properly + self.checkSectionEntry(objectsInSections[2], sectionContainer[1]) + # Check whether the secondObject was created properly + self.assertEqualObjects(objectsInSections[3], objectContainer[1]) + # Check whether the thirdSectionEntry was created properly + self.checkSectionEntry(objectsInSections[4], sectionContainer[2]) + # Check whether the thirdObject was created properly + self.assertEqualObjects(objectsInSections[5], 
objectContainer[2]) + # Check whether the fourthSectionEntry was created properly + self.checkSectionEntry(objectsInSections[6], sectionContainer[3]) + # Check whether the fifthSectionEntry was created properly + self.checkSectionEntry(objectsInSections[7], sectionContainer[4]) + + def test_multipleSectionsAndObjectsWithContainmentFlag(self): + """ + S |--| |--| + O |---| + """ + # Creating the sections and objects for the test + FIRST_SECTION_ADDRESS_START = 0x0100 + FIRST_SECTION_ADDRESS_END = 0x01FF + SECOND_SECTION_ADDRESS_START = 0x0300 + SECOND_SECTION_ADDRESS_END = 0x03FF + OBJECT_ADDRESS_START = 0x0150 + OBJECT_ADDRESS_END = 0x034F + sectionContainer, objectContainer = createMemEntryObjects([MemEntryData(FIRST_SECTION_ADDRESS_START, FIRST_SECTION_ADDRESS_END), + MemEntryData(SECOND_SECTION_ADDRESS_START, SECOND_SECTION_ADDRESS_END)], + [MemEntryData(OBJECT_ADDRESS_START, OBJECT_ADDRESS_END)]) + # Editing the sections: switching the containmentFlags on + sectionContainer[0].containmentFlag = True + sectionContainer[1].containmentFlag = True + # Calculating the objectsInSections list + objectsInSections = emma_libs.memoryMap.calculateObjectsInSections(sectionContainer, objectContainer) + + # Check the number of created elements: firstSectionEntry + + # object + + # secondSectionEntry + self.assertEqual(len(objectsInSections), 3) + # Check whether the firstSectionEntry was created properly + self.checkSectionEntry(objectsInSections[0], sectionContainer[0]) + # Check whether the object was created properly + self.assertEqualObjects(objectsInSections[1], objectContainer[0]) + # Check whether the secondSectionEntry was created properly + self.checkSectionEntry(objectsInSections[2], sectionContainer[1]) + + def test_sectionFullWithSingleObject(self): + """ + S |----| + O |----| + """ + # Creating the sections and objects for the test + SECTION_ADDRESS_START = 0x0200 + SECTION_ADDRESS_END = 0x02FF + OBJECT_ADDRESS_START = 0x0200 + OBJECT_ADDRESS_END = 0x02FF + 
sectionContainer, objectContainer = createMemEntryObjects([MemEntryData(SECTION_ADDRESS_START, SECTION_ADDRESS_END)], [MemEntryData(OBJECT_ADDRESS_START, OBJECT_ADDRESS_END)]) + # Calculating the objectsInSections list + objectsInSections = emma_libs.memoryMap.calculateObjectsInSections(sectionContainer, objectContainer) + + # Check the number of created elements: sectionEntry + object + self.assertEqual(len(objectsInSections), 2) + # Check whether the sectionEntry was created properly + self.checkSectionEntry(objectsInSections[0], sectionContainer[0]) + # Check whether the object was created properly + self.assertEqualObjects(objectsInSections[1], objectContainer[0]) + + def test_sectionFullWithTwoObjects(self): + """ + S |-----| + O |--|--| + """ + # Creating the sections and objects for the test + SECTION_ADDRESS_START = 0x0400 + SECTION_ADDRESS_END = 0x05FF + FIRST_OBJECT_ADDRESS_START = 0x0400 + FIRST_OBJECT_ADDRESS_END = 0x04FF + SECOND_OBJECT_ADDRESS_START = 0x0500 + SECOND_OBJECT_ADDRESS_END = 0x05FF + sectionContainer, objectContainer = createMemEntryObjects([MemEntryData(SECTION_ADDRESS_START, SECTION_ADDRESS_END)], + [MemEntryData(FIRST_OBJECT_ADDRESS_START, FIRST_OBJECT_ADDRESS_END), MemEntryData(SECOND_OBJECT_ADDRESS_START, SECOND_OBJECT_ADDRESS_END)]) + # Calculating the objectsInSections list + objectsInSections = emma_libs.memoryMap.calculateObjectsInSections(sectionContainer, objectContainer) + + # Check the number of created elements: sectionEntry + firstObject + secondObject + self.assertEqual(len(objectsInSections), 3) + # Check whether the sectionEntry was created properly + self.checkSectionEntry(objectsInSections[0], sectionContainer[0]) + # Check whether the firstObject was created properly + self.assertEqualObjects(objectsInSections[1], objectContainer[0]) + # Check whether the secondObject was created properly + self.assertEqualObjects(objectsInSections[2], objectContainer[1]) + + def 
test_sectionFullWithOverlappingSingleObjectAtStart(self): + """ + S |-----| + O |--------| + """ + # Creating the sections and objects for the test + SECTION_ADDRESS_START = 0x0800 + SECTION_ADDRESS_END = 0x09FF + OBJECT_ADDRESS_START = 0x0700 + OBJECT_ADDRESS_END = 0x09FF + sectionContainer, objectContainer = createMemEntryObjects([MemEntryData(SECTION_ADDRESS_START, SECTION_ADDRESS_END)], [MemEntryData(OBJECT_ADDRESS_START, OBJECT_ADDRESS_END)]) + # Calculating the objectsInSections list + objectsInSections = emma_libs.memoryMap.calculateObjectsInSections(sectionContainer, objectContainer) + + # Check the number of created elements: object + sectionEntry + self.assertEqual(len(objectsInSections), 2) + # Check whether the object was created properly + self.assertEqualObjects(objectsInSections[0], objectContainer[0]) + # Check whether the sectionEntry was created properly + self.checkSectionEntry(objectsInSections[1], sectionContainer[0]) + + def test_sectionFullWithOverlappingSingleObjectAtEnd(self): + """ + S |-----| + O |--------| + """ + # Creating the sections and objects for the test + SECTION_ADDRESS_START = 0x0600 + SECTION_ADDRESS_END = 0x06FF + OBJECT_ADDRESS_START = 0x0600 + OBJECT_ADDRESS_END = 0x07FF + sectionContainer, objectContainer = createMemEntryObjects([MemEntryData(SECTION_ADDRESS_START, SECTION_ADDRESS_END)], [MemEntryData(OBJECT_ADDRESS_START, OBJECT_ADDRESS_END)]) + # Calculating the objectsInSections list + objectsInSections = emma_libs.memoryMap.calculateObjectsInSections(sectionContainer, objectContainer) + + # Check the number of created elements: sectionEntry + object + self.assertEqual(len(objectsInSections), 2) + # Check whether the sectionEntry was created properly + self.checkSectionEntry(objectsInSections[0], sectionContainer[0]) + # Check whether the object was created properly + self.assertEqualObjects(objectsInSections[1], objectContainer[0]) + + def test_sectionFullWithOverlappingSingleObjectAtStartAndEnd(self): + """ + S 
|-----| + O |-----------| + """ + # Creating the sections and objects for the test + SECTION_ADDRESS_START = 0x0200 + SECTION_ADDRESS_END = 0x02FF + OBJECT_ADDRESS_START = 0x0100 + OBJECT_ADDRESS_END = 0x03FF + sectionContainer, objectContainer = createMemEntryObjects([MemEntryData(SECTION_ADDRESS_START, SECTION_ADDRESS_END)], [MemEntryData(OBJECT_ADDRESS_START, OBJECT_ADDRESS_END)]) + # Calculating the objectsInSections list + objectsInSections = emma_libs.memoryMap.calculateObjectsInSections(sectionContainer, objectContainer) + + # Check the number of created elements: object + sectionEntry + self.assertEqual(len(objectsInSections), 2) + # Check whether the object was created properly + self.assertEqualObjects(objectsInSections[0], objectContainer[0]) + # Check whether the sectionEntry was created properly + self.checkSectionEntry(objectsInSections[1], sectionContainer[0]) + + def test_sectionNotFullWithSingleObjectAtStart(self): + """ + S |-----| + O |--| + """ + # Creating the sections and objects for the test + SECTION_ADDRESS_START = 0x0400 + SECTION_ADDRESS_END = 0x04FF + OBJECT_ADDRESS_START = 0x0400 + OBJECT_ADDRESS_END = 0x0410 + sectionContainer, objectContainer = createMemEntryObjects([MemEntryData(SECTION_ADDRESS_START, SECTION_ADDRESS_END)], [MemEntryData(OBJECT_ADDRESS_START, OBJECT_ADDRESS_END)]) + # Calculating the objectsInSections list + objectsInSections = emma_libs.memoryMap.calculateObjectsInSections(sectionContainer, objectContainer) + + # Check the number of created elements: sectionEntry + object + sectionReserve + self.assertEqual(len(objectsInSections), 3) + # Check whether the sectionEntry was created properly + self.checkSectionEntry(objectsInSections[0], sectionContainer[0]) + # Check whether the object was created properly + self.assertEqualObjects(objectsInSections[1], objectContainer[0]) + # Check whether the sectionReserve was created properly + self.checkSectionReserve(objectsInSections[2], sectionContainer[0], (OBJECT_ADDRESS_END 
+ 1), SECTION_ADDRESS_END) + + def test_sectionNotFullWithSingleObjectAtEnd(self): + """ + S |-----| + O |--| + """ + # Creating the sections and objects for the test + SECTION_ADDRESS_START = 0x0500 + SECTION_ADDRESS_END = 0x05FF + OBJECT_ADDRESS_START = 0x0550 + OBJECT_ADDRESS_END = 0x05FF + sectionContainer, objectContainer = createMemEntryObjects([MemEntryData(SECTION_ADDRESS_START, SECTION_ADDRESS_END)], [MemEntryData(OBJECT_ADDRESS_START, OBJECT_ADDRESS_END)]) + # Calculating the objectsInSections list + objectsInSections = emma_libs.memoryMap.calculateObjectsInSections(sectionContainer, objectContainer) + + # Check the number of created elements: sectionEntry + sectionReserve + object + self.assertEqual(len(objectsInSections), 3) + # Check whether the sectionEntry was created properly + self.checkSectionEntry(objectsInSections[0], sectionContainer[0]) + # Check whether the sectionReserve was created properly + self.checkSectionReserve(objectsInSections[1], sectionContainer[0], SECTION_ADDRESS_START, (OBJECT_ADDRESS_START - 1)) + # Check whether the object was created properly + self.assertEqualObjects(objectsInSections[2], objectContainer[0]) + + def test_sectionNotFullWithSingleObjectAtMiddle(self): + """ + S |------| + O |--| + """ + # Creating the sections and objects for the test + SECTION_ADDRESS_START = 0x0500 + SECTION_ADDRESS_END = 0x05FF + OBJECT_ADDRESS_START = 0x0550 + OBJECT_ADDRESS_END = 0x05A0 + sectionContainer, objectContainer = createMemEntryObjects([MemEntryData(SECTION_ADDRESS_START, SECTION_ADDRESS_END)], [MemEntryData(OBJECT_ADDRESS_START, OBJECT_ADDRESS_END)]) + # Calculating the objectsInSections list + objectsInSections = emma_libs.memoryMap.calculateObjectsInSections(sectionContainer, objectContainer) + + # Check the number of created elements: sectionEntry + sectionReserve + object + sectionReserve + self.assertEqual(len(objectsInSections), 4) + # Check whether the sectionEntry was created properly + 
self.checkSectionEntry(objectsInSections[0], sectionContainer[0]) + # Check whether the sectionReserve was created properly + self.checkSectionReserve(objectsInSections[1], sectionContainer[0], SECTION_ADDRESS_START, (OBJECT_ADDRESS_START - 1)) + # Check whether the object was created properly + self.assertEqualObjects(objectsInSections[2], objectContainer[0]) + # Check whether the sectionReserve was created properly + self.checkSectionReserve(objectsInSections[3], sectionContainer[0], (OBJECT_ADDRESS_END + 1), SECTION_ADDRESS_END) + + def test_sectionNotFullWithSingleObjectBeforeStart(self): + """ + S |--| + O |--| + """ + # Creating the sections and objects for the test + SECTION_ADDRESS_START = 0x0500 + SECTION_ADDRESS_END = 0x05FF + OBJECT_ADDRESS_START = 0x0300 + OBJECT_ADDRESS_END = 0x03FF + sectionContainer, objectContainer = createMemEntryObjects([MemEntryData(SECTION_ADDRESS_START, SECTION_ADDRESS_END)], [MemEntryData(OBJECT_ADDRESS_START, OBJECT_ADDRESS_END)]) + # Calculating the objectsInSections list + objectsInSections = emma_libs.memoryMap.calculateObjectsInSections(sectionContainer, objectContainer) + + # Check the number of created elements: object + sectionEntry + sectionReserve + self.assertEqual(len(objectsInSections), 3) + # Check whether the object was created properly + self.assertEqualObjects(objectsInSections[0], objectContainer[0]) + # Check whether the sectionEntry was created properly + self.checkSectionEntry(objectsInSections[1], sectionContainer[0]) + # Check whether the sectionReserve was created properly + self.checkSectionReserve(objectsInSections[2], sectionContainer[0], SECTION_ADDRESS_START, SECTION_ADDRESS_END) + + def test_sectionNotFullWithSingleObjectAfterEnd(self): + """ + S |--| + O |--| + """ + # Creating the sections and objects for the test + SECTION_ADDRESS_START = 0x0300 + SECTION_ADDRESS_END = 0x03FF + OBJECT_ADDRESS_START = 0x0500 + OBJECT_ADDRESS_END = 0x05FF + sectionContainer, objectContainer = 
createMemEntryObjects([MemEntryData(SECTION_ADDRESS_START, SECTION_ADDRESS_END)], [MemEntryData(OBJECT_ADDRESS_START, OBJECT_ADDRESS_END)]) + # Calculating the objectsInSections list + objectsInSections = emma_libs.memoryMap.calculateObjectsInSections(sectionContainer, objectContainer) + + # Check the number of created elements: sectionEntry + sectionReserve + object + self.assertEqual(len(objectsInSections), 3) + # Check whether the sectionEntry was created properly + self.checkSectionEntry(objectsInSections[0], sectionContainer[0]) + # Check whether the sectionReserve was created properly + self.checkSectionReserve(objectsInSections[1], sectionContainer[0], SECTION_ADDRESS_START, SECTION_ADDRESS_END) + # Check whether the object was created properly + self.assertEqualObjects(objectsInSections[2], objectContainer[0]) + + def test_sectionNotFullWithOverlappingSingleObjectBeforeStart(self): + """ + S |--------| + O |-------| + """ + # Creating the sections and objects for the test + SECTION_ADDRESS_START = 0x0500 + SECTION_ADDRESS_END = 0x06FF + OBJECT_ADDRESS_START = 0x0400 + OBJECT_ADDRESS_END = 0x05FF + sectionContainer, objectContainer = createMemEntryObjects([MemEntryData(SECTION_ADDRESS_START, SECTION_ADDRESS_END)], [MemEntryData(OBJECT_ADDRESS_START, OBJECT_ADDRESS_END)]) + # Calculating the objectsInSections list + objectsInSections = emma_libs.memoryMap.calculateObjectsInSections(sectionContainer, objectContainer) + + # Check the number of created elements: object + sectionEntry + sectionReserve + self.assertEqual(len(objectsInSections), 3) + # Check whether the object was created properly + self.assertEqualObjects(objectsInSections[0], objectContainer[0]) + # Check whether the sectionEntry was created properly + self.checkSectionEntry(objectsInSections[1], sectionContainer[0]) + # Check whether the sectionReserve was created properly + self.checkSectionReserve(objectsInSections[2], sectionContainer[0], (OBJECT_ADDRESS_END + 1), SECTION_ADDRESS_END) + + def 
test_sectionNotFullWithOverlappingSingleObjectAfterEnd(self): + """ + S |--------| + O |-------| + """ + # Creating the sections and objects for the test + SECTION_ADDRESS_START = 0x0200 + SECTION_ADDRESS_END = 0x03FF + OBJECT_ADDRESS_START = 0x0300 + OBJECT_ADDRESS_END = 0x04FF + sectionContainer, objectContainer = createMemEntryObjects([MemEntryData(SECTION_ADDRESS_START, SECTION_ADDRESS_END)], [MemEntryData(OBJECT_ADDRESS_START, OBJECT_ADDRESS_END)]) + # Calculating the objectsInSections list + objectsInSections = emma_libs.memoryMap.calculateObjectsInSections(sectionContainer, objectContainer) + + # Check the number of created elements: sectionEntry + sectionReserve + object + self.assertEqual(len(objectsInSections), 3) + # Check whether the sectionEntry was created properly + self.checkSectionEntry(objectsInSections[0], sectionContainer[0]) + # Check whether the sectionReserve was created properly + self.checkSectionReserve(objectsInSections[1], sectionContainer[0], SECTION_ADDRESS_START, (OBJECT_ADDRESS_START - 1)) + # Check whether the object was created properly + self.assertEqualObjects(objectsInSections[2], objectContainer[0]) + + def test_sectionNotFullWithMultipleObjects(self): + # pylint: disable=too-many-locals + # Rationale: These constants are needed to set up the used sections and objects. 
+ + """ + S |-------------| + O |-| |---| |--| |--| |--| + """ + # Creating the sections and objects for the test + SECTION_ADDRESS_START = 0x0100 + SECTION_ADDRESS_END = 0x01FF + FIRST_OBJECT_ADDRESS_START = 0x0080 + FIRST_OBJECT_ADDRESS_END = 0x0085 + SECOND_OBJECT_ADDRESS_START = 0x0090 + SECOND_OBJECT_ADDRESS_END = 0x00110 + THIRD_OBJECT_ADDRESS_START = 0x0150 + THIRD_OBJECT_ADDRESS_END = 0x0180 + FOURTH_OBJECT_ADDRESS_START = 0x01A0 + FOURTH_OBJECT_ADDRESS_END = 0x01D0 + FIFTH_OBJECT_ADDRESS_START = 0x0210 + FIFTH_OBJECT_ADDRESS_END = 0x0220 + sectionContainer, objectContainer = createMemEntryObjects([MemEntryData(SECTION_ADDRESS_START, SECTION_ADDRESS_END)], + [MemEntryData(FIRST_OBJECT_ADDRESS_START, FIRST_OBJECT_ADDRESS_END), + MemEntryData(SECOND_OBJECT_ADDRESS_START, SECOND_OBJECT_ADDRESS_END), + MemEntryData(THIRD_OBJECT_ADDRESS_START, THIRD_OBJECT_ADDRESS_END), + MemEntryData(FOURTH_OBJECT_ADDRESS_START, FOURTH_OBJECT_ADDRESS_END), + MemEntryData(FIFTH_OBJECT_ADDRESS_START, FIFTH_OBJECT_ADDRESS_END)]) + # Calculating the objectsInSections list + objectsInSections = emma_libs.memoryMap.calculateObjectsInSections(sectionContainer, objectContainer) + + # Check the number of created elements: sectionEntry + + # firstObject + + # secondObject + firstSectionReserve + + # thirdObject + secondSectionReserve + + # fourthObject + thirdSectionReserve + + # fifthObject + self.assertEqual(len(objectsInSections), 9) + # Check whether the firstObject was created properly + self.assertEqualObjects(objectsInSections[0], objectContainer[0]) + # Check whether the secondObject was created properly + self.assertEqualObjects(objectsInSections[1], objectContainer[1]) + # Check whether the sectionEntry was created properly + self.checkSectionEntry(objectsInSections[2], sectionContainer[0]) + # Check whether the firstSectionReserve was created properly + self.checkSectionReserve(objectsInSections[3], sectionContainer[0], (SECOND_OBJECT_ADDRESS_END + 1), 
(THIRD_OBJECT_ADDRESS_START - 1)) + # Check whether the thirdObject was created properly + self.assertEqualObjects(objectsInSections[4], objectContainer[2]) + # Check whether the secondSectionReserve was created properly + self.checkSectionReserve(objectsInSections[5], sectionContainer[0], (THIRD_OBJECT_ADDRESS_END + 1), (FOURTH_OBJECT_ADDRESS_START - 1)) + # Check whether the fourthObject was created properly + self.assertEqualObjects(objectsInSections[6], objectContainer[3]) + # Check whether the thirdSectionReserve was created properly + self.checkSectionReserve(objectsInSections[7], sectionContainer[0], (FOURTH_OBJECT_ADDRESS_END + 1), SECTION_ADDRESS_END) + # Check whether the fifthObject was created properly + self.assertEqualObjects(objectsInSections[8], objectContainer[4]) + + def test_multipleSectionsWithMultipleObjects(self): + # pylint: disable=too-many-locals + # Rationale: These constants are needed to set up the used sections and objects. + + """ + S |--|---| |------| |--| + O |-| |-| |---| |--| |--| + """ + # Creating the sections and objects for the test + FIRST_SECTION_ADDRESS_START = 0x0100 + FIRST_SECTION_ADDRESS_END = 0x01FF + SECOND_SECTION_ADDRESS_START = 0x0200 + SECOND_SECTION_ADDRESS_END = 0x02FF + THIRD_SECTION_ADDRESS_START = 0x0400 + THIRD_SECTION_ADDRESS_END = 0x05FF + FOURTH_SECTION_ADDRESS_START = 0x0700 + FOURTH_SECTION_ADDRESS_END = 0x07FF + FIRST_OBJECT_ADDRESS_START = 0x0080 + FIRST_OBJECT_ADDRESS_END = 0x008F + SECOND_OBJECT_ADDRESS_START = 0x0100 + SECOND_OBJECT_ADDRESS_END = 0x001AF + THIRD_OBJECT_ADDRESS_START = 0x0250 + THIRD_OBJECT_ADDRESS_END = 0x03FF + FOURTH_OBJECT_ADDRESS_START = 0x0450 + FOURTH_OBJECT_ADDRESS_END = 0x04FF + FIFTH_OBJECT_ADDRESS_START = 0x0600 + FIFTH_OBJECT_ADDRESS_END = 0x063F + sectionContainer, objectContainer = createMemEntryObjects([MemEntryData(FIRST_SECTION_ADDRESS_START, FIRST_SECTION_ADDRESS_END), + MemEntryData(SECOND_SECTION_ADDRESS_START, SECOND_SECTION_ADDRESS_END), + 
MemEntryData(THIRD_SECTION_ADDRESS_START, THIRD_SECTION_ADDRESS_END), + MemEntryData(FOURTH_SECTION_ADDRESS_START, FOURTH_SECTION_ADDRESS_END)], + [MemEntryData(FIRST_OBJECT_ADDRESS_START, FIRST_OBJECT_ADDRESS_END), + MemEntryData(SECOND_OBJECT_ADDRESS_START, SECOND_OBJECT_ADDRESS_END), + MemEntryData(THIRD_OBJECT_ADDRESS_START, THIRD_OBJECT_ADDRESS_END), + MemEntryData(FOURTH_OBJECT_ADDRESS_START, FOURTH_OBJECT_ADDRESS_END), + MemEntryData(FIFTH_OBJECT_ADDRESS_START, FIFTH_OBJECT_ADDRESS_END)]) + # Calculating the objectsInSections list + objectsInSections = emma_libs.memoryMap.calculateObjectsInSections(sectionContainer, objectContainer) + + # Check the number of created elements: firstObject + + # firstSectionEntry + + # secondObject + firstSectionReserve + + # secondSectionEntry + secondSectionReserve + + # thirdObject + + # thirdSectionEntry + thirdSectionReserve + + # fourthObject + thirdSectionReserve + + # fifthObject + + # fourthSectionEntry + fourthSectionReserve + self.assertEqual(len(objectsInSections), 14) + # Check whether the firstObject was created properly + self.assertEqualObjects(objectsInSections[0], objectContainer[0]) + # Check whether the firstSectionEntry was created properly + self.checkSectionEntry(objectsInSections[1], sectionContainer[0]) + # Check whether the secondObject was created properly + self.assertEqualObjects(objectsInSections[2], objectContainer[1]) + # Check whether the firstSectionReserve was created properly + self.checkSectionReserve(objectsInSections[3], sectionContainer[0], (SECOND_OBJECT_ADDRESS_END + 1), FIRST_SECTION_ADDRESS_END) + # Check whether the secondSectionEntry was created properly + self.checkSectionEntry(objectsInSections[4], sectionContainer[1]) + # Check whether the secondSectionReserve was created properly + self.checkSectionReserve(objectsInSections[5], sectionContainer[1], SECOND_SECTION_ADDRESS_START, (THIRD_OBJECT_ADDRESS_START - 1)) + # Check whether the thirdObject was created properly + 
self.assertEqualObjects(objectsInSections[6], objectContainer[2]) + # Check whether the thirdSectionEntry was created properly + self.checkSectionEntry(objectsInSections[7], sectionContainer[2]) + # Check whether the thirdSectionReserve was created properly + self.checkSectionReserve(objectsInSections[8], sectionContainer[2], THIRD_SECTION_ADDRESS_START, (FOURTH_OBJECT_ADDRESS_START - 1)) + # Check whether the fourthObject was created properly + self.assertEqualObjects(objectsInSections[9], objectContainer[3]) + # Check whether the thirdSectionReserve was created properly + self.checkSectionReserve(objectsInSections[10], sectionContainer[2], (FOURTH_OBJECT_ADDRESS_END + 1), THIRD_SECTION_ADDRESS_END) + # Check whether the fifthObject was created properly + self.assertEqualObjects(objectsInSections[11], objectContainer[4]) + # Check whether the fourthSectionEntry was created properly + self.checkSectionEntry(objectsInSections[12], sectionContainer[3]) + # Check whether the fourthSectionReserve was created properly + self.checkSectionReserve(objectsInSections[13], sectionContainer[3], FOURTH_SECTION_ADDRESS_START, FOURTH_SECTION_ADDRESS_END) + + +if __name__ == "__main__": + unittest.main()
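
The tests in this file encode one recurring expectation: for each section, the result list contains a `sectionEntry`, the objects overlapping the section, and `sectionReserve` entries for every address range inside the section that no object covers. As a minimal sketch of that gap computation (a hypothetical standalone helper, not part of `emma_libs.memoryMap`, assuming object intervals are already sorted, non-overlapping, and clipped to the section boundaries):

```python
def compute_reserves(section_start, section_end, objects):
    """Return (start, end) gaps of the section not covered by any object.

    `objects` is a list of (start, end) tuples, assumed sorted,
    non-overlapping, and clipped to the section boundaries.
    """
    reserves = []
    cursor = section_start
    for obj_start, obj_end in objects:
        if obj_start > cursor:
            # Uncovered range before this object becomes a reserve
            reserves.append((cursor, obj_start - 1))
        cursor = max(cursor, obj_end + 1)
    if cursor <= section_end:
        # Remaining tail of the section is also a reserve
        reserves.append((cursor, section_end))
    return reserves
```

With the values of `test_sectionNotFullWithSingleObjectAtMiddle` (section `0x0500..0x05FF`, object `0x0550..0x05A0`), this yields the two reserves `(0x0500, 0x054F)` and `(0x05A1, 0x05FF)` that the test checks via `checkSectionReserve`, and an object covering the whole section yields no reserves at all, matching `test_sectionFullWithSingleObject`.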