What to contribute?

The package contains several kinds of files:

  • Source code

  • Documentation

  • Tests

  • Notebooks

Direct contributions may include any combination of these. They must however be developed hand in hand: new features must include all of the above.

Package organization

To ease development and usage simultaneously, we split the user API from the implementation. The latter should be exclusively contained in the mcda.internal subpackage, while any other module/subpackage is assumed to be part of the user API (and must therefore be modified with care).

This split hides internal functionalities that are irrelevant to the user and keeps the API clean (gathering functionalities from multiple related modules, moving features up the package hierarchy, etc.).

Features are made available to the user by importing them in a user API module and applying the mcda.internal.core.utils.set_module() decorator, so that the compiled documentation places them in that same user module. They also need to be added to the user module's __all__ list so the documentation shows them (this has no effect on actual usage, it is just a quirk of sphinx autodoc).
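
A minimal sketch of this pattern is given below; the module and class names are hypothetical, and the exact signature and placement of set_module() should be checked against existing user API modules and mcda.internal.core.utils:

# mcda/internal/core/feature.py -- hypothetical internal module
from mcda.internal.core.utils import set_module


@set_module("mcda.feature")  # assumed to take the target user module name
class SomeFeature:
    """Some feature implemented internally."""


# mcda/feature.py -- hypothetical user API module re-exporting the feature
from mcda.internal.core.feature import SomeFeature

__all__ = ["SomeFeature"]  # needed so sphinx autodoc lists the re-exported name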

User API

The user API is split in multiple subpackages and modules:

  • mcda: core functionalities and data structures

  • mcda.mavt: MCDA Multi-Attribute Value Theory algorithms and functionalities

  • mcda.outranking: MCDA Outranking algorithms and functionalities

  • mcda.plot: plotting utilities

Only the most commonly used features, such as mcda.PerformanceTable, are placed directly at the top level of mcda.
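
In practice, user code should therefore only import from these modules, never from mcda.internal; for example (illustrative snippet):

from mcda import PerformanceTable  # most used features live at the top level
import mcda.mavt                   # MAVT algorithms and functionalities
import mcda.outranking             # outranking algorithms and functionalities
import mcda.plot                   # plotting utilities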

Internal API

The internal API is split into multiple subpackages:

Notable differences

All internal features related to MCDA functions are gathered in the mcda.functions module. This applies to some features of the following:

Aggregators are implemented in mcda.internal.core.aggregators but exposed in the mcda.mavt.aggregators user API: they are used internally by multiple modules, but logically (in MCDA terms) belong to the MAVT features.

All types usable as type hints are gathered in the user module mcda.types (see the sketch after this list):

  • mcda.internal.core.aliases (module)

  • mcda.internal.core.relations.Relation (type)

  • mcda.internal.core.scales.Scale (type)

  • mcda.internal.core.scales.OrdinalScale (type)
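
For instance, downstream code should import these from mcda.types rather than from the internal modules. The sketch below uses a hypothetical function and assumes the types keep their internal names once re-exported:

from mcda import types


def describe(relation: types.Relation, scale: types.OrdinalScale) -> None:
    """Hypothetical helper annotated with the user-facing types."""
    print(relation, scale)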

Coding conventions

This package strictly follows the PEP8 recommendations. For example, we limit line length to 79 characters.

We document the code as we write it, using doc-strings (see the section on code documentation).

We write statically typed python code using type hints (see the section on type hinting).

We use a set of linters to automatically check that these conventions are respected (see the section on linters).

Type hinting

We rely on type hinting to approximate a statically typed package.

It lets us add information about variable, parameter and function return types directly inline.

At the very least, this documents what those types are supposed to be, since type hints have no effect at runtime.

Furthermore, some utilities such as mypy can perform static verification of your code by parsing and checking the coherence of these type hints.

N.B: mypy only verifies functions that carry type hints, so you can use type hinting on the parts of your code you want to check more thoroughly.

N.B: these type hints are parsed by sphinx to complete the autogenerated API documentation.
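
For illustration, a fully annotated (hypothetical) function looks like this; mypy then checks every call site against these hints:

from typing import List


def weighted_sum(values: List[float], weights: List[float]) -> float:
    """Return the weighted sum of the values."""
    return sum(v * w for v, w in zip(values, weights))


threshold: float = 0.5  # variables can be annotated too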

Linters

We use the following list of tools to automatically check that best coding practices are followed:

  • flake8: check PEP8 rules compliance

  • isort: check and sort import ordering

  • black: code formatter

  • mypy: static type checker

See the Makefile for how these linters are configured for our project.

If you want some details about them, see this section.
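
The exact options live in the Makefile, but a manual run is roughly equivalent to the following (paths are illustrative):

$ flake8 src test
$ isort --check-only src test
$ black --check src test
$ mypy src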

A note on floating point precision

The policy of this package concerning floating point accuracy is the following (a short sketch follows the list):

  • Numeric computations in the source code return raw results (no rounding, except when it is part of an algorithm)

  • When checking float equalities/inequalities, the function math.isclose from the math module should be used

    • with its default parameters if possible

    • otherwise the function documentation should mention those parameters

    • parameters can alternatively be exposed as function parameters so the user has control over them

  • Numeric results in unit tests should be checked to a given number of digits of precision (using for example the unittest.TestCase.assertAlmostEqual function)

    • with its default precision if possible (7 digits)

    • otherwise the tested precision should be mentioned in the source code documentation
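
As a small illustration of both rules (the values are arbitrary):

import math
import unittest


def almost_one(x: float) -> bool:
    """Check that x is (almost) 1, using math.isclose default parameters."""
    return math.isclose(x, 1.0)


class TestAlmostOne(unittest.TestCase):
    def test_almost_one(self):
        # checked to the default 7 digits of precision
        self.assertAlmostEqual(0.1 + 0.2 + 0.7, 1.0)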

Backward compatibility

As a general rule, you should only make non-breaking changes to the code base. This rule is mandatory between major releases, which is where breaking changes may be concentrated. Even for major releases, the case must be really solid to push any breaking change.

In the rare case you do need to make breaking changes, they must logically be delayed until the next major release. You can use the deprecated.sphinx decorators to document and delay such changes.

We propose multiple recipes which you can apply depending on the specific breaking change use case.
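
For illustration, the same library also provides a deprecated decorator that can mark the old behaviour while pointing to its replacement (names and versions below are purely illustrative):

from deprecated.sphinx import deprecated


@deprecated(
    reason="Use :func:`new_foo` instead; removal is planned for the next major release",
    version="0.1.0",
)
def old_foo():
    """Old foo."""
    print("Yolooooooooo!")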

In the case of non-breaking changes introducing a new class/function/module, you must still document the version that added it:

from deprecated.sphinx import versionadded

@versionadded(
    reason="Here is my new function",
    version="0.1.0",
)
def foo():
    """Foo"""
    print("Yolooooooooo!")

Such decorators must be stacked with the most recent at the top (the oldest at the bottom).

Code documentation

For maintainability, it is important that the documentation of the code is written at the same time the code is developed. There are different documentation conventions; we describe here the one we chose. This choice is conditioned by the documentation tool we use: sphinx. This software is able to generate a complete documentation containing manually written .rst files alongside an autogenerated API reference obtained by parsing the source files' doc-strings.

Doc strings

All python functions and classes should be documented inline, using doc-strings in reST format compliant with sphinx. Modules and subpackages can also be documented by placing a doc-string before the first import. The aim of a subpackage/module doc-string is to express its intent, and also to add any relevant information that cannot be contained in the functions and classes (for example: scripts should be described in those doc-strings).

These doc-strings should at minimum contain a description of the class constructor / function parameters, return values and types, and intent. They can also be completed with other information at the developers' discretion (for example: mathematical formulas, todos, example code, etc.).

When referencing other classes, types and functions from a doc-string, you should as much as possible use a reST reference to the actual object, the aim being, above all else, to make the documentation easier to read and navigate.

There are at least 4 different ways to use reST references inside the documentation (in either doc-string or reST files):

  • create link to another document (will be converted to a link to another web page): :doc:`relative/path/to/document` (note: no file extension!)

  • create link to another section (from any document): :ref:`relative/path/to/document:Section name` (note: Section name is the actual section name as written, also the path is relative to the doc root directory doc/)

  • create an arbitrary link to a section or figure: create a global reference just before the referenced section/figure definition .. _my_ref:, then it can be referenced using :ref:`my_ref`

  • create an external link: URLs are automatically converted into links, otherwise one can be created using `link_name <http_link>`_

You should also remember to cite the scientific references on which you base your contributions (see the References section below for more details).

References

As our package is primarily meant for research purposes, it is important that we include bibliographic references in the documentation so users can clearly see what the code is based on.

We have decided to use the doc/refs.bib BibTeX file to centralize all references across the whole package. This way, all references are unique, and any contributor should check whether a reference is already present before adding it.

We then recommend citing the references in the documentation of functions or modules using the sphinxcontrib.bibtex extension format:

:cite:p:`REF_NAME`

(for a reference named REF_NAME)

Type hinting in doc

The type hints can be parsed by sphinx to complete the autogenerated API documentation. They are used to complete the type information of parameters and return types of functions.

Comments

Comments should be used to describe implementation details, such as the intent of a block of code or the explanation of a particular line.

They can be placed on their own line when describing the intent of the following code lines, or at the end of a code line to explain this particular line.

Example

You can see below an example of properly documented code:

# src/my_array.py
"""This module is used to perform array computations.

**Usage**

Sum two arrays using this module as a script:

.. code:: bash

    python my_array.py ARRAY1 ARRAY2 OUTPUT

    ARRAY1 and ARRAY2 are two csv files containing arrays.
    OUTPUT is the output csv file.

"""
import argparse

import numpy as np


class MyArray:
    """This class provides a wrapper for :class:`numpy.array`.

    :param array: array to wrap
    """
    def __init__(self, array: np.array):
        self.array = array

    @classmethod
    def load_csv(cls, filename: str) -> 'MyArray':
        """Load array from csv file.

        :param filename: csv file
        """
        return MyArray(np.loadtxt(filename, delimiter=","))

    def save_csv(self, filename: str):
        """Save array in csv file

        :param filename: csv file
        """
        np.savetxt(filename, self.array, fmt="%g", delimiter=",")


def sum_arrays(array1: MyArray, array2: MyArray) -> MyArray:
    """Returns the sum of two arrays.

    This implements :cite:p:`REF_NAME` array sum method.

    :param array1:
    :param array2:
    """
    return MyArray(array1.array + array2.array)


if __name__ == "__main__":
    # Configuration of parameters
    parser = argparse.ArgumentParser(
        description="Sum two csv arrays"
    )
    parser.add_argument("array1", help="first array in csv file")
    parser.add_argument("array2", help="second array in csv file")
    parser.add_argument("output", help="output file")

    # Parse arguments
    args = parser.parse_args()

    # Load input files
    array1 = MyArray.load_csv(args.array1)
    array2 = MyArray.load_csv(args.array2)

    # Sum inputs and save output
    res = sum_arrays(array1, array2)
    res.save_csv(args.output)

Tests

We mainly use four types of tests in our python projects:

  • unit testing using pytest

  • tests across multiple python versions using tox

  • coverage testing (actually combined with the unit testing)

  • retrocompatibility tests

We strongly recommend developing the tests alongside the code, if not before. When committing changes, developers must check the impact of their changes on those tests.

Unit testing

We recommend unit testing in projects with an extensive code base (i.e. python packages). It can be based upon the pytest package and utility.

This package is easy to use: by default, it executes every function whose name starts with “test” in every python file whose name starts with “test”. Those test functions must take no parameters. It then checks all the assert statements (an AssertionError is raised if the boolean condition inside is not met), and reports the number of test functions that passed and failed.

A good practice is to write one file test_SOURCE.py in the folder test/ per source file SOURCE.py in src/, and to implement one test function per module function or class, named after it (e.g. test_sum_arrays for the function sum_arrays).

Below is an example for the source file my_array.py:

# test/test_my_array.py
import numpy as np

from my_array import MyArray, sum_arrays  # assumes src/ is on the python path


def test_my_arrays():
    """Test MyArray class."""
    # Test constructor
    a1 = MyArray(np.array([0, 1, 2, 3, 4]))
    a2 = MyArray(np.array([1, 1, 1, 1, 1]))
    assert a1.array.shape[0] == a2.array.shape[0] == 5


def test_sum_arrays():
    """Test sum_arrays function."""
    a1 = MyArray(np.array([0, 1, 2, 3, 4]))
    a2 = MyArray(np.array([1, 1, 1, 1, 1]))
    res = sum_arrays(a1, a2)
    assert res.array.shape[0] == 5
    assert res.array[0] == 1 and res.array[1] == 2 and res.array[2] == 3
    assert res.array[3] == 4 and res.array[4] == 5
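
Assuming pytest is installed and src/ is on the python path, the whole test suite can then be run from the project root with:

$ pytest test/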

Tests on multiple python versions

We use tox to test the package against multiple python versions (python>=3.7). It creates, for each python version, a virtual environment in which to run the unit tests.

To set this up, you need the interpreters for each python version installed on your machine. If using a pyenv virtual environment, you need to configure it so it can find the interpreters.

First, install all the python versions used:

$ pyenv install 3.8.15 3.9.15 3.10.8 3.11.0 3.12.0

Then, if you defined a local virtual environment, append these python versions to its .python-version file:

$ pyenv local mcda 3.8.15 3.9.15 3.10.8 3.11.0 3.12.0  # replace 'mcda' by your project virtual environment

Your default interpreter will still be the one defined in your project virtual environment, but the interpreters for the other versions will be accessible to tox.
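
Once the interpreters are visible, the whole test matrix can be run with a single command (assuming tox is installed in the project environment):

$ tox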

Note

These tests don’t work in our VS Code container yet. You can delegate them to the GitLab CI/CD pipeline, which runs automatically at each push. They do work in virtual environments though, provided you followed the previous instructions.

Coverage testing

Coverage testing simply checks which lines of code were reached during the execution of an operation. It can be based upon the coverage python package and utility, which produces detailed reports showing the percentage of code lines reached, and can even show precisely which code was reached.

As it wraps around another process execution, we recommend wrapping it around the other types of testing chosen for this project: unit tests or black box tests.

It is good practice to aim for 100% coverage of the code developed for a project, although in particular cases we can be more lenient depending on which lines are unreached.

There are a number of commands you can use with this package:

  • coverage run --source src,test --branch -m pytest: run coverage of the code contained in src/ and test/ over pytest unit tests

  • coverage run --source src ./tests.sh: run coverage of the code contained in src/ over the tests.sh test script (e.g. a black box testing script)

  • coverage report: get the report of the latest coverage done (reading the .coverage file)

  • coverage html: generate the detailed report in html format in htmlcov/index.html

Retrocompatibility tests

You can test retrocompatibility of the source code against any other version. This is done by running the unit tests of the other (generally older) version against the source code of the new version.

To ease those tests, we created a script scripts/back-test.sh which can be run like this:

$ scripts/back-test.sh OLD_VERSION  # for example 1.0.0

We added a helper script scripts/version.sh to extract the current version number and the latest major, minor and patch. When called, it exports the following environment variables:

  • CURRENT

  • MAJOR

  • MINOR

  • PATCH

Warning

This script uses git describe and is therefore experimental. We intend to change it to use a global version number instead.

Notebooks

We added jupyter notebooks that can be run as examples in doc/notebooks/. Those examples should be extended as the package grows. Don’t forget to write these notebooks as you add new features to the package.

Those notebooks are split between almost raw examples and more beginner-friendly tutorials.

To install jupyter, run: pip install jupyter

Then to be able to run the notebooks, go to the doc/notebooks/ directory and run the following: jupyter notebook

This will open a web-page in your browser, listing all the provided notebooks of this package.

You can quickly test if the notebooks execute without errors: make notebook-test

For a better grasp of the real user experience when installing the package and running the example notebooks, we strongly recommend setting up another virtual environment for the doc/notebooks/ subdirectory. In this new virtual environment, do the following (from doc/notebooks/):

  • install this package in editable mode: pip install -e ../..

  • install jupyter package: pip install jupyter