Performance Tables

[1]:
%config Completer.use_jedi = False

This notebook shows how to use performance tables and their functions. The functions tested here revolve around scales and functions applied to performance tables, as well as plotting utilities.

Defining an MCDA problem

We can define an MCDA problem by its alternatives and criteria.

Alternatives and criteria

Alternatives and criteria can be defined by the list of their ids or names as follows:

[2]:
alternatives = ["a01", "a02", "a03", "a04", "a05"]
criteria = ["c01", "c02", "c03"]

Performance table

A performance table is a 2D matrix, where the first dimension represents the alternatives and the second the criteria. Each cell holds a value (or performance) describing how a given alternative performs on a certain criterion.

To define a performance table, pass the values as a 2D structure (a nested list here; a pandas DataFrame is also accepted):

[3]:
from mcda import PerformanceTable
[4]:
perfTable = PerformanceTable(
    [
        [0, "medium", "+"],
        [0.5, "good", "++"],
        [1, "bad", "-"],
        [0.2, "medium", "0"],
        [0.9, "medium", "+"]
    ],
    alternatives=alternatives,
    criteria=criteria
)

You can access alternative and criterion values like this:

[5]:
perfTable.alternatives_values["a01"].data
[5]:
c01       0.0
c02    medium
c03         +
Name: a01, dtype: object
[6]:
perfTable.criteria_values["c02"].data
[6]:
a01    medium
a02      good
a03       bad
a04    medium
a05    medium
Name: c02, dtype: object

You can iterate over a PerformanceTable in different ways:

[7]:
for alternative_values in perfTable.alternatives_values.values():
    print(alternative_values.data)
c01       0.0
c02    medium
c03         +
Name: a01, dtype: object
c01     0.5
c02    good
c03      ++
Name: a02, dtype: object
c01    1.0
c02    bad
c03      -
Name: a03, dtype: object
c01       0.2
c02    medium
c03         0
Name: a04, dtype: object
c01       0.9
c02    medium
c03         +
Name: a05, dtype: object
[8]:
for a, values in perfTable.alternatives_values.items():
    print(f"{a}: {list(values)}")
a01: [0.0, 'medium', '+']
a02: [0.5, 'good', '++']
a03: [1.0, 'bad', '-']
a04: [0.2, 'medium', '0']
a05: [0.9, 'medium', '+']
[9]:
for alternative_values in perfTable.alternatives_values.values():
    print(f"{alternative_values.name}: {list(alternative_values.data)}")
a01: [0.0, 'medium', '+']
a02: [0.5, 'good', '++']
a03: [1.0, 'bad', '-']
a04: [0.2, 'medium', '0']
a05: [0.9, 'medium', '+']
[10]:
for criterion_values in perfTable.criteria_values.values():
    print(criterion_values.data)
a01    0.0
a02    0.5
a03    1.0
a04    0.2
a05    0.9
Name: c01, dtype: float64
a01    medium
a02      good
a03       bad
a04    medium
a05    medium
Name: c02, dtype: object
a01     +
a02    ++
a03     -
a04     0
a05     +
Name: c03, dtype: object

You can also access its internal representation, which uses a pandas.DataFrame:

[11]:
perfTable.data
[11]:
c01 c02 c03
a01 0.0 medium +
a02 0.5 good ++
a03 1.0 bad -
a04 0.2 medium 0
a05 0.9 medium +
[12]:
perfTable.data.to_dict()
[12]:
{'c01': {'a01': 0.0, 'a02': 0.5, 'a03': 1.0, 'a04': 0.2, 'a05': 0.9},
 'c02': {'a01': 'medium',
  'a02': 'good',
  'a03': 'bad',
  'a04': 'medium',
  'a05': 'medium'},
 'c03': {'a01': '+', 'a02': '++', 'a03': '-', 'a04': '0', 'a05': '+'}}

N.B: to avoid mistakes, it is best to keep the same ordering of alternatives and criteria between the value matrix and the id lists used to define the performance table.
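As an illustrative sketch (assuming the constructor also accepts a pandas DataFrame, as suggested by the use of perfTable.data above), building the table from a DataFrame with explicit index and columns ties each value to its alternative and criterion and removes any ordering ambiguity:

import pandas as pd

# Hypothetical alternative construction: the DataFrame index and columns
# name the alternatives and criteria explicitly, so the ordering of the
# value matrix cannot be mixed up.
df = pd.DataFrame(
    [
        [0, "medium", "+"],
        [0.5, "good", "++"],
    ],
    index=["a01", "a02"],  # alternatives
    columns=criteria,      # criteria ids defined earlier
)
sketch_table = PerformanceTable(df)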

All functions related specifically to performance tables are in the module performance_table.

Criteria Scales

In order to have a better grasp of what a performance or value represents for a certain criterion, we can define criteria scales. Those scales can be of 3 types:

Quantitative scales

Quantitative scales represent a numerical interval bounding all possible values using this scale. A preference direction can also be used to specify which values are preferred: PreferenceDirection.MIN if smaller values are preferred, PreferenceDirection.MAX otherwise.

[13]:
from mcda.scales import *

scale1 = QuantitativeScale(0, 1, preference_direction=MIN)

Qualitative scales

Qualitative scales represent a discrete set of possible labels, alongside the set of their corresponding numerical values in order to establish a preference order. As for quantitative scales, a preference direction can be defined.

[14]:
scale2 = QualitativeScale({"bad": 1, "medium": 2, "good": 3})

Nominative scales

Nominative scales represent a discrete set of possible labels. They are unordered. Using nominative scales is not recommended when performing multi-criteria decision analysis, and they should be replaced by qualitative scales as soon as possible in the decision process.

[15]:
scale3 = NominalScale(["--", "-", "0", "+", "++"])

Bundling the criteria scales

We can bundle multiple criteria scales together, one per criterion of our MCDA problem, using a dictionary keyed by criterion:

[16]:
scales = {
    criteria[0]: scale1,
    criteria[1]: scale2,
    criteria[2]: scale3
}

N.B: when we created our performance table, we did not set the criteria scales, so the module tried to infer them:

[17]:
perfTable.scales
[17]:
{'c01': QuantitativeScale(interval=[0.0, 1.0]),
 'c02': NominalScale(labels=['medium', 'good', 'bad']),
 'c03': NominalScale(labels=['+', '++', '-', '0'])}

Pretty close, right? However, the module can infer neither the preference direction of quantitative scales nor the numerical values of qualitative labels from strings…

We can bundle the scales we defined with the performance table to fix that:

[18]:
perfTable.scales = scales

If you only have the performances without the criteria scales, you can compute the scales automatically.

N.B: the scales will then be assumed to be either quantitative scales to maximize (for numeric values) or nominal scales (for label values).

[19]:
bounds = perfTable.bounds
bounds
[19]:
{'c01': QuantitativeScale(interval=[0.0, 1.0]),
 'c02': NominalScale(labels=['medium', 'good', 'bad']),
 'c03': NominalScale(labels=['+', '++', '-', '0'])}

N.B: This property is used to infer criteria scales if not set.

You can also concatenate two performance tables, either to add new alternative values:

[20]:
perfTable2 = PerformanceTable(
    [
        [0.25, "medium", "++"],
        [0.75, "bad", "+"]
    ],
    alternatives=["b1", "b2"],
    criteria=criteria,
    scales=scales
)
PerformanceTable.concat([perfTable, perfTable2]).data
[20]:
c01 c02 c03
a01 0.00 medium +
a02 0.50 good ++
a03 1.00 bad -
a04 0.20 medium 0
a05 0.90 medium +
b1 0.25 medium ++
b2 0.75 bad +

N.B: the scales of the concatenated table are taken from the input tables' scales (the first encountered is kept in case of duplicates). Also, the tables being concatenated must use the same criteria and have distinct alternatives.

You can also concatenate two tables to add new criteria values:

[21]:
new_scales = {
    "c04": QuantitativeScale(0, 10),
    "c05": QuantitativeScale(-100, 100, preference_direction=PreferenceDirection.MIN)
}
perfTable3 = PerformanceTable(
    [
        [0, 0],
        [2, -20],
        [5, -100],
        [10, 50],
        [3, -25]
    ],
    alternatives=alternatives,
    criteria=list(new_scales.keys()),
    scales=new_scales
)
PerformanceTable.concat([perfTable, perfTable3], axis=1).data
[21]:
c01 c02 c03 c04 c05
a01 0.0 medium + 0 0
a02 0.5 good ++ 2 -20
a03 1.0 bad - 5 -100
a04 0.2 medium 0 10 50
a05 0.9 medium + 3 -25

N.B: the tables being concatenated must use the same alternatives and have distinct criteria.

N.B: when using concatenation, no transformation of scales is applied. It is the user's responsibility to make the transformation beforehand if needed.
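If the two tables were expressed in scales that should first be harmonized, a minimal sketch (using the transform helper introduced later in this notebook, and hypothetical target scales chosen only for illustration) could look like this:

from mcda import transform

# Hypothetical target scales: rescale c04 and c05 to [0, 1] before concatenating.
target_scales = {
    "c04": QuantitativeScale(0, 1),
    "c05": QuantitativeScale(0, 1),
}
perfTable3_rescaled = transform(perfTable3, target_scales)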

Computations on performance tables

Check performance values

We can check that all performances in a performance table are inside their respective criterion scale:

[22]:
perfTable.is_within_scales
[22]:
True
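As a sketch of the opposite case (the table below is hypothetical and only built to show a failing check), a value outside its criterion scale should make the property return False:

# 2.0 lies outside scale1's [0, 1] interval, so the check is expected to fail.
bad_table = PerformanceTable(
    [[2.0, "medium", "+"]],
    alternatives=["a99"],
    criteria=criteria,
    scales=scales,
)
bad_table.is_within_scales  # expected: False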

Transform labels into numerical values

We can transform all the values contained in a performance table (per criterion) into other scales. Any transformation between quantitative and/or qualitative scales is possible. Nominal scales need to be converted into qualitative scales first.

Note: for nominal scales, you can either transform them by calling the transform function, or simply use a qualitative scale with the same labels.

The method returns a new PerformanceTable object.

[23]:
scale3b = QualitativeScale({"--": 0, "-": 1, "0": 2, "+": 3, "++": 4})
scales = {
    criteria[0]: scale1,
    criteria[1]: scale2,
    criteria[2]: scale3b
}
perfTable = PerformanceTable(perfTable.data, scales)

Then, we can simply convert the table to numerics:

[24]:
numeric_table = perfTable.to_numeric
numeric_table.data
[24]:
c01 c02 c03
a01 0.0 2 3
a02 0.5 3 4
a03 1.0 1 1
a04 0.2 2 2
a05 0.9 2 3

Normalize numerical values

Numerical values in a performance table can be normalized, using either the raw data for extracting the boundaries, or the criteria scales.

Scale normalization

Scales can be used to normalize numerical values of quantitative and qualitative scales. Nominal scales cannot be used for normalization and must therefore be replaced by qualitative scales.

This method returns a new PerformanceTable object:

[25]:
from mcda import normalize

normalized_table = normalize(perfTable)
normalized_table
[25]:
<mcda.core.matrices.PerformanceTable at 0x7fa6c040bfa0>
[26]:
normalized_table.data
[26]:
c01 c02 c03
a01 1.0 0.5 0.75
a02 0.5 1.0 1.00
a03 0.0 0.0 0.25
a04 0.8 0.5 0.50
a05 0.1 0.5 0.75

N.B: the preference direction is used when normalizing, so in the resulting performance table larger values are always preferred (increasing preference order).

Also, the returned performance table carries a normalized scale for each criterion.
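As a quick check (a sketch of what to inspect; output not shown), the scales attached to the normalized table can be examined the same way as before:

# Expected: a quantitative scale over [0, 1] for every criterion.
normalized_table.scales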

Normalization on raw data

You can also decide to normalize the performance table without providing the criteria scales. The min and max values will be retrieved from the performance table. The method returns a new PerformanceTable object.

This code normalizes each value per criterion:

[27]:
table = PerformanceTable(
    [[0, 25000, -5], [1, 18000, -3], [0, 68000, -1]]
)
normalized_table = normalize(table)
normalized_table.data
[27]:
0 1 2
0 0.0 0.14 0.0
1 1.0 0.00 0.5
2 0.0 1.00 1.0

Apply criteria functions to performances

It is possible to apply criteria functions to each of the criteria values in the performance table. Those functions can be defined using lambda functions (to represent constant, affine or any type of function):

[28]:
f2 = lambda x: 2*x - 0.5

However, for more complex non-arithmetical functions, we provide a module mcda.functions which contains several useful classes and methods:

[29]:
from mcda.functions import *
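For instance, the affine function f2 defined above could also be written with the AffineFunction class that appears in the piecewise function reprs further below (a sketch assuming the slope and constant keyword arguments shown in those reprs):

# Hypothetical equivalent of f2 = lambda x: 2*x - 0.5
f2_affine = AffineFunction(slope=2, constant=-0.5)
f2_affine(0.5)  # expected: 0.5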

You can define discrete functions using the following code:

[30]:
f3 = DiscreteFunction({"--": 1, "-": 2, "0": 3, "+": 4, "++": 5})

This function can then simply be called as any python function:

[31]:
f3("+")
[31]:
4

You can also define piecewise functions using the following code:

[32]:
f = PieceWiseFunction(
    {
        Interval(0, 2.5, max_in=False): lambda x: x,
        Interval(2.5, 5): lambda x: -0.5 * x + 2.0,
    }
)
f
[32]:
PieceWiseFunction(functions={Interval(dmin=0, dmax=2.5,min_in=True, max_in=False): <function <lambda> at 0x7fa6c03c6e60>, Interval(dmin=2.5, dmax=5,min_in=True, max_in=True): <function <lambda> at 0x7fa6c03c70a0>})

There is also a simple way to create a piecewise-linear function from a list of segments:

[33]:
f1 = PieceWiseFunction(segments=[
    [[0, 1], [0.3, -2]],
    [[0.3, -2], [0.6, 0.5]],
    [[0.6, 0.5], [1, 5]]
])
f1
[33]:
PieceWiseFunction(functions={Interval(dmin=0, dmax=0.3,min_in=True, max_in=True): AffineFunction(constant=1.0, slope=-10.0), Interval(dmin=0.3, dmax=0.6,min_in=True, max_in=True): AffineFunction(constant=-4.5, slope=8.333333333333334), Interval(dmin=0.6, dmax=1,min_in=True, max_in=True): AffineFunction(constant=-6.25, slope=11.25)})

You can also simply call a piecewise function like this:

[34]:
f1(0.8)
[34]:
2.75
[35]:
print(f1)
{'[0, 0.3]': 'AffineFunction(constant=1.0, slope=-10.0)', '[0.3, 0.6]': 'AffineFunction(constant=-4.5, slope=8.333333333333334)', '[0.6, 1]': 'AffineFunction(constant=-6.25, slope=11.25)'}

Those functions, like the scales, can be bundled together in a CriteriaFunctions object:

[36]:
functions = CriteriaFunctions(
    {
        criteria[0]: f1,
        criteria[1]: f2,
        criteria[2]: f3
    },
    in_scales={
        criteria[0]: scales[criteria[0]],
        criteria[1]: scales[criteria[1]].numeric,
        criteria[2]: scales[criteria[2]]
    }
)

N.B: the input scales are not enforced when applying the criteria functions to data (they are also optional). You have to make sure your data uses the scales in which the functions are defined; if it does not, you can transform it to the input scales first.

[37]:
from mcda import transform

input_table = transform(perfTable, functions.in_scales)

Then we can apply the criteria functions to a performance table.

[38]:
input_table.data
[38]:
c01 c02 c03
a01 0.0 2.0 +
a02 0.5 3.0 ++
a03 1.0 1.0 -
a04 0.2 2.0 0
a05 0.9 2.0 +
[39]:
nTable = functions(input_table)
nTable.data
[39]:
c01 c02 c03
a01 1.000000 3.5 4
a02 -0.333333 5.5 5
a03 5.000000 1.5 2
a04 -1.000000 3.5 3
a05 3.875000 3.5 4

You can also define the output scales of a CriteriaFunctions object. Those scales will then be used to create the final resulting table. Note that coherency of the result values with the output scales is not verified!
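A minimal sketch of that option (the out_scales parameter name is assumed here, mirroring in_scales; the scale bounds in the comments are rough ranges chosen only for illustration):

# Hypothetical: declare the scales in which the function results are expressed.
functions_with_out = CriteriaFunctions(
    {
        criteria[0]: f1,
        criteria[1]: f2,
        criteria[2]: f3,
    },
    in_scales=functions.in_scales,
    out_scales={
        criteria[0]: QuantitativeScale(-2, 5),     # range of the piecewise f1
        criteria[1]: QuantitativeScale(1.5, 5.5),  # 2*x - 0.5 on [1, 3]
        criteria[2]: QuantitativeScale(1, 5),      # values taken by f3
    },
)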

Check performance table numericity

We can easily check if all the values inside a performance table are numeric, using:

[40]:
nTable.is_numeric
[40]:
True
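By contrast, the original table still holds string labels on c02 and c03, so the same check on it is expected to return False (a sketch, output not shown):

perfTable.is_numeric  # expected: False, since c02 and c03 hold labels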

Sum performances

We can compute the sum of the values of a numerical performance table. The parameter axis controls how the sum is computed. The sum can be taken over the whole performance table (with axis left at its default value):

[41]:
nTable.sum()
[41]:
44.04166666666667

It can also be done column-wise, returning the sum of each column. This corresponds to the sum of all alternatives' values per criterion:

[42]:
nTable.sum(axis=0).data
[42]:
c01     8.541667
c02    17.500000
c03    18.000000
dtype: float64

It can also be done row-wise, returning the sum of each row. This corresponds to the sum of all criteria values per alternative:

[43]:
nTable.sum(axis=1).data
[43]:
a01     8.500000
a02    10.166667
a03     8.500000
a04     5.500000
a05    11.375000
dtype: float64