# Evaluation Framework

SDV contains a Synthetic Data Evaluation Framework that facilitates evaluating the quality of your synthetic dataset by applying multiple synthetic data metrics to it and reporting the results in a comprehensive way.

## Using the SDV Evaluation Framework

To evaluate the quality of synthetic data we essentially need two things: real data and synthetic data that is intended to resemble it.

Let us start by loading a demo table and generating a synthetic replica of it using the GaussianCopula model.

In [1]: from sdv.demo import load_tabular_demo

In [2]: from sdv.tabular import GaussianCopula

In [3]: real_data = load_tabular_demo()

In [4]: model = GaussianCopula()

In [5]: model.fit(real_data)

In [6]: synthetic_data = model.sample(len(real_data))


After the previous steps we will have two tables:

- real_data: A table containing data about student placements

In [7]: real_data.head()
Out[7]:
student_id gender  second_perc  ...  start_date   end_date  duration
0       17264      M        67.00  ...  2020-07-23 2020-10-12       3.0
1       17265      M        79.33  ...  2020-01-11 2020-04-09       3.0
2       17266      M        65.00  ...  2020-01-26 2020-07-13       6.0
3       17267      M        56.00  ...         NaT        NaT       NaN
4       17268      M        85.80  ...  2020-07-04 2020-09-27       3.0

[5 rows x 17 columns]

- synthetic_data: A synthetically generated table that contains data in the same format and with statistical properties similar to those of the real_data.

In [8]: synthetic_data.head()
Out[8]:
student_id gender  second_perc  ...  start_date   end_date  duration
0       17306      M        73.34  ...  2020-03-11 2021-01-21       9.0
1       17441      M        48.24  ...         NaT        NaT       NaN
2       17461      M        66.87  ...  2020-06-07 2020-06-19       4.0
3       17307      F        84.12  ...  2020-04-30 2020-10-08       4.0
4       17318      F        86.47  ...  2020-04-05 2020-05-06       3.0

[5 rows x 17 columns]
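Before running any formal metric, the claim of "similar statistical properties" can be checked informally by comparing simple summary statistics column by column. The sketch below is a minimal pure-Python illustration (it is not part of SDV); the sample values are taken from the second_perc column of the two head() outputs above.

```python
# Values from the second_perc column of the real and synthetic head() outputs.
real = [67.00, 79.33, 65.00, 56.00, 85.80]
synthetic = [73.34, 48.24, 66.87, 84.12, 86.47]

def mean(xs):
    return sum(xs) / len(xs)

def stdev(xs):
    # Population standard deviation, enough for an eyeball comparison.
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

# Close means and standard deviations suggest similar marginal distributions,
# but a proper comparison is what the evaluation framework below provides.
print(f"real:      mean={mean(real):.2f}  std={stdev(real):.2f}")
print(f"synthetic: mean={mean(synthetic):.2f}  std={stdev(synthetic):.2f}")
```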



### Computing an overall score

The simplest way to see how similar the two tables are is to import the sdv.evaluation.evaluate function and run it, passing both the synthetic_data and the real_data tables.

In [9]: from sdv.evaluation import evaluate

In [10]: evaluate(synthetic_data, real_data)
Out[10]: 0.6643877153579095


The output of this function call is a number between 0 and 1 that indicates how similar the two tables are, with 0 being the worst and 1 the best possible score.

### How was the obtained score computed?

The evaluate function applies a collection of pre-configured metric functions and returns the average of the scores that the data obtained on each one of them. In most scenarios this can be enough to get an idea about the similarity of the two tables, but you might want to explore the metrics in more detail.
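The aggregation step can be pictured as averaging the normalized score of every metric that ran successfully, skipping any that errored out (as several do in the table further below). The following is a hedged pure-Python sketch of that idea, not sdv's actual implementation, and the score values are hypothetical:

```python
import math

# Hypothetical per-metric normalized scores; None stands for a metric
# that errored (e.g. a missing dependency or a missing argument).
scores = {
    "CSTest": 0.95,
    "KSTest": 0.87,
    "LogisticDetection": 0.40,
    "BNLogLikelihood": None,  # errored, so it is excluded from the average
}

def aggregate(scores):
    """Average the valid scores, ignoring errored (None) or NaN entries."""
    valid = [s for s in scores.values()
             if s is not None and not math.isnan(s)]
    return sum(valid) / len(valid)

print(aggregate(scores))  # average of the three valid scores
```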

In order to see the different metrics that were applied you can pass an additional argument, aggregate=False, which will make the evaluate function return a table with the score that each one of the metric functions obtained:

In [11]: evaluate(synthetic_data, real_data, aggregate=False)
Out[11]:
metric  ...                                              error
0                    BNLogLikelihood  ...  Please install pomegranate with pip install p...
1                  LogisticDetection  ...                                               None
2                       SVCDetection  ...                                               None
3       BinaryDecisionTreeClassifier  ...  target must be passed either directly or ins...
4           BinaryAdaBoostClassifier  ...  target must be passed either directly or ins...
5           BinaryLogisticRegression  ...  target must be passed either directly or ins...
6                BinaryMLPClassifier  ...  target must be passed either directly or ins...
7   MulticlassDecisionTreeClassifier  ...  target must be passed either directly or ins...
8            MulticlassMLPClassifier  ...  target must be passed either directly or ins...
9                   LinearRegression  ...  target must be passed either directly or ins...
10                      MLPRegressor  ...  target must be passed either directly or ins...
11                   GMLogLikelihood  ...  GaussianMixture Log Likelihood: Exhausted retr...
12                            CSTest  ...                                               None
13                            KSTest  ...                                               None
14                    KSTestExtended  ...                                               None
15                    CategoricalCAP  ...  key_fields must be passed either directly or...
16                CategoricalZeroCAP  ...  key_fields must be passed either directly or...
17         CategoricalGeneralizedCAP  ...  key_fields must be passed either directly or...
18                     CategoricalNB  ...  key_fields must be passed either directly or...
19                    CategoricalKNN  ...  key_fields must be passed either directly or...
20                     CategoricalRF  ...  key_fields must be passed either directly or...
21                    CategoricalSVM  ...  key_fields must be passed either directly or...
22               CategoricalEnsemble  ...  '<' not supported between instances of 'float'...
23                       NumericalLR  ...  key_fields must be passed either directly or...
24                      NumericalMLP  ...  key_fields must be passed either directly or...
25                      NumericalSVR  ...  key_fields must be passed either directly or...
26    NumericalRadiusNearestNeighbor  ...  key_fields must be passed either directly or...
27            ContinuousKLDivergence  ...                                               None
28              DiscreteKLDivergence  ...                                               None

[29 rows x 8 columns]


### Can I control which metrics are applied?

By default, the evaluate function will apply all the metrics that are included within the SDV Evaluation framework. However, the list of metrics that are applied can be controlled by passing a list with the names of the metrics that you want to apply.

For example, if you were interested in obtaining only the CSTest and KSTest metrics you can call the evaluate function as follows:

In [12]: evaluate(synthetic_data, real_data, metrics=['CSTest', 'KSTest'])
Out[12]: 0.9105381570608302


Or, if we want to see the scores separately:

In [13]: evaluate(synthetic_data, real_data, metrics=['CSTest', 'KSTest'], aggregate=False)
Out[13]:
metric                                     name  ...      goal  error
0  CSTest                              Chi-Squared  ...  MAXIMIZE   None
1  KSTest  Inverted Kolmogorov-Smirnov D statistic  ...  MAXIMIZE   None

[2 rows x 8 columns]
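The KSTest row above describes itself as an "Inverted Kolmogorov-Smirnov D statistic": the two-sample KS D statistic is the maximum gap between the empirical CDFs of a real and a synthetic column, and inverting it (1 - D) turns a distance into a similarity score whose goal is MAXIMIZE. A minimal pure-Python sketch of that computation (an illustration, not sdv's implementation):

```python
def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov D: max gap between empirical CDFs."""
    def ecdf(xs, v):
        # Fraction of samples less than or equal to v.
        return sum(1 for x in xs if x <= v) / len(xs)

    values = sorted(set(a) | set(b))
    return max(abs(ecdf(a, v) - ecdf(b, v)) for v in values)

def inverted_ks(a, b):
    # Score in [0, 1]; higher means the two distributions are more similar.
    return 1.0 - ks_statistic(a, b)

print(inverted_ks([1, 2, 3], [1, 2, 3]))     # identical samples → 1.0
print(inverted_ks([1, 2, 3], [10, 20, 30]))  # disjoint samples → 0.0
```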

For more details about all the metrics that exist for the different data modalities please check the corresponding guides.