# Evaluation Framework¶

**SDV** contains a *Synthetic Data Evaluation Framework* that facilitates
the task of evaluating the quality of your *Synthetic Dataset* by
applying multiple *Synthetic Data Metrics* on it and reporting results
in a comprehensive way.

## Using the SDV Evaluation Framework¶

To evaluate the quality of synthetic data we essentially need two things:
the *real* data and *synthetic* data that aims to resemble it.

Let us start by loading a demo table and generating a synthetic replica of
it using the `GaussianCopula` model.

```
In [1]: from sdv.demo import load_tabular_demo
In [2]: from sdv.tabular import GaussianCopula
In [3]: real_data = load_tabular_demo('student_placements')
In [4]: model = GaussianCopula()
In [5]: model.fit(real_data)
In [6]: synthetic_data = model.sample()
```

After the previous steps we will have two tables:

`real_data`

: A table containing data about student placements

```
In [7]: real_data.head()
Out[7]:
student_id gender second_perc high_perc high_spec degree_perc degree_type work_experience experience_years employability_perc mba_spec mba_perc salary placed start_date end_date duration
0 17264 M 67.00 91.00 Commerce 58.00 Sci&Tech False 0 55.0 Mkt&HR 58.80 27000.0 True 2020-07-23 2020-10-12 3.0
1 17265 M 79.33 78.33 Science 77.48 Sci&Tech True 1 86.5 Mkt&Fin 66.28 20000.0 True 2020-01-11 2020-04-09 3.0
2 17266 M 65.00 68.00 Arts 64.00 Comm&Mgmt False 0 75.0 Mkt&Fin 57.80 25000.0 True 2020-01-26 2020-07-13 6.0
3 17267 M 56.00 52.00 Science 52.00 Sci&Tech False 0 66.0 Mkt&HR 59.43 NaN False NaT NaT NaN
4 17268 M 85.80 73.60 Commerce 73.30 Comm&Mgmt False 0 96.8 Mkt&Fin 55.50 42500.0 True 2020-07-04 2020-09-27 3.0
```

`synthetic_data`

: A synthetically generated table that contains data in the same format and with similar statistical properties as the `real_data`.

```
In [8]: synthetic_data.head()
Out[8]:
student_id gender second_perc high_perc high_spec degree_perc degree_type work_experience experience_years employability_perc mba_spec mba_perc salary placed start_date end_date duration
0 17295 F 47.844569 50.714347 Commerce 64.733765 Comm&Mgmt False 0 58.599782 Mkt&HR 54.517890 NaN True NaT NaT NaN
1 17385 M 59.742568 62.202806 Commerce 68.484052 Comm&Mgmt False 0 61.732074 Mkt&HR 59.282476 25110.646520 True 2020-01-22 2020-10-21 12.0
2 17415 M 68.878189 77.583448 Commerce 64.735564 Comm&Mgmt False 1 81.486051 Mkt&Fin 65.664266 29585.928520 True 2020-01-03 2020-09-01 12.0
3 17267 F 88.842036 89.413029 Science 86.173570 Sci&Tech False 1 83.662597 Mkt&Fin 59.897287 26039.548304 True 2020-01-02 2020-04-21 3.0
4 17355 M 61.560413 60.694757 Commerce 59.119288 Comm&Mgmt False 0 77.408036 Mkt&Fin 56.710694 26216.988756 True 2020-01-21 2020-07-24 NaN
```

Note

For more details about this process, please visit the GaussianCopula Model guide.
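
Two optional variations on the modeling steps above often come up in this workflow. The following is a minimal sketch, assuming your SDV version supports a `num_rows` argument to `sample` and `save`/`load` methods on the tabular models; verify both against your installed version:

```
# Draw a synthetic table of a specific size instead of matching the
# number of rows in real_data (assumes sample() accepts num_rows).
synthetic_small = model.sample(num_rows=100)

# Persist the fitted model and reload it later without re-fitting
# (assumes the tabular models expose save() and a load() classmethod).
model.save('gaussian_copula_placements.pkl')
model = GaussianCopula.load('gaussian_copula_placements.pkl')
```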

### Computing an overall score¶

The simplest way to see how similar the two tables are is to import the
`sdv.evaluation.evaluate` function and call it with both the
`synthetic_data` and the `real_data` tables.

```
In [9]: from sdv.evaluation import evaluate
In [10]: evaluate(synthetic_data, real_data)
Out[10]: 0.6034568629075101
```

The output of this function call is a number between 0 and 1 that indicates how similar the two tables are, where 0 is the worst and 1 the best possible score.

### How was this score computed?¶

The `evaluate` function applies a collection of pre-configured metric
functions and returns the average of the scores that the data obtained
on each one of them. In most scenarios this is enough to get an idea
of how similar the two tables are, but you might want to explore the
individual metrics in more detail.

In order to see the different metrics that were applied, you can pass an
additional argument, `aggregate=False`, which will make the `evaluate`
function return the individual scores that each one of the metric
functions produced:

```
In [11]: evaluate(synthetic_data, real_data, aggregate=False)
Out[11]:
metric name raw_score normalized_score min_value max_value goal
1 LogisticDetection LogisticRegression Detection 0.399599 3.995990e-01 0.0 1.0 MAXIMIZE
2 SVCDetection SVC Detection 0.339882 3.398820e-01 0.0 1.0 MAXIMIZE
11 GMLogLikelihood GaussianMixture Log Likelihood -40.977068 1.599137e-18 -inf inf MAXIMIZE
12 CSTest Chi-Squared 0.873999 8.739987e-01 0.0 1.0 MAXIMIZE
13 KSTest Inverted Kolmogorov-Smirnov D statistic 0.927907 9.279070e-01 0.0 1.0 MAXIMIZE
14 KSTestExtended Inverted Kolmogorov-Smirnov D statistic 0.906047 9.060465e-01 0.0 1.0 MAXIMIZE
27 ContinuousKLDivergence Continuous Kullback–Leibler Divergence 0.543956 5.439562e-01 0.0 1.0 MAXIMIZE
28 DiscreteKLDivergence Discrete Kullback–Leibler Divergence 0.823205 8.232053e-01 0.0 1.0 MAXIMIZE
```
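
Since the overall score is the average of these per-metric results, you can reproduce it from the detailed output yourself. A minimal sanity check, assuming the detailed output is a pandas DataFrame and that the aggregation averages the `normalized_score` column:

```
# Recompute the aggregate score by hand (assumption: evaluate() with
# aggregate=True returns the mean of the normalized_score column).
detailed = evaluate(synthetic_data, real_data, aggregate=False)
manual_score = detailed['normalized_score'].mean()
print(manual_score)  # should be close to the single-number score above
```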

### Can I control which metrics are applied?¶

By default, the `evaluate` function will apply all the metrics that
are included within the SDV Evaluation Framework. However, you can
control which metrics are applied by passing a list with their names.

For example, if you were interested in obtaining only the `CSTest` and
`KSTest` scores, you can call the `evaluate` function as follows:

```
In [12]: evaluate(synthetic_data, real_data, metrics=['CSTest', 'KSTest'])
Out[12]: 0.9009528136532905
```

Or, if we want to see the scores separately:

```
In [13]: evaluate(synthetic_data, real_data, metrics=['CSTest', 'KSTest'], aggregate=False)
Out[13]:
metric name raw_score normalized_score min_value max_value goal
0 CSTest Chi-Squared 0.873999 0.873999 0.0 1.0 MAXIMIZE
1 KSTest Inverted Kolmogorov-Smirnov D statistic 0.927907 0.927907 0.0 1.0 MAXIMIZE
```
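
Since the detailed output behaves like a regular pandas DataFrame, you can also post-process it, for example to build a custom weighted score. The weights below are purely illustrative, not part of SDV:

```
import pandas as pd

# Hypothetical weighted combination of the two metric scores; the
# weights are illustrative assumptions, not an SDV feature.
scores = evaluate(synthetic_data, real_data,
                  metrics=['CSTest', 'KSTest'], aggregate=False)
weights = pd.Series({'CSTest': 0.3, 'KSTest': 0.7})
weighted_score = scores.set_index('metric')['normalized_score'].mul(weights).sum()
print(weighted_score)
```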

For more details about all the metrics that exist for the different data modalities, please check the corresponding guides.
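
Finally, if you prefer to skip the `evaluate` wrapper, the individual metric classes can usually be applied directly. A minimal sketch, assuming your SDV version exposes them under `sdv.metrics.tabular` with a `compute` classmethod:

```
# Apply single-table metrics directly (assumes sdv.metrics.tabular and
# the compute() classmethod are available in your SDV version).
from sdv.metrics.tabular import CSTest, KSTest

print(CSTest.compute(real_data, synthetic_data))  # Chi-Squared score
print(KSTest.compute(real_data, synthetic_data))  # inverted KS D statistic
```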