diff --git a/structured_data/2022_06_02_causality/README.md b/structured_data/2022_06_02_causality/README.md
new file mode 100644
index 0000000..d22e638
--- /dev/null
+++ b/structured_data/2022_06_02_causality/README.md
@@ -0,0 +1,98 @@
+# Causality
+
+## What is causality?
+Causality in time series is a tricky thing. You want to find out which variables cause others.
+The notion itself is very subtle and at the same time very strong.
+
+It's in essence a stronger way of looking at the dynamics between variables than the well-known correlation metric.
+Why? Let's illustrate it intuitively.
+In the figure below, you can see that our three (time-series) variables are strongly correlated with each other.
+It makes sense to say that getting your skin burned correlates strongly with the number of ice creams you might eat on a given day.
+However, this does not necessarily mean that eating ice cream causes skin burns!
+Using causality analysis techniques, we can better model and find the _causal_ links between variables.
+If our analysis was successful, we'd conclude that sunshine causes us to eat lots of ice cream and also causes our skin to burn.
+The analysis would also show that there is no causal relationship between eating ice cream and getting your skin burnt, or vice versa.
+Note that causality is a directional way of working, whereas correlation is not!
+
+![causality vs correlation example](figs/causality_vs_correlation.png)
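To make this concrete, here is a toy simulation of the ice cream example (all numbers and variable names are made up for illustration): sunshine drives both downstream series, so they correlate strongly even though neither causes the other.

```python
import numpy as np

rng = np.random.default_rng(0)

# Common driver: daily sunshine intensity.
sunshine = rng.normal(size=1000)

# Both series are caused by sunshine, not by each other.
ice_cream = 2.0 * sunshine + rng.normal(scale=0.5, size=1000)
sunburn = 1.5 * sunshine + rng.normal(scale=0.5, size=1000)

# The downstream variables correlate strongly despite no causal link between them.
corr = np.corrcoef(ice_cream, sunburn)[0, 1]
print(f"corr(ice_cream, sunburn) = {corr:.2f}")
```

A plain correlation analysis would flag the ice-cream/sunburn pair; a causal analysis should instead attribute both to the common driver.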
+
+## Causality analysis as guidance
+Keep in mind that causality analyses are in essence unsupervised techniques:
+there is no ground truth against which to check whether the results make sense.
+
+In an experimental setting, people create toy datasets whose dynamics (linear and non-linear mathematical equations) are determined upfront. Examples:
+
+- [this towardsdatascience article](https://towardsdatascience.com/causality-931372313a1c)
+- [the original paper of PCMCI](https://advances.sciencemag.org/content/5/11/eaau4996)
+- [this paper which compares causality techniques](https://arxiv.org/pdf/2104.08043v1.pdf)
+
+In practice, however, the internal dynamics are unknown or too hard to model (think of complex physical processes). In such cases, causality analysis is merely a tool to represent the dynamics of the system.
+That's why the results of a causality analysis should be checked against the knowledge of domain experts. Try to stay close to this expertise in your causality projects!
+
+## Context
+
+The notion of causality is not new and can be traced back to the [Vedic period](https://en.wikipedia.org/wiki/Vedic_period), which also brought about the well-known concept of Karma. Beyond a concept, causality nowadays has a practical and theoretical framework. Establishing this framework is possible thanks to advances in the mathematical and statistical sciences, coupled with an increase in computational power and in our ability to digitally capture data, and thereby, the processes around us.
+
+## Approaches
+The earliest notion of this theoretical causality in time series is **Granger causality**, put forward by [Granger in 1969](https://en.wikipedia.org/wiki/Granger_causality).
+We say that a variable *X* that evolves over time *Granger-causes* another evolving variable *Y* if predictions of the value of *Y* based on its own past values *and* on the past values of *X* are better than predictions of *Y* based only on *Y*'s own past values.
+
+> Just like all other approaches mentioned in this section, this requires the time series to be made stationary first! We provide [code](../2021_02_08_timeseries_getting_started/time_series_getting_started.ipynb) that automatically checks for stationarity and, if a series isn't stationary already, tries to make it so by differencing.
+
+Granger causality works fine when you want to check whether one variable causes another in a _linear_ way. There also exist (non-parametric) methods to detect _non-linear_ causal interactions. We advocate using these at all times, unless you're certain that the underlying dynamics are linear by nature.
+
+Enter **Transfer Entropy**! [This article](https://towardsdatascience.com/causality-931372313a1c) gives a good overview of the differences and similarities between Transfer Entropy (developed in 2000) and Granger causality. In short, Transfer Entropy is the non-linear variant of Granger causality and has proven to be the most intuitive and performant (in quality and consistency of results) method in our current toolbox. We advocate [this Python implementation](https://pypi.org/project/PyCausality/). The code itself is no longer maintained but still works well. As the pip package is unstable, it's easier to copy the source code and work from there, which we already did for you :-). Check out the [code](./src/transfer_entropy/transfer_entropy_wrapper.py)!
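To give a feel for what Transfer Entropy measures, here is a deliberately naive histogram-based estimator at lag 1 (our own illustration; the PyCausality wrapper's implementation is more sophisticated). It asks: how much does knowing the past of `x` reduce uncertainty about the next value of `y`, beyond what `y`'s own past already tells us?

```python
import numpy as np

def transfer_entropy(x, y, bins=4):
    """Naive histogram estimate of TE(x -> y) at lag 1, in bits."""
    # Discretize both series into equal-width bins.
    xb = np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])
    yb = np.digitize(y, np.histogram_bin_edges(y, bins)[1:-1])
    y_next, y_now, x_now = yb[1:], yb[:-1], xb[:-1]

    te, n = 0.0, len(y_next)
    for yn in np.unique(yb):
        for yc in np.unique(yb):
            for xc in np.unique(xb):
                mask = (y_next == yn) & (y_now == yc) & (x_now == xc)
                p_joint = mask.sum() / n
                if p_joint == 0:
                    continue
                # p(y_next | y_now, x_now) vs. p(y_next | y_now), from counts.
                p_cond_full = mask.sum() / ((y_now == yc) & (x_now == xc)).sum()
                p_cond_hist = ((y_next == yn) & (y_now == yc)).sum() / (y_now == yc).sum()
                te += p_joint * np.log2(p_cond_full / p_cond_hist)
    return te

rng = np.random.default_rng(7)
x = rng.normal(size=2000)
y = np.roll(x, 1) + 0.3 * rng.normal(size=2000)  # y is driven by past x
y[0] = rng.normal()
print(transfer_entropy(x, y), transfer_entropy(y, x))  # x -> y dominates
```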
+
+A more recent (2015) approach to causality is [PCMCI](https://github.com/jakobrunge/tigramite/). It was originally developed to find causal interactions in highly dimensional time series. This package is also able to _condition out_ variables.
+
+> **What does conditioning out mean in this context?** When a variable A is causing a variable B, and variable B causes variable C, then A is causing C to some degree as well. If B is not adding extra significant information to C, compared to A, then PCMCI will not end up showing the causal link between B and C, but will only output that A causes C.
+
+![conditional causality](figs/conditional_causality.png)
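The note above can be illustrated with partial correlation, a linear stand-in for the conditional independence tests PCMCI uses. In this synthetic example, `b` and `c` correlate strongly, but conditioning out their driver `a` removes the link:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
a = rng.normal(size=n)
b = a + 0.1 * rng.normal(size=n)   # b carries (almost) no info beyond a
c = a + 0.5 * rng.normal(size=n)   # c is driven by a

def partial_corr(x, y, z):
    """Correlation between x and y after linearly regressing out z."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return np.corrcoef(rx, ry)[0, 1]

raw = np.corrcoef(b, c)[0, 1]      # strong "raw" correlation
cond = partial_corr(b, c, a)       # near zero once a is conditioned out
print(raw, cond)
```

A PCMCI-style analysis would therefore keep the link from `a` to `c` and drop the spurious one from `b` to `c`.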
+
+**PCMCI** is a two-phase approach:
+
+1. PC: condition selection algorithm (finds relevant parents to target variables)
+ This first removes the variables that are not even conditionally dependent by independence testing.
+   Then, the conditional dependence of the strongest dependent variables is checked
+ → This is done iteratively, until convergence
+
+ → Ending up with a list of relevant conditions (=relevant variables at a certain lag)
+
+2. MCI: “Momentary conditional independence”
+
+ → false positive control for highly-interdependent time series
+
+ → Also Identifies causal strength of a causal relationship
+
+This two-phase approach is the skeleton of how PCMCI works. You also have to instantiate PCMCI with a conditional independence test. Here, you have 5 options (see `tigramite.independence_tests` in the [documentation](https://jakobrunge.github.io/tigramite/)). Let's highlight some:
+
+1. [`tigramite.independence_tests.CondIndTest`](https://jakobrunge.github.io/tigramite/#tigramite.independence_tests.CondIndTest) : This is the base class on which all actual independence tests are built.
+
+2. [`tigramite.independence_tests.ParCorr`](https://jakobrunge.github.io/tigramite/#tigramite.independence_tests.ParCorr) : This test is based on partial correlation and works well for linear causal effects.
+
+3. [`tigramite.independence_tests.CMIknn`](https://jakobrunge.github.io/tigramite/#tigramite.independence_tests.CMIknn) : This test is a non-parametric test for continuous data that's based on KNN, as the name suggests. It works well for:
+
+ - Non-linear dependencies
+
+ - Additive *and* multiplicative noise
+
+ It's computationally the most expensive option though. If you want more information, have a look at the [paper](https://core.ac.uk/download/pdf/211564416.pdf).
+
+One of the most important parameters is alpha, denoting the “degree of conservativeness”. The higher alpha, the quicker PCMCI will identify causal links, but at a higher risk of false positives. This value typically lies between 0.05 and 0.5. From our experiments, we've seen that this statement holds most of the time (so not always). Running PCMCI with different values for alpha (e.g. [0.05, 0.1, 0.2, 0.4]) is a good idea!
+
+> Note that this alpha hyperparameter can be tuned automatically by setting alpha to `None`. That's at least what the documentation says; it is correct for ParCorr, but not for other independence tests. See [this issue](https://github.com/jakobrunge/tigramite/issues/49) for more information.
+
+
+
+One last thing! PCMCI has really nice visuals as output. You can see an example below.
+
+ ![PCMCI visual](figs/PCMCI_visual.png)
+
+ The cross-MCI denotes how strong the causal link is between different variables. The auto-MCI scale denotes how strong the causal link is between current and past values (lags) for one specific variable (denoted as a node in the graph). The numbers denote the lags for which a causal link was found.
+
+#### Try it out yourself!
+Have a look in this repo to find out how TE and PCMCI can help you in your use-case! You can find [an example notebook](./src/Example%20notebook.ipynb) in the `src/` folder!
+
+> Extra remark 1: In 2020, the creator of PCMCI came up with an extension of PCMCI, named PCMCI+. From our projects and experiments, we have seen that PCMCI+ shows inconsistent results over different runs. Although PCMCI can suffer from the same problem (though not in every case), it feels much more reliable. We therefore advocate not using PCMCI+, unless your use-case needs to check contemporaneous links, i.e. whether A causes B at current as well as historic time steps instead of only historic time steps (see the [PCMCI+ paper](http://proceedings.mlr.press/v124/runge20a.html)).
+
+> Extra remark 2: In the PCMCI+ paper, the authors state that highly autocorrelated time series are challenging for most time series causality approaches, and they hint that this is also the case for (regular) PCMCI. You might want to keep this in mind when using PCMCI.
\ No newline at end of file
diff --git a/structured_data/2022_06_02_causality/figs/PCMCI_visual.png b/structured_data/2022_06_02_causality/figs/PCMCI_visual.png
new file mode 100644
index 0000000..8795fe6
Binary files /dev/null and b/structured_data/2022_06_02_causality/figs/PCMCI_visual.png differ
diff --git a/structured_data/2022_06_02_causality/figs/causality_vs_correlation.png b/structured_data/2022_06_02_causality/figs/causality_vs_correlation.png
new file mode 100644
index 0000000..dbd4ed3
Binary files /dev/null and b/structured_data/2022_06_02_causality/figs/causality_vs_correlation.png differ
diff --git a/structured_data/2022_06_02_causality/figs/conditional_causality.png b/structured_data/2022_06_02_causality/figs/conditional_causality.png
new file mode 100644
index 0000000..f7c3549
Binary files /dev/null and b/structured_data/2022_06_02_causality/figs/conditional_causality.png differ
diff --git a/structured_data/2022_06_02_causality/requirements.txt b/structured_data/2022_06_02_causality/requirements.txt
new file mode 100644
index 0000000..61b8788
--- /dev/null
+++ b/structured_data/2022_06_02_causality/requirements.txt
@@ -0,0 +1,88 @@
+argon2-cffi==21.3.0
+argon2-cffi-bindings==21.2.0
+asttokens==2.0.5
+async-generator==1.10
+attrs==21.4.0
+backcall==0.2.0
+beautifulsoup4==4.11.1
+bleach==5.0.0
+certifi==2022.5.18.1
+cffi==1.15.0
+cycler==0.11.0
+debugpy==1.6.0
+decorator==5.1.1
+defusedxml==0.7.1
+dill==0.3.5.1
+entrypoints==0.4
+executing==0.8.3
+fastjsonschema==2.15.3
+fonttools==4.33.3
+ipykernel==6.13.0
+ipython==8.4.0
+ipython-genutils==0.2.0
+ipywidgets==7.7.0
+jedi==0.18.1
+Jinja2==3.1.2
+joblib==1.1.0
+jsonschema==4.6.0
+jupyter==1.0.0
+jupyter-client==7.3.1
+jupyter-console==6.4.3
+jupyter-core==4.10.0
+jupyterlab-pygments==0.2.2
+jupyterlab-widgets==1.1.0
+kiwisolver==1.4.2
+llvmlite==0.38.1
+MarkupSafe==2.1.1
+matplotlib==3.5.2
+matplotlib-inline==0.1.3
+mistune==0.8.4
+nbclient==0.6.4
+nbconvert==6.5.0
+nbformat==5.4.0
+nest-asyncio==1.5.5
+networkx==2.8.2
+notebook==6.4.11
+numba==0.55.2
+numpy==1.22.4
+packaging==21.3
+pandas==1.4.2
+pandocfilters==1.5.0
+parso==0.8.3
+patsy==0.5.2
+pexpect==4.8.0
+pickleshare==0.7.5
+Pillow==9.1.1
+prometheus-client==0.14.1
+prompt-toolkit==3.0.29
+psutil==5.9.1
+ptyprocess==0.7.0
+pure-eval==0.2.2
+pycparser==2.21
+Pygments==2.12.0
+pyparsing==3.0.9
+pyrsistent==0.18.1
+python-dateutil==2.8.2
+pytz==2022.1
+pyzmq==23.1.0
+qtconsole==5.3.0
+QtPy==2.1.0
+scikit-learn==1.1.1
+scipy==1.8.1
+seaborn==0.11.2
+Send2Trash==1.8.0
+six==1.16.0
+sklearn==0.0
+soupsieve==2.3.2.post1
+stack-data==0.2.0
+statsmodels==0.13.2
+terminado==0.15.0
+testpath==0.6.0
+threadpoolctl==3.1.0
+tigramite==5.0.0.3
+tinycss2==1.1.1
+tornado==6.1
+traitlets==5.2.2.post1
+wcwidth==0.2.5
+webencodings==0.5.1
+widgetsnbextension==3.6.0
diff --git a/structured_data/2022_06_02_causality/src/Example notebook.ipynb b/structured_data/2022_06_02_causality/src/Example notebook.ipynb
new file mode 100644
index 0000000..b118184
--- /dev/null
+++ b/structured_data/2022_06_02_causality/src/Example notebook.ipynb
@@ -0,0 +1,1714 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "1df7c4f1",
+ "metadata": {},
+ "source": [
+ "## Get the data ready"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "id": "58b4a264",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "%load_ext autoreload\n",
+ "%autoreload 2\n",
+ "\n",
+ "# imports\n",
+ "from datetime import datetime, timedelta\n",
+ "\n",
+ "import pandas as pd\n",
+ "import seaborn as sns\n",
+ "import numpy as np\n",
+ "import matplotlib.pyplot as plt\n",
+ "import dill as pickle\n",
+ "\n",
+ "from helpers.stationarity import remove_trend_and_diff"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "id": "dried-genetics",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "## Load in the data ##\n",
+ "# This data corresponds with one of the experiments carried out in this paper : \n",
+ "# https://arxiv.org/pdf/2104.08043v1.pdf\n",
+ "# More specifically, this data is the first of the 200 data samples used in \n",
+ "# the Causal Sufficiency experiment, with one latent variable.\n",
+ "# See https://github.com/causalens/cdml-neurips2020 for more information.\n",
+ "\n",
+ "filename = 'data.pickle'\n",
+ "\n",
+ "with open(filename, 'rb') as f:\n",
+ " # the pickle file contains a pandas dataframe as well as the causal\n",
+ " # graph that was generated by the authors of the paper, which we don't need\n",
+ " df, _ = pickle.load(f)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "id": "8bcd194a",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "tackling new col X1\n",
+ " --> (KPSS & ADF) Time-series IS stationary for X1 (after 0 differencing operations)!\n",
+ "\n",
+ "tackling new col X10\n",
+ " --> (KPSS & ADF) Time-series IS stationary for X10 (after 0 differencing operations)!\n",
+ "\n",
+ "tackling new col X2\n",
+ " --> (KPSS & ADF) Time-series IS stationary for X2 (after 0 differencing operations)!\n",
+ "\n",
+ "tackling new col X3\n",
+ " --> (KPSS & ADF) Time-series IS stationary for X3 (after 0 differencing operations)!\n",
+ "\n",
+ "tackling new col X4\n",
+ " --> (KPSS & ADF) Time-series IS stationary for X4 (after 0 differencing operations)!\n",
+ "\n",
+ "tackling new col X5\n",
+ " --> (KPSS & ADF) Time-series IS stationary for X5 (after 0 differencing operations)!\n",
+ "\n",
+ "tackling new col X6\n",
+ " --> (KPSS & ADF) Time-series IS stationary for X6 (after 0 differencing operations)!\n",
+ "\n",
+ "tackling new col X7\n",
+ " --> (KPSS & ADF) Time-series IS stationary for X7 (after 0 differencing operations)!\n",
+ "\n",
+ "tackling new col X8\n",
+ " --> (KPSS & ADF) Time-series IS stationary for X8 (after 0 differencing operations)!\n",
+ "\n",
+ "tackling new col X9\n",
+ " --> (KPSS & ADF) Time-series IS stationary for X9 (after 0 differencing operations)!\n",
+ "\n",
+ "(Maximum number of differencing operations performed was 1)\n"
+ ]
+ },
+ {
+ "data": {
+ "text/plain": [
+ " X1 X10 X2 X3 X4 \\\n",
+ "2019-09-06 23:45:32.296715 -1.667105 -0.647129 0.580980 -0.943676 0.475631 \n",
+ "2019-09-07 23:45:32.296715 0.224083 0.508340 0.521288 0.405035 0.918387 \n",
+ "2019-09-08 23:45:32.296715 -1.086641 0.201047 -2.989957 0.466461 -0.292212 \n",
+ "2019-09-09 23:45:32.296715 -0.421986 0.993365 0.662050 -0.896384 -0.430599 \n",
+ "2019-09-10 23:45:32.296715 -0.109283 -0.413054 -1.222458 1.193096 -0.220662 \n",
+ "\n",
+ " X5 X6 X7 X8 X9 \n",
+ "2019-09-06 23:45:32.296715 -0.298541 -0.562768 0.511342 1.176538 -0.137143 \n",
+ "2019-09-07 23:45:32.296715 -0.611866 2.306102 -0.658871 0.179594 -0.455267 \n",
+ "2019-09-08 23:45:32.296715 -0.960543 0.697460 -0.682488 0.692319 1.161958 \n",
+ "2019-09-09 23:45:32.296715 1.764546 0.370239 2.418368 0.616185 -0.104601 \n",
+ "2019-09-10 23:45:32.296715 -0.831529 0.723627 -0.544237 0.502862 -1.117350 "
+ ]
+ },
+ "execution_count": 3,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "# Process data\n",
+ "df.index = pd.date_range(datetime.today() - timedelta(days = len(df)), \n",
+ " periods=len(df))\n",
+ "df_stat = remove_trend_and_diff(df.resample('W-MON').mean(), \n",
+ " debug=False)\n",
+ "df.head()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "67b0df31",
+ "metadata": {},
+ "source": [
+ "### Cool! Apparently, our dataset was completely stationary already. Let's start the analysis"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "id": "00ee028a",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# determine min and max lags\n",
+ "tau_min=0\n",
+ "tau_max=4"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "fcab33b7",
+ "metadata": {},
+ "source": [
+ "# 1. Transfer entropy"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "d8f98149",
+ "metadata": {},
+ "source": [
+ "## Run TE"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "id": "fb6973c9",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# imports\n",
+ "from transfer_entropy.transfer_entropy_wrapper import average_transfer_entropy\n",
+ "from helpers.transfer_entropy import export_as_df"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "id": "ab448ec2",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\n",
+ "lag(0)\n",
+ "X1 -> X10\n",
+ "X2 -> X10\n",
+ "X3 -> X10\n",
+ "X4 -> X10\n",
+ "X5 -> X10\n",
+ "X6 -> X10\n",
+ "X7 -> X10\n",
+ "X8 -> X10\n",
+ "X9 -> X10\n",
+ "took 15.5995512008667 seconds\n",
+ "\n",
+ "lag(1)\n",
+ "X1 -> X10\n",
+ "X2 -> X10\n",
+ "X3 -> X10\n",
+ "X4 -> X10\n",
+ "X5 -> X10\n",
+ "X6 -> X10\n",
+ "X7 -> X10\n",
+ "X8 -> X10\n",
+ "X9 -> X10\n",
+ "took 14.241465091705322 seconds\n",
+ "\n",
+ "lag(2)\n",
+ "X1 -> X10\n",
+ "X2 -> X10\n",
+ "X3 -> X10\n",
+ "X4 -> X10\n",
+ "X5 -> X10\n",
+ "X6 -> X10\n",
+ "X7 -> X10\n",
+ "X8 -> X10\n",
+ "X9 -> X10\n",
+ "took 13.998454570770264 seconds\n",
+ "\n",
+ "lag(3)\n",
+ "X1 -> X10\n",
+ "X2 -> X10\n",
+ "X3 -> X10\n",
+ "X4 -> X10\n",
+ "X5 -> X10\n",
+ "X6 -> X10\n",
+ "X7 -> X10\n",
+ "X8 -> X10\n",
+ "X9 -> X10\n",
+ "took 13.934021949768066 seconds\n",
+ "\n",
+ "lag(4)\n",
+ "X1 -> X10\n",
+ "X2 -> X10\n",
+ "X3 -> X10\n",
+ "X4 -> X10\n",
+ "X5 -> X10\n",
+ "X6 -> X10\n",
+ "X7 -> X10\n",
+ "X8 -> X10\n",
+ "X9 -> X10\n",
+ "took 13.63166069984436 seconds\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Number of shuffles to perform to determine the results' significance.\n",
+ "n_shuffles = 50 \n",
+ "# Whether or not to calculate the effective Transfer Entropy.\n",
+ "effective = False\n",
+ "# Whether or not to show intermediate results.\n",
+ "debug = False\n",
+ "\n",
+ "# Function execution\n",
+ "# !Make sure the first column in your dataframe is the target column you want to find causality for!\n",
+ "# In our case, we want to check causality with respect to X10, which we then define as the target var\n",
+ "def bring_col_to_front(df, column):\n",
+ " return df[[column] + [col for col in df.columns if col != column]]\n",
+ "df_stat = bring_col_to_front(df_stat, \"X10\")\n",
+ "\n",
+ "avg_nonlin_te = average_transfer_entropy(df_stat, \n",
+ " linear=False, \n",
+ " effective=effective, \n",
+ " tau_min=tau_min, \n",
+ " tau_max=tau_max,\n",
+ " n_shuffles=n_shuffles, \n",
+ " debug=debug)\n",
+ "avg_nonlin_te_arr = np.array(avg_nonlin_te)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "id": "285d2a66",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Parse output as pandas dataframe\n",
+ "avg_nonlin_te_df = export_as_df(avg_nonlin_te_arr)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "id": "c578e5c1",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Remember that the parsed output of TE contains p-values\n",
+ "# To draw conclusions from these statistical notions, we have to \n",
+ "# arbitrarily pick an alpha (cut-off) value. Based on that value, we \n",
+ "# determine for which links there seems enough statistical evidence to conclude causality\n",
+ "threshold = 0.01\n",
+    "booldf = avg_nonlin_te_df.iloc[:,:] < threshold"
+ ]
+ },
+ "metadata": {
+ "needs_background": "light"
+ },
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
+ "viz_df_raw(avg_nonlin_te_df, booldf, threshold)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "6e19f278",
+ "metadata": {},
+ "source": [
+ "#### Graph explanations\n",
+    "The line plots show the actual p-values for a given link between the target variable (X10) and the explanatory variable at a given lag. Based on the cut-off value we chose, we also plot blue bars. These bars appear for the p-values that fall below the threshold; in essence, they denote the combinations of lags and variables that show a causal link with regard to the target variable (X10 in this case)."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 11,
+ "id": "5dbd0993",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
+ "# clear matplotlib buffer to be able to make figures for pcmci\n",
+ "plt.clf()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "3679674f",
+ "metadata": {},
+ "source": [
+ "# 2. PCMCI"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 12,
+ "id": "5f488872",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# imports\n",
+ "from tigramite import data_processing as pp\n",
+ "from tigramite import plotting as tp\n",
+ "from tigramite.pcmci import PCMCI\n",
+ "from tigramite.independence_tests import CMIknn\n",
+ "\n",
+ "from helpers.pcmci import get_selected_links, process_and_visualize_results"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 13,
+ "id": "d69a441d",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\n",
+ "##\n",
+ "## Step 1: PC1 algorithm with lagged conditions\n",
+ "##\n",
+ "\n",
+ "Parameters:\n",
+ "selected_links = {0: [(1, -1), (1, -2), (1, -3), (1, -4), (2, -1), (2, -2), (2, -3), (2, -4), (3, -1), (3, -2), (3, -3), (3, -4), (4, -1), (4, -2), (4, -3), (4, -4), (5, -1), (5, -2), (5, -3), (5, -4), (6, -1), (6, -2), (6, -3), (6, -4), (7, -1), (7, -2), (7, -3), (7, -4), (8, -1), (8, -2), (8, -3), (8, -4), (9, -1), (9, -2), (9, -3), (9, -4)], 1: [(1, -1), (1, -2), (1, -3), (1, -4), (2, -1), (2, -2), (2, -3), (2, -4), (3, -1), (3, -2), (3, -3), (3, -4), (4, -1), (4, -2), (4, -3), (4, -4), (5, -1), (5, -2), (5, -3), (5, -4), (6, -1), (6, -2), (6, -3), (6, -4), (7, -1), (7, -2), (7, -3), (7, -4), (8, -1), (8, -2), (8, -3), (8, -4), (9, -1), (9, -2), (9, -3), (9, -4)], 2: [], 3: [], 4: [], 5: [], 6: [], 7: [], 8: [], 9: []}\n",
+ "independence test = cmi_knn\n",
+ "tau_min = 1\n",
+ "tau_max = 4\n",
+ "pc_alpha = [0.25]\n",
+ "max_conds_dim = None\n",
+ "max_combinations = 1\n",
+ "\n",
+ "\n",
+ "\n",
+ "## Variable X10\n",
+ "\n",
+ "Iterating through pc_alpha = [0.25]:\n",
+ "\n",
+ "# pc_alpha = 0.25 (1/1):\n",
+ "\n",
+ "Testing condition sets of dimension 0:\n",
+ "\n",
+ " Link (X1 -1) --> X10 (1/36):\n",
+ " Subset 0: () gives pval = 0.22200 / val = 0.028\n",
+ " No conditions of dimension 0 left.\n",
+ "\n",
+ " Link (X1 -2) --> X10 (2/36):\n",
+ " Subset 0: () gives pval = 0.20000 / val = 0.032\n",
+ " No conditions of dimension 0 left.\n",
+ "\n",
+ " Link (X1 -3) --> X10 (3/36):\n",
+ " Subset 0: () gives pval = 0.49700 / val = 0.018\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X1 -4) --> X10 (4/36):\n",
+ " Subset 0: () gives pval = 0.94000 / val = -0.002\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X2 -1) --> X10 (5/36):\n",
+ " Subset 0: () gives pval = 0.00600 / val = 0.060\n",
+ " No conditions of dimension 0 left.\n",
+ "\n",
+ " Link (X2 -2) --> X10 (6/36):\n",
+ " Subset 0: () gives pval = 0.33300 / val = 0.020\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X2 -3) --> X10 (7/36):\n",
+ " Subset 0: () gives pval = 0.34600 / val = 0.020\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X2 -4) --> X10 (8/36):\n",
+ " Subset 0: () gives pval = 0.14900 / val = 0.032\n",
+ " No conditions of dimension 0 left.\n",
+ "\n",
+ " Link (X3 -1) --> X10 (9/36):\n",
+ " Subset 0: () gives pval = 1.00000 / val = -0.019\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X3 -2) --> X10 (10/36):\n",
+ " Subset 0: () gives pval = 0.20500 / val = 0.028\n",
+ " No conditions of dimension 0 left.\n",
+ "\n",
+ " Link (X3 -3) --> X10 (11/36):\n",
+ " Subset 0: () gives pval = 0.50100 / val = 0.014\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X3 -4) --> X10 (12/36):\n",
+ " Subset 0: () gives pval = 0.18900 / val = 0.028\n",
+ " No conditions of dimension 0 left.\n",
+ "\n",
+ " Link (X4 -1) --> X10 (13/36):\n",
+ " Subset 0: () gives pval = 0.21200 / val = 0.028\n",
+ " No conditions of dimension 0 left.\n",
+ "\n",
+ " Link (X4 -2) --> X10 (14/36):\n",
+ " Subset 0: () gives pval = 0.66300 / val = 0.010\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X4 -3) --> X10 (15/36):\n",
+ " Subset 0: () gives pval = 0.48000 / val = 0.016\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X4 -4) --> X10 (16/36):\n",
+ " Subset 0: () gives pval = 0.07700 / val = 0.038\n",
+ " No conditions of dimension 0 left.\n",
+ "\n",
+ " Link (X5 -1) --> X10 (17/36):\n",
+ " Subset 0: () gives pval = 0.21900 / val = 0.023\n",
+ " No conditions of dimension 0 left.\n",
+ "\n",
+ " Link (X5 -2) --> X10 (18/36):\n",
+ " Subset 0: () gives pval = 0.93400 / val = -0.000\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X5 -3) --> X10 (19/36):\n",
+ " Subset 0: () gives pval = 0.03600 / val = 0.049\n",
+ " No conditions of dimension 0 left.\n",
+ "\n",
+ " Link (X5 -4) --> X10 (20/36):\n",
+ " Subset 0: () gives pval = 0.21900 / val = 0.026\n",
+ " No conditions of dimension 0 left.\n",
+ "\n",
+ " Link (X6 -1) --> X10 (21/36):\n",
+ " Subset 0: () gives pval = 0.37800 / val = 0.018\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X6 -2) --> X10 (22/36):\n",
+ " Subset 0: () gives pval = 0.81100 / val = 0.004\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X6 -3) --> X10 (23/36):\n",
+ " Subset 0: () gives pval = 0.31300 / val = 0.021\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X6 -4) --> X10 (24/36):\n",
+ " Subset 0: () gives pval = 0.37900 / val = 0.019\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X7 -1) --> X10 (25/36):\n",
+ " Subset 0: () gives pval = 0.75500 / val = 0.008\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X7 -2) --> X10 (26/36):\n",
+ " Subset 0: () gives pval = 0.82700 / val = 0.004\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X7 -3) --> X10 (27/36):\n",
+ " Subset 0: () gives pval = 0.81400 / val = 0.004\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X7 -4) --> X10 (28/36):\n",
+ " Subset 0: () gives pval = 0.85500 / val = 0.003\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X8 -1) --> X10 (29/36):\n",
+ " Subset 0: () gives pval = 0.33000 / val = 0.020\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X8 -2) --> X10 (30/36):\n",
+ " Subset 0: () gives pval = 0.62600 / val = 0.010\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X8 -3) --> X10 (31/36):\n",
+ " Subset 0: () gives pval = 0.83800 / val = 0.004\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X8 -4) --> X10 (32/36):\n",
+ " Subset 0: () gives pval = 0.52500 / val = 0.014\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X9 -1) --> X10 (33/36):\n",
+ " Subset 0: () gives pval = 0.05100 / val = 0.046\n",
+ " No conditions of dimension 0 left.\n",
+ "\n",
+ " Link (X9 -2) --> X10 (34/36):\n",
+ " Subset 0: () gives pval = 0.66600 / val = 0.009\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X9 -3) --> X10 (35/36):\n",
+ " Subset 0: () gives pval = 0.40800 / val = 0.020\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X9 -4) --> X10 (36/36):\n",
+ " Subset 0: () gives pval = 0.29700 / val = 0.023\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Sorting parents in decreasing order with \n",
+ " weight(i-tau->j) = min_{iterations} |val_{ij}(tau)| \n",
+ "\n",
+ "Updating parents:\n",
+ "\n",
+ " Variable X10 has 12 link(s):\n",
+ " (X2 -1): max_pval = 0.00600, min_val = 0.060\n",
+ " (X5 -3): max_pval = 0.03600, min_val = 0.049\n",
+ " (X9 -1): max_pval = 0.05100, min_val = 0.046\n",
+ " (X4 -4): max_pval = 0.07700, min_val = 0.038\n",
+ " (X2 -4): max_pval = 0.14900, min_val = 0.032\n",
+ " (X1 -2): max_pval = 0.20000, min_val = 0.032\n",
+ " (X3 -4): max_pval = 0.18900, min_val = 0.028\n",
+ " (X3 -2): max_pval = 0.20500, min_val = 0.028\n",
+ " (X1 -1): max_pval = 0.22200, min_val = 0.028\n",
+ " (X4 -1): max_pval = 0.21200, min_val = 0.028\n",
+ " (X5 -4): max_pval = 0.21900, min_val = 0.026\n",
+ " (X5 -1): max_pval = 0.21900, min_val = 0.023\n",
+ "\n",
+ "Testing condition sets of dimension 1:\n",
+ "\n",
+ " Link (X2 -1) --> X10 (1/12):\n",
+ " Subset 0: (X5 -3) gives pval = 0.22300 / val = 0.029\n",
+ " No conditions of dimension 1 left.\n",
+ "\n",
+ " Link (X5 -3) --> X10 (2/12):\n",
+ " Subset 0: (X2 -1) gives pval = 0.39700 / val = 0.022\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X9 -1) --> X10 (3/12):\n",
+ " Subset 0: (X2 -1) gives pval = 0.03700 / val = 0.041\n",
+ " No conditions of dimension 1 left.\n",
+ "\n",
+ " Link (X4 -4) --> X10 (4/12):\n",
+ " Subset 0: (X2 -1) gives pval = 0.08100 / val = 0.040\n",
+ " No conditions of dimension 1 left.\n",
+ "\n",
+ " Link (X2 -4) --> X10 (5/12):\n",
+ " Subset 0: (X2 -1) gives pval = 0.20900 / val = 0.027\n",
+ " No conditions of dimension 1 left.\n",
+ "\n",
+ " Link (X1 -2) --> X10 (6/12):\n",
+ " Subset 0: (X2 -1) gives pval = 0.62200 / val = 0.011\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X3 -4) --> X10 (7/12):\n",
+ " Subset 0: (X2 -1) gives pval = 0.42100 / val = 0.023\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X3 -2) --> X10 (8/12):\n",
+ " Subset 0: (X2 -1) gives pval = 0.17700 / val = 0.030\n",
+ " No conditions of dimension 1 left.\n",
+ "\n",
+ " Link (X1 -1) --> X10 (9/12):\n",
+ " Subset 0: (X2 -1) gives pval = 0.19200 / val = 0.029\n",
+ " No conditions of dimension 1 left.\n",
+ "\n",
+ " Link (X4 -1) --> X10 (10/12):\n",
+ " Subset 0: (X2 -1) gives pval = 0.13000 / val = 0.031\n",
+ " No conditions of dimension 1 left.\n",
+ "\n",
+ " Link (X5 -4) --> X10 (11/12):\n",
+ " Subset 0: (X2 -1) gives pval = 0.10400 / val = 0.032\n",
+ " No conditions of dimension 1 left.\n",
+ "\n",
+ " Link (X5 -1) --> X10 (12/12):\n",
+ " Subset 0: (X2 -1) gives pval = 0.21400 / val = 0.026\n",
+ " No conditions of dimension 1 left.\n",
+ "\n",
+ " Sorting parents in decreasing order with \n",
+ " weight(i-tau->j) = min_{iterations} |val_{ij}(tau)| \n",
+ "\n",
+ "Updating parents:\n",
+ "\n",
+ " Variable X10 has 9 link(s):\n",
+ " (X9 -1): max_pval = 0.05100, min_val = 0.041\n",
+ " (X4 -4): max_pval = 0.08100, min_val = 0.038\n",
+ " (X2 -1): max_pval = 0.22300, min_val = 0.029\n",
+ " (X3 -2): max_pval = 0.20500, min_val = 0.028\n",
+ " (X1 -1): max_pval = 0.22200, min_val = 0.028\n",
+ " (X4 -1): max_pval = 0.21200, min_val = 0.028\n",
+ " (X2 -4): max_pval = 0.20900, min_val = 0.027\n",
+ " (X5 -4): max_pval = 0.21900, min_val = 0.026\n",
+ " (X5 -1): max_pval = 0.21900, min_val = 0.023\n",
+ "\n",
+ "Testing condition sets of dimension 2:\n",
+ "\n",
+ " Link (X9 -1) --> X10 (1/9):\n",
+ " Subset 0: (X4 -4) (X2 -1) gives pval = 0.03900 / val = 0.037\n",
+ " Still subsets of dimension 2 left, but q_max = 1 reached.\n",
+ "\n",
+ " Link (X4 -4) --> X10 (2/9):\n",
+ " Subset 0: (X9 -1) (X2 -1) gives pval = 0.01800 / val = 0.047\n",
+ " Still subsets of dimension 2 left, but q_max = 1 reached.\n",
+ "\n",
+ " Link (X2 -1) --> X10 (3/9):\n",
+ " Subset 0: (X9 -1) (X4 -4) gives pval = 0.00800 / val = 0.042\n",
+ " Still subsets of dimension 2 left, but q_max = 1 reached.\n",
+ "\n",
+ " Link (X3 -2) --> X10 (4/9):\n",
+ " Subset 0: (X9 -1) (X4 -4) gives pval = 0.30300 / val = 0.028\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X1 -1) --> X10 (5/9):\n",
+ " Subset 0: (X9 -1) (X4 -4) gives pval = 0.25700 / val = 0.032\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X4 -1) --> X10 (6/9):\n",
+ " Subset 0: (X9 -1) (X4 -4) gives pval = 0.15400 / val = 0.030\n",
+ " Still subsets of dimension 2 left, but q_max = 1 reached.\n",
+ "\n",
+ " Link (X2 -4) --> X10 (7/9):\n",
+ " Subset 0: (X9 -1) (X4 -4) gives pval = 0.84300 / val = 0.014\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X5 -4) --> X10 (8/9):\n",
+ " Subset 0: (X9 -1) (X4 -4) gives pval = 0.05200 / val = 0.036\n",
+ " Still subsets of dimension 2 left, but q_max = 1 reached.\n",
+ "\n",
+ " Link (X5 -1) --> X10 (9/9):\n",
+ " Subset 0: (X9 -1) (X4 -4) gives pval = 0.33800 / val = 0.027\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Sorting parents in decreasing order with \n",
+ " weight(i-tau->j) = min_{iterations} |val_{ij}(tau)| \n",
+ "\n",
+ "Updating parents:\n",
+ "\n",
+ " Variable X10 has 5 link(s):\n",
+ " (X4 -4): max_pval = 0.08100, min_val = 0.038\n",
+ " (X9 -1): max_pval = 0.05100, min_val = 0.037\n",
+ " (X2 -1): max_pval = 0.22300, min_val = 0.029\n",
+ " (X4 -1): max_pval = 0.21200, min_val = 0.028\n",
+ " (X5 -4): max_pval = 0.21900, min_val = 0.026\n",
+ "\n",
+ "Testing condition sets of dimension 3:\n",
+ "\n",
+ " Link (X4 -4) --> X10 (1/5):\n",
+ " Subset 0: (X9 -1) (X2 -1) (X4 -1) gives pval = 0.00700 / val = 0.044\n",
+ " Still subsets of dimension 3 left, but q_max = 1 reached.\n",
+ "\n",
+ " Link (X9 -1) --> X10 (2/5):\n",
+ " Subset 0: (X4 -4) (X2 -1) (X4 -1) gives pval = 0.11400 / val = 0.034\n",
+ " Still subsets of dimension 3 left, but q_max = 1 reached.\n",
+ "\n",
+ " Link (X2 -1) --> X10 (3/5):\n",
+ " Subset 0: (X4 -4) (X9 -1) (X4 -1) gives pval = 0.02800 / val = 0.040\n",
+ " Still subsets of dimension 3 left, but q_max = 1 reached.\n",
+ "\n",
+ " Link (X4 -1) --> X10 (4/5):\n",
+ " Subset 0: (X4 -4) (X9 -1) (X2 -1) gives pval = 0.03600 / val = 0.035\n",
+ " Still subsets of dimension 3 left, but q_max = 1 reached.\n",
+ "\n",
+ " Link (X5 -4) --> X10 (5/5):\n",
+ " Subset 0: (X4 -4) (X9 -1) (X2 -1) gives pval = 0.10300 / val = 0.035\n",
+ " Still subsets of dimension 3 left, but q_max = 1 reached.\n",
+ "\n",
+ " Sorting parents in decreasing order with \n",
+ " weight(i-tau->j) = min_{iterations} |val_{ij}(tau)| \n",
+ "\n",
+ "Updating parents:\n",
+ "\n",
+ " Variable X10 has 5 link(s):\n",
+ " (X4 -4): max_pval = 0.08100, min_val = 0.038\n",
+ " (X9 -1): max_pval = 0.11400, min_val = 0.034\n",
+ " (X2 -1): max_pval = 0.22300, min_val = 0.029\n",
+ " (X4 -1): max_pval = 0.21200, min_val = 0.028\n",
+ " (X5 -4): max_pval = 0.21900, min_val = 0.026\n",
+ "\n",
+ "Testing condition sets of dimension 4:\n",
+ "\n",
+ " Link (X4 -4) --> X10 (1/5):\n",
+ " Subset 0: (X9 -1) (X2 -1) (X4 -1) (X5 -4) gives pval = 0.00800 / val = 0.041\n",
+ " Still subsets of dimension 4 left, but q_max = 1 reached.\n",
+ "\n",
+ " Link (X9 -1) --> X10 (2/5):\n",
+ " Subset 0: (X4 -4) (X2 -1) (X4 -1) (X5 -4) gives pval = 0.07300 / val = 0.036\n",
+ " Still subsets of dimension 4 left, but q_max = 1 reached.\n",
+ "\n",
+ " Link (X2 -1) --> X10 (3/5):\n",
+ " Subset 0: (X4 -4) (X9 -1) (X4 -1) (X5 -4) gives pval = 0.00400 / val = 0.043\n",
+ " Still subsets of dimension 4 left, but q_max = 1 reached.\n",
+ "\n",
+ " Link (X4 -1) --> X10 (4/5):\n",
+ " Subset 0: (X4 -4) (X9 -1) (X2 -1) (X5 -4) gives pval = 0.35500 / val = 0.025\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X5 -4) --> X10 (5/5):\n",
+ " Subset 0: (X4 -4) (X9 -1) (X2 -1) (X4 -1) gives pval = 0.19700 / val = 0.032\n",
+ " Still subsets of dimension 4 left, but q_max = 1 reached.\n",
+ "\n",
+ " Sorting parents in decreasing order with \n",
+ " weight(i-tau->j) = min_{iterations} |val_{ij}(tau)| \n",
+ "\n",
+ "Updating parents:\n",
+ "\n",
+ " Variable X10 has 4 link(s):\n",
+ " (X4 -4): max_pval = 0.08100, min_val = 0.038\n",
+ " (X9 -1): max_pval = 0.11400, min_val = 0.034\n",
+ " (X2 -1): max_pval = 0.22300, min_val = 0.029\n",
+ " (X5 -4): max_pval = 0.21900, min_val = 0.026\n",
+ "\n",
+ "Algorithm converged for variable X10\n",
+ "\n",
+ "## Variable X1\n",
+ "\n",
+ "Iterating through pc_alpha = [0.25]:\n",
+ "\n",
+ "# pc_alpha = 0.25 (1/1):\n",
+ "\n",
+ "Testing condition sets of dimension 0:\n",
+ "\n",
+ " Link (X1 -1) --> X1 (1/36):\n",
+ " Subset 0: () gives pval = 0.91800 / val = -0.000\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X1 -2) --> X1 (2/36):\n",
+ " Subset 0: () gives pval = 0.03800 / val = 0.047\n",
+ " No conditions of dimension 0 left.\n",
+ "\n",
+ " Link (X1 -3) --> X1 (3/36):\n",
+ " Subset 0: () gives pval = 0.87700 / val = 0.002\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X1 -4) --> X1 (4/36):\n",
+ " Subset 0: () gives pval = 0.16500 / val = 0.033\n",
+ " No conditions of dimension 0 left.\n",
+ "\n",
+ " Link (X2 -1) --> X1 (5/36):\n",
+ " Subset 0: () gives pval = 0.29500 / val = 0.023\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X2 -2) --> X1 (6/36):\n",
+ " Subset 0: () gives pval = 0.55500 / val = 0.012\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X2 -3) --> X1 (7/36):\n",
+ " Subset 0: () gives pval = 0.45000 / val = 0.016\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X2 -4) --> X1 (8/36):\n",
+ " Subset 0: () gives pval = 0.43700 / val = 0.018\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X3 -1) --> X1 (9/36):\n",
+ " Subset 0: () gives pval = 0.47700 / val = 0.016\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X3 -2) --> X1 (10/36):\n",
+ " Subset 0: () gives pval = 0.94600 / val = -0.001\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X3 -3) --> X1 (11/36):\n",
+ " Subset 0: () gives pval = 0.38500 / val = 0.020\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X3 -4) --> X1 (12/36):\n",
+ " Subset 0: () gives pval = 0.59100 / val = 0.013\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X4 -1) --> X1 (13/36):\n",
+ " Subset 0: () gives pval = 0.29600 / val = 0.023\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X4 -2) --> X1 (14/36):\n",
+ " Subset 0: () gives pval = 0.38900 / val = 0.019\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X4 -3) --> X1 (15/36):\n",
+ " Subset 0: () gives pval = 0.22400 / val = 0.024\n",
+ " No conditions of dimension 0 left.\n",
+ "\n",
+ " Link (X4 -4) --> X1 (16/36):\n",
+ " Subset 0: () gives pval = 0.45500 / val = 0.017\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X5 -1) --> X1 (17/36):\n",
+ " Subset 0: () gives pval = 0.00500 / val = 0.069\n",
+ " No conditions of dimension 0 left.\n",
+ "\n",
+ " Link (X5 -2) --> X1 (18/36):\n",
+ " Subset 0: () gives pval = 0.08000 / val = 0.039\n",
+ " No conditions of dimension 0 left.\n",
+ "\n",
+ " Link (X5 -3) --> X1 (19/36):\n",
+ " Subset 0: () gives pval = 0.42100 / val = 0.014\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X5 -4) --> X1 (20/36):\n",
+ " Subset 0: () gives pval = 0.82300 / val = 0.007\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X6 -1) --> X1 (21/36):\n",
+ " Subset 0: () gives pval = 0.39000 / val = 0.019\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X6 -2) --> X1 (22/36):\n",
+ " Subset 0: () gives pval = 0.70200 / val = 0.008\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X6 -3) --> X1 (23/36):\n",
+ " Subset 0: () gives pval = 0.25800 / val = 0.027\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X6 -4) --> X1 (24/36):\n",
+ " Subset 0: () gives pval = 0.69000 / val = 0.009\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X7 -1) --> X1 (25/36):\n",
+ " Subset 0: () gives pval = 0.52200 / val = 0.012\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X7 -2) --> X1 (26/36):\n",
+ " Subset 0: () gives pval = 0.28900 / val = 0.022\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X7 -3) --> X1 (27/36):\n",
+ " Subset 0: () gives pval = 0.24000 / val = 0.025\n",
+ " No conditions of dimension 0 left.\n",
+ "\n",
+ " Link (X7 -4) --> X1 (28/36):\n",
+ " Subset 0: () gives pval = 0.29900 / val = 0.022\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X8 -1) --> X1 (29/36):\n",
+ " Subset 0: () gives pval = 0.06100 / val = 0.041\n",
+ " No conditions of dimension 0 left.\n",
+ "\n",
+ " Link (X8 -2) --> X1 (30/36):\n",
+ " Subset 0: () gives pval = 0.19800 / val = 0.028\n",
+ " No conditions of dimension 0 left.\n",
+ "\n",
+ " Link (X8 -3) --> X1 (31/36):\n",
+ " Subset 0: () gives pval = 0.75200 / val = 0.006\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X8 -4) --> X1 (32/36):\n",
+ " Subset 0: () gives pval = 0.34700 / val = 0.020\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X9 -1) --> X1 (33/36):\n",
+ " Subset 0: () gives pval = 0.85200 / val = 0.003\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X9 -2) --> X1 (34/36):\n",
+ " Subset 0: () gives pval = 0.94600 / val = -0.002\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X9 -3) --> X1 (35/36):\n",
+ " Subset 0: () gives pval = 0.75700 / val = 0.007\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X9 -4) --> X1 (36/36):\n",
+ " Subset 0: () gives pval = 0.85700 / val = 0.001\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Sorting parents in decreasing order with \n",
+ " weight(i-tau->j) = min_{iterations} |val_{ij}(tau)| \n",
+ "\n",
+ "Updating parents:\n",
+ "\n",
+ " Variable X1 has 8 link(s):\n",
+ " (X5 -1): max_pval = 0.00500, min_val = 0.069\n",
+ " (X1 -2): max_pval = 0.03800, min_val = 0.047\n",
+ " (X8 -1): max_pval = 0.06100, min_val = 0.041\n",
+ " (X5 -2): max_pval = 0.08000, min_val = 0.039\n",
+ " (X1 -4): max_pval = 0.16500, min_val = 0.033\n",
+ " (X8 -2): max_pval = 0.19800, min_val = 0.028\n",
+ " (X7 -3): max_pval = 0.24000, min_val = 0.025\n",
+ " (X4 -3): max_pval = 0.22400, min_val = 0.024\n",
+ "\n",
+ "Testing condition sets of dimension 1:\n",
+ "\n",
+ " Link (X5 -1) --> X1 (1/8):\n",
+ " Subset 0: (X1 -2) gives pval = 0.56400 / val = 0.015\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X1 -2) --> X1 (2/8):\n",
+ " Subset 0: (X5 -1) gives pval = 0.03900 / val = 0.036\n",
+ " No conditions of dimension 1 left.\n",
+ "\n",
+ " Link (X8 -1) --> X1 (3/8):\n",
+ " Subset 0: (X5 -1) gives pval = 0.41700 / val = 0.022\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X5 -2) --> X1 (4/8):\n",
+ " Subset 0: (X5 -1) gives pval = 0.03400 / val = 0.038\n",
+ " No conditions of dimension 1 left.\n",
+ "\n",
+ " Link (X1 -4) --> X1 (5/8):\n",
+ " Subset 0: (X5 -1) gives pval = 0.46400 / val = 0.015\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X8 -2) --> X1 (6/8):\n",
+ " Subset 0: (X5 -1) gives pval = 0.74200 / val = 0.011\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X7 -3) --> X1 (7/8):\n",
+ " Subset 0: (X5 -1) gives pval = 0.09700 / val = 0.025\n",
+ " No conditions of dimension 1 left.\n",
+ "\n",
+ " Link (X4 -3) --> X1 (8/8):\n",
+ " Subset 0: (X5 -1) gives pval = 0.20400 / val = 0.024\n",
+ " No conditions of dimension 1 left.\n",
+ "\n",
+ " Sorting parents in decreasing order with \n",
+ " weight(i-tau->j) = min_{iterations} |val_{ij}(tau)| \n",
+ "\n",
+ "Updating parents:\n",
+ "\n",
+ " Variable X1 has 4 link(s):\n",
+ " (X5 -2): max_pval = 0.08000, min_val = 0.038\n",
+ " (X1 -2): max_pval = 0.03900, min_val = 0.036\n",
+ " (X7 -3): max_pval = 0.24000, min_val = 0.025\n",
+ " (X4 -3): max_pval = 0.22400, min_val = 0.024\n",
+ "\n",
+ "Testing condition sets of dimension 2:\n",
+ "\n",
+ " Link (X5 -2) --> X1 (1/4):\n",
+ " Subset 0: (X1 -2) (X7 -3) gives pval = 0.10800 / val = 0.032\n",
+ " Still subsets of dimension 2 left, but q_max = 1 reached.\n",
+ "\n",
+ " Link (X1 -2) --> X1 (2/4):\n",
+ " Subset 0: (X5 -2) (X7 -3) gives pval = 0.00000 / val = 0.049\n",
+ " Still subsets of dimension 2 left, but q_max = 1 reached.\n",
+ "\n",
+ " Link (X7 -3) --> X1 (3/4):\n",
+ " Subset 0: (X5 -2) (X1 -2) gives pval = 0.27800 / val = 0.024\n",
+ " Non-significance detected.\n",
+ "\n",
+ " Link (X4 -3) --> X1 (4/4):\n",
+ " Subset 0: (X5 -2) (X1 -2) gives pval = 0.16000 / val = 0.030\n",
+ " Still subsets of dimension 2 left, but q_max = 1 reached.\n",
+ "\n",
+ " Sorting parents in decreasing order with \n",
+ " weight(i-tau->j) = min_{iterations} |val_{ij}(tau)| \n",
+ "\n",
+ "Updating parents:\n",
+ "\n",
+ " Variable X1 has 3 link(s):\n",
+ " (X1 -2): max_pval = 0.03900, min_val = 0.036\n",
+ " (X5 -2): max_pval = 0.10800, min_val = 0.032\n",
+ " (X4 -3): max_pval = 0.22400, min_val = 0.024\n",
+ "\n",
+ "Algorithm converged for variable X1\n",
+ "\n",
+ "## Variable X2\n",
+ "\n",
+ "Iterating through pc_alpha = [0.25]:\n",
+ "\n",
+ "# pc_alpha = 0.25 (1/1):\n",
+ "\n",
+ "Algorithm converged for variable X2\n",
+ "\n",
+ "## Variable X3\n",
+ "\n",
+ "Iterating through pc_alpha = [0.25]:\n",
+ "\n",
+ "# pc_alpha = 0.25 (1/1):\n",
+ "\n",
+ "Algorithm converged for variable X3\n",
+ "\n",
+ "## Variable X4\n",
+ "\n",
+ "Iterating through pc_alpha = [0.25]:\n",
+ "\n",
+ "# pc_alpha = 0.25 (1/1):\n",
+ "\n",
+ "Algorithm converged for variable X4\n",
+ "\n",
+ "## Variable X5\n",
+ "\n",
+ "Iterating through pc_alpha = [0.25]:\n",
+ "\n",
+ "# pc_alpha = 0.25 (1/1):\n",
+ "\n",
+ "Algorithm converged for variable X5\n",
+ "\n",
+ "## Variable X6\n",
+ "\n",
+ "Iterating through pc_alpha = [0.25]:\n",
+ "\n",
+ "# pc_alpha = 0.25 (1/1):\n",
+ "\n",
+ "Algorithm converged for variable X6\n",
+ "\n",
+ "## Variable X7\n",
+ "\n",
+ "Iterating through pc_alpha = [0.25]:\n",
+ "\n",
+ "# pc_alpha = 0.25 (1/1):\n",
+ "\n",
+ "Algorithm converged for variable X7\n",
+ "\n",
+ "## Variable X8\n",
+ "\n",
+ "Iterating through pc_alpha = [0.25]:\n",
+ "\n",
+ "# pc_alpha = 0.25 (1/1):\n",
+ "\n",
+ "Algorithm converged for variable X8\n",
+ "\n",
+ "## Variable X9\n",
+ "\n",
+ "Iterating through pc_alpha = [0.25]:\n",
+ "\n",
+ "# pc_alpha = 0.25 (1/1):\n",
+ "\n",
+ "Algorithm converged for variable X9\n",
+ "\n",
+ "## Resulting lagged parent (super)sets:\n",
+ "\n",
+ " Variable X10 has 4 link(s):\n",
+ " (X4 -4): max_pval = 0.08100, min_val = 0.040\n",
+ " (X9 -1): max_pval = 0.11400, min_val = 0.034\n",
+ " (X2 -1): max_pval = 0.22300, min_val = 0.029\n",
+ " (X5 -4): max_pval = 0.21900, min_val = 0.026\n",
+ "\n",
+ " Variable X1 has 3 link(s):\n",
+ " (X1 -2): max_pval = 0.03900, min_val = 0.036\n",
+ " (X5 -2): max_pval = 0.10800, min_val = 0.032\n",
+ " (X4 -3): max_pval = 0.22400, min_val = 0.024\n",
+ "\n",
+ " Variable X2 has 0 link(s):\n",
+ "\n",
+ " Variable X3 has 0 link(s):\n",
+ "\n",
+ " Variable X4 has 0 link(s):\n",
+ "\n",
+ " Variable X5 has 0 link(s):\n",
+ "\n",
+ " Variable X6 has 0 link(s):\n",
+ "\n",
+ " Variable X7 has 0 link(s):\n",
+ "\n",
+ " Variable X8 has 0 link(s):\n",
+ "\n",
+ " Variable X9 has 0 link(s):\n",
+ "\n",
+ "##\n",
+ "## Step 2: MCI algorithm\n",
+ "##\n",
+ "\n",
+ "Parameters:\n",
+ "\n",
+ "independence test = cmi_knn\n",
+ "tau_min = 0\n",
+ "tau_max = 4\n",
+ "max_conds_py = None\n",
+ "max_conds_px = None\n",
+ "\n",
+ " link (X1 -1) --> X10 (1/36):\n",
+ " with conds_y = [ (X4 -4) (X9 -1) (X2 -1) (X5 -4) ]\n",
+ " with conds_x = [ (X1 -3) (X5 -3) (X4 -4) ]\n",
+ "\n",
+ " link (X1 -2) --> X10 (2/36):\n",
+ " with conds_y = [ (X4 -4) (X9 -1) (X2 -1) (X5 -4) ]\n",
+ " with conds_x = [ (X1 -4) (X5 -4) (X4 -5) ]\n",
+ "\n",
+ " link (X1 -3) --> X10 (3/36):\n",
+ " with conds_y = [ (X4 -4) (X9 -1) (X2 -1) (X5 -4) ]\n",
+ " with conds_x = [ (X1 -5) (X5 -5) (X4 -6) ]\n",
+ "\n",
+ " link (X1 -4) --> X10 (4/36):\n",
+ " with conds_y = [ (X4 -4) (X9 -1) (X2 -1) (X5 -4) ]\n",
+ " with conds_x = [ (X1 -6) (X5 -6) (X4 -7) ]\n",
+ "\n",
+ " link (X2 -1) --> X10 (5/36):\n",
+ " with conds_y = [ (X4 -4) (X9 -1) (X5 -4) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X2 -2) --> X10 (6/36):\n",
+ " with conds_y = [ (X4 -4) (X9 -1) (X2 -1) (X5 -4) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X2 -3) --> X10 (7/36):\n",
+ " with conds_y = [ (X4 -4) (X9 -1) (X2 -1) (X5 -4) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X2 -4) --> X10 (8/36):\n",
+ " with conds_y = [ (X4 -4) (X9 -1) (X2 -1) (X5 -4) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X3 -1) --> X10 (9/36):\n",
+ " with conds_y = [ (X4 -4) (X9 -1) (X2 -1) (X5 -4) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X3 -2) --> X10 (10/36):\n",
+ " with conds_y = [ (X4 -4) (X9 -1) (X2 -1) (X5 -4) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X3 -3) --> X10 (11/36):\n",
+ " with conds_y = [ (X4 -4) (X9 -1) (X2 -1) (X5 -4) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X3 -4) --> X10 (12/36):\n",
+ " with conds_y = [ (X4 -4) (X9 -1) (X2 -1) (X5 -4) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X4 -1) --> X10 (13/36):\n",
+ " with conds_y = [ (X4 -4) (X9 -1) (X2 -1) (X5 -4) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X4 -2) --> X10 (14/36):\n",
+ " with conds_y = [ (X4 -4) (X9 -1) (X2 -1) (X5 -4) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X4 -3) --> X10 (15/36):\n",
+ " with conds_y = [ (X4 -4) (X9 -1) (X2 -1) (X5 -4) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X4 -4) --> X10 (16/36):\n",
+ " with conds_y = [ (X9 -1) (X2 -1) (X5 -4) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X5 -1) --> X10 (17/36):\n",
+ " with conds_y = [ (X4 -4) (X9 -1) (X2 -1) (X5 -4) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X5 -2) --> X10 (18/36):\n",
+ " with conds_y = [ (X4 -4) (X9 -1) (X2 -1) (X5 -4) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X5 -3) --> X10 (19/36):\n",
+ " with conds_y = [ (X4 -4) (X9 -1) (X2 -1) (X5 -4) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X5 -4) --> X10 (20/36):\n",
+ " with conds_y = [ (X4 -4) (X9 -1) (X2 -1) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X6 -1) --> X10 (21/36):\n",
+ " with conds_y = [ (X4 -4) (X9 -1) (X2 -1) (X5 -4) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X6 -2) --> X10 (22/36):\n",
+ " with conds_y = [ (X4 -4) (X9 -1) (X2 -1) (X5 -4) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X6 -3) --> X10 (23/36):\n",
+ " with conds_y = [ (X4 -4) (X9 -1) (X2 -1) (X5 -4) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X6 -4) --> X10 (24/36):\n",
+ " with conds_y = [ (X4 -4) (X9 -1) (X2 -1) (X5 -4) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X7 -1) --> X10 (25/36):\n",
+ " with conds_y = [ (X4 -4) (X9 -1) (X2 -1) (X5 -4) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X7 -2) --> X10 (26/36):\n",
+ " with conds_y = [ (X4 -4) (X9 -1) (X2 -1) (X5 -4) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X7 -3) --> X10 (27/36):\n",
+ " with conds_y = [ (X4 -4) (X9 -1) (X2 -1) (X5 -4) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X7 -4) --> X10 (28/36):\n",
+ " with conds_y = [ (X4 -4) (X9 -1) (X2 -1) (X5 -4) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X8 -1) --> X10 (29/36):\n",
+ " with conds_y = [ (X4 -4) (X9 -1) (X2 -1) (X5 -4) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X8 -2) --> X10 (30/36):\n",
+ " with conds_y = [ (X4 -4) (X9 -1) (X2 -1) (X5 -4) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X8 -3) --> X10 (31/36):\n",
+ " with conds_y = [ (X4 -4) (X9 -1) (X2 -1) (X5 -4) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X8 -4) --> X10 (32/36):\n",
+ " with conds_y = [ (X4 -4) (X9 -1) (X2 -1) (X5 -4) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X9 -1) --> X10 (33/36):\n",
+ " with conds_y = [ (X4 -4) (X2 -1) (X5 -4) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X9 -2) --> X10 (34/36):\n",
+ " with conds_y = [ (X4 -4) (X9 -1) (X2 -1) (X5 -4) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X9 -3) --> X10 (35/36):\n",
+ " with conds_y = [ (X4 -4) (X9 -1) (X2 -1) (X5 -4) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X9 -4) --> X10 (36/36):\n",
+ " with conds_y = [ (X4 -4) (X9 -1) (X2 -1) (X5 -4) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X1 -1) --> X1 (1/36):\n",
+ " with conds_y = [ (X1 -2) (X5 -2) (X4 -3) ]\n",
+ " with conds_x = [ (X1 -3) (X5 -3) (X4 -4) ]\n",
+ "\n",
+ " link (X1 -2) --> X1 (2/36):\n",
+ " with conds_y = [ (X5 -2) (X4 -3) ]\n",
+ " with conds_x = [ (X1 -4) (X5 -4) (X4 -5) ]\n",
+ "\n",
+ " link (X1 -3) --> X1 (3/36):\n",
+ " with conds_y = [ (X1 -2) (X5 -2) (X4 -3) ]\n",
+ " with conds_x = [ (X1 -5) (X5 -5) (X4 -6) ]\n",
+ "\n",
+ " link (X1 -4) --> X1 (4/36):\n",
+ " with conds_y = [ (X1 -2) (X5 -2) (X4 -3) ]\n",
+ " with conds_x = [ (X1 -6) (X5 -6) (X4 -7) ]\n",
+ "\n",
+ " link (X2 -1) --> X1 (5/36):\n",
+ " with conds_y = [ (X1 -2) (X5 -2) (X4 -3) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X2 -2) --> X1 (6/36):\n",
+ " with conds_y = [ (X1 -2) (X5 -2) (X4 -3) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X2 -3) --> X1 (7/36):\n",
+ " with conds_y = [ (X1 -2) (X5 -2) (X4 -3) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X2 -4) --> X1 (8/36):\n",
+ " with conds_y = [ (X1 -2) (X5 -2) (X4 -3) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X3 -1) --> X1 (9/36):\n",
+ " with conds_y = [ (X1 -2) (X5 -2) (X4 -3) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X3 -2) --> X1 (10/36):\n",
+ " with conds_y = [ (X1 -2) (X5 -2) (X4 -3) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X3 -3) --> X1 (11/36):\n",
+ " with conds_y = [ (X1 -2) (X5 -2) (X4 -3) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X3 -4) --> X1 (12/36):\n",
+ " with conds_y = [ (X1 -2) (X5 -2) (X4 -3) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X4 -1) --> X1 (13/36):\n",
+ " with conds_y = [ (X1 -2) (X5 -2) (X4 -3) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X4 -2) --> X1 (14/36):\n",
+ " with conds_y = [ (X1 -2) (X5 -2) (X4 -3) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X4 -3) --> X1 (15/36):\n",
+ " with conds_y = [ (X1 -2) (X5 -2) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X4 -4) --> X1 (16/36):\n",
+ " with conds_y = [ (X1 -2) (X5 -2) (X4 -3) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X5 -1) --> X1 (17/36):\n",
+ " with conds_y = [ (X1 -2) (X5 -2) (X4 -3) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X5 -2) --> X1 (18/36):\n",
+ " with conds_y = [ (X1 -2) (X4 -3) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X5 -3) --> X1 (19/36):\n",
+ " with conds_y = [ (X1 -2) (X5 -2) (X4 -3) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X5 -4) --> X1 (20/36):\n",
+ " with conds_y = [ (X1 -2) (X5 -2) (X4 -3) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X6 -1) --> X1 (21/36):\n",
+ " with conds_y = [ (X1 -2) (X5 -2) (X4 -3) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X6 -2) --> X1 (22/36):\n",
+ " with conds_y = [ (X1 -2) (X5 -2) (X4 -3) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X6 -3) --> X1 (23/36):\n",
+ " with conds_y = [ (X1 -2) (X5 -2) (X4 -3) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X6 -4) --> X1 (24/36):\n",
+ " with conds_y = [ (X1 -2) (X5 -2) (X4 -3) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X7 -1) --> X1 (25/36):\n",
+ " with conds_y = [ (X1 -2) (X5 -2) (X4 -3) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X7 -2) --> X1 (26/36):\n",
+ " with conds_y = [ (X1 -2) (X5 -2) (X4 -3) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X7 -3) --> X1 (27/36):\n",
+ " with conds_y = [ (X1 -2) (X5 -2) (X4 -3) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X7 -4) --> X1 (28/36):\n",
+ " with conds_y = [ (X1 -2) (X5 -2) (X4 -3) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X8 -1) --> X1 (29/36):\n",
+ " with conds_y = [ (X1 -2) (X5 -2) (X4 -3) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X8 -2) --> X1 (30/36):\n",
+ " with conds_y = [ (X1 -2) (X5 -2) (X4 -3) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X8 -3) --> X1 (31/36):\n",
+ " with conds_y = [ (X1 -2) (X5 -2) (X4 -3) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X8 -4) --> X1 (32/36):\n",
+ " with conds_y = [ (X1 -2) (X5 -2) (X4 -3) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X9 -1) --> X1 (33/36):\n",
+ " with conds_y = [ (X1 -2) (X5 -2) (X4 -3) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X9 -2) --> X1 (34/36):\n",
+ " with conds_y = [ (X1 -2) (X5 -2) (X4 -3) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X9 -3) --> X1 (35/36):\n",
+ " with conds_y = [ (X1 -2) (X5 -2) (X4 -3) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ " link (X9 -4) --> X1 (36/36):\n",
+ " with conds_y = [ (X1 -2) (X5 -2) (X4 -3) ]\n",
+ " with conds_x = [ ]\n",
+ "\n",
+ "## Significant links at alpha = 0.05:\n",
+ "\n",
+ " Variable X10 has 3 link(s):\n",
+ " (X4 -4): pval = 0.01600 | val = 0.040\n",
+ " (X2 -1): pval = 0.00900 | val = 0.039\n",
+ " (X9 -1): pval = 0.02600 | val = 0.039\n",
+ "\n",
+ " Variable X1 has 4 link(s):\n",
+ " (X3 -3): pval = 0.00100 | val = 0.050\n",
+ " (X1 -2): pval = 0.00900 | val = 0.039\n",
+ " (X1 -3): pval = 0.03900 | val = 0.035\n",
+ " (X5 -2): pval = 0.02200 | val = 0.032\n",
+ "\n",
+ " Variable X2 has 0 link(s):\n",
+ "\n",
+ " Variable X3 has 0 link(s):\n",
+ "\n",
+ " Variable X4 has 0 link(s):\n",
+ "\n",
+ " Variable X5 has 0 link(s):\n",
+ "\n",
+ " Variable X6 has 0 link(s):\n",
+ "\n",
+ " Variable X7 has 0 link(s):\n",
+ "\n",
+ " Variable X8 has 0 link(s):\n",
+ "\n",
+ " Variable X9 has 0 link(s):\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Convert to a tigramite pp.DataFrame\n",
+ "dataframe = pp.DataFrame(df_stat.to_numpy(),\n",
+ " datatime = np.arange(len(df_stat)), \n",
+ " var_names=df_stat.columns)\n",
+ "\n",
+ "# Use CMIknn as the conditional independence test\n",
+ "cmi_knn = CMIknn(significance='shuffle_test', shuffle_neighbors=5, transform='ranks')\n",
+ "\n",
+ "# Configure the links that you want PCMCI to check for causal relationships\n",
+ "target_column_indices = [0,1]\n",
+ "selected_links = get_selected_links(df_stat, \n",
+ " tau_min=tau_min, \n",
+ " tau_max=tau_max, \n",
+ " selected_columns_indices=target_column_indices)\n",
+ "\n",
+ "# Instantiate PCMCI\n",
+ "pcmci_cmi_knn = PCMCI(\n",
+ " dataframe=dataframe, \n",
+ " cond_ind_test=cmi_knn,\n",
+ " verbosity=2)\n",
+ "\n",
+ "# Run the PCMCI algorithm with a given alpha.\n",
+ "# In this example the value is deliberately set high (not very conservative) to make sure we get results.\n",
+ "# Remember that we are working with a toy dataset! In practice, the alpha value should be lower (roughly in [0.05, 0.4]).\n",
+ "# Note that running this function is compute-intensive and therefore takes a long time!\n",
+ "alpha = 0.25\n",
+ "results = pcmci_cmi_knn.run_pcmci(selected_links=selected_links, \n",
+ " tau_min=tau_min, \n",
+ " tau_max=tau_max, \n",
+ " pc_alpha=alpha)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "a66ca3d3",
+ "metadata": {},
+ "source": [
+ "## Visualize PCMCI results"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 14,
+ "id": "02066fad",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\n",
+ "## Significant links at alpha = 0.01:\n",
+ "\n",
+ " Variable X10 has 1 link(s):\n",
+ " (X2 -1): pval = 0.00900 | val = 0.039\n",
+ "\n",
+ " Variable X1 has 2 link(s):\n",
+ " (X3 -3): pval = 0.00100 | val = 0.050\n",
+ " (X1 -2): pval = 0.00900 | val = 0.039\n",
+ "\n",
+ " Variable X2 has 0 link(s):\n",
+ "\n",
+ " Variable X3 has 0 link(s):\n",
+ "\n",
+ " Variable X4 has 0 link(s):\n",
+ "\n",
+ " Variable X5 has 0 link(s):\n",
+ "\n",
+ " Variable X6 has 0 link(s):\n",
+ "\n",
+ " Variable X7 has 0 link(s):\n",
+ "\n",
+ " Variable X8 has 0 link(s):\n",
+ "\n",
+ " Variable X9 has 0 link(s):\n",
+ "X10\n",
+ "X1\n",
+ "X2\n",
+ "X3\n",
+ "X4\n",
+ "X5\n",
+ "X9\n"
+ ]
+ },
+ {
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAaMAAAD+CAYAAACX8mppAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjUuMiwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8qNh9FAAAACXBIWXMAAAsTAAALEwEAmpwYAABC0UlEQVR4nO2dd5gURfrHP7WpB2WGcWaVJCuoiAHDgZ4JFQURFeOp53lnOD0DpjOAop45nIo5HQbM8Sd6nhkUxYRgRBBRQVCCKOwMszOE6d2dqd8f1QvLshN2d+Lu+3mefna2u7rq7d3p/nZVvfW+SmuNIAiCIOSTknwbIAiCIAgiRoIgCELeETESBEEQ8o6IkSAIgpB3RIwEQRCEvCNiJAiCIOQdESNBEAQh74gYCYIgCHlHxEgQBEHIOyJGgiAIQt4RMRIEQRDyjoiRIAiCkHdEjARBEIS8I2IkCIIg5J2yfBsgCGkTDWwMbO9sm2G+vzawEJgNzMPlr8ufgYIgtBYl+YyEgiYaqASOBc4EdgBWY3r0lvMzBkSdzxXAZOAR4A1cfjsfJguC0HJEjITCJBroDFwGXAjEgY1bcHYE02O6AHgOlz+ecfsEQcgoIkZC4RENDAFexPR+NmpDTSuB+cBhuPwLM2GaIAjZQRwYhMIiGhgJvAZsQtuECKAzZn5pJtHAH9tqmiAI2UN6RkLhEA38BTPf01YRao6VwABc/rlZqFsQhDYiYiQUBtHAzsBUsiNEYOadFgI74fJHstSGIAitRIbphPwTDSjgeaBTFlspAboBV2exDUEQWomIkVAIHAxsDqjmDi5atIQ+2w4gGFwBwIoVIfpsO4AZ38xiz/2Gs8OAvdlpt3154cX/pmrHBYwkGvBn0nhBENqODNMJ+Sca+AgYlKzIrbffw7z5C3jo/js589yL6L1FFX86cgRKKfpuvRW//rqUgXsPYc7Xn+L1dklW1RrgSlz+2zN5CYIgtA0RIyG/RANlmHVBrmTF6urqGLjXEE496QQefuwpZkyfQnl5+Xpldv7jfkx49lH6br1Vqlan4PLv3ya7BUHIKBIOSMg3OwB1pBCj8vJyxt50DcMPP45Jr0/YQIg++/wramtr2WrLPkkbi8chFmOfeG1wKvA7sDTBtszy+GKtvShBEFqGiJGQbzbFeLql5K2Jk+nerSvfzp7DgUMGr92/dOlvnHjaSJ54+H5KShJPg8Y11NUDqFJgzxTNxe1wcDlGmBYCPwDfN/y0PL5AOjYLgpAeIkZCvklLiGZ8M4t33pvCtA8mMmjIoRx/7FF0796NcDjCoUf/hRuvuYI9dt81eUNxSOAj0RwlQFdn26XpQTscrKaJQDk/51seX326jQiCYJA5IyG/RAN9gRkkWV+ktWavwQdz3VVjOHDIYO594GGmffYFjz10Lwcf8WcOO+QgLjjvrJRNxeJQX5+2GDVnSDqlYkA16GVAw7YcWK5MQFfFOkVUGDFeDaxyttVJfq6u6FKZtzh7tTXVCugOLK/oUinR0YWMImIk5BezxigEeBIVeWj8E0x+/0NeeHo8ALFYjN0GDeWIEQdzw813sMP2264t+/hD97LLzjs2W4/WUB9r1ENK+N1vfn8bZCyTrGSdyDUI3bJmtuVAdaZEo7amuhcmTNPOQDVwE3B3VsUxslABPQE/UIrxhPwZd9WarLUp5A0RIyH/RAMvA0fQxnVv9fX1PPnMC3w8dTrLli+ne7duHH/s0QzedxBooz1aE9OaoIaNVfaiPRQSy4AFTbb5zs9F6YpVbU31E8BJTXZ/CPy9okvl/IxZG1m4I3AiMALog3kzaLBRYRZGVwNfA48Dr+OuWp2x9oW8IWIk5J9oYBdMKKAWRWBo+O46IsOLL/+Pm8bezXnnnUe3bt1YsGABd9xxB6MvOI/T/35y5u0ufuLAItYXqbnALGBuY6GqrameBuzeTB2rgNHAuIoula17mJge0GHAnZgoGeXOlvJMp9yj
AAAAASUVORK5CYII=",
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {
+ "needs_background": "light"
+ },
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAA
GVwBfMqJJx4E8DKpn1KqpU8iaQXa6gqcHRtdhKxGTqc0NUQlbkcSwv22UGDF34P6caitBsnZU7f4up+W4t1Iv1XLXPsUMrH+1Ikncl4m9TICfe/zEyeeqJl2ezbw0sn9kJVPkKuImKLcy6ReiUQ8h+MGCuSRMftZJ554wl9NXE/pGFyNuCPPbdv2HKWj9xS01osQ74RNkEGzCnhm8w3Xz/T39YbD1i92BoenxTbrm41uonSgt9Xe6RenORx5IMNaWpj7jDHffXLZisezE7mdkGV+HvGAun2bTTf8kV/rosAYsLMzOPyEl072IJ44nwc2Lr90cbO12r2vAz7hxBP/LBzwMqlDEHfcIJ9z4om25Zjy4wo+iKycasU/rAJ+tmrN2sueG01ti0z+85Hf6/GtNt4g3tMTC2ugn3cGh2tmsvXrZJwKnEt1E9XzyIR4oRNPeP73+pHkdsE8Pc8hY2dlrXtOF37U7msRk84Qsu+zsren59YtN17yB0pdQ58FdqpXZMmvt/15qpofAXE1/pQTTxQVNy+TOoXyFcEZTjxxQcQ/p+1orTdFzLQjyFhLI6nn/+C6bjvrQ3QkHbdS0FrHEHPAu4B9kGVmIYfNJLBmzVpvXn9fSdMfBt43jc16J+WazwfbIRCyY6MDyCR9DrU13yxweWZ87VVLV6Rejkw+8xDzWT+i7Wb7ensmlRRcCXKaLxD2Q/Zgdi6/fF1hcCvw8XC6CS+TSiDeX0E05W6MTZEdG90Z0cqPovZeis4b8/0nl46OT0xOnoxEb+eYKgCfA9as8bILFs4PZiThLsR8VhEvk+oFjkc236tVMUsiaZi/68QTYRfWcylP3HbabAgErfWOyFg+HnmeBpAEcAbw+np7oDRdi0Eqy1X3UsukdkD678gat74GUSRK9pZ8T61vhM69hfLxNO1orecjK/MzgB0RJWsAGTsTSO6jHq31b4DvhzPHdhMdtVLQWq+HZCt8OTWyQ8YXzGOD9cTLMG9M3vOy+yfW37AsO2Q78DKpzZFJLtiem4DXtbK8zY6NDiGC7/3U3hx9Agns+tHDTz9/CFIFr5caHi5bbryE3h4xsXsTEzc4fX1nIeakCoFndYXB35CJ7eZKf6+XSf0I0d4LTCLZYcsS8UXFN6Htg5hcXlfj1AnEjPW9R55+/kkjuXu2osbYWTS4kJGEbAHk8/kJLzuxS2L9DR8In+evDt+KTHjVhHUGiaD/phNPlO1jeZnUzkhqiKAG83Mnnji+xt/UdvxcUB9CEhb2USUTaEwpttx4CbGYyN612YmfDI0sOanSuV4mtRWSgfdYqgvrmxBF4m8Vvq+QVWdw32sceIkTTzwW4c9qG1rr7REPwsXUzyKbRwTEz4D3VcsTNZfpGKGgtd4AmYA2IIJL38L583D6esmMr5nITuSeA/ZwXbft6SW8TOpqSpO9rUIG7hPNXC87Nrohsqo5neoblCAPzHeB6/uHRia11p9GVhN102b39vQwtHA+k5N5+vt7c4ML5veosuCyusLgPsSf/PfVhJ+XSb0GWUEE+aITT5xbr42V8E1ohyErg1omtGeQDcqL+4dGlmmtt0U0/iEirH4HF8ynr7eHsVXjXm5y8r/AXq7rpqE4WR2K+OeH61UXWIO4QX+1Wm4d/zp/C/0dzyMBjjOWbtkXCD9EEkvWTZvd39tLfME8shM5MuNrxoFjXNf9XeFzPwL5E0jajWp9fSdwrhNP3FrtPl4m9VZk8z/Ie514om210qOgtX45sjoJZ0Wtxzjy+76h20xKUcpxLkQ05XONMZf4x+JIkfAPAFsCJyA1WkeB7xljvlrlchXxB65GNLJmCmlPICakl7SzVqqXSe2FbFYGacremR0b3RaZ1E+gutDLIdGiX+kfGnmwcFBrfRSSJyhKLnyUUiTiCxgeXFjU+kJn1BIG/0U0yisq5Z4v4E96f0WSyRV4EHhZwZ4eFT8Q73gkJXjVGsbImPsKcGn/0MgEFMuVPonEbDRTNMoD
7nBd9wB/Q/gCSv+mkqYipo0v1stv5WVSb0G8a4Ic5sQTVzfRxqbRWn8AEXDNluQcB3bddvNNnkCEwQeo7vzwTyRw74+1VtH+PstDlLrn3g68ttaYazd+udInqa2c1WIN8CvXdU9uW6M6gLpalTFmlVLqNOAXSqkbjDHLkQfzHmPMFUqpDyEP9P2I2+QNSqmnjTFhLaAWByL22mYEAv73NkdMDW2JDPUnvbC30V00aO/Mjo3uhDwoR1A98rdQHe2b/UMjJRlB/T2WrxBRIMQXzGO9oUF8+3CImsLgCcQccEnERGJvpnzyfGcjAsGvZ3w6MtHUShD3F8Rmf13/0Eh40jgOMRc1W0XQUUq9OpMc/XF/X+/bqezaOol4LH0uyj6Sn9wtnBfrt7MgEAaQibyVGs39C+fP+x7i7FEtCvlB/z6/jWhSfSelAiGHjJ2Zrk9wJs3POSD7VcdorT/hV3HrCiJtNBtjrldKXQucp5T6AbLpt6P/WdD18b9KqauRZGONCIUvEL3CVDUW+tdpV7qAQyn3sf5Q1IGbHRvdHrFHH1XjtOWIGeJ74dTUAQ6huutjkf6+XtYfTjDPqbQIURiolp7oOWRT+OKo7qNVJr2rotY88FcG70T2KpbUOPUa4Mv9QyMVr+t70XyWFsbOvAGHJcOL5vX19Z5Y4WODxMJ8OpSgrh6nUDqBTiLRzjPNSbTgTBKLKUYSid6h+IJ9qpzyGLIBf2nUkrNeJhVHVqJBLgx6JM0E/sby2URPlV4Nhex9vbflRnUIjQyYsxCN4ADgbGNM2fLZt1vvRQPatNZ6G2qHvjfCi7XW27iu+2grF/E9TsJBV9c48cSf6303Oza6JTLoj6e69voE4gX04wgJ195PjUlPKcV6Q3ES8QUVJ32DQilVaYmyEhEGFzSRhO1kSs08eeBj9b7kp/M+AemfTauclkPqB3ylf2ikbAM4xKtpcukfiylGFiUYWlh1AXY1sknaUGpoL5NaiEyUQS6a6UnP5/00KTAXzp/H4uFE0WEhxFLkb/xJE/msPkhp9t5xRLDPNDXqgDSEgygB655QMMasVEo9gNQPqFba7tPIRNhIQNdGiK22VYkNsrewIZIBtBVOpFRQ1Z30smOjGyFmolOoviS9DzFJ/bp/aCRqrvdNqn2wYN4AixcNVTQVycogVs1edSmSwK/h8oleJrUA+Z2D/MiJJ8pKNxbwo4+PRlZO1UwQqxF/9W/2D41EdfVtpI50kToT3jPIvlGzyRTPonT1M04Nl9dppmZurEr09vSweDjBwvlVH8cLgI82E4nt54c6O3T467NUf2QjotXyjkKv1np+tWqHc43IQkEpdRywBeJm9mXEnTL4+XsQ7XgvY0wjG42N1FuNQrO2ZaAYsRp+iH9aTWPMjo0uRrxl3k2pj3eQexGb67WNVOjyKft7ent6WH94iAXzKt8ujyJWYdWQm5z0ent6jnDiiWsbbEOQ91HqQruWciEBFF1LD0M0wWqePOOICe1rTdQraOi3rjXhGWOYnMz/ore3592V3Euj4KfLDkcDf9OJJ+pGSk8TEYvsCUMLF7DeoiF6KjgnTORy6b7e3oOjrJZr8AlK98ZGKc9dNlO0NE+EMLR/Hps1IgkFpdT6SJj7UUjCuQeUUpcYY+7wPz8ZsavtbYxpNA/QsqjtiEAvYqdvhTMp1UA9ys0BZMdGE8hS+P1UX6I/hJhKrqywQRqVZQTyHi0aXFjVqyhvIBaLlY12YwypzCqSY5nf7bDDDk0LBC+TGkEEYJBvhQu++MLgdYh5atcql8siWueX+odGmtUUXyCY+boGQwsXMLJoqGK/edkJXkiuzK31smcWXFOb5OOUjoUVQEOeeG0mSe2CRwD09fayZL1FzBsoV5yNMSTHMmZlOvP9HXfcsWmB4GVS2yC1JYJ8rlkB3AZeQMZgIxltq6EQV/WuIOpk/B3gKmPMLQC+x9GFfiqFI5FNx9caY5oJOnkImfjqFSKJwvP+9ZrCz3gZ
zlFzXrBGsB+B/D5kcqyWp/9xfE+eapXMGuAHwEsHnP6F6y8awukvt0wZsRURi5Urhl52ghdWrGRtNrua1iNFP0Z5jd4SD63s2Ogr/WPhehMFJhH32s+FPa2a4A7/elWRDfhaE16alWMZY+Au13WbjjL2c2OdHjr8hVlOeHcx8ptV9T4aHoqzaGiw4spyjefxwoqVZCdya5Fkgq3wOUrnm8eZhcjlANfSntVCDvh1O13hZ5u6naKUOgzYE/GxB8AYcxHitfJJRBtcD/i7UmqV//p+1Ab4nXkurUvaVcDHWvxxPkqpZpVCXCHJjo2q7NjoW5CV0peoLBCeRcxq2/cPjfysDQKBzTZYfMOS9RK9my4ZqSgQ8gZULFa2yZw3htHUGE8tXcbabBbEH7vp8oleJrUFYiIL8gUnnkgBZMdGN82Ojf4SCVyqJBAMEoOxff/QyDvbIBDwg4a+gOxHlKCA4aFBNttwSUWBsGatx1NLl5Ecy2DEhNVUwF2Az1G6l/QkkoxvNvkOVVZSA/39bLbhEtZLDJUJhHw+zwvJlTzz/HKyE7k88BfXdZuuxexlUi9H9pSCfLzReJZ24rpuEnEDb7UNE1Qxn85VOiKi2XctfBIx2zRkB/UxyIS8heu6TU3EXia1GRIAF5xBPuzEE1/Jjo2+DMnKWk37XY54K32/f2ikLeUU/dq4JyExCsPhz/NGPI8qeRytWeuxbMVKJnLFvezVwLGu6zbtJ+9lUj9DcjQVeBrYTuVzPYgd/RyqOwtcCXwygjdRw2itFyK/fXEFM+D0s/7woopCdDKfZ8XKMcZWFeVIDvi367rhyn2R8ZPB/TN0+HgnnmhVu24ZrfX/ISvbeSBjZiQxyFB8YcWxs3p8DS8kU+Qmi4/ROLCv67plqSqi4mVSNwL7Bw7dC7x8FuISStBaF5xSmo3jyAI3uK7bLk+mjqCdmy1N40/k+yOaeaMDpZAd9IBmBYLPZygVCM+Sn7w8OzZ6MZLlspJAGEPsyFv1D418s40CwUUiPC+igkCYzBtiFVYHk/k8y1as5Jlly8MC4TzgdzSJl0m9FAkSm8KYT6l87gimIqArCYQ/Arv2D40cMR0CAcB13VVIlbhVSimzeFGCTZYsrigQVo2v4annlgUFwgRiW271oQ67L9+PuNV2Ap9CzGzj8wcG2HyjJSQG42VjJzc5ydLlK3hu+YqwQPhwiwLhAEoFAsBHZlsgALiuuxR4C/J3NoqHFJaa0TxWM0FHrBQKaK23RiaSjYnmoroGcSF8g+u6/2v2vl4m9RLEXXTqSTH53yiTfz2VN5FzSF6iz9YIOmu8HenkALIU/SAV9nvyxowrFZtfScNbNb6GF5IrmZwsPmt5ZOB+BDi/FbOal0ldSzAlsjGPYSaXK9i9ylfuA97fPzRya7P3bJSlzzx1xML58y/rr+Cfm5ucZHkyxarxEpm9GjEFHtJKziwvk9oP8cgLclAn1Up45qkn5g/09/8tvmB+RQ+w9KrVLF85Rj5fHDuTyNh5h+u6jQShluAnFbwH2CVw+GZg/06qlaC1fg3wW8QzKsrG82rETHqU67qpaWzarNBRQgFAaz0PscufjgiHcFbHCWRSfhrJHvp913Vb0tC9TOoaJHJYMGYCM9lXxY51HfDB/qGR/7Ryz7I2pJO7IcVHXlzh4zVI+ciyz3K5SZavLJnwMoh73BXAN1zXbakUppdJ7YMkDJsiP+nHSJexHLHN/6gd+ylR8TKpYxH7cJkZYCyzmtFUinzegEx0BjETfgf4seu6UeNFKt03BtyNZPUtcCuwb6dMel4mtSGSg2nP8GfZiRwvJFeyZq0H0i+Ffb1LkLHTSBR3pXsfQ/mK6RVOPPH3Vq47HfgJOc9EUq7P919BS8oa//2/gPOBS7tpczlIxwmFIFrrHRA32K0Rm3EaWbJd5rpu015GQbxMam/gtpKDlSe9h4AP9A+N/LEd9y3eX+rhfhzxEqnk63wHqM1RqiyX/9ps9pbnXhh9
anIyP4hod6PAjcB1ruuubbltlZLeGQNmMrzxM4HsuXyhf2hkxrxt/MRq36B8A5y8Mc8sT6buSK9abZDVXgoxdV3quu7jbbr/UUjhmCB7VEoVPRv4Y/tyQqlEjDH5NZ5349IXVizNG5NAfr9liMJzYyuCMnDvfmQltmXg8OVOPPHWVq89nfjJOV+F5PbaEBEOK5Ea0r9yXfe5WWzejNDRQmG68TIphTF/Q6mp9Mblk95KxC77/UJmzrbdP518CZKXfecKHz8L6krgVJQKR6mtAN7mxBM3tLM9Ze3LpI5AVhxT5HNhgXA1cHb/0EirUeQN4Rdo+TWVTVgXA2dWKHjTzvv3IWlfghHav3HiiVrFZmYEX5h/EPGSCysaTwNvmW7B5WVS70X2sgrkgB0azCFlmQU6rvLaTJEdG1Wo2JdQsdK8/VMCYRJxKfxME5G2NfHSyV4k3P+zVE6J8VNQBqUq5VO5G3mo21IGtGobx0Y3RfX8pCSrqskHBcK/gbP6h0b+NJ3tqNg2seP/CimXWPIR8G4nnrh4BppRKeldq26tLeNlUoNIHMgRFT6+EVEmRqe5DXEkejnID61AmBusk0IhOza6mYHvgjqk5IOpSe96xFT0YPm3W8NLJ7dD9g72qPDx88DHULEzqBwJfAFw1nT6d/uFbk4D9TWUmtrsNwZMHsRE9XGkwE3LZoZG8G34H0ZiY8Kec08ARwRrR09jOxzKM33OVtK7Il4mtSNSufBFFT7+HPCZqNlMW+RMSpPerfbvb5kDrFNCITs22otkM/wcqAWlWrABk38a2eC+rokcRTXx4w7egyzpK3lW/QrUb5DU5GE31DVIXd9p9XvPjo2+FPihgd1R5ckyFHwbWTmlprMdlfBrQf+U0ip4Ba4D3j6DFc3eSmn+p9lMegcUN3UvonyzPQUc12K+q0ba0U/5Hs9sJb2zNME6IxSyY6MvRzxUJEip3K3zIQUvb1esQRAvndwCyRy7T4WPVwBnoGLbI5uC4YY9imjA97e7XQWyY6PzEc13yhW2TGCaQ/qHRq6brjbUwsukdkI04K1DHxlkv+cLM+X37tvrw2a9C2cr6Z0/CX+Nyqmb/4WMnbZsrEeksEFbYDXiDGCZI3S9UMiOjcYR2/2Z+CYHWQKESxarU/sH2ysQvHRSIS5u36RyvMPvgA+jYt8A3lDh86uBE6Yzf052bPT1iFlqykukbJXAb52h9WZLIJyAuB7PymZ7BXan1LRnEPfWGcevl3w55cWgQPYV3tNErYxWCQunn85y/idLg3S1UMiOjb4JeWBLC7qoWHil8C8kGKVteOnkRshyvtJknwbOBPVvlPoDkpI8SKF+w1enSwPOjo0uQYTVMcHjVQTm+dPRhlr4dvtvU55ZE+DvyGb7kzPbKqB80vuDE0/MqOcVgJdJvRZxh10c/ggRBhfNQptehrhzBpkVgWlpnq4UCtmx0fWRiOMy90B/0pug1Ovn/HYFG/mrg2OQh6FS0rybgJNRsdchgiicrW05cLQTTzSdvK4Wflrr45G4gkTZCSqWQal44IhGArJmDD8P1RXAbhU+/j5SIGjGk6l5mdQGSFqEIDMqMH3z1YeQzMSVNtuPdOKJf8xkmwKEBeZNtYovWTqTjsh91E78TKYPUEEgAE+iYuejVFAgrKCxetJV8dLJOJJi+BLKBcI4cAaoN6Fin0ZWEWGBcBewyzQKhI0Qk9VPKBcIxvfICtdp/s5MRud6mdTrkORyYYGwFjGlnT6L2TVPo1SZeASYMfOVl0kNIckFv0T5s/sHJMncrAgEv8DQMaHDM77CtLROR68UtNaLgUORkpRDSAK6Z4CrXdctKabjV0D7LuWaHIgP+TcMfAYV+2voswvbYXf10smdEPvudhU+/jNwIiqWRFYK4SU2yAN0thNPhCfliviZZV+D2LgXISanJJL87K/BEHx/dXAcEkyUqHC5e4F3EuvdgVLPkRTwiyjtaQdeJnUqshIIT3j/QzZMI6fs0Fpv
jIydJUAc+VueQMZOM6Uk+wlVG0QE5kxtcG+MCKAdQh8ZJF/W56O2RWvdhxRB2gnxdMsiY+cm4L4m0zecQqmS8wRSs2DO4edgOxiJg1mAjJ2HgWu6peRmLTouotmf7I5AXENfiURCzkcM3QbRuHsRrfp7wJXbbbrBYchmadi+CvAP4JT+oZF7K+TxyQNbthII5puLTkPMMWHN30P2Br6Nii1GHuqXhM4ZB05x4olLo9xPa70dMjmdiPTDPKaE+4R/z1XISuTC7TbdYAKZaCu5cq5Fgoy+ZWK9k0jysmAK6W848cQHo7SrVbxM6kOECvb4/A5ZIaTqXUNr7SB5+89AJrxJplw0DeIJ04dMfhcg6UAiPQBeJvU2ZAVYYBWw8UxUDvOrlt1I+d5TEjjWiScipV7RWu+MjJ23+YfmMxXxnEWeteVI8ZuLXdeNVMPby6R6gcco3bs7x4knZqvUZsP4KdjfjoydQlCig8w7eWTs9ALXABe4rnvrLDRzRugooaC1Xh+4Cpk4q5W4DLIqsXD+8+svGqxUDL5Q/OIrhSArL5P6DeIyV+BKJ56oFPkZCS+dHELcXI+q8PEDwFudweEH/KpcN1LuUvkw8GYnnqibVtrPyXIa8HXKkwRWbF4spnJbbbh4VSwWW1Lh8zuBk/qHRh4G8DKpVwF/CXxugG2ceKKZanqR8W3k/0d5mc88EiT35SgasNZ6CyReYTNK6wBXYzWiIBwbpQSnl0ndRWnA4XedeOI9Ee7TEn7a8hsI5S9ClJ0jnXjiiXrX0FrHkL78CDJu6lkI1vqvI1zXrWvKrJAOZQ2wyQzGjbSE1tpFzG/D1K+tUFBMLwdO9ws9dRUds6fgL/f/hbj7RREIAAvj8wfCEy2ITfrl/UMjXwwIhM2QIvJBmrZ5eunkrv59KgmEi4BX+AJhR8R8FG7nHcDuDQiELyL+6POpLxAAnPlO/4IKAmEtkmJj74JA8AlvEl47AwKhB1nFhAXCWuAwJ574v4gCYXvkt9iOaAIB/7z9gbu01tXKqhbauSvlEejT7lXjC+rbKBcIVwN7NiAQfoxsTgdXlbUYQMyMv9daR1GawmPnF3NIIOyBWB02JlqxHYWMnaOBm/yszl1FlHKcC5VSTyiljg0ciyulnlJKHamUOksp9ZhSKq2Uek4p9U2lVEN7Ff6k9ydgfaJNeEW8iVzRf9IYM4GYQ/boHxr5d+jU0yn9ezXh7KhR7pdOKi+dPBPRtMN1pVcBxzqDw6c6g8PjXia1O1IsZ6PQedcBB0Yxifgcg8RZRJ3wAMhOTBJaCf4V2Ll/aOTrwdTWXia1EeUb89O6Sejb6C8F3hn6KIP0zTVRruObjG5DJrFKWWZrMYCYCq6sc1540rvBiSfamjo9jJdJvR4xcyVCH/0MWSFEzYJ7NvLbNjR2fOYBP/M16WrtfCmytxVkTmwwa62HkVXYQhqv+DgPSZn+w3a3a7apKxSMMavwbeZKqYLN/ivAPcaYKxCb78uMMYOAi9hyz2ywHQchkrrhje/lqQzLUxlWZlZPvJDKvK9/aOTz4WymXiY1Dzg19NWG3VC9dHIRMoF8m3LhdR/wcmdw+Jf+PfdDBF04ZcWliBYcacPK1/S+RBMlA7O5HM8sX0lq1TgvrEw/BuzZPzRSKT/PaZT2/X8pLxzTNrxMagEybsJOAaPAa514ohFhfTxTe07N0A/srrWuWI7Ty6TWp7y+8HQLzKMQ23VYCz0POMmJJyLlnPK12HNpvtwkiF29VgqPsAntNieeCCtkncr7ac3ZZh5wpNZ607pnziEimY+MMdcjngTnKaX2QUwmZ/if/c8Yk/JPLWzKVLLxVyRgGolqMgq3jZWZ1SxPZfrGVo2Htc4CRwPrBd6nKN00rIuXTu6BmLcOq/Dx94A9nMHhgn3+cGQ1ENbOvofkoWkkBfehVI53iMQaL8sLK9OkVo1v8PDTz5cl2fM19nCA2LR51XiZ1CJEO3t96KNngL0acanUWvci0epN
jZ0Ajn+dSpxKaTWuxxD787Tge2D9inKl41NIfEYjv8s7aHz1FKYHOEhrHV4V42VSw4TLtM6dVcJC4CyiVXisRQxxJukaGtlTOAvJ3XMFcLYxppjgSin1NqVUGtH0dkK8F6KyDQ0IkTpsp7XeNnigSq6ai514YjUR8NLJmJdOno3sAWwe+jgNHOUMDr/bGRxe69/vRKSPwmX9Po9EmjY62Z5J65MeyOA/vcLxt1Bqs84giefajh/8dSvlLrkPA69uwiTzapozi4SJAa/TWg8GD/o1E8J99t3pyjTqe2D9kPJVz5lOPPHZJuJFGjY5VqEXOKHC8ZMpnVSfRvY75gKHQOXygQ3Sj3gCdg2RhYIxZiXiUTOfkA3WGPNL33y0HbJxuKyBNmyIeAq1gwlKs1eCTEDBGrEG0djr4qWTI4iZ46uULzPvAXZxBod/XTw/kzoL2dQL9+sHnXjiE00GgW3cxHcqoSgXalAuMH/ixBOZNt2ziO+B9WfgpaGP/oWsEJpxC96w/imR8ZA9rSCHUdr/40hOobbiZVLKy6S+RLlL7iSS/bVZ7buSi3Yz9BIaO76TQDgb6gVRTVsdwIaUu5A3S6/Wuh3CtyOILBSUUschftI3UdmfHGPMI4jgiDTpNtqGJq8XnvR+H8Wrxksn90KCug6u8PG3gD2dweHHoPhQf47ybJB54B1OPNFKlsh29k/JtbxMajfKK5e13aumjgfWa514IpI/fAVmeuz8vAHngEj4k+sPKPfA8oDDnXiileDBZvdZKhHum4MpjZvwgAvbeL/ppp1jx7T5erNKpE0WpdT6SPK0o5C6qw8opS4xxtxR5ZqV3ESrsSxqOyLQCxQnGD8KtCGvGj8Y7f3I6iBsj00BJzqDw8Ulsl/45TzKtaYscIwTT9TzbKnH8zTWn9UwiN0+SHjSu96JJx6mjXiZ1CuY8gEPch2S1K6VCNFltMcEAKI1FqPkvUxqZ2Cv0DltFZj+fs4vKN9wzwBvcuKJW1u8RRLJBNAqk4hpKEjYmeTS6a7o1maWIc9o2MzbDAbxPOwKokq37wBXGWNuMcYsRXyeL1RKOUqpU3yhgVJqB+CjiNdNVP4DtKsY9rP+9QqEN9r+Qw2vGi+dHEDyAn2DcoHwV2DnkEDoQ3IdhQXCauDgNggEkMjbdgy4cQKanL9JGC6i3tZNQi+T2pc2eGDV4A7aY3o0wF9c110ZOBZOaXGLE0/oNtwLqOmBtQLYtw0CAWQF0o60DFkC6U68TGpbYL/QOXNigznA72mPdp8DLmsyNUhHEiVO4TBgT+CcwjFjzEXIRP5JZLPv30qp1Yj2dx0N7Mb7nXkurU98q4CPhX6ccGBZ1eRuXjq5MeLvfnyFj78C7O0MDhdTNfturr9lKmVAgZXAfk480S6Xzstoz4P9GKVxGYcyjV41XiZ1mH+98CZ5Mx5YFXFdN4uUeYzkNFCDNQTqK/tpG8JBW22b9CJ4YN3Tplt9DzFhtkIeuN113WC207Agu3MmyqC2E9d1U4jCFTXeoxo5qnuuzUmixClcZYzZKOB2Wji+rzHmXGPMScaYJcaYBcaYLYwx5xhjGu3oK5GlbrMDOI9oWL8tHPAyqRcBOwbOmaRKNlQvnXwlsnH8itBHaeCNzuDwh53B4eIk5hcm/yPl+w1Lgb2deOJvTf4dZbium0OEbysT32rgoyGBGZ70LmmXG6qXSb0dqZTWLg+sWlxIa6uFHPCA67rBehp7IsnQCozRpuRuftzDrZR7YD2CRCm3LdW067oZZP+rlYSPa5EUGUHCY2fGkia2ma/SmtD0gD+6rjvj9TSmk47YHHFddxJJObASmbwbYdL/3gGu6wZ/4PDAvdWJJ1aEv+ylkycjD2nYa+m/SKqK35ecL3bg3wB7h85/DHGrbJuJIcAPkRVDM4JhHPia67rFSc3LpAaBA0Ln/ab55k3hZVKHIia4dnpgVcXPWnkAYodv9NpZZMV7aOh4eOz8Lmr22lr4ysR1lHtg3YsI
hOkoGvRZ4GaaW22OA+93Xbe4cvG9yIKBfoaAMjaXcF13GXA4zfXNWsSV+sR2tqkT6AihAOC67iNI3qNHiP4jjSM/zK7+94OEH+ySSc9LJ/u8dPJ84GLKNdrrgN2dweGS6F9/U/nHlE+o/0Ye6mmphetr+Kcgms040bSbSURD/IDrup8OfXYIpX/z/4CWa0B7mdSeyGosOK7a4YFVE3/SehWypxRVK16NJJXbzXXdYn1l/zd+c+jclgVmQJl4eeijPwP7tOCBVRPXdSeQie/HyNiJIjgnkP453nXdsEdRuG/+4sQTzzNHcV33BuR5XoFo/lFYjeyVvbqZNOydTscIBQDXdZ8Adkb2Lx5AHvDwD+X5x7V/3i7+96ZOyKS2oIY248cf3EB5iD5Ixs43OYPDJT+2HwT3Ncr3EP6JPNTTWrjddV3juu5ngEIZxnFEOw5iEJPXKmQS2M113UqBhGWTXqsavJdJuUhqhmAt5TzwVieeaLtvfxjXdTVSa+BTyKptnHKz0lpk/Pwd+e33qpAeendKc1WtpsVCOr6g+QnlysRNwOunu4ax67oTruu+B1EGfof0Q3gPzyDjaSVSl+QlrutWEoZtF5izjW86fDHy7D+H/OZhi8U4MnZuQ1YHb/TNc11HR6XODqO13gbZ1NoSSQyWAh4Hfl3LjudlUh9AUkwX+LMTT+wFxWI4V1Gem34NcJIzOHxZlWueg2w4B3kUMRlNi5ZXCz+vzRuRjf7FyCBejpgKrvc1xDK8TGq+f14wH87uTjxxd7Nt8TPQ3kl5oN2ps1ErGEBr/RJkAtsMKbKzEllVXua6btg1t4iXSX0NCNaQuNyJJ8JeWpHxlYlvIG7OQe5BvIxmfGLRWseRvtkV2TvJIq7cfwRu9c25ZfiJE58NHd68lXoknYafdmdXxKS4ERIRvgJRUn8dtcbEXKajhUKzeJnUXyjdyDvLiSe+5aWTb0E0tnCCsKeAw5zB4X9Vud4J/veCLANeNd3ppduNl0m9mVLt7mnkwW5qIHiZ1HqICWT70EefcOKJzzfXytnBn8Afo1RheKsTT1zewjUrFQ+aNWWiFbxM6t2Uxmr83Yknws4ZljlOR5mP2oGvzZR6dhjzWy+d/DxSGCMsEG4Hdq0hEA5C9h2CFNI7zymB4BPea7myBYGwAPH3DguE7wJfaOaas8wulEfpXtfsxXxlIiwQngdeN9cEgk/NfTpLd9DRNZqb5PCSd8b8E8x3EHtqmO8B7w+6mwbxMqk9gF9TGsiWBQ514ol729LaGcTLpBzK+6GpB9sP3LuM8uIzVwDva7eX0QwRnvSud+KJpuJnvEzqYCorE2+YLoeE6cTLpBZTXjfBCoUupBuFQvjB3pTSTWeQDch3O4PDVXO1eJnU9ohvenBlYZDAq1sqf6vj2R8IZgJdhuwFNIRvZrmQ8jiNW5D+mZYsotOJ/ze1RRPuNmXC51BKLQv3O/FEV/nnW4SuEgqVtRkTzhS5DDjCGRz+C1XwcyZdT3l6hvc48cSvK3xlrhCe9H7b5AT+RcpTKd+LJHCbqzVrdwBeFHifQ7ypGsLLpF6MKBPBlNIGOHYOKxNgTUfrDF0lFAhrM+Wb6PcAhzuDw7W8TxYhXhibhT76vBNPNJL9taPwzT3hIK2GH2wvk3o/UgA+yOOIWWQu+2yHJ70/OfHEyopnVsHLpDahujJxRYWvzAm8TCpBea4jKxS6lO7aaDYm9GCXCIVfIPmLagmEeYgfd7gm7UVIqom5zGsonaySNFij2sukjkGy5QZZjvjaz9kAJp+W/O8DykS4NOPn5rIy4fNGSivB/Rd4cJbaYplmukYoeOmV6wOvq/Lxx4DjncHhqtGufhK0XyJ5b4L8Djh9jm6cBglrwlc3kpTOy6QOoLwi22rgICeeCEeTzym8TGprpGJggTwSyxL1+wVlYsfQRxciwXRznTLTURc8D5YqdIX5yEsnh4HbULGw6cgDTqgWkFb8vmwyfo/y+st/Bo6eQ9Wk
KuIXcjk8dDiyJuxlUi9HkhYGtcUJZA+hXRk9Z5PwpHe7E08sr3hmCF+ZuJRyZeJq4Iy5Pnl6mdRCyrO5WtNRF9PRQkFrvSEymW2GFAsZQwLNflvIV+Olk1sD14J6Uejra4ADam0oB/g0UqA9yANIoZNWMkxOG37R+v2RtAwjiK1sORJ3cUcoOeCrKK3DnKZGXYkgfu78SimwT3DiiRuba/30o7XegqlymgsRc9njyNgJJ0ZsahM1oEyE92r+jBRY6khlQmvtAAchKWUWMxXRfBPw91A23YMoTV3yBFJCtWvRWm+PuG5vwFRE83+Bq7o1tUWQjoto9ie7Y4AzkGCiSUrdQscRV79/znP6L914/fU+oZRaDApUoPqgMYc6g4t+V+9+XiZ1OuXlQ59GopWr7j/MFlrrHZGiPsci5RYXMlV2MY+YdDwk99H3Xdd9zMukvgW8L3CZXzrxxLH17uVlUhsgLqtbhj46y4knvtXCnzEt+Kk/3o6MnYKSEJzQViOK0J+R3/y3226+ySaIohFkEyeeCKdzKMPLpD4LfCLcDCR9ekOb1DOB1no3JOfTkchzFQ98nEPGzRhi9vqB67pLvUzqMkrrknzdiSfOnqEmzxha6yHgJOB0ZF+oh9KkkauQlfL1wHf9RHpdSUcJBX9lcA0SIVu3EPZIYnBi0eBCMWmowPaIMU+h1Bb1lu5eJrU3kiso6E+epM157duBn5PlTMQdtJ/6q7wsYuI5ddvNN/kypRugRzrxRE1t2DeL3Ex5ScovO/FE2Pto1vHzZP0ByVcTjlqvxCrgrq022fDmnp6e/wscv8uJJ8K1DsrwMqkjkEC9IE8hykRdgTKTaK17kBTa70eEZL29RA/went6jttykw0vpfRZfLUTTzQc29LJaK13QdyIE5S6ElfCIIrp74B3uK7bkZaEVugY85HWejPgb4gpJFK7JvN538Ydqk+u1BURBMIGSERuUCCsQcpodqJA+DrwTqJNeCCCo9/p7/sRpdryGsRLph5foFwg/BQpt9pRaK1dRPuPE915YiGwd3Yi9+p5PSWVV+uajnyT2o9Dh5OIF1YnCoRfIB5EUceOAzj9/X2/9v9f4DmkLG3XoLXeE3ke5lM2kVREIULyMGArrfW+fk2PriFKOc6FSqknlFLHBo7FlVJPKaWODBzrV0o9pJRq2OSitY4h+ckX04CgSqVXMZZZXSlB/NXlh6bwN15/SXlhnaOdeKITB/3bEYFQd/UUZsDpHwgdutGJJ2oW6/EL5XwodPgmJOtp5ywtAa31ABJJPUjj3nSO098Xnihrmhz9LLNXUGp6mQDe6MQT/6n8rVnlw8CbaGLszHP6ndCha9pcNW9W0Vqvh6wuFxBNIASZh3iszUoW4OkkSjnOVcBpwLfEdg9ICul7jDHB5fM5yEZnMxQ2dXrqnVjSNmBFOoNSJb9nFqiXBrpQlyDIF514ou4exEzja3pfpImHGmCeE36uuaPW+V4mtRXlrqfPAm9rR13laeBEZCXU6EON099HLFbyCCxDMpjW4juUV077QCeaVLTW85FAw6grhBIGGhw7c5CzaHDOCTEAHK613rxN7ekIImlWxpjrEZvbeUqpfZCNpzMKnyultgSOQ4pUNIRvGvki5d4tkQhPesaYfzjxRNUa0X6isnNDh2+lc/3JD0c8r5piwAkXlaNWeo8BJGdP8H454KioLpozie+U8BnaNHaQKmJVV0JeJnUyshkZ5HIkK2wnciotxCIN9EcfO3MNv6bE+6i/h1CPGOXzyZymkQFzFrAPsnQ+2xgTjGA9HwkQa2bTZTvKvVsiE570cpOTVSMtvUxqc+DnocPP08Hug8B7aXLS6+3poa93yhpnjMkjleKq8S3Kkwd+qBO1YJ+9aOGhDo+dycn8P6qd62VSO1E++f8XOKXTTGoB3kuTK0ynv59YbGrxNZnPZ4DpqCE9W7yxTdfpB473lduuILJQMMasRHz35yOBTAAopQ4HeowxzRbv3oDy
somRCWt7a71sxRoHftroXwOLAocnkSIqnZyiYaP6p1QmPOl5ExOZagnrvEzq7YiZMMiViKDoVMJ7Qg0RHjvja9c+XOk8L5MaQpSh8Ib9kbNROa0BRpr94rzQ2MlmJ5IdLPyaYQPKa7M3Sw9NCt9OJLJQUEodhxQguQm/cIhSagGyv3BmC21oWsIqpXD6+0qOpVetfqDK6V8Hdgsd+5gTT9ze7P07nfCk53nZdKXz/PrK3w8dfhQ4ucsmgiK9PT309k6Zk/N5w4rUWJnXmR+g9iNgm9BHpznxhJ7eVs4eYaGw1svOxaJAM4WhhXms04jk6aOUWh9JhHYU8B/gAaXUJUjRkC2AO/zN3n5gSCn1PLCHMeaJCJd/ntL0CZEZcPpLNpmzEzkzvtYr8wDxMqmjkYCvIL8DvtbMfWeY5ymfkCIxMBB6sLMT/wuf42VScUQLDm5GrkW04E7PeroMKjmf1WdeqG+8bNZM5CYruZO+n/JkeT904omwGbITGaXJ/aiBgdAKPJu9rx0N6iCWIfEY7VgtGCTupSuIulL4DnCVMeYWY8xSxF3xQuAhJChqZ/91CtLZOyNRwVH4L+URpZEIazNeNptGirNPHZP89mG3sceBE+eIe913aWLAKaVw+kpl7cRErkQI+lrwRZTWEQDJ2TMXJoE7EG+zhgl71mQnJp50XTcVPOZlUq9GVsJB/kVpdHgn830kirsh+np76O0JrqLyrPWy32pjuzqB39OehKATwC9DqUHmNFHiFA5Dkn2dUzhmjLkICWT5pDHm+cILCeDJ++8jFW/xO/Ncmpj4KrjM/SL44/g1hK+g1N7nIVpwx6UhqMKvkRVZQ8wLraImJnLe1ttue23otHdTmsIA4EdOPBEOzOpIXNedQLyPGp74wgpFLBY7P/jey6TWR4Ibg6vpMWTsVPVu6zC+TxMrqXKBmVux/YtfXM0sOydxXXcMUbha/S0ngc+13qLOIUqcwlXGmI2MManQ8X2NMeeGjt1qjNmkiXZchSx1G9LcwxupC+YNFL1DfC34B0hFrSBnOvFELQ+cjsJ13Ung4zQ48YUfbIMpqfrlZVK7A98Ife1+JDfOXOJiRNBHnvxiStEfWkXFF8wvCkI/uPESJJlekOOdeKKiI0Mn4rruKmQvraGI23KBqa6scupc52vIpN4sa4Hfu647Z8ZEFDqinoKf0XM/JBthJNfQ/r4+ekoyZZuVsVgsuJ/wTiRpXJCfI2avucaPkYCyyIIh/GD39/UV04d7mdR6yAokODOmES14TuVy8XPP7IespiIpFeG9KGPMf0Irx08iGWiDfKUTgxsj8HngBhoQDGGFor+vrytTZbuuuxxxTW14pYkIhAeBk9vaqA6gI4QCgC9tX47sU9T9kcKTnlLqzwVPGT///3mhrzzAHC2W45vE3oM84GuIoN1UC1rzMqkYkgsnXCHspLlaLMd13XuRFOJPEWHyC096SqmiB5qXSR1IeebT25mjAUqu6+aAtyCr5nHqrKhisTKPPkOX5TsK4rruLcC+SDaGqKak1Uh6jL27MZV2xwgFANd1nwZ2RTby/oX8SOEfai2wdv68gWToeGHSW4TsI4TT3h5RL+dPJ+O6rnFd90vAq5laNYRdTA2Qcfr7VoXSNyxnKn3Dx4ADQ9/7hhNPzGkTgeu6/0HKqH4UUSzWUL4JPQ54C+YNhL2qCmNnU0RgBt0LlzHHCy25rptzXfcDSGXCXyN9E57M8kBmnuOEx9S/54AXWku4rns3kpn5M0iA3mrKLRarEDPl9Uhq/yNc152z80ktOip1dhg/p8gRiNtrocjOE8AV226+ye1AMOfIXkju/6soj1Y82oknalZfm2torfuRAih7IIkE80ihlNu22Wzj7ZVS3w6cfpUTTxzuZVL7IaaEoMS4E9inQ/MaNY3W+kVIJstNkGR5SeDR/r6+qzbfaMnDlLrgboN4y92G9GeBPLC/E0+U7MfMdfycSIci9UqCRXZu2Gazjd+glApm
wr3AiSfOqHCZrsSPTH4pkkSwUGQniSgav3FdN6yMdh0dLRSq4WVSGwPBbKxZRGi8B/hq6PTznXiileC6OYeXSf0KeGvg0DmIBnwfsH7g+HJgl05L9zydeJnUrsDfA4eWARsiG7JnhU7/mBNPNJzPay7jZVK3AXsHDh3nxBOXzFZ7LDNPR5mPGuDVoff/QLS9L4SO3w10XZWoCIT75y+ITTkoEAyS+XSdEQg+4VrKfwFeQ7lAuA4/cn9dwcuk+oFXhA53TRI8SzS6RSjchdjZg/sISSS7Z1PBTXMVL5PaDDGZFA8BL0aWw0E+48QTkeo0dxnhsXMP5QVzngLePkeCG9vJLpTmd3qO7kqCZ4lAtwiFDSnP7nm6E0+siwM63Df3UR6PcBfiybRO4ceuhPtnN2TPKsjxTjzR9bbjCpStMOeit56lNeacUPAyqYVIGo0gbwm9v9yJJy6fmRZ1HOEHezGl+W/WICk+WgnamatsgSgQBbJIvYog33biidtmrEWdRSWzo2UdY84JBcTtMFgtKUtpKoIXKE9+ty6xS+h9uFbFR514omKK6HWAcN+EteBHEJfddZVw/1ihsA4yF4XCdqH34SitdzrxxOhMNaYDCfdPkNuRgkjrKuG+CUax5YETnHiiq4qwR8WvNxIuK1m1YJWle5mLQmHbGp/93Iknrp6xlnQYXiaVoHphldVI1PK6tnkapNbY+ZoTT9w1Yy3pPLaidD54el0VkOs63SQUnmPupDSeLmpNeufMpWRu00S1/nmQzq3RPVOE+2ZOpjyxtE6kIjuzhdZ6M0IRzVtuvOGrgxWzArxjDqXDbhk/ovlA4FXI6iA/PDS4eL3EYKXTb6K8slpXo7XeDolo3hSIAyu33nSjnULpP0DySB0/h9Jht4wf0fxGxGNvMTCx/nqLthxaWFJRcp0VClrrlyL9sxES+Z5EFIcrXdft+jmm44SC1roPOB44g6m010Xf6WAx8QAXOfHEH6e/dbOP1npnpCD7UYgdvKIUCJBGBGbXuxZqrRcAJyFjZwvEIaEfJF12BYEA8EUnnvjHTLVxNtFavwopnfsmJLdPvPhhKLXBRC73Qlm1ki5Ga70IOBU4HRGUDqXz42rgu1rrW5E6DL/vpsI6QTpKKGitNwGuBbamQiHsnlis7MHOTuS8Famxr28RT8xIG2cLPyfLOcCnkYmubLnU11f6cxpjGFu1+tIlG23SVGW7uYSf6+iPSNT2/PDn4b4BqdS3PJn63tbdP3Z6kOjs0xEFq0w69vX1lmhby5Ops5949vm7Xdf9/cy0cvbQWu+GzDsLgXlVTivMR69HouL/qLU+oRuT4nXMnoLWekvgn8jqoEwgQPmDPZHL8dTSZT2rxtfcrrXeYtobOUv4AuF8JM//PCoIBID+3tL+WZ5MsTyZervW+sPT3shZRGu9E5LSZDMqCASAvlDfrM1meXrpCwNrvOxftNaLp7+Vs4MvEC5HBMJ8qjzz4f7JTuTmAZdprY+f7jbOJlrrfYBbkNVBNYEQZgFwMHCb1nrhNDVt1ohSjnOhUuoJpdSxgWNxpdRTSqkjlVKfVkpNKKVWBV5bNdIIrXUM+BMwTI3VS3jSW+tlMcb0+t+72b9ON3IScCJVhGWB8IO9es1akIngE1rrcLrsrkBrPQ+4GTGFVP39wwrFmrUeRlZcmwLd7LH2cWTvqaKwBKnnHRw7xhgmcjn873xfax2OX+gKfGXg99R5rqowAOxIeYqUOU+UcpyrgNOAbymlChrVV4B7jDFX+O8vM8YsDLwa9XI5DJHUFTXgApVWCj49/vfD+X3mPL6m93nqDNxYLEZPz9TPmc/nyU0Wg5YXAN2a7fMdiP234mZTgbBCMTFRHDt9wE5a692no3Gzia/Fnk0NgQDQF3LcyOVKgt0duqwGcYAPUmfOqcMAcIhv5egaImnWxpjrEZvbeUqpfZBNzrbkWPdNI19A7Hk1CT/Y2YmSOhgLgS/41+smjiS4IViF/uoCs8C2
Wuu92tes2cd3SvgkETS9GgoFiNmgG3NBnUYdYQmU1avOlvZNDNjX9+bqGrTWg0iq/YF659ahh/JKfXOaRswtZwH7IFXNzjbGPB/47I1KqaRS6gGl1OkNtuFFiC24LmUP9kTZxLc5tSN65yLvIYLADJuOKvTNfKRudTexF6VRyVWpo1Ao4DVa60S7GtYhnE4UgVl/7PQBx7WtVZ3BG6lTmjQifcCx3aSMRhYKxpiVSJ3j+UCwdOPlSGrmxYhL1yeVUsc00IYlQKSqX2WbYeXacM6/XjexQZSTwiuFCn2jKK/LPNdZQgRNOBZT9PRMWQnyxgRNawU8ZAx3E9Wi20sIK1sVxk4v5Skw5jpLKE+R0yyK5vYlOpLIQkEpdRzi+30TgeIjxpgHjTHPGWMmjTF3At9GTB5tpbenp8QddXIyTz6/LmdsKCWCtrfOYvumNjX2WyzR6JpVAkQUCkqp9YFvIiuB04CjlFLV7NOGxjrpeWQJVpM6NuHiaUh5xW7iuSgnRdD2DFI8ppt4nggmgLDNvMrYcZAMu91EpMSQ5c9W2cI9R/cV23keWR22g0kkuK0riLpS+A5wlTHmFmPMUuBDwIVKKUcpdahSapESXoFETDbi4vcw8Hi9k8r9qCtanB7zr9dNfAdYVe+kCNreON2X6uIOoG56iggrhTxwi+u6Y21rWWfwHepMVjGl6A2Y1sQdtcy0lgN+3vbWzS7X0J44rSzw826Kbo4Sp3AYEsF3TuGYMeYiRIP9JHA08CiQAX4GfNkY89OoDfA781zqTHxlk165trcK+Fg3/Tg+VwI1J6uentJI73w+z2S5ae0/ruve2f7mzR6u6+aQCO/aY6f+Kmot4s/fbVyICLyqRFiBTwI3uK7bVbmQXNfNIKbuVnNe5ekyz7UocQpXGWM2MsakQsf3Ncaca4w5xhiznh+fsL0x5rwm2nENYvapWg2sjufRJLIcvLaJe3c0rutOIkKzqsZXIRo1fMo48NF2t61D+DFiBqiqDNRZKWSBf7qu+/fpaNxs4qdg+DLy+1ckwirKQ5S/buQbyCqoWdYCV7mu21WmtY6IAHZdNw/sh9hAK9qFamh7OWA5sJ9/nW7kZ4jWV1Ew1IlRGAc+4brujdPVuNnEdd21iKt0mipacfl+S3GIeYit/LDpal8H8CVE6aooGOqsotYAp7que990NW42cV13BfAG5Llq1MKwBql/fkq72zXbdIRQAPCl7cuAf1Nh8uutrNGsRn6Yl7mu222bqEV8k9gHEBPHGkIrqr7eUPCR9M0EMhGc6rruN2akobOE67oa2BXZmyqZ/HpiMXrCprXJPIjJ6Q7gFf7k0JX4K81jgW8hfVMiOKusFDxEyL7Zdd1fzkQ7ZwvXdf+MxLssQ56tehhk3rkK2NcmxJtmXNd9DtgdCbr5KzI41/T29BBTUw5NuclJ8sbc5Z+3h+u6S2ejvTOJ67rGdd1vAa9AVg1pZB/H9JelKcitQuylL+n2h7qA67qPAi9BhOf9yNLeq2AzN0ierROB17mum5rRhs4CrutOuq57LvBa4BeIcMhABaGQm1yBrC5e7LruOpGO3nXdfwHbI/W5H0aEQzZ02ipkPvo9cITrum9zXbcrK9OpUBr1jkJrvRFw+ML58/bYcPF6xYhKY8z9A4OLdprFps06Wute4ABg9803WnJaf19fMcjNGPPagcFFt85a4zoAPx/NYYsG468ZWTR0aOF43pg/zhtc9IZZbNqso7V2kCyfO221yYZn9fT0FNOoGGNeNDC4qNs8+BpCa/1iJOJ5fSQoLQn8F9k/SM9m22aCjhYKBbxM6lBkuVbgOieeOHiWmtNxeJnUo0gNigIvduKJ/8xWezoJL5M6Hfhe4NBFTjxx6my1p9PwMqnVlCbMSzjxRLe55loaoKPMRzVYP/S+24KMWsX2T3Vs31TBy6QWUCoQsohZ0rIOY4XCHMfLpOZRmkU1B6RmpzUdiR071Snrm3WhbKulNlYozH3CSdyW
O/FEt7rmNoMdO9WxfWMpwwqFuY/tm9rY/qmO7RtLGVYozH1s39TG9k91bN9YyrBCYe5j+6Y2tn+qY/vGUoYVCnMf2zdV8DKpPmA4cMgAXRu93AR27FjK6Hih4GVSPZRXkFo+G23pUOyDXZ3wuFnhxBO2gswUduxYyuitf8rsobXeRil11NabbmSUn+Yin8+v+t/Tz23quon/zXLzZhWt9TzgkA1GhveJL5hyNc+sHl/vkSef6XNdN1KJ025Fa+329/W+ffONpqqZ5iYnvUe03sR13WdmsWmzjtY6Dhy+8ZLFL58/MFXiOpXOjCx/8pkeP1/SOolfa3lXJKJ5YySOYyWggV+7rtv1CmnHRTT7IfgnA+8GtgJi66+3yBlaKCVQV6TGcsmxTA6p4fBd4Meu67arglLHo7XeDXgfcDiQc/r7BjdZsphYLMZELseTzy1LG2MUcCnwbdd1H5zVBs8g/mR3GvAuYEOgb6P11+tbMG8eAMtWrMymV602SBLF7wKXrEsToNZ6H+D9wOuBifkDA/GN1l8PpRRedoKnli7LIHEuPwHOc133idlq60yjtR5B5pxTgQQwAASTiq1GlOi7kOJFV3Zh7Ragw4SC1noL4DqkSHgw0pJ5Aw7GGNZ6JXmqxoEngIO6Lad5GF+D+QTwYWTAFk1/vb09OH19jK/1CPyeOSRC9SzXdX840+2dabTWOwJ/QExG84KfzR9wmMzn8bIli6dCht3Dul378/NkfRNRtuYRKJfb19tLf18v42vWBnNHTyDJ305wXffKmW3tzKO1fiWSXnwhUpa1HquBW4C3+cV6uoqOEQpa620RKZygVELXYxKJ4N3Dz5TZdfgC4YfAMUiCrkYYB77iuu5n2t6wDkFrvStwM/JQN1IffAJYCuzuuu7z09G22UZr3YdU79uXkKIVgTXAma7rXtT2hnUIWusDkLxqjfbNWiSj6l7dliQvSjnOhUqpJ5RSxwaOxZVSTymljlRKJZRSP1VKveC/Pt1oI7TWPcBNwCIaEwj45y8CbvKv042cSnMCAWSwn6O17soEglrr+cCNSKqPRgQCQB+wEfC7drerg/gkzQkEkFXFeb7Q7Tq01ktoTiCArNZfBEQuPTxXiFKOcxVip/2WUqqQUuErwD3GmCuQZel8YAsk1//blVInNdiOwxHXwWa9oWLAenRhBS1/6f85mhMIBRYA/+evOLqNU5HJvVl6gR211q9qU3s6Bn+P5f00N+kVcOiyGsQBzqE1D0wHeL3Weps2tacjiNQhxpjrkfrH5yml9gGOAs7wP34j8BVjzLgx5gngYsR2GQl/ovoisvRvhYXAF7pw4juK1h7qAlsBe7fhOh2D1rof2WdpRWCCaMTdOPGdTutu5zFgb7/GQNegtU4gDgkDLV6qDxmDXUMjA+YspBbuFcDZxpigDVaF/u82cN3tkSV8O9jEv143cQatC0wQwfLONlynk9iL1lYJBRSwp9Z6URuu1Um8i/YoFP3AcXXPmlscQuN1mSvRCxzdTcpoZKFgjFkJPIAMsqBHwh+Bj/j7DNsgq4RGBuISxFOmHeQoD8iZ62xQ/5RIKERodhNLaHwfoRoe5Rln5zrD9U+JRA+waZuu1SksQYRdO1C0R3HrCCILBaXUcci+wU3AlwMfnYl4KTwCXI34xzcSHNRu96fOcKdqH+1Mg237pjbd1j/t/Hu6rW/aOXYUXdQ/kYSCUmp9ZEP5VGTT+Sil1F4AxpikMeZYY8wGxpgd/Wve3UAbltIeEwD+dZa26VqdwrNtuo5BYjq6iXb+1g6wrI3X6wTaFX+RozvHTruCXnNI7EJXEHWl8B3gKmPMLcaYpcCHgAuVUo5Samul1HpKqR6l1BsQu3Ujm3aPINHJ7eAR13UfadO1OoXzgVVtuM4a4II2XKeT+AvteRjzwI3d5m+OjJ129E+O7nO9/D3tMT1mgZ90U3RzlDiFw4A9EfctAIwxFwHPIT7QLwf+DWSA/wOONcY8ELUBfmee
S+sT3yrgYy1eoxO5ivaU19Su6/6tDdfpGFzXzSFjsNWxs5Yu8yDxuQgJ7myFSeAPrus+1ob2dAyu665CrB9rWrxUHvGe7BqixClcZYzZyBiTCh3f1xhzrjHmcv/z+caYnX331Ua5FhEyzW445xAzyx+a/H7H4rpuHvgIrWl84/41upGfIQ92s5paFrjbdd1/tq9JnYHrumsQRW28hct4iODtRr5Fa04ua4ArXNd9uj3N6Qw6InW2v1rYD0nd22h2zwn/e/t10xIuxC8RE14zD/c48GHXdW9pb5M6Az8Z4muQ1VSjWrEH/A94c5ub1Ul8BfgNzSkV40j+I93eJnUGrusmgdchK81G5441wD/pPjfvzhAKAH4645cB/yC6OWAVcA+wi+u67dqQ7Th8YfdRxIQ3TjTB6SETwQmu635nGps367iu+xBixnyE6JPfKsSTbg/XdVdOV9tmG3+leSLwVWTsRBGcaxEh+ybXda+YtsZ1AK7r/hV4FWJpiKJ0GWSMXQbs76/GuoqOSYhXwM9fdCQSjbkHsrwLRqwGU9h+H1m+rUvpj1+EBCWdgPTDfKbyReWQB3oVYk++0HXdp2ajnbOBn3b9bcjYeSli7x1gymVwNeKh9idk7Py+i1eXZWitd0H65mikPxYypRhmEWVjFPgB8CPXdbvNG6sqfkqQ45Bg0a2RMVOIdjbIM9WHZFO9oFtX3tCBQiGI1noxks9oYyR7agqR6Fd1e7rjevjCcx8k39R6iAaYBG4H/rouTXaV0FpvArwJCf4bRAqlPIGMnbFZbNqs42dOfT2wEzJ2PGTs3AjcZ8eO3hqJeF6MKKRJZBV6jeu6XeN6Wo2OFgoWi8VimVk6Zk/BYrFYLLOPFQoWi8ViKWKFgsVisViKWKFgsVgsliJWKFgsFouliBUKFovFYinSO9sNsFi6mf5dTjax3n5ivf309PWjYj3EevuI9fYT65PjsVgPhXNixXP66emN0dMTI9YbI6YUsd4YPb2Knp4YKqaKn/X0yPHCsd7eGP29MXpiCsf/f39vjxxTiv7eWPF4T0z5n8urLxajR0FfT4y+mKK3p/R94d+YUvT1KPpiMfp6FEpBj1L0xPx/FcT8f3ti8q8Kve+JKZQxqHwO8pPyr8lDPoearHHMf29yE5DLYiYmID+JyU1gQu/JZeX4RBbyeUwuS34iR34ih5nMM5mdKL6fzMqx/ESOyYkc+ewEJp8nny28n2RyYpL8pJH/ZyeZnMiTn8yTz+aZnPCPZfOYfJ7JbF7e5/JMGkM2H3xRdiwPoXMM3zdPzHhFN7tSsFgsFksRKxQsFovFUsQKBYvFYrEUsULBYrFYLEWsULBYLBZLESsULBaLxVLECgWLxWKxFLFCwWKxWCxFrFCwWCwWSxErFCwWi8VSxAoFi8VisRSxQsFisVgsRaxQsFgsFksRKxQsFovFUsQKBYvFYrEUsULBYrFYLEWUMWa222CxdDxKqXcaY37Yrfez9+yue7ZyP7tSsFii8c4uv5+9Z3fds+n7WaFgsVgsliJWKFgsFouliBUKFks0ZtQGPQv3s/fsrns2fT+70WyxWCyWInalYLFYLJYiVihYLBVQSr1FKfWAUiqvlNq1yjmbKqVuUUo96J/7vhbuN6yUulEp9Yj/76Iq533Fv9dDSqnzlFJquu/pnzuolHpGKfWdJu91oFLqv0qpR5VSH6nwuaOUusz//G9KqS2auU8j9wycd4RSylT7ndt1P6XUZv54+ZdS6n6l1EGt3M+/5o+UUi8opXSVz4/17/VvpdSdSqmd6l3TCgWLpTIaeDNwe41zcsAHjTE7AHsA71ZK7dDk/T4C/MkYsy3wJ/99CUqpVwGvBl4KuMBuwGuavF+kewb4HLX7oipKqR7gu8AbgB2AYyr00zuAlcaYbYBvAl9u5l4N3hOlVBx4H/C3Gbjfx4HLjTG7AEcD32vlnj4/AQ6s8fnjwGuMMS9BfsO6ew1WKFgsFTDGPGSM+W+dc5Ya
Y/7p/z8DPARs3OQtDwV+6v//p8BhlW4JDAD9gAP0AcuavF/Ue6KUejmwBLihyfu8AnjUGPOYMSYL/Mq/d7W2XAHs18oqKOI9QSbKLwNrW7hX1PsZYND//xDwXIv3xBhzO5Cs8fmdxpiV/tu/ApvUu6YVChZLG/DNHbvQvMa5xBiz1P//88gkXIIx5i7gFmCp/7reGPNQk/eLdE+lVAz4OnB2C/fZGHg68P4ZyoVn8RxjTA4YA9abznsqpV4GbGqMubaF+0S+H/Bp4Dil1DPAdcB723DfRngH8Id6J/XOQEMslo5EKXUTsEGFj841xlzdwHUWAr8B3m+MSTdzv+AbY4xRSpW5BSqltgFezJS2d6NSai9jzB3TdU/gDOA6Y8wzrSnunYUv7L4BnDiDtz0G+Ikx5utKqVcCP1dKucaY/HTfWCn1WkQo7FnvXCsULOssxpj9W72GUqoPEQiXGGOubPZ+SqllSqkNjTFLlVIbAi9UOO1w4K/GmFX+d/4AvBKoKhTacM9XAnsppc4AFgL9SqlVxpha+w9hngU2DbzfxD9W6ZxnlFK9iHllRQP3aPSecWRf5lZf2G0A/E4p9SZjzD3TcD+QSflAkFWfUmoAGKFyv7cNpdRLgYuANxhj6vapNR9ZLE3i27wvBh4yxnyjxcv9DjjB//8JQKWVylPAa5RSvb4weg2yjzFt9zTGHGuM2cwYswViQvpZgwIB4O/AtkqpLZVS/cgm6+9qtOVI4GbTWhBVzXsaY8aMMSPGmC38v+2vQLMCoe79fJ4C9gNQSr0Y2R9a3uT9IqGU2gy4Eni7MebhSF8yxtiXfdlX6IVo5c8AHrKZe71/fCPEnAKyFDfA/cC9/uugJu+3HuIB9AhwEzDsH98VuMj/fw/wA0QQPAh8o8W/se49Q+efCHynyXsdBDwM/A8xzwF8FpmIQSbIXwOPAncDW7XhN6x5z9C5twK7Tuf9EK+kvwD3+WPldW34Gy9F9pcm/PH6DuBdwLv8zy8CVgbG5z31rmkjmi0Wi8VSxJqPLBaLxVLECgWLxWKxFLFCwWKxWCxFrFCwWCxtRyn1hFIqq5QaCR3/l59naAv//SuUUtcppVJKqaRS6m6l1En+Z/v4gV6WGcQKBYvFMl08jgRsAaCUegkwP/D+lcDNwG3ANog31OlI/iDLLGGFgsVimS5+DhwfeH8C8LPA+68CPzXGfNkYM2qEfxhjjprRVlpKsELBYrFMF38FBpVSL/aziB4N/ML/bD4SLX3FbDXOUhmb5sJisUwnhdXCbUjQXSH1wyJEKV1a5XuWWcIKBYvFMp38HKnDsCWlpqOVQB7YEPjPLLTLUgVrPrJYLNOGMeZJZMP5ICQHT4Fx4C7giNlol6U6VihYLJbp5h3AvsaY1aHjHwJOVEqdo5RaD0AptZNS6lcz3kJLESsULBbLtGKM+Z+pkH3UGHMnsK//ekwplUTKRV43w020BLAJ8SwWi8VSxK4ULBaLxVLECgWLxWKxFLFCwWKxWCxFrFCwWCwWSxErFCwWi8VSxAoFi8VisRSxQsFisVgsRaxQsFgsFksRKxQsFovFUuT/ASHJkf7eB88EAAAAAElFTkSuQmCC",
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {
+ "needs_background": "light"
+ },
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
+ "process_and_visualize_results(results, pcmci_cmi_knn, df_stat.columns, target_column_indices)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "2f0722ec",
+ "metadata": {},
+ "source": [
+ "Have a look for yourself! Do the results for TE and PCMCI match?\n",
+ "- No? Then you have probably not found strong evidence for the presence of the causal links.\n",
+ "- Yes? Agreement between two independent methods is a good indication that the detected links are real.\n",
+ "\n",
+ "Please note that this is a toy dataset whose true causal links are unknown and likely weak.\n",
+ "\n",
+ "Make sure to look deeper into the hyperparameters of PCMCI and TE to have them suit your data better. Good luck and have fun :)"
+ ]
+ }
+ ],
+ "metadata": {
+ "interpreter": {
+ "hash": "916dbcbb3f70747c44a77c7bcd40155683ae19c65e1c03b4aa3499c5328201f1"
+ },
+ "kernelspec": {
+ "display_name": "Python 3.10.4 64-bit",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.4"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/structured_data/2022_06_02_causality/src/data.pickle b/structured_data/2022_06_02_causality/src/data.pickle
new file mode 100644
index 0000000..efa01a3
Binary files /dev/null and b/structured_data/2022_06_02_causality/src/data.pickle differ
diff --git a/structured_data/2022_06_02_causality/src/helpers/pcmci.py b/structured_data/2022_06_02_causality/src/helpers/pcmci.py
new file mode 100644
index 0000000..b042567
--- /dev/null
+++ b/structured_data/2022_06_02_causality/src/helpers/pcmci.py
@@ -0,0 +1,97 @@
+import numpy as np
+from tigramite import plotting as tp
+
+def get_selected_links(df, tau_min=0, tau_max=3, selected_columns_indices=None):
+    """
+    Initialize a dictionary with every possible link (i.e., combination of
+    columns) to be tested by PCMCI. Note that only the causal effect of
+    marketing channels on sales is considered, NOT effects between channels.
+
+    Arguments:
+    - df (pd.DataFrame): input data
+    - tau_min (int): time lag to start from
+    - tau_max (int): time lag to end with
+    - selected_columns_indices (list[int]): indices of target columns; all
+      other columns get no incoming links
+
+    Returns:
+    - dict[int, list[tuple]]: candidate links per column index
+    """
+    selected_links = {}
+    col_indices = list(range(len(df.columns)))
+
+    for col in col_indices:
+        selected_links[col] = [(link_col, -lag) for link_col in col_indices
+                               for lag in range(tau_min, tau_max + 1)
+                               if link_col > 0 and lag > 0]
+
+        if selected_columns_indices is not None and col not in selected_columns_indices:
+            # Do not consider causality between channels
+            selected_links[col] = []  # only target columns keep incoming links
+
+    return selected_links
+
+def process_and_visualize_results(results, pcmci, cols, target_indices, controlFDR = False):
+ """
+ Process and visualize the results of PCMCI.
+
+ Arguments:
+ - results (list): Output of PCMCI run.
+ - pcmci (tigramite.pcmci.PCMCI): PCMCI object
+ - cols (list): column names
+ - target_indices (list): indices of target columns
+ - controlFDR (bool): whether to use the q_matrix, which involves a transformation
+ of the p_values to account for amount of statistical tests done.
+ Recommended if you checked many links using PCMCI.
+ See the following link for more information:
+ https://github.com/jakobrunge/tigramite/blob/master/tutorials/tigramite_tutorial_basics.ipynb
+ """
+
+
+ if not controlFDR:
+ pcmci.print_significant_links(
+ p_matrix = results['p_matrix'],
+ val_matrix = results['val_matrix'],
+ alpha_level = 0.01)
+
+ else:
+ q_matrix = pcmci.get_corrected_pvalues(p_matrix=results['p_matrix'],
+ fdr_method='fdr_bh',
+ exclude_contemporaneous = False)
+
+ pcmci.print_significant_links(
+ p_matrix = results['p_matrix'],
+ q_matrix = q_matrix,
+ val_matrix = results['val_matrix'],
+ alpha_level = 0.01)
+
+ column_indices = set(target_indices)
+
+ for ind, i in enumerate(results['graph']):
+ if not set(i.flatten()) == set(['']):
+ column_indices.add(ind)
+
+
+ tmp_results_val_matrix = np.array([i[list(column_indices)] for ind, i in enumerate(results['val_matrix']) if ind in list(column_indices)])
+
+ graph_small = np.array([i[list(column_indices)] for ind, i in enumerate(results['graph']) if ind in list(column_indices)])
+
+ # Collect the names of the columns involved in the graph
+ var_names_small = [cols[i] for i in column_indices]
+ print(var_names_small)
+
+ tp.plot_graph(
+ val_matrix=tmp_results_val_matrix,
+ graph=graph_small,
+ var_names=var_names_small,
+ )
+
+ # Plot time series graph
+ tp.plot_time_series_graph(
+ figsize=(6, 4),
+ val_matrix=tmp_results_val_matrix,
+ graph=graph_small,
+ var_names=var_names_small,
+ link_colorbar_label='MCI',
+ )
\ No newline at end of file
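To make the link dictionary that `get_selected_links` builds concrete, here is a standalone sketch that mirrors its logic; the three-column toy dataset with the target in column 0 is our own assumption, not part of this PR:

```python
# Standalone sketch mirroring get_selected_links: column 0 is the target
# (e.g. sales); columns 1..N are candidate drivers (e.g. marketing channels).
columns = ["sales", "tv", "radio"]  # hypothetical column names
tau_min, tau_max = 0, 3
target_indices = [0]                # only links *into* sales are tested

col_indices = range(len(columns))
selected_links = {}
for col in col_indices:
    # Candidate links: every non-target column, at every positive lag
    selected_links[col] = [(src, -lag)
                           for src in col_indices
                           for lag in range(tau_min, tau_max + 1)
                           if src > 0 and lag > 0]
    if col not in target_indices:
        # Causality between channels is deliberately not tested
        selected_links[col] = []

print(selected_links[0])  # [(1, -1), (1, -2), (1, -3), (2, -1), (2, -2), (2, -3)]
print(selected_links[1])  # []
```

Negative lags follow the tigramite convention: `(1, -2)` means "column 1, two steps in the past, may drive this target".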
diff --git a/structured_data/2022_06_02_causality/src/helpers/stationarity.py b/structured_data/2022_06_02_causality/src/helpers/stationarity.py
new file mode 100644
index 0000000..aa05469
--- /dev/null
+++ b/structured_data/2022_06_02_causality/src/helpers/stationarity.py
@@ -0,0 +1,144 @@
+"""
+Functions for determining if a time series is stationary and for making it stationary
+in case it is not.
+"""
+# Import standard library modules
+from typing import Tuple
+import warnings
+
+# Import third party modules
+import pandas as pd
+
+from statsmodels.tools.sm_exceptions import InterpolationWarning
+from statsmodels.tsa.stattools import adfuller, kpss
+
+def perform_kpss_test(df: pd.DataFrame, col: str, debug: bool=False) -> Tuple[bool, float]:
+ """Perform the Kwiatkowski-Phillips-Schmidt-Shin (KPSS) test for the null
+ hypothesis that x is level or trend stationary.
+
+ Arguments:
+ - df (pd.DataFrame): Dataframe for which to check for stationarity.
+ - col (str): Name of column within dataframe to check stationarity for.
+ - debug (bool): Whether or not to print intermediate results.
+
+ Returns:
+ - bool: Whether or not the column of the dataframe is stationary.
+ - float: Significance with which conclusion is made.
+ """
+ # Select `col` column from argument `df` dataframe
+ df_col = df[[col]]
+
+ # Perform KPSS test (hyp: stationary) while catching InterpolationWarning messages
+ with warnings.catch_warnings(record=True) as w:
+ # Cause all warnings to always be triggered.
+ warnings.simplefilter("always")
+ kpss_test = kpss(df_col, nlags='legacy') # regression='c'|'ct'
+
+ if len(w) == 1 and issubclass(w[-1].category, InterpolationWarning):
+ p_value_oob = True
+ else:
+ p_value_oob = False
+
+ kpss_output = pd.Series(kpss_test[0:3],
+ index=['test_statistic', 'p_value', 'lags'])
+ for key, value in kpss_test[3].items():
+ kpss_output['Critical Value (%s)'%key] = value
+
+ p_value = kpss_output['p_value']
+ stationary = p_value >= 0.05 # Stationary if null-hyp. cannot be rejected.
+
+ if debug or not stationary:
+ print(f'\t(KPSS) Time-series IS {"" if stationary else "NOT "}trend-stationary (p{">" if p_value_oob else "="}{p_value})!')
+ return stationary, p_value
+
+
+def perform_adf_test(df: pd.DataFrame, col: str, debug: bool=False) -> Tuple[bool, float]:
+ """Perform Augmented Dickey-Fuller (ADF) unit root test for a unit root in a
+ univariate process in the presence of serial correlation.
+
+ Arguments:
+ - df (pd.DataFrame): Dataframe for which to check for stationarity.
+ - col (str): Name of column within dataframe to check stationarity for.
+ - debug (bool): Whether or not to print intermediate results.
+
+ Returns:
+ - bool: Whether or not the column of the dataframe is stationary.
+ - float: Significance with which conclusion is made.
+ """
+ # Select `col` column from argument `df` dataframe
+ df_col = df[[col]]
+
+ # Difference column values
+ df_col = df_col[col].diff()
+ df_col = df_col.fillna(0) # Replace the NaN introduced by differencing with 0
+
+ # Perform ADF unit root test
+ adf_test = adfuller(df_col, autolag='AIC')
+ adf_output = pd.Series(adf_test[0:4], index=['test_statistic','p_value','lags','observations'])
+ for key,value in adf_test[4].items():
+ adf_output['Critical Value (%s)'%key] = value
+
+ p_value = adf_output['p_value']
+ stationary = p_value < 0.05 # Stationary if null-hyp. is rejected!
+
+ if debug or not stationary:
+ print(f'\t(ADF) Time-series IS {"" if stationary else "NOT "}difference stationary (p={p_value})!')
+
+ return stationary, p_value
+
+
+def remove_trend_and_diff(df: pd.DataFrame, debug: bool=False) -> pd.DataFrame:
+ """Difference every column as often as necessary (according to the KPSS and
+ ADF tests) to make each time-series stationary.
+
+ Arguments:
+ - df (pd.DataFrame): Dataframe of which stationarity must be checked and
+ guaranteed.
+ - debug (bool): Whether or not to print intermediate results (for
+ debugging purposes).
+
+ Returns:
+ - pd.DataFrame: Stationary dataframe.
+ """
+ # Keep track of number of differencing operations to omit NaN values at start of dataframe
+ max_diff = 1
+
+ # Initialize differenced dataframe
+ df_diff = df.copy()
+
+
+ # Make every column of dataframe stationary by...
+ for col in df_diff.columns:
+ print(f"Making column '{col}' stationary...")
+ periods = 0
+ kpss_stat, kpss_p = perform_kpss_test(df_diff[periods:], col, debug=debug)
+ adf_stat, adf_p = perform_adf_test(df_diff[periods:], col, debug=debug)
+
+ while not (kpss_stat and adf_stat):
+ print(f" iteration {periods}")
+
+ # Log number of differencing operations
+ periods += 1
+ print(f'\tDifferencing results over {periods} period{"s" if periods - 1 else ""}...')
+
+ # Difference signal
+ df_diff[col] = df_diff[col].diff()
+ df_diff = df_diff.fillna(0)
+
+ # Check for stationarity
+ kpss_stat, kpss_p = perform_kpss_test(df_diff[periods:], col, debug=debug)
+ adf_stat, adf_p = perform_adf_test(df_diff[periods:], col, debug=debug)
+
+ # Track the largest number of differencing operations over all columns
+ max_diff = max(max_diff, periods)
+
+ # Print if stationarity is obtained
+ print(f' --> (KPSS & ADF) Time-series IS stationary for {col} (after {periods} differencing operations)!')
+
+ # Break up print statements between columns
+ print('')
+
+ print(f'(Maximum number of differencing operations performed was {max_diff})')
+ # Return detrended (and possibly differenced) dataframe
+ return df_diff[max_diff:]
\ No newline at end of file
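The differencing loop above can be illustrated without statsmodels; a minimal pandas-only sketch (the trending toy series is our own) of what a single round of differencing does:

```python
import numpy as np
import pandas as pd

# A trending toy series: clearly non-stationary, since its mean grows over time.
t = np.arange(100, dtype=float)
s = pd.Series(t + np.sin(t))

# One round of differencing, as in remove_trend_and_diff: diff, then fill
# the leading NaN with 0 so the series keeps its original length.
s_diff = s.diff().fillna(0)

# The differenced series fluctuates around a constant level (about 1.0),
# which is the property the KPSS and ADF tests above check for.
print(s.iloc[:10].mean() < s.iloc[-10:].mean())  # True: the raw series drifts
print(round(s_diff.iloc[1:].mean(), 2))          # close to 1.0
```

In the real helper, the KPSS and ADF tests decide when to stop differencing instead of this visual check.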
diff --git a/structured_data/2022_06_02_causality/src/helpers/transfer_entropy.py b/structured_data/2022_06_02_causality/src/helpers/transfer_entropy.py
new file mode 100644
index 0000000..224cb52
--- /dev/null
+++ b/structured_data/2022_06_02_causality/src/helpers/transfer_entropy.py
@@ -0,0 +1,43 @@
+import matplotlib.pyplot as plt
+import pandas as pd
+
+def export_as_df(te_output):
+ """Transform the output of transfer entropy to a pandas dataframe.
+
+ Args:
+ te_output (list): output of a transfer entropy analysis.
+
+ Returns:
+ pd.DataFrame: dataframe including the p values.
+ """
+ df = pd.DataFrame()
+ for index, info in enumerate(te_output[0]):
+ col_name, *_ = info
+ data = []
+ for lag in te_output:
+ data.append(lag[index][1]["p_value_XY"].iloc[0])
+ df[col_name] = pd.Series(data)
+ return df
+
+
+def viz_df_raw(df, booldf, threshold):
+ """Visualize the results of a Transfer Entropy analysis.
+
+ Args:
+ df (pd.DataFrame): input data with raw p values.
+ booldf (pd.DataFrame): input data after thresholding containing booleans.
+ threshold (float): threshold used.
+ """
+ fs, fs_ax = plt.subplots(len(df.columns), 1, figsize=(10,len(df.columns)*2))
+
+ for ind, col in enumerate(df.columns):
+ df[col].astype(float).plot(kind='line', ax=fs_ax[ind], label=col, legend=True)
+ booldf[col].astype(float).plot(kind='bar', ax=fs_ax[ind], stacked=False, alpha=0.3)
+ fs_ax[ind].set_ylim([0,1])
+ if ind==0:
+ fs_ax[ind].set_title(f"Causal relationships found - Transfer Entropy with significance level = {threshold}")
+ if ind == len(df.columns)-1:
+ fs_ax[ind].set_xlabel("lags")
+ fs.tight_layout()
+ fs.subplots_adjust(hspace=0.4, wspace=0)
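To make the expected input of `export_as_df` concrete, here is a sketch with a hypothetical `te_output`. The structure (one entry per lag, each holding per-column `(name, results_frame)` pairs with a `p_value_XY` column) is inferred from the indexing in the function; the variable names and p-values are made up:

```python
import pandas as pd

# Hypothetical raw TE output: one entry per lag; each entry holds, per column,
# a (column_name, results_frame) pair whose frame carries a 'p_value_XY' column.
te_output = [
    [("tv", pd.DataFrame({"p_value_XY": [0.01]})),
     ("radio", pd.DataFrame({"p_value_XY": [0.40]}))],  # lag 1
    [("tv", pd.DataFrame({"p_value_XY": [0.03]})),
     ("radio", pd.DataFrame({"p_value_XY": [0.55]}))],  # lag 2
]

# Mirrors export_as_df: one dataframe column per variable, one row per lag.
df = pd.DataFrame()
for index, (col_name, *_rest) in enumerate(te_output[0]):
    df[col_name] = pd.Series([lag[index][1]["p_value_XY"].iloc[0]
                              for lag in te_output])

print(df)  # two rows (lags) x two columns (variables) of p-values
```

The resulting frame is exactly what `viz_df_raw` expects as its `df` argument.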
diff --git a/structured_data/2022_06_02_causality/src/transfer_entropy/README.md b/structured_data/2022_06_02_causality/src/transfer_entropy/README.md
new file mode 100644
index 0000000..43fc7ef
--- /dev/null
+++ b/structured_data/2022_06_02_causality/src/transfer_entropy/README.md
@@ -0,0 +1,5 @@
+## Purpose of this folder
+We advocate using the [PyCausality](https://pypi.org/project/PyCausality/) package when working with Transfer Entropy.
+The code itself is no longer maintained but still works well.
+Because the pip package is unstable (and may break further), we copied the source code and created a wrapper around it.
+See [the example notebook](./../Example%20notebook.ipynb) for inspiration to integrate this code in your own project!
\ No newline at end of file
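The linear Transfer Entropy computed by the vendored code boils down to Geweke's Granger-causality formulation: half the log ratio of residual variances with and without the lagged source. A self-contained numpy sketch of that idea (the toy coupled process and function name are our own, not PyCausality's API):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Toy coupled process: X drives Y with a one-step lag; Y does not drive X.
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.normal(scale=0.1)

def linear_te(target, source, lag=1):
    """Geweke-style linear TE: 0.5 * log(var(reduced resid) / var(full resid))."""
    Y, Y_lag, X_lag = target[lag:], target[:-lag], source[:-lag]
    # OLS residuals with (full) and without (reduced) the lagged source term
    full = np.column_stack([np.ones_like(Y), Y_lag, X_lag])
    reduced = np.column_stack([np.ones_like(Y), Y_lag])
    resid_full = Y - full @ np.linalg.lstsq(full, Y, rcond=None)[0]
    resid_reduced = Y - reduced @ np.linalg.lstsq(reduced, Y, rcond=None)[0]
    return 0.5 * np.log(np.var(resid_reduced) / np.var(resid_full))

print(linear_te(y, x))  # clearly positive: X -> Y
print(linear_te(x, y))  # near zero: no Y -> X
```

The wrapper adds shuffle-based significance testing on top of this quantity, which is what produces the p-values used in the notebook.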
diff --git a/structured_data/2022_06_02_causality/src/transfer_entropy/__init__.py b/structured_data/2022_06_02_causality/src/transfer_entropy/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/structured_data/2022_06_02_causality/src/transfer_entropy/pycausality/__init__.py b/structured_data/2022_06_02_causality/src/transfer_entropy/pycausality/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/structured_data/2022_06_02_causality/src/transfer_entropy/pycausality/src.py b/structured_data/2022_06_02_causality/src/transfer_entropy/pycausality/src.py
new file mode 100644
index 0000000..6cf2f5a
--- /dev/null
+++ b/structured_data/2022_06_02_causality/src/transfer_entropy/pycausality/src.py
@@ -0,0 +1,1226 @@
+import pandas as pd
+import statsmodels.api as sm
+import numpy as np
+
+from numpy import ma, atleast_2d, pi, sqrt, sum
+from scipy import stats, linalg
+from scipy.special import gammaln
+from six import string_types
+from scipy.stats.mstats import mquantiles
+
+from copy import deepcopy
+
+import matplotlib.pyplot as plt
+import matplotlib.cm as cm
+
+from dateutil.relativedelta import relativedelta
+
+import warnings
+import sys
+
+
+class LaggedTimeSeries():
+ """
+ Custom wrapper class for pandas DataFrames for performing predictive analysis.
+ Generates lagged time series and performs custom windowing over datetime indexes
+ """
+
+ def __init__(self, df, endog, lag=None, max_lag_only=True, window_size=None, window_stride=None):
+ """
+ Args:
+ df - Pandas DataFrame object of N columns. Must be indexed as an increasing
+ time series (i.e. past-to-future), with equal timesteps between each row
+ lag - The number of steps to be included. Each increase in lag will result
+ in N additional columns, where N is the number of columns in the original
+ dataframe. It will also remove the first `lag` rows.
+ max_lag_only - Defines whether the returned dataframe contains all lagged timeseries up to
+ and including the defined lag, or only the time series equal to this lag value
+ window_size - Dict containing key-value pairs only from within: {'YS':0,'MS':0,'D':0,'H':0,'min':0,'S':0,'ms':0}
+ Describes the desired size of each window, provided the data is indexed with datetime type. Leave as
+ None for no windowing. Units follow http://pandas.pydata.org/pandas-docs/stable/timeseries.html#timeseries-offset-aliases
+ window_stride - Dict containing key-value pairs only from within: {'YS':0,'MS':0,'D':0,'H':0,'min':0,'S':0,'ms':0}
+ Describes the size of the step between consecutive windows, provided the data is indexed with datetime type. Leave as
+ None for no windowing. Units follow http://pandas.pydata.org/pandas-docs/stable/timeseries.html#timeseries-offset-aliases
+
+ Returns: - n/a
+ """
+ self.df = sanitise(df)
+ self.endog = endog
+ self.axes = list(self.df.columns.values) # Variable names
+
+ self.max_lag_only = max_lag_only
+ if lag is not None:
+ self.t = lag
+ self.df = self.__apply_lags__()
+
+ if window_size is not None and window_stride is not None:
+ self.has_windows = True
+ self.__apply_windows__(window_size, window_stride)
+ else:
+ self.has_windows = False
+
+ def __apply_lags__(self):
+ """
+ Args:
+ n/a
+ Returns:
+ new_df.iloc[self.t:] - This is a new dataframe containing the original columns and
+ all lagged columns. Note that the first few rows (equal to self.t) will
+ be removed from the top, since lagged values are of course not available
+ for these indexes.
+ """
+ # Create a new dataframe to maintain the new data, dropping rows with NaN
+ new_df = self.df.copy(deep=True).dropna()
+
+ # Create new column with lagged timeseries for each variable
+ col_names = self.df.columns.values.tolist()
+
+ # If the user wants to only consider the time series lagged by the
+ # maximum number specified or by every series up to an including the maximum lag:
+ if self.max_lag_only == True:
+ for col_name in col_names:
+ new_df[col_name + '_lag' +
+ str(self.t)] = self.df[col_name].shift(self.t)
+
+ elif self.max_lag_only == False:
+ for col_name in col_names:
+ for t in range(1, self.t+1):
+ new_df[col_name + '_lag' +
+ str(t)] = self.df[col_name].shift(t)
+ else:
+ raise ValueError('max_lag_only must be a boolean')
+
+ # Drop the first t rows, which now contain NaN
+ return new_df.iloc[self.t:]
+
+ def __apply_windows__(self, window_size, window_stride):
+ """
+ Args:
+ window_size - Dict passed from self.__init__
+ window_stride - Dict passed from self.__init__
+ Returns:
+ n/a - Sets the daterange for the self.windows property to iterate along
+ """
+ self.window_size = {'YS': 0, 'MS': 0, 'D': 0,
+ 'H': 0, 'min': 0, 'S': 0, 'ms': 0}
+ self.window_stride = {'YS': 0, 'MS': 0,
+ 'D': 0, 'H': 0, 'min': 0, 'S': 0, 'ms': 0}
+
+ self.window_stride.update(window_stride)
+ self.window_size.update(window_size)
+ freq = ''
+ daterangefreq = freq.join(
+ [str(v)+str(k) for (k, v) in self.window_stride.items() if v != 0])
+ self.daterange = pd.date_range(
+ self.df.index.min(), self.df.index.max(), freq=daterangefreq)
+
+ def date_diff(self, window_size):
+ """
+ Args:
+ window_size - Dict passed from self.windows function
+ Returns:
+ start_date - The start date of the proposed window
+ end_date - The end date of the proposed window
+
+ This function is TBC - proposed due to possible duplication of the relativedelta usage in self.windows and self.headstart
+ """
+ pass
+
+ @property
+ def windows(self):
+ """
+ Args:
+ n/a
+ Returns:
+ windows - Generator defining a pandas DataFrame for each window of the data.
+ Usage like: [window for window in LaggedTimeSeries.windows]
+ """
+ if self.has_windows == False:
+ return self.df
+ # Loop Over TimeSeries Range
+ for i, dt in enumerate(self.daterange):
+
+ # Ensure Each Division Contains Required Number of Months
+ if dt-relativedelta(years=self.window_size['YS'],
+ months=self.window_size['MS'],
+ days=self.window_size['D'],
+ hours=self.window_size['H'],
+ minutes=self.window_size['min'],
+ seconds=self.window_size['S'],
+ microseconds=self.window_size['ms']
+ ) >= self.df.index.min():
+
+ # Create Window
+ yield self.df.loc[(dt-relativedelta(years=self.window_size['YS'],
+ months=self.window_size['MS'],
+ days=self.window_size['D'],
+ hours=self.window_size['H'],
+ minutes=self.window_size['min'],
+ seconds=self.window_size['S'],
+ microseconds=self.window_size['ms']
+ )): dt]
+
+ @property
+ def headstart(self):
+ """
+ Args:
+ n/a
+ Returns:
+ len(windows) - The number of windows which would have start dates before the desired date range.
+ Used in TransferEntropy class to slice off incomplete windows.
+
+ """
+ windows = [i for i, dt in enumerate(self.daterange)
+ if dt-relativedelta(years=self.window_size['YS'],
+ months=self.window_size['MS'],
+ days=self.window_size['D'],
+ hours=self.window_size['H'],
+ minutes=self.window_size['min'],
+ seconds=self.window_size['S'],
+ microseconds=self.window_size['ms']
+ ) < self.df.index.min()]
+ # i.e. count from the first window which falls entirely after the earliest date
+ return len(windows)
+
+
+class TransferEntropy():
+ """
+ Functional class to calculate Transfer Entropy between time series, to detect causal signals.
+ Currently accepts two series: X(t) and Y(t). Future extensions planned to accept additional endogenous
+ series: X1(t), X2(t), X3(t) etc.
+ """
+
+ def __init__(self, DF, endog, exog, lag=None, window_size=None, window_stride=None):
+ """
+ Args:
+ DF - (DataFrame) Time series data for X and Y (NOT including lagged variables)
+ endog - (string) Fieldname for endogenous (dependent) variable Y
+ exog - (string) Fieldname for exogenous (independent) variable X
+ lag - (integer) Number of periods (rows) by which to lag timeseries data
+ window_size - (Dict) Must contain key-value pairs only from within: {'YS':0,'MS':0,'D':0,'H':0,'min':0,'S':0,'ms':0}
+ Describes the desired size of each window, provided the data is indexed with datetime type. Leave as
+ None for no windowing. Units follow http://pandas.pydata.org/pandas-docs/stable/timeseries.html#timeseries-offset-aliases
+ window_stride - (Dict) Must contain key-value pairs only from within: {'YS':0,'MS':0,'D':0,'H':0,'min':0,'S':0,'ms':0}
+ Describes the size of the step between consecutive windows, provided the data is indexed with datetime type. Leave as
+ None for no windowing. Units follow http://pandas.pydata.org/pandas-docs/stable/timeseries.html#timeseries-offset-aliases
+ Returns:
+ n/a
+ """
+ self.lts = LaggedTimeSeries(df=sanitise(DF),
+ endog=endog,
+ lag=lag,
+ window_size=window_size,
+ window_stride=window_stride)
+
+ if self.lts.has_windows is True:
+ self.df = self.lts.windows
+ self.date_index = self.lts.daterange[self.lts.headstart:]
+ self.results = pd.DataFrame(index=self.date_index)
+ self.results.index.name = "windows_ending_on"
+ else:
+ self.df = [self.lts.df]
+ self.results = pd.DataFrame(index=[0])
+ self.max_lag_only = True
+ self.endog = endog # Dependent Variable Y
+ self.exog = exog # Independent Variable X
+ self.lag = lag
+
+ """ If using KDE, this ensures the covariance matrices are calculated once over all data, rather
+ than for each window. This saves computational time and provides a fair point for comparison."""
+ self.covars = [[], []]
+
+ for i, (X, Y) in enumerate({self.exog: self.endog, self.endog: self.exog}.items()):
+ X_lagged = X+'_lag'+str(self.lag)
+ Y_lagged = Y+'_lag'+str(self.lag)
+
+ self.covars[i] = [np.cov(self.lts.df[[Y, Y_lagged, X_lagged]].values.T),
+ np.cov(
+ self.lts.df[[X_lagged, Y_lagged]].values.T),
+ np.cov(self.lts.df[[Y, Y_lagged]].values.T),
+ np.ones(shape=(1, 1)) * self.lts.df[Y_lagged].std()**2]
+
+ # Account for equal signals in case of lag 0 by adding identity matrix to covariance matrices
+ if lag == 0:
+ for j, c_j in enumerate(self.covars[i]):
+ if j % 2 == 0:
+ self.covars[i][j] += 1e-10 * np.eye(*c_j.shape)
+
+ def linear_TE(self, df=None, n_shuffles=0):
+ """
+ Linear Transfer Entropy for directional causal inference
+
+ Defined: G-causality * 0.5, where G-causality described by the reduction in variance of the residuals
+ when considering side information.
+ Calculated using: log(var(e_joint)) - log(var(e_independent)) where e_joint and e_independent
+ represent the residuals from OLS fitting in the joint (X(t),Y(t)) and reduced (Y(t)) cases
+
+ Arguments:
+ n_shuffles - (integer) Number of times to shuffle the dataframe, destroying the time series temporality, in order to
+ perform significance testing.
+ Returns:
+ transfer_entropies - (list) Directional Linear Transfer Entropies from X(t)->Y(t) and Y(t)->X(t) respectively
+ """
+ # Prepare lists for storing results
+ TEs = []
+ shuffled_TEs = []
+ p_values = []
+ z_scores = []
+
+ # Loop over all windows
+ for i, df in enumerate(self.df):
+ df = deepcopy(df)
+
+ # Shows user that something is happening
+ # if self.lts.has_windows is True:
+ # print("Window ending: ", self.date_index[i])
+
+ # Initialise list to return TEs
+ transfer_entropies = [0, 0]
+
+ # Require us to compare information transfer bidirectionally
+ for i, (X, Y) in enumerate({self.exog: self.endog, self.endog: self.exog}.items()):
+
+ # Note X-t, Y-t
+ X_lagged = X+'_lag'+str(self.lag)
+ Y_lagged = Y+'_lag'+str(self.lag)
+
+ # Calculate Residuals after OLS Fitting, for both Independent and Joint Cases
+ joint_residuals = sm.OLS(df[Y], sm.add_constant(
+ df[[Y_lagged, X_lagged]])).fit().resid
+ independent_residuals = sm.OLS(
+ df[Y], sm.add_constant(df[Y_lagged])).fit().resid
+
+ # Use Geweke's formula for Granger Causality
+ if np.var(joint_residuals) == 0:
+ granger_causality = 0
+ else:
+ granger_causality = np.log(np.var(independent_residuals) /
+ np.var(joint_residuals))
+
+ # Calculate Linear Transfer Entropy from Granger Causality
+ transfer_entropies[i] = granger_causality/2
+
+ TEs.append(transfer_entropies)
+
+ # Calculate Significance of TE during this window
+ if n_shuffles > 0:
+ p, z, TE_mean = significance(df=df,
+ TE=transfer_entropies,
+ endog=self.endog,
+ exog=self.exog,
+ lag=self.lag,
+ n_shuffles=n_shuffles,
+ method='linear')
+
+ shuffled_TEs.append(TE_mean)
+ p_values.append(p)
+ z_scores.append(z)
+
+ # Store Linear Transfer Entropy from X(t)->Y(t) and from Y(t)->X(t)
+ self.add_results({'TE_linear_XY': np.array(TEs)[:, 0],
+ 'TE_linear_YX': np.array(TEs)[:, 1],
+ 'p_value_linear_XY': None,
+ 'p_value_linear_YX': None,
+ 'z_score_linear_XY': 0,
+ 'z_score_linear_YX': 0
+ })
+
+ if n_shuffles > 0:
+ # Store Significance Transfer Entropy from X(t)->Y(t) and from Y(t)->X(t)
+
+ self.add_results({'p_value_linear_XY': np.array(p_values)[:, 0],
+ 'p_value_linear_YX': np.array(p_values)[:, 1],
+ 'z_score_linear_XY': np.array(z_scores)[:, 0],
+ 'z_score_linear_YX': np.array(z_scores)[:, 1],
+ 'Ave_TE_linear_XY': np.array(shuffled_TEs)[:, 0],
+ 'Ave_TE_linear_YX': np.array(shuffled_TEs)[:, 1]
+ })
+
+ return transfer_entropies
+
+ def nonlinear_TE(self, df=None, pdf_estimator='histogram', bins=None, bandwidth=None, gridpoints=20, n_shuffles=0):
+ """
+ NonLinear Transfer Entropy for directional causal inference
+
+ Defined: TE = TE_XY - TE_YX where TE_XY = H(Y|Y-t) - H(Y|Y-t,X-t)
+ Calculated using: H(Y|Y-t,X-t) = H(Y,Y-t,X-t) - H(Y,Y-t) and finding joint entropy through density estimation
+
+ Arguments:
+ pdf_estimator - (string) 'histogram' or 'kernel'. Defines which method is used for density estimation
+ of the distribution - either histogram or KDE
+ bins - (dict of lists) Optional parameter to provide hard-coded bin-edges. Dict keys
+ must contain names of variables - including lagged columns! Dict values must be lists
+ containing bin-edge numerical values.
+ bandwidth - (float) Optional parameter for custom bandwidth in KDE. This is a scalar multiplier to the covariance
+ matrix used (see: https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gaussian_kde.covariance_factor.html)
+ gridpoints - (integer) Number of gridpoints (in each dimension) to discretise the probability space when performing
+ integration of the kernel density estimate. Increasing this gives more precision, but significantly
+ increases execution time
+ n_shuffles - (integer) Number of times to shuffle the dataframe, destroying the time series temporality, in order to
+ perform significance testing.
+
+ Returns:
+ transfer_entropies - (list) Directional Transfer Entropies from X(t)->Y(t) and Y(t)->X(t) respectively
+
+ (Also stores TE, Z-score and p-values in self.results - for each window if windows defined.)
+ """
+ # Retrieve user-defined bins
+ self.bins = bins
+ if self.bins is None:
+ self.bins = {self.endog: None}
+
+ # Prepare lists for storing results
+ TEs = []
+ shuffled_TEs = []
+ p_values = []
+ z_scores = []
+
+ # Loop over all windows
+ for j, df in enumerate(self.df):
+ df = deepcopy(df)
+
+ # Shows user that something is happening
+ # if self.lts.has_windows is True and debug:
+ # print("Window ending: ", self.date_index[j])
+
+ # Initialise list to return TEs
+ transfer_entropies = [0, 0]
+
+ # Require us to compare information transfer bidirectionally
+ for i, (X, Y) in enumerate({self.exog: self.endog, self.endog: self.exog}.items()):
+ # Entropy calculated using Probability Density Estimation:
+ # Following: https://stat.ethz.ch/education/semesters/SS_2006/CompStat/sk-ch2.pdf
+ # Also: https://www.cs.cmu.edu/~aarti/Class/10704_Spring15/lecs/lec5.pdf
+
+ # Note Lagged Terms
+ X_lagged = X+'_lag'+str(self.lag)
+ Y_lagged = Y+'_lag'+str(self.lag)
+
+ # Estimate PDF using Gaussian Kernels and use H(x) = p(x) log p(x)
+ # 1. H(Y,Y-t,X-t)
+ H1 = get_entropy(df=df[[Y, Y_lagged, X_lagged]],
+ gridpoints=gridpoints,
+ bandwidth=bandwidth,
+ estimator=pdf_estimator,
+ bins={k: v for (k, v) in self.bins.items()
+ if k in [Y, Y_lagged, X_lagged]},
+ covar=self.covars[i][0])
+
+ # 2. H(Y-t,X-t)
+ H2 = get_entropy(df=df[[X_lagged, Y_lagged]],
+ gridpoints=gridpoints,
+ bandwidth=bandwidth,
+ estimator=pdf_estimator,
+ bins={k: v for (k, v) in self.bins.items()
+ if k in [X_lagged, Y_lagged]},
+ covar=self.covars[i][1])
+ #print('\t', H2)
+ # 3. H(Y,Y-t)
+ H3 = get_entropy(df=df[[Y, Y_lagged]],
+ gridpoints=gridpoints,
+ bandwidth=bandwidth,
+ estimator=pdf_estimator,
+ bins={k: v for (k, v) in self.bins.items()
+ if k in [Y, Y_lagged]},
+ covar=self.covars[i][2])
+ #print('\t', H3)
+ # 4. H(Y-t)
+ H4 = get_entropy(df=df[[Y_lagged]],
+ gridpoints=gridpoints,
+ bandwidth=bandwidth,
+ estimator=pdf_estimator,
+ bins={k: v for (k, v) in self.bins.items()
+ if k in [Y_lagged]},
+ covar=self.covars[i][3])
+
+ # Calculate Conditional Entropy using: H(Y|X-t,Y-t) = H(Y,X-t,Y-t) - H(X-t,Y-t)
+ conditional_entropy_joint = H1 - H2
+
+ # And Conditional Entropy independent of X(t) H(Y|Y-t) = H(Y,Y-t) - H(Y-t)
+ conditional_entropy_independent = H3 - H4
+
+ # Directional Transfer Entropy is the difference between the conditional entropies
+ transfer_entropies[i] = conditional_entropy_independent - \
+ conditional_entropy_joint
+
+ TEs.append(transfer_entropies)
+
+ # Calculate Significance of TE during this window
+ if n_shuffles > 0:
+ p, z, TE_mean = significance(df=df,
+ TE=transfer_entropies,
+ endog=self.endog,
+ exog=self.exog,
+ lag=self.lag,
+ n_shuffles=n_shuffles,
+ pdf_estimator=pdf_estimator,
+ bins=self.bins,
+ bandwidth=bandwidth,
+ method='nonlinear')
+
+ shuffled_TEs.append(TE_mean)
+ p_values.append(p)
+ z_scores.append(z)
+
+ # Store Transfer Entropy from X(t)->Y(t) and from Y(t)->X(t)
+ self.add_results({'TE_XY': np.array(TEs)[:, 0],
+ 'TE_YX': np.array(TEs)[:, 1],
+ 'p_value_XY': None,
+ 'p_value_YX': None,
+ 'z_score_XY': 0,
+ 'z_score_YX': 0
+ })
+ if n_shuffles > 0:
+ # Store Significance Transfer Entropy from X(t)->Y(t) and from Y(t)->X(t)
+
+ self.add_results({'p_value_XY': np.array(p_values)[:, 0],
+ 'p_value_YX': np.array(p_values)[:, 1],
+ 'z_score_XY': np.array(z_scores)[:, 0],
+ 'z_score_YX': np.array(z_scores)[:, 1],
+ 'Ave_TE_XY': np.array(shuffled_TEs)[:, 0],
+ 'Ave_TE_YX': np.array(shuffled_TEs)[:, 1]
+ })
+ return transfer_entropies
+
+ def add_results(self, results_dict):
+ """
+ Args:
+ results_dict - JSON-style data to store in existing self.results DataFrame
+ Returns:
+ n/a
+ """
+ for (k, v) in results_dict.items():
+ self.results[str(k)] = v
+
+
+def significance(df, TE, endog, exog, lag, n_shuffles, method, pdf_estimator=None, bins=None, bandwidth=None, both=True):
+ """
+ Perform significance analysis on the hypothesis test of statistical causality, for both X(t)->Y(t)
+ and Y(t)->X(t) directions
+
+ Calculated using: Assuming stationarity, we shuffle the time series to provide the null hypothesis.
+ The proportion of tests where TE > TE_shuffled gives the p-value significance level.
+ The amount by which the calculated TE is greater than the average shuffled TE, divided
+ by the standard deviation of the results, is the z-score significance level.
+
+ Arguments:
+ TE - (list) Contains the transfer entropy in each direction, i.e. [TE_XY, TE_YX]
+ endog - (string) The endogenous variable in the TE analysis being significance tested (i.e. X or Y)
+ exog - (string) The exogenous variable in the TE analysis being significance tested (i.e. X or Y)
+ pdf_estimator - (string) The pdf_estimator used in the original TE analysis
+ bins - (Dict of lists) The bins used in the original TE analysis
+
+ n_shuffles - (integer) Number of times to shuffle the dataframe, destroying temporality
+ both - (Bool) Whether to shuffle both endog and exog variables (z-score) or just exog variables (giving z*-score)
+ Returns:
+ p_value - Probability of observing the result given the null hypothesis
+ z_score - Number of Standard Deviations result is from mean (normalised)
+ """
+
+ # Prepare array for Transfer Entropy of each Shuffle
+ shuffled_TEs = np.zeros(shape=(2, n_shuffles))
+
+ ##
+ if both is True:
+ pass # TBC
+
+ for i in range(n_shuffles):
+ # Perform Shuffle
+ df = shuffle_series(df)
+
+ # Calculate New TE
+ shuffled_causality = TransferEntropy(DF=df,
+ endog=endog,
+ exog=exog,
+ lag=lag
+ )
+ if method == 'linear':
+ TE_shuffled = shuffled_causality.linear_TE(df, n_shuffles=0)
+ else:
+ TE_shuffled = shuffled_causality.nonlinear_TE(
+ df, pdf_estimator, bins, bandwidth, n_shuffles=0)
+ shuffled_TEs[:, i] = TE_shuffled
+
+ # Calculate p-values for each direction
+ p_values = (np.count_nonzero(TE[0] < shuffled_TEs[0, :]) / n_shuffles,
+ np.count_nonzero(TE[1] < shuffled_TEs[1, :]) / n_shuffles)
+
+ # Calculate z-scores for each direction
+ z_scores = ((TE[0] - np.mean(shuffled_TEs[0, :])) / np.std(shuffled_TEs[0, :]),
+ (TE[1] - np.mean(shuffled_TEs[1, :])) / np.std(shuffled_TEs[1, :]))
+
+ TE_mean = (np.mean(shuffled_TEs[0, :]),
+ np.mean(shuffled_TEs[1, :]))
+
+ # Return the self.DF value to the unshuffled case
+ return p_values, z_scores, TE_mean
+
+##############################################################################################################
+# U T I L I T Y C L A S S E S
+##############################################################################################################
+
+
+class NDHistogram():
+ """
+ Custom histogram class wrapping the default numpy implementations (np.histogram, np.histogramdd).
+ This allows for dimension-agnostic histogram calculations, custom auto-binning and
+ associated data and methods to be stored for each object (e.g. Probability Density etc.)
+ """
+
+ def __init__(self, df, bins=None, max_bins=15):
+ """
+ Arguments:
+ df - DataFrame passed through from the TransferEntropy class
+ bins - Bin edges passed through from the TransferEntropy class
+ max_bins - Number of bins per each dimension passed through from the TransferEntropy class
+ Returns:
+ self.pdf - This is an N-dimensional Probability Density Function, stored as a
+ Numpy histogram, representing the proportion of samples in each bin.
+ """
+ df = sanitise(df)
+ self.df = df.reindex(columns=sorted(df.columns)) # Sort axes by name
+ self.max_bins = max_bins
+ self.axes = list(self.df.columns.values)
+ self.bins = bins
+ self.n_dims = len(self.axes)
+
+ # Bins must match number and order of dimensions
+ if self.bins is None:
+ AB = AutoBins(self.df)
+ self.bins = AB.sigma_bins(max_bins=max_bins)
+ elif set(self.bins.keys()) != set(self.axes):
+ warnings.warn(
+ 'Incompatible bins provided - defaulting to sigma bins')
+ AB = AutoBins(self.df)
+ self.bins = AB.sigma_bins(max_bins=max_bins)
+
+ ordered_bins = [sorted(self.bins[key])
+ for key in sorted(self.bins.keys())]
+
+ # Create ND histogram (np.histogramdd doesn't scale down to 1D)
+ if self.n_dims == 1:
+ self.Hist, self.Dedges = np.histogram(
+ self.df.values, bins=ordered_bins[0], density=False)
+ elif self.n_dims > 1:
+ self.Hist, self.Dedges = np.histogramdd(
+ self.df.values, bins=ordered_bins, density=False)
+
+ # Empirical Probability Density Function
+ if self.Hist.sum() == 0:
+ print(self.Hist.shape)
+
+ with pd.option_context('display.max_rows', None, 'display.max_columns', 3):
+ print(self.df.tail(40))
+
+ sys.exit(
+ "User-defined histogram is empty. Check bins or increase data points")
+ else:
+ self.pdf = self.Hist/self.Hist.sum()
+ self._set_entropy_(self.pdf)
+
+ def _set_entropy_(self, pdf):
+ """
+ Arguments:
+ pdf - Probability Density Function; this is calculated using the N-dimensional histogram above.
+ Returns:
+ n/a
+ Sets entropy for marginal distributions: H(X), H(Y) etc. as well as joint entropy H(X,Y)
+ """
+ # Prepare empty dict for marginal entropies along each dimension
+ self.H = {}
+
+ if self.n_dims > 1:
+
+ # Joint entropy H(X,Y) = -sum(pdf(x,y) * log(pdf(x,y)))
+ # Use masking to replace log(0) with 0
+ self.H_joint = -np.sum(pdf * ma.log2(pdf).filled(0))
+
+ # Single entropy for each dimension H(X) = -sum(pdf(x) * log(pdf(x)))
+ for a, axis_name in enumerate(self.axes):
+ # Use masking to replace log(0) with 0
+ self.H[axis_name] = - \
+ np.sum(pdf.sum(axis=a) * ma.log2(pdf.sum(axis=a)).filled(0))
+ else:
+ # Joint entropy and single entropy are the same
+ self.H_joint = -np.sum(pdf * ma.log2(pdf).filled(0))
+ self.H[self.df.columns[0]] = self.H_joint
+
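`_set_entropy_` derives each marginal by summing the joint pdf over an axis, masking `log(0)` terms to zero. A standalone sketch for a 2x2 joint distribution (two independent fair coins):

```python
import numpy as np
import numpy.ma as ma

# Independent fair coins: joint pdf is uniform over 4 cells
pdf = np.full((2, 2), 0.25)

# Joint entropy H(X,Y) = -sum(pdf * log2(pdf)), with log(0) masked to 0
H_joint = -np.sum(pdf * ma.log2(pdf).filled(0))

# Marginal entropies from the axis sums of the joint pdf
H_x = -np.sum(pdf.sum(axis=1) * ma.log2(pdf.sum(axis=1)).filled(0))
H_y = -np.sum(pdf.sum(axis=0) * ma.log2(pdf.sum(axis=0)).filled(0))

# For independent variables, H(X,Y) = H(X) + H(Y) = 1 + 1 = 2 bits
```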
+
+class AutoBins():
+ """
+ Prototyping class for generating data-driven binning.
+ Handles lagged time series, so only DF[X(t), Y(t)] is required.
+ """
+
+ def __init__(self, df, lag=None):
+ """
+ Args:
+ df - (DataFrame) Time series data to classify into bins
+ lag - (int) Lag used to generate bins for the lagged columns as well
+ Returns:
+ n/a
+ """
+ # Ensure data is in DataFrame form
+ self.df = sanitise(df)
+ self.axes = self.df.columns.values
+ self.ndims = len(self.axes)
+ self.N = len(self.df)
+ self.lag = lag
+
+ def __extend_bins__(self, bins):
+ """
+ Function to generate bins for lagged time series not present in self.df
+ Args:
+ bins - (Dict of List) Bins edges calculated by some AutoBins.method()
+ Returns:
+ bins - (Dict of lists) Bin edges keyed by column name
+ """
+ self.max_lag_only = True # temporary until this option is removed
+
+ # Handle lagging for bins, and calculate default bins where edges are not provided
+ if self.max_lag_only:
+ bins.update({fieldname + '_lag' + str(self.lag): edges
+ for (fieldname, edges) in bins.items()})
+ else:
+ bins.update({fieldname + '_lag' + str(t): edges
+ for (fieldname, edges) in bins.items() for t in range(self.lag)})
+
+ return bins
+
+ def MIC_bins(self, max_bins=15):
+ """
+ Method to find optimal bin widths in each dimension, using a naive search to
+ maximise the mutual information divided by number of bins. Only accepts data
+ with two dimensions [X(t),Y(t)].
+ We increase the n_bins parameter in each dimension, and take the bins which
+ result in the greatest Maximum Information Coefficient (MIC)
+
+ (Note that this is restricted to equal-width bins only.)
+ Defined: MIC = I(X,Y) / log2(min(n_bins))
+ edges = {Y:[a,b,c,d], Y-t:[a,b,c,d], X-t:[e,f,g]},
+ n_bins = [bx,by]
+ Calculated using: argmax { I(X,Y) / log2(min(n_bins)) }
+ Args:
+ max_bins - (int) The maximum allowed bins in each dimension
+ Returns:
+ opt_edges - (dict) The optimal bin-edges for pdf estimation
+ using the histogram method, keyed by df column names
+ All bins equal-width.
+ """
+ if len(self.df.columns.values) > 2:
+ raise ValueError(
+ 'Too many columns provided in DataFrame. MIC_bins only accepts 2 columns (no lagged columns)')
+
+ min_bins = 3
+
+ # Initialise array to store MIC values
+ MICs = np.zeros(shape=[1+max_bins-min_bins, 1+max_bins-min_bins])
+
+ # Loop over each dimension
+ for b_x in range(min_bins, max_bins+1):
+
+ for b_y in range(min_bins, max_bins+1):
+
+ # Update parameters
+ n_bins = [b_x, b_y]
+
+ # Update dict of bin edges
+ edges = {dim: list(np.linspace(self.df[dim].min(),
+ self.df[dim].max(),
+ n_bins[i]+1))
+ for i, dim in enumerate(self.df.columns.values)}
+
+ # Calculate Maximum Information Coefficient
+ HDE = NDHistogram(self.df, edges)
+
+ I_xy = sum([H for H in HDE.H.values()]) - HDE.H_joint
+
+ MIC = I_xy / np.log2(np.min(n_bins))
+
+ MICs[b_x-min_bins][b_y-min_bins] = MIC
+
+ # Get optimal b_x, b_y values (first maximum if there are ties)
+ b_opt = np.unravel_index(np.argmax(MICs), MICs.shape)
+ n_bins = [int(b_opt[0]) + min_bins, int(b_opt[1]) + min_bins]
+
+ bins = {dim: list(np.linspace(self.df[dim].min(),
+ self.df[dim].max(),
+ n_bins[i]+1))
+ for i, dim in enumerate(self.df.columns.values)}
+
+ if self.lag is not None:
+ bins = self.__extend_bins__(bins)
+ # Return the optimal bin-edges
+ return bins
+
+ def knuth_bins(self, max_bins=15):
+ """
+ Method to find optimal bin widths in each dimension, using a naive search to
+ maximise the log-likelihood given data. Only accepts data
+ with two dimensions [X(t),Y(t)].
+ Derived from Matlab code provided in Knuth (2013): https://arxiv.org/pdf/physics/0605197.pdf
+
+ (Note that this is restricted to equal-width bins only.)
+ Args:
+ max_bins - (int) The maximum allowed bins in each dimension
+ Returns:
+ bins - (dict) The optimal bin-edges for pdf estimation
+ using the histogram method, keyed by df column names
+ All bins equal-width.
+ """
+ if len(self.df.columns.values) > 2:
+ raise ValueError(
+ 'Too many columns provided in DataFrame. knuth_bins only accepts 2 columns (no lagged columns)')
+
+ min_bins = 3
+
+ # Initialise array to store log posterior values
+ log_probabilities = np.zeros(
+ shape=[1+max_bins-min_bins, 1+max_bins-min_bins])
+
+ # Loop over each dimension
+ for b_x in range(min_bins, max_bins+1):
+
+ for b_y in range(min_bins, max_bins+1):
+
+ # Update parameters
+ Ms = [b_x, b_y]
+
+ # Update dict of bin edges
+ bins = {dim: list(np.linspace(self.df[dim].min(),
+ self.df[dim].max(),
+ Ms[i]+1))
+ for i, dim in enumerate(self.df.columns.values)}
+
+ # Calculate Maximum log Posterior
+
+ # Create N-d histogram to count number per bin
+ HDE = NDHistogram(self.df, bins)
+ nk = HDE.Hist
+
+ # M = number of bins in total = Mx * My * Mz ... etc.
+ M = np.prod(Ms)
+
+ log_prob = (self.N * np.log(M)
+ + gammaln(0.5 * M)
+ - M * gammaln(0.5)
+ - gammaln(self.N + 0.5 * M)
+ + np.sum(gammaln(nk.ravel() + 0.5)))
+
+ log_probabilities[b_x-min_bins][b_y-min_bins] = log_prob
+
+ # Get optimal b_x, b_y values (first maximum if there are ties)
+ b_opt = np.unravel_index(np.argmax(log_probabilities),
+ log_probabilities.shape)
+ Ms = [int(b_opt[0]) + min_bins, int(b_opt[1]) + min_bins]
+
+ bins = {dim: list(np.linspace(self.df[dim].min(),
+ self.df[dim].max(),
+ Ms[i]+1))
+ for i, dim in enumerate(self.df.columns.values)}
+
+ if self.lag is not None:
+ bins = self.__extend_bins__(bins)
+ # Return the optimal bin-edges
+ return bins
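`knuth_bins` maximises Knuth's marginal log-posterior over equal-width bin counts. The same objective in one dimension, sketched with a naive search as above (synthetic normal data assumed):

```python
import numpy as np
from scipy.special import gammaln

def knuth_log_posterior(data, m):
    """Knuth (2013) log-posterior for m equal-width bins over `data`."""
    n = len(data)
    nk, _ = np.histogram(data, bins=m)  # counts per bin
    return (n * np.log(m)
            + gammaln(0.5 * m)
            - m * gammaln(0.5)
            - gammaln(n + 0.5 * m)
            + np.sum(gammaln(nk + 0.5)))

rng = np.random.default_rng(1)
data = rng.normal(size=500)

# Naive search over candidate bin counts, mirroring knuth_bins()
scores = {m: knuth_log_posterior(data, m) for m in range(3, 16)}
best_m = max(scores, key=scores.get)
```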
+
+ def sigma_bins(self, max_bins=15):
+ """
+ Returns bins for N-dimensional data, using standard deviation binning: each
+ bin is one S.D in width, with bins centered on the mean. Where outliers exist
+ beyond the maximum number of SDs dictated by the max_bins parameter, the
+ bins are extended to minimum/maximum values to ensure all data points are
+ captured. This may mean larger bins in the tails, and up to two bins
+ greater than the max_bins parameter suggests in total (in the unlikely case of huge
+ outliers on both sides).
+ Args:
+ max_bins - (int) The maximum allowed bins in each dimension
+ Returns:
+ bins - (dict) The optimal bin-edges for pdf estimation
+ using the histogram method, keyed by df column names
+ """
+
+ bins = {k: [np.mean(v)-int(max_bins/2)*np.std(v) + i * np.std(v) for i in range(max_bins+1)]
+ for (k, v) in self.df.items()} # Note: same as self.df.to_dict('list').items()
+
+ # Since some outliers can be missed, extend bins if any points are not yet captured
+ [bins[k].append(self.df[k].min())
+ for k in self.df.keys() if self.df[k].min() < min(bins[k])]
+ [bins[k].append(self.df[k].max())
+ for k in self.df.keys() if self.df[k].max() > max(bins[k])]
+
+ if self.lag is not None:
+ bins = self.__extend_bins__(bins)
+ return bins
+
+ def equiprobable_bins(self, max_bins=15):
+ """
+ Returns bins for N-dimensional data, such that each bin should contain equal numbers of
+ samples.
+ *** Note that due to SciPy's mquantiles() design, the equipartition is not strictly exact -
+ it operates independently on the marginals, so with large bin numbers there are usually
+ significant discrepancies from the desired behaviour. Fortunately, for TE we find
+ equipartitioning extremely beneficial, so we get good accuracy with small bin counts ***
+ Args:
+ max_bins - (int) The number of bins in each dimension
+ Returns:
+ bins - (dict) The calculated bin-edges for pdf estimation
+ using the histogram method, keyed by df column names
+ """
+ quantiles = np.array([i/max_bins for i in range(0, max_bins+1)])
+ bins = dict(zip(self.axes, mquantiles(
+ a=self.df, prob=quantiles, axis=0).T.tolist()))
+
+ # Remove_duplicates
+ bins = {k: sorted(set(bins[k])) for (k, v) in bins.items()}
+
+ if self.lag is not None:
+ bins = self.__extend_bins__(bins)
+ return bins
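The quantile construction above can be reproduced with scipy's `mquantiles` directly. A minimal 1-D sketch (synthetic exponential data assumed), showing that each resulting bin holds roughly an equal share of the samples:

```python
import numpy as np
from scipy.stats.mstats import mquantiles

rng = np.random.default_rng(5)
data = rng.exponential(size=1000)

max_bins = 4
quantiles = np.array([i / max_bins for i in range(max_bins + 1)])

# Bin edges at evenly spaced quantiles; duplicates removed as in equiprobable_bins()
edges = sorted(set(mquantiles(a=data, prob=quantiles)))

counts, _ = np.histogram(data, bins=edges)
# counts is approximately [250, 250, 250, 250]
```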
+
+
+class _kde_(stats.gaussian_kde):
+ """
+ Subclass of scipy.stats.gaussian_kde. This is to enable the passage of a pre-defined covariance matrix, via the
+ `covar` parameter. This is handled internally within TransferEntropy class.
+ The matrix is calculated on the overall dataset, before windowing, which allows for consistency between windows,
+ and avoiding duplicative computational operations, compared with calculating the covariance each window.
+ Functionality is kept as close as possible to scipy.stats.gaussian_kde; docs available:
+ https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gaussian_kde.html
+ """
+
+ def __init__(self, dataset, bw_method=None, df=None, covar=None):
+ self.dataset = atleast_2d(dataset)
+ if not self.dataset.size > 1:
+ raise ValueError("`dataset` input should have multiple elements.")
+
+ self.d, self.n = self.dataset.shape
+ self.set_bandwidth(bw_method=bw_method, covar=covar)
+
+ def set_bandwidth(self, bw_method=None, covar=None):
+
+ if bw_method is None:
+ pass
+ elif bw_method == 'scott':
+ self.covariance_factor = self.scotts_factor
+ elif bw_method == 'silverman':
+ self.covariance_factor = self.silverman_factor
+ elif np.isscalar(bw_method) and not isinstance(bw_method, str):
+ self._bw_method = 'use constant'
+ self.covariance_factor = lambda: bw_method
+ elif callable(bw_method):
+ self._bw_method = bw_method
+ self.covariance_factor = lambda: self._bw_method(self)
+ else:
+ msg = "`bw_method` should be 'scott', 'silverman', a scalar " \
+ "or a callable."
+ raise ValueError(msg)
+
+ self._compute_covariance(covar)
+
+ def _compute_covariance(self, covar):
+
+ if covar is not None:
+ try:
+ self._data_covariance = covar
+ self._data_inv_cov = linalg.inv(self._data_covariance)
+ except linalg.LinAlgError:
+ # Singular matrix: regularise the diagonal and retry
+ print('\tSingular matrix encountered...')
+ covar += 1e-5 * np.eye(*covar.shape)
+ self._data_covariance = covar
+ self._data_inv_cov = linalg.inv(self._data_covariance)
+
+ self.factor = self.covariance_factor()
+ # Cache covariance and inverse covariance of the data
+ if not hasattr(self, '_data_inv_cov'):
+ self._data_covariance = atleast_2d(np.cov(self.dataset, rowvar=1,
+ bias=False))
+ self._data_inv_cov = linalg.inv(self._data_covariance)
+
+ self.covariance = self._data_covariance * self.factor**2
+ self.inv_cov = self._data_inv_cov / self.factor**2
+ self._norm_factor = sqrt(linalg.det(2*pi*self.covariance)) * self.n
+
+
+##############################################################################################################
+# U T I L I T Y F U N C T I O N S
+##############################################################################################################
+
+
+def get_pdf(df, gridpoints=None, bandwidth=None, estimator=None, bins=None, covar=None):
+ """
+ Function for non-parametric density estimation
+ Args:
+ df - (DataFrame) Samples over which to estimate density
+ gridpoints - (int) Number of gridpoints when integrating KDE over
+ the domain. Used if estimator='kernel'
+ bandwidth - (float) Bandwidth for KDE (scalar multiple to covariance
+ matrix). Used if estimator='kernel'
+ estimator - (string) 'histogram' or 'kernel'
+ bins - (Dict of lists) Bin edges for NDHistogram. Used if estimator = 'histogram'
+ covar - (Numpy ndarray) Covariance matrix between dimensions of df.
+ Used if estimator = 'kernel'
+ Returns:
+ pdf - (Numpy ndarray) Probability of a sample being in a specific
+ bin (technically a probability mass)
+ """
+ DF = sanitise(df)
+
+ if estimator == 'histogram':
+ pdf = pdf_histogram(DF, bins)
+ else:
+ pdf = pdf_kde(DF, gridpoints, bandwidth, covar)
+ return pdf
+
+
+def pdf_kde(df, gridpoints=None, bandwidth=1, covar=None):
+ """
+ Function for non-parametric density estimation using Kernel Density Estimation
+ Args:
+ df - (DataFrame) Samples over which to estimate density
+ gridpoints - (int) Number of gridpoints when integrating KDE over
+ the domain. Used if estimator='kernel'
+ bandwidth - (float) Bandwidth for KDE (scalar multiple to covariance
+ matrix).
+ covar - (Numpy ndarray) Covariance matrix between dimensions of df.
+ If None, these are calculated from df during the
+ KDE analysis
+ Returns:
+ Z/Z.sum() - (Numpy ndarray) Probability of a sample being between
+ specific gridpoints (technically a probability mass)
+ """
+ # Create Meshgrid to capture data
+ if gridpoints is None:
+ gridpoints = 20
+
+ N = complex(gridpoints)
+
+ slices = [slice(dim.min(), dim.max(), N)
+ for dimname, dim in df.items()]
+ grids = np.mgrid[slices]
+
+ # Pass Meshgrid to Scipy Gaussian KDE to Estimate PDF
+ positions = np.vstack([X.ravel() for X in grids])
+ values = df.values.T
+ kernel = _kde_(values, bw_method=bandwidth, covar=covar)
+ Z = np.reshape(kernel(positions).T, grids[0].shape)
+
+ # Normalise
+ return Z/Z.sum()
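`pdf_kde` evaluates the kernel on a regular mesh and renormalises the density values into a probability mass, so downstream entropy sums behave like the histogram case. A minimal sketch using scipy's stock `gaussian_kde` (synthetic correlated Gaussian data assumed):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
samples = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=400)

# Regular mesh over the data range, gridpoints per dimension
gridpoints = 20
slices = [slice(dim.min(), dim.max(), complex(gridpoints)) for dim in samples.T]
grids = np.mgrid[slices]

# Evaluate the KDE at every grid point
positions = np.vstack([g.ravel() for g in grids])
kernel = stats.gaussian_kde(samples.T)
Z = np.reshape(kernel(positions).T, grids[0].shape)

# Density values renormalised into a probability mass over the grid cells
pmf = Z / Z.sum()
```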
+
+
+def pdf_histogram(df, bins):
+ """
+ Function for non-parametric density estimation using N-Dimensional Histograms
+ Args:
+ df - (DataFrame) Samples over which to estimate density
+ bins - (Dict of lists) Bin edges for NDHistogram.
+ Returns:
+ histogram.pdf - (Numpy ndarray) Probability of a sample being in a specific
+ bin (technically a probability mass)
+ """
+ histogram = NDHistogram(df=df, bins=bins)
+ return histogram.pdf
+
+
+def get_entropy(df, gridpoints=15, bandwidth=None, estimator='kernel', bins=None, covar=None):
+ """
+ Function for calculating entropy from a probability mass
+
+ Args:
+ df - (DataFrame) Samples over which to estimate density
+ gridpoints - (int) Number of gridpoints when integrating KDE over
+ the domain. Used if estimator='kernel'
+ bandwidth - (float) Bandwidth for KDE (scalar multiple to covariance
+ matrix). Used if estimator='kernel'
+ estimator - (string) 'histogram' or 'kernel'
+ bins - (Dict of lists) Bin edges for NDHistogram. Used if estimator
+ = 'histogram'
+ covar - (Numpy ndarray) Covariance matrix between dimensions of df.
+ Used if estimator = 'kernel'
+ Returns:
+ entropy - (float) Shannon entropy in bits
+ """
+ pdf = get_pdf(df, gridpoints, bandwidth, estimator, bins, covar)
+ # log base 2 returns H(X) in bits
+ return -np.sum(pdf * ma.log2(pdf).filled(0))
+
+
+def shuffle_series(DF, only=None):
+ """
+ Function to return time series shuffled rowwise along each desired column.
+ Each column is shuffled independently, removing the temporal relationship.
+ This is to calculate Z-score and Z*-score. See P. Boba et al (2015)
+ Calculated using: df.apply(np.random.permutation)
+ Arguments:
+ df - (DataFrame) Time series data
+ only - (list) Fieldnames to shuffle. If none, all columns shuffled
+ Returns:
+ df_shuffled - (DataFrame) Time series shuffled along desired columns
+ """
+ if only is not None:
+ shuffled_DF = DF.copy()
+ for col in only:
+ series = DF.loc[:, col].to_frame()
+ shuffled_DF[col] = series.apply(np.random.permutation)
+ else:
+ shuffled_DF = DF.apply(np.random.permutation)
+
+ return shuffled_DF
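A quick check of what `shuffle_series` achieves: permuting each column independently preserves the marginal distributions but destroys the temporal relationship between columns. Sketch on a synthetic pair where one column is a lagged copy of the other:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
x = np.cumsum(rng.normal(size=2000))             # a random walk
df = pd.DataFrame({'X': x, 'Y': np.roll(x, 1)})  # Y is X lagged by one step

np.random.seed(3)
shuffled = df.apply(np.random.permutation)       # same operation as shuffle_series

# Marginals unchanged: each shuffled column contains exactly the same values
assert sorted(shuffled['X']) == sorted(df['X'])

# Cross-correlation near 1 before shuffling, near 0 after
corr_before = df.corr().loc['X', 'Y']
corr_after = shuffled.corr().loc['X', 'Y']
```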
+
+
+def plot_pdf(df, estimator='kernel', gridpoints=None, bandwidth=None, covar=None, bins=None, show=False,
+ cmap='inferno', label_fontsize=7):
+ """
+ Wrapper function to plot the pdf of a pandas dataframe
+
+ Args:
+ df - (DataFrame) Samples over which to estimate density
+ estimator - (string) 'kernel' or 'histogram'
+ gridpoints - (int) Number of gridpoints when integrating KDE over
+ the domain. Used if estimator='kernel'
+ bandwidth - (float) Bandwidth for KDE (scalar multiple to covariance
+ matrix). Used if estimator='kernel'
+ covar - (Numpy ndarray) Covariance matrix between dimensions of df.
+ bins - (Dict of lists) Bin edges for NDHistogram. Used if estimator = 'histogram'
+ show - (Boolean) Whether to plot directly, or simply return axes for later use
+ cmap - (string) Colour map (see: https://matplotlib.org/examples/color/colormaps_reference.html)
+ label_fontsize - (float) Defines the fontsize for the axes labels
+ Returns:
+ ax - AxesSubplot object. Can be added to figures to allow multiple plots.
+ """
+
+ DF = sanitise(df)
+ if len(DF.columns) != 2:
+ print("DataFrame has " + str(len(DF.columns)) +
+ " dimensions. Only 2D or less can be plotted")
+ axes = None
+ else:
+ # Plot data in Histogram or Kernel form
+ if estimator == 'histogram':
+
+ if bins is None:
+ bins = {axis: np.linspace(DF[axis].min(),
+ DF[axis].max(),
+ 9) for axis in DF.columns.values}
+ fig, axes = plot_pdf_histogram(df, bins, cmap)
+ else:
+ fig, axes = plot_pdf_kernel(df, gridpoints, bandwidth, covar, cmap)
+
+ # Format plot
+ axes.set_xlabel(DF.columns.values[0], labelpad=20)
+ axes.set_ylabel(DF.columns.values[1], labelpad=20)
+ for label in axes.xaxis.get_majorticklabels():
+ label.set_fontsize(label_fontsize)
+ for label in axes.yaxis.get_majorticklabels():
+ label.set_fontsize(label_fontsize)
+ for label in axes.zaxis.get_majorticklabels():
+ label.set_fontsize(label_fontsize)
+ axes.view_init(10, 45)
+ if show:
+ plt.show()
+ plt.close(fig)
+
+ axes.remove()
+
+ return axes
+
+
+def plot_pdf_histogram(df, bins, cmap='inferno'):
+ """
+ Function to plot the pdf of a dataset, estimated via histogram.
+
+ Args:
+ df - (DataFrame) Samples over which to estimate density
+ bins - (Dict of lists) Bin edges for NDHistogram. Used if estimator = 'histogram'
+ Returns:
+ ax - AxesSubplot object, passed back to the plot_pdf() function
+ """
+ DF = sanitise(df) # in case function called directly
+
+ # Calculate PDF
+ PDF = get_pdf(df=DF, estimator='histogram', bins=bins)
+
+ # Get x-coords, y-coords for each bar
+ (x_edges, y_edges) = bins.values()
+ X, Y = np.meshgrid(x_edges[:-1], y_edges[:-1])
+ # Get dx, dy for each bar
+ dxs, dys = np.meshgrid(np.diff(x_edges), np.diff(y_edges))
+
+ # Colourmap
+ cmap = plt.get_cmap(cmap)
+ p_min, p_max = PDF.flatten().min(), PDF.flatten().max()
+ rgba = [cmap((p - p_min) / (p_max - p_min)) for p in PDF.flatten()]
+
+ # Create subplots
+ fig = plt.figure()
+ ax = fig.add_subplot(111, projection='3d')
+
+ ax.bar3d(x=X.flatten(), # x coordinates of each bar
+ y=Y.flatten(), # y coordinates of each bar
+ z=0, # z coordinates of each bar
+ dx=dxs.flatten(), # width of each bar
+ dy=dys.flatten(), # depth of each bar
+ dz=PDF.flatten(), # height of each bar
+ alpha=1, # transparency
+ color=rgba
+ )
+ ax.set_title("Histogram Probability Distribution", fontsize=10)
+
+ return fig, ax
+
+
+def plot_pdf_kernel(df, gridpoints=None, bandwidth=None, covar=None, cmap='inferno'):
+ """
+ Function to plot the pdf, calculated by KDE, of a dataset
+
+ Args:
+ df - (DataFrame) Samples over which to estimate density
+ gridpoints - (int) Number of gridpoints when integrating KDE over
+ the domain. Used if estimator='kernel'
+ bandwidth - (float) Bandwidth for KDE (scalar multiple to covariance
+ matrix). Used if estimator='kernel'
+ covar - (Numpy ndarray) Covariance matrix between dimensions of df.
+
+ Returns:
+ ax - AxesSubplot object, passed back to the plot_pdf() function
+ """
+ DF = sanitise(df)
+ # Estimate the PDF from the data
+ if gridpoints is None:
+ gridpoints = 20
+
+ pdf = get_pdf(DF, gridpoints=gridpoints, bandwidth=bandwidth)
+ N = complex(gridpoints)
+ slices = [slice(dim.min(), dim.max(), N)
+ for dimname, dim in DF.items()]
+ X, Y = np.mgrid[slices]
+
+ fig = plt.figure()
+ ax = fig.add_subplot(111, projection='3d')
+ ax.plot_surface(X, Y, pdf, cmap=cmap)
+
+ ax.set_title("KDE Probability Distribution", fontsize=10)
+
+ return fig, ax
+
+
+def sanitise(df):
+ """
+ Function to convert DataFrame-like objects into pandas DataFrames
+
+ Args:
+ df - Data in pd.Series or pd.DataFrame format
+ Returns:
+ df - Data as pandas DataFrame
+ """
+ # Ensure data is in DataFrame form
+ if isinstance(df, pd.DataFrame):
+ df = df
+ elif isinstance(df, pd.Series):
+ df = df.to_frame()
+ else:
+ raise ValueError(
+ 'Data passed as %s. Please ensure your data is stored as a Pandas DataFrame' % (str(type(df))))
+ return df
diff --git a/structured_data/2022_06_02_causality/src/transfer_entropy/transfer_entropy_wrapper.py b/structured_data/2022_06_02_causality/src/transfer_entropy/transfer_entropy_wrapper.py
new file mode 100644
index 0000000..17421d4
--- /dev/null
+++ b/structured_data/2022_06_02_causality/src/transfer_entropy/transfer_entropy_wrapper.py
@@ -0,0 +1,169 @@
+"""
+Functions for identifying causality.
+"""
+# Import standard library modules
+from typing import List, Tuple, Dict
+import warnings
+
+# Import third party modules
+from matplotlib import pyplot as plt
+import numpy as np
+import pandas as pd
+
+from statsmodels.tsa.stattools import grangercausalitytests
+
+from transfer_entropy.pycausality.src import TransferEntropy
+
+warnings.filterwarnings("ignore")
+
+# Function definitions
+def grangers_causation_matrix(df: pd.DataFrame, test: str='ssr_chi2test', max_lag: int=7, verbose=False) -> Tuple[pd.DataFrame, pd.DataFrame]:
+ """Check Granger Causality of all possible combinations of the Time series.
+ The rows are the response variable, columns are predictors. The values in the table
+ are the P-Values. P-Values lesser than the significance level (0.05), implies
+ the Null Hypothesis that the coefficients of the corresponding past values is
+ zero, that is, the X does not cause Y can be rejected.
+
+ Arguments:
+ - df (pd.DataFrame):
+ - test (str):
+ - max_lag (int):
+ - verbose: Whether or not to display intermediate results.
+
+ Results:
+ - Tuple[pd.DataFrame, pd.DataFrame]: Dataframes containing the minimum p_value (i.e., largest
+ significance) and corresponding lag for each of the columns of the argument dataframe.
+ """
+ df_gc = pd.DataFrame(np.zeros((1, len(df.columns[1:]))), columns=df.columns[1:], index=[df.columns[0]])
+ df_gc_lags = pd.DataFrame(np.zeros((1, len(df.columns[1:]))), columns=df.columns[1:], index=[df.columns[0]])
+
+ col_res = df.columns[0]
+ for col_orig in df.columns[1:]:
+ test_result = grangercausalitytests(df[[col_res, col_orig]], maxlag=max_lag, verbose=False)
+ p_values = [round(test_result[i+1][0][test][1],4) for i in range(max_lag)]
+ if verbose: print(f'Y = {col_res}, X = {col_orig}, P Values = {p_values}')
+ min_p_value = np.min(p_values)
+ min_lags = np.argmin(p_values)
+ df_gc.loc[col_res, col_orig] = min_p_value
+ df_gc_lags.loc[col_res, col_orig] = min_lags
+
+ df_gc.columns = [col + '_x' for col in df.columns[1:]]
+ df_gc.index = [df.columns[0]]
+
+ df_gc_lags.columns = [col + '_x' for col in df.columns[1:]]
+ df_gc_lags.index = [df.columns[0]]
+ return df_gc, df_gc_lags
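The function above defers to statsmodels' `grangercausalitytests`; the underlying idea is a nested-regression F-test. A self-contained sketch of the lag-1 case on synthetic data where X drives Y (illustrative only, not the statsmodels implementation):

```python
import numpy as np

def granger_f_stat(y, x):
    """F-statistic for adding x[t-1] to an AR(1) model of y (lag 1 only)."""
    y_t, y_lag, x_lag = y[1:], y[:-1], x[:-1]
    n = len(y_t)

    def rss(design):
        # Residual sum of squares of an OLS fit of y_t on `design`
        beta, *_ = np.linalg.lstsq(design, y_t, rcond=None)
        resid = y_t - design @ beta
        return resid @ resid

    ones = np.ones(n)
    rss_restricted = rss(np.column_stack([ones, y_lag]))        # y ~ y_lag
    rss_full = rss(np.column_stack([ones, y_lag, x_lag]))       # y ~ y_lag + x_lag
    # One restriction; n - 3 residual degrees of freedom in the full model
    return (rss_restricted - rss_full) / (rss_full / (n - 3))

rng = np.random.default_rng(4)
x = rng.normal(size=1000)
y = np.empty(1000)
y[0] = 0.0
for t in range(1, 1000):
    y[t] = 0.3 * y[t-1] + 0.8 * x[t-1] + rng.normal()

# x should Granger-cause y (large F), but not the reverse (small F)
f_xy = granger_f_stat(y, x)
f_yx = granger_f_stat(x, y)
```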
+
+
+def calculate_transfer_entropy(df: pd.DataFrame, lag: int, linear: bool=False, effective: bool=False, window_size: Dict={'MS': 6}, window_stride: Dict={'MS': 1}, n_shuffles=100, debug=False) -> List:
+ """Perform Seasonal-Trend decomposition using LOESS (STL) to remove trend
+ and seasonality and difference residuals as much as necessary to make
+ time-series stationary.
+
+ Arguments:
+ - df (pd.DataFrame): Dataframe for which transfer entropies must be
+ calculated.
+ - lag (int): Lag between the exogenous and endogenous series.
+ - linear (bool): Whether the required transfer entropies should be linear
+ (True) or non-linear (False).
+ - effective (bool): Whether or not to calculate the effective transfer
+ entropy. Can only be done for `n_shuffles>0`, but has proven to not
+ give the most reliable results given the size of the dataset.
+ - window_size (Dict): Dictionary indicating the size of a window, either in 'MS'
+ (Month Start) or 'D' (Days; to express weeks), e.g., {'MS': 6}.
+ - window_stride (Dict): Dictionary indicating the stride of a window, either in 'MS'
+ (Month Start) or 'D' (Days; to express weeks), e.g., {'MS': 1}.
+ - n_shuffles (int): Number of shuffling operations to do when calculating
+ the average transfer entropy. Only relevant if the results should be
+ either the effective entropy or if p-values should be included for
+ significance.
+ - debug (bool): Whether or not to print intermediate results (for
+ debugging purposed).
+
+ Result:
+ - List[List[str, pd.DataFrame]]: List containing nested lists (pairs) of
+ the column names and the resulting Pandas dataframe containing the
+ transfer entropy for each window in the respective column.
+ """
+
+ te_results = []
+
+ col_res = df.columns[0]
+ col_origs = df.columns[1:]
+ for col_orig in col_origs:
+ print(f'{col_orig} -> {col_res}')
+
+ # Initialise Object to Calculate Transfer Entropy
+ TE = TransferEntropy(DF=df,
+ endog=col_res,
+ exog=col_orig,
+ lag=lag,
+ window_size=window_size,
+ window_stride=window_stride
+ )
+
+ # Calculate TE using KDE
+ if linear:
+ TE.linear_TE(n_shuffles=n_shuffles)
+ else:
+ TE.nonlinear_TE(pdf_estimator='kernel', n_shuffles=n_shuffles)
+
+ # Standardize column naming
+ if linear:
+ TE.results = TE.results.rename(mapper=(lambda col: col.replace('linear_', '')), axis=1)
+
+ # Display TE_XY, TE_YX and significance values
+ if debug:
+ if n_shuffles and effective:
+ print('\t', f"TE_XY_Eff=({TE.results['TE_XY'].values[0] - TE.results['Ave_TE_XY'].values[0]}), p=({TE.results['p_value_XY'].values[0]})", '\n')
+ elif n_shuffles:
+ print('\t', f"TE_XY=({TE.results['TE_XY'].values[0]}), p=({TE.results['p_value_XY'].values[0]})", '\n')
+ else:
+ print('\t', f"TE_XY=({TE.results[['TE_XY']]})", '\n')
+
+ # Track results of current link
+ te_results.append([col_orig, TE.results])
+ return te_results
+
+def average_transfer_entropy(df: pd.DataFrame, linear: bool, effective: bool, tau_min: int=0, tau_max: int=4, n_shuffles: int=100, debug: bool=False) -> List:
+ """Wrapper function around `calculate_transfer_entropy` for calculating the
+ average (non-)linear transfer entropy.
+
+ Arguments:
+ - df (pd.DataFrame): Dataframe for which transfer entropies must be
+ calculated.
+ - linear (bool): Whether the required transfer entropies should be linear
+ (True) or non-linear (False).
+ - effective (bool): Whether or not to calculate the effective transfer
+ entropy. Can only be done for `n_shuffles>0`, but has proven to not
+ give the most reliable results given the size of the dataset.
+ - tau_min (int): Minimal lag to calculate transfer entropy for.
+ - tau_max (int): Maximal lag to calculate transfer entropy for.
+ - n_shuffles (int): Number of shuffling operations to do when calculating
+ the average transfer entropy. Only relevant if the results should be
+ either the effective entropy or if p-values should be included for
+ significance.
+ - debug (bool): Whether or not to print intermediate results (for
+ debugging purposed).
+
+ Result:
+ - List[List[str, pd.DataFrame]]: List containing nested lists (pairs) of
+ the column names and the resulting Pandas dataframe containing the
+ transfer entropy for each window in the respective column.
+ """
+ import time # local import, used only for coarse timing
+
+ te_results_arr = []
+ for lag in range(tau_min, tau_max+1):
+ print(f'\nlag({lag})')
+ t = time.time()
+
+ # Call over-arching Transfer Entropy function
+ te_results = calculate_transfer_entropy(df, lag=lag, linear=linear, window_size=None, window_stride=None, n_shuffles=n_shuffles, debug=debug)
+
+ # Construct dataframe from results
+ te_results_df = pd.concat([res for _, res in te_results])
+ te_results_df.index = [name for name, _ in te_results]
+
+ # Keep track of results
+ te_results_arr.append(te_results)
+ print("took", time.time() - t, "seconds")
+
+ return te_results_arr
\ No newline at end of file