diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json index 400ac0d..69b30c7 100644 --- a/dev/.documenter-siteinfo.json +++ b/dev/.documenter-siteinfo.json @@ -1 +1 @@ -{"documenter":{"julia_version":"1.8.5","generation_timestamp":"2024-03-26T00:01:00","documenter_version":"1.3.0"}} \ No newline at end of file +{"documenter":{"julia_version":"1.8.5","generation_timestamp":"2024-03-29T22:11:26","documenter_version":"1.3.0"}} \ No newline at end of file diff --git a/dev/Examples/Example1/index.html b/dev/Examples/Example1/index.html index 4017065..e1d86ec 100644 --- a/dev/Examples/Example1/index.html +++ b/dev/Examples/Example1/index.html @@ -1,5 +1,5 @@ -Ex.1: Sample Entropy · EntropyHub.jl

Example 1: Sample Entropy

Import a signal of normally distributed random numbers [mean = 0; SD = 1], and calculate the sample entropy for each embedding dimension (m) from 0 to 4.

X = ExampleData("gaussian");
+Ex.1: Sample Entropy · EntropyHub.jl

Example 1: Sample Entropy

Import a signal of normally distributed random numbers [mean = 0; SD = 1], and calculate the sample entropy for each embedding dimension (m) from 0 to 4.

X = ExampleData("gaussian");
 Samp, _ = SampEn(X, m = 4);
5-element Vector{Float64}:
  2.178923612371957
  2.175742327787873
@@ -8,4 +8,4 @@
  2.1755667174542923

Select the last value to get the sample entropy for m = 4.

Samp[end]
2.1755667174542923

Calculate the sample entropy for each embedding dimension (m) from 0 to 4 with a time delay (tau) of 2 samples.

Samp, Phi1, Phi2 = SampEn(X, m = 4, tau = 2)
([2.178923612371957, 2.183323250654987, 2.188041075511569, 2.189184333017654, 2.1440802180581136], [1.414258e6, 159224.0, 17843.0, 1998.0, 234.0], [1.24975e7, 1.413233e6, 159119.0, 17838.0, 1997.0])

Import a signal of uniformly distributed random numbers in the range [-1, 1] and calculate the sample entropy for an embedding dimension (m) of 5, a time delay of 2, and a threshold radius of 0.075. Return the conditional probability (Vcp) and the number of overlapping matching vector pairs of lengths m+1 (Ka) and m (Kb), respectively.

Samp, _, _, Vcp_Ka_Kb = SampEn(X, m = 5, tau = 2, r = 0.075, Vcp = true)
 Vcp, Ka, Kb = Vcp_Ka_Kb
Vcp = 0.00018629728228987074
 Ka = 92
-Kb = 3943
+Kb = 3943
diff --git a/dev/Examples/Example10/index.html b/dev/Examples/Example10/index.html index fed6cb3..66c9b14 100644 --- a/dev/Examples/Example10/index.html +++ b/dev/Examples/Example10/index.html @@ -1,5 +1,5 @@ -Ex.10: Bidimensional Fuzzy Entropy · EntropyHub.jl

Example 10: Bidimensional Fuzzy Entropy

Import an image of a Mandelbrot fractal as a matrix.

X = ExampleData("mandelbrot_Mat");
+Ex.10: Bidimensional Fuzzy Entropy · EntropyHub.jl

Example 10: Bidimensional Fuzzy Entropy

Import an image of a Mandelbrot fractal as a matrix.

X = ExampleData("mandelbrot_Mat");
 
 using Plots
-heatmap(X, background_color="black")

AlmondBread

Calculate the bidimensional fuzzy entropy in trits (logarithm base 3) with a template matrix of size [8 x 5], and a time delay (tau) of 2 using a 'constgaussian' fuzzy membership function (r=24).

FE2D = FuzzEn2D(X, m = (8, 5), tau = 2, Fx = "constgaussian", r = 24, Logx = 3)
1.4885540777860427
+heatmap(X, background_color="black")

AlmondBread

Calculate the bidimensional fuzzy entropy in trits (logarithm base 3) with a template matrix of size [8 x 5], and a time delay (tau) of 2 using a 'constgaussian' fuzzy membership function (r=24).

FE2D = FuzzEn2D(X, m = (8, 5), tau = 2, Fx = "constgaussian", r = 24, Logx = 3)
1.4885540777860427
diff --git a/dev/Examples/Example2/index.html b/dev/Examples/Example2/index.html index 43d5d16..c50ba87 100644 --- a/dev/Examples/Example2/index.html +++ b/dev/Examples/Example2/index.html @@ -1,9 +1,9 @@ -Ex.2: Permutation Entropy · EntropyHub.jl

Example 2: (Fine-grained) Permutation Entropy

Import the x, y, and z components of the Lorenz system of equations.

Data = ExampleData("lorenz");
+Ex.2: Permutation Entropy · EntropyHub.jl

Example 2: (Fine-grained) Permutation Entropy

Import the x, y, and z components of the Lorenz system of equations.

Data = ExampleData("lorenz");
 
 using Plots
 scatter(Data[:,1], Data[:,2], Data[:,3],
 markercolor = "green", markerstrokecolor = "black",
 markersize = 3, background_color = "black", grid = false)

Lorenz

Calculate the fine-grained permutation entropy of the z component in dits (logarithm base 10) with an embedding dimension of 3, a time delay of 2, and an alpha parameter of 1.234. Return Pnorm normalised w.r.t. the number of all possible permutations (m!) and the conditional permutation entropy (cPE) estimate.

Z = Data[:,3];
 Perm, Pnorm, cPE = PermEn(Z, m = 3, tau = 2, Typex = "finegrain",
-            tpx = 1.234, Logx = 10, Norm = false)
([-0.0, 0.8686539340402203, 0.946782979031713], [NaN, 0.8686539340402203, 0.4733914895158565], [0.8686539340402203, 0.07812904499149276])
+ tpx = 1.234, Logx = 10, Norm = false)
([-0.0, 0.8686539340402203, 0.946782979031713], [NaN, 0.8686539340402203, 0.4733914895158565], [0.8686539340402203, 0.07812904499149276])
diff --git a/dev/Examples/Example3/index.html b/dev/Examples/Example3/index.html index 55cbcfd..edbdea1 100644 --- a/dev/Examples/Example3/index.html +++ b/dev/Examples/Example3/index.html @@ -1,9 +1,9 @@ -Ex.3: Phase Entropy · EntropyHub.jl

Example 3: Phase Entropy w/ Second Order Difference Plot

Import the x and y components of the Henon system of equations.

Data = ExampleData("henon");
+Ex.3: Phase Entropy · EntropyHub.jl

Example 3: Phase Entropy w/ Second Order Difference Plot

Import the x and y components of the Henon system of equations.

Data = ExampleData("henon");
 
 using Plots
 scatter(Data[:,1], Data[:,2],
 markercolor = "green", markerstrokecolor = "black",
 markersize = 3, background_color = "black",grid = false)

Henon

Calculate the phase entropy of the y-component in bits (logarithm base 2) without normalization using 7 angular partitions and return the second-order difference plot.

Y = Data[:,2];
 Phas = PhasEn(Y, K = 7, Norm = false, Logx = 2, Plotx = true)
2.0192821496913216

Phas1

Calculate the phase entropy of the x-component using 11 angular partitions, a time delay of 2, and return the second-order difference plot.

X = Data[:,1];
-Phas = PhasEn(X, K = 11, tau = 2, Plotx = true)
0.8395391613164361

Phas2

+Phas = PhasEn(X, K = 11, tau = 2, Plotx = true)
0.8395391613164361

Phas2

diff --git a/dev/Examples/Example4/index.html b/dev/Examples/Example4/index.html index 31e75c0..3dfc489 100644 --- a/dev/Examples/Example4/index.html +++ b/dev/Examples/Example4/index.html @@ -1,5 +1,5 @@ -Ex.4: Cross-Distribution Entropy · EntropyHub.jl

Example 4: Cross-Distribution Entropy w/ Different Binning Methods

Import a signal of pseudorandom integers in the range [1, 8] and calculate the cross-distribution entropy with an embedding dimension of 5, a time delay (tau) of 3, and the 'Sturges' bin selection method.

X = ExampleData("randintegers2");
+Ex.4: Cross-Distribution Entropy · EntropyHub.jl

Example 4: Cross-Distribution Entropy w/ Different Binning Methods

Import a signal of pseudorandom integers in the range [1, 8] and calculate the cross-distribution entropy with an embedding dimension of 5, a time delay (tau) of 3, and the 'Sturges' bin selection method.

X = ExampleData("randintegers2");
 XDist, _ = XDistEn(X[:,1], X[:,2], m = 5, tau = 3);
Note: 17/25 bins were empty
 XDist = 0.5248413652396312

Use Rice's method to determine the number of histogram bins and return the probability of each bin (Ppi).

XDist, Ppi = XDistEn(X[:,1], X[:,2], m = 5, tau = 3, Bins = "rice")
Note: 407/415 bins were empty
XDist = 0.28024570808915084
-Ppi = [3.5953721176540164e-5, 0.004693584691286341, 0.03679902564295558, 0.10958694214609442, 0.19781322971493293, 0.2558194625893131, 0.24212389495509928, 0.15312790653914185]
+Ppi = [3.5953721176540164e-5, 0.004693584691286341, 0.03679902564295558, 0.10958694214609442, 0.19781322971493293, 0.2558194625893131, 0.24212389495509928, 0.15312790653914185]
diff --git a/dev/Examples/Example5/index.html b/dev/Examples/Example5/index.html index 4f40b92..d19bae2 100644 --- a/dev/Examples/Example5/index.html +++ b/dev/Examples/Example5/index.html @@ -1,2 +1,2 @@ -Ex.5: Multiscale Entropy Object · EntropyHub.jl

Example 5: Multiscale Entropy Object - MSobject()

Note:

The base and cross- entropy functions used in the multiscale entropy calculation are declared by passing EntropyHub functions to MSobject(), not string names.

Create a multiscale entropy object (Mobj) for multiscale fuzzy entropy, calculated with an embedding dimension of 5, a time delay (tau) of 2, using a sigmoidal fuzzy function with the r scaling parameters (3, 1.2).

Mobj = MSobject(FuzzEn, m = 5, tau = 2, Fx = "sigmoid", r = (3, 1.2))
(Func = EntropyHub._FuzzEn.FuzzEn, m = 5, tau = 2, Fx = "sigmoid", r = (3, 1.2))

Create a multiscale entropy object (Mobj) for multiscale corrected-cross-conditional entropy, calculated with an embedding dimension of 6 and using an 11-symbol data transform.

Mobj = MSobject(XCondEn, m = 6, c = 11)
(Func = EntropyHub._XCondEn.XCondEn, m = 6, c = 11)
+Ex.5: Multiscale Entropy Object · EntropyHub.jl

Example 5: Multiscale Entropy Object - MSobject()

Note:

The base and cross- entropy functions used in the multiscale entropy calculation are declared by passing EntropyHub functions to MSobject(), not string names.

Create a multiscale entropy object (Mobj) for multiscale fuzzy entropy, calculated with an embedding dimension of 5, a time delay (tau) of 2, using a sigmoidal fuzzy function with the r scaling parameters (3, 1.2).

Mobj = MSobject(FuzzEn, m = 5, tau = 2, Fx = "sigmoid", r = (3, 1.2))
(Func = EntropyHub._FuzzEn.FuzzEn, m = 5, tau = 2, Fx = "sigmoid", r = (3, 1.2))

Create a multiscale entropy object (Mobj) for multiscale corrected-cross-conditional entropy, calculated with an embedding dimension of 6 and using an 11-symbol data transform.

Mobj = MSobject(XCondEn, m = 6, c = 11)
(Func = EntropyHub._XCondEn.XCondEn, m = 6, c = 11)
diff --git a/dev/Examples/Example6/index.html b/dev/Examples/Example6/index.html index eddffde..93c7a99 100644 --- a/dev/Examples/Example6/index.html +++ b/dev/Examples/Example6/index.html @@ -1,5 +1,5 @@ -Ex.6: Multiscale [Increment] Entropy · EntropyHub.jl

Example 6: Multiscale Increment Entropy

Import a signal of uniformly distributed pseudorandom integers in the range [1 8] and create a multiscale entropy object with the following parameters: EnType = IncrEn(), embedding dimension = 3, a quantifying resolution = 6, normalization = true.

X = ExampleData("randintegers");
+Ex.6: Multiscale [Increment] Entropy · EntropyHub.jl

Example 6: Multiscale Increment Entropy

Import a signal of uniformly distributed pseudorandom integers in the range [1 8] and create a multiscale entropy object with the following parameters: EnType = IncrEn(), embedding dimension = 3, a quantifying resolution = 6, normalization = true.

X = ExampleData("randintegers");
 Mobj = MSobject(IncrEn, m = 3, R = 6, Norm = true)
(Func = EntropyHub._IncrEn.IncrEn, m = 3, R = 6, Norm = true)

Calculate the multiscale increment entropy over 5 temporal scales using the modified graining procedure where:

$y_j^{(\tau)} = \frac{1}{\tau} \sum_{i=(j-1)\tau+1}^{j\tau} x_i, \quad 1 \le j \le \frac{N}{\tau}$

MSx, _ = MSEn(X, Mobj, Scales = 5, Methodx = "modified");
5-element Vector{Float64}:
  4.271928856964401
  4.305911441727119
@@ -10,4 +10,4 @@
   3.83143162219877
   4.24634531832518
   4.271178349238616
-  4.192115013720915
+ 4.192115013720915
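The graining average in the formula above can be sketched in a few lines of plain Julia (an illustrative reduction of the formula only, not the EntropyHub internals; `coarsegrain` is a hypothetical helper name):

```julia
# Average each non-overlapping block of tau samples, per the formula above.
# Illustrative sketch; EntropyHub's graining procedures support more options.
coarsegrain(x::AbstractVector, tau::Int) =
    [sum(x[(j - 1) * tau + 1 : j * tau]) / tau for j in 1:length(x) ÷ tau]

coarsegrain(collect(1.0:6.0), 2)   # [1.5, 3.5, 5.5]
```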
diff --git a/dev/Examples/Example7/index.html b/dev/Examples/Example7/index.html index 0d9ca76..835b47c 100644 --- a/dev/Examples/Example7/index.html +++ b/dev/Examples/Example7/index.html @@ -1,3 +1,3 @@ -Ex.7: Refined Multiscale [Sample] Entropy · EntropyHub.jl

Example 7: Refined Multiscale Sample Entropy

Import a signal of uniformly distributed pseudorandom integers in the range [1, 8] and create a multiscale entropy object with the following parameters: EnType = SampEn(), embedding dimension = 4, radius threshold = 1.25

X = ExampleData("randintegers");
-Mobj = MSobject(SampEn, m = 4, r = 1.25)
(Func = EntropyHub._SampEn.SampEn, m = 4, r = 1.25)

Calculate the refined multiscale sample entropy and the complexity index (Ci) over 5 temporal scales, using a 3rd-order Butterworth filter with a normalised corner frequency of 0.6 at each temporal scale (τ), where the radius threshold value (r) specified by Mobj is scaled by the median absolute deviation of the filtered signal at each scale.

MSx, Ci = rMSEn(X, Mobj, Scales = 5, F_Order = 3, F_Num = 0.6, RadNew = 4)
([0.5279653970442648, 0.573386455925927, 0.5939360094866717, 0.5907829626330106, 0.5564473543781709], 2.842518179468045)
+Ex.7: Refined Multiscale [Sample] Entropy · EntropyHub.jl

Example 7: Refined Multiscale Sample Entropy

Import a signal of uniformly distributed pseudorandom integers in the range [1, 8] and create a multiscale entropy object with the following parameters: EnType = SampEn(), embedding dimension = 4, radius threshold = 1.25

X = ExampleData("randintegers");
+Mobj = MSobject(SampEn, m = 4, r = 1.25)
(Func = EntropyHub._SampEn.SampEn, m = 4, r = 1.25)

Calculate the refined multiscale sample entropy and the complexity index (Ci) over 5 temporal scales, using a 3rd-order Butterworth filter with a normalised corner frequency of 0.6 at each temporal scale (τ), where the radius threshold value (r) specified by Mobj is scaled by the median absolute deviation of the filtered signal at each scale.

MSx, Ci = rMSEn(X, Mobj, Scales = 5, F_Order = 3, F_Num = 0.6, RadNew = 4)
([0.5279653970442648, 0.573386455925927, 0.5939360094866717, 0.5907829626330106, 0.5564473543781709], 2.842518179468045)
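The RadNew = 4 rescaling described above can be sketched as a one-line helper (assuming the plain median-absolute-deviation definition; `scaled_radius` is a hypothetical name, not part of rMSEn):

```julia
using Statistics

# Radius used at a given scale: the Mobj threshold times the median
# absolute deviation of the filtered, coarse-grained signal (sketch only).
scaled_radius(r, x) = r * median(abs.(x .- median(x)))

scaled_radius(1.25, collect(1.0:5.0))   # 1.25 (the MAD of 1:5 is 1.0)
```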
diff --git a/dev/Examples/Example8/index.html b/dev/Examples/Example8/index.html index a2bf035..2619373 100644 --- a/dev/Examples/Example8/index.html +++ b/dev/Examples/Example8/index.html @@ -1,7 +1,7 @@ -Ex.8: Composite Multiscale Cross-Approximate Entropy · EntropyHub.jl

Example 8: Composite Multiscale Cross-Approximate Entropy

Import two signals of uniformly distributed pseudorandom integers in the range [1 8] and create a multiscale entropy object with the following parameters: EnType = XApEn(), embedding dimension = 2, time delay = 2, radius distance threshold = 0.5

X = ExampleData("randintegers2");
+Ex.8: Composite Multiscale Cross-Approximate Entropy · EntropyHub.jl

Example 8: Composite Multiscale Cross-Approximate Entropy

Import two signals of uniformly distributed pseudorandom integers in the range [1 8] and create a multiscale entropy object with the following parameters: EnType = XApEn(), embedding dimension = 2, time delay = 2, radius distance threshold = 0.5

X = ExampleData("randintegers2");
 Mobj = MSobject(XApEn, m = 2, tau = 2, r = 0.5)
(Func = EntropyHub._XApEn.XApEn, m = 2, tau = 2, r = 0.5)

Calculate the composite multiscale cross-approximate entropy over 3 temporal scales, where the radius distance threshold value (r) specified by Mobj is scaled by the variance of the signal at each scale.

X = ExampleData("randintegers2"); # hide
 MSx, _ = cXMSEn(X[:,1], X[:,2], Mobj, Scales = 3, RadNew = 1)
3-element Vector{Float64}:
  1.0893229452569062
  1.4745638145624824
- 1.293182408488266
+ 1.293182408488266
diff --git a/dev/Examples/Example9/index.html b/dev/Examples/Example9/index.html index fbe865e..5c6c8c5 100644 --- a/dev/Examples/Example9/index.html +++ b/dev/Examples/Example9/index.html @@ -1,7 +1,7 @@ -Ex.9: Hierarchical Multiscale corrected Cross-Conditional Entropy · EntropyHub.jl

Example 9: Hierarchical Multiscale corrected Cross-Conditional Entropy

Import the x and y components of the Henon system of equations and create a multiscale entropy object with the following parameters: EnType = XCondEn(), embedding dimension = 2, time delay = 2, number of symbols = 12, logarithm base = 2, normalization = true

Data = ExampleData("henon");
+Ex.9: Hierarchical Multiscale corrected Cross-Conditional Entropy · EntropyHub.jl

Example 9: Hierarchical Multiscale corrected Cross-Conditional Entropy

Import the x and y components of the Henon system of equations and create a multiscale entropy object with the following parameters: EnType = XCondEn(), embedding dimension = 2, time delay = 2, number of symbols = 12, logarithm base = 2, normalization = true

Data = ExampleData("henon");
 Mobj = MSobject(XCondEn, m = 2, tau = 2, c = 12,  Logx = 2, Norm = true)
 
 using Plots
 scatter(Data[:,1], Data[:,2], markercolor = "green", markerstrokecolor = "black",
-markersize = 3, background_color = "black", grid = false)

Henon

Calculate the hierarchical multiscale corrected cross-conditional entropy over 4 temporal scales and return the average cross-entropy at each scale (Sn), the complexity index (Ci), and a plot of the multiscale entropy curve and the hierarchical tree with the cross-entropy value at each node.

MSx, Sn, Ci = hXMSEn(Data[:,1], Data[:,2], Mobj, Scales = 4, Plotx = true)
([0.5159119469801318, 0.6245115584569841, 0.5634170000748405, 0.7022124034937283, 0.6532640538485219, 0.5852823820201765, 0.7956453173364485, 0.8446734972394015, 0.7604554984465494, 0.8415218012703684, 0.8115326608866869, 0.5128494134582905, 0.6861931413242152, 0.8678500562727558, 0.8287299287906533], [0.5159119469801318, 0.5939642792659123, 0.6841010391747188, 0.7692257497111151], 2.5632030151318776)

hXMSEn

+markersize = 3, background_color = "black", grid = false)

Henon

Calculate the hierarchical multiscale corrected cross-conditional entropy over 4 temporal scales and return the average cross-entropy at each scale (Sn), the complexity index (Ci), and a plot of the multiscale entropy curve and the hierarchical tree with the cross-entropy value at each node.

MSx, Sn, Ci = hXMSEn(Data[:,1], Data[:,2], Mobj, Scales = 4, Plotx = true)
([0.5159119469801318, 0.6245115584569841, 0.5634170000748405, 0.7022124034937283, 0.6532640538485219, 0.5852823820201765, 0.7956453173364485, 0.8446734972394015, 0.7604554984465494, 0.8415218012703684, 0.8115326608866869, 0.5128494134582905, 0.6861931413242152, 0.8678500562727558, 0.8287299287906533], [0.5159119469801318, 0.5939642792659123, 0.6841010391747188, 0.7692257497111151], 2.5632030151318776)

hXMSEn

diff --git a/dev/Examples/Examples/index.html b/dev/Examples/Examples/index.html index ab49b9c..e01e863 100644 --- a/dev/Examples/Examples/index.html +++ b/dev/Examples/Examples/index.html @@ -1,5 +1,5 @@ -Notes on Examples · EntropyHub.jl

Examples:

The following sections provide some basic examples of EntropyHub functions. These examples are merely a snippet of the full range of EntropyHub functionality.

In the following examples, signals / data are imported into Julia using the ExampleData() function. To use this function as shown in the examples below, an internet connection is required.

EntropyHub._ExampleData.ExampleDataFunction

Data = ExampleData(SigName::String)

Imports sample data time series with specific properties that are commonly used as benchmarks for assessing the performance of various entropy methods. The datasets returned by ExampleData() are used in the examples provided in documentation on www.EntropyHub.xyz and elsewhere. ***Note*** ExampleData() requires an internet connection to download and import the required datasets!

Data is the sample dataset imported, corresponding to the string input SigName, which can be one of the following strings:

Arguments:

SigName -

        `uniform`          - uniformly distributed random number sequence in range [0 1], N = 5000
+Notes on Examples · EntropyHub.jl

Examples:

The following sections provide some basic examples of EntropyHub functions. These examples are merely a snippet of the full range of EntropyHub functionality.

In the following examples, signals / data are imported into Julia using the ExampleData() function. To use this function as shown in the examples below, an internet connection is required.

EntropyHub._ExampleData.ExampleDataFunction

Data = ExampleData(SigName::String)

Imports sample data time series with specific properties that are commonly used as benchmarks for assessing the performance of various entropy methods. The datasets returned by ExampleData() are used in the examples provided in documentation on www.EntropyHub.xyz and elsewhere. ***Note*** ExampleData() requires an internet connection to download and import the required datasets!

Data is the sample dataset imported, corresponding to the string input SigName, which can be one of the following strings:

Arguments:

SigName -

        `uniform`          - uniformly distributed random number sequence in range [0 1], N = 5000
         `randintegers`     - randomly distributed integer sequence in range [1 8], N = 4096
         `gaussian`         - normally distributed number sequence [mean: 0, SD: 1], N = 5000
         `henon`            - X and Y components of the Henon attractor [alpha: 1.4, beta: .3, Xo = 0, Yo = 0], N = 4500
@@ -14,4 +14,4 @@
         `mandelbrot_Mat`   - matrix representing a Mandelbrot fractal image with values in range [0 255], N = 92 x 115
         `entropyhub_Mat`   - matrix representing the EntropyHub logo with values in range [0 255], N = 127 x 95
                      
-For further info on these graining procedures see the `EntropyHub guide <https://github.com/MattWillFlood/EntropyHub/blob/main/EntropyHub%20Guide.pdf>`_.
source
IMPORTANT TO NOTE

Parameters of the base or cross- entropy methods are passed to the multiscale and multiscale cross- functions via the multiscale entropy object created with MSobject(). Base and cross- entropy methods are declared with MSobject() using any base or cross- entropy function. See the MSobject example in the following sections for more info.

Hierarchical Multiscale Entropy (+ Multiscale Cross-Entropy)

In the hierarchical multiscale entropy (hMSEn) and hierarchical multiscale cross-entropy (hXMSEn) functions, the length of the time series signal(s) is halved at each scale. Thus, hMSEn and hXMSEn use only the first 2^N data points, where 2^N <= the length of the original time series signal. For example, for a signal of 5000 points only the first 4096 are used; for a signal of 1500 points, only the first 1024.

BIDIMENSIONAL ENTROPIES

Each bidimensional entropy function (SampEn2D, FuzzEn2D, DistEn2D) has an important keyword argument - Lock. Bidimensional entropy functions are "locked" by default (Lock == true) to only permit matrices with a maximum size of 128 x 128.

+For further info on these graining procedures see the `EntropyHub guide <https://github.com/MattWillFlood/EntropyHub/blob/main/EntropyHub%20Guide.pdf>`_.
source
IMPORTANT TO NOTE

Parameters of the base or cross- entropy methods are passed to the multiscale and multiscale cross- functions via the multiscale entropy object created with MSobject(). Base and cross- entropy methods are declared with MSobject() using any base or cross- entropy function. See the MSobject example in the following sections for more info.

Hierarchical Multiscale Entropy (+ Multiscale Cross-Entropy)

In the hierarchical multiscale entropy (hMSEn) and hierarchical multiscale cross-entropy (hXMSEn) functions, the length of the time series signal(s) is halved at each scale. Thus, hMSEn and hXMSEn use only the first 2^N data points, where 2^N <= the length of the original time series signal. For example, for a signal of 5000 points only the first 4096 are used; for a signal of 1500 points, only the first 1024.
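The truncation rule can be computed directly with Base Julia's prevpow (`usable_length` is an illustrative helper name, not an EntropyHub function):

```julia
# Largest power of two not exceeding the signal length; hMSEn / hXMSEn
# use only this many points from the start of the signal.
usable_length(N::Int) = prevpow(2, N)

usable_length(5000)   # 4096
usable_length(1500)   # 1024
```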

BIDIMENSIONAL ENTROPIES

Each bidimensional entropy function (SampEn2D, FuzzEn2D, DistEn2D) has an important keyword argument - Lock. Bidimensional entropy functions are "locked" by default (Lock == true) to only permit matrices with a maximum size of 128 x 128.
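A minimal sketch of the guard that Lock implies (illustrative only; `check_lock` is a hypothetical helper, not the EntropyHub source):

```julia
# Refuse matrices larger than 128 x 128 unless explicitly unlocked,
# mirroring the default Lock == true behaviour described above.
function check_lock(X::AbstractMatrix; Lock::Bool = true)
    if Lock && maximum(size(X)) > 128
        error("Matrix exceeds 128 x 128. Pass Lock = false to process anyway.")
    end
    return true
end

check_lock(zeros(64, 64))                  # true
check_lock(zeros(256, 256), Lock = false)  # true
```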

diff --git a/dev/Guide/Base_Entropies/index.html b/dev/Guide/Base_Entropies/index.html index de2e4d5..a4ea687 100644 --- a/dev/Guide/Base_Entropies/index.html +++ b/dev/Guide/Base_Entropies/index.html @@ -1,5 +1,5 @@ -Base Entropies · EntropyHub.jl

Base Entropies

Functions for estimating the entropy of a single univariate time series.

The following functions also form the base entropy method used by Multiscale functions.

These functions are directly available when EntropyHub is imported:

julia> using EntropyHub
+Base Entropies · EntropyHub.jl

Base Entropies

Functions for estimating the entropy of a single univariate time series.

The following functions also form the base entropy method used by Multiscale functions.

These functions are directly available when EntropyHub is imported:

julia> using EntropyHub
 
 julia> names(EntropyHub)
 :ApEn
  :AttnEn
@@ -10,7 +10,7 @@
  :rXMSEn
EntropyHub._ApEn.ApEnFunction
Ap, Phi = ApEn(Sig)

Returns the approximate entropy estimates Ap and the log-average number of matched vectors Phi for m = [0,1,2], estimated from the data sequence Sig using the default parameters: embedding dimension = 2, time delay = 1, radius distance threshold = 0.2*SD(Sig), logarithm = natural

Ap, Phi = ApEn(Sig::AbstractArray{T,1} where T<:Real; m::Int=2, tau::Int=1, r::Real=0.2*std(Sig,corrected=false), Logx::Real=exp(1))

Returns the approximate entropy estimates Ap of the data sequence Sig for dimensions = [0,1,...,m] using the specified keyword arguments:

Arguments:

m - Embedding Dimension, a positive integer

tau - Time Delay, a positive integer

r - Radius Distance Threshold, a positive scalar

Logx - Logarithm base, a positive scalar

See also XApEn, SampEn, MSEn, FuzzEn, PermEn, CondEn, DispEn

References:

[1] Steven M. Pincus, 
     "Approximate entropy as a measure of system complexity." 
     Proceedings of the National Academy of Sciences 
-    88.6 (1991): 2297-2301.
source
EntropyHub._SampEn.SampEnFunction
Samp, A, B = SampEn(Sig)

Returns the sample entropy estimates Samp and the number of matched state vectors (m:B, m+1:A) for m = [0,1,2] estimated from the data sequence Sig using the default parameters: embedding dimension = 2, time delay = 1, radius threshold = 0.2*SD(Sig), logarithm = natural

Samp, A, B, (Vcp, Ka, Kb) = SampEn(Sig, ..., Vcp = true)

If Vcp == true, an additional tuple (Vcp, Ka, Kb) is returned with the sample entropy estimates (Samp) and the number of matched state vectors (m: B, m+1: A). (Vcp, Ka, Kb) contains the variance of the conditional probabilities (Vcp), i.e. CP = A/B, and the number of overlapping matching vector pairs of lengths m+1 (Ka) and m (Kb), respectively. Note: Vcp is undefined for the zeroth embedding dimension (m = 0) and, due to the computational demand, will take substantially more time to return function outputs. See Appendix B in [2] for more info.

Samp, A, B = SampEn(Sig::AbstractArray{T,1} where T<:Real; m::Int=2, tau::Int=1, r::Real=0.2*std(Sig,corrected=false), Logx::Real=exp(1), Vcp::Bool=false)

Returns the sample entropy estimates Samp for dimensions = [0,1,...,m] estimated from the data sequence Sig using the specified keyword arguments:

Arguments:

m - Embedding Dimension, a positive integer

tau - Time Delay, a positive integer

r - Radius Distance Threshold, a positive scalar

Logx - Logarithm base, a positive scalar

See also ApEn, FuzzEn, PermEn, CondEn, XSampEn, SampEn2D, MSEn

References:

[1] Joshua S Richman and J. Randall Moorman. 
+    88.6 (1991): 2297-2301.
source
EntropyHub._SampEn.SampEnFunction
Samp, A, B = SampEn(Sig)

Returns the sample entropy estimates Samp and the number of matched state vectors (m:B, m+1:A) for m = [0,1,2] estimated from the data sequence Sig using the default parameters: embedding dimension = 2, time delay = 1, radius threshold = 0.2*SD(Sig), logarithm = natural

Samp, A, B, (Vcp, Ka, Kb) = SampEn(Sig, ..., Vcp = true)

If Vcp == true, an additional tuple (Vcp, Ka, Kb) is returned with the sample entropy estimates (Samp) and the number of matched state vectors (m: B, m+1: A). (Vcp, Ka, Kb) contains the variance of the conditional probabilities (Vcp), i.e. CP = A/B, and the number of overlapping matching vector pairs of lengths m+1 (Ka) and m (Kb), respectively. Note: Vcp is undefined for the zeroth embedding dimension (m = 0) and, due to the computational demand, will take substantially more time to return function outputs. See Appendix B in [2] for more info.
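Given the definition above (CP = A/B), the sample entropy estimate itself is the negative logarithm of that conditional probability. A minimal sketch (hypothetical helper; logx is the logarithm base):

```julia
# Sample entropy from the matched-vector counts: A matches of length m+1,
# B matches of length m (sketch of the defining relation only).
sampen_from_counts(A, B; logx = exp(1)) = -log(logx, A / B)

sampen_from_counts(5, 10)   # ≈ 0.6931 (= log 2)
```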

Samp, A, B = SampEn(Sig::AbstractArray{T,1} where T<:Real; m::Int=2, tau::Int=1, r::Real=0.2*std(Sig,corrected=false), Logx::Real=exp(1), Vcp::Bool=false)

Returns the sample entropy estimates Samp for dimensions = [0,1,...,m] estimated from the data sequence Sig using the specified keyword arguments:

Arguments:

m - Embedding Dimension, a positive integer

tau - Time Delay, a positive integer

r - Radius Distance Threshold, a positive scalar

Logx - Logarithm base, a positive scalar

Vcp - Option to return the variance of the conditional probabilities and the number of overlapping matching vector pairs of lengths m+1 and m

See also ApEn, FuzzEn, PermEn, CondEn, XSampEn, SampEn2D, MSEn

References:

[1] Joshua S Richman and J. Randall Moorman. 
     "Physiological time-series analysis using approximate entropy
     and sample entropy." 
     American Journal of Physiology-Heart and Circulatory Physiology (2000).
@@ -18,7 +18,7 @@
 [2] Douglas E Lake, Joshua S Richman, M.P. Griffin, J. Randall Moorman
     "Sample entropy analysis of neonatal heart rate variability."
     American Journal of Physiology-Regulatory, Integrative and Comparative Physiology
-    283, no. 3 (2002): R789-R797.
source
EntropyHub._FuzzEn.FuzzEnFunction
Fuzz, Ps1, Ps2 = FuzzEn(Sig)

Returns the fuzzy entropy estimates Fuzz and the average fuzzy distances (m:Ps1, m+1:Ps2) for m = [1,2] estimated from the data sequence Sig using the default parameters: embedding dimension = 2, time delay = 1, fuzzy function (Fx) = "default", fuzzy function parameters (r) = [0.2, 2], logarithm = natural

Fuzz, Ps1, Ps2 = FuzzEn(Sig::AbstractArray{T,1} where T<:Real; m::Int=2, tau::Int=1, r::Union{Real,Tuple{Real,Real}}=(.2,2), Fx::String="default", Logx::Real=exp(1))

Returns the fuzzy entropy estimates Fuzz for dimensions = [1,...,m] estimated for the data sequence Sig using the specified keyword arguments:

Arguments:

m - Embedding Dimension, a positive integer [default: 2]

tau - Time Delay, a positive integer [default: 1]

Fx - Fuzzy function name, one of the following: {"sigmoid", "modsampen", "default", "gudermannian", "bell", "triangular", "trapezoidal1", "trapezoidal2", "z_shaped", "gaussian", "constgaussian"}

r - Fuzzy function parameters, a 1 element scalar or a 2 element tuple of positive values. The r parameters for each fuzzy function are defined as follows: [default: [.2 2]]

        default:        r(1) = divisor of the exponential argument
+    283, no. 3 (2002): R789-R797.
source
EntropyHub._FuzzEn.FuzzEnFunction
Fuzz, Ps1, Ps2 = FuzzEn(Sig)

Returns the fuzzy entropy estimates Fuzz and the average fuzzy distances (m:Ps1, m+1:Ps2) for m = [1,2] estimated from the data sequence Sig using the default parameters: embedding dimension = 2, time delay = 1, fuzzy function (Fx) = "default", fuzzy function parameters (r) = [0.2, 2], logarithm = natural

Fuzz, Ps1, Ps2 = FuzzEn(Sig::AbstractArray{T,1} where T<:Real; m::Int=2, tau::Int=1, r::Union{Real,Tuple{Real,Real}}=(.2,2), Fx::String="default", Logx::Real=exp(1))

Returns the fuzzy entropy estimates Fuzz for dimensions = [1,...,m] estimated for the data sequence Sig using the specified keyword arguments:

Arguments:

m - Embedding Dimension, a positive integer [default: 2]

tau - Time Delay, a positive integer [default: 1]

Fx - Fuzzy function name, one of the following: {"sigmoid", "modsampen", "default", "gudermannian", "bell", "triangular", "trapezoidal1", "trapezoidal2", "z_shaped", "gaussian", "constgaussian"}

r - Fuzzy function parameters, a 1 element scalar or a 2 element tuple of positive values. The r parameters for each fuzzy function are defined as follows: [default: [.2 2]]

        default:        r(1) = divisor of the exponential argument
                         r(2) = argument exponent (pre-division)
         sigmoid:        r(1) = divisor of the exponential argument
                         r(2) = value subtracted from argument (pre-division)
     "Fuzzy Entropy Metrics for the Analysis of Biomedical Signals: 
     Assessment and Comparison"
     IEEE Access
     7 (2019): 104833-104847.
source
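A minimal usage sketch (the signal below is arbitrary test data; any real-valued vector will do):

```julia
using EntropyHub

Sig = randn(1000)                    # illustrative random signal
Fuzz, Ps1, Ps2 = FuzzEn(Sig)         # defaults: m = 2, tau = 1, Fx = "default", r = (0.2, 2)
Fz, _, _ = FuzzEn(Sig, m = 3, Fx = "sigmoid", r = (0.25, 3))
```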
EntropyHub._K2En.K2EnFunction
K2, Ci = K2En(Sig)

Returns the Kolmogorov entropy estimates K2 and the correlation integrals Ci for m = [1,2] estimated from the data sequence Sig using the default parameters: embedding dimension = 2, time delay = 1, r = 0.2*SD(Sig), logarithm = natural

K2, Ci = K2En(Sig::AbstractArray{T,1} where T<:Real; m::Int=2, tau::Int=1, r::Real=0.2*std(Sig,corrected=false), Logx::Real=exp(1))

Returns the Kolmogorov entropy estimates K2 for dimensions = [1,...,m] estimated from the data sequence Sig using the 'keyword' arguments:

Arguments:

m - Embedding Dimension, a positive integer

tau - Time Delay, a positive integer

r - Radius, a positive scalar

Logx - Logarithm base, a positive scalar

See also DistEn, XK2En, MSEn

References:

[1] Peter Grassberger and Itamar Procaccia,
     "Estimation of the Kolmogorov entropy from a chaotic signal." 
     Physical review A 28.4 (1983): 2591.
 
 [2] Lin Gao, Jue Wang  and Longwei Chen
     "Event-related desynchronization and synchronization 
     quantification in motor-related EEG by Kolmogorov entropy"
     J Neural Eng. 2013 Jun;10(3):03602
source
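As a brief usage sketch (random data purely for illustration):

```julia
using EntropyHub

Sig = randn(1000)                    # illustrative random signal
K2, Ci = K2En(Sig)                   # defaults: m = 2, tau = 1, r = 0.2*SD(Sig)
K2b, _ = K2En(Sig, m = 4, r = 0.3)   # wider radius, estimates for dimensions 1..4
```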
EntropyHub._PermEn.PermEnFunction
Perm, Pnorm, cPE = PermEn(Sig)

Returns the permutation entropy estimates Perm, the normalised permutation entropy Pnorm and the conditional permutation entropy cPE for m = [1,2] estimated from the data sequence Sig using the default parameters: embedding dimension = 2, time delay = 1, logarithm = base 2, normalisation = w.r.t #symbols (m-1). Note: using the standard PermEn estimation, Perm = 0 when m = 1. Note: It is recommended that signal length > 5m! (see [8] and Amigo et al., Europhys. Lett. 83:60005, 2008)

Perm, Pnorm, cPE = PermEn(Sig, m)

Returns the permutation entropy estimates Perm estimated from the data sequence Sig using the specified embedding dimensions = [1,...,m] with other default parameters as listed above.

Perm, Pnorm, cPE = PermEn(Sig::AbstractArray{T,1} where T<:Real; m::Int=2, tau::Int=1, Typex::String="none", tpx::Union{Real,Nothing}=nothing, Logx::Real=2, Norm::Bool=false)

Returns the permutation entropy estimates Perm for dimensions = [1,...,m] estimated from the data sequence Sig using the specified 'keyword' arguments:

Arguments:

m - Embedding Dimension, an integer > 1

tau - Time Delay, a positive integer

Logx - Logarithm base, a positive scalar (enter 0 for natural log)

Norm - Normalisation of PermEn value:

      false -  normalises w.r.t log(# of permutation symbols [m-1]) - default
       true  -  normalises w.r.t log(# of all possible permutations [m!])
       * Note: Normalised permutation entropy is undefined for m = 1.
       ** Note: When Typex = 'uniquant' and Norm = true, normalisation
         "Phase permutation entropy: A complexity measure for nonlinear time 
         series incorporating phase information." 
         Physica A: Statistical Mechanics and its Applications 
        568 (2021): 125686.
source
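A short usage sketch (the test signal is arbitrary):

```julia
using EntropyHub

Sig = randn(1000)                                   # illustrative random signal
Perm, Pnorm, cPE = PermEn(Sig)                      # defaults: m = 2, tau = 1, base-2 log
Perm5, Pnorm5, _ = PermEn(Sig, m = 5, Norm = true)  # normalise w.r.t log(m!)
```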
EntropyHub._CondEn.CondEnFunction
Cond, SEw, SEz = CondEn(Sig)

Returns the corrected conditional entropy estimates (Cond) and the corresponding Shannon entropies (m: SEw, m+1: SEz) for m = [1,2] estimated from the data sequence (Sig) using the default parameters: embedding dimension = 2, time delay = 1, symbols = 6, logarithm = natural, normalisation = false Note: CondEn(m=1) returns the Shannon entropy of Sig.

Cond, SEw, SEz = CondEn(Sig::AbstractArray{T,1} where T<:Real; m::Int=2, tau::Int=1, c::Int=6, Logx::Real=exp(1), Norm::Bool=false)

Returns the corrected conditional entropy estimates (Cond) from the data sequence (Sig) using the specified 'keyword' arguments:

Arguments:

m - Embedding Dimension, an integer > 1

tau - Time Delay, a positive integer

c - # of symbols, an integer > 1

Logx - Logarithm base, a positive scalar

Norm - Normalisation of CondEn value:

      [false]  no normalisation - default
       [true]   normalises w.r.t Shannon entropy of data sequence `Sig`

See also XCondEn, MSEn, PermEn, DistEn, XPermEn

References:

[1] Alberto Porta, et al.,
     "Measuring regularity by means of a corrected conditional
     entropy in sympathetic outflow." 
     Biological cybernetics 
     78.1 (1998): 71-78.
source
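For example, with an arbitrary test vector:

```julia
using EntropyHub

Sig = randn(1000)                                  # illustrative random signal
Cond, SEw, SEz = CondEn(Sig)                       # defaults: m = 2, tau = 1, c = 6
Cond2, _, _ = CondEn(Sig, m = 3, c = 8, Norm = true)
```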
EntropyHub._DistEn.DistEnFunction
Dist, Ppi = DistEn(Sig)

Returns the distribution entropy estimate (Dist) and the corresponding distribution probabilities (Ppi) estimated from the data sequence (Sig) using the default parameters: embedding dimension = 2, time delay = 1, binning method = 'Sturges', logarithm = base 2, normalisation = w.r.t # of histogram bins

Dist, Ppi = DistEn(Sig::AbstractArray{T,1} where T<:Real; m::Int=2, tau::Int=1, Bins::Union{Int,String}="Sturges", Logx::Real=2, Norm::Bool=true)

Returns the distribution entropy estimate (Dist) estimated from the data sequence (Sig) using the specified 'keyword' arguments:

Arguments:

m - Embedding Dimension, a positive integer

tau - Time Delay, a positive integer

Bins - Histogram bin selection method for distance distribution, one of the following:

      an integer > 1 indicating the number of bins, or one of the 
       following strings {'sturges','sqrt','rice','doanes'}
       [default: 'sturges']

Logx - Logarithm base, a positive scalar (enter 0 for natural log)

Norm - Normalisation of DistEn value:

      [false]  no normalisation.
       [true]   normalises w.r.t # of histogram bins - default

See also XDistEn, DistEn2D, MSEn, K2En

References:

[1] Li, Peng, et al.,
     "Assessing the complexity of short-term heartbeat interval 
     series by distribution entropy." 
     Medical & biological engineering & computing 
     53.1 (2015): 77-87.
source
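A minimal usage sketch (random data, purely illustrative):

```julia
using EntropyHub

Sig = randn(1000)                          # illustrative random signal
Dist, Ppi = DistEn(Sig)                    # defaults: m = 2, tau = 1, Bins = "Sturges"
Dist2, _ = DistEn(Sig, m = 3, Bins = 64)   # fixed 64-bin histogram instead
```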
EntropyHub._SpecEn.SpecEnFunction
Spec, BandEn = SpecEn(Sig)

Returns the spectral entropy estimate of the full spectrum (Spec) and the within-band entropy (BandEn) estimated from the data sequence (Sig) using the default parameters: N-point FFT = 2*len(Sig) + 1, normalised band edge frequencies = [0 1], logarithm = base 2, normalisation = w.r.t # of spectrum/band frequency values.

Spec, BandEn = SpecEn(Sig::AbstractArray{T,1} where T<:Real; N::Int=1 + (2*size(Sig,1)), Freqs::Tuple{Real,Real}=(0,1), Logx::Real=exp(1), Norm::Bool=true)

Returns the spectral entropy (Spec) and the within-band entropy (BandEn) estimate for the data sequence (Sig) using the specified 'keyword' arguments:

Arguments:

N - Resolution of spectrum (N-point FFT), an integer > 1

Freqs - Normalised band edge frequencies, a 2 element tuple with values

      in range [0 1] where 1 corresponds to the Nyquist frequency (Fs/2).
       Note: When no band frequencies are entered, BandEn == SpecEn

Logx - Logarithm base, a positive scalar (enter 0 for natural log)

Norm - Normalisation of Spec value:

      [false]  no normalisation.
       [true]   normalises w.r.t # of spectrum/band frequency values - default.

For more info, see the EntropyHub guide.

See also XSpecEn, fft, MSEn, XMSEn

References:

[1] G.E. Powell and I.C. Percival,
     "A spectral entropy method for distinguishing regular and 
     "Quantification of EEG irregularity by use of the entropy of 
     the power spectrum." 
     Electroencephalography and clinical neurophysiology 
     79.3 (1991): 204-210.
source
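As a quick sketch (the band edges below are arbitrary examples):

```julia
using EntropyHub

Sig = randn(1000)                           # illustrative random signal
Spec, BandEn = SpecEn(Sig)                  # full spectrum, default parameters
_, Band = SpecEn(Sig, Freqs = (0.1, 0.4))   # within-band entropy, normalised band edges
```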
EntropyHub._DispEn.DispEnFunction
Dispx, RDE = DispEn(Sig)

Returns the dispersion entropy (Dispx) and the reverse dispersion entropy (RDE) estimated from the data sequence (Sig) using the default parameters: embedding dimension = 2, time delay = 1, symbols = 3, logarithm = natural, data transform = normalised cumulative density function (ncdf)

Dispx, RDE = DispEn(Sig::AbstractArray{T,1} where T<:Real; c::Int=3, m::Int=2, tau::Int=1, Typex::String="ncdf", Logx::Real=exp(1), Fluct::Bool=false, Norm::Bool=false, rho::Real=1)

Returns the dispersion entropy (Dispx) and the reverse dispersion entropy (RDE) estimated from the data sequence (Sig) using the specified 'keyword' arguments:

Arguments:

m - Embedding Dimension, a positive integer

tau - Time Delay, a positive integer

c - Number of symbols, an integer > 1

Typex - Type of data-to-symbolic sequence transform, one of the following: {"linear", "kmeans" ,"ncdf", "finesort", "equal"}

      See the EntropyHub guide for more info on these transforms.

Logx - Logarithm base, a positive scalar

Fluct - When Fluct == true, DispEn returns the fluctuation-based Dispersion entropy. [default: false]

Norm - Normalisation of Dispx and RDE value: [false] no normalisation - default [true] normalises w.r.t number of possible dispersion patterns (c^m or (2c -1)^m-1 if Fluct == true).

rho - If Typex == 'finesort', rho is the tuning parameter (default: 1)

See also PermEn, SyDyEn, MSEn

References:

[1] Mostafa Rostaghi and Hamed Azami,
     "Dispersion entropy: A measure for time-series analysis." 
     IEEE Signal Processing Letters 
     23.5 (2016): 610-614.
     "Fault diagnosis for rolling bearings based on fine-sorted 
     dispersion entropy and SVM optimized with mutation SCA-PSO."
     Entropy
     21.4 (2019): 404.
source
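A usage sketch with arbitrary test data:

```julia
using EntropyHub

Sig = randn(1000)                                        # illustrative random signal
Dispx, RDE = DispEn(Sig)                                 # defaults: c = 3, m = 2, Typex = "ncdf"
Fluc, _ = DispEn(Sig, c = 4, Fluct = true, Norm = true)  # fluctuation-based, normalised
```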
EntropyHub._SyDyEn.SyDyEnFunction
SyDy, Zt = SyDyEn(Sig)

Returns the symbolic dynamic entropy (SyDy) and the symbolic sequence (Zt) of the data sequence (Sig) using the default parameters: embedding dimension = 2, time delay = 1, symbols = 3, logarithm = natural, symbolic partition type = maximum entropy partitioning (MEP), normalisation = normalises w.r.t # possible vector permutations (c^m)

SyDy, Zt = SyDyEn(Sig::AbstractArray{T,1} where T<:Real; m::Int=2, tau::Int=1, c::Int=3, Typex::String="MEP", Logx::Real=exp(1), Norm::Bool=true)

Returns the symbolic dynamic entropy (SyDy) and the symbolic sequence (Zt) of the data sequence (Sig) using the specified 'keyword' arguments:

Arguments:

m - Embedding Dimension, a positive integer

tau - Time Delay, a positive integer

c - Number of symbols, an integer > 1

Typex - Type of symbolic sequence partitioning method, one of the following:

      {"linear","uniform","MEP"(default),"kmeans"}

Logx - Logarithm base, a positive scalar

Norm - Normalisation of SyDyEn value:

      [false]  no normalisation 
       [true]   normalises w.r.t # possible vector permutations (c^m+1) - default

See the EntropyHub guide for more info on these parameters.

See also DispEn, PermEn, CondEn, SampEn, MSEn

References:

[1] Yongbo Li, et al.,
     "A fault diagnosis scheme for planetary gearboxes using 
     modified multi-scale symbolic dynamic entropy and mRMR feature 
 [3] Venkatesh Rajagopalan and Asok Ray,
     "Symbolic time series analysis via wavelet-based partitioning."
     Signal processing 
     86.11 (2006): 3309-3320.
source
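For instance (illustrative data only):

```julia
using EntropyHub

Sig = randn(1000)                                 # illustrative random signal
SyDy, Zt = SyDyEn(Sig)                            # defaults: m = 2, tau = 1, c = 3, MEP partitioning
SyDy2, _ = SyDyEn(Sig, c = 4, Typex = "kmeans")   # alternative partitioning
```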
EntropyHub._IncrEn.IncrEnFunction
Incr = IncrEn(Sig)

Returns the increment entropy (Incr) estimate of the data sequence (Sig) using the default parameters: embedding dimension = 2, time delay = 1, quantifying resolution = 4, logarithm = base 2.

Incr = IncrEn(Sig::AbstractArray{T,1} where T<:Real; m::Int=2, tau::Int=1, R::Int=4, Logx::Real=2, Norm::Bool=false)

Returns the increment entropy (Incr) estimate of the data sequence (Sig) using the specified 'keyword' arguments:

Arguments:

m - Embedding Dimension, an integer > 1

tau - Time Delay, a positive integer

R - Quantifying resolution, a positive scalar

Logx - Logarithm base, a positive scalar (enter 0 for natural log)

Norm - Normalisation of IncrEn value:

      [false]  no normalisation - default
       [true]   normalises w.r.t embedding dimension (m-1).

See also PermEn, SyDyEn, MSEn

References:

[1] Xiaofeng Liu, et al.,
     "Increment entropy as a measure of complexity for time series."
     Entropy
     "Appropriate use of the increment entropy for 
     electrophysiological time series." 
     Computers in biology and medicine 
     95 (2018): 13-23.
source
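A minimal sketch (arbitrary test signal):

```julia
using EntropyHub

Sig = randn(1000)                               # illustrative random signal
Incr = IncrEn(Sig)                              # defaults: m = 2, tau = 1, R = 4
Incr2 = IncrEn(Sig, m = 3, R = 2, Norm = true)  # coarser resolution, normalised
```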
EntropyHub._CoSiEn.CoSiEnFunction
CoSi, Bm = CoSiEn(Sig)

Returns the cosine similarity entropy (CoSi) and the corresponding global probabilities estimated from the data sequence (Sig) using the default parameters: embedding dimension = 2, time delay = 1, angular threshold = 0.1, logarithm = base 2.

CoSi, Bm = CoSiEn(Sig::AbstractArray{T,1} where T<:Real; m::Int=2, tau::Int=1, r::Real=.1, Logx::Real=2, Norm::Int=0)

Returns the cosine similarity entropy (CoSi) estimated from the data sequence (Sig) using the specified 'keyword' arguments:

Arguments:

m - Embedding Dimension, an integer > 1

tau - Time Delay, a positive integer

r - Angular threshold, a value in range [0 < r < 1]

Logx - Logarithm base, a positive scalar (enter 0 for natural log)

Norm - Normalisation of Sig, one of the following integers:

    [0]  no normalisation - default
     [1]  normalises `Sig` by removing median(`Sig`)
     [2]  normalises `Sig` by removing mean(`Sig`)
     [3]  normalises `Sig` w.r.t. SD(`Sig`)
     "Cosine similarity entropy: Self-correlation-based complexity
     analysis of dynamical systems."
     Entropy 
     19.12 (2017): 652.
source
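As a usage sketch (the threshold value shown is an arbitrary example):

```julia
using EntropyHub

Sig = randn(1000)                                 # illustrative random signal
CoSi, Bm = CoSiEn(Sig)                            # defaults: m = 2, tau = 1, r = 0.1
CoSi2, _ = CoSiEn(Sig, m = 3, r = 0.05, Norm = 3) # signal normalised w.r.t. its SD
```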
EntropyHub._PhasEn.PhasEnFunction
Phas = PhasEn(Sig)

Returns the phase entropy (Phas) estimate of the data sequence (Sig) using the default parameters: angular partitions = 4, time delay = 1, logarithm = natural.

Phas = PhasEn(Sig::AbstractArray{T,1} where T<:Real; K::Int=4, tau::Int=1, Logx::Real=exp(1), Norm::Bool=true, Plotx::Bool=false)

Returns the phase entropy (Phas) estimate of the data sequence (Sig) using the specified 'keyword' arguments:

Arguments:

K - Angular partitions (coarse graining), an integer > 1

        *Note: Division of partitions begins along the positive x-axis. As this point is somewhat arbitrary, it is
          recommended to use even-numbered (preferably multiples of 4) partitions for sake of symmetry.

tau - Time Delay, a positive integer

Logx - Logarithm base, a positive scalar

Norm - Normalisation of Phas value:

      [false]  no normalisation
       [true]   normalises w.r.t. the number of partitions Log(`K`)

Plotx - When Plotx == true, returns Poincaré plot (default: false)

See also SampEn, ApEn, GridEn, MSEn, SlopEn, CoSiEn, BubbEn

References:

[1] Ashish Rohila and Ambalika Sharma,
     "Phase entropy: a new complexity measure for heart rate
     variability." 
     Physiological measurement
     40.10 (2019): 105006.
source
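A brief sketch with arbitrary data:

```julia
using EntropyHub

Sig = randn(1000)                    # illustrative random signal
Phas = PhasEn(Sig)                   # defaults: K = 4, tau = 1
Phas8 = PhasEn(Sig, K = 8, tau = 2)  # finer angular partitioning
```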
EntropyHub._SlopEn.SlopEnFunction
Slop = SlopEn(Sig)

Returns the slope entropy (Slop) estimates for embedding dimensions [2, ..., m] of the data sequence (Sig) using the default parameters: embedding dimension = 2, time delay = 1, angular thresholds = [5 45], logarithm = base 2

Slop = SlopEn(Sig::AbstractArray{T,1} where T<:Real; m::Int=2, tau::Int=1, Lvls::AbstractArray{T,1} where T<:Real=[5, 45], Logx::Real=2, Norm::Bool=true)

Returns the slope entropy (Slop) estimate of the data sequence (Sig) using the specified 'keyword' arguments:

Arguments:

m - Embedding Dimension, an integer > 1

      SlopEn returns estimates for each dimension [2,...,m]

tau - Time Delay, a positive integer

Lvls - Angular thresholds, a vector of monotonically increasing values in the range [0 90] degrees.

Logx - Logarithm base, a positive scalar (enter 0 for natural log)

Norm - Normalisation of SlopEn value, a boolean operator:

      [false]  no normalisation
       [true]   normalises w.r.t. the number of patterns found (default)

See also PhasEn, GridEn, MSEn, CoSiEn, SampEn, ApEn

References:

[1] David Cuesta-Frau,
     "Slope Entropy: A New Time Series Complexity Estimator Based on
     Both Symbolic Patterns and Amplitude Information." 
     Entropy 
     21.12 (2019): 1167.
source
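For example (the threshold levels below are arbitrary):

```julia
using EntropyHub

Sig = randn(1000)                               # illustrative random signal
Slop = SlopEn(Sig)                              # defaults: m = 2, Lvls = [5, 45]
Slop2 = SlopEn(Sig, m = 4, Lvls = [5, 30, 60])  # extra angular thresholds, dims 2..4
```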
EntropyHub._BubbEn.BubbEnFunction
Bubb, H = BubbEn(Sig)

Returns the bubble entropy (Bubb) and the conditional Rényi entropy (H) estimates of dimension m = 2 from the data sequence (Sig) using the default parameters: embedding dimension = 2, time delay = 1, logarithm = natural

Bubb, H = BubbEn(Sig::AbstractArray{T,1} where T<:Real; m::Int=2, tau::Int=1, Logx::Real=exp(1))

Returns the bubble entropy (Bubb) estimate of the data sequence (Sig) using the specified 'keyword' arguments:

Arguments:

m - Embedding Dimension, an integer > 1

      BubbEn returns estimates for each dimension [2,...,m]

tau - Time Delay, a positive integer

Logx - Logarithm base, a positive scalar

See also PhasEn, MSEn

References:

[1] George Manis, M.D. Aktaruzzaman and Roberto Sassi,
     "Bubble entropy: An entropy almost free of parameters."
     IEEE Transactions on Biomedical Engineering
     64.11 (2017): 2711-2718.
source
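A minimal usage sketch (illustrative data only):

```julia
using EntropyHub

Sig = randn(1000)              # illustrative random signal
Bubb, H = BubbEn(Sig)          # defaults: m = 2, tau = 1
Bubb2, _ = BubbEn(Sig, m = 5)  # estimates for dimensions 2..5
```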
EntropyHub._GridEn.GridEnFunction
GDE, GDR, _ = GridEn(Sig)

Returns the gridded distribution entropy (GDE) and the gridded distribution rate (GDR) estimated from the data sequence (Sig) using the default parameters: grid coarse-grain = 3, time delay = 1, logarithm = base 2

GDE, GDR, PIx, GIx, SIx, AIx = GridEn(Sig)

In addition to GDE and GDR, GridEn returns the following indices estimated for the data sequence (Sig) using the default parameters: [PIx] - Percentage of points below the line of identity (LI) [GIx] - Proportion of point distances above the LI [SIx] - Ratio of phase angles (w.r.t. LI) of the points above the LI [AIx] - Ratio of the cumulative area of sectors of points above the LI

GDE, GDR, ..., = GridEn(Sig::AbstractArray{T,1} where T<:Real; m::Int=3, tau::Int=1, Logx::Real=exp(1), Plotx::Bool=false)

Returns the gridded distribution entropy (GDE) estimated from the data sequence (Sig) using the specified 'keyword' arguments:

Arguments:

m - Grid coarse-grain (m x m sectors), an integer > 1

tau - Time Delay, a positive integer

Logx - Logarithm base, a positive scalar

Plotx - When Plotx == true, returns gridded Poincaré plot and a bivariate histogram of the grid point distribution (default: false)

See also PhasEn, CoSiEn, SlopEn, BubbEn, MSEn

References:

[1] Chang Yan, et al.,
         "Novel gridded descriptors of poincaré plot for analyzing 
         heartbeat interval time-series." 
         Computers in biology and medicine 
 [4] C.K. Karmakar, A.H. Khandoker and M. Palaniswami,
         "Phase asymmetry of heart rate variability signal." 
         Physiological measurement 
        36.2 (2015): 303.
source
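As a sketch (arbitrary test vector; all six outputs can be captured at once):

```julia
using EntropyHub

Sig = randn(1000)                            # illustrative random signal
GDE, GDR, PIx, GIx, SIx, AIx = GridEn(Sig)   # defaults: m = 3, tau = 1
GDE2, GDR2, _ = GridEn(Sig, m = 5)           # finer 5 x 5 grid
```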
EntropyHub._EnofEn.EnofEnFunction
EoE, AvEn, S2 = EnofEn(Sig)

Returns the entropy of entropy (EoE), the average Shannon entropy (AvEn), and the number of levels (S2) across all windows estimated from the data sequence (Sig) using the default parameters: window length (samples) = 10, slices = 10, logarithm = natural, heartbeat interval range (xmin, xmax) = (min(Sig), max(Sig))

EoE, AvEn, S2 = EnofEn(Sig::AbstractArray{T,1} where T<:Real; tau::Int=10, S::Int=10, Xrange::Tuple{Real,Real}, Logx::Real=exp(1))

Returns the entropy of entropy (EoE) estimated from the data sequence (Sig) using the specified 'keyword' arguments:

Arguments:

tau - Window length, an integer > 1

S - Number of slices (s1,s2), a two-element tuple of integers > 2

Xrange - The min and max heartbeat interval, a two-element tuple where X[1] <= X[2]

Logx - Logarithm base, a positive scalar

See also SampEn, MSEn, ApEn

References:

[1] Chang Francis Hsu, et al.,
     "Entropy of entropy: Measurement of dynamical complexity for
     biological systems." 
     Entropy 
    19.10 (2017): 550.
source
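A short usage sketch (not from the original page) with the documented default window length and slice count; the random sequence is illustrative only:

```julia
using EntropyHub

Sig = rand(1000)   # illustrative random sequence
# Entropy of entropy with window length tau = 10 and S = 10 slices
EoE, AvEn, S2 = EnofEn(Sig, tau = 10, S = 10)
```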
EntropyHub._AttnEn.AttnEnFunction
Av4, (Hxx,Hnn,Hxn,Hnx) = AttnEn(Sig)

Returns the attention entropy (Av4) calculated as the average of the sub-entropies (Hxx,Hxn,Hnn,Hnx) estimated from the data sequence (Sig) using a base-2 logarithm.

Av4, (Hxx, Hnn, Hxn, Hnx) = AttnEn(Sig::AbstractArray{T,1} where T<:Real; Logx::Real=2)

Returns the attention entropy (Av4) and the sub-entropies (Hxx,Hnn,Hxn,Hnx) from the data sequence (Sig), where:

Hxx - entropy of local-maxima intervals

Hnn - entropy of local-minima intervals

Hxn - entropy of intervals between local maxima and subsequent minima

Hnx - entropy of intervals between local minima and subsequent maxima

Arguments:

Logx - Logarithm base, a positive scalar (Enter 0 for natural logarithm)

See also EnofEn, SpecEn, XSpecEn, PermEn, MSEn

References:

[1] Jiawei Yang, et al.,
     "Classification of Interbeat Interval Time-series Using 
     Attention Entropy." 
     IEEE Transactions on Affective Computing 
    (2020)
source
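A minimal sketch (not part of the original docstring) using the documented base-2 logarithm default; the random input is illustrative only:

```julia
using EntropyHub

Sig = randn(1000)   # illustrative random signal
# Attention entropy and its four interval sub-entropies
Av4, (Hxx, Hnn, Hxn, Hnx) = AttnEn(Sig, Logx = 2)
```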
EntropyHub._RangEn.RangEnFunction
Rangx, A, B = RangEn(Sig)

Returns the range entropy estimate (Rangx) and the number of matched state vectors (m: B, m+1: A) estimated from the data sequence (Sig) using the sample entropy algorithm and the following default parameters: embedding dimension = 2, time delay = 1, radius threshold = 0.2, logarithm = natural.

Rangx, A, B = RangEn(Sig, keyword = value, ...)

Returns the range entropy estimates (Rangx) for dimensions = m estimated for the data sequence (Sig) using the specified keyword arguments:

Arguments:

m - Embedding Dimension, a positive integer

tau - Time Delay, a positive integer

r - Radius Distance Threshold, a positive value between 0 and 1

Methodx - Base entropy method, either 'SampEn' [default] or 'ApEn'

Logx - Logarithm base, a positive scalar

See also ApEn, SampEn, FuzzEn, MSEn

References:

[1] Omidvarnia, Amir, et al.
     "Range entropy: A bridge between signal complexity and self-similarity"
     Entropy 
     20.12 (2018): 962.
 [2] Joshua S Richman and J. Randall Moorman,
     "Physiological time-series analysis using approximate entropy
     and sample entropy." 
     American Journal of Physiology-Heart and Circulatory Physiology 
    2000
source
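A brief usage sketch (not from the original page) with the documented defaults made explicit; the random signal is illustrative only:

```julia
using EntropyHub

Sig = randn(1000)   # illustrative random signal
# Range entropy via the sample entropy algorithm (r is a fraction in (0, 1))
Rangx, A, B = RangEn(Sig, m = 2, tau = 1, r = 0.2, Methodx = "SampEn")
```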
EntropyHub._DivEn.DivEnFunction
Div, CDEn, Bm = DivEn(Sig)

Returns the diversity entropy (Div), the cumulative diversity entropy (CDEn), and the corresponding probabilities (Bm) estimated from the data sequence (Sig) using the default parameters: embedding dimension = 2, time delay = 1, #bins = 5, logarithm = natural.

Div, CDEn, Bm = DivEn(Sig::AbstractArray{T,1} where T<:Real; m::Int=2, tau::Int=1, r::Int=5, Logx::Real=exp(1))

Returns the diversity entropy (Div) estimated from the data sequence (Sig) using the specified 'keyword' arguments:

Arguments:

m - Embedding Dimension, an integer > 1

tau - Time Delay, a positive integer

r - Histogram bins #: either

        * an integer [1 < `r`] representing the number of bins
         * a vector of 3 or more increasing values in the range [-1, 1] representing the bin edges, including the rightmost edge.

Logx - Logarithm base, a positive scalar (Enter 0 for natural logarithm)

See also CoSiEn, PhasEn, SlopEn, GridEn, MSEn

References:

[1] X. Wang, S. Si and Y. Li, 
     "Multiscale Diversity Entropy: A Novel Dynamical Measure for Fault 
     Diagnosis of Rotating Machinery," 
     "Cumulative Diversity Pattern Entropy (CDEn): A High-Performance, 
     Almost-Parameter-Free Complexity Estimator for Nonstationary Time Series,"
     IEEE Transactions on Industrial Informatics
    vol. 19, no. 9, pp. 9642-9653, Sept. 2023
source
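A minimal sketch (not part of the original docstring) using the documented defaults, with r given as a bin count; the random input is illustrative only:

```julia
using EntropyHub

Sig = randn(1000)   # illustrative random signal
# Diversity entropy with 5 histogram bins
Div, CDEn, Bm = DivEn(Sig, m = 2, tau = 1, r = 5)
```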
Bidimensional Entropies · EntropyHub.jl

Bidimensional Entropies

Functions for estimating the entropy of a two-dimensional univariate matrix.

While EntropyHub functions primarily apply to time series data, with the following bidimensional entropy functions one can estimate the entropy of two-dimensional (2D) matrices. Hence, bidimensional entropy functions are useful for applications such as image analysis.

IMPORTANT: Locked Matrix Size

Each bidimensional entropy function (SampEn2D, FuzzEn2D, DistEn2D, DispEn2D, EspEn2D, PermEn2D) has an important keyword argument - Lock. Bidimensional entropy functions are "locked" by default (Lock == true) to only permit matrices with a maximum size of 128 x 128.

The reason for this is because there are hundreds of millions of pairwise calculations performed in the estimation of bidimensional entropy, so memory errors often occur when storing data on RAM.

e.g. For a matrix of size [200 x 200], an embedding dimension (m) = 3, and a time delay (tau) = 1, there are 753,049,836 pairwise matrix comparisons (6,777,448,524 elemental subtractions). To pass matrices with sizes greater than [128 x 128], set Lock = false.

CAUTION: unlocking the permitted matrix size may cause your Julia IDE to crash.

EntropyHub._SampEn2D.SampEn2DFunction
SE2D, Phi1, Phi2 = SampEn2D(Mat)

Returns the bidimensional sample entropy estimate (SE2D) and the number of matched sub-matrices (m:Phi1, m+1:Phi2) estimated for the data matrix (Mat) using the default parameters: time delay = 1, radius distance threshold = 0.2*SD(Mat), logarithm = natural, matrix template size = [floor(H/10) floor(W/10)], (where H and W represent the height (rows) and width (columns) of the data matrix Mat)

** The minimum dimension size of Mat must be > 10.**

SE2D, Phi1, Phi2 = SampEn2D(Mat::AbstractArray{T,2} where T<:Real; m::Union{Int,Tuple{Int,Int}}=floor.(Int, size(Mat)./10),             
                                 tau::Int=1, r::Real=0.2*std(Mat,corrected=false), Logx::Real=exp(1), Lock::Bool=true)

Returns the bidimensional sample entropy (SE2D) estimates for the data matrix (Mat) using the specified 'keyword' arguments:

Arguments:

m - Template submatrix dimensions, an integer scalar (i.e. the same height and width) or a two-element vector of integers [height, width] with a minimum value > 1. (default: [floor(H/10) floor(W/10)])

tau - Time Delay, a positive integer (default: 1)

r - Distance Threshold, a positive scalar (default: 0.2*SD(Mat))

Logx - Logarithm base, a positive scalar (default: natural)

Lock - By default, SampEn2D only permits matrices with a maximum size of 128 x 128 to prevent memory errors when storing data on RAM. e.g. For Mat = [200 x 200], m = 3, and tau = 1, SampEn2D creates a vector of 753049836 elements. To enable matrices greater than [128 x 128] elements, set Lock to false. (default: true)

      `WARNING: unlocking the permitted matrix size may cause your Julia
       IDE to crash.`

See also SampEn, FuzzEn2D, XSampEn, MSEn

References:

[1] Luiz Eduardo Virgili Silva, et al.,
      "Two-dimensional sample entropy: Assessing image texture 
      through irregularity." 
      Biomedical Physics & Engineering Express
     2.4 (2016): 045002.
source
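A minimal usage sketch (not from the original page); the random matrix is illustrative only and respects the documented size constraints (both dimensions > 10, and ≤ 128 while Lock = true):

```julia
using EntropyHub

Mat = rand(40, 40)   # illustrative random matrix
# Bidimensional sample entropy with a 2 x 2 template submatrix
SE2D, Phi1, Phi2 = SampEn2D(Mat, m = 2, tau = 1)
```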
EntropyHub._FuzzEn2D.FuzzEn2DFunction
Fuzz2D = FuzzEn2D(Mat)

Returns the bidimensional fuzzy entropy estimate (Fuzz2D) estimated for the data matrix (Mat) using the default parameters: time delay = 1, fuzzy function (Fx) = 'default', fuzzy function parameters (r) = [0.2, 2], logarithm = natural, template matrix size = [floor(H/10) floor(W/10)], (where H and W represent the height and width of the data matrix 'Mat')

** The minimum dimension size of Mat must be > 10.**

Fuzz2D = FuzzEn2D(Mat::AbstractArray{T,2} where T<:Real; m::Union{Int,Tuple{Int,Int}}=floor.(Int, size(Mat)./10), 
                     tau::Int=1, r::Union{Real,Tuple{Real,Real}}=(.2*std(Mat, corrected=false),2), 
                         Fx::String="default", Logx::Real=exp(1), Lock::Bool=true)

Returns the bidimensional fuzzy entropy (Fuzz2D) estimates for the data matrix (Mat) using the specified 'keyword' arguments:

Arguments:

m - Template submatrix dimensions, an integer scalar (i.e. the same height and width) or a two-element vector of integers [height, width] with a minimum value > 1. (default: [floor(H/10) floor(W/10)])

tau - Time Delay, a positive integer (default: 1)

Fx - Fuzzy function name, one of the following: {"sigmoid", "modsampen", "default", "gudermannian", "bell", "triangular", "trapezoidal1", "trapezoidal2", "z_shaped", "gaussian", "constgaussian"}

r - Fuzzy function parameters, a 1 element scalar or a 2 element vector of positive values. The 'r' parameters for each fuzzy function are defined as follows:

      sigmoid:        r(1) = divisor of the exponential argument
                       r(2) = value subtracted from argument (pre-division)
     "Fuzzy Entropy Metrics for the Analysis of Biomedical Signals: 
     Assessment and Comparison"
     IEEE Access
    7 (2019): 104833-104847
source
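A short sketch (not part of the original docstring) using the documented default fuzzy function; the random matrix is illustrative only:

```julia
using EntropyHub

Mat = rand(40, 40)   # illustrative random matrix (dimensions > 10)
# Bidimensional fuzzy entropy with the default fuzzy membership function
Fuzz2D = FuzzEn2D(Mat, m = 2, tau = 1, Fx = "default")
```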
EntropyHub._DistEn2D.DistEn2DFunction
Dist2D = DistEn2D(Mat)

Returns the bidimensional distribution entropy estimate (Dist2D) estimated for the data matrix (Mat) using the default parameters: time delay = 1, histogram binning method = "sturges", logarithm = natural, template matrix size = [floor(H/10) floor(W/10)], (where H and W represent the height (rows) and width (columns) of the data matrix Mat)

** The minimum number of rows and columns of Mat must be > 10.**

Dist2D = DistEn2D(Mat::AbstractArray{T,2} where T<:Real; m::Union{Int,Tuple{Int,Int}}=floor.(Int, size(Mat)./10), tau::Int=1,
                     Bins::Union{Int,String}="Sturges", Logx::Real=2, Norm::Int=2, Lock::Bool=true)

Returns the bidimensional distribution entropy (Dist2D) estimate for the data matrix (Mat) using the specified 'keyword' arguments:

Arguments:

m - Template submatrix dimensions, an integer scalar (i.e. the same height and width) or a two-element tuple of integers [height, width] with a minimum value > 1. [default: [floor(H/10) floor(W/10)]]

tau - Time Delay, a positive integer [default: 1]

Bins - Histogram bin selection method for distance distribution, an integer > 1 indicating the number of bins, or one of the following strings: {"sturges", "sqrt", "rice", "doanes"} [default: 'sturges']

Logx - Logarithm base, a positive scalar [default: natural]

      ** enter 0 for natural logarithm.**

Norm - Normalisation of Dist2D value, one of the following integers:

[0] no normalisation.

[1] normalises values of data matrix (Mat) to range [0 1].

[2] normalises values of data matrix (Mat) to range [0 1], and normalises the distribution entropy value (Dist2D) w.r.t the number of histogram bins. [default]

[3] normalises the distribution entropy value w.r.t the number of histogram bins, without normalising data matrix values.

Lock - By default, DistEn2D only permits matrices with a maximum size of 128 x 128 to prevent memory errors when storing data on RAM. e.g. For Mat = [200 x 200], m = 3, and tau = 1, DistEn2D creates a vector of 753049836 elements. To enable matrices greater than [128 x 128] elements, set Lock to false. [default: 'true'] WARNING: unlocking the permitted matrix size may cause your Julia IDE to crash.

See also DistEn, XDistEn, SampEn2D, FuzzEn2D, MSEn

References:

[1] Hamed Azami, Javier Escudero and Anne Humeau-Heurtier,
     "Bidimensional distribution entropy to analyze the irregularity
     of small-sized textures."
     IEEE Signal Processing Letters 
    24.9 (2017): 1338-1342.
source
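A minimal sketch (not from the original page) using the documented 'sturges' binning and default normalisation mode; the random matrix is illustrative only:

```julia
using EntropyHub

Mat = rand(40, 40)   # illustrative random matrix (dimensions > 10)
# Bidimensional distribution entropy, normalised w.r.t. the number of bins (Norm = 2)
Dist2D = DistEn2D(Mat, m = 2, Bins = "sturges", Norm = 2)
```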
EntropyHub._DispEn2D.DispEn2DFunction
Disp2D, RDE = DispEn2D(Mat)

Returns the bidimensional dispersion entropy estimate (Disp2D) and reverse bidimensional dispersion entropy (RDE) estimated for the data matrix (Mat) using the default parameters: time delay = 1, symbols = 3, logarithm = natural, data transform = normalised cumulative density function ('ncdf'), template matrix size = [floor(H/10) floor(W/10)], (where H and W represent the height (rows) and width (columns) of the data matrix Mat)

** The minimum number of rows and columns of Mat must be > 10.**

Disp2D, RDE = DispEn2D(Mat::AbstractArray{T,2} where T<:Real; 
                     m::Union{Int,Tuple{Int,Int}}=floor.(Int, size(Mat)./10), tau::Int=1,
                         c::Int=3, Typex::String="ncdf", Logx::Real=exp(1), Norm::Bool=false, Lock::Bool=true)

Returns the bidimensional dispersion entropy (Disp2D) and reverse bidimensional distribution entropy (RDE) estimate for the data matrix (Mat) using the specified 'keyword' arguments:

Arguments:

m - Template submatrix dimensions, an integer scalar (i.e. the same height and width) or a two-element tuple of integers [height, width] with a minimum value > 1. [default: [floor(H/10) floor(W/10)]]

tau - Time Delay, a positive integer [default: 1]

c - Number of symbols, an integer > 1

Typex - Type of symbolic mapping transform, one of the following: {linear, kmeans, ncdf, equal} See the EntropyHub Guide for more info on these transforms.

Logx - Logarithm base, a positive scalar [default: natural]

      ** enter 0 for natural logarithm.**

Norm - Normalisation of Disp2D value, a boolean: [false] no normalisation - default; [true] normalises w.r.t number of possible dispersion patterns.

Lock - By default, DispEn2D only permits matrices with a maximum size of 128 x 128 to prevent memory errors when storing data on RAM. e.g. For Mat = [200 x 200], m = 3, and tau = 1, DispEn2D creates a vector of 753049836 elements. To enable matrices greater than [128 x 128] elements, set Lock to false. [default: 'true'] WARNING: unlocking the permitted matrix size may cause your Julia IDE to crash.

See also DispEn, DistEn2D, SampEn2D, FuzzEn2D, MSEn

References:

[1] Hamed Azami, et al.,
     "Two-dimensional dispersion entropy: An information-theoretic 
     method for irregularity analysis of images."
     Signal Processing: Image Communication, 
    75 (2019): 178-187.
source
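A brief sketch (not part of the original docstring) with the documented 'ncdf' transform and 3 symbols; the random matrix is illustrative only:

```julia
using EntropyHub

Mat = rand(40, 40)   # illustrative random matrix (dimensions > 10)
# Bidimensional dispersion entropy and its reverse variant
Disp2D, RDE = DispEn2D(Mat, m = 2, c = 3, Typex = "ncdf")
```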
EntropyHub._PermEn2D.PermEn2DFunction
Perm2D = PermEn2D(Mat)

Returns the bidimensional permutation entropy estimate (Perm2D) estimated for the data matrix (Mat) using the default parameters: time delay = 1, logarithm = natural, template matrix size = [floor(H/10) floor(W/10)], (where H and W represent the height (rows) and width (columns) of the data matrix Mat)

** The minimum dimension size of Mat must be > 10.**

Perm2D = PermEn2D(Mat::AbstractArray{T,2} where T<:Real; m::Union{Int,Tuple{Int,Int}}=floor.(Int, size(Mat)./10),             
                                 tau::Int=1, Norm::Bool=true, Logx::Real=exp(1), Lock::Bool=true)

Returns the bidimensional permutation entropy (Perm2D) estimates for the data matrix (Mat) using the specified 'keyword' arguments:

Arguments:

m - Template submatrix dimensions, an integer scalar (i.e. the same height and width) or a two-element vector of integers [height, width] with a minimum value > 1. (default: [floor(H/10) floor(W/10)])

tau - Time Delay, a positive integer (default: 1)

Norm - Normalization of permutation entropy estimate, a boolean (default: true)

Logx - Logarithm base, a positive scalar (default: natural)

Lock - By default, PermEn2D only permits matrices with a maximum size of 128 x 128 to prevent memory errors when storing data on RAM. e.g. For Mat = [200 x 200], m = 3, and tau = 1, SampEn2D creates a vector of 753049836 elements. To enable matrices greater than [128 x 128] elements, set Lock to false. (default: true)

      `WARNING: unlocking the permitted matrix size may cause your Julia
       IDE to crash.`

NOTE - The original bidimensional permutation entropy algorithms [1][2] do not account for equal-valued elements of the embedding matrices. To overcome this, PermEn2D uses the lowest common rank for such instances. For example, given an embedding matrix A where,

    A = [3.4  5.5  7.3;
         2.1  6.0  9.9;
         7.3  1.1  2.1]

would normally be mapped to an ordinal pattern like so,

    [3.4 5.5 7.3 2.1 6 9.9 7.3 1.1 2.1] => [8 4 9 1 2 5 3 7 6]

However, indices 4 & 9, and 3 & 7 have the same values, 2.1 and 7.3 respectively. Instead, PermEn2D uses the ordinal pattern [8 4 4 1 2 5 3 3 6], where the lowest ranks (4 & 3) are used instead (of 9 & 7). Therefore, the number of possible permutations is no longer (mx*my)!, but (mx*my)^(mx*my). Here, the PermEn2D value is normalised by the maximum Shannon entropy (Smax = log((mx*my)!)) assuming that no equal values are found in the permutation motif matrices, as presented in [1].

See also SampEn2D, FuzzEn2D, DispEn2D, DistEn2D

References:

[1] Haroldo Ribeiro et al.,
         "Complexity-Entropy Causality Plane as a Complexity Measure 
 
 [3] Matthew Flood and Bernd Grimm,
         "EntropyHub: An Open-source Toolkit for Entropic Time Series Analysis"
        PLoS ONE (2021) 16(11): e0259448.
source
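A minimal usage sketch (not from the original page) with the documented normalisation enabled; the random matrix is illustrative only:

```julia
using EntropyHub

Mat = rand(40, 40)   # illustrative random matrix (dimensions > 10)
# Normalised bidimensional permutation entropy
Perm2D = PermEn2D(Mat, m = 2, tau = 1, Norm = true)
```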
EntropyHub._EspEn2D.EspEn2DFunction
Esp2D,  = EspEn2D(Mat)

Returns the bidimensional Espinosa entropy estimate (Esp2D) estimated for the data matrix (Mat) using the default parameters: time delay = 1, tolerance threshold = 20, percentage similarity = 0.7, logarithm = natural, matrix template size = [floor(H/10) floor(W/10)], (where H and W represent the height (rows) and width (columns) of the data matrix Mat)

** The minimum number of rows and columns of Mat must be > 10.**

Esp2D = EspEn2D(Mat::AbstractArray{T,2} where T<:Real; m::Union{Int,Tuple{Int,Int}}=floor.(Int, size(Mat)./10),             
                                 tau::Int=1, r::Real=20, ps::Float64=0.7, Logx::Real=exp(1), Lock::Bool=true)

Returns the bidimensional Espinosa entropy (Esp2D) estimates for the data matrix (Mat) using the specified 'keyword' arguments:

Arguments:

m - Template submatrix dimensions, an integer scalar (i.e. the same height and width) or a two-element vector of integers [height, width] with a minimum value > 1. (default: [floor(H/10) floor(W/10)])

tau - Time Delay, a positive integer (default: 1)

r - Tolerance threshold, a positive scalar (default: 20)

ps - Percentage similarity, a value in range [0 1], (default: 0.7)

Logx - Logarithm base, a positive scalar (default: natural)

Lock - By default, EspEn2D only permits matrices with a maximum size of 128 x 128 to prevent memory errors when storing data on RAM. e.g. For Mat = [200 x 200], m = 3, and tau = 1, EspEn2D creates a vector of 753049836 elements. To enable matrices greater than [128 x 128] elements, set Lock to false. (default: true)

      `WARNING: unlocking the permitted matrix size may cause your Julia
       IDE to crash.`

See also SampEn2D, FuzzEn2D, DispEn2D, DistEn2D, PermEn2D

References:

[1] Ricardo Espinosa, et al.,
     "Two-dimensional EspEn: A New Approach to Analyze Image Texture 
     by Irregularity." 
     Entropy,
    23:1261 (2021)
source
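A short sketch (not part of the original docstring) with the documented tolerance and similarity defaults; the random matrix is illustrative only:

```julia
using EntropyHub

Mat = rand(40, 40)   # illustrative random matrix (dimensions > 10)
# Bidimensional Espinosa entropy with tolerance r = 20 and 70% similarity
Esp2D = EspEn2D(Mat, m = 2, r = 20, ps = 0.7)
```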
Cross-Entropies · EntropyHub.jl

Cross Entropies

Functions for estimating the cross-entropy between two univariate time series.

The following functions also form the cross-entropy method used by Multiscale Cross-Entropy functions.

These functions are directly available when EntropyHub is imported:

julia> using EntropyHub
 julia> names(EntropyHub)
 :ApEn
  :AttnEn
  :BubbEn
 [2] Steven Pincus,
     "Assessing serial irregularity and its implications for health."
     Annals of the New York Academy of Sciences 
    954.1 (2001): 245-267.
source
EntropyHub._XSampEn.XSampEnFunction
XSamp, A, B = XSampEn(Sig1, Sig2)

Returns the cross-sample entropy estimates (XSamp) and the number of matched vectors (m:B, m+1:A) for m = [0,1,2] estimated for the two univariate data sequences contained in Sig1 and Sig2 using the default parameters: embedding dimension = 2, time delay = 1, radius distance threshold= 0.2*SDpooled(Sig1,Sig2), logarithm = natural

XSamp, A, B, (Vcp, Ka, Kb) = XSampEn(Sig1, Sig2, ..., Vcp = true)

If Vcp == true, an additional tuple (Vcp, Ka, Kb) is returned with the cross-sample entropy estimates (XSamp) and the number of matched state vectors (m: B, m+1: A). (Vcp, Ka, Kb) contains the variance of the conditional probabilities (Vcp), i.e. CP = A/B, and the number of overlapping matching vector pairs of lengths m+1 (Ka) and m (Kb), respectively. Note Vcp is undefined for the zeroth embedding dimension (m = 0) and due to the computational demand, will take substantially more time to return function outputs. See Appendix B in [2] for more info.

XSamp, A, B = XSampEn(Sig1::Union{AbstractMatrix{T}, AbstractVector{T}} where T<:Real, Sig2::Union{AbstractVector{T} where T<:Real, Nothing} = nothing; m::Int=2, tau::Int=1, r::Union{Real,Nothing}=nothing, Logx::Real=exp(1), Vcp::Bool=false)

Returns the cross-sample entropy estimates (XSamp) for dimensions [0,1,...,m] estimated between the data sequences in Sig1 and Sig2 using the specified 'keyword' arguments:

Arguments:

m - Embedding Dimension, a positive integer [default: 2]

tau - Time Delay, a positive integer [default: 1]

r - Radius Distance Threshold, a positive scalar [default: 0.2*SDpooled(Sig1,Sig2)]

Logx - Logarithm base, a positive scalar [default: natural]

Vcp - Option to return the variance of the conditional probabilities and the number of overlapping matching vector pairs of lengths

See also XFuzzEn, XApEn, SampEn, SampEn2D, XMSEn, ApEn

References:

[1] Joshua S Richman and J. Randall Moorman. 
     "Physiological time-series analysis using approximate entropy
     and sample entropy." 
     American Journal of Physiology-Heart and Circulatory Physiology
     278.6 (2000): H2039-H2049.
 [2] Douglas E Lake, Joshua S Richman, M.P. Griffin, J. Randall Moorman
     "Sample entropy analysis of neonatal heart rate variability."
     American Journal of Physiology-Regulatory, Integrative and Comparative Physiology
    283, no. 3 (2002): R789-R797.
source
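To make the two calling patterns concrete, here is a minimal usage sketch. It assumes EntropyHub.jl is installed; the white-noise signals are illustrative stand-ins, not package data:

```julia
using EntropyHub, Random

Random.seed!(1)                      # arbitrary seed, for reproducibility only
Sig1, Sig2 = randn(500), randn(500)  # two synthetic data sequences

# Defaults: m = 2, tau = 1, r = 0.2*SDpooled(Sig1, Sig2), natural logarithm
XSamp, A, B = XSampEn(Sig1, Sig2)

# Also return the variance of the conditional probabilities (slower)
XSamp, A, B, (Vcp, Ka, Kb) = XSampEn(Sig1, Sig2, m = 3, Vcp = true)
```

XSamp holds estimates for dimensions 0 through m, so `XSamp[end]` gives the value at the largest embedding dimension.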
EntropyHub._XFuzzEn.XFuzzEnFunction
XFuzz, Ps1, Ps2 = XFuzzEn(Sig1, Sig2)

Returns the cross-fuzzy entropy estimates (XFuzz) and the average fuzzy distances (m:Ps1, m+1:Ps2) for m = [1,2] estimated for the data sequences contained in Sig1 and Sig2, using the default parameters: embedding dimension = 2, time delay = 1, fuzzy function (Fx) = 'default', fuzzy function parameters (r) = [0.2, 2], logarithm = natural

XFuzz, Ps1, Ps2 = XFuzzEn(Sig1::Union{AbstractMatrix{T}, AbstractVector{T}} where T<:Real, Sig2::Union{AbstractVector{T} where T<:Real, Nothing} = nothing; m::Int=2, tau::Int=1, r::Union{Real,Tuple{Real,Real}}=(.2,2), Fx::String="default", Logx::Real=exp(1))

Returns the cross-fuzzy entropy estimates (XFuzz) for dimensions = [1,...,m] estimated for the data sequences in Sig1 and Sig2 using the specified 'keyword' arguments:

Arguments:

m - Embedding Dimension, a positive integer [default: 2]

tau - Time Delay, a positive integer [default: 1]

Fx - Fuzzy function name, one of the following: {"sigmoid", "modsampen", "default", "gudermannian", "bell", "triangular", "trapezoidal1", "trapezoidal2", "z_shaped", "gaussian", "constgaussian"}

r - Fuzzy function parameters, a scalar or a 2 element tuple of positive values. The r parameters for each fuzzy function are defined as follows:

      sigmoid:        r(1) = divisor of the exponential argument
                       r(2) = value subtracted from argument (pre-division)
       modsampen:      r(1) = divisor of the exponential argument
                       r(2) = value subtracted from argument (pre-division)
     "Fuzzy Entropy Metrics for the Analysis of Biomedical Signals: 
     Assessment and Comparison"
     IEEE Access
    7 (2019): 104833-104847
source
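As a hedged sketch of the keyword interface (EntropyHub.jl installed; signals are arbitrary synthetic sequences), one might select a non-default fuzzy membership function like so:

```julia
using EntropyHub, Random

Random.seed!(2)
Sig1, Sig2 = randn(500), randn(500)  # synthetic data sequences

# Gaussian fuzzy membership function with parameters r = (0.25, 2)
XFuzz, Ps1, Ps2 = XFuzzEn(Sig1, Sig2, m = 2, tau = 1, r = (0.25, 2), Fx = "gaussian")
```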
EntropyHub._XK2En.XK2EnFunction
XK2, Ci = XK2En(Sig1, Sig2)

Returns the cross-Kolmogorov entropy estimates (XK2) and the correlation integrals (Ci) for m = [1,2] estimated between the data sequences contained in Sig1 and Sig2 using the default parameters: embedding dimension = 2, time delay = 1, distance threshold (r) = 0.2*SDpooled(Sig1, Sig2), logarithm = natural

XK2, Ci = XK2En(Sig1::Union{AbstractMatrix{T}, AbstractVector{T}} where T<:Real, Sig2::Union{AbstractVector{T} where T<:Real, Nothing} = nothing; m::Int=2, tau::Int=1, r::Union{Real,Nothing}=nothing, Logx::Real=exp(1))

Returns the cross-Kolmogorov entropy estimates (XK2) estimated between the data sequences contained in Sig1 and Sig2 using the specified 'keyword' arguments:

Arguments:

m - Embedding Dimension, a positive integer [default: 2]

tau - Time Delay, a positive integer [default: 1]

r - Radius Distance Threshold, a positive scalar [default: 0.2*SDpooled(Sig1,Sig2)]

Logx - Logarithm base, a positive scalar [default: natural]

See also XSampEn, XFuzzEn, XApEn, K2En, XMSEn, XDistEn

References:

[1]  Matthew W. Flood,
      "XK2En - EntropyHub Project"
     (2021) https://github.com/MattWillFlood/EntropyHub
source
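A minimal usage sketch, assuming EntropyHub.jl is installed and using synthetic signals in place of real data:

```julia
using EntropyHub, Random

Random.seed!(3)
Sig1, Sig2 = randn(500), randn(500)

# Embedding dimension 3, time delay 2; r falls back to 0.2*SDpooled(Sig1, Sig2)
XK2, Ci = XK2En(Sig1, Sig2, m = 3, tau = 2)
```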
EntropyHub._XPermEn.XPermEnFunction
XPerm = XPermEn(Sig1, Sig2)

Returns the cross-permutation entropy estimates (XPerm) estimated between the data sequences contained in Sig1 and Sig2 using the default parameters: embedding dimension = 3, time delay = 1, logarithm = base 2.

XPerm = XPermEn(Sig1::Union{AbstractMatrix{T}, AbstractVector{T}} where T<:Real, Sig2::Union{AbstractVector{T} where T<:Real, Nothing} = nothing; m::Int=3, tau::Int=1, Logx::Real=exp(1))

Returns the cross-permutation entropy estimates (XPerm) estimated between the data sequences contained in Sig1 and Sig2 using the specified 'keyword' arguments:

Arguments:

m - Embedding Dimension, an integer > 2 [default: 3]

    **Note: XPerm is undefined for embedding dimensions < 3.**

tau - Time Delay, a positive integer [default: 1]

Logx - Logarithm base, a positive scalar [default: 2] ** enter 0 for natural log.**

See also PermEn, XApEn, XSampEn, XFuzzEn, XMSEn

References:

[1] Wenbin Shi, Pengjian Shang, and Aijing Lin,
     "The coupling analysis of stock market indices based on 
     cross-permutation entropy."
     Nonlinear Dynamics
    79.4 (2015): 2439-2447.
source
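Since XPerm is undefined for embedding dimensions below 3, a sketch (EntropyHub.jl installed, synthetic signals) must pass m > 2:

```julia
using EntropyHub, Random

Random.seed!(4)
Sig1, Sig2 = randn(500), randn(500)

# m must be > 2 for XPermEn; the base-2 logarithm is the documented default
XPerm = XPermEn(Sig1, Sig2, m = 4)
```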
EntropyHub._XCondEn.XCondEnFunction
XCond, SEw, SEz = XCondEn(Sig1, Sig2)

Returns the corrected cross-conditional entropy estimates (XCond) and the corresponding Shannon entropies (m: SEw, m+1: SEz) for m = [1,2] estimated for the data sequences contained in Sig1 and Sig2 using the default parameters: embedding dimension = 2, time delay = 1, number of symbols = 6, logarithm = natural. **Note: XCondEn is direction-dependent. Therefore, the order of the data sequences Sig1 and Sig2 matters. If Sig1 is the sequence 'y' and Sig2 is the second sequence 'u', XCond is the amount of information carried by y(i) when the pattern u(i) is found.**

XCond, SEw, SEz = XCondEn(Sig1::Union{AbstractMatrix{T}, AbstractVector{T}} where T<:Real, Sig2::Union{AbstractVector{T} where T<:Real, Nothing} = nothing; m::Int=2, tau::Int=1, c::Int=6, Logx::Real=exp(1), Norm::Bool=false)

Returns the corrected cross-conditional entropy estimates (XCond) for the data sequences contained in Sig1 and Sig2 using the specified 'keyword' arguments:

Arguments:

m - Embedding Dimension, an integer > 1 [default: 2]

tau - Time Delay, a positive integer [default: 1]

c - Number of symbols, an integer > 1 [default: 6]

Logx - Logarithm base, a positive scalar [default: natural]

Norm - Normalisation of XCond values: [false] no normalisation [default]

        [true]   normalises w.r.t cross-Shannon entropy.

See also XFuzzEn, XSampEn, XApEn, XPermEn, CondEn, XMSEn

References:

[1] Alberto Porta, et al.,
     "Conditional entropy approach for the evaluation of the 
     coupling strength." 
     Biological cybernetics 
    81.2 (1999): 119-129.
source
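Because XCondEn is direction-dependent, argument order matters in any call. A minimal sketch (EntropyHub.jl installed, synthetic signals):

```julia
using EntropyHub, Random

Random.seed!(5)
y, u = randn(500), randn(500)  # order matters: XCondEn is direction-dependent

# 6-symbol quantisation, normalised w.r.t. the cross-Shannon entropy
XCond, SEw, SEz = XCondEn(y, u, c = 6, Norm = true)
```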
EntropyHub._XDistEn.XDistEnFunction
XDist, Ppi = XDistEn(Sig1, Sig2)

Returns the cross-distribution entropy estimate (XDist) and the corresponding distribution probabilities (Ppi) estimated between the data sequences contained in Sig1 and Sig2 using the default parameters: embedding dimension = 2, time delay = 1, binning method = 'Sturges', logarithm = base 2, normalisation = w.r.t # of histogram bins

XDist, Ppi = XDistEn(Sig1::Union{AbstractMatrix{T}, AbstractVector{T}} where T<:Real, Sig2::Union{AbstractVector{T} where T<:Real, Nothing} = nothing; m::Int=2, tau::Int=1, Bins::Union{Int,String}="Sturges", Logx::Real=2, Norm::Bool=true)

Returns the cross-distribution entropy estimate (XDist) estimated between the data sequences contained in Sig1 and Sig2 using the specified 'keyword' arguments:

Arguments:

m - Embedding Dimension, a positive integer [default: 2]

tau - Time Delay, a positive integer [default: 1]

Bins - Histogram bin selection method for distance distribution, an integer > 1 indicating the number of bins, or one of the following strings {'sturges','sqrt','rice','doanes'} [default: 'sturges']

Logx - Logarithm base, a positive scalar [default: 2] ** enter 0 for natural log**

Norm - Normalisation of DistEn value: [false] no normalisation. [true] normalises w.r.t # of histogram bins [default]

See also XSampEn, XApEn, XPermEn, XCondEn, DistEn, DistEn2D, XMSEn

References:

[1] Yuanyuan Wang and Pengjian Shang,
     "Analysis of financial stock markets through the multiscale
     cross-distribution entropy based on the Tsallis entropy."
     Nonlinear Dynamics 
    94.2 (2018): 1361-1376.
source
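The Bins keyword accepts either a rule name or an explicit count; a hedged sketch with synthetic signals (EntropyHub.jl assumed installed):

```julia
using EntropyHub, Random

Random.seed!(6)
Sig1, Sig2 = randn(500), randn(500)

# Explicit bin count instead of the default 'sturges' rule
XDist, Ppi = XDistEn(Sig1, Sig2, m = 2, Bins = 50)
```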
EntropyHub._XSpecEn.XSpecEnFunction
XSpec, BandEn = XSpecEn(Sig1, Sig2)

Returns the cross-spectral entropy estimate (XSpec) of the full cross-spectrum and the within-band entropy (BandEn) estimated between the data sequences contained in Sig1 and Sig2 using the default parameters: N-point FFT = 2 * max(length(Sig1), length(Sig2)) + 1, normalised band edge frequencies = [0 1], logarithm = base 2, normalisation = w.r.t # of spectrum/band frequency values.

XSpec, BandEn = XSpecEn(Sig1::Union{AbstractMatrix{T}, AbstractVector{T}} where T<:Real, Sig2::Union{AbstractVector{T} where T<:Real, Nothing} = nothing;  N::Union{Nothing,Int}=nothing, Freqs::Tuple{Real,Real}=(0,1), Logx::Real=exp(1), Norm::Bool=true)

Returns the cross-spectral entropy (XSpec) and the within-band entropy (BandEn) estimate between the data sequences contained in Sig1 and Sig2 using the following specified 'keyword' arguments:

Arguments:

N - Resolution of spectrum (N-point FFT), an integer > 1

Freqs - Normalised band edge frequencies, a 2-element tuple with values in the range [0 1], where 1 corresponds to the Nyquist frequency (Fs/2). Note: when no band frequencies are entered, BandEn == SpecEn

Logx - Logarithm base, a positive scalar [default: base 2] ** enter 0 for natural log**

Norm - Normalisation of XSpec value: [false] no normalisation. [true] normalises w.r.t # of spectrum/band frequency values [default]

For more info, see the EntropyHub guide

See also SpecEn, fft, XDistEn, periodogram, XSampEn, XApEn

References:

[1]  Matthew W. Flood,
     "XSpecEn - EntropyHub Project"
    (2021) https://github.com/MattWillFlood/EntropyHub
source
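A minimal sketch restricting the within-band entropy to a sub-band (EntropyHub.jl assumed installed; signals are synthetic):

```julia
using EntropyHub, Random

Random.seed!(7)
Sig1, Sig2 = randn(500), randn(500)

# Restrict the within-band entropy to normalised frequencies 0.1 to 0.4
XSpec, BandEn = XSpecEn(Sig1, Sig2, Freqs = (0.1, 0.4))
```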
Multiscale Cross-Entropies · EntropyHub.jl

Multiscale Cross-Entropies

Functions for estimating the multiscale entropy between two univariate time series.

Just as one can calculate multiscale entropy using any Base entropy, the same functionality is possible with multiscale cross-entropy using any Cross-entropy function: (XApEn, XSampEn, XK2En, XCondEn, XPermEn, XSpecEn, XDistEn, XFuzzEn).

To do so, we again use the MSobject function to pass a multiscale object (Mobj) to the multiscale cross-entropy functions.

NOTE:

Multiscale cross-entropy functions have three positional arguments:

  1. the first data sequence, Sig1 (an Nx1 matrix),
  2. the second data sequence, Sig2 (an Nx1 matrix),
  3. the multiscale entropy object, Mobj.

EntropyHub.MSobject

The following functions use the multiscale entropy object shown above.

EntropyHub._XMSEn.XMSEnFunction
MSx, CI = XMSEn(Sig1, Sig2, Mobj)

Returns a vector of multiscale cross-entropy values MSx and the complexity index CI between the data sequences contained in Sig1 and Sig2 using the parameters specified by the multiscale object Mobj over 3 temporal scales with the default coarse-graining method.

MSx, CI = XMSEn(Sig1::AbstractVector{T} where T<:Real, Sig2::AbstractVector{T} where T<:Real, Mobj::NamedTuple; 
                  Scales::Int=3, Methodx::String="coarse", RadNew::Int=0, Plotx::Bool=false)

Returns a vector of multiscale cross-entropy values MSx and the complexity index CI of the data sequences contained in Sig1 and Sig2 using the parameters specified by the multiscale object Mobj and the following 'keyword' arguments:

Arguments:

Scales - Number of temporal scales, an integer > 1 (default: 3)

Methodx - Graining method, one of the following:

         {`"coarse", "modified", "imf", "timeshift","generalized"`}  [default: 'coarse'] 
          For further info on graining procedures, see the Entropyhub guide.

RadNew - Radius rescaling method, an integer in the range [1 4]. When the entropy specified by Mobj is XSampEn or XApEn, RadNew allows the radius threshold to be updated at each time scale (Xt). If a radius value is specified by Mobj (r), this becomes the rescaling coefficient, otherwise it is set to 0.2 (default). The value of RadNew specifies one of the following methods:

         [1]    Pooled Standard Deviation          - r*std(Xt) 
 
 [6] Antoine Jamin and Anne Humeau-Heurtier. 
     "(Multiscale) Cross-Entropy Methods: A Review." 
     Entropy 
    22.1 (2020): 45.
source
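Putting MSobject and XMSEn together, a minimal sketch (EntropyHub.jl assumed installed; synthetic signals and the r value are illustrative choices):

```julia
using EntropyHub, Random

Random.seed!(8)
Sig1, Sig2 = randn(1000), randn(1000)

# Build a multiscale object around XSampEn, then coarse-grain over 5 scales
Mobj = MSobject(XSampEn, m = 2, r = 0.25)
MSx, CI = XMSEn(Sig1, Sig2, Mobj, Scales = 5, Methodx = "coarse")
```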
EntropyHub._cXMSEn.cXMSEnFunction
MSx, CI = cXMSEn(Sig1, Sig2, Mobj)

Returns a vector of composite multiscale cross-entropy values (MSx) between two univariate data sequences contained in Sig1 and Sig2 using the parameters specified by the multiscale object (Mobj) using the composite multiscale method (cMSE) over 3 temporal scales.

MSx, CI = cXMSEn(Sig1::AbstractVector{T} where T<:Real, Sig2::AbstractVector{T} where T<:Real, Mobj::NamedTuple; 
                       Scales::Int=3, RadNew::Int=0, Refined::Bool=false, Plotx::Bool=false)

Returns a vector of composite multiscale cross-entropy values (MSx) between the data sequences contained in Sig1 and Sig2 using the parameters specified by the multiscale object (Mobj) and the following keyword arguments:

Arguments:

Scales - Number of temporal scales, an integer > 1 (default: 3)

RadNew - Radius rescaling method, an integer in the range [1 4]. When the entropy specified by Mobj is XSampEn or XApEn, RadNew rescales the radius threshold of the sub-sequences at each time scale (Ykj). If a radius value is specified by Mobj (r), this becomes the rescaling coefficient, otherwise it is set to 0.2 (default). The value of RadNew specifies one of the following methods:

         [1]    Pooled Standard Deviation          - r*std(Ykj)
 
          [2]    Pooled Variance                    - r*var(Ykj)
 [6] Shuen-De Wu, et al.,
     "Time series analysis using composite multiscale entropy." 
     Entropy 
    15.3 (2013): 1069-1084.
source
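A hedged sketch of the refined-composite variant (EntropyHub.jl assumed installed; signals are synthetic):

```julia
using EntropyHub, Random

Random.seed!(9)
Sig1, Sig2 = randn(1000), randn(1000)

Mobj = MSobject(XSampEn)
# Refined-composite multiscale cross-entropy over 4 temporal scales
MSx, CI = cXMSEn(Sig1, Sig2, Mobj, Scales = 4, Refined = true)
```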
EntropyHub._rXMSEn.rXMSEnFunction
MSx, CI = rXMSEn(Sig1, Sig2, Mobj)

Returns a vector of refined multiscale cross-entropy values (MSx) and the complexity index (CI) between the data sequences contained in Sig1 and Sig2 using the parameters specified by the multiscale object (Mobj) and the following default parameters: Scales = 3, Butterworth LPF Order = 6, Butterworth LPF cutoff frequency at scale (T): Fc = 0.5/T. If the entropy function specified by Mobj is XSampEn or XApEn, rXMSEn updates the threshold radius of the data sequences (Xt) at each scale to 0.2*SDpooled(Xa, Xb) when no r value is provided by Mobj, or r*SDpooled(Xa, Xb) if r is specified.

MSx, CI = rXMSEn(Sig1::AbstractVector{T} where T<:Real, Sig2::AbstractVector{T} where T<:Real, Mobj::NamedTuple;
                  Scales::Int=3, F_Order::Int=6, F_Num::Float64=0.5, RadNew::Int=0, Plotx::Bool=false)

Returns a vector of refined multiscale cross-entropy values (MSx) and the complexity index (CI) between the data sequences contained in Sig1 and Sig2 using the parameters specified by the multiscale object (Mobj) and the following keyword arguments:

Arguments:

Scales - Number of temporal scales, an integer > 1 (default: 3)

F_Order - Butterworth low-pass filter order, a positive integer (default: 6)

F_Num - Numerator of Butterworth low-pass filter cutoff frequency, a scalar value in range [0 < F_Num < 1]. The cutoff frequency at each scale (T) becomes: Fc = F_Num/T. (default: 0.5)

RadNew - Radius rescaling method, an integer in the range [1 4]. When the entropy specified by Mobj is XSampEn or XApEn, RadNew allows the radius threshold to be updated at each time scale (Xt). If a radius value is specified by Mobj (r), this becomes the rescaling coefficient, otherwise it is set to 0.2 (default). The value of RadNew specifies one of the following methods:

         [1]    Pooled Standard Deviation          - r*std(Xt) 
 
          [2]    Pooled Variance                    - r*var(Xt) 
 [7] Antoine Jamin and Anne Humeau-Heurtier. 
     "(Multiscale) Cross-Entropy Methods: A Review." 
     Entropy 
    22.1 (2020): 45.
source
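The filter keywords map directly onto the defaults described above; a minimal sketch (EntropyHub.jl assumed installed, synthetic signals):

```julia
using EntropyHub, Random

Random.seed!(10)
Sig1, Sig2 = randn(1000), randn(1000)

Mobj = MSobject(XSampEn)
# 6th-order Butterworth LPF; the cutoff at each scale T is Fc = 0.5/T
MSx, CI = rXMSEn(Sig1, Sig2, Mobj, Scales = 4, F_Order = 6, F_Num = 0.5)
```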
EntropyHub._hXMSEn.hXMSEnFunction
MSx, Sn, CI = hXMSEn(Sig1, Sig2, Mobj)

Returns a vector of cross-entropy values (MSx) calculated at each node in the hierarchical tree, the average cross-entropy value across all nodes at each scale (Sn), and the complexity index (CI) of the hierarchical tree (i.e. sum(Sn)) between the data sequences contained in Sig1 and Sig2 using the parameters specified by the multiscale object (Mobj) over 3 temporal scales (default). The entropy values in MSx are ordered from the root node (S.00) to the Nth subnode at scale T (S.TN): i.e. S.00, S.10, S.11, S.20, S.21, S.22, S.23, S.30, S.31, S.32, S.33, S.34, S.35, S.36, S.37, S.40, ... , S.TN. The average cross-entropy values in Sn are ordered in the same way, with the value of the root node given first: i.e. S0, S1, S2, ..., ST

MSx, Sn, CI = hXMSEn(Sig1::AbstractVector{T} where T<:Real, Sig2::AbstractVector{T} where T<:Real, Mobj::NamedTuple; 
                          Scales::Int=3, RadNew::Int=0, Plotx::Bool=false)

Returns a vector of cross-entropy values (MSx) calculated at each node in the hierarchical tree, the average cross-entropy value across all nodes at each scale (Sn), and the complexity index (CI) of the entire hierarchical tree between the data sequences contained in Sig1 and Sig2 using the following name/value pair arguments:

Arguments:

Scales - Number of temporal scales, an integer > 1 (default: 3) At each scale (T), entropy is estimated for 2^(T-1) nodes.

RadNew - Radius rescaling method, an integer in the range [1 4]. When the entropy specified by Mobj is XSampEn or XApEn, RadNew allows the radius threshold to be updated at each node in the tree. If a radius value is specified by Mobj (r), this becomes the rescaling coefficient, otherwise it is set to 0.2 (default). The value of RadNew specifies one of the following methods:

         [1]    Pooled Standard Deviation          - r*std(Xt) 
 
          [2]    Pooled Variance                    - r*var(Xt) 
 [3] Ying Jiang, C-K. Peng and Yuesheng Xu,
     "Hierarchical entropy analysis for biological signals."
     Journal of Computational and Applied Mathematics
    236.5 (2011): 728-742.
source
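A minimal sketch of the hierarchical variant (EntropyHub.jl assumed installed; a dyadic signal length is chosen here since each scale T splits the tree into 2^(T-1) nodes):

```julia
using EntropyHub, Random

Random.seed!(11)
Sig1, Sig2 = randn(512), randn(512)  # dyadic length suits the hierarchical tree

Mobj = MSobject(XSampEn)
# MSx holds node-wise values (root first), Sn the per-scale averages, CI = sum(Sn)
MSx, Sn, CI = hXMSEn(Sig1, Sig2, Mobj, Scales = 3)
```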
+Multiscale Entropies · EntropyHub.jl

Multiscale Entropies

Functions for estimating the multiscale entropy of a univariate time series.

Multiscale entropy can be calculated using any of the Base entropies: (ApEn, AttnEn, BubbEn, CondEn, CoSiEn, DistEn, DivEn, DispEn, EnofEn, FuzzEn, GridEn, IncrEn, K2En, PermEn, PhasEn, RangEn, SampEn, SlopEn, SpecEn, SyDyEn).

NOTE:

Multiscale entropy functions have two positional arguments:

  1. the data sequence, Sig (a vector > 10 elements),
  2. the multiscale entropy object, Mobj.
EntropyHub._MSobject.MSobjectFunction
Mobj = MSobject()

Returns a multiscale entropy object (Mobj) based on that originally proposed by Costa et al. (2002), using the following default parameters: EnType = SampEn(), embedding dimension = 2, time delay = 1, radius = 0.2*SD(Sig), logarithm = natural

Mobj = MSobject(EnType::Function)

Returns a multiscale entropy object by passing the entropy function (EnType) and specifying the default parameters for that entropy function. To see the default parameters for a particular entropy method, type: ? EntropyHub.EnType

(e.g. ? EntropyHub.SampEn)

Mobj = MSobject(EnType::Function; kwargs...)

Returns a multiscale entropy object using the specified entropy method (EnType) and the 'keyword' parameters for that particular method. To see the default parameters for a particular entropy method, type: ? EntropyHub.EnType (e.g. ? EntropyHub.SampEn)
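Conceptually, the multiscale entropy object just bundles an entropy estimator with the keyword parameters to apply at every time scale. A minimal sketch of that idea in Python (illustrative only, not the EntropyHub API; `ms_object`, `apply_mobj`, and `toy_entropy` are hypothetical names):

```python
import math

def ms_object(en_func, **kwargs):
    """Bundle an entropy estimator with the keyword parameters
    that should be applied to it at every time scale."""
    return {"func": en_func, "kwargs": kwargs}

def apply_mobj(mobj, xt):
    """Evaluate the stored estimator on one grained series xt."""
    return mobj["func"](xt, **mobj["kwargs"])

# A stand-in estimator: negative log of the fraction of points
# within radius r of the series mean (NOT a real entropy measure).
def toy_entropy(x, r=0.2):
    mu = sum(x) / len(x)
    frac = sum(1 for v in x if abs(v - mu) <= r) / len(x)
    return -math.log(frac) if frac > 0 else float("inf")

mobj = ms_object(toy_entropy, r=0.5)
```

The multiscale functions below can then call `apply_mobj` on each grained series, which mirrors how `Mobj` is consumed by `MSEn` and its variants.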

EnType can be any of the following entropy functions:

Base Entropies:

-----------------
 `ApEn`      - Approximate Entropy  
 
 `SampEn`    - Sample Entropy   
@@ -53,7 +53,7 @@
 
 `XDistEn`   - Cross-Distribution Entropy    
 
-`XSpecEn`   - Cross-Spectral Entropy

+`XSpecEn`   - Cross-Spectral Entropy

See also MSEn, XMSEn, rMSEn, cMSEn, hMSEn, rXMSEn, cXMSEn, hXMSEn

source

The following functions use the multiscale entropy object shown above.

EntropyHub._MSEn.MSEnFunction
 MSx, CI = MSEn(Sig, Mobj)

Returns a vector of multiscale entropy values MSx and the complexity index CI of the data sequence Sig using the parameters specified by the multiscale object Mobj over 3 temporal scales with coarse-graining (default).

 MSx, CI = MSEn(Sig::AbstractArray{T,1} where T<:Real, Mobj::NamedTuple; Scales::Int=3, 
                       Methodx::String="coarse", RadNew::Int=0, Plotx::Bool=false)

Returns a vector of multiscale entropy values MSx and the complexity index CI of the data sequence Sig using the parameters specified by the multiscale object Mobj and the following 'keyword' arguments:

Arguments:

Scales - Number of temporal scales, an integer > 1 (default: 3)

Methodx - Graining method, one of the following: {coarse, modified, imf, timeshift, generalized} [default = coarse]. For further info on these graining procedures, see the EntropyHub guide.
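Three of the graining procedures named above can be sketched as follows (illustrative Python, not EntropyHub's implementation; the imf and timeshift procedures require an empirical-mode-decomposition step and index interleaving, respectively, and are omitted):

```python
import statistics

def coarse_grain(x, tau):
    """Costa-style coarse-graining: mean of consecutive,
    non-overlapping windows of length tau."""
    return [sum(x[i*tau:(i+1)*tau]) / tau for i in range(len(x) // tau)]

def modified_grain(x, tau):
    """Modified (moving-average) graining: overlapping windows, step 1."""
    return [sum(x[i:i+tau]) / tau for i in range(len(x) - tau + 1)]

def generalized_grain(x, tau):
    """Generalized graining: window variance instead of window mean."""
    return [statistics.pvariance(x[i*tau:(i+1)*tau])
            for i in range(len(x) // tau)]
```

For example, `coarse_grain([1, 2, 3, 4, 5, 6], 2)` yields `[1.5, 3.5, 5.5]`; note that modified graining preserves far more samples per scale than coarse-graining, which is why it is preferred for short records.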

RadNew - Radius rescaling method, an integer in the range [1 4]. When the entropy specified by Mobj is SampEn or ApEn, RadNew allows the radius threshold to be updated at each time scale (Xt). If a radius value is specified by Mobj (r), this becomes the rescaling coefficient, otherwise it is set to 0.2 (default). The value of RadNew specifies one of the following methods:

          [1]    Standard Deviation          - r*std(Xt)
 
           [2]    Variance                    - r*var(Xt) 
@@ -123,14 +123,14 @@
  [12] Madalena Costa and Ary L. Goldberger,
          "Generalized multiscale entropy analysis: Application to quantifying 
          the complex volatility of human heartbeat time series." 
-         Entropy 17.3 (2015): 1197-1203.
source
+         Entropy 17.3 (2015): 1197-1203.
source
EntropyHub._cMSEn.cMSEnFunction
MSx, CI = cMSEn(Sig, Mobj)

Returns a vector of composite multiscale entropy values (MSx) for the data sequence (Sig), computed with the composite multiscale entropy method over 3 temporal scales using the parameters specified by the multiscale object (Mobj).

MSx, CI = cMSEn(Sig::AbstractArray{T,1} where T<:Real, Mobj::NamedTuple;  
                     Scales::Int=3, RadNew::Int=0, Refined::Bool=false, Plotx::Bool=false)

Returns a vector of composite multiscale entropy values (MSx) of the data sequence (Sig) using the parameters specified by the multiscale object (Mobj) and the following 'keyword' arguments:

Arguments:

Scales - Number of temporal scales, an integer > 1 (default: 3)

RadNew - Radius rescaling method, an integer in the range [1 4]. When the entropy specified by Mobj is SampEn or ApEn, RadNew allows the radius threshold to be updated at each time scale (Xt). If a radius value is specified by Mobj (r), this becomes the rescaling coefficient, otherwise it is set to 0.2 (default). The value of RadNew specifies one of the following methods:

         [1]    Standard Deviation          - r*std(Xt)
 
          [2]    Variance                    - r*var(Xt) 
 
          [3]    Mean Absolute Deviation     - r*mean_ad(Xt) 
 
-         [4]    Median Absolute Deviation   - r*med_ad(Xt)

Refined - Refined-composite MSEn method. When Refined == true and the entropy function specified by Mobj is SampEn, cMSEn returns the refined-composite multiscale entropy (rcMSEn) [default: false]

Plotx - When Plotx == true, returns a plot of the entropy value at each time scale (i.e. the multiscale entropy curve) [default: false]

See also MSobject, rMSEn, MSEn, hMSEn, SampEn, ApEn, XMSEn

References:

[1] Madalena Costa, Ary Goldberger, and C-K. Peng,
+         [4]    Median Absolute Deviation   - r*med_ad(Xt)
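The four RadNew rescaling options can be sketched like this (illustrative Python; whether the package uses population or sample deviations is an assumption here, as is the function name `rescale_radius`):

```python
import statistics

def rescale_radius(xt, r=0.2, method=1):
    """Tolerance for SampEn/ApEn on the grained series xt,
    following the four RadNew options listed above."""
    if method == 1:                      # r * standard deviation
        return r * statistics.pstdev(xt)
    if method == 2:                      # r * variance
        return r * statistics.pvariance(xt)
    if method == 3:                      # r * mean absolute deviation
        mu = statistics.fmean(xt)
        return r * sum(abs(v - mu) for v in xt) / len(xt)
    if method == 4:                      # r * median absolute deviation
        med = statistics.median(xt)
        return r * statistics.median([abs(v - med) for v in xt])
    raise ValueError("method must be an integer in [1, 4]")
```

Recomputing the tolerance from the grained series at each scale (rather than fixing it from the original signal) is what RadNew controls; see Castiglioni et al. (reference [9]) for why this choice changes the interpretation of the curve.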

Refined - Refined-composite MSEn method. When Refined == true and the entropy function specified by Mobj is SampEn or FuzzEn, cMSEn returns the refined-composite multiscale entropy (rcMSEn) [default: false]

Plotx - When Plotx == true, returns a plot of the entropy value at each time scale (i.e. the multiscale entropy curve) [default: false]
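The difference between the composite and refined-composite procedures can be sketched as follows: composite averages the entropy over the tau graining offsets, while refined-composite pools the template-match counts across offsets before taking a single logarithm (illustrative Python with a minimal SampEn count routine, not EntropyHub's implementation):

```python
import math

def sampen_counts(x, m=2, r=0.2):
    """Counts (A, B) of matching templates of length m+1 and m,
    using the Chebyshev distance and excluding self-matches."""
    def count(k):
        n, c = len(x) - k + 1, 0
        for i in range(n):
            for j in range(i + 1, n):
                if max(abs(x[i+t] - x[j+t]) for t in range(k)) <= r:
                    c += 1
        return c
    return count(m + 1), count(m)

def composite_msen(x, tau, m=2, r=0.2, refined=False):
    """Composite MSE at one scale: entropy averaged over the tau
    coarse-graining offsets; refined=True pools counts instead."""
    grains = [[sum(x[s+i*tau : s+(i+1)*tau]) / tau
               for i in range((len(x) - s) // tau)] for s in range(tau)]
    counts = [sampen_counts(g, m, r) for g in grains]
    if refined:  # refined-composite: one log of the pooled count ratio
        A, B = sum(a for a, _ in counts), sum(b for _, b in counts)
        return -math.log(A / B)
    return sum(-math.log(a / b) for a, b in counts) / tau
```

Pooling the counts is what makes the refined-composite estimate defined even when some individual offsets produce zero matches, which is the main practical advantage reported by Wu et al. (reference [5]).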

See also MSobject, rMSEn, MSEn, hMSEn, SampEn, ApEn, XMSEn

References:

[1] Madalena Costa, Ary Goldberger, and C-K. Peng,
     "Multiscale entropy analysis of complex physiologic time series."
     Physical review letters
     89.6 (2002): 068102.
@@ -155,7 +155,12 @@
     "Analysis of complex time series using refined composite 
     multiscale entropy." 
     Physics Letters A 
-    378.20 (2014): 1369-1374.
source
+    378.20 (2014): 1369-1374.
+
+[6] Azami, Hamed, Alberto Fernández, and Javier Escudero.
+    "Refined multiscale fuzzy entropy based on standard deviation
+    for biomedical signal analysis." 
+    Medical & biological engineering & computing 55 (2017): 2037-2052.
source
EntropyHub._rMSEn.rMSEnFunction
MSx, CI = rMSEn(Sig, Mobj)

Returns a vector of refined multiscale entropy values (MSx) and the complexity index (CI) of the data sequence (Sig) using the parameters specified by the multiscale object (Mobj) and the following default parameters: Scales = 3, Butterworth LPF Order = 6, Butterworth LPF cutoff frequency at scale (T): Fc = 0.5/T. If the entropy function specified by Mobj is SampEn or ApEn, rMSEn updates the threshold radius of the data sequence (Xt) at each scale to 0.2*std(Xt) if no r value is provided by Mobj, or r*std(Xt) if r is specified.

MSx, CI = rMSEn(Sig::AbstractArray{T,1} where T<:Real, Mobj::NamedTuple; Scales::Int=3, 
                     F_Order::Int=6, F_Num::Float64=0.5, RadNew::Int=0, Plotx::Bool=false)

Returns a vector of refined multiscale entropy values (MSx) and the complexity index (CI) of the data sequence (Sig) using the parameters specified by the multiscale object (Mobj) and the following 'keyword' arguments:

Arguments:

Scales - Number of temporal scales, an integer > 1 (default = 3)

F_Order - Butterworth low-pass filter order, a positive integer (default: 6)

F_Num - Numerator of Butterworth low-pass filter cutoff frequency, a scalar value in range [0 < F_Num < 1]. The cutoff frequency at each scale (T) becomes: Fc = F_Num/T. (default: 0.5)
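The refined graining step itself (low-pass filter, then decimate) can be sketched as below; a simple length-tau moving average stands in here for the 6th-order Butterworth filter that rMSEn actually uses, so this is an illustrative approximation only:

```python
def refined_grain(x, tau):
    """Refined graining sketch: low-pass filter the series, then keep
    every tau-th sample. A length-tau moving average approximates a
    low-pass cutoff near 0.5/tau; rMSEn uses a Butterworth filter."""
    half = tau // 2
    padded = [x[0]] * half + list(x) + [x[-1]] * (tau - 1 - half)
    smoothed = [sum(padded[i:i+tau]) / tau for i in range(len(x))]
    return smoothed[::tau]  # decimation after filtering avoids aliasing
```

Unlike plain coarse-graining, filtering before decimation suppresses frequency content above the new Nyquist rate, which is the refinement introduced by Valencia et al. (reference [4]).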

RadNew - Radius rescaling method, an integer in the range [1 4]. When the entropy specified by Mobj is SampEn or ApEn, RadNew allows the radius threshold to be updated at each time scale (Xt). If a radius value is specified by Mobj (r), this becomes the rescaling coefficient, otherwise it is set to 0.2 (default). The value of RadNew specifies one of the following methods:

         [1]    Standard Deviation          - r*std(Xt)
 
          [2]    Variance                    - r*var(Xt) 
@@ -189,7 +194,7 @@
     "Optimal selection of threshold value ‘r’for refined multiscale
     entropy." 
     Cardiovascular engineering and technology 
-    6.4 (2015): 557-576.
source
+    6.4 (2015): 557-576.
source
EntropyHub._hMSEn.hMSEnFunction
MSx, Sn, CI = hMSEn(Sig, Mobj)

Returns a vector of entropy values (MSx) calculated at each node in the hierarchical tree, the average entropy value across all nodes at each scale (Sn), and the complexity index (CI) of the hierarchical tree (i.e. sum(Sn)) for the data sequence (Sig) using the parameters specified by the multiscale object (Mobj) over 3 temporal scales (default). The entropy values in MSx are ordered from the root node (S.00) to the Nth subnode at scale T (S.TN): i.e. S.00, S.10, S.11, S.20, S.21, S.22, S.23, S.30, S.31, S.32, S.33, S.34, S.35, S.36, S.37, S.40, ... , S.TN. The average entropy values in Sn are ordered in the same way, with the value of the root node given first: i.e. S0, S1, S2, ..., ST

MSx, Sn, CI = hMSEn(Sig::AbstractArray{T,1} where T<:Real, Mobj::NamedTuple; 
                         Scales::Int=3, RadNew::Int=0, Plotx::Bool=false)

Returns a vector of entropy values (MSx) calculated at each node in the hierarchical tree, the average entropy value across all nodes at each scale (Sn), and the complexity index (CI) of the entire hierarchical tree for the data sequence (Sig) using the following 'keyword' arguments:

Arguments:

Scales - Number of temporal scales, an integer > 1 (default: 3). At each scale (T), entropy is estimated for 2^(T-1) nodes.
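The hierarchical decomposition behind hMSEn can be sketched as follows: each node splits into a low-frequency child (pairwise averages) and a high-frequency child (pairwise half-differences), following Jiang et al. (illustrative Python, not EntropyHub's implementation; the exact normalization may differ):

```python
def hierarchical_tree(x, scales=3):
    """Return the levels of the hierarchical decomposition:
    levels[0] holds the root node, and each subsequent level
    doubles the node count by splitting every node in two."""
    def lo(v):  # low-frequency child: pairwise averages
        return [(v[2*i] + v[2*i+1]) / 2 for i in range(len(v) // 2)]
    def hi(v):  # high-frequency child: pairwise half-differences
        return [(v[2*i] - v[2*i+1]) / 2 for i in range(len(v) // 2)]
    levels = [[list(x)]]                       # root node, S.00
    for _ in range(1, scales):
        levels.append([f(v) for v in levels[-1] for f in (lo, hi)])
    return levels
```

The entropy function held by Mobj is then evaluated on every node, giving the MSx ordering (S.00, S.10, S.11, S.20, ...) described above; note each level halves the node length, which is why the signal must be long relative to the number of scales.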

RadNew - Radius rescaling method, an integer in the range [1 4]. When the entropy specified by Mobj is SampEn or ApEn, RadNew allows the radius threshold to be updated at each node in the tree. If a radius value is specified by Mobj (r), this becomes the rescaling coefficient, otherwise it is set to 0.2 (default). The value of RadNew specifies one of the following methods:

         [1]    Standard Deviation          - r*std(Xt)
 
          [2]    Variance                    - r*var(Xt)
@@ -199,4 +204,4 @@
          [4]    Median Absolute Deviation   - r*med_ad(Xt,1)

Plotx - When Plotx == true, returns a plot of the average entropy value at each time scale (i.e. the multiscale entropy curve) and a hierarchical graph showing the entropy value of each node in the hierarchical tree decomposition. (default: false)

See also MSobject, MSEn, cMSEn, rMSEn, SampEn, ApEn, XMSEn

References:

[1] Ying Jiang, C-K. Peng and Yuesheng Xu,
     "Hierarchical entropy analysis for biological signals."
     Journal of Computational and Applied Mathematics
-    236.5 (2011): 728-742.
source
+ 236.5 (2011): 728-742.
source
diff --git a/dev/index.html b/dev/index.html index 1b33114..0922e1f 100644 --- a/dev/index.html +++ b/dev/index.html @@ -29,4 +29,4 @@ | |_| || | | || | toolkit for | | / \ | | | _ || | | || \ entropic time- | | \___/ | | | | | || |_| || \ series analysis | \_______/ | - |_| |_|\_____/|_____/ \___________/

+ |_| |_|\_____/|_____/ \___________/

Documentation for EntropyHub.

diff --git a/dev/search_index.js b/dev/search_index.js index ef18724..dda5458 100644 --- a/dev/search_index.js +++ b/dev/search_index.js @@ -1,3 +1,3 @@ var documenterSearchIndex = {"docs": -[{"location":"Examples/Example3/#Example-3:-Phase-Entropy-w/-Second-Order-Difference-Plot","page":"Ex.3: Phase Entropy","title":"Example 3: Phase Entropy w/ Second Order Difference Plot","text":"","category":"section"},{"location":"Examples/Example3/","page":"Ex.3: Phase Entropy","title":"Ex.3: Phase Entropy","text":"Import the x and y components of the Henon system of equations.","category":"page"},{"location":"Examples/Example3/","page":"Ex.3: Phase Entropy","title":"Ex.3: Phase Entropy","text":"Data = ExampleData(\"henon\");\n\nusing Plots\nscatter(Data[:,1], Data[:,2],\nmarkercolor = \"green\", markerstrokecolor = \"black\",\nmarkersize = 3, background_color = \"black\",grid = false)","category":"page"},{"location":"Examples/Example3/","page":"Ex.3: Phase Entropy","title":"Ex.3: Phase Entropy","text":"(Image: Henon)","category":"page"},{"location":"Examples/Example3/","page":"Ex.3: Phase Entropy","title":"Ex.3: Phase Entropy","text":"Calculate the phase entropy of the y-component in bits (logarithm base 2) without normalization using 7 angular partitions and return the second-order difference plot.","category":"page"},{"location":"Examples/Example3/","page":"Ex.3: Phase Entropy","title":"Ex.3: Phase Entropy","text":"using EntropyHub # hide\nData = ExampleData(\"henon\"); # hide\nY = Data[:,2];\nPhas = PhasEn(Y, K = 7, Norm = false, Logx = 2, Plotx = true)","category":"page"},{"location":"Examples/Example3/","page":"Ex.3: Phase Entropy","title":"Ex.3: Phase Entropy","text":"(Image: Phas1)","category":"page"},{"location":"Examples/Example3/","page":"Ex.3: Phase Entropy","title":"Ex.3: Phase Entropy","text":"Calculate the phase entropy of the x-component using 11 angular partitions, a time delay of 2, and return the second-order difference 
plot.","category":"page"},{"location":"Examples/Example3/","page":"Ex.3: Phase Entropy","title":"Ex.3: Phase Entropy","text":"using EntropyHub # hide\nData = ExampleData(\"henon\"); # hide\nX = Data[:,1];\nPhas = PhasEn(X, K = 11, tau = 2, Plotx = true)","category":"page"},{"location":"Examples/Example3/","page":"Ex.3: Phase Entropy","title":"Ex.3: Phase Entropy","text":"(Image: Phas2)","category":"page"},{"location":"Examples/Example2/#Example-2:-(Fine-grained)-Permutation-Entropy","page":"Ex.2: Permutation Entropy","title":"Example 2: (Fine-grained) Permutation Entropy","text":"","category":"section"},{"location":"Examples/Example2/","page":"Ex.2: Permutation Entropy","title":"Ex.2: Permutation Entropy","text":"Import the x, y, and z components of the Lorenz system of equations.","category":"page"},{"location":"Examples/Example2/","page":"Ex.2: Permutation Entropy","title":"Ex.2: Permutation Entropy","text":"Data = ExampleData(\"lorenz\");\n\nusing Plots\nscatter(Data[:,1], Data[:,2], Data[:,3],\nmarkercolor = \"green\", markerstrokecolor = \"black\",\nmarkersize = 3, background_color = \"black\", grid = false)","category":"page"},{"location":"Examples/Example2/","page":"Ex.2: Permutation Entropy","title":"Ex.2: Permutation Entropy","text":"(Image: Lorenz)","category":"page"},{"location":"Examples/Example2/","page":"Ex.2: Permutation Entropy","title":"Ex.2: Permutation Entropy","text":"Calculate fine-grained permutation entropy of the z component in dits (logarithm base 10) with an embedding dimension of 3, time delay of 2, an alpha parameter of 1.234. Return Pnorm normalised w.r.t the number of all possible permutations (m!) 
and the condition permutation entropy (cPE) estimate.","category":"page"},{"location":"Examples/Example2/","page":"Ex.2: Permutation Entropy","title":"Ex.2: Permutation Entropy","text":"using EntropyHub # hide\nData = ExampleData(\"lorenz\"); # hide\nZ = Data[:,3];\nPerm, Pnorm, cPE = PermEn(Z, m = 3, tau = 2, Typex = \"finegrain\", \n tpx = 1.234, Logx = 10, Norm = false)","category":"page"},{"location":"Guide/Multiscale_Entropies/#Multiscale-Entropies","page":"Multiscale Entropies","title":"Multiscale Entropies","text":"","category":"section"},{"location":"Guide/Multiscale_Entropies/","page":"Multiscale Entropies","title":"Multiscale Entropies","text":"Functions for estimating the multiscale entropy between of a univariate time series.","category":"page"},{"location":"Guide/Multiscale_Entropies/","page":"Multiscale Entropies","title":"Multiscale Entropies","text":"Multiscale entropy can be calculated using any of the Base entropies: (ApEn, AttnEn, BubbEn, CondEn, CoSiEn, DistEn, DivEn, DispEn, EnofEn, FuzzEn, GridEn, IncrEn, K2En, PermEn, PhasEn, RangEn, SampEn, SlopEn, SpecEn, SyDyEn).","category":"page"},{"location":"Guide/Multiscale_Entropies/","page":"Multiscale Entropies","title":"Multiscale Entropies","text":"info: NOTE:\nMultiscale cross-entropy functions have two positional arguments:the data sequence, Sig (a vector > 10 elements),\nthe multiscale entropy object, Mobj.","category":"page"},{"location":"Guide/Multiscale_Entropies/","page":"Multiscale Entropies","title":"Multiscale Entropies","text":"EntropyHub.MSobject","category":"page"},{"location":"Guide/Multiscale_Entropies/#EntropyHub._MSobject.MSobject","page":"Multiscale Entropies","title":"EntropyHub._MSobject.MSobject","text":"Mobj = MSobject()\n\nReturns a multiscale entropy object (Mobj) based on that originally proposed by Costa et. al. 
(2002) using the following default parameters: EnType = SampEn(), embedding dimension = 2, time delay = 1, radius = 0.2*SD(Sig), logarithm = natural\n\nMobj = MSobject(EnType::Function)\n\nReturns a multiscale entropy object by passing the entropy function (EnType) and the specifying default parameters for that entropy function. To see the default parameters for a particular entropy method, type: ? EntropyHub.EnType \n\n(e.g. ? EntropyHub.SampEn)\n\nMobj = MSobject(EnType::Function; kwargs...)\n\nReturns a multiscale entropy object using the specified entropy method (EnType) and the 'keyword' parameters for that particular method. To see the default parameters for a particular entropy method, type: ? EntropyHub.EnType (e.g. ? EntropyHub.SampEn)\n\nEnType can be any of the following entropy functions:\n\nBase Entropies:\n\n-----------------\n`ApEn` - Approximate Entropy \n\n`SampEn` - Sample Entropy \n\n`FuzzEn` - Fuzzy Entropy \n\n`K2En` - Kolmogorov Entropy \n\n`PermEn` - Permutation Entropy\t \n\n`CondEn` - Conditional Entropy\t \n\n`DistEn` - Distribution Entropy\t \n\n`DispEn` - Dispersion Entropy\t \n\n`SpecEn` - Spectral Entropy \n\n`SyDyEn` - Symbolic Dynamic Entropy\t \n\n`IncrEn` - Increment Entropy\t \n\n`CoSiEn` - Cosine Similarity Entropy\t\n\n`PhasEn` - Phase Entropy\t \n\n`SlopEn` - Slope Entropy \n\n`BubbEn` - Bubble Entropy \n\n`GridEn` - Gridded Distribution Entropy \n\n`EnofEn` - Entropy of Entropy\t\n\n`AttnEn` - Attention Entropy \n\n`DivEn` - Diversity Entropy \n\n`RangEn` - Range Entropy\n\nCross Entropies:\n\n------------------\n`XApEn` - Cross-Approximate Entropy \n\n`XSampEn` - Cross-Sample Entropy \n\n`XFuzzEn` - Cross-Fuzzy Entropy \n\n`XK2En` - Cross-Kolmogorov Entropy \n\n`XPermEn` - Cross-Permutation Entropy \n\n`XCondEn` - Cross-Conditional Entropy \n\n`XDistEn` - Cross-Distribution Entropy \n\n`XSpecEn` - Cross-Spectral Entropy\n\nSee also MSEn, XMSEn, rMSEn, cMSEn, hMSEn, rXMSEn, cXMSEn, 
hXMSEn\n\n\n\n\n\n","category":"function"},{"location":"Guide/Multiscale_Entropies/","page":"Multiscale Entropies","title":"Multiscale Entropies","text":"The following functions use the multiscale entropy object shown above.","category":"page"},{"location":"Guide/Multiscale_Entropies/","page":"Multiscale Entropies","title":"Multiscale Entropies","text":"EntropyHub.MSEn\nEntropyHub.cMSEn\nEntropyHub.rMSEn\nEntropyHub.hMSEn","category":"page"},{"location":"Guide/Multiscale_Entropies/#EntropyHub._MSEn.MSEn","page":"Multiscale Entropies","title":"EntropyHub._MSEn.MSEn","text":" MSx, CI = MSEn(Sig, Mobj)\n\nReturns a vector of multiscale entropy values MSx and the complexity index CI of the data sequence Sig using the parameters specified by the multiscale object Mobj over 3 temporal scales with coarse- graining (default). \n\n MSx, CI = MSEn(Sig::AbstractArray{T,1} where T<:Real, Mobj::NamedTuple; Scales::Int=3, \n Methodx::String=\"coarse\", RadNew::Int=0, Plotx::Bool=false)\n\nReturns a vector of multiscale entropy values MSx and the complexity index CI of the data sequence Sig using the parameters specified by the multiscale object Mobj and the following 'keyword' arguments:\n\nArguments:\n\nScales - Number of temporal scales, an integer > 1 (default: 3) \n\nMethod - Graining method, one of the following: {coarse,modified,imf,timeshift,generalized} [default = coarse] For further info on these graining procedures, see the EntropyHub guide. \n\nRadNew - Radius rescaling method, an integer in the range [1 4]. When the entropy specified by Mobj is SampEn or ApEn, RadNew allows the radius threshold to be updated at each time scale (Xt). If a radius value is specified by Mobj (r), this becomes the rescaling coefficient, otherwise it is set to 0.2 (default). 
The value of RadNew specifies one of the following methods:\n\n [1] Standard Deviation - r*std(Xt)\n\n [2] Variance - r*var(Xt) \n\n [3] Mean Absolute Deviation - r*mean_ad(Xt) \n\n [4] Median Absolute Deviation - r*med_ad(Xt)\n\nPlotx - When Plotx == true, returns a plot of the entropy value at each time scale (i.e. the multiscale entropy curve) [default: false]\n\nFor further info on these graining procedures see the EntropyHub guide.\n\nSee also MSobject, rMSEn, cMSEn, hMSEn, SampEn, ApEn, XMSEn\n\nReferences:\n\n [1] Madalena Costa, Ary Goldberger, and C-K. Peng,\n \"Multiscale entropy analysis of complex physiologic time series.\"\n Physical review letters\n 89.6 (2002): 068102.\n\n [2] Vadim V. Nikulin, and Tom Brismar,\n \"Comment on “Multiscale entropy analysis of complex physiologic\n time series”.\" \n Physical review letters \n 92.8 (2004): 089803.\n\n [3] Madalena Costa, Ary L. Goldberger, and C-K. Peng. \n \"Costa, Goldberger, and Peng reply.\" \n Physical Review Letters\n 92.8 (2004): 089804.\n\n [4] Madalena Costa, Ary L. Goldberger and C-K. Peng,\n \"Multiscale entropy analysis of biological signals.\" \n Physical review E \n 71.2 (2005): 021906.\n\n [5] Ranjit A. Thuraisingham and Georg A. 
Gottwald,\n \"On multiscale entropy analysis for physiological data.\"\n Physica A: Statistical Mechanics and its Applications\n 366 (2006): 323-332.\n\n [6] Meng Hu and Hualou Liang,\n \"Intrinsic mode entropy based on multivariate empirical mode\n decomposition and its application to neural data analysis.\" \n Cognitive neurodynamics\n 5.3 (2011): 277-284.\n\n [7] Anne Humeau-Heurtier \n \"The multiscale entropy algorithm and its variants: A review.\" \n Entropy \n 17.5 (2015): 3110-3123.\n\n [8] Jianbo Gao, et al.,\n \"Multiscale entropy analysis of biological signals: a \n fundamental bi-scaling law.\" \n Frontiers in computational neuroscience \n 9 (2015): 64.\n\n [9] Paolo Castiglioni, et al.,\n \"Multiscale Sample Entropy of cardiovascular signals: Does the\n choice between fixed-or varying-tolerance among scales \n influence its evaluation and interpretation?.\" \n Entropy\n 19.11 (2017): 590.\n\n [10] Tuan D Pham,\n \"Time-shift multiscale entropy analysis of physiological signals.\" \n Entropy \n 19.6 (2017): 257.\n\n [11] Hamed Azami and Javier Escudero,\n \"Coarse-graining approaches in univariate multiscale sample \n and dispersion entropy.\" \n Entropy 20.2 (2018): 138.\n\n [12] Madalena Costa and Ary L. 
Goldberger,\n \"Generalized multiscale entropy analysis: Application to quantifying \n the complex volatility of human heartbeat time series.\" \n Entropy 17.3 (2015): 1197-1203.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Multiscale_Entropies/#EntropyHub._cMSEn.cMSEn","page":"Multiscale Entropies","title":"EntropyHub._cMSEn.cMSEn","text":"MSx, CI = cMSEn(Sig, Mobj)\n\nReturns a vector of composite multiscale entropy values (MSx) for the data sequence (Sig) using the parameters specified by the multiscale object (Mobj) using the composite multiscale entropy method over 3 temporal scales.\n\nMSx, CI = cMSEn(Sig::AbstractArray{T,1} where T<:Real, Mobj::NamedTuple; \n Scales::Int=3, RadNew::Int=0, Refined::Bool=false, Plotx::Bool=false)\n\nReturns a vector of composite multiscale entropy values (MSx) of the data sequence (Sig) using the parameters specified by the multiscale object (Mobj) and the following 'keyword' arguments:\n\nArguments:\n\nScales - Number of temporal scales, an integer > 1 (default: 3) \n\nRadNew - Radius rescaling method, an integer in the range [1 4]. When the entropy specified by Mobj is SampEn or ApEn, RadNew allows the radius threshold to be updated at each time scale (Xt). If a radius value is specified by Mobj (r), this becomes the rescaling coefficient, otherwise it is set to 0.2 (default). The value of RadNew specifies one of the following methods:\n\n [1] Standard Deviation - r*std(Xt)\n\n [2] Variance - r*var(Xt) \n\n [3] Mean Absolute Deviation - r*mean_ad(Xt) \n\n [4] Median Absolute Deviation - r*med_ad(Xt)\n\nRefined - Refined-composite MSEn method. When Refined == true and the entropy function specified by Mobj is SampEn, cMSEn returns the refined-composite multiscale entropy (rcMSEn) [default: false]\n\nPlotx - When Plotx == true, returns a plot of the entropy value at each time scale (i.e. 
the multiscale entropy curve) [default: false]\n\nSee also MSobject, rMSEn, MSEn, hMSEn, SampEn, ApEn, XMSEn\n\nReferences:\n\n[1] Madalena Costa, Ary Goldberger, and C-K. Peng,\n \"Multiscale entropy analysis of complex physiologic time series.\"\n Physical review letters\n 89.6 (2002): 068102.\n\n[2] Vadim V. Nikulin, and Tom Brismar,\n \"Comment on “Multiscale entropy analysis of complex physiologic\n time series”.\" \n Physical review letters \n 92.8 (2004): 089803.\n\n[3] Madalena Costa, Ary L. Goldberger, and C-K. Peng. \n \"Costa, Goldberger, and Peng reply.\" \n Physical Review Letters\n 92.8 (2004): 089804.\n\n [4] Shuen-De Wu, et al.,\n \"Time series analysis using composite multiscale entropy.\" \n Entropy \n 15.3 (2013): 1069-1084.\n\n[5] Shuen-De Wu, et al.,\n \"Analysis of complex time series using refined composite \n multiscale entropy.\" \n Physics Letters A \n 378.20 (2014): 1369-1374.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Multiscale_Entropies/#EntropyHub._rMSEn.rMSEn","page":"Multiscale Entropies","title":"EntropyHub._rMSEn.rMSEn","text":"MSx, CI = rMSEn(Sig, Mobj)\n\nReturns a vector of refined multiscale entropy values (MSx) and the complexity index (CI) of the data sequence (Sig) using the parameters specified by the multiscale object (Mobj) and the following default parameters: Scales = 3, Butterworth LPF Order = 6, Butterworth LPF cutoff frequency at scale (T): Fc = 0.5/T. 
If the entropy function specified by Mobj is SampEn or ApEn, rMSEn updates the threshold radius of the data sequence (Xt) at each scale to 0.2std(Xt) if no r value is provided by Mobj, or r.std(Xt) if r is specified.\n\nMSx, CI = rMSEn(Sig::AbstractArray{T,1} where T<:Real, Mobj::NamedTuple; Scales::Int=3, \n F_Order::Int=6, F_Num::Float64=0.5, RadNew::Int=0, Plotx::Bool=false)\n\nReturns a vector of refined multiscale entropy values (MSx) and the complexity index (CI) of the data sequence (Sig) using the parameters specified by the multiscale object (Mobj) and the following 'keyword' arguments:\n\nArguments:\n\nScales - Number of temporal scales, an integer > 1 (default = 3) \n\nF_Order - Butterworth low-pass filter order, a positive integer (default: 6) \n\nF_Num - Numerator of Butterworth low-pass filter cutoff frequency, a scalar value in range [0 < F_Num < 1]. The cutoff frequency at each scale (T) becomes: Fc = `F_Num/T. (default: 0.5) \n\nRadNew - Radius rescaling method, an integer in the range [1 4]. When the entropy specified by Mobj is SampEn or ApEn, RadNew allows the radius threshold to be updated at each time scale (Xt). If a radius value is specified by Mobj (r), this becomes the rescaling coefficient, otherwise it is set to 0.2 (default). The value of RadNew specifies one of the following methods:\n\n [1] Standard Deviation - r*std(Xt)\n\n [2] Variance - r*var(Xt) \n\n [3] Mean Absolute Deviation - r*mean_ad(Xt) \n\n [4] Median Absolute Deviation - r*med_ad(Xt)\n\nPlotx - When Plotx == true, returns a plot of the entropy value at each time scale (i.e. the multiscale entropy curve) [default: false] \n\nSee also MSobject, MSEn, cMSEn, hMSEn, SampEn, ApEn, XMSEn\n\nReferences:\n\n[1] Madalena Costa, Ary Goldberger, and C-K. Peng,\n \"Multiscale entropy analysis of complex physiologic time series.\"\n Physical review letters\n 89.6 (2002): 068102.\n\n[2] Vadim V. 
Nikulin, and Tom Brismar,\n \"Comment on “Multiscale entropy analysis of complex physiologic\n time series”.\" \n Physical review letters \n 92.8 (2004): 089803.\n\n[3] Madalena Costa, Ary L. Goldberger, and C-K. Peng. \n \"Costa, Goldberger, and Peng reply.\" \n Physical Review Letters\n 92.8 (2004): 089804.\n\n[4] José Fernando Valencia, et al.,\n \"Refined multiscale entropy: Application to 24-h holter \n recordings of heart period variability in healthy and aortic \n stenosis subjects.\" \n IEEE Transactions on Biomedical Engineering \n 56.9 (2009): 2202-2213.\n\n[5] Puneeta Marwaha and Ramesh Kumar Sunkaria,\n \"Optimal selection of threshold value ‘r’for refined multiscale\n entropy.\" \n Cardiovascular engineering and technology \n 6.4 (2015): 557-576.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Multiscale_Entropies/#EntropyHub._hMSEn.hMSEn","page":"Multiscale Entropies","title":"EntropyHub._hMSEn.hMSEn","text":"MSx, Sn, CI = hMSEn(Sig, Mobj)\n\nReturns a vector of entropy values (MSx) calculated at each node in the hierarchical tree, the average entropy value across all nodes at each scale (Sn), and the complexity index (CI) of the hierarchical tree (i.e. sum(Sn)) for the data sequence (Sig) using the parameters specified by the multiscale object (Mobj) over 3 temporal scales (default). The entropy values in MSx are ordered from the root node (S.00) to the Nth subnode at scale T (S.TN): i.e. S.00, S.10, S.11, S.20, S.21, S.22, S.23, S.30, S.31, S.32, S.33, S.34, S.35, S.36, S.37, S.40, ... , S.TN. The average entropy values in Sn are ordered in the same way, with the value of the root node given first: i.e. 
S0, S1, S2, ..., ST\n\nMSx, Sn, CI = hMSEn(Sig::AbstractArray{T,1} where T<:Real, Mobj::NamedTuple; \n Scales::Int=3, RadNew::Int=0, Plotx::Bool=false)\n\nReturns a vector of entropy values (MSx) calculated at each node in the hierarchical tree, the average entropy value across all nodes at each scale (Sn), and the complexity index (CI) of the entire hierarchical tree for the data sequence (Sig) using the following 'keyword' arguments:\n\nArguments:\n\nScales - Number of temporal scales, an integer > 1 (default: 3) At each scale (T), entropy is estimated for 2^(T-1) nodes.\n\nRadNew - Radius rescaling method, an integer in the range [1 4]. When the entropy specified by Mobj is SampEn or ApEn, RadNew allows the radius threshold to be updated at each node in the tree. If a radius value is specified by Mobj (r), this becomes the rescaling coefficient, otherwise it is set to 0.2 (default). The value of RadNew specifies one of the following methods:\n\n [1] Standard Deviation - r*std(Xt)\n\n [2] Variance - r*var(Xt)\n\n [3] Mean Absolute Deviation - r*mean_ad(Xt)\n\n [4] Median Absolute Deviation - r*med_ad(Xt,1)\n\nPlotx - When Plotx == true, returns a plot of the average entropy value at each time scale (i.e. the multiscale entropy curve) and a hierarchical graph showing the entropy value of each node in the hierarchical tree decomposition. (default: false)\n\nSee also MSobject, MSEn, cMSEn, rMSEn, SampEn, ApEn, XMSEn\n\nReferences:\n\n[1] Ying Jiang, C-K. 
Peng and Yuesheng Xu,\n    \"Hierarchical entropy analysis for biological signals.\"\n    Journal of Computational and Applied Mathematics\n    236.5 (2011): 728-742.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Bidimensional_Entropies/#Bidimensional-Entropies","page":"Bidimensional Entropies","title":"Bidimensional Entropies","text":"","category":"section"},{"location":"Guide/Bidimensional_Entropies/","page":"Bidimensional Entropies","title":"Bidimensional Entropies","text":"Functions for estimating the entropy of a two-dimensional univariate matrix.","category":"page"},{"location":"Guide/Bidimensional_Entropies/","page":"Bidimensional Entropies","title":"Bidimensional Entropies","text":"While EntropyHub functions primarily apply to time series data, the following bidimensional entropy functions estimate the entropy of two-dimensional (2D) matrices. Hence, bidimensional entropy functions are useful for applications such as image analysis.","category":"page"},{"location":"Guide/Bidimensional_Entropies/","page":"Bidimensional Entropies","title":"Bidimensional Entropies","text":"danger: IMPORTANT: Locked Matrix Size\nEach bidimensional entropy function (SampEn2D, FuzzEn2D, DistEn2D, DispEn2D, EspEn2D, PermEn2D) has an important keyword argument - Lock. Bidimensional entropy functions are \"locked\" by default (Lock == true) to only permit matrices with a maximum size of 128 x 128. This is because hundreds of millions of pairwise calculations are performed in the estimation of bidimensional entropy, so memory errors often occur when storing data in RAM. e.g. For a matrix of size [200 x 200], an embedding dimension (m) = 3, and a time delay (tau) = 1, there are 753,049,836 pairwise matrix comparisons (6,777,448,524 elemental subtractions). 
To pass matrices with sizes greater than [128 x 128], set Lock = false. CAUTION: unlocking the permitted matrix size may cause your Julia IDE to crash.","category":"page"},{"location":"Guide/Bidimensional_Entropies/","page":"Bidimensional Entropies","title":"Bidimensional Entropies","text":"EntropyHub.SampEn2D\nEntropyHub.FuzzEn2D\nEntropyHub.DistEn2D\nEntropyHub.DispEn2D\nEntropyHub.PermEn2D\nEntropyHub.EspEn2D","category":"page"},{"location":"Guide/Bidimensional_Entropies/#EntropyHub._SampEn2D.SampEn2D","page":"Bidimensional Entropies","title":"EntropyHub._SampEn2D.SampEn2D","text":"SE2D, Phi1, Phi2 = SampEn2D(Mat)\n\nReturns the bidimensional sample entropy estimate (SE2D) and the number of matched sub-matrices (m:Phi1, m+1:Phi2) estimated for the data matrix (Mat) using the default parameters: time delay = 1, radius distance threshold = 0.2*SD(Mat), logarithm = natural, matrix template size = [floor(H/10) floor(W/10)], (where H and W represent the height (rows) and width (columns) of the data matrix Mat) \n\n** The minimum dimension size of Mat must be > 10.**\n\nSE2D, Phi1, Phi2 = SampEn2D(Mat::AbstractArray{T,2} where T<:Real; m::Union{Int,Tuple{Int,Int}}=floor.(Int, size(Mat)./10), \n                                tau::Int=1, r::Real=0.2*std(Mat,corrected=false), Logx::Real=exp(1), Lock::Bool=true)\n\nReturns the bidimensional sample entropy (SE2D) estimates for the data matrix (Mat) using the specified 'keyword' arguments:\n\nArguments:\n\nm - Template submatrix dimensions, an integer scalar (i.e. the same height and width) or a two-element vector of integers [height, width] with a minimum value > 1. (default: [floor(H/10) floor(W/10)]) \n\ntau - Time Delay, a positive integer (default: 1) \n\nr - Distance Threshold, a positive scalar (default: 0.2*SD(Mat)) \n\nLogx - Logarithm base, a positive scalar (default: natural) \n\nLock - By default, SampEn2D only permits matrices with a maximum size of 128 x 128 to prevent memory errors when storing data on RAM. e.g. 
For Mat = [200 x 200], m = 3, and tau = 1, SampEn2D creates a vector of 753049836 elements. To enable matrices greater than [128 x 128] elements, set Lock to false. (default: true) \n\n `WARNING: unlocking the permitted matrix size may cause your Julia\n IDE to crash.`\n\nSee also SampEn, FuzzEn2D, XSampEn, MSEn\n\nReferences:\n\n[1] Luiz Eduardo Virgili Silva, et al.,\n \"Two-dimensional sample entropy: Assessing image texture \n through irregularity.\" \n Biomedical Physics & Engineering Express\n 2.4 (2016): 045002.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Bidimensional_Entropies/#EntropyHub._FuzzEn2D.FuzzEn2D","page":"Bidimensional Entropies","title":"EntropyHub._FuzzEn2D.FuzzEn2D","text":"Fuzz2D = FuzzEn2D(Mat)\n\nReturns the bidimensional fuzzy entropy estimate (Fuzz2D) estimated for the data matrix (Mat) using the default parameters: time delay = 1, fuzzy function (Fx) = 'default', fuzzy function parameters (r) = [0.2, 2], logarithm = natural, template matrix size = [floor(H/10) floor(W/10)], (where H and W represent the height and width of the data matrix 'Mat') \n\n** The minimum dimension size of Mat must be > 10.**\n\nFuzz2D = FuzzEn2D(Mat::AbstractArray{T,2} where T<:Real; m::Union{Int,Tuple{Int,Int}}=floor.(Int, size(Mat)./10), \n tau::Int=1, r::Union{Real,Tuple{Real,Real}}=(.2*std(Mat, corrected=false),2), \n Fx::String=\"default\", Logx::Real=exp(1), Lock::Bool=true)\n\nReturns the bidimensional fuzzy entropy (Fuzz2D) estimates for the data matrix (Mat) using the specified 'keyword' arguments:\n\nArguments:\n\nm - Template submatrix dimensions, an integer scalar (i.e. the same height and width) or a two-element vector of integers [height, width] with a minimum value > 1. 
(default: [floor(H/10) floor(W/10)]) \n\ntau - Time Delay, a positive integer (default: 1) \n\nFx - Fuzzy function name, one of the following: {\"sigmoid\", \"modsampen\", \"default\", \"gudermannian\", \"bell\", \"triangular\", \"trapezoidal1\", \"trapezoidal2\", \"z_shaped\", \"gaussian\", \"constgaussian\"}\n\nr - Fuzzy function parameters, a 1 element scalar or a 2 element vector of positive values. The 'r' parameters for each fuzzy function are defined as follows:\n\n    sigmoid:        r(1) = divisor of the exponential argument\n                    r(2) = value subtracted from argument (pre-division)\n    modsampen:      r(1) = divisor of the exponential argument\n                    r(2) = value subtracted from argument (pre-division)\n    default:        r(1) = divisor of the exponential argument\n                    r(2) = argument exponent (pre-division)\n    gudermannian:   r = a scalar whose value is the numerator of\n                    argument to gudermannian function:\n                    GD(x) = atan(tanh(r/x))\n    triangular:     r = a scalar whose value is the threshold (corner point) of the triangular function.\n    trapezoidal1:   r = a scalar whose value corresponds to the upper (2r) and lower (r) corner points of the trapezoid.\n    trapezoidal2:   r(1) = a value corresponding to the upper corner point of the trapezoid.\n                    r(2) = a value corresponding to the lower corner point of the trapezoid.\n    z_shaped:       r = a scalar whose value corresponds to the upper (2r) and lower (r) corner points of the z-shape.\n    bell:           r(1) = divisor of the distance value\n                    r(2) = exponent of generalized bell-shaped function\n    gaussian:       r = a scalar whose value scales the slope of the Gaussian curve.\n    constgaussian:  r = a scalar whose value defines the lower threshold and shape of the Gaussian curve. \n    [DEPRECATED] linear: r = an integer value. When r = 0, the\n                    argument of the exponential function is \n                    normalised between [0 1]. 
When r = 1,\n                    the minimum value of the exponential \n                    argument is set to 0.\n\nLogx - Logarithm base, a positive scalar (default: natural)\n\nLock - By default, FuzzEn2D only permits matrices with a maximum size of 128 x 128 to prevent memory errors when storing data on RAM. e.g. For Mat = [200 x 200], m = 3, and tau = 1, FuzzEn2D creates a vector of 753049836 elements. To enable matrices greater than [128 x 128] elements, set Lock to false. (default: true)\n\n    ` WARNING: unlocking the permitted matrix size may cause\n    your Julia IDE to crash.`\n\nSee also SampEn2D, FuzzEn, XFuzzEn\n\nReferences:\n\n[1] Luiz Fernando Segato Dos Santos, et al.,\n    \"Multidimensional and fuzzy sample entropy (SampEnMF) for\n    quantifying H&E histological images of colorectal cancer.\"\n    Computers in biology and medicine \n    103 (2018): 148-160.\n\n[2] Mirvana Hilal and Anne Humeau-Heurtier,\n    \"Bidimensional fuzzy entropy: Principle analysis and biomedical\n    applications.\"\n    41st Annual International Conference of the IEEE (EMBC) Society\n    2019.\n\n[3] Hamed Azami, et al.\n    \"Fuzzy Entropy Metrics for the Analysis of Biomedical Signals: \n    Assessment and Comparison\"\n    IEEE Access\n    7 (2019): 104833-104847\n\n\n\n\n\n","category":"function"},{"location":"Guide/Bidimensional_Entropies/#EntropyHub._DistEn2D.DistEn2D","page":"Bidimensional Entropies","title":"EntropyHub._DistEn2D.DistEn2D","text":"Dist2D = DistEn2D(Mat)\n\nReturns the bidimensional distribution entropy estimate (Dist2D) for the data matrix (Mat) using the default parameters: time delay = 1, histogram binning method = \"sturges\", logarithm = base 2, template matrix size = [floor(H/10) floor(W/10)], (where H and W represent the height (rows) and width (columns) of the data matrix Mat) \n\n** The minimum number of rows and columns of Mat must be > 10.**\n\nDist2D = DistEn2D(Mat::AbstractArray{T,2} where T<:Real; m::Union{Int,Tuple{Int,Int}}=floor.(Int, size(Mat)./10), tau::Int=1,\n                      
Bins::Union{Int,String}=\"Sturges\", Logx::Real=2, Norm::Int=2, Lock::Bool=true)\n\nReturns the bidimensional distribution entropy (Dist2D) estimate for the data matrix (Mat) using the specified 'keyword' arguments:\n\nArguments:\n\nm - Template submatrix dimensions, an integer scalar (i.e. the same height and width) or a two-element tuple of integers [height, width] with a minimum value > 1. [default: [floor(H/10) floor(W/10)]] \n\ntau - Time Delay, a positive integer [default: 1] \n\nBins - Histogram bin selection method for distance distribution, an integer > 1 indicating the number of bins, or one of the following strings {\"sturges\", \"sqrt\", \"rice\", \"doanes\"} [default: 'sturges'] \n\nLogx - Logarithm base, a positive scalar [default: 2]\n\n    ** enter 0 for natural logarithm.**\n\nNorm - Normalisation of Dist2D value, one of the following integers:\n    [0] no normalisation.\n    [1] normalises values of data matrix (Mat) to range [0 1].\n    [2] normalises values of data matrix (Mat) to range [0 1], and normalises the distribution entropy value (Dist2D) w.r.t the number of histogram bins. [default]\n    [3] normalises the distribution entropy value w.r.t the number of histogram bins, without normalising data matrix values. \n\nLock - By default, DistEn2D only permits matrices with a maximum size of 128 x 128 to prevent memory errors when storing data on RAM. e.g. For Mat = [200 x 200], m = 3, and tau = 1, DistEn2D creates a vector of 753049836 elements. To enable matrices greater than [128 x 128] elements, set Lock to false. 
[default: 'true'] WARNING: unlocking the permitted matrix size may cause your Julia IDE to crash.\n\nSee also DistEn, XDistEn, SampEn2D, FuzzEn2D, MSEn\n\nReferences:\n\n[1] Hamed Azami, Javier Escudero and Anne Humeau-Heurtier,\n    \"Bidimensional distribution entropy to analyze the irregularity\n    of small-sized textures.\"\n    IEEE Signal Processing Letters \n    24.9 (2017): 1338-1342.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Bidimensional_Entropies/#EntropyHub._DispEn2D.DispEn2D","page":"Bidimensional Entropies","title":"EntropyHub._DispEn2D.DispEn2D","text":"Disp2D, RDE = DispEn2D(Mat)\n\nReturns the bidimensional dispersion entropy estimate (Disp2D) and reverse bidimensional dispersion entropy (RDE) estimated for the data matrix (Mat) using the default parameters: time delay = 1, symbols = 3, logarithm = natural, data transform = normalised cumulative density function ('ncdf'), template matrix size = [floor(H/10) floor(W/10)], (where H and W represent the height (rows) and width (columns) of the data matrix Mat) \n\n** The minimum number of rows and columns of Mat must be > 10.**\n\nDisp2D, RDE = DispEn2D(Mat::AbstractArray{T,2} where T<:Real; \n                            m::Union{Int,Tuple{Int,Int}}=floor.(Int, size(Mat)./10), tau::Int=1,\n                            c::Int=3, Typex::String=\"ncdf\", Logx::Real=exp(1), Norm::Bool=false, Lock::Bool=true)\n\nReturns the bidimensional dispersion entropy (Disp2D) and reverse bidimensional dispersion entropy (RDE) estimate for the data matrix (Mat) using the specified 'keyword' arguments:\n\nArguments:\n\nm - Template submatrix dimensions, an integer scalar (i.e. the same height and width) or a two-element tuple of integers [height, width] with a minimum value > 1. 
[default: [floor(H/10) floor(W/10)]] \n\ntau - Time Delay, a positive integer [default: 1] \n\nc - Number of symbols, an integer > 1 \n\nTypex - Type of symbolic mapping transform, one of the following: {linear, kmeans, ncdf, equal} See the EntropyHub Guide for more info on these transforms. \n\nLogx - Logarithm base, a positive scalar [default: natural]\n\n    ** enter 0 for natural logarithm.**\n\nNorm - Normalisation of Disp2D value, a boolean:\n    [false] no normalisation - default\n    [true] normalises w.r.t number of possible dispersion patterns. \n\nLock - By default, DispEn2D only permits matrices with a maximum size of 128 x 128 to prevent memory errors when storing data on RAM. e.g. For Mat = [200 x 200], m = 3, and tau = 1, DispEn2D creates a vector of 753049836 elements. To enable matrices greater than [128 x 128] elements, set Lock to false. [default: 'true'] WARNING: unlocking the permitted matrix size may cause your Julia IDE to crash.\n\nSee also DispEn, DistEn2D, SampEn2D, FuzzEn2D, MSEn\n\nReferences:\n\n[1] Hamed Azami, et al.,\n    \"Two-dimensional dispersion entropy: An information-theoretic \n    method for irregularity analysis of images.\"\n    Signal Processing: Image Communication, \n    75 (2019): 178-187.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Bidimensional_Entropies/#EntropyHub._PermEn2D.PermEn2D","page":"Bidimensional Entropies","title":"EntropyHub._PermEn2D.PermEn2D","text":"Perm2D = PermEn2D(Mat)\n\nReturns the bidimensional permutation entropy estimate (Perm2D) for the data matrix (Mat) using the default parameters: time delay = 1, logarithm = natural, template matrix size = [floor(H/10) floor(W/10)], (where H and W represent the height (rows) and width (columns) of the data matrix Mat) \n\n** The minimum dimension size of Mat must be > 10.**\n\nPerm2D = PermEn2D(Mat::AbstractArray{T,2} where T<:Real; m::Union{Int,Tuple{Int,Int}}=floor.(Int, size(Mat)./10), \n                      tau::Int=1, Norm::Bool=true, Logx::Real=exp(1), Lock::Bool=true)\n\nReturns 
the bidimensional permutation entropy (Perm2D) estimates for the data matrix (Mat) using the specified 'keyword' arguments:\n\nArguments:\n\nm - Template submatrix dimensions, an integer scalar (i.e. the same height and width) or a two-element vector of integers [height, width] with a minimum value > 1. (default: [floor(H/10) floor(W/10)]) \n\ntau - Time Delay, a positive integer (default: 1) \n\nNorm - Normalization of permutation entropy estimate, a boolean (default: true) \n\nLogx - Logarithm base, a positive scalar (default: natural) \n\nLock - By default, PermEn2D only permits matrices with a maximum size of 128 x 128 to prevent memory errors when storing data on RAM. e.g. For Mat = [200 x 200], m = 3, and tau = 1, PermEn2D creates a vector of 753049836 elements. To enable matrices greater than [128 x 128] elements, set Lock to false. (default: true) \n\n    `WARNING: unlocking the permitted matrix size may cause your Julia\n    IDE to crash.`\n\nNOTE - The original bidimensional permutation entropy algorithms [1][2] do not account for equal-valued elements of the embedding matrices. To overcome this, PermEn2D uses the lowest common rank for such instances. For example, given an embedding matrix A where, A = [3.4 5.5 7.3; 2.1 6 9.9; 7.3 1.1 2.1] would normally be mapped to an ordinal pattern like so, [3.4 5.5 7.3 2.1 6 9.9 7.3 1.1 2.1] => [ 8 4 9 1 2 5 3 7 6 ] However, indices 4 & 9, and 3 & 7 have the same values, 2.1 and 7.3 respectively. Instead, PermEn2D uses the ordinal pattern [ 8 4 4 1 2 5 3 3 6 ] where the lowest ranks (4 & 3) are used instead (of 9 & 7). Therefore, the number of possible permutations is no longer (mxmy)!, but (mxmy)^(mxmy). Here, the PermEn2D value is normalized by the maximum Shannon entropy (Smax = log((mxmy)!)) 
assuming that no equal values are found in the permutation motif matrices, as presented in [1].\n\nSee also SampEn2D, FuzzEn2D, DispEn2D, DistEn2D\n\nReferences:\n\n[1] Haroldo Ribeiro et al.,\n    \"Complexity-Entropy Causality Plane as a Complexity Measure \n    for Two-Dimensional Patterns\"\n    PLoS ONE (2012), 7(8):e40689, \n\n[2] Luciano Zunino and Haroldo Ribeiro,\n    \"Discriminating image textures with the multiscale\n    two-dimensional complexity-entropy causality plane\"\n    Chaos, Solitons and Fractals, 91:679-688 (2016)\n\n[3] Matthew Flood and Bernd Grimm,\n    \"EntropyHub: An Open-source Toolkit for Entropic Time Series Analysis\"\n    PLoS ONE (2021) 16(11): e0259448.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Bidimensional_Entropies/#EntropyHub._EspEn2D.EspEn2D","page":"Bidimensional Entropies","title":"EntropyHub._EspEn2D.EspEn2D","text":"Esp2D, = EspEn2D(Mat)\n\nReturns the bidimensional Espinosa entropy estimate (Esp2D) for the data matrix (Mat) using the default parameters: time delay = 1, tolerance threshold = 20, percentage similarity = 0.7, logarithm = natural, matrix template size = [floor(H/10) floor(W/10)], (where H and W represent the height (rows) and width (columns) of the data matrix Mat) ** The minimum number of rows and columns of Mat must be > 10.**\n\nEsp2D = EspEn2D(Mat::AbstractArray{T,2} where T<:Real; m::Union{Int,Tuple{Int,Int}}=floor.(Int, size(Mat)./10), \n                    tau::Int=1, r::Real=20, ps::Float64=.7, Logx::Real=exp(1), Lock::Bool=true)\n\nReturns the bidimensional Espinosa entropy (Esp2D) estimates for the data matrix (Mat) using the specified 'keyword' arguments:\n\nArguments:\n\nm - Template submatrix dimensions, an integer scalar (i.e. the same height and width) or a two-element vector of integers [height, width] with a minimum value > 1. 
(default: [floor(H/10) floor(W/10)]) \n\ntau - Time Delay, a positive integer (default: 1) \n\nr - Tolerance threshold, a positive scalar (default: 20) \n\nps - Percentage similarity, a value in range [0 1], (default: 0.7) \n\nLogx - Logarithm base, a positive scalar (default: natural) \n\nLock - By default, EspEn2D only permits matrices with a maximum size of 128 x 128 to prevent memory errors when storing data on RAM. e.g. For Mat = [200 x 200], m = 3, and tau = 1, EspEn2D creates a vector of 753049836 elements. To enable matrices greater than [128 x 128] elements, set Lock to false. (default: true) \n\n `WARNING: unlocking the permitted matrix size may cause your Julia\n IDE to crash.`\n\nSee also SampEn2D, FuzzEn2D, DispEn2D, DistEn2D, PermEn2D\n\nReferences:\n\n[1] Ricardo Espinosa, et al.,\n \"Two-dimensional EspEn: A New Approach to Analyze Image Texture \n by Irregularity.\" \n Entropy,\n 23:1261 (2021)\n\n\n\n\n\n","category":"function"},{"location":"Guide/Multiscale_Cross_Entropies/#Multiscale-Cross-Entropies","page":"Multiscale Cross-Entropies","title":"Multiscale Cross-Entropies","text":"","category":"section"},{"location":"Guide/Multiscale_Cross_Entropies/","page":"Multiscale Cross-Entropies","title":"Multiscale Cross-Entropies","text":"Functions for estimating the multiscale entropy between two univariate time series.","category":"page"},{"location":"Guide/Multiscale_Cross_Entropies/","page":"Multiscale Cross-Entropies","title":"Multiscale Cross-Entropies","text":"Just as one can calculate multiscale entropy using any Base entropy, the same functionality is possible with multiscale cross-entropy using any Cross-entropy function: (XApEn, XSampEn, XK2En, XCondEn, XPermEn, XSpecEn, XDistEn, XFuzzEn).","category":"page"},{"location":"Guide/Multiscale_Cross_Entropies/","page":"Multiscale Cross-Entropies","title":"Multiscale Cross-Entropies","text":"To do so, we again use the MSobject function to pass a multiscale object (Mobj) to the multiscale 
cross-entropy functions.","category":"page"},{"location":"Guide/Multiscale_Cross_Entropies/","page":"Multiscale Cross-Entropies","title":"Multiscale Cross-Entropies","text":"info: NOTE:\nMultiscale cross-entropy functions have three positional arguments: the first data sequence, Sig1 (an Nx1 matrix),\nthe second data sequence, Sig2 (an Nx1 matrix),\nthe multiscale entropy object, Mobj.","category":"page"},{"location":"Guide/Multiscale_Cross_Entropies/","page":"Multiscale Cross-Entropies","title":"Multiscale Cross-Entropies","text":"EntropyHub.MSobject","category":"page"},{"location":"Guide/Multiscale_Cross_Entropies/","page":"Multiscale Cross-Entropies","title":"Multiscale Cross-Entropies","text":"The following functions use the multiscale entropy object shown above.","category":"page"},{"location":"Guide/Multiscale_Cross_Entropies/","page":"Multiscale Cross-Entropies","title":"Multiscale Cross-Entropies","text":"EntropyHub.XMSEn\nEntropyHub.cXMSEn\nEntropyHub.rXMSEn\nEntropyHub.hXMSEn","category":"page"},{"location":"Guide/Multiscale_Cross_Entropies/#EntropyHub._XMSEn.XMSEn","page":"Multiscale Cross-Entropies","title":"EntropyHub._XMSEn.XMSEn","text":"MSx, CI = XMSEn(Sig1, Sig2, Mobj)\n\nReturns a vector of multiscale cross-entropy values MSx and the complexity index CI between the data sequences contained in Sig1 and Sig2 using the parameters specified by the multiscale object Mobj over 3 temporal scales with coarse-graining (default). 
\n\nMSx, CI = XMSEn(Sig1::AbstractVector{T} where T<:Real, Sig2::AbstractVector{T} where T<:Real, Mobj::NamedTuple; \n                    Scales::Int=3, Methodx::String=\"coarse\", RadNew::Int=0, Plotx::Bool=false)\n\nReturns a vector of multiscale cross-entropy values MSx and the complexity index CI of the data sequences contained in Sig1 and Sig2 using the parameters specified by the multiscale object Mobj and the following 'keyword' arguments:\n\nArguments:\n\nScales - Number of temporal scales, an integer > 1 (default: 3) \n\nMethodx - Graining method, one of the following:\n\n    {`\"coarse\", \"modified\", \"imf\", \"timeshift\", \"generalized\"`} [default: 'coarse'] \n    For further info on graining procedures, see the EntropyHub guide.\n\nRadNew - Radius rescaling method, an integer in the range [1 4]. When the entropy specified by Mobj is XSampEn or XApEn, RadNew allows the radius threshold to be updated at each time scale (Xt). If a radius value is specified by Mobj (r), this becomes the rescaling coefficient, otherwise it is set to 0.2 (default). The value of RadNew specifies one of the following methods: \n\n    [1] Pooled Standard Deviation - r*std(Xt) \n\n    [2] Pooled Variance - r*var(Xt) \n\n    [3] Total Mean Absolute Deviation - r*mean_ad(Xt) \n\n    [4] Total Median Absolute Deviation - r*med_ad(Xt)\n\nPlotx - When Plotx == true, returns a plot of the entropy value at each time scale (i.e. the multiscale entropy curve) [default: false]\n\nFor further info on these graining procedures see the EntropyHub guide.\n\nSee also MSobject, MSEn, cXMSEn, rXMSEn, hXMSEn, XSampEn, XApEn, XFuzzEn\n\nReferences:\n\n[1] Rui Yan, Zhuo Yang, and Tao Zhang,\n    \"Multiscale cross entropy: a novel algorithm for analyzing two\n    time series.\" \n    5th International Conference on Natural Computation. \n    Vol. 1, pp: 411-413 IEEE, 2009.\n\n[2] Madalena Costa, Ary Goldberger, and C-K. 
Peng,\n \"Multiscale entropy analysis of complex physiologic time series.\"\n Physical review letters\n 89.6 (2002): 068102.\n\n[3] Vadim V. Nikulin, and Tom Brismar,\n \"Comment on “Multiscale entropy analysis of complex physiologic\n time series”.\" \n Physical review letters \n 92.8 (2004): 089803.\n\n[4] Madalena Costa, Ary L. Goldberger, and C-K. Peng. \n \"Costa, Goldberger, and Peng reply.\" \n Physical Review Letters\n 92.8 (2004): 089804.\n\n[5] Antoine Jamin, et al,\n \"A novel multiscale cross-entropy method applied to navigation \n data acquired with a bike simulator.\" \n 41st annual international conference of the IEEE EMBC\n IEEE, 2019.\n\n[6] Antoine Jamin and Anne Humeau-Heurtier. \n \"(Multiscale) Cross-Entropy Methods: A Review.\" \n Entropy \n 22.1 (2020): 45.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Multiscale_Cross_Entropies/#EntropyHub._cXMSEn.cXMSEn","page":"Multiscale Cross-Entropies","title":"EntropyHub._cXMSEn.cXMSEn","text":"MSx, CI = cXMSEn(Sig1, Sig2, Mobj)\n\nReturns a vector of composite multiscale cross-entropy values (MSx) between two univariate data sequences contained in Sig1 and Sig2 using the parameters specified by the multiscale object (Mobj) using the composite multiscale method (cMSE) over 3 temporal scales.\n\nMSx, CI = cXMSEn(Sig1::AbstractVector{T} where T<:Real, Sig2::AbstractVector{T} where T<:Real, Mobj::NamedTuple; \n Scales::Int=3, RadNew::Int=0, Refined::Bool=false, Plotx::Bool=false)\n\nReturns a vector of composite multiscale cross-entropy values (MSx) between the data sequences contained in Sig1 and Sig2 using the parameters specified by the multiscale object (Mobj) and the following keyword arguments:\n\nArguments:\n\nScales - Number of temporal scales, an integer > 1 (default: 3)\n\nRadNew - Radius rescaling method, an integer in the range [1 4]. When the entropy specified by Mobj is XSampEn or XApEn, RadNew rescales the radius threshold of the sub-sequences at each time scale (Ykj). 
If a radius value is specified by Mobj (r), this becomes the rescaling coefficient, otherwise it is set to 0.2 (default). The value of RadNew specifies one of the following methods:\n\n    [1] Pooled Standard Deviation - r*std(Ykj)\n\n    [2] Pooled Variance - r*var(Ykj)\n\n    [3] Total Mean Absolute Deviation - r*mean_ad(Ykj)\n\n    [4] Total Median Absolute Deviation - r*med_ad(Ykj,1)\n\nRefined - Refined-composite XMSEn method. When Refined == true and the entropy function specified by Mobj is XSampEn or XFuzzEn, cXMSEn returns the refined-composite multiscale entropy (rcXMSEn). (default: false) \n\nPlotx - When Plotx == true, returns a plot of the entropy value at each time scale (i.e. the multiscale entropy curve) [default: false]\n\nSee also MSobject, XMSEn, rXMSEn, hXMSEn, XSampEn, XApEn, cMSEn\n\nReferences:\n\n[1] Rui Yan, Zhuo Yang, and Tao Zhang,\n    \"Multiscale cross entropy: a novel algorithm for analyzing two\n    time series.\" \n    5th International Conference on Natural Computation. \n    Vol. 1, pp: 411-413 IEEE, 2009.\n\n[2] Yi Yin, Pengjian Shang, and Guochen Feng, \n    \"Modified multiscale cross-sample entropy for complex time \n    series.\"\n    Applied Mathematics and Computation \n    289 (2016): 98-110.\n\n[3] Madalena Costa, Ary Goldberger, and C-K. Peng,\n    \"Multiscale entropy analysis of complex physiologic time series.\"\n    Physical review letters\n    89.6 (2002): 068102.\n\n[4] Antoine Jamin, et al,\n    \"A novel multiscale cross-entropy method applied to navigation \n    data acquired with a bike simulator.\" \n    41st annual international conference of the IEEE EMBC\n    IEEE, 2019.\n\n[5] Antoine Jamin and Anne Humeau-Heurtier. 
\n \"(Multiscale) Cross-Entropy Methods: A Review.\" \n Entropy \n 22.1 (2020): 45.\n\n[6] Shuen-De Wu, et al.,\n \"Time series analysis using composite multiscale entropy.\" \n Entropy \n 15.3 (2013): 1069-1084.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Multiscale_Cross_Entropies/#EntropyHub._rXMSEn.rXMSEn","page":"Multiscale Cross-Entropies","title":"EntropyHub._rXMSEn.rXMSEn","text":"MSx, CI = rXMSEn(Sig1, Sig2, Mobj)\n\nReturns a vector of refined multiscale cross-entropy values (MSx) and the complexity index (CI) between the data sequences contained in Sig1 and Sig2 using the parameters specified by the multiscale object (Mobj) and the following default parameters: Scales = 3, Butterworth LPF Order = 6, Butterworth LPF cutoff frequency at scale (T): Fc = 0.5/T. If the entropy function specified by Mobj is XSampEn or XApEn, rMSEn updates the threshold radius of the data sequences (Xt) at each scale to 0.2SDpooled(Xa, Xb) when no r value is provided by Mobj, or rSDpooled(Xa, Xb) if r is specified.\n\nMSx, CI = rXMSEn(Sig1::AbstractVector{T} where T<:Real, Sig2::AbstractVector{T} where T<:Real, Mobj::NamedTuple;\n Scales::Int=3, F_Order::Int=6, F_Num::Float64=0.5, RadNew::Int=0, Plotx::Bool=false)\n\nReturns a vector of refined multiscale cross-entropy values (MSx) and the complexity index (CI) between the data sequences contained in Sig1 and Sig2 using the parameters specified by the multiscale object (Mobj) and the following keyword arguments:\n\nArguments:\n\nScales - Number of temporal scales, an integer > 1 (default: 3) \n\nF_Order - Butterworth low-pass filter order, a positive integer (default: 6) \n\nF_Num - Numerator of Butterworth low-pass filter cutoff frequency, a scalar value in range [0 < F_Num < 1]. The cutoff frequency at each scale (T) becomes: Fc = F_Num/T. (default: 0.5) \n\nRadNew - Radius rescaling method, an integer in the range [1 4]. 
When the entropy specified by Mobj is XSampEn or XApEn, RadNew allows the radius threshold to be updated at each time scale (Xt). If a radius value is specified by Mobj (r), this becomes the rescaling coefficient, otherwise it is set to 0.2 (default). The value of RadNew specifies one of the following methods: \n\n    [1] Pooled Standard Deviation - r*std(Xt) \n\n    [2] Pooled Variance - r*var(Xt) \n\n    [3] Total Mean Absolute Deviation - r*mean_ad(Xt) \n\n    [4] Total Median Absolute Deviation - r*med_ad(Xt)\n\nPlotx - When Plotx == true, returns a plot of the entropy value at each time scale (i.e. the multiscale entropy curve) [default: false] \n\nSee also MSobject, XMSEn, cXMSEn, hXMSEn, XSampEn, XApEn, MSEn\n\nReferences:\n\n[1] Matthew W. Flood (2021), \n    \"EntropyHub - An open source toolkit for entropic time series analysis\"\n    PLoS ONE 16(11):e0259448, \n    DOI: 10.1371/journal.pone.0259448\n    https://www.EntropyHub.xyz\n\n[2] Rui Yan, Zhuo Yang, and Tao Zhang,\n    \"Multiscale cross entropy: a novel algorithm for analyzing two\n    time series.\" \n    5th International Conference on Natural Computation. \n    Vol. 
1, pp: 411-413 IEEE, 2009.\n\n[3] José Fernando Valencia, et al.,\n    \"Refined multiscale entropy: Application to 24-h Holter \n    recordings of heart period variability in healthy and aortic \n    stenosis subjects.\" \n    IEEE Transactions on Biomedical Engineering \n    56.9 (2009): 2202-2213.\n\n[4] Puneeta Marwaha and Ramesh Kumar Sunkaria,\n    \"Optimal selection of threshold value ‘r’ for refined multiscale\n    entropy.\" \n    Cardiovascular engineering and technology \n    6.4 (2015): 557-576.\n\n[5] Yi Yin, Pengjian Shang, and Guochen Feng, \n    \"Modified multiscale cross-sample entropy for complex time \n    series.\"\n    Applied Mathematics and Computation \n    289 (2016): 98-110.\n\n[6] Antoine Jamin, et al,\n    \"A novel multiscale cross-entropy method applied to navigation \n    data acquired with a bike simulator.\" \n    41st annual international conference of the IEEE EMBC\n    IEEE, 2019.\n\n[7] Antoine Jamin and Anne Humeau-Heurtier. \n    \"(Multiscale) Cross-Entropy Methods: A Review.\" \n    Entropy \n    22.1 (2020): 45.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Multiscale_Cross_Entropies/#EntropyHub._hXMSEn.hXMSEn","page":"Multiscale Cross-Entropies","title":"EntropyHub._hXMSEn.hXMSEn","text":"MSx, Sn, CI = hXMSEn(Sig1, Sig2, Mobj)\n\nReturns a vector of cross-entropy values (MSx) calculated at each node in the hierarchical tree, the average cross-entropy value across all nodes at each scale (Sn), and the complexity index (CI) of the hierarchical tree (i.e. sum(Sn)) between the data sequences contained in Sig1 and Sig2 using the parameters specified by the multiscale object (Mobj) over 3 temporal scales (default). The entropy values in MSx are ordered from the root node (S.00) to the Nth subnode at scale T (S.TN): i.e. S.00, S.10, S.11, S.20, S.21, S.22, S.23, S.30, S.31, S.32, S.33, S.34, S.35, S.36, S.37, S.40, ... , S.TN. The average cross-entropy values in Sn are ordered in the same way, with the value of the root node given first: i.e. 
S0, S1, S2, ..., ST\n\nMSx, Sn, CI = hXMSEn(Sig1::AbstractVector{T} where T<:Real, Sig2::AbstractVector{T} where T<:Real, Mobj::NamedTuple; \n Scales::Int=3, RadNew::Int=0, Plotx::Bool=false)\n\nReturns a vector of cross-entropy values (MSx) calculated at each node in the hierarchical tree, the average cross-entropy value across all nodes at each scale (Sn), and the complexity index (CI) of the entire hierarchical tree between the data sequences contained in Sig1 and Sig2 using the following name/value pair arguments:\n\nArguments:\n\nScales - Number of temporal scales, an integer > 1 (default: 3) At each scale (T), entropy is estimated for 2^(T-1) nodes. \n\nRadNew - Radius rescaling method, an integer in the range [1 4]. When the entropy specified by Mobj is XSampEn or XApEn, RadNew allows the radius threshold to be updated at each node in the tree. If a radius value is specified by Mobj (r), this becomes the rescaling coefficient, otherwise it is set to 0.2 (default). The value of RadNew specifies one of the following methods: \n\n [1] Pooled Standard Deviation - r*std(Xt) \n\n [2] Pooled Variance - r*var(Xt) \n\n [3] Total Mean Absolute Deviation - r*mean_ad(Xt) \n\n [4] Total Median Absolute Deviation - r*med_ad(Xt)\n\nPlotx - When Plotx == true, returns a plot of the average cross-entropy value at each time scale (i.e. the multiscale entropy curve) and a hierarchical graph showing the entropy value of each node in the hierarchical tree decomposition. (default: false) \n\nSee also MSobject, XMSEn, rXMSEn, cXMSEn, XSampEn, XApEn, hMSEn\n\nReferences:\n\n[1] Matthew W. Flood (2021), \n \"EntropyHub - An open source toolkit for entropic time series analysis\"\n PLoS ONE 16(11):e0259448, \n DOI: 10.1371/journal.pone.0259448\n https://www.EntropyHub.xyz\n\n[2] Rui Yan, Zhuo Yang, and Tao Zhang,\n \"Multiscale cross entropy: a novel algorithm for analyzing two\n time series.\" \n 5th International Conference on Natural Computation. \n Vol. 
1, pp: 411-413 IEEE, 2009.\n\n[3] Ying Jiang, C-K. Peng and Yuesheng Xu,\n \"Hierarchical entropy analysis for biological signals.\"\n Journal of Computational and Applied Mathematics\n 236.5 (2011): 728-742.\n\n\n\n\n\n","category":"function"},{"location":"Examples/Example5/#Example-5:-Multiscale-Entropy-Object-MSobject()","page":"Ex.5: Multiscale Entropy Object","title":"Example 5: Multiscale Entropy Object - MSobject()","text":"","category":"section"},{"location":"Examples/Example5/","page":"Ex.5: Multiscale Entropy Object","title":"Ex.5: Multiscale Entropy Object","text":"warning: Note:\nThe base and cross-entropy functions used in the multiscale entropy calculation are declared by passing EntropyHub functions to MSobject(), not string names.","category":"page"},{"location":"Examples/Example5/","page":"Ex.5: Multiscale Entropy Object","title":"Ex.5: Multiscale Entropy Object","text":"Create a multiscale entropy object (Mobj) for multiscale fuzzy entropy, calculated with an embedding dimension of 5, a time delay (tau) of 2, using a sigmoidal fuzzy function with the r scaling parameters (3, 1.2).","category":"page"},{"location":"Examples/Example5/","page":"Ex.5: Multiscale Entropy Object","title":"Ex.5: Multiscale Entropy Object","text":"using EntropyHub # hide\nMobj = MSobject(FuzzEn, m = 5, tau = 2, Fx = \"sigmoid\", r = (3, 1.2))","category":"page"},{"location":"Examples/Example5/","page":"Ex.5: Multiscale Entropy Object","title":"Ex.5: Multiscale Entropy Object","text":"Create a multiscale entropy object (Mobj) for multiscale corrected-cross-conditional entropy, calculated with an embedding dimension of 6 and using an 11-symbol data transform.","category":"page"},{"location":"Examples/Example5/","page":"Ex.5: Multiscale Entropy Object","title":"Ex.5: Multiscale Entropy Object","text":"using EntropyHub # hide\nMobj = MSobject(XCondEn, m = 6, c = 11)","category":"page"},{"location":"Examples/Example1/#Example-1:-Sample-Entropy","page":"Ex.1: Sample 
Entropy","title":"Example 1: Sample Entropy","text":"","category":"section"},{"location":"Examples/Example1/","page":"Ex.1: Sample Entropy","title":"Ex.1: Sample Entropy","text":"Import a signal of normally distributed random numbers [mean = 0; SD = 1], and calculate the sample entropy for each embedding dimension (m) from 0 to 4.","category":"page"},{"location":"Examples/Example1/","page":"Ex.1: Sample Entropy","title":"Ex.1: Sample Entropy","text":"using EntropyHub # hide\nX = ExampleData(\"gaussian\");\nSamp, _ = SampEn(X, m = 4);\nSamp # hide","category":"page"},{"location":"Examples/Example1/","page":"Ex.1: Sample Entropy","title":"Ex.1: Sample Entropy","text":"Select the last value to get the sample entropy for m = 4.","category":"page"},{"location":"Examples/Example1/","page":"Ex.1: Sample Entropy","title":"Ex.1: Sample Entropy","text":"using EntropyHub # hide\nX = ExampleData(\"gaussian\"); # hide\nSamp, _ = SampEn(X, m = 4); # hide\nSamp[end]","category":"page"},{"location":"Examples/Example1/","page":"Ex.1: Sample Entropy","title":"Ex.1: Sample Entropy","text":"Calculate the sample entropy for each embedding dimension (m) from 0 to 4 with a time delay (tau) of 2 samples.","category":"page"},{"location":"Examples/Example1/","page":"Ex.1: Sample Entropy","title":"Ex.1: Sample Entropy","text":"using EntropyHub # hide\nX = ExampleData(\"gaussian\"); # hide\nSamp, Phi1, Phi2 = SampEn(X, m = 4, tau = 2)","category":"page"},{"location":"Examples/Example1/","page":"Ex.1: Sample Entropy","title":"Ex.1: Sample Entropy","text":"Import a signal of uniformly distributed random numbers in the range [-1, 1] and calculate the sample entropy for an embedding dimension (m) of 5, a time delay of 2, and a threshold radius of 0.075. 
Return the conditional probability (Vcp) and the number of overlapping matching vector pairs of lengths m+1 (Ka) and m (Kb), respectively.","category":"page"},{"location":"Examples/Example1/","page":"Ex.1: Sample Entropy","title":"Ex.1: Sample Entropy","text":"using EntropyHub # hide\nX = ExampleData(\"uniform\"); # hide\nSamp, _, _, Vcp_Ka_Kb = SampEn(X, m = 5, tau = 2, r = 0.075, Vcp = true)\nVcp, Ka, Kb = Vcp_Ka_Kb\nprintln(\"Vcp = \", Vcp) # hide\nprintln(\"Ka = \", Ka) # hide\nprintln(\"Kb = \", Kb) # hide","category":"page"},{"location":"Examples/Example9/#Example-9:-Hierarchical-Multiscale-corrected-Cross-Conditional-Entropy","page":"Ex.9: Hierarchical Multiscale corrected Cross-Conditional Entropy","title":"Example 9: Hierarchical Multiscale corrected Cross-Conditional Entropy","text":"","category":"section"},{"location":"Examples/Example9/","page":"Ex.9: Hierarchical Multiscale corrected Cross-Conditional Entropy","title":"Ex.9: Hierarchical Multiscale corrected Cross-Conditional Entropy","text":"Import the x and y components of the Henon system of equations and create a multiscale entropy object with the following parameters: EnType = XCondEn(), embedding dimension = 2, time delay = 2, number of symbols = 12, logarithm base = 2, normalization = true","category":"page"},{"location":"Examples/Example9/","page":"Ex.9: Hierarchical Multiscale corrected Cross-Conditional Entropy","title":"Ex.9: Hierarchical Multiscale corrected Cross-Conditional Entropy","text":"Data = ExampleData(\"henon\");\nMobj = MSobject(XCondEn, m = 2, tau = 2, c = 12, Logx = 2, Norm = true)\n\nusing Plots\nscatter(Data[:,1], Data[:,2], markercolor = \"green\", markerstrokecolor = \"black\",\nmarkersize = 3, background_color = \"black\", grid = false)","category":"page"},{"location":"Examples/Example9/","page":"Ex.9: Hierarchical Multiscale corrected Cross-Conditional Entropy","title":"Ex.9: Hierarchical Multiscale corrected Cross-Conditional Entropy","text":"(Image: 
Henon)","category":"page"},{"location":"Examples/Example9/","page":"Ex.9: Hierarchical Multiscale corrected Cross-Conditional Entropy","title":"Ex.9: Hierarchical Multiscale corrected Cross-Conditional Entropy","text":"Calculate the hierarchical multiscale corrected cross-conditional entropy over 4 temporal scales and return the average cross-entropy at each scale (Sn), the complexity index (Ci), and a plot of the multiscale entropy curve and the hierarchical tree with the cross-entropy value at each node.","category":"page"},{"location":"Examples/Example9/","page":"Ex.9: Hierarchical Multiscale corrected Cross-Conditional Entropy","title":"Ex.9: Hierarchical Multiscale corrected Cross-Conditional Entropy","text":"using EntropyHub # hide\nData = ExampleData(\"henon\"); # hide\nMobj = MSobject(XCondEn, m = 2, tau = 2, c = 12, Logx = 2, Norm = true) # hide\nMSx, Sn, Ci = hXMSEn(Data[:,1], Data[:,2], Mobj, Scales = 4, Plotx = true)","category":"page"},{"location":"Examples/Example9/","page":"Ex.9: Hierarchical Multiscale corrected Cross-Conditional Entropy","title":"Ex.9: Hierarchical Multiscale corrected Cross-Conditional Entropy","text":"(Image: hXMSEn)","category":"page"},{"location":"Examples/Example7/#Example-7:-Refined-Multiscale-Sample-Entropy","page":"Ex.7: Refined Multiscale [Sample] Entropy","title":"Example 7: Refined Multiscale Sample Entropy","text":"","category":"section"},{"location":"Examples/Example7/","page":"Ex.7: Refined Multiscale [Sample] Entropy","title":"Ex.7: Refined Multiscale [Sample] Entropy","text":"Import a signal of uniformly distributed pseudorandom integers in the range [1, 8] and create a multiscale entropy object with the following parameters: EnType = SampEn(), embedding dimension = 4, radius threshold = 1.25","category":"page"},{"location":"Examples/Example7/","page":"Ex.7: Refined Multiscale [Sample] Entropy","title":"Ex.7: Refined Multiscale [Sample] Entropy","text":"using EntropyHub # hide\nX = 
ExampleData(\"randintegers\");\nMobj = MSobject(SampEn, m = 4, r = 1.25)\nMobj # hide","category":"page"},{"location":"Examples/Example7/","page":"Ex.7: Refined Multiscale [Sample] Entropy","title":"Ex.7: Refined Multiscale [Sample] Entropy","text":"Calculate the refined multiscale sample entropy and the complexity index (Ci) over 5 temporal scales using a 3rd order Butterworth filter with a normalised corner frequency of 0.6 at each temporal scale (τ), where the radius threshold value (r) specified by Mobj becomes scaled by the median absolute deviation of the filtered signal at each scale.","category":"page"},{"location":"Examples/Example7/","page":"Ex.7: Refined Multiscale [Sample] Entropy","title":"Ex.7: Refined Multiscale [Sample] Entropy","text":"using EntropyHub # hide\nX = ExampleData(\"randintegers\"); # hide\nMobj = MSobject(SampEn, m = 4, r = 1.25) # hide\nMSx, Ci = rMSEn(X, Mobj, Scales = 5, F_Order = 3, F_Num = 0.6, RadNew = 4)","category":"page"},{"location":"Guide/Cross_Entropies/#Cross-Entropies","page":"Cross-Entropies","title":"Cross Entropies","text":"","category":"section"},{"location":"Guide/Cross_Entropies/","page":"Cross-Entropies","title":"Cross-Entropies","text":"Functions for estimating the cross-entropy between two univariate time series.","category":"page"},{"location":"Guide/Cross_Entropies/","page":"Cross-Entropies","title":"Cross-Entropies","text":"The following functions also form the cross-entropy method used by Multiscale Cross-Entropy functions.","category":"page"},{"location":"Guide/Cross_Entropies/","page":"Cross-Entropies","title":"Cross-Entropies","text":"These functions are directly available when EntropyHub is imported:","category":"page"},{"location":"Guide/Cross_Entropies/","page":"Cross-Entropies","title":"Cross-Entropies","text":"julia> using EntropyHub\njulia> names(EntropyHub)","category":"page"},{"location":"Guide/Cross_Entropies/","page":"Cross-Entropies","title":"Cross-Entropies","text":" :ApEn\n :AttnEn\n :BubbEn\n ⋮\n 
:hXMSEn\n :rMSEn\n :rXMSEn","category":"page"},{"location":"Guide/Cross_Entropies/","page":"Cross-Entropies","title":"Cross-Entropies","text":"EntropyHub.XApEn\nEntropyHub.XSampEn\nEntropyHub.XFuzzEn\nEntropyHub.XK2En\nEntropyHub.XPermEn\nEntropyHub.XCondEn\nEntropyHub.XDistEn\nEntropyHub.XSpecEn","category":"page"},{"location":"Guide/Cross_Entropies/#EntropyHub._XApEn.XApEn","page":"Cross-Entropies","title":"EntropyHub._XApEn.XApEn","text":"XAp, Phi = XApEn(Sig1, Sig2)\n\nReturns the cross-approximate entropy estimates (XAp) and the average number of matched vectors (Phi) for m = [0,1,2], estimated for the data sequences contained in Sig1 and Sig2 using the default parameters: embedding dimension = 2, time delay = 1, radius distance threshold = 0.2*SDpooled(Sig1,Sig2), logarithm = natural\n\nNOTE: XApEn is direction-dependent. Thus, Sig1 is used as the template data sequence, and Sig2 is the matching sequence.\n\nXAp, Phi = XApEn(Sig1::Union{AbstractMatrix{T}, AbstractVector{T}} where T<:Real, Sig2::Union{AbstractVector{T} where T<:Real, Nothing} = nothing; m::Int=2, tau::Int=1, r::Union{Real,Nothing}=nothing, Logx::Real=exp(1))\n\nReturns the cross-approximate entropy estimates (XAp) between the data sequences contained in Sig1 and Sig2 using the specified 'keyword' arguments:\n\nArguments:\n\nm - Embedding Dimension, a positive integer [default: 2] \n\ntau - Time Delay, a positive integer [default: 1] \n\nr - Radius Distance Threshold, a positive scalar [default: 0.2*SDpooled(Sig1,Sig2)] \n\nLogx - Logarithm base, a positive scalar [default: natural] \n\nSee also XSampEn, XFuzzEn, XMSEn, ApEn, SampEn, MSEn\n\nReferences:\n\n[1] Steven Pincus and Burton H. 
Singer,\n \"Randomness and degrees of irregularity.\" \n Proceedings of the National Academy of Sciences \n 93.5 (1996): 2083-2088.\n\n[2] Steven Pincus,\n \"Assessing serial irregularity and its implications for health.\"\n Annals of the New York Academy of Sciences \n 954.1 (2001): 245-267.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Cross_Entropies/#EntropyHub._XSampEn.XSampEn","page":"Cross-Entropies","title":"EntropyHub._XSampEn.XSampEn","text":"XSamp, A, B = XSampEn(Sig1, Sig2)\n\nReturns the cross-sample entropy estimates (XSamp) and the number of matched vectors (m:B, m+1:A) for m = [0,1,2] estimated for the two univariate data sequences contained in Sig1 and Sig2 using the default parameters: embedding dimension = 2, time delay = 1, radius distance threshold = 0.2*SDpooled(Sig1,Sig2), logarithm = natural\n\nXSamp, A, B, (Vcp, Ka, Kb) = XSampEn(Sig1, Sig2, ..., Vcp = true)\n\nIf Vcp == true, an additional tuple (Vcp, Ka, Kb) is returned with the cross-sample entropy estimates (XSamp) and the number of matched state vectors (m: B, m+1: A). (Vcp, Ka, Kb) contains the variance of the conditional probabilities (Vcp), i.e. CP = A/B, and the number of overlapping matching vector pairs of lengths m+1 (Ka) and m (Kb), respectively. Note Vcp is undefined for the zeroth embedding dimension (m = 0) and due to the computational demand, will take substantially more time to return function outputs. 
See Appendix B in [2] for more info.\n\nXSamp, A, B = XSampEn(Sig1::Union{AbstractMatrix{T}, AbstractVector{T}} where T<:Real, Sig2::Union{AbstractVector{T} where T<:Real, Nothing} = nothing; m::Int=2, tau::Int=1, r::Union{Real,Nothing}=nothing, Logx::Real=exp(1), Vcp::Bool=false)\n\nReturns the cross-sample entropy estimates (XSamp) for dimensions [0,1,...,m] estimated between the data sequences in Sig1 and Sig2 using the specified 'keyword' arguments:\n\nArguments:\n\nm - Embedding Dimension, a positive integer [default: 2] \n\ntau - Time Delay, a positive integer [default: 1] \n\nr - Radius Distance Threshold, a positive scalar [default: 0.2*SDpooled(Sig1,Sig2)] \n\nLogx - Logarithm base, a positive scalar [default: natural] \n\nSee also XFuzzEn, XApEn, SampEn, SampEn2D, XMSEn, ApEn\n\nReferences:\n\n[1] Joshua S Richman and J. Randall Moorman. \n \"Physiological time-series analysis using approximate entropy\n and sample entropy.\" \n American Journal of Physiology-Heart and Circulatory Physiology\n (2000)\n\n[2] Douglas E Lake, Joshua S Richman, M.P. Griffin, J. Randall Moorman\n \"Sample entropy analysis of neonatal heart rate variability.\"\n American Journal of Physiology-Regulatory, Integrative and Comparative Physiology\n 283, no. 
3 (2002): R789-R797.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Cross_Entropies/#EntropyHub._XFuzzEn.XFuzzEn","page":"Cross-Entropies","title":"EntropyHub._XFuzzEn.XFuzzEn","text":"XFuzz, Ps1, Ps2 = XFuzzEn(Sig1, Sig2)\n\nReturns the cross-fuzzy entropy estimates (XFuzz) and the average fuzzy distances (m:Ps1, m+1:Ps2) for m = [1,2] estimated for the data sequences contained in Sig1 and Sig2, using the default parameters: embedding dimension = 2, time delay = 1, fuzzy function (Fx) = 'default', fuzzy function parameters (r) = [0.2, 2], logarithm = natural\n\nXFuzz, Ps1, Ps2 = XFuzzEn(Sig1::Union{AbstractMatrix{T}, AbstractVector{T}} where T<:Real, Sig2::Union{AbstractVector{T} where T<:Real, Nothing} = nothing; m::Int=2, tau::Int=1, r::Union{Real,Tuple{Real,Real}}=(.2,2), Fx::String=\"default\", Logx::Real=exp(1))\n\nReturns the cross-fuzzy entropy estimates (XFuzz) for dimensions = [1,...,m] estimated for the data sequences in Sig1 and Sig2 using the specified 'keyword' arguments:\n\nArguments:\n\nm - Embedding Dimension, a positive integer [default: 2] \n\ntau - Time Delay, a positive integer [default: 1] \n\nFx - Fuzzy function name, one of the following: {\"sigmoid\", \"modsampen\", \"default\", \"gudermannian\", \"bell\", \"triangular\", \"trapezoidal1\", \"trapezoidal2\", \"z_shaped\", \"gaussian\", \"constgaussian\"}\n\nr - Fuzzy function parameters, a scalar or a 2 element tuple of positive values. 
The r parameters for each fuzzy function are defined as follows:\n\n sigmoid: r(1) = divisor of the exponential argument\n r(2) = value subtracted from argument (pre-division)\n modsampen: r(1) = divisor of the exponential argument\n r(2) = value subtracted from argument (pre-division)\n default: r(1) = divisor of the exponential argument\n r(2) = argument exponent (pre-division)\n gudermannian: r = a scalar whose value is the numerator of\n argument to gudermannian function:\n GD(x) = atan(tanh(`r`/x)).\n triangular: r = a scalar whose value is the threshold (corner point) of the triangular function.\n trapezoidal1: r = a scalar whose value corresponds to the upper (2r) and lower (r) corner points of the trapezoid.\n trapezoidal2: r(1) = a value corresponding to the upper corner point of the trapezoid.\n r(2) = a value corresponding to the lower corner point of the trapezoid.\n z_shaped: r = a scalar whose value corresponds to the upper (2r) and lower (r) corner points of the z-shape.\n bell: r(1) = divisor of the distance value\n r(2) = exponent of generalized bell-shaped function\n gaussian: r = a scalar whose value scales the slope of the Gaussian curve.\n constgaussian: r = a scalar whose value defines the lower threshold and shape of the Gaussian curve. \n [DEPRECATED] linear: r = an integer value. When r = 0, the\n argument of the exponential function is \n normalised between [0 1]. 
When r = 1,\n the minimum value of the exponential \n argument is set to 0.\n\nLogx - Logarithm base, a positive scalar \n\nFor further information on the 'keyword' arguments, see the EntropyHub guide.\n\nSee also FuzzEn, XSampEn, XApEn, FuzzEn2D, XMSEn, MSEn\n\nReferences:\n\n[1] Hong-Bo Xie, et al.,\n \"Cross-fuzzy entropy: A new method to test pattern synchrony of\n bivariate time series.\" \n Information Sciences \n 180.9 (2010): 1715-1724.\n\n[2] Hamed Azami, et al.\n \"Fuzzy Entropy Metrics for the Analysis of Biomedical Signals: \n Assessment and Comparison\"\n IEEE Access\n 7 (2019): 104833-104847\n\n\n\n\n\n","category":"function"},{"location":"Guide/Cross_Entropies/#EntropyHub._XK2En.XK2En","page":"Cross-Entropies","title":"EntropyHub._XK2En.XK2En","text":"XK2, Ci = XK2En(Sig1, Sig2)\n\nReturns the cross-Kolmogorov entropy estimates (XK2) and the correlation integrals (Ci) for m = [1,2] estimated between the data sequences contained in Sig1 and Sig2 using the default parameters: embedding dimension = 2, time delay = 1, distance threshold (r) = 0.2*SDpooled(Sig1, Sig2), logarithm = natural\n\nXK2, Ci = XK2En(Sig1::Union{AbstractMatrix{T}, AbstractVector{T}} where T<:Real, Sig2::Union{AbstractVector{T} where T<:Real, Nothing} = nothing; m::Int=2, tau::Int=1, r::Union{Real,Nothing}=nothing, Logx::Real=exp(1))\n\nReturns the cross-Kolmogorov entropy estimates (XK2) estimated between the data sequences contained in Sig1 and Sig2 using the specified 'keyword' arguments:\n\nArguments:\n\nm - Embedding Dimension, a positive integer [default: 2] \n\ntau - Time Delay, a positive integer [default: 1] \n\nr - Radius Distance Threshold, a positive scalar [default: 0.2*SDpooled(Sig1,Sig2)] \n\nLogx - Logarithm base, a positive scalar [default: natural] \n\nSee also XSampEn, XFuzzEn, XApEn, K2En, XMSEn, XDistEn\n\nReferences:\n\n[1] Matthew W. 
Flood,\n \"XK2En - EntropyHub Project\"\n (2021) https://github.com/MattWillFlood/EntropyHub\n\n\n\n\n\n","category":"function"},{"location":"Guide/Cross_Entropies/#EntropyHub._XPermEn.XPermEn","page":"Cross-Entropies","title":"EntropyHub._XPermEn.XPermEn","text":"XPerm = XPermEn(Sig1, Sig2)\n\nReturns the cross-permutation entropy estimates (XPerm) estimated between the data sequences contained in Sig1 and Sig2 using the default parameters: embedding dimension = 3, time delay = 1, logarithm = base 2\n\nXPerm = XPermEn(Sig1::Union{AbstractMatrix{T}, AbstractVector{T}} where T<:Real, Sig2::Union{AbstractVector{T} where T<:Real, Nothing} = nothing; m::Int=3, tau::Int=1, Logx::Real=exp(1))\n\nReturns the cross-permutation entropy estimates (XPerm) estimated between the data sequences contained in Sig1 and Sig2 using the specified 'keyword' arguments:\n\nArguments:\n\nm - Embedding Dimension, an integer > 2 [default: 3] \n\n **Note: XPerm is undefined for embedding dimensions < 3.**\n\ntau - Time Delay, a positive integer [default: 1] \n\nLogx - Logarithm base, a positive scalar [default: 2] ** enter 0 for natural log.** \n\nSee also PermEn, XApEn, XSampEn, XFuzzEn, XMSEn\n\nReferences:\n\n[1] Wenbin Shi, Pengjian Shang, and Aijing Lin,\n \"The coupling analysis of stock market indices based on \n cross-permutation entropy.\"\n Nonlinear Dynamics\n 79.4 (2015): 2439-2447.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Cross_Entropies/#EntropyHub._XCondEn.XCondEn","page":"Cross-Entropies","title":"EntropyHub._XCondEn.XCondEn","text":"XCond, SEw, SEz = XCondEn(Sig1, Sig2)\n\nReturns the corrected cross-conditional entropy estimates (XCond) and the corresponding Shannon entropies (m: SEw, m+1: SEz) for m = [1,2] estimated for the data sequences contained in Sig1 and Sig2 using the default parameters: embedding dimension = 2, time delay = 1, number of symbols = 6, logarithm = natural ** Note: XCondEn is direction-dependent. 
Therefore, the order of the data sequences Sig1 and Sig2 matters. If Sig1 is the sequence 'y', and Sig2 is the second sequence 'u', the XCond is the amount of information carried by y(i) when the pattern u(i) is found.**\n\nXCond, SEw, SEz = XCondEn(Sig1::Union{AbstractMatrix{T}, AbstractVector{T}} where T<:Real, Sig2::Union{AbstractVector{T} where T<:Real, Nothing} = nothing; m::Int=2, tau::Int=1, c::Int=6, Logx::Real=exp(1), Norm::Bool=false)\n\nReturns the corrected cross-conditional entropy estimates (XCond) for the data sequences contained in Sig1 and Sig2 using the specified 'keyword' arguments:\n\nArguments:\n\nm - Embedding Dimension, an integer > 1 [default: 2] \n\ntau - Time Delay, a positive integer [default: 1] \n\nc - Number of symbols, an integer > 1 [default: 6] \n\nLogx - Logarithm base, a positive scalar [default: natural] \n\nNorm - Normalisation of XCond values: [false] no normalisation [default]\n\n [true] normalises w.r.t cross-Shannon entropy.\n\nSee also XFuzzEn, XSampEn, XApEn, XPermEn, CondEn, XMSEn\n\nReferences:\n\n[1] Alberto Porta, et al.,\n \"Conditional entropy approach for the evaluation of the \n coupling strength.\" \n Biological cybernetics \n 81.2 (1999): 119-129.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Cross_Entropies/#EntropyHub._XDistEn.XDistEn","page":"Cross-Entropies","title":"EntropyHub._XDistEn.XDistEn","text":"XDist, Ppi = XDistEn(Sig1, Sig2)\n\nReturns the cross-distribution entropy estimate (XDist) and the corresponding distribution probabilities (Ppi) estimated between the data sequences contained in Sig1 and Sig2 using the default parameters: embedding dimension = 2, time delay = 1, binning method = 'Sturges', logarithm = base 2, normalisation = w.r.t # of histogram bins\n\nXDist, Ppi = XDistEn(Sig1::Union{AbstractMatrix{T}, AbstractVector{T}} where T<:Real, Sig2::Union{AbstractVector{T} where T<:Real, Nothing} = nothing; m::Int=2, tau::Int=1, Bins::Union{Int,String}=\"Sturges\", Logx::Real=2, 
Norm::Bool=true)\n\nReturns the cross-distribution entropy estimate (XDist) estimated between the data sequences contained in Sig1 and Sig2 using the specified 'keyword' arguments:\n\nArguments:\n\nm - Embedding Dimension, a positive integer [default: 2] \n\ntau - Time Delay, a positive integer [default: 1] \n\nBins - Histogram bin selection method for distance distribution, an integer > 1 indicating the number of bins, or one of the following strings {'sturges','sqrt','rice','doanes'} [default: 'sturges'] \n\nLogx - Logarithm base, a positive scalar [default: 2] ** enter 0 for natural log**\n\nNorm - Normalisation of DistEn value: [false] no normalisation. [true] normalises w.r.t # of histogram bins [default] \n\nSee also XSampEn, XApEn, XPermEn, XCondEn, DistEn, DistEn2D, XMSEn\n\nReferences:\n\n[1] Yuanyuan Wang and Pengjian Shang,\n \"Analysis of financial stock markets through the multiscale\n cross-distribution entropy based on the Tsallis entropy.\"\n Nonlinear Dynamics \n 94.2 (2018): 1361-1376.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Cross_Entropies/#EntropyHub._XSpecEn.XSpecEn","page":"Cross-Entropies","title":"EntropyHub._XSpecEn.XSpecEn","text":"XSpec, BandEn = XSpecEn(Sig1, Sig2)\n\nReturns the cross-spectral entropy estimate (XSpec) of the full cross-spectrum and the within-band entropy (BandEn) estimated between the data sequences contained in Sig1 and Sig2 using the default parameters: N-point FFT = 2 * max(length(Sig1/Sig2)) + 1, normalised band edge frequencies = [0 1], logarithm = base 2, normalisation = w.r.t # of spectrum/band frequency values.\n\nXSpec, BandEn = XSpecEn(Sig1::Union{AbstractMatrix{T}, AbstractVector{T}} where T<:Real, Sig2::Union{AbstractVector{T} where T<:Real, Nothing} = nothing; N::Union{Nothing,Int}=nothing, Freqs::Tuple{Real,Real}=(0,1), Logx::Real=exp(1), Norm::Bool=true)\n\nReturns the cross-spectral entropy (XSpec) and the within-band entropy (BandEn) estimate between the data sequences contained in Sig1 and Sig2 using 
the following specified 'keyword' arguments:\n\nArguments:\n\nN - Resolution of spectrum (N-point FFT), an integer > 1 \n\nFreqs - Normalised band edge frequencies, a 2 element tuple with values in the range [0 1], where 1 corresponds to the Nyquist frequency (Fs/2). Note: When no band frequencies are entered, BandEn == SpecEn \n\nLogx - Logarithm base, a positive scalar [default: base 2] ** enter 0 for natural log** \n\nNorm - Normalisation of XSpec value: [false] no normalisation. [true] normalises w.r.t # of spectrum/band frequency values [default] \n\nFor more info, see the EntropyHub guide\n\nSee also SpecEn, fft, XDistEn, periodogram, XSampEn, XApEn\n\nReferences:\n\n[1] Matthew W. Flood,\n \"XSpecEn - EntropyHub Project\"\n (2021) https://github.com/MattWillFlood/EntropyHub\n\n\n\n\n\n","category":"function"},{"location":"Examples/Example6/#Example-6:-Multiscale-Increment-Entropy","page":"Ex.6: Multiscale [Increment] Entropy","title":"Example 6: Multiscale Increment Entropy","text":"","category":"section"},{"location":"Examples/Example6/","page":"Ex.6: Multiscale [Increment] Entropy","title":"Ex.6: Multiscale [Increment] Entropy","text":"Import a signal of uniformly distributed pseudorandom integers in the range [1 8] and create a multiscale entropy object with the following parameters: EnType = IncrEn(), embedding dimension = 3, a quantifying resolution = 6, normalization = true.","category":"page"},{"location":"Examples/Example6/","page":"Ex.6: Multiscale [Increment] Entropy","title":"Ex.6: Multiscale [Increment] Entropy","text":"using EntropyHub # hide\nX = ExampleData(\"randintegers\");\nMobj = MSobject(IncrEn, m = 3, R = 6, Norm = true)","category":"page"},{"location":"Examples/Example6/","page":"Ex.6: Multiscale [Increment] Entropy","title":"Ex.6: Multiscale [Increment] Entropy","text":"Calculate the multiscale increment entropy over 5 temporal scales using the modified graining procedure where:","category":"page"},{"location":"Examples/Example6/","page":"Ex.6: Multiscale 
[Increment] Entropy","title":"Ex.6: Multiscale [Increment] Entropy","text":"y_j^{(\\tau)} = \\frac{1}{\\tau} \\sum_{i=(j-1)\\tau+1}^{j\\tau} x_i, \\quad 1 \\leq j \\leq \\frac{N}{\\tau}","category":"page"},{"location":"Examples/Example6/","page":"Ex.6: Multiscale [Increment] Entropy","title":"Ex.6: Multiscale [Increment] Entropy","text":"using EntropyHub # hide\nX = ExampleData(\"randintegers\"); # hide\nMobj = MSobject(IncrEn, m = 3, R = 6, Norm = true) # hide\nMSx, _ = MSEn(X, Mobj, Scales = 5, Methodx = \"modified\");\nMSx # hide","category":"page"},{"location":"Examples/Example6/","page":"Ex.6: Multiscale [Increment] Entropy","title":"Ex.6: Multiscale [Increment] Entropy","text":"Change the graining method to return generalized multiscale increment entropy.","category":"page"},{"location":"Examples/Example6/","page":"Ex.6: Multiscale [Increment] Entropy","title":"Ex.6: Multiscale [Increment] Entropy","text":"y_j^{(\\tau)} = \\frac{1}{\\tau} \\sum_{i=(j-1)\\tau+1}^{j\\tau} \\left( x_i - \\bar{x} \\right)^2, \\quad 1 \\leq j \\leq \\frac{N}{\\tau}","category":"page"},{"location":"Examples/Example6/","page":"Ex.6: Multiscale [Increment] Entropy","title":"Ex.6: Multiscale [Increment] Entropy","text":"using EntropyHub # hide\nX = ExampleData(\"randintegers\"); # hide\nMobj = MSobject(IncrEn, m = 3, R = 6, Norm = true) # hide\nMSx, _ = MSEn(X, Mobj, Scales = 5, Methodx = \"generalized\");\nMSx # hide","category":"page"},{"location":"Guide/Base_Entropies/#Base-Entropies","page":"Base Entropies","title":"Base Entropies","text":"","category":"section"},{"location":"Guide/Base_Entropies/","page":"Base Entropies","title":"Base Entropies","text":"Functions for estimating the entropy of a single univariate time series.","category":"page"},{"location":"Guide/Base_Entropies/","page":"Base Entropies","title":"Base Entropies","text":"The following functions also form the base entropy method used by Multiscale functions.","category":"page"},{"location":"Guide/Base_Entropies/","page":"Base Entropies","title":"Base Entropies","text":"These functions are 
directly available when EntropyHub is imported:","category":"page"},{"location":"Guide/Base_Entropies/","page":"Base Entropies","title":"Base Entropies","text":"julia> using EntropyHub\n\njulia> names(EntropyHub)","category":"page"},{"location":"Guide/Base_Entropies/","page":"Base Entropies","title":"Base Entropies","text":" :ApEn\n :AttnEn\n :BubbEn\n ⋮\n :hXMSEn\n :rMSEn\n :rXMSEn","category":"page"},{"location":"Guide/Base_Entropies/","page":"Base Entropies","title":"Base Entropies","text":"EntropyHub.ApEn\nEntropyHub.SampEn\nEntropyHub.FuzzEn\nEntropyHub.K2En\nEntropyHub.PermEn\nEntropyHub.CondEn\nEntropyHub.DistEn\nEntropyHub.SpecEn\nEntropyHub.DispEn\nEntropyHub.SyDyEn\nEntropyHub.IncrEn\nEntropyHub.CoSiEn\nEntropyHub.PhasEn\nEntropyHub.SlopEn\nEntropyHub.BubbEn\nEntropyHub.GridEn\nEntropyHub.EnofEn\nEntropyHub.AttnEn\nEntropyHub.RangEn\nEntropyHub.DivEn","category":"page"},{"location":"Guide/Base_Entropies/#EntropyHub._ApEn.ApEn","page":"Base Entropies","title":"EntropyHub._ApEn.ApEn","text":"Ap, Phi = ApEn(Sig)\n\nReturns the approximate entropy estimates Ap and the log-average number of matched vectors Phi for m = [0,1,2], estimated from the data sequence Sig using the default parameters: embedding dimension = 2, time delay = 1, radius distance threshold = 0.2*SD(Sig), logarithm = natural\n\nAp, Phi = ApEn(Sig::AbstractArray{T,1} where T<:Real; m::Int=2, tau::Int=1, r::Real=0.2*std(Sig,corrected=false), Logx::Real=exp(1))\n\nReturns the approximate entropy estimates Ap of the data sequence Sig for dimensions = [0,1,...,m] using the specified keyword arguments:\n\nArguments:\n\nm - Embedding Dimension, a positive integer\n\ntau - Time Delay, a positive integer\n\nr - Radius Distance Threshold, a positive scalar \n\nLogx - Logarithm base, a positive scalar\n\nSee also XApEn, SampEn, MSEn, FuzzEn, PermEn, CondEn, DispEn\n\nReferences:\n\n[1] Steven M. 
Pincus, \n \"Approximate entropy as a measure of system complexity.\" \n Proceedings of the National Academy of Sciences \n 88.6 (1991): 2297-2301.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Base_Entropies/#EntropyHub._SampEn.SampEn","page":"Base Entropies","title":"EntropyHub._SampEn.SampEn","text":"Samp, A, B = SampEn(Sig)\n\nReturns the sample entropy estimates Samp and the number of matched state vectors (m:B, m+1:A) for m = [0,1,2] estimated from the data sequence Sig using the default parameters: embedding dimension = 2, time delay = 1, radius threshold = 0.2*SD(Sig), logarithm = natural\n\nSamp, A, B, (Vcp, Ka, Kb) = SampEn(Sig, ..., Vcp = true)\n\nIf Vcp == true, an additional tuple (Vcp, Ka, Kb) is returned with the sample entropy estimates (Samp) and the number of matched state vectors (m: B, m+1: A). (Vcp, Ka, Kb) contains the variance of the conditional probabilities (Vcp), i.e. CP = A/B, and the number of overlapping matching vector pairs of lengths m+1 (Ka) and m (Kb), respectively. Note Vcp is undefined for the zeroth embedding dimension (m = 0) and due to the computational demand, will take substantially more time to return function outputs. See Appendix B in [2] for more info.\n\nSamp, A, B = SampEn(Sig::AbstractArray{T,1} where T<:Real; m::Int=2, tau::Int=1, r::Real=0.2*std(Sig,corrected=false), Logx::Real=exp(1), Vcp::Bool=false)\n\nReturns the sample entropy estimates Samp for dimensions = [0,1,...,m] estimated from the data sequence Sig using the specified keyword arguments:\n\nArguments:\n\nm - Embedding Dimension, a positive integer\n\ntau - Time Delay, a positive integer\n\nr - Radius Distance Threshold, a positive scalar \n\nLogx - Logarithm base, a positive scalar \n\nSee also ApEn, FuzzEn, PermEn, CondEn, XSampEn, SampEn2D, MSEn\n\nReferences:\n\n[1] Joshua S Richman and J. Randall Moorman. 
\n \"Physiological time-series analysis using approximate entropy\n and sample entropy.\" \n American Journal of Physiology-Heart and Circulatory Physiology (2000).\n\n[2] Douglas E Lake, Joshua S Richman, M.P. Griffin, J. Randall Moorman\n \"Sample entropy analysis of neonatal heart rate variability.\"\n American Journal of Physiology-Regulatory, Integrative and Comparative Physiology\n 283, no. 3 (2002): R789-R797.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Base_Entropies/#EntropyHub._FuzzEn.FuzzEn","page":"Base Entropies","title":"EntropyHub._FuzzEn.FuzzEn","text":"Fuzz, Ps1, Ps2 = FuzzEn(Sig)\n\nReturns the fuzzy entropy estimates Fuzz and the average fuzzy distances (m:Ps1, m+1:Ps2) for m = [1,2] estimated from the data sequence Sig using the default parameters: embedding dimension = 2, time delay = 1, fuzzy function (Fx) = \"default\", fuzzy function parameters (r) = [0.2, 2], logarithm = natural\n\nFuzz, Ps1, Ps2 = FuzzEn(Sig::AbstractArray{T,1} where T<:Real; m::Int=2, tau::Int=1, r::Union{Real,Tuple{Real,Real}}=(.2,2), Fx::String=\"default\", Logx::Real=exp(1))\n\nReturns the fuzzy entropy estimates Fuzz for dimensions = [1,...,m] estimated for the data sequence Sig using the specified keyword arguments:\n\nArguments:\n\nm - Embedding Dimension, a positive integer [default: 2]\n\ntau - Time Delay, a positive integer [default: 1]\n\nFx - Fuzzy function name, one of the following: {\"sigmoid\", \"modsampen\", \"default\", \"gudermannian\", \"bell\", \"triangular\", \"trapezoidal1\", \"trapezoidal2\", \"z_shaped\", \"gaussian\", \"constgaussian\"}\n\nr - Fuzzy function parameters, a 1 element scalar or a 2 element tuple of positive values. 
The r parameters for each fuzzy function are defined as follows: [default: [.2 2]]\n\n default: r(1) = divisor of the exponential argument\n r(2) = argument exponent (pre-division)\n sigmoid: r(1) = divisor of the exponential argument\n r(2) = value subtracted from argument (pre-division)\n modsampen: r(1) = divisor of the exponential argument\n r(2) = value subtracted from argument (pre-division)\n gudermannian: r = a scalar whose value is the numerator of\n argument to gudermannian function:\n GD(x) = atan(tanh(`r`/x))\n triangular: r = a scalar whose value is the threshold (corner point) of the triangular function.\n trapezoidal1: r = a scalar whose value corresponds to the upper (2r) and lower (r) corner points of the trapezoid.\n trapezoidal2: r(1) = a value corresponding to the upper corner point of the trapezoid.\n r(2) = a value corresponding to the lower corner point of the trapezoid.\n z_shaped: r = a scalar whose value corresponds to the upper (2r) and lower (r) corner points of the z-shape.\n bell: r(1) = divisor of the distance value\n r(2) = exponent of generalized bell-shaped function\n gaussian: r = a scalar whose value scales the slope of the Gaussian curve.\n constgaussian: r = a scalar whose value defines the lower threshold and shape of the Gaussian curve. \n [DEPRECATED] linear: r = an integer value. When r = 0, the\n argument of the exponential function is \n normalised between [0 1]. When r = 1,\n the minimum value of the exponential \n argument is set to 0.\n\nLogx - Logarithm base, a positive scalar [default: natural]\n\nFor further information on keyword arguments, see the EntropyHub guide.\n\nSee also SampEn, ApEn, PermEn, DispEn, XFuzzEn, FuzzEn2D, MSEn\n\nReferences:\n\n[1] Weiting Chen, et al.\n \"Characterization of surface EMG signal based on fuzzy entropy.\"\n IEEE Transactions on neural systems and rehabilitation engineering\n 15.2 (2007): 266-272.\n\n[2] Hong-Bo Xie, Wei-Xing He, and Hui Liu\n \"Measuring time series regularity using nonlinear\n similarity-based sample entropy.\"\n Physics Letters A\n 372.48 (2008): 7140-7146.\n\n[3] Hamed Azami, et al.\n \"Fuzzy Entropy Metrics for the Analysis of Biomedical Signals: \n Assessment and Comparison\"\n IEEE Access\n 7 (2019): 104833-104847\n\n\n\n\n\n","category":"function"},{"location":"Guide/Base_Entropies/#EntropyHub._K2En.K2En","page":"Base Entropies","title":"EntropyHub._K2En.K2En","text":"K2, Ci = K2En(Sig)\n\nReturns the Kolmogorov entropy estimates K2 and the correlation integrals Ci for m = [1,2] estimated from the data sequence Sig using the default parameters: embedding dimension = 2, time delay = 1, r = 0.2*SD(Sig), logarithm = natural\n\nK2, Ci = K2En(Sig::AbstractArray{T,1} where T<:Real; m::Int=2, tau::Int=1, r::Real=0.2*std(Sig,corrected=false), Logx::Real=exp(1))\n\nReturns the Kolmogorov entropy estimates K2 for dimensions = [1,...,m] estimated from the data sequence Sig using the 'keyword' arguments:\n\nArguments:\n\nm - Embedding Dimension, a positive integer\n\ntau - Time Delay, a positive integer\n\nr - Radius, a positive scalar \n\nLogx - Logarithm base, a positive scalar\n\nSee also DistEn, XK2En, MSEn\n\nReferences:\n\n[1] Peter Grassberger and Itamar Procaccia,\n \"Estimation of the Kolmogorov entropy from a chaotic signal.\" \n Physical review A 28.4 (1983): 2591.\n\n[2] Lin Gao, Jue Wang and Longwei Chen\n \"Event-related desynchronization and synchronization \n quantification in motor-related EEG by Kolmogorov entropy\"\n J Neural Eng. 2013 Jun;10(3):03602\n\n\n\n\n\n","category":"function"},{"location":"Guide/Base_Entropies/#EntropyHub._PermEn.PermEn","page":"Base Entropies","title":"EntropyHub._PermEn.PermEn","text":"Perm, Pnorm, cPE = PermEn(Sig)\n\nReturns the permutation entropy estimates Perm, the normalised permutation entropy Pnorm and the conditional permutation entropy cPE for m = [1,2] estimated from the data sequence Sig using the default parameters: embedding dimension = 2, time delay = 1, logarithm = base 2, normalisation = w.r.t #symbols (m-1) Note: using the standard PermEn estimation, Perm = 0 when m = 1. Note: It is recommended that signal length > 5m! (see [8] and Amigo et al., Europhys. Lett. 83:60005, 2008)\n\nPerm, Pnorm, cPE = PermEn(Sig, m)\n\nReturns the permutation entropy estimates Perm estimated from the data sequence Sig using the specified embedding dimensions = [1,...,m] with other default parameters as listed above.\n\nPerm, Pnorm, cPE = PermEn(Sig::AbstractArray{T,1} where T<:Real; m::Int=2, tau::Int=1, Typex::String=\"none\", tpx::Union{Real,Nothing}=nothing, Logx::Real=2, Norm::Bool=false)\n\nReturns the permutation entropy estimates Perm for dimensions = [1,...,m] estimated from the data sequence Sig using the specified 'keyword' arguments:\n\nArguments:\n\nm - Embedding Dimension, an integer > 1 \n\ntau - Time Delay, a positive integer\n\nLogx - Logarithm base, a positive scalar (enter 0 for natural log) \n\nNorm - Normalisation of PermEn value:\n\n false - normalises w.r.t log(# of permutation symbols [m-1]) - default\n true - normalises w.r.t log(# of all possible permutations [m!])\n * Note: Normalised permutation entropy is undefined for m = 1.\n ** Note: When Typex = 'uniquant' and Norm = true, normalisation\n is calculated w.r.t. 
log(tpx^m)\n\nTypex - Permutation entropy variation, one of the following: {`\"none\", \"uniquant\", \"finegrain\", \"modified\", \"ampaware\", \"weighted\", \"edge\", \"phase\"} See the EntropyHub guide for more info on PermEn variations. \n\ntpx - Tuning parameter for associated permutation entropy variation.\n\n [uniquant] 'tpx' is the L parameter, an integer > 1 (default = 4). \n [finegrain] 'tpx' is the alpha parameter, a positive scalar (default = 1)\n [ampaware] 'tpx' is the A parameter, a value in range [0 1] (default = 0.5)\n [edge] 'tpx' is the r sensitivity parameter, a scalar > 0 (default = 1)\n [phase] 'tpx' unwraps the instantaneous phase (angle of analytic signal) when tpx==1 (default = 0)\n See the EntropyHub guide for more info on PermEn variations.\n\nSee also XPermEn, MSEn, XMSEn, SampEn, ApEn, CondEn\n\nReferences:\n\n[1] Christoph Bandt and Bernd Pompe, \n \"Permutation entropy: A natural complexity measure for time series.\" \n Physical Review Letters,\n 88.17 (2002): 174102.\n\n[2] Xiao-Feng Liu, and Wang Yue,\n \"Fine-grained permutation entropy as a measure of natural \n complexity for time series.\" \n Chinese Physics B \n 18.7 (2009): 2690.\n\n[3] Chunhua Bian, et al.,\n \"Modified permutation-entropy analysis of heartbeat dynamics.\"\n Physical Review E\n 85.2 (2012) : 021906\n\n[4] Bilal Fadlallah, et al.,\n \"Weighted-permutation entropy: A complexity measure for time \n series incorporating amplitude information.\" \n Physical Review E \n 87.2 (2013): 022911.\n\n[5] Hamed Azami and Javier Escudero,\n \"Amplitude-aware permutation entropy: Illustration in spike \n detection and signal segmentation.\" \n Computer methods and programs in biomedicine,\n 128 (2016): 40-51.\n\n[6] Zhiqiang Huo, et al.,\n \"Edge Permutation Entropy: An Improved Entropy Measure for \n Time-Series Analysis,\" \n 45th Annual Conference of the IEEE Industrial Electronics Soc,\n (2019), 5998-6003\n\n[7] Zhe Chen, et al. 
\n \"Improved permutation entropy for measuring complexity of time\n series under noisy condition.\" \n Complexity \n 1403829 (2019).\n\n[8] Maik Riedl, Andreas Müller, and Niels Wessel,\n \"Practical considerations of permutation entropy.\" \n The European Physical Journal Special Topics \n 222.2 (2013): 249-262.\n\n[9] Kang Huan, Xiaofeng Zhang, and Guangbin Zhang,\n \"Phase permutation entropy: A complexity measure for nonlinear time \n series incorporating phase information.\" \n Physica A: Statistical Mechanics and its Applications \n 568 (2021): 125686.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Base_Entropies/#EntropyHub._CondEn.CondEn","page":"Base Entropies","title":"EntropyHub._CondEn.CondEn","text":"Cond, SEw, SEz = CondEn(Sig)\n\nReturns the corrected conditional entropy estimates (Cond) and the corresponding Shannon entropies (m: SEw, m+1: SEz) for m = [1,2] estimated from the data sequence (Sig) using the default parameters: embedding dimension = 2, time delay = 1, symbols = 6, logarithm = natural, normalisation = false Note: CondEn(m=1) returns the Shannon entropy of Sig.\n\nCond, SEw, SEz = CondEn(Sig::AbstractArray{T,1} where T<:Real; m::Int=2, tau::Int=1, c::Int=6, Logx::Real=exp(1), Norm::Bool=false)\n\nReturns the corrected conditional entropy estimates (Cond) from the data sequence (Sig) using the specified 'keyword' arguments:\n\nArguments:\n\nm - Embedding Dimension, an integer > 1 \n\ntau - Time Delay, a positive integer \n\nc - # of symbols, an integer > 1 \n\nLogx - Logarithm base, a positive scalar \n\nNorm - Normalisation of CondEn value: \n\n [false] no normalisation - default\n [true] normalises w.r.t Shannon entropy of data sequence `Sig`\n\nSee also XCondEn, MSEn, PermEn, DistEn, XPermEn\n\nReferences:\n\n[1] Alberto Porta, et al.,\n \"Measuring regularity by means of a corrected conditional\n entropy in sympathetic outflow.\" \n Biological cybernetics \n 78.1 (1998): 
71-78.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Base_Entropies/#EntropyHub._DistEn.DistEn","page":"Base Entropies","title":"EntropyHub._DistEn.DistEn","text":"Dist, Ppi = DistEn(Sig)\n\nReturns the distribution entropy estimate (Dist) and the corresponding distribution probabilities (Ppi) estimated from the data sequence (Sig) using the default parameters: embedding dimension = 2, time delay = 1, binning method = 'Sturges', logarithm = base 2, normalisation = w.r.t # of histogram bins\n\nDist, Ppi = DistEn(Sig::AbstractArray{T,1} where T<:Real; m::Int=2, tau::Int=1, Bins::Union{Int,String}=\"Sturges\", Logx::Real=2, Norm::Bool=true)\n\nReturns the distribution entropy estimate (Dist) estimated from the data sequence (Sig) using the specified 'keyword' arguments:\n\nArguments:\n\nm - Embedding Dimension, a positive integer \n\ntau - Time Delay, a positive integer \n\nBins - Histogram bin selection method for distance distribution, one of the following: \n\n an integer > 1 indicating the number of bins, or one of the \n following strings {'sturges','sqrt','rice','doanes'}\n [default: 'sturges']\n\nLogx - Logarithm base, a positive scalar (enter 0 for natural log) \n\nNorm - Normalisation of DistEn value: \n\n [false] no normalisation.\n [true] normalises w.r.t # of histogram bins - default\n\nSee also XDistEn, DistEn2D, MSEn, K2En\n\nReferences:\n\n[1] Li, Peng, et al.,\n \"Assessing the complexity of short-term heartbeat interval \n series by distribution entropy.\" \n Medical & biological engineering & computing \n 53.1 (2015): 77-87.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Base_Entropies/#EntropyHub._SpecEn.SpecEn","page":"Base Entropies","title":"EntropyHub._SpecEn.SpecEn","text":"Spec, BandEn = SpecEn(Sig)\n\nReturns the spectral entropy estimate of the full spectrum (Spec) and the within-band entropy (BandEn) estimated from the data sequence (Sig) using the default parameters: N-point FFT = 2*len(Sig) + 1, normalised band edge 
frequencies = [0 1], logarithm = base 2, normalisation = w.r.t # of spectrum/band frequency values.\n\nSpec, BandEn = SpecEn(Sig::AbstractArray{T,1} where T<:Real; N::Int=1 + (2*size(Sig,1)), Freqs::Tuple{Real,Real}=(0,1), Logx::Real=exp(1), Norm::Bool=true)\n\nReturns the spectral entropy (Spec) and the within-band entropy (BandEn) estimate for the data sequence (Sig) using the specified 'keyword' arguments:\n\nArguments:\n\nN - Resolution of spectrum (N-point FFT), an integer > 1 \n\nFreqs - Normalised band edge frequencies, a 2 element tuple with values \n\n in range [0 1] where 1 corresponds to the Nyquist frequency (Fs/2).\n Note: When no band frequencies are entered, BandEn == SpecEn\n\nLogx - Logarithm base, a positive scalar (enter 0 for natural log) \n\nNorm - Normalisation of Spec value:\n\n [false] no normalisation.\n [true] normalises w.r.t # of spectrum/band frequency values - default.\n\nFor more info, see the EntropyHub guide.\n\nSee also XSpecEn, fft, MSEn, XMSEn\n\nReferences:\n\n[1] G.E. Powell and I.C. 
Percival,\n \"A spectral entropy method for distinguishing regular and \n irregular motion of Hamiltonian systems.\" \n Journal of Physics A: Mathematical and General \n 12.11 (1979): 2053.\n\n[2] Tsuyoshi Inouye, et al.,\n \"Quantification of EEG irregularity by use of the entropy of \n the power spectrum.\" \n Electroencephalography and clinical neurophysiology \n 79.3 (1991): 204-210.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Base_Entropies/#EntropyHub._DispEn.DispEn","page":"Base Entropies","title":"EntropyHub._DispEn.DispEn","text":"Dispx, RDE = DispEn(Sig)\n\nReturns the dispersion entropy (Dispx) and the reverse dispersion entropy (RDE) estimated from the data sequence (Sig) using the default parameters: embedding dimension = 2, time delay = 1, symbols = 3, logarithm = natural, data transform = normalised cumulative density function (ncdf)\n\nDispx, RDE = DispEn(Sig::AbstractArray{T,1} where T<:Real; c::Int=3, m::Int=2, tau::Int=1, Typex::String=\"ncdf\", Logx::Real=exp(1), Fluct::Bool=false, Norm::Bool=false, rho::Real=1)\n\nReturns the dispersion entropy (Dispx) and the reverse dispersion entropy (RDE) estimated from the data sequence (Sig) using the specified 'keyword' arguments:\n\nArguments:\n\nm - Embedding Dimension, a positive integer\n\ntau - Time Delay, a positive integer\n\nc - Number of symbols, an integer > 1\n\nTypex - Type of data-to-symbolic sequence transform, one of the following: {\"linear\", \"kmeans\" ,\"ncdf\", \"finesort\", \"equal\"}\n\n See the EntropyHub guide for more info on these transforms.\n\nLogx - Logarithm base, a positive scalar\n\nFluct - When Fluct == true, DispEn returns the fluctuation-based Dispersion entropy. 
[default: false]\n\nNorm - Normalisation of Dispx and RDE value: [false] no normalisation - default [true] normalises w.r.t number of possible dispersion patterns (c^m or (2c -1)^m-1 if Fluct == true).\n\nrho - If Typex == 'finesort', rho is the tuning parameter (default: 1)\n\nSee also PermEn, SyDyEn, MSEn\n\nReferences:\n\n[1] Mostafa Rostaghi and Hamed Azami,\n \"Dispersion entropy: A measure for time-series analysis.\" \n IEEE Signal Processing Letters \n 23.5 (2016): 610-614.\n\n[2] Hamed Azami and Javier Escudero,\n \"Amplitude-and fluctuation-based dispersion entropy.\" \n Entropy \n 20.3 (2018): 210.\n\n[3] Li Yuxing, Xiang Gao and Long Wang,\n \"Reverse dispersion entropy: A new complexity measure for \n sensor signal.\" \n Sensors \n 19.23 (2019): 5203.\n\n[4] Wenlong Fu, et al.,\n \"Fault diagnosis for rolling bearings based on fine-sorted \n dispersion entropy and SVM optimized with mutation SCA-PSO.\"\n Entropy\n 21.4 (2019): 404.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Base_Entropies/#EntropyHub._SyDyEn.SyDyEn","page":"Base Entropies","title":"EntropyHub._SyDyEn.SyDyEn","text":"SyDy, Zt = SyDyEn(Sig)\n\nReturns the symbolic dynamic entropy (SyDy) and the symbolic sequence (Zt) of the data sequence (Sig) using the default parameters: embedding dimension = 2, time delay = 1, symbols = 3, logarithm = natural, symbolic partition type = maximum entropy partitioning (MEP), normalisation = normalises w.r.t # possible vector permutations (c^m) \n\nSyDy, Zt = SyDyEn(Sig::AbstractArray{T,1} where T<:Real; m::Int=2, tau::Int=1, c::Int=3, Typex::String=\"MEP\", Logx::Real=exp(1), Norm::Bool=true)\n\nReturns the symbolic dynamic entropy (SyDy) and the symbolic sequence (Zt) of the data sequence (Sig) using the specified 'keyword' arguments:\n\nArguments:\n\nm - Embedding Dimension, a positive integer \n\ntau - Time Delay, a positive integer \n\nc - Number of symbols, an integer > 1 \n\nTypex - Type of symbolic sequence partitioning method, one of the 
following: \n\n {\"linear\",\"uniform\",\"MEP\"(default),\"kmeans\"}\n\nLogx - Logarithm base, a positive scalar \n\nNorm - Normalisation of SyDyEn value: \n\n [false] no normalisation \n [true] normalises w.r.t # possible vector permutations (c^m+1) - default\n\nSee the EntropyHub guide for more info on these parameters.\n\nSee also DispEn, PermEn, CondEn, SampEn, MSEn\n\nReferences:\n\n[1] Yongbo Li, et al.,\n \"A fault diagnosis scheme for planetary gearboxes using \n modified multi-scale symbolic dynamic entropy and mRMR feature \n selection.\" \n Mechanical Systems and Signal Processing \n 91 (2017): 295-312. \n\n[2] Jian Wang, et al.,\n \"Fault feature extraction for multiple electrical faults of \n aviation electro-mechanical actuator based on symbolic dynamics\n entropy.\" \n IEEE International Conference on Signal Processing, \n Communications and Computing (ICSPCC), 2015.\n\n[3] Venkatesh Rajagopalan and Asok Ray,\n \"Symbolic time series analysis via wavelet-based partitioning.\"\n Signal processing \n 86.11 (2006): 3309-3320.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Base_Entropies/#EntropyHub._IncrEn.IncrEn","page":"Base Entropies","title":"EntropyHub._IncrEn.IncrEn","text":"Incr = IncrEn(Sig)\n\nReturns the increment entropy (Incr) estimate of the data sequence (Sig) using the default parameters: embedding dimension = 2, time delay = 1, quantifying resolution = 4, logarithm = base 2,\n\nIncr = IncrEn(Sig::AbstractArray{T,1} where T<:Real; m::Int=2, tau::Int=1, R::Int=4, Logx::Real=2, Norm::Bool=false)\n\nReturns the increment entropy (Incr) estimate of the data sequence (Sig) using the specified 'keyword' arguments:\n\nArguments:\n\nm - Embedding Dimension, an integer > 1 \n\ntau - Time Delay, a positive integer \n\nR - Quantifying resolution, a positive scalar \n\nLogx - Logarithm base, a positive scalar (enter 0 for natural log) \n\nNorm - Normalisation of IncrEn value: \n\n [false] no normalisation - default\n [true] normalises w.r.t 
embedding dimension (m-1).\n\nSee also PermEn, SyDyEn, MSEn\n\nReferences:\n\n[1] Xiaofeng Liu, et al.,\n \"Increment entropy as a measure of complexity for time series.\"\n Entropy\n 18.1 (2016): 22.1.\n\n*** \"Correction on Liu, X.; Jiang, A.; Xu, N.; Xue, J. - Increment \n Entropy as a Measure of Complexity for Time Series,\n Entropy 2016, 18, 22.\" \n Entropy \n 18.4 (2016): 133.\n\n[2] Xiaofeng Liu, et al.,\n \"Appropriate use of the increment entropy for \n electrophysiological time series.\" \n Computers in biology and medicine \n 95 (2018): 13-23.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Base_Entropies/#EntropyHub._CoSiEn.CoSiEn","page":"Base Entropies","title":"EntropyHub._CoSiEn.CoSiEn","text":"CoSi, Bm = CoSiEn(Sig)\n\nReturns the cosine similarity entropy (CoSi) and the corresponding global probabilities estimated from the data sequence (Sig) using the default parameters: embedding dimension = 2, time delay = 1, angular threshold = .1, logarithm = base 2,\n\nCoSi, Bm = CoSiEn(Sig::AbstractArray{T,1} where T<:Real; m::Int=2, tau::Int=1, r::Real=.1, Logx::Real=2, Norm::Int=0)\n\nReturns the cosine similarity entropy (CoSi) estimated from the data sequence (Sig) using the specified 'keyword' arguments:\n\nArguments:\n\nm - Embedding Dimension, an integer > 1 \n\ntau - Time Delay, a positive integer \n\nr - Angular threshold, a value in range [0 < r < 1] \n\nLogx - Logarithm base, a positive scalar (enter 0 for natural log) \n\nNorm - Normalisation of Sig, one of the following integers: \n\n [0] no normalisation - default\n [1] normalises `Sig` by removing median(`Sig`)\n [2] normalises `Sig` by removing mean(`Sig`)\n [3] normalises `Sig` w.r.t. 
SD(`Sig`)\n [4] normalises `Sig` values to range [-1 1]\n\nSee also PhasEn, SlopEn, GridEn, MSEn, cMSEn\n\nReferences:\n\n[1] Theerasak Chanwimalueang and Danilo Mandic,\n \"Cosine similarity entropy: Self-correlation-based complexity\n analysis of dynamical systems.\"\n Entropy \n 19.12 (2017): 652.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Base_Entropies/#EntropyHub._PhasEn.PhasEn","page":"Base Entropies","title":"EntropyHub._PhasEn.PhasEn","text":"Phas = PhasEn(Sig)\n\nReturns the phase entropy (Phas) estimate of the data sequence (Sig) using the default parameters: angular partitions = 4, time delay = 1, logarithm = natural,\n\nPhas = PhasEn(Sig::AbstractArray{T,1} where T<:Real; K::Int=4, tau::Int=1, Logx::Real=exp(1), Norm::Bool=true, Plotx::Bool=false)\n\nReturns the phase entropy (Phas) estimate of the data sequence (Sig) using the specified 'keyword' arguments:\n\nArguments:\n\nK - Angular partitions (coarse graining), an integer > 1 \n\n *Note: Division of partitions begins along the positive x-axis. As this point is somewhat arbitrary, it is\n recommended to use even-numbered (preferably multiples of 4) partitions for sake of symmetry.\n\ntau - Time Delay, a positive integer \n\nLogx - Logarithm base, a positive scalar \n\nNorm - Normalisation of Phas value: \n\n [false] no normalisation\n [true] normalises w.r.t. 
the number of partitions Log(`K`)\n\nPlotx - When Plotx == true, returns Poincaré plot (default: false) \n\nSee also SampEn, ApEn, GridEn, MSEn, SlopEn, CoSiEn, BubbEn\n\nReferences:\n\n[1] Ashish Rohila and Ambalika Sharma,\n \"Phase entropy: a new complexity measure for heart rate\n variability.\" \n Physiological measurement\n 40.10 (2019): 105006.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Base_Entropies/#EntropyHub._SlopEn.SlopEn","page":"Base Entropies","title":"EntropyHub._SlopEn.SlopEn","text":"Slop = SlopEn(Sig)\n\nReturns the slope entropy (Slop) estimates for embedding dimensions [2, ..., m] of the data sequence (Sig) using the default parameters: embedding dimension = 2, time delay = 1, angular thresholds = [5 45], logarithm = base 2 \n\nSlop = SlopEn(Sig::AbstractArray{T,1} where T<:Real; m::Int=2, tau::Int=1, Lvls::AbstractArray{T,1} where T<:Real=[5, 45], Logx::Real=2, Norm::Bool=true)\n\nReturns the slope entropy (Slop) estimate of the data sequence (Sig) using the specified 'keyword' arguments:\n\nArguments:\n\nm - Embedding Dimension, an integer > 1 \t\n\n SlopEn returns estimates for each dimension [2,...,m]\n\ntau - Time Delay, a positive integer \n\nLvls - Angular thresholds, a vector of monotonically increasing values in the range [0 90] degrees.\n\nLogx - Logarithm base, a positive scalar (enter 0 for natural log) \n\nNorm - Normalisation of SlopEn value, a boolean operator: \n\n [false] no normalisation\n [true] normalises w.r.t. 
the number of patterns found (default)\n\nSee also PhasEn, GridEn, MSEn, CoSiEn, SampEn, ApEn\n\nReferences:\n\n[1] David Cuesta-Frau,\n \"Slope Entropy: A New Time Series Complexity Estimator Based on\n Both Symbolic Patterns and Amplitude Information.\" \n Entropy \n 21.12 (2019): 1167.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Base_Entropies/#EntropyHub._BubbEn.BubbEn","page":"Base Entropies","title":"EntropyHub._BubbEn.BubbEn","text":"Bubb, H = BubbEn(Sig)\n\nReturns the bubble entropy (Bubb) and the conditional Rényi entropy (H) estimates of dimension m = 2 from the data sequence (Sig) using the default parameters: embedding dimension = 2, time delay = 1, logarithm = natural \n\nBubb, H = BubbEn(Sig::AbstractArray{T,1} where T<:Real; m::Int=2, tau::Int=1, Logx::Real=exp(1))\n\nReturns the bubble entropy (Bubb) estimate of the data sequence (Sig) using the specified 'keyword' arguments:\n\nArguments:\n\nm - Embedding Dimension, an integer > 1 \n\n BubbEn returns estimates for each dimension [2,...,m]\n\ntau - Time Delay, a positive integer \n\nLogx - Logarithm base, a positive scalar \n\nSee also PhasEn, MSEn\n\nReferences:\n\n[1] George Manis, M.D. 
Aktaruzzaman and Roberto Sassi,\n \"Bubble entropy: An entropy almost free of parameters.\"\n IEEE Transactions on Biomedical Engineering\n 64.11 (2017): 2711-2718.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Base_Entropies/#EntropyHub._GridEn.GridEn","page":"Base Entropies","title":"EntropyHub._GridEn.GridEn","text":"GDE, GDR, _ = GridEn(Sig)\n\nReturns the gridded distribution entropy (GDE) and the gridded distribution rate (GDR) estimated from the data sequence (Sig) using the default parameters: grid coarse-grain = 3, time delay = 1, logarithm = base 2\n\nGDE, GDR, PIx, GIx, SIx, AIx = GridEn(Sig)\n\nIn addition to GDE and GDR, GridEn returns the following indices estimated for the data sequence (Sig) using the default parameters: [PIx] - Percentage of points below the line of identity (LI) [GIx] - Proportion of point distances above the LI [SIx] - Ratio of phase angles (w.r.t. LI) of the points above the LI [AIx] - Ratio of the cumulative area of sectors of points above the LI\n\nGDE, GDR, ..., = GridEn(Sig::AbstractArray{T,1} where T<:Real; m::Int=3, tau::Int=1, Logx::Real=exp(1), Plotx::Bool=false)\n\nReturns the gridded distribution entropy (GDE) estimated from the data sequence (Sig) using the specified 'keyword' arguments:\n\nArguments:\n\nm - Grid coarse-grain (m x m sectors), an integer > 1 \n\ntau - Time Delay, a positive integer \n\nLogx - Logarithm base, a positive scalar \n\nPlotx - When Plotx == true, returns gridded Poincaré plot and a bivariate histogram of the grid point distribution (default: false) \n\nSee also PhasEn, CoSiEn, SlopEn, BubbEn, MSEn\n\nReferences:\n\n[1] Chang Yan, et al.,\n \"Novel gridded descriptors of poincaré plot for analyzing \n heartbeat interval time-series.\" \n Computers in biology and medicine \n 109 (2019): 280-289.\n\n[2] Chang Yan, et al. \n \"Area asymmetry of heart rate variability signal.\" \n Biomedical engineering online \n 16.1 (2017): 1-14.\n\n[3] Alberto Porta, et al.,\n \"Temporal asymmetries of short-term heart period variability \n are linked to autonomic regulation.\" \n American Journal of Physiology-Regulatory, Integrative and \n Comparative Physiology \n 295.2 (2008): R550-R557.\n\n[4] C.K. Karmakar, A.H. Khandoker and M. Palaniswami,\n \"Phase asymmetry of heart rate variability signal.\" \n Physiological measurement \n 36.2 (2015): 303.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Base_Entropies/#EntropyHub._EnofEn.EnofEn","page":"Base Entropies","title":"EntropyHub._EnofEn.EnofEn","text":"EoE, AvEn, S2 = EnofEn(Sig)\n\nReturns the entropy of entropy (EoE), the average Shannon entropy (AvEn), and the number of levels (S2) across all windows estimated from the data sequence (Sig) using the default parameters: window length (samples) = 10, slices = 10, logarithm = natural, heartbeat interval range (xmin, xmax) = (min(Sig), max(Sig))\n\nEoE, AvEn, S2 = EnofEn(Sig::AbstractArray{T,1} where T<:Real; tau::Int=10, S::Int=10, Xrange::Tuple{Real,Real}, Logx::Real=exp(1))\n\nReturns the entropy of entropy (EoE) estimated from the data sequence (Sig) using the specified 'keyword' arguments:\n\nArguments:\n\ntau - Window length, an integer > 1 \n\nS - Number of slices (s1,s2), a two-element tuple of integers > 2 \n\nXrange - The min and max heartbeat interval, a two-element tuple where X[1] <= X[2]\n\nLogx - Logarithm base, a positive scalar \n\nSee also SampEn, MSEn, ApEn\n\nReferences:\n\n[1] Chang Francis Hsu, et al.,\n \"Entropy of entropy: Measurement of dynamical complexity for\n biological systems.\" \n Entropy \n 19.10 (2017): 550.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Base_Entropies/#EntropyHub._AttnEn.AttnEn","page":"Base Entropies","title":"EntropyHub._AttnEn.AttnEn","text":"Av4, (Hxx,Hnn,Hxn,Hnx) = AttnEn(Sig)\n\nReturns the attention entropy (Av4) 
calculated as the average of the sub-entropies (Hxx,Hxn,Hnn,Hnx) estimated from the data sequence (Sig) using a base-2 logarithm.\n\nAv4, (Hxx, Hnn, Hxn, Hnx) = AttnEn(Sig::AbstractArray{T,1} where T<:Real; Logx::Real=2)\n\nReturns the attention entropy (Av4) and the sub-entropies (Hxx,Hnn,Hxn,Hnx) from the data sequence (Sig) where, Hxx: entropy of local-maxima intervals Hnn: entropy of local minima intervals Hxn: entropy of intervals between local maxima and subsequent minima Hnx: entropy of intervals between local minima and subsequent maxima\n\nArguments:\n\nLogx - Logarithm base, a positive scalar (Enter 0 for natural logarithm)\n\nSee also EnofEn, SpecEn, XSpecEn, PermEn, MSEn\n\nReferences:\n\n[1] Jiawei Yang, et al.,\n \"Classification of Interbeat Interval Time-series Using \n Attention Entropy.\" \n IEEE Transactions on Affective Computing \n (2020)\n\n\n\n\n\n","category":"function"},{"location":"Guide/Base_Entropies/#EntropyHub._RangEn.RangEn","page":"Base Entropies","title":"EntropyHub._RangEn.RangEn","text":"Rangx, A, B = RangEn(Sig)\n\nReturns the range entropy estimate (Rangx) and the number of matched state vectors (m: B, m+1: A) estimated from the data sequence (Sig) using the sample entropy algorithm and the following default parameters: embedding dimension = 2, time delay = 1, radius threshold = 0.2, logarithm = natural. 
\n\nRangx, A, B = RangEn(Sig, keyword = value, ...)\n\nReturns the range entropy estimates (Rangx) for dimensions = m estimated for the data sequence (Sig) using the specified keyword arguments:\n\nArguments:\n\nm - Embedding Dimension, a positive integer \n\ntau - Time Delay, a positive integer \n\nr - Radius Distance Threshold, a positive value between 0 and 1 \n\nMethodx - Base entropy method, either 'SampEn' [default] or 'ApEn' \n\nLogx - Logarithm base, a positive scalar \n\nSee also ApEn, SampEn, FuzzEn, MSEn\n\nReferences:\n\n[1] Omidvarnia, Amir, et al.\n \"Range entropy: A bridge between signal complexity and self-similarity\"\n Entropy \n 20.12 (2018): 962.\n \n[2] Joshua S Richman and J. Randall Moorman. \n \"Physiological time-series analysis using approximate entropy\n and sample entropy.\" \n American Journal of Physiology-Heart and Circulatory Physiology \n 2000\n\n\n\n\n\n","category":"function"},{"location":"Guide/Base_Entropies/#EntropyHub._DivEn.DivEn","page":"Base Entropies","title":"EntropyHub._DivEn.DivEn","text":"Div, CDEn, Bm = DivEn(Sig)\n\nReturns the diversity entropy (Div), the cumulative diversity entropy (CDEn), and the corresponding probabilities (Bm) estimated from the data sequence (Sig) using the default parameters: embedding dimension = 2, time delay = 1, #bins = 5, logarithm = natural.\n\nDiv, CDEn, Bm = DivEn(Sig::AbstractArray{T,1} where T<:Real; m::Int=2, tau::Int=1, r::Int=5, Logx::Real=exp(1))\n\nReturns the diversity entropy (Div) estimated from the data sequence (Sig) using the specified 'keyword' arguments:\n\nArguments:\n\nm - Embedding Dimension, an integer > 1 \n\ntau - Time Delay, a positive integer \n\nr - Histogram bins #: either \n\n * an integer [1 < `r`] representing the number of bins\n * a vector of 3 or more increasing values in range [-1 1] representing the bin edges, including the rightmost edge.\n\nLogx - Logarithm base, a positive scalar (Enter 0 for natural logarithm)\n\nSee also CoSiEn, PhasEn, SlopEn, GridEn, 
MSEn\n\nReferences:\n\n[1] X. Wang, S. Si and Y. Li, \n \"Multiscale Diversity Entropy: A Novel Dynamical Measure for Fault \n Diagnosis of Rotating Machinery,\" \n IEEE Transactions on Industrial Informatics,\n vol. 17, no. 8, pp. 5419-5429, Aug. 2021\n \n[2] Y. Wang, M. Liu, Y. Guo, F. Shu, C. Chen and W. Chen, \n \"Cumulative Diversity Pattern Entropy (CDEn): A High-Performance, \n Almost-Parameter-Free Complexity Estimator for Nonstationary Time Series,\"\n IEEE Transactions on Industrial Informatics\n vol. 19, no. 9, pp. 9642-9653, Sept. 2023\n\n\n\n\n\n","category":"function"},{"location":"Examples/Example8/#Example-8:-Composite-Multiscale-Cross-Approximate-Entropy","page":"Ex.8: Composite Multiscale Cross-Approximate Entropy","title":"Example 8: Composite Multiscale Cross-Approximate Entropy","text":"","category":"section"},{"location":"Examples/Example8/","page":"Ex.8: Composite Multiscale Cross-Approximate Entropy","title":"Ex.8: Composite Multiscale Cross-Approximate Entropy","text":"Import two signals of uniformly distributed pseudorandom integers in the range [1 8] and create a multiscale entropy object with the following parameters: EnType = XApEn(), embedding dimension = 2, time delay = 2, radius distance threshold = 0.5","category":"page"},{"location":"Examples/Example8/","page":"Ex.8: Composite Multiscale Cross-Approximate Entropy","title":"Ex.8: Composite Multiscale Cross-Approximate Entropy","text":"using EntropyHub # hide\nX = ExampleData(\"randintegers2\");\nMobj = MSobject(XApEn, m = 2, tau = 2, r = 0.5)\nMobj # hide","category":"page"},{"location":"Examples/Example8/","page":"Ex.8: Composite Multiscale Cross-Approximate Entropy","title":"Ex.8: Composite Multiscale Cross-Approximate Entropy","text":"Calculate the composite multiscale cross-approximate entropy over 3 temporal scales where the radius distance threshold value (r) specified by Mobj becomes scaled by the standard deviation of the signal at each 
scale.","category":"page"},{"location":"Examples/Example8/","page":"Ex.8: Composite Multiscale Cross-Approximate Entropy","title":"Ex.8: Composite Multiscale Cross-Approximate Entropy","text":"using EntropyHub # hide\nX = ExampleData(\"randintegers2\"); # hide \nMobj = MSobject(XApEn, m = 2, tau = 2, r = 0.5) # hide\nMSx, _ = cXMSEn(X[:,1], X[:,2], Mobj, Scales = 3, RadNew = 1)\nMSx # hide","category":"page"},{"location":"Examples/Example10/#Example-10:-Bidimensional-Fuzzy-Entropy","page":"Ex.10: Bidimensional Fuzzy Entropy","title":"Example 10: Bidimensional Fuzzy Entropy","text":"","category":"section"},{"location":"Examples/Example10/","page":"Ex.10: Bidimensional Fuzzy Entropy","title":"Ex.10: Bidimensional Fuzzy Entropy","text":"Import an image of a Mandelbrot fractal as a matrix.","category":"page"},{"location":"Examples/Example10/","page":"Ex.10: Bidimensional Fuzzy Entropy","title":"Ex.10: Bidimensional Fuzzy Entropy","text":"X = ExampleData(\"mandelbrot_Mat\");\n\nusing Plots\nheatmap(X, background_color=\"black\")","category":"page"},{"location":"Examples/Example10/","page":"Ex.10: Bidimensional Fuzzy Entropy","title":"Ex.10: Bidimensional Fuzzy Entropy","text":"(Image: AlmondBread)","category":"page"},{"location":"Examples/Example10/","page":"Ex.10: Bidimensional Fuzzy Entropy","title":"Ex.10: Bidimensional Fuzzy Entropy","text":"Calculate the bidimensional fuzzy entropy in trits (logarithm base 3) with a template matrix of size [8 x 5], and a time delay (tau) of 2 using a 'constgaussian' fuzzy membership function (r=24).","category":"page"},{"location":"Examples/Example10/","page":"Ex.10: Bidimensional Fuzzy Entropy","title":"Ex.10: Bidimensional Fuzzy Entropy","text":"using EntropyHub # hide\nX = ExampleData(\"mandelbrot_Mat\"); # hide\nFE2D = FuzzEn2D(X, m = (8, 5), tau = 2, Fx = \"constgaussian\", r = 24, Logx = 3)","category":"page"},{"location":"","page":"Home","title":"Home","text":"CurrentModule = 
EntropyHub","category":"page"},{"location":"","page":"Home","title":"Home","text":"(Image: EH4J)","category":"page"},{"location":"#EntropyHub","page":"Home","title":"EntropyHub","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"An Open-Source Toolkit For Entropic Time Series Analysis","category":"page"},{"location":"#Introduction","page":"Home","title":"Introduction","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"This toolkit provides a wide range of functions to calculate different entropy statistics. There is an ever-growing range of information-theoretic and dynamical systems entropy measures presented in the scientific literature. The goal of EntropyHub.jl is to integrate the many established entropy methods in one open-source package with extensive documentation and a consistent syntax [that is also accessible in multiple programming languages (Matlab, Python)].","category":"page"},{"location":"#About","page":"Home","title":"About","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Information and uncertainty can be regarded as two sides of the same coin: the more uncertainty there is, the more information we gain by removing that uncertainty. In the context of information and probability theory, Entropy quantifies that uncertainty. ","category":"page"},{"location":"","page":"Home","title":"Home","text":"Various measures have been derived to estimate entropy (uncertainty) from discrete time series, each seeking to best capture the uncertainty of the system under examination. 
This has resulted in many entropy statistics from approximate entropy and sample entropy, to multiscale sample entropy and refined-composite multiscale cross-sample entropy.","category":"page"},{"location":"","page":"Home","title":"Home","text":"The goal of EntropyHub is to provide a comprehensive set of functions with a simple and consistent syntax that allows the user to augment parameters at the command line, enabling a range from basic to advanced entropy methods to be implemented with ease.","category":"page"},{"location":"","page":"Home","title":"Home","text":"warning: NOTE:\nIt is important to clarify that the entropy functions herein described estimate entropy in the context of probability theory and information theory as defined by Shannon, and not thermodynamic or other entropies from classical physics.","category":"page"},{"location":"#Installation","page":"Home","title":"Installation","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Using the Julia REPL:","category":"page"},{"location":"","page":"Home","title":"Home","text":"julia> using Pkg; Pkg.add(\"EntropyHub\")","category":"page"},{"location":"","page":"Home","title":"Home","text":"or","category":"page"},{"location":"","page":"Home","title":"Home","text":"julia> ] \npkg> add EntropyHub","category":"page"},{"location":"","page":"Home","title":"Home","text":"To get the latest version of EntropyHub directly from GitHub:","category":"page"},{"location":"","page":"Home","title":"Home","text":"julia> ] \npkg> add https://github.com/MattWillFlood/EntropyHub.jl","category":"page"},{"location":"#Citing","page":"Home","title":"Citing","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"EntropyHub is licensed under the Apache License (Version 2.0) and is free to use by all on condition that the following reference be included on any outputs realized using the 
software:","category":"page"},{"location":"","page":"Home","title":"Home","text":"Matthew W. Flood (2021),\nEntropyHub: An Open-Source Toolkit for Entropic Time Series Analysis,\nPLoS ONE 16(11):e0259448\nDOI: 10.1371/journal.pone.0259448\nwww.EntropyHub.xyz ","category":"page"},{"location":"","page":"Home","title":"Home","text":"__________________________________________________________________","category":"page"},{"location":"","page":"Home","title":"Home","text":" © Copyright 2024 Matthew W. Flood, EntropyHub\n Licensed under the Apache License, Version 2.0 (the \"License\");\n you may not use this file except in compliance with the License.\n You may obtain a copy of the License at\n \n http://www.apache.org/licenses/LICENSE-2.0\n \n Unless required by applicable law or agreed to in writing, software\n distributed under the License is distributed on an \"AS IS\" BASIS,\n WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n See the License for the specific language governing permissions and\n limitations under the License.\n \n For Terms of Use see https://github.com/MattWillFlood/EntropyHub","category":"page"},{"location":"","page":"Home","title":"Home","text":"If you find this package useful, please consider starring it on GitHub and Julia Packages (or MatLab File Exchange and PyPI). 
This helps us to gauge user satisfaction.","category":"page"},{"location":"#Functions","page":"Home","title":"Functions","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"EntropyHub functions fall into 5 categories: ","category":"page"},{"location":"","page":"Home","title":"Home","text":"Base functions for estimating the entropy of a single univariate time series.\nCross functions for estimating the entropy between two univariate time series.\nBidimensional functions for estimating the entropy of a two-dimensional univariate matrix.\nMultiscale functions for estimating the multiscale entropy of a single univariate time series using any of the Base entropy functions.\nMultiscale Cross functions for estimating the multiscale entropy between two univariate time series using any of the Cross-entropy functions.","category":"page"},{"location":"#Contact","page":"Home","title":"Contact","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"For general queries and information about EntropyHub, contact: info@entropyhub.xyz ","category":"page"},{"location":"","page":"Home","title":"Home","text":"If you have any questions or need help using the package, please contact us at: help@entropyhub.xyz ","category":"page"},{"location":"","page":"Home","title":"Home","text":"If you notice or identify any issues, please do not hesitate to contact us at: fix@entropyhub.xyz ","category":"page"},{"location":"","page":"Home","title":"Home","text":"We will do our best to help you with any relevant issues that you may have.","category":"page"},{"location":"","page":"Home","title":"Home","text":"If you come across any errors or technical issues, you can raise these under the issues tab on the EntropyHub.jl GitHub page. 
Similarly, if you have any suggestions or recommendations on how this package can be improved, please let us know.","category":"page"},{"location":"","page":"Home","title":"Home","text":"Thank you for using EntropyHub,","category":"page"},{"location":"","page":"Home","title":"Home","text":"Matt","category":"page"},{"location":"","page":"Home","title":"Home","text":" ___ _ _ _____ _____ ____ ____ _ _ \n | _|| \\ | ||_ _|| \\| || || \\ / | ___________ \n | \\_ | \\| | | | | __/| || __| \\ \\_/ / / _______ \\\n | _|| \\ \\ | | | | \\ | || | \\ / | / ___ \\ |\n | \\_ | |\\ | | | | |\\ \\ | || | | | | | / \\ | | \n |___||_| \\_| |_| |_| \\_||____||_| |_| _|_|__\\___/ | | \n _ _ _ _ ____ / |__\\______\\/ | \n | | | || | | || \\ An open-source | /\\______\\__|_/ \n | |_| || | | || | toolkit for | | / \\ | | \n | _ || | | || \\ entropic time- | | \\___/ | | \n | | | || |_| || \\ series analysis | \\_______/ |\n |_| |_|\\_____/|_____/ \\___________/","category":"page"},{"location":"","page":"Home","title":"Home","text":"Documentation for EntropyHub.","category":"page"},{"location":"","page":"Home","title":"Home","text":"","category":"page"},{"location":"Examples/Example4/#Example-4:-Cross-Distribution-Entropy-w/-Different-Binning-Methods","page":"Ex.4: Cross-Distribution Entropy","title":"Example 4: Cross-Distribution Entropy w/ Different Binning Methods","text":"","category":"section"},{"location":"Examples/Example4/","page":"Ex.4: Cross-Distribution Entropy","title":"Ex.4: Cross-Distribution Entropy","text":"Import a signal of pseudorandom integers in the range [1, 8] and calculate the cross- distribution entropy with an embedding dimension of 5, a time delay (tau) of 3, and 'Sturges' bin selection method.","category":"page"},{"location":"Examples/Example4/","page":"Ex.4: Cross-Distribution Entropy","title":"Ex.4: Cross-Distribution Entropy","text":"using EntropyHub # hide\nX = ExampleData(\"randintegers2\");\nXDist, _ = XDistEn(X[:,1], X[:,2], m = 5, tau = 
3);\nprintln(\" \") # hide\nprintln(\"XDist = \", XDist) # hide","category":"page"},{"location":"Examples/Example4/","page":"Ex.4: Cross-Distribution Entropy","title":"Ex.4: Cross-Distribution Entropy","text":"Use Rice's method to determine the number of histogram bins and return the probability of each bin (Ppi).","category":"page"},{"location":"Examples/Example4/","page":"Ex.4: Cross-Distribution Entropy","title":"Ex.4: Cross-Distribution Entropy","text":"using EntropyHub # hide\nX = ExampleData(\"randintegers2\"); # hide\nXDist, Ppi = XDistEn(X[:,1], X[:,2], m = 5, tau = 3, Bins = \"rice\")\nprintln(\"XDist = \", XDist) # hide\nprintln(\"Ppi = \", Ppi) #hide","category":"page"},{"location":"Examples/Examples/","page":"Notes on Examples","title":"Notes on Examples","text":"Pages = [\"Example1.md\", \"Example2.md\", \"Example3.md\", \"Example4.md\", \"Example5.md\",\n \"Example6.md\",\"Example7.md\",\"Example8.md\",\"Example9.md\",\"Example10.md\"]","category":"page"},{"location":"Examples/Examples/#Examples:","page":"Notes on Examples","title":"Examples:","text":"","category":"section"},{"location":"Examples/Examples/","page":"Notes on Examples","title":"Notes on Examples","text":"The following sections provide some basic examples of EntropyHub functions. These examples are merely a snippet of the full range of EntropyHub functionality. ","category":"page"},{"location":"Examples/Examples/","page":"Notes on Examples","title":"Notes on Examples","text":"In the following examples, signals / data are imported into Julia using the ExampleData() function. 
To use this function as shown in the examples below, an internet connection is required.","category":"page"},{"location":"Examples/Examples/","page":"Notes on Examples","title":"Notes on Examples","text":"EntropyHub.ExampleData","category":"page"},{"location":"Examples/Examples/#EntropyHub._ExampleData.ExampleData","page":"Notes on Examples","title":"EntropyHub._ExampleData.ExampleData","text":"Data = ExampleData(SigName::String)\n\nImports sample data time series with specific properties that are commonly used as benchmarks for assessing the performance of various entropy methods. The datasets returned by ExampleData() are used in the examples provided in documentation on www.EntropyHub.xyz and elsewhere. ***Note*** ExampleData() requires an internet connection to download and import the required datasets!\n\nData is the sample dataset imported corresponding to the string input SigName which can be one of the following strings:\n\nArguments:\n\nSigName - \n\n `uniform` - uniformly distributed random number sequence in range [0 1], N = 5000\n `randintegers` - randomly distributed integer sequence in range [1 8], N = 4096\n `gaussian` - normally distributed number sequence [mean: 0, SD: 1], N = 5000\n `henon` - X and Y components of the Henon attractor [alpha: 1.4, beta: .3, Xo = 0, Yo = 0], N = 4500\n `lorenz` - X, Y, and Z components of the Lorenz attractor [sigma: 10, beta: 8/3, rho: 28, Xo = 10, Yo = 20, Zo = 10], N = 5917\n `chirp` - chirp signal (f0 = .01, t1 = 4000, f1 = .025), N = 5000\n `uniform2` - two uniformly distributed random number sequences in range [0,1], N = 4096\n `gaussian2` - two normally distributed number sequences [mean: 0, SD: 1], N = 3000\n `randintegers2` - two uniformly distributed pseudorandom integer sequences in range [1 8], N = 3000\n `uniform_Mat` - matrix of uniformly distributed random numbers in range [0 1], N = 50 x 50\n `gaussian_Mat` - matrix of normally distributed numbers [mean: 0, SD: 1], N = 60 x 120\n `randintegers_Mat` - 
matrix of randomly distributed integers in range [1 8], N = 88 x 88\n `mandelbrot_Mat` - matrix representing a Mandelbrot fractal image with values in range [0 255], N = 92 x 115\n `entropyhub_Mat` - matrix representing the EntropyHub logo with values in range [0 255], N = 127 x 95\n \nFor further info on these datasets, see the EntropyHub guide.\n\n\n\n\n\n","category":"function"},{"location":"Examples/Examples/","page":"Notes on Examples","title":"Notes on Examples","text":"tip: IMPORTANT TO NOTE\nParameters of the base or cross- entropy methods are passed to multiscale and multiscale cross- functions using the multiscale entropy object created with MSobject. Base and cross- entropy methods are declared with MSobject() using any Base or Cross- entropy function. See the MSobject example in the following sections for more info.","category":"page"},{"location":"Examples/Examples/","page":"Notes on Examples","title":"Notes on Examples","text":"warning: Hierarchical Multiscale Entropy (+ Multiscale Cross-Entropy)\nIn hierarchical multiscale entropy (hMSEn) and hierarchical multiscale cross-entropy (hXMSEn) functions, the length of the time series signal(s) is halved at each scale. Thus, hMSEn and hXMSEn only use the first 2^N data points where 2^N <= the length of the original time series signal. i.e. For a signal of 5000 points, only the first 4096 are used. For a signal of 1500 points, only the first 1024 are used.","category":"page"},{"location":"Examples/Examples/","page":"Notes on Examples","title":"Notes on Examples","text":"danger: BIDIMENSIONAL ENTROPIES\nEach bidimensional entropy function (SampEn2D, FuzzEn2D, DistEn2D) has an important keyword argument - Lock. 
Bidimensional entropy functions are \"locked\" by default (Lock == true) to only permit matrices with a maximum size of 128 x 128.","category":"page"}]
+[{"location":"Guide/Multiscale_Cross_Entropies/#Multiscale-Cross-Entropies","page":"Multiscale Cross-Entropies","title":"Multiscale Cross-Entropies","text":"","category":"section"},{"location":"Guide/Multiscale_Cross_Entropies/","page":"Multiscale Cross-Entropies","title":"Multiscale Cross-Entropies","text":"Functions for estimating the multiscale entropy between two univariate time series.","category":"page"},{"location":"Guide/Multiscale_Cross_Entropies/","page":"Multiscale Cross-Entropies","title":"Multiscale Cross-Entropies","text":"Just as one can calculate multiscale entropy using any Base entropy, the same functionality is possible with multiscale cross-entropy using any Cross-entropy function: (XApEn, XSampEn, XK2En, XCondEn, XPermEn, XSpecEn, XDistEn, XFuzzEn).","category":"page"},{"location":"Guide/Multiscale_Cross_Entropies/","page":"Multiscale Cross-Entropies","title":"Multiscale Cross-Entropies","text":"To do so, we again use the MSobject function to pass a multiscale object (Mobj) to the multiscale cross-entropy functions.","category":"page"},{"location":"Guide/Multiscale_Cross_Entropies/","page":"Multiscale Cross-Entropies","title":"Multiscale Cross-Entropies","text":"info: NOTE:\nMultiscale cross-entropy functions have three positional arguments: the first data sequence, Sig1 (an Nx1 matrix),\nthe second data sequence, Sig2 (an Nx1 matrix),\nthe multiscale entropy object, Mobj.","category":"page"},{"location":"Guide/Multiscale_Cross_Entropies/","page":"Multiscale Cross-Entropies","title":"Multiscale Cross-Entropies","text":"EntropyHub.MSobject","category":"page"},{"location":"Guide/Multiscale_Cross_Entropies/","page":"Multiscale Cross-Entropies","title":"Multiscale Cross-Entropies","text":"The following functions use the multiscale entropy object shown 
above.","category":"page"},{"location":"Guide/Multiscale_Cross_Entropies/","page":"Multiscale Cross-Entropies","title":"Multiscale Cross-Entropies","text":"EntropyHub.XMSEn\nEntropyHub.cXMSEn\nEntropyHub.rXMSEn\nEntropyHub.hXMSEn","category":"page"},{"location":"Guide/Multiscale_Cross_Entropies/#EntropyHub._XMSEn.XMSEn","page":"Multiscale Cross-Entropies","title":"EntropyHub._XMSEn.XMSEn","text":"MSx, CI = XMSEn(Sig1, Sig2, Mobj)\n\nReturns a vector of multiscale cross-entropy values MSx and the complexity index CI between the data sequences contained in Sig1 and Sig2 using the parameters specified by the multiscale object Mobj over 3 temporal scales with coarse-graining (default). \n\nMSx,CI = XMSEn(Sig1::AbstractVector{T} where T<:Real, Sig2::AbstractVector{T} where T<:Real, Mobj::NamedTuple; \n Scales::Int=3, Methodx::String=\"coarse\", RadNew::Int=0, Plotx::Bool=false)\n\nReturns a vector of multiscale cross-entropy values MSx and the complexity index CI of the data sequences contained in Sig1 and Sig2 using the parameters specified by the multiscale object Mobj and the following 'keyword' arguments:\n\nArguments:\n\nScales - Number of temporal scales, an integer > 1 (default: 3) \n\nMethodx - Graining method, one of the following:\n\n {`\"coarse\", \"modified\", \"imf\", \"timeshift\",\"generalized\"`} [default: 'coarse'] \n For further info on graining procedures, see the EntropyHub guide.\n\nRadNew - Radius rescaling method, an integer in the range [1 4]. When the entropy specified by Mobj is XSampEn or XApEn, RadNew allows the radius threshold to be updated at each time scale (Xt). If a radius value is specified by Mobj (r), this becomes the rescaling coefficient, otherwise it is set to 0.2 (default). 
The value of RadNew specifies one of the following methods: \n\n [1] Pooled Standard Deviation - r*std(Xt) \n\n [2] Pooled Variance - r*var(Xt) \n\n [3] Total Mean Absolute Deviation - r*mean_ad(Xt) \n\n [4] Total Median Absolute Deviation - r*med_ad(Xt)\n\nPlotx - When Plotx == true, returns a plot of the entropy value at each time scale (i.e. the multiscale entropy curve) [default: false]\n\nFor further info on these graining procedures see the EntropyHub guide.\n\nSee also MSobject, MSEn, cXMSEn, rXMSEn, hXMSEn, XSampEn, XApEn, XFuzzEn\n\nReferences:\n\n[1] Rui Yan, Zhuo Yang, and Tao Zhang,\n \"Multiscale cross entropy: a novel algorithm for analyzing two\n time series.\" \n 5th International Conference on Natural Computation. \n Vol. 1, pp: 411-413 IEEE, 2009.\n\n[2] Madalena Costa, Ary Goldberger, and C-K. Peng,\n \"Multiscale entropy analysis of complex physiologic time series.\"\n Physical review letters\n 89.6 (2002): 068102.\n\n[3] Vadim V. Nikulin, and Tom Brismar,\n \"Comment on “Multiscale entropy analysis of complex physiologic\n time series”.\" \n Physical review letters \n 92.8 (2004): 089803.\n\n[4] Madalena Costa, Ary L. Goldberger, and C-K. Peng. \n \"Costa, Goldberger, and Peng reply.\" \n Physical Review Letters\n 92.8 (2004): 089804.\n\n[5] Antoine Jamin, et al,\n \"A novel multiscale cross-entropy method applied to navigation \n data acquired with a bike simulator.\" \n 41st annual international conference of the IEEE EMBC\n IEEE, 2019.\n\n[6] Antoine Jamin and Anne Humeau-Heurtier. 
\n \"(Multiscale) Cross-Entropy Methods: A Review.\" \n Entropy \n 22.1 (2020): 45.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Multiscale_Cross_Entropies/#EntropyHub._cXMSEn.cXMSEn","page":"Multiscale Cross-Entropies","title":"EntropyHub._cXMSEn.cXMSEn","text":"MSx, CI = cXMSEn(Sig1, Sig2, Mobj)\n\nReturns a vector of composite multiscale cross-entropy values (MSx) between two univariate data sequences contained in Sig1 and Sig2 using the parameters specified by the multiscale object (Mobj) and the composite multiscale method (cMSE) over 3 temporal scales.\n\nMSx, CI = cXMSEn(Sig1::AbstractVector{T} where T<:Real, Sig2::AbstractVector{T} where T<:Real, Mobj::NamedTuple; \n Scales::Int=3, RadNew::Int=0, Refined::Bool=false, Plotx::Bool=false)\n\nReturns a vector of composite multiscale cross-entropy values (MSx) between the data sequences contained in Sig1 and Sig2 using the parameters specified by the multiscale object (Mobj) and the following keyword arguments:\n\nArguments:\n\nScales - Number of temporal scales, an integer > 1 (default: 3)\n\nRadNew - Radius rescaling method, an integer in the range [1 4]. When the entropy specified by Mobj is XSampEn or XApEn, RadNew rescales the radius threshold of the sub-sequences at each time scale (Ykj). If a radius value is specified by Mobj (r), this becomes the rescaling coefficient, otherwise it is set to 0.2 (default). The value of RadNew specifies one of the following methods:\n\n [1] Pooled Standard Deviation - r*std(Ykj)\n\n [2] Pooled Variance - r*var(Ykj)\n\n [3] Total Mean Absolute Deviation - r*mean_ad(Ykj)\n\n [4] Total Median Absolute Deviation - r*med_ad(Ykj,1)\n\nRefined - Refined-composite XMSEn method. When Refined == true and the entropy function specified by Mobj is XSampEn or XFuzzEn, cXMSEn returns the refined-composite multiscale entropy (rcXMSEn). (default: false)\n\nPlotx - When Plotx == true, returns a plot of the entropy value at each time scale (i.e. 
the multiscale entropy curve) [default: false]\n\nSee also MSobject, XMSEn, rXMSEn, hXMSEn, XSampEn, XApEn, cMSEn\n\nReferences:\n\n[1] Rui Yan, Zhuo Yang, and Tao Zhang,\n \"Multiscale cross entropy: a novel algorithm for analyzing two\n time series.\" \n 5th International Conference on Natural Computation. \n Vol. 1, pp: 411-413 IEEE, 2009.\n\n[2] Yi Yin, Pengjian Shang, and Guochen Feng, \n \"Modified multiscale cross-sample entropy for complex time \n series.\"\n Applied Mathematics and Computation \n 289 (2016): 98-110.\n\n[3] Madalena Costa, Ary Goldberger, and C-K. Peng,\n \"Multiscale entropy analysis of complex physiologic time series.\"\n Physical review letters\n 89.6 (2002): 068102.\n\n[4] Antoine Jamin, et al,\n \"A novel multiscale cross-entropy method applied to navigation \n data acquired with a bike simulator.\" \n 41st annual international conference of the IEEE EMBC\n IEEE, 2019.\n\n[5] Antoine Jamin and Anne Humeau-Heurtier. \n \"(Multiscale) Cross-Entropy Methods: A Review.\" \n Entropy \n 22.1 (2020): 45.\n\n[6] Shuen-De Wu, et al.,\n \"Time series analysis using composite multiscale entropy.\" \n Entropy \n 15.3 (2013): 1069-1084.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Multiscale_Cross_Entropies/#EntropyHub._rXMSEn.rXMSEn","page":"Multiscale Cross-Entropies","title":"EntropyHub._rXMSEn.rXMSEn","text":"MSx, CI = rXMSEn(Sig1, Sig2, Mobj)\n\nReturns a vector of refined multiscale cross-entropy values (MSx) and the complexity index (CI) between the data sequences contained in Sig1 and Sig2 using the parameters specified by the multiscale object (Mobj) and the following default parameters: Scales = 3, Butterworth LPF Order = 6, Butterworth LPF cutoff frequency at scale (T): Fc = 0.5/T. 
If the entropy function specified by Mobj is XSampEn or XApEn, rXMSEn updates the threshold radius of the data sequences (Xt) at each scale to 0.2*SDpooled(Xa, Xb) when no r value is provided by Mobj, or r*SDpooled(Xa, Xb) if r is specified.\n\nMSx, CI = rXMSEn(Sig1::AbstractVector{T} where T<:Real, Sig2::AbstractVector{T} where T<:Real, Mobj::NamedTuple;\n Scales::Int=3, F_Order::Int=6, F_Num::Float64=0.5, RadNew::Int=0, Plotx::Bool=false)\n\nReturns a vector of refined multiscale cross-entropy values (MSx) and the complexity index (CI) between the data sequences contained in Sig1 and Sig2 using the parameters specified by the multiscale object (Mobj) and the following keyword arguments:\n\nArguments:\n\nScales - Number of temporal scales, an integer > 1 (default: 3) \n\nF_Order - Butterworth low-pass filter order, a positive integer (default: 6) \n\nF_Num - Numerator of Butterworth low-pass filter cutoff frequency, a scalar value in range [0 < F_Num < 1]. The cutoff frequency at each scale (T) becomes: Fc = F_Num/T. (default: 0.5) \n\nRadNew - Radius rescaling method, an integer in the range [1 4]. When the entropy specified by Mobj is XSampEn or XApEn, RadNew allows the radius threshold to be updated at each time scale (Xt). If a radius value is specified by Mobj (r), this becomes the rescaling coefficient, otherwise it is set to 0.2 (default). The value of RadNew specifies one of the following methods: \n\n [1] Pooled Standard Deviation - r*std(Xt) \n\n [2] Pooled Variance - r*var(Xt) \n\n [3] Total Mean Absolute Deviation - r*mean_ad(Xt) \n\n [4] Total Median Absolute Deviation - r*med_ad(Xt)\n\nPlotx - When Plotx == true, returns a plot of the entropy value at each time scale (i.e. the multiscale entropy curve) [default = false] \n\nSee also MSobject, XMSEn, cXMSEn, hXMSEn, XSampEn, XApEn, MSEn\n\nReferences:\n\n[1] Matthew W. 
Flood (2021), \n    \"EntropyHub - An open source toolkit for entropic time series analysis\"\n    PLoS ONE 16(11):e0259448, \n    DOI:  10.1371/journal.pone.0259448\n    https://www.EntropyHub.xyz\n\n[2] Rui Yan, Zhuo Yang, and Tao Zhang,\n    \"Multiscale cross entropy: a novel algorithm for analyzing two\n    time series.\" \n    5th International Conference on Natural Computation. \n    Vol. 1, pp: 411-413 IEEE, 2009.\n\n[3] José Fernando Valencia, et al.,\n    \"Refined multiscale entropy: Application to 24-h holter \n    recordings of heart period variability in healthy and aortic \n    stenosis subjects.\" \n    IEEE Transactions on Biomedical Engineering \n    56.9 (2009): 2202-2213.\n\n[4] Puneeta Marwaha and Ramesh Kumar Sunkaria,\n    \"Optimal selection of threshold value ‘r’ for refined multiscale\n    entropy.\" \n    Cardiovascular engineering and technology \n    6.4 (2015): 557-576.\n\n[5] Yi Yin, Pengjian Shang, and Guochen Feng, \n    \"Modified multiscale cross-sample entropy for complex time \n    series.\"\n    Applied Mathematics and Computation \n    289 (2016): 98-110.\n\n[6] Antoine Jamin, et al,\n    \"A novel multiscale cross-entropy method applied to navigation \n    data acquired with a bike simulator.\" \n    41st annual international conference of the IEEE EMBC\n    IEEE, 2019.\n\n[7] Antoine Jamin and Anne Humeau-Heurtier. \n    \"(Multiscale) Cross-Entropy Methods: A Review.\" \n    Entropy \n    22.1 (2020): 45.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Multiscale_Cross_Entropies/#EntropyHub._hXMSEn.hXMSEn","page":"Multiscale Cross-Entropies","title":"EntropyHub._hXMSEn.hXMSEn","text":"MSx, Sn, CI = hXMSEn(Sig1, Sig2, Mobj)\n\nReturns a vector of cross-entropy values (MSx) calculated at each node in the hierarchical tree, the average cross-entropy value across all nodes at each scale (Sn), and the complexity index (CI) of the hierarchical tree (i.e. 
sum(Sn)) between the data sequences contained in Sig1 and Sig2 using the parameters specified by the multiscale object (Mobj) over 3 temporal scales (default). The entropy values in MSx are ordered from the root node (S.00) to the Nth subnode at scale T (S.TN): i.e. S.00, S.10, S.11, S.20, S.21, S.22, S.23, S.30, S.31, S.32, S.33, S.34, S.35, S.36, S.37, S.40, ... , S.TN. The average cross-entropy values in Sn are ordered in the same way, with the value of the root node given first: i.e. S0, S1, S2, ..., ST\n\nMSx, Sn, CI = hXMSEn(Sig1::AbstractVector{T} where T<:Real, Sig2::AbstractVector{T} where T<:Real, Mobj::NamedTuple; \n Scales::Int=3, RadNew::Int=0, Plotx::Bool=false)\n\nReturns a vector of cross-entropy values (MSx) calculated at each node in the hierarchical tree, the average cross-entropy value across all nodes at each scale (Sn), and the complexity index (CI) of the entire hierarchical tree between the data sequences contained in Sig1 and Sig2 using the following name/value pair arguments:\n\nArguments:\n\nScales - Number of temporal scales, an integer > 1 (default: 3) At each scale (T), entropy is estimated for 2^(T-1) nodes. \n\nRadNew - Radius rescaling method, an integer in the range [1 4]. When the entropy specified by Mobj is XSampEn or XApEn, RadNew allows the radius threshold to be updated at each node in the tree. If a radius value is specified by Mobj (r), this becomes the rescaling coefficient, otherwise it is set to 0.2 (default). The value of RadNew specifies one of the following methods: \n\n [1] Pooled Standard Deviation - r*std(Xt) \n\n [2] Pooled Variance - r*var(Xt) \n\n [3] Total Mean Absolute Deviation - r*mean_ad(Xt) \n\n [4] Total Median Absolute Deviation - r*med_ad(Xt)\n\nPlotx - When Plotx == true, returns a plot of the average cross-entropy value at each time scale (i.e. the multiscale entropy curve) and a hierarchical graph showing the entropy value of each node in the hierarchical tree decomposition. 
(default: false) \n\nSee also MSobject, XMSEn, rXMSEn, cXMSEn, XSampEn, XApEn, hMSEn\n\nReferences:\n\n[1] Matthew W. Flood (2021), \n \"EntropyHub - An open source toolkit for entropic time series analysis\"\n PLoS ONE 16(11):e0259448, \n DOI: 10.1371/journal.pone.0259448\n https://www.EntropyHub.xyz\n\n[2] Rui Yan, Zhuo Yang, and Tao Zhang,\n \"Multiscale cross entropy: a novel algorithm for analyzing two\n time series.\" \n 5th International Conference on Natural Computation. \n Vol. 1, pp: 411-413 IEEE, 2009.\n\n[3] Ying Jiang, C-K. Peng and Yuesheng Xu,\n \"Hierarchical entropy analysis for biological signals.\"\n Journal of Computational and Applied Mathematics\n 236.5 (2011): 728-742.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Bidimensional_Entropies/#Bidimensional-Entropies","page":"Bidimensional Entropies","title":"Bidimensional Entropies","text":"","category":"section"},{"location":"Guide/Bidimensional_Entropies/","page":"Bidimensional Entropies","title":"Bidimensional Entropies","text":"Functions for estimating the entropy of a two-dimensional univariate matrix.","category":"page"},{"location":"Guide/Bidimensional_Entropies/","page":"Bidimensional Entropies","title":"Bidimensional Entropies","text":"While EntropyHub functions primarily apply to time series data, the following bidimensional entropy functions allow one to estimate the entropy of two-dimensional (2D) matrices. Hence, bidimensional entropy functions are useful for applications such as image analysis.","category":"page"},{"location":"Guide/Bidimensional_Entropies/","page":"Bidimensional Entropies","title":"Bidimensional Entropies","text":"danger: IMPORTANT: Locked Matrix Size\nEach bidimensional entropy function (SampEn2D, FuzzEn2D, DistEn2D, DispEn2D, EspEn2D, PermEn2D) has an important keyword argument - Lock. 
Bidimensional entropy functions are \"locked\" by default (Lock == true) to only permit matrices with a maximum size of 128 x 128. This is because hundreds of millions of pairwise calculations are performed in the estimation of bidimensional entropy, so memory errors often occur when storing data on RAM. e.g. For a matrix of size [200 x 200], an embedding dimension (m) = 3, and a time delay (tau) = 1, there are 753,049,836 pairwise matrix comparisons (6,777,448,524 elemental subtractions). To pass matrices with sizes greater than [128 x 128], set Lock = false. CAUTION: unlocking the permitted matrix size may cause your Julia IDE to crash.","category":"page"},{"location":"Guide/Bidimensional_Entropies/","page":"Bidimensional Entropies","title":"Bidimensional Entropies","text":"EntropyHub.SampEn2D\nEntropyHub.FuzzEn2D\nEntropyHub.DistEn2D\nEntropyHub.DispEn2D\nEntropyHub.PermEn2D\nEntropyHub.EspEn2D","category":"page"},{"location":"Guide/Bidimensional_Entropies/#EntropyHub._SampEn2D.SampEn2D","page":"Bidimensional Entropies","title":"EntropyHub._SampEn2D.SampEn2D","text":"SE2D, Phi1, Phi2 = SampEn2D(Mat)\n\nReturns the bidimensional sample entropy estimate (SE2D) and the number of matched sub-matrices (m:Phi1, m+1:Phi2) estimated for the data matrix (Mat) using the default parameters: time delay = 1, radius distance threshold = 0.2*SD(Mat), logarithm = natural, matrix template size = [floor(H/10) floor(W/10)], (where H and W represent the height (rows) and width (columns) of the data matrix Mat) \n\n** The minimum dimension size of Mat must be > 10.**\n\nSE2D, Phi1, Phi2 = SampEn2D(Mat::AbstractArray{T,2} where T<:Real; m::Union{Int,Tuple{Int,Int}}=floor.(Int, size(Mat)./10), \n tau::Int=1, r::Real=0.2*std(Mat,corrected=false), Logx::Real=exp(1), Lock::Bool=true)\n\nReturns the bidimensional sample entropy (SE2D) estimates for the data matrix (Mat) using the specified 'keyword' arguments:\n\nArguments:\n\nm - Template submatrix dimensions, an 
integer scalar (i.e. the same height and width) or a two-element vector of integers [height, width] with a minimum value > 1. (default: [floor(H/10) floor(W/10)]) \n\ntau - Time Delay, a positive integer (default: 1) \n\nr - Distance Threshold, a positive scalar (default: 0.2*SD(Mat)) \n\nLogx - Logarithm base, a positive scalar (default: natural) \n\nLock - By default, SampEn2D only permits matrices with a maximum size of 128 x 128 to prevent memory errors when storing data on RAM. e.g. For Mat = [200 x 200], m = 3, and tau = 1, SampEn2D creates a vector of 753049836 elements. To enable matrices greater than [128 x 128] elements, set Lock to false. (default: true) \n\n `WARNING: unlocking the permitted matrix size may cause your Julia\n IDE to crash.`\n\nSee also SampEn, FuzzEn2D, XSampEn, MSEn\n\nReferences:\n\n[1] Luiz Eduardo Virgili Silva, et al.,\n \"Two-dimensional sample entropy: Assessing image texture \n through irregularity.\" \n Biomedical Physics & Engineering Express\n 2.4 (2016): 045002.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Bidimensional_Entropies/#EntropyHub._FuzzEn2D.FuzzEn2D","page":"Bidimensional Entropies","title":"EntropyHub._FuzzEn2D.FuzzEn2D","text":"Fuzz2D = FuzzEn2D(Mat)\n\nReturns the bidimensional fuzzy entropy estimate (Fuzz2D) estimated for the data matrix (Mat) using the default parameters: time delay = 1, fuzzy function (Fx) = 'default', fuzzy function parameters (r) = [0.2, 2], logarithm = natural, template matrix size = [floor(H/10) floor(W/10)], (where H and W represent the height and width of the data matrix 'Mat') \n\n** The minimum dimension size of Mat must be > 10.**\n\nFuzz2D = FuzzEn2D(Mat::AbstractArray{T,2} where T<:Real; m::Union{Int,Tuple{Int,Int}}=floor.(Int, size(Mat)./10), \n tau::Int=1, r::Union{Real,Tuple{Real,Real}}=(.2*std(Mat, corrected=false),2), \n Fx::String=\"default\", Logx::Real=exp(1), Lock::Bool=true)\n\nReturns the bidimensional fuzzy entropy (Fuzz2D) estimates for the data matrix (Mat) 
using the specified 'keyword' arguments:\n\nArguments:\n\nm - Template submatrix dimensions, an integer scalar (i.e. the same height and width) or a two-element vector of integers [height, width] with a minimum value > 1. (default: [floor(H/10) floor(W/10)]) \n\ntau - Time Delay, a positive integer (default: 1) \n\nFx - Fuzzy function name, one of the following: {\"sigmoid\", \"modsampen\", \"default\", \"gudermannian\", \"bell\", \"triangular\", \"trapezoidal1\", \"trapezoidal2\", \"z_shaped\", \"gaussian\", \"constgaussian\"}\n\nr - Fuzzy function parameters, a one-element scalar or a two-element vector of positive values. The 'r' parameters for each fuzzy function are defined as follows:\n\n sigmoid: r(1) = divisor of the exponential argument\n r(2) = value subtracted from argument (pre-division)\n modsampen: r(1) = divisor of the exponential argument\n r(2) = value subtracted from argument (pre-division)\n default: r(1) = divisor of the exponential argument\n r(2) = argument exponent (pre-division)\n gudermannian: r = a scalar whose value is the numerator of\n argument to gudermannian function:\n GD(x) = atan(tanh(r/x))\n triangular: r = a scalar whose value is the threshold (corner point) of the triangular function.\n trapezoidal1: r = a scalar whose value corresponds to the upper (2r) and lower (r) corner points of the trapezoid.\n trapezoidal2: r(1) = a value corresponding to the upper corner point of the trapezoid.\n r(2) = a value corresponding to the lower corner point of the trapezoid.\n z_shaped: r = a scalar whose value corresponds to the upper (2r) and lower (r) corner points of the z-shape.\n bell: r(1) = divisor of the distance value\n r(2) = exponent of generalized bell-shaped function\n gaussian: r = a scalar whose value scales the slope of the Gaussian curve.\n constgaussian: r = a scalar whose value defines the lower threshold and shape of the Gaussian curve. \n [DEPRECATED] linear: r = an integer value. 
When r = 0, the\n argument of the exponential function is \n normalised between [0 1]. When r = 1,\n the minimum value of the exponential \n argument is set to 0.\n\nLogx - Logarithm base, a positive scalar (default: natural)\n\nLock - By default, FuzzEn2D only permits matrices with a maximum size of 128 x 128 to prevent memory errors when storing data on RAM. e.g. For Mat = [200 x 200], m = 3, and tau = 1, FuzzEn2D creates a vector of 753049836 elements. To enable matrices greater than [128 x 128] elements, set Lock to false. (default: true)\n\n ` WARNING: unlocking the permitted matrix size may cause\n your Julia IDE to crash.`\n\nSee also SampEn2D, FuzzEn, XFuzzEn\n\nReferences:\n\n[1] Luiz Fernando Segato Dos Santos, et al.,\n \"Multidimensional and fuzzy sample entropy (SampEnMF) for\n quantifying H&E histological images of colorectal cancer.\"\n Computers in biology and medicine \n 103 (2018): 148-160.\n\n[2] Mirvana Hilal and Anne Humeau-Heurtier,\n \"Bidimensional fuzzy entropy: Principle analysis and biomedical\n applications.\"\n 41st Annual International Conference of the IEEE (EMBC) Society\n 2019.\n\n[3] Hamed Azami, et al.\n \"Fuzzy Entropy Metrics for the Analysis of Biomedical Signals: \n Assessment and Comparison\"\n IEEE Access\n 7 (2019): 104833-104847\n\n\n\n\n\n","category":"function"},{"location":"Guide/Bidimensional_Entropies/#EntropyHub._DistEn2D.DistEn2D","page":"Bidimensional Entropies","title":"EntropyHub._DistEn2D.DistEn2D","text":"Dist2D = DistEn2D(Mat)\n\nReturns the bidimensional distribution entropy estimate (Dist2D) estimated for the data matrix (Mat) using the default parameters: time delay = 1, histogram binning method = \"sturges\", logarithm = base-2, template matrix size = [floor(H/10) floor(W/10)], (where H and W represent the height (rows) and width (columns) of the data matrix Mat) \n\n** The minimum number of rows and columns of Mat must be > 10.**\n\nDist2D = DistEn2D(Mat::AbstractArray{T,2} where T<:Real; 
m::Union{Int,Tuple{Int,Int}}=floor.(Int, size(Mat)./10), tau::Int=1,\n Bins::Union{Int,String}=\"Sturges\", Logx::Real=2, Norm::Int=2, Lock::Bool=true)\n\nReturns the bidimensional distribution entropy (Dist2D) estimate for the data matrix (Mat) using the specified 'keyword' arguments:\n\nArguments:\n\nm - Template submatrix dimensions, an integer scalar (i.e. the same height and width) or a two-element tuple of integers [height, width] with a minimum value > 1. [default: [floor(H/10) floor(W/10)]] \n\ntau - Time Delay, a positive integer [default: 1] \n\nBins - Histogram bin selection method for distance distribution, an integer > 1 indicating the number of bins, or one of the following strings {\"sturges\", \"sqrt\", \"rice\", \"doanes\"} [default: 'sturges'] \n\nLogx - Logarithm base, a positive scalar [default: 2]\n\n ** enter 0 for natural logarithm.**\n\nNorm - Normalisation of Dist2D value, one of the following integers: [0] no normalisation. [1] normalises values of data matrix (Mat) to range [0 1]. [2] normalises values of data matrix (Mat) to range [0 1], and normalises the distribution entropy value (Dist2D) w.r.t the number of histogram bins. [default] [3] normalises the distribution entropy value w.r.t the number of histogram bins, without normalising data matrix values. \n\nLock - By default, DistEn2D only permits matrices with a maximum size of 128 x 128 to prevent memory errors when storing data on RAM. e.g. For Mat = [200 x 200], m = 3, and tau = 1, DistEn2D creates a vector of 753049836 elements. To enable matrices greater than [128 x 128] elements, set Lock to false. 
[default: 'true'] WARNING: unlocking the permitted matrix size may cause your Julia IDE to crash.\n\nSee also DistEn, XDistEn, SampEn2D, FuzzEn2D, MSEn\n\nReferences:\n\n[1] Hamed Azami, Javier Escudero and Anne Humeau-Heurtier,\n \"Bidimensional distribution entropy to analyze the irregularity\n of small-sized textures.\"\n IEEE Signal Processing Letters \n 24.9 (2017): 1338-1342.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Bidimensional_Entropies/#EntropyHub._DispEn2D.DispEn2D","page":"Bidimensional Entropies","title":"EntropyHub._DispEn2D.DispEn2D","text":"Disp2D, RDE = DispEn2D(Mat)\n\nReturns the bidimensional dispersion entropy estimate (Disp2D) and reverse bidimensional dispersion entropy (RDE) estimated for the data matrix (Mat) using the default parameters: time delay = 1, symbols = 3, logarithm = natural, data transform = normalised cumulative density function ('ncdf'), logarithm = natural, template matrix size = [floor(H/10) floor(W/10)], (where H and W represent the height (rows) and width (columns) of the data matrix Mat) \n\n** The minimum number of rows and columns of Mat must be > 10.**\n\nDisp2D, RDE = DispEn2D(Mat::AbstractArray{T,2} where T<:Real; \n m::Union{Int,Tuple{Int,Int}}=floor.(Int, size(Mat)./10), tau::Int=1,\n c::Int=3, Typex::String=\"ncdf\", Logx::Real=exp(1), Norm::Bool=false, Lock::Bool=true)\n\nReturns the bidimensional dispersion entropy (Disp2D) and reverse bidimensional distribution entropy (RDE) estimate for the data matrix (Mat) using the specified 'keyword' arguments:\n\nArguments:\n\nm - Template submatrix dimensions, an integer scalar (i.e. the same height and width) or a two-element tuple of integers [height, width] with a minimum value > 1. 
[default: [floor(H/10) floor(W/10)]] \n\ntau - Time Delay, a positive integer [default: 1] \n\nc - Number of symbols, an integer > 1 [default: 3] \n\nTypex - Type of symbolic mapping transform, one of the following: {linear, kmeans, ncdf, equal} See the EntropyHub Guide for more info on these transforms. \n\nLogx - Logarithm base, a positive scalar [default: natural]\n\n ** enter 0 for natural logarithm.**\n\nNorm - Normalisation of Disp2D value, a boolean: - [false] no normalisation - default - [true] normalises w.r.t number of possible dispersion patterns. \n\nLock - By default, DispEn2D only permits matrices with a maximum size of 128 x 128 to prevent memory errors when storing data on RAM. e.g. For Mat = [200 x 200], m = 3, and tau = 1, DispEn2D creates a vector of 753049836 elements. To enable matrices greater than [128 x 128] elements, set Lock to false. [default: 'true'] WARNING: unlocking the permitted matrix size may cause your Julia IDE to crash.\n\nSee also DispEn, DistEn2D, SampEn2D, FuzzEn2D, MSEn\n\nReferences:\n\n[1] Hamed Azami, et al.,\n \"Two-dimensional dispersion entropy: An information-theoretic \n method for irregularity analysis of images.\"\n Signal Processing: Image Communication, \n 75 (2019): 178-187.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Bidimensional_Entropies/#EntropyHub._PermEn2D.PermEn2D","page":"Bidimensional Entropies","title":"EntropyHub._PermEn2D.PermEn2D","text":"Perm2D = PermEn2D(Mat)\n\nReturns the bidimensional permutation entropy estimate (Perm2D) estimated for the data matrix (Mat) using the default parameters: time delay = 1, logarithm = natural, template matrix size = [floor(H/10) floor(W/10)], (where H and W represent the height (rows) and width (columns) of the data matrix Mat) \n\n** The minimum dimension size of Mat must be > 10.**\n\nPerm2D = PermEn2D(Mat::AbstractArray{T,2} where T<:Real; m::Union{Int,Tuple{Int,Int}}=floor.(Int, size(Mat)./10), \n tau::Int=1, Norm::Bool=true, Logx::Real=exp(1), Lock::Bool=true)\n\nReturns 
the bidimensional permutation entropy (Perm2D) estimates for the data matrix (Mat) using the specified 'keyword' arguments:\n\nArguments:\n\nm - Template submatrix dimensions, an integer scalar (i.e. the same height and width) or a two-element vector of integers [height, width] with a minimum value > 1. (default: [floor(H/10) floor(W/10)]) \n\ntau - Time Delay, a positive integer (default: 1) \n\nNorm - Normalization of permutation entropy estimate, a boolean (default: true) \n\nLogx - Logarithm base, a positive scalar (default: natural) \n\nLock - By default, PermEn2D only permits matrices with a maximum size of 128 x 128 to prevent memory errors when storing data on RAM. e.g. For Mat = [200 x 200], m = 3, and tau = 1, PermEn2D creates a vector of 753049836 elements. To enable matrices greater than [128 x 128] elements, set Lock to false. (default: true) \n\n `WARNING: unlocking the permitted matrix size may cause your Julia\n IDE to crash.`\n\nNOTE - The original bidimensional permutation entropy algorithms [1][2] do not account for equal-valued elements of the embedding matrices. To overcome this, PermEn2D uses the lowest common rank for such instances. For example, given an embedding matrix A where, A = [3.4 5.5 7.3; 2.1 6 9.9; 7.3 1.1 2.1] would normally be mapped to an ordinal pattern like so: [3.4 5.5 7.3 2.1 6 9.9 7.3 1.1 2.1] => [ 8 4 9 1 2 5 3 7 6 ] However, indices 4 & 9, and 3 & 7 have the same values, 2.1 and 7.3 respectively. Instead, PermEn2D uses the ordinal pattern [ 8 4 4 1 2 5 3 3 6 ] where the lowest rank (4 & 3) are used instead (of 9 & 7). Therefore, the number of possible permutations is no longer (mxmy)!, but (mxmy)^(mxmy). Here, the PermEn2D value is normalized by the maximum Shannon entropy (Smax = log((mxmy)!)) 
assuming that no equal values are found in the permutation motif matrices, as presented in [1].\n\nSee also SampEn2D, FuzzEn2D, DispEn2D, DistEn2D\n\nReferences:\n\n[1] Haroldo Ribeiro et al.,\n \"Complexity-Entropy Causality Plane as a Complexity Measure \n for Two-Dimensional Patterns\"\n PLoS ONE (2012), 7(8):e40689, \n\n[2] Luciano Zunino and Haroldo Ribeiro,\n \"Discriminating image textures with the multiscale\n two-dimensional complexity-entropy causality plane\"\n Chaos, Solitons and Fractals, 91:679-688 (2016)\n\n[3] Matthew Flood and Bernd Grimm,\n \"EntropyHub: An Open-source Toolkit for Entropic Time Series Analysis\"\n PLoS ONE (2021) 16(11): e0259448.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Bidimensional_Entropies/#EntropyHub._EspEn2D.EspEn2D","page":"Bidimensional Entropies","title":"EntropyHub._EspEn2D.EspEn2D","text":"Esp2D, = EspEn2D(Mat)\n\nReturns the bidimensional Espinosa entropy estimate (Esp2D) estimated for the data matrix (Mat) using the default parameters: time delay = 1, tolerance threshold = 20, percentage similarity = 0.7, logarithm = natural, matrix template size = [floor(H/10) floor(W/10)], (where H and W represent the height (rows) and width (columns) of the data matrix Mat) ** The minimum number of rows and columns of Mat must be > 10.**\n\nEsp2D = EspEn2D(Mat::AbstractArray{T,2} where T<:Real; m::Union{Int,Tuple{Int,Int}}=floor.(Int, size(Mat)./10), \n tau::Int=1, r::Real=20, ps::Float64=.7, Logx::Real=exp(1), Lock::Bool=true)\n\nReturns the bidimensional Espinosa entropy (Esp2D) estimates for the data matrix (Mat) using the specified 'keyword' arguments:\n\nArguments:\n\nm - Template submatrix dimensions, an integer scalar (i.e. the same height and width) or a two-element vector of integers [height, width] with a minimum value > 1. 
(default: [floor(H/10) floor(W/10)]) \n\ntau - Time Delay, a positive integer (default: 1) \n\nr - Tolerance threshold, a positive scalar (default: 20) \n\nps - Percentage similarity, a value in range [0 1], (default: 0.7) \n\nLogx - Logarithm base, a positive scalar (default: natural) \n\nLock - By default, EspEn2D only permits matrices with a maximum size of 128 x 128 to prevent memory errors when storing data on RAM. e.g. For Mat = [200 x 200], m = 3, and tau = 1, EspEn2D creates a vector of 753049836 elements. To enable matrices greater than [128 x 128] elements, set Lock to false. (default: true) \n\n `WARNING: unlocking the permitted matrix size may cause your Julia\n IDE to crash.`\n\nSee also SampEn2D, FuzzEn2D, DispEn2D, DistEn2D, PermEn2D\n\nReferences:\n\n[1] Ricardo Espinosa, et al.,\n \"Two-dimensional EspEn: A New Approach to Analyze Image Texture \n by Irregularity.\" \n Entropy,\n 23:1261 (2021)\n\n\n\n\n\n","category":"function"},{"location":"Examples/Example5/#Example-5:-Multiscale-Entropy-Object-MSobject()","page":"Ex.5: Multiscale Entropy Object","title":"Example 5: Multiscale Entropy Object - MSobject()","text":"","category":"section"},{"location":"Examples/Example5/","page":"Ex.5: Multiscale Entropy Object","title":"Ex.5: Multiscale Entropy Object","text":"warning: Note:\nThe base and cross- entropy functions used in the multiscale entropy calculation are declared by passing EntropyHub functions to MSobject(), not string names.","category":"page"},{"location":"Examples/Example5/","page":"Ex.5: Multiscale Entropy Object","title":"Ex.5: Multiscale Entropy Object","text":"Create a multiscale entropy object (Mobj) for multiscale fuzzy entropy, calculated with an embedding dimension of 5, a time delay (tau) of 2, using a sigmoidal fuzzy function with the r scaling parameters (3, 1.2).","category":"page"},{"location":"Examples/Example5/","page":"Ex.5: Multiscale Entropy Object","title":"Ex.5: Multiscale Entropy Object","text":"using EntropyHub # 
hide\nMobj = MSobject(FuzzEn, m = 5, tau = 2, Fx = \"sigmoid\", r = (3, 1.2))","category":"page"},{"location":"Examples/Example5/","page":"Ex.5: Multiscale Entropy Object","title":"Ex.5: Multiscale Entropy Object","text":"Create a multiscale entropy object (Mobj) for multiscale corrected-cross-conditional entropy, calculated with an embedding dimension of 6 and using an 11-symbolic data transform.","category":"page"},{"location":"Examples/Example5/","page":"Ex.5: Multiscale Entropy Object","title":"Ex.5: Multiscale Entropy Object","text":"using EntropyHub # hide\nMobj = MSobject(XCondEn, m = 6, c = 11)","category":"page"},{"location":"Examples/Examples/","page":"Notes on Examples","title":"Notes on Examples","text":"Pages = [\"Example1.md\", \"Example2.md\", \"Example3.md\", \"Example4.md\", \"Example5.md\",\n \"Example6.md\",\"Example7.md\",\"Example8.md\",\"Example9.md\",\"Example10.md\"]","category":"page"},{"location":"Examples/Examples/#Examples:","page":"Notes on Examples","title":"Examples:","text":"","category":"section"},{"location":"Examples/Examples/","page":"Notes on Examples","title":"Notes on Examples","text":"The following sections provide some basic examples of EntropyHub functions. These examples are merely a snippet of the full range of EntropyHub functionality. ","category":"page"},{"location":"Examples/Examples/","page":"Notes on Examples","title":"Notes on Examples","text":"In the following examples, signals / data are imported into Julia using the ExampleData() function. 
To use this function as shown in the examples below, an internet connection is required.","category":"page"},{"location":"Examples/Examples/","page":"Notes on Examples","title":"Notes on Examples","text":"EntropyHub.ExampleData","category":"page"},{"location":"Examples/Examples/#EntropyHub._ExampleData.ExampleData","page":"Notes on Examples","title":"EntropyHub._ExampleData.ExampleData","text":"Data = ExampleData(SigName::String)\n\nImports sample data time series with specific properties that are commonly used as benchmarks for assessing the performance of various entropy methods. The datasets returned by ExampleData() are used in the examples provided in documentation on www.EntropyHub.xyz and elsewhere. ***Note*** ExampleData() requires an internet connection to download and import the required datasets!\n\nData is the sample dataset imported, corresponding to the string input SigName, which can be one of the following strings:\n\nArguments:\n\nSigName - \n\n `uniform` - uniformly distributed random number sequence in range [0 1], N = 5000\n `randintegers` - randomly distributed integer sequence in range [1 8], N = 4096\n `gaussian` - normally distributed number sequence [mean: 0, SD: 1], N = 5000\n `henon` - X and Y components of the Henon attractor [alpha: 1.4, beta: .3, Xo = 0, Yo = 0], N = 4500\n `lorenz` - X, Y, and Z components of the Lorenz attractor [sigma: 10, beta: 8/3, rho: 28, Xo = 10, Yo = 20, Zo = 10], N = 5917\n `chirp` - chirp signal (f0 = .01, t1 = 4000, f1 = .025), N = 5000\n `uniform2` - two uniformly distributed random number sequences in range [0,1], N = 4096\n `gaussian2` - two normally distributed number sequences [mean: 0, SD: 1], N = 3000\n `randintegers2` - two uniformly distributed pseudorandom integer sequences in range [1 8], N = 3000\n `uniform_Mat` - matrix of uniformly distributed random numbers in range [0 1], N = 50 x 50\n `gaussian_Mat` - matrix of normally distributed numbers [mean: 0, SD: 1], N = 60 x 120\n `randintegers_Mat` - 
matrix of randomly distributed integers in range [1 8], N = 88 x 88\n `mandelbrot_Mat` - matrix representing a Mandelbrot fractal image with values in range [0 255], N = 92 x 115\n `entropyhub_Mat` - matrix representing the EntropyHub logo with values in range [0 255], N = 127 x 95\n \nFor further info on these datasets, see the EntropyHub guide at www.EntropyHub.xyz.\n\n\n\n\n\n","category":"function"},{"location":"Examples/Examples/","page":"Notes on Examples","title":"Notes on Examples","text":"tip: IMPORTANT TO NOTE\nParameters of the base or cross- entropy methods are passed to multiscale and multiscale cross- functions via the multiscale entropy object created with MSobject(). Base and cross- entropy methods are declared with MSobject() using any Base or Cross- entropy function. See the MSobject example in the following sections for more info.","category":"page"},{"location":"Examples/Examples/","page":"Notes on Examples","title":"Notes on Examples","text":"warning: Hierarchical Multiscale Entropy (+ Multiscale Cross-Entropy)\nIn hierarchical multiscale entropy (hMSEn) and hierarchical multiscale cross-entropy (hXMSEn) functions, the length of the time series signal(s) is halved at each scale. Thus, hMSEn and hXMSEn only use the first 2^N data points where 2^N <= the length of the original time series signal. i.e. For a signal of 5000 points, only the first 4096 are used. For a signal of 1500 points, only the first 1024 are used.","category":"page"},{"location":"Examples/Examples/","page":"Notes on Examples","title":"Notes on Examples","text":"danger: BIDIMENSIONAL ENTROPIES\nEach bidimensional entropy function (SampEn2D, FuzzEn2D, DistEn2D, DispEn2D, EspEn2D, PermEn2D) has an important keyword argument - Lock. 
Bidimensional entropy functions are \"locked\" by default (Lock == true) to only permit matrices with a maximum size of 128 x 128.","category":"page"},{"location":"Examples/Example10/#Example-10:-Bidimensional-Fuzzy-Entropy","page":"Ex.10: Bidimensional Fuzzy Entropy","title":"Example 10: Bidimensional Fuzzy Entropy","text":"","category":"section"},{"location":"Examples/Example10/","page":"Ex.10: Bidimensional Fuzzy Entropy","title":"Ex.10: Bidimensional Fuzzy Entropy","text":"Import an image of a Mandelbrot fractal as a matrix.","category":"page"},{"location":"Examples/Example10/","page":"Ex.10: Bidimensional Fuzzy Entropy","title":"Ex.10: Bidimensional Fuzzy Entropy","text":"X = ExampleData(\"mandelbrot_Mat\");\n\nusing Plots\nheatmap(X, background_color=\"black\")","category":"page"},{"location":"Examples/Example10/","page":"Ex.10: Bidimensional Fuzzy Entropy","title":"Ex.10: Bidimensional Fuzzy Entropy","text":"(Image: AlmondBread)","category":"page"},{"location":"Examples/Example10/","page":"Ex.10: Bidimensional Fuzzy Entropy","title":"Ex.10: Bidimensional Fuzzy Entropy","text":"Calculate the bidimensional fuzzy entropy in trits (logarithm base 3) with a template matrix of size [8 x 5], and a time delay (tau) of 2 using a 'constgaussian' fuzzy membership function (r=24).","category":"page"},{"location":"Examples/Example10/","page":"Ex.10: Bidimensional Fuzzy Entropy","title":"Ex.10: Bidimensional Fuzzy Entropy","text":"using EntropyHub # hide\nX = ExampleData(\"mandelbrot_Mat\"); # hide\nFE2D = FuzzEn2D(X, m = (8, 5), tau = 2, Fx = \"constgaussian\", r = 24, Logx = 3)","category":"page"},{"location":"Examples/Example6/#Example-6:-Multiscale-Increment-Entropy","page":"Ex.6: Multiscale [Increment] Entropy","title":"Example 6: Multiscale Increment Entropy","text":"","category":"section"},{"location":"Examples/Example6/","page":"Ex.6: Multiscale [Increment] Entropy","title":"Ex.6: Multiscale [Increment] Entropy","text":"Import a signal of uniformly distributed 
pseudorandom integers in the range [1 8] and create a multiscale entropy object with the following parameters: EnType = IncrEn(), embedding dimension = 3, quantifying resolution = 6, normalization = true.","category":"page"},{"location":"Examples/Example6/","page":"Ex.6: Multiscale [Increment] Entropy","title":"Ex.6: Multiscale [Increment] Entropy","text":"using EntropyHub # hide\nX = ExampleData(\"randintegers\");\nMobj = MSobject(IncrEn, m = 3, R = 6, Norm = true)","category":"page"},{"location":"Examples/Example6/","page":"Ex.6: Multiscale [Increment] Entropy","title":"Ex.6: Multiscale [Increment] Entropy","text":"Calculate the multiscale increment entropy over 5 temporal scales using the modified graining procedure where:","category":"page"},{"location":"Examples/Example6/","page":"Ex.6: Multiscale [Increment] Entropy","title":"Ex.6: Multiscale [Increment] Entropy","text":"y_j^{(\\tau)} = \\frac{1}{\\tau} \\sum_{i=(j-1)\\tau+1}^{j\\tau} x_i, \\quad 1 \\leq j \\leq \\frac{N}{\\tau} ","category":"page"},{"location":"Examples/Example6/","page":"Ex.6: Multiscale [Increment] Entropy","title":"Ex.6: Multiscale [Increment] Entropy","text":"using EntropyHub # hide\nX = ExampleData(\"randintegers\"); # hide\nMobj = MSobject(IncrEn, m = 3, R = 6, Norm = true) # hide\nMSx, _ = MSEn(X, Mobj, Scales = 5, Methodx = \"modified\");\nMSx # hide","category":"page"},{"location":"Examples/Example6/","page":"Ex.6: Multiscale [Increment] Entropy","title":"Ex.6: Multiscale [Increment] Entropy","text":"Change the graining method to return generalized multiscale increment entropy.","category":"page"},{"location":"Examples/Example6/","page":"Ex.6: Multiscale [Increment] Entropy","title":"Ex.6: Multiscale [Increment] Entropy","text":"y_j^{(\\tau)} = \\frac{1}{\\tau} \\sum_{i=(j-1)\\tau+1}^{j\\tau} \\left( x_i - \\bar{x} \\right)^2, \\quad 1 \\leq j \\leq \\frac{N}{\\tau} ","category":"page"},{"location":"Examples/Example6/","page":"Ex.6: Multiscale [Increment] Entropy","title":"Ex.6: Multiscale [Increment] Entropy","text":"using EntropyHub # hide\nX = 
ExampleData(\"randintegers\"); # hide\nMobj = MSobject(IncrEn, m = 3, R = 6, Norm = true) # hide\nMSx, _ = MSEn(X, Mobj, Scales = 5, Methodx = \"generalized\");\nMSx # hide","category":"page"},{"location":"Examples/Example9/#Example-9:-Hierarchical-Multiscale-corrected-Cross-Conditional-Entropy","page":"Ex.9: Hierarchical Multiscale corrected Cross-Conditional Entropy","title":"Example 9: Hierarchical Multiscale corrected Cross-Conditional Entropy","text":"","category":"section"},{"location":"Examples/Example9/","page":"Ex.9: Hierarchical Multiscale corrected Cross-Conditional Entropy","title":"Ex.9: Hierarchical Multiscale corrected Cross-Conditional Entropy","text":"Import the x and y components of the Henon system of equations and create a multiscale entropy object with the following parameters: EnType = XCondEn(), embedding dimension = 2, time delay = 2, number of symbols = 12, logarithm base = 2, normalization = true","category":"page"},{"location":"Examples/Example9/","page":"Ex.9: Hierarchical Multiscale corrected Cross-Conditional Entropy","title":"Ex.9: Hierarchical Multiscale corrected Cross-Conditional Entropy","text":"Data = ExampleData(\"henon\");\nMobj = MSobject(XCondEn, m = 2, tau = 2, c = 12, Logx = 2, Norm = true)\n\nusing Plots\nscatter(Data[:,1], Data[:,2], markercolor = \"green\", markerstrokecolor = \"black\",\nmarkersize = 3, background_color = \"black\", grid = false)","category":"page"},{"location":"Examples/Example9/","page":"Ex.9: Hierarchical Multiscale corrected Cross-Conditional Entropy","title":"Ex.9: Hierarchical Multiscale corrected Cross-Conditional Entropy","text":"(Image: Henon)","category":"page"},{"location":"Examples/Example9/","page":"Ex.9: Hierarchical Multiscale corrected Cross-Conditional Entropy","title":"Ex.9: Hierarchical Multiscale corrected Cross-Conditional Entropy","text":"Calculate the hierarchical multiscale corrected cross-conditional entropy over 4 temporal scales and return the average cross-entropy at each 
scale (Sn), the complexity index (Ci), and a plot of the multiscale entropy curve and the hierarchical tree with the cross-entropy value at each node.","category":"page"},{"location":"Examples/Example9/","page":"Ex.9: Hierarchical Multiscale corrected Cross-Conditional Entropy","title":"Ex.9: Hierarchical Multiscale corrected Cross-Conditional Entropy","text":"using EntropyHub # hide\nData = ExampleData(\"henon\"); # hide\nMobj = MSobject(XCondEn, m = 2, tau = 2, c = 12, Logx = 2, Norm = true) # hide\nMSx, Sn, Ci = hXMSEn(Data[:,1], Data[:,2], Mobj, Scales = 4, Plotx = true)","category":"page"},{"location":"Examples/Example9/","page":"Ex.9: Hierarchical Multiscale corrected Cross-Conditional Entropy","title":"Ex.9: Hierarchical Multiscale corrected Cross-Conditional Entropy","text":"(Image: hXMSEn)","category":"page"},{"location":"Guide/Cross_Entropies/#Cross-Entropies","page":"Cross-Entropies","title":"Cross Entropies","text":"","category":"section"},{"location":"Guide/Cross_Entropies/","page":"Cross-Entropies","title":"Cross-Entropies","text":"Functions for estimating the cross-entropy between two univariate time series.","category":"page"},{"location":"Guide/Cross_Entropies/","page":"Cross-Entropies","title":"Cross-Entropies","text":"The following functions also form the cross-entropy method used by Multiscale Cross-Entropy functions.","category":"page"},{"location":"Guide/Cross_Entropies/","page":"Cross-Entropies","title":"Cross-Entropies","text":"These functions are directly available when EntropyHub is imported:","category":"page"},{"location":"Guide/Cross_Entropies/","page":"Cross-Entropies","title":"Cross-Entropies","text":"julia> using EntropyHub\njulia> names(EntropyHub)","category":"page"},{"location":"Guide/Cross_Entropies/","page":"Cross-Entropies","title":"Cross-Entropies","text":" :ApEn\n :AttnEn\n :BubbEn\n ⋮\n :hXMSEn\n :rMSEn\n 
:rXMSEn","category":"page"},{"location":"Guide/Cross_Entropies/","page":"Cross-Entropies","title":"Cross-Entropies","text":"EntropyHub.XApEn\nEntropyHub.XSampEn\nEntropyHub.XFuzzEn\nEntropyHub.XK2En\nEntropyHub.XPermEn\nEntropyHub.XCondEn\nEntropyHub.XDistEn\nEntropyHub.XSpecEn","category":"page"},{"location":"Guide/Cross_Entropies/#EntropyHub._XApEn.XApEn","page":"Cross-Entropies","title":"EntropyHub._XApEn.XApEn","text":"XAp, Phi = XApEn(Sig1, Sig2)\n\nReturns the cross-approximate entropy estimates (XAp) and the average number of matched vectors (Phi) for m = [0,1,2], estimated for the data sequences contained in Sig1 and Sig2 using the default parameters: embedding dimension = 2, time delay = 1, radius distance threshold = 0.2*SDpooled(Sig1,Sig2), logarithm = natural\n\nNOTE: XApEn is direction-dependent. Thus, Sig1 is used as the template data sequence, and Sig2 is the matching sequence.\n\nXAp, Phi = XApEn(Sig1::Union{AbstractMatrix{T}, AbstractVector{T}} where T<:Real, Sig2::Union{AbstractVector{T} where T<:Real, Nothing} = nothing; m::Int=2, tau::Int=1, r::Union{Real,Nothing}=nothing, Logx::Real=exp(1))\n\nReturns the cross-approximate entropy estimates (XAp) between the data sequences contained in Sig1 and Sig2 using the specified 'keyword' arguments:\n\nArguments:\n\nm - Embedding Dimension, a positive integer [default: 2] \n\ntau - Time Delay, a positive integer [default: 1] \n\nr - Radius Distance Threshold, a positive scalar [default: 0.2*SDpooled(Sig1,Sig2)] \n\nLogx - Logarithm base, a positive scalar [default: natural] \n\nSee also XSampEn, XFuzzEn, XMSEn, ApEn, SampEn, MSEn\n\nReferences:\n\n[1] Steven Pincus and Burton H. 
Singer,\n \"Randomness and degrees of irregularity.\" \n Proceedings of the National Academy of Sciences \n 93.5 (1996): 2083-2088.\n\n[2] Steven Pincus,\n \"Assessing serial irregularity and its implications for health.\"\n Annals of the New York Academy of Sciences \n 954.1 (2001): 245-267.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Cross_Entropies/#EntropyHub._XSampEn.XSampEn","page":"Cross-Entropies","title":"EntropyHub._XSampEn.XSampEn","text":"XSamp, A, B = XSampEn(Sig1, Sig2)\n\nReturns the cross-sample entropy estimates (XSamp) and the number of matched vectors (m:B, m+1:A) for m = [0,1,2] estimated for the two univariate data sequences contained in Sig1 and Sig2 using the default parameters: embedding dimension = 2, time delay = 1, radius distance threshold = 0.2*SDpooled(Sig1,Sig2), logarithm = natural\n\nXSamp, A, B, (Vcp, Ka, Kb) = XSampEn(Sig1, Sig2, ..., Vcp = true)\n\nIf Vcp == true, an additional tuple (Vcp, Ka, Kb) is returned with the cross-sample entropy estimates (XSamp) and the number of matched state vectors (m: B, m+1: A). (Vcp, Ka, Kb) contains the variance of the conditional probabilities (Vcp), i.e. CP = A/B, and the number of overlapping matching vector pairs of lengths m+1 (Ka) and m (Kb), respectively. Note: Vcp is undefined for the zeroth embedding dimension (m = 0) and, due to the computational demand, setting Vcp == true will take substantially more time to return the function outputs. 
See Appendix B in [2] for more info.\n\nXSamp, A, B = XSampEn(Sig1::Union{AbstractMatrix{T}, AbstractVector{T}} where T<:Real, Sig2::Union{AbstractVector{T} where T<:Real, Nothing} = nothing; m::Int=2, tau::Int=1, r::Union{Real,Nothing}=nothing, Logx::Real=exp(1), Vcp::Bool=false)\n\nReturns the cross-sample entropy estimates (XSamp) for dimensions [0,1,...,m] estimated between the data sequences in Sig1 and Sig2 using the specified 'keyword' arguments:\n\nArguments:\n\nm - Embedding Dimension, a positive integer [default: 2] \n\ntau - Time Delay, a positive integer [default: 1] \n\nr - Radius Distance Threshold, a positive scalar [default: 0.2*SDpooled(Sig1,Sig2)] \n\nLogx - Logarithm base, a positive scalar [default: natural] \n\nVcp - Option to return the variance of the conditional probabilities and the number of overlapping matching vector pairs of lengths m+1 (Ka) and m (Kb) [default: false] \n\nSee also XFuzzEn, XApEn, SampEn, SampEn2D, XMSEn, ApEn\n\nReferences:\n\n[1] Joshua S Richman and J. Randall Moorman. \n \"Physiological time-series analysis using approximate entropy\n and sample entropy.\" \n American Journal of Physiology-Heart and Circulatory Physiology\n (2000)\n\n[2] Douglas E Lake, Joshua S Richman, M.P. Griffin, J. Randall Moorman\n \"Sample entropy analysis of neonatal heart rate variability.\"\n American Journal of Physiology-Regulatory, Integrative and Comparative Physiology\n 283, no. 
3 (2002): R789-R797.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Cross_Entropies/#EntropyHub._XFuzzEn.XFuzzEn","page":"Cross-Entropies","title":"EntropyHub._XFuzzEn.XFuzzEn","text":"XFuzz, Ps1, Ps2 = XFuzzEn(Sig1, Sig2)\n\nReturns the cross-fuzzy entropy estimates (XFuzz) and the average fuzzy distances (m:Ps1, m+1:Ps2) for m = [1,2] estimated for the data sequences contained in Sig1 and Sig2, using the default parameters: embedding dimension = 2, time delay = 1, fuzzy function (Fx) = 'default', fuzzy function parameters (r) = [0.2, 2], logarithm = natural\n\nXFuzz, Ps1, Ps2 = XFuzzEn(Sig1::Union{AbstractMatrix{T}, AbstractVector{T}} where T<:Real, Sig2::Union{AbstractVector{T} where T<:Real, Nothing} = nothing; m::Int=2, tau::Int=1, r::Union{Real,Tuple{Real,Real}}=(.2,2), Fx::String=\"default\", Logx::Real=exp(1))\n\nReturns the cross-fuzzy entropy estimates (XFuzz) for dimensions = [1,...,m] estimated for the data sequences in Sig1 and Sig2 using the specified 'keyword' arguments:\n\nArguments:\n\nm - Embedding Dimension, a positive integer [default: 2] \n\ntau - Time Delay, a positive integer [default: 1] \n\nFx - Fuzzy function name, one of the following: {\"sigmoid\", \"modsampen\", \"default\", \"gudermannian\", \"bell\", \"triangular\", \"trapezoidal1\", \"trapezoidal2\", \"z_shaped\", \"gaussian\", \"constgaussian\"}\n\nr - Fuzzy function parameters, a scalar or a 2 element tuple of positive values. 
The r parameters for each fuzzy function are defined as follows:\n\n sigmoid: r(1) = divisor of the exponential argument\n r(2) = value subtracted from argument (pre-division)\n modsampen: r(1) = divisor of the exponential argument\n r(2) = value subtracted from argument (pre-division)\n default: r(1) = divisor of the exponential argument\n r(2) = argument exponent (pre-division)\n gudermannian: r = a scalar whose value is the numerator of\n argument to gudermannian function:\n GD(x) = atan(tanh(`r`/x)).\n triangular: r = a scalar whose value is the threshold (corner point) of the triangular function.\n trapezoidal1: r = a scalar whose value corresponds to the upper (2r) and lower (r) corner points of the trapezoid.\n trapezoidal2: r(1) = a value corresponding to the upper corner point of the trapezoid.\n r(2) = a value corresponding to the lower corner point of the trapezoid.\n z_shaped: r = a scalar whose value corresponds to the upper (2r) and lower (r) corner points of the z-shape.\n bell: r(1) = divisor of the distance value\n r(2) = exponent of generalized bell-shaped function\n gaussian: r = a scalar whose value scales the slope of the Gaussian curve.\n constgaussian: r = a scalar whose value defines the lower threshold and shape of the Gaussian curve. \n [DEPRECATED] linear: r = an integer value. When r = 0, the\n argument of the exponential function is \n normalised between [0 1]. 
When r = 1,\n the minimum value of the exponential \n argument is set to 0. \n\nLogx - Logarithm base, a positive scalar \n\nFor further information on the 'keyword' arguments, see the EntropyHub guide.\n\nSee also FuzzEn, XSampEn, XApEn, FuzzEn2D, XMSEn, MSEn\n\nReferences:\n\n[1] Hong-Bo Xie, et al.,\n \"Cross-fuzzy entropy: A new method to test pattern synchrony of\n bivariate time series.\" \n Information Sciences \n 180.9 (2010): 1715-1724.\n\n[2] Hamed Azami, et al.\n \"Fuzzy Entropy Metrics for the Analysis of Biomedical Signals: \n Assessment and Comparison\"\n IEEE Access\n 7 (2019): 104833-104847\n\n\n\n\n\n","category":"function"},{"location":"Guide/Cross_Entropies/#EntropyHub._XK2En.XK2En","page":"Cross-Entropies","title":"EntropyHub._XK2En.XK2En","text":"XK2, Ci = XK2En(Sig1, Sig2)\n\nReturns the cross-Kolmogorov entropy estimates (XK2) and the correlation integrals (Ci) for m = [1,2] estimated between the data sequences contained in Sig1 and Sig2 using the default parameters: embedding dimension = 2, time delay = 1, distance threshold (r) = 0.2*SDpooled(Sig1, Sig2), logarithm = natural\n\nXK2, Ci = XK2En(Sig1::Union{AbstractMatrix{T}, AbstractVector{T}} where T<:Real, Sig2::Union{AbstractVector{T} where T<:Real, Nothing} = nothing; m::Int=2, tau::Int=1, r::Union{Real,Nothing}=nothing, Logx::Real=exp(1))\n\nReturns the cross-Kolmogorov entropy estimates (XK2) estimated between the data sequences contained in Sig1 and Sig2 using the specified 'keyword' arguments:\n\nArguments:\n\nm - Embedding Dimension, a positive integer [default: 2] \n\ntau - Time Delay, a positive integer [default: 1] \n\nr - Radius Distance Threshold, a positive scalar [default: 0.2*SDpooled(Sig1,Sig2)] \n\nLogx - Logarithm base, a positive scalar [default: natural] \n\nSee also XSampEn, XFuzzEn, XApEn, K2En, XMSEn, XDistEn\n\nReferences:\n\n[1] Matthew W. 
Flood,\n \"XK2En - EntropyHub Project\"\n (2021) https://github.com/MattWillFlood/EntropyHub\n\n\n\n\n\n","category":"function"},{"location":"Guide/Cross_Entropies/#EntropyHub._XPermEn.XPermEn","page":"Cross-Entropies","title":"EntropyHub._XPermEn.XPermEn","text":"XPerm = XPermEn(Sig1, Sig2)\n\nReturns the cross-permutation entropy estimates (XPerm) estimated between the data sequences contained in Sig1 and Sig2 using the default parameters: embedding dimension = 3, time delay = 1, logarithm = base 2 \n\nXPerm = XPermEn(Sig1::Union{AbstractMatrix{T}, AbstractVector{T}} where T<:Real, Sig2::Union{AbstractVector{T} where T<:Real, Nothing} = nothing; m::Int=3, tau::Int=1, Logx::Real=exp(1))\n\nReturns the cross-permutation entropy estimates (XPerm) estimated between the data sequences contained in Sig1 and Sig2 using the specified 'keyword' arguments:\n\nArguments:\n\nm - Embedding Dimension, an integer > 2 [default: 3] \n\n **Note: XPerm is undefined for embedding dimensions < 3.**\n\ntau - Time Delay, a positive integer [default: 1] \n\nLogx - Logarithm base, a positive scalar [default: 2] ** enter 0 for natural log.** \n\nSee also PermEn, XApEn, XSampEn, XFuzzEn, XMSEn\n\nReferences:\n\n[1] Wenbin Shi, Pengjian Shang, and Aijing Lin,\n \"The coupling analysis of stock market indices based on \n cross-permutation entropy.\"\n Nonlinear Dynamics\n 79.4 (2015): 2439-2447.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Cross_Entropies/#EntropyHub._XCondEn.XCondEn","page":"Cross-Entropies","title":"EntropyHub._XCondEn.XCondEn","text":"XCond, SEw, SEz = XCondEn(Sig1, Sig2)\n\nReturns the corrected cross-conditional entropy estimates (XCond) and the corresponding Shannon entropies (m: SEw, m+1: SEz) for m = [1,2] estimated for the data sequences contained in Sig1 and Sig2 using the default parameters: embedding dimension = 2, time delay = 1, number of symbols = 6, logarithm = natural ** Note: XCondEn is direction-dependent. 
Therefore, the order of the data sequences Sig1 and Sig2 matters. If Sig1 is the sequence 'y', and Sig2 is the second sequence 'u', the XCond is the amount of information carried by y(i) when the pattern u(i) is found.**\n\nXCond, SEw, SEz = XCondEn(Sig1::Union{AbstractMatrix{T}, AbstractVector{T}} where T<:Real, Sig2::Union{AbstractVector{T} where T<:Real, Nothing} = nothing; m::Int=2, tau::Int=1, c::Int=6, Logx::Real=exp(1), Norm::Bool=false)\n\nReturns the corrected cross-conditional entropy estimates (XCond) for the data sequences contained in Sig1 and Sig2 using the specified 'keyword' arguments:\n\nArguments:\n\nm - Embedding Dimension, an integer > 1 [default: 2] \n\ntau - Time Delay, a positive integer [default: 1] \n\nc - Number of symbols, an integer > 1 [default: 6] \n\nLogx - Logarithm base, a positive scalar [default: natural] \n\nNorm - Normalisation of XCond values: [false] no normalisation [default]\n\n [true] normalises w.r.t cross-Shannon entropy.\n\nSee also XFuzzEn, XSampEn, XApEn, XPermEn, CondEn, XMSEn\n\nReferences:\n\n[1] Alberto Porta, et al.,\n \"Conditional entropy approach for the evaluation of the \n coupling strength.\" \n Biological cybernetics \n 81.2 (1999): 119-129.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Cross_Entropies/#EntropyHub._XDistEn.XDistEn","page":"Cross-Entropies","title":"EntropyHub._XDistEn.XDistEn","text":"XDist, Ppi = XDistEn(Sig1, Sig2)\n\nReturns the cross-distribution entropy estimate (XDist) and the corresponding distribution probabilities (Ppi) estimated between the data sequences contained in Sig1 and Sig2 using the default parameters: embedding dimension = 2, time delay = 1, binning method = 'Sturges', logarithm = base 2, normalisation = w.r.t # of histogram bins\n\nXDist, Ppi = XDistEn(Sig1::Union{AbstractMatrix{T}, AbstractVector{T}} where T<:Real, Sig2::Union{AbstractVector{T} where T<:Real, Nothing} = nothing; m::Int=2, tau::Int=1, Bins::Union{Int,String}=\"Sturges\", Logx::Real=2, 
Norm::Bool=true)\n\nReturns the cross-distribution entropy estimate (XDist) estimated between the data sequences contained in Sig1 and Sig2 using the specified 'keyword' arguments:\n\nArguments:\n\nm - Embedding Dimension, a positive integer [default: 2] \n\ntau - Time Delay, a positive integer [default: 1] \n\nBins - Histogram bin selection method for distance distribution, an integer > 1 indicating the number of bins, or one of the following strings {'sturges','sqrt','rice','doanes'} [default: 'sturges'] \n\nLogx - Logarithm base, a positive scalar [default: 2] ** enter 0 for natural log**\n\nNorm - Normalisation of DistEn value: [false] no normalisation. [true] normalises w.r.t # of histogram bins [default] \n\nSee also XSampEn, XApEn, XPermEn, XCondEn, DistEn, DistEn2D, XMSEn\n\nReferences:\n\n[1] Yuanyuan Wang and Pengjian Shang,\n \"Analysis of financial stock markets through the multiscale\n cross-distribution entropy based on the Tsallis entropy.\"\n Nonlinear Dynamics \n 94.2 (2018): 1361-1376.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Cross_Entropies/#EntropyHub._XSpecEn.XSpecEn","page":"Cross-Entropies","title":"EntropyHub._XSpecEn.XSpecEn","text":"XSpec, BandEn = XSpecEn(Sig1, Sig2)\n\nReturns the cross-spectral entropy estimate (XSpec) of the full cross-spectrum and the within-band entropy (BandEn) estimated between the data sequences contained in Sig1 and Sig2 using the default parameters: N-point FFT = 2 * max(length(Sig1/Sig2)) + 1, normalised band edge frequencies = [0 1], logarithm = base 2, normalisation = w.r.t # of spectrum/band frequency values.\n\nXSpec, BandEn = XSpecEn(Sig1::Union{AbstractMatrix{T}, AbstractVector{T}} where T<:Real, Sig2::Union{AbstractVector{T} where T<:Real, Nothing} = nothing; N::Union{Nothing,Int}=nothing, Freqs::Tuple{Real,Real}=(0,1), Logx::Real=exp(1), Norm::Bool=true)\n\nReturns the cross-spectral entropy (XSpec) and the within-band entropy (BandEn) estimate between the data sequences contained in Sig1 and Sig2 using 
the following specified 'keyword' arguments:\n\nArguments:\n\nN - Resolution of spectrum (N-point FFT), an integer > 1 \n\nFreqs - Normalised band edge frequencies, a two-element tuple of values in range [0 1] where 1 corresponds to the Nyquist frequency (Fs/2). Note: When no band frequencies are entered, BandEn == SpecEn \n\nLogx - Logarithm base, a positive scalar [default: base 2] ** enter 0 for natural log** \n\nNorm - Normalisation of XSpec value: [false] no normalisation. [true] normalises w.r.t # of spectrum/band frequency values [default] \n\nFor more info, see the EntropyHub guide\n\nSee also SpecEn, fft, XDistEn, periodogram, XSampEn, XApEn\n\nReferences:\n\n[1] Matthew W. Flood,\n \"XSpecEn - EntropyHub Project\"\n (2021) https://github.com/MattWillFlood/EntropyHub\n\n\n\n\n\n","category":"function"},{"location":"Examples/Example4/#Example-4:-Cross-Distribution-Entropy-w/-Different-Binning-Methods","page":"Ex.4: Cross-Distribution Entropy","title":"Example 4: Cross-Distribution Entropy w/ Different Binning Methods","text":"","category":"section"},{"location":"Examples/Example4/","page":"Ex.4: Cross-Distribution Entropy","title":"Ex.4: Cross-Distribution Entropy","text":"Import a signal of pseudorandom integers in the range [1, 8] and calculate the cross-distribution entropy with an embedding dimension of 5, a time delay (tau) of 3, and 'Sturges' bin selection method.","category":"page"},{"location":"Examples/Example4/","page":"Ex.4: Cross-Distribution Entropy","title":"Ex.4: Cross-Distribution Entropy","text":"using EntropyHub # hide\nX = ExampleData(\"randintegers2\");\nXDist, _ = XDistEn(X[:,1], X[:,2], m = 5, tau = 3);\nprintln(\" \") # hide\nprintln(\"XDist = \", XDist) # hide","category":"page"},{"location":"Examples/Example4/","page":"Ex.4: Cross-Distribution Entropy","title":"Ex.4: Cross-Distribution Entropy","text":"Use Rice's method to determine the number of histogram bins and return the probability of each bin 
(Ppi).","category":"page"},{"location":"Examples/Example4/","page":"Ex.4: Cross-Distribution Entropy","title":"Ex.4: Cross-Distribution Entropy","text":"using EntropyHub # hide\nX = ExampleData(\"randintegers2\"); # hide\nXDist, Ppi = XDistEn(X[:,1], X[:,2], m = 5, tau = 3, Bins = \"rice\")\nprintln(\"XDist = \", XDist) # hide\nprintln(\"Ppi = \", Ppi) #hide","category":"page"},{"location":"Examples/Example8/#Example-8:-Composite-Multiscale-Cross-Approximate-Entropy","page":"Ex.8: Composite Multiscale Cross-Approximate Entropy","title":"Example 8: Composite Multiscale Cross-Approximate Entropy","text":"","category":"section"},{"location":"Examples/Example8/","page":"Ex.8: Composite Multiscale Cross-Approximate Entropy","title":"Ex.8: Composite Multiscale Cross-Approximate Entropy","text":"Import two signals of uniformly distributed pseudorandom integers in the range [1 8] and create a multiscale entropy object with the following parameters: EnType = XApEn(), embedding dimension = 2, time delay = 2, radius distance threshold = 0.5","category":"page"},{"location":"Examples/Example8/","page":"Ex.8: Composite Multiscale Cross-Approximate Entropy","title":"Ex.8: Composite Multiscale Cross-Approximate Entropy","text":"using EntropyHub # hide\nX = ExampleData(\"randintegers2\");\nMobj = MSobject(XApEn, m = 2, tau = 2, r = 0.5)\nMobj # hide","category":"page"},{"location":"Examples/Example8/","page":"Ex.8: Composite Multiscale Cross-Approximate Entropy","title":"Ex.8: Composite Multiscale Cross-Approximate Entropy","text":"Calculate the composite multiscale cross-approximate entropy over 3 temporal scales where the radius distance threshold value (r) specified by Mobj becomes scaled by the variance of the signal at each scale.","category":"page"},{"location":"Examples/Example8/","page":"Ex.8: Composite Multiscale Cross-Approximate Entropy","title":"Ex.8: Composite Multiscale Cross-Approximate Entropy","text":"using EntropyHub # hide\nX = ExampleData(\"randintegers2\"); # 
hide \nMobj = MSobject(XApEn, m = 2, tau = 2, r = 0.5) # hide\nMSx, _ = cXMSEn(X[:,1], X[:,2], Mobj, Scales = 3, RadNew = 1)\nMSx # hide","category":"page"},{"location":"Guide/Multiscale_Entropies/#Multiscale-Entropies","page":"Multiscale Entropies","title":"Multiscale Entropies","text":"","category":"section"},{"location":"Guide/Multiscale_Entropies/","page":"Multiscale Entropies","title":"Multiscale Entropies","text":"Functions for estimating the multiscale entropy of a univariate time series.","category":"page"},{"location":"Guide/Multiscale_Entropies/","page":"Multiscale Entropies","title":"Multiscale Entropies","text":"Multiscale entropy can be calculated using any of the Base entropies: (ApEn, AttnEn, BubbEn, CondEn, CoSiEn, DistEn, DivEn, DispEn, EnofEn, FuzzEn, GridEn, IncrEn, K2En, PermEn, PhasEn, RangEn, SampEn, SlopEn, SpecEn, SyDyEn).","category":"page"},{"location":"Guide/Multiscale_Entropies/","page":"Multiscale Entropies","title":"Multiscale Entropies","text":"info: NOTE:\nMultiscale entropy functions have two positional arguments: the data sequence, Sig (a vector > 10 elements),\nthe multiscale entropy object, Mobj.","category":"page"},{"location":"Guide/Multiscale_Entropies/","page":"Multiscale Entropies","title":"Multiscale Entropies","text":"EntropyHub.MSobject","category":"page"},{"location":"Guide/Multiscale_Entropies/#EntropyHub._MSobject.MSobject","page":"Multiscale Entropies","title":"EntropyHub._MSobject.MSobject","text":"Mobj = MSobject()\n\nReturns a multiscale entropy object (Mobj) based on that originally proposed by Costa et al. (2002) using the following default parameters: EnType = SampEn(), embedding dimension = 2, time delay = 1, radius = 0.2*SD(Sig), logarithm = natural\n\nMobj = MSobject(EnType::Function)\n\nReturns a multiscale entropy object by passing the entropy function (EnType) and using the default parameters for that entropy function. 
To see the default parameters for a particular entropy method, type: ? EntropyHub.EnType \n\n(e.g. ? EntropyHub.SampEn)\n\nMobj = MSobject(EnType::Function; kwargs...)\n\nReturns a multiscale entropy object using the specified entropy method (EnType) and the 'keyword' parameters for that particular method. To see the default parameters for a particular entropy method, type: ? EntropyHub.EnType (e.g. ? EntropyHub.SampEn)\n\nEnType can be any of the following entropy functions:\n\nBase Entropies:\n\n-----------------\n`ApEn` - Approximate Entropy \n\n`SampEn` - Sample Entropy \n\n`FuzzEn` - Fuzzy Entropy \n\n`K2En` - Kolmogorov Entropy \n\n`PermEn` - Permutation Entropy\t \n\n`CondEn` - Conditional Entropy\t \n\n`DistEn` - Distribution Entropy\t \n\n`DispEn` - Dispersion Entropy\t \n\n`SpecEn` - Spectral Entropy \n\n`SyDyEn` - Symbolic Dynamic Entropy\t \n\n`IncrEn` - Increment Entropy\t \n\n`CoSiEn` - Cosine Similarity Entropy\t\n\n`PhasEn` - Phase Entropy\t \n\n`SlopEn` - Slope Entropy \n\n`BubbEn` - Bubble Entropy \n\n`GridEn` - Gridded Distribution Entropy \n\n`EnofEn` - Entropy of Entropy\t\n\n`AttnEn` - Attention Entropy \n\n`DivEn` - Diversity Entropy \n\n`RangEn` - Range Entropy\n\nCross Entropies:\n\n------------------\n`XApEn` - Cross-Approximate Entropy \n\n`XSampEn` - Cross-Sample Entropy \n\n`XFuzzEn` - Cross-Fuzzy Entropy \n\n`XK2En` - Cross-Kolmogorov Entropy \n\n`XPermEn` - Cross-Permutation Entropy \n\n`XCondEn` - Cross-Conditional Entropy \n\n`XDistEn` - Cross-Distribution Entropy \n\n`XSpecEn` - Cross-Spectral Entropy\n\nSee also MSEn, XMSEn, rMSEn, cMSEn, hMSEn, rXMSEn, cXMSEn, hXMSEn\n\n\n\n\n\n","category":"function"},{"location":"Guide/Multiscale_Entropies/","page":"Multiscale Entropies","title":"Multiscale Entropies","text":"The following functions use the multiscale entropy object shown above.","category":"page"},{"location":"Guide/Multiscale_Entropies/","page":"Multiscale Entropies","title":"Multiscale 
Entropies","text":"EntropyHub.MSEn\nEntropyHub.cMSEn\nEntropyHub.rMSEn\nEntropyHub.hMSEn","category":"page"},{"location":"Guide/Multiscale_Entropies/#EntropyHub._MSEn.MSEn","page":"Multiscale Entropies","title":"EntropyHub._MSEn.MSEn","text":" MSx, CI = MSEn(Sig, Mobj)\n\nReturns a vector of multiscale entropy values MSx and the complexity index CI of the data sequence Sig using the parameters specified by the multiscale object Mobj over 3 temporal scales with coarse-graining (default). \n\n MSx, CI = MSEn(Sig::AbstractArray{T,1} where T<:Real, Mobj::NamedTuple; Scales::Int=3, \n Methodx::String=\"coarse\", RadNew::Int=0, Plotx::Bool=false)\n\nReturns a vector of multiscale entropy values MSx and the complexity index CI of the data sequence Sig using the parameters specified by the multiscale object Mobj and the following 'keyword' arguments:\n\nArguments:\n\nScales - Number of temporal scales, an integer > 1 (default: 3) \n\nMethodx - Graining method, one of the following: {coarse,modified,imf,timeshift,generalized} [default = coarse] For further info on these graining procedures, see the EntropyHub guide. \n\nRadNew - Radius rescaling method, an integer in the range [1 4]. When the entropy specified by Mobj is SampEn or ApEn, RadNew allows the radius threshold to be updated at each time scale (Xt). If a radius value is specified by Mobj (r), this becomes the rescaling coefficient, otherwise it is set to 0.2 (default). The value of RadNew specifies one of the following methods:\n\n [1] Standard Deviation - r*std(Xt)\n\n [2] Variance - r*var(Xt) \n\n [3] Mean Absolute Deviation - r*mean_ad(Xt) \n\n [4] Median Absolute Deviation - r*med_ad(Xt)\n\nPlotx - When Plotx == true, returns a plot of the entropy value at each time scale (i.e. 
the multiscale entropy curve) [default: false]\n\nFor further info on these graining procedures see the EntropyHub guide.\n\nSee also MSobject, rMSEn, cMSEn, hMSEn, SampEn, ApEn, XMSEn\n\nReferences:\n\n [1] Madalena Costa, Ary Goldberger, and C-K. Peng,\n \"Multiscale entropy analysis of complex physiologic time series.\"\n Physical review letters\n 89.6 (2002): 068102.\n\n [2] Vadim V. Nikulin, and Tom Brismar,\n \"Comment on “Multiscale entropy analysis of complex physiologic\n time series”.\" \n Physical review letters \n 92.8 (2004): 089803.\n\n [3] Madalena Costa, Ary L. Goldberger, and C-K. Peng. \n \"Costa, Goldberger, and Peng reply.\" \n Physical Review Letters\n 92.8 (2004): 089804.\n\n [4] Madalena Costa, Ary L. Goldberger and C-K. Peng,\n \"Multiscale entropy analysis of biological signals.\" \n Physical review E \n 71.2 (2005): 021906.\n\n [5] Ranjit A. Thuraisingham and Georg A. Gottwald,\n \"On multiscale entropy analysis for physiological data.\"\n Physica A: Statistical Mechanics and its Applications\n 366 (2006): 323-332.\n\n [6] Meng Hu and Hualou Liang,\n \"Intrinsic mode entropy based on multivariate empirical mode\n decomposition and its application to neural data analysis.\" \n Cognitive neurodynamics\n 5.3 (2011): 277-284.\n\n [7] Anne Humeau-Heurtier \n \"The multiscale entropy algorithm and its variants: A review.\" \n Entropy \n 17.5 (2015): 3110-3123.\n\n [8] Jianbo Gao, et al.,\n \"Multiscale entropy analysis of biological signals: a \n fundamental bi-scaling law.\" \n Frontiers in computational neuroscience \n 9 (2015): 64.\n\n [9] Paolo Castiglioni, et al.,\n \"Multiscale Sample Entropy of cardiovascular signals: Does the\n choice between fixed-or varying-tolerance among scales \n influence its evaluation and interpretation?.\" \n Entropy\n 19.11 (2017): 590.\n\n [10] Tuan D Pham,\n \"Time-shift multiscale entropy analysis of physiological signals.\" \n Entropy \n 19.6 (2017): 257.\n\n [11] Hamed Azami and Javier Escudero,\n 
\"Coarse-graining approaches in univariate multiscale sample \n and dispersion entropy.\" \n Entropy 20.2 (2018): 138.\n\n [12] Madalena Costa and Ary L. Goldberger,\n \"Generalized multiscale entropy analysis: Application to quantifying \n the complex volatility of human heartbeat time series.\" \n Entropy 17.3 (2015): 1197-1203.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Multiscale_Entropies/#EntropyHub._cMSEn.cMSEn","page":"Multiscale Entropies","title":"EntropyHub._cMSEn.cMSEn","text":"MSx, CI = cMSEn(Sig, Mobj)\n\nReturns a vector of composite multiscale entropy values (MSx) for the data sequence (Sig) using the parameters specified by the multiscale object (Mobj) using the composite multiscale entropy method over 3 temporal scales.\n\nMSx, CI = cMSEn(Sig::AbstractArray{T,1} where T<:Real, Mobj::NamedTuple; \n Scales::Int=3, RadNew::Int=0, Refined::Bool=false, Plotx::Bool=false)\n\nReturns a vector of composite multiscale entropy values (MSx) of the data sequence (Sig) using the parameters specified by the multiscale object (Mobj) and the following 'keyword' arguments:\n\nArguments:\n\nScales - Number of temporal scales, an integer > 1 (default: 3) \n\nRadNew - Radius rescaling method, an integer in the range [1 4]. When the entropy specified by Mobj is SampEn or ApEn, RadNew allows the radius threshold to be updated at each time scale (Xt). If a radius value is specified by Mobj (r), this becomes the rescaling coefficient, otherwise it is set to 0.2 (default). The value of RadNew specifies one of the following methods:\n\n [1] Standard Deviation - r*std(Xt)\n\n [2] Variance - r*var(Xt) \n\n [3] Mean Absolute Deviation - r*mean_ad(Xt) \n\n [4] Median Absolute Deviation - r*med_ad(Xt)\n\nRefined - Refined-composite MSEn method. 
When Refined == true and the entropy function specified by Mobj is SampEn or FuzzEn, cMSEn returns the refined-composite multiscale entropy (rcMSEn) [default: false]\n\nPlotx - When Plotx == true, returns a plot of the entropy value at each time scale (i.e. the multiscale entropy curve) [default: false]\n\nSee also MSobject, rMSEn, MSEn, hMSEn, SampEn, ApEn, XMSEn\n\nReferences:\n\n[1] Madalena Costa, Ary Goldberger, and C-K. Peng,\n \"Multiscale entropy analysis of complex physiologic time series.\"\n Physical review letters\n 89.6 (2002): 068102.\n\n[2] Vadim V. Nikulin, and Tom Brismar,\n \"Comment on “Multiscale entropy analysis of complex physiologic\n time series”.\" \n Physical review letters \n 92.8 (2004): 089803.\n\n[3] Madalena Costa, Ary L. Goldberger, and C-K. Peng. \n \"Costa, Goldberger, and Peng reply.\" \n Physical Review Letters\n 92.8 (2004): 089804.\n\n [4] Shuen-De Wu, et al.,\n \"Time series analysis using composite multiscale entropy.\" \n Entropy \n 15.3 (2013): 1069-1084.\n\n[5] Shuen-De Wu, et al.,\n \"Analysis of complex time series using refined composite \n multiscale entropy.\" \n Physics Letters A \n 378.20 (2014): 1369-1374.\n\n[6] Azami, Hamed, Alberto Fernández, and Javier Escudero.\n \"Refined multiscale fuzzy entropy based on standard deviation\n for biomedical signal analysis.\" \n Medical & biological engineering & computing 55 (2017): 2037-2052.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Multiscale_Entropies/#EntropyHub._rMSEn.rMSEn","page":"Multiscale Entropies","title":"EntropyHub._rMSEn.rMSEn","text":"MSx, CI = rMSEn(Sig, Mobj)\n\nReturns a vector of refined multiscale entropy values (MSx) and the complexity index (CI) of the data sequence (Sig) using the parameters specified by the multiscale object (Mobj) and the following default parameters: Scales = 3, Butterworth LPF Order = 6, Butterworth LPF cutoff frequency at scale (T): Fc = 0.5/T. 
If the entropy function specified by Mobj is SampEn or ApEn, rMSEn updates the threshold radius of the data sequence (Xt) at each scale to 0.2*std(Xt) if no r value is provided by Mobj, or r*std(Xt) if r is specified.\n\nMSx, CI = rMSEn(Sig::AbstractArray{T,1} where T<:Real, Mobj::NamedTuple; Scales::Int=3, \n    F_Order::Int=6, F_Num::Float64=0.5, RadNew::Int=0, Plotx::Bool=false)\n\nReturns a vector of refined multiscale entropy values (MSx) and the complexity index (CI) of the data sequence (Sig) using the parameters specified by the multiscale object (Mobj) and the following 'keyword' arguments:\n\nArguments:\n\nScales - Number of temporal scales, an integer > 1 (default = 3) \n\nF_Order - Butterworth low-pass filter order, a positive integer (default: 6) \n\nF_Num - Numerator of Butterworth low-pass filter cutoff frequency, a scalar value in range [0 < F_Num < 1]. The cutoff frequency at each scale (T) becomes: Fc = F_Num/T. (default: 0.5) \n\nRadNew - Radius rescaling method, an integer in the range [1 4]. When the entropy specified by Mobj is SampEn or ApEn, RadNew allows the radius threshold to be updated at each time scale (Xt). If a radius value is specified by Mobj (r), this becomes the rescaling coefficient, otherwise it is set to 0.2 (default). The value of RadNew specifies one of the following methods:\n\n    [1] Standard Deviation - r*std(Xt)\n\n    [2] Variance - r*var(Xt) \n\n    [3] Mean Absolute Deviation - r*mean_ad(Xt) \n\n    [4] Median Absolute Deviation - r*med_ad(Xt)\n\nPlotx - When Plotx == true, returns a plot of the entropy value at each time scale (i.e. the multiscale entropy curve) [default: false] \n\nSee also MSobject, MSEn, cMSEn, hMSEn, SampEn, ApEn, XMSEn\n\nReferences:\n\n[1] Madalena Costa, Ary Goldberger, and C-K. Peng,\n    \"Multiscale entropy analysis of complex physiologic time series.\"\n    Physical review letters\n    89.6 (2002): 068102.\n\n[2] Vadim V. 
Nikulin, and Tom Brismar,\n    \"Comment on “Multiscale entropy analysis of complex physiologic\n    time series”.\" \n    Physical review letters \n    92.8 (2004): 089803.\n\n[3] Madalena Costa, Ary L. Goldberger, and C-K. Peng. \n    \"Costa, Goldberger, and Peng reply.\" \n    Physical Review Letters\n    92.8 (2004): 089804.\n\n[4] José Fernando Valencia, et al.,\n    \"Refined multiscale entropy: Application to 24-h holter \n    recordings of heart period variability in healthy and aortic \n    stenosis subjects.\" \n    IEEE Transactions on Biomedical Engineering \n    56.9 (2009): 2202-2213.\n\n[5] Puneeta Marwaha and Ramesh Kumar Sunkaria,\n    \"Optimal selection of threshold value ‘r’ for refined multiscale\n    entropy.\" \n    Cardiovascular engineering and technology \n    6.4 (2015): 557-576.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Multiscale_Entropies/#EntropyHub._hMSEn.hMSEn","page":"Multiscale Entropies","title":"EntropyHub._hMSEn.hMSEn","text":"MSx, Sn, CI = hMSEn(Sig, Mobj)\n\nReturns a vector of entropy values (MSx) calculated at each node in the hierarchical tree, the average entropy value across all nodes at each scale (Sn), and the complexity index (CI) of the hierarchical tree (i.e. sum(Sn)) for the data sequence (Sig) using the parameters specified by the multiscale object (Mobj) over 3 temporal scales (default). The entropy values in MSx are ordered from the root node (S.00) to the Nth subnode at scale T (S.TN): i.e. S.00, S.10, S.11, S.20, S.21, S.22, S.23, S.30, S.31, S.32, S.33, S.34, S.35, S.36, S.37, S.40, ... , S.TN. The average entropy values in Sn are ordered in the same way, with the value of the root node given first: i.e. 
S0, S1, S2, ..., ST\n\nMSx, Sn, CI = hMSEn(Sig::AbstractArray{T,1} where T<:Real, Mobj::NamedTuple; \n Scales::Int=3, RadNew::Int=0, Plotx::Bool=false)\n\nReturns a vector of entropy values (MSx) calculated at each node in the hierarchical tree, the average entropy value across all nodes at each scale (Sn), and the complexity index (CI) of the entire hierarchical tree for the data sequence (Sig) using the following 'keyword' arguments:\n\nArguments:\n\nScales - Number of temporal scales, an integer > 1 (default: 3) At each scale (T), entropy is estimated for 2^(T-1) nodes.\n\nRadNew - Radius rescaling method, an integer in the range [1 4]. When the entropy specified by Mobj is SampEn or ApEn, RadNew allows the radius threshold to be updated at each node in the tree. If a radius value is specified by Mobj (r), this becomes the rescaling coefficient, otherwise it is set to 0.2 (default). The value of RadNew specifies one of the following methods:\n\n [1] Standard Deviation - r*std(Xt)\n\n [2] Variance - r*var(Xt)\n\n [3] Mean Absolute Deviation - r*mean_ad(Xt)\n\n [4] Median Absolute Deviation - r*med_ad(Xt,1)\n\nPlotx - When Plotx == true, returns a plot of the average entropy value at each time scale (i.e. the multiscale entropy curve) and a hierarchical graph showing the entropy value of each node in the hierarchical tree decomposition. (default: false)\n\nSee also MSobject, MSEn, cMSEn, rMSEn, SampEn, ApEn, XMSEn\n\nReferences:\n\n[1] Ying Jiang, C-K. 
Peng and Yuesheng Xu,\n \"Hierarchical entropy analysis for biological signals.\"\n Journal of Computational and Applied Mathematics\n 236.5 (2011): 728-742.\n\n\n\n\n\n","category":"function"},{"location":"Examples/Example2/#Example-2:-(Fine-grained)-Permutation-Entropy","page":"Ex.2: Permutation Entropy","title":"Example 2: (Fine-grained) Permutation Entropy","text":"","category":"section"},{"location":"Examples/Example2/","page":"Ex.2: Permutation Entropy","title":"Ex.2: Permutation Entropy","text":"Import the x, y, and z components of the Lorenz system of equations.","category":"page"},{"location":"Examples/Example2/","page":"Ex.2: Permutation Entropy","title":"Ex.2: Permutation Entropy","text":"Data = ExampleData(\"lorenz\");\n\nusing Plots\nscatter(Data[:,1], Data[:,2], Data[:,3],\nmarkercolor = \"green\", markerstrokecolor = \"black\",\nmarkersize = 3, background_color = \"black\", grid = false)","category":"page"},{"location":"Examples/Example2/","page":"Ex.2: Permutation Entropy","title":"Ex.2: Permutation Entropy","text":"(Image: Lorenz)","category":"page"},{"location":"Examples/Example2/","page":"Ex.2: Permutation Entropy","title":"Ex.2: Permutation Entropy","text":"Calculate fine-grained permutation entropy of the z component in dits (logarithm base 10) with an embedding dimension of 3, time delay of 2, an alpha parameter of 1.234. Return Pnorm normalised w.r.t the number of all possible permutations (m!) 
and the conditional permutation entropy (cPE) estimate.","category":"page"},{"location":"Examples/Example2/","page":"Ex.2: Permutation Entropy","title":"Ex.2: Permutation Entropy","text":"using EntropyHub # hide\nData = ExampleData(\"lorenz\"); # hide\nZ = Data[:,3];\nPerm, Pnorm, cPE = PermEn(Z, m = 3, tau = 2, Typex = \"finegrain\", \n                    tpx = 1.234, Logx = 10, Norm = false)","category":"page"},{"location":"Guide/Base_Entropies/#Base-Entropies","page":"Base Entropies","title":"Base Entropies","text":"","category":"section"},{"location":"Guide/Base_Entropies/","page":"Base Entropies","title":"Base Entropies","text":"Functions for estimating the entropy of a single univariate time series.","category":"page"},{"location":"Guide/Base_Entropies/","page":"Base Entropies","title":"Base Entropies","text":"The following functions also form the base entropy method used by Multiscale functions.","category":"page"},{"location":"Guide/Base_Entropies/","page":"Base Entropies","title":"Base Entropies","text":"These functions are directly available when EntropyHub is imported:","category":"page"},{"location":"Guide/Base_Entropies/","page":"Base Entropies","title":"Base Entropies","text":"julia> using EntropyHub\n\njulia> names(EntropyHub)","category":"page"},{"location":"Guide/Base_Entropies/","page":"Base Entropies","title":"Base Entropies","text":" :ApEn\n :AttnEn\n :BubbEn\n   ⋮\n :hXMSEn\n :rMSEn\n :rXMSEn","category":"page"},{"location":"Guide/Base_Entropies/","page":"Base Entropies","title":"Base Entropies","text":"EntropyHub.ApEn\nEntropyHub.SampEn\nEntropyHub.FuzzEn\nEntropyHub.K2En\nEntropyHub.PermEn\nEntropyHub.CondEn\nEntropyHub.DistEn\nEntropyHub.SpecEn\nEntropyHub.DispEn\nEntropyHub.SyDyEn\nEntropyHub.IncrEn\nEntropyHub.CoSiEn\nEntropyHub.PhasEn\nEntropyHub.SlopEn\nEntropyHub.BubbEn\nEntropyHub.GridEn\nEntropyHub.EnofEn\nEntropyHub.AttnEn\nEntropyHub.RangEn\nEntropyHub.DivEn","category":"page"},{"location":"Guide/Base_Entropies/#EntropyHub._ApEn.ApEn","page":"Base 
Entropies","title":"EntropyHub._ApEn.ApEn","text":"Ap, Phi = ApEn(Sig)\n\nReturns the approximate entropy estimates Ap and the log-average number of matched vectors Phi for m = [0,1,2], estimated from the data sequence Sig using the default parameters: embedding dimension = 2, time delay = 1, radius distance threshold = 0.2*SD(Sig), logarithm = natural\n\nAp, Phi = ApEn(Sig::AbstractArray{T,1} where T<:Real; m::Int=2, tau::Int=1, r::Real=0.2*std(Sig,corrected=false), Logx::Real=exp(1))\n\nReturns the approximate entropy estimates Ap of the data sequence Sig for dimensions = [0,1,...,m] using the specified keyword arguments:\n\nArguments:\n\nm - Embedding Dimension, a positive integer\n\ntau - Time Delay, a positive integer\n\nr - Radius Distance Threshold, a positive scalar \n\nLogx - Logarithm base, a positive scalar\n\nSee also XApEn, SampEn, MSEn, FuzzEn, PermEn, CondEn, DispEn\n\nReferences:\n\n[1] Steven M. Pincus, \n \"Approximate entropy as a measure of system complexity.\" \n Proceedings of the National Academy of Sciences \n 88.6 (1991): 2297-2301.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Base_Entropies/#EntropyHub._SampEn.SampEn","page":"Base Entropies","title":"EntropyHub._SampEn.SampEn","text":"Samp, A, B = SampEn(Sig)\n\nReturns the sample entropy estimates Samp and the number of matched state vectors (m:B, m+1:A) for m = [0,1,2] estimated from the data sequence Sig using the default parameters: embedding dimension = 2, time delay = 1, radius threshold = 0.2*SD(Sig), logarithm = natural\n\nSamp, A, B, (Vcp, Ka, Kb) = SampEn(Sig, ..., Vcp = true)\n\nIf Vcp == true, an additional tuple (Vcp, Ka, Kb) is returned with the sample entropy estimates (Samp) and the number of matched state vectors (m: B, m+1: A). (Vcp, Ka, Kb) contains the variance of the conditional probabilities (Vcp), i.e. CP = A/B, and the number of overlapping matching vector pairs of lengths m+1 (Ka) and m (Kb), respectively. 
Note Vcp is undefined for the zeroth embedding dimension (m = 0) and due to the computational demand, will take substantially more time to return function outputs. See Appendix B in [2] for more info.\n\nSamp, A, B = SampEn(Sig::AbstractArray{T,1} where T<:Real; m::Int=2, tau::Int=1, r::Real=0.2*std(Sig,corrected=false), Logx::Real=exp(1), Vcp::Bool=false)\n\nReturns the sample entropy estimates Samp for dimensions = [0,1,...,m] estimated from the data sequence Sig using the specified keyword arguments:\n\nArguments:\n\nm - Embedding Dimension, a positive integer\n\ntau - Time Delay, a positive integer\n\nr - Radius Distance Threshold, a positive scalar \n\nLogx - Logarithm base, a positive scalar \n\nVcp - Option to return the variance of the conditional probabilities and the number of overlapping matching vector pairs of lengths \n\nSee also ApEn, FuzzEn, PermEn, CondEn, XSampEn, SampEn2D, MSEn\n\nReferences:\n\n[1] Joshua S Richman and J. Randall Moorman. \n \"Physiological time-series analysis using approximate entropy\n and sample entropy.\" \n American Journal of Physiology-Heart and Circulatory Physiology (2000).\n\n[2] Douglas E Lake, Joshua S Richman, M.P. Griffin, J. Randall Moorman\n \"Sample entropy analysis of neonatal heart rate variability.\"\n American Journal of Physiology-Regulatory, Integrative and Comparative Physiology\n 283, no. 
3 (2002): R789-R797.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Base_Entropies/#EntropyHub._FuzzEn.FuzzEn","page":"Base Entropies","title":"EntropyHub._FuzzEn.FuzzEn","text":"Fuzz, Ps1, Ps2 = FuzzEn(Sig)\n\nReturns the fuzzy entropy estimates Fuzz and the average fuzzy distances (m:Ps1, m+1:Ps2) for m = [1,2] estimated from the data sequence Sig using the default parameters: embedding dimension = 2, time delay = 1, fuzzy function (Fx) = \"default\", fuzzy function parameters (r) = [0.2, 2], logarithm = natural\n\nFuzz, Ps1, Ps2 = FuzzEn(Sig::AbstractArray{T,1} where T<:Real; m::Int=2, tau::Int=1, r::Union{Real,Tuple{Real,Real}}=(.2,2), Fx::String=\"default\", Logx::Real=exp(1))\n\nReturns the fuzzy entropy estimates Fuzz for dimensions = [1,...,m] estimated for the data sequence Sig using the specified keyword arguments:\n\nArguments:\n\nm - Embedding Dimension, a positive integer [default: 2]\n\ntau - Time Delay, a positive integer [default: 1]\n\nFx - Fuzzy function name, one of the following: {\"sigmoid\", \"modsampen\", \"default\", \"gudermannian\", \"bell\", \"triangular\", \"trapezoidal1\", \"trapezoidal2\", \"z_shaped\", \"gaussian\", \"constgaussian\"}\n\nr - Fuzzy function parameters, a 1 element scalar or a 2 element tuple of positive values. 
The r parameters for each fuzzy function are defined as follows: [default: [.2 2]]\n\n    default:        r(1) = divisor of the exponential argument\n                    r(2) = argument exponent (pre-division)\n    sigmoid:        r(1) = divisor of the exponential argument\n                    r(2) = value subtracted from argument (pre-division)\n    modsampen:      r(1) = divisor of the exponential argument\n                    r(2) = value subtracted from argument (pre-division)\n    gudermannian:   r = a scalar whose value is the numerator of\n                    argument to gudermannian function:\n                    GD(x) = atan(tanh(`r`/x))\n    triangular:     r = a scalar whose value is the threshold (corner point) of the triangular function.\n    trapezoidal1:   r = a scalar whose value corresponds to the upper (2r) and lower (r) corner points of the trapezoid.\n    trapezoidal2:   r(1) = a value corresponding to the upper corner point of the trapezoid.\n                    r(2) = a value corresponding to the lower corner point of the trapezoid.\n    z_shaped:       r = a scalar whose value corresponds to the upper (2r) and lower (r) corner points of the z-shape.\n    bell:           r(1) = divisor of the distance value\n                    r(2) = exponent of generalized bell-shaped function\n    gaussian:       r = a scalar whose value scales the slope of the Gaussian curve.\n    constgaussian:  r = a scalar whose value defines the lower threshold and shape of the Gaussian curve. \n    [DEPRECATED] linear:       r = an integer value. When r = 0, the\n                    argument of the exponential function is \n                    normalised between [0 1]. 
When r = 1,\n                    the minimum value of the exponential \n                    argument is set to 0.\n\nLogx - Logarithm base, a positive scalar [default: natural]\n\nFor further information on keyword arguments, see the EntropyHub guide.\n\nSee also SampEn, ApEn, PermEn, DispEn, XFuzzEn, FuzzEn2D, MSEn\n\nReferences:\n\n[1] Weiting Chen, et al.\n    \"Characterization of surface EMG signal based on fuzzy entropy.\"\n    IEEE Transactions on neural systems and rehabilitation engineering\n    15.2 (2007): 266-272.\n\n[2] Hong-Bo Xie, Wei-Xing He, and Hui Liu\n    \"Measuring time series regularity using nonlinear\n    similarity-based sample entropy.\"\n    Physics Letters A\n    372.48 (2008): 7140-7146.\n\n[3] Hamed Azami, et al.\n    \"Fuzzy Entropy Metrics for the Analysis of Biomedical Signals: \n    Assessment and Comparison\"\n    IEEE Access\n    7 (2019): 104833-104847\n\n\n\n\n\n","category":"function"},{"location":"Guide/Base_Entropies/#EntropyHub._K2En.K2En","page":"Base Entropies","title":"EntropyHub._K2En.K2En","text":"K2, Ci = K2En(Sig)\n\nReturns the Kolmogorov entropy estimates K2 and the correlation integrals Ci for m = [1,2] estimated from the data sequence Sig using the default parameters: embedding dimension = 2, time delay = 1, r = 0.2*SD(Sig), logarithm = natural\n\nK2, Ci = K2En(Sig::AbstractArray{T,1} where T<:Real; m::Int=2, tau::Int=1, r::Real=0.2*std(Sig,corrected=false), Logx::Real=exp(1))\n\nReturns the Kolmogorov entropy estimates K2 for dimensions = [1,...,m] estimated from the data sequence Sig using the 'keyword' arguments:\n\nArguments:\n\nm - Embedding Dimension, a positive integer\n\ntau - Time Delay, a positive integer\n\nr - Radius, a positive scalar \n\nLogx - Logarithm base, a positive scalar\n\nSee also DistEn, XK2En, MSEn\n\nReferences:\n\n[1] Peter Grassberger and Itamar Procaccia,\n    \"Estimation of the Kolmogorov entropy from a chaotic signal.\" \n    Physical review A 28.4 (1983): 2591.\n\n[2] Lin Gao, Jue Wang and Longwei Chen\n    \"Event-related desynchronization and 
synchronization \n    quantification in motor-related EEG by Kolmogorov entropy\"\n    J Neural Eng. 2013 Jun;10(3):03602\n\n\n\n\n\n","category":"function"},{"location":"Guide/Base_Entropies/#EntropyHub._PermEn.PermEn","page":"Base Entropies","title":"EntropyHub._PermEn.PermEn","text":"Perm, Pnorm, cPE = PermEn(Sig)\n\nReturns the permutation entropy estimates Perm, the normalised permutation entropy Pnorm and the conditional permutation entropy cPE for m = [1,2] estimated from the data sequence Sig using the default parameters: embedding dimension = 2, time delay = 1, logarithm = base 2, normalisation = w.r.t #symbols (m-1) Note: using the standard PermEn estimation, Perm = 0 when m = 1. Note: It is recommended that signal length > 5m! (see [8] and Amigo et al., Europhys. Lett. 83:60005, 2008)\n\nPerm, Pnorm, cPE = PermEn(Sig, m)\n\nReturns the permutation entropy estimates Perm estimated from the data sequence Sig using the specified embedding dimensions = [1,...,m] with other default parameters as listed above.\n\nPerm, Pnorm, cPE = PermEn(Sig::AbstractArray{T,1} where T<:Real; m::Int=2, tau::Int=1, Typex::String=\"none\", tpx::Union{Real,Nothing}=nothing, Logx::Real=2, Norm::Bool=false)\n\nReturns the permutation entropy estimates Perm for dimensions = [1,...,m] estimated from the data sequence Sig using the specified 'keyword' arguments:\n\nArguments:\n\nm - Embedding Dimension, an integer > 1 \n\ntau - Time Delay, a positive integer\n\nLogx - Logarithm base, a positive scalar (enter 0 for natural log) \n\nNorm - Normalisation of PermEn value:\n\n    false - normalises w.r.t log(# of permutation symbols [m-1]) - default\n    true - normalises w.r.t log(# of all possible permutations [m!])\n    * Note: Normalised permutation entropy is undefined for m = 1.\n    ** Note: When Typex = 'uniquant' and Norm = true, normalisation\n    is calculated w.r.t. 
log(tpx^m)\n\nTypex - Permutation entropy variation, one of the following: {\"none\", \"uniquant\", \"finegrain\", \"modified\", \"ampaware\", \"weighted\", \"edge\", \"phase\"} See the EntropyHub guide for more info on PermEn variations. \n\ntpx - Tuning parameter for associated permutation entropy variation.\n\n    [uniquant] 'tpx' is the L parameter, an integer > 1 (default = 4). \n    [finegrain] 'tpx' is the alpha parameter, a positive scalar (default = 1)\n    [ampaware] 'tpx' is the A parameter, a value in range [0 1] (default = 0.5)\n    [edge] 'tpx' is the r sensitivity parameter, a scalar > 0 (default = 1)\n    [phase] 'tpx' unwraps the instantaneous phase (angle of analytic signal) when tpx==1 (default = 0)\n    See the EntropyHub guide for more info on PermEn variations.\n\nSee also XPermEn, MSEn, XMSEn, SampEn, ApEn, CondEn\n\nReferences:\n\n[1] Christoph Bandt and Bernd Pompe, \n    \"Permutation entropy: A natural complexity measure for time series.\" \n    Physical Review Letters,\n    88.17 (2002): 174102.\n\n[2] Xiao-Feng Liu, and Wang Yue,\n    \"Fine-grained permutation entropy as a measure of natural \n    complexity for time series.\" \n    Chinese Physics B \n    18.7 (2009): 2690.\n\n[3] Chunhua Bian, et al.,\n    \"Modified permutation-entropy analysis of heartbeat dynamics.\"\n    Physical Review E\n    85.2 (2012): 021906\n\n[4] Bilal Fadlallah, et al.,\n    \"Weighted-permutation entropy: A complexity measure for time \n    series incorporating amplitude information.\" \n    Physical Review E \n    87.2 (2013): 022911.\n\n[5] Hamed Azami and Javier Escudero,\n    \"Amplitude-aware permutation entropy: Illustration in spike \n    detection and signal segmentation.\" \n    Computer methods and programs in biomedicine,\n    128 (2016): 40-51.\n\n[6] Zhiqiang Huo, et al.,\n    \"Edge Permutation Entropy: An Improved Entropy Measure for \n    Time-Series Analysis,\" \n    45th Annual Conference of the IEEE Industrial Electronics Soc,\n    (2019), 5998-6003\n\n[7] Zhe Chen, et al. 
\n \"Improved permutation entropy for measuring complexity of time\n series under noisy condition.\" \n Complexity \n 1403829 (2019).\n\n[8] Maik Riedl, Andreas Müller, and Niels Wessel,\n \"Practical considerations of permutation entropy.\" \n The European Physical Journal Special Topics \n 222.2 (2013): 249-262.\n\n[9] Kang Huan, Xiaofeng Zhang, and Guangbin Zhang,\n \"Phase permutation entropy: A complexity measure for nonlinear time \n series incorporating phase information.\" \n Physica A: Statistical Mechanics and its Applications \n 568 (2021): 125686.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Base_Entropies/#EntropyHub._CondEn.CondEn","page":"Base Entropies","title":"EntropyHub._CondEn.CondEn","text":"Cond, SEw, SEz = CondEn(Sig)\n\nReturns the corrected conditional entropy estimates (Cond) and the corresponding Shannon entropies (m: SEw, m+1: SEz) for m = [1,2] estimated from the data sequence (Sig) using the default parameters: embedding dimension = 2, time delay = 1, symbols = 6, logarithm = natural, normalisation = false Note: CondEn(m=1) returns the Shannon entropy of Sig.\n\nCond, SEw, SEz = CondEn(Sig::AbstractArray{T,1} where T<:Real; m::Int=2, tau::Int=1, c::Int=6, Logx::Real=exp(1), Norm::Bool=false)\n\nReturns the corrected conditional entropy estimates (Cond) from the data sequence (Sig) using the specified 'keyword' arguments:\n\nArguments:\n\nm - Embedding Dimension, an integer > 1 \n\ntau - Time Delay, a positive integer \n\nc - # of symbols, an integer > 1 \n\nLogx - Logarithm base, a positive scalar \n\nNorm - Normalisation of CondEn value: \n\n [false] no normalisation - default\n [true] normalises w.r.t Shannon entropy of data sequence `Sig`\n\nSee also XCondEn, MSEn, PermEn, DistEn, XPermEn\n\nReferences:\n\n[1] Alberto Porta, et al.,\n \"Measuring regularity by means of a corrected conditional\n entropy in sympathetic outflow.\" \n Biological cybernetics \n 78.1 (1998): 
71-78.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Base_Entropies/#EntropyHub._DistEn.DistEn","page":"Base Entropies","title":"EntropyHub._DistEn.DistEn","text":"Dist, Ppi = DistEn(Sig)\n\nReturns the distribution entropy estimate (Dist) and the corresponding distribution probabilities (Ppi) estimated from the data sequence (Sig) using the default parameters: embedding dimension = 2, time delay = 1, binning method = 'Sturges', logarithm = base 2, normalisation = w.r.t # of histogram bins\n\nDist, Ppi = DistEn(Sig::AbstractArray{T,1} where T<:Real; m::Int=2, tau::Int=1, Bins::Union{Int,String}=\"Sturges\", Logx::Real=2, Norm::Bool=true)\n\nReturns the distribution entropy estimate (Dist) estimated from the data sequence (Sig) using the specified 'keyword' arguments:\n\nArguments:\n\nm - Embedding Dimension, a positive integer \n\ntau - Time Delay, a positive integer \n\nBins - Histogram bin selection method for distance distribution, one of the following: \n\n an integer > 1 indicating the number of bins, or one of the \n following strings {'sturges','sqrt','rice','doanes'}\n [default: 'sturges']\n\nLogx - Logarithm base, a positive scalar (enter 0 for natural log) \n\nNorm - Normalisation of DistEn value: \n\n [false] no normalisation.\n [true] normalises w.r.t # of histogram bins - default\n\nSee also XDistEn, DistEn2D, MSEn, K2En\n\nReferences:\n\n[1] Li, Peng, et al.,\n \"Assessing the complexity of short-term heartbeat interval \n series by distribution entropy.\" \n Medical & biological engineering & computing \n 53.1 (2015): 77-87.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Base_Entropies/#EntropyHub._SpecEn.SpecEn","page":"Base Entropies","title":"EntropyHub._SpecEn.SpecEn","text":"Spec, BandEn = SpecEn(Sig)\n\nReturns the spectral entropy estimate of the full spectrum (Spec) and the within-band entropy (BandEn) estimated from the data sequence (Sig) using the default parameters: N-point FFT = 2*len(Sig) + 1, normalised band edge 
frequencies = [0 1], logarithm = base 2, normalisation = w.r.t # of spectrum/band frequency values.\n\nSpec, BandEn = SpecEn(Sig::AbstractArray{T,1} where T<:Real; N::Int=1 + (2*size(Sig,1)), Freqs::Tuple{Real,Real}=(0,1), Logx::Real=exp(1), Norm::Bool=true)\n\nReturns the spectral entropy (Spec) and the within-band entropy (BandEn) estimate for the data sequence (Sig) using the specified 'keyword' arguments:\n\nArguments:\n\nN - Resolution of spectrum (N-point FFT), an integer > 1 \n\nFreqs - Normalised band edge frequencies, a 2 element tuple with values \n\n in range [0 1] where 1 corresponds to the Nyquist frequency (Fs/2).\n Note: When no band frequencies are entered, BandEn == SpecEn\n\nLogx - Logarithm base, a positive scalar (enter 0 for natural log) \n\nNorm - Normalisation of Spec value:\n\n [false] no normalisation.\n [true] normalises w.r.t # of spectrum/band frequency values - default.\n\nFor more info, see the EntropyHub guide.\n\nSee also XSpecEn, fft, MSEn, XMSEn\n\nReferences:\n\n[1] G.E. Powell and I.C. 
Percival,\n \"A spectral entropy method for distinguishing regular and \n irregular motion of Hamiltonian systems.\" \n Journal of Physics A: Mathematical and General \n 12.11 (1979): 2053.\n\n[2] Tsuyoshi Inouye, et al.,\n \"Quantification of EEG irregularity by use of the entropy of \n the power spectrum.\" \n Electroencephalography and clinical neurophysiology \n 79.3 (1991): 204-210.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Base_Entropies/#EntropyHub._DispEn.DispEn","page":"Base Entropies","title":"EntropyHub._DispEn.DispEn","text":"Dispx, RDE = DispEn(Sig)\n\nReturns the dispersion entropy (Dispx) and the reverse dispersion entropy (RDE) estimated from the data sequence (Sig) using the default parameters: embedding dimension = 2, time delay = 1, symbols = 3, logarithm = natural, data transform = normalised cumulative density function (ncdf)\n\nDispx, RDE = DispEn(Sig::AbstractArray{T,1} where T<:Real; c::Int=3, m::Int=2, tau::Int=1, Typex::String=\"ncdf\", Logx::Real=exp(1), Fluct::Bool=false, Norm::Bool=false, rho::Real=1)\n\nReturns the dispersion entropy (Dispx) and the reverse dispersion entropy (RDE) estimated from the data sequence (Sig) using the specified 'keyword' arguments:\n\nArguments:\n\nm - Embedding Dimension, a positive integer\n\ntau - Time Delay, a positive integer\n\nc - Number of symbols, an integer > 1\n\nTypex - Type of data-to-symbolic sequence transform, one of the following: {\"linear\", \"kmeans\" ,\"ncdf\", \"finesort\", \"equal\"}\n\n See the EntropyHub guide for more info on these transforms.\n\nLogx - Logarithm base, a positive scalar\n\nFluct - When Fluct == true, DispEn returns the fluctuation-based Dispersion entropy. 
[default: false]\n\nNorm - Normalisation of Dispx and RDE value: [false] no normalisation - default [true] normalises w.r.t number of possible dispersion patterns (c^m or (2c -1)^m-1 if Fluct == true).\n\nrho - If Typex == 'finesort', rho is the tuning parameter (default: 1)\n\nSee also PermEn, SyDyEn, MSEn\n\nReferences:\n\n[1] Mostafa Rostaghi and Hamed Azami,\n    \"Dispersion entropy: A measure for time-series analysis.\" \n    IEEE Signal Processing Letters \n    23.5 (2016): 610-614.\n\n[2] Hamed Azami and Javier Escudero,\n    \"Amplitude-and fluctuation-based dispersion entropy.\" \n    Entropy \n    20.3 (2018): 210.\n\n[3] Li Yuxing, Xiang Gao and Long Wang,\n    \"Reverse dispersion entropy: A new complexity measure for \n    sensor signal.\" \n    Sensors \n    19.23 (2019): 5203.\n\n[4] Wenlong Fu, et al.,\n    \"Fault diagnosis for rolling bearings based on fine-sorted \n    dispersion entropy and SVM optimized with mutation SCA-PSO.\"\n    Entropy\n    21.4 (2019): 404.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Base_Entropies/#EntropyHub._SyDyEn.SyDyEn","page":"Base Entropies","title":"EntropyHub._SyDyEn.SyDyEn","text":"SyDy, Zt = SyDyEn(Sig)\n\nReturns the symbolic dynamic entropy (SyDy) and the symbolic sequence (Zt) of the data sequence (Sig) using the default parameters: embedding dimension = 2, time delay = 1, symbols = 3, logarithm = natural, symbolic partition type = maximum entropy partitioning (MEP), normalisation = normalises w.r.t # possible vector permutations (c^m) \n\nSyDy, Zt = SyDyEn(Sig::AbstractArray{T,1} where T<:Real; m::Int=2, tau::Int=1, c::Int=3, Typex::String=\"MEP\", Logx::Real=exp(1), Norm::Bool=true)\n\nReturns the symbolic dynamic entropy (SyDy) and the symbolic sequence (Zt) of the data sequence (Sig) using the specified 'keyword' arguments:\n\nArguments:\n\nm - Embedding Dimension, a positive integer \n\ntau - Time Delay, a positive integer \n\nc - Number of symbols, an integer > 1 \n\nTypex - Type of symbolic sequence partitioning method, one of the 
following: \n\n {\"linear\",\"uniform\",\"MEP\"(default),\"kmeans\"}\n\nLogx - Logarithm base, a positive scalar \n\nNorm - Normalisation of SyDyEn value: \n\n [false] no normalisation \n [true] normalises w.r.t # possible vector permutations (c^m+1) - default\n\nSee the EntropyHub guide for more info on these parameters.\n\nSee also DispEn, PermEn, CondEn, SampEn, MSEn\n\nReferences:\n\n[1] Yongbo Li, et al.,\n \"A fault diagnosis scheme for planetary gearboxes using \n modified multi-scale symbolic dynamic entropy and mRMR feature \n selection.\" \n Mechanical Systems and Signal Processing \n 91 (2017): 295-312. \n\n[2] Jian Wang, et al.,\n \"Fault feature extraction for multiple electrical faults of \n aviation electro-mechanical actuator based on symbolic dynamics\n entropy.\" \n IEEE International Conference on Signal Processing, \n Communications and Computing (ICSPCC), 2015.\n\n[3] Venkatesh Rajagopalan and Asok Ray,\n \"Symbolic time series analysis via wavelet-based partitioning.\"\n Signal processing \n 86.11 (2006): 3309-3320.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Base_Entropies/#EntropyHub._IncrEn.IncrEn","page":"Base Entropies","title":"EntropyHub._IncrEn.IncrEn","text":"Incr = IncrEn(Sig)\n\nReturns the increment entropy (Incr) estimate of the data sequence (Sig) using the default parameters: embedding dimension = 2, time delay = 1, quantifying resolution = 4, logarithm = base 2,\n\nIncr = IncrEn(Sig::AbstractArray{T,1} where T<:Real; m::Int=2, tau::Int=1, R::Int=4, Logx::Real=2, Norm::Bool=false)\n\nReturns the increment entropy (Incr) estimate of the data sequence (Sig) using the specified 'keyword' arguments:\n\nArguments:\n\nm - Embedding Dimension, an integer > 1 \n\ntau - Time Delay, a positive integer \n\nR - Quantifying resolution, a positive scalar \n\nLogx - Logarithm base, a positive scalar (enter 0 for natural log) \n\nNorm - Normalisation of IncrEn value: \n\n [false] no normalisation - default\n [true] normalises w.r.t 
embedding dimension (m-1).\n\nSee also PermEn, SyDyEn, MSEn\n\nReferences:\n\n[1] Xiaofeng Liu, et al.,\n \"Increment entropy as a measure of complexity for time series.\"\n Entropy\n 18.1 (2016): 22.1.\n\n*** \"Correction on Liu, X.; Jiang, A.; Xu, N.; Xue, J. - Increment \n Entropy as a Measure of Complexity for Time Series,\n Entropy 2016, 18, 22.\" \n Entropy \n 18.4 (2016): 133.\n\n[2] Xiaofeng Liu, et al.,\n \"Appropriate use of the increment entropy for \n electrophysiological time series.\" \n Computers in biology and medicine \n 95 (2018): 13-23.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Base_Entropies/#EntropyHub._CoSiEn.CoSiEn","page":"Base Entropies","title":"EntropyHub._CoSiEn.CoSiEn","text":"CoSi, Bm = CoSiEn(Sig)\n\nReturns the cosine similarity entropy (CoSi) and the corresponding global probabilities estimated from the data sequence (Sig) using the default parameters: embedding dimension = 2, time delay = 1, angular threshold = .1, logarithm = base 2,\n\nCoSi, Bm = CoSiEn(Sig::AbstractArray{T,1} where T<:Real; m::Int=2, tau::Int=1, r::Real=.1, Logx::Real=2, Norm::Int=0)\n\nReturns the cosine similarity entropy (CoSi) estimated from the data sequence (Sig) using the specified 'keyword' arguments:\n\nArguments:\n\nm - Embedding Dimension, an integer > 1 \n\ntau - Time Delay, a positive integer \n\nr - Angular threshold, a value in range [0 < r < 1] \n\nLogx - Logarithm base, a positive scalar (enter 0 for natural log) \n\nNorm - Normalisation of Sig, one of the following integers: \n\n [0] no normalisation - default\n [1] normalises `Sig` by removing median(`Sig`)\n [2] normalises `Sig` by removing mean(`Sig`)\n [3] normalises `Sig` w.r.t. 
SD(`Sig`)\n [4] normalises `Sig` values to range [-1 1]\n\nSee also PhasEn, SlopEn, GridEn, MSEn, cMSEn\n\nReferences:\n\n[1] Theerasak Chanwimalueang and Danilo Mandic,\n \"Cosine similarity entropy: Self-correlation-based complexity\n analysis of dynamical systems.\"\n Entropy \n 19.12 (2017): 652.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Base_Entropies/#EntropyHub._PhasEn.PhasEn","page":"Base Entropies","title":"EntropyHub._PhasEn.PhasEn","text":"Phas = PhasEn(Sig)\n\nReturns the phase entropy (Phas) estimate of the data sequence (Sig) using the default parameters: angular partitions = 4, time delay = 1, logarithm = natural,\n\nPhas = PhasEn(Sig::AbstractArray{T,1} where T<:Real; K::Int=4, tau::Int=1, Logx::Real=exp(1), Norm::Bool=true, Plotx::Bool=false)\n\nReturns the phase entropy (Phas) estimate of the data sequence (Sig) using the specified 'keyword' arguments:\n\nArguments:\n\nK - Angular partitions (coarse graining), an integer > 1 \n\n *Note: Division of partitions begins along the positive x-axis. As this point is somewhat arbitrary, it is\n recommended to use even-numbered (preferably multiples of 4) partitions for sake of symmetry.\n\ntau - Time Delay, a positive integer \n\nLogx - Logarithm base, a positive scalar \n\nNorm - Normalisation of Phas value: \n\n [false] no normalisation\n [true] normalises w.r.t. 
the number of partitions Log(`K`)\n\nPlotx - When Plotx == true, returns Poincaré plot (default: false) \n\nSee also SampEn, ApEn, GridEn, MSEn, SlopEn, CoSiEn, BubbEn\n\nReferences:\n\n[1] Ashish Rohila and Ambalika Sharma,\n \"Phase entropy: a new complexity measure for heart rate\n variability.\" \n Physiological measurement\n 40.10 (2019): 105006.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Base_Entropies/#EntropyHub._SlopEn.SlopEn","page":"Base Entropies","title":"EntropyHub._SlopEn.SlopEn","text":"Slop = SlopEn(Sig)\n\nReturns the slope entropy (Slop) estimates for embedding dimensions [2, ..., m] of the data sequence (Sig) using the default parameters: embedding dimension = 2, time delay = 1, angular thresholds = [5 45], logarithm = base 2 \n\nSlop = SlopEn(Sig::AbstractArray{T,1} where T<:Real; m::Int=2, tau::Int=1, Lvls::AbstractArray{T,1} where T<:Real=[5, 45], Logx::Real=2, Norm::Bool=true)\n\nReturns the slope entropy (Slop) estimate of the data sequence (Sig) using the specified 'keyword' arguments:\n\nArguments:\n\nm - Embedding Dimension, an integer > 1 \t\n\n SlopEn returns estimates for each dimension [2,...,m]\n\ntau - Time Delay, a positive integer \n\nLvls - Angular thresholds, a vector of monotonically increasing values in the range [0 90] degrees.\n\nLogx - Logarithm base, a positive scalar (enter 0 for natural log) \n\nNorm - Normalisation of SlopEn value, a boolean operator: \n\n [false] no normalisation\n [true] normalises w.r.t. 
the number of patterns found (default)\n\nSee also PhasEn, GridEn, MSEn, CoSiEn, SampEn, ApEn\n\nReferences:\n\n[1] David Cuesta-Frau,\n \"Slope Entropy: A New Time Series Complexity Estimator Based on\n Both Symbolic Patterns and Amplitude Information.\" \n Entropy \n 21.12 (2019): 1167.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Base_Entropies/#EntropyHub._BubbEn.BubbEn","page":"Base Entropies","title":"EntropyHub._BubbEn.BubbEn","text":"Bubb, H = BubbEn(Sig)\n\nReturns the bubble entropy (Bubb) and the conditional Rényi entropy (H) estimates of dimension m = 2 from the data sequence (Sig) using the default parameters: embedding dimension = 2, time delay = 1, logarithm = natural \n\nBubb, H = BubbEn(Sig::AbstractArray{T,1} where T<:Real; m::Int=2, tau::Int=1, Logx::Real=exp(1))\n\nReturns the bubble entropy (Bubb) estimate of the data sequence (Sig) using the specified 'keyword' arguments:\n\nArguments:\n\nm - Embedding Dimension, an integer > 1 \n\n BubbEn returns estimates for each dimension [2,...,m]\n\ntau - Time Delay, a positive integer \n\nLogx - Logarithm base, a positive scalar \n\nSee also PhasEn, MSEn\n\nReferences:\n\n[1] George Manis, M.D. 
Aktaruzzaman and Roberto Sassi,\n \"Bubble entropy: An entropy almost free of parameters.\"\n IEEE Transactions on Biomedical Engineering\n 64.11 (2017): 2711-2718.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Base_Entropies/#EntropyHub._GridEn.GridEn","page":"Base Entropies","title":"EntropyHub._GridEn.GridEn","text":"GDE, GDR, _ = GridEn(Sig)\n\nReturns the gridded distribution entropy (GDE) and the gridded distribution rate (GDR) estimated from the data sequence (Sig) using the default parameters: grid coarse-grain = 3, time delay = 1, logarithm = base 2\n\nGDE, GDR, PIx, GIx, SIx, AIx = GridEn(Sig)\n\nIn addition to GDE and GDR, GridEn returns the following indices estimated for the data sequence (Sig) using the default parameters:\n\n [PIx] - Percentage of points below the line of identity (LI)\n [GIx] - Proportion of point distances above the LI\n [SIx] - Ratio of phase angles (w.r.t. LI) of the points above the LI\n [AIx] - Ratio of the cumulative area of sectors of points above the LI\n\nGDE, GDR, ..., = GridEn(Sig::AbstractArray{T,1} where T<:Real; m::Int=3, tau::Int=1, Logx::Real=exp(1), Plotx::Bool=false)\n\nReturns the gridded distribution entropy (GDE) estimated from the data sequence (Sig) using the specified 'keyword' arguments:\n\nArguments:\n\nm - Grid coarse-grain (m x m sectors), an integer > 1 \n\ntau - Time Delay, a positive integer \n\nLogx - Logarithm base, a positive scalar \n\nPlotx - When Plotx == true, returns gridded Poincaré plot and a bivariate histogram of the grid point distribution (default: false) \n\nSee also PhasEn, CoSiEn, SlopEn, BubbEn, MSEn\n\nReferences:\n\n[1] Chang Yan, et al.,\n \"Novel gridded descriptors of poincaré plot for analyzing \n heartbeat interval time-series.\" \n Computers in biology and medicine \n 109 (2019): 280-289.\n\n[2] Chang Yan, et al. 
\n \"Area asymmetry of heart rate variability signal.\" \n Biomedical engineering online \n 16.1 (2017): 1-14.\n\n[3] Alberto Porta, et al.,\n \"Temporal asymmetries of short-term heart period variability \n are linked to autonomic regulation.\" \n American Journal of Physiology-Regulatory, Integrative and \n Comparative Physiology \n 295.2 (2008): R550-R557.\n\n[4] C.K. Karmakar, A.H. Khandoker and M. Palaniswami,\n \"Phase asymmetry of heart rate variability signal.\" \n Physiological measurement \n 36.2 (2015): 303.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Base_Entropies/#EntropyHub._EnofEn.EnofEn","page":"Base Entropies","title":"EntropyHub._EnofEn.EnofEn","text":"EoE, AvEn, S2 = EnofEn(Sig)\n\nReturns the entropy of entropy (EoE), the average Shannon entropy (AvEn), and the number of levels (S2) across all windows estimated from the data sequence (Sig) using the default parameters: window length (samples) = 10, slices = 10, logarithm = natural, heartbeat interval range (xmin, xmax) = (min(Sig), max(Sig))\n\nEoE, AvEn, S2 = EnofEn(Sig::AbstractArray{T,1} where T<:Real; tau::Int=10, S::Int=10, Xrange::Tuple{Real,REal}, Logx::Real=exp(1))\n\nReturns the entropy of entropy (EoE) estimated from the data sequence (Sig) using the specified 'keyword' arguments:\n\nArguments:\n\ntau - Window length, an integer > 1 \n\nS - Number of slices (s1,s2), a two-element tuple of integers > 2 \n\nXrange - The min and max heartbeat interval, a two-element tuple where X[1] <= X[2]\n\nLogx - Logarithm base, a positive scalar \n\nSee also SampEn, MSEn, ApEn\n\nReferences:\n\n[1] Chang Francis Hsu, et al.,\n \"Entropy of entropy: Measurement of dynamical complexity for\n biological systems.\" \n Entropy \n 19.10 (2017): 550.\n\n\n\n\n\n","category":"function"},{"location":"Guide/Base_Entropies/#EntropyHub._AttnEn.AttnEn","page":"Base Entropies","title":"EntropyHub._AttnEn.AttnEn","text":"Av4, (Hxx,Hnn,Hxn,Hnx) = AttnEn(Sig)\n\nReturns the attention entropy (Av4) 
calculated as the average of the sub-entropies (Hxx,Hxn,Hnn,Hnx) estimated from the data sequence (Sig) using a base-2 logarithm.\n\nAv4, (Hxx, Hnn, Hxn, Hnx) = AttnEn(Sig::AbstractArray{T,1} where T<:Real; Logx::Real=2)\n\nReturns the attention entropy (Av4) and the sub-entropies (Hxx,Hnn,Hxn,Hnx) from the data sequence (Sig) where, Hxx: entropy of local-maxima intervals Hnn: entropy of local minima intervals Hxn: entropy of intervals between local maxima and subsequent minima Hnx: entropy of intervals between local minima and subsequent maxima\n\nArguments:\n\nLogx - Logarithm base, a positive scalar (Enter 0 for natural logarithm)\n\nSee also EnofEn, SpecEn, XSpecEn, PermEn, MSEn\n\nReferences:\n\n[1] Jiawei Yang, et al.,\n \"Classification of Interbeat Interval Time-series Using \n Attention Entropy.\" \n IEEE Transactions on Affective Computing \n (2020)\n\n\n\n\n\n","category":"function"},{"location":"Guide/Base_Entropies/#EntropyHub._RangEn.RangEn","page":"Base Entropies","title":"EntropyHub._RangEn.RangEn","text":"Rangx, A, B = RangEn(Sig)\n\nReturns the range entropy estimate (Rangx) and the number of matched state vectors (m: B, m+1: A) estimated from the data sequence (Sig) using the sample entropy algorithm and the following default parameters: embedding dimension = 2, time delay = 1, radius threshold = 0.2, logarithm = natural. 
\n\nRangx, A, B = RangEn(Sig, keyword = value, ...)\n\nReturns the range entropy estimates (Rangx) for dimensions = m estimated for the data sequence (Sig) using the specified keyword arguments:\n\nArguments:\n\nm - Embedding Dimension, a positive integer \n\ntau - Time Delay, a positive integer \n\nr - Radius Distance Threshold, a positive value between 0 and 1 \n\nMethodx - Base entropy method, either 'SampEn' [default] or 'ApEn' \n\nLogx - Logarithm base, a positive scalar \n\nSee also ApEn, SampEn, FuzzEn, MSEn\n\nReferences:\n\n[1] Omidvarnia, Amir, et al.\n \"Range entropy: A bridge between signal complexity and self-similarity\"\n Entropy \n 20.12 (2018): 962.\n \n[2] Joshua S Richman and J. Randall Moorman. \n \"Physiological time-series analysis using approximate entropy\n and sample entropy.\" \n American Journal of Physiology-Heart and Circulatory Physiology \n 2000\n\n\n\n\n\n","category":"function"},{"location":"Guide/Base_Entropies/#EntropyHub._DivEn.DivEn","page":"Base Entropies","title":"EntropyHub._DivEn.DivEn","text":"Div, CDEn, Bm = DivEn(Sig)\n\nReturns the diversity entropy (Div), the cumulative diversity entropy (CDEn), and the corresponding probabilities (Bm) estimated from the data sequence (Sig) using the default parameters: embedding dimension = 2, time delay = 1, #bins = 5, logarithm = natural,\n\nDiv, CDEn, Bm = DivEn(Sig::AbstractArray{T,1} where T<:Real; m::Int=2, tau::Int=1, r::Int=5, Logx::Real=exp(1))\n\nReturns the diversity entropy (Div) estimated from the data sequence (Sig) using the specified 'keyword' arguments:\n\nArguments:\n\nm - Embedding Dimension, an integer > 1 \n\ntau - Time Delay, a positive integer \n\nr - Histogram bins #: either \n\n * an integer [1 < `r`] representing the number of bins\n * a vector of 3 or more increasing values in range [-1 1] representing the bin edges including the rightmost edge.\n\nLogx - Logarithm base, a positive scalar (Enter 0 for natural logarithm)\n\nSee also CoSiEn, PhasEn, SlopEn, GridEn, 
MSEn\n\nReferences:\n\n[1] X. Wang, S. Si and Y. Li, \n \"Multiscale Diversity Entropy: A Novel Dynamical Measure for Fault \n Diagnosis of Rotating Machinery,\" \n IEEE Transactions on Industrial Informatics,\n vol. 17, no. 8, pp. 5419-5429, Aug. 2021\n \n[2] Y. Wang, M. Liu, Y. Guo, F. Shu, C. Chen and W. Chen, \n \"Cumulative Diversity Pattern Entropy (CDEn): A High-Performance, \n Almost-Parameter-Free Complexity Estimator for Nonstationary Time Series,\"\n IEEE Transactions on Industrial Informatics\n vol. 19, no. 9, pp. 9642-9653, Sept. 2023\n\n\n\n\n\n","category":"function"},{"location":"Examples/Example1/#Example-1:-Sample-Entropy","page":"Ex.1: Sample Entropy","title":"Example 1: Sample Entropy","text":"","category":"section"},{"location":"Examples/Example1/","page":"Ex.1: Sample Entropy","title":"Ex.1: Sample Entropy","text":"Import a signal of normally distributed random numbers [mean = 0; SD = 1], and calculate the sample entropy for each embedding dimension (m) from 0 to 4.","category":"page"},{"location":"Examples/Example1/","page":"Ex.1: Sample Entropy","title":"Ex.1: Sample Entropy","text":"using EntropyHub # hide\nX = ExampleData(\"gaussian\");\nSamp, _ = SampEn(X, m = 4);\nSamp # hide","category":"page"},{"location":"Examples/Example1/","page":"Ex.1: Sample Entropy","title":"Ex.1: Sample Entropy","text":"Select the last value to get the sample entropy for m = 4.","category":"page"},{"location":"Examples/Example1/","page":"Ex.1: Sample Entropy","title":"Ex.1: Sample Entropy","text":"using EntropyHub # hide\nX = ExampleData(\"gaussian\"); # hide\nSamp, _ = SampEn(X, m = 4); # hide\nSamp[end]","category":"page"},{"location":"Examples/Example1/","page":"Ex.1: Sample Entropy","title":"Ex.1: Sample Entropy","text":"Calculate the sample entropy for each embedding dimension (m) from 0 to 4 with a time delay (tau) of 2 samples.","category":"page"},{"location":"Examples/Example1/","page":"Ex.1: Sample Entropy","title":"Ex.1: Sample Entropy","text":"using 
EntropyHub # hide\nX = ExampleData(\"gaussian\"); # hide\nSamp, Phi1, Phi2 = SampEn(X, m = 4, tau = 2)","category":"page"},{"location":"Examples/Example1/","page":"Ex.1: Sample Entropy","title":"Ex.1: Sample Entropy","text":"Import a signal of uniformly distributed random numbers in the range [-1, 1] and calculate the sample entropy for an embedding dimension (m) of 5, a time delay of 2, and a threshold radius of 0.075. Return the conditional probability (Vcp) and the number of overlapping matching vector pairs of lengths m+1 (Ka) and m (Kb), respectively.","category":"page"},{"location":"Examples/Example1/","page":"Ex.1: Sample Entropy","title":"Ex.1: Sample Entropy","text":"using EntropyHub # hide\nX = ExampleData(\"uniform\"); # hide\nSamp, _, _, Vcp_Ka_Kb = SampEn(X, m = 5, tau = 2, r = 0.075, Vcp = true)\nVcp, Ka, Kb = Vcp_Ka_Kb\nprintln(\"Vcp = \", Vcp) # hide\nprintln(\"Ka = \", Ka) # hide\nprintln(\"Kb = \", Kb) # hide","category":"page"},{"location":"Examples/Example3/#Example-3:-Phase-Entropy-w/-Second-Order-Difference-Plot","page":"Ex.3: Phase Entropy","title":"Example 3: Phase Entropy w/ Second Order Difference Plot","text":"","category":"section"},{"location":"Examples/Example3/","page":"Ex.3: Phase Entropy","title":"Ex.3: Phase Entropy","text":"Import the x and y components of the Henon system of equations.","category":"page"},{"location":"Examples/Example3/","page":"Ex.3: Phase Entropy","title":"Ex.3: Phase Entropy","text":"Data = ExampleData(\"henon\");\n\nusing Plots\nscatter(Data[:,1], Data[:,2],\nmarkercolor = \"green\", markerstrokecolor = \"black\",\nmarkersize = 3, background_color = \"black\",grid = false)","category":"page"},{"location":"Examples/Example3/","page":"Ex.3: Phase Entropy","title":"Ex.3: Phase Entropy","text":"(Image: Henon)","category":"page"},{"location":"Examples/Example3/","page":"Ex.3: Phase Entropy","title":"Ex.3: Phase Entropy","text":"Calculate the phase entropy of the y-component in bits (logarithm base 2) without 
normalization using 7 angular partitions and return the second-order difference plot.","category":"page"},{"location":"Examples/Example3/","page":"Ex.3: Phase Entropy","title":"Ex.3: Phase Entropy","text":"using EntropyHub # hide\nData = ExampleData(\"henon\"); # hide\nY = Data[:,2];\nPhas = PhasEn(Y, K = 7, Norm = false, Logx = 2, Plotx = true)","category":"page"},{"location":"Examples/Example3/","page":"Ex.3: Phase Entropy","title":"Ex.3: Phase Entropy","text":"(Image: Phas1)","category":"page"},{"location":"Examples/Example3/","page":"Ex.3: Phase Entropy","title":"Ex.3: Phase Entropy","text":"Calculate the phase entropy of the x-component using 11 angular partitions, a time delay of 2, and return the second-order difference plot.","category":"page"},{"location":"Examples/Example3/","page":"Ex.3: Phase Entropy","title":"Ex.3: Phase Entropy","text":"using EntropyHub # hide\nData = ExampleData(\"henon\"); # hide\nX = Data[:,1];\nPhas = PhasEn(X, K = 11, tau = 2, Plotx = true)","category":"page"},{"location":"Examples/Example3/","page":"Ex.3: Phase Entropy","title":"Ex.3: Phase Entropy","text":"(Image: Phas2)","category":"page"},{"location":"","page":"Home","title":"Home","text":"CurrentModule = EntropyHub","category":"page"},{"location":"","page":"Home","title":"Home","text":"(Image: EH4J)","category":"page"},{"location":"#EntropyHub","page":"Home","title":"EntropyHub","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"An Open-Source Toolkit For Entropic Time Series Analysis","category":"page"},{"location":"#Introduction","page":"Home","title":"Introduction","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"This toolkit provides a wide range of functions to calculate different entropy statistics. There is an ever-growing range of information-theoretic and dynamical systems entropy measures presented in the scientific literature. 
The goal of EntropyHub.jl is to integrate the many established entropy methods into one open-source package with extensive documentation and a consistent syntax [that is also accessible in multiple programming languages (Matlab, Python)].","category":"page"},{"location":"#About","page":"Home","title":"About","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Information and uncertainty can be regarded as two sides of the same coin: the more uncertainty there is, the more information we gain by removing that uncertainty. In the context of information and probability theory, Entropy quantifies that uncertainty. ","category":"page"},{"location":"","page":"Home","title":"Home","text":"Various measures have been derived to estimate entropy (uncertainty) from discrete time series, each seeking to best capture the uncertainty of the system under examination. This has resulted in many entropy statistics, from approximate entropy and sample entropy to multiscale sample entropy and refined-composite multiscale cross-sample entropy.","category":"page"},{"location":"","page":"Home","title":"Home","text":"The goal of EntropyHub is to provide a comprehensive set of functions with a simple and consistent syntax that allows the user to adjust parameters at the command line, enabling everything from basic to advanced entropy methods to be implemented with ease.","category":"page"},{"location":"","page":"Home","title":"Home","text":"warning: NOTE:\nIt is important to clarify that the entropy functions herein described estimate entropy in the context of probability theory and information theory as defined by Shannon, and not thermodynamic or other entropies from classical physics.","category":"page"},{"location":"#Installation","page":"Home","title":"Installation","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Using the Julia REPL:","category":"page"},{"location":"","page":"Home","title":"Home","text":"julia> using 
Pkg; Pkg.add(\"EntropyHub\")","category":"page"},{"location":"","page":"Home","title":"Home","text":"or","category":"page"},{"location":"","page":"Home","title":"Home","text":"julia> ] \npkg> add EntropyHub","category":"page"},{"location":"","page":"Home","title":"Home","text":"To get the latest version of EntropyHub directly from GitHub:","category":"page"},{"location":"","page":"Home","title":"Home","text":"julia> ] \npkg> add https://github.com/MattWillFlood/EntropyHub.jl","category":"page"},{"location":"#Citing","page":"Home","title":"Citing","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"EntropyHub is licensed under the Apache License (Version 2.0) and is free to use by all on condition that the following reference be included on any outputs realized using the software:","category":"page"},{"location":"","page":"Home","title":"Home","text":"Matthew W. Flood (2021),\nEntropyHub: An Open-Source Toolkit for Entropic Time Series Analysis,\nPLoS ONE 16(11):e0259448\nDOI: 10.1371/journal.pone.0259448\nwww.EntropyHub.xyz ","category":"page"},{"location":"","page":"Home","title":"Home","text":"__________________________________________________________________","category":"page"},{"location":"","page":"Home","title":"Home","text":" © Copyright 2024 Matthew W. 
Flood, EntropyHub\n Licensed under the Apache License, Version 2.0 (the \"License\");\n you may not use this file except in compliance with the License.\n You may obtain a copy of the License at\n \n http://www.apache.org/licenses/LICENSE-2.0\n \n Unless required by applicable law or agreed to in writing, software\n distributed under the License is distributed on an \"AS IS\" BASIS,\n WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n See the License for the specific language governing permissions and\n limitations under the License.\n \n For Terms of Use see https://github.com/MattWillFlood/EntropyHub","category":"page"},{"location":"","page":"Home","title":"Home","text":"If you find this package useful, please consider starring it on GitHub and Julia Packages (or MatLab File Exchange and PyPI). This helps us to gauge user satisfaction.","category":"page"},{"location":"#Functions","page":"Home","title":"Functions","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"EntropyHub functions fall into 5 categories: ","category":"page"},{"location":"","page":"Home","title":"Home","text":"Base functions for estimating the entropy of a single univariate time series.\nCross functions for estimating the entropy between two univariate time series.\nBidimensional functions for estimating the entropy of a two-dimensional univariate matrix.\nMultiscale functions for estimating the multiscale entropy of a single univariate time series using any of the Base entropy functions.\nMultiscale Cross functions for estimating the multiscale entropy between two univariate time series using any of the Cross-entropy functions.","category":"page"},{"location":"#Contact","page":"Home","title":"Contact","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"For general queries and information about EntropyHub, contact: info@entropyhub.xyz 
","category":"page"},{"location":"","page":"Home","title":"Home","text":"If you have any questions or need help using the package, please contact us at: help@entropyhub.xyz ","category":"page"},{"location":"","page":"Home","title":"Home","text":"If you notice or identify any issues, please do not hesitate to contact us at: fix@entropyhub.xyz ","category":"page"},{"location":"","page":"Home","title":"Home","text":"We will do our best to help you with any relevant issues that you may have.","category":"page"},{"location":"","page":"Home","title":"Home","text":"If you come across any errors or technical issues, you can raise these under the issues tab on the EntropyHub.jl GitHub page. Similarly, if you have any suggestions or recommendations on how this package can be improved, please let us know.","category":"page"},{"location":"","page":"Home","title":"Home","text":"Thank you for using EntropyHub,","category":"page"},{"location":"","page":"Home","title":"Home","text":"Matt","category":"page"},{"location":"","page":"Home","title":"Home","text":" ___ _ _ _____ _____ ____ ____ _ _ \n | _|| \\ | ||_ _|| \\| || || \\ / | ___________ \n | \\_ | \\| | | | | __/| || __| \\ \\_/ / / _______ \\\n | _|| \\ \\ | | | | \\ | || | \\ / | / ___ \\ |\n | \\_ | |\\ | | | | |\\ \\ | || | | | | | / \\ | | \n |___||_| \\_| |_| |_| \\_||____||_| |_| _|_|__\\___/ | | \n _ _ _ _ ____ / |__\\______\\/ | \n | | | || | | || \\ An open-source | /\\______\\__|_/ \n | |_| || | | || | toolkit for | | / \\ | | \n | _ || | | || \\ entropic time- | | \\___/ | | \n | | | || |_| || \\ series analysis | \\_______/ |\n |_| |_|\\_____/|_____/ \\___________/","category":"page"},{"location":"","page":"Home","title":"Home","text":"Documentation for EntropyHub.","category":"page"},{"location":"","page":"Home","title":"Home","text":"","category":"page"},{"location":"Examples/Example7/#Example-7:-Refined-Multiscale-Sample-Entropy","page":"Ex.7: Refined Multiscale [Sample] Entropy","title":"Example 7: Refined 
Multiscale Sample Entropy","text":"","category":"section"},{"location":"Examples/Example7/","page":"Ex.7: Refined Multiscale [Sample] Entropy","title":"Ex.7: Refined Multiscale [Sample] Entropy","text":"Import a signal of uniformly distributed pseudorandom integers in the range [1, 8] and create a multiscale entropy object with the following parameters: EnType = SampEn(), embedding dimension = 4, radius threshold = 1.25","category":"page"},{"location":"Examples/Example7/","page":"Ex.7: Refined Multiscale [Sample] Entropy","title":"Ex.7: Refined Multiscale [Sample] Entropy","text":"using EntropyHub # hide\nX = ExampleData(\"randintegers\");\nMobj = MSobject(SampEn, m = 4, r = 1.25)\nMobj # hide","category":"page"},{"location":"Examples/Example7/","page":"Ex.7: Refined Multiscale [Sample] Entropy","title":"Ex.7: Refined Multiscale [Sample] Entropy","text":"Calculate the refined multiscale sample entropy and the complexity index (Ci) over 5 temporal scales using a 3rd order Butterworth filter with a normalised corner frequency of 0.6 at each temporal scale (τ), where the radius threshold value (r) specified by Mobj becomes scaled by the median absolute deviation of the filtered signal at each scale.","category":"page"},{"location":"Examples/Example7/","page":"Ex.7: Refined Multiscale [Sample] Entropy","title":"Ex.7: Refined Multiscale [Sample] Entropy","text":"using EntropyHub # hide\nX = ExampleData(\"randintegers\"); # hide\nMobj = MSobject(SampEn, m = 4, r = 1.25) # hide\nMSx, Ci = rMSEn(X, Mobj, Scales = 5, F_Order = 3, F_Num = 0.6, RadNew = 4)","category":"page"}]
}