<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="author" content="Jonathan Landy" />
<meta property="og:type" content="article" />
<meta name="twitter:card" content="summary">
<meta name="keywords" content=", Statistics, " />
<meta property="og:title" content="Maximum-likelihood asymptotics "/>
<meta property="og:url" content="./maximum-likelihood-asymptotics" />
<meta property="og:description" content="In this post, we review two facts about maximum-likelihood estimators: 1) They are consistent, meaning that they converge to the correct values given a large number of samples, \(N\), and 2) They satisfy the Cramer-Rao lower bound for unbiased parameter estimates in this same limit — that is, they have the …" />
<meta property="og:site_name" content="EFAVDB" />
<meta property="og:article:author" content="Jonathan Landy" />
<meta property="og:article:published_time" content="2015-12-30T00:01:00-08:00" />
<meta name="twitter:title" content="Maximum-likelihood asymptotics ">
<meta name="twitter:description" content="In this post, we review two facts about maximum-likelihood estimators: 1) They are consistent, meaning that they converge to the correct values given a large number of samples, \(N\), and 2) They satisfy the Cramer-Rao lower bound for unbiased parameter estimates in this same limit — that is, they have the …">
<title>Maximum-likelihood asymptotics · EFAVDB
</title>
<link href="//netdna.bootstrapcdn.com/twitter-bootstrap/2.3.2/css/bootstrap-combined.min.css" rel="stylesheet">
<link href="https://fonts.googleapis.com/css?family=Fira+Sans:300,400,700" rel="stylesheet" type='text/css' />
<link href="https://fonts.googleapis.com/css?family=Cardo:400,700" rel="stylesheet" type='text/css' />
<link rel="stylesheet" type="text/css" href="./theme/css/elegant.prod.css" media="screen">
<link rel="stylesheet" type="text/css" href="./theme/css/custom.css" media="screen">
<link rel="apple-touch-icon" sizes="180x180" href="./images/apple-touch-icon.png">
<link rel="icon" type="image/png" sizes="32x32" href="./images/favicon-32x32.png">
<link rel="icon" type="image/png" sizes="16x16" href="./images/favicon-16x16.png">
<link rel="manifest" href="./images/site.webmanifest">
<link rel="mask-icon" href="./images/safari-pinned-tab.svg" color="#5bbad5">
<meta name="msapplication-TileColor" content="#da532c">
<meta name="theme-color" content="#ffffff">
</head>
<body>
<div id="content">
<div class="navbar navbar-static-top">
<div class="navbar-inner">
<div class="container-fluid">
<a class="btn btn-navbar" data-toggle="collapse" data-target=".nav-collapse">
<span class="icon-bar"></span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
</a>
<a class="brand" href="./"><span class=site-name>EFAVDB</span></a>
<div class="nav-collapse collapse">
<ul class="nav pull-right top-menu">
<li >
<a href=".">Home</a>
</li>
<li ><a href="./pages/authors.html">Authors</a></li>
<li ><a href="./categories.html">Categories</a></li>
<li ><a href="./tags.html">Tags</a></li>
<li ><a href="./archives.html">Archives</a></li>
<li><form class="navbar-search" action="./search.html" onsubmit="return validateForm(this.elements['q'].value);"> <input type="text" class="search-query" placeholder="Search" name="q" id="tipue_search_input"></form></li>
</ul>
</div>
</div>
</div>
</div>
<div class="container-fluid">
<div class="row-fluid">
<div class="span1"></div>
<div class="span10">
<article itemscope>
<div class="row-fluid">
<header class="page-header span8 offset2">
<h1>
<a href="./maximum-likelihood-asymptotics">
Maximum-likelihood asymptotics
</a>
</h1>
</header>
<div class="span2"></div>
</div>
<div class="row-fluid">
<div class="span8 offset2 article-content">
<p>In this post, we review two facts about maximum-likelihood estimators: 1) They are consistent, meaning that they converge to the correct values given a large number of samples, <span class="math">\(N\)</span>, and 2) They saturate the <a href="http://efavdb.github.io/multivariate-cramer-rao-bound">Cramer-Rao</a> lower bound for unbiased parameter estimates in this same limit — that is, they have the lowest possible variance of any unbiased estimator in the <span class="math">\(N\gg 1\)</span> limit.</p>
<h3>Introduction</h3>
<p>We begin with a simple example maximum-likelihood inference problem: Suppose one has obtained <span class="math">\(N\)</span> independent samples <span class="math">\(\{x_1, x_2, \ldots, x_N\}\)</span> from a Gaussian distribution of unknown mean <span class="math">\(\mu\)</span> and variance <span class="math">\(\sigma^2\)</span>. In order to obtain a maximum-likelihood estimate for these parameters, one asks which <span class="math">\(\hat{\mu}\)</span> and <span class="math">\(\hat{\sigma}^2\)</span> would be most likely to generate the samples observed. To find these, we first write down the probability of observing the samples, given our model. This is simply
</p>
<div class="math">$$
P(\{x_1, x_2, \ldots, x_N\} \vert \mu, \sigma^2) =\\ \exp\left [ \sum_{i=1}^N \left (-\frac{1}{2} \log (2 \pi \sigma^2) -\frac{1}{2 \sigma^2} (x_i - \mu)^2\right ) \right ]. \tag{1} \label{1}
$$</div>
<p>
To obtain the maximum-likelihood estimates, we maximize (\ref{1}): Setting its derivatives with respect to <span class="math">\(\mu\)</span> and <span class="math">\(\sigma^2\)</span> to zero and solving gives
</p>
<div class="math">\begin{align}\label{mean}
\hat{\mu} &= \frac{1}{N} \sum_i x_i \tag{2} \\
\hat{\sigma}^2 &= \frac{1}{N} \sum_i (x_i - \hat{\mu})^2. \tag{3} \label{varhat}
\end{align}</div>
<p>
These are the mean and variance values most likely to have generated our observation set <span class="math">\(\{x_i\}\)</span>. Both solutions are functions of the random samples, so <span class="math">\(\hat{\mu}\)</span> and <span class="math">\(\hat{\sigma}^2\)</span> are themselves random variables, changing with each sample set that happens to be observed. Their distributions can be characterized by their means, variances, etc.</p>
<p>The average squared error of a parameter estimator is determined entirely by its bias and variance — see eq. (2) of this <a href="http://efavdb.github.io/bayesian-linear-regression">prior post</a>. Now, one can show that the <span class="math">\(\hat{\mu}\)</span> estimate of (\ref{mean}) is unbiased, but this is not the case for the variance estimator (\ref{varhat}) — one should (famously) divide by <span class="math">\(N-1\)</span> instead of <span class="math">\(N\)</span> here to obtain an unbiased estimator<span class="math">\(^1\)</span>. This shows that maximum-likelihood estimators need not be unbiased. Why then are they so popular? One reason is that these estimators are asymptotically unbiased: their bias vanishes as the sample size <span class="math">\(N\)</span> grows large. Further, in this same limit, these estimators achieve the minimum possible variance of any unbiased parameter estimate — as set by the fundamental <a href="http://efavdb.github.io/multivariate-cramer-rao-bound">Cramer-Rao</a> bound. The purpose of this post is to review simple proofs of these latter two facts about maximum-likelihood estimators<span class="math">\(^2\)</span>.</p>
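<p>Before turning to the proofs, a quick numerical illustration of the bias just discussed may be helpful. The following is a minimal Python sketch (the "true" parameter values, sample size, and trial count below are arbitrary choices for illustration): it repeatedly draws <span class="math">\(N\)</span> Gaussian samples, forms the estimates (2) and (3), and averages over trials. The mean estimate averages to <span class="math">\(\mu\)</span>, while the variance estimate averages to roughly <span class="math">\(\frac{N-1}{N}\sigma^2\)</span>.</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
mu_true, sigma2_true = 1.0, 4.0  # arbitrary "true" parameters for the simulation
N, n_trials = 10, 100000

mu_hats, sigma2_hats = [], []
for _ in range(n_trials):
    x = rng.normal(mu_true, np.sqrt(sigma2_true), size=N)
    mu_hat = x.mean()                        # estimate (2)
    sigma2_hat = ((x - mu_hat) ** 2).mean()  # estimate (3), note the divide-by-N
    mu_hats.append(mu_hat)
    sigma2_hats.append(sigma2_hat)

print(np.mean(mu_hats))      # close to mu_true = 1.0: unbiased
print(np.mean(sigma2_hats))  # close to (N - 1) / N * sigma2_true = 3.6: biased low
</code></pre>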
<h3>Consistency</h3>
<p>Let <span class="math">\(P(x \vert \theta^*)\)</span> be some distribution characterized by a parameter <span class="math">\(\theta^*\)</span> that is unknown. We will show that the maximum-likelihood estimator converges to <span class="math">\(\theta^*\)</span> when <span class="math">\(N\)</span> is large: As in (\ref{1}), the maximum-likelihood solution is that <span class="math">\(\theta\)</span> that maximizes
</p>
<div class="math">$$\tag{4} \label{4}
J \equiv \frac{1}{N}\sum_{i=1}^N \log P(x_i \vert \theta),
$$</div>
<p>
where the <span class="math">\(\{x_i\}\)</span> are the independent samples taken from <span class="math">\(P(x \vert \theta^*)\)</span>. By the law of large numbers, when <span class="math">\(N\)</span> is large, this average over the samples converges to its population mean. In other words,
</p>
<div class="math">$$\tag{5}
\lim_{N \to \infty}J \rightarrow \int_x P(x \vert \theta^*) \log P(x \vert \theta) dx.
$$</div>
<p>
We will show that <span class="math">\(\theta^*\)</span> is the <span class="math">\(\theta\)</span> value that maximizes the above. We can do this directly, writing
</p>
<div class="math">$$
\begin{align}
J(\theta) - J(\theta^*) & = \int_x P(x \vert \theta^*) \log \left ( \frac{P(x \vert \theta) }{P(x \vert \theta^*)}\right) \\
& \leq \int_x P(x \vert \theta^*) \left ( \frac{P(x \vert \theta) }{P(x \vert \theta^*)} - 1 \right) \\
& = \int_x P(x \vert \theta) - P(x \vert \theta^*) = 1 - 1 = 0. \tag{6} \label{6}
\end{align}
$$</div>
<p>
Here, we have used <span class="math">\(\log t \leq t-1\)</span> in the second line. Rearranging the above shows that <span class="math">\(J(\theta^*) \geq J(\theta)\)</span> for all <span class="math">\(\theta\)</span> when <span class="math">\(N \gg 1\)</span>, meaning that <span class="math">\(J\)</span> is maximized at <span class="math">\(\theta^*\)</span>. That is, the maximum-likelihood estimator <span class="math">\(\hat{\theta} \to \theta^*\)</span> in this limit<span class="math">\(^3\)</span>.</p>
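<p>Consistency is easy to see numerically. The following minimal Python sketch uses an exponential model <span class="math">\(P(x \vert \theta) = \theta e^{-\theta x}\)</span> (an arbitrary illustrative choice, not tied to the argument above), for which maximizing (4) gives <span class="math">\(\hat{\theta} = 1 / \bar{x}\)</span>; the estimate approaches <span class="math">\(\theta^*\)</span> as <span class="math">\(N\)</span> grows.</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
theta_star = 2.0  # arbitrary true rate for P(x | theta) = theta * exp(-theta * x)

for N in [10, 100, 10000, 1000000]:
    x = rng.exponential(scale=1.0 / theta_star, size=N)
    theta_hat = 1.0 / x.mean()  # maximizes (4) for this model
    print(N, theta_hat)         # approaches theta_star = 2.0 as N grows
</code></pre>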
<h3>Optimal variance</h3>
<p>To derive the variance of a general maximum-likelihood estimator, we will see how its average value changes upon introduction of a small Bayesian prior, <span class="math">\(P(\theta) \sim \exp(\Lambda \theta)\)</span>. The trick will be to evaluate the change in two separate ways — this takes a few lines, but is quite straightforward. In the first approach, we do a direct maximization: The quantity to be maximized is now
</p>
<div class="math">$$ \label{7}
J = \sum_{i=1}^N \log P(x_i \vert \theta) + \Lambda \theta. \tag{7}
$$</div>
<p>
Because we take <span class="math">\(\Lambda\)</span> small, we can use a Taylor expansion to find the new solution, writing
</p>
<div class="math">$$ \label{8}
\hat{\theta} = \theta^* + \theta_1 \Lambda + O(\Lambda^2). \tag{8}
$$</div>
<p>
Setting the derivative of (\ref{7}) to zero, with <span class="math">\(\theta\)</span> given by its value in (\ref{8}), we obtain
</p>
<div class="math">$$
\sum_{i=1}^N \partial_{\theta} \left . \log P(x_i \vert \theta) \right \vert_{\theta^*} + \\ \sum_{i=1}^N \partial_{\theta}^2 \left . \log P(x_i \vert \theta) \right \vert_{\theta^*} \times \theta_1 \Lambda + \Lambda + O(\Lambda^2) = 0. \tag{9} \label{9}
$$</div>
<p>
The first term here goes to zero at large <span class="math">\(N\)</span>, as above. Setting the terms at <span class="math">\(O(\Lambda^1)\)</span> to zero gives
</p>
<div class="math">$$
\theta_1 = - \frac{1}{ \sum_{i=1}^N \partial_{\theta}^2 \left . \log P(x_i \vert \theta) \right \vert_{\theta^*} }. \tag{10} \label{10}
$$</div>
<p>
Plugging this back into (\ref{8}) gives the first order correction to <span class="math">\(\hat{\theta}\)</span> due to the perturbation. Next, as an alternative approach, we evaluate the change in <span class="math">\(\theta\)</span> by maximizing the <span class="math">\(P(\theta)\)</span> distribution, expanding about its unperturbed global maximum, <span class="math">\(\theta^*\)</span>: We write, formally,
</p>
<div class="math">$$\tag{11} \label{11}
P(\theta) = e^{ - a_0 - a_2 (\theta - \theta^*)^2 - a_3 (\theta - \theta^*)^3 + \ldots + \Lambda \theta}.
$$</div>
<p>
Differentiating to maximize (\ref{11}), and again assuming a solution of form (\ref{8}), we obtain
</p>
<div class="math">$$\label{12} \tag{12}
-2 a_2 \times \theta_1 \Lambda + \Lambda + O(\Lambda^2) = 0 \ \ \to \ \ \theta_1 = \frac{1}{2 a_2}.
$$</div>
<p>
We now require consistency between our two approaches, equating (\ref{10}) and (\ref{12}). This gives an expression for <span class="math">\(a_2\)</span>. Plugging this back into (\ref{11}) then gives (for the unperturbed distribution)
</p>
<div class="math">$$\tag{13} \label{13}
P(\theta) = \mathcal{N} \exp \left [ N \frac{ \langle \partial_{\theta}^2 \left . \log P(x, \theta) \right \vert_{\theta^*} \rangle }{2} (\theta - \theta^*)^2 + \ldots \right].
$$</div>
<p>
Using this Gaussian approximation<span class="math">\(^4\)</span>, we can now read off the large <span class="math">\(N\)</span> variance of <span class="math">\(\hat{\theta}\)</span> as
</p>
<div class="math">$$\tag{14} \label{14}
var(\hat{\theta}) = - \frac{1}{N} \times \frac{1}{\langle \partial_{\theta}^2 \left . \log P(x, \theta) \right \vert_{\theta^*} \rangle }.
$$</div>
<p>
This is the lowest possible value for any unbiased estimator, as set by the Cramer-Rao bound. The proof shows that maximum-likelihood estimators always saturate this bound, in the large <span class="math">\(N\)</span> limit — a remarkable result. We discuss the intuitive meaning of the Cramer-Rao bound in a <a href="http://efavdb.github.io/multivariate-cramer-rao-bound">prior post</a>.</p>
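<p>As a numerical check of (14), consider again the exponential model <span class="math">\(P(x \vert \theta) = \theta e^{-\theta x}\)</span> used in the sketch above (again an arbitrary illustrative choice). Here <span class="math">\(\langle \partial_{\theta}^2 \log P \rangle = -1/\theta^{*2}\)</span>, so (14) predicts <span class="math">\(var(\hat{\theta}) \approx \theta^{*2}/N\)</span>; the sketch below compares this prediction to the empirical variance of <span class="math">\(\hat{\theta} = 1/\bar{x}\)</span> over many simulated sample sets.</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
theta_star, N, n_trials = 2.0, 1000, 20000

theta_hats = [
    1.0 / rng.exponential(scale=1.0 / theta_star, size=N).mean()
    for _ in range(n_trials)
]
empirical_var = np.var(theta_hats)
cramer_rao = theta_star ** 2 / N  # eq. (14), with the averaged second derivative = -1 / theta_star**2

print(empirical_var)  # roughly 0.004
print(cramer_rao)     # 0.004: the two nearly agree at large N
</code></pre>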
<h3>Footnotes</h3>
<p>[1] To see that (\ref{varhat}) is biased, we just need to evaluate the average of <span class="math">\(\sum_i (x_i - \hat{\mu})^2\)</span>. This is</p>
<div class="math">$$
\overline{\sum_i x_i^2 - 2 \sum_{i,j} \frac{x_i x_j}{N} + \sum_{i,j,k} \frac{x_j x_k}{N^2}} = N \overline{x^2} - (N-1) \overline{x}^2 - \overline{x^2} \\
= (N-1) \left ( \overline{x^2} - \overline{x}^2 \right) \equiv (N-1) \sigma^2.
$$</div>
<p>
Dividing through by <span class="math">\(N\)</span>, we see that <span class="math">\(\overline{\hat{\sigma}^2} = \left(\frac{N-1}{N}\right)\sigma^2\)</span>. This differs from the true variance <span class="math">\(\sigma^2\)</span> for any finite <span class="math">\(N\)</span>: the estimator is biased, but the bias goes to zero at large <span class="math">\(N\)</span>.</p>
<p>[2] The consistency proof is taken from lecture notes by D. Panchenko, see <a href="http://ocw.mit.edu/courses/mathematics/18-443-statistics-for-applications-fall-2006/lecture-notes/lecture3.pdf">here</a>. Professor Panchenko is quite famous for having proven the correctness of the Parisi ansatz in replica theory. Our variance proof is original — please let us know if you have seen it elsewhere. Note that it can also be easily extended to derive the covariance matrix of a set of maximum-likelihood estimators that are jointly distributed — we cover only the scalar case here, for simplicity.</p>
<p>[3] The proof here actually only shows that there is no <span class="math">\(\theta\)</span> that gives larger likelihood than <span class="math">\(\theta^*\)</span> in the large <span class="math">\(N\)</span> limit. However, for some problems, it is possible that more than one <span class="math">\(\theta\)</span> maximizes the likelihood. A trivial example is given by the case where the distribution is actually only a function of <span class="math">\((\theta - \theta_0)^2\)</span>. In this case, both values <span class="math">\(\theta_0 \pm (\theta^* - \theta_0)\)</span> will necessarily maximize the likelihood.</p>
<p>[4] It’s a simple matter to carry this analysis further, including the cubic and higher order terms in the expansion (\ref{11}). These lead to correction terms for (\ref{14}), smaller in magnitude than the leading term given there. These correction terms become important when <span class="math">\(N\)</span> is small.</p>
<script type="text/javascript">if (!document.getElementById('mathjaxscript_pelican_#%@#$@#')) {
var align = "center",
indent = "0em",
linebreak = "false";
if (false) {
align = (screen.width < 768) ? "left" : align;
indent = (screen.width < 768) ? "0em" : indent;
linebreak = (screen.width < 768) ? 'true' : linebreak;
}
var mathjaxscript = document.createElement('script');
mathjaxscript.id = 'mathjaxscript_pelican_#%@#$@#';
mathjaxscript.type = 'text/javascript';
mathjaxscript.src = 'https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.3/latest.js?config=TeX-AMS-MML_HTMLorMML';
var configscript = document.createElement('script');
configscript.type = 'text/x-mathjax-config';
configscript[(window.opera ? "innerHTML" : "text")] =
"MathJax.Hub.Config({" +
" config: ['MMLorHTML.js']," +
" TeX: { extensions: ['AMSmath.js','AMSsymbols.js','noErrors.js','noUndefined.js'], equationNumbers: { autoNumber: 'none' } }," +
" jax: ['input/TeX','input/MathML','output/HTML-CSS']," +
" extensions: ['tex2jax.js','mml2jax.js','MathMenu.js','MathZoom.js']," +
" displayAlign: '"+ align +"'," +
" displayIndent: '"+ indent +"'," +
" showMathMenu: true," +
" messageStyle: 'normal'," +
" tex2jax: { " +
" inlineMath: [ ['\\\\(','\\\\)'] ], " +
" displayMath: [ ['$$','$$'] ]," +
" processEscapes: true," +
" preview: 'TeX'," +
" }, " +
" 'HTML-CSS': { " +
" availableFonts: ['STIX', 'TeX']," +
" preferredFont: 'STIX'," +
" styles: { '.MathJax_Display, .MathJax .mo, .MathJax .mi, .MathJax .mn': {color: 'inherit ! important'} }," +
" linebreaks: { automatic: "+ linebreak +", width: '90% container' }," +
" }, " +
"}); " +
"if ('default' !== 'default') {" +
"MathJax.Hub.Register.StartupHook('HTML-CSS Jax Ready',function () {" +
"var VARIANT = MathJax.OutputJax['HTML-CSS'].FONTDATA.VARIANT;" +
"VARIANT['normal'].fonts.unshift('MathJax_default');" +
"VARIANT['bold'].fonts.unshift('MathJax_default-bold');" +
"VARIANT['italic'].fonts.unshift('MathJax_default-italic');" +
"VARIANT['-tex-mathit'].fonts.unshift('MathJax_default-italic');" +
"});" +
"MathJax.Hub.Register.StartupHook('SVG Jax Ready',function () {" +
"var VARIANT = MathJax.OutputJax.SVG.FONTDATA.VARIANT;" +
"VARIANT['normal'].fonts.unshift('MathJax_default');" +
"VARIANT['bold'].fonts.unshift('MathJax_default-bold');" +
"VARIANT['italic'].fonts.unshift('MathJax_default-italic');" +
"VARIANT['-tex-mathit'].fonts.unshift('MathJax_default-italic');" +
"});" +
"}";
(document.body || document.getElementsByTagName('head')[0]).appendChild(configscript);
(document.body || document.getElementsByTagName('head')[0]).appendChild(mathjaxscript);
}
</script>
<p id="post-share-links">
Like this post? Share on:
<a href="https://twitter.com/intent/tweet?text=Maximum-likelihood%C2%A0asymptotics&url=/maximum-likelihood-asymptotics" target="_blank" rel="nofollow noopener noreferrer" title="Share on Twitter">Twitter</a>
|
<a href="https://www.facebook.com/sharer/sharer.php?u=/maximum-likelihood-asymptotics" target="_blank" rel="nofollow noopener noreferrer" title="Share on Facebook">Facebook</a>
|
<a href="mailto:?subject=Maximum-likelihood%C2%A0asymptotics&body=/maximum-likelihood-asymptotics" target="_blank" rel="nofollow noopener noreferrer" title="Share via Email">Email</a>
</p>
<hr />
<div class="author_blurb">
<a href="" target="_blank" rel="nofollow noopener noreferrer">
<img src=/wp-content/uploads/2014/12/JonathanLinkedIn.jpg alt="Jonathan Landy Avatar" title="Jonathan Landy">
<span class="author_name">Jonathan Landy</span>
</a>
Jonathan grew up in the midwest and then went to school at Caltech and UCLA. Following this, he did two postdocs, one at UCSB and one at UC Berkeley. His academic research focused primarily on applications of statistical mechanics, but his professional passion has always been in the mastering, development, and practical application of slick math methods/tools. He currently works as a data-scientist at Stitch Fix.
</div>
<hr/>
<aside>
<nav>
<ul class="articles-timeline">
<li class="previous-article">« <a href="./principal-component-analysis" title="Previous: Principal component analysis">Principal component analysis</a></li>
<li class="next-article"><a href="./independent-component-analysis" title="Next: Independent component analysis">Independent component analysis</a> »</li>
</ul>
</nav>
</aside>
</div>
<section id="article-sidebar" class="span2">
<h4>Published</h4>
<time itemprop="dateCreated" datetime="2015-12-30T00:01:00-08:00">Dec 30, 2015</time>
<h4>Category</h4>
<a class="category-link" href="./categories.html#statistics-ref">Statistics</a>
<h4>Contact</h4>
<div id="sidebar-social-link">
<a href="https://twitter.com/efavdb" title="" target="_blank" rel="nofollow noopener noreferrer">
<svg xmlns="http://www.w3.org/2000/svg" aria-label="Twitter" role="img" viewBox="0 0 512 512"><rect width="512" height="512" rx="15%" fill="#1da1f3"/><path fill="#fff" d="M437 152a72 72 0 0 1-40 12 72 72 0 0 0 32-40 72 72 0 0 1-45 17 72 72 0 0 0-122 65 200 200 0 0 1-145-74 72 72 0 0 0 22 94 72 72 0 0 1-32-7 72 72 0 0 0 56 69 72 72 0 0 1-32 1 72 72 0 0 0 67 50 200 200 0 0 1-105 29 200 200 0 0 0 309-179 200 200 0 0 0 35-37"/></svg>
</a>
<a href="https://github.com/efavdb" title="" target="_blank" rel="nofollow noopener noreferrer">
<svg xmlns="http://www.w3.org/2000/svg" aria-label="GitHub" role="img" viewBox="0 0 512 512"><rect width="512" height="512" rx="15%" fill="#1B1817"/><path fill="#fff" d="M335 499c14 0 12 17 12 17H165s-2-17 12-17c13 0 16-6 16-12l-1-50c-71 16-86-28-86-28-12-30-28-37-28-37-24-16 1-16 1-16 26 2 40 26 40 26 22 39 59 28 74 22 2-17 9-28 16-35-57-6-116-28-116-126 0-28 10-51 26-69-3-6-11-32 3-67 0 0 21-7 70 26 42-12 86-12 128 0 49-33 70-26 70-26 14 35 6 61 3 67 16 18 26 41 26 69 0 98-60 120-117 126 10 8 18 24 18 48l-1 70c0 6 3 12 16 12z"/></svg>
</a>
<a href="https://www.youtube.com/channel/UClfvjoSiu0VvWOh5OpnuusA" title="" target="_blank" rel="nofollow noopener noreferrer">
<svg xmlns="http://www.w3.org/2000/svg" aria-label="YouTube" role="img" viewBox="0 0 512 512" fill="#ed1d24"><rect width="512" height="512" rx="15%"/><path d="m427 169c-4-15-17-27-32-31-34-9-239-10-278 0-15 4-28 16-32 31-9 38-10 135 0 174 4 15 17 27 32 31 36 10 241 10 278 0 15-4 28-16 32-31 9-36 9-137 0-174" fill="#fff"/><path d="m220 203v106l93-53"/></svg>
</a>
</div>
</section>
</div>
</article>
<button onclick="topFunction()" id="myBtn" title="Go to top">▲</button>
</div>
<div class="span1"></div>
</div>
</div>
</div>
<footer>
<div>
<span class="site-name"><a href= .>EFAVDB</span> - Everybody's Favorite Data Blog</a>
</div>
<!-- <div id="fpowered"> -->
<!-- Powered by: <a href="http://getpelican.com/" title="Pelican Home Page" target="_blank" rel="nofollow noopener noreferrer">Pelican</a> -->
<!-- Theme: <a href="https://elegant.oncrashreboot.com/" title="Theme Elegant Home Page" target="_blank" rel="nofollow noopener noreferrer">Elegant</a> -->
<!-- -->
<!-- </div> -->
</footer> <script src="//code.jquery.com/jquery.min.js"></script>
<script src="//netdna.bootstrapcdn.com/twitter-bootstrap/2.3.2/js/bootstrap.min.js"></script>
<script>
function validateForm(query)
{
return (query.length > 0);
}
</script>
<script>
//** Scroll to top button **
//Get the button:
mybutton = document.getElementById("myBtn");
// When the user scrolls down 30px from the top of the document, show the button
window.onscroll = function() {scrollFunction()};
function scrollFunction() {
if (document.body.scrollTop > 30 || document.documentElement.scrollTop > 30) {
mybutton.style.display = "block";
} else {
mybutton.style.display = "none";
}
}
// When the user clicks on the button, scroll to the top of the document
function topFunction() {
document.body.scrollTop = 0; // For Safari
document.documentElement.scrollTop = 0; // For Chrome, Firefox, IE and Opera
}
</script>
<script>
(function () {
if (window.location.hash.match(/^#comment-\d+$/)) {
$('#comment_thread').collapse('show');
}
})();
window.onhashchange=function(){
if (window.location.hash.match(/^#comment-\d+$/))
window.location.reload(true);
}
$('#comment_thread').on('shown', function () {
var link = document.getElementById('comment-accordion-toggle');
var old_innerHTML = link.innerHTML;
$(link).fadeOut(200, function() {
$(this).text('Click here to hide comments').fadeIn(200);
});
$('#comment_thread').on('hidden', function () {
$(link).fadeOut(200, function() {
$(this).text(old_innerHTML).fadeIn(200);
});
})
})
</script>
</body>
<!-- Theme: Elegant built for Pelican
License : MIT -->
</html>