<!DOCTYPE html>
<html>
<!-- ------------------------------------ HEAD ---------------------------- -->
<head>
<meta charset='utf-8'>
<meta http-equiv="X-UA-Compatible" content="chrome=1">
<meta name="description" content=" Switching Bayesian heuristics">
<link rel="stylesheet" href="web/stylesheets/stylesheet.css">
<link rel="stylesheet" href="web/fonts/Serif/cmun-serif.css">
<!--Mathematics with MathJax-->
<script type="text/x-mathjax-config">
MathJax.Hub.Config({
extensions: ["tex2jax.js"],
jax: ["input/TeX", "output/HTML-CSS"],
tex2jax: {
inlineMath: [ ['$','$'], ["\\(","\\)"] ],
displayMath: [ ['$$','$$'], ["\\[","\\]"] ],
processEscapes: true
}, "HTML-CSS": {
availableFonts: ["STIX"],
}
});
</script>
<script type="text/javascript" async
src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS_CHTML">
</script>
<title>
A switching heuristic approximates Bayesian inference in a motion direction estimation task
</title>
</head>
<!-- ------------------------------------ BODY -------------------------------- -->
<body>
<!-- HEADER -->
<div id="header_wrap" class="outer">
<header class="inner">
<a id="forkme_banner" href="https://github.com/steevelaquitaine/projInference">Github</a>
<a id="project_author" href="http://steevelaquitaine.blogspot.com">
Steeve Laquitaine
</a>
</header>
</div>
<!--TITLE-->
<div id="proj_title_wrap" class="outer">
<header class="inner">
<section id="proj_title" class="inner">
<h1 id="proj_title">
Switching as a heuristic approximation to Bayesian inference
</h1>
</section>
</header>
</div>
<!-- Table of contents -->
<div id="Table_of_content_wrap" class="outer">
<section id="table_of_content" class="inner">
<h3 id="top"> Content </h3>
<a href="#abstract"> Abstract</a><br>
<a href="#task"> Task </a><br>
<a href="#get_data"> Data </a><br>
<a href="#Q_A"> Q&A </a>
</section>
</div>
<!-- MAIN CONTENT -->
<div id="main_content_wrap" class="outer">
<section id="main_content" class="inner">
<a id="abstract" class="abstract" href="#abstract" aria-hidden="true">
<span aria-hidden="true" class="octicon octicon-link">
</span>
</a>
<!-- abstract -->
<a href="#top">
<h2>
Abstract
</h2>
</a>
<p>
Human perceptual inference has been fruitfully characterized as a normative Bayesian process in which sensory evidence and priors are multiplicatively combined to form posteriors from which sensory estimates can be optimally read out. We tested whether this basic Bayesian framework could explain human subjects’ behavior in two estimation tasks in which we varied the strength of sensory evidence (motion coherence or contrast) and priors (set of directions or orientations). We found that despite excellent agreement of the estimates' mean and variability with a Basic Bayesian observer model, the estimate distributions were bimodal, with unpredicted modes near the prior and the likelihood. We developed a model that switched between prior and sensory evidence rather than integrating the two, which better explained the data than the Basic and several other Bayesian observers. Our data suggest that humans can approximate Bayesian optimality with a switching heuristic that forgoes multiplicative combination of priors and likelihoods.
</p>
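<p>
For reference, a minimal sketch of the two computations contrasted above (the notation is generic, not the paper's): the Basic Bayesian observer forms its posterior multiplicatively,
$$ p(\theta \mid m) \;\propto\; p(m \mid \theta)\, p(\theta), $$
and reads the estimate out from that posterior, whereas the switching observer bases each trial's estimate on either the likelihood or the prior alone instead of combining them.
</p>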
<a id="task" class="anchor" href="#task" aria-hidden="true">
<span aria-hidden="true" class="octicon octicon-link">
</span>
</a>
<!-- task -->
<a href="#top">
<h2>
Task
</h2>
</a>
<!-- Browniam motion -->
<h3>
Brownian motion
</h3>
<p>
<ul>
<li>
A new subset of dots is randomly chosen on each frame to move in the coherent direction, and the remaining (noise) dots are randomly relocated; i.e., each noise dot is given a random direction (a minimal sketch of one frame update is shown after this list).
</li>
<li>
All dots move at the same speed.
</li>
<li>
Once a dot moves out of the circular aperture, it is relocated to the opposite side of the aperture, where it will move in either a coherent or a random direction on the next frame.
</li>
<li>
The Brownian motion stimulus has been shown to produce higher motion direction estimation accuracy than other motion algorithms, possibly because it creates less local directional ambiguity: the noise dots are only locally repositioned.
</li>
<li>
see Pilly, P. K. & Seitz, A. R. Vision Research 49, 1599–1612 (2009).
</li>
</ul>
</p>
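<p>
A minimal MATLAB sketch of one frame update under these rules (the parameter values and the square aperture are simplifying assumptions for illustration; this is not the actual task code):
</p>
<code class="code">
% one Brownian-motion frame: a fresh random subset of dots steps in the coherent direction, <br>
% every other dot steps at the same speed in a random direction <br>
nDots = 100; coh = 0.12; speed = 0.02; cohDir = deg2rad(225);  % assumed values <br>
xy = rand(nDots,2);                                % dot positions in the unit square <br>
isSignal = rand(nDots,1) &lt; coh;                 % signal dots are re-drawn on every frame <br>
theta = repmat(cohDir,nDots,1);                    % signal dots take the coherent direction <br>
theta(~isSignal) = 2*pi*rand(nnz(~isSignal),1);    % noise dots take a random direction <br>
xy = xy + speed*[cos(theta) sin(theta)];           % all dots move at the same speed <br>
xy = mod(xy,1);                                    % dots that exit re-enter on the opposite side <br>
</code>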
<!-- motion videos -->
<div id="videoal" align="center" style="padding:0px">
<div class="video">
<video controls autoplay><source src="images/motioncoh06.mp4" type="video/mp4">
</video>
</div>
<div class="video">
<video controls autoplay><source src="images/motioncoh12.mp4" type="video/mp4">
</video>
</div>
<div class="video">
<video controls autoplay><source src="images/motioncoh24.mp4" type="video/mp4"></video>
</div>
</div>
<!-- Run the task -->
<h3>
Run the task
</h3>
<p>The MATLAB code is in projInference/task/.</p>
<!-- code -->
<code class="code">
<ol>
<li>
git clone the mgl and mrTools libraries from https://github.com/justingardner and SLcodes from https://github.com/steevelaquitaine/
</li>
<li>
set the screen parameters with mglEditScreenParams (screenNumber = 1; this can be changed in taskDotDir.m)
</li>
<li>
open main.m and run each prior block line by line
</li>
</ol>
</code>
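<p>
For example, the steps above might look like the following in MATLAB (the local paths are assumptions; adjust them to wherever you cloned the repositories):
</p>
<code class="code">
addpath(genpath('~/proj/mgl'));      % mgl <br>
addpath(genpath('~/proj/mrTools'));  % mrTools <br>
addpath(genpath('~/proj/SLcodes'));  % SLcodes <br>
mglEditScreenParams                  % set screenNumber = 1 (see taskDotDir.m) <br>
edit main.m                          % then run each prior block line by line <br>
</code>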
<!-- get the data -->
<a id="get_data" class="anchor" href="#get_data" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a><a href="#top"> <h2>Data</h2> </a>
<p>
<ol>
<li>
Download "data01_direction4priors.zip" <a href="https://doi.org/10.17632/nxkvtrj9ps.1"> from mendeley </a>
</li>
<li>
Add projInference's data path to the MATLAB path (mine is ~/Desktop/projInference)
</li>
<li>
Unzip the file in /projInference/data/
</li>
<li>
Fit the data (see the example below)
</li>
</ol>
</p>
<p>
To visualize the Standard Bayesian model's predictions (subject 1):
</p>
<!-- code make predictions-->
<code class="code">
addpath(genpath('~/Desktop/projInference/'));  % add the project code to the MATLAB path <br>
datapath = '~/Desktop/projInference/data/data01_direction4priors/data';  % unzipped dataset <br>
% visualize the Standard Bayesian (MAP readout) model's predictions for subject 1, using the given parameters, overlaid on the data <br>
SLfitBayesianModel({'sub01'},[100 3 3 1 2.5 7.7 43 NaN 0.001 15 NaN ],'dataPathVM',datapath,'experiment','vonMisesPrior','MAPReadout','modelPredictions','withData','inputFitParameters', 'filename','myfile');
</code>
<a id="Q_A" class="anchor" href="#Q_A" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a><a href="#top"> <h2>Q&A</h2> </a>
<!-- Section title -->
<a id="task" class="anchor" href="#task" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span>
</a>
<h3><a href="#top"> Why is estimation standard deviation higher clockwise to the prior mean (at -140) than counterclockwise (at +140) for 80º prior?
Why are the models' fits not smooth (fig 3 and 5)?</a></h3>
<p>
This is an artefact in the plotted standard deviation created by numerical imprecision in the polar-to-Cartesian conversion used for this plot. 90 degrees was converted to slightly inaccurate Cartesian coordinates (x=-4e-8, y=1) instead of (x=0, y=1). When converted back to polar with atan(y/x), the negative x placed the angle in the wrong quadrant, giving -90 degrees instead of 90 degrees. A corrected plot will be uploaded soon. Similar numerical imprecisions make the curves not smooth.
</p>
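<p>
A minimal MATLAB illustration of the artefact (not the analysis code): a tiny negative x produced by round-off flips atan(y/x) into the wrong quadrant, whereas the quadrant-aware atan2 does not:
</p>
<code class="code">
x = -4e-8; y = 1;        % round-off error: should be x = 0 <br>
rad2deg(atan(y/x))       % about -90 deg, wrong quadrant <br>
rad2deg(atan2(y,x))      % about +90 deg, correct <br>
</code>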
<h3><a href="#top"> Could detection sometimes fails ? </a></h3>
<p> A Bayesian model in which detection sometimes fails would combine the prior with a uniform likelihood on trials where detection failed, producing an estimate peak centered on the prior mean, and would combine the prior with a noiseless (delta-function) or extremely strong likelihood on trials where detection succeeded, producing an estimate peak centered on the motion direction. Whether the likelihood function, which is represented by inherently noisy neural populations, can be noiseless, or nearly noiseless enough to bias estimates entirely toward the sensory evidence, remains to be determined. </p>
<p>
We cannot preclude the possibility that this or other distributional forms would allow a Bayesian model with multiplicative integration to better explain the data.
</p>
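<p>
The MATLAB sketch below shows one way such an observer's estimate distribution could look; the prior concentration, detection-failure rate and likelihood strength are illustrative assumptions, not fitted values:
</p>
<code class="code">
vmpdf = @(x,mu,k) exp(k*cos(x-mu)) ./ (2*pi*besseli(0,k));  % von Mises density <br>
dirs = linspace(-pi,pi,360);       % estimate axis (radians, relative to the prior mean) <br>
priorK = 4;                        % assumed prior concentration <br>
trueDir = deg2rad(60);             % example motion direction <br>
pFail = 0.4;                       % assumed detection-failure rate <br>
% mixture: a prior-shaped spread of estimates on failed trials, plus a narrow mode at the motion direction otherwise <br>
pred = pFail*vmpdf(dirs,0,priorK) + (1-pFail)*vmpdf(dirs,trueDir,200); <br>
plot(rad2deg(dirs),pred); xlabel('estimate (deg)'); ylabel('density'); <br>
</code>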
<h3><a href="#top"> Can we check the detection hypothesis by measuring likelihood strength in a 2AFC? </a></h3>
<p> In a 2AFC task, subjects would, for example, indicate on repeated trials whether a test motion stimulus at 6% coherence moved clockwise or counterclockwise of a reference 100% coherence motion stimulus. In the simplest version of the task the prior is uniform and motion directions are sampled from a uniform distribution. The sensory likelihood width can then be fitted to subjects' responses. It is not clear, though, how or whether the width can be derived for each individual trial, which is necessary to determine whether the likelihood is sometimes flat (no detection) and sometimes strong (detection).</p>
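<p>
Such a fit would only recover an average width. A minimal MATLAB sketch with hypothetical response counts (it assumes the Statistics Toolbox's glmfit and a probit psychometric function):
</p>
<code class="code">
offsets = (-20:5:20)';                 % test direction minus reference (deg) <br>
nTrials = 50*ones(size(offsets));      % trials per offset <br>
nCW = [2 5 10 18 25 33 40 45 48]';     % hypothetical "clockwise" counts <br>
b = glmfit(offsets,[nCW nTrials],'binomial','link','probit'); <br>
sigma = 1/b(2);                        % average sensory noise width (deg) across trials <br>
fprintf('estimated likelihood width ~ %.1f deg\n', sigma); <br>
</code>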
<h3>
<a href="#top"> Why is orientation estimation so hard at 15% contrast? </a>
</h3>
<p> Possible reasons why orientation estimation was hard at 15% contrast are that the stimulus 1) was a thin bar, 2) was filled with filtered noise, 3) appeared for only 20 ms, and 4) could take one of 36 different spatial orientations. Contrast was thus only one of many noise factors. </p>
<!-- Section title -->
<h3>
<a href="#top"> Why is the switching model able to fit the SD so well and not predict higher SD like for cue switching in the cue-integration literature? </a>
</h3>
<p> The most likely reason is that our switching model has more free parameters, which allows it to avoid that problem to some extent. </p>
<!-- Section title -->
<h3>
<a href="#top"> Did we run the task without feedback ? </a>
</h3>
<p>
We did run pilot experiments without feedback, but that made the task much harder for subjects, particularly at low coherence (6%): pilot subjects' estimate distributions were mostly random, and some subjects also displayed unexplained biases. We also reasoned that the prior would be easier to learn, and thus learning would be faster (which was essential to test our hypothesis), if we provided feedback, again particularly at low coherence.
</p>
<!-- Section title -->
<h3>
<a href="#top"> Did subjects see the edges of the monitor ? </a>
</h3>
<p>
No. Subjects' eye distance to the monitor prevented them from seeing its edges. We also ensured that subjects viewed the stimuli in a dark room with no visual distractors, and the fixation point was circular. These precautions were taken to make sure that subjects could not use cardinal references (monitor edges, a fixation cross) to estimate the motion directions.
</p>
<h3>
<a href="#top"> What were subjects instructions ? </a>
</h3>
<p>
Subjects were instructed as follows:
<ul>
<li>"The basic task is: </li>
<li> Fixate and then view a motion of noisy dots.</li>
<li> Report the direction you saw as best as possible, as accurately as possible, and as fast as possible by adjusting the wheel.</li>
<li> Press keyboard button 1 when you have made a choice. You have a maximum of 5 s to report your choice. Your response will then be confirmed (green line).</li>
<li>You will then get feedback about the true direction of the motion (red line).</li>
<li>Important!! Please fixate the fixation point during the entire experiment.</li>
<li> Your objective is to estimate and report the direction of the motion as best as possible. In the ideal case, your choice (green line) will perfectly match the feedback (the true direction of the motion)."</li>
</ul>
</p>
<!--End Main Content-->
</section>
</div>
<!-- FOOTER -->
<div id="footer_wrap" class="outer">
<footer class="inner">
<p class="copyright">Projinference maintained by <a href="https://github.com/steevelaquitaine">steevelaquitaine</a>
</p>
<p>
Published with <a href="https://pages.github.com"> GitHub Pages </a>
</p>
</footer>
</div>
</body>
</html>