<!DOCTYPE html>
<html class="writer-html5" lang="en" data-content_root="./">
<head>
<meta charset="utf-8" /><meta name="viewport" content="width=device-width, initial-scale=1" />
<title>References — N2D2 documentation</title>
<link rel="stylesheet" type="text/css" href="_static/pygments.css?v=80d5e7a1" />
<link rel="stylesheet" type="text/css" href="_static/css/theme.css?v=19f00094" />
<!--[if lt IE 9]>
<script src="_static/js/html5shiv.min.js"></script>
<![endif]-->
<script src="_static/documentation_options.js?v=5929fcd5"></script>
<script src="_static/doctools.js?v=888ff710"></script>
<script src="_static/sphinx_highlight.js?v=dc90522c"></script>
<script src="_static/js/theme.js"></script>
<link rel="index" title="Index" href="genindex.html" />
<link rel="search" title="Search" href="search.html" />
</head>
<body class="wy-body-for-nav">
<div class="wy-grid-for-nav">
<nav data-toggle="wy-nav-shift" class="wy-nav-side">
<div class="wy-side-scroll">
<div class="wy-side-nav-search" >
<a href="index.html" class="icon icon-home">
N2D2
</a>
<div role="search">
<form id="rtd-search-form" class="wy-form" action="search.html" method="get">
<input type="text" name="q" placeholder="Search docs" aria-label="Search docs" />
<input type="hidden" name="check_keywords" value="yes" />
<input type="hidden" name="area" value="default" />
</form>
</div>
</div><div class="wy-menu wy-menu-vertical" data-spy="affix" role="navigation" aria-label="Navigation menu">
<p class="caption" role="heading"><span class="caption-text">Introduction</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="intro/intro.html">Presentation</a></li>
<li class="toctree-l1"><a class="reference internal" href="intro/about.html">About N2D2-IP</a></li>
<li class="toctree-l1"><a class="reference internal" href="intro/simus.html">Performing simulations</a></li>
<li class="toctree-l1"><a class="reference internal" href="intro/perfs_tools.html">Performance evaluation tools</a></li>
<li class="toctree-l1"><a class="reference internal" href="intro/tuto.html">Tutorials</a></li>
</ul>
<p class="caption" role="heading"><span class="caption-text">ONNX Import</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="onnx/convert.html">Obtain ONNX models</a></li>
<li class="toctree-l1"><a class="reference internal" href="onnx/import.html">Import ONNX models</a></li>
<li class="toctree-l1"><a class="reference internal" href="onnx/transfer.html">Train from ONNX models</a></li>
</ul>
<p class="caption" role="heading"><span class="caption-text">Quantization and Export</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="quant/post.html">Post-training quantization</a></li>
<li class="toctree-l1"><a class="reference internal" href="quant/qat.html">Quantization-Aware Training</a></li>
<li class="toctree-l1"><a class="reference internal" href="quant/pruning.html">Pruning</a></li>
<li class="toctree-l1"><a class="reference internal" href="export/CPP.html">Export: C++</a></li>
<li class="toctree-l1"><a class="reference internal" href="export/CPP_STM32.html">Export: C++/STM32</a></li>
<li class="toctree-l1"><a class="reference internal" href="export/TensorRT.html">Export: TensorRT</a></li>
<li class="toctree-l1"><a class="reference internal" href="export/DNeuro.html">Export: DNeuro</a></li>
<li class="toctree-l1"><a class="reference internal" href="export/ONNX.html">Export: ONNX</a></li>
<li class="toctree-l1"><a class="reference internal" href="export/legacy.html">Export: other / legacy</a></li>
</ul>
<p class="caption" role="heading"><span class="caption-text">INI File Interface</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="ini/intro.html">Introduction</a></li>
<li class="toctree-l1"><a class="reference internal" href="ini/databases.html">Databases</a></li>
<li class="toctree-l1"><a class="reference internal" href="ini/data_analysis.html">Stimuli data analysis</a></li>
<li class="toctree-l1"><a class="reference internal" href="ini/environment.html">Stimuli provider (Environment)</a></li>
<li class="toctree-l1"><a class="reference internal" href="ini/layers.html">Network Layers</a></li>
<li class="toctree-l1"><a class="reference internal" href="ini/target.html">Targets (outputs & losses)</a></li>
<li class="toctree-l1"><a class="reference internal" href="adversarial.html">Adversarial module</a></li>
</ul>
<p class="caption" role="heading"><span class="caption-text">Python API</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="python_api/intro.html">Introduction</a></li>
<li class="toctree-l1"><a class="reference internal" href="python_api/databases.html">Databases</a></li>
<li class="toctree-l1"><a class="reference internal" href="python_api/cells.html">Cells</a></li>
<li class="toctree-l1"><a class="reference internal" href="python_api/tensor.html">Tensor</a></li>
<li class="toctree-l1"><a class="reference internal" href="python_api/interoperability.html">Interoperability</a></li>
<li class="toctree-l1"><a class="reference internal" href="python_api/export.html">Export</a></li>
<li class="toctree-l1"><a class="reference internal" href="python_api/example.html">Example</a></li>
</ul>
<p class="caption" role="heading"><span class="caption-text">C++/Python core</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="core/core.html">Core N2D2</a></li>
<li class="toctree-l1"><a class="reference internal" href="core/example.html">Example</a></li>
</ul>
<p class="caption" role="heading"><span class="caption-text">C++ API / Developer</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="dev_intro.html">Introduction</a></li>
</ul>
</div>
</div>
</nav>
<section data-toggle="wy-nav-shift" class="wy-nav-content-wrap"><nav class="wy-nav-top" aria-label="Mobile navigation menu" >
<i data-toggle="wy-nav-top" class="fa fa-bars"></i>
<a href="index.html">N2D2</a>
</nav>
<div class="wy-nav-content">
<div class="rst-content">
<div role="navigation" aria-label="Page navigation">
<ul class="wy-breadcrumbs">
<li><a href="index.html" class="icon icon-home" aria-label="Home"></a></li>
<li class="breadcrumb-item active">References</li>
<li class="wy-breadcrumbs-aside">
<a href="_sources/references.rst.txt" rel="nofollow"> View page source</a>
</li>
</ul>
<hr/>
</div>
<div role="main" class="document" itemscope="itemscope" itemtype="http://schema.org/Article">
<div itemprop="articleBody">
<section id="references">
<span id="id1"></span><h1>References<a class="headerlink" href="#references" title="Link to this heading">¶</a></h1>
<div class="docutils container" id="id2">
<div role="list" class="citation-list">
<div class="citation" id="id29" role="doc-biblioentry">
<span class="label"><span class="fn-bracket">[</span>BLN+20<span class="fn-bracket">]</span></span>
<p>Yash Bhalgat, Jinwon Lee, Markus Nagel, Tijmen Blankevoort, and Nojun Kwak. LSQ+: improving low-bit quantization through learnable offsets and better initialization. 2020. <a class="reference external" href="https://arxiv.org/abs/2004.09576">arXiv:2004.09576</a>.</p>
</div>
<div class="citation" id="id18" role="doc-biblioentry">
<span class="label"><span class="fn-bracket">[</span>COR+16<span class="fn-bracket">]</span></span>
<p>Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The Cityscapes dataset for semantic urban scene understanding. In <em>Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</em>. 2016.</p>
</div>
<div class="citation" id="id10" role="doc-biblioentry">
<span class="label"><span class="fn-bracket">[</span>DollarWSP09<span class="fn-bracket">]</span></span>
<p>P. Dollár, C. Wojek, B. Schiele, and P. Perona. Pedestrian detection: a benchmark. In <em>CVPR</em>. 2009.</p>
</div>
<div class="citation" id="id8" role="doc-biblioentry">
<span class="label"><span class="fn-bracket">[</span>FFFP04<span class="fn-bracket">]</span></span>
<p>L. Fei-Fei, R. Fergus, and P. Perona. Learning generative visual models from few training examples: an incremental Bayesian approach tested on 101 object categories. In <em>IEEE. CVPR 2004, Workshop on Generative-Model Based Vision</em>. 2004.</p>
</div>
<div class="citation" id="id11" role="doc-biblioentry">
<span class="label"><span class="fn-bracket">[</span>GB10<span class="fn-bracket">]</span></span>
<p>X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. In <em>International conference on artificial intelligence and statistics</em>, 249–256. 2010.</p>
</div>
<div class="citation" id="id25" role="doc-biblioentry">
<span class="label"><span class="fn-bracket">[</span>GDollarG+17<span class="fn-bracket">]</span></span>
<p>Priya Goyal, Piotr Dollár, Ross B. Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch SGD: training ImageNet in 1 hour. <em>CoRR</em>, 2017. URL: <a class="reference external" href="http://arxiv.org/abs/1706.02677">http://arxiv.org/abs/1706.02677</a>, <a class="reference external" href="https://arxiv.org/abs/1706.02677">arXiv:1706.02677</a>.</p>
</div>
<div class="citation" id="id17" role="doc-biblioentry">
<span class="label"><span class="fn-bracket">[</span>Gra14<span class="fn-bracket">]</span></span>
<p>Benjamin Graham. Fractional max-pooling. <em>CoRR</em>, 2014. <a class="reference external" href="https://arxiv.org/abs/1412.6071">arXiv:1412.6071</a>.</p>
</div>
<div class="citation" id="id9" role="doc-biblioentry">
<span class="label"><span class="fn-bracket">[</span>GHP07<span class="fn-bracket">]</span></span>
<p>Gregory Griffin, Alex Holub, and Pietro Perona. Caltech-256 object category dataset. Technical Report, 2007.</p>
</div>
<div class="citation" id="id19" role="doc-biblioentry">
<span class="label"><span class="fn-bracket">[</span>HZRS15<span class="fn-bracket">]</span></span>
<p>Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: surpassing human-level performance on ImageNet classification. In <em>Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV)</em>, ICCV '15, 1026–1034. 2015. <a class="reference external" href="https://doi.org/10.1109/ICCV.2015.123">doi:10.1109/ICCV.2015.123</a>.</p>
</div>
<div class="citation" id="id20" role="doc-biblioentry">
<span class="label"><span class="fn-bracket">[</span>HS97<span class="fn-bracket">]</span></span>
<p>Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. <em>Neural Computation</em>, 9(8):1735–1780, 1997. <a class="reference external" href="https://doi.org/10.1162/neco.1997.9.8.1735">doi:10.1162/neco.1997.9.8.1735</a>.</p>
</div>
<div class="citation" id="id7" role="doc-biblioentry">
<span class="label"><span class="fn-bracket">[</span>HSS+13<span class="fn-bracket">]</span></span>
<p>Sebastian Houben, Johannes Stallkamp, Jan Salmen, Marc Schlipsing, and Christian Igel. Detection of traffic signs in real-world images: the German Traffic Sign Detection Benchmark. In <em>International Joint Conference on Neural Networks</em>, number 1288. 2013.</p>
</div>
<div class="citation" id="id16" role="doc-biblioentry">
<span class="label"><span class="fn-bracket">[</span>IS15<span class="fn-bracket">]</span></span>
<p>Sergey Ioffe and Christian Szegedy. Batch normalization: accelerating deep network training by reducing internal covariate shift. <em>CoRR</em>, 2015. <a class="reference external" href="https://arxiv.org/abs/1502.03167">arXiv:1502.03167</a>.</p>
</div>
<div class="citation" id="id15" role="doc-biblioentry">
<span class="label"><span class="fn-bracket">[</span>JLM10<span class="fn-bracket">]</span></span>
<p>Vidit Jain and Erik Learned-Miller. FDDB: a benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst, 2010.</p>
</div>
<div class="citation" id="id28" role="doc-biblioentry">
<span class="label"><span class="fn-bracket">[</span>JYL19<span class="fn-bracket">]</span></span>
<p>Qing Jin, Linjie Yang, and Zhenyu Liao. Towards efficient training for neural network quantization. 2019. <a class="reference external" href="https://arxiv.org/abs/1912.10207">arXiv:1912.10207</a>.</p>
</div>
<div class="citation" id="id21" role="doc-biblioentry">
<span class="label"><span class="fn-bracket">[</span>KB14<span class="fn-bracket">]</span></span>
<p>Diederik P. Kingma and Jimmy Ba. Adam: a method for stochastic optimization. <em>CoRR</em>, 2014. URL: <a class="reference external" href="http://arxiv.org/abs/1412.6980">http://arxiv.org/abs/1412.6980</a>, <a class="reference external" href="https://arxiv.org/abs/1412.6980">arXiv:1412.6980</a>.</p>
</div>
<div class="citation" id="id4" role="doc-biblioentry">
<span class="label"><span class="fn-bracket">[</span>Kri09<span class="fn-bracket">]</span></span>
<p>Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical Report, 2009.</p>
</div>
<div class="citation" id="id5" role="doc-biblioentry">
<span class="label"><span class="fn-bracket">[</span>LBBH98<span class="fn-bracket">]</span></span>
<p>Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. In <em>Proceedings of the IEEE</em>, volume 86, 2278–2324. 1998.</p>
</div>
<div class="citation" id="id26" role="doc-biblioentry">
<span class="label"><span class="fn-bracket">[</span>LWX+11<span class="fn-bracket">]</span></span>
<p>Jeffrey W. Lockhart, Gary M. Weiss, Jack C. Xue, Shaun T. Gallagher, Andrew B. Grosner, and Tony T. Pulickal. Design considerations for the WISDM smart phone-based sensor mining architecture. In <em>Proceedings of the Fifth International Workshop on Knowledge Discovery from Sensor Data</em>, SensorKDD '11, 25–33. New York, NY, USA, 2011. ACM. URL: <a class="reference external" href="http://doi.acm.org/10.1145/2003653.2003656">http://doi.acm.org/10.1145/2003653.2003656</a>, <a class="reference external" href="https://doi.org/10.1145/2003653.2003656">doi:10.1145/2003653.2003656</a>.</p>
</div>
<div class="citation" id="id14" role="doc-biblioentry">
<span class="label"><span class="fn-bracket">[</span>RG14<span class="fn-bracket">]</span></span>
<p>A. Rakotomamonjy and G. Gasso. Histogram of gradients of time-frequency representations for audio scene detection. Technical Report, 2014.</p>
</div>
<div class="citation" id="id13" role="doc-biblioentry">
<span class="label"><span class="fn-bracket">[</span>RDS+15<span class="fn-bracket">]</span></span>
<p>Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. <em>International Journal of Computer Vision (IJCV)</em>, 115(3):211–252, 2015. <a class="reference external" href="https://doi.org/10.1007/s11263-015-0816-y">doi:10.1007/s11263-015-0816-y</a>.</p>
</div>
<div class="citation" id="id12" role="doc-biblioentry">
<span class="label"><span class="fn-bracket">[</span>SHK+12<span class="fn-bracket">]</span></span>
<p>Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. <em>Journal of Machine Learning Research</em>, 15:1929–1958, 2014.</p>
</div>
<div class="citation" id="id6" role="doc-biblioentry">
<span class="label"><span class="fn-bracket">[</span>SSSI12<span class="fn-bracket">]</span></span>
<p>J. Stallkamp, M. Schlipsing, J. Salmen, and C. Igel. Man vs. computer: benchmarking machine learning algorithms for traffic sign recognition. <em>Neural Networks</em>, 2012. <a class="reference external" href="https://doi.org/10.1016/j.neunet.2012.02.016">doi:10.1016/j.neunet.2012.02.016</a>.</p>
</div>
<div class="citation" id="id22" role="doc-biblioentry">
<span class="label"><span class="fn-bracket">[</span>XBD+17<span class="fn-bracket">]</span></span>
<p>Gui-Song Xia, Xiang Bai, Jian Ding, Zhen Zhu, Serge J. Belongie, Jiebo Luo, Mihai Datcu, Marcello Pelillo, and Liangpei Zhang. DOTA: A large-scale dataset for object detection in aerial images. <em>CoRR</em>, 2017. URL: <a class="reference external" href="http://arxiv.org/abs/1711.10398">http://arxiv.org/abs/1711.10398</a>, <a class="reference external" href="https://arxiv.org/abs/1711.10398">arXiv:1711.10398</a>.</p>
</div>
<div class="citation" id="id24" role="doc-biblioentry">
<span class="label"><span class="fn-bracket">[</span>ZDM19<span class="fn-bracket">]</span></span>
<p>Hongyi Zhang, Yann N. Dauphin, and Tengyu Ma. Residual learning without normalization via better initialization. In <em>International Conference on Learning Representations</em>. 2019. URL: <a class="reference external" href="https://openreview.net/forum?id=H1gsz30cKX">https://openreview.net/forum?id=H1gsz30cKX</a>.</p>
</div>
<div class="citation" id="id3" role="doc-biblioentry">
<span class="label"><span class="fn-bracket">[</span>LuceyCohnKanade+10<span class="fn-bracket">]</span></span>
<p>P. Lucey, J. F. Cohn, T. Kanade, J. Saragih, Z. Ambadar, and I. Matthews. The extended Cohn-Kanade dataset (CK+): a complete dataset for action unit and emotion-specified expression. In <em>2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Workshops</em>, 94–101. June 2010. <a class="reference external" href="https://doi.org/10.1109/CVPRW.2010.5543262">doi:10.1109/CVPRW.2010.5543262</a>.</p>
</div>
<div class="citation" id="id27" role="doc-biblioentry">
<span class="label"><span class="fn-bracket">[</span>Warden18<span class="fn-bracket">]</span></span>
<p>P. Warden. Speech Commands: a dataset for limited-vocabulary speech recognition. <em>arXiv e-prints</em>, April 2018. URL: <a class="reference external" href="https://arxiv.org/abs/1804.03209">https://arxiv.org/abs/1804.03209</a>, <a class="reference external" href="https://arxiv.org/abs/1804.03209">arXiv:1804.03209</a>.</p>
</div>
<div class="citation" id="id23" role="doc-biblioentry">
<span class="label"><span class="fn-bracket">[</span>WilsonRoelofsStern+17<span class="fn-bracket">]</span></span>
<p>Ashia C. Wilson, Rebecca Roelofs, Mitchell Stern, Nathan Srebro, and Benjamin Recht. The marginal value of adaptive gradient methods in machine learning. <em>arXiv e-prints</em>, May 2017. <a class="reference external" href="https://arxiv.org/abs/1705.08292">arXiv:1705.08292</a>.</p>
</div>
</div>
</div>
</section>
</div>
</div>
<footer>
<hr/>
<div role="contentinfo">
<p>© Copyright 2019, CEA LIST.</p>
</div>
Built with <a href="https://www.sphinx-doc.org/">Sphinx</a> using a
<a href="https://github.com/readthedocs/sphinx_rtd_theme">theme</a>
provided by <a href="https://readthedocs.org">Read the Docs</a>.
</footer>
</div>
</div>
</section>
</div>
<script>
jQuery(function () {
SphinxRtdTheme.Navigation.enable(true);
});
</script>
</body>
</html>