<!DOCTYPE html>
<html><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>Yuhang Lu</title>
<meta content="Yuhang Lu, https://rudylyh.github.io" name="keywords">
<style media="screen" type="text/css">html, body, div, span, applet, object, iframe, h1, h2, h3, h4, h5, h6, p, blockquote, pre, a, abbr, acronym, address, big, cite, code, del, dfn, em, font, img, ins, kbd, q, s, samp, small, strike, strong, sub, tt, var, dl, dt, dd, ol, ul, li, fieldset, form, label, legend, table, caption, tbody, tfoot, thead, tr, th, td {
border: 0pt none;
font-family: inherit;
font-size: 100%;
font-style: inherit;
font-weight: inherit;
margin: 0pt;
outline-color: invert;
outline-style: none;
outline-width: 0pt;
padding: 0pt;
vertical-align: baseline;
}
a {
color: #1772d0;
text-decoration:none;
}
a:focus, a:hover {
color: #f09228;
text-decoration:none;
}
a.paper {
font-weight: bold;
font-size: 12pt;
}
b.paper {
font-weight: bold;
font-size: 12pt;
}
* {
margin: 0pt;
padding: 0pt;
}
body {
position: relative;
margin: 3em auto 2em auto;
width: 800px;
font-family: Lato, Verdana, Helvetica, sans-serif;
font-size: 14px;
background: #eee;
}
h2 {
font-family: Lato, Verdana, Helvetica, sans-serif;
font-size: 15pt;
font-weight: 700;
}
h3 {
font-family: Lato, Verdana, Helvetica, sans-serif;
font-size: 16px;
font-weight: 700;
}
strong {
font-family: Lato, Verdana, Helvetica, sans-serif;
font-size: 14px;
font-weight:bold;
}
ul {
list-style: circle;
}
img {
border: none;
}
li {
padding-bottom: 0.5em;
margin-left: 1.4em;
}
alert {
font-family: Lato, Verdana, Helvetica, sans-serif;
font-size: 13px;
font-weight: bold;
color: #FF0000;
}
em, i {
font-style:italic;
}
div.section {
clear: both;
margin-bottom: 1.5em;
background: #eee;
}
div.spanner {
clear: both;
}
div.edu {
clear: both;
margin-top: 0.5em;
margin-bottom: 0.5em;
border: 1px solid #ddd;
background: #fff;
padding: 1em 1em 1em 1em;
}
div.edu div {
padding-left: 100px;
}
div.paper {
clear: both;
margin-top: 0.5em;
margin-bottom: 1em;
border: 1px solid #ddd;
background: #fff;
padding: 1em 1em 1em 1em;
}
div.paper div {
padding-left: 230px;
}
img.paper {
margin-bottom: 0.5em;
float: left;
width: 200px;
}
img.icon {
margin-bottom: 0.5em;
float: left;
width: 75px;
height: 75px;
}
span.blurb {
font-style:italic;
display:block;
margin-top:0.75em;
margin-bottom:0.5em;
}
pre, code {
font-family: 'Lucida Console', 'Andale Mono', 'Courier', monospace;
margin: 1em 0;
padding: 0;
}
div.paper pre {
font-size: 0.9em;
}
</style>
<link href="https://fonts.googleapis.com/css?family=Lato" rel="stylesheet" type="text/css">
<script async="" src="analytics.js"></script><script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','//www.google-analytics.com/analytics.js','ga');
ga('create', 'UA-66888300-1', 'auto');
ga('send', 'pageview');
</script><script type="text/javascript" src="jquery-1.12.4.min.js"></script></head>
<body>
<div style="margin-bottom: 1em; border: 1px solid #ddd; background-color: #fff; padding: 1em; height: 140px;">
<div style="margin: 0px auto; width: 100%;">
<img title="Yuhang Lu" alt="Yuhang Lu" style="float: left; padding-left: .01em; height: 140px;" src="yuhang.jpg">
<div style="padding-left: 12em; vertical-align: top; height: 120px;"><span style="line-height: 150%; font-size: 20pt;">Yuhang Lu (鲁宇航)</span><br>
<span>Machine Learning Engineer</span><br>
<span>Xpeng Motors</span><br>
<span> <a href="[email protected]">Email</a> | <a href="https://www.linkedin.com/in/yuhanglu/">LinkedIn</a> | <a href="https://scholar.google.com/citations?user=t89lkOsAAAAJ&hl=en">Google scholar</a> </span> <br>
</div>
</div>
</div>
<div style="clear: both;">
<div class="section">
<h2>About Me</h2>
<div class="paper">
I am a Senior Machine Learning Engineer at Xpeng Motors, based in San Diego, CA. I am currently working on BEV map learning and vision-based 3D reconstruction. I hold a PhD in Computer Science from the University of South Carolina (2022), where my research concentrated on semantic segmentation, few-shot learning, and image matching.</div>
</div>
</div>
<div style="clear: both;">
<div class="section"><h2>Selected Publications</h2>
<div class="paper">
<strong> Semi-supervised Deep Large-baseline Homography Estimation with Progressive Equivalence Constraint </strong>
<br> <i> AAAI Conference on Artificial Intelligence (AAAI), 2023 </i>
<br> Hai Jiang, Haipeng Li, <strong>Yuhang Lu</strong>, Songchen Han, Shuaicheng Liu
<br> [<a href="https://arxiv.org/abs/2212.02763" target="_blank">PDF</a>]
[<a href="https://github.com/megvii-research/LBHomo" target="_blank">Code</a>]
<br><br>
<strong> Unsupervised Global and Local Homography Estimation with Motion Basis Learning </strong>
<br> <i> IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2023 </i>
<br> Shuaicheng Liu, <strong>Yuhang Lu</strong>, Hai Jiang, Nianjin Ye, Chuang Wang, Bing Zeng
<br> [<a href="https://ieeexplore.ieee.org/abstract/document/9956874" target="_blank">PDF</a>]
[<a href="https://github.com/megvii-research/BasesHomo" target="_blank">Code</a>]
<br><br>
<strong> Unsupervised Homography Estimation with Coplanarity-Aware GAN </strong>
<br> <i> IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022 </i>
<br> Mingbo Hong*, <strong>Yuhang Lu*</strong>, Nianjin Ye, Chunyu Lin, Qijun Zhao, Shuaicheng Liu
<br> [<a href="https://openaccess.thecvf.com/content/CVPR2022/html/Hong_Unsupervised_Homography_Estimation_With_Coplanarity-Aware_GAN_CVPR_2022_paper.html" target="_blank">PDF</a>]
[<a href="https://github.com/megvii-research/HomoGAN" target="_blank">Code</a>]
[<a href="https://www.youtube.com/watch?v=uNFA-yOSz7M&ab_channel=ShuaichengLiu" target="_blank">Video</a>]
<br><br>
<strong> SnowVision: Segmenting, Identifying, and Discovering Stamped Curve Patterns from Fragments of Pottery </strong>
<br> <i> International Journal of Computer Vision (IJCV), 2022 </i>
<br> <strong>Yuhang Lu*</strong>, Jun Zhou*, Sam T. McDorman, Canyu Zhang, Deja Scott, Jake Bukuts, Colin Wilder, Karen Y. Smith, Song Wang
<br> [<a href="https://link.springer.com/article/10.1007/s11263-022-01669-7" target="_blank">PDF</a>]
[<a href="https://github.com/rudylyh/SnowVision/tree/pytorch" target="_blank">Code</a>]
[<a href="http://www.worldengraved.org/" target="_blank">Website</a>]
<br><br>
<strong> Cross-domain Few-shot Segmentation with Transductive Fine-tuning </strong>
<br> <i> arXiv, 2021 </i>
<br> <strong>Yuhang Lu</strong>, Xinyi Wu, Zhenyao Wu, Song Wang
<br> [<a href="https://arxiv.org/pdf/2211.14745.pdf" target="_blank">PDF</a>]
<br><br>
<strong> Annotation-Efficient Semantic Segmentation with Shape Prior Knowledge </strong>
<br> <i> ACM Multimedia Doctoral Symposium, 2021 </i>
<br> <strong>Yuhang Lu</strong>
<br> [<a href="https://dl.acm.org/doi/10.1145/3474085.3481030" target="_blank">PDF</a>]
<br><br>
<strong>Contour Transformer Network for One-shot Segmentation of Anatomical Structures</strong>
<br> <i> IEEE Transactions on Medical Imaging, 2020 </i>
<br> <strong>Yuhang Lu</strong>, Kang Zheng, Weijian Li, Yirui Wang, Adam P. Harrison, Chihung Lin, Song Wang, Jing Xiao, Le Lu, Chang-Fu Kuo, Shun Miao
<br> [<a href="https://arxiv.org/pdf/2012.01480.pdf" target="_blank">PDF</a>]
[<a href="https://github.com/rudylyh/CTN_data" target="_blank">Dataset</a>]
[<a href="https://github.com/rudylyh/ContourTransformerNetwork" target="_blank">Code</a>]
<br><br>
<strong>Learning to Segment Anatomical Structures Accurately from One Exemplar</strong>
<br> <i> International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), 2020 </i>
<br> <strong>Yuhang Lu</strong>, Weijian Li, Kang Zheng, Yirui Wang, Adam P. Harrison, Chihung Lin, Song Wang, Jing Xiao, Le Lu, Chang-Fu Kuo, Shun Miao
<br> [<a href="https://arxiv.org/pdf/2007.03052.pdf" target="_blank">PDF</a>]
[<a href="https://github.com/rudylyh/CTN_data" target="_blank">Dataset</a>]
[<a href="https://github.com/rudylyh/ContourTransformerNetwork" target="_blank">Code</a>]
<br><br>
<strong>Structured Landmark Detection via Topology-Adapting Deep Graph Learning</strong>
<br> <i> European Conference on Computer Vision (ECCV), 2020 </i>
<br> Weijian Li, <strong>Yuhang Lu</strong>, Kang Zheng, Haofu Liao, Chihung Lin, Jiebo Luo, Chi-Tung Cheng, Jing Xiao, Le Lu, Chang-Fu Kuo, Shun Miao
<br> [<a href="https://arxiv.org/pdf/2004.08190.pdf" target="_blank">PDF</a>]
[<a href="https://github.com/Weijian-li/unsupervised_inter_intra_landmark" target="_blank">Code</a>]
<br><br>
<strong>A Framework for Design Identification on Heritage Objects</strong>
<br> <i> Proceedings of the Practice and Experience in Advanced Research Computing (PEARC), 2019 </i>
<br> Jun Zhou*, <strong>Yuhang Lu</strong>*, Karen Smith, Colin Wilder, Song Wang, Paul Sagona, Ben Torkian
<br> [<a href="https://dl.acm.org/citation.cfm?id=3332186.3332190" target="_blank">PDF</a>]
(Best Paper Award of the Machine Learning track)
<br><br>
<strong>Curve-Structure Segmentation from Depth Maps: A CNN-based Approach and Its Application to Exploring Cultural Heritage Objects</strong>
<br> <i> AAAI Conference on Artificial Intelligence (AAAI), 2018 </i>
<br> <strong>Yuhang Lu</strong>, Jun Zhou, Jing Wang, Jun Chen, Karen Smith, Colin Wilder, Song Wang
<br> [<a href="https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/viewFile/16866/16317" target="_blank">PDF</a>]
[<a href="https://github.com/rudylyh/CurveStructureSeg" target="_blank">Code</a>]
<br><br>
<strong>Scale-constrained Unsupervised Evaluation Method for Multi-scale Image Segmentation</strong>
<br> <i> IEEE International Conference on Image Processing (ICIP), 2016 </i>
<br> <strong>Yuhang Lu</strong>, Youchuan Wan, Gang Li
<br> [<a href="https://arxiv.org/pdf/1611.04850.pdf" target="_blank">PDF</a>]
[<a href="https://github.com/rudylyh/SegEvaluation" target="_blank">Code</a>]
<br><br>
</div>
</div>
</div>
</body></html>