index.html
<!DOCTYPE html>
<html lang="en">
<head>
<title>Ximing Lu</title>
<meta http-equiv="content-type" content="text/html; charset=UTF-8" />
<meta charset="utf-8" />
<meta property="og:title" content="Ximing Lu" />
<meta property="og:image" content="https://ximinglu1999.github.io/img/ximing.jpg" />
<meta http-equiv="X-UA-Compatible" content="IE=edge" />
<meta name="author" content="Ximing Lu" />
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" />
<link rel="shortcut icon" type="image/png" href="favicon.ico" />
<link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/css/bootstrap.min.css" integrity="sha384-ggOyR0iXCbMQv3Xipma34MD+dH/1fQ784/j6cY/iJTQUOhcWr7x9JvoRxT2MZw1T" crossorigin="anonymous" />
<link rel="stylesheet" href="css/style.css" />
<script src="https://code.jquery.com/jquery-3.3.1.slim.min.js" integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo" crossorigin="anonymous"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.7/umd/popper.min.js" integrity="sha384-UO2eT0CpHqdSJQ6hJty5KVphtPhzWj9WO1clHTMGa3JDZwrnQq4sF86dIHNDz0W1" crossorigin="anonymous"></script>
<script src="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/js/bootstrap.min.js" integrity="sha384-JjSmVgyd0p3pXB1rRibZUAYoIIy6OrQ6VrjIEaFf/nJGzIxFDsf4x0xIM+B07jRM" crossorigin="anonymous"></script>
</head>
<body>
<style>
pre {
text-align: left;
white-space: pre-line;
}
</style>
<div class="container mt-5">
<div class="row mb-3">
<div class="col">
<h1>Ximing Lu</h1>
</div>
</div>
<div class="row">
<div class="col-md-4 order-md-2">
<img src="img/ximing.jpg" alt="Ximing" class="img-fluid rounded" />
</div>
<div class="col-md-8 order-md-1">
<p>
I am a Ph.D. candidate at the <a href="https://www.cs.washington.edu/">University of Washington</a>, advised by Professor <a href="https://homes.cs.washington.edu/~yejin/" target="_blank">Yejin Choi</a>. Previously, I received my B.S. degree in Computer Science from the University of Washington.
</p>
<p>
My broad research goal is to <b>understand the boundaries</b> of machine intelligence and <b>bridge the capability gap</b> between models and humans by exploring alternative paths beyond scaling, such as algorithmic innovations and knowledge enhancement.
Over the past few years, I have focused on <b>studying the capabilities and limits</b> of language models, as well as <b>developing learning and inference algorithms</b> that unlock these capabilities in smaller models, for example:
</p>
<ul>
<li>I investigate the fundamental limits of Transformer language models on compositional tasks in my work <a href="https://arxiv.org/abs/2305.18654">Faith and Fate</a>, and I explore how the configuration of machine intelligence diverges from that of humans by proposing and testing the <a href="https://arxiv.org/abs/2311.00059">Generative AI Paradox</a>.</li>
<li>I have worked to develop a suite of learning and decoding-time methods to empower compact and efficient language models, including <a href="https://arxiv.org/abs/2010.12884">NeuroLogic Decoding</a>, <a href="https://arxiv.org/abs/2112.08726">NeuroLogic A<sup>*</sup>esque Decoding</a>, <a href="https://arxiv.org/abs/2205.13636">Quark</a>, and <a href="https://arxiv.org/abs/2305.15065">Inference-Time Policy Adapters</a>.</li>
</ul>
<!-- <p>
I am happy to mentor a few self-motivated undergraduate and master students. Please feel free to reach out if you are interested in working with me!
</p> -->
<!-- <p>
At NVIDIA, my work is centered around synthesizing pre/post-training data, developing alternative learning paradigms and model architectures to empower reasoning capabilities in frontier models.
</p> -->
<p>Email: lux32 [<a href="https://en.wikipedia.org/wiki/At_sign" target="_blank">at</a>] cs.washington.edu</p>
<p>Links: [<a href="https://scholar.google.com/citations?user=ssYPSmkAAAAJ&hl=en" target="_blank">Google Scholar</a>] [<a href="https://twitter.com/gximing?lang=en" target="_blank">Twitter</a>] [<a href="https://github.com/GXimingLu" target="_blank">Github</a>] [<a href="img/Ximing_Lu_CV.pdf" target="_blank">CV</a>] [<a href="img/Research_Statement.pdf" target="_blank">Research Statement</a>]</p>
</div>
</div>
<hr />
<div class="row" id="publications">
<div class="col">
<h2>Publications</h2>
<p>Publications are listed in reverse chronological order. For a complete list, please see my <a href="https://scholar.google.com/citations?user=ssYPSmkAAAAJ&hl=en" target="_blank">Google Scholar</a>.</p>
<ul>
<li>
<a href="https://arxiv.org/abs/2403.13780"><b>Information-Theoretic Distillation</b> for Reference-less Summarization</a><br>
Jaehun Jung, <strong>Ximing Lu</strong>, Liwei Jiang, Faeze Brahman, Peter West, Pang Wei Koh, Yejin Choi<br>
arXiv:2403.13780 <br><br>
</li>
<li>
<a href="https://arxiv.org/abs/2402.05070">A Roadmap to <b>Pluralistic Alignment</b></a><br>
Taylor Sorensen, Jared Moore, Jillian Fisher, Mitchell Gordon, Niloofar Mireshghallah, Christopher Michael Rytting, Andre Ye, Liwei Jiang, <strong>Ximing Lu</strong>, Nouha Dziri, Tim Althoff, Yejin Choi<br>
ICML 2024 <br><br>
</li>
<li>
<a href="https://arxiv.org/abs/2305.16635"><b>Impossible Distillation:</b> From Low-Quality Model to High-Quality Dataset & Model for Summarization and Paraphrasing</a><br>
Jaehun Jung, Peter West, Liwei Jiang, Faeze Brahman, <strong>Ximing Lu</strong>, Jillian Fisher, Taylor Sorensen, Yejin Choi<br>
NAACL 2024 <br><br>
</li>
<li>
<b><a href="https://arxiv.org/abs/2402.08761">JAMDEC:</b> Unsupervised Authorship Obfuscation using Constrained Decoding over Small Language Models</a><br>
Jillian Fisher, <strong>Ximing Lu</strong>, Jaehun Jung, Liwei Jiang, Zaid Harchaoui, Yejin Choi<br>
NAACL 2024 <br><br>
</li>
<li>
<a href="https://arxiv.org/abs/2310.08559"><b>Phenomenal Yet Puzzling:</b> Testing Inductive Reasoning Capabilities of Language Models with Hypothesis Refinement</a><br>
Linlu Qiu, Liwei Jiang, <strong>Ximing Lu</strong>, Melanie Sclar, Valentina Pyatkin, Chandra Bhagavatula, Bailin Wang, Yoon Kim, Yejin Choi, Nouha Dziri, Xiang Ren<br>
ICLR 2024, <strong>Oral (1.2%)</strong> <br><br>
</li>
<li>
<a href="https://arxiv.org/abs/2311.00059"><b>THE GENERATIVE AI PARADOX:</b> “What It Can Create, It May Not Understand”</a><br>
*Peter West, *<strong>Ximing Lu</strong>, *Nouha Dziri, *Faeze Brahman, *Linjie Li, Jena D. Hwang, Liwei Jiang, Jillian Fisher, Abhilasha Ravichander, Khyathi Chandu, Benjamin Newman, Pang Wei Koh, Allyson Ettinger, Yejin Choi<br>
ICLR 2024 <br><br>
</li>
<li>
<a href="https://arxiv.org/abs/2312.01552"><b>The Unlocking Spell on Base LLMs:</b> Rethinking Alignment via In-Context Learning</a><br>
Bill Yuchen Lin, Abhilasha Ravichander, <strong>Ximing Lu</strong>, Nouha Dziri, Melanie Sclar, Khyathi Chandu, Chandra Bhagavatula, Yejin Choi<br>
ICLR 2024 <br><br>
</li>
<li>
<a href="https://arxiv.org/abs/2311.02805">Tailoring Self-Rationalizers with Multi-Reward Distillation</a><br>
Sahana Ramnath, Brihi Joshi, Skyler Hallinan, <strong>Ximing Lu</strong>, Liunian Harold Li, Aaron Chan, Jack Hessel, Yejin Choi, Xiang Ren<br>
ICLR 2024 <br><br>
</li>
<li>
<a href="https://arxiv.org/abs/2305.14718">Improving Language Models with Advantage-Based Offline Policy Gradients</a><br>
Ashutosh Baheti, <strong>Ximing Lu</strong>, Faeze Brahman, Ronan Le Bras, Maarten Sap, Mark Riedl<br>
ICLR 2024 <br><br>
</li>
<li>
<a href="https://arxiv.org/abs/2309.00779"><b>Value Kaleidoscope:</b> Engaging AI with Pluralistic Human Values, Rights, and Duties</a><br>
Taylor Sorensen, Liwei Jiang, Jena D Hwang, Sydney Levine, Valentina Pyatkin, Peter West, Nouha Dziri, <strong>Ximing Lu</strong>, Kavel Rao, Chandra Bhagavatula, Maarten Sap, John Tasioulas, Yejin Choi
<br>
AAAI 2024 <br><br>
</li>
<li>
<a href="https://arxiv.org/abs/2305.18654"><b>Faith and Fate:</b> Limits of Transformers on Compositionality</a><br>
*Nouha Dziri, *<strong>Ximing Lu</strong>, *Melanie Sclar, +Xiang Lorraine Li, +Liwei Jiang, +Bill Yuchen Lin, Sean Welleck, Peter West, Chandra Bhagavatula, Ronan Le Bras, Jena Hwang, Soumya Sanyal, Xiang Ren, Allyson Ettinger, Zaid Harchaoui, Yejin Choi<br>
NeurIPS 2023, <strong>Spotlight</strong><br><br>
</li>
<li>
<a href="https://arxiv.org/abs/2312.04837">Localized Symbolic Knowledge Distillation for Visual Commonsense Models</a><br>
Jae Sung Park, Jack Hessel, Khyathi Chandu, Paul Pu Liang, <strong>Ximing Lu</strong>, Peter West, Youngjae Yu, Qiuyuan Huang, Jianfeng Gao, Ali Farhadi, Yejin Choi<br>
NeurIPS 2023 <br><br>
</li>
<li>
<a href="https://arxiv.org/abs/2212.10465"><b>Soda:</b> Million-scale Dialogue Distillation with Social Commonsense Contextualization</a><br>
Hyunwoo Kim, Jack Hessel, Liwei Jiang, Peter West, <strong>Ximing Lu</strong>, Youngjae Yu, Pei Zhou, Ronan Le Bras, Malihe Alikhani, Gunhee Kim, Maarten Sap, Yejin Choi<br>
EMNLP 2023, <font color="red"> <strong>Outstanding Paper Award</strong> </font> <br><br>
</li>
<li>
<a href="https://arxiv.org/abs/2305.15065"><b>Inference-time Policy Adapters (IPA):</b> Tailoring Extreme-Scale LMs Without Fine-Tuning</a><br>
<strong>Ximing Lu</strong>, Faeze Brahman, Peter West, Jaehun Jung, Khyathi Chandu, Abhilasha Ravichander, Prithviraj Ammanabrolu, Liwei Jiang, Sahana Ramnath, Nouha Dziri, Jillian Fisher, Bill Lin, Skyler Hallinan, Lianhui Qin, Xiang Ren, Sean Welleck, Yejin Choi <br>
EMNLP 2023 <br><br>
</li>
<li>
<a href="https://arxiv.org/abs/2312.05979"><b>NovaCOMET:</b> Open Commonsense Foundation Models with Symbolic Knowledge Distillation</a><br>
Peter West, Ronan Le Bras, Taylor Sorensen, Bill Yuchen Lin, Liwei Jiang, <strong>Ximing Lu</strong>, Khyathi Chandu, Jack Hessel, Ashutosh Baheti, Chandra Bhagavatula, Yejin Choi <br>
Findings of EMNLP 2023 <br><br>
</li>
<li>
<a href="https://arxiv.org/abs/2311.07167">STEER: Unified Style Transfer with Expert Reinforcement</a><br>
Skyler Hallinan, Faeze Brahman, <strong>Ximing Lu</strong>, Jaehun Jung, Sean Welleck, Yejin Choi <br>
Findings of EMNLP 2023 <br><br>
</li>
<li>
<a href="https://arxiv.org/abs/2311.07237">In Search of the Long-Tail: Systematic Generation of Long-Tail Knowledge via Logical Rule Guided Search</a><br>
Huihan Li, Yuting Ning, Zeyi Liao, Siyuan Wang, Xiang Lorraine Li, <strong>Ximing Lu</strong>, Wenting Zhao, Faeze Brahman, Yejin Choi, Xiang Ren<br>
arXiv:2311.07237 <br><br>
</li>
<li>
<a href="https://arxiv.org/abs/2212.10409"><b>ClarifyDelphi:</b> Reinforced Clarification Questions with Defeasibility Rewards for Social and Moral Situations</a><br>
Valentina Pyatkin, Jena D. Hwang, Vivek Srikumar, <strong>Ximing Lu</strong>, Liwei Jiang, Yejin Choi, Chandra Bhagavatula<br>
ACL 2023 <br><br>
</li>
<li>
<a href="https://arxiv.org/abs/2212.09246"><b>I2D2:</b> Inductive Knowledge Distillation with Neurologic and Self-Imitation</a><br>
Chandra Bhagavatula, Jena D. Hwang, Doug Downey, Ronan Le Bras, <strong>Ximing Lu</strong>, Keisuke Sakaguchi, Swabha Swayamdipta, Peter West, Yejin Choi<br>
ACL 2023 <br><br>
</li>
<li>
<a href="https://arxiv.org/abs/2205.12630">Multimodal Knowledge Alignment with Reinforcement Learning</a><br>
Youngjae Yu, Jiwan Chung, Heeseung Yun, Jack Hessel, Jae Sung Park, <strong>Ximing Lu</strong>, Rowan Zellers, Prithviraj Ammanabrolu, Ronan Le Bras, Gunhee Kim, Yejin Choi<br>
CVPR 2023 <br><br>
</li>
<li>
<a href="https://arxiv.org/abs/2211.00053">Generating Sequences by Learning to Self-Correct</a><br>
*Sean Welleck, *<strong>Ximing Lu</strong>, +Peter West, +Faeze Brahman, Tianxiao Shen, Daniel Khashabi, Yejin Choi<br>
ICLR 2023 <br><br>
</li>
<li>
<a href="https://arxiv.org/abs/2205.13636"><b>Quark:</b> Controllable Text Generation with Reinforced Unlearning</a><br>
<strong>Ximing Lu</strong>, Sean Welleck, Liwei Jiang, Jack Hessel, Lianhui Qin, Peter West, Prithviraj Ammanabrolu, Yejin Choi<br>
NeurIPS 2022, <strong>Oral</strong><br><br>
</li>
<li>
<a href="https://arxiv.org/abs/2205.12910"><b>Naturalprover:</b> Grounded Mathematical Proof Generation with Language Models</a><br>
Sean Welleck, Jiacheng Liu, <strong>Ximing Lu</strong>, Hannaneh Hajishirzi, Yejin Choi<br>
NeurIPS 2022 <br><br>
</li>
<li>
<a href="https://arxiv.org/abs/2210.03078"><b>Rainier:</b> Reinforced Knowledge Introspector for Commonsense Question Answering</a><br>
Jiacheng Liu, Skyler Hallinan, <strong>Ximing Lu</strong>, Pengfei He, Sean Welleck, Hannaneh Hajishirzi, Yejin Choi<br>
EMNLP 2022 <br><br>
</li>
<li>
<a href="https://arxiv.org/abs/2205.12688"><b>Prosocialdialog:</b> A Prosocial Backbone for Conversational Agents</a><br>
Hyunwoo Kim, Youngjae Yu, Liwei Jiang, <strong>Ximing Lu</strong>, Daniel Khashabi, Gunhee Kim, Yejin Choi, Maarten Sap<br>
EMNLP 2022<br><br>
</li>
<li>
<a href="https://arxiv.org/abs/2205.09273"><b>Twist Decoding:</b> Diverse Generators Guide Each Other</a><br>
Jungo Kasai, Keisuke Sakaguchi, Ronan Le Bras, Hao Peng, <strong>Ximing Lu</strong>, Dragomir Radev, Yejin Choi, Noah A. Smith<br>
EMNLP 2022<br><br>
</li>
<li>
<a href="https://arxiv.org/abs/2201.02639"><b>MERLOT Reserve:</b> Multimodal Neural Script Knowledge through Vision and Language and Sound</a><br>
Rowan Zellers, Jiasen Lu, <strong>Ximing Lu</strong>, Youngjae Yu, Yanpeng Zhao, Mohammadreza Salehi, Aditya Kusupati, Jack Hessel, Ali Farhadi, Yejin Choi<br>
CVPR 2022, <strong>Oral</strong><br><br>
</li>
<li>
<a href="https://arxiv.org/abs/2112.08995">Connecting the Dots between Audio and Text without Parallel Data through Visual Knowledge Transfer</a><br>
Yanpeng Zhao, Jack Hessel, Youngjae Yu, <strong>Ximing Lu</strong>, Rowan Zellers, Yejin Choi<br>
NAACL 2022, <strong>Oral</strong><br><br>
</li>
<li>
<a href="https://arxiv.org/abs/2112.08726"><b>NeuroLogic A*esque Decoding:</b> Constrained Text Generation with Lookahead Heuristics</a><br>
<strong>Ximing Lu</strong>, +Sean Welleck, +Peter West, Liwei Jiang, Jungo Kasai, Daniel Khashabi, Ronan Le Bras, Lianhui Qin, Youngjae Yu, Rowan Zellers, Noah Smith, Yejin Choi<br>
NAACL 2022, <font color="red"> <strong>Best Paper Award</strong> </font> <br><br>
</li>
<li>
<a href="https://arxiv.org/abs/2110.07178"><b> Symbolic Knowledge Distillation:</b> from General Language Models to Commonsense Models</a><br>
Peter West, Chandra Bhagavatula, Jack Hessel, Jena D. Hwang, Liwei Jiang, Ronan Le Bras, <strong>Ximing Lu</strong>, Sean Welleck, Yejin Choi<br>
NAACL 2022, <strong>Oral</strong><br><br>
</li>
<li>
<a href="https://arxiv.org/abs/2110.08387">Generated Knowledge Prompting for Commonsense Reasoning</a><br>
Jiacheng Liu, Alisa Liu, <strong>Ximing Lu</strong>, Sean Welleck, Peter West, Ronan Le Bras, Yejin Choi, Hannaneh Hajishirzi<br>
ACL 2022<br><br>
</li>
<li>
<a href="https://arxiv.org/abs/2106.02636">🍷<b>MERLOT:</b> Multimodal Neural Script Knowledge Models</a><br>
*Rowan Zellers, *<strong>Ximing Lu</strong>, *Jack Hessel, Youngjae Yu, Jae Sung Park, Jize Cao, Ali Farhadi, Yejin Choi<br>
NeurIPS 2021, <strong>Oral (1%)</strong><br><br>
</li>
<li>
<a href="https://arxiv.org/abs/2101.00297">Analyzing Commonsense Emergence in Few-shot Knowledge Models</a><br>
Jeff Da, Ronan Le Bras, <strong>Ximing Lu</strong>, Yejin Choi, Antoine Bosselut<br>
AKBC 2021<br><br>
</li>
<li>
<a href="https://arxiv.org/abs/2105.03023"><b>DExperts:</b> On-the-Fly Controlled Text Generation with Experts and Anti-Experts</a><br>
Alisa Liu, Maarten Sap, <strong>Ximing Lu</strong>, Swabha Swayamdipta, Chandra Bhagavatula, Noah Smith, Yejin Choi<br>
ACL 2021, <strong>Oral</strong><br><br>
</li>
<li>
<a href="https://arxiv.org/abs/2010.08566"><b>Reflective Decoding:</b> Beyond Unidirectional Generation with Off-the-shelf Language Models</a><br>
Peter West, <strong>Ximing Lu</strong>, Ari Holtzman, Chandra Bhagavatula, Jena D. Hwang, Yejin Choi<br>
ACL 2021<br><br>
</li>
<li>
<a href="https://arxiv.org/abs/2101.00371">On-the-Fly Attention Modulation for Neural Generation</a><br>
Yue Dong, Chandra Bhagavatula, <strong>Ximing Lu</strong>, Jena D. Hwang, Antoine Bosselut, Jackie Chi Kit Cheung, Yejin Choi<br>
ACL 2021 Findings<br><br>
</li>
<li>
<a href="https://arxiv.org/abs/2010.12884"><b>NeuroLogic Decoding:</b> (Un)supervised Neural Text Generation with Predicate Logic Constraints</a><br>
<strong>Ximing Lu</strong>, Peter West, Rowan Zellers, Ronan Le Bras, Chandra Bhagavatula, Yejin Choi<br>
NAACL 2021<br><br>
</li>
<li>
<a href="https://www.sciencedirect.com/science/article/abs/pii/S136184152200113X">End-to-End Diagnosis of Breast Biopsy Images with Transformers</a><br>
*Sachin Mehta, *<strong>Ximing Lu</strong>, Wenjun Wu, Donald Weaver, Hannaneh Hajishirzi, Joann Elmore, Linda Shapiro<br>
Medical Image Analysis 79, 102466<br><br>
</li>
<li>
<a href="https://www.sciencedirect.com/science/article/abs/pii/B9780128197400000061">Applications of the ESPNet Architecture in Medical Imaging</a><br>
Sachin Mehta, Nicholas Nuechterlein, Ezgi Mercan, Beibin Li, Shima Nofallah, Wenjun Wu, <strong>Ximing Lu</strong>, Anat Caspi, Mohammad Rastegari, Joann Elmore, Hannaneh Hajishirzi, Linda Shapiro<br>
State of the Art in Neural Networks and their Applications, 117-131<br><br>
</li>
<li>
<a href="https://ieeexplore.ieee.org/abstract/document/9508513">Analysis of Regions of Interest and Distractor Regions in Breast Biopsy Images</a><br>
<strong>Ximing Lu</strong>, Sachin Mehta, Tad Brunyé, Donald Weaver, Joann Elmore, Linda Shapiro<br>
BHI 2021<br>
</li>
</ul>
</div>
</div>
<hr />
<div class="row">
<div class="col">
<h2>Honors & Awards</h2>
<ul>
<li>
(2023) Outstanding Paper Award at EMNLP<br>
</li>
<li>
(2023) <a href="https://news.cs.washington.edu/2023/06/22/take-advantage-of-the-doors-that-open-allen-school-celebrates-the-class-of-2023/">Best Senior Thesis Award</a>, Paul G. Allen School of Computer Science & Engineering<br>
</li>
<li>
(2022) <a href="https://2022.naacl.org/blog/best-papers/">Best Paper Award at NAACL</a><br>
</li>
<li>
(2020) <a href="https://news.cs.washington.edu/2021/01/15/computing-research-association-recognizes-undergraduates-advancing-data-science-for-mental-health-commonsense-reasoning-mobile-health-sensing-and-more/">Outstanding Undergraduate Researcher Award Runners-Up</a>, Computing Research Association<br>
</li>
<li>
(2020) <a href="https://news.cs.washington.edu/2020/11/03/allen-school-recognizes-undergraduates-ximing-lu-and-sanjana-chintalapati-during-annual-celebration-of-diversity-in-computing/">Lisa Simonyi Prize</a>, Paul G. Allen School of Computer Science & Engineering<br>
</li>
<li>
(2020) <a href="https://news.cs.washington.edu/2020/12/14/six-allen-school-undergraduates-recognized-for-excellence-in-research/">Levinson Emerging Scholars Award</a>, University of Washington<br>
</li>
<li>
(2020) Mary Gates Research Scholarship, University of Washington<br>
</li>
<li>
(2019) Denice Dee Denton Scholars Endowment, Paul G. Allen School of Computer Science & Engineering<br>
</li>
<li>
(2018) Second Prize at the UW Datathon, Citadel Investment Group, LLC<br>
</li>
<li>
(2018) Conference Travel Award, University of Washington<br>
</li>
</ul>
</div>
</div>
<hr />
<div class="row">
<div class="col">
<h2>Teaching Experience</h2>
<ul>
<!-- Teaching experiences can be listed here -->
<li>
(Winter 2024) TA @ CSE 447/517 (Undergrad/Grad NLP) at the University of Washington
</li>
<li>
(Winter, 2021)<font color="white">-</font>TA @ CSE P517 (Professional NLP) at University of Washington
</li>
</ul>
</div>
</div>
<footer class="pt-2 my-md-2 pt-md-2 border-top">
<div class="row justify-content-center">
<div class="col-6 col-md text-left align-self-center">
<p class="h5 text-muted">
Ximing, 2024
</p>
</div>
</div>
</footer>
</div>
</body>
</html>