<!DOCTYPE html>
<html lang="en">
<!--
The design of this website is based on https://xingxuanli.github.io/ and https://isakzhang.github.io/.
Please ask for permission and add a link to these websites before copying this design for your own site.
-->
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title> Lidong Bing's Homepage </title>
<meta name="keywords" content="Lidong Bing, 邴立东, Alibaba DAMO, Language Technology Lab, NLP, Large Language Models">
<meta name="robots" content="index,follow">
<link rel="stylesheet" href="./style.css">
<link href='https://fonts.googleapis.com/css?family=PT+Serif:400,700,400italic' rel='stylesheet' type='text/css'>
<link rel="stylesheet" href="https://fonts.googleapis.com/icon?family=Material+Icons">
<!-- Google tag (gtag.js) -->
<script async src="https://www.googletagmanager.com/gtag/js?id=UA-134238626-1"></script>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.6.0/jquery.min.js"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'UA-134238626-1');
</script>
</head>
<body>
<div class="container content">
<main>
<div class="home">
<div class="mini-intro">
<img class="avatar" src="img/bing_australia.jpg" alt="LidongBing-photo">
<h2 id="l.bing">Lidong Bing <span style="font-family:STFangsong">(邴立东)</span></h2>
Vice President & Chief Scientist, Shanda Group
<br>Office: UBIX, 25 Ubi Rd 4, Singapore <code class="language-plaintext highlighter-rouge">AND</code> No. 356 Guoshoujing Road, Shanghai
<br>Contact: <code class="language-plaintext highlighter-rouge">lidong.bing [at] shanda.com AND binglidong [at] gmail.com</code>
<br>
<br>
<p>
<font color="#FF4500">
<b>JOB OPENINGS NOW!!</b> (<i>Updated in Jan 2025</i>): <br>
- Openings for research scientists and research interns are available at the Shanda AI Research Institute (SARI)! <br>
- We especially encourage candidates with experience in both AI and brain science to apply. <br>
- Base locations: Singapore, Shanghai, Beijing, USA. <br>
- Send your CV to the email addresses above.
</font>
</p>
<div id="menu">
<div>
<a href="#Biography">Bio</a>
</div>
<div>
<a href="#News">News</a>
</div>
<div>
<a href="#Publications">Publication</a>
</div>
<div>
<a href="#Talks">Talk</a>
</div>
<div>
<a href="#Professional_Activities">Service</a>
</div>
<script>
$(function () {
// Highlight the menu entry whose href matches the current URL hash.
// (The previous version compared pathname segments against "#..." hrefs, which can never match.)
function highlightMenu() {
$('#menu div').removeClass('now');
$('#menu div a').each(function(){
const $this = $(this);
if ($this.attr("href") === window.location.hash) {
$this.parent().addClass("now");
}
});
}
highlightMenu();
$(window).on('hashchange', highlightMenu);
});
</script>
</div>
<h3><a name="Biography"></a>Biography</h3>
<p>
Lidong Bing is Shanda Group's Vice President, reporting to Chairman Tianqiao Chen.
As Chief Scientist, he is leading the effort to establish the Shanda AI Research Institute (SARI),
which focuses on various large-model techniques and the intersection of brain science and AI.
He received his Ph.D. from The Chinese University of Hong Kong and was a postdoctoral research fellow at Carnegie Mellon University.
<br>
Lidong's research interests include large language models, vision-language models, and various low-resource and multilingual NLP problems.
Currently, he serves as an Action Editor for Transactions of the Association for Computational Linguistics (TACL) and ACL Rolling Review (ARR),
Area Chair for AI conferences, and Associate Editor for AI journals.
<br>
Previously, Lidong was the Head of the Language Technology Lab at the DAMO Academy of Alibaba Group,
where he developed NLP techniques for many of Alibaba's business scenarios and its globalization strategy.
He spearheaded projects such as SeaLLMs, which won an award from the ITU, and VideoLLaMA, a pioneering video language model.
<!--
Current projects that the lab is focusing on include SeaLLMs (<a href="https://huggingface.co/SeaLLMs/SeaLLM-7B-v2">[tech memo]</a>
<a href="https://huggingface.co/spaces/SeaLLMs/SeaLLM-7B">[demo]</a> <a href="https://arxiv.org/abs/2312.00738">[paper]</a>),
a family of language models optimized for Southeast Asian (SEA) languages,
and Video-LLaMA ( <a href="https://github.com/DAMO-NLP-SG/Video-LLaMA">[code]</a>
<a href="https://huggingface.co/spaces/DAMO-NLP-SG/Video-LLaMA">[demo]</a>
<a href="https://arxiv.org/abs/2306.02858">[paper]</a>), an instruction-tuned audio-visual language model.
-->
</p>
<h3 id="News">News/Event</h3>
<h4><font color="#FF4500">Upcoming events:</font></h4>
<ul>
<font color="#FF4500">
<li>
July 31, 2024. Lidong will deliver a Keynote Address on
"Bridging the Digital Divide: Strategies for Inclusive Digital Transformation" during the
<a href="https://mekonginstitute.org/mekong-forum-2024/">Mekong Forum 2024</a> on July 31 –
August 1, 2024, at Pullman Khon Kaen Raja Orchid, Khon Kaen Province, Thailand.
</li>
<li>
August 11, 2024. We will present our <a href="https://damo-nlp-sg.github.io/SeaLLMs/">SeaLLMs</a> at
<a href="https://2024.aclweb.org/">ACL 2024</a>, Bangkok, Thailand.
</li>
</font>
</ul>
<h4>News:</h4>
<ul>
<li>
July 10, 2024. Delivered a keynote speech on our <a href="https://damo-nlp-sg.github.io/SeaLLMs/">SeaLLMs</a> at
<a href="https://www.alibabacloud.com/en/idn_ai_conference?_p_lc=1">Indonesia AI Conference 2024</a>.
</li>
<li>
July 5, 2024. Award ceremony for <a href="https://damo-nlp-sg.github.io/SeaLLMs/">SeaLLMs</a> winning
"Best Innovate for Impact Award" (news: <a href="https://aiforgood.itu.int/speaker/lidong-bing/">English</a>
<a href="https://www.sohu.com/a/783129021_384789">Chinese</a>) by The International Telecommunication Union (ITU) of the United Nations, colocated with
<a href="https://www.worldaic.com.cn/forum">WAIC</a>.
</li>
<li>
July 2024. We released Version 3 of SeaLLMs (<a href="https://huggingface.co/SeaLLMs/SeaLLM3-7B-Chat">tech memo</a>,
<a href="https://huggingface.co/spaces/SeaLLMs/SeaLLM-Chat">demo</a>). V3 supports 10 local languages and
achieves state-of-the-art results on exam, multi-turn, math, and translation benchmarks among similar-size models. More importantly,
it is highly reliable, hallucinates less, and gives safer responses in local contexts.
</li>
<li>
June 2024. We released VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs
(<a href="https://arxiv.org/abs/2406.07476">paper</a>,
<a href="https://github.com/DAMO-NLP-SG/VideoLLaMA2">code</a>).
</li>
<li>
June 2024. We released Auto Arena of LLMs: Automating LLM Evaluations with Agent Peer-battles and Committee Discussions
(<a href="https://arxiv.org/abs/2405.20267">paper</a>,
<a href="https://auto-arena.github.io/">project page</a>).
</li>
<li>
May 2024. Six papers were accepted at <a href="https://2024.aclweb.org/">ACL 2024</a>. </li>
<li>
April 2024. We released Version 2.5 of SeaLLMs (<a href="https://huggingface.co/SeaLLMs/SeaLLM-7B-v2.5">tech memo</a>,
<a href="https://huggingface.co/spaces/SeaLLMs/SeaLLM-7B-v2.5">demo</a>). </li>
<li>
Feb 2024. We released Version 2 of SeaLLMs (<a href="https://huggingface.co/SeaLLMs/SeaLLM-7B-v2">tech memo</a>,
<a href="https://huggingface.co/spaces/SeaLLMs/SeaLLM-7B">demo</a>). </li>
<li>
Jan 2024. Three papers were accepted at <a href="https://iclr.cc/">ICLR 2024</a>. </li>
<li>
Nov 2023. We released an LLM named SeaLLMs (<a href="https://arxiv.org/abs/2312.00738">paper</a>, <a href="https://huggingface.co/SeaLLMs/SeaLLM-Chat-13b">demo</a>), which offers strong capabilities for the languages of Southeast Asia. </li>
<li>
Oct 2023. Seven papers were accepted at <a href="https://2023.emnlp.org/">EMNLP 2023</a>. </li>
<li>
Sep 2023. Two papers were accepted at <a href="https://nips.cc/">NeurIPS 2023</a>. </li>
<!--<li>
June 2023. Serve as an AC of <a href="https://2023.emnlp.org/">EMNLP 2023</a> for the theme track: Large Language Models and the Future of NLP. </li>
<li>
May 2023. 17 papers were accepted at <a href="https://2023.aclweb.org/">ACL 2023</a>, 10 at the main conference, and 7 at the findings. </li>
<li>
Dec 2022. Invited to serve as an AC of <a href="https://2023.aclweb.org/">ACL 2023</a>. </li>
<li>
Oct 2022. Eight papers were accepted at the main conference of <a href="https://2022.emnlp.org/">EMNLP 2022</a>. </li>
<li>
Sep 2022. Three papers were accepted at <a href="https://coling2022.org/">COLING 2022</a> main conference. </li>
<li>
April 2022. Invited to serve as an Action Editor for <a href="https://transacl.org/ojs/index.php/tacl/index">TACL</a>. </li>
<li>
Feb 2022. Six papers were accepted at <a href="https://2022.aclweb.org/">ACL 2022</a>, 3 at the main conference and 3 at the findings. </li>
<li>
Oct 2021. Invited to serve as an Action Editor for <a href="https://aclrollingreview.org/">ACL Rolling Review</a>. </li>
<li>
Aug 2021. Four papers were accepted at <a href="https://2021.emnlp.org/">EMNLP 2021</a>, 2 at the main conference and 2 at the findings. </li>
<li>
May 2021. Seven papers were accepted at <a href="https://2021.aclweb.org/">ACL 2021</a> main conference. </li>
<li>
April 2021. Invited to serve as an AC of <a href="https://2021.emnlp.org/">EMNLP 2021</a> for the Sentiment Analysis, Stylistic Analysis, and Argument Mining track. </li>
<li>
Jan 2021. Invited to serve as a Social Media Co-Chair of <a href="https://2021.aclweb.org/">ACL 2021</a>. </li>
<li>
Nov 2020. Invited to serve as an AC of <a href="https://2021.aclweb.org/">ACL 2021</a> for the Sentiment Analysis, Stylistic Analysis, and Argument Mining track. </li>
<li>
Sep 2020. Ten papers were accepted at <a href="https://2020.emnlp.org/">EMNLP 2020</a>, 9 at the main conference, 1 at the findings. </li>
<li>
Sep 2020. One paper was accepted at <a href="https://coling2020.org/">COLING 2020</a> and one paper was accepted at <a href="http://aacl2020.org/">AACL 2020</a>. </li>
<li>
Apr 2020. Two papers were accepted at <a href="https://acl2020.org/">ACL 2020</a> and one paper was accepted at <a href="https://www.ijcai20.org/">IJCAI 2020</a>. </li>
<li>
Nov 2019. Four papers were accepted at <a href="https://aaai.org/Conferences/AAAI-20/">AAAI 2020</a>. </li>
<li>
Aug 2019. Eight papers were accepted at <a href="https://www.emnlp-ijcnlp2019.org/">EMNLP 2019</a>. </li>
<li>
May 2019. One paper was accepted at <a href="https://www.ijcai19.org/">IJCAI 2019</a>. </li>
<li>
January 2019. I gave a talk on "Text Generation, Editing and Summarization" at the National University of Singapore; here are <a href="pub/TextGeneration_NUS_2019-Jan-18_v2.pdf">the slides</a>.
</li>
<li>
January 2019. One paper was accepted at <a href="https://www2019.thewebconf.org/">WWW 2019</a>.
</li>-->
</ul>
<h3 id="Publications">Selected Publications (<a href="./research.html#Publications">Full List</a>,
<a href="https://github.com/DAMO-NLP-SG">GitHub</a>,
<a href="https://scholar.google.com/citations?user=_oYzrzAAAAAJ&hl=en"> Google Scholar</a>,
<a href="http://dblp.uni-trier.de/pers/hd/b/Bing:Lidong"> DBLP</a>) </h3>
<ul>
<li>
<p> <font size="3.5"><b>SeaLLMs -- Large Language Models for Southeast Asia </b>
<a href="https://arxiv.org/abs/2312.00738">[paper]</a>
<a href="https://huggingface.co/spaces/SeaLLMs/SeaLLM-7B-v2.5">[demo]</a>
<a href="https://github.com/DAMO-NLP-SG/SeaLLMs">[code]</a>.
Xuan-Phi Nguyen, Wenxuan Zhang, Xin Li, Mahani Aljunied, Qingyu Tan, Liying Cheng, Guanzheng Chen, Yue Deng, Sen Yang, Chaoqun Liu, Hang Zhang, Lidong Bing.
<i>ACL</i>, 2024.<br></font>
</p>
</li>
<li>
<p> <font size="3.5"><b>Democratizing LLMs for Low-Resource Languages by Leveraging their English Dominant Abilities with Linguistically-Diverse Prompts </b>
<a href="https://arxiv.org/abs/2306.11372">[paper]</a>.
Xuan-Phi Nguyen, Sharifah Mahani Aljunied, Shafiq Joty, Lidong Bing.
<i>ACL</i>, 2024.<br></font>
</p>
</li>
<li>
<p> <font size="3.5"><b>Exploring the Potential of Large Language Models in Computational Argumentation </b>
<a href="https://arxiv.org/abs/2311.09022">[paper]</a>
<a href="https://github.com/DAMO-NLP-SG/LLM-argumentation">[code]</a>.
Guizhen Chen, Liying Cheng, Anh Tuan Luu, Lidong Bing.
<i>ACL</i>, 2024.<br></font>
</p>
</li>
<li>
<p> <font size="3.5"><b>PuzzleVQA: Diagnosing Multimodal Reasoning Challenges of Language Models with Abstract Visual Patterns </b>
<a href="https://arxiv.org/abs/2403.13315">[paper]</a>
<a href="https://github.com/declare-lab/LLM-PuzzleTest">[code]</a>.
Yew Ken Chia, Vernon Toh, Deepanway Ghosal, Lidong Bing, Soujanya Poria.
<i>ACL</i>, 2024.<br></font>
</p>
</li>
<li>
<p> <font size="3.5"><b>Order-Agnostic Data Augmentation for Few-Shot Named Entity Recognition </b>
<a href="https://lidongbing.github.io/">[paper]</a>.
Huiming Wang, Liying Cheng, Wenxuan Zhang, De Wen Soh, Lidong Bing.
<i>ACL</i>, 2024.<br></font>
</p>
</li>
<li>
<p> <font size="3.5"><b>Towards Robust Temporal Reasoning of Large Language Models via a Multi-Hop QA Dataset and Pseudo-Instruction Tuning </b>
<a href="https://arxiv.org/abs/2311.09821">[paper]</a>.
Qingyu Tan, Hwee Tou Ng, Lidong Bing.
<i>ACL</i>, 2024.<br></font>
</p>
</li>
<li>
<p> <font size="3.5"><b>Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding </b>
<a href="https://arxiv.org/abs/2311.16922">[paper]</a>
<a href="https://github.com/DAMO-NLP-SG/VCD">[code]</a>.
Sicong Leng, Hang Zhang, Guanzheng Chen, Xin Li, Shijian Lu, Chunyan Miao, Lidong Bing.
<i>CVPR</i>, 2024.<br></font>
</p>
</li>
<li>
<p> <font size="3.5"><b>Chain-of-Knowledge: Grounding Large Language Models via Dynamic Knowledge Adapting over Heterogeneous Sources </b>
<a href="https://arxiv.org/abs/2305.13269">[paper]</a> <a href="https://github.com/DAMO-NLP-SG/chain-of-knowledge">[code]</a>.
Xingxuan Li, Ruochen Zhao, Yew Ken Chia, Bosheng Ding, Shafiq Joty, Soujanya Poria, Lidong Bing.
<i>ICLR</i>, 2024.<br></font>
</p>
</li>
<li>
<p> <font size="3.5"><b>CLEX: Continuous Length Extrapolation for Large Language Models </b>
<a href="https://arxiv.org/abs/2310.16450">[paper]</a>
<a href="https://github.com/DAMO-NLP-SG/CLEX">[code]</a>.
Guanzheng Chen, Xin Li, Zaiqiao Meng, Shangsong Liang, Lidong Bing.
<i>ICLR</i>, 2024.<br></font>
</p>
</li>
<li>
<p> <font size="3.5"><b>Multilingual Jailbreak Challenges in Large Language Models </b>
<a href="https://arxiv.org/abs/2310.06474">[paper]</a>
<a href="https://github.com/DAMO-NLP-SG/multilingual-safety-for-LLMs">[code]</a>.
Yue Deng, Wenxuan Zhang, Sinno Jialin Pan, Lidong Bing.
<i>ICLR</i>, 2024.<br></font>
</p>
</li>
<li>
<p> <font size="3.5"><b>Sentiment Analysis in the Era of Large Language Models: A Reality Check </b>
<a href="https://arxiv.org/abs/2305.15005">[paper]</a> <a href="https://github.com/DAMO-NLP-SG/LLM-Sentiment">[code]</a>.
Wenxuan Zhang, Yue Deng, Bing Liu, Sinno Jialin Pan, Lidong Bing.
<i>NAACL</i>, 2024.<br></font>
</p>
</li>
<li>
<p> <font size="3.5"><b>Large Language Models can Contrastively Refine their Generation for Better Sentence Representation Learning </b>
<a href="https://arxiv.org/abs/2310.10962">[paper]</a>
<a href="https://lidongbing.github.io/">[code]</a>.
Huiming Wang, Liying Cheng, Zhaodonghui Li, De Wen Soh, Lidong Bing.
<i>NAACL</i>, 2024.<br></font>
</p>
</li>
<li>
<p> <font size="3.5"><b>Auto Arena of LLMs: Automating LLM Evaluations with Agent Peer-battles and Committee Discussions </b>
<a href="https://arxiv.org/abs/2405.20267">[paper]</a>
<a href="https://auto-arena.github.io/">[web site]</a>
<a href="https://huggingface.co/spaces/Auto-Arena/Leaderboard">[leaderboard]</a>
<a href="https://github.com/DAMO-NLP-SG/Auto-Arena-LLMs">[code]</a>.
Ruochen Zhao, Wenxuan Zhang, Yew Ken Chia, Deli Zhao, Lidong Bing.
<i>Preprint arXiv:2405.20267</i>, 2024.<br></font>
</p>
</li>
<li>
<p> <font size="3.5"><b>VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs </b>
<a href="https://arxiv.org/abs/2406.07476">[paper]</a>
<a href="https://github.com/DAMO-NLP-SG/VideoLLaMA2">[code]</a>.
Zesen Cheng, Sicong Leng, Hang Zhang, Yifei Xin, Xin Li, Guanzheng Chen, Yongxin Zhu, Wenqi Zhang, Ziyang Luo, Deli Zhao, Lidong Bing.
<i>Preprint arXiv:2406.07476</i>, 2024.<br></font>
</p>
</li>
<li>
<p> <font size="3.5"><b>LLM-R2: A Large Language Model Enhanced Rule-based Rewrite System for Boosting Query Efficiency </b>
<a href="https://arxiv.org/abs/2404.12872">[paper]</a>
<a href="https://github.com/DAMO-NLP-SG/LLM-R2">[code]</a>.
Zhaodonghui Li, Haitao Yuan, Huiming Wang, Gao Cong, Lidong Bing.
<i>Preprint arXiv:2404.12872</i>, 2024.<br></font>
</p>
</li>
<li>
<p> <font size="3.5"><b>How do Large Language Models Handle Multilingualism? </b>
<a href="https://arxiv.org/abs/2402.18815">[paper]</a>
<a href="https://github.com/DAMO-NLP-SG/">[code]</a>.
Yiran Zhao, Wenxuan Zhang, Guizhen Chen, Kenji Kawaguchi, Lidong Bing.
<i>Preprint arXiv:2402.18815</i>, 2024.<br></font>
</p>
</li>
<li>
<p> <font size="3.5"><b>Evaluating Psychological Safety of Large Language Models </b>
<a href="https://arxiv.org/abs/2212.10529">[paper]</a>
<a href="https://github.com/DAMO-NLP-SG/">[code]</a>.
Xingxuan Li, Yutong Li, Lin Qiu, Shafiq Joty, Lidong Bing.
<i>Preprint arXiv:2212.10529</i>, 2024.<br></font>
</p>
</li>
<li>
<p> <font size="3.5"><b>AdaMergeX: Cross-Lingual Transfer with Large Language Models via Adaptive Adapter Merging </b>
<a href="https://arxiv.org/abs/2402.18913">[paper]</a>
<a href="https://github.com/DAMO-NLP-SG/AdaMergeX">[code]</a>.
Yiran Zhao, Wenxuan Zhang, Huiming Wang, Kenji Kawaguchi, Lidong Bing.
<i>Preprint arXiv:2402.18913</i>, 2024.<br></font>
</p>
</li>
<li>
<p> <font size="3.5"><b>ParaICL: Towards Robust Parallel In-Context Learning </b>
<a href="https://arxiv.org/abs/2404.00570">[paper]</a>
<a href="https://github.com/DAMO-NLP-SG/">[code]</a>.
Xingxuan Li, Xuan-Phi Nguyen, Shafiq Joty, Lidong Bing.
<i>Preprint arXiv:2404.00570</i>, 2024.<br></font>
</p>
</li>
<li>
<p> <font size="3.5"><b>Is Translation All You Need? A Study on Solving Multilingual Tasks with Large Language Models </b>
<a href="https://arxiv.org/abs/2403.10258">[paper]</a>
<a href="https://github.com/DAMO-NLP-SG/">[code]</a>.
Chaoqun Liu, Wenxuan Zhang, Yiran Zhao, Anh Tuan Luu, Lidong Bing.
<i>Preprint arXiv:2403.10258</i>, 2024.<br></font>
</p>
</li>
<li>
<p> <font size="3.5"><b>Contrastive Chain-of-Thought Prompting </b>
<a href="https://arxiv.org/abs/2311.09277">[paper]</a>
<a href="https://github.com/DAMO-NLP-SG/contrastive-cot">[code]</a>.
Yew Ken Chia, Guizhen Chen, Luu Anh Tuan, Soujanya Poria, Lidong Bing.
<i>Preprint arXiv:2311.09277</i>, 2023.<br></font>
</p>
</li>
<li>
<p> <font size="3.5"><b>Neuro-Symbolic Integration Brings Causal and Reliable Reasoning Proofs </b>
<a href="https://arxiv.org/abs/2311.09802">[paper]</a>
<a href="https://github.com/DAMO-NLP-SG/CaRing">[code]</a>.
Sen Yang, Xin Li, Leyang Cui, Lidong Bing, Wai Lam.
<i>Preprint arXiv:2311.09802</i>, 2023.<br></font>
</p>
</li>
<li>
<p> <font size="3.5"><b>Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding </b>
<a href="https://arxiv.org/abs/2306.02858">[paper]</a>
<a href="https://github.com/DAMO-NLP-SG/Video-LLaMA">[code]</a>
<a href="https://huggingface.co/spaces/DAMO-NLP-SG/Video-LLaMA">[demo (Hugging Face)]</a>
<a href="https://www.modelscope.cn/studios/damo/video-llama/summary">[demo (ModelScope)]</a>
<a href="https://huggingface.co/DAMO-NLP-SG/Video-LLaMA-Series">[checkpoints]</a>.
Hang Zhang, Xin Li, Lidong Bing.
<i>Demo Track of The Conference on Empirical Methods in Natural Language Processing (<b>EMNLP'23 Demo</b>)</i>, 2023.<br></font>
</p>
</li>
<li>
<p> <font size="3.5"><b>Once Upon a Time in Graph: Relative-Time Pretraining for Complex Temporal Reasoning </b>
<a href="https://lidongbing.github.io/">[paper]</a> <a href="https://lidongbing.github.io/">[code]</a>.
Sen Yang, Xin Li, Lidong Bing, Wai Lam.
<i>The Conference on Empirical Methods in Natural Language Processing (<b>EMNLP'23</b>)</i>, 2023.<br></font>
</p>
</li>
<li>
<p> <font size="3.5"><b>LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models </b>
<a href="https://arxiv.org/abs/2304.01933">[paper]</a> <a href="https://github.com/AGI-Edgerunners/LLM-Adapters">[code]</a>.
Zhiqiang Hu, Lei Wang, Yihuai Lan, Wanyu Xu, Ee-Peng Lim, Lidong Bing, Xing Xu, Soujanya Poria, Roy Ka-Wei Lee.
<i>The Conference on Empirical Methods in Natural Language Processing (<b>EMNLP'23</b>)</i>, 2023.<br></font>
</p>
</li>
<li>
<p> <font size="3.5"><b>Is GPT-4 a Good Data Analyst? </b>
<a href="https://arxiv.org/abs/2305.15038">[paper]</a> <a href="https://github.com/DAMO-NLP-SG/GPT4-as-DataAnalyst">[code]</a>.
Liying Cheng, Xingxuan Li, Lidong Bing.
<i>Findings of The Conference on Empirical Methods in Natural Language Processing (<b>Findings of EMNLP'23</b>)</i>, 2023.<br></font>
</p>
</li>
<li>
<p> <font size="3.5"><b>Large Language Models are Not Yet Human-Level Evaluators for Abstractive Summarization </b>
<a href="https://arxiv.org/abs/2305.13091">[paper]</a> <a href="https://github.com/DAMO-NLP-SG/LLM_summeval">[code]</a>.
Chenhui Shen, Liying Cheng, Yang You, Xuan-Phi Nguyen, Lidong Bing.
<i>Findings of The Conference on Empirical Methods in Natural Language Processing (<b>Findings of EMNLP'23</b>)</i>, 2023.<br></font>
</p>
</li>
<li>
<p> <font size="3.5"><b>
M3Exam: A Multilingual, Multimodal, Multilevel Benchmark for Examining Large Language Models </b>
<a href="https://arxiv.org/abs/2306.05179">[paper]</a>
<a href="https://github.com/DAMO-NLP-SG/M3Exam">[code]</a>.
Wenxuan Zhang, Sharifah Mahani Aljunied, Chang Gao, Yew Ken Chia, Lidong Bing.
<i>Advances in Neural Information Processing Systems 36 (<b>NeurIPS'23</b>)</i>, 2023.<br></font>
</p>
</li>
<li>
<p> <font size="3.5"><b>
From Cloze to Comprehension: Retrofitting Pre-trained Masked Language Model to Pre-trained Machine Reader </b>
<a href="https://arxiv.org/abs/2212.04755">[paper]</a>
<a href="https://github.com/DAMO-NLP-SG/PMR">[code]</a>.
Weiwen Xu, Xin Li, Wenxuan Zhang, Meng Zhou, Wai Lam, Luo Si, Lidong Bing.
<i>Advances in Neural Information Processing Systems 36 (<b>NeurIPS'23</b>)</i>, 2023.<br></font>
</p>
</li>
<li>
<p> <font size="3.5"><b>INSTRUCTEVAL: Towards Holistic Evaluation of Instruction-Tuned Large Language Models </b>
<a href="https://arxiv.org/abs/2306.04757">[paper]</a> <a href="https://github.com/declare-lab/instruct-eval">[code]</a>.
Yew Ken Chia, Pengfei Hong, Lidong Bing, Soujanya Poria.
<i>Preprint arXiv:2306.04757</i>, 2023.<br></font>
</p>
</li>
<li>
<p> <font size="3.5"><b>Can ChatGPT-like Generative Models Guarantee Factual Accuracy? On the Mistakes of New Generation Search Engines </b>
<a href="https://arxiv.org/abs/2304.11076">[paper]</a> [blog article
<a href="https://zhuanlan.zhihu.com/p/605915549"> (ZH)</a>,
<a href="https://dev.to/ruochenzhao3/can-chatgpt-like-generative-models-guarantee-factual-accuracy-on-the-mistakes-of-microsofts-new-bing-111b">(EN)</a>].
Ruochen Zhao, Xingxuan Li, Yew Ken Chia, Bosheng Ding, Lidong Bing. <i>Preprint arXiv:2304.11076</i>, 2023.
<i>Follow-up reports by <a href="https://www.cnn.com/2023/02/14/tech/microsoft-bing-ai-errors/index.html">CNN</a>,
<a href="https://www.cnbc.com/2023/02/14/microsoft-bing-ai-made-several-errors-in-launch-demo-last-week-.html">CNBC</a>,
<a href="https://fortune.com/2023/02/15/microsoft-bing-ai-errors-demo-google-bard-chatgpt/">FORTUNE, </a></i>etc.<br></font>
</p>
</li>
<li>
<p> <font size="3.5"><b>mPMR: A Multilingual Pre-trained Machine Reader at Scale </b>
<a href="https://aclanthology.org/2023.acl-short.131.pdf">[paper]</a> <a href="https://github.com/DAMO-NLP-SG/PMR">[code]</a>.
Weiwen Xu, Xin Li, Wai Lam, Lidong Bing.
<i>The 61st Annual Meeting of the Association for Computational Linguistics (ACL'23)</i>, 2023.<br></font>
</p>
</li>
<li>
<p> <font size="3.5"><b>Reasoning Implicit Sentiment with Chain-of-Thought Prompting </b>
<a href="https://aclanthology.org/2023.acl-short.101.pdf">[paper]</a> <a href="https://github.com/scofield7419/THOR-ISA">[code]</a>.
Hao Fei, Bobo Li, Qian Liu, Lidong Bing, Fei Li, Tat-Seng Chua.
<i>The 61st Annual Meeting of the Association for Computational Linguistics (ACL'23)</i>, 2023.<br></font>
</p>
</li>
<li>
<p> <font size="3.5"><b>Verify-and-Edit: A Knowledge-Enhanced Chain-of-Thought Framework </b>
<a href="https://aclanthology.org/2023.acl-long.320.pdf">[paper]</a> <a href="https://github.com/RuochenZhao/Verify-and-Edit">[code]</a>.
Ruochen Zhao, Xingxuan Li, Shafiq Joty, Chengwei Qin, Lidong Bing.
<i>The 61st Annual Meeting of the Association for Computational Linguistics (ACL'23)</i>, 2023.<br></font>
</p>
</li>
<li>
<p> <font size="3.5"><b>Is GPT-3 a Good Data Annotator? </b>
<a href="https://aclanthology.org/2023.acl-long.626.pdf">[paper]</a> <a href="https://github.com/DAMO-NLP-SG/LLM-Data-Annotator">[code]</a>.
Bosheng Ding, Chengwei Qin, Linlin Liu, Yew Ken Chia, Boyang Li, Shafiq Joty, Lidong Bing.
<i>The 61st Annual Meeting of the Association for Computational Linguistics (ACL'23)</i>, 2023.<br></font>
</p>
</li>
<li>
<p> <font size="3.5"><b>Improving Self-training for Cross-lingual Named Entity Recognition with Contrastive and Prototype Learning </b>
<a href="https://aclanthology.org/2023.acl-long.222.pdf">[paper]</a> <a href="https://github.com/DAMO-NLP-SG/ContProto">[code]</a>.
Ran Zhou, Xin Li, Lidong Bing, Erik Cambria, Chunyan Miao.
<i>The 61st Annual Meeting of the Association for Computational Linguistics (ACL'23)</i>, 2023.<br></font>
</p>
</li>
<li>
<p> <font size="3.5"><b>Bidirectional Generative Framework for Cross-domain Aspect-based Sentiment Analysis </b>
<a href="https://aclanthology.org/2023.acl-long.686.pdf">[paper]</a> <a href="https://github.com/DAMO-NLP-SG/BGCA">[code]</a>.
Yue Deng, Wenxuan Zhang, Sinno Jialin Pan, Lidong Bing.
<i>The 61st Annual Meeting of the Association for Computational Linguistics (ACL'23)</i>, 2023.<br></font>
</p>
</li>
<li>
<p> <font size="3.5"><b>Towards Benchmarking and Improving the Temporal Reasoning Capability of Large Language Models </b>
<a href="https://aclanthology.org/2023.acl-long.828.pdf">[paper]</a> <a href="https://github.com/DAMO-NLP-SG/TempReason">[code]</a>.
Qingyu Tan, Hwee Tou Ng, Lidong Bing.
<i>The 61st Annual Meeting of the Association for Computational Linguistics (ACL'23)</i>, 2023.<br></font>
</p>
</li>
</ul>
<h3><a name="Talks"></a>Talks</h3>
<ul>
<li> Nov 2023. Invited talk on "Research and Implementation of Large Language Models at Alibaba DAMO Academy" at <a href="https://wing-nus.github.io/ssnlp-2023/">SSNLP 2023</a>.</li>
<li> Mar-Apr 2023. Invited talk on "Towards Solving Low-resource & Multilingual NLP Problems and a Pilot Study with LLMs" at Nanjing, Zhejiang, Fudan, and Shanghai Jiao Tong universities.</li>
</ul>
<h3><a name="Professional_Activities"></a>Professional Service</h3>
<ul>
<li>
<p><b>Associate/Action Editor and Reviewer of journals</b>:<br>
Transactions of the Association for Computational Linguistics (TACL)<br>
ACM Transactions on Information Systems (TOIS)<br>
Computational Linguistics (CL)<br>
IEEE Transactions on Knowledge and Data Engineering (TKDE)<br>
ACM Transactions on the Web (TWEB)<br>
ACM Transactions on Intelligent Systems and Technology (ACM TIST)<br>
Neurocomputing<br>
Neural Networks<br>
Neural Computing and Applications (NCA)<br>
Knowledge-based Systems (KBS) <br>
Information Processing and Management (IPM) <br>
</p>
</li>
<li>
<p><b>Regular AC, SPC and PC of conferences</b>:<br>
The Annual Meeting of the Association for Computational Linguistics (ACL)<br>
The Conference on Empirical Methods in Natural Language Processing (EMNLP)<br>
The AAAI Conference on Artificial Intelligence (AAAI)<br>
The International Joint Conference on Artificial Intelligence (IJCAI) <br>
The Conference on Neural Information Processing Systems (NeurIPS) <br>
The International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR)<br>
The International World Wide Web Conference (WWW) <br>
The ACM International Conference on Information and Knowledge Management (CIKM) <br>
</p>
</li>
</ul>
</div>
</div>
</main>
</div>
</body>
</html>