---
# Documentation: https://wowchemy.com/docs/managing-content/

title: "Seminar: \"The 'BERT moment' for Music - MERT: Large-Scale Self-supervised Training for Acoustic Music Understanding\""
# event:
# event_url:
location: Abacws
# address:
#   street:
#   city:
#   region:
#   postcode:
#   country:
summary: Talk by [Chenghua Lin](https://chenghua-lin.github.io/) (University of Manchester)
abstract: "In this talk, I will first introduce MERT, an acoustic music understanding model based on large-scale self-supervised training, akin to the “BERT moment” in NLP, but for music. We have successfully trained a family of MERT models (with sizes of 95M, 330M, and 1B parameters), which demonstrated excellent performance on 14 Music Information Retrieval (MIR) tasks. Since its release on [HuggingFace in mid-2023](https://huggingface.co/m-a-p/MERT-v1-330M), MERT has received over 500K downloads. To address the significant absence of a universal and community-driven benchmark for music understanding, we further developed [MARBLE](https://marble-bm.shef.ac.uk), a universal MIR benchmark. MARBLE facilitates the benchmarking of pre-trained music models on 18 tasks (with more being added) across 12 publicly available datasets, offering an easy-to-use, extendable, and reproducible evaluation suite for this burgeoning community."

# Talk start and end times.
# End time can optionally be hidden by prefixing the line with `#`.
date: 2024-11-21T13:00:00Z
date_end: 2024-11-21T14:00:00Z
all_day: false

# Schedule page publish date (NOT event date).
publishDate: 2024-11-08T00:00:00Z

authors: [camachocolladosj]
tags: []

# Is this a featured event? (true/false)
featured: false

# Featured image
# To use, add an image named `featured.jpg/png` to your page's folder.
# Focal points: Smart, Center, TopLeft, Top, TopRight, Left, Right, BottomLeft, Bottom, BottomRight.
image:
  caption: ""
  focal_point: ""
  preview_only: false

# Custom links (optional).
# Uncomment and edit lines below to show custom links.
# links:
# - name: Follow
#   url: https://twitter.com
#   icon_pack: fab
#   icon: twitter

# Optional filename of your slides within your event's folder or a URL.
url_slides:

url_code:
url_pdf:
url_video:

# Markdown Slides (optional).
# Associate this event with Markdown slides.
# Simply enter your slide deck's filename without extension.
# E.g. `slides = "example-slides"` references `content/slides/example-slides.md`.
# Otherwise, set `slides = ""`.
slides: ""

# Projects (optional).
# Associate this post with one or more of your projects.
# Simply enter your project's folder or file name without extension.
# E.g. `projects = ["internal-project"]` references `content/project/internal-project/index.md`.
# Otherwise, set `projects = []`.
projects: []
---

**Invited Speaker:** [Chenghua Lin](https://chenghua-lin.github.io/) (University of Manchester)

**Bio:**
Chenghua Lin is Professor of Natural Language Processing in the Department of Computer Science at The University of Manchester. His research focuses on integrating NLP and machine learning for language generation and understanding, with current key interests including AI for science, robustness in LLMs, evaluation methods and benchmarks, metaphor processing, and representation learning for music. He currently serves as Secretary of the ACL SIGGEN Board and as a member of the IEEE Speech and Language Processing Technical Committee, and is a founding advisor for the Multimodal Art Projection community. He has received several awards for his research, including the CIKM Test-of-Time Award and the INLG Best Paper Runner-up Award. He has also held numerous program and chairing roles for *ACL conferences, and is the lead organiser of the 1st and 2nd editions of the Lay Summarisation shared task, co-located with the BioNLP Workshop.
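
For attendees who want to try MERT before the talk, below is a minimal sketch of loading the released checkpoint from HuggingFace and extracting layer-wise features with the `transformers` library. The loading pattern (an `AutoModel` with `trust_remote_code=True`, a `Wav2Vec2FeatureExtractor`, and 24 kHz mono input) follows our reading of the model card; treat these details as assumptions and defer to the [model card](https://huggingface.co/m-a-p/MERT-v1-330M) for the authoritative usage.

```python
# Minimal sketch: extracting MERT features from a mono audio clip.
# Assumptions (verify against the model card): the checkpoint ships custom
# modelling code (hence trust_remote_code=True), pairs with a
# Wav2Vec2FeatureExtractor, and expects 24 kHz mono audio.
import torch
from transformers import AutoModel, Wav2Vec2FeatureExtractor

model = AutoModel.from_pretrained("m-a-p/MERT-v1-330M", trust_remote_code=True)
processor = Wav2Vec2FeatureExtractor.from_pretrained(
    "m-a-p/MERT-v1-330M", trust_remote_code=True
)

sample_rate = processor.sampling_rate          # assumed 24000 for MERT-v1
audio = torch.randn(5 * sample_rate)           # placeholder: 5 s of random audio

inputs = processor(audio.numpy(), sampling_rate=sample_rate, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# Stack all transformer layers; downstream MIR probes often take a
# (learned) weighted average over layers rather than just the last one.
layers = torch.stack(outputs.hidden_states)    # (n_layers+1, batch, time, dim)
clip_embedding = layers.mean(dim=(0, 2))       # simple layer- and time-average
print(clip_embedding.shape)                    # e.g. (1, 1024) for the 330M model
```

The layer-averaging step is only an illustration: the MERT and MARBLE papers evaluate features probed from individual or weighted layers, so substitute whichever pooling your downstream MIR task calls for.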