README_postrelease.md
Overview

The Unsupervised Modeling of Brain Activity (UMBrA) benchmark aims to evaluate models of neural population state. Participating models should take multi-channel spiking activity as input and produce firing rate estimates as output. The benchmark will be released in August 2021.
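As a concrete sketch of this input/output contract, the snippet below shows a toy baseline that converts binned spike counts into smoothed firing-rate estimates. The function name, array shapes, bin width, and boxcar smoother are illustrative assumptions, not part of the benchmark specification:

```python
import numpy as np

def estimate_rates(spikes, bin_width_s=0.005):
    """Hypothetical baseline: smooth binned spike counts into firing rates.

    `spikes` is a (time_bins, channels) array of spike counts; the
    benchmark's actual data format may differ.
    """
    kernel = np.ones(5) / 5.0  # 5-bin boxcar smoother
    smoothed = np.apply_along_axis(
        lambda ch: np.convolve(ch, kernel, mode="same"), 0, spikes
    )
    return smoothed / bin_width_s  # counts per bin -> spikes per second

# Toy input: 100 time bins, 8 channels of Poisson spike counts.
rng = np.random.default_rng(0)
spikes = rng.poisson(0.1, size=(100, 8))
rates = estimate_rates(spikes)  # same shape as input: (100, 8)
```

A submitted model would replace the boxcar smoother with a learned estimator, but the overall shape of the interface (spike counts in, per-channel rate estimates out) is the same.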

This first iteration of the benchmark will consist of a collection of 3-4 datasets of neural spiking activity and a public challenge hosted on EvalAI. While the benchmark will be available indefinitely, the challenge will close, and winners will be announced, in Fall 2021.

Pointers

FAQ

How do I submit a model to the benchmark?

Please note that the benchmark will open in August 2021. We are hosting our challenge on EvalAI, a platform for evaluating AI models. On the platform, you can choose to make private or public submissions to any or all of the individual datasets.

Can I view the leaderboard without submitting?

Yes, the full leaderboard will be available on this website indefinitely (courtesy of EvalAI), and EvalAI is also synced with Papers With Code.

Is there a deadline?

The benchmark and its leaderboard can be submitted to indefinitely on EvalAI as a resource for the community. However, the winners of the challenge will be determined from the leaderboard in Fall 2021, and prizes will be distributed to the winners at that time.