Reading Axona data #956

Closed · sbuergers opened this issue Mar 7, 2021 · 5 comments

@sbuergers (Contributor)

Hi,

I am with CatalystNeuro and we want to build support for reading Axona data files with python-neo, and then write a wrapper for spikeextractors using the python-neo implementation.

I can share some example files if requested.

I will start working on this here: https://github.com/catalystneuro/python-neo/tree/axonarawio

A useful resource for reading continuous raw data from Axona .bin files is this script by the Hussaini Lab (Geoff Barrett): https://github.com/HussainiLab/BinConverter/blob/master/BinConverter/core/readBin.py
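From a quick look at that script, the .bin file appears to be a flat sequence of fixed-size packets. The layout below (432 bytes per packet: 32-byte header, 3 × 64 little-endian int16 samples, 16-byte footer) and the within-packet channel ordering are my reading of readBin.py, so they should be verified. A minimal sketch:

```python
import numpy as np

# Assumed .bin packet layout (verify against readBin.py / the manual):
# 32-byte header, 3 samples x 64 channels of little-endian int16,
# 16-byte footer => 432 bytes per packet.
PACKET_DTYPE = np.dtype([
    ('header', 'S32'),
    ('data', '<i2', (3 * 64,)),
    ('footer', 'S16'),
])


def read_bin_samples(path):
    """Return raw samples as an (n_samples, 64) int16 array.

    The within-packet ordering is assumed to be sample-major
    (ch0..ch63 repeated three times); the real interleaving may
    differ, so check against readBin.py.
    """
    packets = np.memmap(path, dtype=PACKET_DTYPE, mode='r')
    return np.asarray(packets['data']).reshape(-1, 64)
```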

For the moment I will focus on the raw continuous data (.bin files containing the raw data and .set files with metadata about the recording setup), but the TINT data formats should be readable as well (e.g. .eeg, .eeg1, .1, .egf, .egfX, .pos, .cut and .set files).
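The .set files look like plain ASCII with one space-separated key/value pair per line, so a first pass at parsing them could be as simple as the following (the key name in the usage comment is an example and not guaranteed to be present):

```python
def parse_set_file(path):
    """Read an Axona .set file into a dict of {key: value-string}.

    Assumes one 'key value' pair per line, split on the first space;
    lines without a value map to an empty string.
    """
    params = {}
    with open(path, 'r', encoding='latin-1') as f:
        for line in f:
            line = line.rstrip('\r\n')
            if not line:
                continue
            key, _, value = line.partition(' ')
            params[key] = value
    return params


# Hypothetical usage (key names depend on the acquisition settings):
# params = parse_set_file('recording.set')
# sample_rate = int(params.get('rawRate', 48000))
```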

Additional considerations

We also want to be able to perform some preprocessing and spike sorting using SpikeInterface and subsequently export the data to TINT format. I am not sure whether it is a good idea to interface with python-neo for this as well, and I am happy to discuss options.

Since I am completely new to python-neo, any tips or hints will be appreciated. :-)

Cheers,
Steffen.

@samuelgarcia (Contributor)

Hi Steffen,
yes, making this rawio in neo is a good idea.
Making the wrapper in spikeextractors will then be super easy.
If the file is "raw" based, the code should be super easy (if the header is not too complex).

I can help you. Do you have a small test example file that could be made public?
We need to deposit a public testing file here.

The branch you mention is a copy/paste of the example at the moment.
If you send me a file I can help make the skeleton.
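To give an idea, a skeleton along these lines could look like this (method names follow neo's BaseRawIO conventions; everything here is a placeholder, not the actual implementation):

```python
from neo.rawio.baserawio import BaseRawIO


class AxonaRawIO(BaseRawIO):
    """Placeholder skeleton for an Axona rawio, not the real thing."""

    extensions = ['bin', 'set']
    rawmode = 'one-file'

    def __init__(self, filename=''):
        BaseRawIO.__init__(self)
        self.filename = filename

    def _source_name(self):
        return self.filename

    def _parse_header(self):
        # Read the .set metadata and the .bin layout here, then fill
        # self.header with the block/segment/channel description that
        # BaseRawIO expects (see baserawio.py for the exact fields).
        raise NotImplementedError

    def _segment_t_start(self, block_index, seg_index):
        raise NotImplementedError

    def _segment_t_stop(self, block_index, seg_index):
        raise NotImplementedError

    def _get_analogsignal_chunk(self, block_index, seg_index,
                                i_start, i_stop, *args, **kwargs):
        # Memory-map the .bin file and return the requested slice
        # as a (samples x channels) numpy array.
        raise NotImplementedError
```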

@sbuergers (Contributor, Author)

Hi Samuel,

Thanks for the support. We requested a small example file that could be made public. I will let you know once that is available.

@apdavison added this to the 0.10.0 milestone · Mar 18, 2021
@JuliaSprenger (Member)

The files should be available now via https://gin.g-node.org/NeuralEnsemble/ephy_testing_data/pulls/41

@sbuergers (Contributor, Author) commented Mar 22, 2021

As I mentioned in #958 (comment), my first draft for the continuous Axona data (considering only .set and .bin files) is ready for review.

However, I am not 100% sure what the best way is to also include spiking data, and ideally also LFP and position data, though I want to focus on spiking data for now.

This concerns the following data formats:
.X (.1, .2, .3, ..., .N) --> tetrode 1, 2, 3, ..., N files containing only data around spike events (-200 µs to +800 µs relative to a threshold crossing). The sampling rate here is still 48000 Hz. There is no unit information yet (see the reader sketch below).
.cut --> results from spike sorting
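For reference, a rough sketch of reading a tetrode .X file might look like the following; the header keys and record layout (a 4-byte big-endian timestamp plus int8 waveform samples per channel) are my reading of the TINT format description and pyxona, so please double-check them:

```python
import numpy as np


def read_tetrode_file(path):
    """Rough sketch of a TINT tetrode (.1, .2, ...) reader.

    Assumed layout: a text header ending in 'data_start', then one
    record per spike and channel consisting of a 4-byte big-endian
    timestamp followed by 'samples_per_spike' int8 samples.
    """
    with open(path, 'rb') as f:
        raw = f.read()
    marker = b'data_start'
    start = raw.index(marker)

    # Parse the text header into a dict of {key: value-string}.
    header = {}
    for line in raw[:start].decode('latin-1').splitlines():
        key, _, value = line.partition(' ')
        if key:
            header[key] = value

    n_spikes = int(header['num_spikes'])
    n_chan = int(header.get('num_chans', 4))
    n_samp = int(header.get('samples_per_spike', 50))

    rec = np.dtype([('timestamp', '>i4'), ('waveform', 'i1', (n_samp,))])
    data = np.frombuffer(raw, dtype=rec, count=n_spikes * n_chan,
                         offset=start + len(marker))
    timestamps = data['timestamp'].reshape(n_spikes, n_chan)[:, 0]
    waveforms = data['waveform'].reshape(n_spikes, n_chan, n_samp)
    return header, timestamps, waveforms
```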

The .X files would potentially be well placed as the analog signal, including segments for the discontinuities in time, so one option might be to let the user choose whether .bin or .X files should be used. Alternatively, one could make separate rawios for continuous and discontinuous data. What do you think?

The .cut file information is currently only useful together with .X data, since the .bin data has not been thresholded. Maybe that is another argument for having two separate rawios.

Additional data formats that should ideally be incorporated:
.eeg or .egf --> LFP data (the difference between .eeg and .egf seems to be mainly the sampling rate)
.pos --> information about head position from video tracking

Actually, Alessio has already created a draft for a pyxona wrapper in spikeextractors: https://github.com/catalystneuro/spikeextractors/tree/axonaunitextractor

I also saw that there has been some speculation about integrating pyxona with neo at some point (CINPLA/pyxona#13). So maybe we can also try to add an axonaunitrawio based on pyxona, with a few improvements.

@sbuergers mentioned this issue · Mar 22, 2021
@sbuergers (Contributor, Author)

I created a new PR for including video tracking data of the animal's position in axonarawio.py: #980 (comment)

In addition to ecephys data, the Axona system saves video tracking data in the .bin files containing the raw data. Each data packet contains a header, a footer, and 3 samples of ecephys data for each channel. The header includes a flag, either ADU1 or ADU2, where ADU2 denotes that video tracking data is available in this packet; that data is then located in the header.

To quote from the file format manual:

Each position sample is 20 bytes long, and consists of a 4-byte frame counter (incremented at around 50 Hz, according to the camera sync signal), and then 8 2-byte words. In four-spot mode, the 8 words are redx, redy, greenx, greeny, bluex, bluey, whitex, whitey. In two-spot mode, they are big_spotx, big_spoty, little_spotx, little_spoty, number_of_pixels_in_big_spot, number_of_pixels_in_little_spot, total_tracked_pixels, and the 8th word is unused. Each word is MSB-first. If a position wasn't tracked (e.g., the light was obscured), then the values for x and y will both be 0x3ff (= 1023).
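Following the quoted description, a decoder for a single 20-byte position sample (four-spot mode) might look like this; note that the endianness of the frame counter and the exact offset of the sample inside an ADU2 packet header are assumptions on my part:

```python
import struct

UNTRACKED = 0x3FF  # manual: x and y are both 0x3ff when not tracked


def decode_position_sample(buf):
    """Decode one 20-byte position sample (four-spot mode).

    Per the manual: a 4-byte frame counter, then 8 MSB-first
    (big-endian) 2-byte words. Frame counter endianness is assumed
    to also be big-endian.
    """
    frame_counter, = struct.unpack_from('>I', buf, 0)
    words = struct.unpack_from('>8H', buf, 4)
    names = ('redx', 'redy', 'greenx', 'greeny',
             'bluex', 'bluey', 'whitex', 'whitey')
    sample = dict(zip(names, words))
    sample['frame_counter'] = frame_counter
    # Mark untracked coordinates (0x3ff) as None for convenience.
    for k in names:
        if sample[k] == UNTRACKED:
            sample[k] = None
    return sample
```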
