inverse_cochlea can reconstruct sounds from the activity of auditory nerve fibers using artificial neural networks:
__|______|______|____      +-----------+
_|________|______|___   -->|  Inverse  |     .-.     .-.     .-.
___|______|____|_____   -->|           |--> /   \   /   \   /   \
__|______|______|____   -->|  Cochlea  |        '-'     '-'
                           +-----------+
      ANF activity                               Sound
Rudnicki M, Zuffo MK and Hemmert W (2012). Sound Decoding from Auditory Nerve Activity. Front. Comput. Neurosci. Conference Abstract: Bernstein Conference 2012. doi: 10.3389/conf.fncom.2012.55.00092
The direct reconstruction with an artificial neural network (suitable for frequencies up to 2 kHz) is implemented in inverse_cochlea.MlpReconstructor. The reconstruction using a combination of an artificial neural network and an inverse spectrogram is implemented in inverse_cochlea.ISgramReconstructor.
Both reconstructor classes are configured through their constructor parameters and provide train() and run() methods.
To see how to use the package, have a look at the scripts in the examples directory.
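A minimal sketch of that workflow (configure, train, run) with MlpReconstructor is shown below. The constructor keyword and the argument lists of train() and run() are assumptions made for illustration; the example scripts demonstrate the actual signatures:

    # Workflow sketch only: the constructor keyword and the train()/run()
    # arguments are assumed, not taken from the package itself.
    import numpy as np

    import inverse_cochlea

    fs = 16e3                            # sampling rate of the sound [Hz]
    sound = np.random.randn(int(fs))     # 1 s of placeholder training audio

    # ANF activity matching `sound`, e.g. simulated with the `cochlea`
    # package; left as a placeholder here.
    anf = None

    # Configure the reconstructor through its constructor parameters
    # (the keyword below is illustrative only).
    mlp = inverse_cochlea.MlpReconstructor(fs=fs)

    # Train on matching (ANF activity, sound) data, then decode ANF
    # activity back into a sound waveform.
    mlp.train(anf, sound)
    reconstruction = mlp.run(anf)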
Requirements:

- Python 2.7
- Numpy
- Scipy
- Pandas
- joblib
- cochlea
For the MlpReconstructor:
- ffnet
For the ISgramReconstructor:
- oct2py or pytave
- GNU Octave with the ltfat toolbox installed
The project is licensed under the GNU General Public License v3 or later (GPLv3+).