| Name | Registration number |
|---|---|
| Boura Tatiana | MTN2210 |
| Sideras Andreas | MTN2214 |
- Report.pdf - The report of our project.
- Presentation.pdf - The presentation of our project.
- the-circor-digiscope-phonocardiogram-dataset-1.0.3 - The dataset.
- The_CirCor_DigiScope_Dataset.pdf - The dataset's paper.
- requirements.txt - The Python package requirements.
- notebooks - Folder that includes:
The Jupyter notebooks:
- Feature_Extraction_&_Demonstration.ipynb, where feature extraction and demonstration are performed.
- Feature_Selection.ipynb, where the train, validation, and test sets are created and feature selection is performed.
- murmur_classification.ipynb, where we select, train, and evaluate the ML model.
The Python modules:
- data_loader_ML.py, used by Feature_Extraction_&_Demonstration.ipynb to load the dataset.
- feature_extraction_ML.py, used by Feature_Extraction_&_Demonstration.ipynb to extract the audio features.
The dataset containing the features extracted by Feature_Extraction_&_Demonstration.ipynb:
- murmor_dataset.csv
A script, demo_murmur.py, that runs a server on localhost for demonstration purposes. After installing the packages in requirements.txt, run demo_murmur.py without any arguments.
The following two pickle files, which store the selected model and the standard scaler respectively; both are created by murmur_classification.ipynb:
- final_model.pkl
- scaler.pkl
- train_val_test_datasets - Folder where the train, validation, and test sets are stored by Feature_Selection.ipynb.
- important_features - Folder that includes .txt files where each feature selection method stores its most important features, as produced by Feature_Selection.ipynb.
- classifiers_results - Folder that includes .txt files where murmur_classification.ipynb stores, for each feature selection method, the selected models' classification results.
In order to run the whole process, you should execute the notebooks,
- Feature_Extraction_&_Demonstration.ipynb
- Feature_Selection.ipynb
- murmur_classification.ipynb
in the given order.
However, every notebook can also be executed separately.
Alternatively, you can just run the demo, demo_murmur.py (check requirements.txt), where you can manually select .wav audio files for the four regions (AV, MV, PV, TV) from the folder training_data and test our final model.
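The steps above can be sketched as shell commands (using `jupyter nbconvert --execute` to run the notebooks headlessly is our suggestion, not something the repository prescribes):

```shell
# Install the dependencies first
pip install -r requirements.txt

# Run the notebooks in the given order
jupyter nbconvert --to notebook --execute "Feature_Extraction_&_Demonstration.ipynb"
jupyter nbconvert --to notebook --execute Feature_Selection.ipynb
jupyter nbconvert --to notebook --execute murmur_classification.ipynb

# Or, instead, launch the demo server on localhost
python demo_murmur.py
```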