Congratulations on finishing Levels 1 and 2 🥳🥳! By now you should have chosen a focus question and created a model to solve that question. In Level 3, we will be evaluating this model!
This is the final step in creating a model! Just as different algorithms are better suited to specific focus questions, certain evaluation metrics are better suited to different focus question categories.
As in Levels 1 and 2, you should write a README file that answers these questions:
- What evaluation metrics did you choose to use?
- What was the score you got for each of these metrics?
- Mention any resources you used to help you in this portion of the challenge (we love links!).
- Any extra information you'd like to include
In this folder, put the answers from the objective section in a file called '[name].md' (with the name replaced with yours, e.g. janeDoe.md). The code you use to evaluate your model should be located in the Notebook(s) folder; you may title this notebook as you wish.
+1 point for explaining why a specific evaluation metric is useful for the focus question you chose
+1 point for describing what an evaluation metric means in real world terms (in context of your focus question)
+1 point for very nice code comments (up to the grader's discretion)
How do I start?? For optimal Google search results, try searching '(focus question category) evaluation metrics', e.g. 'classification metrics' hint hint
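If your focus question is a classification problem, the metrics below are a common starting point. This is a minimal sketch using made-up labels and predictions (the `y_true`/`y_pred` values are just an illustration, not from any real dataset), computed by hand so you can see what each metric actually counts:

```python
# Hypothetical labels (1 = positive class) and model predictions.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# Count the four outcomes of a binary prediction.
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # true negatives

accuracy = (tp + tn) / len(y_true)   # fraction of all predictions that were correct
precision = tp / (tp + fp)           # of everything predicted positive, how much was right
recall = tp / (tp + fn)              # of everything actually positive, how much we caught
f1 = 2 * precision * recall / (precision + recall)  # balance of precision and recall

print(accuracy, precision, recall, f1)  # all 0.75 for this toy example
```

In practice you would likely use ready-made functions such as `accuracy_score`, `precision_score`, `recall_score`, and `f1_score` from `sklearn.metrics` rather than computing these by hand, and regression-style focus questions would call for different metrics (e.g. mean squared error) instead.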
I'm still confused! No worries! If a cursory Google search doesn't help, feel free to reach out to us and we'll be happy to help you out. This is the hardest part of the coding challenge and we want to make sure you have all the resources you need!