
Need to add more testing #32

Open · srz2 opened this issue Dec 20, 2020 · 2 comments
Labels: enhancement (Small new feature or request), help wanted (Extra attention is needed), testing (Issue is related to testing)

Comments

srz2 (Owner) commented Dec 20, 2020

Most of the codebase is not tested, but the structure for unit testing has been added. It just needs to be built upon.

srz2 added the enhancement, help wanted, and testing labels on Dec 20, 2020
pillemer commented

Hey, I'd love to help with this issue. I don't have much experience with creating tests, though, so could you give me some direction on how or where to start?

srz2 (Owner, Author) commented Jan 2, 2021

Hi @pillemer, sorry for taking so long to respond. I was in the middle of a major structural change, which I just finished. I wanted to complete that before getting back to you, since it would have been hard to merge changes otherwise.

I'm not sure if you're still interested in helping, but if you are, feel free to tackle this.

Basically, what I'm looking for is to expand the test folder. It's supposed to contain files of test classes that exercise different aspects of the codebase. I have written two so far.

If you want a good starting point, try tackling one of the following (there's a rough sketch after this list):

  • create_link_reference(): Make sure the returned text matches a regex for something like: - [Google][https://google.com]
  • check_file_exists(): I have my own function that checks whether a file exists and prints a message
  • confirm 'ver' argument behavior: This might be harder to implement, but it confirms that when the 'ver' argument is passed to the bot, the version number is output
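If it helps, here's a minimal sketch of what one of these test classes could look like, assuming the project uses Python's built-in unittest framework; the import path, function signatures, and return values below are guesses, so adjust them to match the actual code:

```python
import unittest

# Hypothetical import path -- adjust to wherever these helpers actually live in the repo.
from bot.helpers import create_link_reference, check_file_exists


class TestCreateLinkReference(unittest.TestCase):
    def test_returns_expected_format(self):
        # Assumed signature: create_link_reference(title, url) -> str
        result = create_link_reference("Google", "https://google.com")
        # Expect output shaped like: - [Google][https://google.com]
        self.assertRegex(result, r"^- \[.+\]\[https?://\S+\]$")


class TestCheckFileExists(unittest.TestCase):
    def test_missing_file_returns_false(self):
        # Assumed behavior: returns False (and prints a message) when the path doesn't exist.
        self.assertFalse(check_file_exists("definitely/not/a/real/file.txt"))


if __name__ == "__main__":
    unittest.main()
```

With something like that in place, the whole folder can be run with python -m unittest discover -s test (assuming the folder is named test).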

You can start with the items listed above, but ideally I want to get 95% of the application tested in this manner. The main advantage is that when changes are made, these tests run automatically each time, which ensures core functionality doesn't break across changes.

Let me know if you have any questions. Good Luck!
