Contribute set of threat intelligence experiments #3
Hi,
First of all, thanks for sharing your work @Cyb3rWard0g! It has been very inspiring and prompted me to look into this area myself. Based on your research I have been experimenting, and I would like to share some of the results with you to give back to the community and, hopefully, receive feedback to improve on the idea. This was my first experience using LLMs in programming, so it has already been very insightful, but it also means there is much more to learn.
My experiments are the following:
I believe these experiments provide additional insight into both the possibilities and the limitations of these use cases. Hope to hear from you soon.
Note: as you might have noticed, I'm new to contributing to public repositories. I have been working in a separate repo, which is why this commit is so large; most of the changes are documents, though. Apologies if I'm not following some processes or quality standards; if that's the case, please let me know so I can learn and improve!