# Hamburglar

There are two versions of Hamburglar: full and lite. The main branch is the full version, and Hamburglar Lite lives on a separate branch.

Full-fledged scraping tool for artifact retrieval from multiple sources. There are some dependencies, so install them first:

```shell
pip3 install -r requirements.txt
```
## Hamburglar Lite

Multithreaded, recursive directory scraping script that stores useful information as filepath/finding pairs. Hamburglar Lite will never require external packages and will always remain a single script, so setup is as simple as downloading the file and running it:

```shell
wget https://raw.githubusercontent.com/needmorecowbell/Hamburglar/hamburglar-lite/hamburglar-lite.py
```

It is designed to be quickly downloaded and executed on a machine.
## Usage

```
usage: hamburglar.py [-h] [-g] [-x] [-v] [-w] [-o FILE] [-y YARA] path

positional arguments:
  path                  path to directory, url, or file, depending on flag used

optional arguments:
  -h, --help            show this help message and exit
  -g, --git             sets hamburglar into git mode
  -x, --hexdump         give hexdump of file
  -v, --verbose         increase output verbosity
  -w, --web             sets Hamburglar to web request mode, enter url as path
  -o FILE, --out FILE   write results to FILE
  -y YARA, --yara YARA  use yara ruleset for checking
```
## Directory Traversal

```shell
python3 hamburglar.py ~/Directory/
```

- This recursively scans for files in the given directory, then analyzes each file for a variety of findings using regex filters
## Single File Analysis

```shell
python3 hamburglar.py ~/Directory/file.txt
```

- This analyzes the single given file for a variety of findings using regex filters
## YARA Rule Based Analysis

```shell
python3 hamburglar.py -y rules/ ~/Directory
```

- This compiles the yara rule files in the `rules` directory and then checks them against every item in `Directory`.
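As an illustration of what the `rules/` directory might contain, here is a hypothetical rule (the rule name, description, and regex are examples, not ones shipped with Hamburglar):

```
// Hypothetical rule; any rule files in rules/ are compiled the same way.
rule SuspectedAPIKey
{
    meta:
        description = "Flags hard-coded API key assignments"
    strings:
        $key = /api[_-]?key\s*[:=]\s*['"][A-Za-z0-9]{16,}['"]/ nocase
    condition:
        $key
}
```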
## Git Scraping Mode

```shell
python3 hamburglar.py -g https://www.github.com/needmorecowbell/Hamburglar
```

- Adding `-y <rulepath>` will allow the repo to be scraped using yara rules
## Web Request Mode

```shell
python3 hamburglar.py -w https://google.com
```

- Adding `-w` tells the script to handle the path as a url
- Currently this does not spider the page; it just analyzes the requested html content
## Hex Dump Mode

```shell
python3 hamburglar.py -x ~/file-to-dump
```

- This just does a hex dump and nothing more right now -- could be piped into a file
- This will eventually be used for binary analysis
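For reference, a minimal hex dump looks like the sketch below: offset column, hex bytes, and an ASCII column with non-printable bytes shown as dots. This is a generic illustration, not Hamburglar's exact output format:

```python
def hexdump(data, width=16):
    """Yield hexdump lines: offset, hex bytes, then an ASCII column."""
    for off in range(0, len(data), width):
        chunk = data[off:off + width]
        hexpart = " ".join(f"{b:02x}" for b in chunk)
        # Replace non-printable bytes with '.' in the ASCII column.
        ascii_part = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
        yield f"{off:08x}  {hexpart:<{width * 3 - 1}}  {ascii_part}"

def hexdump_file(path):
    """Dump a whole file; the result can be written or piped elsewhere."""
    with open(path, "rb") as f:
        return "\n".join(hexdump(f.read()))
```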
## Tips

- Adding `-v` will set the script into verbose mode, and `-h` will show details of the available arguments
- Adding `-o FILENAME` will set the results filename, which is especially useful in scripting situations where you might want multiple results tables (e.g. github repo spidering)
## Settings

- `whitelistOn`: turns whitelist checking on or off
- `maxWorkers`: number of worker threads to run concurrently when reading the file stack
- `whitelist`: list of files or directories to exclusively scan (if `whitelistOn=True`)
- `blacklist`: list of files, extensions, or directories to block in the scan
- `regexList`: dictionary of regex filters with the filter type as the key
These filters can identify findings such as:

- ipv4 addresses (public and local)
- emails
- private keys
- urls
- cryptocurrency addresses
- anything you can imagine using regex filters and yara rules
## Example Output

```json
{
  "/home/adam/Dev/test/email.txt": {
    "emails": "{'[email protected]'}"
  },
  "/home/adam/Dev/test/email2.txt": {
    "emails": "{'[email protected]'}"
  },
  "/home/adam/Dev/test/ips.txt": {
    "ipv4": "{'10.0.11.2', '192.168.1.1'}"
  },
  "/home/adam/Dev/test/test2/email.txt": {
    "emails": "{'[email protected]', '[email protected]'}"
  },
  "/home/adam/Dev/test/test2/ips.txt": {
    "ipv4": "{'10.0.11.2', '192.168.1.1'}"
  },
  "/home/adam/Dev/test/test2/links.txt": {
    "site": "{'http://login.web.com'}"
  }
}
```
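Because results are keyed by filepath and each value holds stringified sets (as shown above), a results file written with `-o` is easy to post-process. A small hypothetical helper that aggregates one finding type across all scanned files:

```python
import ast
import json

def collect(results_json, finding_type):
    """Gather every finding of one type across all scanned files.

    Hypothetical helper, not part of Hamburglar itself; it assumes the
    JSON shape shown above, where each value is a stringified Python
    set like "{'[email protected]'}".
    """
    results = json.loads(results_json)
    found = set()
    for findings in results.values():
        if finding_type in findings:
            # ast.literal_eval safely parses the stringified set.
            found |= ast.literal_eval(findings[finding_type])
    return found
```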
## Contributing

- Please contribute! If there's an error, let me know -- even better if you can fix it :)
- A big thank you to anyone who has helped:
  - [adi928](https://github.com/adi928)
  - [jaeger-2601](https://github.com/jaeger-2601)
  - [tijko](https://github.com/tijko)