Performance Tips
It is highly recommended to configure the indexTemp folder on an SSD (in LocalConfig.txt), if possible on a different partition or disk than your operating system, free of antivirus scanning, system indexing, or shadow copy services.
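As a minimal sketch, the setting is a single entry in LocalConfig.txt; the exact key names, comments and defaults may vary between versions, and the path below is just an example:

```
# LocalConfig.txt (illustrative excerpt; check the comments in your own file)
# Point the index/temp folder to a fast SSD outside the OS partition
indexTemp = D:/IPED/temp
# Hint that the temp folder is on an SSD so IPED can tune itself accordingly
indexTempOnSSD = true
```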
Use different disks for the temp folder, the output directory and the forensic images. If possible, put your forensic images on a non-parity RAID disk array.
We recommend a minimum of 8GB of RAM to use the analysis interface, but do not expect good performance with that. For processing, a minimum of 16GB of RAM is required, and 32GB or more is recommended. Do not allocate all your memory to IPED processing with the -Xms/-Xmx options; IPED itself often uses just a few gigabytes of heap. IPED needs free memory to run external tools (imagemagick, mplayer, tesseract, python modules...) and to use as IO cache for your images and case index, so it is very important to keep memory free. IPED will need even more free memory if you turn on the advanced options 'enableExternalParsing' or 'robustImageReading'.
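As an illustration only, if you start IPED directly with java, the heap can be capped so that the rest of the RAM stays available for external tools and IO cache; the image name and output directory below are hypothetical:

```
# Illustrative command line; file names and paths are hypothetical.
# -Xmx16G caps the Java heap at 16GB on a 32GB machine, leaving the rest
# of the RAM for external tools (tesseract, mplayer...) and the OS IO cache.
java -Xmx16G -jar iped.jar -d evidence.E01 -o /cases/case01
```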
IPED's memory usage increases at processing time if you have dozens of CPU cores (1GB per logical CPU core is a rule of thumb) and at analysis time if your case is huge (for dozens of millions of files you will need a few dozens of GBs). Generally, it is best to let the JVM size the heap by itself (on most systems the Java 8 default max heap is half the RAM, while Java 9 and newer commonly default to 1/4 of the RAM) instead of setting it with -Xms/-Xmx, although setting half of your physical RAM is another good rule of thumb (if you don't enable robustImageReading and, mainly, enableExternalParsing). Increasing the heap manually may also work around some OOME errors (see https://github.com/sepinf-inc/IPED/wiki/Troubleshooting). Never allocate more than 30GB of heap: above roughly 32GB the JVM can no longer use compressed object pointers, so pointers double in size and you end up with less usable heap.
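If you want to see the compressed-pointer behavior on your own JVM, the standard -XX:+PrintFlagsFinal flag reports whether compressed oops are in use for a given heap size (this is a generic JVM check, not an IPED option):

```
# With a 30GB heap, compressed oops are normally still enabled:
java -Xmx30G -XX:+PrintFlagsFinal -version | grep UseCompressedOops
# With a heap well above 32GB, the same flag typically reports false:
java -Xmx40G -XX:+PrintFlagsFinal -version | grep UseCompressedOops
```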
If you are processing E01 images and have more free memory available (don't forget you need free memory for IO cache), enable robustImageReading in conf/AdvancedConfig.txt (conf/FileSystemConfig.txt for version 4.0 and above). It will uncompress data from E01 images in parallel using your multiple CPU cores and can result in great speedups, up to 3x depending on your dataset and hardware. Without it, only one item is uncompressed at a time by sleuthkit, because the libewf library is not thread safe, so reads are synchronized and become a bottleneck.
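As a sketch, the option is a simple boolean in the config file mentioned above; surrounding comments and related keys vary by version:

```
# conf/FileSystemConfig.txt (conf/AdvancedConfig.txt before 4.0), illustrative excerpt
# Read evidence images with multiple processes, decompressing E01 data
# in parallel at the cost of extra free RAM.
robustImageReading = true
```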
If you enable the OCR module, processing time can increase from hours to days if you have lots of scanned documents. Currently it is the slowest module of the tool, and the only things you can do are to use the -ocr option to limit OCR to a subset of your files (https://github.com/sepinf-inc/IPED/wiki/User-Manual#ocr) or to disable OCR completely.
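For reference, a hedged sketch of the -ocr option follows, assuming it accepts a category name as described in the User Manual linked above; the category, image and output path are hypothetical:

```
# Illustrative command line; paths and the category name are hypothetical.
# Restrict OCR to one category of items instead of running it over everything.
java -jar iped.jar -d evidence.E01 -o /cases/case01 -ocr "Scanned Documents"
```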