For the convenience of users, Leviathan offers automatic image building, tool review, and user image space management. When a user uploads a Dockerfile to the associated GitHub repository and the image is built automatically, the tool image is referenced in code by a name of the form .username.tool:tag.
If the image a user relies on was built by a third party, the tool image name must be written out in full in the code, for example registry.cn-hangzhou.aliyuncs.com/levkit/checkdmarc:v5. Tool review takes longer in this case.
To use a tool, the corresponding tool container must be started from its image. An image created by a user has to be uploaded and made public before other users can pull and use it. The levrt library provided by Leviathan first looks for the image locally before the tool is executed; if a local copy exists, it is used directly. Otherwise, levrt requests the image from the repository platform the tool author uploaded it to and pulls it. This step needs a working network connection and may take a moment.
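The resolution behaviour described above can be sketched with the docker Python SDK; this is only an illustration of the idea, not levrt's actual implementation:

```python
import docker
from docker.errors import ImageNotFound

def resolve_image(repository: str, tag: str):
    """Return a local image if present, otherwise pull it from its registry."""
    client = docker.from_env()
    reference = f"{repository}:{tag}"
    try:
        # Use the local copy if the image was already built or pulled.
        return client.images.get(reference)
    except ImageNotFound:
        # Otherwise fetch it from the registry the tool author published to;
        # this needs a working network connection and can take a while.
        return client.images.pull(repository, tag=tag)

image = resolve_image("registry.cn-hangzhou.aliyuncs.com/levkit/checkdmarc", "v5")
```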
Firstly, these annotations help the author organise the tool invocation and the workflow execution process, and they help users understand the purpose of the tool or workflow, the meaning of its parameters, and its overall logic. Secondly, Leviathan requires canonical annotations in tools and workflows for data parsing, which drives the front-end display and affects how tools are classified. Authors are therefore required to write annotations in the standard format given in the documentation.
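Purely as an illustration of the kind of information such an annotation carries (the canonical format itself is specified in the Leviathan documentation, and the function name and field layout below are hypothetical):

```python
def check_domain(domain: str) -> dict:
    """Query the DMARC and SPF records of a domain and summarise the findings.

    :param domain: the domain name to inspect, e.g. "example.com".
    :return: a dictionary describing the records that were found.
    """
    ...
```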
The raw mode is defined so that users can work freely in advanced mode. Take the well-known Nmap tool as an example: when Nmap is invoked from the command line, it accepts many options, such as -sP and -sn. If every option were defined as its own Nmap mode, the tool would be cumbersome for authors to build and for users to invoke. With a raw invocation, users can simply pass the parameter list ["-sP", "-sn", "192.168.1.1"] and ping-scan the host 192.168.1.1 without a port scan. When a tool is relatively simple and has few parameters, authors are advised not to use raw mode; instead, they can define all of the tool's modes explicitly so that users can work with the tool more directly and quickly.
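As a sketch of what a raw invocation might look like on the author's side (the function name and signature here are hypothetical; only the argument list comes from the example above):

```python
import subprocess

def raw(args: list[str]) -> str:
    # Hypothetical raw-mode entry: prepend the binary name and hand the
    # user-supplied argument list to Nmap unchanged.
    return subprocess.check_output(["nmap", *args], text=True)

# Ping-scan 192.168.1.1 without a port scan, as in the example above.
output = raw(["-sP", "-sn", "192.168.1.1"])
```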
ENTRYPOINT [] in a Dockerfile can be understood as the command that runs automatically once the Docker container has started. In the tool definition mode, the entry() function replaces this command with the command describing how the tool inside the container should execute.
The entry() function processes data as follows:
- output = subprocess.check_output([], text=True) invokes the tool to execute the command in the container and returns the command's standard output as the result data (output).
- subprocess.run([]) invokes the tool to execute the command in the container and usually saves the result locally inside the container as a file in another format (such as .txt or .json).
Next, the tool author can use Python string processing to build a dictionary (result) containing the required result data, and then store that dictionary in the database with a final ctx.update(result).
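Putting these steps together, a tool entry might look roughly like the sketch below. Only the use of subprocess, the dictionary post-processing and ctx.update(result) come from the description above; the function name, its parameters and whether ctx.update has to be awaited are assumptions.

```python
import subprocess

def entry(ctx, host: str):
    # Run the tool inside the container and capture its standard output.
    output = subprocess.check_output(["nmap", "-sn", host], text=True)

    # Post-process the raw text into a dictionary with the data we care about.
    result = {
        "host": host,
        "alive": "Host is up" in output,  # simple check on Nmap's text output
        "raw": output,
    }

    # Store the result dictionary in the MongoDB database via the context.
    ctx.update(result)
```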
After the tool output has been processed into a dictionary, it is stored in the MongoDB database. The storage location of the data in the database is specified by the parameter value in Cr( ), such as szczecin/checkdmarc.raw:1.0.
A workflow can be understood as a purposeful task that is completed by using many tools; it can also be understood as tool scheduling. Previous chapters describe the various ways of writing workflows. Note that the tool serves the workflow and the workflow sits on top of the tools: one workflow can invoke multiple tools. For example, when the first tool produces its output under the workflow's scheduling, that output can be retrieved through the database invocation function and passed to the next tool. Writing a workflow is similar to writing ordinary Python code: most of us first import some libraries (tools) and then write the program (tool invocations).
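A workflow therefore tends to read like ordinary Python: import the tool modules, invoke one tool, and feed its output to the next. The sketch below is purely illustrative; every name in it is a placeholder and does not reflect the real levrt API.

```python
# Every name below is a placeholder, not real levrt API.
from lev.alice import subfinder, portscan   # two imaginary tool packages

def asset_workflow(domain: str):
    # Step 1: the first tool enumerates subdomains and stores its result.
    subdomains = subfinder.enumerate(domain)

    # Step 2: the first tool's output is passed on to the next tool.
    for host in subdomains:
        portscan.ping_scan(host)
```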
lev.username.tool is a collection of the tools, workflows and module calls uploaded by a user. This entire folder is the package uploaded to Leviathan and contains tool.py (the tool module), asset.py (the workflow module) and __init__.py (the export module).
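For instance, the export module may do nothing more than re-export the other two modules; the exact convention is defined by the Leviathan documentation, so treat this as an assumption:

```python
# __init__.py -- the export module of the lev.username.tool package.
# It simply re-exports the tool and workflow modules.
from . import tool, asset

__all__ = ["tool", "asset"]
```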
After running talentsec/lev to build the environment, the following containers are created locally: lev-mongo (the MongoDB database container), lev-mongo-express (the MongoDB display container), lev-agent (the container providing the remote link to the Leviathan system), lev-rt (the data scheduling container used during tool/workflow execution) and lev-lemon (the container that transmits data to the Leviathan front end for display).
Considering the security of the platform and the privacy of other users, Leviathan conducts a security review of the code in every uploaded package (including tool files, workflow invocations, comments, etc.) and of the image invoked in the user's tool file. We must ensure that parsing the uploaded package code triggers no malicious behaviour on the front end, and that the tool image performs no malicious operations on users' systems. You may therefore need to wait for some time after uploading a package before it is published, displayed and usable on Leviathan.
# Mapping of MITRE ATT&CK tactic IDs to the display names used for tool classification.
attck2 = {
    'TA0043': 'Reconnaissance',
    'TA0042': 'Resource Development',
    'TA0001': 'Initial Access',
    'TA0002': 'Execution',
    'TA0003': 'Persistence',
    'TA0004': 'Privilege Escalation',
    'TA0005': 'Defense Evasion',
    'TA0006': 'Credential Access',
    'TA0007': 'Discovery',
    'TA0008': 'Lateral Movement',
    'TA0009': 'Collection',
    'TA0011': 'Command and Control',
    'TA0010': 'Exfiltration',
    'TA0040': 'Impact',
    'TA0038': 'Network Effects',
    'TA0039': 'Remote Service Effects'
}
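For example, a tactic ID attached to a tool's annotations can be turned into its display name with a plain dictionary lookup:

```python
# Turn the tactic ID attached to a tool into the name shown on the front end.
print(attck2["TA0008"])   # -> Lateral Movement
```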