
Administrator's guide

Saad Kadhi edited this page Nov 26, 2016 · 17 revisions


1. User management

Users can be managed in a dedicated page, accessible through the Administration > Users menu. Only administrators may access this page.

(Screenshot: the user management page)

Each user is identified by their login, full name and role. There are currently 3 roles:

  • read: all non-sensitive data can be read. A user with this role cannot make any change: they cannot add a case, task, log or observable, and they cannot run analyzers;
  • write: create, remove and change data of any type. This role is meant for standard users. The write role inherits read rights;
  • admin: this role is reserved for TheHive administrators. Users with this role can manage user accounts and metrics, and create case templates and observable data types. The admin role inherits write rights.

Warning: user accounts cannot be removed once they have been created, as audit logs would otherwise refer to an unknown user. Unwanted or unused accounts can however be locked.

2. Case template management

Some cases may share the same structure (tags, tasks, description, metrics). Templates automatically add tasks, a description or metrics when a new case is created. A user can choose to create an empty case or one based on a registered template.

To create a template, go, as an administrator, to the Administration menu and open the "Case templates" item.

(Screenshot: the case template administration page)

In this screen, you can add, remove or change templates. A template contains:

  • default severity
  • default tags
  • title prefix (can be changed by user at case creation)
  • default TLP
  • default description
  • task list (title and description)
  • metrics

Except for the title prefix, the task list and the metrics, the user can change the values defined in the template.

3. Metrics management

Metrics have been integrated to provide relevant indicators about cases.

Metrics are numerical values associated with cases (for example, the number of impacted users). Each metric has a name, a title and a description, defined by an administrator. Once a metric has been added to a case, it cannot be removed and must be filled in. Metrics are used to monitor business indicators through graphs.

Metrics are defined globally. To create metrics, go, as an administrator, to the Administration menu and open the "Case metrics" item.

(Screenshot: the case metrics administration page)

Metrics are used to build statistics (the "Statistics" item in the user profile menu). They can be filtered by time interval and by case tags.

For example, you can show the metrics of cases tagged "malspam" for January 2016:

(Screenshot: statistics filtered on the "malspam" tag for January 2016)

For time-based graphs, the user can choose which metrics to show. Values are aggregated over a time interval (by day, week, month or year) using a function (sum, min or max).
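The aggregation described above can be sketched as follows. This is a hypothetical illustration, not TheHive's actual code; the function and variable names are invented:

```python
from collections import defaultdict

# Aggregate metric values over time intervals using sum, min or max,
# as the statistics graphs do. `bucket` maps a date to its interval key
# (here, the month). All names are illustrative.
def aggregate(points, bucket, func):
    groups = defaultdict(list)
    for day, value in points:
        groups[bucket(day)].append(value)
    return {key: func(values) for key, values in groups.items()}

# Daily "impacted users" values, bucketed by month and summed
points = [("2016-01-03", 5), ("2016-01-20", 2), ("2016-02-01", 7)]
by_month = aggregate(points, bucket=lambda d: d[:7], func=sum)
print(by_month)  # {'2016-01': 7, '2016-02': 7}
```

Swapping `func=sum` for `min` or `max` reproduces the other two aggregation functions mentioned above.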

Some metrics are predefined (in addition to those defined by administrators), such as the case handling duration (how long the case has been open) and the number of cases opened or closed.

4. Advanced configuration

4.1. Configuration file

The configuration of TheHive is defined in conf/application.conf. This file uses the HOCON format. All configuration parameters should go in this file.

You can have a look at the default settings in the following files:

4.2. Database

TheHive uses the ElasticSearch search engine to store all persistent data. ElasticSearch is not part of the TheHive package. It must be installed and configured as a standalone instance (cf. the ElasticSearch installation guide).

Three settings are required to connect to ElasticSearch:

  • the base name of the index
  • the name of the cluster
  • the address(es) and port(s) of the ElasticSearch instance(s)

The default settings are:

# ElasticSearch
search {
  # Name of the index
  index = the_hive
  # Name of the ElasticSearch cluster
  cluster = hive
  # Address of the ElasticSearch instance
  host = ["127.0.0.1:9300"]
  # Scroll keepalive
  keepalive = 1m
  # Size of the page for scroll
  pagesize = 50
}

If you use a different ElasticSearch configuration, add the corresponding parameters to the conf/application.conf file.

If multiple ElasticSearch nodes are used as a cluster, the addresses of the master nodes must be used in the search.host setting, and all cluster nodes must use the same cluster name:

search {
  host = ["node1:9300", "node2:9300"]
  ...
}

TheHive uses the transport port (9300/tcp by default), not the http port (9200/tcp).

TheHive versions the index schema (mapping) in ElasticSearch: the version number is appended to the index base name (the 7th version of the schema uses the index the_hive_7 if search.index = the_hive).
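The naming rule above can be sketched as a one-liner (an illustrative helper, not a function from TheHive's code base):

```python
# How the versioned index name is derived from search.index (illustrative only)
def versioned_index(base_name: str, schema_version: int) -> str:
    # TheHive appends the schema version to the index base name
    return f"{base_name}_{schema_version}"

print(versioned_index("the_hive", 7))  # the_hive_7
```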

When a request matches too many documents, TheHive uses the ElasticSearch scroll feature: results are retrieved through pagination. You can specify the size of a page (search.pagesize) and how long pages are kept in ElasticSearch before they are purged (search.keepalive).

4.3. Datastore

TheHive stores attachments in ElasticSearch documents. Attachments are split into chunks, and each chunk sent to ElasticSearch is identified by the hash of the entire attachment and the chunk number.

The size of the chunks (datastore.chunksize) can be changed, but only for new attachments; already inserted attachments are left unchanged.
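The chunking scheme can be illustrated as follows. This is a sketch of the idea, not TheHive's actual implementation; the function name and the tuple layout are invented:

```python
import hashlib

CHUNK_SIZE = 50 * 1024  # mirrors the datastore.chunksize default of 50k

def split_attachment(data: bytes):
    # Each chunk is identified by the hash of the WHOLE attachment
    # plus the chunk number (illustrative sketch only)
    attachment_hash = hashlib.sha256(data).hexdigest()
    return [((attachment_hash, i), data[off:off + CHUNK_SIZE])
            for i, off in enumerate(range(0, len(data), CHUNK_SIZE))]

chunks = split_attachment(b"\x00" * (120 * 1024))
print(len(chunks))  # 3 chunks for a 120 KiB attachment
```

Because every chunk carries the hash of the whole attachment, the full file can be reassembled from ElasticSearch by fetching chunks 0, 1, 2, ... for that hash.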

An attachment is identified by its hash. The algorithm used is configurable (datastore.hash.main) but must not be changed after the first attachment has been inserted (otherwise, older attachments can no longer be retrieved).

Extra hash algorithms can be configured using datastore.hash.extra. These hashes are not used to identify attachments but are shown in the user interface (the hash from the main algorithm is also shown). If you change the extra algorithms, you should inform TheHive and ask it to recompute all hashes (this API is currently disabled; it will be reactivated in the next release).

Observables can contain malicious data. When you download an attachment from an observable, it is automatically protected by an encrypted zip archive. The password is "malware" by default but can be changed with the datastore.attachment.password setting.

Default values are:

# Datastore
datastore {
  name = data
  # Size of stored data chunks
  chunksize = 50k
  hash {
    # Main hash algorithm /!\ Don't change this value
    main = "SHA-256"
    # Additional hash algorithms (used in attachments)
    extra = ["SHA-1", "MD5"]
  }
  attachment.password = "malware"
}

4.4. Authentication

TheHive supports local, LDAP and Active Directory authentication. By default, it uses locally stored passwords (in ElasticSearch). Authentication methods are listed in the auth.type parameter, which is multi-valued. When a user logs in, each authentication method is tried in order until one succeeds.
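The fallback behaviour of a multi-valued auth.type can be sketched like this. The provider names and check functions are stand-ins, not TheHive's real authentication classes:

```python
# Try each configured provider in order; the first success wins
# (illustrative sketch of auth.type fallback, not TheHive code)
def authenticate(username, password, providers):
    for name, check in providers:
        if check(username, password):
            return name
    return None

providers = [
    ("local", lambda u, p: False),         # e.g. user absent from local store
    ("ldap", lambda u, p: p == "secret"),  # stand-in for an LDAP bind attempt
]
print(authenticate("alice", "secret", providers))  # ldap
```

This ordering is what makes a multi-valued auth.type useful for migrations: users still present in the old store keep working while new ones authenticate against the new provider.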

Default values are:

auth {
	# "type" parameter contains authentication provider. It can be multi-valued (useful for migration)
	# available auth types are:
	# services.LocalAuthSrv : passwords are stored in the user entity (in ElasticSearch). No configuration is required.
	# ad : use ActiveDirectory to authenticate users. Configuration is under "auth.ad" key
	# ldap : use LDAP to authenticate users. Configuration is under "auth.ldap" key
	type = [local]

	ad {
		# Domain Windows name using DNS format. This parameter is required.
		#domainFQDN = "mydomain.local"

		# Domain Windows name using short format. This parameter is required.
		#domainName = "MYDOMAIN"

		# Use SSL to connect to domain controller
		#useSSL = true
	}

	ldap {
		# LDAP server name or address. Port can be specified (host:port). This parameter is required.
		#serverName = "ldap.mydomain.local:389"

		# Use SSL to connect to directory server
		#useSSL = true

		# Account to use to bind on LDAP server. This parameter is required.
		#bindDN = "cn=thehive,ou=services,dc=mydomain,dc=local"

		# Password of the binding account. This parameter is required.
		#bindPW = "***secret*password***"

		# Base DN to search users. This parameter is required.
		#baseDN = "ou=users,dc=mydomain,dc=local"

		# Filter to search user {0} is replaced by user name. This parameter is required.
		#filter = "(cn={0})"
	}
}

# Maximum time between two requests without requesting authentication
session {
  warning = 5m
  inactivity = 1h
}

To enable authentication using AD or LDAP, edit the conf/application.conf file and set the appropriate parameters.

4.5. Streaming

The user interface is automatically updated when data changes in the back-end. To achieve this, the back-end sends events to all connected front-ends. The mechanism used to notify the front-ends is long polling, and its settings are:

  • refresh: when there is no notification, close the connection after this duration (default: 1 minute).
  • cache: before polling, a session must be created in order to make sure no event is lost between two polls. If no poll occurs during the cache duration, the session is destroyed (default: 15 minutes).
  • nextItemMaxWait, globalMaxWait: when an event occurs, it is not immediately sent to the front-end. The back-end waits for nextItemMaxWait, and up to globalMaxWait, in case other events can be included in the response. This mechanism saves many HTTP requests.
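The batching behaviour of nextItemMaxWait and globalMaxWait can be sketched with a small simulation. This is a deterministic toy model with invented names, not the back-end's actual code:

```python
import time

# Each new event extends the wait by next_item_max_wait, but the total
# wait is capped by global_max_wait; all pending events are then sent
# in a single response (illustrative sketch only)
def batch_events(poll, next_item_max_wait=0.5, global_max_wait=1.0):
    events = []
    start = time.monotonic()
    deadline = start + global_max_wait
    while time.monotonic() < deadline:
        event = poll()
        if event is not None:
            events.append(event)
            deadline = min(start + global_max_wait,
                           time.monotonic() + next_item_max_wait)
        else:
            time.sleep(0.01)
    return events

pending = [1, 2, 3]
batch = batch_events(lambda: pending.pop(0) if pending else None,
                     next_item_max_wait=0.1, global_max_wait=0.5)
print(batch)  # [1, 2, 3]
```

Three events arriving close together are delivered in one response instead of three, which is exactly the saving the settings above are for.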

Default values are:

# Streaming
stream.longpolling {
  # Maximum time a stream request waits for new element
  refresh = 1m
  # Lifetime of the stream session without request
  cache = 15m
  nextItemMaxWait = 500ms
  globalMaxWait = 1s
}

4.6. Entity size limit

The Play framework, used by TheHive, sets the default HTTP body size limit to 100KB for textual content (json, xml, text, form data) and to 10MB for file uploads. This may be too small in many cases, so you may want to change it with the following settings in the /etc/thehive/application.conf file:

# Max textual content length
play.http.parser.maxMemoryBuffer=1M
# Max file size
play.http.parser.maxDiskBuffer=1G

If you are using an nginx reverse proxy in front of TheHive, be aware that nginx doesn't distinguish between text data and file uploads. You should therefore also set the client_max_body_size parameter in the nginx server configuration, to the maximum of the file and text sizes defined in the /etc/thehive/application.conf file.
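A minimal nginx server block could look like the following. The server name, certificate handling and the upstream port 9000 are assumptions about your setup, not values mandated by TheHive:

```nginx
server {
    listen 443 ssl;
    server_name thehive.example.com;

    # Must be at least the larger of maxMemoryBuffer and maxDiskBuffer
    client_max_body_size 1G;

    location / {
        proxy_pass http://127.0.0.1:9000;
    }
}
```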

4.7. Analyzers

This section defines the configuration of analyzers. Analyzer definitions are loaded at startup. TheHive looks for these definitions (name, version, how to invoke, ...) in the analyzer.path directory (relative to the TheHive installation).

Each analyzer can have its own configuration settings (API keys for example) in the subsection identified by the analyzer name. The global subsection applies to all analyzers.

Default values are:

analyzer {
  # Directory that holds analyzers
  path = analyzers
  # Analyzer configuration
  config {
    global {
      #proxy {
      #       http="http://127.0.0.1:3128",
      #       https="http://127.0.0.1:3128"
      #}
    }
    DNSDB {
      server="https://api.dnsdb.info"
      #key="DNSDB_API_key"
    }
    DomainTools {
      #username="DomainTools_Username"
      #key="DomainTools_API_key"
    }
    VirusTotal {
      #key="VirusTotal_API_key"
    }
    Hippocampe {
      url="http://localhost:5000/hippocampe/api/v1.0/"
    }
  }
}

Warning: each time you configure a new analyzer, you must restart the service or the application.

4.8. MISP

TheHive has the ability to connect to one or several MISP servers. Within the configuration file, you can register your MISP server(s) under the misp configuration keyword. Each server shall be identified using an arbitrary name, its URL, the corresponding authentication key and optional tags to add to the corresponding cases when importing MISP events.

4.8.1. Minimal configuration

## Enable MISP module
play.modules.enabled += connectors.misp.MispConnector

misp {
  #"MISP-SERVER-ID" {
  #  # URL of the MISP server
  #  url = ""
  #  # authentication key
  #  key = ""
  #  # tags that must be automatically added to the case corresponding to the imported event
  #  tags = ["misp"]
  #}

  # truststore to use to validate the MISP certificate (if the default truststore is not sufficient)
  #cert = /path/to/truststore.jks

  # Interval between two MISP event import in hours (h) or minutes (m)
  interval = 1h
}

To sync with a MISP server and retrieve events, edit the conf/application.conf file and adjust the parameters shown above to your setup.

4.8.2. Associate a case template

As stated in the subsection above, TheHive is able to automatically import MISP events and create cases out of them. This operation leverages the template engine. Thus you'll need to create a case template prior to importing MISP events.

First, create a template. Let's call it MISP_CASETEMPLATE.

Then update TheHive's configuration to add a 'caseTemplate' parameter as shown in the example below:

misp {
  "MISP" {
    # URL of the MISP server
    url = "http://MYMISPSERVER"
    # authentication key
    key = "MYKEY"
    # tags to be added to the imported artifacts
    tags = ["misp"]
    # Optional case template
    caseTemplate = "MISP_CASETEMPLATE"
  }
}
Last, restart TheHive. Every newly imported MISP event will then use the "MISP_CASETEMPLATE" template.

4.9. Monitoring and performance metrics

Performance metrics (response time, call rate to ElasticSearch, HTTP requests, throughput, memory usage...) can be collected if enabled in the configuration.

Enable them by editing the conf/application.conf file and adding:

metrics.enabled = true

These metrics can optionally be sent to an external database (Graphite, Ganglia or InfluxDB) in order to monitor the health of the platform. This feature is disabled by default:

metrics {
    name = default
    enabled = true
    rateUnit = SECONDS
    durationUnit = SECONDS
    showSamples = false
    jvm = true
    logback = true

    graphite {
        enabled = false
        host = "127.0.0.1"
        port = 2003
        prefix = thehive
        rateUnit = SECONDS
        durationUnit = MILLISECONDS
        period = 10s
    }

    ganglia {
        enabled = false
        host = "127.0.0.1"
        port = 8649
        mode = UNICAST
        ttl = 1
        version = 3.1
        prefix = thehive
        rateUnit = SECONDS
        durationUnit = MILLISECONDS
        tmax = 60
        dmax = 0
        period = 10s
    }

    influx {
        enabled = false
        url = "http://127.0.0.1:8086"
        user = root
        password = root
        database = thehive
        retention = default
        consistency = ALL
        #tags = {
        #    tag1 = value1
        #    tag2 = value2
        #}
        period = 10s
    }
}

4.10. HTTPS

To enable HTTPS in the application, add the following lines to /etc/thehive/application.conf:

https.port: 9443
play.server.https.keystore {
  path: "/path/to/keystore.jks"
  type: "JKS"
  password: "password_of_keystore"
}

To import your certificate into the keystore, depending on your situation, you can follow this documentation.

More information:

This is a setting of the Play framework, documented on its website: https://www.playframework.com/documentation/2.5.x/ConfiguringHttps.
