Merge branch 'dev' into merger
TOoSmOotH committed Dec 18, 2024
2 parents 75fe4e9 + d393bd2 commit 2e260f3
Showing 26 changed files with 5,126 additions and 36 deletions.
2 changes: 1 addition & 1 deletion architecture.rst
@@ -18,7 +18,7 @@ The simplest architecture is an ``Import`` node. An import node is a single stan
Evaluation
----------

The next architecture is ``Evaluation``. It's a little more complicated than ``Import`` because it has a network interface dedicated to sniffing live traffic from a TAP or span port. Processes monitor the traffic on that sniffing interface and generate logs. :ref:`elastic-agent` collects those logs and sends them directly to :ref:`elasticsearch` where they are parsed and indexed. Evaluation mode is designed for a quick installation to temporarily test out Security Onion. It is **not** designed for production usage at all and it does not support adding Elastic agents or additional Security Onion nodes.
The next architecture is ``Evaluation``. It's a little more complicated than ``Import`` because it has a network interface dedicated to sniffing live traffic from a TAP or SPAN port. Processes monitor the traffic on that sniffing interface and generate logs. :ref:`elastic-agent` collects those logs and sends them directly to :ref:`elasticsearch` where they are parsed and indexed. Evaluation mode is designed for a quick installation to temporarily test out Security Onion. It is **not** designed for production usage at all and it does not support adding Elastic agents or additional Security Onion nodes.

.. image:: images/diagrams/eval.png
:align: center
2 changes: 1 addition & 1 deletion best-practices.rst
@@ -18,7 +18,7 @@ Installation

- Adequately spec your hardware to meet your current usage and allow for growth over time.

- Prefer taps to span ports when possible.
- When possible, we recommend using a dedicated TAP rather than SPAN ports.

- Make sure that any network firewalls have the proper firewall rules in place to allow ongoing operation and updates (see the :ref:`firewall` section).

12 changes: 11 additions & 1 deletion conf.py
@@ -16,7 +16,7 @@
# -- General configuration ---------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration

extensions = []
extensions = ['sphinxcontrib.redoc']

templates_path = ['_templates']
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store', '.venv']
@@ -40,3 +40,13 @@
html_js_files = [
    'theme_overrides.js'
]

redoc = [
    {
        'name': 'Security Onion Connect API',
        'page': 'api/index',
        'spec': 'specs/openapi.yaml',
        'embed': True,
    },
]
redoc_uri = 'https://cdn.redoc.ly/redoc/latest/bundles/redoc.standalone.js'
2 changes: 1 addition & 1 deletion configuration.rst
@@ -16,7 +16,7 @@ Security Onion is designed for many different use cases. Here are just a few exa

.. warning::

If the network configuration portion displays a message like ``The IP being routed by Linux is not the IP address assigned to the management interface``, then you have multiple network interfaces with IP addresses. In most cases, sniffing interfaces should not have IP addresses and there should only be an IP address on the management interface itself. Sometimes this is caused by a sniffing interface connected to a normal switch port (not a TAP/span port) and acquiring an IP address via DHCP. Double-check your network interfaces, wiring, and configuration.
If the network configuration portion displays a message like ``The IP being routed by Linux is not the IP address assigned to the management interface``, then you have multiple network interfaces with IP addresses. In most cases, sniffing interfaces should not have IP addresses and there should only be an IP address on the management interface itself. Sometimes this is caused by a sniffing interface connected to a normal switch port (not a TAP/SPAN port) and acquiring an IP address via DHCP. Double-check your network interfaces, wiring, and configuration.

Import
------
80 changes: 80 additions & 0 deletions connect.rst
@@ -0,0 +1,80 @@
.. _connect:

Connect API
===========

.. note::

This is an enterprise-level feature of Security Onion. Contact Security Onion Solutions, LLC via our website at https://securityonion.com/pro for more information about purchasing a Security Onion Pro license to enable this feature.

The Security Onion Connect API allows other servers to integrate with Security Onion and access the same functionality that the Security Onion Console user interface provides. Access to the Connect API is permitted through API Clients, which can be created by SOC administrators via the SOC UI -> Administration -> API Clients screen.

The Connect API currently provides functionality exposed by the Security Onion Console server. It does not provide full access to third-party applications included with the Security Onion platform. Specifically, while you can read events from Elasticsearch, you cannot manipulate Kibana settings via the Security Onion Connect API, unless those settings are already exposed via the SOC Configuration system.

Enabling Connect API
--------------------

By default, newly set up grids are not configured for API client access. To enable API client access, the following steps must be taken:

1. A license key must be applied to the grid. The license key must include the API feature.
2. The Hydra feature must be enabled via the ``hydra > enabled`` setting in the Configuration screen.
3. Synchronize the grid to apply the license key and configuration changes. This can be done via the Configuration screen options dropdown.

Authentication
--------------

API clients must use the OAuth 2.0 client credentials flow to authenticate to the Security Onion manager node.

Exchange Client Credentials for an Access Token
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Obtain an access token by submitting a POST request to ``https://BASE_URL/oauth2/token`` and providing the client ID and client secret via the *Basic* authentication scheme. The body of the request must contain ``grant_type=client_credentials``.

Example:

.. code::

   curl --cacert ca.crt -X POST -u socl_my_new_client:hwKHspsX2bMuoIs7kGwN https://BASE_URL/oauth2/token -d grant_type=client_credentials

Where you will replace:

- ``ca.crt`` with your manager's certificate authority. If a custom certificate has been applied to your grid after setup completed, you can access it via the Configuration screen (requires superuser role) from the ``nginx > ssl > SSL/TLS Cert File [adv]`` config setting, or if using the default generated certificate authority, retrieve the ``/etc/pki/ca.crt`` certificate file via SSH from the manager node.
- ``socl_my_new_client`` with your client ID (generated by SOC during API client creation)
- ``hwKHspsX2bMuoIs7kGwN`` with your API client's generated secret
- ``BASE_URL`` with your manager's IP or hostname, depending on which option you selected during Security Onion setup

The response will resemble the following:

.. code::

   {"access_token":"ory_at_xI1_2FVvoWR60GHAXZXAcDW7V3qEi2mIB8RKnpqN0fk.Hy5LaHPqh9sfWVEtDXDhs8Gj-9YZ85FJHp6pyD0eeNw","expires_in":3599,"scope":"","token_type":"bearer"}

The access token will expire in 2 hours by default, after which a new access token must be requested using the same credential exchange method.
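
The same exchange can also be scripted. Below is a minimal sketch using Python's ``requests`` library, assuming the same ``BASE_URL``, client ID, client secret, and ``ca.crt`` placeholders as the curl example above.

.. code::

   import requests

   BASE_URL = "manager.example.com"          # replace with your manager's IP or hostname
   CLIENT_ID = "socl_my_new_client"          # generated by SOC during API client creation
   CLIENT_SECRET = "hwKHspsX2bMuoIs7kGwN"    # your API client's generated secret

   # Exchange the client credentials for an access token using the Basic authentication scheme.
   response = requests.post(
       f"https://{BASE_URL}/oauth2/token",
       auth=(CLIENT_ID, CLIENT_SECRET),
       data={"grant_type": "client_credentials"},
       verify="ca.crt",                      # path to your manager's certificate authority
   )
   response.raise_for_status()
   access_token = response.json()["access_token"]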

Authorize API Requests with an Access Token
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Now that the access token has been retrieved, API requests can be submitted. These requests utilize the access token via the HTTP Authorization header, using the *Bearer* scheme.

Example:

.. code::

   curl --cacert ca.crt -X GET --oauth2-bearer ory_at_U74544Scqho5KGOci-qemWsOjxOU8TALqddAnrfxAGg.7GlO4SYPUAllO23LVqs9e_FXl0tAdRlUk3AH9IplWRU https://BASE_URL/connect/info

Replace the bearer token above with the access token extracted from the client credential exchange response.
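
This request can likewise be scripted; here is a minimal Python sketch, assuming the same ``BASE_URL`` and ``ca.crt`` placeholders and an access token obtained as described above.

.. code::

   import requests

   BASE_URL = "manager.example.com"   # replace with your manager's IP or hostname
   access_token = "ory_at_..."        # access token from the credential exchange above

   # Send the access token via the HTTP Authorization header using the Bearer scheme.
   response = requests.get(
       f"https://{BASE_URL}/connect/info",
       headers={"Authorization": f"Bearer {access_token}"},
       verify="ca.crt",
   )
   response.raise_for_status()
   print(response.json())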

Authorization / RBAC
--------------------

API clients are permitted access to various components within Security Onion using the same RBAC system used for users.
However, rather than assigning *roles* to API clients, the more granular *permissions* are assigned.
For example, while a *user* might be assigned the ``analyst`` role, an API *client* would be assigned individual permissions such as ``events/read``, ``events/write``, and ``cases/read``.
This ensures that remote systems have only the minimum permissions required for the integration.

Currently, OAuth 2.0 scopes are not utilized, since these permissions are assigned outside of the OAuth 2.0 flow.

API Reference
-------------

An interactive API view is available: `Interactive API <api/>`__
4 changes: 3 additions & 1 deletion detections.rst
@@ -90,7 +90,9 @@ There are two ways to reach the detail page for an individual detection:

Once you've used one of these methods to reach the detection detail page, you can check the Status field in the upper-right corner and use the slider to enable or disable the detection.

To the left of the Status field are several tabs. The OVERVIEW tab displays the Summary, References, and Detection Logic for the detection.
To the left of the Status field are several tabs.

The OVERVIEW tab displays the Summary, References, and Detection Logic for the detection. Starting in Security Onion 2.4.110, the Summary field may contain an AI summary of the rule if one is available. These AI summaries are pre-generated, so nothing is ever sent from your system to generate this information. That also means that AI summaries only exist for our default rules and will not exist for any of your custom rules.

.. image:: images/60_detection_nids.png
:target: _images/60_detection_nids.png
2 changes: 1 addition & 1 deletion elasticsearch.rst
@@ -82,7 +82,7 @@ Elasticsearch indices are managed by both the ``so-elasticsearch-indices-delete`
so-elasticsearch-indices-delete
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``so-elasticsearch-indices-delete`` manages size-based deletion of Elasticsearch indices based on the value of the ``elasticsearch.retention.retention_pct`` setting. This setting is checked against the total disk space available for ``/nsm/elasticsearch`` across all nodes in the Elasticsearch cluster. If your indices are using more than ``retention_pct``, then ``so-elasticsearch-indices-delete`` will delete old indices until available disk space is back under ``retention_pct``. The default value for this setting is ``50`` percent so that standalone deployments have sufficient space for not only Elasticsearch but also full packet capture and other logs. For distributed deployments with dedicated search nodes where Elasticsearch is main consumer of disk space, you may want to increase this default value.
``so-elasticsearch-indices-delete`` manages size-based deletion of Elasticsearch indices based on the value of the ``elasticsearch.retention.retention_pct`` setting. This setting is checked against the total disk space available for ``/nsm/elasticsearch`` across all nodes in the Elasticsearch cluster. If your indices are using more than ``retention_pct``, then ``so-elasticsearch-indices-delete`` will delete old indices until disk space consumed by indices is back under ``retention_pct``. The default value for this setting is ``50`` percent so that standalone deployments have sufficient space for not only Elasticsearch but also full packet capture and other logs. For distributed deployments with dedicated search nodes where Elasticsearch is the main consumer of disk space, you may want to increase this default value.
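
As an illustration of the size-based check described above (not the actual ``so-elasticsearch-indices-delete`` implementation), a sketch of the retention logic might look like the following, assuming the total disk size and an oldest-first list of index sizes are already known.

.. code::

   # Illustrative sketch only: delete oldest indices until usage drops below retention_pct.
   def indices_to_delete(indices, total_disk_bytes, retention_pct=50):
       """indices: list of (name, size_bytes) tuples, oldest first."""
       limit = total_disk_bytes * retention_pct / 100.0
       used = sum(size for _, size in indices)
       doomed = []
       for name, size in indices:
           if used <= limit:
               break
           doomed.append(name)   # oldest indices are removed first
           used -= size
       return doomed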

To modify the ``retention_pct`` value, first navigate to :ref:`administration` --> Configuration. At the top of the page, click the ``Options`` menu and then enable the ``Show advanced settings`` option. Then navigate to elasticsearch --> retention --> retention_pct. Once you make the change and save it, the new setting will take effect at the next 15 minute interval. If you would like to make the change immediately, you can click the ``SYNCHRONIZE GRID`` button under the ``Options`` menu at the top of the page.

25 changes: 24 additions & 1 deletion faq.rst
@@ -11,6 +11,7 @@ FAQ
| `IDS engines <#ids-engines>`__\
| `Security Onion internals <#security-onion-internals>`__\
| `Tuning <#tuning>`__\
| `Common Problems <#common-problems>`__\
| `Miscellaneous <#miscellaneous>`__\
|
|
@@ -137,7 +138,7 @@ In general, Security Onion attempts to make use of as much disk space as you giv
How is my data kept secure?
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Standard network connections to or from Security Onion are encrypted. This includes SSH, HTTPS, :ref:`elasticsearch` network queries, and :ref:`salt` minion traffic. Endpoint agent traffic is encrypted where supported. This includes the :ref:`elastic-agent` which supports encryption with additional configuration. SOC user account passwords are hashed via bcrypt in Kratos and you can read more about that at https://github.com/ory/kratos.
Standard network connections to or from Security Onion are encrypted. This includes SSH, HTTPS, :ref:`elasticsearch` network queries, and :ref:`salt` minion traffic. All endpoint agent (Elastic Agent) traffic is encrypted except for binary updates, which are served from the manager over HTTP; these update files are cryptographically signed by Elastic and are verified before they are used. There is also the option to pull these updates via HTTPS directly from Elastic. SOC user account passwords are hashed via bcrypt in Kratos and you can read more about that at https://github.com/ory/kratos.

`back to top <#top>`__

@@ -176,6 +177,28 @@ Please see the :ref:`new-disk` section.

`back to top <#top>`__

Common Problems
---------------

Why do containers go missing?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Docker containers that stop running, whether due to errors or other reasons, are automatically removed by the scheduled cleanup process.

Most container logs are redirected to their application log directory, located in ``/opt/so/log``. In some cases the logs may not get written to disk, and instead must be viewed via ``docker logs <container-name>`` before the container is cleaned up.

Why does ElastAlert often go missing on my grid?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

ElastAlert 2 will exit upon encountering syntax errors in rules or when Elasticsearch is not in a healthy state.

Why does Elasticsearch go to the unhealthy state?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Elasticsearch will become unhealthy for a variety of reasons, but the most common reasons are running out of disk space and having indices with unallocated shards.

`back to top <#top>`__

Miscellaneous
-------------

6 changes: 4 additions & 2 deletions first-time-users.rst
@@ -5,9 +5,11 @@ First Time Users

Welcome, first time users! You're going to be peeling back the layers of your network in just a few minutes!

First, download our ISO image as shown in the :ref:`download` section.
First, please note that Security Onion only supports x86-64 architecture (standard Intel or AMD 64-bit processors). If you don't have an x86-64 box available, then one option may be to run Security Onion in the cloud. For more information, please see the :ref:`cloud-amazon`, :ref:`cloud-azure`, and :ref:`cloud-google` sections.

Then install the ISO image and configure for IMPORT as shown below (also see the :ref:`installation` and :ref:`configuration` sections). This can be done in a minimal virtual machine with as little as 4GB RAM, 2 CPU cores, and 200GB of storage. For more information about virtualization, please see the :ref:`vmware`, :ref:`virtualbox`, and :ref:`proxmox` sections.
Otherwise, if you have an x86-64 box for your Security Onion IMPORT installation, then check to make sure it meets the MINIMUM hardware requirements of 4GB RAM, 2 CPU cores, and 200GB of storage. If you will be installing Security Onion in a virtual machine, then the VM will need those specs at minimum and the host machine will have higher hardware requirements since it will be running the host operating system and possibly other VMs or apps. For more information about virtualization, please see the :ref:`vmware`, :ref:`virtualbox`, and :ref:`proxmox` sections. Once you've verified that you have an appropriate installation target, you can proceed to download our ISO image as shown in the :ref:`download` section and then install the ISO image as shown in the :ref:`installation` section.

Once you have Security Onion installed either in the cloud or on-prem, you can configure for IMPORT as shown below (also see the :ref:`configuration` section).

Once you're comfortable with your IMPORT installation, then you can move on to more advanced installations as shown in the :ref:`architecture` section.

4 changes: 2 additions & 2 deletions grid.rst
@@ -105,7 +105,7 @@ The ``Connection Status`` field shows whether or not the node is currently conne
Elasticsearch Status
~~~~~~~~~~~~~~~~~~~~

If the node runs Elasticsearch, then the ``Elasticsearch Status`` field will show the status of it.
If the node runs :ref:`elasticsearch`, then the ``Elasticsearch Status`` field will show its status. If the status is anything other than OK, then see the :ref:`elasticsearch` section to troubleshoot.

RAID Status
~~~~~~~~~~~
@@ -140,7 +140,7 @@ The ``I/O Wait`` field shows the system I/O wait percentage. Higher values indic
Capture Loss
~~~~~~~~~~~~

The ``Capture Loss`` field shows the percentage of packet capture loss reported by :ref:`zeek`. Higher values indicate a reduced visibility into packets traversing the network. If :ref:`zeek` is reporting capture loss but no packet loss, this usually means that the capture loss is happening upstream in the tap or span port itself.
The ``Capture Loss`` field shows the percentage of packet capture loss reported by :ref:`zeek`. Higher values indicate a reduced visibility into packets traversing the network. If :ref:`zeek` is reporting capture loss but no packet loss, this usually means that the capture loss is happening upstream in the TAP or SPAN port itself.

Zeek Loss
~~~~~~~~~