
Commit

update references
dougburks committed Sep 27, 2023
1 parent d2cb41f commit 5da778e
Showing 8 changed files with 37 additions and 37 deletions.
2 changes: 1 addition & 1 deletion architecture.rst
@@ -149,7 +149,7 @@ There is also an option to have a **manager node** and one or more **heavy nodes

Heavy nodes do not consume from the :ref:`redis` queue on the manager. This means that if you just have a manager and heavy nodes, then the :ref:`redis` queue on the manager will grow and never be drained. To avoid this, you have two options. If you are starting a new deployment, you can make your ``manager`` a ``manager search`` so that it will drain its own :ref:`redis` queue. Alternatively, if you have an existing deployment with a ``manager`` and want to avoid rebuilding, then you can add a separate search node (NOT heavy node) to consume from the :ref:`redis` queue on the manager.

-Heavy nodes perform sensor duties and store their own logs in their own local Elasticsearch instance. This results in higher hardware requirements and lower performance. Heavy nodes do NOT pull logs from the redis queue on the manager like search nodes do.
+Heavy nodes perform sensor duties and store their own logs in their own local :ref:`elasticsearch` instance. This results in higher hardware requirements and lower performance. Heavy nodes do NOT pull logs from the redis queue on the manager like search nodes do.

Heavy Nodes run the following components:

4 changes: 2 additions & 2 deletions firewall.rst
@@ -64,8 +64,8 @@ Elastic Agent:

Search nodes from/to manager:

-- TCP/9300 - Node-to-node for Elasticsearch
-- TCP/9696 - Redis
+- TCP/9300 - Node-to-node for :ref:`elasticsearch`
+- TCP/9696 - :ref:`redis`

Host Firewall
-------------
36 changes: 18 additions & 18 deletions hardware.rst
@@ -76,19 +76,19 @@ In a standalone deployment, the manager components and the sensor components all
This deployment type is recommended for evaluation purposes, POCs (proof-of-concept) and small to medium size single sensor deployments. Although you can deploy Security Onion in this manner, it is recommended that you separate the backend components and sensor components.

- CPU: Used to parse incoming events, index incoming events, search metadata, capture PCAP, analyze packets, and run the frontend components. As data and event consumption increases, a greater amount of CPU will be required.
-- RAM: Used for Logstash, Elasticsearch, disk cache for Lucene, :ref:`suricata`, :ref:`zeek`, etc. The amount of available RAM will directly impact search speeds and reliability, as well as ability to process and capture traffic.
-- Disk: Used for storage of indexed metadata. A larger amount of storage allows for a longer retention period. It is typically recommended to retain no more than 30 days of hot ES indices.
+- RAM: Used for :ref:`logstash`, :ref:`elasticsearch`, disk cache for Lucene, :ref:`suricata`, :ref:`zeek`, etc. The amount of available RAM will directly impact search speeds and reliability, as well as ability to process and capture traffic.
+- Disk: Used for storage of indexed metadata. A larger amount of storage allows for a longer retention period. It is typically recommended to retain no more than 30 days of hot :ref:`elasticsearch` indices.

Please refer to the :ref:`architecture` section for detailed deployment scenarios.

Manager node with local log storage and search
----------------------------------------------

-In an enterprise distributed deployment, a manager node will store logs from itself and forward nodes. It can also act as a syslog destination for other log sources to be indexed into Elasticsearch. An enterprise manager node should have 8 CPU cores at a minimum, 16-128GB RAM, and enough disk space (multiple terabytes recommended) to meet your retention requirements.
+In an enterprise distributed deployment, a manager node will store logs from itself and forward nodes. It can also act as a syslog destination for other log sources to be indexed into :ref:`elasticsearch`. An enterprise manager node should have 8 CPU cores at a minimum, 16-128GB RAM, and enough disk space (multiple terabytes recommended) to meet your retention requirements.

-- CPU: Used to parse incoming events, index incoming events, search metadata. As consumption of data and events increases, more CPU will be required.
-- RAM: Used for Logstash, Elasticsearch, and disk cache for Lucene. The amount of available RAM will directly impact search speeds and reliability.
-- Disk: Used for storage of indexed metadata. A larger amount of storage allows for a longer retention period. It is typically recommended to retain no more than 30 days of hot ES indices.
+- CPU: Used to parse incoming events, index incoming events, and search metadata. As consumption of data and events increases, more CPU will be required.
+- RAM: Used for :ref:`logstash`, :ref:`elasticsearch`, and disk cache for Lucene. The amount of available RAM will directly impact search speeds and reliability.
+- Disk: Used for storage of indexed metadata. A larger amount of storage allows for a longer retention period. It is typically recommended to retain no more than 30 days of hot :ref:`elasticsearch` indices.

Please refer to the :ref:`architecture` section for detailed deployment scenarios.

@@ -97,20 +97,20 @@ Manager node with separate search nodes

This deployment type utilizes search nodes to parse and index events. As a result, the hardware requirements of the manager node are reduced. An enterprise manager node should have at least 4-8 CPU cores, 16GB RAM, and 200GB to 1TB of disk space. Many folks choose to host their manager node in their VM farm since it has lower hardware requirements than sensors but needs higher reliability and availability.

-- CPU: Used to receive incoming events and place them into Redis. Used to run all the front end web components and aggregate search results from the search nodes.
-- RAM: Used for Logstash and Redis. The amount of available RAM directly impacts the size of the Redis queue.
-- Disk: Used for general OS purposes and storing Kibana dashboards.
+- CPU: Used to receive incoming events and place them into :ref:`redis`. Used to run all the front end web components and aggregate search results from the search nodes.
+- RAM: Used for :ref:`logstash` and :ref:`redis`. The amount of available RAM directly impacts the size of the :ref:`redis` queue.
+- Disk: Used for general OS purposes and storing :ref:`kibana` dashboards.

Please refer to the :ref:`architecture` section for detailed deployment scenarios.

Search Node
-----------

-Search nodes increase search and retention capacity with regard to Elasticsearch. These nodes parse and index events, and provide the ability to scale horizontally as overall data intake increases. Search nodes should have at least 4-8 CPU cores, 16-64GB RAM, and 200GB of disk space or more depending on your logging requirements.
+Search nodes increase search and retention capacity with regard to :ref:`elasticsearch`. These nodes parse and index events, and provide the ability to scale horizontally as overall data intake increases. Search nodes should have at least 4-8 CPU cores, 16-64GB RAM, and 200GB of disk space or more depending on your logging requirements.

- CPU: Used to parse incoming events and index incoming events. As consumption of data and events increases, more CPU will be required.
-- RAM: Used for Logstash, Elasticsearch, and disk cache for Lucene. The amount of available RAM will directly impact search speeds and reliability.
-- Disk: Used for storage of indexed metadata. A larger amount of storage allows for a longer retention period. It is typically recommended to retain no more than 30 days of hot ES indices.
+- RAM: Used for :ref:`logstash`, :ref:`elasticsearch`, and disk cache for Lucene. The amount of available RAM will directly impact search speeds and reliability.
+- Disk: Used for storage of indexed metadata. A larger amount of storage allows for a longer retention period. It is typically recommended to retain no more than 30 days of hot :ref:`elasticsearch` indices.

Please refer to the :ref:`architecture` section for detailed deployment scenarios.

@@ -125,14 +125,14 @@ A forward node runs sensor components only, and forwards metadata to the manager

Please refer to the :ref:`architecture` section for detailed deployment scenarios.

-Heavy Node (Sensor with ES components)
---------------------------------------
+Heavy Node (Sensor with Elasticsearch components)
+-------------------------------------------------

-A heavy node runs all the sensor components AND Elastic components locally. This dramatically increases the hardware requirements. In this case, all indexed metadata and PCAP are retained locally. When a search is performed through Kibana, the manager node queries this node's Elasticsearch instance.
+A heavy node runs all the sensor components AND Elastic components locally. This dramatically increases the hardware requirements. In this case, all indexed metadata and PCAP are retained locally. When a search is performed through :ref:`kibana`, the manager node queries this node's :ref:`elasticsearch` instance.

-- CPU: Used to parse incoming events, index incoming events, search metadata. As monitored bandwidth (and the amount of overall data/events) increases, a greater amount of CPU will be required.
-- RAM: Used for Logstash , Elasticsearch, and disk cache for Lucene. The amount of available RAM will directly impact search speeds and reliability.
-- Disk: Used for storage of indexed metadata. A larger amount of storage allows for a longer retention period. It is typically recommended to retain no more than 30 days of hot ES indices.
+- CPU: Used to parse incoming events, index incoming events, and search metadata. As monitored bandwidth (and the amount of overall data/events) increases, a greater amount of CPU will be required.
+- RAM: Used for :ref:`logstash`, :ref:`elasticsearch`, and disk cache for Lucene. The amount of available RAM will directly impact search speeds and reliability.
+- Disk: Used for storage of indexed metadata. A larger amount of storage allows for a longer retention period. It is typically recommended to retain no more than 30 days of hot :ref:`elasticsearch` indices.

Please refer to the :ref:`architecture` section for detailed deployment scenarios.

24 changes: 12 additions & 12 deletions ingest.rst
@@ -7,28 +7,28 @@ Here's an overview of how logs are ingested in various deployment types.

Import
------
-| Core Pipeline: Elastic Agent [IMPORT Node] --> ES Ingest [IMPORT Node]
+| Core Pipeline: Elastic Agent [IMPORT Node] --> Elasticsearch Ingest [IMPORT Node]
| Logs: Zeek, Suricata
Eval
----
-| Core Pipeline: Elastic Agent [EVAL Node] --> ES Ingest [EVAL Node]
+| Core Pipeline: Elastic Agent [EVAL Node] --> Elasticsearch Ingest [EVAL Node]
| Logs: Zeek, Suricata, Osquery/Fleet
|
-| Osquery Shipper Pipeline: Osquery [Endpoint] --> Fleet [EVAL Node] --> ES Ingest via Core Pipeline
+| Osquery Shipper Pipeline: Osquery [Endpoint] --> Fleet [EVAL Node] --> Elasticsearch Ingest via Core Pipeline
| Logs: WEL, Osquery, syslog
Standalone
----------
-| Core Pipeline: Elastic Agent [SA Node] --> Logstash [SA Node] --> Redis [SA Node] <--> Logstash [SA Node] --> ES Ingest [SA Node]
+| Core Pipeline: Elastic Agent [SA Node] --> Logstash [SA Node] --> Redis [SA Node] <--> Logstash [SA Node] --> Elasticsearch Ingest [SA Node]
| Logs: Zeek, Suricata, Osquery/Fleet, syslog
|
-| WinLogbeat: Winlogbeat [Windows Endpoint] --> Logstash [SA Node] --> Redis [SA Node] <--> Logstash [SA Node] --> ES Ingest [SA Node]
+| WinLogbeat: Winlogbeat [Windows Endpoint] --> Logstash [SA Node] --> Redis [SA Node] <--> Logstash [SA Node] --> Elasticsearch Ingest [SA Node]
| Logs: WEL, Sysmon
Fleet Standalone
----------------
-| Pipeline: Elastic Agent [Fleet Node] --> Logstash [M | MS] --> ES Ingest [S | MS]
+| Pipeline: Elastic Agent [Fleet Node] --> Logstash [M | MS] --> Elasticsearch Ingest [S | MS]
| Logs: Osquery
Manager (separate search nodes)
@@ -41,26 +41,26 @@ Manager (separate search nodes)
Manager Search
--------------
-| Core Pipeline: Elastic Agent [Fleet | Forward] --> Logstash [MS] --> Redis [MS] <--> Logstash [MS] --> ES Ingest [MS]
+| Core Pipeline: Elastic Agent [Fleet | Forward] --> Logstash [MS] --> Redis [MS] <--> Logstash [MS] --> Elasticsearch Ingest [MS]
| Logs: Zeek, Suricata, Osquery/Fleet, syslog
|
-| Pipeline: Elastic Agent [MS] --> Logstash [MS] --> ES Ingest [MS]
+| Pipeline: Elastic Agent [MS] --> Logstash [MS] --> Elasticsearch Ingest [MS]
| Logs: Local Osquery/Fleet
|
-| WinLogbeat: Winlogbeat [Windows Endpoint] --> Logstash [MS] --> ES Ingest [MS]
+| WinLogbeat: Winlogbeat [Windows Endpoint] --> Logstash [MS] --> Elasticsearch Ingest [MS]
| Logs: WEL
Heavy
-----
-| Pipeline: Elastic Agent [Heavy Node] --> Logstash [Heavy] --> Redis [Heavy] <--> Logstash [Heavy] --> ES Ingest [Heavy]
+| Pipeline: Elastic Agent [Heavy Node] --> Logstash [Heavy] --> Redis [Heavy] <--> Logstash [Heavy] --> Elasticsearch Ingest [Heavy]
| Logs: Zeek, Suricata, Osquery/Fleet, syslog
Search
------
-| Pipeline: Redis [Manager] --> Logstash [Search] --> ES Ingest [Search]
+| Pipeline: Redis [Manager] --> Logstash [Search] --> Elasticsearch Ingest [Search]
| Logs: Zeek, Suricata, Osquery/Fleet, syslog
Forward
-------
-| Pipeline: Elastic Agent [Forward] --> Logstash [M | MS] --> ES Ingest [S | MS]
+| Pipeline: Elastic Agent [Forward] --> Logstash [M | MS] --> Elasticsearch Ingest [S | MS]
| Logs: Zeek, Suricata, syslog
2 changes: 1 addition & 1 deletion rbac.rst
@@ -7,7 +7,7 @@ Role-Based Access Control (RBAC)

The ability to restrict or grant specific privileges to a subset of users is covered by role-based access control, or "RBAC" for short. RBAC is an authorization technique in which users are assigned one of a small set of roles, and then the roles are associated to many low-level privileges. This provides the ability to build software with fine-grained access control, but without the need to maintain complex associations of users to large numbers of privileges. Users are traditionally assigned a single role, one which correlates closely with their role in the organization. However, it's possible to assign a user to multiple roles, if necessary.
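The user-to-role-to-privilege indirection described above can be illustrated with a minimal sketch. All role and privilege names below are hypothetical examples, not Security Onion's actual RBAC model:

```python
# Minimal RBAC sketch: users map to roles, roles map to low-level
# privileges. A user is authorized if any of their roles grants the
# privilege. Names are illustrative only.
ROLE_PRIVILEGES = {
    "analyst": {"events/read", "pcap/request"},
    "administrator": {"events/read", "pcap/request", "users/manage"},
}

USER_ROLES = {
    "alice": ["analyst"],
    "bob": ["analyst", "administrator"],  # multiple roles are possible
}

def has_privilege(user: str, privilege: str) -> bool:
    """Check a privilege by expanding the user's roles."""
    return any(
        privilege in ROLE_PRIVILEGES.get(role, set())
        for role in USER_ROLES.get(user, [])
    )
```

Note that the application only ever consults roles at check time, so adding a privilege to a role takes effect for every user holding that role without touching any user records.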

-RBAC in Security Onion covers both Security Onion privileges and Elastic stack privileges. Security Onion privileges are only involved with functionality specifically provided by the components developed by Security Onion, while Elastic stack privileges are only involved with the Elasticsearch, Kibana, and related Elastic stack. For example, Security Onion will check if a user has permission to create a PCAP request, while Elastic will check if the same user has permission to view a particular index or document stored in Elasticsearch.
+RBAC in Security Onion covers both Security Onion privileges and Elastic stack privileges. Security Onion privileges are only involved with functionality specifically provided by the components developed by Security Onion, while Elastic stack privileges are only involved with the :ref:`elasticsearch`, :ref:`kibana`, and related Elastic stack. For example, Security Onion will check if a user has permission to create a PCAP request, while Elastic will check if the same user has permission to view a particular index or document stored in :ref:`elasticsearch`.

Default Roles
-------------
2 changes: 1 addition & 1 deletion re‐indexing.rst
@@ -21,7 +21,7 @@ Make the script executable:

sudo chmod +x so-elastic-reindex

-Re-index all indices matching ``logstash-*``, pulling the appropriate ``refresh_interval`` from the template named ``logstash`` in Elasticsearch:
+Re-index all indices matching ``logstash-*``, pulling the appropriate ``refresh_interval`` from the template named ``logstash`` in :ref:`elasticsearch`:

::

2 changes: 1 addition & 1 deletion so-elasticsearch-query.rst
@@ -3,7 +3,7 @@
so-elasticsearch-query
======================

-You can use ``so-elasticsearch-query`` to submit a cURL request to the local Security Onion Elasticsearch host from the command line.
+You can use ``so-elasticsearch-query`` to submit a cURL request to the local Security Onion :ref:`elasticsearch` host from the command line.

Usage
-----
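Conceptually, such a wrapper joins a user-supplied API path onto the local Elasticsearch base URL before handing it to cURL. A rough Python sketch of that idea follows; the base URL and helper name are illustrative assumptions, not the script's actual implementation:

```python
# Sketch of a query-wrapper's URL handling: append the user-supplied
# Elasticsearch API path to a fixed local base URL. Hypothetical values.
BASE_URL = "https://localhost:9200"

def build_query_url(path: str) -> str:
    """Join the Elasticsearch base URL with a request path."""
    return f"{BASE_URL}/{path.lstrip('/')}"
```

A wrapper built this way would accept either ``_cat/indices`` or ``/_cat/indices`` and produce the same request URL.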
2 changes: 1 addition & 1 deletion zeek-fields.rst
@@ -3,7 +3,7 @@
Zeek Fields
===========

-Zeek logs are sent to Elasticsearch where they are parsed using ingest parsing. Most Zeek logs have a few standard fields and they are parsed as follows:
+Zeek logs are sent to :ref:`elasticsearch` where they are parsed using ingest parsing. Most Zeek logs have a few standard fields and they are parsed as follows:

| ts => @timestamp
| uid => log.id.uid
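Field renames like the ones above can be sketched as a simple mapping applied to each log record. Only the two mappings visible here are included; the full list continues in the source file:

```python
# Rename Zeek's native field names to their indexed equivalents,
# passing unknown fields through unchanged. Only the mappings shown
# in this document are included.
FIELD_MAP = {
    "ts": "@timestamp",
    "uid": "log.id.uid",
}

def rename_fields(record: dict) -> dict:
    """Return a copy of the record with known fields renamed."""
    return {FIELD_MAP.get(key, key): value for key, value in record.items()}
```

In practice this work is done by Elasticsearch ingest pipelines rather than Python; the sketch only illustrates the rename semantics.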
