diff --git a/.github/ISSUE_TEMPLATE/bug-or-error-report.md b/.github/ISSUE_TEMPLATE/bug-or-error-report.md
index bda324c4..ecef87db 100644
--- a/.github/ISSUE_TEMPLATE/bug-or-error-report.md
+++ b/.github/ISSUE_TEMPLATE/bug-or-error-report.md
@@ -10,24 +10,38 @@ assignees: ''
## **BEFORE CREATING THE ISSUE, CHECK THE FOLLOWING GUIDES**:
- [ ] [FAQ](https://github.com/cisagov/LME/blob/main/docs/markdown/reference/faq.md)
- [ ] [Troubleshooting](https://github.com/cisagov/LME/blob/main/docs/markdown/reference/troubleshooting.md)
- - [ ] Search current/closed issues for similar questions, and utilize github/google search to see if an answer exists for the error I'm encountering.
+ - [ ] Search current/closed issues for similar questions and utilize GitHub/Google search to see if an answer exists for the error you are encountering.
If the above did not answer your question, proceed with creating an issue below:
## Describe the bug
-
+
+
+## Expected behavior
+A clear and concise description of what you expected to happen.
## To Reproduce
-
+
### Please complete the following information
-#### **Desktop:**
- - OS: [e.g. Windows 10]
- - Browser: [e.g. Firefox Version 104.0.1]
- - Software version: [e.g. Sysmon v15.0, Winlogbeat 8.11.1]
+
+#### **Setup**
+- Are you running the LME machines in a virtual environment (e.g., Docker) or are you running natively on the machines?
+- Which version of LME are you installing?
+- Is this a first-time installation or are you upgrading? If upgrading, what was your previous version?
+
+#### **Desktop:** (Client Machines)
+- OS: [e.g. Windows 10]
+- Browser: [e.g. Firefox Version 104.0.1]
+- Software version: [e.g. Sysmon v15.0]
+
+#### **Domain Controller:**
+- OS: [e.g. Windows Server]
+- Browser: [e.g. Firefox Version 104.0.1]
+- Software version: [e.g. Winlogbeat 8.11.1]
-#### **Server:**
+#### **ElasticSearch/Kibana Server:**
- OS: [e.g. Ubuntu 22.04]
- Software Versions:
- ELK: [e.g. 8.7.1]
@@ -45,14 +59,12 @@ lsb_release -a
```
for name in $(sudo docker ps -a --format '{{.Names}}'); do echo -e "\n\n\n-----------$name----------"; sudo docker logs $name | tail -n 20; done
```
-Increase the number of lines if your issue is not present, or include a relevant log of the erroring container
+Increase the number of lines if your issue is not present, or include a relevant log of the erroring container.
- Output of the relevant /var/log/cron_logs/ file
-## Expected behavior
-A clear and concise description of what you expected to happen.
## Screenshots **OPTIONAL**
If applicable, add screenshots to help explain your problem.
## Additional context
-Add any other context about the problem here.
+Add any other context about the problem or any unique environment information here.
diff --git a/.github/ISSUE_TEMPLATE/feature_request.md b/.github/ISSUE_TEMPLATE/feature_request.md
index bbcbbe7d..0297b228 100644
--- a/.github/ISSUE_TEMPLATE/feature_request.md
+++ b/.github/ISSUE_TEMPLATE/feature_request.md
@@ -8,7 +8,7 @@ assignees: ''
---
**Is your feature request related to a problem? Please describe.**
-A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
+A clear and concise description of what the problem is. Ex. When I try ABC, this happens instead [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md
index 7afbe3bb..e95d6f07 100644
--- a/.github/PULL_REQUEST_TEMPLATE.md
+++ b/.github/PULL_REQUEST_TEMPLATE.md
@@ -10,42 +10,35 @@
-
-
+
### 📷 Screenshots (DELETE IF UNAPPLICABLE)
## 🧪 Testing
-
+
## ✅ Pre-approval checklist ##
- [ ] There is a [gitIssue](https://github.com/cisagov/LME/issues) that this PR resolves
+- [ ] The Git issue that this PR solves has been selected in the Development section.
- [ ] The PR's base branch has been modified to be the proper branch.
- [ ] Changes are limited to a single goal **AND**
-<<<<<<< HEAD
the title reflects this in a clear human readable format for the release notes
-=======
- the title reflects this in a clear human readable format
-- [ ] Issue that this PR solves has been selected in the Development section
->>>>>>> 34b2ff9 (Update PULL_REQUEST_TEMPLATE.md (#206))
- [ ] I have read and agree to LME's [CONTRIBUTING.md](https://github.com/cisagov/LME/CONTRIBUTING.md) document.
- [ ] The PR adheres to LME's requirements in [RELEASES.md](https://github.com/cisagov/LME/RELEASES.md#steps-to-submit-a-PR)
- [ ] These code changes follow [cisagov code standards](https://github.com/cisagov/development-guide).
- [ ] All relevant repo and/or project documentation has been updated to reflect the changes in this PR.
-- [ ] The PR is labeled with `feat` for an added new feature, `update` for an update, **OR** `fix` for a fix.
-- [ ] The PR contains `Resolves #` so that merging it closes out the corresponding issue. For example `Resolves #132`.
-
## ✅ Pre-merge Checklist
-- [ ] All tests pass
-- [ ] PR has been tested and the documentation for testing is above
-- [ ] Squash and merge all commits into one PR level commit
+- [ ] All tests pass.
+- [ ] PR has been tested and the documentation for testing is above.
+- [ ] Squash and merge all commits into one PR level commit.
## ✅ Post-merge Checklist
-- [ ] Delete the branch to keep down number of branches
-
+- [ ] Delete the branch to keep down the number of branches.
+- [ ] The PR is labeled with `feat` for an added new feature, `update` for an update, **OR** `fix` for a fix.
+- [ ] The PR contains `Resolves #` so that merging it closes out the corresponding issue. For example `Resolves #132`.
\ No newline at end of file
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 94621689..938fdf27 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -1,17 +1,17 @@
# Welcome #
-We're so glad you're thinking about contributing to this open-source project! If you're unsure or hesitant to make a recommendation, just ask, submit the issue, or pull request. The worst that can happen is that you'll be politely asked to change something. We appreciate any sort of contribution(s), and don't want a wall of rules to stifle innovation.
+Users are welcome to contribute to LME. If you're unsure or hesitant to make a recommendation, just ask, or submit the issue or pull request. The LME team appreciates any sort of contribution and does not want to stifle innovation.
-Before contributing, we encourage you to read our CONTRIBUTING policy (you are here), our LICENSE, and our README, all of which are in this repository.
+Before contributing, please read the CONTRIBUTING policy (you are here), LICENSE, and README, all of which are in this repository.
## Issues
If you want to report a bug or request a new feature, the most direct method is to [create an issue](https://github.com/cisagov/development-guide/issues) in this repository.
-We recommend that you first search through existing issues (both open and closed) to check if your particular issue has already been reported.
+We recommend that you first search through existing issues (both open and closed) to check if another user has reported your particular issue and there is already an answer.
-If it has then you might want to add a comment to the existing issue.
+If your question is in an existing issue, then you might want to add a comment to the existing issue.
-If it hasn't then please create a new one.
+If it hasn't, then please create a new one.
Please follow the provided template and fill out all sections. We have a `BUG` and `FEATURE REQUEST` Template
@@ -25,13 +25,13 @@ Example:
## Pull Requests (PR)
-If you choose to submit a pull request, it will be required to pass various sanity checks in our continuous integration (CI) pipeline, before we merge it. Your pull request may fail these checks, and that's OK. If you want you can stop there and wait for us to make the necessary corrections to ensure your code passes the CI checks, you're more than within your rights; however, it helps our team greatly if you fix the issues found by our CI pipeline.
+If you choose to submit a pull request, it must pass various sanity checks in the continuous integration (CI) pipeline before it can be merged. Your pull request may fail these checks, and that's OK. If you want, you can stop there and wait for us to make the necessary corrections to ensure your code passes the CI checks. It helps our community if you fix the issues found by our CI pipeline.
Below are some loose requirements we'd like all PR's to follow. Our release process is documented in [Releases](releases.md).
### Quality assurance and code reviews
-All PRs will be tested, vetted, and reviewed by our team before being merged with the main code base. All should be pull requested into whatever the upcoming release branch is. Find that by searching for the highest SEMVER `release-X.Y.Z` branch or following our release documentation.
+Our team will test, vet, and review all PRs before merging them with the main code base. All code should be pull requested into the upcoming release branch. You can find that branch by searching for the highest SEMVER `release-X.Y.Z` branch or following our release documentation.
### Steps to submit a PR
- All PRs should request merges back into LME's *CLOSEST* Major or Minor upcoming release branch `release-X.Y.Z`. This will be viewable in the branch list on Github. You can also refer to our release documentation for guidance.
@@ -39,7 +39,7 @@ All PRs will be tested, vetted, and reviewed by our team before being merged wit
- If the PR does not have an issue, please create a new issue and name your branch according to the conventions [here](#branch-naming-conventions). Add a human readable title describing the PR and how it fits into LME's project/code. If the PR follows our other requirements listed here, we'll add it into our public project linked previously.
- Add the label `feat` for an added new feature, `update` for an update, **or** `fix` for a fix.
- We'll work with you to mold it to our development goals/process, so your work can be merged into LME and your Github profile gets credit for the contributions.
- - Before merging we request that all commits be squashed into one commit. This way your changes to the repository are tracked, but our `git log` history does not rapidly expand.
+ - Before merging, we request that all commits be squashed into one commit. This way your changes to the repository are tracked, but our `git log` history does not rapidly expand.
- Thanks for wanting to submit and develop improvements for LME!!
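The squash-into-one-commit step above can be sketched with a throwaway repository. This is a hedged, minimal example (the commit messages and `#NNN` issue number are placeholders); in practice, `git rebase -i` against the release branch is the more common interactive workflow:

```shell
# Create a disposable repo with three commits, then squash the last two
# into the first so the branch carries a single PR-level commit.
repo=$(mktemp -d)
cd "$repo"
git -c init.defaultBranch=main init -q
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
git commit -q --allow-empty -m "feat: add widget support"
git commit -q --allow-empty -m "wip"
git commit -q --allow-empty -m "fix typo"
git reset -q --soft HEAD~2                          # rewind HEAD, keep the tree
git commit -q --amend --allow-empty -m "feat: add widget support (#NNN)"
git rev-list --count HEAD                           # prints: 1
```

The soft reset keeps all changes staged while rewinding history, so the amended commit carries the combined work under one human-readable title.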
## Public domain
diff --git a/README.md b/README.md
index cdcc4d95..b9a24ff3 100644
--- a/README.md
+++ b/README.md
@@ -3,7 +3,7 @@
[]()
# Logging Made Easy
-Initially created by NCSC and now maintained by CISA, Logging Made Easy is a self-install tutorial for small organizations to gain a basic level of centralized security logging for Windows clients and provide functionality to detect attacks. It's the coming together of multiple open software platforms which come at no cost to users, where LME helps the reader integrate them together to produce an end-to-end logging capability. We also provide some pre-made configuration files and scripts, although there is the option to do it on your own.
+CISA's Logging Made Easy provides a self-install tutorial for organizations to gain a basic level of centralized security logging for Windows clients and provide functionality to detect attacks. LME is the integration of multiple open software platforms which come at no cost to users. LME helps users integrate these platforms to produce an end-to-end logging capability. LME also provides some pre-made configuration files and scripts, although there is the option to do this on your own.
Logging Made Easy can:
- Show where administrative commands are being run on enrolled devices
@@ -20,28 +20,28 @@ Logging Made Easy can:
**LME is a 'homebrew' way of gathering logs and querying for attacks.**
-We have done the hard work to make things simple. We will tell you what to download, which configurations to use and have created convenient scripts to auto-configure wherever possible.
+The LME team simplified the process by creating clear instructions on what to download and which configurations to use, and by creating convenient scripts to auto-configure wherever possible.
-The current architecture is based upon Windows Clients, Microsoft Sysmon, Windows Event Forwarding and the ELK stack.
+The current architecture is based on Windows Clients, Microsoft Sysmon, Windows Event Forwarding and the ELK stack.
-We are **not** able to comment on or troubleshoot individual installations. If you believe you have have found an issue with the LME code or documentation please submit a [GitHub issue](https://github.com/cisagov/lme/issues). If you have a question about your installation, please visit [GitHub Discussions](https://github.com/cisagov/lme/discussions) to see if your issue has been addressed before.
+LME is **not** able to comment on or troubleshoot individual installations. If you believe you have found an issue with the LME code or documentation, please submit a [GitHub issue](https://github.com/cisagov/lme/issues). If you have a question about your installation, please look through all open and closed issues to see if it has been addressed before. If not, then submit a GitHub issue using the Bug Template, ensuring that you provide all the requested information.
+
+For general questions about LME and suggestions, please visit [GitHub Discussions](https://github.com/cisagov/lme/discussions) to add a discussion post.
## Who is Logging Made Easy for?
From single IT administrators with a handful of devices in their network to larger organizations.
-LME is for you if:
+LME is suited for:
-* You don’t have a [SOC](https://en.wikipedia.org/wiki/Information_security_operations_center), SIEM or any monitoring in place at the moment.
-* You lack the budget, time or understanding to set up your own logging system.
-* You recognize the need to begin gathering logs and monitoring your IT.
-* You understand that LME has limitations and is better than nothing - but no match for a professional tool.
+* Organizations without a [SOC](https://en.wikipedia.org/wiki/Information_security_operations_center), SIEM, or any monitoring in place at the moment.
+* Organizations that lack the budget, time, or understanding to set up a logging system.
+* Organizations that need to gather logs and monitor their IT.
+* Organizations that understand LME's limitations.
-If any, or all, of these criteria fit, then LME is a step in the right direction for you.
-LME could also be useful for:
-* Small isolated networks where corporate monitoring doesn’t reach.
+LME is also useful for small isolated networks where corporate monitoring doesn’t reach.
## Overview
The LME architecture consists of 3 groups of computers, as summarized in the following diagram:
diff --git a/RELEASES.md b/RELEASES.md
index a1e6c27c..e4b20fbb 100644
--- a/RELEASES.md
+++ b/RELEASES.md
@@ -10,7 +10,7 @@ The patch versions will generally adhere to the following guidelines:
### Timelines
-Development lifecycle timelines will vary depending on project goals, tasking, community contributions, and vision.
+Development lifecycle timelines will vary depending on project goals, tasking, community contributions and vision.
## Current Release Branch:
@@ -18,25 +18,25 @@ To determine the current release branch, it will either be clearly documented in
- For example, if the current latest release (as seen on the main [README](/README.md)) version `1.1.0`, and the `release-*` branches are: `release-1.1.1` and `release-1.2.0` then the `1.2.0` branch would be the branch where submit the PR, since it is the closest release that is a Major or Minor release, while 1.1.1 is a patch release.
-- All `release-*` have various branch protections enabled, and will require review by the development team before being merged.
-The team requests a brief description if one submits a fix for a current issue on the public project, that context will allow us to help determine if it warrants inclusion. If the PR is well documented following our processes in our CONTRIBUTING.md, it will most likely be worked into LME. We value inclusion and recognize the importance of the open-source community.
+- All `release-*` have various branch protections enabled and will require review by the development team before being merged.
+The team requests a brief description for each submitted fix for a current issue on the public project; that context will allow us to determine whether it warrants inclusion. If the PR is well documented, following our processes in our CONTRIBUTING.md, we will most likely work it into LME. We value inclusion and recognize the importance of the open source community.
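Finding the highest-SEMVER `release-X.Y.Z` branch can be scripted with a version sort. A minimal sketch (the branch names below are illustrative, not real LME branches; in a real checkout you would feed it from `git branch -r`):

```shell
# Version-sort candidate release branches and take the highest SEMVER.
printf '%s\n' release-1.1.1 release-1.2.0 release-1.10.0 | sort -V | tail -n 1
# prints: release-1.10.0
```

`sort -V` compares embedded version numbers numerically, so `1.10.0` correctly sorts above `1.2.0`, which a plain lexical sort would get wrong.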
## Content:
-Each release generally notes the Additions, Changes, and Fixes addressed in the release and the contributors that provided code for the release. Additionally, relevant builds of the release will be attached with the release. Tagging the release will correspond with its originating branch's SEMVER number.
+Each release generally notes the Additions, Changes and Fixes addressed in the release and the contributors that provided code for the release. Additionally, relevant builds of the release will be attached with the release. Tagging the release will correspond with its originating branch's SEMVER number.
## Update Process:
-Developments and changes will accrue in a release-X.Y.Z branch according to the level of the release as documented in [Pull Requests](#pull-requests). The process of merging all changes into a release branch and preparing it for release is documented below.
+Developments and changes will accrue in a release-X.Y.Z branch according to the level of the release as documented in [Pull Requests](#pull-requests). The process of merging all changes into a release branch and preparing it for release is documented below.
### Code Freeze:
-Each code freeze will have an announced end date/time in accordance with our public [project](https://github.com/orgs/cisagov/projects/68). Any PRs with new content will need to be in by the announced time in order to be included into the release.
+We will announce an end date/time for each code freeze in accordance with our public [project](https://github.com/orgs/cisagov/projects/68). Users must submit any PRs with new content by the announced time for us to include them in the release.
### Steps:
-1. Goals/changes/updates to LME will be tracked in LME's public [project](https://github.com/orgs/cisagov/projects/68). These updates to LME will be tracked by pull requests (and may be backed by corresponding issues for documentation purposes for documentation purposes) to a specific `release-X.Y.Z` branch.
-2. As commits are pushed to the PRs set to pull into a release branch, we will determine a time to cease developments. When its determined the features developed in a `release` branch meet a goal or publish point, we will merge all the release's PR's into one combined state onto the `release-.X.Y.Z` branch. This will make sure all testing happens from a unified branch state, and will minimize the number of merge conflicts that occur, easing coordination of merge conflicts.
-3. Once all work has been merged into an initial release, we will mark the pull request for the release with a `code freeze` label to denote that the release is no longer excepting new features/developments/etc...., all PRs that commit to the release branch should only be to fix breaking changes or failed tests. We’ll also invite the community to pull the frozen `release` branch to test and validate if the new changes cause issues in their environment.
-4. Finally, when all testing and community feedback is complete we'll merge into main with a new tag denoting the `release-X.Y.Z` SEMVER value `X.Y.Z`.
+1. The team will track goals, changes, and updates in LME's public [project](https://github.com/orgs/cisagov/projects/68). Pull requests (which may be backed by corresponding issues for documentation purposes) to a specific `release-X.Y.Z` branch will track updates to LME.
+2. As commits are pushed to the PRs set to pull into a release branch, we will determine a time to cease development. When the team determines that the features developed in a `release` branch meet a goal or publish point, we will merge all the release's PRs into one combined state on the `release-X.Y.Z` branch. This ensures all testing happens from a unified branch state and minimizes the number of merge conflicts, easing their coordination.
+3. Once the team has merged all work into an initial release, we will mark the pull request for the release with a `code freeze` label to denote that the release is no longer accepting new features or developments; any PRs that commit to the release branch should only fix breaking changes or failed tests. We’ll also invite the community to pull the frozen `release` branch to test and validate whether the new changes cause issues in their environment.
+4. Finally, when all testing and community feedback is complete, we'll merge into main with a new tag denoting the `release-X.Y.Z` SEMVER value `X.Y.Z`.
### Caveats:
Major or Minor SEMVER LME versions will only be pushed to `main` with testing and validation of code to ensure stability and compatibility. However, new major changes will not always be backwards compatible.
diff --git a/build/Readme.md b/build/Readme.md
index f87c46ef..4f7e1237 100644
--- a/build/Readme.md
+++ b/build/Readme.md
@@ -1,16 +1,16 @@
# Generating the docs:
-This directory uses [pandoc]() a universal document converter to build the markdown files into a pdf. Due to regulatory concerns we cannot release a pdf here directly, but you can utilize the following script to build the markdown docs into a pdf so you can use them offline if desired.
+This directory uses [pandoc](), a universal document converter, to build the markdown files into a pdf. Due to regulatory concerns LME cannot release a pdf directly, but you can utilize the following script to build the markdown docs into a pdf so you can use them offline if desired.
In our testing we utilized the macos package manager [homebrew](https://brew.sh/) to install our packages.
## Installing pandoc
-After you have homebrew make sure to install mactex:
+After installing homebrew make sure to install mactex:
```bash
brew install mactex
```
-Its a huge file but makes compiling everything super easy. Theres probably an equivalent on linux, but idk what it is
+This is a large file that simplifies compiling everything.
Finally install pandoc: [link](https://pandoc.org/installing.html)
```bash
@@ -18,12 +18,12 @@ brew install pandoc
```
### Installing on other platforms
-Other operating systems adn their respecitve latex/pandoc packages have not been tested nor will they be supported by LME. Since not every organization will have access to a MacOS operating system, but might wish to compile the docs anyway, please reachout and the team will attempt to help you compile the docs into a pdf. Any operating system with a latex package and pandoc executable should be able to accomplish the job. There are also many other ways to convert github flavored markdown to pdf if you google for them, and want to compile using a different method than we've provided here.
+Other operating systems and their respective latex/pandoc packages have not been tested, nor will LME support them. Since not every organization has access to a MacOS operating system but might wish to compile the docs anyway, please reach out to LME and the team will attempt to help you compile the docs into a pdf. Any operating system with a latex package and pandoc executable should suffice. There are several other ways to convert github flavored markdown to pdf if you search online and want to compile using a different method than provided here.
## Compiling:
-This command below will compile the markdown docs on macos from the homebrew install pandoc/mactex packages:
+This command below will compile the markdown docs on MacOS from the homebrew install pandoc/mactex packages:
```bash
$ pandoc --from gfm --pdf-engine=lualatex -H ./build/setup.tex -V geometry:margin=1in --highlight-style pygments -o docs.pdf -V colorlinks=true -V linkcolor=blue --lua-filter=./build/emoji-filter.lua --lua-filter=./build/makerelativepaths.lua --lua-filter=./build/parse_breaks.lua --table-of-contents --number-sections --wrap=preserve --quiet -s $(cat ./build/includes.txt)
```
-On a successful compilation it will output the `docs.pdf` file, a pdf of all the docs. There is a small bug where the `troubleshooting.md` table does not display as expected, so if you want the notes in the table offline, we suggest you record the information manually, OR submit a pull request that fixes this bug :smile:.
+A successful compilation will output the `docs.pdf` file, a pdf of all the docs. There is a small bug where the `troubleshooting.md` table does not display as expected, so if you want the notes in the table offline, we suggest you record the information manually, OR submit a pull request that fixes this bug.
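Before running the compile command above, it can help to confirm the toolchain is on your PATH. A minimal sketch, assuming the Homebrew installs described earlier:

```shell
# Report whether each required tool is available; missing stays 0 if all are found.
missing=0
for tool in pandoc lualatex; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING -- install it before compiling"
    missing=1
  fi
done
```

`lualatex` is checked because the compile command selects it via `--pdf-engine=lualatex`; it is installed as part of the mactex package.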
diff --git a/docs/markdown/chapter1/chapter1.md b/docs/markdown/chapter1/chapter1.md
index 6658774b..af91d36b 100644
--- a/docs/markdown/chapter1/chapter1.md
+++ b/docs/markdown/chapter1/chapter1.md
@@ -6,13 +6,13 @@ Figure 1: Finished state of Chapter 1
## Chapter Overview
-In this chapter you will:
-* Add some Group Policy Objects (GPOs) to your Active Directory (AD).
-* Configure the Windows Event Collector listener service.
-* Configure clients to send logs to this box.
+This chapter will cover:
+* Adding some Group Policy Objects (GPOs) to your Active Directory (AD).
+* Configuring the Windows Event Collector listener service.
+* Configuring clients to send logs to this box.
## 1.1 Introduction
-This chapter will cover setting up the built-in Windows functionality for event forwarding. This effectively takes the individual events (such as a file being opened) and sends them to a central machine for processing. This is similar to the setup discussed in this [Microsoft blog](https://docs.microsoft.com/en-us/windows/security/threat-protection/use-windows-event-forwarding-to-assist-in-intrusion-detection).
+This chapter will cover setting up the built-in Windows functionality for event forwarding, effectively taking the individual events (such as a file being opened) and sending them to a central machine for processing. This is similar to the setup discussed in this [Microsoft blog](https://docs.microsoft.com/en-us/windows/security/threat-protection/use-windows-event-forwarding-to-assist-in-intrusion-detection).
Only a selection of events will be sent from the client's ‘Event Viewer’ to a central ‘Event Collector’. The events will then be uploaded to the database and dashboard in Chapter 3.
This chapter will require the clients and event collector to be Active Directory domain joined and the event collector to be either a Windows server or a Windows client operating system.
@@ -20,14 +20,14 @@ This chapter will require the clients and event collector to be Active Directory
## 1.2 Firewall rules and where to host
You will need TCP port 5985 open between the clients and the Windows Event Collector. You also need port 5044 open between the Windows Event Collector and the Linux server.
-We recommend that this traffic does not go directly across the Internet, so you should host the Windows Event Collector on the local network, in a similar place to the Active Directory server.
+We recommend that this traffic does not go directly across the internet, so you should host the Windows Event Collector on the local network, in a similar place to the Active Directory server.
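A quick way to spot-check these paths from a Linux shell is a plain TCP connect test. This is a hedged sketch (`wec.example.local` and `elk.example.local` are placeholder hostnames; substitute your own machines):

```shell
# Try a TCP connection to each required port; a timeout or refusal suggests
# a firewall rule (or the listening service) is not yet in place.
check_port() {
  host=$1; port=$2
  if timeout 3 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
    echo "$host:$port reachable"
  else
    echo "$host:$port NOT reachable"
  fi
}
check_port wec.example.local 5985   # clients -> Windows Event Collector (WinRM)
check_port elk.example.local 5044   # WEC -> Linux server (log shipping)
```

On the Windows side, `Test-NetConnection <host> -Port 5985` from PowerShell serves the same purpose.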
## 1.3 Download LME
-There are several files within the LME repo that need to be available on a domain controller. These files will be needed for both Chapters 1 and 2. While there are multiple ways to accomplish this, one simple method is to download the latest release package.
+There are several files within the LME repo that need to be available on a domain controller. You will need these files for both Chapters 1 and 2. While there are multiple ways to accomplish this, one simple method is to download the latest release package.
1. While on a domain controller, download [the desired release](https://github.com/cisagov/lme/releases/).
2. Open File Explorer, locate and extract the release file downloaded in step 1, for example, LME-1.0.zip.
-3. Move the LME folder somewhere safe. There is no set location where this folder is required to be, but it should be saved somewhere it won't be inadvertently modified or deleted during the installation process. After installation is complete, the folder can be safely deleted.
+3. Move the LME folder somewhere safe. There is no set location requirement for this folder, but you should save it somewhere it will not be inadvertently modified or deleted during the installation process. After installation is complete, you can safely delete the folder.
## 1.4 Import Group Policy objects
Group policy objects (GPOs) are a convenient way to administer technical policies across an Active Directory domain. LME comes with two GPOs that work together to forward events from the client machines to the Event Collector.
diff --git a/docs/markdown/chapter1/guide_to_ous.md b/docs/markdown/chapter1/guide_to_ous.md
index 78ec9158..9f99994f 100644
--- a/docs/markdown/chapter1/guide_to_ous.md
+++ b/docs/markdown/chapter1/guide_to_ous.md
@@ -2,8 +2,8 @@
## Guide to Organizational Units
What is an Organizational Unit?
-An Organizational Unit can in its simplest form be thought of as a folder to contain Users, Computers and groups.
-OUs can be used to select a subset of computers that you want to be included in the LME Client group for testing before rolling out LME site wide.
+An Organizational Unit is a folder that contains users, computers and groups.
+You can use OUs to select a subset of computers that you want to be included in the LME Client group for testing before rolling out LME site wide.
### 1 - How to make an OU
**1.1** Open the Group Policy Management Console by running ```gpmc.msc```. You can run this command by pressing Windows key + R.
diff --git a/docs/markdown/chapter2.md b/docs/markdown/chapter2.md
index 15326292..62df769a 100644
--- a/docs/markdown/chapter2.md
+++ b/docs/markdown/chapter2.md
@@ -5,7 +5,7 @@ In this chapter you will:
* Setup a GPO or SCCM job to deploy Sysmon across your clients.
## 2.1 Introduction
-Sysmon is a Windows service developed by Microsoft to generate rich Windows event logs with much more information than the default events created in Windows. Having comprehensive logs is critical in monitoring your system and keeping it secure. The information contained within Sysmon's logs are based on settings defined in an XML configuration file and can be configured to your liking, though templates will be provided to get you started.
+Microsoft developed Sysmon, a Windows service, to generate rich Windows event logs with much more information than the default events created in Windows. Having comprehensive logs is critical in monitoring your system and keeping it secure. The information contained within Sysmon's logs is based on settings defined in an XML configuration file and can be configured to your liking, though templates will be provided to get you started.
**By following this guide and using Sysmon, you are agreeing to the following EULA.
Please read this before continuing.
@@ -24,7 +24,7 @@ Using Microsoft Group Policy to deploy LME requires two main things:
If you get stuck while trying to add and configure GPO's, refer back to Chapter 1 for a quick refresher.
### 2.2.1 - Folder Layout
-A centralized network folder accessible by all machines that are going to be running Sysmon is needed. We suggest inside the SYSVOL directory as a suitable place since this is configured by default to have very restricted write permissions.
+You need a centralized network folder accessible by all machines that are going to be running Sysmon. We suggest inside the SYSVOL directory as a suitable place since this is configured by default to have very restricted write permissions.
**It is extremely important that the folder contents cannot be modified by users, hence recommending SYSVOL folder.**
The SYSVOL directory is located on the Domain Controller at `C:\Windows\SYSVOL\SYSVOL\`, where "YOUR-DOMAIN-NAME" refers to your active directory domain name. You can also access it over the network at `\\\SYSVOL\`. As you are adding files to the SYSVOL directory throughout this chapter, you can either add them on the Domain Controller locally or over the network.
@@ -90,7 +90,7 @@ This section sets up a scheduled task to run update.bat (stored on a network fol
Figure 2: Specify the path to the update.bat file as the action for the scheduled test.
-At this point, the GPO should be properly configured, but without additional intervention, it could take up to 24 hours for the scheduled task to activate. Before it does, Sysmon will not show up as a service on the clients. However, further steps can be taken to ensure immediate installation.
+At this point, you should have configured the GPO properly, but without additional intervention, it could take up to 24 hours for the scheduled task to activate. Before it does, Sysmon will not show up as a service on the clients. However, you can take further steps to ensure immediate installation.
- View the "Triggers" tab of the "LME-Sysmon-Task-Properties" page. Click "Daily," then "Edit..." Note the start time specified. Each day, starting at that specific time, the LME-Sysmon-Task will run, repeating every 30 minutes. If that time has already passed on the day you created the GPO, the task won't activate for the first time until the following day. Generally speaking, you'll want to set the time to the beginning of the day for complete coverage, but you may consider adjusting it temporarily for testing purposes so that it will activate while you can observe it.
- By default, Windows will update group policy settings only every 90 minutes. You can manually trigger a group policy update by running `gpupdate /force` in an elevated Command Prompt window on a given client to apply the GPO to that specific client immediately.
@@ -109,7 +109,7 @@ Uninstall program:
Detection method: `File exists - C:\Windows\sysmon64.exe`
## Chapter 2 - Checklist
-1. Ensure that your files and folders in the network share are nested and named correctly. Remember that in Windows, case in filenames or folders does not matter.
+1. Ensure that your files and folders in the network share are nested and named correctly. Remember that in Windows, case in file and folder names does not matter.
```
NETWORK_SHARE (e.g. SYSVOL)
diff --git a/docs/markdown/chapter3/chapter3.md b/docs/markdown/chapter3/chapter3.md
index c963ca22..ea8dc9b8 100644
--- a/docs/markdown/chapter3/chapter3.md
+++ b/docs/markdown/chapter3/chapter3.md
@@ -1,28 +1,29 @@
# Chapter 3 – Installing the ELK Stack and Retrieving Logs
## Chapter Overview
-In this chapter you will:
-* Install a new Linux server for events to be sent to.
-* Run a script to:
+Chapter 3 covers:
+* Installing a new Linux server for events to be sent to.
+* Running a script to:
* install Docker.
* secure the Linux server.
* secure the Elasticsearch server.
* generate certificates.
* deploy the LME Docker stack.
-* Configure the Windows Event Collector to send logs to the Linux server.
+* Configuring the Windows Event Collector to send logs to the Linux server.
## Introduction
This section covers the installation and configuration of the Database and search functionality on a Linux server. We will install the ‘ELK’ Stack from Elasticsearch for this portion.
What is the ELK Stack?
-"ELK" is the acronym for three open projects which come at no cost to users: Elasticsearch, Logstash, and Kibana. Elasticsearch is a search and analytics engine. Logstash is a server‑side data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and then sends it to a "stash" like Elasticsearch. Kibana lets users visualize data with charts and graphs in Elasticsearch.
+"ELK" is the acronym for three open projects which come at no cost to users: Elasticsearch, Logstash and Kibana. Elasticsearch is a search and analytics engine. Logstash is a server‑side data processing pipeline that ingests data from multiple sources simultaneously, transforms it and then sends it to a "stash" like Elasticsearch. Kibana lets users visualize data with charts and graphs in Elasticsearch.
+

Figure 1: Elastic Stack components
-Elasticsearch, Logstash, Kibana, and Winlogbeat are developed by [Elastic](https://www.elastic.co/). Before following this guide and running our install script, you should review and ensure that you agree with the license terms associated with these products. Elastic’s license terms can be found on their GitHub page [here](https://github.com/elastic). By running our install script you are agreeing to Elastic’s terms.
+[Elastic](https://www.elastic.co/) developed Elasticsearch, Logstash, Kibana and Winlogbeat. Before following this guide and running our install script, you should review and ensure that you agree with the license terms associated with these products. Elastic’s license terms can be found on their GitHub page [here](https://github.com/elastic). By running our install script you are agreeing to Elastic’s terms.
This script also makes use of Docker Community Edition (CE). By following this guide and using our install script you are agreeing to the Docker CE license, which can be found [here](https://github.com/docker/docker-ce/blob/master/LICENSE).
@@ -167,7 +168,7 @@ The command will ask for a password to connect. Enter your password and press en
`files_for_windows.zip` should then be downloaded to your desktop.
#### Method 3: Web Server
-You can also download the file over a Python HTTP server, included on Linux by default. On the Linux server, running the below commands will copy the zip file into your home directory, and host an HTTP server listening on port 8000.
+You can also download the file over a Python HTTP server, included on Linux by default. On the Linux server, running the below commands will copy the zip file into your home directory and host an HTTP server listening on port 8000.
\*\***This will download the files over http which is not encrypted,
so ensure you trust the network you're downloading the zip file over**\*\*
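As an illustration of this pattern (the directory, port and archive here are stand-ins created just for the demo, not the paths or commands shipped with LME), serving and fetching a file with Python's built-in HTTP server looks like this:

```shell
# Create a stand-in archive in a scratch directory (placeholder for the real zip).
mkdir -p "$HOME/http_demo" && cd "$HOME/http_demo"
echo "demo-archive" > files_for_windows.zip

# Host the current directory over HTTP on port 8000, in the background.
python3 -m http.server 8000 >/dev/null 2>&1 &
SERVER_PID=$!
sleep 1

# From the client side, download the file over HTTP.
curl -s http://localhost:8000/files_for_windows.zip

# Stop the temporary web server.
kill "$SERVER_PID" 2>/dev/null || true
```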
@@ -199,7 +200,7 @@ Whichever method you used in [step 3.2.4](#324-download-files-for-windows-event-
- wlbclient.crt
- winlogbeat.yml
-These are certificates, keys, and configuration files required for the Event Collector to securely transfer event logs to the Linux ELK server.
+These are the certificates, keys and configuration files required for the Event Collector to securely transfer event logs to the Linux ELK server.
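For orientation, the TLS-related portion of a `winlogbeat.yml` pointing at the ELK server typically looks something like the sketch below. The hostname, port and file paths here are placeholders, not the values generated by the LME install script:

```
output.logstash:
  hosts: ["your_linux_server:5044"]
  ssl.certificate_authorities: ["C:/Program Files/winlogbeat/root-ca.crt"]
  ssl.certificate: "C:/Program Files/winlogbeat/wlbclient.crt"
  ssl.key: "C:/Program Files/winlogbeat/wlbclient.key"
```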
**Download winlogbeat:**
There are a few steps we need to follow to trust the self-signed cert:
1. Grab the self-signed certificate authority for LME (done in step [3.2.4](#324-download-files-for-windows-event-collector)).
2. Have our clients trust the certificate authority (see command below).
-This will trust the self signed cert and any other certificates it signs. If this certificate is stolen by an attacker, they can use it to trick your browser into trusting any website they setup. Make sure this cert is kept safe and secure.
+This will trust the self-signed cert and any other certificates it signs. If an attacker steals this certificate, they can use it to trick your browser into trusting any website they set up. Make sure this cert is kept safe and secure.
We've already downloaded the self-signed cert in previous steps in Chapter 3, so now we just need to tell Windows to trust the certificates our self-signed cert has setup for our LME services.
diff --git a/docs/markdown/chapter3/resilience.md b/docs/markdown/chapter3/resilience.md
index faf4fa2e..2aab9c48 100644
--- a/docs/markdown/chapter3/resilience.md
+++ b/docs/markdown/chapter3/resilience.md
@@ -1,10 +1,10 @@
# LME Resilience
The Elasticsearch Stack components of LME are installed on a single server using
-Docker for Linux, and this is the only supported installation. However, **if LME
-is installed on a single server and the hard drive fails or the server crashes
-then there is the potential for all of the logs to be lost.** It is therefore
-recommended that LME installers aim to configure a multi-server cluster to help
+Docker for Linux, and this is the only supported installation. However, **if
+a user installs LME on a single server and the hard drive fails or the server crashes,
+then there is the potential for all of the logs to be lost.** We
+recommend that LME users configure a multi-server cluster to
ensure data resiliency.
The [Elastic website](https://www.elastic.co/) contains documentation about how
diff --git a/docs/markdown/chapter4.md b/docs/markdown/chapter4.md
index 9c2f4cb7..bfbfa87f 100644
--- a/docs/markdown/chapter4.md
+++ b/docs/markdown/chapter4.md
@@ -1,34 +1,34 @@
# Chapter 4 - Post Install Actions
## Chapter Overview
-In this chapter we will:
-* Log in to Kibana in order to view your logs
-* Check you are getting logs from your clients
+In this chapter you will:
+* Log in to Kibana to view logs
+* Check that logs are being received
* Enable the default detection rules
-* Learn the basics of using Kibana
+* Learn Kibana basics
## 4.1 Initial Kibana setup
-Once you have completed chapters 1 to 3, you can import a set of Kibana dashboards that we have created. These will help visualize the logs, and answer questions like 'What patch level are my clients running?'.
+Once chapters 1 to 3 are complete, you can import an existing set of Kibana dashboards, which will visualize the logs and answer questions like 'What patch level are the clients running?'.
In a web browser, navigate to ```https://your_Linux_server``` and authenticate with the credentials provided in [Chapter 3.2](/docs/markdown/chapter3/chapter3.md#32-install-lme-the-easy-way-using-our-script).
### 4.1.1 Import Initial Dashboards
-As of version 0.4 of LME, the initial process of creating an index and importing the dashboards should be handled automatically as part of the install process. This means upon logging in to Kibana a number of the dashboards should automatically be visible under the ‘Dashboard’ tab on the left-hand side.
+As of LME version 0.4, the install process automatically creates the initial index and imports the dashboards. Upon logging into Kibana, a number of dashboards should be visible under the ‘Dashboard’ tab on the left-hand side.
-If an error was encountered during the initial dashboard import then the upload can be reattempted by running the dashboard update script created within the root LME directory (**NOT** the one in 'Chapter 3 Files'):
+If the initial dashboard import has an error, you can reattempt the upload by running the dashboard update script created within the root LME directory (**NOT** the one in 'Chapter 3 Files'):
```
sudo /opt/lme/dashboard_update.sh
```
-:hammer_and_wrench: If this does not resolve the issue or you wish to manually import the dashboards for whatever reason, see [Troubleshooting: Manual Dashboard Install](/docs/markdown/reference/troubleshooting.md#manual-dashboard-install) for the previous installation instructions.
+:hammer_and_wrench: If this does not resolve the issue or you wish to manually import the dashboards, see [Troubleshooting: Manual Dashboard Install](/docs/markdown/reference/troubleshooting.md#manual-dashboard-install) for the previous installation instructions.
-### 4.1.2 Check you are receiving logs
+### 4.1.2 Check that logs are being received
-While on the Elastic home page, click on the hamburger icon on the left, then under "Analytics," find and click "Dashboard." From there, find and select "User Security." This will show a dashboard similar to Figure 2.
+While on the Elastic home page, click the hamburger icon on the left, then under "Analytics" click "Dashboard" and select "User Security" to show a dashboard similar to Figure 2.
@@ -37,17 +37,17 @@ While on the Elastic home page, click on the hamburger icon on the left, then un
Figure 2 - The LME NEW - User Security - Overview
-In the top right hand corner, click on the calendar icon to the left of "Last 15 minutes" and select "Today." This will change the date range to only include today's data, and the dashboard will then have an accurate representation of machines that have been sending logs. Changing to "Last 7 days" will be useful in the future to visualize logs over time.
+In the top right-hand corner, click the calendar icon to the left of "Last 15 minutes" and select "Today" to change the date range to only include today's data. The dashboard will then accurately represent the machines that have been sending logs. Changing to "Last 7 days" is useful to visualize logs over time.
## 4.2 Enable Alerts
Click on the hamburger icon on the top left, then under "Security," navigate to "Alerts" (in older versions, this may be titled "Detections").
-From here navigate to "Manage Rules" (In older versions, this may be titled "Manage Detection Rules"):
+Navigate to "Manage Rules" (in older versions, this may be titled "Manage Detection Rules"):

-Once this has been done, select the option to "Load Elastic prebuilt rules and timeline templates":
+Select the option to "Load Elastic prebuilt rules and timeline templates":

@@ -55,11 +55,11 @@ Once the prebuilt Elastic rules are installed, filter from the "Tags" option and

-From here, ensure that the maximum number of rows is shown so that all of the relevant rules can be selected at once (In recent versions, there is an ability to "Select All" rows):
+Ensure that the maximum number of rows is shown so that all relevant rules can be selected at once (recent versions include a "Select All" option):

-Lastly, select all of the displayed rules, expand "Bulk actions" and choose "Enable":
+Select all the displayed rules, expand "Bulk actions" and choose "Enable":

@@ -71,9 +71,9 @@ Rules without the "ML" tag should still be activated through this bulk action, r
### 4.2.1 Add rule exceptions
-Depending on your environment it may be desirable to add exceptions to some of the built-in Elastic rules shown above to prevent false positives from occurring. These will be specific to your environment and should be tightly scoped so as to avoid excluding potentially malicious behavior, but may be beneficial to filter out some of the benign behavior of LME (for example to prevent the Sysmon update script creating alerts).
+Depending on the environment, exceptions may be added to some of the built-in Elastic rules shown above to prevent false positives. These will be specific to your environment and should be tightly scoped to avoid excluding potentially malicious behavior, but it may be beneficial to filter out some of LME's benign behavior (for example, to prevent the Sysmon update script from creating alerts).
-An example of this is shown below, with further information available [here](https://www.elastic.co/guide/en/security/current/detections-ui-exceptions.html).
+An example is shown below, with further information available [here](https://www.elastic.co/guide/en/security/current/detections-ui-exceptions.html).
First, navigate to the "Manage Detection Rules" section as described above, and then search for and select the rule you wish to add an exception for:
@@ -85,11 +85,11 @@ Then navigate to the "Exceptions" tab above the "Trend" section and then select

-From here, configure the necessary exception, taking care to ensure that it is tightly scoped and will not inadvertently prevent detection of actual malicious behavior:
+Next, configure the necessary exception, taking care to ensure that it is tightly scoped and will not inadvertently prevent detection of actual malicious behavior:

-Note that in this instance the following command line value has been added as an exception, but the ```testme.local``` domain would need to be updated to match the location you installed the update batch script to during the LME installation, the same value used to update the scheduled task as described [here](/docs/markdown/chapter2.md#222---scheduled-task-gpo-policy).
+Note that in this instance the following command line value has been added as an exception, but the ```testme.local``` domain would need updating to match the location you installed the update batch script to during the LME installation, the same value used to update the scheduled task as described [here](/docs/markdown/chapter2.md#222---scheduled-task-gpo-policy).
```
C:\Windows\SYSTEM32\cmd.exe /c "\\testme.local\SYSVOL\testme.local\Sysmon\update.bat"
@@ -97,17 +97,17 @@ C:\Windows\SYSTEM32\cmd.exe /c "\\testme.local\SYSVOL\testme.local\Sysmon\update
## 4.3 Learning how to use Kibana
-If you have never used Kibana before, Elasticsearch has provided a number of videos exploring the features of Kibana and how to create new dashboards and analytics. https://www.youtube.com/playlist?list=PLhLSfisesZIvA8ad1J2DSdLWnTPtzWSfI
+Elasticsearch has provided a number of videos exploring the features of Kibana and how to create new dashboards and analytics. https://www.youtube.com/playlist?list=PLhLSfisesZIvA8ad1J2DSdLWnTPtzWSfI
-Kibana comes with many useful features. In particular, make note of the following:
+Kibana has many useful features; in particular, make note of the following:
### 4.3.1 Dashboards
-Found under "Analytics" -> "Dashboard," dashboards are a great way to visualize LME data. LME comes with several dashboards. Take some time to get familiar with the different dashboards already available. If interested in creating custom dashboards, see the link above for some starting points offered by Elasticsearch.
+Found under "Analytics" -> "Dashboard," dashboards visualize LME data. LME comes with several dashboards. Take some time to get familiar with the different dashboards already available. If interested in creating custom dashboards, see the link above for starting points offered by Elasticsearch.
Note: If you make changes to the dashboards that LME provides, be sure to save your changes to a dashboard with a different name. Otherwise, your changes will be overwritten when you upgrade LME.
### 4.3.2 Discover
-Found under "Analytics" -> "Discover," Discover allows you view raw events and craft custom filters to find events of interest. For example, to inspect all DNS queries made on a computer named "Example-1," you could insert the following query where it says "Filter your data using KQL syntax":
+Found under "Analytics" -> "Discover," Discover allows you to view raw events and craft custom filters to find events of interest. For example, to inspect all DNS queries made on a computer named "Example-1," you could insert the following query where it says "Filter your data using KQL syntax":
```
event.code: 22 and host.name: Example-1
```
diff --git a/docs/markdown/logging-guidance/filtering.md b/docs/markdown/logging-guidance/filtering.md
index 2e2fac4a..db4f957b 100644
--- a/docs/markdown/logging-guidance/filtering.md
+++ b/docs/markdown/logging-guidance/filtering.md
@@ -1,6 +1,6 @@
# Filtering logs:
-There may come a time where a log is not particularly useful or an aspect of LME proves overly verbose (e.g.: [Dashboard spamming events](https://github.com/cisagov/LME/issues/22). We try our best to make everything useful by default but cannot predict every eventuality since all environments will be different. So to enable users to make the LME system more useful (and hopefully commit their own pull requests back with updates :) ), we are documenting here how you can filter out logs in the:
+There may come a time when a log is not particularly useful or an aspect of LME proves overly verbose (e.g. [Dashboard spamming events](https://github.com/cisagov/LME/issues/22)). We try our best to make everything useful by default but cannot predict every eventuality since all environments will be different. To enable users to make the LME system more useful, we document here how to filter out logs in the:
1. Dashboard
2. Host logging utility (e.g. winlogbeat)
@@ -10,7 +10,7 @@ Have fun reading and applying these concepts
## Dashboard:
-The below example shows a filter that can be applied to a search, and saved with a dashboard to filter out unneeded windows event log [4624](https://www.ultimatewindowssecurity.com/securitylog/encyclopedia/event.aspx?eventID=4624) with a TargetUserName field that has a `$ `.
+The below example shows a filter that you can apply to a search and save with a dashboard to filter out unneeded Windows event log [4624](https://www.ultimatewindowssecurity.com/securitylog/encyclopedia/event.aspx?eventID=4624) entries with a TargetUserName field that contains a `$`.
```
{
"bool": {
diff --git a/docs/markdown/logging-guidance/other-logging.md b/docs/markdown/logging-guidance/other-logging.md
index 2cef8f6b..62c7fa6c 100644
--- a/docs/markdown/logging-guidance/other-logging.md
+++ b/docs/markdown/logging-guidance/other-logging.md
@@ -1,14 +1,14 @@
# Additional Logging
-As of the release of LME v0.5, the Logstash configuration has been modified to remove the exposed Syslog port from the LME host itself. Instead, LME has been changed to support ingest from multiple Elastic Beats - to make it easier to customize LME installs to handle additional logging in a manner compliant with the Elastic Common Schema (ECS).
+As of the release of LME v0.5, the Logstash configuration has been modified to remove the exposed Syslog port from the LME host itself. Instead, we have changed LME to support ingest from multiple Elastic Beats - to make it easier to customize LME installs to handle additional logging in a manner compliant with the Elastic Common Schema (ECS).
As the logging and analysis of Windows Event Logs is the central goal of LME, this support for other log types is not provided out of the box on fresh installations. However it can be manually configured using the steps below.
-Note: We **do not** provide technical support for this process or any issues arising from it. This information is provided as an example solely to help you get started expanding LME to suit your own needs as required. This information also assumes a level of familiarity with the concepts involved, and is not intended to be an "out of the box" solution in the same way as LME's Windows logging capabilities. We are working to support other logging data in the future.
+Note: We **do not** provide technical support for this process or any issues arising from it. We provide this information as an example solely to help you get started expanding LME to suit your own needs as required. This information assumes a level of familiarity with the concepts involved and is not intended to be an "out of the box" solution in the same way as LME's Windows logging capabilities. We are working to support other logging data in the future.
## Identify a Beat to Use
-In order to ingest different log types, Elastic provides a variety of different "Beat" log shippers beyond just the Winlogbeat shipper used by LME. Each of these is aimed at a specific type of data and logging, and so the first step is to review the type of data that you wish to add to LME, and what your needs for this log are, to decide which Beat suits this need best.
+To ingest different log types, Elastic provides a variety of "Beat" log shippers beyond just the Winlogbeat shipper used by LME. Each of these is aimed at a specific type of data and logging. The first step is to review the type of data that you wish to add to LME and what your needs for this log are, then decide which Beat best suits this need.
The following list provides links to Elastic's description of each Beat other than Winlogbeat, which can be used to evaluate their suitability, although generally speaking Filebeat would be used for most non-Windows operating system logging:
@@ -23,7 +23,7 @@ Once you have identified the correct Beat to use for your logging requirements,
### Identifying a module
-In the event you are using Filebeat, Auditbeat or Metricbeat, you will also have the option of using an additional "module" as part of your configuration to transform your data to comply with the Elastic Common Schema. In this instance, review the list of modules for the relevant Beat and decide if any of these are appropriate for the type of data you wish to ingest before proceeding:
+In the event you are using Filebeat, Auditbeat, or Metricbeat, you will also have the option of using an additional "module" as part of your configuration to transform your data to comply with the Elastic Common Schema. Review the list of modules for the relevant Beat and decide if any of these are appropriate for the type of data you wish to ingest before proceeding:
* [Auditbeat](https://www.elastic.co/guide/en/beats/auditbeat/current/auditbeat-modules.html)
* [Filebeat](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-modules.html)
@@ -31,7 +31,7 @@ In the event you are using Filebeat, Auditbeat or Metricbeat, you will also have
## Configuring LME Permissions
-Once you have identified the Beat required, LME will require additional configuration in order to allow Logstash to correctly create and use the relevant indices. Specifically, Elasticsearch needs to be modified to allow the logstash_writer user to manage an index pattern associated with the Beat you have chosen.
+Once you have identified the Beat required, LME will require additional configuration to allow Logstash to correctly create and use the relevant indices. Specifically, Elasticsearch needs to be modified to allow the logstash_writer user to manage an index pattern associated with the Beat you have chosen.
This can be done by accessing the `Roles` section under `Stack Management`:
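As a rough sketch of the index section of that role, the JSON might look like the fragment below. The `filebeat-*` pattern is just an example, and the exact privileges to grant are a judgment call for your environment:

```
{
  "indices": [
    {
      "names": ["filebeat-*"],
      "privileges": ["create_index", "create_doc", "write", "manage"]
    }
  ]
}
```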
@@ -53,13 +53,13 @@ After this click `Update role`:
## Beat Setup
-Once LME has been configured with the required permissions, you are able to proceed with the configuration of your chosen Beat. The steps for this will vary dependent upon the Beat you have selected and the logs you wish to collect.
+Once you configure LME with the required permissions, you can proceed with the configuration of your chosen Beat. The steps for this will vary depending on the Beat you have selected and the logs you wish to collect.
### Installation
-The installation will vary from Beat to Beat. In general it will likely involve either copying files in to Program Files and running a PowerShell script (similar to the LME Winlogbeat installation) if installing on Windows, or installing a package containing the Beat if installing on Linux or Mac OS.
+The installation will vary from Beat to Beat. In general it will likely involve either copying files into Program Files and running a PowerShell script (similar to the LME Winlogbeat installation) if installing on Windows, or installing a package containing the Beat if installing on Linux or macOS.
-Note: It is also possible to install a second Beat alongside the host used to run Winlogbeat as part of the LME installation process. This may be desirable in order to simplify the configuration process and transferring of files, although in practice any host compatible with the relevant Elastic beat can be used.
+Note: It is also possible to install a second Beat alongside the host used to run Winlogbeat as part of the LME installation process. This may be desirable to simplify the configuration process and transferring of files, although in practice any host compatible with the relevant Elastic beat can be used.
The Beat version used must match that officially supported by LME. Please check the corresponding document in [Chapter 3](/docs/markdown/chapter3/chapter3.md#331-files-required)
@@ -68,7 +68,7 @@ The instructions for the installation of each Beat available can be found by fol
#### Enable Modules (Optional)
-If using a "module" as part of the Beat set up, this can be enabled now. In order to enable a specific module please refer to the documentation for the relevant Beat, as listed here.
+If using a "module" as part of the Beat setup, you can enable it now. To enable a specific module, please refer to the documentation for the relevant Beat, as listed here.
Generally, modules can be listed by running the Beat directly with the command `modules list`, and then enabled by running `modules enable [module]`. For example to enable the Cisco module in Filebeat on Windows you would run the following commands from an administrative PowerShell window within the Filebeat directory:
@@ -81,7 +81,7 @@ PS > .\filebeat.exe modules enable cisco
#### Log Collection
-Once installed, configuring the Beat will depend largely on what log sources you wish to collect, how you wish to ingest them, and which Beat you have chosen to do this. Please see the standard Elastic documentation for specifics on how to ingest the log set which is relevant to you.
+Once installed, configuring the Beat will depend largely on what log sources you wish to collect, how you wish to ingest them and which Beat you have chosen to do this. Please see the standard Elastic documentation for specifics on how to ingest the log set which is relevant to you.
If using a module to collect logs, the log input should be configured in the `modules.d` folder within the Beat's installation directory. If not making use of a Beat which uses modules, it is instead configured in the Beat's base `yaml` file in the installation directory.
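For example, assuming the Cisco module mentioned earlier, a module configuration in `modules.d/cisco.yml` might look roughly like the sketch below. The variable names follow Filebeat's module convention; check the module's own documentation for the exact options and defaults:

```
- module: cisco
  asa:
    enabled: true
    var.syslog_host: 0.0.0.0
    var.syslog_port: 9001
```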
@@ -292,7 +292,7 @@ No specific advice around troubleshooting a custom log setup is available, as th
The generic troubleshooting steps listed [here](/docs/markdown/reference/troubleshooting.md) are still likely to be a good starting point if you do encounter any issues with this customisation, and should be reviewed if something goes wrong.
-One commonly observed flaw with some Beats is to default to a relication setting that is incompatible with LME's default single-node cluster, causing a yellow cluster health state and unassigned replica shards. This is likely to be fixed in a later release of Elastic, but in the meantime details on diagnosing and resolving it can be found here. If this re-occurs each time a new index is created for your additional logs, it can be resolved by editing the index template in `Stack Management` -> `Index Management` -> `Index Templates` -> `[beatname]-[beatversion]` to include the following settings:
+One commonly observed flaw with some Beats is defaulting to a replication setting that is incompatible with LME's default single-node cluster, causing a yellow cluster health state and unassigned replica shards. Elastic will likely fix this in a later release, but in the meantime details on diagnosing and resolving it can be found here. If this re-occurs each time a new index is created for your additional logs, it can be resolved by editing the index template in `Stack Management` -> `Index Management` -> `Index Templates` -> `[beatname]-[beatversion]` to include the following settings:
```
{
diff --git a/docs/markdown/logging-guidance/retention.md b/docs/markdown/logging-guidance/retention.md
index c66b2fac..eba4d8e0 100644
--- a/docs/markdown/logging-guidance/retention.md
+++ b/docs/markdown/logging-guidance/retention.md
@@ -2,7 +2,7 @@
By default, LME will configure an index lifecycle policy that will delete
indexes based on estimated disk usage. Initially, 80% of the disk will be used
-for the indices, with an assumption that a day of logs will use 1Gb of disk
+for the indexes, with an assumption that a day of logs will use 1Gb of disk
space.
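The estimate above can be turned into a quick back-of-the-envelope calculation. This is only an illustration of the stated assumptions (80% of the disk for indexes, roughly 1Gb of logs per day), not LME's actual sizing logic:

```python
def estimated_retention_days(disk_gb, usable_fraction=0.8, gb_per_day=1.0):
    """Rough retention estimate: usable index space divided by daily log volume."""
    return int(disk_gb * usable_fraction / gb_per_day)

# A 128 GB disk at 1 Gb of logs per day keeps roughly 102 days of indexes.
print(estimated_retention_days(128))  # prints: 102
```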
If you wish to adjust the number of days retained, then this can be done in
@@ -26,6 +26,6 @@ documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/d
more information.
Click the "Save policy" button and the new setting will be applied to the LME
-indices. The changes will be applied immediately, so care should be taken to
+indexes. The changes will be applied immediately, so care should be taken to
ensure that the new policy does not result in unwanted data loss. (E.g. by
reducing the retention period, which would cause existing logs to be deleted.)
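For reference, the "Save policy" button in Kibana ultimately writes an index lifecycle policy via the Elasticsearch API. A minimal sketch of a delete-only policy follows; the policy name and the 30-day `min_age` are illustrative assumptions, not LME's defaults:

```
PUT _ilm/policy/example-lme-policy
{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "30d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
```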
diff --git a/docs/markdown/maintenance/backups.md b/docs/markdown/maintenance/backups.md
index 43442ca1..0013ded9 100644
--- a/docs/markdown/maintenance/backups.md
+++ b/docs/markdown/maintenance/backups.md
@@ -1,15 +1,14 @@
# Backing up LME Logs
-Logs are backed up using the built-in Elastic facilities. Out of the box,
-Elasticsearch supports backing up to filesystems, and this is the only approach
-supported by LME. Other backup destinations are supported but these require
-separate plugins, and are not supported by LME.
+You back up logs using the built-in Elastic facilities. Out of the box,
+Elasticsearch supports backing up to filesystems, and this is the only approach LME supports. Other backup destinations are supported but these require
+separate plugins, and LME does not support them.
## Approach
-Backups are created using Elasticsearch snapshots. The initial snapshot will
-contain all of the current logs but subsequent backups will only contain changes
-since the last snapshot was taken. It is therefore possible to take regular
+You create backups using Elasticsearch snapshots. The initial snapshot will
+contain all of the current logs, but subsequent backups will only contain changes
+since the last snapshot was taken. It is therefore possible to take regular
backups without a significant effect on the system's performance and without
consuming large amounts of disk space.
@@ -20,12 +19,10 @@ consuming large amounts of disk space.
The LME installation creates a bind mount in Docker that maps to the
`/opt/lme/backups` directory on the host system.
-The LME log retention period is determined by the amount of disk space on the
-host system. Therefore it is **strongly** recommended that an external drive be
-mounted at the `/opt/lme/backups` location so that both disk space is conserved
+The amount of disk space on the host system determines the LME log retention period. We **strongly** recommend that you mount an external drive at the `/opt/lme/backups` location so that both disk space is conserved
and to ensure that backups exist on a separate drive. Backups use a large volume of disk space, and if the storage volume provided is not suitable to store these logs without running out of space backups may cease to function, or LME may stop working altogether if all available disk space on the primary host is consumed.
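For the external drive to persist across reboots, it can be mounted via `/etc/fstab`. A minimal sketch, assuming the backup drive is `/dev/sdb1` formatted as ext4 (the device name and filesystem are assumptions; in practice use your drive's UUID from `blkid`):

```
# Hypothetical /etc/fstab entry; 'nofail' lets the system boot even if the drive is absent
/dev/sdb1  /opt/lme/backups  ext4  defaults,nofail  0  2
```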
-Once the external drive has been mounted on the host, you will need to ensure the ownership of the `/opt/lme/backups` folder is correct, to ensure the elasticsearch user can write the backups correctly. By default this folder will likely be owned by the root user, and this will need to be changed so that it is owned by the user you created during the operating system's installation, typically Ubuntu or similar. This can be achieved using the following command:
+Once you have mounted the external drive on the host, ensure that the ownership of the `/opt/lme/backups` folder is correct so that the elasticsearch user can write backups. By default the root user will likely own this folder; change the owner to the user you created during the operating system's installation, typically Ubuntu or similar. To do this, use the following command:
```
sudo chown -R 1000 /opt/lme/backups/
@@ -68,7 +65,7 @@ then click the "Create a policy" button:
On the next screen, pick a name for your new policy ("lme-snapshots" in this
example). For the snapshot name the value `` will create
files with the prefix `lme-daily` and with the current date as a suffix. Make
-sure your new repository is selected, and then configure a schedule in line with
+sure that you select your new repository, and then configure a schedule in line with
your backup policy. Elasticsearch uses incremental snapshots for its backup,
and so only the previous day's logs will need to be snapshotted, which will help
minimize the performance impact.
@@ -87,7 +84,7 @@ Review the new policy and click "Create policy".

-If you want to test the new policy, or to create the initial snapshot, you can
+If you want to test the new policy or want to create the initial snapshot, you can
select the "Run now" option for the policy on the polices tab:

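After running the policy, you can also verify that the snapshot completed by querying the snapshot API from the Kibana Dev Tools console. A minimal sketch, assuming the repository was registered as `lme-backups` (the repository name is an assumption; use whatever name you chose earlier):

```
GET _snapshot/lme-backups/_all
```

The response lists each snapshot with a `state` field, which should read `SUCCESS` once the snapshot has completed.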
diff --git a/docs/markdown/maintenance/certificates.md b/docs/markdown/maintenance/certificates.md
index 5751dcdd..5181c25e 100644
--- a/docs/markdown/maintenance/certificates.md
+++ b/docs/markdown/maintenance/certificates.md
@@ -1,10 +1,10 @@
# Certificates
-The LME installation makes use of a number of TLS certificates to protect communications between Winlogbeat and Logstash, as well as to secure connections to Elasticsearch and Kibana. These certificates can either be generated by the installation script, or imported from an existing trusted Certificate Authority if one is in use within the environment.
+The LME installation makes use of a number of TLS certificates to protect communications between Winlogbeat and Logstash, as well as to secure connections to Elasticsearch and Kibana. The installation script can generate these certificates, or you can import them from an existing trusted Certificate Authority if one is in use within the environment.
## Regenerating Self-Signed Certificates
By default the installation script will generate a root Certificate Authority (CA) and then use this to generate certificates for Elasticsearch, Logstash and Kibana, as well as client certificates which will be used to authenticate the Winlogbeat client to Logstash.
-These self-signed certificates are only valid for two-years from the date of creation, and will need to be renewed periodically before they expire to ensure LME continues to function correctly. Note that the root self-signed CA has a validity of ten years by default and will not need to be regenerated regularly, unlike the others.
+These self-signed certificates are only valid for two years from the date of creation, and you will need to renew them periodically before they expire to ensure LME continues to function correctly. Note that the root self-signed CA has a validity of ten years by default and will not need to be regenerated regularly, unlike the others.
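To check when a certificate is due to expire, you can inspect it with `openssl`. The lines below first generate a throwaway demo certificate so the sketch is runnable as-is; in practice, point the last two commands at one of your real certificates, e.g. `/opt/lme/Chapter\ 3\ Files/certs/wlbclient.crt`:

```shell
# Demo only: create a throwaway two-year certificate to inspect
openssl req -x509 -newkey rsa:2048 -keyout /tmp/demo.key -out /tmp/demo.crt \
  -days 730 -nodes -subj "/CN=demo" 2>/dev/null

# Print the expiry date of the certificate
openssl x509 -enddate -noout -in /tmp/demo.crt

# Exit status 0 if the cert is still valid 30 days from now, 1 if it expires sooner
openssl x509 -checkend 2592000 -noout -in /tmp/demo.crt
```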
Regenerating the relevant certificates can be done by calling the "renew" function within the deploy script as shown below (*NOTE: You will need to know the IP address and the Fully Qualified Domain Name for the server before doing this*):
@@ -14,7 +14,7 @@ cd /opt/lme/Chapter\ 3\ Files/
sudo ./deploy.sh renew
```
-This will prompt you to select which certificates to regenerate, and can be used to individually recreate certificates as required or to replace the root CA and all other certificates entirely. When re-creating the certificates due to an imminent expiry the root CA can be left as is, with all of the certificates which are due to expire selected to be recreated:
+This will prompt you to select which certificates to regenerate; you can individually recreate certificates as required, or replace the root CA and all other certificates entirely. When re-creating certificates due to an imminent expiration, the root CA can be left as is, with all of the certificates which are due to expire selected to be recreated:
```bash
Do you want to regenerate the root Certificate Authority (warning - this will invalidate all current certificates in use) ([y]es/[n]o): n
@@ -120,23 +120,23 @@ In order for the Winlogbeat client certificate to be included in the ```files_fo
/opt/lme/Chapter\ 3\ Files/certs/wlbclient.key
/opt/lme/Chapter\ 3\ Files/certs/wlbclient.crt
```
-Alternatively these files can be transfered to the Windows Event Collector server separately if desired.
+Alternatively you can transfer these files to the Windows Event Collector server separately if desired.
### Installation
-Once the certificates have been generated as required and copied into the correct location, simply run the installer as instructed in [Chapter 3](/docs/markdown/chapter3/chapter3.md), selecting "No" when prompted to generate self-signed certificates. The installer should then ensure that the files are in the correct location and proceed as normal, making use of the manually created certificates instead.
+Once you have generated the certificates as required and copied them into the correct location, simply run the installer as instructed in [Chapter 3](/docs/markdown/chapter3/chapter3.md), selecting "No" when prompted to generate self-signed certificates. The installer should then ensure that the files are in the correct location and proceed as normal, making use of the manually created certificates instead.
## Migrating from Self-Signed Certificates
-It is possible to migrate from the default self-signed certificates to manually generated certificates at a later date, for example to move to enterprise certificates post-installation after an initial testing period. This can be done by taking advantage of the "renew" functionality within the deploy script to replace the certificates once they are in the correct place.
+It is possible to migrate from the default self-signed certificates to manually generated certificates at a later date, for example to move to enterprise certificates post-installation after an initial testing period. You can do this by taking advantage of the "renew" functionality within the deploy script to replace the certificates once they are in the correct place.
**NOTE: The default supported method of LME installation is to use the automatically created self-signed certificates, and we will be unable to support any problems that arise from generating the certificates manually incorrectly.**
To begin this process you will need to generate the required certificates that you intend to use as part of the LME installation going forward. The certificates must meet the requirements set out above under [Certificate Creation](#certificate-creation).
-Once the required certificates have been created they must be copied into the correct location, as described in the [Certificate Location](#certificate-locations) section above. If you have an existing installation with self-signed certificates then files will already exist in these locations, and will need to be overwritten with the newly created certificate files.
+Once you create the required certificates, you must copy them into the correct location, as described in the [Certificate Location](#certificate-locations) section above. If you have an existing installation with self-signed certificates then files will already exist in these locations, and you will need to overwrite them with the newly created certificate files.
-Once the certificate files have been copied into the correct locations calling the deploy script's "renew" function and prompting it **not** to regenerate any of the certificates will cause it to replace the currently in-use certificates with the newly copied files:
+Once you have copied the certificate files into the correct locations, calling the deploy script's "renew" function and prompting it **not** to regenerate any of the certificates will cause it to replace the currently in-use certificates with the newly copied files:
```
cd /opt/lme/Chapter\ 3\ Files/
diff --git a/docs/markdown/maintenance/upgrading.md b/docs/markdown/maintenance/upgrading.md
index 5f48ea70..73c640ff 100644
--- a/docs/markdown/maintenance/upgrading.md
+++ b/docs/markdown/maintenance/upgrading.md
@@ -2,12 +2,12 @@
Please see https://github.com/cisagov/LME/releases/ for our latest release.
-Below you can find the upgrade paths that are currently supported and what steps are required for these upgrades. Note that major version upgrades tend to include significant changes, and so will require manual intervention and will not be automatically applied, even if auto-updates are enabled.
+Below you can find the upgrade paths that are currently supported and the steps these upgrades require. Note that major version upgrades tend to include significant changes, so they require manual intervention and will not be applied automatically, even if you enable auto-updates.
-Applying these changes is automated for any new installations. But, if you have an existing installation, you need to conduct some extra steps. **Before performing any of these steps it is advised to take a backup of the current installation using the method described [here](/docs/markdown/maintenance/backups.md).**
+Applying these changes is automated for any new installations. If you have an existing installation, you need to conduct some extra steps. **Before performing any of these steps it is advised to take a backup of the current installation using the method described [here](/docs/markdown/maintenance/backups.md).**
## 1. Finding your LME version (and the components versions)
-When reporting an issue or suggesting improvements, it is important to include the versions of all the components, where possible. This ensures that the issue has not already been fixed!
+When reporting an issue or suggesting improvements, please include the versions of all the components where possible. This is to ensure that we have not already fixed the issue.
### 1.1. Windows Server
* Operating System: Press "Windows Key"+R and type ```winver```
@@ -29,7 +29,7 @@ LME does not support upgrading directly from versions prior to v0.5 to v1.0. Pri
## 3. Upgrade from v0.5 to v1.0.0
-Since LME's transition from the NCSC to CISA, the location of the LME repository has changed from `https://github.com/ukncsc/lme` to `https://github.com/cisagov/lme`. To obtain any further updates to LME on the ELK server, you will need to transition to the new git repository. Because vital configuration files are stored within the same folder as the git repo, it's simpler to copy the old LME folder to a different location, clone the new repo, copy the files and folders unique to your system, and then optionally delete the old folder. You can do this by running the following commands:
+Since LME's transition from the U.K. NCSC to CISA, the location of the LME repository has changed from `https://github.com/ukncsc/lme` to `https://github.com/cisagov/lme`. To obtain any further updates to LME on the ELK server, you will need to transition to the new git repository. Because vital configuration files are stored within the same folder as the git repo, it's simpler to copy the old LME folder to a different location, clone the new repo, copy the files and folders unique to your system, and then optionally delete the old folder. You can do this by running the following commands:
```
@@ -60,14 +60,14 @@ sudo ./deploy.sh upgrade
```
**The last step of this script makes all files only readable by their owner in /opt/lme, so that all root owned files with passwords in them are only readable by root. This prevents a local unprivileged user from gaining access to the elastic stack.**
-Once the deploy update is finished, next update the dashboards that are provided alongside LME to the latest version. This can be done by running the below script, with more detailed instructions available [here](/docs/markdown/chapter4.md#411-import-initial-dashboards):
+Once the deploy update is complete, next update the dashboards that are provided alongside LME to the latest version. You can do this by running the below script, with more detailed instructions available [here](/docs/markdown/chapter4.md#411-import-initial-dashboards):
\*\**NOTE:*\*\* *You may need to wait several minutes for Kibana to successfully initialize after the update before running this script during the upgrade process. If you encounter a "Failed to connect" error or an "Entity Too Large" error wait for several minutes before trying again.*
##### Optional Substep: Clear out old dashboards
**Skip this step if you don't want to clear out the old dashboards**
-The LME team will not be maintaining any old dashboards from the old NCSC LME version, so if you would like to clean up your LME you can remove the dashboards by navigating to: https:///app/management/kibana/objects
+The LME team will not be maintaining any old dashboards from the old U.K. NCSC LME version, so if you would like to clean up your LME you can remove the dashboards by navigating to: https:///app/management/kibana/objects
From there select all the dashboards in the search: `type:(dashboard)` and delete them.
Then you can re-import the new dashboards like above.
@@ -98,7 +98,7 @@ To update Winlogbeat:
3. Re-install Winlogbeat, using the new copy of files_for_windows.zip, following the instructions listed under [3.3 Configuring Winlogbeat on Windows Event Collector Server](/docs/markdown/chapter3/chapter3.md#33-configuring-winlogbeat-on-windows-event-collector-server)
### 3.3. Network Share Updates
-LME v1.0 made a minor change to the file structure used in the SYSVOL folder, so a few manual changes are needed to accommodate this.
+LME v1.0 made a minor change to the file structure used in the SYSVOL folder, so you need a few manual changes to accommodate this.
1. Set up the SYSVOL folder as described in [2.2.1 - Folder Layout](/docs/markdown/chapter2.md#221---folder-layout).
2. Replace the old version of update.bat with the [latest version](/Chapter%202%20Files/GPO%20Deployment/update.bat).
3. Update the path to update.bat used in the LME-Sysmon-Task GPO (refer to [2.2.3 - Scheduled task GPO Policy](/docs/markdown/chapter2.md#223---scheduled-task-gpo-policy)).
diff --git a/docs/markdown/prerequisites.md b/docs/markdown/prerequisites.md
index f34e9ed0..8e66db8e 100644
--- a/docs/markdown/prerequisites.md
+++ b/docs/markdown/prerequisites.md
@@ -4,7 +4,7 @@
## What kind of IT skills do I need to install LME?
-The LME project can be installed by someone at the skill level of a systems administrator or enthusiast. If you have ever…
+A user with the skill level of a systems administrator or enthusiast can install the LME project. If you have ever…
* Installed a Windows server and connected it to an Active Directory domain
@@ -13,9 +13,9 @@ The LME project can be installed by someone at the skill level of a systems admi
* Installed a Linux operating system, and logged in over SSH.
-… then you are likely to have the skills to install LME!
+… then you are likely to have the skills to install LME.
-We estimate that you should allow a couple of days to run through the entire installation process, though you can break up the process to fit your schedule. While we have automated steps where we can and made the instructions as detailed as possible, installation will require more steps than simply using an installation wizard.
+Allow a couple of days to run through the entire installation process. You can break up the process to fit your schedule. While we have automated steps and made the instructions as detailed as possible, installation will require more steps than simply using an installation wizard.
## High level overview diagram of the LME system
@@ -26,7 +26,7 @@ Figure 1: High level overview, linking to documentation chapters
## How much does LME cost?
-The portions of this package developed by the United States government are distributed under the Creative Commons 0 ("CC0") license. Portions created by government contractors at the behest of CISA are provided with the explicit grant of right to use, modify, and redistribute the code subject to this statement and the existing license structure. All other portions, including new submissions from all others, are subject to the Apache License, Version 2.0.
+The portions of this package developed by the United States government are distributed under the Creative Commons 0 ("CC0") license. CISA government contractors have created certain portions and are providing them with the explicit grant of right to use, modify, and redistribute the code subject to this statement and the existing license structure. All other portions, including new submissions from all others, are subject to the Apache License, Version 2.0.
This project (scripts, documentation, and so on) is licensed under the [Apache License 2.0 and Creative Commons 0](../../LICENSE).
The design uses open software which comes at no cost to the user, we will maintain a pledge to ensure that no paid software licenses are needed above standard infrastructure costs (With the exception of Windows Operating system Licensing).
diff --git a/docs/markdown/reference/faq.md b/docs/markdown/reference/faq.md
index cc9db992..d0521504 100644
--- a/docs/markdown/reference/faq.md
+++ b/docs/markdown/reference/faq.md
@@ -1,10 +1,10 @@
# FAQ
## Basic Troubleshooting
-You can find basic troubleshooting steps in the [Troubleshooting Guide](troubleshooting.md).
+Troubleshooting steps are in the [Troubleshooting Guide](troubleshooting.md).
## Finding your LME version (and the components versions)
-When reporting an issue or suggesting improvements, it is important to include the versions of all the components, where possible. This ensures that the issue has not already been fixed!
+When reporting an issue or suggesting improvements, it is important to include the versions of all the components, when possible, to ensure that the issue has not already been fixed.
### Windows Server
* Operating System: Press "Windows Key"+R and type ```winver```
diff --git a/docs/markdown/reference/troubleshooting.md b/docs/markdown/reference/troubleshooting.md
index 140d9d87..ad915d7d 100644
--- a/docs/markdown/reference/troubleshooting.md
+++ b/docs/markdown/reference/troubleshooting.md
@@ -2,9 +2,9 @@
## Troubleshooting Diagram
-Below is a diagram of the LME architecture with labels referring to possible issues at that specific location. Refer to the chart below for protocol information, process information, log file locations, and common issues at each point in LME.
+Below is a diagram of the LME architecture with labels referring to possible issues at specific locations. Refer to the chart below for protocol information, process information, log file locations, and common issues at each point in LME.
-You can also find more detailed troubleshooting steps for each chapter after the chart.
+You can find more detailed troubleshooting steps for each chapter after the chart.

@@ -23,9 +23,9 @@ Figure 1: Troubleshooting overview diagram
### Installing Group Policy Management Tools
-If you receive the error `Windows cannot find 'gpmc.msc'`, you need to install the optional feature `Group Policy Management Tools`.
+If you receive the error `Windows cannot find 'gpmc.msc'`, install the optional `Group Policy Management Tools` feature.
- - For Windows Server, follow Microsoft's instructions [here](https://learn.microsoft.com/en-us/azure/active-directory-domain-services/manage-group-policy#install-group-policy-management-tools). In short, you need to add the "Group Policy Management" Feature from the "Add Roles and Features" menu in Server Manager.
+ - For Windows Server, follow Microsoft's instructions [here](https://learn.microsoft.com/en-us/azure/active-directory-domain-services/manage-group-policy#install-group-policy-management-tools). In short, add the "Group Policy Management" Feature from the "Add Roles and Features" menu in Server Manager.
- For Windows 10/11, open the "Run" dialog box by pressing Windows key + R. Run the command `ms-settings:optionalfeatures` to open Windows Optional Features in Settings. Select "Add a Feature," then scroll down until you find `RSAT: Group Policy Management Tools`. Check the box next to it and select install.

@@ -38,7 +38,7 @@ If you receive the error `Windows cannot find 'gpmc.msc'`, you need to install t
Figure 3: Install RSAT: Group Policy Management Tools
-- Note: You only need `gpmc.msc` installed on one machine to manage the others. For example, you can install it only on the Domain Controller and modify the Group Policy from that machine.
+- Note: `gpmc.msc` only needs to be installed on one machine to manage the others. For example, install it only on the Domain Controller and modify the Group Policy from that machine.
### Installing Active Directory Domain Services
@@ -49,7 +49,7 @@ If you receive the error `dsa.msc` cannot be found, you will need to install `Ac
## Chapter 2 - Installing Sysmon
-If you are having trouble not seeing Sysmon logs in the client's Event Viewer or not seeing forwarded logs on the WEC, first try restarting all of your systems and running `gpupdate /force` on the domain controller and clients.
+If you don't see Sysmon logs in the client's Event Viewer or forwarded logs on the WEC, try restarting all systems and running `gpupdate /force` on the domain controller and clients.
### No Logs Forwarded from Clients
@@ -62,22 +62,22 @@ When diagnosing issues in installing Sysmon on the clients using Group Policy, t
By default, Windows will update group policy settings only every 90 minutes. You can manually trigger a group policy update by running `gpupdate /force` in a Command Prompt window on the Domain Controller and the client.
-If after ensuring that group policy is updated on the client the client is still missing `LME-Sysmon-Task`, continue to [Step 2](#2-the-task-is-improperly-configured).
+If the client is still missing `LME-Sysmon-Task` after ensuring that group policy is updated on the client, continue to [Step 2](#2-the-task-is-improperly-configured).
#### 2. The task is improperly configured
-Windows Tasks are a fickle beast. In order for a task to trigger for the first time, **the trigger time must be set at some time in the future**, even if the Task is set to run repeatedly at a given interval.
+For a task to trigger for the first time, **the trigger time must be set at some time in the future**, even if the task is set to run repeatedly at a given interval.
#### 3. The task runs, but Sysmon is not installed
-If you don't see `sysmon64` listed in `services.msc`, it's likely the install script failed somehow. Double check that the files are organized correctly according to the diagram in the [Chapter 2 checklist](/docs/markdown/chapter2.md#chapter-2---checklist).
+If you don't see `sysmon64` listed in `services.msc`, it's likely the install script failed. Double check that the files are organized correctly according to the diagram in the [Chapter 2 checklist](/docs/markdown/chapter2.md#chapter-2---checklist).
## Chapter 3 - Installing the ELK Stack and Retrieving Logs
### Events not forwarded to Kibana
The `winlogbeat` service installed in [section 3.3](/docs/markdown/chapter3/chapter3.md#33-configuring-winlogbeat-on-windows-event-collector-server) is responsible for sending events from the collector to Kibana. Confirm the `winlogbeat` service is running and check the log file (`C:\ProgramData\winlogbeat\logs`) for errors.
-By default the `ForwardedEvents` maximum log size is around 20MB so events will be lost if the `winlogbeat` service stops. Consider increasing the size of the `ForwardedEvents` log file to help reduce log loss in this scenario. Historical logs are sent once the `winlogbeat` service starts.
+By default the `ForwardedEvents` maximum log size is roughly 20MB so events will be lost if the `winlogbeat` service stops. Consider increasing the size of the `ForwardedEvents` log file to help reduce log loss in this scenario. Historical logs are sent once the `winlogbeat` service starts.
* Open Microsoft Event View (`eventvwr`)
* Expand _Windows Logs_ and right click _Forwarded Events_
@@ -87,7 +87,7 @@ By default the `ForwardedEvents` maximum log size is around 20MB so events will

### Events not forwarding from Domain Controllers
-Please be aware that Logging Made Easy does not currently support logging Domain Controllers, and the log volumes may be significant from servers with this role. If you wish to proceed forwarding logs from your Domain Controllers please be aware you do this at your own risk! Monitoring such servers has not been tested and may have unintended side effects.
+Please be aware that Logging Made Easy does not currently support logging Domain Controllers, and the log volumes may be significant from servers with this role. If you wish to proceed with forwarding logs from your Domain Controllers, be aware that you do this at your own risk. LME has not tested monitoring such servers, and doing so may have unintended side effects.
@@ -103,7 +103,7 @@ root@util:~# resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv
```
### Containers restarting/not running:
-Usually if you have issues with containers restarting there is probably something wrong with your host or the container itself. Like in the above sample, a wrong password could be preventing the Elastic Stack from operating properly. You can check the container logs like so:
+Usually if containers keep restarting, something is wrong with the host or the container itself. Similar to the sample above, a wrong password could prevent the Elastic Stack from operating properly. You can check the container logs like so:
```
#TO list the name of the container
sudo docker ps --format "{{.Names}}"
@@ -114,7 +114,7 @@ sudo docker logs -f [CONTAINER_NAME]
Hopefully that is enough to determine the issue, but below we have some common issues you could encounter:
#### Directory Permission issues
-If you encounter errors like [this](https://github.com/cisagov/LME/issues/15) in the container logs, probably your host ownership or permissions for mounted files, don't match what the container expects them to be. In this case the `/usr/share/elasticsearch/backups` which is mapped from `/opt/lme/backups` on the host.
+If you encounter errors like [this](https://github.com/cisagov/LME/issues/15) in the container logs, your host ownership or permissions for mounted files probably don't match what the container expects. In this case the directory is `/usr/share/elasticsearch/backups`, which is mapped from `/opt/lme/backups` on the host.
You can see this in the [docker-compose-stack.yml](https://github.com/cisagov/LME/blob/main/Chapter%203%20Files/docker-compose-stack.yml) file:
```
╰─$ cat Chapter\ 3\ Files/docker-compose-stack.yml | grep -i volume -A 5
@@ -127,7 +127,7 @@ You can see this in the [docker-compose-stack.yml](https://github.com/cisagov/LM
target: /usr/share/elasticsearch/backups
```
-To fix this you can change the permissions to what the container expects:
+To fix this, change the permissions to what the container expects:
```
sudo chown -R 1000:1000 /opt/lme/backups
```
@@ -136,7 +136,7 @@ We know this by investigating the backing docker container image for elasticsear
#### deploy.sh stalls on: waiting for elasticsearch to connect
-This was a bug that was fixed in the current iteration of deploy.sh. This occurs if the `elastic` user password was already set in a previous deployment of LME. The easiest fix for this is to delete your old LME volumes as that will clear out any old settings that would be preventing install.
+This bug was fixed in the current iteration of deploy.sh. This occurs if the `elastic` user password was already set in a previous deployment of LME. The easiest fix is to delete your old LME volumes as that will clear out any old settings that would be preventing install.
```
#DONT RUN THIS IF YOU HAVE DATA YOU WANT TO PRESERVE!!
sudo docker volume rm lme_esdata
@@ -156,12 +156,12 @@ echo "xpack.security.http.ssl.verification_mode: certificate" >> config/elastics
#add a -f if needed
elasticsearch-reset-password -v -u elastic -i --url https://localhost:9200
```
-If the elasticsearch-reset-password is not available in your version of elasticsearch, you may be able to try recreating the container with a newer version of LME and running the same above steps. We have not tested this last suggestion, so attempting this last step won't be supported, but is worth a try if none of the above works.
+If `elasticsearch-reset-password` is not available in your version of elasticsearch, you can try recreating the container with a newer version of LME and running the same steps above. This has not been tested and is not supported, but it is worth a try if none of the above works.
### Elasticsearch fails to boot on Linux server
-Sometimes environmental differences can make the installation process get screwed up [ISSUE](https://github.com/cisagov/LME/issues/21). If you have the luxury, you could perform a full reinstall:
+Sometimes environmental differences can break the installation process [ISSUE](https://github.com/cisagov/LME/issues/21). In this case a full reinstall may be necessary:
-If you are unable to access https://, this is most likely because the elasticsearch service fails to run on the Linux server. To perform a full reinstall:
+If https:// is inaccessible, this is most likely because the elasticsearch service failed to run on the Linux server. To perform a full reinstall:
```
cd /opt/lme/Chapter\ 3\ Files/
sudo ./deploy.sh uninstall
@@ -174,7 +174,7 @@ cd /opt/lme/Chapter\ 3\ Files/
sudo ./deploy.sh install
#Save credentials, then continue with Chapter 3 installation
```
-Optionally you could uninstall docker entirely and reinstall it from the deploy.sh script. If you do end up removing Docker this link could be helpful: https://askubuntu.com/a/1021506.
+You could also uninstall docker entirely and reinstall it from the deploy.sh script. If you do end up removing Docker, this link could be helpful: https://askubuntu.com/a/1021506.
## Chapter 4 and Beyond
@@ -225,7 +225,7 @@ If this Index pattern is not selected as the default, this can be re-done by cli
### Unhealthy Cluster Status
-There are a number of reasons why the cluster's health may be yellow or red, but a common cause is unassigned replica shards. As LME is a single-node instance by default this is means that replicas will never be assigned, but this issue is commonly caused by built-in indices which do not have the `index.auto_expand_replicas` value correctly set. This will be fixed in a future release of Elastic, but can be temporarily diagnosed and resolved as follows:
+There are a number of reasons why the cluster's health may be yellow or red, but a common cause is unassigned replica shards. As LME is a single-node instance by default, replicas will never be assigned; this issue is commonly caused by built-in indices which do not have the `index.auto_expand_replicas` value correctly set. Elastic will fix this in a future release, but for now you can temporarily diagnose and resolve it as follows:
Check the cluster health by running the following request against Elasticsearch (an easy way to do this is to navigate to `Dev Tools` in Kibana under `Management` on the left-hand menu):
@@ -239,7 +239,7 @@ If it shows any unassigned shards, these can be enumerated with the following co
GET _cat/shards?v=true&h=index,shard,prirep,state,node,unassigned.reason&s=state
```
-If the `UNASSIGNED` shard is shown as `r` rather than `p` this means it's a replica. In this case the error can be safely fixed in the single-node default installation of LME by forcing all indices to have a replica count of 0 using the following request:
+If the `UNASSIGNED` shard is shown as `r` rather than `p` this means it's a replica. In this case you can safely fix the error in the single-node default installation of LME by forcing all indices to have a replica count of 0 using the following request:
```
PUT _settings
@@ -256,9 +256,9 @@ For errors encountered when re-indexing existing data as part of an an LME versi
### Illegal Argument Exception While Re-Indexing
-With the correct mapping in place it is not possible to store a string value in any of the fields which represent IP addresses, for example ```source.ip``` or ```destination.ip```. If any of these values are represented in your current data as strings, such as ```LOCAL``` it will not be possible to successfully re-index with the correct mapping. In this instance the simplest fix is to modify your existing data to store the relevant fields as valid IP representations using the update_by_query method, documented [here](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-update-by-query.html).
+With the correct mapping in place it is not possible to store a string value in any of the fields which represent IP addresses, for example ```source.ip``` or ```destination.ip```. If any of these values are present in your current data as strings, such as ```LOCAL```, the data will not successfully re-index with the correct mapping. In this instance the simplest fix is to modify the existing data to store the relevant fields as valid IP representations using the update_by_query method, documented [here](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-update-by-query.html).
-An example of this is shown below, which may need to be modified for the particular field that is causing problems:
+An example of this is shown below, which may need modification for the particular field that is causing problems:
```
POST winlogbeat-11.06.2021/_update_by_query
@@ -282,7 +282,7 @@ For security the self-signed certificates generated for use by LME at install ti
### Dashboard Update Script Failing
-If you encounter an error when the dashboards are updated using the dashboard update script, either manually or as part of automatic updates, this may mean that your current version of Elastic is too old to support the minimum functionality required for the new dashboard versions. Ensure that the latest supported version of the Elastic stack is in use with the following command:
+If you encounter an error when you update the dashboards using the dashboard update script, either manually or as part of automatic updates, this may mean that your current version of Elastic is too old. Ensure that the latest supported version of the Elastic stack is in use with the following command:
```
cd /opt/lme/Chapter\ 1\ Files/
sudo ./deploy.sh update
@@ -313,7 +313,7 @@ sudo docker stack deploy lme --compose-file /opt/lme/Chapter\ 3\ Files/docker-co
### Changing elastic Username Password
-After doing an install if you wish to change the password to the elastic username you can use the following command:
+If you wish to change the password for the elastic user after installing, you can use the following command:
NOTE: You will need to run this command with an account that can access /opt/lme. If you can't sudo, the user account will at least need to be able to access the certs referenced in the command.
diff --git a/testing/Readme.md b/testing/Readme.md
index 8577bf09..de0f7b52 100644
--- a/testing/Readme.md
+++ b/testing/Readme.md
@@ -3,22 +3,22 @@ This script creates a "blank slate" for testing/configuring LME.
Using the Azure CLI, it creates the following:
- A resource group
-- A virtual network, subnet, and network security group
+- A virtual network, subnet and network security group
- 2 VMs: "DC1," a Windows server, and "LS1," a Linux server
- Client VMs: Windows clients "C1", "C2", etc. up to 16 based on user input
- Promotes DC1 to a domain controller
- Adds C1 to the managed domain
- Adds a DNS entry pointing to LS1
-This script does not install LME; it simply creates a fresh environment that's ready to have LME installed.
+This script does not install LME. It simply creates a fresh environment that's ready to have LME installed.
## Usage
| **Parameter** | **Alias** | **Description** | **Required** |
|--------------------|-----------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------|
-| $ResourceGroup | -g | The name of the resource group that will be created for storing all testbed resources. | Yes |
+| $ResourceGroup | -g | The name of the resource group that you will create for storing all testbed resources. | Yes |
| $NumClients | -n | The number of Windows clients to create; maximum 16; defaults to 2 | No |
| $AutoShutdownTime | | The auto-shutdown time in UTC (HHMM, e.g. 2230, 0000, 1900); auto-shutdown not configured if not provided | No |
-| $AutoShutdownEmail | | An email to be notified if a VM is auto-shutdown. | No |
+| $AutoShutdownEmail | | An email to notify if a VM is auto-shutdown. | No |
| $AllowedSources | -s | Comma-Separated list of CIDR prefixes or IP ranges, e.g. XX.XX.XX.XX/YY,XX.XX.XX.XX/YY,etc..., that are allowed to connect to the VMs via RDP and ssh. | Yes |
| $Location | -l | The region you would like to build the assets in. Defaults to westus | No |
| $NoPrompt          | -y        | Switch, run the script with no prompt (useful for automated runs). By default, the script will prompt the user to review parameters and confirm before continuing.  | No           |
@@ -33,9 +33,9 @@ Example:
| **#** | **Step** | **Screenshot** |
|-------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------|
| 1 | Open a cloud shell by navigating to portal.azure.com and clicking the shell icon. |  |
-| 2 | Select PowerShell. |  |
+| 2 | Select PowerShell. |  |
| 3 | Clone the repo `git clone https://github.com/cisagov/LME.git` and then `cd LME\testing` | |
-| 4 | Run the script, providing values for the parameters when promoted (see [Usage](#usage)). The script will take ~20 minutes to run to completion. |  |
+| 4     | Run the script, providing values for the parameters when prompted (see [Usage](#usage)). The script will take approximately 20 minutes to run to completion.                                                                                    |  |
| 5 | Save the login credentials printed to the terminal at the end (They will also be in a file called `<$ResourceGroup>.password.txt`). At this point you can login to each VM using RDP (for the Windows servers) or SSH (for the Linux server). |  |
| 6 | When you're done testing, simply delete the resource group to clean up all resources created. |  |
@@ -62,7 +62,7 @@ Flags:
## Usage
| **Parameter** | **Alias** | **Description** | **Required** |
|-------------------|-----------|----------------------------------------------------------------------------------------|--------------|
-| $ResourceGroup | -g | The name of the resource group that will be created for storing all testbed resources. | Yes |
+| $ResourceGroup | -g | The name of the resource group that you will create for storing all testbed resources. | Yes |
| $NumClients | -n | The number of Windows clients you have created; defaults to 2 | No |
| $DomainController | -w | The name of the domain controller in the cluster; defaults to "DC1" | No |
| $LinuxVm | -l | The name of the linux server in the cluster; defaults to "LS1" | No |
@@ -86,7 +86,5 @@ Example:
| 5 | Save the login credentials printed to the terminal at the end. *See note* | |
| 6 | When you're done testing, simply delete the resource group to clean up all resources created. | |
-Note: When the script finishes you will be in the azure_scripts directory, and you should see the elasticsearch credentials printed to the terminal.
-You will need to `cd ../../` to get back to the LME directory. All the passwords should also be in the `<$ResourceGroup>.password.txt` file.
-
-
+Note: When the script finishes you will be in the azure_scripts directory. You should see the elasticsearch credentials printed to the terminal.
+You will need to `cd ../../` to get back to the LME directory. All the passwords should be in the `<$ResourceGroup>.password.txt` file.
diff --git a/testing/selenium_tests.py b/testing/selenium_tests.py
index e7f7f9c7..239672a8 100644
--- a/testing/selenium_tests.py
+++ b/testing/selenium_tests.py
@@ -17,10 +17,6 @@
Additionally, you can pass in arguments to the unittest
library, such as the -v flag."""
-<<<<<<< HEAD
-=======
-
->>>>>>> 0cbe654 (Cut dev comments)
import unittest
import argparse
import sys
diff --git a/testing/tests/README.md b/testing/tests/README.md
index 1e52d411..a1cd63e8 100644
--- a/testing/tests/README.md
+++ b/testing/tests/README.md
@@ -20,11 +20,10 @@ Using Docker helps to avoid polluting your host environment with multiple versio
When you select the Python Tests option for running your container, there are already
config files for running tests in VSCode, so you won't have to set this part up.
-If you want to run tests within the
-Python Development environment option, you will have to make a `.vscode/launch.json` in the root
-of your environment. This folder isn't checked into the repo so it has to be manually
-created.
-The easy way to create this file is to click on the play button (triangle) with the little bug on it in your
+If you want to run tests within the Python Development environment option, you will have to make a `.vscode/launch.json` in the root
+of your environment. This folder isn't checked into the repo so it has to be manually created.
+
+To create this file, click on the play button (triangle) with the little bug on it in your
VSCode activity bar. There will be a link there to "create a launch.json file". Click on that link and select
"Python Debugger"->"Python File". This will create a file and open it. Replace its contents with the below
code to run the `api_tests` in `testing/tests/api_tests`.
@@ -72,15 +71,14 @@ container, it may take a little time for VSCode to install the necessary extensi
variables before running tests.
## Python Virtual Environment Setup
-In order for VSCode to use the python modules for the tests, you will want to install a
-python virtual environment for it to use. You can make a python virtual environment
+In order for VSCode to use the python modules for the tests, you will have to create a
+python virtual environment. You can make the virtual environment
folder that is available for both of the development containers by making it in the
`testing/tests` folder. Then you can have only one copy of the environment for both
container options.
You can do this by opening a new terminal in VSCode, within the `testing/tests`
directory, and running:
-
`python3 -m venv venv`
This will make a virtual environment for python to install its modules into.
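As a sketch, the full sequence from the `testing/tests` directory might look like this (the dependency install step is an assumption; use whatever requirements file the repo actually provides):

```shell
# Create the virtual environment in testing/tests
python3 -m venv venv
# Activate it for the current shell
. venv/bin/activate
# Then install the test dependencies into it, e.g.:
#   pip install -r requirements.txt   # filename is an assumption
```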