Added New Relic Script to Add the client on each instance. #35

Open · wants to merge 8 commits into master
2 changes: 1 addition & 1 deletion deploy.sh
@@ -75,7 +75,7 @@ fi
if [ "${CloudCommand}" != 'delete-stack' ]; then

#
# Generate CF JSON (this is ugly! REP)
# Generate CloudFormation JSON (this is ugly! REP)
#
USERDATA1=resources/user-data
USERDATA2=$(mktemp -t user-data.XXXXXXXXXX)
101 changes: 101 additions & 0 deletions documentation/HPAC-GraceFul-Failover-Settings-Specifications.md
@@ -0,0 +1,101 @@
# HPAC Failover Settings and Specifications
======================

Welcome to the HPAC Production Drupal document that specifies the settings inherent in the CloudFormation template used to build the production web server and database instances in Amazon Web Services. This file contains all of the settings and syntax used when creating the production AWS HA Drupal instances with automated scripts and CloudFormation Stack Templates.

## HPAC HA Drupal Infrastructure Initial Build
- Qty 1 - Elastic Load Balancer
- Qty 3 - Drupal Webserver Instances
  - 2 Web Server Instances in the us-east-1d Availability Zone
  - 1 Web Server Instance in the us-east-1a Availability Zone
- Qty 1 - Drupal Admin Server
  - 1 Web Server Instance in the us-east-1a Availability Zone
- Qty 2 - MySQL MultiAZDatabase Configuration
  - 1 RDS Instance PRIMARY in us-east-1d
  - 1 RDS Instance SECONDARY in us-east-1a
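
Once the stack has been created, the topology above can be cross-checked against what CloudFormation actually built. The following is a minimal sketch, assuming the stack name used in the run book (`HPAC-Drupal-Instance`); adjust the name if your deployment differs.

```
# List every resource the stack created, with its type and physical ID
aws cloudformation describe-stack-resources \
  --stack-name HPAC-Drupal-Instance \
  --query 'StackResources[].[LogicalResourceId,ResourceType,PhysicalResourceId]' \
  --output table
```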

### Drupal WebServer Instances

### Drupal WebServer Instance Failover Specifications

Customer Requirement: A minimum of 2 Drupal WebServers must exist at all times within 1 availability zone.

Supporting Template Syntax
```
"WebServerCapacity": {
"ConstraintDescription": "must be between 1 and 5 EC2 instances.",
"Default": "3",
"Description": "The initial number of WebServer instances",
"MaxValue": "5",
"MinValue": "2",
"Type": "Number"
},
```

Customer Requirement: Under load, the Drupal WebServers will auto-scale to handle any abnormal or increased load.
Supporting Template Syntax: none found. The current template does not appear to define an Auto Scaling policy or CloudWatch alarm; the `WebServerCapacity` parameter above only bounds the web tier between 2 and 5 instances, so scaling under load is not yet automated.
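
If scaling is added later, it can be inspected and attached with the CLI as well as in the template. The following is a minimal sketch, assuming the template creates an Auto Scaling group for the web tier; the group name below is illustrative and not taken from the template.

```
# Confirm the current group limits (should show MinSize 2 / MaxSize 5 for the web tier)
aws autoscaling describe-auto-scaling-groups \
  --query 'AutoScalingGroups[].[AutoScalingGroupName,MinSize,MaxSize,DesiredCapacity]' \
  --output table

# Attach a simple scale-out policy by hand (a CloudWatch alarm is still needed to trigger it)
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name WebServerGroup-EXAMPLE \
  --policy-name hpac-scale-out \
  --adjustment-type ChangeInCapacity \
  --scaling-adjustment 1 \
  --cooldown 300
```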

### Drupal AdminServer Instance Failover Specifications

Customer Requirement: A minimum of 1 Drupal Admin WebServer must exist at all times within 1 availability zone.

Supporting Template Syntax
```
"WebServerCapacitySingle": {
"ConstraintDescription": "must be between 1 and 1 EC2 instances.",
"Default": "1",
"Description": "The initial number of WebServer instances",
"MaxValue": "1",
"MinValue": "1",
"Type": "Number"
}
```
### Drupal Database Instance Failover Specifications

Customer Requirement: A minimum of 1 Drupal MySQL database must be operational at all times within 1 availability zone.

Supporting Template Syntax
```
"MultiAZDatabase": {
"AllowedValues": [
"true",
"false"
],
"ConstraintDescription": "must be either true or false.",
"Default": "true",
"Description": "Create a multi-AZ MySQL Amazon RDS database instance",
"Type": "String"
},
```
### Elastic LoadBalancer Failover Specifications

Customer Requirement: A load balancer must be utilized that is redundant across availability zones and that constantly performs health checks of the WebServer, Admin, and Database instances every 10 seconds, with an expected response time of 2 seconds.

Supporting Template Syntax

```
"ElasticLoadBalancer": {
"Metadata": {
"Comment": "Configure the Load Balancer with a simple health check and cookie-based stickiness"
},
"Properties": {
"AvailabilityZones": [
"us-east-1a",
"us-east-1d"
],
"HealthCheck": {
"HealthyThreshold": "2",
"Interval": "10",
"Target": "HTTP:80/",
"Timeout": "5",
"UnhealthyThreshold": "5"
},
"LBCookieStickinessPolicy": [
{
"CookieExpirationPeriod": "30",
"PolicyName": "CookieBasedPolicy"
}
],
```
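
The health-check values above can be verified on a running stack. This is a minimal sketch, assuming the stack name from the run book; the logical ID `ElasticLoadBalancer` is taken from the template resource shown above.

```
# Resolve the load balancer's physical name from the stack, then print its health check
ELB_NAME=$(aws cloudformation describe-stack-resources \
  --stack-name HPAC-Drupal-Instance \
  --logical-resource-id ElasticLoadBalancer \
  --query 'StackResources[0].PhysicalResourceId' --output text)
aws elb describe-load-balancers --load-balancer-names "$ELB_NAME" \
  --query 'LoadBalancerDescriptions[0].HealthCheck'
```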
26 changes: 24 additions & 2 deletions documentation/HPAC-Run-Book.md
@@ -1,9 +1,26 @@
#HPAC Run Book to deploy, update and delete the running service using CloudFormation orchestration.
# HPAC Run Book
======================

Welcome to the HPAC Production Drupal Run Book Document. This file contains all of the documentation associated with the syntax for creating, updating, and deleting the production AWS HA Drupal instances using automated scripts and CloudFormation Stack Templates.

## Preparing Your Environment

### Updating your git environment

The git repository associated with the HPAC Production Drupal HA sites is called "AWS-HA-Drupal". Please ensure that you are located in the proper directory and that your repository is up to date with the master. Perform the following commands to ensure that you have the latest code from the "AWS-HA-Drupal" repository.

```
$> git pull

```

Before proceeding, make sure that no errors occurred during the ```git pull```.

### Cloudformation Template Location

The CloudFormation template is located in ```your_git_user_name/AWS-HA-Drupal```.
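
If you do not yet have a local copy of the repository, a clone along the following lines works; the remote URL and the template filename are assumptions that depend on your fork and on where the template lives in the repository.

```
$> git clone git@github.com:your_git_user_name/AWS-HA-Drupal.git
$> cd AWS-HA-Drupal
$> ls *.json        # locate the CloudFormation template
```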

## Creating the HPAC Drupal HA Instances using CloudFormation and the aws create-stack command
### Creating the HPAC Drupal HA Instances using CloudFormation and the aws create-stack command

### The following arguments need to be exported in your environment before executing the deployment script.

@@ -53,7 +71,11 @@ $> aws cloudformation create-stack
--parameters ParameterKey=SitePassword,ParameterValue=hpacsitepassword
ParameterKey=DBPassword,ParameterValue=hpacdbpassword ParameterKey=Label,ParameterValue=HPAC-Drupal-Instance ParameterKey=KeyName,ParameterValue=HPACDrupalKeyPair
$> {
       "StackId": "arn:aws:cloudformation:us-east-1:219880708180:stack/HPAC-Drupal-Instance/e317ad30-fd44-11e3-a961-500162a66cb4"
$> }

```
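
Stack creation takes several minutes. One way to check progress until the status reaches `CREATE_COMPLETE` (stack name taken from the example above):

```
$> aws cloudformation describe-stacks --stack-name HPAC-Drupal-Instance \
     --query 'Stacks[0].StackStatus' --output text
```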
85 changes: 85 additions & 0 deletions documentation/HPAC-Splunk-Install.md
@@ -0,0 +1,85 @@
# HPAC Splunk Installation Guide
======================

Welcome to the HPAC Splunk Installation Guide. This file contains all of the documentation for adding Splunk Logging capabilities to the production AWS HA Drupal instances using automated scripts and CloudFormation Stack Templates.

### Service Description

The following code needs to be added to your CloudFormation Template; it automates the rotation of specific logs and their upload to an S3 bucket.

### S3 Bucket Standard Creation Parameters
```
S3 Bucket Name: hpac_splunk_logging_bucket
S3 Location (Region): US Standard
S3 Lifecycle Rules In Place: Expire (delete) all objects after 45 days
```
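
The guide does not show how the bucket itself is provisioned; one possible way to create it and the 45-day expiration rule from the CLI is sketched below. The exact commands are an assumption, and note that bucket names containing underscores are not DNS-compliant and may be rejected outside the legacy US Standard region.

```
# Create the logging bucket in US Standard (us-east-1)
aws s3api create-bucket --bucket hpac_splunk_logging_bucket --region us-east-1

# Expire (delete) objects 45 days after they are written
aws s3api put-bucket-lifecycle-configuration \
  --bucket hpac_splunk_logging_bucket \
  --lifecycle-configuration '{
    "Rules": [
      {
        "ID": "expire-after-45-days",
        "Filter": {"Prefix": ""},
        "Status": "Enabled",
        "Expiration": {"Days": 45}
      }
    ]
  }'
```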

### Splunk Script

```
# Begin Splunk Installation
# Install the s3cmd tools in the ec2-user home directory on the server
cd /home/ec2-user
git clone https://github.com/s3tools/s3cmd.git
cd s3cmd
python setup.py install

# Enable system logging to s3 by configuring log rotate
cat <<\syslogEOF > /etc/logrotate.d/syslog
/var/log/cron
/var/log/maillog
/var/log/messages
/var/log/secure
/var/log/spooler
{
missingok
# Rotate as soon as the log reaches 1 byte so that the per-INSTANCE_ID directories in S3
# are created on the first hourly logrotate run
size 1
sharedscripts
dateext
dateformat -%Y-%m-%d-%s
postrotate
/bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true
BUCKET=hpac_splunk_logging_bucket
INSTANCE_ID=`curl --silent http://169.254.169.254/latest/meta-data/instance-id | sed -e "s/i-//"`
/usr/bin/s3cmd -m text/plain sync /var/log/messages-* s3://${BUCKET}/${INSTANCE_ID}/var/log/
/usr/bin/s3cmd -m text/plain sync /var/log/cron-* s3://${BUCKET}/${INSTANCE_ID}/var/log/
/usr/bin/s3cmd -m text/plain sync /var/log/maillog-* s3://${BUCKET}/${INSTANCE_ID}/var/log/
/usr/bin/s3cmd -m text/plain sync /var/log/secure-* s3://${BUCKET}/${INSTANCE_ID}/var/log/
/usr/bin/s3cmd -m text/plain sync /var/log/spooler-* s3://${BUCKET}/${INSTANCE_ID}/var/log/
endscript
}
syslogEOF

# Enable system logging to s3 by configuring log rotate for httpd
cat <<\httpdEOF > /etc/logrotate.d/httpd
/var/log/httpd/*log {
missingok
# Rotate as soon as the log reaches 1 byte so that the per-INSTANCE_ID directories in S3
# are created on the first hourly logrotate run
size 1
notifempty
sharedscripts
dateext
dateformat -%Y-%m-%d-%s
postrotate
BUCKET=hpac_splunk_logging_bucket
INSTANCE_ID=`curl --silent http://169.254.169.254/latest/meta-data/instance-id | sed -e "s/i-//"`
/usr/bin/s3cmd -m text/plain sync /var/log/httpd/*log s3://${BUCKET}/${INSTANCE_ID}/var/log/httpd/
/sbin/service httpd reload > /dev/null 2>/dev/null || true
endscript
}
httpdEOF

# Set up cron to run logrotate hourly instead of daily
mv /etc/cron.daily/logrotate /etc/cron.hourly/.
# End Splunk Installation

```

### Manual Rotation of Server Logs:
Description: If you need to perform a manual rotation of the server logs, execute the following command on your Linux instances.
```
$> sudo logrotate -v -f /etc/logrotate.conf
```
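
To confirm that rotated logs are actually reaching the bucket, the same s3cmd tool installed by the script above can be used (the bucket name matches the one configured there):

```
$> s3cmd ls s3://hpac_splunk_logging_bucket/
```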