## Requirements

| Name | Version |
|------|---------|
| terraform | >= 1.0 |
| aws | >= 5.0 |
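To satisfy these constraints, a root module can pin the versions explicitly. A minimal sketch (the exact pins are up to you, so long as they meet the minimums above):

```hcl
terraform {
  required_version = ">= 1.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.0"
    }
  }
}
```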

## Providers

| Name | Version |
|------|---------|
| aws | >= 5.0 |

## Modules

| Name | Source | Version |
|------|--------|---------|
| cloudwatch_logs | ./modules/cloudwatch-logs | n/a |
| cloudwatch_metrics | ./modules/cloudwatch-metrics | n/a |
| failure_bucket | terraform-aws-modules/s3-bucket/aws | ~> 3.0 |
| rds_logs | ./modules/rds-logs | n/a |
| s3_logfile | ./modules/s3-logfile | n/a |

## Resources

| Name | Type |
|------|------|
| aws_region.current | data source |
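The `aws_region` data source is what resolves the `{REGION}` placeholder in the default `delivery_failure_s3_bucket_name` (see Inputs below). A hedged sketch of the likely pattern inside the module; the local name `failure_bucket_name` is illustrative, not necessarily the module's actual identifier:

```hcl
data "aws_region" "current" {}

locals {
  # Substitute the current region into the configured bucket name,
  # e.g. "honeycomb-firehose-failures-{REGION}" becomes
  # "honeycomb-firehose-failures-us-east-1".
  failure_bucket_name = replace(
    var.delivery_failure_s3_bucket_name,
    "{REGION}",
    data.aws_region.current.name
  )
}
```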

## Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
| cloudwatch_log_groups | CloudWatch Log Group names to stream to Honeycomb. | `list(string)` | `[]` | no |
| delivery_failure_s3_bucket_name | Name for the S3 bucket that will be created to hold Kinesis Firehose delivery failures. | `string` | `"honeycomb-firehose-failures-{REGION}"` | no |
| enable_cloudwatch_metrics | If true, deploy the cloudwatch-metrics submodule to stream CloudWatch Metrics to Honeycomb. | `bool` | `false` | no |
| enable_rds_logs | If true, deploy the rds-logs submodule to stream RDS logs to Honeycomb. | `bool` | `false` | no |
| environment | The environment this code is running in. If set, it will be added as `env` to each event. | `string` | `""` | no |
| honeycomb_api_host | If you use Secure Tenancy or another proxy, put its scheme://host[:port] here. | `string` | `"https://api.honeycomb.io"` | no |
| honeycomb_api_key | Your Honeycomb team's API key. | `string` | n/a | yes |
| honeycomb_dataset | Honeycomb Dataset where events will be sent. | `string` | `"lb-access-logs"` | no |
| http_buffering_interval | Kinesis Firehose HTTP buffer interval, in seconds. | `number` | `60` | no |
| http_buffering_size | Kinesis Firehose HTTP buffer size, in MiB. | `number` | `15` | no |
| rds_db_engine | Engine of the RDS database whose logs should be streamed (used with `enable_rds_logs`). | `string` | `""` | no |
| rds_db_log_types | RDS log types to stream (used with `enable_rds_logs`). | `list(string)` | `[]` | no |
| rds_db_name | Name of the RDS database whose logs should be streamed (used with `enable_rds_logs`). | `string` | `""` | no |
| s3_backup_mode | Whether to back up to S3 only the data that failed delivery, or all data. | `string` | `"FailedDataOnly"` | no |
| s3_bucket_arn | The full ARN of the bucket storing logs; `s3_parser_type` must be passed along with this. | `string` | `""` | no |
| s3_buffer_interval | The Firehose S3 buffer interval, in seconds. See https://docs.aws.amazon.com/firehose/latest/dev/create-configure.html | `number` | `400` | no |
| s3_buffer_size | The Firehose S3 buffer size, in MiB. See https://docs.aws.amazon.com/firehose/latest/dev/create-configure.html | `number` | `10` | no |
| s3_compression_format | The Firehose S3 compression format. May be GZIP, Snappy, Zip, or Hadoop-Compatible Snappy. See https://docs.aws.amazon.com/firehose/latest/dev/create-configure.html | `string` | `"GZIP"` | no |
| s3_filter_prefix | Prefix within the logs bucket to restrict processing to. | `string` | `""` | no |
| s3_filter_suffix | Suffix of files that should be processed. | `string` | `".gz"` | no |
| s3_force_destroy | By default, AWS declines to delete S3 buckets that are not empty (`BucketNotEmpty: The bucket you tried to delete is not empty`). These buckets are used for backup if delivery or processing fails. To allow this module's resources to be removed, `force_destroy` defaults to `true`, allowing non-empty buckets to be deleted. To preserve those failed deliveries instead, set this to `false`, though Terraform will then be unable to cleanly destroy the module. | `bool` | `true` | no |
| s3_parser_type | The type of logfile to parse. | `string` | `""` | no |
| sample_rate | Sample rate; used for S3 logfiles only. See https://honeycomb.io/docs/guides/sampling/ | `number` | `1` | no |
| tags | Tags to add to resources created by this module. | `map(string)` | `null` | no |
| vpc_security_group_ids | List of security group IDs when the Lambda function should run in a VPC. | `list(string)` | `null` | no |
| vpc_subnet_ids | List of subnet IDs when the Lambda function should run in a VPC. Usually private or intra subnets. | `list(string)` | `null` | no |
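As a usage sketch, the example below wires the module up to stream a pair of CloudWatch Log Groups to Honeycomb. The `source` path, log-group names, and tag values are placeholders (assumptions, not from this document); the input names match the table above.

```hcl
module "honeycomb_aws_integrations" {
  # Placeholder path -- point this at wherever the module lives in your
  # configuration or registry.
  source = "../terraform-aws-integrations"

  honeycomb_api_key = var.honeycomb_api_key
  honeycomb_dataset = "production-logs"
  environment       = "production"

  # Log groups to stream; substitute your own names.
  cloudwatch_log_groups = [
    "/aws/lambda/checkout-service",
    "/aws/lambda/payment-service",
  ]

  tags = {
    team = "observability"
  }
}
```

To parse load-balancer logs from an S3 bucket instead, set `s3_bucket_arn` together with `s3_parser_type` (and optionally `s3_filter_prefix`, `s3_filter_suffix`, and `sample_rate`) in place of `cloudwatch_log_groups`.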

## Outputs

No outputs.