ELK Stack User Guide

Product: ELK Stack

Overview

This guide covers the deployment and configuration of the ELK Stack on Linux using cloudimg AMIs from the AWS Marketplace. The ELK Stack combines three powerful open source tools: Elasticsearch for search and analytics, Logstash for data ingestion and transformation, and Kibana for data visualization and dashboards.

What's included in this AMI:

  • Elasticsearch 8.6.1 with security enabled (HTTPS and authentication)
  • Logstash for log parsing and data pipeline processing
  • Kibana dashboard on port 5601
  • Java runtime environment
  • Preconfigured security with auto-generated credentials
  • OS package update script for keeping the system current
  • AWS CLI v2 for AWS service integration
  • Systems Manager Agent (SSM) for remote management
  • CloudWatch Agent for monitoring
  • Latest security patches applied at build time
  • 24/7 cloudimg support with guaranteed 24 hour response SLA

Prerequisites

Before launching this AMI, ensure you have:

  1. An active AWS account
  2. An active subscription to the ELK Stack listing on AWS Marketplace
  3. An EC2 key pair for SSH access
  4. Familiarity with EC2 instance management and SSH

Recommended Instance Type: t3.large (2 vCPU, 8 GB RAM) or larger. The minimum requirements are 1 vCPU, 1 GB RAM, and 20 GB disk space, but Elasticsearch is memory intensive and performs significantly better with 8 GB or more.

Step 1: Launch the AMI

  1. Navigate to the AWS Marketplace and search for "ELK Stack cloudimg"
  2. Click Continue to Subscribe, accept the terms, then Continue to Configuration
  3. Select your preferred Region and Software Version
  4. Click Continue to Launch
  5. Choose Launch through EC2 for full control over instance configuration
  6. Select your instance type (t3.large recommended)
  7. Configure storage: 20 GB gp3 minimum, 50 GB or more recommended for production log storage
  8. Configure your Security Group with the following inbound rules:
Port   Protocol   Source    Purpose
22     TCP        Your IP   SSH access
9200   TCP        Your IP   Elasticsearch API calls over HTTPS
5601   TCP        Your IP   Kibana dashboard

Important: Restrict ports 9200 and 5601 to trusted IP addresses only. These services provide access to your data and should not be exposed to the public internet.

  9. Select your EC2 key pair and launch the instance
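If you prefer to script the security group rules above, they can be created with the AWS CLI. This is a sketch: the security group ID and CIDR below are placeholders for your own values.

```shell
# Placeholders: replace with your security group ID and trusted IP range.
SG_ID=sg-0123456789abcdef0
MY_IP=203.0.113.10/32

# Open SSH (22), Elasticsearch (9200), and Kibana (5601) to the trusted IP only.
for port in 22 9200 5601; do
  aws ec2 authorize-security-group-ingress \
    --group-id "$SG_ID" \
    --protocol tcp \
    --port "$port" \
    --cidr "$MY_IP"
done
```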

Step 2: Connect via SSH

Once your instance is running and has passed both status checks (2/2), connect using SSH:

ssh -i your-key.pem ec2-user@<public-ip-address>

Replace your-key.pem with the path to your EC2 key pair and <public-ip-address> with your instance's public IP.

Important: Wait for the EC2 instance to reach 2/2 successful status checks before attempting to connect. If you connect too early, you may see errors such as:

Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
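Rather than polling the console, you can wait for the 2/2 status checks from your workstation with the AWS CLI (assumes the CLI is configured; the instance ID is a placeholder):

```shell
# Blocks until both EC2 status checks pass for the instance.
# i-0123456789abcdef0 is a placeholder for your instance ID.
aws ec2 wait instance-status-ok --instance-ids i-0123456789abcdef0
```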

To switch to the root user:

sudo su -

Step 3: Retrieve Credentials

The ELK Stack is preconfigured with security enabled. Randomly generated credentials are stored in log files on the instance.

Elasticsearch credentials:

cat /stage/scripts/elastic_password.log

Kibana system user credentials (the kibana_system user is used internally by Kibana to connect to Elasticsearch; you sign in to the Kibana UI with the elastic user):

cat /stage/scripts/kibana_password.log

Important: Make a note of these passwords. You will need them to access Elasticsearch and Kibana.
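For later non-interactive commands, it can be convenient to load the password into a shell variable. A minimal sketch, assuming the log file contains only the password; inspect the file first and adjust the extraction if it contains extra text:

```shell
# Assumes elastic_password.log holds just the password on one line;
# use grep/awk instead if the file contains additional text.
ES_PASS=$(sudo cat /stage/scripts/elastic_password.log | tr -d '[:space:]')
```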

Step 4: Verify Elasticsearch

Verify that Elasticsearch is running and responding to API requests:

curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic https://localhost:9200

Enter the password from the elastic_password.log file when prompted.

Expected output:

{
  "name" : "ip-172-31-89-117.ec2.internal",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "wdyXEmS-TEearSUr0MEhPA",
  "version" : {
    "number" : "8.6.1",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "lucene_version" : "9.4.2",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}

Step 5: Access Kibana

Open your web browser and navigate to:

http://<public-ip-address>:5601

You will see the Kibana login screen. Sign in with:

  • Username: elastic
  • Password: The value from /stage/scripts/elastic_password.log

After logging in, you will see the Kibana home page where you can add integrations, create dashboards, and explore your data.

Server Components

Component      Install Path
Java           /bin/java
Elasticsearch  /var/lib/elasticsearch
Logstash       /etc/logstash
Kibana         /etc/kibana

Note: Component versions may be updated on first boot by the automatic OS package update script.

Filesystem Layout

Mount Point             Size     Description
/                       38 GB    Root filesystem
/boot                   2 GB     Operating system kernel files
/var/lib/elasticsearch  9.8 GB   Elasticsearch data and indices

Key directories:

Directory                  Purpose
/etc/elasticsearch         Elasticsearch configuration
/var/lib/elasticsearch     Elasticsearch data (indices, shards)
/var/log/elasticsearch     Elasticsearch logs
/etc/logstash              Logstash configuration
/etc/logstash/conf.d       Logstash pipeline configurations
/var/log/logstash          Logstash logs
/etc/kibana                Kibana configuration
/var/log/kibana            Kibana logs
/etc/elasticsearch/certs   TLS certificates for Elasticsearch

Managing Services

All three services are managed via systemd and start automatically on boot.

Elasticsearch:

systemctl start elasticsearch
systemctl stop elasticsearch
systemctl status elasticsearch

Logstash:

systemctl start logstash
systemctl stop logstash
systemctl status logstash

Kibana:

systemctl start kibana
systemctl stop kibana
systemctl status kibana
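To check all three services at once, a small loop is handy (sketch):

```shell
# Print the active state (active/inactive/failed) of each ELK service.
for svc in elasticsearch logstash kibana; do
  echo "$svc: $(systemctl is-active "$svc")"
done
```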

Scripts and Log Files

Script/Log              Path            Description
initial_boot_update.sh  /stage/scripts  Updates the OS with the latest packages on first boot
initial_boot_update.log /stage/scripts  Output log for the boot update script
elastic_password.log    /stage/scripts  Elasticsearch user credentials
kibana_password.log     /stage/scripts  Kibana system user credentials

On Startup

An OS package update script runs on first boot to ensure the image is fully up to date. You can disable this by removing the script and its crontab entry:

rm -f /stage/scripts/initial_boot_update.sh

crontab -e
# Delete the following line, save and exit:
@reboot /stage/scripts/initial_boot_update.sh
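The same cleanup can also be done non-interactively. This rewrites the current user's crontab, so review crontab -l first:

```shell
# Filter the @reboot entry out of the crontab and remove the script.
crontab -l | grep -v 'initial_boot_update.sh' | crontab -
rm -f /stage/scripts/initial_boot_update.sh
```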

Configuring Logstash Pipelines

To ingest logs, create pipeline configuration files in /etc/logstash/conf.d/.

Example: Ingest syslog data

Create /etc/logstash/conf.d/syslog.conf:

input {
  file {
    path => "/var/log/messages"
    start_position => "beginning"
    type => "syslog"
  }
}

filter {
  grok {
    match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:host} %{DATA:program}: %{GREEDYDATA:message}" }
    # Replace the original message field with the parsed remainder
    overwrite => [ "message" ]
  }
}

output {
  elasticsearch {
    hosts => ["https://localhost:9200"]
    user => "elastic"
    password => "YOUR_ELASTIC_PASSWORD"
    ssl_certificate_authorities => "/etc/elasticsearch/certs/http_ca.crt"
    index => "syslog-%{+YYYY.MM.dd}"
  }
}

Restart Logstash to apply the new pipeline:

systemctl restart logstash
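Once Logstash has restarted and begun processing, you can confirm that documents are arriving by listing the daily syslog indices created by the pipeline (enter the elastic password when prompted):

```shell
# Lists indices matching the pipeline's syslog-* index pattern,
# with document counts and on-disk sizes.
curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic \
  "https://localhost:9200/_cat/indices/syslog-*?v"
```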

Troubleshooting

Cannot access Kibana on port 5601

  1. Verify Kibana is running: systemctl status kibana
  2. Check your security group allows port 5601 from your IP
  3. Kibana may take 1 to 2 minutes to start. Wait and retry.
  4. Check Kibana logs: tail -f /var/log/kibana/kibana.log
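From the instance itself you can rule out a security group problem by testing the port locally:

```shell
# If this prints an HTTP status code, Kibana is up and the issue is
# network access (security group or firewall), not the service itself.
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:5601
```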

Elasticsearch returns authentication errors

  1. Verify you are using the correct password from /stage/scripts/elastic_password.log
  2. Use the --cacert flag when making API calls with curl
  3. Reset the elastic password if needed: bash /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic

Elasticsearch cluster health is yellow or red

  1. Check cluster health: curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic "https://localhost:9200/_cluster/health?pretty"
  2. Yellow status with a single node is normal (no replica allocation possible)
  3. Red status indicates unassigned primary shards; check disk space and logs

Logstash pipeline is not processing data

  1. Verify Logstash is running: systemctl status logstash
  2. Check pipeline configuration syntax: /usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/
  3. Review Logstash logs: tail -f /var/log/logstash/logstash-plain.log

Security Recommendations

  • Change default passwords: Reset Elasticsearch and Kibana passwords after first login
  • Restrict network access: Only allow ports 9200 and 5601 from trusted IPs
  • Enable HTTPS for Kibana: Configure TLS in /etc/kibana/kibana.yml for encrypted access
  • Use role-based access control: Create dedicated users with minimal privileges in Kibana
  • Secure credential files: Delete /stage/scripts/elastic_password.log and kibana_password.log after noting the passwords
  • Monitor cluster health: Set up alerts for cluster status changes
  • Back up indices: Use Elasticsearch snapshots to S3 for disaster recovery
  • Keep the stack updated: Regularly update all components with yum update
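As a sketch of the snapshot recommendation above, an S3 repository can be registered through the Elasticsearch snapshot API. This assumes the instance has an IAM role granting access to the bucket; my-elk-snapshots and s3_backup are placeholder names:

```shell
# Registers an S3 snapshot repository named "s3_backup".
# Requires an IAM role (or credentials) with access to the bucket.
curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic \
  -X PUT "https://localhost:9200/_snapshot/s3_backup" \
  -H 'Content-Type: application/json' \
  -d '{"type": "s3", "settings": {"bucket": "my-elk-snapshots"}}'
```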

Support

If you encounter any issues with this product, contact cloudimg support:

  • Email: support@cloudimg.co.uk
  • Website: www.cloudimg.co.uk
  • Support hours: 24/7 with guaranteed 24 hour response SLA