Let's start by creating the cluster. If you already have a cluster running, you can skip this part.
Creating the EKS cluster
Sign in to your AWS console and go to EKS.
Add > Create
Click Next.
Click Create.
After a few minutes, the cluster will be active.
To add nodes to our cluster, go to the Compute tab.
Add node group
Click Next.
We will be using 1 node for now.
Next > Create.
Wait for a few minutes until the node group is available.
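As a side note, if you prefer the command line over the console, eksctl can create an equivalent cluster with a single node group in one command. The sketch below is just an assumption about a reasonable setup, not part of the console flow above; the cluster name, region, node group name, and instance type are placeholders you should adjust.
eksctl create cluster \
  --name eks-cluster \
  --region us-east-1 \
  --nodegroup-name eks-nodes \
  --node-type t3.medium \
  --nodes 1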
Set up aws-cli on the local machine
Install the aws-cli if you haven't already.
Then, enter the following command to configure aws-cli.
aws configure
Go to IAM > Security Credentials.
Scroll down to Access keys and create one if you haven't already.
Enter the access key details at the prompts that appear after you run aws configure, along with the region you are using and the output format (I prefer JSON).
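For reference, the aws configure prompts look roughly like this; the values shown are placeholders, not real credentials:
AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Default region name [None]: us-east-1
Default output format [None]: json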
Use the command below to test your AWS credentials:
aws sts get-caller-identity
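If the credentials are configured correctly, you should get back a small JSON document roughly in this shape; the account ID and ARN below are placeholders:
{
    "UserId": "AIDAXXXXXXXXXXXXXXXXX",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/your-user-name"
}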
Now that we have created the EKS cluster, we need a kubeconfig entry (in ~/.kube/config) to access it.
To add the cluster to our local kubeconfig, use:
aws eks update-kubeconfig --name eks-cluster
Replace eks-cluster with the name of your cluster if it's different; if not, let's proceed.
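If the cluster lives in a region other than your default one, you may need to pass the region explicitly; the region below is just an example:
aws eks update-kubeconfig --name eks-cluster --region us-east-1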
Now we have the kubeconfig set up. Let's install kubectl on the local machine, in case you do not have it.
Kubectl is a command line utility used to access Kubernetes clusters.
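As a quick sketch for a Linux amd64 machine (other platforms are covered in the official Kubernetes docs), the install looks something like this:
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client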
Try:
kubectl get nodes
You should get output similar to the following, since we currently have one node in the cluster.
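Something along these lines; the node name, age, and version are placeholders that will differ in your cluster:
NAME                            STATUS   ROLES    AGE   VERSION
ip-192-168-12-34.ec2.internal   Ready    <none>   5m    v1.27.x-eks-xxxxxxx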
Now that the cluster is set up successfully, let's move on to Prometheus.
Create a Namespace & ClusterRole
Use the command below to create a namespace named monitoring:
kubectl create namespace monitoring
Prometheus uses the Kubernetes APIs to read all the available metrics from nodes, pods, deployments, etc. For this reason, we need to create an RBAC policy with read access to the required API groups and bind the policy to the monitoring namespace.
Create a file named clusterRole.yaml and copy the following RBAC role into it.
Note: In the role given below, you can see that we have added get, list, and watch permissions for nodes, services, endpoints, pods, and ingresses. The role binding is bound to the monitoring namespace. If you have any use case to retrieve metrics from any other object, you need to add it to this cluster role.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups:
  - extensions
  - networking.k8s.io   # Ingress lives in networking.k8s.io on current Kubernetes versions
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: default
  namespace: monitoring
Create the role using the following command.
kubectl create -f clusterRole.yaml
Create a Config Map To Externalize Prometheus Configurations
Create a file named config-map.yaml and copy the file contents from this link.
Execute the following command to create the config map in Kubernetes:
kubectl create -f config-map.yaml
Note: In Prometheus terms, the config for collecting metrics from a collection of endpoints is called a job.
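For example, one such job from a typical prometheus.yml looks roughly like the fragment below. This is only an illustrative snippet of the kind of scrape config the linked config map contains, not a replacement for it:
scrape_configs:
  - job_name: 'kubernetes-nodes'
    scheme: https
    kubernetes_sd_configs:
      - role: node
    tls_config:
      ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token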
Create a Prometheus Deployment
Create a file called prometheus-deployment.yaml and copy the following contents into it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-deployment
  namespace: monitoring
  labels:
    app: prometheus-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus-server
  template:
    metadata:
      labels:
        app: prometheus-server
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus
          args:
            - "--storage.tsdb.retention.time=12h"
            - "--config.file=/etc/prometheus/prometheus.yml"
            - "--storage.tsdb.path=/prometheus/"
          ports:
            - containerPort: 9090
          resources:
            requests:
              cpu: 500m
              memory: 500M
            limits:
              cpu: 1
              memory: 1Gi
          volumeMounts:
            - name: prometheus-config-volume
              mountPath: /etc/prometheus/
            - name: prometheus-storage-volume
              mountPath: /prometheus/
      volumes:
        - name: prometheus-config-volume
          configMap:
            defaultMode: 420
            name: prometheus-server-conf
        - name: prometheus-storage-volume
          emptyDir: {}
Create the deployment in the monitoring namespace using the following command:
kubectl create -f prometheus-deployment.yaml
After a few seconds, you can check whether the Prometheus deployment is up and running.
kubectl get deployments --namespace=monitoring
You will get output something like this:
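Roughly the following; the age and exact counts are placeholders:
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
prometheus-deployment   1/1     1            1           30s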
Connecting To Prometheus Dashboard
We will connect to the Prometheus dashboard using kubectl port forwarding.
First, get the Prometheus pod name.
kubectl get pods --namespace=monitoring
The output will look similar to the following.
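Roughly this; the pod name suffix comes from the ReplicaSet hash and will be different in your cluster:
NAME                                    READY   STATUS    RESTARTS   AGE
prometheus-deployment-96898bbc9-q8gh5   1/1     Running   0          1m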
Execute the following command with your pod name to access Prometheus from localhost port 8080.
kubectl port-forward prometheus-deployment-96898bbc9-q8gh5 8080:9090 -n monitoring
You should see output similar to the following.
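Something like the lines below, which kubectl prints while the forwarding session stays open:
Forwarding from 127.0.0.1:8080 -> 9090
Forwarding from [::1]:8080 -> 9090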
Now, go to http://localhost:8080, and you should be able to see the Prometheus dashboard.
Yayy! We have our Prometheus setup for Kubernetes up and running. This is just the basic setup; more configuration is needed to make it production-ready. Thank you.
If you wish to add Grafana to this cluster as well, refer to this article.