Overview
This year AWS have released several security enhancements for EKS, and security is probably the most critical element of any container cluster. One announcement that came out in 2022 was support for monitoring containers running on EKS with Amazon GuardDuty. This is good, because it means that any suspicious activity on the containers can now be found in the same place as alerts for configuration risks across the rest of the AWS environment.
The integration of EKS and GuardDuty allows continuous monitoring of your EKS clusters for suspicious behaviour, for both new and existing clusters, without having to make any significant configuration changes to them. This is possible because the feature is essentially a change to GuardDuty that gives it the ability to read the EKS audit logs, assuming you already have audit logging enabled on your EKS clusters, which most people will. Any GuardDuty findings for EKS will have a resource type of EKSCluster, to help you filter and find them quickly.
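As a small illustration, you could save a GuardDuty filter that matches only those EKS findings, so they are easy to pull up quickly. The following is a minimal Terraform sketch with placeholder names; it assumes a GuardDuty detector is already managed in your configuration (an example detector appears later in this post):

# Hypothetical saved filter that matches only Kubernetes/EKS findings.
resource "aws_guardduty_filter" "greg-eks-findings" {
  name        = "eks-findings"
  action      = "NOOP" # just save the filter; do not archive matching findings
  detector_id = aws_guardduty_detector.greg-detector.id
  rank        = 1

  finding_criteria {
    criterion {
      field  = "resource.resourceType"
      equals = ["EKSCluster"]
    }
  }
}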
When the EKS and GuardDuty integration was launched, it included [27 new types of GuardDuty findings](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty-remediate-kubernetes.html), including findings for detecting attempts to steal credentials, or attempts to evade detection through the use of anonymous credentials or even Tor exit nodes. For each security finding detected, information similar to the following will be included:
- Pod ID
- Container Image ID
- Any tags that have been applied
GuardDuty will look for suspicious entries, but will also use ML models to provide an even greater level of threat detection. The information from GuardDuty can also be used with Amazon Detective. You are now able to include the EKS audit log data source package in Detective, which allows in-depth information about EKS to be added to the Detective behaviour graph, covering not just the EKS cluster itself, but also pods and images.
Implementing
When EKS control plane logging is enabled on your cluster, the logs are automatically sent to CloudWatch Logs. You have a choice of which log types to enable, for new or existing EKS clusters. The log types are:
- API server
- Audit
- Authenticator
- Controller Manager
- Scheduler
For GuardDuty integration, audit logs need to be enabled, as these provide information about the users, administrators, or system components that used or changed something within your cluster. Basically, when the audit log is enabled on your cluster, the log entries in CloudWatch will tell you the what, when, who, how, and where.
You can use the following steps to enable the new GuardDuty integration:
- Enable the audit logs on your EKS clusters. This could be done via the console, or more likely from your IaC tool of choice (typically CloudFormation or Terraform). Remember that when enabling this, you should also configure the CloudWatch Log Group that these logs will go to, and set your desired log retention.
- Enable GuardDuty. If using Terraform, you can use an "aws_guardduty_detector" resource and enable the Kubernetes audit logs data source (see the sketch after this list). Once you have done that, you can go into CloudWatch and check for your new log group.
- The next step is to enable Amazon Detective. Note that you will not be able to enable Detective until GuardDuty has been enabled for at least 48 hours.
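To illustrate the last two steps, a minimal Terraform sketch might look like the following. The resource names are placeholders, and the Detective graph will only create successfully once the 48 hour GuardDuty requirement has been met:

# Enable GuardDuty and turn on the Kubernetes (EKS) audit log data source.
resource "aws_guardduty_detector" "greg-detector" {
  enable = true

  datasources {
    kubernetes {
      audit_logs {
        enable = true
      }
    }
  }
}

# Enable Amazon Detective by creating a behaviour graph.
resource "aws_detective_graph" "greg-detective-graph" {
  tags = {
    Name = "greg-detective-graph"
  }
}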
Enable the audit logs on your EKS clusters
To do this in Terraform, you can use the following code additions. First, you need to create a new log group in CloudWatch, which you can do using the following:
resource "aws_cloudwatch_log_group" "greg-cw-log-group" {
name = "greg-eks-cluster"
retention_in_days = 7
}
You then need to add a couple of lines to the EKS configuration. In this case, I added a line to my EKS configuration to enable logging, and also edited the depends_on value.
enabled_cluster_log_types = ["api", "audit"]

depends_on = [
  aws_iam_role.iam-role-eks-cluster,
  aws_cloudwatch_log_group.greg-cw-log-group,
]
In the context of the cluster resource itself, that would look like this:
resource "aws_eks_cluster" "greg-eks-cluster" {
name = "greg-eks-cluster"
role_arn = aws_iam_role.iam-role-eks-cluster.arn
version = "1.24"
enabled_cluster_log_types = ["api", "audit"]
vpc_config {
security_group_ids = [aws_security_group.greg-eks-cluster.id]
subnet_ids = [aws_subnet.private-subnet["greg-private-1"].id,
aws_subnet.private-subnet["greg-private-2"].id]
}
depends_on = [aws_iam_role.iam-role-eks-cluster,
aws_cloudwatch_log_group.greg-cw-log-group]
}
When you look in the AWS console, you will see that the logs have been enabled.
From your cluster, you can then generate some log entries, for example by installing the kube-proxy add-on from the console:
- Go to Add-ons
- Press "Get more add-ons"
- Select kube-proxy and press Next
- Press Next again
- Press Create to create the add-on
kube-proxy should then be installed and enabled.
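If you would rather keep everything in code, the same add-on can also be created with Terraform rather than through the console. This is a minimal sketch that assumes the cluster resource shown earlier:

# Install the kube-proxy add-on on the cluster defined above.
resource "aws_eks_addon" "kube-proxy" {
  cluster_name = aws_eks_cluster.greg-eks-cluster.name
  addon_name   = "kube-proxy"
}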
If you then go to CloudWatch log groups, you can select the log group for your cluster and see the entries from your cluster.
In the next part, we will look at generating some alerts to see in GuardDuty.