Introduction
In the ever-evolving landscape of cloud-native computing, securing your Kubernetes clusters is not just a best practice; it is a fundamental necessity. Misconfigurations are a leading cause of security breaches, and manually auditing every setting across a dynamic cluster is a Herculean, if not impossible, task. This is where the CIS Kubernetes Benchmark comes into play. It provides a comprehensive set of prescriptive guidelines for establishing a secure baseline configuration for Kubernetes components.
However, simply having a benchmark isn’t enough. The real challenge lies in continuously assessing and enforcing compliance in an automated fashion. This guide will walk you through the process of automating CIS Kubernetes Benchmark compliance, leveraging powerful open-source tools like Kube-bench for auditing and Kyverno for policy enforcement. By the end, you’ll have a robust framework to ensure your Kubernetes environments remain secure and compliant, reducing attack surfaces and fostering a more resilient infrastructure.
TL;DR
Automate CIS Kubernetes Benchmark compliance using Kube-bench for auditing and Kyverno for enforcing security policies. Install Kube-bench via Helm, run scans, and then deploy Kyverno policies to remediate and prevent non-compliance. This ensures continuous security without manual intervention.
# Install Kube-bench for auditing
helm repo add aqua https://aquasecurity.github.io/helm-charts/
helm repo update
helm install kube-bench aqua/kube-bench --namespace kube-bench --create-namespace
# View the scan results (the Helm install runs Kube-bench as a Job; example for a control-plane node)
kubectl logs <kube-bench-pod-name> -n kube-bench
# Install Kyverno for policy enforcement
helm repo add kyverno https://kyverno.github.io/kyverno/
helm repo update
helm install kyverno kyverno/kyverno --namespace kyverno --create-namespace
# Example Kyverno policy to enforce a CIS control (e.g., no privileged containers)
kubectl apply -f - <<EOF
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-privileged-containers
spec:
  validationFailureAction: Enforce
  rules:
    - name: privileged-containers
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Privileged containers are not allowed."
        pattern:
          spec:
            containers:
              - =(securityContext):
                  =(privileged): "false" # Must be explicitly false or omitted for non-privileged
EOF
Prerequisites
Before diving into the automation, ensure you have the following:
- Kubernetes Cluster: An operational Kubernetes cluster on a reasonably recent version; check the Kyverno compatibility matrix to confirm support for your Kubernetes release. You can use any cloud provider (AWS EKS, GKE, Azure AKS) or a self-managed cluster.
- kubectl: The Kubernetes command-line tool, configured to connect to your cluster. Refer to the official Kubernetes documentation for installation instructions.
- Helm: The package manager for Kubernetes, version 3.x. Install it by following the instructions on the Helm website.
- Basic understanding of Kubernetes concepts: Familiarity with Pods, Deployments, Namespaces, and RBAC is assumed.
- Administrative access: You’ll need cluster-admin privileges to install Kube-bench and Kyverno, and to apply cluster-wide policies.
Step-by-Step Guide: Automating CIS Kubernetes Benchmark Compliance
Step 1: Understanding the CIS Kubernetes Benchmark
The CIS Kubernetes Benchmark is a security configuration guide developed by the Center for Internet Security (CIS). It provides a set of recommendations for securely configuring Kubernetes components, including the API Server, Controller Manager, Scheduler, etcd, Kubelet, and worker nodes. These recommendations are categorized by severity and impact, helping organizations prioritize their security efforts. Familiarizing yourself with the benchmark is crucial for understanding the rationale behind the tools and policies we’ll implement.
The benchmark is divided into sections, each addressing a specific component or aspect of Kubernetes security. For instance, sections cover securing the API server with proper authentication and authorization, hardening the Kubelet configuration, and ensuring proper access controls for etcd. Each recommendation includes a description, rationale, audit procedure, and remediation steps. While reading the entire document can be lengthy, understanding its structure and key areas will significantly aid in interpreting scan results and crafting effective policies.
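Each recommendation can be thought of as a structured record with the fields described above. The sketch below models one, purely for illustration; the field names are ours, not kube-bench's schema or the benchmark's official format.

```python
from dataclasses import dataclass

@dataclass
class CISRecommendation:
    """One CIS Benchmark recommendation, as described above.

    Field names are illustrative: the actual benchmark is a document,
    and kube-bench encodes its checks in its own YAML config files.
    """
    control_id: str    # e.g. "1.2.21"
    description: str
    rationale: str
    audit: str         # how to check the current setting
    remediation: str   # how to fix a failing check
    automated: bool    # whether the check can be scripted

profiling = CISRecommendation(
    control_id="1.2.21",
    description="Ensure that the --profiling argument is set to false",
    rationale="Profiling endpoints expose program and system details.",
    audit="Inspect the kube-apiserver command line for --profiling.",
    remediation="Set --profiling=false on the kube-apiserver.",
    automated=True,
)
print(profiling.control_id, profiling.automated)  # 1.2.21 True
```

Thinking of each recommendation this way makes scan reports easier to interpret: every [PASS]/[FAIL] line in a Kube-bench report corresponds to one such record.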
Step 2: Installing and Running Kube-bench for Auditing
Kube-bench is an open-source tool developed by Aqua Security that checks whether Kubernetes is deployed securely by running the checks documented in the CIS Kubernetes Benchmark. It can be run inside a Pod within your cluster or directly on the host. For continuous auditing, running it as a Pod (e.g., a Job or DaemonSet) is often preferred.
We’ll install Kube-bench using Helm, which simplifies its deployment and configuration. Running Kube-bench will generate a report detailing compliance status against the chosen CIS benchmark version, highlighting passed, failed, and skipped checks. This report serves as our baseline and identifies immediate areas for improvement.
# Add the Aqua Security Helm repository
helm repo add aqua https://aquasecurity.github.io/helm-charts/
# Update your Helm repositories
helm repo update
# Install Kube-bench into a dedicated namespace
helm install kube-bench aqua/kube-bench --namespace kube-bench --create-namespace
Verify: Check if the Kube-bench Pods are running. Kube-bench typically runs as a Job, so you might see it transition to Completed.
kubectl get pods -n kube-bench
Expected Output:
NAME READY STATUS RESTARTS AGE
kube-bench-master-xxxxx 0/1 Completed 0 2m
kube-bench-node-xxxxx 0/1 Completed 0 2m
Once the Pods are completed, you can view the logs to see the scan results. Replace <kube-bench-pod-name> with the actual name of the Pod for your master or node.
# View the Kube-bench report for a master node
kubectl logs kube-bench-master-xxxxx -n kube-bench
# View the Kube-bench report for a worker node
kubectl logs kube-bench-node-xxxxx -n kube-bench
Expected Output Snippet: (This will be a very long output, showing passes, fails, and warnings)
[INFO] 1 Control Plane Security Configuration
[INFO] 1.2 API Server
[PASS] 1.2.1 Ensure that the --anonymous-auth argument is set to false (Manual)
[PASS] 1.2.2 Ensure that the --token-auth-file parameter is not set (Automated)
...
[FAIL] 1.2.21 Ensure that the --profiling argument is set to false (Automated)
The output clearly indicates which checks passed, failed, or were skipped. Pay close attention to the [FAIL] items, as these are the immediate security vulnerabilities that need addressing. Kube-bench provides a detailed report in JSON format as well, which can be useful for programmatic analysis and integration with other tools.
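For that programmatic analysis, the JSON report can be tallied with a short script. This is a sketch that assumes the report's documented top-level shape (a `Controls` array whose `tests` contain `results` with a `status` field); verify the field names against the JSON your kube-bench version emits.

```python
import json

def summarize_kube_bench(report_text):
    """Tally check statuses from a kube-bench JSON report.

    Assumed shape: {"Controls": [{"tests": [{"results":
    [{"test_number": ..., "status": "PASS"|"FAIL"|"WARN"|"INFO"}]}]}]}
    """
    report = json.loads(report_text)
    counts = {}
    failed = []
    for control in report.get("Controls", []):
        for test in control.get("tests", []):
            for result in test.get("results", []):
                status = result.get("status", "UNKNOWN")
                counts[status] = counts.get(status, 0) + 1
                if status == "FAIL":
                    failed.append(result.get("test_number"))
    return counts, failed

# Example with a minimal synthetic report:
sample = json.dumps({"Controls": [{"tests": [{"results": [
    {"test_number": "1.2.1", "status": "FAIL"},
    {"test_number": "1.2.2", "status": "PASS"},
]}]}]})
counts, failed = summarize_kube_bench(sample)
print(counts, failed)  # {'FAIL': 1, 'PASS': 1} ['1.2.1']
```

Feeding it the output of `kubectl logs <kube-bench-pod-name> -n kube-bench` (when run with `--json`) gives you a machine-readable list of failing control numbers to drive policy work.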
Step 3: Installing Kyverno for Policy Enforcement
Kyverno is a policy engine designed for Kubernetes. It can validate, mutate, and generate Kubernetes resources using policies defined as Kubernetes resources. Unlike some other policy engines, Kyverno doesn’t require learning a new language; policies are written in YAML, making them accessible to Kubernetes administrators. Kyverno can enforce policies that directly map to many CIS Benchmark recommendations, such as disallowing privileged containers, enforcing resource limits, or requiring specific labels. For more advanced security measures, particularly in supply chain security, explore how Kyverno integrates with Sigstore and Kyverno Security.
Installing Kyverno is straightforward using its Helm chart.
# Add the Kyverno Helm repository
helm repo add kyverno https://kyverno.github.io/kyverno/
# Update your Helm repositories
helm repo update
# Install Kyverno into its own namespace
helm install kyverno kyverno/kyverno --namespace kyverno --create-namespace
Verify: Check if Kyverno Pods are running.
kubectl get pods -n kyverno
Expected Output:
NAME READY STATUS RESTARTS AGE
kyverno-admission-controller-xxxxx 1/1 Running 0 2m
kyverno-background-controller-xxxxx 1/1 Running 0 2m
kyverno-cleanup-controller-xxxxx 1/1 Running 0 2m
kyverno-reports-controller-xxxxx 1/1 Running 0 2m
Once Kyverno is running, it will automatically register as a validating and mutating admission webhook in your cluster, intercepting API requests and applying policies.
Step 4: Implementing Kyverno Policies for CIS Compliance
Now that Kube-bench has identified non-compliant areas, we’ll use Kyverno to enforce policies that remediate these issues and prevent future misconfigurations. Kyverno policies are defined as ClusterPolicy or Policy resources. For cluster-wide compliance, ClusterPolicy is generally preferred.
Let’s create a few example policies that address common CIS Benchmark failures:
- Disallow privileged containers (CIS 5.2.1)
- Require resource limits (CIS 5.2.3)
- Disallow usage of hostPath volumes (CIS 5.2.4)
These policies will run in Enforce mode, meaning they will block any resource creation or update that violates the policy. For a softer approach initially, you can set validationFailureAction: Audit, which will log violations without blocking them.
# cis-kyverno-policies.yaml

# Policy 1: Disallow Privileged Containers (CIS 5.2.1)
# Ensures that no containers run in privileged mode.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-privileged-containers
  annotations:
    policies.kyverno.io/description: >-
      Privileged containers can bypass security mechanisms and gain full access
      to the host. This policy ensures that no containers are configured to run
      in privileged mode, aligning with CIS Benchmark 5.2.1.
    policies.kyverno.io/category: "CIS Kubernetes Benchmark"
spec:
  validationFailureAction: Enforce
  background: true # Also report violations on existing resources
  rules:
    - name: privileged-containers
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Privileged containers are not allowed. Set securityContext.privileged to false or omit it."
        pattern:
          spec:
            containers:
              - =(securityContext):
                  =(privileged): "false" # Must be false or omitted
            =(initContainers):
              - =(securityContext):
                  =(privileged): "false"
            =(ephemeralContainers):
              - =(securityContext):
                  =(privileged): "false"
---
# Policy 2: Require Resource Limits (CIS 5.2.3)
# Prevents resource exhaustion by requiring CPU and memory limits.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-resource-limits
  annotations:
    policies.kyverno.io/description: >-
      Containers should have resource requests and limits defined to prevent
      resource exhaustion and ensure fair scheduling. This policy enforces the
      presence of CPU and memory limits, aligning with CIS Benchmark 5.2.3.
    policies.kyverno.io/category: "CIS Kubernetes Benchmark"
spec:
  validationFailureAction: Enforce
  background: true
  rules:
    - name: require-limits-requests
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "CPU and memory limits are required. Please define resource limits for all containers."
        pattern:
          spec:
            containers:
              - resources:
                  limits:
                    memory: "?*" # Must be present and non-empty
                    cpu: "?*"
            # Note: ephemeral containers cannot declare resources, so only
            # regular and init containers are checked here.
            =(initContainers):
              - resources:
                  limits:
                    memory: "?*"
                    cpu: "?*"
---
# Policy 3: Disallow HostPath Volumes (CIS 5.2.4)
# Prevents containers from mounting host paths, which can lead to privilege escalation.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-hostpath-volumes
  annotations:
    policies.kyverno.io/description: >-
      HostPath volumes expose sensitive host filesystem paths to containers,
      which can be abused for privilege escalation or information disclosure.
      This policy prevents the use of hostPath volumes, aligning with CIS
      Benchmark 5.2.4.
    policies.kyverno.io/category: "CIS Kubernetes Benchmark"
spec:
  validationFailureAction: Enforce
  background: true
  rules:
    - name: hostpath-volumes
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "HostPath volumes are not allowed. Consider using persistent volumes or other storage types."
        pattern:
          spec:
            =(volumes):
              - =(hostPath): "null" # hostPath must not be defined
Apply these policies:
kubectl apply -f cis-kyverno-policies.yaml
Verify: Attempt to create a non-compliant Pod.
# privileged-pod.yaml: attempt to create a privileged Pod
apiVersion: v1
kind: Pod
metadata:
  name: privileged-test
spec:
  containers:
    - name: nginx
      image: nginx
      securityContext:
        privileged: true
kubectl apply -f privileged-pod.yaml
Expected Output:
Error from server: error when creating "privileged-pod.yaml": admission webhook "validate.kyverno.svc-fail" denied the request:

resource Pod/default/privileged-test was blocked due to the following policies

disallow-privileged-containers:
  privileged-containers: 'Privileged containers are not allowed. Set securityContext.privileged to false or omit it.'
This demonstrates Kyverno in action, blocking the creation of non-compliant resources. You can similarly test the other policies by attempting to deploy Pods without resource limits or with hostPath volumes.
Step 5: Continuous Monitoring and Reporting
Compliance is not a one-time event; it’s a continuous process. Integrate Kube-bench scans into your CI/CD pipeline or schedule them as recurring jobs within your cluster. Kyverno provides metrics that can be scraped by Prometheus and visualized in Grafana, giving you real-time insights into policy violations.
For advanced observability and network security, tools like eBPF Observability with Hubble can provide deeper insights into network traffic and potential anomalies that might indicate security breaches, complementing your compliance efforts. Similarly, the practices in our Network Policies Security Guide are essential for controlling traffic flow within your cluster, further hardening your environment against lateral movement.
# kube-bench-cronjob.yaml: run Kube-bench periodically
apiVersion: batch/v1
kind: CronJob
metadata:
  name: kube-bench-daily-scan
  namespace: kube-bench
spec:
  schedule: "0 2 * * *" # Run daily at 2 AM
  jobTemplate:
    spec:
      template:
        spec:
          hostPID: true
          nodeSelector:
            kubernetes.io/os: linux
          tolerations:
            - key: "node-role.kubernetes.io/master"
              operator: "Exists"
              effect: "NoSchedule"
            - key: "node-role.kubernetes.io/control-plane"
              operator: "Exists"
              effect: "NoSchedule"
          serviceAccountName: kube-bench
          containers:
            - name: kube-bench
              image: aquasec/kube-bench:latest # Pin a specific version in production
              # Scan both control-plane and worker nodes, output JSON
              command: ["kube-bench", "run", "--targets", "master,node", "--json"]
              volumeMounts:
                - name: var-lib-kubelet
                  mountPath: /var/lib/kubelet
                  readOnly: true
                - name: etc-systemd
                  mountPath: /etc/systemd
                  readOnly: true
                - name: etc-kubernetes
                  mountPath: /etc/kubernetes
                  readOnly: true
                - name: usr-bin
                  mountPath: /usr/bin
                  readOnly: true
              securityContext:
                readOnlyRootFilesystem: true
                allowPrivilegeEscalation: false
                capabilities:
                  drop:
                    - ALL
          restartPolicy: Never
          volumes:
            - name: var-lib-kubelet
              hostPath:
                path: /var/lib/kubelet
            - name: etc-systemd
              hostPath:
                path: /etc/systemd
            - name: etc-kubernetes
              hostPath:
                path: /etc/kubernetes
            - name: usr-bin
              hostPath:
                path: /usr/bin
Deploy the CronJob:
kubectl apply -f kube-bench-cronjob.yaml
Verify: Check the CronJob status after a few minutes (or wait for the scheduled time).
kubectl get cronjobs -n kube-bench
kubectl get jobs -n kube-bench
You can then check the logs of the Job Pods created by the CronJob for the scan results. For integrating Kyverno metrics with Prometheus and Grafana, refer to the Kyverno monitoring documentation.
Production Considerations
- Baseline Definition: Before enforcing policies, establish a clear compliance baseline. Run Kube-bench and deploy your Kyverno policies in Audit mode first for a period; this identifies violations without disrupting existing workloads.
- Policy Granularity: Start with broad, high-impact policies (e.g., disallowing privileged containers) and gradually introduce more granular policies. Overly restrictive policies can break applications.
- Testing Policies: Always test new Kyverno policies in a staging environment before deploying them to production. Use validationFailureAction: Audit initially to see the impact without blocking resources.
- Exemptions and Overrides: Real-world applications may have legitimate reasons to violate certain policies (e.g., a security scanner needing privileged access). Kyverno supports policy exceptions. Use them judiciously and document them thoroughly.
- Integration with CI/CD: Integrate Kube-bench and Kyverno policy checks into your CI/CD pipelines. This shifts security left, catching non-compliance before deployment. For example, use Kube-bench as a gate before merging pull requests.
- Alerting and Reporting: Set up alerts for Kube-bench failures and Kyverno policy violations. Integrate with your existing monitoring and alerting systems (e.g., Prometheus, Alertmanager, PagerDuty).
- RBAC for Policy Management: Implement strict RBAC for who can create, update, or delete Kyverno policies. Policies are powerful and can significantly impact cluster operations.
- Cloud Provider Benchmarks: While the CIS Kubernetes Benchmark is universal, cloud providers often have their own specific benchmarks (e.g., AWS EKS Best Practices, GCP Kubernetes Engine Security Best Practices). Consider integrating these alongside the CIS Benchmark.
- Performance Impact: Kyverno runs as an admission controller, which can introduce latency to API requests. Monitor its performance and ensure it doesn’t become a bottleneck, especially in very high-throughput clusters.
- Regular Review: Kubernetes and the CIS Benchmark evolve. Regularly review your compliance policies and Kube-bench versions to ensure they remain current and effective.
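To make the CI/CD gating idea above concrete, a pipeline step can parse the kube-bench JSON report and fail the build when failures exceed a threshold. This is a minimal sketch; it assumes the report carries a "Totals" object with a "total_fail" field, so confirm that against the JSON your kube-bench version produces.

```python
import json
import sys

def ci_gate(report_text, max_failures=0):
    """Return an exit code for a CI pipeline: 0 if the kube-bench
    report's failure count is within the allowed threshold, 1 otherwise.

    Assumes a top-level "Totals" object with a "total_fail" field.
    """
    totals = json.loads(report_text).get("Totals", {})
    fails = totals.get("total_fail", 0)
    if fails > max_failures:
        print(f"kube-bench reported {fails} failing checks "
              f"(threshold {max_failures})", file=sys.stderr)
        return 1
    return 0

# Example: a report with two failures against a zero-failure policy.
report = json.dumps({"Totals": {"total_pass": 40, "total_fail": 2}})
print(ci_gate(report))                   # 1 -> pipeline fails
print(ci_gate(report, max_failures=5))   # 0 -> pipeline passes
```

Wiring `sys.exit(ci_gate(...))` into the pipeline step makes the scan an actual merge gate rather than an informational report.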
Troubleshooting
1. Kube-bench Pods Stuck in Pending/CrashLoopBackOff
Issue: Kube-bench Pods are not starting or frequently crashing.
Solution:
- Check Logs: The first step is always to check the Pod logs.
  kubectl logs <kube-bench-pod-name> -n kube-bench
- Resource Constraints: Kube-bench might be failing due to insufficient resources. Check events for OOMKilled or scheduling issues, and adjust resource requests/limits in the Helm chart values if necessary.
  kubectl describe pod <kube-bench-pod-name> -n kube-bench
- HostPath Mounts: Kube-bench requires hostPath mounts to access critical host files. Ensure these paths are correct and accessible by the Pod. If running on a managed service (EKS, GKE, AKS), some host paths might be restricted.
- RBAC: Ensure the service account used by Kube-bench has the necessary permissions (ClusterRole and ClusterRoleBinding) to perform host-level checks. The Helm chart usually sets this up correctly, but verify if you’re using custom configurations.
2. Kyverno Policies Not Being Applied
Issue: Kyverno is installed, but policies don’t seem to be validating or mutating resources.
Solution:
- Check Kyverno Pod Status: Ensure all Kyverno Pods are running correctly in the kyverno namespace.
  kubectl get pods -n kyverno
- Policy Status: Check the status of your ClusterPolicy or Policy resources, and look for any errors in the status field.
  kubectl get clusterpolicies
- Admission Webhook Configuration: Kyverno relies on dynamic admission webhooks. Verify that the MutatingWebhookConfiguration and ValidatingWebhookConfiguration resources for Kyverno exist and are correctly configured. If these are missing or misconfigured, Kyverno cannot intercept API requests.
  kubectl get validatingwebhookconfigurations | grep kyverno
  kubectl get mutatingwebhookconfigurations | grep kyverno
- Policy Match Rules: Double-check the match and exclude blocks in your policy definitions. A common mistake is an incorrect kind, namespace, or label selector that prevents the policy from matching the intended resources.
- validationFailureAction: Ensure validationFailureAction is set to Enforce if you expect resources to be blocked, or Audit if you only expect warnings.
3. Kyverno Blocks Legitimate Deployments
Issue: Kyverno policies are too aggressive and prevent necessary applications from deploying.
Solution:
- Temporary Audit Mode: Change validationFailureAction to Audit for the problematic policy. This allows the resource to be created while logging the violation. Examine the admission controller logs to understand why it was blocked.
  kubectl logs -n kyverno -l app.kubernetes.io/component=admission-controller | grep "denied"
- Policy Exceptions: Use Kyverno's PolicyException resource to carve out specific exclusions for certain namespaces, labels, or resources (note that exceptions must be enabled in Kyverno's configuration). For example, to exempt monitoring Pods from the privileged-containers rule:
  apiVersion: kyverno.io/v2
  kind: PolicyException
  metadata:
    name: allow-privileged-in-monitoring
    namespace: monitoring
  spec:
    exceptions:
      - policyName: disallow-privileged-containers
        ruleNames:
          - privileged-containers
    match:
      any:
        - resources:
            kinds:
              - Pod
            namespaces:
              - monitoring
- Refine Policies: Revisit the policy definition. Can you make it more specific? For example, instead of disallowing all hostPath volumes, disallow them only for certain sensitive paths.
4. Kube-bench Report Shows Inaccurate Results
Issue: Kube-bench reports failures for controls that you believe are compliant, or skips checks unexpectedly.
Solution:
- Benchmark Version: Ensure Kube-bench is running against the CIS Kubernetes Benchmark version that matches your Kubernetes cluster version. You can specify this with the --benchmark flag.
  kube-bench run --benchmark cis-1.23
- Configuration Files: Kube-bench reads configuration files directly from the host. Verify that Kube-bench has correct access to /etc/kubernetes, /var/lib/kubelet, etc. If your cluster was installed with a non-standard configuration, paths might differ.
- Managed Kubernetes Services: For managed Kubernetes (EKS, GKE, AKS), some control plane components are managed by the cloud provider and may not be fully auditable or configurable by the user. Kube-bench might report "Not Applicable" or "INFO" for these, or even "FAIL" if it cannot verify. Understand which controls fall within your responsibility.
- False Positives: Sometimes a check might fail due to a specific configuration detail that is functionally equivalent to the recommendation but not exactly what Kube-bench expects. Review the specific CIS control description and your cluster’s configuration carefully.
5. Kyverno Background Scan Not Fixing Existing Resources
Issue: You’ve deployed a Kyverno policy with background: true, but existing non-compliant resources are not being remediated or reported.
Solution:
- Kyverno Background Controller: Ensure the background controller Pod is running and healthy. This controller is responsible for scanning existing resources.
  kubectl get pods -n kyverno -l app.kubernetes.io/component=background-controller
- Policy Definition: Verify that the policy's spec.background field is indeed set to true.
  kubectl get clusterpolicy <policy-name> -o yaml | grep background
- Policy Reports: Kyverno generates PolicyReport and ClusterPolicyReport resources that detail compliance status for existing resources. Check these reports. You might need to install the Policy Report CRDs if they are not present.
  kubectl get clusterpolicyreports -o wide
- Scan Interval: The background scanner runs periodically, so it might take some time for existing resources to be scanned. You can configure the scan interval via Kyverno Helm values if needed.
- Mutating vs. Validating: Remember that validationFailureAction: Enforce will *block* new creations and updates, while for existing resources Kyverno only *reports* violations. To *fix* existing resources, you typically need to update them yourself or use Kyverno's mutate or generate policies, which apply on resource creation and update.
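Those policy reports can also be tallied programmatically. The sketch below assumes the wg-policy-prototypes report schema that Kyverno uses, where each report item carries a results list whose entries have "policy" and "result" fields; it consumes the output of `kubectl get clusterpolicyreports -o json`.

```python
import json
from collections import Counter

def failures_by_policy(reports_json):
    """Count failing results per policy across ClusterPolicyReports.

    Assumes each item in the list has a "results" list whose entries
    carry "policy" and "result" ("pass", "fail", "warn", "error", "skip").
    """
    data = json.loads(reports_json)
    tally = Counter()
    for item in data.get("items", []):
        for result in item.get("results", []):
            if result.get("result") == "fail":
                tally[result.get("policy")] += 1
    return dict(tally)

# Example against a minimal synthetic report list:
sample = json.dumps({"items": [{"results": [
    {"policy": "disallow-privileged-containers", "result": "fail"},
    {"policy": "require-resource-limits", "result": "pass"},
    {"policy": "disallow-privileged-containers", "result": "fail"},
]}]})
print(failures_by_policy(sample))  # {'disallow-privileged-containers': 2}
```

A per-policy failure count like this is a convenient input for dashboards or alerting, complementing the Prometheus metrics Kyverno exposes.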
FAQ Section
Q1: What is the CIS Kubernetes Benchmark, and why is it important?
A1: The CIS Kubernetes Benchmark is a security configuration guide published by the Center for Internet Security. It provides a set of prescriptive recommendations for securing your Kubernetes cluster components (API Server, Kubelet, etcd, etc.). It’s crucial because it offers a standardized, vendor-agnostic baseline to reduce the attack surface of your cluster, helping to prevent common misconfigurations that can lead to security breaches.
Q2: Can I use Kube-bench to fix issues, or only to find them?
A2: Kube-bench is primarily an auditing tool. It identifies non-compliant configurations and provides remediation steps in its reports, but it does not automatically fix them. For automated remediation and enforcement, you need a policy engine like Kyverno, which can block non-compliant resources or mutate them to become compliant.
Q3: How does Kyverno compare to other policy engines like OPA Gatekeeper?
A3: Kyverno and OPA Gatekeeper are both powerful admission controllers for Kubernetes. The main difference lies in their policy definition language. Kyverno policies are written directly in YAML as Kubernetes resources, making them familiar to Kubernetes users and reducing the learning curve. Gatekeeper uses Rego, a powerful, declarative policy language that offers greater flexibility for complex logic but requires learning a new syntax. The choice often comes down to team familiarity and the complexity of policies required.
Q4: Is it possible to achieve 100% CIS Kubernetes Benchmark compliance?
A4: Achieving 100% compliance can be challenging, especially in managed Kubernetes environments (EKS, GKE, AKS) where cloud providers manage some control plane components. Some checks might be marked as “Not Applicable” or “Manual” for these services. The goal should be to achieve the highest possible level of compliance for the components you control, prioritize critical failures, and understand the risks associated with any deviations. It’s an ongoing journey, not a one-time destination.
Q5: How can I integrate CIS compliance checks into my CI/CD pipeline?
A5: You can integrate Kube-bench into your CI/CD pipeline by running it as a step that fails the build if critical compliance checks fail. This “shifts left” your security, catching issues before deployment. For Kyverno, you can use tools like kubectl-kyverno CLI to test policies against your YAML manifests in CI/CD, ensuring that new deployments will not be blocked by production policies. Combining these ensures that code is compliant before it even reaches the cluster. For more on automating security in pipelines, consider how tools like Sigstore and Kyverno Security can secure your container supply chain.
Cleanup Commands
To remove Kube-bench and Kyverno from your cluster:
# Delete the Kube-bench CronJob (before removing its namespace)
kubectl delete cronjob kube-bench-daily-scan -n kube-bench
# Uninstall Kube-bench
helm uninstall kube-bench --namespace kube-bench
kubectl delete namespace kube-bench
# Delete Kyverno ClusterPolicies (while the Kyverno CRDs still exist)
kubectl delete clusterpolicy disallow-privileged-containers
kubectl delete clusterpolicy require-resource-limits
kubectl delete clusterpolicy disallow-hostpath-volumes # And any other policies you created
# Uninstall Kyverno
helm uninstall kyverno --namespace kyverno
kubectl delete namespace kyverno
Next Steps / Further Reading
- Explore More Kyverno Policies: The Kyverno Policy Library offers a rich collection of pre-built policies for various security and best practice checks.
- Advanced Kube-bench Reporting: Learn how to parse Kube-bench’s JSON output for automated reporting and integration with other security tools. Refer to the Kube-bench documentation.
- Service Mesh Integration: For fine-grained traffic control and enhanced security, consider integrating a service mesh like Istio Ambient Mesh, which can enforce policies at the application layer.
- Network Security: Deep dive into Kubernetes network security with our Kub