Secure Workloads: Master SPIFFE/SPIRE Identity

Introduction

In the complex and dynamic world of cloud-native applications, establishing trust between workloads is paramount. Traditional methods of identity management, often relying on IP addresses or shared secrets, fall short in environments where applications are ephemeral, constantly scaling, and distributed across various infrastructure. How do you ensure that a microservice requesting data from another microservice is truly who it claims to be, without complex, hard-coded credentials or brittle network configurations?

This is where SPIFFE (Secure Production Identity Framework for Everyone) and its reference implementation, SPIRE (SPIFFE Runtime Environment), come into play. SPIFFE provides a universal, cryptographically verifiable identity for every workload in a distributed system, regardless of its underlying platform or location. SPIRE then automates the issuance and rotation of these identities, allowing workloads to securely authenticate with each other and with external services. This tutorial will guide you through deploying SPIRE on Kubernetes, demonstrating how to assign and consume SPIFFE IDs, and ultimately enhance the security posture of your microservices.

By leveraging SPIFFE/SPIRE, you can move beyond IP-based trust and embrace a robust, zero-trust security model. This shift not only simplifies identity management but also integrates seamlessly with other cloud-native security tools, providing a foundational layer for secure communication. Whether you’re building a new microservices architecture or looking to bolster the security of an existing one, understanding and implementing SPIFFE/SPIRE is a critical step towards a more resilient and trustworthy system.

TL;DR: SPIFFE/SPIRE Workload Identity on Kubernetes

SPIFFE/SPIRE provides cryptographically verifiable identities for workloads, enabling secure, zero-trust communication in Kubernetes. SPIRE automates identity issuance and rotation, replacing brittle IP-based trust with strong, attested identities.

Key Steps:

  1. Install SPIRE: Deploy the SPIRE server and agent via Helm.
  2. Create Registration Entries: Define which workloads get which SPIFFE IDs.
  3. Deploy Workloads: Run your applications, configured to consume SPIFFE IDs via the SPIRE agent.
  4. Verify Identity: Use spire-agent api fetch or a SPIFFE client library to confirm identity.

Quick Commands:

# Add SPIRE Helm repo
helm repo add spire-server https://spiffe.github.io/helm-charts/spire-server
helm repo add spire-agent https://spiffe.github.io/helm-charts/spire-agent

# Install SPIRE Server
helm install spire-server spire-server/spire-server --namespace spire --create-namespace

# Install SPIRE Agent
helm install spire-agent spire-agent/spire-agent --namespace spire

# Create a registration entry for a sample workload
kubectl apply -f - <<EOF
apiVersion: spire.spiffe.io/v1alpha1
kind: ClusterSPIFFEID
metadata:
  name: my-app
spec:
  spiffeIDTemplate: "spiffe://example.org/ns/{{ .PodMeta.Namespace }}/sa/{{ .PodSpec.ServiceAccountName }}"
  podSelector:
    matchLabels:
      app: my-app
EOF

# Deploy a sample application with SPIFFE support
kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      serviceAccountName: my-app
      containers:
      - name: my-app
        image: busybox:latest # Replace with your actual application image
        command: ["sh", "-c", "echo 'My app is running with SPIFFE ID' && sleep infinity"]
        volumeMounts:
        - name: spire-agent-socket
          mountPath: /tmp/spire-agent/sockets
          readOnly: true
      volumes:
      - name: spire-agent-socket
        hostPath:
          path: /tmp/spire-agent/sockets
          type: DirectoryOrCreate
EOF

Prerequisites

Before diving into the deployment of SPIFFE/SPIRE on Kubernetes, ensure you have the following:

  • Kubernetes Cluster: A running Kubernetes cluster (v1.20+ recommended). This can be a local cluster like minikube or kind, or a managed service like EKS, GKE, or AKS.
  • kubectl: The Kubernetes command-line tool, configured to connect to your cluster. Refer to the official Kubernetes documentation for installation instructions.
  • Helm: The Kubernetes package manager (v3+ recommended). Follow the Helm installation guide if you don’t have it already.
  • Basic Kubernetes Knowledge: Familiarity with Kubernetes concepts such as Pods, Deployments, Services, and Service Accounts.
  • Network Connectivity: Ensure that your Kubernetes nodes can communicate with each other and reach external repositories for Helm charts and container images.

Step-by-Step Guide

Step 1: Install the SPIRE Server

The SPIRE Server is the central authority responsible for issuing and managing SPIFFE IDs. It maintains the trust bundle and registration entries for all workloads. We’ll deploy it into its own dedicated namespace for better isolation and management. The Helm chart simplifies the deployment, handling the necessary Kubernetes resources like Deployments, Services, and ConfigMaps.

The SPIRE Server needs a persistent storage mechanism to store its state, including registration entries and the trust bundle. By default, the Helm chart uses an in-memory SQLite database for simplicity, but for production environments, you would typically configure an external database like PostgreSQL or MySQL, or a persistent volume claim (PVC) for embedded SQLite. For this tutorial, the default in-memory setup is sufficient.
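For reference, pointing the server at an external database is usually done through Helm values. The sketch below is illustrative only: the exact key names vary between chart versions, so confirm them with helm show values for your chart before using it.

```yaml
# values.yaml (illustrative; confirm key names against your chart version)
spire-server:
  dataStore:
    sql:
      databaseType: postgres
      connectionString: "dbname=spire user=spire host=db.example.internal sslmode=require"
```

You would then pass this file to the install command with helm install ... -f values.yaml.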

# Add the SPIRE server Helm repository
helm repo add spire-server https://spiffe.github.io/helm-charts/spire-server
helm repo update

# Install the SPIRE server into the 'spire' namespace
helm install spire-server spire-server/spire-server --namespace spire --create-namespace

# Wait for the SPIRE server pod to be ready
kubectl wait --namespace spire \
  --for=condition=ready pod \
  --selector=app=spire-server \
  --timeout=300s

Verify Step 1

Once the installation command completes, verify that the SPIRE server pod is running and healthy in the spire namespace.

kubectl get pods -n spire -l app=spire-server

Expected Output:

NAME                            READY   STATUS    RESTARTS   AGE
spire-server-79d5bb679b-abcde   1/1     Running   0          2m

You can also check the logs of the SPIRE server to ensure it started without errors:

kubectl logs -n spire -l app=spire-server

Step 2: Install the SPIRE Agent

The SPIRE Agent runs on every node in your Kubernetes cluster. Its primary role is to attest the identity of workloads running on its node and securely provide them with their SPIFFE IDs (in the form of X.509-SVIDs or JWT-SVIDs) and trust bundles. The agent communicates with the SPIRE server to fetch registration entries and uses various attestation plugins (like the Kubernetes Workload Attestor) to verify workload identity.

The agent exposes a Unix domain socket, typically mounted into workload containers, through which workloads can request their SVIDs. This secure channel ensures that identities are only provided to authorized workloads. The Helm chart for the agent deploys a DaemonSet, ensuring an agent pod runs on each eligible node in your cluster.

# Add the SPIRE agent Helm repository
helm repo add spire-agent https://spiffe.github.io/helm-charts/spire-agent
helm repo update

# Install the SPIRE agent into the 'spire' namespace
helm install spire-agent spire-agent/spire-agent --namespace spire \
  --set agent.hostPath.path=/tmp/spire-agent/sockets

# Wait for the SPIRE agent pods to be ready
kubectl wait --namespace spire \
  --for=condition=ready pod \
  --selector=app=spire-agent \
  --timeout=300s

Note: We set agent.hostPath.path=/tmp/spire-agent/sockets so that workloads see a consistent socket path. The exact value key can differ between chart versions, so check helm show values for your chart. This path needs to be accessible by your application containers.

Verify Step 2

Confirm that the SPIRE agent pods are running on your nodes. The number of agent pods should typically match the number of worker nodes in your cluster.

kubectl get pods -n spire -l app=spire-agent

Expected Output:

NAME                      READY   STATUS    RESTARTS   AGE
spire-agent-abcdef        1/1     Running   0          1m
spire-agent-ghijkl        1/1     Running   0          1m

Check the logs of one of the SPIRE agents to confirm it’s communicating with the server:

kubectl logs -n spire -l app=spire-agent --tail 20
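You can also confirm that the Workload API socket exists by listing it from inside one of the agent pods (the path assumes the hostPath value set above):

```shell
# List the Workload API socket from inside one agent pod
AGENT_POD=$(kubectl get pods -n spire -l app=spire-agent \
  -o jsonpath='{.items[0].metadata.name}')
kubectl exec -n spire "$AGENT_POD" -- ls -l /tmp/spire-agent/sockets
```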

Step 3: Register a Workload Identity

Before a workload can obtain a SPIFFE ID, it must be registered with the SPIRE server. Registration entries are the core of SPIFFE identity management: they map attested workload properties (such as Kubernetes namespace, service account, or pod labels) to a specific SPIFFE ID. The SPIRE controller manager, deployed alongside the server, provides a custom resource definition (CRD) called ClusterSPIFFEID to manage these entries declaratively within Kubernetes.

In this step, we’ll create a ClusterSPIFFEID for a hypothetical application named my-app. The entry selects pods labeled app: my-app and builds each one a SPIFFE ID from its namespace and service account, so a workload running in the default namespace under the my-app service account is assigned the SPIFFE ID spiffe://example.org/ns/default/sa/my-app. This identity can then be used for authentication and authorization.

# my-app-registration.yaml
apiVersion: spire.spiffe.io/v1alpha1
kind: ClusterSPIFFEID
metadata:
  name: my-app-registration
spec:
  spiffeIDTemplate: "spiffe://example.org/ns/{{ .PodMeta.Namespace }}/sa/{{ .PodSpec.ServiceAccountName }}"
  podSelector:
    matchLabels:
      app: my-app

kubectl apply -f my-app-registration.yaml

Verify Step 3

You can verify that the registration entry has been created by listing the ClusterSPIFFEID resources:

kubectl get clusterspiffeids

Expected Output:

NAME                  AGE
my-app-registration   1m

To inspect the registration entry in more detail, you can use kubectl describe:

kubectl describe clusterspiffeid my-app-registration
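Conceptually, the ID you just registered is only a URI with two parts: a trust domain and a workload path. The stdlib-only sketch below pulls those apart; the parseSPIFFEID helper is hypothetical, and production code should use go-spiffe's spiffeid package, which enforces the full SPIFFE ID specification.

```go
package main

import (
	"fmt"
	"net/url"
)

// parseSPIFFEID splits a SPIFFE ID into its trust domain and workload path.
// Illustrative only: real code should use the go-spiffe spiffeid package.
func parseSPIFFEID(id string) (trustDomain, path string, err error) {
	u, err := url.Parse(id)
	if err != nil {
		return "", "", err
	}
	if u.Scheme != "spiffe" || u.Host == "" {
		return "", "", fmt.Errorf("not a valid SPIFFE ID: %q", id)
	}
	return u.Host, u.Path, nil
}

func main() {
	td, path, err := parseSPIFFEID("spiffe://example.org/ns/default/sa/my-app")
	if err != nil {
		panic(err)
	}
	fmt.Println(td)   // example.org
	fmt.Println(path) // /ns/default/sa/my-app
}
```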

Step 4: Deploy a Sample Workload

Now, let’s deploy a simple application that will consume the SPIFFE ID we just registered. This application will consist of a Service Account, whose name feeds into the SPIFFE ID assigned by our registration entry, and a Deployment whose Pods carry the app: my-app label that the entry selects. Crucially, the Deployment’s Pods must be configured to mount the SPIRE agent’s Unix domain socket into their containers.

The SPIRE agent exposes its workload API via a Unix domain socket. For a container to communicate with the agent and obtain its SVID, this socket must be mounted into the container’s filesystem, which we do with a hostPath volume. While hostPath volumes have security implications in general, mounting the SPIRE agent socket this way is a standard and necessary pattern for workload identity. For demonstration, busybox with a sleep command is sufficient to show the identity being assigned; substitute your real application image in practice.

# my-app-deployment.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      serviceAccountName: my-app
      containers:
      - name: my-app-container
        image: busybox:latest # Replace with your actual application image
        command: ["sh", "-c", "echo 'My app is running, waiting for SPIFFE ID...' && sleep infinity"]
        volumeMounts:
        - name: spire-agent-socket
          mountPath: /tmp/spire-agent/sockets
          readOnly: true
      volumes:
      - name: spire-agent-socket
        hostPath:
          path: /tmp/spire-agent/sockets
          type: DirectoryOrCreate
kubectl apply -f my-app-deployment.yaml

Verify Step 4

Ensure the my-app deployment and pod are running correctly:

kubectl get sa my-app
kubectl get deployment my-app
kubectl get pods -l app=my-app

Expected Output:

NAME     SECRETS   AGE
my-app   1         1m

NAME       READY   UP-TO-DATE   AVAILABLE   AGE
my-app     1/1     1            1           1m

NAME                      READY   STATUS    RESTARTS   AGE
my-app-7c7d6b4998-xyzab   1/1     Running   0          1m

Step 5: Inspect Workload Identity

Now that our sample workload is running and has the SPIRE agent socket mounted, we can verify that it has successfully obtained its SPIFFE ID. The SPIRE agent ships a command-line utility, spire-agent api fetch, which can be executed from within the workload’s container (when the binary is available there) to fetch and display its X.509-SVID.

This step demonstrates the core functionality of SPIFFE/SPIRE: the automatic attestation and issuance of cryptographically verifiable identities to workloads. The output will show the assigned SPIFFE ID, the X.509-SVID (the certificate representing the identity), and the trust bundle necessary for other workloads to verify this identity. This forms the basis for secure, mutual TLS (mTLS) communication.

# Get the name of the 'my-app' pod
POD_NAME=$(kubectl get pods -l app=my-app -o jsonpath='{.items[0].metadata.name}')

# Execute a command inside the pod to fetch its X.509-SVID from the
# Workload API socket at /tmp/spire-agent/sockets/agent.sock
kubectl exec -it $POD_NAME -- /opt/spire/bin/spire-agent api fetch x509 \
  -socketPath /tmp/spire-agent/sockets/agent.sock

# If this fails (busybox does not ship the spire-agent binary), use an
# ephemeral debug container or an image that includes spire-agent; real
# applications normally use a SPIFFE client library instead.

Note: The spire-agent binary is usually not present in a typical application container like busybox. To genuinely test this, you’d either need to build an image with the SPIRE agent binary or use an ephemeral debug container that mounts the socket. For simplicity, we’ll simulate the output here.

Verify Step 5

If you run a container image that includes the spire-agent binary, the output will look similar to this:

# Expected output from 'spire-agent api fetch x509'
Received 1 svid after 12.3ms

SPIFFE ID:         spiffe://example.org/ns/default/sa/my-app
SVID Valid After:  2023-10-27 09:00:00 +0000 UTC
SVID Valid Until:  2023-10-27 10:00:00 +0000 UTC
CA #1 Valid After: 2023-10-27 00:00:00 +0000 UTC
CA #1 Valid Until: 2023-10-28 00:00:00 +0000 UTC

This output confirms that the workload has successfully attested and received its unique, cryptographically verifiable identity. This identity can now be used for mTLS, authorization, and audit logging.

Step 6: Using the SPIFFE ID for mTLS (Conceptual)

While a full mTLS example is beyond the scope of a single step, it’s crucial to understand how the obtained SPIFFE ID is used. Applications integrate with a SPIFFE client library (available for various languages like Go, Java, Python, Node.js) to retrieve their SVIDs and the trust bundle from the SPIRE agent. These are then used to establish mTLS connections.

Consider two services, my-app and my-service. my-app wants to call my-service. Both services would:

  1. Query their local SPIRE agent for their X.509-SVID and the global trust bundle.
  2. Use these credentials to initiate an mTLS handshake.
  3. During the handshake, my-app presents its SVID to my-service.
  4. my-service verifies my-app’s SVID against the trust bundle and extracts the SPIFFE ID (e.g., spiffe://example.org/ns/default/sa/my-app).
  5. my-service can then apply authorization policies based on this verified SPIFFE ID.

This pattern is central to a zero-trust architecture. For instance, an Istio Ambient Mesh or Kubernetes Gateway API setup can leverage SPIFFE for service-to-service authentication, often integrating with tools like Cilium WireGuard Encryption for even stronger transport security.

Here’s a conceptual code snippet for a Go application using the go-spiffe (v2) library:

package main

import (
	"context"
	"log"
	"net/http"

	"github.com/spiffe/go-spiffe/v2/spiffeid"
	"github.com/spiffe/go-spiffe/v2/spiffetls/tlsconfig"
	"github.com/spiffe/go-spiffe/v2/workloadapi"
)

const (
	socketPath = "unix:///tmp/spire-agent/sockets/agent.sock"
	serverID   = "spiffe://example.org/ns/default/sa/my-service" // Expected SPIFFE ID of the server
)

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	// Create an X509Source backed by the Workload API. It fetches the
	// workload's X.509-SVID and trust bundle and rotates them automatically.
	source, err := workloadapi.NewX509Source(ctx,
		workloadapi.WithClientOptions(workloadapi.WithAddr(socketPath)))
	if err != nil {
		log.Fatalf("Unable to create X509Source: %v", err)
	}
	defer source.Close()

	// Build an mTLS client config: present our SVID and accept only a peer
	// whose SPIFFE ID matches serverID.
	id := spiffeid.RequireFromString(serverID)
	tlsConfig := tlsconfig.MTLSClientConfig(source, source, tlsconfig.AuthorizeID(id))

	// Example: make an HTTP request over mTLS
	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: tlsConfig,
		},
	}

	resp, err := client.Get("https://my-service.default.svc.cluster.local:8443/data")
	if err != nil {
		log.Fatalf("Error making request: %v", err)
	}
	defer resp.Body.Close()

	log.Printf("Response Status: %s", resp.Status)
	// Further processing of response...
}

This code illustrates how an application can retrieve its SVID and trust bundle from the SPIRE agent and then use them to establish an mTLS connection, authorizing the server based on its expected SPIFFE ID. This is a powerful mechanism for securing service-to-service communication.
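The server side of the same handshake is symmetric. Below is a minimal sketch, assuming the same go-spiffe v2 library and agent socket path; my-service, its port, and the trivial handler are hypothetical stand-ins:

```go
package main

import (
	"context"
	"log"
	"net/http"

	"github.com/spiffe/go-spiffe/v2/spiffeid"
	"github.com/spiffe/go-spiffe/v2/spiffetls/tlsconfig"
	"github.com/spiffe/go-spiffe/v2/workloadapi"
)

const socketPath = "unix:///tmp/spire-agent/sockets/agent.sock"

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	// Fetch and auto-rotate this service's SVID and trust bundle.
	source, err := workloadapi.NewX509Source(ctx,
		workloadapi.WithClientOptions(workloadapi.WithAddr(socketPath)))
	if err != nil {
		log.Fatalf("unable to create X509Source: %v", err)
	}
	defer source.Close()

	// Accept only clients presenting the registered my-app identity.
	clientID := spiffeid.RequireFromString("spiffe://example.org/ns/default/sa/my-app")
	tlsConfig := tlsconfig.MTLSServerConfig(source, source, tlsconfig.AuthorizeID(clientID))

	server := &http.Server{
		Addr:      ":8443",
		TLSConfig: tlsConfig,
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			w.Write([]byte("hello from my-service"))
		}),
	}
	// Certificates come from TLSConfig, so the file arguments are empty.
	log.Fatal(server.ListenAndServeTLS("", ""))
}
```

Note how authorization is expressed in terms of the peer's SPIFFE ID rather than its network address; broader policies (e.g., any workload in the trust domain) can be expressed with other tlsconfig authorizers.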

Production Considerations

  • High Availability: For production, deploy multiple SPIRE server replicas and configure them for high availability. Use an external, highly available database (e.g., AWS RDS, GCP Cloud SQL) instead of the default in-memory SQLite.
  • Persistent Storage: If using an embedded database (like SQLite), ensure the SPIRE server’s data directory is backed by a PersistentVolumeClaim (PVC) for data durability across pod restarts.
  • Trust Domain: Carefully choose your SPIFFE trust domain (e.g., example.org). This forms the root of all your workload identities and should reflect your organization’s trust boundary. It should be unique and ideally owned by your organization.
  • Attestors and Selectors: Use robust and specific selectors for your registration entries. Leverage multiple selectors (e.g., Kubernetes namespace, service account, pod label, container image) to minimize the risk of identity spoofing. Consider OIDC or cloud provider attestors for workloads outside Kubernetes.
  • Security Context: Ensure SPIRE agent pods run with appropriate security contexts, limiting their privileges to only what’s necessary, especially concerning host path access. The default Helm chart aims for this, but review it for your specific environment.
  • Certificate Rotation: SPIRE automatically handles certificate rotation for SVIDs. Ensure your applications are designed to gracefully handle certificate updates without requiring restarts. SPIFFE client libraries generally abstract this.
  • Monitoring and Logging: Integrate SPIRE server and agent logs with your central logging solution (e.g., Prometheus, Grafana, ELK stack). Monitor certificate expiration, attestation failures, and registration entry changes. Consider eBPF Observability with Hubble for network traffic verification.
  • Authorization Policies: While SPIFFE provides authentication, you still need an authorization layer. Integrate SPIFFE IDs with an authorization system like Open Policy Agent (OPA) or an Istio-like service mesh to define fine-grained access control policies based on workload identities. For more on network security, review our Network Policies Security Guide.
  • Network Policies: Secure communication between the SPIRE agent and server using Kubernetes Network Policies, allowing only necessary traffic.
  • Resource Limits: Set appropriate CPU and memory limits for SPIRE server and agent pods to prevent resource exhaustion and ensure stability, especially if deploying many agents or registering many identities.
  • External Workloads: SPIRE isn’t limited to Kubernetes. Consider extending SPIFFE identities to VMs or bare-metal servers using SPIRE agents deployed there, providing a consistent identity framework across your entire infrastructure.
  • Private Registries: If your container images are in a private registry, ensure your Kubernetes nodes have credentials to pull them. This is relevant for the kubernetes:container-image selector.
  • Cost Optimization: While not directly related to SPIFFE, optimizing your underlying Kubernetes infrastructure with tools like Karpenter for Cost Optimization can ensure your identity framework runs efficiently without excessive resource consumption.
  • Supply Chain Security: Combine SPIFFE/SPIRE with tools like Sigstore and Kyverno for a comprehensive supply chain security strategy, ensuring not only workload identity but also software integrity.

Troubleshooting

1. SPIRE Server Pod Not Ready

Issue: The SPIRE server pod is stuck in a Pending or CrashLoopBackOff state.

Solution:

  1. Check Pod Events:
    kubectl describe pod -n spire -l app=spire-server
    

    Look for issues related to scheduling (e.g., insufficient resources, taints/tolerations) or image pull errors.

  2. Check Pod Logs:
    kubectl logs -n spire -l app=spire-server
    

    Look for error messages during startup, configuration issues, or database connection problems if using an external database.

  3. Verify Helm Values: Ensure any custom Helm values are correct and compatible with your cluster.

2. SPIRE Agent Pods Not Ready

Issue: SPIRE agent pods are not running on all nodes or are in a CrashLoopBackOff state.

Solution:

  1. Check DaemonSet Status:
    kubectl get daemonset -n spire spire-agent
    

    Verify that DESIRED, CURRENT, and READY counts match. If READY is less than DESIRED, investigate why.

  2. Check Agent Logs:
    kubectl logs -n spire -l app=spire-agent
    

    Common issues include failure to connect to the SPIRE server (check server service and network policies), or issues with host path mounts.

  3. Node Taints/Tolerations: Ensure your nodes don’t have taints that prevent the agent DaemonSet pods from scheduling, or add appropriate tolerations to the agent’s Helm chart values.

3. Workload Not Getting SPIFFE ID

Issue: Your application container cannot obtain an SVID from the SPIRE agent.

Solution:

  1. Verify Socket Mount: Ensure the spire-agent-socket volume is correctly mounted into the workload container at /tmp/spire-agent/sockets.
    kubectl describe pod <your-app-pod-name>
    

    Look under Volumes and Volume Mounts.

  2. Check HostPath: Confirm that the hostPath for the SPIRE agent socket is correctly configured on the node where the workload is running. The agent’s Helm chart sets this up.
  3. Registration Entry Mismatch: The most common cause. Double-check that your ClusterSPIFFEID’s podSelector (and namespaceSelector, if set) actually match the workload pod’s labels and namespace. Even a typo will prevent attestation.
    kubectl describe clusterspiffeid <your-registration-name>
    kubectl describe pod <your-app-pod-name>
    

    Compare the selectors in the ClusterSPIFFEID with the actual pod’s labels and metadata.

  4. SPIRE Agent Logs: Check the SPIRE agent logs on the node where the workload pod is running. It might show attestation failures or errors related to the workload API.
  5. Workload API Client Errors: If your application uses a SPIFFE client library, check its logs for errors when trying to connect to the SPIRE agent socket or fetch SVIDs.
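When the selectors look right but attestation still fails, it helps to list the registration entries as the server actually sees them. The spire-server CLI inside the server pod can show them (the label selector below assumes the naming used earlier in this tutorial):

```shell
# List registration entries from inside the SPIRE server pod
SERVER_POD=$(kubectl get pods -n spire -l app=spire-server \
  -o jsonpath='{.items[0].metadata.name}')
kubectl exec -n spire "$SERVER_POD" -- /opt/spire/bin/spire-server entry show
```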

4. SPIRE Server Trust Domain Mismatch

Issue: SPIRE server and agent logs show errors about trust domain mismatches.

Solution:

  1. Check Helm Values: Ensure the trustDomain specified during SPIRE server and agent installation is consistent. By default, it’s example.org. If you customized it, it must match across all components.
    helm get values spire-server -n spire -o yaml | grep trustDomain
    helm get values spire-agent -n spire -o yaml | grep trustDomain
    
  2. Reinstall if Mismatched: If there’s a mismatch, you’ll likely need to uninstall and reinstall both the SPIRE server and agent with the correct, consistent trustDomain.

5. Certificate Validation Errors in Applications

Issue: Your applications are failing mTLS handshakes, reporting certificate validation errors.

Solution:

  1. Trust Bundle Distribution: Ensure both client and server applications are correctly fetching and using the trust bundle from their respective SPIRE agents.
  2. SPIFFE ID Authorization: Verify that the client is presenting the correct SPIFFE ID and the server is authorizing against the expected SPIFFE ID. A common mistake is authorizing against an incorrect or too broad SPIFFE ID.
  3. Clock Skew: Significant clock skew between nodes can cause certificate validation failures due to invalid “Not Before” or “Not After” times. Ensure NTP is configured and working on all Kubernetes nodes.
  4. Intermediate CAs: If you have a complex trust chain, ensure all intermediate CAs are included in the trust bundle and correctly handled by the client libraries.

6. High Resource Usage by SPIRE Components

Issue: SPIRE server or agent pods are consuming excessive CPU or memory.

Solution:

  1. Check Logs for Errors: High resource usage can sometimes be a symptom of underlying errors (e.g., constant reconnections, failed attestations).
  2. Scale Vertically/Horizontally:
    • Server: If you have many registration entries or high attestation/SVID request rates, consider increasing resources (CPU/memory) for the SPIRE server or deploying multiple server replicas for high availability.
    • Agent: Agents typically consume less, but if you have a very high number of workloads constantly refreshing SVIDs on a single node, you might need to optimize the agent’s configuration or consider using larger nodes.
  3. Optimize Registration Entries: Ensure you’re not creating an excessive number of redundant or overly broad registration entries.
  4. Review SPIFFE Client Behavior: If applications are aggressively fetching SVIDs, ensure they are using caching mechanisms provided by SPIFFE client libraries to reduce load on the agent.

FAQ Section

Q1: What is the difference between SPIFFE, SPIRE, and mTLS?

A1: SPIFFE defines the specification for a universal identity framework, providing a cryptographically verifiable identity (a SPIFFE ID) for every workload, and specifies how workloads obtain and use these identities. SPIRE is the production-ready reference implementation of SPIFFE: it performs node and workload attestation and automates the issuance and rotation of SVIDs. mTLS (mutual TLS) is a transport security mechanism in which both peers present certificates; with SPIFFE/SPIRE, the X.509-SVIDs serve as those certificates, so connections are authenticated by verified SPIFFE ID rather than by IP address or shared secret.
