Orchestration

Share Multi-Cluster Resources with Liqo

Introduction

In the modern cloud-native landscape, organizations often find themselves managing multiple Kubernetes clusters. Whether for disaster recovery, geographical distribution, regulatory compliance, or simply scaling applications beyond the limits of a single cluster, multi-cluster architectures are becoming the norm. However, this distributed environment introduces significant challenges: how do you efficiently utilize resources across these clusters? How do you ensure high availability and seamless application portability without complex, bespoke solutions? This is where Liqo, an open-source project, steps in.

Liqo (the name comes from “liquid computing”) provides a powerful and elegant solution for inter-cluster resource sharing. It transforms a collection of isolated Kubernetes clusters into a unified, distributed platform, allowing applications to seamlessly burst or migrate across cluster boundaries. Imagine a scenario where one cluster is experiencing high load while another has idle capacity; Liqo enables you to “offload” workloads to the less-utilized cluster, optimizing resource utilization and improving application resilience. This capability is particularly valuable for hybrid cloud scenarios, edge computing, and multi-cloud deployments, offering unparalleled flexibility and efficiency.

This comprehensive guide will walk you through setting up Liqo, sharing resources between two Kubernetes clusters, and deploying applications that leverage this multi-cluster capability. We’ll cover everything from initial installation to advanced configurations, ensuring you have a deep understanding of how to harness Liqo’s potential to build truly resilient and scalable distributed applications.

TL;DR: Multi-Cluster Resource Sharing with Liqo

Liqo enables seamless resource sharing between Kubernetes clusters, allowing workloads to burst or migrate. This guide covers its setup and usage.

Key Commands:

  • Install the Liqo CLI (liqoctl) from the GitHub releases page (asset name may vary by release and platform):
    curl --fail -LS "https://github.com/liqotech/liqo/releases/latest/download/liqoctl-linux-amd64.tar.gz" | tar -xz
  • Install Liqo on Cluster A (the provider argument depends on your distribution):
    liqoctl install kubeadm --cluster-name cluster-a
  • Install Liqo on Cluster B:
    liqoctl install kubeadm --cluster-name cluster-b
  • Peer Cluster A with Cluster B (generate the command against cluster-b, then run its output on cluster-a):
    liqoctl generate peer-command --context cluster-b
  • Check Peering Status:
    kubectl get foreignclusters
  • Enable Offloading for a Namespace:
    kubectl label namespace liqo-demo liqo.io/enabled=true

Liqo creates a virtual node in the local cluster, representing the remote cluster’s capacity, allowing standard Kubernetes scheduling to extend workloads across clusters.

Prerequisites

Before we dive into the installation and configuration of Liqo, ensure you have the following prerequisites in place:

  • Two Kubernetes Clusters: You will need at least two operational Kubernetes clusters. These can be local (e.g., Kind, K3s, Minikube) or cloud-based (EKS, AKS, GKE). For this guide, we’ll assume two clusters named `cluster-a` and `cluster-b`. Ensure you have administrative access to both.
  • `kubectl` configured: Your `kubectl` command-line tool must be configured to switch between the contexts of both clusters. This is crucial for managing resources and Liqo installations on each cluster; a quick context sanity check is shown right after this list. Refer to the official Kubernetes documentation on `kubectl` installation if you need assistance.
  • `helm` installed: Liqo uses Helm charts for installation. Ensure you have Helm 3 or later installed on your local machine. You can find installation instructions on the official Helm website.
  • Network Connectivity: The control planes (API servers) and the Liqo gateways of your clusters must be able to communicate with each other. For local setups, this often means they are on the same network. For cloud-based clusters, ensure appropriate firewall rules or VPNs are in place. Liqo provides secure communication channels, but basic network reachability is a must.
  • Liqo CLI: While not strictly mandatory for all operations, the Liqo CLI (`liqoctl`) simplifies many tasks, especially peering. We’ll install it as part of this guide.
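
Before moving on, it’s worth confirming that both contexts respond. This is a minimal sanity check, assuming your kubeconfig contexts are literally named `cluster-a` and `cluster-b` (adjust the names to your environment):

# List the contexts known to kubectl
kubectl config get-contexts

# Confirm both clusters are reachable without switching the current context
kubectl --context cluster-a get nodes
kubectl --context cluster-b get nodes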

Step-by-Step Guide

Step 1: Install the Liqo CLI

The Liqo CLI is a convenient tool that streamlines the installation, peering, and management of Liqo. It’s highly recommended for simplifying the process. This step involves downloading and installing the `liqoctl` executable to your local machine.

The CLI automates several complex steps, such as generating the peering command that embeds the remote cluster’s ID, authentication endpoint, and token, which are essential for establishing secure peering relationships between clusters. It also provides helpful commands for checking the status of your Liqo deployment and managing offloaded namespaces.

# Download liqoctl from the Liqo GitHub releases page
# (the asset name assumes Linux on amd64; adjust it for your platform and version)
curl --fail -LS "https://github.com/liqotech/liqo/releases/latest/download/liqoctl-linux-amd64.tar.gz" | tar -xz
sudo install -o root -g root -m 0755 liqoctl /usr/local/bin/liqoctl

liqoctl --help

Verify:

You should see the help output for `liqoctl`, confirming its successful installation.

# Abridged, illustrative output; the exact text varies by liqoctl version
liqoctl is a CLI tool to install and manage Liqo on your Kubernetes clusters.

Available Commands:
  install     Install/upgrade Liqo in the selected cluster
  uninstall   Uninstall Liqo from the selected cluster
  peer        Enable a peering towards a remote cluster
  unpeer      Disable a peering towards a remote cluster
  offload     Offload a resource to remote clusters
  unoffload   Unoffload a resource from remote clusters
  generate    Generate data and commands for additional operations
  status      Show the status of Liqo
  version     Print the liqoctl and Liqo versions

Use "liqoctl [command] --help" for more information about a command.

Step 2: Install Liqo on Cluster A

Now that we have the Liqo CLI, we can proceed with installing Liqo on our first Kubernetes cluster, `cluster-a`. This involves using the `liqoctl install` command, which leverages Helm charts under the hood to deploy all necessary Liqo components, such as the controller manager, the network fabric, and various custom resource definitions (CRDs).

It’s crucial to specify a unique `--cluster-name` for each cluster to help identify them later. `liqoctl install` also takes a provider argument (e.g. `kubeadm`, `kind`, `k3s`, `eks`, `gke`, `aks`) that tailors the installation to your distribution. The Liqo gateway, deployed by default, is responsible for establishing secure network connectivity between peered clusters, allowing pods to communicate across cluster boundaries. This inter-cluster traffic can optionally be further protected with solutions like Cilium WireGuard Encryption, particularly in production environments.

# Ensure your kubectl context is set to cluster-a
kubectl config use-context cluster-a

# Install Liqo (replace "kubeadm" with the provider matching your distribution,
# e.g. kind, k3s, eks, gke, aks)
liqoctl install kubeadm --cluster-name cluster-a

Verify:

The installation process will output a series of messages, and upon completion, you should see confirmation that Liqo has been successfully installed. You can also check the deployed pods in the `liqo` namespace.

# Illustrative output (truncated); the exact messages vary by Liqo version
# ...
# Liqo installed successfully!
# ...

kubectl get pods -n liqo
NAME                                             READY   STATUS    RESTARTS   AGE
liqo-auth-56f8f48466-9b5lq                       1/1     Running   0          2m
liqo-controller-manager-5f5f7f7b-6c7d8           1/1     Running   0          2m
liqo-crd-replicator-6b8b7b7c-8d9e0               1/1     Running   0          2m
liqo-gateway-7d6d7d8d-9e0f1                      1/1     Running   0          2m
liqo-network-manager-8e9e0e1e-2f3g4              1/1     Running   0          2m
liqo-proxy-9f0f1f2f-3g4h5                        1/1     Running   0          2m
# (illustrative listing; the exact set of pods depends on the Liqo version)

Step 3: Install Liqo on Cluster B

Repeat the process for `cluster-b`. It’s essential to set the `kubectl` context correctly before running the installation command. Each cluster will manage its own Liqo installation, and they will later establish a peering relationship.

Just like with `cluster-a`, the gateway component is deployed as part of the installation. This component is fundamental for Liqo’s cross-cluster networking capabilities, allowing pods in different clusters to communicate as if they were in the same network. It plays a critical role in how Liqo handles networking, complementing or even integrating with other CNI solutions. For example, if you’re using Cilium with WireGuard, Liqo can leverage that secure tunnel or establish its own.

# Ensure your kubectl context is set to cluster-b
kubectl config use-context cluster-b

# Install Liqo (again, replace "kubeadm" with your provider)
liqoctl install kubeadm --cluster-name cluster-b

Verify:

Confirm the successful installation and check the Liqo pods in the `liqo` namespace on `cluster-b`.

# Illustrative output (truncated); the exact messages vary by Liqo version
# ...
# Liqo installed successfully!
# ...

kubectl get pods -n liqo
NAME                                             READY   STATUS    RESTARTS   AGE
liqo-auth-56f8f48466-9b5lq                       1/1     Running   0          2m
liqo-controller-manager-5f5f7f7b-6c7d8           1/1     Running   0          2m
liqo-crd-replicator-6b8b7b7c-8d9e0               1/1     Running   0          2m
liqo-gateway-7d6d7d8d-9e0f1                      1/1     Running   0          2m
liqo-network-manager-8e9e0e1e-2f3g4              1/1     Running   0          2m
liqo-proxy-9f0f1f2f-3g4h5                        1/1     Running   0          2m
# (illustrative listing; the exact set of pods depends on the Liqo version)

Step 4: Peer the Clusters

This is the core step where `cluster-a` and `cluster-b` establish a secure peering relationship. Note that Liqo peerings are directional: the consumer cluster (here `cluster-a`) offloads workloads to the provider cluster (`cluster-b`); for a bidirectional setup, simply repeat the procedure in the opposite direction. Liqo uses a secure out-of-band mechanism to exchange the necessary information (cluster ID, authentication endpoint, and an authentication token) to set up this trust. We’ll initiate the peering from `cluster-a` towards `cluster-b`.

The `liqoctl generate peer-command` command simplifies this: run it against the target cluster (`cluster-b`) and it prints a ready-made `liqoctl peer out-of-band ...` command embedding the required details. Executing that command on `cluster-a` creates a `ForeignCluster` resource in `cluster-a` which represents `cluster-b`. This trust is crucial for Liqo’s operation, allowing resources to be securely shared and workloads to be scheduled remotely.

# Generate the peering command by querying cluster-b
# (liqoctl supports the standard --context flag to select a kubeconfig context)
liqoctl generate peer-command --context cluster-b

# The command above prints something like the following; the angle-bracket
# values are placeholders filled in with your cluster-b details.
# Run the generated command with your kubectl context set to cluster-a:
kubectl config use-context cluster-a
liqoctl peer out-of-band cluster-b \
  --auth-url https://<cluster-b-auth-endpoint> \
  --cluster-id <cluster-b-cluster-id> \
  --auth-token <cluster-b-token>

Verify:

Check the peering status on both clusters. It might take a minute or two for the status to transition to `Established`.

# On cluster-a
kubectl config use-context cluster-a
kubectl get foreignclusters
NAME        TYPE        OUTGOING PEERING   INCOMING PEERING   NETWORKING    AUTHENTICATION   AGE
cluster-b   OutOfBand   Established        None               Established   Established      2m
# (illustrative output; column names vary slightly across Liqo versions)

# On cluster-b
kubectl config use-context cluster-b
kubectl get foreignclusters
NAME        TYPE        OUTGOING PEERING   INCOMING PEERING   NETWORKING    AUTHENTICATION   AGE
cluster-a   OutOfBand   None               Established        Established   Established      2m

You should see `Established` for the relevant status fields. After successful peering, Liqo creates a virtual node in the consumer cluster (`cluster-a`), representing the remote cluster’s available resources. This virtual node is key to how Liqo integrates with Kubernetes’ native scheduler; you can inspect it as shown after the node listing below.

# On cluster-a, you'll see a virtual node for cluster-b (illustrative listing)
kubectl config use-context cluster-a
kubectl get nodes
NAME                       STATUS   ROLES           AGE   VERSION
cluster-a-control-plane    Ready    control-plane   1h    v1.28.3
cluster-a-worker           Ready    <none>          1h    v1.28.3
liqo-cluster-b             Ready    agent           2m    v1.28.3

# With the unidirectional peering above, cluster-b does not gain a virtual
# node for cluster-a; peer in the opposite direction if you also want
# cluster-b to offload workloads onto cluster-a.
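
To see how the scheduler can target the virtual node, inspect its labels and taints. The label and taint names below follow the Liqo documentation but may differ across versions, so treat them as assumptions to verify:

# Show the virtual node's labels; look for liqo.io/type=virtual-node
kubectl get node liqo-cluster-b --show-labels

# Show its taints; Liqo taints virtual nodes (e.g. virtual-node.liqo.io/not-allowed)
# so that only pods in offloaded namespaces, which receive the matching
# toleration from Liqo's webhook, can land there
kubectl describe node liqo-cluster-b | grep -i taints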

Step 5: Configure Namespace Offloading

For Liqo to offload workloads, you need to explicitly enable resource offloading for specific namespaces. This is done by adding a label to the namespace or by creating a `NamespaceOffloading` custom resource (a declarative sketch of the latter follows the commands below). When a namespace is offloaded, Liqo monitors it for new pods and, based on scheduling decisions, can redirect them to the peered cluster.

We’ll create a new namespace called `liqo-demo` and enable offloading for it. This tells Liqo that pods within this namespace are candidates for cross-cluster scheduling. This mechanism allows for fine-grained control over which applications can leverage multi-cluster capabilities, ensuring that sensitive workloads remain within their designated cluster if required. This is also where Kubernetes Network Policies become critical, as they need to be carefully considered for pods spanning multiple clusters to maintain security.

# Ensure your kubectl context is set to cluster-a
kubectl config use-context cluster-a

# Create a namespace for demonstration
kubectl create namespace liqo-demo

# Enable offloading for the namespace via the Liqo label
# (on recent Liqo versions, "liqoctl offload namespace liqo-demo" is equivalent)
kubectl label namespace liqo-demo liqo.io/enabled=true
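
For declarative setups, the same intent can be expressed with a `NamespaceOffloading` resource. This is a minimal sketch following the `offloading.liqo.io/v1alpha1` schema as documented; verify the field names against your Liqo version, and note that `<cluster-b-cluster-id>` is a placeholder:

# namespace-offloading.yaml
apiVersion: offloading.liqo.io/v1alpha1
kind: NamespaceOffloading
metadata:
  name: offloading            # the resource name must be exactly "offloading"
  namespace: liqo-demo
spec:
  namespaceMappingStrategy: EnforceSameName   # remote namespace keeps the same name
  podOffloadingStrategy: LocalAndRemote       # pods may run locally or remotely
  clusterSelector:                            # restrict offloading to cluster-b only
    nodeSelectorTerms:
    - matchExpressions:
      - key: liqo.io/remote-cluster-id
        operator: In
        values:
        - <cluster-b-cluster-id>

Apply it with `kubectl apply -f namespace-offloading.yaml` while your context points at `cluster-a`.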

Verify:

Check the labels on the `liqo-demo` namespace to confirm that `liqo.io/enabled=true` is present.

kubectl get namespace liqo-demo -o yaml | grep liqo.io/enabled
    liqo.io/enabled: "true"

Additionally, Liqo automatically creates a corresponding mirrored namespace in the peered cluster (`cluster-b` in this case). This mirrored namespace is where the offloaded pods will actually run. Depending on the namespace mapping strategy, the remote name may carry a cluster-specific suffix; here we assume it keeps the same name.

# On cluster-b, verify the mirrored namespace exists
kubectl config use-context cluster-b
kubectl get namespace liqo-demo
NAME        STATUS   AGE
liqo-demo   Active   1m

Step 6: Deploy a Sample Application

Now we can deploy an application into the `liqo-demo` namespace on `cluster-a`. Because the namespace is configured for offloading, the Kubernetes scheduler, with Liqo’s help, will consider the virtual node representing `cluster-b` as a valid target. If `cluster-a` is under resource pressure or if the pod explicitly requests resources that only `cluster-b` can satisfy, the pod will be scheduled to `cluster-b`.

This example uses a simple Nginx deployment. We’ll add a node selector that explicitly targets Liqo virtual nodes (with a single peer, that means `cluster-b`), forcing the pod to be offloaded. This demonstrates how you can control where your workloads run, even across cluster boundaries. For more advanced scheduling scenarios, especially with specialized hardware like GPUs for AI/ML workloads, you might look into techniques discussed in our LLM GPU Scheduling Guide.

# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-offloaded
  namespace: liqo-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-offloaded
  template:
    metadata:
      labels:
        app: nginx-offloaded
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
      nodeSelector:
        liqo.io/type: virtual-node
        # To target one specific remote cluster when you have multiple peers,
        # constrain offloading via the NamespaceOffloading clusterSelector instead.
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-offloaded
  namespace: liqo-demo
spec:
  selector:
    app: nginx-offloaded
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: ClusterIP

# Ensure your kubectl context is set to cluster-a
kubectl config use-context cluster-a

# Apply the deployment
kubectl apply -f deployment.yaml

Verify:

First, check the pod status on `cluster-a`. You will see the pod in the `liqo-demo` namespace. Notice that it’s scheduled on the `liqo-cluster-b` virtual node.

# On cluster-a
kubectl config use-context cluster-a
kubectl get pods -n liqo-demo -o wide
NAME                              READY   STATUS    RESTARTS   AGE     IP            NODE             NOMINATED NODE   READINESS GATES
nginx-offloaded-5c6f7d8e9-f0g1h   1/1     Running   0          30s     10.0.0.10     liqo-cluster-b   <none>           <none>

Next, switch to `cluster-b` and verify that the actual pod is running there. Liqo handles the replication of the pod definition and network routing, making it appear seamless.

# On cluster-b
kubectl config use-context cluster-b
kubectl get pods -n liqo-demo -o wide
NAME                              READY   STATUS    RESTARTS   AGE     IP            NODE               NOMINATED NODE   READINESS GATES
nginx-offloaded-5c6f7d8e9-f0g1h   1/1     Running   0          40s     10.244.1.5    cluster-b-worker   <none>           <none>

You can also test connectivity to the service from `cluster-a`:

# On cluster-a
kubectl config use-context cluster-a
kubectl run -it --rm --restart=Never debug-pod --image=busybox --namespace liqo-demo -- /bin/sh
/ # wget -O - nginx-offloaded.liqo-demo.svc.cluster.local
Connecting to nginx-offloaded.liqo-demo.svc.cluster.local (10.96.100.10:80)
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
# ... (rest of the nginx welcome page HTML)

This demonstrates that the service on `cluster-a` successfully routed traffic to the pod running on `cluster-b`, proving the cross-cluster network connectivity established by Liqo.
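
When you are done experimenting, the demo can be torn down with the corresponding liqoctl commands. This is a sketch; the exact subcommands and flags may vary slightly between Liqo versions:

# Stop offloading the namespace (offloaded pods are rescheduled locally)
liqoctl unoffload namespace liqo-demo

# Tear down the peering from cluster-a towards cluster-b
liqoctl unpeer out-of-band cluster-b

# Remove Liqo from a cluster entirely (run once per cluster)
liqoctl uninstall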

Production Considerations

Deploying Liqo in a production environment requires careful planning beyond a basic setup. Here are key considerations:

  1. Security:
    • Network Policies: While Liqo establishes secure tunnels, you must still implement Kubernetes Network Policies to control traffic between pods, especially those offloaded to other clusters. Ensure policies are correctly mirrored or adapted for the remote environment.
    • Authentication & Authorization: Liqo uses Kubernetes service accounts and tokens for inter-cluster authentication. Regularly review and rotate these tokens. Consider integrating with external identity providers if your setup demands it.
    • Data Encryption: Liqo’s gateway encrypts traffic between clusters. For additional layers of security, especially if your underlying network is untrusted, consider augmenting with solutions like Cilium WireGuard Encryption or VPNs at the infrastructure level.
  2. Networking:
    • CIDR Overlaps: Ensure cluster pod and service CIDRs do not overlap. Liqo handles network translation, but avoiding overlaps simplifies network management and debugging.
    • Latency: Be mindful of network latency between clusters. Offloading latency-sensitive applications to a remote cluster with high network latency can negatively impact performance.
    • Egress/Ingress: Plan how external traffic reaches offloaded applications. You might need to configure Kubernetes Gateway API resources or Ingress controllers in the remote cluster, or use global load balancers.
  3. Resource Management & Cost:
    • Resource Quotas: Apply Resource Quotas to offloaded namespaces to prevent a single application from consuming excessive resources on the peered cluster (a minimal example follows this list).
    • Cost Optimization: Liqo helps optimize resource utilization. Combine it with tools like Karpenter for Cost Optimization to dynamically scale nodes based on actual workload demands, even across clusters, potentially reducing idle resource costs.
    • Monitoring Resource Consumption: Monitor resource consumption on both local and virtual nodes to understand offloading patterns and potential bottlenecks.
  4. Observability:
    • Centralized Logging and Monitoring: Implement a centralized logging and monitoring solution (e.g., Prometheus/Grafana, ELK stack) that can aggregate metrics and logs from all peered clusters. This is crucial for debugging and performance analysis of distributed applications. Consider leveraging eBPF-based tools like eBPF Observability with Hubble for deep network insights across your multi-cluster setup.
    • Tracing: Distributed tracing (e.g., Jaeger, Zipkin) is essential for understanding the flow of requests across services deployed in different clusters.
  5. High Availability & Disaster Recovery:
    • Multi-Cluster Deployments: Design your applications to be multi-cluster aware, using tools like KubeFed or GitOps to deploy across clusters for true high availability.
    • Data Replication: Liqo focuses on compute offloading. For stateful applications, ensure your data is replicated or shared across clusters using solutions like Rook-Ceph, Portworx, or cloud-provider specific storage services.
  6. Application Design:
    • Stateless Applications: Liqo is best suited for stateless or loosely coupled applications that can tolerate network latency.
    • Service Mesh Integration: If you use a service mesh like Istio Ambient Mesh, ensure its multi-cluster capabilities are configured to work seamlessly with Liqo’s network virtualization.
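
As a concrete starting point for the Resource Quotas recommendation above, here is a minimal `ResourceQuota` sketch for the offloaded namespace. The limits are illustrative placeholders; size them to your clusters:

# quota.yaml: caps what the liqo-demo namespace may consume in aggregate,
# including pods that end up offloaded to the peered cluster
apiVersion: v1
kind: ResourceQuota
metadata:
  name: liqo-demo-quota
  namespace: liqo-demo
spec:
  hard:
    requests.cpu: "4"        # illustrative value
    requests.memory: 8Gi     # illustrative value
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"

Apply it with `kubectl apply -f quota.yaml` on the cluster that owns the namespace.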

Troubleshooting

Here are some common issues you might encounter when working with Liqo and their potential solutions:

  1. Issue: `ForeignCluster` status stuck in `Pending` or `Error`

    Explanation: This typically indicates a problem with the peering process, often related to network connectivity or incorrect authentication details.

    Solution:

    1. Check Network Connectivity: Ensure the API servers and Liqo gateway components of both clusters can reach each other. For cloud clusters, verify security groups/firewall rules. For local clusters, ensure they are on the same network or configured for inter-cluster communication.
    2. Verify Peering Parameters: Re-run `liqoctl generate peer-command` against the remote cluster and re-execute the generated `liqoctl peer out-of-band ...` command, double-checking the `--cluster-id`, `--auth-url`, and `--auth-token` values.
    3. Check Liqo Gateway Pods: Ensure the `liqo-gateway` pods are running and healthy in both clusters (in the `liqo` namespace).
    4. Review Logs: Check the logs of the `liqo-controller-manager` and `liqo-gateway` pods in both clusters for error messages:
       kubectl logs -n liqo -l app.kubernetes.io/component=controller-manager
       kubectl logs -n liqo -l app.kubernetes.io/component=gateway
  2. Issue: Pods are not offloaded to the remote cluster

    Explanation: Even if peering is successful, pods might not be scheduled to the virtual node if the namespace isn’t offloaded, or if scheduling constraints prevent it.

    Solution:

    1. Verify Namespace Offloading: Ensure the target namespace has the `liqo.io/enabled=true` label (or an active `NamespaceOffloading` resource):
       kubectl get namespace liqo-demo -o yaml | grep liqo.io/enabled
    2. Check Virtual Node Status: Confirm that the virtual node (`liqo-<cluster-name>`) is in `Ready` status on the local cluster:
       kubectl get nodes
    3. Examine Pod Events: Describe the pending pod and check its events for scheduling errors. Look for messages related to taints, tolerations, node selectors, or insufficient resources:
       kubectl describe pod <pod-name> -n <namespace>
    4. Resource Requests: Ensure the remote cluster has enough resources to satisfy the pod’s requests.
    5. Node Selector/Affinity: If you’re using `nodeSelector` or `nodeAffinity`, ensure they correctly target the virtual node (e.g., `liqo.io/type: virtual-node`).
  3. Issue: Offloaded pods cannot communicate with local services (or vice-versa)

    Explanation: This points to a network connectivity issue between the clusters, despite peering being established.

    Solution:

    1. Check `liqo-network-manager` Logs: This component is responsible for establishing and maintaining cross-cluster network routes:
       kubectl logs -n liqo -l app.kubernetes.io/component=network-manager
    2. Verify Gateway Pods: Ensure `liqo-gateway` pods are running and healthy on both clusters.
    3. Firewall Rules: Double-check that all necessary ports (e.g., UDP 51820 for WireGuard, if used by Liqo’s network fabric) are open between the gateway nodes.
    4. CIDR Overlaps: Confirm that your cluster pod and service CIDRs do not overlap. Liqo attempts to handle this, but overlaps can cause complex routing issues.
    5. Test Connectivity Manually: Deploy simple `busybox` pods in both the local and remote offloaded namespaces and try to `ping` or `wget` services/pods across clusters to narrow down the problem (a sketch follows this troubleshooting list).
  4. Issue: Service exposure (LoadBalancer/NodePort) for offloaded apps doesn’t work

    Explanation: Liqo primarily handles ClusterIP services for inter-cluster pod communication. Exposing services externally requires additional configuration.

    Solution:

    1. Expose in Remote Cluster: If a pod is offloaded to `cluster-b`, you typically need to create the `LoadBalancer` or `NodePort` service directly in `cluster-b` (in the mirrored namespace). Liqo does not automatically expose these services from the originating cluster.
    2. Ingress/Gateway API: Use an Ingress controller or Gateway API in the remote cluster (`cluster-b`) to expose the service.
    3. Global Load Balancer: For true multi-cluster ingress, consider a global load balancer (e.g., AWS Global Accelerator, GCP Global External HTTP(S) Load Balancer) that can route traffic to services in either cluster.
  5. Issue: Performance degradation for offloaded workloads

    Explanation: Increased latency or reduced throughput for applications running on a remote cluster.

    Solution:

    1. Network Latency: Measure the network latency between the clusters. Tools like `ping` or `iperf` can help. If latency is high, consider if the application is suitable for offloading or if a closer peered cluster is available.
    2. Bandwidth: Ensure sufficient bandwidth between the clusters. Large data transfers over slow links will impact performance.
    3. Resource Saturation: Check resource utilization (CPU, memory, network I/O) on the nodes of the remote cluster. The remote cluster might be overloaded.
    4. Liqo Gateway Resources: Ensure the `liqo-gateway` pods have adequate CPU and memory resources allocated, especially if handling high volumes of traffic.
    5. Monitor with eBPF: Use tools like eBPF Observability with Hubble to gain deep insights into network traffic patterns and potential bottlenecks within and between clusters.
  6. Issue: Liqo CLI commands fail with “Error: context not found” or similar

    Explanation: The Liqo CLI relies on your `kubectl` configuration to determine which cluster to interact with.

    Solution:

    1. Verify `kubectl` Context: Ensure your `kubectl` context is correctly set to the intended cluster before running Liqo CLI commands:
       kubectl config get-contexts
       kubectl config use-context <context-name>
    2. Specify Context: For commands that must read from a cluster other than the current one (like `liqoctl generate peer-command`), pass the remote cluster’s context explicitly with the `--context` flag.
