Introduction
In the fast-paced world of microservices and cloud-native development, empowering developers with self-service capabilities is no longer a luxury – it’s a necessity. As your Kubernetes footprint grows, so does the complexity of managing services, environments, and infrastructure. Developers often find themselves navigating a maze of internal tools, documentation, and approval processes just to spin up a new service or get critical information about an existing one. This friction slows down innovation, impacts developer experience, and ultimately affects your organization’s agility.
Enter the Developer Portal: a centralized hub designed to streamline development workflows, provide a single pane of glass for all things related to your services, and foster a culture of self-service. A well-implemented developer portal can significantly reduce cognitive load for developers, automate repetitive tasks, and ensure consistency across your development ecosystem. This guide will walk you through building a self-service developer portal using Port, a powerful platform that helps organizations define, manage, and automate their software catalog. We’ll leverage Kubernetes as our underlying infrastructure, demonstrating how to integrate your cluster resources into Port’s catalog and enable self-service actions.
TL;DR
Build a self-service developer portal with Port by integrating your Kubernetes resources. Define your software catalog, ingest Kubernetes resource data, and enable self-service actions directly from the portal.
# 1. Install Port CLI
curl -fsSL https://raw.githubusercontent.com/port-labs/port-agent/main/install.sh | bash
# 2. Login to Port
port login
# 3. Initialize a Port blueprint (e.g., for a Kubernetes Service)
port init --blueprint-name "Kubernetes Service" --kind "Service"
# 4. Deploy the Port Kubernetes Exporter
helm repo add port-labs https://port-labs.github.io/helm-charts
helm repo update
helm upgrade --install port-kubernetes-exporter port-labs/port-kubernetes-exporter \
--namespace port-exporter --create-namespace \
--set port.clientID="YOUR_CLIENT_ID" \
--set port.clientSecret="YOUR_CLIENT_SECRET" \
--set port.baseUrl="https://api.getport.io"
# 5. Define a self-service action in Port (example: restart deployment)
# (This is done via the Port UI or API, linking to a Kubernetes job/workflow)
# 6. Verify data ingestion in Port UI
kubectl logs -f -n port-exporter -l app.kubernetes.io/name=port-kubernetes-exporter
# Check your Port catalog for ingested Kubernetes Services, Deployments, etc.
Prerequisites
Before we dive into building our self-service developer portal with Port and Kubernetes, ensure you have the following tools and knowledge:
- Kubernetes Cluster: A running Kubernetes cluster (v1.20+). This can be a local cluster like Minikube or Kind, or a cloud-managed cluster (EKS, GKE, AKS).
- kubectl: The Kubernetes command-line tool, configured to connect to your cluster. Refer to the official Kubernetes documentation for installation.
- Helm: The Kubernetes package manager, version 3+. Install instructions can be found on the Helm website.
- Port Account: A free or paid Port account. You can sign up at getport.io.
- Port CLI: The Port command-line interface. Installation instructions are detailed in the first step.
- Basic Kubernetes Knowledge: Familiarity with Kubernetes concepts like Deployments, Services, Namespaces, and RBAC.
- Basic YAML Knowledge: Understanding how to read and write YAML files.
Step-by-Step Guide
Step 1: Install and Configure the Port CLI
The Port CLI is your primary interface for interacting with the Port platform from your local machine. It allows you to manage blueprints, entities, and actions, and is essential for setting up your developer portal. We’ll start by installing it and then logging in to authenticate with your Port account.
First, download and install the Port CLI using the provided script. After installation, you’ll need to log in. This process will open your web browser to authenticate with your Port account, linking your CLI session to your workspace.
# Install the Port CLI
curl -fsSL https://raw.githubusercontent.com/port-labs/port-agent/main/install.sh | bash
# Verify installation
port --version
# Login to Port
port login
Verify:
After running port login, your browser should open, prompting you to log in to your Port account. Once successful, the CLI will confirm your login, and port --version should display the installed version.
# Expected output after successful login
Logged in successfully
# Expected output for version
port version v0.1.x # (version number may vary)
Step 2: Understand Port Blueprints and Entities
In Port, a Blueprint is a schema that defines a type of software component or resource in your organization’s catalog. Think of it as a class definition. An Entity is an instance of a Blueprint, representing a specific component, like a particular microservice, a database, or a Kubernetes Deployment. Blueprints organize your catalog, while Entities populate it with real-world data.
We’ll start by defining a simple blueprint for a “Microservice.” This blueprint will serve as a foundational component in our catalog, allowing us to track various attributes of our services. Later, we’ll see how Kubernetes resources can be ingested as entities under such blueprints.
# Create a new file for your blueprint definition, e.g., microservice-blueprint.yaml
cat <<'EOF' > microservice-blueprint.yaml
identifier: Microservice
title: Microservice
icon: Microservice
properties:
  name:
    type: string
    title: Name
    description: The name of the microservice.
  owner:
    type: string
    title: Owner
    description: The team or individual responsible for the microservice.
  repoUrl:
    type: string
    title: Repository URL
    description: URL to the microservice's source code repository.
    format: url
  environment:
    type: string
    title: Environment
    description: The deployment environment (e.g., dev, staging, prod).
    enum:
      - dev
      - staging
      - prod
  status:
    type: string
    title: Status
    description: Current operational status.
    enum:
      - active
      - inactive # emitted by the exporter's status mapping in Step 3
      - deprecated
      - archived
EOF
# Apply the blueprint to your Port account
port blueprint create -f microservice-blueprint.yaml
Verify:
You can verify the creation of the blueprint by listing all blueprints or by checking the Port UI in your browser under the “Blueprints” section. You should see “Microservice” listed.
port blueprint get Microservice
# Expected output (truncated)
identifier: Microservice
title: Microservice
icon: Microservice
properties:
  name:
    type: string
    title: Name
    description: The name of the microservice.
...
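To make the Blueprint/Entity distinction concrete: an entity is just data that must conform to its blueprint's property schema. A minimal local sketch of that relationship in Python (illustrative only; in practice Port performs this validation server-side):

```python
# Illustrative local check of the Blueprint/Entity relationship; Port itself
# validates entities server-side against the blueprint schema.
BLUEPRINT = {
    "identifier": "Microservice",
    "properties": {
        "name": {"type": "string"},
        "owner": {"type": "string"},
        "repoUrl": {"type": "string"},
        "environment": {"type": "string", "enum": ["dev", "staging", "prod"]},
        "status": {"type": "string", "enum": ["active", "deprecated", "archived"]},
    },
}

def validate_entity(entity: dict, blueprint: dict) -> list:
    """Return a list of schema violations; an empty list means the entity conforms."""
    errors = []
    for key, value in entity.get("properties", {}).items():
        spec = blueprint["properties"].get(key)
        if spec is None:
            errors.append(f"unknown property: {key}")
            continue
        if spec["type"] == "string" and not isinstance(value, str):
            errors.append(f"{key}: expected string, got {type(value).__name__}")
        if "enum" in spec and value not in spec["enum"]:
            errors.append(f"{key}: {value!r} not in {spec['enum']}")
    return errors

entity = {
    "identifier": "my-webapp",
    "blueprint": "Microservice",
    "properties": {
        "name": "my-webapp",
        "owner": "dev-team-a",
        "environment": "dev",
        "status": "active",
    },
}
print(validate_entity(entity, BLUEPRINT))  # → []
```

An entity with an out-of-enum environment or an undeclared property would come back with violations instead of an empty list.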
Step 3: Deploy the Port Kubernetes Exporter
To populate your Port catalog with real-time data from your Kubernetes cluster, we’ll deploy the Port Kubernetes Exporter. This component runs inside your cluster, discovers specified Kubernetes resources (like Deployments, Services, Ingresses, etc.), and ingests them as Entities into your Port workspace. This is the core mechanism for bringing your Kubernetes infrastructure into your developer portal.
Before deployment, you’ll need your Port Client ID and Client Secret. These can be generated in your Port account settings under “API Tokens.” Store them securely.
# Get your Port Client ID and Client Secret from your Port account settings
# (Replace with your actual keys)
export PORT_CLIENT_ID="YOUR_CLIENT_ID"
export PORT_CLIENT_SECRET="YOUR_CLIENT_SECRET"
# Add the Port Helm repository
helm repo add port-labs https://port-labs.github.io/helm-charts
helm repo update
# Deploy the Port Kubernetes Exporter
helm upgrade --install port-kubernetes-exporter port-labs/port-kubernetes-exporter \
--namespace port-exporter --create-namespace \
--set port.clientID="${PORT_CLIENT_ID}" \
--set port.clientSecret="${PORT_CLIENT_SECRET}" \
--set port.baseUrl="https://api.getport.io" \
--set kubernetesExporter.clusterName="my-dev-cluster" \
--set kubernetesExporter.configs[0].kind="Deployment" \
--set kubernetesExporter.configs[0].blueprint="Microservice" \
--set kubernetesExporter.configs[0].selector.queryParameters.selector="app" \
--set kubernetesExporter.configs[0].selector.queryParameters.selectorEnabled=true \
--set kubernetesExporter.configs[0].properties.name="metadata.name" \
--set kubernetesExporter.configs[0].properties.owner="metadata.labels.owner" \
--set kubernetesExporter.configs[0].properties.environment="metadata.labels.env" \
--set kubernetesExporter.configs[0].properties.repoUrl="spec.template.metadata.annotations.repository" \
--set kubernetesExporter.configs[0].properties.status='.status.conditions[]? | select(.type == "Available") | if .status == "True" then "active" else "inactive" end'
Verify:
Check if the exporter pod is running and inspect its logs. You should see messages indicating successful connection to Port and resource discovery.
# Check exporter pod status
kubectl get pods -n port-exporter
# View exporter logs
kubectl logs -f -n port-exporter -l app.kubernetes.io/name=port-kubernetes-exporter
# Expected output for pod status
NAME READY STATUS RESTARTS AGE
port-kubernetes-exporter-7b...-abcde 1/1 Running 0 2m
# Expected log output (truncated)
{"level":"info","ts":"2023-10-27T10:30:00.000Z","logger":"exporter","msg":"Connected to Port, starting export loop"}
{"level":"info","ts":"2023-10-27T10:30:05.000Z","logger":"exporter","msg":"Exported 5 Deployments to Port"}
The configuration above maps Kubernetes Deployment fields to our “Microservice” blueprint properties. For example, metadata.name maps to name, and metadata.labels.owner maps to owner. This is a crucial step in defining how your Kubernetes resources appear in your developer portal. For more advanced networking configurations in Kubernetes, consider exploring Network Policies or even Cilium WireGuard Encryption for secure pod-to-pod communication, which can also be represented in your Port catalog.
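The mapping logic itself is easy to reason about locally. A Python sketch that simulates the property extraction (illustrative only; the real exporter evaluates JQ expressions against live cluster objects):

```python
# Illustrative simulation of the exporter's field mapping; the real exporter
# evaluates JQ expressions inside the cluster.
def extract(obj, path):
    """Walk a dotted path such as 'metadata.labels.owner' through nested dicts."""
    for part in path.split("."):
        if not isinstance(obj, dict):
            return None
        obj = obj.get(part)
    return obj

deployment = {  # trimmed Deployment manifest matching Step 4 below
    "metadata": {
        "name": "my-webapp",
        "labels": {"app": "my-webapp", "owner": "dev-team-a", "env": "dev"},
    },
    "spec": {"template": {"metadata": {"annotations": {
        "repository": "https://github.com/my-org/my-webapp"}}}},
}

MAPPING = {  # blueprint property -> source path, as in the Helm values above
    "name": "metadata.name",
    "owner": "metadata.labels.owner",
    "environment": "metadata.labels.env",
    "repoUrl": "spec.template.metadata.annotations.repository",
}

entity_properties = {prop: extract(deployment, path) for prop, path in MAPPING.items()}
print(entity_properties["owner"])  # → dev-team-a
```

A path that doesn't exist on the resource simply yields no value, which is also what you'd observe in Port as a missing property.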
Step 4: Create a Sample Kubernetes Deployment
To see the Port Kubernetes Exporter in action, let’s deploy a simple microservice to our Kubernetes cluster. This deployment will have labels and annotations that the exporter is configured to pick up and map to our “Microservice” blueprint properties.
Notice the app: my-webapp label, owner: dev-team-a, env: dev, and the repository annotation. These are key for the exporter’s mapping.
# Create a sample deployment
cat <<'EOF' > my-webapp-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-webapp
  labels:
    app: my-webapp
    owner: dev-team-a
    env: dev
  annotations:
    repository: https://github.com/my-org/my-webapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-webapp
  template:
    metadata:
      labels:
        app: my-webapp
        owner: dev-team-a
        env: dev
      annotations:
        repository: https://github.com/my-org/my-webapp
    spec:
      containers:
        - name: my-webapp
          image: nginx:latest
          ports:
            - containerPort: 80
EOF
kubectl apply -f my-webapp-deployment.yaml
Verify:
Ensure the deployment is running in your Kubernetes cluster, then check the Port UI. You should now see an “Entity” under your “Microservice” blueprint named “my-webapp,” populated with the data from the deployment’s labels and annotations. It might take a few moments for the exporter to sync the data.
kubectl get deployment my-webapp
# Expected output
NAME READY UP-TO-DATE AVAILABLE AGE
my-webapp 2/2 2 2 30s
Now, navigate to your Port UI (https://app.getport.io/), select your workspace, and go to the “Microservice” blueprint. You should see an entity named “my-webapp” with its properties populated.
Step 5: Define and Trigger Self-Service Actions
One of the most powerful features of a developer portal is the ability to enable self-service actions. Instead of needing direct Kubernetes access or opening tickets, developers can trigger predefined workflows directly from the portal. These actions can range from restarting a deployment or scaling a service to deploying a new environment or requesting a new database.
We’ll define an action to “Restart Deployment.” This action will trigger a Kubernetes Job that performs a rolling restart of a specified deployment by patching its `spec.template` with a new annotation, forcing a pod recreation.
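The patch document itself is small enough to construct and sanity-check locally before wiring it into an action. A Python sketch of what the Job sends to the API server:

```python
import json
from datetime import datetime, timezone

# Build the strategic-merge patch the restart Job applies. Bumping the
# kubectl.kubernetes.io/restartedAt annotation changes the pod template,
# so the Deployment controller rolls the pods.
def restart_patch(now=None):
    ts = (now or datetime.now(timezone.utc)).strftime("%Y-%m-%dT%H:%M:%SZ")
    return json.dumps({"spec": {"template": {"metadata": {"annotations": {
        "kubectl.kubernetes.io/restartedAt": ts}}}}})

patch = restart_patch()
# kubectl would send the same document with:
#   kubectl patch deployment my-webapp -p "$patch"
assert json.loads(patch)["spec"]["template"]["metadata"]["annotations"]
print(patch)
```

This is exactly the mechanism behind `kubectl rollout restart`, just made explicit so it can run inside a minimal Job container.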
First, create an RBAC role and service account for the action runner in your cluster. This service account will be used by the Kubernetes Job triggered by Port to interact with your cluster.
# Create rbac.yaml for the action runner
cat <<'EOF' > rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: port-action-runner
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: port-deployment-restarter
  namespace: default
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: port-action-runner-binding
  namespace: default
subjects:
  - kind: ServiceAccount
    name: port-action-runner
    namespace: default
roleRef:
  kind: Role
  name: port-deployment-restarter
  apiGroup: rbac.authorization.k8s.io
EOF
kubectl apply -f rbac.yaml
Next, define the action in Port. This is typically done through the Port UI, but you can also use the CLI or API. The action will be associated with the “Microservice” blueprint and will provision a Kubernetes Job.
Go to your Port UI:
- Navigate to “Blueprints” and select “Microservice.”
- Go to the “Actions” tab.
- Click “Add Action.”
- Configure the action with the following details:
  - Identifier: restartDeployment
  - Title: Restart Deployment
  - Icon: Restart
  - Description: Triggers a rolling restart for the selected microservice's deployment.
  - Trigger: K8s Job
  - Job Specification:
# This YAML will be placed in the "Job Specification" field in the Port UI
apiVersion: batch/v1
kind: Job
metadata:
  generateName: restart-deployment-
  namespace: default # Or the namespace where your deployments reside
spec:
  template:
    spec:
      serviceAccountName: port-action-runner
      containers:
        - name: restart-deployment
          image: bitnami/kubectl:1.27.3 # Use a kubectl image
          command: ["/bin/sh", "-c"]
          args:
            - |
              echo "Restarting deployment {{ .entity.identifier }}..."
              kubectl patch deployment {{ .entity.identifier }} \
                -p '{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt":"'"$(date -u +%Y-%m-%dT%H:%M:%SZ)"'"}}}}}'
              echo "Deployment {{ .entity.identifier }} restarted successfully."
      restartPolicy: Never
  backoffLimit: 0
Verify:
Navigate to the “my-webapp” entity in your Port UI. You should see a “Restart Deployment” button. Click it. A new Kubernetes Job should be created in your cluster, and your “my-webapp” deployment’s pods should restart. You can monitor the job and deployment from your terminal.
# Monitor the job
kubectl get jobs -n default -w
# Monitor the deployment rollout
kubectl rollout status deployment/my-webapp -n default
# Expected job output
NAME COMPLETIONS DURATION AGE
restart-deployment-abcde 1/1 5s 10s
# Expected rollout status
Waiting for deployment "my-webapp" rollout to finish: 1 of 2 updated replicas are available...
deployment "my-webapp" successfully rolled out
This self-service capability dramatically improves developer experience, allowing them to manage their services without direct cluster access, while adhering to predefined, secure workflows. For more robust automation, consider integrating with tools like Argo Workflows or Tekton, which can also be triggered via Port actions. If you’re looking to optimize resource usage in your cluster, especially with dynamic workloads, check out our guide on Karpenter Cost Optimization, which can complement your self-service provisioning.
Step 6: Enhance the Portal with Dashboards and Integrations
A true self-service developer portal goes beyond just displaying data and triggering actions. It should offer a comprehensive view of your services, including operational metrics, logs, and links to relevant tools. Port allows you to embed dashboards from observability platforms and integrate with other tools.
For example, you can add a link to your Grafana dashboard for a specific service or embed a PagerDuty service status. This brings all relevant operational data into a single pane of glass within Port.
Let’s add a “Grafana Dashboard” link to our “Microservice” blueprint. This is done by editing the blueprint in the Port UI under the “Blueprints” section. Add a new “externalLink” property.
# Example of adding an external link to the blueprint definition (via Port UI or API)
# This snippet shows how you would extend the blueprint's `mirrorProperties` or `properties`
# to include a link. You'd typically update the blueprint using the Port UI "Edit Blueprint"
# or `port blueprint update -f your-updated-blueprint.yaml`.
#
# For simplicity, this is often set up in the Port UI directly under the "Properties" section
# by adding a property of type "string" with format "url". Then, you can populate this URL
# for each entity. For a dynamic link, you might use JQ in the exporter config.
# Example of how you might update the exporter config to dynamically generate a Grafana URL
# (This would be an addition to the `kubernetesExporter.configs` in the Helm chart in Step 3)
# --set kubernetesExporter.configs[0].properties.grafanaDashboardUrl="\"https://grafana.your-domain.com/d/your-dashboard-id?var-service=\" + .metadata.name"
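The dynamic-link idea boils down to building a URL from entity data. A Python sketch (the dashboard path and the `var-service` query parameter are placeholders; adjust them to your Grafana setup):

```python
from urllib.parse import urlencode

# Derive a per-service dashboard URL from the entity name. The base path and
# the var-service parameter are hypothetical placeholders.
def grafana_url(service, base="https://grafana.your-domain.com/d/your-dashboard-id"):
    return f"{base}?{urlencode({'var-service': service})}"

print(grafana_url("my-webapp"))
# → https://grafana.your-domain.com/d/your-dashboard-id?var-service=my-webapp
```

The JQ expression in the exporter config above performs the same string concatenation, just evaluated at ingestion time for each entity.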
Verify:
After updating the blueprint (and potentially the exporter config if you’re making it dynamic), navigate back to the “my-webapp” entity in the Port UI. You should now see a new field or a dedicated “Links” section with a clickable link to the Grafana dashboard (or whatever URL you configured).
Integrating observability tools is crucial. For advanced Kubernetes observability, especially at the network layer, consider tools like eBPF Observability with Hubble, which can provide deep insights into your cluster’s network traffic and can be linked or even integrated into Port via custom ingesters. If you’re using a service mesh like Istio, our Istio Ambient Mesh Guide offers insights into streamlining mesh deployments, which could also be cataloged and managed through Port.
Production Considerations
Deploying a self-service developer portal in a production environment requires careful thought beyond the basic setup. Here are key considerations:
- Security and RBAC:
  - Port API Tokens: Ensure your Port Client ID and Secret are stored securely, preferably using Kubernetes Secrets injected as environment variables into the exporter. Rotate them regularly.
  - Kubernetes RBAC for Actions: The Service Account used by Port actions (like port-action-runner) should have the absolute minimum necessary permissions (least privilege). Do not grant cluster-admin roles. Carefully review the verbs and resources for each role.
  - Port User Permissions: Configure fine-grained access control within Port to ensure developers can only trigger actions on services they own or are authorized to manage.
- Scalability and Resilience:
- Port Kubernetes Exporter: For large clusters, consider deploying multiple instances of the exporter if necessary, though it’s generally designed to handle significant scale. Monitor its resource consumption.
- Port Platform: Port is a SaaS offering, so its scalability is managed by Port Labs.
- Action Runners: Ensure your Kubernetes cluster has sufficient resources to handle the Jobs triggered by Port actions, especially during peak times.
- Observability and Monitoring:
- Exporter Logs: Centralize and monitor logs from the Port Kubernetes Exporter for any errors during data ingestion.
- Action Logs: Monitor the Kubernetes Jobs created by Port actions. Implement alerting for failed jobs.
- Port Activity Logs: Use Port’s built-in activity logs to audit who triggered which actions and when.
- Blueprint and Action Governance:
- Version Control: Manage your Port blueprints and action definitions in a Git repository. Use GitOps principles to apply changes to Port, ensuring an auditable and reproducible process.
- Standardization: Establish clear guidelines for defining blueprints, properties, and actions to maintain consistency across your organization.
- Review Process: Implement a review process for new blueprints and actions before they are made available to developers.
- Data Freshness and Consistency:
- The Port Kubernetes Exporter continuously syncs data. Understand its sync interval and ensure it meets your organization’s requirements for data freshness.
- Handle potential discrepancies between your cluster state and the catalog if manual changes are made outside the portal.
- Integration with CI/CD:
- Integrate Port into your CI/CD pipelines. For example, a new service deployment could automatically create or update its corresponding entity in Port.
- Use Port webhooks to trigger CI/CD pipelines based on entity changes or action completions.
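As a sketch of the CI/CD direction, a pipeline step could build an entity-upsert request for Port's REST API. The endpoint path below is an assumption based on Port's public API and should be verified against the current API reference; no request is actually sent here:

```python
import json

# Hypothetical helper a pipeline might use to upsert a catalog entity.
# The endpoint path is an assumption -- check Port's API reference.
def build_upsert(blueprint: str, identifier: str, properties: dict):
    url = f"https://api.getport.io/v1/blueprints/{blueprint}/entities?upsert=true"
    body = json.dumps({"identifier": identifier, "properties": properties})
    return url, body

url, body = build_upsert("Microservice", "my-webapp",
                         {"environment": "staging", "status": "active"})
print(url)   # the pipeline would POST `body` here with a Bearer token
```

With upsert semantics, the same step works for both first deployments (entity creation) and subsequent ones (property updates).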
Troubleshooting
1. Port CLI Login Fails or Token Expires
Issue: You get an error like “Authentication failed” or “Invalid token” when using the Port CLI.
Solution: Your CLI session might have expired or you might have revoked the token. Run port login again to re-authenticate. If the issue persists, check your network connectivity or ensure your Port account is active.
port login
2. Kubernetes Exporter Pod Not Running
Issue: The port-kubernetes-exporter pod is stuck in a pending or error state.
Solution:
- Check the pod's events for clues: kubectl describe pod -n port-exporter -l app.kubernetes.io/name=port-kubernetes-exporter
- Check the pod's logs: kubectl logs -n port-exporter -l app.kubernetes.io/name=port-kubernetes-exporter. Look for errors related to image pull, mounting volumes, or authentication.
- Ensure your Kubernetes cluster has enough resources (CPU/Memory) for the pod.
- Verify the namespace port-exporter exists.
3. No Data Appears in Port UI
Issue: The Port Kubernetes Exporter pod is running, but no entities appear in your Port catalog.
Solution:
- Check Exporter Logs: The most common issue is an incorrect Client ID/Secret or mapping configuration. Look for "Unauthorized" errors or messages about failed exports in the exporter logs: kubectl logs -f -n port-exporter -l app.kubernetes.io/name=port-kubernetes-exporter
- Verify Client ID/Secret: Double-check that port.clientID and port.clientSecret in your Helm values are correct and active in your Port account.
- Review Mappings: Ensure the kubernetesExporter.configs in your Helm chart correctly map Kubernetes kinds and properties to your Port blueprint. Make sure the blueprint identifier is correct.
- Check Kubernetes Labels/Annotations: Confirm that your Kubernetes resources (e.g., the my-webapp deployment) have the labels and annotations that your exporter configuration expects.
- Blueprint Exists: Ensure the target blueprint (e.g., "Microservice") exists in your Port workspace.
4. Self-Service Action Fails
Issue: When you trigger an action in Port, the corresponding Kubernetes Job fails or doesn’t execute correctly.
Solution:
- Check Port Action Logs: In the Port UI, navigate to the “Activity” tab or the specific entity’s “Activity” section. Look for the action run and check its logs for any errors reported by Port.
- Check Kubernetes Job Status: Get the status of the Kubernetes Job created by Port: kubectl get job -n default -l port.io/action-id=restartDeployment (adjust the namespace and label if needed).
- Check Kubernetes Job Logs: If the Job failed, get its logs: kubectl logs -f -n default pod/<job-pod-name>. Look for errors in the script execution (e.g., kubectl command failures).
- RBAC Permissions: Verify that the port-action-runner Service Account has the necessary RBAC permissions (Role and RoleBinding) to perform the actions defined in the Job spec (e.g., patch deployments). This is a very common cause of action failures.
- Job Spec Correctness: Ensure the Kubernetes Job YAML in Port's action definition is valid and the templating ({{ .entity.identifier }}) is correct.
5. Incorrect Data Mapping in Port
Issue: Data from Kubernetes resources appears in Port, but some properties are missing or incorrect.
Solution:
- Review Exporter Configuration: Carefully re-examine the kubernetesExporter.configs section in your Helm values. Ensure the JQ paths (e.g., .metadata.name, .metadata.labels.owner) precisely match the structure of your Kubernetes resources.
- Test JQ Paths: You can test JQ paths against a live resource locally to verify they extract the correct data:
  # Example: Test JQ for the owner label
  kubectl get deployment my-webapp -o json | jq -r '.metadata.labels.owner'
- Blueprint Property Types: Ensure the data type of the property in your Port blueprint matches the data being ingested (e.g., a string for a URL, an enum for a status).
6. High Resource Usage for Port Kubernetes Exporter
Issue: The Port Kubernetes Exporter pod consumes excessive CPU or memory.
Solution:
- Limit Resources: Ensure you have resource limits and requests set for the exporter pod in your Helm values to prevent it from consuming too many cluster resources.
- Filter Resources: If you have a very large cluster, consider filtering which Kubernetes resources the exporter watches. You can use selector.queryParameters in the Helm configuration to only export resources matching specific labels or namespaces:
  # Example: Only export deployments with the 'app' label
  kubernetesExporter:
    configs:
      - kind: "Deployment"
        blueprint: "Microservice"
        selector:
          queryParameters:
            selector: "app"
            selectorEnabled: true
- Update Exporter Version: Ensure you are running the latest stable version of the Port Kubernetes Exporter, as performance improvements are often included in new releases.
FAQ Section
Q1: What is a Developer Portal and why do I need one?
A Developer Portal is a centralized platform that provides developers with a single source of truth for all tools, services, and information relevant to their work. You need one to reduce cognitive load, accelerate development cycles, improve onboarding, enforce standardization, and empower developers with self-service capabilities, reducing dependencies on operations teams. It’s especially crucial in microservices architectures where complexity can quickly overwhelm developers.
Q2: How does Port integrate with my existing CI/CD pipelines?
Port can integrate with CI/CD pipelines in several ways. You can use Port’s API or CLI within your pipelines to automatically create or update entities in your catalog when new services are deployed or changed. Conversely, Port actions can trigger webhooks or external jobs (e.g., Jenkins, GitLab CI, Argo Workflows) to run CI/CD tasks based on events or manual triggers from the portal. This allows for seamless automation of your software delivery lifecycle.
Q3: Can Port manage resources beyond Kubernetes?
Yes, absolutely. While this guide focuses on Kubernetes, Port is designed to be infrastructure-agnostic. You can define blueprints for virtually any resource: cloud accounts, databases, serverless functions, environments, teams, APIs, and more. Port has integrations for various cloud providers (AWS, GCP, Azure), Terraform, GitHub, and custom ingesters to pull data from any source via its API. This makes it a truly universal catalog.
Q4: Is Port open source?
Port itself is a commercial SaaS platform. However, many of its integration components, like the Port Kubernetes Exporter, are open-source and available on GitHub. This allows for transparency and community contributions to the integration layer.
Q5: How does Port help with governance and compliance?
Port helps enforce governance by providing a structured catalog where all services and resources adhere to predefined blueprints. This ensures consistency in metadata and properties. For compliance, Port allows you to define required properties (e.g., security contact, data classification) for each entity type. Self-service actions can be configured with approval workflows and RBAC, ensuring that only authorized users can trigger specific operations, and all actions are logged for auditability. Projects like Sigstore and Kyverno can further enhance your supply chain security, and their status can be reflected in your Port catalog.
Cleanup Commands
To remove the resources created during this tutorial from your Kubernetes cluster and Port workspace:
# 1. Delete the sample Kubernetes deployment
kubectl delete -f my-webapp-deployment.yaml
# 2. Delete the RBAC resources for the action runner
kubectl delete -f rbac.yaml
# 3. Uninstall the Port Kubernetes Exporter Helm chart
helm uninstall port-kubernetes-exporter --namespace port-exporter
kubectl delete namespace port-exporter
# 4. (Optional) Remove the Microservice blueprint and its ingested entities
#    from your Port workspace via the Port UI under "Blueprints"