
Grafana Mimir: Scale Your Metrics

Introduction

In the world of cloud-native applications, observability is paramount. As your Kubernetes clusters grow, so does the volume and cardinality of your metrics data. Traditional Prometheus setups, while powerful, can struggle with long-term storage, high availability, and global views across multiple clusters. This often leads to operational overhead, data silos, and a limited historical perspective, making it challenging to identify long-term trends or troubleshoot intermittent issues.

Enter Grafana Mimir, a horizontally scalable, highly available, multi-tenant, long-term storage backend for Prometheus metrics. Mimir is designed to address the scaling limitations of standalone Prometheus and has been demonstrated scaling to a billion active series and beyond. By centralizing your metrics storage, Mimir provides a unified view of your entire infrastructure, simplifies operations, and unlocks advanced analytics capabilities, making it an indispensable tool for any organization operating at scale. This guide will walk you through deploying and configuring Grafana Mimir on Kubernetes, transforming your metrics pipeline into a robust, enterprise-grade solution.

TL;DR: Grafana Mimir on Kubernetes

Grafana Mimir provides scalable, highly available, and multi-tenant long-term storage for Prometheus metrics on Kubernetes. It solves the limitations of standalone Prometheus by offering horizontal scalability and centralized data.

  • Install Helm: Ensure Helm is installed to manage Mimir deployments.
  • Add Mimir Repo:
    helm repo add grafana https://grafana.github.io/helm-charts
  • Update Repo:
    helm repo update
  • Install Mimir:
    helm install mimir grafana/mimir-distributed -f values.yaml
  • Configure Prometheus: Point existing Prometheus instances to Mimir using remote write.
  • Verify: Access Grafana and query Mimir as a data source.

Prerequisites

Before diving into the deployment of Grafana Mimir on your Kubernetes cluster, ensure you have the following:

  • Kubernetes Cluster: A running Kubernetes cluster (version 1.20+ recommended). You can use any cloud provider (AWS EKS, GCP GKE, Azure AKS) or a local cluster like Kind or Minikube.
  • kubectl: The Kubernetes command-line tool, configured to connect to your cluster. Refer to the official Kubernetes documentation for installation instructions.
  • Helm: The Kubernetes package manager, used to deploy Mimir. If you don’t have it, follow the Helm installation guide.
  • Prometheus: An existing Prometheus instance or a plan to deploy one, as Mimir acts as a remote storage backend for Prometheus.
  • Persistent Storage: A StorageClass configured in your Kubernetes cluster. Mimir components like ingesters and store-gateways require persistent volumes for caching and data storage. Ensure your cluster has a default StorageClass or specify one in your Mimir Helm values.
  • Basic Kubernetes Knowledge: Familiarity with Kubernetes concepts such as Pods, Deployments, Services, and StorageClasses.

Step-by-Step Guide: Deploying Grafana Mimir on Kubernetes

This guide will walk you through deploying Grafana Mimir using its official Helm chart, which simplifies the process significantly.

Step 1: Add the Grafana Helm Repository

First, you need to add the official Grafana Helm chart repository, which contains the Mimir chart, to your Helm installation. This allows you to easily discover and install Mimir.

Adding the repository makes the Mimir Helm chart available for installation. It’s good practice to update your Helm repositories afterward to ensure you have the latest chart versions and dependencies.

helm repo add grafana https://grafana.github.io/helm-charts
helm repo update

Verify:

You should see a confirmation that the repository was added and updated, similar to this:

"grafana" has been added to your repositories
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "grafana" chart repository
Update Complete. ⎈Happy Helming!⎈

Step 2: Create a Custom Values File for Mimir Configuration

While Mimir can be installed with default settings, a custom values.yaml file is crucial for production deployments. This file allows you to configure Mimir’s various components, define storage backends, set resource limits, and enable features like multi-tenancy.

Mimir is highly configurable. For initial deployment, we’ll focus on enabling object storage for long-term data (e.g., S3, GCS, Azure Blob Storage), which is fundamental to Mimir’s scalability. We’ll use a simplified example here, assuming an S3-compatible backend. Remember to replace placeholder values with your actual cloud provider credentials and bucket names. For detailed configuration options, refer to the official Mimir configuration documentation.

Create a file named mimir-values.yaml with the following content. This example configures Mimir with MinIO as a local S3-compatible object storage for simplicity in a tutorial environment. In production, you’d use AWS S3, GCS, or Azure Blob Storage.

# mimir-values.yaml
mimir:
  structuredConfig:
    common:
      # Use an S3-compatible object storage for blocks.
      # In a real production environment, you would use AWS S3, GCS, or Azure Blob Storage.
      # For this tutorial, we'll configure the bundled MinIO as an S3-compatible service.
      storage:
        backend: s3
        s3:
          endpoint: mimir-minio.mimir.svc.cluster.local:9000 # MinIO subchart service in the release namespace
          bucket_name: mimir-bucket
          access_key_id: mimiraccesskey
          secret_access_key: mimirsecretkey
          insecure: true # Use only for MinIO or local testing, not for production S3/GCS/Azure.
          region: us-east-1 # Or your preferred region

    # Mimir's components coordinate through hash rings. Memberlist is the
    # default key-value store; Consul or etcd suit larger clusters.
    distributor:
      ring:
        kvstore:
          store: memberlist
    ingester:
      ring:
        kvstore:
          store: memberlist

    # Per-tenant limits. With multi-tenancy disabled, these apply to the single
    # built-in 'anonymous' tenant. Limit names can change between Mimir
    # versions; check the configuration reference for your release.
    limits:
      max_global_series_per_user: 5000000
      max_fetched_series_per_query: 5000000
      compactor_blocks_retention_period: 30d # Example: retain data for 30 days

# Component-level chart settings (replicas, resources, persistent volumes)
# live outside structuredConfig. For a small setup you might reduce replicas;
# for production, ensure appropriate replication and resource limits.
# Ingesters require persistent storage for the write-ahead log (WAL).
# Ensure you have a StorageClass available.
ingester:
  persistentVolume:
    enabled: true
    size: 50Gi
    storageClass: standard # Replace with your StorageClass name if different

# Store-gateways also require persistent storage for the index cache.
store_gateway:
  persistentVolume:
    enabled: true
    size: 20Gi
    storageClass: standard # Replace with your StorageClass name if different

# Queriers fetch data from ingesters and store-gateways; the query-frontend
# improves query performance with request splitting and caching. Both are
# stateless and can be scaled independently.

# MinIO for object storage (for demonstration purposes)
minio:
  enabled: true
  accessKey: mimiraccesskey
  secretKey: mimirsecretkey
  defaultBucket:
    name: mimir-bucket
  resources:
    requests:
      memory: 256Mi
      cpu: 100m
    limits:
      memory: 512Mi
      cpu: 500m

# Enable Grafana for easy visualization and querying Mimir
grafana:
  enabled: true
  adminPassword: prom-operator # Change this for production!
  persistence:
    enabled: true
    size: 10Gi
    storageClassName: standard # Replace with your StorageClass name if different
  # Automatically provision Mimir as a data source in Grafana
  additionalDataSources:
    - name: Mimir
      type: prometheus
      url: http://mimir-query-frontend.mimir.svc.cluster.local:8080/prometheus # Mimir serves its Prometheus-compatible API under the /prometheus prefix
      isDefault: true
      access: proxy
      version: 1
      editable: false

# For a small cluster, you might want to adjust resource requests/limits
# and replica counts for Mimir components to fit your cluster capacity.
# Example for a smaller setup (adjust as needed):
# distributor:
#   replicas: 1
# ingester:
#   replicas: 1
# querier:
#   replicas: 1
# store_gateway:
#   replicas: 1
# query_frontend:
#   replicas: 1

Verify:

Ensure the file mimir-values.yaml is created in your current directory. A quick cat mimir-values.yaml should display its content.

Step 3: Install Grafana Mimir using Helm

Now, use Helm to deploy Mimir to your Kubernetes cluster, referencing the custom values file you just created.

The Helm chart will deploy all necessary Mimir components (distributor, ingester, querier, query-frontend, compactor, store-gateway, etc.), along with MinIO for object storage and Grafana for visualization, based on your mimir-values.yaml. Mimir’s modular architecture allows it to scale horizontally, with each component performing a specific role. For instance, ingesters handle incoming writes, while queriers process read requests. This separation of concerns is key to its scalability.

If you’re interested in how such distributed systems communicate securely, you might find our guide on Cilium WireGuard Encryption relevant for securing pod-to-pod traffic within your cluster.

helm install mimir grafana/mimir-distributed -f mimir-values.yaml -n mimir --create-namespace

Verify:

Monitor the pods in the mimir namespace until they are all running. This might take a few minutes as persistent volumes are provisioned and containers start up.

kubectl get pods -n mimir

Expected output, with all pods in Running state:

NAME                                       READY   STATUS    RESTARTS   AGE
mimir-compactor-75845c4854-abcde           1/1     Running   0          2m
mimir-distributor-6c87d69b9c-fghij         1/1     Running   0          2m
mimir-grafana-79d57948b8-klmno             1/1     Running   0          2m
mimir-ingester-0                           1/1     Running   0          2m
mimir-minio-0                              1/1     Running   0          2m
mimir-query-frontend-8475975d-pqrst        1/1     Running   0          2m
mimir-querier-7c6d59775f-uvwxy             1/1     Running   0          2m
mimir-store-gateway-0                      1/1     Running   0          2m
... (other Mimir components)

Step 4: Configure Prometheus to Remote Write to Mimir

With Mimir deployed, you now need to configure your existing Prometheus instances to send their metrics to Mimir using the remote write feature. This offloads long-term storage to Mimir while Prometheus continues to scrape and temporarily store recent data.

Prometheus’s remote write capability is the bridge between your existing monitoring setup and Mimir’s scalable storage. When Prometheus scrapes metrics, it writes a copy of that data to Mimir, ensuring data durability and availability for long-term queries. For a more in-depth understanding of Kubernetes networking and how services communicate, you might find our Kubernetes Network Policies Guide useful, especially when securing communication between Prometheus and Mimir.

You will need to modify your Prometheus configuration. If you’re using the Prometheus Operator, this typically involves updating your Prometheus custom resource or a ConfigMap.

Here’s an example of adding a remote write configuration to a Prometheus instance. Assuming your Prometheus instance is also deployed via Helm, you’d typically update its values.yaml or patch its ConfigMap.

# prometheus-remote-write-patch.yaml
# This is an example of how you might add remote write to an existing Prometheus
# configuration, typically in its values.yaml if using the Prometheus Helm chart.
# Or, you'd edit the prometheus.yaml ConfigMap directly.

prometheus:
  prometheusSpec:
    remoteWrite:
      - url: http://mimir-distributor.mimir.svc.cluster.local:8080/api/v1/push
        # Example for basic authentication (if enabled in Mimir)
        # basicAuth:
        #   username:
        #     name: mimir-credentials
        #     key: username
        #   password:
        #     name: mimir-credentials
        #     key: password
        # This assumes Mimir is in the 'mimir' namespace and accessible via its service.
        # Replace 'mimir-distributor.mimir.svc.cluster.local' with the correct URL
        # if your Mimir service name or namespace is different.

Apply this configuration to your Prometheus instance. If Prometheus is managed by the Prometheus Operator, you can patch the Prometheus custom resource directly; if it’s deployed via Helm, run helm upgrade with the updated values file:

# Example for patching a Prometheus instance managed by Prometheus Operator
# This assumes your Prometheus CR is named 'k8s-prometheus' in 'monitoring' namespace
kubectl patch prometheus k8s-prometheus -n monitoring --type='json' -p='[{"op": "add", "path": "/spec/remoteWrite", "value": [{"url": "http://mimir-distributor.mimir.svc.cluster.local:8080/api/v1/push"}]}]'

# Alternatively, if you're managing Prometheus with a custom ConfigMap:
# 1. Get the current Prometheus config:
#    kubectl get configmap prometheus-server -n monitoring -o yaml > prometheus-server-cm.yaml
# 2. Edit prometheus-server-cm.yaml to add a top-level remote_write section
#    (it sits at the same level as 'global' and 'scrape_configs'):
#    ...
#    remote_write:
#      - url: http://mimir-distributor.mimir.svc.cluster.local:8080/api/v1/push
#    ...
# 3. Apply the updated ConfigMap:
#    kubectl apply -f prometheus-server-cm.yaml
# 4. Restart Prometheus pods to pick up the new config.

Verify:

Check the logs of your Prometheus server pods. You should see messages indicating successful remote writes to Mimir. Also, check the Mimir distributor logs for incoming writes.

# Check Prometheus logs
kubectl logs -n monitoring -l app=prometheus | grep "remote_storage"

# Check Mimir distributor logs
kubectl logs -n mimir -l app.kubernetes.io/component=distributor

Prometheus logs might show:

level=info ts=2023-10-27T10:30:00.000Z caller=dedupe.go:112 component=remote level=info remote_name=mimir-distributor url=http://mimir-distributor.mimir.svc.cluster.local/api/v1/push msg="Starting remote writer"
level=info ts=2023-10-27T10:30:05.000Z caller=queue_manager.go:275 component=remote level=info remote_name=mimir-distributor url=http://mimir-distributor.mimir.svc.cluster.local/api/v1/push msg="Successfully sent batch of 100 samples to remote storage"

Mimir distributor logs might show:

level=info ts=2023-10-27T10:30:05.000Z caller=push.go:138 msg="received series" num_series=500 num_samples=1000 tenant=anonymous
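If Prometheus struggles to keep up with remote write (watch the prometheus_remote_storage_samples_pending and failed-samples metrics), the Prometheus Operator’s queueConfig exposes the standard remote-write tuning knobs. A sketch for a kube-prometheus-stack values file — the field names are the Operator’s API, while the numbers are illustrative starting points, not recommendations:

```yaml
prometheus:
  prometheusSpec:
    remoteWrite:
      - url: http://mimir-distributor.mimir.svc.cluster.local:8080/api/v1/push
        queueConfig:
          capacity: 10000          # samples buffered per shard before backpressure
          maxShards: 50            # upper bound on parallel senders
          maxSamplesPerSend: 2000  # batch size per HTTP request
          batchSendDeadline: 5s    # flush a partial batch after this long
```

Larger batches and more shards raise throughput at the cost of Prometheus memory and burstier load on Mimir’s distributors.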

Step 5: Access Grafana and Query Mimir

Since we enabled Grafana in our mimir-values.yaml and configured Mimir as a data source, you can now access Grafana and verify that Mimir is receiving and serving metrics.

Grafana provides the visualization layer for your metrics. By automatically configuring Mimir as a Prometheus data source, you can immediately start building dashboards and alerts based on the long-term data stored in Mimir. This integrated approach streamlines your observability workflow. For more advanced observability insights using eBPF, check out our guide on eBPF Observability with Hubble, which can complement your metrics with network-level visibility.

First, get the Grafana service URL:

kubectl get svc -n mimir mimir-grafana

Output will be similar to:

NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
mimir-grafana   ClusterIP   10.100.100.100   <none>        80/TCP     5m

To access Grafana from your local machine, use port-forwarding:

kubectl port-forward svc/mimir-grafana 3000:80 -n mimir

Now, open your web browser and navigate to http://localhost:3000.
Log in with username admin and the password you set in mimir-values.yaml (e.g., prom-operator).

Verify:

  1. Once logged in, go to “Connections” -> “Data Sources”. You should see “Mimir” listed as a Prometheus data source.
  2. Navigate to “Explore”. Select the “Mimir” data source from the dropdown.
  3. Enter a Prometheus query (e.g., up{job="kubernetes-nodes"} or rate(node_cpu_seconds_total[5m])) and run it.
  4. You should see graphs and data returned, confirming Mimir is successfully storing and serving metrics.

If you see data, congratulations! You have successfully deployed Grafana Mimir and integrated it with Prometheus and Grafana.

Production Considerations

Deploying Grafana Mimir in production requires careful planning and configuration to ensure scalability, high availability, and cost-efficiency.

  • Object Storage: While MinIO is great for testing, always use a robust cloud object storage solution like AWS S3, Google Cloud Storage (GCS), or Azure Blob Storage for production. Configure appropriate IAM roles and policies for secure access.
  • Resource Management: Mimir components can be resource-intensive. Set appropriate CPU and memory requests and limits for each Mimir component (ingester, querier, compactor, etc.) based on your expected metrics volume and query load. Over-provisioning leads to waste, under-provisioning to instability. Tools like Karpenter Cost Optimization can dynamically adjust node capacity based on pod resource requests, helping you optimize costs.
  • High Availability: Mimir is designed for HA. Ensure you have multiple replicas for stateful components (ingesters, store-gateways) and stateless ones (distributor, querier, query-frontend). Use anti-affinity rules to distribute pods across different nodes or availability zones.
  • Network Configuration: Mimir’s components communicate heavily. Ensure your Kubernetes network is robust and high-performing. Consider network policies using our Network Policies Security Guide to restrict communication between components to only what is necessary.
  • Multi-Tenancy: If you plan to use Mimir for multiple teams or applications, leverage its multi-tenancy features. Configure tenant IDs and enforce limits to prevent one tenant from impacting others.
  • Monitoring Mimir Itself: Deploy a dedicated Prometheus instance to scrape Mimir’s own metrics. This allows you to monitor the health, performance, and resource utilization of your Mimir cluster. Grafana provides dashboards specifically for Mimir.
  • Backup and Restore: While Mimir stores data in object storage, understand the backup and restore procedures for any local persistent volumes (WAL, cache) and your object storage itself.
  • Security: Implement mTLS for internal Mimir component communication and secure access to the Mimir API. Use Kubernetes Secrets for sensitive credentials. Consider integration with identity providers for Grafana access. For advanced supply chain security, our guide on Sigstore and Kyverno Security can provide insights into securing your deployment artifacts.
  • Horizontal Scaling: Mimir components can be scaled independently. Monitor your cluster’s performance and scale ingesters for write throughput, queriers for read throughput, and store-gateways for query performance over long-term data.
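To make the multi-tenancy bullet concrete: per-tenant limits are usually expressed as runtime overrides, which Mimir reloads without restarting components. A sketch assuming the mimir-distributed chart’s runtimeConfig value and hypothetical tenant IDs team-a and team-b:

```yaml
# mimir-values.yaml excerpt (tenant IDs are illustrative)
runtimeConfig:
  overrides:
    team-a:
      max_global_series_per_user: 2000000
      ingestion_rate: 200000   # samples per second
    team-b:
      max_global_series_per_user: 500000
      ingestion_rate: 50000
```

Any limit omitted from an override falls back to the global limits block, so you only need to spell out the exceptions.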

Troubleshooting

Here are common issues you might encounter when deploying Grafana Mimir and their solutions.

  1. Issue: Mimir pods are stuck in Pending state.

    Explanation: This usually indicates that the scheduler cannot find a suitable node for the pods. Common reasons include insufficient resources (CPU, memory) or missing persistent volumes.

    Solution:

    • Check pod events:
      kubectl describe pod <pod-name> -n mimir

      Look for messages like “Insufficient cpu”, “Insufficient memory”, or “persistentvolumeclaim not found”.

    • Ensure your cluster has enough nodes and resources.
    • Verify your StorageClass is correctly configured and has available provisioners for the PVCs requested by Mimir ingesters and store-gateways.
    • If using a specific storageClass in mimir-values.yaml, ensure it exists:
      kubectl get sc
  2. Issue: Prometheus is not remote writing to Mimir, or Mimir distributor logs show no incoming writes.

    Explanation: Prometheus cannot connect to the Mimir distributor service, or the configuration is incorrect.

    Solution:

    • Check the Prometheus remote write URL in your Prometheus configuration. Ensure the service name, namespace, and port are correct: http://mimir-distributor.mimir.svc.cluster.local:8080/api/v1/push.
    • Verify the Mimir distributor service exists and is accessible:
      kubectl get svc mimir-distributor -n mimir
    • Check network connectivity between Prometheus and Mimir pods. If you have network policies, ensure they allow traffic from Prometheus to Mimir’s distributor service.
    • Examine Prometheus logs for remote write errors:
      kubectl logs -n <prometheus-namespace> -l app=prometheus | grep "remote_storage"
  3. Issue: Grafana Mimir data source is configured, but queries return no data or errors.

    Explanation: Mimir is not receiving data, or there are issues with its internal components (ingesters, queriers, store-gateways) communicating with object storage.

    Solution:

    • Verify Prometheus is successfully remote writing (see previous troubleshooting step).
    • Check Mimir ingester, querier, and store-gateway logs for errors:
      kubectl logs -n mimir -l app.kubernetes.io/component=ingester

      (and similarly for querier, store-gateway).

    • Ensure Mimir’s object storage configuration (S3, GCS, MinIO) in mimir-values.yaml is correct, including endpoint, bucket name, and credentials.
    • If using MinIO, check its logs for errors:
      kubectl logs -n mimir -l app.kubernetes.io/name=minio
    • Verify the Mimir data source URL in Grafana points to the mimir-query-frontend service and includes the /prometheus API prefix: http://mimir-query-frontend.mimir.svc.cluster.local:8080/prometheus.
  4. Issue: Mimir ingesters are crashing or restarting frequently.

    Explanation: Ingesters are stateful components and can crash due to OOM errors (Out Of Memory), persistent volume issues, or ring state problems.

    Solution:

    • Increase memory limits for ingester pods in mimir-values.yaml. Ingesters need sufficient memory to hold active series in memory.
    • Check ingester logs for OOMKilled events or issues writing to the WAL (Write-Ahead Log) on the persistent volume.
    • Verify the persistent volume for ingesters is healthy and has sufficient space.
    • Ensure the ring key-value store (kvstore) configuration is stable. For larger clusters, consider using Consul or etcd instead of memberlist.
  5. Issue: Mimir queries are slow or timeout.

    Explanation: Query performance can be affected by various factors, including high data volume, insufficient querier resources, slow object storage, or compactor issues.

    Solution:

    • Increase replicas and resource limits for querier and queryFrontend components.
    • Check compactor logs to ensure it’s successfully compacting blocks in object storage. Uncompacted blocks can slow down queries.
    • Monitor the performance of your object storage backend.
    • Enable and configure caching for the query-frontend in mimir-values.yaml to cache frequently accessed query results.
    • Adjust max_samples_per_query and max_query_length limits in Mimir’s configuration.
    • Ensure network latency between Mimir components and object storage is low.
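The caching advice in the last item can be sketched in values form — assuming Mimir’s query-frontend results-cache settings and a Memcached service at the address shown, both of which you would adapt to your deployment:

```yaml
mimir:
  structuredConfig:
    frontend:
      cache_results: true
      results_cache:
        backend: memcached
        memcached:
          # The dns+ prefix enables DNS-based discovery of Memcached replicas.
          addresses: dns+memcached.mimir.svc.cluster.local:11211
```

With results caching on, repeated dashboard refreshes hit the cache instead of re-reading blocks from object storage.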

FAQ Section

  1. What is the primary benefit of using Grafana Mimir over standalone Prometheus?

    The primary benefit is horizontal scalability, high availability, and long-term storage for Prometheus metrics. Mimir allows you to ingest and query billions of metrics, overcome the single-node limitations of Prometheus, and provide a global view of metrics across multiple clusters, crucial for large-scale environments. It ensures your metrics data is durable and always accessible.

  2. Can Mimir replace Prometheus entirely?

    No, Mimir works in conjunction with Prometheus. Prometheus instances still scrape targets and perform initial data collection. Mimir acts as a remote write backend for Prometheus, storing the data long-term and handling complex queries. You still need Prometheus for scraping and short-term local storage.

  3. Is Grafana Mimir multi-tenant?

    Yes, Grafana Mimir is designed from the ground up with multi-tenancy in mind. It allows multiple independent tenants to write and query their metrics data within the same Mimir cluster, with strong isolation and configurable limits per tenant. This makes it ideal for large organizations or managed service providers.

  4. What kind of storage does Mimir use for metrics data?

    Mimir primarily uses object storage (like AWS S3, Google Cloud Storage, Azure Blob Storage, or MinIO) for its long-term data store. It also utilizes local persistent volumes for ingester Write-Ahead Logs (WAL) and various caches (e.g., index cache for store-gateways) to optimize performance.

  5. How does Mimir handle high cardinality metrics?

    Mimir is built to handle high cardinality metrics efficiently. Its distributed architecture, optimized storage format (based on Prometheus’s TSDB), and query path optimizations allow it to ingest, store, and query large volumes of unique time series without performance degradation. However, extremely high cardinality can still impact performance and cost, so it’s always good practice to manage your metrics wisely.
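To illustrate the multi-tenancy answer above: Mimir identifies tenants by the X-Scope-OrgID header on write and read requests. A sketch of a Prometheus Operator remote-write entry that tags all samples with a hypothetical tenant ID (this assumes multitenancy_enabled: true in Mimir’s configuration):

```yaml
prometheus:
  prometheusSpec:
    remoteWrite:
      - url: http://mimir-distributor.mimir.svc.cluster.local:8080/api/v1/push
        headers:
          X-Scope-OrgID: team-a   # illustrative tenant ID
```

Grafana data sources can send the same header via custom HTTP headers, so each team queries only its own tenant’s data.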

Cleanup Commands

If you need to remove Grafana Mimir and its associated components from your cluster, use the following Helm command:

helm uninstall mimir -n mimir
kubectl delete namespace mimir

Note: This will delete all Mimir components, including MinIO and Grafana, and their associated Persistent Volume Claims (PVCs). Depending on your StorageClass reclaim policy, the underlying Persistent Volumes (PVs) and the data in them might persist. You may need to manually delete PVs if they are not automatically reclaimed, and the object storage bucket (e.g., S3 bucket) will also need to be manually emptied and deleted if you used a cloud provider’s storage.

Next Steps / Further Reading

Congratulations on deploying Grafana Mimir! Here are some next steps to deepen your understanding and optimize your setup:

  • Explore Mimir Dashboards: Import official Grafana dashboards for Mimir to monitor its internal health and performance. You can find these on the Grafana Dashboards page.
  • Configure Multi-Tenancy: If you have multiple teams, explore Mimir’s multi-tenancy features to isolate data and enforce limits. Refer to the Mimir documentation on multi-tenancy.
  • Implement Alerting: Integrate Mimir with Grafana Alerting or a separate Alertmanager instance to create robust alerting rules based on your long-term metrics.
  • Advanced Querying: Familiarize yourself with advanced PromQL queries to leverage Mimir’s capabilities for complex analytics and troubleshooting.
  • Tune Performance: Dive into Mimir’s extensive configuration options to fine-tune performance for your specific workload. Pay attention to ingester chunk sizes, query limits, and caching strategies.
  • Explore Gateway API for External Access: If you need to expose Mimir’s query frontend securely to external clients, consider using the Kubernetes Gateway API for more advanced traffic management than traditional Ingress.
  • Service Mesh Integration: For complex microservices environments, integrating Mimir’s telemetry with a service mesh like Istio can provide richer observability. Check out our Istio Ambient Mesh Production Guide for insights into modern service mesh deployments.

Conclusion

Grafana Mimir fundamentally transforms how you handle Prometheus metrics at scale. By providing a horizontally scalable, highly available, and multi-tenant long-term storage solution, Mimir addresses the inherent limitations of standalone Prometheus, empowering organizations to gain deeper insights into their systems over extended periods. This guide has equipped you with the knowledge to deploy Mimir on Kubernetes, integrate it with your existing Prometheus and Grafana setup, and understand the critical considerations for a production-ready environment. As your Kubernetes footprint expands, Mimir will prove to be an invaluable asset in maintaining robust observability and ensuring the reliability of your cloud-native applications.
