This document describes how to configure your Google Kubernetes Engine deployment so that you can use Google Cloud Managed Service for Prometheus to collect metrics from GKE Inference Gateway. This document shows you how to do the following:
- Set up GKE Inference Gateway to report metrics.
- Configure a ClusterPodMonitoring resource for Managed Service for Prometheus to collect the exported metrics.
- Access a dashboard in Cloud Monitoring to view the metrics.
These instructions apply only if you are using managed collection with Managed Service for Prometheus. If you are using self-deployed collection, then see the GKE Inference Gateway documentation for installation information.
These instructions are provided as an example and are expected to work in most Kubernetes environments. If you are having trouble installing an application or exporter due to restrictive security or organizational policies, then we recommend you consult open-source documentation for support.
For information about GKE Inference Gateway, see GKE Inference Gateway.
Prerequisites
To collect metrics from the GKE Inference Gateway exporter by using Managed Service for Prometheus and managed collection, your deployment must meet the following requirements:
- Your cluster must be running Google Kubernetes Engine version 1.28.15-gke.2475000 or later; a way to check the cluster version is shown after this list.
- You must be running Managed Service for Prometheus with managed collection enabled. For more information, see Get started with managed collection.
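To check which GKE version a cluster is running, you can query it with the gcloud CLI. This is a minimal sketch in which CLUSTER_NAME and LOCATION are placeholders for your cluster's name and its region or zone:

gcloud container clusters describe CLUSTER_NAME \
    --location=LOCATION \
    --format="value(currentMasterVersion)"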
To verify that the GKE Inference Gateway exporter is emitting metrics on the expected endpoints, do the following:
- Add a Secret, ServiceAccount, ClusterRole, and ClusterRoleBinding. The GKE Inference Gateway exporter observability endpoints are protected by an authentication token. To obtain credentials, the client requires a Secret that maps to a ServiceAccount bound to a ClusterRole that grants the following rule:

  nonResourceURLs: ["/metrics"], verbs: ["get"]

  For more information, see Create a secret for a service account.
- Set up port forwarding by using the following command:

  kubectl -n NAMESPACE_NAME port-forward POD_NAME 9090

- In another terminal window, do the following:
  - Fetch the token by running the following command:

    TOKEN=$(kubectl -n default get secret inference-gateway-sa-metrics-reader-secret -o jsonpath='{.data.token}' | base64 --decode)

  - Access the localhost:9090/metrics endpoint by using the curl utility:

    curl -H "Authorization: Bearer $TOKEN" localhost:9090/metrics
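If the endpoint and token are configured correctly, curl returns metrics in the Prometheus text format. As a quick check, you can filter the output for the gateway's metrics; this sketch assumes the inference_ metric-name prefix used by the verification query later in this document:

curl -s -H "Authorization: Bearer $TOKEN" localhost:9090/metrics | grep '^inference_'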
Create a secret for a service account
Because the GKE Inference Gateway exporter endpoint is protected, the Managed Service for Prometheus Operator requires a secret for authorized metric collection in the gmp-system namespace. If your cluster is using Autopilot mode, then replace gmp-system with gke-gmp-system.

You can use the following Secret, ServiceAccount, ClusterRole, and ClusterRoleBinding configuration:
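The following manifest is a minimal sketch rather than the integration's authoritative configuration. The Secret name inference-gateway-sa-metrics-reader-secret, the default namespace, and the ClusterRole rule match the values used in the verification steps earlier in this document; the ServiceAccount, ClusterRole, and ClusterRoleBinding names are illustrative assumptions that you can rename to match your deployment:

apiVersion: v1
kind: ServiceAccount
metadata:
  # Illustrative name; any ServiceAccount name works if the binding and
  # Secret annotation below reference it consistently.
  name: inference-gateway-sa-metrics-reader
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: inference-gateway-sa-metrics-reader-role
rules:
# The rule described in the verification steps earlier in this document.
- nonResourceURLs:
  - /metrics
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: inference-gateway-sa-metrics-reader-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: inference-gateway-sa-metrics-reader-role
subjects:
- kind: ServiceAccount
  name: inference-gateway-sa-metrics-reader
  namespace: default
---
apiVersion: v1
kind: Secret
metadata:
  # This name matches the Secret referenced by the token-fetch command above.
  name: inference-gateway-sa-metrics-reader-secret
  namespace: default
  annotations:
    kubernetes.io/service-account.name: inference-gateway-sa-metrics-reader
# A service-account-token Secret is populated with a token at .data.token.
type: kubernetes.io/service-account-token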
For more information, see the exporter's Metrics & Observability guide.
To apply configuration changes from a local file, run the following command:
kubectl apply -n NAMESPACE_NAME -f FILE_NAME
You can also use Terraform to manage your configurations.
Define a ClusterPodMonitoring resource
For target discovery, the Managed Service for Prometheus Operator requires a ClusterPodMonitoring resource that corresponds to the GKE Inference Gateway exporter.
You can use the following ClusterPodMonitoring configuration:
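The following is a minimal sketch, not the integration's authoritative manifest. The scrape port (9090, from the port-forwarding step), the secret reference, and the app: inference-gateway-ext-proc label match values used elsewhere in this document; the resource name and the scrape interval are illustrative assumptions, and the authorization block follows Managed Service for Prometheus support for secret-based scrape credentials:

apiVersion: monitoring.googleapis.com/v1
kind: ClusterPodMonitoring
metadata:
  # Illustrative resource name.
  name: inference-gateway
spec:
  selector:
    matchLabels:
      # Replace with labels from your own deployment if they differ.
      app: inference-gateway-ext-proc
  endpoints:
  - port: 9090
    scheme: http
    path: /metrics
    # Illustrative scrape interval.
    interval: 30s
    authorization:
      type: Bearer
      credentials:
        secret:
          # The secret created in the previous section; ClusterPodMonitoring
          # can reference a secret in another namespace.
          name: inference-gateway-sa-metrics-reader-secret
          key: token
          namespace: default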
GKE Inference Gateway uses the ClusterPodMonitoring resource instead of the PodMonitoring resource because it needs to access the secret from another namespace.

In the matchLabels selector of the ClusterPodMonitoring configuration, you can replace the app value of inference-gateway-ext-proc with labels from your GKE Inference Gateway deployment.
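To see which labels your GKE Inference Gateway pods carry, you can list the pods with the --show-labels flag, where NAMESPACE_NAME is the namespace the gateway runs in:

kubectl get pods -n NAMESPACE_NAME --show-labels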
To apply configuration changes from a local file, run the following command:
kubectl apply -n NAMESPACE_NAME -f FILE_NAME
You can also use Terraform to manage your configurations.
Verify the configuration
You can use Metrics Explorer to verify that you correctly configured the GKE Inference Gateway exporter. It might take one or two minutes for Cloud Monitoring to ingest your metrics.
To verify the metrics are ingested, do the following:
- In the Google Cloud console, go to the Metrics explorer page:
  If you use the search bar to find this page, then select the result whose subheading is Monitoring.
- In the toolbar of the query-builder pane, select the button whose name is either MQL or PromQL.
- Verify that PromQL is selected in the Language toggle. The language toggle is in the same toolbar that lets you format your query.
- Enter and run the following query:
inference_model_request_total{cluster="CLUSTER_NAME", namespace="NAMESPACE_NAME"}
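The preceding query returns the cumulative request counter. To view a request rate instead, you can wrap the counter in the standard PromQL rate function, for example:

rate(inference_model_request_total{cluster="CLUSTER_NAME", namespace="NAMESPACE_NAME"}[5m])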
View dashboards
The Cloud Monitoring integration includes the GKE Inference Gateway Prometheus Overview dashboard. Dashboards are automatically installed when you configure the integration. You can also view static previews of dashboards without installing the integration.
To view an installed dashboard, do the following:
- In the Google Cloud console, go to the Dashboards page:
  If you use the search bar to find this page, then select the result whose subheading is Monitoring.
- Select the Dashboard List tab.
- Choose the Integrations category.
- Click the name of the dashboard, for example, GKE Inference Gateway Prometheus Overview.
To view a static preview of the dashboard, do the following:
- In the Google Cloud console, go to the Integrations page:
  If you use the search bar to find this page, then select the result whose subheading is Monitoring.
- Click the Kubernetes Engine deployment-platform filter.
- Locate the GKE Inference Gateway integration and click View Details.
- Select the Dashboards tab.
Troubleshooting
For information about troubleshooting metric ingestion problems, see Problems with collection from exporters in Troubleshooting ingestion-side problems.