- Work Item
- Resolution: Done
- Minor
- Small
Why
Today, with the old method of working with Prometheus (via a label), dev teams are not able to customize their monitoring to their needs.
With a ServiceMonitor, they can adapt the monitoring configuration to their needs, so we need to make sure that everyone implements a ServiceMonitor in their charts.
We also need to remove the uid & version fields from Grafana dashboards to avoid monitoring disruptions.
How
Service Monitor:
The objective is to implement a ServiceMonitor object in all of your charts.
You can use this commit from the starter chart as an example: https://github.com/Talend/helm-charts/commit/21a7a154e97fc96b760b93563e3d09d4d4e1c418
Please don't forget to remove the "prometheus" label from the service k8s object to avoid any issue.
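For reference, a minimal ServiceMonitor template might look like the sketch below. This is only a sketch: the chart name, helper templates, port name, and metrics path are hypothetical placeholders, so adapt them to your chart (the linked starter-chart commit is the authoritative example).

```yaml
# templates/servicemonitor.yaml -- minimal sketch; names and labels are hypothetical
{{- if .Values.serviceMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: {{ include "mychart.fullname" . }}
  labels:
    {{- include "mychart.labels" . | nindent 4 }}
spec:
  selector:
    matchLabels:
      {{- include "mychart.selectorLabels" . | nindent 6 }}
  endpoints:
    - port: metrics        # must match the port name declared in your Service
      path: /metrics       # adjust to your application's metrics path
      interval: 30s
{{- end }}
```

The selector must match the labels on your Service object; since the "prometheus" label is being removed from the Service, make sure the matchLabels here rely on the chart's standard selector labels instead.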
Warning: if you need to test the chart with the ServiceMonitor implemented on your computer without facing an error, there are 2 solutions:
- Install Prometheus on your local cluster: https://github.com/prometheus-operator/prometheus-operator#quickstart
- Or disable the serviceMonitor in your values-ci.yaml file
The ServiceMonitor deployment works fine on the CI cluster when the chart is built.
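For the second option, assuming the chart exposes a serviceMonitor.enabled toggle as in the starter-chart example, the values-ci.yaml override is a one-liner:

```yaml
# values-ci.yaml -- skip rendering the ServiceMonitor when the cluster
# does not have the Prometheus Operator CRDs installed
serviceMonitor:
  enabled: false
```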
Grafana dashboard:
The idea is to update the Grafana dashboard definitions (in your helm charts) to remove the "uid" & "version" fields at the end of the file.
For each of your Grafana dashboards (stored as JSON files), remove the concerned fields (follow this doc: https://github.com/Talend/infra-doc-snippets/blob/master/Monitoring/Dashboarding/export_grafana_dashboards_into_github.md#remove-the-uid).
FYI, the helm chart build job will check this (cf https://jira.talendforge.org/browse/DEVOPS-12397).
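One way to strip the fields in bulk is a small script like the sketch below. This is not the official tooling from the linked doc, and the dashboards/ directory is a hypothetical location for your dashboard JSON files; adjust the path to your chart layout.

```python
import json
from pathlib import Path

def strip_dashboard_fields(text: str) -> str:
    """Remove the top-level "uid" and "version" fields from a dashboard JSON string."""
    dashboard = json.loads(text)
    for field in ("uid", "version"):
        dashboard.pop(field, None)  # ignore dashboards that already lack the field
    return json.dumps(dashboard, indent=2) + "\n"

if __name__ == "__main__":
    # Hypothetical layout: dashboards stored under dashboards/ in the chart
    for path in Path("dashboards").glob("*.json"):
        path.write_text(strip_dashboard_fields(path.read_text()))
        print(f"cleaned {path}")
```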
What
Acceptance criteria:
- GIVEN a helm chart with dashboard(s)
  WHEN it's built
  THEN there is no trace of the uid & version fields inside the chart's Grafana dashboard(s)
- GIVEN a helm chart
  WHEN it's deployed
  THEN a ServiceMonitor is created in k8s and I'm able to see metrics in Grafana
- GIVEN a helm chart
  WHEN it's deployed
  THEN there is no trace of the "prometheus" label inside the service k8s object