* an additional REST endpoint to retrieve openHAB core metrics from. This can be used as a scrape target for pull-based monitoring systems like [Prometheus](https://prometheus.io/).
* optional, configurable services that export openHAB core metrics to push-based monitoring systems like [InfluxDB](https://www.influxdata.com/).
Refer to the [Prometheus](https://prometheus.io/) documentation on how to set up a Prometheus instance and add a scrape configuration. A typical scrape config could look like this (excerpt from `/etc/prometheus/prometheus.yml`; the metrics path and target below are examples and depend on your installation):
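```yaml
scrape_configs:
  - job_name: 'openhab'
    scrape_interval: 1m
    # The metrics path and the target host/port are examples - adjust them
    # to the REST endpoint and HTTP port of your openHAB installation.
    metrics_path: /rest/metrics/prometheus
    static_configs:
      - targets: ['openhab-host:8080']
```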
There are no Prometheus-specific configuration parameters.
### InfluxDB
The InfluxDB exporter service starts as soon as the _influxMetricsEnabled_ configuration parameter is set to `true`.
#### Available configuration parameters
|Config param|Description|Default value|
|--|--|--|
|influxURL|The URL of the InfluxDB instance.|http://localhost:8086|
|influxDB|The name of the database to use.|openhab|
|influxUsername|The InfluxDB user name.|n/a|
|influxPassword|The InfluxDB password.|n/a|
|influxUpdateIntervalInSeconds|How often metrics are exported to InfluxDB, in seconds.|300|
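As a sketch, a textual configuration using the parameters above could look like the following. The file name `metrics.cfg` and the placeholder credentials are assumptions; the same parameters can also be set through the administration UI.

```
# Assumed file: $OPENHAB_CONF/services/metrics.cfg (file name is an assumption)
influxMetricsEnabled=true
influxURL=http://localhost:8086
influxDB=openhab
# Placeholder credentials - replace with your own
influxUsername=openhab
influxPassword=changeme
influxUpdateIntervalInSeconds=300
```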
## Additional metric formats
The metrics service was implemented using [Micrometer](https://micrometer.io), which supports a number of [monitoring systems](https://micrometer.io/docs).
It should be possible to add support for any of these, especially those that use a pull ("scraping") mechanism, as Prometheus does.
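As an illustration of the general Micrometer pattern (not of openHAB internals), a new backend is typically wired up by registering an additional `MeterRegistry` implementation; the class and metric names below are only examples:

```java
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.composite.CompositeMeterRegistry;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

public class MetricsExportSketch {
    public static void main(String[] args) {
        // A composite registry forwards every meter to all registries added to it.
        CompositeMeterRegistry composite = new CompositeMeterRegistry();

        // Stand-in for a backend-specific registry, e.g. a PrometheusMeterRegistry
        // or an InfluxMeterRegistry from the corresponding Micrometer module.
        composite.add(new SimpleMeterRegistry());

        // Meters registered against the composite are published to every added backend.
        Counter counter = Counter.builder("openhab.example.counter") // example metric name
                .description("Illustrative counter, not an actual openHAB metric")
                .register(composite);
        counter.increment();
    }
}
```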
## Grafana
You can visualize the exported metrics in Grafana. Micrometer provides a public [Grafana dashboard](https://grafana.com/grafana/dashboards/4701).