IBM Z HMC Prometheus Exporter

Introduction

What this package provides

The IBM Z HMC Prometheus Exporter is a Prometheus exporter written in Python that retrieves metrics from the IBM Z Hardware Management Console (HMC) and exports them to the Prometheus monitoring system.

The exporter attempts to stay up as much as possible. For example, it automatically renews its logon session with the HMC when the session expires, and it survives HMC reboots, automatically resuming metrics collection once the HMC comes back up.

Supported environments

  • Operating systems: Linux, macOS, Windows

  • Python versions: 3.5 and higher

  • HMC versions: 2.11.1 and higher

Quickstart

  • Install the exporter and all of its Python dependencies as follows:

    $ pip install zhmc-prometheus-exporter
    
  • Provide an HMC credentials file for use by the exporter.

    The HMC credentials file tells the exporter which HMC to talk to for obtaining metrics, and which userid and password to use for logging on to the HMC.

    Download the Sample HMC credentials file as hmccreds.yaml and edit that copy accordingly.

    For details, see HMC credentials file.

  • Provide a metric definition file for use by the exporter.

    The metric definition file maps the metrics returned by the HMC to metrics exported to Prometheus.

    Furthermore, the metric definition file allows optimizing the access time to the HMC by disabling the fetching of metrics that are not needed.

    Download the Sample metric definition file as metrics.yaml. It can be used as is: all metrics are enabled and mapped properly. You only need to edit the file if you want to adjust the metric names, labels, or metric descriptions, or if you want to optimize access time by disabling metrics you do not need.

    For details, see Metric definition file.

  • Run the exporter as follows:

    $ zhmc_prometheus_exporter -c hmccreds.yaml -m metrics.yaml
    Exporter is up and running on port 9291
    

    Depending on the number of CPCs managed by your HMC and on how many metrics are enabled, it can take some time until the exporter reports that it is up and running. You can see what it does in the meantime by using the -v option. Subsequent requests to the exporter will be sub-second.

  • Point your web browser at http://localhost:9291 to see the exported Prometheus metrics. Refreshing the browser updates the metrics.
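For orientation, a minimal HMC credentials file along the lines of the sample file might look as follows (the HMC address, userid, and password shown here are placeholders to be replaced with your own values):

```yaml
metrics:
  hmc: 9.10.11.12       # IP address or hostname of the HMC
  userid: metricsuser   # HMC userid the exporter logs on with
  password: mypassword  # password of that HMC userid
```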

Reporting issues

If you encounter a problem, please report it as an issue on GitHub.

License

This package is licensed under the Apache 2.0 License.

Usage

This section describes how to use the exporter beyond the quick introduction in Quickstart.

Running on a system

If you want to run the exporter on some system (e.g. on your workstation for trying it out), it is recommended to use a virtual Python environment.

With the virtual Python environment active, follow the steps in Quickstart to install, establish the required files, and to run the exporter.

Running in a Docker container

If you want to run the exporter in a Docker container, you can build and run the container as follows, using the Dockerfile provided in the Git repository.

  • Clone the Git repository of the exporter and switch to the clone’s root directory:

    $ git clone https://github.com/zhmcclient/zhmc-prometheus-exporter
    $ cd zhmc-prometheus-exporter
    
  • Provide an HMC credentials file named hmccreds.yaml in the clone’s root directory, as described in Quickstart. You can copy it from the examples directory.

  • Provide a metric definition file named metrics.yaml in the clone’s root directory, as described in Quickstart. You can copy it from the examples directory.

  • Build the container as follows:

    $ docker build . -t zhmcexporter
    
  • Run the container as follows:

    $ docker run -p 9291:9291 zhmcexporter
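
Once the exporter is reachable on port 9291 (whether running natively or in the container), Prometheus must be configured to scrape it. A minimal scrape configuration might look like the following sketch (the job name, interval, and target are examples to be adapted to your environment):

```yaml
scrape_configs:
  - job_name: zhmc
    scrape_interval: 30s            # metric retrieval from the HMC can take seconds
    static_configs:
      - targets: ['localhost:9291'] # host and port where the exporter runs
```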
    

zhmc_prometheus_exporter command

The zhmc_prometheus_exporter command supports the following arguments:

usage: zhmc_prometheus_exporter [-h] [-c CREDS_FILE] [-m METRICS_FILE] [-p PORT] [--log DEST]
                                [--log-comp COMP] [--verbose] [--help-creds] [--help-metrics]

IBM Z HMC Exporter - a Prometheus exporter for metrics from the IBM Z HMC

optional arguments:

  -h, --help       show this help message and exit

  -c CREDS_FILE    path name of HMC credentials file. Use --help-creds for details. Default:
                   /etc/zhmc-prometheus-exporter/hmccreds.yaml

  -m METRICS_FILE  path name of metric definition file. Use --help-metrics for details. Default:
                   /etc/zhmc-prometheus-exporter/metrics.yaml

  -p PORT          port for exporting. Default: 9291

  --log DEST       enable logging and set the log destination to one of: stderr, FILE. Default: No
                   logging

  --log-comp COMP  set the components to log to one of: hmc, jms, exporter. May be specified
                   multiple times. Default: no components

  --verbose, -v    increase the verbosity level (max: 2)

  --help-creds     show help for HMC credentials file and exit

  --help-metrics   show help for metric definition file and exit

HMC userid requirements

This section describes the requirements on the HMC userid that is used by the zhmc_prometheus_exporter command.

To return all metrics supported by the command, the HMC userid must have the following permissions:

  • Object access permission to the objects for which metrics should be returned.

    If the userid does not have object access permission to a particular object, the exporter will behave as if the object did not exist, i.e. it will successfully return metrics for objects with access permission, and ignore any others.

    The exporter can return metrics for the following types of objects. To return metrics for all existing objects, the userid must have object access permission to all of the following objects:

    • CPCs

    • On CPCs in DPM mode: Adapters, Partitions, and NICs

    • On CPCs in classic mode: LPARs

  • Task permission for the “Manage Secure Execution Keys” task.

    This permission is needed for the ‘Get CPC Properties’ operation performed by the exporter, even though the exporter does not utilize the CPC properties returned that way (room for future optimization).

HMC certificate

By default, the HMC is configured with a self-signed certificate. That is the X.509 certificate presented by the HMC as the server certificate during SSL/TLS handshake at its Web Services API.

Starting with version 0.7, the ‘zhmc_prometheus_exporter’ command will reject self-signed certificates by default.

The HMC should be configured to use a CA-verifiable certificate. This can be done in the HMC task “Certificate Management”. See also the HMC Security book and Chapter 3 “Invoking API operations” in the HMC API book.

Starting with version 0.7, the ‘zhmc_prometheus_exporter’ command provides control knobs for the verification of the HMC certificate via the verify_cert attribute in the HMC credentials file, as follows:

  • Not specified or specified as true (default): Verify the HMC certificate using the CA certificates from the first of these locations: the certificate file or directory specified in the REQUESTS_CA_BUNDLE environment variable, the certificate file or directory specified in the CURL_CA_BUNDLE environment variable, or the CA certificates provided by the Python certifi package.

  • Specified with a string value: An absolute path or a path relative to the directory of the HMC credentials file. Verify the HMC certificate using the CA certificates in the specified certificate file or directory.

  • Specified as false: Do not verify the HMC certificate. Not verifying the HMC certificate means that hostname mismatches, expired certificates, revoked certificates, or otherwise invalid certificates will not be detected. Since this mode makes the connection vulnerable to man-in-the-middle attacks, it is insecure and should not be used in production environments.
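In the HMC credentials file, the three modes described above could look as follows (only one verify_cert value would be active at a time; the certificate file name is an example):

```yaml
metrics:
  hmc: 9.10.11.12
  userid: metricsuser
  password: mypassword
  # Default behavior - verify using CA certificates from the standard locations:
  verify_cert: true
  # Alternative - verify using the CA certificates in this file (path relative
  # to the directory of this credentials file):
  # verify_cert: my_ca_certs.pem
  # Alternative - do not verify the HMC certificate (insecure):
  # verify_cert: false
```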

If a certificate file is specified (using any of the ways listed above), that file must be in PEM format and must contain all CA certificates that are supposed to be used. The individual certificates are simply concatenated in the file; usually they are ordered from leaf to root, but that is not a hard requirement.

If a certificate directory is specified (using any of the ways listed above), it must contain PEM files with all CA certificates that are supposed to be used, and copies of the PEM files or symbolic links to them in the hashed format created by the OpenSSL command c_rehash.

An X.509 certificate in PEM format is base64-encoded, begins with the line -----BEGIN CERTIFICATE-----, and ends with the line -----END CERTIFICATE-----. More information about the PEM format is available, for example, on this www.ssl.com page or in this serverfault.com answer.

Note that setting the REQUESTS_CA_BUNDLE or CURL_CA_BUNDLE environment variables influences other programs that use these variables, too.

For more information, see the Security section in the documentation of the ‘zhmcclient’ package.

Exported metric concepts

The exporter provides its metrics in the Prometheus text-based format.

All metrics are of the Prometheus metric types gauge or counter and follow the Prometheus metric naming conventions. The names of the metrics are defined in the Metric definition file. Users can change the metric names, but unless there is a strong reason for doing so, this is not recommended: it is recommended to use the Sample metric definition file unchanged. The metric mapping in the Sample metric definition file is referred to as the standard metric definition in this documentation.

In the standard metric definition, the metric names are structured as follows:

zhmc_{resource-type}_{metric}_{unit}

Where:

  • {resource-type} is a short lower case term for the type of resource the metric applies to, for example cpc or partition.

  • {metric} is a unique name of the metric within the resource type, for example processor.

  • {unit} is the (simple or complex) unit of measurement of the metric value. For example, a usage percentage will usually have a unit of usage_ratio, while a temperature would have a unit of celsius.

Each metric value applies to a particular instance of a resource. In a particular set of exported metrics, there are usually metrics for multiple resource instances. For example, the HMC can manage multiple CPCs, a CPC can have multiple partitions, and so on. In the exported metrics, the resource instance is identified using one or more Prometheus labels. Where possible, the labels identify the resource instances in a hierarchical way from the CPC on down to the resource to which the metric value applies. For example, a metric for a partition will have labels cpc and partition whose values are the names of CPC and partition, respectively.

Example for the representation of metric values that are the IFL processor usage percentages of two partitions in a single CPC:

# HELP zhmc_partition_ifl_processor_usage_ratio Usage ratio across all IFL processors of the partition
# TYPE zhmc_partition_ifl_processor_usage_ratio gauge
zhmc_partition_ifl_processor_usage_ratio{cpc="CPCA",partition="PART1"} 0.42
zhmc_partition_ifl_processor_usage_ratio{cpc="CPCA",partition="PART2"} 0.07

Available metrics

The exporter supports two types of metrics. They are retrieved from the HMC in different ways, but are exported to Prometheus in the same way:

  • HMC metric service based - These metrics are retrieved from the HMC using the “Get Metric Context” operation each time Prometheus retrieves metrics from the exporter.

  • HMC resource property based - These metrics are actually the values of properties of HMC resources, such as the number of processors assigned to a partition. The exporter maintains representations of the corresponding resources in memory. These representations are automatically and asynchronously updated via HMC object notifications. When Prometheus retrieves these metrics from the exporter, the exporter already has up-to-date resource representations and can return them immediately, without having to fetch them from the HMC first.

The exporter code is agnostic to the actual set of metrics supported by the HMC. A new metric exposed by the HMC metric service or a new property added to one of the auto-updated resources can immediately be supported by just adding it to the Metric definition file.
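As a sketch of what such an addition involves, the following shows how a metric group and one of its metrics could be mapped, in the style of the sample metric definition file (the authoritative format is described by zhmc_prometheus_exporter --help-metrics; the entry below follows the standard metric definition for the partition-usage metric group):

```yaml
metric_groups:
  partition-usage:
    prefix: partition   # becomes the {resource-type} part of the metric name
    fetch: true         # set to false to skip fetching this group from the HMC

metrics:
  partition-usage:
    processor-usage:
      percent: true     # HMC reports a percentage; exported as a 0..1 ratio
      exporter_name: processor_usage_ratio   # exported as zhmc_partition_processor_usage_ratio
      exporter_desc: Usage ratio across all processors of the partition
```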

The Sample metric definition file in the Git repository states in its header up to which HMC version or Z machine generation the metrics are defined.

The following table shows the mapping between exporter metric groups and exported Prometheus metrics in the standard metric definition. Note that ensemble and zBX related metrics are not covered in the standard metric definition (support for them has been removed in z15). For more details on the HMC metrics, see section “Metric Groups” in the HMC API book. For more details on the resource properties of CPC and Partition (DPM mode) and Logical Partition (classic mode), see the corresponding data models in the HMC API book.

Exporter Metric Group                | Type | Mode | Prometheus Metrics          | Prometheus Labels
------------------------------------ | ---- | ---- | --------------------------- | ----------------------
cpc-usage-overview                   | M    | C    | zhmc_cpc_*                  | cpc
logical-partition-usage              | M    | C    | zhmc_partition_*            | cpc, partition
channel-usage                        | M    | C    | zhmc_channel_*              | cpc, channel_css_chpid
crypto-usage                         | M    | C    | zhmc_crypto_adapter_*       | cpc, adapter_pchid
flash-memory-usage                   | M    | C    | zhmc_flash_memory_adapter_* | cpc, adapter_pchid
roce-usage                           | M    | C    | zhmc_roce_adapter_*         | cpc, adapter_pchid
dpm-system-usage-overview            | M    | D    | zhmc_cpc_*                  | cpc
partition-usage                      | M    | D    | zhmc_partition_*            | cpc, partition
adapter-usage                        | M    | D    | zhmc_adapter_*              | cpc, adapter
network-physical-adapter-port        | M    | D    | zhmc_port_*                 | cpc, adapter, port
partition-attached-network-interface | M    | D    | zhmc_nic_*                  | cpc, partition, nic
zcpc-environmentals-and-power        | M    | C+D  | zhmc_cpc_*                  | cpc
environmental-power-status           | M    | C+D  | zhmc_cpc_*                  | cpc
zcpc-processor-usage                 | M    | C+D  | zhmc_processor_*            | cpc, processor, type
cpc-resource                         | R    | C+D  | zhmc_cpc_*                  | cpc
partition-resource                   | R    | D    | zhmc_partition_*            | cpc, partition
logical-partition-resource           | R    | C    | zhmc_partition_*            | cpc, partition

Legend:

  • Type: The type of the metric group: M=metric service, R=resource property

  • Mode: The operational mode of the CPC: C=Classic, D=DPM

As you can see, the zhmc_cpc_* and zhmc_partition_* metrics are used for both DPM mode and classic mode. The names of the metrics are equal if and only if they have the same meaning in both modes.

The following table shows the Prometheus metrics in the standard metric definition. This includes both metric service and resource property based metrics:

Prometheus Metric

Mode

Type

Description

zhmc_cpc_cp_processor_count

C+D

G

Number of active CP processors

zhmc_cpc_ifl_processor_count

C+D

G

Number of active IFL processors

zhmc_cpc_icf_processor_count

C+D

G

Number of active ICF processors

zhmc_cpc_iip_processor_count

C+D

G

Number of active zIIP processors

zhmc_cpc_aap_processor_count

C+D

G

Number of active zAAP processors

zhmc_cpc_cbp_processor_count

C+D

G

Number of active CBP processors

zhmc_cpc_sap_processor_count

C+D

G

Number of active SAP processors

zhmc_cpc_defective_processor_count

C+D

G

Number of defective processors of all processor types

zhmc_cpc_spare_processor_count

C+D

G

Number of spare processors of all processor types

zhmc_cpc_total_memory_mib

C+D

G

Total amount of installed memory, in MiB

zhmc_cpc_hsa_memory_mib

C+D

G

Memory reserved for the base hardware system area (HSA), in MiB

zhmc_cpc_partition_memory_mib

C+D

G

Memory for use by partitions, in MiB

zhmc_cpc_partition_central_memory_mib

C+D

G

Memory allocated as central storage across the active partitions, in MiB

zhmc_cpc_partition_expanded_memory_mib

C+D

G

Memory allocated as expanded storage across the active partitions, in MiB

zhmc_cpc_available_memory_mib

C+D

G

Memory not allocated to active partitions, in MiB

zhmc_cpc_vfm_increment_gib

C+D

G

Increment size of VFM, in GiB

zhmc_cpc_total_vfm_gib

C+D

G

Total amount of installed VFM, in GiB

zhmc_cpc_processor_usage_ratio

C+D

G

Usage ratio across all processors of the CPC

zhmc_cpc_shared_processor_usage_ratio

C+D

G

Usage ratio across all shared processors of the CPC

zhmc_cpc_dedicated_processor_usage_ratio

C

G

Usage ratio across all dedicated processors of the CPC

zhmc_cpc_cp_processor_usage_ratio

C+D

G

Usage ratio across all CP processors of the CPC

zhmc_cpc_cp_shared_processor_usage_ratio

C+D

G

Usage ratio across all shared CP processors of the CPC

zhmc_cpc_cp_dedicated_processor_usage_ratio

C

G

Usage ratio across all dedicated CP processors of the CPC

zhmc_cpc_ifl_processor_usage_ratio

C+D

G

Usage ratio across all IFL processors of the CPC

zhmc_cpc_ifl_shared_processor_usage_ratio

C+D

G

Usage ratio across all shared IFL processors of the CPC

zhmc_cpc_ifl_dedicated_processor_usage_ratio

C

G

Usage ratio across all dedicated IFL processors of the CPC

zhmc_cpc_aap_shared_processor_usage_ratio

C

G

Usage ratio across all shared zAAP processors of the CPC

zhmc_cpc_aap_dedicated_processor_usage_ratio

C

G

Usage ratio across all dedicated zAAP processors of the CPC

zhmc_cpc_cbp_processor_usage_ratio

C

G

Usage ratio across all CBP processors of the CPC

zhmc_cpc_cbp_shared_processor_usage_ratio

C

G

Usage ratio across all shared CBP processors of the CPC

zhmc_cpc_cbp_dedicated_processor_usage_ratio

C

G

Usage ratio across all dedicated CBP processors of the CPC

zhmc_cpc_icf_processor_usage_ratio

C

G

Usage ratio across all ICF processors of the CPC

zhmc_cpc_icf_shared_processor_usage_ratio

C

G

Usage ratio across all shared ICF processors of the CPC

zhmc_cpc_icf_dedicated_processor_usage_ratio

C

G

Usage ratio across all dedicated ICF processors of the CPC

zhmc_cpc_iip_processor_usage_ratio

C

G

Usage ratio across all zIIP processors of the CPC

zhmc_cpc_iip_shared_processor_usage_ratio

C

G

Usage ratio across all shared zIIP processors of the CPC

zhmc_cpc_iip_dedicated_processor_usage_ratio

C

G

Usage ratio across all dedicated zIIP processors of the CPC

zhmc_cpc_channel_usage_ratio

C

G

Usage ratio across all channels of the CPC

zhmc_cpc_accelerator_adapter_usage_ratio

D

G

Usage ratio across all accelerator adapters of the CPC

zhmc_cpc_crypto_adapter_usage_ratio

D

G

Usage ratio across all crypto adapters of the CPC

zhmc_cpc_network_adapter_usage_ratio

D

G

Usage ratio across all network adapters of the CPC

zhmc_cpc_storage_adapter_usage_ratio

D

G

Usage ratio across all storage adapters of the CPC

zhmc_cpc_power_watts

C+D

G

Power consumption of the CPC

zhmc_cpc_ambient_temperature_celsius

C+D

G

Ambient temperature of the CPC

zhmc_cpc_humidity_percent

C+D

G

Relative humidity

zhmc_cpc_dew_point_celsius

C+D

G

Dew point

zhmc_cpc_heat_load_total_btu_per_hour

C+D

G

Total heat load of the CPC

zhmc_cpc_heat_load_forced_air_btu_per_hour

C+D

G

Heat load of the CPC covered by forced-air

zhmc_cpc_heat_load_water_btu_per_hour

C+D

G

Heat load of the CPC covered by water

zhmc_cpc_exhaust_temperature_celsius

C+D

G

Exhaust temperature of the CPC

zhmc_cpc_power_cord1_phase_a_watts

C+D

G

Power in Phase A of line cord 1 - 0 if not available

zhmc_cpc_power_cord1_phase_b_watts

C+D

G

Power in Phase B of line cord 1 - 0 if not available

zhmc_cpc_power_cord1_phase_c_watts

C+D

G

Power in Phase C of line cord 1 - 0 if not available

zhmc_cpc_power_cord2_phase_a_watts

C+D

G

Power in Phase A of line cord 2 - 0 if not available

zhmc_cpc_power_cord2_phase_b_watts

C+D

G

Power in Phase B of line cord 2 - 0 if not available

zhmc_cpc_power_cord2_phase_c_watts

C+D

G

Power in Phase C of line cord 2 - 0 if not available

zhmc_cpc_power_cord3_phase_a_watts

C+D

G

Power in Phase A of line cord 3 - 0 if not available

zhmc_cpc_power_cord3_phase_b_watts

C+D

G

Power in Phase B of line cord 3 - 0 if not available

zhmc_cpc_power_cord3_phase_c_watts

C+D

G

Power in Phase C of line cord 3 - 0 if not available

zhmc_cpc_power_cord4_phase_a_watts

C+D

G

Power in Phase A of line cord 4 - 0 if not available

zhmc_cpc_power_cord4_phase_b_watts

C+D

G

Power in Phase B of line cord 4 - 0 if not available

zhmc_cpc_power_cord4_phase_c_watts

C+D

G

Power in Phase C of line cord 4 - 0 if not available

zhmc_cpc_power_cord5_phase_a_watts

C+D

G

Power in Phase A of line cord 5 - 0 if not available

zhmc_cpc_power_cord5_phase_b_watts

C+D

G

Power in Phase B of line cord 5 - 0 if not available

zhmc_cpc_power_cord5_phase_c_watts

C+D

G

Power in Phase C of line cord 5 - 0 if not available

zhmc_cpc_power_cord6_phase_a_watts

C+D

G

Power in Phase A of line cord 6 - 0 if not available

zhmc_cpc_power_cord6_phase_b_watts

C+D

G

Power in Phase B of line cord 6 - 0 if not available

zhmc_cpc_power_cord6_phase_c_watts

C+D

G

Power in Phase C of line cord 6 - 0 if not available

zhmc_cpc_power_cord7_phase_a_watts

C+D

G

Power in Phase A of line cord 7 - 0 if not available

zhmc_cpc_power_cord7_phase_b_watts

C+D

G

Power in Phase B of line cord 7 - 0 if not available

zhmc_cpc_power_cord7_phase_c_watts

C+D

G

Power in Phase C of line cord 7 - 0 if not available

zhmc_cpc_power_cord8_phase_a_watts

C+D

G

Power in Phase A of line cord 8 - 0 if not available

zhmc_cpc_power_cord8_phase_b_watts

C+D

G

Power in Phase B of line cord 8 - 0 if not available

zhmc_cpc_power_cord8_phase_c_watts

C+D

G

Power in Phase C of line cord 8 - 0 if not available

zhmc_cpc_status_int

C+D

G

Status as integer

zhmc_cpc_has_unacceptable_status

C+D

G

Boolean indicating whether the CPC has an unacceptable status

zhmc_processor_usage_ratio

C+D

G

Usage ratio of the processor

zhmc_processor_smt_mode_percent

C+D

G

Percentage of time the processor was in in SMT mode

zhmc_processor_smt_thread0_usage_ratio

C+D

G

Usage ratio of thread 0 of the processor when in SMT mode

zhmc_processor_smt_thread1_usage_ratio

C+D

G

Usage ratio of thread 1 of the processor when in SMT mode

zhmc_partition_processor_usage_ratio

C+D

G

Usage ratio across all processors of the partition

zhmc_partition_cp_processor_usage_ratio

C

G

Usage ratio across all CP processors of the partition

zhmc_partition_ifl_processor_usage_ratio

C

G

Usage ratio across all IFL processors of the partition

zhmc_partition_icf_processor_usage_ratio

C

G

Usage ratio across all ICF processors of the partition

zhmc_partition_cbp_processor_usage_ratio

C

G

Usage ratio across all CBP processors of the partition

zhmc_partition_iip_processor_usage_ratio

C

G

Usage ratio across all IIP processors of the partition

zhmc_partition_accelerator_adapter_usage_ratio

D

G

Usage ratio of all accelerator adapters of the partition

zhmc_partition_crypto_adapter_usage_ratio

D

G

Usage ratio of all crypto adapters of the partition

zhmc_partition_network_adapter_usage_ratio

D

G

Usage ratio of all network adapters of the partition

zhmc_partition_storage_adapter_usage_ratio

D

G

Usage ratio of all storage adapters of the partition

zhmc_partition_zvm_paging_rate_pages_per_second

C

G

z/VM paging rate in pages/sec

zhmc_partition_processor_mode_int

C+D

G

Allocation mode for processors as an integer (0=shared, 1=dedicated)

zhmc_partition_threads_per_processor_ratio

D

G

Number of threads per processor used by OS

zhmc_partition_defined_capacity_msu_per_hour

C

G

Defined capacity expressed in terms of MSU per hour

zhmc_partition_workload_manager_is_enabled

C

G

Boolean indicating whether z/OS WLM is allowed to change processing weight related properties (0=false, 1=true)

zhmc_partition_cp_processor_count

C+D

G

Number of CP processors allocated to the active partition

zhmc_partition_cp_processor_count_is_capped

C+D

G

Boolean indicating whether absolute capping is enabled for CP processors (0=false, 1=true)

zhmc_partition_cp_processor_count_cap

C+D

G

Maximum number of CP processors that can be used if absolute capping is enabled, else 0

zhmc_partition_cp_reserved_processor_count

C

G

Number of CP processors reserved for the active partition

zhmc_partition_cp_initial_processing_weight

C+D

G

Initial CP processing weight for the active partition in shared mode

zhmc_partition_cp_minimum_processing_weight

C+D

G

Minimum CP processing weight for the active partition in shared mode

zhmc_partition_cp_maximum_processing_weight

C+D

G

Maximum CP processing weight for the active partition in shared mode

zhmc_partition_cp_current_processing_weight

C+D

G

Current CP processing weight for the active partition in shared mode

zhmc_partition_cp_processor_count_cap

D

G

Maximum number of CP processors to be used when absolute CP processor capping is enabled

zhmc_partition_cp_initial_processing_weight_is_capped

C+D

G

Boolean indicating whether the initial CP processing weight is capped (0=false, 1=true)

zhmc_partition_cp_current_processing_weight_is_capped

C

G

Boolean indicating whether the current CP processing weight is capped (0=false, 1=true)

zhmc_partition_ifl_processor_count

C+D

G

Number of IFL processors allocated to the active partition

zhmc_partition_ifl_processor_count_is_capped

C+D

G

Boolean indicating whether absolute capping is enabled for IFL processors (0=false, 1=true)

zhmc_partition_ifl_processor_count_cap

C+D

G

Maximum number of IFL processors that can be used if absolute capping is enabled, else 0

zhmc_partition_ifl_reserved_processor_count

C

G

Number of IFL processors reserved for the active partition

zhmc_partition_ifl_initial_processing_weight

C+D

G

Initial IFL processing weight for the active partition in shared mode

zhmc_partition_ifl_minimum_processing_weight

C+D

G

Minimum IFL processing weight for the active partition in shared mode

zhmc_partition_ifl_maximum_processing_weight

C+D

G

Maximum IFL processing weight for the active partition in shared mode

zhmc_partition_ifl_current_processing_weight

C+D

G

Current IFL processing weight for the active partition in shared mode

zhmc_partition_ifl_processor_count_cap

D

G

Maximum number of IFL processors to be used when absolute IFL processor capping is enabled

zhmc_partition_ifl_initial_processing_weight_is_capped

C+D

G

Boolean indicating whether the initial IFL processing weight is capped (0=false, 1=true)

zhmc_partition_ifl_current_processing_weight_is_capped

C

G

Boolean indicating whether the current IFL processing weight is capped (0=false, 1=true)

zhmc_partition_icf_processor_count

C

G

Number of ICF processors currently allocated to the active partition

zhmc_partition_icf_processor_count_is_capped

C

G

Boolean indicating whether absolute capping is enabled for ICF processors (0=false, 1=true)

zhmc_partition_icf_processor_count_cap

C

G

Maximum number of ICF processors that can be used if absolute capping is enabled, else 0

zhmc_partition_icf_reserved_processor_count

C

G

Number of ICF processors reserved for the active partition

zhmc_partition_icf_initial_processing_weight

C

G

Initial ICF processing weight for the active partition in shared mode

zhmc_partition_icf_minimum_processing_weight

C

G

Minimum ICF processing weight for the active partition in shared mode

zhmc_partition_icf_maximum_processing_weight

C

G

Maximum ICF processing weight for the active partition in shared mode

zhmc_partition_icf_current_processing_weight

C

G

Current ICF processing weight for the active partition in shared mode

zhmc_partition_icf_initial_processing_weight_is_capped

C

G

Boolean indicating whether the initial ICF processing weight is capped (0=false, 1=true)

zhmc_partition_icf_current_processing_weight_is_capped

C

G

Boolean indicating whether the current ICF processing weight is capped (0=false, 1=true)

zhmc_partition_iip_processor_count

C

G

Number of zIIP processors currently allocated to the active partition

zhmc_partition_iip_processor_count_is_capped

C

G

Boolean indicating whether absolute capping is enabled for zIIP processors (0=false, 1=true)

zhmc_partition_iip_processor_count_cap

C

G

Maximum number of zIIP processors that can be used if absolute capping is enabled, else 0

zhmc_partition_iip_reserved_processor_count

C

G

Number of zIIP processors reserved for the active partition

zhmc_partition_iip_initial_processing_weight

C

G

Initial zIIP processing weight for the active partition in shared mode

zhmc_partition_iip_minimum_processing_weight

C

G

Minimum zIIP processing weight for the active partition in shared mode

zhmc_partition_iip_maximum_processing_weight

C

G

Maximum zIIP processing weight for the active partition in shared mode

zhmc_partition_iip_current_processing_weight

C

G

Current zIIP processing weight for the active partition in shared mode

zhmc_partition_iip_initial_processing_weight_is_capped

C

G

Boolean indicating whether the initial zIIP processing weight is capped (0=false, 1=true)

zhmc_partition_iip_current_processing_weight_is_capped

C

G

Boolean indicating whether the current zIIP processing weight is capped (0=false, 1=true)

zhmc_partition_aap_processor_count_is_capped

C

G

Boolean indicating whether absolute capping is enabled for zAAP processors (0=false, 1=true)

zhmc_partition_aap_processor_count_cap

C

G

Maximum number of zAAP processors that can be used if absolute capping is enabled, else 0

zhmc_partition_aap_initial_processing_weight

C

G

Initial zAAP processing weight for the active partition in shared mode

zhmc_partition_aap_minimum_processing_weight

C

G

Minimum zAAP processing weight for the active partition in shared mode

zhmc_partition_aap_maximum_processing_weight

C

G

Maximum zAAP processing weight for the active partition in shared mode

zhmc_partition_aap_current_processing_weight

C

G

Current zAAP processing weight for the active partition in shared mode

zhmc_partition_aap_initial_processing_weight_is_capped

C

G

Boolean indicating whether the initial zAAP processing weight is capped (0=false, 1=true)

zhmc_partition_aap_current_processing_weight_is_capped

C

G

Boolean indicating whether the current zAAP processing weight is capped (0=false, 1=true)

zhmc_partition_cbp_processor_count_is_capped (Mode: C, Type: G)
  Boolean indicating whether absolute capping is enabled for CBP processors (0=false, 1=true)

zhmc_partition_cbp_processor_count_cap (Mode: C, Type: G)
  Maximum number of CBP processors that can be used if absolute capping is enabled, else 0

zhmc_partition_cbp_initial_processing_weight (Mode: C, Type: G)
  Initial CBP processing weight for the active partition in shared mode

zhmc_partition_cbp_minimum_processing_weight (Mode: C, Type: G)
  Minimum CBP processing weight for the active partition in shared mode

zhmc_partition_cbp_maximum_processing_weight (Mode: C, Type: G)
  Maximum CBP processing weight for the active partition in shared mode

zhmc_partition_cbp_current_processing_weight (Mode: C, Type: G)
  Current CBP processing weight for the active partition in shared mode

zhmc_partition_cbp_initial_processing_weight_is_capped (Mode: C, Type: G)
  Boolean indicating whether the initial CBP processing weight is capped (0=false, 1=true)

zhmc_partition_cbp_current_processing_weight_is_capped (Mode: C, Type: G)
  Boolean indicating whether the current CBP processing weight is capped (0=false, 1=true)

zhmc_partition_initial_memory_mib (Mode: D, Type: G)
  Initial amount of memory allocated to the partition when it becomes active, in MiB

zhmc_partition_reserved_memory_mib (Mode: D, Type: G)
  Amount of reserved memory (maximum memory minus initial memory), in MiB

zhmc_partition_maximum_memory_mib (Mode: D, Type: G)
  Maximum amount of memory to which the OS can increase, in MiB

zhmc_partition_initial_central_memory_mib (Mode: C, Type: G)
  Amount of central memory initially allocated to the active partition in MiB, else 0

zhmc_partition_current_central_memory_mib (Mode: C, Type: G)
  Amount of central memory currently allocated to the active partition, in MiB, else 0

zhmc_partition_maximum_central_memory_mib (Mode: C, Type: G)
  Maximum amount of central memory to which the operating system running in the active partition can increase, in MiB

zhmc_partition_initial_expanded_memory_mib (Mode: C, Type: G)
  Amount of expanded memory initially allocated to the active partition in MiB, else 0

zhmc_partition_current_expanded_memory_mib (Mode: C, Type: G)
  Amount of expanded memory currently allocated to the active partition, in MiB, else 0

zhmc_partition_maximum_expanded_memory_mib (Mode: C, Type: G)
  Maximum amount of expanded memory to which the operating system running in the active partition can increase, in MiB

zhmc_partition_initial_vfm_memory_gib (Mode: C, Type: G)
  Initial amount of VFM memory to be allocated at partition activation, in GiB

zhmc_partition_maximum_vfm_memory_gib (Mode: C, Type: G)
  Maximum amount of VFM memory that can be allocated to the active partition, in GiB

zhmc_partition_current_vfm_memory_gib (Mode: C, Type: G)
  Current amount of VFM memory that is allocated to the active partition, in GiB

zhmc_partition_status_int (Mode: D, Type: G)
  Partition status as integer (0=active, 1=degraded, 10=paused, 11=stopped, 12=starting, 13=stopping, 20=reservation-error, 21=terminated, 22=communications-not-active, 23=status-check, 99=unsupported value)

zhmc_partition_lpar_status_int (Mode: C, Type: G)
  LPAR status as integer (0=operating, 1=not-operating, 2=not-activated, 10=exceptions, 99=unsupported value)

zhmc_partition_has_unacceptable_status (Mode: C+D, Type: G)
  Boolean indicating whether the partition has an unacceptable status

zhmc_crypto_adapter_usage_ratio (Mode: C, Type: G)
  Usage ratio of the crypto adapter

zhmc_flash_memory_adapter_usage_ratio (Mode: C, Type: G)
  Usage ratio of the flash memory adapter

zhmc_adapter_usage_ratio (Mode: D, Type: G)
  Usage ratio of the adapter

zhmc_channel_usage_ratio (Mode: C, Type: G)
  Usage ratio of the channel

zhmc_roce_adapter_usage_ratio (Mode: C, Type: G)
  Usage ratio of the RoCE adapter

zhmc_port_bytes_sent_count (Mode: D, Type: C)
  Number of Bytes in unicast packets that were sent

zhmc_port_bytes_received_count (Mode: D, Type: C)
  Number of Bytes in unicast packets that were received

zhmc_port_packets_sent_count (Mode: D, Type: C)
  Number of unicast packets that were sent

zhmc_port_packets_received_count (Mode: D, Type: C)
  Number of unicast packets that were received

zhmc_port_packets_sent_dropped_count (Mode: D, Type: C)
  Number of sent packets that were dropped (resource shortage)

zhmc_port_packets_received_dropped_count (Mode: D, Type: C)
  Number of received packets that were dropped (resource shortage)

zhmc_port_packets_sent_discarded_count (Mode: D, Type: C)
  Number of sent packets that were discarded (malformed)

zhmc_port_packets_received_discarded_count (Mode: D, Type: C)
  Number of received packets that were discarded (malformed)

zhmc_port_multicast_packets_sent_count (Mode: D, Type: C)
  Number of multicast packets sent

zhmc_port_multicast_packets_received_count (Mode: D, Type: C)
  Number of multicast packets received

zhmc_port_broadcast_packets_sent_count (Mode: D, Type: C)
  Number of broadcast packets sent

zhmc_port_broadcast_packets_received_count (Mode: D, Type: C)
  Number of broadcast packets received

zhmc_port_data_sent_bytes (Mode: D, Type: G)
  Amount of data sent over the collection interval

zhmc_port_data_received_bytes (Mode: D, Type: G)
  Amount of data received over the collection interval

zhmc_port_data_rate_sent_bytes_per_second (Mode: D, Type: G)
  Data rate sent over the collection interval

zhmc_port_data_rate_received_bytes_per_second (Mode: D, Type: G)
  Data rate received over the collection interval

zhmc_port_bandwidth_usage_ratio (Mode: D, Type: G)
  Bandwidth usage ratio of the port

zhmc_nic_bytes_sent_count (Mode: D, Type: C)
  Number of Bytes in unicast packets that were sent

zhmc_nic_bytes_received_count (Mode: D, Type: C)
  Number of Bytes in unicast packets that were received

zhmc_nic_packets_sent_count (Mode: D, Type: C)
  Number of unicast packets that were sent

zhmc_nic_packets_received_count (Mode: D, Type: C)
  Number of unicast packets that were received

zhmc_nic_packets_sent_dropped_count (Mode: D, Type: C)
  Number of sent packets that were dropped (resource shortage)

zhmc_nic_packets_received_dropped_count (Mode: D, Type: C)
  Number of received packets that were dropped (resource shortage)

zhmc_nic_packets_sent_discarded_count (Mode: D, Type: C)
  Number of sent packets that were discarded (malformed)

zhmc_nic_packets_received_discarded_count (Mode: D, Type: C)
  Number of received packets that were discarded (malformed)

zhmc_nic_multicast_packets_sent_count (Mode: D, Type: C)
  Number of multicast packets sent

zhmc_nic_multicast_packets_received_count (Mode: D, Type: C)
  Number of multicast packets received

zhmc_nic_broadcast_packets_sent_count (Mode: D, Type: C)
  Number of broadcast packets sent

zhmc_nic_broadcast_packets_received_count (Mode: D, Type: C)
  Number of broadcast packets received

zhmc_nic_data_sent_bytes (Mode: D, Type: G)
  Amount of data sent over the collection interval

zhmc_nic_data_received_bytes (Mode: D, Type: G)
  Amount of data received over the collection interval

zhmc_nic_data_rate_sent_bytes_per_second (Mode: D, Type: G)
  Data rate sent over the collection interval

zhmc_nic_data_rate_received_bytes_per_second (Mode: D, Type: G)
  Data rate received over the collection interval

Legend:

  • Mode: The operational mode of the CPC: C=Classic, D=DPM

  • Type: The Prometheus metric type: G=Gauge, C=Counter

HMC credentials file

The HMC credentials file tells the exporter which HMC to talk to for obtaining metrics, and which userid and password to use for logging on to the HMC.

In addition, it allows specifying additional labels to be used in all metrics exported to Prometheus. This can be used to define labels that identify the environment managed by the HMC, for cases where metrics from multiple exporter instances and HMCs are combined.

The HMC credentials file is in YAML format and has the following structure:

metrics:
  hmc: {hmc-ip-address}
  userid: {hmc-userid}
  password: {hmc-password}
  verify_cert: {verify-cert}

extra_labels:  # optional
  # list of labels:
  - name: {label-name}
    value: {label-value}

Where:

  • {hmc-ip-address} is the IP address of the HMC.

  • {hmc-userid} is the userid on the HMC to be used for logging on.

  • {hmc-password} is the password of that userid.

  • {verify-cert} controls whether and how the HMC server certificate is verified. For details, see HMC certificate.

  • {label-name} is the label name.

  • {label-value} is the label value. The string value is used directly without any further interpretation.
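As a quick sanity check before starting the exporter, a file with the structure above can be loaded and validated with a few lines of Python. This is a minimal sketch, assuming the PyYAML package; the load_hmc_creds helper is illustrative and not part of the exporter's API:

```python
# Minimal sketch (assuming the PyYAML package) of loading and sanity-checking
# an HMC credentials file of the structure shown above. The load_hmc_creds
# helper is illustrative, not part of the exporter.
import yaml

REQUIRED_KEYS = {"hmc", "userid", "password", "verify_cert"}

def load_hmc_creds(text):
    """Parse credentials YAML and verify the required 'metrics' keys."""
    data = yaml.safe_load(text)
    metrics = data.get("metrics") or {}
    missing = REQUIRED_KEYS - set(metrics)
    if missing:
        raise ValueError(f"Missing keys in 'metrics' section: {sorted(missing)}")
    # extra_labels is optional and defaults to no additional labels
    return metrics, data.get("extra_labels", [])

creds_yaml = """
metrics:
  hmc: 9.10.11.12
  userid: user
  password: password
  verify_cert: true
extra_labels:
  - name: pod
    value: mypod
"""
metrics, extra_labels = load_hmc_creds(creds_yaml)
```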

Sample HMC credentials file

The following is a sample HMC credentials file (hmccreds.yaml).

The file can be downloaded from the Git repo as examples/hmccreds.yaml.

# Sample HMC credentials file for the Z HMC Prometheus Exporter.

metrics:
  hmc: 9.10.11.12
  userid: user
  password: password
  verify_cert: true

extra_labels:
  - name: pod
    value: mypod
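With the extra_labels from this sample, every exported metric carries the pod label in addition to its resource labels. A scrape of the exporter might then return lines such as the following (the CPC name CPC1 and the sample value are made up for illustration):

```
# HELP zhmc_cpc_processor_usage_ratio Usage ratio across all processors of the CPC
# TYPE zhmc_cpc_processor_usage_ratio gauge
zhmc_cpc_processor_usage_ratio{cpc="CPC1",pod="mypod"} 0.0567
```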

Metric definition file

The metric definition file maps the metrics returned by the HMC to metrics exported to Prometheus.

Furthermore, the metric definition file allows optimizing the access time to the HMC by disabling the fetching of metrics that are not needed.

The metric definition file is in YAML format and has the following structure:

metric_groups:
  # dictionary of metric groups:
  {hmc-metric-group}:
    prefix: {resource-type}
    fetch: {fetch-bool}
    if: {fetch-condition}  # optional
    labels:
      # list of labels:
      - name: {label-name}
        value: {label-value}

metrics:
  # dictionary of metric groups:
  {hmc-metric-group}:

    # dictionary format for defining metrics:
    {hmc-metric}:
      exporter_name: {metric}_{unit}
      exporter_desc: {help}
      metric_type: {metric-type}
      percent: {percent-bool}
      valuemap: {valuemap}

    # list format for defining metrics:
    - property_name: {hmc-metric}                     # either this
      properties_expression: {properties-expression}  # or this
      exporter_name: {metric}_{unit}
      exporter_desc: {help}
      percent: {percent-bool}
      valuemap: {valuemap}

Where:

  • {hmc-metric-group} is the name of the metric group on the HMC.

  • {hmc-metric} is the name of the metric (within the metric group) on the HMC.

  • {resource-type} is a short lower case term for the type of resource the metric applies to, for example cpc or partition. It is used in the Prometheus metric name directly after the initial zhmc_.

  • {fetch-bool} is a boolean indicating whether the user wants this metric group to be fetched from the HMC. For the metric group to actually be fetched, the if property, if specified, also needs to evaluate to True.

  • {fetch-condition} is a string that is evaluated as a Python expression and that indicates whether the metric group can be fetched. For the metric group to actually be fetched, the fetch property also needs to be True. The expression may contain the hmc_version variable which evaluates to the HMC version. The HMC versions are evaluated as tuples of integers, padding them to 3 version parts by appending 0 if needed.

  • {label-name} is the label name.

  • {label-value} identifies where the label value is taken from, as follows:

    • resource: the name of the resource reported by the HMC for the metric. This is the normal case and also the default.

    • resource.parent: the name of the parent resource of the resource reported by the HMC for the metric. This is useful for resources that are inside of the CPC, such as adapters or partitions, to get back to the CPC containing them.

    • resource.parent.parent: the name of the grandparent resource of the resource reported by the HMC for the metric. This is useful for resources that are inside of the CPC at the second level, such as NICs or adapter ports, to get back to the CPC containing them.

    • {hmc-metric}: the name of the HMC metric within the same metric group whose metric value should be used as the label value. This can be used to turn accompanying HMC metrics that are actually resource identifiers into labels for the actual metric. Example: The HMC returns metric group channel-usage with metric channel-usage that has the actual value and metric channel-name that identifies the channel to which the metric value belongs. The following fragment uses the channel-name metric as a label for the channel-usage metric:

      metric_groups:
        channel-usage:
          prefix: channel
          fetch: True
          labels:
            - name: cpc
              value: resource
            - name: channel_css_chpid
              value: channel-name
      metrics:
        channel-usage:
          channel-usage:
            percent: True
            exporter_name: usage_ratio
            exporter_desc: Usage ratio of the channel
      
  • {properties-expression} is a Jinja2 expression whose value is used as the metric value, for resource-based metrics. The expression uses the variable properties, which is the resource properties dictionary of the resource. The properties_expression attribute is mutually exclusive with property_name.

  • {metric-type} is an optional enum value that defines the Prometheus metric type used for this metric:

    • "gauge" (default) - for values that can go up and down

    • "counter" - for values that are monotonically increasing counters

  • {percent-bool} is a boolean indicating whether the metric value should be divided by 100. The reason is that the HMC represents percentages such that a value of 100 means 100%, while the Prometheus convention is that a value of 1.0 means 100%.

  • {valuemap} is an optional dictionary for mapping string enumeration values in the original HMC value to integers to be exported to Prometheus. This is used for example for the processor mode (shared, dedicated).

  • {metric}_{unit} is the Prometheus local metric name and unit in the full metric name zhmc_{resource-type}_{metric}_{unit}.

  • {help} is the description text that is exported as # HELP.
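The interplay of the fetch and if properties described above can be sketched in a few lines of Python. This is a minimal illustration under the stated padding rule, not the exporter's actual code; the helper names are made up, and eval is used purely for brevity:

```python
# Minimal sketch of how "fetch" and "if" interact (illustrative only, not the
# exporter's actual code). HMC versions are compared as integer tuples,
# padded to 3 parts with zeros, as described above.
def as_tuple(version, parts=3):
    """Convert a version string like '2.14' to a tuple like (2, 14, 0)."""
    nums = [int(p) for p in str(version).split(".")]
    return tuple(nums + [0] * (parts - len(nums)))

class ComparableVersion:
    """Wraps a version string so comparisons against strings use tuples."""
    def __init__(self, version):
        self.vt = as_tuple(version)
    def __ge__(self, other):
        return self.vt >= as_tuple(other)
    def __lt__(self, other):
        return self.vt < as_tuple(other)

def should_fetch(fetch, condition, hmc_version):
    """A metric group is fetched if fetch is true and the optional
    'if' condition evaluates to True."""
    if not fetch:
        return False
    if condition is None:
        return True
    # eval() is used for brevity in this sketch only.
    return bool(eval(condition, {"hmc_version": ComparableVersion(hmc_version)}))

# The crypto-usage group requires HMC version 2.12.0 or higher:
should_fetch(True, "hmc_version>='2.12.0'", "2.11.1")  # False
should_fetch(True, "hmc_version>='2.12.0'", "2.15")    # True
```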
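A properties_expression such as the absolute-capping expressions in the sample file below can be evaluated with Jinja2's compile_expression. A minimal sketch, assuming the Jinja2 package is installed; the properties dictionary content is made up for illustration:

```python
# Minimal sketch (assuming the Jinja2 package) of evaluating a
# properties_expression against a resource's properties dictionary.
import jinja2

env = jinja2.Environment()
expr = env.compile_expression(
    "properties['absolute-processing-capping'].value"
    " if properties['absolute-processing-capping'].type == 'processor'"
    " else 0"
)

# Jinja2 falls back to item lookup for attribute access, so .type and .value
# work on plain dictionaries:
properties = {"absolute-processing-capping": {"type": "processor", "value": 2}}
expr(properties=properties)  # 2 (absolute capping at 2 processors)

properties = {"absolute-processing-capping": {"type": "none"}}
expr(properties=properties)  # 0 (no absolute processor capping)
```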
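Finally, the percent, valuemap, and naming rules above can be summarized in a short sketch. The helper names and the fallback value 99 for unknown enumeration values (mirroring the "99=unsupported value" convention of the *_status_int metrics) are illustrative, not the exporter's actual code:

```python
# Minimal sketch of the per-metric value handling: percent values are divided
# by 100, a valuemap maps string enumerations to integers, and the full name
# is composed as zhmc_{resource-type}_{metric}_{unit}. Illustrative only.
def transform_value(value, percent=False, valuemap=None):
    """Apply the percent and valuemap transformations to an HMC value."""
    if valuemap is not None:
        return valuemap.get(value, 99)  # 99: unsupported value (illustrative)
    if percent:
        return value / 100  # HMC: 100 means 100%; Prometheus: 1.0 means 100%
    return value

def full_metric_name(prefix, exporter_name):
    """Compose the exported name from {resource-type} and {metric}_{unit}."""
    return f"zhmc_{prefix}_{exporter_name}"

full_metric_name("channel", "usage_ratio")  # zhmc_channel_usage_ratio
transform_value(50, percent=True)           # 0.5
transform_value("dedicated", valuemap={"shared": 0, "dedicated": 1})  # 1
```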

Sample metric definition file

The following is a sample metric definition file (metrics.yaml) that defines all metrics as of HMC 2.15 (z15).

The file can be downloaded from the Git repo as examples/metrics.yaml.

# Sample metric definition file for the Z HMC Prometheus Exporter.
# Defines all metrics up to HMC version 2.15.0 (z15), except for ensemble/zBX
# related metrics which are not supported by the Z HMC Prometheus Exporter.

metric_groups:

  # Available for CPCs in classic mode

  cpc-usage-overview:
    prefix: cpc
    fetch: true
    labels:
      - name: cpc
        value: resource

  logical-partition-usage:
    prefix: partition
    fetch: true
    labels:
      - name: cpc
        value: resource.parent
      - name: partition
        value: resource

  channel-usage:
    prefix: channel
    fetch: true
    labels:
      - name: cpc
        value: resource
      - name: channel_css_chpid
        value: channel-name  # format: 'CSS.CHPID'

  crypto-usage:
    prefix: crypto_adapter
    fetch: true
    if: "hmc_version>='2.12.0'"
    labels:
      - name: cpc
        value: resource
      - name: adapter_pchid
        value: channel-id

  flash-memory-usage:
    prefix: flash_memory_adapter
    fetch: true
    if: "hmc_version>='2.12.0'"
    labels:
      - name: cpc
        value: resource
      - name: adapter_pchid
        value: channel-id

  roce-usage:
    prefix: roce_adapter
    fetch: true
    if: "hmc_version>='2.12.1'"
    labels:
      - name: cpc
        value: resource
      - name: adapter_pchid
        value: channel-id

  logical-partition-resource:
    type: resource
    resource: cpc.logical-partition
    prefix: partition
    fetch: true
    labels:
      - name: cpc
        value: resource.parent
      - name: partition
        value: resource

  # Available for CPCs in DPM mode

  dpm-system-usage-overview:
    prefix: cpc
    fetch: true
    if: "hmc_version>='2.13.1'"
    labels:
      - name: cpc
        value: resource

  partition-usage:
    prefix: partition
    fetch: true
    if: "hmc_version>='2.13.1'"
    labels:
      - name: cpc
        value: resource.parent
      - name: partition
        value: resource

  adapter-usage:
    prefix: adapter
    fetch: true
    if: "hmc_version>='2.13.1'"
    labels:
      - name: cpc
        value: resource.parent
      - name: adapter
        value: resource

  network-physical-adapter-port:
    prefix: port
    fetch: true
    if: "hmc_version>='2.13.1'"
    labels:
      - name: cpc
        value: resource.parent
      - name: adapter
        value: resource
      - name: port
        value: network-port-id

  partition-attached-network-interface:
    prefix: nic
    fetch: false  # Takes about 1 minute for the initial processing
    if: "hmc_version>='2.13.1'"
    labels:
      - name: cpc
        value: resource.parent.parent
      - name: partition
        value: resource.parent
      - name: nic
        value: resource

  partition-resource:
    type: resource
    resource: cpc.partition
    prefix: partition
    fetch: true
    labels:
      - name: cpc
        value: resource.parent
      - name: partition
        value: resource

  # Available for CPCs in any mode

  zcpc-environmentals-and-power:
    prefix: cpc
    fetch: true
    labels:
      - name: cpc
        value: resource

  zcpc-processor-usage:
    prefix: processor
    fetch: true
    labels:
      - name: cpc
        value: resource
      - name: processor
        value: processor-name
      - name: type
        value: processor-type

  environmental-power-status:
    prefix: cpc
    fetch: true
    if: "hmc_version>='2.15.0'"
    labels:
      - name: cpc
        value: resource

  cpc-resource:
    type: resource
    resource: cpc
    prefix: cpc
    fetch: true
    labels:
      - name: cpc
        value: resource

metrics:

  # Available for CPCs in classic mode

  cpc-usage-overview:
    cpc-processor-usage:
      percent: true
      exporter_name: processor_usage_ratio
      exporter_desc: Usage ratio across all processors of the CPC
    all-shared-processor-usage:
      percent: true
      exporter_name: shared_processor_usage_ratio
      exporter_desc: Usage ratio across all shared processors of the CPC
    all-dedicated-processor-usage:
      percent: true
      exporter_name: dedicated_processor_usage_ratio
      exporter_desc: Usage ratio across all dedicated processors of the CPC
    cp-all-processor-usage:
      percent: true
      exporter_name: cp_processor_usage_ratio
      exporter_desc: Usage ratio across all CP processors of the CPC
    cp-shared-processor-usage:
      percent: true
      exporter_name: cp_shared_processor_usage_ratio
      exporter_desc: Usage ratio across all shared CP processors of the CPC
    cp-dedicated-processor-usage:
      percent: true
      exporter_name: cp_dedicated_processor_usage_ratio
      exporter_desc: Usage ratio across all dedicated CP processors of the CPC
    ifl-all-processor-usage:
      percent: true
      exporter_name: ifl_processor_usage_ratio
      exporter_desc: Usage ratio across all IFL processors of the CPC
    ifl-shared-processor-usage:
      percent: true
      exporter_name: ifl_shared_processor_usage_ratio
      exporter_desc: Usage ratio across all shared IFL processors of the CPC
    ifl-dedicated-processor-usage:
      percent: true
      exporter_name: ifl_dedicated_processor_usage_ratio
      exporter_desc: Usage ratio across all dedicated IFL processors of the CPC
    icf-all-processor-usage:
      percent: true
      exporter_name: icf_processor_usage_ratio
      exporter_desc: Usage ratio across all ICF processors of the CPC
    icf-shared-processor-usage:
      percent: true
      exporter_name: icf_shared_processor_usage_ratio
      exporter_desc: Usage ratio across all shared ICF processors of the CPC
    icf-dedicated-processor-usage:
      percent: true
      exporter_name: icf_dedicated_processor_usage_ratio
      exporter_desc: Usage ratio across all dedicated ICF processors of the CPC
    iip-all-processor-usage:
      percent: true
      exporter_name: iip_processor_usage_ratio
      exporter_desc: Usage ratio across all zIIP processors of the CPC
    iip-shared-processor-usage:
      percent: true
      exporter_name: iip_shared_processor_usage_ratio
      exporter_desc: Usage ratio across all shared zIIP processors of the CPC
    iip-dedicated-processor-usage:
      percent: true
      exporter_name: iip_dedicated_processor_usage_ratio
      exporter_desc: Usage ratio across all dedicated zIIP processors of the CPC
    aap-shared-processor-usage:
      percent: true
      exporter_name: aap_shared_processor_usage_ratio
      exporter_desc: Usage ratio across all shared zAAP processors of the CPC
    aap-dedicated-processor-usage:
      percent: true
      exporter_name: aap_dedicated_processor_usage_ratio
      exporter_desc: Usage ratio across all dedicated zAAP processors of the CPC
    # aap-all-processor-usage does not seem to exist
    cbp-all-processor-usage:
      # since HMC/SE version 2.14.0
      percent: true
      exporter_name: cbp_processor_usage_ratio
      exporter_desc: Usage ratio across all CBP processors of the CPC
    cbp-shared-processor-usage:
      # since HMC/SE version 2.14.0
      percent: true
      exporter_name: cbp_shared_processor_usage_ratio
      exporter_desc: Usage ratio across all shared CBP processors of the CPC
    cbp-dedicated-processor-usage:
      # since HMC/SE version 2.14.0
      percent: true
      exporter_name: cbp_dedicated_processor_usage_ratio
      exporter_desc: Usage ratio across all dedicated CBP processors of the CPC
    channel-usage:
      percent: true
      exporter_name: channel_usage_ratio
      exporter_desc: Usage ratio across all channels of the CPC
    power-consumption-watts:
      percent: false
      exporter_name: power_watts
      exporter_desc: Power consumption of the CPC
    temperature-celsius:
      percent: false
      exporter_name: ambient_temperature_celsius
      exporter_desc: Ambient temperature of the CPC

  logical-partition-usage:
    processor-usage:
      percent: true
      exporter_name: processor_usage_ratio
      exporter_desc: Usage ratio across all processors of the partition
    cp-processor-usage:
      percent: true
      exporter_name: cp_processor_usage_ratio
      exporter_desc: Usage ratio across all CP processors of the partition
    ifl-processor-usage:
      percent: true
      exporter_name: ifl_processor_usage_ratio
      exporter_desc: Usage ratio across all IFL processors of the partition
    icf-processor-usage:
      percent: true
      exporter_name: icf_processor_usage_ratio
      exporter_desc: Usage ratio across all ICF processors of the partition
    iip-processor-usage:
      percent: true
      exporter_name: iip_processor_usage_ratio
      exporter_desc: Usage ratio across all IIP processors of the partition
    cbp-processor-usage:
      # since HMC/SE version 2.14.0
      percent: true
      exporter_name: cbp_processor_usage_ratio
      exporter_desc: Usage ratio across all CBP processors of the partition
    zvm-paging-rate:
      percent: false
      exporter_name: zvm_paging_rate_pages_per_second
      exporter_desc: z/VM paging rate in pages/sec

  channel-usage:
    channel-usage:
      percent: true
      exporter_name: usage_ratio
      exporter_desc: Usage ratio of the channel
    channel-name:
      percent: false
      exporter_name: null  # Ignored (used for identification in channel-usage)
      exporter_desc: null
    shared-channel:
      percent: false
      exporter_name: null  # Ignored (used for identification in channel-usage)
      exporter_desc: null
    logical-partition-name:
      percent: false
      exporter_name: null  # Ignored (used for identification in channel-usage)
      exporter_desc: null

  crypto-usage:
    adapter-usage:
      percent: true
      exporter_name: usage_ratio
      exporter_desc: Usage ratio of the crypto adapter
    channel-id:
      percent: false
      exporter_name: null  # Ignored (used for identification in adapter-usage)
      exporter_desc: null
    crypto-id:
      percent: false
      exporter_name: null  # Ignored (used for identification in adapter-usage)
      exporter_desc: null

  flash-memory-usage:
    adapter-usage:
      percent: true
      exporter_name: usage_ratio
      exporter_desc: Usage ratio of the flash memory adapter
    channel-id:
      percent: false
      exporter_name: null  # Ignored (used for identification in adapter-usage)
      exporter_desc: null

  roce-usage:
    adapter-usage:
      percent: true
      exporter_name: usage_ratio
      exporter_desc: Usage ratio of the RoCE adapter

  logical-partition-resource:  # can be in dictionary or list format
    - property_name: defined-capacity
      exporter_name: defined_capacity_msu_per_hour
      exporter_desc: Defined capacity expressed in terms of Millions of Service Units (MSU)s per hour
    - property_name: workload-manager-enabled
      exporter_name: workload_manager_is_enabled
      exporter_desc: Boolean indicating whether the z/OS Workload Manager is allowed to change processing weight related properties of the partition (0=false, 1=true)
    - property_name: processor-usage
      exporter_name: processor_mode_int
      exporter_desc: Allocation mode for processors to the active partition as an integer (0=shared, 1=dedicated)
      valuemap:
        shared: 0
        dedicated: 1
    - property_name: number-general-purpose-processors
      exporter_name: cp_processor_count
      exporter_desc: Number of CP processors currently allocated to the active partition
    - property_name: number-reserved-general-purpose-processors
      exporter_name: cp_reserved_processor_count
      exporter_desc: Number of CP processors reserved for the active partition (this is the maximum when increasing the number)
    - property_name: initial-processing-weight
      exporter_name: cp_initial_processing_weight
      exporter_desc: Initial CP processing weight for the active partition in shared mode (1..999)
    - property_name: minimum-processing-weight
      exporter_name: cp_minimum_processing_weight
      exporter_desc: Minimum CP processing weight for the active partition in shared mode (1..999)
    - property_name: maximum-processing-weight
      exporter_name: cp_maximum_processing_weight
      exporter_desc: Maximum CP processing weight for the active partition in shared mode (1..999)
    - property_name: current-processing-weight
      exporter_name: cp_current_processing_weight
      exporter_desc: Current CP processing weight for the active partition in shared mode (1..999)
    - properties_expression: "properties['absolute-processing-capping'].type == 'processor'"
      exporter_name: cp_processor_count_is_capped
      exporter_desc: Boolean indicating whether absolute capping is enabled for CP processors (0=false, 1=true)
    - properties_expression: "properties['absolute-processing-capping'].value if properties['absolute-processing-capping'].type == 'processor' else 0"
      exporter_name: cp_processor_count_cap
      exporter_desc: Maximum number of CP processors that can be used if absolute capping is enabled, else 0
    - property_name: initial-processing-weight-capped
      exporter_name: cp_initial_processing_weight_is_capped
      exporter_desc: Boolean indicating whether the initial CP processing weight is capped (0=false, 1=true)
    - property_name: current-processing-weight-capped
      exporter_name: cp_current_processing_weight_is_capped
      exporter_desc: Boolean indicating whether the current CP processing weight is capped (0=false, 1=true)
    - property_name: number-ifl-processors
      exporter_name: ifl_processor_count
      exporter_desc: Number of IFL processors currently allocated to the active partition
    - property_name: number-reserved-ifl-processors
      exporter_name: ifl_reserved_processor_count
      exporter_desc: Number of IFL processors reserved for the active partition (this is the maximum when increasing the number)
    - property_name: initial-ifl-processing-weight
      exporter_name: ifl_initial_processing_weight
      exporter_desc: Initial IFL processing weight for the active partition in shared mode (1..999)
    - property_name: minimum-ifl-processing-weight
      exporter_name: ifl_minimum_processing_weight
      exporter_desc: Minimum IFL processing weight for the active partition in shared mode (1..999)
    - property_name: maximum-ifl-processing-weight
      exporter_name: ifl_maximum_processing_weight
      exporter_desc: Maximum IFL processing weight for the active partition in shared mode (1..999)
    - property_name: current-ifl-processing-weight
      exporter_name: ifl_current_processing_weight
      exporter_desc: Current IFL processing weight for the active partition in shared mode (1..999)
    - properties_expression: "properties['absolute-ifl-capping'].type == 'processor'"
      exporter_name: ifl_processor_count_is_capped
      exporter_desc: Boolean indicating whether absolute capping is enabled for IFL processors (0=false, 1=true)
    - properties_expression: "properties['absolute-ifl-capping'].value if properties['absolute-ifl-capping'].type == 'processor' else 0"
      exporter_name: ifl_processor_count_cap
      exporter_desc: Maximum number of IFL processors that can be used if absolute capping is enabled, else 0
    - property_name: initial-ifl-processing-weight-capped
      exporter_name: ifl_initial_processing_weight_is_capped
      exporter_desc: Boolean indicating whether the initial IFL processing weight is capped (0=false, 1=true)
    - property_name: current-ifl-processing-weight-capped
      exporter_name: ifl_current_processing_weight_is_capped
      exporter_desc: Boolean indicating whether the current IFL processing weight is capped (0=false, 1=true)
    - property_name: number-icf-processors
      exporter_name: icf_processor_count
      exporter_desc: Number of ICF processors currently allocated to the active partition
    - property_name: number-reserved-icf-processors
      exporter_name: icf_reserved_processor_count
      exporter_desc: Number of ICF processors reserved for the active partition (this is the maximum when increasing the number)
    - property_name: initial-cf-processing-weight
      exporter_name: icf_initial_processing_weight
      exporter_desc: Initial ICF processing weight for the active partition in shared mode (1..999)
    - property_name: minimum-cf-processing-weight
      exporter_name: icf_minimum_processing_weight
      exporter_desc: Minimum ICF processing weight for the active partition in shared mode (1..999)
    - property_name: maximum-cf-processing-weight
      exporter_name: icf_maximum_processing_weight
      exporter_desc: Maximum ICF processing weight for the active partition in shared mode (1..999)
    - property_name: current-cf-processing-weight
      exporter_name: icf_current_processing_weight
      exporter_desc: Current ICF processing weight for the active partition in shared mode (1..999)
    - properties_expression: "properties['absolute-cf-capping'].type == 'processor'"
      exporter_name: icf_processor_count_is_capped
      exporter_desc: Boolean indicating whether absolute capping is enabled for ICF processors (0=false, 1=true)
    - properties_expression: "properties['absolute-cf-capping'].value if properties['absolute-cf-capping'].type == 'processor' else 0"
      exporter_name: icf_processor_count_cap
      exporter_desc: Maximum number of ICF processors that can be used if absolute capping is enabled, else 0
    - property_name: initial-cf-processing-weight-capped
      exporter_name: icf_initial_processing_weight_is_capped
      exporter_desc: Boolean indicating whether the initial ICF processing weight is capped (0=false, 1=true)
    - property_name: current-cf-processing-weight-capped
      exporter_name: icf_current_processing_weight_is_capped
      exporter_desc: Boolean indicating whether the current ICF processing weight is capped (0=false, 1=true)
    - property_name: number-ziip-processors
      exporter_name: iip_processor_count
      exporter_desc: Number of zIIP processors currently allocated to the active partition
    - property_name: number-reserved-ziip-processors
      exporter_name: iip_reserved_processor_count
      exporter_desc: Number of zIIP processors reserved for the active partition (this is the maximum when increasing the number)
    - property_name: initial-ziip-processing-weight
      exporter_name: iip_initial_processing_weight
      exporter_desc: Initial zIIP processing weight for the active partition in shared mode (1..999)
    - property_name: minimum-ziip-processing-weight
      exporter_name: iip_minimum_processing_weight
      exporter_desc: Minimum zIIP processing weight for the active partition in shared mode (1..999)
    - property_name: maximum-ziip-processing-weight
      exporter_name: iip_maximum_processing_weight
      exporter_desc: Maximum zIIP processing weight for the active partition in shared mode (1..999)
    - property_name: current-ziip-processing-weight
      exporter_name: iip_current_processing_weight
      exporter_desc: Current zIIP processing weight for the active partition in shared mode (1..999)
    - properties_expression: "properties['absolute-ziip-capping'].type == 'processor'"
      exporter_name: iip_processor_count_is_capped
      exporter_desc: Boolean indicating whether absolute capping is enabled for zIIP processors (0=false, 1=true)
    - properties_expression: "properties['absolute-ziip-capping'].value if properties['absolute-ziip-capping'].type == 'processor' else 0"
      exporter_name: iip_processor_count_cap
      exporter_desc: Maximum number of zIIP processors that can be used if absolute capping is enabled, else 0
    - property_name: initial-ziip-processing-weight-capped
      exporter_name: iip_initial_processing_weight_is_capped
      exporter_desc: Boolean indicating whether the initial zIIP processing weight is capped (0=false, 1=true)
    - property_name: current-ziip-processing-weight-capped
      exporter_name: iip_current_processing_weight_is_capped
      exporter_desc: Boolean indicating whether the current zIIP processing weight is capped (0=false, 1=true)
    # Note: number-...-processors/cores properties do not exist in 2.15 for AAP processors
    - property_name: initial-aap-processing-weight
      exporter_name: aap_initial_processing_weight
      exporter_desc: Initial zAAP processing weight for the active partition in shared mode (1..999)
    - property_name: minimum-aap-processing-weight
      exporter_name: aap_minimum_processing_weight
      exporter_desc: Minimum zAAP processing weight for the active partition in shared mode (1..999)
    - property_name: maximum-aap-processing-weight
      exporter_name: aap_maximum_processing_weight
      exporter_desc: Maximum zAAP processing weight for the active partition in shared mode (1..999)
    - property_name: current-aap-processing-weight
      exporter_name: aap_current_processing_weight
      exporter_desc: Current zAAP processing weight for the active partition in shared mode (1..999)
    - properties_expression: "properties['absolute-aap-capping'].type == 'processor'"
      exporter_name: aap_processor_count_is_capped
      exporter_desc: Boolean indicating whether absolute capping is enabled for zAAP processors (0=false, 1=true)
    - properties_expression: "properties['absolute-aap-capping'].value if properties['absolute-aap-capping'].type == 'processor' else 0"
      exporter_name: aap_processor_count_cap
      exporter_desc: Maximum number of zAAP processors that can be used if absolute capping is enabled, else 0
    - property_name: initial-aap-processing-weight-capped
      exporter_name: aap_initial_processing_weight_is_capped
      exporter_desc: Boolean indicating whether the initial zAAP processing weight is capped (0=false, 1=true)
    - property_name: current-aap-processing-weight-capped
      exporter_name: aap_current_processing_weight_is_capped
      exporter_desc: Boolean indicating whether the current zAAP processing weight is capped (0=false, 1=true)
    # Note: number-...-processors/cores properties do not exist in 2.15 for CBP processors
    - property_name: initial-cbp-processing-weight
      # since HMC/SE version 2.14.0
      exporter_name: cbp_initial_processing_weight
      exporter_desc: Initial CBP processing weight for the active partition in shared mode (1..999)
    - property_name: minimum-cbp-processing-weight
      # since HMC/SE version 2.14.0
      exporter_name: cbp_minimum_processing_weight
      exporter_desc: Minimum CBP processing weight for the active partition in shared mode (1..999)
    - property_name: maximum-cbp-processing-weight
      # since HMC/SE version 2.14.0
      exporter_name: cbp_maximum_processing_weight
      exporter_desc: Maximum CBP processing weight for the active partition in shared mode (1..999)
    - property_name: current-cbp-processing-weight
      # since HMC/SE version 2.14.0
      exporter_name: cbp_current_processing_weight
      exporter_desc: Current CBP processing weight for the active partition in shared mode (1..999)
    - properties_expression: "properties['absolute-cbp-capping'].type == 'processor'"
      # since HMC/SE version 2.14.0
      exporter_name: cbp_processor_count_is_capped
      exporter_desc: Boolean indicating whether absolute capping is enabled for CBP processors (0=false, 1=true)
    - properties_expression: "properties['absolute-cbp-capping'].value if properties['absolute-cbp-capping'].type == 'processor' else 0"
      # since HMC/SE version 2.14.0
      exporter_name: cbp_processor_count_cap
      exporter_desc: Maximum number of CBP processors that can be used if absolute capping is enabled, else 0
    - property_name: initial-cbp-processing-weight-capped
      # since HMC/SE version 2.14.0
      exporter_name: cbp_initial_processing_weight_is_capped
      exporter_desc: Boolean indicating whether the initial CBP processing weight is capped (0=false, 1=true)
    - property_name: current-cbp-processing-weight-capped
      # since HMC/SE version 2.14.0
      exporter_name: cbp_current_processing_weight_is_capped
      exporter_desc: Boolean indicating whether the current CBP processing weight is capped (0=false, 1=true)
    - properties_expression: "properties['storage-central-allocation']|map(attribute='initial')|sum"
      exporter_name: initial_central_memory_mib
      exporter_desc: Amount of central memory initially allocated to the active partition, in MiB, else 0
    - properties_expression: "properties['storage-central-allocation']|map(attribute='current')|sum"
      exporter_name: current_central_memory_mib
      exporter_desc: Amount of central memory currently allocated to the active partition, in MiB, else 0
    - properties_expression: "properties['storage-central-allocation']|map(attribute='maximum')|sum"
      exporter_name: maximum_central_memory_mib
      exporter_desc: Maximum amount of central memory to which the operating system running in the active partition can increase, in MiB
    - properties_expression: "properties['storage-expanded-allocation']|map(attribute='initial')|sum"
      exporter_name: initial_expanded_memory_mib
      exporter_desc: Amount of expanded memory initially allocated to the active partition, in MiB, else 0
    - properties_expression: "properties['storage-expanded-allocation']|map(attribute='current')|sum"
      exporter_name: current_expanded_memory_mib
      exporter_desc: Amount of expanded memory currently allocated to the active partition, in MiB, else 0
    - properties_expression: "properties['storage-expanded-allocation']|map(attribute='maximum')|sum"
      exporter_name: maximum_expanded_memory_mib
      exporter_desc: Maximum amount of expanded memory to which the operating system running in the active partition can increase, in MiB
    - property_name: initial-vfm-storage
      exporter_name: initial_vfm_memory_gib
      exporter_desc: Initial amount of IBM Virtual Flash Memory (VFM) to be allocated at partition activation, in GiB
    - property_name: maximum-vfm-storage
      exporter_name: maximum_vfm_memory_gib
      exporter_desc: Maximum amount of IBM Virtual Flash Memory (VFM) that can be allocated to the active partition, in GiB
    - property_name: current-vfm-storage
      exporter_name: current_vfm_memory_gib
      exporter_desc: Current amount of IBM Virtual Flash Memory (VFM) that is allocated to the active partition, in GiB
    - properties_expression: "{'operating': 0, 'not-operating': 1, 'not-activated': 2, 'exceptions': 10}.get(properties.status, 99)"
      exporter_name: lpar_status_int
      exporter_desc: "LPAR status as integer (0=operating, 1=not-operating, 2=not-activated, 10=exceptions, 99=unsupported value)"
    - property_name: has-unacceptable-status
      exporter_name: has_unacceptable_status
      exporter_desc: Boolean indicating whether the partition has an unacceptable status (0=false, 1=true)

  # Available for CPCs in DPM mode

  dpm-system-usage-overview:
    processor-usage:
      percent: true
      exporter_name: processor_usage_ratio
      exporter_desc: Usage ratio across all processors of the CPC
    all-shared-processor-usage:
      percent: true
      exporter_name: shared_processor_usage_ratio
      exporter_desc: Usage ratio across all shared processors of the CPC
    cp-all-processor-usage:
      percent: true
      exporter_name: cp_processor_usage_ratio
      exporter_desc: Usage ratio across all CP processors of the CPC
    cp-shared-processor-usage:
      percent: true
      exporter_name: cp_shared_processor_usage_ratio
      exporter_desc: Usage ratio across all shared CP processors of the CPC
    ifl-all-processor-usage:
      percent: true
      exporter_name: ifl_processor_usage_ratio
      exporter_desc: Usage ratio across all IFL processors of the CPC
    ifl-shared-processor-usage:
      percent: true
      exporter_name: ifl_shared_processor_usage_ratio
      exporter_desc: Usage ratio across all shared IFL processors of the CPC
    network-usage:
      percent: true
      exporter_name: network_adapter_usage_ratio
      exporter_desc: Usage ratio across all network adapters of the CPC
    storage-usage:
      percent: true
      exporter_name: storage_adapter_usage_ratio
      exporter_desc: Usage ratio across all storage adapters of the CPC
    accelerator-usage:
      percent: true
      exporter_name: accelerator_adapter_usage_ratio
      exporter_desc: Usage ratio across all accelerator adapters of the CPC
    crypto-usage:
      percent: true
      exporter_name: crypto_adapter_usage_ratio
      exporter_desc: Usage ratio across all crypto adapters of the CPC
    power-consumption-watts:
      percent: false
      exporter_name: power_watts
      exporter_desc: Power consumption of the CPC
    temperature-celsius:
      percent: false
      exporter_name: ambient_temperature_celsius
      exporter_desc: Ambient temperature of the CPC

  partition-usage:
    processor-usage:
      percent: true
      exporter_name: processor_usage_ratio
      exporter_desc: Usage ratio across all processors of the partition
    network-usage:
      percent: true
      exporter_name: network_adapter_usage_ratio
      exporter_desc: Usage ratio of all network adapters of the partition
    storage-usage:
      percent: true
      exporter_name: storage_adapter_usage_ratio
      exporter_desc: Usage ratio of all storage adapters of the partition
    accelerator-usage:
      percent: true
      exporter_name: accelerator_adapter_usage_ratio
      exporter_desc: Usage ratio of all accelerator adapters of the partition
    crypto-usage:
      percent: true
      exporter_name: crypto_adapter_usage_ratio
      exporter_desc: Usage ratio of all crypto adapters of the partition

  adapter-usage:
    adapter-usage:
      percent: true
      exporter_name: usage_ratio
      exporter_desc: Usage ratio of the adapter

  network-physical-adapter-port:
    network-port-id:
      # type: info
      percent: false
      exporter_name: null  # Ignored (identifies the port, used in label)
      exporter_desc: null
    bytes-sent:
      metric_type: counter
      percent: false
      exporter_name: bytes_sent_count
      exporter_desc: Number of Bytes in unicast packets that were sent
    bytes-received:
      metric_type: counter
      percent: false
      exporter_name: bytes_received_count
      exporter_desc: Number of Bytes in unicast packets that were received
    packets-sent:
      metric_type: counter
      percent: false
      exporter_name: packets_sent_count
      exporter_desc: Number of unicast packets that were sent
    packets-received:
      metric_type: counter
      percent: false
      exporter_name: packets_received_count
      exporter_desc: Number of unicast packets that were received
    packets-sent-dropped:
      metric_type: counter
      percent: false
      exporter_name: packets_sent_dropped_count
      exporter_desc: Number of sent packets that were dropped (resource shortage)
    packets-received-dropped:
      metric_type: counter
      percent: false
      exporter_name: packets_received_dropped_count
      exporter_desc: Number of received packets that were dropped (resource shortage)
    packets-sent-discarded:
      metric_type: counter
      percent: false
      exporter_name: packets_sent_discarded_count
      exporter_desc: Number of sent packets that were discarded (malformed)
    packets-received-discarded:
      metric_type: counter
      percent: false
      exporter_name: packets_received_discarded_count
      exporter_desc: Number of received packets that were discarded (malformed)
    multicast-packets-sent:
      metric_type: counter
      percent: false
      exporter_name: multicast_packets_sent_count
      exporter_desc: Number of multicast packets sent
    multicast-packets-received:
      metric_type: counter
      percent: false
      exporter_name: multicast_packets_received_count
      exporter_desc: Number of multicast packets received
    broadcast-packets-sent:
      metric_type: counter
      percent: false
      exporter_name: broadcast_packets_sent_count
      exporter_desc: Number of broadcast packets sent
    broadcast-packets-received:
      metric_type: counter
      percent: false
      exporter_name: broadcast_packets_received_count
      exporter_desc: Number of broadcast packets received
    interval-bytes-sent:
      percent: false
      exporter_name: data_sent_bytes
      exporter_desc: Amount of data sent over the collection interval
    interval-bytes-received:
      percent: false
      exporter_name: data_received_bytes
      exporter_desc: Amount of data received over the collection interval
    bytes-per-second-sent:
      percent: false
      exporter_name: data_rate_sent_bytes_per_second
      exporter_desc: Data rate sent over the collection interval
    bytes-per-second-received:
      percent: false
      exporter_name: data_rate_received_bytes_per_second
      exporter_desc: Data rate received over the collection interval
    utilization:
      percent: true
      exporter_name: bandwidth_usage_ratio
      exporter_desc: Bandwidth usage ratio of the port
    mac-address:
      # type: info
      percent: false
      exporter_name: null # mac_address
      exporter_desc: null # MAC address of the port, or 'N/A'
    flags:
      # type: info
      percent: false
      exporter_name: null  # Ignored (can be detected from metric values)
      exporter_desc: null

  partition-attached-network-interface:
    partition-id:  # the OID, i.e. /api/partitions/{partition-id}
      # type: info
      percent: false
      exporter_name: null  # Ignored (identifies the partition, used in label)
      exporter_desc: null
    bytes-sent:
      metric_type: counter
      percent: false
      exporter_name: bytes_sent_count
      exporter_desc: Number of Bytes in unicast packets that were sent
    bytes-received:
      metric_type: counter
      percent: false
      exporter_name: bytes_received_count
      exporter_desc: Number of Bytes in unicast packets that were received
    packets-sent:
      metric_type: counter
      percent: false
      exporter_name: packets_sent_count
      exporter_desc: Number of unicast packets that were sent
    packets-received:
      metric_type: counter
      percent: false
      exporter_name: packets_received_count
      exporter_desc: Number of unicast packets that were received
    packets-sent-dropped:
      metric_type: counter
      percent: false
      exporter_name: packets_sent_dropped_count
      exporter_desc: Number of sent packets that were dropped (resource shortage)
    packets-received-dropped:
      metric_type: counter
      percent: false
      exporter_name: packets_received_dropped_count
      exporter_desc: Number of received packets that were dropped (resource shortage)
    packets-sent-discarded:
      metric_type: counter
      percent: false
      exporter_name: packets_sent_discarded_count
      exporter_desc: Number of sent packets that were discarded (malformed)
    packets-received-discarded:
      metric_type: counter
      percent: false
      exporter_name: packets_received_discarded_count
      exporter_desc: Number of received packets that were discarded (malformed)
    multicast-packets-sent:
      metric_type: counter
      percent: false
      exporter_name: multicast_packets_sent_count
      exporter_desc: Number of multicast packets sent
    multicast-packets-received:
      metric_type: counter
      percent: false
      exporter_name: multicast_packets_received_count
      exporter_desc: Number of multicast packets received
    broadcast-packets-sent:
      metric_type: counter
      percent: false
      exporter_name: broadcast_packets_sent_count
      exporter_desc: Number of broadcast packets sent
    broadcast-packets-received:
      metric_type: counter
      percent: false
      exporter_name: broadcast_packets_received_count
      exporter_desc: Number of broadcast packets received
    interval-bytes-sent:
      percent: false
      exporter_name: data_sent_bytes
      exporter_desc: Amount of data sent over the collection interval
    interval-bytes-received:
      percent: false
      exporter_name: data_received_bytes
      exporter_desc: Amount of data received over the collection interval
    bytes-per-second-sent:
      percent: false
      exporter_name: data_rate_sent_bytes_per_second
      exporter_desc: Data rate sent over the collection interval
    bytes-per-second-received:
      percent: false
      exporter_name: data_rate_received_bytes_per_second
      exporter_desc: Data rate received over the collection interval
    flags:
      # type: info
      percent: false
      exporter_name: null  # Ignored (can be detected from metric values)
      exporter_desc: null

  partition-resource:  # can be in dictionary or list format
    - property_name: processor-mode
      exporter_name: processor_mode_int
      exporter_desc: Allocation mode for processors to the active partition as an integer (0=shared, 1=dedicated)
      valuemap:
        shared: 0
        dedicated: 1
    - property_name: threads-per-processor
      exporter_name: threads_per_processor_ratio
      exporter_desc: Number of threads per allocated processor the operating system running in the partition is configured to use
    - property_name: cp-processors
      exporter_name: cp_processor_count
      exporter_desc: Number of CP processors allocated to the active partition
    - property_name: initial-cp-processing-weight
      exporter_name: cp_initial_processing_weight
      exporter_desc: Initial CP processing weight for the active partition in shared mode (1..999)
    - property_name: minimum-cp-processing-weight
      exporter_name: cp_minimum_processing_weight
      exporter_desc: Minimum CP processing weight for the active partition in shared mode (1..999)
    - property_name: maximum-cp-processing-weight
      exporter_name: cp_maximum_processing_weight
      exporter_desc: Maximum CP processing weight for the active partition in shared mode (1..999)
    - property_name: current-cp-processing-weight
      exporter_name: cp_current_processing_weight
      exporter_desc: Current CP processing weight for the active partition in shared mode (1..999)
    - properties_expression: "properties['cp-absolute-processor-capping']"
      exporter_name: cp_processor_count_is_capped
      exporter_desc: Boolean indicating whether absolute capping is enabled for CP processors (0=false, 1=true)
    - properties_expression: "properties['cp-absolute-processor-capping-value'] if properties['cp-absolute-processor-capping'] else 0"
      exporter_name: cp_processor_count_cap
      exporter_desc: Maximum number of CP processors that can be used if absolute capping is enabled, else 0
    - property_name: cp-processing-weight-capped
      exporter_name: cp_initial_processing_weight_is_capped
      exporter_desc: Boolean indicating whether the initial CP processing weight is capped (0=false, 1=true)
    - property_name: ifl-processors
      exporter_name: ifl_processor_count
      exporter_desc: Number of IFL processors allocated to the active partition
    - property_name: initial-ifl-processing-weight
      exporter_name: ifl_initial_processing_weight
      exporter_desc: Initial IFL processing weight for the active partition in shared mode (1..999)
    - property_name: minimum-ifl-processing-weight
      exporter_name: ifl_minimum_processing_weight
      exporter_desc: Minimum IFL processing weight for the active partition in shared mode (1..999)
    - property_name: maximum-ifl-processing-weight
      exporter_name: ifl_maximum_processing_weight
      exporter_desc: Maximum IFL processing weight for the active partition in shared mode (1..999)
    - property_name: current-ifl-processing-weight
      exporter_name: ifl_current_processing_weight
      exporter_desc: Current IFL processing weight for the active partition in shared mode (1..999)
    - properties_expression: "properties['ifl-absolute-processor-capping']"
      exporter_name: ifl_processor_count_is_capped
      exporter_desc: Boolean indicating whether absolute capping is enabled for IFL processors (0=false, 1=true)
    - properties_expression: "properties['ifl-absolute-processor-capping-value'] if properties['ifl-absolute-processor-capping'] else 0"
      exporter_name: ifl_processor_count_cap
      exporter_desc: Maximum number of IFL processors that can be used if absolute capping is enabled, else 0
    - property_name: ifl-processing-weight-capped
      exporter_name: ifl_initial_processing_weight_is_capped
      exporter_desc: Boolean indicating whether the initial IFL processing weight is capped (0=false, 1=true)
    - property_name: initial-memory
      exporter_name: initial_memory_mib
      exporter_desc: Initial amount of memory allocated to the partition when it becomes active, in MiB
    - property_name: reserved-memory
      exporter_name: reserved_memory_mib
      exporter_desc: Amount of reserved memory (maximum memory minus initial memory), in MiB
    - property_name: maximum-memory
      exporter_name: maximum_memory_mib
      exporter_desc: Maximum amount of memory to which the operating system running in the partition can increase the memory allocation, in MiB
    - properties_expression: "{'active': 0, 'degraded': 1, 'paused': 10, 'stopped': 11, 'starting': 12, 'stopping': 13, 'reservation-error': 20, 'terminated': 21, 'communications-not-active': 22, 'status-check': 23}.get(properties.status, 99)"
      exporter_name: status_int
      exporter_desc: "Partition status as integer (0=active, 1=degraded, 10=paused, 11=stopped, 12=starting, 13=stopping, 20=reservation-error, 21=terminated, 22=communications-not-active, 23=status-check, 99=unsupported value)"
    - property_name: has-unacceptable-status
      exporter_name: has_unacceptable_status
      exporter_desc: Boolean indicating whether the partition has an unacceptable status (0=false, 1=true)

  # Available for CPCs in any mode

  zcpc-environmentals-and-power:
    temperature-celsius:
      percent: false
      exporter_name: null  # Ignored (duplicate of ambient_temperature_celsius)
      exporter_desc: null
    humidity:
      percent: false
      exporter_name: humidity_percent
      exporter_desc: Relative humidity
    dew-point-celsius:
      percent: false
      exporter_name: dew_point_celsius
      exporter_desc: Dew point
    power-consumption-watts:
      percent: false
      exporter_name: null  # Ignored (duplicate of power_watts)
      exporter_desc: null
    heat-load:
      percent: false
      exporter_name: heat_load_total_btu_per_hour
      exporter_desc: Total heat load of the CPC
    heat-load-forced-air:
      percent: false
      exporter_name: heat_load_forced_air_btu_per_hour
      exporter_desc: Heat load of the CPC covered by forced-air
    heat-load-water:
      percent: false
      exporter_name: heat_load_water_btu_per_hour
      exporter_desc: Heat load of the CPC covered by water
    exhaust-temperature-celsius:
      percent: false
      exporter_name: exhaust_temperature_celsius
      exporter_desc: Exhaust temperature of the CPC

  environmental-power-status:
    # linecord-one-name:
    #   # type: info
    #   percent: false
    #   exporter_name: power_cord1_name
    #   exporter_desc: Line cord 1 identifier - "not-connected" if not available
    linecord-one-power-phase-A:
      percent: false
      exporter_name: power_cord1_phase_a_watts
      exporter_desc: Power in Phase A of line cord 1 - 0 if not available
    linecord-one-power-phase-B:
      percent: false
      exporter_name: power_cord1_phase_b_watts
      exporter_desc: Power in Phase B of line cord 1 - 0 if not available
    linecord-one-power-phase-C:
      percent: false
      exporter_name: power_cord1_phase_c_watts
      exporter_desc: Power in Phase C of line cord 1 - 0 if not available
    # linecord-two-name:
    #   # type: info
    #   percent: false
    #   exporter_name: power_cord2_name
    #   exporter_desc: Line cord 2 identifier - "not-connected" if not available
    linecord-two-power-phase-A:
      percent: false
      exporter_name: power_cord2_phase_a_watts
      exporter_desc: Power in Phase A of line cord 2 - 0 if not available
    linecord-two-power-phase-B:
      percent: false
      exporter_name: power_cord2_phase_b_watts
      exporter_desc: Power in Phase B of line cord 2 - 0 if not available
    linecord-two-power-phase-C:
      percent: false
      exporter_name: power_cord2_phase_c_watts
      exporter_desc: Power in Phase C of line cord 2 - 0 if not available
    # linecord-three-name:
    #   # type: info
    #   percent: false
    #   exporter_name: power_cord3_name
    #   exporter_desc: Line cord 3 identifier - "not-connected" if not available
    linecord-three-power-phase-A:
      percent: false
      exporter_name: power_cord3_phase_a_watts
      exporter_desc: Power in Phase A of line cord 3 - 0 if not available
    linecord-three-power-phase-B:
      percent: false
      exporter_name: power_cord3_phase_b_watts
      exporter_desc: Power in Phase B of line cord 3 - 0 if not available
    linecord-three-power-phase-C:
      percent: false
      exporter_name: power_cord3_phase_c_watts
      exporter_desc: Power in Phase C of line cord 3 - 0 if not available
    # linecord-four-name:
    #   # type: info
    #   percent: false
    #   exporter_name: power_cord4_name
    #   exporter_desc: Line cord 4 identifier - "not-connected" if not available
    linecord-four-power-phase-A:
      percent: false
      exporter_name: power_cord4_phase_a_watts
      exporter_desc: Power in Phase A of line cord 4 - 0 if not available
    linecord-four-power-phase-B:
      percent: false
      exporter_name: power_cord4_phase_b_watts
      exporter_desc: Power in Phase B of line cord 4 - 0 if not available
    linecord-four-power-phase-C:
      percent: false
      exporter_name: power_cord4_phase_c_watts
      exporter_desc: Power in Phase C of line cord 4 - 0 if not available
    # linecord-five-name:
    #   # type: info
    #   percent: false
    #   exporter_name: power_cord5_name
    #   exporter_desc: Line cord 5 identifier - "not-connected" if not available
    linecord-five-power-phase-A:
      percent: false
      exporter_name: power_cord5_phase_a_watts
      exporter_desc: Power in Phase A of line cord 5 - 0 if not available
    linecord-five-power-phase-B:
      percent: false
      exporter_name: power_cord5_phase_b_watts
      exporter_desc: Power in Phase B of line cord 5 - 0 if not available
    linecord-five-power-phase-C:
      percent: false
      exporter_name: power_cord5_phase_c_watts
      exporter_desc: Power in Phase C of line cord 5 - 0 if not available
    # linecord-six-name:
    #   # type: info
    #   percent: false
    #   exporter_name: power_cord6_name
    #   exporter_desc: Line cord 6 identifier - "not-connected" if not available
    linecord-six-power-phase-A:
      percent: false
      exporter_name: power_cord6_phase_a_watts
      exporter_desc: Power in Phase A of line cord 6 - 0 if not available
    linecord-six-power-phase-B:
      percent: false
      exporter_name: power_cord6_phase_b_watts
      exporter_desc: Power in Phase B of line cord 6 - 0 if not available
    linecord-six-power-phase-C:
      percent: false
      exporter_name: power_cord6_phase_c_watts
      exporter_desc: Power in Phase C of line cord 6 - 0 if not available
    # linecord-seven-name:
    #   # type: info
    #   percent: false
    #   exporter_name: power_cord7_name
    #   exporter_desc: Line cord 7 identifier - "not-connected" if not available
    linecord-seven-power-phase-A:
      percent: false
      exporter_name: power_cord7_phase_a_watts
      exporter_desc: Power in Phase A of line cord 7 - 0 if not available
    linecord-seven-power-phase-B:
      percent: false
      exporter_name: power_cord7_phase_b_watts
      exporter_desc: Power in Phase B of line cord 7 - 0 if not available
    linecord-seven-power-phase-C:
      percent: false
      exporter_name: power_cord7_phase_c_watts
      exporter_desc: Power in Phase C of line cord 7 - 0 if not available
    # linecord-eight-name:
    #   # type: info
    #   percent: false
    #   exporter_name: power_cord8_name
    #   exporter_desc: Line cord 8 identifier - "not-connected" if not available
    linecord-eight-power-phase-A:
      percent: false
      exporter_name: power_cord8_phase_a_watts
      exporter_desc: Power in Phase A of line cord 8 - 0 if not available
    linecord-eight-power-phase-B:
      percent: false
      exporter_name: power_cord8_phase_b_watts
      exporter_desc: Power in Phase B of line cord 8 - 0 if not available
    linecord-eight-power-phase-C:
      percent: false
      exporter_name: power_cord8_phase_c_watts
      exporter_desc: Power in Phase C of line cord 8 - 0 if not available

  zcpc-processor-usage:
    processor-name:
      # type: info
      percent: false
      exporter_name: null  # Ignored (used as label)
      exporter_desc: null
    processor-type:
      # type: info
      percent: false
      exporter_name: null  # Ignored (used as label, also included in processor-name)
      exporter_desc: null
    processor-usage:
      percent: true
      exporter_name: usage_ratio
      exporter_desc: Usage ratio of the processor
    smt-usage:
      percent: false
      exporter_name: smt_mode_percent
      exporter_desc: Percentage of time the processor was in SMT mode - -1 if not supported
    thread-0-usage:
      percent: true
      exporter_name: smt_thread0_usage_ratio
      exporter_desc: Usage ratio of thread 0 of the processor when in SMT mode - -1 if not supported
    thread-1-usage:
      percent: true
      exporter_name: smt_thread1_usage_ratio
      exporter_desc: Usage ratio of thread 1 of the processor when in SMT mode - -1 if not supported

  cpc-resource:  # can be in dictionary or list format
    - property_name: processor-count-general-purpose
      exporter_name: cp_processor_count
      exporter_desc: Number of active CP processors
    - property_name: processor-count-ifl
      exporter_name: ifl_processor_count
      exporter_desc: Number of active IFL processors
    - property_name: processor-count-icf
      exporter_name: icf_processor_count
      exporter_desc: Number of active ICF processors
    - property_name: processor-count-iip
      exporter_name: iip_processor_count
      exporter_desc: Number of active zIIP processors
    - property_name: processor-count-aap
      exporter_name: aap_processor_count
      exporter_desc: Number of active zAAP processors
    - property_name: processor-count-cbp
      # since HMC/SE version 2.14.0
      exporter_name: cbp_processor_count
      exporter_desc: Number of active CBP processors
    - property_name: processor-count-service-assist
      exporter_name: sap_processor_count
      exporter_desc: Number of active SAP processors
    - property_name: processor-count-defective
      exporter_name: defective_processor_count
      exporter_desc: Number of defective processors of all processor types
    - property_name: processor-count-spare
      exporter_name: spare_processor_count
      exporter_desc: Number of spare processors of all processor types
    - property_name: storage-total-installed
      # since HMC/SE version 2.13.1
      exporter_name: total_memory_mib
      exporter_desc: Total amount of installed memory, in MiB
    - property_name: storage-hardware-system-area
      # since HMC/SE version 2.13.1
      exporter_name: hsa_memory_mib
      exporter_desc: Amount of memory reserved for the base hardware system area (HSA), in MiB
    - property_name: storage-customer
      # since HMC/SE version 2.13.1
      exporter_name: partition_memory_mib
      exporter_desc: Amount of memory for use by partitions, in MiB
    - property_name: storage-customer-central
      # since HMC/SE version 2.13.1
      exporter_name: partition_central_memory_mib
      exporter_desc: Amount of memory allocated as central storage across the active partitions, in MiB
    - property_name: storage-customer-expanded
      # since HMC/SE version 2.13.1
      exporter_name: partition_expanded_memory_mib
      exporter_desc: Amount of memory allocated as expanded storage across the active partitions, in MiB
    - property_name: storage-customer-available
      # since HMC/SE version 2.13.1
      exporter_name: available_memory_mib
      exporter_desc: Amount of memory not allocated to active partitions, in MiB
    - property_name: storage-vfm-increment-size
      # since HMC/SE version 2.14.0
      exporter_name: vfm_increment_gib
      exporter_desc: Increment size of IBM Virtual Flash Memory (VFM), in GiB
    - property_name: storage-vfm-total
      # since HMC/SE version 2.14.0
      exporter_name: total_vfm_gib
      exporter_desc: Total amount of installed IBM Virtual Flash Memory (VFM), in GiB
    - properties_expression: "{'active': 0, 'operating': 0, 'degraded': 1, 'service-required': 2, 'service': 10, 'exceptions': 11, 'not-communicating': 12, 'status-check': 13, 'not-operating': 14, 'no-powerstatus': 15}.get(properties.status, 99)"
      exporter_name: status_int
      exporter_desc: "Status as integer (0=active/operating, 1=degraded, 2=service-required, 10=service, 11=exceptions, 12=not-communicating, 13=status-check, 14=not-operating, 15=no-power, 99=unsupported value)"
    - property_name: has-unacceptable-status
      exporter_name: has_unacceptable_status
      exporter_desc: Boolean indicating whether the CPC has an unacceptable status (0=false, 1=true)
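
The properties_expression for the status_int metric above is a lookup that translates the string-valued status property into an integer suitable for Prometheus. As a minimal illustration (plain Python, not exporter code; the dictionary keys simply mirror the expression above), the same lookup can be sketched as:

```python
# Map the HMC "status" property of a CPC to the integer exported as the
# status_int metric. Keys mirror the properties_expression above; any
# unrecognized status value maps to 99.
STATUS_INT = {
    'active': 0, 'operating': 0, 'degraded': 1, 'service-required': 2,
    'service': 10, 'exceptions': 11, 'not-communicating': 12,
    'status-check': 13, 'not-operating': 14, 'no-powerstatus': 15,
}

def status_int(status):
    """Return the integer metric value for an HMC status string."""
    return STATUS_INT.get(status, 99)
```

For example, status_int('degraded') yields 1, while a status string not listed in the mapping yields the fallback value 99, which lets dashboards flag unsupported values explicitly.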

Sample output to Prometheus

The following is sample output of the exporter, as scraped by Prometheus. It was captured from a z14 system in DPM mode, with all metric groups enabled and an extra label pod=wdc04-05. The data has been reduced to show only three example partitions (but all adapters and processors):

# HELP zhmc_adapter_usage_ratio Usage ratio of the adapter
# TYPE zhmc_adapter_usage_ratio gauge
zhmc_adapter_usage_ratio{adapter="OSM1",cpc="CPCA",mzr="US-East",pod="wdc04-05"} 0.0
zhmc_adapter_usage_ratio{adapter="FCP_120_SAN1_02",cpc="CPCA",mzr="US-East",pod="wdc04-05"} 0.27
zhmc_adapter_usage_ratio{adapter="FCP_104_SAN1_01",cpc="CPCA",mzr="US-East",pod="wdc04-05"} 0.01
zhmc_adapter_usage_ratio{adapter="OSM_12C_PSCN2_08",cpc="CPCA",mzr="US-East",pod="wdc04-05"} 0.0
zhmc_adapter_usage_ratio{adapter="OSD_134_DATA_NET2_17",cpc="CPCA",mzr="US-East",pod="wdc04-05"} 0.0
zhmc_adapter_usage_ratio{adapter="EP11_13C",cpc="CPCA",mzr="US-East",pod="wdc04-05"} 0.0
zhmc_adapter_usage_ratio{adapter="FCP_121_SAN2_02",cpc="CPCA",mzr="US-East",pod="wdc04-05"} 0.01
zhmc_adapter_usage_ratio{adapter="OSD_108_MGMT_NET1_30",cpc="CPCA",mzr="US-East",pod="wdc04-05"} 0.0
zhmc_adapter_usage_ratio{adapter="OSD_130_DATA_NET1_17",cpc="CPCA",mzr="US-East",pod="wdc04-05"} 0.0
zhmc_adapter_usage_ratio{adapter="FCP1.D1",cpc="CPCA",mzr="US-East",pod="wdc04-05"} 0.3
zhmc_adapter_usage_ratio{adapter="CRYP00",cpc="CPCA",mzr="US-East",pod="wdc04-05"} 0.0
zhmc_adapter_usage_ratio{adapter="OSD_128_MGMT_NET2_30",cpc="CPCA",mzr="US-East",pod="wdc04-05"} 0.0
zhmc_adapter_usage_ratio{adapter="EP11_138",cpc="CPCA",mzr="US-East",pod="wdc04-05"} 0.0
zhmc_adapter_usage_ratio{adapter="FCP_124_SAN1_03",cpc="CPCA",mzr="US-East",pod="wdc04-05"} 0.27
zhmc_adapter_usage_ratio{adapter="FCP_101_SAN2_00",cpc="CPCA",mzr="US-East",pod="wdc04-05"} 0.27
zhmc_adapter_usage_ratio{adapter="FCP_105_SAN2_01",cpc="CPCA",mzr="US-East",pod="wdc04-05"} 0.27
zhmc_adapter_usage_ratio{adapter="EP11_11C",cpc="CPCA",mzr="US-East",pod="wdc04-05"} 0.0
zhmc_adapter_usage_ratio{adapter="FCP_125_SAN2_03",cpc="CPCA",mzr="US-East",pod="wdc04-05"} 0.3
zhmc_adapter_usage_ratio{adapter="OSD_110_DATA_NET1_15",cpc="CPCA",mzr="US-East",pod="wdc04-05"} 0.0
zhmc_adapter_usage_ratio{adapter="OSD_114_DATA_NET2_15",cpc="CPCA",mzr="US-East",pod="wdc04-05"} 0.0
# HELP zhmc_processor_usage_ratio Usage ratio of the processor
# TYPE zhmc_processor_usage_ratio gauge
zhmc_processor_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL000",type="ifl"} 0.02
zhmc_processor_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL004",type="ifl"} 0.02
zhmc_processor_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL008",type="ifl"} 0.01
zhmc_processor_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL00C",type="ifl"} 0.02
zhmc_processor_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL010",type="ifl"} 0.02
zhmc_processor_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL014",type="ifl"} 0.01
zhmc_processor_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL018",type="ifl"} 0.01
zhmc_processor_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL01C",type="ifl"} 0.01
zhmc_processor_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL020",type="ifl"} 0.02
zhmc_processor_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL024",type="ifl"} 0.55
zhmc_processor_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL028",type="ifl"} 0.54
zhmc_processor_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL02C",type="ifl"} 0.56
zhmc_processor_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL030",type="ifl"} 0.57
zhmc_processor_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL034",type="ifl"} 0.54
zhmc_processor_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL038",type="ifl"} 0.62
zhmc_processor_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL03C",type="ifl"} 0.58
zhmc_processor_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFP040",type="ifp"} 0.06
zhmc_processor_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL044",type="ifl"} 0.02
zhmc_processor_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL048",type="ifl"} 0.02
zhmc_processor_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL04C",type="ifl"} 0.02
zhmc_processor_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL050",type="ifl"} 0.02
zhmc_processor_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL054",type="ifl"} 0.02
zhmc_processor_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL058",type="ifl"} 0.02
zhmc_processor_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL05C",type="ifl"} 0.02
zhmc_processor_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL060",type="ifl"} 0.02
zhmc_processor_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL064",type="ifl"} 0.02
zhmc_processor_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL068",type="ifl"} 0.02
zhmc_processor_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL06C",type="ifl"} 0.02
zhmc_processor_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL070",type="ifl"} 0.01
zhmc_processor_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL078",type="ifl"} 0.01
zhmc_processor_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL07C",type="ifl"} 0.02
zhmc_processor_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="SAP00",type="sap"} 0.01
zhmc_processor_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="SAP01",type="sap"} 0.01
zhmc_processor_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="SAP02",type="sap"} 0.01
# HELP zhmc_processor_smt_mode_percent Percentage of time the processor was in SMT mode - -1 if not supported
# TYPE zhmc_processor_smt_mode_percent gauge
zhmc_processor_smt_mode_percent{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL000",type="ifl"} 1.0
zhmc_processor_smt_mode_percent{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL004",type="ifl"} 1.0
zhmc_processor_smt_mode_percent{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL008",type="ifl"} 1.0
zhmc_processor_smt_mode_percent{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL00C",type="ifl"} 1.0
zhmc_processor_smt_mode_percent{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL010",type="ifl"} 1.0
zhmc_processor_smt_mode_percent{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL014",type="ifl"} 1.0
zhmc_processor_smt_mode_percent{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL018",type="ifl"} 1.0
zhmc_processor_smt_mode_percent{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL01C",type="ifl"} 1.0
zhmc_processor_smt_mode_percent{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL020",type="ifl"} 1.0
zhmc_processor_smt_mode_percent{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL024",type="ifl"} 68.0
zhmc_processor_smt_mode_percent{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL028",type="ifl"} 67.0
zhmc_processor_smt_mode_percent{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL02C",type="ifl"} 69.0
zhmc_processor_smt_mode_percent{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL030",type="ifl"} 70.0
zhmc_processor_smt_mode_percent{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL034",type="ifl"} 67.0
zhmc_processor_smt_mode_percent{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL038",type="ifl"} 76.0
zhmc_processor_smt_mode_percent{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL03C",type="ifl"} 71.0
zhmc_processor_smt_mode_percent{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFP040",type="ifp"} 0.0
zhmc_processor_smt_mode_percent{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL044",type="ifl"} 3.0
zhmc_processor_smt_mode_percent{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL048",type="ifl"} 2.0
zhmc_processor_smt_mode_percent{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL04C",type="ifl"} 2.0
zhmc_processor_smt_mode_percent{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL050",type="ifl"} 1.0
zhmc_processor_smt_mode_percent{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL054",type="ifl"} 3.0
zhmc_processor_smt_mode_percent{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL058",type="ifl"} 2.0
zhmc_processor_smt_mode_percent{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL05C",type="ifl"} 2.0
zhmc_processor_smt_mode_percent{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL060",type="ifl"} 1.0
zhmc_processor_smt_mode_percent{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL064",type="ifl"} 1.0
zhmc_processor_smt_mode_percent{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL068",type="ifl"} 1.0
zhmc_processor_smt_mode_percent{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL06C",type="ifl"} 1.0
zhmc_processor_smt_mode_percent{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL070",type="ifl"} 1.0
zhmc_processor_smt_mode_percent{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL078",type="ifl"} 1.0
zhmc_processor_smt_mode_percent{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL07C",type="ifl"} 1.0
# HELP zhmc_processor_smt_thread0_usage_ratio Usage ratio of thread 0 of the processor when in SMT mode - -1 if not supported
# TYPE zhmc_processor_smt_thread0_usage_ratio gauge
zhmc_processor_smt_thread0_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL000",type="ifl"} 0.01
zhmc_processor_smt_thread0_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL004",type="ifl"} 0.01
zhmc_processor_smt_thread0_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL008",type="ifl"} 0.01
zhmc_processor_smt_thread0_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL00C",type="ifl"} 0.01
zhmc_processor_smt_thread0_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL010",type="ifl"} 0.01
zhmc_processor_smt_thread0_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL014",type="ifl"} 0.01
zhmc_processor_smt_thread0_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL018",type="ifl"} 0.01
zhmc_processor_smt_thread0_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL01C",type="ifl"} 0.01
zhmc_processor_smt_thread0_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL020",type="ifl"} 0.01
zhmc_processor_smt_thread0_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL024",type="ifl"} 0.56
zhmc_processor_smt_thread0_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL028",type="ifl"} 0.55
zhmc_processor_smt_thread0_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL02C",type="ifl"} 0.57
zhmc_processor_smt_thread0_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL030",type="ifl"} 0.58
zhmc_processor_smt_thread0_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL034",type="ifl"} 0.55
zhmc_processor_smt_thread0_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL038",type="ifl"} 0.63
zhmc_processor_smt_thread0_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL03C",type="ifl"} 0.59
zhmc_processor_smt_thread0_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFP040",type="ifp"} 0.06
zhmc_processor_smt_thread0_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL044",type="ifl"} 0.03
zhmc_processor_smt_thread0_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL048",type="ifl"} 0.02
zhmc_processor_smt_thread0_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL04C",type="ifl"} 0.02
zhmc_processor_smt_thread0_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL050",type="ifl"} 0.01
zhmc_processor_smt_thread0_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL054",type="ifl"} 0.01
zhmc_processor_smt_thread0_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL058",type="ifl"} 0.02
zhmc_processor_smt_thread0_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL05C",type="ifl"} 0.02
zhmc_processor_smt_thread0_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL060",type="ifl"} 0.01
zhmc_processor_smt_thread0_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL064",type="ifl"} 0.01
zhmc_processor_smt_thread0_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL068",type="ifl"} 0.01
zhmc_processor_smt_thread0_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL06C",type="ifl"} 0.01
zhmc_processor_smt_thread0_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL070",type="ifl"} 0.01
zhmc_processor_smt_thread0_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL078",type="ifl"} 0.01
zhmc_processor_smt_thread0_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL07C",type="ifl"} 0.01
# HELP zhmc_processor_smt_thread1_usage_ratio Usage ratio of thread 1 of the processor when in SMT mode - -1 if not supported
# TYPE zhmc_processor_smt_thread1_usage_ratio gauge
zhmc_processor_smt_thread1_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL000",type="ifl"} 0.01
zhmc_processor_smt_thread1_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL004",type="ifl"} 0.01
zhmc_processor_smt_thread1_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL008",type="ifl"} 0.01
zhmc_processor_smt_thread1_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL00C",type="ifl"} 0.01
zhmc_processor_smt_thread1_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL010",type="ifl"} 0.01
zhmc_processor_smt_thread1_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL014",type="ifl"} 0.01
zhmc_processor_smt_thread1_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL018",type="ifl"} 0.01
zhmc_processor_smt_thread1_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL01C",type="ifl"} 0.01
zhmc_processor_smt_thread1_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL020",type="ifl"} 0.01
zhmc_processor_smt_thread1_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL024",type="ifl"} 0.54
zhmc_processor_smt_thread1_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL028",type="ifl"} 0.53
zhmc_processor_smt_thread1_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL02C",type="ifl"} 0.56
zhmc_processor_smt_thread1_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL030",type="ifl"} 0.56
zhmc_processor_smt_thread1_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL034",type="ifl"} 0.53
zhmc_processor_smt_thread1_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL038",type="ifl"} 0.61
zhmc_processor_smt_thread1_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL03C",type="ifl"} 0.58
zhmc_processor_smt_thread1_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFP040",type="ifp"} 0.0
zhmc_processor_smt_thread1_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL044",type="ifl"} 0.01
zhmc_processor_smt_thread1_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL048",type="ifl"} 0.01
zhmc_processor_smt_thread1_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL04C",type="ifl"} 0.01
zhmc_processor_smt_thread1_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL050",type="ifl"} 0.01
zhmc_processor_smt_thread1_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL054",type="ifl"} 0.02
zhmc_processor_smt_thread1_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL058",type="ifl"} 0.01
zhmc_processor_smt_thread1_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL05C",type="ifl"} 0.01
zhmc_processor_smt_thread1_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL060",type="ifl"} 0.01
zhmc_processor_smt_thread1_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL064",type="ifl"} 0.01
zhmc_processor_smt_thread1_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL068",type="ifl"} 0.01
zhmc_processor_smt_thread1_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL06C",type="ifl"} 0.01
zhmc_processor_smt_thread1_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL070",type="ifl"} 0.01
zhmc_processor_smt_thread1_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL078",type="ifl"} 0.01
zhmc_processor_smt_thread1_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05",processor="IFL07C",type="ifl"} 0.01
# HELP zhmc_partition_processor_usage_ratio Usage ratio across all processors of the partition
# TYPE zhmc_partition_processor_usage_ratio gauge
zhmc_partition_processor_usage_ratio{cpc="CPCA",mzr="US-East",partition="MIKETEST",pod="wdc04-05"} 0.01
zhmc_partition_processor_usage_ratio{cpc="CPCA",mzr="US-East",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_partition_processor_usage_ratio{cpc="CPCA",mzr="US-East",partition="PENNY",pod="wdc04-05"} 0.0
# HELP zhmc_partition_network_adapter_usage_ratio Usage ratio of all network adapters of the partition
# TYPE zhmc_partition_network_adapter_usage_ratio gauge
zhmc_partition_network_adapter_usage_ratio{cpc="CPCA",mzr="US-East",partition="MIKETEST",pod="wdc04-05"} 0.0
zhmc_partition_network_adapter_usage_ratio{cpc="CPCA",mzr="US-East",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_partition_network_adapter_usage_ratio{cpc="CPCA",mzr="US-East",partition="PENNY",pod="wdc04-05"} 0.0
# HELP zhmc_partition_storage_adapter_usage_ratio Usage ratio of all storage adapters of the partition
# TYPE zhmc_partition_storage_adapter_usage_ratio gauge
zhmc_partition_storage_adapter_usage_ratio{cpc="CPCA",mzr="US-East",partition="MIKETEST",pod="wdc04-05"} 0.19
zhmc_partition_storage_adapter_usage_ratio{cpc="CPCA",mzr="US-East",partition="WILLVLAN2",pod="wdc04-05"} 0.18
zhmc_partition_storage_adapter_usage_ratio{cpc="CPCA",mzr="US-East",partition="PENNY",pod="wdc04-05"} 0.19
# HELP zhmc_partition_accelerator_adapter_usage_ratio Usage ratio of all accelerator adapters of the partition
# TYPE zhmc_partition_accelerator_adapter_usage_ratio gauge
zhmc_partition_accelerator_adapter_usage_ratio{cpc="CPCA",mzr="US-East",partition="MIKETEST",pod="wdc04-05"} 0.0
zhmc_partition_accelerator_adapter_usage_ratio{cpc="CPCA",mzr="US-East",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_partition_accelerator_adapter_usage_ratio{cpc="CPCA",mzr="US-East",partition="PENNY",pod="wdc04-05"} 0.0
# HELP zhmc_partition_crypto_adapter_usage_ratio Usage ratio of all crypto adapters of the partition
# TYPE zhmc_partition_crypto_adapter_usage_ratio gauge
zhmc_partition_crypto_adapter_usage_ratio{cpc="CPCA",mzr="US-East",partition="MIKETEST",pod="wdc04-05"} 0.0
zhmc_partition_crypto_adapter_usage_ratio{cpc="CPCA",mzr="US-East",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_partition_crypto_adapter_usage_ratio{cpc="CPCA",mzr="US-East",partition="PENNY",pod="wdc04-05"} 0.0
# HELP zhmc_cpc_processor_usage_ratio Usage ratio across all processors of the CPC
# TYPE zhmc_cpc_processor_usage_ratio gauge
zhmc_cpc_processor_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05"} 0.14
# HELP zhmc_cpc_network_adapter_usage_ratio Usage ratio across all network adapters of the CPC
# TYPE zhmc_cpc_network_adapter_usage_ratio gauge
zhmc_cpc_network_adapter_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05"} 0.0
# HELP zhmc_cpc_storage_adapter_usage_ratio Usage ratio across all storage adapters of the CPC
# TYPE zhmc_cpc_storage_adapter_usage_ratio gauge
zhmc_cpc_storage_adapter_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05"} 0.21
# HELP zhmc_cpc_accelerator_adapter_usage_ratio Usage ratio across all accelerator adapters of the CPC
# TYPE zhmc_cpc_accelerator_adapter_usage_ratio gauge
zhmc_cpc_accelerator_adapter_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05"} 0.0
# HELP zhmc_cpc_crypto_adapter_usage_ratio Usage ratio across all crypto adapters of the CPC
# TYPE zhmc_cpc_crypto_adapter_usage_ratio gauge
zhmc_cpc_crypto_adapter_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05"} 0.0
# HELP zhmc_cpc_power_watts Power consumption of the CPC
# TYPE zhmc_cpc_power_watts gauge
zhmc_cpc_power_watts{cpc="CPCA",mzr="US-East",pod="wdc04-05"} 2279.0
# HELP zhmc_cpc_ambient_temperature_celsius Ambient temperature of the CPC
# TYPE zhmc_cpc_ambient_temperature_celsius gauge
zhmc_cpc_ambient_temperature_celsius{cpc="CPCA",mzr="US-East",pod="wdc04-05"} 25.2
# HELP zhmc_cpc_ifl_shared_processor_usage_ratio Usage ratio across all shared IFL processors of the CPC
# TYPE zhmc_cpc_ifl_shared_processor_usage_ratio gauge
zhmc_cpc_ifl_shared_processor_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05"} 0.14
# HELP zhmc_cpc_ifl_processor_usage_ratio Usage ratio across all IFL processors of the CPC
# TYPE zhmc_cpc_ifl_processor_usage_ratio gauge
zhmc_cpc_ifl_processor_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05"} 0.14
# HELP zhmc_cpc_shared_processor_usage_ratio Usage ratio across all shared processors of the CPC
# TYPE zhmc_cpc_shared_processor_usage_ratio gauge
zhmc_cpc_shared_processor_usage_ratio{cpc="CPCA",mzr="US-East",pod="wdc04-05"} 0.14
# HELP zhmc_cpc_humidity_percent Relative humidity
# TYPE zhmc_cpc_humidity_percent gauge
zhmc_cpc_humidity_percent{cpc="CPCA",mzr="US-East",pod="wdc04-05"} 39.0
# HELP zhmc_cpc_dew_point_celsius Dew point
# TYPE zhmc_cpc_dew_point_celsius gauge
zhmc_cpc_dew_point_celsius{cpc="CPCA",mzr="US-East",pod="wdc04-05"} 10.2
# HELP zhmc_cpc_heat_load_total_btu_per_hour Total heat load of the CPC
# TYPE zhmc_cpc_heat_load_total_btu_per_hour gauge
zhmc_cpc_heat_load_total_btu_per_hour{cpc="CPCA",mzr="US-East",pod="wdc04-05"} 7781.0
# HELP zhmc_cpc_heat_load_forced_air_btu_per_hour Heat load of the CPC covered by forced-air
# TYPE zhmc_cpc_heat_load_forced_air_btu_per_hour gauge
zhmc_cpc_heat_load_forced_air_btu_per_hour{cpc="CPCA",mzr="US-East",pod="wdc04-05"} 7781.0
# HELP zhmc_cpc_heat_load_water_btu_per_hour Heat load of the CPC covered by water
# TYPE zhmc_cpc_heat_load_water_btu_per_hour gauge
zhmc_cpc_heat_load_water_btu_per_hour{cpc="CPCA",mzr="US-East",pod="wdc04-05"} 0.0
# HELP zhmc_cpc_exhaust_temperature_celsius Exhaust temperature of the CPC
# TYPE zhmc_cpc_exhaust_temperature_celsius gauge
zhmc_cpc_exhaust_temperature_celsius{cpc="CPCA",mzr="US-East",pod="wdc04-05"} 32.0
# HELP zhmc_nic_bytes_sent_count Number of Bytes in unicast packets that were sent
# TYPE zhmc_nic_bytes_sent_count gauge
zhmc_nic_bytes_sent_count{cpc="CPCA",mzr="US-East",nic="HMVST40001",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_bytes_sent_count{cpc="CPCA",mzr="US-East",nic="HAMGMT0_ssc_mgmt",partition="PENNY",pod="wdc04-05"} 6.0
zhmc_nic_bytes_sent_count{cpc="CPCA",mzr="US-East",nic="HMVST40003",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_bytes_sent_count{cpc="CPCA",mzr="US-East",nic="HAMGMT0_ssc_mgmt",partition="WILLVLAN2",pod="wdc04-05"} 28.0
zhmc_nic_bytes_sent_count{cpc="CPCA",mzr="US-East",nic="HMGR1",partition="PENNY",pod="wdc04-05"} 0.0
zhmc_nic_bytes_sent_count{cpc="CPCA",mzr="US-East",nic="IMGMT0_ssc_mgmt",partition="MIKETEST",pod="wdc04-05"} 0.0
zhmc_nic_bytes_sent_count{cpc="CPCA",mzr="US-East",nic="HMVST40002",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_bytes_sent_count{cpc="CPCA",mzr="US-East",nic="HMVST40000",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_bytes_sent_count{cpc="CPCA",mzr="US-East",nic="HMGR0",partition="PENNY",pod="wdc04-05"} 0.0
# HELP zhmc_nic_bytes_received_count Number of Bytes in unicast packets that were received
# TYPE zhmc_nic_bytes_received_count gauge
zhmc_nic_bytes_received_count{cpc="CPCA",mzr="US-East",nic="HMVST40001",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_bytes_received_count{cpc="CPCA",mzr="US-East",nic="HAMGMT0_ssc_mgmt",partition="PENNY",pod="wdc04-05"} 2.0
zhmc_nic_bytes_received_count{cpc="CPCA",mzr="US-East",nic="HMVST40003",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_bytes_received_count{cpc="CPCA",mzr="US-East",nic="HAMGMT0_ssc_mgmt",partition="WILLVLAN2",pod="wdc04-05"} 15.0
zhmc_nic_bytes_received_count{cpc="CPCA",mzr="US-East",nic="HMGR1",partition="PENNY",pod="wdc04-05"} 0.0
zhmc_nic_bytes_received_count{cpc="CPCA",mzr="US-East",nic="IMGMT0_ssc_mgmt",partition="MIKETEST",pod="wdc04-05"} 0.0
zhmc_nic_bytes_received_count{cpc="CPCA",mzr="US-East",nic="HMVST40002",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_bytes_received_count{cpc="CPCA",mzr="US-East",nic="HMVST40000",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_bytes_received_count{cpc="CPCA",mzr="US-East",nic="HMGR0",partition="PENNY",pod="wdc04-05"} 0.0
# HELP zhmc_nic_packets_sent_count Number of unicast packets that were sent
# TYPE zhmc_nic_packets_sent_count gauge
zhmc_nic_packets_sent_count{cpc="CPCA",mzr="US-East",nic="HMVST40001",partition="WILLVLAN2",pod="wdc04-05"} 4113.0
zhmc_nic_packets_sent_count{cpc="CPCA",mzr="US-East",nic="HAMGMT0_ssc_mgmt",partition="PENNY",pod="wdc04-05"} 49.0
zhmc_nic_packets_sent_count{cpc="CPCA",mzr="US-East",nic="HMVST40003",partition="WILLVLAN2",pod="wdc04-05"} 2.0
zhmc_nic_packets_sent_count{cpc="CPCA",mzr="US-East",nic="HAMGMT0_ssc_mgmt",partition="WILLVLAN2",pod="wdc04-05"} 218.0
zhmc_nic_packets_sent_count{cpc="CPCA",mzr="US-East",nic="HMGR1",partition="PENNY",pod="wdc04-05"} 0.0
zhmc_nic_packets_sent_count{cpc="CPCA",mzr="US-East",nic="IMGMT0_ssc_mgmt",partition="MIKETEST",pod="wdc04-05"} 36076.0
zhmc_nic_packets_sent_count{cpc="CPCA",mzr="US-East",nic="HMVST40002",partition="WILLVLAN2",pod="wdc04-05"} 3.0
zhmc_nic_packets_sent_count{cpc="CPCA",mzr="US-East",nic="HMVST40000",partition="WILLVLAN2",pod="wdc04-05"} 2.0
zhmc_nic_packets_sent_count{cpc="CPCA",mzr="US-East",nic="HMGR0",partition="PENNY",pod="wdc04-05"} 4099.0
# HELP zhmc_nic_packets_received_count Number of unicast packets that were received
# TYPE zhmc_nic_packets_received_count gauge
zhmc_nic_packets_received_count{cpc="CPCA",mzr="US-East",nic="HMVST40001",partition="WILLVLAN2",pod="wdc04-05"} 4127.0
zhmc_nic_packets_received_count{cpc="CPCA",mzr="US-East",nic="HAMGMT0_ssc_mgmt",partition="PENNY",pod="wdc04-05"} 67.0
zhmc_nic_packets_received_count{cpc="CPCA",mzr="US-East",nic="HMVST40003",partition="WILLVLAN2",pod="wdc04-05"} 1121.0
zhmc_nic_packets_received_count{cpc="CPCA",mzr="US-East",nic="HAMGMT0_ssc_mgmt",partition="WILLVLAN2",pod="wdc04-05"} 342.0
zhmc_nic_packets_received_count{cpc="CPCA",mzr="US-East",nic="HMGR1",partition="PENNY",pod="wdc04-05"} 2.198806e+06
zhmc_nic_packets_received_count{cpc="CPCA",mzr="US-East",nic="IMGMT0_ssc_mgmt",partition="MIKETEST",pod="wdc04-05"} 453496.0
zhmc_nic_packets_received_count{cpc="CPCA",mzr="US-East",nic="HMVST40002",partition="WILLVLAN2",pod="wdc04-05"} 1121.0
zhmc_nic_packets_received_count{cpc="CPCA",mzr="US-East",nic="HMVST40000",partition="WILLVLAN2",pod="wdc04-05"} 1121.0
zhmc_nic_packets_received_count{cpc="CPCA",mzr="US-East",nic="HMGR0",partition="PENNY",pod="wdc04-05"} 2.20192e+06
# HELP zhmc_nic_packets_sent_dropped_count Number of sent packets that were dropped (resource shortage)
# TYPE zhmc_nic_packets_sent_dropped_count gauge
zhmc_nic_packets_sent_dropped_count{cpc="CPCA",mzr="US-East",nic="HMVST40001",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_packets_sent_dropped_count{cpc="CPCA",mzr="US-East",nic="HAMGMT0_ssc_mgmt",partition="PENNY",pod="wdc04-05"} 0.0
zhmc_nic_packets_sent_dropped_count{cpc="CPCA",mzr="US-East",nic="HMVST40003",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_packets_sent_dropped_count{cpc="CPCA",mzr="US-East",nic="HAMGMT0_ssc_mgmt",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_packets_sent_dropped_count{cpc="CPCA",mzr="US-East",nic="HMGR1",partition="PENNY",pod="wdc04-05"} 0.0
zhmc_nic_packets_sent_dropped_count{cpc="CPCA",mzr="US-East",nic="IMGMT0_ssc_mgmt",partition="MIKETEST",pod="wdc04-05"} 0.0
zhmc_nic_packets_sent_dropped_count{cpc="CPCA",mzr="US-East",nic="HMVST40002",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_packets_sent_dropped_count{cpc="CPCA",mzr="US-East",nic="HMVST40000",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_packets_sent_dropped_count{cpc="CPCA",mzr="US-East",nic="HMGR0",partition="PENNY",pod="wdc04-05"} 0.0
# HELP zhmc_nic_packets_received_dropped_count Number of received packets that were dropped (resource shortage)
# TYPE zhmc_nic_packets_received_dropped_count gauge
zhmc_nic_packets_received_dropped_count{cpc="CPCA",mzr="US-East",nic="HMVST40001",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_packets_received_dropped_count{cpc="CPCA",mzr="US-East",nic="HAMGMT0_ssc_mgmt",partition="PENNY",pod="wdc04-05"} 0.0
zhmc_nic_packets_received_dropped_count{cpc="CPCA",mzr="US-East",nic="HMVST40003",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_packets_received_dropped_count{cpc="CPCA",mzr="US-East",nic="HAMGMT0_ssc_mgmt",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_packets_received_dropped_count{cpc="CPCA",mzr="US-East",nic="HMGR1",partition="PENNY",pod="wdc04-05"} 0.0
zhmc_nic_packets_received_dropped_count{cpc="CPCA",mzr="US-East",nic="IMGMT0_ssc_mgmt",partition="MIKETEST",pod="wdc04-05"} 0.0
zhmc_nic_packets_received_dropped_count{cpc="CPCA",mzr="US-East",nic="HMVST40002",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_packets_received_dropped_count{cpc="CPCA",mzr="US-East",nic="HMVST40000",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_packets_received_dropped_count{cpc="CPCA",mzr="US-East",nic="HMGR0",partition="PENNY",pod="wdc04-05"} 0.0
# HELP zhmc_nic_packets_sent_discarded_count Number of sent packets that were discarded (malformed)
# TYPE zhmc_nic_packets_sent_discarded_count gauge
zhmc_nic_packets_sent_discarded_count{cpc="CPCA",mzr="US-East",nic="HMVST40001",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_packets_sent_discarded_count{cpc="CPCA",mzr="US-East",nic="HAMGMT0_ssc_mgmt",partition="PENNY",pod="wdc04-05"} 0.0
zhmc_nic_packets_sent_discarded_count{cpc="CPCA",mzr="US-East",nic="HMVST40003",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_packets_sent_discarded_count{cpc="CPCA",mzr="US-East",nic="HAMGMT0_ssc_mgmt",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_packets_sent_discarded_count{cpc="CPCA",mzr="US-East",nic="HMGR1",partition="PENNY",pod="wdc04-05"} 0.0
zhmc_nic_packets_sent_discarded_count{cpc="CPCA",mzr="US-East",nic="IMGMT0_ssc_mgmt",partition="MIKETEST",pod="wdc04-05"} 8.0
zhmc_nic_packets_sent_discarded_count{cpc="CPCA",mzr="US-East",nic="HMVST40002",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_packets_sent_discarded_count{cpc="CPCA",mzr="US-East",nic="HMVST40000",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_packets_sent_discarded_count{cpc="CPCA",mzr="US-East",nic="HMGR0",partition="PENNY",pod="wdc04-05"} 0.0
# HELP zhmc_nic_packets_received_discarded_count Number of received packets that were discarded (malformed)
# TYPE zhmc_nic_packets_received_discarded_count gauge
zhmc_nic_packets_received_discarded_count{cpc="CPCA",mzr="US-East",nic="HMVST40001",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_packets_received_discarded_count{cpc="CPCA",mzr="US-East",nic="HAMGMT0_ssc_mgmt",partition="PENNY",pod="wdc04-05"} 0.0
zhmc_nic_packets_received_discarded_count{cpc="CPCA",mzr="US-East",nic="HMVST40003",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_packets_received_discarded_count{cpc="CPCA",mzr="US-East",nic="HAMGMT0_ssc_mgmt",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_packets_received_discarded_count{cpc="CPCA",mzr="US-East",nic="HMGR1",partition="PENNY",pod="wdc04-05"} 0.0
zhmc_nic_packets_received_discarded_count{cpc="CPCA",mzr="US-East",nic="IMGMT0_ssc_mgmt",partition="MIKETEST",pod="wdc04-05"} 10.0
zhmc_nic_packets_received_discarded_count{cpc="CPCA",mzr="US-East",nic="HMVST40002",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_packets_received_discarded_count{cpc="CPCA",mzr="US-East",nic="HMVST40000",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_packets_received_discarded_count{cpc="CPCA",mzr="US-East",nic="HMGR0",partition="PENNY",pod="wdc04-05"} 0.0
# HELP zhmc_nic_multicast_packets_sent_count Number of multicast packets sent
# TYPE zhmc_nic_multicast_packets_sent_count gauge
zhmc_nic_multicast_packets_sent_count{cpc="CPCA",mzr="US-East",nic="HMVST40001",partition="WILLVLAN2",pod="wdc04-05"} 4.0
zhmc_nic_multicast_packets_sent_count{cpc="CPCA",mzr="US-East",nic="HAMGMT0_ssc_mgmt",partition="PENNY",pod="wdc04-05"} 2.0
zhmc_nic_multicast_packets_sent_count{cpc="CPCA",mzr="US-East",nic="HMVST40003",partition="WILLVLAN2",pod="wdc04-05"} 1.0
zhmc_nic_multicast_packets_sent_count{cpc="CPCA",mzr="US-East",nic="HAMGMT0_ssc_mgmt",partition="WILLVLAN2",pod="wdc04-05"} 2.0
zhmc_nic_multicast_packets_sent_count{cpc="CPCA",mzr="US-East",nic="HMGR1",partition="PENNY",pod="wdc04-05"} 0.0
zhmc_nic_multicast_packets_sent_count{cpc="CPCA",mzr="US-East",nic="IMGMT0_ssc_mgmt",partition="MIKETEST",pod="wdc04-05"} 2.0
zhmc_nic_multicast_packets_sent_count{cpc="CPCA",mzr="US-East",nic="HMVST40002",partition="WILLVLAN2",pod="wdc04-05"} 2.0
zhmc_nic_multicast_packets_sent_count{cpc="CPCA",mzr="US-East",nic="HMVST40000",partition="WILLVLAN2",pod="wdc04-05"} 1.0
zhmc_nic_multicast_packets_sent_count{cpc="CPCA",mzr="US-East",nic="HMGR0",partition="PENNY",pod="wdc04-05"} 2.0
# HELP zhmc_nic_multicast_packets_received_count Number of multicast packets received
# TYPE zhmc_nic_multicast_packets_received_count gauge
zhmc_nic_multicast_packets_received_count{cpc="CPCA",mzr="US-East",nic="HMVST40001",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_multicast_packets_received_count{cpc="CPCA",mzr="US-East",nic="HAMGMT0_ssc_mgmt",partition="PENNY",pod="wdc04-05"} 0.0
zhmc_nic_multicast_packets_received_count{cpc="CPCA",mzr="US-East",nic="HMVST40003",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_multicast_packets_received_count{cpc="CPCA",mzr="US-East",nic="HAMGMT0_ssc_mgmt",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_multicast_packets_received_count{cpc="CPCA",mzr="US-East",nic="HMGR1",partition="PENNY",pod="wdc04-05"} 0.0
zhmc_nic_multicast_packets_received_count{cpc="CPCA",mzr="US-East",nic="IMGMT0_ssc_mgmt",partition="MIKETEST",pod="wdc04-05"} 6.0
zhmc_nic_multicast_packets_received_count{cpc="CPCA",mzr="US-East",nic="HMVST40002",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_multicast_packets_received_count{cpc="CPCA",mzr="US-East",nic="HMVST40000",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_multicast_packets_received_count{cpc="CPCA",mzr="US-East",nic="HMGR0",partition="PENNY",pod="wdc04-05"} 0.0
# HELP zhmc_nic_broadcast_packets_sent_count Number of broadcast packets sent
# TYPE zhmc_nic_broadcast_packets_sent_count gauge
zhmc_nic_broadcast_packets_sent_count{cpc="CPCA",mzr="US-East",nic="HMVST40001",partition="WILLVLAN2",pod="wdc04-05"} 1.0
zhmc_nic_broadcast_packets_sent_count{cpc="CPCA",mzr="US-East",nic="HAMGMT0_ssc_mgmt",partition="PENNY",pod="wdc04-05"} 0.0
zhmc_nic_broadcast_packets_sent_count{cpc="CPCA",mzr="US-East",nic="HMVST40003",partition="WILLVLAN2",pod="wdc04-05"} 1.0
zhmc_nic_broadcast_packets_sent_count{cpc="CPCA",mzr="US-East",nic="HAMGMT0_ssc_mgmt",partition="WILLVLAN2",pod="wdc04-05"} 1.0
zhmc_nic_broadcast_packets_sent_count{cpc="CPCA",mzr="US-East",nic="HMGR1",partition="PENNY",pod="wdc04-05"} 0.0
zhmc_nic_broadcast_packets_sent_count{cpc="CPCA",mzr="US-East",nic="IMGMT0_ssc_mgmt",partition="MIKETEST",pod="wdc04-05"} 0.0
zhmc_nic_broadcast_packets_sent_count{cpc="CPCA",mzr="US-East",nic="HMVST40002",partition="WILLVLAN2",pod="wdc04-05"} 1.0
zhmc_nic_broadcast_packets_sent_count{cpc="CPCA",mzr="US-East",nic="HMVST40000",partition="WILLVLAN2",pod="wdc04-05"} 1.0
zhmc_nic_broadcast_packets_sent_count{cpc="CPCA",mzr="US-East",nic="HMGR0",partition="PENNY",pod="wdc04-05"} 0.0
# HELP zhmc_nic_broadcast_packets_received_count Number of broadcast packets received
# TYPE zhmc_nic_broadcast_packets_received_count gauge
zhmc_nic_broadcast_packets_received_count{cpc="CPCA",mzr="US-East",nic="HMVST40001",partition="WILLVLAN2",pod="wdc04-05"} 1117.0
zhmc_nic_broadcast_packets_received_count{cpc="CPCA",mzr="US-East",nic="HAMGMT0_ssc_mgmt",partition="PENNY",pod="wdc04-05"} 31.0
zhmc_nic_broadcast_packets_received_count{cpc="CPCA",mzr="US-East",nic="HMVST40003",partition="WILLVLAN2",pod="wdc04-05"} 1121.0
zhmc_nic_broadcast_packets_received_count{cpc="CPCA",mzr="US-East",nic="HAMGMT0_ssc_mgmt",partition="WILLVLAN2",pod="wdc04-05"} 161.0
zhmc_nic_broadcast_packets_received_count{cpc="CPCA",mzr="US-East",nic="HMGR1",partition="PENNY",pod="wdc04-05"} 2.198864e+06
zhmc_nic_broadcast_packets_received_count{cpc="CPCA",mzr="US-East",nic="IMGMT0_ssc_mgmt",partition="MIKETEST",pod="wdc04-05"} 452826.0
zhmc_nic_broadcast_packets_received_count{cpc="CPCA",mzr="US-East",nic="HMVST40002",partition="WILLVLAN2",pod="wdc04-05"} 1121.0
zhmc_nic_broadcast_packets_received_count{cpc="CPCA",mzr="US-East",nic="HMVST40000",partition="WILLVLAN2",pod="wdc04-05"} 1121.0
zhmc_nic_broadcast_packets_received_count{cpc="CPCA",mzr="US-East",nic="HMGR0",partition="PENNY",pod="wdc04-05"} 2.19889e+06
# HELP zhmc_nic_data_sent_bytes Amount of data sent over the collection interval
# TYPE zhmc_nic_data_sent_bytes gauge
zhmc_nic_data_sent_bytes{cpc="CPCA",mzr="US-East",nic="HMVST40001",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_data_sent_bytes{cpc="CPCA",mzr="US-East",nic="HAMGMT0_ssc_mgmt",partition="PENNY",pod="wdc04-05"} 0.0
zhmc_nic_data_sent_bytes{cpc="CPCA",mzr="US-East",nic="HMVST40003",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_data_sent_bytes{cpc="CPCA",mzr="US-East",nic="HAMGMT0_ssc_mgmt",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_data_sent_bytes{cpc="CPCA",mzr="US-East",nic="HMGR1",partition="PENNY",pod="wdc04-05"} 0.0
zhmc_nic_data_sent_bytes{cpc="CPCA",mzr="US-East",nic="IMGMT0_ssc_mgmt",partition="MIKETEST",pod="wdc04-05"} 0.0
zhmc_nic_data_sent_bytes{cpc="CPCA",mzr="US-East",nic="HMVST40002",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_data_sent_bytes{cpc="CPCA",mzr="US-East",nic="HMVST40000",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_data_sent_bytes{cpc="CPCA",mzr="US-East",nic="HMGR0",partition="PENNY",pod="wdc04-05"} 0.0
# HELP zhmc_nic_data_received_bytes Amount of data received over the collection interval
# TYPE zhmc_nic_data_received_bytes gauge
zhmc_nic_data_received_bytes{cpc="CPCA",mzr="US-East",nic="HMVST40001",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_data_received_bytes{cpc="CPCA",mzr="US-East",nic="HAMGMT0_ssc_mgmt",partition="PENNY",pod="wdc04-05"} 0.0
zhmc_nic_data_received_bytes{cpc="CPCA",mzr="US-East",nic="HMVST40003",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_data_received_bytes{cpc="CPCA",mzr="US-East",nic="HAMGMT0_ssc_mgmt",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_data_received_bytes{cpc="CPCA",mzr="US-East",nic="HMGR1",partition="PENNY",pod="wdc04-05"} 0.0
zhmc_nic_data_received_bytes{cpc="CPCA",mzr="US-East",nic="IMGMT0_ssc_mgmt",partition="MIKETEST",pod="wdc04-05"} 0.0
zhmc_nic_data_received_bytes{cpc="CPCA",mzr="US-East",nic="HMVST40002",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_data_received_bytes{cpc="CPCA",mzr="US-East",nic="HMVST40000",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_data_received_bytes{cpc="CPCA",mzr="US-East",nic="HMGR0",partition="PENNY",pod="wdc04-05"} 0.0
# HELP zhmc_nic_data_rate_sent_bytes_per_second Data rate sent over the collection interval
# TYPE zhmc_nic_data_rate_sent_bytes_per_second gauge
zhmc_nic_data_rate_sent_bytes_per_second{cpc="CPCA",mzr="US-East",nic="HMVST40001",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_data_rate_sent_bytes_per_second{cpc="CPCA",mzr="US-East",nic="HAMGMT0_ssc_mgmt",partition="PENNY",pod="wdc04-05"} 0.0
zhmc_nic_data_rate_sent_bytes_per_second{cpc="CPCA",mzr="US-East",nic="HMVST40003",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_data_rate_sent_bytes_per_second{cpc="CPCA",mzr="US-East",nic="HAMGMT0_ssc_mgmt",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_data_rate_sent_bytes_per_second{cpc="CPCA",mzr="US-East",nic="HMGR1",partition="PENNY",pod="wdc04-05"} 0.0
zhmc_nic_data_rate_sent_bytes_per_second{cpc="CPCA",mzr="US-East",nic="IMGMT0_ssc_mgmt",partition="MIKETEST",pod="wdc04-05"} 0.0
zhmc_nic_data_rate_sent_bytes_per_second{cpc="CPCA",mzr="US-East",nic="HMVST40002",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_data_rate_sent_bytes_per_second{cpc="CPCA",mzr="US-East",nic="HMVST40000",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_data_rate_sent_bytes_per_second{cpc="CPCA",mzr="US-East",nic="HMGR0",partition="PENNY",pod="wdc04-05"} 0.0
# HELP zhmc_nic_data_rate_received_bytes_per_second Data rate received over the collection interval
# TYPE zhmc_nic_data_rate_received_bytes_per_second gauge
zhmc_nic_data_rate_received_bytes_per_second{cpc="CPCA",mzr="US-East",nic="HMVST40001",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_data_rate_received_bytes_per_second{cpc="CPCA",mzr="US-East",nic="HAMGMT0_ssc_mgmt",partition="PENNY",pod="wdc04-05"} 0.0
zhmc_nic_data_rate_received_bytes_per_second{cpc="CPCA",mzr="US-East",nic="HMVST40003",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_data_rate_received_bytes_per_second{cpc="CPCA",mzr="US-East",nic="HAMGMT0_ssc_mgmt",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_data_rate_received_bytes_per_second{cpc="CPCA",mzr="US-East",nic="HMGR1",partition="PENNY",pod="wdc04-05"} 0.0
zhmc_nic_data_rate_received_bytes_per_second{cpc="CPCA",mzr="US-East",nic="IMGMT0_ssc_mgmt",partition="MIKETEST",pod="wdc04-05"} 0.0
zhmc_nic_data_rate_received_bytes_per_second{cpc="CPCA",mzr="US-East",nic="HMVST40002",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_data_rate_received_bytes_per_second{cpc="CPCA",mzr="US-East",nic="HMVST40000",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_nic_data_rate_received_bytes_per_second{cpc="CPCA",mzr="US-East",nic="HMGR0",partition="PENNY",pod="wdc04-05"} 0.0
# HELP zhmc_port_bytes_sent_count Number of Bytes in unicast packets that were sent
# TYPE zhmc_port_bytes_sent_count gauge
zhmc_port_bytes_sent_count{adapter="OSD_134_DATA_NET2_17",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 1.507262554781e+012
zhmc_port_bytes_sent_count{adapter="OSD_108_MGMT_NET1_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 2.43835296642e+011
zhmc_port_bytes_sent_count{adapter="OSD_108_MGMT_NET1_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="1"} 0.0
zhmc_port_bytes_sent_count{adapter="OSD_130_DATA_NET1_17",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 6.4139822089e+010
zhmc_port_bytes_sent_count{adapter="OSD_128_MGMT_NET2_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 2.43246035861e+011
zhmc_port_bytes_sent_count{adapter="OSD_128_MGMT_NET2_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="1"} 0.0
zhmc_port_bytes_sent_count{adapter="OSD_110_DATA_NET1_15",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 6.07818967299e+011
zhmc_port_bytes_sent_count{adapter="OSD_114_DATA_NET2_15",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 1.1881573114e+010
# HELP zhmc_port_bytes_received_count Number of Bytes in unicast packets that were received
# TYPE zhmc_port_bytes_received_count gauge
zhmc_port_bytes_received_count{adapter="OSD_134_DATA_NET2_17",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 1.0792682943354e+013
zhmc_port_bytes_received_count{adapter="OSD_108_MGMT_NET1_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 1.767158928309e+012
zhmc_port_bytes_received_count{adapter="OSD_108_MGMT_NET1_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="1"} 0.0
zhmc_port_bytes_received_count{adapter="OSD_130_DATA_NET1_17",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 2.53574001932e+011
zhmc_port_bytes_received_count{adapter="OSD_128_MGMT_NET2_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 2.248649421627e+012
zhmc_port_bytes_received_count{adapter="OSD_128_MGMT_NET2_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="1"} 0.0
zhmc_port_bytes_received_count{adapter="OSD_110_DATA_NET1_15",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 4.987490770863e+012
zhmc_port_bytes_received_count{adapter="OSD_114_DATA_NET2_15",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 3.39516874475e+011
# HELP zhmc_port_packets_sent_count Number of unicast packets that were sent
# TYPE zhmc_port_packets_sent_count gauge
zhmc_port_packets_sent_count{adapter="OSD_134_DATA_NET2_17",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 2.55980834e+09
zhmc_port_packets_sent_count{adapter="OSD_108_MGMT_NET1_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 1.499304654e+09
zhmc_port_packets_sent_count{adapter="OSD_108_MGMT_NET1_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="1"} 0.0
zhmc_port_packets_sent_count{adapter="OSD_130_DATA_NET1_17",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 7.8007212e+07
zhmc_port_packets_sent_count{adapter="OSD_128_MGMT_NET2_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 5.61008891e+08
zhmc_port_packets_sent_count{adapter="OSD_128_MGMT_NET2_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="1"} 0.0
zhmc_port_packets_sent_count{adapter="OSD_110_DATA_NET1_15",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 1.21867209e+09
zhmc_port_packets_sent_count{adapter="OSD_114_DATA_NET2_15",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 5.275366e+07
# HELP zhmc_port_packets_received_count Number of unicast packets that were received
# TYPE zhmc_port_packets_received_count gauge
zhmc_port_packets_received_count{adapter="OSD_134_DATA_NET2_17",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 7.703941077e+09
zhmc_port_packets_received_count{adapter="OSD_108_MGMT_NET1_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 2.576730272e+09
zhmc_port_packets_received_count{adapter="OSD_108_MGMT_NET1_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="1"} 0.0
zhmc_port_packets_received_count{adapter="OSD_130_DATA_NET1_17",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 2.75184789e+08
zhmc_port_packets_received_count{adapter="OSD_128_MGMT_NET2_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 2.017703606e+09
zhmc_port_packets_received_count{adapter="OSD_128_MGMT_NET2_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="1"} 0.0
zhmc_port_packets_received_count{adapter="OSD_110_DATA_NET1_15",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 3.681868642e+09
zhmc_port_packets_received_count{adapter="OSD_114_DATA_NET2_15",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 2.91129005e+08
# HELP zhmc_port_packets_sent_dropped_count Number of sent packets that were dropped (resource shortage)
# TYPE zhmc_port_packets_sent_dropped_count gauge
zhmc_port_packets_sent_dropped_count{adapter="OSD_134_DATA_NET2_17",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 0.0
zhmc_port_packets_sent_dropped_count{adapter="OSD_108_MGMT_NET1_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 0.0
zhmc_port_packets_sent_dropped_count{adapter="OSD_108_MGMT_NET1_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="1"} 0.0
zhmc_port_packets_sent_dropped_count{adapter="OSD_130_DATA_NET1_17",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 0.0
zhmc_port_packets_sent_dropped_count{adapter="OSD_128_MGMT_NET2_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 0.0
zhmc_port_packets_sent_dropped_count{adapter="OSD_128_MGMT_NET2_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="1"} 0.0
zhmc_port_packets_sent_dropped_count{adapter="OSD_110_DATA_NET1_15",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 0.0
zhmc_port_packets_sent_dropped_count{adapter="OSD_114_DATA_NET2_15",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 0.0
# HELP zhmc_port_packets_received_dropped_count Number of received packets that were dropped (resource shortage)
# TYPE zhmc_port_packets_received_dropped_count gauge
zhmc_port_packets_received_dropped_count{adapter="OSD_134_DATA_NET2_17",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 0.0
zhmc_port_packets_received_dropped_count{adapter="OSD_108_MGMT_NET1_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 14.0
zhmc_port_packets_received_dropped_count{adapter="OSD_108_MGMT_NET1_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="1"} 0.0
zhmc_port_packets_received_dropped_count{adapter="OSD_130_DATA_NET1_17",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 1.0
zhmc_port_packets_received_dropped_count{adapter="OSD_128_MGMT_NET2_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 52.0
zhmc_port_packets_received_dropped_count{adapter="OSD_128_MGMT_NET2_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="1"} 0.0
zhmc_port_packets_received_dropped_count{adapter="OSD_110_DATA_NET1_15",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 10.0
zhmc_port_packets_received_dropped_count{adapter="OSD_114_DATA_NET2_15",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 8.0
# HELP zhmc_port_packets_sent_discarded_count Number of sent packets that were discarded (malformed)
# TYPE zhmc_port_packets_sent_discarded_count gauge
zhmc_port_packets_sent_discarded_count{adapter="OSD_134_DATA_NET2_17",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 0.0
zhmc_port_packets_sent_discarded_count{adapter="OSD_108_MGMT_NET1_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 0.0
zhmc_port_packets_sent_discarded_count{adapter="OSD_108_MGMT_NET1_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="1"} 0.0
zhmc_port_packets_sent_discarded_count{adapter="OSD_130_DATA_NET1_17",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 0.0
zhmc_port_packets_sent_discarded_count{adapter="OSD_128_MGMT_NET2_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 0.0
zhmc_port_packets_sent_discarded_count{adapter="OSD_128_MGMT_NET2_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="1"} 0.0
zhmc_port_packets_sent_discarded_count{adapter="OSD_110_DATA_NET1_15",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 0.0
zhmc_port_packets_sent_discarded_count{adapter="OSD_114_DATA_NET2_15",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 0.0
# HELP zhmc_port_packets_received_discarded_count Number of received packets that were discarded (malformed)
# TYPE zhmc_port_packets_received_discarded_count gauge
zhmc_port_packets_received_discarded_count{adapter="OSD_134_DATA_NET2_17",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 33.0
zhmc_port_packets_received_discarded_count{adapter="OSD_108_MGMT_NET1_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 0.0
zhmc_port_packets_received_discarded_count{adapter="OSD_108_MGMT_NET1_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="1"} 0.0
zhmc_port_packets_received_discarded_count{adapter="OSD_130_DATA_NET1_17",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 264.0
zhmc_port_packets_received_discarded_count{adapter="OSD_128_MGMT_NET2_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 0.0
zhmc_port_packets_received_discarded_count{adapter="OSD_128_MGMT_NET2_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="1"} 0.0
zhmc_port_packets_received_discarded_count{adapter="OSD_110_DATA_NET1_15",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 52.0
zhmc_port_packets_received_discarded_count{adapter="OSD_114_DATA_NET2_15",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 60.0
# HELP zhmc_port_multicast_packets_sent_count Number of multicast packets sent
# TYPE zhmc_port_multicast_packets_sent_count gauge
zhmc_port_multicast_packets_sent_count{adapter="OSD_134_DATA_NET2_17",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 2.22367e+06
zhmc_port_multicast_packets_sent_count{adapter="OSD_108_MGMT_NET1_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 1.948191e+06
zhmc_port_multicast_packets_sent_count{adapter="OSD_108_MGMT_NET1_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="1"} 0.0
zhmc_port_multicast_packets_sent_count{adapter="OSD_130_DATA_NET1_17",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 1.945628e+06
zhmc_port_multicast_packets_sent_count{adapter="OSD_128_MGMT_NET2_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 1.91852e+06
zhmc_port_multicast_packets_sent_count{adapter="OSD_128_MGMT_NET2_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="1"} 0.0
zhmc_port_multicast_packets_sent_count{adapter="OSD_110_DATA_NET1_15",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 1.961788e+06
zhmc_port_multicast_packets_sent_count{adapter="OSD_114_DATA_NET2_15",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 1.960602e+06
# HELP zhmc_port_multicast_packets_received_count Number of multicast packets received
# TYPE zhmc_port_multicast_packets_received_count gauge
zhmc_port_multicast_packets_received_count{adapter="OSD_134_DATA_NET2_17",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 2.085537e+06
zhmc_port_multicast_packets_received_count{adapter="OSD_108_MGMT_NET1_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 1.3716533e+07
zhmc_port_multicast_packets_received_count{adapter="OSD_108_MGMT_NET1_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="1"} 0.0
zhmc_port_multicast_packets_received_count{adapter="OSD_130_DATA_NET1_17",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 2.622439e+06
zhmc_port_multicast_packets_received_count{adapter="OSD_128_MGMT_NET2_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 1.3878248e+07
zhmc_port_multicast_packets_received_count{adapter="OSD_128_MGMT_NET2_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="1"} 0.0
zhmc_port_multicast_packets_received_count{adapter="OSD_110_DATA_NET1_15",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 2.621445e+06
zhmc_port_multicast_packets_received_count{adapter="OSD_114_DATA_NET2_15",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 2.085871e+06
# HELP zhmc_port_broadcast_packets_sent_count Number of broadcast packets sent
# TYPE zhmc_port_broadcast_packets_sent_count gauge
zhmc_port_broadcast_packets_sent_count{adapter="OSD_134_DATA_NET2_17",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 4.280821e+07
zhmc_port_broadcast_packets_sent_count{adapter="OSD_108_MGMT_NET1_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 201366.0
zhmc_port_broadcast_packets_sent_count{adapter="OSD_108_MGMT_NET1_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="1"} 0.0
zhmc_port_broadcast_packets_sent_count{adapter="OSD_130_DATA_NET1_17",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 300179.0
zhmc_port_broadcast_packets_sent_count{adapter="OSD_128_MGMT_NET2_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 38731.0
zhmc_port_broadcast_packets_sent_count{adapter="OSD_128_MGMT_NET2_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="1"} 0.0
zhmc_port_broadcast_packets_sent_count{adapter="OSD_110_DATA_NET1_15",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 246243.0
zhmc_port_broadcast_packets_sent_count{adapter="OSD_114_DATA_NET2_15",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 28786.0
# HELP zhmc_port_broadcast_packets_received_count Number of broadcast packets received
# TYPE zhmc_port_broadcast_packets_received_count gauge
zhmc_port_broadcast_packets_received_count{adapter="OSD_134_DATA_NET2_17",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 1.3470031e+07
zhmc_port_broadcast_packets_received_count{adapter="OSD_108_MGMT_NET1_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 1.15572536e+08
zhmc_port_broadcast_packets_received_count{adapter="OSD_108_MGMT_NET1_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="1"} 0.0
zhmc_port_broadcast_packets_received_count{adapter="OSD_130_DATA_NET1_17",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 5.7123565e+07
zhmc_port_broadcast_packets_received_count{adapter="OSD_128_MGMT_NET2_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 1.13229372e+08
zhmc_port_broadcast_packets_received_count{adapter="OSD_128_MGMT_NET2_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="1"} 0.0
zhmc_port_broadcast_packets_received_count{adapter="OSD_110_DATA_NET1_15",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 5.6900344e+07
zhmc_port_broadcast_packets_received_count{adapter="OSD_114_DATA_NET2_15",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 5.5882729e+07
# HELP zhmc_port_data_sent_bytes Amount of data sent over the collection interval
# TYPE zhmc_port_data_sent_bytes gauge
zhmc_port_data_sent_bytes{adapter="OSD_134_DATA_NET2_17",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 1480.0
zhmc_port_data_sent_bytes{adapter="OSD_108_MGMT_NET1_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 190.0
zhmc_port_data_sent_bytes{adapter="OSD_108_MGMT_NET1_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="1"} 0.0
zhmc_port_data_sent_bytes{adapter="OSD_130_DATA_NET1_17",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 380.0
zhmc_port_data_sent_bytes{adapter="OSD_128_MGMT_NET2_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 14278.0
zhmc_port_data_sent_bytes{adapter="OSD_128_MGMT_NET2_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="1"} 0.0
zhmc_port_data_sent_bytes{adapter="OSD_110_DATA_NET1_15",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 710598.0
zhmc_port_data_sent_bytes{adapter="OSD_114_DATA_NET2_15",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 190.0
# HELP zhmc_port_data_received_bytes Amount of data received over the collection interval
# TYPE zhmc_port_data_received_bytes gauge
zhmc_port_data_received_bytes{adapter="OSD_134_DATA_NET2_17",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 2078.0
zhmc_port_data_received_bytes{adapter="OSD_108_MGMT_NET1_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 4471.0
zhmc_port_data_received_bytes{adapter="OSD_108_MGMT_NET1_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="1"} 0.0
zhmc_port_data_received_bytes{adapter="OSD_130_DATA_NET1_17",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 447.0
zhmc_port_data_received_bytes{adapter="OSD_128_MGMT_NET2_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 30165.0
zhmc_port_data_received_bytes{adapter="OSD_128_MGMT_NET2_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="1"} 0.0
zhmc_port_data_received_bytes{adapter="OSD_110_DATA_NET1_15",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 3.085951e+06
zhmc_port_data_received_bytes{adapter="OSD_114_DATA_NET2_15",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 830.0
# HELP zhmc_port_data_rate_sent_bytes_per_second Data rate sent over the collection interval
# TYPE zhmc_port_data_rate_sent_bytes_per_second gauge
zhmc_port_data_rate_sent_bytes_per_second{adapter="OSD_134_DATA_NET2_17",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 49.0
zhmc_port_data_rate_sent_bytes_per_second{adapter="OSD_108_MGMT_NET1_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 6.0
zhmc_port_data_rate_sent_bytes_per_second{adapter="OSD_108_MGMT_NET1_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="1"} 0.0
zhmc_port_data_rate_sent_bytes_per_second{adapter="OSD_130_DATA_NET1_17",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 12.0
zhmc_port_data_rate_sent_bytes_per_second{adapter="OSD_128_MGMT_NET2_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 475.0
zhmc_port_data_rate_sent_bytes_per_second{adapter="OSD_128_MGMT_NET2_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="1"} 0.0
zhmc_port_data_rate_sent_bytes_per_second{adapter="OSD_110_DATA_NET1_15",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 23686.0
zhmc_port_data_rate_sent_bytes_per_second{adapter="OSD_114_DATA_NET2_15",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 6.0
# HELP zhmc_port_data_rate_received_bytes_per_second Data rate received over the collection interval
# TYPE zhmc_port_data_rate_received_bytes_per_second gauge
zhmc_port_data_rate_received_bytes_per_second{adapter="OSD_134_DATA_NET2_17",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 69.0
zhmc_port_data_rate_received_bytes_per_second{adapter="OSD_108_MGMT_NET1_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 149.0
zhmc_port_data_rate_received_bytes_per_second{adapter="OSD_108_MGMT_NET1_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="1"} 0.0
zhmc_port_data_rate_received_bytes_per_second{adapter="OSD_130_DATA_NET1_17",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 14.0
zhmc_port_data_rate_received_bytes_per_second{adapter="OSD_128_MGMT_NET2_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 1005.0
zhmc_port_data_rate_received_bytes_per_second{adapter="OSD_128_MGMT_NET2_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="1"} 0.0
zhmc_port_data_rate_received_bytes_per_second{adapter="OSD_110_DATA_NET1_15",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 102865.0
zhmc_port_data_rate_received_bytes_per_second{adapter="OSD_114_DATA_NET2_15",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 27.0
# HELP zhmc_port_bandwidth_usage_ratio Bandwidth usage ratio of the port
# TYPE zhmc_port_bandwidth_usage_ratio gauge
zhmc_port_bandwidth_usage_ratio{adapter="OSD_134_DATA_NET2_17",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 0.0
zhmc_port_bandwidth_usage_ratio{adapter="OSD_108_MGMT_NET1_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 0.0
zhmc_port_bandwidth_usage_ratio{adapter="OSD_108_MGMT_NET1_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="1"} 0.0
zhmc_port_bandwidth_usage_ratio{adapter="OSD_130_DATA_NET1_17",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 0.0
zhmc_port_bandwidth_usage_ratio{adapter="OSD_128_MGMT_NET2_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 0.0
zhmc_port_bandwidth_usage_ratio{adapter="OSD_128_MGMT_NET2_30",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="1"} 0.0
zhmc_port_bandwidth_usage_ratio{adapter="OSD_110_DATA_NET1_15",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 0.0
zhmc_port_bandwidth_usage_ratio{adapter="OSD_114_DATA_NET2_15",cpc="CPCA",mzr="US-East",pod="wdc04-05",port="0"} 0.0
# HELP zhmc_partition_processor_mode_int Allocation mode for processors to the active partition as an integer (0=shared, 1=dedicated)
# TYPE zhmc_partition_processor_mode_int gauge
zhmc_partition_processor_mode_int{cpc="CPCA",mzr="US-East",partition="MIKETEST",pod="wdc04-05"} 0.0
zhmc_partition_processor_mode_int{cpc="CPCA",mzr="US-East",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_partition_processor_mode_int{cpc="CPCA",mzr="US-East",partition="PENNY",pod="wdc04-05"} 0.0
# HELP zhmc_partition_threads_per_processor_ratio Number of threads per allocated processor the operating system running in the partition is configured to use
# TYPE zhmc_partition_threads_per_processor_ratio gauge
zhmc_partition_threads_per_processor_ratio{cpc="CPCA",mzr="US-East",partition="MIKETEST",pod="wdc04-05"} 1.0
zhmc_partition_threads_per_processor_ratio{cpc="CPCA",mzr="US-East",partition="WILLVLAN2",pod="wdc04-05"} 2.0
zhmc_partition_threads_per_processor_ratio{cpc="CPCA",mzr="US-East",partition="PENNY",pod="wdc04-05"} 2.0
# HELP zhmc_partition_cp_processor_count Number of CP processors allocated to the active partition
# TYPE zhmc_partition_cp_processor_count gauge
zhmc_partition_cp_processor_count{cpc="CPCA",mzr="US-East",partition="MIKETEST",pod="wdc04-05"} 0.0
zhmc_partition_cp_processor_count{cpc="CPCA",mzr="US-East",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_partition_cp_processor_count{cpc="CPCA",mzr="US-East",partition="PENNY",pod="wdc04-05"} 0.0
# HELP zhmc_partition_cp_initial_processing_weight Initial CP processing weight for the active partition in shared mode (1..999)
# TYPE zhmc_partition_cp_initial_processing_weight gauge
zhmc_partition_cp_initial_processing_weight{cpc="CPCA",mzr="US-East",partition="MIKETEST",pod="wdc04-05"} 100.0
zhmc_partition_cp_initial_processing_weight{cpc="CPCA",mzr="US-East",partition="WILLVLAN2",pod="wdc04-05"} 100.0
zhmc_partition_cp_initial_processing_weight{cpc="CPCA",mzr="US-East",partition="PENNY",pod="wdc04-05"} 100.0
# HELP zhmc_partition_cp_minimum_processing_weight Minimum CP processing weight for the active partition in shared mode (1..999)
# TYPE zhmc_partition_cp_minimum_processing_weight gauge
zhmc_partition_cp_minimum_processing_weight{cpc="CPCA",mzr="US-East",partition="MIKETEST",pod="wdc04-05"} 1.0
zhmc_partition_cp_minimum_processing_weight{cpc="CPCA",mzr="US-East",partition="WILLVLAN2",pod="wdc04-05"} 1.0
zhmc_partition_cp_minimum_processing_weight{cpc="CPCA",mzr="US-East",partition="PENNY",pod="wdc04-05"} 1.0
# HELP zhmc_partition_cp_maximum_processing_weight Maximum CP processing weight for the active partition in shared mode (1..999)
# TYPE zhmc_partition_cp_maximum_processing_weight gauge
zhmc_partition_cp_maximum_processing_weight{cpc="CPCA",mzr="US-East",partition="MIKETEST",pod="wdc04-05"} 999.0
zhmc_partition_cp_maximum_processing_weight{cpc="CPCA",mzr="US-East",partition="WILLVLAN2",pod="wdc04-05"} 999.0
zhmc_partition_cp_maximum_processing_weight{cpc="CPCA",mzr="US-East",partition="PENNY",pod="wdc04-05"} 999.0
# HELP zhmc_partition_cp_current_processing_weight Current CP processing weight for the active partition in shared mode (1..999)
# TYPE zhmc_partition_cp_current_processing_weight gauge
zhmc_partition_cp_current_processing_weight{cpc="CPCA",mzr="US-East",partition="MIKETEST",pod="wdc04-05"} 1.0
zhmc_partition_cp_current_processing_weight{cpc="CPCA",mzr="US-East",partition="WILLVLAN2",pod="wdc04-05"} 1.0
zhmc_partition_cp_current_processing_weight{cpc="CPCA",mzr="US-East",partition="PENNY",pod="wdc04-05"} 1.0
# HELP zhmc_partition_cp_processor_count_cap Maximum number of CP processors to be used when absolute CP processor capping is enabled
# TYPE zhmc_partition_cp_processor_count_cap gauge
zhmc_partition_cp_processor_count_cap{cpc="CPCA",mzr="US-East",partition="MIKETEST",pod="wdc04-05"} 1.0
zhmc_partition_cp_processor_count_cap{cpc="CPCA",mzr="US-East",partition="WILLVLAN2",pod="wdc04-05"} 1.0
zhmc_partition_cp_processor_count_cap{cpc="CPCA",mzr="US-East",partition="PENNY",pod="wdc04-05"} 1.0
# HELP zhmc_partition_cp_initial_processing_weight_is_capped Boolean indicating whether the initial CP processing weight is capped (i.e. a limit) or not (i.e. a target).
# TYPE zhmc_partition_cp_initial_processing_weight_is_capped gauge
zhmc_partition_cp_initial_processing_weight_is_capped{cpc="CPCA",mzr="US-East",partition="MIKETEST",pod="wdc04-05"} 0.0
zhmc_partition_cp_initial_processing_weight_is_capped{cpc="CPCA",mzr="US-East",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_partition_cp_initial_processing_weight_is_capped{cpc="CPCA",mzr="US-East",partition="PENNY",pod="wdc04-05"} 0.0
# HELP zhmc_partition_ifl_processor_count Number of IFL processors allocated to the active partition
# TYPE zhmc_partition_ifl_processor_count gauge
zhmc_partition_ifl_processor_count{cpc="CPCA",mzr="US-East",partition="MIKETEST",pod="wdc04-05"} 2.0
zhmc_partition_ifl_processor_count{cpc="CPCA",mzr="US-East",partition="WILLVLAN2",pod="wdc04-05"} 2.0
zhmc_partition_ifl_processor_count{cpc="CPCA",mzr="US-East",partition="PENNY",pod="wdc04-05"} 4.0
# HELP zhmc_partition_ifl_initial_processing_weight Initial IFL processing weight for the active partition in shared mode (1..999)
# TYPE zhmc_partition_ifl_initial_processing_weight gauge
zhmc_partition_ifl_initial_processing_weight{cpc="CPCA",mzr="US-East",partition="MIKETEST",pod="wdc04-05"} 20.0
zhmc_partition_ifl_initial_processing_weight{cpc="CPCA",mzr="US-East",partition="WILLVLAN2",pod="wdc04-05"} 20.0
zhmc_partition_ifl_initial_processing_weight{cpc="CPCA",mzr="US-East",partition="PENNY",pod="wdc04-05"} 80.0
# HELP zhmc_partition_ifl_minimum_processing_weight Minimum IFL processing weight for the active partition in shared mode (1..999)
# TYPE zhmc_partition_ifl_minimum_processing_weight gauge
zhmc_partition_ifl_minimum_processing_weight{cpc="CPCA",mzr="US-East",partition="MIKETEST",pod="wdc04-05"} 1.0
zhmc_partition_ifl_minimum_processing_weight{cpc="CPCA",mzr="US-East",partition="WILLVLAN2",pod="wdc04-05"} 1.0
zhmc_partition_ifl_minimum_processing_weight{cpc="CPCA",mzr="US-East",partition="PENNY",pod="wdc04-05"} 1.0
# HELP zhmc_partition_ifl_maximum_processing_weight Maximum IFL processing weight for the active partition in shared mode (1..999)
# TYPE zhmc_partition_ifl_maximum_processing_weight gauge
zhmc_partition_ifl_maximum_processing_weight{cpc="CPCA",mzr="US-East",partition="MIKETEST",pod="wdc04-05"} 999.0
zhmc_partition_ifl_maximum_processing_weight{cpc="CPCA",mzr="US-East",partition="WILLVLAN2",pod="wdc04-05"} 999.0
zhmc_partition_ifl_maximum_processing_weight{cpc="CPCA",mzr="US-East",partition="PENNY",pod="wdc04-05"} 999.0
# HELP zhmc_partition_ifl_current_processing_weight Current IFL processing weight for the active partition in shared mode (1..999)
# TYPE zhmc_partition_ifl_current_processing_weight gauge
zhmc_partition_ifl_current_processing_weight{cpc="CPCA",mzr="US-East",partition="MIKETEST",pod="wdc04-05"} 1.0
zhmc_partition_ifl_current_processing_weight{cpc="CPCA",mzr="US-East",partition="WILLVLAN2",pod="wdc04-05"} 1.0
zhmc_partition_ifl_current_processing_weight{cpc="CPCA",mzr="US-East",partition="PENNY",pod="wdc04-05"} 1.0
# HELP zhmc_partition_ifl_processor_count_cap Maximum number of IFL processors to be used when absolute IFL processor capping is enabled
# TYPE zhmc_partition_ifl_processor_count_cap gauge
zhmc_partition_ifl_processor_count_cap{cpc="CPCA",mzr="US-East",partition="MIKETEST",pod="wdc04-05"} 1.0
zhmc_partition_ifl_processor_count_cap{cpc="CPCA",mzr="US-East",partition="WILLVLAN2",pod="wdc04-05"} 1.0
zhmc_partition_ifl_processor_count_cap{cpc="CPCA",mzr="US-East",partition="PENNY",pod="wdc04-05"} 1.0
# HELP zhmc_partition_ifl_initial_processing_weight_is_capped Boolean indicating whether the initial IFL processing weight is capped (i.e. a limit) or not (i.e. a target).
# TYPE zhmc_partition_ifl_initial_processing_weight_is_capped gauge
zhmc_partition_ifl_initial_processing_weight_is_capped{cpc="CPCA",mzr="US-East",partition="MIKETEST",pod="wdc04-05"} 0.0
zhmc_partition_ifl_initial_processing_weight_is_capped{cpc="CPCA",mzr="US-East",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_partition_ifl_initial_processing_weight_is_capped{cpc="CPCA",mzr="US-East",partition="PENNY",pod="wdc04-05"} 0.0
# HELP zhmc_partition_initial_memory_mib Initial amount of memory allocated to the partition when it becomes active, in MiB
# TYPE zhmc_partition_initial_memory_mib gauge
zhmc_partition_initial_memory_mib{cpc="CPCA",mzr="US-East",partition="MIKETEST",pod="wdc04-05"} 68608.0
zhmc_partition_initial_memory_mib{cpc="CPCA",mzr="US-East",partition="WILLVLAN2",pod="wdc04-05"} 122880.0
zhmc_partition_initial_memory_mib{cpc="CPCA",mzr="US-East",partition="PENNY",pod="wdc04-05"} 614400.0
# HELP zhmc_partition_reserved_memory_mib Amount of reserved memory (maximum memory minus initial memory), in MiB
# TYPE zhmc_partition_reserved_memory_mib gauge
zhmc_partition_reserved_memory_mib{cpc="CPCA",mzr="US-East",partition="MIKETEST",pod="wdc04-05"} 0.0
zhmc_partition_reserved_memory_mib{cpc="CPCA",mzr="US-East",partition="WILLVLAN2",pod="wdc04-05"} 0.0
zhmc_partition_reserved_memory_mib{cpc="CPCA",mzr="US-East",partition="PENNY",pod="wdc04-05"} 0.0
# HELP zhmc_partition_maximum_memory_mib Maximum amount of memory to which the operating system running in the partition can increase the memory allocation, in MiB
# TYPE zhmc_partition_maximum_memory_mib gauge
zhmc_partition_maximum_memory_mib{cpc="CPCA",mzr="US-East",partition="MIKETEST",pod="wdc04-05"} 68608.0
zhmc_partition_maximum_memory_mib{cpc="CPCA",mzr="US-East",partition="WILLVLAN2",pod="wdc04-05"} 122880.0
zhmc_partition_maximum_memory_mib{cpc="CPCA",mzr="US-East",partition="PENNY",pod="wdc04-05"} 614400.0
# HELP zhmc_cpc_cp_processor_count Number of active CP processors
# TYPE zhmc_cpc_cp_processor_count gauge
zhmc_cpc_cp_processor_count{cpc="CPCA",mzr="US-East",pod="wdc04-05"} 0.0
# HELP zhmc_cpc_ifl_processor_count Number of active IFL processors
# TYPE zhmc_cpc_ifl_processor_count gauge
zhmc_cpc_ifl_processor_count{cpc="CPCA",mzr="US-East",pod="wdc04-05"} 30.0
# HELP zhmc_cpc_icf_processor_count Number of active ICF processors
# TYPE zhmc_cpc_icf_processor_count gauge
zhmc_cpc_icf_processor_count{cpc="CPCA",mzr="US-East",pod="wdc04-05"} 0.0
# HELP zhmc_cpc_iip_processor_count Number of active zIIP processors
# TYPE zhmc_cpc_iip_processor_count gauge
zhmc_cpc_iip_processor_count{cpc="CPCA",mzr="US-East",pod="wdc04-05"} 0.0
# HELP zhmc_cpc_aap_processor_count Number of active zAAP processors
# TYPE zhmc_cpc_aap_processor_count gauge
zhmc_cpc_aap_processor_count{cpc="CPCA",mzr="US-East",pod="wdc04-05"} 0.0
# HELP zhmc_cpc_cbp_processor_count Number of active CBP processors
# TYPE zhmc_cpc_cbp_processor_count gauge
zhmc_cpc_cbp_processor_count{cpc="CPCA",mzr="US-East",pod="wdc04-05"} 0.0
# HELP zhmc_cpc_sap_processor_count Number of active SAP processors
# TYPE zhmc_cpc_sap_processor_count gauge
zhmc_cpc_sap_processor_count{cpc="CPCA",mzr="US-East",pod="wdc04-05"} 2.0
# HELP zhmc_cpc_defective_processor_count Number of defective processors of all processor types
# TYPE zhmc_cpc_defective_processor_count gauge
zhmc_cpc_defective_processor_count{cpc="CPCA",mzr="US-East",pod="wdc04-05"} 0.0
# HELP zhmc_cpc_spare_processor_count Number of spare processors of all processor types
# TYPE zhmc_cpc_spare_processor_count gauge
zhmc_cpc_spare_processor_count{cpc="CPCA",mzr="US-East",pod="wdc04-05"} 0.0
# HELP zhmc_cpc_total_memory_mib Total amount of installed memory, in MiB
# TYPE zhmc_cpc_total_memory_mib gauge
zhmc_cpc_total_memory_mib{cpc="CPCA",mzr="US-East",pod="wdc04-05"} 4.194304e+06
# HELP zhmc_cpc_hsa_memory_mib Amount of memory reserved for the base hardware system area (HSA), in MiB
# TYPE zhmc_cpc_hsa_memory_mib gauge
zhmc_cpc_hsa_memory_mib{cpc="CPCA",mzr="US-East",pod="wdc04-05"} 65536.0
# HELP zhmc_cpc_partition_memory_mib Amount of memory for use by partitions, in MiB
# TYPE zhmc_cpc_partition_memory_mib gauge
zhmc_cpc_partition_memory_mib{cpc="CPCA",mzr="US-East",pod="wdc04-05"} 4.128768e+06
# HELP zhmc_cpc_partition_central_memory_mib Amount of memory allocated as central storage across the active partitions, in MiB
# TYPE zhmc_cpc_partition_central_memory_mib gauge
zhmc_cpc_partition_central_memory_mib{cpc="CPCA",mzr="US-East",pod="wdc04-05"} 2.280448e+06
# HELP zhmc_cpc_partition_expanded_memory_mib Amount of memory allocated as expanded storage across the active partitions, in MiB
# TYPE zhmc_cpc_partition_expanded_memory_mib gauge
zhmc_cpc_partition_expanded_memory_mib{cpc="CPCA",mzr="US-East",pod="wdc04-05"} 0.0
# HELP zhmc_cpc_available_memory_mib Amount of memory not allocated to active partitions, in MiB
# TYPE zhmc_cpc_available_memory_mib gauge
zhmc_cpc_available_memory_mib{cpc="CPCA",mzr="US-East",pod="wdc04-05"} 1.84832e+06
# HELP zhmc_cpc_vfm_increment_gib Increment size of IBM Virtual Flash Memory (VFM), in GiB
# TYPE zhmc_cpc_vfm_increment_gib gauge
zhmc_cpc_vfm_increment_gib{cpc="CPCA",mzr="US-East",pod="wdc04-05"} 16.0
# HELP zhmc_cpc_total_vfm_gib Total amount of installed IBM Virtual Flash Memory (VFM), in GiB
# TYPE zhmc_cpc_total_vfm_gib gauge
zhmc_cpc_total_vfm_gib{cpc="CPCA",mzr="US-East",pod="wdc04-05"} 0.0
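The exposition lines above follow the Prometheus text format. As an illustration only (this is not part of the exporter), the following stdlib-only Python sketch extracts the metric name, labels, and value from such lines; it ignores escaping corner cases such as commas inside label values:

```python
# Sketch (not part of the exporter): parsing gauge samples out of
# Prometheus text exposition output, using only the Python stdlib.
import re

SAMPLE = '''\
# HELP zhmc_cpc_ifl_processor_count Number of active IFL processors
# TYPE zhmc_cpc_ifl_processor_count gauge
zhmc_cpc_ifl_processor_count{cpc="CPCA",mzr="US-East",pod="wdc04-05"} 30.0
'''

LINE_RE = re.compile(r'^(?P<name>[a-zA-Z_:][a-zA-Z0-9_:]*)'
                     r'(?:\{(?P<labels>[^}]*)\})?\s+(?P<value>\S+)$')

def parse_samples(text):
    """Yield (metric_name, labels_dict, float_value) for each sample line."""
    for line in text.splitlines():
        if not line or line.startswith('#'):
            continue  # skip HELP/TYPE comments and blank lines
        m = LINE_RE.match(line)
        if m is None:
            continue
        labels = {}
        if m.group('labels'):
            for item in m.group('labels').split(','):
                key, _, val = item.partition('=')
                labels[key] = val.strip('"')
        yield m.group('name'), labels, float(m.group('value'))

samples = list(parse_samples(SAMPLE))
# samples[0] is ('zhmc_cpc_ifl_processor_count',
#                {'cpc': 'CPCA', 'mzr': 'US-East', 'pod': 'wdc04-05'}, 30.0)
```

In practice you would let Prometheus do this parsing; the sketch is only meant to show the structure of the scraped data.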

Demo setup with Grafana

This section describes a demo setup with a Prometheus server and the Grafana frontend for visualizing the metrics.

The Prometheus server scrapes the metrics from the exporter. The Grafana server provides an HTML-based web interface that visualizes the metrics in a dashboard.

The following diagram shows the demo setup:

Demo setup

Perform these steps for setting it up:

  • Download and install Prometheus from the Prometheus download page or using your OS-specific package manager.

    Copy the sample Prometheus configuration file (examples/prometheus.yaml in the Git repo) as prometheus.yaml into some directory where you will run the Prometheus server. The host:port for contacting the exporter is already set to localhost:9291 and it can be changed as needed.

    Run the Prometheus server as follows:

    $ prometheus --config.file=prometheus.yaml
    

    For details, see the Prometheus guide.
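    In case you prefer to write the Prometheus configuration from scratch, a minimal scrape configuration could look like the following sketch (the examples/prometheus.yaml file in the Git repo is the authoritative version; the job name shown here is an assumption):

    ```yaml
    global:
      scrape_interval: 30s        # how often Prometheus scrapes the exporter
    scrape_configs:
      - job_name: zhmc            # arbitrary job name (assumption)
        static_configs:
          - targets: ['localhost:9291']   # host:port of the zhmc exporter
    ```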

  • Download and install Grafana from the Grafana download page or using your OS-specific package manager.

    Run the Grafana server as follows:

    $ grafana-server -homepath {homepath} web
    

    Where:

    • {homepath} is the path name of the directory with the conf and data directories, for example /usr/local/Cellar/grafana/7.3.4/share/grafana on macOS when Grafana was installed using Homebrew.

    By default, the web interface will be on localhost:3000. This can be changed as needed. For details, see the Prometheus guide on Grafana.

  • Direct your web browser to http://localhost:3000 and log on using admin/admin.

    Create a data source in Grafana named ZHMC_Prometheus, of type Prometheus, that points to the Prometheus server (by default at http://localhost:9090).

    Create a dashboard in Grafana by importing the sample dashboard (examples/grafana.json in the Git repo). It will use the data source ZHMC_Prometheus.

Logging

The exporter supports logging the interactions with the HMC to stderr or to a file. Logging is enabled by using the --log-dest DEST option where DEST can be the keyword stderr or the path name of a log file.

Examples:

$ zhmc_prometheus_exporter --log-dest stderr ...
$ zhmc_prometheus_exporter --log-dest mylog.log ...

At this point, only the HMC interactions can be logged, so the only valid value for the --log-comp option is hmc. That is also the default component that is logged (when enabled via --log-dest).

Performance

The support for resource property based metric values that was introduced in version 1.0 has slowed down the startup of the exporter quite significantly if these metrics are enabled.

Here is an elapsed time measurement for the startup of the exporter using an HMC in one of our development data centers:

  • 11:33 min for preparing auto-update for 143 partitions on two z14 systems in classic mode

  • 0:12 min for preparing auto-update for 98 partitions on two z13 systems in DPM mode

  • 1:30 min for preparing auto-update for the 4 CPCs

  • 10:25 min for all other startup activities (without the partition-attached-network-interface metrics group that would have been 0:48 min)

Once the exporter is up and running, the fetching of metrics by Prometheus from the exporter is very fast:

  • 0:00.35 min (=350 ms) for fetching metrics with 236 HELP/TYPE lines and 5269 metric value lines (size: 500 KB)

In this measurement, the complete set of metrics was enabled for the 4 CPCs described above.

This result includes metric values from properties of auto-updated resources (which are maintained in the exporter and are updated asynchronously via notifications the exporter receives from the HMC) and metric values retrieved from the HMC metric service by executing a single HMC operation (“Get Metric Context”).

This was measured with a local web browser that was directed to an exporter running on the same local system (a MacBook Pro). The network path between the exporter and the targeted HMC went via VPN to the IBM Intranet (via WLAN and Internet) and then across a boundary firewall.
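The HELP/TYPE and metric value line counts quoted above can be determined for any scrape payload with a short Python sketch (not part of the exporter):

```python
def count_exposition_lines(text):
    """Count (help_type_lines, value_lines) in Prometheus text exposition
    output. HELP and TYPE comment lines are metadata; every other non-empty,
    non-comment line carries one metric value."""
    meta = values = 0
    for line in text.splitlines():
        if line.startswith('# HELP') or line.startswith('# TYPE'):
            meta += 1
        elif line and not line.startswith('#'):
            values += 1
    return meta, values

payload = '''\
# HELP zhmc_cpc_ifl_processor_count Number of active IFL processors
# TYPE zhmc_cpc_ifl_processor_count gauge
zhmc_cpc_ifl_processor_count{cpc="CPCA"} 30.0
zhmc_cpc_ifl_processor_count{cpc="CPCB"} 24.0
'''
print(count_exposition_lines(payload))  # -> (2, 2)
```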

Trouble shooting

This section describes some issues and how to resolve them. If you encounter an issue that is not covered here, see Reporting issues.

Permission error

Example:

$ zhmc_prometheus_exporter
Permission error. Make sure you have appropriate permissions to read from
  /etc/zhmc-prometheus-exporter/hmccreds.yaml.

You don’t have permission to read from a YAML file. Change the permissions with chmod; see man chmod if you are unfamiliar with it.

File not found

Example:

$ zhmc_prometheus_exporter
Error: File not found. It seems that /etc/zhmc-prometheus-exporter/hmccreds.yaml does not exist.

A required YAML file (hmccreds.yaml or metrics.yaml) does not exist. If the file is not in /etc/zhmc-prometheus-exporter/, specify its path (relative or absolute) with -c or -m. You have to copy the HMC credentials file from the examples folder and fill in your own credentials; see Quickstart for more information.

Section not found

Example:

$ zhmc_prometheus_exporter
Section metric_groups not found in file /etc/zhmc-prometheus-exporter/metrics.yaml.

One of the required sections is missing in its entirety: the metric_groups or metrics section in your metrics.yaml, or the metrics section in your hmccreds.yaml. See chapter Metric definition file for more information.
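As a quick orientation, both files must contain their required top-level sections. A skeleton along these lines (all values are placeholders; the sample files in the repo are authoritative):

```yaml
# hmccreds.yaml -- requires the "metrics" section:
metrics:
  hmc: 9.10.11.12         # IP address of your HMC (placeholder)
  userid: myuser          # placeholder
  password: mypassword    # placeholder

# metrics.yaml -- requires both the "metric_groups" and "metrics" sections:
metric_groups:
  # one entry per metric group, each with "prefix" and "fetch"
metrics:
  # one entry per metric group, matching the metric_groups section
```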

Doesn’t follow the YAML syntax

Example:

$ zhmc_prometheus_exporter
/etc/zhmc-prometheus-exporter/metrics.yaml does not follow the YAML syntax

A YAML file you specified breaks the syntax rules of the YAML specification. If you derive your YAML files from the existing examples (see chapter Quickstart), this error should not occur. If in doubt, consult the YAML specification.

You did not specify

Example:

$ zhmc_prometheus_exporter
You did not specify the IP address of the HMC in /etc/zhmc-prometheus-exporter/hmccreds.yaml.

There is a lot of mandatory information in the two YAML files that may be missing if you filled in the credentials file incompletely (see Quickstart) or made incorrect changes to the metrics file (see Metric definition file).

All of these values could in some way be missing or incorrect:

In the credentials YAML file, in the section “metrics”

  • hmc, the IP address of the HMC (it must also be a valid IP address)

  • userid, a userid for the HMC

  • password, the corresponding password

In the metrics YAML file, in the section “metric_groups”, for each metric group

  • prefix, the prefix for the metrics to be exported

  • fetch, specifying whether the group should be fetched (and it must be either True or False)

In the metrics YAML file, in the section “metrics”, for each metric group

  • The group must also exist in the metric_groups section

  • percent, specifying whether the metric is a percent value (and it must be either True or False)

  • exporter_name, the name for the exporter (minus the prefix)

  • exporter_desc, the mandatory description for the exporter
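The kind of checks the exporter performs on these values can be approximated with a small stand-alone sketch (hypothetical helpers, not the exporter's actual code; they validate already-parsed YAML content as Python dicts):

```python
def check_creds(creds):
    """Return a list of problems with the 'metrics' section of the
    HMC credentials file content."""
    problems = []
    section = creds.get('metrics')
    if section is None:
        return ['section "metrics" is missing']
    for key in ('hmc', 'userid', 'password'):
        if key not in section:
            problems.append('missing key: %s' % key)
    return problems

def check_metric_groups(metrics_def):
    """Return a list of problems with the 'metric_groups' section of the
    metric definition file content."""
    problems = []
    groups = metrics_def.get('metric_groups')
    if groups is None:
        return ['section "metric_groups" is missing']
    for name, group in groups.items():
        for key in ('prefix', 'fetch'):
            if key not in group:
                problems.append('group %s: missing key: %s' % (name, key))
        if not isinstance(group.get('fetch'), bool):
            problems.append('group %s: fetch must be True or False' % name)
    return problems

creds = {'metrics': {'hmc': '9.10.11.12', 'userid': 'user'}}  # password missing
print(check_creds(creds))  # -> ['missing key: password']
```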

Time out

Example:

$ zhmc_prometheus_exporter
Time out. Ensure that you have access to the HMC and that you have stored
  the correct IP address in /etc/zhmc-prometheus-exporter/hmccreds.yaml.

The exporter times out after a certain threshold if the HMC cannot be reached. Check that you have access to the HMC at the IP address that you specified in the HMC credentials file.

Authentication error

Example:

$ zhmc_prometheus_exporter
Authentication error. Ensure that you have stored a correct user ID-password
  combination in /etc/zhmc-prometheus-exporter/hmccreds.yaml.

The userid or password in the HMC credentials file is wrong. Verify that you can log on to the HMC directly with this userid-password combination.

Warning: Skipping metric or metric group

Example:

$ zhmc_prometheus_exporter
...: UserWarning: Skipping metric group 'new-metric-group' returned by the HMC that is
  not defined in the 'metric_groups' section of metric definition file metrics.yaml
  warnings.warn(warning_str % (metric, filename))

$ zhmc_prometheus_exporter
...: UserWarning: Skipping metric 'new-metric' of metric group 'new-metric-group'
  returned by the HMC that is not defined in the 'metrics' section of metric
  definition file metrics.yaml
  warnings.warn(warning_str % (metric, filename))

If the HMC implements new metrics, or if the metric definition file misses a metric or metric group, the exporter issues this warning to make you aware of it.

Development

This page covers the relevant aspects for developers.

Repository

The Git repository for the exporter project is GitHub: https://github.com/zhmcclient/zhmc-prometheus-exporter

Code of Conduct

Contributions must follow the Code of Conduct as defined by the Contributor Covenant.

Contributing

Third party contributions to this project are welcome!

In order to contribute, create a Git pull request, considering this:

  • Tests are required.

  • Each commit should only contain one “logical” change.

  • A “logical” change should be put into one commit, and not split over multiple commits.

  • Large new features should be split into stages.

  • The commit message should not only summarize what you have done, but explain why the change is useful.

  • The commit message must follow the format explained below.

What comprises a “logical” change is subject to sound judgement. Sometimes, it makes sense to produce a set of commits for a feature (even if not large). For example, a first commit may introduce a (presumably) compatible API change without exploitation of that feature. With only this commit applied, it should be demonstrable that everything is still working as before. The next commit may be the exploitation of the feature in other components.

For further discussion of good and bad practices regarding commits, see the commit guidelines of larger open-source projects such as OpenStack or the Linux kernel.

Format of commit messages

A commit message must start with a short summary line, followed by a blank line.

Optionally, the summary line may start with an identifier that helps identify the type of change or the component that is affected, followed by a colon.

It can include a more detailed description after the summary line. This is where you explain why the change was done, and summarize what was done.

It must end with the DCO (Developer Certificate of Origin) sign-off line in the format shown in the example below, using your name and a valid email address of yours. The DCO sign-off line certifies that you followed the rules stated in DCO 1.1. In short, you certify that you wrote the patch or otherwise have the right to pass it on as an open-source patch.

We use GitCop during creation of a pull request to check whether the commit messages in the pull request comply with this format. If they do not comply, GitCop adds a comment to the pull request describing what was wrong.

Example commit message:

cookies: Add support for delivering cookies

Cookies are important for many people. This change adds a pluggable API for
delivering cookies to the user, and provides a default implementation.

Signed-off-by: Random J Developer <random@developer.org>

Use git commit --amend to edit the commit message, if you need to.

Use the --signoff (-s) option of git commit to append a sign-off line to the commit message with your name and email as known by Git.

If you like filling out the commit message in an editor instead of using the -m option of git commit, you can automate the presence of the sign-off line by using a commit template file:

  • Create a file outside of the repo (say, ~/.git-signoff.template) that contains, for example:

    <one-line subject>
    
    <detailed description>
    
    Signed-off-by: Random J Developer <random@developer.org>
    
  • Configure Git to use that file as a commit template for your repo:

    git config commit.template ~/.git-signoff.template
    

Releasing a version

This section shows the steps for releasing a version to PyPI.

It covers all variants of versions that can be released:

  • Releasing a new major version (Mnew.0.0) based on the master branch

  • Releasing a new minor version (M.Nnew.0) based on the master branch

  • Releasing a new update version (M.N.Unew) based on the stable branch of its minor version

This description assumes that you are authorized to push to the remote repo at https://github.com/zhmcclient/zhmc-prometheus-exporter and that the remote repo has the remote name origin in your local clone.

Any commands in the following steps are executed in the main directory of your local clone of the zhmc-prometheus-exporter Git repo.

  1. Set shell variables for the version that is being released and the branch it is based on:

    • MNU - Full version M.N.U that is being released

    • MN - Major and minor version M.N of that full version

    • BRANCH - Name of the branch the version that is being released is based on

    When releasing a new major version (e.g. 1.0.0) based on the master branch:

    MNU=1.0.0
    MN=1.0
    BRANCH=master
    

    When releasing a new minor version (e.g. 0.9.0) based on the master branch:

    MNU=0.9.0
    MN=0.9
    BRANCH=master
    

    When releasing a new update version (e.g. 0.8.1) based on the stable branch of its minor version:

    MNU=0.8.1
    MN=0.8
    BRANCH=stable_${MN}
    
  2. Create a topic branch for the version that is being released:

    git checkout ${BRANCH}
    git pull
    git checkout -b release_${MNU}
    
  3. Edit the version file:

    vi zhmc_prometheus_exporter/_version.py
    

    and set the __version__ variable to the version that is being released:

    __version__ = 'M.N.U'
    
  4. Edit the change log:

    vi docs/changes.rst
    

    and make the following changes in the section of the version that is being released:

    • Finalize the version.

    • Change the release date to today’s date.

    • Make sure that all changes are described.

    • Make sure the items shown in the change log are relevant for and understandable by users.

    • In the “Known issues” list item, remove the link to the issue tracker and add text for any known issues you want users to know about.

    • Remove all empty list items.

  5. When releasing based on the master branch, edit the GitHub workflow file test.yml:

    vi .github/workflows/test.yml
    

    and in the on section, update the version of the stable_* branch to the new stable branch stable_M.N that will be created later during this release:

    on:
      schedule:
        . . .
      push:
        branches: [ master, stable_M.N ]
      pull_request:
        branches: [ master, stable_M.N ]
    
  6. Commit your changes and push the topic branch to the remote repo:

    git status  # Double check the changed files
    git commit -asm "Release ${MNU}"
    git push --set-upstream origin release_${MNU}
    
  7. On GitHub, create a Pull Request for branch release_M.N.U. This will trigger the CI runs.

    Important: When creating Pull Requests, GitHub by default targets the master branch. When releasing based on a stable branch, you need to change the target branch of the Pull Request to stable_M.N.

  8. On GitHub, close milestone M.N.U.

  9. On GitHub, once the checks for the Pull Request for branch release_M.N.U have succeeded, merge the Pull Request (no review is needed). This automatically deletes the branch on GitHub.

  10. Add a new tag for the version that is being released and push it to the remote repo. Clean up the local repo:

    git checkout ${BRANCH}
    git pull
    git tag -f ${MNU}
    git push -f --tags
    git branch -d release_${MNU}
    
  11. When releasing based on the master branch, create and push a new stable branch for the same minor version:

    git checkout -b stable_${MN}
    git push --set-upstream origin stable_${MN}
    git checkout ${BRANCH}
    

    Note that no GitHub Pull Request is created for any stable_* branch.

  12. On GitHub, edit the new tag M.N.U, and create a release description on it. This will cause it to appear in the Releases tab.

    You can see the tags in GitHub via Code -> Releases -> Tags.

  13. On ReadTheDocs, activate the new version M.N.U.

  14. Upload the package to PyPI:

    make upload
    

    This will show the package version and will ask for confirmation.

    Attention! This only works once for each version. You cannot release the same version twice to PyPI.

    Verify that the released version arrived on PyPI at https://pypi.python.org/pypi/zhmc-prometheus-exporter/

Starting a new version

This section shows the steps for starting development of a new version.

This section covers all variants of new versions:

  • Starting a new major version (Mnew.0.0) based on the master branch

  • Starting a new minor version (M.Nnew.0) based on the master branch

  • Starting a new update version (M.N.Unew) based on the stable branch of its minor version

This description assumes that you are authorized to push to the remote repo at https://github.com/zhmcclient/zhmc-prometheus-exporter and that the remote repo has the remote name origin in your local clone.

Any commands in the following steps are executed in the main directory of your local clone of the zhmc-prometheus-exporter Git repo.

  1. Set shell variables for the version that is being started and the branch it is based on:

    • MNU - Full version M.N.U that is being started

    • MN - Major and minor version M.N of that full version

    • BRANCH - Name of the branch the version that is being started is based on

    When starting a new major version (e.g. 1.0.0) based on the master branch:

    MNU=1.0.0
    MN=1.0
    BRANCH=master
    

    When starting a new minor version (e.g. 0.9.0) based on the master branch:

    MNU=0.9.0
    MN=0.9
    BRANCH=master
    

    When starting a new update version (e.g. 0.8.1) based on the stable branch of its minor version:

    MNU=0.8.1
    MN=0.8
    BRANCH=stable_${MN}
    
  2. Create a topic branch for the version that is being started:

    git checkout ${BRANCH}
    git pull
    git checkout -b start_${MNU}
    
  3. Edit the version file:

    vi zhmc_prometheus_exporter/_version.py
    

    and update the version to a draft version of the version that is being started:

    __version__ = 'M.N.U.dev1'
    
  4. Edit the change log:

    vi docs/changes.rst
    

    and insert the following section before the top-most section:

    Version M.N.U.dev1
    ^^^^^^^^^^^^^^^^^^
    
    This version contains all fixes up to version M.N-1.x.
    
    Released: not yet
    
    **Incompatible changes:**
    
    **Deprecations:**
    
    **Bug fixes:**
    
    **Enhancements:**
    
    **Cleanup:**
    
    **Known issues:**
    
    * See `list of open issues`_.
    
    .. _`list of open issues`: https://github.com/zhmcclient/zhmc-prometheus-exporter/issues
    
  5. Commit your changes and push them to the remote repo:

    git status  # Double check the changed files
    git commit -asm "Start ${MNU}"
    git push --set-upstream origin start_${MNU}
    
  6. On GitHub, create a Pull Request for branch start_M.N.U.

    Important: When creating Pull Requests, GitHub by default targets the master branch. When starting a version based on a stable branch, you need to change the target branch of the Pull Request to stable_M.N.

  7. On GitHub, create a milestone for the new version M.N.U.

    You can create a milestone in GitHub via Issues -> Milestones -> New Milestone.

  8. On GitHub, go through all open issues and pull requests that still have milestones for previous releases set, and either set them to the new milestone, or to have no milestone.

  9. On GitHub, once the checks for the Pull Request for branch start_M.N.U have succeeded, merge the Pull Request (no review is needed). This automatically deletes the branch on GitHub.

  10. Update and clean up the local repo:

    git checkout ${BRANCH}
    git pull
    git branch -d start_${MNU}
    

Building the distribution archives

You can build a binary (wheel) distribution archive and a source distribution archive (a more minimal version of the repository) with:

$ make build

You will find the files zhmc_prometheus_exporter-VERSION_NUMBER-py2.py3-none-any.whl and zhmc_prometheus_exporter-VERSION_NUMBER.tar.gz in the dist folder, the former being the binary and the latter being the source distribution archive.

The binary distribution archive could be installed with:

$ pip install zhmc_prometheus_exporter-VERSION_NUMBER-py2.py3-none-any.whl

The source distribution archive could be installed with:

$ tar -xzf zhmc_prometheus_exporter-VERSION_NUMBER.tar.gz
$ pip install ./zhmc_prometheus_exporter-VERSION_NUMBER

Building the documentation

You can build the HTML documentation with:

$ make builddoc

The root file for the built documentation will be build_docs/index.html.

Testing

You can perform unit tests with:

$ make test

You can perform a flake8 check with:

$ make check

You can perform a pylint check with:

$ make pylint

Appendix

Glossary

Exporter

A server application for exposing metrics to Prometheus

IBM Z

IBM’s mainframe product line

Prometheus

A server application for monitoring and alerting

Z HMC

Hardware Management Console for IBM Z

Bibliography

HMC API

The Web Services API of the z Systems Hardware Management Console, described in the following books:

HMC API 2.11.1

IBM SC27-2616, System z Hardware Management Console Web Services API (Version 2.11.1)

HMC API 2.12.0

IBM SC27-2617, System z Hardware Management Console Web Services API (Version 2.12.0)

HMC API 2.12.1

IBM SC27-2626, System z Hardware Management Console Web Services API (Version 2.12.1)

HMC API 2.13.0

IBM SC27-2627, z Systems Hardware Management Console Web Services API (Version 2.13.0)

HMC API 2.13.1

IBM SC27-2634, z Systems Hardware Management Console Web Services API (Version 2.13.1)

HMC API 2.14.0

IBM SC27-2636, IBM Z Hardware Management Console Web Services API (Version 2.14.0)

HMC API 2.14.1

IBM SC27-2637, IBM Z Hardware Management Console Web Services API (Version 2.14.1)

HMC API 2.15.0

IBM SC27-2638, IBM Z Hardware Management Console Web Services API (Version 2.15.0) (covers both GA1 and GA2)

HMC Security

Hardware Management Console Security

Change log

Version 1.2.0

Released: 2022-06-26

Incompatible changes:

  • For classic mode CPCs, changed the name of the LPAR status metric from zhmc_partition_status_int to zhmc_partition_lpar_status_int in order to disambiguate it from the same-named metric for partitions on CPCs in DPM mode. (issue #207)

Bug fixes:

  • Fixed Pylint config file because pylint 2.14 rejects older options (issue #202)

  • The read timeout for HMC interactions was increased from 120 sec to 300 sec. The retry count remains at 2. (issue #210)

Enhancements:

  • Increased the minimum version of zhmcclient to 1.3.1, in order to have the exported JMS logger name symbol. (part of issue #209)

  • Added support for logging HMC notifications with new “jms” log component. (issue #209)

Version 1.1.0

This version contains all fixes up to version 1.0.0.

Released: 2022-04-07

Bug fixes:

  • Fixed new issues reported by Pylint 2.10.

  • Disabled new Pylint issue ‘consider-using-f-string’, since f-strings were introduced only in Python 3.6.

  • The hmccreds_schema.yml schema incorrectly specified the items of an array as a list. That was tolerated by JSON schema draft 07. When jsonschema 4.0 added support for newer JSON schema versions, that broke. Fixed that by changing the array items from a list to its list item object. Also, in order to not fall into future JSON schema incompatibilities again, added $schema: http://json-schema.org/draft-07/schema (issue #180)

  • Increased minimum zhmcclient version to 1.2.0 to pick up the automatic presence of metric group definitions in its mock support, and adjusted test cases accordingly. This accommodates the removal of certain metrics-related mock functions in zhmcclient 1.2.0 (issue #194)

  • Made the cleanup when stopping the exporter program more tolerant of HMC sessions that were closed or metrics contexts that were removed in the meantime, eliminating exceptions that were previously shown when interrupting the exporter program. (related to issue #193)

  • Fixed an AttributeError exception when retrying the metrics collection after the HMC was rebooted. (related to issue #193)

Enhancements:

  • Changed the “Exporter is up and running” message to be shown also in non-verbose mode, to give first-time users better feedback on when the exporter is ready.

  • Support for Python 3.10: Added Python 3.10 in GitHub Actions tests, and in package metadata.

  • Docs: Documented the authorization requirements for the HMC userid. (issue #179)

  • Improved the information in authentication related error messages to better distinguish between client (=setup) errors and HMC authentication errors, and to include the HTTP reason code in the latter case. (related to issue #193)

  • Showed some more messages in verbose mode for re-creating the HMC session and re-creating the metrics context in case the HMC has been rebooted. (related to issue #193)

Cleanup:

  • Removed an unnecessary recreation of the HMC session when re-creating the metrics context on the HMC. (related to issue #193)

  • Changed the debug messages shown when a metric value’s resource was not found on the HMC into messages that are both output and logged.

Version 1.0.0

Released: 2021-08-08

Incompatible changes:

  • Dropped support for Python 3.4. (issue #155)

  • Changed some network metrics to be represented using Prometheus counter metric types. Specifically, the following metrics at the NIC and port level have been changed to counters: (issue #160)

    • bytes_sent_count

    • bytes_received_count

    • packets_sent_count

    • packets_received_count

    • packets_sent_dropped_count

    • packets_received_dropped_count

    • packets_sent_discarded_count

    • packets_received_discarded_count

    • multicast_packets_sent_count

    • multicast_packets_received_count

    • broadcast_packets_sent_count

    • broadcast_packets_received_count
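
    In the Prometheus exposition format, such a counter would be published along the following lines (the metric and label names here are illustrative only; the actual names depend on the metric definition file):

    ```
    # HELP zhmc_nic_bytes_sent_count Number of bytes sent
    # TYPE zhmc_nic_bytes_sent_count counter
    zhmc_nic_bytes_sent_count{cpc="CPC1",partition="PART1"} 1234567
    ```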

Bug fixes:

  • Fixed new issues reported by Pylint 2.9.

Enhancements:

  • Added support for metrics based on resource properties of CPCs, partitions (DPM mode) and LPARs (classic mode). (issue #112)

  • Added support for metrics representing CPC and partition status. (issue #131)

  • Increased minimum version of zhmcclient to 1.0.0 to pick up support for auto-updated resources. (issue #156)

  • Added support for testing with minimum package levels. (issue #59)

  • Added a new make target ‘check_reqs’ for checking dependencies declared in the requirements files.

  • Increased minimum versions of dependent packages to address install issues on Windows and with minimum package levels:

    • prometheus-client from 0.3.1 to 0.9.0

    • jinja2 from 2.0.0 to 2.8

Version 0.7.0

Released: 2021-06-15

This version contains all fixes up to version 0.6.1.

Incompatible changes:

  • The zhmc_prometheus_exporter command now verifies HMC server certificates by default, using the CA certificates in the ‘certifi’ Python package. This verification will reject the self-signed certificates the HMC is set up with initially. To deal with this, install a CA-verifiable certificate in the HMC and specify the correct CA certificates with the new ‘verify_cert’ attribute in the HMC credentials file. As a temporary quick fix or in non-production environments, you can also disable the verification with that new attribute.
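
    As a sketch, the new attribute might be used like this in the HMC credentials file (the host, userid and certificate path are placeholders; see the sample HMC credentials file for the authoritative format):

    ```yaml
    metrics:
      hmc: 9.10.11.12
      userid: myuser
      password: mypassword
      # Path to a CA certificate file or directory, or 'false' to disable
      # verification (not recommended for production):
      verify_cert: /etc/ssl/certs/my_hmc_ca.pem
    ```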

Bug fixes:

  • Mitigated the coveralls HTTP status 422 by pinning coveralls-python to <3.0.0.

Enhancements:

  • Increased minimum version of zhmcclient to 0.31.0, mainly driven by its support for verifying HMC certificates.

  • Added support for logging the HMC interactions with new options --log-dest and --log-comp. (issue #121)

  • Added the processor type as a label on the metrics of the ‘zcpc-processor-usage’ metrics group. (issue #102)

  • Docs: Added sample Prometheus output from the exporter.

  • Improved error handling and recovery. Once the exporter is up and running, any connectivity loss is now recovered by retrying eternally.

  • Added exporter level activities to the log, as a new log component “exporter”. All messages that would be displayed at the highest verbosity level are now also logged, regardless of the actual verbosity level. Changed the log format by removing the level name and adding the timestamp.

  • Changed the retry/timeout configuration used for the zhmcclient session, lowering the retry and timeout parameters for connection and reads. This only affects how quickly the exporter reacts to connectivity issues, it does not lower the allowable response time of the HMC.

  • The zhmc_prometheus_exporter command now supports verification of the HMC server certificate. A new configuration attribute in the HMC credentials file (‘verify_cert’) controls the verification behavior.

Version 0.6.0

Released: 2020-12-07

Bug fixes:

  • Docs: Fixed the names of the Prometheus metrics of the line cord power metrics. (see issue #89)

  • Added missing dependency to ‘urllib3’ Python package.

  • README: Fixed the links to the metric definition and HMC credentials files (see issue #88).

  • Dockerfile: Fixed that all files from the package are included in the Docker image (see issue #91).

Enhancements:

  • Added support for specifying a new optional ‘if’ property in the definition of metric groups in the metric definition file. It specifies a Python expression representing a condition under which the metric group is fetched. The HMC version can be referenced in the expression as an hmc_version variable. (see issue #77)
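
    As a hedged illustration, such a condition might look like the following fragment of a metric definition file (the group name and version string are illustrative, not taken from the actual sample file):

    ```yaml
    metric_groups:
      cpc-usage-overview:
        # Fetch this metric group only on HMC 2.14 or later; 'hmc_version'
        # is provided by the exporter when evaluating the expression.
        if: "hmc_version >= '2.14'"
    ```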

Cleanup:

  • The metric definition and HMC credentials YAML files are now validated using a schema definition (using JSON schema). This improved the ability to enhance these files, and allowed to get rid of error-prone manual validation code. The schema validation files are part of the installed Python package. This adds a dependency to the ‘jsonschema’ package. (see issue #81)

Version 0.5.0

Released: 2020-12-03

Incompatible changes:

  • The sample metric definition file has changed the metric names that are exported, and also the labels. This is only a change if you choose to use the new sample metric definition file; if you continue using your current metric definition file, the exported metrics will be as before.

Enhancements:

  • The packages needed for installation are now properly reflected in the package metadata (part of issue #55).

  • Improved the metric labels published along with metric values in multiple ways. The sample metric definition file has been updated to exploit all these new capabilities:

    • The type of resource to which a metric value belongs is now identified in the label name, e.g. by showing a label ‘cpc’ or ‘adapter’ instead of the generic label ‘resource’.

    • Resources that are inside a CPC (e.g. adapters, partitions) now can show their parent resource (the CPC) as an additional label, if the metric definition file specifies that.

    • Metric values that identify the resource (e.g. ‘channel-id’ in the ‘channel-usage’ metric group) can now be used as additional labels on the actual metric value, if the metric definition file specifies that.

    Note that these changes will only become active if you pick them up in your metric definition file, e.g. by using the updated sample metric definition file. If you continue to use your current metric definition file, nothing will change regarding the labels.

  • The published metrics no longer contain empty HELP/TYPE comments.

  • Metric values with the special value -1, which the HMC returns for some metrics when the resource does not exist, are now suppressed.

  • Disabled the Platform and Python specific additional metrics so that they are not collected or published (see issue #66).

  • Overhauled the complete documentation (triggered by issue #57).

  • Added a cache for looking up HMC resources from their resource URIs to avoid repeated lookup on the HMC. This speeds up large metric retrievals from over a minute to sub-seconds (see issue #73).
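
    The caching idea can be sketched as follows. This is a hypothetical illustration, not the exporter's actual implementation; the class and function names are made up, and 'lookup_func' stands in for the slow per-URI lookup on the HMC:

    ```python
    class ResourceCache:
        """Cache mapping resource URIs to resource objects, so that each
        URI is looked up on the (slow) backend only once."""

        def __init__(self, lookup_func):
            self._lookup_func = lookup_func  # called only on a cache miss
            self._resources = {}  # uri -> resource object

        def resource(self, uri):
            """Return the resource for a URI, fetching it at most once."""
            if uri not in self._resources:
                self._resources[uri] = self._lookup_func(uri)
            return self._resources[uri]


    if __name__ == "__main__":
        calls = []

        def slow_lookup(uri):
            calls.append(uri)  # record each simulated HMC round trip
            return {"uri": uri}

        cache = ResourceCache(slow_lookup)
        cache.resource("/api/cpcs/1")
        cache.resource("/api/cpcs/1")  # second call is served from the cache
        print(len(calls))  # only one backend lookup happened
    ```

    With many metric values referring to the same few resources, this turns repeated HMC round trips into dictionary lookups, which is the effect the issue describes.
    
    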

  • Added a command line option -v / --verbose to show additional verbose messages (see issue #54).

  • Showing the HMC API version as a verbose message.

  • Removed ensemble/zBX related metrics from the sample metric definition file.

  • Added all missing metrics up to z15 to the sample metric definition file.

  • Added support for additional labels to be shown in every metric that is exported, by specifying them in a new extra_labels section of the HMC credentials file. This allows providing some identification of the HMC environment, if needed. (see issue #80)
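
    A minimal sketch of such a section in the HMC credentials file (the label name and value are placeholders; the sample HMC credentials file is authoritative):

    ```yaml
    extra_labels:
      # Each entry adds one label to every exported metric:
      - name: hmc
        value: HMC1
    ```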

Cleanup:

  • Removed the use of ‘pbr’ to simplify installation and development (see issue #55).

Version 0.4.1

Released: 2020-11-29

Bug fixes:

  • Fixed the error that only a subset of the exceptions that can be raised by the zhmcclient package was handled (namely only ConnectionTimeout and ServerAuthError). This led to lengthy and confusing tracebacks when other exceptions occurred. Now, they are all handled and result in a proper error message.

  • Added metadata to the PyPI package declaring a development status of 4 - Beta, and requiring the supported Python versions (3.4 and higher).

Enhancements:

  • Migrated from Travis and Appveyor to GitHub Actions. This required several changes in package dependencies for development.

  • Added options --help-creds and --help-metrics that show brief help for the HMC credentials file and for the metric definition file, respectively.

  • Improved all exception and warning messages to be better understandable and to provide the context for any issues with content in the HMC credentials or metric definition files.

  • Expanded the supported Python versions to 3.4 and higher.

  • Expanded the supported operating systems to Linux, macOS, Windows.

  • Added the sample HMC credentials file and the sample metric definition file to the appendix of the documentation.

  • The sample metric definition file ‘examples/metrics.yaml’ has been completed so that it now defines all metrics of all metric groups supported by HMC 2.15 (z15). Note that some metric values have been renamed for clarity and consistency.

Version 0.4.0

Released: 2019-08-21

Bug fixes:

  • Avoided an exception in the error handling for a connection drop.

  • Replace yaml.load() by yaml.safe_load(). In PyYAML before 5.1, the yaml.load() API could execute arbitrary code if used with untrusted data (CVE-2017-18342).

Version 0.3.0

Released: 2019-08-11

Bug fixes:

  • Reconnect in case of a connection drop.

Version 0.2.0

Released: 2018-08-24

Incompatible changes:

  • All metrics now have a zhmc_ prefix.

Bug fixes:

  • Uses Grafana 5.2.2.

Version 0.1.2

Released: 2018-08-23

Enhancements:

  • The description now instructs the user to pip3 install zhmc-prometheus-exporter instead of running a local install from the cloned repository. It also links to the stable version of the documentation rather than to the latest build.

Version 0.1.1

Released: 2018-08-23

Initial PyPI release (0.1.0 was for testing purposes)

Version 0.1.0

Released: Only on GitHub, never on PyPI

Initial release