Using OpenNTI As A Collector For Streaming Telemetry From Juniper Devices: Part 2

This is the second part of a series of blog posts that are meant to serve as a Quick Start Guide for getting up and running with streaming real-time telemetry data from Juniper devices and collecting it using an open-source tool called OpenNTI.

In Part 1 we covered the two types of telemetry formats supported by Junos, namely Native and OpenConfig Streaming, and discussed how to install OpenNTI and verify that it’s up and running.  In this post, we will take a look at the Junos configuration required to support Native Streaming, as well as the OpenNTI/Grafana configuration required to collect and visualize the telemetry data.

Native Streaming

As shown in Figure 1 below, when streaming telemetry in native format, there is one important caveat to keep in mind:

Traffic from Native telemetry sensors is injected into the forwarding path, so the collector (OpenNTI) must be reachable via in-band connectivity.  Native sensor telemetry traffic will not be forwarded through the router’s management interface (e.g., fxp0).

Figure 1:  Streaming Native Telemetry via In-band

Junos Configuration

Enabling Native format telemetry on a Juniper router is a straightforward 3-step process:

  1. Configure a Streaming Server Profile, which defines the parameters of the OpenNTI server which will be used to collect the exported telemetry data.  Such parameters include:
    • Server name (any string label),
    • Server IP address (reachable via in-band), and
    • Port number (which defaults to 50000 for streaming native telemetry to OpenNTI).
set services analytics streaming-server <SERVER_NAME> remote-address <SERVER_IP>
set services analytics streaming-server <SERVER_NAME> remote-port 50000
  2. Configure an Export Profile, which defines parameters related to the data that is exported via the Junos Telemetry Interface.  Such parameters include:
    • Export profile name (any string label),
    • Local router IP address (source address for exported packets),
    • Local router port number (source port for exported packets),
    • Reporting rate (telemetry interval, ranging from 1 to 3600 seconds),
    • Data format (always “gpb” or Google Protobuf),
    • Transport type (always “udp” for native streaming).
set services analytics export-profile <PROFILE_NAME> local-address <ROUTER_IP>
set services analytics export-profile <PROFILE_NAME> local-port <ROUTER_PORT>
set services analytics export-profile <PROFILE_NAME> reporting-rate <REPORTING_RATE>
set services analytics export-profile <PROFILE_NAME> format gpb
set services analytics export-profile <PROFILE_NAME> transport udp
  3. Configure a Sensor to collect telemetry data for a specific resource.  The parameters needed to configure a sensor are:
    • Sensor name (any string label),
    • Streaming server name (as defined in Step 1 above),
    • Export profile name (as defined in Step 2 above),
    • Resource name (as selected from a predefined list of supported sensors in Junos).
set services analytics sensor <SENSOR_NAME> server-name <SERVER_NAME>
set services analytics sensor <SENSOR_NAME> export-name <PROFILE_NAME>
set services analytics sensor <SENSOR_NAME> resource <RESOURCE_NAME>
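The three steps above lend themselves to scripting when many routers need the same sensor. Below is a minimal Python sketch that renders the required set commands from a set of parameters; the function name is illustrative, and the placeholders mirror those used in the steps above:

```python
def native_telemetry_config(server_name, server_ip, profile_name,
                            router_ip, router_port, reporting_rate,
                            sensor_name, resource):
    """Render the Junos 'set' commands for Native Streaming (Steps 1-3)."""
    base = "set services analytics"
    return [
        # Step 1: streaming server profile (OpenNTI listens on UDP 50000)
        f"{base} streaming-server {server_name} remote-address {server_ip}",
        f"{base} streaming-server {server_name} remote-port 50000",
        # Step 2: export profile
        f"{base} export-profile {profile_name} local-address {router_ip}",
        f"{base} export-profile {profile_name} local-port {router_port}",
        f"{base} export-profile {profile_name} reporting-rate {reporting_rate}",
        f"{base} export-profile {profile_name} format gpb",
        f"{base} export-profile {profile_name} transport udp",
        # Step 3: sensor tying the server and profile together
        f"{base} sensor {sensor_name} server-name {server_name}",
        f"{base} sensor {sensor_name} export-name {profile_name}",
        f"{base} sensor {sensor_name} resource {resource}",
    ]
```

The resulting list can be pasted into configuration mode or pushed via your automation tooling of choice.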

Putting all three steps together, below is a sample configuration snippet for configuring Native Streaming (the server and router IP addresses are shown as placeholders):

services {
   analytics {
      streaming-server OPEN-NTI {
         remote-address <SERVER_IP>;
         remote-port 50000;
      }
      export-profile PROFILE1 {
         local-address <ROUTER_IP>;
         local-port 21111;
         reporting-rate 1;
         format gpb;
         transport udp;
      }
      sensor SENSOR1 {
         server-name OPEN-NTI;
         export-name PROFILE1;
         resource /junos/system/linecard/interface/;
      }
   }
}

Verifying Telemetry Data Is Being Streamed

Once the above configuration is committed on the router, there are a few quick checks we can perform to verify that the telemetry data is being streamed to the collector.

The first check is on the router itself, where we can verify that the sensor has been configured and is running, using the “show agent sensors” command.  As shown below, we have subscribed to the “/junos/system/linecard/interface/” sensor, and its data is being streamed towards the OpenNTI collector server on port 50000 using UDP transport and the Google Protobuf data format.

root@tech_mocha_1> show agent sensors    

Sensor Information : 
    Name                                    : SENSOR1               
    Resource                                : /junos/system/linecard/interface/ 
    Version                                 : 1.1                  
    Sensor-id                               : 248034917             
    Subscription-ID                         : 562951275198053      
    Parent-Sensor-Name                      : Not applicable       
    Component(s)                            : PFE                   
    Server Information : 
        Name                                : OPEN-NTI              
        Scope-id                            : 0                     
        Remote-Address                      :               
        Remote-port                         : 50000                 
        Transport-protocol                  : UDP                   

    Profile Information : 
        Name                                : PROFILE1              
        Reporting-interval                  : 1                     
        Payload-size                        : 5000                  
        Address                             :               
        Port                                : 21111                 
        Timestamp                           : 1                     
        Format                              : GPB                   
        DSCP                                : 0                     
        Forwarding-class                    : 0                     
        Loss-priority                       : low

The second check can be performed on the collector server, using the “tcpdump” utility.  As shown below, we can verify that telemetry data packets are actually being received from the router, sourced from port 21111 as configured in the export profile.  We can also see that the packets are arriving on the collector interface on destination port 50000 via UDP.

root@ubuntu:~# tcpdump -i ens3f1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens3f1, link-type EN10MB (Ethernet), capture size 262144 bytes

17:55:47.210561 IP > UDP, length 1069
17:55:49.220587 IP > UDP, length 1069
17:55:51.323314 IP > UDP, length 1069
17:55:53.330639 IP > UDP, length 1069
17:55:55.341179 IP > UDP, length 1069

Note in the tcpdump output above that even though we specified a reporting-rate of 1 second in our export-profile configuration, we still see a 2-second interval in the timestamps for each of the packets arriving at the collector.  This is because the sensor we subscribed to (/junos/system/linecard/interface/) is a PFE or Packet Forwarding Engine sensor, and Junos enforces a minimum reporting rate of 2 seconds for such sensors.

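If tcpdump is not available on the collector, a few lines of Python can perform the same sanity check. This sketch simply waits for a single UDP datagram on the OpenNTI native-telemetry port (50000 by default) and reports its source address and size; run it while the sensor configuration is committed on the router:

```python
import socket

def wait_for_telemetry(port=50000, timeout=10.0):
    """Block until one UDP datagram arrives on `port`.

    Returns (source_ip, payload_length), or None on timeout.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.bind(("", port))           # listen on all interfaces
    try:
        data, addr = sock.recvfrom(65535)
        return addr[0], len(data)
    except socket.timeout:
        return None
    finally:
        sock.close()
```

Note that OpenNTI itself binds this port while running, so this check is best performed before the collector container is started, or on a different port for testing.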

Querying The Data In InfluxDB

Before creating dashboards in Grafana, it is helpful to first take a peek inside the InfluxDB time-series database and understand how things are structured.  Recall from Part 1 of this blog series that you can access the InfluxDB web interface by pointing your browser to the OpenNTI collector server IP address followed by the port number 8083.

As shown in Figure 2 below, the first query to run is “show measurements” against the “juniper” database, which reveals the root measurement for all subsequent queries, namely “jnpr.jvision”.

Figure 2:  Root Measurement For All Queries

Using the root measurement, we can drill down into specific telemetry measurements that are part of the sensor(s) we subscribe to.  For example, let’s say we want to list the last 5 measurements collected for a particular device and interface; we would use the query shown in Figure 3 below (see Query toolbar outlined in blue).

Figure 3:  Selecting the Last 5 Measurements For a Device & Interface

As another example, let’s say we want to list all the “ingress_stats” measurements in the last 2 seconds for a particular device and interface; we would use the query shown in Figure 4 below (see Query toolbar outlined in blue).

Figure 4:  Selecting All Ingress_Stats Measurements in the Last 2 Seconds for a Device & Interface
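The same queries can also be issued programmatically against InfluxDB’s HTTP API, which is assumed here to be listening on its default query port, 8086 (a different port from the 8083 web interface). The sketch below builds and executes a query using only the Python standard library; the device name router1 is hypothetical, and the tag names match those used in the queries above:

```python
import json
import urllib.parse
import urllib.request

def influx_query_url(host, db, query, port=8086):
    """Build a URL for InfluxDB's /query HTTP endpoint."""
    params = urllib.parse.urlencode({"db": db, "q": query})
    return f"http://{host}:{port}/query?{params}"

def run_query(host, db, query):
    """Execute the query and return the decoded JSON response."""
    with urllib.request.urlopen(influx_query_url(host, db, query)) as resp:
        return json.load(resp)

# Example: last 5 measurements for one device and interface
query = ('SELECT * FROM "jnpr.jvision" '
         "WHERE \"device\" = 'router1' AND \"interface\" = 'ge-0/0/0' "
         "ORDER BY time DESC LIMIT 5")
# result = run_query("<COLLECTOR_IP>", "juniper", query)
```

The JSON response contains a `results` list mirroring what the web interface displays in its query toolbar.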

Visualizing The Data In Grafana

Although OpenNTI comes pre-packaged with some Grafana dashboards for certain types of measurements, it is quite straightforward to create your own custom dashboard panel in Grafana.

From the main Grafana menu bar at the top left, select “Dashboards -> New”, as shown in Figure 5 below.

Figure 5:  Creating a New Dashboard in Grafana

In the “New dashboard” page that appears, either click on the “Graph” icon or drag it into the “Empty Space” below it, as shown in Figure 6 below.

Figure 6:  Instantiating a Graph Within the New Grafana Dashboard

At the very top of the new empty graph that appears, click on the “Panel Title”, which will bring up a small context menu bar above it, as shown in Figure 7 below.  Click on the “Edit” option.

Figure 7:  Editing the Newly Instantiated Graph in Grafana

In the “Graph” panel that appears just below the newly instantiated empty graph, click on the “Metrics” tab, as shown in Figure 8 below.  Within this tab, first select “influxdb” as the “Panel data source”.  Next, replace the default InfluxDB select statement with your own custom InfluxDB query.

In this example, let’s say we want the graph to display the ingress packets per second for a particular device and interface.  From Figure 4 above, looking at the “type” column in the screenshot, we can see that this corresponds to a measurement type called “ingress_stats.if_pkts”.  Note that this measurement type is a counter and not an actual rate, meaning that it counts the total number of packets received on an interface.  In order to graph the rate (in packets/sec), we first have to calculate it.  As shown in the InfluxDB query below, the rate can be calculated using the InfluxDB “derivative()” function.

SELECT derivative(mean("value"), 1s) FROM "jnpr.jvision" WHERE "type" = 'ingress_stats.if_pkts' AND "device" =~ /jag_router_1/ AND "interface" = 'ge-0/0/0' AND $timeFilter GROUP BY "device", "interface", "type", time(30s)

Figure 8:  Displaying the Contents of a Custom InfluxDB Query in the New Grafana Graph Panel
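The counter-to-rate transformation that derivative() performs can be illustrated outside InfluxDB as well. This small Python sketch converts a list of (timestamp, counter) samples into per-second rates, which is essentially what derivative(mean("value"), 1s) computes per GROUP BY interval:

```python
def counter_to_rates(samples):
    """Convert cumulative counter samples into per-second rates.

    samples: list of (timestamp_in_seconds, counter_value), sorted by time.
    Returns a list of (timestamp, rate) pairs, one per consecutive sample pair.
    """
    rates = []
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        rates.append((t1, (v1 - v0) / (t1 - t0)))
    return rates
```

For example, counter values of 0, 200, and 500 packets sampled at t = 0, 2, and 4 seconds yield rates of 100 and 150 packets/sec for the two intervals.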

In Part 3 of this blog series, we’ll take a look at the Junos configuration needed to enable telemetry streaming for OpenConfig format, as well as getting the data collected and visualized using OpenNTI.

