Decoding Junos Native Format Telemetry

When working with streaming telemetry, most users focus primarily on the “sending” and “receiving” endpoints, namely configuring telemetry streaming on the router (sending) and setting up the collector to ingest the telemetry stream (receiving). For many users, however, what the actual telemetry stream looks like is a complete mystery. This is because a user’s first view of the telemetry data is often via some type of dashboarding tool (e.g. Grafana), which happens after the collector (e.g. Telegraf) has already parsed the data and persisted it in a time-series database (e.g. InfluxDB).

This blog post aims to demystify the underlying structure of Junos Native Format telemetry and shows the Reader how to capture and decode an incoming telemetry stream using basic UNIX or open-source tools.

What Is “Native Format Telemetry”?

The Junos Telemetry Interface (JTI) supports two different ways of exporting telemetry data:

  1. Native sensors: these are linecard- or PFE-based sensors, which export the data via UDP. These sensors use a proprietary data model defined by Juniper, using Google Protocol Buffers (gpb) as a means of both structuring the telemetry data messages and serializing them for transmission. The data models may be proprietary, but they are open, as they are defined in Protocol Buffer files (or “.proto” files). Each Native sensor has an associated .proto file which defines the content and structure of that sensor’s telemetry payload.
  2. gRPC sensors: these are Routing Engine (RE) based sensors, which export the data using gRPC over HTTP2. These sensors use a data model defined by OpenConfig, and structure the telemetry data using a key/value pair format. Like Native sensors, gRPC sensors also serialize the message for transmission using Google Protocol Buffers.

A summary of the differences between these two sensor types is shown in Figure 1 below. Both telemetry formats serialize the data in binary form using Google Protocol Buffers for transmission over the wire; where they differ is in how the underlying telemetry data is encoded. For gRPC sensors, the underlying data is structured in key/value pair format. This means that once the binary data off the wire is decoded (using the Protobuf Compiler, or “protoc”), the underlying data is “self-describing” and we can immediately discern the content as a listing of keys and associated values. In contrast, with Native sensors, once the binary data off the wire is decoded, the underlying data is still structured as Google Protocol Buffer messages, and we need the associated .proto files as a “secret decoder ring” in order to make sense of it. Let’s explore further how we can do this.

Figure 1:  Comparing Junos Native And gRPC Sensors
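
To make the “decoder ring” idea concrete, each Native sensor’s .proto file is an ordinary Protocol Buffers definition. The sketch below is illustrative only: the field names and numbers follow the decode examples later in this post, but the exact types and the nested message name are assumptions, and this is not the verbatim Juniper telemetry_top.proto file.

```proto
// Simplified sketch of the top-level message in telemetry_top.proto.
// Illustrative only -- not the verbatim Juniper definition.
message TelemetryStream {
    required string system_id       = 1;   // e.g. "mx1_re0"
    optional uint32 component_id    = 2;
    optional string sensor_name     = 4;
    optional uint32 sequence_number = 5;
    optional uint64 timestamp       = 6;   // milliseconds since epoch
    // Sensor-specific payload; the type name here is assumed.
    optional EnterpriseSensors enterprise = 101;
}
```

The numeric tags on the right are what actually travels on the wire; without this file, a decoder sees only those numbers.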

Installing The Prerequisite Tools

To replicate the examples shown in this blog post, the following three utilities need to be installed:

  1. Netcat
  2. Protocol Buffers Compiler
  3. Protocol Buffers Developer’s Library

These utilities can be installed on an Ubuntu 16.04 server as shown below:

root@ubuntu:~# apt-get install netcat protobuf-compiler libprotobuf-dev -y
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
  libc-dev-bin libc6-dev libprotobuf-lite9v5 libprotobuf9v5 libprotoc9v5 linux-libc-dev manpages-dev netcat-traditional zlib1g zlib1g-dev
Suggested packages:
The following NEW packages will be installed:
  libc-dev-bin libc6-dev libprotobuf-dev libprotobuf-lite9v5 libprotobuf9v5 libprotoc9v5 linux-libc-dev manpages-dev netcat netcat-traditional protobuf-compiler zlib1g-dev
The following packages will be upgraded:
1 upgraded, 12 newly installed, 0 to remove and 155 not upgraded.
Need to get 6,480 kB of archives.
After this operation, 29.2 MB of additional disk space will be used.
Get:1 xenial-updates/main amd64 zlib1g amd64 1:1.2.8.dfsg-2ubuntu4.1 [51.2 kB]
Get:2 xenial-updates/main amd64 libc-dev-bin amd64 2.23-0ubuntu10 [68.7 kB]
Get:3 xenial-updates/main amd64 linux-libc-dev amd64 4.4.0-137.163 [850 kB]
Get:4 xenial-updates/main amd64 libc6-dev amd64 2.23-0ubuntu10 [2,079 kB]
Get:5 xenial/main amd64 libprotobuf-lite9v5 amd64 2.6.1-1.3 [58.4 kB]                                                                      
Get:6 xenial/main amd64 libprotobuf9v5 amd64 2.6.1-1.3 [326 kB]                                                                            
Get:7 xenial/main amd64 libprotoc9v5 amd64 2.6.1-1.3 [273 kB]                                                                              
Setting up netcat-traditional (1.10-41) ...
Setting up netcat (1.10-41) ...
Setting up zlib1g-dev:amd64 (1:1.2.8.dfsg-2ubuntu4.1) ...
Setting up libprotobuf-dev:amd64 (2.6.1-1.3) ...
Setting up protobuf-compiler (2.6.1-1.3) ...
Processing triggers for libc-bin (2.23-0ubuntu10) ...

Once the above three utilities are installed, you can quickly verify that the Protocol Buffers library files are present by checking for one of the well-known .proto files:

root@ubuntu:~# ls /usr/include/google/protobuf/descriptor.proto
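
You can also confirm that the compiler itself is on the PATH. On Ubuntu 16.04 this reports a 2.6.x version; newer systems will differ.

```shell
# Verify the Protocol Buffers compiler is installed and report its version.
protoc --version
```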

Finally, we need to download the Junos Telemetry Interface (JTI) Data Model Files from the Junos Downloads site. These data model files are the Protocol Buffer “.proto” files associated with each of the Native sensors. After selecting the product type and associated Junos version, navigate down to the Tools section of the site and download the compressed TAR archive titled “JUNOS Telemetry Interface Data Model Files”, as shown in Figure 2 below.

Figure 2:  Junos Telemetry Interface (JTI) Data Model Files

Upload the TAR archive to the server where the Protocol Buffers library files were installed.  Untar the contents of the file, as shown below:

root@ubuntu:~# ls
root@ubuntu:~# tar -xzvf junos-telemetry-interface-18.3R1.9.tgz

Configuring Native Streaming On The Router

The process to set up and configure the Juniper router for Native telemetry streaming is covered in depth in the “Junos Configuration” section of a separate blog post. Rather than duplicate that content here, the Reader is encouraged to peruse that post for further details. Note that for the example shown in this blog, the author subscribed to the “/junos/system/linecard/cpu/memory/” sensor. The Native sensor telemetry configuration used in this blog post was as follows:

root@mx1_re0> show configuration services analytics
streaming-server OPEN-NTI {
    remote-port 50001;
}
export-profile PROFILE1 {
    local-port 21111;
    reporting-rate 5;
    format gpb;
    transport udp;
}
sensor SENSOR1 {
    server-name OPEN-NTI;
    export-name PROFILE1;
    resource /junos/system/linecard/cpu/memory/;
}
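
Once the configuration is committed, the router side can be sanity-checked as well. On JTI-capable Junos releases, the “show agent sensors” operational command lists each configured sensor together with its export profile and collector (the output varies by release, so only the command is shown here):

```
root@mx1_re0> show agent sensors
```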

Capturing And Decoding The Telemetry Stream

At this point, we have configured the router to stream telemetry statistics for the “/junos/system/linecard/cpu/memory/” sensor to the target collection server. Let’s first verify that the telemetry data is in fact reaching the server. To do this, we can use the “tcpdump” packet analyzer to display packets received on a specific interface (e.g. “ens3f1”) with a specific destination UDP port, namely 50001, as per our Junos configuration above. This is shown in the output below, where we can see telemetry packets arriving from the router roughly every 5 seconds, matching the configured reporting-rate (the source and destination IP addresses have been redacted from the output):

root@ubuntu:~# tcpdump -i ens3f1 dst port 50001
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens3f1, link-type EN10MB (Ethernet), capture size 262144 bytes
07:41:58.545015 IP > UDP, length 1098
07:42:04.574925 IP > UDP, length 1098
07:42:10.430476 IP > UDP, length 1098
07:42:16.219732 IP > UDP, length 1098
07:42:22.188419 IP > UDP, length 1098

Now that we have confirmed that the telemetry stream is indeed reaching the collector server, let’s capture a sample of the data and take a deeper look at the contents. For this, we use the “netcat” utility to listen on UDP port 50001 and write the captured telemetry data to a file called “data.gpb”, as shown below. Note that netcat will keep listening until you stop it, so let it run long enough for at least one packet to arrive (a few seconds) and then press Ctrl-C. Be sure to first “cd” to the directory where the Junos Telemetry Interface (JTI) Data Model Files were unpackaged.

root@ubuntu:~/junos-telemetry-interface# nc -ul 50001 > data.gpb
root@ubuntu:~/junos-telemetry-interface# ls -l
total 168
-r-xr-xr-x 1 root root 11886 Sep 20 13:52 agent.proto
-r-xr-xr-x 1 root root  3233 Sep 20 11:49 cmerror_data.proto
-r-xr-xr-x 1 root root  3488 Sep 20 11:49 cmerror.proto
-r-xr-xr-x 1 root root  3020 Sep 20 11:49 cpu_memory_utilization.proto
-rw-r--r-- 1 root root  1099 Oct  9 14:59 data.gpb
-r-xr-xr-x 1 root root  5217 Sep 20 11:49 fabric.proto
-r-xr-xr-x 1 root root  5230 Sep 20 11:49 firewall.proto
-r-xr-xr-x 1 root root 12856 Sep 20 11:49 inline_jflow.proto
-r-xr-xr-x 1 root root  4289 Sep 20 11:47 ipsec_telemetry.proto
-r-xr-xr-x 1 root root 10138 Sep 20 11:51 license.txt
-r-xr-xr-x 1 root root  6243 Sep 20 11:49 logical_port.proto
-r-xr-xr-x 1 root root  2496 Sep 20 11:49 lsp_stats.proto
-r-xr-xr-x 1 root root   605 Sep 20 11:51 NOTICE
-r-xr-xr-x 1 root root  3184 Sep 20 11:49 npu_memory_utilization.proto
-r-xr-xr-x 1 root root  3773 Sep 20 11:49 npu_utilization.proto
-r-xr-xr-x 1 root root  6959 Sep 20 11:49 optics.proto
-r-xr-xr-x 1 root root  2941 Sep 20 11:49 packet_stats.proto
-r-xr-xr-x 1 root root  2291 Sep 20 11:48 pbj.proto
-r-xr-xr-x 1 root root  1669 Sep 20 11:49 port_exp.proto
-r-xr-xr-x 1 root root  8634 Sep 20 11:49 port.proto
-r-xr-xr-x 1 root root  3026 Sep 20 11:48 qmon.proto
-r-xr-xr-x 1 root root  2788 Sep 20 11:47 session_telemetry.proto
-r-xr-xr-x 1 root root  1544 Sep 20 11:49 sr_stats_per_if_egress.proto
-r-xr-xr-x 1 root root  1549 Sep 20 11:49 sr_stats_per_if_ingress.proto
-r-xr-xr-x 1 root root  1607 Sep 20 11:49 sr_stats_per_sid.proto
-r-xr-xr-x 1 root root  4742 Sep 20 11:47 svcset_telemetry.proto
-r-xr-xr-x 1 root root  2952 Sep 20 11:49 telemetry_top.proto

As shown below, if we take a look at the raw data that was captured in the “data.gpb” file, we see that it is in binary format.  This is because the telemetry data is encoded in Google Protocol Buffers format.

root@ubuntu:~/junos-telemetry-interface# more data.gpb
Kernel???????V !*
ifl-halp??{ (
nh??? (
filter??? (

agent?]?? ޺(
^Ltoe-lu-stats?  (

features nh?g? 	(
^Lfeatures iff?/?(

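Incidentally, paging a binary file with “more” tends to mangle the terminal; a hex dump utility such as “od” gives a cleaner view of the same raw bytes (the exact offsets and bytes will differ for your capture):

```shell
# Show the first few lines of the capture as hex bytes plus printable ASCII.
# "data.gpb" is the file captured with netcat in the previous step.
od -A d -t x1z data.gpb | head -n 4
```
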
To read the contents of the “data.gpb” file in non-binary (text) format, let’s use the Protocol Buffers Compiler to read the telemetry data from the file and write the raw tag/value pairs in text format to standard output. To write the raw tag/value pairs, we use the “--decode_raw” option.

root@ubuntu:~/junos-telemetry-interface# protoc --decode_raw < data.gpb
1: "mx1_re0:"
2: 0
4: "SENSOR1:/junos/system/linecard/cpu/memory/:/junos/system/linecard/cpu/memory/:PFE"
5: 4522
6: 1539122335596
7: 1
8: 0
101 {
  2636 {
    1 {
      1 {
        1: "Kernel"
        2: 536866792
        3: 181051544
        4: 33
        5 {
          1: "ifd"
          2: 3536
          3: 34
          4: 0
          5: 0
        }
        5 {
          1: "ifd-halp"
          2: 4144
          3: 28
          4: 0
          5: 0
        }
        5 {
          1: "counters"
          2: 4039952
          3: 915
          4: 245
          5: 0
        }
        5 {
          1: "iftable"
          2: 3432
          3: 26
          4: 0
          5: 0
        }
        5 {
          1 {
            13: 0x616b20656e696c6e
          }
          2: 56
          3: 3
          4: 0
          5: 0
        }
      }
    }
  }
}

With the raw decode above, notice that even though the data is now displayed in non-binary (text) format, we still can’t make proper sense of the information. This is because without pointing the compiler at the sensor-specific .proto file, the field names are not decoded and instead remain as integer tags. To get around this, we once again use the Protocol Buffers Compiler, but this time with the “--decode” option, along with the name of the sensor-specific .proto file (i.e. “cpu_memory_utilization.proto”) and the top-level message type (i.e. “TelemetryStream”), as shown below. And with that, we have successfully captured and fully decoded a sample Native telemetry payload packet!

root@ubuntu:~/junos-telemetry-interface# protoc --decode TelemetryStream cpu_memory_utilization.proto -I /usr/include -I .  < data.gpb
system_id: "mx1_re0:"
component_id: 0
sensor_name: "SENSOR1:/junos/system/linecard/cpu/memory/:/junos/system/linecard/cpu/memory/:PFE"
sequence_number: 4522
timestamp: 1539122335596
version_major: 1
version_minor: 0
enterprise {
  [juniperNetworks] {
    [cpu_memory_util_ext] {
      utilization {
        name: "Kernel"
        size: 536866792
        bytes_allocated: 181051544
        utilization: 33
        application_utilization {
          name: "ifd"
          bytes_allocated: 3536
          allocations: 34
          frees: 0
          allocations_failed: 0
        }
        application_utilization {
          name: "ifd-halp"
          bytes_allocated: 4144
          allocations: 28
          frees: 0
          allocations_failed: 0
        }
        application_utilization {
          name: "counters"
          bytes_allocated: 4039952
          allocations: 915
          frees: 245
          allocations_failed: 0
        }
        application_utilization {
          name: "iftable"
          bytes_allocated: 3432
          allocations: 26
          frees: 0
          allocations_failed: 0
        }
        application_utilization {
          name: "inline ka"
          bytes_allocated: 56
          allocations: 3
          frees: 0
          allocations_failed: 0
        }
      }
    }
  }
}

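Since the decoded output is plain text, it can be post-processed with ordinary UNIX tools. As a final sketch (the grep pattern here is just an example; pick whichever fields you care about), this pulls out only the per-application names and allocated byte counts:

```shell
# Decode the capture as before, then keep only the application name and
# bytes_allocated fields from the text output.
protoc --decode TelemetryStream cpu_memory_utilization.proto -I /usr/include -I . < data.gpb \
    | grep -E 'name:|bytes_allocated:'
```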