Deploying EVE-NG On Google Cloud Platform: Part 3

In Part 1 of this blog series, we covered the step-by-step procedure for installing EVE-NG (eve-ng.net) on Google Cloud Platform (GCP).  In Part 2, we looked at how to spin up a simple topology in EVE-NG, consisting of Arista vEOS and Juniper vMX routers.

However, it is quite likely that you will want to connect your EVE-NG topology to external VMs that reside in your GCP environment.  For example, you might want to send Syslog messages to an existing Splunk server in your GCP lab, or send streaming telemetry from a Juniper vMX to an existing Telegraf collector.  This type of access is not enabled by default and requires a small workaround in both EVE-NG and GCP.

Enabling this access can be quite powerful, in that the topologies you spin up in EVE-NG can then serve as an extension of the infrastructure you already have deployed in GCP.  This is very useful in testing and prototyping scenarios.  So with that in mind, in this third post of the series on deploying EVE-NG to GCP, I will walk through a simple approach that worked for me for bridging the EVE-NG and GCP network domains.

Bridge Interfaces Created By EVE-NG

EVE-NG creates several bridge interfaces on the host VM during the installation process.  Of particular importance are the interfaces named pnet0 through pnet9.  If you log in to your EVE-NG host machine and issue the ifconfig -s command, you will see these interfaces, as shown in the screenshot below.

By issuing the brctl show command, we can see that these are indeed bridge interfaces, as shown in the screenshot below.  A bridge interface shows up in “ifconfig” or “ip link” alongside physical interfaces such as “eth0”; however, it is a virtual interface that takes frames arriving on one member interface and transparently forwards them to the other interface(s) on the same bridge.
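
If you prefer to work from the command line rather than the screenshots, the same information can be pulled with the commands below; the ip link variant is an iproute2 alternative for listing the bridge devices.

  root@eve-ng:~# ifconfig -s                 # one-line summary of every interface
  root@eve-ng:~# brctl show                  # list the bridges and their member interfaces
  root@eve-ng:~# ip link show type bridge    # iproute2 alternative for listing bridge devices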

In the context of KVM/QEMU, a Linux bridge is used to connect a guest’s network interface to a network interface on the KVM/QEMU host.

As shown in the screenshot above, bridge interface “pnet0” is bridged with the primary physical EVE-NG Ethernet port, “eth0”.  Furthermore, as depicted below, “pnet0” is assigned the IP address (e.g., “10.128.0.27”) that is used for the EVE-NG Web GUI.  This EVE-NG subnet can optionally be used as a management network in labs, but for my purposes, I wanted to keep the management IPs for EVE-NG nodes in a separate address space from my external GCP lab VMs, so we are going to use a different bridge interface for the management network.
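
A quick way to verify this from the shell (using the example address above) is:

  root@eve-ng:~# brctl show pnet0        # eth0 should appear as a member interface
  root@eve-ng:~# ip -4 addr show pnet0   # shows the Web GUI address, e.g. 10.128.0.27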

Mapping Bridge Interfaces To Cloud Interfaces In EVE-NG

Inside EVE-NG, there is a notion of Cloud Interfaces, which have a one-to-one mapping to the bridge interfaces (“pnet0” through “pnet9”) mentioned above.  A summary of these mappings is shown in the table below.

Bridge Interface On Host VM    Cloud Interface In EVE-NG
pnet0                          Management (Cloud0)
pnet1                          Cloud1
pnet2                          Cloud2
pnet3                          Cloud3
pnet4                          Cloud4
pnet5                          Cloud5
pnet6                          Cloud6
pnet7                          Cloud7
pnet8                          Cloud8
pnet9                          Cloud9

In a nutshell, in EVE-NG the word “Cloud” is used as an alias for “pnet”.  To create a bridge between EVE-NG and GCP, we need to complete the following steps:

  1. Select one of the “Cloud” interfaces to use inside the EVE-NG Topology Designer.  As shown in the table above, we are going to work with bridge interface “pnet9” (or Cloud Interface “Cloud9”).
  2. For the devices inside the EVE-NG topology that require GCP access, connect them to that “Cloud” interface.
  3. On the host VM, assign the bridge interface that corresponds to the chosen “Cloud” interface (e.g., “pnet9” for “Cloud9”) a static IP address (e.g., “192.168.249.1”) from a designated management subnet (e.g., “192.168.249.0/24”) that does not conflict with an existing subnet in your GCP Lab infrastructure.  This address will serve as the “Gateway IP” for all the EVE-NG nodes connected to this management subnet.
  4. For each EVE-NG node requiring GCP Lab access, configure a static IP address from the management subnet address space.  Be sure to add a static route to the GCP Lab subnet, using the management IP from (3) above as the next hop address.
  5. In GCP, add a static route to this management subnet that points to your EVE-NG VM as the next hop.

Let’s walk through the five steps above with a concrete example.

Creating The EVE-NG To GCP Bridge

The EVE-NG topology that we were working on in Part 2 is shown in the figure below.

What we want to do here is connect the two Arista vEOS routers and the Juniper vMX router to a management subnet (“192.168.249.0/24”) that has access to the external GCP VMs in the same project where our EVE-NG VM resides.

First, let’s add “Cloud9” to our network topology.  To do this, right-click anywhere in the Topology Designer canvas and select “Network” from the “Add a new object” context menu.  This is shown in the screenshot below.

In the “Add A New Network” popup window that appears, select “Cloud9” from the dropdown menu of the “Type” field, as shown in the screenshot below.  Optionally, give the network a descriptive name in the “Name/Prefix” field (e.g., “External GCP Lab”).  Click “Save” to continue.

This results in an “External GCP Lab” cloud icon being dropped onto our topology canvas.  Next, we connect our three devices to this icon.  Note that for layout/neatness reasons, we created a second “External GCP Lab” icon to facilitate the connections.  For the Arista vEOS devices, be sure to select Mgmt1 as the interface.  For the Juniper vMX device, be sure to make the connection from the VCP and to use em0/fxp0 as the interface.  This is shown, highlighted in yellow, in the screenshot below.

Recall that we used Cloud Interface “Cloud9” above for our topology’s connection to external GCP VMs.  “Cloud9” maps to the bridge interface “pnet9” on our EVE-NG VM.  Let’s now assign “pnet9” a static IP address (e.g., “192.168.249.1”) from our designated management subnet (e.g., “192.168.249.0/24”).  This IP address will serve as the “Gateway IP” for our EVE-NG nodes.  To do this, log on to the EVE-NG VM and issue the ip address add 192.168.249.1/24 dev pnet9 command, as shown in the screenshot below.  Verify that the IP address has been successfully configured using the ifconfig pnet9 command.
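
For reference, the two host-side commands are repeated below.  Keep in mind that an address added with ip address add does not survive a reboot; if you want it to persist, one option (assuming your EVE-NG host uses the stock Ubuntu ifupdown setup, where the pnet bridges are defined in /etc/network/interfaces) is to give the pnet9 stanza in that file a static address and netmask.

  root@eve-ng:~# ip address add 192.168.249.1/24 dev pnet9
  root@eve-ng:~# ifconfig pnet9    # verify that 192.168.249.1 is now present on pnet9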

Next, let’s configure a management IP address for each of the devices that need to access external GCP Lab VMs.  The screenshot below shows the simple config needed for the Juniper vMX.  Highlighted are the management interface name (“fxp0”), the device’s management IP address (“192.168.249.101/24”), and the default static route to the gateway IP (“192.168.249.1”).
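
For readers without the screenshot handy, a minimal Junos sketch of that vMX config, using the example values from this post, is shown below.  The first line puts the management address on fxp0; the second adds the default static route pointing at the pnet9 gateway.

  set interfaces fxp0 unit 0 family inet address 192.168.249.101/24
  set routing-options static route 0.0.0.0/0 next-hop 192.168.249.1
  commit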

The screenshot below shows the config needed for the Arista vEOS.  Highlighted are the management interface name (“Management1”), the device’s management IP address (“192.168.249.102/24”), and the default static route to the gateway IP (“192.168.249.1”).
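
Likewise, a minimal EOS sketch of the vEOS config (the second vEOS would be configured the same way with its own address from the same subnet, e.g. an illustrative “192.168.249.103”):

  interface Management1
     ip address 192.168.249.102/24
  !
  ip route 0.0.0.0/0 192.168.249.1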

Our final step is to ensure that all of our GCP Lab VMs know how to route to the “192.168.249.0/24” management subnet for our EVE-NG topology.  To do this, we need to create a static route in our GCP project.  From the GCP Console sidebar menu, select “VPC network -> Routes”, as shown below.

Then, from the “VPC network” landing page, click on the “Create Route” button as shown in the screenshot below.

The “Create a route” page appears as shown in the screenshot below.  Enter the configuration details as follows:

  • Name:  Provide a descriptive name for the route (e.g., “eve-ng-cloud9-route”).
  • Description:  Provide an optional description for the route.
  • Network:  Specify the GCP network to which this route applies (e.g., “default”).
  • Destination IP range:  This is where we specify the management subnet used by our EVE-NG topology (e.g., “192.168.249.0/24”).
  • Priority:  Unless you need to tweak this value, leave it at the default of “1000”.
  • Next hop:  From the dropdown list, we have the option to specify either an IP address or a GCP instance as the next hop for the destination IP range.  If your EVE-NG instance uses a DHCP-assigned IP address, it is best to choose “Specify an instance” as the dropdown value.  This way, the next hop will keep pointing at EVE-NG even if a different IP address gets assigned to it via DHCP.
  • Next hop instance:  Here, specify your EVE-NG GCP VM instance as the next hop.
  • Click “Create” when done.  (An equivalent gcloud command is sketched below.)
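
If you prefer the gcloud CLI to the Console, the same route can be created with a single command.  The sketch below uses the example names and values from this post; the instance name and zone in angle brackets are placeholders for your own EVE-NG VM.

  gcloud compute routes create eve-ng-cloud9-route \
      --network=default \
      --destination-range=192.168.249.0/24 \
      --next-hop-instance=<your-eve-ng-instance> \
      --next-hop-instance-zone=<your-eve-ng-zone> \
      --priority=1000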

To test it out, log on to one of your external GCP VM instances and try to connect to one of your EVE-NG topology routers.  For example, in my setup, I want to be able to send Syslog messages from my EVE-NG routers to an external Splunk instance running in my GCP Lab.  So, as shown in the screenshot below, I have logged on to the Splunk VM and tried to SSH to the Juniper vMX router (192.168.249.101).
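
As a rough sketch, the checks from the external VM look like the following; the “splunk-vm” hostname and the SSH username are illustrative, so substitute whatever your VM and router actually use.

  user@splunk-vm:~$ ping -c 3 192.168.249.101       # reach the vMX management address
  user@splunk-vm:~$ ssh <your-vmx-user>@192.168.249.101

Once connectivity is confirmed, pointing the routers’ Syslog output at the Splunk VM is just ordinary device configuration; on the vMX, for example, something along the lines of set system syslog host <splunk-vm-address> any info would do it.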

And that’s all there is to it!  Your EVE-NG topology can now serve as a useful extension to your existing GCP infrastructure for testing and/or prototyping purposes.  In the next blog post, Part 4, we will explore how we can connect our EVE-NG topology nodes to the Internet.

17 comments

  1. thank you for the detailed documentation. I have been browsing through the Internet for a while but I could not understand how I can reach the Internet on eve-ng lab through Cloud network.
    All I have is a linux node, and Cisco router
    Linux node -> Cisco router -> Cloud 0-9 ( community version ) but I can not even ping 8.8.8.8 on Cisco router.

    here is my routing table

    root@eve-ng:~# route
    Kernel IP routing table
    Destination     Gateway         Genmask          Flags  Metric  Ref   Use  Iface
    default         10.142.0.1      0.0.0.0          UG     0       0       0  pnet0
    10.142.0.1      *               255.255.255.255  UH     0       0       0  pnet0

    even if I connect my router to cloud9 interface , I can not ping 8.8.8.8 sourcing pnet9

    root@eve-ng:~# ping 8.8.8.8 -I pnet9
    PING 8.8.8.8 (8.8.8.8) from 10.142.0.1 pnet9: 56(84) bytes of data.
    From 10.142.0.1 icmp_seq=1 Destination Host Unreachable
    From 10.142.0.1 icmp_seq=2 Destination Host Unreachable

    Can you point me in the right direction? I have no GNS3 or EVE-NG experience and funny enough I started with EVE-NG on GCP 🙁
    I would really appreciate if you can shed some lights on it

    Regards,

  2. Hi, thanks for this blog. It’s helped me to set up eve-ng in GCP. Please could you confirm how I connect the eve-ng topology to an external network, i.e. how do I make the nodes ping google.com? With the help of your blog, I can ping within 192.168.249.0/24 but cannot ping 8.8.8.8 or another VPC.

  3. Good night!

    I would like to thank you for your blog, it helped me a lot!

    I’m having difficulties in freeing internet access for a device, I can’t communicate the switch / router with the internet.

  4. having some trouble, I can ping the pnet gateway IP from my little virtual PC in the eve-ng lab, and my ubuntu server in gcp can ping the pnet interface, but gcp ubuntu can not ping the virtual pc in the eve-ng lab? firewall is allowing full 192.168.0.0/16 and 10.0.0.0/8 and the route looks to be working, letting the ubuntu gcp vm ping the pnet ip. any ideas?

    1. Hi Jason, sincere apologies for the late reply. I think you might be missing a static route in GCP from your external VM back to EVE-NG. In the GCP project where your EVE-NG VM resides, if you go to “VPC Network -> Routes”, you can create a static route for your EVE-NG subnet where you specify your EVE-NG VM as the static route next hop. If you search in this blog post for the string “Create a route”, you’ll see the instructions for setting up the static route. Hope that helps.

  5. I’ve followed this guide. All works except the connection between the nodes and external gcp vms. There may be a step missing but I’m unsure what it is. Node can ping pnet9 and eveng can ping external vm, but external vm cannot ping eve ng node

    1. Hi Joe, sincere apologies for the late reply. I think you might be missing a static route in GCP from your external VM back to EVE-NG. In the GCP project where your EVE-NG VM resides, if you go to “VPC Network -> Routes”, you can create a static route for your EVE-NG subnet where you specify your EVE-NG VM as the static route next hop. If you search in this blog post for the string “Create a route”, you’ll see the instructions for setting up the static route. Hope that helps.

  6. Hi,

    Thank you for the detailed explanation. Really helpful.
    I’ve configured everything exactly as you mentioned. But I’m not able to ping from my linux VM, residing in the same VPC, to a router IP connected to pnet9. Let’s say 192.168.249.5.
    From the Linux VM after setting the route under VPC, I’m able to ping the pnet9 gateway IP 192.168.249.1. But not .5

    Also from the router, which has a default route 0.0.0.0/0 192.168.249.1, I’m able to ping the .1.
    Also the IP on pnet0 is reachable. But not devices reachable behind pnet0 interface.
    [router output] ping pnet9 from router:
    localhost#ping 192.168.249.1
    PING 192.168.249.1 (192.168.249.1) 72(100) bytes of data.
    80 bytes from 192.168.249.1: icmp_seq=1 ttl=64 time=9.47 ms
    80 bytes from 192.168.249.1: icmp_seq=2 ttl=64 time=4.25 ms
    80 bytes from 192.168.249.1: icmp_seq=5 ttl=64 time=8.02 ms
    --- 192.168.249.1 ping statistics ---
    5 packets transmitted, 5 received, 0% packet loss, time 40ms
    rtt min/avg/max/mdev = 3.077/6.516/9.472/2.429 ms, ipg/ewma 10.119/8.054 ms

    [router output] ping pnet0 from router:
    localhost#ping 10.164.0.2
    PING 10.164.0.2 (10.164.0.2) 72(100) bytes of data.
    80 bytes from 10.164.0.2: icmp_seq=1 ttl=64 time=6.52 ms
    80 bytes from 10.164.0.2: icmp_seq=2 ttl=64 time=4.36 ms
    80 bytes from 10.164.0.2: icmp_seq=3 ttl=64 time=2.49 ms
    --- 10.164.0.2 ping statistics ---
    5 packets transmitted, 5 received, 0% packet loss, time 38ms
    rtt min/avg/max/mdev = 2.490/4.603/6.522/1.291 ms, ipg/ewma 9.512/5.556 ms

    [router output] ping linux VM from router:
    localhost#ping 10.164.0.3
    PING 10.164.0.3 (10.164.0.3) 72(100) bytes of data.
    --- 10.164.0.3 ping statistics ---
    5 packets transmitted, 0 received, 100% packet loss, time 49ms

    I’ve seen this behavior before with on-prem EVE instances running on ESXi. There I needed to enable promiscuous on the portgroup.
    Do I need to configure additional settings?

    1. Hi Trace, sincere apologies for the late reply. Not sure if this is your issue, but can you check if you are connecting your router management port to “Cloud9” and not “Cloud0”? If you are using “pnet9” then you’ll have to use “Cloud9” in the “Type” field in the “Add a new network” window.

  7. Thank you for sharing this really valuable content. Could you please let me know how to configure a PNET interface to go to the internet from inside a lab? I was struggling with that part.

  8. Hey I followed the guide but still no dice.

    I can ping from the vEOS to its default gateway (pnet9) and back, via Cloud9; however, I can’t ping vEOS from another VM or from/to pnet0.
    I did insert a route into the GCP routing table.
