
System-Engineering | 2 min read

vSRX read me first

Remi Locherer, Senior Network Engineer
October 2020

If you want to learn how Juniper’s SRX firewalls work or to test a specific configuration, the virtual version of the SRX appliance (vSRX) comes in handy. It supports almost all features of the physical models, including clustering.

That was exactly what I needed in order to test a setup that I was going to implement for a customer. I followed the detailed instructions from Juniper to set up vSRX on Hyper-V and instantiated two vSRX instances.

At least I thought I had followed said instructions.

The VMs started properly and I could log in. Then, I added additional interfaces and Hyper-V networks to get the network connections needed for a chassis cluster configuration.

Setting up the cluster also worked – or so I thought. However, the cluster status was not completely error-free, and I noticed that the ge-7/0/X interfaces on cluster node 1 were missing. The fxp0 interface on node 1 was fine, though, since I could still SSH into that firewall.
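For reference, enabling a chassis cluster on two vSRX instances boils down to assigning the same cluster ID and a unique node ID on each VM and rebooting. This is only a minimal sketch – cluster ID 1 and the hostnames are examples from my lab, and the fabric and control links still need matching Hyper-V networks:

{node 0}
admin@vsrx0> set chassis cluster cluster-id 1 node 0 reboot

{node 1}
admin@vsrx1> set chassis cluster cluster-id 1 node 1 reboot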

I looked into the logs (“show log messages”) and noticed that the following line showed up at least once a minute:

Jun 26 08:20:04  vsrx1 kernel: pid 9237 (srxpfe), uid 0: exited on signal 6 (core dumped)
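To isolate these entries from the rest of the log noise, the output of the show command can be piped through a match filter on the Junos CLI:

admin@vsrx1> show log messages | match srxpfe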

Something was clearly not good. I recalled that the packet forwarding engine on vSRX is a software process based on DPDK – exactly the process that was dying. But what could be the cause of this? I remembered that, in order to separate the data plane from the control plane, the two must be pinned to different CPUs. This implies that the VM needs at least two vCPUs. To check, I quickly changed to the FreeBSD shell:

{secondary:node1}
admin@vsrx1> start shell
% sysctl hw.ncpu
hw.ncpu: 1
%

Finally, it all became clear! I had missed assigning a second vCPU to node 1. Because of that, the forwarding engine could not start, which is also why the ge-7/0/X interfaces did not show up. The fxp0 interface belongs to the control plane and was therefore not affected. Once I assigned the missing vCPU to node 1, everything was up and running as expected.
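On Hyper-V, the missing vCPU can be added from an elevated PowerShell prompt while the VM is powered off. A minimal sketch – the VM name vsrx1 is an assumption from my lab:

PS> Stop-VM -Name vsrx1
PS> Set-VMProcessor -VMName vsrx1 -Count 2
PS> Start-VM -Name vsrx1

After the node comes back up, “show chassis cluster status” on either node should report both nodes with healthy priorities.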


