Wednesday, 15 July 2015

Juniper SRX: How to manage fxp0 across a VPN (Remote Management Best Practices)

This is one of the most common questions I see, both in my professional life and on popular Juniper technical forums. Most of this confusion could be avoided if Juniper allowed fxp0 to be placed in a non-default routing instance; for the time being, however, we're left with the workaround below (moving all revenue interfaces into a virtual router instead of moving fxp0).

For those who are unaware, fxp0 is a dedicated management interface connected directly to the routing-engine of the device. On the SRX, there is complete hardware separation between the routing-engine and the dataplane (which is responsible for the actual forwarding of transit traffic). On the high-end devices (SRX650 through SRX5800) this is accomplished with a separate hardware module or blade, and on the smaller branch devices with dedicated cores on the shared CPU.
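You can see this separation from the CLI on a running cluster; the first command below shows routing-engine status and load, and the second shows which node currently holds each redundancy group:

 show chassis routing-engine
 show chassis cluster status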

In this example, our topology will contain the following:
1 cluster of two SRXes, to be managed remotely via fxp0.
1 stand-alone SRX acting as our VPN peer; our SSH proxy will reside behind this gateway.
1 Virtual-Chassis stack of EX switches to attach the cluster's fxp0 interfaces and management reth to.
1 SSH proxy for remote device management.

This example assumes the cluster has already been configured, the VPN (route-based) is up and running, etc. If you are unfamiliar with these topics, you can check out previous articles here on how to set them up (or for the official documentation).

Step 1: Allocate a new VLAN for management purposes. This VLAN will contain both nodes' fxp0 interfaces, as well as a single reth interface (trunked or otherwise) from the dataplane. Assign the new reth interface to an appropriately named zone (careful not to use 'Management', as that name is reserved):

Interface configurations are as follows:
set interfaces reth0 vlan-tagging
set interfaces reth0 redundant-ether-options redundancy-group 1
set interfaces reth0 redundant-ether-options minimum-links 1
set interfaces reth0 redundant-ether-options lacp passive
set interfaces reth0 unit 100 description "Outside interface"
set interfaces reth0 unit 100 vlan-id 100
set interfaces reth0 unit 100 family inet address
set interfaces reth1 redundant-ether-options redundancy-group 1
set interfaces reth1 unit 0 description "Attached to Management VLAN for fxp0 access"
set interfaces reth1 unit 0 family inet address
set interfaces st0 unit 0 family inet address
set groups node0 interfaces fxp0 unit 0 family inet address
set groups node1 interfaces fxp0 unit 0 family inet address
set security zones security-zone Outside interfaces reth0.100 host-inbound-traffic system-services ike
set security zones security-zone Mgmt interfaces reth1.0
set security zones security-zone VPN interfaces st0.0
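For completeness, the EX Virtual-Chassis side of this design simply carries the management VLAN to both fxp0 ports and the reth1 member links. A minimal sketch, assuming pre-ELS EX syntax, VLAN ID 200, and these particular port assignments (all illustrative):

set vlans Mgmt vlan-id 200
set interfaces ge-0/0/10 unit 0 family ethernet-switching port-mode access vlan members Mgmt
set interfaces ge-1/0/10 unit 0 family ethernet-switching port-mode access vlan members Mgmt
set interfaces ge-0/0/11 unit 0 family ethernet-switching port-mode access vlan members Mgmt
set interfaces ge-1/0/11 unit 0 family ethernet-switching port-mode access vlan members Mgmt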

Step 2: Divide the SRX into (at least) two virtual-routers. One with your management (fxp0) interfaces (in inet.0), and one with all of your revenue ports (reth's or otherwise):
set routing-instances Traffic instance-type virtual-router
set routing-instances Traffic interface reth0.100
set routing-instances Traffic interface reth1.0
set routing-instances Traffic interface st0.0
set routing-instances Traffic routing-options static route next-hop
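Filled in with RFC 5737 documentation addressing purely for illustration (198.51.100.1 standing in for the upstream gateway), the completed Step 2 stanza might look like:

set routing-instances Traffic instance-type virtual-router
set routing-instances Traffic interface reth0.100
set routing-instances Traffic interface reth1.0
set routing-instances Traffic interface st0.0
set routing-instances Traffic routing-options static route 0.0.0.0/0 next-hop 198.51.100.1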

Step 3: Add the correct routes to the default routing instance (which should now contain only fxp0) so that it has a return path to our SSH proxy regardless of which state the cluster is in (RPD does not run on the standby node). This requires several pieces of configuration.
The next-hop for all fxp0 traffic should be the address of our newly created "Mgmt" reth interface (reth1.0).

a) Create a backup-router statement for the device to read at boot. This takes effect only during the RE boot sequence, before RPD becomes available. Make sure to use host routes (/32) here, with the next-hop set to the address of your reth interface:
set system backup-router
set system backup-router destination
b) Create a mirrored static route matching the backup-router statement. This ensures that the secondary node remains reachable after a failover (backup-router is only read during boot):
set routing-options static route next-hop
set routing-options static route retain
set routing-options static route no-readvertise
c) Create the default route for all other RE (fxp0) traffic to use reth1.0 as well:
set routing-options static route next-hop
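Putting a), b), and c) together with sample addressing (all values are hypothetical: 192.0.2.0/24 as the management VLAN, 192.0.2.1 on reth1.0, and 203.0.113.10 as the SSH proxy), the stanzas might look like:

set system backup-router 192.0.2.1
set system backup-router destination 203.0.113.10/32
set routing-options static route 203.0.113.10/32 next-hop 192.0.2.1
set routing-options static route 203.0.113.10/32 retain
set routing-options static route 203.0.113.10/32 no-readvertise
set routing-options static route 0.0.0.0/0 next-hop 192.0.2.1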
d) Inform the device that all dataplane traffic logs (Accept/Deny/IPS, from the 'security log' configuration) need to be routed via the new virtual-router (Traffic.inet.0) to reach the syslog/SIEM server. This ensures they exit a revenue port rather than burdening the RE:
set routing-options static route IP_of_SYSLOG_Server/32 next-table Traffic.inet.0
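For reference, the 'security log' stanza this route supports typically looks something like the following (the stream name, source address, and collector IP are placeholders):

set security log mode stream
set security log source-address 192.0.2.1
set security log stream SIEM format sd-syslog
set security log stream SIEM host 198.51.100.50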

Step 4: Add appropriate security policies for traffic from the Mgmt zone to the VPN zone (and vice-versa) to permit access to fxp0.
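A minimal policy sketch, using the zone names from this example (the policy name is an assumption, and you would normally tighten source/destination to address-book entries for the proxy and the management subnet):

set security policies from-zone VPN to-zone Mgmt policy Allow-Proxy-to-Fxp0 match source-address any
set security policies from-zone VPN to-zone Mgmt policy Allow-Proxy-to-Fxp0 match destination-address any
set security policies from-zone VPN to-zone Mgmt policy Allow-Proxy-to-Fxp0 match application junos-ssh
set security policies from-zone VPN to-zone Mgmt policy Allow-Proxy-to-Fxp0 then permit

Repeat in the Mgmt-to-VPN direction for traffic initiated from fxp0.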

Step 5: Validate that you have access to both devices and view it on the device (test it with both RG0 and RG1 failover scenarios):
 show security flow session source-prefix node 0

Session ID: 382, Policy name: ANY_ANY_PERMIT/4, State: Active, Timeout: 1800, Valid
  In: -->;tcp, If: st0.0, Pkts: 459, Bytes: 27651
  Out: -->;tcp, If: reth1.0, Pkts: 480, Bytes: 161953
Total sessions: 1
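To exercise both failover scenarios mentioned above, you can force each redundancy group over and back (run from the current primary; reset clears the manual failover state afterwards):

request chassis cluster failover redundancy-group 0 node 1
request chassis cluster failover redundancy-group 1 node 1
request chassis cluster failover reset redundancy-group 0
request chassis cluster failover reset redundancy-group 1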

Step 6 (Optional): Ensure you can still SSH to the master device if the VPN tunnel goes down by adding SSH accessibility on the outside reth interface. Ensure this is locked down appropriately with firewall filters!
set security zones security-zone Outside interfaces reth0.100 host-inbound-traffic system-services ssh
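One way to lock this down is a filter on lo0.0 that permits SSH only from a trusted prefix (203.0.113.0/24 and the filter/term names are placeholders; the final accept term is essential, as lo0 filters end with an implicit deny that would otherwise drop IKE and all other host-bound traffic):

set firewall family inet filter Protect-RE term Allow-Trusted-SSH from source-address 203.0.113.0/24
set firewall family inet filter Protect-RE term Allow-Trusted-SSH from protocol tcp
set firewall family inet filter Protect-RE term Allow-Trusted-SSH from destination-port ssh
set firewall family inet filter Protect-RE term Allow-Trusted-SSH then accept
set firewall family inet filter Protect-RE term Block-Other-SSH from protocol tcp
set firewall family inet filter Protect-RE term Block-Other-SSH from destination-port ssh
set firewall family inet filter Protect-RE term Block-Other-SSH then discard
set firewall family inet filter Protect-RE term Allow-Everything-Else then accept
set interfaces lo0 unit 0 family inet filter input Protect-RE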

Step 7 (Optional): If your cluster requires access to things like public DNS, NTP, or application-services updates (IPS/AppFW, etc.), you'll need to add NAT and security policies so that fxp0 can reach the appropriate resources.
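A hedged sketch of the source NAT such traffic would need as it hairpins from the Mgmt zone out the Outside interface (rule-set/rule names and the 192.0.2.0/24 management prefix are assumptions from the earlier examples):

set security nat source rule-set Mgmt-to-Internet from zone Mgmt
set security nat source rule-set Mgmt-to-Internet to zone Outside
set security nat source rule-set Mgmt-to-Internet rule fxp0-out match source-address 192.0.2.0/24
set security nat source rule-set Mgmt-to-Internet rule fxp0-out then source-nat interface

Pair this with a Mgmt-to-Outside security policy permitting the required applications (e.g. junos-dns-udp, junos-ntp, junos-https).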


  1. Hi Craig,

    I tried to replicate this setup but I have some differences and I am getting stuck. I don't have a VPN peer, everything terminates on an SRX220 cluster with reth0 as the external interface, the VPN terminating at st0.0 and reth2 as the additional mgmt network interface.

    I have created the above additional VR and routes and placed all RE in it. I can't seem to ping across the VPN though and I can't even ping from the SRX host out to the internet. I have the default route in the VR to the public gateway and I had a default route in inet.0 to the reth2 ip but this doesn't seem to work?



  2. Hmm, think I need a bit more info :)

    Do you see the session show up in the flow table? How are the interfaces tied to zones?

  3. Thank you very much for this post. I've been working with SRX firewalls for about 8 years now and have been searching for a legitimate answer to this question. Even JTAC was not able to assist me with this problem.

    The overall solution worked for me, but in my environment I used a Juniper EX switch chassis as the backup router to avoid the need for a separate MGT zone and reth interface on the SRX. The SRX cluster has a route in the Traffic VR to reach the fxp0 management subnet via the EX switch and the EX switch has a default route pointing to the SRX's trust interface. I was concerned about the change moving all interfaces out of the default VR as I was sure this would break something else but I didn't lose any connectivity to my lab environment when implementing.

    Really wish Juniper would take the approach of other firewall vendors and maintain a separate routing table for the OOB by default.

    1. That's great! There are plans to move fxp0 into its own VRF (outside of inet.0), but the target release has yet to be confirmed. It's a *highly* requested feature.