Vyatta – Clustering

The latest subscription release of Vyatta, 2.3, adds clustering capability, which greatly improves the high availability features of the product.

Previously, high availability was limited to VRRP, which worked well but had a couple of issues:

  • You couldn’t use VRRP across VIF interfaces, which made high availability for ‘router on a stick’ solutions tricky.
  • We experienced a few issues with interface bouncing, especially on gigabit interfaces.

VRRP is nonetheless a very nice solution: each virtual address is associated with a virtual MAC address that the currently active router binds to the appropriate interface, so switchover is nearly instantaneous.

The new clustering functionality in Vyatta is based upon the Linux-HA project, which takes a slightly simpler but arguably more effective approach to HA: when a failure is detected, the virtual IP is reassigned to the appropriate interface on the secondary router, and a gratuitous ARP is sent out across the associated network segment to avoid stale ARP cache entries.
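The failure detection itself is simple to reason about: the peers exchange heartbeats at a fixed keepalive interval, and a node declares its partner dead once no heartbeat has arrived within the dead interval (these are the keepalive-interval and dead-interval settings in the cluster configuration). Here is a minimal sketch of that logic in Python; the class and method names are illustrative, not Vyatta's actual implementation:

```python
import time

class HeartbeatMonitor:
    """Declare a peer dead when no heartbeat arrives within dead_interval.

    Illustrative sketch of the keepalive/dead-interval semantics, not
    Vyatta's internal code.
    """

    def __init__(self, keepalive_interval=2, dead_interval=10):
        self.keepalive_interval = keepalive_interval
        self.dead_interval = dead_interval
        self.last_heartbeat = time.monotonic()

    def heartbeat_received(self, now=None):
        # Record the arrival time of a heartbeat from the peer.
        self.last_heartbeat = now if now is not None else time.monotonic()

    def peer_is_dead(self, now=None):
        # True once the peer has been silent for longer than dead_interval.
        now = now if now is not None else time.monotonic()
        return (now - self.last_heartbeat) > self.dead_interval
```

With a keepalive interval of 2 seconds and a dead interval of 10, roughly five consecutive heartbeats must be missed before failover, which avoids flapping on a single lost packet.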

The HA functionality also allows failover of the IPsec VPN service. At the moment this works fairly simplistically, by stopping or starting the service as needed; on the currently inactive router the VPN service, and therefore its tunnels, simply aren’t up.

Let’s take a look at a relatively simple multi-site HA Vyatta solution and the associated configuration.

Vyatta Cluster Example

We have two sites, each with a pair of Vyattas configured as router, VPN endpoint, firewall, and NAT gateway. Behind them is a multi-segment internal network.

Interfaces

ldn-router1 interfaces:

interfaces {
    loopback lo {
        address 10.1.1.251 {
            prefix-length: 24
        }
    }
    ethernet eth0 {
        description: "Internet"
        address 98.76.54.31 {
            prefix-length: 28
        }
    }
    ethernet eth1 {
        description: "Servers"
        address 10.1.10.251 {
            prefix-length: 24
        }
    }
    ethernet eth2 {
        description: "Workstations"
        address 10.1.101.251 {
            prefix-length: 24
        }
    }
}

ldn-router2 interfaces:

interfaces {
    loopback lo {
        address 10.1.1.252 {
            prefix-length: 24
        }
    }
    ethernet eth0 {
        description: "Internet"
        address 98.76.54.32 {
            prefix-length: 28
        }
    }
    ethernet eth1 {
        description: "Servers"
        address 10.1.10.252 {
            prefix-length: 24
        }
    }
    ethernet eth2 {
        description: "Workstations"
        address 10.1.101.252 {
            prefix-length: 24
        }
    }
}

The important thing to notice here is that the virtual ‘active’ addresses aren’t configured on the network interfaces themselves; instead, they are defined later, in the cluster configuration.

The New York site configuration is the same, except of course the IP addresses are changed accordingly.

Cluster
cluster {
    interface eth0
    pre-shared-secret: "!secret!"
    keepalive-interval: 2
    dead-interval: 10
    group "ldn-cluster1" {
        primary: "ldn-router1"
        secondary: "ldn-router2"
        auto-failback: true
        monitor 12.34.56.73
        service "98.76.54.33"
        service ipsec
        service "10.1.10.1"
        service "10.1.101.1"
    }
}

The cluster configuration on each router is identical (unless you want to do something clever, such as running a different routing configuration after failover!). The interface definition specifies the interface over which cluster monitoring takes place. You can have multiple monitors, but a failover will occur if any one of them returns a failure; in some ways this is a help and in some ways a hindrance. Personally, I prefer to monitor a single outside address: if it isn’t reachable, fail over to the secondary, where hopefully it will be (especially if each router uses a different external block).
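The “any monitor failing triggers failover” rule described above amounts to a simple conjunction: the node stays eligible to be active only while every monitored address answers. A tiny illustrative sketch (my own formulation of the behaviour, not Vyatta's internal code):

```python
def all_monitors_up(monitor_results):
    """monitor_results maps a monitored address to True (reachable)
    or False (unreachable).

    The cluster member stays active only while every monitor succeeds;
    a single failure triggers failover, mirroring the behaviour
    described above.
    """
    return all(monitor_results.values())
```

For example, with two monitors configured, `all_monitors_up({"12.34.56.73": True, "198.51.100.1": False})` is False, so the node would fail over even though one address is still reachable.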

When a router becomes the active member of the cluster, it scans the route table for matches to each service IP and assigns the service IP to the appropriate interface. It then sends a gratuitous ARP out of that interface to avoid stale ARP cache entries.
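The interface-selection step can be modelled as a longest-prefix match of each service IP against the router's connected routes. A sketch using Python's ipaddress module, with the connected subnets taken from the example interface configuration above (the matching logic itself is my illustration, not Vyatta's code):

```python
import ipaddress

# Connected routes on ldn-router1, derived from the interface config above.
ROUTES = {
    "eth0": ipaddress.ip_network("98.76.54.16/28"),
    "eth1": ipaddress.ip_network("10.1.10.0/24"),
    "eth2": ipaddress.ip_network("10.1.101.0/24"),
}

def interface_for_service(service_ip, routes=ROUTES):
    """Pick the interface whose connected subnet contains the service IP,
    preferring the most specific (longest-prefix) match."""
    ip = ipaddress.ip_address(service_ip)
    matches = [(net.prefixlen, ifname)
               for ifname, net in routes.items() if ip in net]
    if not matches:
        return None
    return max(matches)[1]
```

For instance, the internal service address 10.1.10.1 falls inside eth1's subnet, so the active router assigns it there before sending the gratuitous ARP.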

Routes

One downside of Vyatta downing the IPsec tunnel on the inactive router is that the router can then only be addressed on its dedicated addresses. For example, if I wanted to do some remote maintenance on ldn-router2 from the New York site while it wasn’t active, the only ways to reach it would be to log onto a machine on the London subnet and go via that, or to use its public external IP (which I probably don’t want publicly accessible anyway).

The solution is very simple, due to the way that VPN route matching works. When making a packet routing decision, Vyatta checks the VPN tunnels for a local/remote subnet match first, and only then consults the routing table. Therefore, if we add a static route on each router sending the whole internal network via its partner, we get a really neat solution:

protocols {
    static {
        route 10.0.0.0/8 {
            next-hop: 10.1.10.252
        }
    }
}

Thus if a router has the VPN tunnel up (i.e. it’s active), it never consults the routing table and traffic goes direct; if it has no VPN tunnel (i.e. it’s passive), it simply forwards the traffic to the active router.
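This ordering, tunnel selectors first and routing table second, can be sketched as a small lookup function. The subnets and next-hop come from the configuration above; the function itself is illustrative, assuming a single tunnel and a single static route:

```python
import ipaddress

# From the configuration above: tunnel 13's selectors and the static route.
TUNNEL_LOCAL = ipaddress.ip_network("10.1.0.0/16")
TUNNEL_REMOTE = ipaddress.ip_network("10.3.0.0/16")
STATIC_ROUTES = [(ipaddress.ip_network("10.0.0.0/8"), "10.1.10.252")]

def next_hop(src, dst, tunnel_up):
    """Mimic the ordering described above: a matching VPN tunnel wins,
    otherwise the routing table is consulted."""
    src, dst = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    # 1. VPN selector match (only possible while the tunnel is up).
    if tunnel_up and src in TUNNEL_LOCAL and dst in TUNNEL_REMOTE:
        return "tunnel-13"
    # 2. Fall back to the static route table.
    for net, hop in STATIC_ROUTES:
        if dst in net:
            return hop
    return "default"
```

On the active router the New York-bound traffic matches the tunnel and goes direct; on the passive router the same traffic falls through to the static route and is handed to the partner at 10.1.10.252.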

VPN

The VPN configuration in a cluster is basically the same as a standard configuration, except that the local and remote public IPs are the cluster service addresses.

vpn {
    ipsec {
        ipsec-interfaces {
            interface eth0
        }
        ike-group "ike-ny" {
            proposal 1 {
                encryption: "aes256"
            }
            lifetime: 3600
        }
        esp-group "esp-ny" {
            proposal 1 {
                encryption: "aes256"
            }
            proposal 2 {
                encryption: "3des"
                hash: "md5"
            }
            lifetime: 1800
        }
        site-to-site {
            peer 12.34.56.73 {
                authentication {
                    pre-shared-secret: "secret"
                }
                ike-group: "ike-ny"
                local-ip: 98.76.54.33
                tunnel 13 {
                    local-subnet: 10.1.0.0/16
                    remote-subnet: 10.3.0.0/16
                    esp-group: "esp-ny"
                }
            }
        }
    }
}

NAT

An easy pitfall in the NAT configuration is forgetting that Vyatta processes source NAT before checking VPN or routing-table matches. The fix is simply to exclude your internal network as a destination in the NAT rule.

nat {
    rule 101 {
        type: "source"
        outbound-interface: "eth0"
        source {
            network: "10.1.101.0/24"
        }
        destination {
            network: "!10.0.0.0/8"
        }
        outside-address {
            address: 98.76.54.31
        }
    }
}
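The effect of rule 101 can be modelled as: translate the source only when the packet leaves eth0, comes from 10.1.101.0/24, and is not destined for the internal 10.0.0.0/8 range. A sketch of the rule's semantics (my illustration, not the actual NAT engine):

```python
import ipaddress

# From NAT rule 101 above.
INSIDE = ipaddress.ip_network("10.1.101.0/24")
EXCLUDED_DST = ipaddress.ip_network("10.0.0.0/8")
OUTSIDE_ADDRESS = "98.76.54.31"

def apply_source_nat(src, dst, out_iface):
    """Return the (possibly translated) source address for a packet,
    mirroring NAT rule 101 above: internal destinations are excluded,
    so VPN-bound traffic keeps its original source and can still match
    the tunnel's local subnet."""
    src_ip, dst_ip = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    if (out_iface == "eth0"
            and src_ip in INSIDE
            and dst_ip not in EXCLUDED_DST):
        return OUTSIDE_ADDRESS
    return src
```

A workstation packet to the Internet is translated to the outside address, while the same workstation's packet to a New York subnet keeps its 10.1.101.x source, and so still matches tunnel 13's local subnet.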

VIFs

As I mentioned earlier, Vyatta’s implementation of VRRP doesn’t allow you to use it on virtual VLAN interfaces, which is frankly a little annoying (although hopefully this will be fixed in the next release).

However, under clustering this works perfectly, as the service IP can match and be assigned to any interface, real or virtual.

Conclusion

Clustering in Vyatta adds just enough simple HA functionality that ‘just works’ to let us deploy far more complex and reliable solutions than was previously possible.

This is also just the tip of the iceberg; in future releases we can expect to see multiple cluster groups (allowing active/active configurations) and extra services added to the failover capability.

Author: Ben King

My name is Ben King, I am a director of an Internet solutions company called bit10 ltd. My ultimate responsibility is to bring in the work that bit10 delivers. However I also do a myriad of other things, for example system design, and administration. Outside work I go out, I drink, I socialise, I cook, I have fun, oh and I play a little bit too much World of Warcraft!
