Client data plane
The PEERING client controller is a set of scripts to ease configuration
and operation of a PEERING client. This document extends the PEERING
client's README file with information specific to the data plane and the
routing of packets. Please read the client's README file first if you
are not familiar with it yet.
Each PEERING mux is mapped to an OpenVPN tunnel (the mapping is built
automatically and stored under var/mux2dev.txt). Routes out of a given
mux will exit through its tunnel device.
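You can inspect the mapping and the corresponding tunnel devices
directly; the exact mux names and tap devices depend on your setup:
# Show which tap device each mux is mapped to
cat var/mux2dev.txt
# List the tap devices created by the OpenVPN tunnels
ip link show | grep tap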
The default BIRD configuration installs best routes into kernel routing
table 151. You can check the installed routes using
ip route show table 151, but beware that (multiple) full routing tables
are very large.
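To avoid dumping the entire table, you can restrict the output to
routes covering a destination of interest (the destination below is just
an example):
# Show only the routes in table 151 that cover a given destination
ip route show table 151 match 8.8.8.8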
The client scripts add the .1 IP address in any announced prefix to the
loopback interface so that the machine replies to any ping or traceroute
probes it receives.
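You can verify this by listing the addresses on the loopback interface;
the prefix below is illustrative:
# If 184.164.224.0/24 is announced, 184.164.224.1 should appear here
ip addr show dev lo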
The client scripts also install source-routing rules to route all
packets with source IP addresses within an announced prefix using table
151 (i.e., through the OpenVPN tunnel). You can check this with
ip rule show.
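For an experiment announcing 184.164.224.0/24 (an illustrative prefix),
you would expect ip rule show to list an entry along these lines (the
priority number will vary):
ip rule show
# <priority>: from 184.164.224.0/24 lookup 151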
Table 151 is provided as a sensible default; it contains BIRD's best
route across all announcements received from muxes. Experimenters,
however, are free to create their own routing tables and route packets
differently.
A common use case is to source-route packets from different prefixes so
that they egress through different muxes. This can be achieved by (i)
creating one routing table per mux, and (ii) modifying the
source-routing rules (using ip rule) to use the new tables.
Step (i) can be accomplished by creating a new routing table with a default route pointing to the mux endpoint on its OpenVPN tunnel (see commands below) or by changing the BIRD configuration to install routes from that mux in a different kernel routing table.
# Get the *local* OpenVPN tunnel address of the mux you want to use as egress
# by running the command below. The remote OpenVPN address is the .1 in the
# same prefix. Also note the tapX device used by the OpenVPN tunnel.
./peering openvpn status
# Create a default route in a new routing table. You can choose any
# unused routing table number (5000 in the command below).
ip route add default via <remote> dev <tapX> table 5000
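You can verify the contents of the new table afterwards (the table
number matches the one chosen above):
# The new table should contain only the default route via the mux endpoint
ip route show table 5000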
Step (ii) can be accomplished by creating a rule to source-route packets from a given prefix using a specific routing table. In the command below, you need to edit the prefix to match the prefix you want to source-route and the routing table number to match the table you configured in step (i).
ip rule add from 184.164.224.0/24 table 5000
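To double-check which table and egress device a packet sourced from the
prefix would use, you can query the kernel directly (the destination
below is illustrative, and the source address must already be assigned
to the loopback as described above):
ip route get 8.8.8.8 from 184.164.224.1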
PEERING allows controlled spoofed traffic. In particular, you can send
packets with source IP addresses belonging to any prefix allocated to
your experiment, regardless of whether it is announced to the mux the
traffic will egress through. In other words, if your experiment is
allocated 184.164.224.0/24, you can announce it from clemson01 and send
packets sourced from 184.164.224.1 out of gatech01 (or any other mux);
in this case, any responses would arrive via clemson01.
Note that PEERING does not allow sending traffic from prefixes not allocated to your experiment (i.e., it does not allow general spoofing).
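As a sketch, assuming 184.164.224.0/24 is allocated to your experiment
and you have created a per-mux table and source-routing rule for
gatech01 as in steps (i) and (ii) above:
# Announce the prefix via clemson01 only
./peering prefix announce -m clemson01 184.164.224.0/24
# Probes sourced from the prefix egress through gatech01's table and
# tunnel, while replies return via clemson01 (destination is illustrative)
ping -I 184.164.224.1 8.8.8.8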
PEERING also allows sending traffic from the LXC container. In this
case, the LXC container acts as a client, enabling it to run commands
such as traceroute or ping, or to send any other type of probe.
Announce a prefix with the -M option to send downstream traffic into
the container:
client# peering prefix announce -M -m mux prefix
# for example: peering prefix announce -M -m amsterdam01 138.185.228.0/24
Configure the container to route egress packets using the mux as the
next hop. Here, veth-remote is the endpoint on the mux's side of the
veth pair. For example, if your container's IP is 100.X.Y.Z, then the
mux's side will have IP 100.X.Y.{Z-1}.
container# ip route add <prefix-announced-by-upstream> via <veth-remote>
# for example: ip route add 187.16.209.0/24 via 100.125.8.5
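To find the container's address on the veth (and hence the mux-side
address) and to confirm the route was installed, run the following
inside the container:
container# ip addr show    # look for the container's 100.X.Y.Z address
container# ip route show   # the route via the mux endpoint should be listed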
You also need to add an IP address from a prefix announced by the
client to the container's loopback interface.
container# ip addr add <announced-prefix>.1 dev lo
# for example: ip addr add 138.185.228.1 dev lo
Finally, send a ping from the LXC container to an IP address announced
by a remote AS. Use the -I option to set the source IP address to the
address added to the loopback interface:
container$ ping -I 138.185.228.1 187.16.209.1
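A traceroute from the container works the same way; as with ping, the
-s option below sets the source address to the one added to the
loopback (addresses are the same illustrative ones as above):
container$ traceroute -s 138.185.228.1 187.16.209.1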
If a packet arrives on an interface, e.g., tap1, and the reply would go
out of a different interface, e.g., tap2, then Linux would filter the
reply and not send it. This is likely to happen on a PEERING client. For
example, if you receive a ping from a host D1 on tap1 but the route in
table 151 toward D1 would send the reply out of tap2 (because that is
the route BIRD preferred toward D1), then the reply would get filtered.
This is called Reverse Path Filtering. Read more about it here. You can
disable the RP filter by running something like:
# Set rp_filter to 2 (loose mode) on all interfaces so replies are not
# dropped when they leave through a different interface than the one the
# request arrived on
for i in /proc/sys/net/ipv4/conf/*/rp_filter ; do
    echo 2 > $i
done
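You can confirm the change took effect by reading the values back:
# All entries should now report 2 (loose mode)
grep -H . /proc/sys/net/ipv4/conf/*/rp_filter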