by Will | May 25, 2018 | Networking
Since I have a Juniper EX2200 performing layer 3 routing for my internal traffic, and my Sonos speaker sits on a separate subnet from some of its clients, I needed to allow multicast across subnets. By creating a loopback interface and using it as my PIM rendezvous point, I was able to get my Windows desktop on 10.1.20.0/24 (wired) to find my Sonos speaker on 10.1.50.0/24 (WiFi).
This is pretty much taken directly from the guide here.
First, I leave the default igmp-snooping configuration alone. If yours doesn’t look like this, then you have modified the default configuration:
will@ex2200> show configuration protocols igmp-snooping
vlan all;
Then I created a loopback interface to use for multicast:
will@ex2200> show configuration interfaces lo0
unit 0 {
    description "Used for SSDP Multicast for Sonos";
    family inet {
        address 10.0.0.200/32;
    }
}
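If you're starting from scratch, the equivalent configuration-mode commands would be (derived directly from the show output above):

set interfaces lo0 unit 0 description "Used for SSDP Multicast for Sonos"
set interfaces lo0 unit 0 family inet address 10.0.0.200/32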
Next, I added the new loopback interface as passive in my OSPF configuration. The export and the other interfaces were already there for something unrelated to this multicast configuration:
will@ex2200> show configuration protocols ospf
export exportdirect;
area 0.0.0.0 {
    interface vlan.100;
    interface vlan.2;
    interface lo0.0 {
        passive;
    }
}
Now I define the IP address of that loopback interface as the rendezvous point and add the necessary VLAN interfaces in sparse mode:
will@ex2200> show configuration protocols pim
rp {
    local {
        address 10.0.0.200;
    }
}
interface lo0.0 {
    mode sparse;
}
interface vlan.20 {
    mode sparse;
}
interface vlan.40 {
    mode sparse;
}
interface vlan.50 {
    mode sparse;
}
I have subnet 10.1.20.0/24 on vlan.20 for trusted wired, 10.1.40.0/24 on vlan.40 for guest wifi, and 10.1.50.0/24 on vlan.50 for trusted wifi. Since Sonos is on vlan.50, I want all three VLANs to share multicast. You can block multicast from guest wifi to trusted wired via a firewall filter.
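Here is a rough, untested sketch of what such a filter might look like — an egress filter on the trusted wired RVI discarding multicast sourced from the guest subnet. The filter and term names are placeholders, and note that egress filter support varies by EX model and Junos release, so verify against your platform:

set firewall family inet filter BLOCK-GUEST-MCAST term guest-mcast from source-address 10.1.40.0/24
set firewall family inet filter BLOCK-GUEST-MCAST term guest-mcast from destination-address 224.0.0.0/4
set firewall family inet filter BLOCK-GUEST-MCAST term guest-mcast then discard
set firewall family inet filter BLOCK-GUEST-MCAST term accept-rest then accept
set interfaces vlan unit 20 family inet filter output BLOCK-GUEST-MCAST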
Note that because I did not specify a multicast group range, the entire 224.0.0.0/4 range is allowed. Sonos only uses SSDP, which lives at 239.255.255.250, so if you want to block every other multicast group, you can limit the range with the following:
set protocols pim rp local group-ranges 239.255.255.250/32
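Either way, you can confirm the RP and watch joins come in with the standard operational commands:

will@ex2200> show pim rps
will@ex2200> show pim join
will@ex2200> show multicast route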
I hope this helps!
by Will | May 20, 2018 | Linux
Note (2018-05-20): This is a first draft and will undergo many revisions.
I used many, many other tutorials, blogs, and Q&A posts to assemble this. Relevant links are sprinkled throughout the guide.
This is a complete, step-by-step tutorial on configuring the following:
- Ubuntu 18.04 install on a server with two NICs
- One NIC for host traffic
- Other NIC for LXC/Docker traffic
- Plex, Sonarr, Radarr, Jackett in Docker on host
- rTorrent, ruTorrent, Flood, and OpenVPN nested in Docker in LXC container on host
Topology from a visual perspective: [diagram]
Topology from a CLI perspective: [diagram]
INSTALL UBUNTU
At the filesystem setup step, on my 120GB SSD:
1. Leave bootloader partition alone
2. I gave 20GB to / partition
3. I gave 60GB to /home partition
4. I left the rest as free space so it can be used later in this guide for our ZFS pool
CREATE USER, ADD TO SUDO GROUP, SWITCH TO USER
adduser will
usermod -aG sudo will
su will
CONFIGURE HOST NETWORK INTERFACE
Get names of network interfaces
ip a
enp1s0f0 is for my host network
enp1s0f1 is for my container network
Edit the existing YAML file
sudo vim /etc/netplan/50-cloud-init.yaml
network:
  version: 2
  ethernets:
    enp1s0f0:
      addresses: [10.1.20.24/24]
      gateway4: 10.1.20.1
      nameservers:
        addresses:
          - 10.1.20.254
          - 10.1.20.253
          - 1.1.1.1
        search:
          - paw.blue
    enp1s0f1:
      dhcp4: false
  # I don't think this bridge is necessary but I could be wrong
  # bridges:
  #   br0:
  #     interfaces: [enp1s0f1]
  #     dhcp4: true
Apply changes
sudo netplan apply
Schedule a script to run at every boot that brings the physical interface used for containers UP with promiscuous mode on. This is currently necessary on Ubuntu 18.04 due to a bug documented here.
mkdir ~/scripts
echo "ip link set enp1s0f1 up && ip link set enp1s0f1 promisc on" > ~/scripts/enp1s0f1.sh
chmod +x ~/scripts/enp1s0f1.sh
The following command will write to crontab so that your script runs as root at boot.
This did not work with only the first sudo, so I threw a bunch of extra sudos in there to make it work. I don’t know if they are all necessary.
sudo crontab -u root -l | { sudo cat; sudo echo "@reboot /home/will/scripts/enp1s0f1.sh"; } | sudo crontab -
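If you'd rather avoid the cron hack, a systemd oneshot unit should accomplish the same thing. This is an untested sketch; the unit name is arbitrary:

sudo tee /etc/systemd/system/enp1s0f1-promisc.service >/dev/null <<'EOF'
[Unit]
Description=Bring enp1s0f1 up in promiscuous mode for container traffic
After=network.target

[Service]
Type=oneshot
# same commands the cron job runs at @reboot
ExecStart=/sbin/ip link set enp1s0f1 up
ExecStart=/sbin/ip link set enp1s0f1 promisc on

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl enable enp1s0f1-promisc.service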
SET UNUSED INTERFACES DOWN
I have extra physical interfaces that I’m not using right now, so I’m shutting them down.
sudo ip link set enp131s0f0 down
sudo ip link set enp131s0f1 down
INTERFACE TROUBLESHOOTING
You can do the following to remove an IP from an interface. For example, I accidentally assigned 10.1.20.24/24 to enp1s0f1 but I want that IP on enp1s0f0 instead:
sudo ip address del 10.1.20.24/24 dev enp1s0f1
You can do the following to restart an interface:
sudo ip link set enp1s0f1 down && sudo ip link set enp1s0f1 up
If you need to fix 127.0.0.53 being in resolv.conf (this happened to me):
sudo rm -f /etc/resolv.conf
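On 18.04 that file is normally a symlink owned by systemd-resolved; after deleting it, you can point it at the copy that contains your real upstream nameservers:

sudo ln -s /run/systemd/resolve/resolv.conf /etc/resolv.conf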
CREATE YOUR HOST UPGRADE SCRIPT
Create update script:
cd ~
vim update.sh
#!/bin/bash
time sudo apt update && sudo apt full-upgrade -y
Make update script executable:
chmod +x update.sh
Run the script to update; you can rerun it whenever you want.
./update.sh
Install whatever common packages you use:
sudo apt install tree unrar ncdu -y
INSTALL ZFS AND IMPORT ZFS POOL FROM HBA CONTROLLER
sudo apt install zfsutils-linux -y
zpool import
zpool import tank
Import zpools at boot. Not necessary if you reference the disk by-id when doing initial import as discussed here.
sudo systemctl enable zfs-import-cache
INSTALL AND CONFIGURE SAMBA
This section is entirely optional. I like being able to access my entire media ZFS pool from Windows.
Install Samba as described here.
sudo apt install -y samba
sudo nano /etc/samba/smb.conf
[tank]
comment = ZFS Pool
path = /tank
read only = no
guest only = no
guest ok = no
share modes = yes
Turn SMB sharing on:
sudo zfs set sharesmb=on tank
Create Samba user and define password
sudo pdbedit -a will
Find out who owns the media folder
ls -l /tank/media
will@ara:~$ ls -l /tank/media
total 499
drwxrwx--- 335 1420 1420    372 May 18 00:56 movies
-rwxrwx---   1 1420 1420 283648 Sep 18  2017 plexpy.db
drwxrwx---   5 1420 1420     16 Jan 25 19:03 radarr
drwxrwx---   2 1420 1420      2 Feb 18 01:36 security
drwxrwx---   5 1420 1420     16 Jan 25 19:01 sonarr
drwxrwx---  55 1420 1420     57 May  7 13:15 tv
I don’t care about UID 1420, but I do want to name the GID 1420 group “media”
sudo groupadd media
sudo groupmod -g 1420 media
Observe changes:
will@ara:~$ ls -l /tank/media
total 499
drwxrwx--- 335 1420 media    372 May 18 00:56 movies
-rwxrwx---   1 1420 media 283648 Sep 18  2017 plexpy.db
drwxrwx---   5 1420 media     16 Jan 25 19:03 radarr
drwxrwx---   2 1420 media      2 Feb 18 01:36 security
drwxrwx---   5 1420 media     16 Jan 25 19:01 sonarr
drwxrwx---  55 1420 media     57 May  7 13:15 tv
Add myself to the media group and restart Samba
sudo adduser will media
sudo systemctl restart smbd nmbd
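Before testing from Windows, you can sanity-check the share locally (this assumes the smbclient package is installed):

smbclient -L localhost -U will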
CONFIGURE LXC PROFILE TO USE MACVLAN
Run lxc without typing sudo every time:
sudo setfacl -m u:will:rwx ~/.config/lxc
Create and edit the YAML file:
lxc profile create lxcnet
lxc profile edit lxcnet
config:
  environment.TZ: "America/Chicago"
description: Creates macvlan bridge for LXC containers
devices:
  eth0:
    name: eth0
    nictype: macvlan
    parent: enp1s0f1
    type: nic
name: lxcnet
used_by: []
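For reference, a container could be launched with this profile layered on top of the default one like so (in this guide I don't actually need to, because lxd init below ends up copying the same settings into the default profile):

lxc launch ubuntu:xenial test -p default -p lxcnet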
CREATE ZFS PARTITION ON LOCAL SSD
Find disk/partition to be used
sudo fdisk -l
Check lxc version. On 18.04 it’s 3.0.0 right now.
lxc info
Start configuration of LXC
sudo lxd init
will@ara:~$ sudo lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]: lxd
Name of the storage backend to use (btrfs, dir, lvm, zfs) [default=zfs]:
Create a new ZFS pool? (yes/no) [default=yes]:
Would you like to use an existing block device? (yes/no) [default=no]: yes
Path to the existing block device: /dev/sdg4
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new network bridge? (yes/no) [default=yes]: no
Would you like to configure LXD to use an existing bridge or host interface? (yes/no) [default=no]: yes
Name of the existing bridge or host interface: enp1s0f1
Is this interface connected to your MAAS server? (yes/no) [default=yes]:
MAAS IPv4 subnet name for this interface (empty for no subnet):
MAAS IPv6 subnet name for this interface (empty for no subnet):
Would you like LXD to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: yes
config: {}
cluster: null
networks: []
storage_pools:
- config:
    source: /dev/sdg4
  description: ""
  name: lxd
  driver: zfs
profiles:
- config: {}
  description: ""
  devices:
    eth0:
      name: eth0
      nictype: macvlan
      parent: enp1s0f1
      type: nic
    root:
      path: /
      pool: lxd
      type: disk
  name: default
The outcome of these steps is that the network configuration from the lxcnet profile is copied to the default profile, and the default profile is populated with the ZFS pool information.
You can see this with the following:
lxc profile show default
will@ara:~$ lxc profile show default
config: {}
description: ""
devices:
  eth0:
    name: eth0
    nictype: macvlan
    parent: enp1s0f1
    type: nic
  root:
    path: /
    pool: lxd
    type: disk
name: default
used_by: []
INCREASE FILE AND INOTIFY LIMITS
sudo vim /etc/sysctl.conf
fs.inotify.max_queued_events = 1048576
fs.inotify.max_user_instances = 1048576
fs.inotify.max_user_watches = 1048576
sudo vim /etc/security/limits.conf
* soft nofile 100000
* hard nofile 100000
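If you want the sysctl changes active before the reboot below, you can load them immediately:

sudo sysctl -p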
Now reboot:
sudo reboot
VERIFY HOST NETWORK, LXD NETWORK, AND ZFS
Observe your primary network interface matches what you set in /etc/netplan/50-cloud-init.yaml
ip a
Note that a pool has been created with datasets:
sudo zfs list
CREATE AN LXC CONTAINER FOR TORRENT/VPN
Note: this is a stock Ubuntu 16.04 container. I'm naming it 'torrent'.
lxc launch ubuntu:xenial torrent
See that the container has started:
lxc list
Look for the MAC address in the container:
lxc config show --expanded torrent
In my case, I see the following:
volatile.eth0.hwaddr: 00:16:3e:e1:65:36
On my DHCP server, I create a new MAC reservation:
Name: torrent
IP: 10.1.20.12
MAC: 00:16:3e:e1:65:36
Enter the torrent container:
lxc exec torrent bash
Remove the dynamic IP so you can get the static one assigned
ip addr flush dev eth0
rm /var/lib/dhcp/dhclient.eth0.leases
dhclient -r; dhclient
You should see the DHCP-assigned static IP address:
ip a
Now exit the container
exit
root@torrent:~# exit
exit
will@ara:~$
MOUNT HOST DIRECTORY INTO LXC CONTAINER
Stop it if it’s running:
lxc stop torrent
Make it privileged to avoid file ownership issues as noted here:
lxc config set torrent security.privileged true
Mount /tank/downloads to /downloads:
lxc config device add torrent downloads disk source=/tank/downloads path=/downloads
Allow Docker inside LXD container:
lxc config set torrent security.nesting true
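You can confirm the privileged/nesting flags and the new disk device all took effect:

lxc config show torrent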
CREATE NON-ROOT USER AND ASSIGN PRIVILEGES
Start the container and enter it:
lxc start torrent
lxc exec torrent bash
Create user in the container and assign permissions:
adduser will
usermod -aG sudo will
groupadd media
adduser will media
usermod -u 1420 will
groupmod -g 1420 media
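The point of the UID/GID juggling: since the container is privileged, IDs inside it map straight through to the host, so matching 'will' and 'media' to 1420 means files written under the mounted /downloads carry the same ownership the host's media group already uses. Verify with:

id will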
Switch to the user and set up .bashrc so you can run sudo as documented here:
su will
vim ~/.bashrc
Load the new .bashrc:
source ~/.bashrc
INSTALL DOCKER IN LXC CONTAINER
Instructions pulled from here.
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt update
sudo apt-get install -y docker-ce
Start it and enable it to start at boot:
sudo systemctl start docker
sudo systemctl enable docker
Let user do docker things without typing sudo:
sudo gpasswd -a will docker
sudo service docker restart
sudo systemctl enable docker
Leave and come back:
exit
su will
INSTALL PIA VPN AND TORRENT DOCKER IN LXC
Enter container if you aren’t already in it:
lxc exec torrent bash
Prepare host (LXC container) and create torrent config directory:
mkdir -p ~/torrent/config/openvpn
mkdir ~/torrent/openvpn_all
cd ~/torrent/openvpn_all
wget https://www.privateinternetaccess.com/openvpn/openvpn.zip
sudo apt install unzip
unzip openvpn.zip
Copy only the key, cert, and ovpn file you want to use:
cp *.crt ~/torrent/config/openvpn
cp *.pem ~/torrent/config/openvpn
cp US\ Midwest.ovpn ~/torrent/config/openvpn
Create and run torrent docker:
#!/bin/bash
docker run -d \
    --cap-add=NET_ADMIN \
    -p 9080:9080 \
    -p 9443:9443 \
    -p 8118:8118 \
    -p 3000:3000 \
    --name=torrent \
    -v /home/will/torrent/config:/config \
    -v /downloads:/downloads \
    -v /etc/localtime:/etc/localtime:ro \
    -e VPN_ENABLED=yes \
    -e VPN_USER=yourusername \
    -e VPN_PASS=yourpassword \
    -e VPN_PROV=pia \
    -e STRICT_PORT_FORWARD=no \
    -e ENABLE_PRIVOXY=no \
    -e ENABLE_FLOOD=both \
    -e ENABLE_AUTODL_IRSSI=yes \
    -e LAN_NETWORK=10.1.20.0/24 \
    -e NAME_SERVERS=10.1.20.254,10.1.20.253,208.67.222.222,1.1.1.1 \
    -e DEBUG=true \
    -e PHP_TZ=America/Chicago \
    -e UMASK=000 \
    -e PUID=1420 \
    -e PGID=1420 \
    --restart=always \
    binhex/arch-rtorrentvpn
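Before pointing anything at it, it's worth confirming the tunnel actually came up inside the container — a couple of checks (assuming the image ships the ip utility, which an Arch base should):

docker logs -f torrent                       # watch for OpenVPN "Initialization Sequence Completed"
docker exec -it torrent ip addr show tun0    # the tunnel interface should exist

Once it's healthy, ruTorrent should answer on port 9080 and Flood on port 3000 at the container's IP (10.1.20.12 here).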
If you need to add a flag on the fly to a running container, here’s an example:
docker update --restart=always torrent
Troubleshooting Docker:
docker events&
docker start sonarr
docker logs 345fceb9d7589a51c6b2d40c4b84c2e7b4e23463a363a24d7bd47fffd3dec013
Enter a Docker container for troubleshooting:
docker exec -it torrent /bin/bash
INSTALL AND SET UP DOCKER ON HOST
Install Docker in Ubuntu 18.04:
curl -fsSL test.docker.com | sh
Create a macvlan network for the containers:
docker network create -d macvlan \
--subnet=10.1.20.0/24 \
--gateway=10.1.20.1 \
-o parent=enp1s0f1 mvdock0
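A note on why the dedicated NIC matters here: a Linux host can't talk to macvlan children through the parent interface itself. With host traffic on enp1s0f0 and the macvlan hanging off enp1s0f1, that limitation never bites, since host-to-container traffic hairpins through the switch. If you were stuck with a single NIC, the usual workaround is a macvlan shim on the host — a sketch, with the interface name and addresses as placeholders:

# create a macvlan interface on the host itself, sharing the same parent
sudo ip link add mvshim0 link enp1s0f1 type macvlan mode bridge
sudo ip addr add 10.1.20.250/32 dev mvshim0
sudo ip link set mvshim0 up
# route a container's IP via the shim
sudo ip route add 10.1.20.11/32 dev mvshim0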
Create and run Plex container (source):
docker run -id \
--name plex \
--network=mvdock0 \
--ip=10.1.20.11 \
-h plex \
-e VERSION=latest \
-e TZ="America/Chicago" \
-e ADVERTISE_IP="http://plex.paw.blue:32400/" \
-e PLEX_UID=1420 -e PLEX_GID=1420 \
-v /tank/plexdata/config:/config \
-v /tank/media/tv:/tv \
-v /tank/media/movies:/movies \
-v /tank/media/education:/education \
-v /tank/transcode:/transcode \
plexinc/pms-docker
Sonarr:
docker run -d \
--name sonarr \
--network=mvdock0 \
--ip=10.1.20.15 \
-p 8989:8989 \
-e NAME_SERVERS=10.1.20.254,10.1.20.253,208.67.222.222,1.1.1.1 \
-e PUID=1420 -e PGID=1420 \
-e TZ=America/Chicago \
-e DEBUG=false \
-v /etc/localtime:/etc/localtime:ro \
-v /home/will/sonarr/config:/config \
-v /tank/downloads:/downloads \
-v /tank/media/tv:/tv \
--restart=always \
linuxserver/sonarr
Radarr:
docker run -d \
--name=radarr \
--network=mvdock0 \
--ip=10.1.20.16 \
-p 7878:7878 \
-e NAME_SERVERS=10.1.20.254,10.1.20.253,208.67.222.222,1.1.1.1 \
-e PGID=1420 -e PUID=1420 \
-e TZ=America/Chicago \
-v /etc/localtime:/etc/localtime:ro \
-v /home/will/radarr/config:/config \
-v /tank/downloads:/downloads \
-v /tank/media/movies:/movies \
--restart=always \
linuxserver/radarr
Jackett:
docker run -d \
--name=jackett \
--network=mvdock0 \
--ip=10.1.20.17 \
-p 9117:9117 \
-e NAME_SERVERS=10.1.20.254,10.1.20.253,208.67.222.222,1.1.1.1 \
-v /home/will/jackett/config:/config \
-v /tank/downloads:/downloads \
-e PGID=1420 -e PUID=1420 \
-e TZ=America/Chicago \
-v /etc/localtime:/etc/localtime:ro \
--restart=always \
linuxserver/jackett
DOCKER EXPERIMENTAL THINGS
If you want to use an ipvlan instead of a macvlan in Ubuntu 18.04, you will have to start docker in experimental mode.
Enabling experimental mode:
dockerd --experimental=true
Examples of layer 2 and layer 3 ipvlan networks:
docker network create -d ipvlan \
    --subnet=10.1.20.0/24 \
    -o ipvlan_mode=l3 \
    -o parent=enp1s0f1 ipdock0

docker network create -d ipvlan \
    --subnet=10.1.20.0/24 \
    --gateway=10.1.20.1 \
    -o ipvlan_mode=l2 ipvlan20
EXPLANATIONS ARE IN ORDER
Q: Why don’t you just run the vpn/torrent docker container on the host?
A: I tried to do that with the --network=mvdock0 and --ip flags, but it wasn't working. I think it has something to do with the way the VPN influences the network connection. By making the vpn/torrent Docker container use the "host" network and having the "host" actually be an LXC container, I can still ensure this traffic passes through enp1s0f1 on the physical host.
Q: Why not do all LXC or all Docker? Why mix and match?
A: First, so I could learn both. Second, Docker makes it easy to get my applications up and running.
Third, my inspiration for this project was this amazing post by Jason Bayton, and I loved the idea of hosting LXC containers in ZFS. As it turns out, I only hosted one. But perhaps more soon!
SUMMARY
This took me many hours to assemble, as I had very little LXC or Docker experience before setting out on this journey. I’m sure people will point out many, many flaws in this tutorial. Please comment so that I can fix them!
If this guide helped you, please consider a small crypto donation!
BTC: 3LypUnvR1RktXqF4FAA7WT5tCZXxjSDH1f
ETH: 0xc23D1cb22324873Bf7dc4e3FFaD81621Ce50F8EC
LTC: MJb1YqnFSdPkM6vaX1ApnGz96donGZfAU3