Docker Compose NAS
After searching for the perfect NAS solution, I realized what I wanted could be achieved with some Docker containers on a vanilla Linux box. The result is an opinionated Docker Compose configuration capable of browsing indexers to retrieve media resources and downloading them through a WireGuard VPN with port forwarding. SSL certificates and remote access through Tailscale are supported.
Requirements: Any Docker-capable recent Linux box with Docker Engine and Docker Compose V2. I am running it in Ubuntu Server 22.04; I also tested this setup on a Synology DS220+ with DSM 7.1.
Applications
Application | Description | Image | URL |
---|---|---|---|
Sonarr | PVR for newsgroup and BitTorrent users | linuxserver/sonarr | /sonarr |
Radarr | Movie collection manager for Usenet and BitTorrent users | linuxserver/radarr | /radarr |
Prowlarr | Indexer aggregator for Sonarr and Radarr | linuxserver/prowlarr:latest | /prowlarr |
PIA WireGuard VPN | Encapsulates qBittorrent traffic in PIA using WireGuard with port forwarding | thrnz/docker-wireguard-pia | |
qBittorrent | BitTorrent client with a complete web UI; uses the VPN network; uses libtorrent 1.x | linuxserver/qbittorrent:libtorrentv1 | /qbittorrent |
Jellyfin | Media server designed to organize, manage, and share digital media files to networked devices | linuxserver/jellyfin | /jellyfin |
Heimdall | Application dashboard | linuxserver/heimdall | / |
Traefik | Reverse proxy | traefik | |
Watchtower | Automated Docker image updates | containrrr/watchtower | |
SABnzbd | Optional - Free and easy binary newsreader | linuxserver/sabnzbd | /sabnzbd |
FlareSolverr | Optional - Proxy server to bypass Cloudflare protection in Prowlarr | flaresolverr/flaresolverr | |
AdGuard Home | Optional - Network-wide software for blocking ads & tracking | adguard/adguardhome | |
DHCP Relay | Optional - Docker DHCP relay | modem7/dhcprelay | |
Traefik Certs Dumper | Optional - Dumps ACME data from Traefik to certificates | ldez/traefik-certs-dumper | |
Optional containers are not run by default; they need to be enabled. See Optional Services for more information.
Quick Start
Copy the example environment file with `cp .env.example .env`, edit it to your needs, then run `sudo docker compose up -d`.
The first time, run `./update-config.sh` to update the applications' base URLs.
Environment Variables
Variable | Description | Default |
---|---|---|
COMPOSE_FILE | Docker Compose files to load | docker-compose.yml |
COMPOSE_PATH_SEPARATOR | Path separator between Compose files to load | : |
USER_ID | ID of the user to use in Docker containers | 1000 |
GROUP_ID | ID of the user group to use in Docker containers | 1000 |
TIMEZONE | Time zone used by the containers | America/New_York |
DATA_ROOT | Host location of the data files | /mnt/data |
DOWNLOAD_ROOT | Host download location for qBittorrent; should be a subfolder of DATA_ROOT | /mnt/data/torrents |
PIA_LOCATION | Servers to use for PIA | ca (Montreal, Canada) |
PIA_USER | PIA username | |
PIA_PASS | PIA password | |
PIA_LOCAL_NETWORK | PIA local network | 192.168.0.0/16 |
HOSTNAME | Hostname of the NAS; can be a local IP or a domain name | localhost |
ADGUARD_HOSTNAME | AdGuard Home hostname, if enabled | |
DNS_CHALLENGE | Enable/disable the DNS01 challenge; set to false to disable | true |
DNS_CHALLENGE_PROVIDER | Provider for the DNS01 challenge; see the list of supported providers in the Traefik documentation | cloudflare |
LETS_ENCRYPT_CA_SERVER | Let's Encrypt CA server used to generate certificates; set to production by default. Set to https://acme-staging-v02.api.letsencrypt.org/directory to test your changes with the staging server | https://acme-v02.api.letsencrypt.org/directory |
LETS_ENCRYPT_EMAIL | Email address used to send expiration notifications | |
CLOUDFLARE_EMAIL | Cloudflare account email | |
CLOUDFLARE_DNS_API_TOKEN | API token with DNS:Edit permission | |
CLOUDFLARE_ZONE_API_TOKEN | API token with Zone:Read permission | |
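For reference, a filled-in `.env` might look like the following sketch; every value here (domain, credentials, tokens) is an illustrative placeholder to replace with your own:

```shell
# Example .env — all values are placeholders
USER_ID=1000
GROUP_ID=1000
TIMEZONE=America/New_York
DATA_ROOT=/mnt/data
DOWNLOAD_ROOT=/mnt/data/torrents
PIA_LOCATION=ca
PIA_USER=p0000000
PIA_PASS=changeme
PIA_LOCAL_NETWORK=192.168.0.0/16
HOSTNAME=nas.domain.com
DNS_CHALLENGE=true
DNS_CHALLENGE_PROVIDER=cloudflare
LETS_ENCRYPT_EMAIL=you@example.com
CLOUDFLARE_EMAIL=you@example.com
CLOUDFLARE_DNS_API_TOKEN=changeme
CLOUDFLARE_ZONE_API_TOKEN=changeme
```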
PIA WireGuard VPN
I chose PIA since it supports WireGuard and port forwarding, but you could use other providers:
- OpenVPN: linuxserver/openvpn-as
- WireGuard: linuxserver/wireguard
- NordVPN + OpenVPN: bubuntux/nordvpn
- NordVPN + WireGuard (NordLynx): bubuntux/nordlynx
For PIA + WireGuard, fill `.env` with your PIA credentials. The location of the server it will connect to is set by `LOC=ca`, defaulting to Montreal, Canada.
Sonarr & Radarr
File Structure
Sonarr and Radarr must be configured to support hardlinks, to allow instant moves and prevent using twice the storage (Bittorrent downloads and final file). The trick is to use a single volume shared by the Bittorrent client and the *arrs. Subfolders are used to separate the TV shows from the movies.
The configuration is well explained by this guide.
In summary, the final structure of the shared volume will be as follows:
data
├── torrents = shared folder for qBittorrent downloads
│  ├── movies = movie downloads tagged by Radarr
│  └── tv = TV show downloads tagged by Sonarr
└── media = shared folder for Sonarr and Radarr files
   ├── movies = Radarr
   └── tv = Sonarr
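Assuming the default `DATA_ROOT=/mnt/data` from `.env` (adjust the path if yours differs), the layout above can be created in one command:

```shell
#!/bin/sh
# Create the shared data layout used by qBittorrent, Sonarr, and Radarr.
# DATA_ROOT defaults to /mnt/data, the value from .env.example.
DATA_ROOT="${DATA_ROOT:-/mnt/data}"
mkdir -p "$DATA_ROOT/torrents/movies" "$DATA_ROOT/torrents/tv" \
         "$DATA_ROOT/media/movies" "$DATA_ROOT/media/tv"
```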
Go to Settings > Management.
In Sonarr, set the Root folder to `/data/media/tv`.
In Radarr, set the Root folder to `/data/media/movies`.
Download Client
qBittorrent can then be configured at Settings > Download Clients. Because all of qBittorrent's networking takes place in the VPN container, the qBittorrent hostname is the hostname of the VPN container, i.e. `vpn`, and the port is `8080`.
Prowlarr
The indexers are configured through Prowlarr; they synchronize automatically to Radarr and Sonarr.
Radarr and Sonarr can then be added via Settings > Apps. The Prowlarr server is `http://prowlarr:9696/prowlarr`, the Radarr server is `http://radarr:7878/radarr`, and the Sonarr server is `http://sonarr:8989/sonarr`.
Their API keys can be found in Settings > Security > API Key.
qBittorrent
Set the default save path to `/data/torrents` in Settings, and restrict the network interface to WireGuard (`wg0`).
The web UI login page can be disabled for the local network in Settings > Web UI > Bypass authentication for clients on:
192.168.0.0/16
127.0.0.0/8
172.17.0.0/16
Jellyfin
To enable hardware transcoding, depending on your system, you may need to update the following block:
devices:
  - /dev/dri/renderD128:/dev/dri/renderD128
  - /dev/dri/card0:/dev/dri/card0
Generally, when running Docker on Linux you will want to use VA-API, but the exact device paths may differ depending on your hardware.
Traefik and SSL Certificates
While you can use the private IP to access your NAS, how cool would it be for it to be accessible through a subdomain with a valid SSL certificate?
Traefik makes this trivial by using Let's Encrypt and one of its supported ACME challenge providers.
Let's assume we are using `nas.domain.com` as a custom subdomain. The idea is to create an A record pointing to the private IP of the NAS, `192.168.0.10` for example:
nas.domain.com. 1 IN A 192.168.0.10
The record will be publicly visible, but it resolves to a private IP, so it is only reachable from your local network.
Since the NAS is not accessible from the internet, we need to use a DNS01 challenge. Here we will be using Cloudflare, but the mechanism is the same for all DNS providers, barring environment variable changes; see the Traefik documentation above and Lego's documentation.
Then, fill in the Cloudflare entries in `.env`.
If you want to test your configuration first, use the Let's Encrypt staging server by updating the value of `LETS_ENCRYPT_CA_SERVER` in `.env`:
LETS_ENCRYPT_CA_SERVER=https://acme-staging-v02.api.letsencrypt.org/directory
If it worked, you will see the staging certificate at https://nas.domain.com.
You may remove the `./letsencrypt/acme.json` file and restart the services to issue the real certificate.
You are free to use any DNS01 provider: simply replace `DNS_CHALLENGE_PROVIDER` with your own provider (see the complete list in the Traefik documentation). You will also need to inject the environment variables specific to your provider.
Certificate generation can be disabled by setting `DNS_CHALLENGE` to `false`.
Accessing from the outside with Tailscale
If we want to make it reachable from outside the network without opening ports or exposing it to the internet, I found Tailscale to be a great solution: create a network, run the client on both the NAS and the device you are connecting from, and they will see each other.
In this case, the A record should point to the IP Tailscale assigned to the NAS, e.g. `100.xxx.xxx.xxx`:
nas.domain.com. 1 IN A 100.xxx.xxx.xxx
See here for installation instructions.
However, this means you will always need to be connected to Tailscale to access your NAS, even locally.
This can be remedied by overriding the DNS entry for the NAS domain (e.g. `192.168.0.10 nas.domain.com`) in your local DNS resolver, such as Pi-hole.
This way, when connected to the local network, the NAS is accessible directly via its private IP; from the outside, connect to Tailscale first and the NAS domain will be reachable.
Optional Services
As their name suggests, optional services are not launched by default. Each has its own docker-compose.yml file in its subfolder. To enable a service, append its Compose file to the `COMPOSE_FILE` environment variable.
Say you want to enable FlareSolverr: you should have `COMPOSE_FILE=docker-compose.yml:flaresolverr/docker-compose.yml`
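Since `COMPOSE_PATH_SEPARATOR` defaults to `:`, enabling several optional services is just a matter of joining their Compose files with that separator. A small sketch (the chosen services are only examples):

```shell
#!/bin/sh
# Build a COMPOSE_FILE value that enables FlareSolverr and SABnzbd
# on top of the base docker-compose.yml.
COMPOSE_PATH_SEPARATOR=":"
COMPOSE_FILE="docker-compose.yml"
COMPOSE_FILE="${COMPOSE_FILE}${COMPOSE_PATH_SEPARATOR}flaresolverr/docker-compose.yml"
COMPOSE_FILE="${COMPOSE_FILE}${COMPOSE_PATH_SEPARATOR}sabnzbd/docker-compose.yml"
echo "$COMPOSE_FILE"
# → docker-compose.yml:flaresolverr/docker-compose.yml:sabnzbd/docker-compose.yml
```

The resulting value goes in `.env`, where Docker Compose picks it up automatically.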
FlareSolverr
In Prowlarr, add FlareSolverr as an indexer proxy with the URL `http://flaresolverr:8191/`
SABnzbd
Enable SABnzbd by setting `COMPOSE_FILE=docker-compose.yml:sabnzbd/docker-compose.yml`. It will be accessible at `/sabnzbd`.
If that is not the case, set the `url_base` parameter in `sabnzbd.ini` to `/sabnzbd`.
AdGuard Home
Set `ADGUARD_HOSTNAME`; I chose a different subdomain in order to use secure DNS without the folder.
On first run, specify port 3000 and enable listening on all interfaces to make it work with Tailscale.
If, after running `docker compose up -d`, you get `network docker-compose-nas declared as external, but could not be found`, run `docker network create docker-compose-nas` first.
Encryption
In Settings > Encryption Settings, set the certificate path to `/opt/adguardhome/certs/certs/<YOUR_HOSTNAME>.crt` and the private key path to `/opt/adguardhome/certs/private/<YOUR_HOSTNAME>.key`. Those files are created by Traefik Certs Dumper from the ACME certificates Traefik generates in JSON.
DHCP
If you want to use the AdGuard Home DHCP server, for example because your router does not allow changing its DNS server, you will need to select the `eth0` DHCP interface matching `10.0.0.10`, then set the Gateway IP to your router address (`192.168.0.1`, for example) and define a range of IP addresses to assign to local devices.
In `adguardhome/docker-compose.yml`, set the network interface `dhcp-relay` should listen on. By default it is set to `enp2s0`, but you may need to change it to your host's network interface; verify it with `ip a`.
In the configuration (`adguardhome/conf/AdGuardHome.yaml`), set DHCP option 6 (DNS servers) to your NAS internal IP address:
dhcp:
  dhcpv4:
    options:
      - 6 ips 192.168.0.10,192.168.0.10
Expose DNS Server with Tailscale
Based on Tailscale's documentation, it is easy to use your AdGuard server everywhere. Just make sure that AdGuard Home listens to all interfaces.
Customization
You can override the configuration of a service, or add new services, by creating a new docker-compose.override.yml file and appending it to the `COMPOSE_FILE` environment variable: `COMPOSE_FILE=docker-compose.yml:docker-compose.override.yml`
For example, to use a different VPN provider:
version: '3.9'
services:
  vpn:
    image: ghcr.io/bubuntux/nordvpn
    cap_add:
      - NET_ADMIN # Required
      - NET_RAW # Required
    environment: # Review https://github.com/bubuntux/nordvpn#environment-variables
      - USER=user@email.com # Required
      - "PASS=pas$$word" # Required; escape $ as $$ so Compose does not interpolate it
      - CONNECT=United_States
      - TECHNOLOGY=NordLynx
      - NETWORK=192.168.1.0/24 # So it can be accessed within the local network
Synology Quirks
Docker Compose NAS can run on DSM 7.1 with a few extra steps.
Free Ports 80 and 443
By default, ports 80 and 443 are used by Nginx without serving anything useful. Free them by creating a new task in Task Scheduler > Create > Triggered Task > User-defined script. Leave the Event as Boot-up and the user as root, then go to Task Settings and paste the following in User-defined script:
sed -i -e 's/80/81/' -e 's/443/444/' /usr/syno/share/nginx/server.mustache /usr/syno/share/nginx/DSM.mustache /usr/syno/share/nginx/WWWService.mustache
synosystemctl restart nginx
Install Synology WireGuard
Since WireGuard is not part of DSM's kernel, an external package must be installed for the `vpn` container to run.
For DSM 7.1, download and install the package corresponding to your NAS CPU architecture from here.
As specified in the project's README, the package must be run as root from the command line: `sudo /var/packages/WireGuard/scripts/start`
Free Port 1900
Jellyfin will fail to run by default since port 1900 is not free.
You may free it by going to Control Panel > File Services > Advanced > SSDP and unticking Enable Windows network discovery.
User Permissions
By default, the user and group IDs are set to 1000, the default on Ubuntu and many other Linux distributions.
That is not the case on Synology: the first user usually has a user ID of 1026 and a group ID of 100. You may check yours with `id`.
Update `USER_ID` and `GROUP_ID` in `.env` with your IDs; not updating them may result in permission issues.
USER_ID=1026
GROUP_ID=100
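The correct values for your own system can be printed directly in `.env` format; a small helper sketch:

```shell
#!/bin/sh
# Print the current user's numeric user and group IDs,
# ready to paste into .env.
echo "USER_ID=$(id -u)"
echo "GROUP_ID=$(id -g)"
```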
Synology DHCP Server and Adguard Home Port Conflict
If you are using the Synology DHCP Server package, it will use port 53 even if it does not need it. This is because it uses Dnsmasq to handle DHCP requests, but does not serve DNS queries. The port can be released by editing (as root) `/usr/local/lib/systemd/system/pkg-dhcpserver.service` and adding `-p 0`:
ExecStart=/var/packages/DhcpServer/target/dnsmasq-2.x/usr/bin/dnsmasq --user=DhcpServer --group=DhcpServer --cache-size=200 --conf-file=/etc/dhcpd/dhcpd.conf --dhcp-lease-max=2147483648 -p 0
Reboot the NAS and port 53 will be free for AdGuard Home.
Use Separate Paths for Torrents and Storage
If you want to use separate paths for torrent downloads and long-term storage, for example to use different disks, set your docker-compose.override.yml to:
version: "3.9"
services:
  sonarr:
    volumes:
      - ./sonarr:/config
      - ${DATA_ROOT}/media/tv:/data/media/tv
      - ${DOWNLOAD_ROOT}/tv:/data/torrents/tv
  radarr:
    volumes:
      - ./radarr:/config
      - ${DATA_ROOT}/media/movies:/data/media/movies
      - ${DOWNLOAD_ROOT}/movies:/data/torrents/movies
Note that you will lose the hardlink ability, i.e. your files will be duplicated.
In Sonarr and Radarr, go to Settings > Importing and untick Use Hardlinks instead of Copy.
NFS Share
This can be useful to share the media folder to a local player like Kodi or computers in the local network, but may not be necessary if Jellyfin is going to be used to access the media.
Install the NFS kernel server: sudo apt install nfs-kernel-server
Then edit /etc/exports
to configure your shares:
/mnt/data/media 192.168.0.0/255.255.255.0(rw,all_squash,nohide,no_subtree_check,anonuid=1000,anongid=1000)
This will share the media folder with anybody on your local network (192.168.0.x).
I purposely left out the `sync` flag, which would slow down file transfers.
On some devices you may need to add the `insecure` option for the share to be available.
Restart the NFS server to apply the changes: `sudo /etc/init.d/nfs-kernel-server restart`
On other machines, you can mount the shared folder by adding the following to your `/etc/fstab`:
Static IP
Set a static IP, assuming 192.168.0.10
and using Google DNS servers: sudo nano /etc/netplan/00-installer-config.yaml
# This is the network config written by 'subiquity'
network:
  ethernets:
    enp2s0:
      dhcp4: no
      addresses:
        - 192.168.0.10/24
      gateway4: 192.168.0.1
      nameservers:
        addresses: [8.8.8.8, 8.8.4.4]
  version: 2
Apply the plan with `sudo netplan apply`. You can check that the server uses the right IP with `ip a`.
Laptop Specific Configuration
If the server is installed on a laptop, you may want to disable the suspension when the lid is closed:
sudo nano /etc/systemd/logind.conf
Replace `#HandleLidSwitch=suspend` with `HandleLidSwitch=ignore`, and `#LidSwitchIgnoreInhibited=yes` with `LidSwitchIgnoreInhibited=no`.
Then restart: sudo service systemd-logind restart