From TrueNAS CORE to TrueNAS SCALE

I finally bit the bullet. After years of running my trusty TrueNAS CORE server, I decided it was time to make the jump to TrueNAS SCALE — the newer, Linux-based version that’s now iXsystems’ main focus.

When CORE officially moved into “sustaining engineering” in early 2024 — meaning no more new features, just maintenance and security updates — I knew I had to migrate sooner rather than later. By April 2025, with the release of SCALE 25.04 and the final CORE update (13.3-U1.2), the task became even more pressing. However, I knew upgrading would take quite some effort, so I kept postponing it. Until last week, when the wife was out of town and I could work undisturbed deep into the night for a couple of days while the kids were in bed.

TrueNAS? Core? Scale?

Hold on, not so fast, I hear you think. What even are these programs I’m talking about, and why are they relevant?

All right, in September 2021 I bought myself a TrueNAS Mini X+ as my new private NAS server. After previously running my website on an Intel NUC, I quietly upgraded to a proper NAS. I kept the NUC solely for Home Assistant, and moved the rest of my smart home stack to the new Mini X+. TrueNAS CORE was the Operating System (OS) I used for my NUC as a home server, so it made sense to continue using it on my new hardware.

TrueNAS CORE is an open-source OS specifically designed for servers and network-attached storage (NAS). It’s built upon FreeBSD and is widely praised for its rock-solid stability.


TrueNAS Mini X+

Also around this time, iXsystems — the company behind TrueNAS — released the first public beta of TrueNAS SCALE, a new OS built upon Linux. In their early announcements, they explicitly wrote: “Production users with standard NAS (NFS, SMB, iSCSI, S3) requirements are still advised to use TrueNAS CORE … SCALE has inherited some of that maturity … but has not completed its software quality lifecycle.”

That was fine by me. I already knew my way around CORE and had a good, stable NAS system running, which I simply migrated to my new hardware.

FreeBSD offers a way to run isolated extra applications through a feature called jails. Programs in jails operate independently from the host system — like mini-sandboxes — but share the same kernel as the host, making them far more efficient than full virtual machines.

Over the last four years, I’d built up a stack of nine jails hosting a total of fourteen different programs:

  • Adguard – blocking ads on my home network.
  • Grafana – fancy graphs for showing smart-home data.
  • WordPress – this website.
  • Mosquitto – MQTT server for collecting smart-home data.
  • Nextcloud – a private alternative to Dropbox or Google Drive.
  • Plex – a private alternative to Netflix.
  • Handbrake – ripping DVDs for my Plex library.
  • Radarr/Sonarr/Readarr/Prowlarr/Transmission – automated torrent indexing and downloading.
  • Nginx Reverse Proxy – allowing access from the internet to my internal programs.

However, as time went on, TrueNAS SCALE matured and began to outpace CORE in terms of features, app support, and active development. By early 2024, iXsystems had officially confirmed that CORE would only receive critical fixes going forward, while SCALE would be the focus for all new functionality. Then, in April 2025, they released SCALE 25.04 alongside what was announced as the final CORE 13.3 update — version 13.3-U1.2 — effectively marking the end of the CORE development line.

And that gave me a bit of a problem.

See, migrating from CORE to SCALE is usually pretty painless — the official tools can migrate nearly all system settings and file shares automatically. However, one of the major exceptions is jails (the other being virtual machines, which I didn’t use). Since SCALE is built on Linux, it relies on Docker containers to run extra applications. Docker containers are inherently different from FreeBSD jails, so they can’t be transferred automatically. That meant I had to rebuild all fourteen of my programs manually, which I expected would be… well, a bit of a pain and pretty time consuming.

I’d actually been keeping an eye on SCALE for quite a while and had considered migrating since version 24.10 (released in October 2024). After a year of postponing — and finally having a few quiet days at home in October 2025 — I decided to go for it: time to future-proof my NAS and take advantage of the new features that SCALE offers.

TrueNAS SCALE 24.10 artwork

Migrating to SCALE

The entire migration process ended up taking a week. As in, I started on a Friday… and wasn’t done until the Thursday after.

Friday evening – Backup & migration

I started by taking backups of all my programs. The migration from CORE to SCALE is a one-way process, so I wanted to be absolutely sure nothing important got lost. Luckily, most backups were straightforward.

WordPress? Used a plugin.
Reverse proxy? Just copied the proxy.conf file.
Radarr and Sonarr? Clicked the handy “Backup” button.
Nextcloud was the only one that needed a bit of command-line work to dump the MySQL database.
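
For reference, the Nextcloud dump came down to something like the following. The database name, user and paths are placeholders, not my exact jail layout:

```shell
# Put Nextcloud in maintenance mode so the dump is consistent
# (occ is run as the web-server user inside the jail)
su -m www -c "php /usr/local/www/nextcloud/occ maintenance:mode --on"

# Dump the database; --single-transaction avoids locking tables
mysqldump --single-transaction -u nextcloud -p nextcloud > /mnt/backup/nextcloud-backup.sql

su -m www -c "php /usr/local/www/nextcloud/occ maintenance:mode --off"
```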

All in all, after about two hours, everything important was backed up. Since my files would still be accessible after the migration — and I also have an off-site backup — I didn’t bother backing up large media files like my Nextcloud data or Plex library. I just focused on configuration, databases, and anything I couldn’t easily recreate.

For completeness, I also backed up all my NAS settings, certificates, and took screenshots of important automation schedules. That turned out to be unnecessary, because the OS migration itself was incredibly smooth. In CORE, go to Settings -> Update, choose SCALE 24.04 and press Download & Apply. One coffee later, my Mini X+ rebooted and greeted me with the shiny new SCALE login screen.

To my surprise, all my system settings — users, permissions, certificates, shares — had migrated perfectly. By that time it was getting late, so I decided to quit on a high note and leave the app setup for the next day.

My new TrueNAS SCALE dashboard.

Saturday evening – Reverse proxy & Docker containers

After some initial poking around (okay, I admit it — I tried to install all my apps simultaneously because I’m impatient, which naturally failed spectacularly with a flood of error messages I didn’t understand), I figured it was best to start by getting the reverse proxy back up and running. After all, most of my other apps rely on it, and it didn’t seem like the most complicated one to start with.

The thing with Docker containers is that their filesystem is normally hidden from the user. Depending on the app, SCALE allows you to mount certain directories as easily accessible folders on the host system. This was new to me, and it forced me to think about a good way to organize everything. It also took me a while to realize that, for this to work, both the folder and all parent folders you’re mounting into need the correct permissions — otherwise the mount fails and the app won’t start.

To keep things simple, I ended up creating a new dataset in the root of my filesystem called Apps. Inside that dataset, I created a child dataset for each app I wanted to mount. The main Apps dataset has broad access permissions, while each app dataset is tailored to its specific needs. I figured that should be secure enough for my setup.
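
A quick sketch of the permission pitfall, using throwaway paths rather than my real dataset layout: every directory on the way down to the mount source must be traversable by the app’s user, not just the final folder.

```shell
# Throwaway example paths; my real datasets live under the pool's Apps dataset
mkdir -p /tmp/Apps/nextcloud

# The parent needs the execute (traverse) bit too, or the mount
# into the container fails even though the child folder looks fine
chmod 771 /tmp/Apps
chmod 770 /tmp/Apps/nextcloud

# Check the whole chain: every component must be reachable
ls -ld /tmp /tmp/Apps /tmp/Apps/nextcloud
```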

My dataset structure after the migration.

That evening, I managed to get the reverse proxy up and running. As a bonus, I now had a fancy web interface to configure it – something I didn’t have before. I also got AdGuard up and running. While researching how to set up the reverse proxy, I stumbled across something called Cloudflare Tunnel, which might allow me to get rid of the reverse proxy. Hmm… well not now – let’s try and reach parity with the old system first.

Sunday evening – Formula 1

Sunday was race day, so priorities were clear.

Before the race, though, I looked some more at Cloudflared (Cloudflare’s tunnel client). It’s a small program that creates a secure, outbound tunnel between my home server and Cloudflare’s network. This lets you access services like Nextcloud or WordPress remotely — no open ports, no self-signed certificates, no dynamic DNS updates. Everything runs “securely” through the tunnel. I say “securely”, because the downside is that Cloudflare is in charge of security and, because of that, can in theory see all my network traffic.
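
For the curious, a Cloudflare Tunnel is driven by a small config file on the server. A minimal sketch — the tunnel ID, hostname and local port below are made-up example values, not my actual setup:

```yaml
# /etc/cloudflared/config.yml — illustrative values only
tunnel: 6ff42ae2-765d-4adf-8112-31c55c1551ef
credentials-file: /etc/cloudflared/6ff42ae2-765d-4adf-8112-31c55c1551ef.json

ingress:
  - hostname: cloud.example.org      # public name registered at Cloudflare
    service: http://localhost:9001   # local Nextcloud port
  - service: http_status:404         # required catch-all rule
```

The tunnel ID and credentials file come out of the interactive `cloudflared tunnel login` and `cloudflared tunnel create` steps.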

Monday – Nextcloud

Monday was dedicated entirely to Nextcloud.

The official Nextcloud app in TrueNAS SCALE uses a PostgreSQL database, as opposed to the MySQL database I was using previously – and had a backup for. My inexperience with Docker, lack of a computer running Linux, and general inexperience with databases meant it took me a good part of the day to figure out how to convert my MySQL database into a PostgreSQL one. For those interested, what worked for me in the end was to:

  1. Set up 4 Docker containers:
    – Nextcloud – where we eventually want the database to end up.
    – Pgloader – a tool to convert between MySQL and PostgreSQL.
    – MySQL:8.0 – important: this must be version 8.0 or lower, otherwise PgLoader doesn’t work.
    – PostgreSQL – a separate container to load the converted database into.
  2. Load the MySQL backup into the MySQL container.
  3. Use PgLoader to migrate the database from MySQL to PostgreSQL.
  4. Dump the PostgreSQL database and restore it into the Nextcloud container.
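
Steps 2–4 boil down to a handful of commands, roughly like this. The container hostnames, user names and passwords below are placeholders, not my actual values:

```shell
# 2. Load the MySQL backup into the MySQL 8.0 container
mysql -h mysql -u root -p nextcloud < nextcloud-backup.sql

# 3. Let pgloader convert schema and data straight into PostgreSQL
pgloader mysql://nextcloud:secret@mysql/nextcloud \
         pgsql://nextcloud:secret@postgres/nextcloud

# 4. Dump the converted database and restore it into the
#    database used by the Nextcloud app
pg_dump -h postgres -U nextcloud nextcloud > nextcloud-pg.sql
psql -h nextcloud-db -U nextcloud -d nextcloud < nextcloud-pg.sql
```

The nice part of pgloader is that it reads directly from the running MySQL container and writes directly into PostgreSQL, so there is no intermediate conversion file to babysit.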

Apart from the database, I also had trouble mounting my files correctly into the Docker container and getting the permissions of the datasets to match, so I spent a lot of time making sure Nextcloud could actually find my files. I’ve left out a lot of troubleshooting here; the entire process took me almost 14 hours to get right.


My Nextcloud login page. I was very happy when I saw this again.

Tuesday – Cloudflared, Radarr/Sonarr, Plex

On Tuesday, I gave Cloudflare Tunnel a proper test. I disabled my reverse proxy, set up Cloudflared, and configured it to expose my Nextcloud instance to the internet. So far, performance looks great — and it’s far easier to maintain than my previous Nginx setup. I’ll keep testing it over the coming weeks and months, but it’s looking like a promising, easier-to-use alternative to my reverse proxy. And I trust that one of the largest online security providers in the world can handle my data securely and discreetly – their entire business model is built around that.

On Tuesday I also re-installed my Radarr/Sonarr and Transmission setup. Radarr (for finding movies) and Sonarr (for finding series) luckily have easy-to-use restore functions, so setting them up was very easy, especially after all my troubles from the day before. Transmission is just a downloading tool, so there was nothing to restore there.

I also re-installed Plex. The Plex backup itself worked flawlessly, so all my settings were still there. However, it turned out that I had messed up the access permissions on my movie files – I suspect it was an issue with the dataset they were in – but instead of trying to figure out how to fix it, I decided to start from scratch: delete everything and re-download/re-rip movies as I need them. No big deal; after my wins with the other apps (getting Nextcloud back was much more critical) I was comfortable taking this loss. And it freed up a good chunk of hard disk space.


Plex (my personal Netflix) back up and running.

Wednesday evening – Grafana & InfluxDB

Wednesday was for the smart-home dashboard Grafana (and InfluxDB).

On CORE, I had used the official Grafana plugin, which came bundled with InfluxDB v1. On SCALE, the Grafana app no longer includes InfluxDB, so I had to install both separately.

The difficulty here was the age of my InfluxDB database. When I set up the original Grafana plugin in 2020, InfluxDB v2 had just come out, but ‘everything’ still ran on the older v1, including the plugin. This had never been updated, so getting up to speed on the ‘new’ v2 nomenclature and updating Home Assistant – which pushes data to the database – to use the new API took a bit of research. But after an hour or two I had this up and running again too.
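
For anyone doing the same: on the Home Assistant side, the switch to the v2 API mostly comes down to the `influxdb:` block in `configuration.yaml`. A hedged sketch — the host, organization and bucket names are examples, not my real values:

```yaml
# configuration.yaml — example values, adjust to your own setup
influxdb:
  api_version: 2
  host: truenas.local
  port: 8086
  token: !secret influxdb_token   # API token generated in the InfluxDB v2 UI
  organization: home              # v2 concept, new compared to v1
  bucket: homeassistant           # roughly replaces the v1 “database”
```

The token/organization/bucket trio is exactly the v2 nomenclature that took me a while to map onto the old v1 username/password/database way of thinking.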

Grafana dashboard.

Thursday evening – WordPress

Lastly, I turned my attention to WordPress and getting the website you’re looking at now back online. I’d actually tried to get a WordPress app running earlier in the week, but for some reason it kept failing on me. In the end, I discovered that one of the storage mounts in the app settings was causing the issue — specifically the WordPress MariaDB Data Storage. Since I didn’t need external access to that anyway, I left it internal to the Docker container. With a fresh WordPress install running, I used a migration tool to restore my website and all its settings.

That was the last piece of the puzzle. For those paying attention: I did not migrate the Mosquitto app, as it had actually become obsolete a few months earlier due to changes in my smart home setup. After a week of work, I had successfully migrated from CORE to SCALE.

So what now?

SCALE makes it easy to pass through a GPU to your apps. My Plex setup will benefit a lot from that — it enables GPU Transcoding, which should let it stream high-definition video across the network more smoothly and (hopefully) get rid of the occasional stuttering we’ve been seeing. I have already ordered a new GPU for exactly this purpose.

On top of that, the app library in SCALE is much larger than the old plugin library in CORE, so I’ll have plenty more apps to experiment with in the future. And since everything runs as Docker containers, I can even spin up my own if something isn’t available as a pre-built app.

I do enjoy tinkering with new tools and software, so I’m sure it won’t take long before I find some new, creative ways to expand what my NAS can do.

OpenVPN on a Raspberry Pi

My parents and I, who come from the Netherlands, have recently bought a cabin in Norway. We have a lot of wishes and ideas for this cabin, but one of the first projects I started on, right after we signed the contract, was setting up a VPN server on a Raspberry Pi. The goal is to have any device connecting to the WiFi in the cabin appear to be in the Netherlands, so that my parents can ‘work from home’ from the cabin and can stream Dutch TV and Dutch Netflix. For this to work, we need a router that can act as a VPN client, and a VPN server for it to connect to.

By having the router connect to the VPN server, any device that connects to the router is also routed through the same tunnel to the internet. By installing the VPN server on a Raspberry Pi, I can ship a ready-to-use unit to the Netherlands with minimal setup steps for my parents, while they remain 100% in control of their VPN endpoint. This is important to ensure that, for example, Netflix will not block their stream, as all traffic appears to come from their own home instead of a (known) VPN provider.

For this project we use the following components:

I recently bought an Asus RT-AC66U B1 router, which I know can act as a VPN client. The Asus 4G-AC68U is a model from the same product line which also includes a 4G SIM card slot.

Software-wise, we need only a handful of services/programs:

  • The latest Raspbian Lite
  • PiVPN
  • A Dynamic DNS provider (I’m using Google Domains)
  • ddclient

Setup

The first step is obviously to flash Raspbian onto an SD card and shove it into the Raspberry Pi. I’m using Raspbian Lite since we know exactly which software packages we’re going to use, and any dependencies will be installed with them. This keeps overall system performance as high as possible.

After setting up Raspbian, we use SSH to log in as root and install PiVPN. PiVPN installs either OpenVPN or WireGuard – in our case OpenVPN, as this is what the Asus router supports. I set up the IP configuration to be dynamic, so it can adapt to the setup in my parents’ house once it arrives in the post. Other than that I used the standard settings, obviously choosing the right DNS provider (Google Domains). I had also set up a Dynamic DNS entry in Google Domains prior to the Raspberry Pi installation, which will be used for this VPN setup.
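
The PiVPN part is pleasantly short. A sketch of the commands involved (the installer itself is an interactive wizard, so there is nothing to configure by hand):

```shell
# PiVPN's documented one-line installer (interactive wizard)
curl -L https://install.pivpn.io | bash

# One client profile per device; writes <name>.ovpn into ~/ovpns
pivpn add

# Built-in self-check, handy to run before shipping the Pi off
pivpn -d
```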

Dynamic IP lookup

Since I don’t know the public IP address of my parents’ house (and it might be a dynamic IP address that changes every once in a while), we can use Dynamic DNS. Basically, Dynamic DNS checks the current public IP address of the host and sends it to a pre-configured DNS provider. The provider matches the IP address, for example 185.176.244.205, to a subdomain name, for example cloud.jessendelft.org. This way, any device trying to find cloud.jessendelft.org only has to ask the DNS provider, which will then hand out the correct public IP address. To achieve this on the Raspberry Pi we can use ddclient. ddclient only needs a few basic parameters, such as the login credentials for the DNS provider, and does the rest by itself. It runs as a daemon in the background, automatically checking and updating the current public IP address in the DNS register.
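
My ddclient configuration was only a few lines. A sketch of what it roughly looks like for Google Domains’ dynamic DNS service – the login and password are the generated credentials from the Google Domains dashboard, and the hostname here is an example:

```
# /etc/ddclient.conf — illustrative values
protocol=dyndns2
use=web                      # discover the public IP via a web service
server=domains.google.com
ssl=yes
login=generated-username
password='generated-password'
cabin.example.org
```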

I generated two OpenVPN configuration files, which can be uploaded to VPN clients and allow them to connect to the server: one for the Asus router and one for my private PC, so I can test & debug the entire setup. These configuration files instruct the client to use one of my subdomains to find the current public IP address of the OpenVPN server in the Netherlands. This keeps the setup easy and flexible.

Lastly, I entered the Wi-Fi credentials of my parents’ house in a file called ‘wpa_supplicant.conf’ and placed it in the /boot/ folder of the Raspberry Pi, so they can use it in both wired and wireless mode. After running a few tests it was ready to go in the post – fingers crossed that everything would work! I also included a guide for my father to set up the required port forwarding in his router in the Netherlands, so the VPN server can be found from the internet.
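
The wpa_supplicant.conf dropped into /boot/ is tiny – on first boot, Raspbian picks it up and joins the network automatically. A sketch with placeholder credentials:

```
# /boot/wpa_supplicant.conf — SSID and password are placeholders
country=NL
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="parents-wifi"
    psk="super-secret-password"
}
```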

Testing the setup

When the Raspberry Pi arrived in the Netherlands it was time to put it to the test. We forwarded the required port in the router, gave the Pi a static local IP address and attempted to connect from Norway.

Connecting was successful!
However, the test-pc did not have internet access.

The VPN Server in its natural habitat.

Some debugging later revealed that the ethernet port did not have the default eth0 name, but something more tropical. Changing the name of the ethernet port in the iptables configuration fixed the problem and allowed internet access through the VPN tunnel. Hooray!
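
If you run into the same thing: the actual interface names are easy to list, and the PiVPN-generated NAT rule just needs the real name instead of eth0. A sketch – the ‘tropical’ interface name below is an example:

```shell
# List the network interfaces the kernel actually sees
ls /sys/class/net

# With the real name known, fix the NAT rule, e.g. (run as root):
#   iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o enxb827eb123456 -j MASQUERADE
# (10.8.0.0/24 is OpenVPN's default client subnet)
```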

Lastly, we installed Log2Ram, which limits logging to the SD card to extend the lifetime of the system. SD cards can get corrupted when written to too often, so to limit the number of write cycles Log2Ram keeps all logs in RAM and only writes the log files to the SD card once a day.
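
Log2Ram’s manual install is a short sequence, roughly as documented upstream (run as root on the Pi):

```shell
# Fetch and install Log2Ram from its GitHub repository
curl -L https://github.com/azlux/log2ram/archive/master.tar.gz | tar zxf -
cd log2ram-master && ./install.sh

# After a reboot, /var/log should show up as a RAM-backed mount:
df -h /var/log
```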

A reboot to make sure everything works and it was finally time to check the speed of the connection!

Speedtest over 4G

Honestly, this is 10x as high as we expected when we started this project, so we’re certainly very happy with it! It will allow my parents to comfortably use the internet at their cabin while appearing to be in the Netherlands.

Playing with Grafana & InfluxDB

In my search for a way to display the data collected by Homey, I had often seen Grafana mentioned as an option. Grafana is a tool to visualize data in graphs, gauges, tables, etc. It reads data from a database, is very responsive and easy to work with. As a bonus, FreeNAS offers a community plug-in that has both Grafana and InfluxDB installed and ready to go, so I could easily set up a jail to try it out.

Homey by itself does not log any data. To have it upload its variables to the InfluxDB database, I just had to install the InfluxDB app, fill in the IP address of the Grafana jail & the credentials of the database, et voilà! From the Grafana interface I started seeing the potential query fields being populated with all the data that Homey had to offer. Not much later, I had my first dashboard populated with energy measurements, real-time power consumption and temperature data from different rooms in the house. With a little more playing around, this dashboard was shown as an iframe on my Magic Mirror.

Grafana dashboard shown as an iframe on the Magic Mirror

After doing this I realized that FreeNAS is also a great source of data (CPU usage, network & HDD speeds, RAM usage, etc.) and a place where I’d like some more overview of what’s happening. Naturally, a quick Google search yielded tons of people who had done this before, and I followed this guide to get FreeNAS to upload its data to a separate InfluxDB database and create a dashboard in Grafana. I then used this dashboard as inspiration to create a similar one for Homey, and by the end of the day I had three different dashboards which give me a neat insight into how well the core components of my smart home are working.

An additional line in the reverse proxy configuration and the Grafana jail was accessible through the internet. Curious how it looks? You can find it here: cloud.jessendelft.org/grafana/.
Username: viewer
Password: viewer123

I am not sure yet if I want to keep using this system, as I ultimately want some form of 2D/3D interactive map of my house to show this information. As an interim solution, though, this is quite nice, and I was surprised by how easy it was to include in my system. I like the fact that all the groundwork is up and running (FreeNAS, reverse proxy, Homey, etc.), and that it apparently works so well that it is easy to build layers of complexity on top of it – with, for example, the Grafana dashboards. If you have comments/ideas on what I can do with my data, or how I can improve my system even more, please let me know in the comments!

Cheers!
Jesper

Home NAS Server Setup

This website runs on an Intel NUC.

Actually, a lot of things are now running on this little NUC. Before showing you exactly what processes/services are running, please allow me to explain why I have this NUC in the first place.

Home Assistant and the NUC

In our previous house I was running Home Assistant on a Raspberry Pi. Home Assistant is a piece of software that can observe, control and automate nearly anything that can be part of a smart home. In my case, I had the following devices connected to it:

Linking all these devices together required something more robust than a Raspberry Pi, which is why in April 2019 I bought an Intel NUC NUC6CAYH. This little fellah has an Intel Celeron CPU, room for a maximum of 2x 4GB of DDR3L RAM, can house a 2.5″ hard drive and has a 1Gb ethernet port. I figured this was a very good alternative to a Raspberry Pi, whilst also keeping my wallet in mind.

This NUC ran Home Assistant (or HA for short) very reliably (although the HA software itself needed quite some maintenance) up until we moved in December 2019. The NUC disappeared into a box, and at the new house I bought an Athom Homey to take over HA’s tasks, in an attempt to limit the amount of maintenance work. This is why I had a NUC lying around when I decided to set up a home server in January 2020.

First step: Setting up a NAS file server

When I started on this project I knew nothing about file- or NAS servers, but I imagined that there would be open source software out there that could help me out. I had decided that I did not want to buy new hardware, as things could be tested on the NUC first to see if it would be good enough.

Two names that kept popping up were FreeNAS and Unraid. They both looked like equally good candidates to me, so I picked the one that felt like it had the best chance of succeeding: FreeNAS. Over the last couple of months I have been very happy with this choice. FreeNAS runs very stably and is, in my opinion, easy to use. The initial file server setup was a breeze, and in no time I had a functioning NAS server which could be accessed from a PC with Windows Explorer (via a Samba share).

FreeNAS has a functionality called ‘jails’. Jails are, very shortly explained, little isolated operating systems that use the same kernel as the host’s operating system. This means they are more lightweight to run than a virtual machine, as they dynamically share available RAM, CPU & HDD space between the host and other jails, while simultaneously being compartmentalized from the host. Processes running inside a jail can only access files inside the jail, and processes/files inside the jail are not aware of any file outside it. An additional (much better) introduction to jails can be found here. All in all, they are a perfect place to run additional programs/services without the risk of breaking my entire NAS system.
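
For a feel of how this works in practice: FreeNAS wraps jail management in the iocage tool. A few everyday commands (the jail name below is an example):

```shell
# List all jails and whether they are running
iocage list

# Drop into a shell inside a jail to poke around
iocage console wordpress

# Take a cheap ZFS snapshot of a jail before upgrading it
iocage snapshot wordpress
```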

The current setup

The current HW setup, including PS4 Pro and Philips Hue bridge

Hardware

The current hardware is, as I mentioned, an Intel NUC. This includes:

The HDDs are set up in a mirrored configuration. That means all data is copied to both drives, giving me an effective storage capacity of 2 TB whilst also protecting me from a disk failure. This is also called a RAID 1 setup.

FreeNAS Software setup

The current setup runs 3 jails, 1 virtual machine, a Samba share and some additional smaller services inside FreeNAS:

FreeNAS services setup
  • The Samba share allows us to access files on the server when we’re on the home network.
  • The website that you see right now is running inside a Jail.
  • A second jail contains NextCloud. NextCloud mainly allows for automatic synchronization of pictures and videos from my phone to the server.
  • Since there are multiple websites that I want to access from this web-address there is a jail set up that acts as a Reverse Proxy server.
  • Lastly I have a virtual machine that runs PiHole. PiHole is software that blocks advertisements on my home network. Unfortunately it cannot run (yet) inside a FreeNAS jail as it does not support FreeBSD, the operating system FreeNAS runs on.

So how do all these services work together? Well, that’s a different view:

Networking flow

Starting from the bottom, there are the NextCloud storage, this blog and the Magic Mirror, which are accessible through the internet via the reverse proxy. There is also the Samba share, which is accessible only on the local network for privacy reasons.

In the middle of the picture is the router, which obviously has access to the internet. All DNS requests, however, are forwarded to PiHole. A DNS request asks a name server to translate the domain name of a website (for example jessendelft.org) to an address (for example 217.197.166.65), in order to connect to that address. PiHole blocks any requests for known advertisement addresses so that they never get resolved, which means the ads will not load. This way there is network-wide ad-blocking for every device connected to the network.
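
You can see the effect with a manual DNS lookup against the PiHole. A sketch – the PiHole address and the blocked domain here are examples:

```shell
# A normal domain resolves to its real public address...
dig +short jessendelft.org @192.168.1.2

# ...while a known ad/tracker domain typically comes back as 0.0.0.0,
# so the browser has nothing to load
dig +short doubleclick.net @192.168.1.2
```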

I have some plans to integrate Octoprint into the reverse proxy once my 3D printer is back up and running. I also want to move PiHole to a jail to free up some RAM and HDD space which are now reserved by the virtual machine.
If you have any more ideas on what I can do to improve my setup, please let me know!