New server setup (Hetzner cloud)

The past…

I have a long history of renting a server from Hetzner, a German hosting company. I started renting one of their dedicated servers (model “DS 3000”) back in February 2008, and have since switched to a newer, more powerful model twice (first to model “EQ 4”, then to the current “EX 40”), so eleven years later I’m on my third Hetzner dedicated server. I’ve always been happy with their hardware reliability and performance, their network connectivity, and also their service on the few occasions when I needed it, e.g. when a hard disk broke, or when I needed them to attach a remote console so that I could debug some kernel / boot manager issue.

As you can see from some of my older blog entries, I’ve always run Gentoo Linux on those servers so far. That was a lot of fun back when I wasn’t a parent and my job wasn’t as demanding, and I also learned a lot. But by now it no longer feels like a good match for me: it’s not actually the building from source that bothers me, but the rolling-release distribution model. Rolling release means that every day there will be a few updates (including major version changes), and some of them mean config file updates or dependency conflicts that often result in a few minutes of tweaking and fiddling.

Switching to Ubuntu 😮

So, at the expense of no longer being able to keep my system as lean as possible (using USE flags to disable unneeded features, and therefore having fewer unused runtime dependencies lying around), I’ve switched to Ubuntu. There, updates within one release are 99.9% painless, and switching releases happens only every two to four years. And even then, it’s typically 2-3 hours of fiddling until everything works again. OK, it also means that at the end of a release life cycle I’m using software versions that are 2-4 years old with some security fixes backported, but then again, I don’t have time to play with all the bleeding-edge features anymore, anyway…

Partitioning etc.

But now to the topic I actually wanted to write about in this post: My new server setup. Not only did I switch to Ubuntu, but I also switched to Hetzner’s new Cloud Server offering.
I expect to reap the following benefits with this switch:

  • No more switching to newer server hardware every few years (which typically took quite a bit of work – 3 to 5 days each time).
  • No more worrying about hardware components (especially HDDs or PSUs) breaking down at inconvenient points in time. It only happened two or three times in total, but with the current server having its 4th birthday next month, it’s becoming more and more likely with every month.
  • Reduction in costs by 40-50% with similar performance and storage and identical network connectivity: I’m currently sharing the server hardware with three friends, and it costs me ~18 EUR/month. With the Cloud Server I’m paying ~10 EUR/month.

Now some details on how I’ve set up the cloud VM – here is a screenshot of the cloud console’s summary:

(Ignore the cost shown on the right – it includes neither the volumes and backups nor tax.)

The invoice section for the cloud server looks like this:

Note that these numbers are also still without the 19% tax; adding it, we end up with ~9.65 EUR/month in total costs.

As you can see, the 2×64 GB of additional storage I booked is actually more expensive than the server VM itself. But it’s still what I would call reasonable pricing. The storage is also sufficiently fast, and Hetzner says it’s stored with double redundancy, i.e. I shouldn’t need to worry about storage downtime or about them losing my data.

Now from the hardware level upwards, my setup looks like this:
I’ve picked the ubuntu-18.04.1-server-amd64.iso image that Hetzner offers. The 20 GB of local space is used for / (with /var and /home being just mount points, and with a 2 GB swapfile in it):

Filesystem                   Size  Used Avail Use% Mounted on
/dev/sda1                     19G  6.7G   12G  38% /
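Setting up the swapfile mentioned above boils down to a handful of commands. A sketch – with a small demo path and size so it can run unprivileged; the real file is a 2 GB /swapfile, and swapon needs root:

```shell
# Sketch of creating a swapfile like the one described above.
# The real setup would use /swapfile and a 2G size, run as root;
# a small demo file is used here so the commands work anywhere.
SWAPFILE=./swapfile-demo
fallocate -l 16M "$SWAPFILE"   # reserve space (2G on the real server)
chmod 600 "$SWAPFILE"          # swap files must not be world-readable
mkswap "$SWAPFILE"             # write the swap signature
# swapon "$SWAPFILE"           # activate (requires root); the /etc/fstab
#                              # entry then makes it permanent
```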

Then I made the two volumes – which show up as /dev/disk/by-id/scsi-0HC_Volume_1234567 as soon as you create them via the web interface – into an LVM volume group (‘vg_data’), and in it created a logical volume (‘lv_data’). I did this so that I can add additional volumes when I run out of space.
I then formatted ‘lv_data’ with an ext4 file system. Finally I installed veracrypt and created a ‘home.hc’ and a ‘var.hc’ encrypted volume of 35 GB and 80 GB respectively, again containing ext4 file systems.
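Roughly, the commands behind this look as follows (run as root; the second volume ID is a made-up placeholder, and the veracrypt invocation is from memory, so treat the details as assumptions):

```shell
# One-time provisioning, run as root. Device names are the ones the
# web console reports (the second ID here is a placeholder).
pvcreate /dev/disk/by-id/scsi-0HC_Volume_1234567 \
         /dev/disk/by-id/scsi-0HC_Volume_1234568
vgcreate vg_data /dev/disk/by-id/scsi-0HC_Volume_1234567 \
                 /dev/disk/by-id/scsi-0HC_Volume_1234568
lvcreate -l 100%FREE -n lv_data vg_data
mkfs.ext4 /dev/vg_data/lv_data
mount /dev/vg_data/lv_data /mnt/lv_data

# VeraCrypt container files (interactive: prompts for size, file
# system, passphrase, ...):
veracrypt -t -c /mnt/lv_data/home.hc   # 35 GB, ext4 inside
veracrypt -t -c /mnt/lv_data/var.hc    # 80 GB, ext4 inside

# Growing later, when running out of space: attach a new volume, then
#   pvcreate <new-device> && vgextend vg_data <new-device>
#   lvextend -r -l +100%FREE /dev/vg_data/lv_data   # -r grows ext4 too
```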
The following /etc/crypttab takes care of triggering decryption upon boot (which is when the passphrases need to be entered on the web-based console):

crypthome    /mnt/lv_data/home.hc    none       tcrypt-veracrypt
cryptvar     /mnt/lv_data/var.hc     none       tcrypt-veracrypt

The file systems from within the crypto volumes are then available as /dev/mapper/crypthome and /dev/mapper/cryptvar respectively. This /etc/fstab …

/swapfile    none                     swap    sw     0 0
/dev/vg_data/lv_data  /mnt/lv_data    ext4    discard,nofail,defaults,auto   0 0
/dev/mapper/crypthome /home           ext4    defaults  0 0
/dev/mapper/cryptvar  /var            ext4    defaults  0 0

takes care of mounting them to /home and /var:

/dev/mapper/vg_data-lv_data  126G  116G  4.1G  97% /mnt/lv_data
/dev/mapper/crypthome         35G   16G   17G  49% /home
/dev/mapper/cryptvar          79G  9.9G   65G  14% /var

The reason why I’m using these crypt volume files is mostly out of habit, and because I could also open them on a Windows box if I had to.

Everything else is straightforward – just your usual Ubuntu server installation with a couple of services.


Backups

To make sure data doesn’t get lost – even if I screw up and delete files I didn’t want to delete – I have the following backup mechanisms in place:

For the 20 GB local storage, I use the Hetzner Backup that can be selected from the web interface – it keeps seven backup slots and creates one backup of the full local storage every 24 hours. You can also trigger a backup manually, which will cause the oldest backup to get discarded. The whole solution costs 20% of the server base price; in my case that’s 60 cents/month. If I screw something up, I can just go back 24 hours in time, which doesn’t really make a difference for /.

For /var and /home I do off-site backups, using my home server. I’m using rdiff-backup, because that’s what I’ve been using for many years now, and it still works very nicely. Every couple of days a script is run by cron on the home server, which uses a dedicated SSH key to access the cloud server and then does an incremental backup of both /var and /home (separately). It takes a couple of minutes (even if barely any new data has been added), which is the downside of rdiff-backup. But since it happens while I sleep, I don’t really care. The great thing about rdiff-backup is that I can directly access the most recent snapshot without needing any special tools. Only when I want to get at older versions of files do I need to use the ‘rdiff-backup’ tool and start digging.
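The cron job on the home server is essentially a small wrapper script. A sketch under assumptions – host name, key path and backup destinations are placeholders, since the real script isn’t shown here:

```shell
# Write a sketch of the nightly backup script described above.
# Host, SSH key path and destination paths are placeholders.
cat > ./rdiff-backup-nightly.sh <<'EOF'
#!/bin/sh
# Incremental off-site backup of /var and /home via rdiff-backup over
# SSH, using a dedicated key. Run from cron on the home server.
set -e
SCHEMA='ssh -i /root/.ssh/id_backup %s rdiff-backup --server'
rdiff-backup --remote-schema "$SCHEMA" \
    root@cloud.example.org::/home /backup/cloud/home
rdiff-backup --remote-schema "$SCHEMA" \
    root@cloud.example.org::/var  /backup/cloud/var
# Digging out e.g. a 3-day-old version of a file later:
#   rdiff-backup -r 3D /backup/cloud/home/some/file ./restored-file
EOF
chmod +x ./rdiff-backup-nightly.sh
```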


Monitoring

That’s the missing piece of the puzzle right now. For the dedicated servers Hetzner offers a simple but reliable monitoring solution, which sends out e-mails for configurable events, e.g. if pings are not returned, or if a connection to port 443 is unsuccessful. For the cloud servers they don’t seem to offer anything similar, but I’d really like to get a notification if one of the services is down (most likely reason: I screwed something up during an update and forgot to check). Preferably the notification should use some messenger (WhatsApp, Threema, … or even good old SMS) – but I don’t want to pay more than a couple of cents per month. And I also don’t want to spend hours configuring the thing. Any ideas?
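For what it’s worth, the bare “is port 443 still answering” part is only a few lines of shell; a sketch (host and port are placeholders, and how to turn the DOWN branch into a cheap messenger notification is exactly the open question):

```shell
#!/bin/bash
# Minimal reachability check in the spirit of Hetzner's dedicated-server
# monitoring: try a TCP connect and report UP/DOWN. In a real cron job
# the DOWN branch would trigger a notification instead of just printing.
check_port() {
    host=$1; port=$2
    if timeout 5 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
        echo "UP   $host:$port"
    else
        echo "DOWN $host:$port"
    fi
}

# Demo invocation against a port that is almost certainly closed;
# on the real server this would be e.g.: check_port your.server 443
check_port 127.0.0.1 1 | tee ./check-demo.out
```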

Sync your Android contacts and calendars with your own server

It’s 2012, and there are still people who don’t put all their information onto Google/Facebook/… servers. Call them paranoid control freaks, if you want. 😉

Some of them even run their own e-mail server. Those people would probably prefer to have their address book(s) and calendar(s) stored on their own server as well, which Android cannot do out of the box.

This blog post aims to give a brief overview over my current solution to this problem. It’s not 100% perfect yet, but I am quite satisfied with it already. I have been using this setup for a couple of months now, and did not encounter any problems of relevance. (*)

Software components involved

On the server:

  • DAViCal – a free (GPL licensed) CalDAV/CardDAV server written in PHP; needs PostgreSQL as database server
  • Roundcube Webmail with the CardDAV plugin – to manage your contacts from within any web browser (Roundcube is of course also a decent mail client)

On the desktop / laptop computer:

  • Mozilla Thunderbird with
    • Lightning extension – to manage your calendar(s) from your Linux/Windows/Mac computer
    • SOGo connector extension (this link brings you to a file listing where you can download a nightly snapshot, there is no officially released version for current Thunderbird versions on the SOGo download page, yet) – to manage / lookup your contacts from your Linux/Windows/Mac computer

      A few words on how to get the SOGo connector working (it’s not really straightforward, in my opinion): After installing the extension by dragging the downloaded .xpi file onto Thunderbird, open the address book and choose File / New / Remote Address Book. Enter the URL of your DAViCal CardDAV collection, i.e. https://your.server/davical/caldav.php/YOUR_USER/YOUR_COLLECTION. Then right-click on the new address book and choose Synchronize.

On the Android device:

  • CalDAV-Sync app from Market or AndroidPIT for a bit more than 2€

    (Since CalDAV-Sync is just a backend app that facilitates syncing, this is a screenshot of the Android calendar, with the event that can be seen in the Thunderbird-Lightning screenshot above)
  • CardDAV-Sync app from Market or AndroidPIT for a bit less than 1.50€ or free

    (Since CardDAV-Sync is just a backend app that facilitates syncing, this is a screenshot of the Android contact viewer, with the contact that can be seen in the Roundcube CardDAV screenshot above)
  • Contact Editor or Contact Editor Pro app from Market or AndroidPIT (Pro costs a bit more than 2€, the free version lacks a few features)

A few notes regarding the components:

  • Contact Editor is necessary because the default Android contact editor somehow does not work with custom contact sources. It integrates seamlessly once you have set it as default action upon adding/editing a contact for the first time after installation.
  • The SOGo connector extension for Thunderbird is a good start, but in the long run I really hope Thunderbird’s contact handling can be brought to a level that matches the rest of the application. There is hope.
  • There seems to be a calendar plugin for Roundcube as well (as part of the MyRoundcube plugin collection), and it seems to support CalDAV, but I couldn’t get it to work so far (and did not try hard, since I always have a Thunderbird with Lightning around, which is great for calendaring).

I’m planning to write more on how to get everything set up, but I currently don’t have time for that. The hardest part, in my opinion, is getting DAViCal and PostgreSQL to work; all the other components basically just need a URL (to the previously set up DAViCal collection – e.g. https://your.server/davical/caldav.php/YOUR_USER/YOUR_COLLECTION), a username and a password to work.
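From memory, and hedged accordingly (package names and paths follow Debian-style DAViCal packaging and may differ on other distributions), that hard part boils down to something like:

```shell
# Rough outline of the DAViCal + PostgreSQL setup (run as root).
# Paths and package names are assumptions based on Debian-style packaging.
apt-get install davical postgresql

# Allow the davical_app / davical_dba roles to connect locally, e.g. by
# adding 'local davical davical_app trust'-style lines to pg_hba.conf,
# then reload PostgreSQL before the next step.

# Create the database using the script shipped with DAViCal:
su postgres -c /usr/share/davical/dba/create-database.sh

# Finally, point /etc/davical/config.php at the database, set the domain
# name, and configure the web server to serve caldav.php.
```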

Update (2012-01-28): Added some screenshots.
By the way, what must be a very recent change in Gentoo’s packaging of PHP causes CalDAV-Sync to fail syncing, and the apache error log contains “[Sat Jan 28 08:30:48 2012] [error] [client x.x.x.x] PHP Fatal error: Call to undefined function cal_days_in_month() in /…/davical/inc/RRule-v2.php on line 906” if you do not enable the ‘calendar’ USE flag for dev-lang/php (which is disabled by default).
Update (2012-01-29): (*) Typical, I write about something, and then it breaks. It seems there is an incompatibility between the newly released DAViCal 1.0.2 and CalDAV-Sync. The CalDAV-Sync developer has confirmed the issue and is working on it.
Update (2012-01-30): The incompatibility – resulting in logged error messages – does not affect functionality (it was just me having set the account to “One-Way-Sync”).
Update (2012-02-09): Great news: There are nightly builds of the SOGo connector Thunderbird extension that provides CardDAV integration for Thunderbird 10 now – I knew that extension before, but development seemed to have stopped at Thunderbird 3.5 or so. I added links and a bit of info above.
Update (2012-02-09/2): I found the first bug with SOGo connector – when saving a contact that has an image, the image gets lost. This doesn’t really matter to me, because I don’t use images in contacts usually, but for people who use images, this could be annoying. Hope they fix it.

Impact of ext4’s discard option on my SSD

Solid-state drives (SSDs) are seen as the future of mass storage by many. They are famous for their high performance: extremely low seek times, since there is no head that needs to move into position and then wait for the spinning disk to come around to where it needs to read/write; but also higher sequential throughput: my 2.5″ OCZ Vertex LE (100 GB) is rated at 235 MB/s sustained write speed and read speeds of up to 270 MB/s, for example.

There is a caveat though – quoting Wikipedia:

In SSDs, a write operation can be done on the page-level, but due to hardware limitations, erase commands always affect entire blocks. As a result, writing data to SSD media is very fast as long as empty pages can be used, but slows down considerably once previously written pages need to be overwritten. Since an erase of the cells in the page is needed before it can be written again, but only entire blocks can be erased, an overwrite will initiate a read-erase-modify-write cycle: the contents of the entire block have to be stored in cache before it is effectively erased on the flash medium, then the overwritten page is modified in the cache so the cached block is up to date, and only then is the entire block (with updated page) written to the flash medium. This phenomenon is known as write amplification.

So, SSDs are fast at writing, but only when their free space is neatly trimmed. The only component in your software stack that knows which parts of your SSD should be trimmed is your file system. That is why ext4 (my current file system of choice) has a mount option called “discard”. When this option is active, space that is freed up in the file system is reported to the SSD immediately, and the SSD does the trimming right away. This makes the next write to that part of the SSD as fast as expected. Obviously, trimming takes time – but how much time exactly? I wanted to find out, and did the following: I measured the time to unpack and then delete the kernel sources (36706 files amounting to 493 MB, which is what I call a big bunch of small files). I did it three times with and three times without the “discard” option, and then took the average of those three tries:

Without “discard” option:

  • Unpack: 1.21s
  • Sync: 1.66s (= 172 MB/s)
  • Delete: 0.47s
  • Sync: 0.17s

With “discard” option:

  • Unpack: 1.18s
  • Sync: 1.62s (= 176 MB/s)
  • Delete: 0.48s
  • Sync: 40.41s

So, with “discard” on, deleting a big bunch of small files is 64 times slower on my SSD. For those ~40 seconds, all I/O is really slow, so that’s pretty much the time when you get a fresh cup of coffee, or waste time watching the mass storage activity LED.
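The measurement itself can be scripted in a few lines. A sketch – a generated set of small files stands in for the kernel sources here, so the absolute numbers won’t match:

```shell
#!/bin/bash
# Sketch of the benchmark: unpack a big bunch of small files, sync,
# delete them, sync again, timing each step. A generated file set
# stands in for the kernel sources.
set -e

step() {  # run a command and print its wall-clock duration
    start=$(date +%s.%N)
    "$@"
    end=$(date +%s.%N)
    echo "'$*' took $(awk -v a="$start" -v b="$end" 'BEGIN {printf "%.2f", b-a}')s"
}

mkdir -p demo-src
for i in $(seq 1 200); do
    head -c 1024 /dev/urandom > "demo-src/file$i"
done
tar cf demo.tar demo-src
rm -rf demo-src

step tar xf demo.tar    # "unpack"
step sync               # flush the written data to the device
step rm -rf demo-src    # "delete"
step sync               # with 'discard' mounted, the TRIMs land here
rm -f demo.tar
```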

Don’t enable the “discard” option if you have a similar SSD. A much better way to keep your free space neatly trimmed for good write speeds is to trigger a complete walk over the file system’s free space and tell the SSD to trim it all at once – and of course you would do that at a time when you don’t actually want to use the system (e.g. in a nightly cron job, or with a script that gets launched during system shutdown). This can be done with the ‘fstrim’ command (which comes with util-linux) and takes around six minutes for my currently 60% filled 95 GB file system.
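Such a job is a one-liner in cron – e.g. a crontab entry along these lines (schedule and mount point are just an example):

```shell
# /etc/cron.d/fstrim – trim the free space of / every night at 03:30
30 3 * * *   root   /sbin/fstrim /
```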

Update (2011-07-08): I forgot some details that may be interesting:

  • Kernel version:
  • SSD firmware version: 1.32
  • CPU: AMD Phenom II X4 965