Fix / workaround for non-linear volume control on Linux

Alright, it’s that time of year when I find time to tinker a little. This time I wanted to find out why changing the volume didn’t feel “linear” for my beloved Teufel CONCEPT 8 2.1 sound system that’s connected to my PC via USB.

First of all, I should probably clarify what I mean by “didn’t feel linear”. I remember learning that human sound perception isn’t linear at all, so “linear” is probably not the accurate term. What I mean is this: when I slide the volume slider (or make discrete jumps via pactl set-sink-volume alsa_output.usb-NXP_SEMICONDUCTORS_Teufel_CONCEPT_8-00.analog-stereo 60%, for example), the loudest point is already reached at 40, maybe 45%, and after that it doesn’t get any louder. Or maybe it does? But if so, only marginally, and definitely not at the rate it does to the left of the 40% mark.
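
By the way, stepping through the range from the shell makes this easy to reproduce. Here’s nothing more than a small loop around the pactl command from above (adjust the sink name for your device):

    # Step the volume in 10% increments; in my case loudness stops
    # increasing noticeably somewhere around the 40-45% mark.
    SINK=alsa_output.usb-NXP_SEMICONDUCTORS_Teufel_CONCEPT_8-00.analog-stereo
    for v in $(seq 10 10 100); do
        pactl set-sink-volume "$SINK" "${v}%"
        echo "volume: ${v}%"
        sleep 2
    done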

So I did some googling, but that turned up nothing specific to these speakers, and nothing much helpful in general either. I suspected some kind of quirk that nobody had bothered adding a workaround for in the kernel so far. So I dove into the Linux kernel sources and tried a bunch of changes in sound/usb/mixer_quirks.c – e.g. setting cval->dbMin to different values for the USB ID of the Teufel CONCEPT 8, but I also played around with cval->max as well as cval->res. It was rather tedious, because after every change I had to reboot, lacking the knowledge of how to test the changes “on the fly”.

While I did observe changes in how the volume behaved – e.g. the range where the volume changes fast shifted towards the higher end – none of them yielded the result I wanted, i.e. “linear” volume change and no “dead zones” along the slider. After a while I concluded that with my limited knowledge of the system’s inner workings, I could not understand how the numbers related to what I observed. I wanted to learn more about the Linux USB audio subsystem, but gave up after a while because I figured that this would take up more time than I was willing to invest.

Just when I wanted to give up completely, I came across a bug report where someone recommended choosing the “Pro Audio” profile for the audio device instead of the “Analog Stereo Output” profile to get rid of this issue. And what can I say: works for me! The PipeWire FAQ states that using the “Pro Audio” profile “disables the hardware mixers, it only enables software volume/mute”. That’s probably what fixes the issue for me. I’d still love to understand why the issue exists in the first place though, and what the underlying layers are doing to mess up something as seemingly simple as the volume slider. Maybe next Xmas holidays…?

See below for a screenshot of the setting in KDE System Settings / Sound. You can also use pavucontrol or its Qt variant (pavucontrol-qt) to change it.
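
If you prefer the command line, the profile can be switched with pactl as well. Note that the card name below is just a guess derived from my sink name – look up the actual name on your system first:

    pactl list cards short    # find your card's name in the second column
    pactl set-card-profile alsa_card.usb-NXP_SEMICONDUCTORS_Teufel_CONCEPT_8-00 pro-audio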

Fast SFTP transfers with KDE / kio-extras / Dolphin

For a few years now, I’ve used Dolphin / Konqueror / Krusader for transferring files to or from my home server via SFTP. However, when there were larger amounts to be transferred, I opened the shell and used rsync. That’s because transfers always felt somewhat slower with SFTP. I blamed it on the weak file server, on the network, or whatever.

Recently, I once again wanted to clean up my desktop’s home directory, where a bunch of raw and edited videos were sitting – all in all ~250 GiB. Because I wanted to sort the videos into different sub-directories on the server, I used Dolphin and SFTP once more. While I was watching the transfer, I began to doubt whether the network between my desktop and the home server was actually Gigabit, because I only saw 10–15 MiB/s. A quick check with rsync and scp: nope, 80+ MB/s is possible, just not with Dolphin.
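
The quick check was nothing fancy, roughly along these lines (file and host names are placeholders, of course):

    # Same file, same link, same server - only the tool differs:
    rsync --info=progress2 video.mkv homeserver:/tmp/    # 80+ MB/s
    scp video.mkv homeserver:/tmp/                       # similar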

That’s when I decided to get to the bottom of the issue. I discovered KDE bug 296526 – “Dolphin is too slow when upload a file on a SSH server”. It fit exactly what I was observing, even though it’s from 2012 – that’s 14 years! I went on to read all the comments, and it became apparent that none of the users who had commented had taken the time to do a proper side-by-side comparison and document it. So I took up that task and posted comment 40, where I wrote pretty much what I wrote here, plus the results of some tests that I had done with libssh. That’s the underlying library which kio-extras/sftp uses to interact with SSH/SFTP servers, and Dolphin / Konqueror / Krusader in turn use kio-extras/sftp to do SFTP transfers.

These tests showed very promising results: I could actually saturate my Gigabit link and transfer at >750 Mbps. What I didn’t know at that time (September): I had tested a brand new version of libssh (0.11.0), released in August, that came with major changes – namely, a new asynchronous I/O API. The transfers with Dolphin, however, had still used libssh 0.10.x.

Upon learning about these important changes, I opened a version bump request for libssh 0.11.0 in Gentoo’s bug tracker to make the Gentoo devs aware of the new release. Fast forward two months, and libssh 0.11.1 was available in Gentoo. I then erroneously tested with kio-extras still linked against libssh 0.10.x (and thus without the new async I/O API), even though I had 0.11.1 on my system – the reason being that I hadn’t rebuilt kio-extras. That resulted in comment 45. A few minutes later I realized my error, rebuilt kio-extras (actually most of KDE, because an update was coming in anyway), and voilà: ~230 MB/s or ~1840 Mbps (with peaks going >2 Gbps)! Hurray!
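
In case you’re on Gentoo and want to avoid my mistake: after upgrading libssh, a one-shot rebuild of kio-extras is enough for the sftp worker to pick up the new library:

    emerge --ask --oneshot kde-apps/kio-extras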

Side note: I had upgraded my network from Gigabit to 2.5 Gigabit in the meantime, otherwise that would obviously not have been possible. But with the old libssh, only ~90 MB/s or ~720 Mbps were possible over the same network – 2.5x slower than with the new libssh 0.11.x.

So, if you find that your SFTP transfers in KDE are slower than they should be, check which libssh version comes with your distribution. If it’s <0.11.0, you’ve found the bottleneck. With non-rolling binary distributions, you’ll probably have to wait a bit and then upgrade your whole distribution. For example, Ubuntu will only get libssh 0.11.1 in “Plucky”, aka version 25.04. On rolling distributions like Gentoo or Arch, it’s already available.
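
Checking takes only a minute. The worker path varies by distribution and KDE version, so treat the one below as an example:

    pkg-config --modversion libssh    # needs to be >= 0.11.0
    # Verify what the kio sftp worker actually links against:
    ldd /usr/lib64/qt6/plugins/kf6/kio/sftp.so | grep libssh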

This is probably the biggest single improvement to my Linux-on-the-desktop experience in years… hence this blog post 🙂

Heating the room for Science

Before I can get into the actual topic, I have to give some background.
TL;DR: I have some free electric power that I wanted to put to good use. You can skip ahead to “How to put the power to good use?”

Background: Fuel cell produces more power than we need at times

We have a pretty fancy heating system that doesn’t just burn natural gas to generate heat as most heating systems do; instead, it reforms the gas to hydrogen and uses it in a fuel cell to generate ~750 watts of electric power. The fuel cell’s “waste heat” is what warms the house and the domestic hot water most of the time. On cold days, when the 1.1 kW of thermal power is not enough, there is also still a traditional gas condensing boiler built in, which simply burns as much gas as necessary. Another important component of the system is a 220-liter hot water tank, which serves as a kind of energy storage over time. If you’re interested in this technology, here are some links:

Now, 750 watts doesn’t sound like a lot, and often we need much more than that for short periods of time, e.g. while cooking or baking, or when the washing machine is running. But then again, there are sometimes many hours when the whole house needs less than 300 watts:

The blue graph is power production, yellow is what we don’t use, and red is what’s drawn from the grid. I’ve highlighted the area (green) where power was available from the fuel cell that we didn’t use.

The price we get for power exported to the grid (yellow graph) is very low – from an economic perspective it makes no sense to “waste” power by sending it to the grid. Which is weird, given that everyone is talking about the “Energiewende” (“turnaround in energy policy”), decentralized power production and so on. But that’s how it is right now, and there’s not much I can do about it. Hopefully policies will improve soon and make it more attractive to feed power into the grid, so that more coal-fired plants can be switched off.

How to put the power to good use?

So I was thinking about how I could put the electric power to good use, instead of “wasting” it. First I thought of a big battery, but those are rather expensive and heavy, degrade over time, and would probably never pay off. A big battery in the form of a battery electric vehicle would make a lot of sense, but that’s not an option right now either. Then, one evening, sitting next to my beast of a computer and playing a 3D game, I noticed that the room was getting pretty warm, even though the floor heating was switched off for that room. That’s when it occurred to me: why don’t I transform the surplus power into heat, and let the computer do something useful while generating it?

I remembered how – at the beginning of the COVID-19 pandemic – I had donated my old laptop’s meager CPU + GPU power to the search for a vaccine by running the BOINC client. What is a BOINC client, you ask? BOINC stands for “Berkeley Open Infrastructure for Network Computing”, and the client basically turns your computer into part of a distributed supercomputer that scientists can use to solve computation-intensive problems. See this Wikipedia article for more info.

So I installed BOINC on my desktop machine, connected it to Science United, and the machine started to hum. However, when I checked the power meter, the house was drawing power from the grid, even though it was late evening and no big power consumers were running. Turns out this beast – when all 12 cores and the GPU are crunching numbers as hard as they can – draws about 550 watts. Add the fridges and other infrastructure (home server, WiFi router/APs, switches, smart home stuff etc.) on top, and ~750 watts from the fuel cell weren’t enough. So I had to come up with a plan to limit the computer’s power usage somehow – to regulate its power consumption in such a way that BOINC would only run when it made sense. I came up with the following criteria for running BOINC:

  • Currently unused fuel cell power must be greater than what the computer needs when running BOINC without the GPU (more about the GPU later), i.e. >180 watts
  • Outside temperature must be “cold” (which I defined as less than ~10 degrees Celsius for now) – for the simple reason that otherwise the small room would get uncomfortably warm and I would have to open a window, which I’d consider a waste of energy.

I put these criteria in code and tried it out. But something wasn’t right: BOINC would always run for two to four minutes, then stop, only to start up again after another two to four minutes. Thinking back to how much power the machine was actually using at full CPU+GPU load, I realized I would have to regulate the power consumption in a more fine-grained way. As the GPU alone can use up to 350 watts, that’s where I saw the biggest leverage. I then (re)discovered the nvidia-smi tool, which allows setting a limit on how much power the GPU may use. So I enhanced my program to first start BOINC on the CPU only. If after the next cycle there was still sufficient fuel cell power, I would switch on the GPU at its lowest possible power limit (100 watts). Then, every cycle, if there were still more than 50 watts available, I would increase the GPU’s power limit further. I designed these control cycles to be two minutes long. The data to base the decisions on comes from InfluxDB and is averaged (mean) over those two minutes, so that short bursts of power consumption are flattened out. Whenever the available fuel cell power hit 0 watts, the GPU would be suspended first, and in the next cycle the CPU as well.
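
To give you an idea of the logic, here’s a heavily simplified sketch of such a control loop – not my actual program: get_surplus_watts and get_outside_temp are hypothetical stand-ins for the InfluxDB queries returning the two-minute means, and the 25-watt step size is made up. boinccmd and nvidia-smi are the real tools, though:

    #!/usr/bin/env bash
    # Simplified control loop sketch. get_surplus_watts / get_outside_temp
    # are hypothetical wrappers around the InfluxDB queries (integer
    # 2-minute means).

    GPU_MIN=100; GPU_MAX=350; GPU_STEP=25   # watts
    gpu_limit=0                             # 0 = GPU suspended
    cpu_running=0

    while true; do
        surplus=$(get_surplus_watts)        # unused fuel cell power in watts
        outside=$(get_outside_temp)         # outside temperature in degrees C

        if (( surplus <= 0 )); then
            # Out of surplus: suspend the GPU first, the CPU in the next cycle.
            if (( gpu_limit > 0 )); then
                gpu_limit=0
                boinccmd --set_gpu_mode never
            elif (( cpu_running )); then
                cpu_running=0
                boinccmd --set_run_mode never
            fi
        elif (( ! cpu_running )); then
            if (( surplus > 180 )) && (( outside < 10 )); then
                cpu_running=1
                boinccmd --set_run_mode auto    # start crunching on CPU only
            fi
        elif (( surplus > 50 )); then
            if (( gpu_limit == 0 )); then
                gpu_limit=$GPU_MIN              # enable the GPU at its lowest limit
                boinccmd --set_gpu_mode auto
            elif (( gpu_limit < GPU_MAX )); then
                gpu_limit=$(( gpu_limit + GPU_STEP ))
            fi
            sudo nvidia-smi --power-limit="$gpu_limit"
        fi

        sleep 120                           # one control cycle = two minutes
    done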

There are a few more elaborate details I added after a few days, but I can say it runs rather smoothly now. The room is warm and I’m contributing valuable computing power to science – at zero cost 😄