Andrew Hutchings (aka LinuxJedi) has worked on many of the Open Source software projects that make up the Internet. He now works from home in the UK for MariaDB developing their MariaDB ColumnStore engine.
For the last few days I have had my laptop connected to an external monitor with a 2560×1600 resolution. Whilst having a few websites open and doing a video chat, the laptop completely ground to a halt with the fan running at full speed.
My laptop is a Lenovo ThinkPad X260 with an i7 CPU and 16GB RAM, so not exactly a lightweight machine, which is why this was so unexpected. After trying some random things I have solved it, so I thought I would detail it here, if only so that I remember in the future.
After a little more investigating I found the root cause of my problem. I have a Firefox profile that I have been copying around for a while, and in it both layers.acceleration.force-enabled and gfx.canvas.azure.accelerated were enabled. It turns out that these settings appear to eat massive CPU resources and can cause performance issues with Glamor. I turned them off, switched back to Glamor rendering, and we are back to normal performance again.
This is the more ideal scenario because, as we move towards Wayland being the default display server, we will also be moving away from Xorg graphics drivers.
Recent versions of Xorg have switched to Glamor as a graphics driver. This basically uses 3D acceleration to draw your 2D desktop. From what I can tell, for most uses this is very performant. With just my laptop screen I have no complaints. When using high resolution external monitors, however, it appears to really struggle.
When I switched to the Intel native Xorg driver I could suddenly use the external monitor effortlessly. CPU usage was way down and it has even appeared to increase battery life when disconnected from the external screen. I’ve done a bit of searching as to why this would be but haven’t come up with any hard evidence so far.
To do this switch in my Kubuntu 18.10 installation I created a file called /etc/X11/xorg.conf.d/20.intel.conf with the following contents:
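A minimal version of such a file simply tells Xorg to use the intel DDX driver for the graphics device; the Identifier string is arbitrary:

```
Section "Device"
    Identifier "Intel Graphics"
    Driver     "intel"
EndSection
```

After creating the file, restarting the X session picks up the new driver.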
At the beginning of the year I gave KDE Plasma a try as the primary desktop on one of my devices. It wasn’t my primary laptop but I still used it heavily in that time. I enjoyed it but there were still some niggles that meant I wouldn’t have been happy with it being my primary desktop at the time.
A week ago I happened to come across the announcement for KDE Plasma 5.14. The thing that really caught my eye there was the “Display Configuration widget”. This led me to give KDE Plasma another chance and another week of testing. Again this is on my primary laptop, a highly-specced Lenovo ThinkPad X260.
At the time of writing Fedora did not have a KDE spin that included Plasma 5.14 beyond Rawhide, and I didn’t want to try Rawhide on my laptop. I’ve been hitting a lot of paper-cut style bugs with Fedora 29 anyway, including not being able to update my laptop at all due to TeX Live package issues. I therefore decided to try KDE Neon; I figured this distro would show Plasma as intended by the people behind KDE. For anyone not familiar with KDE Neon, it is based on Ubuntu’s current LTS release (18.04) and has the latest KDE packages on top. There are a few options depending on how bleeding edge you want to go; I went for the “stable” option.
There were two interesting parts to the new display configuration widget that really drew me in. The first is the fact that you have five easy to select buttons for instant monitor configuration. This is extremely handy for me at conferences and meetings where I need to quickly set up to give a presentation. The second is a rather innocuous switch that was actually my main draw: “Enable Presentation Mode”. This mode works a lot like the popular Caffeine extension in GNOME Shell. It inhibits screen blanking/locking while you are giving presentations. I actually also use this during conference calls, when I’m using my laptop as a second screen to the desktop that is making the call and don’t want it to go to sleep in the couple of minutes that I’m not reading it.
Everything is Better!
In my previous post I broke things up into Good/Bad/Ugly. I can’t do that this time because every bug I hit, every crash I had… It is all fixed. I’m someone that almost always installs software from the command line but I even found “Discover” a joy to use!
KMail works pretty well and I’m using it as my primary mail client now. Whilst its configuration can be a bit fiddly, once it is up and running it is a pleasure to use. I’ve been having performance issues with Thunderbird from version 60 onwards; no matter what machine I used, scrolling through thousands of emails was painfully slow. I was looking to switch mail clients anyway, so this is a refreshing change.
Things I Discovered
The screenshot tool is called “Spectacle” and it is amazing. You have much more control over what you are taking a screenshot of and you can even use a magnifier for area selections. It will even connect to a few screen recording applications to do video capture.
Dolphin, the file manager, has always been a very powerful tool. But I found out there are integrations such as Dropbox and Git available. I’ve only used these plugins briefly but they appear to work very well.
Idle power usage is insanely low. When the screen is off (and the system is not yet in suspend) it is as if the battery isn’t being used at all. I’ve left the laptop overnight open with the screen off (I disable auto-suspend) and the battery drain has been tiny.
That being said the battery drain appears to be a little higher than GNOME Shell when using a web browser. It appears a process called “kwin_x11” uses a lot of CPU time. I think with heavy usage I’ll only get 10-12 hours instead of 16-18 that I would get with GNOME Shell at the moment. But I’m OK with that. When using other applications the battery usage seems to be on-par with GNOME Shell.
My Workflow Changing
One of the big differences for me between GNOME Shell and KDE Plasma is task switching. As I said in my previous post I often have a lot of terminals open at any time and I like the fact that “Alt-Tab” in GNOME Shell brings them all to the front in one go. Plasma doesn’t have something like this, but it is OK.
Konsole’s multi-tab and split-screen support is awesome. I found myself gravitating towards that, and using Firefox containers instead of Chrome’s multi-window user profiles for personal/work Google account separation.
I have also created an “Activity” for development work which is separate from the one that has email, web, etc… on it. So I can work for a while pretty much distraction free.
This means I have fewer windows open at any time to do the same things I was doing before. I feel comfortable with this workflow.
Things I Would Like To See
KCalc is really useful when I’m doing base conversion work, but I would like to see a history there like other calculators have. I would also love it if “Numeric System Mode” showed a preview of the result in other bases like GNOME Calculator’s “Programming Mode” does. I think basically I just want a KDE version of GNOME Calculator.
I still miss the automatic timezone switching that is integrated into GNOME Shell. I have a trip in a few weeks where I will be spending a few hours in four different timezones. It would be nice for Plasma to use the libraries that can detect this and adjust accordingly. There are workarounds using scripting, but I’ll adjust this manually for now.
I’m also missing the slightly newer versions of software, such as GIMP, that can be found in Fedora. For now it isn’t too much of an issue, but waiting two years (until the next LTS) for such things to be updated may become an issue for me. If it were possible to have Neon based on the latest Ubuntu releases I think I would be very happy, but this would put huge demands on the KDE team, so LTS is a good compromise.
I have gone from not liking the workflow changes in my previous post to easily adjusting my workflow accordingly now. In hindsight I think the many paper-cut style bugs were dampening the experience for me. But now? Things are great. KDE Neon is the primary driver on my laptop and at least for now it is there to stay.
I may in the future also change my primary desktop computer’s OS to Neon, but this is a much larger task so I want to continue on my laptop for at least another few weeks before I try it.
Gource is a tool which can take a source code tree and create beautiful visualisations out of it. I’ve used it a few times before for various projects. This weekend I spent a little bit of time playing with it and applying it to MariaDB Server to see what it would produce.
In this visualisation you can see every file in the source code as a coloured dot. The dots are clustered in directories, which are linked together into the directory tree by the lines. Git users swarm around the files and spray code into them. It gives you a real sense of just how much work goes into a project.
To create this I got the MariaDB tree from GitHub and switched to branch 10.3. I observed the first MariaDB 5.1 tag was on the date 26th October 2009 so used that as a starting point. I could have gone right back to the beginning of MySQL here but the video would have been a lot longer!
I used my Lenovo ThinkStation S30 to generate the video; this is a 6-core Ivy Bridge Xeon with 64GB RAM and a GeForce 1050Ti graphics card. The reason I used this over my more powerful machines is that the GeForce can be used both for the OpenGL requirements of Gource and to hardware encode the video using FFmpeg.
Gource and FFmpeg are in most Linux distribution repositories. I was using Fedora for this with the RPM Fusion repositories to give me the proprietary NVidia drivers and CUDA based H.264 encoding support.
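The pipeline can be sketched roughly as follows; the repository path, logo filename, output name and the overlay/quality settings are placeholders and assumptions based on the description below, not the exact commands used:

```
gource ./server \
    -1920x1080 \
    --start-date '2009-10-26' \
    --seconds-per-day 0.1 \
    -a 1 \
    --hide filenames,usernames \
    -r 60 -o - | \
ffmpeg -y -r 60 -f image2pipe -vcodec ppm -i - \
    -i mariadb-logo.png \
    -filter_complex "overlay=main_w-overlay_w-10:main_h-overlay_h-10" \
    -c:v h264_nvenc -preset slow -b:v 10M mariadb-gource.mp4
```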
Breaking this down, we are telling Gource to generate at 1080p, starting at 2009-10-26, with every day in git taking 0.1 seconds so that the video doesn’t go on for hours. The ‘-a’ flag auto-skips if nothing happens for a whole second. We are hiding filenames and usernames simply because there are so many that they would cover up the visualisation. The final part of the Gource command tells it to stream out a PPM video to the pipe at 60 FPS.
For the FFmpeg command, we tell it to receive a 60 FPS PPM video feed, add a PNG overlay of the MariaDB logo to the bottom right, and pipe this through the NVidia H.264 encoder to generate our output MP4 file. The rest of the settings set the quality quite high so that the image is still relatively crisp at the end (YouTube’s re-encoding on upload will have reduced the quality a little).
Whilst encoding you get to see the visualisation on the screen; make sure you don’t move your mouse over it, as this will bring up context information which will also appear in the video. With FFmpeg using the GeForce, the video is encoded in real time. Before this I tried piping the data over SSH to my larger 16-core HP Z620 workstation to encode, but my 1GBit network was only fast enough for 18 FPS.
I later added a Creative Commons licensed music track on YouTube just to add a bit of ambience.
You may notice the date jumping backwards a little bit due to things such as branches being merged. I haven’t yet seen a way to flatten this out.
Everything here is Open Source and easy to tweak for your own software project. I’d be interested to see what others can do with these tools.
All software has bugs. Even if you could somehow write perfectly bug-free software, all the layers below it have bugs, even CPUs, as can be seen with the recent Meltdown and Spectre vulnerabilities. This means that, unfortunately, software will sometimes crash. When this happens it is useful to capture as much information as possible to try to stop it happening again.
One of the first things I did when coming back to work from the holiday break was to code a new crash dump handler to be used in MariaDB ColumnStore. This will spit out a stack trace for the current thread into a file upon a crash. It is very useful for daemons, to try to find the root cause of a problem without running it through a debugger.
The first thing you will want to do is enable useful debugging symbols and frame pointers in your binary compilations. This may add a tiny overhead to execution, a few percent at most, but it is worth it to be able to run a postmortem on crashes. The useful options are “-g” and “-fno-omit-frame-pointer”.
This is a basic crash handler; it will dump the crash data into a file in /tmp named after the PID of the process. You will likely want to expand on this to add more information and error handling. The important thing is to avoid mallocs as much as possible:
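A minimal sketch of such a handler, assuming a /tmp/&lt;pid&gt;.log naming scheme; note that snprintf() and ctime() are not strictly async-signal-safe, but they do at least avoid malloc:

```c
#include <execinfo.h>
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static void crash_handler(int sig)
{
    void *addrs[128];
    char filename[64];
    char header[128];
    int fd, frames, len;
    time_t now = time(NULL);

    /* Build the dump filename from the PID without allocating memory */
    snprintf(filename, sizeof(filename), "/tmp/%d.log", (int)getpid());
    fd = open(filename, O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd >= 0) {
        /* ctime() already appends a newline to the date string */
        len = snprintf(header, sizeof(header), "Date: %sSignal: %d\n",
                       ctime(&now), sig);
        write(fd, header, len);
        /* Capture the backtrace and write it straight to the fd */
        frames = backtrace(addrs, 128);
        backtrace_symbols_fd(addrs, frames, fd);
        close(fd);
    }
    /* Hand the signal back to the default handler */
    signal(sig, SIG_DFL);
}
```

Everything is written through a file descriptor; in particular backtrace_symbols_fd() is used instead of backtrace_symbols() because the latter mallocs its result.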
This opens the file, writes the current time/date into it as well as the signal number that generated the crash. It then gets the backtrace and writes it into the file. We then reset the signal handler to default. You’ll need some more headers than this example, but execinfo.h, which is part of glibc, provides the backtrace functionality.
Adding to Application
Somewhere near the beginning of your ‘main’ function you need to add signal handler hooks, you’ll need to include ‘signal.h’ for this to work:
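A sketch of what those hooks could look like, with a stub standing in for the real crash handler; the particular set of signals hooked here is my own choice:

```c
#include <signal.h>
#include <stdio.h>

/* Stub standing in for the real dump-to-file crash handler */
static void crash_handler(int sig)
{
    fprintf(stderr, "caught signal %d\n", sig);
    signal(sig, SIG_DFL);
}

/* Call this near the top of main() */
static void install_crash_handlers(void)
{
    signal(SIGSEGV, crash_handler);
    signal(SIGABRT, crash_handler);
    signal(SIGBUS, crash_handler);
    signal(SIGFPE, crash_handler);
}
```

A production daemon may prefer sigaction() over signal() for more control over handler semantics.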
Once compiled and running an easy way to test this is to send a signal to an application to tell it that it has crashed. You can do this with “kill -11 <PID>”. You should find the crash dump in /tmp.
The crash dump file will have a list of function calls and address offsets. This may be useful on its own, but you can also use the same binaries to resolve these to source line numbers. The following is an example from a MariaDB ColumnStore binary:
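The general technique is to feed an address offset from the dump into addr2line against the same binary; the binary path and offset below are placeholders for illustration:

```
addr2line -f -C -e /usr/local/mariadb/columnstore/bin/PrimProc 0x41fd2
```

The -f flag prints the function name as well as the file and line, and -C demangles C++ symbols.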
Unlike most storage engines, MariaDB ColumnStore does not store its data files in the datadir. Instead these are stored in the Performance Modules in what appears to be a strange numbering system. In this post I will walk you through deciphering the number system.
If you are still using InfiniDB with MySQL, the system is exactly the same as outlined in this post, but the default path that the data is stored in will be a little different.
The default path for the data to be stored is /usr/local/mariadb/columnstore/data[dbRoot] where “dbRoot” is the DB root number selected when the ColumnStore system was configured.
From here onwards we are looking at directories with three-digit names ending in “.dir”. Every file will be nested in a path similar to 000.dir/000.dir/003.dir/233.dir/000.dir/FILE000.cdf.
Now, to understand this you first need to understand how ColumnStore’s storage works. As the name implies, every column of a table is stored separately. These columns are broken up into “extents” of 2^23 (roughly 8 million) entries; either 1 or 2 extents (depending on how much data you have) will make up a segment file. Each segment file is given a segment ID, and a collection of four segments is given a partition ID. In addition to all this, every column is given an “Object ID”.
You can find the object ID for every column using the information_schema.columnstore_columns table and details about every extent, including the partition and segment IDs using the information_schema.columnstore_extents table. This will be useful when working out the file names.
The following is how to work out a filename from an object ID. It should be noted that object IDs are 32-bit and the output of each of these parts is converted to decimal:
Part 1: The top byte from the object ID (object ID >> 24)
Part 2: The next byte from the object ID ((object ID & 0x00ff0000) >> 16)
Part 3: The next byte from the object ID ((object ID & 0x0000ff00) >> 8)
Part 4: The last byte from the object ID (object ID & 0x000000ff)
Part 5: The partition ID
Part 6 (the filename): The segment ID
Each part here apart from the final part is a directory appended with “.dir”. The filename is prepended with FILE and appended with “.cdf”. There is of course a much easier way of finding out this information. The information_schema.columnstore_files table will give you the filename for each object/partition/segment combination currently in use as well as the file size information.
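As a worked illustration, the six parts above can be combined into a path in C; the function name is hypothetical and this only sketches the scheme described:

```c
#include <stdint.h>
#include <stdio.h>

/* Build the relative data file path for a given object ID,
   partition ID and segment ID, following the scheme above. */
static void columnstore_path(uint32_t object_id, uint32_t partition_id,
                             uint32_t segment_id, char *out, size_t out_len)
{
    snprintf(out, out_len,
             "%03u.dir/%03u.dir/%03u.dir/%03u.dir/%03u.dir/FILE%03u.cdf",
             object_id >> 24,                  /* Part 1: top byte */
             (object_id & 0x00ff0000) >> 16,   /* Part 2 */
             (object_id & 0x0000ff00) >> 8,    /* Part 3 */
             object_id & 0x000000ff,           /* Part 4: last byte */
             partition_id,                     /* Part 5 */
             segment_id);                      /* Part 6: the filename */
}
```

For example, object ID 1001 with partition 0 and segment 0 produces the path shown earlier: 000.dir/000.dir/003.dir/233.dir/000.dir/FILE000.cdf.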