Server crash on 25th February

On Saturday morning, the 25th of February, I was quite surprised that an email my fiancée had sent me was not delivered to my phone. Wondering about that, I searched for the cause and after some digging realized that my storage server had crashed.

The crash was a provider issue: the host kernel on netcup's system had a bug and sent all its virtualized machines to their 'death'. To be precise, the system still reported the VMs as running, but they were completely offline. That subsequently also killed my production system, as its storage was no longer available.

A few hours of restoring things, including a phone call to the provider at the beginning, solved all the problems resulting from the downtime, and everything is running again.

Still, I noted down some tasks to work on to make recovery easier and faster next time.

Everything on latest version again

Looks like I have only been posting about server updates for quite a while now. Well, what else should I say?

Of course, in my private life there’s a lot to talk about, but I do not lay my private life open to the public on a blog, like some other people do…

Err, yeah, so regarding the updates: it's all updated. Except wekan – that sucker got blown up and I don't want to fix it now; I'm working on different programming topics for the rest of the day.

Another server update

Quite some time was lost between the last and this server update.

And of course there are some update quirks this time. It’s seldom, but it happens.

  • collabora, accessed from the nextcloud instance, doesn't work.
  • the webmail client is down because it doesn't support PHP 8 yet. :rolling_eyes:

With collabora I'm lost, no idea what went wrong there…
But webmail will be fixed this evening or tomorrow, once I've installed the latest RC.

Pushing Jenkins/Pipeline/Groovy/JVM to the limits [update]

At work we have extended our tooling with our own Python build-system wrapper, which handles full dependency chains, forward and backward, and is capable of generating Jenkins pipelines for building in the Jenkins CI system.

Such a build is quite simple: the Jenkinsfile just needs two lines to bootstrap out of a Jenkins pipeline library, and then the CI build itself in its standard incarnation has 4 build phases:

  1. Generation phase, here the build is bootstrapped, the pipeline is generated, loaded and executed
  2. Prebuild phase, things to be done before the real build (reporting, or other stuff)
  3. The build itself, parallel execution for all existing targets and variants, plus test execution, static code analysis etc.
  4. Postbuild phase, reporting, metrics, Jira connectors, etc.

So such a generated pipeline can reach up to 1500 lines, depending on the project's configuration – typically not a big deal.

Now we added the feature to build a complete dependency chain. Yes, you read that right. If you have projects A, B and C and the dependency chain is A -> B -> C (-> means 'depends on'), then building C means its result is used to build B, whose result is in turn used to build A. Still a rather simple thing, but running it can mean a quite complex and huge pipeline.
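As a toy sketch (the `build` function and artifact names here are made up for illustration, not our real wrapper), such a chained build boils down to feeding each project's result into the next build:

```shell
#!/bin/sh
# Toy sketch of a dependency-chain build for A -> B -> C:
# C is built first, its result feeds B, whose result feeds A.
set -e

build() {
    project="$1"
    input="$2"
    # stand-in for a real CI build step
    echo "building $project (input: ${input:-none})" >&2
    echo "artifact-of-$project"    # the "result" handed to the next build
}

out_c=$(build C "")
out_b=$(build B "$out_c")
out_a=$(build A "$out_b")
echo "$out_a"
```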

Method size

And there we hit our first problem, still a simple one. Executing that throws an exception with the message “Method code too large!” from the underlying JVM, which limits a method's bytecode to 64k.

Weeell, not that big a deal: some small changes to the generation so that every target build got its own function – problem solved.

Class size

Okay, fine. Then we got that one new huge project with something like 30 components. A normal build is fine, works well so far. And then someone tried to run a dependency build. BOOM!

Now we got an exception saying “Class size too large!” – seriously, what the fuck?

Now, it's okay that a pipeline/Groovy script with 52.5k lines of code (1.6MB) may be a bit oversized, but why the fuck is there a limit on the class size? (We already wondered about the method code limit…)

Okay, the first step was to use generatable reusable methods in the pipeline, which already reduced that pipeline to 22.7k lines of code – still too large for a class. (Yes, I know that LOC is not the same as the byte size of a class or method, but it is at least an indicator of its size.)

What now? Splitting into multiple loadable Groovy classes, of course. Said and done: every build step is now its own little Groovy file, 438 files to be exact. In the main pipeline script we now generate a map with an entry for each file and load the classes dynamically into that map.

Now guess what…

General error when generating a class: ArrayIndexOutOfBoundsException

That's a show stopper for now. We are trying to find out where this comes from, and guess what the internet spits out about that error?

Nothing, or just shit/babble/rubbish.


A solution is of course to make the generation more intelligent, and the generated result too, which also brings much higher complexity. But it is still the best solution right now, as long as you generate everything of a huge Jenkins pipeline.

Lets Encrypt Wildcard certs

I searched for half an hour for a how-to on Let's Encrypt wildcard certificates with automatic renewal.

All the sites I found just promoted the manual method, where I would have to add DNS entries by hand every 3 months – neeeeeever!

Then I stumbled upon an acme client tool for Let's Encrypt that even has plugins for most providers who offer DNS configuration and expose an API. And there exists a plugin for my provider!

Couldn't be better. Just set the environment variables as mentioned in the plugin's quite small how-to, and run the command to get a new cert.

It might also be a good idea to use a bigger key size, because the default is just 2048 bits: --issue --dns dns_netcup -d -d * -k 4096
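Assuming the client in question is acme.sh (the tool whose netcup DNS plugin is called dns_netcup), a complete invocation could look like the following; example.com is a placeholder for your own domain:

```shell
# hypothetical example: replace example.com with your own domain.
# the dns_netcup plugin reads the netcup API credentials from
# environment variables, as described in its how-to.
acme.sh --issue --dns dns_netcup \
    -d example.com -d '*.example.com' \
    -k 4096
```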

And you’re done. 

Now you just have to point your services to the new certificates.

For me those were:

  • apache 
  • quasselcore
  • postfix
  • dovecot
  • prosody (xmpp/jabber)

Once again, Qualys SSL Labs and mxtoolbox were a great help in checking that everything works as expected – thanks for that, guys!

Editing office documents directly inside nextcloud

It bothered me for a long time that I couldn't edit office documents directly online on my own/nextcloud. Then I found the collabora plugin in the nextcloud apps and checked the nextcloud website about it.

It’s easier than you think.

First Step: Get yourself the docker container running

The simplest solution would be a docker-compose.yml file like this one:

version: '2'
services:
  collabora:
    image: collabora/code
    environment:
      - username=<username>
      - password=<password>
    restart: always
    networks:
      - collabora
networks:
  collabora:
    driver: bridge

It's the latest development version in collabora/code, so for private use it's okay. 🙂

As a sidenote: I have no idea what the username and password in the docker container are for, but I've set them just to be sure.

Don't forget to configure your webserver with a subdomain vhost and all the proxy configuration parts mentioned in the nextcloud tutorial.
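Before wiring up the proxy, a quick smoke test against the container may be worth it. Assuming you publish port 9980 (the port CODE listens on) on localhost, a healthy instance answers the WOPI discovery request:

```shell
# assumes the collabora container's port 9980 is published on localhost;
# a healthy instance returns the WOPI discovery XML (self-signed cert, hence -k)
curl -k https://127.0.0.1:9980/hosting/discovery
```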

Second Step: Configure your Let’s Encrypt cert for the subdomain

Well, that’s kinda obvious and god damn simple, so I’ll skip to the next and last step.

Last Step: Configure your collabora app in nextcloud

… with the subdomain of your collabora docker instance behind the webproxy. 

And it magically works. I was surprised too! 

If anything doesn’t work as expected check back with the nextcloud site mentioned above or maybe on the website of collabora itself.


IPv6

Well, it's about time that I tackle that topic too.

So here's a list of what I had to do to get it working for all the services I have running:

  • Checking the IPv6 subnet I got from my provider
    • Setting one of those IPs on the network device
  • Checking DNS entries
    • Adding AAAA records for “*”, “@” and the server name
    • Adding an IPv6 reverse DNS name
    • For email I had to correct my SPF entry
  • Service configurations I had to change or to check
    • Apache, just had to check that the Listen configuration listens on all interfaces
    • Postfix, here I had to add the IPv6 protocol
  • Thankfully the docker-internal network is completely hidden, so I don't have to care about anything running behind my apache proxy; the SSH server is listening on all devices, and I currently don't care about external SSH access to my gitlab instance – that may stay on IPv4 for a while
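The checklist above can be spot-checked from the shell; the domain, interface and address below are placeholders (2001:db8:: is the IPv6 documentation prefix):

```shell
# did the AAAA records land? (example.com is a placeholder)
dig +short AAAA example.com

# is the address configured on the network device? (eth0 is a placeholder)
ip -6 addr show dev eth0

# does the reverse DNS entry resolve back?
dig +short -x 2001:db8::1

# are the services reachable over IPv6?
ping -6 -c 3 example.com
```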

What might help when you're testing IPv6 is the following test website:

So far everything is working. What's still bugging me is that you can't force your browser to use IPv6 when visiting a site that supports it – you don't even know whether it does…

Zuul, Jenkins, Gerrit and Git submodules

So, we got a Git -> Gerrit -> Zuul (w/ Gearman) -> Jenkins setup at work and we started to use Git submodules with one repository lately.

Setting up the quality gate with Zuul and Gerrit for a normal git repository is quite straightforward, and I won't mention it any further. Our problem was that we wanted to build the parent repository of our submodule repository whenever a change was committed for review or merge.

Zuul doesn't give you any options here: it only has single-project configurations and doesn't support project dependencies.

BUT it supports build job dependencies!

So the solution is to build your submodule standalone in a first job, which can be the standard review job based on a Jenkinsfile inside the submodule repository, and then to start a build job with the parent repository which depends on the result of the standalone submodule build. This second job can't be a standard review build job because it has to do some things differently, but the standard Jenkinsfile for the review of the parent repository can be used with minor modifications.

So for your parent repository, you'll already be using a checkout method which also retrieves the submodule repository; it may look like this:

def zuul_fetch_repo() {
    checkout changelog: true, poll: false, scm: [
        $class: 'GitSCM',
        branches: [[name: 'refs/heads/zuul']],
        doGenerateSubmoduleConfigurations: false,
        submoduleCfg: [],
        userRemoteConfigs: [[refspec: '+$ZUUL_REF:refs/heads/zuul', url: '$ZUUL_URL/$ZUUL_PROJECT']],
        extensions: [
            [$class: 'SubmoduleOption', disableSubmodules: false, parentCredentials: true,
             recursiveSubmodules: true, reference: '', trackingSubmodules: true],
            [$class: 'CleanBeforeCheckout']
        ]
    ]
}

Because you have to use a special job for this task, you also have to change the fetch function away from the generic $ZUUL_URL/$ZUUL_PROJECT to a hardcoded checkout URL.

The Zuul variables are instead used to update the submodule repository to the provided change; the resulting fetch function could look like this:

def zuul_fetch_repo() {
    checkout changelog: true, poll: false, scm: [
        $class: 'GitSCM',
        branches: [[name: 'master']],
        doGenerateSubmoduleConfigurations: false,
        submoduleCfg: [],
        userRemoteConfigs: [[url: 'ssh://<user>@your-gerrit.url:29418/parent-repo']],
        extensions: [
            [$class: 'SubmoduleOption', disableSubmodules: false, parentCredentials: true,
             recursiveSubmodules: true, reference: '', trackingSubmodules: true],
            [$class: 'CleanBeforeCheckout']
        ]
    ]

    sh '''
    cd path/to/your/submodule/repository
    git pull $ZUUL_URL/$ZUUL_PROJECT +$ZUUL_REF:refs/heads/zuul
    '''
}

And that's it! You just have to somehow get the change from the project you configured in Zuul into the submodule, and you have a build of the parent project with the change commit from the submodule integrated. Of course you can make that a bit fancier, but that's left as an exercise for the reader.

Finally, here's a little snippet of the zuul config part reflecting that:

  - name: submodule-repo
    review:    # the zuul pipeline
      - review:    # standard review job, submodule standalone
        - review-parent-with-submodule    # parent project with submodule checkout


Getting rid of an old email address…

Just in case anyone wonders: I've deleted one of my old email addresses, which was called ‘’ (2004–2017, rest in peace, now without spam).

It just turned into a spam honeypot lately, and I had already stopped using the address actively in 2013, as far as I remember.

I kept it active for the unlikely chance that someone who doesn't have my current addresses would want to get in contact with me. Now I don't care anymore; it's deleted.

Integrity checks of different file types after hard disk crashes

So, your hard disk crashed?

You rescued it with ddrescue/other tools?

Now you don’t know if the files are still intact?

Here are some solutions for some media file formats:

  • Movies (avi, mpeg, mp3, mkv, webm, … everything ffmpeg can decode.)
    • ffmpeg -v error -i "$1" -f null - 2>"$1".log
    • decodes a movie or audio file and reports all errors into a logfile; if you do that for every one of your files, you get a bunch of logfiles which you can grep for read errors – then you have an idea which files are damaged.
    • #!/bin/bash
      if [ ! -e "$1.log" ]; then
              echo "Checking file: $1"
              ffmpeg -v error -i "$1" -f null - 2>"$1".log
              ls -l "$1".log
      else
              echo "$1 already checked."
      fi

      find . -type f -size +1M -exec ./ "{}" \;

  • Pictures (png, jpg/jpeg)
    • Use pngcheck for pngs
    • Use jpeginfo -c for jpgs
  • Music (flac)
    • Just use flac -t to test a flac file

When I find other useful integrity-check methods for other file types or media, I may add them.