Pear upgrade broken

I haven’t been able to upgrade Horde for a few months:

Error getting channel info from Connection to `ssl://' failed:

Today I finally dug into the issue. The short explanation and fix in my case was:

# php -r "print_r(openssl_get_cert_locations());"
    [default_cert_file] => /usr/local/ssl/cert.pem
    [default_cert_file_env] => SSL_CERT_FILE
    [default_cert_dir] => /usr/local/ssl/certs
    [default_cert_dir_env] => SSL_CERT_DIR
    [default_private_dir] => /usr/local/ssl/private
    [default_default_cert_area] => /usr/local/ssl
    [ini_cafile] =>
    [ini_capath] =>
# ll /usr/local/ssl/cert.pem
ls: /usr/local/ssl/cert.pem: No such file or directory
# cd /usr/local/ssl
# wget --no-check-certificate
# mv cacert.pem cert.pem

Problem solved. Completely unrelated to Horde. Might have been caused by a PHP or OpenSSL upgrade at some point, I guess.
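An alternative to renaming the downloaded bundle is to point OpenSSL-based tools at it through the environment variables listed in the output above. A sketch, assuming the bundle is kept under its original cacert.pem name:

```shell
# Point OpenSSL (and thus PHP's openssl extension) at an explicit CA bundle
# via the env vars reported by openssl_get_cert_locations().
# The bundle path is an assumption based on the layout shown above.
export SSL_CERT_FILE=/usr/local/ssl/cacert.pem
export SSL_CERT_DIR=/usr/local/ssl/certs
echo "CA bundle: $SSL_CERT_FILE"
```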

Miele@mobile 2.02

By accident I found out yesterday that the Miele@mobile app was updated in Google Play back in May. For some reason Pushbullet didn’t notify me, and I’m not allowed to download the app from Google Play myself, since I reside in Denmark and Miele@home is not supported here. Strangely, the only visible change I found since 2.01 was a complete Danish translation. Nice – hopefully this means that Miele@home support in Denmark is not too far away. Update, September 6th 2016: I just found out that the geo-restriction in Google Play is also gone, or at least expanded to include Denmark. It’s now also possible to select Miele Denmark as the origin for a Miele user account.

Today’s small rant:

  • Why this geoblocking? Within the EU I’m allowed to buy all the hardware in Germany, but I’m not allowed to use the only app that exists for the system – with my very expensive appliances and Miele@home gateway/modules, I might add. Luckily, on Android there are ways to get the app anyway, but my girlfriend can’t download the iPhone app and use the system at all.
  • Why not open up the new JSON-RPC API, so better 3rd party apps could be written that don’t have to depend on the old, broken Homebus protocol? This would make the system more useful, since new ways to use the system would arise.

Miele XGW 3000 firmware 2.03

For quite some time, I’ve been holding back a post introducing my Miele@home Android project. So in the wrong order, here’s an attempt to get in touch with some fellow Miele@home geeks: Sharing my findings about the new XGW 3000 firmware upgrade, which was released last week.

My findings are only about the Homebus protocol and multicast packets, since the new JSON-RPC protocol isn’t for public consumption (or so it seems). So here we go – changes since version 2.0.0:

  • Element ‘type’ in http://<gateway>/homebus/ seems to be working again (e.g. <type>WMV960</type>). I’ve never seen this work since the original firmware of my gateway (1.1.0).
  • Prior to 2.0.3, multicast packets would refer to a ZigBee MAC address (at least when communicating with XKM 3000 Z modules): “id=hdm:ZigBee:<MAC>”. This is now changed to device UID.
  • For some devices numeric values are now included in the http://<gateway>/homebus/device XML:
    <key name="State" value="Running" type="state" raw="5"/>
    <key name="Program" value="Cottons" type="program" raw="1"/>
    <key name="Phase" value="Rinses" type="phase" raw="5"/>

    This is true for my WMV 960, but not my TKR 350 WP or H 5581 BP. They all have the XKM 3000 Z module, but the WMV has firmware version 1.16, while the other two are version 1.02 (does anyone know if these modules are firmware-upgradeable and/or where to find a changelog?).
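The new raw values are easy to scrape without a full XML parser. A minimal sketch using sed on a stand-in fragment like the one above (a real run would fetch http://&lt;gateway&gt;/homebus/device first):

```shell
# Extract the numeric 'raw' attribute for a given key name from the
# homebus XML. The fragment below is a stand-in for real gateway output.
cat > /tmp/homebus-sample.xml <<'EOF'
<key name="State" value="Running" type="state" raw="5"/>
<key name="Program" value="Cottons" type="program" raw="1"/>
<key name="Phase" value="Rinses" type="phase" raw="5"/>
EOF
sed -n 's/.*name="State".*raw="\([0-9]*\)".*/\1/p' /tmp/homebus-sample.xml
```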

Bugs still present:

  • Language parameter not respected for http://<gateway>/homebus/device?language=en. Gateway language is always used.
  • Bizarre values for key “Start Time”. Will update the post on this later.

That’s it for now. I’ll return later with more information about my project as well as rants about the Homebus protocol and Miele’s secrecy and lack of support.

Synology DS cloud broken

The latest version of Synology’s Android app DS cloud stopped working with the Cloud Station package on my DS508, which is stuck with DSM 4.0. The app was released on Google Play March 2nd 2016 – version 2.6. Cloud Station package version is 2.2-3047.

Cloud Station is running on the DS. My Android phones and tablet are shown as offline in the client list. On the devices DS cloud is shown as running, but no files are synchronized. Synchronization is missing both ways – files are not pushed to the phone, and files updated on the phone are not pushed to the DS.

To fix this I went to and downloaded version 2.5 of the app. Before installing, the existing app has to be uninstalled, so reconfiguration is needed.

I created a support ticket at, so they are informed.

Update, March 10th 2016: Synology reported back to me that they had analyzed my logs and found the problem: The database was corrupted. So it must be a bug in the database upgrade in version 2.6 of the app. After clearing the app data and reconfiguring it works perfectly.

Building PHP after installing MySQL 5.7

I got the following error when trying to build PHP 5.6.14/5.6.15 after upgrading to MySQL 5.7:

configure: error: Cannot find libmysqlclient_r under /usr/local/mysql.
Note that the MySQL client library is not bundled anymore!

I added this to my build script in order to create the missing symbolic links:

cd /usr/local/mysql/lib
for f in libmysqlclient.so*; do ln -s "$f" "$(echo "$f" | sed s/libmysqlclient/libmysqlclient_r/)"; done
ln -s libmysqlclient.a libmysqlclient_r.a
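To see what the loop produces without touching the real MySQL tree, the same logic can be exercised in a scratch directory (the file names here are just examples of what a MySQL 5.7 lib directory contains):

```shell
# Demonstrate the libmysqlclient -> libmysqlclient_r link creation on
# dummy files; the real script runs in /usr/local/mysql/lib instead.
mkdir -p /tmp/mysqldemo && cd /tmp/mysqldemo
touch libmysqlclient.a libmysqlclient.so libmysqlclient.so.20
for f in libmysqlclient.so*; do
  ln -sf "$f" "$(echo "$f" | sed s/libmysqlclient/libmysqlclient_r/)"
done
ln -sf libmysqlclient.a libmysqlclient_r.a
ls libmysqlclient_r.*
```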

Denon receiver plugin for Yatse

I probably made my most niche thing ever today: A plugin for an Android app. I found out yesterday that a new API for Yatse was made available, so today I took on the challenge, and created a plugin for my Denon A/V receiver. I can now control the volume on my receiver directly from Yatse. In a few days I might clean the code up and finish it, so it can be released either here or on Google Play.

Update, August 8th 2015: I’ve just released the plugin!
Get it on Google Play.
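For the curious: the plugin doesn’t do anything magical – many Denon network receivers accept short plain-text commands over HTTP. This little helper only builds the request URL; the endpoint path and command names (e.g. MVUP for master volume up) are assumptions based on my own receiver and may differ on other models:

```shell
# Build a control URL for a Denon receiver. The endpoint and command
# names are assumptions from my own testing, not official documentation.
denon_cmd_url() {
  echo "http://$1/goform/formiPhoneAppDirect.xml?$2"
}
denon_cmd_url 192.168.1.40 MVUP
# Send it with: curl -s "$(denon_cmd_url 192.168.1.40 MVUP)"
```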

Raspbmc and NFS permissions

NFS is often used for accessing network shares from Raspbmc, due to its low overhead. Many tutorials describe how to set this up, for example with a Synology NAS. One subject isn’t covered much, though: A setup with restrictive permissions.

On my Synology my media files are usually owned by me and have ‘dlna’ as group with read-only permissions. Example:

drwxr-x---  9 jacob dlna   4096 Jun 15  2012 video

On the NAS I have created a user for Raspbmc and made it a member of this group:

DiskStation> cat /etc/group | grep dlna
DiskStation> cat /etc/passwd | grep raspbmc
raspbmc:x:1046:100:Raspberry Pie XBMC:/var/services/homes/raspbmc:/sbin/nologin

To get this to work on the Raspberry I first had to synchronize the GIDs/UIDs. Since the NAS is the master, I did this on the Raspberry (logged in as the pi user):

pi@raspbmc:~$ sudo groupadd -g 65536 dlna
pi@raspbmc:~$ sudo usermod -a -G dlna pi

After enabling root access, I’ve changed the pi user to match the UID of the raspbmc user on the NAS:

root@raspbmc:~# usermod -u 1046 pi

(For this to work I had to kill a number of processes first)

Changing the UID of the pi user will cause a lot of trouble for Raspbmc, which expects the user to have UID 1000. This is hardcoded in at least two scripts:

  • /opt/xbmc-bcm/xbmc-bin/share/xbmc/addons/script.raspbmc.settings/
  • /opt/xbmc-bcm/xbmc-bin/share/xbmc/addons/script.raspbmc.settings/

Fix this by adding:

sed -i 's/getpwuid(1000)/getpwuid(1046)/g' /opt/xbmc-bcm/xbmc-bin/share/xbmc/addons/script.raspbmc.settings/
sed -i 's/getpwuid(1000)/getpwuid(1046)/g' /opt/xbmc-bcm/xbmc-bin/share/xbmc/addons/script.raspbmc.settings/

to /etc/rc.local so the scripts are automatically fixed during startup. Replace 1046 with your pi UID.
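Since an update can restore the original scripts, the rc.local fix can also be made idempotent with a grep guard. Demonstrated here on a scratch copy (the real paths are the two scripts listed above; the file name in the demo is made up):

```shell
# Only rewrite scripts that still contain the hardcoded UID; safe to run
# on every boot. The demo uses a scratch file instead of the real scripts.
mkdir -p /tmp/raspbmc-demo
printf 'user = getpwuid(1000)\n' > /tmp/raspbmc-demo/settings.py
for s in /tmp/raspbmc-demo/*.py; do
  grep -q 'getpwuid(1000)' "$s" && sed -i 's/getpwuid(1000)/getpwuid(1046)/g' "$s"
done
cat /tmp/raspbmc-demo/settings.py
```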

Without this fix automatic updates won’t work, you’ll see script errors during startup, and you can’t launch the Raspbmc settings.

gone missing

For quite some time I’ve been having a problem with my file going missing every now and then. Whenever this happens, I can’t use the init script to start and stop the server anymore, which is pretty annoying. For some reason I had learned to live with this – I would just ‘killall httpd’, start the server again and the would be back for some time, until the problem would reappear.

Finally I decided to do something about it. I added this small script to my hourly cron jobs:

if [ ! -e "/usr/local/apache2/logs/" ]; then
        echo " missing"
fi

A few weeks later this trap was finally triggered – less than an hour after my daily logrotate, which produced this output:

Starting httpd: [  OK  ]
Stopping httpd: [  OK  ]
(repeated a number of times...)
Starting httpd: [  OK  ]
Stopping httpd: [FAILED]
Starting httpd: (98)Address already in use: AH00072: make_sock: could not bind to address [::]:80
(98)Address already in use: AH00072: make_sock: could not bind to address
no listening sockets available, shutting down
AH00015: Unable to open logs
error: error running postrotate script for /usr/local/apache2/logs/*_log 

This was not the first time I had seen this, but it was the first time I realized that the logrotate failure was the cause, and not the other way around.

So I checked my /etc/logrotate.d/apache config file:

/usr/local/apache2/logs/*_log {
    rotate 1024
    size 4M
    postrotate
        /etc/init.d/httpd restart
    endscript
}

After a few minutes of reading the logrotate man page, I realized what was wrong (I had a hunch, though): The restart between the postrotate/endscript directives was performed for each logfile, i.e. multiple times, instead of just a single time after the last one was rotated. This is bad in itself, but the restarts would also happen asynchronously, creating a mess. The problem was easily solved with the sharedscripts directive. Also, a simple “reload” instead of “restart” is sufficient to make httpd reopen the logfiles and create new file handles.

After fixing these two problems, the file ended up like this:

/usr/local/apache2/logs/*_log {
    rotate 1024
    size 4M
    sharedscripts
    postrotate
        /etc/init.d/httpd reload
    endscript
}

I’m fully expecting this to solve the problem once and for all.
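logrotate can dry-run a config with -d, which is a quick way to confirm the postrotate script is now listed once for the whole glob instead of once per file. A sketch against a scratch copy of the config (logrotate is assumed to be installed; the || echo keeps the snippet harmless where it isn’t):

```shell
# Write a scratch copy of the fixed config and dry-run it; -d prints the
# planned actions without actually rotating anything.
mkdir -p /tmp/logdemo && touch /tmp/logdemo/access_log /tmp/logdemo/error_log
cat > /tmp/apache-rotate.conf <<'EOF'
/tmp/logdemo/*_log {
    rotate 1024
    size 4M
    sharedscripts
    postrotate
        echo reload
    endscript
}
EOF
logrotate -d -s /tmp/logrotate.state /tmp/apache-rotate.conf 2>&1 \
  || echo "logrotate not installed"
```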

Playing multi-channel FLAC’s on Raspbmc

About a year ago I bought a Raspberry Pi and installed Raspbmc. I wanted to use this cheap little gadget as a media center, filling some of the holes in my existing home entertainment setup. One of the things I was hoping to get out of it was the ability to play multi-channel (5.1) FLAC’s through HDMI to my surround receiver. However, I never got this to work. The same goes for the primary goal – being able to play back my DVD collection from ISO files, but that’s another story…

The setup:

  1. The FLAC’s are 24 bit with a sample rate of 96 kHz.
  2. I’m using NFS for efficient file transfer from my NAS.
  3. The average bitrate for the files is below 10 Mbps.

Raspbmc will try to play the files, but immediately chokes or freezes. I’m ruling out network limitations, since the Pi is wired and able to stream 1080p video at higher bitrates. I had completely given up, thinking it was a shortcoming of Raspbmc itself, when I suddenly, by coincidence, discovered that one of my albums played perfectly. So I started investigating the difference between this album and all the others that didn’t work. The difference was the sample rate, which was only 48 kHz for the working album.

The next step was to downsample a song from 96 kHz to 48 kHz – and this worked as well. So now I’ve downsampled all my albums and can play them all on the Pi. A bit of research led me to SoX, one of the best free tools for downsampling audio – amongst a lot of other things. I use it like this:

sox -S orig96.flac -r 48000 -b 24 conv48.flac

I had to compile it myself, because the version included in my CentOS installation didn’t support FLAC. This was completely straightforward (configure, make, make install), probably because I already had libFLAC installed. The only thing I’m unsure about is whether I’ve missed some option to get the best quality downsampling.
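For whole albums I run the conversion in a loop. Shown here as a dry run that only prints the commands – drop the echo to actually convert (the -48k suffix is just my naming choice):

```shell
# Print the sox command for every FLAC in the current directory; remove
# 'echo' to perform the 96 kHz -> 48 kHz downsampling for real.
for f in *.flac; do
  echo sox -S "$f" -r 48000 -b 24 "${f%.flac}-48k.flac"
done
```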

Synology Photo Station permissions

Getting Photo Station to work on my Synology DiskStation has been quite a pain due to the way permissions are handled. Photo Station basically expects all photo files to be world-readable, i.e. use the default permissions:

drwxrwxrwx    2 myuser   users         4096 Apr 18 19:18 Test

In my setup I have more strict permissions in order to solve these two problems:

  1. World-readable files give anyone with access to the photo share access to all files. I have friends with logins on my Linux server who can use a mounted NFS share to access the files.
  2. Access through UPnP/DLNA gives unlimited access to all files, since there is no privilege control in the protocol. Inviting friends to use your wireless network also invites them to see all your private photos.

So I’ve created a dlna group, containing the admin user, and set the group permission on all my pictures:

drwxr-x---    4 myuser   dlna          4096 Mar 29 22:35 Test
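For reference, this is roughly how the scheme is applied, demonstrated on a scratch tree (on the NAS the path would be the photo share and the group dlna; both are specific to my setup):

```shell
# Recreate the restrictive layout: owner full access, group read-only,
# no access for others. The scratch path stands in for the photo share.
mkdir -p /tmp/photoperm/Test
touch /tmp/photoperm/Test/IMG_0001.JPG
chmod 750 /tmp/photoperm/Test               # drwxr-x---
chmod 640 /tmp/photoperm/Test/IMG_0001.JPG  # -rw-r-----
ls -ld /tmp/photoperm/Test
```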

This approach will completely break Photo Station. To understand why, we must first understand the design a little bit. First of all, we have the scanner (/usr/syno/bin/convert-thumb) which runs as root. This will create all the different versions of the photos in the @eaDir sub directory:

DiskStation> ll
drwxrwxrwx   39 root     root          4096 Mar 11 17:29 .
drwxr-x--x    3 myuser   dlna          4096 Mar 11 17:29 ..
drwxrwxrwx    2 root     root          4096 Feb 10 14:19 IMG_0001.JPG

DiskStation> ll IMG_0001.JPG
drwxrwxrwx    2 root     root          4096 Feb 10 14:19 .
drwxrwxrwx   39 root     root          4096 Mar 11 17:29 ..
-rwxrwxrwx    1 root     root         66199 Feb 10 14:19 SYNOPHOTO:THUMB_B.jpg
-rwxrwxrwx    1 root     root        145956 Feb 10 14:19 SYNOPHOTO:THUMB_L.jpg
-rwxrwxrwx    1 root     root         28540 Feb 10 14:19 SYNOPHOTO:THUMB_M.jpg
-rwxrwxrwx    1 root     root          4830 Feb 10 14:19 SYNOPHOTO:THUMB_S.jpg
-rwxrwxrwx    1 root     root        341283 Feb 10 14:19 SYNOPHOTO:THUMB_XL.jpg

The scanner itself doesn’t have a problem, since it runs as root and will always be able to access the files. However, the created files are all world-readable.

Next, we have the web server running the Photo Station application. This is where the problems start, since the server is run with user nobody/group nobody. This is a clever choice, since the web server should run using an unprivileged user. However, it does give us a bit of a headache, since this user will not have access to anything not being world-readable – which conflicts with our requirement.

A number of attempts to fix this using alternate permissions ultimately failed. I tried the following:

  1. Leave the @eaDir directories with default permissions.
  2. Set all structure directories (not containing pictures) to drwxr-xr-x, which would allow the web server to traverse through all these directories.
  3. Set the last directory (containing the actual pictures) to drwxr-x--x, allowing the web server to access the directory.
  4. Set the picture files to -rwxr-----.

This almost worked. The media server cannot see the directory contents, thus won’t display any pictures (unless allowed by the dlna group permissions). The web server can still access all the thumbnails in @eaDir. However, it won’t be able to display the original picture, since we removed the access to all the original pictures. Also, NFS access is still problematic, since anyone knowing there’s an @eaDir directory inside each directory will have full access to all the scaled down images.
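The @eaDir leak described above is easy to audit with find. A sketch, demonstrated on a scratch tree (on the NAS you would point it at the photo share instead; note that Synology’s BusyBox find may support fewer options):

```shell
# List files under any @eaDir that are still readable by 'others'
# (-perm -004). The scratch tree stands in for the real photo share.
mkdir -p /tmp/eadirdemo/@eaDir
touch '/tmp/eadirdemo/@eaDir/SYNOPHOTO:THUMB_S.jpg'
chmod 644 '/tmp/eadirdemo/@eaDir/SYNOPHOTO:THUMB_S.jpg'
find /tmp/eadirdemo -path '*/@eaDir/*' -type f -perm -004
```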

The solution

The only real solution for this problem, as I see it, is to change the user or group the web server runs as. The web server configuration is stored in /usr/syno/apache/conf/httpd.conf-user. This configuration includes /usr/syno/etc/sites-enabled-user/*.conf, which in my setup is limited to /usr/syno/etc/sites-enabled-user/SYNO.SDS.PhotoStation.conf. Since the web server is only used for the single purpose of running Photo Station, I could simply edit /usr/syno/apache/conf/httpd.conf-user like this, replacing nobody as group:

# If you wish httpd to run as a different user or group, you must run
# httpd as root initially and it will switch.
# User/Group: The name (or #number) of the user/group to run httpd as.
# It is usually good practice to create a dedicated user and group for
# running httpd, as with most system services.
User photostation
Group dlna

This solved the problem, as I could go back to my strict permissions. In case you have multiple virtual hosts and only want to change the user or group for Photo Station, the Apache module apache2-mpm-itk might be interesting. If you manage to compile the module for the Freescale PowerQUICC III MPC8543 CPU, let me know. 🙂 The module should be placed in /usr/syno/apache/modules/.