Synology DS cloud broken

The latest version of Synology’s Android app DS cloud stopped working with the Cloud Station package on my DS508, which is stuck with DSM 4.0. The app was released on Google Play on March 2nd 2016 – version 2.6. The Cloud Station package version is 2.2-3047.

Cloud Station is running on the DS. My Android phones and tablet are shown as offline in the client list. On the devices DS cloud appears to be running, but no files are synchronized. Synchronization is broken both ways – files are not pushed to the phone, and files updated on the phone are not pushed to the DS.

To fix this I downloaded version 2.5 of the app. Before installing it, the existing app has to be uninstalled, so reconfiguration is needed.

I created a support ticket with Synology, so they are informed.

Update, March 10th 2016: Synology reported back to me that they had analyzed my logs and found the problem: The database was corrupted. So it must be a bug in the database upgrade in version 2.6 of the app. After clearing the app data and reconfiguring it works perfectly.

Building PHP after installing MySQL 5.7

I got the following error when trying to build PHP 5.6.14/5.6.15 after upgrading to MySQL 5.7:

configure: error: Cannot find libmysqlclient_r under /usr/local/mysql.
Note that the MySQL client library is not bundled anymore!

I added this to my build script in order to create the missing symbolic links:

cd /usr/local/mysql/lib
for f in libmysqlclient.so*; do ln -s $f $(echo $f | sed s/libmysqlclient/libmysqlclient_r/); done
ln -s libmysqlclient.a libmysqlclient_r.a
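Since a wrong substitution here just produces dangling symlinks that fail later in the PHP build, it can be worth dry-running the loop on a scratch directory first. A sketch – demo_lib and the two file names are made up; note that globbing libmysqlclient* covers the .a file as well, so the demo skips the extra explicit link:

```shell
# Dry run of the rename loop on a throwaway directory, so the sed
# substitution can be sanity-checked before touching /usr/local/mysql/lib.
mkdir -p demo_lib
: > demo_lib/libmysqlclient.a      # stand-ins for the real client libraries
: > demo_lib/libmysqlclient.so.20
(cd demo_lib
for f in libmysqlclient*; do
    # each libmysqlclient.* gets a libmysqlclient_r.* symlink pointing at it
    ln -s "$f" "$(echo "$f" | sed s/libmysqlclient/libmysqlclient_r/)"
done)
ls demo_lib
```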

Denon receiver plugin for Yatse

I probably made my most niche thing ever today: A plugin for an Android app. I found out yesterday that a new API for Yatse was made available, so today I took on the challenge, and created a plugin for my Denon A/V receiver. I can now control the volume on my receiver directly from Yatse. In a few days I might clean the code up and finish it, so it can be released either here or on Google Play.

Update, August 8th 2015: I’ve just released the plugin!
Get it on Google Play.

Raspbmc and NFS permissions

NFS is often used for accessing network shares from Raspbmc, due to its low overhead. Many tutorials describe how to set this up, for example with a Synology NAS. One subject isn’t covered much, though: A setup with restrictive permissions.

On my Synology my media files are usually owned by me and have ‘dlna’ as group with read-only permissions. Example:

drwxr-x---  9 jacob dlna   4096 Jun 15  2012 video

On the NAS I have created a user for Raspbmc and made it a member of this group:

DiskStation> cat /etc/group | grep dlna
DiskStation> cat /etc/passwd | grep raspbmc
raspbmc:x:1046:100:Raspberry Pie XBMC:/var/services/homes/raspbmc:/sbin/nologin

To get this to work on the Raspberry I first had to synchronize the GIDs/UIDs. Since the NAS is the master, I did this on the Raspberry (logged in as the pi user):

pi@raspbmc:~$ sudo groupadd -g 65536 dlna
pi@raspbmc:~$ sudo usermod -a -G dlna pi

After enabling root access, I’ve changed the pi user to match the UID of the raspbmc user on the NAS:

root@raspbmc:~# usermod -u 1046 pi

(For this to work I had to kill a number of processes first)
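To confirm the IDs really line up afterwards, a small check along these lines can help. check_ids is my own helper name, not part of Raspbmc, and 1046/dlna are the values from this setup:

```shell
# check_ids USER UID GROUP: verify that USER has the given numeric UID and
# is a member of GROUP. Prints "in sync" on success, "NOT in sync" otherwise.
check_ids() {
    if [ "$(id -u "$1")" = "$2" ] && id -nG "$1" | grep -qw "$3"; then
        echo "in sync"
    else
        echo "NOT in sync"
    fi
}

# On the Raspberry, with the values used above:
#   check_ids pi 1046 dlna
```

A mismatch here means NFS will still map the Raspberry to the wrong identity on the NAS.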

Changing UID of the pi user will cause a lot of trouble for Raspbmc, which expects the user to have UID 1000. This is hardcoded in at least two scripts:

  • /opt/xbmc-bcm/xbmc-bin/share/xbmc/addons/script.raspbmc.settings/
  • /opt/xbmc-bcm/xbmc-bin/share/xbmc/addons/script.raspbmc.settings/

Fix this by adding:

sed -i 's/getpwuid(1000)/getpwuid(1046)/g' /opt/xbmc-bcm/xbmc-bin/share/xbmc/addons/script.raspbmc.settings/
sed -i 's/getpwuid(1000)/getpwuid(1046)/g' /opt/xbmc-bcm/xbmc-bin/share/xbmc/addons/script.raspbmc.settings/

to /etc/rc.local so the scripts are automatically fixed during startup. Replace 1046 with your pi UID.
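The substitution itself can be tested safely on a scratch copy before trusting it in /etc/rc.local. demo_settings.py below is a stand-in for the real Raspbmc script, not its actual filename:

```shell
# Create a one-line stand-in for the Raspbmc settings script and apply the
# same sed substitution that rc.local will run at boot.
printf 'user = getpwuid(1000)\n' > demo_settings.py
sed -i 's/getpwuid(1000)/getpwuid(1046)/g' demo_settings.py
cat demo_settings.py   # the UID 1000 lookup should now reference 1046
```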

Without this fix automatic updates won’t work, you’ll see script errors during startup – and you can’t launch the Raspbmc settings.

httpd PID file gone missing

For quite some time I’ve been having a problem with my httpd PID file going missing every now and then. Whenever this happens, I can’t use the init script to start and stop the server anymore, which is pretty annoying. For some reason I had learned to live with this – I would just ‘killall httpd’, start the server again, and the PID file would be back for some time, until the problem reappeared.

Finally I decided to do something about it. I added this small script to my hourly cron jobs:

if [ ! -e "/usr/local/apache2/logs/" ]; then
        echo " missing"
fi
A few weeks later this trap was finally triggered – less than an hour after my daily logrotate, which produced this output:

Starting httpd: [  OK  ]
Stopping httpd: [  OK  ]
(repeated a number of times...)
Starting httpd: [  OK  ]
Stopping httpd: [FAILED]
Starting httpd: (98)Address already in use: AH00072: make_sock: could not bind to address [::]:80
(98)Address already in use: AH00072: make_sock: could not bind to address
no listening sockets available, shutting down
AH00015: Unable to open logs
error: error running postrotate script for /usr/local/apache2/logs/*_log 

This was not the first time I had seen this, but it was the first time I realized that the logrotate failure was causing my problem, and not the other way around.

So I checked my /etc/logrotate.d/apache config file:

/usr/local/apache2/logs/*_log {
    rotate 1024
    size 4M
    postrotate
        /etc/init.d/httpd restart
    endscript
}

After a few minutes of reading the logrotate man page, I realized what was wrong (I had a hunch, though): The restart between the postrotate/endscript directives was performed for each logfile, i.e. multiple times, instead of just once after the last one was rotated. This in itself is pretty bad, but it would also happen asynchronously, thus creating a mess. This problem was easily solved using the sharedscripts directive. Also, a simple “reload” instead of “restart” is sufficient to make httpd reopen the logfiles and create new file handles.
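The per-file behavior is easy to convince yourself of with a toy loop – purely illustrative, it just mimics logrotate’s calling pattern with a counter instead of an httpd restart:

```shell
# Without sharedscripts, logrotate runs the postrotate script once per
# matching logfile; with sharedscripts, once per config block. Simulated
# here for three hypothetical logfiles matching *_log:
restarts=0
for log in access_log error_log ssl_request_log; do
    restarts=$((restarts + 1))   # one "restart" per rotated file
done
echo "without sharedscripts: $restarts restarts"
echo "with sharedscripts: 1 restart"
```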

After fixing these two problems, the file ended up like this:

/usr/local/apache2/logs/*_log {
    rotate 1024
    size 4M
    sharedscripts
    postrotate
        /etc/init.d/httpd reload
    endscript
}

I’m fully expecting this to solve the problem once and for all.

Playing multi-channel FLACs on Raspbmc

About a year ago I bought a Raspberry Pi and installed Raspbmc. I wanted to use this cheap little gadget as a media center, filling some of the holes in my existing home entertainment setup. One of the things I was hoping to get out of it was the ability to play multi-channel (5.1) FLACs through HDMI to my surround receiver. However, I never got this to work. The same goes for the primary goal – being able to play back my DVD collection from ISO files, but that’s another story…

The setup:

  1. The FLACs are 24-bit with a sample rate of 96 kHz.
  2. I’m using NFS for efficient file transfer from my NAS.
  3. The average bitrate for the files is below 10 Mbps.

Raspbmc will try to play the files, but immediately chokes or freezes. I’m ruling out network limitations, since the Pi is wired and able to stream 1080p video at higher bitrates. I had completely given up, thinking it was a shortcoming of Raspbmc itself, when I suddenly, by coincidence, discovered that one of my albums played perfectly. So I started investigating the difference between this album and all the others that didn’t work. The difference was the sample rate, which was only 48 kHz for the working album.

The next step was to downsample a song from 96 kHz to 48 kHz – and this turned out well too. So now I’ve downsampled all my albums and can play them all on the Pi. A bit of research led me to SoX, one of the best free tools for downsampling audio – amongst a lot of other things. I use it like this:

sox -S orig96.flac -r 48000 -b 24 conv48.flac
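Converting a whole library one file at a time gets old fast, so the single-file command extends naturally to a loop. This is a sketch – downsample_all, DRY_RUN and the demo paths are my own names, not part of SoX:

```shell
# Batch-downsample every FLAC under a source tree to 48 kHz / 24-bit copies
# in a destination tree, preserving the directory structure.
downsample_all() {
    src=$1 dst=$2
    find "$src" -name '*.flac' | while read -r f; do
        out=$dst/${f#"$src"/}
        mkdir -p "$(dirname "$out")"
        if [ "${DRY_RUN:-1}" = 1 ]; then
            echo sox -S "$f" -r 48000 -b 24 "$out"   # dry run: show the command
        else
            sox -S "$f" -r 48000 -b 24 "$out"
        fi
    done
}

# Demo on a throwaway tree (dry run, so sox is never actually invoked):
mkdir -p demo/album
: > demo/album/track01.flac
downsample_all demo demo48
```

Set DRY_RUN=0 once the printed commands look right.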

I had to compile it myself, because the version included in my CentOS installation didn’t support FLAC. This was completely straightforward (configure, make, make install), probably because I already had libFLAC installed. The only thing I’m unsure about is whether I’ve missed some option to get the best quality downsampling.

Synology Photo Station permissions

Getting Photo Station to work on my Synology DiskStation has been quite a pain due to the way permissions are handled. Photo Station basically expects all photo files to be world-readable, i.e. use the default permissions:

drwxrwxrwx    2 myuser   users         4096 Apr 18 19:18 Test

In my setup I have stricter permissions in order to avoid these two problems:

  1. World-readable files will give anyone with access to the photo share access to all files. I have friends with logins on my Linux server, who can use a mounted NFS share to access the files.
  2. Access through UPnP/DLNA will give unlimited access to all files, since there is no privilege control in the protocol. Inviting friends to use your wireless network will also invite them to see all your private photos.

So I’ve created a dlna group, containing the admin user, and set the group permission on all my pictures:

drwxr-x---    4 myuser   dlna          4096 Mar 29 22:35 Test
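For reference, applying such a scheme recursively looks roughly like this. It’s a sketch against a throwaway demo_photos tree – the real target would be the photo share, and the commented-out chgrp would actually run on the NAS:

```shell
TARGET=${TARGET:-demo_photos}            # placeholder for the real photo share
mkdir -p "$TARGET/Test"                  # throwaway tree mirroring the example
: > "$TARGET/Test/IMG_0001.JPG"
# chgrp -R dlna "$TARGET"                # on the NAS: hand the tree to the dlna group
chmod -R o-rwx "$TARGET"                                 # drop all world access
find "$TARGET" -type d -exec chmod u+rwx,g+rx,g-w {} +   # dirs: group may list/traverse
find "$TARGET" -type f -exec chmod u+rw,g+r,g-wx {} +    # files: group read-only
ls -ld "$TARGET/Test"                    # should now show drwxr-x---
```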

This approach will completely break Photo Station. To understand why, we must first understand the design a little bit. First of all, we have the scanner (/usr/syno/bin/convert-thumb) which runs as root. This will create all the different versions of the photos in the @eaDir sub directory:

DiskStation> ll
drwxrwxrwx   39 root     root          4096 Mar 11 17:29 .
drwxr-x--x    3 myuser   dlna          4096 Mar 11 17:29 ..
drwxrwxrwx    2 root     root          4096 Feb 10 14:19 IMG_0001.JPG

DiskStation> ll IMG_0001.JPG
drwxrwxrwx    2 root     root          4096 Feb 10 14:19 .
drwxrwxrwx   39 root     root          4096 Mar 11 17:29 ..
-rwxrwxrwx    1 root     root         66199 Feb 10 14:19 SYNOPHOTO:THUMB_B.jpg
-rwxrwxrwx    1 root     root        145956 Feb 10 14:19 SYNOPHOTO:THUMB_L.jpg
-rwxrwxrwx    1 root     root         28540 Feb 10 14:19 SYNOPHOTO:THUMB_M.jpg
-rwxrwxrwx    1 root     root          4830 Feb 10 14:19 SYNOPHOTO:THUMB_S.jpg
-rwxrwxrwx    1 root     root        341283 Feb 10 14:19 SYNOPHOTO:THUMB_XL.jpg

The scanner itself doesn’t have a problem, since it runs as root and will always be able to access the files. However, the created files are all world-readable.

Next, we have the web server running the Photo Station application. This is where the problems start, since the server is run with user nobody/group nobody. This is a clever choice, since the web server should run using an unprivileged user. However, it does give us a bit of a headache, since this user will not have access to anything not being world-readable – which conflicts with our requirement.

A number of attempts to fix this using alternate permissions ultimately failed. I tried the following:

  1. Leave the @eaDir directories with default permissions.
  2. Set all structure directories (not containing pictures) to drwxr-xr-x, which would allow the web server to traverse through all these directories.
  3. Set the last directory (containing the actual pictures) to drwxr-x--x, allowing the web server to access the directory.
  4. Set the picture files to drwxr-----.

This almost worked. The media server cannot see the directory contents, thus won’t display any pictures (unless allowed by the dlna group permissions). The web server can still access all the thumbnails in @eaDir. However, it won’t be able to display the original picture, since we removed the access to all the original pictures. Also, NFS access is still problematic, since anyone knowing there’s an @eaDir directory inside each directory will have full access to all the scaled down images.

The solution

The only real solution for this problem, as I see it, is to change the user or group the web server runs as. The web server configuration is stored in /usr/syno/apache/conf/httpd.conf-user. This configuration includes /usr/syno/etc/sites-enabled-user/*.conf, which in my setup is limited to /usr/syno/etc/sites-enabled-user/SYNO.SDS.PhotoStation.conf. Since the web server is only used for the single purpose of running Photo Station, I could simply edit /usr/syno/apache/conf/httpd.conf-user like this, replacing the nobody user and group:

# If you wish httpd to run as a different user or group, you must run
# httpd as root initially and it will switch.
# User/Group: The name (or #number) of the user/group to run httpd as.
# It is usually good practice to create a dedicated user and group for
# running httpd, as with most system services.
User photostation
Group dlna

This solved the problem, as I could go back to my strict permissions. In case you have multiple virtual hosts and only want to change the user or group for Photo Station, the Apache module apache2-mpm-itk might be interesting. If you manage to compile the module for the Freescale PowerQUICC III MPC8543 CPU, let me know. 🙂 The module should be placed in /usr/syno/apache/modules/.

Permanent redirection to default host in Apache

A few weeks ago at work, I needed to rename a webhost. To avoid breaking a lot of links to the old hostname, I set up permanent redirection, but ran into an infinite loop. I tried both a simple Redirect statement like this:

Redirect permanent / http://newhost/

And the same thing using mod_rewrite. I have done this many times before at home, but what was special about this case is that the web server was set up as default host. So I just added a virtual host with the old hostname. Eventually I got it to work with mod_rewrite by using a RewriteCond statement to break the loop:

<VirtualHost *:80>
ServerName oldhost

<IfModule rewrite_module>
RewriteEngine on
RewriteCond %{HTTP_HOST} oldhost
RewriteRule ^/(.*)$ http://newhost/$1 [R=permanent,L]
</IfModule>
</VirtualHost>
However, I’m still not sure why this is necessary, since the first rewritten URL should end up at the default host, which doesn’t rewrite anything.

Building Courier-Authlib 0.65.0 on CentOS 5

Today I wanted to upgrade Courier-Authlib from 0.63.0 and read this in the ChangeLog:

2010-03-06 Sam Varshavchik

* Remove the bundled libltdl library. Require the system-installed
libltdl library.

As expected, this gave me some problems with my old CentOS 5.9 release:

/bin/sh ./libtool --tag=CC --mode=link gcc -g -O2 -Wall -I.. -I./.. -export-dynamic -dlopen -dlopen -dlopen -dlopen -dlopen -o authdaemondprog authdaemond.o libltdl/ liblock/ libhmac/ md5/ sha1/ rfc822/ numlib/ -ldl
libtool: link: cannot find the library `libltdl/' or unhandled argument `libltdl/'
make[2]: *** [authdaemondprog] Error 1
make[2]: Leaving directory `/usr/local/src/courier-authlib-0.65.0'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/usr/local/src/courier-authlib-0.65.0'
make: *** [all] Error 2

I fixed this by adding this line to my build script (after configure):
sed -i -e 's/^LIBLTDL = ${top_build_prefix}libltdl\/ = -lltdl/' Makefile

Update, February 15th 2013: Today I found out that in a freshly installed VMware machine with CentOS 5.9, ltdl was missing. So I needed to add the following in my pre-build script as well:
rpm -q libtool-ltdl >/dev/null
if [ $? != 0 ]; then yum -q -y install libtool-ltdl; fi
rpm -q libtool-ltdl-devel >/dev/null
if [ $? != 0 ]; then yum -q -y install libtool-ltdl-devel; fi

Update, December 7th 2015: I wanted to build 0.66.4 today, and the problem reappeared. Luckily, I found this posting which saved me some time. Updated ‘sed’ command:
sed -i -e 's/^LIBLTDL = $(top_build_prefix)libltdl\/ = -lltdl/' Makefile

The downfall of Samsung

First of all, I’m a big Samsung customer. I own two Samsung LCD TVs, a PC monitor, a cellphone, a Blu-ray player and a hard drive. Heck, I even own a Samsung vacuum cleaner. I don’t own any Apple products. So this is not about the ongoing Samsung vs. Apple patent war, and it’s not me dissing Samsung because I’m an Apple fanboy. It’s about me being a critical, but fair, customer.

In January I bought a top-of-the-line 55″ LED Smart TV – UE55D8005. This is a very nice TV, but the “Smart” also makes it a computer – a computer that needs software.

Next, in June I also bought a smartphone, Galaxy S3 – finally making the switch from my ancient HTC Desire (that became useless because of the lack of internal memory – but that’s a completely different story).

I’m relatively satisfied with both products. However, I’m not at all impressed by Samsung’s understanding of software and product life cycles. It seems that Samsung abandons software support for a specific model almost before the last item has shipped.

The TV came with a Galaxy 5″ Wi-Fi tablet. This tablet came with an app called Samsung Smart View – a nice little app to control the TV and even stream video from the TV to the tablet. However, this app is not compatible with the Samsung Galaxy S3, their flagship smartphone. It’s been four months now, so why is this still not working? I contacted Samsung about this issue, but didn’t get any useful answers. I asked four times before I even got confirmation that they were aware of the issue – their support is completely broken.

Yesterday Netflix was introduced in Denmark. So I also asked Samsung how to get the Netflix application for the TV back – I couldn’t find it anywhere, but I knew that the app exists for my model and works with the American Netflix. Their support couldn’t help me with this, but told me that the latest TV models (E models) would get an update today or tomorrow.

What’s the point of all this? Samsung simply don’t get it. Not being able to integrate a top model of their TV’s with the current top model smartphones is ridiculous. There are so many reasons why this would make sense. Just to name a few features that would be nice to have on the phone:

  • Remote control.
  • Using the phone as a keyboard.
  • Scheduling timed recordings.
  • Automatic pause/time shifting when the phone rings.

The TV also integrates with services on the net, for example YouTube. Samsung cannot just abandon the firmware once a new TV model has been released – and render “old” models useless when services change and need software upgrades. At least not if they want customers to stick around. I don’t get it. Is the logic that I will buy a new TV only nine months after buying a 2.000 € TV – only to get the latest software? If this is the case, they are doomed – I’d never buy another Samsung Smart TV after experiencing a complete lack of support and upgrades once.

Then there’s the poor quality of the software. I prefer Android to iOS because of its open nature. But everything Samsung has built on top of Android sucks. Period. Just to name a few:

  • TouchWiz: Well, this is actually decent, but has some stupid bugs – like folders opening on their own. So annoying.
  • Calendar: First of all, it’s ugly. When creating new events, it always defaults to “Samsung Calendar”. Who would prefer Samsung Calendar to Google Calendar – and what is Samsung Calendar? What’s up with the up/down arrows when setting date and time – why not use a scroll wheel? And how about some nicer widgets for the calendar?
  • ChatON, Samsung Apps, S Suggest: Who cares about these things?
  • Sometimes it wants me to connect to my Samsung account, but doesn’t say why. If the wrong password is typed in, the application exits and prompts for both username and password again. They shouldn’t release software that works like this.

To summarize, Samsung is a hardware company in a software world. They have no talent for writing software whatsoever, and they don’t even manage to support and integrate their own products. Apple and Google get this. This is why I believe Samsung will have a very hard time once the competition is ready to take them out. Like Google killed Altavista in the late 1990s. Like Netflix killed Blockbuster. I know I’m ready for an alternative to both Apple and Samsung.