New SunFlux GU4 MR11 LED bulb

I received twenty of the new GU4/MR11 bulbs from SunFlux a few days ago. Specifications:

  • EAN: 5710777110165
  • Part number: 11016
  • Power: 3 W
  • Color temperature: 2700 K
  • Lumen: 200
  • CRI: >95
  • Beam: 40°
  • Dimmable: Yes
  • Size: Ø35 – L41
  • Life hours: 20,000

I don’t have the equipment to test whether the bulbs live up to the specifications, but at least one of the specifications is incorrect: it’s not an MR11 bulb. Well, sort of. Almost. I’ve never had fitting problems with halogen MR11 bulbs of any brand, but these LED bulbs are too large to fit into my spots. Bummer! A 1 mm difference would probably have made it possible to fit the retaining ring clips into the spot.

I bought the bulbs because SunFlux suddenly, and without notice, changed the specifications for the older GU4/MR11 bulb with EAN 5710777110158/part number 11015. They used to be:

  • Power: 3 W
  • Color temperature: 2750 K
  • Lumen: 180
  • CRI: >96
  • Beam: 40°
  • Dimmable: Yes
  • Size: Ø35 – L43
  • Life hours: 20,000

I actually received one of these, and four bulbs of the new version with these changed specifications:

  • Color temperature: 2700 K
  • Lumen: 200
  • CRI: >92

So that’s how I found out – I didn’t get five identical bulbs, even though they were from the same order. They downgraded the CRI but upgraded the efficiency a tiny bit. These bulbs are ugly, but at least they fit into standard MR11 spots. I’m not sure why they made a new version of the old bulb with almost identical specifications to the new one (except for the lower CRI). We’ll see if part number 11015 is being phased out.

Perhaps time to go looking for a spot that has enough space for this bulb…

Danfoss Living Connect review

I got my new toy yesterday: a Danfoss Link CC and five Danfoss Living Connect RA thermostats. So here are a few comments – compiled from a bit of research and my first impressions of the system.

First of all, these are the products I bought:

  • Danfoss Link CC NSU – 014G0287, EAN 5702425112881. SW version 4.1 (after upgrading it).
  • Living Connect (RA) – 014G0001, EAN 5702420110257. SW version 4.02 (can’t be upgraded), production date November 19th 2016.

I thought about buying the Living Connect Z-Wave version instead – 014G0013. But unfortunately, that version cannot communicate with the Link CC. So you have to make a tough choice between a closed (but working and tested) system and an open system where you have to implement everything from scratch. This software version document states:

  • Version 2.06 – 11-03-2011: The initial software for living connect® – can be controlled by Danfoss Link™ CC or by a 3rd party Z-wave controller.
  • Version 3.02 – 25-05-2012: Standard Z-wave functionality removed, meaning that living connect® no longer can be controlled by a 3rd party controller.

That’s a shame. So if you want something that works and also want to play, you’ll have to buy two entire systems and swap all your thermostats whenever you switch between feeling creative and feeling conservative. Oh, and you can’t upgrade the firmware, no matter which solution you choose.

In my case I actually don’t need to communicate with the thermostats myself. If there had just been an API for the Link CC (and/or a bunch of open Danfoss Cloud web services), a lot of interesting things could have been easy to implement – like telling the system when you are home or almost home, based on anything imaginable: geofences, nearby Wi-Fi networks, etc. Temperature readings from 3rd party software would also have been nice. Please make something like this, Danfoss!

Well, back to reality. The Link CC is actually pretty nice, but I’d like to control the system with my phone, and only with my phone. Or my tablet. Or from a laptop or PC (browser). Anything but another wall-mounted touchscreen in my house. Full functionality in the app would have been nice, along with a variant of the Link CC without a screen – just a small Z-Wave controller with Danfoss software. The energy consumption would have been lower, too. The roughly 2.8 W the Link CC uses is not bad, but not great either.

The thermostats use two AA alkaline batteries. 1.5 V is required, so it’s not possible to use 1.2 V rechargeable (e.g. Eneloop) batteries. I was almost done phasing out all alkaline batteries in the house, even those in remote controls. But now I’m stuck with 10 alkaline batteries again. It’s not a major problem, but it’s a bit annoying and not very “green”.

All communication from the app goes through Danfoss Cloud, which communicates with the Link CC – even when you’re at home and on the same Wi-Fi network as the Link CC. All ports on the Link CC are closed. So if your internet is down, so is your contact with your Danfoss system (from the app). If Danfoss’ servers are down, so is your contact… And if Danfoss goes under or decides to drop support for the Living Connect system, you’ll lose contact with your Danfoss system for good. I would have preferred at least an option to run the system without an external dependency on Danfoss.

Don’t care about power consumption, rechargeable batteries, open APIs, cross-system integration, and companies (or hackers) being able to control your home temperature? Well, those things aside, the system is actually pretty neat. See the app screenshots and google Danfoss Living Connect to read more about the system and the Link CC.

Update: After having used the system for some weeks, I’m almost ready to return the crap. I’ll run a few more tests, but the results so far are discouraging. With my old thermostats I consistently had a cooling (supply/return temperature difference) of 42-43 °C. Now we’re down to 36-37 °C, and the consumption in m³ has increased. This is true whether we use a night-time drop or not. So the system will actually cost you money and have a negative environmental impact, instead of the opposite.

Pear upgrade broken

I haven’t been able to upgrade Horde for a few months:

Error getting channel info from pear.horde.org: Connection to `ssl://pear.horde.org:443' failed:

Today I finally dug into the issue. The short explanation and fix in my case was:

# php -r "print_r(openssl_get_cert_locations());"
Array
(
    [default_cert_file] => /usr/local/ssl/cert.pem
    [default_cert_file_env] => SSL_CERT_FILE
    [default_cert_dir] => /usr/local/ssl/certs
    [default_cert_dir_env] => SSL_CERT_DIR
    [default_private_dir] => /usr/local/ssl/private
    [default_default_cert_area] => /usr/local/ssl
    [ini_cafile] =>
    [ini_capath] =>
)
# ll /usr/local/ssl/cert.pem
ls: /usr/local/ssl/cert.pem: No such file or directory
# cd /usr/local/ssl
# wget --no-check-certificate https://curl.haxx.se/ca/cacert.pem
# mv cacert.pem cert.pem

Problem solved. Completely unrelated to Horde. Might have been caused by a PHP or OpenSSL upgrade at some point, I guess.

Miele@mobile 2.02

By coincidence I found out yesterday that the Miele@mobile app was updated on Google Play back in May. For some reason Pushbullet didn’t notify me, and I’m not allowed to download the app from Google Play myself, since I reside in Denmark and Miele@home is not supported here. Strangely, the only visible change I found in this update since 2.01 was a complete Danish translation. Nice – hopefully this means that Miele@home support in Denmark is not too far away. Update, September 6th 2016: I just found out that the geo-restriction in Google Play is also gone, or at least relaxed to include Denmark. It’s now also possible to select Miele Denmark as the origin for the Miele user account.

Today’s small rant:

  • Why this geoblocking? Within the EU I’m allowed to buy all the hardware in Germany, but the only app that exists for the system, I’m not allowed to use. With my very expensive appliances and Miele@home gateway/modules, I might add. Luckily, on Android there are ways to get the app anyway, but my girlfriend can’t download the app for iPhone and can’t use the system at all.
  • Why not open up the new JSON-RPC API, so better 3rd party apps could be written that don’t have to depend on the old, broken Homebus protocol? This would make the system more useful, since new ways to use the system would arise.

Miele XGW 3000 firmware 2.03

For quite some time, I’ve been holding back a post introducing my Miele@home Android project. So, in the wrong order, here’s an attempt to get in touch with some fellow Miele@home geeks: sharing my findings about the new XGW 3000 firmware upgrade, which was released last week.

My findings only concern the Homebus protocol and the multicast packets, since the new JSON-RPC protocol isn’t for public consumption (or so it seems). So here we go – changes since version 2.0.0:

  • Element ‘type’ in http://<gateway>/homebus/ seems to be working again (e.g. <type>WMV960</type>). I’ve never seen this work since the original firmware of my gateway (1.1.0).
  • Prior to 2.0.3, multicast packets would refer to a ZigBee MAC address (at least when communicating with XKM 3000 Z modules): “id=hdm:ZigBee:<MAC>”. This is now changed to device UID.
  • For some devices numeric values are now included in the http://<gateway>/homebus/device XML:
    <key name="State" value="Running" type="state" raw="5"/>
    <key name="Program" value="Cottons" type="program" raw="1"/>
    <key name="Phase" value="Rinses" type="phase" raw="5"/>
    

    This is true for my WMV 960, but not my TKR 350 WP or H 5581 BP. They all have the XKM 3000 Z module, but the WMV has firmware version 1.16, while the other two are version 1.02 (does anyone know if these modules are firmware-upgradeable and/or where to find a changelog?).
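Those raw codes are easy to pull out with plain shell tools. A sketch, using the exact key sample from above written to a temp file – on a real setup the XML would of course be fetched from the gateway’s /homebus/device URL instead:

```shell
# Sample keys as observed on my WMV 960, saved to a temp file for the demo.
xml=$(mktemp)
cat > "$xml" <<'EOF'
<key name="State" value="Running" type="state" raw="5"/>
<key name="Program" value="Cottons" type="program" raw="1"/>
<key name="Phase" value="Rinses" type="phase" raw="5"/>
EOF

# Extract the raw numeric code for a given key name.
raw_value() {
  grep "name=\"$1\"" "$xml" | sed 's/.*raw="\([0-9]*\)".*/\1/'
}

raw_value State    # prints 5
raw_value Program  # prints 1
```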

Bugs still present:

  • Language parameter not respected for http://<gateway>/homebus/device?language=en. Gateway language is always used.
  • Bizarre values for key “Start Time”. Will update the post on this later.

That’s it for now. I’ll return later with more information about my project as well as rants about the Homebus protocol and Miele’s secrecy and lack of support.

Synology DS cloud broken

The latest version of Synology’s Android app DS cloud has stopped working with the Cloud Station package on my DS508, which is stuck on DSM 4.0. The app was released on Google Play on March 2nd 2016 – version 2.6. The Cloud Station package version is 2.2-3047.

Cloud Station is running on the DS. My Android phones and tablet are shown as offline in the client list. On the devices, DS cloud is shown as running, but no files are synchronized. Synchronization is broken both ways – files are not pushed to the phone, and files updated on the phone are not pushed to the DS.

To fix this I went to apkpure.com and downloaded version 2.5 of the app. Before installing it, the existing app has to be uninstalled, so reconfiguration is needed.

I created a support ticket at synology.com, so they are informed.

Update, March 10th 2016: Synology reported back to me that they had analyzed my logs and found the problem: The database was corrupted. So it must be a bug in the database upgrade in version 2.6 of the app. After clearing the app data and reconfiguring it works perfectly.

Building PHP after installing MySQL 5.7

I got the following error when trying to build PHP 5.6.14/5.6.15 after upgrading to MySQL 5.7:

configure: error: Cannot find libmysqlclient_r under /usr/local/mysql.
Note that the MySQL client library is not bundled anymore!

I added this to my build script in order to create the missing symbolic links:

cd /usr/local/mysql/lib
# Create a libmysqlclient_r twin for each client library file
for f in libmysqlclient.so*; do ln -s "$f" "$(echo "$f" | sed s/libmysqlclient/libmysqlclient_r/)"; done
ln -s libmysqlclient.a libmysqlclient_r.a
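The renaming logic can be sanity-checked without touching the real MySQL directory by exercising the same loop in a throwaway directory – the .so.20 suffix below is just an example soname, not necessarily what your MySQL build ships:

```shell
# Dry run of the symlink loop against dummy files in a temp dir.
tmp=$(mktemp -d)
cd "$tmp"
touch libmysqlclient.a libmysqlclient.so libmysqlclient.so.20

for f in libmysqlclient.so*; do
  ln -s "$f" "$(echo "$f" | sed s/libmysqlclient/libmysqlclient_r/)"
done
ln -s libmysqlclient.a libmysqlclient_r.a

ls -1 libmysqlclient_r*   # the _r names now exist as symlinks
```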

Denon receiver plugin for Yatse

I probably made my most niche thing ever today: a plugin for an Android app. I found out yesterday that a new API for Yatse had been made available, so today I took on the challenge and created a plugin for my Denon A/V receiver. I can now control the volume on my receiver directly from Yatse. In a few days I might clean up the code and finish it, so it can be released either here or on Google Play.

Update, August 8th 2015: I’ve just released the plugin!
Get it on Google Play.

Raspbmc and NFS permissions

NFS is often used for accessing network shares from Raspbmc, due to its low overhead. Many tutorials describe how to set this up, for example with a Synology NAS. One subject isn’t covered much, though: A setup with restrictive permissions.

On my Synology, my media files are usually owned by me and have ‘dlna’ as their group, with read-only permissions for the group. Example:

drwxr-x---  9 jacob dlna   4096 Jun 15  2012 video

On the NAS I have created a user for Raspbmc and made it a member of this group:

DiskStation> cat /etc/group | grep dlna
dlna:x:65536:admin,jacob,raspbmc
DiskStation> cat /etc/passwd | grep raspbmc
raspbmc:x:1046:100:Raspberry Pie XBMC:/var/services/homes/raspbmc:/sbin/nologin

To get this to work on the Raspberry Pi, I first had to synchronize the GIDs/UIDs. Since the NAS is the master, I did this on the Raspberry Pi (logged in as the pi user):

pi@raspbmc:~$ sudo groupadd -g 65536 dlna
pi@raspbmc:~$ sudo usermod -a -G dlna pi

After enabling root access, I changed the pi user to match the UID of the raspbmc user on the NAS:

root@raspbmc:~# usermod -u 1046 pi

(For this to work I had to kill a number of processes first)

Changing the UID of the pi user will cause a lot of trouble for Raspbmc, which expects the user to have UID 1000. This is hardcoded in at least two scripts:

  • /opt/xbmc-bcm/xbmc-bin/share/xbmc/addons/script.raspbmc.settings/default.py
  • /opt/xbmc-bcm/xbmc-bin/share/xbmc/addons/script.raspbmc.settings/autostart.py

Fix this by adding:

sed -i 's/getpwuid(1000)/getpwuid(1046)/g' /opt/xbmc-bcm/xbmc-bin/share/xbmc/addons/script.raspbmc.settings/default.py
sed -i 's/getpwuid(1000)/getpwuid(1046)/g' /opt/xbmc-bcm/xbmc-bin/share/xbmc/addons/script.raspbmc.settings/autostart.py

to /etc/rc.local, so the scripts are automatically fixed during startup. Replace 1046 with your pi user’s UID.

Without this fix, automatic updates won’t work, you’ll see script errors during startup, and you can’t launch the Raspbmc settings.
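The substitution itself is easy to verify on a throwaway file before letting rc.local loose on the real scripts (1046 is my UID from the NAS; yours may differ):

```shell
# Exercise the sed replacement on a dummy file first.
f=$(mktemp)
echo 'user = pwd.getpwuid(1000)[0]' > "$f"
sed -i 's/getpwuid(1000)/getpwuid(1046)/g' "$f"
cat "$f"   # user = pwd.getpwuid(1046)[0]
```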

httpd.pid gone missing

For quite some time I’ve had a problem with my httpd.pid file going missing every now and then. Whenever this happens, I can’t use the init script to start and stop the server anymore, which is pretty annoying. For some reason I had learned to live with this – I would just ‘killall httpd’, start the server again, and httpd.pid would be back for a while, until the problem reappeared.

Finally I decided to do something about it. I added this small script to my hourly cron jobs:

if [ ! -e "/usr/local/apache2/logs/httpd.pid" ]; then
        echo "httpd.pid missing"
fi

A few weeks later this trap was finally triggered – less than an hour after my daily logrotate, which produced this output:

Starting httpd: [  OK  ]
Stopping httpd: [  OK  ]
(repeated a number of times...)
Starting httpd: [  OK  ]
Stopping httpd: [FAILED]
Starting httpd: (98)Address already in use: AH00072: make_sock: could not bind to address [::]:80
(98)Address already in use: AH00072: make_sock: could not bind to address 0.0.0.0:80
no listening sockets available, shutting down
AH00015: Unable to open logs
[FAILED]
error: error running postrotate script for /usr/local/apache2/logs/*_log 

This was not the first time I had seen this, but it was the first time I realized that the logrotate problem was causing the httpd.pid problem, and not the other way around.

So I checked my /etc/logrotate.d/apache config file:

/usr/local/apache2/logs/*_log {
    rotate 1024
    size 4M
    notifempty
    postrotate
        /etc/init.d/httpd restart
    endscript
}

After a few minutes of reading the logrotate man page, I realized what was wrong (I had a hunch, though): the restart between the postrotate/endscript directives was performed once for each logfile, i.e. multiple times, instead of just a single time after the last one was rotated. This is bad enough in itself, but the restarts would also happen asynchronously, creating a mess. The problem was easily solved with the sharedscripts directive. Also, a simple “reload” instead of “restart” is sufficient to make httpd reopen the logfiles and thus create new file handles.

After fixing these two problems, the file ended up like this:

/usr/local/apache2/logs/*_log {
    rotate 1024
    size 4M
    notifempty
    sharedscripts
    postrotate
        /etc/init.d/httpd reload
    endscript
}

I’m fully expecting this to solve the problem once and for all.