Child Wrangling

When I go on a long work trip, I often end up buying some books, because it is one of the rare times that I get to selfishly spend uninterrupted hours just reading. In September, I had a trip where I picked up a couple of parenting books.

My kids are getting bigger, and while at the moment I can get them to go where I need them to go by picking them up and taking them there, this is not sustainable. When we had babies, I read a bunch of books about how to get through that stage, but I hadn’t educated myself on parenting primary-school-age children. So, I picked two best-selling titles that seemed to have differing perspectives, and figured by reading both I would get a good coverage of the space. Now, by writing about them here, I am forced to understand them well enough to explain them.

The first book was 1-2-3 Magic by Thomas W Phelan. It is all about how to improve the behaviour of children 2-12yo through “effective discipline”, and is currently rated 4.7 out of 5 stars on Amazon (139 reviews). It is written by a child psychologist and is an easy read. I would say that this book has a basic assumption that children are happy and well behaved when they know what behaviour is required of them.

The second book was Calmer, Easier, Happier Parenting by Noel Janis-Norton. It is all about how to improve the behaviour of children 3-13yo through “five strategies” and is currently rated 4.5 out of 5 stars on Amazon (27 reviews). It is written by a child educator and is a comprehensive theory and practice for child-raising. This book has a basic assumption that children can work out what they are supposed to do, and will do the right things when they are supported appropriately and when doing the wrong things no longer works.

I seem to recall that I was a near-perfect child. So, my memories of how my own parents raised me should not be relied upon, and I find that I need to come up with things that suit my kids. Hopefully they will look back and think they were near-perfect as well.

Despite taking different approaches, the two books do agree on some aspects. There are five common strategies that I have noticed, and they seem reasonably sensible:

  1. Don’t ignore bad behaviour.
  2. Stay calm and don’t shout.
  3. Always follow through.
  4. Spend quality time with each child.
  5. All caregivers in a house act consistently.

However, there is probably more that they disagree about than they agree, as you may guess from their differing assumptions about children’s behaviours. In addition to the above five common strategies, Phelan’s book proposes two fundamental techniques for achieving household happiness:

  1. Impose time-outs for repeated bad behaviour.
  2. Establish everyday routines.

Of course, the book has plenty more detail around how to do this. In particular the title of the book refers to counting instances of bad behaviour, and putting a child into time-out when the third count is reached.

On the other hand, Janis-Norton’s book has different fundamental techniques that support a range of parenting strategies:

  1. Train children to want parental praise and recognition.
  2. Teach them how to verbalise thoughts and emotions.

Hers is a very thorough book, going into numerous examples over its 400+ pages. However, it doesn’t include any examples of disciplining children – at least not in a traditional way. Looking on the Internet, it seems this sort of approach is also known as positive discipline, and there are other authors out there that promote it. Janis-Norton many times states that she knows it may seem unbelievable that this could work, but reassures the reader that it does.

I haven’t decided yet how to put any of this into practice, but I feel now better equipped with a bunch of parental tools that I hope will make life easier and more sustainable. And if I don’t have to pick up and move children any more, my back will be thankful.

Wrist Computers

At some point in the last century, a strange thing happened: people took something that they’d been happy to carry around in their pockets for centuries and started to wear it on their wrist. Why?

I have just bought myself a smartwatch, and it’s got me thinking about this. A smartwatch is typically what a 1980s calculator watch would be if someone invented it today. Because that’s basically what 99% of them are. Not calculator watches, of course, but stick with me for a bit. In the 1980s, the most computing power an ordinary person could carry around in their pocket was a calculator, so people tried to put a tiny version of it on their wrist. These days, the most computing power an ordinary person can carry around in their pocket is a smartphone, so people are trying to put a tiny version of it on their wrists.

That said, you may not be too surprised to hear that the smartwatch I bought was part of the 1% that aren’t like that. It is a Withings Activité Pop, which is an analog watch that happens to also talk to my smartphone using Bluetooth. Withings isn’t the only maker of this sort of smartwatch – for example, you can also get a Martian watch, which takes a similar approach to being “smart”. I expect other watch makers will put chips in their watches and it will become pretty normal soon.

I am really loving my Withings smartwatch. It automatically updates the time when daylight savings changes or when I travel into a different timezone. It has a pedometer inside it, and shows me my progress towards my daily step target on a dial on the face. It also has a bunch of other features, and sometimes gets new ones that appear for free, like tracking swimming strokes. But most of all, it looks good, is light on my wrist, and has a battery life of over 8 months. While these are expected features of a normal watch, they are rather novel in a smartwatch.

As a result, smartwatches haven’t really taken off yet in the way that, say, FitBit fitness trackers have. Is the smartwatch market destined for greatness or niche-ness?

Perhaps the history of the pocket watch has some relevant lessons, for which I will be drawing heavily on Wikipedia. The wearable watch was a 16th century innovation, beginning as a clock-on-a-pendant with only an hour hand. Some 17th century improvements brought the glass-covered face and the minute hand, and they became regularly carried in (waist coat) pockets at this time. It took until late in the 18th century for the pocket watch to move beyond a pure luxury item.

Pocket watches continued to be the dominant form of watch, at least for men, until the late 19th century, when the “wristlet” (we know it better as the wrist watch) came along. The British Army began issuing them to servicemen in 1917, when synchronising the creeping barrage tactic between infantry and artillery was important and pocket watches were impractical. Reading the time at a glance was probably the first “killer app”, and by 1930, the ratio of wrist to pocket watches was 50 to 1. Within a couple of decades, the pocket watch had been completely disrupted.

While it was more convenient to read the time on a wrist watch than a pocket watch, it was also awkward to wear a heavy thing on a wrist, and the wrist watch was considered more of a women’s fashion item. In the end, World War I forced the issue, eliminating the fashion consideration, and the convenience factor overcame the weight problem.

Coming back to the present, UK mobile operator O2 published a report called “All About You” in 2012 that noted 46% of respondents had dispensed with a watch in favour of using their smartphone to check the time. It seems the greater utility of a smartphone has led people to forgo their watches, even if it means that time has gone back into the pocket.

So, there’s an argument that if the smartwatch provided similar utility to the smartphone, people would again shift from the pocket to the wrist. My Withings watch doesn’t in any way substitute for my smartphone, and is really a smartphone accessory. However, something like an LG Urbane Second Edition watch runs Android and has an LTE connection for calls and texting, and is more powerful than even a smartphone of a few years ago. Speech recognition can make up for the lack of keyboard entry, and a Bluetooth headset can enable private conversations.

However, economically a smartphone is actually a games platform, and games dominate the revenues from apps on smartphones. Making the smartwatch a viable games platform may be required for it to replace smartphones. Even in the 1980s, there were attempts to create games for the wrist, but they weren’t enormously successful compared to the Game & (pocket) Watch versions. Admittedly, there are games for modern smartwatches. However, they drain the battery and aren’t the same calibre as smartphone games.

If we measure the period of the smartphone since 2002, when Nokia introduced Series60 handsets, it has been with us for 13 years. The pocket watch, from invention to disruption, lasted 400 years, but declined due to the rise of the wrist watch in the last 50 of those years. If the smartwatch disrupted the smartphone at the same relative speed – over the last eighth of its lifetime – it would need less than 2 years.

All I can say is: watch this space.

Windows 10 on Raspberry Pi 2

Windows 10 IoT Core on Raspberry Pi 2

I was one of those who ordered the Raspberry Pi 2, when it was announced back in February 2015, off the back of the claims that it would run Windows 10. Not the full desktop version of Windows 10 of course, but a version for simpler devices. Still, it impressed me that here was a $36 computer that could run the latest version of Microsoft Windows.

Unfortunately, while the Pi 2 became available back then, the required version of Windows was not. It’s only been in the last month that Microsoft launched Windows 10 IoT Core, so I’ve finally had a chance to try it out.

For those who are also interested in this option, I thought I’d note down my experiences of installing it, connecting to it and running software on it.

Installing Windows 10 IoT Core

There are some official instructions provided by Microsoft on how to do this. However, they require that you are running Windows 10 on a PC, and none of my computers have Windows 10 yet. I also didn’t want to use up the hard disk space that would be needed if I had set up a Windows 10 virtual machine. I was more interested in unofficial options.

What didn’t work:

  • Using the Python ffu2img tool to convert the official Windows 10 IoT SD card image to something that could be loaded onto the SD card with something like Win32 Disk Imager. The ffu2img developer admits that they are pretty sure that there’s something wrong.
  • Downloading the official Windows 10 Home edition ISO and using the version of DISM in the sources directory there to load the SD card image.

What did work:

  • I got the official Windows 10 IoT Core for Raspberry Pi 2 ISO from Microsoft, opened it, ran the installer, and it put the flash.ffu file in C:\Program Files (x86)\Microsoft IoT\FFU\RaspberryPi2\
  • Next, I got the Windows ADK for Windows 10 installer from Microsoft, and it loaded a suitable version of DISM into C:\Program Files (x86)\Windows Kits\10\Assessment and Deployment Kit\Deployment Tools\x86\DISM\
  • Then I formatted my SD card using SD Formatter
  • I copied the flash.ffu file into the DISM directory and used it (following the instructions on the Raspberry Pi forums) in an Administrator Command Prompt to copy it onto my SD card
  • I safely ejected the SD card, popped it into the Pi and it booted up fine.

Connecting to Windows 10 IoT Core

Once the Pi got going, I needed to tell it what language to use. I had plugged a decent quality USB keyboard in, but it was extremely finicky: key presses were seemingly ignored. In the end, I plugged a USB mouse in and it was much more responsive to mouse clicks.

Windows 10 IoT is really designed to run a single GUI application. It boots into one that shows the hostname and IP address for the Pi, as well as displaying some simple tutorial instructions. It’s designed to connect to Visual Studio 2015, and allow a developer to push their application straight to the Pi. However, I don’t work with my Pi that way – I connect into it and configure/run it via a remote shell.

It’s possible to SSH straight into the Pi (as user Administrator, initially, until you set up some other users). You basically get a DOS prompt. Cool! What was less straightforward was getting files onto it.

What didn’t work:

  • SCP – I kept getting an “exec request failed on channel 0” error
  • Trying to get the Pi to download files using an Invoke-WebRequest via PowerShell running on the Pi. The version of PowerShell seems to be missing some modules.

What did work:

  • The Pi appears on the LAN as a Windows network share. You can use a Windows PC and put in \\192.168.1.10\c$ (or whatever your IP address is) and then log in as minwinpc\Administrator with your password. Voila!
  • Similarly, on a Mac, you can access it via the Finder using Go > Connect to Server and entering smb://192.168.1.14/c$ (or whatever your IP address is). The Pi will also then appear under /Volumes/c$/
  • Once the share has been opened, it’s straightforward to copy files to and from the Pi.

Running Software on Windows IoT Core

As mentioned above, the standard way to get software running on Windows IoT is for Visual Studio to load it onto the Pi over the network. However, I’m more interested in running standard server apps that don’t rely on the Microsoft ecosystem, so I focussed my efforts on getting Node.js to run on the Pi.

Microsoft is doing some very cool stuff around supporting platforms like Node.js and even Python on Windows IoT. It’s still very much in its early days, but shows promise.

Here’s what I did:

  • I downloaded and installed the Node.js Tools for Windows IoT (v1.1) from GitHub. These were installed into C:\Program Files (x86)\Node.js (chakra)\
  • I copied the whole Node.js (chakra)\ installation directory over to the Pi into C:\Node.js\
  • I downloaded the ARM version of node.exe from the same GitHub page as above, which I copied over the top of the previous (Intel version of) node.exe in C:\Node.js\
  • Set up the APPDATA environment variable to be somewhere useful (it wasn’t set for me): set APPDATA=C:\Users\Public
  • Set up other useful environment variables for Node by running: C:\Node.js\nodevars.bat
  • Now commands like “npm install -s express” and “node test.js” work.

While I could run simple Hello World style programs with Node that wrote text out to the screen, I couldn’t get a slightly more advanced Node program that ran a basic webserver to work.
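
For reference, this is roughly the sort of minimal Express-based webserver I was attempting (a sketch only – it assumes the express module was installed via npm as above, and it’s the kind of thing that runs fine under Node on an ordinary PC):

var express = require('express'); // assumes "npm install -s express" as above
var app = express();

// Respond to requests for the root URL with a simple message
app.get('/', function (req, res) {
  res.send('Hello from the Raspberry Pi');
});

// Listen for HTTP requests on port 3000
app.listen(3000, function () {
  console.log('Listening on port 3000');
});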

Conclusions

It was fun to see Windows 10 boot up on the Raspberry Pi. However, I was a little disappointed how limited it was, given how powerful a Pi is with the default Linux-based OS.

Microsoft’s approach to developing for the Raspberry Pi brings something new to the space, and may make the platform more accessible to developers who are already adept with Microsoft tools. Still, it would’ve been nice to see the basic image come with something immediately useful, if only the new Edge web browser (this would’ve made super-cheap Internet Explorer based kiosks really simple to create).

There’s the old saying that you should always wait for the third version of a Microsoft OS. I don’t know if we’ll need to wait that long for a compelling Microsoft OS on Raspberry Pi, but I am excited to see what Microsoft does with this in future.

Facebook Node SDK example

I’ve been writing some Node.js software to interact with Facebook recently. To do this, I just picked the first SDK listed in the Facebook developer’s page for Node.js. However, I couldn’t find a good example listing of how to use this SDK to iterate over multiple pages of results. So, this is a quick post that will hopefully serve as such an example.

The complete Node.js application can be downloaded from GitHub at https://github.com/aesidau/sbs-headlines, but I’ll walk through it step by step here. The application will list the first 1,000 news headlines from the SBS News page on Facebook. However, before any of this will work, you will need to get the fb and async modules, so start with something like:

npm install fb
npm install async

Also, I assume that you’ve set up a Facebook application over at https://developers.facebook.com/ and have its app id and app secret to hand. In fact, for this Node.js example, I am assuming that these values are stored in the environment variables FB_APP_ID and FB_APP_SECRET, respectively.

The first step in any Facebook API application is to use these app credentials to get an access token that can be used in later Facebook API calls. In this example, I’m going to get a basic access token that isn’t associated with a Facebook user, so will be able to obtain only public information. That’s all we need here, anyway.

// Modules used throughout this walkthrough (install fb and async with npm as above)
var FB = require('fb');        // Facebook SDK for Node.js
var async = require('async');  // provides the doUntil loop used later
var URL = require('url');      // built-in module, used to parse the paging.next URL

// Acquire a new access token and callback to f when done (passing any error)
function acquireFacebookToken(f) {
  FB.napi('oauth/access_token', {
    client_id: process.env.FB_APP_ID,
    client_secret: process.env.FB_APP_SECRET,
    grant_type: 'client_credentials'
  }, function (err, result) {
    if (!err) {
      // Store the access token for later queries to use
      FB.setAccessToken(result.access_token);
    }
    if (f) f(err);
  }); // FB.napi('oauth/access_token'
}

This has been written as a function so we can call it later. Note that it uses a callback to indicate to the caller when the process has been completed, passing back any errors that came up.

So far, so obvious. Now that the access token is sorted, let’s look at what is required to iterate over a Facebook feed using the API.

I’m going to use the doUntil function in the async module. This enables looping over functions that return their results via callbacks. The other thing to note is that each call to the Facebook API will return a “paging” object that will contain a “next” attribute, but only if there is another page of results to retrieve. This attribute can be parsed to construct the next Facebook API query to obtain the next page of results.

I have also included a test for whether the access token has expired. This shouldn’t happen in a simple app where the access token was only just acquired. However, in many apps, the access token may have been acquired hours before. So, if this code is to be reused, it’s a good idea to deal with this case.

// Process the Facebook feed and callback to f when done (passing any error)
function processFacebookFeed(feed, f) {
  var params, totalResults, done;

  totalResults = []; // progressively store results here
  params = { // initial set of params to use in querying Facebook
    fields: 'message,name',
    limit: 100
  };
  done = false; // will be set to true to terminate loop
  async.doUntil(function(callback) {
    // body of the loop
    FB.napi(feed, params, function(err, result) {
      if (err) return callback(err);
      totalResults = totalResults.concat(result.data);
      if (!result.paging || !result.paging.next || totalResults.length >= 1000) {
        done = true;
      } else {
        params = URL.parse(result.paging.next, true).query;
      }
      callback();
    }); // FB.napi
  }, function() {
    // test for loop termination
    return done;
  }, function (err) {
    // completed looping
    if (err && err.type == 'OAuthException') {
      // the access token has expired since we acquired it, so get it again
      console.error('Need to reauthenticate with Facebook: %s', err.message);
      acquireFacebookToken(function (err) {
        if (!err) {
          // Now try again (n.b. setImmediate requires Node v0.10 or later)
          setImmediate(function() {
            processFacebookFeed(feed, f);
          }); // setImmediate
        } else if (f) {
          f(err);
        }
      }); // acquireFacebookToken
    } else if (f) {
      f(err, totalResults);
    }
  }); // async.doUntil
}

Lastly, we just need to wire these two functions together so that we get the access token, retrieve the results (i.e. the headlines from SBS World News Australia), and then print them out.

acquireFacebookToken(function (err) {
  if (err) {
    console.error('Failed authorisation to Facebook: %s', err.message);
  } else {
    console.log('Acquired Facebook access token');
    // Now let's do something interesting with Facebook
    processFacebookFeed('SBSWorldNewsAustralia/feed', function (err, results) {
      if (err) {
        console.error('Failed to retrieve Facebook feed: %s', err.message);
      } else {
        // Print out the results
        results.forEach(function (i) {
          var headline = i.message || i.name;
          // If it's an embedded video, it's possible there's no headline
          if (headline) {
            console.log(headline);
          }
        }); // results.forEach
      }
    }); // processFacebookFeed
  }
}); // acquireFacebookToken

And that’s it. I hope this has been useful for others. Grab the complete application from GitHub to try it out, but make sure you set up your environment variables for the App ID and App Secret first.

XBMC on Raspberry Pi 2

Raspberry Pi running OpenELEC with XBMC

I got a Raspberry Pi 2 on the first day they were available in Australia. It has twice the memory and is up to six times faster than the old Raspberry Pi, and at some point in the future it will be able to run Windows 10. But in the meantime, I thought it would be cool to see what sort of media centre appliance I could get going on a $36 computer. This post is for posterity, but also in case it helps others who are trying to get this working.

The default media centre platform is called XBMC, but the first thing I learnt was that it is now called Kodi. According to the Kodi Wiki, there are just two distributions that work on the Pi 2. The first one I tried was OSMC, but it is still in Alpha release and not so stable. The other is OpenELEC, whose v5.0.3 supports the Pi 2.

Following the installation instructions didn’t work for me; perhaps running Windows 7 64-bit caused problems for the Win32 Disk Imager program. So, I tried using WinFLASHTool instead, and it worked perfectly for me.

This got me a media centre on the Pi, but what I really wanted was to be able to control it from the TV remote control – this requires HDMI CEC to work. I have an LG 42LN5710 television, and LG calls their implementation of CEC “Simplink”. There are two ways to turn it on: press the Simplink button on the remote, or press the Input source button and then the green button. Neither worked for me.

After a lot of stuffing around, I learned two things that got me on the right track.

Firstly, not every HDMI cable supports CEC. I had a cheap HDMI 1.3 cable that was fine for delivering A/V from the Pi to the TV, but I needed to replace it with a new cable. CEC is implemented in a single wire in the cable, and is apparently mandatory, but not mandatory enough.

Secondly, any HDMI device can communicate with any other HDMI device connected on any HDMI cable using CEC. I had three HDMI devices (including the Pi) plugged into my TV. One of them was misbehaving, and stopping CEC on the Pi from working. I had to unplug the rogue device and reboot the Pi.

After this, I was able to turn on Simplink and the TV identified the Pi as a Simplink device. Excellent!

Mathematical, musical curiosity

I’ve recently been writing an app that uses the autocorrelation approach to detect the pitch of musical notes. This approach basically tries to see if a given musical note is present in a digital audio signal by comparing each sample with the sample one period later, which ought to be the same (since a given note repeats periodically as per its frequency). In exploring how to best do this in my app, I’ve come across something I found curious.
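
To give a flavour of what that means in code, here is a minimal sketch (not the actual code from my app) of the autocorrelation of a signal at the lag of one candidate note’s period:

// "samples" is an array of audio sample values; "period" is a candidate
// note's period, rounded to a whole number of samples
function autocorrelation(samples, period) {
  var sum = 0, count = 0;
  for (var i = 0; i + period < samples.length; i++) {
    // if the signal really repeats with this period, these two samples
    // line up and their products reinforce each other
    sum += samples[i] * samples[i + period];
    count++;
  }
  return sum / count;
}

Pitch detection then comes down to picking the candidate note whose period gives the strongest correlation.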

Before I get to that, I need to explain a couple of things. Firstly, I am doing this pitch detection for a particular instrument: the flute. The flute is ordinarily considered to be able to play notes from B3 (the B immediately below middle C, but only if the flute has a “B foot”, otherwise from middle C) to C7 (the C three octaves above middle C). However, very skilled players might be able to get a few notes higher, to F7. Also, the piccolo flute can go up to C8, but we’ll ignore that for now. Given that the frequency of B3 is 246.9Hz and that of F7 is 2,793.8Hz, the 43 notes are spread across about 2,550Hz of frequencies.

The other thing to explain is that CDs (and many electronic devices) use a sample frequency of 44,100Hz. This is considered to be sufficiently high to record and reproduce audio signals up to 20,000Hz, which is the general limit of human hearing. However, a higher sample frequency, of 48,000Hz, is being increasingly used, such as in DAT tapes or DVDs.

These two things come together in autocorrelation because it requires knowing the period of each note, measured in numbers of samples. For example, the audio signal for a pure B3 tone should repeat every 178.6 samples if sampled at 44.1kHz or every 194.4 samples if sampled at 48kHz. Similarly, F7 should repeat every 15.8 samples at 44.1kHz or every 17.2 samples at 48kHz. Except there’s no such thing as a fraction of a sample, so for my autocorrelation calculations, I would round to the nearest sample.

Rounding introduces error, so using a period of 16 samples (at 44.1kHz) or 17 samples (at 48kHz) for F7 is not ideal. In fact, these periods correspond to different frequencies – 2,756.3Hz and 2,823.5Hz respectively. The intervals between musical notes are measured in cents, and there are 100 evenly-spaced cents to a semitone. The frequency corresponding to a period of 16 samples at 44.1kHz is 23 cents below the real F7, while a period of 17 samples at 48kHz is 18 cents above it. Higher notes are more error-prone, and the corresponding errors for a low note like B3 are 4 cents below (for 44.1kHz) and 3 cents above (for 48kHz).
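
Here’s a small sketch of that arithmetic, and of the maximum-error scan described in the next paragraph (again, not my app’s actual code; it assumes equal temperament with A4 at 440Hz, where A3 is 12 semitones below A4, B3 is 10 below and F7 is 32 above):

function noteFrequency(semitonesFromA4) {
  return 440 * Math.pow(2, semitonesFromA4 / 12); // e.g. noteFrequency(32) is F7, about 2,793.8Hz
}

// Error (in cents) introduced by rounding a note's period to whole samples
function roundingErrorCents(frequency, sampleRate) {
  var period = Math.round(sampleRate / frequency); // e.g. 16 samples for F7 at 44.1kHz
  var roundedFrequency = sampleRate / period;      // e.g. 2,756.3Hz
  return 1200 * Math.log(roundedFrequency / frequency) / Math.LN2;
}

// Worst-case rounding error across a range of notes
function maxErrorCents(sampleRate, lowestNote, highestNote) {
  var worst = 0;
  for (var n = lowestNote; n <= highestNote; n++) {
    worst = Math.max(worst, Math.abs(roundingErrorCents(noteFrequency(n), sampleRate)));
  }
  return worst;
}

console.log(roundingErrorCents(noteFrequency(32), 44100)); // F7 at 44.1kHz: about -23 cents
console.log(roundingErrorCents(noteFrequency(32), 48000)); // F7 at 48kHz: about +18 cents
console.log(maxErrorCents(44100, -12, 32)); // A3 to F7 at 44.1kHz: about 29 cents
console.log(maxErrorCents(48000, -12, 32)); // A3 to F7 at 48kHz: about 37 cents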

For my autocorrelation algorithm, some error in detecting pitch is okay: as long as the flute is playing in tune and the algorithm is less than 50 cents out, it will always get the right note. So, I wrote some code to look at the maximum error in cents from following this approach, considering a range of sample frequencies from 2,000Hz to 60,000Hz, and got a curious graph:

Pitch Errors for Sample Frequencies

You might be able to see small red dots at the points for 44.1kHz and 48kHz (or you can click through to see a bigger version of the graph). This graph shows the maximum error in cents across all notes in the range between A3 and F7, and it is less than 40 cents for both 44.1kHz and 48kHz. In fact, the maximum error for 44.1kHz (29.3 cents, relating to the note G#6) is less than that for 48kHz (37.0 cents, relating to the note D7), and 44.1kHz is close to the minimum across all sample frequencies up to about 57.8kHz.

There is a general trend that higher sample rates result in lower errors, although I wasn’t expecting that the sample rate of 44.1kHz would have a lower maximum error than 48kHz. I wondered if this was due to the specific range of notes I was examining, so I wrote some more code to examine the maximum errors for these two sample frequencies if I used ranges of notes starting at A3 and finishing somewhere between C7 and C8. Here’s the resulting graph:

Pitch Errors for Note Ranges

As before, for a note range going up to F7, 44.1kHz has a lower maximum error in cents than 48kHz. However, if the note range had stopped at C7, 48kHz would have the lower maximum error. And if we’d gone above A7, 48kHz would again be more accurate than 44.1kHz, but at that point the error would be above 50 cents, i.e. not accurate enough to be useful.

So, curiously 44.1kHz happens to be well-suited to autocorrelation of notes in the flute range. I’m sure this wasn’t a consideration when that was selected as a common sample frequency for audio recordings, but it happens to benefit me now.

Remembering dozens of passwords

You’ll never forget your password ever again

In recent weeks, there have been claims that Dropbox usernames and passwords have been leaked online. While Dropbox has denied that any passwords were leaked, their advice was for “users not to reuse passwords across services”. For people who don’t use second-factor authentication or password manager services, this is good advice.

In fact, I’ve moved away from the approach I described previously of how to choose a strong password. There is no such thing as a strong password once it’s leaked. Sadly, even well regarded sites like Evernote and LinkedIn have had their passwords stolen, and no service can be considered immune to hacks.

Previously, I simply remembered passwords relating to different tiers of service: a password for my most secure service, another for secure but less important services, another for services I use regularly but don’t need to be secure, and another for services that I don’t really use. This way I just needed to remember a handful of passwords across many sites. Unfortunately, this method is not proof against hacks.

However, to remember a different password for every site is infeasible for most people (including me!). Still, there is a way to have a large number of different passwords across different sites but need to remember only two things: a password stub and a password algorithm. When logging in, a user just needs to apply the name of the service and the stub to the algorithm, and out should pop a (relatively) unique password. Different stubs might be used for different accounts, e.g. if the same service is used for both work and personal purposes.

Here’s an example of how this might be used. Take the password stub “pa55word” and the algorithm “insert the second and third letter of the site name in the third position”. If this user was logging in to “www.dropbox.com”, the second and third letters would be “ro” and the unique password would be “paro55word”. (Let me just say that this is neither a stub nor an algorithm that I use, and now that it’s documented here, not one that you should use either.)
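
In code, that example algorithm might look something like this (a hypothetical sitePassword function, and again using a stub and algorithm that nobody should actually use):

// Insert the second and third letters of the site name at the third position of the stub
function sitePassword(stub, siteName) {
  var letters = siteName.slice(1, 3);               // e.g. "ro" for "dropbox"
  return stub.slice(0, 2) + letters + stub.slice(2);
}

console.log(sitePassword('pa55word', 'dropbox'));  // "paro55word"
console.log(sitePassword('pa55word', 'evernote')); // "pave55word"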

Since there are potentially 676 (26 x 26) combinations of second and third letters, this algorithm can generate hundreds of passwords without needing to remember more than two things. It’s easier than my previous approach where I needed to remember at least four things.

In choosing a stub, it’s helpful to include the sorts of things that password strength tests look for, e.g. some punctuation, a number and both upper and lower case letters. In choosing an algorithm, you want it to be pretty simple so that it will work for many different site names, so don’t go overboard.

So this will let you follow Dropbox’s advice and avoid reusing passwords, but when (!) a service has its passwords hacked and you need to change your password there, it’s not going to work. So, you probably need to remember a third thing – how many times a given service has been hacked (hopefully there aren’t too many). Then you would have a modification of the algorithm that incorporates this information as well, e.g. the letters inserted for the second iteration of a password on www.dropbox.com would be “rro” instead of “ro”, the third iteration would be “rrro”, and so on. This does expose the main weakness of the method, in my opinion, so I’m hopeful of coming across a better approach at some point.
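
To make that modification concrete, the sketch above might incorporate a hack count something like this (again, purely illustrative):

// Repeat the site's second letter once per previous hack of the service
function sitePasswordAfterHacks(stub, siteName, hackCount) {
  var letters = siteName.slice(1, 3);           // e.g. "ro" for "dropbox"
  for (var i = 0; i < hackCount; i++) {
    letters = siteName.charAt(1) + letters;     // "rro" after one hack, "rrro" after two, ...
  }
  return stub.slice(0, 2) + letters + stub.slice(2);
}

console.log(sitePasswordAfterHacks('pa55word', 'dropbox', 1)); // "parro55word"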

As I mentioned at the top, second-factor authentication and password manager services are also approaches that can be considered, but have their own downsides. I’m more hopeful that these services will improve in usability and utility over time so that I can make more use of them, before I need to remember the details of too many website hacks.

Lessons from NYT on innovation

The Kindle New York Times

Whatever the circumstances that led someone at The New York Times to leak their report on Innovation, I am thankful. Published (internally) in March, it is the fruit of a six-month deep-dive into the business of journalism within a company that has been a leader in that industry for over a century, and provides an intimate and honest study into how an incumbent can be disrupted. It is 97 pages long, and worth reading for anyone who is interested in innovation or the future of media.

The report was leaked in full in May, and I’ve been reading bits of it in my spare time. Just recently I completed it, and felt it was worth summarising some of the lessons that are highlighted by the people at the Times. As it is with such things, my summary is going to be subjective and – by nature – highly selective, so if this piques your interest, I encourage you to read the whole thing.

(My summary ended up being longer than I’d originally intended, so apologies in advance.)

Organisational Division

Because of the principle of editorial independence, the Times has clear boundaries between the journalists in the newsroom and those who operate “the business” part of the newspaper, which has been traditionally about selling advertising. This separation is even known as “church and state” within the organisation, and affects everything from who is allowed to meet with whom (even during brown-bag lunch style meetings) to the language used to communicate concepts. This has worked well in the past, allowing the journalism to be kept at the highest quality, without fear of being compromised by commercial considerations.

However, the part of the organisation that has been developing new software tools and reader applications sits within “the business” (its staff not being journalists), and has hence been disconnected from the newsroom. As a result, new software is not developed to support the changing style of journalism, and where it is, it is done as one-off projects. Other media organisations are utilising developers more strategically, resulting in better tools for the journalists and a better experience for the readers.

Lesson: Technology capability needs to be at the heart of an innovation organisation, rather than kept at arms-length.

Changing Customers

For a very long time, the main customer of the Times has been advertisers. However, print media is facing a future where advertisers will not pay enough to keep the organisation running. Online advertising pays less than print advertising, and mobile advertising even less again. Coupled with declining circulation due to increased digital readership, the advertising business looks pretty sick. But there’s a new type of customer for the digital editions that is growing in importance: the reader.

While advertising revenues had the potential to severely compromise journalism, it’s not so clear that the same threat exists from reader revenues. In theory there is a good alignment: high quality journalism results in more readers. But if consideration of attracting readers is explicitly kept away from the newsroom as part of the “church and state” division, readers may end up being attracted by other media organisations. In fact, this is what is happening at the Times, with declines in most online reader metrics, and none increasing.

In the print world, it was enough to produce a high quality newspaper and it would attract readers. However, in the digital world this strategy is not currently working. Digital readers don’t select a publication and then read the stories in it, they discover individual articles from a variety of sources and then select whether to read them or not. The authors of articles need to take a bigger role in ensuring those articles are discovered.

Lesson: When customers radically change, the business needs to radically change too (many true-isms may be true no longer).

Experimentation

The rules for success in digital are different from those of traditional print journalism, although no-one really knows what they are yet. That said, the Times newsroom has an ingrained dislike of risk-taking. Again this made sense for a newsroom that didn’t want to print an incorrect story, and so everything had to be checked before it went public. However, this culture inhibits innovation if applied outside of the news itself.

Not only does a culture of avoiding risks prevent them from experimenting and slow the ability to launch new things, but smart people within the organisation risk getting good at the wrong things. A great quote from the report: “When it takes 20 months to build one thing, your skill set becomes less about innovation and more about navigating bureaucracy.”

Also, the newsroom lacks a dedicated strategy and operations team, so doesn’t know how well readers are responding to experiments, or what is working well for competitors. Given that competitors are no longer only other daily newspapers, it’s not enough to just read the morning’s papers to get insight into the competition. BuzzFeed reformatted stories from the Times and managed to get greater reader numbers than the Times was able to for the same stories.

Lesson: If experimentation is being avoided due to risk, then business risks are not being managed effectively.

Acquiring Talent

It turns out that people experienced in traditional journalism don’t automatically have all the skills to meet the requirements of digital readers. However, the Times has a bias for hiring and promoting people in digital roles based on their achievements as journalists. While this likely worked in the past to create a high quality newspaper, it isn’t working in digital. In general, the New York Times appears to be a print newspaper first, and a digital business second. The daily tempo of article submission and review is oriented around a daily publication to be read in the mornings, rather than supporting the release of stories digitally when they are ready to be published. Performance metrics are still oriented around the number of front page stories published – a measure declining in importance as digital readers cease to discover articles via the home page.

The lack of appreciation for the digital world and digital people in general has resulted in the departure of a number of skilled employees, according to the report. Hiring digital talent is also difficult to justify to management given that demand has pushed salaries higher for skilled people even if those people are relatively young. What could be a virtuous circle, with talent attracting talent, is working in the opposite direction with what appears to be a cultural bias against the very talent that would help the Times.

Lesson: An organisation pays for talent either way – by paying market rates for capable people, or by paying the cost in lost opportunities.

Final words

When I first came across the NYT Innovation report, I expected to read about another example of the innovator’s dilemma, where rational business decisions keep an incumbent from moving into a new market. Instead, the report is the tale of how the organisation structure, culture and processes that made The New York Times great in the past are actively inhibiting its success in the present. Some of these seem to have become sacred cows and it is difficult for the organisation to get rid of them. It will require courage – and a dedication to innovation – to change the organisation into one that is able to compete effectively.

Hackathon Tips

fall 2012 hackNY student hackathon

Last week, I participated in my first Hackathon. It was an internal one for Telstra employees, but there were around 40-50 people involved, and my team ended up winning – which was awesome. However, the experience of being part of it was a reward in itself, with the collective energy creating a real buzz, and there was a huge amount of satisfaction in being part of something so productive.

Since it was all internal development, I’m not going to share the details of the idea. However, I was one of the two developers on the team (we were also joined by an awesome interface designer and fabulous digital sales person) and I wrote a back-end server in Node.js that had to implement a web server, IMAP to Gmail, and OAuth to Box.com. I’d been doing some serious Node.js development in a previous project, so I didn’t have to learn that, and the IMAP stuff wasn’t too different from a hobby project I’ve discussed before (although that was in Python). Getting OAuth to work was the main hurdle, but the advantage of picking popular frameworks and services is that others are likely to have solved the major problems before me, and Stack Overflow was a good source of solutions.

In any case, I thought it might be worthwhile to share a couple of the things that I think I did well, and which might help others going into their own Hackathons. Putting aside the strength of the idea and the talent possessed by the team – which would have been the principal things that helped us win the top prize – I think there were three things that put us in the best position to pull it off.

1. Networking prior to pitching

The Hackathon kicked off with building teams on the strength of a one-minute pitch, and around half the participants pitched an idea. So, it was a pretty competitive way to start things off, and one minute isn’t much time to sell yourself and your idea. However, before the pitching began, there was about half an hour of social drinks (the Hackathon started after work had finished for the day).

I decided to use the social drinks time to be social, rather than just chatting to people that I already knew. As it turned out, this was a good thing to do, since a natural ice-breaker was to ask if someone was planning to pitch an idea, and to share the idea I was planning to pitch. This meant that I got to speak to several people for longer than one minute about my idea, and one of those people ended up deciding to join my team.

This was a lucky break, since once the one-minute pitches were all done, there were now two of us going around selling the project idea to others. I doubt I would’ve gotten a project team together without this, since I hadn’t pre-arranged a team to work on the idea.

2. Knowing ahead of time how to achieve the idea

Luckily, the idea was one that I’d done some initial work on with others in Telstra. Also, I’d done a bit of research to see how a useful version of it might be implemented in the time available. One of the rules was that we had to use a partner API, like Box.com’s, so for example I had a quick look to see that the APIs would do what was needed.

As a result, I was able to explain clearly at the start how I proposed that we would go about building something. Also, I was able to respond to a variety of objections and arguments that were put to us by mentors, peers and judges during the Hackathon.

That’s not to say that I was stubborn or unmoving when it came to the idea (at least, I’d like to think I wasn’t). It’s just that I wasn’t making decisions or coming up with responses from a position of ignorance. We did explore a couple of variants of the idea as we went along and there were additional features that were built that I hadn’t originally thought of. However, we were very focussed, and I think this helped in realising the idea.

3. Progressing through Tuckman’s Stages ASAP

If you haven’t heard of Tuckman’s Forming, Storming, Norming and Performing stages of team development, add it to your to-do list to read up. (Or do it now – I’ll wait here if you want.) I was conscious that the team had only a limited time to complete the project, and a major risk was consuming valuable time in internal team politics. We needed to get to the Performing stage as quickly as possible.

Rather than detail exactly how the team evolved, I’ll just mention a few things that I think helped us progress:

  • Forming the team in a social setting was a good way to start with some of the barriers broken down.
  • The pre-work mentioned in point #2 above helped us stay in synch. Also, the first thing I did was to answer questions from the team on the idea and its implementation, so we began heading in the same direction.
  • The next thing I did was ask everyone for their thoughts and plans on how to begin, so we had a collective plan.
  • As the idea evolved, we wrote up the specifics on one of the walls of the office we were in so that everyone could see it.
  • Everyone had largely independent activities, so we weren’t held up waiting on each other.
  • I was team “captain” but I spent much of my time contributing to the final outputs, i.e. was part of the team rather than the manager of the team.

That said, the team was made up of easy-going people, so it was probably less likely we’d have a big falling-out. However, since I didn’t know any of them in advance, I didn’t know this.

Finally

We also spent a couple of hours prior to the final three-minute presentation going over (and over) the demo and presentation. This was worthwhile, but an obvious thing to do.

So, I think I ended up with a winning team through a combination of good luck and good planning. However, while I can’t help with the luck, I hope the above tips would aid you if you’re entering a Hackathon. I hope you enjoy it as much as I did.

Android on Xperia

One of my handsets is a Sony Xperia E C1504 (code-name Nanhu) which was a low-end-ish Android handset when it launched in early 2013, and was apparently relatively popular in India. One of its claims to fame is that it was also one of the first handsets to have a version of Firefox OS available for it. But why I’m writing about it here is that Sony has been hassling me to upgrade it to the most recent version of firmware (11.3.A.2.23), and recently I gave in, but since I was mucking about with the firmware I thought I’d “root” it as well. And therein lies the tale.

Although, as is often the case with Android when wandering off the well-trod path, it’s more of a cautionary tale.

“Rooting” an Android device means to gain complete control over the operating system through installing a superuser tool in the system partition. When Android is running, the system partition is read-only, so this step has to be done outside of Android itself (unless an “exploit” is used, utilising a bug in an Android implementation to achieve this). The usual process for achieving root is: 1) unlock the bootloader, 2) install a custom recovery partition, 3) copy a superuser tool onto the device’s filesystem, and 4) install the superuser tool into the system partition from within recovery. Oh, if only it was that easy.

Step 0 – Install USB Drivers

Before you can do anything, you need to get the right USB drivers set up on your PC, which is a world of pain itself. Complications come from whether the PC is running a 32-bit or 64-bit operating system, whether the drivers are 32-bit or 64-bit, whether the drivers support ADB, fastboot, or (for Sony) flash modes, and which particular USB port the cable is plugged into (and for which mode). I’m running Windows 8.1 64-bit, which seems to have limited driver support for this sort of thing.

I had to:

  • Install the Android SDK from Google, so that the fastboot and adb tools were installed on my PC
  • Install the ClockworkMod universal USB drivers
  • Before going any further, make sure the drivers are working. Use “adb devices” or “fastboot devices” from the command line to list the devices that can be seen. To put the Sony Xperia E into fastboot mode: turn off the handset, ensure it is not connected to the PC via the USB cable, hold down the volume up button, then connect it to the PC via the USB cable.
  • When I connected the Xperia E in fastboot mode, the laptop reported it as an unknown device “S1 Boot”. I opened the Device Manager (press Windows-X and select it from the menu, for a quick way to get to that), right-clicked on the unknown device, selected “Update Driver Software”, then “Browse my computer for driver software”, then “Let me pick from a list”, and I chose the Nexus bootloader driver from the ClockworkMod set of drivers.
  • I used my laptop’s powered USB port for ADB and fastboot modes, but an unpowered USB port for the flash mode.

Pro-tip: If something’s not working, try another USB port. Try all of them.

Step 1 – Unlock the bootloader

Sony provides an official way to unlock the bootloader. Be warned that although it’s “official”, it will still void your warranty, and potentially brick your device. Ensure everything that is valuable on the device has been backed up somewhere else. At the very least, all apps and configuration on the device will be lost anyway.

I followed the unlocking instructions from Sony Developer World. However, my device immediately stopped working afterwards. It would go into a boot loop, showing the Sony logo, then the boot animation, then back to the Sony logo, and so on. The solution was to reflash the operating system.

I downloaded the Flashtool that’s available from Flashtool.net (I used Flashtool64 since I have 64-bit Windows). It’s the one that’s commonly used in the Sony Android custom ROM community.

I followed the instructions at XDA Developers to flash the 11.3.A.2.23 version of the C1504 firmware onto the device (the newest version currently available). This happened to be an Indian build of the firmware, so I’ve ended up with some links to Shahrukh Khan videos on my device as a result. :)

It was pretty hard to find these instructions. Most Sony Xperia E firmware flashing instructions refer to the C1505 version, which has support for 3G at 900MHz instead of 850MHz. Since Telstra’s 3G network is 850MHz, I need this capability of the C1504, and didn’t trust that C1505 firmware would give me what I wanted.

Now my handset booted again, but aside from the Shahrukh Khan videos, I hadn’t gained anything new yet.

Step 2 – Install a custom Recovery partition

The Android fastboot mode gives access to the bootloader. Now that it was unlocked, the bootloader allowed a special partition called the recovery partition to be flashed with new firmware. The recovery partition is an alternate boot partition to the default system partition.

The two most popular Android recovery partitions are from CWM (ClockworkMod) and TWRP (Team Win Recovery Project). I’ve used both in the past, but this time I chose CWM since it seemed to be more widely tested on the Xperia E. Unfortunately, I found that most of the instructions for installing the CWM recovery resulted in Wi-Fi ceasing to work on my device. I also tried to build a new version of CWM for my device, but the CWM build tools didn’t support it.

However, I found someone had built a version of the ZEUS kernel (replacing the default Sony Xperia kernel) that included CWM recovery, and this wouldn’t have the Wi-Fi issue. I followed the instructions at XDA Developers to flash that onto my device.

Step 3 – Superuser tool

Now when I turned on the device, it booted as normal, but a blue light appeared at the base of the device at the beginning, when the Sony logo is shown. When the blue light is shown, the device can be diverted to boot into CWM recovery rather than the default Android system by pressing a volume key. However, I needed to put a superuser tool onto the device’s SD card before this new feature would be useful.

The superuser tool is made up of two files (a system application called “su” and an Android app called “Superuser.apk”), but both are stored within a zip file for easy installation. I got the zip file from the AndroidSU site (the one labelled Superuser-3.1.3-arm-signed.zip).

I installed the zip file onto the sdcard simply by enabling the Developer Options on the device (under Settings) and ticking the USB Debugging option, then attaching the device to my PC via USB, and using the command on my PC:
adb push Superuser-3.1.3-arm-signed.zip /sdcard/

Step 4 – Install the Superuser tool

I disconnected and rebooted the device, and when the Sony logo appeared (and the blue light), pressed a volume key. The device booted into the recovery partition. I followed the instructions at XDA Developers (starting at step 5).

And that’s all (!). Now I can boot my unlocked, rooted Xperia E (C1504). Remember, if you’ve followed along with these instructions, you’ve voided your warranty. But at least now you can install whatever you want on the device, or change any of the configuration settings.

One thing you might want to install straight up is a build.prop editor, such as the mysteriously named Build Prop Editor, to change configurations. For example, tweaking the network settings for the Australian mobile operators seems to improve performance. I haven’t tried these myself yet, but it’s an example of the sort of thing that can be done.