Facebook Node SDK example

I’ve been writing some Node.js software to interact with Facebook recently. To do this, I just picked the first SDK listed on the Facebook developers page for Node.js. However, I couldn’t find a good example of how to use this SDK to iterate over multiple pages of results. So, this is a quick post that will hopefully serve as such an example.

The complete Node.js application can be downloaded from GitHub at https://github.com/aesidau/sbs-headlines, but I’ll walk through it step by step here. The application will list the first 1,000 news headlines from the SBS News page on Facebook. However, before any of this will work, you will need the fb and async modules, so start with something like:

npm install fb
npm install async
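
The snippets below also assume the modules have been pulled in at the top of the script, along with Node’s built-in url module (which the paging code further down uses). Something like:

var FB = require('fb');
var async = require('async');
var URL = require('url');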

Also, I assume that you’ve set up a Facebook application over at https://developers.facebook.com/ and have its App ID and App Secret to hand. For this Node.js example, I am assuming that these values are stored in the environment variables FB_APP_ID and FB_APP_SECRET, respectively.
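
If you haven’t set environment variables before, it’s just a matter of exporting them in your shell before running the app (substituting your own values, obviously):

export FB_APP_ID=your_app_id
export FB_APP_SECRET=your_app_secret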

The first step in any Facebook API application is to use these app credentials to get an access token that can be used in later Facebook API calls. In this example, I’m going to get a basic access token that isn’t associated with a Facebook user, so will be able to obtain only public information. That’s all we need here, anyway.

// Acquire a new access token and callback to f when done (passing any error)
function acquireFacebookToken(f) {
  FB.napi('oauth/access_token', {
    client_id: process.env.FB_APP_ID,
    client_secret: process.env.FB_APP_SECRET,
    grant_type: 'client_credentials'
  }, function (err, result) {
    if (!err) {
      // Store the access token for later queries to use
      FB.setAccessToken(result.access_token);
    }
    if (f) f(err);
  }); // FB.napi('oauth/access_token'
}

This has been written as a function so we can call it later. Note that it uses a callback to indicate to the caller when the process has been completed, passing back any errors that came up.

So far, so obvious. Now that the access token is sorted, let’s look at what is required to iterate over a Facebook feed using the API.

I’m going to use the doUntil function from the async module, which repeatedly calls an asynchronous function until a synchronous test says to stop. The other thing to note is that each call to the Facebook API will return a “paging” object that will contain a “next” attribute, but only if there is another page of results to retrieve. This attribute holds a URL that can be parsed to construct the next Facebook API query to obtain the next page of results.
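
To make that concrete, here’s roughly what happens with a paging.next value (this URL is made up for illustration; real ones are longer) and how the url module unpacks its query string straight back into a params object for the next call:

// A hypothetical paging.next value
var next = 'https://graph.facebook.com/SBSWorldNewsAustralia/feed' +
    '?fields=message,name&limit=100&until=1365000000';
// Passing true as the second argument also parses the query string
var params = URL.parse(next, true).query;
// params => { fields: 'message,name', limit: '100', until: '1365000000' }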

I have also included a test for whether the access token has expired. This shouldn’t happen in a simple app where the access token was only just acquired. However, in many apps, the access token may have been acquired hours before. So, if this code is to be reused, it’s a good idea to handle this case.

// Process the Facebook feed and callback to f when done (passing any error)
function processFacebookFeed(feed, f) {
  var params, totalResults, done;

  totalResults = []; // progressively store results here
  params = { // initial set of params to use in querying Facebook
    fields: 'message,name',
    limit: 100
  };
  done = false; // will be set to true to terminate loop
  async.doUntil(function(callback) {
    // body of the loop
    FB.napi(feed, params, function(err, result) {
      if (err) return callback(err);
      totalResults = totalResults.concat(result.data);
      // n.b. paging (and paging.next) is absent when there are no more pages
      if (!result.paging || !result.paging.next || totalResults.length >= 1000) {
        done = true;
      } else {
        params = URL.parse(result.paging.next, true).query;
      }
      callback();
    }); // FB.napi
  }, function() {
    // test for loop termination
    return done;
  }, function (err) {
    // completed looping
    if (err && err.type == 'OAuthException') {
      // the access token has expired since we acquired it, so get it again
      console.error('Need to reauthenticate with Facebook: %s', err.message);
      acquireFacebookToken(function (err) {
        if (!err) {
          // Now try again (n.b. setImmediate requires Node v0.10)
          setImmediate(function() {
            processFacebookFeed(feed, f);
          }); // setImmediate
        } else if (f) {
          f(err);
        }
      }); // acquireFacebookToken
    } else if (f) {
      f(err, totalResults);
    }
  }); // async.doUntil
}

Lastly, we just need to wire these two functions together so that we get the access token, retrieve the results (i.e. the headlines from SBS World News Australia), and then print them out.

acquireFacebookToken(function (err) {
  if (err) {
    console.error('Failed authorisation to Facebook: %s', err.message);
  } else {
    console.log('Acquired Facebook access token');
    // Now let's do something interesting with Facebook
    processFacebookFeed('SBSWorldNewsAustralia/feed', function (err, results) {
      if (err) {
        console.error('Failed to retrieve Facebook feed: %s', err.message);
      } else {
        // Print out the results
        results.forEach(function (i) {
          var headline = i.message || i.name;
          // If it's an embedded video, it's possible there's no headline
          if (headline) {
            console.log(headline);
          }
        }); // results.forEach
      }
    }); // processFacebookFeed
  }
}); // acquireFacebookToken

And that’s it. I hope this has been useful for others. Grab the complete application from GitHub to try it out, but make sure you set up your environment variables for the App ID and App Secret first.

Pi, Python and I (part 2)

In my previous post, I talked about how I’m using a Raspberry Pi to run a Facebook backup service and provided the Python code needed to get (and maintain) a valid Facebook token to do this. This post will be discussing the actual Facebook backup service and the Python code to do that. It will be my second Python program ever (the first was in the previous post), so there will likely be better ways to do what I’ve done, although you’ll see it’s still a pretty simple exercise. (I’m happy to hear about possible improvements.)

The first thing I need to do is pull in all the Python modules that will be useful; the extra packages should already be installed from the previous post. Also, because the Facebook messages will be backed up to Gmail using its IMAP interface, the Google credentials are specified here, too. Given that those credentials are likely to be something you want to keep secret at all costs, that’s all the more reason to run this on a home server rather than on a publicly hosted server.

from facepy import GraphAPI
import urlparse
import dateutil.parser
import dateutil.tz # used below when normalising dates to UTC
from crontab import CronTab
import imaplib
import time

# How many status updates to go back in time (first time, and between runs)
MAX_ITEMS = 5000
# How many items to ask for in each request
REQUEST_ITEMS = 25
# Default recipient
DEFAULT_TO = "my_gmail_acct@gmail.com" # Replace with yours
# Suffix to turn Facebook message IDs into email IDs
ID_SUFFIX = "@exportfbfeed.facebook.com"
# Gmail account
GMAIL_USER = "my_gmail_acct@gmail.com" # Replace with yours
# and its secret password
GMAIL_PASS = "S3CR3TC0D3" # Replace with yours
# Gmail folder to use (will be created if necessary)
GMAIL_FOLDER = "Facebook"

Before we get into the guts of the backup service, I first need to create a few basic functions to simplify the code that comes later. First up is a function that makes it easy to pull a value out of the results of a call to the Facebook Graph API, falling back to a default if the key is missing:

def lookupkey(the_list, the_key, the_default):
  try:
    return the_list[the_key]
  except KeyError:
    return the_default

Next, a function to retrieve the Facebook username for a given Facebook user. Given that we want to back up messages into Gmail, we have to make them look like email. So, each message will have to appear to come from a unique email address belonging to the relevant Facebook user. Since Facebook generally provides all their users with email addresses at the facebook.com domain based on their usernames, I’ve used these. However, to make it a bit more efficient, I cache the usernames in a dictionary so that I don’t have to query Facebook again when the same person appears in the feed multiple times.

def getusername(id, friendlist):
  uname = lookupkey(friendlist, id, '')
  if '' == uname:
    uname = lookupkey(graph.get(str(id)), 'username', id)
    friendlist[id] = uname # Add the entry to the dictionary for next time
  return uname

The email standards expect times and dates to appear in a particular format, so next is a function to produce this from whatever date format Facebook gives us:

def getnormaldate(funnydate):
  dt = dateutil.parser.parse(funnydate)
  tz = long(dt.utcoffset().total_seconds()) / 60
  tzHH = str(tz / 60).zfill(2)
  if 0 <= tz:
    tzHH = '+' + tzHH
  tzMM = str(tz % 60).zfill(2)
  # (%H: email dates use a 24-hour clock)
  return dt.strftime("%a, %d %b %Y %H:%M:%S") + ' ' + tzHH + tzMM
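
For example, feeding it a made-up timestamp in the format Facebook uses gives back an email-friendly date:

# A hypothetical Facebook created_time value
print getnormaldate("2013-04-01T10:30:00+1000")
# prints: Mon, 01 Apr 2013 10:30:00 +1000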

Next, a function to find the relevant bit of a URL to help travel back and forth in the Facebook feed. Given that the feed is returned to us from the Graph API in small chunks, we need to know how to query the next or previous chunk in order to get it all. Facebook uses a URL format to give us this information, but I want to unpack it to allow for more targeted navigation.

def getpagingpart(urlstring, part):
  url = urlparse.urlsplit(urlstring)
  qs = urlparse.parse_qs(url.query)
  return qs[part][0]
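
So, given one of the paging URLs that the Graph API hands back (again, a made-up example), we can pluck out just the value we need:

# A hypothetical paging URL
url = "https://graph.facebook.com/me/home?limit=25&until=1364500000"
print getpagingpart(url, "until") # prints: 1364500000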

Now a function to construct the headers and body of the email from a range of information gleaned from processing the Facebook Graph API results.

def message2str(fromname, fromaddr, toname, toaddr, date, subj1, subj2, msgid, msg1, msg2, inreplyto=''):
  if '' == inreplyto:
    header = ''
  else:
    header = 'In-Reply-To: <' + inreplyto + '>\n'
  utcdate = dateutil.parser.parse(date).astimezone(dateutil.tz.tzutc()).strftime("%a %b %d %H:%M:%S %Y")
  return "From nobody {}\nFrom: {} <{}>\nTo: {} <{}>\nDate: {}\nSubject: {} - {}\nMessage-ID: <{}>\n{}Content-Type: text/html\n\n

{}{}

".format(utcdate, fromname, fromaddr, toname, toaddr, date, subj1, subj2, msgid, header, msg1, msg2)

Okay, now that we've gotten all that out of the way, here's the main function: it processes a message obtained from the Graph API and places it in an IMAP message folder. The Facebook message is in the form of a dictionary, so we can look up the relevant parts by using keys. In particular, any comments on a message will appear in the same format, so we recurse over those as well using the same function.

Note that in a couple of places I call encode("ascii", "ignore"). This is an ugly hack that strips out all of the unicode goodness that was in the original Facebook message (which allows foreign language characters and symbols), dropping anything exotic to leave plain ASCII characters behind. However, for some reason, the Python installation on my Raspberry Pi would crash the program whenever it came across unusual characters. To ensure that everything works smoothly, I ensure that these aren't present when the text is processed later.
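
To illustrate what that call does:

# The accented character is silently dropped, leaving plain ASCII
print u"Caf\xe9".encode("ascii", "ignore") # prints: Caf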

def printdata(data, friendlist, replytoid='', replytosub='', max=MAX_ITEMS, conn=None):
  c = 0
  for d in data:
    id = lookupkey(d, 'id', '') # get the id of the post
    msgid = id + ID_SUFFIX
    try: # get the name (and id) of the friend who posted it
      f = d['from']
      n = f['name'].encode("ascii", "ignore")
      fid = f['id']
      uname = getusername(fid, friendlist) + "@facebook.com"
    except KeyError:
      n = ''
      fid = ''
      uname = ''
    try: # get the recipient (eg. if a wall post)
      dest = d['to']
      destn = dest['name']
      destid = dest['id']
      destname = getusername(destid, friendlist) + "@facebook.com"
    except KeyError:
      destn = ''
      destid = ''
      destname = DEFAULT_TO
    t = lookupkey(d, 'type', '') # get the type of this post
    try:
      st = d['status_type']
      t += " " + st
    except KeyError:
      pass
    try: # get the message they posted
      msg = d['message'].encode("ascii", "ignore")
    except KeyError:
      msg = ''
    try: # there may also be a description
      desc = d['description'].encode("ascii", "ignore")
      if '' == msg:
        msg = desc
      else:
        msg = msg + "
\n" + desc except KeyError: pass try: # get an associated image img = d['picture'] msg = msg + '
\n' except KeyError: img = '' try: # get link details if they exist ln = d['link'] ln = '
\nlink' except KeyError: ln = '' try: # get the date date = d['created_time'] date = getnormaldate(date) except KeyError: date = '' if '' == msg: continue if '' == replytoid: email = message2str(n, uname, destn, destname, date, t, id, msgid, msg, ln) else: email = message2str(n, uname, destn, destname, date, 'Re: ' + replytosub, replytoid, msgid, msg, ln, replytoid + ID_SUFFIX) if conn: conn.append(GMAIL_FOLDER, "", time.time(), email) else: print email print "----------" try: # process comments if there are any comments = d['comments'] commentdata = comments['data'] printdata(commentdata, friendlist, replytoid=id, replytosub=t, conn=conn) except KeyError: pass c += 1 if c == max: break return c

The last bit of the program uses these functions to perform the backup and to set up a cron job to run the program again every hour. Here's how it works.

First, I grab the Facebook Graph API token that the previous program (setupfbtoken.py) provided, and initialise the module that will be used to query it.

# Initialise the Graph API with a valid access token
try:
  with open("fbtoken.txt", "r") as f:
    oauth_access_token = f.read()
except IOError:
  print 'Run setupfbtoken.py first'
  exit(-1)

# See https://developers.facebook.com/docs/reference/api/user/
graph = GraphAPI(oauth_access_token)

Next, I set up the connection to Gmail that will be used to store the messages using the credentials from before.

# Setup mail connection
mailconnection = imaplib.IMAP4_SSL('imap.gmail.com')
mailconnection.login(GMAIL_USER, GMAIL_PASS)
mailconnection.create(GMAIL_FOLDER)

Now we just need to initialise some things that will be used in the main loop: the cache of Facebook usernames, the count of the number of status updates to read, and the timestamp that marks the point in time to begin reading statuses from. This last one ensures that we don't keep uploading the same messages again and again; the timestamp is kept in the file fbtimestamp.txt.

friendlist = {}

countdown = MAX_ITEMS
try:
  with open("fbtimestamp.txt", "r") as f:
    since = '&since=' + f.read()
except IOError:
  since = ''

Now we do the actual work, reading the status feed and processing the items:

stream = graph.get('me/home?limit=' + str(REQUEST_ITEMS) + since)
newsince = ''
while stream and 0 < countdown:
  streamdata = stream['data']
  numitems = printdata(streamdata, friendlist, max=countdown, conn=mailconnection)
  if 0 == numitems:
    break
  countdown -= numitems
  try: # get the link to ask for next (going back in time another step)
    p = stream['paging']
    next = p['next']
    if '' == newsince:
      try:
        prev = p['previous']
        newsince = getpagingpart(prev, 'since')
      except KeyError:
        pass
  except KeyError:
    break
  until = '&until=' + getpagingpart(next, 'until')
  stream = graph.get('me/home?limit=' + str(REQUEST_ITEMS) + since + until)

Now we clean things up: record the new timestamp and close the connection to Gmail.

if '' != newsince:
  with open("fbtimestamp.txt", "w") as f:
    f.write(newsince) # Record the new timestamp for next time

mailconnection.logout()

Finally, we set up a cron job to keep the status updates flowing. As you can probably guess from this code snippet, this is all meant to be saved in a file called exportfbfeed.py.

cron = CronTab() # get crontab for the current user
if [] == cron.find_comment("exportfbfeed"):
  job = cron.new(command="python ~/exportfbfeed.py", comment="exportfbfeed")
  job.minute.on(0) # run this script @hourly, on the hour
  cron.write()
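
If everything worked, crontab -l should then show an entry roughly like this (python-crontab uses the trailing comment to find the job again on later runs):

0 * * * * python ~/exportfbfeed.py # exportfbfeed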

Alright. Well, that was a little longer than I thought it would be. However, the bit that does the actual work is not very big. (No sniggering, people. This is a family show.)

It's been interesting to see how stable the Raspberry Pi has been. While it wasn't designed to be a home server, it's been running fine for me for weeks.

There was an additional benefit to this backup service that I hadn't expected. Since all my email and Facebook messages are now in the one place, I can easily search the lot of them from a single query. In fact, the Facebook search feature isn't very extensive, so it's great that I can now do Google searches to look for things people have sent me via Facebook. It's been a pretty successful project for me and I'm glad I got the chance to play with a Raspberry Pi.

For those who want the original source code files (setupfbtoken.py and exportfbfeed.py), rather than cutting and pasting from this blog, you can download them here:

If you end up using this for something, let me know!

Pi, Python and I (part 1)

I’ve been on Facebook for almost six years now, and active for almost five. This is a long time in Internet time.

Facebook has, captured within it, the majority of my interactions with my friends. Many of them have stopped blogging and just share via Facebook, now. (Although, at least two have started blogging actively in the last year or so, and perhaps all is not lost.) At the start, I wasn’t completely convinced it would still be around – these things tended to grow and then fade within just a few years. So, I wasn’t too concerned about all the *stuff* that Facebook would accumulate and control. I don’t expect them to do anything nefarious with it, but I don’t expect them to look after it, either.

However, I’ve had a slowly building sense that I should do something about it. What if Facebook glitched, and accidentally deleted everything? There’s nothing essential in there, but there are plenty of memories I’d like to preserve. I really wanted my own backup of my interactions with my friends, in the same way I have my own copies of emails that I’ve exchanged with people over the years. (Although, fewer people seem to email these days, and again they just share via Facebook.)

The trigger to finally do something about this was when every geek I knew seemed to have got themselves a Raspberry Pi. I tried to think of an excuse to get one myself, and didn’t have to think too hard. I could finally sort out this Facebook backup issue.

Part of the terms of my web host are that I can’t run any “robots” – it’s purely meant to handle incoming web requests. Also, none of the computers at home are on all the time, as we only have tablets, laptops and phones. I didn’t have a server that I could run backup software on, but a Raspberry Pi could be that server.

For those who came in late, the Raspberry Pi is a tiny, single-board computer that came out last year, is designed and built in the UK, and (above all) is really, really cheap. I ordered mine from the local distributor, Element14, whose prices start at just under $30 for the Model A. To make it work, you need to at least provide a micro-USB power supply ($5 if you just want to plug it into your car, but more like $20 if you want to plug it into the wall) and an SD card ($5-$10) to provide the disk, so it’s close to $60, unless you already have those to hand. You can get the Model B, which is about $12 more and gets you both more memory and an Ethernet port, which is what I did. You’ll need to find an Ethernet cable as well, in that case ($4).

When a computer comes that cheap, you can afford to get one for projects that would otherwise be too expensive to justify. You can give them to kids to tinker with and there’s no huge financial loss if they brick them. Also, while cheap, they can do decent graphics through an HDMI port, and have been compared to a Microsoft Xbox. No wonder they managed to sell a million units in their first year. Really, I’m a bit slow on the uptake with the Raspberry Pi, but I got there in the end.

While you can run other operating systems on it, if you get a pre-configured SD card, it comes with a form of Linux called Raspbian and has the Python programming language set up ready to go. Hence, I figured that as well as getting my Facebook backup going, I could use this as an excuse to teach myself Python. I’d looked at it briefly a few years back, but this would be the first time I’d used it in anger. I’ll document here the steps I went through to implement my project, in case anyone else wants to do something similar or just wants to learn from this (if only to learn how simple it is).

The first thing to do is to head over to developers.facebook.com and create a new “App” that will have the permissions that I’ll use to read my Facebook feed. Once I logged in, I chose “Apps” from the toolbar at the top and clicked on “Create New App”. I gave my app a cool name (like “Awesome Backup Thing”) and clicked on “Continue”, passed the security check to keep out robots, and the app was created. The App ID and App Secret are important and should be recorded somewhere for later.

Now I just needed to give it the right permissions. Under the Settings menu, I clicked on “Permissions”, then added the ones needed into the relevant fields. For what I want, I needed: user_about_me, user_status, friends_about_me, friends_status, and read_stream. “Save Changes” and this step is done. Actually, I’m not sure if this is technically needed, given the next step.

Now I needed to get a token that can be used by the software on the server to query Facebook from time to time. The easiest way is to go to the Graph API Explorer, accessible under the “Tools” menu in the toolbar.

I changed the Application specified in the top right corner to Awesome Backup Thing (insert your name here), then clicked on “Get access token”. Then I specified the same permissions as before, across the three tabs of User Data Permissions (user_about_me, user_status), Friends Data Permissions (friends_about_me, friends_status) and Extended Permissions (read_stream). Lastly, I clicked on “Get Access Token”, clicked “OK” on the Facebook confirmation page that appeared, and returned to the Graph API Explorer, where there was a new token waiting for me in the “Access token” textbox. It’ll be needed later, but it’s only valid for about two hours. If you need to generate another one, just click “Get access token” again.

Now it’s time to return to the Pi. Once I logged in, I needed to set up some additional Python packages like this:

$ sudo pip install facepy
$ sudo pip install python-dateutil
$ sudo pip install python-crontab

And then I was ready to write some code. The first thing was to write the code that will keep my access token valid. The one that Facebook provides via the Graph API Explorer expires too quickly and can’t be renewed, so it needs to be turned into a renewable access token with a longer life. This new token then needs to be recorded somewhere so that we can use it for the backup. Luckily, this is pretty easy to do with those Python packages. The code looks like this (you’ll need to put in the App ID, App Secret and access token that Facebook gave you):

# Write a long-lived Facebook token to a file and setup cron job to maintain it
import facepy
from crontab import CronTab
import datetime

APP_ID = '1234567890' # Replace with yours
APP_SECRET = 'abcdef123456' # Replace with yours

try:
  with open("fbtoken.txt", "r") as f:
    old_token = f.read()
except IOError:
  old_token = ''
if '' == old_token:
  # Need to get old_token from https://developers.facebook.com/tools/explorer/
  old_token = 'FooBarBaz' # Replace with yours

new_token, expires_on = facepy.utils.get_extended_access_token(old_token, APP_ID, APP_SECRET)

with open("fbtoken.txt", "w") as f:
  f.write(new_token)

cron = CronTab() # get crontab for the current user
for oldjob in cron.find_comment("fbtokenrenew"):
  cron.remove(oldjob)
job = cron.new(command="python ~/setupfbtoken.py", comment="fbtokenrenew")
renew_date = expires_on - datetime.timedelta(1)
job.minute.on(0)
job.hour.on(1) # 1:00am
job.dom.on(renew_date.day)
job.month.on(renew_date.month) # on the day before it's meant to expire
cron.write()

Apologies for the pretty rudimentary Python coding, but it was my first program. The only other things to explain are that the program sits in the home directory as the file “setupfbtoken.py” and when it runs, it writes the long-lived token to “fbtoken.txt” then sets up a cron-job to refresh the token before it expires, by running itself again.
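
So, with the App ID, App Secret and Graph API Explorer token pasted in, the first run is simply:

$ python ~/setupfbtoken.py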

I’ll finish off the rest of the code in the next post.

Please comment

Perhaps it is the responsible thing to do, or perhaps in my case it is a bit presumptuous given I average just a little over two comments per post, but I’ve written up a comment policy for my blog.

As I’ve mentioned before, one of the reasons I blog is to chat to distant friends and family about topics that currently interest me. So, I really do want people to write comments.

However, since I embraced Facebook, I now tend to have those sorts of chats there, and automatically copy my blog posts into my Facebook notes in order to let people chat, debate, disagree or laugh at me in that space. It’s more private, provides a simple notification of new posts / comments, and is a neutral space.

However, Facebook comment streams are a pretty awful way of conducting a conversation. The lack of a decent text editor, the difficulty in pulling more people into the discussion, and the inability to reply to a specific comment (only to the entire stream) make it less than ideal. And the whole thing is then “owned” by Facebook, which I don’t necessarily have a high degree of trust in. So, despite it being where most of my online conversations occur these days, I still resist making it the primary place where I put my posts.

Which means that many people I know have a choice of two places where they might respond to one of my posts: the blog or Facebook. A choice, importantly, that doesn’t extend to places where they might respond to others’ responses. If someone has commented in Facebook, the response can be lodged on my blog. This isn’t neat, but I can’t see a good solution that doesn’t also give up some of the benefits of Facebook.

That doesn’t mean one doesn’t exist! I think I need to look deeper into using something like Disqus or IntenseDebate for my blog comments.

But for those that do wish to comment on the blog, please do. I aim to exercise only light-touch moderation, and ask only that you respect other writers and the topic at hand.

Communications Industries

This is something that I’ve been mulling over for a little while, and I’m hoping that by posting it here, the discipline of having to write it down will clarify it for me. And, perhaps people reading this will comment on the ideas and help flesh them out a bit more.

A variety of industries have developed that deal with interpersonal communications at a distance. Each of them has crystalised around their own set of technologies. However, there seems to be a blurring of the boundaries between them these days, and in order to better understand what this means, it could be helpful to understand how the underlying technologies of these industries relate.

If you simply consider two properties of technologies used by the communications industries, it appears that you can draw some reasonable boundaries. The two properties I’m thinking of are privacy and continuousness, and by building a simple 2×2 matrix out of them, you get the following diagram:

Publishing, Messaging, Broadcasting and Telephony

Admittedly, the separation between Public and Private is a little fuzzy. Does sending something to 10 of your closest friends make something public? What about 100? What about 1,000? Does it depend on the friends? However, I think it is reasonably well understood that you can have private group communication, not just private one-on-one communication. I think we also usually know when we are engaging in something that is highly (publicly?) visible, and something that is private, despite the occasional exception.

And while the distinction between discrete and continuous communication ought to be unambiguous, it too has its fuzziness. A discrete communication is one where the communication itself has a clearly defined boundary, e.g. an image, a movie, a book, a song, or an article. While a continuous communication lasts as long as the communications channel is maintained, e.g. a TV channel, a radio channel, a telephone call, or a video conferencing session. Recordings tend to be discrete while performances tend to be continuous.

But, what if you watch a movie on the TV, or listen to a song on the radio? This is an example of convergence between the different industries. Once a movie or a song is published, the broadcast network can distribute it, although it loses some of its discreteness in the process.

Similarly, there is convergence between the broadcasting and telephony industries. A reporter can call into a news station, and have their story broadcast out to the viewers. Or when your call is put on hold, you may end up listening to a live radio station.

Convergence between telephony and messaging is well understood within the telco realm. You can call and leave a voicemail for someone, which becomes a message they can later retrieve. Or it might be transcribed and then delivered as a text message. Or it could be sent as an audio attachment in an email. Or, instead of leaving the voicemail, you could record an audio MMS and send that instead. And if an SMS is sent to a fixed phone that cannot display it, a phone call can be generated instead, with text-to-speech used to render the message audible.

However, what has become most interesting of all recently is the convergence between messaging and publishing. This blog post is an act of publishing, but I could have sent it via email to a few of my friends, and that would have been messaging. Instead of emailing it, I can make it available via Facebook. If it is made available only to my network of friends, it is messaging; but with the click of a button I can change my preferences so that Notes are public, and then it is publishing.

One of the interesting aspects to social networking for me is that some networks are designed primarily around a messaging model (like Facebook or Friendster) and others around a publishing model (like Myspace or Twitter). As the social networks all compete to offer similar features, the sharp lines between the models get very blurred.


How I stopped loving Twitter and embraced Facebook

Maybe this post is for geeks, but perhaps not. Twitter is a social networking site that allows you to send short messages to your friends and receive theirs in return. Facebook, I’m sure you’ve heard of. Twitter is much trendier, much more elite, than Facebook, but I’ve pretty much given it up these days. According to Tom Reynolds, I probably shouldn’t admit this, or I will be shunned by the cool kids. However, I feel that I should explain.

I signed up to Twitter reasonably early on for an Australian, starting in March 2007. I wasn’t a huge user – perhaps posting about once a week. But I was there for the first meet-up of the Melbourne Twitter Underground Brigade in June 2007. These days Twitter is pretty big, with some people estimating it has about 5 million users, and even the PM has been seen to use it. There are some pretty good features that it supports:

  • You can create an alias for people to message you on. You can give out this alias, rather than a “real” identifier for messaging (such as your email address), to protect your privacy a bit.
  • You can set up the alias to deliver messages to you in the form that you want, without the sender having to know anything about it. You can receive messages as emails, IMs or SMS, for example. And reply back in that form as well.
  • You can send messages to all your friends in one go. This is something that is not easily done with SMS, for example, without paying a fair bit of money, e.g. four times as much for four friends, etc.
  • You can publish messages so that anyone can come along later and read all the messages you sent out (and potentially join up as your friend).
  • It was extremely low cost (you only paid to send SMSs, in Australia at least, while all the messages you received were free).

Aside from the publishing aspect, which is rather interesting, these were all things that people have long been known to like. No wonder Twitter was successful. I found it an easy way to keep abreast of my friends’ moods and activities. Australians were the sixth-largest users of Twitter via SMS.

Then, in August last year, Twitter stopped delivering SMS to anyone outside of North America and India. They had to do this because it cost them a lot of money to send all those SMS messages, and they weren’t getting any commissions from the mobile operators outside of those regions. Since SMS was the main way I used Twitter, this pretty much cut me off.

Serendipitously, Facebook was undergoing a redesign around the same time. The new design was clearly influenced by the success of Twitter, and the status update field and feeds of friends’ status updates now appeared prominently.

My friends around the world have been gradually appearing on Facebook. Even those who don’t do geeky things. Even my parents. But where Twitter was like an open field where all were welcome to gather and communicate, Facebook is like a gated community or a private club – you are only welcome if you’ve been invited. Philosophically this is something I really hate. Sure, I like my privacy, but I don’t want to feel like I’m locked away. I don’t walk down the street hidden under a shroud, and I don’t think I need to be treated that way online. I occasionally blog. I post my photos to Flickr.

That said, I was forced onto Facebook in order to communicate with my friends who were using it to communicate with me. If I wanted to see their photos, they were on Facebook. If I wanted to see their mood or hear about their activities, it was on Facebook.

What changed is I got an iPod Touch, and installed the Facebook application. It was almost like using Twitter again, except even more people that I like were on it.

One day, Twitter might sort out a way to make enough money to pay for those SMS messages, bring back the features I want, and somehow attract all my Facebook-loving friends onto it. But until then, I’m stuck with Facebook, and despite my initial reluctance to join in, I find myself using it more often than I ever used Twitter.