Metric of the Moment

Being on the technology side of the telco industry, it’s interesting to see how all the complexity of technological advances is packaged up and sold to the end user. One approach that I’ve seen used often is reducing everything to a single number – a metric that promises to capture the extent of the technological prowess hidden “under the hood” of a device.

I can understand why this is appealing, as it tackles two problems with the steady march of technology. Firstly, a customer shouldn’t need to understand all the underlying complexity in order to make a buying decision – there should be a simple way to compare different devices across a range. And secondly, retail staff shouldn’t need to spend hours learning the workings of new technology every time a new device is brought into the range.

However, an issue with reducing everything to a single number is that it encourages the industry to chase a better score (in order to win more sales), even when a higher number doesn’t necessarily relate to any perceptible improvement in the utility of the device. Improvements do tend to track with better scores for a time, but eventually they pass a threshold beyond which better scores bring no great improvement. Reality catches up with such a score a few months later, when the industry as a whole abandons it to focus on another metric. The net effect is an industry obsessed with the metric of the moment – and these metrics are only replaced from time to time, long after they have stopped being useful.

Here are some examples of the metrics-of-the-moment that I’ve seen appear in the mobile phone industry:

  • Talk-time / standby-time. Battery types like NiCd and NiMH were initially the norm, and there was great competition to demonstrate the best talk-time or standby-time, which eventually led to the uptake of Li-Ion batteries. It became common to need to charge your phone only once per week, which seemed to be enough for most people.
  • Weight. Increasing talk-time or standby-time could be accomplished by putting larger batteries into devices, but at a cost in weight. A new trend emerged to produce very light handsets (and even to quote weight measurements that didn’t include the battery). The Ericsson T28s came out in 1999 weighing less than 85g, but with a ridiculously small screen and keyboard (an external keyboard was available for purchase separately). Ericsson later came out with the T66, which had a better design and weighed less than 60g, but by then the market had moved on.
  • Thinness. The Motorola RAZR, announced at the end of 2004, kicked off a trend for thin clamshell phones. It was less than 14mm thick – about 1mm thinner than the T28s. Other manufacturers came out with their own models, shaving off fractions of a millimetre, but it all became a bit silly. Does it really matter if one phone is 0.3mm thicker than another?
  • Camera megapixels. While mobile phone cameras initially had rather feeble resolutions, they have since ramped up impressively. For example, the new Nokia N8 has a 12 megapixel camera on board, though it is hard to believe that the quality of the lens would justify capturing all of those pixels.
  • Number of apps. Apple started quoting the number of apps in its iPhone app store soon after it launched in 2008, and it became common to compare mobile phone platforms by the number of apps they had. According to 148Apps, there are currently over 285,000 apps available for Apple devices. One might think that we’ve got enough apps available now, and it might be time to look at a different measure.

In considering what the industry might look to for its next metric, I came up with the following three candidates:

  • Processor speed. This has been a favourite in the PC world for some time, and as mobiles are becoming little PCs, it could be a natural one to focus on. Given that in both the mobile and PC worlds, clock speed is becoming less relevant as more cores appear on CPUs and graphics processing is handled elsewhere, perhaps we will see a measure like DMIPS being communicated to end customers.
  • Resolution. The iPhone 4’s Retina 3.5″ display, with 960×640 pixels and a pixel density of 326 pixels / inch, was a major selling point of the device. Recently Orustech announced a 4.8″ display with 1920×1080 pixels, giving a density of 458 pixels / inch, so perhaps this will be another race.
  • Screen size. The main problem with resolution as a metric is that we may have already passed the point where the human eye can detect any improvement in pixel densities, so screens would have to get larger to provide any benefit from improved resolutions. On the other hand, human hands and pockets aren’t getting any larger, so hardware innovations will be required to enable a significant increase in screen size, e.g. bendable screens.
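The pixel-density figures quoted above are easy to check: density is just the diagonal pixel count divided by the diagonal size. A minimal sketch (my own back-of-envelope calculation, not from the manufacturers):

```python
import math

def ppi(width_px, height_px, diagonal_inches):
    """Pixel density: pixels along the diagonal divided by the diagonal length."""
    return math.hypot(width_px, height_px) / diagonal_inches

# iPhone 4 Retina display: comes out near the quoted 326 ppi
# (the small gap is presumably down to the rounded 3.5" diagonal figure).
print(round(ppi(960, 640, 3.5)))     # ~330
print(round(ppi(1920, 1080, 4.8)))   # ~459, close to the quoted 458
```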

But, really, who knows? It may be something that relates to a widespread benefit, or it may be a niche, marketing-related property.

The fact that these metrics also drive the industry to innovate and achieve better scores can be a force for good. Moore’s Law, an observation about transistor counts in commodity chips, is essentially a trend relating to such a metric, and it has in turn resulted in revolutionary advances in computing power over the last four decades. We haven’t yet hit its threshold – the fundamental physical limits of chips – so it is still valid while the industry works to maintain it.

However, it is really the market and the end customers that select the next metric. I hope they choose a good one.

Social Media and the Laugh

When I was back in high school, one of my English Lit teachers used to say “A wise man laughs with trepidation”. He said it a lot. He also joked a lot. Perhaps he was warning us that Sex And Violence Fridays weren’t likely to be as funny to parents.

But anyway, he was right that with most humour, someone is the butt of the joke. Someone is being ridiculed, if only the joke-teller. But very often someone is being offended.

And this week, some people were so offended by Catherine Deveny’s postings on Twitter that her employer, The Age newspaper, decided to give her the sack. Now, I’m not so interested in whether her comments were offensive or not (since, almost by the very definition of humour, someone would find them offensive), but in what this example can tell us about communications in the age of social media.

Last year, Julian Morrow of The Chaser fame gave the Andrew Olle Media Lecture on a related matter. It was (and still is) a very interesting speech, and it outlined the concepts of a primary audience – the people a comedian is targeting their humorous content at – and a secondary audience – the people who discover the content after the fact. For example, the primary audience may watch your TV show, but the secondary audience may watch the highlights/lowlights of your TV show when they are rebroadcast on the nightly current affairs program.

In a world where anything can be discovered later on the Internet, e.g. via clips on YouTube, a specific Google search or even the Internet Archive, the secondary audience potentially consists of everyone alive now and everyone who may live in the future. It’s a given that, for anything humorous you’ve publicly released, there will eventually be someone who finds it and is offended by it.

I’ve previously tried to characterise communications technologies as either public or private. Twitter was classified as a publishing business, one that primarily aims to let communications be publicly disseminated.

I don’t know if Deveny’s Twitter followers at the time (her primary audience) were particularly offended, or whether it was in the wider group of social media users who discovered her tweets (the secondary audience) that the most offended people came from. Given that her humour is at the more offensive end of the spectrum, I’d expect her primary audience to be pretty thick-skinned. So, if it was the secondary audience’s reaction that resulted in her sacking, then this is likely to be a template for future problems for comedians.

Is it reasonable for a comedian to take into account the reactions of their secondary audience?

In an ideal world, perhaps not. But pragmatically, if it’s going to affect important things like their ability to pay a mortgage, then probably they will. However, the secondary audience in the world of social media and the Internet can be anyone who will ever live.

Is it even possible for them to foresee the reactions of this group?

Even in an ideal world, probably not.

I wouldn’t be surprised to see comedians move away from publishing platforms like Twitter and towards messaging platforms like Facebook (to use the classification scheme from my previous post). This would seem to be an approach for limiting the risk from the secondary audience.

I’m aware that there is plenty of publicly available, offensive material on Facebook, but here I’m talking about the ability to set up a private channel of communication to a select group of people, i.e. Facebook Groups. Of course, it’s up to Facebook as a business to determine if they want to host groups that non-group-members find offensive, but from the perspective of my argument here, this “messaging” functionality will exist somewhere (e.g. email lists) even if not within Facebook. I’m just using them as a contrasting example to Twitter.

Unfortunately, the clear downside of humour moving away from the public domain into private groups is that we can’t easily or accidentally discover a new comedian. In this brand new, Internet-connected world, we may find ourselves in the old, historical situation of comedians telling their jokes to audiences in (virtual) rooms. And people laughing, even if with trepidation.

Funds and Property

I’ve written about it before (“I am not a nutter” and “That’s not a Housing Affordability Crisis”), and I’m about to write about it again. Today I received a letter from my accountant (who, admittedly, is more savvy than the average accountant when it comes to property) confirming, and even encouraging, the purchase of geared property in a super fund. I quote:

If you have over $120,000 sitting in Superannuation you can now buy property through your superannuation fund … the SMSF makes the first installment of 20% deposit plus stamp duty/ legal costs plus the first year’s interest repayment.

And I have also come across a company called the Quantum Group that is setting up a similar structure for superannuation funds, calling them property warrants. So, there’s also an option for people whose accountants aren’t quite as savvy.

The residential property market has been performing quite well recently. For example, the average annual growth of median residential property prices in Melbourne over the last ten years has been 10.65% (according to this article, reporting Residex figures). If a property purchased at $450,000 (the current Melbourne median property price) grows at the average figure of 10.65% annually, and is purchased at a gearing level of 80% (as in the example from my accountant), then the growth in equity is considerably higher. Ignoring tax, rents and interest payments, the $90,000 invested would become equity of around $880,000 after ten years – that’s about 25% annual growth. Not bad, and hard for super fund investors to ignore.

I would expect that once superannuation funds start investing directly in residential property, the big players in Australian superannuation will want to address the demand by packaging up property so that it is easy to invest in, i.e. indirect investment in residential property, or funds of geared residential property that an SMSF can buy units in. The catch is that while the SMSF area is regulated by the ATO, the wider superannuation funds industry is regulated by APRA, and APRA is not going to want to see superannuation funds gearing up and putting people’s pensions at risk. The gearing cat is already out of the bag, though, so perhaps all they can do is cap it at a more conservative level of, say, 60% (which would have produced a return of around 18% in the example above).
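The geared-return arithmetic above can be sketched in a few lines. This uses the figures quoted earlier ($450,000 median price, 10.65% annual growth) and, like the example, ignores tax, rent and interest payments, and assumes the loan balance stays constant:

```python
def geared_cagr(price, gearing, growth, years):
    """Annualised return on the deposit, ignoring tax, rent and interest."""
    deposit = price * (1 - gearing)           # e.g. a 20% deposit at 80% gearing
    loan = price * gearing                    # assume the loan balance stays constant
    value = price * (1 + growth) ** years     # compound growth of the property
    equity = value - loan
    return (equity / deposit) ** (1 / years) - 1

print(f"80% geared: {geared_cagr(450_000, 0.80, 0.1065, 10):.1%}")  # ~25.6% per year
print(f"60% geared: {geared_cagr(450_000, 0.60, 0.1065, 10):.1%}")  # ~18.3% per year
```

The leverage does all the work here: the ungeared property grows at 10.65%, but a smaller deposit capturing the whole capital gain lifts the annualised return on that deposit to the mid-20s.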

It is worth considering what sort of property funds the industry would be looking to set up. Generally they look to the blue-chip end of the market, so in property this would be houses or whole apartment blocks (rather than individual apartments) and in well-established suburbs such as Hawthorn, Toorak and South Yarra in Melbourne, and their equivalents in Sydney and possibly Brisbane. Such property typically goes for multiple millions of dollars, but I would expect that people living in such houses would prefer not to rent them. I don’t really know – I’ve never been in that position myself! Innovation in rental / purchase contracts will probably be required to give residents of such houses the certainty, control, or capital gains that they require. However, where there’s money, there’s incentive to fix such problems.

So, initially, I expect to see the big funds going after apartment blocks, then eventually houses, then when supply is exhausted in the blue-chip areas, moving into neighbouring areas or the other cities in Australia. A side-effect of this staggered buy-up is that these funds may not be particularly diversified. There could be a “Toorak houses” fund, or a “South Yarra apartments” fund. It may not be a bad thing – it doesn’t matter if a particular fund is not diversified as long as someone’s overall portfolio is diversified. And it could enable people buying that type of property in that type of area to invest in something that tracked the investment performance of their dwelling without having to invest in (i.e. renovate) the dwelling itself.

Is this complete speculation, or have similar things happened overseas? Well, to be honest, no. Real Estate Investment Trusts (REITs), as they are known overseas, tend to invest in hotels, office blocks, shopping centres, and sometimes apartment blocks. Although I’m no expert, I’m not aware of big REITs buying up houses. So, this is all in the realm of speculation. But the fact that it hasn’t happened overseas is not an indicator that it won’t happen here, as Australia tends to lead the world when it comes to putting real estate into retail funds. According to Wikipedia, the first real-estate trust was launched in Australia in 1971.

Anyway, for the everyday investor, who can’t pony up a few million to buy a house in Toorak, the impact of competition for real estate from the major fund managers is likely to be limited. You’re more likely to be bidding against someone running an SMSF. Unfortunately, the number of SMSFs is growing rapidly.

Finally, one thing to watch out for will be unscrupulous operators. There are already dodgy property marketers who prey upon interstate investors, e.g. Perth people buying overpriced property in Melbourne, or Melbourne people buying overpriced property in Brisbane. This gives them one more tool to exploit: vulnerable people can now invest their super into a dodgy scheme, and possibly not realise for many years that the property they’ve bought was massively overpriced, because the whole thing is so hands-off. Hopefully people know not to invest in something they don’t fully understand. It’s a vain hope, I know.