Metric of the Moment

Working on the technology side of the telco industry, I find it interesting to see how all the complexity of technological advances is packaged up and sold to the end user. An approach that I’ve seen used often is reducing everything to a single number – a metric that promises to capture the technological prowess hidden “under the hood” of a device.

I can understand why this is appealing, as it tackles two problems with the steady march of technology. First, a customer should not need to understand all the underlying complexity in order to make a buying decision – there should be a simple way to compare different devices across a range. Second, retail staff should not need to spend hours learning about the workings of new technology every time a new device is brought into the range.

However, an issue with reducing everything to a single number is that it encourages the industry to chase a better score (in order to win more sales), even when increasing the number no longer relates to any perceptible improvement in the utility of the device. Improvements do tend to track with better scores for a time, but eventually they pass a threshold beyond which better scores bring no great improvement. Reality catches up with such a score after a few months, when the industry as a whole abandons it to focus on another metric. The net effect is that the industry is obsessed with the metric of the moment, and these metrics change only from time to time, long after they have stopped being useful.

Here are some examples of the metrics-of-the-moment that I’ve seen appear in the mobile phone industry:

  • Talk-time / standby-time. Battery types like NiCd and NiMH were initially the norm, and there was great competition to demonstrate the best talk-time or standby-time, which eventually led to the uptake of Li-Ion batteries. It became common to need to charge your phone only once per week, which seemed to be enough for most people.
  • Weight. Increasing talk-time or standby-time could be accomplished by putting larger batteries into devices, but at the cost of weight. A new trend emerged to produce very light handsets (and even to quote weight figures that didn’t include the battery). The Ericsson T28s came out in 1999 weighing less than 85g, but with a ridiculously small screen and keyboard (an external keyboard was available for purchase separately). Ericsson later came out with the T66, which had a better design and weighed less than 60g, but then the market moved on.
  • Thinness. The Motorola RAZR, announced at the end of 2004, kicked off a trend for thin clamshell phones. It was less than 14mm thick (about 1mm thinner than the T28s). Other manufacturers came out with models shaving off fractions of a millimeter, but it all became a bit silly. Does it really matter if one phone is 0.3mm thicker than another?
  • Camera megapixels. While mobile phone cameras initially had rather feeble resolutions, they have since ramped up impressively. For example, the new Nokia N8 has a 12 megapixel camera on board, though it is hard to believe that the quality of the lens justifies capturing all of those pixels.
  • Number of apps. Apple started quoting the number of apps in its iPhone App Store soon after the store launched in 2008, and it became common to compare mobile phone platforms by the number of apps they had. According to 148Apps, there are currently over 285,000 apps available for Apple devices. One might think that we have enough apps available now, and that it might be time to look at a different measure.

In considering what the industry might look to for its next metric, I came up with the following three candidates:

  • Processor speed. This has been a favourite in the PC world for some time, and as mobiles are becoming little PCs, it could be a natural one to focus on. Given that in both the mobile and PC worlds, clock speed is becoming less relevant as more cores appear on CPUs and graphics processing is handled elsewhere, perhaps we will see a measure like DMIPS being communicated to end customers.
  • Resolution. The iPhone 4’s 3.5″ Retina display, with 960×640 pixels and a pixel density of 326 pixels per inch, was a main selling point of the device. Recently Orustech announced a 4.8″ display with 1920×1080 pixels, giving a density of 458 pixels per inch, so perhaps this will be another race (the pixel-density arithmetic is sketched just after this list).
  • Screen size. The main problem with resolution as a metric is that we may have already passed the point where the human eye can detect any improvement in pixel density, so screens would have to get larger to provide any benefit from improved resolutions. On the other hand, human hands and pockets aren’t getting any larger, so hardware innovations such as bendable screens will be required to enable a significant increase in screen size.
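
For anyone curious about where those pixel-density figures come from, here is a minimal sketch (in Python, not part of the original post) of the arithmetic: density in pixels per inch is simply the diagonal resolution in pixels divided by the diagonal size in inches. Note that marketed diagonals are rounded, so the computed values differ slightly from the quoted ones.

```python
import math

def pixels_per_inch(width_px, height_px, diagonal_inches):
    """Pixel density: diagonal resolution in pixels divided by diagonal size in inches."""
    return math.hypot(width_px, height_px) / diagonal_inches

# iPhone 4 "Retina" display: the marketed 3.5" diagonal is rounded, so the
# computed figure lands a few ppi above the quoted 326 ppi.
print(round(pixels_per_inch(960, 640, 3.5)))    # -> 330

# The 4.8" 1920x1080 panel mentioned above, close to the quoted 458 ppi.
print(round(pixels_per_inch(1920, 1080, 4.8)))  # -> 459
```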

But, really, who knows? It may be something that relates to a widespread benefit, or it may be a niche, marketing-related property.

The fact that these metrics also drive the industry to innovate and achieve better scores can be a force for good. Moore’s Law, which began as an observation about transistor counts in commodity chips, is essentially a trend built around such a metric, and it has in turn resulted in revolutionary advances in computing power over the last four decades. We haven’t hit its threshold yet – the fundamental physical limits of chips – so it remains valid while the industry works to maintain it.

However, it is really the market and the end customers that select the next metric. I hope they choose a good one.

4 thoughts on “Metric of the Moment”

  1. As for hoping for a good metric, I’m not sure one exists, as they tend to be – if technically feasible – overshot anyway.

    Do you think any of the past single-figure metrics was a “good” one?

  2. They can be good for driving improvements in a particular area, before they overshoot. The example of a good one that I gave was processing power, i.e. Moore’s Law, where the metric in the PC space was basically clock speed for many years.

    I would also argue that the camera megapixels metric was a good one initially, as cameras on mobile phones are now actually decent at taking photos. There is a lot of utility in having the ability to take a photo wherever you are (assuming you carry a mobile phone).

  3. Having thought about this a little more, I think pretty much all of the past single-figure metrics were good ones. Initially. If they didn’t reflect some underlying benefit to the end customer, they wouldn’t have taken off in the first place.

  4. Sure, none of the things those single-figure metrics strive for are bad as such. But whenever an industry – any industry – gets myopic and focuses on a single metric, the other qualities suffer.

    Sometimes these tradeoffs are justifiable and make sense – few smartphone owners are willing to trade back to a 2G calls-only phone to gain a week of battery life instead of the day that is the standard now.

    Sometimes, however, they don’t – the RAZR, in Motorola’s quest for thinness, was a pretty horrible device in most respects. It became a fashion statement, not a usable mobile.

    It could, of course, be argued that technologies developed during the period of hyper-attention to the one metric trickle down to more sensible products over time, but it’s hard to put a value on this. Lots of interesting technologies have trickled down from the trillions the world spends on military technology, but has it been worth it, in the sense that the same results couldn’t have been achieved in a way that makes much more sense? I doubt it.

    But your last point – something must be beneficial or it wouldn’t take off – is a bit dangerous. I would agree that the features were _desired_ by customers – but not with the “benefit” part. Desire and benefit are two very different things.
