Being on the technology side of the telco industry, it’s interesting to see how all the complexity of technological advances is packaged up and sold to the end user. An approach that I’ve seen used often is reducing everything to a single number – a metric that promises to explain the extent of technological prowess hidden “under the hood” of a device.
I can understand why this is appealing, as it tackles two problems with the steady march of technology. Firstly, a customer should not need to understand all the underlying complexity in order to make a buying decision – there should be a simple way to compare different devices across a range. And secondly, retail staff should not need to spend hours learning the workings of new technology every time a new device is brought into the range.
However, an issue with reducing everything to a single number is that it encourages the industry to chase a better score (in order to help gain more sales), even when increasing the number doesn’t necessarily relate to any perceptible improvement in the utility of the device. Improvements do tend to track with better scores for a time, but eventually they pass a threshold beyond which better scores don’t result in any great improvement. Reality catches up with such a score after a few months, when the industry as a whole abandons it to focus on another metric. The net effect is an industry obsessed with the metric of the moment, and these metrics change from time to time, often long after they have stopped being useful.
Here are some examples of the metrics-of-the-moment that I’ve seen appear in the mobile phone industry:
- Talk-time / standby-time. Battery types like NiCd and NiMH were initially the norm, and there was great competition to demonstrate the best talk-time or standby-time, which eventually led to the uptake of Li-Ion batteries. It became common to need to charge your phone only once per week, which seemed to be enough for most people.
- Weight. Increasing talk-time or standby-time could be accomplished by putting larger batteries into devices, but at a cost in weight. A new trend emerged to produce very light handsets (and even to quote weight measurements that didn’t include the battery). The Ericsson T28s came out in 1999 weighing less than 85g, but with a ridiculously small screen and keyboard (an external keyboard was available for purchase separately). Ericsson later released the T66, which had a better design and weighed less than 60g, but then the market moved on.
- Thinness. The Motorola RAZR, announced at the end of 2004, kicked off a trend for thin clamshell phones. At less than 14mm thick, it was about 1mm thinner than the T28s. Other manufacturers followed with models that shaved off fractions of a millimeter, but it all became a bit silly. Does it really matter if one phone is 0.3mm thicker than another?
- Camera megapixels. While mobile phone cameras initially had rather feeble resolutions, they have since ramped up impressively. For example, the new Nokia N8 has a 12 megapixel camera on board. However, it is hard to believe that the quality of the lens would justify capturing all of those pixels.
- Number of apps. Apple started quoting the number of apps in its iPhone app store soon after it launched in 2008, and it became common to compare mobile phone platforms by the number of apps they had. According to 148Apps, there are currently over 285,000 apps available for Apple devices. One might think that we have enough apps available now, and it might be time to look at a different measure.
In considering what the industry might look to for its next metric, I came up with the following three candidates:
- Processor speed. This has been a favourite in the PC world for some time, and as mobiles are becoming little PCs, it could be a natural one to focus on. Given that in both the mobile and PC worlds, clock speed is becoming less relevant as more cores appear on CPUs and graphics processing is handled elsewhere, perhaps we will see a measure like DMIPS being communicated to end customers.
- Resolution. The iPhone 4’s 3.5″ Retina display, with 960×640 pixels and a pixel density of 326 pixels per inch, was a main selling point of the device. Recently Orustech announced a 4.8″ display with 1920×1080 pixels, giving a density of 458 pixels per inch, so perhaps this will be another race.
- Screen size. The main problem with resolution as a metric is that we may have already passed the point where the human eye can detect any improvement in pixel density, so screens would have to get larger for improved resolutions to provide any benefit. On the other hand, human hands and pockets aren’t getting any larger, so hardware innovations will be required to enable a significant increase in screen size, e.g. bendable screens.
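Pixel-density figures like the ones quoted above follow directly from the resolution and the diagonal screen size: density is the pixel count along the diagonal divided by the diagonal in inches. A quick sketch (the function name is my own, and small differences from manufacturers’ quoted figures come down to rounding of the nominal diagonal):

```python
import math

def pixel_density(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixels per inch: diagonal pixel count divided by diagonal size in inches."""
    return math.hypot(width_px, height_px) / diagonal_in

# iPhone 4 Retina display: 960x640 on a nominal 3.5" diagonal
print(round(pixel_density(960, 640, 3.5)))    # ~330 (Apple quotes 326)

# Orustech panel: 1920x1080 on a 4.8" diagonal
print(round(pixel_density(1920, 1080, 4.8)))  # ~459
```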
But, really, who knows? It may be something that relates to a widespread benefit, or it may be a niche, marketing-related property.
The fact that these metrics also drive the industry to innovate and achieve better scores can be a force for good. Moore’s Law, which began as an observation about transistor counts in commodity chips, is essentially a trend relating to such a metric, and it has in turn resulted in revolutionary advances in computing power over the last four decades. We haven’t hit its threshold yet – the fundamental physical limits of chips – so it remains valid while the industry works to maintain it.
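The compounding behind that trend is easy to underestimate. As a back-of-the-envelope illustration, assuming the common formulation of transistor counts doubling roughly every two years:

```python
# Rough scale of Moore's Law over four decades, assuming a
# doubling of transistor counts roughly every two years.
years = 40
doublings = years // 2   # 20 doublings
factor = 2 ** doublings

print(f"{factor:,}x")    # 1,048,576x - about a million-fold increase
```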
However, it is really the market and the end customers that select the next metric. I hope they choose a good one.