Why Indigenous Australians are special

In Australia, we are about to vote in a referendum to change the constitution, to add an “Aboriginal and Torres Strait Islander Voice” to the list of government entities. We’ll get to vote Yes or No on the 18th (oops, I mean 14th) October, and it will be the first time in over 20 years that we’ve had the opportunity to do something like that.

I’ve had many discussions with people here about the Voice, and I will probably vote Yes, given that a majority of Indigenous Australians want it. The idea for it came out of the 2017 First Nations National Constitutional Convention, and had been preceded by many years of discussion of how to recognise Indigenous Australians in the constitution. The “Uluru Statement from the Heart” summarises the majority position of a large number of Elders from this convention, and includes the statement “We call for the establishment of a First Nations Voice enshrined in the Constitution”.

I am not going to present here an argument or evidence for why this should be supported. There are good analyses elsewhere. However, one of the things that has come up when I’ve discussed the Voice with others is that if the Voice is seen as a way of addressing disadvantage (which it is intended to be), and if Indigenous Australians are a significantly disadvantaged group (which they are), why should they get a Voice in the constitution ahead of other disadvantaged groups, e.g. refugees? Why should we call out a particular population in the constitution? In other words, why are Indigenous Australians special?

I may not be qualified to answer this. My school education in Australia was at a time when Indigenous Australians were not well covered in the curriculum. I do not have lived experience when it comes to Indigenous Australian communities. However, I have tried to educate myself. I’ve read all six books in the First Knowledges series, books by Stan Grant, Bruce Pascoe, and Bill Gammage, and even Indigenous Australia for Dummies. I have listened to the 2022 Boyer lectures by Noel Pearson, and I’ve visited many parts of Australia with Indigenous tour guides, trying to listen.

Despite that, I haven’t seen an answer to this question so far in the copious material flying around the Internet on the Voice referendum. Since the question seems central to the No case’s claim that the proposed constitutional change will create an unwelcome new division in our society, I’m going to give this a crack.

A first response is that this question is an example of Whataboutism, and raising the disadvantage of other groups doesn’t somehow disprove the need for Indigenous Australians to get better outcomes than they’ve gotten historically. Additionally, presumably all groups should get the support they need to address their disadvantage. It’s not an either-or. We should do better. However, I’ll take on the question as if it were asked sincerely.

Another response is that the question is backwards: that it is instead Indigenous Australians who make Australia so special. The roughly 60,000 years spent shaping and learning about the flora, fauna and geography of this country have helped make us what we are today. Since European settlement, Indigenous people have played a role in the success of early settlers, explorers and farmers. My grandmother was helped into the world by an Indigenous midwife, for example. While this is a valid response, I feel it doesn’t treat the question seriously.

I’ve come across two arguments for why First Australians are special enough to merit their own constitutionally-endorsed organisation: a legal one, and a moral one.

The legal one is essentially that they have unique rights that no-one else in Australia has, both recognised by the High Court and covered in Commonwealth legislation, but this uniqueness is ignored by the constitution. What is known as the Mabo Case was a claim of “native title” rights to the Murray Islands – part of the Torres Strait Islands, off the coast of Queensland – by Eddie Mabo and others. The claim succeeded because the people there had continued their traditional activities since before European settlement, and because the traditional laws and society that underpinned those activities were recognised. While no other population of people who have arrived in Australia since European settlement can claim this, it is not a unique situation internationally. For example, in Canada it is also recognised that Indigenous peoples there have rights that pre-existed any colonisation. Importantly, these rights don’t result simply from genetic lineage or “race”, but from being part of a society that has continued to exist in Australia for thousands of years.

The moral one is that Australian governments (both state and federal) have consistently passed laws to the detriment of Indigenous Australians, and are able to continue doing so because of an imbalance of power between those governments and Indigenous populations. Until Indigenous people have more say over what is done to them, the situation risks continuing. Some examples of Commonwealth government actions that targeted Indigenous Australians include:

Additionally, one legal expert has claimed that “Australia is the only industrialised nation that allows its parliament to make special detrimental laws for the Indigenous peoples of the land.” If so, Australia is not covering itself in glory here.

Guaranteeing a say about the stream of regular measures and laws that the Commonwealth government targets at Indigenous Australians requires something that is not entirely subject to that government. Previous entities that represented Indigenous interests (the NACC, ADC, and ATSIC) each managed to survive for a few years before being abolished by the Commonwealth. Having a new entity established by the constitution provides more balance and continuity in the relationship.

In conclusion, there is no new division here. Indigenous Australians are set apart from other Australians due to access to unique rights, and due to being uniquely and repeatedly targeted by Commonwealth government activities and laws. If the referendum succeeds, this will not change. But we can hope that other things change for the better.

Book Review – The Biggest Estate On Earth

I haven’t written a review here for ages, but I thought I’d write about this book to get some of my thoughts down about it. I just finished reading it during our holiday in New Zealand, and the contrast between a neighbouring country with a relatively recent human occupation (< 1000 yrs) and that of Australia was made even more stark through reading this book. For example, we visited Zealandia, a wildlife/eco-sanctuary which aims to provide a look at what New Zealand would have looked like before human habitation.  In Australia, where humans have been here for tens of thousands of years, what would such a project even mean?

The Biggest Estate on Earth

A historical analysis of the extent people managed the Australian landscape prior to European contact

The key controversy about this book is mentioned by Bill Gammage in his Appendix: that this is an application of the discipline of academic history to an area normally considered to be the domain of science – the Australian landscape. Accordingly, the book is dense with an overwhelming amount of source material that Gammage draws upon to support his analysis. This density made the book a bit of a chore for me to get through at times, and I maybe should have read just the first two and last two chapters, but the key insight is rewarding: that prior to European contact, people in Australia extensively managed the landscape to the extent we may even say that they “farmed” it.

As a historical text, the book draws upon both primary and secondary sources, with the primary sources being particularly extensive. Sources included writings from early explorers, surveyors, botanists, anthropologists, politicians, and farmers from across Australia, as well as paintings and maps from the time. A particularly interesting source for me was photographs of trees, which, due to their multi-hundred-year lifespans, are a form of documentation about what occurred in their vicinity during their lives.

I found the argument repetitive, but still convincing, and am happy to believe that across Australia by 1788, people broadly shaped the landscape to suit their needs for both animal and plant food sources, as well as for large gatherings. Early Europeans to see this landscape described it over and over again as a “park”. The main tool used by Indigenous peoples for shaping the land was controlled and timed burning, with fire being used on most days of the year as people moved across their country. Since European contact and settlement, such practices have ceased, and plant, animal and insect populations have changed as a result. While it isn’t possible to return to the landscape or landcare regimes of those days, the book highlights the knowledge that has been lost.

Rating by andrew: 3.5 stars
***1/2

Wrist Computers

At some point in the last century, a strange thing happened: people took something that they’d been happy to carry around in their pockets for centuries and started to wear it on their wrist. Why?

I have just bought myself a smartwatch, and it’s got me thinking about this. A smartwatch is typically what a 1980s calculator watch would be if someone invented it today, because that’s basically what 99% of them are. Not calculator watches, of course, but stick with me for a bit. In the 1980s, the most computing power an ordinary person could carry around in their pocket was a calculator, so people tried to put a tiny version of it on their wrist. These days, the most computing power an ordinary person can carry around in their pocket is a smartphone, so people are trying to put a tiny version of that on their wrists.

That said, you may not be too surprised to hear that the smartwatch I bought was part of the 1% that aren’t like that. It is a Withings Activité Pop, which is an analog watch that happens to also talk to my smartphone using Bluetooth. Withings isn’t the only maker of this sort of smartwatch, e.g. you can also get a Martian watch, which takes a similar approach to being “smart”. I expect other watchmakers will put chips in their watches and it will become pretty normal soon.

I am really loving my Withings smartwatch. It automatically updates the time when daylight savings changes or when I travel into a different timezone. It has a pedometer inside it, and shows me my progress towards my daily step target on a dial on the face. It also has a bunch of other features, and sometimes gets new ones for free, like tracking swimming strokes. But most of all, it looks good, is light on my wrist, and has a battery life of over 8 months. While these are expected features of a normal watch, they are rather novel in a smartwatch.

As a result, smartwatches haven’t really taken off yet in the way that, say, FitBit fitness trackers have. Is the smartwatch market destined for greatness or niche-ness?

Perhaps the history of the pocket watch has some relevant lessons, for which I will be drawing heavily on Wikipedia. The wearable watch was a 16th century innovation, beginning as a clock-on-a-pendant with only an hour hand. Some 17th century improvements brought the glass-covered face and the minute hand, and they became regularly carried in (waistcoat) pockets at this time. It took until late in the 18th century for the pocket watch to move beyond a pure luxury item.

Pocket watches continued to be the dominant form of watch, at least for men, until the late 19th century, when the “wristlet” (we know it better as the wrist watch) came along. The British Army began issuing them to servicemen in 1917, when synchronising the creeping barrage tactic between infantry and artillery was important and pocket watches were impractical. Reading the time at a glance was probably the first “killer app”, and by 1930, the ratio of wrist to pocket watches was 50 to 1. Within a couple of decades, the pocket watch had been completely disrupted.

While it was more convenient to read the time on a wrist watch than a pocket watch, it was also awkward to wear a heavy thing on a wrist, and in terms of fashion, the wrist watch was considered more of a women’s fashion item. In the end, World War I forced the issue, eliminating the fashion consideration, and the convenience factor overcame the weight problem.

Coming back to the present, UK mobile operator O2 published a report called “All About You” in 2012 that noted 46% of respondents had dispensed with a watch in favour of using their smartphone to check the time. It seems the greater utility of a smartphone has led people to forgo their watches, even if it means that time has gone back into the pocket.

So, there’s an argument that if the smartwatch provided similar utility to the smartphone, people would again shift from the pocket to the wrist. My Withings watch doesn’t in any way substitute for my smartphone, and is really a smartphone accessory. However, something like an LG Urbane Second Edition watch runs Android and has an LTE connection for calls and texting, and is more powerful than even a smartphone of a few years ago. Speech recognition can make up for the lack of keyboard entry, and a Bluetooth headset can enable private conversations.

However, economically a smartphone is actually a games platform: games dominate the revenues from apps on smartphones. Making the smartwatch a viable games platform may be required for it to replace the smartphone. Even in the 1980s, there were attempts to create games for the wrist, but they weren’t enormously successful compared to the Game & Watch (pocket) versions. Admittedly, there are games for modern smartwatches, but they drain the battery and aren’t the same calibre as smartphone games.

If we measure the period of the smartphone since 2002, when Nokia introduced Series60 handsets, it has been with us for 13 years. The pocket watch, from invention to disruption, lasted 400 years, but declined due to the rise of the wrist watch in the last 50 of those years. If the smartwatch disrupted the smartphone at the same speed, it would need less than 2 years.
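
As a rough back-of-the-envelope check on that claim, here’s a minimal sketch in Python that just scales the pocket watch’s decline phase proportionally onto the smartphone era, using the approximate dates and durations quoted above:

```python
# Scale the pocket watch's decline phase proportionally onto the smartphone era.
pocket_watch_lifetime_years = 400   # roughly: invention to disruption
pocket_watch_decline_years = 50     # the final stretch, once wrist watches arrived
smartphone_age_years = 2015 - 2002  # Series60 handsets to now: 13 years

decline_fraction = pocket_watch_decline_years / pocket_watch_lifetime_years  # 1/8
print(smartphone_age_years * decline_fraction)  # ~1.6 years, i.e. "less than 2 years"
```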

All I can say is: watch this space.

Facebook Movie Review

In the lead-up to having our second child, we are getting in some things that will be harder to do once there’s a newborn around. So, about a week ago, thanks to our baby-sitting neighbour, we got out to see a movie together.

We are fans of Aaron Sorkin’s oeuvre, with the box sets of both The West Wing and Sports Night in our TV cabinet. Since he’d recently been out here in Australia spruiking his new film, and it was the film’s opening night, the choice of what to see was pretty simple.

The Social Network

A tale of friendship and betrayal with a lot of geeky detail mixed in

This is a film that follows the Sorkin model. Sports Night had rapid-fire technical sport talk, West Wing had a thousand-words-a-minute political speak, and The Social Network has a firehose of geek speak and technical computer detail. But in none of those cases does this really limit your understanding of the plot; on the contrary, it at least makes you feel smart.

It also doesn’t hurt that the movie is well cast and acted, and dialogue is clever and humorous. Because it’s based on a true story, you already know how it will end – Facebook will be a success – but the tale isn’t about Facebook, so much as the interesting bunch of people who were around in the early days of the social networking website, and the roles they played in bringing it about.

Perhaps these characters are as much the social network of the title as the website. They are excellent fodder for Sorkin’s script and part of the enjoyment for me was in the fleshing-out of the characters as the film progressed.

While Mark Zuckerberg doesn’t get a particularly favourable presentation in the movie, his friend Eduardo Saverin is treated very sympathetically. Still, Zuckerberg is presented in a way that allows us to feel that we can almost understand him and what has driven him to become the billionaire and social media titan that he is today.

Another aspect that comes across well is the excitement and craziness that comes from being in a high-growth start-up. This is another thing Sorkin is good at capturing, whether it is the crazy cultures of the armed forces, top-tier politics or TV journalism. In this case, it helps explain the lure of why people would want to join a start-up (despite the high risk and long hours).

So, while this isn’t a truly great movie, it is a very interesting one, especially as the influence of the Facebook social network continues to grow in our lives. Getting a perspective on the early days of the service helps in understanding the changes Facebook is undergoing.

Rating by andrew: 3.5 stars
***1/2

Perhaps there was no Tulip bubble

I’ve been trying to understand what a financial bubble really means, and in the course of this, came across some interesting information: apparently the great Dutch Tulip bubble of the 1630s wasn’t a bubble after all. Although I am wary of merely summarising stuff that is written better elsewhere, I thought this was a good one to share.

Background

Tulips came to the Netherlands from Turkey in the 1500s, and became a popular status symbol. Multicoloured varieties were produced due to the effects of a plant-specific virus, and as a result (skipping the details), it would take at least seven years after planting the bulb of a spectacular tulip to produce new tulips from it. Understandably, possessing a spectacular tulip bulb gave you an advantage for quite a period of time before others could gain a similar bulb.

Due to rising prices, speculators entered the tulip market in 1634, and a more formal futures market in bulbs arose in 1636. By some accounts, offers for single bulbs reached insane levels, e.g. 49,000 m² of land for a bulb. In February 1637, the price of tulips crashed. In the centuries since, “tulip mania” has been used as a textbook example of crazy market behaviour with a boom and bust.

Why not a bubble?

If a bubble is where markets are caught up in “irrational exuberance”, then it appears that this market continued to be rational. And if a bubble is where a market eventually “pops” and the resulting crash causes widespread losses, then it appears that losses were not significant. Of course, it is hard to know for sure, since all of this happened almost four centuries ago, but there is apparently some evidence that:

  • In February 1637, the futures contracts all became options contracts, i.e. the purchaser of a contract to buy bulbs no longer had the obligation to purchase or take delivery of the bulbs if it looked like they would make a loss.
  • This change was known to be coming several months beforehand, encouraging purchasers of contracts to agree to high bulb prices with minimal risk. Taking this perspective, the contracts were rationally priced (see the sketch after this list).
  • Actual tulip prices (as opposed to the price of these futures contracts) remained at ordinary levels.
  • The Dutch authorities halted the trade in the contracts, and later decreed them unenforceable gambling debts. So, given that no tulips changed hands that winter (as the tulips would’ve all been in the ground) and the contracts were unenforceable, it’s not clear that there were any significant losses made.
  • Most of the support for claims of a tulip bubble comes from anti-speculation propaganda published soon afterwards.
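
To illustrate why the switch from futures to options matters, here is a minimal sketch in Python of the buyer’s payoff under the two kinds of contract. The prices and the cancellation fee are made-up numbers for illustration, not historical figures:

```python
# Buyer's payoff at settlement for a hypothetical bulb deal.
# A futures contract obliges the buyer to settle at the agreed price;
# an option lets the buyer walk away, losing only a small fee.

def futures_payoff(agreed_price: float, market_price: float) -> float:
    # Obligation to buy: the buyer wears the full downside.
    return market_price - agreed_price

def option_payoff(agreed_price: float, market_price: float, fee: float) -> float:
    # Right, not obligation: only exercise when it pays.
    return max(market_price - agreed_price, 0.0) - fee

agreed, crashed, fee = 5000.0, 500.0, 175.0  # hypothetical guilder amounts
print(futures_payoff(agreed, crashed))       # -4500.0: ruinous if prices collapse
print(option_payoff(agreed, crashed, fee))   # -175.0: the loss is capped at the fee
```

With the downside capped at the fee, agreeing to an eye-watering headline price stops being reckless, which is why the high contract prices can be read as a rational response to the rules rather than madness.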

Perhaps, then, there was no Tulip bubble. I guess all those textbooks need updating.

How the 80s became the naughties

One of the defining events of the naughties – the first decade of this millennium – was the global financial crisis, in which mortgage defaults in a few US states, leveraged many times over through the global financial system, brought about a crash in the world’s stock markets and a world-wide recession. But its genesis was in 1980s Wall Street, as chronicled by ex-Salomon Brothers employee Michael Lewis.

Liar’s Poker

Vulgar, incredible and fascinating take on 1980s finance

I listened to the audio book earlier this year, and Lewis’ tale blew my mind. Here was a person who, by rights, should not have been in that place at that time, as he didn’t have the traditional qualifications to get a job trading at Salomon Brothers, nor did he even interview for the job. Furthermore, instead of continuing to make wads of money, he chose to quit and write an account of it. Lastly, it was well written, in a very accessible style. The chance of all these things happening must have been minuscule – and yet they did.

If Lewis is to be believed, and he presents himself credibly, Wall Street in the 80s was inhabited by a bunch of racist, chauvinist cowboys who, through luck more than wisdom, and with a complete disregard for their customers, managed to make out like bandits. This is, of course, completely counter to the image that Wall Street projects of itself to its customers.

The story is part-memoir, part-history and part-ethnography. The author’s prior education was in art history, and his second career was in journalism; he picks out the threads that led to the particular situation that he was dropped into, as well as charting his progress through the firm and documenting its culture. It is a unique book, and truly fascinating, even if you don’t have a background in finance, let alone the bond market.

My rating: 4.5 stars
****1/2

Liar’s Poker

First-hand account of Wall Street’s cowboy culture and the rise of mortgage bonds

Most recently, Kate gave me a paper copy of Liar’s Poker, given how much I enjoyed the audio book. I quickly discovered that it was a rather different book.

Liar’s Poker was Lewis’ first book, and the text really does feel like it. For example, paragraphs feel like they are crammed full of information; in the audio version this wasn’t so obvious. Also, there is a great deal of background historical information in the middle of the book, which I found bogged down the interesting personal tale, and much of which was excised from the audio book version. The book would’ve benefited from more aggressive editing, and the audio book, being an abridged version, had effectively received this.

That said, it remains a compelling tale. All of the aspects that I liked in the audio book version, I still found in the paper version, although it was less focussed. Perhaps a reader who hadn’t experienced the audio book first wouldn’t have come to it with an expectation of a fast pace already set.

My rating: 3.5 stars
***1/2

History Repeats

When I heard that the name for the new variant of Vegemite was “iSnack 2.0”, it took me a little while to understand that it wasn’t a joke. The new variant is basically 30% Vegemite and 70% cream cheese, but apparently it deserved a revolutionary new name.

Although I am horrified at the thought of a breakfast spread that includes not only punctuation but numbers in its name, there seems to be historical precedent for this sort of crazy naming. Here’s what Wikipedia says:

  • Just like the new variant was named following a national naming competition, the original was named following a similar competition back in 1922.
  • Just like the new name is extremely derivative of popular products on the market, the original name was derived from the popular spread Marmite that was shipped to Australia from 1919.

Vegemite wasn’t always called Vegemite, though. From 1928 to 1935 it was sold as Parwill, in order to work with a marketing slogan of “Marmite but Parwill” (get it?). Obviously, the product name was changed back when the marketing didn’t work. So, if history continues to repeat, perhaps iSnack 2.0 will be given a less ridiculous name once the marketing people wise up. We can only hope.


Melbourne Chocolate

Melburnians seem to take their chocolate heritage for granted. I find it amazing, and while I still do, I want to jot it down here.

Both Melbourne citizens and Australians in general are fans of chocolate. According to IBISWorld, chocolate and confectionery in Australia is a $2.5b per year industry. If we look at Nielsen’s list of the top confectionery sold in convenience stores during the year to February 2009 by share of value, the top chocolate bars (candy bars, for US readers) were:

  1. Mars 2Pak 80g
  2. Snickers 2Pak 80g
  3. Cherry Ripe 85g
  4. Mars Bar 65g
  5. Twirl Bar Kingsize 63g
  6. Snickers 60g
  7. Kit Kat 45g
  8. Boost 80g
  9. Turkish Delight Twin 76g
  10. Cherry Ripe 55g

I’m listing these to highlight an interesting fact. However, we need to examine where each of these chocolate bars was invented:

Yep, the Cherry Ripe holds two of the top ten places for chocolate bar sales, and it was invented in Melbourne. (All the rest come from three places: US, UK and Ireland.)

Noted Melburnian Sir Macpherson Robertson (1859 – 1945) founded the MacRobertson’s chocolate company which, according to Wikipedia, was responsible for the Cherry Ripe, Freddo Frog, Bertie Beetle and Snack. The chocolate company was sold to Cadbury-Schweppes, and the Ringwood-based factory continues to this day. Sadly, they don’t offer any public tours.

The MacRobertson name no longer appears on the Cherry Ripe wrapper, but it does live on in Melbourne through a high school, a bridge, and the building of the National Herbarium of Victoria.
