Archive for November, 2006
Posted by: Eric in Opinion
In part one of this two-part entry I pointed out that I am not a great fan of conspiracy theories. Hint: you may want to read part one before reading this. Trust me, you need to read it. Otherwise you may worry about people deliberately doing very bad things, when this post is all about people doing very bad things by accident. Unintended bad things, according to modern moral standards, are nowhere near as bad as intentional bad things, if you know what I mean. So you need to appreciate why I worry about well-intentioned goofs.
As I said before, I think conspiracy theories are daft. However, I do think people sometimes do things, sometimes as large groups, with unintended but important consequences. Those consequences may then be exploited by others in ways that had not been imagined at the outset. So Albert Einstein did not intend to revive the flagging career of Kiefer Sutherland, who at one low point considered joining the rodeo. But without the work of Einstein, Niels Bohr and the rest, Sutherland would be doing something useful like roping cows instead of pretending to be agent Jack Bauer looking for terrorist nukes in the TV show “24”.
So Microsoft is now using us all to gather data, in order to test its software. To do this, it has effectively developed a new paradigm for experimentation and gathering of results. Customers are effectively expected to test software – even if it goes beyond the beta stage – and are asked to permit automated feedback whenever something adverse happens. But in doing this, customers are not just helping Microsoft test software. They are also engaged in an altogether different kind of experiment, though the method of gathering data is equally sophisticated. In this experiment people do not just provide the data, they are its subject.
Science Fiction writer Isaac Asimov came up with an interesting idea in his Foundation stories. The essential premise was that a scientist, Hari Seldon, would conspire to manipulate the future, although this conspiracy was positive in that it was in the best interests of the human race. Seldon could do this using his scientific technique of “psychohistory” which was defined in the story as
“…that branch of mathematics which deals with the reactions of human conglomerates to fixed social and economic stimuli…
…Implicit in all these definitions is the assumption that the human conglomerate being dealt with is sufficiently large for valid statistical treatment…”
In other words, Seldon devises a statistical science which predicts the behaviour of very large groups of people in a way that cannot be applied to individuals. Of course, for Seldon to have generated a science of prediction, he must have amassed sufficient data to use as a basis for his calculations.
Okay, so that is science fiction. But you get my point. Right now, in a very crude way, many are trying to accumulate data about us. Some are consciously doing it with the intention of learning about our behaviour; others are doing it for other reasons. But just as Einstein’s theories started a chain of events that brings us to our current fears about the proliferation of nuclear weapons, so the mass of data accumulated about people invites research into predicting our behaviour. Such predictions may be for financial gain, or like Asimov’s fictional conspiracy for the good of all humanity, or (like the popular conception of conspiracy theories) for the good of the few at the expense of the many.
This generation is the beta generation. For the first time in history, people are able to come up with theories about the mass of human behaviour, and then actually test them out. The most basic example is to be found in marketing. To market successfully, you need to understand the market. And use that understanding to influence it. So marketing will be at the forefront of the experimentation (take a look at the depth of analysis here if you do not believe me). Traditional methods of gathering data, through questionnaires and surveys, will be increasingly replaced by the power of enforced gathering of data at the point of sale. The supermarkets have the power to track purchases and build up an understanding over time. Personal finance companies will have more and more data about someone’s credit history and spending habits. And on-line purchases will demand purchasers answer questions for the sole purpose of marketing.
Of course, you may assume that you are protected by some legal voodoo from anybody unreasonably analysing your data. If you are in the EU, forget about your rights under data protection legislation – for laws to work they have to be enforced by someone. And, thanks to global terrorism, governments have finally solved the problem of how much data is too much data for a company to keep. So, after wasting some taxpayers’ money debating its data protection directive back in the late 90s, the EU has since stepped up to the mark and reversed some of that by passing the data retention directive. For those of you not familiar, the basic idea of the data retention directive is that telcos must keep lots and lots of data about who made what call to whom, who emailed whom, that kind of thing, just in case governments need it to spy on their people…ahem, I mean protect good people from bad people. So, after some worrying in the 90s that companies might be tempted to keep too much data, now governments are bending over backwards to persuade or force (whichever works faster) those same companies to keep ever more. They are even going to give the telcos taxpayers’ money – though nobody is sure how much yet – to keep and store all that data, just in case it turns out handy for fighting terrorism, or serious crimes, or important things like tax evasion. You know, important things. Of course, nobody in the telcos would ever misuse that data – no no no no no no no no. The fact that a few years earlier the same Eurocrats were demanding increased powers over telcos to ensure that telcos did not hold on to unnecessary data is irrelevant. A few years ago, if telcos had data they could not be trusted, and governments had to protect the people from the evil that telcos would do with all that data. Now we have global terror. So telcos can be trusted with all that data. Obvious, really. Of course, if you are outside of the EU this is all irrelevant as you never had any rights anyhow.
If you are inclined towards conspiracy theories, then this should all be troubling. One relatively benign example of how technology increases the power for monitoring human behaviour is the use of virtual communities like Second Life for research. But it can get much cleverer than crude questionnaire-based methods of researching behaviour. As well as training computers to read number plates, in order to charge drivers for using congested areas, computers can be trained to read the actual behaviour of people. Telcos have used simple neural networks to try to spot fraudsters for a long time. The basics for this kind of approach were always in place – a high degree of automation, very large volumes of data representing behaviour all in a standard format, and enough financial incentive to make it worth investing in developing the technology. But sophisticated analysis of data and even neural networks are being used for far wider automated predictions of human behaviour – just see this list of abstracts from an academic conference dedicated to fighting crime.
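The fraud-spotting idea lends itself to a simple illustration. The sketch below is a toy anomaly check of the kind a telco might run over call records; every name, figure and threshold in it is invented for the example, and real systems use far richer models (like the neural networks mentioned above) rather than a crude z-score.

```python
# Toy behavioural anomaly check: flag a subscriber whose spend
# today deviates sharply from their own history. Purely
# illustrative - names and thresholds are made up.
from statistics import mean, stdev

def flag_unusual_spend(daily_spend, history, z_threshold=3.0):
    """Return True if today's spend is a statistical outlier
    against this subscriber's own past behaviour."""
    if len(history) < 2:
        return False  # not enough behaviour on record to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return daily_spend != mu
    return abs(daily_spend - mu) / sigma > z_threshold

# A subscriber who normally spends about 5 a day suddenly spends 80.
history = [4.8, 5.1, 5.0, 4.9, 5.2, 5.0]
print(flag_unusual_spend(80.0, history))   # flagged
print(flag_unusual_spend(5.05, history))   # within normal behaviour
```

The point is not the statistics but the precondition: this only works because the behaviour is already captured as large volumes of standardised data, which is exactly the situation described above.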
The recent Vodafone-Yahoo advertising deal gives an example of the increasing opportunities for assimilating data about people. Supermarkets know what you buy, but not much else about you. But suppose you know what someone buys (because you are monitoring on-line purchases) AND their movements (because of their mobile phone) AND who their friends are (because you know which numbers they call, who they email, or their activity on social networks). You know an awful lot about that person. And the potential uses are extensive. So, imagine someone appears to be a “net promoter” of a particular product, in that they would promote it to their friends. If you know Rachel is a net promoter, why wait for her to recommend the product to friends? Why not just target an offer to Rachel’s friends straight away, mentioning how happy Rachel is with her purchase. Now this could backfire – perhaps the friends will be none too happy with Rachel. But then again, if you give Rachel a credit for every friend that buys, and each friend gets an exclusive discount too, then maybe everybody feels like a winner.
Anyhow, I do not work in marketing, so maybe my scenarios are daft or maybe they make sense. But the important thing is that it will, in theory, not be necessary to speculate any more. You can try out different marketing models and steadily learn which are the ones that work best. So it seems we are destined to be increasingly manipulated, or better understood, depending on how you look at it. But my bet is that the future will not work out that way. Not because they will not try to sift through the data and test us all. But for a few very important reasons that none will care to admit.
First, the data that will be gathered will often be garbage, and chances are very few will realise how bad it is. If you read part one of this two-part entry, you got my point of view on how clever people are. They think they are cleverer than they really are. So in practice, they make mistakes, especially when there is no straightforward feedback to highlight those mistakes. Poor data is very hard to identify, if you have no source of information that highlights poor data, and no simple way of testing it. Let me give you some examples of why a lot of data will be garbage.
This week I booked a flight with Easyjet. Rather annoyingly, the jolly fat Greek’s airline portal forced me to click a box stating where I was staying at my destination. I was given no choice – either I clicked one of the options or I would not be able to book on-line (so would have had to pay more to book over the phone or else fly with another airline). The point of gathering the data is clear – to understand the kind of customer I was and the potential to sell accommodation. So I did the only sensible thing. Which was to give a dishonest answer. But instead of picking an answer at random, I made sure I picked the answer that was most misleading. Instead of clicking the box saying I was going on business and staying in a hotel, I clicked the box saying I was visiting friends and staying at their home. That way I also minimised the risk that the jolly fat Greek will spend money on yet more unwanted advertising aimed at me.
Of course, it did not stop there. Next thing I did was book my car park space at the airport on-line. This required me to explain how I knew the car park existed. So I lied again. This time I rationalised that the best lie would be to say it had been a recommendation by friends.
Even supermarkets cannot totally rely on their data about who buys what. A letter printed in the Financial Times last week tells a funny story about a customer shopping at Tescos without having one of those ubiquitous loyalty cards. What does the checkout person do? The checkout person generously offers the points – and unrelated data – to the next customer in the queue, who gladly accepts.
But not all data will be garbage. If you pay for something, hopefully the record of what is bought is right. When mobile phone companies record your location, chances are they will be right. One goal of enhanced 911 in the US is to pinpoint cellphones to within 50-300 metres. This may be a boon for emergency services, but it is also a potential boon for tracking the location of people. So if the data is correct, then the next problem becomes how to use the data to make predictions.
Making predictions is in our nature. Whether it be the biblical interpretation of dreams by Joseph, the rhymes of Nostradamus, the stories of hard science fiction writers like Asimov or Clarke or the predictions of professional futurologists like BT’s Ian Pearson (yes, Ian Pearson is for real – though I agree he is more like a spoof), there seems to be an eternal and unquenchable desire for prediction. So the question will be how predictable we are. My hope is that we are like the climate, where the facts can be so readily disputed by experts like Danish scientist Bjorn Lomborg. Many predictions down the ages turned out to be very wrong, and not just by weather forecasters. The aforementioned Isaac Asimov was known as a “hard” science fiction writer because he was a proper scientist who tried to extrapolate from proper science, but he predicted we would have invented positronic brains for walking, talking robots, and populated 50 planets before we would have managed artificial insemination – so he was just a little wrong there. And in Asimov’s Foundation, the character Seldon’s main goal was just to compile a source of all human knowledge – the Encyclopaedia Galactica. Had Asimov predicted the rise of the internet, he would have realised that Wikipedia was going to get there first :)
But the power to change is not dependent on making accurate predictions. Al Gore making a film about the environment is more influential than Bjorn Lomborg writing a book. Arguably Futurama making a cartoon about global warming is more influential than proper academic research. Even father-and-son Rupert and James Murdoch take global warming seriously, screening Al Gore’s film (and getting Al to come along and talk too) to News Corp execs. [The fact that the Murdochs could make Attila the Hun seem like a hippy should make it very tough for the beardy shagger Richard Branson – who finds himself outflanked by the Murdochs both commercially and ethically]. The Y2K bug may have turned into an anti-climax, with no planes falling out of the sky and everyone’s elevators still working fine, but it still motivated enormous change. Karl Marx may have inspired many changes around the world, but pretty much everyone now has given up on the worldwide communist revolution that Marx argued was inevitable. Thomas Malthus was wrong that population growth would outstrip food supply, but his name is still known because so many thought it credible. The “new paradigm” of the dotcom boom evaporated along with a lot of paper profits. Confident predictions of WMD in Iraq were shown to be unjustified. So enormous changes can be inspired by bad predictions. And predictions will be accepted as fact if argued for persuasively by people in authority, even if their reasoning or data is flawed. That is the bit that really scares me. Responding to accurate predictions makes sense even if it is creepy, but responding to invalid predictions is far more dangerous: they may be harum-scarum nonsense, generated by self-serving experts who make guesses only to generate rewards for themselves. Disraeli had good reason to warn about “lies, damned lies and statistics.” With all this data being accumulated, the potential for bad, but persuasive statistics is great.
A prediction does not have to be right to have profound consequences. The current gathering of lots of data may motivate people to believe that humans are more intelligent than they really are. It is tempting to think that a poor conclusion is right, so long as it is supported by lots of data. With the current beta testing of theories being performed across the whole human race, premature conclusions may be hard to argue against. Worse still, if the data is available only to the elite, like the world’s governments, how will the rest of us be able to determine when their opinions are reasonable, and when not?
Posted by: Eric in Opinion
I am not a great fan of conspiracy theories. All conspiracy theories posit that a malign group of powerful people intelligently manipulate events to attain a particular goal. The conspiracy is kept a secret from the rest of society. I think that is naive. Very few people are that intelligent. And if they were, why would they waste their time doing the things that conspiracy theories are usually meant to be about? Most importantly, intelligence is probably a serious obstacle to achieving power. Just take a look at some of the U.S. Presidents in the last 30 years: Ford, Reagan, and George W. Bush. Al Gore is clever enough to write books that are actually about things other than himself. He made a PowerPoint slide presentation about global warming that was so good that they turned it into a movie that went on general release. Pretty smart, if you ask me. Gore was always going to come second to a man like George W. Bush. Bush has no choice but to keep his messages simple, because Bush is simple. Gore instead came across as wooden and hard to like. The painfully obvious difference in intellect probably helped Bush more than Gore. Simplicity beats intelligence most of the time.
[But I suppose the conspiracy theorists would argue that Presidents and the like are not the ones really running the show. Better save that debate for another time.]
One of the biggest problems with conspiracy theories is that they are usually so convoluted that it takes lots of intelligence and persistence to get to the end. And after all that you realise it was just nonsense and you wasted your time showing any interest to begin with. So conspiracy theories are ultimately not very rewarding unless you want to suspend disbelief and live in a fantasy. All of this is going to make writing this post very hard, as it looks a bit like a conspiracy theory, but is not, although it is a complex theory. A very complex theory, and about complexity. And about how to keep things simple. So to keep it simple this post is part one of two parts. Part one makes sense on its own. But part two is the really interesting bit. You just have to read part one first to make any sense of part two. So here goes with part one…
You may have noticed that lots of businesses are offering you lots of software these days. For free. On one condition. You test it for them. They call it “beta testing”. Another way of describing it would be “not sure how well this works yet” testing. Microsoft is no longer sure that releasing betas in the traditional way works that well – see here. But most software businesses do it. And betas are very popular with customers. Most of the popular new communications software gets high-profile beta releases: Hotmail, Messenger, Skype, Googlemail. Everyone does it. But even when you buy software, the testing never really ends. There is only one difference between a customer doing beta testing on Microsoft software and a customer clicking the box to email Microsoft with an error report when their paid-for software crashes. The difference is Microsoft does not want to call the latter “testing” because supposedly the software has been tested already. But all Microsoft did was to extend the idea of experimentation (aka testing) to its natural conclusion: treat all life as an experiment, treat all truths as contingent hypotheses, and then just get on with the real work of gathering as much data as possible to verify or falsify the hypotheses. I am sure the philosopher Karl Popper would have approved of this method. Continuously look for bugs on the assumption that even if the software looks like it is error-free, you never know for sure. This increased sophistication in allowing room for doubt is a positive thing, scientifically speaking. It is part of the reason why people started to talk about Einstein’s Theory of Relativity, when they used to talk about Newton’s Laws.
The troubling thing about needing to take a contingent approach to verifying software is that, if we cannot reach a definitive conclusion on whether software works correctly in a practical timeframe with a sensible level of resources, what chance do we have of verifying anything else complicated works properly? Okay, so software code may be complicated, but ultimately it is finite and mathematical. A line of code does the same thing each time it is executed; it is perfectly predictable. There may be very many, but ultimately there are only a finite number of logical sequences that could be executed in software in a given period of time. You could, in principle, execute every possible sequence within a period of time and so verify with certainty that it works correctly in all cases. But doing all that testing would be very slow. And costly and boring. So instead, testing involves having a reasonable go at checking that the main components work okay and then putting them together and seeing if everything works together okay for a while and then letting the customer have a play to see if they can find something wrong.
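To put a rough number on “only a finite number of logical sequences”, consider a deliberately simplified model: a program that is nothing but n independent two-way branches executed in sequence. The figures below are back-of-envelope illustration under that assumption, not a claim about any real program, but they show why “in principle” exhaustive testing is not “in practice”.

```python
# Why "execute every possible sequence" is impractical: in this toy
# model, a program with n independent two-way branches executed in
# sequence has 2**n distinct paths.

def paths(branches):
    """Number of distinct execution paths in the toy model."""
    return 2 ** branches

print(paths(10))  # 1024 paths: testable
print(paths(60))  # over a billion billion paths: hopeless

# Even executing a million paths per second, covering 2**60 paths
# would take tens of thousands of years.
years = paths(60) / 1_000_000 / (60 * 60 * 24 * 365)
print(round(years))
```

Real programs are not independent branches, of course, but the exponential shape of the problem is the same, which is why testing settles for “a reasonable go” rather than certainty.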
The message for revenue assurance is pretty plain. Everybody who ever claims to measure revenue loss is wrong. And always will be. And estimating loss is no better than using folklore to predict the weather. To measure revenue loss with absolute certainty you would need to know you were monitoring the outcome of every possible sequence of logical paths that might be involved in processing the data in a transaction. That would mean effective checks relevant to the execution of every line of software in every device from the network to the bill. And then some. Because losses involve much more. They involve the interaction of the software between systems (are the rules by which data is output from one system actually consistent with the expectations for the input into the next?) and physical and environmental factors (what happens if someone cuts the power to one of the systems and there is no failover? what happens even if there is a failover?) and we should not forget that, in most cases, there is also some processing done by humans. At the very least a human being is going to be involved in typing in reference data (more than one person has got the decimal point wrong when entering a new rate) and in writing the words to explain the charges to customers (the calculations described by those words need to be mathematically identical to the calculations performed in practice). So the best any revenue assurance department could come up with is a contingent theory about loss. And that means the search for counter-examples must go on indefinitely. Which is rather a nuisance for revenue assurance people wanting a promotion ;)
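The between-systems question above (“is the data output from one system actually consistent with what the next system ingested?”) boils down to reconciliation. Here is a minimal sketch of that idea, with invented record names and amounts rather than any real mediation-to-billing interface:

```python
# Toy reconciliation between two stages of a billing chain.
# Field names and figures are invented for illustration.

def reconcile(upstream, downstream):
    """Compare two dicts of {call_id: charged_amount}.
    Returns call IDs missing downstream, and IDs where the
    two systems disagree on the amount."""
    missing = [cid for cid in upstream if cid not in downstream]
    mismatched = [cid for cid in upstream
                  if cid in downstream and downstream[cid] != upstream[cid]]
    return missing, mismatched

mediation = {"c1": 0.30, "c2": 1.25, "c3": 0.07}  # what was rated
billing   = {"c1": 0.30, "c3": 0.70}              # what was billed
missing, mismatched = reconcile(mediation, billing)
print(missing)     # ['c2'] - a call that was never billed (leakage)
print(mismatched)  # ['c3'] - billed, but at the wrong amount
```

Note how little this proves: it catches disagreements between the two systems, but says nothing about calls both systems dropped, or amounts both got wrong in the same way, which is exactly why any measurement of loss stays contingent.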
I once wrote a paper explaining some theory and practice for metering and billing testing for T-Mobile UK. The people still working there must have forgotten about it because the original version is still up on their corporate website unchanged – even the spelling error in the URL is the same (mistakes happen everywhere). I mention it because writing the paper was a mistake. I thought I was pointing out some obvious and useful things. For example, I wanted to point out the only way you could really really be sure that a bill was accurate was to treat the whole business as a black box. You take the tariff documents that get published, then set up some services and make some calls. Finally you check that the bill was consistent with what the tariff document said and the services you received. Simple and fool-proof. And you do not need to know anything about how things work in the business in order to do it. The difficulty with that approach is plain: it would be an awful lot of work to really get confidence this way. But if you executed all varieties of calls and services at all times and locations etc etc you would eventually execute all logic paths. I contrasted that certainty of conclusion with the likely compromises that most would make in testing bill accuracy – which is to break up the tests into piecemeal components. Breaking them up makes it easier to focus on certain kinds of possible problems, but only at the cost that you totally fail to capture some kinds of error through your testing. In other words, you end up like Microsoft – you trade certainty in exchange for being more cost-effective. You anticipate what might go wrong, and check for that. Sometimes you will miss something but it is a lot less work overall. But writing the paper backfired. It backfired because (a) probably no customers ever download and read this document, and (b) it upset the firms supposed to independently audit things like bill accuracy on behalf of customers.
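The black-box approach can be caricatured in a few lines: rate your own test calls from the published tariff alone, then compare against the bill that arrives. The tariff and figures below are invented for illustration; a real tariff has vastly more structure, which is exactly why full coverage is so much work.

```python
# Toy black-box bill check. The only inputs are the published
# tariff and the test calls we made ourselves - no knowledge of
# the billing system's internals is needed or used.

TARIFF = {"peak": 0.20, "offpeak": 0.05}  # invented price per minute

def expected_charge(calls):
    """calls: list of (band, minutes) pairs for our own test calls.
    Returns the charge implied by the published tariff."""
    return round(sum(TARIFF[band] * minutes for band, minutes in calls), 2)

test_calls = [("peak", 10), ("offpeak", 30)]
expected = expected_charge(test_calls)   # 3.5, from the tariff alone
billed = 3.55                            # what the bill actually said
print(expected == billed)                # False: a discrepancy to chase
```

The check is fool-proof for the calls you actually made; the catch, as argued above, is that confidence only grows with the (enormous) variety of calls you are willing to make.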
Pointing out how much work would be involved to get certainty, and the real-life risks involved when deciding to compromise certainty for cost-effectiveness, only upset the audit firms profiting from the work, especially as it was their job to be the clever people who would understand how to avoid mistakes. So they wrote a guide for bill accuracy approval that just said the opposite of what my T-Mobile document did. In other words, it said that following a complicated approach relying on human intelligence was less likely to be flawed than taking a simple approach which minimises reliance on human intellect. Now the document has got the regulator’s name on the front so doubtless customers can read it and rest assured that lots of super-intelligent people are protecting their interests and not making any mistakes whilst doing so. Much better than trusting an error-prone dullard like me, I am sure :(
You can be pretty sure that if Microsoft just eventually gives up and hands over software for its customers to de-bug, then there is no revenue assurance team in the world that is not doing effectively the same thing. However, unlike Microsoft, many in revenue assurance are a bit silly. The responsibility is handed over to the customers but then there is a failure to listen to the customers’ feedback. By feedback of course I mean the complaints the company gets about its accuracy. Most complaints may be nonsense, but if the revenue assurance department is not monitoring the valid ones, it is losing a vital source of data. But as I say, highlighting errors that revenue assurance missed only to be picked up by customers may not be the best way to get a promotion ;) This is another example of the supposed intelligence of the masses, in this case finding flaws that are not spotted by the “experts”. But understandably it takes a certain kind of expert to be willing to learn from mistakes. Other experts might feel accepting mistakes undermines their authority. Which is ironic for revenue assurance – a discipline that is itself a response to human fallibility.
To avoid mistakes you have to have an open mind about your own fallibility. In other words, you have to accept that you will make mistakes in order to reduce the chances of making mistakes. But believing in fallibility just means that any belief may be shown to be wrong. There is an old philosophical contradiction that illustrates the problem. It can be best stated in terms of a conversation between two people:
“Are you always right?”
“No, of course not.”
“So sometimes you believe things that turn out to be wrong?”
“Yes, I suppose so.”
“So which of your current beliefs are untrue?”
“I do not know….”
At one telco I was lumbered with responsibility for a new revenue assurance system. The purchase had been made just before I started working for the telco. After implementation the tool kept producing reports that said there were errors in how the bills were calculated. So I drew the obvious conclusion. When I told people my conclusion, the response was they did not like my conclusion and I should change it. My conclusion was very simple. The revenue assurance tool was wrong. You can imagine how much the vendors of that system liked that conclusion. And it was not that popular with the rest of revenue assurance either. Nobody was keen to admit that lots of money had been spent on a tool that did not work properly. So there was a lot of pressure to chase around the business and try to validate if the supposed errors were in fact real. But my reasoning was simple, so in my usual stubborn fashion I ignored what everyone was telling me to do. The revenue assurance tool was cheap, unproven, new, and had not been tested much. In contrast, the systems it was being used to test were expensive, old, long-established, proven and would have been tested many many more times, not just in our telco but in others too. So I put effort into finding out what was wrong with the revenue assurance tool, until the fault was found and corrected. Admitting to a faulty revenue assurance tool was inconvenient. But it would have been more inconvenient to admit the truth only after wasting a lot of people’s time chasing phantoms in other systems that were actually working fine. Of course, it would defeat the point of revenue assurance if you always assumed the revenue assurance test was flawed. But if you do not apply appropriate levels of scepticism you will waste a lot of time before you get to the truth, and it will be a lot more painful to admit the truth when you do eventually get there (if you ever do). 
So the message for revenue assurance teams is plain: doubt yourselves at least as much as everyone else. And make sure you keep on looking for evidence of your own failings. Some of the people who work in revenue assurance go into the role because they like to check on other people. It makes them feel superior. But the price of doing that job properly is self-doubt: they need to check on themselves just as much.
So did Al Gore learn from his mistakes? Probably. He was seen as overly wooden. His intelligence was as much a liability as an asset. So making fun of himself is positive and humanises his intelligence. What could be better for Gore than to appear on the TV cartoon series “Futurama” as a head in a fishtank making fun of his own books and environmental beliefs? He then takes the cartoon explanation of global warming in that show, and uses it in his (serious) documentary “An Inconvenient Truth”. And then Gore teams up with the Futurama crew to make a (funny) trailer for his (serious) film. The success of his film has even seemingly resurrected his prospects of standing for President again. This is a man joking that he used to be the next President of the United States. Maybe jokes like that get the US electorate to laugh him all the way into the White House. And the ability to tell a joke at your own expense cannot harm if the biggest challenger for the Democratic nomination is humourless Hillary Clinton. A good example of learning from past mistakes, as well as keeping things nice and simple.
Okay, lecture over… for now. End of part one. I divided this into two posts because if you do not want to believe people are fallible and being overwhelmed by the complexity and volume of data they receive, and they sometimes lack the ability to be self-critical in a way that may counter this problem, then you sure as heck are not going to want to read what I put into part two. So those guys in the audit firms can stop here. But if you are as cynical as me, read on….
Posted by: Eric in Opinion
I am going to a couple more conferences this year, which means I have been struggling to find ways to stay awake whilst hearing people say the same things about revenue assurance that they said last year. And the year before. And the year before that.
Then it occurred to me in a flash: bingo. Select five words or phrases from the list below at random at the start of a presentation, and cross them off if the speaker uses them! If you hear all 5 you can claim your prize at the end of the presentation by asking the speaker the following question: “surely what you just said was just a lot of meaningless slogans and buzzwords strung together in a moderately grammatical way?” whilst holding your scoresheet above your head as proof. Here’s the list:
> end-2-end (sic)
> revenue management
> next generation
> value chain
> best practice
> margin enhancement
> revenue operations centre
> virtual team
If you have caught a speaker using all 5 phrases in the same sentence, I suggest you click the link below and add it as a comment. Presentation titles also count. You know the kind of thing: “Benchmarking the evolution to proactive revenue management within a mature operator by leading optimization of the next generation value chain”.
(I quite like that title. I think I may use it myself for my next presentation.)
I will take all the best entries and create a special mock revenue assurance conference presentation to celebrate. Then give it at the next conference I go to. In other words, I am short of ideas and this is all I can come up with.
Posted by: Eric in Opinion
I was never that attracted to cyberworlds like Second Life. I have enough trouble finding time to manage everything and then relax in my first life without taking on a second life too. But probably I was missing the point. If you think like IBM, then the point is to get people to work together in a virtual office environment – read this. Forget teleconferences and videoconferences: they are virtualconferencing in SL. The thinking is obvious – to get people to relate to each other as they do in the real world, except via their virtual avatars. And whilst the environment is virtual, the work is very, very real. I have blogged before on how telcos fly people around instead of consuming their own remote communications products. And as I get older, excuses to stay home and escape the commute appeal more and more. So forget this blog – I shall have to set up a virtual speakers’ corner to rant from in SL.
But as you may have noticed, there is more than the real-world economy at play in SL and its ilk, because its fantasy economy is booming too. I am not just talking fantasy in the sense of the fantasy world of advertising and publicity (everyone from Toyota to Duran Duran is jumping on the virtual promotional bandwagon). So to build a speakers’ corner I will have to buy some virtual land – with my real money. Of course, a virtual economy like this should be a zero-sum game as far as real-world economics is concerned. Ignoring any currency movements between virtual currencies and real currencies, the total worth of a virtual economy should be equal to the real-world value stored in it. Assuming the real world is rational. But it is not. Which is why governments of the world, easily drawn to any source of tax revenues (or at least to stopping ways of avoiding them), are asking themselves how to tax these virtual economies. And it is why one player in the Entropia “universe” paid $100,000 (in the real universe) for rights to a virtual “space station”.
Economist JM Keynes pointed out that people often invest in things not because they think they have any value, but because they think other people think they have value. Or because they think that other people will think that other people will think they have value. And so on. So if somebody else thinks a virtual space station is of value, who am I to argue? Keynes was thinking of the great Wall Street Crash of 1929, but you wonder if people ever learn. Tech-savvy punters should probably be familiar with the idea that not everyone got rich out of the flood of money in the dotcom stock market mania, though I personally would go further. I think that the so-called market economies of virtual worlds have more similarities with Albanian pyramid schemes than anything else. The great thing about running a virtual world, and the bad thing about investing in one, is that the people running the world could just cash it all in whenever they like. Which means investors would have no way to get their money back. This is what it says in the SL terms of service:
You acknowledge that the Service presently includes a component of in-world fictional currency (“Currency” or “Linden Dollars” or “L$”), which constitutes a limited license right to use a feature of our product when, as, and if allowed by Linden Lab. Linden Lab may charge fees for the right to use Linden Dollars, or may distribute Linden Dollars without charge, in its sole discretion. Regardless of terminology used, Linden Dollars represent a limited license right governed solely under the terms of this Agreement, and are not redeemable for any sum of money or monetary value from Linden Lab at any time. You agree that Linden Lab has the absolute right to manage, regulate, control, modify and/or eliminate such Currency as it sees fit in its sole discretion, and that Linden Lab will have no liability to you based on its exercise of such right.
So they can just take your money away, or make it worthless, just like that. In the real world that might lead to a riot or a revolution. But where does it leave you in a virtual world? Pretty much stuffed, unless you plan to riot outside the Linden Lab offices in the real world. But the answer to this risk to a virtual investment should be obvious, and no different to any other kind of risk management. That means having insurance, disaster recovery, backup plans, that kind of thing. And with a virtual universe, nothing could be easier. So the really savvy tech entrepreneur needs to stop wasting time collecting dung in Entropia or performing virtual strip shows in Second Life. They need to create an effective way for users to authenticate their avatar’s actions to a third-party backup, so they can be reborn in another virtual universe if their second life disappears or they get ripped off. And all they should ask in return is a reasonable real-world premium. Probably a lot less than the premium on a real space station.
Posted by: Eric in Opinion
Bored with your revenue assurance career? Stuck in a rut because you lack the imagination and skill to add any more value to your business? Want to distract people so they do not notice there are lots of leakages that have been going on for years but that you missed? Here are the best ways to make yourself even more important in your own mind, if not in real life….
1. Start doing revenue “management”. This used to be the job of Marketing and Commercial Finance, but obviously Marketing and Commercial Finance lack your special skills, so you are there to help by pointing out how much more money could be made. Expect a backlash when people point out that RA must have too much time on its hands, and demand the elimination of a cost leakage: the wasted salaries of the people in RA.
2. Do SOX. Not all of SOX, just a few bits you do not understand well but nobody else fancies doing. Try not to draw attention to the fact that you claim to assure the integrity of revenues but are not familiar with the company’s policy on revenue recognition. Also keep quiet about how you will be able to claim to add lots more value in future when the controls were so tight last year that nothing could possibly go wrong.
3. Assure costs. Not the big interesting ones like building networks, just the ones where you reconcile revenues to directly variable costs. Do not mention that a check like that would have been a good idea just to assure your revenues in the first place.
4. Stop doing revenue management immediately after your business announces a big slump in earnings. Obviously that had nothing to do with the bit of revenue management you took responsibility for. Think of a new name for revenue management, something even vaguer than revenue management like “business assurance” or something like that.
5. Assure margins. This is a bit like revenue management but perhaps Marketing may not notice you are doing their job. Ideal way of increasing responsibility without being to blame for that slump in earnings.
6. Take on responsibility for detecting Fraud. Then think again because they would sack you if you messed this up.
7. Fly around doing conferences. Having done everything imaginable to add value to your own business, you have a lot of time on your hands to tell everyone else how successful you have been.
8. Cover revenue share with partners. Hope you spot your own screw-ups before someone in revenue assurance for your partners does. Remember, they may talk at the same conferences as you and if they boast of lots of leakages then everyone will know it was you that mucked up.
9. Do quality assurance. By definition, there are no consistent, measurable and objective targets to meet with quality assurance so it fits perfectly with your approach to revenue assurance.
10. Do business assurance. Every morning when you arrive at work, ask yourself if the business is still there and if it is, say “check” and pat yourself on the back for a job well done. Spend the rest of the day planning your sightseeing at the next exotic conference location.
10a. But seriously, assure everybody else’s business decisions. They cannot be trusted. Form an underground “shadow” exec board to make the tough decisions the real execs are afraid to make. Don’t tell the real execs because it might upset them. Or anyone else, just in case. Unless it is at a revenue assurance conference because the probability of that making its way back to the execs is nil.
10b. But really seriously, do business assurance and revenue management and anything else that takes your fancy. Write your own job spec and change it at will. Take a few extra days holiday because you deserve it. Give yourself free rein to second-guess any decision anyone else makes and point out all their mistakes. Unless earnings slump. That was someone else’s fault. Because obviously those decisions were not your responsibility. You know your place, even if nobody else does.