Archive for February, 2007

A quick summary of things wrong with the world: unpleasant never-ending wars, increasing popularity of DIY suicide terrorism, racial hatred, diminishing fossil fuels + increasing greenhouse gases, the spread of AIDS in developing countries, trains that do not stay on the rails. Of course, the list goes on and on. Luckily, the BBC has the perfect antidote if you spend too much time worrying about serious and intractable problems: moaning in an ill-researched way about trivial ones that have already been dealt with. Sometimes I think that is what the UK is best at.

There are few things that motivate me to be positive about the telecoms industry, and even fewer that motivate me to be positive about telecoms regulators, but the BBC’s populist consumer affairs programme, Watchdog, is one. This programme is aimed at a particular segment of society: people who like to moan about how the problems of society are all the fault of greedy businesses. I guess greedy businesses are the one kind of cartoon villain that the BBC can still go after, leaving the newspapers to focus their ire on single mothers and asylum seekers. And last night, the BBC’s Watchdog team surpassed themselves with their imaginative choice of businesses to go after. Yes, everybody hates a telco, and this year the rise of broadband in the UK means ISPs are now public enemy number 1. The broadband providers have finally succeeded in usurping the mobile SPs’ former stranglehold on the “most hated” category. (But do not worry – I predict the mobile SPs will make a strong comeback next year thanks to advertising over mobile internet).

I would put a link to the Watchdog programme so you could see it yourself, but the BBC do not offer Watchdog as a download. How very irritating of them. So here is the link to a boring static page giving the Watchdog “report” on UK broadband ISPs. So what is wrong with it? Where do I begin? (With the second sentence, as it happens…)

Free and misleading advertising from a public service broadcaster

You would think the BBC, being funded by taxpayers, might be careful not to provide advertising. Or at least to get the facts right when they do. Or at least to advertise a few rivals at the same time. But no….

Talk Talk were heavily criticised by the Advertising Standards Authority for marketing their broadband service as “free”. But this Watchdog report repeats the misleading marketing claim that Talk Talk offer a free service as if it were a straightforward fact. If I were Charles Dunstone I would be congratulating the BBC on increasing his sales through this shoddy piece of journalism. To make matters worse, the site also reproduces a statement from Talk Talk that repeats the claim that there is no charge for their broadband service.

The Watchdog show also blatantly advertised uswitch, a service that makes money from encouraging people to switch providers. But none of the causes of complaint described on the show were problems with the supply of an existing established service. The problems were all caused by switching, or trying to switch, providers. Uswitch gave an astonishingly simplistic explanation of the cause of the problems, implying it all comes down to ISPs not having the capacity to handle the actual volume of customer interactions. The first thought is: how would they know? They are obviously just people sat outside the ISPs, guessing at what the problems on the inside might be. The second thought is: if uswitch is right, how does encouraging more people to switch providers make life better for customers instead of worse? I would give you the link to uswitch but (a) you can probably work it out for yourselves and (b) I hit the site to see what they say themselves about the topic, and it is currently unavailable – how impressive is that?

A little knowledge is a dangerous thing

The Watchdog report says that all ISPs get lots of complaints. Erm… and how does this information help anyone? If we knew which ISPs got complaints about what, that might help. But Watchdog either does not know or cannot tell us. In short, the show told nobody anything of use. Apparently all ISPs stink for a variety of reasons. Great, so presumably the only useful advice would be not to get broadband. Tell that to people in developing countries who have to pay extraordinary prices to get unreliable internet access, if it is available. Watchdog will be using the internet (no sense of irony there) to conduct a survey of what people think of their ISPs. The results will be aired on their show on March 14th. Here are a few reasons not to tune in.

There are a hundred gazillion on-line surveys and discussion groups dedicated to broadband ISPs already. Watchdog would have given the average customer far more help by just pointing them at the information out there. The list of impartial resources is long: Broadband Watchdog and the UK Broadband Internet Guide are just a few. If the show is really interested in consumer protection, why not tell people about this freely available information, much of it generated by existing customers? But perhaps Watchdog had an exclusive advertising deal with…

The Watchdog survey is bound to give reliable unbiased results, of course. I mean, it is a survey of people who watch a weekly consumer affairs programme. No chance of the survey being biased towards people who like to moan, then. Does anyone expect satisfied customers who watched the show to take the trouble to complete the survey? On the other hand, the survey is being conducted over the internet, which might limit the number of people who complain that they currently do not have access to the internet.

Of course, let us not forget what prompted the show. Somebody did a survey. They did a proper, unbiased survey that cost lots of money to conduct and costs lots of money to read. I guess the authors of that survey, Point Topic, will also be grateful for the free advertising from the BBC. If any ISPs had not already bought that survey, I assume they will be buying it today. The free summary of that survey is here. Great research by the BBC there: they just reproduced the main headlines of a survey conducted by someone else. So now the BBC’s Watchdog team needs to do another survey to check if people are unhappy because the first survey said people were unhappy. No duplication of effort there. Good use of taxpayers’ money.

So just how unhappy were people in the first survey? Well, the headline that generated interest at the BBC was that the percentage of fairly or very satisfied broadband customers in the UK fell from 92% in last year’s survey to 77% in this year’s survey. A much less interesting headline would be the rise in dissatisfied customers, from 5% to 9%. At the same time, the number of households getting broadband grew by 3 million, a 34% increase. So whilst the industry has nothing to boast about, with 6% of customers fairly dissatisfied and 3% very dissatisfied, it is hardly a crisis either. In the context of that growth, the indicators suggest that some providers had teething troubles and difficulty managing growth, and that some providers are as lousy this year as they were last year. Whilst 9% of customers were dissatisfied, a further 12% were “neither satisfied nor dissatisfied”, which suggests most people are sensible enough to have low expectations of things given to them for free. Another 2% of customers responded that they did not know if they were satisfied or not! How can they not know the answer? Perhaps they signed up to a free service but have never bothered to use it…
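To put the percentages in context, here is the arithmetic using only the figures quoted above. Note that the subscriber base is back-calculated from the “3 million new households = 34% growth” claim, so it is an estimate derived here, not a number published by Point Topic:

```python
# Rough arithmetic using only the figures quoted above.
# The subscriber base is back-calculated from the survey's
# "3 million new households = 34% growth" claim, so it is an
# estimate derived here, not a published figure.

growth_abs = 3_000_000                    # new broadband households
growth_pct = 0.34                         # stated growth rate

base_last_year = growth_abs / growth_pct  # roughly 8.8 million
base_this_year = base_last_year + growth_abs

# 5% dissatisfied of the old base vs 9% of the new, larger base
dissatisfied_last_year = 0.05 * base_last_year
dissatisfied_this_year = 0.09 * base_this_year

print(round(base_last_year))
print(round(dissatisfied_last_year))
print(round(dissatisfied_this_year))
```

On these estimated numbers, the count of dissatisfied households roughly doubled in absolute terms, but against a subscriber base a third larger – consistent with the teething-troubles reading rather than a crisis.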

Despite the complete lack of discrimination by Watchdog in portraying all providers as equally bad, the companies that came out worst in the survey were the “free” providers with the most growth – Talk Talk and Sky. Not surprising, really. The only message here is that you get what you pay for. Sure, both Talk Talk and Sky may give lousy service, especially around things like not answering customer service calls, but the service they give is subsidised. The price the user pays is less than the cost to the supplier. I wonder how people responding to Watchdog’s survey will answer the question about “value for money”. My guess is that a high proportion will respond that they get poor value without even realising the service is subsidised. I also wonder whether Watchdog will do the decent thing and distinguish which ISPs give good service but poor value and those which give poor service but good value. Please also notice the one group of consumers whose interests are not being protected: the mugs who do not get broadband but who are paying inflated television, fixed-line or mobile telephony charges to finance the grab of broadband market share… Instead of demanding more money be spent on servicing broadband customers, Watchdog should be complaining about fair value for everyone else.

Closing the stable door after the horse has bolted

What exactly is the point of a consumer protection show? To help customers protect themselves? To force the regulator to intervene to protect customers? Wrong on both counts, if you take Watchdog as an example. Giving customers no useful information hardly helps them to protect themselves. But Watchdog did, in the end, give some accurate information. They told consumers that the regulator has already changed the rules in a way that will speed and improve the migration of customers from one ISP to another. So well done, Mr. Regulator. But as the rules on migrating customers have already changed, and the survey reflected opinions formed before the change in the rules, just what is achieved for the consumer by highlighting the historic issues? It seems hard to believe, but the only “news” here is that a regulator found out about a problem, and did something to fix it, before a bunch of lazy hack journalists wrote their lazy hack story about it. I do not know whether to be more shocked by the speed of the regulator or the inertia of the journalists.

Please consider that regulators start the discussion relating to new regulations long, long before the change takes place. The final wording of the new regulation was published months ago to give ISPs the time to make the changes necessary to comply. So everybody has known about many of the problems described in Watchdog’s show for ages. But when does it become important enough to report on? When the problem is first identified? No. When the regulator decides it is a big enough issue that they need to act? No. The day somebody issues an out of date, commercially-available survey highlighting how bad things were when the problem was first identified? Yes. I can only imagine what breaking news the BBC will be giving us as a public service today: Soviet Union on the verge of break-up, Man lands on the Moon, silicon chips set to cause breakthrough in computing?

For those of you interested in doing the kind of research that the BBC cannot be bothered to do, here is Ofcom’s statement on the new Condition 22 on MAC codes and broadband migrations. Our lazy friends at Watchdog might also have helped reduce the number of complaints – and the levels of dissatisfaction – by taking care to point out in their report that the MAC code process and the 5-day handover period will not apply to a large minority of broadband customers. But never mind. They were probably exhausted from a hard day of staring out of the window at the taxpayer’s expense. They did get one fact right: that providers could be fined up to 10% of annual turnover if they failed to comply. In theory. On the TV show they even noted that “this would be a lot of money”! (These journalists must work round the clock on their material). It would have been slightly more informative to point out how unlikely it is that Ofcom would fine an ISP even 10 pounds, or even 10 pence, for breaking the rules. I spend more on parking fines than Ofcom raises from penalties on ISPs. The chance of an Ofcom fine is smaller than the chance that Richard Branson will invite James Murdoch to holiday with him on his private island.


Here is the news. The BBC is a big waste of taxpayers’ money, reproducing old stories without checking the facts. Disraeli’s maxim of “lies, damn lies and statistics” has been taken to new heights. All the BBC has done is read the summary of a survey, speak to a few customers and avoid any real investigation. If you still do not believe me about how lazy the BBC’s journalism is, check out the version of the story on the BBC News website or follow the link from there to a video clip of the story from yesterday’s main bulletin. Broadband providers are rubbish too, but you knew that already because telcos are rubbish in general. Particular problems with broadband relate to freeing up a network infrastructure created and maintained by a monopoly so it can be used by many service providers. Customers do not know or care about stuff like that and just want promises kept, but telcos focus on keeping 99% of promises and hoping the most complicated and expensive 1% of promises are delivered by magic. And you should never, never, never expect to get the facts from a DJ who spins wheels on game shows or from the presenter of a showcase for amusingly-shaped vegetables.

In other words, after all that, nothing new to report….


The other day I blogged about how my electricity supplier completely failed to bill me for quite a few years. From some of the responses I have received since, it sounds like I am not alone. But if energy suppliers find it hard to bill now, they might find their problems getting much worse in the near future, thanks to the surge in interest in finding environmentally friendly ways to power our lifestyles. Take this story about Californian utilities buying back energy stored in the batteries of electric cars, or this story about buying back energy from domestic microgeneration in the UK. They are both great ideas, but there are two challenges. The obvious one is the technical challenge of making the storage and generation of domestic energy more efficient and affordable. The one easily ignored is the back office challenge of correctly paying customers for the energy they supply to the grid. With domestic generation the solution should be simpler, but there still needs to be an adequate mechanism to meter and pay out to customers who generate more than they consume. Estimating meter readings is going to tend to be even more inaccurate when customers generate unpredictable amounts of energy from wind or sun. Then there are also the tax implications. Governments may want the tax, but creating an administrative hassle will only disincentivise the few people prepared to make the initial financial outlay to install solar panels and wind turbines.

In the Californian electric car proposition, billing gets even more complicated. It will be necessary to correctly match the car to the customer’s account, even when it is plugged into the grid away from home. Imaginative fraudsters will already be dreaming up ways to trick the system into paying them for other people’s energy. So perhaps if revenue assurance in telcos gets a little boring, especially if everything becomes flat rate, that may be compensated for by the interesting new problems in the energy sector.
At the very least, revenue assurance in energy will no longer be synonymous with the job of sending someone to read the meter.
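The domestic microgeneration case boils down to netting what a household draws from the grid against what it feeds back. A minimal sketch, with tariff rates invented for illustration (real settlement rules and prices vary by market and are not taken from either story):

```python
# Illustrative settlement for a net-metered household. The tariff
# rates are invented for this sketch; real rules vary by market.
# Working in whole pence keeps the arithmetic exact.

IMPORT_PENCE_PER_KWH = 10   # price of energy drawn from the grid
EXPORT_PENCE_PER_KWH = 5    # price paid for energy fed back

def settle_pence(consumed_kwh, generated_kwh):
    """Amount the customer owes in pence; negative means a payout."""
    net_kwh = consumed_kwh - generated_kwh
    if net_kwh >= 0:
        return net_kwh * IMPORT_PENCE_PER_KWH   # customer pays
    return net_kwh * EXPORT_PENCE_PER_KWH       # supplier pays out

print(settle_pence(300, 100))   # ordinary quarter: customer pays
print(settle_pence(100, 300))   # windy quarter: supplier pays out
```

Even this toy version shows where the back office pain starts: the payout branch only works if the meter reliably distinguishes import from export, and every credit must be matched to the right account.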


Today I am trying to motivate myself to finally finish writing the enhanced revenue assurance maturity model I started working on in May last year. But motivating myself is proving a struggle. It is proving so hard to motivate myself that I may actually iron some shirts first, just to put it off a little longer. I might write a very long blog entry to help delay things first, so you have been warned….

Back in May last year, reworking the maturity model seemed so obvious. The idea for improving the model came at TeleManagement World Nice. I was there to speak alongside my good friend and colleague, Dr. Gadi Solotorevsky, who as well as being Chief Scientist at cVidya also spends his time heading up the TeleManagement Forum’s RA team. [To be precise, he has headed up both their teams, technical and catalyst, for the last 4 years, so probably deserves some kind of medal for services to revenue assurance.]

One of the topics we discussed in that 2006 presentation was the maturity model outlined in the original TMF technical overview for revenue assurance, also known as TR131. [By the way, this document is now supposed to be available free of charge to everyone, despite what the web page says, thanks to a recent change of rules by the TMF.] Before that conference, I was starting to wonder if the idea of maturity was as dead as a dodo. The original version had been written in 2003 whilst I was still at T-Mobile UK, but it simply had not taken off, at least as far as I could tell. Many people seemed to like the basic concept, so that was not the problem. The goal was to define a strategic evolutionary path for revenue assurance, and then assess the actual state of the business against that model. I am not claiming that this idea was original or imaginative, because it was not original or imaginative. On the contrary, it was a simple reworking of the model underpinning the Carnegie Mellon Software Engineering Institute’s Capability Maturity Model Integration (CMMI). As far as I was concerned, the connections between the original stimulus for the CMMI, the goal of producing better software, and that of revenue assurance, avoiding bugs that cause money to be lost, were pretty obvious. And as CMMI was effectively a practical extension of the ideas of thinkers like William Edwards Deming, who were trying to apply a scientific approach to improving processes, it had a huge appeal. If nothing else, it seemed obvious to me that revenue assurance was a good representation of Deming’s ideals about iteratively studying and improving performance. However, in early 2006, as far as I could tell, nobody was interested in maturity. But I was wrong. At TeleManagement World Nice 2006, and similar events soon after, it was obvious the opposite was true. Lots of people were interested in maturity. The problem was the reverse of what I thought it was. 
The problem was not apathy about strategic thinking and paths to revenue assurance maturity. The problem was multiplicity and fragmentation of thinking, because 2006 was the year when seemingly everyone developed and presented their own maturity model.

So, let us get something straight here. Everybody having their own maturity model achieves nothing. You might as well not have any maturity models. I do not say that because I am jealous about who has the better model. Imitation is the sincerest form of flattery, so if people were copying the idea of a maturity model from the work I did and published with the TMF, I am flattered. Perhaps people thought up the idea of maturity completely separately. Well, that is fine too, though really people should do a little more homework before they reinvent the wheel. Reinventing the wheel is not a clever thing to do. When the TMF publishes a document about revenue assurance maturity, it hardly takes a detective to find out about it. Perhaps people thought their models were just better. Well, that is fine too, but they might as well drop the TMF a line to say so, instead of just working secretly on their own. So I do not really care who has the best model. All that matters is that somebody has a good model and that I can get to use it. The problem so far is that nobody has a model that is even remotely good, including the one I am working on, so squabbling about which model is better or worse would be a complete waste of time.

It does not take a lot to justify my statement that all the many revenue assurance maturity models are alike in one way: that they are rubbish. To justify the statement, I just need to quote some people with big brains and the ability to calculate the value of their own work.

William Edwards Deming said

“In God we trust, all others bring data.”

The statistician George Box wrote

“Essentially, all models are wrong, but some are useful.”

If the revenue assurance maturity model is to have some value, there has to be some data to support it. But so far, so nothing. I will be honest with you on this point. Even the one time that T-Mobile UK did an exercise based on gauging maturity, it was all subjective in a way that would make it impossible to gauge improvement over time, meaning there was no useful ongoing collection of data. I flirted with the idea of using maturity as a benchmark for performance in C&W’s international operations, but again the exercise was stymied because there was inadequate data to genuinely gauge if the model was successful. However, I think I understand how you could collect data, from more than one telco, and use it to validate and refine a revenue assurance maturity model. But I struggle to see how most of the “maturity models” people talk about could ever be validated using data. The problem they have is not that they do not discuss maturity or strategy. They do. The problem is that they are an arbitrary snapshot of the author’s opinion. So, in essence, they are only useful to a telco if the author happens to say something relevant and useful to that telco, despite never having worked for it, knowing nothing about it, and having no data to support his opinions. There is no methodical way to improve or change the model. In short, the problem with the average maturity model is not that it fails to discuss maturity. The problem with the average maturity model is that it is not a model.

The points made by Deming and Box are really pretty straightforward. Having a theory is nothing. Anyone can have a theory. Theories may sound good or sound bad, but you would be silly to trust a theory just because somebody says so. Some very plausible theories have been shown to be wrong. For example, the world is not flat, the earth is not the centre of the universe, human beings can travel faster than 30mph without dying, and women tend to be smarter than men. Columbus was looking for China, not America, and he was lucky that his error about the size of the planet was cancelled out by finding a continent he had not expected to be there. So theories are only useful if you compare the theory to the real world. Then you can modify the theory to better conform to what you actually observe. That is just the essence of taking a scientific approach. A model is a kind of scientific theory with a clear relationship to specific data. So in 2006, after seeing all the various theories unsupported by data, it was obvious somebody needed to construct a mechanism that would make it easy to collate genuine data. So, there was really only one choice. First, it had to be driven by me, as I am about the only person daft enough to spend time constructing such a mechanism. Then, it had to be supported by the only organisation capable of collating that data: the TeleManagement Forum (TMF). The TMF is the only organisation which could be objective about a maturity model/theory (all the others were biased because they were selling something) and which had the resources, mission and infrastructure to bring together data from many telcos. So we formed a team in the TMF and set to work on creating a more detailed model which could be used for a meaningful level of data collection. And a meaningful level of data collection is not the same as somebody being able to proclaim they had reached the highest level of maturity possible, just in order to enhance their own career. It means asking a series of detailed and specific questions where it would be straightforward to find and verify the answers, and where the questions could be applied to all kinds of telcos.
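To make the questionnaire idea concrete, here is a toy sketch: specific yes/no questions whose answers can be found and verified, rolled up into a crude maturity level. Both the questions and the level boundaries below are my own inventions for illustration – they are not taken from TR131 or the TMF team’s actual questionnaire:

```python
# Toy version of a verifiable maturity questionnaire. Questions
# and level boundaries are invented for illustration; they are
# not taken from TR131 or the TMF working team's material.

QUESTIONS = [
    "Is there a documented inventory of all revenue streams?",
    "Are leakage incidents logged with a quantified value?",
    "Is rating independently re-performed on a sample of records?",
    "Are controls reviewed on a fixed, scheduled cycle?",
]

def maturity_level(answers):
    """Map a list of verified yes/no answers to a level from 1 to 5."""
    if len(answers) != len(QUESTIONS):
        raise ValueError("one answer per question")
    yes_count = sum(answers)
    # No 'yes' answers -> level 1; all 'yes' -> level 5.
    return 1 + round(4 * yes_count / len(QUESTIONS))

print(maturity_level([True, True, False, False]))  # -> 3
```

The scoring formula is beside the point; what matters is that every answer is something an auditor could check, so the same questionnaire can generate comparable data from many telcos instead of a self-awarded “level 5”.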

So that was back in the middle of 2006. Now we are some way into 2007 and it is still a work in progress. Constructing the base questionnaire, at sufficient detail to get meaningful data, but also general enough to apply to many businesses, has been very tough. Maybe, just maybe, it will be finished soon. But it still will not be a model. It will only become a model after some real data has been collected, and given the difficulties in getting agreement on the questionnaire, I am sceptical about whether that will ever happen. After all, if it is easier to just call up a consultant, or listen to someone speak at a conference about their opinions on what is “best practice” in the industry, why go to the trouble of collecting data? There is only a motive for collecting data if you can distinguish between seeming to be good at something and actually being good at something. But in the absence of any real models, how do you distinguish the two? After all, Columbus came back to Europe from America still thinking he had landed in Asia. He died without realising he had found a new continent. If people can make mistakes like that, what confidence can anyone have in distinguishing worthless from worthwhile revenue assurance?

So this is my personal opinion of the state of revenue assurance maturity in the telecoms industry. It is pre-mature. To get useful science, you need to follow solid basic principles and you need to objectively collect data, then iterate over and over. In the absence of solid principles and solid data you get lots of opinion and debate – or people who agree with each other but who have no real idea if they are right or wrong and who do nothing to find out either way. In other words, you get philosophy, not science. So far revenue assurance is a philosophy, not a science. Its value will remain unprovable until it becomes a science. To become a science, some people will have to offer up objective data without being certain about the benefits. Their data may ultimately prove that their theories about good revenue assurance are all wrong. So it takes courage to go back to the real data. Drafting a model is just the first, easy, challenge. Populating the model with real data is the harder task. It is 4 years since we started down this path – I wonder how much longer it will be before we get to the end, if we ever do. Anyhow, it is late now and I will take the same approach to finishing the document that I suspect most will take when it becomes time to answer the maturity questionnaire and gather the data. I will do it tomorrow ;)

Epilogue: Yesterday I was asked to speak at IIR’s Telecoms Internal Audit and Risk Management Conference taking place between May 8 and May 11 in London. The topic? You guessed it: revenue assurance maturity. So I had better hurry up and finish the questionnaire after all. Wish me luck – I will need it if I am to collect any data by then….


It seems amazing to me, but I hardly need to do research to find out how poor businesses are at issuing bills. I just need to open my own personal mail to find that out. Several years after moving into my newly-constructed house, my dual-fuel gas and electricity supplier has finally issued me their first bill. Which is fine by me, as I always had every intention of paying, just no intention of wasting my time chasing them. In fact, until I received the bill, I had no idea who my gas and electricity service provider was. The one time I had a temporary problem with the electricity supply, I spent two hours on the telephone trying to find out who to contact, with no success at all. In the end, a man from the construction business responsible for building the estate I live on came round knocking on people’s doors to explain the problem. He said it was all the fault of EDF. There did not seem much point trying to complain, as I am not entitled to vote in the upcoming French elections, and hence my opinions and problems are probably of no interest to Electricité de France (yet another business in the AOL habit of using a TLA as a name, the better to hide their nationalistic ancestry when selling abroad). It says a lot about how these businesses work that a diligent and caring person walking door to door still ends up being the quickest and most effective form of communication when something goes wrong.

So why am I blogging about this? Because I struggle to understand how my utility service provider failed to bill me for so long. As long as I have been living in the house, people with official badges have come around on a regular basis to read the gas and electricity meters. It looks like all the meter readings are stated on my first bill. So why did it take them so long just to reconcile the meter readings to the bills? Checking that a bill is issued for each meter reading taken should be a really straightforward control. After all, paying people to read meters is a real and significant cost and there is plenty of attention paid to the problem of balancing the cost of taking meter readings with the issues involved in estimating the amount supplied. I will not mention the name of my supplier. However, if I was a shareholder I would be wondering how much extra cash would be generated for dividends by simply identifying the unbilled meter readings each quarter.
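The control in question – check that every meter reading taken eventually appears on a bill – can be sketched in a few lines. The record layout below is invented for illustration; a real system would reconcile on whatever keys the billing and meter-reading systems share:

```python
# Sketch of the reconciliation control suggested above: flag any
# meter reading that never appears on an issued bill. The record
# layout is invented; a real system would join on whatever keys
# the billing and meter-reading systems share.

readings = [
    {"meter": "E123", "date": "2006-03-01", "value": 1200},
    {"meter": "E123", "date": "2006-06-01", "value": 1450},
    {"meter": "E123", "date": "2006-09-01", "value": 1700},
]

# Readings that made it onto an issued bill
billed = {("E123", "2006-03-01")}

unbilled = [r for r in readings
            if (r["meter"], r["date"]) not in billed]

for r in unbilled:
    print("unbilled reading:", r["meter"], r["date"])
```

Run each quarter, the list of unbilled readings is exactly the unclaimed cash that a shareholder would be asking about.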


Last week a lot of people went to Barcelona in Spain, to talk about mobile communications. The FT provides a fairly good series of video interviews from the 3GSM congress.

All I want to say is that these people are selling the idea of being in touch, sharing information and ideas, communicating, doing business, and having fun, wherever you are in the world. The word “mobile” is often presented as a lifestyle choice, as if people might prefer to relate to each other over a mobile telephone instead of being face-to-face. I used to work with a CTO who talked a lot about “eating your own dog food” by which he meant using the firm’s own services, products and capabilities. We will know that mobile networking works when business people in the mobile industry use their own products to do business, instead of doing it by flying to Barcelona. I wonder in which year we will hear the first executive say “I am not going to the next 3GSM World Congress, I can do it all over my phone…”
