Archive for the Opinion Category

The relationship between people and machines has been a recurring theme on talkRA. But I think the tension is not really between people and machines. The tension is between people who treat other people like human beings, and people who treat other people like machines. People are varied, difficult, unpredictable, individual, and demanding. It would be convenient for a business to have 500 employees who all behave the same way, and 5 million customers who all behave the same way. But people are not like that. Whether we talk about software, controls, or processes, there is a danger that somebody with little empathy for real people, and divorced from the consequences of their decisions, will make terrible choices that then become hard-coded into the ‘rules’ of the business. These decisions may look good from the cold, abstract perspective of a spreadsheet, but can be terrible for the human beings affected by them. The end result is the kind of customer experience shared below.

American technology journalist Ryan Block called Comcast to cancel his internet service. After ten minutes of arguing with Comcast’s ‘retention specialist’, he decided to record the remainder of the call, capturing the final eight minutes. Afterwards, he shared the recording on SoundCloud, where it took just two days to reach 4 million people. Why did 4 million people take an interest in Block speaking to Comcast about cancelling his service? Because Comcast’s representative repeatedly demanded an explanation for why Block wanted to cancel his service – in the obvious hope that Block would simply give up and remain a customer. Listen for yourself…

I wanted to talk about this incident because these examples must be balanced against any data-centric analysis of how to boost revenues, reduce churn, and so on. This recording is also data. Unfortunately, it is the kind of data that is hard to compress into numbers and spreadsheets. But it is still vitally important data if we want to understand how well the business is performing. And this data says: “avoid Comcast as your service provider, because they treat customers badly.”

The recording also says that Comcast treats its staff like machines. Whilst the customer thinks they are talking to a human being with some discretion over how they behave, they might as well be speaking to an IVR. Comcast’s representative behaves like a slave to the rigid rules he is expected to follow: the employee is required to ‘save’ the customer by any means possible, even if the customer is absolutely determined to leave.

A statement issued by Comcast puts the blame solely on their representative, saying:

The way in which our representative communicated with them is unacceptable and not consistent with how we train our customer service representatives.

However, others have questioned Comcast’s corporate attitude. When Comcast tweeted to say they would take ‘quick action’, Block tweeted back:

I hope the quick action you take is a thorough evaluation of your culture and policies, and not the termination of the rep.

And somebody claiming to be a former employee of Comcast used Reddit to share a much more comprehensive analysis of why Comcast’s representatives would behave like this:

If I was reviewing this guy’s calls I’d agree that this is an example of going a little too hard at it, but here’s the deal (and this is not saying they’re doing the right thing, this is just how it works). First of all, these guys have a low hourly rate. In the states I’ve worked in they start at about $10.50–$12/hr. The actual money that they make comes from their metrics for the month, which depend on the department they’re in. In sales this is obvious: the more sales you make, the better you do.

In retention, the more products you save per customer the better you do, and the more products you disconnect the worse you do (if a customer with a triple play disconnects, you get hit as losing every one of those lines of business, not just losing one customer). These guys fight tooth and nail to keep every customer because if they don’t meet their numbers they don’t get paid.

Comcast uses “gates” for their incentive pay, which means that if you fall below a certain threshold (and the thresholds tend to be stretch goals in the first place) then instead of getting a reduced amount, you get $0. Let’s say that if you retain 85% of your customers or more (meaning that 85% of the lines of business that customers have when they talk to you, they still have after they talk to you), you get 100% of your payout – which might be $5–$10 per line of business. At 80% you might only get 75% of your payout, and at 75% you get nothing.

The CAEs (customer service reps) watch these numbers daily, and will fight tooth and nail to stay above the “I get nothing” number. This guy went too far; you’re not supposed to flat out argue with them. But Comcast literally provides an incentive for this kind of behavior. It’s the same reason people’s bills are always fucked up: people stuffing them with things they don’t need or in some cases don’t even agree to.
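To make the arithmetic of those ‘gates’ concrete, here is a minimal sketch of the incentive scheme described in the quote above. The thresholds are the Redditor’s illustrative numbers, the payout per line assumes the midpoint of his $5–$10 range, and the 75–80% band he leaves unspecified is treated as zero; none of this is a real Comcast pay plan.

def incentive_payout(retention_rate, lines_saved, payout_per_line=7.50):
    """Incentive pay for the month under a gated scheme.

    retention_rate: fraction of lines of business customers still hold
                    after speaking to the rep (0.85 means 85% retained).
    lines_saved:    number of lines of business retained.
    """
    if retention_rate >= 0.85:
        fraction = 1.00   # full payout at or above the top gate
    elif retention_rate >= 0.80:
        fraction = 0.75   # reduced payout in the middle band
    else:
        fraction = 0.00   # at 75% you get nothing; the quote leaves
                          # the 75-80% band unspecified, so it is
                          # treated as zero here
    return lines_saved * payout_per_line * fraction

# The cliff edge is what drives the behavior: one extra disconnect
# near a gate can wipe out the entire month's incentive pay.
print(incentive_payout(0.80, 160))   # 160 * 7.50 * 0.75 = 900.0
print(incentive_payout(0.79, 158))   # 0.0 -- everything gone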

I find this account of Comcast’s rules credible. Comcast may have a rule saying their reps should not argue with customers, but nobody is this overzealous unless they are motivated to behave this way. In other words, something in Comcast’s rules, procedures and incentives is motivating this human being to be so dogged at retaining customers. Without a financial incentive, it would be normal for the rep to just do as Block asked, cancelling the service and ending the call as quickly as possible. Arguing for nearly twenty minutes shows that the rep has something personally at stake. In this case, the rep has too much at stake.

Whilst Comcast’s motivational techniques might deliver good results on their spreadsheet – there is no doubt this kind of high-energy ‘retention’ strategy will influence some customers – there are also downside consequences for real people which may not be shown by the data that management looks at. No matter how much data we think we have, when it comes to marketing analysis, customer service, satisfaction and loyalty, we need to remember how difficult it is to reduce people’s attitudes and behaviour to numbers which computers can calculate. Decision-makers who ignore human consequences do not deserve respect, whether they intend to disconnect a batch of old services, and wait to see if any customers complain that they have been affected, or whether they give a salesman a big bonus for results, then plead ignorance of the salesman’s unethical tactics.

Data can be clean and straightforward, making it pleasant to work with. Much of business assurance is rightly oriented around data. Manipulating and managing data contrasts with the messy business of how people think and act, which is difficult to record, measure and describe using rules and formulae. But telcos exist to serve people, and business assurance professionals should always keep that in mind.


Imagine this advertising campaign…

Good news for all PhoneyTel customers!!!!

Previously your bills were only 70% accurate. We’re very sorry about that. We let ourselves down, and we let you down. Presenting mistakes in 3 bills out of every 10 is simply unacceptable.

We listened when you told us we needed to improve. So that’s what we did. We’ve improved. We took a hard look at ourselves, and we’ve changed the way we work. We’ve invested in our people, in our process, and in our technology. Now we can promise you the standard of performance that you expect from us, day in and day out. With thanks to our technology partners, we’ve just completed a project to resolve our accuracy problems. And that’s why we can proudly boast that PhoneyTel bills are now…

(drum roll)

90% accurate!!!!

That’s right. Your bill is only going to be 10% wrong, when previously it was 30% wrong. That’s the quality of service you deserve, and PhoneyTel is glad to give you what we think you deserve.

That would be a ridiculous campaign! Or maybe not. Consider the following excerpt from the website of a leading global supplier of ICT solutions…

Loss of revenue due to billing & charging complexity, inaccurate data input, invalid correction & discount control, ineffective payment collection, and bad debt management remain a major challenge for operators. The primary causes of revenue leakage include network configuration changes, tariff configurations, and poor system integration during the CDR processing cycle…

…Through [our] understanding of the revenue cycle, [we] can help operators reduce the potential impact and risk of revenue leakage through its revenue assurance processes, tools, and expertise…

…[We were] able to improve the billing accuracy for a certain African operator from approximately 70% to 90%, leading to fewer customer complaints.

I may not be a customer of a certain African operator, but excuse me as I still feel entitled to complain about such extraordinarily low expectations. Who, in all seriousness, thinks 90% accuracy is something praiseworthy, even if it is an improvement on what went before? Customers complain more about overcharges than undercharges, so it is hard to know what “70%” accuracy really means in this context. But whilst 90% accuracy sounds better, it still sounds lousy. Nobody should be seeking credit for delivering such inaccurate bills. That would be like a thief expecting you to thank him, because he took your wallet, instead of stealing your car.

Mistakes will always happen, but accuracy is not so hard to deliver that anyone should think a 10% error rate is tolerable. In a way, I am pleased that this supplier has been so transparent. However, I am more upset that they feel no embarrassment at being associated with such a low level of delivery.
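To put those percentages in perspective, consider a quick back-of-envelope calculation; the monthly billing cycle and the assumption that errors strike bills independently are mine, not the supplier’s.

# Back-of-envelope arithmetic for '90% accurate' billing.
accuracy = 0.90
bills_per_year = 12          # assume one bill per customer per month

expected_wrong = (1 - accuracy) * bills_per_year   # 1.2 wrong bills a year
p_at_least_one = 1 - accuracy ** bills_per_year    # ~0.72

print(f"Expected wrong bills per customer per year: {expected_wrong:.1f}")
print(f"Chance of at least one wrong bill in a year: {p_at_least_one:.0%}")

In other words, with these assumptions roughly seven customers in ten would receive at least one bad bill every year.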

Who is the supplier? (Drum roll…) Huawei. They may be fringe providers of RA services, but sloppiness at the fringe hardly suggests a robust core. Huawei employees, including the people who wrote this promotional spiel, should ask themselves a question. How happy would they be, if they learned that their payroll was 90% accurate?


Daniel Peter of Mara-Ison Connectiva contributes today’s guest post, which revolves around a common challenge for revenue assurance and fraud management: measuring the benefits that are delivered. However, Daniel steps back from the usual headlong rush into calculations and equations, and first poses a much more fundamental question. What kind of value should revenue assurance and fraud management seek to deliver?

I have been thinking lately about the fundamental attitude of telcos towards Revenue Assurance and Fraud Management (RAFM). My recent interactions with RAFM managers and system integrators suggest that every dollar being spent on RAFM is questioned; while cost consciousness is good for a telco’s short-term profitability, it might lead to a loss of strategic advantage when unfavorable decisions are made on RAFM spending.

RA is not a mere hygiene factor; it provides strategic advantage to the telco. I have also seen discussions in certain RA forums asking “whether the cost of RA is higher than the leakage detected”, “what is the ideal payback period for an RA system”, and “what is the breakeven point for an RA system”… This made me wonder whether the tools and techniques of breakeven analysis and payback periods are the right approach to investment/expenditure decisions on RAFM.

Competitive forces for telcos are on the rise, with regulators making concepts such as mobile number portability (MNP) mandatory; while consumers enjoy better features and services, this puts tremendous cost pressure on the telco (margins have become very thin). This margin pressure has also affected the investment and budget allocation decisions on Revenue Assurance and Fraud Management – for both people and technology. RAFM is a cost center, and it is being targeted by management as a potential area to cut costs, just as they do to areas that are not core to the business; but RAFM is core to the telco. Another interesting observation is that investment in RAFM is very different from an investment in a new software system or marketing campaign, where a short-term return-on-investment calculation should be the driving force for decision making.

When a telco invests in an OSS, there is a decision-making process in place where the business and IT jointly participate in selecting the vendor. This approach helps management to ensure that the allocated budget serves its purpose, gives certainty about increased profitability, and secures the return on investment.

There are various tools and techniques to calculate RoI. For example, a telco planning to launch 4G LTE will perform breakeven analysis to determine the Break Even Point (BEP) for the incremental revenue generated from 4G LTE. The cost of cannibalization is factored into this example, because subscribers will unsubscribe from their GPRS plans (the cost of cannibalization is the decrease in profits resulting from reduced sales of the existing product as customers move to the new product; from GPRS to 4G LTE in our example). The BEP gives the number of incremental units the telco has to sell to cover the expenditure, which means that if the firm sells less than the BEP, it loses money. The BEP is the point where the telco generates zero profit from that investment, meaning revenue is equal to total expenditure at that point.

The payback period can also be assessed using breakeven analysis, as we can forecast how long it will take to reach the breakeven point. When the payback period is very short, the risk to the return on investment is lower; in other words, the return on investment is assured and the rate of return can also be quantified. It is mandatory that the decision maker hits that number; otherwise the expenditure will be classified as a bad decision. When the units sold exceed the BEP, the investment is generating profits. Breakeven analysis is a good tool to assess whether an investment should be made, and whether it is feasible to achieve the BEP within an acceptable time frame. Tools like these are very helpful for investments/expenditures that yield direct results within a short period, where the revenue stream is straightforward.
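To make the breakeven and payback arithmetic concrete, here is a minimal sketch; every figure is invented for illustration, and none comes from a real 4G LTE business case.

# Breakeven and payback for a hypothetical 4G LTE launch.
fixed_cost = 5_000_000.0        # upfront investment in the launch
price_per_unit = 30.0           # monthly revenue per 4G subscriber
variable_cost_per_unit = 12.0   # monthly cost to serve a 4G subscriber
cannibalized_margin = 6.0       # monthly GPRS margin lost per subscriber
                                # who migrates to 4G

# Contribution per unit, net of the margin cannibalized from GPRS.
net_margin = (price_per_unit - variable_cost_per_unit) - cannibalized_margin

# Break Even Point: the units needed for revenue to cover expenditure.
bep_units = fixed_cost / net_margin                  # ~416,667 subscriber-months

# Payback period: forecast how long it takes to reach the BEP.
units_sold_per_month = 25_000
payback_months = bep_units / units_sold_per_month    # ~16.7 months

print(f"BEP: {bep_units:,.0f} subscriber-months")
print(f"Payback period: {payback_months:.1f} months")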

Now the question to answer is: can we use tools such as these to make decisions on investment in RAFM? According to the TMF, the objective of the Revenue Assurance Management processes is to establish an enterprise-wide revenue assurance policy framework, and an associated operational capability to resolve any detected revenue assurance degradations and violations. While all of these can be quantified and measured, the question we have to consider is whether the telco should use a short-term RoI analysis, such as breakeven analysis, for RAFM.

In my opinion, measuring RoI for RAFM with tools and techniques such as breakeven analysis should be avoided. RoI for an investment in plant, machinery, or network expansion is focused on production and sales, whereas the RAFM function exists to provide strategic advantage, and its returns are long-term, even though leakage identified in the short term can justify the expenditure on RAFM. Unlike an investment in network expansion, the RAFM function has a different objective. The telco should assess the key outcomes of RAFM, and not calculate RoI solely on the basis of leakage detection. An RoI for RAFM based on leakage detection focuses on how much leakage the investment has found, and how many dollars’ worth of fraudulent practices have been identified, which is an indicator of a loss of qualitative focus.

RAFM is by nature number-focused, but the return on RAFM should be qualitatively focused. RA leads to increased revenues, but leakage detected per dollar spent is not the right measure. The telco should assess the risk areas RA is addressing, and do a what-if analysis to quantify the potential loss (in terms of revenue leakage, quality of service, and fraud, should the risks go unnoticed). The satisfaction the board has with the reported revenue, and the confidence customers have in the bills sent to them and in the deductions from their prepaid vouchers, also have to be considered. All of these translate to strategic advantage, and have to be considered when evaluating the value added by RAFM.
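As an illustration of the what-if analysis described above, a sketch might express each risk area as a probability and an impact; every risk name and figure below is invented.

# Hypothetical what-if analysis: expected annual loss if risks go unnoticed.
risks = {
    # risk area: (annual probability, impact if it goes unnoticed)
    "rating errors on a new tariff":          (0.30, 2_000_000),
    "bypass fraud on international routes":   (0.15, 5_000_000),
    "billing errors eroding customer trust":  (0.50, 1_200_000),
}

expected_loss = sum(p * impact for p, impact in risks.values())
print(f"Expected annual loss without RAFM: {expected_loss:,.0f}")  # 1,950,000

Framing the question this way compares the RAFM budget against the losses it prevents across all risk areas, rather than against last year’s leakage detections alone.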


I read Eric’s post about how the TM Forum has reduced the importance of people within their new Revenue Assurance Maturity Model. As both a founder of Compwise, a software business, and a human being who helps big businesses to analyze their data, I wanted to explain why I feel the TM Forum has made a mistake.

For the last couple of years I have switched sides, wearing a client’s hat instead of a vendor’s. One financial institution, an issuer of various payment cards (debit cards, credit cards, prepaid cards, et al) asked me to check their product portfolio and assess its profitability.

At some point I found myself conducting a typical RA audit assignment where the billing is rather complicated, including about 7 or 8 bill cycles (per TRX, daily, weekly, monthly, quarterly, semi-annually, annually, and a sporadic one). The tool I used for this audit was Microsoft Excel – with some reliance on “a little helper” assisting me with advanced Excel functions, as my command of Excel is fairly basic. The principles I followed were the same as those used by any auditor or RA practitioner working in telecoms.

I analyzed one product only, examining the revenue streams of what is called a 4-party model, which in practice involves around 6 or 7 parties. My analysis revealed hundreds of thousands of dollars in incorrect charges submitted to the financial institution.

I guess that if I had procured dedicated software, implementation, et al, this would have produced better results than my humble use of Excel. However, for zero investment in software, and within a very short time frame, it is far more effective to get 90% of the value that can be saved, rather than waiting for 99.9% of savings to be delivered after the long timeframes involved in a tender, proof of concept, procurement negotiations, and the purchase and implementation of a specialized solution.

This also means the incremental value added by a specialized solution is not the 99.9% of savings reported by the tool. The incremental value is the 9.9% it delivers above the 90% that I could deliver using Excel (plus the saving on the cost of engaging me, but minus the cost of purchasing the solution).
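A minimal sketch of that incremental-value arithmetic, with every figure invented for illustration, might look like this:

# Incremental value of a specialized tool over a quick Excel audit.
recoverable = 1_000_000.0    # total value that could be saved (hypothetical)

excel_share = 0.90           # share recovered by the quick Excel audit
tool_share = 0.999           # share recovered by the specialized solution

consultant_fee = 30_000.0    # hypothetical cost of engaging me
tool_cost = 250_000.0        # hypothetical cost of the solution

# The tool's incremental value is only the slice above what Excel already
# captured, adjusted for the difference in costs.
incremental_savings = (tool_share - excel_share) * recoverable   # 99,000
incremental_value = incremental_savings + consultant_fee - tool_cost

print(f"Incremental savings: {incremental_savings:,.0f}")
print(f"Net incremental value: {incremental_value:,.0f}")   # negative here

With these invented numbers, the specialized solution destroys value, which is exactly why the timing and cost of procurement matter as much as the headline detection rate.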

Down the road, the data I used for my audit was exported to a BI tool. This makes it easier to analyze the data and find the same mistakes. Today there is a new generation of BI tools, which are agile, cheap and lightweight.

But whilst tools are becoming cheaper and more powerful, it is too easy to focus on the cash costs of tools and to neglect what people need to do, but tools can never do. We often take people for granted, even though people may be part of the problem that needs to be solved.

In my project for the financial institution, the most complex component was establishing organizational consensus and acceptance for the project. It was obvious that the process and the resulting findings would reflect on various departments, and some stakeholders might feel concerned by the findings. The key challenge was not the technical part but rather the internal sales process: applying sensitivity in order to create a joint organizational effort whose goal is an improved level of audit and control, as well as improved risk management. This obstacle has nothing to do with technology. It is all about people. The source of the challenge lies with people, and only people can overcome it.

Lastly, I recall a situation 7 years ago when I was still running Compwise, selling specialist solutions to telcos. TMN developed an internal tool for churn analysis, with the help of their local IT partner. Based on their CDRs and tariffs, TMN’s in-house tool delivered 90% accuracy compared to the 99.9% accuracy of the Compwise churn simulation tool. TMN invited me to a demonstration of the results. My response was… “well done”. For them, 90% accuracy delivered the right return for their stakeholders. Who am I to argue otherwise?


***Sigh***

I really have better things to do than to write a blog about everything that is wrong with the TM Forum’s new Revenue Assurance Maturity Model. I really do. Nothing I could write, or do, will influence the ‘best practice guidance’ issued by the TMF. The people running their RA team have a fixed agenda. That agenda is far more transparent than the supposedly collaborative process by which they issue their standards. In case you missed the press release, the agenda is most easily illustrated by the following:

  • Israeli software firm cVidya leads the TMF’s Enterprise Risk Management group;
  • Israeli software firm cVidya co-leads the TMF’s Fraud Management group; and
  • Israeli software firm cVidya leads the TMF’s Revenue Assurance group.

Does this suggest that TMF guidance draws upon a wide-ranging and representative sample of industry opinion about how to manage risk, fraud and assurance? You can decide for yourself. I state facts. It is also a fact that, over the last five years, the TM Forum has repeatedly issued guidelines which understate the importance of the employees of the telco, in order to push sales of technology. When I assert this, I get a lot of criticism. Naturally the criticism comes from people (however impressive machines are, they still cannot advocate for themselves). And so, the theory goes, I must be biased (after all, I have a freebie blog!) whilst the people I argue against must be honest, decent, unbiased folk (who only need to generate millions of dollars of revenue by selling software). So let me make a few, very succinct points about why the new RA Maturity Model proves that the TM Forum evades transparency wherever possible, and always seeks to undermine the value of people in order to promote sales of software.

1. Nobody told you that the new RA Maturity Model was coming out.

If you are employed by a company that is a member of the TM Forum, you have the right to comment on the draft documents they issue, prior to formal ‘approval’. Approval itself is a mystery – it is not clear who actually decides what is approved, or how the decision is reached. But at least you can comment, saying if you like or dislike what they produce. Or you could, if you knew that a new document was coming out.

My point here is simple. In a few weeks, expect a press release from cVidya which plainly states that the TMF has approved the new RA Maturity Model, and that cVidya has made it available as a software add-on to their existing product suite. What the press release will definitely not say is that the TMF has approved “GB941 Revenue Assurance Solution Suite 4.5” and that cVidya have updated their products accordingly. The press release will not use that language because nobody knows that “GB941 Revenue Assurance Solution Suite 4.5” is code for the new RA Maturity Model. And that is why the TM Forum sends out notifications alerting its members to the release of GB941 Revenue Assurance Solution Suite 4.5, without bothering to mention what is in it.

To illustrate my point, last week I contacted one of the four telco employees listed as authors of the new RA Maturity Model, to ask if he knew when the document was being approved. He had no idea that the deadline for comments was only days away. He admitted he had not even read the draft document. If the supposed authors of the document do not read and approve it, then who does?

2. The old model stated that people were an independent and crucial dimension in the determination of maturity. The new model deletes this dimension.

The original TMF RA Maturity Model had 5 dimensions: Organization, People, Influence, Tools and Process. In order to score as a fully mature organization, the organization had to be fully mature in every dimension, without exception. This was a straight copy from the Software Engineering Institute’s original Capability Maturity Model, and drew on their empirical evidence. The new TMF RA Maturity Model has 4 dimensions: Organization, Process, Measurement and Technology. Spot the difference? There are still a few questions about people, now included under the Organization dimension. But no explanation is given to justify this radical change, which demotes the importance of people, whilst putting even more emphasis on technology. In fact, two of the four dimensions are now dominated by technology, because the new ‘measurement’ dimension only makes sense in the context of technology to provide measurement.

But to fully understand the way in which people have been demoted in the new model, you need to appreciate the following…

3. The old model said that the assurance chain is only as strong as its weakest link. The new model just takes an average.

The old model was built on a straightforward but important principle. When many parts have to work together, the weakest performing part sets the limit on the overall performance. Good organization but weak process will deliver weak performance, good tools but weak people will deliver weak performance, and so on. The old RA Maturity Model emphasized the importance of improving maturity across all the dimensions in a coordinated way, because spending a lot of money or effort to improve one dimension would be wasteful and ineffective, if the other dimensions were left far behind. The new model just takes the aggregate score across all the questions, and translates this into the overall level of maturity.

This means that in the new model, even if no effort is put into recruiting and developing staff, a high maturity score is still possible by simply putting more money into technology and the things that senior managers tend to do.

Once again, the new RA Maturity Model deviates from a key principle of the Capability Maturity Model, the very principle which led to its adoption in the original RA Maturity Model. No justification is given for this fundamental change of approach. The document begins by suggesting reasons why a new version of the maturity model was needed, such as the increasing popularity of digital services, and a different ‘ideal’ for revenue assurance (whatever that means). However, these reasons cannot possibly explain the much more fundamental changes that have been made in practice, which are presented without any reasoning or data to support them.

4. The new model makes it too easy to attain the highest level of maturity.

In the old model, to attain the highest level of maturity, the organization had to achieve the highest level within each of the five dimensions. It was a simple idea, which expressed how difficult it should be to achieve the ‘ideal’. In effect, an optimal organization could only exist if an optimal answer was given to every single question. Is this not obvious common sense? How can the whole organization be optimal at anything, if some crucial elements are sub-optimal?

The new model not only brings in averages, but sets low expectations. To be scored at the highest level of maturity, the telco need only achieve 80% of the maximum possible score. That means a telco can completely fail to do some important tasks, like adequately training staff, or reviewing new products, and still be assessed as ‘optimal’ at revenue assurance.
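To make points 3 and 4 concrete, here is a minimal sketch contrasting the two scoring philosophies. The dimension names come from the models as described above; the scores, and the assumed weight of the ‘people’ questions inside the new Organization dimension, are invented for illustration.

# Weakest-link scoring (old model) versus averaging (new model).
old_dimensions = {   # original model: five dimensions, scored 1-5
    "Organization": 5, "People": 1, "Influence": 5, "Tools": 5, "Process": 5,
}
old_maturity = min(old_dimensions.values())   # = 1: neglected people cap the result

new_dimensions = {   # new model: four dimensions, scored 1-5
    "Organization": 0.75 * 5 + 0.25 * 1,   # 'people' questions folded in at
                                           # an assumed quarter weight = 4.0
    "Process": 5, "Measurement": 5, "Technology": 5,
}
new_maturity = sum(new_dimensions.values()) / len(new_dimensions)   # 4.75

print(old_maturity)       # 1 -- immature, because people were neglected
print(new_maturity / 5)   # 0.95 -- clears an 80% gate, so rated 'optimal'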

5. There is obvious bias in the individual questions of the new model.

Consider the following questions, all taken from the new RA Maturity Model:

Is appropriate budget made available for the supply of RA technology?

Is appropriate budget made available for the deployment of RA technology?

Is appropriate budget made available for the operation of RA technology?

Is appropriate budget made available for the on-going support and maintenance of RA technology?

In contrast, there is only one question that might be interpreted as relating to another crucial aspect of a revenue assurance budget.

Is the resource profile of the RA team reviewed periodically to ensure it is staffed appropriately?

There are many other examples of how the questionnaire is slanted, but this example neatly illustrates the main problem. It was written by people obsessed by using software, and indifferent to the alternatives.

6. And all the rest…

I could go on for much longer, and in much more detail, but people complain that I rant for too long, so I will not go into much more detail. My main point is made: the new RA Maturity Model deliberately places less importance on people in order to focus even more attention on software and the budget to buy it. But there are very many other flaws with this work.

The new model repeatedly confuses the revenue assurance maturity of the whole organization (the very clear purpose of the original maturity model) with the maturity of a nominal RA Department. It even talks about the ‘ideal’ RA function, as if all that matters is the function, and not how the rest of the business behaves. The goal of revenue assurance is holistic, making demands all across the telco, and the original model sought to empower RA managers and staff by making this clear. Also, the business should have the right to split up work between different departments in any way that best suits them. What matters is the overall result to the organization, not the ego of some guy with the job title of ‘Head of RA’.

The new revision was supposedly needed to keep up with technology, but its understanding of technology is backward-looking. Time and again it refers to ‘RA technology’ in ways that imply this technology must be separate from other technology. RA is a goal, not a technology. There is no reason why the same technology might not satisfy multiple goals, including the goals of RA. Yet the new model takes no account of the impact of Big Data, and other trends towards mass aggregate use of data across the enterprise. In fact, it still has a prejudice against using data from ‘secondary’ sources, even whilst Big Data is making a nonsense of the idea that data can only be trusted if it comes from ‘primary’ sources.

The new model claims to be a simplification of the old model, but it is not. The old model had five answers to every question, a simple way to express how every answer to a question maps to one of five levels of maturity. By destroying this mapping, the new model is opaque, and does not represent maturity as a stepwise improvement that must go across all dimensions.

As is sadly typical of the TMF RA team leaders, the new model lacks transparency. This fits with its increasing complication, which is hidden from view and then misrepresented as simplicity. The new equations to calculate maturity are not visible to the user. The old assessment could be performed with pencil and paper, whilst the new one must be done in a Microsoft Excel spreadsheet, because of the equations hidden within it. All the questions and answers in the original model were written out in full, so everybody could see them and implement them as they wished. Because the old model was transparent, telcos were free to tailor the model if they wanted to. There is some irony in this fact, because Gadi Solotorevsky often gave presentations about the ‘TM Forum RA Maturity Model’ in co-operation with Telefonica, even though Telefonica had very clearly changed the model to reflect their point of view. As the new document explicitly states, it would not be possible for a telco to change the new model, even if they wanted to, because of the way the equations have been implemented.

It should be noted that the new document claims to have improved on the old model because it has ditched the weighting scheme which was used in the original model. However, it is important to reflect on why the original model had such an inelegant weighting scheme. The reason was that the weightings were the result of many people’s contribution to the original model. If we surveyed the opinions of ten people about how important question A is, relative to question B, we might expect ten different answers. To get to an answer, the original model just totalled the weightings proposed by all the contributors, and used the average. It was not a perfect system, but it was clear and fair. The new model says it has improved upon this. However, I cannot work out how it would be possible to do this, unless just one or two people decided to impose their will on the work. As such, the new model must be much less of a collaborative team effort than the old model was.

Which leads me to my final point. When I was at WeDo’s user group event, I saw an excellent presentation by Daniele Gulinatti, VP of Fraud Management & Revenue Assurance at Telecom Italia. His presentation struck a chord with me, because it was all about real people, and getting the best from his team. They delivered great results by using imagination and good processes, irrespective of the limits on their technology budget. Some of his team were in the audience, and I can vouch that I could feel their enthusiasm from the other side of the hall. So I find it hard to reconcile Daniele’s effervescent humanity with the fact that he is listed as one of the authors of this stilted, cold TMF document. On the one hand I see a manager who clearly understands that superior results can only come from a motivated team. On the other hand, I see a TMF document that treats people as inferior to, and more disposable than, machines. What can I do, but shrug my shoulders, and wonder how this is possible?

Perhaps this divergence is natural in human affairs. Many managers want official-sounding documents to show to their bosses, arguing they should have a higher budget. I was always conscious of this potential pitfall with the original RA Maturity Model. Even though it explicitly presented a strategic overview, there was always the prospect that it might be manipulated to give quick budget wins. That is why so many vendors and consultants copied the idea (but not the content) in the hopes of boosting their sales. Their versions of the RA Maturity Model soon disappeared. The original TMF RA Maturity Model has thrived, because it really was long-term, strategic, and built on solid foundations. And that means curbing bias (like the need to maximize this year’s software budget) in order to present a more balanced model that genuinely considers what is needed in the long-run (like a motivated team, which receives proper rewards for its successes).

But like barbarians, the ‘leaders’ of the TMF team are determined to wreck anything that does not immediately gratify them. Maybe I am in the minority. Perhaps the majority agrees with their approach. If so, I would accept the will of the majority. But we will never know, because whilst the original RA Maturity Model was written in 2006 with the involvement of just three telcos, the new RA Maturity Model has been written in 2014 with the involvement of just three telcos. Getting three telcos to contribute to an RA document in 2006 was a minor miracle. In 2014, it is a sign of apathy, or worse. After all, people have had 8 years to get used to the idea of an RA Maturity Model. Only a few of us understood the idea in the beginning. The TMF claims that half of the respondents to its RA surveys use the model. But despite that, they could only get MTN, Telecom Italia, and Telefonica Chile to contribute their conception of the new ‘ideal’ for revenue assurance. With the greatest respect to the people working in those telcos, why do they know the new ‘ideal’ for revenue assurance better than all the other people who now work in telco revenue assurance? And judging by the person I spoke to, what confidence is there that anybody currently working for a telco has actually read the whole document?

The team who wrote the original RA Maturity Model produced the questionnaire using a voting process. Questions were proposed, answers proposed, people voted on which ones made the cut, and which were rejected. And then, there was a vote on the weighting of the questions. If the TMF really wanted the opinion of telcos, why did it not run a survey on the content of the new RA Maturity Model? Such a thing would have been impossible in 2006. In 2014, the same task is incredibly easy. I believe it is because the whole point of their ‘collaborative’ process is to exclude the involvement of telcos, whilst making it appear that they invite their input. Everything is done to make it hard to participate or respond, from requiring people to fly around the world to attend meetings in person, to hiding equations in spreadsheets, to sending out notifications about “GB941 Revenue Assurance Solution Suite 4.5”. The TMF does lots of surveys about lots of things. Why not decide the new ‘ideal’ for revenue assurance by doing a survey? The only possible reason is that the answers might not support the leaders’ agenda.

The new RA Maturity Model is a broken product. But that is no surprise: it is the output of a broken process. The TMF has no interest in fixing one; it is beyond my abilities to fix the other. The only good thing about the new model is that it will die in a year or two, a victim of its own failings. It says too little about people – and people often last longer than technology. It is too easy to reach the top level of maturity, meaning there will soon be calls for an upgrade. It does not promote the kind of balanced approach needed for long-run improvement. The equations are too complicated to understand, and have been hidden from view, meaning they cannot be fixed if they do not work. These fundamental flaws have doomed it to an implausibly short life for a supposedly ‘strategic’ model. But then, we should not be surprised. The real authors of this revised model are worried about this quarter’s sales figures, not about the next evolution of a mature strategy for business improvement.
