Archive for March, 2010

I have been trying to think about how RA has moved, changed and re-invented itself in the 13 years I have now been involved, in some form or another, with the discipline. In my view there have been some clear movements, and if I look at those, then perhaps I can take some liberty and crystal-ball gaze into the future.

The first trend is the emergence of increasingly powerful software tools to specifically identify, address and help manage revenue assurance. There should be little doubt that this has helped RA investigate new areas of revenue streams for potential loss while also automating much RA activity. This can be very beneficial for those new to RA, who can draw on the experience of the vendor and deploy an RA capability quickly, usually with some financial benefit. The flip side is that the maturity of the tool set exceeds the maturity of the RA people, who may miss valuable steps in understanding the underpinnings of RA work and how their business works. The risk is that this produces an over-reliance on the software tool to “solve” RA. The greater risk, though, is that RA is “defined” by the software tool, its capabilities and its roadmap, and not by the operator. Missing those early baby steps in RA can lead to a strategy that is aligned solely around the technology deployed and is hence, however well intentioned the vendor may be, unbalanced.

This is similar to an issue faced by fraud management. I have seen many operators who struggle to adapt to new fraud types until the vendor issues a software update, and who will argue passionately that because their system does not detect a type of fraud, it is not really fraud. Once the software update is deployed, it can be thought of as fraud again.

This leads to the second trend, which is the growing number of organisations undertaking services work in the revenue assurance domain. If RA first got into the limelight during the dot.com and tech busts of the late 90s and early 2000s, then we should not be surprised that it has been reinvigorated through the latest GFC. The mantra of billing or charging accurately for every event is particularly powerful when revenue growth is limited. But perhaps these service organisations are also seeking to fill a void in operators where, due to the reliance on technology described above, the ongoing return on investment from RA is not what was expected.

But what might this void look like? Firstly, as RA becomes more operationalised into the business, the RA team comprises a greater number of staff whose role is orientated around the RA tool. I have mentioned this already, but it leads to a distancing from the real data and business processes and, more particularly, the need to think and challenge conventional ways is reduced. This is not unique to RA, of course: increased reliance on computing, and the automation it provides, means the loss of the real experts in a system or process and their replacement with defined work instructions that ensure consistency, if not always quality. You could speculate that this is how RA was able to come into existence in the first place, as the complete and detailed knowledge once held by system owners across the revenue chain was lost. That loss came not just from automation, of course, but from the increased complexity that automation enabled.

The risk is that as RA becomes more automated, the room for thinking disappears as reducing cost becomes an increasingly important corporate objective. And so I expect we will start to see and hear of an increasing number of examples of RA missing significant leakage or undertaking poor quality work. In fact, this trend is already evident: vendors have indicated to me that when they have run a proof of concept at an operator, they have found leakage that the incumbent RA system missed. By the same measure, operators have spoken to me about an increasing false positive rate, the diminishing value of leakages identified, and a shift towards merely alerting a potential issue rather than alerting, detailing and then helping in the resolution of those issues. This can only be due to looking for the same leakage day after day, month after month, rather than expanding thinking to look into areas not yet automated.

The last trend I want to comment on is the move from reactive to proactive RA – however one chooses to define that. A quote from Fernando Sales (Gerente General de Inteligencia Comercial, Telefonica Venezuela) in a 2009 Hugh Roberts presentation summed this up nicely for me: “the worst thing that I did was to set our [RA] department up as a profit centre – we are still paying the political price for this in our relationships with other business units, particularly IT”. Perhaps the quote does not align to the reactive-proactive issue, but on further inspection it suggests to me that a function predicated on finding and recovering lost revenue can create a short, or even medium term, star – but one that burns twice as bright for half the life. Within an operator, leakage should reduce over time. Complexity may increase, but so should RA operational efficiencies; new products may be launched, but so should more effective detection mechanisms; new business models may be introduced, but RA should know its own business and where the risks exist. And so RA that built its business justification on leakage will find it contributes diminishing returns, and investment becomes more difficult to justify. Looking forward, then, I cannot see how any RA function will continue to justify itself on leakage – the only question for each operator is how long that takes. Hence the move to the “nirvana” of proactive RA, where RA does not have to find loss, it has to prevent it; and, as importantly, where the prevention of those losses is recognised as tangible and of business value. This is an issue RA must solve.

Against this backdrop, it should hardly be surprising that RA people and vendors have sought to reinvent and legitimise themselves in many different ways. This includes aligning to more established functions, extending their remit beyond traditional switch-to-bill audits, moving into cost domains, moving from reactive to proactive, supporting transformation efforts and seeking credibility through industry standardisation. This post is not about my view on any of these, but it is important to think on the motivation and rationale for any extension beyond traditional RA and to understand in what direction, sometimes irreversible, this may take both the individual function and the overall discipline. The risk for RA is that it becomes too tool-orientated and too operationalised, such that it starts making errors (including errors of omission) while all the time returning less value. RA loses its attention to detail and understanding of the business, and so becomes part of the problem rather than part of the solution. Further, the standardisation of RA techniques sees the gradual migration of the strategic thinkers from RA teams to vendors, consultancies, other functions or other industries.

Having forecast gloom, I believe RA still has the opportunity to add real and lasting value, but to do so it needs to address the following, probably within the next 12-24 months:

· Developing software tools that expose data and its treatment to the RA function to ensure the end-to-end process is transparent and understood by RA

· Having the different standards organisations define RA by the work that needs to be done, and not by what the tools that seek to address it can do

· Defining “proactive” RA and a value proposition that extends beyond financial measures, and ensuring this is communicated and understood at the most senior levels of the organisation

· Enhancing the alignment in RA between data integrity activities and process improvement to drive root cause resolution; and using tools and techniques already developed in these areas

· Extending RA, and gaining acceptance of its methodologies, across other industries to allow cross-industry movement of RA people

· Telco RA learning how these challenges are met in other industries and incorporating that into best practices


In future, talkRA will be inviting distinguished guests to write one-off blogs for the site. This first guest blog is by Mark Yelland, consultant and co-author of ‘Revenue Assurance for Service Providers’.

I must be missing something. Is revenue assurance more safety-critical than aerospace, or does it require higher reliability than a satellite? It must be, because both those industries have been using sampling strategies without problems for years, and yet one never hears about sampling as an approach within RA. I use these as examples because both have had high-profile failures and yet neither has moved away from sampling as an approach.

So let us consider some of the benefits of a sampling approach for usage.

For usage-based products, using recognised sampling plans such as US Military Standard 105E (MIL-STD-105E), for batches over 0.5m records the sample size would be just 1,250 to achieve an acceptable quality level of better than 0.01%, or 100 parts per million, which is probably good enough for RA. But that is less than 1% sampling; surely that can't be right? According to well-established sampling theory, tried and tested over decades, it is. OK, you may not be comfortable that there is only a 95% confidence level that your sample will detect all errors above 0.01%, but you can always calculate the sample size required to deliver the confidence level you need. Even at a 10,000-sample-per-batch level, it still represents a significant drop in the volume of data that needs to be processed.
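For readers who want to sanity-check the arithmetic themselves, here is a minimal sketch (in Python, purely illustrative) of the simple binomial calculation behind the idea that you "can always calculate the sample size required". It is not a MIL-STD-105E table lookup – acceptance-sampling plans are built around acceptable quality levels and acceptance numbers, so the table values will differ – but it gives a feel for how sample size, error rate and confidence trade off. The parameter values below are illustrative only.

```python
# Illustrative binomial arithmetic for sample sizing; NOT a MIL-STD-105E table lookup.
import math

def detection_probability(n, p):
    """Probability that a random sample of n records contains at least one error,
    assuming errors occur independently at rate p."""
    return 1.0 - (1.0 - p) ** n

def required_sample_size(p, confidence):
    """Smallest sample size giving the desired probability of seeing at least
    one error when the underlying error rate is p."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p))

# Illustrative figures: an error rate of 0.01% (1 in 10,000) and a 95% target.
p = 0.0001
print(detection_probability(1250, p))   # chance a 1,250-record sample sees an error at this rate
print(required_sample_size(p, 0.95))    # sample size for a 95% chance of seeing at least one error
```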

With less data to process, the analysis speeds up and issues become visible sooner, so the potential impact of any problem is certainly no greater, and potentially less, than with 100% sampling.

With less time spent performing data analysis, the analysts can focus on non-system-based revenue assurance, such as prevention, audit or training.

But what constitutes a batch? There are a number of ways to define one: you could argue that all calls from one switch in a day, or calls terminating within a timeband, or all TAP calls in a day, or all wholesale calls in a day, and so on each represent a single batch. Or you might argue that the process is continuous, in which case there are sampling plans for that too – I wanted to keep the discussion simple. In all cases, the only requirement is to make a case for a definition of a batch that you are happy to justify.

How do you take a random sample from the batch? Again there are a number of different approaches, for example capturing every nth call that is terminated. Because you have no control over the start time, duration, call type or destination, and the traffic is representative of the distribution of traffic on the switch or network, this is close enough to a random sample not to compromise the findings. Again, I am happy to discuss or respond to challenge on this.
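As a rough illustration of the "every nth call" idea, here is a minimal Python sketch of systematic sampling over a file of CDRs. The file layout, the sampling interval and the downstream check_against_billing() call are all hypothetical; real CDR formats vary by switch vendor and mediation platform.

```python
# Illustrative sketch of "every nth record" (systematic) sampling over a CDR feed.
import csv

def systematic_sample(cdr_path, n):
    """Yield every nth CDR from a CSV file of call records."""
    with open(cdr_path, newline="") as f:
        reader = csv.DictReader(f)
        for i, record in enumerate(reader):
            if i % n == 0:
                yield record

# Example usage (hypothetical file name and downstream check):
# for cdr in systematic_sample("switch_a_20100301.csv", 400):
#     check_against_billing(cdr)   # hypothetical downstream RA reconciliation step
```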

So what are the downsides?

Some organisations utilise the RA system to help with Sarbanes-Oxley compliance. I am not an expert, but my expectation would be that a process capable of detecting errors at the 1 in 10,000 level would probably be considered a suitable tool, given the level of errors in other parts of the business.

The data is used for Business Intelligence reporting, so it needs to be 100% complete. But business decisions are not based on whether an answer is 5.01 or 4.99, or whether a trend is up by 1.01% or 0.99%; they are based on more significant gaps – for example 5 versus 1, or 1% versus 3% – simply because the uncertainty about external factors makes reporting to that level of detail pointless. This is the usual argument concerning the difference between accuracy and precision. The probability is that the sample will provide the accuracy required to make business decisions.

We want to see that all the records are being captured. RA is about balancing costs against risks: if the increase in your operational cost exceeds the predicted value of the calls missed through sampling, then you are acting against the best interests of your business. And with the drive to lower call prices, this equation moves further away from 100% sampling over time.
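To make that cost-versus-risk comparison concrete, here is a back-of-envelope sketch. Every figure is a hypothetical placeholder; the point is only that the comparison is between the extra cost of processing everything and the expected value of what sampling might miss.

```python
# Back-of-envelope cost-versus-risk comparison; all figures are hypothetical placeholders.
full_processing_cost = 50_000.0   # monthly cost of processing 100% of usage records
sampling_cost        = 12_000.0   # monthly cost of the sampled approach
missed_leakage_value =  8_000.0   # expected monthly value of leakage the sample would miss

extra_cost_of_full_coverage = full_processing_cost - sampling_cost

if extra_cost_of_full_coverage > missed_leakage_value:
    print("Sampling is the better economic choice.")
else:
    print("Full (100%) processing pays for itself.")
```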

As yet I have not heard a convincing argument that sampling on usage is not valid, but I am open to offers.

I would not elect to use sampling on non-usage or standing data; there are few benefits to be gained, primarily because the volumes involved are usually considerably smaller than for usage data and the rate of change is slower, so 100% reconciliation on a periodic basis works for me.

The real problem is that people are reluctant to accept sampling theory. They create individual scenarios to justify not using sampling, without applying the check: how realistic is that scenario, and what would the potential cost be? It is a confidence issue – have confidence that the mathematics that has been used for many years is as valid today as it always was, and be prepared to defend your position.

And just to make life interesting – if you accept that sampling is the correct approach, then the argument about real-time RA disappears, which is why you are unlikely to find vendors pushing sampling.

I am not anti-tools; I am strongly in favour of tools that can be easily used and are affordable for the smaller player. Using tools in a smarter manner has to be the right approach.

A final note: using test call generators is not random sampling; it is using a control sample to monitor outcomes.


Gadi,

In your recent blog you wrote about your hope that revenue assurance will one day be regarded as a profession. Let me remind you of how your employer, cVidya, repeatedly obstructs the transformation of revenue assurance into a genuine profession.

To begin with, you wrote that “revenue assurance” is an established name and we should stick with it. As my colleague Güera Romo recently pointed out, a common language is the foundation of a common community and hence of a common profession. At the same time, your own company now says it sells “revenue intelligence”. Nobody knows what “revenue intelligence” is. As a vendor, your company is naturally inclined to invent new names for old things, in order to enhance sales. Whilst this helps cVidya’s sales, it runs directly counter to the spirit of your plea for consistent terminology across the industry. If you cannot restrain the impulse in your own company, I see no reason why anyone else should respond to your pleas for consistent terminology.

Vendors have a primary interest in competing, not in building a common profession. That is why we see cVidya not just competing with all other vendors, but we also see cVidya competing with – and systematically undermining – all RA professionals who understand that some RA challenges cannot be addressed with software. Just the other day I saw a cVidya-drafted TM Forum standard that was about “coverage” in revenue assurance. The word “coverage” had an established meaning. That did not discourage cVidya from redefining the word to suit its business interests. Coverage relates to the entire scope of revenue assurance control points, to all potential leakages, and to all activities to detect or prevent leakage. cVidya’s proposal was to redefine the word so only detection tasks that can be automated will be within the scope of the TMF’s new “coverage” model. As you are the leader of the TMF’s RA team, I am dumbfounded that you could be blind to the difference this makes. It means that if something cannot be done by cVidya’s software then it is not part of RA and if a leakage cannot be found by cVidya’s software then it does not even exist. This is no kind of coverage model as I understand the words. I believe any sincere professional will agree that the scope of revenue assurance should never be defined to perfectly match the functionality of one vendor’s products.

cVidya’s competitive and anti-professional instincts do not stop with its attempts to bias the TM Forum’s standards. You, Gadi, elected to write a blog in competition with most of your industry peers. I invited you to write at talkRA alongside many notable professionals who accepted my offer. We count employees of rival vendors amongst the talkRA authors. You gave a unique reason for declining the offer to write at talkRA: you would not blog on the same website as competitors. You cannot expect a profession to flourish if professionals indulge in competitive behaviour like that. In contrast, you chose to write on your own website, where there is no prospect of readers finding alternative opinions from other professionals. Your website is registered in the name of cVidya and only ever presents opinions that are favourable to your employer.

Unlike most RA professionals, I have seen the consequences of standing against cVidya’s anti-professional behaviour first hand, when I ruined cVidya’s underhand attempt to control the RA profession through the World RA Forum. I revealed what cVidya wanted to hide: that the Forum was owned and set up by cVidya to promote sales. The World RA Forum promised to recycle TM Forum ‘best practice’ to its members – code for circumventing the rules on distributing the intellectual property of the TM Forum. With one hand cVidya planned to steal another organization’s intellectual property, and with the other hand it would have gifted it to telcos on condition that cVidya gets exclusive opportunities to sell its software. In its short life, the World RA Forum talked repeatedly about professionalism, whilst showing scant interest in following a professional and transparent code of conduct.

For professionalism to occur, it is necessary to have a degree of freedom of speech and for professionals to be able to engage in mutual dialogues and to critique one another in their shared pursuit of professionalism. Professionals need more than a code of conduct that exists on paper. Codes of behaviour must also be enforced by mutual consent, and if necessary by punishing those who behave improperly. These objectives do not sit well with the instincts of people who put competition ahead of all other concerns. That is why neither Rob Mattison’s GRAPA, nor cVidya’s manipulative attempts to influence the professionalization of revenue assurance can ever succeed. At best they can only deliver an empty shell. They can deliver outward appearances – mere words – but not the substance of professionalism.

In setting up talkRA, it was in my mind that professionalism begins with a foundation of openness and transparency between equals. We do not see this encouraged or fostered by either cVidya or by GRAPA. Gadi, I am not hopeful that you will approve and publish this blog comment on your website, as you have rejected previous criticisms of cVidya. Your site is ultimately another kind of marketing vehicle for cVidya. However, this open letter will be reproduced on talkRA in the spirit of a transparent dialogue between professionals. I urge you to stop asking others to aid the professionalism of revenue assurance, and instead to ask harder questions about how and why cVidya regularly stands opposed to the development of a genuine revenue assurance profession.


Here is a quick roundup of recent news from the big revenue assurance vendors.

  • cVidya releases ‘Integrated Revenue Intelligence Solutions’ (IRIS), a rebadge of the offerings acquired through the takeover of ECtel. See here.
  • Subex launches version 4 of its RA solution, Moneta, which includes its ‘DICE’ data cube analysis engine. More here.
  • US firm Synaptitude Consulting will now be “powered by Lavastorm” after agreeing a deal with Lavastorm suppliers Martin Dawes Analytics. See here.
  • Meanwhile, Martin Dawes Analytics also announced the appointment of utility and comms industry veteran Bill Belcher as Director of Sales. More here.

According to Data Warehouse Appliance suppliers Dataupia, the world’s largest Oracle OLAP Database is used by Subex. The 512TB system is used to supply online access to years of CDR data for a Subex managed service. What is more, Dataupia says the system has been running continuously for two years on a 24×7 basis. See the press release here.

In the announcement, Vinod Kumar, Group President of Subex, said:

“In the business of Managed Services, success is measured by customer satisfaction and efficient and robust production, delivered on budget. Together with Dataupia’s Satori Servers we have managed to delight customers such as front-line analysts who get their analysis done ten times faster.”

Vinod recently joined me on the talkRA podcast to talk about managed services. You can listen to the podcast here.
