Archive for August, 2012

So, if I understand it correctly, the way to become the world’s youngest billionaire is to sit up late at night, in your college dorm room, drinking beer, and hacking code that lets people see photos of faces of fellow students. Go figure. I thought making money was about selling something that other people want to buy. To be fair, in a way, Mark Zuckerberg did make and sell something that other people wanted to buy. He did that when selling stakes in Facebook, first to private investors, and then publicly. He made something – a website, essentially – that other people wanted to buy. That sales aspect worked out great. Lots of people wanted to buy a piece of Facebook, because they expected to make money by selling that piece of Facebook to somebody prepared to pay more, just to own (and then sell) a piece of Facebook. Peter Thiel was one of the guys who did that best. Thiel took a chance when pumping USD500K into Facebook in the early days, and has just taken his first opportunity to offload USD400m of stock, not that everyone is happy about that. The problem with buying and selling is that eventually you have to find a buyer who wants to own a piece of Facebook not because they intend to sell it, but because they want to keep a slice of Facebook’s operating profits. Eventually you have to find a buyer who wants to buy into the revenue stream that the website generates. And if the website does not generate the profits that justify the website’s stock price, then somebody ends up looking a sucker-berg, instead of a Zuckerberg.

The dismal collapse in the share price of Facebook has been, and continues to be, widely covered in the press. See here for a telling and recent analysis from The Economist, which is downbeat about Facebook’s potential for revenue and profit growth. Some of this is perfectly understandable schadenfreude. There is no need for me to add to that, especially as a 50% drop in the value of Facebook still leaves a company that is valued at around $50 billion. It is more interesting to comment on what this drop tells us about how people value communication generally. It tells us that people are lousy at valuing it, especially when trying to estimate future sales. What is communication worth? Well, that rather depends on what is being communicated, by whom, to whom. For example, most adverts are worth zilch, when sent to me. In fact, they can be worth less than zilch, because I am the kind of stubborn grumpy goat that will consciously punish annoying companies by buying the products made by rivals, not that anyone seems to have noticed when I do that. But evaluating the commercial worth of communication is very pertinent to making a rational estimate of the worth of any new project, initiative, or venture that is based on communication. In short, for all the enhancements in communication, many people rely on rubbish data when it comes to estimating risks and rewards.

Part of the problem with data is that even if you have a list of assets, such as customers (including their name, address, phone number, present location, relationship status, shoe size and the length of their inner ear canal), that does not mean you can accurately estimate how much each asset is worth. It is bad enough estimating the value of a house or a car. What is the value of a Facebook user? And when I pose that question, I immediately remind myself that there is no such thing as a ‘typical’ or ‘average’ user. Each person is different. If 95% of users never do anything that would generate revenue, then they are very different people from the minority who do the things that generate revenue. So understanding the value of an asset is not like knowing the total number of users and the total amount of revenue that is generated, and then dividing one by the other to come up with an average. That ratio is just a pseudo-fact… though it is the kind of pseudo-fact which is all too familiar to people working in the communications industry.
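To make the pseudo-fact concrete, here is a toy sketch in Python, with invented numbers that have nothing to do with Facebook’s real figures: when revenue is heavily skewed, the ‘average user’ describes nobody at all.

```python
# Invented numbers: 95% of users generate no revenue, 5% generate it all.
revenues = [0.0] * 95 + [40.0] * 5       # hypothetical revenue per user, USD
average = sum(revenues) / len(revenues)
print(average)                           # 2.0 -- the pseudo-fact
print(sum(r > 0 for r in revenues))      # 5 -- the users actually worth anything
```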

Knowing the detail about the worth of assets can highlight where assumptions break down. Is it a good thing when Facebook adds users? Superficially, the answer should be yes. But there is nothing good about adding 80m fake users. I think it is fair to surmise that if one person creates two accounts, that does not mean they will generate twice as much revenue for Facebook in the long run, though in the short run you might see a burst of revenue as advertisers get stung for meaningless activity. The point was illustrated when a BBC journalist created a bogus company that supplied ‘virtual bagels’, and found that this ‘business’ was liked by an improbable number of teenagers in Cairo. I doubt teenagers in Cairo really do have a special interest in non-existent foodstuffs.

When it comes to social networks, diving into data is a good thing, not just for managers but for shareholders and customers too. When we look at data, we must look for downsides as well as upsides. Facebook’s falling share price tells us that investors and banks did not place enough emphasis on the difficulties of translating a nice social toy into a money-making business. They were a bit too willing to accept the optimistic but vague assurances of those who stood to gain from an over-inflated flotation, though it is also possible that Facebook’s management do not understand their own revenue potential, and that can occur for many reasons. In particular, bias can be both conscious and unconscious. Obtaining and correctly assessing downside data is not just a technical, organizational and operational challenge. It can also go against the grain of human nature, especially when it feels like everybody else is backing a winner. Running with the crowd is the fundamental dynamic of any boom that turns to bust. The communications industry is as prone to that failing as any other, which is ironic given that one of its selling points is how much data it has. And this raises plenty of questions about the role and remit of revenue assurance, and how much it should be analysing future revenue potential as well as past and current cash collection. In an industry that is all about communicating with people, the recurring surprise is that those people continue to be total strangers.


Listeners to our last podcast may have picked up on the panel’s views on whether revenue assurance should merge with fraud management. My worry when responding to this question is that it encourages an answer that is already behind the curve. There is also an increasing need to respond effectively to the overlap between fraud and security threats. Many attacks on security will have a financial motive. There has been a rapid escalation in the scale of the security challenge, driven by the rise of smartphones, the cloud, and increasingly sophisticated products like mobile money. So framing questions about the relationship between revenue assurance and fraud management might distract us from adequately dealing with the links between fraud management and security. Too much focus on delivering synergies between fraud management and RA could divert attention from aspects of security that do not fit well with a traditional data analytic approach to revenue assurance. With that in mind, I recommend this excellent interview of Mark Johnson by Dan Baker for the Black Swan Journal. Mark brilliantly argues for the convergence of cybersecurity and business assurance, highlighting that threats are both internal and external, that the assets threatened are both internal and external, and that accidental error and omission is also a vital enabler for fraud. As Mark put it:

Revenue assurance and fraud vendors rarely pay any attention to cyber security, and I don’t think they fully recognize just how far convergence is going to push things — how hard it’s going to be to make a distinction between different types of security incident. We need to get beyond the silos and look at the total picture.

A good example: many fraud cases involve changes to rules or activating accounts on a platform somewhere. So the revenue assurance guy will reconcile and find 5.3 million people activated on the HLR, when the billing system says there should only be 5.25 million. But what’s often never explored are the platform security and cyber security issues that may be the root causes of those particular issues. They often just focus on the revenue leakage and the reconciliation rather than the true root cause.

Likewise, the cyber security guys focus on authentication, access rights, and data classification, but don’t seem to address the question: what are the revenue assurance implications of these cyber breaches? So a stronger business case needs to be built to understand the end-to-end issues, root causes, and costs. And I think they are really missing a trick there.
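The HLR example is worth dwelling on. The reconciliation itself can be as simple as a set difference, as in the minimal Python sketch below (with invented identifiers); the hard part, as Mark says, is treating the gap as the start of a root-cause investigation rather than the end of the job.

```python
# A minimal sketch of the reconciliation Mark describes.
# The MSISDNs are invented for illustration.
hlr_active = {"971500000001", "971500000002", "971500000003"}
billing_active = {"971500000001", "971500000002"}

unbilled = hlr_active - billing_active
print(f"{len(unbilled)} subscribers active on the HLR but unknown to billing")
for msisdn in sorted(unbilled):
    print(msisdn)   # the finding; the root cause still needs investigating
```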

Mark Johnson has an unusual talent for ‘blue sky’ analysis of emerging threats, not least because he can step back from the telecoms experience and draw on his experience of law enforcement and financial services. And Dan Baker is a thorough researcher with an independent frame of mind; his excellent report on business assurance is available via talkRA. Based on his recent output, my guess is that Dan is already doing the groundwork for future reports that bridge the gap between revenue assurance, fraud management, and security. I have only one reason to hesitate when suggesting you should read their interview – it is so good, I have nothing useful to add! I admit to being jealous of how well they make the argument for breaking down silos. My hope is we will one day look back at this interview, and consider it a starting point for the movement to integrate cybersecurity with business assurance. Given the explosion of risks faced by telcos, connecting security to assurance is not a choice, but a necessity.


I am proud to share the news that Revenue Assurance: Expert Opinions for Communications Providers, the collaborative book by all the talkRA authors, is now available as an e-book for Kindle! Since its publication by CRC Press, the hardback version has consistently topped the Amazon sales rankings for revenue assurance books. Amazon.com has priced the new Kindle version at a third less than the hardback – which proves the point that electronic communications makes everything cheaper. See here for the Kindle version on Amazon, and look here for more information about the book’s content and a comprehensive list of online retailers.


Subex, the Indian Business Assurance firm, has announced their Q1 results for FY13. They do not make pretty reading. In the press release, Subex boss Subash Menon emphasized:

“While the business climate is definitely quite tough with strong head winds in Europe and the telecom industry experiencing bad times, we are confident of maintaining our leadership position in the Business Optimisation space. Our current quarter results have been impacted by the change in revenue recognition and this will get evened out during the year. This change was essential to be in line with the changes in our revenue model.”

This quote makes it sound like Subex may have changed their accounting policy. The impact of a change of accounting policy always needs to be reviewed with care, to determine whether the change has caused a superficial fluctuation in the reported results, or if it is being used to disguise some more fundamental shifts in business performance. And I would review the impact with care, if I could find any details about a policy change. Anywhere. And I looked. For longer than I would like. Whilst I do not expect an accounting policy change to make press release headlines, it is bad form to refer to it in the release, and then fail to give any more information. Or maybe the phrase ‘change in revenue recognition’ is being used to describe some other, vaguer, explanation for Subex’s poor results. If so, it fails to serve the purpose. There is no way to assess how much leeway to give management for a supposedly one-off poor result, if the explanation of the poor result is too vague to understand.

Because there is no additional data, I can only comment on the Q1 numbers as they stand. And to say they ‘stand’ is generous. It would be more accurate to say that the numbers are precariously leaning over and stumbling around, like a drunk on his way home from the pub. Q1 net income from operations fell to a meagre USD15m, a drop of 24.1% on the previous quarter, and down 25.7% year-on-year. Recently I blogged that Subex was in a stable orbit of generating roughly USD100m in annual revenue, but subsequent quarters will need to see a big improvement if Subex is to avoid a substantial decline in annual revenue. Product income continued to drive the overall numbers, contributing 86.5% of revenue, but it fell precipitously, down 26.4% since Q4 of FY12. In contrast, service revenues were flat.

Subex has previously made a habit of responding to disappointing sales figures by keeping a tight grip on costs, but they may have reached the point where further efficiencies are very hard to find. Operating expenditures were up by 4.9% since the previous quarter, due to a sharp rise of 16.1% for employee and subcontractor costs. In their press release, Subex indicated their products generated a positive EBITDA of USD1.07m, implying their services are a slight drag on EBITDA. Consolidated EBITDA was USD1m, but this seemed to be propped up by USD1.1m of ‘other’ income, which is not explained further and is up fourfold compared to the last quarter. The company’s loss before tax was USD0.7m, and the after tax loss was USD0.9m, about USD3m down on the profits generated in Q4 FY12.

These poor results follow some depressing action for Subex on the stock markets. Subex shares are currently trading at around 14 rupees (25 US cents), having fallen sharply in late July due to the dilution caused by issuing shares to address Subex’s ongoing FCCB overhang. However, this is only a footnote in a story of decline, with Subex priced as high as 50 rupees a year ago, and over 500 rupees in 2007.

Amidst the gloom, Subex were doubtless glad of some good news to please stockholders, and this came in the form of a 5-year deal to provide the ROC RA and FMS to 14 opcos in the MTN Group; see the press release here. This must come as a disappointment to rivals such as cVidya, who had previously supplied their MoneyMap product to MTN South Africa.

It would be better not to speculate, though the signing of a multi-year deal may be a clue as to why Subash Menon thinks the ‘change in revenue recognition’ will get evened out over the rest of the year. One of the challenges in revenue recognition is to work out when to recognize revenue, and when to recognize losses, for contracts where the work stretches over a long period but where invoicing is infrequent. It is possible that Subex has not changed its accounting policy, and has instead suffered a hit this quarter due to anticipated losses for a large and extended contract. Normal practice is to recognize a share of the contract’s total revenue in proportion to how much of the contracted work has been completed. This is calculated at the time of preparing the accounts, to give a smoother and more meaningful figure than waiting to recognize revenue when the invoice is finally sent out. However, if it is anticipated that a contract will be loss-making overall, all of the expected loss must be recognized immediately; the sketch after the CFO’s quote below illustrates the difference. Because Subex’s quarterly figures show just the P&L, and not the balance sheet, it is possible that Subex have presented their Q1 revenues net of a new provision for loss-making contracts. Even though the provision would dent the figures in Q1, by having taken the loss then, Subex could enjoy a relative upswing in later quarters because the losses on ongoing contracts have already been provided for. It is worth reiterating that this is speculation, but it fits with the observation that this market is enduring intense price competition, and that lots of vendors have been tempted into signing loss-making contracts. This theory might also explain the rise in staff and subcontractor costs, as workload could have gone up to satisfy loss-making contracts. The cutthroat nature of the business assurance market was underlined by the MTN Group CFO’s comment on his deal with Subex:

“Subex was selected for this deployment after a highly competitive bidding process.”
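Returning to the revenue recognition theory, here is a minimal Python sketch of the accounting treatment described above. The numbers are purely illustrative and have nothing to do with Subex’s actual contracts or accounts.

```python
# Percentage-of-completion with immediate recognition of expected losses.
# All figures are invented, in millions of USD.
def recognise(contract_revenue, contract_cost, pct_complete):
    """Return (revenue, loss_provision) to recognise to date."""
    revenue = contract_revenue * pct_complete
    expected_loss = contract_cost - contract_revenue
    if expected_loss > 0:
        # Loss-making overall: the whole expected loss hits the P&L now,
        # even though only part of the work has been done.
        return revenue, expected_loss
    return revenue, 0.0

# A profitable contract recognises revenue in step with completion...
print(recognise(10.0, 8.0, 0.25))    # (2.5, 0.0)
# ...but a loss-making one takes its full expected loss immediately,
# denting this quarter and flattering the quarters that follow.
print(recognise(10.0, 12.0, 0.25))   # (2.5, 2.0)
```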

Following the failure of Connectiva, this is a time for vendors to hang tough and try to withstand the ‘strong head winds’ that Subash Menon mentioned. It could be that other vendors are on a similar path to Connectiva, if things do not turn around. That said, sometimes it is not necessary for a business to turn things around for themselves. Sometimes the goal is staying one step ahead of the rivals, and waiting for them to fail…


In part one of this article, guest author Shahid Ishtiaq described the research he performed into inefficient use of COTS business assurance tools. Now read the concluding part, where Shahid describes the CRAWLER solution, and how it improved the productivity of business assurance analysts in Etisalat.

The Pilot Project

A pilot project was launched internally in order to check if automating the “fixed investigation” element of assurance would increase the quality and performance of analysts. We named this project “CRAWLER”. In order to achieve our goal, many options were evaluated. We eventually settled on developing CRAWLER using a mix of VBA (a step up from simple recorded macros) and object-oriented programming. Combining both programming methodologies gave us strong control over the front ends of the OSS/BSS systems. This article will not go into the details of the program logic, but it is very simple and can be thought of as something similar to a Microsoft Excel macro. Nothing was visible to the analyst; the CRAWLER logic queried all the systems and brought the data to the analyst.

The following is a simple example of the logic used by CRAWLER. To control a web-based intranet business application, the first step was to create an Internet Explorer object and then trigger navigation to the URL. When the intranet application responded and returned a full HTML page, all the fields of the page were picked up as objects. These fields were then filled with the required values, and the relevant button was triggered. By using simple controls and objects like these, you can easily browse through the whole application and pick out the relevant data without going into the application’s technicalities. It is a very simple method: it requires no backend access, and it replicates the steps taken by the analyst through the front-end interface.
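As a minimal sketch of that logic, assuming a Windows host with pywin32 installed: the original CRAWLER was written in VBA, but this Python version drives the same COM interface, and the URL and element ids are invented for illustration.

```python
import time
import win32com.client

# Create an Internet Explorer object and trigger navigation to the URL.
ie = win32com.client.Dispatch("InternetExplorer.Application")
ie.Visible = False
ie.Navigate("http://intranet.example/subscriber-lookup")   # hypothetical URL

# Wait until the full HTML page has been returned.
while ie.Busy or ie.ReadyState != 4:    # 4 = READYSTATE_COMPLETE
    time.sleep(0.2)

# Pick up the page's fields as objects, fill them, and trigger the button.
doc = ie.Document
doc.getElementById("msisdn").value = "971500000000"    # hypothetical field id
doc.getElementById("btnSearch").click()                # hypothetical button id
# ...once the results page loads, read its fields in the same way.
```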

Having gained control over the OSS/BSS applications, we moved to the next step and extracted alarms from the RAFM systems. At the start, we extracted only the relevant information from the different systems, and only at the request of the analyst. The analyst would open the alarm, select the “CRAWL” option, and all the relevant data (based on a template for the alarm type) became available in one click. Later, to optimize performance, the data extraction system (CRAWLER) was given a dedicated role. At any time, a pool of 30 to 40 alarms sat equipped with the latest live data, ready for investigation and decision making. Whenever an analyst opened an alarm and began the decision-making process, another fully-populated alarm was added to the pool in the meantime. Each alarm carried a bundle of attachments containing the information extracted from the different systems. I would also like to mention that in a few cases the attachments were images: snapshots of the results from the relevant systems, much like screenshots.
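The pool behaves like a classic bounded prefetch queue. Here is a sketch of the idea in Python; all names are invented, and this shows the concept rather than CRAWLER’s actual code.

```python
import queue

POOL_SIZE = 40
pool = queue.Queue(maxsize=POOL_SIZE)   # 30-40 enriched alarms at any time

def enrichment_worker(fetch_next_alarm, crawl):
    """Attach live OSS/BSS data to each alarm, then queue it for analysts."""
    while True:
        alarm = fetch_next_alarm()            # next open alarm from the RAFM tool
        alarm["attachments"] = crawl(alarm)   # data extracted from each system
        pool.put(alarm)                       # blocks whenever the pool is full

def analyst_takes_alarm():
    alarm = pool.get()   # arrives ready for investigation and decision making;
    return alarm         # the worker refills the pool in the meantime
```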

In the last phase of the project, we also automated the alarm/case transitions, especially closure. These transitions were template-based, and most of the inputs and figures were calculated automatically once the analyst hit the relevant button. As already mentioned, the inputs made during the various transitions of alarm management are rarely used for reporting in any vendor system. In our pilot project, we developed some basic reports on this data; these showed the trends in different grey areas of the company and helped us to prioritize and become more vigilant.
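As a sketch of what template-based closure might look like (the field names and per-event rate are invented for illustration):

```python
from datetime import datetime, timezone

def close_alarm(alarm, usd_per_event):
    """Build the closure record automatically, instead of keying it by hand."""
    return {
        "closed_at": datetime.now(timezone.utc).isoformat(),
        "alarm_type": alarm["type"],
        "leakage_usd": alarm["event_count"] * usd_per_event,
        "root_cause": alarm.get("root_cause", "unclassified"),
    }

record = close_alarm({"type": "HLR_BILLING_GAP", "event_count": 1200}, 0.05)
print(record)   # rows like this fed the basic trend reports
```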

Conclusion

The pilot project was very successful and we achieved our objectives; however, there is room for many improvements. The project underlined the need to improve the alarm management sections of RAFM systems. To optimize performance, the analyst should be able to categorize the concrete steps needed to reach the final conclusion. It is also vital to keep the content and the context separate; the context information will help in making templates for the specific alarm types. Mining should also be built on top of RAFM alarms, so that the operator can focus more on grey areas and leakage trends. This research has also opened up new avenues where we can work towards new types of controls. For example, in the case of RA, we can build PI controls on the DSR (Daily Sales Report), as these will help us gain more control over opportunity loss and over the issues that turn into revenue leakage at later stages.
