Moinak Banerjee

Moinak began his foray into the telecom revenue management space working as a Business Analyst on revenue assurance and fraud management projects. Over the years, he has worked in product management at several leading vendors of telecom OSS/BSS software. During the course of his career, he has been involved with various clients across Asia, the Middle East, Africa, and Europe, including Tier 1 telcos, for both revenue assurance and fraud management.

He has witnessed RA and FM issues across multiple telecom operators and has been privy to the different methods of controlling and monitoring them, along with accounting for the leakages detected versus fixed and rolling them up to the financial books of record for reporting to senior executives.

He currently works as the Product Lead for the Protiviti Governance Portal (an eGRC software solution) at Protiviti (Middle East).

Moinak is an ardent blogger and can also be reached at his personal site, http://moinakbanerjee.com/, and on Twitter at @SaysMaverick.
The views expressed in his articles on talkRA are entirely personal opinion and have no bearing on his employer or its solutions, practices, methodologies, or business models.

Working in product management for software is, and was, the best job I could ever have had, and thanks to Subex for the tremendous opportunity it gave me. RA, FM, CRM, mobile money and now GRC (the new domain I am in) all have a great set of software tools, which hardly differ in capability, from a great set of reputed vendors who compete among themselves to provide best-of-breed solutions for their customers. But… here is the question:

Over time I have seen that these B2B software tools are best utilized only when the users completely realize and fully understand what they truly want to achieve with ‘automation’, which is the key underlying capability of most of these solutions. However, that is rarely the case. Most software tools are thought of as magic wands that will solve problems just as Harry Potter would create magic with his wand. Everyone says “YES” to the thought that ‘software is only as good as its users’, but how many truly implement the concept to the fullest? This is where the ROI of the software comes in, and there are a great many correct ways of determining that ROI, but I am not sure how many of them are truly used. A number of software providers offer to sit with their customers and help them make the best use of the tool through customer advocacy programs; but because of the small cost associated with it, a lot of enterprises shy away from doing so.

In one of my very recent experiences, I was discussing a potential opportunity to implement a GRC system, and in order to understand the business requirements, I started off with a few probing questions. It was not much of a surprise to find that, somewhere deep in the “heart”, the expectation was of magic software that would work by itself and get everything done.

Somewhere, someone needs to do the soul-sucking drudgery of populating the system with actual data (especially if it is an eGRC system), understanding and working out the true business rules required for the business, and nailing down the “correct” requirements without drifting into ‘information/data overload’; somebody has to DO all that work, not merely talk about doing it. Someone has to figure out who is going to ‘bell the cat’ and then make sure they have the time and resources to do it correctly.

So let me ask you three questions:

  1. Do you agree that this is a problem? If yes, what do you think are the biggest impediments to full utilization of the software at your organization, on which millions may have been spent?
  2. How many of the features and functions evaluated during RFPs finally end up getting used?
  3. How much of the work is still done in spreadsheets and offline, in spite of the ‘magic’ software that has been bought? Why do you think this is happening?

 


My personal career transition from the revenue assurance function (focused on telecoms) to enterprise risk management and eGRC software applications has opened up a completely new world of learning and things to think about. The basics of risk management and assessment include determining the likelihood and impact of risks and the effectiveness of the controls; but then the following question started pestering me. Thus this tiny post is for me to understand your opinions and thoughts.

You have your risk registers; you evaluate the risks; you add more risks and associated controls; you assess the IMPACTS and LIKELIHOODS of these risks; you test the controls for their effectiveness; you report, follow up and reassess; BUT… if disaster strikes, ARE YOU PREPARED? The question is one of preparedness rather than control effectiveness. Essentially, how aware are you of the velocity of the strike, and should a disaster strike, are you ready to take it head on? This is my first question!
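To make the question concrete, here is a minimal sketch of what adding that third dimension to a risk register might look like; the risks, the 1–5 scales and the velocity threshold are all invented for illustration:

```python
# Hypothetical risk register: 1-5 scales for likelihood, impact and velocity
# (velocity = how quickly the risk materialises once it is triggered).
risks = [
    {"name": "Billing system outage", "likelihood": 2, "impact": 5, "velocity": 5},
    {"name": "Interconnect fraud",    "likelihood": 4, "impact": 3, "velocity": 4},
    {"name": "Regulatory fine",       "likelihood": 3, "impact": 4, "velocity": 1},
]

for r in risks:
    exposure = r["likelihood"] * r["impact"]  # the conventional heat-map score
    # Preparedness view: a high-velocity risk needs a rehearsed response plan,
    # however effective the preventive controls look on paper.
    needs_response_plan = r["velocity"] >= 4
    print(f'{r["name"]}: exposure={exposure}, response plan needed: {needs_response_plan}')
```

The point of the sketch is only that velocity is scored and acted on separately from the familiar likelihood-times-impact number.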

While this was on my mind and I started researching it, I came across this interesting article here. I found the article answering the question I had in mind, but preaching and practicing are two different things, and thus the second part of my question is:

Are you really practicing this integrated approach to determine your preparedness?

If yes, how easy and effective have you found it?

If no, what are the challenges you are facing?

Let me know your thoughts.

 

 


One may think that the above is obvious, but it definitely is not. As the saying goes, “ignorance is bliss”, and so a lot of large-scale hype can be gathered using ‘silver bullets’. Big words and catchy phrases set the ball rolling for a number of marketing hypes, and I guess that is just the case with “analytics”, today’s new silver bullet. What is more, “analytics works on ‘Big Data’”, which is another favorite of the Brotherhood of Catchy Phrases! I am trying not to be skeptical about the value and contribution of both of these to the world of business, but both are now being widely misused and misrepresented. Some time back I was having a conversation with a friend of mine who happens to work in “analytics” for a major and oversized Indian company, and it went like this:

Me: What’s up? What are you doing these days?

Friend: I am into Analytics. It is “THE” big thing now. [Grins with a big smile.]

Me: Ah Great! So what are you doing in analytics?

Friend: We solve business problems using Analytics for all sorts of companies.

(Now I am interested. Business problems- oh yes, that is what I want to solve as well being a Product Manager)

Me: So what kind of business problems?

Friend: Ah those analytics ones?

Me: What analytics ones? [I am perplexed] So, what exactly do you do?

Friend: Well, you see, the customers send us data and ask a number of questions, and we analyse the data and give reports on the same!

Me: That is what regular data analysis is about! What or where is the “analytics” ?

Friend: Hey that is analytics!

… and I stop myself from asking more questions, knowing the fate and direction of the conversation.

I cannot blame my friend, but as a matter of fact, this is what (I am sure) a lot of people would find when a bit of investigation is done into this new buzzword called “analytics”. Mindless marketing promotion is the biggest problem creator in today’s world.

Analytics is a means to an end, and not an end in itself. How one applies logic to make sense of data is the key benefit the organization can bring for its customer. The key to data analysis can be summarized in four elementary steps: 1) define the problem, 2) disassemble the data available, 3) evaluate the data within the environment, and 4) decide on the solution to the problem.

The biggest challenge is, in fact, the definition of the problem to be solved. Reducing churn, for example, is a definite problem to solve. Increasing customer stickiness is a problem to solve. Maximizing the use of available inventory is a problem to solve; and there are hordes of other challenges that can be solved, but the first key is to identify what “needs” to be solved.
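As a minimal sketch of those four steps applied to the churn example, assuming a hypothetical subscriber extract (the file name, the column names and the 10% threshold are all invented):

```python
import pandas as pd

# 1) Define the problem: which subscriber segments are churning the fastest?
subscribers = pd.read_csv("subscribers.csv")  # hypothetical extract

# 2) Disassemble the data available: keep only the fields relevant to the question
relevant = subscribers[["segment", "tenure_months", "monthly_spend", "churned"]]

# 3) Evaluate the data within the environment: churn rate per segment
churn_by_segment = (relevant.groupby("segment")["churned"]
                            .mean()
                            .sort_values(ascending=False))

# 4) Decide on the solution: aim retention offers at segments above a chosen threshold
print(churn_by_segment[churn_by_segment > 0.10])
```

The tooling here is incidental; the value sits in step 1, which no tool can do for you.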

The effective way to implement data analysis using analytical tools has been well laid out by Thomas H. Davenport, who summarized the necessary steps as:

  1. Creating and championing the cause of problem solving using analytical solutions from the “top”. The business owners need to be convinced of the problems to be solved first – the how to solve can be formulated by other experts.
  2. Creating a single team that works on the data. One can call it the “analytics” team: people who know how to make sense of data using data analysis techniques and algorithms.
  3. Identifying the “what to solve”.
  4. Having metrics to measure the quantified benefits of a program.
  5. Using the “right” technology.

… and using the right technology can take many forms, specific to the organization that is trying to solve the business problem. The final piece of the puzzle could be a good data visualization tool, but that is not the end of it. This brings me to the large number of solutions out there in the market which are essentially data visualization tools carrying the tag line of “business intelligence tools” or, as they now say, “analytics” tools. One might wish they would solve the business problems without being fed the domain. Well, the argument is, “it is obvious to know the domain”. Guess what: knowing the domain inside out is itself the first challenge, and therefore it becomes far more important to first define the problem to be solved.

What is more, it is time to understand what the output of such “analytics” would be. More often than not, the output is a multidimensional dashboard which one has to understand first, learn the inputs to, and then try to decipher to make meaningful sense of. Now that itself is the problem. What is the use of such “analytics” when one has to spend effort just to extract meaning? It is more like sitting the GMAT or some B-school entrance examination, where a bunch of data interpretation questions are put to the test-takers. It does not make sense in business unless the exact business-critical information is immediately visible. That is the INSIGHT the business executives need, and being able to show insights from the underlying data requires a thorough understanding of the domain, and hence of the business problem to be solved. Hence analytics is NOT about a fancy multi-dimensional dashboard representation of a huge amount of underlying data.

Just before I conclude, here is an example. Imagine a graph showing a downward trend in sales. This is a concern for the executives. But where is the “analytics” in this? The graph can be extrapolated with prediction algorithms to show when sales are going to hit rock bottom. That is still about the visualization and how good the visual tool is. Again, where is the “analytics”? The true value of “analytics” is the insight that shows why sales plummeted, and the consequence of sales hitting rock bottom as predicted by the scary graph. If the interpretation of the graph is left to the onlooker, the executive in this case, the real value of the INSIGHT is left open to interpretation. That is what “analytics” needs to be used for: to stop speculation and individual interpretation and provide real INSIGHT into the state of affairs.
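To make that contrast concrete, here is a tiny sketch with made-up numbers: the first half is the extrapolation that any charting tool can do, the second half is the decomposition that starts to answer the “why”:

```python
import numpy as np

# Hypothetical monthly sales figures showing a downward trend
months = np.arange(12)
sales = np.array([100, 98, 95, 93, 90, 86, 83, 80, 76, 73, 70, 66], dtype=float)

# The "scary graph" part: a linear extrapolation of when sales hit rock bottom
slope, intercept = np.polyfit(months, sales, 1)
print(f"Trend predicts sales reach zero around month {-intercept / slope:.0f}")

# The "insight" part: decompose the decline by segment to see *why* it happened
# (hypothetical segment-level sales for the first and last month)
segments = {"prepaid": (60, 32), "postpaid": (30, 26), "enterprise": (10, 8)}
total_drop = sales[0] - sales[-1]
for name, (first, last) in segments.items():
    print(f"{name} accounts for {(first - last) / total_drop:.0%} of the decline")
```

The extrapolation only restates the graph; the decomposition tells the executive where to act.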

Since I don’t claim to be an expert all-rounder like a few other people we may know, in the following posts I will try to share some thoughts on the “bigness” of Big Data!

 


In my last post on Dunbar’s number and CEM, I mentioned that the key to better customer experience could be to target the set of roughly 150 associations that each customer has, instead of individual customers themselves. Essentially it is about managing the customer’s complete engagement and interaction with the environment, the influencing factors and the association with the services offered. In order to do so, one has to go beyond the conventional “data”-driven factors and appeal to the ‘right side’ of the brain, which definitely requires a planned ‘soft touch’.

Here is an article from the McKinsey Quarterly which speaks of that essential soft touch, with “Five ‘no regrets’ moves for superior customer engagement” to cater to the ever-increasing demand from customers for a better ‘experience’.


Probably one of the key aspects of CEM is finding out the experience of the customer in his or her day-to-day interactions with their closest set of people. As it turns out, per Dunbar’s number, the number of people with whom one can maintain stable social relationships is around 150; the rest are mostly acquaintances, for whom the emotional attachment can safely be counted as marginal. When does the customer freak out the most? The answer is when the requested service is not delivered, especially when that service provides a communication link to the customer’s closest set of people. Also, any grudge the customer holds will first be off-loaded onto the nearest and dearest, who are within the Dunbar number.

A significant aspect of CEM is finding out which are the good customers who need to be served, vis-à-vis the not-so-important customers! However harsh this may sound, in real life not all customers are treated equally, and hence the need for profiling and segmentation of the millions of customers. It may also be noted that the most significant customers are usually around the top 20%, although they may not be the biggest revenue-generating mass. Correct me if I am wrong. Targeted campaigns are usually (for more efficiency) driven at this chosen set of customers!
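As a tiny illustration of that point, with purely invented numbers, the top 20% of customers ranked by some ‘significance’ score need not be the top 20% by revenue:

```python
# Hypothetical customers: (name, significance_score, monthly_revenue)
customers = [
    ("A", 0.9, 40), ("B", 0.8, 35), ("C", 0.7, 90), ("D", 0.6, 20), ("E", 0.5, 85),
    ("F", 0.4, 15), ("G", 0.3, 10), ("H", 0.2, 60), ("I", 0.1, 5), ("J", 0.05, 8),
]

top_n = max(1, len(customers) // 5)  # the "top 20%"
total_revenue = sum(revenue for _, _, revenue in customers)

by_significance = sorted(customers, key=lambda c: c[1], reverse=True)[:top_n]
by_revenue = sorted(customers, key=lambda c: c[2], reverse=True)[:top_n]

print("revenue share of top 20% by significance:",
      sum(c[2] for c in by_significance) / total_revenue)
print("revenue share of top 20% by revenue:",
      sum(c[2] for c in by_revenue) / total_revenue)
```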

Thus, to understand the experience of the customer for a service offering, it may be worthwhile trying to ensure that the associated 150 key acquaintances are also served right; which means CEM need not be focused on individual customers or customer segments, but on the closely tied group of roughly 150 nearest subscribers. Essentially, it may not be the individual customer that affects the experience or the churn rate; what counts may be the experience gathered around the small set of 150 individuals.

What I feel is that the rules of segmentation and profiling need to find these sets of customers, who may otherwise have different profiles or belong to different segments under conventional methods of segmentation. So, if there is a need to improve the experience of a customer, it is about improving the experience of the group that interacts as a whole, and not of individual customers in standalone mode.
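One way to find such groups, as a minimal sketch, would be to segment by the social graph built from call records rather than by conventional profile attributes; the call pairs and experience scores below are entirely hypothetical:

```python
import networkx as nx

# Hypothetical call records: (caller, callee) pairs between subscribers
calls = [("alice", "bob"), ("bob", "carol"), ("dave", "erin"), ("erin", "frank")]

# Hypothetical per-subscriber experience scores (e.g. dropped-call rate, lower is better)
experience = {"alice": 0.1, "bob": 0.4, "carol": 0.2,
              "dave": 0.05, "erin": 0.3, "frank": 0.6}

# Build the social graph from who talks to whom, then segment by social group
graph = nx.Graph()
graph.add_edges_from(calls)

for group in nx.connected_components(graph):
    # Score each group by the worst experience inside it, since one badly served
    # member can colour the experience of the whole circle
    worst = max(experience[subscriber] for subscriber in group)
    print(sorted(group), "worst experience in group:", worst)
```

On a real call graph one would use a community detection algorithm rather than raw connected components, but the idea is the same: the unit of experience becomes the social circle rather than the individual subscriber.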

Here is what Wikipedia has to say about Dunbar’s number.
