Welcome back. Before bringing the course to a close, we want to share a couple of stories, or short case studies, that illustrate in a bit more detail how analytics are applied to real business problems. In this video, we'll walk through a real example of how experiment-driven analytics revealed some surprising insights around the phenomenon of customer churn in the wireless industry.

Let's jump right in. In the early days of mobile phones, wireless carriers had a hard time attracting new customers. The phones themselves were very expensive, and the mass market simply wasn't willing to shell out hundreds of dollars upfront to buy one, especially given that the technology was new and unfamiliar. To stimulate adoption and customer acquisition, most carriers started providing the phones for free, with the idea that they could recoup their investment through somewhat higher monthly service fees. This presented a significant business risk to the carrier: if customers didn't stick around long enough, the carrier wouldn't recoup the cost of the free phones and would lose money.

Here's a really simplified view of what this looks like analytically. What we're looking at is the cash flow diagram for a typical wireless customer. It shows, by month, the net of what the customer pays less the cost incurred by the carrier. In this example, we see a large negative cash flow of about $600 at the beginning of the relationship, in month zero. This represents the cost of the free phone. Then we see a series of small positive cash flows of about $30 per month, which represent the monthly service fees paid by the customer less the cost of providing that service. With this information we can calculate the cumulative cash flow over time. Note that it takes about 20 months for the cumulative cash flow to become positive, and a bit longer for the carrier to actually make a reasonable profit.

This example is a simplified version of what we do in customer lifetime value analysis: we look at the total profitability of an individual customer over their lifetime by examining the actual revenues and costs associated with that customer. By doing this type of analysis, carriers figured out that customers really needed to stick around for about two years, on average, to be profitable. So the industry converged relatively quickly on the two-year contract as a standard business practice.

The strategy worked well for attracting new customers, but it created a problem. Historically, mobile phone service had some of the lowest customer satisfaction scores across industries, and satisfaction was particularly poor during this period as carriers struggled with network quality and coverage gaps. At the same time, there wasn't much structure in place for what to do with existing customers, especially once their initial contracts were over. So you can imagine what happened: customers with low satisfaction whose contracts had expired saw a competitor who would give them a new phone if they switched. The result was a very high degree of customer turnover, called churn in the industry. In fact, it wasn't uncommon to see churn rates of up to 3% per month in the customer base. Think about that: annualized, that's over a third of the customer base leaving per year. Carriers started to realize this was a significant problem, especially as the market matured and there were fewer and fewer people who were completely new to wireless. To continue to grow, they needed to figure out how to retain more customers.
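As an aside, to make the arithmetic in this story concrete, here's a minimal sketch of the break-even calculation and the annualized churn figure described above. The $600 phone subsidy, $30 monthly margin, and 3% monthly churn are the illustrative numbers from the narration, not actual carrier data.

```python
# Minimal sketch of the simplified per-customer cash flow example described above.
# Figures are the illustrative numbers from the narration, not real carrier data.

PHONE_SUBSIDY = 600.0        # upfront cost of the "free" phone, paid by the carrier in month 0
MONTHLY_MARGIN = 30.0        # monthly service fee less the cost of providing service
MONTHLY_CHURN_RATE = 0.03    # roughly 3% of the customer base cancelling each month

def cumulative_cash_flow(months: int) -> list[float]:
    """Cumulative net cash flow for one customer, month 0 through `months`."""
    totals, running = [], -PHONE_SUBSIDY
    totals.append(running)                   # month 0: the phone subsidy only
    for _ in range(1, months + 1):
        running += MONTHLY_MARGIN
        totals.append(running)
    return totals

def break_even_month(horizon: int = 36) -> int:
    """First month in which cumulative cash flow is no longer negative."""
    for month, total in enumerate(cumulative_cash_flow(horizon)):
        if total >= 0:
            return month
    raise ValueError("no break-even within the horizon")

print(break_even_month())                                    # 20 months, matching the example
print(f"{1 - (1 - MONTHLY_CHURN_RATE) ** 12:.0%} per year")  # ~31% of the base lost, compounded
print(f"{MONTHLY_CHURN_RATE * 12:.0%} per year")             # 36% simple annualization -- "over a third"
```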
The nice thing about the contract is that it gave a wireless carrier a pretty good idea of when a customer might leave. A really simple analysis shows what the churn rate looks like by customer tenure; in fact, we showed this example in a separate video. You can see the large spike in churn rate that occurs right after the 24-month point. This corresponds to the expiry of the contract.

This is where the case study really begins. A particular wireless carrier wanted to explore how they might incent customers to stick around longer. In particular, they wanted to see if they could get customers to renew their contracts, which would lock in a longer revenue stream. What they didn't know was how they should do it. How should they contact customers? When should they contact customers? How much incentive would it take to get a customer to renew their contract? Should customers renew for another two years, or for a shorter period like one year? Because a contract renewal effort had never been attempted, there was no historical data to analyze to answer these questions. So a controlled experiment was designed to test a number of factors and see what combination of things would work best. The carrier would reach out to customers with some sort of contract renewal campaign and test the things they thought might affect how likely customers would be to renew.

Specifically, here's what was tested. Three communication channels were considered: a bill insert, which is basically just an additional note included with the bill when it comes in the mail; a stand-alone direct mail piece; and an outbound call to the customer. The two paper channels directed the customer to a 1-800 number to speak with an agent. Keep in mind this was before we could reliably use electronic channels like email and text messaging. From a timing perspective, customers would be contacted two months before contract expiry, one month before contract expiry, or just after the contract expired. Three levels of incentive would be included: $50, $100, and $150. Finally, both two-year and one-year renewal options would be tested.

One nice thing about wireless carriers is that they have a lot of customers. Most of the major carriers at that time had between 50 million and 80 million customers each, so it was pretty easy to isolate a large group of customers whose contracts were nearing expiry. That made it possible to run what we call a fully factorial experiment, which basically means that all combinations of all levels of the factors were tested. In this case, that meant three channels, by three time points, by three incentive levels, by two contract lengths, or 54 total test combinations, plus a single control group that was not contacted at all. Sample sizes for the groups were chosen using statistical methods to ensure that differences between them could be detected with a high degree of precision, and the groups themselves were chosen at random. Because small changes in customer behavior tend to have large financial impacts, the group sizes were large, about 10,000 in each group. But even at that size, fewer than 600,000 subscribers were needed for the experiment. To put that in perspective, that was less than one month's worth of expiring-contract customers.
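As a quick illustration of what "fully factorial" means here, this sketch simply enumerates every combination of the factors described above and tallies the subscribers required. The factor levels and the 10,000-per-group size come straight from the narration; the code itself is purely illustrative.

```python
# Sketch of the fully factorial design described above: every combination of every
# factor level gets its own test cell, plus one control group that is never contacted.
from itertools import product

channels   = ["bill insert", "direct mail", "outbound call"]
timings    = ["2 months before expiry", "1 month before expiry", "just after expiry"]
incentives = [50, 100, 150]                  # dollars
terms      = ["1-year renewal", "2-year renewal"]

cells = list(product(channels, timings, incentives, terms))
print(len(cells))                            # 3 * 3 * 3 * 2 = 54 test combinations

GROUP_SIZE = 10_000                          # customers per cell, per the narration
total_subscribers = (len(cells) + 1) * GROUP_SIZE   # +1 for the control group
print(total_subscribers)                     # 550,000 -- under the 600,000 mentioned above
```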
While the direct marketing aspects of the experiment were being operationalized, an analytic plan for evaluating the results was put in place. The basic measure they wanted to know was how many people renewed their contracts, right? Well, that's true, but for a really robust analysis they knew they needed to account for unanticipated changes in other aspects of a customer's behavior as well. For example, did the customer change their service plan to something with a different monthly charge? Did they change their usage behavior in any way that would affect the cost of serving them? Did they use more or less customer care resources? They also knew they needed to consider the behavior both of customers who took the offer to renew a contract and of customers who did not.

To make sure all the measures were available, the analytic plan identified the right sources of data. Information from the billing system would be used to identify the contract status of a customer, whether they had cancelled or not, and the actual amount of revenue received. Information from usage tracking systems would be used to capture cost changes driven by consumption. Information from the customer care system would do the same for changes in calling behavior. And spreadsheet-based financials around the marketing campaign itself would incorporate the cost of the contact and of the incentive. With the full set of data around customer behaviors and their financial impacts, each test group could be compared with the control group and with each other. The incremental impact of the contract renewal offer could be calculated, and the carrier could determine what combination of factors would work best in a full-scale rollout of the program. With the planning complete, the experimental campaign was launched in the market.
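As an aside, here's a minimal sketch of the kind of test-versus-control comparison that analytic plan implies, using a two-proportion z-test on churn counts. The counts shown are made-up illustrative numbers, not the carrier's actual results.

```python
# Sketch: compare churn in one contacted test cell against the untouched control group.
# The counts below are hypothetical illustrative numbers, not actual experiment results.
from math import sqrt, erfc

def two_proportion_ztest(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Two-sided z-test for a difference in proportions (e.g., churn rates)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = erfc(abs(z) / sqrt(2))         # two-sided p-value from the normal tail
    return z, p_value

# Hypothetical counts: 10,000 customers per group, churn measured over the follow-up window.
test_churned, test_n = 620, 10_000           # one contacted test cell
ctrl_churned, ctrl_n = 500, 10_000           # the no-contact control group

z, p = two_proportion_ztest(test_churned, test_n, ctrl_churned, ctrl_n)
lift = test_churned / test_n - ctrl_churned / ctrl_n
print(f"churn difference vs control: {lift:+.2%} (z = {z:.2f}, p = {p:.4f})")
```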
So, what happened? Well, before we look at the actual results, I want to know what you think. Think about the factors and levels that were tested in the experiment. Which combination do you think produced the most favorable results? Did the bill insert, the stand-alone direct mail piece, or the outbound call work best for customer contact? What was the best time to contact customers: two months before contract expiry, one month before, or just after the contract expired? How large an incentive was required: $50, $100, or $150? Were customers more inclined to take a one-year or a two-year contract? Take a moment and think about how you came to your choices. What considerations or behaviors led you to pick each item?

This is where things get interesting. I'm not going to get into the details of how the calculations worked out, or show you a bunch of charts and graphs outlining the analysis, because the outcome was really simple: the control group won every time. No matter what combination of factors was applied, the simple act of reaching out to customers actually stimulated higher churn rates. The carrier caused more customers to cancel by making them an offer to renew their contracts. A fair number of customers did take the offer to renew, but they were apparently the ones who were going to stay anyway. Even though we understand a lot more about customer churn behavior today, you can imagine how shocked the carrier was at the time to see these results. The initial reaction was that the data had to be wrong, or that the experiment was flawed. A ton of additional analysis was applied to revalidate each aspect of the work, including retesting some of the same things, which produced the same results. Additional market research, including focus groups and customer surveys, started to reveal what was really going on.

It turns out that a lot of customers simply weren't aware that their contracts were expiring, or were in a low state of motivation to make a change. By stimulating that group, the carrier reminded customers about the contract and activated what we call shopping behavior. When presented with an option, we tend to want to know what alternatives are out there as well. This means we're much more likely to find a better option and make a change than we would be if simply left alone. In this case, the carrier had basically woken a sleeping giant.

Over time, this carrier, and the wireless industry as a whole, learned how to better execute contract renewal efforts. Much of the improvement was achieved through experimentation and the application of more sophisticated modeling techniques. First, it was eventually discovered that customers needed to be contacted well in advance of their contract expiry dates, more like six months out versus one to two months. This avoided shopping behavior, since there was really no other option to weigh; the decision was simply to take the offer, or not take it and wait another six months, at which point the customer had had time to forget about the contract or become complacent once again. Second, the offers needed to get richer, especially when customers were asked to renew their contracts early. Eventually the same arms race that dominated new customer acquisition made its way into customer retention efforts, and a phone upgrade became the standard incentive in contract renewal offers. Finally, and most importantly, carriers got much better at using predictive analytics to identify which customers were at high risk of churn, and even which customers in that group were likely to respond to different types of offers. This allowed only high-risk customers to be targeted, or to be targeted with the richest offers.

There are a lot of reasons why I really like this case and its analytics. First, it really happened; it's one of the most interesting and surprising analytic findings I've experienced in my career, and it underscores the need to constantly build context and learn about the behaviors that underpin our analytic efforts. It's also a great example of how controlled experimentation can be used at a large scale to generate meaningful data for analysis, and it underscores the importance of having a control group, of looking at a broad set of behaviors to capture unintended effects, and of examining the impact on both those who took an offer and those who did not. Finally, it illustrates how a variety of analytic methods are applied together in pursuit of one outcome. In this case, we saw customer lifetime value analysis, statistical analysis and experimental design, campaign analytics, financial analytics, and predictive and prescriptive analytics, all applied in an attempt to keep customers from cancelling their service. As you encounter problems in your organization, you might consider how experiments could be used in conjunction with other methods to reveal insights, grow your knowledge and context, and achieve business results.
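As a final aside, if you'd like a concrete picture of the churn-risk scoring mentioned above, here's a minimal sketch assuming a simple per-customer feature table. The feature names, the synthetic data, and the choice of logistic regression are all illustrative assumptions, not what the carrier actually used.

```python
# Minimal, illustrative churn-risk scoring sketch (not the carrier's actual model).
# Features and labels are synthetic so the example runs end to end.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features: months relative to contract expiry, average monthly bill,
# and customer care calls in the last 90 days.
n = 5_000
X = np.column_stack([
    rng.integers(-6, 12, n),     # months since (or until, if negative) contract expiry
    rng.normal(60, 20, n),       # average monthly bill ($)
    rng.poisson(1.0, n),         # customer care calls in the last 90 days
])
# Synthetic labels; in practice these would come from billing-system cancellation records.
y = (rng.random(n) < 0.03 + 0.02 * (X[:, 0] > 0) + 0.03 * (X[:, 2] > 2)).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
risk = model.predict_proba(X)[:, 1]          # churn-risk score per customer

# Target only the riskiest customers (e.g., the top 10%) with the richest retention offers.
threshold = np.quantile(risk, 0.90)
high_risk = np.where(risk >= threshold)[0]
print(f"{len(high_risk)} customers flagged for the retention campaign")
```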