How often do you update Lead Scoring?
Creating a lead scoring model is a challenge. Improving a lead scoring model is relatively easy. So why don't marketers update scoring regularly?
Few marketers get lead scoring right the first time. It requires sales and marketing to agree on what a good lead actually looks like and exactly when it should be sent to sales, and that's just the easy bit. The complicated part is converting the lead definition you agreed with sales into a lead scoring model suitable for marketing automation or a data platform. The handful of contact and CDO fields available for profile scoring rarely maps neatly onto your ideal customer profile, and it's difficult to decide which of the many types of contact activity available for engagement scoring really mean that your prospects are ready to speak to sales. Creating a lead scoring model can be a challenge, but improving an existing one is relatively easy once it is live and producing leads for your sales team to follow up.
Data Driven Scoring
Optimising a lead scoring model is all about looking at the leads your marketing team is generating. Compare the ones that sales accepted and progressed to an opportunity against the ones that sales rejected. Look at the good leads generated by both sales and marketing. Are there any differences in their account details or their job titles compared to the ones that were ignored, rejected or disqualified? Look at the engagement history of your good leads and map it out on a timeline. What types of campaigns and what marketing channels did they interact with? How many campaigns did they respond to before they were sent to sales? Look for trends where lots of good leads share the same profile or the same type of activity. Increase the weighting in your scoring model for those things, and remove any contact profiles or activities only seen in the history of rejected leads.
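If you'd rather do this comparison outside a dashboard, it boils down to a couple of group-bys over a lead export and an activity export. Here's a minimal sketch in Python with pandas; the file names and columns (lead_status, job_title, campaign_type, lead_id) are placeholders for whatever your CRM or marketing automation reports actually contain, not a fixed schema.

```python
import pandas as pd

# Hypothetical exports: one row per lead, one row per campaign response.
leads = pd.read_csv("mql_export.csv")
activity = pd.read_csv("activity_export.csv")

accepted = leads[leads["lead_status"] == "Accepted"]
rejected = leads[leads["lead_status"].isin(["Rejected", "Disqualified"])]

# Profile scoring: which job titles are over-represented among accepted leads?
title_mix = pd.concat(
    {
        "accepted": accepted["job_title"].value_counts(normalize=True),
        "rejected": rejected["job_title"].value_counts(normalize=True),
    },
    axis=1,
).fillna(0)
title_mix["lift"] = title_mix["accepted"] - title_mix["rejected"]
print(title_mix.sort_values("lift", ascending=False).head(10))

# Engagement scoring: which campaign types appear in each group's history?
merged = activity.merge(leads[["lead_id", "lead_status"]], on="lead_id")
print(merged.groupby(["lead_status", "campaign_type"]).size().unstack(fill_value=0))
```

Anything with a strong positive lift among accepted leads is a candidate for a higher weighting; anything that only ever appears against rejected leads is a candidate for removal.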
Any marketer or data analyst can analyse the results of a lead scoring model this way if given the right information. Thankfully, everything you need is available through reports in your marketing automation platform or CRM system. Repurpose the data you compile for ROI reports and use it to track the success of your lead scoring models. You probably already have dashboards that map the funnel and measure campaign performance, be that through a dedicated attribution tool such as Bizible or a general-purpose BI tool such as Tableau, Domo or Power BI. Filter these dashboards down to leads that actually progressed to become customers and wind the timeframe back to the point at which they became an MQL. This will give you a picture of the profiles and activities that generate leads, as opposed to those that influence other points in the funnel. It may be very different from the broader picture you're used to seeing.
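If your BI tool makes that filter awkward, the same cut is easy to reproduce in a notebook. A rough sketch, again with assumed field names (stage, mql_date, activity_date, channel) rather than any particular tool's schema:

```python
import pandas as pd

leads = pd.read_csv("funnel_export.csv", parse_dates=["mql_date"])
activity = pd.read_csv("activity_export.csv", parse_dates=["activity_date"])

# Keep only leads that went on to become customers.
won = leads[leads["stage"] == "Closed Won"]

# Keep only the activity that happened before each lead became an MQL.
pre_mql = activity.merge(won[["lead_id", "mql_date"]], on="lead_id")
pre_mql = pre_mql[pre_mql["activity_date"] <= pre_mql["mql_date"]]

# The channel mix that actually generated leads, rather than the mix
# that influenced later stages of the funnel.
print(pre_mql["channel"].value_counts(normalize=True))
```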
Strategy Driven Scoring
Given all the data available to do it, lead scoring should be reviewed regularly but rarely is. The best practice is to review it every six months, allowing the model to be updated to account for changes in the marketing mix. For instance, many lead scoring models place a heavy emphasis on in-person events. That's generally a good thing, as events are frequently a good source of leads. However, with the global event calendar cancelled due to Covid-19, it might be a good idea to adjust the model to account for replacement activities. Few organisations have done this, and they are instead dealing with a drop in lead volumes far greater than would otherwise be expected.
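To make the adjustment concrete, here's a purely illustrative weight table; the activity names and point values are assumptions, not a recommended model. The point is the reallocation: the weight that used to sit against in-person events moves to the virtual activities that replaced them.

```python
# Hypothetical engagement weights before and after the event calendar was cancelled.
engagement_weights = {
    "tradeshow_attendance": 25,  # historically the heaviest-weighted activity
    "webinar_attendance": 10,
    "content_download": 10,
    "email_click": 5,
}

adjusted_weights = {
    "tradeshow_attendance": 5,       # keep a residual weight
    "virtual_event_attendance": 20,  # replacement activity
    "webinar_attendance": 20,
    "content_download": 10,
    "email_click": 5,
}

def engagement_score(activities, weights):
    """Sum the weight of each recognised activity in a contact's history."""
    return sum(weights.get(activity, 0) for activity in activities)

history = ["webinar_attendance", "content_download", "email_click"]
print(engagement_score(history, engagement_weights))  # 25
print(engagement_score(history, adjusted_weights))    # 35
```

The same contact behaviour now scores higher, which is what you need when the activities that used to push prospects over the MQL threshold are no longer on the table.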
In the same vein, any change to personas or target segments should trigger a review of lead scoring. If the people you're talking to change, then the processes that drive the end-to-end lead funnel need to be updated to match. Any good marketing leader will update audience definitions to fit changing business strategies or new products and services. Campaigns are adjusted to reach these new prospects. Yet marketing operations teams tend to focus on the urgent task of filling gaps in the marketing database at the expense of the structural changes needed to make the new strategy prosper. Lead scoring is one component of this, but far from the only one.
AI Driven Scoring
Increasingly, AI is being used to fill these gaps. It reduces the need to manage lead scoring by making it self-adjusting, particularly on the engagement side. Predictive lead scoring has been a mainstay at the most advanced enterprise organisations for nearly a decade. Salesforce are leading the effort to democratise this with their Einstein Lead Scoring offering. AI scoring models have not always been successful, though. I've seen several organisations adopt the likes of Mintigo and Lattice only to throw them out a few years later because the additional leads generated didn't pay for the cost of the technology. It was no coincidence that both companies were bought by broader data or technology vendors last year. AI is now a feature rather than a standalone product. If you have access to predictive data from 6sense, Lattice, Anaplan or elsewhere, then it should definitely be one component of your scoring model. It doesn't replace the entire model, though. AI still has a long way to go.
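If you do bring a predictive score in, treat it as one weighted input alongside the rule-based model rather than a replacement for it. A minimal sketch, assuming both scores sit on a 0-100 scale and an arbitrary 30% weighting for the predictive component:

```python
def blended_score(rule_based: float, predictive: float, predictive_weight: float = 0.3) -> float:
    """Blend a rule-based score with a predictive score, both on a 0-100 scale."""
    return (1 - predictive_weight) * rule_based + predictive_weight * predictive

# A lead with solid rule-based engagement and a high predictive fit score.
print(blended_score(rule_based=70, predictive=90))  # 76.0
```

The right weighting is something to work out from the same accepted-versus-rejected analysis described above, not something to set once and forget.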