At Protagion, we recognise the incredible power of analytics and automation, while at the same time believing wholeheartedly in our uniquely human abilities. As we’ve argued before, it is the blend of these powers (man and machine) that creates immense value. In this article we discuss and share a TED talk by an executive from the Boston Consulting Group (BCG) on “human plus AI” approaches.
Some of our previous articles touch on similar themes.
In fact, Protagion’s business and approach to managing our careers actively is built around the concept of combining human experience, expertise and judgement (from our mentors and coaches) with the benefits offered by technology, algorithms and analytics to offer personalised career suggestions that evolve as professionals grow.
Read more to uncover the ideas of Sylvain Duranton, a business technologist and leader of BCG Gamma, which has deployed over 100 customised AI and analytics solutions for large companies around the world. His team conceptualises, builds, and deploys data science and advanced analytics solutions. Sylvain joined BCG in 1993, and as you will discover, he is a fan of “human plus AI” approaches. He says artificial intelligence is affecting a variety of fields, including businesses’ relationships with their customers, their industrial operations, risk assessment and management (in areas such as healthcare, finance and insurance), and their supply chains. However, he warns against “algocracy” (rules-based decisions without human oversight). We conclude the article with a video of his roughly 14-minute talk, given in Mumbai.
The return of rules-based bureaucracy?
While automation and analytics can be especially powerful, Sylvain worries that automation can mean more bureaucracy. He explains that over recent decades, companies have aimed for less human-led bureaucracy, simplifying rules and procedures and allowing more judgement by those with the expertise, including devolving decision-making to more agile, local teams. However, bureaucracy is making a comeback: he argues that algorithms are by nature rules-based, inferring their rules from past data and applying them systematically. This implies a new incarnation of officiousness, where ‘computer says no’. He calls this “algocracy, where AI will take more and more critical decisions by the rules outside of any human control”.
This, he says, is the danger of targeting “human-zero”, i.e. no human involvement (typically to reduce costs). Instead, Sylvain proposes that our aim should be to take better decisions (effectiveness and growth) rather than to save costs.
One example that Sylvain shares of rules-based bias is the rejection of university applicants from a specific postcode because all applicants from that postcode have been poor students in the past. In this way, the algorithm doesn’t give anyone the opportunity to prove the rule wrong, even if their other explanatory factors are excellent. He adds: “No one can check all the rules, because advanced AI is constantly learning. And if humans are kept out of the room, there comes the algocratic nightmare. Who is accountable for rejecting the student? No one, AI did. Is it fair? Yes. The same set of objective rules has been applied to everyone.”
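The contrast between an “algocratic” rule and a human-plus-AI variant can be made concrete. The sketch below is entirely hypothetical (the function names, postcode and grade threshold are our own illustrations, not from Sylvain’s material): in the first version the learned rule is absolute, while the second escalates strong applicants to a human reviewer, giving them the chance to prove the rule wrong.

```python
# Hypothetical sketch: a blanket rule learned from past data, applied
# with and without a human escape hatch.

def algocratic_screen(applicant, rejected_postcodes):
    """Rules-only decision: the learned rule is applied with no override."""
    if applicant["postcode"] in rejected_postcodes:
        return "reject"  # the whole postcode is written off
    return "consider"

def human_plus_ai_screen(applicant, rejected_postcodes, grade_threshold=80):
    """Same rule, but strong individual evidence is escalated to a human."""
    if applicant["postcode"] in rejected_postcodes:
        if applicant["grade"] >= grade_threshold:
            return "refer to human"  # a chance to prove the rule wrong
        return "reject"
    return "consider"

strong = {"postcode": "X1", "grade": 95}
print(algocratic_screen(strong, {"X1"}))     # reject
print(human_plus_ai_screen(strong, {"X1"}))  # refer to human
```

The point is not the toy threshold, but where accountability sits: in the second version a named human, not “the AI”, makes the contested call.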
The central question in our view is: how could we keep the benefits of expert human judgement while removing the downsides (such as discrimination)? A combination of man and machine could offer a solution.
“‘Human plus AI’ is long, costly and difficult. Business teams, tech teams, data-science teams have to iterate for months to craft exactly how humans and AI can best work together. Long, costly and difficult. But the reward is huge.”
Weaving together AI with people and processes
In his experience, success with automation and artificial intelligence initiatives is achieved when the effort is split 10%-20%-70%: roughly 10% on coding the algorithms themselves, 20% on building the technology around them (collecting data, building interfaces, integrating with legacy systems), and 70% on weaving the AI together with people and business processes.
“AI fails when cutting short on the 70%,” says Sylvain. He advises that the algorithms should be coded by data scientists and domain experts together, and that expert input is required in real life as there are many situations where there is insufficient data to draw robust conclusions. Also, full automation is difficult to achieve because AI does not understand context (while humans can). So, part of the 20% allocation should be about creating powerful interfaces for humans and the AI to solve the most challenging problems together, using combinations of technical excellence and sector experience.
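One simple form such a human-AI interface can take is confidence-based escalation: the AI handles routine cases, and anything it is unsure about (for example, where data is too sparse to draw robust conclusions) is routed to a domain expert. A minimal sketch, where the case names and the 0.9 threshold are our own illustrative assumptions:

```python
# Confidence-based routing between AI and human expert (illustrative).

def route(case_id, model_confidence, threshold=0.9):
    """Return who should decide the case: the AI or a human expert."""
    if model_confidence < threshold:
        return ("human", case_id)  # sparse data or unfamiliar context
    return ("ai", case_id)

cases = [("claim-1", 0.97), ("claim-2", 0.55), ("claim-3", 0.93)]
decisions = [route(cid, conf) for cid, conf in cases]
print(decisions)  # claim-2 is escalated to a human; the rest are automated
```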
Ethics, personalisation, targeting...
Part of the value added by humans is in deciding what is right or wrong, and in defining boundaries for what AI can do or not. An example is setting caps on prices “to prevent pricing engines from charging outrageously high prices to uneducated customers who would accept them”. Sylvain states that only humans can define those boundaries: “there is no way AI can find them in past data”.
“[It is important to define] ethical rules and standards to help business and tech teams set limits between personalisation and manipulation, customisation of offers and discrimination, targeting and intrusion.”
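A human-defined boundary like a price cap is trivial to enforce in code; the substance lies in the cap and floor values themselves, which come from human judgement rather than from past data. A minimal sketch (the numbers are purely illustrative):

```python
def bounded_price(engine_price, floor, cap):
    """Clamp an algorithmic price to human-defined bounds."""
    return max(floor, min(engine_price, cap))

print(bounded_price(340.0, 20.0, 120.0))  # 120.0 -- outrageous price capped
print(bounded_price(75.0, 20.0, 120.0))   # 75.0  -- within bounds, unchanged
```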
The subject of appropriate prices for customers with low price sensitivity (including loyal ones) is topical at the moment, and cross-subsidies are inherent in the nature of insurance. How fair is it to charge certain groups more than others (and especially more than the cost implied by their risk profile)? Should insurance customers be charged more simply because they are willing to pay more? Is it acceptable in some industries (like airlines, for example) but not others? The actuarial profession has a pivotal role to play in exploring these issues.
Four stages when deploying analytics
In other presentations, Sylvain has set out four stages for deploying analytics, which we found a helpful framework when considering these projects – there is naturally some overlap with his 10%-20%-70% model, although he describes the four stages in more technical detail:
1) Building the data ecosystem: cleaning the data, aggregation of data, and finding new data sources, either internal or external
2) Extracting business signals and insight from the data: clustering, classification, segmentation, optimisation, scoring, natural language processing and/or image processing
3) Field tests / pilots: building algorithms and testing findings with real-life customers and business situations to measure the value, adjusting and learning until the business value is maximised, i.e. a test-and-learn strategy
4) Embedding the algorithm in real life, including transforming processes and industrialising the insights gained across the organisation
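The four stages above can be sketched as a minimal pipeline. Everything here (the function names, the toy length-based “score” and the pilot threshold) is our own illustration of the flow, not code from BCG Gamma:

```python
def build_data_ecosystem(raw_sources):
    """Stage 1: clean and aggregate data from internal/external sources."""
    return [row for source in raw_sources for row in source if row is not None]

def extract_signals(records, score):
    """Stage 2: turn raw records into business signals, e.g. a score."""
    return [(record, score(record)) for record in records]

def run_pilot(scored, threshold):
    """Stage 3: field-test and keep only the signals that prove their value."""
    return [(record, s) for record, s in scored if s >= threshold]

def embed_in_operations(validated):
    """Stage 4: industrialise the validated insights across the organisation."""
    return {"deployed_rules": len(validated)}

# Toy run: two sources, a length-based score, a pilot threshold of 3.
data = build_data_ecosystem([["ab", None, "abcd"], ["xyz"]])
signals = extract_signals(data, score=len)
validated = run_pilot(signals, threshold=3)
print(embed_in_operations(validated))  # {'deployed_rules': 2}
```

The shape mirrors the framework: each stage consumes the previous stage’s output, and only value-tested signals reach the final, industrialised step.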
Real-life examples of AI and analytics
In his talks, Sylvain shares a range of examples of AI and analytics applications across industries.
Applying judgement and leading change
To bring the benefits of machines, analytics and automation to life at scale across organisations, “human plus AI” approaches need to land well in the real world, i.e. extend beyond the garage or sandbox into the way things are done in the business. To implement change at this scale successfully, Sylvain says, it must be led by people who act as both business leaders and HR directors, managing the people elements as a key part. This includes training, and reviewing and changing daily working processes.
“...Winning organisations will invest in human knowledge, not just AI and data. Recruiting, training, rewarding human experts.”
There is a significant amount of HR and change-management work in implementing analytics and automation successfully, including both sourcing or developing the necessary talent and preparing the whole workforce for the arrival of the new approaches. This talent spans technology specialists (such as coders and data scientists), sector experts who can apply judgement, and transformation specialists who can connect and coach.
Watch Sylvain’s approximately 14-minute video below: