
Dahnesh Dilkhush, Executive Director – Customer Success at Microsoft India and South Asia; Devang Mody, CEO at Bajaj Finserv Health; Neha Gupta, MD & Partner at Boston Consulting Group and Sonali Kulkarni, Country Head – BFSI, Microsoft India and South Asia, at the Microsoft AI Boardroom event
Use of generative AI in the banking, financial services and insurance (BFSI) sector is showing better outcomes than expected, according to officials businessline interacted with at a Microsoft AI Boardroom event.
In a recent blog post, Microsoft had written about GenAI increasing productivity in the BFSI sector by 30 per cent. Officials said that in specific use cases, productivity and Return on Investment had increased, while in the case of some insurance companies, claims processing time had been reduced by half.
Dahnesh Dilkhush, Executive Director – Customer Success at Microsoft India and South Asia; Devang Mody, CEO at Bajaj Finserv Health; Neha Gupta, MD & Partner at Boston Consulting Group and Sonali Kulkarni, Country Head – BFSI, Microsoft India and South Asia, spoke with businessline.
Is the pace of AI adoption in the financial services sector as per expectations?
Mody: There are two aspects to the pace of adoption across use cases: results and output. The output far exceeds my expectation. We were working on a specific issue of improving the accuracy of claims processing, along with three or four other specific things, and the outcomes have exceeded my expectations.
Gupta: The AI leaders in our survey who saw the maximum results were those who said that, per function or per business, they would focus on only three to four use cases. The average number of use cases is 3.5. For people who use AI for more than six use cases, it does not give results. Those who have really seen double the Return on Investment (ROI) are the ones who used AI for only three to four use cases. This is proven across large companies. In terms of the number of companies adopting AI, the pace is picking up, and it will increase much more this year compared to last year because the results are starting to get proven. So more organisations that were earlier on the fence are becoming believers.
Is Microsoft close to its anticipated goal of increasing productivity by 30 per cent using Gen AI?
Kulkarni: A lot of the case studies that we’ve already published talk a fair bit about how ROI and productivity gains have increased. A large proportion of the use cases we’ve seen so far are around employee productivity. At Bajaj, the time taken for claims adjudication has come down dramatically as well. So across different use cases, we’ve seen generative AI at times exceeding expectations, but definitely meeting the ROI. We showcased the ICICI Lombard example, where they’ve cut their time by half, a 50 per cent reduction. So the numbers vary depending on the use case and the organisation.
Are the benefits more prominent on the back-end or at the customer end?
Gupta: Since there is more control in the back end, that’s where there is more AI adoption, but we are solving front-end problems as well. In the service industry specifically, we are working only on the front end. Earlier, AI would help a call-centre agent while the call was going on. Now, there has been a movement from post-facto call analysis to in-call intervention, where the bot or AI can even replace the agent. So there are different ways in which the shift from back end to front end is happening, and it is happening across the value chain. It also depends on the maturity of the organisation and how confident it is about taking AI to the front end.
Some banks have spoken of a 10 per cent reduction in workforce with the use of AI. Is that the downside of adopting AI?
Gupta: AI literacy is at the core. It will change the nature of the work people are doing, moving from service to sales, or from operations to relationship roles. The good part is that if employees upskill, the productivity gain for an employee who is actually using an AI tool is significantly greater than for one who is not. So from their own perspective, to deliver better business outcomes, it makes sense to use AI. The mundane, repetitive operational tasks can definitely move to AI. As agentic AI comes in, reasoning-based tasks and the ability to interact with multiple systems will also shift to AI.
With cloud and AI threats like deepfakes, have you increased investments in cybersecurity?
Dilkhush: Yes, we are accelerating our investment in cybersecurity. Because of the use of AI by adversaries, Microsoft and its customers have to increase investment; it is not because we are using AI. Whether it is deepfakes or APTs, Advanced Persistent Threats, a lot of that is AI-enabled now. In fact, we are very late in the way we are using AI; threat actors have been using AI for years now, so their maturity is even greater. So we have to use AI to fight AI. Security is one of the areas where you will see AI making a significant impact. Deepfakes are maturing at a rate where you will have to use AI to really dissect an image and see whether it is original or a deepfake.
Was that one of the challenges when trying to convince companies about AI adoption?
Gupta: Earlier, sceptics used to ask us “Is this a new fad which will pass?” or “Is it a shiny new toy that only a very large company can afford?”, along with a lot of talk around AI literacy. People wanted to translate their proof of concept (POC) into something that scales; that is where the biggest gap was. So we advise companies to create horizontal capabilities. If I am creating a knowledge-summarisation capability, it can have 10 applications, rather than being 10 individual use cases. So the conversation is now moving towards how to create horizontal capabilities.
When you talk about horizontal capabilities, are you saying that people are keener on LLMs than on SLMs?
Dilkhush: The decision of whether to use an LLM or an SLM depends largely on what the company is trying to achieve. In the long run, there will be heterogeneity, with organisations using both LLMs and SLMs. We’ll end up having organisations with multiple models. One of our recent surveys found that, on average, companies are using around three models at any given point in time. So it is not just SLM versus LLM; even within LLMs, there will be two or three models.
Published on April 4, 2025