Responsible AI: where to start, and what should companies be pushing for?
Mathew Joseph, Sustainability Director at Swedish investment company Kinnevik, joined us to share insights on how to begin a responsible AI journey and what the outcomes of first assessments and policies should be. He also introduced the Responsible AI Commitments - an initiative by the US-based organisation Responsible Innovation Labs to distill best practices and frameworks from industry, civil society, and the public sector.
Do not ignore AI: companies are unlikely to progress without it
Mathew says that companies which wish to remain sustainable over the long term cannot ignore AI: “I do not think companies will be long-term sustainable businesses if they're not dealing with AI in an ethical and responsible way”.
Mathew’s own journey with AI began when Kinnevik looked across its companies in search of an AI leader. But they didn’t find one. He told the conference that none of Kinnevik’s 35 companies had an AI policy, and that the topic was “on the board’s agenda” but rarely discussed in practice. To tackle this, the investment company realised that it would have to take the lead. Kinnevik itself is going through the same process: its general counsel has approached Mathew to build the company’s AI policy, an exercise that is still ongoing.
Where to begin? Risk analysis, early embedding, finding a standard to follow
Mathew recognises that even starting the AI journey -- looking at it from an ethical and responsible angle -- is hard, but he advises companies and their management to engage with it positively and not rush the process: “If we hurry through an agreement, it becomes a box-ticking exercise, and we don’t want that”. He says that hiring external consultants can often be a good investment, too.
So, what would Mathew like to see from Kinnevik’s companies? Businesses must do a deep analysis of the risks and opportunities that AI brings to them, as well as start to leverage AI tools in their operations, with a main focus on customer care. All of this needs to be spurred on by company boards: “The board doesn’t have to decide itself, but it has to be on the radar”. Mathew added that he would like Kinnevik’s businesses to sign up to a principles-based organisation like Responsible Innovation Labs so that they are, at the very least, aligned with a publicly recognised standard.
Be transparent and proactively monitor your AI efforts
Mathew stresses that companies need to be transparent with their customers and suppliers about how they will implement AI across the business. This means giving their stakeholders the option not to interact with AI, too. He also emphasised that AI policies must be a journey, not a destination, continuously reviewed and monitored:

“We've seen that across our portfolio, companies say a lot of things but do nothing. We'd really like companies to embed some sort of control and monitoring. I hate to use the word, but potentially an audit, of how the company and its operations are dealing with AI”.