Code of ethics: how to ensure AI shares human values

Sherin Mathew wants authorities to regulate artificial intelligence based on ethical considerations.

Mathew, founder of AI Tech UK and an expert speaker at TheBusinessDesk.com’s free Disruptors 2023 conference on 30 November, points to Cambridge Analytica’s use of data shared on Facebook to influence elections.

“AI is such an intimate technology. It’s an amalgamation of people’s intelligence with technology, working together. The real concern for me is that people are not aware of how their intelligence or insights or even the data is being used by technology – and if it’s not being used in a positive way it has a huge risk.”

Mathew is no Luddite – far from it. He’s been a Microsoft partner and was an AI lead with IBM. Earlier this year AI Tech UK launched an AI accelerator programme in Leeds, with funding from Leeds City Council and Innovate UK, intended to give people the skills they need to work in the AI sector.

“AI should be used as a positive, empowering tech,” he said. “Let’s look at government – government is facing challenges with the climate crisis, poverty, education, jobs. AI, if used effectively, could solve most of these problems.”

But he is keen to ensure AI is used ethically. Part of that is regulating the technology, part is raising awareness of ethical issues among developers, and part is democratising AI by encouraging open-source software and educating the public.

“We have a licence to use guns, we have a licence to drive a lorry, or fly a plane, or to run a big machine or a factory. There is a reason why these powerful things have licences – even business, you have to have a licence. Even to sell alcohol, you have to have a licence.

“We don’t have a licence to run powerful intellectual technology. That is my top, top, top concern – and it’s such a complicated concern. It’s not easy for a layman to even understand. It’s such a complicated concern that even politicians don’t have the vision and the foresight to see beyond what’s being told to them by the corporate guys.”

AI Tech UK runs an Ethics 360 course for AI developers. Mathew and his team have developed five principles of ethical AI use that he hopes will form the basis for future regulation – and that companies will follow voluntarily before rules are drawn up.

  1. Human rights: respect human rights and privacy.
  2. Purpose realisation: be clear about the purpose and intention of your AI.
  3. Positive disruption management: think about how your AI may impact society.
  4. Risk evaluation: assess the risk of your product before releasing it.
  5. Accountability: continually redesign to fix mistakes and oversights.

Sherin Mathew will speak at the free, one-day Disruptors 2023 conference at the Nexus, Leeds, on Thursday 30 November, where he will join the panel The Growth Journey: Technology, examining AI and quickly evolving technologies. The conference is sponsored by BHP, Clarion and SPG.
