AI and industry: Seven things you need to know
Policymakers and regulators must combine several ingredients to build a healthy environment for AI innovation. One is proportionality: “There may be cases where if it does go wrong, the consequences don’t matter that much. With electricity, the consequences can be quite severe. A blackout can lead to deaths,” said Jonathan Thurlwell, Head of Emerging Technologies at the UK energy regulator Ofgem.
Getting the infrastructure right is also part of responsible AI. Dr Andrew Richards, Director of Research Computing Services at Imperial, described how the university is rebuilding its high‑performance computing facilities to support AI more sustainably: “We’ve decomposed our computer infrastructure so that we can deliver more computational power while using less energy for cooling – that saves money and gives us more to spend on compute.”
But once again, what matters in policy is not just technical performance, but also human factors. “Do you have the right competencies in place in the organisation, so people know how to deploy the tool effectively?” Mr Thurlwell asked.
Professor Alessandra Russo, Co-Director of Imperial’s School of Convergence Science, warned against normalising ‘good‑enough’ automated decisions in areas where accuracy and human judgement still matter. “The big risk that keeps me awake is that implicitly embedding this technology in our society might change our societal values, and this is for me a big problem and risk. We’re seeing the problem with social networks – and there’s no way of backtracking.”