A Microsoft director has said that companies must refuse to create artificial intelligence that is unethical and could harm humanity, even if it affects their profits.
Hugh Milward, senior director of corporate, external and legal affairs, said businesses need to “draw a line” on what is acceptable – making the much-repeated warning that, “Just because something can be done, doesn’t mean it should be done.”
Speaking at the Tech UK Digital Ethics Summit in London, Milward added, “There are three key aspects of AI development we need to look at: building ethical principles, regulation of facial recognition, and helping people develop the digital skills they will need to thrive in the workplace, as well as the human skills that make us who we are, such as empathy and critical thinking.”
AI is already being used by the NHS and major companies to improve how they work, and research released by Microsoft finds that nearly half of bosses believe their business model won’t exist by 2023. While 41 per cent of business leaders believe they will have to dramatically change the way they work within the next five years, more than half (51 per cent) do not have an AI strategy in place to address those challenges.
The Government set up the Centre for Data Ethics and Innovation earlier this year to advise ministers on how to develop AI safely.