CSIRO publishes report on ethical use of AI
CSIRO has published a new report designed to help businesses implement ethical, responsible artificial intelligence.
The report, ‘Implementing Australia’s AI Ethics Principles: A Selection of Responsible AI Practices and Resources’, was developed for the agency by the Gradient Institute.
Released under a Creative Commons licence, the report includes recommendations for practices including impact assessments, data curation, fairness measures, pilot studies and organisational training, all aimed at helping businesses develop robust and responsible AI.
It follows research from Fifth Quadrant and the Gradient Institute which found that while 82% of Australian businesses believed they were practising AI responsibly, fewer than 24% had measures in place to ensure they were aligned with responsible AI practices.
National AI Centre Director Stela Solar said many businesses don’t know how to responsibly navigate the fast-evolving AI environment and meet customer expectations for ethical use of the technology.
“We hear from businesses that their ability to innovate with AI is directly correlated with their ability to earn trust from the communities they serve. This is also reflected in Australia’s AI Ecosystem Momentum report, which found that trust, privacy, security, data quality and skills were the top blockers to AI projects,” she said.
“AI systems that are developed without appropriate checks and balances can have unintended consequences that can significantly damage company reputation and customer loyalty.”