By David Sweenor, Sr. Director of Product Marketing at Alteryx
The design and use of artificial intelligence is proving to be an ethical dilemma for companies across the United States considering its implementation. While only 6% of companies have embraced AI-powered solutions across their business, according to a survey by Juniper Networks, 95% of respondents said they believe their organization would benefit from embedding AI into its daily operations, products and services. This raises the question: if there is so much interest in applying AI, why is it taking companies so long to get on board?
The lagging and inconsistent adoption of responsible AI is one of the challenges companies are grappling with. Three elements currently contribute to ethical concern around AI: privacy and surveillance, bias and prejudice, and the role of differing human values in its implementation and execution. To alleviate those concerns, organizations in the public and private sectors have taken it upon themselves to create ethics boards and establish their own AI principles to guide the development of responsible AI. To date, more than 80 ethical AI frameworks are available. Foundationally, however, it is imperative to reach a global consensus, grounded in data governance, transparency and accountability, on how to utilize and benefit from AI in a way that is both consistent and ethical.
To start, data governance helps an organization understand and better manage the availability, usability, integrity and security of its data. It ensures that the outputs of an organization's AI systems maintain the highest levels of data integrity and quality, while preserving the confidentiality of sensitive data and restricting access to legitimate users. As modern technologies become more common in the workplace, effective governance guidelines reduce risk and maximize the value of the technology's analytical outcomes.
In recent years, technology leaders have also appealed to the U.S. government for greater transparency into the development and deployment of AI models following antitrust investigations and growing backlash over Big Tech's use of the technology. While some technology companies, including Microsoft, Google and IBM, have already made responsible AI a strategic priority, a broader AI ethics framework would ease the burden of implementing a still-maturing technology. By identifying the core set of values with which all AI systems should align, and by building consumer trust through clear communication of the intended outcomes of AI-powered decision making, we can set a global precedent that closes the existing AI accountability gap.
Human accountability is the final principle required for a successful global AI ethics framework. Accountability in AI ensures that designers and developers are responsible for abiding by the goals and objectives laid out in the governance charter, and it enforces liability through a chain of command that ensures the system's operator oversees the decisions made by the algorithm. Saying a specific decision was made because an algorithm recommended a certain course of action is not a satisfying answer for either the public or regulators. In the end, specific people and organizations must be held accountable.
Though customer interest in AI-enabled interactions has increased dramatically since the start of the pandemic, progress on the trust and ethics issues associated with AI technology has been underwhelming. It is time for business and IT leaders to use their powers of persuasion to press regulatory and government bodies for a new global framework that addresses the bias and transparency issues prevalent in AI.
While some may still question the feasibility of a global ethics framework, there is a case to be made for a set of guidelines that would inspire consistent best practices for AI. This will only be achieved, however, through the implementation of formal standards, however long they may take to embed, and through continued global cooperation grounded in equitable and accountable AI.
Bio: David Sweenor is an analytics thought leader, international speaker, author, and co-developer of several patents. David has over 20 years of hands-on business analytics experience spanning product marketing, strategy, product development, and data warehousing. He specializes in artificial intelligence, machine learning, data science, business intelligence, the internet of things (IoT), and manufacturing analytics. In his current role as Sr. Director of Product Marketing at Alteryx, David is responsible for GTM strategy for the data science and machine learning portfolio.