We need to build Trust and Accountability into the use of AI.
This piece, co-authored by Michelle Calabro and Ryan Carrier of ForHumanity, is adapted from our response to the NYC Economic Development Corporation’s Request for Expressions of Interest in operating a Center for Responsible AI. The submission was a joint effort among ForHumanity, The Future Society, and Michelle Calabro.
The proposal has neither been accepted nor rejected at the time of this publication. None of the authors or organizations involved have any affiliation or connection to the SEC.
Whether people know it or not, Artificial Intelligence powers many of the systems that run our public and private lives, in physical space and in virtual space. It’s integrated into home electronics like dishwashers, refrigerators, thermostats, and lighting systems. It sits on our countertops, listens to our voice commands, plays music, and tells us jokes. It learns about the products we like and suggests more things to buy. It predicts the questions we want to ask Google before we even finish typing. It helps movie production companies place smart financial bets on the best stories to tell. It helps us get smarter about how to run our cities. It drives large shipping trucks (without sleepy drivers inside) across American highways, and it optimizes the shipping company’s logistics too. In our factories, Artificial Intelligence-powered robots make the products. In our farmlands, it optimizes production. And in New York City, Artificial Intelligence has taken over one of our most iconic scenes. Look at videos of the New York Stock Exchange (NYSE) from the ’70s, ’80s, or ’90s: the trading floor was vibrant, flourishing, and exciting back then. Today? The floor is devoid of traders and is used mainly as a backdrop for business news programs and NYSE photo ops.
For most of human history, public spaces like crossroads and town squares were where people met to exchange information and ideas. Since the early 1990s, however, the internet has been the place where individuals and groups across the world connect; physical location has played less of a role in shaping cultural movements. In the early days of the internet, we thought of it as a relatively neutral ‘virtual place’ (free from commercial interests and surveillance) where we could connect with like-minded people, no matter where in the world they were located. In the last decade, with the availability of big data, stronger computing power, and renewed optimism about AI research, the internet has become a different kind of place. It is infused with the interests of the entities that create and influence it (their culture, ethics, values, ideologies, local laws, profit motives, regulations, and more), and with new practices that can twist culture, such as micro-targeting, voter suppression, and adaptive online content. As New Yorkers, we pride ourselves on the diversity of our population and the tolerance for ‘otherness’ that is required to peacefully coexist. But that tolerance is being put to the test: Artificial Intelligence has been used to erode people’s tolerance for perspectives unlike their own. Harvard researchers found that in 2016, “major spikes in outright fabrication and misleading information proliferated online, with people using warlike rhetoric in social media posts.”
Over recent years, we have seen numerous examples of negative outcomes from AI systems created by corporations: “Tay,” the racist chatbot that Microsoft shut down less than 24 hours after launch; Amazon’s hiring algorithm, trained on the company’s own hiring data, which was quickly scrapped because it was biased; Google’s external AI Ethics Board, which was forced to shutter only one week after launching in Spring 2019; and Facebook, fined $5 billion in July 2019 for the misuse of user data connected to privacy laws. Regardless of corporations’ intent to be socially responsible, they exist to benefit shareholders, and their primary goal is to achieve profits. The causes of these negative outcomes lie in the technologies themselves, the data that was used, the interests that drove system design, and many more contributing factors.
What does it mean to create Responsible AI, and how can we hold companies accountable? Existing laws are proving insufficient: the United States does not currently have comprehensive regulation of data security or the responsible use of AI; Congress is under-educated on the matter; universities train students to ask these questions, but no one has definitive answers; and consumers are forced to accept Privacy Policies and Terms of Service agreements that err on the side of protecting corporations rather than people. To mitigate further risk, the world has quickly called for guidelines, frameworks, and best practices to be drafted, adopted, and implemented.
New Yorkers have a unique history of creating successful systems of accountability. The largest industry in New York is Financial Services, and the four biggest audit/accounting/assurance firms are still headquartered here in our city. In the early 1970s, the accounting industry came together to form the Financial Accounting Standards Board (FASB); meanwhile in London, the industry formed the precursor to today’s International Financial Reporting Standards (IFRS). Once FASB and IFRS established uniform procedures, processes, and frameworks, lawmakers knew they had a system that could be relied upon to deliver oversight, governance, and trust. Less than 24 months after their creation, these standards were adopted into law by the Securities and Exchange Commission (SEC) in the United States and by similar government agencies abroad. Today, accounting is embedded deeply in our capital markets procedures, which has made the reliability and trustworthiness of the numbers a foregone conclusion for nearly all who rely upon them.
If independent auditors create a uniform (yet constantly evolving) framework for auditing AI systems, what are some potential outcomes? We would escape the problem of technology companies being unable to regulate themselves. AI innovators would begin to consider audit rules around bias, privacy, ethics, trust, and cybersecurity while designing AI systems, which would lead to more responsibly designed systems. Over time, it would build toward governance and oversight. People would have a way to decide whether an AI system is trustworthy. Please read here to learn more about the Independent Audit of AI Systems framework.