AI Disclosures
As AI becomes more prevalent, it's important for organizations to be transparent about how AI systems are being used and what data they are collecting and processing. This is where AI disclosures come into play.
AI Disclosures are a set of guidelines and policies that organizations can use to be more transparent about their use of AI systems. These disclosures can help build trust with customers, partners, and the public, and ensure that organizations are using AI in an ethical and responsible manner.
Here are some key elements of AI Disclosures:
Accountability: Organizations should be accountable for the decisions made by their AI systems, including any errors or biases that may arise. This includes providing channels for users to raise concerns or provide feedback about the system.
Algorithmic Transparency: Organizations should provide details about the algorithms and models used in their AI systems, including how they were trained and validated, and any biases that may be present in the data or algorithms.
Data Collection: Organizations should be transparent about the types of data that their AI systems are collecting and how that data is being used. This includes information about data sources, data retention policies, and how data is being secured.
Purpose: Organizations should clearly state the purpose of their AI systems, including the intended benefits for users and how the system will be used.
User Control: Organizations should give users control over their data, including the ability to access, modify, or delete their data as needed. They should also provide clear opt-in and opt-out mechanisms for data collection and processing.
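The key elements above could be captured in a machine-readable disclosure record. Here is a minimal, hypothetical sketch; every field name and value is illustrative, and no standard disclosure schema is implied:

```python
# Hypothetical machine-readable AI disclosure record covering the five
# key elements above. Field names are illustrative, not a standard.
import json

disclosure = {
    "purpose": "Rank support tickets by urgency",
    "accountability": {
        "owner": "ml-platform-team",                 # team answerable for decisions
        "feedback_channel": "ai-feedback@example.com",
    },
    "algorithmic_transparency": {
        "model_type": "gradient-boosted trees",
        "training_data": "12 months of anonymized ticket history",
        "known_limitations": ["underrepresents non-English tickets"],
    },
    "data_collection": {
        "sources": ["support tickets", "response times"],
        "retention_days": 365,
        "security": "encrypted at rest and in transit",
    },
    "user_control": {
        "opt_in_required": True,
        "rights": ["access", "modify", "delete"],
    },
}

# A JSON rendering like this could be published alongside the system.
print(json.dumps(disclosure, indent=2))
```

Publishing such a record in a fixed location (for example, next to a privacy policy) would let users and auditors compare disclosures across systems.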
AI Disclosures can be found in a number of places, including:
Annual reports:
Many companies are now including information about their use of AI in
their annual reports to shareholders. These reports are typically filed
with the Securities and Exchange Commission (SEC) in the United States
and can be found on the company's website or the SEC's website.
Company websites:
Some companies have dedicated pages on their websites that describe
their use of AI. These pages may include information about the
company's AI strategy, the types of AI technologies the company is
using, and the benefits and risks of using AI.
Earnings calls:
During earnings calls, companies may discuss their use of AI and how it
is impacting their business. Transcripts of earnings calls are
typically available on the company's website or on financial news
websites.
Industry reports:
Industry organizations and research firms often publish reports on the
use of AI in specific industries. These reports can provide insights
into the trends and challenges associated with AI adoption in different
sectors.
Press releases:
Companies may issue press releases to announce new AI initiatives or
partnerships. These press releases can be found on the company's
website or on newswire websites.
Regulatory filings:
In some jurisdictions, companies are required to disclose their use of
AI in regulatory filings. These filings can be found on the website of
the relevant regulatory agency.
Scientific papers:
Researchers are constantly publishing papers on the development and
application of AI technologies. These papers can be found on academic
databases such as Google Scholar or arXiv.
Here are some examples of where AI Disclosures are used:
Accenture's Responsible AI Framework: Accenture's
Responsible AI Framework is a set of guidelines and policies that help
ensure that its AI systems are being used in an ethical and responsible
manner. The framework covers topics such as accountability,
transparency, and privacy, and provides guidance for how Accenture is
approaching the development and deployment of AI technologies.
Google's AI Principles: Google's AI Principles outline its commitment to developing AI systems that are socially beneficial, transparent, and accountable. The principles cover topics such as fairness, privacy, and algorithmic transparency, and guide how Google approaches the development and deployment of AI technologies.
Facebook's Responsible AI Practices: Facebook's Responsible AI Practices outline its commitment to developing AI systems that are transparent, accountable, and respectful of user privacy. The practices cover topics such as fairness, accountability, and transparency, and describe how Facebook applies them when developing and deploying AI technologies.
IEEE Global Initiative on Ethics of Autonomous and Intelligent
Systems: The IEEE Global Initiative is a multi-stakeholder effort
to advance ethical and responsible development of AI and autonomous
systems. They have developed a set of ethical principles and practices
for AI systems, as well as a framework for assessing and mitigating
ethical risks.
IBM's AI Fairness 360: IBM's AI Fairness 360 is an
open-source toolkit that helps developers detect and mitigate bias in
their AI systems. The toolkit includes algorithms and metrics for
assessing bias, and provides guidance for how to address issues that
are identified.
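As a rough illustration of the kind of bias metric such a toolkit computes, here is a self-contained sketch of the disparate impact ratio (the ratio of favorable-outcome rates between groups). This does not use the AI Fairness 360 API; the data and the 0.8 flag threshold (the common "four-fifths rule") are illustrative:

```python
# Minimal sketch of the disparate impact ratio, one kind of bias metric
# a fairness toolkit can compute. Toy data; does not use the AIF360 API.

def disparate_impact(outcomes, groups, privileged="A"):
    """Ratio of favorable-outcome rates: unprivileged / privileged."""
    fav = {True: 0, False: 0}   # favorable-outcome counts by privilege
    tot = {True: 0, False: 0}   # group sizes by privilege
    for y, g in zip(outcomes, groups):
        priv = (g == privileged)
        tot[priv] += 1
        fav[priv] += y
    return (fav[False] / tot[False]) / (fav[True] / tot[True])

# Toy data: 1 = favorable decision; group "A" is treated as privileged.
outcomes = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0]
groups   = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

di = disparate_impact(outcomes, groups)
print(f"disparate impact: {di:.2f}")  # values below ~0.8 are commonly flagged
```

Here group A receives a favorable outcome 80% of the time and group B only 20%, giving a ratio of 0.25, which a disclosure process would flag for review.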
Microsoft's AI Ethics and Effects in Engineering and Research
(AETHER) Committee: Microsoft's AETHER committee is a group of
internal experts who provide guidance on ethical and responsible AI
practices. The committee focuses on issues such as fairness,
accountability, and transparency, and provides recommendations for how
Microsoft can ensure that its AI systems are being used in an ethical
and responsible manner.
The Partnership on AI: The Partnership on AI is a
multi-stakeholder organization that includes leading tech companies,
NGOs, and academic institutions. They have developed a set of ethical
guidelines for AI systems, covering topics such as fairness,
accountability, and transparency.
The European Union's General Data Protection Regulation
(GDPR): While not specifically focused on AI, the GDPR requires
organizations to be transparent about their data collection and
processing practices, which can apply to AI systems as well.
Organizations must provide clear and accessible information about how
user data is being used, and must obtain user consent for data
collection and processing.
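The consent requirement implies, at minimum, a gate in the data pipeline that refuses to process data without a recorded opt-in for a specific purpose. A hypothetical sketch (names and fields are invented for illustration; this is not legal guidance or a GDPR API):

```python
# Hypothetical consent gate: process a user's data only with an explicit,
# recorded opt-in for a named purpose. All names here are illustrative.
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    user_id: str
    purposes: set = field(default_factory=set)  # purposes the user opted into

    def grant(self, purpose: str):
        self.purposes.add(purpose)

    def withdraw(self, purpose: str):
        self.purposes.discard(purpose)          # opt-out as easy as opt-in

def process(record: ConsentRecord, purpose: str, data: str) -> str:
    """Refuse processing unless consent covers this specific purpose."""
    if purpose not in record.purposes:
        raise PermissionError(f"no consent for purpose: {purpose}")
    return f"processed {data} for {purpose}"

consent = ConsentRecord("user-1")
consent.grant("model_training")
print(process(consent, "model_training", "usage logs"))
consent.withdraw("model_training")
# process(consent, "model_training", "usage logs")  # would now raise
```

Scoping consent to named purposes, rather than a single blanket flag, mirrors the regulation's emphasis on informing users of the specific uses of their data.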
The World Economic Forum's Global AI Action Alliance: The
Global AI Action Alliance is a multi-stakeholder initiative to promote
the responsible use of AI. They have developed a set of guiding
principles for responsible AI, which cover topics such as transparency,
accountability, and social impact.
The Algorithmic Accountability Act: Proposed in the United
States Congress, the Algorithmic Accountability Act would require large
tech companies to conduct impact assessments of their AI systems, and
to be transparent about the data and algorithms used in those systems.
The Act is aimed at promoting fairness and accountability in AI systems.
As AI continues to become more prevalent, it's likely that new guidelines and disclosures will emerge to address the unique ethical and social challenges presented by AI systems.
Artificial Intelligence (AI) disclosures are an important tool for organizations to build trust with their users and stakeholders, and to ensure that they are using AI in a responsible and ethical manner.