Microsoft, Apple, and Meta Take Action to Allay Growing AI Fears

Under pressure from regulators, oversight committees, and investors, businesses such as Microsoft Corp. and social networking giant Meta Platforms Inc. are becoming more open about how they use AI. And the shareholder campaigns are just getting started.

In response to concerns about the potential spread of false information on Facebook and its other platforms, Meta recently revised its relatively new AI labeling policy to be considerably more explicit about content produced by the technology. Microsoft published its first report on responsible AI in May. And Apple said it would disclose more about its AI ambitions after a shareholder resolution seeking further information on AI-related business and ethics won 37.5% of the vote in February.

The companies are among six that shareholders have pressed to disclose the risks that the artificial intelligence capabilities they are building to stay competitive pose to their operations, finances, and workforces, as well as to society at large. The focus of shareholder activity has expanded from technology companies to the entertainment sector after the use of emerging technologies sparked labor disputes during last summer’s Hollywood strikes.

In June, an AI proposal at streaming giant Netflix Inc. garnered 43% of shareholder votes, an unusually strong, near-passing result for a first-time effort.

Despite the pressure, businesses will keep pushing ahead on AI. Companies are pinning their hopes on the technology as a major source of revenue and promoting it in investor filings: Bloomberg Law reported in February that more than 40% of S&P 500 companies mentioned AI in their most recent annual reports, up from 2018, when AI was barely discussed at all.

Still, some companies are beginning to change their behavior in response to the campaigns. The AFL-CIO said earlier this year that it had withdrawn AI proposals at The Walt Disney Co. and Comcast Corp. after those companies reportedly agreed to release more information about their AI use.

Companies across industries will face growing pressure from investors and other stakeholders to release more details about how they use AI, according to Beena Ammanath, global and US technology trust ethics leader at Deloitte LLP.

“There is enough awareness now that we’re going to see that shift to be more transparent,” Ammanath said. “I get to speak to a lot of boards and CEOs and their leadership teams, and I can tell you that the level of awareness or activity that is happening at a board level—something like this hasn’t happened in a long time.”

Big Tech

Microsoft’s first-ever responsible AI report, published in May, outlines how the company builds generative AI systems responsibly to curb false and misleading information. The tech giant had pledged to the US government a year earlier that it would produce the report.

Investors weren’t satisfied with that commitment. Late last year, Microsoft became one of the first companies presented with a shareholder resolution asking it to disclose its AI risks and its strategies for addressing potential harms. Even though the company had already committed to giving the US government a responsible AI report, 21.2% of investors backed Arjuna Capital’s December proposal asking Microsoft’s board to produce an additional one.

“We believe Microsoft’s multi-faceted program to address the risks of misinformation and disinformation is longstanding and effective,” the company said in its proxy statement.

Investor pressure over AI peaked at Alphabet Inc.’s June annual meeting, when the Google parent faced three AI proposals at once, more than any other company this year. The proposals included a human rights assessment of Google’s targeted advertising policies, which garnered 18.6% of the investor vote; a governance change to place Alphabet’s AI use under the board’s audit and compliance committee, which received 7.4%; and a report on misinformation and disinformation spread by AI, which received 17.6%.

Even when a proposal fails, a double-digit result is seen as capable of influencing company practices.

Alphabet’s AI products include the Gemini chatbot, originally known as Bard, which can be used for writing, research, and other language tasks. The tech giant has committed to shareholders to “applying Alphabet’s resources responsibly as it continues to unlock the growth potential of AI across its products and services.”

Investors ultimately want big tech to be more cautious and honest about the potential long-term effects of its rapid AI development.

“How much are you willing to sacrifice society for profit, and how fast is too fast?” asked Jonas Kron, chief advocacy officer at Trillium Asset Management, which brought the AI governance proposal at Alphabet.

In July, Meta revised its AI policy to alert users to the possibility of misleading media. The social media giant first introduced a “Made with AI” badge in April, but has since changed the tag to “AI info,” which users can click for additional details. The change, which Meta originally made under pressure from an independent oversight board that demanded a policy revision, is meant to give users more explanation after the earlier label didn’t always match their expectations.

In May, an Arjuna Capital proposal at Meta garnered 16.7% of investor support, even as the company rolled out a new digital assistant that can generate images and answer user questions. Meta opposed the proposal, and Arjuna noted that the result was significant because Mark Zuckerberg holds more than half of the company’s voting power.

Arjuna plans to keep pressing companies to adapt. “The risks aren’t going away, so our engagements aren’t going away,” said Julia Cederholm, senior associate for ESG research and shareholder engagement at Arjuna.

Meta said in its proxy statement that the company has already “made significant investments” in safety and security to combat misinformation and disinformation.

Pressure in Entertainment

The near-passing Netflix proposal argued that ethical standards for AI use could help prevent labor disruptions, and it raised concerns about potential hiring discrimination, mass layoffs, and facility closures. The investor effort was prompted by entertainment industry workers’ worries that AI could displace or replace writers, or be used to mimic actors’ likenesses.

In its proxy statement, Netflix said it is already bound by collective bargaining agreements with entertainment industry unions that include AI provisions. It added that the report the proposal sought “may require disclosure of confidential research and development activities, strategic initiatives, and other information that may harm our competitive position.”

The entertainment industry strikes showed what happens when companies don’t engage employees on how the use of technology may affect their jobs and the company’s future, said Carin Zelenko, director of capital strategies at the AFL-CIO.

“I really believe it’s important that, as companies are introducing these technologies, that they engage the workforce in how the technology can be used,” Zelenko said.

Some workers echoed that sentiment. Ylonda Sherrod, a Communications Workers of America member and AT&T sales consultant in Ocean Springs, Mississippi, is vocal about her concerns over worker empowerment and transparency in AI.

“I feel like we should have a say in how it’s implemented in the workplace, because it could be implemented better,” she said in an interview, adding that there should be restrictions and policies in place to make workers feel more secure across industries.

No Set Playbook

As the AI race heats up, companies will continue to grapple with how to handle ethical, legal, and regulatory challenges on issues ranging from data privacy to the technology’s environmental impact.

New regulations raise the stakes around AI, including the EU’s AI Act, which enters into force on August 1. Like the shareholder proposals, the law seeks to ensure that AI systems are developed and used ethically and safely. Among other “unacceptable risk” scenarios, it bans AI systems that could deceive people or exploit them based on age or disability.

The regulation will apply to developers and providers whose AI systems are used in the EU, even if those businesses are based elsewhere.

The US has been slower to regulate AI, but late last year the White House issued an executive order containing a number of directives on security and privacy, including a requirement that developers share the results of their safety tests with the US government. Securities and Exchange Commission Chair Gary Gensler also warned businesses sharply in December against exaggerating their AI capabilities to investors, a practice he dubbed “AI washing.”

With regulators so far offering only piecemeal guidance, some companies are attempting to build their own risk-mitigation infrastructure.

“I think organizations are experimenting, right now there is no fixed playbook for it,” said Deloitte’s Ammanath.

To tackle the enormous challenge, several companies have created new high-level roles such as chief tech ethics officer, chief AI ethics officer, and even chief trust officer. They’re also forming committees with internal and external members to discuss AI and tech ethics.

But as companies disclose more about their AI strategies, Ammanath said, it’s critical to be transparent and to tailor those disclosures to the right stakeholders.

“The way you communicate about or explain how a model works to a data scientist would be different to how you explain it to your board or customer or investor,” she said.