In the arena of generative AI, names like Microsoft, OpenAI, and Meta dominate headlines. Yet lesser-known players find themselves overshadowed in key policy discussions.
Big AI’s Dominance in Policy Discussions
Big Tech companies have recently been invited to prestigious venues like the White House. These conglomerates are setting the agenda for AI regulation, leaving smaller AI companies feeling sidelined.
AI’s Elite and Their Commitments
OpenAI, Meta, Microsoft, and other giants have entered agreements with the White House committing to responsible AI. They have also established the Frontier Model Forum, which aims to advance AI safety research and share knowledge with the wider AI community and policymakers.
The Underrepresented Spectrum of AI
Contrary to popular belief, large companies make up only a fraction of the generative AI landscape. Foundation models like those from Google and Anthropic are merely the base; a myriad of smaller ventures builds applications and tools on top of them.
Challenges Faced by Small AI Entities
Lacking the financial resilience of larger companies, these smaller businesses are anxious about forthcoming regulations. Triveni Gandhi of Dataiku underscores the sense of alienation these companies feel, especially when the larger players largely dictate the rules.
Call for Inclusivity in AI Regulation
While the Frontier Model Forum engages with governmental bodies, its future plans for expanding membership remain unclear.
Smaller players want their perspectives included in regulatory discussions. Ron Bodkin of ChainML suggests that rules should be proportionate to an AI entity’s size and scope. Gandhi advocates for including diverse stakeholders in bodies like the International Organization for Standardization (ISO).
Potential Dangers of Exclusivity in Regulation
The hazard of regulatory capture is becoming evident. While it is pragmatic for governments to consult major AI players, over-reliance on them could deter smaller companies and inadvertently entrench the incumbents.
AI Now emphasizes the need for a more public-driven narrative around AI. Its report calls for a discussion steered by regulators and the public.
Beena Ammanath, from the Global Deloitte AI Institute, reiterates the importance of trust. Building trust in AI means involving a spectrum of voices, including academia, NGOs, and international bodies. As lawmakers deliberate on AI regulation, the dialogue still has room for expansion.
Ammanath concludes, “Policies must prioritize public interest to ensure trust, uphold ethics, and encourage responsible AI adoption.”