COMMENTARY: Navigating the uncharted waters of AI regulation

The ChatGPT app is seen on an iPhone in New York, Thursday, May 18, 2023. (AP Photo)

Regulation is trust. The problem is that “trust” has different meanings in government, technology and marketing. All three build from the same foundation, but they diverge in their destinations.

The destination for technology is utility for the user. Think physics: if you get it wrong, say on a spacecraft, things get destroyed and explode.

In government, trust is the exchange of individual sovereignty for the state’s protection of one’s liberties and property. Here, consider Thomas Hobbes’ “Leviathan,” written during the English Civil War (1642–1651), which argues for a social contract and rule by an absolute sovereign.

In marketing, trust is a promise by the brand to the buyer: to curate a good or service and, most important, an identity conferred by that purchase. Consider taking a TikTok or Instagram selfie with a new purchase, or an executive touting the new technology implementation transforming her organization.

There has never been a successful technology that has waited for regulation before launching.

Nor has there been a successful technology that hasn’t eventually been regulated. Now that artificial intelligence is a success, and the race for applications to become platforms has begun, the test of market power versus anti-competitive behavior will follow the natural cycle of all new technologies.

What makes AI different is its potential to interfere with the government’s ability to protect liberty and property, and with the marketing and advertising industries’ core role as creators, curators, gatekeepers and distributors of identity.

That means technology platform companies will vie for their own regulatory framework to be the one adopted. Against that business backdrop, AI regulation is coming very quickly out of the European Union around privacy and data sovereignty, primarily through a host of cumbersome rules in the EU AI Act and from the June G7 meeting.

The EU AI Act, in its current form, would give significant power to closed platforms and place an undue burden on open-source ones, thus stifling innovation.

Ironically, this would give more market power to established players and effectively regulate away new market entrants. The act states: “(A) provider of a foundation model shall, prior to making it available on the market or putting it into service, ensure that it is compliant with the requirements set out in this Article, regardless of whether it is provided as a standalone model or embedded in an AI system or a product, or provided under free and open source licenses, as a service, as well as other distribution channels.”

This means an open-source provider would have to maintain risk mitigation strategies, data governance measures and documentation for 10 years, among other requirements. Take an open-source platform like HuggingFace, whose models have been used by more than 50,000 organizations. The rule would force those organizations to closed-source providers simply to comply.

Another issue will be cross-border compliance, a sticky subject since the General Data Protection Regulation, and one that will get stickier as where data comes from and where models are trained fall under new regulations.

Governments should focus on end-use applications rather than prescribing technological mandates for the underlying models. Policymakers would do better to protect liberties and property than to regulate the bits that don’t byte.

Tim Heffernan is the chief growth officer at T3 Expo. He wrote this for InsideSources.com.
