As AI rises, lawmakers are scrambling to catch up



From “smart” vacuum cleaners and driverless cars to advanced disease diagnosis techniques, artificial intelligence has infiltrated every aspect of modern life.

Its promoters see it as revolutionizing the human experience, but critics stress that the technology risks putting machines in charge of life-changing decisions.

Regulators in Europe and North America are concerned.

The European Union is likely to pass legislation next year – the AI Act – aimed at reining in the age of the algorithm.

The United States recently released a blueprint for an AI Bill of Rights, and Canada is also mulling legislation.

China’s use of biometric data, facial recognition and other technologies to build a powerful surveillance system has loomed large in the debates.

Gry Hasselbalch, a Danish academic who advises the EU on the controversial technology, argued that the West is also in danger of creating “totalitarian infrastructures”.

“I see this as a big threat, regardless of the benefits,” she told AFP.

But before regulators can act, they face the difficult task of defining what AI actually is.

‘A mug’s game’

Brown University’s Suresh Venkatasubramanian, who co-authored the AI Bill of Rights blueprint, said trying to define AI was “a mug’s game.”

Any technology that affects people’s rights should be within the scope of the bill, he tweeted.

The 27-nation EU is taking the more tortuous route of trying to define the expansive field.

Its bill lists the kinds of approaches defined as AI and covers virtually any computer system that involves automation.

The problem comes from the changing use of the term AI.

For decades, the term described attempts to create machines that simulated human thinking.

But funding largely dried up for this research, known as symbolic AI, in the early 2000s.

The rise of the Silicon Valley titans saw the rebirth of artificial intelligence as a byword for their number-crunching programs and the algorithms they spawned.

This automation allowed them to target users with advertising and content, helping them earn hundreds of billions of dollars.

“AI was a way for them to put this surveillance data to use and to obscure what was going on,” Meredith Whittaker, a former Google employee who co-founded New York University’s AI Now Institute, told AFP from New York.

Thus, the EU and the US have concluded that any definition of AI must be as broad as possible.

‘Too challenging’

But from that point on, the two Western powers have largely gone their separate ways.

The draft EU AI Law is over 100 pages long.

Among its most eye-catching proposals is a blanket ban on certain “high-risk” technologies, the kind of biometric surveillance tools used in China.

It also drastically limits the use of AI tools by immigration officials, police and judges.

Hasselbalch said some technologies were “simply too challenging for fundamental rights”.

The AI Bill of Rights blueprint, on the other hand, is a short set of principles framed in aspirational language, with exhortations like “you should be protected from unsafe or ineffective systems.”

The non-binding blueprint was issued by the White House and relies on existing law.

Experts believe there will be no AI-specific legislation in the United States until 2024 at the earliest, because Congress is deadlocked.

‘Flesh wound’

Opinions differ on the merits of each approach.

“We desperately need regulation,” New York University’s Gary Marcus told AFP.

He notes that “large language models” – the artificial intelligence behind chatbots, translation tools, predictive text software and more – can be used to generate harmful misinformation.

Whittaker questioned the value of laws aimed at tackling AI rather than the “surveillance business models” that underpin it.

“If you’re not addressing this at a fundamental level, I think you’re putting a Band-Aid on a flesh wound,” she said.

But other experts have widely welcomed the US approach.

AI was a better target for regulators than the more abstract concept of privacy, said Sean McGregor, a researcher who chronicles technology failures for the AI Incident Database.

But he said there could be a risk of over-regulation.

“Existing authorities can regulate AI,” he told AFP, pointing to the US Federal Trade Commission and housing regulator HUD.

But where experts broadly agree is on the need to strip away the hype and mysticism surrounding AI technology.

“It’s not magic,” McGregor said, comparing AI to a highly sophisticated Excel spreadsheet.

