

ChatGPT chief: AI should be regulated by a US or global agency

May 18, 2023

The head of the artificial intelligence company that makes ChatGPT told Congress on Tuesday that government intervention will be critical to mitigating the risks of increasingly powerful AI systems.

“As this technology advances, we understand that people are anxious about how it could change the way we live. We are too,” OpenAI CEO Sam Altman said at a Senate hearing.

Altman proposed the formation of a US or global agency that would license the most powerful AI systems and have the authority to “take that license away and ensure compliance with safety standards.”

His San Francisco-based startup rocketed to public attention after it released ChatGPT late last year. The free chatbot tool answers questions with convincingly human-like responses.

What started out as a panic among educators about ChatGPT’s use to cheat on homework assignments has expanded to broader concerns about the ability of the latest crop of “generative AI” tools to mislead people, spread falsehoods, violate copyright protections and upend some jobs.

Sen. Richard Blumenthal, the Connecticut Democrat who chairs the Senate Judiciary Committee’s subcommittee on privacy, technology and the law, opened the hearing with a recorded speech that sounded like the senator, but was actually a voice clone trained on Blumenthal’s floor speeches and reciting ChatGPT-written opening remarks.

The result was impressive, said Blumenthal, but he added, “What if I had asked it, and what if it had provided, an endorsement of Ukraine surrendering or (Russian President) Vladimir Putin’s leadership?”

The overall tone of senators’ questioning was polite Tuesday, a contrast to past congressional hearings in which tech and social media executives faced tough grillings over the industry’s failures to manage data privacy or counter harmful misinformation. In part, that was because both Democrats and Republicans said they were interested in seeking Altman’s expertise on averting problems that haven’t yet occurred.

Blumenthal said AI companies ought to be required to test their systems and disclose known risks before releasing them, and expressed particular concern about how future AI systems could destabilise the job market. Altman was largely in agreement, though he had a more optimistic take on the future of work.

Pressed on his own worst fear about AI, Altman mostly avoided specifics, except to say that the industry could cause “significant harm to the world” and that “if this technology goes wrong, it can go quite wrong.”

But he later proposed that a new regulatory agency should impose safeguards that would block AI models that could “self-replicate and self-exfiltrate into the wild” — hinting at futuristic concerns about advanced AI systems that could manipulate humans into ceding control.

That focus on a far-off “science fiction trope” of super-powerful AI could make it harder to take action against already existing harms that require regulators to dig deep on data transparency, discriminatory behavior and potential for trickery and disinformation, said a former Biden administration official who co-authored its plan for an AI bill of rights.

“It’s the fear of these (super-powerful) systems and our lack of understanding of them that is making everyone have a collective freak-out,” said Suresh Venkatasubramanian, a Brown University computer scientist who was assistant director for science and justice at the White House Office of Science and Technology Policy. “This fear, which is very unfounded, is a distraction from all the concerns we’re dealing with right now.”

OpenAI has expressed those existential concerns since its inception. Co-founded by Altman in 2015 with backing from tech billionaire Elon Musk, the startup has evolved from a nonprofit research lab with a safety-focused mission into a business. Its other popular AI products include the image-maker DALL-E. Microsoft has invested billions of dollars into the startup and has integrated its technology into its own products, including its search engine Bing.

Altman is also planning to embark on a worldwide tour this month to national capitals and major cities across six continents to talk about the technology with policymakers and the public. On the eve of his Senate testimony, he dined with dozens of U.S. lawmakers, several of whom told CNBC they were impressed by his comments.

Also testifying were IBM’s chief privacy and trust officer, Christina Montgomery, and Gary Marcus, a professor emeritus at New York University who was among a group of AI experts who called on OpenAI and other tech firms to pause their development of more powerful AI models for six months to give society more time to consider the risks. The letter was a response to the March release of OpenAI’s latest model, GPT-4, described as more powerful than ChatGPT.

The panel’s ranking Republican, Sen. Josh Hawley of Missouri, said the technology has big implications for elections, jobs and national security. He said Tuesday’s hearing marked “a critical first step towards understanding what Congress should do.”

A number of tech industry leaders have said they welcome some form of AI oversight but have cautioned against what they see as overly heavy-handed rules. Altman and Marcus both called for an AI-focused regulator, preferably an international one, with Altman citing the precedent of the U.N.’s nuclear agency and Marcus comparing it to the US Food and Drug Administration. But IBM’s Montgomery instead asked Congress to take a “precision regulation” approach.

“We think that AI should be regulated at the point of risk, essentially,” Montgomery said, by establishing rules that govern the deployment of specific uses of AI rather than the technology itself.

Europe leading

LONDON (AP) — Authorities around the world are racing to draw up rules for artificial intelligence, including in the European Union.

A European Parliament committee voted last week to strengthen the flagship legislative proposal as it heads toward passage, part of a yearslong effort by Brussels to draw up guardrails for artificial intelligence. Those efforts have taken on more urgency as the rapid advances of chatbots like ChatGPT highlight benefits the emerging technology can bring — and the new perils it poses.

Here’s a look at the EU’s Artificial Intelligence Act:

How do the rules work?

The AI Act, first proposed in 2021, will govern any product or service that uses an artificial intelligence system. The act will classify AI systems according to four levels of risk, from minimal to unacceptable. Riskier applications will face tougher requirements, including being more transparent and using accurate data. Think about it as a “risk management system for AI,” said Johann Laux, an expert at the Oxford Internet Institute.
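As a rough sketch of what such a tiered scheme looks like in practice, the outline below expresses the four risk levels as a simple classification in Python. The tier names, examples and obligations are a paraphrase of the proposal for illustration only, not the Act’s legal text:

```python
from enum import Enum

class RiskLevel(Enum):
    """The AI Act's four risk tiers, paraphrased for illustration."""
    MINIMAL = 1        # e.g. spam filters, video games
    LIMITED = 2        # e.g. chatbots, which must be labeled as machines
    HIGH = 3           # e.g. employment or education systems
    UNACCEPTABLE = 4   # e.g. social scoring; banned outright

# Hypothetical mapping from tier to obligation, paraphrasing the article;
# the real Act attaches far more detailed legal duties to each tier.
OBLIGATIONS = {
    RiskLevel.MINIMAL: "no special requirements",
    RiskLevel.LIMITED: "disclose that users are interacting with a machine",
    RiskLevel.HIGH: "transparency, accurate data, risk assessment and mitigation",
    RiskLevel.UNACCEPTABLE: "deployment prohibited",
}

def obligations_for(level: RiskLevel) -> str:
    """Return the (paraphrased) duty attached to a given risk tier."""
    return OBLIGATIONS[level]

print(obligations_for(RiskLevel.HIGH))
```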

What are the risks?

One of the EU’s main goals is to guard against any AI threats to health and safety and protect fundamental rights and values.

That means some AI uses are an absolute no-no, such as “social scoring” systems that judge people based on their behavior. AI that exploits vulnerable people including children, or that uses subliminal manipulation that can result in harm, such as an interactive talking toy that encourages dangerous behavior, is also forbidden.

Lawmakers beefed up the proposal by voting to ban predictive policing tools, which crunch data to forecast where crimes will happen and who will commit them. They also approved a widened ban on remote facial recognition, save for a few law enforcement exceptions like preventing a specific terrorist threat. The technology scans passers-by and uses AI to match their faces to a database.

The aim is “to avoid a controlled society based on AI,” Brando Benifei, the Italian lawmaker helping lead the European Parliament’s AI efforts, told reporters Wednesday. “We think that these technologies could be used instead of the good also for the bad, and we consider the risks to be too high.”

AI systems used in high-risk categories like employment and education, which would affect the course of a person’s life, face tough requirements such as being transparent with users and putting in place risk assessment and mitigation measures.

The EU’s executive arm says most AI systems, such as video games or spam filters, fall into the low- or no-risk category.

What about ChatGPT?

The original 108-page proposal barely mentioned chatbots, merely requiring them to be labeled so users know they’re interacting with a machine. Negotiators later added provisions to cover general-purpose AI like ChatGPT, subjecting them to some of the same requirements as high-risk systems.

One key addition is a requirement to thoroughly document any copyrighted material used to teach AI systems how to generate text, images, video or music that resembles human work. That would let content creators know if their blog posts, digital books, scientific articles or pop songs have been used to train algorithms that power systems like ChatGPT. Then they could decide whether their work has been copied and seek redress. (AP stories)

