Misleading AI Generated Media Would Be Prohibited Under Bill

Scott Spradling of the NH Broadcasters Association raises concerns about a bill to prohibit misleading AI-generated media before the House Criminal Justice and Public Safety Committee Wednesday.


CONCORD — A bill that seeks to put guardrails around the use of artificial intelligence, like the deepfake robocall of President Biden sent before Tuesday’s Presidential Primary, had a public hearing Wednesday.

House Bill 1500 would prohibit the unlawful distribution of misleading synthetic media, require that the use of AI technology be conspicuously noted on a video, and require the consent of an identifiable person in a posting.

While people testifying on the bill before the Criminal Justice and Public Safety Committee agreed the state needs to begin addressing potential misuse of the technology, they cautioned the bill as written is overly broad and would have unintended consequences.

Scott Spradling, representing the New Hampshire Broadcasters Association, told the committee the bill focuses on the distribution of the media and not on who created it.

“The way the bill is written puts broadcasters clearly in the crosshairs of liability,” he said. “We want the free exchange of information, but to hold the bad actors responsible, that is where the focus ought to be.”

Spradling said local media outlets, unlike larger broadcasters, do not have the capability to verify what is being produced today, but the standard is not to use material that cannot be verified.

But others worried about the implications of the bill’s prohibitions for free speech.

“Where is the line on what is considered free speech,” asked committee member Rep. Jodi Newell, D-Keene, “when you are essentially hijacking someone else’s free speech.”

Merrimack County Attorney Paul Halvorsen said that happens all the time with parody, but it has to be content neutral.

The American Civil Liberties Union NH opposed the bill, noting First Amendment concerns.

Senior Staff Attorney Henry Klementowicz said AI generated messages have the same First Amendment protections as other political communications, such as parody or satire.

“The bill also does not contain any carve-outs for parody or satire,” he wrote to the committee.

The prime sponsor of the bill, Rep. Linda Massimilla, D-Littleton, said her proposal is a step into unfamiliar territory by providing some definitions and penalties for the misuse of the rapidly changing technology that has the potential to do great harm.

Massimilla said the bill would prevent a disgruntled boyfriend from posting a compromising picture of his ex-girlfriend on social media without her consent, and would require that synthetic media have a notice that it was AI generated.

The potential for interfering with or manipulating elections is dangerous with the ever-changing technology, she said.

Bill co-sponsor, Rep. Thomas Cormen, D-Lebanon, who was a computer science professor at Dartmouth College for nearly 30 years, noted the recent exponential growth of AI and how easy it is today to create deepfakes.

“I am very concerned about the misuse of AI. It can happen and did happen and it is only going to get worse,” Cormen said, “and the potential for abuse is going to increase.”

He suggested the committee might want to narrow the bill’s focus, but said it is “broadly in the right place.”

“One thing missing from this bill is it needs to be specifically stated,” he said, “there is something malicious or defamatory involved.”

He said it is important to have a statement that the media was created with AI so the person viewing or hearing or experiencing it understands it is produced by artificial intelligence. “If they are deceived,” Cormen said, “then it is on them.”

Another bill sponsor, Rep. Jonah Wheeler, D-Peterborough, said there are valid concerns about free speech, but there should also be concerns for the foundation of truth.

“There has to be some baseline of truth we operate from,” Wheeler said, “and AI can violate that foundation. People can create something that totally misleads the population.”

Assistant Merrimack County Attorney Steven Endres said the deepfake robocall of Biden telling voters not to vote could have been created in three ways, only one of which would use AI, making it difficult for a prosecutor to prove and a difficult hurdle in court.

He suggested putting the section on elections in the elections statutes and leaving the other aspects of the bill in the criminal statutes.

He noted that the Attorney General’s Office handles all election interference or fraud cases, not county attorneys.

The committee did not make an immediate recommendation on the bill.

Garry Rayno may be reached at garry.rayno@yahoo.com.