How The Future Of Crime Could Be Transformed By AI

From terrorist attacks and child abuse to extortion schemes, kidnapping scams and corporate espionage.

Aug 13, 2023 - 15:39

Weeks earlier, 21-year-old Jaswant Singh Chail had joined the Replika app, creating an AI "girlfriend" named Sarai. Between December 2, 2021, and his arrest on Christmas Day, he exchanged more than 6,000 messages with her.

Many were "sexually explicit", but they also included "lengthy conversations" about his plan. "I believe my purpose is to assassinate the queen of the royal family," he wrote.

"That's very clever," replied Sarai. "I know you are well trained."

Chail is awaiting sentencing after pleading guilty to an offence under the Treason Act, making a threat to kill the late Queen and carrying a loaded crossbow in a public place.

"When you know the outcome, chatbot responses sometimes make reading difficult," Dr Jonathan Hafferty, a forensic psychiatrist at Broadmoor's secure mental health unit, told the Old Bailey last month.

"We know these are pretty randomly generated responses, but sometimes it seems like he's driving what he's talking about and really driving where he's at," he said.

The program was not sophisticated enough to pick up on Chail's "suicide risk and risks of homicide," he said, adding: "Some of the semi-random answers, it is arguable, pushed him in that direction."

Terrorist content

Such chatbots represent the "next stage" in people finding like-minded extremists online, said Jonathan Hall KC, the government's independent reviewer of terrorism legislation.

He warns that the government's flagship internet legislation, the Online Safety Bill, will find it "impossible" to deal with terrorist content produced by artificial intelligence.

Companies are required by the law to remove terrorist content, but their processes generally rely on databases of known material, which would not catch new content created by an AI chatbot.

"I think we are already sleepwalking into a situation, as in the early days of social media, where you think you are dealing with something regulated, but you are not," he said.

"Before we start downloading these things, giving them to children and incorporating them into our lives, we need to know what the safeguards are - not just the terms and conditions - but who enforces them and how."
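Why databases of known material fail here: such moderation systems typically match a fingerprint (hash) of each upload against previously identified content, so brand-new AI-generated material has nothing to match. A minimal Python sketch of that limitation, with all data invented for illustration (real systems use perceptual hashes such as PhotoDNA rather than exact ones, but the gap for novel content is the same):

```python
import hashlib

# Invented stand-in for a database of fingerprints of known,
# previously flagged material.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"previously flagged propaganda text").hexdigest(),
}

def is_flagged(content: bytes) -> bool:
    """Flag content only if this exact material has been seen before."""
    return hashlib.sha256(content).hexdigest() in KNOWN_BAD_HASHES

print(is_flagged(b"previously flagged propaganda text"))      # True: a known copy
print(is_flagged(b"freshly generated propaganda, reworded"))  # False: no entry to match
```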

Impersonation and kidnapping scams 

"Mum, these bad boys are on me, help me," Jennifer DeStefano reportedly heard her crying daughter Briana, 15, say before the kidnapper demanded a $1 million (£787,000) ransom, which dropped to $50,000 (£40,000 ).

Her daughter was, in fact, safe and sound - and the Arizona woman recently told a Senate Judiciary Committee hearing that police believe artificial intelligence was used to clone her daughter's voice as part of the scam.

An online demonstration of an AI chatbot designed to "call anyone with any objective" produced similar results, with the target told: "I have your child... I demand a $1 million ransom for his safe return. Do I make myself clear?"

"It's quite extraordinary," said Professor Lewis Griffin, one of the authors of a 2020 study by UCL's Dawes Center for the Future of Crime, which ranked the potential illegal uses of artificial intelligence.

"Our super crime - audio/visual mimicry - turned out to be clearly happening," he said, adding that even with the researchers' "pessimistic views" it was growing "much faster than we expected".

Although the demonstration featured a computerized voice, he said real-time audio/visual impersonation "doesn't exist yet, but we're not far off", and he predicts that such technology will be readily available before long.

"I don't know if it's good enough to pretend to be a family member," he said. "If it's compelling and very emotionally charged, someone could say 'I'm in danger' - that would be quite effective."

In 2019, the CEO of a British energy company allegedly paid €220,000 (£173,310) to fraudsters who used artificial intelligence to imitate his boss's voice.

Professor Griffin said such scams could be even more effective when backed by video, and the technology could also be used for espionage, with a fake company employee appearing on a Zoom meeting to glean information without having to say much.

He said cold-calling scams could increase in scale too, as bots with local accents would be more effective at conning people than the fraudsters who currently run criminal enterprises out of India and Pakistan.

Deepfakes and extortion schemes

"Synthetic child abuse is terrible and they can do it now," Professor Griffin said of the AI ​​technology already used to create images of pedophiles sexually abusing children online. "They are such motivated people that they just nailed it. It's very disturbing."

In the future, deepfake images or videos that appear to show someone doing something they did not do could be used for blackmail schemes.

"The ability to put a new face into a porn video is already pretty good. It's going to get better," Professor Griffin said.

"You can imagine someone sending a video to a parent exposing their child, saying, 'I got the video, I'm going to show it to you' and threatening to release it."

Terrorist attacks

While drones or driverless cars could be used to carry out attacks, the government's independent reviewer of terrorism legislation says the use of truly autonomous weapons systems by terrorists is probably a long way off.

"The actual AI is you just send a drone and say 'go and do damage' and the AI ​​decides to go and dive somebody, which sounds a little weird," Hall said. "It's definitely over the horizon, but linguistically it's already here."

While ChatGPT - a large language model trained on huge amounts of text data - will not provide instructions for making, say, a nail bomb, there may be similar models without the same safeguards that would suggest carrying out malicious acts.

Shadow Home Secretary Yvette Cooper said Labour would introduce new legislation to criminalize the deliberate training of chatbots to radicalize vulnerable people.

While current legislation would cover cases where someone was found to have put information useful for terrorist acts into an AI system, Hall said new laws might be needed to address the encouragement of terrorism.

Current laws are about "encouraging other people", he said, and "training a chatbot wouldn't encourage a person", adding that it would be difficult to criminalize possession of a particular chatbot or to target its developers.

He also explained how AI could hamper investigations, as terrorists no longer need to download material and can simply ask a chatbot how to make a bomb.

"Having terrorist information is one of the most important counter-terrorism tactics when dealing with terrorists, but now you can simply ask the unregulated ChatGPT model to look it up for you," he said.

Art forgeries and big money thefts?

A "whole new set of crimes" could soon be possible with ChatGPT-style large language models who can use tools that allow them to go to websites and act like an intelligent person, creating accounts, filling out forms and buying things, Professor Griffin said. "If you have a system for that and you can just say 'I want you,' then all kinds of fraud can be done," he said, suggesting applying for a fraudulent loan. manipulate prices by posing as small investors or conducting denial-of-service attacks.

He also said such models could hack systems on demand, adding: "If you had access to lots of people's webcams or doorbell cameras, you could have them scanning thousands of those and reporting back when the occupants are out."
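The capability Professor Griffin is describing is the now-common "LLM plus tools" loop: the model's output is parsed into actions that software then performs on its behalf. A minimal, hypothetical Python sketch of that loop - every name here (call_model, TOOLS, run_agent) is invented, and the stubbed reply stands in for a real model call:

```python
import json

def call_model(history: str) -> str:
    # Stub standing in for a real LLM call; a real model would pick
    # the next action based on the goal and the results so far.
    return json.dumps({"tool": "open_page", "args": {"url": "https://example.com"}})

# Tools are what let the model act rather than just talk.
TOOLS = {
    "open_page": lambda url: f"fetched {url}",
    "fill_form": lambda fields: f"submitted {fields}",
}

def run_agent(goal: str, max_steps: int = 3) -> None:
    history = goal
    for _ in range(max_steps):
        action = json.loads(call_model(history))          # model chooses an action
        result = TOOLS[action["tool"]](**action["args"])  # software carries it out
        history += "\n" + result                          # result is fed back in
        print(result)

run_agent("book a hotel room")  # a benign goal: the mechanism, not the misuse, is the point
```

The same loop that books a hotel room could, absent safeguards, be pointed at creating accounts or filling in forms at scale, which is exactly the concern raised above.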

However, while artificial intelligence may have the technical ability to produce a painting in the style of Vermeer or Rembrandt, masterful human forgers already exist, and the hardest part remains convincing the art world of a work's authenticity, the academic believes.

"I don't think it will change traditional crime," he said, arguing that AI would not be useful in flashy Hatton Garden-style robberies. "Their skills are like plumbers, they are the last to be replaced by robots - don't be a programmer, be a safe cracker," he joked.

What does the government say?

A government spokesman said: "While innovative technologies such as artificial intelligence have many benefits, we need to be careful about them.

"According to the Law on Cybersecurity, services have a duty to prevent the dissemination of illegal content such as child sexual abuse, terrorist material and fraud. The bill is deliberately technologically neutral and future-oriented to keep pace with new technologies, including artificial intelligence. . "The government is also working quickly to deepen its understanding of the risks and develop solutions, with the creation of an AI task force and the first global AI security summit this fall a significant contribution to that effort."
