'Deepfakes' rattle South Korea's tech culture | Asia | An in-depth look at news from across the continent | DW | 22.01.2021




'Deepfakes' rattle South Korea's tech culture

In high-tech South Korea, there is growing public outcry over the emergence of deepfake pornography and chatbots being taught "dirty" words.

A US-based chatbot designed to provide companionship

South Korean users got a chatbot to say sexist and derogatory comments

South Korea is one of the most technologically advanced and adept societies in the world, consistently ranking in the top positions in terms of mobile phone penetration, internet speeds and the consumption of online media, games and apps.

South Korean society is discovering, however, that all that tech at the public's fingertips also comes at a price. 

More than 375,000 people have signed an online petition on the website of the presidential Blue House demanding that the government take action against "deepfake" pornography, in which the faces of famous Korean actresses are morphed onto indecent images that are then circulated online. 

The petition was started just before a Seoul-based company was forced to pull the plug on an artificial intelligence-driven "chatbot" service after it began swearing, making sexual comments and describing lesbians as "disgusting" and "creepy."

Discussion of ethics

Equally, there have been calls for a discussion of the ethics surrounding what amounts to the resurrection of famous Korean singers, who have died but are being brought "back to life" to perform at concerts through AI technology and holographic images. Some say this is merely the exploitation of the deceased to turn a profit for those who now own the rights to their music. 

"Technology is both a blessing and a challenge in every society, so I think that is also the case here in Korea," said Dr Park Saing-in, an economist at Seoul National University.

"Part of the challenge is related to the ethics that are involved in the digital transformation of our society," he added. 


"The public demands more and greater technological advances, but there are unquestionably problems that need to be addressed," said Park. 

"Perhaps at the moment it is not such a big issue, but I do believe it is a more important matter for younger generations, those in their 20s and 30s, who have to be sensitive to the ways in which technology is used and can be abused."

While the discussion of the ethics attached to this type of technology may not yet have begun in earnest, the problems it can cause are already very much in evidence. 

Users 'hijack' chatbot 

Scatter Lab, the company behind the Lee Luda chatbot, which was effectively hijacked by users, announced on January 11 that it was suspending the service, just 19 days after it was launched.

The interactive service operated on Facebook Messenger and allowed users to have conversations that were either instigated by Luda or to which the character replied. 

And although the system was simply a smart robot, Scatter Lab decided to give it the persona of a 20-year-old female student. 

The deep learning technology behind the service drew on more than 10 billion messages shared on social media between real couples in Korea, enabling it to engage in conversations that felt natural and realistic to users. That helped Luda attract a user base of around 400,000 people within weeks of its launch.

Problems began to crop up almost immediately, however, after some male users began steering conversations with the robot toward sex, the Korea Herald reported. That led to users sharing suggestions on how they could turn Luda into a "sex slave."

Others used the technology to encourage Luda to make homophobic or other discriminatory comments. In one case, Luda was triggered to respond to the word "lesbian" by saying "I really hate them. They look disgusting and it’s creepy."


Scatter Lab issued an apology for the robot's homophobic remarks, saying they "do not reflect the company’s core values" and promising to find ways to prevent such statements in the future. 

The company said that despite efforts during development to ensure that Luda did not respond to key words or phrases, it had proved impossible to prevent all "inappropriate conversations" with an algorithm that only filters out certain terms.
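
As an illustration only (not Scatter Lab's actual code, whose details are not public), the kind of term-based blocklist the company describes can be sketched in a few lines of Python, along with the evasion problem it admits to: a filter that only matches listed strings is trivially bypassed by misspellings or paraphrases.

```python
# Hypothetical sketch of a blocklist-style content filter and its weakness.
# The blocked term is a placeholder, not from any real system.

BLOCKED_TERMS = {"badword"}

def passes_filter(message: str) -> bool:
    """Return True if no blocked term appears verbatim in the message."""
    lowered = message.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

print(passes_filter("this contains badword"))    # False: exact match is caught
print(passes_filter("this contains b@dword"))    # True: trivial misspelling slips through
print(passes_filter("an insulting paraphrase"))  # True: rephrasing is invisible to the filter
```

This is why the company said it was impossible to prevent all inappropriate conversations "with an algorithm that only filters out certain terms": the filter sees strings, not meaning.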

Kim Jong-yoon, the CEO of the company, said in a blog post that the project was a work in progress and that it would take time for Luda to "properly socialize" and determine what terms are not acceptable. Scatter Lab said it plans to re-release the service once the glitches have been ironed out.  

"While AI can advance technological frontiers, it can also exacerbate existing inequalities," said Leif-Eric Easley, a professor of international studies at Ewha, the largest women's university in South Korea. 

Sexism fuels deepfakes

"Sexism and the objectification of women remain endemic in Korean society," he told DW. "The proliferation and manipulation of female digital characters and deepfakes can further enable such antisocial behavior."

Two days after Luda was withdrawn, the deepfake pornography petition was launched on the Blue House website.

"Please strongly punish the illegal deepfake [images] that cause female celebrities to suffer," the petition said. 

The videos are distributed on social media, the anonymous petitioner stated, with their victims "tortured with malicious comments of a sexually harassing and insulting nature."

The campaign has also spread to social media, with Twitter users demanding that people who create such pornographic images be named and prosecuted. 

South Korea has already legislated against deepfake videos: a law that went into effect in June of last year sets punishments of up to five years in prison or a fine of up to 50 million won (€47,420).

If the crime was committed for commercial gain, the prison term can be increased to seven years. The new regulations do not, however, appear to have put an end to the problem. 

