Biden robocall traced to chatbot created by OpenAI developer

A robocall that used a deepfake voice of President Joe Biden to discourage voters in New Hampshire from participating in the primary election on Tuesday was traced back to a chatbot created by an OpenAI developer, who was suspended by the company for violating its terms of service.

The robocall, first reported by NBC News, began with the phrase “What a bunch of malarkey”, a signature expression of Biden’s, and urged Democrats to “save your vote for the November election” and not to “enable the Republicans in their quest to elect Donald Trump again”. The call also included, without her permission, the personal phone number of Kathy Sullivan, a former state party chair and a Biden supporter.

Sullivan denounced the robocall as “an attack on democracy” and called for the prosecution of the perpetrators. She also said that the call was intended to hurt Biden, who is not on the ballot in New Hampshire, but is a write-in candidate backed by a Super PAC called Granite for America.

The New Hampshire attorney general, John Formella, launched an investigation into the robocall and advised voters to “disregard the contents of this message entirely”.

The robocall was made using ChatGPT, a chatbot developed by OpenAI, a research organization whose stated aim is to build artificial intelligence that benefits humanity. ChatGPT is built on a neural network model called GPT-3, which can generate natural language text across a wide range of topics and styles; according to the reports, the tool was also used here for voice cloning.

According to the Washington Post, OpenAI confirmed that the robocall was created by one of its developers, who used the ChatGPT platform to generate the deepfake voice of Biden. The developer, whose identity was not disclosed, was suspended by the company for violating its code of conduct and terms of service, which prohibit the use of ChatGPT for illegal, harmful, or deceptive purposes.

OpenAI also said that it has taken measures to prevent the misuse of ChatGPT, such as adding a watermark to generated speech to indicate that it is synthetic rather than the original speaker’s voice, and requiring users to agree to a code of conduct and terms of service before using the platform.
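OpenAI has not published the details of its watermarking scheme, so as a toy illustration of the general idea only, the sketch below hides a known bit pattern in the least-significant bits of 16-bit PCM audio samples and checks for it later. Real synthetic-speech watermarks are far more robust and imperceptible; every name and value here is hypothetical.

```python
# Toy audio-watermark sketch: embed a fixed bit pattern in the
# least-significant bit (LSB) of each PCM sample, then detect it.
# This is NOT how any production system works; it only illustrates
# the concept of tagging audio as synthetic.
WATERMARK = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical 8-bit "synthetic" tag

def embed_watermark(samples, mark=WATERMARK):
    """Overwrite the LSB of each sample with the repeating mark pattern."""
    return [(s & ~1) | mark[i % len(mark)] for i, s in enumerate(samples)]

def detect_watermark(samples, mark=WATERMARK):
    """Return True if every sample's LSB matches the mark pattern."""
    return all((s & 1) == mark[i % len(mark)] for i, s in enumerate(samples))

plain = [1200, -853, 431, 9, -77, 15000, -2, 64]  # made-up PCM samples
tagged = embed_watermark(plain)
print(detect_watermark(tagged))  # True: watermark present
print(detect_watermark(plain))   # False for these unmarked samples
```

An LSB scheme like this is trivially destroyed by re-encoding or adding noise, which is exactly why real watermarking research focuses on patterns that survive compression and playback.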

However, OpenAI also acknowledged that ChatGPT poses ethical and social challenges, such as privacy, consent, and misinformation, and that more research and regulation are needed to ensure the responsible use of voice cloning technology.

The robocall incident highlights the potential dangers of AI-generated media, especially in the context of the 2024 election, where deepfake audio and video could be used to manipulate public opinion, sow discord, or undermine trust in institutions. It also raises questions about the accountability and oversight of AI developers and platforms, and the need for public awareness and education on how to detect and combat fake media.

In brief: a robocall using a deepfake of President Joe Biden’s voice reached thousands of Democratic voters in New Hampshire, urging them not to vote in Tuesday’s primary. The call was traced to a chatbot built by an OpenAI developer, who was suspended by the company for violating its terms of service. The incident underscores the ethical and social challenges of AI-generated media heading into the 2024 election.

Fake media, whether described as misinformation (false content spread without intent to deceive), disinformation (deliberately deceptive content), or propaganda (content crafted to advance an agenda), is content that is false, misleading, or deceptive and intended to influence people’s beliefs, opinions, or actions. It can take the form of text, images, audio, or video, and can spread through social media, websites, or apps.

Detecting and combating fake media is a complex task that requires both technical and human effort. Some possible approaches are:

Using artificial intelligence (AI) tools to analyze and verify the content, source, and context of media. Such tools apply machine learning, natural language processing, computer vision, and related techniques to identify and flag potentially fake media, such as deepfakes, bot accounts, or coordinated trolling.

Developing media literacy skills and critical thinking abilities among the public. Media literacy is the ability to access, analyze, evaluate, and create media in various forms. Critical thinking is the ability to question, reason, and make evidence-based judgments.

Promoting ethical and professional standards and practices in the media industry and among technology companies. News organizations should produce high-quality journalism that is accurate, transparent, and accountable, and that corrects fake media without amplifying it. Technology companies should invest in tools that identify and reduce fake media, and improve online accountability and responsibility.

These are some of the possible ways to detect and combat fake media, though not the only ones. The work is a collective and ongoing effort that requires participation and collaboration from all actors in the information ecosystem.
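One concrete form the verification approach above can take is cryptographic provenance: a publisher signs content when it is created, and anyone holding the verification key can later confirm the bytes are unaltered. The sketch below uses a shared-secret HMAC purely for illustration; real provenance systems (for example, the C2PA standard) use public-key certificates instead, and the key and names here are made up.

```python
import hashlib
import hmac

# Minimal provenance sketch: sign media bytes with a secret key at
# publication; a verifier with the same key can detect any alteration.
# The shared secret and all names below are illustrative assumptions.
PUBLISHER_KEY = b"demo-secret-key"

def sign_media(content: bytes, key: bytes = PUBLISHER_KEY) -> str:
    """Return a hex HMAC-SHA256 tag binding the key to the content."""
    return hmac.new(key, content, hashlib.sha256).hexdigest()

def verify_media(content: bytes, tag: str, key: bytes = PUBLISHER_KEY) -> bool:
    """Check the tag in constant time; False means altered or unsigned."""
    return hmac.compare_digest(sign_media(content, key), tag)

original = b"Official campaign statement"
tag = sign_media(original)
print(verify_media(original, tag))               # True: untouched
print(verify_media(b"Doctored statement", tag))  # False: content changed
```

A scheme like this cannot say whether content is *true*, only whether it is the same bytes the publisher signed, which is precisely the narrow guarantee provenance systems aim for.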
