Imagine watching a familiar video of yourself on Twitter in which you are, say, speaking at a conference. A few sentences in, you realise that the ‘you’ in the video is saying all sorts of crazy things (that you did not say). Soon, all your devices are beeping and chiming and you’ve got hundreds of Twitter notifications as the post goes viral. This is the world of ‘deep-fakes’ - audio and visual-media manipulation taken to a new level altogether, assisted by Artificial Intelligence (AI) and the latest advances in software. There is no clear definition of ‘deep-fake’, but the term is broadly used to refer to media manipulation rooted in deep neural networks (a form of AI).

While the technique can be used to have some harmless fun, it is rife with possibilities of misuse. From creating fake pornographic videos to making politicians appear to say things they did not, the potential for damage to individuals, organisations and societies is vast.

With the 2020 presidential elections looming, the issue has the attention of lawmakers and other stakeholders in the U.S. The Pentagon is also involved via its Defense Advanced Research Projects Agency. The Permanent Select Committee on Intelligence held a hearing on the challenges of AI and deep-fakes on June 13.

“A state-backed actor creates a deep-fake video of a political candidate accepting a bribe with the goal of influencing an election. Or an individual hacker claims to have stolen audio of a private conversation between two world leaders, when in fact no such conversation took place,” said Intelligence Committee Chair Adam Schiff, describing some hypothetical scenarios.

Hardware has become cheaper and more powerful, and software more accessible and capable. The incorporation of AI in software has made it “dramatically” easier to manipulate media, Jack Clark, of OpenAI, a research and technology organisation that focusses on the safe use of AI, told the House panel.

The ability of social media to make things viral compounds the problem. A few weeks ago, a video of House of Representatives Speaker Nancy Pelosi was altered to make her appear drunk by manipulating the audio so that her speech sounded slurred. While this was not deep-fake technology at work, it highlighted the danger of such videos.

The Pelosi video also highlighted something else: the different responses across social media platforms, which enjoy considerable freedom from liability. YouTube removed the video; Facebook kept it up, marked it as “false”, and slowed the speed at which it was distributed.

Witnesses providing testimony at the House hearing cited India in their examples to illustrate the dangers of misinformation. For instance, the panel heard how a prominent Indian journalist was at risk after her face was transposed onto a deep-fake pornographic video that went viral.

But individuals are not the only ones at risk, Danielle Citron, a law professor whose expertise includes AI issues, told the House panel. Ms. Citron described a hypothetical scenario of a video that is manipulated so it appears that a company CEO is admitting that the company is insolvent, the night before an IPO. “We need a combination of law, markets and really, societal resilience to get through this,” she said.

The Deepfakes Accountability Act, a Bill working its way through the House, seeks to ensure that those creating deep-fake media include with it appropriate disclosures, such as watermarks and descriptions; that victims have the right to sue creators; and that victims have means to protect their reputations when creators cannot be brought to court (e.g., foreign governments). It additionally endeavours to update existing laws around ID theft as well as promote federal research to develop technologies that detect deep-fakes.

Schiff summed this up at last week’s hearing: “Now is the time for social media companies to put in place policies to protect users from this kind of misinformation, not in 2021 after viral deep-fakes have polluted the 2020 elections.”

Sriram Lakshman is The Hindu’s Washington correspondent.