Rise of Deepfakes in the Russia–Ukraine Conflict
Deepfakes, digital material made using machine learning (a kind of artificial intelligence technology), are piquing the attention of marketers and the general public alike, and are often depicted in the media as a “phantom danger.” Despite their importance in marketing theory and practice, deepfakes are little understood or discussed. Deepfake technology was developed in 2014 by computer scientist Ian Goodfellow using “generative adversarial networks” (GANs), which essentially pit two AIs against each other in a competition to produce realistic images. Deepfakes were initially used primarily by skilled exploiters, who infamously grafted people’s faces into videos and other images; today, anybody can create a deepfake video or picture using just a few tools. The technique drew mainstream attention in 2017, when the actor Carrie Fisher (who played Princess Leia in Star Wars) died before filming could be finished, sparking a significant debate in Hollywood about her digital resurrection. Deepfakes are no longer limited to actors and Hollywood blockbusters: AI has become a highly profitable industry, and AI-based deepfakes have generated a great deal of discussion. Deepfakes are created by combining AI with deep learning systems built from many deep networks. Over the last several years there has been much debate about the usage of deepfakes and their merits and downsides: they bring down the cost of video marketing, they may help design more effective omnichannel advertising, and they have the potential to give clients a more tailored experience.
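The adversarial game at the heart of a GAN can be illustrated in a few lines. The toy sketch below is purely illustrative (not production deepfake code): a one-parameter “generator” learns to shift random noise toward a target data distribution while a logistic “discriminator” tries to tell real samples from generated ones. The one-dimensional setup, the target mean of 4.0, and all names are assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def real_samples(n):
    # "Real" data: a Gaussian centred at 4.0
    return rng.normal(4.0, 0.5, size=n)

def generate(theta, n):
    # Generator: shifts standard noise by a learnable offset theta
    return rng.normal(0.0, 0.5, size=n) + theta

def discriminate(w, b, x):
    # Discriminator: logistic probability that x is "real"
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

theta, w, b, lr = 0.0, 1.0, 0.0, 0.05
for _ in range(2000):
    xr, xf = real_samples(64), generate(theta, 64)
    pr, pf = discriminate(w, b, xr), discriminate(w, b, xf)
    # Discriminator ascends its objective: log pr + log(1 - pf)
    w += lr * np.mean((1.0 - pr) * xr - pf * xf)
    b += lr * np.mean((1.0 - pr) - pf)
    # Generator ascends log pf: it tries to fool the discriminator
    pf = discriminate(w, b, generate(theta, 64))
    theta += lr * np.mean((1.0 - pf) * w)

# After training, theta should have drifted toward the real mean
print(f"learned offset: {theta:.2f}")
```

Real deepfake systems replace the single offset with deep convolutional networks and images instead of scalars, but the competitive dynamic is the same: each improvement in the discriminator forces the generator to produce more realistic output.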
Low-Cost Video Advertising
Marketers who employ deepfakes may save money on video advertising since they don’t need an in-person performer. Rather than hiring actors in person, a marketer might acquire a licence to use an actor’s persona, then utilise prior digital recordings of the actor to produce a new video by inserting relevant lines from a script. For example, if you want to feature your company’s CEO in a new video campaign but they don’t have time to shoot a new ad owing to a busy schedule, you may create the new campaign using only a few existing recordings from previous campaigns or interviews.

Better Omni-Channel Campaigns
Because you don’t require in-person performers for a campaign, you may recycle current videos for other marketing platforms, saving time and money. Instead of reshooting to match various goals for multiple channels, you may develop a sponsored social campaign by simply editing or replacing video clips. Alternatively, you may utilise speech synthesis to generate fresh dialogue for a podcast, radio, or streaming service ads.
Deepfake technology has also enabled hyper-personalization: individual clients may be targeted with more relevant content and experiences based on characteristics such as ethnicity or skin colour. For example, if a client’s ethnicity differs from that of the model in a brand’s marketing material, deepfake technology allows the model’s skin tone to be changed so the client can see how the product would appear on their own skin tone. This method may help a business enhance inclusiveness and reach a larger market. Unfortunately, because of its overwhelming capability, deepfake technology also has the potential to be utilised for nefarious purposes.
Issues with trust or ethics
Deepfake technology may be used to generate fake video, and determining the authenticity of a piece of material has become increasingly challenging. Even judging whether a picture genuinely shows the person it appears to show is difficult for anybody who does not know the individual personally. A deepfake video used by a marketer or business may cause a customer to feel tricked by the campaign and lose faith in the brand in the future. For example, if deepfakes make it feasible to generate a phoney review, that would be regarded as unethical.
The conflict between Russia and Ukraine began on February 24, 2022, when Russia invaded Ukraine. Since then, both nations have been putting forward their own accounts of how and why they are doing what they are doing. The Russia-Ukraine conflict represents a significant shift in how information is gathered and disseminated in a highly fluid combat environment, and the growing importance of information and technology in warfare is not in dispute. In December 2015, a cyberwarfare operation involving destructive malware attacked Ukraine’s industrial power and control systems, knocking out electricity to 700,000 households for several hours. Many of these measures have since been repeated by Russia, with Ukraine experiencing approximately 685,000 cyberattacks between 2020 and 2021.

On the seventh day of the war between Russia and Ukraine, a wave of deepfake videos began to appear, used to propagate disinformation against Ukraine. Since the beginning of the fighting, numerous videos have surfaced on platforms such as Facebook, Twitter, and Instagram pushing false news about the war, putting US authorities on high alert. Along with cyberattacks on Ukrainian infrastructure, Russia launched a huge misinformation effort to shape the narrative, and as the fighting dragged on, Russian cyber actors continued to use AI to propagate falsehoods. Whereas Ukraine’s social media response has been largely organic and bottom-up, Russia’s top-down authoritarian approach has been characterised by blocking the free flow of information, suspending access to social media, and deploying an army of hired trolls labouring around the clock to repeat the Kremlin line and discredit news originating from Ukraine. Russia’s war in Ukraine, like any other conflict or geopolitical crisis, has offered ideal ground for internet falsehoods to thrive and spread.
Even before Moscow started its full-scale invasion on February 24, false statements were being propagated by pro-Russian and even pro-Ukrainian accounts. In the midst of an online propaganda war, both state officials and private social media users continue to spread false information.
Why are deepfakes so dangerous?
Thousands of individuals saw deceptive videos of unrelated explosions within hours of Russia’s incursion. Several people promptly posted footage of explosions in Tianjin, China, and Beirut, Lebanon, claiming it showed Russian bombers targeting “Ukrainian HQ.” The videos were shared extensively on social media sites such as Facebook, Twitter, and TikTok, with the spectacular but unconnected footage attracting people’s attention. Simultaneously, other social media users began spreading false folk tales about Ukrainian deeds of courage. The most well-known of these is the so-called “Ghost of Kyiv” fighter ace, said to have downed six Russian aircraft in a couple of hours at the outset of the invasion. Old video-game footage and military practice film was posted alongside the rumour, gaining millions of views. Ukraine’s former President Petro Poroshenko backed up the claim until the country’s military revealed in May that the “Ghost of Kyiv” was a “superhero mythology.” While inspiring tales of valour may give residents hope during a battle, researchers argue that such disinformation can be destructive and paint a misleading picture of the fighting.
Pro-Russian users have often echoed the Kremlin’s initial statement that the invasion of Ukraine is a “special military operation” to “denazify” and “demilitarise” a “Neo-Nazi state.” Many have dismissed charges of Russian war crimes, even claiming that the conflict is a “hoax.” In one widely circulated video, a news reporter was seen standing in front of rows of body bags, one of which was moving. The footage, however, did not show fabricated Ukrainian combat victims, but rather a “Fridays for Future” climate change demonstration in Vienna in February, three weeks before the invasion started. Other instances of Ukraine war disinformation have focused on “crisis actors,” people allegedly recruited to play frightened or dead conflict victims. According to one bogus story, a well-known beauty blogger “played” the pregnant victim of a deadly attack on a maternity hospital in Mariupol on March 9.

Volodymyr Zelenskyy: deepfakes, drugs, and green screens
As the first missile attacks struck Kyiv, President Volodymyr Zelenskyy posted a defiant video on social media, declaring that he would not abandon the nation. His presence in Ukraine’s capital and his nightly video addresses debunked suspicions that he had fled. Some people nevertheless falsely claimed that the Ukrainian president was hiding in exile and using a green screen or film studio to appear to be in Kyiv; in fact, many of the photographs cited showed Zelenskyy recording a hologram for digital technology conferences around Europe. In March, a crude deepfake broadcast via a hacked Ukrainian news site appeared to show Zelenskyy telling his soldiers to lay down their arms; it was quickly debunked, but it demonstrated how the technology can be weaponised. As the battle continues, he has become an ever more regular target for Russian misinformation.
Finland and Poland have been drawn into the information war
False claims about the Ukraine conflict have also migrated to other nations and to the NATO military alliance. As violence raged on into May, social media users incorrectly claimed that European Union member states were poised to join the battle. One video, bearing a digitally created BBC News logo, claimed that Poland’s military commander had issued an order putting army units on “full alert.” The BBC subsequently stated that no such piece had been produced and that its branding had been exploited to make a phoney film. Polish authorities have also accused Moscow of conducting cyberattacks on the country. Another deceptive video claimed that Finland, amid heightened tensions, was preparing to deploy hundreds of tanks to its eastern border with Russia; in reality, the footage showed a freight train transporting equipment to western Finland for annual military drills.
Eight months into the war, both Russia and Ukraine are at fault for the widespread disinformation surrounding the conflict; the false allegations about Ukrainian refugees are only the most recent example. The Ukraine war has been fertile ground for many forms of disinformation, from photographs merely taken out of context to digitally manipulated videos that use artificial intelligence to promote lies.
The views and opinions expressed in this article are those of the author and do not necessarily reflect the views of The Kootneeti Team