Showing posts with label Deepfakes.

Monday, January 29

Technologies to Watch


From protein engineering and 3D printing to detection of deepfake media, here are seven areas of technology that Nature will be watching in the year ahead.

Deep learning for protein design
Two decades ago, David Baker at the University of Washington in Seattle and his colleagues achieved a landmark feat: they used computational tools to design an entirely new protein from scratch. ‘Top7’ folded as predicted, but it was inert: it performed no meaningful biological functions. Today, de novo protein design has matured into a practical tool for generating made-to-order enzymes and other proteins. “It’s hugely empowering,” says Neil King, a biochemist at the University of Washington who collaborates with Baker’s team to design protein-based vaccines and vehicles for drug delivery. “Things that were impossible a year and a half ago — now you just do it.”  READ MORE...

Saturday, March 20

Content Creation

Despite the negative connotations surrounding the colloquial term deepfakes (people don't usually want to be associated with the word "fake"), the technology is increasingly being used commercially.

More politely called AI-generated video, or synthetic media, the technology is growing rapidly in sectors including news, entertainment and education, and it is becoming increasingly sophisticated.

One of the early commercial adopters has been Synthesia, a London-based firm that creates AI-powered corporate training videos for the likes of global advertising firm WPP and business consultancy Accenture.

"This is the future of content creation," says Synthesia chief executive and co-founder Victor Riparbelli.

To make an AI-generated video using Synthesia's system, you simply pick from a number of avatars, type in the words you wish them to say, and that is pretty much it.

Mr Riparbelli says this means that global firms can very easily make videos in different languages, such as for in-house training courses.

"Let's say you have 3,000 warehouse workers in North America," he says. "Some of them speak English, but some may be more familiar with Spanish.

"If you have to communicate complex information to them, a four-page PDF is not a great way. It would be much better to do a two or three-minute video, in English and Spanish.

"If you had to record every single one of those videos, that's a massive piece of work. Now we can do that for [little] production costs, and whatever time it'll take someone to write the script. That pretty much exemplifies how the technology is used today."
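The workflow described above (pick an avatar, type the script, choose a language) can be sketched as a simple request builder. Everything here is hypothetical: the function name, field names and avatar ID are illustrative placeholders, not Synthesia's actual API.

```python
# Hypothetical sketch of the avatar-plus-script workflow described above.
# The field names and avatar ID are illustrative, not a real API schema.

def build_video_request(avatar_id, script, language="en"):
    """Assemble a request body for a synthetic-video job: choose an
    avatar, supply the text it should speak, and pick a language."""
    if not script.strip():
        raise ValueError("script must not be empty")
    return {
        "avatar": avatar_id,
        "input_text": script,
        "language": language,
    }

# One script, two languages: the localization use case quoted above.
jobs = [build_video_request("warehouse_trainer", "Safety briefing for new staff.", lang)
        for lang in ("en", "es")]
```

The point of the sketch is that localizing a video is a one-line change (the `language` value), while the script and avatar stay the same, which is why recording costs collapse to the time it takes to write the script.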

Mike Price, the chief technology officer of ZeroFox, a US cyber-security company that tracks deepfakes, says their commercial use is "growing significantly year over year, but exact numbers are difficult to pin down".

However, Chad Steelberg, chief executive of Veritone, a US AI technology provider, says that the increasing concern about malicious deepfakes is holding back investment in the technology's legitimate, commercial use.

"The term deepfakes has definitely had a negative response in terms of capital investment in the sector," he says. "The media and consumers, rightfully so, can clearly see the risks associated."  READ MORE...


Thursday, February 18

ETHICS: Deepfakes



Falsified videos created by AI, in particular by deep neural networks (DNNs), are a recent twist on the disconcerting problem of online disinformation. Although fabrication and manipulation of digital images and videos are not new, the rapid development of AI technology in recent years has made the process of creating convincing fake videos much easier and faster. AI-generated fake videos first caught the public's attention in late 2017, when a Reddit account with the name Deepfakes posted pornographic videos generated with a DNN-based face-swapping algorithm. Subsequently, the term deepfake has been used more broadly to refer to all types of AI-generated impersonating videos.

While there are interesting and creative applications of deepfakes, they are also likely to be weaponized. We were among the early responders to this phenomenon: in early 2018 we developed the first deepfake detection method, which exploited the lack of realistic eye-blinking in early generations of deepfake videos. Since then, there has been a surge of interest in developing deepfake detection methods.
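The eye-blinking cue described above can be illustrated with a minimal sketch. It assumes a per-frame eye-aspect-ratio (EAR) signal has already been extracted from the video (in practice via facial-landmark detection); the threshold and blink-rate values here are illustrative, not the published method's parameters.

```python
# Toy version of the blink-based cue: early deepfake generators were
# trained mostly on still photos, so synthesized faces rarely blinked.
# Assumes a per-frame eye-aspect-ratio (EAR) signal is already available.

def count_blinks(ear_series, threshold=0.2):
    """Count dips of the eye-aspect ratio below `threshold`.

    A blink appears as a short run of frames with low EAR, so we count
    falling edges of the thresholded signal rather than raw low frames.
    """
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < threshold and not closed:
            blinks += 1
            closed = True
        elif ear >= threshold:
            closed = False
    return blinks

def looks_synthetic(ear_series, fps=30, min_blinks_per_minute=5):
    """Flag a clip whose blink rate is far below the human average."""
    minutes = len(ear_series) / (fps * 60)
    if minutes == 0:
        return False
    rate = count_blinks(ear_series) / minutes
    return rate < min_blinks_per_minute

# Toy example: 60 s of "open eyes" (EAR ~0.3) with a single brief blink.
frames = [0.3] * 1800
frames[900:903] = [0.1, 0.1, 0.1]
```

Human adults blink roughly 15 to 20 times per minute, so a one-minute clip containing a single blink is flagged as suspicious by this sketch.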

A culmination of these efforts is this year’s Deepfake Detection Challenge. Overall, the winning solutions are a tour de force of advanced DNNs (the top performer achieved an average precision of 82.56 percent). These provide us with effective tools to expose deepfakes that are automated and mass-produced by AI algorithms. However, we need to be cautious in reading these results. Although the organizers made their best effort to simulate situations in which deepfake videos are deployed in real life, there is still a significant discrepancy between performance on the evaluation data set and on more realistic data: when tested on unseen videos, the top performer’s accuracy dropped to 65.18 percent.  READ MORE in the full article by Siwei Lyu...
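For readers unfamiliar with the metric quoted above, average precision ranks candidates by classifier score and averages the precision measured at each true positive. This is the standard textbook definition applied to toy data, not the challenge's actual scoring code.

```python
def average_precision(labels, scores):
    """Average precision for binary labels: rank items by descending
    score, then average the precision at each true positive."""
    ranked = sorted(zip(scores, labels), key=lambda pair: -pair[0])
    hits, precisions = 0, []
    for rank, (_, label) in enumerate(ranked, start=1):
        if label == 1:
            hits += 1
            precisions.append(hits / rank)  # precision at this hit
    return sum(precisions) / len(precisions) if precisions else 0.0

# Toy example: two real fakes among four videos.
ap = average_precision(labels=[1, 0, 1, 0], scores=[0.9, 0.8, 0.7, 0.1])
```

Here `ap` is 5/6 (about 0.83): the detector surfaces one fake at rank 1 (precision 1.0) and the other at rank 3 (precision 2/3), illustrating how the metric rewards ranking real fakes ahead of genuine videos.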