Defenders show up to the war on deepfakes


Digitally altered and synthetic media are becoming more of a problem. Openly available tools, including AI deep-learning models, make it easy to modify pictures and videos for distribution on the Internet. Most alterations are benign: clearing up acne, improving image lighting, creating a funny meme, or perhaps narrowing a waistline for aesthetic reasons. More disturbing is the generation of videos of known personalities that make them appear to utter caustic statements or take part in inappropriate activities. These fakes have appeared in political posts, social satire, news media, and pornographic material. Motivations range from humor and vanity to vindictiveness and swaying public opinion.

The most malicious uses are just around the corner. Cybercriminals, who innately understand the value of impersonation and counterfeit identities, are drooling at the potential of this technology to open entirely new and lucrative branches of scams, phishing, and identity theft. Every day the technology to create synthetic digital representations becomes more believable and accessible, and every day it moves closer to the hands of criminals.

The societal problems are only beginning, as the tools to create fakes far outpace the capabilities to detect them. Several organizations are working toward the goal of confidently identifying digital modification in pictures, audio, and video.

Microsoft recently announced one such tool for analyzing videos, the Video Authenticator, purposely released in advance of the U.S. elections to help media sites and social watchdogs detect misleading political deepfakes. Microsoft Research is aware the technology will soon be undermined, but having some tools to help identify truth as the election cycle begins is better than nothing.

The war on deepfakes is just starting. Technology innovation is working on both sides: to create realistic synthetic content and to detect such creations before they are accepted as truth. Society will be caught in the crossfire as we all must consider whether what we see and hear is actually real.

Interested in more? Follow me on LinkedIn, Medium, and Twitter (@Matt_Rosenquist) to hear insights, rants, and what is going on in cybersecurity.

Image Source: https://blogs.microsoft.com/on-the-issues/2020/09/01/disinformation-deepfakes-newsguard-video-authenticator/
