Deep fakes and other forms of “synthetic content” (as the FBI labels them) are rapidly improving and present a potential risk to all of us. You have probably seen deep fake videos in which the video shows one person speaking but the voice belongs to another. Artificial intelligence tools match the audio to the speaker’s lip movements, so it looks as if the person is actually saying those words.
Lawyers have long known that Photoshop and other tools can generate remarkably convincing fake pictures, even though an expert can often determine when an image is inauthentic. We have now seen people fall victim to scams built on fake voice or video creations. But the next generation of tools is scarier. Imagine a client or opposing counsel contacting you and asking for sensitive information or a transfer of funds. With enough source material, an AI-powered deep fake can alter a caller’s voice in real time to mimic a celebrity or a corporate CEO, letting the caller carry on an authentic-sounding conversation with the intended victim.
The ZDNET post “The next big security threat is staring us in the face. Tackling it is going to be tough” provides background information and a link to the FBI warning. The FBI also notes that cybercriminals are using deep fakes to apply for remote IT support jobs, which could give them access to sensitive consumer information. If you are not certain that you are talking to the actual person, some ideas include asking about an obscure bit of information that only you and the real caller would know (a favorite sports team doesn’t work because fans often post that online), asking if you can hang up and call them back in a minute on their cell phone (when they say they left it at home today, proceed with caution), or having a staff member call the client’s place of business to see whether the client is actually in Bora Bora.
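The callback-and-confirmation ideas above amount to a simple verification policy: perform several independent checks, treat any failed check as disqualifying, and require more than one passed check before acting. Here is a minimal sketch of that policy in Python; the check names and the two-check threshold are illustrative assumptions, not a standard or a product.

```python
# Hypothetical helper that scores the out-of-band identity checks
# described above. Check names and the threshold are illustrative only.

def verify_caller(checks: dict) -> bool:
    """Decide whether to act on a caller's request.

    `checks` maps a check name (e.g. "shared_secret", "callback",
    "office_confirmation") to True (passed), False (failed), or
    None (not performed). Any failed check is disqualifying, and at
    least two passed checks are required before releasing funds or
    sensitive information.
    """
    results = [v for v in checks.values() if v is not None]
    if any(v is False for v in results):
        # e.g. the caller conveniently "left their cell phone at home"
        return False
    return sum(results) >= 2

# A caller who answers the shared-secret question and accepts a
# callback passes; one who dodges the callback does not.
print(verify_caller({"shared_secret": True, "callback": True}))
print(verify_caller({"shared_secret": True, "callback": False}))
```

The point of the sketch is the design choice: independent checks combined conservatively, so a scammer must defeat several verification channels at once rather than just one.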
You cannot combat deep fakes unless you know they exist. Share the ZDNET post with someone you think should know about this.