
LinkedIn Profile With AI-Generated Pic Tried to Schmooze With DC Insiders

The profile attempted to network with former government officials.

By Michael Kan
This story originally appeared on PCMag

Spies may be using AI-generated photos to create harder-to-detect fake profiles on LinkedIn. (Image: @IntelMercenary via PCMag)

Bogus accounts on social networking services are nothing new. But on Thursday, the Associated Press reported on the curious case of "Katie Jones," a redheaded woman on LinkedIn who claimed to work at a top think tank in Washington, D.C.

According to the AP, the profile is not just a fake. The headshot of the woman may have been created by an AI-powered program. The evidence can be found in small but noticeable inconsistencies in the image.

Several tech experts told the AP they were convinced the profile photo was created by AI based on those flaws, which are common in photos fabricated by generative adversarial networks, or GANs. Researchers have been using the technology to show how it can pump out realistic but ultimately fake photos of people who don't exist.

To create the photos, a GAN models the fake faces on existing pictures of real people. Essentially, the algorithm learns different traits, such as hairstyles, eye shapes and mouths, from many photos and merges them into an entirely new person.
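For the technically curious, here is a minimal sketch (in PyTorch) of the adversarial setup that gives GANs their name: a generator turns random noise into an image while a discriminator learns to tell real photos from the generator's fakes, and each network improves by trying to beat the other. The layer sizes, image dimensions and dummy training batch below are illustrative assumptions, not the system behind the Katie Jones photo.

```python
# Minimal GAN sketch. Architecture and data are placeholders for illustration only.
import torch
import torch.nn as nn

IMG_DIM = 28 * 28   # each "photo" treated as a flattened 28x28 grayscale image
NOISE_DIM = 64      # random noise vector the generator turns into an image

# Generator: noise in, synthetic image out.
G = nn.Sequential(
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)

# Discriminator: image in, probability that it is real out.
D = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

def training_step(real_images: torch.Tensor) -> None:
    """One adversarial round: D learns to spot fakes, G learns to fool D."""
    batch = real_images.size(0)
    noise = torch.randn(batch, NOISE_DIM)
    fake_images = G(noise)

    # Discriminator update: push real photos toward 1, generated ones toward 0.
    d_loss = loss_fn(D(real_images), torch.ones(batch, 1)) + \
             loss_fn(D(fake_images.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: try to make the discriminator label the fakes as real.
    g_loss = loss_fn(D(fake_images), torch.ones(batch, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# Dummy "real" batch just to show the loop runs; a real system trains on face photos.
training_step(torch.rand(32, IMG_DIM) * 2 - 1)
```

After enough rounds on a large set of real face photos, the generator's output starts to look like a plausible new person rather than noise.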

But the process isn't perfect. The AI can sometimes leave telltale artifacts around the synthetic face. In the case of the Katie Jones profile, the photo shows a woman whose left earring appears "blurry" or "melted," the AP noted.

Still, the photo itself looks pretty real. The AP confirmed the Katie Jones account was a sham based not on the headshot, but on the credentials the LinkedIn profile had listed, all of which turned out to be phony.

The account is now gone, and it isn't clear whether actual government spies or ordinary fraudsters created the Katie Jones profile. But the account was attempting to network with former U.S. government officials and policy experts. Using an AI-generated photo would've made the profile resistant to reverse-image searches, which can often reveal when a fake account lifts a picture from a real person's profile.
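To see why, consider how near-duplicate image matching works. The "average hash" sketch below is a simplified, hypothetical stand-in for the fingerprints reverse-image search engines actually use: a photo copied from a real person's account hashes close to the indexed original, while a freshly generated face matches nothing in the index. The file names are placeholders.

```python
# Illustrative perceptual-hash sketch, not any search engine's real algorithm.
from PIL import Image

def average_hash(path: str, hash_size: int = 8) -> int:
    """Shrink and grayscale the image, then bit-encode which pixels beat the average brightness."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel > avg else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits; a small distance means near-duplicate images."""
    return bin(a ^ b).count("1")

# Hypothetical usage with placeholder file names:
# stolen = average_hash("suspect_profile_photo.jpg")
# original = average_hash("indexed_real_photo.jpg")
# print(hamming_distance(stolen, original))  # near 0 if the photo was copied
```

A GAN-generated face has no original to match against, so a lookup like this comes back empty and the fake is harder to expose.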

Written by Michael Kan

Michael has been a PCMag reporter since October 2017. He previously covered tech news in China from 2010 to 2015, before moving to San Francisco to write about cybersecurity.