Misinformation and fraud take many forms online, from false or skewed claims presented as the truth to machine-generated photos and videos that can be used in unethical or harmful ways.
One such incident led a Stanford researcher to uncover more than 1,000 fake profiles on LinkedIn.
Renée DiResta, a researcher at the Stanford Internet Observatory, didn’t realize that receiving a software sales pitch on LinkedIn would lead her down a rabbit hole of over 1,000 fake corporate LinkedIn accounts. At first glance, the sender’s picture looked like a standard corporate headshot, but on closer inspection several red flags emerged.
It all started when DiResta received a message from a profile named “Keenan Ramsey”. With her knowledge of information systems and how narratives spread, DiResta’s trained eye was quick to notice that something was not quite right: the sender’s profile picture looked off.
DiResta and her colleague Josh Goldstein began digging into the profile, only to find that “Keenan Ramsey” was not a real person: the account that had been messaging her was automated, and its profile picture had been generated by artificial intelligence (AI). DiResta was specifically tipped off by the alignment of Ramsey’s eyes (dead center of the photo), her earrings (she was wearing only one), and her hair, several strands of which blurred into the background.
NPR looked into DiResta and Goldstein’s claims and found more than 70 businesses linked to the fake profiles. Several of those businesses said they had hired outside marketers but expressed surprise when told about the fake LinkedIn profiles, and denied authorizing the campaigns.
Accounts like Ramsey’s are used by companies to pitch software to potential new customers; whenever a target responds, they are handed off to a real person. With this technique, companies can greatly broaden their reach without hiring new staff, NPR reported.
So, what is an AI face?
The fake faces used by Ramsey and the countless bots like her are produced by generative adversarial networks, or GANs. A GAN pits two neural networks against each other: a generator that creates fake faces and a discriminator that tries to detect them. Only when the discriminator can no longer distinguish a fake face from a real one is the image passed along.
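To make that mechanism concrete, below is a minimal GAN training loop sketched in PyTorch. It is purely illustrative: real face generators such as StyleGAN are vastly larger convolutional models, and the layer sizes, learning rates, and the tiny flattened “image” here are arbitrary assumptions chosen to keep the example short.

```python
# A minimal, illustrative GAN sketch in PyTorch. Not the code behind any
# real face generator; it only shows the generator-vs-discriminator loop
# described above. All sizes and hyperparameters are arbitrary assumptions.
import torch
import torch.nn as nn

LATENT_DIM = 64       # size of the random noise vector fed to the generator
IMG_PIXELS = 28 * 28  # a tiny flattened "image" for demonstration

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_PIXELS), nn.Tanh(),  # fake image with values in [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(IMG_PIXELS, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),                      # logit: real vs. fake
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    noise = torch.randn(batch, LATENT_DIM)
    fake_images = generator(noise)

    # 1) Train the discriminator to tell real images from generated ones.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real_images), torch.ones(batch, 1)) +
              loss_fn(discriminator(fake_images.detach()), torch.zeros(batch, 1)))
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator: its fakes should be
    #    scored as "real". The two steps alternate until the discriminator
    #    can no longer reliably tell the two apart.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake_images), torch.ones(batch, 1))
    g_loss.backward()
    g_opt.step()
```

The key design point is the alternating objective: the discriminator is rewarded for separating real from fake, while the generator is rewarded for being misclassified as real, so each network’s progress forces the other to improve.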
It can be tricky to tell a GAN-generated face from a real one, but there are some signs:
- Backgrounds are often indistinct, blurry, or irregular
- Clothing can look irregular, with inconsistent collars, imprecise lines, and similar artifacts
- Teeth can appear irregular or blend into the lips
- Hair often has excess flyaways that vanish and reappear, and longer hair can look imprecise
- Reflections and lighting can be inconsistent
- Skin can show glitches
- Accessories can be missing or irregular (for example, a single earring)
- The person’s eyes sit dead center in the image (a rough check for this sign is sketched after the list)
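That last tell lends itself to a quick heuristic. The sketch below is a rough illustration rather than a reliable detector: it uses OpenCV’s bundled Haar cascades to find the eyes and checks whether their midpoint falls near the dead center of the image. The 10% tolerance is an arbitrary assumption, and Haar cascades can easily miss or misdetect eyes.

```python
# Rough heuristic only: many GAN face generators align faces so the eyes
# sit at the image's dead center, a pattern that real headshots rarely
# follow exactly. Requires opencv-python; the cascade file ships with OpenCV.
import cv2

def eyes_near_center(image_path: str, tolerance: float = 0.10) -> bool:
    """Return True if the midpoint between two detected eyes lies within
    `tolerance` (as a fraction of width/height) of the image center."""
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")
    eyes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(eyes) < 2:
        return False  # fewer than two eye detections; heuristic not applicable
    # Take the first two detections and find the midpoint of their centers.
    (x1, y1, w1, h1), (x2, y2, w2, h2) = eyes[0], eyes[1]
    mid_x = (x1 + w1 / 2 + x2 + w2 / 2) / 2
    mid_y = (y1 + h1 / 2 + y2 + h2 / 2) / 2
    img_h, img_w = gray.shape
    # "Dead center" check: within `tolerance` of (0.5, 0.5) in relative terms.
    return (abs(mid_x / img_w - 0.5) < tolerance
            and abs(mid_y / img_h - 0.5) < tolerance)
```

A True result is weak evidence on its own and is best combined with the other signs above.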
Why are these AI bots used?
Companies use profiles like these to cast a wide net for potential leads without paying real sales staff, and to avoid hitting LinkedIn’s message limits. As noted above, more than 70 businesses were listed as employers on the fake profiles; some told NPR they had hired outside marketers to help with sales but had not authorized the use of AI-generated photos and were surprised by the findings.
The use of fake profiles violates LinkedIn’s rules. Company spokesperson Leonna Spilman said LinkedIn’s policies make it clear that every profile must represent a real person.
“We are constantly updating our technical defenses to better identify fake profiles and remove them from our community, as we have in this case,” Spilman said. “At the end of the day, it’s all about making sure our members can connect with real people, and we’re focused on ensuring they have a safe environment to do just that.”
Difficult for the Naked Eye to Tell Real from Fake
Although some businesses may adopt AI-assisted marketing tactics because they are cheaper than employing real people, users on the other side of the screen struggle to tell a fake profile photo from a real one: a recent study published in PNAS found that people guess correctly only about 50% of the time. The research also found that some people rate machine-generated faces as more trustworthy, possibly because AI tends to produce average facial features, suspects Hany Farid, a co-author of the study.
“If you ask the average person on the internet, ‘Is this a real person or synthetically generated?’ they are essentially at chance” (that is, relying on luck), said Hany Farid, an expert in digital media forensics at the University of California, Berkeley, who co-authored the study with Sophie Nightingale of Lancaster University.
The same study found that AI-generated faces were rated as more trustworthy than real ones.
Some tools can help regular internet users spot such AI-generated content. One is a Google Chrome extension from V7 Labs that helps users flag fake profile photos.
However, many people are unlikely to even suspect that the profiles they come across may be fake.
Farid said he finds the proliferation of AI-generated content worrying, not just still images but also video and audio. He warned that it could foreshadow a new era of online deception.