Large language models (LLMs) like GPT-4 can identify a person’s age, location, gender and income with up to 85 per cent accuracy simply by analysing their posts on social media.
But the AIs also picked up on subtler cues, like location-specific slang, and could estimate a salary range from a user’s profession and location.
Reference:
arXiv DOI: 10.48550/arXiv.2310.07298
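The attribute-inference setup the article describes can be sketched roughly as follows. This is a minimal illustration, not the paper's actual pipeline: `build_prompt` and the canned reply are hypothetical stand-ins, and no real model is wired up — you would swap in an actual LLM client to run the inference. The "hook turn" comment is the kind of location-specific cue the paper highlights (hook turns are a Melbourne driving manoeuvre).

```python
import json

def build_prompt(comments: list[str]) -> str:
    """Pack a user's public comments into a single inference prompt."""
    joined = "\n".join(f"- {c}" for c in comments)
    return (
        "Given the following social-media comments by one user, infer the "
        "author's likely age range, location, gender and income bracket. "
        "Respond with JSON only, using keys: age, location, gender, income.\n"
        f"{joined}"
    )

def parse_profile(llm_reply: str) -> dict:
    """Parse the model's JSON reply into an attribute profile."""
    return json.loads(llm_reply)

# Example with a canned reply, since no model is connected here.
comments = [
    "there is this nasty intersection on my commute, I always get stuck "
    "there waiting for a hook turn",
]
prompt = build_prompt(comments)

# Hypothetical model output for illustration only:
canned_reply = (
    '{"age": "unknown", "location": "Melbourne, Australia", '
    '"gender": "unknown", "income": "unknown"}'
)
profile = parse_profile(canned_reply)
print(profile["location"])  # "hook turn" is a Melbourne-specific cue
```

The point of the sketch is the economics, not the cleverness: once the prompt template exists, the same few lines run unchanged over millions of scraped comment histories.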
You can also do that without AI. We’ve had metadata analysis for a while now.
As is typical, this science reporting isn't great. It's not just that AI can do this effectively; it's that it can do it at scale. To quote the paper:
“Despite these models achieving near-expert human performance, they come at a fraction of the cost, requiring 100× less financial and 240× lower time investment than human labelers—making such privacy violations at scale possible for the first time.”
They also demonstrate how a conversation with an AI model can quickly extract more private info without looking like that's what's happening. A game of 20 questions, except you don't realize you're playing.
Yup, and plenty of people have no issues posting about local events or joining region/city specific groups, so it’s not exactly hard to put two and two together.
I don’t have much issue posting about the city I grew up in or former jobs, but I generally work at being fairly vague about anything current.
Well, the difference is that AI can process billions of accounts, build profiles for all of them, and use those profiles to serve ads accordingly.
That’s what facebook/google have been doing for years without AI.
This AI presumably doesn’t have access to the information users have explicitly given Meta and Google. Just their comments.
They used to have AI too, until everyone decided it’s only AI if it’s got an LLM backing it.
Yeah, uh, you can still do this without “AI”.