AI-powered scams? Phone brand’s new feature might be tricking you
By Maan
Scams are a constant threat, but what happens when the tools designed to protect us start playing into the hands of fraudsters?
Apple’s latest update introduced an AI feature that promises to make life easier for users—but it may also be unintentionally blurring the line between legitimate messages and scams.
The implications could be more far-reaching than you might think.
Apple's latest artificial intelligence (AI) update has raised concerns after it began rewording scam messages and emails in ways that make them seem more legitimate, potentially leading users into traps.
The update, launched in late 2024, introduced a feature called ‘Apple Intelligence,’ which aimed to summarise and prioritise notifications on iPhones, iPads, and Mac computers.
Users were shown how the new AI could condense multiple messages into a single notification, such as a group chat about brunch plans: ‘Brunch after soccer on Saturday; restaurant or host at home suggested.’
This was in line with Apple’s promotional material, which showed how the AI would flag the most urgent messages, such as important emails from friends, colleagues, or airlines.
However, some users found that the AI could not differentiate between legitimate messages and scams. In one example, a user shared a prioritised email about an income tax issue, which was clearly a scam.
The notification read: ‘Income Tax 751.23 AUD for the period Nov – Dec 2024 is pending preparation for Lodgement.’
A disappointed user, who used the pseudonym Steve, said, ‘It’s fun to make fun of AI, but having an AI tool that’s onboard millions of devices enable scams like this is going to catch people out and cost them money.’
Others, including UQ honorary professor Jeremy Howard, expressed concerns on social media, sharing screenshots of scam messages flagged as ‘Priority’ by the AI.
‘Maybe Apple Intelligence shouldn’t mark scam emails as “Priority” with a summary saying it’s for security purposes?’ Howard remarked.
Another user on social media shared a screenshot of a scam message impersonating the USPS, commenting: ‘Imagine if Apple Intelligence actually flagged the scam messages instead of helpfully summarising them?’
Experts have warned that Australians, who lost $2.7 billion to scams in 2023, could be at greater risk due to misplaced trust in Apple’s AI-powered features.
La Trobe University professor Daswin De Silva said that the hype surrounding AI has made people more likely to trust it, even when it misidentifies threats.
He pointed out that the AI’s summarisation of messages could remove important cues that differentiate legitimate communications from scams.
‘By trying to reduce the information, you’re actually creating more work for the end user or the consumer to evaluate the information and then look for explainability factors and go into the actual messages and start reading them all over again,’ De Silva explained.
De Silva also warned about the dangers of rushing AI features into the market.
‘People are still getting used to working with AI or living with AI, and that’s why it has to be released in a gradual process. Dropping new features every so many months really doesn’t help.’
Although Apple did not comment on this particular incident, the company said earlier in the week that it planned to update the feature.
The update would make it clearer when text displayed on a device was a summary generated by Apple Intelligence.
Key Takeaways
- Apple’s new AI feature, ‘Apple Intelligence,’ summarises and prioritises notifications on devices, but it may present scam messages as priority notifications that appear legitimate, increasing users' vulnerability.
- The AI feature condenses messages like group chats and emails, but in some cases, it fails to distinguish between genuine and fraudulent communications.
- Experts warn that misplaced trust in AI could lead to users falling for scams, especially with the summarisation feature removing critical cues that differentiate legitimate messages from scams.
- Apple plans to update the feature to clarify when notifications are AI-generated summaries, addressing concerns raised by experts and users.
Do you think Apple’s new feature could end up doing more harm than good? We’d love to hear your thoughts—drop a comment below!