People are increasingly turning over routine tasks to AI agents, from booking trips to organizing digital files. The concept is simple: tell the AI what you want and let it handle the steps. The tricky part is how these agents use personal data along the way. A recent research study explored this issue and asked a fundamental question: how should an AI agent know when it can use someone’s data without asking for permission every time?
Understanding What People Are Willing to Share
To investigate, researchers ran a large user study using a simulated AI assistant. Participants interacted with the system across tasks linked to travel, calendars, and financial apps, granting or denying permission to use various pieces of personal data. The goal was to see which types of data people were comfortable sharing, which they rejected outright, and how their decisions changed when the assistant made mistakes.
The results revealed clear patterns. About 95% of participants granted “always share” at least once. Yet, when the assistant introduced irrelevant or unnecessary data, trust dropped sharply: the “share always” rate fell from roughly 83% to 74%, and more participants chose “never share.” Mistakes triggered a natural protective instinct.
The Balance Between Convenience and Caution
The study also highlighted a tension between convenience and caution. Many participants over-shared data that wasn’t required, often assuming the AI needed it or that it seemed harmless. Around 90% shared unneeded information in some scenarios.
Conversely, under-sharing occurred with sensitive data. Social Security numbers, bank details, and child names were often withheld. For example, Social Security numbers were denied almost half the time, even when necessary for a task. Researchers noted that participants tended to be more cautious when financial or identity-related information was involved.
Risks Beyond the AI Model
This balance of convenience and caution introduces potential risks when AI permission systems move into real-world environments. Brian Sathianathan, CTO at Iterate.ai, warns that the largest vulnerabilities are often in the infrastructure, not the AI itself. “If you deploy automated permission inference on shared GPU clusters, you’ve created a massive attack surface,” he said. Attackers could observe inference behavior to reverse-engineer a company’s data handling practices without ever breaking the AI model.
Trust Varies by Task
Trust is not uniform; it shifts depending on the task. Entertainment tasks, like music or movie recommendations, had the highest “share always” rates (56%), while finance-related tasks were the lowest (22%).
Context also matters within domains. For example, in travel, participants were comfortable sharing weather information but resisted giving access to passport scans stored in the cloud. People tailor their trust based on what feels appropriate and safe.
How AI Can Be Manipulated
The study highlighted how attackers could exploit these systems. Sathianathan pointed to prompt injection as a pressing concern. “An attacker can embed instructions in a document or tool schema that subtly changes how the AI interprets which data is necessary,” he explained. This could trick the permission model into acting as though it follows user preferences, even when manipulated.
Teaching AI to Learn Your Preferences
The research also tested whether AI could learn permission patterns effectively. The team built a prediction model combining individualized learning with trend analysis across users. On over 7,000 permission decisions, the system achieved about 85% overall accuracy and more than 94% accuracy when it only acted on high-confidence predictions.
Security and Compliance Are Still Critical
Accuracy alone isn’t enough in sensitive areas. Sathianathan emphasized that permission inference must be treated like critical infrastructure. “Run these systems behind your firewall, on your own hardware, isolated and auditable. Never let them learn from unvetted or shared data,” he advised.
Regulated industries face additional challenges. While collaborative filtering can predict user preferences effectively, organizations must ensure compliance rules override learned patterns when necessary, even if users would prefer otherwise.



