By Thomas Chustz, Information Security Analyst, TraceSecurity
You may be disclosing more information than you think when interacting with agentic AI. Agentic AI is reaching the capacity to draw conclusions about its users from the information they provide, building digital models of them.
It is now important to be thoughtful not only about the data we put into these AI models, but also about what the AI can infer, interpret, and assume from that data. This can have major implications for our privacy, depending on what the AI does with our data and the virtual model it has made of us.
Agentic AI is an artificial intelligence that can run independently and reason to interpret data and draw conclusions from it. It can operate on its own without human intervention and can improve itself over time to achieve its goals.
Goals of agentic AI can vary – examples range from weight-loss therapy to investment advice. Regardless of the goal, users need to be aware of how agentic AI considers their privacy.
Agentic AI can operate independently without human intervention, whereas traditional AI relies on manual adjustments to respond to changes in its environment and stay oriented toward its goal. Agentic AI is much more adaptive than traditional AI.
When using agentic AI for common goals, such as weight loss, it can infer information about you through the prompts you use and the information you give it. When telling agentic AI your age, diet, or activity level to get insight on how to achieve your weight-loss goal, it will try to fill in gaps regarding other aspects of your lifestyle.
Your activity level can give the agentic AI hints on whether you work inside or outside. From your age, the agentic AI can infer common likes and dislikes. Information about your typical diet could give hints about things like your marital status and mental state.
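As a toy illustration only, the gap-filling described above can be sketched as a handful of rules. Every rule, attribute name, and inference here is hypothetical, invented for this sketch; it does not reflect how any real agentic system reasons, only how unstated attributes can appear in a profile the user never disclosed.

```python
# Toy sketch of attribute inference. The rules below are hypothetical
# and exist only to show how a system can "fill in gaps": the user
# supplies age, activity level, and diet, and the profile ends up
# containing guesses about work setting and household situation.

def infer_profile(age: int, activity_level: str, diet: str) -> dict:
    profile = {"age": age, "activity_level": activity_level, "diet": diet}

    # Hypothetical rule: activity level hints at work environment.
    if activity_level == "high":
        profile["likely_work_setting"] = "outdoors or physically active job"
    else:
        profile["likely_work_setting"] = "desk-based job"

    # Hypothetical rule: a diet phrase hints at household situation.
    if "meals for one" in diet:
        profile["likely_household"] = "lives alone"

    return profile

profile = infer_profile(34, "low", "takeout, meals for one")
# The user never stated where they work or whether they live alone,
# yet the profile now contains guesses about both.
```

A real agentic system would use a learned model rather than hand-written rules, but the privacy effect is the same: the output profile is larger than the input the user knowingly provided.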
The AI may not make these assumptions obvious to the user, simply adjusting its advice based on information you didn’t specifically give it. You can see how this becomes concerning, especially when using agentic AI as a therapist or doctor.
Agentic AI can fill in gaps in your conversations with it, and the inferences it makes can tailor your experience with the tool to make it more engaging or effective. However, some of the conclusions it draws could be information you would not want to disclose about yourself online. Before use, investigate who owns the data and what they do with it.
Privacy can be a major concern if the company decides to sell your data. In the weight-loss example, the digital model the AI constructed of you can give marketers information on how to pitch their product to you based on your age group, or even which food products to market to you.
Taking it a step further, the AI could advise the marketer to advertise unhealthier or healthier options based on the resolve the user showed when interacting with the AI. It is important to be careful not only with the information you provide to AI, but also with what the AI can infer from it.
Connect with TraceSecurity to learn more.