ChatGPT-4o is wildly capable, but it could be a privacy nightmare
Recently launched by OpenAI, ChatGPT-4o has acquired new, more human-like abilities, such as telling bedtime stories and identifying emotions from facial expressions. But achieving these feats requires access to more of our data, and the new macOS app will be able to access a person’s screen, causing understandable alarm amongst users.
In articles for Forbes, LexBlog and ComplexDiscovery, partner Oliver Willis comments on the data implications of the newest iteration of ChatGPT and the potential privacy risks it presents.
‘From a user’s perspective, how does ChatGPT collect and use data about you when you are using it? From everyone else’s perspective, was ChatGPT trained on information about you and what will it tell users about you?
They stress that the training information is not used to profile people, or to learn about them, but some people will see the use of this data as inherently intrusive.
OpenAI offers a mechanism for restricting the use of their data to train ChatGPT, but it is less clear what OpenAI will do for someone who objects to it disclosing their personal data in a chat response.’
The full articles are available to read on the Forbes, LexBlog and ComplexDiscovery websites.