Altman's vision for ChatGPT is a privacy nightmare.

Sam Altman, CEO of OpenAI
Sam Altman, CEO of the artificial intelligence company OpenAI, recently said he wants the chatbot ChatGPT to document and remember everything in its users' lives.

Altman made these statements during an AI event when a user asked him how to make ChatGPT more personalized.

Altman explained that the idea centers on a reasoning model with "a trillion contextual tokens" that can store a user's conversations, emails, and reading history, according to a report by Interesting Engineering, a technology and engineering news site, reviewed by Al Arabiya Business.

The OpenAI CEO said, "Every conversation you've ever had, every book you've ever read, every email you've ever read, everything you've ever read is there... and your life continues to add to that context."

He also suggested this is plausible given how college students already use ChatGPT: they upload files, connect data sources, and then use "complex prompts" to analyze that data. He added that young users often don't make important life decisions without consulting ChatGPT.

Altman noted that older users treat ChatGPT as an alternative to Google, while younger users, those in their twenties and thirties, see it as a life advisor.

The goal appears to be to develop ChatGPT into an intelligent companion with unlimited knowledge of its user. Combined with the autonomous AI agents currently being developed by Silicon Valley companies, this opens up striking possibilities: imagine an AI that automatically schedules car maintenance, plans travel for a far-off event, or pre-orders the next volume of your favorite book series.

Altman wants ChatGPT to become an AI system with all-encompassing knowledge of its user's life. That prospect, however, raises serious privacy concerns, as users may not trust a for-profit tech giant to know everything about their lives.

Users may also be concerned about how such personal data could be misused.

Chatbots could also be steered to benefit a political group or serve corporate objectives using this sensitive personal data. Some chatbots have recently been observed complying with Chinese censorship requirements, while others give answers slanted toward a particular ideology.
