Why is OpenAI lying about the data it's collecting on users?

17 points by kypro 2 days ago

I'm not sure this is the right place to raise this, but over the past few months ChatGPT has been lying to me and gaslighting me about the data it's collecting on me.

I'm very sensitive about my privacy and I have disabled all personalisation and memory on ChatGPT.

However, I've noticed multiple occasions now where it says things that imply it knows things about me. When it does this I ask how it would know that, and it always says it just guessed and doesn't actually know anything about me. I assumed it must be telling the truth, because it seemed very unlikely that a company like OpenAI would lie about the data they're collecting on users and train their chat agent to gaslight users when asked about it. But now, after running some tests, I think this is exactly what's happening...

Here are some examples of the gaslighting:

- https://ibb.co/m5PWfchn

- https://ibb.co/VsL9BpF

- https://ibb.co/8nYdf1xx

These are all new chats.

7222aafdcf68cfe 2 days ago

If your threat model requires a high level of privacy, there is no case in which you can use any of these tools and providers. Those goals are mutually exclusive.

theredknight11 2 days ago

For my work, I spoke with OpenAI representatives directly and we talked about privacy. They assured me and my colleague that no one could see our chats, saying something like "you don't have to worry that your boss can read your chats and think you're dumb."

My colleague (a data scientist) asked, "What if we wanted to study people's prompts to teach them better prompt methods?" And they did a 180 without our even needing to ask. "Oh yeah, we can get you that. No problem."

Of course this was for enterprise ChatGPT, but my experience was very much that they are run like a startup: tell the customer whatever you need to in order to make money, since they're burning through so much cash.

drewbug 2 days ago

IP-based geolocation. Really annoying that we can't disable it.
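For context, IP-based geolocation happens entirely server-side: the service needs nothing from you beyond the connection itself. A minimal sketch of the idea (the prefix table and regions below are made up for illustration; real deployments look up the client IP in a full database such as MaxMind's GeoLite2):

```python
import ipaddress

# Toy prefix-to-region table. Real geolocation databases map
# millions of prefixes; these two are RFC 5737 documentation
# ranges with invented regions, purely for illustration.
GEO_TABLE = {
    ipaddress.ip_network("203.0.113.0/24"): "Berlin, DE",
    ipaddress.ip_network("198.51.100.0/24"): "Austin, US",
}

def geolocate(ip: str) -> str:
    """Return a coarse location guess for a request's source IP."""
    addr = ipaddress.ip_address(ip)
    for net, region in GEO_TABLE.items():
        if addr in net:
            return region
    return "unknown"

print(geolocate("203.0.113.42"))  # prints "Berlin, DE"
```

So even with memory and personalisation off, a coarse city-level guess is available to the server on every single request.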

  • mickelsen 8 hours ago

    Yeah, pretty much.

    https://news.ycombinator.com/item?id=42022905

    And it's still strange that after a year, it only sometimes picks it up, depending on how relevant it thinks your location is.

    For me it's a coin toss whether the answer tailored to my location is useful or not; I'm often following up with something like "do not show local results, I'm shopping in the US".

    But this also happens with other features: the other day it was telling me it wouldn't access a link I pasted. Next message, I ask why. It answers that it can't, that it has no way to browse the web. I see Web Search wasn't explicitly turned on in the conversation (not that it has mattered before anyway). I turn it on for the next message, tell it to try again, and voilà. I didn't try just regenerating the answer, but that sometimes works too.

  • kypro 2 days ago

    That was my initial assumption about how they're collecting the location data too. However, I find that if I switch to a different browser it rarely mentions my location, so it could be fingerprinting and pulling in info from a previous device session where I mentioned my location.
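    To be clear about what I mean by fingerprinting: no cookies are needed, since a hash over passively observed request attributes is enough to link sessions to a device. This is just a sketch of the general technique, with illustrative signal names; I have no evidence this is what OpenAI actually does:

```python
import hashlib

def fingerprint(headers: dict) -> str:
    """Derive a stable pseudo-identifier from passively observed
    request attributes. The signals here are illustrative; real
    fingerprinting combines many more (canvas, fonts, TLS params)."""
    signals = [
        headers.get("User-Agent", ""),
        headers.get("Accept-Language", ""),
        headers.get("Sec-CH-UA-Platform", ""),
    ]
    return hashlib.sha256("|".join(signals).encode()).hexdigest()[:16]

# Two browsers on the same machine send different User-Agent strings,
# so they hash to different fingerprints -- consistent with the
# location guesses only "following" one browser and not the other.
a = fingerprint({"User-Agent": "Mozilla/5.0 (X11; Linux) Firefox/133.0",
                 "Accept-Language": "en-US"})
b = fingerprint({"User-Agent": "Mozilla/5.0 (X11; Linux) Chrome/131.0",
                 "Accept-Language": "en-US"})
print(a != b)  # prints True
```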

    I think the larger issue here is that when you ask how it knows these things, it seems to have been instructed to lie and say it doesn't know and has just guessed, which seems extremely unlikely. This simply isn't acceptable in my opinion.

f30e3dfed1c9 2 days ago

Safest bet is to assume that OpenAI is lying about everything. I don't know why, but I would guess that (1) they often consider it to be to their advantage and (2) they have no moral compunction against it. It's just the way they are.

  • AznHisoka 2 days ago

    This. Don’t ever trust anything it says, especially about itself or how it works under the hood

muzani 2 days ago

"These are all new chats."

Bear in mind that it shares memory from previous chats.

There are at least two types: one that saves things you tell it, and another that queries recent chats.

There also seems to be a third kind of memory for when it does searches, possibly related to Atlas. I've tried to clear bugs from it (it gets my name wrong), but the bad data isn't in the other two.

  • I_am_tiberius 2 days ago

    No, when memory is disabled this shouldn't be the case.

accrual 2 days ago

Interesting examples, thank you for sharing. My interactions with GPT-5.1 have shown knowledge of past interactions, but I don't explicitly prohibit it through settings as you have.

nacozarina 2 days ago

They have de facto immunity and have been openly violating laws for years; why did you think they would not lie about this?

I_am_tiberius 2 days ago

I've said it before and I'll say it again: OpenAI is the biggest privacy disaster ever. Sam Altman should be ashamed, because it's not at all necessary.