We can form and test hypotheses, experience the consequences, and then carry that knowledge into our next trial. Even dogs and cats do this on a daily basis. Without that, how would we even evaluate whether something is conscious?
I don't know what you're thinking of, but mine are.
Practice of any kind (sports, coding, puzzles) works like that.
Most of all: interactions with any other conscious entity. I carry at least intuitive expectations of how my wife / kid / co-workers / dog (if you count that) will respond to my behavior, but... I'm often wrong, and have to update my model of them or of myself.
Yes, I am saying in both cases the expectations are violated regularly. It’s not obvious at all that an LLM’s “perception” of its “world” is any more coherent than ours of our world.
Okay, so you're talking about LLMs specifically in the context of ChatGPT, Claude, or pick-your-preferred-chatbot, which isn't just an LLM but also a UI, a memory manager, a prompt builder, a vector DB, a system prompt, and everything else that goes into making it feel like a person.
Let's work with that.
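To make that concrete, here's a rough sketch of what that wrapper looks like. Every name here (retrieve_memories, llm_complete, the prompt layout) is invented for illustration and isn't any particular product's API; the point is just that the only "world" the model sees on a given turn is one assembled block of text.

```python
# Minimal sketch of the "chatbot wrapper around a bare LLM" idea above.
# All names and interfaces are stand-ins, not any vendor's actual API.

SYSTEM_PROMPT = "You are a helpful assistant."  # fixed persona text

def retrieve_memories(vector_db, user_message, k=3):
    """Stand-in for a vector-DB lookup: return the k stored snippets
    most similar to the user's message (interface assumed)."""
    return vector_db.search(user_message, top_k=k)

def build_prompt(history, memories, user_message):
    """The 'prompt builder': stitch persona, retrieved memories, and the
    running conversation into one block of text for the model."""
    memory_block = "\n".join(f"- {m}" for m in memories)
    turns = "\n".join(f"{role}: {text}" for role, text in history)
    return (
        f"{SYSTEM_PROMPT}\n\n"
        f"Relevant memories:\n{memory_block}\n\n"
        f"{turns}\n"
        f"user: {user_message}\n"
        f"assistant:"
    )

def chat_turn(llm_complete, vector_db, history, user_message):
    """One turn of the loop: the model only ever sees the prompt we build."""
    memories = retrieve_memories(vector_db, user_message)
    prompt = build_prompt(history, memories, user_message)
    reply = llm_complete(prompt)          # bare LLM call: text in, text out
    history.append(("user", user_message))
    history.append(("assistant", reply))  # crude memory management
    return reply
```

Everything outside that assembled prompt, including you, exists for the model only as whatever text the wrapper chose to include that turn.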
In a given context window or conversation, yes, you can have a very human-like conversation, and the chatbot will give the feeling of understanding your world and what it's like. But this still isn't a real world, and the chatbot isn't really forming hypotheses that can be disproven. At best, it's a D&D-style tabletop roleplaying game with you as the DM. You are the human arbiter of what is true and what is not for this chatbot, and the world it inhabits is the one you provide it. You tell it what you want, you tell it what to do, and it responds purely to you. That isn't a real world, it's just a narrative based on your words.