
Why is that any different from the utter mess of a world humans find themselves existing in?


We can form and test hypotheses, experience the consequences, and then carry that knowledge into the next trial. Even dogs and cats do this on a daily basis. Without that, how would we even evaluate whether something is conscious?


And these expectations are violated regularly?

The question of how to evaluate whether something is conscious is totally different from the question of whether it actually is conscious.


> these expectations are violated regularly?

I don't know what you're thinking of, but mine are.

Practice of any kind (sports, coding, puzzles) works like that.

Most of all: interactions with any other conscious entity. I carry at least intuitive expectations of how my wife / kid / co-workers / dog (if that counts) will respond to my behavior, but I'm often wrong and have to update my model of them or of myself.

I agree with your second paragraph.


Yes, I am saying in both cases the expectations are violated regularly. It’s not obvious at all that an LLM’s “perception” of its “world” is any more coherent than ours of our world.


LLMs can do the same within the context window. It's especially obvious with modern LLMs, which are tuned extensively for tool use and agentic behavior.


Okay, so you're talking about LLMs specifically in the context of ChatGPT, Claude, or whichever chatbot you prefer, which isn't just an LLM but also a UI, a memory manager, a prompt builder, a vector DB, a system prompt, and everything else that goes into making it feel like a person (rough sketch of that stack below).

Let's work with that.

Within a given context window, yes, you can have a very human-like conversation, and the chatbot will give the feeling of understanding your world and what it's like. But this still isn't a real world, and the chatbot isn't really forming hypotheses that can be disproven. At best, it's a D&D-style tabletop roleplaying game with you as the DM. You are the human arbiter of what is true and what is not for this chatbot, and the world it inhabits is the one you provide it. You tell it what you want, you tell it what to do, and it responds purely to you. That isn't a real world; it's just a narrative based on your words.
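To make the "not just an LLM" framing concrete, here's a rough, made-up sketch of how those pieces typically fit together. None of the names (build_prompt, VectorStore, call_llm) are a real library's API; it's just the shape of the thing:

    # Hypothetical sketch of the "chatbot stack" described above: the model itself
    # only ever sees a prompt assembled from the surrounding pieces.

    SYSTEM_PROMPT = "You are a helpful assistant."  # fixed framing, not learned

    class VectorStore:
        """Stand-in for the vector DB holding past conversation snippets."""
        def __init__(self):
            self.snippets = []

        def add(self, text):
            self.snippets.append(text)

        def search(self, query, k=3):
            # Real systems rank by embedding similarity; this just counts shared words.
            scored = sorted(self.snippets,
                            key=lambda s: len(set(s.split()) & set(query.split())),
                            reverse=True)
            return scored[:k]

    def build_prompt(user_message, memory, history):
        """The 'prompt builder': everything the model actually conditions on."""
        recalled = memory.search(user_message)
        return "\n".join([SYSTEM_PROMPT, *recalled, *history, f"User: {user_message}"])

    def chat_turn(user_message, memory, history, call_llm):
        prompt = build_prompt(user_message, memory, history)
        reply = call_llm(prompt)  # the only place the LLM itself appears
        history.append(f"User: {user_message}")
        history.append(f"Assistant: {reply}")
        memory.add(user_message)
        return reply

Everything outside call_llm is ordinary plumbing around the model, and that plumbing is a big part of what makes it feel like a person.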


A modern agentic LLM can execute actions in the "real world", whatever you deem that to be, and get feedback. How is that any different from what humans do?
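For what it's worth, the loop being described is mechanically simple. A minimal sketch, assuming a hypothetical call_llm function and a single shell tool (not any particular vendor's or framework's API):

    # Hypothetical act/observe loop: the model proposes an action, the action runs
    # against something outside the model, and the result feeds the next prompt.

    import subprocess

    def run_shell(command):
        """One 'real world' tool: run a command, return its exit status and output."""
        result = subprocess.run(command, shell=True, capture_output=True, text=True)
        return f"exit={result.returncode}\n{result.stdout}{result.stderr}"

    def agent_loop(goal, call_llm, max_steps=5):
        transcript = [f"Goal: {goal}"]
        for _ in range(max_steps):
            # Ask the model for the next shell command (or DONE) given everything so far.
            action = call_llm("\n".join(transcript) + "\nNext shell command, or DONE:")
            if action.strip() == "DONE":
                break
            observation = run_shell(action)  # feedback from outside the model
            transcript.append(f"Action: {action}")
            transcript.append(f"Observation: {observation}")
        return transcript

The "feedback" here is whatever the command actually did, not something the model generated, which is the sense in which it resembles trial and error.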



