
> Our brains categorically do not function anything like this.

Yes, they actually do. It seems you’re under the belief there is some special “reasoning machine” inside a brain: there is not. It is all “just probabilities” that are encoded and re-encoded depending on the outcome of the prediction.

What exactly do you think the brain is doing on the sensory input it receives if not “math operations?” It is literally just chemical, electrical, and thermal reactions in response to sensory data (itself just chemical, electrical, and thermal signals).

Where does the probabilistic reaction end and “reason” or “cognition” begin? Hint: Nowhere!



Yes, everything is just physical reactions when boiled down to its simplest state. Again, this is trivialising complexity. The nature of those reactions is where the brain differentiates itself. Even if the chemical reactions were purely mathematically driven (not necessarily the case), the nature of the calculations is completely different. LLMs calculate next-token probability. That is literally the only thing they calculate.

Your special "reasoning machine", then, would be code that instead calculates logic-adherence probability. If you want to treat the brain as a computer, then we have such code somewhere in our DNA program. It's self-evident that no part of an LLM's code is oriented towards genuine reasoning, therefore they cannot reason.

Perhaps if a sufficiently powerful machine were coded in a way conducive to reasoning, it could be described as conscious. I don't think it's unreasonable to believe that consciousness could be attained in electrical circuits outright. LLMs are simply not the mechanism by which that can ever happen. They are merely a mechanism that is good at producing a facsimile of reasoning output from non-reasoning input.
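To make the "next-token probability" point concrete, here is a minimal sketch of what that calculation amounts to (toy vocabulary and made-up logits, plain numpy rather than a real model): raw scores get squashed into a probability distribution over tokens, and the next token is sampled from it. That is the whole output step.

    import numpy as np

    # Toy vocabulary and made-up logits standing in for a model's output scores.
    vocab = ["2", "3", "4", "cat"]
    logits = np.array([3.1, 0.2, -1.0, -2.5])

    # Softmax: turn raw scores into a probability distribution over the vocabulary.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    # Sampling from that distribution is the entirety of the model's "decision".
    next_token = np.random.choice(vocab, p=probs)
    print(dict(zip(vocab, probs.round(3))), "->", next_token)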


“Reasoning” is nothing more than making a prediction, finding out whether it was right or wrong, then encoding that into your probabilities for a future prediction.
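In code terms, that predict / check / re-encode loop can be sketched in a few lines (a toy beta-Bernoulli counter, purely illustrative, not a claim about neural wiring):

    import random

    # Predict the probability of an event, observe the outcome,
    # then fold the outcome back into the estimate for next time.
    heads, tails = 1, 1          # uninformative starting counts
    true_rate = 0.7              # the hidden regularity being learned

    for _ in range(1000):
        prediction = heads / (heads + tails)    # predict
        outcome = random.random() < true_rate   # find out if it was "right"
        if outcome:                             # encode the result
            heads += 1
        else:
            tails += 1

    print(round(heads / (heads + tails), 2))    # converges toward 0.7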

The human mind is so lossy at this that we literally invent symbolic systems to externalize the process and constrain our intrinsically probabilistic (therefore allegedly “reasoning-less”) hardware.

There are only predictions and outcomes as far as the brain or the LLM are concerned.


Wow, everything you just said was wrong on every count. Reasoning is the ability to follow logical rules to their conclusion; it is not a probabilistic endeavour. We have 100% confidence that 1 + 1 is 2, 2 + 1 is 3, 3 + 1 is 4, and so on infinitely. Meanwhile, an LLM cannot reliably sum two numbers because it operates purely probabilistically.
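To make the contrast concrete, here's a toy sketch (Peano-style rules written out in Python, purely illustrative) of what "following logical rules to their conclusion" looks like: the answer is entailed by the rules, not sampled from a distribution.

    # Numbers built by a successor rule: 0, S(0), S(S(0)), ...
    def succ(n):
        return n + 1

    def add(a, b):
        # Rule 1: a + 0 = a.  Rule 2: a + S(b) = S(a + b).
        return a if b == 0 else succ(add(a, b - 1))

    assert add(1, 1) == 2
    assert add(2, 1) == 3
    assert add(3, 1) == 4   # holds every time; no sampling anywhere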

We invent symbolic systems to record and communicate the reasoning that occurs inside our heads. Externalization is in no way necessary to the actual reasoning. The "lossy" part is when humans die and lose the progress they had made on reasoning. Writing it down preserves and transfers the progress so that others can learn from and continue building on it.

I don't understand why LLM consciousness-believers are so down on the brain's code. Why is it difficult for you to believe that we have code that is, in fact, more complicated than guessing the next token? It's weird that we develop this technology that is cool but pretty clearly nowhere near our capabilities, and then diminish our own capabilities in our eagerness to promote how cool the new tool is.


Oof.

1. I know what reasoning is theoretically. Humans do not do that, though. Brains do not have the hardware to do anything other than probabilistic prediction. You've already acknowledged this, but are now in denial because the corollaries are awfully inconvenient for a conclusion you've already committed yourself to!

2. No one claimed externalization is necessary to reason. I am claiming that people use externalization (even apart from collaboration with others) because brains are in fact so unreliable at even basic logic, which you allege they have built-in hardware for (they do not).

3. I'm not an LLM consciousness-believer. I'm not sure why you keep supposing that I am.

> Why is it difficult for you to believe that we have code that is, in fact, more complicated than guessing the next token

Because there is no evidence for anything going on in the brain other than predicting sensory input and probabilistically transforming it into behavior and conscious experience. Show evidence for where in the brain any other process is occurring and I'd be excited to dig into it :)



