


It’s incredibly embarrassing to be an asshole to someone while smacking down an argument they didn’t make.

The empirical demonstration is that LLM output is certainly not random. Do you disagree?

If no, then you agree with my empirical observation that the text they’re trained on has some degree of coherence. Or else the structure of LLM output is just emerging from the silicon and has nothing at all to do with the (apparently coincidental) appearance of coherence in the input text.

Let me know if you disagree.

I never suggested this means anything at all about whether the system is conscious or not, GP did, and it’s a position I’m arguing against.


  > I never suggested this means anything at all about whether the system is conscious or not, GP did, and it’s a position I’m arguing against.
That's certainly not how I perceived anything you said. Let me get this straight.

u:yannyu's original comment (ironically, clearly LLM-generated) supposes that the barrier to LLM consciousness is the absence of environmental coherence. In other words, their stance is that LLMs are not currently conscious.

u:estearum, that is, you, reply:

  > The consistency and coherence of LLM outputs, assembled from an imperfectly coherent mess of symbols is an empirical proof that the mess of symbols is in fact quite coherent.
  > The physical world is largely incoherent to human consciousnesses too, and we emerged just fine.
In reply to a comment about world incoherence as a barrier to consciousness, you suggest that humans became conscious despite incoherence, which by extension implies that you do not think incoherence is a barrier to LLM consciousness either.

u:yannyu replies to you with another probably LLM-generated comment pointing out the difference between textual coherence and environmental coherence.

u:estearum, that is, you, make a reply including:

  > They [LLMs] sometimes get it wrong, just like all other conscious entities sometimes get their predictions wrong.
"All other conscious entities" is your choice of wording here. "All other" is definitionally inclusive of the thing being other-ed. There is no possible way to correctly interpret this sentence other than suggesting that LLMs are included as a conscious entity. If you did not mean that, you misspoke, but from my perspective the only conclusion I can come to from the words you wrote is that clearly you believe in LLM consciousness and were actively arguing in favor of it, not against it.

Incidentally, LLM output is not uniformly random, but rather purely probabilistic. It operates on the exact same premise as the toy script I posted, only with a massive dataset and significantly more advanced math and programming techniques. That anybody even entertains the idea of LLM consciousness is completely ridiculous to me. Regardless of whatever philosophical debates you'd like to have about whether a machine can reach consciousness, the idea that LLMs are a mechanism by which that's possible is fundamentally preposterous.
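
(As a rough illustration of what "purely probabilistic" means here, not the original script, just a toy bigram sampler over made-up data:)

  # Rough sketch of the idea only -- not the script referenced above.
  # Record which word follows which, then generate by rolling a die
  # against those observed frequencies.
  import random
  from collections import defaultdict

  corpus = "the cat sat on the mat and the cat ate the fish".split()

  follows = defaultdict(list)
  for prev, nxt in zip(corpus, corpus[1:]):
      follows[prev].append(nxt)

  word = "the"
  output = [word]
  for _ in range(8):
      candidates = follows.get(word)
      if not candidates:
          break
      word = random.choice(candidates)  # probabilistic pick, weighted by frequency
      output.append(word)

  print(" ".join(output))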


> but rather purely probabilistic.

In other words, indicative of a coherence (some type of structure) in the "world" of inputs into an LLM, which was the only point I was making.

No, I did not intend to suggest that LLMs are conscious with my "all other conscious entities". I intended to suggest that we don't know if they're conscious, and the provided reason to "prove" that they're not is nonsense.

Well, if you have a theory that cleanly proves your intuition that it's "preposterous" that an LLM could be conscious, you should go submit it and win a Nobel!

Otherwise you're just sharing your unfounded and unduly confident assumptions, which isn't terribly interesting.


Win a Nobel? Stating that LLMs aren't conscious is no more novel than stating that the sun isn't conscious, which is self-evident (and in any case, claims of such consciousness are unfalsifiable, à la Russell's Teapot; the burden is surely on proving it exists, not on proving it doesn't). That didn't stop people who didn't understand the sun from worshipping it as a deity, of course. It is unfortunate that humans are somewhat predisposed to such mass delusions whenever presented with something beyond their immediate comprehension.


Nothing going on in an LLM is beyond my comprehension. It’s a simple machine… but so is the brain.

How to determine whether LLMs (or any other information processing devices like brains) produce consciousness is an actually unsolved problem, despite your assertions that the idea is preposterous.

In any case, no, I actually said the opposite of what you said here. Merely stating your unfounded yet high-confidence belief is extremely uninteresting, even as casual internet discourse.

If you could prove any of your beliefs on the matter, that’d be worthy of a Nobel. It’d require you to first take a step back from condescending “certainty” though, and it doesn’t seem like you’re headed that direction.


You're trivialising the complexity of both brains and LLMs in order to mystify and equate them. We can boil a toy script, an LLM, and the brain down to engaging in a simplistic pattern of "act according to environmental information, adjust behaviour based on feedback". But unless you think that the script is conscious, there is clearly something more to consciousness than that pattern alone.

The script is simple in concept, (relatively) simple in mechanics. LLMs are simple in concept, very complex in mechanics. Brains are simple in concept, so complex in mechanics as to be beyond full human understanding. We have the general gist of brain mechanics, and we can even engage in some brain hacking, but we lack the understanding to fully recreate the mechanics from scratch. To say that the brain is simple in mechanics is clearly not true, then.

To the extent that we do understand brain mechanics, they bear no similarity to either the script or the LLM. Although more mechanically complex, an LLM still operates on the exact same axis as the script; it is no more than math operations on a dataset. It has no capacity to reason.

People selling LLMs have gone to great lengths to train them to give the illusion of reasoning, but in reality, "Thinking" is just a mode that increases the probability of certain tokens under certain circumstances. The modified winrate of tokens that reflect the superficial appearance of thinking exceeds the winrate of tokens that would otherwise come next, so it prints output that looks like introspection, but this is still just the result of rolling a die and comparing it against a probability table. Our brains categorically do not function anything like this. While the basic pattern of acting according to the environment may be similar, the actual mechanics by which this is achieved are completely different.
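
(A deliberately oversimplified sketch of that die roll, not any vendor's actual implementation; every token and number below is made up:)

  # Oversimplified sketch -- "thinking" here is just a bias added to some
  # scores before the roll. Tokens and numbers are invented for illustration.
  import math
  import random

  logits = {"42": 2.0, "therefore": 1.0, "let me think step by step": 0.5}
  thinking_bias = {"let me think step by step": 1.5}  # hypothetical boost

  scores = {tok: v + thinking_bias.get(tok, 0.0) for tok, v in logits.items()}

  # softmax turns scores into a probability table
  z = sum(math.exp(v) for v in scores.values())
  probs = {tok: math.exp(v) / z for tok, v in scores.items()}

  # the next token is literally a weighted die roll against that table
  token = random.choices(list(probs), weights=list(probs.values()))[0]
  print(probs, "->", token)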

There isn't really anything to prove here. This isn't a meaningful scientific question. It is a delusion peddled by people who are trying to profit off of the delusion, and absorbed by people who do not actually understand the mechanical complexity of either LLMs or the brain and are therefore unable to differentiate between the two.


> Our brains categorically do not function anything like this.

Yes, they actually do. It seems you’re under the belief there is some special “reasoning machine” inside a brain: there is not. It is all “just probabilities” that are encoded and re-encoded depending on the outcome of the prediction.

What exactly do you think the brain is doing on the sensory input it receives if not “math operations?” It is literally just chemical, electrical, and thermal reactions in response to sensory data (itself just chemical, electrical, and thermal signals).

Where does the probabilistic reaction end and “reason” or “cognition” begin? Hint: Nowhere!


Yes, everything is just physical reactions when boiled down to its simplest state. Again, this is trivialising complexity. The nature of those reactions is where the brain differentiates itself. Even if the chemical reactions were purely mathematically driven (not necessarily the case), the nature of the calculations is completely different. LLMs are calculating next-token-probability. That is literally the only thing they calculate. Your special "reasoning machine", then, would be code that instead calculates logic-adherence-probability. If you want to treat the brain as a computer, then we have such code somewhere in our DNA program.

It's really self-evident that no part of an LLM's code is oriented towards genuine reasoning; therefore it cannot reason. Perhaps if a sufficiently powerful machine were coded in a way that was conducive to reasoning, it could be described as conscious. I don't think it's unreasonable to believe that consciousness could be attained in electrical circuits outright. LLMs are simply not the mechanism by which that can ever happen. They are merely a mechanism that is good at creating a facsimile of reasoning output with non-reasoning input.


“Reasoning” is nothing more than making a prediction, finding out whether it was right or wrong, then encoding that into your probabilities for a future prediction.

The human mind is so lossy at this that we literally invent symbolic systems to externalize the process and constrain our intrinsically probabilistic (therefore allegedly “reasoning-less”) hardware.

There are only predictions and outcomes as far as the brain or the LLM are concerned.
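
(A toy sketch of that loop, not a model of any real brain or LLM; all numbers are arbitrary:)

  # Predict, observe the outcome, fold it back into the probability.
  import random

  true_rate = 0.7      # how often the "world" actually says yes
  belief = 0.5         # currently encoded probability
  learning_rate = 0.05

  hits = 0
  for _ in range(2000):
      prediction = random.random() < belief         # probabilistic guess
      outcome = random.random() < true_rate         # what actually happened
      hits += prediction == outcome
      belief += learning_rate * (outcome - belief)  # re-encode for next time

  print(round(belief, 2), hits / 2000)  # belief drifts toward ~0.7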


Wow, was everything you just said wrong on every count. Reasoning is the ability to follow logical rules to their conclusion; it is not a probabilistic endeavour. We have 100% confidence that 1 + 1 is 2, 2 + 1 is 3, 3 + 1 is 4, following infinitely. Meanwhile an LLM cannot reliably sum two numbers because it is operating purely probabilistically.

We invent symbolic systems to record and communicate the reasoning that occurs inside our heads. Externalization is in no way necessary to the actual reasoning. The "lossy" part is when humans die and lose the progress they had made on reasoning. Writing it down preserves and transfers the progress so that others can learn from and continue building on it.

I don't understand why LLM consciousness-believers are so down on the brain's code. Why is it difficult for you to believe that we have code that is, in fact, more complicated than guessing the next token? It's weird that we develop this technology that is cool but pretty clearly nowhere near our capabilities, and then diminish our own capabilities in our eagerness to promote how cool the new tool is.


Oof.

1. I know what reasoning is theoretically. Humans do not do that though. Brains do not have the hardware to do anything other than probabilistic prediction. You've already acknowledged this, but now are in denial because the corollaries are awfully inconvenient for a conclusion you've already committed yourself to!

2. No one claimed externalization is necessary to reason. I am claiming that people use externalization (even in lieu of collaboration with others) because brains are in fact so unreliable at even basic logic, which you allege they have built-in hardware to do (they do not).

3. I'm not an LLM consciousness-believer. I'm not sure why you keep supposing that I am.

> Why is it difficult for you to believe that we have code that is, in fact, more complicated than guessing the next token

Because there is no evidence for anything going on in the brain other than predicting sensory input and probabilistically transforming it into behavior and conscious experience. Show evidence for where in the brain any other process is occurring and I'd be excited to dig into it :)



