I think it's very unlikely any current LLMs are conscious, but these snarky comments are tiresome. I would be surprised if you read a significant amount of the post.
I believe the biggest issue is creating a testable definition of consciousness. Unless we can prove we are sentient (and we really can't - I could just be faking it), this is not a discussion we can have in scientific terms.
It’s really trivial to prove, but the issue is that sentience is not something you need to negate out of existence and then attempt to reconstruct from epistemological proofs. You’re not faking it, and if you were, then turn your sentience off and on again. If your idea comes from Dennett, then he’s barking up entirely the wrong tree.
You know at a deep level that a cat is sentient and a rock isn’t. You know that an octopus and a cat have different modes of sentience from whatever might exist in a plant, and the same again for a machine running electrical computations on silicon. These are the kinds of certainties that all of your other experiences of the world hinge upon.
> You know at a deep level that a cat is sentient and a rock isn’t.
An axiom is not a proof. I BELIEVE cats are sentient and rocks aren’t, but without a test, I can’t really prove it. Even if we could completely understand the sentience of a cat, to the point we knew for sure what it feels like to be a cat from the inside, we can’t rule out other forms of sentience based on principles completely different from an organic brain and even embodied experience.
Maybe in mathematics, but not in philosophy, because eventually you have to decide on the certainties without which the universe cannot be made sense of or reconstructed from proofs.
If I pricked you with a pin, I would be certain that it hurt you, and I could know what that sensation would be like if it were happening to me. Yet there is no description and no apparatus that could transmit to me the feeling you are having.
So no, we cannot rule out that computers are having conscious experiences, but from the nature of their being and the type of machine that they are, we can consider that it is not of the same degree as ours. Which is why I made my initial observation - the machine running the spreadsheet or the terminal emulator will never cause me to believe it is having conscious experiences. Just because that same machine is now producing complicated and confusing textual outputs doesn't change what it is: it remains the same type of machine it was before running the AI software.
> we can consider that it is not of the same degree as ours
We can be absolutely sure an intelligence operating on different physical principles will be very different from our own. We can only assess intelligence by observing the subject, because the mechanism being different from our own doesn’t exclude the subject from being sentient.
> remains the same type of machine as it was before running the AI software
It’s not our brain that’s conscious. It’s the result of years of embodied experience fine-tuning the networks in our brains that constitutes our sentient mind. Up until now, this was the only way we knew a sentient entity could be created, but it’s possible it’s not the only one, just the one that happens naturally in our environment.
One of the issues is that you're mixing up consciousness, sentience, intelligence, and aliveness (you're far from alone in this). We know these are all linked things but it's hard to neatly delimit them and clarify the terms, yet they're something we have certainties about at a deep level. A machine is clearly demonstrating parts of intelligence, but going further into sentience and consciousness is much harder, and aliveness even harder still.
We know that a cat has sentience of a certain kind, and consciousness of a certain kind different to ours in ways that would be hard to test and verify, and intelligence that is suited to its purpose, though it seems the cat "doesn't know it knows"; and it is definitely alive, up until the point it dies and all these properties fade from its body.

The textual machine, then, has mechanised properties of our intelligence and produces outputs that match intelligent outputs like ours. It even seems to "know it knows", or can at least produce outputs that are not easily differentiated from a human producing text. Yet going further into sentience and consciousness is much harder.

But we know intrinsically that sentience and consciousness are connected to yet separate from intelligence, so limited degrees of machine sentience don't necessarily allow a jump to consciousness, and certainly not to aliveness, because the machine isn't alive, never was, and never can be. As humans these things are important to us, particularly because suffering and feeling emotions are a crucial part of human existence (and even intelligence). A machine that can be turned off and on again, that isn't alive, and that doesn't suffer or have our kinds of conscious experiences isn't really going to meet our criteria for what we find most valuable about being intelligent (sapient), conscious, sentient, alive beings, even if it outputs useful amounts of rational intelligence.
I'm also not sure what you mean by "It's not our brain that's conscious", given we can't have conscious experiences without one. A baby in the womb has a degree of consciousness (at some point) without those years of "fine tuning the networks". Hence at this point you seem to be mixing up consciousness, sentience, and sapience.
> We know these are all linked things but it's hard to neatly delimit them and clarify the terms, yet they're something we have certainties about at a deep level.
And this is the biggest issue we have when saying categorically that a machine exhibiting a given behavior is somehow faking it. You can't say for sure that a machine that says it loves you is incapable of having feelings, the same way you can't prove I can think, because I could just be reasonably good at faking that behavior.
I agree with the first few parts of your second paragraph, but I don't think you can extrapolate that to the extent you're attempting. Evaluating consciousness in machines is not going to be easy.