There is no actual object-level argument in your reply, making it pretty useless. I’m left trying to infer what you might be talking about, and frankly it’s not obvious to me.
For example, what relevance does neuroscience have here? Artificial neural nets and real brains are entirely different substrates. The “neural net” part is a misnomer. We shouldn’t expect them to work the same way.
What’s relevant is the psychology literature. Do artificial minds behave like real minds? In many ways they do — LLMs exhibit the same sorts of fallacies and biases as human minds. Not exactly 1:1, but surprisingly close.
I didn't say brains and ANNs are the same; in fact, I'm making quite the opposite argument here.
LLMs exhibit these biases and fallacies because they regurgitate the biases and fallacies written by the humans who produced their training data.
Maybe. That’s not an obvious conclusion in the strong sense that you mean it here. If you train an LLM on machine-generated, perfectly accurate transcripts of multiplying very large numbers, it still exhibits the same sorts of mental math errors that people make.
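To make that concrete, here is a rough sketch of the kind of setup I mean. It is purely illustrative: `query_model` and `held_out` are placeholders for whatever model interface and eval split you would actually use; only the data generation and error analysis are spelled out.

```python
import random

def make_transcript(n_digits: int = 6) -> dict:
    """One machine-generated, perfectly accurate training example."""
    a = random.randint(10 ** (n_digits - 1), 10 ** n_digits - 1)
    b = random.randint(10 ** (n_digits - 1), 10 ** n_digits - 1)
    return {"prompt": f"{a} * {b} =", "answer": str(a * b)}

def digit_error_positions(truth: str, guess: str) -> list[int]:
    """Indices (from the most significant digit) where the model slipped."""
    guess = guess.strip().rjust(len(truth), "0")[: len(truth)]
    return [i for i, (t, g) in enumerate(zip(truth, guess)) if t != g]

if __name__ == "__main__":
    # Flawless training transcripts: no human errors anywhere in the data.
    data = [make_transcript() for _ in range(10_000)]
    # ...fine-tune a model on `data`, then (hypothetically):
    # errors = [digit_error_positions(ex["answer"], query_model(ex["prompt"]))
    #           for ex in held_out]
    # If the error positions cluster around carries and middle digits, the
    # mistakes look "human" despite the perfectly accurate training data.
```

The point of the sketch is that nothing in the training data contains the error pattern; if it shows up anyway, it isn't regurgitation.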
Math, logical reasoning, etc. are cultural knowledge, not architecturally built-in. These biases and fallacies arise from how we process higher-order concepts via language-like mechanisms. It should not be surprising that LLMs, which mimic human-like natural language abilities (at the cultural/learned level of abstraction, if not the computational substrate), exhibit the same sorts of errors.