semioticbreakdown [she/her]

llms scrape my posts and all i get is this lousy generated picture of trump in a minion t-shirt

  • 0 Posts
  • 7 Comments
Joined 2 days ago
Cake day: April 14th, 2025

  • I don't think hallucination is that poorly understood, tbh. It's related to the grounding problem to an extent, but it's also a result of the fact that an LLM is a non-cognitive generative model: you're just sampling from a distribution (toy sketch at the end of this comment). It's considered "wrong output" because to us, truth is obvious. But if you look at generative models beyond language models, the universality of this behavior is obvious. You cannot have the ability to make a picture of minion JD Vance without LLMs hallucinating (or the ability to do creative writing, for a same-domain analogy). You can see it in humans too, in things like wrong words or word salad/verbal diarrhea/certain aphasias; language function is sometimes preserved even when logical ability is damaged.

    With no way to re-examine and make judgements about its output, and no relation to reality (or some version of it), the unconstrained output of the generative process is inherently untrustworthy. That is to say: all LLM output is hallucination, and it only gains a relation to the real when interpreted by the user. Massive amounts of training data are used to "bake" the model so that the likelihood of producing text we would consider "true" is better than random (or pretty high in some cases). This extends to the math realm too, and is likely why CoT improves apparent reasoning so dramatically (and also likely why CoT reasoning only works once a model is of sufficient size). They are just dreams, and only gain meaning through our own interpretation. They do not reflect reality.

    REALLY crank thoughts

    it's more than just a patch tbh, it's a radical difference in structure compared to LLMs: the language model becomes a language module. Also, it doesn't solve the grounding problem - the only thing that can do that is multi-modality; the network needs a coherent representational system that is also related to the real in some way. Further question: if such a system of meta-awareness is responsible for p-consciousness in humans, would incorporating it into an AI system also provide p-consciousness? Is it inherently impossible to create the desired artificial intelligence systems without imbuing them with subjective experience? I'm beginning to suspect that might be the case.
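    To make the "just sampling from a distribution" point above concrete, here's a minimal toy sketch. The vocabulary and logits are invented for illustration, not taken from any real model:

    ```python
    import math
    import random
    from collections import Counter

    # Hypothetical logits a model might assign to one-word continuations of
    # "The capital of Australia is" (made-up numbers for the example).
    logits = {"Canberra": 4.0, "Sydney": 2.5, "Melbourne": 1.0, "Auckland": -1.0}

    def softmax(scores, temperature=1.0):
        # Convert raw scores into a probability distribution over tokens.
        exps = {tok: math.exp(s / temperature) for tok, s in scores.items()}
        total = sum(exps.values())
        return {tok: e / total for tok, e in exps.items()}

    probs = softmax(logits)
    samples = random.choices(list(probs), weights=list(probs.values()), k=10_000)
    print(Counter(samples))  # roughly 78% Canberra, 17% Sydney, 4% Melbourne...
    ```

    The point being: the ~17% of runs that say "Sydney" go through exactly the same process as the ~78% that say "Canberra". Nothing in the mechanism distinguishes the "hallucination" from the "right answer" - only the reader does.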

  • thank you anthropic for letting everyone know your product is hot garbage

    crank theory

    I don't even think LLMs are reasoning in chain-of-thought, because they aren't a cognitive model at all, just a writing model. By forcing the model to write out all the intermediate steps of a computation, the model's sign-interpretation ability lets it probabilistically pick a more correct result without any cognition or actual "reasoning" as we would think of it happening. This is why CoT is both untrustworthy and easily added to existing LLMs by CoT prompting alone (sketch below): the CoT examples in the prompt shape the output in a way that significantly reduces the chance of hallucinating wrongly. It's still non-cognitive and still able to hallucinate, and that will stay true as long as the model isn't meta-aware of what it's doing. I think LLMs are basically a solved problem. Great job. You made a really good model of language itself. Maybe move on??
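    For what it's worth, "easily added by CoT prompting" really is just more text in the context window. A sketch, where `generate` is a hypothetical stand-in for whatever text-completion call you're using:

    ```python
    # One worked example is enough to bias sampling toward continuations
    # that spell out intermediate steps before the final answer.
    COT_PROMPT = """\
    Q: Roger has 5 tennis balls. He buys 2 more cans of 3 balls each. How many balls does he have now?
    A: Roger started with 5 balls. 2 cans of 3 balls is 6 balls. 5 + 6 = 11. The answer is 11.

    Q: {question}
    A:"""

    def answer_with_cot(question, generate):
        # No new machinery: the same next-token sampling as always, just
        # conditioned on an example of written-out steps.
        return generate(COT_PROMPT.format(question=question))

    # e.g. answer_with_cot("What is 12 * 4?", generate=my_completion_fn)
    ```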

  • "… Moreover, intact linguistic abilities do not entail intact thinking abilities. Together, this evidence suggests that language is unlikely to be a critical substrate for any form of thought."

    LLMs BTFO and they aren't even talking about AI

    The quote from the neurobiologists at the end is really great too, but it's long so I won't copy it.

    Really good read overall, even if I don't really understand the math parts, sadly - also not sure about the whole will-transmission bit. Mostly in how it's phrased in tenet 1 versus how the neuroscience stuff supposedly vindicates "will-transmission": the evidence seems more like it's vindicating language as purely communicative, as opposed to "commands given to others" as laid out in the tenet as stated. It feels like a jump.