LLMs are probabilistic constraint satisfaction problem solvers
Quote Tweet
Has anybody already named the LLM phenomenon of what I'm going to call "Schrodinger's Riddle" for games like 20 questions with GPT4, where it pretends to have something in mind the whole time but then hallucinates a solution based on the arbitrary answers it's given to questions?
Constraints are given by the language (implied by the training data) and by the context window. Based on these, an LLM generates a probability distribution over tokens, scoring how likely each one is to satisfy the constraints
Text generation then fits naturally into this metaphor: you sample a token from the distribution produced by the p-CSP solver and feed it back in as a new constraint for the next generation step
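The sample-then-constrain loop described in the replies can be sketched in a few lines of Python. This is a toy stand-in only: the bigram table and the function names are hypothetical illustrations, not any real model's API — a real LLM would compute the distribution with a neural network conditioned on the whole context window.

```python
import random

# Toy "p-CSP solver": a hand-written bigram table standing in for the
# model. Given the current context (the constraints so far), it yields a
# probability distribution over candidate next tokens.
BIGRAMS = {
    "<s>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.5, "dog": 0.5},
    "a": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 1.0},
    "dog": {"sat": 1.0},
    "sat": {"</s>": 1.0},
}

def next_token_distribution(context):
    # Only the last token constrains the next one in this toy model;
    # an LLM conditions on the entire context window.
    return BIGRAMS[context[-1]]

def generate(max_tokens=10, seed=0):
    rng = random.Random(seed)
    context = ["<s>"]
    for _ in range(max_tokens):
        dist = next_token_distribution(context)
        tokens, probs = zip(*dist.items())
        # Sample a token from the distribution...
        token = rng.choices(tokens, weights=probs)[0]
        if token == "</s>":
            break
        # ...and append it as a new constraint for the next step.
        context.append(token)
    return context[1:]  # drop the start-of-sequence marker

print(" ".join(generate()))
```

Each iteration is one pass of the metaphor: the "solver" scores tokens against the current constraints, sampling picks one, and that choice becomes part of the constraint set for the following step.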