Common Sense & World Models
Ask a five-year-old what happens when you tip a glass of water upside down and they'll tell you it spills. Ask a language model and it will probably give the right answer - but not because it understands gravity, water, or glasses. It has learned the statistical association between those concepts in text. This distinction between knowing facts about the world and actually understanding how the world works is one of AI's biggest unsolved problems.

Humans navigate daily life using a vast web of intuitive knowledge - common sense - that we rarely think about: objects fall down, people get hungry, pushing someone is rude, ice is slippery. Encoding this kind of knowledge in AI systems has proven extraordinarily difficult.

Current models can fake it convincingly in many situations by drawing on patterns in their training data, but they fail in unexpected ways when they encounter scenarios that require genuine physical or social intuition. This is why AI can write a plausible-sounding business plan but might suggest a meeting schedule that ignores the fact that humans need to eat lunch.