Since Alan Turing proposed the Turing Test in 1950, AI has pursued the vision of machines performing the full range of human intellectual tasks. Progress from expert systems to deep learning to large language models has demonstrated remarkable capabilities, yet a gap remains between task-specific performance and the broader flexibility Turing envisioned. The Turing Test embodies Open-world AI: systems that handle unfamiliar tasks, adapt, and learn continuously. Most modern AI, by contrast, is Closed-world AI: it optimizes fixed tasks in well-defined environments under clear success criteria.
Pursuing AI for the open world requires its own learning principles. This research proposes three foundational principles: rich feature, disentangled representation, and inference-time learning, each of which represents a distinct research direction.