The Layered Ontology of Models: Resolving the Epistemological Crisis of AI
Zhun Sun
Abstract
With the rapid development of modern Artificial Intelligence, especially the emergence of Large Language Models (LLMs), we face a growing epistemological crisis: our engineering capabilities have far outpaced our philosophical vocabulary. We have built systems that demonstrate emergent reasoning abilities, yet we struggle to articulate exactly what we have built. The traditional naming convention of lumping code, parameters, and behaviors together as a "Model" is no longer sufficient: it fails to capture the widening gap between human design intent and the resulting behavioral artifacts. Current discussions often oscillate between two extremes: a reductionist view that dismisses these systems as merely "stochastic parrots," and an anthropomorphic view that prematurely attributes consciousness to them. Both views stem from a lack of structural granularity in defining the ontological status of AI agents. This paper addresses the problem with a "Five-Layer Model Hierarchy Ontology." Inspired by systems theory and cognitive science, we deconstruct the concept of a "Model" into five distinct layers: the Noumenal Model ($\mathcal{M}_N$), the Conceptual Model ($\mathcal{M}_C$), the Instantiated Model ($\mathcal{M}_I$), the Reachable Model ($\mathcal{M}_R$), and the Observable Model ($\mathcal{M}_O$). By tracing the evolution of these layers from classical machine learning to foundation models, we reveal a fundamental shift in what a model is: from a "Tabula Rasa" (blank slate) to a pre-shaped "Artifact." Furthermore, we apply this framework to reconstruct two classic philosophical problems, namely the nature of meaning (via the "Stochastic Chinese Room") and the nature of truth (via the "Paradox of the Two Poetics"), demonstrating that the essence of synthetic intelligence lies not in biological mimicry, but in the topological structure of statistical manifolds.