Sphere Neural-Networks for Rational Reasoning II
If you have a question about this talk, please contact Challenger Mishra.

The success of Large Language Models (LLMs) such as ChatGPT is witnessed by their planetary popularity, their capacity for human-like communication, and their steadily improving reasoning performance. However, it remains unclear whether LLMs actually reason. It is an open problem how traditional neural networks can be qualitatively extended to go beyond the statistical paradigm and achieve high-level cognition. Here, we present such a qualitative extension by generalising computational building blocks from vectors to spheres. We propose Sphere Neural Networks (SphNNs) for human-like reasoning through model construction and inspection, and develop an SphNN for syllogistic reasoning, a microcosm of human rationality. SphNN is a hierarchical neuro-symbolic Kolmogorov-Arnold geometric GNN that uses a neuro-symbolic transition map of neighbourhood spatial relations to transform the current sphere configuration towards the target. SphNN is the first neural model that can determine the validity of long-chained syllogistic reasoning in one epoch, without training data, with a worst-case computational complexity of O(N).

SphNN can evolve into various types of reasoning, such as spatio-temporal reasoning, logical reasoning with negation and disjunction, event reasoning, neuro-symbolic unification, and humour understanding (the highest level of cognition). All this suggests a new kind of Herbert A. Simon's scissors with two neural blades. SphNNs will greatly enhance interdisciplinary collaboration to develop the two neural blades, realise deterministic neural reasoning and human-bounded rationality, and elevate LLMs to reliable psychological AI. This work suggests that the non-zero radii of spheres are the missing component that prevents traditional deep-learning systems from reaching the realm of rational reasoning and causes LLMs to be trapped in the swamp of hallucination.
This talk is part of the ml@cl-math series.