Neuro-Symbolic Strong-AI Robots with Closed Knowledge Assumption: Learning and Deductions
arXiv:2604.09567v1 Announce Type: cross
Abstract: Knowledge representation formalisms aim to represent general conceptual information and are typically used to construct the knowledge base of a reasoning agent. Such a knowledge base can be thought of as representing the beliefs of that agent. Like a child, a strong-AI (AGI) robot would have to learn through input and experience, constantly progressing and advancing its abilities over time. Alongside the statistical AI produced by neural networks, we also need the concept of \textsl{causality} of events, translated into the directionality of logical entailments and deductions, in order to give robots an emulation of human intelligence. Moreover, by using axioms we can guarantee \textsl{controlled security} of the robot's actions based on logical inference.
For AGI robots we consider Belnap's four-valued bilattice of truth values, equipped with a knowledge ordering in which the value "unknown" is the bottom element. Sentences with this value are unknown facts, that is, knowledge missing from the AGI robot. These unknown facts are therefore not part of the robot's knowledge base, and by learning through input and experience, the robot's knowledge naturally expands over time.
Consequently, this phenomenon can be represented by the Closed Knowledge Assumption and the logical inference provided in this paper.
Moreover, the truth value "inconsistent", which is the top element in the knowledge ordering of Belnap's bilattice, is necessary for strong-AI robots to tolerate inconsistent information and paradoxes, such as the Liar paradox, during deduction.
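The four-valued logic described above can be sketched as follows. This is a minimal illustration, not code from the paper: it encodes Belnap's values as sets of classical evidence, so that "unknown" is the empty set (bottom of the knowledge ordering) and "inconsistent" is the full set {T, F} (top of the knowledge ordering), with the standard Belnap connectives.

```python
# Belnap's four truth values, encoded as sets of classical evidence:
#   {}        = unknown       (bottom of the knowledge ordering)
#   {'T'}     = true
#   {'F'}     = false
#   {'T','F'} = inconsistent  (top of the knowledge ordering)
UNKNOWN = frozenset()
TRUE = frozenset('T')
FALSE = frozenset('F')
BOTH = frozenset('TF')

def k_leq(a, b):
    """Knowledge ordering: a <=_k b iff b carries at least the evidence in a."""
    return a <= b

def neg(a):
    """Negation swaps evidence for truth and falsity; 'unknown' and 'both' are fixed points."""
    return frozenset({'T': 'F', 'F': 'T'}[x] for x in a)

def conj(a, b):
    """Conjunction: evidence for truth requires both conjuncts; evidence for falsity requires either."""
    out = set()
    if 'T' in a and 'T' in b:
        out.add('T')
    if 'F' in a or 'F' in b:
        out.add('F')
    return frozenset(out)

def disj(a, b):
    """Disjunction via De Morgan duality."""
    return neg(conj(neg(a), neg(b)))
```

For example, `conj(TRUE, UNKNOWN)` yields `UNKNOWN`, reflecting that a conjunction with a missing fact cannot yet be asserted, while `neg(BOTH)` stays `BOTH`, which is how the logic absorbs Liar-style inconsistency instead of trivializing.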