The Role of Symbolic Knowledge and Defeasible Reasoning at the Dawn of AGI

From lotico
</FONT>
<HR>
<center><youtube>https://youtu.be/Iea6i9x7j4k</youtube></center>

Latest revision as of 10:30, 15 February 2024

Date: Thursday, February 8th, 2024

Time: 15:00 CET Berlin / 14:00 UTC London / 9:00 AM ET New York

Type: Online Meeting

Registration Count: 98 (as of 8 February 2024)



Large language models and generative AI have shown amazing capabilities. We tend to see them as much more intelligent than they actually are. It is time to embrace the many research challenges ahead before we can truly realise AGI. Work in the cognitive sciences can help us to better mimic human cognition, and to understand how to address generative AI failures such as factual errors, logical errors, inconsistencies, limited reasoning, toxicity, and fluent hallucinations. How can we architect systems that continuously learn from limited data as we do, combining observations and direct experience with autonomous, algorithmic and reflective cognition?

If machine learning is so effective for neural networks, where does that leave symbolic AI? My conjecture is that symbolic AI has a strong future as the basis for semantic interoperability between systems, along with knowledge graphs as an evolutionary replacement for today's relational databases. We need to recognise, however, that human interactions and our understanding of the world are replete with uncertainty, imprecision, incompleteness and inconsistency. Logicians have largely turned a blind eye to the challenges of imperfect knowledge.

This is despite a long tradition of work on argumentation, stretching all the way back to Ancient Greece. This tradition underpins courtroom proceedings, ethical guidelines, political discussion and everyday arguments. I will introduce the plausible knowledge notation (PKN) as a way to address plausible inference of properties and relationships, fuzzy scalars and quantifiers, along with analogical reasoning. Work on symbolic AI can help guide research on neural networks and, vice versa, neural networks can assist human researchers, speeding the development of new insights.
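To make the idea of defeasible reasoning concrete, here is a minimal sketch of default rules with exceptions (the classic "birds fly, penguins don't" pattern). This is an illustrative toy, not the PKN notation or the demo's actual reasoner; the `Rule` class, fact names, and `infer` function are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    """A defeasible rule: premises plausibly support the conclusion,
    unless any exception (defeater) is known to hold."""
    conclusion: str
    premises: tuple
    exceptions: tuple = ()

def infer(facts, rules):
    """Forward-chain defeasible rules to a fixed point.

    A rule fires when all its premises are derived and none of its
    exceptions are; a later-derived exception is not retracted here,
    which is one of the subtleties real defeasible logics must handle.
    """
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for r in rules:
            if (all(p in derived for p in r.premises)
                    and not any(e in derived for e in r.exceptions)
                    and r.conclusion not in derived):
                derived.add(r.conclusion)
                changed = True
    return derived

rules = [
    Rule("tweety_flies", ("tweety_is_bird",),
         exceptions=("tweety_is_penguin",)),
]

# Default applies: a bird plausibly flies.
print("tweety_flies" in infer({"tweety_is_bird"}, rules))
# Default defeated: the penguin exception blocks the rule.
print("tweety_flies" in infer({"tweety_is_bird", "tweety_is_penguin"}, rules))
```

The key design point is that conclusions are tentative: adding information (the penguin fact) removes a previously warranted conclusion, which is exactly the non-monotonicity that classical logic lacks.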

Speaker

Dr. Dave Raggett is a web pioneer with a lifelong interest in AI, gaining experience at the University of Oxford (PhD), the Machine Intelligence Research Unit at the University of Edinburgh, the Logic Programming Department at Imperial College, the Computer Science and AI Lab at MIT, and many years at HP Labs' knowledge-based programming department. He is now a member of W3C/ERCIM and has been involved in a succession of European projects. He founded W3C's Cognitive AI Community Group and is driving ongoing work on human-like AI. He holds an honorary professorship at the University of the West of England.

External Resources

Paper: Dave Raggett, "Defeasible Reasoning with Knowledge Graphs", 2023. http://arxiv.org/abs/2309.12731

Slides: https://www.w3.org/2024/02-Raggett-lotico.pdf

Demo: plausible reasoning and argumentation, https://www.w3.org/Data/demos/chunks/reasoning/

Event Categorization

Please be advised that this presentation does not address existing or emerging W3C Semantic Web standards. Instead, it provides a critical review and proposes an alternative strategy to address perceived shortcomings of current mainstream approaches to knowledge graphs, along with opportunities for neurosymbolic approaches using novel neural network architectures inspired by what we know about human cognition.

For Dave, neural networks and vector spaces cut through the barriers to scaling up knowledge-based systems. Just as neural networks have blown away symbolic approaches to natural language translation, hand-crafted ontologies will always be impoverished compared with the richness and subtlety of approaches centred on machine learning and collaborative knowledge engineering. Today's generative AI is just the starting point, and there are many exciting opportunities for young researchers!

I recommend attending this session if you already possess a good understanding of existing standards and design patterns and would like to entertain new concepts that may influence the trajectory of future developments. It is an invitation to discuss concepts like PKN and defeasible reasoning directly with a leading domain expert.

Session-Type: Reasoning - Web of Data - Alternative Concepts - Critical Review - Research
Session-Level: Intermediate 
Session-Language: English