The Role of Symbolic Knowledge and Defeasible Reasoning at the Dawn of AGI Transcript

From lotico

Transcript for the Lotico event "The Role of Symbolic Knowledge and Defeasible Reasoning at the Dawn of AGI".

Dave Raggett: There we go. So I'm going to talk very briefly about W3C, the evolution of ICT systems, the limitations of logic, and the work I've been doing on defeasible reasoning and argumentation. I'll then talk a bit more about symbolic AI and its limitations, generative AI and its limitations, and then work beyond that towards artificial general intelligence and the work that I'm starting on what you might call future neural networks, and why we'll continue to be working with a combination of symbolic and non-symbolic approaches.

So, I work for ERCIM, which is the European partner for W3C. ERCIM basically fosters collaborative work in the European research community, and I've been part of many European projects. We also seek to increase cooperation with industry, and you can find out more about what ERCIM members are doing from ERCIM News. I won't say much more about W3C because I guess most of you already know it.

So, with respect to the evolution of systems: much of industry is still using relational databases, and these are costly to adapt to meet evolving needs once you've come up with the initial design of the database. Graph data is more flexible, but it is still the case that the business logic is embedded in the application code. Advances in AI will allow natural language and cooperative problem solving (we're starting down that path at the moment), lowering the cost and increasing the flexibility of business processes, so it's quite a dramatic shift.

I'm going to first talk about the limitations of logic and deductive proof, which relates to reasoning as it has been studied since the days of ancient Greece. Logic deals with the mathematical entailments of what is held to be true, and assumes perfect, unchanging knowledge. But it isn't applicable to knowledge that is uncertain, context sensitive, imprecise, incomplete, inconsistent and changing, i.e. imperfect knowledge, which is typically the case for everyday knowledge, particularly for individuals or individual businesses: you're learning things all the time and you have to adapt to new things. Defeasible reasoning is much broader than logic, forms the basis for legal arguments, ethical arguments and everyday discussions, and we should embrace the challenge. Deductive proof is replaced by defeasible reasoning, with arguments for and against a supposition. Strict rules logically entail their conclusions, but defeasible rules create a kind of presumption in favor of their conclusions, which may need to be withdrawn as you learn new things, like when new evidence is presented to a court. Arguments in support of, or counter to, some supposition build upon the facts in the knowledge graph or the conclusions of previous arguments, and the preferences between arguments can either be stated directly or derived from the preferences between the rules, with additional considerations in respect of consistency.

We can then have arguments and counterarguments. Counterarguments can be broadly classified into: undermining another argument, where the conclusions of the former contradict the premises of the latter; undercutting another argument, by casting doubt on the link between its premises and its conclusions; and rebutting another argument, where the respective conclusions can be shown to be contradictory.

On argumentation theory, I think a good introduction is given in the Stanford Encyclopedia of Philosophy, which lists different types of arguments involving deduction, induction, abduction, analogy and fallacies; actually I think that's a little weak, but I'll come back to that later. The study of argumentation goes all the way back to ancient Greece, with Carneades and Aristotle, but more recently, in the last century, you had Frege, Hilbert and Russell, primarily interested in mathematical reasoning and logic-based argumentation. Stephen Toulmin subsequently criticized the presumption that arguments should be formulated in purely formal terms. Walton extended this to cover wider-ranging arguments and compiled an interesting set of argument schemes, and then others like Hahn and Oaksford applied Bayesian techniques, i.e. statistically based inferencing. At the University of Dundee there is AIF, an ontology intended to serve as an interlingua between different argumentation formats. One of the people I particularly respect is Alan Collins, who applied a more intuitive approach to plausible reasoning that takes qualitative, sub-symbolic metadata into account in lieu of statistics, and he inspired my work on the Plausible Knowledge Notation (PKN) within the Cognitive AI group. Again, as I just said a moment ago, the expectation is arguments in favor of, or counter to, some supposition. Let me explain this in a bit more practical detail.
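To make the three kinds of counterargument concrete, here is a minimal sketch in Python. This is a toy model I've written to illustrate the classification, not the demonstrator's implementation; in particular, the "not X" convention for contradiction is my own assumption.

```python
# Toy classification of counterarguments (illustrative only, not the
# actual reasoner). Contradiction uses a crude "not X" string convention.
from dataclasses import dataclass

@dataclass
class Argument:
    name: str
    premises: list      # statements the argument relies on
    conclusion: str     # statement the argument presumptively supports

def contradicts(s1, s2):
    return s1 == "not " + s2 or s2 == "not " + s1

def classify_attack(a, b):
    """How does argument a attack argument b, if at all?"""
    # Undermining: a's conclusion contradicts one of b's premises.
    if any(contradicts(a.conclusion, p) for p in b.premises):
        return "undermining"
    # Rebutting: the respective conclusions are contradictory.
    if contradicts(a.conclusion, b.conclusion):
        return "rebutting"
    # Undercutting would question the inference link itself, which this
    # toy model does not represent.
    return None

a1 = Argument("a1", ["robins are birds", "birds fly"], "robins fly")
a2 = Argument("a2", [], "not robins fly")        # rebuts a1
a3 = Argument("a3", [], "not birds fly")         # undermines a1
```

Here a2 rebuts a1 (contradictory conclusions), while a3 undermines it (contradicting one of its premises); a real system would also track the preferences between arguments described above.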
One simple case of plausible inference: you have a class and a subclass, with properties and relationships applying to both. If you have some properties or relations associated with the class as a whole, then maybe they also apply to its subclasses; that's specializing. But you can also go the other way: robins are a stereotypical example of songbirds, so you can generalize properties from the subclass of robins to birds in general. So there is both specializing and generalizing, and the expected certainty of such inferences is influenced by qualitative metadata. I won't have time to go into the details, but these come under names such as typicality, similarity, strength, dominance, multiplicity and scope; more details are given in the Cognitive AI group's spec.

We also have implications, like "if it is raining then it is cloudy", which is generally true. But you can also use implications in reverse, because you have some rough prior understanding of approximately how likely it is to be rainy when it is cloudy. Strictly speaking, in logic you can't do that, but with a more statistical or qualitative approach you can, based on prior knowledge. Then we have the role of analogies, which match structural relationships to suggest properties or ways of solving problems based on those matches. And then there's the work by Zadeh and others on fuzzy logic and fuzzy sets, where you have fuzzy ranges such as cold, warm and hot, and fuzzy modifiers like "very" as in "very old", again with multiple lines of argument for and against a premise.

I've put together a web-based demonstrator in JavaScript as a proof of concept. There's a large collection of examples you can select from the drop-down at the top, and then there are some controls.
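The specializing and generalizing inferences above might be sketched like this. This is a toy model: the little knowledge base, the per-hop discount, and the typicality weights are invented stand-ins for PKN's qualitative metadata, not the actual notation.

```python
# Toy sketch of plausible inference over a class hierarchy (assumed
# structure and numbers, not the PKN implementation).

kb = {
    "subclass": {"robin": "songbird", "songbird": "bird"},
    "property": {"robin": {"sings": 1.0}, "bird": {"has-feathers": 1.0}},
    "typicality": {"robin": 0.9, "songbird": 0.7},  # how typical of its parent
}

def specialize(cls, prop):
    """Inherit a property from an ancestor class; certainty decays per hop."""
    certainty, cur = 1.0, cls
    while cur is not None:
        value = kb["property"].get(cur, {}).get(prop)
        if value is not None:
            return certainty * value
        certainty *= 0.9                  # assumed per-hop discount
        cur = kb["subclass"].get(cur)
    return 0.0

def generalize(subcls, prop):
    """Project a subclass property onto its parent, scaled by typicality."""
    value = kb["property"].get(subcls, {}).get(prop, 0.0)
    return value * kb["typicality"].get(subcls, 0.5)
```

So a robin plausibly has feathers (inherited from bird, at reduced certainty), and since robins sing and are typical songbirds, birds plausibly sing too, at a certainty scaled by typicality.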
The "effort" checkbox asks the system to work a little harder and seek indirect evidence even when direct evidence is found, and the "trace" checkbox lets you see the reasoning in action as the system works backwards from the supposition to the facts. The output goes the other way: it uses the trace of the reasoning, in reverse, to generate an explanation. In this case the supposition was "the flowers of England include daffodils", and the system found two lines of argument to support that; there was no counter-evidence that the flowers of England exclude daffodils (includes and excludes here in the sense of set membership). Below that you can see an example of a graph using some syntax highlighting, and there's a link to the slides, which have also been made available, with a whole bunch of examples.

Let me take a little more time on this. The top one, "climate of Belgium includes temperate", is an example of a property, Belgium having the property climate. Another example below that uses includes and excludes, and "rose kind-of temperate-flowers" is an example of a relation. Dropping down a bit further, we have "flow increases-with pressure for plumbing", which is a relationship scoped to a context, in this case plumbing; the one below it is scoped to a circuit. Then we have "flow is to pressure as current is to voltage", an analogy, and that supports the statements above. We can also have variables, the typical fill-in-the-blank questions that are put to school children. Then we have an implication, which I've just talked about: if it's rainy then it's cloudy. Forwards it has a strong certainty; in reverse it's a lower certainty.
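The reverse use of an implication can be read in Bayesian terms, as mentioned above. A tiny sketch, with made-up numbers:

```python
# Using an implication in reverse with rough prior knowledge.
# Forward: P(cloudy | raining) is strong. Reverse: Bayes' rule gives a
# weaker estimate of P(raining | cloudy). All numbers are invented.

p_rain = 0.2                  # prior belief that it is raining
p_cloudy = 0.5                # prior belief that it is cloudy
p_cloudy_given_rain = 0.95    # the forward implication: rain -> cloudy

def reverse_implication():
    # P(rain | cloudy) = P(cloudy | rain) * P(rain) / P(cloudy)
    return p_cloudy_given_rain * p_rain / p_cloudy
```

The forward certainty stays high while the reverse inference comes out much weaker, which matches the intuition in the talk.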
Then there are some examples that relate to ontological models. "Younger than" is equivalent to "less than" for age, which allows you to reason in terms of "younger than" by reference to the equivalent ages. We have ranges: age spans infant, child and adult, a set of fuzzy terms, here in the context of a person. We can then talk about the characteristics of these terms, such as the age of an infant being between birth and four, for a person. We can also describe domains, as in "John loves chess", with John as the subject and chess as the object, so we can use properties to describe the domains. Then, almost penultimately, we have some queries: "?x" is a variable, so you can ask "which ?x where ..." and so forth, and we also have a few other quantifiers such as "count" and "few". Finally, at the bottom, there's a statement about statements, in this case reported speech: Mary believes that John's statement "John loves Joan" is a lie. I don't think I've got time to go into the syntax in much detail unless the question comes up at the end, but this slide gives a railroad view of the syntax, and there's a conventional BNF syntax in the specification.

There are context-dependent relations, like "Belgium is similar to the Netherlands for latitude"; fuzzy ranges and context sensitivity, so you can define age with fuzzy terms that depend on whether you're a child or an adult; fuzzy modifiers, like "Paul is a close friend of John", where "close" is acting like an adjective; and fuzzy quantifiers involving set comparisons, like "few x where color of x includes yellow, from kind rose", where you evaluate the set against your knowledge base, see which roses are yellow, and see whether that justifies "few". Then we have "what if" abductive reasoning.
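A minimal sketch of fuzzy terms, a fuzzy modifier, and a "few" quantifier. The membership curve and the threshold for "few" are my own assumptions, chosen only to illustrate the idea:

```python
# Toy fuzzy terms and quantifier (curves and thresholds are invented).

def membership_warm(temp_c):
    """Triangular membership for 'warm', peaking at 20 C."""
    return max(0.0, 1.0 - abs(temp_c - 20.0) / 10.0)

def very(mu):
    """Fuzzy modifier: 'very' sharpens a membership value (Zadeh's convention)."""
    return mu ** 2

def few(members, predicate):
    """Crude fuzzy quantifier: 'few' holds for a small, non-zero fraction."""
    fraction = sum(1 for m in members if predicate(m)) / len(members)
    return 0.0 < fraction <= 0.3   # assumed threshold

roses = [{"name": "r1", "color": "yellow"},
         {"name": "r2", "color": "red"},
         {"name": "r3", "color": "red"},
         {"name": "r4", "color": "white"}]
```

With this knowledge base, "few roses are yellow" is justified (one in four), while "few roses are red" is not (half of them are).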
Abductive reasoning serves imagination, planning, "what if" questions, understanding intent, the modeling of stories, reported speech and so forth, with named and unnamed collections of statements; an example of a query here is "which ?x where Joan said ?x likes tea". We also have reasoning over related ontologies. Climate seemed a good example, because there are many different terminologies for describing climates, and each term is associated with typical weather patterns. For example, cities like Shanghai, Buenos Aires, Sydney and Hong Kong have a so-called "China climate", with mild winters and humid summers of tropical rain. This points to the potential for using defeasible reasoning: there's no one way to relate the different sets of terms, and it's more about searching for support for, or counter to, a supposition with varying degrees of certainty. So when you have ontologies using different terminologies, defeasible reasoning becomes very useful.

Wrapping up this part of the talk: further work is now needed on an intuitive syntax for reasoning strategies and tactics, as well as ways to model pathos, the role of feelings and emotions as part of compelling arguments. This builds upon the established principles for effective arguments, or rhetoric, dating back to Aristotle. He introduced the terms ethos, for establishing credibility ("I'm an expert in this field", or you cite somebody else and claim they are an expert); pathos, using emotion to stir people's feelings; logos, using logic to emphasize the rational support for your argument; and kairos, which is about an argument being opportune and topical in its timing, using things people can easily relate to; and finally there is the use of rhetorical questions to strengthen the support in people's minds. So to make progress on this, it is a question of gathering
use cases and a suite of example arguments, which will then allow us to figure out better ways of expressing the tactics and strategies.

I'll now switch gear and talk about cognitive AI. Cognitive AI can be considered as modeling human intelligence: thinking and problem solving, human memory, learning, language, perception and attention, but also feelings and emotions. So what can the cognitive sciences tell us? First of all, cognitive science is the interdisciplinary study of the mind and its processes, broken down into a whole bunch of fields including linguistics, psychology, neuroscience, philosophy and anthropology. There have been decades of work in the cognitive sciences on understanding the mind, how we learn, the kinds of mistakes we make and how long we take to make them, and this can provide deep insights for work on neural networks, involving a mix of symbolic and sub-symbolic models. I'm particularly impressed by the work over many decades by John Anderson on ACT-R (I'll talk about how we implemented that shortly), Alan Collins on plausible reasoning, Dedre Gentner on analogical reasoning, Lotfi Zadeh on fuzzy reasoning, and Lakoff and Johnson on metaphors.

When it comes to generative AI and language models, I think it's important to look at how people process language. The short summary of human language processing is: sequential, hierarchical and predictive. You can gather evidence by looking at what people are looking at when they read (the saccades of the eyes across the text), the buffering limitations of the phonological loop (which holds a few words, not thousands of words), and priming effects, where word-sense selection is based on the nearby words; and of course you can do a brain scan of which areas of the brain are active. We appear to have bottom-up processing of the sounds and syllables before we
process the words and sentences, sequential with limited overlap in processing, but also top-down processing using context and prior knowledge, so processing is both hierarchical and predictive; there's a link to a relevant paper on the slide.

This is probably too complicated to explain in full, but this is my high-level architecture for artificial minds inspired by the brain. We have the cortex as multiple specialized graph databases with associated algorithms, with semantic integration across the senses; different parts of the cortex specialize in different aspects, like the visual, the auditory or the logical. Then we have multiple cognitive circuits connecting to that, a kind of blackboard model: perception, System 1 and System 2, and action. The color scheme is meant to match the picture above: the limbic system and the basal ganglia in the center of the brain hook up to the cerebral cortex on the outside, whereas action is controlled in real time by the cerebellum, which actually has more neurons than the cortex, packed very closely together. For actions you want to delegate, the actions are initiated under conscious control, leaving the mind free to work on other things. An example is playing a musical instrument, where you can't really think about your finger placements explicitly because it would be too slow; the cerebellum provides real-time coordination of all the different muscles involved, using information from the cortex and your eyes.

I'm a fan of Keith Stanovich's tripartite model of mind. It starts at the bottom with the autonomous mind and Type 1 processing, which is fast and opaque, like recognizing a cat in a photograph or understanding a traffic sign when driving a car. Then we have Type 2 processing: slow, deliberative and open to inspection, like mental arithmetic, formed by chaining Type 1 processes together using working memory. You can divide it into the algorithmic mind and the reflective mind, where the reflective mind is thinking about thinking.

Chunks and rules is basically inspired by the work of John Anderson. The idea is a rule engine in the basal ganglia that connects to different parts of the cortex, where bundles of nerves correspond to buffers, each holding a single chunk, that is, a set of property-value pairs; you can think of it like a vector-space model. The rules operate on these buffers and can invoke actions in the cortex itself, including delegating actions such as driving a robot. There are demos for smart homes and factories, again web-based, and the syntax for this is a lot simpler than for PKN; there's an example here of instructing a robot to move.

Now to the limitations of symbolic AI. Symbolic AI is generally handcrafted, and as a result its representations are impoverished compared to the subtle context sensitivity and imprecision that the real world presents; this limitation in how we model things leads to problems in practical use. Moreover, it makes systems expensive to develop and maintain, so it's very hard to scale symbolic AI up in a meaningful way; you can have lots of triples, but that's not quite the same thing. There have been recent successes for generative AI which show that computers can be very much better at knowledge engineering than we are, so we have to ask: are we wasting our time on symbolic AI? I'll come back to that later in the talk.

Now to the limitations of generative AI as it is today. There's the astonishing ability to learn billions of parameters in complex networks with backpropagation, and an amazing capability to deal with text and images; many of the images
in this talk are from generative AI. We also have ways of using chain-of-thought plus reinforcement learning, providing very compelling behavior, and prompt engineering is a valuable new skill, though I think these models will soon be able to craft good prompts for us, so maybe that isn't something to focus on too much. But generative AI is prone to distractions and hallucinations, and weak at logical reasoning and semantic consistency. The top picture on the right is from Stable Diffusion, and it looks generally good until you look at the hands: it's clearly a bit confused about how many fingers people have. The one below it is from DALL-E: I asked it to draw a picture with three red balls, two blue cubes and a wooden floor, and it completely failed. Below that is a ChatGPT example: if you ask whether 1 kilogram is heavier than 2 kilograms, it correctly answers no; if I then distract it by asking whether 1 kilogram of lead is heavier than 2 kilograms of feathers, it gets distracted, because feathers are lighter than lead, and it gives the wrong answer. So it's very easily distracted, and when using these systems this level of distraction and hallucination is a real problem. There's also a lack of continual learning and temporal memory, foundation models are very expensive to train, and all of this is very different from the human brain: more alchemy than science at this stage, but it's early days yet.

Language models are essentially feed-forward: you put in the prompt, it's fed through a neural network, and it generates a response; statistical prediction, in other words. The text is encoded as a sequence of tokens, as vectors in an embedding space, and it uses mechanisms, in particular transformers, to ensure long-range attention and hierarchical dependencies. The network parameters are trained using backpropagation with an error function based on masking tokens. There's no short-term
memory and no continual learning, and moreover these models process thousands of text tokens in parallel, very unlike humans. As for what's going on inside the box: we know that the outer layers deal with lexical information, then grammatical information, and in the depths the model is dealing with semantics and pragmatics. Near the edges we have things like parts of speech and word senses, using word neighborhoods to resolve them, but the knowledge itself is expressed in a very opaque way; we don't actually understand how the knowledge is represented, and the model relies on attention as a surrogate for semantics. The top and bottom layers are closely related to the word tokens, and the middle layers to semantics and pragmatics, so somewhere in the middle there are vectors representing the whole meaning of the prompt.

We can inject a prompt before the user's part of the prompt, the context prompt, as a way of instructing the language model about what we're hoping for in the response. Likewise, if we want a longer dialogue, we can copy the response back into the prompt, which provides a kind of short-term memory in lieu of the lack of one. Then there's prompt engineering: good prompts give good responses. There are many different kinds of prompts, but generally speaking you specify what you want and provide a few examples, and we have things like chain-of-thought prompting to elicit sequential reasoning. There's an example on the right from one of the key papers, and you can see in the highlighted text that if you provide more information about the steps, the system is less likely to get it wrong; rather like a child, who may guess the answer under pressure, but if you show them how to work it through and give them a little more space, they can get the right answer; they can learn to think it through properly.
Then you also have the problem of adversarial attacks: crafted prompts that bypass the safety measures, so there is a risk of these systems producing something you really don't want them to. And we can now start to use these language models to craft expert prompts; it's worth giving it a try: if you use Bing or ChatGPT, ask it to generate a prompt for images, and it'll probably generate much better prompts than you would have thought of yourself.

Then there's Retrieval-Augmented Generation. Language models are trained once, so their knowledge is static, and it's very expensive to retrain them; they also have difficulty generating citations, because the knowledge is embedded in the parameters. The workaround is to query a knowledge graph to obtain a list of relevant sources and citations, inject that into the context prompt, and include an instruction to generate links for those references. This has the advantage of avoiding retraining the language model, and it also avoids the need to include sensitive information in the language model itself, which might leak out at the wrong moment; that's particularly important for commercial applications, of course. And even if you're using a knowledge graph or other collections of information, it can still be very handy to use a vector-based index for text and images, as a way of allowing the system to figure out from the prompt which things you're most likely to be interested in.

Now I'm going to switch tack again and think about artificial general intelligence. There are various ways of defining what we mean by that, but here's mine: creativity and problem solving; the ability to create, generate and adapt plans as needed; a good grasp of commonsense knowledge and skills, and of cause and effect; defeasible reasoning; and an understanding of human values and feelings.
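The retrieval-augmented generation workaround described above might be sketched like this. The sources, the ranking function (naive word overlap standing in for a vector index), and the prompt wording are all invented for illustration.

```python
# Sketch of RAG over toy in-memory sources; the ranking is a crude
# stand-in for a vector-based index, and the sources are invented.

sources = {
    "s1": "Daffodils are among the flowers of England.",
    "s2": "Belgium has a temperate climate.",
}

def retrieve(query, k=1):
    """Rank sources by naive word overlap with the query."""
    def overlap(text):
        return len(set(query.lower().split()) & set(text.lower().split()))
    ranked = sorted(sources, key=lambda sid: overlap(sources[sid]), reverse=True)
    return ranked[:k]

def build_prompt(question):
    """Inject retrieved sources into the context prompt, with an
    instruction to cite them."""
    ids = retrieve(question)
    context = "\n".join("[%s] %s" % (sid, sources[sid]) for sid in ids)
    return ("Use the sources below and cite them by id.\n" + context +
            "\nQuestion: " + question + "\nAnswer:")
```

The model then answers from the injected sources and can cite them by id, which avoids retraining and keeps the sensitive data out of the model's parameters.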
Also: continual learning, with models of the past, present and future; replacing prompt engineering by learning from the kinds of responses that most people prefer; reflective cognition, using models of the agent's goals and its performance in carrying them out, and likewise those of its users and other people, i.e. theory of mind; and the ability to explain itself in terms we can easily appreciate, which may vary from one person to the next and needs to be appropriate. Then there's adherence to the values we demand of these applications: we don't want them to give racist, sexist or inflammatory responses, and we want AI agents to be unambiguously artificial agents, not confusable with humans.

We can then start applying this to things like smarter robots and self-driving cars that are resilient to the unexpected. We can't really do that right now: although there's a lot of talk about self-driving cars, it's easy to teach them 80-90% of what's needed, but the last few percent, with all the special cases, becomes extremely hard. Then there are tools for boosting human creativity and effectiveness, leading to better productivity for a prosperous society, if we share the benefits, and trusted personal agents. This is my view: if you've heard of Solid, I think AI will go much further and be much richer than the kind of approach proposed for Solid, and will help us deal with a complicated world, looking after privacy, finances and health, and also helping to counter cyber-security threats, disinformation, conspiracy theories and harmful content on social media. And I believe AGI, if we understand argumentation well enough, could one day win arguments with politicians and lawyers, leading to stronger democracies and better laws, doing so through in-depth access to knowledge, including which arguments will best convince people emotionally and intellectually; out-thinking people in that respect.

In the last section of the talk I'm going to focus on work
on future neural networks. One of the biggest considerations is how to integrate episodic memory into neural networks, to enable a mix of Type 1 and Type 2 processing, along with a cognitive operating system. By that I mean: if you've got a system which is very flexible and can think about many different things, you want it to be thinking about useful things, so how do you manage the time allocation for competing tasks, akin to a mental operating system? That clearly relates to concepts around emotions and feelings. Then there's reflective cognition, along with episodic memory, to support situational awareness, including self-awareness and self-assessment with respect to executing high-level goals: how well am I doing on this, should I switch to a different approach? It's easier to discuss sentience in that sense, and I prefer to avoid discussing consciousness in general, which is harder to define; there are also the philosophical questions around the so-called hard problem of consciousness with respect to subjective experience. However, if all experience reduces to information processing within systems of neurons, artificial or otherwise, that question is a non-issue for artificial agents. Then there's the need for continual learning guided by reflective cognition, inspired by human learning; in other words, backpropagation is just the beginning, not the end.

Then: how does the brain actually make memories? Episodic memory is associative memory, a kind of record like a personal diary, holding temporal relationships that allow you to recall past experiences in sequence, while encyclopedic memory is time-independent facts, like birds fly and dogs bark. Episodic memories are consolidated into the neocortex from initial modeling in the hippocampus. So what can we learn from how the human brain works?
Episodic memory supports imaginative thinking, abductive reasoning, creating and updating plans, reasoning about cause and effect, and inferring another agent's intent and state of mind, which is critical if you want to collaborate with somebody.

I also think in terms of lowering the hurdles for researchers. Large language models with billions of parameters are very expensive to train, prohibitive for many researchers like myself, and this is a barrier to work on innovative new network architectures. The solution is to use smaller datasets and fewer parameters: rather than trying to build a large language model, we can investigate architectures using smaller models with fewer parameters and fewer, smaller datasets, with the datasets clearly chosen to support the research aims, such as continual learning, episodic memory and reflective cognition. Then there are machine-generated datasets. There's talk about machine learning from machine-generated data being very dodgy, but in the right context it can be very useful: Microsoft's TinyStories, for example, was generated using a large model like GPT-4 to produce a smaller dataset better suited to investigating these smaller models. We can also use knowledge graphs with stochastic rules to generate data, and finally there are handcrafted examples.

Then there are different ways to learn. I talked about backpropagation, but one way is observation, where you essentially look at lots and lots of data and try to figure it out; small children and babies are really very observant. Then there's instruction, where you go to school and get taught, and then there's experience, where you try things out for yourself. The idea is to evaluate different designs, select the best for scaling up, and then make the case for a bigger budget.

Then there's continual learning: today's generative AI suffers from catastrophic task interference.
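The idea of generating a small training corpus from a knowledge graph with stochastic rules might look like this. The triples, templates and sampling scheme are invented examples, not any particular published pipeline.

```python
# Sketch of machine-generated training data: sample triples from a tiny
# knowledge graph and render them through randomly chosen templates.

import random

triples = [("robin", "is-a", "bird"),
           ("bird", "can", "fly"),
           ("dog", "can", "bark")]

templates = {
    "is-a": ["A {s} is a kind of {o}.", "Every {s} is a {o}."],
    "can":  ["A {s} can {o}.", "{s}s are able to {o}."],
}

def generate(n, seed=0):
    """Sample n sentences: pick a triple, then a template, at random."""
    rng = random.Random(seed)       # seeded for reproducible datasets
    out = []
    for _ in range(n):
        s, rel, o = rng.choice(triples)
        out.append(rng.choice(templates[rel]).format(s=s, o=o))
    return out
```

Scaling the graph and template set up gives a controllable corpus for probing small models on specific research aims, in the spirit of TinyStories.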
Learning a new task dramatically degrades competence on previously learned tasks. There are limited workarounds, such as transfer learning, also referred to as fine-tuning, where you taper things off: some sets of weights you hold fairly rigid, while others you allow to change. Then there's a bunch of solutions that have been partially investigated: weight regularization, sparse network connections, lateral inhibition, and self-assembling neural networks, because in the brain there's much more plasticity in the neural connections in the cortex than people originally thought. There's the idea of allocating tasks to new models, in other words using different neurons for different tasks, and then there's learning how to learn, i.e. meta-learning, and combining genetic algorithms with dynamic connections, which is what Rob was referring to.

Then there's giving AI agents dynamic access to models of past, present and future, so they can learn across time, with memory at very different timescales: the long-term memory of the cortex in the human brain, short-term memory in the hippocampus, and working memory as the activation levels, where your brain is lit up at the moment. There's perception-related memory too, going back to Baddeley and Hitch, the phonological loop and the visuospatial sketchpad. This is something you can observe for yourself: how long can you recall something you've just read if you look away? Likewise, if you've looked at some scene or picture, look away and try to figure out what was in it, then look back and you can see how little you actually retained; you can test yourself using different delays. Situational awareness requires detailed short-term memory, but if you want to learn patterns across many different episodes, you want to avoid undue emphasis on the most recent events and treat all events more fairly, regardless of how old they are.
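The transfer-learning workaround mentioned above, holding some sets of weights rigid while letting others change, can be sketched in a few lines. The weight groups, values and learning rate are illustrative, not from any real model.

```python
# Sketch of fine-tuning with frozen weight groups (illustrative numbers).

weights = {"early_layers": [0.5, -0.2], "task_head": [0.1, 0.3]}
frozen = {"early_layers"}        # weight groups held rigid during fine-tuning

def sgd_step(grads, lr=0.1):
    """Apply a gradient step only to the unfrozen weight groups."""
    for group, g in grads.items():
        if group in frozen:
            continue             # skip rigid weights: mitigates forgetting
        weights[group] = [w - lr * gi for w, gi in zip(weights[group], g)]

sgd_step({"early_layers": [1.0, 1.0], "task_head": [1.0, 1.0]})
```

After the step, the early layers are untouched while the task head has moved, which is the basic mechanism behind this partial workaround for catastrophic interference.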
they are. And so there's an analogy here with the difference between the hippocampus and the neocortex.

One thing, as I said, is that current language models are very much feed forward, but in the human brain feedback pathways are much more numerous than feed-forward pathways, and there's some evidence that the brain uses far fewer layers than current generative AI, which is very interesting; we can learn from that. So there's the idea of latent semantics, in the form of the activation levels of the artificial neurons, working as working memory. Current language models use very wide networks, with many thousands of text tokens, and are purely feed forward. Why don't we instead limit the encoder-decoder width and use feedback from the latent semantics to lower layers, mimicking human language processing? What kind of feedback? There's retained state, which is the easier one to implement, and then continuous feedback, where you have dynamic processing. So there are plenty of design choices for researchers to study: exactly how you provide this kind of feedback, and whether the existing form of Transformers is integral to such feedback or some other similar approach works better. If you're using dynamic feedback, then you need to ensure strong attractors so that the system stabilizes quickly, and this has implications for deep learning.

Then we have the idea of heterogeneous neural network architectures, with different kinds of architecture for different parts of the system, rather than one broadly homogeneous neural network. For language models, human language processing is sequential, hierarchical and predictive, with a one-to-two-second capacity, and you can implement this as a small sliding window over text tokens, with long-range attention replaced by attention to the latent semantics fed back from deeper layers. The retained feedback can then be blended with the input from the lower layer. This can be informed by mathematical models of how knowledge is represented in vector spaces, which, to be honest, we're still at a fairly early stage of understanding. It's better to allow the computers to work out the details rather than us trying to work them out, but we have to make their life easier.

Then we have additional neural modules. There's the idea of sequential cognition, this type-two cognition, and the neural equivalent of a chunk rule engine: a feed-forward network that operates on the latent semantics, i.e. the layer in the middle of your language model, and corresponds to production rules with conditions and actions. You train it on examples of reasoning and stepwise tasks using deep reinforcement learning, with the working memory corresponding to the deepest layer in the language model. The idea is that this rule engine, as it were, operates on the values retained from the previous processing step and then updates those values.

Then we have declarative memory, that is, episodic memory and encyclopedic memory, with operations on it like the CRUD operations, create, read, update and delete, on a vector database, plus additional operations as needed. This should mimic the human forgetting curve and stochastic recall in the brain: the forgetting curve is essentially how we learn to remember the things which proved most useful in past experience, discarding the things which weren't that useful. And we have the idea of episodic memory recording salient details from a snapshot of working memory, based on some similarity metric like the cosine metric to
determine when to snapshot, supplemented with relationships between episodes, again inspired by research on the brain.

Then, pretty much wrapping up, the final section here talks a bit about semantic interoperability, which is about knowing that we understand each other. You've got the two robots here: to communicate effectively they need to be able to understand each other and to know that they understand each other. And by the way, generative AI lacks semantic consistency, as you can see from the fact that the system failed to provide any support for the bodies from the legs. People keep written records when they don't want to rely on fallible memory, and the same applies to businesses. Everyday language isn't good enough when we need to be sure of mutual understanding, for example in a business contract between a supplier and a consumer, which makes use of standardized terms and legal language. For technical exchanges we need structured data with agreed data models and semantics, and that relies on symbolic representations, so we'll continue to need these as we make greater use of AI. AIs will need to ensure that they can understand each other as well as understanding us. Knowledge graphs can then be seen as an evolution of databases with standardized vocabularies, and obviously we work in groups like W3C to create those standards.

I think this leads to the idea of collaborative knowledge engineering. Handcrafting knowledge graphs and rule sets is difficult and time-consuming, which makes it hard to scale up, while self-guided machine learning of neural networks is much easier to scale up but suffers from a lack of transparency: the knowledge is buried in the network parameters in ways which are inscrutable. So how can we use AI for collaborative knowledge engineering, with a human partner working together with an artificial agent, as indicated in the picture on the right? The agent directly manipulates the knowledge graphs and rule sets, but it's guided by the human partner using natural language and other means. Together they can create datasets for new or updated use cases, and by curating those datasets the agent can apply machine learning to update the rules as needed, as ontologies are revised to deal with new use cases, along with versioning to support old and new applications. Here again you have an example of the lack of causal understanding in the model, because the tablet is sort of floating in the air.

Then I have an architecture for neurosymbolic cognitive agents, illustrated in detail, which explains how we can connect neural network systems to external systems such as sensors and actuators, but also to services, and to things like cameras and so on for multimodal interaction. This is very much at an early stage, but it means you could combine neural network based reasoning with symbolic reasoning externally.

So, in summary, wrapping up: we've talked about expanding from logic to argumentation, to be able to express the complexities of the real world. Help is wanted: we're looking for help with argumentation use cases and examples, so that computers can be a lot more useful in the outputs they provide. Machine learning beats handcrafted knowledge, so we should definitely be taking advantage of that. I think there are plenty of research opportunities for researchers at universities, on a relatively low budget, working on human-like AGI. There's an ongoing role for symbolic AI for semantic interoperability, and I'm very keen to find people who can help with the mathematical foundations for neural networks: if you want to be able to design and evaluate these sorts of new architectures, a really strong mathematical foundation will be really important. So please get in touch if you can help in these different areas. And then finally, I'm open to questions and comments, so I'll open the floor.
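[Editor's note: Dave mentions generating small training sets from knowledge graphs by applying rules to them. A minimal sketch of that idea, where the toy triples, the transitivity rule, and the text template are all illustrative assumptions, not material from the talk:]

```python
# Sketch: derive a small supervised dataset from a toy knowledge graph
# by applying a transitivity rule, then rendering each fact as text.
# The graph, rule, and template are illustrative stand-ins only.

triples = {
    ("Bordeaux", "located_in", "France"),
    ("France", "located_in", "Europe"),
    ("Kyoto", "located_in", "Japan"),
}

def apply_transitivity(facts):
    """located_in is transitive: (a in b) and (b in c) implies (a in c)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (a, r1, b) in list(derived):
            for (b2, r2, c) in list(derived):
                if r1 == r2 == "located_in" and b == b2:
                    new = (a, "located_in", c)
                    if new not in derived:
                        derived.add(new)
                        changed = True
    return derived

def render(fact):
    subject, _, obj = fact
    return f"{subject} is located in {obj}."

# Three hand-written triples yield four training sentences once the
# rule has closed the graph.
dataset = sorted(render(f) for f in apply_transitivity(triples))
print(dataset)
```

The same pattern scales to any rule that can be phrased as a closure over triples, which is what makes rule-derived data cheap to curate compared with fully handcrafted examples.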
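[Editor's note: the fine-tuning workaround Dave describes, holding some sets of weights rigid while others are allowed to change, can be sketched as follows. The two-parameter "model" and the gradients are toy stand-ins, not a real network:]

```python
# Sketch: a gradient step that skips frozen parameters, the simplest
# form of the "hold some weights rigid" workaround for catastrophic
# task interference. All values here are illustrative.

weights = {"w_early": 0.5, "w_late": -0.2}
frozen = {"w_early"}          # keep early-layer weights rigid
learning_rate = 0.1

def sgd_step(weights, grads, frozen, lr):
    """Apply one SGD update, leaving any frozen parameter untouched."""
    return {
        name: (w if name in frozen else w - lr * grads[name])
        for name, w in weights.items()
    }

grads = {"w_early": 1.0, "w_late": 1.0}   # pretend gradients from a new task
weights = sgd_step(weights, grads, frozen, learning_rate)
print(weights)  # w_early is unchanged, w_late has moved
```

In a real framework the same effect is usually obtained by disabling gradient tracking on the frozen layers; weight regularization, which Dave also lists, instead penalizes movement away from the old values rather than forbidding it outright.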
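[Editor's note: one reading of the small-sliding-window idea, with retained state blended back in place of long-range attention, can be sketched as below. The scalar "embeddings", the window size, and the blend factor `alpha` are all illustrative assumptions; a real design would carry vectors through learned layers:]

```python
# Sketch: a small sliding window over token "embeddings" (scalars here
# for simplicity), with a retained state blended into each step as a
# crude stand-in for feedback from latent semantics.

def process(embeddings, window=3, alpha=0.7):
    """alpha controls how strongly the retained state persists."""
    state = 0.0
    states = []
    for i in range(len(embeddings)):
        chunk = embeddings[max(0, i - window + 1): i + 1]
        local = sum(chunk) / len(chunk)              # summary of the window
        state = alpha * state + (1 - alpha) * local  # retained-feedback blend
        states.append(state)
    return states

# Early tokens carry signal; the retained state lets it persist past
# the window's reach instead of being forgotten immediately.
states = process([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
print(states)
```

The design choice Dave flags, retained state versus fully dynamic feedback, corresponds here to whether `state` is simply carried forward as above or is itself iterated to a fixed point at each step, which is where strong attractors become necessary.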
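[Editor's note: the episodic-memory idea, snapshotting working memory when it has drifted far enough by a cosine test and mimicking the forgetting curve, can be sketched as follows. The threshold, decay rate, and vectors are illustrative assumptions:]

```python
import math

# Sketch: store an episode only when working memory is sufficiently
# dissimilar (by cosine similarity) from the last stored episode, and
# decay episode strength exponentially as a crude forgetting curve.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

class EpisodicMemory:
    def __init__(self, threshold=0.9, decay=0.1):
        self.threshold = threshold   # snapshot when similarity drops below this
        self.decay = decay           # forgetting-curve rate
        self.episodes = []           # (vector, retention strength) pairs

    def observe(self, vector):
        """Snapshot working memory if it has drifted from the last episode."""
        if not self.episodes or cosine(vector, self.episodes[-1][0]) < self.threshold:
            self.episodes.append((vector, 1.0))

    def tick(self):
        """One unit of elapsed time: retention decays multiplicatively."""
        self.episodes = [(v, s * math.exp(-self.decay)) for v, s in self.episodes]

mem = EpisodicMemory()
mem.observe([1.0, 0.0])      # first episode is always stored
mem.observe([0.99, 0.05])    # too similar to the last episode: skipped
mem.observe([0.0, 1.0])      # dissimilar: stored as a new episode
mem.tick()                   # strengths fade with time
```

Retrieval weighted by the decayed strengths would then favor episodes that are recent or repeatedly reinforced, which is the "remember what proved useful, discard the rest" behavior the forgetting curve describes.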

Marco Neumann: Okay, wow, thank you Dave.