Social signals
Evolutionary origins of language
Evolution and information
Simplicity Theory
Cognitive modelling of interest
Cognitive modelling of relevance
Cognitive modelling of meaning
Cognitive modelling of emotional intensity
Cognitive modelling of concept learning
Emergence as complexity drop
Qualia cannot be epiphenomenal
I showed that general learning procedures are bound to produce "good shapes" (in the sense of Gestalt theory). I also investigated a way of learning concepts (as opposed to skills) through argumentative discussion.
In previous work, we proposed to generate Causal Bayesian Networks (CBN) as follows. Starting from all possible relations, we progressively discarded non-correlated variables. Next, we identified causal relations from the remaining correlations by employing “do-operations”. The obtained CBN could then be used for causal inference. The main challenges of this approach were “non-doable” variables and limited scalability. To address these issues, we propose three extensions: i) early pruning of weakly correlated relations to reduce the number of required do-operations; ii) introducing aggregate variables that summarize relations between weakly-coupled sub-systems; iii) applying the method a second time to perform indirect do-interventions and handle non-doable relations. Our proposal reduces the number of operations required to learn the CBN and increases the accuracy of the learned CBN, paving the way towards applications in large CPS.
Human beings understand causal relationships through observations, actions and counterfactual reasoning. While data-driven methods achieve high levels of correlation detection, they largely fall short of finding causal relations, notably because they are limited to observations only. In this paper, we propose an approach to learning causal models that combines observed data and selected interventions on the environment. We use this approach to generate Causal Bayesian Networks, which can later be used to perform diagnostic and predictive inference. We apply our method to a smart home simulation, a use case where knowledge of causal relations paves the way towards explainable systems. Our algorithm succeeds in generating a Causal Bayesian Network close to the simulation’s ground-truth causal interactions, showing encouraging prospects for application in real-life systems.
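As a rough sketch of the pipeline described in the two abstracts above (correlation screening followed by targeted do-interventions), the fragment below is illustrative only: the `intervene` callback, the correlation threshold and the handling of confounded pairs are assumptions made for the example, not the algorithm actually used in these papers.

```python
# Minimal sketch (not the original implementation): learn directed causal edges by
# (i) pruning weakly correlated variable pairs, then (ii) orienting the remaining
# edges with do-interventions performed on the environment.
import itertools
import numpy as np

def learn_causal_edges(data, intervene, corr_threshold=0.3):
    """data: dict {variable name -> 1-D numpy array of observations}.
    intervene(x, y): hypothetical callback that forces variable x (a "do-operation")
    and returns True if variable y responds."""
    variables = list(data)
    edges = []
    for x, y in itertools.combinations(variables, 2):
        corr = abs(np.corrcoef(data[x], data[y])[0, 1])
        if corr < corr_threshold:
            continue                      # early pruning: skip weakly correlated pairs
        if intervene(x, y):
            edges.append((x, y))          # forcing x moves y: infer x -> y
        elif intervene(y, x):
            edges.append((y, x))          # forcing y moves x: infer y -> x
        # otherwise the correlation is attributed to a hidden common cause and dropped
    return edges
```

In a smart-home setting, `intervene` would typically toggle an actuator and check whether the correlated sensor reading changes; variables that cannot be forced directly are the “non-doable” cases addressed by the extensions above.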
Will we soon have to resign ourselves to the inevitable supremacy of artificial intelligence? Before calling for revolt, let us take a closer look at what we are actually dealing with.
Hunter-gatherer societies have no schools. Yet they accumulate knowledge, and they possess sophisticated languages and cultures. If we compare our species with other primates, everything is different. Animal cultures do exist, but they are so limited that they long went unnoticed by ethologists. Why is there so much ‘knowledge’ in our species? And why do we transmit it? If the question seems odd, it is because we have lost sight of the apparently unnatural character of this behaviour.
Deep learning and other similar machine learning techniques have a huge advantage over other AI methods: they do function when applied to real-world data, ideally from scratch, without human intervention. However, they have several shortcomings that mere quantitative progress is unlikely to overcome. The paper analyses these shortcomings as resulting from the type of compression achieved by these techniques, which is limited to statistical compression. Two directions for qualitative improvement, inspired by comparison with cognitive processes, are proposed here, in the form of two mechanisms: complexity drop and contrast. These mechanisms are supposed to operate dynamically and not through pre-processing as in neural networks. Their introduction may bring the functioning of AI away from mere reflex and closer to reflection.
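As a purely hypothetical illustration of what a ‘complexity drop’ means in compression terms, the toy below uses an off-the-shelf compressor as a crude stand-in for description complexity; this proxy and the example strings are my assumptions, not the mechanism proposed in the paper.

```python
# Toy illustration (assumption: zlib output length as a rough proxy for complexity).
# A structure is detected when an observation turns out to be describable far more
# briefly than its raw size would suggest: the "complexity drop".
import random
import string
import zlib

def complexity(s: str) -> int:
    """Compressed length, used here as a crude stand-in for description complexity."""
    return len(zlib.compress(s.encode()))

random.seed(0)
noise = ''.join(random.choice(string.ascii_lowercase) for _ in range(200))  # unstructured
pattern = 'abcd' * 50                                                       # highly structured

for name, s in [('noise', noise), ('pattern', pattern)]:
    drop = len(s) - complexity(s)        # raw size minus achieved description size
    print(f'{name}: raw={len(s)} compressed={complexity(s)} drop={drop}')
```

The structured string yields a much larger drop than the random one, which is the intuition behind treating a sudden drop in complexity as a signal worth reacting to.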
In 2013, an alarmist study announced the risk of an imminent disappearance, within only four years, of half of all jobs in the United States through the massive introduction of artificial intelligence into the world of work. Yet nothing of the sort has happened. Could it be that these discourses, which present AI as an absolute upheaval, are merely a way of attracting attention, abusively extrapolating from a rather modest reality made of fledgling techniques?
Analogical reasoning is a cognitively fundamental way of reasoning by comparing two pairs of elements. Several computational approaches have been proposed to solve analogies efficiently: among them, a large number of practical methods rely on either a parallelogram representation of the analogy or, equivalently, a model of proportional analogy. In this paper, we propose to broaden this view by extending the parallelogram representation to differential manifolds, that is, spaces where the notion of vector does not exist. We show that, in this context, some classical properties of analogies no longer hold. We illustrate our considerations with two examples: analogies on a sphere and analogies on a manifold of probability distributions.
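A minimal sketch of the parallelogram (proportional) view of analogy that the paper starts from: in a flat vector space, a : b :: c : d is solved by d = c + (b − a). The tiny 2-D “embeddings” below are invented for illustration; on a curved manifold such as a sphere this recipe no longer applies, which is precisely the paper’s point.

```python
# Parallelogram analogy in a flat vector space: solve a : b :: c : ?
# The 2-D "embeddings" below are made up purely for illustration.
import numpy as np

emb = {
    'man':   np.array([0.0, 0.0]),
    'woman': np.array([1.0, 0.0]),
    'king':  np.array([0.0, 1.0]),
    'queen': np.array([1.0, 1.0]),
}

def solve_analogy(a, b, c):
    """Return the vocabulary item closest to c + (b - a)."""
    target = emb[c] + (emb[b] - emb[a])
    return min((w for w in emb if w not in (a, b, c)),
               key=lambda w: np.linalg.norm(emb[w] - target))

print(solve_analogy('man', 'woman', 'king'))   # -> 'queen'
```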
The purpose of this paper is to propose a refinement of the notion of innateness. If we merely identify innateness with bias, we obtain a poor characterisation of this notion, since any learning device relies on a bias that makes it choose a given hypothesis instead of another. We show that our intuition of innateness is better captured by a characteristic of bias related to isotropy. Generalist models of learning are shown to rely on an ‘isotropic’ bias, whereas the bias of specialised models, which include some specific a priori knowledge about what is to be learned, is necessarily ‘anisotropic’. The so-called generalist models, however, turn out to be specialised in some way: they learn ‘symmetrical’ forms preferentially and have strictly no deficiencies in their learning ability. Since actual learners do not always show these two properties, such generalist models may sometimes be ruled out as bad candidates for cognitive modelling.
Students’ errors become manifest through erroneous behaviours noticed by the teacher. However, addressing the behavioural deviation alone is not sufficient to design appropriate feedback. We propose here a model of student error based on a separation between procedural and logical knowledge. This model is tested through its ability to predict the observed behaviour of subjects solving the Tower of Hanoi problem. Using this model, we are able to propose a ‘deep’ error classification, based on the observation of the internal representations of the system when it generates deviant behaviours. From this characterisation of errors, we aim to design a critiquing system. Such a system will deliver more elaborate feedback to the learner, from which we expect better pedagogical efficiency and better acceptability.
The distinction between declarative and procedural knowledge is a well-accepted one. However, few models offer a consistent implementation of this distinction. We present such a system, based on a strict separation of logical and calculation capabilities, designed to model aspects of human problem solving behaviour. We have tested our approach on the Tower of Hanoi task by comparing the results provided by our model with the performance of novice subjects. We also compared these results with the performance of a few other computational models. These comparisons are quite promising. Our model has been designed to be simple and psychologically plausible. Its current implementation is still basic. We expect further improvement from the joint introduction of two separate learning abilities, a logical one and a procedural one.
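For reference, the Tower of Hanoi task used as a testbed in the two studies above has a standard recursive solution; the sketch below is simply that textbook procedure, not the authors’ cognitive model, whose point is precisely to reproduce how novices deviate from such an optimal strategy.

```python
# Textbook recursive solution of the Tower of Hanoi (reference behaviour only;
# the model discussed above aims to reproduce novice deviations from it).
def hanoi(n, source='A', target='C', spare='B', moves=None):
    """Return the optimal sequence of (disk, from_peg, to_peg) moves for n disks."""
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, source, spare, target, moves)   # clear the way
        moves.append((n, source, target))            # move the largest free disk
        hanoi(n - 1, spare, target, source, moves)   # rebuild on top of it
    return moves

print(hanoi(3))   # 2**3 - 1 = 7 moves
```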
Bias is always present in learning systems. There is no perfect, universal way of learning that would avoid any ‘innate’ predetermination. However, not all biases should be considered equivalent. Usually, it is implicitly regarded as desirable to avoid anisotropic biases when designing a learning mechanism, especially when it is intended as a cognitive model of some human or animal learning ability. An anisotropic bias necessarily involves some ad hoc a priori knowledge that severely limits the generality of the learning device.
Knowledge elicitation is a critical problem in computerized learning environments that make use of a knowledge base. Fortunately, contrary to usual expertise elicitation situations, didactic scientific knowledge is quite often well formalized, and authors are used to dealing with the logical organization of the domain they teach. We propose here an original tool, a logical spreadsheet, which, if included in an authoring package, will help authors organize concepts and, at the same time, make both the design and the maintenance of didactic knowledge bases much easier.
Conceptual knowledge is a fundamental part of what is taught to engineering students. However, most efforts in C.A.L. research are devoted to helping students acquire new skills, not concepts. We describe here a research project that aims at providing the student with relevant conceptual explanations whenever they are needed. We first try to describe what a relevant explanation should be and how it could be generated. Then we consider the possibility of coupling the explanation module with a simulation program, so that part of the knowledge used in explanations is extracted from the simulation.
A student trying to acquire a skill, here the mastery of Prolog, also needs conceptual knowledge. To meet this type of need, we developed a system that lets the student simulate the execution of their Prolog program, but also offers the possibility of submitting this program to the critical eye of SAVANT 3. The latter system was designed to support an argumentative discussion with the student. It is used here to criticise the correctness and efficiency of the program written by the student, which allows the student to correct possible conceptual mistakes. The student can thus run the program and observe its execution, and then "discuss" what they have written with SAVANT 3. We address the question of whether it is possible and desirable to extend what is for now only a prototype to real situations (e.g. complex Prolog programs) and to arbitrary subjects (economics, network architecture, etc.).
We present here an analysis of a specific form of explanation that can be found in naturally occurring conversations, and that may be needed by users of KBS: explanations as answers to surprises that follow a discrepancy between expectations and reality. We describe a tutoring system based on this type of explanation: SAVANT3 systematically looks for reasons to be surprised, so that the student feels compelled to give explanations. We examine the requirements that a system has to meet to be able to produce this kind of explanation based on a preliminary surprise.
One cannot imagine the teaching of the next century without computers. Some even claim that a few sessions in which the student interacts with the machine will replace many hours spent listening to the teacher’s monologue, deciphering books or labouring over exercises. Yet, despite the scale of what is at stake, nobody is currently able to say how to give the computer enough competence to play its part in such a scenario. The principles underlying the SAVANT3 system, developed at Télécom Paris, could be one ingredient of this Computer-Assisted Instruction of the future.
The present study shows that there is a qualitative difference between concept and skill acquisition, and that it may have some consequences for the design of C.A.I. courseware. We show, for instance, that concept learning is essentially a logical process, based on rule acquisition or modification, and that conversation (free dialogue) is best suited for concept transmission. This paper describes a mixed-initiative dialogue module which is part of the ‘SAVANT 3’ CAI system.