Social signals
Evolutionary origins of language
Evolution and information
Simplicity Theory
Cognitive modelling of interest
Cognitive modelling of relevance
Cognitive modelling of meaning
Cognitive modelling of emotional intensity
Cognitive modelling of concept learning
Emergence as complexity drop
Qualia cannot be epiphenomenal
Simplicity Theory says that interesting situations must appear abnormally simple. This means that they are less complex (in the Kolmogorov sense) than expected. This led me to define subjective probability as p = 2^(-U), where U represents unexpectedness. Please visit www.simplicitytheory.science
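As a minimal illustration of this definition (a sketch in Python, not code from the site; the two complexity estimates are assumed to be supplied by the modeller, since Kolmogorov complexity itself is uncomputable):

    # Sketch of Simplicity Theory's subjective probability.
    # generation_c and description_c are assumed given by the modeller.

    def unexpectedness(generation_c, description_c):
        # U: how many bits simpler the situation is than it "should" be
        return generation_c - description_c

    def subjective_probability(u):
        # p = 2^(-U), as defined above
        return 2.0 ** (-u)

    # A situation costing ~20 bits to generate but only ~5 bits to describe:
    print(subjective_probability(unexpectedness(20, 5)))  # 2**-15, ~0.00003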
Follow the new MOOC on Algorithmic Information Theory that I developed on the edX platform.
Talk for the Claude Shannon centenary at the Institut Henri Poincaré: Information, simplicité et pertinence (in French).
Talk at COGSCI 2015
Probability judgments rely on complexity assessments
Texts available on the Web have been generated by human minds. We observe that simple patterns are over-represented: abcdef is more frequent than arfbxg and 1000 appears more often than 1282. We suggest that word frequency patterns can be predicted by cognitive models based on complexity minimization. Conversely, the observation of word frequencies offers an opportunity to infer particular cognitive mechanisms involved in their generation.
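As a toy illustration of what such a cognitive model might look like (the generators and bit costs below are invented for the example, not taken from the paper), a crude description-length estimator already ranks the strings above in the intended way:

    # Crude description-length estimator: try a few simple "generators"
    # and fall back to spelling the string out literally.

    def crude_complexity(s):
        candidates = [8 * len(s)]              # literal: ~8 bits per character
        if len(set(s)) == 1:                   # constant run, e.g. "aaaa"
            candidates.append(8 + len(s).bit_length())
        if all(ord(b) - ord(a) == 1 for a, b in zip(s, s[1:])):
            candidates.append(8 + len(s).bit_length())  # run like "abcdef"
        stripped = s.rstrip('0')               # round number, e.g. "1000"
        if s.isdigit() and len(stripped) < len(s):
            candidates.append(8 * len(stripped) + (len(s) - len(stripped)).bit_length())
        return min(candidates)

    print(crude_complexity("abcdef"), crude_complexity("arfbxg"))  # 11 48
    print(crude_complexity("1000"), crude_complexity("1282"))      # 10 32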
The paper presents a novel computational framework named CompLog. Inspired by probabilistic programming systems like ProbLog, CompLog builds upon the inferential mechanisms proposed by Simplicity Theory, relying on the computation of two Kolmogorov complexities (here implemented as min-path searches via ASP programs) rather than probabilistic inference.
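CompLog's complexity computations are implemented as ASP programs; as a rough, language-agnostic stand-in for the "complexity as a min-path" idea, a plain shortest-path search can be sketched as follows (the toy graph and bit costs are invented for illustration):

    import heapq

    # Nodes stand for representations; edge weights are description costs (bits).

    def min_description_cost(graph, start, goal):
        # Dijkstra: cheapest chain of description steps from start to goal
        frontier = [(0, start)]
        best = {start: 0}
        while frontier:
            cost, node = heapq.heappop(frontier)
            if node == goal:
                return cost
            for nxt, w in graph.get(node, []):
                if cost + w < best.get(nxt, float("inf")):
                    best[nxt] = cost + w
                    heapq.heappush(frontier, (cost + w, nxt))
        return float("inf")

    toy = {"start": [("concept_A", 3), ("concept_B", 7)],
           "concept_A": [("observation", 9)],
           "concept_B": [("observation", 2)]}
    print(min_description_cost(toy, "start", "observation"))  # 9, via concept_B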
We design a pattern mining algorithm to provide a summary of graphs in the form of a set of unexpected patterns, that is, patterns whose observed complexity falls below their expected complexity.
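A minimal sketch of this selection criterion (pattern names, bit values and the threshold below are invented for illustration):

    # A pattern is "unexpected" when its observed complexity falls well
    # below its expected complexity: a positive complexity drop.

    def drop(expected_bits, observed_bits):
        return expected_bits - observed_bits

    patterns = {                     # pattern -> (expected, observed), in bits
        "triangle": (12.0, 11.5),    # about as complex as anticipated
        "hub-with-50-spokes": (40.0, 18.0),  # far cheaper to describe than expected
    }
    summary = [p for p, (e, o) in patterns.items() if drop(e, o) > 2.0]
    print(summary)  # ['hub-with-50-spokes']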
We suggest that Bayes’ rule can be seen as a specific instance of a more general inferential template that can be expressed in terms of algorithmic complexities, namely through the measure of unexpectedness proposed by Simplicity Theory.
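Concretely, taking negative base-2 logarithms of Bayes' rule turns its products into sums of code lengths; this additive form is the template that Simplicity Theory recasts with algorithmic complexities (the rewriting below is the standard coding-theoretic step, not the paper's full construction):

    P(h \mid o) \;=\; \frac{P(o \mid h)\, P(h)}{P(o)}
    \quad\Longrightarrow\quad
    C(h \mid o) \;=\; C(o \mid h) + C(h) - C(o),
    \qquad C(x) := -\log_2 P(x)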
With the increasing number of connected devices, complex systems such as smart homes record a multitude of events of various types, magnitudes and characteristics. Current systems struggle to identify which events should be considered more memorable than others. Humans, in contrast, quickly categorize some events as more “memorable” than others, without relying on knowledge of the system’s inner workings or on large prior datasets. This ability would allow a system to: (i) identify and summarize a situation for the user by presenting only memorable events; (ii) suggest the most memorable events as candidate hypotheses in an abductive inference process. Our proposal is to use Algorithmic Information Theory to define a “memorability” score, retrieving events by means of predicative filters. We use smart-home examples to illustrate how our theoretical approach can be implemented in practice.
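A minimal sketch of the idea (the event log, attributes and costs below are invented): an event is a candidate "memorable" event when a short conjunction of predicative filters suffices to single it out among everything the system has recorded:

    from itertools import combinations
    import math

    events = [
        {"room": "kitchen", "type": "door",  "hour": 3},
        {"room": "kitchen", "type": "light", "hour": 19},
        {"room": "bedroom", "type": "light", "hour": 19},
        {"room": "garage",  "type": "light", "hour": 19},
    ]

    def min_filters(target, events):
        # smallest number of attribute == value filters matching only the target
        keys = list(target)
        for size in range(1, len(keys) + 1):
            for combo in combinations(keys, size):
                matches = [e for e in events
                           if all(e[k] == target[k] for k in combo)]
                if matches == [target]:
                    return size
        return len(keys)

    # The 3 a.m. door event is pinned down by one filter (type == "door"):
    # cheap to describe relative to log2(#events), hence a good candidate.
    print(min_filters(events[0], events), math.log2(len(events)))  # 1 2.0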
This MOOC is about applying Algorithmic Information Theory to Artificial Intelligence. Algorithmic information was discovered half a century ago. It is a great conceptual tool to describe what artificial intelligence actually does, and what it should do to make optimal choices.
On 13 September 1916, a five-ton elephant named Mary was hanged with a crane at Erwin, Tennessee, before an audience of 2,500 people...
How do we decide whether an event is the product of chance or, on the contrary, stems from a targeted cause? The question is fundamental: it bears on the decisions we make and, at times, on our safety. On what criteria do we decide that an event is or is not fortuitous? If our judgment in this matter is valid, how can it lead rational individuals to systematically reject the existence of chance in favour of conspiracies, magical influences or the hand of fate? And if this capacity for judging chance is not valid, how is it that we are endowed with it?
A referring expression (RE) is a description that identifies a set of instances unambiguously. Mining REs from data finds applications in natural language generation, algorithmic journalism, and data maintenance. Since there may exist multiple REs for a given set of entities, it is common to focus on the most intuitive ones, i.e., the most concise and informative. In this paper we present REMI, a system that can mine intuitive REs on large RDF knowledge bases. Our experimental evaluation shows that REMI finds REs deemed intuitive by users. Moreover, we show that REMI is several orders of magnitude faster than an approach based on inductive logic programming.
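A greedy toy version of the task (the entities, properties and selection heuristic below are invented; REMI itself searches large RDF graphs with a proper intuitiveness cost model):

    # Greedy sketch: add (property, value) pairs until only the targets match.

    triples = {
        "mont_blanc": {("type", "mountain"), ("range", "alps"), ("country", "france")},
        "matterhorn": {("type", "mountain"), ("range", "alps"), ("country", "switzerland")},
        "ben_nevis":  {("type", "mountain"), ("range", "grampians"), ("country", "uk")},
    }

    def referring_expression(targets, triples):
        candidates = set.union(*triples.values())
        matching = set(triples)          # entities still matched so far
        expression = []
        while matching != targets:
            # among pairs held by every target, pick the most discriminating one
            best = max((p for p in candidates
                        if targets <= {e for e in matching if p in triples[e]}),
                       key=lambda p: len([e for e in matching if p not in triples[e]]))
            matching = {e for e in matching if best in triples[e]}
            expression.append(best)
        return expression

    print(referring_expression({"mont_blanc", "matterhorn"}, triples))
    # [('range', 'alps')] -- i.e. "the Alpine mountains"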
Analogies are 4-ary relations of the form “A is to B as C is to D”. When A, B and C are fixed, the problem of finding the correct D is called an analogical equation. A direct application domain is Natural Language Processing, where the approach has proved successful on word inflections such as conjugation or declension. While most approaches rely on the axioms of proportional analogy to solve these equations, these axioms are known to have limitations, in particular regarding the nature of the inflections considered. In this paper, we propose an alternative approach based on the assumption that optimal word inflections are transformations of minimal complexity. We propose a rough estimate of complexity for word analogies and an algorithm to find the optimal transformations. We illustrate our method on a large-scale benchmark dataset and compare it with state-of-the-art approaches to demonstrate the benefit of using complexity to solve analogies on words.
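An illustrative baseline in this spirit (a simplification, not the paper's estimator): among the suffix rewrites that turn A into B, keep the least complex one, i.e. the shortest rule, and apply it to C:

    def solve_analogy(a, b, c):
        # try every shared-prefix split of a and b; the rule "replace the
        # suffix a[k:] by b[k:]" must also apply to c
        solutions = []
        for k in range(min(len(a), len(b)), -1, -1):
            if a[:k] == b[:k] and c.endswith(a[k:]):
                rule_cost = len(a[k:]) + len(b[k:])    # crude rule complexity
                solutions.append((rule_cost, c[:len(c) - len(a) + k] + b[k:]))
        return min(solutions)[1] if solutions else None

    print(solve_analogy("walk", "walked", "jump"))      # jumped
    print(solve_analogy("finir", "finissons", "agir"))  # agissons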
Deep learning and similar machine learning techniques have a huge advantage over other AI methods: they actually work when applied to real-world data, ideally from scratch and without human intervention. However, they have several shortcomings that mere quantitative progress is unlikely to overcome. The paper analyses these shortcomings as resulting from the type of compression achieved by these techniques, which is limited to statistical compression. Two directions for qualitative improvement, inspired by comparison with cognitive processes, are proposed here, in the form of two mechanisms: complexity drop and contrast. These mechanisms are meant to operate dynamically, not through pre-processing as in neural networks. Their introduction may move AI away from mere reflex and closer to reflection.
If you walk backwards, the footprints you see in front of you are your own. No robot, no artificial intelligence (AI) knows this kind of thing, unless someone has thought to tell it. Are AIs really that intelligent? On closer inspection, they turn out to be very intelligent and very stupid at the same time. Why? Will it always be so? In this book, Jean-Louis Dessalles tackles these questions in a way that is precise and accessible to all. Every reader will find something here to surprise them. He discusses the past, present and future of AI, and even what, in his view, AIs still lack in order to become... intelligent.
Several computational methods have been proposed to evaluate the relevance of an instantiated cause to an observed consequence. The paper reports on an experiment to investigate the adequacy of some of these methods as descriptors of human judgments about causal relevance.
Analogical reasoning is a cognitively fundamental way of reasoning by comparing two pairs of elements. Several computational approaches have been proposed to solve analogies efficiently: among them, a large number of practical methods rely either on a parallelogram representation of the analogy or, equivalently, on a model of proportional analogy. In this paper, we propose to broaden this view by extending the parallelogram representation to differential manifolds, hence to spaces where the notion of vector does not exist. We show that, in this context, some classical properties of analogies no longer hold. We illustrate our considerations with two examples: analogies on a sphere and analogies on the manifold of probability distributions.
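For reference, the flat-space recipe that the paper generalizes and then questions (toy vectors; on a curved manifold such as a sphere, this subtraction is no longer well defined):

    import numpy as np

    # Parallelogram rule in a flat vector space: D = B - A + C.
    A = np.array([0.0, 0.0])
    B = np.array([1.0, 0.0])   # A -> B: a shift to the right
    C = np.array([0.0, 1.0])
    D = B - A + C              # apply the same shift to C
    print(D)                   # [1. 1.]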
Responsibility, as referred to in everyday life, as explored in moral philosophy and debated in jurisprudence, is a multiform, ill-defined but inescapable notion for reasoning about actions. Its presence in all social constructs suggests the existence of an underlying cognitive base. Following this hypothesis, and building upon simplicity theory, the paper proposes a novel computational approach.
Incremental learning refers to the online learning of a model from streaming data. In non-stationary environments, the process generating these data may change over time, so that the learned concept becomes invalid. Adaptation to this non-stationary nature, called concept drift, is an intensively studied topic and can be achieved algorithmically by two opposite families of approaches: active and passive. We propose a formal framework to deal with concept drift, both in active and passive ways. Our framework is derived from the Minimum Description Length principle and exploits the algorithmic theory of information to quantify the model adaptation. We show that this approach is consistent with state-of-the-art techniques and has a valid probabilistic counterpart.
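A minimal MDL-flavoured drift check on a Bernoulli stream (the model, windows and slack threshold below are invented for illustration, not the paper's framework): drift is declared when re-encoding the window with a refitted model saves more bits than stating the model change costs.

    import math

    def codelength(ones, zeros, p):
        # bits needed to encode the window under a Bernoulli(p) model
        return -(ones * math.log2(p) + zeros * math.log2(1 - p))

    def drift_detected(window, p_model, slack_bits=8.0):
        ones = sum(window)
        zeros = len(window) - ones
        p_new = min(max(ones / len(window), 0.001), 0.999)  # refitted rate
        saving = codelength(ones, zeros, p_model) - codelength(ones, zeros, p_new)
        # drift: switching models saves more bits than the change costs to state
        return saving > slack_bits

    stable  = [1, 0, 1, 1, 0, 1, 0, 1] * 4   # ~60% ones, matches the model
    shifted = [1, 1, 1, 1, 1, 1, 1, 0] * 4   # ~90% ones, the concept drifted
    print(drift_detected(stable, p_model=0.6))   # False
    print(drift_detected(shifted, p_model=0.6))  # True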
Analogical reasoning is still a difficult task for machines. In this paper, we consider the problem of analogical reasoning and assume that the relevance of a solution can be measured by the complexity of the analogy. This hypothesis is tested in a basic alphanumeric micro-world.
People avoid changing the subject abruptly during conversation. There are reasons to think that this constraint is more than a social convention and is deeply rooted in our cognition. We show here that the phenomenon of topic connectedness is an expected consequence of the maximization of unexpectedness and that it is predicted by Simplicity Theory.
We propose to apply Simplicity Theory (ST) to model interest in creative situations. ST has been designed to describe and predict interest in communication. Here we use ST to derive a decision rule that we apply to a simplified version of a creative game, the Poietic Generator. The decision rule produces what can be regarded as an elementary form of creativity. This study is meant as a proof of principle. It suggests that some creative actions may be motivated by the search for unexpected simplicity.
Human beings do assess probabilities. Their judgments are, however, sometimes at odds with probability theory. One possibility is that human cognition is imperfect or flawed in the probability domain, showing biases and errors. Another possibility, which we explore here, is that human probability judgments rely not on a weak version of probability calculus, but on complexity computations. This hypothesis is worth exploring, not only because it predicts some of the probability ‘biases’, but also because it explains human judgments of uncertainty in cases where probability calculus cannot be applied. We designed such a case, in which the use of complexity when judging uncertainty is almost transparent.
What properties must a story have in order to be a story? The question can be addressed within the framework of cognitive modelling. The central notion developed here is hidden simplicity: to be interesting, an imaginary situation must involve a “revelation” that simplifies a situation perceived as complex. This conceptual framework has potential implications beyond narrative production. It concerns any creation of the mind meant to interest others: stories, but also objects and projects, insofar as these objects or projects are endowed with narrative value.
Unexpectedness is a major factor controlling interest in narratives. Emotions, for instance, are felt intensely when they are associated with unexpected events. The problem with generating unexpected situations is that either the characters or the whole story risk no longer being believable. This issue is one of the main problems that make story design a hard task, and writers face it on a case-by-case basis. The automatic generation of interesting stories requires formal criteria to decide to what extent a given situation is unexpected and to what extent actions remain believable. This paper proposes such formal criteria and makes suggestions concerning their use in story generation systems.
The human mind is known to be sensitive to complexity. For instance, the visual system reconstructs hidden parts of objects following a principle of maximum simplicity. We suggest here that higher cognitive processes, such as the selection of relevant situations, are sensitive to variations of complexity. Situations are relevant to human beings when they appear simpler to describe than to generate. This definition offers a predictive (i.e. refutable) model for the selection of situations worth reporting (interestingness) and for what individuals consider an appropriate move in conversation.
The challenge of automatic narrative generation is to produce stories that are not only coherent but also interesting. This study considers the problem within the Simplicity Theory framework. According to this theory, interesting situations must be unexpectedly simple, either because they should have required complex circumstances to be produced, or because they are abnormally simple, as in coincidences. Here we consider the special case of narratives in which characters perform actions with emotional consequences. We show, using the simplicity framework, how notions such as intentions, believability, responsibility and moral judgments are linked to narrative interest.
Several studies have highlighted the combined role of emotions and reasoning in the determination of judgments about morality. Here we explore the influence of Kolmogorov complexity in the determination, not only of moral judgment, but also of the associated narrative interest. We designed an experiment to test the predictions of our complexity-based model when applied to moral dilemmas. It confirms that judgments about interest and morality may be explained in part by discrepancies in complexity. This preliminary study suggests that cognitive computations are involved in decision-making about emotional outcomes.
Algorithmic probability is traditionally defined by considering the output of a universal machine fed with random programs. This definition proves inappropriate for many practical applications in which probabilistic assessments are performed spontaneously and instantaneously. In particular, it does not say which aspects of a situation are relevant when considering its probability ex post (after its occurrence). As it stands, the standard definition also fails to capture the fact that simple, rather than complex, outcomes are often considered improbable, as when a supposedly random device produces a repeated pattern. More generally, the standard algorithmic definition of probability conflicts with the idea that maximum entropy corresponds to states that are both complex (unordered) and probable. We suggest here that algorithmic probability should instead be defined as a difference in complexity. We distinguish description complexity from generation complexity. Improbable situations are situations that are more complex to generate than to describe. We show that this definition is more congruent with the intuitive notion of probability.
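A toy numeric rendering of “improbable = harder to generate than to describe” (the two-case description code below is invented for the example):

    def generation_bits(seq):
        return len(seq)                   # each coin flip costs one random bit

    def description_bits(seq):
        if len(set(seq)) == 1:            # e.g. "HHHHHHHHHH": symbol + length
            return 1 + len(seq).bit_length()
        return len(seq)                   # no shorter description found

    for seq in ("HHHHHHHHHH", "HTHHTHTTHT"):
        u = generation_bits(seq) - description_bits(seq)   # unexpectedness
        print(seq, "U =", u, "p =", 2.0 ** -u)
    # The all-heads run gets U = 5 (p = 1/32): judged improbable precisely
    # because it is simpler to describe than to generate.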
Near-miss experiences are one of the main sources of intense emotions. Despite people’s consistency when judging near-miss situations and when communicating about them, there is no integrated theoretical account of the phenomenon. In particular, individuals’ reaction to near-miss situations is not correctly predicted by rationality-based or probability-based optimization. The present study suggests that emotional intensity in the case of near-miss is in part predicted by Simplicity Theory.
The feeling of good or bad luck occurs whenever there is an emotion contrast between an event and an easily accessible counterfactual alternative. This study suggests that cognitive simplicity plays a key role in the human ability to experience good and bad luck after the occurrence of an event.
Individuals devote one third of their language time to mentioning unexpected events. We try to make sense of this universal behaviour within the Costly Signalling framework. By systematically using language to point to the unexpected, individuals send a signal that advertises their ability to anticipate danger. This shift in display behaviour, as compared with typical displays in primate species, may result from the use by hominins of artefacts to kill.
Two emblematic presidents of the United States were assassinated 100 years apart, and their stories share several features. Why is our brain irresistibly drawn to such coincidences, searching them for marks of destiny?
This study is an attempt to measure the variations of interest aroused by conversational narratives when specific dimensions of the reported events are manipulated. The results are compared with the predictions of the Complexity Drop Theory, which states that events are more interesting when they appear simpler, in the Kolmogorov sense, than anticipated.
This study aims to measure the variations in interest aroused by an event when certain definite dimensions are manipulated. The results are compared with the predictions of the Complexity Drop Theory, according to which events are more interesting when they appear simpler than expected.
Everyday conversations are a permanent arena in which most of our social existence is played out. In this distinctively human game, relevance is the main criterion. We all have a precise intuition of what makes a story or an argument relevant, and we are hypersensitive to lapses in relevance.
Individuals have an intuitive perception of what makes a good coincidence. Though the sensitivity to coincidences has often been presented as resulting from an erroneous assessment of probability, it appears to be a genuine competence, based on non-trivial computations. The model presented here suggests that coincidences occur when subjects perceive complexity drops. Co-occurring events are, together, simpler than if considered separately. This model leads to a possible redefinition of subjective probability.
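As a crude numeric illustration (zlib's compressed length standing in for complexity, which is only a rough proxy), two event descriptions that share structure are cheaper to describe together than separately:

    import zlib

    def c(s):
        # compressed length as a crude stand-in for description complexity
        return len(zlib.compress(s.encode()))

    e1 = "president elected in 1860, shot on a Friday, in the head, beside his wife"
    e2 = "president elected in 1960, shot on a Friday, in the head, beside his wife"
    drop = c(e1) + c(e2) - c(e1 + " / " + e2)
    print(drop)  # positive: the pair, taken together, is simpler than apart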
Most of the situations of daily life that arouse human interest are experienced as unexpected. Highly unexpected events are preferentially memorised and are systematically signalled or reported in conversation. Probability theory is shown to be inadequate to predict which situations will be perceived as unexpected. We found that unexpectedness is best explained using Kolmogorov complexity, which is a strong indication that human individuals have an intuitive access to what was thought to be only an abstract mathematical notion. Many important and previously disparate facts about human communicative behaviour are shown to result from the cognitive ability to detect complexity shifts.
We define cognitive complexity as a notion derived from Kolmogorov complexity. We show that a significant part of what captures human interest, notably in the selection of events that are spontaneously signalled or reported, can be predicted by a drop in cognitive complexity. We assess the consequences of this model for the study of conversational relevance.
Human conversation acts as an extraordinarily selective filter: only a tiny fraction of the situations that speakers have experienced or heard about will be judged worth reporting to their interlocutors. One goal of language research is to find criteria for predicting whether a situation, if mentioned in conversation, will be perceived as sufficiently “interesting”. We show here that the unexpectedness of certain situations, which often leads to their being reported in conversation, is linked to complexity gaps, and that this phenomenon can be explained within the more general framework of a “Shannonian” theory of event communication.
Though the ability of human beings to deal with probabilities has been called into question, the assessment of rarity is a crucial competence underlying much of human decision-making, and it is pervasive in spontaneous narrative behaviour. This paper proposes a new model of rarity and randomness assessment, designed to be cognitively plausible. Intuitive randomness is defined as a function of structural complexity. It is thus possible to assign a probability to an event without having to consider the set of alternatives. The model is tested on lottery sequences and compared with subjects’ preferences.
Two different conceptions of emergence are reconciled as two instances of the phenomenon of detection. In the process of comparing these two conceptions, we find that the notions of complexity and detection allow us to form a unified definition of emergence that clearly delineates the role of the observer.