questions

Lotta Pesonen -

Hello, below are some questions inspired by the readings:

Rescorla & Wagner (1972)

-        How relevant are findings from classical conditioning studies for understanding how humans estimate causality? Many examples in this paper remind me of the Icard et al. (2017) article "Normality and actual causal strength" that we read for topics class last week.

-        ^For example, could abnormal inflation, where more causality is attributed to an unexpected event/stimulus, be explained by the high prediction error associated with an unexpected event leading to a larger change in weight, which is then reflected in the causality judgement? On the other hand, could abnormal deflation, where less causality is attributed to an unexpected event/stimulus when another, more normal explanation is available, be essentially the same phenomenon as CS1 blocking conditioning to CS2, as discussed in this paper?

-        How can the learning rate be manipulated? Through stimulus salience? The magnitude of the prediction error? Emotion?
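To make the questions above concrete, here is a minimal sketch of the Rescorla-Wagner update rule, dV = alpha * beta * (lambda - V_total). The parameter values and trial counts are my own illustrative choices, not from the paper; the point is that salience (alpha) scales the weight change, and blocking falls out of the shared prediction error.

```python
# Minimal Rescorla-Wagner sketch (illustrative; parameter values are arbitrary).
# Each CS's associative strength V is updated by dV = alpha * beta * (lam - V_total),
# where alpha is CS salience, beta is a US-related learning-rate parameter,
# and lam is the maximum conditioning the US supports.

def rw_update(V, present, alpha, beta=0.5, lam=1.0):
    """One trial: update the weights of all CSs present on this trial."""
    v_total = sum(V[cs] for cs in present)   # summed prediction from all present CSs
    error = lam - v_total                    # shared prediction error
    for cs in present:
        V[cs] += alpha[cs] * beta * error    # salience alpha scales the weight change
    return V

# Phase 1: condition CS1 alone; Phase 2: CS1+CS2 compound trials.
V = {"CS1": 0.0, "CS2": 0.0}
alpha = {"CS1": 0.3, "CS2": 0.3}
for _ in range(50):
    rw_update(V, ["CS1"], alpha)
for _ in range(50):
    rw_update(V, ["CS1", "CS2"], alpha)

# CS2 acquires almost no strength: CS1 already predicts the US (blocking),
# so the prediction error that would drive CS2 learning is near zero.
```

Under this reading, "abnormal inflation" would correspond to a large error term producing a large dV, and the learning rate could be manipulated through alpha (salience) or beta.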

Lake et al., 2017

-        The authors briefly discuss model-free and model-based learning. As far as I understand, model-free learning is based on prediction errors and is associated with the dopaminergic system. After this reading, the mechanisms and neural correlates of model-based learning in the brain are still a bit unclear to me. When we talk about model-based and model-free learning in the brain, are we talking about two separate systems that interact with each other, or a single system where model-free learning takes place at one end and model-based learning at the other?

-        The authors talk about how neural networks are inspired by the brain. As far as I understand, neural networks are not great at processing uncertainty. After taking the visual cognition class, it seems that at every level of information processing, the brain's main task is to deal with a great deal of uncertainty. How can such a fundamental aspect of cognition be missing from neural networks? Is this because a step was skipped in the modelling process for simplicity, or do we actually just not know how Bayes' rule is implemented in the brain? Is there a way to add a Bayesian component to DNNs?
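On the last question: one commonly used approximation is Monte Carlo dropout (Gal & Ghahramani, 2016), where dropout is kept active at test time and repeated stochastic forward passes yield a spread of predictions that serves as an uncertainty estimate. A pure-NumPy sketch; the tiny network and its random weights are illustrative, not a trained model:

```python
# Monte Carlo dropout sketch: keeping dropout ON at test time and sampling
# repeatedly approximates Bayesian inference over the network's weights.
# The network here is a random toy (no training), purely to show the mechanics.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 16))                # input -> hidden weights (random toy values)
W2 = rng.normal(size=(16, 1))                # hidden -> output weights

def forward(x, drop_p=0.5):
    h = np.maximum(x @ W1, 0.0)              # ReLU hidden layer
    mask = rng.random(h.shape) > drop_p      # dropout mask, active even at "test" time
    h = h * mask / (1.0 - drop_p)            # inverted-dropout scaling
    return (h @ W2).item()

x = np.array([[0.2, -0.1, 0.4, 0.3]])
samples = [forward(x) for _ in range(200)]   # repeated stochastic forward passes
mean, std = np.mean(samples), np.std(samples)
# mean is the predictive estimate; std is a crude uncertainty for this input.
```

This is only one route; explicit Bayesian neural networks and deep ensembles are other ways people add an uncertainty component to DNNs.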