[Leahy and Carey]
It seems odd, or at least not intuitive to me, that given a board with 3 exits on one side and 1 on the other, infants actually guess that the ball will exit on the 1-exit side about 1/4 of the time. What could the underlying computations be? They see the board, and then there's a step at which they latch onto one side in their guessing, and that step is itself informed by the actual probabilities? If so, that kind of computation already seems quite sophisticated/complex, maybe no less than what's required for full-fledged modal representations. Does this suggest that the developmental shift to modal representations lies not so much in computational ability as in the enrichment and refinement of conceptual structure and/or its mapping onto states of the world?
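To make the question concrete to myself, here is a toy sketch (entirely my own, not from the paper) of two candidate computations: probability matching, i.e. sampling a guess from the actual outcome distribution, versus maximizing, i.e. always committing to the most likely side. The assumption that each of the four exits is individually equally likely is mine.

```python
import random

# Toy model of the board: 3 exits on the left side, 1 on the right.
# Assumption (mine, not the paper's): each individual exit is equally likely.
EXITS = ["left", "left", "left", "right"]

def probability_matching_guess():
    """Sample a guess from the actual outcome distribution:
    predicts guessing 'right' on roughly 1/4 of trials."""
    return random.choice(EXITS)

def maximizing_guess():
    """Commit to the single most probable side every time:
    never guesses 'right'."""
    return max(set(EXITS), key=EXITS.count)

trials = 10_000
matching_right = sum(probability_matching_guess() == "right" for _ in range(trials))
maximizing_right = sum(maximizing_guess() == "right" for _ in range(trials))
print(f"probability matching: 'right' on {matching_right / trials:.1%} of guesses")
print(f"maximizing:           'right' on {maximizing_right / trials:.1%} of guesses")
```

Even the matching strategy presupposes a representation of the outcome distribution to sample from, which is part of why the computation already looks nontrivial to me.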
[Stahl and Feigenson]
The prediction that the novel word (“blick”) would be more easily learned following an impossible event was not intuitive or obvious to me for two reasons.
Firstly, even if the impossible event demands the greatest degree of model revision (as the paper suggests), a highly improbable event might also demand a great deal of it: perhaps the machine is rigged? And if the word is then taught, there's also an assumption of relevance: maybe the adult is trying to tell me that this machine is rigged, and that is why it gives me the singular colour against all odds. It's not intuitive or obvious to me that that model revision would necessarily be smaller than the revision after an impossible event (a toy sketch at the end of this note tries to make this concrete).
Secondly, learning a word for a novel object seems like a distinct, linguistic step that follows the model revision. Why assume it would necessarily be boosted by a greater need for model revision?
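Returning to the first point: here is a minimal Bayesian sketch of the "rigged machine" worry. The hypotheses, priors, and ball counts are all made-up numbers of my own, not anything from Stahl and Feigenson; the point is only that a merely improbable outcome can, in principle, force a large revision.

```python
# Toy Bayesian update for the "rigged machine" worry; all numbers are made up.
prior_fair = 0.99          # I start out fairly sure the machine is fair
prior_rigged = 0.01        # ...and think rigging is unlikely

# Suppose the machine holds 99 balls of the common colour and 1 of the singular colour.
lik_singular_given_fair = 1 / 100    # improbable but possible under fairness
lik_singular_given_rigged = 1.0      # a rigged machine always emits the singular ball

# Posterior that the machine is rigged, after seeing the singular-colour ball.
evidence = (prior_fair * lik_singular_given_fair
            + prior_rigged * lik_singular_given_rigged)
posterior_rigged = prior_rigged * lik_singular_given_rigged / evidence
print(f"P(rigged | singular colour drawn) = {posterior_rigged:.2f}")  # ~0.50
```

On these made-up numbers, a single improbable draw moves the rigged hypothesis from a 1% prior to roughly even odds, which is the sense in which the improbable event might also demand substantial model revision.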