I find this model interesting, but I wonder how much people also take the severity of the outcome into account in their everyday reasoning about these causal events. I would expect that for more severe outcomes, people might not regard the morally bad actor as less causal, since any reduction in the likelihood of a severe outcome would have been highly beneficial. Additionally, this model seems to apply only to causal evaluations that are binary, discrete, and explicit (a causal relationship either exists or does not), but it seems to me that there is another prevalent kind of everyday causal evaluation that is more graded and subtle. For example, if I buy a bottle of milk and it tastes off, I simply do not have enough information to know the exact cause, but I might still assign partial blame to the shop, the producer, the caterer, the person who might have mislabeled the bottle, etc. This kind of reasoning also seems common in more complex planning processes, and I wonder how the model would extend to such cases.
Another reflection is whether, and how, this model might extend to game theory. Would it be plausible to treat non-optimal choices made by players as prescriptive norm violations within this model? I am not sure, since the idea in game theory (in my understanding) is that these norms are implicit and unspoken, yet conducive to optimal equilibrium states. In that sense they are based on statistical norms, but they are also, in a certain sense, what people "should" do. Even so, people still can, and often do, violate them.