Almost like an Expectimax Search, but for outcome trees we annotate each node with the evidence we know at that point in time (shown inside the curly braces)
Value of Perfect Information (VPI):
Think of the forecast as a separate variable whose outcome we will know perfectly once it is observed
The forecast variable has its own distribution over being good/bad
which is different from the probability of the underlying variable given the value of the forecast
Once we observe this extra evidence, our probabilities over the underlying variable change, and we can make a more informed decision
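To make this concrete, here is a minimal Python sketch with invented numbers for a weather/forecast example (W is the underlying variable, F is the forecast; all probabilities are hypothetical). It shows that F has its own marginal distribution P(F), and that the posterior P(W | F) over the underlying variable is a different quantity, obtained by Bayes' rule.

```python
# Hypothetical numbers: W is the true weather, F is the forecast we could observe.
P_W = {"sun": 0.7, "rain": 0.3}                     # prior over the real variable
P_F_given_W = {                                     # forecast/sensor model P(F | W)
    "sun":  {"good": 0.8, "bad": 0.2},
    "rain": {"good": 0.3, "bad": 0.7},
}

# The forecast has its own distribution: P(F = f) = sum_w P(w) P(f | w)
P_F = {f: sum(P_W[w] * P_F_given_W[w][f] for w in P_W) for f in ["good", "bad"]}

# ...which is different from the posterior over the real variable given the
# forecast: P(W = w | F = f) = P(w) P(f | w) / P(f)   (Bayes' rule)
P_W_given_F = {
    f: {w: P_W[w] * P_F_given_W[w][f] / P_F[f] for w in P_W} for f in P_F
}

print(P_F)           # {'good': 0.65, 'bad': 0.35} with these numbers
print(P_W_given_F)   # observing the forecast shifts our belief about W
```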
MEU(e, E' = e') = \max_a \sum_s P(s \mid e, e') \, U(s, a)
where P(s \mid e, e') is the probability of the underlying state s given the existing evidence e and the observed forecast value e'. Since we do not know which value e' the forecast will take before observing it, we average over its possible values:
MEU(e, E') = \sum_{e'} P(e' \mid e) \, MEU(e, E' = e')
VPI(E' \mid e) = MEU(e, E') - MEU(e)
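A minimal sketch of this computation, continuing the same invented weather/umbrella numbers (utilities and probabilities are hypothetical): compute MEU(e) from the current belief, compute MEU(e, F = f) for each forecast value, average those by P(f | e) to get MEU(e, F), and take the difference to get the VPI.

```python
# Same hypothetical weather/umbrella example; all numbers are invented.
P_W = {"sun": 0.7, "rain": 0.3}                      # current belief P(W | e)
P_F_given_W = {"sun": {"good": 0.8, "bad": 0.2},     # forecast model P(F | W)
               "rain": {"good": 0.3, "bad": 0.7}}
U = {("sun", "leave"): 100, ("sun", "take"): 70,     # utility U(W, action)
     ("rain", "leave"): 0,  ("rain", "take"): 80}
ACTIONS = ["leave", "take"]

def meu(belief):
    """Max over actions of the expected utility under a belief P(W | ...)."""
    return max(sum(belief[w] * U[(w, a)] for w in belief) for a in ACTIONS)

meu_e = meu(P_W)                                     # MEU(e): act on current evidence only

# P(F = f | e) and the posterior P(W | e, F = f) for each forecast value f.
P_F = {f: sum(P_W[w] * P_F_given_W[w][f] for w in P_W) for f in ["good", "bad"]}
post = {f: {w: P_W[w] * P_F_given_W[w][f] / P_F[f] for w in P_W} for f in P_F}

# MEU(e, F) = sum_f P(f | e) * MEU(e, F = f): act after seeing the forecast.
meu_eF = sum(P_F[f] * meu(post[f]) for f in P_F)

print(f"MEU(e) = {meu_e:.2f}")                       # 73.00 with these numbers
print(f"MEU(e, F) = {meu_eF:.2f}")                   # 82.60
print(f"VPI(F | e) = {meu_eF - meu_e:.2f}")          # 9.60, which is >= 0
```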
Properties of VPI:
\forall E', e: VPI(E' \mid e) \ge 0 — Nonnegativity. Observing new information always allows you to make a more informed decision, so your maximum expected utility can only increase (or stay the same)
VPI(E_j, E_k \mid e) \ne VPI(E_j \mid e) + VPI(E_k \mid e) in general — Nonadditivity. For example, observing the same (or redundant) evidence twice does not increase the VPI by the sum of the individual VPIs; the second observation may add no value at all
VPI(E_j, E_k \mid e) = VPI(E_j \mid e) + VPI(E_k \mid e, E_j) = VPI(E_k \mid e) + VPI(E_j \mid e, E_k) — Order independence. Observing two evidence variables in either order yields the same total VPI
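These properties can be checked numerically. The sketch below extends the hypothetical weather model with two forecast sources F1 and F2 (assumed conditionally independent given the weather, sharing the same sensor model) and verifies nonnegativity, shows that the VPI of the pair differs from the sum of the individual VPIs, and confirms both order-independent decompositions.

```python
from itertools import product

# Invented model: true weather W, two forecast sources F1 and F2 that are
# conditionally independent given W and share the same sensor model.
P_W = {"sun": 0.7, "rain": 0.3}
P_F_given_W = {"sun": {"good": 0.8, "bad": 0.2},
               "rain": {"good": 0.3, "bad": 0.7}}
U = {("sun", "leave"): 100, ("sun", "take"): 70,
     ("rain", "leave"): 0,  ("rain", "take"): 80}
ACTIONS = ["leave", "take"]
VALS = ["good", "bad"]

def joint(w, f1, f2):
    """P(W, F1, F2) under the conditional-independence assumption."""
    return P_W[w] * P_F_given_W[w][f1] * P_F_given_W[w][f2]

def meu_after_revealing(reveal):
    """MEU when the forecasts indexed by `reveal` (a subset of (0, 1) picking
    F1 and/or F2) will be observed before acting.  Uses the identity
    sum_obs P(obs) * max_a sum_w P(w | obs) U(w, a)
      = sum_obs max_a sum_w P(w, obs) U(w, a),
    so unnormalised joint weights can be summed directly."""
    total = 0.0
    for obs in product(VALS, repeat=len(reveal)):
        weights = {w: sum(joint(w, f1, f2)
                          for f1, f2 in product(VALS, repeat=2)
                          if all((f1, f2)[i] == v for i, v in zip(reveal, obs)))
                   for w in P_W}
        total += max(sum(weights[w] * U[(w, a)] for w in P_W) for a in ACTIONS)
    return total

meu_e = meu_after_revealing(())            # revealing nothing is just MEU(e)

def vpi(reveal):
    """VPI of observing the chosen forecasts, given the current evidence."""
    return meu_after_revealing(reveal) - meu_e

# Nonnegativity: every VPI is >= 0.
assert vpi((0,)) >= 0 and vpi((1,)) >= 0 and vpi((0, 1)) >= 0

# Nonadditivity: VPI of the pair differs from the sum of the individual VPIs
# (the two forecasts are partly redundant).
print(vpi((0, 1)), "vs", vpi((0,)) + vpi((1,)))      # ~11.28 vs ~19.20

# Order independence: both decompositions equal VPI(F1, F2 | e).
dec_jk = vpi((0,)) + (meu_after_revealing((0, 1)) - meu_after_revealing((0,)))
dec_kj = vpi((1,)) + (meu_after_revealing((0, 1)) - meu_after_revealing((1,)))
assert abs(dec_jk - vpi((0, 1))) < 1e-9 and abs(dec_kj - vpi((0, 1))) < 1e-9
```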