Hidden Markov Model Bonus Exercise

(tl;dr: No points, but practice, which is almost the same thing)

Disclaimer: This thread was imported from the old forum, so some formatting may not be displayed correctly. The original thread begins in the second post of this thread.

Hey people!

Last week during my tutorial session, I was asked whether I could provide another exercise on Hidden Markov Models (HMMs). Not for any bonus points, but just for the fun and the practice of it (practicing exercises does tend to translate into exam points, though).
So, attached to this post, you will find hmm_extra.pdf, which poses such an exercise for those of you who want to train their Bayesian magic a little more.

There is no solution for this one, and it’s not necessarily designed to have “clean” results. Since there are no hand-ins and no points involved, feel free to discuss your approach or your results in this thread. I’ll say this much, though: there is only a single hidden variable. The exercise can be read as if there were two (which I noticed after typing it up), but please disregard that.

All the best & don’t forget to be awesome!
Jonas

Attachment:
hmm_extra.pdf: https://fsi.cs.fau.de/unb-attachments/post_156381/hmm_extra.pdf


I have one question regarding the probabilities. Of course we have [m]P(salad | vegan) = 1/6[/m]. But I’m not sure how to handle the “self-cooking” part. Does that mean we should assume [m]P(salad | ¬vegan) = 9/10[/m], because 10% of the time Max brings his own meal no matter what the cafeteria offers? Thanks in advance.


As the task here is designed rather openly, I would personally add another state variable [m]Prepare[/m] and end up with the following CPT:

[m]P(salad | vegan, prepare) = 0[/m]
[m]P(salad | ¬vegan, prepare) = 0[/m]
[m]P(salad | vegan, ¬prepare) = 1/6[/m]
[m]P(salad | ¬vegan, ¬prepare) = 1[/m]

and for [m]Prepare[/m] we simply know that [m]P(prepare) = 1/10[/m] and that there is no transition model (it is drawn independently each day).

At first glance, this might increase the complexity of the inference methods, but if you look into the details of the given tasks, you can more or less ignore [m]Prepare[/m] for anything except the last step of task 1, since in the other tasks you always know whether or not he prepares his own meal.

E.g. for the first task you would obviously need to do filtering. But because you know that Max ate salad on all days, you also know that [m]Prepare[/m] has the value [m]false[/m] on all days. (When he prepares food, he will not eat salad.) So the filtering should work fine.

But I’m not really sure whether this actually works out 100%…
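For anyone who wants to try it, here is a minimal filtering sketch in Python along these lines. It folds [m]Prepare[/m] into the observation likelihoods (the common factor 9/10 cancels after normalization anyway) and assumes the transition figures from later in this thread plus a uniform prior; none of these numbers are verified against hmm_extra.pdf:

```python
# Minimal HMM filtering sketch for the single hidden variable Vegan.
# ASSUMPTIONS (not verified against hmm_extra.pdf): transition figures
# P(vegan | vegan) = 0.3 and P(vegan | ¬vegan) = 0.1, read off the
# 0.9/0.7 numbers later in this thread, and a uniform prior of 0.5.
P_V_GIVEN_V = 0.3      # P(vegan_t | vegan_{t-1})
P_V_GIVEN_NOT_V = 0.1  # P(vegan_t | ¬vegan_{t-1})

# Observing salad implies ¬prepare, so the likelihoods are
# P(salad | vegan) = 9/10 * 1/6 and P(salad | ¬vegan) = 9/10 * 1;
# the common factor 9/10 cancels after normalization.
SALAD_GIVEN_V, SALAD_GIVEN_NOT_V = 1.0 / 6.0, 1.0

def filter_salad_days(prior_vegan: float, days: int) -> float:
    """P(vegan on the last day | salad observed on every one of `days` days)."""
    belief = prior_vegan
    for _ in range(days):
        # Prediction step: push the current belief through the transition model.
        predicted = belief * P_V_GIVEN_V + (1 - belief) * P_V_GIVEN_NOT_V
        # Update step: weight by the salad likelihoods, then normalize.
        numerator = predicted * SALAD_GIVEN_V
        belief = numerator / (numerator + (1 - predicted) * SALAD_GIVEN_NOT_V)
    return belief

print(filter_salad_days(prior_vegan=0.5, days=5))
```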


I basically made “eating prepared food” another piece of evidence xD It has 10% likelihood no matter what [m]Vegan[/m] is, and therefore I’ve set the likelihoods for salad to 9/60 (it might also still be 1/6, I’m not sure whether the 1/6 only applies if he has not brought food himself) and 9/10, respectively.
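If I read that right, it amounts to marginalizing [m]Prepare[/m] into the sensor model, under the assumption that the 1/6 only applies when he does not bring his own food:

```latex
P(\text{salad} \mid \text{vegan}) = P(\neg\text{prepare}) \cdot \frac{1}{6}
  = \frac{9}{10} \cdot \frac{1}{6} = \frac{9}{60}, \qquad
P(\text{salad} \mid \neg\text{vegan}) = \frac{9}{10} \cdot 1 = \frac{9}{10}.
```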

For a) I’ve also done filtering and then multiplied the belief states by the likelihood of salad :slight_smile:

For b) it’s simply prediction.

For c) I have no clue, help? Can I just multiply the daily probabilities given by prediction? That feels wrong >-<


That also sounds quite nice.

Why not just [m]1-P(no vegan for one week)[/m]?

That should just be [m]1-(0.9)^7[/m]…

Or maybe you have to use the prior week without vegan lunch as well; that would be [m]1 - P(no vegan for two weeks) = 1 - (0.9)^14[/m]…

EDIT: What I mean is that for this task we basically ignore the sensor model, as we are asked “What are the chances that […] there will be at least one vegan lunch option next week?”. There is nothing regarding the evidence here.
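Plugging in the numbers, just for reference (assuming the per-day no-vegan probability of 0.9 used above):

```latex
1 - 0.9^{7} \approx 0.52, \qquad 1 - 0.9^{14} \approx 0.77.
```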


I vote for that! That is what I did (with a really basic and simple model). No warranty on correctness!


[m]0.9^5[/m], thankfully we only work Mon-Fri :smiley: But knowing that we had no vegan option on Friday, it would be [m]0.9[/m] on Monday, but [m]0.9^2 + 0.1*0.7[/m] on Tuesday etc., wouldn’t it? Then 1 minus the product of those predictions?
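Here is a small sketch of that recursion, again treating the 0.9/0.7 transition figures from this thread as assumptions rather than the official numbers from hmm_extra.pdf. One caveat it makes visible: the daily predictions are marginals, while “no vegan all week” is a joint probability; by the Markov property the joint conditions each day on “no vegan the day before” and therefore comes out as exactly [m]0.9^5[/m], which is not the same as the product of the marginals.

```python
from math import prod

# ASSUMED transition figures (from this thread, not from hmm_extra.pdf):
P_NV_GIVEN_NV = 0.9  # P(¬vegan_t | ¬vegan_{t-1})
P_NV_GIVEN_V = 0.7   # P(¬vegan_t | vegan_{t-1})

# Day-by-day prediction of P(no vegan option), starting from a known
# "no vegan option on Friday".
p_no_vegan = 1.0
marginals = []
for _ in range(5):  # Mon..Fri
    p_no_vegan = p_no_vegan * P_NV_GIVEN_NV + (1 - p_no_vegan) * P_NV_GIVEN_V
    marginals.append(p_no_vegan)

print(marginals)            # [0.9, 0.88, 0.876, ...] -- matches 0.9^2 + 0.1*0.7 etc.
print(1 - prod(marginals))  # product of the marginals (the approach asked about)

# Joint probability of "no vegan Mon through Fri": each factor conditions on
# "no vegan the day before", so by the Markov property it is simply 0.9**5.
print(1 - 0.9**5)           # P(at least one vegan lunch next week) ~ 0.41
```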