D. Danny's Daily Lunch (15 Points)
ID: 3355279
Question
D. DANNY'S DAILY LUNCH (15 POINTS) Danny's daily lunch choices are modeled by a Markov chain with the transition matrix (rows = today's lunch, columns = tomorrow's lunch):

            Burrito(b)  Falafel(f)  Pizza(p)  Sushi(s)
Burrito(b)     0           0.75       0.25      0
Falafel(f)     0.5         0          0.5       0
Pizza(p)       0.6         0          0         0.4
Sushi(s)       0.1         0.2        0.4       0.3

On Sunday, Danny chooses lunch uniformly at random.
(1) (3 Points) Find the probability that Danny chooses Pizza on the following Tuesday and Friday, and Falafel on Saturday.
(2) (3 Points) In the long run, what is the probability that Danny will have Burrito?
(3) (3 Points) Given that Danny had Pizza on the first day, what is the long-run probability of having Sushi?
(4) (3 Points) On average, how many days pass between every two times Danny orders Burrito?
(5) (3 Points) Assume that Danny eats Burrito on Monday. On average, how many consecutive days (starting Tuesday) does he not eat Sushi?
Explanation / Answer
This question uses a Markov chain, in which a transition matrix governs the move between consecutive days. Because this chain is irreducible and aperiodic, the distribution over lunches converges to a unique steady state after sufficiently many steps.
---- Results and R-code ----
install.packages("markovchain")
library(markovchain)
States = c("1","2","3","4") # 1 = Burrito, 2 = Falafel, 3 = Pizza, 4 = Sushi
byRow = TRUE
lM = matrix(data = c(0,0.75,0.25,0,
0.5,0,0.5,0,
0.6,0,0,0.4,
0.1,0.2,0.4,0.3), byrow = byRow, nrow = length(States), dimnames = list(States,States))
mcl = new("markovchain", states = States, byrow = byRow, transitionMatrix = lM, name = "lunch.")
## Part (1)
initialState = c(0.25,0.25,0.25,0.25)
finalState_Tuesday = initialState*(mcl)^2 # 2 days from Sunday to Tuesday
finalState_Friday = initialState*(mcl)^5 # 5 days from Sunday to Friday
finalState_Saturday = initialState*(mcl)^6 # 6 days from Sunday to Saturday
# P(having Pizza on Tuesday)   = finalState_Tuesday[3]  = 0.26375
# P(having Pizza on Friday)    = finalState_Friday[3]   = 0.2706647
# P(having Falafel on Saturday) = finalState_Saturday[2] = 0.2622496
### NOTE: the three days form one joint event and are NOT independent, so the
### marginals above cannot simply be multiplied. By the Markov property,
### P(Pizza Tue, Pizza Fri, Falafel Sat) = P(Pizza Tue) * (P^3)[Pizza,Pizza] * P[Pizza,Falafel],
### and since P[Pizza,Falafel] = 0 (Pizza is never followed directly by Falafel),
### ANSWER = 0 [ANSWER (1)]
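As an independent cross-check outside R (the `markovchain` package is R-specific), the day-by-day marginal distributions can be reproduced with NumPy. This is an added sketch, not part of the original solution; the state order Burrito, Falafel, Pizza, Sushi matches the matrix above.

```python
import numpy as np

# Transition matrix; rows = today's lunch, columns = tomorrow's lunch
# (order: Burrito, Falafel, Pizza, Sushi)
P = np.array([
    [0.0, 0.75, 0.25, 0.0],
    [0.5, 0.0,  0.5,  0.0],
    [0.6, 0.0,  0.0,  0.4],
    [0.1, 0.2,  0.4,  0.3],
])

v0 = np.full(4, 0.25)  # uniform lunch choice on Sunday

def marginal(n):
    """Distribution over lunches n days after Sunday."""
    return v0 @ np.linalg.matrix_power(P, n)

print(marginal(2)[2])  # P(Pizza on Tuesday)    ~ 0.26375
print(marginal(5)[2])  # P(Pizza on Friday)     ~ 0.2706647
print(marginal(6)[1])  # P(Falafel on Saturday) ~ 0.2622496
```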
## Part (2): Long run probability of having Burrito (Steady state)
steadyState = steadyStates(mcl)
data.frame(probs = matrix(data = steadyState))
#       probs
# 1 0.3100775 ### <-- Long run probability of having Burrito [ANSWER (2)]
# 2 0.2635659
# 3 0.2713178
# 4 0.1550388
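The steady state returned by `steadyStates` can be verified independently by solving pi * P = pi together with sum(pi) = 1. A NumPy sketch (added here as a check, not part of the original R solution):

```python
import numpy as np

P = np.array([
    [0.0, 0.75, 0.25, 0.0],
    [0.5, 0.0,  0.5,  0.0],
    [0.6, 0.0,  0.0,  0.4],
    [0.1, 0.2,  0.4,  0.3],
])

# pi * P = pi  is equivalent to  (P^T - I) pi = 0; append the
# normalization row of ones so the least-squares solution is unique.
A = np.vstack([P.T - np.eye(4), np.ones(4)])
b = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(pi)  # ~ [0.3100775, 0.2635659, 0.2713178, 0.1550388]
```

The exact values are pi = (40/129, 34/129, 35/129, 20/129), which matches the R output to the printed precision.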
## Part (3): Long run probability of having Sushi, given Pizza on the first day
initialState = c(0,0,1,0)
N = 10000
finalState_longRun = initialState*(mcl)^N # Large N for long run probability, given initial state
data.frame(probs = matrix(data = finalState_longRun))
#       probs
# 1 0.3100775
# 2 0.2635659
# 3 0.2713178
# 4 0.1550388 ### <-- Long run probability of having Sushi given Pizza on the first day [ANSWER (3)]
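Because the chain is irreducible and aperiodic, the long-run distribution does not depend on the starting state, so the answer is just the steady-state probability of Sushi. Iterating from the "Pizza" start shows this directly (an added NumPy sketch):

```python
import numpy as np

P = np.array([
    [0.0, 0.75, 0.25, 0.0],
    [0.5, 0.0,  0.5,  0.0],
    [0.6, 0.0,  0.0,  0.4],
    [0.1, 0.2,  0.4,  0.3],
])

v = np.array([0.0, 0.0, 1.0, 0.0])  # Pizza on the first day
for _ in range(1000):               # iterate far past mixing
    v = v @ P

print(v[3])  # long-run P(Sushi) ~ 0.1550388, the same as the steady state
```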
## Part (4): Expected number of days between two consecutive Burrito lunches
Let h(i) be the expected number of days to reach state 1 (Burrito) starting from state i, for i in {2, 3, 4}, and let m be the mean return time to state 1. First-step analysis gives (the "+1" counts the day each step takes):
h(2) = 1 + 0.5*h(3)
h(3) = 1 + 0.4*h(4)
h(4) = 1 + 0.2*h(2) + 0.4*h(3) + 0.3*h(4)
m = 1 + 0.75*h(2) + 0.25*h(3)
### Solving this linear system: h(4) = 3.4, h(3) = 2.36, h(2) = 2.18, so m = 3.225.
### This agrees with the ergodic identity m = 1/pi(Burrito) = 1/0.3100775 = 3.225.
### [ANSWER (4)] = 3.225 days
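With the "+1" per step included, the first-step system is easy to solve numerically. An added NumPy sketch (hypothetical variable names h_f, h_p, h_s for the expected days to reach Burrito from Falafel, Pizza, Sushi):

```python
import numpy as np

# First-step equations, with "+1" counting the day each step takes:
#   h_f = 1 + 0.5*h_p
#   h_p = 1 + 0.4*h_s
#   h_s = 1 + 0.2*h_f + 0.4*h_p + 0.3*h_s
A = np.array([
    [ 1.0, -0.5,  0.0],
    [ 0.0,  1.0, -0.4],
    [-0.2, -0.4,  0.7],
])
h_f, h_p, h_s = np.linalg.solve(A, np.ones(3))

# Mean return time to Burrito: one day passes, then continue from
# wherever the first step landed (Falafel w.p. 0.75, Pizza w.p. 0.25).
m = 1 + 0.75 * h_f + 0.25 * h_p
print(m)  # 3.225 days, matching 1/pi(Burrito) = 129/40
```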
## Part (5): Expected number of consecutive non-Sushi days after a Burrito Monday
Let k(i) be the expected number of days until Danny first eats Sushi, starting from state i, with k(4) = 0. First-step analysis gives:
k(1) = 1 + 0.75*k(2) + 0.25*k(3)
k(2) = 1 + 0.5*k(1) + 0.5*k(3)
k(3) = 1 + 0.6*k(1)
### Solving this linear system: k(1) = 9.5, k(2) = 9.1, k(3) = 6.7.
### Starting from Burrito on Monday, the first Sushi therefore comes on average 9.5 days later;
### counting the Sushi day itself as the end of the run, he goes on average
### 9.5 - 1 = 8.5 consecutive days (starting Tuesday) without Sushi.
### [ANSWER (5)]
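The same first-step approach solves Part (5) numerically. An added NumPy sketch (hypothetical names k_b, k_f, k_p for the expected days until the first Sushi, starting from Burrito, Falafel, Pizza):

```python
import numpy as np

# First-step equations (k = 0 once Sushi is reached):
#   k_b = 1 + 0.75*k_f + 0.25*k_p
#   k_f = 1 + 0.5*k_b  + 0.5*k_p
#   k_p = 1 + 0.6*k_b          (with prob. 0.4, Sushi is reached the next day)
A = np.array([
    [ 1.0, -0.75, -0.25],
    [-0.5,  1.0,  -0.5 ],
    [-0.6,  0.0,   1.0 ],
])
k_b, k_f, k_p = np.linalg.solve(A, np.ones(3))

print(k_b)  # 9.5 days from a Burrito day until the first Sushi
```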
PLEASE NOTE: the first-step equations for Parts (4) and (5) are solvable only when each equation includes the "+1" term for the day consumed by the step; without it, the system reduces to one with no meaningful solution. With the "+1" included, both systems are solvable and consistent with the steady-state distribution.