Now, let us consider the h-step ahead prediction under the mixture AR model. Based on what we have learned, this should be a fairly easy task: we simply extend our method for obtaining in-sample predictions under the mixture model to h-step ahead prediction, following exactly the same approach as the function we used for the single AR model in the first module.

Suppose the s-th posterior sample of all model parameters is denoted by $\omega^{(s)}$, $\beta^{(s)}$, and $\nu^{(s)}$, and suppose we want to obtain the s-th posterior sample of the h-step ahead prediction, $y_{T+h}^{(s)}$. To do this, we first need to sample a latent configuration variable corresponding to it, which we denote $L_{T+h}^{(s)}$. It is sampled from a discrete distribution with the weights $\omega_1^{(s)}, \ldots, \omega_K^{(s)}$ as its parameters. Once we have this configuration variable, $y_{T+h}^{(s)}$ just needs to be sampled from a normal distribution:

$$L_{T+h}^{(s)} \sim \text{Discrete}\left(\omega_1^{(s)}, \ldots, \omega_K^{(s)}\right), \qquad y_{T+h}^{(s)} \mid L_{T+h}^{(s)} \sim N\left(\mathbf{f}_{T+h}^{(s)\top}\, \beta^{(s)}_{L_{T+h}^{(s)}},\; \nu^{(s)}\right).$$

Here the design vector is $\mathbf{f}_{T+h}^{(s)} = \left(y_{T+h-1}^{(s)}, \ldots, y_{T+h-p}^{(s)}\right)^\top$. If a subscript $T+h-j$ is bigger than $T$, that entry is one of our earlier posterior predictive samples; if the subscript is less than or equal to $T$, that entry comes from the original data.

In terms of coding, we can use the function below. As before, the function has two inputs, h.step and s, where h.step is the number of steps ahead we need to predict and s denotes the s-th posterior sample. First, we take the s-th posterior sample of the weights, AR coefficients, and variance. Then, supposing our model is a mixture of AR(2) models, we initialize y.cur with the last two observations of our time series y.sample; if it is a mixture of AR models of another order, we just change this initialization so that the number of components equals the AR order p. We define a y.pred vector to store the predictions. Then comes the sequential update procedure. We first sample a configuration variable, called k.use here, from the discrete distribution, and then select the corresponding beta from the big beta vector; this Beta.cur is just the $\beta^{(s)}_{L_{T+h}^{(s)}}$ in the formula above. A new sample is then drawn from the normal distribution, stored in the h-th slot of y.pred, and y.cur is updated, which completes the loop. Finally, after all h.step loops, we obtain the s-th posterior sample of all the h-step ahead predictions. The output is a vector of length h.step, and its k-th slot is the s-th posterior sample of the k-step ahead prediction.
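Since the code itself is not reproduced in this transcript, here is a minimal sketch of a function of this form. It assumes a mixture of $K$ AR(2) components and a particular storage layout for the MCMC output: omega.sample, beta.sample, and nu.sample are assumed names for the stored posterior samples, while h.step, s, y.sample, y.pred, y.cur, k.use, and Beta.cur follow the names used in the lecture.

```r
## A sketch of the h-step ahead prediction function, assuming this
## (hypothetical) storage of the posterior samples:
##   y.sample:     the observed time series y_1, ..., y_T
##   omega.sample: S x K matrix; row s holds omega_1^(s), ..., omega_K^(s)
##   beta.sample:  S x (K * p) matrix; row s stacks beta_1^(s), ..., beta_K^(s)
##   nu.sample:    length-S vector of posterior samples of the variance

prediction <- function(h.step, s) {
  p <- 2                                   # AR order of each mixture component
  K <- ncol(omega.sample)                  # number of mixture components
  omega.cur <- omega.sample[s, ]           # s-th posterior sample of the weights
  nu.cur    <- nu.sample[s]                # s-th posterior sample of the variance
  T.len <- length(y.sample)
  y.cur  <- rev(y.sample[(T.len - p + 1):T.len])  # last p observations, most recent first
  y.pred <- rep(NA, h.step)                # storage for the h-step ahead predictions

  for (h in 1:h.step) {
    ## sample the latent configuration L_{T+h}^{(s)} from the discrete distribution
    k.use <- sample(1:K, size = 1, prob = omega.cur)
    ## select the corresponding beta_{L}^{(s)} from the big stacked beta vector
    Beta.cur <- beta.sample[s, ((k.use - 1) * p + 1):(k.use * p)]
    ## draw y_{T+h}^{(s)} from N(f' beta, nu)
    y.new <- rnorm(1, mean = sum(y.cur * Beta.cur), sd = sqrt(nu.cur))
    y.pred[h] <- y.new                     # store in the h-th slot of y.pred
    y.cur <- c(y.new, y.cur[-p])           # shift the design vector forward one step
  }
  y.pred                                   # vector of length h.step
}
```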
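As a hypothetical usage example under the same assumed storage layout, pooling the output over all S posterior samples recovers the posterior predictive distribution at every horizon:

```r
## collect all S posterior samples of the 1- to 5-step ahead predictions
S <- nrow(omega.sample)
y.pred.all <- sapply(1:S, function(s) prediction(h.step = 5, s = s))  # 5 x S matrix
```

Each row h of y.pred.all then contains all S posterior samples of the h-step ahead prediction, from which posterior means and credible intervals can be computed.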