[SOUND] Nice to see you again. In this lecture, I will introduce the parallel operator and describe how processes can communicate. We can model parallel behavior between processes with the parallel operator, also called the merge. The parallel operator is written as two vertical bars placed between two processes. So if we want to put the process a·b in parallel with c·d, we write it down as you see here on the screen. This is the process a·b, and this is the process c·d. How does it behave? Well, recall that actions are atomic. In the initial state we can do the behavior of the left side, an a followed by a b, or we can do the behavior of the right side, the c followed by the d. But we can also do the behavior in any arbitrary order. So after the a, we can do the c directly, and after the c, we can do the a directly. What you see here is quite important, namely the typical diamond shape that you get with parallel behavior. We can simply continue finishing all the different interleavings of the actions, and then we get this diagram, which contains every ordering of the actions of the process at the left and the process at the right. The parallel composition only terminates if both processes terminate, so if the process at the left terminates and the process at the right terminates. That is why we only have a termination symbol here at the end. In a state, for instance this initial state, we can do an a and a c, but we can also do the a and the c at the same time. Then we get a multiaction. So here we have the multiaction a|c, indicating the possibility that a and c happen at exactly the same instant in time. In the same way, we have the multiactions b|c, a|d, and b|d. This is the total behavior of the parallel composition of the process a·b in parallel with c·d. How can we axiomatize the parallel operator?
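To make the diamond concrete, here is a small Python sketch (my own encoding, not mCRL2 notation) that enumerates the states and transitions of a·b ∥ c·d, including the synchronous multiactions:

```python
from itertools import product

# Each sequential process is a list of actions; a state is an index into it.
P = ["a", "b"]   # the process a.b
Q = ["c", "d"]   # the process c.d

def steps(proc, i):
    """The single step possible in state i of a sequential process, if any."""
    return [(proc[i], i + 1)] if i < len(proc) else []

transitions = []
for i, j in product(range(len(P) + 1), range(len(Q) + 1)):
    for a, i2 in steps(P, i):                    # interleave: left step first
        transitions.append(((i, j), a, (i2, j)))
    for b, j2 in steps(Q, j):                    # interleave: right step first
        transitions.append(((i, j), b, (i, j2)))
    for a, i2 in steps(P, i):                    # both at once: a multiaction
        for b, j2 in steps(Q, j):
            transitions.append(((i, j), a + "|" + b, (i2, j2)))

states = {(i, j) for i in range(len(P) + 1) for j in range(len(Q) + 1)}
print(len(states), len(transitions))   # 9 states, 16 transitions
```

The 3×3 grid of states is exactly the diamond from the slide, extended with the diagonal multiaction transitions such as a|c from the initial state.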
Well, people first tried to axiomatize the parallel operator using only choice, sequential composition, actions, and the parallel operator itself. But it turned out that this does not work well, and it was in fact proved that you cannot axiomatize the parallel operator without auxiliary operators. So we introduce two auxiliary operators, namely the leftmerge and the communication merge. How does this work? Well, if we have these two processes, the a and the b in parallel, then we know that we can first do an a, a step from the left side; or we can first do a b, a step from the right side; or we can do the multiaction a|b. This is reflected in the right-hand side of this axiom. The leftmerge is just the merge, except that the first action must come from the left. The communication merge is also just an ordinary merge, except that the first action must be a multiaction coming from both components. So how can we axiomatize the leftmerge? Well, let us look at all cases. If we take alpha leftmerge x, with alpha a multiaction, then the first action must come from the left side, so alpha must be done first. You can see that this must be equal to alpha followed by x. If the left-hand side of the leftmerge is a deadlock, so no action can be done, then the first action cannot come from the left side, and the whole process behaves as a deadlock. If the left-hand side is a multiaction alpha followed by x, then of course the behavior is alpha followed by x in parallel with y. Here you see that the leftmerge only differs from the merge in its first action; after the first action, it just behaves as the parallel operator again. We also see that the leftmerge distributes over the choice if the choice occurs at the left. And because the sum operator is just a generalization of the choice operator, the leftmerge also distributes over the sum operator.
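In symbols, writing ⌊ for the leftmerge and | for the communication merge (as in mCRL2-style notation), the expansion of the merge and the leftmerge axioms just discussed can be summarized as:

```latex
\begin{align*}
x \parallel y            &= x \lfloor y + y \lfloor x + x \mid y \\
\alpha \lfloor x         &= \alpha \cdot x \\
\delta \lfloor x         &= \delta \\
(\alpha \cdot x) \lfloor y &= \alpha \cdot (x \parallel y) \\
(x + y) \lfloor z        &= x \lfloor z + y \lfloor z
\end{align*}
```

Here α is a multiaction and δ is deadlock; the first equation is the expansion axiom that splits the merge into a left step, a right step, and a synchronization.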
Okay, what are the axioms for the communication merge? The communication merge is commutative and associative. If you put the empty multiaction, or the internal action tau, in a communication merge with a process x, then it adds nothing, so the result is equal to x. If you put the deadlock beside an action in the communication merge, then the first step must come from both sides, but deadlock cannot do a step, so the result is equal to deadlock. If there are two actions in front, an alpha at the left and a beta at the right, then they must synchronize into the multiaction alpha|beta. The same happens if we have the process alpha·x in a communication merge with beta·y: alpha and beta in combination form the first step, and after that we have the parallel operator. The communication merge distributes over the choice and over the sum operator. Okay, let's go back to our parallel process and ask ourselves the following. Suppose a·b is a small buffer, reading data at a and delivering it at b, and the process c·d also represents a small buffer. Then we may want to describe the process that reads something at a, hands the data delivered at b over to the next buffer reading it at c, and then ultimately delivers it at d. So what we want is that b and c communicate. But if you look at the total behavior as we see here, there is no trace of communication or of handing over data at all. So how do we model communication? We want b and c to happen at the same time, and we say that the multiaction b|c should become the action e, which is the result of the communication. In the behavior, this means that b|c is replaced by e. And although we now have a communication, we still have far too many actions. Namely, we still see the action c, which can happen on its own, and the action b, which can happen on its own, without properly delivering or reading the information from the other buffer.
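In the same notation, with α and β multiactions, τ the empty multiaction, and δ the deadlock, the communication-merge axioms described above read:

```latex
\begin{align*}
x \mid y                 &= y \mid x \\
(x \mid y) \mid z        &= x \mid (y \mid z) \\
\tau \mid x              &= x \\
\delta \mid \alpha       &= \delta \\
\alpha \mid \beta        &= \alpha\,\beta \\
(\alpha \cdot x) \mid (\beta \cdot y) &= (\alpha\,\beta) \cdot (x \parallel y) \\
(x + y) \mid z           &= x \mid z + y \mid z
\end{align*}
```

Here αβ denotes the multiaction combining the actions of α and β; this is only a summary of the cases mentioned in the lecture, not the complete axiom list.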
So what we want to do is say that all the superfluous actions should be blocked, and that we can only see the actions a, e and d. So only a, e and d should be allowed to happen, and for this we introduce the allow operator. The allow operator gets a number of multiactions, or actually labels of multiactions: there is no data in these actions. In this case a, e and d, and it says that only a, e and d can happen, and all other actions are blocked. So the action c is blocked. This multiaction is not in the list of the allow operator, so it is blocked. c is blocked here, b is blocked, c is blocked, and you can see that quite a number of actions are actually blocked. These actions cannot happen and should be removed. And if we remove them, we see that a number of states become unreachable. So this state here is unreachable, and this one, that one, that one, that one. And if we remove all unreachable states and transitions and remove all blocked transitions, this is what remains. You can see that this is exactly what we would like to have. Namely, we can read something at a, then it is communicated to the next buffer by doing the action e, and then it is delivered by doing a d. Okay, now the axioms for the allow operator. These follow a relatively simple pattern. If you apply the allow operator to a multiaction, then the result is that multiaction, provided the multiaction occurs in the set phi that is the argument of the allow operator. To be more precise, if we have a multiaction whose individual actions carry data, then the labels of the multiaction should appear as a multiaction in the set phi. And we can never block the action tau: tau is not blockable by the allow operator; it is always allowed to pass. If alpha is not part of phi, then it is blocked, so it is renamed to the deadlock delta. And if you apply the allow operator to delta, it of course remains delta.
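The two-buffer example can be replayed mechanically. Here is a Python sketch (again my own encoding, not mCRL2 syntax) that builds the behavior of a·b ∥ c·d, applies the communication b|c → e and the allow set {a, e, d}, and then keeps only the reachable part:

```python
P, Q = ["a", "b"], ["c", "d"]      # two one-place buffers
comm = {("b", "c"): "e"}           # communication: multiaction b|c becomes e
allowed = {"a", "e", "d"}          # the allow operator's set of labels

def steps(proc, i):
    """The single step possible in state i of a sequential process, if any."""
    return [(proc[i], i + 1)] if i < len(proc) else []

def out(state):
    """Outgoing transitions of a state, after communication and allow."""
    i, j = state
    trans = []
    for a, i2 in steps(P, i):                       # left step
        trans.append((a, (i2, j)))
    for b, j2 in steps(Q, j):                       # right step
        trans.append((b, (i, j2)))
    for a, i2 in steps(P, i):                       # synchronous multiaction
        for b, j2 in steps(Q, j):
            label = comm.get((a, b), a + "|" + b)   # apply communication
            trans.append((label, (i2, j2)))
    return [(l, t) for (l, t) in trans if l in allowed]  # apply allow

# Reachability from the initial state: blocked transitions are gone,
# so the states behind them are never visited.
frontier, seen, edges = [(0, 0)], {(0, 0)}, []
while frontier:
    s = frontier.pop()
    for label, t in out(s):
        edges.append((s, label, t))
        if t not in seen:
            seen.add(t)
            frontier.append(t)

print(sorted(l for (_, l, _) in edges))   # ['a', 'd', 'e']
print(len(seen))                          # 4 reachable states
```

Only the chain a, then e, then d survives, which is exactly the hand-over behavior we wanted from the two coupled buffers.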
And then we get some rules that are the same for all kinds of action renaming operators, and the allow operator is in some sense an action renaming operator. They say that it distributes over the choice operator, over the sequential composition operator, and over the sum operator. We have already seen the communication operator, and we will also see the encapsulation operator, the hiding operator and the renaming operator; they all have a similar structure when it comes to their axioms. Okay, let's look at an exercise. If you look at the process a·b·c in parallel with d·e·f, how many states are there in the resulting transition system? The second exercise: if you have this process a·b·c in parallel with d·e·f, and I apply to it the communication operator that lets b and c communicate to g, can this process then perform a g action somewhere in its behavior? And the last exercise: I have this process with an allow operator around the communication operator, so I allow g, and I let b and e communicate to g. The question is: can this process perform a g action? So what did we do? We introduced the parallel operator. We showed that we need the leftmerge and the communication merge to axiomatize it. And we introduced the communication operator and the allow operator, such that parallel processes can also communicate. In the next lecture, I will show the hiding, encapsulation and renaming operators. Thank you for watching.