CHAPTER 04.06: GAUSSIAN ELIMINATION: Gaussian Elimination With Partial Pivoting: Theory

 

I'm going to show you how to do Gauss elimination with partial pivoting; we're going to talk about the theory behind it.  One thing I would suggest to you strongly is that if you have not looked at the Naive Gaussian method theory and example, you do that before you start on Gauss elimination with partial pivoting.  The reason is that there is only a small difference between Gauss elimination with partial pivoting and the Naive Gaussian method, and to be able to understand what that difference is, and also because we'll be skipping the steps of Gauss elimination with partial pivoting that are identical to the Naive Gaussian method, you do need to understand the Naive Gaussian method first.  Now, the reason we talk about Gauss elimination with partial pivoting is that the Naive Gaussian method has certain pitfalls: one is the possibility of division by zero, and the other is that you can get large round off errors.  Those are the two pitfalls of the Naive Gaussian method when we start using it to solve simultaneous linear equations.  Now, one thing we could do about them is increase the number of significant digits: we can go from single precision to double precision, or to quad precision (extended double precision, or extended precision, as some people may call it).  That will surely decrease the round off error, but it is not going to avoid division by zero, because no matter how many significant digits you choose, if a division by zero is going to occur, it is going to occur anyway.  So instead, we avoid these pitfalls by using Gauss elimination with partial pivoting, which avoids the division by zero and also reduces the round off error.
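To see both pitfalls concretely, here is a minimal sketch in Python with NumPy; it is not the lecture's own code, and the 2-by-2 systems and the function name naive_gauss_2x2 are my own illustration, assuming float64 arithmetic.

```python
import numpy as np

def naive_gauss_2x2(A, b):
    """Naive forward elimination and back substitution: no row switching."""
    A = A.astype(float)
    b = b.astype(float)
    m = A[1, 0] / A[0, 0]              # breaks down if the pivot A[0, 0] is 0
    A[1] = A[1] - m * A[0]
    b[1] = b[1] - m * b[0]
    x2 = b[1] / A[1, 1]
    x1 = (b[0] - A[0, 1] * x2) / A[0, 0]
    return np.array([x1, x2])

# Pitfall 1: the pivot is exactly zero, so the multiplier m divides by zero.
A1 = np.array([[0.0, 1.0], [1.0, 1.0]]); b1 = np.array([1.0, 2.0])

# Pitfall 2: the pivot is tiny rather than zero; the huge multiplier m
# swamps the other entries and the computed answer is badly wrong.
A2 = np.array([[1.0e-16, 1.0], [1.0, 1.0]]); b2 = np.array([1.0, 2.0])
print(naive_gauss_2x2(A2, b2))   # roughly [2.22, 1.0]; true answer is about [1, 1]
```

Switching the two rows of the second system before eliminating, which is exactly what partial pivoting does, gives an answer very close to the true one.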

 

So Gauss elimination with partial pivoting helps us in both instances: it avoids the division by zero and it reduces the round off error.  Now somebody might say, hey, is it possible that we'll get division by zero even in Gauss elimination with partial pivoting?  Yes, it is possible, but that will imply that the coefficient matrix you have is not invertible, that you don't have a unique solution to the set of equations; that's what it means if you are not able to avoid division by zero with Gauss elimination with partial pivoting.  So I'm going to assign this as homework: see whether you can come up with a set of equations where you are not able to avoid division by zero, whether you are doing Gauss elimination with partial pivoting or the Naive Gaussian method.
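If you want to experiment with that homework numerically, here is a small sketch of the check involved; the function name best_pivot and the tolerance are my own choices, and the 3-by-3 system below is just one singular example, not the lecture's.

```python
import numpy as np

def best_pivot(A, k, tol=1.0e-12):
    """Return the row p >= k holding the largest |A[p, k]|; if even that
    largest entry is numerically zero, no row switch can save the pivot."""
    p = k + np.argmax(np.abs(A[k:, k]))
    if abs(A[p, k]) < tol:
        raise ValueError("no nonzero pivot in column %d: matrix is singular" % k)
    return p

# Rows two and three are multiples of row one, so the matrix is singular.
A = np.array([[2.0, 1.0, 1.0],
              [4.0, 2.0, 2.0],
              [6.0, 3.0, 3.0]])
A[1] -= (A[1, 0] / A[0, 0]) * A[0]   # first step of forward elimination
A[2] -= (A[2, 0] / A[0, 0]) * A[0]
# best_pivot(A, 1) now raises: every candidate pivot in the second
# column (k = 1 in 0-based indexing) is zero.
```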

 

So what is different about partial pivoting?  This is the only thing that is different between Gauss elimination with partial pivoting and the Naive Gaussian method, so it is very important for you to realize what the difference between the two is.  At the kth step of forward elimination, what you do is you find the maximum of certain absolute values.  Now, before I talk about what we mean by that, you've got to understand there are n-minus-1 steps of forward elimination: if there are a hundred equations, let's suppose, you'll have ninety-nine steps of forward elimination, and you'll have to go through this row-switching process before each step of forward elimination, not just the first step or the second step or the third step, but before the beginning of every step.  So let's suppose we are at the kth step of forward elimination.  We look at the element in the kth row, kth column, which is the diagonal element, and then we look at every element below it in the same column, all the way down to the element in the nth row.  We take the absolute values of those elements, which is why you see those absolute-value marks there, and once we have the absolute values, we figure out, hey, in which row is the maximum of them.  Let's suppose the maximum of all these absolute values is in the pth row.  Then what you're going to do is switch rows p and k: you're going to make row p the row k, and row k the row p.  Keep in mind that you're only switching rows p and k; you're not moving any other rows at all, you're just doing a clean switch between rows p and k.  And this you do at the beginning of each step of forward elimination, and then you follow exactly the same steps of forward elimination as in the Naive Gaussian method.

So as an example with symbols, let's suppose we are at the beginning of the second step of forward elimination.  That means you're going to take the element in the second row, second column as your pivot element, and you're going to take its absolute value, the absolute value of a32, the absolute value of a42, and so on and so forth, all the way up to an2.  You take the absolute values of all of these elements in the second column, in the second row and below, and then you find the maximum of them.  Let's suppose the maximum turns out to be in the fourth row, that a42 has the largest absolute value of them all.  If that is the case, we're going to switch row four with row two; that's all we're going to do, every other row will stay in place.  So the whole of row four is switched with row two, and then you conduct the steps of forward elimination like we have done before; nothing changes about that.
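Here is a minimal sketch of that one distinguishing step in Python with NumPy; the function name partial_pivot is mine, not the lecture's, and note that the code counts rows and columns from 0 while the lecture counts from 1.

```python
import numpy as np

def partial_pivot(A, b, k):
    """Before the kth elimination step, find the row p >= k with the
    largest |A[p, k]| and switch rows p and k in both A and b."""
    p = k + np.argmax(np.abs(A[k:, k]))   # row of the largest |entry| in column k
    if p != k:
        A[[k, p]] = A[[p, k]]             # a clean switch of rows k and p only
        b[[k, p]] = b[[p, k]]             # the right-hand side rows switch too
```

Everything after this call proceeds exactly as in the forward elimination of the Naive Gaussian method.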

 

So let's look at an example here.  We are at the beginning of the second step of forward elimination.  That means I'm going to take the element in the second row, second column, and take its absolute value and the absolute value of every element below it in the second column; basically I'm taking the absolute values of all the second-column elements in the second row and below.  Those absolute values turn out to be 7, 4, 9, and 17.  So which one is the maximum?  17 is the maximum.  The 17 sits in row five, and the pivot position is row two, so I'm going to do a switch: I'll have to switch row five with row two, and take what was row two down to the fifth row.  That's what I have to do at the beginning of this step of forward elimination in this example.  Now keep in mind that when you are switching the rows, you need to switch not only the coefficient-matrix part of each row, but also the right-hand-side value, because the equations should not change when you switch the rows.  Once that switch has taken place, so that row five has become row two and row two has become row five, you conduct the same steps of forward elimination of the Naive Gaussian method as before; that has not changed at all.  That's why I'm emphasizing that before you go through this presentation you should have looked at the Naive Gaussian method, to be able to figure out what we are trying to do here.  So Gauss elimination with partial pivoting is, again, a method to solve simultaneous linear equations, n equations in n unknowns, and it has two steps, just like the Naive Gaussian method: forward elimination and back substitution.
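To mirror that example, here is the switch on a hypothetical 5-by-5 system; only the second-column magnitudes 7, 4, 9, and 17 below the diagonal are taken from the example above, and every other entry (and the right-hand side) is made up for illustration.  It reuses the partial_pivot sketch from before.

```python
import numpy as np

# Hypothetical state at the beginning of the second step: the first
# column below the diagonal is already zero from the first step.
A = np.array([[10.0,  5.0, 1.0, 3.0, 2.0],
              [ 0.0, -7.0, 6.0, 1.0, 4.0],
              [ 0.0,  4.0, 2.0, 5.0, 1.0],
              [ 0.0,  9.0, 3.0, 2.0, 6.0],
              [ 0.0, 17.0, 1.0, 4.0, 3.0]])
b = np.array([1.0, 2.0, 3.0, 4.0, 5.0])

partial_pivot(A, b, k=1)   # the second step, in the lecture's 1-based counting
# |17| beats |-7|, |4|, and |9|, so row five is switched with row two,
# in both A and b; every other row stays in place.
print(A[1], b[1])          # the old row five, now sitting in row two
```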

 

The forward elimination part is the same as in the Naive Gaussian method, except for the fact that we switch rows at the beginning of each of the n-minus-1 steps of forward elimination, as I told you before.  As an example, at the beginning of the second step of forward elimination we again look at the maximum absolute value in the second column, in the second row and below, and depending on where that maximum is, we switch that row with row number two.  Doing that before every step, what we get at the end of forward elimination is all zeros below the diagonal: the second column is made zero below the second row, the third column below the third row, and so on, which makes the coefficient matrix an upper triangular matrix that we can solve by back substitution, just like in the Naive Gaussian method.  The back substitution steps are exactly the same as in the Naive Gaussian method.  The only difference between Gauss elimination with partial pivoting and the Naive Gaussian method is in the forward elimination steps, and that, too, is only at the beginning of each step; the back substitution steps in the two methods are exactly the same, so there's no need to talk too much about them.  We solve the last equation for the last unknown, then go backwards to the last-but-one equation, and keep going backwards to find the remaining unknowns.  The coefficient matrix being upper triangular is what translates the back substitution steps into solving only one equation, one unknown, at a time.
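Since back substitution is unchanged, a minimal sketch of it, again my own Python rather than the lecture's code, and assuming forward elimination has already produced the upper triangular matrix U and the updated right-hand side c, looks like this:

```python
import numpy as np

def back_substitute(U, c):
    """Solve U x = c for upper triangular U, one unknown at a time,
    starting from the last equation and working backwards."""
    n = len(c)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):                       # last equation first
        x[i] = (c[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x
```

And that's the end of this segment.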