Are we on? I can't see. It looks kind of dark -- a little dim there. All right. So today -- assuming this is working, or even if it's not -- we are going to spend a little bit of time over the next couple of days talking about linear systems, particularly linear time-invariant systems, because those are the ones most naturally associated with the Fourier transform and can be understood and analyzed -- some aspects of them -- in terms of the Fourier transform. But before doing that, we want to talk about the general setup -- the idea of linear systems in general -- and some of their general properties, fascinating as they are. Now, it's a pretty limited treatment that we're going to do, so I would say this is more an appreciation than anything like a detailed study. It's a vast field, and in many ways I think it was one of the defining fields of the 20th century. I even made this bold statement in the notes: the 20th century was, in a lot of ways, a century of linearity. The 21st century -- I say this as a sweeping, bold statement, but I stand by it -- may be the century of non-linearity. We don't know yet, but non-linear problems are becoming increasingly tractable because of computational techniques. One of the reasons linear problems were studied so extensively and were so useful is that a lot could be done theoretically even if you couldn't compute; and later on, when the computational power was there, they could be exploited even more. What I want to get to is the connection between the Fourier transform and linear systems. We definitely want to see how the Fourier transform applies to linear systems, again in a fairly limited way, and here the main ideas are the impulse response and the transfer function. These are the major topics that I want to be sure we hit. The impulse response and the transfer function are terms we've actually used already, but now we're going to see them a little more systematically and a little more generally. And they're probably terms and ideas you've run across before if you've had some of this material earlier in signals and systems. The other thing -- again, somewhat limited, maybe to a lesser extent -- is to talk a little bit about complex exponentials appearing as eigenfunctions of certain linear systems -- time-invariant systems. So we'll put that up here: complex exponentials as eigenfunctions of linear time-invariant systems. I'll explain the term later if you haven't heard it, although I suspect many of you have. All right. So this is a preview of the main things we want to discuss. But before doing that, I have to do a certain amount of background work and frame things in somewhat general terms. So let's get the basic definitions in the picture. First, the basic definition of a linear system. A linear system, for us, is a method of associating an output to an input that satisfies the principle of superposition. All right? It's a very general concept. It's a mapping from inputs to outputs -- in other words, a function. But this is the engineering terminology usually associated with it.
Outputs that satisfy the principle of superposition. And you know what that is, but I will write it down. Super -- not supervision -- superposition. I'll get it. So you think of the linear system L as a black box. It takes an input v to an output w, and to say it satisfies the principle of superposition says that if you add the inputs, then the outputs also add, and if you scale the inputs, then the outputs also scale. That is, L(v1 + v2) -- whatever their nature -- is Lv1 + Lv2, and L(alpha v) is alpha times Lv. By the way, it's a common convention, when you're dealing with linear systems, not to write the parentheses, because it's supposed to be reminiscent of matrix multiplication, where you don't always write the parentheses when you're multiplying by a matrix. As a matter of fact, I'll have more to say about that in just a little bit. All right? That's the definition of linearity. To say that it's a system is just to say that it's a mapping from inputs to outputs. That doesn't really say very much -- everything we study is a mapping from inputs to outputs -- but this extra condition of linearity is what makes it interesting. It took a long time before this simple principle was isolated for special attention, but it turned out to be extremely valuable. Nature provides you with many varied phenomena, and to make some progress you have to somehow isolate what's common to the various phenomena. And in the applications of mathematics, you want to turn that around: take what you observe and turn it into a definition. The definition that came from studying many different phenomena in many different contexts was this simple notion of linearity, or superposition -- same thing. All right? It really is quite striking how fundamental and important these simple conditions turned out to be in so many different contexts. That almost defines a lot of the practical applications of mathematics in the 20th century -- isolating systems that satisfy this sort of property. Now, there are additional properties a system might satisfy, and we'll talk about some of them, but the basic property of superposition is the one that really started the whole ball rolling. Okay? I should say, as an extension of this, for finite sums: if I take L applied to the sum from i = 1 to N of alpha_i v_i -- a linear combination of the inputs -- then linearity says that a linear combination of the inputs goes to the same linear combination of the outputs: the sum from i = 1 to N of alpha_i L(v_i). Now it's also true, in most cases, that this extends to infinite sums. But any time you deal with infinite sums, you have to deal with questions of convergence and extra properties of the operator -- we are not going to make a big deal out of this. I won't tell you anything that's not true, I hope, but I'm not always going to state the assumptions carefully. You can extend these ideas to infinite sums and even to integrals, which we will talk about a little bit. But generally that requires additional assumptions on the operator L -- which, again, I'm not going to make explicit -- and usually those assumptions are fairly mild and are going to be satisfied in any real application.
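A minimal numerical sketch of the superposition test, assuming NumPy is available; the helper name is_superposition and the particular example systems are hypothetical, chosen only to illustrate the check of additivity and homogeneity.

```python
import numpy as np

def is_superposition(L, v1, v2, alpha=2.5, tol=1e-9):
    """Numerically check additivity and homogeneity for a candidate system L."""
    additive = np.allclose(L(v1 + v2), L(v1) + L(v2), atol=tol)
    homogeneous = np.allclose(L(alpha * v1), alpha * L(v1), atol=tol)
    return additive and homogeneous

v1 = np.random.randn(8)
v2 = np.random.randn(8)

print(is_superposition(lambda v: 3.0 * v, v1, v2))   # scaling by a constant -> True
print(is_superposition(lambda v: v**2, v1, v2))      # squaring is not linear -> False
```

This is only a spot check on particular inputs, of course, not a proof of linearity.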
The basic assumption you often make, again without really talking about it in detail, is continuity -- some sort of continuity property. Any time limiting operations are involved -- we've seen this in a number of instances -- there has to be some extra assumption on the operations you're working with, and it's generally some sort of continuity assumption that allows you to take limits. So you assume some kind of continuity. But the problem is defining what continuity means here, and so on, and I'm not going to get into it. It's not going to be an issue for us, but I thought I ought to mention it, to be honest. Now, what's a basic example? Any time you learn a new concept -- or even revisit a familiar one -- you should have examples in mind. What is an example of a linear system? There is actually only one example of a linear system. They're all the same. It is the relationship of direct proportionality: the output is directly proportional to the input, L(v) = alpha v. All right? That is certainly linear; it certainly satisfies the properties of superposition. L(v1 + v2) is alpha(v1 + v2), which is alpha v1 + alpha v2, so that's L(v1) + L(v2). And likewise, if I scale -- I already called the constant alpha, so say L(a v) = a L(v), for the same reason. All right? The relationship of direct proportionality is the prototype -- the archetype -- for a linear system. In fact, it's the only example. All linear systems can essentially be understood in terms of direct proportionality. That's one of the things I want to convince you of -- one of the things I want to try to explain. It's the only example. That's a bold statement, but I stand by it -- maybe a little shakily, but I stand by it. All linear systems trace [inaudible] back somehow to the operation of direct proportionality. All right? So don't lose sight of that. Now, it can look very general. Direct proportionality is also known as multiplication, so any system that is given by multiplication is a linear system. A little more generally: you can think of multiplying by a constant, but if your signal is not a constant but a function of t, or a function of x, then I can multiply it by another function: L(v)(t) = alpha(t) v(t). Okay? The constant of proportionality doesn't have to be constant -- it can depend on t. But nevertheless the relationship is one of direct proportionality, and for the same simple reason as up here, that defines a linear system. And there are many practical examples of that. A switch! A switch can be modeled as a linear system. If it's on for a certain duration of time, then that's multiplication by a rectangle function of a certain duration. So, e.g., a switch: L(v)(t) is a rectangle function of duration a times v(t). You switch on for duration a, then you switch off. Now, you don't necessarily think of flipping a switch as a linear operation, but it is. Why? Because it's multiplication.
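A small sketch of the "switch" idea, assuming NumPy; the time grid, the rect function, and the duration a are made-up illustration values, not anything fixed by the lecture.

```python
import numpy as np

t = np.linspace(-2, 2, 401)          # time grid (illustrative)
a = 1.0                              # switch is "on" for |t| < a/2

def rect(t, a):
    """Rectangle function of duration a: 1 on the interval, 0 outside."""
    return np.where(np.abs(t) < a / 2, 1.0, 0.0)

def switch(v):
    """The switch system: pointwise multiplication by rect -- direct proportionality at each t."""
    return rect(t, a) * v

v1, v2 = np.sin(2 * np.pi * t), np.exp(-t**2)
# Superposition holds because the operation is multiplication:
assert np.allclose(switch(v1 + v2), switch(v1) + switch(v2))
assert np.allclose(switch(4.0 * v1), 4.0 * switch(v1))
```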
Somebody could say to you, "Verify that the act of switching on a light bulb is a linear operation." But the fact is that it's modeled mathematically by multiplication by a function which is one for a certain period of time and zero for the rest of the time, and as multiplication it is just expressing direct proportionality, and that's always linear. Sampling is a linear operation. Sampling at a certain rate: L(v)(t) could be a Shah function of spacing p times v(t) [inaudible]. It's multiplication. It's direct proportion. It's linear. So again, somebody could say to you, "Is it true that the sample of the sum of two functions is the sum of the sampled functions?" You might be puzzled by that question, or it might take you a while to sort it out. You might try to show it directly -- I don't know what you might try to show -- but in fact, yes, it must be true that the sample of the sum of two functions is the sum of the sampled functions. Why? Because the act of sampling is a linear operation -- a linear system. It's multiplication. It's direct proportion. Okay? Now, a slight -- slight but important -- generalization of direct proportion is direct proportion plus adding up the results. That is to say, matrix multiplication. So the generalization is direct proportion plus adding, and what I have in mind here is matrix multiplication. All right? If I have a matrix A -- and let me see if I can do this more generally -- say an N by M matrix, so it's N rows by M columns, and v is an M-vector, a column vector with M rows, then Av is an N-vector, and the operation of multiplying the column vector v by the matrix is a linear operation. It is a combination exactly of direct proportion, or multiplication, with adding. So what is it? If you write A with entries A_ij, indexed by rows and columns, then the i-th entry of Av is the sum over j, from j = 1 to M, of A_ij v_j -- isn't that right? I can never get this right. N by M, so M columns -- yes, right, okay, that's fine. That gives you all the entries; if it doesn't, switch M and N. Each component is multiplied -- that's direct proportionality -- and then they're all added up. And as you know, the basic property of matrix multiplication is that A applied to the sum of two vectors is Av + Aw, and A applied to alpha v is alpha times Av. Okay? It's a slight generalization, but it turns out to be a crucial one, and it comes up in all sorts of different applications. Those of you who are taking EE 263 [inaudible] have done nothing but study matrix multiplication. Well, that may be a little bit of an extreme statement, right? E.g., in EE 263 you study the linear dynamical system x-dot = Ax, and you solve that, with the initial condition x(0) = v. Then it's solved by x(t) = e^{tA} x(0), which is e^{tA} v. All right?
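A sketch of matrix multiplication as "direct proportion plus adding," and of the linear dynamical system solution; this assumes NumPy and SciPy, and the matrices and the time t are arbitrary made-up values for illustration.

```python
import numpy as np
from scipy.linalg import expm

N, M = 3, 4
A = np.random.randn(N, M)            # N-by-M matrix (illustrative)
v = np.random.randn(M)               # M-vector input

# (A v)_i = sum over j of A_ij v_j: each entry scaled (direct proportion), then added up.
w = np.array([sum(A[i, j] * v[j] for j in range(M)) for i in range(N)])
assert np.allclose(w, A @ v)

# Linear dynamical system x' = A x, x(0) = v, solved by x(t) = e^{tA} v (square A here).
A_sq = np.random.randn(N, N)
v0 = np.random.randn(N)
t = 0.7
x_t = expm(t * A_sq) @ v0            # matrix exponential applied to the initial condition
```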
It's a matrix times the fixed vector v that gives you how the system evolves in time. All right? And you want to be able to compute that, and you want to be able to study that. You spend your life doing that -- many people do. Now, again without going into detail -- and we'll say a little more about this later -- the property of linearity is extremely general. There are special cases that are important, some of which I'm sure you've seen. So let me just mention special linear systems -- let's stick with the case of matrix multiplication right now. Linear systems with special properties derive from special properties of the matrix A. Some of the most important examples: for example, if A is symmetric, then you sometimes call it a self-adjoint system or a symmetric system. To say that A is symmetric is to say that it's equal to its transpose; [inaudible] transpose is equal to A. That's a special type of linear system -- as a matter of fact, I'll tell you why that's important in just a second. Or A can be Hermitian, which is the complex version of this, where the condition is A* = A -- that is, the conjugate transpose is equal to A. This is the complex case. These are both very important special cases; they come up often enough that it was important to single them out for special study. Another possibility -- those are maybe the two main ones -- is that A can be unitary, or orthogonal. A unitary means that A times its conjugate transpose -- its adjoint -- is equal to the identity, or A* times A is equal to the identity. I should say here, I'm talking about square matrices -- an N-by-N matrix. Okay? Now, a very important approach to understanding the properties of linear systems -- and we're going to talk about this when we talk more about general linear systems -- is to understand the eigenvalues and eigenvectors associated with them. I'm saying these things to you fairly quickly because I'm going on the assumption that this is largely review -- that you've seen these things in other classes and other contexts. So you often look for eigenvectors and eigenvalues of a matrix A. And we are going to talk, likewise, about eigenvectors and eigenvalues for general linear systems, and that's where the Fourier transform comes in. But just to remind you what happens in this case -- just to give you the basic definition -- you say v is an eigenvector if Av = lambda v for some scalar lambda, where v is non-zero. So there is some non-zero vector that's transformed into a scaled version of itself. And there you really see the relationship with direct proportionality: for an eigenvector, the relationship is exactly direct proportionality. Av is just a scaled version of v -- the output is directly proportional to the input. All right? Now, it may be that you have a whole family of eigenvectors that span the set of all possible inputs -- that form a basis for the set of all possible inputs.
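A quick sketch, assuming NumPy, of the special matrix properties and of the eigenvector relation Av = lambda v; the matrices below are arbitrary examples constructed just for the checks.

```python
import numpy as np

n = 4
B = np.random.randn(n, n)
A = B + B.T                          # symmetric: A equals its transpose
assert np.allclose(A, A.T)

C = np.random.randn(n, n) + 1j * np.random.randn(n, n)
H = C + C.conj().T                   # Hermitian: conjugate transpose equals A
assert np.allclose(H, H.conj().T)

Q, _ = np.linalg.qr(B)               # Q is orthogonal/unitary: Q^T Q = I
assert np.allclose(Q.T @ Q, np.eye(n))

# Eigenvectors: A v = lambda v -- the output is directly proportional to the input.
lams, vecs = np.linalg.eig(A)
v0 = vecs[:, 0]
assert np.allclose(A @ v0, lams[0] * v0)
```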
If you have eigenvectors, say v_1 through v_N, with corresponding eigenvalues lambda_1 through lambda_N, that form a basis for all the inputs -- all the signals you're going to feed into the system -- then you can analyze the action of A easily. That's because, if they form a basis for all the inputs, then if v is any input, you can write v as a combination: v is the sum from i = 1 to N of alpha_i v_i. That's what it means to say they form a basis. And then A operating on v -- by linearity I can pull A inside the sum and have it operate on the individual scaled eigenvectors. So Av = A of the sum = the sum from i = 1 to N of A(alpha_i v_i). But the scalar alpha_i comes out by linearity, so that's the sum from i = 1 to N of alpha_i A v_i. And A just takes v_i to a scaled version of itself, so this is the sum from i = 1 to N of alpha_i lambda_i v_i. The action of A on an arbitrary input is really here: you see you're getting direct proportionality plus adding. It's very simple to understand. Each component is stretched by its eigenvalue, and the whole thing is scaled by whatever initially scaled the inputs: if the inputs are scaled by alpha_i, the outputs are also scaled by alpha_i, and in addition they're scaled by how much the individual eigenvectors are stretched. Okay? It's a very satisfactory picture and an extremely useful picture. So the question is, for example, when do linear systems have a basis of eigenvectors? When can you do this? And that's where these special properties come in. So for example -- and this is, again, fundamental linear algebra that I assume you've probably seen in some context -- because this is so important, you've got to ask yourself when you can actually do this. The spectral theorem, in finite dimensions, for matrices, says when you can: if A is symmetric or, in the complex case, Hermitian, then you can find a basis -- actually an orthonormal basis, even better -- of eigenvectors. All right? Now, if you're thinking that this looks vaguely familiar -- that I'm using similar sorts of words to when we talked about Fourier series, and I talked about complex exponentials forming an orthonormal basis and so on -- it's very similar. The whole idea of diagonalizing the Fourier transform -- finding eigenvectors, or eigenfunctions as you call them in that case, for the Fourier transform, or how they come up in Fourier series -- is exactly the sort of thing that's going on here. Okay? These are simple ideas, right? All we started with was this idea of superposition -- that the sum of the inputs goes to the sum of the outputs and a scaled version of the input goes to a scaled version of the output -- and the structure that entails is really quite breathtaking. It's really quite astounding. All right? Now, there's one other important fact about the finite dimensional case -- the case of N-by-N square matrices -- that's very important.
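A sketch of the eigenbasis picture, assuming NumPy: for a symmetric matrix, eigh returns an orthonormal basis of eigenvectors, and applying A reduces to scaling each expansion coefficient by its eigenvalue and adding up. The matrix and input are arbitrary illustrative choices.

```python
import numpy as np

n = 5
B = np.random.randn(n, n)
A = (B + B.T) / 2                    # symmetric, so the spectral theorem applies

lams, V = np.linalg.eigh(A)          # columns of V: orthonormal eigenvectors v_1..v_n

v = np.random.randn(n)               # an arbitrary input
alpha = V.T @ v                      # expansion coefficients: v = sum_i alpha_i v_i

# A v = sum_i alpha_i * lambda_i * v_i : direct proportionality plus adding.
Av_via_eigs = V @ (lams * alpha)
assert np.allclose(Av_via_eigs, A @ v)
```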
And all these things have some analogue in the continuous case -- the infinite dimensional continuous case -- which is where we're going to spend most of our time. All right? But this you should know; this should be your touchstone for understanding what happens more generally -- what happens in the case of N-by-N matrices, in the finite dimensional case, from what you learned in linear algebra. So, one more property. It's not that matrix multiplication is just a good example of linear systems. It's like before: it's not just that direct proportionality is an example of linear systems -- direct proportionality is the only example of linear systems. Slightly more generally, it's not just that matrix multiplication is a good, natural example of -- let's call it finite dimensional linear systems, like an N-by-N matrix operating on an N-vector, whatever. It's the only example. All right? Now, you learned this in linear algebra, although you may not have learned it quite that way. What that means is that any linear operator on a finite dimensional space -- I'll say it very mathematically and then give you an example -- can be realized as matrix multiplication. And I'm going to give you a problem to think about. Let me put it this way: any finite dimensional linear system -- a finite number of degrees of freedom, a finite number of ways of describing any input, inputs described by a finite set of vectors, a finite set of signals -- can be realized as matrix multiplication. All right? It's not just that it's a good example. It's the only example. Now let me just take a little poll here. Raise your hand if you saw this in linear algebra -- saw this theorem in linear algebra. Not so widespread. All right. Well, you did: if you took a linear algebra class, you probably saw this result, maybe not phrased quite this way, but it's one of the fundamental results of linear algebra. Now, mathematicians are quick to say, "Yes, but we don't like matrices. We would rather stay with the linear operators per se, beautiful and pristine as they are; to introduce matrices is an obscene act." Something like that. All right? We find it useful to manipulate matrices; we find it useful, often, to have this sort of representation. I'll give you one example you can try out for yourself -- an example some of you may have done. Example: look at all polynomials of degree less than or equal to N. That's the space of inputs: the inputs are polynomials of degree less than or equal to N, with N fixed. So any input looks like a_0 + a_1 x + a_2 x^2 + ... + a_N x^N -- a constant term, a coefficient of x, a coefficient of x squared, up to a coefficient of x to the N. I'll allow myself to have some zero coefficients in here, so I don't necessarily go all the way up to degree N, but I go up, at most, to x to the N. All right? So any input looks like that. Now, what is a familiar linear operator on polynomials that takes polynomials to polynomials? The derivative. If I differentiate a polynomial, I get a polynomial of lower degree. So take L to be d/dx. That's the linear operator. That's a linear system. All right?
As such, the space of polynomials of degree at most N is a finite dimensional space -- it has a finite number of degrees of freedom. The degrees of freedom are exactly described by the N + 1 coefficients; there are N + 1 of them because you have a constant term, then degree one, up to degree N. All right? So L can be described by an (N+1)-by-(N+1) matrix. Find it. Any linear operator on a finite dimensional space can be described as matrix multiplication -- can be written in terms of matrix multiplication. Here's a linear operator on a finite dimensional space; it doesn't look like a matrix, but it can be described as a matrix. Find the matrix -- see the sketch after this paragraph. Yeah. Thank you. And -- well, yes, actually. I'll leave it to you to think about that. That's right, it actually drops the degree by one. So if you do it as (N+1)-by-(N+1), I'll give you a hint: you're going to have either a row or a column of zeros in there. All right? But in general, if I'm thinking of it just as a map from polynomials of degree at most N to polynomials of degree at most N, it'd be a square matrix. So I'll let you sort this out. Actually, let me take a brief poll again: anybody do this problem in a linear algebra class? Yeah. Okay. You probably hated it then. You may hate it now. Again, it's a sort of scattered minority response out there. But it just shows, again, this idea: it's not just that it's a good idea, it's the only idea. Representing a linear operator -- a linear system on a finite dimensional space -- as matrix multiplication is not just a clever thing, not just a nice example. It's the only example. All right? And in fact, we're going to see that that same statement, more or less and for our purposes, holds in the infinite dimensional continuous case. That's what I want to get to. I don't think I'll quite get there today, but I'll get a good part of the way. There is an analogous statement for the infinite dimensional continuous case -- a very satisfactory state of affairs. All right? So let's understand that now -- first in terms of an example rather than a general statement. The example that generalizes matrix multiplication -- or, I should say, the operation that generalizes matrix multiplication -- is integration against a kernel. Something we have seen; something I will write down for you now. So the operation -- the linear system -- that generalizes matrix multiplication is the operation of integration against a kernel. That's the phrase you would use to describe it. So what is it? What do I have in mind here? Well, the inputs this time are going to be functions. We'll do it over here. The input, instead of just a column vector, is going to be a function, say v(x). All right? And the kernel -- a fixed kernel for the operator, the thing that defines the operation -- is a function of two variables. So the kernel is a function -- let's call it K, K for kernel -- K(x, y). All right? Integration against the kernel is the operation L(v).
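A sketch of the exercise just posed, assuming NumPy and the coefficient ordering (a_0, a_1, ..., a_N): d/dx on polynomials of degree at most N is realized by an (N+1)-by-(N+1) matrix, and you can see the promised row (and column) of zeros. The specific polynomial below is a made-up example.

```python
import numpy as np

N = 4                                         # polynomials of degree <= N

# D is (N+1)x(N+1): it sends the coefficients (a_0, ..., a_N) of p(x)
# to the coefficients of p'(x) in the same basis 1, x, ..., x^N.
D = np.zeros((N + 1, N + 1))
for j in range(1, N + 1):
    D[j - 1, j] = j                           # d/dx of a_j x^j contributes j*a_j to x^(j-1)

a = np.array([2.0, -1.0, 0.0, 3.0, 5.0])      # p(x) = 2 - x + 3x^3 + 5x^4 (illustrative)
da = D @ a                                    # coefficients of p'(x)

# Cross-check against NumPy's polynomial derivative (same a_0, a_1, ... ordering).
from numpy.polynomial import polynomial as P
assert np.allclose(da, np.append(P.polyder(a), 0.0))
```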
So it's going to produce a new function; I'll say it's also a function of a variable x. There's a little bit of a problem here, as there is in this whole subject, with writing variables, but let me write it: L(v)(x) is the integral from minus infinity to infinity of K(x, y) v(y) dy. All right? K is a function of two variables. I integrate K(x, y) against v(y), dy, and what remains is a function of x. That, by definition, is the output evaluated at x. All right? So L(v) is another function. What is its value at x? I integrate K(x, y) against v(y) dy, and what remains after the integration is a function of x -- it depends on x. Okay? That's what I mean by integration against a kernel. The kernel K defines the operation -- defines the linear system. And it is linear because integration is linear: the integral of the sum of two functions is the sum of the integrals, the integral of a scalar times a function is the scalar times the integral of the function, and so on. So that's the first thing. I won't write it down, but I will say it: L is a linear system. L is linear. All right? Now, first of all, if you open your mind a little bit, you can really think of this as the infinite dimensional continuous analogue of matrix multiplication. Why? What do I have in mind by a statement like that? Well, what I have in mind is that it's like you think of v as, somehow, an infinite continuous column vector. You can even make this more precise if you actually use [inaudible] sums, but I don't want to do that -- let me just write it like this: think of v as an infinite column vector, and think of this operation, the integral from minus infinity to infinity of K(x, y) v(y) dy. What's going on here? So v is like a column vector. K(x, y) is like a matrix -- a doubly infinite continuous matrix. x is the index of the row, y is the index of the column. You are summing across a row of the matrix -- that's integrating with respect to y -- times the corresponding column entries v(y). So y is like a column index, x is like a row index, and an integral, of course, is like a sum. Okay? This is exactly what's going on: with K(x, y) you're summing across the x row, times the entries v(y), adding them all up according to the integral, and you're getting the x component of the output. All right? Now, what else is true? If it's such a good analogue, are there analogues to the other statements that went along with the finite dimensional case? Well, just as in the finite dimensional case there are special linear systems characterized by special properties of the matrix, so too, in the [inaudible] continuous case, there are special systems characterized by special properties of the kernel. All right? And although I'm not going to use them now, I at least want to mention them, because I want to continue this analogy between the finite dimensional discrete case and the infinite dimensional continuous case. So: special linear systems arise from extra assumptions on the kernel K(x, y). All right? So for example, you might assume -- now, what do you think is the analogue of the symmetric case?
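A sketch of the analogy, assuming NumPy: discretize the kernel K(x, y) on a grid, and the integral of K(x, y) v(y) dy becomes a matrix-vector product (a Riemann sum). The Gaussian kernel and the grids here are made-up choices purely for illustration.

```python
import numpy as np

y = np.linspace(-5, 5, 501)                   # integration grid
x = np.linspace(-5, 5, 501)                   # output grid
dy = y[1] - y[0]

K = np.exp(-(x[:, None] - y[None, :])**2)     # kernel K(x, y): rows indexed by x, columns by y
v = np.exp(-y**2 / 2)                         # an input function sampled on the grid

# (L v)(x) = integral of K(x, y) v(y) dy  ~  sum across the x row times dy:
Lv = K @ v * dy                               # the matrix picture of integration against a kernel
```

Refining the grid makes the sum a better approximation of the integral, which is exactly the sense in which the kernel is a "doubly infinite continuous matrix."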
For a matrix, it's that the transpose of the matrix is equal to the matrix. So what do you suppose the analogue of the transpose is for a kernel K(x, y)? What should the condition be? What should the symmetry condition be? Yes. Be bold. I'll help you. I won't help you. All right. What should a symmetry condition be that's analogous to a matrix being equal to its transpose? If K(x, y) is the analogue of the matrix, where x is the row and y is the column, how do you get the transpose? You interchange the row and the column. [Student:][Inaudible]. Pardon me? [Student:]Time invariance? No, not time invariance. We'll get to that. [Student:][Inaudible]. Right. I think I heard it there. All right. Symmetry -- or self-adjointness -- is the property K(x, y) = K(y, x). If the kernel satisfies this property, you say it's a symmetric, or sometimes a self-adjoint, linear system. They have special properties. I'm not going to talk about the properties now -- again, I'm just pursuing the analogy between the discrete case and the continuous case. All right? And what's Hermitian symmetry? Hermitian symmetry -- in the case of a complex kernel, which I won't dwell on -- would be K(x, y) equal to the complex conjugate of K(y, x). Okay? This is all [inaudible] and so on, and I won't go into it very much now. Now, we have seen many examples of linear systems that are given by integration against a kernel. What is a fundamental example, in this class, of a linear system that is given by integration against a kernel? The Fourier transform. Good. So, for example, the Fourier transform -- Ff(s) is the integral from minus infinity to infinity of e^{-2 pi i s t} f(t) dt -- is exactly integration against a kernel. What is the kernel? The kernel is K(s, t) = e^{-2 pi i s t}. All right? It fits into that category. It has special properties -- many special properties; that's why we have a course on it. Okay? But nonetheless, it fits under the general category of a linear system. And actually, you can check that K(s, t) = K(t, s), so it's actually symmetric: if I switch s and t, the kernel doesn't change. So it's a symmetric linear system, and so on. What is another example of an important linear system that can be described by integration against a kernel -- another example that we have studied extensively and use every day, almost, on good days? [Student:][Inaudible]. Convolution. All right? Fix a function h. Then if I define L(v) to be h convolved with v, that is a linear system. Convolution is linear, but what is it in terms of the kernel? L(v)(x) is the integral from minus infinity to infinity of h(x - y) v(y) dy. All right? Convolution is a linear system that falls under the general category of integration against a kernel. It's a special one, actually -- as it turns out, a very important special case -- because the kernel here doesn't depend on x and y separately. It depends only on their difference. All right? So note: for convolution -- that is, for a linear system given by convolution -- the kernel depends on x minus y. It is a function of only one variable, the difference x minus y, and not of x and y separately. All right?
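A sketch, assuming NumPy, of convolution as integration against the kernel K(x, y) = h(x - y), compared with a direct discrete convolution; h, the input, and the grid are illustrative choices, and the comparison is only approximate because of discretization and edge effects.

```python
import numpy as np

x = np.linspace(-4, 4, 401)
dx = x[1] - x[0]

h = np.exp(-x**2)                                # a fixed function h (illustrative)
v = np.where(np.abs(x) < 1, 1.0, 0.0)            # input: a rectangle

# Kernel K(x, y) = h(x - y): depends only on the difference of the variables.
K = np.exp(-(x[:, None] - x[None, :])**2)

Lv_kernel = K @ v * dx                           # integration against the kernel
Lv_direct = np.convolve(h, v, mode="same") * dx  # direct discrete convolution

assert np.allclose(Lv_kernel, Lv_direct, atol=1e-5)
```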
Now, for reasons which you've probably seen, actually, and which we'll talk about in a little more detail, this particular special property leads to so-called shift invariance, or time invariance. All right? In particular, if we shift x and y by the same amount -- some number a, say -- so x goes to x - a, if I delay it by a, and y goes to y - a, then of course x - y goes to (x - a) - (y - a), which is x - y. The difference is unchanged, so the convolution is unchanged if I shift x and y. All right? I don't want to say too much more about it than that, but this is what leads to the so-called shift invariance, or time invariance, of convolution. That is, this observation leads to the phrase you hear -- and we'll talk about this -- that convolution is a linear shift-invariant, or time-invariant, system. People usually say time-invariant, but it's really better to say shift-invariant somehow; it's more descriptive -- a linear time-invariant system. All right? But we'll get back to that. The fact is that, again, convolution is of the form integration against a kernel, but it's a special kernel because it depends only on the difference of the variables, not on the variables separately. Okay? In general, integration against a kernel is integration against a function of two variables. Now, it's not just that this is a good idea -- it's not just that this is a good example of linear systems. I'm not talking about convolution here; I'm talking generally about integrating against a kernel. All right? So the words that I said, like, ten minutes ago, I'm going to say again, but in this different context. It's not just that integration against a kernel is a good example of linear systems -- in this case, continuous, infinite dimensional linear systems -- just like it's not just that matrix multiplication is a good example of finite dimensional linear systems. It's the only example. Okay? Any linear system -- now, this statement has to be qualified because there are assumptions you have to make and so on, but that's not the point -- the point is that any linear system can be realized somehow as integration against a kernel. Yeah. [Student:][Inaudible] manifest in a matrix operator? Oh, that's a good question, and we'll come back to that, actually. It's in the notes. The matrix has to have special properties. [Student:][Inaudible]. Circulant, actually. It's a little bit more than Toeplitz. Yeah. RTFN, man. It's in the notes. Okay? We'll come back to that. All right, for now, don't spoil my drama. Again, in the finite dimensional case, it's not just that matrix multiplication is a good example -- it's the only example. In the infinite dimensional, continuous case, it's not just that integration against a kernel is a good example -- it's the only example. Any linear system can be realized as integration against a kernel. All right? Now, on that fantastically provocative statement, I think we will finish for today, and I will show you why this works next time.
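A sketch of the shift-invariance point and the circulant structure mentioned in the exchange above, assuming NumPy and using periodic (circular) convolution so that shifts wrap around cleanly; the signals and the shift amount are made up for illustration. The kernel matrix K[i, j] = h[(i - j) mod n] depends only on the difference i - j, and delaying the input delays the output by the same amount.

```python
import numpy as np

n = 64
h = np.exp(-np.linspace(0, 4, n))              # a fixed (illustrative) function h
v = np.random.randn(n)

# Circulant "kernel matrix": K[i, j] = h[(i - j) mod n], a function of the difference only.
idx = (np.arange(n)[:, None] - np.arange(n)[None, :]) % n
K = h[idx]

conv = lambda u: K @ u                         # circular convolution with h

shift = 5
v_shifted = np.roll(v, shift)
# Shift (time) invariance: convolving a shifted input gives the shifted output.
assert np.allclose(conv(v_shifted), np.roll(conv(v), shift))
```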