Okay. Hi, everyone. Ifll get started again for CS 224n. Okay. So todayfs the final lecture on machine translations, and so today I plan to tell you everything else there is to know about machine translation. Well, I canft exactly do that, but I guess I sorta wanna concentrate on the sorta main additional ideas and do something of a survey that gets up to whatfs currently basically the state of the art, and what people do for statistical machine translation systems. And an interesting thing that wefll come to in a little bit is these IBM models that wefre looking at. They actually are still used in state of the art machine translation systems, though theyfre not the whole system. Theyfre actually used now as a component in how those systems are built. So first of all, I wanna quickly finish going through the hierarchy of IBM systems. And then I want to talk about evaluating MT systems, because thatfs, in general, important to know how you got a good one, and also comes up in the assignment. And then talk about some of the modern ideas in statistical phrase-based MT systems and syntax-based MT systems. Okay. And so, just on the other announcements, donft forget todayfs when the first assignment is due. And also today, wefve got the second assignment coming out, which is then the statistical MT alignment models, which you can put together with your language model and have a little MT system. And then on Friday, the sectionfs gonna be talking about the kind of EM algorithms that you need to make the assignment. Okay. So we spent a lot of time on IBM Model 1, and so at a high level, the important thing to think about is that, essentially, the only information that this uses as an alignment model to translate words is itfs trying to learn these probabilities of how to translate an English word into a French word. Thatfs all itfs got. All other knowledge of language is completely disappeared. So it doesnft have any ideas about knowing about phrase structure. It doesnft know that words at the start of the sentence will probably be translated near the start of the sentence. I mean, it literally knows nothing else. So in some sense, its character is very like a 1950s machine translation system, where the way you translate is you come to the next word, you look up in the dictionary the possible translations, and the only difference is we have a probability distribution over possible translation, rather than just the list in the dictionary. I mean, in particular, therefs a question for these guys. So words -- lots of words, as we discussed, have lots of meanings. The most famous word ambiguity in English that everyone uses as their first example is gbank,h which can mean either the edge of a river or a financial institution. So when we want to translate that into French, those two meanings of bank get translated differently. So if we build a Model 1 -- actually, I should be doing it the other way. I should be choosing a French word that has two meanings in English, actually. So if wefre doing French to English machine translation, if wefre taking a French word that has two possible translations in English, if wefre building a Model 1 model, can these alignment probabilities help us choose which translation is appropriate in a certain context? Yeah? [Student:]No. No. So these alignment probabilities are completely context-free. All they know is margin statistics. They know what the most common translation of that word is, and the second most common translation. Okay. 
But nevertheless, if wefre building French to English machine translation system using the noisy channel model, have we got a chance of getting that right? And if so, how? [Student:]Can you incorporate ideas from a language [inaudible]? Right. Yeah. So our language -- yeah, so the answer is our language model. So at the moment, our whole system is built on this leap of faith that, essentially, our language model can save us. So the hope is that all the alignment model knows is more and less common with probabilities translations for an individual word. And so itfll just sorta suggest those as possible translations, and then we just pray mightily that when the language model comes along, it will save us, and it will propose appropriate translations in context. And you know, sometimes thatfs actually going to work. So that if you have something where therefs a good context, like if you have a word -- I mean, a lot of the time you can get context for this ambiguation by looking at what precedes. So that if you have some kind of complication like griver bank,h the fact that riverfs the words before gives a lot of context as to what sense of the word follows. And so that context will go through the translation. And so the language model will prefer to put together words that have kinda good mutual collocational information that fit together well with each other. So quite a bit of the time you can actually choose appropriate translations by having the language model do all the work. On the other hand, if you start thinking to yourself, boy, youfre really hoping for a lot there, that canft work all the time, youfd be absolutely right. It absolutely doesnft work all the time. Okay. So then if we head up the IBM models, I mean, essentially, the models get a ton more complex. But really, they only put in three very simple linguistic ideas that they try to capture. So itfs just this minute bit of knowledge of how languages work that gets added as you go up the models. So the first bit that I talked about last time was when we went to Model 2, we have this idea of displacement, which is, by and large, the words at the beginning should appear at the beginning of the translation, and by and large, the words at the end should occur at the end of the translation. Thatfs pretty crude, but itfs a little bit true. And that gives us some value. Okay. And so then today I want to go on to the next three models, which add in a couple more ideas. Ifll just mention at this point, in current practice, and you see this in the Jurafsky and Martin book, after Model 2, people normally intersperse whatfs called the HMM alignment model, and thatfs discussed in the Jurafsky and Martin chapter. It wasnft an original idea model, and Ifm not gonna present it here right now, mainly because since we havenft talked about HMMs at all, thatfd be a big digression. But in practice, people have found it a useful model to stick into the hierarchy there. Okay. So if we go onto Model 3, what wefre -- the big idea that we wanna capture here is when wefre doing this word alignment, that what we were doing is that we were just going through each position. So we were saying, okay, letfs go through each position and letfs have an alignment variable that gives a source for that position. And so there was nothing in our assigning alignment variables that stopped us aligning every word to one word. The alignment variables are independent of each other. And so that, in part, is a very good thing. 
Keeping the alignment variables independent of each other means that you can train Models 1 and 2 very efficiently, and it's precisely for that reason that we only do Models 1 and 2 in the assignment. Because if you want to start having complex dependencies between things, things suddenly get a lot more complicated. In fact, you'll find in the text of the second assignment, when it suggests things that you could do for extra credit, one of the suggestions -- in fact, I think it's the first suggestion -- is that you could try implementing Model 3. As a little warning, no one who has attempted that has ever succeeded in getting it to work in the length of time you have to do assignment two. So unless you're really, really feeling gung-ho, you'd probably be better off ignoring that suggestion and looking at the other suggestions of ways in which you might be able to make things better. Okay. But at any rate, the idea that every word could be aligned to the same word is clearly linguistically ridiculous. In particular, there should be a kind of an idea that a lot of the time words do translate one-to-one, "proposal" to "proposition." That one-to-one is common. On the other hand, depending on the language pair, there are just regularly cases that'll be exceptions to that. So in French, "not," at least in written French, is standardly turned into this "ne ... pas," which surrounds the verb. So you'd actually expect the word "not" to have a fertility of two when going from English to French. And similarly here, "implemented" is being translated by this sorta "put in implementation" little phrase. So "implemented" has a fertility of three; it chooses three words in the translation. So we should actually be able to do a better job of aligning words if we learned some information about the fertility of different words, because then we'd kinda know appropriate ways to do the alignments for them. And so that's precisely what's in Model 3. And so this is a picture of what Model 3 looks like. So Model 3 is, again, being done as a generative probabilistic model, and it's just a much more complex generative process. So what we're wanting to do is come up with an estimate of the probability of F given E, just like before. And the way we do that is, if we start off with an English sentence, we first of all have a fertility for each word, where the fertility is a probability distribution over different fertilities. So if we think of it as a generative model, we roll our dice, and based on the fertility of each word, we work out that it's going to become some number of words in the other language. And I'm indicating that here just by replicating the word. So "did," when we rolled the dice it said its fertility is zero, so we just have it go to nothing. But "slap," we roll the dice, its fertility was three. And so I now produce three copies of the word "slap." Okay. And we still need to have words in the output that have no source in the input, the spurious words. And so we have a second probability, which is the probability of sticking in one or more null words between each pair of words. So we stick in one null word here. And then at that point, we then have a translation model, which is like what we had in Model 1. So this is the equivalent of the probability of F given E in Model 1, where we're just doing a single word-for-word translation, which turns our English words into, here, Spanish words.
And then finally, we still wanna keep what we inserted in Model 2, and so we also have the displacement model, and the displacement model is telling us the probabilities of these words being displaced relative to their position in English. And so that will reorder the sentence, and that will give us our output. So effectively, this is just a considerably more complex model where we now have a bunch of probability distributions. So we have fertilities for our words. And again, there are strong independence assumptions in this model. So the fertility of a word just depends on what the word is. It doesn't depend on other fertilities or surrounding words. And then we have the translations of a word, which are still just like what I was complaining about: they're still just being done independently. So we're still relying on the language model to save us. And then we have these displacement probabilities, which are just like Model 2. So again, there are huge independence assumptions there. So displacement is just how likely words are to move how far around. It's not specific to what the words are. And so that's kinda what you're seeing here: the translation probabilities, the chance of putting in an extra word in the translation -- where that "a" is the Spanish "a," not the English "a" -- fertilities of words, and then distortions. So these are the kind of distortions that you'll want to have in Model 2 for the assignment. So the distortion d(5|2) is the probability that the English word in position two of the English sentence generates the French word in position five of the French translation, where formally, you want to have that conditioned on the lengths of the two sentences, where you've generated a length for the output sentence so you get a well-formed probability distribution. Yes? [Student:] [Inaudible]. So, yeah, you can rely on the language model -- you can hope dearly that the language model will do everything, and that's what Model 1 does. And to some extent, yes, since the language model will want to keep words in good order, to some extent that will capture what the distortion model does. So you can hope that even though your Spanish starts off with the order of article, noun, adjective, that when you translate into English, the English language model will know that adjectives don't come after the noun in English. And so it will just move it around and say, no, no, it will be much better if you produce article, adjective, noun. Yes, and that may work for you. But there are crucially a lot of circumstances where it won't work. A kinda obvious one is if you take a sentence like "A dog bit a man," or something like that, and you -- well, I want to go with the opposite, the man biting the dog. So "The man bit the dog" is the actual source sentence. Okay. If you then translate it without any knowledge of distortions, and you have a really good language model, the really good language model is just looking at the words that you're offering it, and it's gonna say: "Man biting dog," that's kinda really rare. "Dog biting man," that's pretty frequent. Let's move those around, and look, I get a much better language model score, so let's go with that one. And that would be wrong, right, because we've lost the semantics. And if, in that case, you had a distortion model, the distortion model, fingers crossed, might encourage you to get an output which was still "The man bit the dog." There's a lot of keeping your fingers crossed there, because you could imagine it going wrong in various ways, but at least it kinda indicates why distortions might be useful on top of the language model.
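To keep the moving parts of Model 3 straight, here is a minimal sketch of its generative story with tiny hand-set tables (the slap example); the tables, the NULL handling, and especially the random shuffle at the end are just stand-ins for the real learned fertility, spurious-word, translation, and distortion distributions.

```python
import random
random.seed(0)

# Toy, hand-set parameters purely to illustrate Model 3's generative story
# ("Mary did not slap the green witch" -> Spanish); not learned values.
fertility = {"Mary": {1: 1.0}, "did": {0: 1.0}, "not": {1: 1.0},
             "slap": {3: 1.0}, "the": {1: 1.0}, "green": {1: 1.0}, "witch": {1: 1.0}}
t = {"Mary": {"Maria": 1.0}, "not": {"no": 1.0},
     "slap": {"daba": 0.34, "una": 0.33, "bofetada": 0.33},
     "the": {"la": 1.0}, "green": {"verde": 1.0}, "witch": {"bruja": 1.0}}
p_null = 0.1   # chance of sticking in a NULL-generated spurious word

def draw(dist):
    return random.choices(list(dist), weights=dist.values())[0]

def model3_generate(english):
    # 1. fertility: each English word is copied 0, 1, or more times
    copies = [e for e in english for _ in range(draw(fertility[e]))]
    # 2. spurious words: after each copy, maybe insert a NULL-generated word
    with_nulls = []
    for e in copies:
        with_nulls.append(e)
        if random.random() < p_null:
            with_nulls.append("NULL")
    # 3. translation: each copy is translated word-for-word, independently
    #    (so the three copies of "slap" could even repeat a word -- the same
    #    independence assumption complained about above); NULL -> toy "a"
    spanish = ["a" if e == "NULL" else draw(t[e]) for e in with_nulls]
    # 4. distortion: reorder the output; a random shuffle here stands in for
    #    the real position-based distortion probabilities
    random.shuffle(spanish)
    return spanish

print(model3_generate(["Mary", "did", "not", "slap", "the", "green", "witch"]))
```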
Okay. This is just a little fine point. On the distortions, you could kinda have the nulls inside the distortion probabilities for generating spurious words, but in practice, people don't normally do that. And the spurious words are just put in the positions that aren't claimed by the real words, because really, they don't have a position in the source sentence themselves. In particular, you kinda don't really want to say that the source for all the spurious words is at position zero, at the left end of the sentence, because that would mean that most of the spurious words would have huge distortion penalties, where really they shouldn't have distortion penalties at all, because you're just inserting them in places in the sentence that are grammatically appropriate. That's another slide going through Model 3. But maybe I've had enough Model 3 at this point. And so, really, at the high level, the essential thing to have in your head is just that there's this one new idea. We're modeling how many words an English word is typically translated by, and that will give us better translations, because we won't be able to get these funny alignments where multiple words align to one word. And that's something -- you probably shouldn't build Model 3, but that's something that you should look out for when you're looking at the output of your Model 1 and Model 2. I mean, to digress for a moment on the day that the first assignment is due: in the assignments, there's this discussion of how we'd like people to do some error analysis and look at the output and think about it and say some pertinent things about it. And we really do. But what we'd like is not really just that your write-ups are as long as possible by dumping out lots of junk like probability tables or something, but rather that your assignments appear to have had some mental energy go into the write-up. And the mental energy is really in looking at what it does, and being thoughtful about it, relative to kind of what you'd like to have a system do. So for this machine translation case, one thing that you could do is look at the alignments your system is getting, to see obvious places in which three or four words are being aligned to a single word inappropriately, which is just the kind of thing that Model 3 is meant to fix. You don't actually have to write Model 3. You could, nevertheless, in your write-up, show a couple of examples of that, and sorta point out that this problem is, indeed, occurring with your model, too. And therefore, you kind of understand better now why Model 3, perhaps, is a useful thing to have around. Okay. How do you learn Model 3? I mean, the short version is: in exactly the same way, you run EM. The long version is that, boy, it gets a lot more complicated. So we still start off with the same parallel sentences. And just like for the other EM algorithm, we just give some values to all of these parameters. And then what we're gonna do is, based on those parameters, find likely alignments. Then, based on those likely alignments, we're gonna re-estimate all the parameters.
Because once we have likely alignments, we can just look at the fertilities. If a word has been aligned to three things, we say fertility three, and put that into our counts and run the EM algorithm. So at a very high level, it's exactly the same, and it's not really any more complex. At an implementation level, it becomes a ton, ton more complex. And the reason for that I'm gonna kinda just gloss over here. But it's really an important thing that you should think about and understand. And for doing that, you should be looking at Kevin Knight's MT tutorial and the Jurafsky and Martin Chapter 25. But the high-level point is that if you're doing Model 3, precisely because you put fertility into the picture, alignments are no longer independent. Whether one alignment exists affects the probability of whether another alignment to the same word exists, because you've got a fertility factor in there. And so the fact that you've lost that independence assumption between alignments means that it's very hard to explore the space of alignments and work out the probabilities of different outputs. And in fact, in practice, people use various kinds of heuristic search procedures. Whereas in Models 1 and 2, precisely because alignments are independent, putting in one alignment makes no difference to the probability of another alignment. And it's precisely because of that that you can kinda ignore everything that goes on around you and just work out expectations for individual alignments, as I showed in that slide for Model 1 where I said just implement this and it'll work. But I really encourage you to actually read through Kevin Knight's tutorial and/or the Jurafsky and Martin, and actually kinda understand what assumptions you have to make to get that to work and why it breaks down for Model 3.
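As a concrete picture of what "look at the fertilities" means, here is a minimal sketch that takes hard counts from one alignment per sentence pair; real Model 3 training uses expected counts over many alignments, but the counting idea is the same.

```python
from collections import Counter, defaultdict

# Sketch of re-estimating fertility probabilities from alignments.
# Real training uses expected counts inside EM; here we just take hard
# counts from a single alignment per sentence pair.

def fertility_estimates(aligned_pairs):
    counts = defaultdict(Counter)              # counts[english_word][fertility]
    for english, french, alignment in aligned_pairs:
        # alignment[j] = index of the English word that French word j aligns to
        per_word = Counter(alignment)
        for i, e in enumerate(english):
            counts[e][per_word.get(i, 0)] += 1
    # normalize the counts into n(fertility | e)
    return {e: {phi: c / sum(cnt.values()) for phi, c in cnt.items()}
            for e, cnt in counts.items()}

pairs = [(["the", "slap"], ["la", "daba", "una", "bofetada"], [0, 1, 1, 1])]
print(fertility_estimates(pairs))   # "slap" gets fertility 3, "the" gets fertility 1
```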
Okay. So then we go on to Model 4. And Model 4 is, perhaps, best illustrated by this picture. So here we have a French and English sentence. So the English is, "On Tuesday Nov. 4, earthquakes rocked Japan once again." Now, what happens when we translate into French? Well, we have a word with a fertility of four here, because "earthquakes" is being translated by "des tremblements de terre," so "the trembling of the earth." Okay, so that's kinda cool. "Have been newly rocking Japan." But the bit I want to focus on right now is that right here at the beginning we have "On Tuesday Nov. 4," and then in the French the translation appeared right down here, "Tuesday 4 November." Okay. So what happened was that this temporal modifier moved from right at the beginning of the sentence to right at the end of the sentence. And to some extent, that's just a choice that sorta doesn't even mean a lot, because the English sentence could have been, "Earthquakes rocked Japan once again on Tuesday Nov. 4." But nevertheless, there are kind of slight preferences between languages, and in English people more commonly put those kinds of temporal modifiers at the beginning, and in French they're more commonly put late in the sentence. At any rate, this was the way the translation was done. Okay. Well, if you're wanting to learn that this is the correct alignment, and you're using Model 2, you're kinda gonna be in big trouble, because if we look at these alignments, they're all gonna involve huge displacement penalties: this word has a huge displacement penalty, this word has a huge displacement penalty, that word has a huge displacement penalty. And that kind of seems unfair, because really what we have is this one phrase that is expressing the date that's moving from the start of the sentence to the end of the sentence. And one appropriate way to start thinking about this is, gee, we can build models that actually have some phrase structure in them, so they know about the structures of phrases in languages, and we can model things that way. And that, indeed, has become an extremely popular topic in recent work in statistical machine translation that I'll mention in the end bit of the class. But IBM Model 4 did something simpler, and what it did was, essentially, just to handle these cases by saying you should only be penalized badly once. So when we aligned "Tuesday" and "mardi" here, you're gonna be penalized for there being a big displacement. But when you're then considering the alignment of "4," the displacement of "4" can be calculated relative to the preceding word. And so, therefore, here you'd say, well, there's no displacement, because it's gone along with the previous word. And this one has no displacement -- oh, actually, sorry, these two are swapped. So I guess there is a displacement of one. But at any rate, there's a small displacement here, and a small displacement there. And so you only get small displacement penalties, because you can base the size of the displacement on the previous word. And so that kinda simulates saying that it's roughly okay to move whole phrases, that you just get penalized once for moving the entire phrase. Okay. And then the difference between IBM Model 4 and Model 5 is purely technical, and so I'm really not gonna dwell on it. So the way they did IBM Model 4, the model was deficient, which means that it didn't actually give a probability mass of one to possible alignments. Some of the probability mass was lost on illegal alignments. And so in Model 5, they did a bit more hard work and they managed to come up with a way to have a non-deficient probability model. But that's purely technical. Okay. So one final question that it's just good to have a sense of is, well, why do we have all of these models, 1 to 5? I mean, of course you could think, well, this is just a historical recapitulation. They invented Model 1 first and saw how that worked and decided it wasn't very good, so then they built Model 2, which was a bit better, and kept on going. And in some sense, that's probably true as to what they did. But it's not really capturing why people still talk about all of these models. And the answer for that is that people actually still use most of these models. I mean, sometimes they don't use all of them, but they at least use most of them. So maybe in a current system, the kind of thing people might commonly do when starting off building models is they'll run Model 1, they might skip Model 2 but use the HMM model, and then they'll run Model 3, and then they'll run Model 4, something like that. They really run a bunch of these models. And the reason that they do it is that, effectively, having simple models is a very effective way to bootstrap the more complex models. And the flavor of it is something like doing some kind of annealing process. The idea is that the higher-order models have a ton of parameters. And if you just directly try and train those models, you get a minor bad effect, which is that it's really slow to train the models.
And you get a major bad effect, which is that since you're trying to do EM optimization in this highly non-convex space, you very easily get stuck in bad local maxima, and you come out with a very bad model. So instead of that, what you can do is, first of all, train Model 1. Model 1 you can train really fast, because there's no search over alignments. And you can get out of it sort of -- you know, Model 1 isn't a bad space to search over, so you can find decent models, to the extent you can, relative to Model 1. And so you can precisely use those models to initialize Model 2. And so that will mean the Model 2 parameters will already start off in a sensible part of the probability space. And then you repeat that over. So you train Model 2, and then you can use the Model 2 parameters to initialize Model 3. And so that means in the very first round of EM estimation of Model 3, rather than it being like when we did Model 1, where all you were doing was counting sentence co-occurrences and getting out who knows what for things like fertility probabilities, your first estimates of your fertility probabilities in Model 3 are based on the alignments that you found with Model 2. So they're hopefully already halfway reasonable, and you kinda hope to ride a wave in the good part of the optimization space and find a good model. Yeah? [Student:] By the time you iterate up to Model 5, do you just use Model 5, or do you use some sort of interpolation between the five models? Normally people just use the final model. I mean, that's certainly an idea that you could think of trying, but in practice, people don't do that. I guess I actually feel I don't know for sure that there's no value in that. I can't think of anyone who actually tried that and reported that it worked either better or worse. So I don't actually know an answer for that. But clearly, that kind of interpolation of simple models is something that's been used to good effect in other contexts, so you can imagine it actually working to do that. I know. That's a good idea.
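The bootstrapping recipe itself is just a chain of initializations; here is a skeletal sketch, where the train_* functions are hypothetical placeholders for each model's EM training rather than real library calls.

```python
# Skeleton of the usual training pipeline: each richer model is initialized
# from the parameters of the simpler one, rather than from scratch.
# The train_* arguments are hypothetical placeholders, not real library calls.

def train_pipeline(parallel_corpus,
                   train_model1, train_hmm, train_model3, train_model4,
                   iterations=5):
    # Model 1: alignments are independent, so EM is fast and well-behaved.
    params = train_model1(parallel_corpus, iterations=iterations)
    # Each later model starts from the previous model's parameters, so its
    # first round of EM already works from halfway-reasonable alignments.
    params = train_hmm(parallel_corpus, init=params, iterations=iterations)
    params = train_model3(parallel_corpus, init=params, iterations=iterations)
    params = train_model4(parallel_corpus, init=params, iterations=iterations)
    return params   # in practice, people keep just the final model's parameters
```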
They donft know about the part of speech of words or anything like that. Thatfs obviously a suggestion of something that you could do to try and improve these models. If you knew the parts of speech of the import, and you learned part of the speech specific distortion parameters, would that help? I think it would. I mean, thatfs the kind of thing that you could try. So really here, though, wefre only relying on the language model to try and fix things up. Okay. So now I skip onto the next lot of slides. Okay, so the next thing I should say a little bit about is evaluation, then Ifll go onto more advanced work in machine translation. So here are some illustrative translation results to have something in your head. gLa politique de la haine.h The reference translation was, gthe politics of hate.h And the IBM said, gThe policy of the hatred.h And you kind of see that the machine translation is sort of more literal, like itfs translated the glah as gthe,h and the gde lah as gof the.h So glah is gtheh both times, where the human translator has decided that putting in glah as gtheh in the English translation isnft really appropriate in English. The IBM translation also sorta goofed by changing gpolitiqueh to gpolicy,h rather than gpolitics.h Okay. So we get some translations out, and our question is, well, how are we actually gonna look at these translation and work out whether theyfre good or bad translations? And up until this day, the gold standard answer to that is, basically, we get human beings to look at them. And the problem is that modelfs kind of expensive. In particular, itfs expensive if youfre like most people in the machine learning LLP business, spending your days trying to change and improve your models to make them better. So if you have lots of money on hand -- I know for the Microsoft machine translation group, they literally, every week, take the output of their latest research system and ship it off to an evaluation team, who do human evaluation. And they just get back weekly deltas on how the systems is improving or getting worse based on human evaluation. Well thatfs kinda good if youfre Microsoft, but that doesnft work quite so well if youfre sitting doing research at Stanford University, notwithstanding the fact that everyone else thinks that Stanford University is very rich. So thatfs sorta led to an enormous interest in other ways to do machine translation evaluation. And Ifll say a bit about that. But before I do, how do people do manual evaluation? I mean, there are various ways in which itfs done. One way is just doing sort of subjective scores of how good is this sentence, seven-point scale. The common idea thatfs often used is this division between adequacy and fluency. So the idea there is youfre trying to differentiate the adequacy of the translation, which sort of means it does translate all the content thatfs in the source sentence, but I donft care if the translation sounds kinda stilted or, perhaps, it even put in the termina where it would have been better -- an article like gthe,h where it would have been better to leave it out. Donft care about that. Is all of the sense of the original recoverable? Thatfs adequacy. And then fluency is is what youfre producing expressing the ideas the way that the target language speaker would, correctly using grammar function words and all of that kind of stuff. Sometimes people just do a simple binary good/bad on translations. 
I guess what thatfs doing is sort of, by giving a very easy evaluation task, you make it much easier for the humans to quickly evaluate sentences. And the second way of doing a machine translation evaluation -- this kinda parallels what we did with language models and perplexity versus word area rate -- is to do it in the context of a task. So if you want to do question answering of foreign language materials, you could kind of have the internal MT component. But again, the problem is itfs very hard to know how the internal components results are kinda being reflected through the rest of the system. And so thatfs led people to want to try and come up with automatic evaluation methods for MT, where you can just sort of evaluate your new system every night and decide whether itfs getting better or worse in an automatic fashion. And so the problem of coming up with automatic metrics for machine translation is itfs kind of difficult. So why is it hard to automatically evaluate machine translation? [Student:][Inaudible]. Yes. So fact one is, normally there are just lots of good ways to translate something, so itfs not that you can say -- itfs not one of these tasks of herefs the unique right answer; did they get it right or wrong? There are lots of possible translations. And human translators could quibble a little about whether onefs a little bit prettier or more fluent than another one, but to a large extent, there are just lots and lots of translations. And in particular, as a sort of an aspect of that lots and lots of translations, normally there are lots of ways that you can order things that are okay. So, gyesterday I went swimming,h or gI went swimming yesterday.h Both of them are okay. So you kind of canft go through in any position-by-position basis. So doing word error rate for speech recognition just doesnft work doing a kind of edit distance with insertions and deletions, because there are these big reorderings that can be perfectly okay. Occasionally, people have used an idea of a position independent word error rate, where youfre allowed to have any alignment between the words, and then youfre trying to account substitutions, insertions, deletions. But if you do that, then itfs paying no attention at all as to whether youfve got the right words modifying the right words, and that seems so lax. So people have tried to come up with various other ways of doing machine translation evaluation, and there are a whole bunch of them, to be honest. Itfs sorta become almost a sub-industry of coming up with automatic evaluations for machine translation. And there must be something like a dozen now that different people have proposed. But the original one was BLEU, which was proposed by IBM, and thatfs the one that Ifm gonna present. And the fact of the matter is, this BLEU evaluation metric is still kinda just the reference point for automatic evaluation metrics. What I mean by that is, all of the other eleven, the way you write your paper on evaluation metrics is that you write your paper introducing an evaluation metric and proving that it works better than BLEU. But in the real world, the majority of people the majority of the time still compare BLEU scores, the BLEUfs near enough to okay within the space of the evaluation measures that exist. So the idea of BLEU is what itfs gonna do is evaluate N-gram precision. 
So youfre gonna have one or more reference translations, and then youfre gonna have a machine translations, and youfre gonna say that the machine translation is good to the extent that you can find N-gram sequences, which are also present in the source. And so, a little bit this just seems like position independent error rate, that youfre looking for stuff here thatfs also there. And in some sense, you are. But the crucial extra stuff that youfre putting in is that youfre getting a lot more points if you match subsequences. And so the idea is, if you match subsequences of words with the original, that means that youfre kinda capturing whole phrases correctly. Okay. Then therefs the stop people cheating clause, which is the final part of the BLEU measure. So this part of the BLEU measure only measures precision. So itfs saying, of the stuff here, which words and word sequences can I find in the reference translation? That therefs nothing in the measure thatfs looking at recall, saying, in the original it says here Osama Bin Laden. Does the translation mention Osama Bin Laden? And that could worry you. And some of the other MT measures do pay attention to recall. But the way it was implemented in the BLEU measure was to simply say, letfs put in a brevity penalty. Because if you just had it up to here, you could totally cheap. You just take the input, whatever it is, and say the translation of that is gthe.h And then, almost certainly, the word gtheh would appear in the input and in the reference translation, and you do really well. So the brevity penalty is saying, well, youfre gonna be penalized unless your translation is the same length as the reference translation, or youfll be penalized to the extent that itfs shorter. So that sorta forces you to put words in, and so therefore, youfll do better if youfre putting in words that are found in the reference translation. Okay. And then I give this sorta mixed evaluation here. The initial claims from IBM and some of the initial results were that once you put in the brevity penalty, that the BLEU measure was hard to gain, that is, that increasing -- if you do things that improve your BLEU score, that those genuinely are improving the performance of your machine translation system, as evaluated by humans. And if I sorta skip ahead and show that there were initial results, such as these ones, where here a human judgments of translation, which were separately done for adequacy and fluency. And here are BLEU scores. And look, youfre getting a beautiful linear relationship right there. So that was the early claim. At this point I think I have to have a big caveat to that, because in the last several years, people have started getting cleverer and cleverer at using high tech discriminative machine learning methods of various kinds, whose only job in life is to optimize BLEU scores. And what people have been increasingly finding is that the correlation between BLEU score increases and human evaluation of translation quality has been breaking down to the extent that for the kind of increases that people are now getting in BLEU scores, that theyfre not really correlated at all with improvements in translation fluency. So it sorta seems to have really reached the point where people have to be coming up with something that is a better automatic evaluation metric than the BLEU score. 
But nevertheless, the BLEU metric, simply because it did give this automated metric, was very effective at driving forward machine translation for a number of years. Okay. So what you're actually doing for the BLEU score is calculating a weighted geometric mean of your N-gram precisions up to a certain N. So the most common measure that's used is the BLEU-4 measure, which means you're considering up to 4-grams in the measure, and you're putting a weighting here, where you've got a kind of a lower weighting on 4-grams and a higher weighting on unigrams. And there's one very weird thing about this metric that you want to be aware of, which is that in a geometric mean, if your translation has no 4-grams that match the reference translation, you're gonna have a zero term there, and when you multiply the things together, you get zero. So the BLEU metric falls apart unless you have a non-zero score for each of the N-gram measures. I mean, one way of dealing with that, for just comparing your own systems, is to say, okay, if I can't do BLEU-4 I can just back off to BLEU-3, or something like that. But that sort of seems a weird, bad feature of the metric. But that's how it is. In particular, a problem that that raises for the metric is you kind of can't use BLEU scores to evaluate individual sentences, because at the individual sentence level, you might have a 4-gram overlap, and you'll do really well, or you'll commonly not have a 4-gram overlap, and your BLEU score will be zero. And it's sorta not really assessing the quality of the individual sentence; it's just kind of random stuff. So BLEU scores are really only sensible when done over an entire corpus of translations, and therefore the whole test set. And therefore, you can't really sensibly say which are my best and worst sentences based on BLEU scores. Yes? [Student:] [Inaudible]. I think the intuition is the following: you do want to give people credit for matching long N-grams, but at the end of the day, there's a lot of luck in whether you match long N-grams. Because this one matched "airport" and "its," even though actually it messed up and it put a "the" there, which is ungrammatical. Now, the system could equally have translated it as "airport" and "the its office," and to a human being, that seems kind of equally bad, because in both cases it's put in a "the" that shouldn't be there, given that the "its" is there, and it's only a question of which side of the "its" it puts it on. But if it had put the "the" on this side, it would have only gotten a bigram match, whereas because it put the "the" on this side, it got lucky and got a trigram match. So, although you want to pay attention to higher-order N-grams, it's quite noisy, and there's quite a bit of luck in whether you get them right. Whereas it seems like, for content words that should be translated, the word "Guam" should appear here, and if the word "Guam" doesn't appear here, that's bad. But that seems a more fundamental fact, and that encourages you to weight the unigrams higher. [Student:] [Inaudible]. Yes, you are right. Yeah, so it's kind of a compromise. I guess the bigrams still have a decent weight. [Student:] [Inaudible]. I mean, it doesn't have to be huge, but typically what's used is on the order of 500 or 1,000 sentences. Enough that you won't be getting a zero 4-gram overlap count.
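To make the computation concrete, here is a minimal corpus-level BLEU sketch with a single reference per sentence: clipped n-gram matches counted per sentence, a geometric mean of the precisions (uniform weights in this sketch), and the brevity penalty. Real implementations add multiple references, other weightings, smoothing, and so on.

```python
import math
from collections import Counter

# Minimal corpus-level BLEU sketch (single reference per sentence).

def ngrams(words, n):
    return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))

def bleu(candidates, references, max_n=4):
    match, total = [0] * max_n, [0] * max_n
    cand_len = ref_len = 0
    for cand, ref in zip(candidates, references):        # counts are per sentence
        cand_len += len(cand)
        ref_len += len(ref)
        for n in range(1, max_n + 1):
            c_grams, r_grams = ngrams(cand, n), ngrams(ref, n)
            match[n - 1] += sum(min(c, r_grams[g]) for g, c in c_grams.items())  # clipped
            total[n - 1] += sum(c_grams.values())
    precisions = [m / t if t else 0.0 for m, t in zip(match, total)]
    if min(precisions) == 0.0:       # any zero n-gram precision zeroes the whole score
        return 0.0
    log_mean = sum(math.log(p) for p in precisions) / max_n   # uniform weights here
    brevity = min(1.0, math.exp(1 - ref_len / cand_len))      # brevity penalty
    return brevity * math.exp(log_mean)

cand = [["the", "cat", "sat", "on", "the", "mat"]]
ref = [["the", "cat", "is", "on", "the", "mat"]]
print(bleu(cand, ref))             # 0.0: no 4-gram matches, the zero-term problem above
print(bleu(cand, ref, max_n=3))    # 0.5: backing off to BLEU-3 gives a usable score
```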
[Student:] [Inaudible] displacement penalty, because once you have [inaudible] there could be a matching word from the last sentence that really matches -- Oh, sorry, sorry. I should explain that. Yeah, it's not that the overlap -- no, sorry. That's something I'm glad to have the chance to explain. The counts of the overlaps are calculated per sentence. So you're taking here's a sentence and here's how it was translated, and you're just looking within that sentence for N-gram overlaps; you can't get scores for N-grams that span different sentences of the translation. Yeah, sorry. Okay. So you can run BLEU just like that, by having one reference translation and one machine translation and counting up the N-gram overlaps, and people do. And so this is just one more slide that sort of shows possible outputs and shows where there are N-gram matches of different sizes, and gives some idea as to how you do or don't get lucky in terms of bigger N-grams matching. But the idea after that is, in the original version of BLEU, and still quite commonly in evaluations, people don't just use one reference translation. They use multiple reference translations. Having four is actually quite common. And the idea here is, effectively, if you have four reference translations, you've sort of sampled the space of possible translations, so that if a system translated something in a reasonable way that a human being might translate it, you've kind of got four chances for it to show up in the reference translations, and then you're more likely to get higher-order N-gram matches. And so people have used that idea of having multiple reference translations. On the other hand, if you think about it statistically, you could kind of think to yourself, well, that shouldn't matter. I could just have one reference translation, and it's effectively sampling. It'll still be the case that if you translate things well, more often you'll match the reference translation. That seems to be true, too. So sometimes people use multiple reference translations; sometimes they just use one. And you really only have problems with using one reference translation if somehow there's a correlation between the style of a system and the style of one particular translator, which wouldn't be found with other translators. At this point, here's also one of the defects of the BLEU measure: BLEU scores aren't absolute scores that you can just interpret like an accuracy score, because how high a BLEU score is crucially depends on how many reference translations there are. So if you have four reference translations, you can quite commonly see BLEU scores in the 30s, 40s, 50s. Those are sort of pseudo-percentages, 0.3, 0.4, 0.5. Whereas, if you only have a single reference translation, BLEU scores will commonly be around 0.1; they might even be below 0.1, or up around 0.15, or something like that. And that's not meaning the system's three times worse. It just means you've only got one reference translation. Okay. So moving on to translation systems. So the huge part that I haven't really said anything about is, boy, how do we actually make a machine translation system? And in principle, this is straightforward. We have built an alignment model, the probability of F given E. We've built a language model, which gives the probability of E. So once we're given a French sentence, all we need to do is find the English sentence E that maximizes that product.
But that "all we need to do" is actually a very difficult problem. So doing machine translation decoding, even for the simplest models, is NP-hard. And the reason for that is simply that because you can translate words in any order, even apart from the choice of how to translate a word, that's already giving you an exponential factor right there, so you end up with an NP-hard problem. So what do people do about that? People have explored some different search strategies. So you can actually do a reduction from machine translation to the traveling salesman problem, because effectively, the reordering of the sentence is the order in which you visit your cities in the traveling salesman problem. The problem is that you can't actually use that to translate sentences longer than about five words, because it becomes impossibly slow and complex. By and large, the technique that people use, which I'll briefly mention, is beam search. So this is the standard way that people do the search. So you have a sentence that you want to translate. So you have this French sentence and you want to produce an English sentence. So you start over here at the beginning and you start off with an empty hypothesis for an English sentence. What you're gonna do is insert the first English word in your sentence, which will just be the first word of your translation. And you have two sources of knowledge with which you could do that. One source of knowledge is your language model. The language model will know that English sentences commonly start with the word "the." But that's not very good evidence. And the other source of knowledge you have is what words were in the French sentence. Now of course, the first word in the translation might not be the first word in the French sentence, so what you're gonna be doing is exploring all words in the French sentence, and considering possible translations for them, and then any of those could be the first word of the English translation. And so that gives you a big number of possible translation hypotheses. And you prune somewhere, as you usually do. And each translation hypothesis will cover some number of words in the French source sentence. And so the examples here kind of all look like they cover one word, but in the general case, you could also generate an English word in the output that didn't cover any French word, because it's just an extra word that's turning up in the English output, which had fertility zero in the alignment model. So you get a bunch of candidates, and the idea of beam search is simply that you're gonna keep some good ones and throw away the rest of them. And so you keep some good ones, and then you expand them by saying, okay, let me put the second English word in the sentence, using the same kinds of sources of information -- except your language model is now much better. Because you know what the first word in your English sentence is, you can make a much better estimate of what would be a good second word, according to the language model. So you generate candidates for a two-word English translation. And for each candidate, again, you record which French words you're covering in the input. And you kind of keep on going along, and you stop when you've covered all the French words in the sentence, because every word either has to end up being aligned to some English word, or it's being classed as a spurious word. So you go along until you've completed coverage. And so that's the general idea of the search. And then you kind of go backwards through your search tree and come out with the best translation. And so this is a kind of classic beam search algorithm.
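Here is a very stripped-down sketch of that kind of decoder, with made-up translation and language model tables; as comes up in a moment, the partial hypotheses are kept in separate beams according to how many source words they cover.

```python
import heapq

# Stripped-down beam-search decoder sketch. The translation table and the
# "language model" are made-up stand-ins; a real decoder uses the real
# language model times the alignment model, and also allows extra
# (fertility-zero) English words, which is left out here.

toy_translations = {"maison": [("house", 0.7), ("home", 0.3)],
                    "bleue":  [("blue", 0.9), ("azure", 0.1)]}

def lm_bigram(prev, word):
    # toy stand-in for an n-gram language model: prefers "blue house" order
    good = {("<s>", "blue"), ("<s>", "the"), ("blue", "house"), ("the", "blue")}
    return 0.3 if (prev, word) in good else 0.05

def decode(source, beam_size=4):
    # a hypothesis is (score, English words so far, covered source positions);
    # hypotheses are kept in separate beams by how many source words they cover
    beams = {0: [(1.0, (), frozenset())]}
    for n_covered in range(len(source)):
        expanded = []
        for score, english, covered in beams[n_covered]:
            prev = english[-1] if english else "<s>"
            for j, f in enumerate(source):           # pick any uncovered source word
                if j in covered:
                    continue
                for e, p in toy_translations[f]:     # its translation is the next English word
                    expanded.append((score * p * lm_bigram(prev, e),
                                     english + (e,), covered | {j}))
        beams[n_covered + 1] = heapq.nlargest(beam_size, expanded)   # prune the beam
    return max(beams[len(source)])

print(decode(["maison", "bleue"]))   # the "blue house" ordering wins on LM score
```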
So you go along until youfve completed coverage. And so thatfs the general idea of the search. And then when you then kind of go backwards through your search tree and come out with the best translation. And so this is a kind of classic beam search algorithm. So the main question, then, is how do you trim these beams to only keep candidates around? Yes? [Student:][Inaudible]. You generate the English translation -- so this is your starting off with the French sentence and youfre translating to English. You generate the English translation sequentially as the sentence from left to right. So any candidate is some initial prefix of an English translation, that it can cover an arbitrary subset of the words in the French original. Okay. So if youfre doing a beam search, you have to throw away low-ranked candidates. And the question is, how to throw them away. In a lot of algorithms of this sort, you just have some global score for candidates and you throw away the ones that have a low score. And we score by our product of probability of English according to the language model times the probability of French given English, according to the alignment model. But therefs a problem in doing that here to get it to work effectively is that any long candidate is worse than a short candidate, because the probabilities keep on going down. And while that might suggest to you other ideas as to, well, I could do some kind of A-star search like idea, where I could consider the stuff that Ifve yet to do -- and ideas like that are possible -- but the most common method thatfs used in practice is that you actually keep separate beams of candidates that cover a different number of French words. So all candidates that cover two French words or all candidates that cover three French words are compared against each other, but candidates arenft compared that cover a different number of words in the source. And that kind of gives you a way of keeping things roughly comparable. Okay. So now I wanna say a bit about sort of some of the more recent stuff thatfs been done in MT. So this is what we have so far. If you go back to the original IBM paper, Brown et al., they call is the gFundamental Equation of Machine Translation.h I think youfll always want to be suspicious in MLP when people call something gthe fundamental equation of something.h But as we know, itfs just Bayes Rule, right. So wefve got that. Is this really the best thing to use? It turns out that no, you can easily do better than that. It turns out that you can always do better by introducing yourself a hyperparameter, in which you can scale the two halves, simply because, since the different component models -- both of the component models have bad independence assumptions in them. But depending on whether one half model has better or even worse independence assumptions, that youfre kind -- independence assumptions always make your probability values more extreme than they should be. And so you can effectively attenuate over extreme probability values by putting an exponent in one of the component models. So since the probability of English might be 0.001, when I raise it to the 2.4, that becomes a much smaller number, 0.0000 -- whatever. So if I raise the language model to the 2.4th power, does that mean Ifm paying more or less attention to the language model in my machine translation? Am I paying more attention to it or less attention to it, in terms of what dominates the score? [Student:] More? I got a more. So the answer is more. 
And the way that you have to think about this is by looking at likelihood ratios between two candidates. So when I raise a number less than one to a power greater than one, the number gets smaller. But if you think of the ratios, the ratios are actually getting bigger, so I'm paying more attention to it. So I did this before I came to class, so I'd have the numbers; I did it with big probabilities. So if I had probabilities of 0.5 and 0.75, the ratio between them is 1.5. If I raise those two to the 2.4th power, I get 0.19 and 0.5. And so the ratio between them is now 2.6. And so that means I'm now actually paying more attention to differences in language model score. So that's a little parameter you can put in and improve the quality of your translation output. And if you start thinking along these lines -- well, maybe I can put in other factors that'll improve the quality of my translation output. And here's one that's commonly used. You put in a factor that is just the length of the English translation. And the reason that this seems like it'd be useful is that, all else being equal, the language model actually biases you against long translations, because all else being equal, the language model score is just higher the shorter a sentence is. Short sentences like "Hello" get good language model scores. Long sentences, like "In light of current economic indicators, the Federal Reserve Board is going to meet next week and blah, blah, blah," get really low language model scores. Now to some extent, that will be catered for by the alignment model, which will want to have translations for words, but, it turns out, insufficiently. So people have found that you end up with a bias toward too-short sentences. So you can counteract that by putting in a factor that's just the length of the English sentence, which again, you can weight with a hyperparameter to prefer longer sentences. And you can do that, and it'll improve the quality of your translation output. So once you've started doing that, you start thinking, oh boy, probably there are all sorts of things I can tack on the end and I'll be able to improve the quality of my translation model output. And that's exactly what people have done in recent machine translation work, so that people still have language model scores, people still have alignment model scores, but what people have built is effectively feature-based models of machine translation quality, where you can throw in anything else that's useful. You can just have features. A kind of a useful feature for evaluating English translations is just to look at whether the English translation has a verb in it. If there's no verb in the output, it's probably bad and you're missing something. So I can simply have a "does the sentence have a verb" feature. And I can put a weight on that. And that will help my translation quality a bit. And so you effectively have feature-based scoring, where you can have an arbitrary set of features, and then people do things like build log-linear models over these features. I'm not gonna get into the details of that now. We'll come back to those kinds of models in other contexts later.
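A minimal sketch of that kind of feature-based scoring follows, with invented weights and a toy stand-in for the verb check; note that putting a weight of 2.4 on the log language model score is the same thing as raising P(E) to the 2.4th power.

```python
import math

# Illustrative log-linear style scoring: the language model and alignment model
# become two features among many, each with a tunable weight. The weights, the
# probabilities, and the toy verb list are all made up for illustration.

TOY_VERBS = {"is", "are", "went", "rocked", "meets"}   # stand-in for a real POS tagger

def features(english_words, lm_prob, tm_prob):
    return {
        "log_lm":   math.log(lm_prob),             # language model P(E)
        "log_tm":   math.log(tm_prob),             # alignment model P(F|E)
        "length":   float(len(english_words)),     # counteracts the bias toward short output
        "has_verb": 1.0 if TOY_VERBS & set(english_words) else 0.0,
    }

weights = {"log_lm": 2.4, "log_tm": 1.0, "length": 0.3, "has_verb": 0.5}

def score(english_words, lm_prob, tm_prob):
    f = features(english_words, lm_prob, tm_prob)
    return sum(weights[k] * f[k] for k in f)       # higher (less negative) is better

print(score(["earthquakes", "rocked", "japan"], lm_prob=1e-6, tm_prob=1e-4))
```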
Okay. The second major idea that people have then gone to is getting away from this word-based MT. So in all of the IBM models, at the end of the day it was just the probability of a French word given an English word, context-independently -- and that was clearly just really bad and didn't seem like it could work very well. In particular, if you look at real language, it's just full of all of these things that linguists often call collocations, these little sequences of words that go together. So things like "real estate," "interested in." Note that many of these aren't conventional phrases in a linguistic sense. So "interested in" isn't a conventional phrase in the linguistic sense, because in "interested in soccer," you'd say that "in soccer" is a prepositional phrase; this is just the verb followed by a preposition. But these are exactly the kind of collocational units that are very useful and commonplace in language. And also, when you get into technical terminology, that's mainly collocations, so things like "disc drive" and "network router" and all of those things, these kinds of standard collocations, and so you'd like to be able to learn those. So that was a big gap in the IBM models. And the second gap, which I already mentioned with the adjective-noun word ordering, is we'd like to know something about syntax so we can do syntax-level transformations. And so for the last few minutes of today, I'll do the phrasal translation part of that. Okay. So the idea of phrase-based statistical MT is that what you're going to do with the input is you're gonna chunk it into phrases, where by "phrases" people here just mean any subsequences of the input. No notion of linguistics whatsoever, just any subsequences of the input. And then what we're going to do is have a probability distribution over phrase translations instead of individual word translations. And so you'd have the probability that this phrase is being translated by "to the conference." And so then there's still gonna be reordering, which will have displacement factors in it, but the reordering will be done in terms of these phrase chunks. So effectively, the model is, in some sense, kind of like before, apart from the fact that rather than doing it for each individual word, you're working on these multi-word units, which are called phrases. And that's really the only new idea. But up until this moment, essentially, doing these statistical phrase-based models is the state of the art of what people do for machine translation. There's starting to be some evidence for some languages that models that try to do more with real syntax in them are better, but this is still pretty much the state of the art. As it turns out, you can just capture a lot with these kinds of multi-word phrases. [Student:] [Inaudible]. I will do that on the next slide, or maybe the one after. So I'm almost there. So once we have these multi-word phrases, note that we actually are not totally dependent on the language model anymore. Because, effectively, these multi-word collocational chunks will include appropriate translations for a word inside their multi-word chunks. So we are getting kind of context-based choice of how to translate individual words. And the phrases that we use aren't of just one size. They're arbitrary sizes.
So what you really do find is that if there are just common utterances that keep on occurring -- there's actually sort of formulaic prose you see in the newspaper almost every day, so you'd kind of have "At the close of New York trading, blah, blah, blah" -- it'll just learn big phrases like "at the close of New York trading," and how to translate them. How do you learn the phrases? Here is the answer. And this is why the stuff that we've learned already isn't useless. What you do is you assume that you start with word alignments. How do you get those word alignments? You run Models 1, 2, HMM, 4, etc. Once you've got those word alignments, what you do is you consider possible phrases. So a possible phrase like this one is defined as a place where you get this kind of block structure. So there's nothing down here, nothing across there, nothing there, and nothing there. So that's a possible phrase that could be taken as a chunk. There are also smaller phrases. So this is a possible phrase; it has the same properties. But this single word is not a possible phrase on its own, because there are other things outside of it over here. And there are bigger phrases, so I could take all of this up to here -- that's a big phrase. So the possible phrases are things that are defined by this property of having nothing out here, which means that you're kind of capturing -- for the words "did not slap," you're capturing all of the words that are aligned with them. And similarly, in reverse, for these Spanish words, you're capturing all of the words that they align with. If things satisfy that property, you have a statistical phrase. Okay. So conventionally, how that is done is you actually run the IBM models twice. This is really the state of the art and basically what everyone does. If you look at it, it's sort of a funny process, and you'd think this couldn't possibly be optimal. And there has started to be a bit of research on better ways of doing things. But in terms of the people who have built the state-of-the-art MT systems -- of which, I guess, really the biggest players in statistical MT these days are Google, Microsoft, and then Southern California's ISI and their commercial offshoot, Language Weaver -- this is really still what people do. You run your IBM models in one direction, English to French, and then you recall that there was this crucial limitation of the IBM models, which is that they allowed n-to-1 alignments in one direction, but not in the other direction. That seems crummy. So what you do is you run the IBM models in the opposite direction. So that allows you to get 1-to-n in the opposite direction. And so now you have two alignments for each sentence pair, and you want to merge them into a consensus alignment. And at this point, people use various heuristics. The obvious first thing is to think of union and intersection. But it turns out that intersections are too sparse and unions are too dense to work well, so by and large, people are using heuristic methods with funny names like "grow-diag-final" and things like that, which do what seems to be a sensible job of combining alignments from the two directions. And then this is giving you your final alignment matrix. And then, effectively, you extract off all statistical phrases up to a certain maximum size, and they become your phrase lexicon.
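The "nothing outside the block" condition can be checked mechanically: a candidate phrase pair is kept if every alignment link that touches its English span or its foreign span falls inside both spans. Here is a minimal sketch, assuming you already have the merged word alignment for one sentence pair.

```python
# Minimal phrase-extraction sketch: given a symmetrized word alignment for one
# sentence pair, keep every (English span, foreign span) block such that no
# alignment link crosses the block's boundary.

def extract_phrases(n_english, n_foreign, alignment, max_len=4):
    # alignment: set of (i, j) links, i an English position, j a foreign position
    phrases = []
    for e1 in range(n_english):
        for e2 in range(e1, min(e1 + max_len, n_english)):
            for f1 in range(n_foreign):
                for f2 in range(f1, min(f1 + max_len, n_foreign)):
                    touching = [(i, j) for (i, j) in alignment
                                if e1 <= i <= e2 or f1 <= j <= f2]
                    consistent = touching and all(
                        e1 <= i <= e2 and f1 <= j <= f2 for (i, j) in touching)
                    if consistent:
                        phrases.append(((e1, e2), (f1, f2)))
    return phrases

# toy alignment for English "not slap" vs. Spanish "no daba una bofetada"
links = {(0, 0), (1, 1), (1, 2), (1, 3)}
print(extract_phrases(2, 4, links))
# -> blocks for "not"/"no", "slap"/"daba una bofetada", and the whole thing;
#    smaller pieces of "daba una bofetada" are correctly rejected
```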
So when we go back to here, where I just glibly said the foreign input is segmented into phrases -- the foreign input isn't uniquely segmented into phrases. It could be segmented into phrases in lots of ways. But the possible ways that it could be segmented into phrases are determined by what bilingual phrases exist in your statistical phrase lexicon. So you will explore any way of cutting things up that fits inside the statistical phrase lexicon. And effectively, you're doing that in a decoder, just like before, because when you translate, you're starting off with an empty candidate English translation. You're saying, okay, I want to lay down my first statistical phrase. And the first statistical phrase will be something in your phrase lexicon, which is some number of words in English corresponding to some number of words of the foreign language. And so you're not actually segmenting the foreign language before you begin. You're kind of segmenting the foreign language in the process of running the decoder. Okay. Well, that's where I can get to today, so I'll stop there, and next time I'll maybe spend a few more minutes on that, and then we'll be going on to the next topic of information extraction.