video_title | transcription
---|---
Group theory, abstraction, and the 196,883-dimensional monster | Today, many members of the YouTube math community are getting together to make videos about their favorite numbers over 1 million, and we're encouraging you the viewers to do the same. Take a look at the description for details. My own choice is considerably larger than a million, roughly 8 times 10 to the 53. For a sense of scale, that's around the number of atoms in the planet Jupiter, so it might seem completely arbitrary. But what I love is that if you were to talk with an alien civilization or a super intelligent AI that invented math for itself without any connection to our particular culture or experiences, I think both would agree that this number is something very peculiar and that it reflects something fundamental. What is it, exactly? Well, it's the size of the monster, but to explain what that means, we're going to need to back up and talk about group theory. This field is all about codifying the idea of symmetry. For example, when we say a face is symmetric, what we mean is that you can reflect it about a line and it's left looking completely the same. It's a statement about an action that you can take. Something like a snowflake is also symmetric, but in more ways, you can rotate it 60 degrees or 120 degrees, you can flip it along various different axes, and all these actions leave it looking the same. A collection of all of the actions like this taken together is called a group, kind of at least, groups are typically defined a little more abstractly than this, but we'll get to that later. Take note, the fact that mathematicians have co-opted such an otherwise generic word for this seemingly specific kind of collection should give you some sense of how fundamental they find it. Also take note, we always consider the action of doing nothing to be part of the group, so if we include that do nothing action, the group of symmetries of a snowflake includes 12 distinct actions. 
It even has a fancy name, D6. The simple group of symmetries that only has two elements acting on a face also has a fancy name, C2. In general, there is a whole zoo of groups with no shortage of jargon to their names categorizing the many different ways that something can be symmetric. When we describe these sorts of actions, there's always an implicit structure being preserved. For example, there are 24 rotations that I can apply to a cube that leave it looking the same, and those 24 actions taken together do indeed constitute a group. But if we allow for reflections, which is a kind of way of saying that the orientation of the cube is not part of the structure we intend to preserve, you get a bigger group with 48 actions in total. If you loosen things further and consider the faces to be a little less rigidly attached, maybe free to rotate and get shuffled around, you would get a much larger set of actions. And yes, you could consider these symmetries in the sense that they leave it looking the same, and all of these shuffling, rotating actions do constitute a group, but it's a much bigger and more complicated group. The large size of this group reflects the much looser sense of structure which each action preserves. The loosest sense of structure is when we have a collection of points and we consider any way that you could shuffle them, any permutation, to be a symmetry of those points. Unconstrained by any underlying property that needs to be preserved, these permutation groups can get quite large. Here, it's kind of fun to flash through every possible permutation of six objects and see how many there are. In total, it amounts to six factorial or 720. By contrast, if we gave these points some structure, maybe making them the corners of a hexagon and only considering the permutations that preserve how far apart each one is from the other, well then we only get the 12 snowflake symmetries that we saw earlier.
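As a sanity check on these counts, here's a small Python sketch (my own construction, not from the video; the hexagon-corner setup and helper names are invented for illustration). It enumerates all 720 permutations of six points, then keeps only the ones that preserve every pairwise distance between the corners of a regular hexagon, recovering the 12 snowflake symmetries:

```python
import itertools
import math

# All permutations of 6 unstructured points: 6! = 720 of them.
perms = list(itertools.permutations(range(6)))
print(len(perms))  # 720

# Now give the points structure: the corners of a regular hexagon.
corners = [(math.cos(math.pi / 3 * k), math.sin(math.pi / 3 * k))
           for k in range(6)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Keep only the permutations that preserve every pairwise distance.
symmetries = [
    p for p in perms
    if all(math.isclose(dist(corners[i], corners[j]),
                        dist(corners[p[i]], corners[p[j]]))
           for i in range(6) for j in range(i + 1, 6))
]
print(len(symmetries))  # 12, the hexagon/snowflake symmetries, D6
```

The same filter with 12 unstructured points would have to sift through 12!, about 479 million permutations, which is the count mentioned next.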
Bump the number of points up to 12 and the number of permutations grows to about 479 million. The monster that we'll get to is rather large, but it's important to understand that largeness in and of itself is not that interesting when it comes to groups. The permutation groups already make that easy to see. If we were shuffling 101 objects, for example, with the 101 factorial different actions that can do this, we have a group with a size of around 9 x 10 to the 159. If every atom in the observable universe had a copy of that universe inside itself, this is roughly how many sub-atoms there would be. These permutation groups go by the name S sub N and they play a very important role in group theory. In a certain sense, they encompass all other groups. And so far, you might be thinking, okay this is intellectually playful enough, but is any of this actually useful? One of the earliest applications of group theory came when mathematicians realized that the structure of these permutation groups tells us something about solutions to polynomial equations. You know how in order to find the two roots of a quadratic equation, everyone learns a certain formula in school? Slightly lesser known is the fact that there's also a cubic formula, one that involves nesting cube roots with square roots in a larger expression. There's even a quartic formula for a degree 4 polynomial, which is an absolute mess. It's almost impossible to write without factoring things out. And for the longest time, mathematicians struggled to find a formula to solve degree 5 polynomials. I mean, maybe there's one, but it's just super complicated. It turns out though, if you think about the group which permutes the roots of such a polynomial, there's something about the nature of this group that reveals no quintic formula can exist. For example, the 5 roots of the polynomial you see on screen now, they have definite values. 
You could write out decimal approximations, but what you can never do is write those exact values by starting with the coefficients of the polynomial and using only the 4 basic operations of arithmetic together with radicals, no matter how many times you nest them. And that impossibility has everything to do with the inner structure of the permutation group S5. A theme in math through the last two centuries has been that the nature of symmetry in and of itself can show us all sorts of non-obvious facts about the other objects that we study. To give just a hint of the many, many ways that this applies to physics, there's a beautiful fact known as Noether's theorem, saying that every conservation law corresponds to some kind of symmetry, a certain group. So all those fundamental laws like conservation of momentum and conservation of energy each correspond to a group. More specifically, the actions we should be able to apply to a setup such that the laws of physics don't change. All of this is to say that groups really are fundamental, and the one thing I want you to recognize right now is that they are one of the most natural things that you could study. What could be more universal than symmetry? So you might think that the patterns among groups themselves would somehow be very beautiful and symmetric. The monster, however, tells a different story. Before we get to the monster though, at this point some mathematicians might complain that what I've described so far are not groups exactly, but group actions, and that groups are something slightly more abstract. By way of analogy, if I mention the number three, you probably don't think about a specific triplet of things, you probably think about three as an object in and of itself, an abstraction, maybe represented with a symbol. In much the same way, when mathematicians discuss the elements of a group, they don't necessarily think about specific actions on specific objects.
They might think of these elements as a kind of thing in and of itself, maybe represented with a symbol. For something like the number three, the abstract symbol does us very little good unless we define its relation with other numbers. For example, the way that it adds or that it multiplies with them. For each of these, you could think of a literal triplet of something, but again, most of us are comfortable, probably even more comfortable, using the symbols alone. Similarly, what makes a group a group are all of the ways that its elements combine with each other. And in the context of actions, this has a very vivid meaning. What we mean by combining is to apply one action after the other, read from right to left. If you flip a snowflake about the x-axis, then rotate it 60 degrees counterclockwise, the overall action is the same as if you had flipped it about a diagonal line. All possible ways that you can combine two elements of a group like this defines a kind of multiplication. That is what really gives a group its structure. Here, I'm drawing out the full 8x8 table of the symmetries of a square. If you apply an action from the top row and follow it by an action from the left column, it'll be the same as the action in the corresponding grid square. But if we replace each one of these symmetric actions with something purely symbolic, well, the multiplication table still captures the inner structure of the group, but now it's abstracted away from any specific object that it might act on, like a square or roots of a polynomial. This is entirely analogous to how the usual multiplication table is written symbolically, which abstracts away from the idea of literal counts. Literal counts, arguably, would make it much clearer what's going on. But since grade school, we all grow comfortable with the symbols.
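The 8x8 table just described can be built in a few lines of Python (again my own sketch, with the corner labeling and generator choices invented for illustration). Each symmetry of the square is represented as a permutation of its four corners, the group is generated by closing under composition, and the closure check is exactly the statement that every entry of the multiplication table lands back in the group:

```python
import itertools

# Represent each symmetry of a square as a permutation of its 4 corners,
# labeled 0..3 going around the square.
def compose(p, q):
    # "p after q": apply q first, then p (reading right to left)
    return tuple(p[q[i]] for i in range(4))

e = (0, 1, 2, 3)   # the do-nothing action
r = (1, 2, 3, 0)   # a 90 degree rotation
f = (1, 0, 3, 2)   # a reflection swapping two adjacent pairs of corners

# Generate the whole group by repeatedly composing what we have so far.
group = {e, r, f}
while True:
    new = {compose(a, b) for a in group for b in group} - group
    if not new:
        break
    group |= new

print(len(group))  # 8 symmetries of the square

# The full 8x8 multiplication table: every product lands back in the group.
assert all(compose(a, b) in group for a in group for b in group)
```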
After all, they're less cumbersome, they free us to think about more complicated numbers, and they also free us to think about numbers in new and very different ways. All of this is true of groups as well, which are best understood as abstractions above the idea of symmetry actions. I'm emphasizing this for two reasons. One is that understanding what groups really are gives a better appreciation for the monster. And the other is that many students learning about groups for the first time can find them frustratingly opaque. I know that I did. A typical course starts with this very formal and abstract definition, which is that a group is a set, a collection of things, with a binary operation, a notion of multiplication between those things, such that this multiplication satisfies four special rules, or axioms. And all of this can feel, well, kind of random, especially when it isn't made clear that all of these axioms arise from the things that must obviously be true when you're thinking about actions and composing them. To any students among you with such a course in the future, I would say that if you appreciate that the relationship groups have with symmetric actions is analogous to the relationship numbers have with counts, it can help to make the course feel a lot more grounded. An example might help to see why this kind of abstraction is desirable. Consider the symmetries of a cube and the permutation group of four objects. At first, these groups feel very different. You might think of the one on the left as acting on eight corners in a way that preserves the distance and orientation structure among them. But on the right, we have a completely unconstrained set of actions on a much smaller set of points. As it happens though, these two groups are really the same in the sense that their multiplication tables will look identical. Anything that you can say about one group will be true of the other.
For example, there are eight distinct permutations where applying it three times in a row gets you back to where you started, not counting the identity. These are the ones that cycle three different elements together. There are also eight rotations of the cube that have this property, the various 120 and 240 degree rotations about each diagonal. This is no coincidence. The way to phrase this more precisely is to say there is a one-to-one mapping between rotations of a cube and permutations of four elements which preserves composition. For example, rotating 180 degrees about the y-axis, followed by 180 degrees about the x-axis, gives the same overall effect as rotating 180 degrees around the z-axis. Remember, that's what we mean by a product of two actions. If you look at the corresponding permutations under a certain one-to-one association, this product will still be true, applying the two actions on the left gives the same overall effect as the one on the right. When you have a correspondence where this remains true for all products, it's called an isomorphism, which is maybe the most important idea in group theory. This particular isomorphism between cube rotations and permutations of four objects is a bit subtle, but for the curious among you, you may enjoy taking a moment to think hard about how the rotations of a cube permute its four diagonals. In your mathematical life, you'll see more examples of a given group arising from seemingly unrelated situations, and as you do, you'll get a better sense for what group theory is all about. Think about how a number like three is not really about a particular triplet of things. It's about all possible triplets of things. In the same way, a group is not really about symmetries of a particular object. It's an abstract way that things even can be symmetric. 
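One half of the count just mentioned can be checked directly in code (a sketch of my own, not from the video): in S4, the permutations of the cube's four diagonals, count the non-identity elements whose third power is the identity.

```python
import itertools

def compose(p, q):
    # apply q first, then p
    return tuple(p[q[i]] for i in range(4))

identity = (0, 1, 2, 3)

# Non-identity permutations of 4 objects that return everything home
# after three applications: the 3-cycles.
order_three = [p for p in itertools.permutations(range(4))
               if p != identity and compose(p, compose(p, p)) == identity]

print(len(order_three))  # 8, matching the eight 120/240 degree diagonal rotations
```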
There are even plenty of situations where groups come up in a way that does not feel like a set of symmetric actions at all, just as numbers can do a lot more than count. In fact, seeing the same group come up in different situations is a great way to reveal unexpected connections between distinct objects, that's a very common theme in modern math. And once you understand this about groups, it leads you to a natural question, which will eventually lead to the monster. What are all the groups? But now you're in a position to ask that question in a more sophisticated way. What are all the groups up to isomorphism? Which is to say we consider two groups to be the same if there's an isomorphism between them. This is asking something more fundamental than what are all the symmetric things. It's a way of asking what are all the ways that something can be symmetric? Is there some formula or procedure for producing them all? Some meta pattern lying at the heart of symmetry itself? This question turns out to be hard, exceedingly hard. For one thing, there's the division between infinite groups, for example the ones describing the symmetries of a line or a circle, and finite groups, like the ones we've looked at up to this point. To maintain some hope of sanity, let's limit our view to finite groups. In the same way that numbers can be broken down into their prime factorization, or molecules can be described based on the atoms within them, there's a certain way that finite groups can be broken down into a kind of composition of smaller groups. The ones which can't be broken down any further, analogous to prime numbers or atoms, are known as the simple groups. To give a hint for why this is useful, remember how we said that group theory can be used to prove that there's no formula for a degree 5 polynomial, the way there is for quadratic equations?
Well, if you're wondering what that proof actually looks like, it involves showing that if there were some kind of mythical quintic formula, something which uses only radicals and the basic arithmetic operations, it would imply that the permutation group on five elements decomposes into a special kind of simple group, known fancifully as the cyclic groups of prime order. But the actual way that this breaks down involves a different kind of simple group, a different kind of atom, one which polynomial solutions built from radicals would never allow. That is a super high level description, of course, with about a semester's worth of details missing. But the point is that you have this really not obvious fact about a different part of math, whose solution comes from finding the atomic structure of a certain group. This is one of many different examples where understanding the nature of these simple groups, these atoms, actually matters outside of group theory. The task of categorizing all finite groups breaks down into two steps. One, find all the simple groups, and two, find all of the ways to combine them. The first question is like finding the periodic table, and the second is a bit like doing all of chemistry thereafter. The good news is that mathematicians have found all of the finite simple groups. Well, more pertinent is that they proved that the ones that they found are, in fact, all the ones out there. It took many decades, tens of thousands of dense pages of advanced math, hundreds of some of the smartest minds out there, and significant help from computers. But by 2004, with a culminating 12,000 pages to tie up the loose ends, there was a definitive answer. Many experts agree, this is one of the most monumental achievements in the history of math. The bad news, though, is that the answer is absurd. There are 18 distinct, infinite families of simple groups, which makes it really tempting to lean into the whole periodic table analogy.
The groups are stranger than chemistry, though, because there are also these 26 simple groups that are just left over, they don't fit the other patterns. These 26 are known as the sporadic groups. That a field of study rooted in symmetry itself has such a patched-together fundamental structure is, well, I mean it's just bizarre, it's like the universe was designed by committee. If you're wondering what we mean by an infinite family, examples might help. One such family of simple groups includes all of the cyclic groups with prime order. These are essentially the symmetries of a regular polygon with a prime number of sides, but where you're not allowed to flip the polygon over. Another of these infinite families is very similar to the permutation groups that we saw earlier, but there's the tiniest constraint on how they're allowed to shuffle n items. If they act on five or more elements, these groups are simple, which incidentally is heavily related to why polynomials with degree five or more have solutions that can't be written down using radicals. The other 16 families are notably more complicated, and I'm told that there's at least a little ambiguity in how to organize them into cleanly distinct families without overlap. But what everybody agrees on is that the 26 sporadic groups stand out as something very different. The largest of these sporadic groups is known, thanks to John Conway, as the monster group, and its size is the number I mentioned at the start. The second largest, and I promise this isn't a joke, is known as the baby monster group. Together with the baby monster, 19 of these sporadic groups are in a certain sense children of the monster, and Robert Griess called these 20 the happy family. He also called the other six, which don't even fit that pattern, the pariahs. As if to compensate for how complicated the underlying math here is, the experts really let loose on their whimsy while naming things.
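For what it's worth, the size quoted at the start can be recomputed from the monster group's published prime factorization (the factorization is a known result; the code is just arithmetic):

```python
# The order of the monster group from its known prime factorization:
# 2^46 * 3^20 * 5^9 * 7^6 * 11^2 * 13^3 * 17 * 19 * 23 * 29 * 31 * 41 * 47 * 59 * 71
factors = {2: 46, 3: 20, 5: 9, 7: 6, 11: 2, 13: 3,
           17: 1, 19: 1, 23: 1, 29: 1, 31: 1, 41: 1,
           47: 1, 59: 1, 71: 1}

order = 1
for prime, exponent in factors.items():
    order *= prime ** exponent

print(order)
# 808017424794512875886459904961710757005754368000000000,
# roughly the 8 times 10 to the 53 mentioned at the start
```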
Let me emphasize, having a group which is big is not that big a deal, but the idea that one of the fundamental building blocks for one of the most fundamental ideas in math comes in a collection that just abruptly stops around 8 x 10 to the 53, that's weird. Now at this point, given that I introduced groups as symmetries, a collection of actions, you might wonder what it is that the monster acts on. What object does it describe the symmetries of? There is an answer, but it doesn't fit into two or three dimensions to draw. Nor does it fit into four or five. Instead, to see what the monster acts on, we would have to jump up to, wait for it, 196,883 dimensions. Just describing one of the elements of this group takes about four gigabytes of data, even though plenty of groups that are way bigger have a much smaller computational description. The permutation group on 101 elements was, if you'll recall, dramatically bigger, but we can describe each one of its elements with very little data, for example a list of 101 numbers. No one really understands why the sporadic groups and the monster in particular are there. Maybe in a few decades there will be a clearer answer, maybe one of you will come up with it. Despite knowing that they are deeply fundamental to math and arguably to physics as well, a lot about them remains mysterious. In the 1970s, mathematician John McKay was making a switch from studying group theory to an adjacent field, and he noticed that a number very similar to this 196,883 showed up in a completely unrelated context, or at least almost. A number one bigger than this was in the series expansion of a fundamental function in a totally different part of math, relevant to these things called modular forms and elliptic functions. Assuming that this was more than a coincidence seemed crazy, enough that it was playfully deemed moonshine by John Conway.
But after more numerical coincidences like this were noticed, it gave rise to what became known as the monstrous moonshine conjecture, whimsical names just don't stop. This was proved by Richard Borcherds in 1992, solidifying a connection between very different parts of math that at first glance seemed crazy. Six years later, by the way, he won the Fields Medal, in part for the significance of this proof. And related to this moonshine is a connection between the monster and string theory. Maybe it shouldn't come as a surprise that something that arises from symmetry itself is relevant to physics, but in light of just how random the monster seems at first glance, this connection still elicits a double take. To me, the monster and its absurd size is a nice reminder that fundamental objects are not necessarily simple. The universe doesn't really care if its final answers look clean. They are what they are by logical necessity, with no concern over how easily we'll be able to understand them. |
What is backpropagation really doing? | Chapter 3, Deep learning | Here we tackle back propagation, the core algorithm behind how neural networks learn. After a quick recap for where we are, the first thing I'll do is an intuitive walk-through for what the algorithm is actually doing without any reference to the formulas. Then, for those of you who do want to dive into the math, the next video goes into the calculus underlying all this. If you watched the last two videos, or if you're just jumping in with the appropriate background, you know what a neural network is, and how it feeds forward information. Here we're doing the classic example of recognizing handwritten digits, whose pixel values get fed into the first layer of the network with 784 neurons, and I've been showing a network with two hidden layers having just 16 neurons each, and an output layer of 10 neurons, indicating which digit the network is choosing as its answer. I'm also expecting you to understand gradient descent, as described in the last video, and how what we mean by learning is that we want to find which weights and biases minimize a certain cost function. As a quick reminder for the cost of a single training example, what you do is take the output that the network gives, along with the output that you wanted it to give, and you just add up the squares of the differences between each component. Doing this for all of your tens of thousands of training examples and averaging the results, this gives you the total cost of the network. As if that's not enough to think about, as described in the last video, the thing that we're looking for is the negative gradient of this cost function, which tells you how you need to change all of the weights and biases, all of these connections, so as to most efficiently decrease the cost. Backpropagation, the topic of this video, is an algorithm for computing that crazy complicated gradient. 
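The per-example cost just described can be written in a few lines. The specific activation values below are made up for illustration, not taken from the video:

```python
# Cost of a single training example: sum of squared differences between
# the network's 10 output activations and the desired one-hot output.
def example_cost(output, desired):
    return sum((o - d) ** 2 for o, d in zip(output, desired))

# A poorly trained network's outputs for an image of a "2" (made-up values):
output = [0.5, 0.8, 0.2, 0.9, 0.1, 0.6, 0.3, 0.7, 0.4, 0.2]
desired = [0.0] * 10
desired[2] = 1.0  # we want the "2" neuron at 1, everything else at 0

print(round(example_cost(output, desired), 2))  # 3.49 for these made-up numbers
```

The network's total cost is then this quantity averaged over all of the training examples.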
And the one idea from the last video that I really want you to hold firmly in your mind right now is that because thinking of the gradient vector as a direction in 13,000 dimensions is, to put it lightly, beyond the scope of our imaginations, there's another way you can think about it. The magnitude of each component here is telling you how sensitive the cost function is to each weight and bias. For example, let's say you go through the process I'm about to describe, and you compute the negative gradient, and the component associated with the weight on this edge here comes out to be 3.2, while the component associated with this edge here comes out as 0.1. The way you would interpret that is that the cost function is 32 times more sensitive to changes in that first weight. So if you were to wiggle that value just a little bit, it's going to cause some change to the cost, and that change is 32 times greater than what the same wiggle to that second weight would give. Personally, when I was first learning about backpropagation, I think the most confusing aspect was just the notation and the index chasing of it all. But once you unwrap what each part of this algorithm is really doing, each individual effect that it's having is actually pretty intuitive. It's just that there are a lot of little adjustments getting layered on top of each other. So I'm going to start things off here with a complete disregard for the notation, and just step through those effects that each training example is having on the weights and biases. Because the cost function involves averaging a certain cost per example over all the tens of thousands of training examples, the way that we adjust the weights and biases for a single gradient descent step also depends on every single example, or rather, in principle it should, but for computational efficiency, we're going to do a little trick later to keep you from needing to hit every single example for every single step.
In any case, right now, all we're going to do is focus our attention on one single example, this image of a two. What effect should this one training example have on how the weights and biases get adjusted? Let's say we're at a point where the network is not well trained yet, so the activations in the output are going to look pretty random, maybe something like 0.5, 0.8, 0.2, on and on. Now, we can't directly change those activations. We only have influence on the weights and biases. But it is helpful to keep track of which adjustments we wish should take place to that output layer. And since we want it to classify the image as a two, we want that third value to get nudged up, while all of the others get nudged down. Moreover, the sizes of these nudges should be proportional to how far away each current value is from its target value. For example, the increase to that number two neuron's activation is in a sense more important than the decrease to the number eight neuron, which is already pretty close to where it should be. So zooming in further, let's focus just on this one neuron, the one whose activation we wish to increase. Remember, that activation is defined as a certain weighted sum of all of the activations in the previous layer, plus a bias, which is all then plugged into something like the sigmoid squishification function, or a ReLU. So there are three different avenues that can team up together to help increase that activation. You can increase the bias, you can increase the weights, and you can change the activations from the previous layer. Focusing just on how the weights should be adjusted, notice how the weights actually have differing levels of influence. The connections with the brightest neurons from the preceding layer have the biggest effect, since those weights are multiplied by larger activation values.
So if you were to increase one of those weights, it actually has a stronger influence on the ultimate cost function than increasing the weights of connections with dimmer neurons, at least as far as this one training example is concerned. Remember, when we talk about gradient descent, we don't just care about whether each component should get nudged up or down, we care about which ones give you the most bang for your buck. This, by the way, is at least somewhat reminiscent of a theory in neuroscience for how biological networks of neurons learn, Hebbian theory, often summed up in the phrase, neurons that fire together, wire together. Here, the biggest increases to weights, the biggest strengthening of connections, happens between neurons which are the most active, and the ones which we wish to become more active. In a sense, the neurons that are firing while seeing a two get more strongly linked to those firing when thinking about a two. To be clear, I really am not in a position to make statements one way or another about whether artificial networks of neurons behave anything like biological brains, and this fire together, wire together idea comes with a couple meaningful asterisks. But taken as a very loose analogy, I do find it interesting to note. Anyway, the third way that we can help increase this neuron's activation is by changing all the activations in the previous layer. Namely, if everything connected to that digit two neuron with a positive weight got brighter, and if everything connected with a negative weight got dimmer, then that digit two neuron would become more active. And similar to the weight changes, you're going to get the most bang for your buck by seeking changes that are proportional to the size of the corresponding weights. Now, of course, we cannot directly influence those activations. We only have control over the weights and biases. But just as with the last layer, it's helpful to just keep a note of what those desired changes are.
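That sensitivity claim can be checked with a tiny numerical sketch (all the numbers and the three-neuron previous layer are made up for illustration): give each weight the same small wiggle and watch how much the activation moves. The effect scales with how bright the corresponding previous-layer neuron is.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One neuron's activation: a weighted sum of previous-layer activations
# plus a bias, squished by a sigmoid. (Tiny made-up example.)
prev_activations = [0.9, 0.1, 0.4]   # one bright, one dim, one in between
weights = [0.5, -0.3, 0.8]
bias = -0.2

z = sum(w * a for w, a in zip(weights, prev_activations)) + bias
activation = sigmoid(z)

# Wiggle each weight by the same tiny amount and measure the response.
eps = 1e-6
sensitivities = []
for i in range(3):
    bumped = list(weights)
    bumped[i] += eps
    z2 = sum(w * a for w, a in zip(bumped, prev_activations)) + bias
    sensitivities.append((sigmoid(z2) - activation) / eps)

print(sensitivities)  # largest for the brightest (0.9) neuron, smallest for the dimmest
```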
But keep in mind, zooming out one step here, this is only what that digit two output neuron wants. Remember, we also want all of the other neurons in the last layer to become less active, and each of those other output neurons has its own thoughts about what should happen to that second to last layer. So, the desire of this digit two neuron is added together with the desires of all the other output neurons, for what should happen to this second to last layer. Again, in proportion to the corresponding weights, and in proportion to how much each of those neurons needs to change. This right here is where the idea of propagating backwards comes in. By adding together all these desired effects, you basically get a list of nudges that you want to happen to this second to last layer. And once you have those, you can recursively apply the same process to the relevant weights and biases that determine those values, repeating the same process I just walked through and moving backwards through the network. And zooming out a bit further, remember that this is all just how a single training example wishes to nudge each one of those weights and biases. If we only listened to what that two wanted, the network would ultimately be incentivized just to classify all images as a two. So what you do is you go through this same backpropagation routine for every other training example, recording how each of them would like to change the weights and the biases, and you average together those desired changes. This collection here of the averaged nudges to each weight and bias is, loosely speaking, the negative gradient of the cost function referenced in the last video, or at least something proportional to it. I say loosely speaking only because I have yet to get quantitatively precise about those nudges.
But if you understood every change that I just referenced, why some are proportionally bigger than others, and how they all need to be added together, you understand the mechanics for what backpropagation is actually doing. By the way, in practice, it takes computers an extremely long time to add up the influence of every single training example on every single gradient descent step. So here's what's commonly done instead. You randomly shuffle your training data and then divide it into a whole bunch of mini-batches, let's say each one having 100 training examples. Then you compute a step according to the mini-batch. It's not going to be the actual gradient of the cost function, which depends on all of the training data, not this tiny subset, so it's not the most efficient step downhill. But each mini-batch does give you a pretty good approximation, and more importantly, it gives you a significant computational speed up. If you were to plot the trajectory of your network over the relevant cost surface, it would be a little more like a drunk man stumbling aimlessly down a hill but taking quick steps, rather than a carefully calculating man determining the exact downhill direction of each step before taking a very slow and careful step in that direction. This technique is referred to as stochastic gradient descent. There's kind of a lot going on here, so let's just sum it up for ourselves, shall we? Backpropagation is the algorithm for determining how a single training example would like to nudge the weights and biases, not just in terms of whether they should go up or down, but in terms of what relative proportions of those changes cause the most rapid decrease to the cost. A true gradient descent step would involve doing this for all your tens of thousands of training examples and averaging the desired changes that you get.
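The shuffle-and-batch procedure described above can be sketched on a toy one-variable problem (everything here, including the quadratic per-example cost, is invented for illustration, not the video's network): each step uses only one mini-batch's approximate gradient, and the noisy steps still stumble toward the minimum.

```python
import random

# One epoch of stochastic gradient descent on a toy 1-D problem:
# minimize the average of (w - x)^2 over the data, whose minimum is the mean.
def sgd_epoch(w, data, lr=0.1, batch_size=100):
    random.shuffle(data)                      # shuffle the training data
    for k in range(0, len(data), batch_size): # carve it into mini-batches
        batch = data[k:k + batch_size]
        # gradient of the batch-averaged cost (w - x)^2 with respect to w
        grad = sum(2 * (w - x) for x in batch) / len(batch)
        w -= lr * grad  # step using only this mini-batch's approximate gradient
    return w

random.seed(0)
data = [random.gauss(5.0, 1.0) for _ in range(1000)]
w = 0.0
for _ in range(20):
    w = sgd_epoch(w, data)
print(w)  # drifts toward the data's mean, near 5
```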
But that's computationally slow, so instead you randomly subdivide the data into these mini-batches and compute each step with respect to a mini-batch. Repeatedly going through all of the mini-batches and making these adjustments, you will converge towards a local minimum of the cost function, which is to say your network is going to end up doing a really good job on the training examples. So with all of that said, every line of code that would go into implementing backprop actually corresponds with something that you have now seen, at least in informal terms. But sometimes knowing what the math does is only half the battle, and just representing the damn thing is where it gets all muddled and confusing. So for those of you who do want to go deeper, the next video goes through the same ideas that were just presented here, but in terms of the underlying calculus, which should hopefully make it a little more familiar as you see the topic in other resources. Before that, one thing worth emphasizing is that for this algorithm to work, and this goes for all sorts of machine learning beyond just neural networks, you need a lot of training data. In our case, one thing that makes handwritten digits such a nice example is that there exists the MNIST database, with so many examples that have been labeled by humans. So a common challenge that those of you working in machine learning will be familiar with is just getting the labeled training data that you actually need, whether that's having people label tens of thousands of images or whatever other data type you might be dealing with. That actually transitions really nicely to today's extremely relevant sponsor, CrowdFlower, which is a software platform where data scientists and machine learning teams can create training data. They allow you to upload text or audio or image data and have it annotated by real people.
You may have heard of the human in the loop approach before, and this is essentially what we're talking about here, leveraging human intelligence to train machine intelligence. They employ a whole bunch of pretty smart quality control mechanisms to keep the data clean and accurate, and they've helped to train, test, and tune thousands of data and AI projects. And what's most fun, there's actually a free t-shirt in this for you guys. If you go to 3B1B.co slash Crowdflower or follow the link on screen and in the description, you can create a free account and run a project, and they'll send you a free shirt once you've done the job. And the shirt's actually pretty cool. I quite like it. So thanks to Crowdflower for supporting this video, and thank you also to everyone on Patreon helping support these videos. |
Olympiad level counting: How many subsets of {1,…,2000} have a sum divisible by 5? | In a moment, I will ask you a puzzle, and it's a pretty hard puzzle, actually, but before I do, I want to lead with a spoiler, which is the fact that the way we're going to solve this involves the use of complex numbers. And once you hear it, you will agree that that seems absurd, given that the puzzle is going to be purely a discrete question. It only asks about whole numbers and their sums. There's not a whiff of the imaginary or even continuity anywhere on the horizon. It's certainly not the only time that complex numbers are unreasonably useful for discrete math to borrow a phrase. The more famous example that I could bring up would be how the modern way that mathematicians understand prime numbers, you know, questions about how they're distributed, their density at certain regions, things like that. Well, it involves studying specially designed functions whose inputs and outputs are complex numbers. Some of you may know that this is what the famous Riemann hypothesis is all about. Basically, there's a specially designed function, and on the face of it, it looks unrelated to the discrete world of primes. It's smooth, it's complex valued, but under the hood it encodes all of the information that you could ever want about those discrete prime numbers. And most importantly, certain questions about primes are easier to answer by analyzing this function than they would be by directly analyzing the primes themselves. Of course, our puzzle, which I promise I'll share in just a moment, is a lot more innocent than the Riemann hypothesis. It's a toy problem. But at the end of the video, I'll share how the techniques that we use to solve it, the real reason that we're here, are actually pretty similar in spirit to the setup that leads to the Riemann hypothesis. And the prime number theorem in that whole circle of thoughts around it. 
Our puzzle for today comes from this book here by Titu Andreescu and Zuming Feng. It's basically a collection of problems used in training the USA team for the International Math Olympiad. And if we turn to chapter 2, advanced problems, problem number 10 asks this seemingly innocent question: find the number of subsets of the set {1, ..., 2000}, the sum of whose elements is divisible by 5. Okay, so that might take a little bit of a moment to parse. For example, something like the set {3, 1, 4} would be a subset; all of its elements are also elements in the big set. And its sum, 3 plus 1 plus 4, is 8, which is not divisible by 5, so that one is not in our count. Whereas something like the set {2, 3, 5}, also a subset, has a sum of 10, which is divisible by 5, so it's one that we want to count. The preview animation that I had at the start is essentially a brute force program trying to answer this question. It will iterate through all of the different possible subsets, finding the sum of each one along the way, and it increments a counter each time that it finds a multiple of 5. And you know what, a nice warm-up question here would be to pause and think about how many total subsets there are overall, forgetting this multiple-of-5 stuff. How long would it take for this program to terminate? Many of you may know the answer is 2 to the power 2000. The basic idea there is that when you're constructing a subset, you have 2000 different binary choices you can make: do you include an element or do you not? And all of those choices are independent of each other, so the total number of choices you have in constructing a subset is 2 times 2 times 2 times 2, on and on, 2000 times. And thinking about our program, that is a monstrously huge number. So even if we gave this brute-forcing approach all the time in the universe, with all the physical resources the universe could conceivably provide, it wouldn't even come close, it wouldn't scratch the surface.
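That brute force idea is easy to write down, even though it only scales to tiny versions of the problem. A minimal sketch (the function name is my own):

```python
from itertools import combinations

def count_divisible_subsets(n):
    """Brute force: count subsets of {1, ..., n} whose sum is a multiple
    of 5.  The loop visits all 2**n subsets, so only tiny n are feasible."""
    count = 0
    for r in range(n + 1):
        for subset in combinations(range(1, n + 1), r):
            if sum(subset) % 5 == 0:   # the empty set counts: its sum is 0
                count += 1
    return count
```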
Obviously we have to be a lot cleverer than that, and if you were to just guess what the answer should be, make a rough approximation. You'd probably guess, you know, it should be around a fifth of all the total subsets, there's probably a roughly even distribution of all these sums mod 5. And yes, that is true, that's a decent approximation. But the heart of the question, the real challenge here, is to get a precise answer. This can't be the actual answer, since it's not an integer, but is the true answer a little bit more or a little bit less, or maybe it's a lot more or a lot less? What tactics could you possibly use to figure out that error? To be clear, this lesson is definitely much more about the journey than the destination. Will you ever need to filter and count subsets in this way? Almost certainly not, I wouldn't expect so, but toy problem or not, it is a legitimately challenging question, and navigating that challenge develops skills that are relevant to other sorts of challenging questions. For you and me, there are at least two very surprising and very beautiful twists and turns that the solution I'd like to share with you takes. I've already tipped my hand that complex numbers will make a surprise appearance, but before we even get to that, there is another strange turn, which is arguably even weirder and even more unexpected. To set the stage though, let's just get our bearings with the puzzle and do what all good problem solvers should do and start with a simpler example, maybe just trying it with the set 1, 2, 3, 4, 5. If you were solving this problem with pencil and paper, you know you're one of these kids training for the IMO, it's not a bad idea to simply list out all two to the five subsets, it's only 32, it's not that many. 
There's different ways that you might want to organize all of these in your mind, but since the thing that we care about is their sum, the natural thing to do would be to go through them one by one and compute those sums. Over here, just doing it on YouTube, I've got a computer, so I'll cheat a little and show what all their sums are. I'll also cheat a little bit and rearrange all of these, organizing them suggestively into collections that all have the same sum. For instance, there are three distinct subsets that add up to 6, and they all sit in this little box, and the three subsets adding up to 10 all live in this little box. And all in all, the ones that we care about, the subsets with a sum divisible by 5, have been put over here on the left, and it looks like there's a total of eight of them. Oh, and by the way, I should say we are counting the empty set; we consider its sum to be zero, and we consider that to be a multiple of 5. By the end, I hope you'll agree all of those are abundantly natural choices to make. Take a moment to compare this answer to what you might expect heuristically. Out of all 32 total subsets, a fifth of that would have been 6.4, so at least in this small example, the true answer is a little bit bigger than that. That's maybe something you want to tuck in the back of your mind. Okay, and this is the part of the video where I'll be honest with you, I have no idea how to motivate it. Personally, I like it when math feels like something you could have discovered yourself, and if you and I were sitting down together solving this problem, I think there's all sorts of natural steps that you might take. Maybe you try to understand if there's some sort of structure to the subsets, or you play around with how these sums are distributed mod 5 for many other small examples. And from that, maybe you try to eke out some kind of proof by induction.
When I shared an early version of this lesson with some patrons, people brought up some nice linear algebra approaches; all those are well and good, nothing wrong with those. But instead, my goal here is to teach you about something called a generating function. And it's one of those tactics where after the fact, you can think, okay, yeah, I get that this works, but how on earth would you have thought of that? Honestly, I don't know. There's a time in your life before you understand generating functions and a time after, and I can't think of anything that connects them other than a leap of faith. I'm going to ask you to consider the polynomial 1 plus x, times 1 plus x squared, times 1 plus x cubed, times 1 plus x to the fourth, times 1 plus x to the fifth. Now, I know you could rightfully ask, where does this come from? What do polynomials have to do with anything? What is the variable x even supposed to represent right now? And essentially x is purely a symbol. The only reason that we've written a polynomial here is that the act of algebraically expanding it is going to completely mirror the act of constructing subsets. And importantly, this grouping that we want, where subsets with the same sum are all bunched together, kind of happens automatically when you do this. Let me show you what I mean. Expanding out this expression basically comes down to making five binary choices: which term from each parenthetical do you choose? If you choose the 1 from each of those parentheticals, that will correspond to the empty set, where we don't choose any of the elements. Whereas if I choose the x to the 1 term and then 1s from everything else, that will correspond to the singleton set that just contains the number 1. Then, similarly, if I choose the x squared term but 1s from everything else, that corresponds to the set just containing 2, and choosing the x cubed term corresponds to the set just containing the number 3.
But interestingly, notice what happens if I choose the x to the 1 term and the x squared term, and then 1s from everything else. This corresponds to the choice of the subset that has 1 and 2, and nothing else. But in the polynomial, the way it expands, it looks like x cubed. So we have two different x cubed terms, each of which came from a subset whose sum was 3. And honestly, the pattern that I'm going for here is one that's probably easiest to see if you just take the time to pause and think through for yourself what happens when you expand everything here. Essentially, every possible subset corresponds to one of the terms in this expansion. And then the critical point is that the exponent on the term that you get from that expansion equals the sum of the corresponding subset. Kind of confusing when you say it out loud, but again, if you just think it through yourself, I think you can see what I mean. For example, when all of the dust settles and we collect all 32 terms here, three of those terms are x to the 10, and each of those came from a choice of elements whose sum was equal to 10. Now normally, when we write a polynomial, we collect together all like terms, so instead of seeing three copies of x to the 10th, we would just see the coefficient 3 in front of x to the 10th. So each of these coefficients is a way of encoding the number of subsets with a particular sum. This, like I said at the start, is an example of something called a generating function, where the idea is: if you have some question with an answer associated with each positive integer, in our case, how many subsets add up to a particular value, then when you construct a polynomial whose coefficients correspond to the answers to that question, you can get a surprising amount of insight into your original question by mathematically manipulating and analyzing the properties of this polynomial.
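The correspondence between coefficients and subset counts can be checked numerically by expanding the product one factor at a time. A sketch, with my own function name:

```python
def subset_sum_coefficients(n):
    """Expand (1+x)(1+x^2)...(1+x^n).  The returned list satisfies
    coeffs[s] = number of subsets of {1, ..., n} whose sum is s."""
    coeffs = [1]                       # start with the polynomial "1"
    for k in range(1, n + 1):
        new = coeffs + [0] * k         # multiplying by (1 + x^k) ...
        for s, c in enumerate(coeffs):
            new[s + k] += c            # ... adds a copy shifted up by k
        coeffs = new
    return coeffs

coeffs = subset_sum_coefficients(5)    # the small {1, ..., 5} example
```

For the small example, the coefficient on x to the 10th comes out to 3, matching the three subsets with sum 10, and all the coefficients together add up to 32.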
There are tons and tons of examples of generating functions, but just to bring up one other one, which is especially fun, you can use the same idea to study Fibonacci numbers. So all the coefficients of this polynomial will be Fibonacci numbers, and in this case it's an infinite polynomial, so I should really be calling it a power series. I won't fully explain the details here, but I will leave them up on the screen for anyone who's curious. The basic idea is that the rule that's used to define Fibonacci numbers, each one being the sum of the previous two, can be expressed as an equation in terms of this function. That equation, in turn, lets you write that function in an alternate form. And then, and here's most of the details I'm skipping over, if you manipulate that, you know, throw in a little partial fraction decomposition here, a little bit of geometric series power expansion there, you can get yourself an exact closed form expression for each individual Fibonacci number, which is really cool. I mentioned this really just to show the tip of the iceberg of the fact that this idea of a generating function goes way, way beyond our particular example. Now, in our particular problem, if we extend from the simple example with just 1, 2, 3, 4, 5 to the big example with all the numbers up to 2,000, our corresponding generating function involves these 2,000 different binomial terms, you know, 1 plus x, 1 plus x squared on and on, up to 1 plus x to the 2,000. And the idea is that if you were to expand this, the coefficients tell us all the information we want. Now, it would be insane to actually expand it, but it is helpful to keep in the back of your mind in principle what that would look like. For example, in principle, if you expanded it, you would find that the coefficient in front of the x to the 25th term happens to be 142. And this corresponds to the fact that there are 142 distinct subsets that have a sum of 25. 
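Going back to the Fibonacci aside for a moment, the closed form that falls out of that generating-function manipulation is Binet's formula. A sketch in code, using the convention F(0) = 0, F(1) = 1:

```python
from math import sqrt

# phi and psi are the two roots of x^2 = x + 1, which show up in the
# partial fraction decomposition of the Fibonacci generating function.
phi = (1 + sqrt(5)) / 2
psi = (1 - sqrt(5)) / 2

def fib(n):
    """Binet's closed form, rounded to kill floating point error."""
    return round((phi**n - psi**n) / sqrt(5))
```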
So the art of analyzing a generating function here will be to deduce facts about these coefficients without actually expanding the expression. So moving forward, I'm just going to write this expansion more abstractly, just a sum from n equals 0 up to capital N, where C sub n tells us the coefficients that we don't know. All of that starts off as a black box to us. And moving forward, we're going to start treating this as an actual function, something where we plug in x, we see what the output is, and then we ask, what does that tell us about the coefficients? For example, a very easy input would be to plug in something like x equals 0. In that case, importantly, we know how to evaluate it using the factored form above. If you plug in x equals 0 for everything, all of the terms look like 1, so the answer is 1. And in the expanded form, all of those terms involving an x will get killed, they go to 0, leaving us just with the first term, C sub 0. Now, in this case, that doesn't really tell us anything all that exciting. It essentially translates to saying there is a single empty set, but we're just getting our feet wet. As the next example, take a moment to think about evaluating f at 1. This is something we can do with the expression we know: when you plug in 1 for all of these x's, every term looks like a 2, so in total, we get 2 multiplied by itself 2,000 times. On the other hand, in the expanded expression, if you plug in x equals 1, all of these powers of x go to 1, so we're essentially adding up all of the coefficients, which is pretty cool when you think about it. Just by evaluating the function at a single number, we can deduce what the sum of all of the coefficients is. Now, again, in our particular example, it's not all that exciting, because we already know what the sum of these coefficients is.
Remember, each coefficient counts how many subsets have a certain sum, and so when you add them up, we're just counting all of the subsets, which we know to be 2 to the 2,000. However, I can give you a genuinely new fact if I ask you to evaluate this function at negative 1. Take a moment to think about what that means. If you plug in negative 1, again, we start with the thing we know, the factored form of the expression up top, and here all you need is to look at the first term: when you plug in x equals negative 1, the first parenthetical goes to 0, so the whole expression has to be 0. But what does that tell you when we apply it to the expanded expression using all of the coefficients? And in the spirit of being as suggestive as possible of the strange turns that this solution takes, I want you to really visualize the various powers of negative 1 in this expression in terms of rotations. The first term, negative 1 to the 0, is just 1, which we'll picture as a vector from 0 to 1. Then negative 1 to the first power is just negative 1 itself, which I want you to be thinking about as a 180 degree rotation away from that last term. Then when we take negative 1 squared, that's positive 1, again a 180 degree rotation. And in general, each successive term here looks like another rotation by 180 degrees. Algebraically, what this translates to is that we have an oscillating sum between the even coefficients and the odd coefficients, but keep the visual in the back of your mind. This expression is true for any generating function, but for our special generating function, we know that this value, this alternating sum, should equal 0. And a way you can interpret that is that it's telling you there's an equal balance between the even coefficients and the odd coefficients. And remember, maybe in the context of our smaller example, these coefficients are encoding for us facts about subsets.
So if there's an equal balance between all those even coefficients and the odd coefficients, it's telling you that half of all the subsets have an even sum and half of them have an odd sum. That's probably what you would expect, but it's not obvious at first how you would show it. And with the generating function, it just kind of pops right out. And again, to be suggestive of where we're going, let me rewrite this a little bit by taking the last two things we evaluated, adding those two together, and multiplying by one half. If you think about it, this is a way of keeping all of the even coefficients and killing all of the odd coefficients. So it becomes an especially clean way to write the fact that the sum of all of the even coefficients, which again in the back of your mind means the total number of subsets with an even sum, will look like half of the total. This is, needless to say, tantalizingly close to the actual question we want to answer. What we would like to do is find some clever thing that we can do to the function f, some well-chosen numbers to evaluate it on, so that we get all the coefficients corresponding to multiples of five. Again, thinking back to what these coefficients encode for us, that will answer our final question; that will count the total number of subsets whose sum is divisible by 5. The trick to doing this is to generalize what we just did, where the successive powers of the input were rotating back and forth. But this time, we don't want them to rotate every other term. We'd like them to somehow rotate with a period of five. And to do that, we extend into the complex plane. Up there, we can find a value such that, as we take successive powers of it, it rotates by a fifth of a turn each time, giving us a process with a period of five. And if you step back, I know that it's kind of absurd that I'm asking you to think about complex numbers.
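For the small example, this even-coefficient filter is easy to check numerically, since we can evaluate the factored form directly. A quick sketch:

```python
def f(x, n=5):
    """The generating function (1+x)(1+x^2)...(1+x^n), factored form."""
    product = 1
    for k in range(1, n + 1):
        product *= 1 + x**k
    return product

# (f(1) + f(-1)) / 2 keeps the even-power coefficients and kills the odd
# ones, so it counts the subsets of {1, ..., 5} with an even sum.
even_sum_count = (f(1) + f(-1)) // 2
```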
I mean, we started with a counting question, it's discrete math, but hopefully it's not all that wild. And again, the reason that I'm drawing things out to tee up the various strange turns in the solution, is that they're actually not all that strange in the broader scheme of math. The trick we're about to apply has a heavy resemblance to many other instances of using complex numbers to better understand discrete questions of integers. So the more it feels like something that you could have discovered yourself, the more it might actually be the case that when you're working on some future problem in this circle of thoughts, you will discover it yourself. To be specific, the complex number that I care about is one that I'm going to label Zeta, and it sits a fifth of a turn around the unit circle. So its angle is 2 pi fifths radians, and its magnitude is one. This means with the standard Euler's formula notation, we would write that number explicitly as e to the power 2 pi i divided by 5. If you're not as comfortable with that notation, you could think of it as something whose real part is the cosine of 72 degrees, 72 being a fifth of a full turn, and the imaginary part is the sine of 72 degrees. But to be honest, you don't actually need to think about the explicit value. Instead, the important thing to focus on is the property that powers of this number have. For example, when you square it, because its magnitude was one, magnitude of its square is also one, but it rotates a fifth of a turn around the unit circle. So it now sits two fifths of a turn around. Similarly, when you raise it to the third power, you end up three fifths of a turn around, raise it to the fourth power, you end up four fifths of a turn, and raise it to the fifth power, and you've gotten all the way back around to one. It's the same thing as if you had raised it to the zeroth power. We get this cycling every five terms. That's the thing that we care about. 
These numbers have a special name: they're called the fifth roots of unity, essentially because they solve the equation z to the fifth equals 1. They are fifth roots of the number 1. If you just presented someone with this equation, they would probably say the answer is clearly z equals 1, but the idea is that there are four other answers in the complex plane, four other numbers where, when you raise them to the fifth power, you get 1, and considering them as a collective is often quite useful. Remember that equation, it'll come back for us a little bit later. So in analogy with what we did earlier, where we added together f of 1 and f of negative 1 to get this cancellation among the odd terms, what we're going to do is evaluate f at all five of these numbers and then add them together, and hopefully we get some cancellation. That might seem kind of complicated, but let's just take a super simple example, like the case where f of x is simply equal to x. In that case, when we add up these five terms, we're just adding up the roots of unity themselves: zeta to the zero, plus zeta to the one, on and on, up to zeta to the fourth. When you add complex numbers, you can think of it like tip-to-tail vector addition. So zeta to the zero plus zeta will look like this. And then if I add on zeta squared, bringing the tail of that vector to the tip of the last one, we get this. Then similarly, if I bring the tail of zeta cubed over to the tip of that one, and then do likewise for zeta to the fourth, you'll see how the overall sum actually loops back to be zero. Another way to think about this is that all five of these terms are evenly balanced around the number zero; their center of mass is at the origin. Now it's helpful to think about a slightly less trivial example, say if f of x was x squared. When you square zeta to the zero, it stays zeta to the zero; this is just a fancy way of saying the number 1. When you square zeta, you get zeta squared itself.
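Those two properties, powers of zeta cycling with period five and the five roots balancing out to zero, are quick to verify numerically. A sketch using Python's built-in complex numbers:

```python
import cmath

# zeta = e^(2*pi*i/5) sits a fifth of a turn around the unit circle.
zeta = cmath.exp(2j * cmath.pi / 5)

fifth_power = zeta**5                              # cycles all the way back to 1
center_of_mass = sum(zeta**k for k in range(5))    # the roots balance around 0
```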
So you might imagine this dot up here moving over to the zeta squared dot when we do it. Zeta squared moves to zeta to the fourth; you might imagine this dot moving over to zeta to the fourth. Zeta cubed moves to zeta to the sixth, which, because we loop around every five powers, is the same thing as zeta to the one. So this dot will move up here. And finally, zeta to the fourth squares to give us zeta to the eighth, which reduces to be the same as zeta cubed, which I might draw like this. That might seem a little confusing to think about, especially with all the arrows I have drawn here. But it's worth thinking through at least once in your life, because the idea here is that when we square, going through all of these different terms and doubling the angle that each one has, the overall effect is just to shuffle those terms. We get the same numbers, but written in a different order, so their sum is still going to be zero. Similarly, if you go through this exercise with x cubed, which I encourage you to do, and you follow around where each one of these dots ends up, you'll be able to see that when we cube these terms, when we take each one and multiply the angle that it has by three, again we just shuffle them around: same terms, listed in a different order. Unsurprisingly, the same thing happens if our function was x to the fourth. But, critically, where things change is if we consider the function x to the fifth. In that case, when you raise zeta to the fifth power, by definition, it goes to 1. Similarly, zeta squared raised to the fifth power goes to 1. All of these go to 1; they are the roots of unity, this is after all their whole purpose in life. So in this case, when we apply the function and add them all up, instead of going to zero and getting cancellation, we get a kind of constructive interference. All of them equal 1, so their sum is equal to five.
So if you step back and think about what all those examples mean, essentially this expression is something that will go to zero for powers of x which are not divisible by five, but it goes to something non-zero for powers of x which are divisible by five. And that's exactly the kind of filter that we're looking for. If you're worried that our actual function is much more complicated than a simple power of x, things play really nicely here because everything is linear. If f is some massive polynomial and we want to evaluate this big sum, you could sort of think of going column by column, where each time you really are just adding up powers of zeta, and in most cases all those powers cancel out with each other and you get zero; but when all of those powers are multiples of five, they constructively interfere, and instead you get five times whatever the corresponding coefficient is. Deep in the weeds, it's easy to forget why we're here in the first place, but remember, each one of those coefficients tells us how many subsets add up to a certain value, and so what we want is to add up all of the coefficients sitting on powers that are multiples of five. And what we have right now is a way to explicitly do that. If we evaluate this function on these five different roots of unity, which I know seems kind of weird, then all we have to do is divide by five, and it gives us the sum that we want. That's really cool if you ask me. We have a question that's just about subsets, it's a discrete math problem, and yet the way that we can answer it is to evaluate a crazy polynomial on some judiciously chosen complex numbers. The more math you do, the less crazy that seems, because complex numbers have this bizarre relationship with discrete math, but it really is wonderful; there's no two ways about it. However, some of you might complain that the only way this is useful is if we can actually evaluate this wild expression on our polynomial.
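Putting the filter to work on the small example, we can evaluate the factored form at each fifth root of unity and average. A numerical sketch, with floating point noise rounded away at the end:

```python
import cmath

def f(x, n=5):
    """The generating function (1+x)(1+x^2)...(1+x^n), factored form."""
    product = 1
    for k in range(1, n + 1):
        product *= 1 + x**k
    return product

zeta = cmath.exp(2j * cmath.pi / 5)

# Averaging f over all five fifth roots of unity kills every coefficient
# except those on powers x^(5m): exactly the subsets with sum divisible by 5.
filtered = sum(f(zeta**k) for k in range(5)) / 5
count = round(filtered.real)
```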
And remember, the form of the polynomial we know, the one we're comfortable with, is the factored form, where you have this 1 plus x, 1 plus x squared, on and on, all the way up to 1 plus x to the 2000. Everything up to this point is just meaningless symbolic play, pushing one part of the problem into another, unless we can actually roll up our sleeves and do some honest calculation here. This is the final thrust in our argument, so step back, take a deep breath. It's actually not as bad as you might think, but let's start just by thinking about how you might evaluate f at just one of the roots of unity that we need, maybe zeta itself. So what that looks like is 1 plus zeta, times 1 plus zeta squared, times 1 plus zeta cubed, on and on. Except, importantly, after those first five terms, everything starts repeating, because powers of zeta repeat. The entire expression up to 2000 is basically just going to be 400 copies of this expression multiplied together. It still might seem hard to evaluate this expression, but it's way easier than multiplying out 2000 different terms. A way you might visualize this is that we're taking each one of those roots of unity and adding one, shifting them all to the right. This picture actually lends itself to a really nice geometric intuition for the numerical answer that we might expect. The thing that we want is the product of these five different complex numbers, these five yellow dots. And if you know a thing or two about complex numbers, since these come in conjugate pairs, all we really need is to multiply the lengths of these five yellow lines. For example, that dot furthest to the right corresponds to 1 plus zeta to the fifth, which in the diagram I'm labeling as zeta to the zero plus 1, but it doesn't matter; in either case they're both just fancy ways of writing the number 2. Next to that, we have the values 1 plus zeta and 1 plus zeta to the fourth, both of which have the same magnitude; the lengths of these lines are the same.
And let's just give that a name, l1. So our product needs to include two copies of that length: l1 squared. Similarly, the remaining two values, zeta squared plus 1 and zeta cubed plus 1, also have the same length, and they're a conjugate pair, so let's just call that length l2. So our product needs to include two copies of that l2 as well. If we were just making a loose heuristic guess, you might notice that l1 is a length that's something a little bit longer than 1, and l2 is something a little bit shorter than 1. So the final answer here probably comes to something around 2-ish. We're not positive, but something in that ballpark. To turn this into an exact answer, we could just expand out the full expression; it's honestly not that bad, there's only 32 different terms. Okay, you've hung with me for a long time now, and I know that it's getting to be a lot, but there's one final trick in this whole argument that makes our last step much simpler than you might think it should be. Let's just recap to remind ourselves of where we are. So we started with this question asking us to count the number of subsets of 1 up to 2000 whose sum is divisible by 5. We then constructed this polynomial whose coefficients tell us how many subsets have a particular sum for each value n. So what we want is to add up every fifth coefficient of that polynomial. Then we saw how evaluating this polynomial as a function on all of the fifth roots of unity, then adding those values up, ends up giving us exactly this filter that we want. And here we're evaluating just one of those terms, f of zeta, which essentially comes down to a product of five complex numbers. There's a super slick way to actually evaluate that product; here's the final trick. Remember, I described these numbers as roots of unity; they solve the equation z to the fifth equals 1. Another way to think about that is that they are roots of the polynomial z to the fifth minus 1.
Now what that means is we can factor the polynomial z to the fifth minus 1 to look like this, where there's one factor corresponding to each one of the roots: you take z minus each one of the roots. This expression is kind of magical when you think about all of the crazy cancellation that has to happen when you expand it all out. But it is true, and it's super useful for us right now, because the expression on the right-hand side looks almost identical to the thing we need to evaluate up at the top here. It basically just has minus signs where we wish there were plus signs. The trick is to plug in z equals negative 1. If you do that, you essentially have the negative of what we want. So if you multiply it by negative 1, notice how the left-hand side here, which started out as negative 1 minus 1, or negative 2, just becomes 2. And then the right-hand side turns into the thing that we want to evaluate. So, just as our geometric intuition earlier might have suggested, not only is the answer around 2, the answer quite magically turns out to be precisely 2. That is actually super nice and very lovely, because it means that for this bigger expression we want to evaluate, where we're adding up F on all of the different roots of unity, we know its value on the first root of unity. It will be 2 to the power 400. Essentially identical reasoning shows that its value on the next 3 roots of unity is also 2 to the power 400, because remember, when you take powers of zeta squared or of zeta cubed, you get the same list of numbers, just shuffled in a different order. The only one that's different is when we evaluate at zeta to the 0. But zeta to the 0 is a fancy way of saying the number 1, and we know how to evaluate this at 1; that's one of the easy cases we did earlier. All of the parentheticals turn into 2, so it comes out to 2 multiplied by itself 2,000 times. And so, finally with that, we have a highly explicit, honest answer to our counting question.
To add up all of these coefficients whose index is divisible by 5, which, remember, is a way of counting how many total subsets have a sum divisible by 5, the answer is 1 fifth of this weird complex expression, which we just computed to be 2 to the 2,000 plus 4 different copies of 2 to the 400. And here you might want to do a quick sanity check: does this answer make any sense? For example, if you do it in the smaller case with the set 1, 2, 3, 4, 5, and you walk through all the same reasoning that we just did, it tells you that the answer is 1 fifth of 2 to the 5th, the total number of subsets, plus 4 times 2 to the 1 in this case, which is a fifth of 32 plus 8, which is 8. And if you'll remember, when we explicitly listed them all out, that was in fact the answer. Look, this is a hard puzzle. And when it's worth putting in the time to solve a hard problem, it's also worth taking some time to reflect on it. What do you get out of this? What's the takeaway? You could reflect on the answer itself: how the dominant part is indeed 1 fifth of the total number of subsets, like we might have guessed, and how the error term came about from the not-quite-destructive interference in a massive combination of roots of unity. But again, what makes this question interesting is not the answer, it's the way that we solved it, namely taking a discrete sequence that we want to understand and treating it as the coefficients of a polynomial, then evaluating that polynomial on complex values. Both of those steps are probably highly unexpected at the outset, but both of them relate to some very general and powerful techniques that you'll find elsewhere in math. For example, at the top of the lesson I promised that the technique we would use would be similar in spirit to the way that primes are studied, and to the set of ideas that leads up to the Riemann hypothesis and things like that.
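That sanity check is easy to automate. The following sketch (mine, not from the lesson; function names are made up) brute-forces small cases and compares them against the closed form one fifth of 2 to the n plus 4 times 2 to the n over 5, which the argument above gives whenever n is a multiple of 5:

```python
from itertools import combinations

def brute_force(n):
    """Count subsets of {1, ..., n} (including the empty set, whose sum is 0)
    with sum divisible by 5, by direct enumeration."""
    return sum(1
               for r in range(n + 1)
               for subset in combinations(range(1, n + 1), r)
               if sum(subset) % 5 == 0)

def closed_form(n):
    """The answer derived above, valid when n is a multiple of 5."""
    return (2 ** n + 4 * 2 ** (n // 5)) // 5

print(brute_force(5), closed_form(5))    # 8 8
print(brute_force(10), closed_form(10))  # 208 208
```

The n = 5 case reproduces the count of 8 from the transcript, and larger multiples of 5 keep agreeing with the formula.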
Now this is a very beautiful topic, enough so that I think it seems a little criminal to cram some kind of rushed version into the end here. The right thing to do, I think, is to just make that video I promised a while back about the zeta function, and take the time to do it right. But if you're curious, and if you'll allow me to throw some things up on the screen without explaining them, here's the two-or-three-sentence version of how the two are parallel. Just like our subsets puzzle, the way that Riemann studied primes involved a discrete sequence we want to understand, something carrying information about prime numbers, and then considering a function whose coefficients are the terms in that sequence. In that case it's not quite a polynomial; instead it's a related structure known as a Dirichlet series (pronounced a couple of different ways, depending on who you ask), but it's the same essential idea. Then the way to suss out information about those coefficients comes from studying how this function behaves with, you guessed it, complex-valued inputs. The techniques in his case get a lot more sophisticated; after all, Riemann was a pioneer in complex analysis. But the fact remains, extending your domain beyond real numbers like this offers you, the mathematician, a lot more power in making deductions about the coefficients. For some viewers, this all might leave the lingering question of why exactly complex numbers are so unreasonably useful in this way. It's a hard question to answer exactly, but if you think about our puzzle and everything we just did, as soon as we were in a situation where plugging in different inputs revealed hidden information about the coefficients, it's sort of like the more inputs you can work with, the better, so you might as well open yourself up to a richer space of numbers like the complex plane. But there is a more specific intuition that I want you to come away with here.
In our puzzle, the relevant fact that we wanted, the sum of every fifth coefficient, was a kind of frequency question. And the real reason the complex numbers, as opposed to some other structure, proved to be useful for us is that we could find a value whose successive powers have this cycling behavior. This use of values on the unit circle, and roots of unity in particular, to suss out frequency information is extremely fruitful. It is almost impossible to overstate how helpful that idea is. Just to give one out of thousands of examples, in the 1990s Peter Shor found a way for quantum computers to factor large numbers way, way faster than classical computers can. And if you go in and look at the details of how what we now call Shor's algorithm works, the idea is essentially this: the use of roots of unity to detect a kind of frequency information. More generally, this is the core idea that underlies Fourier transforms and Fourier series and the infinite swell of topics that follow from those. As to the topic of generating functions themselves, we've really only just scratched the surface here. And if you want to learn more, I highly recommend the kind of hilariously named book generatingfunctionology by Herbert Wilf. And I'll also leave up a few fun puzzles on the screen here for anyone who wants to flex their muscles a bit with the idea.
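To make that frequency intuition concrete, here is a tiny sketch of the roots-of-unity filter as a reusable function (my own code with made-up names, not from the lesson): averaging a polynomial over the 5th roots of unity cancels every coefficient except those whose index is a multiple of 5.

```python
import cmath

def sum_of_fifth_coefficients(coeffs):
    """Average the polynomial over all 5th roots of unity; the cycling of
    powers cancels every coefficient except those at indices 0, 5, 10, ..."""
    total = 0
    for j in range(5):
        zeta_j = cmath.exp(2j * cmath.pi * j / 5)
        total += sum(c * zeta_j ** k for k, c in enumerate(coeffs))
    return round((total / 5).real)

# Coefficients of (1 + x)^5 are the binomial coefficients 1, 5, 10, 10, 5, 1;
# only indices 0 and 5 survive the filter, giving 1 + 1 = 2.
print(sum_of_fifth_coefficients([1, 5, 10, 10, 5, 1]))  # 2
```

The same averaging trick works for any period: replace 5 with m to pick out every m-th coefficient, which is exactly the idea behind discrete Fourier analysis.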
How to send a self-correcting message (Hamming codes) | Have you ever wondered how it's possible to scratch a CD or a DVD and still have it play back whatever it's storing? The scratch really does affect the ones and zeros on the disc, so it reads off different data from what was stored, but, unless it's really scratched up, the bits that it reads off are decoded into precisely the same file that was encoded onto it, a bit-for-bit copy, despite all of those errors. There is a whole pile of mathematical cleverness that allows us to store data, and just as importantly to transmit data, in a way that's resilient to errors. Well, actually, it doesn't take that much cleverness to come up with a way to do this. Any file, whether it's a video or sound or text, some code, an image, whatever, is ultimately some sequence of ones and zeros. And a simple strategy to correct any bit that gets flipped would be to store three copies of each bit. Then the machine reading this file could compare these three copies and always take the best two out of three whenever there's a discrepancy. But what that means is using two thirds of your space for redundancy. And even then, for all of that space given up, there's no strong guarantee about what happens if more than one bit gets flipped. The much more interesting question is how to make it so that errors can be corrected while giving up as little space as possible. For example, using the method that you'll learn about in this video, you could store your data in 256-bit blocks where each block uses only 9 bits to act as a kind of redundancy, and the other 247 bits are free to carry whatever meaningful message or data you want. And it will still be the case that if any bit gets flipped, just by looking at this block and nothing more, a machine will be able to identify that there was an error, and precisely where it was, so that it knows how to correct it. And honestly, that feels like magic.
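That best-two-of-three strategy is simple enough to sketch in a few lines. Here's a minimal illustration (my own code, not from the video; the function names are made up):

```python
def encode_repetition(bits):
    """Triple each bit: the naive redundancy scheme described above."""
    return [b for b in bits for _ in range(3)]

def decode_repetition(received):
    """Majority vote over each group of three copies."""
    return [1 if sum(received[i:i + 3]) >= 2 else 0
            for i in range(0, len(received), 3)]

msg = [1, 0, 1, 1]
sent = encode_repetition(msg)
sent[4] ^= 1                      # flip one bit in transit
print(decode_repetition(sent))    # [1, 0, 1, 1]: the error is corrected
```

Note the cost: the encoded message is three times the length of the original, and two flips inside the same group of three would still fool the vote.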
And for this particular scheme, if two bits get flipped, the machine will at least be able to detect that there were two errors, though it won't know how to fix them. We'll talk a little bit later about how this scales for blocks with different sizes. Methods that let you correct errors like this are known, reasonably enough, as error correction codes. For the better part of the last century, this field has been a really rich source of surprisingly deep math that gets incorporated into devices we use every day. The goal here is to give you a very thorough understanding of one of the earliest examples, known as a Hamming code. And by the way, the way I'm thinking about the structure of this video is less a matter of explaining it as directly as possible, and more a matter of prompting you to invent it for yourself, with a little gentle guidance here and there. So when you feel like you see where it's going at some point, take that moment to pause, and actively predict what the scheme is going to be before I tell you. Also, if you want your understanding to get down to the hardware level, Ben Eater has made a video in conjunction with this one showing you how to actually implement Hamming codes on breadboards, which is extremely satisfying. Now, you should know, Hamming codes are not as widely used as more modern codes, like Reed-Solomon codes, but there is a certain magic to the contrast between just how impossible this task feels at the start and how utterly reasonable it seems once you learn Hamming's approach. The basic principle of error correction is that in a vast space of all possible messages, only some subset are going to be considered valid messages; as an analogy, think about correctly spelled words versus incorrectly spelled words. Whenever a valid message gets altered, the receiver is responsible for correcting what they see back to the nearest valid neighbor, as you might do with a typo.
Coming up with a concrete algorithm that can efficiently categorize messages like this, though, takes a certain cleverness. The story begins in the 1940s, when a young Richard Hamming was working for Bell Labs, and some of his work involved using a very big, expensive punch card computer that he had only limited access to. And the programs he kept putting through it kept failing, because every now and then a bit would get misread. Frustration being the crucible of invention, he got so fed up that he invented the world's first error correction code. There are many different ways to frame Hamming codes, but as a first pass, we're going to go through them the way that Hamming himself thought about them. Let's use an example that's simple, but not too simple: a block of 16 bits. We'll number the positions of these bits from 0 up to 15. The actual data that we want to store is only going to make up 12 of these bits, while four of the positions are going to be reserved as a kind of redundancy. The word redundant here doesn't simply mean copy; after all, those four bits don't give us enough room to blindly copy the data. Instead, they'll need to be a much more nuanced and clever kind of redundancy, not adding any new information, but adding resilience. You might expect these four special bits to come nicely packaged together, maybe at the end or something like that, but as you'll see, having them sit in positions which are powers of 2 allows for something that's really elegant by the end. It also might give you a little hint about how this scales for larger blocks. Also, technically it ends up being only 11 bits of data; you'll find there's a mild nuance for what goes on at position 0, but don't worry about that for now. Like any error correction algorithm, this will involve two players: a sender, who's responsible for setting these four special bits, and then a receiver, who's responsible for performing some kind of check and then correcting the errors.
Of course, the words sender and receiver really refer to the machines or software doing all the checks, and the idea of a message is meant really broadly, to include things like storage. After all, storing data is the same thing as sending a message, just from the past to the future instead of from one place to another. So that's the setup. But before we can dive in, we need to talk about a related idea which was fresh on Hamming's mind at the time of his discovery, a method which lets you detect any single-bit error, but not correct it, known in the business as a parity check. For a parity check, we separate out one single bit that the sender is responsible for tuning, and the rest are free to carry a message. The only job of this special bit is to make sure that the total number of ones in the message is an even number. So, for example, right now that total number of ones is 7. That's odd, so the sender needs to flip that special bit to be a 1, making the count even. But if the block had already started off with an even number of ones, then this special bit would have been kept at a 0. This is pretty simple, deceptively simple, but it's an incredibly elegant way to distill the idea of change anywhere in a message into a single bit of information. This way, if any bit of the message gets flipped, either from 0 to 1 or from 1 to 0, it changes the total count of ones from being even to being odd. So if you're the receiver, and you look at this message and see an odd number of ones, you can know, for sure, that some error has occurred, even though you might have no idea where it was. In the jargon, whether a group of bits has an even or an odd number of ones is known as its parity. You could also use numbers and say the parity is 0 or 1, which is typically more helpful once you start doing math with the idea. And this special bit that the sender uses to control the parity is called the parity bit.
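In code, the sender's and receiver's roles in a parity check each come down to one line. Here's a minimal sketch (my own, with hypothetical names, not from the video):

```python
def add_parity_bit(bits):
    """Sender: append one bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def parity_ok(block):
    """Receiver: an even count of 1s passes the check."""
    return sum(block) % 2 == 0

block = add_parity_bit([1, 0, 1, 1, 1, 0, 1, 1, 1])  # seven 1s, so the bit is 1
print(parity_ok(block))   # True
block[3] ^= 1             # any single flip makes the count odd
print(parity_ok(block))   # False
```

Note what the receiver learns: that some flip happened, but nothing at all about where.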
And actually, we should be clear: if the receiver sees an odd parity, it doesn't necessarily mean there was just one error. There might have been three errors, or five, or any other odd number, but they can know for sure that it wasn't zero. On the other hand, if there had been two errors, or any even number of errors, the final count of ones would still be even, so the receiver can't have full confidence that an even count necessarily means the message is error-free. You might complain that a check which gets fooled by only two bit flips is pretty weak, and you would be absolutely right. Keep in mind, though, there is no method for error detection or correction that could give you 100% confidence that the message you receive is the one the sender intended. After all, enough random noise could always change one valid message into another valid message just by pure chance. Instead, the goal is to come up with a scheme that's robust up to a certain maximum number of errors, or maybe to reduce the probability of a false positive like this. Parity checks on their own are pretty weak, but by distilling the idea of change across a full message down to a single bit, what they give us is a powerful building block for more sophisticated schemes. For example, as Hamming was searching for a way to identify where an error happened, not just that it happened, his key insight was that if you apply some parity checks not to the full message, but to certain carefully selected subsets, you can ask a more refined series of questions that pin down the location of any single-bit error. The overall feeling is a bit like playing a game of 20 questions, asking yes-or-no queries that chop the space of possibilities in half. For example, let's say we do a parity check just on these 8 bits, all of the odd-numbered positions. Then if an error is detected, it gives the receiver a little more information about where specifically the error is, namely that it's in an odd position.
If no error is detected among those 8 bits, it either means there's no error at all, or the error is somewhere in the even positions. You might think that limiting a parity check to half the bits makes it less effective, but when it's done in conjunction with other well-chosen checks, it counterintuitively gives us something a lot more powerful. To actually set up that parity check, remember, it requires earmarking some special bit that has control over the parity of that full group. Here, let's just choose position number 1. So for the example shown, the parity of these 8 bits is currently odd, so the sender is responsible for toggling that parity bit, and now it's even. This is only 1 out of 4 parity checks that we'll do. The second check is going to be among the 8 bits on the right half of the grid, at least as we've drawn it here. This time we might use position number 2 as a parity bit. These 8 bits already have an even parity, so the sender can feel good leaving that bit number 2 unchanged. Then on the other end, if the receiver checks the parity of this group and finds that it's odd, they'll know that the error is somewhere among these 8 bits on the right. Otherwise, it means either there's no error, or the error is somewhere on the left half. Or I guess there could have been 2 errors, but for right now we're going to assume that there's at most 1 error in the entire block; things break down completely for more than that. Here, before we look at the next 2 checks, take a moment to think about what these first 2 allow us to do when you consider them together. Let's say you detect an error among the odd columns, and among the right half. It necessarily means the error is somewhere in the last column. If there was no error among the odd columns but there was one in the right half, well, that tells you it's in the second-to-last column. Likewise, if there is an error among the odd columns but not in the right half, you know that it's somewhere in the second column.
And then if neither of those 2 parity checks detects anything, it means the only place an error could be is in that leftmost column, but it also might simply mean there's no error at all. Which is all a rather belabored way to say that 2 parity checks let us pin down the column. From here, you can probably guess what follows: we do basically the same thing, but for the rows. There's going to be a parity check on the odd rows, using position 4 as a parity bit. So in this example, that group already has an even parity, so bit 4 would be set to a 0. And finally, there's a parity check on the bottom 2 rows, using position 8 as a parity bit. In this case, it looks like the sender needs to turn that bit 8 on in order to give the group even parity. Just as the first 2 checks let us pin down the column, these next 2 let us pin down the row. As an example, imagine that during the transmission there's an error at, say, position 3. Well, this affects the first parity group, and it also affects the second parity group, so the receiver knows that there's an error somewhere in that rightmost column. But it doesn't affect the third group, and it doesn't affect the fourth group, and that lets the receiver pinpoint the error up to the first row, which necessarily means position 3, so they can fix the error. You might enjoy taking a moment to convince yourself that the answers to these 4 questions really will always let you pin down a specific location, no matter where the error turns out to be. In fact, the astute among you might even notice a connection between these questions and binary counting. And if you do, again let me emphasize: pause, and try for yourself to draw the connection before I spoil it. If you're wondering what happens if a parity bit itself gets affected, well, you can just try it. Take a moment to think about how any error among these 4 special bits is going to be tracked down just like any other, with the same group of 4 questions.
It doesn't really matter, since at the end of the day what we want is to protect the message bits; the error correction bits are just riding along. But protecting those bits as well is something that naturally falls out of this scheme as a byproduct. You might also enjoy anticipating how this scales. If we used a block of size 256 bits, for example, then in order to pin down a location, you need only 8 yes-or-no questions to binary search your way down to some specific spot. And remember, each question requires giving up only a single bit to set the appropriate parity check. Some of you may already see it, but we'll talk about the systematic way to find what these questions are in just a minute or two. Hopefully this sketch is enough to appreciate the efficiency of what we're developing here. Everything, except for those 8 highlighted parity bits, can be whatever you want it to be, carrying whatever message or data you want. The 8 parity bits are redundant in the sense that they're completely determined by the rest of the message, but it's in a much smarter way than simply copying the message as a whole. And still, for so little given up, you would be able to identify and fix any single-bit error. Well, almost. Okay, so the one problem here is that if none of the 4 parity checks detects an error, meaning that the specially selected subsets of 8 bits all have even parities, just like the sender intended, then it either means there was no error at all, or it narrows us down to position 0. You see, with 4 yes-or-no questions, we have 16 possible outcomes for our parity checks. And at first, that feels perfect for pinpointing 1 out of 16 positions in the block, but you also need to communicate a 17th outcome, the no-error condition. The solution here is actually pretty simple: just forget about that 0th bit entirely. So when we do our 4 parity checks and see that they're all even, it unambiguously means that there is no error.
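To see the scaling concretely, here's a quick sketch (my own, not from the video) of how much of each block the scheme gives up, counting the r parity bits at the powers of two plus position 0 in a block of 2 to the r bits:

```python
def hamming_overhead(r):
    """For a block of 2**r bits, reserve the r parity bits at the powers
    of two plus position 0; the rest is free to carry the message."""
    block_size = 2 ** r
    reserved = r + 1
    return block_size, reserved, block_size - reserved

print(hamming_overhead(4))  # (16, 5, 11): the block in this video
print(hamming_overhead(8))  # (256, 9, 247): the example from the intro
```

Doubling the block size costs only one extra parity bit, which is why the redundancy fraction shrinks so quickly for larger blocks.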
What that means is rather than working with a 16-bit block, we work with a 15-bit block, where 11 of the bits are free to carry a message and 4 of them are there for redundancy. And with that, we now have what people in the business would refer to as a (15, 11) Hamming code. That said, it is nice to have a block size that's a clean power of 2, and there's a clever way we can keep that 0th bit around and get it to do a little extra work for us: if we use it as a parity bit across the whole block, it lets us detect, even though we can't correct, 2-bit errors. Here's how it works. After setting those 4 special error-correcting bits, we set the 0th one so that the parity of the full block is even, just like a normal parity check. Now, if there's a single-bit error, then the parity of the full block toggles to be odd, but we would catch that anyway thanks to the 4 error-correcting checks. However, if there are 2 errors, then the overall parity toggles back to being even, but the receiver would still see that there's been at least some error because of what's going on with those 4 usual parity checks. So if they notice an even parity overall, but something non-zero happening with the other checks, it tells them there were at least 2 errors. Isn't that clever? Even though we can't correct those 2-bit errors, just by putting that 1 little bothersome 0th bit back to work, it lets us detect them. This is pretty standard; it's known as an extended Hamming code. Technically speaking, you now have a full description of what a Hamming code does, at least for the example of a 16-bit block. But I think you'll find it more satisfying to check your understanding and solidify everything up to this point by doing one full example from start to finish yourself. I'll step through it with you, though, so you can check yourself.
To set up a message, whether that's a literal message that you're transmitting over space or some data that you want to store over time, the first step is to divide it up into 11-bit chunks. Each chunk is going to get packaged into an error-resistant 16-bit block. So let's take this one as an example and actually work it out. Go ahead, actually do it, pause and try putting together this block. Okay, you ready? Remember, position 0, along with the other powers of 2, is reserved for error correction duty. So you start by placing the message bits in all of the remaining spots, in order. You need the first group to have an even parity, which it already does, so you should have set the parity bit in position 1 to be a 0. The next group starts off with an odd parity, so you should have set its parity bit to 1. The group after that also starts with an odd parity, so again you should have set its parity bit to 1. And the final group likewise has an odd parity, meaning we set that bit in position 8 to be a 1. Then as the final step, the full block now has an even parity, meaning that you can set bit number 0, the overarching parity bit, to be 0. So as this block is sent off, the parity of the four special subsets, and of the block as a whole, will all be even, or 0. As the second part of the exercise, let's have you play the role of the receiver. Of course, that would mean you don't already know what this message is; maybe some of you memorized it, but let's assume that you haven't. What I'm going to do is change either 0, 1, or 2 of the bits in that block, and then ask you to figure out what it is that I did. So again, pause and try working it out. Okay, so you as the receiver now check the first parity group, and you can see that it's even, so any error that exists would have to be in an even column. The next check gives us an odd number, telling us both that there's at least one error, and narrowing us down into this specific column.
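The sender's procedure walked through above can be sketched directly in code. This is my own illustration of the steps, not the video's implementation; group p consists of every position whose binary index has bit p set:

```python
def hamming_encode_16(message11):
    """Pack 11 message bits into a 16-bit block: fill the non-reserved
    positions in order, set parity bits 1, 2, 4, 8 so each of the four
    parity groups is even, then set bit 0 so the whole block is even."""
    assert len(message11) == 11
    block = [0] * 16
    data_positions = [i for i in range(16) if i not in (0, 1, 2, 4, 8)]
    for pos, bit in zip(data_positions, message11):
        block[pos] = bit
    for p in (1, 2, 4, 8):
        # The group owned by parity bit p: indices whose binary form has bit p
        group = [i for i in range(16) if i & p]
        block[p] = sum(block[i] for i in group) % 2
    block[0] = sum(block) % 2  # overall parity bit for the extended code
    return block

b = hamming_encode_16([1] + [0] * 10)
print(b)  # [1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
```

Note that each parity bit sits inside its own group, so setting it to the group's current parity makes that group's total even, just as in the walkthrough.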
The third check is even, chopping down the possibilities even further, and the last parity check is odd, telling us there's an error somewhere in the bottom, which by now we can see must be in position number 10. What's more, the parity of the whole block is odd, giving us confidence that there was one flip and not two (if it's three or more errors, all bets are off). After correcting that bit number 10, pulling out the 11 bits that were not used for correction gives us the relevant segment of the original message, which, if you rewind and compare, is indeed exactly what we started the example with. And now that you know how to do all this by hand, I'd like to show you how you can carry out the core part of all of this logic with a single line of Python code. You see, what I haven't told you yet is just how elegant this algorithm really is, how simple it is to get a machine to point to the position of an error, how to systematically scale it, and how we can frame all of this as one single operation rather than multiple separate parity checks. To see what I mean, come join me in part 2.
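For a taste of how the four checks collapse into one operation, here is one way such a one-liner can look (my own sketch of the idea; the follow-up video's actual code may differ): XOR together the positions of every 1 in the block, and each bit of the result answers one of the four parity questions.

```python
from functools import reduce
from operator import xor

def locate_error(block):
    """XOR the indices of all 1-bits. Bit p of the result is the outcome of
    the parity check owned by position p, so a nonzero result is the
    position of a single-bit error, and zero means all checks passed."""
    return reduce(xor, [i for i, bit in enumerate(block) if bit], 0)

# A valid block (ones at positions 0, 1, 2, 3) passes all four checks...
block = [1, 1, 1, 1] + [0] * 12
print(locate_error(block))  # 0

block[10] ^= 1              # ...and a single flip is pinpointed exactly
print(locate_error(block))  # 10
```

This works because flipping the bit at position i toggles exactly the parity groups named by the binary digits of i, which is the binary-counting connection hinted at earlier.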
Inverse matrices, column space and null space | Chapter 7, Essence of linear algebra | As you can probably tell by now, the bulk of this series is on understanding matrix and vector operations through that more visual lens of linear transformations. This video is no exception, describing the concepts of inverse matrices, column space, rank, and null space through that lens. A forewarning though: I'm not going to talk about the methods for actually computing these things, and some would argue that that's pretty important. There are a lot of very good resources for learning those methods outside this series; keywords: Gaussian elimination and row echelon form. I think most of the value that I actually have to add here is on the intuition half. Plus, in practice, we usually get software to compute this stuff for us anyway. First, a few words on the usefulness of linear algebra. By now you already have a hint for how it's used in describing the manipulation of space, which is useful for things like computer graphics and robotics. But one of the main reasons that linear algebra is more broadly applicable, and required for just about any technical discipline, is that it lets us solve certain systems of equations. When I say system of equations, I mean you have a list of variables, things you don't know, and a list of equations relating them. In a lot of situations, those equations can get very complicated. But if you're lucky, they might take on a certain special form: within each equation, the only thing happening to each variable is that it's scaled by some constant, and the only thing happening to each of those scaled variables is that they're added to each other. So no exponents or fancy functions, no multiplying two variables together, things like that. The typical way to organize this sort of special system of equations is to throw all the variables on the left and put any lingering constants on the right.
It's also nice to vertically line up the common variables, and to do that you might need to throw in some zero coefficients whenever a variable doesn't show up in one of the equations. This is called a linear system of equations. You might notice that this looks a lot like matrix-vector multiplication. In fact, you can package all of the equations together into a single vector equation, where you have the matrix containing all of the constant coefficients and a vector containing all of the variables, and their matrix-vector product equals some different constant vector. Let's name that constant matrix A, denote the vector holding the variables with a bold-faced x, and call the constant vector on the right-hand side v. This is more than just a notational trick to get our system of equations written on one line. It sheds light on a pretty cool geometric interpretation for the problem. The matrix A corresponds with some linear transformation, so solving Ax equals v means we're looking for a vector x which, after applying the transformation, lands on v. Think about what's happening here for a moment. You can hold in your head this really complicated idea of multiple variables all intermingling with each other just by thinking about squishing and morphing space and trying to figure out which vector lands on another. Cool, right? To start simple, let's say you have a system with two equations and two unknowns. This means the matrix A is a 2-by-2 matrix, and v and x are each two-dimensional vectors. Now, how we think about the solutions to this equation depends on whether the transformation associated with A squishes all of space into a lower dimension, like a line or a point, or if it leaves everything spanning the full two dimensions where it started. In the language of the last video, we subdivide into the case where A has zero determinant and the case where A has non-zero determinant.
Let's start with the most likely case, where the determinant is non-zero, meaning space does not get squished into a zero-area region. In this case, there will always be one and only one vector that lands on v, and you can find it by playing the transformation in reverse. Following where v goes as we rewind the tape like this, you'll find the vector x such that A times x equals v. When you play the transformation in reverse, it actually corresponds to a separate linear transformation, commonly called the inverse of A, denoted A to the negative one. For example, if A was a counterclockwise rotation by 90 degrees, then the inverse of A would be a clockwise rotation by 90 degrees. If A was a rightward shear that pushes j-hat one unit to the right, the inverse of A would be a leftward shear that pushes j-hat one unit to the left. In general, A inverse is the unique transformation with the property that if you first apply A, then follow it with the transformation A inverse, you end up back where you started. Applying one transformation after another is captured algebraically with matrix multiplication, so the core property of this transformation A inverse is that A inverse times A equals the matrix that corresponds to doing nothing. The transformation that does nothing is called the identity transformation. It leaves i-hat and j-hat each where they are, unmoved, so its columns are 1, 0 and 0, 1. Once you find this inverse, which in practice you do with a computer, you can solve your equation by multiplying this inverse matrix by v. And again, what this means geometrically is that you're playing the transformation in reverse and following v. This non-zero determinant case, which for a random choice of matrix is by far the most likely one, corresponds with the idea that if you have two unknowns and two equations, it's almost certainly the case that there's a single unique solution.
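In practice, the "get software to compute this" step might look like the following NumPy sketch (my example, not part of the series; solving directly is preferred over forming the inverse explicitly):

```python
import numpy as np

# Two equations, two unknowns: 2x + y = 5 and x + 3y = 10
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
v = np.array([5.0, 10.0])

# det(A) = 5, nonzero, so exactly one x lands on v under the transformation A
x = np.linalg.solve(A, v)
print(x)                        # [1. 3.]

# A inverse composed with A is the identity: the "do nothing" transformation
print(np.linalg.inv(A) @ A)     # approximately [[1, 0], [0, 1]]
```

Geometrically, `solve` is playing the transformation in reverse and following v back to the unique x that maps onto it.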
This idea also makes sense in higher dimensions, when the number of equations equals the number of unknowns. Again, the system of equations can be translated to the geometric interpretation where you have some transformation A and some vector V, and you're looking for the vector X that lands on V. As long as the transformation A doesn't squish all of space into a lower dimension, meaning its determinant is non-zero, there will be an inverse transformation A inverse with the property that if you first do A, then you do A inverse, it's the same as doing nothing. And to solve your equation, you just have to multiply that inverse transformation matrix by the vector V. But when the determinant is zero, and the transformation associated with the system of equations squishes space into a smaller dimension, there is no inverse. You cannot unsquish a line to turn it into a plane. At least, that's not something that a function can do. That would require transforming each individual vector into a whole line full of vectors. But functions can only take a single input to a single output. Similarly, for three equations and three unknowns, there will be no inverse if the corresponding transformation squishes 3D space onto a plane, or even if it squishes it onto a line or a point. Those all correspond to a determinant of zero, since any region is squished into something with zero volume. It's still possible that a solution exists even when there is no inverse. It's just that when your transformation squishes space onto, say, a line, you have to be lucky enough that the vector V lives somewhere on that line. You might notice that some of these zero-determinant cases feel a lot more restrictive than others. Given a 3 by 3 matrix, for example, it seems a lot harder for a solution to exist when it squishes space onto a line compared to when it squishes things onto a plane, even though both of those are zero-determinant.
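The zero-determinant case can also be sketched numerically. This example, not from the transcript, uses a matrix with parallel columns, so it squishes the plane onto a line: the inverse fails, but a solution still exists when v happens to lie on that line.

```python
import numpy as np

# Columns are parallel, so this transformation squishes the plane onto a line:
# zero determinant, and no inverse exists.
A = np.array([[2.0, 4.0],
              [1.0, 2.0]])
assert np.isclose(np.linalg.det(A), 0.0)

try:
    np.linalg.inv(A)
except np.linalg.LinAlgError:
    print("singular matrix: no inverse")

# A solution can still exist if v is lucky enough to live on that line.
v = np.array([2.0, 1.0])  # a multiple of A's columns
x, *_ = np.linalg.lstsq(A, v, rcond=None)
assert np.allclose(A @ x, v)
```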
We have some language that's a bit more specific than just saying zero-determinant. When the output of a transformation is a line, meaning it's one-dimensional, we say the transformation has a rank of one. If all the vectors land on some two-dimensional plane, we say the transformation has a rank of two. So the word rank means the number of dimensions in the output of a transformation. For instance, in the case of two by two matrices, rank two is the best that it can be. It means the basis vectors continue to span the full two dimensions of space and the determinant is not zero. But for three by three matrices, rank two means that the output has collapsed, though not as much as it would have for a rank one situation. If a 3D transformation has a non-zero determinant and its output fills all of 3D space, it has a rank of three. This set of all possible outputs for your matrix, whether it's a line, a plane, 3D space, whatever, is called the column space of your matrix. You can probably guess where that name comes from. The columns of your matrix tell you where the basis vectors land. The span of those transformed basis vectors gives you all possible outputs. In other words, the column space is the span of the columns of your matrix. So a more precise definition of rank would be that it's the number of dimensions in the column space. When this rank is as high as it can be, meaning it equals the number of columns, we call the matrix full rank. Notice, the zero vector will always be included in the column space, since linear transformations must keep the origin fixed in place. For a full rank transformation, the only vector that lands at the origin is the zero vector itself. But for matrices that aren't full rank, which squish to a smaller dimension, you can have a whole bunch of vectors land on zero. If a 2D transformation squishes space onto a line, for example, there is a separate line in a different direction, full of vectors that get squished onto the origin.
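The rank-as-dimensions-of-output idea maps directly onto NumPy's `matrix_rank`; a short illustration of my own, with made-up matrices matching the cases described above:

```python
import numpy as np

full_rank = np.array([[3.0, 1.0],
                      [0.0, 2.0]])
line_collapse = np.array([[2.0, 4.0],
                          [1.0, 2.0]])  # parallel columns: output is a line

# Rank = number of dimensions in the column space (span of the columns).
assert np.linalg.matrix_rank(full_rank) == 2     # full rank for a 2x2
assert np.linalg.matrix_rank(line_collapse) == 1  # rank one: a line

# A 3x3 transformation that squishes 3D space onto a plane has rank two:
# collapsed, though not as much as a rank one situation would be.
onto_plane = np.array([[1.0, 0.0, 1.0],
                       [0.0, 1.0, 1.0],
                       [0.0, 0.0, 0.0]])
assert np.linalg.matrix_rank(onto_plane) == 2
```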
If a 3D transformation squishes space onto a plane, there's also a full line of vectors that land on the origin. If a 3D transformation squishes all of space onto a line, then there's a whole plane full of vectors that land on the origin. This set of vectors that lands on the origin is called the null space or the kernel of your matrix. It's the space of all vectors that become null, in the sense that they land on the zero vector. In terms of the linear system of equations, when V happens to be the zero vector, the null space gives you all of the possible solutions to the equation. So that's a very high level overview of how to think about linear systems of equations geometrically. Each system has some kind of linear transformation associated with it. And when that transformation has an inverse, you can use that inverse to solve your system. Otherwise, the idea of column space lets us understand when a solution even exists. And the idea of a null space helps us to understand what the set of all possible solutions can look like. Again, there's a lot that I haven't covered here, most notably how to compute these things. I also had to limit my scope to examples where the number of equations equals the number of unknowns. But the goal here is not to try to teach everything. It's that you come away with a strong intuition for inverse matrices, column space, and null space, and that those intuitions make any future learning that you do more fruitful. Next video, by popular request, will be a brief footnote about non-square matrices. Then after that, I'm going to give you my take on dot products, and something pretty cool that happens when you view them under the light of linear transformations. See you then. |
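A numerical sketch of the null space idea, again my own illustration rather than the video's: one standard way to compute it is from the singular value decomposition, where rows of Vt belonging to (near-)zero singular values span the set of vectors that land on the origin.

```python
import numpy as np

# A 2D transformation that squishes the plane onto a line.
A = np.array([[2.0, 4.0],
              [1.0, 2.0]])

# The null space is every vector that lands on the origin: A @ x = 0.
# Rows of Vt matching (near-)zero singular values span it.
_, s, Vt = np.linalg.svd(A)
null_basis = Vt[np.isclose(s, 0.0)]

print(null_basis.shape)  # (1, 2): a whole line's worth of directions squished to zero
for direction in null_basis:
    assert np.allclose(A @ direction, np.zeros(2))
```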
Euler's Formula Poem | Famously, start with E raised to pi with an i. We've been taught by a lot that you've got minus 1. Can we glean what this means? For such words are absurd: how to treat the repeat of a feat pi i times? This is bound to confound till your mind redefines these amounts one can't count, which define our friend E. Numbers act as abstract functions which slide the rich 2D space in its place with a grace when they're summed. Multiplied, they don't slide, acting a second way: they rotate and dilate but keep straight the same plane. Now what we write as E to the x won't perplex when you know it's for show, that x goes up and right. It is not, as you thought, repeated product E. It functions with gumption on functions we've now seen. It turns slides side to side into growths and shrinks both. Up and down come around as turns round, which is key. This is why pi times i, which slides north as brought forth, and returned, we have learned, is a turn halfway round. Minus 1, matched by none, turns this way and we're done. |
The quick proof of Bayes' theorem | This is a footnote to the main video on Bayes' theorem. If your goal is simply to understand why it's true from a mathematical standpoint, there's actually a very quick way to see it based on breaking down how the word AND works in probability. Let's say there are two events, A and B. What's the probability that both of them happen? On the one hand, you could start by thinking of the probability of A, the proportion of all possibilities where A is true, then multiply it by the proportion of those events where B is also true, which is known as the probability of B given A. But it's strange for the formula to look asymmetric in A and B. Presumably, we should also be able to think of it as the proportion of cases where B is true, among all possibilities, times the proportion of those where A is also true, the probability of A given B. These are both the same, and the fact that they're both the same gives us a way to express P of A given B in terms of P of B given A, or the other way around. So when one of these conditional probabilities is easier to put numbers to than the other, say when it's easier to think about the probability of seeing some evidence given a hypothesis, rather than the other way around, this simple identity becomes a useful tool. Nevertheless, even if this is somehow a more pure or quick way to understand the formula, the reason I chose to frame everything in terms of updating beliefs with evidence in the main video is to help with that third level of understanding, being able to recognize when this formula, among the wide landscape of available tools in math, happens to be the right one to use. Otherwise, it's kind of easy to just look at it, nod along, and promptly forget. And you know, while we're here, it's worth highlighting a common misconception that the probability of A and B is P of A times P of B.
For example, if you hear that one in four people die of heart disease, it's really tempting to think that that means the probability that both you and your brother die of heart disease is one in four times one in four, or one in sixteen. After all, the probability of two successive coin flips yielding tails is one half times one half, and the probability of rolling two ones on a pair of dice is one sixth times one sixth, right? The issue is correlation. If your brother dies of heart disease, and considering certain genetic and lifestyle links that are at play here, your chances of dying from a similar condition are higher. A formula like this, as tempting and clean as it looks, is just flat out wrong. What's going on with cases like flipping coins or rolling two dice is that each event is independent of the last. So the probability of B given A is the same as the probability of B. What happens to A does not affect B. This is the definition of independence. Keep in mind, many introductory probability examples are given in very gamified contexts, things with dice and coins, where genuine independence holds. But all those examples can skew your intuitions. The irony is that some of the most interesting applications of probability, presumably the whole motivation for the kind of courses using these gamified examples, are only substantive when events aren't independent. Bayes' theorem, which measures exactly how much one variable depends on another, is a perfect example of this. |
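The independent-versus-correlated distinction can be seen in a quick simulation. This sketch is mine, with made-up probabilities: in the independent case the naive product P(A)·P(B) matches the simulated frequency, while in the correlated case it underestimates P(A and B).

```python
import random

random.seed(0)
trials = 200_000

# Independent events: P(A and B) really is P(A) * P(B).
hits = 0
for _ in range(trials):
    a = random.random() < 0.25
    b = random.random() < 0.25  # B ignores A entirely
    hits += a and b
independent_est = hits / trials  # close to 0.25 * 0.25 = 0.0625

# Correlated events: B is more likely once A has happened, so the
# naive product using B's baseline probability underestimates P(A and B).
hits = 0
for _ in range(trials):
    a = random.random() < 0.25
    p_b = 0.5 if a else 0.25  # made-up correlation, for illustration
    hits += a and (random.random() < p_b)
correlated_est = hits / trials  # close to 0.25 * 0.5 = 0.125, not 0.0625

print(independent_est, correlated_est)
```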
The DP-3T algorithm for contact tracing (via Nicky Case) | The safest way to reopen the economy soon, without causing a second wave of the virus, will involve some notion of contact tracing. But there's a common misconception that this requires tracking people's locations. A friend of mine, Nicky Case, recently wrote up a post explaining why contact tracing is needed, and importantly, how the privacy-protecting variants of it work. This was done in collaboration with the epidemiologist Marcel Salathé and the security researcher Carmela Troncoso. What follows is a video adaptation of that post. As far as COVID-19 cares, there are only three kinds of people: not infected yet, infected and contagious but with no symptoms, and infected, contagious, and showing symptoms. If you have widespread testing, you can get people to self-isolate as soon as they show symptoms. The problem is that the virus still spreads because of all the contacts that happen while people are contagious, but asymptomatic. However, if when someone shows symptoms and tests positive, you isolate not only them, but everyone they've been in contact with, you're staying one step ahead of the virus. The old school way to do this is with interviews, but that's slow, it's inefficient, and frankly, it's quite the intrusion on people's privacy. Another approach in the modern world would be to ask people who've tested positive to forfeit all the geolocation information from their phones, and then to track down the people who have been in those same spots. But now we are well into big brother territory, so do we have to sacrifice privacy for health? Well, I'll just let Nicky's illustration speak for itself here. There are several clever algorithms that let you alert everybody who's recently been in contact with someone who tests positive for COVID-19, but without compromising the privacy of anybody involved. Side note here, I found this very surprising.
I know it shouldn't have been, since I've gone through this dance many times of thinking something's impossible only to see that cryptography makes it actually possible, but I would not blame anybody at all for assuming that downloading an app that can alert everybody you've been in contact with must necessarily be tracking and revealing your location and a lot of other information. The code for these apps is entirely open, so you don't have to trust me or whoever wrote the app or Nicky or anyone to believe that it's doing what it really claims to be doing. Anyway, back to the post. Let's see how this works with the help of Alice and Bob. Alice gets a tracing app. Every five minutes, her phone sends out some uniquely pseudo-random gibberish to all the nearby devices using Bluetooth. Because these messages are pseudo-random, they don't use GPS, and they contain no information about Alice's identity, not her location, not anything. It really is gibberish, but the key point is that this gibberish is unique. Now, while her phone sends out messages, it also listens for messages from nearby phones. For example, Bob's. Bob also has a privacy-first tracing app that's compatible with, or the same as, Alice's. If Alice and Bob stay close to each other for more than five minutes, their phones will exchange respective strings of unique gibberish. Both of these phones remember all of the messages that they sent and heard over the last 14 days. Again, because the random messages contain no information, Alice's privacy is protected from Bob and vice versa. The next day, Alice develops a dry cough and a fever. Alice gets tested. Alice has COVID-19. This is not a good day for Alice. But she won't suffer in vain. Alice tells her app to upload all of the random gibberish messages that it's been sending out to a hospital database. And to do this, she uses a one-time passcode given to her by her doctor. This code is to prevent spam.
The database then stores Alice's gibberish, and again, the random messages give no information about Alice, where she was, who she was with, what she was doing, or even how many people Alice met. It really is meaningless to the hospital. But it's not meaningless to Bob's phone. Bob's phone often checks this hospital list of random messages that have come in from COVID-19 positive cases. Essentially, the hospital's database is saying to all the phones out there, hey, we just got this new random gibberish. If you've seen that same random gibberish sometime in the last 14 days, it means you've been in contact with someone who just tested positive for COVID-19. Once Bob's phone recognizes some of these numbers that are the gibberish snippets now known to be associated with positive test cases, it can warn Bob to self-quarantine. And so, Bob cuts off the chain of transmissions, and we're staying one step ahead of the virus. And that's it. That's how digital contact tracing apps can proactively prevent the spread of COVID-19 while also protecting our rights. Thanks, Alice and Bob. Stay safe. One important thing to realize is that you don't need everyone to have these apps. Estimates have it that about 60% would do the trick for COVID-19. Even if you can't catch all possible contacts, what you need is to catch enough so that the spread of coronavirus shifts from growing exponentially to shrinking exponentially. For those of you who know what this means, what we need is for R to drop below one. The author of the post that this video is an adaptation of is a friend of mine, Nicky Case, who is a brilliant programmer and artist. He often makes these interactive explanations of societally important ideas, including a new one all about what happens next with COVID-19. So I would highly recommend taking a look at his work if you get the chance. He kindly made this post public domain and moreover helped me in putting together this video.
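The broadcast-and-match idea above can be sketched in a few lines of Python. To be clear, this is a toy sketch of the token-exchange concept only, not the real DP-3T protocol, which derives its broadcast tokens cryptographically from rotating daily keys; the names and interval counts here are my own invention.

```python
import secrets

def new_token() -> str:
    # a fresh piece of "uniquely pseudo-random gibberish"
    return secrets.token_hex(16)

# Alice's phone broadcasts a new token every interval and remembers them all.
alice_sent = [new_token() for _ in range(5)]

# Bob's phone was nearby for two of those intervals and heard those tokens.
bob_heard = set(alice_sent[2:4])

# Alice tests positive and uploads only the tokens she sent: no location,
# no identity, nothing but the gibberish itself.
hospital_db = set(alice_sent)

# Bob's phone compares what it heard against the published list.
exposed = bob_heard & hospital_db
print(len(exposed))  # 2 matches -> Bob is warned to self-quarantine
```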
Any reasonable plan for reopening the economy without causing a second wave of the virus will include some kind of contact tracing. And given how counterintuitive the privacy protection here can be, I wanted to do all that I could to help amplify this message. If you agree, please do share either Nicky's original post or this video adaptation of it. And since the time that Nicky posted that, Apple and Google have put out press releases about their own interoperable contact tracing systems. And I'll tell you, I'm one of those people who aggressively turns off location tracking as much as I can on my phone, no matter how annoying the apps can be about asking, I'm looking at you Fitbit. But I am 100% on board with these apps because it's not tracking your location or anything like that. And if you're a nerd like me who wants to dig into the cryptography a bit, I would recommend looking at the white paper and the code for the DP-3T algorithm, for which I've left links in the description. And as a final note, among the many unfortunate consequences of the lockdowns is a spike in calls to mental health services like the suicide prevention hotline. If somebody that you know is at risk of suicide, give them a call, make sure they're doing okay. And if you're at risk, take a look at the resources in the description. And if it's an option, reach out to the people in your life to talk about what you're going through. |
The Summer of Math Exposition | I want to tell you about a contest that I'm running with a friend of mine, James Schloss, who some of you might recognize from the YouTube channel LeiosOS and his Twitch stream and things like that. Also now there's a 3B1B podcast, but more on that in just a moment. Basically, we want there to be more math explanation online, and we want to encourage more people to get started actually doing it. We're calling it the Summer of Math Exposition, where essentially we're just inviting anyone who wants to to submit some kind of math explainer, whether that's a video or a blog post or an interactive game or whatever it is that explains math online in some way. To the link that's on screen now by August 22nd, and then once we... Yes. I see you. Would you like to sniff at the microphone? Probably yes, that's very sweet. Very sweet. You're a very affectionate creature. Anyway, after August 22nd, we're going to have a selection process to choose some winners from among them, and then I'll feature them in a 3B1B video. And then for the rest, I'll probably also put together a playlist of all of the videos and a list of links on the website somewhere to all the project submissions. And then I was also thinking maybe I would send something to the winners, like creating some custom gold plushy pi creatures or something like that. But the main prize is to have your work featured and hopefully get it out to a few more people. The reason I'm interested in doing this is I think there's a lot of people out there who would be really good at doing this. Would have some excellent explanation that would genuinely help a lot of people or show a topic that's not really covered very well elsewhere. And who might even be thinking about doing it, you know, there's that back of your mind spot that says maybe one day I'll try my hand at a video or just write this up as a blog post. But you just never really got around to it, you know, life's busy.
You're not sure where to start. Seems like there's a lot of other things out there. My hope is that by dangling the tiniest carrot that I can provide, just mentioning good work when it exists, then maybe that gets a couple more people over this hump who might otherwise not have made something and then actually make it. So one of the very few constraints on entries for this particular contest is that it has to be something new. It can't be a thing that you made a while ago and you're just submitting a link to it now. But something that you make between now and August 22nd. And the real spirit of this all is to encourage people who have never tried it before to get started in it somehow. The other constraint is that it does have to be about math, but math in the broadest possible sense of the term, so that could include physics or computer science as long as it's got some mathy components to it. So if you're doing physics and there's formulas that are relevant, don't shy away from those formulas. Or if you're doing some computer science and there's some, you know, algorithmic complexity or something mathematical to it, try to lean into that a little bit more. Other than that, the topic matter is completely up to you. So maybe you have some topics that you've seen or that you've learned about, but which you really feel aren't covered well online anywhere and it would really be adding something new to the space. Maybe you're a physics buff who is interested in the philosophical side of things and you've learned about Conway's Free Will Theorem and you want to talk about it and whether that's an appropriate name, whether it's actually philosophically interesting or not, or just share with people what it is. Or maybe you're someone who's into information theory and things like that and you've learned about how Kolmogorov complexity can be used to describe some things about the distributions of primes.
And you think that's actually an interesting angle for how to introduce Kolmogorov complexity in the first place. Or anything like this where it's something kind of new to the space, if that's you, you should definitely consider submitting. But it doesn't have to be a topic that no one has ever covered or that's severely undercovered online. Even if it's something that's very standard, and especially if it's something that a lot of students have to learn at some point, coming up with a better way to explain it, and thinking about what's the state of the art explanation on any particular thing, that could also add a lot to the space. Say for example, you've taught students about partial fraction decomposition in tutoring or teaching or something like that and you feel like you've come across a way of explaining it that makes it a little bit more memorable. You should definitely submit. Or maybe you have some really pretty way to visualize certain trig identities that students run into that keeps them from feeling very rote and instead sheds light on, you know, how beautiful math can be and all that kind of thing. If that's you and you feel passionate about it, you should definitely submit that. One set of people who I'm particularly interested in for this competition are the teachers and the lecturers and basically anyone with a lot of boots-on-the-ground experience seeing people learn and seeing what actually works. Because I think there's a lot of outstanding explanations out there that stay largely confined to the classroom or otherwise stay offline, whereas if just a little bit of effort was put into producing it or sharing it online in some way, those lessons might actually reach and benefit one to two orders of magnitude more people. And I get it. Teachers are absurdly busy. They don't have time for extra things on the side and it's kind of hard to know where to get started.
So maybe one potential partnership here would be the teachers who have really good instincts for what works in education and then a student who maybe has a lot of energy or desire to get started on YouTube or otherwise just has more free time on their hands, and pairing something together like that might actually make for a good partnership. In either case, whatever category you fall into, I do know there's a lot of people who do want to get started with this because they write to me a lot, and one of the most common sentiments out there is, well, I don't know where to get started. I want to make a video but I don't really have any experience with video making, things like that. And I have a couple things to say for someone who feels like they're in that boat. In a kind of loose conjunction with this contest, I decided to start a podcast where for the first many conversations, I'll be interviewing people who have some kind of experience in the space of putting out explanations. So that could be other YouTubers, but it could also include mathematicians who are really engaged with outreach or the founder of Khan Academy, things like this. And essentially have conversations which act to either inspire or otherwise inform anyone who might be getting started with this. After doing several of these interviews, one of the most useful pieces of information that I think comes out from them is just how ramshackle and unprofessional the setup for a lot of people is in the very beginning. And as a result, it should come as no surprise that one of the most common pieces of advice, one of the most universal answers to the question of what advice would you give to someone getting started with this, is to just start. The thing that differentiates the people who actually put stuff out there versus the ones who don't is not a matter of having a lot more experience with it beforehand. It's a matter of having a kind of generative spirit that just wants to make stuff.
Because my, I'll say persona, but this is, you know me, this is genuinely me, is give it a go. You're probably going to fail, but it's worth a try. I get a lot of people kind of saying, oh, you're like, you make YouTube videos. I've always wanted to make YouTube videos and I'm like, great, just do it. And they're like, no, no, but I need to buy a nice camera and I need to get a good set. And I'm like, no, no, no, like, just do it. There's always this type of thing. Wait, am I ready yet? Am I ready yet? Well, press record and start, see what happens. If I take the example which I know best, which is my own, there are so many really embarrassing things about the early videos on this channel or my process in creating them. I mean, the sound quality was pretty terrible for a long time. That's one big thing. I edited in iMovie for way longer than I care to admit. Also despite being now a professional YouTuber, when it comes to cameras and actually filming things like this, I really have no idea what I'm doing. Like right now, I'm just using a phone, which I guess is fine. I find the process of being alone in a room and just talking to a camera incredibly awkward. This is actually my second time recording this whole video because the first time I thought I would be really clever and have some notes to like guide what I wanted to say and I'd put them on the monitor next to the camera. But what the result was is that I would just kind of have my eyes darting back and forth between the two without me consciously realizing it. It's just this reminder that I really don't know what I'm doing. But this isn't a self-effacing thing. The point here is that if you find yourself with a potentially good explainer that you want to make, but you're a little self-conscious about how to start or you're worried that you're going to make a mistake, just don't worry about it. Just dive right in. So many of us have no idea what we're doing when we begin. 
All that said, sometimes this just do it advice is a little bit frustrating because I mean it's not actionable. You say, okay, I'm going to start. But then upon starting, it tells you nothing. So in the spirit of some more concrete advice, I do have a couple things that I might want to pass along that are specific to the case of math explainers. The first one, and I do actually find this quite important, is when you're putting together the explanation, whatever form, whatever medium, whatever genre you choose, try to be aware of the layers of abstraction that are relevant to your topic. So like if you're teaching a young child about fractions, and you're talking about two-thirds plus one-fifth, there's two different layers of abstraction that that expression lives in. There's one where you have a very concrete example of two-thirds of something, two-thirds of a cake, and then one-fifth of a cake, and trying to get a sense of what that means. And then there's the symbols, and a big part of the lesson at play here is understanding how the symbols relate to the actual case, and why the rules that we apply to the symbols make sense in light of the concrete case. And also, why we opt to do the more abstract thing, because it takes much less thinking than actually trying to reason about two-thirds of a cake, plus one-fifth of a cake. And this happens at all levels, if you're teaching a calculus class, and you're talking about optimizing functions, there's the idea of a function as a very abstract thing that could be any particular function, or any differentiable function, or what have you, and then there's lots of specific examples. Or maybe specific cases where they come up, like a function defining the profit of a company, and that's the thing you want to optimize. 
I made a whole video about group theory, where, in the middle, I went on for a while about the difference between thinking of group actions as these abstract entities versus as something concrete, like a symmetry, and why both exist, and what the benefits and trade-offs are. But my point in this first piece of advice is not merely that you address the layers of abstraction; you don't even have to. But be clear about them in your own head, and try very hard to structure your explanation to go from the concrete to the abstract. I think almost always, when you understand something, the natural inclination is to go the other way around. I find myself doing this in pretty much any first draft of a script that I have, and it seems like all the textbook authors that I ever read tend to do this: you start with the abstract idea, and you put the examples later. But I really do think that in the case of learning, it's better to first populate the learner's mind with a bunch of examples of things that have a similar pattern between them, and let their brain do the abstraction, seeing that similar pattern between things, such that when you bring in that higher layer, when you start defining an abstract vector space or doing some symbolic manipulations with particular rules, you're articulating something in the brain of the learner that was already sitting there in the first place. It wasn't just handed to them in a vacuum. Otherwise, it's a little bit like trying to build a building from the top floor down. So that's one, and that's very specific to math. As a more generic idea, piece of advice number two would be to keep at the very forefront of your mind the fact that content is king; the thing that you're explaining, the choice of the topic, and how you're explaining it, determine the majority of the value and the quality of the thing that you make.
All the things about production quality, or how fancy the animations are, or the lighting, or whatever it is, all of that is secondary to making sure you've chosen an actually good topic, and it's something that people would want to consume. They haven't seen it elsewhere, it's offering something fresh. Now, that's so easy to nod along with and say, yes, yes, of course, content is king. But the thing is, you end up spending about 1% of your time, if that, choosing what you're going to explain, and how you're going to explain it, and then like 99% of the time just carrying it out in some way. And as a result, it can be easy to lose sight of that important part. So my encouragement to you would be, spend more time than you would otherwise tend to on choosing that topic. So maybe workshop a couple of different things by doing sample lessons with people, or try to write out a list of all the different things that you could do, and ask, are they actually fresh, adding something to the space? Is there a reason someone would want to consume this? Spending that extra little bit of time on the thing that determines the majority of the value is almost certainly worth it. And the third piece of advice, which maybe plays into this a little bit, is, when you're beginning, if you're just starting something fresh, and there's no presence online at this point, try to choose something much more esoteric and specific than you might be inclined to. I've seen a lot of people who want to get started on YouTube, for example, and the way that they try to go about it is to choose a topic that will appeal to the most people. After all, they want their video to blow up. They want a lot of subscribers and things like that. But there's a couple of issues with this. First of all, it's a much more competitive space if you're going to try to describe something that a lot of people might be searching for. So if you go in saying, I'm going to do a series about quantum mechanics.
Well, there's a billion others out there, and yours is going to have to stand out for some reason, and you don't have a foothold at that point. But another one is that the very specific and niche things build a much more loyal audience in that beginning, because you're offering something which they could not find anywhere else, and sometimes being the consumer of something very specific is such a good feeling that you want to pay it back, and you find yourself rooting for the creator. So you're much more likely to get very good faith feedback, just a warmer community. And also, oftentimes we tend to overestimate just how niche things are. Sometimes something that's weirdly specific, some very esoteric bit of engineering, actually appeals to hundreds of thousands or millions of people, especially if you yourself are enthusiastic about it, and people can index on that. So by doing that, you often find yourself with a topic that actually does have a broader appeal, but it's not competitive with the things that everyone thinks will have broader appeal, and you potentially get that audience loyalty. When I started this channel, I really was thinking of it as a very niche thing. I did not think it would be a thing that a lot of people would want to watch. I actually specifically wanted to find topics in math that no one would think to search for. That was kind of the original conception, one that I wavered from a little bit afterward. The fourth piece of advice is to pick a genre that your piece falls into. So the other day, I was giving this talk to a group of people, and one of them wanted to get started making online explainers, and they asked whether it was ethical for someone who's just barely learning a topic, just starting to learn it, to also make explainers of it online. I mean, after all, they're more likely to make mistakes. They don't know the broader context. And there's so many things I like about that question.
It's already demonstrating a kind of care and consideration for factual accuracy, and for doing right by the student, that more people who are making online explanations should consider. So, like, that very fact suggested to me that this person probably should be doing it. But one of the things I suggested is to acknowledge that there are different types of explainers out there. There's the type where the narrator is a little bit more distanced, kind of standing on top of a hill and explaining the way that things are. And to do that, you really have to research the topic very deeply. You should probably know 10 times as much about the topic as what you're actually saying in the content, so that you know you're teeing things up for where it actually leads, or you're being cognizant of whatever nuances there are, things like that. But another genre entirely is discovery journalism, where the person who is learning the topic admits that fact, is open about the fact that they're just starting with it, and takes the viewer along on a journey with them. And many times that's actually a better piece of content. It's actually better for learning the topic. And it comes with this inbuilt piece of humility that a lot of online content lacks. But there's lots of other genres like this. There's the worked example, where you're explicitly helping people with homework. There's the kind that tries to find an interesting demo and serves mainly to inspire. And basically, before you get started, decide which one of those you feel like you're the best fit for. And then when you're looking at other pieces out there, other explainers, and trying to index off of what seems to work and what doesn't, be aware of which ones are in the lane that you intend to be in, and don't necessarily pattern match off of the ones that aren't.
You know, one of the mistakes I think I made with my very first video is I had this conception that sometimes, if you talk faster than is comfortable, on the internet that sort of works. That's, like, a satisfying thing to consume, because there are videos out there that are this firehose of information, and something about that scratches an itch, and I think people like it. But what I didn't really appreciate was the fact that math should fall into a completely different category than that. It is not fun at all to have math come at you at this firehose rate. And basically, I was just pattern matching off of things that I should not have been pattern matching off of. As point number five, or I don't actually know where I am in the list at this point: especially in the case of math, if you're bringing up definitions of things, try not to let them feel too arbitrary. Try to let them be well motivated. Explain why that's the definition, and what else it could have been. Try to make it something that the learner feels like they discovered themselves. Because too often we hand these things down from on high as the starting point, and it's not really clear why, or where they came from. So all of that is just on the content side, what exactly you are explaining, independent of the multimedia component of it, the sound and the video and all that. And like I said, content is king; that definitely determines the majority of quality. But it does actually matter a little bit beyond that to have at least some production quality, I think. And I'll give a really good example. So I was watching this lecture the other day by Tadashi Tokieda on just a really interesting set of ideas about applying physical intuition to solving math problems. And he had in there maybe seven or eight outstanding little arguments, each one of which could have been just a beautiful video in its own right.
But, you know, the talk was over Zoom, and the intro was really long, and the sound quality was everything that you'd assume from Zoom, and the lighting of his shot was weird. And the talk was really good. And I do think, you know, it's great that it's online and a lot of people will be consuming it. But it's probably fair to say that if all of that content, the actual set of ideas, was instead, say, a Numberphile video, it would reach a hundred times as many people. And more than that, it would be a more pleasant experience for those who are consuming it. And it really doesn't take that much. So I'll just end with a couple pieces of advice on that front. The first one, which again I acknowledge is very hypocritical here, is that sound quality actually matters, especially in an era of Zoom, where we are all inundated with this sort of suboptimal version of the voices of all the people in our lives. The learner will appreciate a respite from all that with something that actually comes from a good microphone that you learned how to use at some point. On the side of visuals, you know, I'm obviously a big believer in the idea that a well-chosen illustration or an animation can really make a mathematical idea a lot more clear, and be an example of that concretization, kind of going from the lower layer of abstraction on upward, by just showing exactly what it is on screen in some way. Now, the way I do things is with programmatic animations. I wrote this custom library called Manim to do that. And last year, actually, a group of people that call themselves the Manim Community created a fork of it with the hope of making it a lot more user-friendly. And I think they succeeded with that. There's a lot better documentation, it's better tested, just all around friendlier to use. So you can use that tool, and thanks to them, it's actually a lot easier than it used to be.
There's some other libraries that I've seen that mention Manim as an inspiration, you know, one that's written in Julia, or one that's in Haskell. And it doesn't have to be programmatic either. I think where programmatic animations make sense for math is if you're somehow leveraging loops or conditionals or layers of abstraction. And in the right context, I think it can be a wonderful way to let the visuals authentically reflect the math that you're describing, if the code is essentially just that math as it's illustrating things. But it doesn't have to be. A lot of times people use Manim or other programmatic animations for things that do not need to be programmatic, that you could have easily done in something like Keynote, or which add flashiness for flashiness's sake that doesn't actually aid with the explanation. I think one really good example of using traditional animation software is the channel of Borbock Tree. He really has these friendly, handwritten kind of whiteboard lectures, but uses animation to help those whiteboards come alive. And he uses Adobe Animate for that. And I think it's a really nice way to make this friendly, hand-drawn environment come to life, which is different from kind of the platonic, stark, this-is-precisely-what-the-math-describes look when you're, you know, illustrating a surface or something like that. Also, I see a lot of people use Manim to manipulate algebraic expressions and things like that. But if you look at other videos, things like Mathologer, you know, he's doing a lot of that in PowerPoint. And again, content is king. The first thing is to focus on what you are actually describing, and then just showing it however is easiest to show it in that case works totally fine. You don't need anything extremely precise, or that leverages loops and abstraction, for the formulas. I recognize a kind of hypocrisy here, but, you know, I have walked myself into a certain corner with the style that I want for the channel.
If you do want to go down that hole of programmatic animations, though, another tool which has popped up recently is something called smoothstep.io, which I think is a really nice way for people to get started with shaders, which are an absurdly powerful way to do absurdly beautiful things. And it's written by this guy, Matt Henderson, who has a Twitter account that everyone should follow, because he has some of the most beautiful math illustrations that I think I've ever seen. So experimenting with software like that is another rabbit hole that you could go down, if, say, you want to use this competition as an excuse to try something new, something that you've always wanted to get started with but never really had the excuse to do. Now, if you have some pieces of advice that you want to pass along to people, or if you want to just engage with the community in some way, to see what other people are thinking of making, or propose your own project ideas, get feedback, talk about software, anything like that, we did set up a Discord space associated with the Summer of Math Exposition. There's a link in the description. Just be mindful, if you do contribute to that community, that you want your comments to be encouraging to others who are getting started and productive to that goal, and try to avoid anything that is the opposite of that goal. And again, another source of what will hopefully include some inspirational or informative things will be the podcast. The first episode is out now. It's with the mathematician Alex Kontorovich, who some of you may recognize from the video he did with Quanta, or the video he did with Veritasium on Pi Day. The episode after that is going to be with Sal Khan, and there's just a really interesting lineup of people here. So I think you'll enjoy it. You can get it wherever you get your podcasts. There's a video version of it, which is going to live on a second channel that is just my name, Grant Sanderson.
And I figure, for all future videos that are a little bit like this one, that aren't really animated math but other stuff, that's probably the channel that I'll put them on. So keep an eye on that channel if that's something that you're interested in. And I will say this about the podcast. Even though the original intent was something very much tied to this competition and the idea of targeting people interested in getting started, a lot of the time I would find myself with an interesting guest and just have a whole bunch of other things that I want to ask them that have nothing to do with that. So maybe the better framing here is to say that the podcast is 20% about that goal, and the other 80% is just the usual interview-style podcast vibe, where you have interesting guests and I just want to ask things that I'm genuinely curious to know about them. And then I get to grad school, and I'm moving into my office in grad school, and I have all my old papers. And I just started, you know, for fun, leafing through them. I don't know if you ever look back at the stuff you wrote freshman year. And I look at them like, what the hell was I writing? Oh my god, this is garbage. This is complete... the epsilons and deltas are backwards. You can't have the epsilons and deltas be backwards. And you only took off three points? I would have taken off nine or something. Like, the grader was being so nice. So if Lean was around back then, boy, would it have straightened me out. It's actually very inspiring to me, because I feel one of the common pieces of advice that I'll give to someone, if they want to learn more math.
Visualizing quaternions (4d numbers) with stereographic projection | What you are looking at right now is something called quaternion multiplication. Or rather, you're looking at a certain representation of a specific motion happening on a four-dimensional sphere, being represented in our three-dimensional space, one which you'll understand by the end of this video. Quaternions are an absolutely fascinating and often underappreciated number system from math. Just as complex numbers are a two-dimensional extension of the real numbers, quaternions are a four-dimensional extension of complex numbers. But they're not just playful mathematical shenanigans. They have a surprisingly pragmatic utility for describing rotation in three dimensions, and even for quantum mechanics. The story of their discovery is also quite famous in math. The Irish mathematician William Rowan Hamilton spent much of his life seeking a three-dimensional number system analogous to the complex numbers. And as the story goes, his son would ask him every morning whether or not he had figured out how to divide triples, and he would always say no, not yet. But on October 16, 1843, while crossing the Broom Bridge in Dublin, he realized, with a supposed flash of insight, that what he needed was not to add a single dimension to the complex numbers, but to add two more imaginary dimensions: three imaginary dimensions describing space, and the real numbers sitting perpendicular to that in some kind of fourth dimension. He carved the crucial equation describing these three imaginary units into the bridge, which today bears a plaque in his honor showing that equation. Now, you have to understand, our modern notion of vectors, with their dot product and cross product and things like that, didn't really exist in Hamilton's time, at least not in a standardized form.
So after his discovery, he pushed hard for quaternions to be the primary language with which we teach students to describe three-dimensional space, even forming an official quaternion society to proselytize his discovery. Now, unfortunately, this was balanced with mathematicians on the other side of the fence, who believed that the confusing notion of quaternion multiplication was not necessary for describing three dimensions, resulting in some truly hilarious old-timey trash talk, legitimately calling them evil. It's even believed that the Mad Hatter scene from Alice in Wonderland, whose author you may know was an Oxford mathematician, was written in reference to quaternions: that the chaotic changes of table placement were mocking their multiplication, and that certain quotes were referencing their non-commutative nature. Fast forward about a century, and the computing industry gave quaternions a resurgence among programmers who work with graphics and robotics and anything involving orientation in 3D space. And this is because they give an elegant way to describe and to compute 3D rotations, which is computationally more efficient than other methods, and which also avoids a lot of the numerical errors that arise in those other methods. The 20th century also brought quaternions some more love from a completely different direction: quantum mechanics. You see, the special actions that quaternions describe in four dimensions are actually quite relevant to the way that two-state systems, like the spin of an electron or the polarization of a photon, are described mathematically. What I'll show you here is a way to visualize quaternions in their full four-dimensional glory. It would surprise me if this approach was fully original, but I can say that it's certainly not the standard way to teach quaternions, and that the specific four-dimensional right-hand rule image that I'd like to build up to is something that I haven't really seen elsewhere.
Building up an understanding for this visual will take us meaningful time, but once you have it, there is a very natural and satisfying intuition for how to think about quaternion multiplication. It won't be until the next video that I show you how exactly quaternions describe orientation in three dimensions, which is for some people the whole reason we care about them. But once we're able to go at it armed with the image of what they're doing to a 4D hypersphere, there's a pleasing understanding to be had for the otherwise opaque formulas characterizing this relationship. The structure here will be to start by imagining teaching complex numbers to someone who only understands one dimension, then describing 3D rotations to someone who only understands two dimensions, and ultimately to represent what quaternions are doing up in four dimensions, within the constraints of our 3D space. Our first character is Linus the Line Lander, whose mind can only grasp the one-dimensional geometry of lines and the algebra of real numbers. We're going to try to describe complex numbers to Linus, and it's really important for you to empathize with him as much as you can during this, because in a few minutes, you're going to be in his shoes. On the one hand, you could define complex numbers purely algebraically. You say each one is expressed as some real number, plus some other real number times i, where i is a newly invented constant whose defining property is that i times i equals negative 1. Then you say to Linus, to multiply two complex numbers, you just use the distributive property, what many people learn in school as FOIL, and you apply this rule that i times i equals negative 1 to simplify things down further. And that's fine, that totally works, and the standard textbook way to introduce quaternions is analogous to this: showing the algebraic rules and calling it done.
But I think something is missing if we don't at least try to show Linus the geometry of complex numbers, and what complex multiplication looks like, since the problems in math and physics where complex numbers are shockingly useful often leverage this spatial intuition. You and I, who understand two dimensions, might think of it like this. When you multiply two complex numbers, z times w, you can think of z as a sort of function acting on w, rotating and stretching it in some way. I like to think of this by broadening the view and asking what z does to the entire plane. And you can think of that bird's-eye-view action by imagining using one hand to fix the number zero in place, and using another hand to drag the point at 1 up to z, since anything times zero is zero, and anything times 1 is itself. And in two dimensions, there is one and only one stretching-rotating action on the plane that will do this. This is also how I'll have you thinking about quaternion multiplication later on, where the number on the left acts as a kind of function on the one on the right, and we'll understand this function by seeing how it acts by transforming space, although instead of rotating 2D space, it does a sort of double rotation in 4D space. By the way, if you want to review thinking about complex numbers as a kind of action, a good warm-up for this video might be the one I did on e to the pi i, explained with introductory group theory. Now, Linus the Line Lander is pretty comfortable with the idea of stretching; that's what multiplication by real numbers looks like. Maybe it's a little weird for him to think about stretching in multiple dimensions, but it's not fundamentally different. The thing to communicate to Linus is rotation. Specifically, focus on the unit circle of the complex plane, all the numbers a distance 1 from 0, since multiplication by these numbers corresponds to pure rotation.
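If you want to play with this rotate-and-stretch picture numerically, ordinary complex arithmetic already encodes it. Here is a small Python sketch of my own (the particular numbers z and w are just illustrative, not from the video):

```python
import cmath

# Multiplying by z rotates by arg(z) and scales by |z|.
z = 1 + 1j           # magnitude sqrt(2), angle 45 degrees
w = 2 + 0j

product = z * w
# Magnitudes multiply...
assert abs(abs(product) - abs(z) * abs(w)) < 1e-12
# ...and angles add.
assert abs(cmath.phase(product) - (cmath.phase(z) + cmath.phase(w))) < 1e-12

# z fixes 0 and drags 1 to z, which pins down the whole
# rotating-stretching action on the plane:
assert z * 0 == 0
assert z * 1 == z
```

The last two assertions are exactly the two-hands picture: one hand holds 0 in place, the other drags 1 to z, and that determines everything else.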
How would you explain to Linus the look and the feel of multiplying by these numbers? At first, that might seem impossible. I mean, rotation is just such an intrinsically two-dimensional idea. But on the other hand, rotation involves only one degree of freedom: a single number, the angle, specifies a given rotation uniquely. So, in principle, it should be possible to associate the set of all rotations with the one-dimensional continuum that is Linus's world. And there are many ways you could do this, but the one I'm going to show you is what's called a stereographic projection. It's a special way to map a circle onto a line, or a sphere onto a plane, or even a 4D hypersphere into 3D space. For every point on the unit circle, draw a line from negative 1 through that point, and wherever it intersects the vertical line through the circle's center, that's where the point of the circle gets projected. So, for example, the point at 1 gets projected into the center of the line, and the point i actually stays fixed in place, as does negative i. All of the points on that 90-degree arc between 1 and i will get projected somewhere in the interval between where 1 landed and where i landed. As you continue farther around the circle, on the arc between i and negative 1, the projected points end up farther and farther away at an increasing rate. Similarly, if you come around the other way towards negative 1, the projected points end up farther and farther out on the other end of the line. This line of projected points is what we show to Linus, labeling a few key points, like 1 and i and negative 1, all for reference. Technically, the point at negative 1 has no projection under this map, since the tangent line to the circle at that point never crosses the vertical line. But what we say is that negative 1 ends up at the point at infinity. This is a special point you imagine adding to the line, which you would approach if you walked infinitely far along the line in either direction.
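For anyone who wants to compute this projection rather than just picture it, here is a minimal Python sketch of my own. The function name and the formula y / (1 + x) are my derivation of the construction just described (intersecting the ray from negative 1 with the vertical line through the center), not something stated in the video:

```python
import math

def project_circle(theta):
    """Stereographically project the unit-circle point
    (cos(theta), sin(theta)) from the number -1 = (-1, 0)
    onto the vertical line through the circle's center,
    returning the height at which the ray lands."""
    x, y = math.cos(theta), math.sin(theta)
    if x == -1.0:
        return math.inf  # -1 itself goes to the point at infinity
    return y / (1 + x)

# The number 1 lands at the center of the line:
assert project_circle(0.0) == 0.0
# i and -i stay fixed at heights 1 and -1:
assert abs(project_circle(math.pi / 2) - 1.0) < 1e-12
assert abs(project_circle(-math.pi / 2) + 1.0) < 1e-12
# Points on the arc between 1 and i land between those two:
assert 0.0 < project_circle(math.pi / 4) < 1.0
```

Walking theta toward pi makes the output blow up, matching the description of points near negative 1 flying far out along the line.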
Now it's important to remember, and to remind Linus, that what he's seeing is only the complex numbers that are a distance 1 from the origin, the unit circle. Linus doesn't see most numbers, like 0 or 1 plus I or negative 2 minus I, but that's okay, because right now we just want to describe complex numbers z where multiplying by z has the effect of a pure rotation, so he only needs to understand the unit circle. For example, when we take the number I and multiply it by any other complex number w, the effect is to rotate by 90 degrees counterclockwise, and when we apply this action to the circle being projected down to the line for Linus, what does he see? Well, it's a bit of a strange morphing action on the line, one which I want you to become familiar with for something we'll see later on. It's easiest to understand by following a few key reference points. I times 1 is I, so that means the number 1 should move up to I. I times I is negative 1, so the point at I slides off to infinity. I times negative 1 is equal to negative I, so that point at infinity kind of comes back around from the bottom to the position 1 unit below the center, and I times negative I is 1, so that point at negative I slides up to 1. Even though this is kind of a weird motion, it lets us communicate some important ideas to Linus. For example, multiplying by I 4 times, which corresponds to rotating by 90 degrees 4 times in a row, gets us back to where we started. I to the 4th equals 1. Here, to get more of a feel for things, let me just show the circle rotated at various different angles. On both the left and the right half of the screen here, I'm putting a hand on the point that started at the number 1 to help us and to help Linus keep track of the overall motion. Next, let's introduce Felix the Flatlander, who only understands two-dimensional geometry. Imagine trying to explain rotations of a sphere to Felix. 
In the spirit of transitioning from complex numbers to quaternions, let's extend the complex numbers, with their horizontal axis of real numbers and their vertical axis of imaginary numbers, with a third axis, defined by some newly invented constant, j, sitting 1 unit away from 0, perpendicular to the complex plane. Instead of having this new axis in the z direction, for a better analogy with how we visualize quaternions, we'll want to orient things so that the i and the j axes sit in the x and the y directions, with the real number line aligned along the z direction. So every point in 3D space is described as some real number, plus some real number times i, plus some real number times j. As it happens, it's not possible to define a notion of multiplication for a 3D number system like this that would satisfy the usual algebraic properties that make multiplication a useful construct. Perhaps I'll outline why this is the case in a follow-on video, but staying focused on our current goal, think about describing 3D rotations in this coordinate system to Felix the Flatlander. The unit sphere consists of all those numbers which are a distance 1 from 0 at the origin, meaning the sum of the squares of their coordinates is 1. We can't show all of 3D space to Felix, but what we can do is project this 2D surface to him and give him a feel for what reorientations of this sphere look like under that projection. Analogous to what we did before, stereographic projection will associate almost every point on the unit sphere with a unique point on the horizontal plane defined by the i and the j axes. For each point on the sphere, draw a line from negative 1 at the south pole through that point, and see where it intersects the plane. So the point 1 at the north pole ends up at the center of the plane, and all of the points of the northern hemisphere get mapped somewhere inside the unit circle of the i-j plane.
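The same recipe can be written down in coordinates. This is a sketch under my own conventions (i and j as the first two coordinates, the real part as the third), not code from the video:

```python
import math

def project_sphere(i, j, r):
    """Project a point (i, j, r) on the unit sphere
    (i*i + j*j + r*r == 1, with the real part r along the
    vertical axis) from the south pole r = -1 onto the
    i-j plane through the center."""
    if r == -1:
        return None  # negative 1 becomes the point at infinity
    return (i / (1 + r), j / (1 + r))

# 1 at the north pole lands at the center of the plane:
assert project_sphere(0.0, 0.0, 1.0) == (0.0, 0.0)
# The unit circle through i, j, -i, -j stays fixed in place:
assert project_sphere(1.0, 0.0, 0.0) == (1.0, 0.0)
assert project_sphere(0.0, -1.0, 0.0) == (0.0, -1.0)
# Northern-hemisphere points (r > 0) land inside the unit circle:
x, y = project_sphere(0.6, 0.0, 0.8)
assert math.hypot(x, y) < 1.0
# Southern-hemisphere points (r < 0) land outside it:
x, y = project_sphere(0.6, 0.0, -0.8)
assert math.hypot(x, y) > 1.0
```

Note that this is literally the circle formula from before, applied coordinate-wise, which is why the hypersphere case later follows the same pattern.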
And that unit circle, which passes through i, j, negative i, and negative j, actually stays fixed in place. And that's an important point to make note of. Even though most points and lines and patches that Felix the Flatlander sees are going to be warped projections of the real sphere, this unit circle is the one thing that he has which is an honest part of our unit sphere, unaltered by projection. All of the points in the southern hemisphere get projected outside that unit circle, each getting farther and farther away as you approach negative 1 at the south pole. And again, negative 1 has no projection under this mapping. But what we say is that it ends up at some point at infinity. That point at infinity is something such that, no matter which direction you walk on the plane, as you go infinitely far out, you'll be approaching that point. It's analogous to how, if you walk in any direction away from the north pole, you're approaching the south pole. Now let me just pull up a view of what Felix sees in two dimensions. As I rotate the sphere in various ways, the lines of latitude and longitude drawn on that sphere get projected into various circles and lines in Felix's space. And the way I've done things up here, the checkerboard pattern on the surface of the sphere is accurately reflected in the projected view that you see with Felix. The pink dot represents where the point that started at the north pole ends up after the rotation, and that yellow circle represents where the equator ended up after the projection. The more you put yourself in Felix's shoes right now, the easier quaternions will be in a moment. And as with Linus, it helps to focus on a few key reference objects, rather than trying to see the whole sphere. This circle, passing through 1, i, negative 1, and negative i, gets mapped onto a line, which Felix sees as the horizontal axis. It's important to remind Felix that what he sees is not the same thing as the i-axis.
Remember, we're only projecting the numbers that have a distance 1 from the origin. So most points on the actual i-axis, like 0 and 2i and 3i, etc., are completely invisible to Felix. Similarly, the circle that passes through 1, j, negative 1, and negative j gets projected onto what he sees as a vertical line. And in general, any line that Felix sees comes from some circle on the sphere that passes through negative 1. In some sense, a line is just a circle that passes through the point at infinity. Now think about what Felix sees as we rotate the sphere. A 90-degree rotation about the j-axis brings 1 to i, i to negative 1, negative 1 to negative i, and negative i to 1. So what Felix the Flatlander sees is an extension of the rotation that Linus the Line Lander was seeing. Notice also that this action rotates the i-j unit circle to the position where the 1-j unit circle used to be. So what Felix sees is his yellow unit circle getting transformed into a vertical line, while that red vertical line gets transformed into the unit circle. Of course, from our perspective, we know this is all just rigid motion; no actual stretching or morphing is taking place. All of that is just an artifact of the projection. Similarly, a rotation about the i-axis involves moving 1 to j, j to negative 1, negative 1 to negative j, and negative j to 1. This rotation turns the i-j unit circle into the 1-i unit circle, which to Felix looks like the unit circle getting transformed into a horizontal line. A rotation about the real axis is actually quite easy for Felix to understand, since the whole projection simply gets rotated about the origin, where the only points staying fixed in place are 1 at the origin and negative 1 off at infinity. In the same way that the complex numbers included the real numbers with a single extra, quote-unquote, imaginary dimension, represented by the unit i, and that the not-actually-a-number-system thing we had in three dimensions included a second imaginary direction, j.
The quaternions include the real numbers together with three separate imaginary dimensions, represented by the units i, j, and k. Each of these three imaginary dimensions is perpendicular to the real number line, and they're all perpendicular to each other, somehow. So, in the same way that complex numbers are represented as a pair of real numbers, each quaternion can be written using four real numbers, and it lives in four-dimensional space. You often think of this as being broken up into a real or scalar part, and then a 3D imaginary part. And Hamilton used a special word for quaternions that had no real part and just i, j, k components, a word which was previously somewhat foreign in the lingo of math and physics: vector. On the one hand, you could just define quaternion multiplication by giving the rules for how i, j, and k multiply together, and saying that everything must distribute nicely. This is analogous to defining complex multiplication by saying that i times i is negative 1, and then distributing and simplifying products. And indeed, this is how you would tell a computer to perform quaternion multiplication, and the relative compactness of this operation, compared to, say, matrix multiplication, is what's made quaternions so useful for graphics programming and many other things. There's also a rather elegant form of this multiplication rule written in terms of the dot product and the cross product, and in some sense, quaternion multiplication subsumes both of these notions, at least as they appear in three dimensions. But just as a deeper understanding of complex multiplication comes from understanding its geometry, that multiplying by a complex number involves a combination of scaling and rotating, you and I are here for the four-dimensional geometry of quaternion multiplication.
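The "give the rules for i, j, and k and let everything distribute" approach is easy to spell out in code. This is a minimal sketch of my own, not the video's; the tuple convention (w, x, y, z) for w + xi + yj + zk is an assumption I'm making for illustration:

```python
def quat_mul(q1, q2):
    """Hamilton product of quaternions written as (w, x, y, z)
    tuples, meaning w + x*i + y*j + z*k. These formulas are just
    the distributive rule applied to i*i = j*j = k*k = ijk = -1."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 + y1*w2 + z1*x2 - x1*z2,
            w1*z2 + z1*w2 + x1*y2 - y1*x2)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
minus_one = (-1, 0, 0, 0)

# The equation Hamilton carved into the bridge:
assert quat_mul(i, i) == minus_one
assert quat_mul(j, j) == minus_one
assert quat_mul(k, k) == minus_one
assert quat_mul(quat_mul(i, j), k) == minus_one

# Non-commutativity, the property critics found so "evil":
assert quat_mul(i, j) == k
assert quat_mul(j, i) == (0, 0, 0, -1)   # j*i = -k
```

This compactness, 16 multiplications and 12 additions per product, is part of why quaternions caught on in graphics programming.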
And just as the magnitude of a complex number, its distance from zero, is the square root of the sum of the squares of its components, that same operation gives you the magnitude of a quaternion. And multiplying one quaternion, q1, by another, q2, has the effect of scaling q2 by the magnitude of q1, followed by a very special type of rotation in four dimensions. And those special 4D rotations, the heart of what we need to understand, correspond to the hypersphere of quaternions a distance 1 from the origin. Both in the sense that the quaternions whose multiplying action is a pure rotation live on that hypersphere, and in the sense that we can understand this weird 4D action just by following points on the hypersphere, rather than trying to look at all of the points in the inconceivable stretches of four-dimensional space. And analogous to what we did for Linus and Felix, we stereographically project this hypersphere into 3D space. This label in the upper right is going to show a given unit quaternion, and this little pink dot will show where that particular quaternion gets projected in our 3D space. Just as before, we're projecting from the number negative 1, which sits on the real number line that is somehow perpendicular to all of our 3D space, and beyond our perception. Just as before, the number 1 ends up projected straight into the center of our space. And in the same way that i and negative i were fixed in place for Linus, and that the ij unit circle was fixed in place for Felix, we get a whole sphere passing through i, j, and k on that unit hypersphere, which stays in place under the projection. So what we see as a unit sphere in our 3D space represents the only unaltered part of the hypersphere of quaternions getting projected down onto us. It's something analogous to the equator of a 3D sphere, and it represents all of the unit quaternions whose real part is 0, what Hamilton would have described as unit vectors.
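Both of those claims, the magnitude formula and the fact that multiplying by q1 scales magnitudes by |q1|, are easy to check numerically. A small Python sketch (my own, reusing the Hamilton product from the i, j, k rules):

```python
import math

# Hamilton product of (w, x, y, z) quaternions.
def qmul(a, b):
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def qnorm(q):
    # Distance from 0: square root of the sum of squares of the components.
    return math.sqrt(sum(c*c for c in q))

q1 = (1.0, 2.0, -1.0, 0.5)
q2 = (0.3, -0.4, 1.2, 2.0)
# The norm is multiplicative: |q1 * q2| equals |q1| * |q2|.
print(abs(qnorm(qmul(q1, q2)) - qnorm(q1) * qnorm(q2)) < 1e-9)  # True
```

So a unit quaternion, with magnitude 1, really does act by pure rotation with no scaling.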
The unit quaternions with positive real parts, between 0 and 1, end up somewhere inside this unit sphere, closer to the number 1 in our 3D space, which should feel analogous to how the Northern Hemisphere got mapped inside the unit circle for Felix. On the other hand, all the unit quaternions with negative real part end up somewhere outside that unit sphere. The number negative 1 is sitting off at the point at infinity, which you can find by walking far enough in any direction. Keep in mind, even though we see the projection of some of these quaternions as being closer or farther from the origin of our 3D space, everything you're looking at represents a unit quaternion. So everything you're looking at really has the same magnitude, the same distance from the number 0. And that number 0 itself is nowhere to be found in this picture. Like all other non-unit quaternions, it's invisible to us. In the same way that for Felix, the circle passing through 1, i, negative 1, and negative i got projected into a line through the origin, when we see this line through the origin, passing through i and negative i, we should understand that it really represents a circle. Likewise, up on the hypersphere, invisible to us, there is a unit sphere passing through 1, i, j, negative 1, negative i, and negative j. And that whole sphere gets projected into the plane that we see passing through 1, i, negative i, j, negative j, and negative 1 off at infinity, what you and I might call the xy-plane. In general, any plane that you see here really represents the projection of a sphere somewhere up on the hypersphere which passes through the number negative 1. Now the action of taking a unit quaternion and multiplying it by any other quaternion from the left can be thought of in terms of two separate 2D rotations, happening perpendicular to and in sync with each other in a way that could only ever be possible in 4 dimensions. As a first example, let's look at multiplication by i.
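In coordinates, this projection is simple to state. Here's a small Python sketch (the function name and formula arrangement are mine): writing a unit quaternion as (w, x, y, z) with w the real part, projecting from negative 1 sends it to (x, y, z) divided by (1 + w):

```python
def project(q):
    # Stereographic projection of a unit quaternion from -1
    # onto the 3D space {w = 0}: the line from (-1, 0, 0, 0)
    # through q hits that space at (x, y, z) / (1 + w).
    w, x, y, z = q
    s = 1.0 / (1.0 + w)  # blows up as q approaches -1, the point at infinity
    return (x * s, y * s, z * s)

print(project((1, 0, 0, 0)))   # (0.0, 0.0, 0.0): 1 sits at the center
print(project((0, 1, 0, 0)))   # (1.0, 0.0, 0.0): i stays in place
print(project((-0.6, 0.8, 0, 0)))  # lands outside the unit sphere, since w < 0
```

Quaternions with w = 0 divide by 1 and stay put, which is exactly the fixed sphere through i, j, and k; negative real parts shrink the denominator and push points outside.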
We already know what this does to the circle that passes through 1 and i, which we see as a line. 1 goes to i, i goes to negative 1 off at infinity, negative 1 comes back around to negative i, and negative i goes to 1. Remember, just like what Linus saw, all of this is the stereographic projection of a 90 degree rotation. Now look at the circle passing through j and k, which is in a sense perpendicular to the circle passing through 1 and i. Now it might feel weird to talk about two circles being perpendicular to each other, especially when they have the same center, the same radius, and they don't touch each other at all, but nothing could be more natural in 4 dimensions. You can think of the action of i on this perpendicular circle as obeying a certain right hand rule. If you'll excuse the intrusion of my ghostly green-screen hand into our otherwise pristine platonic mathematical stage, you let the thumb of your right hand point from the number 1 to i and you curl your fingers. The jk circle will rotate in the direction of that curl. How much? Well, by the same amount as the 1i circle rotates, which is 90 degrees in this case. This is what I meant by two rotations perpendicular to and in sync with each other. So j goes to k, k goes to negative j, negative j goes to negative k, and negative k goes to j. This gives us a little table for what the number i does to the other quaternions, but I want this not to be something that you memorize, but something that you could close your eyes and really see. Computationally, if you know what a quaternion does to the numbers 1, i, j, and k, you know what it does to any arbitrary quaternion, since multiplication distributes nicely. In the language of linear algebra, 1, i, j, and k form a basis of our 4-dimensional space, so knowing what our transformation does to them gives us the full information about what it does to all of space.
Geometrically, a 4-dimensional creature would be able to look at those two perpendicular rotations that I just described, and understand that they lock you into one and only one rigid motion for the hypersphere. We might lack the intuitions of such a hypothetical creature, but we can maybe try to get close. Here's what the action of repeatedly multiplying by i looks like on our stereographic projection of the ijk sphere. It gets rotated into what we see as a plane, then gets rotated further back to where it used to be, though the orientation is all reversed now. Then it gets rotated again into what we see as a plane, and after the fourth iteration, it ends up right back where it started. As another example, think of a quaternion like q equals negative square root of 2 over 2, plus square root of 2 over 2 times i, which, if we pull up a picture of a complex plane, is a 135-degree rotation away from 1 in the direction of i. Under our projection, we see this along the line from 1 to i, somewhere outside the unit sphere. If that sounds weird, just remember how Linus would have seen this same number. The action of multiplying this q by all other quaternions will look to us like dragging the point at 1 all the way to this projected version of q, while the jk circle gets rotated 135 degrees, according to our right hand rule. Multiplication by any other quaternion is completely similar. For example, let's see what it looks like for j to act on other quaternions by multiplication from the left. The circle through 1 and j, which we see projected as a line through the origin, gets rotated 90 degrees, dragging 1 up to j. So j times 1 is j, and j times j is negative 1. The circle perpendicular to that one, passing through i and k, gets rotated 90 degrees, according to this right hand rule, where you point your thumb from 1 to j. So j times i is negative k, and j times k is i.
In general, for any other unit quaternion you see somewhere in space, start by drawing the unit circle passing through 1, q, and negative 1, which we see in our projection as a line through the origin. Then draw the circle perpendicular to that one, on what we see as the unit sphere. You rotate the first circle so that 1 ends up where q was, and rotate the perpendicular circle by the same amount, according to the right hand rule. One thing worth noticing here is that order of multiplication matters. It's not, as mathematicians would say, commutative. For example, i times j is k, which you might think of in terms of i acting on the quaternion j, rotating it up to k. But if you think of j as acting on i, j times i, it rotates i to negative k. In fact, commutativity, the ability to swap the order of multiplication, is a way more special property than a lot of people realize. And most groups of actions on some space don't have it. It's like how in solving a Rubik's cube, order matters a lot. Or how rotating a cube about the z-axis, and then about the x-axis, gives a different final state from rotating it about the x-axis, then about the z-axis. And last, one final but rather important point. So far I've shown you how to think about quaternions as acting by left multiplication, where when you read an expression like i times j, you think of i as a kind of function morphing all of space, and j as just one of the points that it's acting on. But you can also think of them as a different sort of action, by multiplying from the right, where in this expression, j would be acting on i. In that case, the rule for multiplication is very similar. It's still the case that 1 goes to j, and j goes to negative 1, etc. But instead of applying the right hand rule to the circle perpendicular to the 1j circle, you would use your left hand. So either way, i times j is equal to k.
But you can either think about this with your right hand curling the number j to the number k, as your thumb points from 1 to i, or as your left hand curling i to k, as its thumb points from 1 to j. Understanding this left hand rule for multiplication from the other side will be extremely useful for understanding how unit quaternions describe rotation in three dimensions. And so far, it's probably not clear how exactly quaternions do describe 3D rotation. I mean, if you consider one of these actions on the unit sphere passing through i, j, and k, it doesn't leave that sphere in place, it morphs it out of position. So the way that this works is slightly more complicated than a single quaternion product. It involves a process called conjugation, and I'll make a full follow-on video all about it so that we have the time to go through some examples. In the meantime, for more information on the story of quaternions and their relation to orientation in 3D space, Quanta, a mathematical publication I'm sure a lot of you are familiar with, just put out a post in a kind of loose conjunction with this video. Link in the description. If you enjoyed this, consider sharing it with some friends, and if you felt like the narrative structure here was actually helpful for understanding, maybe reassure those friends who would be turned off by a long timestamp that good math is actually worth the time. And many thanks to the patrons among you. I actually spent way longer than I care to admit on this project, so your patience and support is especially appreciated this time around.
Binary, Hanoi, and Sierpinski, part 2 | Welcome back! So in the last part I showed a way to solve towers of Hanoi just by counting up in binary, a solution that I learned from computer scientist Keith Schwartz. If somehow you landed here without watching that, you probably want to go check it out. Here in this video, I want to describe a constrained variant of that puzzle and how it relates to finding a curve that fills your pinskies triangle. The common variant is that you can think of these disks or in a line, you're only allowed to move a disk from one spindle to an adjacent one. And so now the question is solve towers of Hanoi. So for example, like in the previous one we start by moving disk 0 to b and the next move is to move disk 1 over to c, but you can't do that because you can't get one straight over to c. Exactly. So it's more complicated. But the idea for this modified version is still similar. So if I want to move disk 3, let's try to get it from a to c. So as before, I can't move disk 3 unless this tower above it, 2, 1, and 0, are out of the way. So ultimately, I'd like to get this tower to be not flocking it, but there's two ways it can block it now. So previously just had to not be on top. Now it has to not only not be on top, it can't be adjacent, either. So really what I want to do is take this tower of 0, 1, and 2, and somehow, I don't know, get it to things c, then move disk b over, then get it off of c back to a, then move disk 3 from spindle b to spindle c, and then move that tower all the way back. The smaller case breaks down essentially the same way. You solve it for two disks, move disk number 2, solve for two again. Then, move disk number 2, then solve for two disks yet again. As you keep subdividing in this self-similar pattern, expanding each sub problem into its own set of sub problems, eventually you get to the smallest sub problem of them all, moving a tower of size 1, which just involves moving disk number 0 twice. 
As with the unconstrained case, this is going to give you the most efficient solution, since at every scale, you're only doing what you have to do. You have to move that sub-tower of 0, 1, and 2 over to peg C if you plan on moving disk number 3. And you have to move it back to A in order to move disk 3 again. And you have to move it all the way back to C a third time. There's just never any room for inefficiency once you break it down in terms of sub-problems like this. And just as the unconstrained puzzle mirrored the pattern of counting in binary, the breakdown of this constrained problem is mirrored by counting in base 3, also known as ternary. Here, let's take a moment and talk about what base 3 feels like, what the rhythm of counting there is. As you know, our usual counting system, base 10, has 10 separate digits, and binary, or base 2, has two separate digits. So ternary is a system of representing numbers where you only have three distinct symbols available, 0, 1, and 2, which are indeed called trits. But hey, if that sounds a little bit off to you, just ask a French speaker how they feel about our convention of calling binary digits bits. Anyway, back to what we were doing, let's think about the rhythm of counting in ternary. You start by just counting through the trits, 0, 1, 2. Then you have to roll over to a second trit in the threes place, writing 3 as 1-0. Then you count to 4, which is 1-1, and 5, which is 1-2, and you have to roll over again to 2-0, representing 6. Then 2-1, 2-2, at which point you have to roll over twice to the nines place, writing the number 9 as 1-0-0. Just like base 2 and base 10, there's a certain self-similarity to this pattern. At all scales, it looks like counting up to some amount, rolling over, counting to that same amount, rolling over again, then counting to that same amount a third time. But this is the pattern that we just saw with the constrained towers of Hanoi.
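A tiny converter makes that rhythm easy to play with. Here's a Python sketch (the helper name is mine):

```python
def to_ternary(n):
    # Write n using only the trits 0, 1, and 2.
    if n == 0:
        return "0"
    digits = ""
    while n > 0:
        digits = str(n % 3) + digits
        n //= 3
    return digits

print([to_ternary(n) for n in range(10)])
# ['0', '1', '2', '10', '11', '12', '20', '21', '22', '100']
```

You can see the count-roll-count-roll-count rhythm directly: the last trit cycles 0, 1, 2 while the higher trits tick over at every third step.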
Do a sub-task, make some kind of larger movement, do a sub-task, make that same larger movement, then do the sub-task a third time. So just like binary counting mirrors the solution to the unconstrained towers of Hanoi, counting in ternary is going to mirror the recursive pattern for solving the constrained towers of Hanoi. This gives a really nice and methodical way to solve the constrained problem just by counting. Every time you're only changing that last trit, move disk number 0. So for the first two moves, when you're counting 1, 2, you'll be moving disk 0. Whenever you roll over just once to the threes place, such as when you count from 2 up to 3, represented by 1-0, you move disk number 1. And continuing on like this, you'd flip the last trit, move disk 0, flip the last, move disk 0, roll over to 2-0, and then move disk 1. Then twice more you flip the last trit and move disk 0. Then whenever you roll over twice to the nines place, you move disk number 2. Again, it's pretty fun to just sit back and watch this play out. It takes a while though. For four disks, you're going to have to count all the way up to 2-2-2-2, which is ternary for 3 to the 4th minus 1, which is 80. That means that solving this involves 80 steps. Nevertheless, like I said before, this is the most efficient solution. But this also brings up something very cool you can ask: how many different configurations are there for towers of Hanoi? Actually, let's take a moment and think about this. How many total configurations of disks on pegs are possible, where the disks on a given peg have to be in descending order of size? The answer is 3 to the 4th: 81 possible states that your puzzle can be in. To see this, notice that there are 3 choices for where to put that biggest disk, then 3 choices for where to put the next biggest disk, and for each successive disk, you have 3 choices for where to place it.
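The which-disk rule above can be condensed into one line: when you count from s minus 1 up to s in ternary, the number of trits that roll over is the number of trailing zeros of s in base 3, and that count is exactly the disk to move. A Python sketch (my own condensation of the rule in the video, with my own function name):

```python
def disk_for_move(s):
    # Counting from s-1 to s in ternary rolls over one trit per
    # trailing zero of s in base 3; that count is the disk to move.
    d = 0
    while s % 3 == 0:
        s //= 3
        d += 1
    return d

n = 3  # three disks: 3**3 - 1 = 26 moves
seq = [disk_for_move(s) for s in range(1, 3**n)]
print(seq)  # starts [0, 0, 1, 0, 0, 1, 0, 0, 2, ...]
```

Sanity checks match the recursive breakdown: the largest disk moves exactly twice, and each smaller disk moves three times as often as the one above it.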
And remember, this process of solving by counting in ternary involves taking 3 to the 4th minus 1 steps, which means from start to end, you're going to hit 81 configurations. And if you think about it, it can't hit a given configuration twice. If it did, there would be some more efficient solution out there: work up to the first time you hit that configuration, skip over everything between that and the second time you hit that same configuration, then just continue. So in a sense, it's guaranteed: this will not only solve towers of Hanoi, it will go through every single possible configuration of the disks. Pretty cool, right? Well here's where things get awesome. For those of you familiar with graphs, there is a very beautiful structure to these configurations. What I'm going to do is represent each one of these configurations with a dot, a node in our graph. Then we say that two different configurations are connected if you can move from one to the other with some legal towers of Hanoi move. And this time, I mean an unconstrained move, so configurations are still considered connected even if it requires a move from peg A to peg C. When you go through and connect all of the configurations like this, finding all the edges that represent some kind of move, the graph structure for towers of Hanoi has this suspiciously familiar shape. Now let's zoom in on it and think about where this pattern is coming from. For example, this node here, representing the start state with all the disks on peg A, is going to be connected to this one, which is the result of moving disk 0 over to peg B. And both of those are connected to this one, which is the result of moving disk 0 to peg C. These three configurations make up a little triangle in our graph. That is, the three configurations are all mutually connected to each other. In fact, any configuration is going to be part of one of these triangles somewhere in the graph.
You just consider what happens as you freely move around disk number 0 among the three pegs. Now to understand how these little triangular islands are interconnected, take a look at these two nodes. They differ only by a movement of disk number 1 from peg A to peg C, so they're going to be connected by an edge. This edge is kind of a bridge between two different triangle islands. In fact, those two triangles, along with this other one, form kind of a meta-triangle when you consider all the movements of disk 1. Each little clump of three nodes here represents a different position for disk number 1, and the triforce pattern as a whole represents all configurations you can get by moving around disks number 1 and 0. And self-similarly, a movement of disk number 2 gives you a bridge from one of these triforce patterns to a new one. And the three possible places where disk 2 can live give us these three separate triforce patterns, which are each connected by a little bridge. The fact that there are only two nodes in one of these triforce patterns that have an edge going out of the pattern itself is just a reflection of the fact that it's hard to move disk number 2. That only happens in those very special configurations where disk 1 and disk 0 are both out of the way. As you consider more and more disks, this pattern continues. The graph structure for towers of Hanoi configurations is one big Sierpinski triangle. Isn't that crazy? So now let's think about what it means for this method of solving the constrained problem by counting in ternary to walk through all possible configurations. What it's going to give us is a way to wander through this graph and hit each node once and only once. This is crazy to me. If you just look at this Sierpinski graph structure, it's not clear at first that such a path is even possible. And yet we found one just by counting. If that's not beautiful, I don't know what is.
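To make that graph concrete, here's a small Python sketch (my own construction, not from the video) that builds every legal configuration and connects pairs that differ by one unconstrained move. The node and edge counts come out as expected for this Sierpinski-style structure:

```python
from itertools import product

def hanoi_graph(n):
    # A state assigns each disk a peg (disk 0 is the smallest).
    # Disk d may move to another peg only if no smaller disk sits
    # on its current peg or on the target peg.
    nodes = list(product(range(3), repeat=n))
    edges = set()
    for state in nodes:
        for d in range(n):
            if any(state[e] == state[d] for e in range(d)):
                continue  # a smaller disk is on top of disk d
            for peg in range(3):
                if peg == state[d]:
                    continue
                if any(state[e] == peg for e in range(d)):
                    continue  # a smaller disk blocks the target peg
                new = state[:d] + (peg,) + state[d + 1:]
                edges.add(frozenset((state, new)))
    return nodes, edges

nodes, edges = hanoi_graph(4)
print(len(nodes), len(edges))  # 81 nodes, 120 edges
```

The 120 edges check out by hand, too: the three "everything on one peg" corner states each allow 2 moves, the other 78 states allow 3, and 3 times 2 plus 78 times 3 is 240 directed moves, or 120 edges.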
For the final animation here, I want to show you guys what those paths look like for Sierpinski graphs of higher and higher order. But first, some thanks are in order for the fine funders of this video. As always, my sincerest gratitude to the Patreon supporters. Like I mentioned in the last video, I've started working on an Essence of Calculus series, and so far I'm really appreciating the feedback the patrons are giving. This pair of videos was also supported by Desmos. Like I said at the end of part 1, Desmos creates these really meaningful interactive math activities for classrooms and pretty great tools for teachers. And importantly, they're hiring. So anyone out there interested should definitely check out the careers page that I've linked in the description. And by all means, reach out to them if you're interested. There are a lot of EdTech companies out there that make some flashy tools in the hope of bringing modern technology to math curricula, but often there's no actual pedagogical substance underlying the tech. I really do think, though, that Desmos is going about things the right way. I was surprised by the amount of collective teaching experience that they have among their team. And even the non-teachers that I met there are really thoughtful and really well versed when it comes to educational research. The result is that their activities actually address what's lacking and needed in a student's education. So if you're interested in the EdTech space, and if you're interested in going to a place that, you know, actually matters, I highly encourage you to take a look. Alright, so for this last animation here, I'm going to show the path that walks through this Sierpinski graph according to the way in which ternary counting solves the constrained towers of Hanoi puzzle. If I fade out that graph just to emphasize the path, here's what it looks like for higher and higher orders, corresponding to towers of Hanoi with more and more disks.
What you're seeing is very similar to the idea of space-filling curves that I talked about in the Hilbert curve video. It's just that the limit here fills a fractal instead of a square. Isn't that crazy? You can have a curve that fills Sierpinski's triangle, which I totally don't think of as a curve-type thing.
The hardest problem on the hardest test | Do you guys know about the Putnam? It's a math competition for undergraduate students. It's a six hour long test that just has 12 questions broken up into two different three hour sessions. And each one of those questions is scored 1-10, so the highest possible score would be 120. And yet, despite the fact that the only students taking this thing each year are those who clearly are already pretty interested in math, the median score tends to be around 1 or 2. So it's a hard test. And on each one of those sections of six questions, the problems tend to get harder as you go from 1 to 6, although of course difficulty is in the eye of the beholder. But the thing about those 5s and 6s is that even though they're positioned as the hardest problems on a famously hard test, quite often these are the ones with the most elegant solutions available. Some subtle shift in perspective that transforms it from very challenging to doable. Here I'm going to share with you one problem that came up as the sixth question on one of these tests a while back. And those of you who follow the channel, you know that rather than just jumping straight to the solution, which in this case would be surprisingly short, when possible, I like to take the time to walk you through how you might have stumbled across the solution yourself, where the insight comes from. That is, make a video more about the problem solving process than about the problem used to exemplify it. So anyway, here's the question. If you choose 4 random points on a sphere and consider the tetrahedron with these points as its vertices, what is the probability that the center of the sphere is inside that tetrahedron? Go ahead, take a moment and kind of digest this question. You might start thinking about which of these tetrahedron contain the sphere's center, which ones don't, how you might systematically distinguish the two. And how do you even approach a problem like this, right? 
Where do you even start? Well, it's usually a good idea to think about simpler cases, so let's knock things down to two dimensions, where you'll choose 3 random points on a circle, and it's always helpful to name things, so let's call these guys P1, P2, and P3. The question is, what's the probability that the triangle formed by these points contains the center of the circle? I think you'll agree it's way easier to visualize now, but it's still a hard question. So again, you ask, is there a way to simplify what's going on? Get ourselves to some kind of foothold that we can build up from. Well, maybe you imagine fixing P1 and P2 in place, and only letting that third point vary. And when you do this, and you play around with it in your mind, you might notice that there's a special region, a certain arc, where when P3 is in that arc, the triangle contains the center, otherwise not. Specifically, if you draw lines from P1 and P2 through the center, these lines divide up the circle into 4 different arcs, and if P3 happens to be in the one on the opposite side as P1 and P2, the triangle contains the center; if it's in any of the other arcs, though, no luck. We're assuming here that all of the points on the circle are equally likely. So what is the probability that P3 lands in that arc? It's the length of that arc divided by the full circumference of the circle, the proportion of the circle that this arc makes up. So what is that proportion? Obviously that depends on where you put the first two points. I mean, if they're 90 degrees apart from each other, then the relevant arc is one quarter of the circle. But if those two points were farther apart, that proportion would be something closer to a half. And if they were really close together, that proportion gets closer to 0. So think about this for a moment. P1 and P2 are chosen randomly, with every point on the circle being equally likely. So what is the average size of this relevant arc?
Maybe you imagine fixing P1 in place and just considering all the places that P2 might be. All of the possible angles between these two lines, every angle from 0 degrees up to 180 degrees, is equally likely. So every proportion between 0 and 0.5 is equally likely, and that means that the average proportion is 0.25. So if the average size of this arc is a quarter of the full circle, the average probability that the third point lands in it is one fourth. And that means that the overall probability that our triangle contains the center is one fourth. But can we extend this into the three-dimensional case? If you imagine three out of those four points just being fixed in place, which points of the sphere can the fourth one be on so that the tetrahedron they form contains the center of the sphere? Just like before, let's go ahead and draw some lines from each of those fixed three points through the center of the sphere. And here it's also helpful if we draw some planes that are determined by any pair of these lines. What these planes do, you might notice, is divide the sphere into eight different sections, each of which is a sort of spherical triangle. And our tetrahedron is only going to contain the center of the sphere if the fourth point is in the spherical triangle on the opposite side as the first three. Now, unlike the 2D case, it's pretty difficult to think about the average size of this section as we let the initial three points vary. Those of you with some multivariable calculus under your belt might think, let's just try a surface integral. And by all means, pull out some paper and give it a try. But it's not easy. And of course, it should be difficult. I mean, this is the sixth problem on a Putnam. What do you expect? And what do you even do with that? Well, one thing you can do is back up to the two-dimensional case and contemplate if there is a different way to think about the same answer that we got. That answer, one fourth, looks suspiciously clean.
And it raises the question of what that four represents. One of the main reasons I wanted to make a video about this particular problem is that what's about to happen carries with it a broader lesson for mathematical problem solving. Think about those two lines that we drew for P1 and P2 through the origin. They made the problem a lot easier to think about. And in general, whenever you've added something to the problem setup that makes it conceptually easier, see if you can reframe the entire question in terms of those things that you just added. In this case, rather than thinking about choosing three points randomly, start by saying choose two random lines that pass through the circle's center. For each line, there's two possible points that it could correspond to, so just flip a coin for the first line to choose which of its endpoints is going to be P1, and likewise for the other line, which endpoint is going to be P2. Choosing a random line and flipping a coin like this is the same thing as choosing a random point on the circle. It just feels a little bit convoluted at first. But the reason for thinking about the random process this way is that things are actually about to become easier. We'll still think about that third point, P3, as just being a random point on the circle, but imagine that it was chosen before you do the two coin flips. Because you see, once the two lines and that third point are set in stone, there's only four possibilities for where P1 and P2 might end up, based on those coin flips, each one being equally likely. But one and only one of those four outcomes leaves P1 and P2 on the opposite side of the circle as P3, with the triangle that they form containing the center. So no matter where those two lines end up and where that P3 ends up, it's always a one-fourth chance that the coin flips leave us with the triangle containing the center. Now that's very subtle.
Just by reframing how we think about the random process for choosing points, the answer one-quarter popped out in a very different way from how it did before. And importantly, this style of argument generalizes seamlessly up into three dimensions. Again, instead of starting off by picking four random points, imagine choosing three random lines through the center of the sphere, and then some random point for P4. That first line passes through the sphere at two points, so flip a coin to decide which of those two points is going to be P1; likewise, for each of the other lines, flip a coin to decide where P2 and P3 end up. Now there's eight equally likely outcomes of those coin flips, but one and only one of them is going to place P1, P2, and P3 on the opposite side of the center as P4. So one and only one of these eight equally likely outcomes gives us a tetrahedron that contains the center. Again, it's kind of subtle how that pops out to us, but isn't that elegant? This is a valid solution to the problem, but admittedly the way that I've stated it so far rests on some visual intuition. If you're curious about how you might write it up in a way that doesn't rely on visual intuition, I've left a link in the description to one such write-up in the language of linear algebra. And this is pretty common in math, where having the key insight and understanding is one thing, but having the relevant background to articulate that understanding more formally is almost a separate muscle entirely, one that undergraduate math students often spend much of their time building up. But the main takeaway here is not the solution itself, but how you might find that key insight yourself if the problem were put in front of you and you were just left to solve it. Namely, just keep asking simpler versions of the question until you can get some kind of foothold.
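The one-in-eight that this coin-flip argument predicts can likewise be sanity-checked with a direct geometric simulation. Here's a Python sketch (mine, not from the video), using a same-side-of-each-face test for whether the tetrahedron contains the sphere's center:

```python
import random

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def tetra_contains_origin(pts):
    # The origin is inside iff, for each face, it lies on the same side
    # of that face's plane as the remaining fourth vertex.
    for i in range(4):
        a, b, c = [pts[j] for j in range(4) if j != i]
        n = cross(sub(b, a), sub(c, a))
        if dot(n, sub(pts[i], a)) * dot(n, sub((0.0, 0.0, 0.0), a)) <= 0:
            return False
    return True

random.seed(1)
def rand_unit():
    # Normalizing a Gaussian vector gives a uniform point on the sphere.
    while True:
        v = (random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 1))
        m = dot(v, v) ** 0.5
        if m > 1e-9:
            return (v[0] / m, v[1] / m, v[2] / m)

trials = 20000
hits = sum(tetra_contains_origin([rand_unit() for _ in range(4)])
           for _ in range(trials))
print(hits / trials)  # hovers around 0.125
```

The seed and sample size are arbitrary choices of mine; any large enough run lands near 1/8, matching the coin-flip count.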
And then when you do, if there's any kind of added construct that proves to be useful, see if you can reframe the whole question around that new construct. To close things off here, I've got another probability puzzle, one that comes from this video's sponsor, Brilliant.org. Suppose that you have eight students sitting in a circle taking the Putnam. It's a hard test, so each student tries to cheat off of a neighbor, choosing randomly which neighbor to cheat from. Now, circle all of the students that don't have somebody cheating off of their test. What is the expected number of such circled students? It's an interesting question, right? Brilliant.org is a site where you can practice your problem-solving abilities with questions like this and many, many more, and that really is the best way to learn. You're going to find countless interesting questions curated in a pretty thoughtful way so that you really do come away better at problem solving. If you want more probability, they have a really good course on probability. But they've got all sorts of other math and science as well, so you're almost certainly going to find something that interests you. Me, I've been a fan for a while, and if you go to Brilliant.org slash 3B1B, it lets them know that you came from here. And the first 256 of you to visit that link can get 20% off their premium membership, which is the one I use, if you want to upgrade. Also, if you're just itching to see a solution to this puzzle, which, by the way, uses a certain tactic in probability that's useful in a lot of other circumstances, I also left a link in the description that just jumps you straight to the solution. |
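If you'd rather poke at that last puzzle empirically before peeking at the solution, here's a simulation sketch of my own (none of this code comes from Brilliant, and the names are invented): each of the n students cheats off one uniformly random neighbor, and we count how many students nobody cheated from.

```python
import random

def circled_students(n, rng):
    # Each of n students in a circle cheats off one neighbor,
    # chosen uniformly at random (left or right).
    cheated_from = set()
    for i in range(n):
        cheated_from.add((i + rng.choice((-1, 1))) % n)
    # Circled students are those that nobody cheated off of.
    return n - len(cheated_from)

def average_circled(n=8, trials=100_000, seed=1):
    rng = random.Random(seed)
    return sum(circled_students(n, rng) for _ in range(trials)) / trials
```

Calling `average_circled()` gives an estimate of the expected count without spelling out the closed-form argument.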
Bayes theorem, the geometry of changing beliefs | The goal is for you to come away from this video understanding one of the most important formulas in all of probability, Bayes' theorem. This formula is central to scientific discovery. It's a core tool in machine learning and AI, and it's even been used for treasure hunting, when in the 1980s a small team led by Tommy Thompson, and I'm not making up that name, used Bayesian search tactics to help uncover a ship that had sunk a century and a half earlier, and the ship was carrying what, in today's terms, amounts to $700 million worth of gold. So it's a formula worth understanding, but of course, there are multiple different levels of possible understanding. At the simplest, there's just knowing what each one of the parts means, so that you can plug in numbers. Then there's understanding why it's true, and later I'm going to show you a certain diagram that's helpful for rediscovering this formula on the fly, as needed. But maybe the most important level is being able to recognize when you need to use it. And with the goal of gaining a deeper understanding, you and I are going to tackle these in reverse order. So before dissecting the formula or explaining the visual that makes it obvious, I'd like to tell you about a man named Steve. Listen carefully now. Steve is very shy and withdrawn, invariably helpful, but with very little interest in people or the world of reality. A meek and tidy soul, he has a need for order and structure and a passion for detail. Which of the following do you find more likely? Steve is a librarian, or Steve is a farmer. Some of you may recognize this as an example from a study conducted by the two psychologists Daniel Kahneman and Amos Tversky. Their work was a big deal. It won a Nobel Prize, and it's been popularized many times over in books like Kahneman's Thinking, Fast and Slow, or Michael Lewis's The Undoing Project.
What they researched was human judgments, with a frequent focus on when these judgments irrationally contradict what the laws of probability suggest they should be. For example, the question with Steve, our maybe-librarian-maybe-farmer, illustrates one specific type of irrationality. Or maybe I should say alleged irrationality. There are people who debate the conclusion here, but more on all of that later on. According to Kahneman and Tversky, after people are given this description of Steve as a meek and tidy soul, most say that he's more likely to be a librarian. After all, these traits line up better with the stereotypical view of a librarian than a farmer. According to Kahneman and Tversky, this is irrational. The point is not whether people hold correct or biased views about the personalities of librarians and farmers. It's that almost nobody thinks to incorporate information about the ratio of farmers to librarians in their judgments. In their paper, Kahneman and Tversky said that in the US, that ratio is about 20 to 1. The numbers that I could find today put that actually much higher, but let's stick with the 20 to 1 number, since it's a little easier to illustrate and it proves the point just as well. To be clear, anyone who's asked this question is not expected to have perfect information about the actual statistics of farmers and librarians and their personality traits. But the question is whether people even think to consider that ratio enough to at least make a rough estimate. Rationality is not about knowing facts, it's about recognizing which facts are relevant. Now if you do think to make that estimate, there's a pretty simple way to reason about the question, which, spoiler alert, involves all of the essential reasoning behind Bayes' theorem. You might start by picturing a representative sample of farmers and librarians, say 200 farmers and 10 librarians.
Then when you hear of this meek and tidy soul description, let's say that your gut instinct is that 40% of librarians would fit that description and that 10% of farmers would. If those are your estimates, it would mean that from your sample you would expect about 4 librarians to fit the description and about 20 farmers to fit that description. So the probability that a random person among those who fit this description is a librarian is 4 out of 24 or 16.7%. So even if you think that a librarian is 4 times as likely as a farmer to fit this description, that's not enough to overcome the fact that there are way more farmers. The upshot, and this is the key mantra underlying Bayes theorem, is that new evidence does not completely determine your beliefs in a vacuum. It should update prior beliefs. If this line of reasoning makes sense to you, the way that seeing evidence restricts the space of possibilities and the ratio you need to consider after that, then congratulations! You understand the heart of Bayes theorem. Maybe the numbers that you would estimate would be a little bit different, but what matters is how you fit the numbers together to update your beliefs based on evidence. Now understanding one example is one thing, but see if you can take a minute to generalize everything that we just did and write it all down as a formula. The general situation, where Bayes theorem is relevant, is when you have some hypothesis, like Steve is a librarian, and you see some new evidence, say this verbal description of Steve as a meek and tidy soul. And you want to know the probability that your hypothesis holds given that the evidence is true. In the standard notation, this vertical bar means given that, as in we are restricting our view only to the possibilities where the evidence holds. Now remember the first relevant number we used. It was the probability that the hypothesis holds before considering any of that new evidence. 
In our example, that was one out of 21, and it came from considering the ratio of librarians to farmers in the general population. This number is known as the prior. After that, we need to consider the proportion of librarians that fit this description, the probability that we would see the evidence given that the hypothesis is true. Again, when you see this vertical bar, it means we are talking about some proportion of a limited part of the total space of possibilities. In this case, that limited part is the left side, where the hypothesis holds. In the context of Bayes' theorem, this value also has a special name, it's called the likelihood. Similarly, you need to know how much of the other side of the space includes the evidence. The probability of seeing the evidence given that the hypothesis isn't true. This funny little elbow symbol is commonly used in probability to mean not. So with the notation in place, remember what our final answer was. The probability that our librarian hypothesis is true given the evidence is the total number of librarians fitting the evidence, 4, divided by the total number of people fitting the evidence, 24. But where did that four come from? Well, it's the total number of people times the prior probability of being a librarian, giving us the 10 total librarians, times the probability that one of those fits the evidence. That same number shows up again in the denominator, but we need to add in the rest. The total number of people times the proportion who are not librarians, times the proportion of those who fit the evidence, which in our example gives 20. Now notice the total number of people here, 210, that gets cancelled out. And of course it should, that was just an arbitrary choice made for the sake of illustration. This leaves us finally with a more abstract representation purely in terms of probabilities. And this, my friends, is Bayes' theorem.
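In code, the whole computation is tiny. Here's a sketch of my own (the function name is invented, not standard) that plugs in the numbers from the Steve example:

```python
def bayes_posterior(prior, likelihood, likelihood_given_not):
    # Bayes' theorem:
    # P(H|E) = P(H)P(E|H) / (P(H)P(E|H) + P(not H)P(E|not H))
    numerator = prior * likelihood
    denominator = numerator + (1 - prior) * likelihood_given_not
    return numerator / denominator

# Prior: 10 librarians out of 210 people, so 1/21.
# Likelihoods: 40% of librarians and 10% of farmers fit the description.
p = bayes_posterior(prior=1 / 21, likelihood=0.4, likelihood_given_not=0.1)
# p works out to 4/24, about 16.7%, matching the count-based reasoning
```

Note that the 210 never appears: as in the derivation, the total population size cancels, and only the three probabilities matter.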
More often, you see this denominator written simply as P of E, the total probability of seeing the evidence, which in our example would be the 24 out of 210. But in practice, to calculate it, you almost always have to break it down into the case where the hypothesis is true, and the one where it isn't. Capping things off with one final bit of jargon, this answer is called the posterior. It's your belief about the hypothesis after seeing the evidence. Writing it out abstractly might seem more complicated than just thinking through the example directly with a representative sample. And yeah, it is. Keep in mind though, the value of a formula like this is that it lets you quantify and systematize the idea of changing beliefs. Scientists use this formula when they're analyzing the extent to which new data validates or invalidates their models. Programmers will sometimes use it in building artificial intelligence, where at times you want to explicitly and numerically model a machine's belief. And honestly, just for the way that you view yourself and your own opinions and what it takes for your mind to change, Bayes' theorem has a way of reframing how you even think about thought itself. Putting a formula to it can also become more important as the examples get more and more intricate. However you end up writing it, I actually encourage you not to try memorizing the formula, but to instead draw out this diagram as needed. It's sort of a distilled version of thinking with a representative sample, where we think with areas instead of counts, which is more flexible and easier to sketch on the fly. Rather than bringing to mind some specific number of examples, like 210, think of the space of all possibilities as a one by one square. Then, any event occupies some subset of this space, and the probability of that event can be thought about as the area of that subset.
So for example, I like to think of the hypothesis as living in the left part of the square, with a width of P of H. Now, I recognize I'm being a bit repetitive, but when you see evidence, the space of possibilities gets restricted, right? And the crucial part is that that restriction might not be even between the left and the right. So the new probability for the hypothesis is the proportion that it occupies in this restricted wonky shape. Now if you happen to think that a farmer is just as likely to fit the evidence as a librarian, then the proportion doesn't change, which should make sense, right? Irrelevant evidence doesn't change your beliefs. But when these likelihoods are very different from each other, that's when your belief changes a lot. Bayes' theorem spells out what that proportion is, and if you want, you can read it geometrically. Something like P of H times P of E given H, the probability of both the hypothesis and the evidence occurring together, is the width times the height of this little left rectangle, the area of that region. Alright, this is probably a good time to take a step back, and consider a few of the broader takeaways about how to make probability more intuitive beyond just Bayes' theorem. First off, notice how the trick of thinking about a representative sample, with some specific number of people, like our 210 librarians and farmers, was really helpful. There's actually another Kahneman and Tversky result, which is all about this, and it's interesting enough to interject here. They did this experiment that was similar to the one with Steve, but where people were given the following description of a fictitious woman named Linda. Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in the anti-nuclear demonstrations. After seeing this, people were asked, what's more likely?
One, that Linda is a bank teller, or two, that Linda is a bank teller and is active in the feminist movement. 85% of participants said that the latter is more likely than the former. Even though the set of bank tellers who are active in the feminist movement is a subset of the set of bank tellers, it has to be smaller. So that's interesting enough, but what's fascinating is that there's a simple way that you can rephrase the question that dropped this error from 85% to zero. If, instead, participants are told that there are 100 people who fit this description, and then they're asked to estimate how many of those 100 are bank tellers, and how many of them are bank tellers who are active in the feminist movement, nobody makes the error. Everybody correctly assigns a higher number to the first option than to the second. It's weird, somehow phrases like 40 out of 100 kick our intuitions into gear much more effectively than 40%, much less 0.4, and much less abstractly referencing the idea of something being more or less likely. That said, representative samples don't easily capture the continuous nature of probability. So turning to area is a nice alternative, not just because of the continuity, but also because it's way easier to sketch out when you're sitting there, pencil and paper, puzzling over some problem. You see, people often think about probability as being the study of uncertainty, and that is of course how it's applied in science, but the actual math of probability, where all the formulas come from, is just the math of proportions. And in that context, turning to geometry is exceedingly helpful. I mean, take a look at Bayes' theorem as a statement about proportions, whether that's proportions of people, of areas, whatever. Once you digest what it's saying, it's actually kind of obvious. Both sides tell you to look at the cases where the evidence is true, and then to consider the proportion of those cases where the hypothesis is also true.
That's it, that's all it's saying. The right hand side just spells out how to compute it. What's noteworthy is that such a straightforward fact about proportions can become hugely significant for science, for artificial intelligence, and really any situation where you want to quantify belief. I hope to give you a better glimpse of that as we get into more examples. But before more examples, we have a little bit of unfinished business with Steve. As I mentioned, some psychologists debate Kahneman and Tversky's conclusion, that the rational thing to do is to bring to mind the ratio of farmers to librarians. They complain that the context is ambiguous. I mean, who is Steve, exactly? Should you expect that he's a randomly sampled American? Or would you be better off assuming that he's a friend of the two psychologists interrogating you? Or maybe that he's someone that you're personally likely to know? This assumption determines the prior. I, for one, run into way more librarians in a given month than I do farmers. And, needless to say, the probability of a librarian or a farmer fitting this description is highly open to interpretation. For our purposes, understanding the math, what I want to emphasize is that any question worth debating here can be pictured in the context of the diagram. Questions about the context shift around the prior, and questions about the personalities and stereotypes shift around the relevant likelihoods. All that said, whether or not you buy this particular experiment, the ultimate point that evidence should not determine beliefs, but update them, is worth tattooing in your brain. I am in no position to say whether this does or does not run against natural human instinct. We'll leave that to the psychologists. What's more interesting to me is how we can reprogram our intuition to authentically reflect the implications of math, and bringing to mind the right image can often do just that.
Triangle of Power | Usually, I don't think notation in math matters that much. Don't get me wrong, I enjoy a bad notation rant as much as the next guy, and there are clearly a few simple changes to our conventions that could speed up learning for math students everywhere. But at the end of the day, notation, good or bad, it's just not the point of math. Even the most carefully designed symbols and syntax will fail to capture the underlying visual that constitutes understanding. So I figured it's better to just spend time focusing on that underlying essence and let the symbols just be what they are in peace. But that said, when unintuitive notation actively stalls the gears of learning, my disposition on the matter hardens up a little bit. In particular, I'm thinking of one threesome of syntax, which, when you stop and think about it, is an egregious source of friction in math education everywhere. Take the fact that two multiplied by itself three times equals eight, for example. We have three separate ways to express that relationship: two cubed equals eight, with a superscript; the cube root of eight is two, with a squiggly radical symbol; and the log base two of eight equals three, which we write using the word log itself. What the hell do these three ways of writing the same fact have to do with each other? Making up syntax for a concept is fine, but don't do it in three completely different ways for the same concept and force students to learn every rule about that concept three separate times. It's like it's a different language. This way of writing things isn't just counterintuitive. It's counter-mathematical. Since rather than making seemingly different facts look the same, which is what math should do, it takes three facts which should obviously be the same and makes them look artificially different. Just think about how confusing logarithms were the first time that you learned about them.
This is, of course, a known issue, and the internet has no shortage of people raising the same concern with suggestions for a better notation. But recently, I stumbled across a Math Stack Exchange post with a suggestion so lovely, so symmetrical, so utterly reasonable that I just have to share it. For a relationship like two cubed equals eight, take a triangle and write two in the lower left, three on the top, and eight on the lower right. To express the operation two cubed, remove that bottom right corner. The symbol as a whole represents the value that should go in the missing corner. To express log base two of eight, which is asking the question two to the what equals eight, remove the top number. The symbol as a whole represents the value that should go in the missing corner. To express the cube root of eight, which is saying what number to the third power equals eight, remove the bottom left corner. The symbol as a whole represents the value that should go in the missing corner. In other words, all three operations are completely symmetrically represented. This triangle deserves a name, and a friend of mine at Khan Academy decided that we should call it the triangle of power. The definition alone is mildly pleasing, but where it gets fun is when you see how much smoother all of the different operations become. In our current notation, there are six different ways to express the various inverse operations. Most of these are memorized as separate entities, some are rarely even talked about, and there's no discernible pattern, even though all of them describe the same basic idea. So students have to spend six times the effort to memorize each one, are six times more likely to make a mistake, and have six separate opportunities to decide math is dumb and boring and conducive to failure, and to think, why don't I just go study art instead? With the triangle of power, all these operations follow the same pattern.
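To make that symmetry concrete, here's a little Python sketch of my own (the function and argument names are invented, not standard notation): one "fill in the missing corner" function covers all three operations.

```python
import math

def triangle_of_power(bottom_left=None, top=None, bottom_right=None):
    """Given two corners of the relation bottom_left ** top == bottom_right,
    return the value of the missing third corner."""
    if bottom_right is None:
        return bottom_left ** top                   # exponent: 2, 3 -> 8
    if top is None:
        return math.log(bottom_right, bottom_left)  # logarithm: 2, 8 -> 3
    if bottom_left is None:
        return bottom_right ** (1.0 / top)          # root: 3, 8 -> 2
```

All six inverse operations collapse into a single question: which corner did you leave blank?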
Our brains are really good at picking up on patterns like this, and you can much more easily imagine a smooth mental image associated with the property. There's even kind of an aesthetic pleasure to this, and who knows, maybe more of the artistically inclined students would look favorably upon math long enough to see just how valuable their intuitions really are in the sciences. Let's take another property, like the idea that A to the x times A to the y equals A to the x plus y. The corresponding fact for logarithms is that log of x times y equals log of x plus log of y. When you write this with the triangle of power, it's a little easier to see that both of these expressions are really saying the same thing. Remember, the symbol as a whole represents the number at the missing corner, so the top expression is saying that when you multiply two numbers that belong on the bottom right of the triangle, it corresponds with adding the numbers that belong to the top. But that's also what the lower expression is saying. When you multiply the numbers at the bottom right, it corresponds with adding numbers that belong to the top. To help students with this, you could draw inside of the triangle, saying that when the lower left is constant, the numbers at the top like to add, while the bottom right numbers like to multiply. What about when a different corner stays constant, like the top? Well, in this case, you'd write a multiplication sign in both the bottom corners, because with exponents and radicals, multiplication turns into multiplication. The natural question that a student might ask from here is if there's an analogous rule for when the lower right stays constant. There is. You have to introduce a new operation, which for the sake of this video, I'll call O plus, where A O plus B equals 1 over 1 over A plus 1 over B. This is not actually a ridiculous thing to introduce, since it comes up in physics all the time, like when you're computing parallel resistance.
With that symbol, you could say that when the lower right number stays constant, the top numbers like to get O plussed together, and the bottom left numbers like to get multiplied. This is actually a really nice connection between logarithms and roots, and it never gets discussed, probably because the notation isn't really conducive to asking the question. I could go on and on here, showing a lot of other properties, but honestly, I think the best case I can make here is to encourage you to explore it for yourself, and notice that just about everything involving exponents, logs, or radicals becomes nicer when you use the triangle of power. By the way, I hope that it goes without saying that in this perfect world, students wouldn't learn these operations purely from the symbols. They should still ask why it's true, and why it doesn't follow a different pattern. The point is that when the notation actually reflects the math, the questions that students are most naturally asking tend to be the ones that cut right into the essence of what's going on. The asymmetries in the notation correspond with actual asymmetries in the numerical relationship A to the B equals C itself, not with the artificial asymmetries of squiggles and words. When a student asks why the top likes to get added in one context, but O plussed in another, the teacher can point out the property that reflecting the triangle reciprocates the top, and then they can start addressing where that fact comes from. My sincere hope is that students don't learn by symbolic patterns, but by substantive reasoning and re-derivation within their own heads. But the fact is, most of us do first learn things by symbolic manipulation, so when there's an opportunity to significantly speed up that process, we should take it. And if you agree with me that the triangle of power is clearly better than what we have already, start actually using it in your notes to see what it feels like.
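That rule about the lower right staying constant is easy to check numerically. Here's a sketch of my own (with O plus spelled `oplus`): the x-th root of c times the y-th root of c equals the (x O plus y)-th root of c.

```python
def oplus(a, b):
    # a O-plus b = 1 / (1/a + 1/b), the parallel-resistance operation
    return 1.0 / (1.0 / a + 1.0 / b)

# Hold the bottom right corner constant at c = 64:
c, x, y = 64.0, 2.0, 3.0
lhs = c ** (1 / x) * c ** (1 / y)  # sqrt(64) * cbrt(64) = 8 * 4 = 32
rhs = c ** (1 / oplus(x, y))       # 64 to the power 5/6, also 32
```

The identity works because c^(1/x) * c^(1/y) = c^(1/x + 1/y), and 1/x + 1/y is exactly 1 over (x O plus y).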
Spread the word, and if you're a teacher, maybe start teaching this to your students so that we can get them hooked while they're young. |
Why this puzzle is impossible | It's the holiday season, a time of year to bring people together and to do something a little bit different. So, Mathologer here. I'm Matt Parker from Stand Up Maths. Hey, this is Sam from Wendover Productions and Half as Interesting. Hi everyone, this is James Grime from the singingbanana channel. It's Brady reporting for service from Numberphile, Objectivity and various other channels. Hey everyone, my name is Stephen Welch, my channel is Welch Labs. I'm Mithuna from the channel Looking Glass Universe. Grant told me he was sending me a puzzle and a mug. Hey Grant, I am here. I've got a mug and some paper and some markers. And I'm ready to do your puzzle. I really should know how to solve this mug because I'm the guy that makes and sells them with Matt Parker. So, I've been instructed not to read the directions before starting. Hi Ben. Hey Grant. So, a friend just gave me this mug. You are going to be challenged and I'm just going to kind of make you do this on camera to embarrass you. We've got three different houses here, three different cottages. And then three different utilities, the gas, the power and the water. Draw a line from each of the three utilities to each of the three houses. So, nine lines in total. Okay, without letting any two cross. No two lines cross. So, right here, if you wanted to just go straight from power to the house, that's an integral. Okay, interesting. That is quite the challenge. So, nine lines that don't cross. That doesn't even sound possible. I've got my mug. I've got my utilities mug here. Okay. So, here it is. Here's my handmade mug. I've even got real coffee in the mug. I mean, that, look at that. That's the attention to detail. I'm willing to give this a go. I'm just worried I'm going to muck it up. I tend to make a bit of a Parker Square of these things. When I, when I try, ah, see? Well, let's just fill in as many as I can and see what happens. I'm sure this will end terribly. So, there's one.
There's the other. There we go. Gas line. It's going to be easy. We're going to go like this. Vroom. Vroom. Sound effects are crucial. Ah, I'm not going to go around the green one. I don't want to fall for that. I can do another one. And now, up to five. I have to go. I'm looking at my display over here. I should have put it over there. But, oh well. Oh, that's good. That's good. That's good. The second house. Okay, I'll put it into the second house. Reasonably, this is easy enough. And so, we just need to get from here to there. I have one, two, three, four, five, six, seven lines. Two to go. So, I have that one connected to that one. I mean, that one connected to that one. Oh, now we get into trouble. Okay, now I start to see the problem. And there, I have made my fatal error in not paying attention. I have boxed in this house right here. As you can see, there's no way to get to it. Gas needs to get to number one and two. And that's the problem, because we're cut off. I kind of want to try it on paper. Okay, it's getting really awkward to draw in a mug. I think what I'm going to do is I'm going to go to a piece of paper. This, this kind of property that you can make lines go from here to here, and also all the way around, makes it seem like I should be drawing on a sphere. Something like that. Okay, let me, I need bigger lines, bigger, bigger space. Uh, now I've just blocked off. Oh, how is this possible? This isn't getting anywhere. Let's try again. Water, I need to the first and second. What? Oh, I really messed it up. Okay. To make that at least look easier, I'm going to go around here. Around, around, around, around, around, around to two. I go around the mug with the gas here. So, I'm just going to go all the way around. I'm going to go around, let's go underneath the handle here. Over, over, over, over, over, over. So now it's close. We just need to figure out how to get that red in there. House number three is all done and good. Look at that. House number three.
Good to go. So, this house has all three and that house has all three. But this one in the middle doesn't have gas. Alright, let me try something new. Let me just try an experiment here. Let's be, let's be empirical. What's really nice about the mug is that it's shiny. So if you use a dry erase marker, you can undo your mistakes. Just rub it off. Posit? Okay. So there's some very pleasing math within this puzzle for you and me to dive into. But first, let me just say a really big thanks to everyone here who were willing to be my guinea pigs in this experiment. Each of them runs a channel that I respect a lot, and many of them have been incredibly kind and helpful to this channel. So if there's any there that you're unfamiliar with or that you haven't been keeping up with, they're all listed in the description, so most certainly check them out. We'll get back to all of them in just a minute. Here's the thing about the puzzle. If you try it on a piece of paper, you're gonna have a bad time. But if you're a mathematician at heart, when a puzzle seems hard, you don't just throw up your hands and walk away. Instead, you try to solve a meta puzzle of sorts. See if you can prove that the task in front of you is impossible. In this case, how on earth do you do that? How do you prove something is impossible? For background, anytime that you have some objects with a notion of connection between those objects, it's called a graph, often represented abstractly with dots for your objects, which I'll call vertices, and lines for your connections, which I'll call edges. Now in most applications, the way you draw a graph doesn't matter; what matters is the connections. But in some peculiar cases, like this one, the thing that we care about is how it's drawn. And if you can draw a graph in the plane without crossing its edges, it's called a planar graph.
So the question before us is whether or not our utilities puzzle graph, which in the lingo is fancifully called the complete bipartite graph K3,3, is planar or not. And at this point, there are two kinds of viewers, those of you who know about Euler's formula and those who don't. Those who do might see where this is going, but rather than pulling out a formula from thin air and using it to solve the meta puzzle, I want to flip things around here and show how reasoning through this conundrum, step by step, can lead you to rediscovering a very charming and very general piece of math. To start, as you're drawing lines here between homes and utilities, one really important thing to keep note of is whenever you enclose a new region. That is, some area that the paint bucket tool would fill in. Because you see, once you've enclosed a region like that, no new line that you draw will be able to enter or exit it. So you have to be careful with these. In the last video, remember how I mentioned that a useful problem-solving tactic is to shift your focus onto any new constructs that you introduce, trying to reframe your problem around them? Well, in this case, what can we say about these regions? Right now, I have up on the screen an incomplete puzzle, where the water is not yet connected to the first house, and it has four separate regions. But can you say anything about how many regions a hypothetically complete puzzle would have? What about the number of edges that each region touches? What can you say there? There's lots of questions you might ask and lots of things you might notice. And if you're lucky, here's one thing that might pop out. For a new line that you draw to create a region, it has to hit a vertex that already has an edge coming out of it. Here, think of it like this. Start by imagining one of your nodes as lit up while the other five are dim. And then every time you draw an edge from a lit up vertex to a dim vertex, light up the new one.
So at first, each new edge lights up one more vertex. But if you connect to an already lit up vertex, notice how this closes off a new region. And this gives us a super useful fact: each new edge either increases the number of lit up nodes by one, or it increases the number of enclosed regions by one. This fact is something that we can use to figure out the number of regions that a hypothetical solution to this would cut the plane into. Can you see how? When you start off, there's one node lit up and one region, all of 2D space. By the end, we're going to need to draw 9 lines, since each of the 3 utilities gets connected to each of the 3 houses. Five of those lines are going to light up the initially dim vertices. So the other 4 lines each must introduce a new region. So a hypothetical solution would cut the plane into 5 separate regions. And you might say, okay, that's a cute fact, but why should that make things impossible? What's wrong with having 5 regions? Well, again, take a look at this partially complete graph. Notice that each region is bounded by 4 edges. And in fact, for this graph, you could never have a cycle with fewer than 4 edges. Say you start at a house, then the next line has to be to some utility, and then a line out of that is going to go to another house. And you can't cycle back to where you started immediately, because you have to go to another utility before you can get back to that first house. So all cycles have at least 4 edges. And this right here gives us enough to prove the impossibility of our puzzle. Having 5 regions, each with a boundary of at least 4 edges, would require more edges than we have available. Here, let me draw a planar graph that's completely different from our utilities puzzle, but useful for illustrating what 5 regions with 4 edges each would imply. If you go through each of these regions and add up the number of edges each one touches, you end up with 5 times 4, or 20. 
And of course, this way overcounts the total number of edges in the graph, since each edge is touching multiple regions. But in fact, each edge is touching exactly 2 regions, so this number 20 is precisely double-counting the edges. So any graph that cuts the plane into 5 regions, where each region is touching 4 edges, would have to have 10 total edges. But our utilities puzzle has only 9 edges available. So even though we concluded that it would have to cut the plane into 5 regions, it would be impossible for it to do that. So there you go, bada boom bada bang! It is impossible to solve this puzzle on a piece of paper without intersecting lines. Tell me that's not a slick proof. And before getting back to our friends and the mug, it's worth taking a moment to pull out a general truth sitting inside of this. Think back to the key rule, where each new edge either introduced a new vertex by being drawn to an untouched spot, or it introduced a new enclosed region. That same logic applies to any planar graph, not just our specific utilities puzzle situation. In other words, the number of vertices minus the number of edges plus the number of regions remains unchanged, no matter what graph you draw. Namely, it started at 2, so it always stays at 2. And this relation, true for any planar graph, is called Euler's characteristic formula. Historically, by the way, the formula came up in the context of convex polyhedra, like a cube, for example, where the number of vertices minus the number of edges plus the number of faces always equals 2. So when you see it written down, you often see it with an F for faces instead of talking about regions. Now before you go thinking of me as some kind of Grinch that sends friends an impossible puzzle and then makes them film themselves trying to solve it, keep in mind, I didn't give this puzzle to people on a piece of paper. And I'm betting the handle has something to do with this. Okay. 
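For anyone who wants to double-check the arithmetic in that argument, here is a minimal plain-Python sketch. The variable names are mine, not from the video:

```python
# Region count: each of the 9 edges either lights up a dim vertex or closes
# off a region. Starting state: one lit vertex, one region (the whole plane).
V, E = 6, 9                 # vertices and edges of the utilities graph K3,3
region_edges = E - (V - 1)  # edges left over after lighting the 5 dim vertices
F = 1 + region_edges        # regions a hypothetical planar drawing would create
print(F)                    # 5
print(V - E + F)            # 2: Euler's characteristic formula

# Double count: 5 regions, each bounded by at least 4 edges, with every edge
# touching exactly 2 regions, forces at least 5 * 4 / 2 = 10 edges.
edges_required = F * 4 // 2
print(edges_required > E)   # True: 10 edges needed, only 9 available
```

The contradiction in the last line is the whole proof in miniature: the region count and the cycle-length bound cannot both hold with only 9 edges.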
Otherwise, why would you put it on a mug and not a piece of paper? This is a valid observation. Ooh, I have one cool idea, maybe. Use the mug handle to be... oh yeah, I think I see it. Okay. I feel like it has to do something with the handle, and our ability to hop one line over the other. I'm gonna start by, I think, taking advantage of the handle, because I think that that is the key to this. You know what? I think actually sphere is the wrong thing to be thinking about. I mean, like, famously, a mug is topologically the same as a donut. So to solve this thing, you're gonna have to use the torusiness of the mug. You're gonna have to use the handle somehow. That's the thing that makes this a torus. So let's take the green and go over the handle here. Okay. And then the red can kind of come under. Nice. And there we go. There you go. I think I did it. All right. My approach is to get as far as you can... as far as you can as if you were on a plane, and then see where you get stuck. So look, I'm gonna draw this to here like that. And now I've come across a problem, because electricity can't be joined to this house. This is where you have to use the handle. So whatever you did, do it again, but go around the handles. So I'm gonna go down here. I'm gonna loop underneath, come back around, and back to where I started, like that. And now I'm free to get my electricity through. I'm gonna go over the handle like that. There we go. Bit messy, there you go. And then I'm gonna go on the inside of the handle. Go all the way around the inside of the handle. And finally connect to the gas company. To solve this puzzle, just join them, and there's three more connections to go. So let's just make them: one, two, and now we have to connect those two guys. Just watch it. In through the front door, out through the back door, done. No intersections. Maybe you think that it's cheating. 
Well, it's also a topological puzzle, so it means the relative positions of things don't matter. What that means is we can take this handle and move it here, creating another connection. Ho, ho, ho. Oh my god, am I done? Is this over? Think I'm gonna cut it. 24 minutes. Granted, so this would take 15 minutes. There you go. I think I solved it. You're in the success category. Hard but not impossible. Hard but not impossible. This is maybe perhaps not the most elegant solution to this problem. If I drew this line here, you'll think, oh no, he's blocked that house, there's no way to get the gas in. But this is why it's on a mug, right? If you take the gas line all the way up here to the top, you then take it over and into the mug. If you draw the line under the coffee, it wets the pen. So when the line comes back out again, the pen's not working anymore. You can go straight across there and join it up. And because it wasn't drawing, you haven't had to cross the lines. Easy. By the way, funny story. So I was originally given this mug as a gift, and I didn't really know where it came from. And it was only after I had invited people to be a part of this that I realized the origin of the mug: Maths Gear is a website run by three of the YouTubers I had just invited, Matt, James, and Steve. Small world. Given just how helpful these three guys were with the logistics of a lot of this, really the least I could do to thank them is give a small plug for how gift cards from Maths Gear could make a pretty good last-minute Christmas present. Back to the puzzle, though. This is one of those things where once you see it, it kind of feels obvious. The handle of the mug can basically be used as a bridge to prevent two lines from crossing. But this raises a really interesting mathematical question. We just proved that this task is impossible for graphs on a plane. So where exactly does that proof break down on the surface of a mug? And I'm actually not going to tell you the answer here. 
I want you to think about this on your own. And I don't just mean saying, oh, it's because Euler's formula is different on surfaces with a hole. Really, think about this. Where specifically does the line of reasoning that I laid out break down when you're working on a mug? I promise you, thinking this through will give you a deeper understanding of math. Like anyone tackling a tricky problem, you will likely run into walls and moments of frustration. But the smartest people I know actively seek out new challenges, even if they're just toy puzzles. They ask new questions. They aren't afraid to start over many times. And they embrace every moment of failure. So give this and other puzzles an earnest try and never stop asking questions. But Grant, I hear you complaining. How am I supposed to practice my problem solving if I don't have someone shipping me puzzles on topologically interesting shapes? Well, let's close things off by going through a couple puzzles created by this week's mathematically oriented sponsor Brilliant.org. So here I'm in their Intro to Problem Solving course and going through a particular sequence called Flipping Pairs. And the rules here seem to be that we can flip adjacent pairs of coins, but we can't flip them one at a time. And we are asked, is it possible to get it so that all three coins are gold side up? Well, clearly I just did it. So yes. And the next question, we start with a different configuration, have the same rules, and we're asked the same question. Can we get it so that all three of the coins are gold side up? And you know, there's not really that many degrees of freedom we have here, just two different spots to click. So you might quickly come to the conclusion that no, you can't. Even if you don't necessarily know the theoretical reason yet, that's totally fine. So no, and we kind of move along. 
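Since there are only 8 possible starting configurations of three two-sided coins, that "no" can also be confirmed by brute force. Here is a small sketch of my own (not code from Brilliant's course); it also previews the count that comes up next:

```python
from itertools import product

def reachable_all_gold(state):
    """Can this configuration reach all-gold, flipping only adjacent pairs?

    state is a tuple of three booleans, True meaning gold side up.
    The two legal moves flip positions (0, 1) or (1, 2) together.
    """
    seen, frontier = {state}, [state]
    while frontier:
        s = frontier.pop()
        if all(s):
            return True
        for i in (0, 1):  # the two adjacent pairs
            t = list(s)
            t[i], t[i + 1] = not t[i], not t[i + 1]
            t = tuple(t)
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return False

solvable = [s for s in product([True, False], repeat=3) if reachable_all_gold(s)]
print(len(solvable))  # 4
```

The deeper reason is a parity invariant: every move flips exactly two coins, so the number of silver-up coins changes by -2, 0, or +2, and its evenness never changes.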
So next, it's kind of showing us every possible starting configuration that there is, and asking, for how many of them can we get it to a point where all three gold coins are up? Obviously I'm kind of giving away the answer, it's sitting here, four, on the right, because I've gone through this before. But if you want to go through it yourself, this particular quiz has a really nice resolution, and a lot of others in this course do build up genuinely good problem-solving instincts. So you can go to brilliant.org slash 3B1B to let them know that you came from here, or even slash 3B1B flipping to jump straight into this quiz. And you can make an account for free; a lot of what they offer is free. But they also have an annual subscription service if you want to get the full suite of experiences that they offer. And I just think they're really good. I know a couple of the people there, and they're incredibly thoughtful about how they put together math explanations. Water goes to one, and then wraps around to the other. And naively at this point... oh wait, I've already messed up. Then from there, water can make its way to cottage number three. Ah, I'm trapped. I've done this wrong again. |
The determinant | Chapter 6, Essence of linear algebra | Hello, hello again. So moving forward, I'll be assuming that you have a visual understanding of linear transformations and how they're represented with matrices, the way that I've been talking about in the last few videos. If you think about a couple of these linear transformations, you might notice how some of them seem to stretch space out, while others squish it on in. One thing that turns out to be pretty useful for understanding one of these transformations is to measure exactly how much it stretches or squishes things. More specifically, to measure the factor by which the area of a given region increases or decreases. For example, look at the matrix with columns 3, 0 and 0, 2. It scales i hat by a factor of 3 and scales j hat by a factor of 2. Now if we focus our attention on the 1 by 1 square whose bottom sits on i hat and whose left side sits on j hat, after the transformation this turns into a 2 by 3 rectangle. Since this region started out with area 1 and ended up with area 6, we can say the linear transformation has scaled its area by a factor of 6. Compare that to a shear, whose matrix has columns 1, 0 and 1, 1, meaning i hat stays in place and j hat moves over to 1, 1. That same unit square, determined by i hat and j hat, gets slanted and turned into a parallelogram. But the area of that parallelogram is still 1, since its base and height each continue to have length 1. So even though this transformation smushes things about, it seems to leave areas unchanged, at least in the case of that one unit square. In fact, though, if you know how much the area of that one single unit square changes, it can tell you how the area of any possible region in space changes. For starters, notice that whatever happens to one square in the grid has to happen to any other square in the grid, no matter the size. This follows from the fact that grid lines remain parallel and evenly spaced. 
Then, any shape that's not a grid square can be approximated by grid squares pretty well, with arbitrarily good approximations if you use small enough grid squares. So, since the areas of all those tiny grid squares are being scaled by some single amount, the area of the blob as a whole will also be scaled by that same single amount. This very special scaling factor, the factor by which a linear transformation changes any area, is called the determinant of that transformation. I'll show how to compute the determinant of a transformation using its matrix later on in this video, but understanding what it represents is, trust me, much more important than the computation. For example, the determinant of a transformation would be 3 if that transformation increases the area of a region by a factor of 3. The determinant of a transformation would be 1 half if it squishes down all areas by a factor of 1 half. And the determinant of a 2D transformation is 0 if it squishes all of space onto a line, or even onto a single point, since then the area of any region would become 0. That last example will prove to be pretty important. It means that checking if the determinant of a given matrix is 0 will give a way of computing whether or not the transformation associated with that matrix squishes everything into a smaller dimension. You'll see in the next few videos why this is even a useful thing to think about, but for now I just want to lay down all of the visual intuition, which in and of itself is a beautiful thing to think about. Okay, I need to confess that what I've said so far is not quite right. The full concept of the determinant allows for negative values. But what would the idea of scaling an area by a negative amount even mean? This has to do with the idea of orientation. For example, notice how this transformation gives this sensation of flipping space over. 
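To tie those examples together numerically, here is a plain-Python sketch using the 2-by-2 determinant formula that shows up near the end of the video. The helper name det2 and its column-based signature are my own choices:

```python
def det2(col1, col2):
    # Determinant of the 2x2 matrix whose columns are where i hat and
    # j hat land; equals the factor by which areas are scaled.
    return col1[0] * col2[1] - col2[0] * col1[1]

print(det2((3, 0), (0, 2)))  # 6: the scaling matrix multiplies every area by 6
print(det2((1, 0), (1, 1)))  # 1: the shear leaves all areas unchanged
print(det2((1, 0), (2, 0)))  # 0: everything lands on a line, areas collapse
```

The last line is the important special case from above: a zero determinant flags a transformation that squishes all of space into a smaller dimension.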
If you are thinking of 2D space as a sheet of paper, a transformation like that one seems to turn over that sheet onto the other side. Any transformations that do this are said to invert the orientation of space. Another way to think about it is in terms of i hat and j hat. Notice that in their starting positions, j hat is to the left of i hat. If after a transformation, j hat is now on the right of i hat, the orientation of space has been inverted. Whenever this happens, whenever the orientation of space is inverted, the determinant will be negative. The absolute value of the determinant, though, still tells you the factor by which areas have been scaled. For example, the matrix with columns 1, 1, and 2 negative 1 encodes a transformation that has determinant, I'll just tell you, negative 3. And what this means is that space gets flipped over and areas are scaled by a factor of 3. So why would this idea of a negative area scaling factor be a natural way to describe orientation flipping? Think about the series of transformations you get by slowly letting i hat get closer and closer to j hat. As i hat gets closer, all of the areas in space are getting squished more and more, meaning the determinant approaches 0. Once i hat lines up perfectly with j hat, the determinant is 0. Then if i hat continues the way that it was going, doesn't it kind of feel natural for the determinant to keep decreasing into the negative numbers? So that's the understanding of determinants and two dimensions. What do you think it should mean for three dimensions? It also tells you how much of transformation scales things, but this time it tells you how much volumes get scaled. Just as in two dimensions, where this is easiest to think about by focusing on one particular square with an area 1 and watching only what happens to it. In three dimensions, it helps to focus your attention on the specific 1 by 1 by 1 cube, whose edges are resting on the basis vectors i hat, j hat and k hat. 
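Both the negative-3 example and the sliding picture just described can be replayed with numbers before we move on to three dimensions. This is a plain-Python sketch; the particular parametrization of i hat's path is my own illustration, not from the video:

```python
def det2(col1, col2):
    # 2x2 determinant from the landing spots of i hat and j hat.
    return col1[0] * col2[1] - col2[0] * col1[1]

# Columns 1, 1 and 2, -1: orientation is flipped, areas are scaled by 3.
print(det2((1, 1), (2, -1)))  # -3

# Slide i hat from (1, 0) toward j hat = (0, 1): the determinant shrinks to
# zero as the two line up, then keeps decreasing into the negatives.
dets = [det2((1 - t, t), (0, 1)) for t in (0.0, 0.5, 1.0, 1.5)]
print(dets)  # [1.0, 0.5, 0.0, -0.5]
```

The sign change falls out of the formula automatically, which is one way to see why negative values are the natural bookkeeping for orientation flips.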
After the transformation, that cube might get warped into some kind of slanty slanty cube. This shape, by the way, has the best name ever: parallelepiped, a name that's made even more delightful when your professor has a nice thick Russian accent. Since this cube starts out with a volume of 1, and the determinant gives the factor by which any volume is scaled, you can think of the determinant simply as being the volume of that parallelepiped that the cube turns into. A determinant of zero would mean that all of space is squished onto something with zero volume, meaning either a flat plane, a line, or, in the most extreme case, a single point. Those of you who watched chapter 2 will recognize this as meaning that the columns of the matrix are linearly dependent. Can you see why? What about negative determinants? What should that mean for three dimensions? One way to describe orientation in 3D is with the right hand rule. Point the forefinger of your right hand in the direction of i hat, stick out your middle finger in the direction of j hat, and notice how when you point your thumb up, it's in the direction of k hat. If you can still do that after the transformation, orientation has not changed, and the determinant is positive. Otherwise, if after the transformation it only makes sense to do that with your left hand, orientation has been flipped, and the determinant is negative. So if you haven't seen it before, you're probably wondering by now, how do you actually compute the determinant? For a 2 by 2 matrix with entries A, B, C, D, the formula is A times D minus B times C. Here's part of an intuition for where this formula comes from. Let's say that the terms B and C both happened to be zero. Then the term A tells you how much i hat is stretched in the x direction, and the term D tells you how much j hat is stretched in the y direction. 
So since those other terms are zero, it should make sense that A times D gives the area of the rectangle that our favorite unit square turns into, kind of like the 3, 0, 0, 2 example from earlier. Even if only one of B or C is zero, you'll have a parallelogram with a base A and a height D, so the area should still be A times D. Loosely speaking, if both B and C are non-zero, then that B times C term tells you how much this parallelogram is stretched or squished in the diagonal direction. For those of you hungry for a more precise description of this B times C term, here's a helpful diagram if you'd like to pause and ponder. Now if you feel like computing determinants by hand is something that you need to know, the only way to get it down is to just practice it with a few. There's really not that much I can say or animate that's going to drill in the computation. This is all triply true for three-dimensional determinants. There is a formula, and if you feel like that's something you need to know, you should practice with a few matrices, or go watch Sal Khan work through a few. Honestly, though, I don't think that those computations fall within the essence of linear algebra, but I definitely think that understanding what the determinant represents falls within that essence. Here's kind of a fun question to think about before the next video. If you multiply two matrices together, the determinant of the resulting matrix is the same as the product of the determinants of the original two matrices. If you tried to justify this with numbers, it would take a really long time. But see if you can explain why this makes sense in just one sentence. Next up, I'll be relating the idea of linear transformations covered so far to one of the areas where linear algebra is most useful: linear systems of equations. See you then. 
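The one-sentence explanation is left as the exercise, but the claim itself is easy to spot-check numerically. A plain-Python sketch; the two example matrices here are arbitrary choices of mine:

```python
def det2(m):
    # Determinant of a 2x2 matrix given as a list of rows: ad - bc.
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def matmul2(m, n):
    # 2x2 matrix product, i.e. composing the two transformations.
    return [[sum(m[r][k] * n[k][c] for k in range(2)) for c in range(2)]
            for r in range(2)]

A = [[2, 1], [0, 3]]
B = [[1, 4], [2, -1]]
print(det2(matmul2(A, B)))  # -54
print(det2(A) * det2(B))    # -54: composing scales areas by the product
```

That both lines print the same number is, of course, not a proof; the one-sentence geometric argument about area-scaling factors composing is the real answer to the puzzle.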