diff --git "a/transcript/allocentric_cM4ISxZYLBs.txt" "b/transcript/allocentric_cM4ISxZYLBs.txt" new file mode 100644--- /dev/null +++ "b/transcript/allocentric_cM4ISxZYLBs.txt" @@ -0,0 +1,1352 @@ +[0.000 --> 2.000] you +[30.000 --> 32.000] you +[60.000 --> 62.000] you +[90.000 --> 92.000] you +[120.000 --> 122.000] you +[150.000 --> 152.000] you +[180.000 --> 182.000] you +[210.000 --> 212.000] you +[240.000 --> 242.000] you +[270.000 --> 272.000] you +[300.000 --> 303.000] you +[330.000 --> 332.000] you +[332.000 --> 342.000] you +[342.000 --> 344.000] you +[344.000 --> 346.000] you +[346.000 --> 348.000] you +[348.000 --> 352.000] i +[352.000 --> 356.000] have +[356.000 --> 359.960] on their face, which I think is asking a slightly different question. +[359.960 --> 363.080] And so I thought today I would try to answer that in, +[363.080 --> 364.880] like, you know, at least at the beginning of this talk, +[364.880 --> 369.600] try to give you an idea of why my lab does the odd things that we do. +[369.600 --> 372.240] And then to give you one example, like, +[372.240 --> 375.960] in depth of what we actually do in practice, +[375.960 --> 378.000] which will unfortunately not involve brain decoding, +[378.000 --> 379.880] but I'm happy to talk to you about that. +[379.880 --> 382.360] So first, the introduction of motivation. +[382.360 --> 385.160] You know, the brain is just a deep network, +[385.160 --> 387.640] but it's a hideously complicated deep network +[387.640 --> 390.720] that is very, very different from the deep networks +[390.720 --> 393.200] you guys use in computer science. +[393.200 --> 397.080] Brain has about 80 billion neurons, about 60 billion of those +[397.080 --> 398.800] are actually back in the cerebellum, +[398.800 --> 401.400] which is an anatomically very old structure +[401.400 --> 404.960] that is what you might argue relatively inefficient. +[404.960 --> 408.040] And therefore, the brain kind of evolved +[408.040 --> 410.320] a new method of building neural networks, +[410.320 --> 415.040] which is the method that my cerebellum student is like, +[415.160 --> 416.280] grimacing at me. +[416.280 --> 418.800] Look, we can argue about this forever, Amanda. +[418.800 --> 420.160] 80 billion neurons. +[422.160 --> 425.720] Anyway, so there's another alternative, +[425.720 --> 428.800] there's another alternative that the brain has evolved, +[428.800 --> 430.080] which is the cerebell cortex, +[430.080 --> 431.560] where there's about 20 billion neurons +[431.560 --> 433.960] who together in these very complicated networks, +[433.960 --> 435.680] each one of those neurons is communicating +[435.680 --> 438.000] with 10,000 other neurons. +[438.000 --> 440.920] The cerebell cortex, the outline part of the brain +[440.920 --> 443.200] that you can see when you sort of just, you know, +[443.200 --> 445.240] see a picture of the side of the brain, +[445.240 --> 449.800] consists of about 300 different areas and modules. +[449.800 --> 452.240] This network is very highly interconnected. +[452.240 --> 455.960] If you record from a single brain area, +[455.960 --> 458.000] it has about a 50% chance of being connected +[458.000 --> 459.600] with every other brain area. +[459.600 --> 461.160] And every one of those connections, +[461.160 --> 462.800] everyone does feed forward connections, +[462.800 --> 465.000] has a concomitant feedback connection. +[465.000 --> 468.320] So the whole brain is this complicated series of loops. 
+[468.320 --> 471.400] Everything, or information is constantly looping around, +[471.400 --> 474.960] and the time skills of these loops are slow, +[474.960 --> 477.440] because, you know, we don't have electrical wires +[477.440 --> 479.000] in the brain, we've got essentially +[479.000 --> 481.120] electrochemical processes happening, +[481.120 --> 482.560] and those are very slow. +[482.560 --> 486.560] So if you have a neuron at the back of the brain, +[486.560 --> 490.400] communicating with, you know, a prefrontal brain structure, +[490.400 --> 492.520] that might take 30 or 40 milliseconds +[492.520 --> 494.240] for that loop to be completed. +[494.240 --> 498.640] So the brain is on a single neuron level relatively slow, +[498.640 --> 500.280] and there are a lot of feedback loops, +[500.280 --> 502.920] and these feedback loops have long delays. +[502.920 --> 505.720] And so there's a lot of reverberant activity that happens, +[505.720 --> 508.720] and the principles of a system that +[508.720 --> 512.120] involves these oscillatory feedback loops that slow delays, +[512.120 --> 514.280] this very high degree of connectivity feed forward +[514.280 --> 517.400] and feedback, the principles of operation of that thing +[517.400 --> 519.520] are going to be likely to be very, very different +[519.520 --> 521.960] from the principles of operation. +[521.960 --> 524.960] Conventional neural networks that we use in computer science. +[524.960 --> 528.400] Because all these neural networks, like convolutional networks, +[528.400 --> 529.840] and more complicated kinds of things +[529.840 --> 533.440] that people have been devised, all grew out of essentially +[533.440 --> 537.040] our understanding of how neural networks worked in World War II. +[537.040 --> 537.280] Right? +[537.280 --> 539.480] McCulloch and Pitts, and those really guys +[539.480 --> 543.440] set off on a trajectory that was picked up in AI and computer +[543.440 --> 546.720] science and has evolved ever since, fairly slowly +[546.720 --> 548.040] until 10 years ago. +[548.040 --> 553.160] And now there's this giant burgeoning of evolutionary progress +[553.160 --> 556.320] in artificial neural networks on a completely different evolution +[556.320 --> 560.640] or a path than the much slower mammalian evolutionary path. +[560.640 --> 563.400] So transformer networks don't really +[563.400 --> 565.520] have anything to do with how the brain operates. +[565.520 --> 567.280] Attention and transformer networks basically +[567.280 --> 569.360] has nothing to do with how attention operates in the brain. +[569.360 --> 571.720] They're different things, both interesting, +[571.720 --> 573.280] but different things. +[573.280 --> 575.640] So my job is to understand the brain, +[575.640 --> 576.760] because I'm a neuroscientist, which +[576.760 --> 578.800] means I have to understand an architected system +[578.800 --> 580.640] that somebody else built. +[580.640 --> 585.960] And in that domain, we ask a lot of standard questions +[585.960 --> 587.280] that everybody in the field asks. +[587.280 --> 589.920] And they're the same subsets of questions. +[589.920 --> 592.400] First, how is the brain divided into parts? +[592.400 --> 593.960] I mean, if you just look at the outside of the brain here, +[593.960 --> 596.640] it's not obvious that there is more than one part here. 
+[596.640 --> 602.520] But in fact, this folded structure +[602.520 --> 604.240] is sort of like a beach ball that's +[604.240 --> 605.800] had all the air sucked out of it so that it +[605.800 --> 608.880] confided inside your skull and distributed across +[608.880 --> 612.160] the surface of that structure are a lot of different areas +[612.160 --> 614.520] that have different functional properties. +[614.520 --> 616.600] What information is represented in each of these modules +[616.600 --> 617.640] or areas? +[617.640 --> 623.520] What is the neural code that instantiates these representations? +[623.520 --> 624.760] How is information transformed? +[624.760 --> 627.800] Is it passes through these circuit circuits? +[627.800 --> 631.920] How are top-down processes like expectation and priors +[631.920 --> 634.760] and control processes and attention implemented? +[634.760 --> 637.840] And how do they affect representations and information flow? +[637.840 --> 640.960] And then how does all of this interact with memory, +[640.960 --> 642.360] obviously, any systems can do anything. +[642.360 --> 643.840] Interesting is going to have to have memory. +[643.840 --> 645.640] You can have to store those memories. +[645.640 --> 646.680] You can have to learn things. +[646.680 --> 648.480] You can have to recall the memories +[648.480 --> 651.880] and apply them with current experience to make decisions. +[651.880 --> 655.480] All this stuff is basically an open question in the brain. +[655.480 --> 657.200] That people have been working on for, +[657.200 --> 658.360] at this point, hundreds of years. +[661.360 --> 664.280] So as you will see, it'll become very clear +[664.280 --> 666.640] when I get to the actual example. +[666.640 --> 668.960] Because this is neuroscience, and we're +[668.960 --> 671.200] doing experiments on a structure that we're trying +[671.200 --> 674.000] to understand, most of the questions we ask +[674.000 --> 677.160] can be formulated as a regression problem. +[677.160 --> 679.320] I've got some x variables, which are either the things +[679.320 --> 682.840] I manipulate or the observed variables +[682.840 --> 686.600] that I see in a naturally behaving animal. +[686.600 --> 688.440] And I've got some my y variables, which +[688.440 --> 689.600] are the brain activity. +[689.600 --> 691.240] And I want to understand the difference +[691.240 --> 693.320] between the relationship between the x variables +[693.320 --> 694.080] and the y variables. +[694.080 --> 695.920] That's a regression problem. +[695.920 --> 699.320] Most things in science are of that kind of problem. +[699.320 --> 702.320] If I'm an astrophysicist, and I point a telescope up +[702.320 --> 705.640] at the sky, I detect some vanishingly small fraction +[705.640 --> 707.240] of all the stars in the sky. +[707.240 --> 710.360] And I have to make inferences about how the sky is a whole +[710.360 --> 711.040] works. +[711.040 --> 713.600] And we're basically in the same situation in neuroscience, +[713.600 --> 716.760] except our microscopes are different. +[716.760 --> 719.360] They're microscopes instead of telescopes. +[719.360 --> 722.760] But it's all a regression problem. +[722.760 --> 724.800] And the one thing I was like to mention when I'm talking +[724.800 --> 728.920] to engineers is the criteria for success +[728.920 --> 731.720] in solving this regression problem are kind of different +[731.720 --> 734.040] in neuroscience than they are in engineering. 
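
To make that regression framing concrete before getting into those criteria, here is a minimal Python sketch with made-up sizes and simulated data (not the lab's code): X holds the stimulus and task features over time, Y holds the brain activity, a linear model is fit on one half of the run, and it is judged by how well it predicts the held-out half.

# A minimal sketch (hypothetical sizes, simulated data) of the X -> Y regression framing.
import numpy as np

rng = np.random.default_rng(0)
n_time, n_features, n_voxels = 1000, 50, 200
X = rng.standard_normal((n_time, n_features))             # stimulus/task features over time
W_true = rng.standard_normal((n_features, n_voxels))
Y = X @ W_true + rng.standard_normal((n_time, n_voxels))  # stand-in for measured brain activity

# Fit on the first half of the run, evaluate on the held-out second half.
X_tr, X_te, Y_tr, Y_te = X[:500], X[500:], Y[:500], Y[500:]
W_hat, *_ = np.linalg.lstsq(X_tr, Y_tr, rcond=None)

# Score each voxel by the correlation between predicted and held-out responses.
Y_pred = X_te @ W_hat
r = [np.corrcoef(Y_pred[:, v], Y_te[:, v])[0, 1] for v in range(n_voxels)]
print("median held-out correlation:", np.median(r))
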
+[734.040 --> 736.080] Engineers want to build something. +[736.080 --> 738.800] And the first requirement is the thing has to work. +[738.800 --> 742.280] So engineers value prediction and generalization. +[742.280 --> 745.320] And as you all know, you would like a proof for every system +[745.320 --> 745.920] you design. +[745.920 --> 748.760] But if you can't write a proof, and it seems to work, +[748.760 --> 752.160] and you just put really big safety boundaries on it, +[752.160 --> 753.600] and you can deploy it anyway. +[753.600 --> 756.360] And that's OK as a provisional model. +[756.360 --> 758.880] In neuroscience, and most areas of science, +[758.880 --> 760.520] it's actually the opposite. +[760.520 --> 762.840] People actually never check predictions in neuroscience +[762.840 --> 763.840] or psychology. +[763.840 --> 765.120] They never check generalization. +[765.120 --> 767.560] It's not a requirement of any paper that's published. +[767.560 --> 771.440] And so what people value is an elegant explanatory model +[771.440 --> 773.440] rather than a good prediction. +[773.440 --> 776.960] Now this makes me sad because I want both. +[776.960 --> 779.480] I want good predictions and generalization +[779.480 --> 781.400] and a beautiful elegant model. +[781.400 --> 784.400] But I have noticed that I'm in the minority because, of course, +[784.400 --> 786.160] science is a social enterprise, and people +[786.160 --> 790.760] have a vested interest in behaviors that I would consider +[790.760 --> 791.360] not optimal. +[791.360 --> 793.360] For example, pretending that statistical significance +[793.360 --> 797.040] is important, or pretending that the data set you have, +[797.040 --> 799.040] if the model fits really well within the data set you have, +[799.040 --> 801.160] you don't need to have a separate fit test set, things +[801.160 --> 801.960] like that, right? +[801.960 --> 803.160] Which are very common. +[803.160 --> 807.440] My lab, we try to not buy into those dysfunctions, +[807.440 --> 810.280] and we try to make sure that all of the procedures +[810.280 --> 812.320] that we use in the lab are adhering +[812.320 --> 816.360] to the best possible standards in modern data science. +[816.360 --> 818.520] Just going to mention one more thing. +[818.520 --> 821.040] Because a lot of my colleagues think AI +[821.040 --> 823.120] until chat GPT was kind of useless. +[823.120 --> 827.160] And I always like to point out, no, the whole reason +[827.160 --> 829.400] we have data science is because of AI. +[829.400 --> 831.840] Because by the time the 1990s came, +[831.840 --> 835.680] AI was a seriously broken, horrible area. +[835.680 --> 838.280] And the government was not going to fund it anymore. +[838.280 --> 840.520] The government was like, we've spent 50 years of money +[840.520 --> 841.280] on this thing. +[841.280 --> 842.160] Nothing works. +[842.160 --> 843.440] Why are we paying you? +[843.440 --> 845.600] And the AI people, this is a cartoon, of course. +[845.600 --> 847.360] The AI people all got together and said, +[847.360 --> 848.640] we have to fix our problem. +[848.640 --> 849.920] What's our problem? +[849.920 --> 855.160] And they realized, look, science is politics with data. +[855.160 --> 855.680] Right? +[855.680 --> 856.960] It's some data. +[856.960 --> 858.880] And then a sociological experiment +[858.880 --> 861.960] that gets applied to the data, which is politics. +[861.960 --> 863.240] What's the problem with AI? 
+[863.240 --> 864.320] There's no data. +[864.320 --> 865.760] It's just politics. +[865.760 --> 869.600] So if you don't, if you don't have, there's no ground truth. +[869.600 --> 871.840] So if you don't have some way to keep yourself +[871.840 --> 874.600] from political dysfunction, bad things will happen. +[874.600 --> 877.600] So in that from the 90s to late 2010, +[877.600 --> 880.520] the machine learning AI community did a fantastic job +[880.520 --> 882.840] of basically inventing modern data science, +[882.840 --> 885.320] of like grafting statistics and computer science together, +[885.320 --> 888.080] and making sure that they could create models that work, +[888.080 --> 890.520] that predicted accurately and that generalized well. +[890.520 --> 892.800] And all of modern data science, I personally +[892.800 --> 894.640] think came out of that dysfunction. +[894.640 --> 896.760] And it was a marvelous success, as we +[896.760 --> 899.200] see because of the great success of an AI today. +[900.200 --> 902.440] One of my regrets, however, is that a lot of those data +[902.440 --> 906.000] science things have not yet leaked into other areas of science. +[906.000 --> 908.840] And so those of you who are very young, +[908.840 --> 910.640] when you're taking data science and you're maybe +[910.640 --> 912.040] thinking of going into an experimental science +[912.040 --> 916.320] like biology or psychology, just keep doing what you're doing. +[916.320 --> 919.360] Do it the right way and wait till the old people die +[919.360 --> 921.200] and then everything will be fine. +[921.200 --> 923.800] Science progresses one funeral at a time. +[923.800 --> 925.560] All right. +[925.560 --> 928.160] So one of the fundamental problems we have in neuroscience +[928.160 --> 930.320] is that everything is data limited, right? +[930.320 --> 933.320] In any Earth science, either theory limited or data limited. +[933.320 --> 935.280] And at an end of the point in time, one of those two things +[935.280 --> 937.320] is going to be the main limitation you face. +[937.320 --> 940.560] And in neuroscience, there are plenty of ideas and theories. +[940.560 --> 942.160] There's just no data. +[942.160 --> 945.560] Just like an astronomy where data limited in neuroscience +[945.560 --> 946.680] where data limited. +[946.680 --> 950.200] So every method we have for measuring the brain is limited, +[950.200 --> 952.960] either in space or in time. +[952.960 --> 954.920] And so if you're going to measure the brain, +[954.920 --> 958.080] you need to make some decision about which kind of limitation +[958.080 --> 961.400] you want to suffer from, right? +[961.400 --> 965.200] And if you're working with humans, then you have to decide +[965.200 --> 967.120] do you want to do invasive things with humans, which means +[967.120 --> 969.080] you're only going to get a very small amount of data +[969.080 --> 971.600] from pre-surgical patients or do you +[971.600 --> 973.560] want to use non-invasive methods. +[973.560 --> 977.040] So for me, I would like to get high-quality data sets +[977.040 --> 981.280] from neurotypical or at least not people suffering +[981.280 --> 982.560] from a medical disorder. +[982.560 --> 985.760] And so I want to have a method that gives me +[985.760 --> 988.720] the most space and time information I can. +[988.720 --> 991.280] And so in my lab, we generally focus on this method +[991.280 --> 992.480] called Functional MRI. 
+[995.560 --> 998.000] Here's another way to think about this problem. +[998.000 --> 1000.360] You can think about this like a classic information theory +[1000.360 --> 1001.080] problem. +[1001.080 --> 1002.560] OK, I've got a brain here. +[1002.560 --> 1004.800] It's got a bunch of bits of information in it. +[1004.800 --> 1006.840] I want to extract all those bits of information +[1006.840 --> 1010.000] and put them on my computer where I can analyze them. +[1010.000 --> 1012.560] And ideally, I would get all the information out of the brain +[1012.560 --> 1013.720] and put it on my computer. +[1013.720 --> 1014.960] But I can't do that. +[1014.960 --> 1017.760] Because I have to suck the information out of the brain +[1017.760 --> 1020.760] through some task, the person's always doing some task, +[1020.760 --> 1022.680] through this little tiny straw that's +[1022.680 --> 1025.840] given by whatever lousy method I'm using for recording +[1025.840 --> 1026.680] the brain. +[1026.680 --> 1028.480] And that straw is going to determine +[1028.480 --> 1031.680] how many bits of information I get from this brain per unit +[1031.680 --> 1034.440] time money graduate student, which are the three factors +[1034.440 --> 1038.440] that limit any engineering or science project +[1038.440 --> 1040.440] at the university. +[1040.440 --> 1042.280] So we want to optimize this pipeline +[1042.280 --> 1044.360] to get as many bits of information per unit time money +[1044.360 --> 1045.920] graduate student as we can. +[1045.920 --> 1049.400] And we want that data to predict and generalize +[1049.400 --> 1050.800] to the real world. +[1050.800 --> 1053.400] So in my lab, generally, we focus on naturalistic +[1053.400 --> 1056.000] stimuli and tasks because the brain is a nonlinear dynamical +[1056.000 --> 1056.760] system. +[1056.760 --> 1058.240] If you have a nonlinear dynamical system +[1058.240 --> 1061.040] and you want it to generalize to natural world, +[1061.040 --> 1063.080] you need to measure it in the natural situation. +[1063.080 --> 1065.120] Otherwise, your nonlinear areas will probably +[1065.120 --> 1067.560] get you and your predictions will fail. +[1067.560 --> 1071.960] We try to get as much information as we can out +[1071.960 --> 1074.520] of the brain per unit time money graduate student. +[1074.520 --> 1076.200] We follow best practices of data science. +[1076.200 --> 1077.680] We always do predictions. +[1077.680 --> 1079.080] We always do cross validation. +[1079.080 --> 1081.880] We always have generalization tests. +[1081.880 --> 1084.480] We use an encoding model approach to analyze these data. +[1084.480 --> 1085.360] I'll show you in a second. +[1085.360 --> 1087.520] That's basically just multiple regression. +[1087.520 --> 1092.400] And we perform subsequent statistical modeling +[1092.400 --> 1096.040] after that to try to understand what we did. +[1096.040 --> 1098.280] So I told you I'm using going to use MRI in this talk. +[1098.280 --> 1101.600] For those of you who have never thought about MRI, +[1102.560 --> 1104.840] this is all you need to know about functional MRI. +[1104.840 --> 1106.600] MRI, magnetic resonance imaging, +[1106.600 --> 1108.400] is just a big chemistry experiment. +[1108.400 --> 1111.120] We stick our sample, which in this case is your head, +[1111.120 --> 1114.760] in a big magnetic field, and it aligns some vanishingly +[1114.760 --> 1118.080] small number of protons along the main magnetic field. 
+[1118.080 --> 1121.480] Then we apply an electromagnetic gradient,
+[1121.480 --> 1124.720] and that pushes the proton spins off axis.
+[1124.720 --> 1127.640] Then we turn off that gradient and the protons
+[1127.640 --> 1131.240] precess back down to the base magnetic field.
+[1131.240 --> 1133.440] And when they do that, they give off energy.
+[1133.440 --> 1136.440] And we can either look at that energy in two planes,
+[1136.440 --> 1139.640] it's three dimensions, so two planes give us all the information
+[1139.640 --> 1142.680] we need about how fast these protons precess down.
+[1142.680 --> 1146.240] And it turns out that oxygenated and deoxygenated
+[1146.240 --> 1148.920] hemoglobin have different magnetic properties.
+[1148.920 --> 1152.040] And therefore, the protons bound to oxygenated
+[1152.040 --> 1155.880] or deoxygenated hemoglobin spin down at different rates.
+[1155.880 --> 1160.440] So we can Fourier encode these electromagnetic gradients.
+[1160.440 --> 1163.920] And we can recover three-dimensional positions
+[1163.920 --> 1168.400] in the brain of small volumes of protons.
+[1168.400 --> 1172.760] And then we can look at the rate at which they spin down
+[1172.760 --> 1176.080] to try to get an idea of how much oxygen there is
+[1176.080 --> 1176.960] in the bloodstream.
+[1176.960 --> 1178.040] Why oxygen?
+[1178.040 --> 1180.680] Because a neuron is a little chemical engine that
+[1180.680 --> 1185.520] burns oxygen with sugar to create a chemical called ATP,
+[1185.520 --> 1187.840] which is the fuel that drives the cells.
+[1187.840 --> 1191.520] So as neurons fire, they're constantly extracting sugar
+[1191.520 --> 1193.000] and oxygen from the bloodstream.
+[1193.000 --> 1194.480] And the more neurons that are firing,
+[1194.480 --> 1196.000] the more sugar and oxygen is being
+[1196.000 --> 1197.960] extracted from the bloodstream.
+[1197.960 --> 1199.920] Now, the problem with this is that the signal
+[1199.920 --> 1202.080] to noise is going to depend on how strong this main magnetic
+[1202.080 --> 1203.120] field is.
+[1203.120 --> 1206.920] And we don't want to put somebody in a magnetic field that's
+[1206.920 --> 1208.960] so strong they would levitate or something bad
+[1208.960 --> 1209.960] would happen to them.
+[1209.960 --> 1213.560] So typically, we have only a vanishingly small fraction
+[1213.560 --> 1215.760] of the protons that are aligned, so our signal
+[1215.760 --> 1216.640] to noise is low.
+[1216.640 --> 1219.600] And we need to average over space to get good signal.
+[1219.600 --> 1223.240] And at the main magnet we use here at Berkeley,
+[1223.240 --> 1226.280] a standard three-Tesla magnet, our spatial resolution
+[1226.280 --> 1228.040] is about two to three millimeters.
+[1228.040 --> 1231.640] There's a brand new magnet we just installed at Berkeley,
+[1231.640 --> 1233.400] called the NexGen 7T magnet that
+[1233.400 --> 1236.640] goes down to a half a millimeter resolution.
+[1236.640 --> 1238.560] And that's just coming up to speed right now.
+[1238.560 --> 1240.120] I'm happy with the fMRI.
+[1240.120 --> 1242.440] Yeah.
+[1242.440 --> 1243.960] And it's the only magnet in the world.
+[1243.960 --> 1249.040] Right now, Berkeley has the best fMRI machine on the planet.
+[1249.040 --> 1249.320] OK.
+[1249.320 --> 1250.480] So now I'm going to go.
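
To put a rough number on that, here is a toy Python sketch. The echo time and T2* values below are assumed, illustrative values, not figures from the talk; the point is only that the MR signal decays roughly as exp(-TE / T2*), and the small T2* difference between more and less oxygenated blood is the BOLD contrast that functional MRI picks up.

# Toy illustration of BOLD contrast (all numbers are rough, assumed values).
import numpy as np

TE = 0.030                           # echo time in seconds, a typical order of magnitude at 3 T
T2s_oxy, T2s_deoxy = 0.050, 0.048    # assumed effective T2* (seconds) for more vs. less oxygenated blood

signal_oxy = np.exp(-TE / T2s_oxy)
signal_deoxy = np.exp(-TE / T2s_deoxy)
percent_change = 100 * (signal_oxy - signal_deoxy) / signal_deoxy
print(f"BOLD-like signal change: {percent_change:.1f}%")   # on the order of a few percent
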
+[1250.480 --> 1251.840] So that kind of gives you the background +[1251.840 --> 1254.320] of why we do MRI, tells you about the limitations you +[1254.320 --> 1255.160] have to deal with. +[1255.160 --> 1256.880] Now, I'm going to go into an actual experiment +[1256.880 --> 1258.240] for the rest of this talk. +[1258.240 --> 1261.680] So this experiment was spearheaded by Tian Xiao Zhang, +[1261.680 --> 1262.920] who I think's here somewhere. +[1262.920 --> 1263.600] Is Tian Xiao Zhang? +[1263.600 --> 1264.320] Yeah, he's back there. +[1264.320 --> 1265.960] So if you have any questions, you can ask that guy, +[1265.960 --> 1268.320] because I'm just the talking head. +[1268.320 --> 1269.880] When I was getting this talk ready, +[1269.880 --> 1272.720] I realized that Tian Xiao is actually giving the same talk, +[1272.720 --> 1274.400] probably with a different introduction. +[1274.400 --> 1276.000] Next Monday at Oxyopia. +[1276.000 --> 1278.120] So if you want to go to that talk, you can. +[1278.120 --> 1280.600] If you like this and you want to see more. +[1280.600 --> 1283.440] So we decided, in this particular experiment, +[1283.440 --> 1284.680] we studied language in the lab. +[1284.680 --> 1285.880] We studied vision. +[1285.880 --> 1287.720] We studied a lot of different things. +[1287.720 --> 1290.200] But in this talk, I thought I would talk about navigation. +[1290.200 --> 1292.760] Navigation's a cool task because we all do it. +[1292.760 --> 1293.640] We do it all the time. +[1293.640 --> 1294.800] It's a naturalistic task. +[1294.800 --> 1296.960] And there's also a lot of different brain subsystems. +[1296.960 --> 1298.600] So look at navigation. +[1298.600 --> 1302.080] There has been some work on navigation in F-Roy and in humans. +[1302.080 --> 1304.720] But that all involves very reduced environments. +[1304.720 --> 1308.160] Like you show pictures of people, pictures of different places, +[1308.160 --> 1309.840] and ask them if they recognize it. +[1309.840 --> 1311.680] Or you show them a picture of one place. +[1311.680 --> 1313.960] Then show them a picture of another place from a different angle. +[1313.960 --> 1315.560] And ask them if they're the same or different, +[1315.560 --> 1317.880] very reduced kinds of situations. +[1317.880 --> 1321.720] We want to do a naturalistic task as good as we could. +[1321.720 --> 1325.400] So Tian Xiao built a video game environment using Unreal Engine. +[1325.400 --> 1327.480] It's about two by three kilometers on a side. +[1327.480 --> 1330.760] Has hundreds of buildings and roads inside it. +[1330.760 --> 1333.920] And people have to learn this outside the scanner. +[1333.920 --> 1337.800] Takes them about 10 hours to learn all the landmarks in this world. +[1337.800 --> 1340.240] And then we put them inside the scanner. +[1340.240 --> 1341.080] This is the scanner. +[1341.080 --> 1342.680] It's just a big magnet. +[1342.680 --> 1344.200] So Tian Xiao's outside the magnet. +[1344.200 --> 1345.600] Now he's going to slide in there. +[1345.600 --> 1348.120] You notice he has optics that present the virtual reality +[1348.120 --> 1351.080] to his eyes and has a steering wheel and foot pedals +[1351.080 --> 1353.080] that we built that Tian Xiao built. +[1353.080 --> 1355.760] Whenever I use we in this talk, I mean him. +[1355.760 --> 1360.320] It's the royal we that we built that are magnet safe. +[1361.320 --> 1365.160] OK, so we put people in the MRI machine +[1365.160 --> 1367.920] and we just do a taxi driver task. 
+[1367.920 --> 1370.960] So you get a queue, go to the grocery store, +[1370.960 --> 1373.120] and you just have to drive to the grocery store. +[1373.120 --> 1375.720] And then eventually you arrive at the grocery store, +[1375.720 --> 1377.840] and then you get another taxi driver task. +[1377.840 --> 1379.680] And this is just like an Uber driver. +[1379.680 --> 1385.720] It's not that exciting a task, but it's a naturalistic task. +[1385.720 --> 1387.760] There are other cars that are pedestrians. +[1387.760 --> 1390.280] There are different times of day, different traffic patterns. +[1390.280 --> 1392.960] You have to use all that information in this task. +[1392.960 --> 1396.880] So I always like to show a movie of brain activity in this task +[1396.880 --> 1400.280] because this really gives you an idea of what's going on. +[1400.280 --> 1402.680] The brain is inconveniently folded up inside the skull, +[1402.680 --> 1406.120] so we can extract it computationally and then flatten it out. +[1406.120 --> 1407.600] And if we did something like that with your brain, +[1407.600 --> 1410.560] we'd end up with something about the size of a large pizza. +[1410.560 --> 1412.200] The visual system here is in the middle. +[1412.200 --> 1414.600] The prefrontal cortex is at the far left and far right. +[1414.600 --> 1417.080] The somatosensory strip is kind of here +[1417.080 --> 1419.800] and the auditory system is here and here. +[1419.800 --> 1423.600] Now, red on this map means more brain activity, +[1423.600 --> 1425.800] more metabolic activity, and blue means +[1425.800 --> 1428.440] relatively less metabolic activity. +[1428.440 --> 1430.200] So all that's happening here is the person +[1430.200 --> 1433.120] is driving to some random destination we don't know where. +[1433.120 --> 1435.480] And we can follow the patterns of brain activity +[1435.480 --> 1437.440] as the person is driving. +[1437.440 --> 1441.760] And what you'll notice is that these patterns vary a lot. +[1441.760 --> 1445.040] And they depend on what's going on outside. +[1445.040 --> 1448.360] So here the person stopped behind another car. +[1448.360 --> 1451.040] So we get activity in a network, a brain network called +[1451.040 --> 1453.320] the default mode network, which is the internal illumination +[1453.320 --> 1455.960] network that is activated when you're +[1455.960 --> 1457.200] talked to yourself. +[1457.200 --> 1458.920] You'll see when the person turns a corner of that, +[1458.920 --> 1461.480] we'll end up with activity in the motor strips. +[1461.480 --> 1463.000] When they have to break and accelerate, +[1463.000 --> 1465.080] you'll get activity in the motor strip. +[1465.080 --> 1469.720] Anything that happens in this task must have some correlate +[1469.720 --> 1472.520] in the brain, because we're neuroscientists here. +[1472.520 --> 1473.920] If there's a soul, it's irrelevant to us. +[1473.920 --> 1475.360] It all has to be in the brain. +[1475.360 --> 1477.640] There's only one world we're dealing with here. +[1477.640 --> 1480.760] So anything you see must be represented in the brain. +[1480.760 --> 1483.760] Any motor action must be represented in the brain. +[1483.760 --> 1486.640] Your intentions to drive your cognitive plans +[1486.640 --> 1490.760] about where you went, your thinking about where you came from, +[1490.760 --> 1492.960] all of that stuff must be represented in the brain. 
+[1492.960 --> 1494.240] And that makes it very clear that this
+[1494.240 --> 1496.680] is just a giant multiple regression problem.
+[1496.680 --> 1498.160] I've got a bunch of x variables, which
+[1498.160 --> 1502.680] are the perception data, the controls data, and the task
+[1502.680 --> 1503.400] we gave you.
+[1503.400 --> 1505.080] And I've got a bunch of y data, which
+[1505.080 --> 1508.160] is this time series of brain activity
+[1508.160 --> 1510.120] over about 100,000 points in the brain.
+[1510.120 --> 1513.640] And I just got to figure out how they're related to one another.
+[1513.640 --> 1514.680] OK.
+[1514.680 --> 1517.680] So the first thing you often do when you get these kinds of data
+[1517.680 --> 1522.400] is you just plot the brain activity on flat maps to look at it,
+[1522.400 --> 1524.920] see where things were activated.
+[1524.920 --> 1528.320] And this task activates a lot of things in the visual system.
+[1528.320 --> 1529.720] In the frontal eye fields, in the motor
+[1529.720 --> 1532.280] and somatosensory strip, and this parietal cortex, which
+[1532.280 --> 1534.960] is, the parietal cortex is the part of the visual system
+[1534.960 --> 1537.000] that involves coordinate system transformations
+[1537.000 --> 1539.400] between your eye coordinates and your hands.
+[1539.400 --> 1542.240] There's a lot of different coordinate systems involved there.
+[1542.240 --> 1545.080] And there is some activity also here in prefrontal cortex
+[1545.080 --> 1546.520] that probably has to do with planning,
+[1546.520 --> 1549.440] some with audition that has to do with the sound in the MRI.
+[1549.440 --> 1552.440] The sound in the video that you could not hear,
+[1552.440 --> 1554.640] because we did not play it.
+[1554.640 --> 1557.200] So now the video game environment, because this is a video game,
+[1557.200 --> 1559.000] we have ground truth about everything that happened
+[1559.000 --> 1559.760] in the video game.
+[1559.760 --> 1560.960] Is there a question?
+[1560.960 --> 1563.520] So you said that everything has to be
+[1564.480 --> 1565.320] in the brain.
+[1565.320 --> 1567.320] Sometimes we hear that the brain
+[1567.320 --> 1568.480] is connected to the gut, right?
+[1568.480 --> 1569.960] There are neurons in the gut.
+[1569.960 --> 1572.320] How do you know that what's happening
+[1572.320 --> 1574.520] has to be brain only?
+[1574.520 --> 1576.960] That's an amazingly cool question that nobody's ever asked
+[1576.960 --> 1577.480] before.
+[1577.480 --> 1579.920] Actually, it's one person that's asked before.
+[1579.920 --> 1582.320] We're not recording from the gut.
+[1582.320 --> 1585.120] So if stuff's happening in the gut, it's irrelevant to us.
+[1585.120 --> 1587.800] Because this is only in the head.
+[1587.800 --> 1592.960] So there could be a correlation between the gut and the brain.
+[1592.960 --> 1594.680] I suspect that when I'm driving home,
+[1594.680 --> 1596.560] like at the end of the day, if I'm hungry,
+[1596.560 --> 1599.480] I'm probably driving differently than if I'm not hungry.
+[1599.480 --> 1600.600] Just to take an example.
+[1600.600 --> 1602.280] So I think that probably has an influence,
+[1602.280 --> 1603.280] but we won't see it.
+[1603.280 --> 1605.040] We won't know about it.
+[1605.040 --> 1608.320] So anyway, there's a bunch of different feature spaces here.
+[1608.320 --> 1610.680] We know where all the buildings are.
+[1610.680 --> 1611.880] We have the semantic segmentation.
+[1611.880 --> 1613.400] We know all the surface normals are. +[1613.400 --> 1615.200] We know what the inferred distance of all the buildings +[1615.200 --> 1615.720] was. +[1615.720 --> 1616.760] We have all that ground truth. +[1616.760 --> 1618.400] And we can use that to make features. +[1618.400 --> 1620.560] We also have behavioral control. +[1620.560 --> 1621.800] We're measuring the steering wheel. +[1621.800 --> 1623.160] We're measuring the foot pedals. +[1623.160 --> 1625.280] And we're measuring, as shown here in this blue bubble, +[1625.280 --> 1626.480] where your eye is. +[1626.480 --> 1628.280] Where your eye is is very important. +[1628.280 --> 1630.160] Because we don't have any direct measure of attention +[1630.160 --> 1630.920] in this task. +[1630.920 --> 1635.280] So it turns out attention in mammals follows your eye movements +[1635.280 --> 1637.320] or precedes your eye movements, really. +[1637.320 --> 1640.000] So we can use eye movements as a proxy for attention. +[1642.960 --> 1647.840] So what we actually did in this, what Chang did in this experiment, +[1647.840 --> 1651.720] is he created 34 different feature spaces. +[1651.720 --> 1654.560] Using these various variables. +[1654.560 --> 1658.480] Some of these feature spaces are related to the perception, +[1658.480 --> 1663.520] like gaze grid, eye tracking, motion energy, which +[1663.520 --> 1665.840] is just how much motion energy occurs in different locations +[1665.840 --> 1667.680] in the display. +[1667.680 --> 1670.320] The spatial semantics, which are the labels of objects +[1670.320 --> 1672.200] in the scene, the gaze semantics, which +[1672.200 --> 1674.760] are the labels of objects that you're actually looking at, +[1674.760 --> 1676.560] which are the behaviorally relevant things in the scene. +[1677.080 --> 1684.400] The scene structure, the depth, all of these things were coded. +[1684.400 --> 1686.760] Then we also have all the control information, +[1686.760 --> 1689.640] like where your foot pedals were, where the steering wheel +[1689.640 --> 1690.880] was, where the accelerator was. +[1690.880 --> 1691.920] We have all that. +[1691.920 --> 1694.840] And then we have a bunch of navigation information. +[1694.840 --> 1696.480] And most of this navigation information +[1696.480 --> 1698.920] comes from theories of navigation +[1698.920 --> 1702.840] from the rodent literature and also some from the human literature. +[1702.840 --> 1705.000] So there are dozens of different theories +[1705.000 --> 1707.720] about what kind of information about navigation +[1707.720 --> 1713.600] might be represented, future path navigation directions +[1713.600 --> 1716.000] in various coordinate systems to the target +[1716.000 --> 1717.960] that you're going to, all of those kinds of things. +[1717.960 --> 1720.640] And all of this was coded in various feature spaces. +[1720.640 --> 1723.560] In every case, the way this works is pretty simple. +[1723.560 --> 1726.440] You take the information you have. +[1726.440 --> 1728.400] You essentially create an embedding that +[1728.400 --> 1730.880] reflects just the feature space you care about. +[1730.880 --> 1733.920] And then you concatenate all these embeddings together. +[1733.920 --> 1735.680] And that means you now have a regression problem +[1735.680 --> 1737.000] where you have your training data. +[1737.000 --> 1740.680] And you have a stack of features, where each feature space +[1740.680 --> 1742.920] has, of course, a long list of features. 
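
A toy Python sketch of that stacking step (the feature-space names and sizes here are hypothetical, just to show the shape of the design matrix that goes into the regression described next):

# Each feature space is a (time x features) matrix; concatenate them column-wise.
import numpy as np

n_time = 3000                                     # number of fMRI time points (hypothetical)
feature_spaces = {
    "motion_energy":  np.random.randn(n_time, 600),
    "gaze_semantics": np.random.randn(n_time, 300),
    "future_path":    np.random.randn(n_time, 40),
}
X = np.concatenate(list(feature_spaces.values()), axis=1)   # design matrix, shape (3000, 940)

# Remember which columns belong to which feature space, so the fitted weights
# can later be split apart and interpreted per space.
column_slices, start = {}, 0
for name, mat in feature_spaces.items():
    column_slices[name] = slice(start, start + mat.shape[1])
    start += mat.shape[1]
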
+[1742.920 --> 1744.800] And now you simply do Ridge regression
+[1744.800 --> 1748.040] to find a set of model weights that map each of those features
+[1748.040 --> 1749.320] onto every voxel in the brain.
+[1749.320 --> 1751.760] So you're going to do, you've got 34 feature spaces,
+[1751.760 --> 1755.360] comprising about 2,500 or so features.
+[1755.360 --> 1757.680] And you've got 100,000 voxels on the brain.
+[1757.680 --> 1760.800] So you're doing 100,000 regression problems,
+[1760.800 --> 1766.040] where you're fitting a 2,500-feature-long vector to each voxel.
+[1766.040 --> 1767.480] That's where data is going to be.
+[1767.480 --> 1770.960] So every one of the 100,000 voxels has 2,500 weights
+[1770.960 --> 1772.200] in the regression model.
+[1772.200 --> 1774.760] This would be impossible if it was like 1980.
+[1774.760 --> 1777.720] But the kernel trick allows you to do all of this
+[1777.720 --> 1781.720] with a matrix of dimensions of the length of the experiment
+[1781.720 --> 1783.440] rather than the number of features.
+[1783.440 --> 1785.520] And so this is all done in kernel space
+[1785.520 --> 1788.760] by some statistical miracle that I still can't even
+[1788.760 --> 1789.600] fathom.
+[1789.600 --> 1790.600] It's amazing to me that this works.
+[1790.600 --> 1793.080] Just to say that a little differently,
+[1793.080 --> 1797.280] your goal is just to find out where in the brain
+[1797.280 --> 1800.200] the stimulus appears.
+[1800.200 --> 1803.520] Well, these features of the stimulus, right?
+[1803.520 --> 1806.000] We want to know where all these aspects appear.
+[1806.000 --> 1810.600] So we don't know what, one more thing I should mention
+[1810.600 --> 1812.440] that makes this a little clearer.
+[1812.440 --> 1815.760] We don't know which of these feature spaces
+[1815.760 --> 1817.920] is represented in the brain and which isn't.
+[1817.920 --> 1820.280] And these feature spaces are all collinear, right?
+[1820.280 --> 1823.280] If I have scene semantics, which is a label of all the objects
+[1823.280 --> 1824.960] in the scene, and I have gaze semantics, which
+[1824.960 --> 1826.840] is the label of the objects that I'm looking at,
+[1826.840 --> 1828.640] those are correlated, right?
+[1828.640 --> 1829.960] So what we're really trying to do here
+[1829.960 --> 1833.120] is we're trying to find out what features are represented
+[1833.120 --> 1835.320] at what point in the brain, what perceptual, motor,
+[1835.320 --> 1836.560] and cognitive features.
+[1836.560 --> 1838.600] And we're trying to do that in as data driven a manner
+[1838.600 --> 1839.720] as we can.
+[1839.720 --> 1842.760] So to do that, we fit more feature spaces than we need.
+[1842.760 --> 1845.160] And then we're going to interrogate the data afterwards,
+[1845.160 --> 1846.880] looking through the tea leaves to try
+[1846.880 --> 1849.000] to see what was actually represented.
+[1849.000 --> 1850.000] Is that clearer?
+[1850.000 --> 1851.280] Well, it's clear.
+[1851.280 --> 1853.080] Are the pedestrians up there?
+[1853.080 --> 1854.080] Yes.
+[1854.080 --> 1855.960] The pedestrians are going to appear somewhere
+[1855.960 --> 1857.440] on the right, basically.
+[1857.440 --> 1859.520] Well, not necessarily.
+[1859.520 --> 1862.280] Well, it was both, it was most likely, right?
+[1862.280 --> 1865.480] So that's the mapping.
+[1865.480 --> 1867.680] You want to find the mapping.
+[1867.680 --> 1868.640] Yes, but of everything.
+[1868.640 --> 1870.360] The pedestrians are just a voxel on the right.
+[1870.360 --> 1871.600] Exactly.
+[1871.600 --> 1872.800] That's the problem.
+[1872.800 --> 1875.720] But it's not only that. The only reason I was correcting you
+[1875.720 --> 1877.160] was you were talking about visual things.
+[1877.160 --> 1879.160] But remember, this is a navigation experiment.
+[1879.160 --> 1881.120] We really care about the navigational variables.
+[1881.120 --> 1883.560] So you have some sense of how long it's
+[1883.560 --> 1885.520] going to take you to get where you're going.
+[1885.520 --> 1887.040] So that should be represented somewhere.
+[1887.040 --> 1889.280] You have a sense of the path you're going to take, right?
+[1889.280 --> 1890.520] This is a complicated map.
+[1890.520 --> 1892.200] You could take a lot of different paths.
+[1892.200 --> 1894.360] So when people start in this navigation experiment,
+[1894.360 --> 1897.600] they have a path that they start to take.
+[1897.600 --> 1899.280] They might deviate from that path later on
+[1899.280 --> 1900.560] if there's too much traffic or something.
+[1900.560 --> 1903.800] But there must be a cognitive map of the path, for example.
+[1903.800 --> 1907.160] So that's all the stuff we're really trying to pull out here.
+[1907.160 --> 1909.320] But the general idea is correct.
+[1909.320 --> 1911.840] So one thing I should mention, this
+[1911.840 --> 1916.840] is basically a big, ugly applied math problem.
+[1916.840 --> 1919.080] There are a lot of aspects to this problem
+[1919.080 --> 1920.480] because it's a big data kind of problem.
+[1920.480 --> 1923.600] It's got a lot of annoying things that have to be done.
+[1923.600 --> 1925.840] One of the annoying things is that all of these feature
+[1925.840 --> 1927.960] spaces have different signal-to-noise properties.
+[1927.960 --> 1931.280] The signal-to-noise is governed by how many examples
+[1931.280 --> 1934.560] of each of the features you acquired in your experiment.
+[1934.560 --> 1936.120] It has to do with where in the brain it occurs
+[1936.120 --> 1937.320] because different places in the brain
+[1937.320 --> 1939.080] have different signal-to-noise properties
+[1939.080 --> 1941.680] because of MRI susceptibility artifacts.
+[1941.680 --> 1945.280] There are all kinds of factors that can affect the signal-to-
+[1945.280 --> 1947.080] noise for these different feature spaces.
+[1947.080 --> 1949.120] So this is going to be a ridge regression problem,
+[1949.120 --> 1951.960] where we're going to have a regularizer and some features,
+[1951.960 --> 1953.120] and we're going to put those together.
+[1953.120 --> 1956.160] We have to estimate the regularizer and then essentially
+[1956.160 --> 1959.400] use that to condition the data when we do our regression problem.
+[1959.400 --> 1962.320] Every feature space gets its own regularizer in our framework.
+[1962.320 --> 1966.960] So this is using a method called Tikhonov regression
+[1966.960 --> 1968.520] that we have a very specific implementation
+[1968.520 --> 1972.000] of called banded ridge regression that we have software for
+[1972.000 --> 1974.360] that just allows these problems to be run really quickly
+[1974.360 --> 1975.480] on GPUs.
+[1975.480 --> 1977.200] That's long story short.
+[1977.200 --> 1978.560] So we spent a lot of time in the lab
+[1978.560 --> 1980.360] basically solving these applied math problems
+[1980.360 --> 1983.280] for doing these big fitting kinds of issues.
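
For the curious, here is a bare-bones numpy sketch of the kernel trick mentioned above, with hypothetical sizes and random stand-in data; this is not the lab's banded-ridge software. The usual ridge solution W = (X'X + aI)^-1 X'Y requires inverting a features-by-features matrix, but the algebraically equivalent dual form W = X'(XX' + aI)^-1 Y only inverts a time-by-time matrix, so the cost scales with the length of the experiment rather than the number of features.

# Kernel (dual) form of ridge regression, sketched with random stand-in data.
import numpy as np

n_time, n_features, n_voxels = 400, 2500, 1000   # hypothetical sizes
alpha = 10.0                                     # ridge penalty
X = np.random.randn(n_time, n_features)
Y = np.random.randn(n_time, n_voxels)

K = X @ X.T                                      # (n_time, n_time) linear kernel
dual_coef = np.linalg.solve(K + alpha * np.eye(n_time), Y)
W = X.T @ dual_coef                              # (n_features, n_voxels), same as the primal ridge solution

# Banded ridge follows the same idea, except each feature space (band) gets its
# own penalty alpha_b; that is equivalent to building the kernel as a weighted
# sum, K = sum_b X_b @ X_b.T / alpha_b, and solving with a unit ridge penalty.
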
+[1983.280 --> 1984.920] All right, so what do you get out of this experiment?
+[1984.920 --> 1987.960] Here's one example that will be helpful.
+[1988.600 --> 1992.160] The video game engine gives you 16 categories of features.
+[1992.160 --> 1995.080] That's just how the video game engine keeps track of features.
+[1995.080 --> 1997.600] So, of the semantic structure of the scene.
+[1997.600 --> 2000.240] So like there's buildings, we can see them on here.
+[2000.240 --> 2003.920] There are sidewalks, road lines, foliage, ground,
+[2003.920 --> 2005.120] pedestrians, and so on.
+[2005.120 --> 2007.440] These are the features the video game engine gives us.
+[2007.440 --> 2012.200] So for every voxel, we can create a 16-element vector of weights
+[2012.200 --> 2015.760] that tell us how much that voxel cares about these various
+[2015.760 --> 2019.720] categories of objects that appear in the video game.
+[2019.720 --> 2022.480] Now we can basically take all of the voxels
+[2022.480 --> 2024.280] and we can do principal components analysis on it
+[2024.280 --> 2026.280] and take the first three principal components
+[2026.280 --> 2028.360] and apply them to the red, green, blue channels
+[2028.360 --> 2033.280] of our display and make a map by projecting those PCs
+[2033.280 --> 2035.560] now back onto the surface of the brain.
+[2035.560 --> 2039.800] And now we see what semantic features each place in the brain
+[2039.800 --> 2040.680] represents.
+[2040.680 --> 2043.640] So these purple areas are representing pedestrians,
+[2044.120 --> 2045.640] which is what you mentioned.
+[2045.640 --> 2049.640] The greenish areas are representing roads and road lines.
+[2049.640 --> 2053.240] The yellow regions are representing foliage and so on.
+[2053.240 --> 2055.640] So you can see a lot of places in the brain represent pedestrians
+[2055.640 --> 2057.840] and people because we're social animals
+[2057.840 --> 2059.480] and people are important to us.
+[2059.480 --> 2061.360] A lot of places in the brain represent the structure
+[2061.360 --> 2062.480] of the environment.
+[2062.480 --> 2065.680] There are places that represent the road, signs, and so on.
+[2065.680 --> 2069.360] Now this is fine, but you can't do this for 34 feature spaces.
+[2069.360 --> 2070.640] You will go insane.
+[2070.640 --> 2072.800] It's just too much data.
+[2072.800 --> 2074.120] So you're going to do something.
+[2074.120 --> 2075.200] What do you do in these kinds of problems?
+[2075.200 --> 2076.760] You do dimensionality reduction.
+[2076.760 --> 2079.880] So one kind of sleazy method of dimensionality
+[2079.880 --> 2082.240] reduction you can do that I do not particularly like
+[2082.240 --> 2083.320] is t-SNE.
+[2083.320 --> 2088.360] t-SNE depends a lot on the kernel that you start with
+[2088.360 --> 2090.160] and it's very susceptible to noise.
+[2090.160 --> 2092.240] But it gives you a nice summary.
+[2092.240 --> 2095.160] So here we have 34 of the feature spaces,
+[2095.160 --> 2096.520] all the feature spaces together.
+[2096.520 --> 2099.120] And we've just classified them here into five classes.
+[2099.120 --> 2102.760] And we've used t-SNE to produce a very low-dimensional 3D
+[2102.760 --> 2105.520] embedding that we can project onto the surface of the cortex
+[2105.520 --> 2107.640] and then we've color-coded each of the feature spaces
+[2107.640 --> 2110.040] by the same scheme.
+[2110.040 --> 2111.400] So you can see where on the brain
+[2111.400 --> 2113.680] these different kinds of features are represented.
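
A rough Python sketch of that PCA-to-color step (the weight matrix here is random stand-in data, and the exact normalization the lab uses may differ):

# Project each voxel's 16 category weights onto the top 3 principal components
# and rescale those projections to [0, 1] so they can be shown as RGB.
import numpy as np

n_voxels, n_categories = 100_000, 16
W = np.random.randn(n_voxels, n_categories)        # stand-in for the fitted weights

Wc = W - W.mean(axis=0)                            # center before PCA
_, _, Vt = np.linalg.svd(Wc, full_matrices=False)  # rows of Vt are the principal axes
pcs = Wc @ Vt[:3].T                                # (n_voxels, 3) component scores

rgb = (pcs - pcs.min(axis=0)) / (pcs.max(axis=0) - pcs.min(axis=0))  # one color per voxel

The t-SNE map described just above plays the same role: reduce everything to three dimensions, then use those three dimensions as colors on the cortical surface.
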
+[2113.680 --> 2117.080] Can you expand the acronym, t-SNE?
+[2117.080 --> 2119.240] God, temporal, no.
+[2119.240 --> 2121.800] I can't even remember what the heck it is now.
+[2121.800 --> 2122.600] Zhang, what is this?
+[2122.600 --> 2123.800] Do you remember?
+[2123.800 --> 2126.640] From a cross-section.
+[2126.640 --> 2127.640] So, you know, can anybody remember?
+[2127.640 --> 2129.240] Does anybody remember what t-SNE is?
+[2129.240 --> 2131.280] It's stochastic neighbor embedding.
+[2131.280 --> 2132.920] Oh, look, one of my students knew it.
+[2132.920 --> 2133.520] OK, good.
+[2133.520 --> 2134.040] There you go.
+[2134.040 --> 2136.640] There's one of us, one person in the room.
+[2136.640 --> 2139.880] t-distributed Stochastic Neighbor Embedding.
+[2139.880 --> 2142.000] Which provides me no information whatsoever
+[2142.000 --> 2144.440] about what the thing actually does.
+[2144.440 --> 2147.480] All I remember from this is don't use t-SNE.
+[2147.480 --> 2149.840] That's the rule I learned when I was exposed to t-SNE
+[2149.840 --> 2152.520] because it's very unstable.
+[2152.520 --> 2155.400] All right, but that's what we use because it's not events.
+[2155.400 --> 2157.440] This is just a, we're just on the way
+[2157.440 --> 2158.920] to where we want to go.
+[2158.920 --> 2163.680] So anyway, so the red places here are all representing visual stuff.
+[2163.680 --> 2165.480] And these are all in the visual system.
+[2165.480 --> 2170.000] This is the retinotopic cortex here. The yellow stuff here is motor.
+[2170.000 --> 2172.000] And these are all, you know, all these motor variables
+[2172.000 --> 2173.760] are represented in the motor system.
+[2173.760 --> 2177.160] The navigation stuff, the past navigation, where you were,
+[2177.160 --> 2179.840] is represented in these purple patches.
+[2179.840 --> 2182.080] And those seem to be broadly distributed in the brain.
+[2182.080 --> 2184.840] And the future navigation is also
+[2184.840 --> 2187.000] represented in broadly distributed locations in the brain.
+[2187.000 --> 2190.200] So it seems like the navigational feature
+[2190.200 --> 2193.800] spaces are projecting onto the brain's subsystems writ large,
+[2193.800 --> 2196.920] prefrontal cortex, motor cortex, visual cortex,
+[2196.920 --> 2198.480] as we would expect, which is good.
+[2198.480 --> 2200.400] Because if this didn't work, you know,
+[2200.400 --> 2204.120] we would have to question our whole basis for being in this experiment.
+[2204.120 --> 2206.200] All right, but what you really want to do, you know,
+[2206.200 --> 2208.440] you don't really care about how these individual feature spaces
+[2208.440 --> 2209.040] are represented.
+[2209.040 --> 2211.920] What you want to know is are there navigation networks in the brain?
+[2211.920 --> 2213.400] That's the real question we had here.
+[2213.400 --> 2215.280] So let's try to see if we can pull that out.
+[2215.280 --> 2217.520] To do this, this is a really complicated slide
+[2217.520 --> 2220.680] that I'm not going to go into.
+[2220.680 --> 2222.760] Two of my students, a student and postdoc in the lab,
+[2222.760 --> 2224.560] Mateo's in the back.
+[2224.560 --> 2226.320] I don't see the other student.
+[2226.320 --> 2229.600] So Emily Meshki and Mateo Visconti
+[2229.600 --> 2233.200] both worked on this project to develop
+[2233.200 --> 2235.360] a new method called model connectivity.
+[2235.360 --> 2237.360] Connectivity is a word for correlation +[2237.360 --> 2240.680] that is used in neuroscience, sadly. +[2240.680 --> 2241.600] You guys have a run problem. +[2241.600 --> 2242.400] It's a major causality. +[2242.400 --> 2243.440] It has nothing with causality. +[2243.440 --> 2246.520] So you know, everybody's got their sins. +[2246.520 --> 2248.320] Anyway, you see model connectivity, +[2248.320 --> 2249.760] think model correlation. +[2249.760 --> 2251.880] All we're doing here is every single voxel +[2251.880 --> 2254.640] has a feature vector of 2,500 long. +[2254.640 --> 2256.640] And we're just going to basically take the angle between those +[2256.640 --> 2259.640] vectors and or the correlation between those vectors +[2259.640 --> 2262.880] and use them in a cluster analysis to pull out networks. +[2262.880 --> 2263.680] That's all we're doing. +[2263.680 --> 2265.320] Pretty straightforward. +[2265.320 --> 2266.040] OK. +[2266.040 --> 2269.400] So and then, of course, since we're using cluster analysis +[2269.400 --> 2272.760] on the feature vectors, now we're going to get a dendrogram. +[2272.760 --> 2274.760] We're going to have different numbers of clusters +[2274.760 --> 2276.360] that we could pick out of this. +[2276.360 --> 2278.960] And we use cross validation across subjects +[2278.960 --> 2281.400] to determine how many networks we can pull out of our data +[2281.400 --> 2282.400] set. +[2282.400 --> 2284.680] And that's going to be data limited. +[2284.680 --> 2284.880] All right. +[2284.880 --> 2287.360] So here's the number of clusters we're pulling out. +[2287.360 --> 2289.840] And here's our held out prediction. +[2289.840 --> 2294.440] And you can see that the more clusters we pull out, +[2294.440 --> 2295.920] better our predictions are. +[2295.920 --> 2298.520] But you can see that there's a knee here around 10 or 15 +[2298.520 --> 2299.440] networks. +[2299.440 --> 2301.920] So because 10 or 15 networks is also +[2301.920 --> 2304.080] a countable number that we can actually think about, +[2304.080 --> 2306.760] that's probably where we're going to focus our attention here. +[2306.760 --> 2311.320] So we're pulling 10 networks out of these 34 feature spaces, +[2311.320 --> 2313.200] and now we can look at these networks. +[2313.200 --> 2314.600] So this plots a little complicated, +[2314.600 --> 2316.520] but it should be straightforward what we're doing here, +[2316.520 --> 2318.080] based on what I just said. +[2318.080 --> 2320.640] We have here our 10 networks that we pulled out +[2320.640 --> 2322.600] by cutting off our dendrogram. +[2322.600 --> 2324.520] Now, remember, each one of these networks +[2324.520 --> 2327.560] consists of some combination of these different feature +[2327.560 --> 2329.880] spaces, these different 34 feature spaces. +[2329.880 --> 2333.840] And so we can marginalize across the features in each feature +[2333.840 --> 2338.280] space and use these circles to indicate +[2338.280 --> 2342.480] the weight that that individual feature space has in each network. +[2342.480 --> 2345.400] So you can see, for example, that network one, +[2345.400 --> 2348.520] it has a large weight for this scene structure feature space +[2348.520 --> 2351.920] and this attended visual semantics feature space. +[2351.920 --> 2356.680] But it has a very low weight for this depth feature space. +[2356.680 --> 2357.880] Excuse me, I don't think that's depth. +[2357.880 --> 2361.800] That's actually retinotopic motion energy. 
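
A simplified Python sketch of the model-connectivity clustering just described, as I understand it (random stand-in weights, and scipy's stock hierarchical clustering rather than the lab's actual pipeline):

# "Model connectivity": correlate the fitted weight vectors of voxels and
# cluster that similarity structure into networks.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

n_voxels, n_features = 2000, 2500                 # hypothetical sizes
W = np.random.randn(n_voxels, n_features)         # per-voxel model weights

# Voxel-by-voxel correlation of weight vectors is the "connectivity" (shown for
# clarity; linkage below recomputes the same thing as a correlation distance).
R = np.corrcoef(W)

# Hierarchical clustering on correlation distance, then cut the dendrogram into
# a chosen number of clusters; the talk picks around 10 using cross-validation
# across subjects.
Z = linkage(W, method="average", metric="correlation")
labels = fcluster(Z, t=10, criterion="maxclust")  # network label for every voxel
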
+[2361.800 --> 2365.840] So each one of these clusters has a different constellation +[2365.840 --> 2368.960] of features that weight highly in that cluster. +[2372.200 --> 2374.480] And rather than interrogate this map, +[2374.480 --> 2377.480] it's easier to just project the clusters onto the brain +[2377.480 --> 2378.640] and see what they do. +[2378.640 --> 2380.600] So if you do this, you find out there's +[2380.600 --> 2384.000] a low-level vision cluster that where all the voxels +[2384.000 --> 2386.680] are tuned for low-level visual features +[2386.680 --> 2389.400] like motion energy and those all end up +[2389.400 --> 2391.680] being located in retinotopic visual cortex, +[2391.680 --> 2394.680] where we know low-level features are represented. +[2394.680 --> 2396.880] There's high-level vision where the voxels +[2396.880 --> 2399.720] are selective of voxels, a three-dimensional pixel. +[2399.720 --> 2401.080] I didn't make that clear. +[2401.080 --> 2404.640] Where the voxels represent the semantic category +[2404.640 --> 2407.400] of the objects in the scene. +[2407.400 --> 2411.360] And those semantically selective visual areas +[2411.360 --> 2415.800] form a patchwork, a mosaic, that's sort of on the back +[2415.800 --> 2417.880] of the brain surrounds the retinotopic visual areas, +[2417.880 --> 2419.320] the low-level visual areas. +[2419.320 --> 2421.720] So that works just as it's supposed to. +[2421.720 --> 2423.280] There's a visual attention network. +[2423.280 --> 2425.880] This is loading in this thing called IPS. +[2425.880 --> 2428.240] The IPS is the inter-parietal sulcus. +[2428.240 --> 2431.680] And it's a region of the brain that is heavily modulated +[2431.680 --> 2435.560] by attention because that's on the visual stream pathway +[2435.560 --> 2437.600] that is involved with coordinate transformations +[2437.600 --> 2439.000] between different coordinate systems. +[2439.000 --> 2440.480] You can imagine that's going to be very important +[2440.480 --> 2442.800] in navigation. +[2442.800 --> 2444.800] Then there are several motor networks. +[2444.800 --> 2446.720] There's a foot network that loads highly +[2446.720 --> 2449.280] in the foot representation of your somatosensory +[2449.280 --> 2450.560] and motor system. +[2450.560 --> 2453.120] There's a hand network that loads highly +[2453.120 --> 2455.160] in the hand representation of your motor +[2455.160 --> 2456.600] and somatosensory system. +[2456.600 --> 2459.080] And there's a supplementary motor network, which +[2459.080 --> 2462.120] is a diffuse network distributed +[2462.120 --> 2464.760] in these secondary motor areas. +[2464.760 --> 2469.120] Remember that to first order, it's not at all true, +[2469.120 --> 2471.160] but just when you're broadly thinking about it, +[2471.160 --> 2473.480] the visual system is organized, kind of like just +[2473.480 --> 2476.680] an AlexNet convolutional network with successfully deeper +[2476.680 --> 2479.800] layers representing more complicated and abstract things. +[2479.800 --> 2481.320] And the motor system is flipped. +[2481.320 --> 2484.480] So the output of motor cortex is going down +[2484.480 --> 2485.960] to the spinal cord nuclei. +[2485.960 --> 2488.160] That's a pretty low-level motor code. +[2488.160 --> 2491.080] But higher levels of the motor cortex, +[2491.080 --> 2493.520] the supplementary motor areas, are representing more abstract +[2493.520 --> 2494.400] motor variables. 
+[2495.120 --> 2499.320] OK, so this is perception and this is motor. +[2499.320 --> 2501.120] We'd expect to see that fine. +[2501.120 --> 2505.440] Again, this just shows us what we were doing was not crazy. +[2505.440 --> 2507.560] But what we want, did you have a question? +[2507.560 --> 2509.800] But what we want is to pull out the navigation networks. +[2509.800 --> 2511.240] That was the interesting thing. +[2511.240 --> 2513.000] So in this 10 network solution, there's +[2513.000 --> 2516.200] three navigation networks that we can pull out. +[2516.200 --> 2519.440] And they all, why do we say their navigation networks? +[2519.440 --> 2521.160] Because they load very, very highly +[2521.160 --> 2524.440] on these navigation-related variables down there. +[2524.440 --> 2527.640] And so now we can try to inspect each of those. +[2527.640 --> 2530.520] That's going to be more difficult than you think. +[2530.520 --> 2533.440] So here's the three navigation networks. +[2533.440 --> 2536.440] And if you look at the features that these networks weigh +[2536.440 --> 2539.560] heavily on, you'll see that one of these networks +[2539.560 --> 2541.960] is predominantly visual. +[2541.960 --> 2544.160] One of these networks is predominantly motor. +[2544.160 --> 2547.840] And one of these networks is distributed across the navigation +[2547.840 --> 2548.880] features. +[2548.880 --> 2552.360] So this suggests that these navigation networks are +[2552.360 --> 2555.720] divided into visually biased navigation networks, motor +[2555.720 --> 2560.880] biased navigation networks, and more navigational navigation +[2560.880 --> 2561.760] networks. +[2561.760 --> 2564.360] I should mention, nobody asked me, and I +[2564.360 --> 2566.640] was remiss to not have mentioned this. +[2566.640 --> 2569.560] When we fit these 34 models, we're +[2569.560 --> 2572.480] fitting all 34 models simultaneously. +[2572.480 --> 2575.440] So since they're all fit simultaneously, +[2575.440 --> 2577.320] variance is attributed to each network, +[2577.320 --> 2579.680] according to where it needs to be, and what the regularization +[2579.680 --> 2581.520] parameter was for that. +[2581.520 --> 2582.840] It's attributed to each feature space, +[2582.840 --> 2584.800] according to what the regularization parameter is +[2584.800 --> 2586.000] for that feature space. +[2586.000 --> 2589.040] And then when we do this, although I'm only +[2589.040 --> 2594.440] pulling out three networks, remember, all those other networks, +[2594.440 --> 2596.680] I'm only pulling out three networks here. +[2596.680 --> 2598.880] But all the other networks, the vision network and the motor +[2598.880 --> 2601.200] network, they're still in the model. +[2601.200 --> 2603.200] I'm just taking a slice out of the model here. +[2604.200 --> 2610.200] OK, so we have three kinds of motor networks. +[2610.200 --> 2612.760] And if you look at where these motor networks are represented, +[2612.760 --> 2615.960] the motor networks are represented more +[2615.960 --> 2617.520] by the motor system. +[2617.520 --> 2619.440] Where are we here? +[2619.440 --> 2624.520] The, oh, I'm mislabeled this, and I +[2624.520 --> 2626.760] can't no longer remember what the labels are. +[2626.760 --> 2628.080] And I can't go back. +[2628.080 --> 2628.920] There we go. +[2628.920 --> 2632.760] So abstract is blue, motor is green, and sensory is red. 
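+A minimal sketch of the simultaneous fit just described, with one regularization parameter per feature space. This is a toy, closed-form version with hypothetical names; the real analysis uses 34 spaces and cross-validated hyperparameters:
+
+    # Sketch: banded ridge regression. All feature spaces are fit jointly, but each
+    # space gets its own ridge penalty, so variance is attributed across spaces
+    # according to how useful (and how regularized) each one is.
+    import numpy as np
+
+    def banded_ridge(X_bands, y, alphas):
+        # X_bands: list of (n_timepoints, n_features_i) matrices, one per feature space
+        # alphas:  one ridge penalty per band (chosen by cross-validation in practice)
+        X = np.hstack(X_bands)
+        penalty = np.concatenate([np.full(Xi.shape[1], a) for Xi, a in zip(X_bands, alphas)])
+        w = np.linalg.solve(X.T @ X + np.diag(penalty), X.T @ y)
+        return w   # stacked weights; slice per band to inspect each feature space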
+[2632.760 --> 2637.880] So red network is sensory network, motor is green, +[2637.880 --> 2640.480] and abstract is green, and motor is blue. +[2640.480 --> 2645.160] So these are kind of lining up with the way you'd expect. +[2645.160 --> 2647.360] Now, are these three discrete networks? +[2647.360 --> 2648.680] No, these are gradients. +[2648.680 --> 2651.640] So there's essentially one navigation network. +[2651.640 --> 2654.240] But it's distributed, remember, in this physical substrate. +[2654.240 --> 2657.680] And certain locations in this physical substrate that +[2657.680 --> 2660.040] contain this navigation network are more heavily +[2660.040 --> 2661.280] weighted toward motor. +[2661.280 --> 2664.440] Certain locations are more heavily weighted toward vision. +[2664.440 --> 2665.960] And certain locations are more heavily +[2665.960 --> 2668.200] weighted toward abstract navigation. +[2668.200 --> 2672.240] But these are gradients not discrete networks. +[2672.240 --> 2675.800] OK, so that's all kind of abstract, +[2675.800 --> 2677.040] and that's still a work in progress. +[2677.040 --> 2680.280] It's notoriously hard to interpret these complicated kinds +[2680.280 --> 2682.200] of networks, not only in this experiment, +[2682.200 --> 2683.840] but in all the navigation experiments +[2683.840 --> 2685.480] people do in rodents. +[2685.480 --> 2687.080] It's fairly difficult to figure out +[2687.080 --> 2689.680] what these very abstract brain areas are doing. +[2689.680 --> 2691.880] And for those of you who have tried to interpret a deep neural +[2691.880 --> 2696.400] network in engineering, you know that you have that same problem. +[2696.400 --> 2699.400] Interpreting these networks is notoriously difficult. +[2699.400 --> 2701.160] But there are some simpler things we can do. +[2701.160 --> 2703.240] So let's look at attention. +[2703.240 --> 2707.720] Attention is a huge variable in human thought, +[2707.720 --> 2709.200] in the human brain function. +[2709.200 --> 2713.400] And I think the reason for this, the reason any psychologist +[2713.400 --> 2716.040] will tell you is that the brain has very limited processing +[2716.040 --> 2716.760] power. +[2716.760 --> 2720.960] And so what happens is the brain networks +[2720.960 --> 2727.680] are reallocated using attention to whatever task is currently being demanded. +[2727.680 --> 2730.440] And there's a lot of data to show this that +[2730.440 --> 2733.920] has been collected both in neurophysiology and animals and also in MRI. +[2733.920 --> 2737.000] So this is just a simple MRI experiment. +[2737.000 --> 2740.320] We have people watching movies in this experiment. +[2740.320 --> 2741.680] This is an old experiment. +[2741.680 --> 2744.720] And in one condition, we have them attend to humans. +[2744.720 --> 2746.800] We just say whenever you see a human, hit the button. +[2746.800 --> 2748.920] In another condition, we have them attend to vehicles. +[2748.920 --> 2750.440] Whenever you see a vehicle, you hit the button. +[2750.440 --> 2752.360] They're just watching naturalistic videos. +[2752.360 --> 2754.760] And what you see is when they're attending humans, +[2754.760 --> 2758.240] human in this map is mostly green and yellow. +[2758.240 --> 2761.440] You can see that the map is largely biased towards humans. +[2761.440 --> 2763.680] And when they're attending to vehicles, which is purple in this map, +[2763.680 --> 2766.680] you see that the map becomes much more purple. 
+[2766.680 --> 2769.400] And when they're passively viewing, it's somewhere in between.
+[2769.400 --> 2771.800] So what ends up happening is when you attend,
+[2771.800 --> 2773.440] and you're going out in your daily life, and you
+[2773.440 --> 2776.480] attend to that person walking up the sidewalk,
+[2776.480 --> 2781.400] then your brain tries to become a giant person evaluator or person detector.
+[2781.400 --> 2783.840] And it can't do this perfectly.
+[2783.840 --> 2788.240] It's not like every neuron in your brain becomes a person detector.
+[2788.240 --> 2790.320] Neurons in the peripheral visual system,
+[2790.320 --> 2793.600] and that are at the periphery of the motor system,
+[2793.600 --> 2795.320] they don't change their tuning much.
+[2795.320 --> 2799.080] But neurons in prefrontal cortex, which is a very abstract part of the brain,
+[2799.080 --> 2802.080] will completely change their tuning depending on the task.
+[2802.080 --> 2807.400] And this seems weird to those of you who have worked with neural networks.
+[2807.400 --> 2811.280] But if you kind of think about neural networks a bit differently, it makes sense.
+[2811.280 --> 2815.800] When you're training a neural network, like just say you decided in a class,
+[2815.800 --> 2817.200] I've got AlexNet.
+[2817.200 --> 2822.040] I'm just going to train AlexNet to do discrimination between dogs and cats.
+[2822.040 --> 2825.520] While you're training AlexNet, the weights of that network
+[2825.520 --> 2829.320] are constantly being updated every single iteration through that network.
+[2829.320 --> 2831.320] That's how that network learns.
+[2831.320 --> 2835.760] So the way to think about attention in brain networks is that it's a very short-term
+[2835.760 --> 2838.920] updating of the weights through the whole system.
+[2838.920 --> 2841.320] You can think of it as short-term learning.
+[2841.320 --> 2845.000] This is because the human brain, unlike artificial neural networks,
+[2845.000 --> 2848.920] where we usually train and then deploy, the human brain is constantly learning
+[2848.920 --> 2850.920] all the time at all timescales.
+[2850.920 --> 2853.680] And attention is the very shortest time scale of that.
+[2853.680 --> 2858.920] So attention is the way that your brain tries to update weights to solve a specific problem
+[2858.920 --> 2865.280] by essentially just reallocating, re-engineering,
+[2865.280 --> 2872.200] the information flow through the network to make apparent, or to make explicit,
+[2872.200 --> 2877.640] a representation of the information that's most relevant to the task.
+[2877.640 --> 2878.960] So we know this happens in humans.
+[2878.960 --> 2880.920] Does it happen during driving?
+[2880.920 --> 2885.320] So here, again, this is this gaze semantics model.
+[2885.320 --> 2887.560] These are the 16 categories of things you could look at.
+[2887.560 --> 2892.000] And these are the weights of the features for those 16 categories during the active navigation
+[2892.000 --> 2893.000] task.
+[2893.000 --> 2898.840] So you can see that when you're actively navigating in the world, there is a large representation
+[2898.840 --> 2901.960] of buildings and fields and vehicles.
+[2901.960 --> 2906.520] You can see pedestrians represented, and also traffic signs seem to be represented,
+[2906.520 --> 2910.120] because they're, of course, very important for this task.
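+A minimal sketch of the kind of comparison described next, between the category weights fit during the active task and during passive viewing. The inputs are hypothetical stand-ins for the real fitted models:
+
+    # Sketch: compare the fitted semantic-category weights between two attention
+    # conditions and form a per-voxel difference map for each category.
+    import numpy as np
+
+    def attention_shift(w_active, w_passive, category_names):
+        # w_active, w_passive: (n_voxels, n_categories) fitted weights per condition
+        diff = w_active - w_passive
+        per_category = {name: diff[:, i] for i, name in enumerate(category_names)}
+        mean_gain = {name: float(d.mean()) for name, d in per_category.items()}
+        return per_category, mean_gain   # difference maps, and which categories gain overall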
+[2910.120 --> 2915.600] If we compare this map to the map we get when you simply passively watch random movies,
+[2915.600 --> 2919.000] random videos, you see that this map is very, very different.
+[2919.000 --> 2922.600] So if you're not doing an active navigation task, you're just looking at random videos
+[2922.600 --> 2927.640] of people and cars and buildings, you see that the representations are predominantly oriented
+[2927.640 --> 2932.560] toward essentially people and not so much these other kinds of factors.
+[2932.560 --> 2935.200] So this is an attention difference.
+[2935.200 --> 2938.440] It's not shown in this slide, but we can show that this isn't due to just the difference
+[2938.440 --> 2939.840] in stimulus statistics.
+[2939.840 --> 2941.680] This is actually due to attention
+[2941.680 --> 2946.480] reorienting the representation from a passive viewing situation, where your brain is predominantly
+[2946.480 --> 2951.120] representing people, those are the most important thing, to an active representation
+[2951.120 --> 2954.120] where you're representing the navigational variables.
+[2954.120 --> 2955.600] And you can see this is the difference map.
+[2955.600 --> 2961.200] You can see there's this huge bias towards navigation-related stuff being represented
+[2961.200 --> 2963.240] when you're doing navigation.
+[2963.240 --> 2970.200] Now if you go through the MRI literature, you'll find that there are kind of two subsets
+[2970.200 --> 2973.080] of networks in the visual system.
+[2973.080 --> 2978.240] One is like a person-oriented, animate subset of networks.
+[2978.240 --> 2983.520] These consist of brain areas like the parahippocampal, excuse me, the fusiform
+[2983.520 --> 2988.600] face area, the extrastriate body area, and several other parts of the visual system
+[2988.600 --> 2991.200] that seem to respond to animate stuff.
+[2991.200 --> 2994.520] And then there's a separate network for inanimate stuff.
+[2994.520 --> 2999.600] This consists of areas called the parahippocampal place area, the occipital place area, the
+[2999.600 --> 3001.240] retrosplenial cortex.
+[3001.240 --> 3004.000] Was there a question, or no, just stretching?
+[3004.000 --> 3005.840] Okay, good.
+[3005.840 --> 3012.680] So there are different subnetworks for animate and inanimate objects in the brain.
+[3012.680 --> 3020.360] So the cool thing, so right here for example, we're showing what happens when you are
+[3020.360 --> 3027.820] passively viewing vehicles and you're not actually actively engaged in, sort of,
+[3027.820 --> 3028.820] active navigation.
+[3028.820 --> 3031.580] And you can see that vehicles are represented, for example, in the parahippocampal place
+[3031.580 --> 3036.220] area, this occipital place area, and up here in the retrosplenial cortex.
+[3036.220 --> 3041.580] Now, the weird thing and the cool thing is when you actively engage in a
+[3041.580 --> 3046.660] navigation task, vehicles now become very, very important because you can't run into them
+[3046.660 --> 3048.140] and you have to avoid them.
+[3048.140 --> 3053.580] So they become, they end up being represented as animate objects.
+[3053.580 --> 3057.220] They get represented in the fusiform face area, the occipital place area, and they're
+[3057.220 --> 3060.180] no longer represented in the object network.
+[3060.180 --> 3065.820] So the whole system completely re-orients the way it views this class of inanimate objects +[3065.820 --> 3066.820] based on the task. +[3066.820 --> 3068.740] You can see the difference here. +[3068.740 --> 3074.860] In passive navigation, these blue areas are where vehicles are represented in passive +[3074.860 --> 3077.900] navigation and the red areas are where they're represented during driving. +[3077.900 --> 3080.980] You know, there's no white, which indicates that this is a complete shift. +[3080.980 --> 3084.860] This is actually the biggest attention effect I've ever seen. +[3084.860 --> 3089.340] It's a complete reorientation of the system according to this task demands. +[3089.340 --> 3093.140] When you're driving, vehicles are, you're treating basically like other people, which makes +[3093.140 --> 3096.540] sense because when you're driving, you're concerned about this other vehicle is what +[3096.540 --> 3098.660] is the person driving the vehicle going to do? +[3098.660 --> 3104.820] So you engage in this theory of mind behavior that is all part of the social negotiation +[3104.820 --> 3107.900] of active navigation. +[3107.900 --> 3109.780] And we're very interested in this topic. +[3109.780 --> 3114.380] So we've looked a lot at multi, sorry, we're beginning to look a lot at multi-agent interactions +[3114.380 --> 3116.780] and we're doing this with Claire Tomlin's lab. +[3116.780 --> 3118.700] So I see Chris in the back of the room there. +[3118.700 --> 3119.700] Yes, question. +[3119.700 --> 3122.100] Oh, you can just yell at me. +[3122.100 --> 3126.300] You don't know what the timescale that transition from remote to other. +[3126.300 --> 3131.500] Oh, this is going to be a very quick transition on all of the, like, hundreds of milliseconds +[3131.500 --> 3132.500] at the most. +[3132.500 --> 3133.500] Yeah. +[3133.500 --> 3139.500] So we don't have that directly, but based on other attention data just in the literature. +[3139.500 --> 3140.500] Yeah. +[3140.500 --> 3143.740] How would imagine you'd find a safe place for court tunes? +[3143.740 --> 3144.740] Yeah. +[3144.740 --> 3145.740] Yeah. +[3145.740 --> 3150.260] People represent if somebody's looking at a robot and they're representing it as an agent +[3150.260 --> 3152.580] that they act interact with, it becomes representative of the people. +[3152.580 --> 3153.580] I would expect. +[3153.580 --> 3155.340] Even if there's too many behind it. +[3155.340 --> 3156.340] Right. +[3156.340 --> 3157.340] Probably. +[3157.340 --> 3159.740] We haven't done that experiment, but I would be my guest. +[3159.740 --> 3160.740] All right. +[3160.740 --> 3163.420] So this is being done with a group in Claire Tomlin's lab. +[3163.420 --> 3166.220] Claire, as you know, her group studies active navigation. +[3166.220 --> 3169.020] So this is, like, just preliminary data. +[3169.020 --> 3170.420] I just want to mention where we're going. +[3170.420 --> 3174.380] Everything I told you about is, like, the static features that we were grasped under the +[3174.380 --> 3175.380] brain. +[3175.380 --> 3179.660] It's not really a particularly interesting way to model the brain. +[3179.660 --> 3183.420] What we would like is something that's more dynamic, that has a plant and, you know, +[3183.420 --> 3187.620] a policy and something that's, like, feels more like a cognitive process. 
+[3187.620 --> 3193.580] And so Claire's group has been implementing a model predictive control framework to try
+[3193.580 --> 3200.660] to see if there's part of the brain that's particularly involved in negotiating vehicle-vehicle
+[3200.660 --> 3203.220] interactions during driving.
+[3203.220 --> 3207.780] And so this is a standard model predictive control loop where the driver is constantly
+[3207.780 --> 3213.380] trying to estimate what the other car is going to do and then adjust their behavior for
+[3213.380 --> 3214.380] that.
+[3214.380 --> 3218.180] So we have a model predictive control set of equations.
+[3218.180 --> 3224.260] We need to use these parameters as features that we fit to the brain.
+[3224.260 --> 3228.900] So the first stage of that is basically optimizing this model so that it simulates the behavior
+[3228.900 --> 3230.700] of the actual car in the experiment.
+[3230.700 --> 3232.380] We do this just from the stimulus.
+[3232.380 --> 3237.380] So basically, we set these model predictive control parameters so that the behavior of
+[3237.380 --> 3242.380] the actual vehicle the person was driving matches the behavior in the experiment.
+[3242.380 --> 3243.700] Now we have our features.
+[3243.700 --> 3248.020] It's basically the behavior of the vehicle projected into this model predictive control framework; there's a sketch of that step below.
+[3248.020 --> 3252.660] And now we can use those parameters to regress onto the brain to discover where in the brain
+[3252.660 --> 3255.180] these MPC features are represented.
+[3255.180 --> 3258.500] And this model is fit along with all the other 34 models.
+[3258.500 --> 3262.540] So what we're discovering here is unique variance that is attributed to this model predictive
+[3262.540 --> 3266.500] control framework and not to any of the other highly correlated variables that we looked
+[3266.500 --> 3267.500] at.
+[3267.500 --> 3271.500] And you can see that there are a lot of locations in the brain that this MPC model fits
+[3271.500 --> 3273.100] well.
+[3273.100 --> 3277.500] Up here in the motor system, this is probably variance shared with the controls.
+[3277.500 --> 3279.500] But there are locations that have unique variance, for example.
+[3279.500 --> 3281.500] This is Broca's area, which is actually a speech area.
+[3281.500 --> 3285.500] It's a classic speech area that goes back 150 years in neuroscience and psychology.
+[3285.500 --> 3290.500] And on both sides of it, you see these little punctate bright spots that are predicted by the
+[3290.500 --> 3295.500] model predictive control model, but no other model that we have fit to these data.
+[3295.500 --> 3300.500] So we're very excited about this because this is a much more interesting way to model cognitive
+[3300.500 --> 3302.500] variables than we've been using.
+[3302.500 --> 3305.500] And we think it has good legs for the future.
+[3305.500 --> 3306.500] All right.
+[3306.500 --> 3309.500] So in summary, I told you that active navigation is supported by distributed
+[3309.500 --> 3310.500] networks.
+[3310.500 --> 3317.500] Many findings in the rodent navigation literature end up being validated in this experiment.
+[3317.500 --> 3322.500] There are a lot of known navigation-related regions of interest, like the parahippocampal place
+[3322.500 --> 3331.500] area, that have been known in this literature.
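+A minimal sketch of the MPC feature-construction step referenced above: an MPC-style planner is rolled along the recorded trajectory and its internal quantities become time-varying regressors for the encoding model. The dynamics, cost function, and names here are toy stand-ins, not the actual model from Claire Tomlin's lab:
+
+    # Sketch: derive MPC-style features from a recorded driving trajectory.
+    import numpy as np
+    from scipy.optimize import minimize
+
+    def mpc_features(own_pos, lead_pos, horizon=5, dt=0.5, gap=10.0):
+        # own_pos, lead_pos: 1-D arrays of longitudinal positions over time (same length)
+        feats = []
+        v_lead = np.gradient(lead_pos, dt)
+        for t in range(len(own_pos) - 1):
+            v0 = (own_pos[t + 1] - own_pos[t]) / dt
+            def cost(accels):
+                x, v, c = own_pos[t], v0, 0.0
+                for k, a in enumerate(accels):
+                    v += a * dt
+                    x += v * dt
+                    lead_x = lead_pos[t] + v_lead[t] * dt * (k + 1)   # assume lead car keeps its speed
+                    c += (lead_x - x - gap) ** 2 + 0.1 * a ** 2       # track a target gap, penalize effort
+                return c
+            sol = minimize(cost, np.zeros(horizon), method="L-BFGS-B")
+            feats.append([sol.x[0], sol.fun])     # planned next acceleration, predicted cost
+        return np.asarray(feats)                  # (time, 2) feature matrix for the encoding model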
+[3331.500 --> 3336.500] But now we can see in this more sensitive data set that it actually consists of several +[3336.500 --> 3340.500] substructures or sub-areas. +[3340.500 --> 3346.500] We see that navigation leads to widespread shifts in semantic representation due to attention, +[3346.500 --> 3350.500] which we would have expected based on other attention experiments, but this is the first time it's +[3350.500 --> 3352.500] been shown in a naturalistic task. +[3352.500 --> 3357.500] And there are probably brain representations mediating multi-agent interactions using the +[3357.500 --> 3360.500] model predictive control framework. +[3360.500 --> 3362.500] But that's really preliminary data. +[3362.500 --> 3368.500] The next person you should listen to is Chris, who hopefully next year will be able to talk about this in more detail. +[3368.500 --> 3374.500] So for future directions, we are working hard to obtain a more fine-grained understanding of exactly +[3374.500 --> 3377.500] what is being represented in these navigation networks. +[3377.500 --> 3378.500] It's a very hard problem. +[3378.500 --> 3384.500] One promising future direction is to do exactly what Chris is doing, and we're working on that. +[3384.500 --> 3388.500] We're also going to look at navigation in open areas, like open fields. +[3388.500 --> 3392.500] And the reason for that is a large fraction of the rodent literature on navigation, which is where the best +[3392.500 --> 3398.500] data comes from, is all done in open arena, not in amaze. +[3398.500 --> 3402.500] But I do want to mention that this approach can be used for any video game environment. +[3402.500 --> 3407.500] In fact, originally our experiments that we started doing this with 10 years ago were using counter-strike. +[3407.500 --> 3412.500] And personally, I always want to do this with Grand Theft Auto because it just seems like that's the most open world game you can have. +[3412.500 --> 3417.500] So this is a generalizable framework and all our tools are open source. +[3417.500 --> 3419.500] So that's about it. +[3419.500 --> 3425.500] I'm not going to talk about medical things because we don't have time, so I'm just going to skip to the end. +[3425.500 --> 3427.500] But that's it. +[3427.500 --> 3428.500] Thanks very much for your time. +[3429.500 --> 3430.500] Wow. +[3430.500 --> 3440.500] Questions. I'm going to go to students first. +[3440.500 --> 3443.500] I always like to go to, here we go. +[3443.500 --> 3445.500] This is our microphone. +[3445.500 --> 3446.500] Wow. +[3446.500 --> 3456.500] Wait, because the driver is in a simulation and not the real world, is there any possibility that the data is different than if the driver was actually driving in a real car? +[3456.500 --> 3457.500] Totally. +[3457.500 --> 3462.500] You should think about the difference between a controlled experiment and the real world as a continuum. +[3462.500 --> 3470.500] And we've moved as far down that continuum as we can in MRI, but there's things left. +[3470.500 --> 3474.500] Unreal Engine doesn't look like the real world. +[3474.500 --> 3476.500] There's no vestibular input. +[3476.500 --> 3481.500] So when you're moving around the world, you're constantly getting vestibular input about your acceleration and your orientation. +[3481.500 --> 3482.500] We have none of that. +[3482.500 --> 3489.500] In fact, the person is lying down in their back, which is completely different from driving, unless you're really one of those relaxed drivers. 
+[3489.500 --> 3492.500] So there are going to be differences, right? +[3492.500 --> 3496.500] And we don't know what they are, and they're going to be very hard to sort out. +[3496.500 --> 3503.500] Because anytime, you know, I could collect brain data while people are driving in a real car, but to do that I would have to use EEG. +[3503.500 --> 3507.500] And EEG is a really low information method. +[3507.500 --> 3514.500] There are very, very few bits of information coming through EEG, so you're probably not going to be able to liken it to conclusions about how those data relate to these data. +[3514.500 --> 3516.500] Back here. +[3516.500 --> 3517.500] Yeah. +[3517.500 --> 3518.500] You're up. +[3518.500 --> 3520.500] Talk to the boss. +[3520.500 --> 3521.500] Talk to the boss. +[3521.500 --> 3530.500] So you mentioned at the beginning that artificial neural networks and particularly transformer networks have nothing to do with the brain. +[3530.500 --> 3532.500] Well, yeah, but that was a check statement. +[3532.500 --> 3533.500] Nothing. +[3533.500 --> 3538.500] I was just struck by your description of how attention works in the brain. +[3538.500 --> 3544.500] I sounded remarkably similar to how the attention mechanism works in transformers. +[3544.500 --> 3547.500] That's what the transformer people would like you to think. +[3547.500 --> 3548.500] Well, that. +[3548.500 --> 3550.500] Trans-former. +[3550.500 --> 3556.500] You know, it was a classic jack kind of overstatement for generalization for rhetorical purposes. +[3556.500 --> 3564.500] The transformer networks, there's been a bit of recent work on this trying to understand the relationship between transformer attention and attention attention. +[3564.500 --> 3568.500] And transformer attention does seem to be implementing some sort of grouping process. +[3568.500 --> 3577.500] And grouping processes are actually the purpose of attention in brains, right? +[3577.500 --> 3583.500] So all of intermediate vision in human, in mammalian brains is involved in segmentation and grouping. +[3583.500 --> 3587.500] Grouping the pieces together that need to be grouped and segmenting figure from ground. +[3587.500 --> 3589.500] That's all intermediate vision. +[3589.500 --> 3592.500] And that's clearly a very, potentially driven process. +[3592.500 --> 3596.500] So at that level, they are related. +[3596.500 --> 3606.500] But attention, I think, you know, again, if you think about attention as the learning component of training a network, I think they're very, very, very similar. +[3606.500 --> 3611.500] Because you can imagine a system of the human brain if you want to implement attention. +[3611.500 --> 3633.500] If you just change the gain, just change the gain of the weights at a peripheral level, say in primary visual cortex, as those small gain changes percolate up the system, and you pool it's excessively higher levels of processing, then what's going to happen is these small gain changes that are just like putting the volume control on different neurons are going to lead to representation changes at the higher level. +[3633.500 --> 3641.500] And to the extent that attention in neural networks, artificial neural networks, influence that same kind of thing, then yeah, they're, they're analogous. +[3641.500 --> 3645.500] Thank you. +[3645.500 --> 3649.500] But I think the box is going to go over there. You can yell at me right now. +[3649.500 --> 3651.500] You want to just yell? Now I'll repeat it. 
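+A toy numerical illustration of the gain argument made a moment ago, that small multiplicative gain changes at an early stage, pooled at a later stage, shift the higher-level representation without re-learning any weights. Purely illustrative numbers, not a model of cortex:
+
+    # Sketch: attention as a "volume control" on early responses.
+    import numpy as np
+
+    rng = np.random.default_rng(0)
+    stimulus = rng.random(100)                 # 100 low-level feature channels
+    W1 = rng.standard_normal((100, 100))       # fixed early weights
+    W2 = rng.standard_normal((10, 100))        # fixed pooling weights to a higher level
+
+    gain = np.ones(100)
+    gain[:50] *= 1.2                           # small attentional boost on half the channels
+
+    baseline = W2 @ np.tanh(W1 @ stimulus)
+    attended = W2 @ np.tanh(gain * (W1 @ stimulus))
+    print(np.corrcoef(baseline, attended)[0, 1])   # higher-level pattern shifts with the gain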
+[3651.500 --> 3654.500] No, no, no, we've got the microphone. It's being recorded.
+[3654.500 --> 3657.500] I'm not taking over. I'm not going to take over your job, Jeff.
+[3657.500 --> 3666.500] Okay. When you mentioned attention changes weights, did you mean that attention changes synaptic weights?
+[3666.500 --> 3667.500] Ah.
+[3667.500 --> 3670.500] And if that's the case, where's the evidence for it?
+[3670.500 --> 3674.500] Yeah, nobody knows. Nobody knows what attention does or how it works.
+[3674.500 --> 3679.500] So there are multiple theories for how attention works in the brain.
+[3679.500 --> 3684.500] One is that it somehow or other changes the synaptic efficacy of neurons.
+[3684.500 --> 3698.500] Another is that essentially there's a whole set of modulatory channels that come in and essentially multiply fixed weights with variable weights and change the computation that way.
+[3698.500 --> 3703.500] Nobody, nobody knows. So when I said attention changes the weights, I meant purely in this model space.
+[3703.500 --> 3705.500] Oh, okay. Yeah.
+[3706.500 --> 3715.500] So you mentioned that, well, we lack vestibular input with the current experiment.
+[3715.500 --> 3716.500] Yeah.
+[3716.500 --> 3722.500] Would EEG over a long duration of time not suffice? How does that work?
+[3722.500 --> 3726.500] So, I like "a long duration of time," right?
+[3726.500 --> 3733.500] If you have a lousy method of measurement, kind of the best thing you can do is collect a really large data set.
+[3733.500 --> 3738.500] So EEG over a longer period of time would be way better than a short EEG experiment.
+[3738.500 --> 3744.500] But you're still always going to be limited because, remember, fMRI is a volumetric measure.
+[3744.500 --> 3750.500] It's basically measuring the bulk tissue.
+[3750.500 --> 3761.500] And the bulk tissue has been spatially encoded by applying these gradients that allow you to use the Fourier transform to infer the spatial position of the different signals.
+[3762.500 --> 3768.500] So it's sophisticated: MRI is a two-way street. You put in an encoded signal.
+[3768.500 --> 3771.500] And the encoded signal is multiplexed with what's already going on in the brain.
+[3771.500 --> 3775.500] And then you can decode the signal and recover a lot of information.
+[3775.500 --> 3779.500] EEG is a one-way street. It's purely passive. You're not putting anything in.
+[3779.500 --> 3782.500] You're purely measuring things. So there's no coding that goes on.
+[3782.500 --> 3788.500] And EEG is essentially a two-dimensional sheet overlying a three-dimensional volume.
+[3788.500 --> 3795.500] So not only do you not encode anything, but you now have this problem of making surface measurements of a volume.
+[3795.500 --> 3799.500] And the skull acts like a big low-pass filter and filters out most of the EEG signals.
+[3799.500 --> 3803.500] So EEG has loss and loss and loss at every level.
+[3803.500 --> 3808.500] And the bits of information per unit time, per graduate student, you get from EEG are vanishingly small.
+[3808.500 --> 3817.500] Pretty much the main thing you see in EEG signals is giant sets of these brain networks being switched in and out as the tasks change.
+[3818.500 --> 3823.500] It's really exciting to see more naturalistic behaviors being brought into the scanner.
+[3823.500 --> 3830.500] I cannot imagine the kind of engineering problems that you all had to solve to make that work with motion, etc.
+[3830.500 --> 3837.500] So very nice to see that kind of work and really nice to see how task modulates representations in only some ways.
+[3837.500 --> 3841.500] That's so positive. I feel like the next thing is going to be really horrible.
+[3841.500 --> 3842.500] I mean, you're not wrong.
+[3843.500 --> 3851.500] Well, the point is I just want to ask you to kind of follow through on the promise in the title.
+[3851.500 --> 3858.500] Reverse engineering. I haven't seen anything that would lead me to believe that you would be able to reverse engineer anything in this system from the data you showed today.
+[3858.500 --> 3863.500] I love it. I do want to point out it's "toward" reverse engineering.
+[3863.500 --> 3866.500] So specifically.
+[3867.500 --> 3872.500] So all I have to do is just make sure the vector is pointing in that direction.
+[3872.500 --> 3875.500] Yeah, it's really hard.
+[3875.500 --> 3881.500] You guys already know that if you have just any neural network, you know, GPT.
+[3881.500 --> 3884.500] How does GPT actually work? Try to reverse engineer GPT.
+[3884.500 --> 3886.500] Good luck. It's really, really hard.
+[3886.500 --> 3890.500] We have that problem, but we also don't have any data.
+[3890.500 --> 3895.500] At least with GPT, you essentially, you know, have an infinite amount of time to look at that network.
+[3895.500 --> 3898.500] You could do whatever you want. We don't have that.
+[3898.500 --> 3900.500] We've got like an hour's worth of data from this stupid thing.
+[3900.500 --> 3903.500] It's really, really hard. Please don't tell my funders.
+[3903.500 --> 3907.500] This is a fundamentally impossible problem.
+[3907.500 --> 3915.500] So when I was an undergraduate, we had a famous researcher, Dr. Wilder Penfield,
+[3915.500 --> 3920.500] who was also, he was a surgeon, but he was also a psychologist.
+[3920.500 --> 3925.500] And he cut open people's brains and he played music.
+[3925.500 --> 3932.500] And different electrical signals would appear in different parts of the brain as the music played.
+[3932.500 --> 3936.500] It seems like, and that was like almost 100 years ago. That was a long time ago.
+[3936.500 --> 3940.500] So it seems like you're doing the most modern version.
+[3940.500 --> 3946.500] Yes. And you don't have to open anyone's brain up because you have MRI.
+[3946.500 --> 3948.500] Yes. Is that where we are?
+[3948.500 --> 3953.500] Yes. So one of my joke names for MRI is functional hemophrenology.
+[3953.500 --> 3956.500] Fundamentally, for those of you who don't remember phrenology,
+[3956.500 --> 3960.500] phrenology is this widely and well-deservedly discredited method from the 19th century,
+[3960.500 --> 3966.500] where people thought you could basically look at the bumps on people's heads to infer
+[3966.500 --> 3969.500] what their brain was good at and what it was bad at.
+[3969.500 --> 3973.500] And that meant, if that were true, if you were a baseball player and had really good vision,
+[3973.500 --> 3975.500] you'd have a big bump over the visual system.
+[3975.500 --> 3977.500] I'll give you the last one.
+[3977.500 --> 3983.500] This is really, you know, that idea was crazy in one sense and not crazy in another.
+[3983.500 --> 3986.500] It's crazy to think that the bumps on your head are going to tell you anything about the brain.
+[3986.500 --> 3991.500] But the fact that the brain is localized, that there are structures that represent certain kinds of information, +[3991.500 --> 3996.500] that is clear from from the infield and all the subsequent work, right? +[3996.500 --> 4001.500] If you get a brain lesion in certain brain areas, you will lose that function. +[4001.500 --> 4005.500] If you have a stroke and it affects your visual cortex, you will go blind. +[4005.500 --> 4010.500] In other brain areas, you have a stroke there, you just kind of get worse at anything, +[4010.500 --> 4013.500] at everything. Why? Because it's a hugely connected network. +[4013.500 --> 4019.500] And if a little piece gets taken out, there are other things that you can compensate for it, right? +[4019.500 --> 4022.500] And there are other pathways for the information closing the network. +[4022.500 --> 4026.500] So some brain areas are very specialized, some brain areas are not at all specialized, +[4026.500 --> 4033.500] some brain areas are not affected by attention at all, some brain areas are completely affected by attention. +[4033.500 --> 4038.500] It's a mottly bag. But yeah, we're essentially just enumerating here, right? +[4038.500 --> 4043.500] If there are, if there are say, 500 brain areas, it's probably more than we need, +[4043.500 --> 4046.500] and each brain area is representing, you know, 100 dimensions. +[4046.500 --> 4050.500] Well, okay, I know how many dimensions I need to recover from 50,000 dimensions, right? +[4050.500 --> 4054.500] It's an enumerable problem. +[4054.500 --> 4061.500] Ah, yes. Is there any work in turning EEG into F-M-R-I? +[4061.500 --> 4068.500] F-M-R-E-E-G by inputting the currents into the brain. +[4068.500 --> 4069.500] Into the brain. +[4069.500 --> 4075.500] Into the brain. I know there's a lot of experiments amateurily with putting, like, +[4075.500 --> 4076.500] 1.1. +[4076.500 --> 4078.500] Yes, yes, yes, yes. +[4078.500 --> 4079.500] Okay, that's an old question. +[4079.500 --> 4081.500] Putting in the signal on the B-B-B. +[4081.500 --> 4084.500] Yeah, yeah. So can you put signals into the brain? +[4084.500 --> 4088.500] The answer is yes. You know, you could do that if you want. +[4088.500 --> 4091.500] The answer is can you control anything, right? +[4091.500 --> 4093.500] So, um, can you interrogate in that way? +[4093.500 --> 4098.500] Yeah, so, so, so think of it the way I like to talk about it this way. +[4098.500 --> 4101.500] Imagine you guys are all engineers, so, you know, probably when you were five years old, +[4101.500 --> 4105.500] you, like, took a part in a TV or radio and like started looking inside it, +[4105.500 --> 4107.500] trying to figure out what the circuits were. +[4107.500 --> 4111.500] And you can imagine if you might find like a circuit, if you just have a voltmeter that's, you know, +[4111.500 --> 4114.500] correlated with like the brightness of the TV, okay, fine. +[4114.500 --> 4116.500] But imagine now you said, I'm going to make the TV really bright. +[4116.500 --> 4119.500] I'm going to put in a bunch of current into the circuit and see what happens. +[4119.500 --> 4121.500] The TV is probably going to blow up. +[4121.500 --> 4124.500] And that's mostly what happens when you put, when you put signal in the brain. +[4124.500 --> 4127.500] So there's a method called transcranial magnetic stimulation, +[4127.500 --> 4130.500] which is essentially causes temporary brain lesions. 
+[4130.500 --> 4134.500] And there are older methods, like electroconvulsive therapy,
+[4134.500 --> 4137.500] which is, you've heard about it, probably seen it in, you know,
+[4137.500 --> 4140.500] One Flew Over the Cuckoo's Nest, which is putting a giant voltage into the brain.
+[4140.500 --> 4143.500] And that's the human analog of turn it off and turn it on again,
+[4143.500 --> 4145.500] which we all know is the only way to fix the computer.
+[4145.500 --> 4148.500] And so it works for humans, too.
+[4148.500 --> 4152.500] And there are other things you can do, which is putting in more subtle currents.
+[4152.500 --> 4157.500] So there have been a lot of attempts over the past 20 years to put in just subtle currents,
+[4157.500 --> 4161.500] say between, you know, prefrontal cortex and some other cortical area,
+[4161.500 --> 4165.500] with the idea being that there's already a recurrent loop there.
+[4165.500 --> 4169.500] And if you can lower the membrane potential of the circuits in this recurrent loop,
+[4169.500 --> 4173.500] you can actually increase the amount of activity in that recurrent loop.
+[4173.500 --> 4176.500] And if that recurrent loop is transmitting information or modulating information
+[4176.500 --> 4180.500] that you need to do a task, it might improve the task.
+[4180.500 --> 4184.500] I am agnostic about whether the stuff works or not.
+[4184.500 --> 4188.500] I think there's a lot of evidence that it probably does something,
+[4188.500 --> 4192.500] tDCS, but it's really a work in progress, because the voltages are very, very low.
+[4192.500 --> 4196.500] The effects are very variable, and it's still a work in progress at this point.
+[4196.500 --> 4200.500] So you can put stuff in, but it's not something you want to do.
+[4200.500 --> 4204.500] Okay, I think we can stop here. Thanks very much for your time.
+[4204.500 --> 4205.500] Thanks for your time.
+[4214.500 --> 4216.500] Thank you.