SaifPunjwani committed on
Commit
c1686ae
1 Parent(s): 5ae8d68

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes.   See raw diff
Files changed (50)
  1. transcript/allocentric_-PmPWc_n_0s.txt +714 -0
  2. transcript/allocentric_1-JE7NSi6Fk.txt +112 -0
  3. transcript/allocentric_1by5J7c5Vz4.txt +46 -0
  4. transcript/allocentric_2lfVFusH-lA.txt +892 -0
  5. transcript/allocentric_2vwQyeV-LQ4.txt +774 -0
  6. transcript/allocentric_4F3xCBcsLFg.txt +144 -0
  7. transcript/allocentric_4_5dayHDdBk.txt +49 -0
  8. transcript/allocentric_4nAcRL-6ujk.txt +389 -0
  9. transcript/allocentric_4nCR3yBBCHE.txt +2 -0
  10. transcript/allocentric_7Dga-UqdBR8.txt +180 -0
  11. transcript/allocentric_8O3FC86WjWU.txt +432 -0
  12. transcript/allocentric_CISLJ2xL7UY.txt +602 -0
  13. transcript/allocentric_HAnw168huqA.txt +558 -0
  14. transcript/allocentric_HlEWIAiqSoc.txt +160 -0
  15. transcript/allocentric_I2azLvESwDY.txt +355 -0
  16. transcript/allocentric_I6IAhXM-vps.txt +21 -0
  17. transcript/allocentric_IhITqkNTaNo.txt +4 -0
  18. transcript/allocentric_JFkHlqLIuD8.txt +0 -0
  19. transcript/allocentric_Ks-_Mh1QhMc.txt +210 -0
  20. transcript/allocentric_M5i5c9kNbOQ.txt +46 -0
  21. transcript/allocentric_MuRVOQY8KoY.txt +0 -0
  22. transcript/allocentric_OOpVTlrTYXw.txt +55 -0
  23. transcript/allocentric_OdFJuKhtBWU.txt +531 -0
  24. transcript/allocentric_P7Q2fE4Qm2w.txt +35 -0
  25. transcript/allocentric_Q1Tczf8vxCM.txt +162 -0
  26. transcript/allocentric_Qpa0nrKPYgc.txt +711 -0
  27. transcript/allocentric_RSlc9IxdBw8.txt +268 -0
  28. transcript/allocentric_T6INaET_Lnw.txt +6 -0
  29. transcript/allocentric_UTiFshG_xuk.txt +3 -0
  30. transcript/allocentric_UpupNS6aF7o.txt +864 -0
  31. transcript/allocentric_VGSDUFAtf1E.txt +216 -0
  32. transcript/allocentric_WmtANkx6Bok.txt +24 -0
  33. transcript/allocentric_WwYDMpD7j4Q.txt +178 -0
  34. transcript/allocentric_XhhkhpK-3L4.txt +209 -0
  35. transcript/allocentric_YSd6nSYr2ZA.txt +6 -0
  36. transcript/allocentric_YrMiKxPV_Ig.txt +0 -0
  37. transcript/allocentric_Z550DeGoTgU.txt +435 -0
  38. transcript/allocentric_Z8ckbP8bHSs.txt +447 -0
  39. transcript/allocentric_Zd71719_G8Y.txt +65 -0
  40. transcript/allocentric__n_vDvne5yo.txt +181 -0
  41. transcript/allocentric_akfatVK5h3Y.txt +47 -0
  42. transcript/allocentric_bQLya0OLd2A.txt +45 -0
  43. transcript/allocentric_c-N8Qtz_g-o.txt +0 -0
  44. transcript/allocentric_cM4ISxZYLBs.txt +0 -0
  45. transcript/allocentric_csaYYpXBCZg.txt +63 -0
  46. transcript/allocentric_d_J9UxKBl7o.txt +72 -0
  47. transcript/allocentric_eK3T5UIwr3E.txt +1084 -0
  48. transcript/allocentric_ePP0G7FJGPI.txt +317 -0
  49. transcript/allocentric_fLaslONQAKM.txt +194 -0
  50. transcript/allocentric_gLUcuv2PxuU.txt +7 -0
transcript/allocentric_-PmPWc_n_0s.txt ADDED
@@ -0,0 +1,714 @@
1
+ [0.000 --> 8.040] I'm very pleased to introduce Monica Harvey, who is visiting us from the University of Glasgow. Many of you probably
2
+ [8.040 --> 16.800] already know Monica. She has done her PhD here at St Andrews in 1980. Long time ago.
3
+ [16.800 --> 27.080] Well, she did her PhD with David Milner who at that time was launching his very influential
4
+ [27.080 --> 34.520] theory about the dual stream of visual processing. So people who study visual perception know
5
+ [34.520 --> 43.880] that visual stimuli are processed along two separate streams in the brain. And it's been discussed
6
+ [43.880 --> 51.320] for a long time how one could best characterize these streams. And it is David Milner and Melvin
7
+ [51.320 --> 57.480] Gute and contribution to propose that one stream is vision for action. The other stream is vision for
8
+ [57.480 --> 66.840] perception. We've had this theory for more than two decades now and it has now again come into
9
+ [66.840 --> 74.040] debate because it has been found that some patients who are believed to have just one type of
10
+ [74.040 --> 84.600] deficit, perception or action actually also show signs that the other function could be impaired as
11
+ [84.600 --> 92.360] well. So it's my pleasure to invite Monica in to hear the newest research on this topic and let's see
12
+ [92.360 --> 98.600] what has happened with the dual stream of visual processing of David Milner. Okay, thank you very
13
+ [98.600 --> 101.880] much Daniela and I think this is when you find out that really you know research doesn't actually
14
+ [101.880 --> 106.040] progress and I basically we were talking the same talk that I kind of gave 20 years ago. I hope not
15
+ [106.040 --> 111.160] but you know we'll see. So it's actually really nice that Daniela has kind of given this kind of
16
+ [111.160 --> 116.200] introduction about sort of talking about different pathways sort of for perception and action and
17
+ [116.200 --> 120.200] there are really different ways of kind of thinking about the brain. And one of the ways which is
18
+ [120.200 --> 124.440] quite different from the way that sort of I kind of think about it because my thinking as you can
19
+ [124.440 --> 130.200] imagine comes very much from David Milner having been his PhD student. But people also think about
20
+ [130.200 --> 134.680] action and perception in terms of common coding, and Prinz and really people around him have been
21
+ [134.680 --> 139.720] very influential in this kind of way of kind of looking at stuff. And the argument here is really
22
+ [139.720 --> 145.240] that perception representations are stored together with the actions that they elicit. And so what
23
+ [145.240 --> 149.400] this actually means is that the recognition of an object will automatically activate an associated
24
+ [149.400 --> 154.280] action. So that's very much you know the viewpoints sort of of the common coding theory. So what
25
+ [154.280 --> 159.240] does actually mean is that if we initiate an action sequence we actually work backwards from the
26
+ [159.240 --> 164.360] desired perceptual effect. And this then triggers the sequence of the actions that we need to execute
27
+ [164.360 --> 170.440] to achieve the effect. So put very simply seeing an object, ultimately activates automatically activates
28
+ [170.440 --> 175.640] the action. So you see an action is kind of the same. So from the perceptual system we then act.
29
+ [175.640 --> 180.200] And this is very much the sort of common coding theory background you know two perceptual action.
30
+ [181.720 --> 185.000] In relation to that and sort of really quite differently there's obviously the dual-route model
31
+ [185.000 --> 189.400] of visual processing. And Milner and Goodale's model is not the only model and really confusingly
32
+ [189.400 --> 193.880] in the second part of my talk I talk about another dual-route model. But there are sort of different
33
+ [193.880 --> 197.720] ways of thinking. So if you believe more sort of in the dual-route models what you would actually
34
+ [197.720 --> 203.320] argue is that despite the sense of a unified visual experience that we all have
35
+ [203.880 --> 207.560] there are actually two different pathways in the brain and they're functionally different
36
+ [207.560 --> 212.600] and they're anatomically different. And what that actually means is that vision for perception
37
+ [212.600 --> 216.200] on the one hand is independent of vision for action. So we basically have different systems.
38
+ [216.200 --> 220.200] So we have a vision for perception system and we have a vision for action system.
39
+ [221.400 --> 225.000] And I would actually argue that depending on which viewpoints you're actually coming from
40
+ [225.000 --> 229.560] and your interpretation first of all of the neuroscientific data and the interpretation of
41
+ [229.560 --> 233.800] neuropsychological deficits is actually quite different. So this is actually important to kind of
42
+ [233.800 --> 239.320] make these kind of distinctions. And I think if you think about sort of in particular neuropsychological
43
+ [239.320 --> 243.480] disorders in terms of dual-route models and in terms of differences between perception and
44
+ [243.480 --> 248.200] actions that actually can be quite informative. And one of the disorders I think this way of
45
+ [248.200 --> 254.360] thinking is actually very informative for is hemispatial neglect. I'm kind of assuming that most
46
+ [254.360 --> 259.240] of you know what hemispatial neglect is. If you don't it's generally described as a failure to
47
+ [259.240 --> 265.960] report, respond or orient to stimuli on the side opposite the brain lesion. It usually occurs after
48
+ [265.960 --> 270.600] a lesion to the right half of the brain to the right hemisphere. And if you then ask people to
49
+ [270.600 --> 275.480] sort of perform certain visual tasks they tend to ignore objects on the left hand side. And this
50
+ [275.480 --> 280.360] is kind of an example of a classic neglect assessment test called the behavioral inattention test
51
+ [280.920 --> 285.320] where you ask patients to cancel out all the small lines that they can see. And the patients are
52
+ [285.320 --> 290.040] quite capable of understanding the task. They're just kind of ignoring the items on the left here.
53
+ [290.840 --> 295.160] And this is just an example of a patient sort of performance.
54
+ [298.760 --> 303.400] Yes, I do the sound isn't actually working just now. So effectively this is just me demonstrating
55
+ [303.400 --> 307.640] to the patient what I want him to do. And as you can see there's no problem with actually
56
+ [307.640 --> 312.600] understanding the task which is just to cross out all the small lines, all the lines in fact on
57
+ [312.600 --> 317.800] this one. But what you notice is that the patient's head is very much kind of deviated to the right
58
+ [317.800 --> 323.000] hand side. So all the kind of attention is focused on the right. He actually starts you know
59
+ [323.000 --> 326.280] penciling out the lines from the right hand side of the page, you know rather than from the left
60
+ [326.280 --> 339.160] which is more commonly what you and I would do. And then another thing I think what you also
61
+ [339.160 --> 344.040] notice here is that lines are crossed out repeatedly. So rather than moving over to the lines on
62
+ [344.040 --> 348.600] the left, the patient actually repeatedly sort of cancels out the lines on the right. And
63
+ [348.600 --> 352.440] that you can actually sort of see this online. So this is actually a sort of stroke training and
64
+ [352.440 --> 358.120] awareness module which was kind of really designed for people who kind of work with stroke and
65
+ [358.120 --> 361.960] want to find out more about visual disorders. So people who work with stroke, you know,
66
+ [361.960 --> 366.360] kind of on the wards who want to gain a little bit more insight in terms of the kind of symptomatology
67
+ [366.360 --> 370.600] that you get after occipital and parietal strokes. So there are modules there kind of describing
68
+ [370.600 --> 374.920] hemispatial neglect and also describing hemianopia. It's a freely available online
69
+ [374.920 --> 378.760] module which could maybe be quite useful for teaching. So if you go to this website you can actually
70
+ [378.760 --> 385.160] download those movies. And really just to put neglect you know generally sort of into the sort of
71
+ [385.160 --> 390.280] framework sort of of the NHS. It's actually quite frequent after right hemisphere lesions. It
72
+ [390.280 --> 396.280] affects up to 80% of people with right hemisphere lesions initially straight after the stroke. It's the
73
+ [396.280 --> 400.920] strongest single predictor of poor functional recovery after right hemisphere stroke. So people
74
+ [400.920 --> 404.920] who suffer from neglect do spend much longer time in hospital. They're much more likely to end up
75
+ [404.920 --> 410.200] in nursing home compared to being released you know back to their homes. And therefore the cost
76
+ [410.200 --> 414.360] really to the NHS is really quite great and they're actually great need to kind of obviously try
77
+ [414.360 --> 420.520] and rehabilitate the actual symptom. It really in line with this and quite impressingly people
78
+ [420.520 --> 424.920] have actually tried to come up with effective treatments for hemispatial neglect and so far if
79
+ [424.920 --> 429.640] you look at clinical trials and the sort of SIGN guidelines and the Bowen Cochrane reviews
80
+ [429.640 --> 434.120] actually look at studies which are around in the literature and evaluate them for their effectiveness
81
+ [434.680 --> 438.680] at present no recognized treatment actually exists that can be recommended to be applied
82
+ [438.680 --> 442.840] in a clinical setting. So there doesn't really seem to be any effective way at the moment actually
83
+ [442.840 --> 448.120] rehabilitating neglect. And I come back to this a little bit later what's also quite relevant I
84
+ [448.120 --> 453.960] think for the actual disorder is to look at the classical lesion location, and the lesion location
85
+ [453.960 --> 458.760] that's most typically been demonstrated to cause hemispatial neglect is lesions in the
86
+ [458.760 --> 464.200] inferior parietal lobe, Brodmann areas 39 and 40, and also lesions in the superior temporal sulcus
87
+ [464.200 --> 468.200] and Otto Karnath was actually quite instrumental in actually sort of also implicating
88
+ [468.200 --> 472.360] more temporal areas you know as being a sort of common denominator of sort of causing hemispatial
89
+ [473.080 --> 478.760] neglect. Just briefly say a little bit about the assessment of neglect you've seen one example
90
+ [478.760 --> 483.640] already before where the patients are asked to cancel out all the small stars and you kind of see
91
+ [483.640 --> 488.120] this bias. Again this is the example I showed you in the video where the patients are asked to
92
+ [488.120 --> 491.560] cross out all the lines you see this sort of repeated crossing out of lines on the right
93
+ [492.120 --> 497.080] and not of lines kind of on the left. Similar idea here the patients are required to cross out all
94
+ [497.080 --> 502.040] the Es and Rs; they tend not to forget you know the letters that they have to cross out so memory
95
+ [502.040 --> 506.120] impairments are not particularly dominant in hemispatial neglect although there are other people
96
+ [506.120 --> 510.440] who kind of make more of a case about memory disorders also being a prominent feature.
97
+ [510.600 --> 517.480] If you ask people to sort of mark lines in the center you get this typical rightward
98
+ [517.480 --> 521.960] deviation. I'll say a little bit more about sort of bisection and landmark behavior in the
99
+ [521.960 --> 527.720] second part of my talk. This is sort of copying behavior, copying for memory. The important thing I
100
+ [527.720 --> 532.200] think here is that these are all subtests of the behavioral inattention test which is
101
+ [532.200 --> 537.320] effectively a standardized clinical tool and what's quite nice about this is that you get a
102
+ [537.320 --> 541.880] cut-off score sort of indicating whether neglect is present for each of these individual subtests
103
+ [541.880 --> 547.560] but also for all subtests put together so you kind of have some idea of sort of is neglect present
104
+ [547.560 --> 552.040] or not and just what's the severity of the disorder. I think one of the problems with this test is
105
+ [552.040 --> 555.960] it's weighted in favor of perception so you're purely looking at you know the perceptual
106
+ [555.960 --> 561.640] problems that these patients have and I think this is kind of interesting when we then come back
107
+ [561.640 --> 568.200] again sort of to this kind of dual route model of perception and action and if we kind of look
108
+ [568.200 --> 571.560] at the dual route model what people will actually what Milner and Goodale have actually said and
109
+ [571.560 --> 575.640] that they kind of really started thinking about neglect in this way really when I kind of started my
110
+ [575.640 --> 581.960] PhD here they were actually saying you know we we know already that our perception of the world
111
+ [581.960 --> 586.760] is very much kind of mediated you know by the visual ventral stream and our action of the world
112
+ [586.760 --> 590.120] is very much mediated by the visual dorsal stream and this is kind of what the dual route
113
+ [590.120 --> 594.920] model is all about and other people you know who are kind of based more on the common coding frame
114
+ [594.920 --> 599.320] kind of very much disagree with it but this is kind of the argument and they're actually saying if
115
+ [599.320 --> 604.280] we look at this model's in relation to neglect what we already know is if you look at the critical
116
+ [604.280 --> 608.600] lesion site we already know that the critical lesion site is either in the inferior parietal
117
+ [608.600 --> 613.000] lobe or in the superior temporal lobe these kind of areas here so the dorsal stream as such is
118
+ [613.000 --> 619.000] actually spared in hemispatial neglect so what we might be able to infer from that is that all
119
+ [619.000 --> 622.520] these perceptual difficulties which I've just described to you which are quite dramatic in these
120
+ [622.520 --> 628.280] patients might not necessarily be reflected in their actions so they might actually be able to
121
+ [628.280 --> 632.840] act on objects and interact with objects because that's what the visual dorsal stream is responsible
122
+ [632.840 --> 638.200] for whereas on the other hand they completely fail to perceive objects they have perceptual problems
123
+ [638.200 --> 642.920] but there could be a dissociation and maybe this is kind of important and can be exploited
124
+ [644.040 --> 648.760] so just to kind of investigate this a little bit more one of the confusions which I think often
125
+ [648.760 --> 653.640] arises in relation to the sort of perception and action model is what people actually understand
126
+ [653.640 --> 658.200] as actions which the dorsal visual stream is implicated in, and Milner and Goodale are
127
+ [658.200 --> 662.040] actually quite specific about this because they're basically saying the dorsal stream is actually
128
+ [662.040 --> 667.960] implicated when stimuli are presented in the here and now on the other hand as soon as time is
129
+ [667.960 --> 672.680] allowed to pass or an explicit perceptual mapping has to be made then the ventral stream is required for
130
+ [673.160 --> 677.880] successful performance so therefore if we're now looking at the syndrome of hemispatial
131
+ [677.880 --> 683.640] neglect, if we do find any kind of action impairments they should mainly affect offline action control
132
+ [683.640 --> 687.720] so when the patients are allowed to directly interact with objects they should actually be okay
133
+ [688.920 --> 693.800] so really from this model you can make very specific predictions so we would actually expect
134
+ [693.800 --> 698.120] neglect patients to show some spared immediate pointing so when they can directly interact with
135
+ [698.120 --> 702.440] the objects there should be okay even in that space or even in the space where they show these
136
+ [702.440 --> 709.240] all dramatic perceptual problems. On the other hand if we look at sort of tasks which action tasks
137
+ [709.240 --> 712.360] which aren't directly interacting with the objects for example if you're looking at delayed
138
+ [712.360 --> 716.840] pointing or anti pointing that's where problems should occur and we've done a whole series of
139
+ [716.840 --> 721.560] experiments really sort of looking at these specific predictions and I just show you one
140
+ [721.560 --> 726.520] set of experiments which really makes this point so we actually run an experiment where we
141
+ [726.520 --> 733.480] compared in neglect patients pro-pointing, where patients were simply asked to reach for targets
142
+ [733.480 --> 737.800] which were presented in different spatial locations and they just had to reach directly to the target
143
+ [738.600 --> 743.720] and we compared that to an anti pointing task where the patients were asked to reach for a target
144
+ [743.720 --> 748.200] which was actually presented here but then basically perform a mirror image reach to the
145
+ [748.200 --> 752.280] exact location on the other side so for left targets they had to reach to the equivalent position
146
+ [752.360 --> 755.800] on the right for right targets to the equivalent position on the left.
147
+ [757.880 --> 761.960] We then looked at the we did some sort of typical a lesion analysis so we mapped all the lesions
148
+ [761.960 --> 767.800] onto a sort of T1 weighted image using MRIcro software which is kind of the standard way now for
149
+ [767.800 --> 772.760] a sort of mapping lesions and we then also performed a lesion symptom mapping where we were trying
150
+ [772.760 --> 779.080] to actually associate more specifically the lesion location with the specific behavioral symptoms
151
+ [779.720 --> 784.840] which we expected the patients to show on some of the tasks but first of all just to show you a
152
+ [784.840 --> 789.000] picture of the kind of patients that we used the first of all we had a control group which were
153
+ [789.000 --> 793.160] right in this phase in patients without neglect this is the kind of the lesions overlay of those
154
+ [793.160 --> 797.720] of patients in the different slices they're never and this is kind of important they never really
155
+ [797.720 --> 802.200] showed neglect at any time so we tested them sort of quite soon after this talk the neglect was
156
+ [802.200 --> 808.520] never present this was the group of right in this phase in patients who showed neglect traditionally
157
+ [808.600 --> 812.920] patients with neglect tend to have larger lesions as well and I can just say that now the size
158
+ [812.920 --> 817.160] of the lesion really didn't have any implications in the behavioral impairments that they showed but
159
+ [817.160 --> 822.200] these patients which show larger lesions very much in line with other studies the patients all
160
+ [822.200 --> 826.360] were impaired they were at least impaired on one of the neglect tests most of the patients were
161
+ [826.360 --> 830.760] actually impaired on all of the neglect tests the BIT is the one I showed you before
162
+ [830.760 --> 834.440] line bisection you just look at bias and the Balloons, a similar visual search task
163
+ [834.840 --> 839.480] and again just to kind of show the either kind of looking in a pretty much sort of classic group
164
+ [839.480 --> 845.720] of neglect patients so if overall we just sort of subtracted the lesions of the patients with neglect
165
+ [845.720 --> 850.200] from the patients who just had right hemisphere lesions without neglect the critical lesion
166
+ [850.200 --> 854.200] sites were again the inferior parietal lobe and the superior temporal gyrus very much you know
167
+ [854.200 --> 857.880] those are the lesions which are generally implicated in neglect that doesn't show you tell you
168
+ [857.880 --> 865.480] much about task behavior so not to go back to the actual task first of all what do the patients
169
+ [865.480 --> 869.960] show what kind of behavior do the patients show in the pro-pointing task so when they can point
170
+ [869.960 --> 875.400] directly to targets on the left and the right and be compared first of all this is the neglect
171
+ [875.400 --> 879.640] group here this is the right hemisphere lesion control group we also had healthy controls
172
+ [879.640 --> 884.440] these are people who were perfectly healthy sort of matched in age and hopefully as you can all
173
+ [884.440 --> 888.920] see in the pro-pointing condition the neglect patients were absolutely perfect just like everyone
174
+ [888.920 --> 892.760] else so there was absolutely no difference between how they were could how well they could reach
175
+ [892.760 --> 896.760] to targets compared to the various control groups even on the left there was no difference between
176
+ [896.760 --> 900.120] left and right and remember the left space is really the space where they show all these
177
+ [900.120 --> 904.120] perception problems but if they're reaching for an object they're actually very good at that
178
+ [905.640 --> 909.960] in the anti-pointing task really dramatically different result first of all if you look again at
179
+ [909.960 --> 914.840] the two control groups people are slightly worse it's a harder task you have to kind of identify
180
+ [914.840 --> 921.240] the location and then kind of remap remap it onto the equivalent position but the neglect patients
181
+ [921.240 --> 925.560] actually found this very very much harder than both of the control groups so they were in all four
182
+ [925.560 --> 931.320] in all the special locations they were dramatically and significantly impaired and we also found
183
+ [931.320 --> 936.520] a positive correlation with neglect severity so the stronger the neglect the greater the areas that
184
+ [936.520 --> 940.280] they're performed so the more they kind of deviated you know from the position that there really
185
+ [940.280 --> 947.000] should be pointing to. And if we then sort of perform the sort of voxel-based lesion mapping
186
+ [947.000 --> 952.360] to then say you know for this fairly dramatic anti-pointing accuracy impairment what are
187
+ [952.360 --> 957.480] the voxels which are kind of critically implicated and sort of driving this impairment and what we
188
+ [957.480 --> 961.960] found here is that apart from the inferior parietal lobe and the superior temporal gyrus we also found the
189
+ [961.960 --> 967.640] middle temporal gyros kind of implicated in mediating being responsible for that kind of behavior so
190
+ [967.640 --> 973.000] these were the lesions to the classically associated you know with the anti-pointing inaccuracy those
191
+ [973.000 --> 979.080] are the power of the compel gyros so what can you conclude sort of from this sort of behavioral
192
+ [979.800 --> 984.360] experiment so first of all it seems to be that neglect patients are kind of unimpaired and
193
+ [984.360 --> 989.720] pro-pointing and this is actually in line with a range of reasons other studies have kind of really
194
+ [989.720 --> 994.520] demonstrated sort of similar behaviors when neglect patients are allowed to interact directly with
195
+ [994.520 --> 1001.080] objects so it seems that online action control is relatively unaffected so we can hopefully argue
196
+ [1001.080 --> 1006.360] from this that we already know that the visual dorsal stream is unimpaired in terms of anatomy
197
+ [1006.360 --> 1009.880] it now seems to be that we can argue it's also unimpaired in terms of function so function is
198
+ [1009.880 --> 1016.280] really seems to be okay but what we did find is that the neglect patients presented greater errors in
199
+ [1016.280 --> 1021.560] the endpoint accuracy of the anti-movements and there seem to be therefore suffering from a
200
+ [1021.560 --> 1026.440] deficit in detecting and transforming a splitted spatial mapping spatial representation for
201
+ [1026.440 --> 1030.040] remapping because of course what you have to do in the anti-pointing task you have to identify
202
+ [1030.040 --> 1034.600] the location of the target and then we map it onto the opposite side and then reach towards that
203
+ [1034.600 --> 1040.600] side and this is clearly where the deficit actually occurred and I think what we can be
204
+ [1040.600 --> 1045.880] dissafe from this is you know that really immediate action do actually differ you know from other
205
+ [1045.880 --> 1050.920] actions like fake action delayed actions or sort of anti-pointing actions and I can't we have
206
+ [1050.920 --> 1055.000] done other experiments where we found similar impairment in neglect patients for example in delayed
207
+ [1055.000 --> 1059.480] actions so depending on the kind of action that you're actually investigating you know different
208
+ [1059.480 --> 1067.480] areas are kind of implicated in those action control movements and regarding these kind of problems
209
+ [1067.480 --> 1071.800] you know the patients really have you know with the anti-pointing there is actually a sort of an
210
+ [1071.800 --> 1079.480] fMRI study that was performed by Króliczak in 2007 who implicated a similar area in their sort of fMRI
211
+ [1079.480 --> 1086.040] studies and what they actually did in the fMRI study is they compared grasping and reaching so
212
+ [1086.040 --> 1090.600] grasping to an object and reaching for an object with pantomiming a reaching into grasping
213
+ [1090.600 --> 1095.000] object and when I then subtracted those two conditions from each other they actually found that for
214
+ [1095.000 --> 1100.040] the pantomimed reaching and grasping the right middle temporal gyros really also had to be
215
+ [1100.040 --> 1104.440] active to kind of you know generate the sort of pantomimed reaching and grasping movements and
216
+ [1104.440 --> 1108.520] that's very much in line with our data because this was one of the areas which was also critically
217
+ [1108.520 --> 1114.280] implicated in our anti-pointing task to kind of be impaired and kind of generating you know the
218
+ [1114.280 --> 1121.320] anti-pointing arrows so for those kind of tasks you do seem to the we do seem to have to rely on
219
+ [1121.320 --> 1126.120] areas kind of outside the visual dorsal stream to kind of really mediate our actions and this is
220
+ [1126.120 --> 1132.440] kind of nice supporting evidence for our sort of study so really from this what are the implications
221
+ [1132.440 --> 1138.440] really first of all for perception and neuroscience well hopefully I've kind of shown you that
222
+ [1138.440 --> 1143.080] action and perception control can actually disassociate so I don't really buy into this kind of common
223
+ [1143.080 --> 1148.280] coding model which I could be presented to you on the first slide but it's also important to
224
+ [1148.280 --> 1153.880] coding model which I presented to you on the first slide but it's also important to
225
+ [1153.880 --> 1159.400] know that really not all actions depend solely on the dorsal visual stream so when it comes
226
+ [1159.400 --> 1163.800] maybe these kind of actions actually require additional interesting neural networks and for
227
+ [1163.800 --> 1168.600] more data it kind of seems to be that they require more sort of temporal and occipital areas
228
+ [1170.040 --> 1173.880] and I think this is actually what I would like to argue from this is that contrary really to a
229
+ [1173.880 --> 1179.160] range of new scientific studies it is actually important to realize that maybe when we're dealing
230
+ [1179.160 --> 1185.640] with offline actions they are not really mediated by the same areas in the brain as real actions
231
+ [1185.640 --> 1190.280] so when we kind of talking about for example um faking um movements in this kind of a lot of
232
+ [1190.280 --> 1194.280] studies do this because it's much easier when people are sort of tied up in an compromised
233
+ [1194.280 --> 1198.120] scanner to just pretend to do a movement rather than really do a movement we can't really assume
234
+ [1198.120 --> 1202.360] that the data and the the results you get from that can actually be generated you know two real
235
+ [1202.360 --> 1206.280] movements because I think we are actually looking at different areas for example when I move around
236
+ [1206.280 --> 1210.120] here and also when I'm in play on a wee I think those are fundamentally different things and I
237
+ [1210.120 --> 1213.320] can totally believe that because I can't do anything on a wee so I think it's just much more
238
+ [1213.320 --> 1217.720] complex and you need other brain structures than you need to do in sort of picking up an apple for
239
+ [1217.720 --> 1222.920] example so I think these are the kind of the implications um for perception and neuroscience
240
+ [1223.560 --> 1227.880] the other implications I think are for rehabilitation of neglect and I just like to spend a
241
+ [1227.880 --> 1233.160] little bit of time really sort of making this kind of argument so the argument here is like if
242
+ [1233.160 --> 1237.080] what we found is correct and that neglect patients are actually quite good in interacting with
243
+ [1237.080 --> 1242.440] objects even on the neglecting side then why don't you sort of develop a rehabilitation approach
244
+ [1242.440 --> 1247.800] where you get them to interact loads with objects really activate the dose of stream and then see
245
+ [1247.800 --> 1251.800] if there's some filtering through you know to the perception impairments that they have because
246
+ [1251.800 --> 1255.320] a bit like Daniela mentioned in the beginning we all know that you know even if you believe in this
247
+ [1255.320 --> 1259.320] idea of dorsal and ventral streams and separate visual streams there are lots of interactions
248
+ [1259.320 --> 1263.640] so the streams clearly interact and maybe we can actually use that to then improve the neglect
249
+ [1263.640 --> 1269.240] symptoms and these ideas actually not new so kind of using actions to kind of really improve
250
+ [1269.240 --> 1274.760] hemispatial neglect, studies really have been done sort of more than 10 years ago sort of by myself
251
+ [1274.760 --> 1280.040] and in particular sort of Ian Robertson where we asked patients to reach out and interact with objects
252
+ [1280.040 --> 1285.560] kind of repeatedly sort of over sort of to repeat it and we then looked if whether we would actually
253
+ [1285.560 --> 1290.680] find an improvement in the BIT score so really in their neglect symptoms and really a bit a
254
+ [1290.680 --> 1294.840] little bit more and precisely what we had is we had an intervention group where we asked
255
+ [1294.840 --> 1300.040] neglect patients to grasp a rod in the center; if they didn't actually quite do this correctly
256
+ [1300.040 --> 1304.280] and the rod actually tilted they would get proprioceptive feedback from that and then be encouraged
257
+ [1304.280 --> 1310.840] to kind of re-grasp until the rod was actually centrally grasped and kind of held straight
258
+ [1310.840 --> 1314.920] and we compared that to an intervention condition where the people where the patients were simply
259
+ [1314.920 --> 1319.640] asked to pick up the rod on the right hand side and put it down so they did some sort of very
260
+ [1319.640 --> 1323.800] basic motor action but they didn't really use visuomotor feedback to kind of guide them in the
261
+ [1323.800 --> 1329.240] perception of the rod and we had some sort of okay results kind of in this study so we asked
262
+ [1329.240 --> 1333.400] patients to so we showed them how to do the actual task over three sessions we then got them to
263
+ [1333.400 --> 1338.440] do it for ten sessions in their home and we then kind of looked at you know how was an improvement
264
+ [1338.440 --> 1344.440] kind of on BIT score and we found an improvement one month after the intervention, at the
265
+ [1344.440 --> 1348.520] follow-up one month follow-up in the intervention group the intervention group slightly improved
266
+ [1349.320 --> 1353.320] and I wasn't really terribly excited about the result at the time but Ian Robertson basically was
267
+ [1353.320 --> 1356.600] because the patients are we tested with chronic patients so these were patients who'd had neglect
268
+ [1356.600 --> 1361.400] like four years and we still found some improvement in the intervention group but we've since sort
269
+ [1361.400 --> 1365.400] of just about finished sort of another study now where we tried to actually make the intervention
270
+ [1365.400 --> 1372.280] a little bit more feasible to be applied in the clinical setting so what we've actually done now
271
+ [1372.280 --> 1377.000] and we just kind of finished really the analysis of this data now is we actually reduced the training
272
+ [1377.000 --> 1380.920] from three to two days and reduced the number of sessions that the patients and trained by themselves
273
+ [1380.920 --> 1385.880] from two to one session and the session was much shorter only 15 minutes we were then slightly more
274
+ [1385.880 --> 1391.080] ambitious in kind of our assessment of the outcome measures so we actually looked whether there
275
+ [1391.080 --> 1396.440] was kind of an effect not just at one month sort of post intervention but four months post intervention
276
+ [1396.440 --> 1401.320] and rather than just looking at an improvement on neglect scores we also said well do these patients
277
+ [1401.320 --> 1406.280] actually improve overall do they have an increased quality of life are they more likely to socially
278
+ [1406.280 --> 1411.800] participate kind of move outside to go shopping are there any changes in mood in emotional
279
+ [1411.800 --> 1416.920] and communication etc and to assess that we actually used the stroke impact scale which is a
280
+ [1416.920 --> 1421.960] scale reason to be commonly used in a new in a clinical setting to kind of really assess people's
281
+ [1421.960 --> 1427.320] stroke outcome so this was actually the design so we had two sessions where we sort of instructed
282
+ [1427.320 --> 1432.920] the patients on what to do we had a quick assessment after that we then had them run 10 sessions
283
+ [1432.920 --> 1437.960] once a day in their home over a period of sort of two weeks we then did a quick assessment then
284
+ [1437.960 --> 1444.440] we left them completely alone and followed them up again at four months these are kind of the
285
+ [1444.440 --> 1449.800] characteristics of the patients 10 patients in the inventor intervention group 10 patients in the
286
+ [1449.800 --> 1454.840] control group the quite well matched for age times and stroke so these patients weren't quite as
287
+ [1454.840 --> 1459.560] chronic so they were kind of obviously not acute by medical terms they would still be judged
288
+ [1459.560 --> 1462.600] as chronic but you know they were literally on average three months post-stroke and we really
289
+ [1462.600 --> 1467.080] quite as long term as a previous study and they were quite matched well matched sort of for
290
+ [1467.080 --> 1473.320] neglect score, initial BIT score. So first of all and again you remember in the intervention group
291
+ [1473.320 --> 1477.080] they were actually encouraged to grasp what's in the center sort of repeated there was sort of
292
+ [1477.080 --> 1481.960] 50 minutes they were placed in different spatial conditions in the control group they were simply
293
+ [1481.960 --> 1486.280] asked to reach with the right hand to the right hand side of the rod and I think this is important
294
+ [1486.280 --> 1491.400] all these patients effectively had right hemisphere lesions so they had some sort of motor impairments
295
+ [1491.400 --> 1494.760] with their left hand so we only asked them to use the unimpaired hand so they're only ever
296
+ [1494.760 --> 1498.760] using their right hand so we're either using their right hand to grasp the center of the rod
297
+ [1498.760 --> 1502.680] or to just grasp the side and kind of pick it up and put it down again but they were using the
298
+ [1502.680 --> 1505.640] hand that they could use because they are also intervention studies where you ask the patients
299
+ [1505.640 --> 1510.120] to use the hand which is actually impaired and there's big problems actually with consent and
300
+ [1510.120 --> 1515.880] sort of retention of patients in this. First of all we tested them on line bisection, how well
301
+ [1515.880 --> 1521.480] do they actually perceive lines as you can see sort of initially you know well matched sort of
302
+ [1521.480 --> 1526.440] for bias; if anything the intervention group showed a larger error than the control group
303
+ [1526.440 --> 1531.400] already after two sessions there's a big improvement in the so I should really put this away
304
+ [1531.400 --> 1535.560] in the intervention group there's some improvement in the control group but the improvement
305
+ [1535.560 --> 1541.400] that we see in the intervention group that also remains the same after two sessions after 12
306
+ [1541.400 --> 1545.640] sessions altogether and then it follow up and this graph just gives you the sort of percentage
307
+ [1545.640 --> 1549.800] improvement so as you can see here the control group improves a little bit as well but there's
308
+ [1549.800 --> 1553.880] a much bigger improvement kind of in the intervention group and that actually stays the same
309
+ [1554.440 --> 1559.400] also after four months so it seems to be a bit of a long term effect and you can think okay
310
+ [1559.400 --> 1563.560] line bisection is actually quite similar to bisecting a rod, or grasping a rod at the center
311
+ [1563.560 --> 1568.120] so what actually happens to the neglect score and the neglect score again it was quite similar
312
+ [1568.120 --> 1573.160] so quite well matched at baseline; after two sessions already you see a big improvement which gets
313
+ [1573.160 --> 1578.040] slightly higher, not significantly different, after the 12 sessions but then it
314
+ [1578.040 --> 1581.080] actually stays high at the four months follow up and I think that's really the important thing
315
+ [1581.080 --> 1585.640] because you really want to show that whatever you're improving is actually long term and again in
316
+ [1585.640 --> 1590.360] this graph you can kind of see the percentage improvement so again little improvement in the control
317
+ [1590.360 --> 1594.760] group none of the statistically significant big improvement really in the intervention group
318
+ [1594.760 --> 1601.640] already after two days which then sort of remains the same and then really the big one is really
319
+ [1602.200 --> 1607.400] what kind of happened on this talk impact scale where we're kind of measuring sort of different
320
+ [1607.400 --> 1613.160] dimensions sort of off these patients kind of engaging you're in the everyday life and what we
321
+ [1613.160 --> 1616.680] actually found and this is obviously the big test you know for any kind of intervention study
322
+ [1616.680 --> 1621.160] because to find a generalization to life activities is extremely difficult and extremely rare and not
323
+ [1621.160 --> 1627.000] really many studies find it so what we actually found here is that again and this is a big test to
324
+ [1627.000 --> 1630.280] kind of applies a big question there so we only did this at baseline and then again the four
325
+ [1630.280 --> 1635.320] months follow up so what we found is that the patients in the intervention group showed some sort
326
+ [1635.320 --> 1640.600] of increase in the activities kind of of the daily lives as the control group stayed the same
327
+ [1640.600 --> 1644.600] and I haven't really had time because really as you can see these two groups aren't really well matched
328
+ [1644.600 --> 1649.240] for baseline but they are now so they're now perfectly matched at the end of the trial and this
329
+ [1649.240 --> 1654.360] this test finding still holds and if you look at sort of clinical trials and specifically
330
+ [1654.360 --> 1659.000] clinical science in relation to neglect a lot of them actually claim big effects a lot of them
331
+ [1659.640 --> 1664.760] some of them claim generalization to other tasks but most of the trials are really not control trials
332
+ [1664.760 --> 1670.120] it's surprising the very little number of control trials that you see which kind of show
333
+ [1670.120 --> 1675.080] a sustained effect over time and be it on also some generalization kind of to other tasks
334
+ [1675.080 --> 1678.440] and I think that's one of the reasons why at the moment you know no neglect therapy is actually
335
+ [1678.440 --> 1682.440] recommended because ideally you want to show effects in a control trial and you want to show
336
+ [1682.440 --> 1687.320] long-term effects that kind of translate onto other behaviors so I think this is quite encouraging
337
+ [1688.280 --> 1694.840] so hopefully what we can conclude from this visual feedback training is that a theory-driven intervention
338
+ [1694.840 --> 1699.560] can actually lead to successful rehabilitation that there are some sort of transfer to activities
339
+ [1699.560 --> 1705.160] of daily living that this intervention hopefully as you can see it's a fairly basic intervention
340
+ [1705.160 --> 1710.440] it's cost effective, it's easy to apply and it's easy to train staff and carers to actually do
341
+ [1710.440 --> 1714.360] it and one of the things which I think is also really crucial is the patient doesn't actually
342
+ [1714.360 --> 1719.560] require an insight into the disorder in order to actually perform the actual rehabilitation procedure
343
+ [1719.560 --> 1724.600] because at the moment what health professionals tell neglect patients to do as some kind of
344
+ [1724.600 --> 1728.840] intervention because nothing is actually formally recognized is scanning training so patients
345
+ [1728.840 --> 1733.560] are encouraged to scan the left hand side and so it's encouraged to scan the left side of space
346
+ [1733.560 --> 1737.400] and of course they don't really know that they have this problem so the minute you stop telling them
347
+ [1737.400 --> 1741.240] to do it they stop doing it whereas with this task you don't really need this kind of insight
348
+ [1741.240 --> 1745.560] into the disorder but of course what we need to do now, there is obviously a need for a
349
+ [1745.560 --> 1750.680] larger clinical trial to assess the efficacy of this particular treatment and obviously I have a
350
+ [1750.680 --> 1754.840] sort of magical clinical collaborator who I've collected all this data with and he basically says
351
+ [1754.840 --> 1758.680] you've shown this in ten patients don't talk about it at all it means nothing we need a bigger trial
352
+ [1758.680 --> 1762.040] but of course I am talking about it because there's no way that I'm doing a larger clinical trial
353
+ [1762.040 --> 1767.960] so that's not my job so this is as good as it gets you know from my point of view okay so really
354
+ [1767.960 --> 1772.840] this is the first part of my talk which is really the longer part but what are the overall conclusions
355
+ [1773.560 --> 1778.280] well hopefully I've shown to you that neglect patients are not impaired in online action control
356
+ [1778.280 --> 1783.240] but that they fail in indirect offline actions that therefore we can really
357
+ [1785.240 --> 1790.040] exploit these sort of unimpaired online reaching abilities for successful rehabilitation
358
+ [1791.000 --> 1794.840] and this actually implies that clearly there must be shared influences of vision for
359
+ [1794.840 --> 1799.320] action on vision for perception and I think this is actually quite nice because I think there are
360
+ [1799.320 --> 1803.640] a lot of there's a lot of evidence kind of in the literature that perception can influence action
361
+ [1803.640 --> 1807.240] there's much less evidence saying action can actually influence perception and hopefully
362
+ [1807.240 --> 1812.360] this is what I've kind of shown with these experiments here and therefore again I've already said
363
+ [1812.360 --> 1815.560] this before that maybe you know when we're looking at actions and we're kind of talking about
364
+ [1815.560 --> 1819.960] actions we need to be more precise about how we actually define actions and not all actions are
365
+ [1820.040 --> 1826.360] the same and not all actions are mediated you know by similar structures okay so this is really
366
+ [1826.360 --> 1832.520] the sort of first part of my talk which is kind of more the clinical side and what we've done now
367
+ [1833.240 --> 1838.040] is we kind of what I've already said before is I don't really want to move on to sort of a large
368
+ [1838.040 --> 1842.680] scale clinical trial but one of the things I am actually interested in and this is kind of very
369
+ [1842.680 --> 1847.720] much driven by the motor literature is to actually compare this visual feedback training which we've
370
+ [1847.720 --> 1853.640] been doing with TDCS which is transcranial direct current stimulation because previous studies
371
+ [1853.640 --> 1859.560] sort of by GERI and things like in GERI and things like one by Roland Sparing who actually applied
372
+ [1860.440 --> 1865.640] TDCS to the left parietal cortex so they actually performed some sort of inhibitory function to the
373
+ [1865.640 --> 1870.360] left parietal cortex in neglect patients and by doing that they actually found that the neglect
374
+ [1870.360 --> 1875.160] symptoms actually improved because the idea is that if you have a right hemisphere lesion which kind
375
+ [1875.160 --> 1879.880] of leads to neglect you get a sort of overactive left hemisphere so the left hemisphere is kind of
376
+ [1879.880 --> 1885.880] too active; if you damp down that activity you actually find an improvement in neglect function
377
+ [1886.680 --> 1892.200] and really what we're now sort of trying to do is we're now trying to combine this TDCS,
378
+ [1892.200 --> 1897.480] applying TDCS to the undamaged left hemisphere in combination with this sort of behavioral
379
+ [1897.480 --> 1902.920] training which we've been doing and we're hoping that if we combine TDCS with the visual feedback training
380
+ [1902.920 --> 1907.960] that we actually get the biggest sort of in behavioral sort of rehabilitation effect that we find
381
+ [1907.960 --> 1913.080] the biggest improvement in neglect symptoms and the reason we're kind of hoping that this is true
382
+ [1913.080 --> 1918.360] is very much sort of taken from the motor literature because TDCS has been quite successfully used
383
+ [1918.360 --> 1923.320] in kind of trying to improve motor function and it successfully it's particularly successful
384
+ [1923.960 --> 1927.640] when the patient's actually performing motor actions kind of at the same time so you find the
385
+ [1927.640 --> 1932.760] biggest improvement in kind of improving paralysis by applying sort of TDCS together with some
386
+ [1932.760 --> 1937.320] sort of behavioral training and this is really something that we're investigating just now
387
+ [1937.320 --> 1940.600] I've talked to Daniela about getting the ethics and how painful it is and I think we're kind of
388
+ [1940.600 --> 1944.600] pretty much a similar stage is doing this so I don't know why we're doing it actually it's just too
389
+ [1945.240 --> 1950.360] bad okay so this was this is now kind of really moving on to stuff which is kind of not
390
+ [1950.360 --> 1957.160] clinical because the clinical stuff it takes a very very long time to actually do so at the same time
391
+ [1957.160 --> 1962.200] I've always kind of had an interest really in kind of what happens with spatial biases in healthy
392
+ [1962.200 --> 1966.760] subjects and this is really stuff which I spend a long time sort of doing in Britain and I kind of
393
+ [1966.760 --> 1973.000] moved away from and I've not sort of started to investigate a little bit more I think most of you
394
+ [1973.000 --> 1978.200] will actually know that you know all of us and it's not just people also animals we all
395
+ [1978.200 --> 1983.480] tend to show a sort of subtle bias favoring left space when it comes to visual attention so we all
396
+ [1983.480 --> 1989.800] have a bias of orienting towards left space so for example in tasks like this we are asked to kind
397
+ [1989.800 --> 1994.600] of judge you know where the center of the line is we also show a subtle kind of bias to the left
398
+ [1994.600 --> 2000.040] hand side sort of like this so this mark here is objectively actually further to the left we tend
399
+ [2000.040 --> 2004.120] to kind of judge that as sort of being centrally presented and the idea is that because the
400
+ [2004.120 --> 2009.240] right hemisphere sort of favors sort of attention we tend to get an exaggeration of kind of left
401
+ [2009.240 --> 2015.400] space you know in healthy subjects and people really have known about this really for sort of
402
+ [2015.400 --> 2019.400] quite a long time so we also favor left space, people do it, animals do it, there seems to be
403
+ [2019.400 --> 2024.600] an orienting bias towards left space and there are certain properties which kind of influence this
404
+ [2024.600 --> 2030.520] bias so it can get modulated you know by certain task and certain situations and one of the
405
+ [2031.480 --> 2036.440] things which can actually mediate the bias is actually fatigue so there's sort of some of studies
406
+ [2036.440 --> 2041.320] sort of by Tom Manly who basically showed that the leftward bias that we all show gets attenuated
407
+ [2041.320 --> 2045.960] and shifts towards a rightward bias with decreasing alertness and fatigue so the more fatigued we
408
+ [2045.960 --> 2051.960] become the less of a left bias we actually show and they're kind of very much argued you know this
409
+ [2051.960 --> 2056.680] is kind of in line again with another dual route model which talks about dorsal and ventral streams
410
+ [2056.680 --> 2060.760] but they're kind of slightly different kind of in position to the Milner and Goodale dorsal
411
+ [2060.760 --> 2066.360] and ventral streams so in Corbetta and Shulman's attentional model they say that healthy people
412
+ [2066.360 --> 2071.080] like you and me have a right-hemisphere-lateralized ventral attention network which underpins alertness
413
+ [2071.560 --> 2075.800] and the ventral attention network doesn't really quite one here I mean it kind of one sort of much
414
+ [2075.800 --> 2080.600] more superior here but it doesn't really matter so we have a ventral kind of attention networks
415
+ [2080.600 --> 2084.920] which kind of underpins alertness and that's the same in all of us and obviously if you perform a
416
+ [2084.920 --> 2090.200] task over a long period of time you then get fatigue so you have a decreased activation in this
417
+ [2090.200 --> 2096.200] network which then gives the left dorsal orienting network which is pretty much kind of this network
418
+ [2096.200 --> 2101.080] a competitive advantage and therefore driving behavior rightward so this is kind of very much
419
+ [2101.080 --> 2107.960] the idea we all have a sort of right lateralized alertness network which sort of tires out over time
420
+ [2109.720 --> 2113.480] and the question that we were then really asking in the remainder of my talk which I was sort of
421
+ [2113.480 --> 2118.360] trying to address is, is this really true, do all of us really have a right-hemisphere-lateralized
422
+ [2118.360 --> 2122.840] attention network is this kind of a uniform feature kind of in the healthy population or maybe
423
+ [2122.840 --> 2129.800] other differences in between different people on this and this was actually an absolute I mean
424
+ [2130.200 --> 2135.720] this idea pretty much came almost entirely sort of for my sort of PhD student because when we were
425
+ [2135.720 --> 2139.560] kind of doing this kind of work he went through the literature and he basically said do you
426
+ [2139.560 --> 2143.640] realize that in all studies of spatial attention people have a leftward bias but
427
+ [2143.640 --> 2148.680] there's always a subsection of people you know ranging between five to 30% who show a rightward bias
428
+ [2148.680 --> 2152.120] and I'm like yeah you get some variation I mean you look at stupid bias and some people are left
429
+ [2152.120 --> 2156.520] some people are right it's totally boring and he's like hmm I don't know really no because maybe
430
+ [2156.520 --> 2160.040] what what do you do maybe this is meaningful maybe there are generally differences between people
431
+ [2160.040 --> 2164.760] maybe some people show a left bias and some people show a right bias and already McCourt in
432
+ [2164.760 --> 2169.000] 2000 actually kind of noticed it and said well this might be meaningful there might be
433
+ [2169.000 --> 2175.400] genuine observer differences but nobody ever really kind of followed this up until a paper was
434
+ [2175.400 --> 2180.040] published by Thiebaut de Schotten in 2011 and this is when I sort of paid a little bit more sort of
435
+ [2180.040 --> 2185.720] attention to this idea and what they actually showed in their paper was that the relative
436
+ [2185.720 --> 2191.080] lateralization of a white matter pathway predicted the degree of spatial bias so what they
437
+ [2191.080 --> 2194.280] were actually showing in particular and it doesn't really matter what we're talking about here
438
+ [2194.280 --> 2198.280] in terms of connections but you basically have a very big sort of white matter pathway which kind
439
+ [2198.280 --> 2203.480] of connects parietal and frontal areas and what they actually showed in their paper is that in
440
+ [2203.480 --> 2209.240] participants where this pathway was larger in the right compared to the left that these participants
441
+ [2209.240 --> 2214.120] deviated more to the left in a line bisection task whereas participants who had the opposite
442
+ [2214.120 --> 2219.000] asymmetry actually showed either right bias or no bias so there seems to be some relationship
443
+ [2219.000 --> 2222.760] between the size of your white matter tract and the kind of bias that you show on these kinds of
444
+ [2222.760 --> 2227.800] task and this is really what I thought was actually quite interesting because it really seems to be
445
+ [2227.800 --> 2232.520] maybe there are anatomical differences between people which kind of drive the fact that somebody
446
+ [2232.520 --> 2238.200] has a left bias or somebody has a right-ward bias so what we then really kind of started asking
447
+ [2238.200 --> 2242.440] more specifically this question saying well is it possible that some people actually have a rightward
448
+ [2242.440 --> 2246.120] bias and that this can actually be a trait you know rather than just random variation
449
+ [2246.840 --> 2252.040] in the data like you would expect and if they do if we if we can identify people who show a right
450
+ [2252.040 --> 2257.320] bias do this then show different behavioral patterns for example if you look at time on task if you
451
+ [2257.320 --> 2262.520] look at performance over time do they shift in the same direction as people who show a left with bias
452
+ [2263.720 --> 2266.920] so we kind of really decided and this is actually Chris so Chris kind of decided to sort of
453
+ [2266.920 --> 2272.520] investigate this a little bit more so we use this kind of task which I kind of shown you to you
454
+ [2272.520 --> 2276.920] before the landmark task where rather than asking people to bisect lines you present them
455
+ [2276.920 --> 2280.840] with lines which are already pre-bisected and you ask them to say which of these two
456
+ [2280.840 --> 2286.360] ends do you think is actually shorter you know or longer so we did this sort of in two sets of
457
+ [2286.360 --> 2292.280] experiments we first had 20 participants who we tested in three different sessions because what
458
+ [2292.280 --> 2297.640] we really wanted to know is is people's bias if the usual bias is this bias consistent over time
459
+ [2297.640 --> 2302.840] so do they show the same bias repeatedly on different occasions and if this is true if we kind
460
+ [2302.840 --> 2307.160] of establish that maybe different people kind of do this we then wanted to see what happens to
461
+ [2307.160 --> 2311.400] the time on task effect so what happens when you then ask people to do a task prolonged over
462
+ [2311.400 --> 2316.760] a period of time because if you follow the Corbetta and Shulman model what you would actually say
463
+ [2317.560 --> 2322.760] is people have a right-lateralized attention network that sort of tires out over time
464
+ [2322.760 --> 2326.120] so everybody should shift rightwards so whether you have an initial leftward or
465
+ [2326.120 --> 2331.960] rightward bias people's behavior kind of should shift rightwards so those are the kind of two questions
466
+ [2331.960 --> 2337.160] that we kind of really addressed in two experiments so this is video the paradigm very simple we
467
+ [2337.160 --> 2342.120] had an initial fixation cross the lines was presented for 150 milliseconds so quite briefly quite
468
+ [2342.120 --> 2347.160] difficult task the participants and had to decide whether the this line was actually longer or
469
+ [2347.160 --> 2353.720] this line was longer from that again we then calculated the point of subjective equality and usually
470
+ [2353.720 --> 2359.160] if you do this over a large number of subjects you find sort of an overall leftward bias so what we
471
+ [2359.160 --> 2366.040] actually did on based on this performance we then actually split groups in three different subgroups
472
+ [2366.040 --> 2370.280] so we had a left bias group a right bias group and a no bias group and we actually calculated
473
+ [2370.280 --> 2375.560] these bias groups by actually using the 50% confidence interval of the individually fitted
474
+ [2375.560 --> 2379.400] psychometric functions so we basically had a sort of cut-off where we decided which people
475
+ [2379.400 --> 2385.080] were showing a left bias or a right bias or no bias and first of all we just asked well people who
476
+ [2385.080 --> 2390.920] we identify as showing a left bias you know on one day do they also show a left bias on a second
477
+ [2390.920 --> 2395.000] day and the third day so we basically ran the experiment over three different days they were
478
+ [2395.400 --> 2401.240] separated by minimum of 24 hours and hopefully as you can see here participants baseline bias was
479
+ [2401.240 --> 2406.760] hugely consistent sort of across the different days so this is day one with day two, day two with
480
+ [2406.760 --> 2411.640] day three and then obviously day one with day three here so they were hugely correlated so people
481
+ [2411.640 --> 2418.440] who show a bias on one occasion tend to show the same bias on repeated occasions so this kind of
482
+ [2418.440 --> 2422.440] initially maybe supports the notion that there's a basic trait you know that it's not just random
483
+ [2422.440 --> 2427.720] behavior people do kind of show biases consistently over time and we then looked at what happened
484
+ [2427.720 --> 2431.720] to the time on task effect and remember if you look at the Corbetta and Shulman model you would
485
+ [2431.720 --> 2436.840] expect over time this is kind of the effect over time the bias to shift to the right independent
486
+ [2436.840 --> 2441.480] of initial bias and that's really not what we found because what we found is very much like
487
+ [2441.480 --> 2447.160] as expected the participants who had a left bias kind of shifted rightwards but on the
488
+ [2447.160 --> 2451.480] other hand the participants who had a right bias actually shifted leftwards so they really didn't
489
+ [2451.480 --> 2456.920] show the expected rightward shift; the participants who had no bias pretty much sort of stayed the same
490
+ [2457.560 --> 2460.200] and if you kind of look at this graph and think okay maybe this is kind of just
491
+ [2460.200 --> 2464.840] regression to the mean and kind of learning we also kind of looked at the curve widths
492
+ [2464.840 --> 2469.560] and the curve width actually gives you an indicator of variability how variable is people's performance
493
+ [2469.560 --> 2474.200] you know over time and very much like you would expect so the curve widths actually became greater
494
+ [2474.200 --> 2478.040] you know throughout the course of the experiment because obviously people were kind of tiring out
495
+ [2478.040 --> 2482.600] and were sort of finding stuff sort of more difficult but the important thing is that the curve width
496
+ [2482.600 --> 2486.680] so this is the variability that people show over time and the shift in baseline was actually
497
+ [2486.680 --> 2491.320] uncorrelated so there was no correlation between the shifts that we demonstrated here
498
+ [2491.320 --> 2495.880] and the increasing curve width or generally sort of the change in the curve width of the psychometric function
499
+ [2496.920 --> 2501.160] and really just to kind of look at this again what we also did is we then also looked at the
500
+ [2501.160 --> 2506.920] relationship between the initial bias and the shift over time and what we found there because you
501
+ [2506.920 --> 2510.360] can say okay why are you making this kind of binary distinction why are you grouping people in left
502
+ [2510.360 --> 2514.760] and right bias why don't you just put look at them all together which is kind of what we did
503
+ [2514.760 --> 2519.960] here so what we actually found here is that the stronger the initial bias the stronger the shift
504
+ [2519.960 --> 2524.840] in bias over time in the opposite direction so what that basically means is that people with a
505
+ [2524.840 --> 2529.640] bigger left bias shifted more to the right compared to people with a smaller bias
506
+ [2529.640 --> 2533.480] and the same was actually true for the right bias so people had a bigger initial right bias and
507
+ [2533.480 --> 2539.000] shifted kind of more in the opposite direction so there was actually a negative correlation or not
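Not part of the talk: a minimal Kotlin sketch of the analysis just described, i.e. fitting each observer's landmark-task responses with a psychometric function, taking the point of subjective equality (PSE), and classifying left/right/no bias from a 50% confidence interval around it. The trial structure, the grid-search fit, the bootstrap interval and the sign convention (negative PSE = perceived midpoint left of centre) are all illustrative assumptions, not the authors' actual analysis code.

```kotlin
import kotlin.math.exp
import kotlin.math.ln
import kotlin.random.Random

// One landmark-task trial: transector offset from true centre (negative = left)
// and whether the observer judged the right segment to be the longer one.
data class Trial(val offset: Double, val choseRight: Boolean)

// Cumulative logistic psychometric function; PSE is where it crosses 0.5.
fun psychometric(x: Double, pse: Double, slope: Double): Double =
    1.0 / (1.0 + exp(-(x - pse) / slope))

// Maximum-likelihood fit of the PSE by a coarse grid search (illustrative only).
fun fitPse(trials: List<Trial>): Double {
    var bestPse = 0.0
    var bestLl = Double.NEGATIVE_INFINITY
    var pse = -10.0
    while (pse <= 10.0) {
        var slope = 0.25
        while (slope <= 5.0) {
            var ll = 0.0
            for (t in trials) {
                val p = psychometric(t.offset, pse, slope).coerceIn(1e-6, 1.0 - 1e-6)
                ll += if (t.choseRight) ln(p) else ln(1.0 - p)
            }
            if (ll > bestLl) { bestLl = ll; bestPse = pse }
            slope += 0.25
        }
        pse += 0.1
    }
    return bestPse
}

// Classify an observer as left-, right- or no-bias depending on whether a
// bootstrap 50% confidence interval around the PSE excludes zero.
fun classifyBias(trials: List<Trial>, nBoot: Int = 500, rng: Random = Random(1)): String {
    val pses = List(nBoot) {
        fitPse(List(trials.size) { trials[rng.nextInt(trials.size)] })
    }.sorted()
    val lower = pses[(0.25 * (nBoot - 1)).toInt()]
    val upper = pses[(0.75 * (nBoot - 1)).toInt()]
    return when {
        upper < 0.0 -> "left bias"   // perceived midpoint reliably left of centre
        lower > 0.0 -> "right bias"
        else -> "no bias"
    }
}
```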
508
+ [2540.600 --> 2546.680] okay so what can we conclude from this data but I would actually like to argue that maybe it's
509
+ [2546.680 --> 2551.800] possible that we actually have genuine sort of behavior differences and and genuine subtypes in
510
+ [2551.800 --> 2556.680] the population in relation to spatial attention you know we all know a lot about individual differences
511
+ [2556.680 --> 2559.960] people haven't really looked at this very much in relation to spatial bias a sort of spatial
512
+ [2559.960 --> 2565.560] attention so maybe it's actually a sort of stable trait because what we've actually found is that
513
+ [2565.560 --> 2572.120] the bias remains consistent over three different days so maybe there are actually different subtypes
514
+ [2572.120 --> 2576.760] and maybe they're actually driven by varying anatomical asymmetries like Thiebaut de
515
+ [2576.760 --> 2581.560] Schotten and colleagues actually found and maybe they're also driven by functional asymmetries
516
+ [2582.360 --> 2587.800] because there was an even more recent paper by Cai and colleagues who actually found that participants who
517
+ [2587.800 --> 2593.240] actually displayed atypical right hemisphere language production this was actually an fMRI study where
518
+ [2593.240 --> 2598.520] they looked at this so people who showed right hemisphere language production also displayed atypical
519
+ [2598.520 --> 2601.880] left hemisphere spatial attention dominance and they actually used the landmark task kind of to
520
+ [2601.880 --> 2606.040] assess this so clearly people aren't all the same and this is quite interesting so if you have
521
+ [2606.040 --> 2611.640] a right hemisphere language production dominance you also tend to have a left hemisphere spatial attention
522
+ [2612.600 --> 2618.760] so maybe it is actually true the trait actually determines first of all your behavior and then
523
+ [2618.760 --> 2623.560] also it actually determines some kind of other function like for example time on task which is
524
+ [2623.560 --> 2630.200] kind of what we looked at here so coming back to this kind of again sort of dual route models
525
+ [2630.200 --> 2634.760] of attention so the question really then here was you know is it really true that there's a
526
+ [2634.760 --> 2640.200] right-hemisphere-lateralized attention network whose activity decreases over time
527
+ [2640.200 --> 2646.120] and therefore induces a uniform rightward bias kind of in all participants and we would really
528
+ [2646.120 --> 2650.760] argue from this data that maybe this interpretation doesn't really hold because we found that
529
+ [2650.760 --> 2655.000] participants who had an initial rightward bias actually shifted leftwards rather than further
530
+ [2655.000 --> 2660.680] rightwards which is very much what you would predict from that model so what we actually propose instead
531
+ [2660.680 --> 2664.600] and we're at the moment kind of doing EEG studies to kind of look into this a little bit more
532
+ [2665.320 --> 2669.640] is we actually proposing instead that there might be some sort of neural fatigue which kind of accounts
533
+ [2669.640 --> 2674.520] a little bit better for the time on task effect so maybe in participants with an initial leftward
534
+ [2674.520 --> 2679.240] bias fatigue is actually greater in the right hemisphere causing a rightward shift but in
535
+ [2679.240 --> 2683.000] participants with an initial rightward bias fatigue may be greater in the left hemisphere and thus
536
+ [2683.000 --> 2687.480] causing a leftward shift and we're kind of at the moment trying to look into that with EEG and what
537
+ [2687.480 --> 2692.840] we've already found is that basically the size of the bias that you show is very much driven by
538
+ [2692.840 --> 2697.000] the involvement of the right hemisphere so the greater involvement you have of the right
539
+ [2697.000 --> 2701.400] hemisphere the greater the pseudoneglect bias that you kind of show but really these specific
540
+ [2702.200 --> 2706.840] issues we haven't really quite addressed yet okay so what can we conclude from this
541
+ [2707.720 --> 2712.040] there seem to be differences in attentional biases and these differences could reflect
542
+ [2712.040 --> 2716.920] genuine observer subtypes which may be driven by both anatomical and functional
543
+ [2716.920 --> 2721.720] asymmetries because there's been other studies kind of pointing in this direction and maybe if
544
+ [2721.720 --> 2725.960] you have these kind of observer differences this actually leads to different behavioral patterns
545
+ [2725.960 --> 2730.120] so we've looked at time on task there might be other behavioral patterns which are interesting
546
+ [2730.120 --> 2733.480] so it does actually challenge current models of attention and alertness which seem to assume
547
+ [2733.480 --> 2740.760] that we all have a uniformly lateralized attentional network okay and this really just leaves me
548
+ [2740.760 --> 2745.480] to kind of thank my collaborators in particular Keith Muir and Stephanie Rossit sort of on the
549
+ [2745.480 --> 2749.480] clinical side so they were kind of involved in the clinical side this is Keith who's basically
550
+ [2749.480 --> 2754.680] stopping me from talking about the we have data so don't don't mention at all then really the
551
+ [2754.680 --> 2759.480] more behavioral studies Gregor Thut, Gemma Learmonth and in particular Chris Benwell who kind of really
552
+ [2759.480 --> 2764.600] very much sort of drove this last data set and this is really just as you can imagine especially
553
+ [2764.600 --> 2769.080] with a rehabilitation study there are a lot of clinical people and other people kind of involved
554
+ [2769.080 --> 2772.840] in kind of helping you getting the patients together and keeping them on track you know for
555
+ [2772.840 --> 2778.840] forming the tasks and then last but least the different funding bodies and that's video thank you very
556
+ [2778.840 --> 2784.680] much.
557
+ [2785.160 --> 2789.320] For us to meet again for kind of both of thanks to Mike don't feel a question and I should
558
+ [2789.320 --> 2793.560] premise the questions they have to be benevolent questions no critical question
559
+ [2793.560 --> 2800.120] probably because as our external examiner Mike has record to be most benevolent person she
560
+ [2800.120 --> 2807.160] she stripped out a further than the work for the examiner and she also elevated student scores
561
+ [2808.120 --> 2813.960] exammers would be a bit harsh so mostly you wouldn't know that but I mean what
562
+ [2813.960 --> 2818.600] my gizm immensely modest station, previously taught saying there's no progress in 20 years but
563
+ [2818.600 --> 2825.080] but actually in the heart of it he's seen as effective you curing the orphan or neglect which
564
+ [2825.080 --> 2830.280] improves quality of life and that shouldn't be underestimated so okay you've had time to think
565
+ [2830.280 --> 2835.320] about questions and prepare anywhere you want us to go no critical questions allowed to hand out.
566
+ [2836.120 --> 2840.600] Really boring probably around a question but I wondered if there's any relationship with
567
+ [2840.600 --> 2845.400] handedness with your observer subtypes. Yes in effect I mean I think this is really an important
568
+ [2845.400 --> 2848.760] point because the effect of we've demonstrated at the moment we're very much for right-hand people
569
+ [2848.760 --> 2853.560] so we very much kind of really initially selected to people to be right-handed because there is a
570
+ [2853.560 --> 2857.160] whole kind of shift in bias with left-handed people so I think this is another whole interesting
571
+ [2857.160 --> 2861.400] question basically what happens in left-handed people and I would think that initially and even
572
+ [2861.400 --> 2864.920] you know Corbetta probably wouldn't claim this right attention network I think it's
573
+ [2864.920 --> 2868.520] very much linked to kind of right-handedness because with left-handed people we already know that
574
+ [2868.520 --> 2872.920] they are they have more bilateral representation or even representation you know possibly even in the
575
+ [2872.920 --> 2876.040] left hemisphere but I think it's a really interesting question because I think it's really
576
+ [2876.040 --> 2880.280] related to that so if we're really saying there are different subtypes and Keely just how
577
+ [2880.280 --> 2883.640] does handedness relate to that because sometimes it's not as straightforward and saying left-handedness
578
+ [2883.640 --> 2888.120] then it's kind of mediated them by the left attention and it's more bilateral so this is another
579
+ [2888.120 --> 2892.920] I think interesting question is just whenever I kind of have student populations I ask everybody who's
580
+ [2892.920 --> 2896.600] kind of left-handed and then you have an occasion a year where a lot get a lot of left-handness and you
581
+ [2896.600 --> 2901.240] can do studies and then this year we just had absolutely no left-handness so it's it's definitely on
582
+ [2901.240 --> 2904.280] the agenda to look into it.
583
+ [2904.760 --> 2906.760] Much of a left-handness and I'm wondering if you guys...
584
+ [2909.720 --> 2911.160] I was trying.
585
+ [2911.160 --> 2915.000] Instead of getting the term up you can't be called in theory.
586
+ [2915.000 --> 2916.040] I am a defense.
587
+ [2919.160 --> 2923.560] I mean the idea in common coding theory is that actions are stored together with their
588
+ [2923.560 --> 2929.480] perceptual effects, so the effects of actions on the environment, not just the actions themselves.
589
+ [2929.480 --> 2932.520] We could come from here we know like an anti-pointing house
590
+ [2932.520 --> 2933.880] behavior.
591
+ [2933.880 --> 2938.600] If the patients cannot engage with the effect of the action,
592
+ [2938.600 --> 2941.400] then of course they would be impaired in the action itself.
593
+ [2941.400 --> 2944.280] No I would completely agree with that and I think there's another way of looking at that
594
+ [2944.280 --> 2947.560] because I've kind of really squeezed it into kind of David's and Mel's kind of framework
595
+ [2947.560 --> 2951.400] but I think the problem the difference really there is you know in direct action you have like
596
+ [2951.400 --> 2955.400] egocentric coding and allocentric coding and I would then more look at that so as soon as you
597
+ [2955.400 --> 2958.520] need allocentric coding which you're basically saying this is the object in relation to another one
598
+ [2958.520 --> 2962.680] in relation to kind of the environment and this is how I would say and this is really what they
599
+ [2962.680 --> 2966.600] have the problem and you write and you can then say this is clearly in this case the perception
600
+ [2966.600 --> 2969.960] driving the action I would completely agree with that and I would also really push it and I've
601
+ [2969.960 --> 2974.120] told Daniel a little bit about this I would actually say that quite a lot of actions really you
602
+ [2974.120 --> 2977.960] can't ignore the perceptual side because especially when actions are more complex like for example
603
+ [2977.960 --> 2980.600] in driving you have a lot of heavy perceptual input that you need to act on.
604
+ [2980.600 --> 2983.000] I think this is one of the tasks where that comes in.
605
+ [2983.560 --> 2989.320] It's just I think if you purely kind of look at that model then you would kind of miss some
606
+ [2989.320 --> 2993.160] spared abilities and I think this is really the point that David was trying to make at the time as well
607
+ [2993.160 --> 2997.000] because if you really think perception and action are the same then the spared abilities that some
608
+ [2997.000 --> 3001.160] patients have like agnosic patients and neglect patients I think you'd miss that you know because
609
+ [3001.160 --> 3005.560] because the spared abilities that they have are kind of limited but they are there and I think just
610
+ [3005.560 --> 3009.640] thinking about a dual wood model allows you to kind of identify them so I'm thinking it's just
611
+ [3009.640 --> 3013.800] more helpful in terms of the approach. I think you can do a little bit of the lodges into
612
+ [3013.800 --> 3019.720] you can't approach the area. Yeah I mean to me I think this is an interesting question because I do
613
+ [3019.720 --> 3023.160] because I think the temple but you see everything murders into the temple of the temple of it's just
614
+ [3023.160 --> 3036.600] really clever so yeah you know true. Very nice talk thank you. With your line bisection
615
+ [3036.600 --> 3042.360] and the fatigue. Are you sure that the people with rightward bias move left and the people with
616
+ [3042.360 --> 3048.360] leftward bias move right and they're sort of getting better and getting rid of their bias? No and in fact
617
+ [3048.360 --> 3051.560] I mean this was very much one of the criticisms you know because all you're showing basically
618
+ [3051.560 --> 3055.080] is that there's regression to the mean, that people are getting better, and the reason that we think the
619
+ [3055.080 --> 3059.640] people aren't getting better is because the curve width increases so I don't really quite see how
620
+ [3059.640 --> 3062.760] people can improve on the task and at the same time become more variable because if you then
621
+ [3062.760 --> 3066.440] look at curve width it is basically the difference between the beginning and the end point of the
622
+ [3066.440 --> 3070.760] asymptote and the curve widths become wider so clearly over time they're struggling more with the task
623
+ [3070.760 --> 3075.000] rather than showing learning. But they're becoming more variable in their response but they're
624
+ [3075.000 --> 3081.640] calibrating to what they have to be doing. It's two different aspects of the curve itself. Yeah but
625
+ [3081.640 --> 3085.400] I would expect them but but they're also uncorrelated so I think if that's true I would expect them
626
+ [3085.400 --> 3089.480] to be correlated. I would then expect there to be a relationship between the between the increase
627
+ [3089.480 --> 3094.760] in curve width and the reduction in bias and it was uncorrelated. How many of you then said
628
+ [3094.760 --> 3102.120] expect that people could then be probably different aspects of the problem. Do you need an increase
629
+ [3102.120 --> 3109.720] in inter-emknowledge to be causing an reduction in the increase? I mean I kind of had exactly this
630
+ [3109.720 --> 3113.320] kind of question in Aberdeen and I kind of think it should be related but this was exactly the
631
+ [3113.320 --> 3116.920] argument that come back. So I think what we should then do and what we haven't done you should
632
+ [3116.920 --> 3121.400] have a vertical control condition. So if you had a vertical control condition you would then expect
633
+ [3121.400 --> 3125.880] again the regression to the mean you would expect the increase in variation but not the shift in bias
634
+ [3125.880 --> 3129.320] and that's what we should have done and we didn't and we got it published so I think we were lucky
635
+ [3132.920 --> 3138.760] I was curious about the mechanism of the changes after the raw and
636
+ [3138.760 --> 3143.400] the relation. Do you think it's some sort of compensation or something that we use all the time?
637
+ [3143.400 --> 3147.960] So if you took a normal participant and gave them a rod that was rigged, that had lead shot. Yes
638
+ [3148.920 --> 3154.200] Would you cause a long-term change in there? I think this is a little bit I think but people try to
639
+ [3154.200 --> 3157.400] look at the prism adaptation because I think people will kind of adapt to that and then there's a
640
+ [3157.400 --> 3161.640] certain carryover kind of over time and then you lose it and I think it is an interesting question
641
+ [3161.640 --> 3165.800] because with neglect and prism adaptation they adapt to that and I would expect they would adapt
642
+ [3165.800 --> 3170.120] to the different weighting as well and they do then carry that over for weeks and months so in prism
643
+ [3170.120 --> 3174.360] adaptation neglect patients really seem to be using that adaptation long term whereas people like
644
+ [3174.360 --> 3178.440] you and me don't and nobody really knows why that's the case but I think it's true I think the
645
+ [3178.440 --> 3182.600] mechanisms are actually very similar and kind of you kind of adapting to the feedback that you
646
+ [3182.600 --> 3186.520] get and you then kind of transfer that into your behaviour long term and nobody knows why but
647
+ [3186.520 --> 3190.040] that's what neglect patients kind of you seem to be doing so I think Stephanie is going to play
648
+ [3190.040 --> 3192.600] around with that a little bit more and kind of looking at different weights and kind of
649
+ [3192.600 --> 3196.600] densities, whether people adapt to that and whether they do and I would suspect they do because I think
650
+ [3197.320 --> 3201.080] people don't really know how prism adaptation works and what exactly this mediates but
651
+ [3201.080 --> 3205.080] I think the vital structures are implicated in that so I think you can actually do this why
652
+ [3205.080 --> 3210.040] and provide the structures which are maybe unimpaired and then kind of use it more long term but it's
653
+ [3210.040 --> 3213.240] my thinking we think that isn't very clear but that's kind of what I feel.
654
+ [3218.280 --> 3224.680] I'm curious about how you said the delayed pointing would be impaired in patients like this.
655
+ [3225.320 --> 3228.120] Do you have an idea how long does it take?
656
+ [3228.120 --> 3233.400] Yes I mean we've done one experiment on that and we basically had a delay it was quite long
657
+ [3233.400 --> 3237.160] so we had a condition exactly like the poor pointing task and then they were basically so the
658
+ [3237.160 --> 3241.640] light would then come off they had to wait for five seconds and they then had to reach and they
659
+ [3241.640 --> 3245.800] were then very impaired in the reaches for the left space actually so then they really couldn't and
660
+ [3245.800 --> 3249.960] again I think my argument there is that it has to produce a more long term spatial mapping so
661
+ [3249.960 --> 3254.120] they clearly have to retain that mapping kind of more long term to then perform the reach so
662
+ [3254.120 --> 3259.000] what we looked at was five seconds I think Mel's argument would be that delay kicks in after
663
+ [3259.000 --> 3262.680] a few milliseconds I'm not so quite so sure about that actually because I think you need a bit
664
+ [3262.680 --> 3267.080] of a bigger delay before the task becomes difficult for the neglect patients so was that
665
+ [3267.080 --> 3272.360] your question? Yes because we know that the direct term for this is the same in two groups
666
+ [3272.360 --> 3276.520] either two streams and that's one remaining in the scale of that couple of years but I think what
667
+ [3276.520 --> 3280.920] you're talking about is much longer so yes exactly because there are different interpretations in
668
+ [3280.920 --> 3284.120] relation to how quick the timing is in the door to stream and we just really kind of wanted to
669
+ [3284.120 --> 3287.880] kind of move away from that debate and really saying we just want to make sure it's really very
670
+ [3287.880 --> 3291.480] very long it is really five seconds but I think others have kind of shown that anyway basically
671
+ [3291.480 --> 3297.720] saying there is no sudden shift you know from immediate to kind of more long term delay and I
672
+ [3297.720 --> 3300.520] would actually agree with that I think you know the deterioration is probably going to be gradual a
673
+ [3300.520 --> 3305.720] bit like what he found for optic ataxia here so yeah I don't think there is so I don't
674
+ [3305.720 --> 3308.760] believe in my idea of it suddenly just disintegrating I don't think so
675
+ [3310.280 --> 3315.800] are you from like a memory? Yes exactly
676
+ [3318.520 --> 3323.880] you might be a document about you know online being okay and you show clearly that you're
677
+ [3323.880 --> 3330.520] pro-pointing the patient goes accurate with the object and it contrasts entirely with the
678
+ [3330.520 --> 3334.840] beginning where you show a straight line and the patient marks one end of it when asked the
679
+ [3334.840 --> 3339.960] mark middle and there's something that doesn't quite jowl for me and it may be just
680
+ [3340.760 --> 3345.400] immediately if you've got one line and you can pick it up in the middle then somehow you appreciate
681
+ [3345.400 --> 3351.560] the two ends and my real question comes in how do you account for extinction because you know
682
+ [3351.560 --> 3358.280] you've got you've got two objects then I guess the patient will ignore the contralesional side
683
+ [3359.400 --> 3364.200] but you know had they been asked to point to the middle of the two objects then that's under
684
+ [3364.200 --> 3370.520] direct control you're doing it here and now and in effect or maybe they don't show the effect if
685
+ [3370.520 --> 3374.600] they're reaching the middle of two objects? No I think that's quite a different task because if
686
+ [3374.600 --> 3377.800] you're reaching I think these experiments are done so if you basically have two objects and you
687
+ [3377.800 --> 3381.080] ask people to reach for the middle they really struggle with that because they can they can't
688
+ [3381.080 --> 3385.400] maintain sort of two objects at the same time single objects got two ends you know when does
689
+ [3386.680 --> 3389.960] but I believe I think they can't do it because it's a bit like Lyme a section because you have
690
+ [3389.960 --> 3393.960] but I believe I think they can't do it because it's a bit like line bisection because you have
691
+ [3393.960 --> 3397.480] a huge perceptual input so if you have the rod initially like a long rod you ask them to pick
692
+ [3397.480 --> 3401.880] then getting the proprioceptive feedback that it's tilted I mean Ian Robertson already showed
693
+ [3401.880 --> 3405.000] initially that there's a big difference between kind of pointing to a rod and grasping so people
694
+ [3405.000 --> 3408.600] are already a little bit better at grasping but they're not perfect and I think one of the reasons
695
+ [3408.600 --> 3412.440] that this training works and apart from doing the action is because in the final cortex you have
696
+ [3413.160 --> 3416.760] proprioceptive as well and people are actually using the proprioceptive feedback to kind of improve
697
+ [3416.760 --> 3421.400] on that task and then in the long term that kind of filters on to the perception because it's not
698
+ [3421.400 --> 3425.080] you right because because you have a long word so perceptual they can't do it so without the
699
+ [3425.080 --> 3428.600] bit is why you know if you just ask them to point to the end so you know they don't benefit so
700
+ [3428.600 --> 3433.880] if the object gets smaller and smaller then generally when they grasp it they go for the midpoint
701
+ [3433.880 --> 3438.280] that is they make proportionally fewer errors it's only when it's a very long one that they have to look at it
702
+ [3438.280 --> 3441.160] and I think this is also an important point about the pointing task that I found because they're
703
+ [3441.160 --> 3445.720] basically pointing kind of to a single target so I think if you had a target with lots of distractors
704
+ [3445.720 --> 3451.800] people would just be desperate. There's things in the back and the lap and I think it's over a second
705
+ [3451.800 --> 3456.280] second. Yeah no but I think the problem with neglect is actually no no but they were actually
706
+ [3456.280 --> 3458.840] sitting in contact completely and they were in fact I should have said that they were kind of
707
+ [3458.840 --> 3462.200] pretty much sitting in darkness so we're sitting in total darkness and then the lights would come on
708
+ [3462.200 --> 3465.160] yeah no so there wasn't a clue because I think this is an important point because I suddenly
709
+ [3465.160 --> 3468.440] see turning into search task they're awful yeah they can't they couldn't do it.
710
+ [3470.520 --> 3477.080] And you had a question for me to make some thought but I think we should thank my partner.
711
+ [3477.080 --> 3481.080] Thank you thank you thank you thank you very much.
712
+ [3484.280 --> 3488.600] Especially this is like my little genius. Oh fantastic wow this is actually my first time
713
+ [3488.600 --> 3491.560] being at everybody half-worn actually maybe as an excellent example you gave one to me
714
+ [3491.560 --> 3495.560] thank you. Hey I must have already passed my last two.
transcript/allocentric_1-JE7NSi6Fk.txt ADDED
@@ -0,0 +1,112 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ [0.000 --> 6.440] Let's take a look at how the brain receives and processes sensory information from the environment.
2
+ [6.440 --> 12.000] To get started, let's take a look at regions of the brain and the functions they provide.
3
+ [12.000 --> 19.000] Let's first look at the three most major subdivisions of the brain called the forebrain,
4
+ [19.000 --> 22.000] the midbrain, and the hindbrain.
5
+ [22.000 --> 27.000] The hindbrain consists of these two purplish regions.
6
+ [27.000 --> 32.000] The lowest, most bluish region, is the lower portion of the brain stem.
7
+ [32.000 --> 35.000] And just above that is the cerebellum.
8
+ [35.000 --> 39.000] The hindbrain is the oldest, most primitive region of the brain.
9
+ [39.000 --> 47.000] It connects the brain with the rest of the body and maintains basic physiological functions necessary for survival,
10
+ [47.000 --> 55.000] such as respiration, heart rate, sleep, wakefulness, and coordination of movement.
11
+ [55.000 --> 61.000] The midbrain, which we can see in pink, is the topmost part of the brain stem.
12
+ [61.000 --> 70.000] It provides a passageway for messages traveling between the forebrain and the rest of the body via the spinal cord.
13
+ [70.000 --> 75.000] It also is responsible for orienting responses to sensory stimuli.
14
+ [75.000 --> 83.000] For example, if you have a ball flying at your head, you can thank the midbrain for coordinating your unconscious automatic response
15
+ [83.000 --> 92.000] to duck or block the ball from hitting you before the conscious part of the brain has had a chance to process what is going on.
16
+ [92.000 --> 96.000] Finally, we have the forebrain in the gold and yellow regions.
17
+ [96.000 --> 101.000] And this is evolutionarily the most recent development of the brain.
18
+ [101.000 --> 107.000] The forebrain consists of the smaller diencephalon, which is situated just above the midbrain,
19
+ [107.000 --> 111.000] and the much larger telencephalon or cerebrum.
20
+ [111.000 --> 119.000] The forebrain handles our most complex and integrated thinking, particularly the outer cerebrum,
21
+ [119.000 --> 123.000] and is the region of the brain most involved in perceptual processing.
22
+ [123.000 --> 127.000] Let's take a deeper look at this outer cerebrum.
23
+ [127.000 --> 133.000] The first thing to note about the cerebrum is that it, along with the cerebellum of the hind brain,
24
+ [133.000 --> 139.000] is divided into two hemispheres, a right hemisphere and a left hemisphere.
25
+ [140.000 --> 148.000] Another noteworthy characteristic of the cerebrum is that it has an outer layer to it called the cerebral cortex.
26
+ [148.000 --> 153.000] Of the cerebrum, this is where our most complex thinking takes place.
27
+ [153.000 --> 158.000] The cortex gets its name because cortex means bark in Latin.
28
+ [158.000 --> 165.000] Like the bark on a tree, the cortex surrounds the brain as an outer layer.
29
+ [165.000 --> 171.000] Do note that the coloring of this image is distorted a bit because it was taken with MRI.
30
+ [171.000 --> 177.000] Typically, the outer cerebral cortex layer is a gray color, and the inner portion is white,
31
+ [177.000 --> 180.000] such as can be seen in this figure here.
32
+ [180.000 --> 190.000] The darker gray areas are actually called, gray matter, and the whitish areas are called, as you might have guessed, white matter.
33
+ [190.000 --> 196.000] If you look closely, you will see that the gray matter is made up of the cell bodies of neurons,
34
+ [196.000 --> 203.000] and the white matter is made up of the long, myelin-covered axons of these neurons.
35
+ [203.000 --> 207.000] Let's take a closer look at one of these neurons.
36
+ [207.000 --> 211.000] Neurons are information carrying cells in the brain.
37
+ [211.000 --> 219.000] Neurons can take on a handful of different shapes, but most neurons have a cell body that receives signals from other cells,
38
+ [219.000 --> 225.000] and an axon that carries and transmits its own signal to other cells.
39
+ [225.000 --> 231.000] Some cells, like the one here, have myelin sheaths attached to their axons.
40
+ [231.000 --> 240.000] These myelin sheaths help the electrical signals that travel along the axon to speed up so that messages can be sent faster.
41
+ [240.000 --> 249.000] Anyway, axons aren't naturally white, but when they are covered in myelin, they take on a whitish color to them.
42
+ [249.000 --> 252.000] So that is how we get gray matter and white matter.
43
+ [252.000 --> 264.000] Gray matter consists of the decision-making cell bodies of the neurons, and white matter consists of the wiring or the connections within the brain.
44
+ [264.000 --> 272.000] And here is an image of a human brain where you can see portions of gray matter and white matter.
45
+ [272.000 --> 276.000] Anyway, back to our miscolored MRI image.
46
+ [276.000 --> 288.000] In addition to the cerebral cortex, the cerebrum also includes some subcortical structures, so named because they lie deep within the cerebrum beneath the cortex.
47
+ [288.000 --> 297.000] Let's take a look at the brain's two subcortical systems, the limbic system and the basal ganglia.
48
+ [297.000 --> 303.000] The limbic system is one of the earliest regions of the four brain to develop in the course of evolution.
49
+ [303.000 --> 308.000] It helps process our motivation for behaviors, emotion, and memory.
50
+ [308.000 --> 315.000] You'll start to notice that most structures within the brain come in pairs, one for each side.
51
+ [315.000 --> 321.000] Two important limbic structures I want to point out are the amygdala and the hippocampus.
52
+ [321.000 --> 329.000] The amygdala is the red almond-shaped structure in this figure, and amygdala actually means almond.
53
+ [329.000 --> 336.000] The amygdala plays a big role in some of our most basic emotions, such as fear and anger.
54
+ [336.000 --> 342.000] Another important limbic structure is the hippocampus, the purplish blue structure.
55
+ [342.000 --> 349.000] Hippocampus means seahorse, and it was so named because the structure actually resembles a seahorse.
56
+ [349.000 --> 354.000] The hippocampus plays a major role in the formation of new memories.
57
+ [354.000 --> 362.000] It also helps us with our sense of allocentric space, which has to do with where we are located within our environment.
58
+ [362.000 --> 367.000] Navigating with a map, for example, makes use of allocentric space.
59
+ [367.000 --> 373.000] Next, let's take a look at the other subcortical system, the basal ganglia.
60
+ [373.000 --> 384.000] The basal ganglia helps us form associations between our actions or other events around us and certain environmental stimuli.
61
+ [384.000 --> 391.000] You may have learned about classical and operant conditioning, such as with Pavlov and his drooling dog,
62
+ [391.000 --> 395.000] where he conditioned the dog to drool to the sound of a bell.
63
+ [395.000 --> 398.000] That would be mediated by the basal ganglia.
64
+ [398.000 --> 406.000] They also help us control our voluntary motor responses, including skeletal muscle movement and eye movements.
65
+ [406.000 --> 412.000] You'll notice that the amygdala is part of both the limbic system and the basal ganglia.
66
+ [413.000 --> 421.000] And another important basal ganglia structure is the thalamus, a key brain structure for sensation and perception,
67
+ [421.000 --> 428.000] that acts as a relay station for most sensory information.
68
+ [428.000 --> 433.000] The cerebrum can also be divided into four main lobes.
69
+ [433.000 --> 439.000] The frontal lobe up front, the parietal lobe on the upper sides towards the back of the head,
70
+ [439.000 --> 446.000] the temporal lobe on the lower sides of the head, and the occipital lobe in the very back.
71
+ [446.000 --> 458.000] The frontal lobe is in charge of our executive functions, including planning, organizing, decision making, problem solving, and reasoning.
72
+ [458.000 --> 464.000] It also plays a role in our more complex emotions and emotional assessment of situations.
73
+ [465.000 --> 474.000] It helps us with our fine, very controlled motor movements and motor programs, such as typing and texting, piano playing, and even speech.
74
+ [474.000 --> 477.000] It is very important in the production of language.
75
+ [477.000 --> 488.000] We have a specific area only within the left frontal lobe, usually, for some people it's on the right, called Broca's area, that specializes in speech production.
76
+ [489.000 --> 495.000] The frontal lobe also plays a role in our sense of taste and smell.
77
+ [495.000 --> 501.000] The parietal lobe's primary role is our sense of egocentric space.
78
+ [501.000 --> 509.000] Different from allocentric space, processed by the hippocampus, egocentric space tells us how to interact with our environment.
79
+ [509.000 --> 515.000] It tells us where our bodies are in space, whether we are right side up or upside down.
80
+ [516.000 --> 523.000] The parietal lobe also manages our sense of touch and helps us pay attention to the world around us.
81
+ [523.000 --> 534.000] The temporal lobe helps us with the recognition of objects and plays a big role with memory formation, working closely with the hippocampus.
82
+ [534.000 --> 541.000] Because it interacts with the hippocampus, it is also responsible for our sense of allocentric space.
83
+ [541.000 --> 550.000] It plays a role in language comprehension, and just like the frontal lobe has a specific area just on one side for language production,
84
+ [550.000 --> 556.000] the temporal lobe has a specific area called Wernicke's area for language comprehension.
85
+ [556.000 --> 563.000] It, too, is typically found on the left side of the brain, but in some individuals it can be on the right.
86
+ [563.000 --> 568.000] Finally, the temporal lobe helps us process sound or audition.
87
+ [569.000 --> 580.000] And last but not least, the occipital lobe's primary function is simply but very importantly, the processing of vision.
88
+ [580.000 --> 587.000] I also wanted to elaborate a bit more on the cerebellum here, even though it is considered part of the hind brain.
89
+ [587.000 --> 597.000] The cerebellum is very important for sensory motor integration, referring to how our sensory and motor systems work together to guide our action.
90
+ [597.000 --> 606.000] For example, some researchers have conducted experiments using prism goggles, which when worn display the world upside down.
91
+ [606.000 --> 619.000] As you can imagine, when people wearing prism goggles try to walk anywhere or reach for anything, or pretty much perform any sort of action that requires visual input, they really struggle.
92
+ [620.000 --> 629.000] However, with time and a lot of practice, people eventually learn to interact with their world in the same way as if their vision were right-side up.
93
+ [629.000 --> 638.000] And interestingly, after adapting to the prism goggles, it takes some time to readjust to the real world what's taking them off.
94
+ [638.000 --> 644.000] Anyway, it is the cerebellum that is responsible for integrating our vision and motor actions.
95
+ [644.000 --> 649.000] The cerebellum helps us adapt to new mappings, such as with prism goggles.
96
+ [649.000 --> 660.000] It also helps with coordination, especially when we need to make quick adjustments, such as keeping ourselves from falling when we trip over a rock, and it helps with posture.
97
+ [660.000 --> 671.000] And amazingly, it constitutes only 10% of the brain's mass, but contains over half of its neurons.
98
+ [672.000 --> 680.000] And as I mentioned before, the cortex of the brain, the very outer layer, is responsible for our most complex processing.
99
+ [680.000 --> 687.000] We can divide the cortex into the various regions that are responsible for early and more complex processing.
100
+ [687.000 --> 698.000] Any regions labeled as primary cortex are the earlier more elementary processing areas that handle the more basic dimensions of sensory information.
101
+ [698.000 --> 706.000] So for example, the primary visual cortex is the first portion of the cortex to receive and process visual information.
102
+ [706.000 --> 718.000] And it handles the earliest stages of visual processing, such as the recognition of lines of various orientations and edges.
103
+ [719.000 --> 730.000] Association cortex, labeled in purple, on the other hand, is an area that is more complex and integrative with our memory and past experience.
104
+ [730.000 --> 737.000] Visual Association cortex, for example, helps us recognize whole objects and people.
105
+ [738.000 --> 750.000] The last thing I'd like to mention is that each of our senses has a primary pathway in which stimuli from the environment travels from sensory receptors to that senses primary cortex.
106
+ [750.000 --> 754.000] Here we see the primary pathway for vision.
107
+ [754.000 --> 762.000] Sensory receptors in the eyes respond to light and transduce that light into a neural signal the brain can understand.
108
+ [762.000 --> 771.000] That neural signal exits the eyes along their optic nerves and reaches the thalamus, that sensory relay we looked at earlier.
109
+ [771.000 --> 780.000] From the thalamus, the signals travel on to the primary visual cortex in the occipital lobes located in the back of the brain.
110
+ [780.000 --> 791.000] Each sense has its own primary pathway like this that goes from the sensory receptors all the way to the primary cortex for that sense.
111
+ [792.000 --> 797.000] Now that this video is over, consider briefly writing down from memory what you have learned.
112
+ [797.000 --> 804.000] This sort of practice of retrieving from memory is one of the best things you can do to remember what you just learned.
transcript/allocentric_1by5J7c5Vz4.txt ADDED
@@ -0,0 +1,46 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ [0.000 --> 7.040] There are approximately 285 million people with visual impairments around the world.
2
+ [7.040 --> 16.880] Making your app accessible not just opens it up to these users, but it has a potential to improve design for everyone.
3
+ [16.880 --> 21.160] Most people are familiar with an accessibility service called TalkBack,
4
+ [21.160 --> 25.080] which is a screen reader utility for people who are blind and visually impaired.
5
+ [25.240 --> 33.320] With TalkBack, the user performs input via gestures such as swiping or dragging or an external keyboard.
6
+ [33.320 --> 36.920] The output is usually spoken feedback.
7
+ [36.920 --> 39.560] There are two gesture input modes.
8
+ [39.560 --> 46.360] The first one is touch exploration, where you drag your finger across the screen,
9
+ [46.360 --> 50.040] and the second one is linear navigation,
10
+ [50.040 --> 58.440] where you swipe left and right with your finger until you find the item of interest.
11
+ [58.440 --> 63.720] Once you arrive to the item you're interested in, you double tap on it to activate.
12
+ [63.720 --> 71.800] The primary way in which you can attach alternative text description for your UI elements to be spoken by TalkBack
13
+ [71.800 --> 75.960] is by using an Android attribute called Content Description.
14
+ [75.960 --> 80.200] If you don't provide Content Description for an image button, for example,
15
+ [80.200 --> 83.400] the experience for TalkBack user can be jarring.
16
+ [90.600 --> 94.440] For decorative elements such as spacers and dividers,
17
+ [94.440 --> 101.320] setting Content Description to null will tell TalkBack to ignore and not speak these elements.
18
+ [101.320 --> 105.800] Make sure to not include Control Type or Control State in your Content Description,
19
+ [106.440 --> 113.080] words like "button", "selected", "checked", etc., as Android natively does that for you.
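A hedged illustration of the attribute just described (not taken from the video): contentDescription can also be set from code. The function name and view parameters here are hypothetical.

```kotlin
import android.view.View
import android.widget.ImageButton

// Illustrative only: label an ImageButton for TalkBack and mark a purely
// decorative divider so screen readers skip it.
fun labelForTalkBack(playButton: ImageButton, divider: View) {
    // Plain label, with no control type or state ("button", "selected"): TalkBack adds those itself.
    playButton.contentDescription = "Play"

    // Decorative element: equivalent in spirit to android:contentDescription="@null" in XML.
    divider.importantForAccessibility = View.IMPORTANT_FOR_ACCESSIBILITY_NO
}
```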
20
+ [113.720 --> 119.960] Android Lint automatically shows you which UI controls lack Content Descriptions.
21
+ [119.960 --> 126.040] To keep TalkBack spoken output tidy, you can arrange related content into groups by using
22
+ [126.040 --> 131.720] Focusable Containers. When TalkBack encounters such a container, it will present the content
23
+ [131.720 --> 138.440] as a single announcement. For more complex structures such as tables, you can assign focus to a container
24
+ [138.440 --> 145.080] holding one piece of the structure such as a single row. Grouping content both reduces the
25
+ [145.080 --> 151.880] amount of swiping the user has to do while streamlining speech output. Here is an example of how
26
+ [151.880 --> 164.680] ungrouped table content works. And here's the same content with grouping applied.
27
+ [165.640 --> 173.800] Content grouping activity, song details, name, hey Jude, artists, the Beatles cost $1.45.
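A minimal sketch, assumed rather than taken from the video, of grouping a row's children so TalkBack announces them together; the ViewGroup parameter is hypothetical.

```kotlin
import android.os.Build
import android.view.ViewGroup

// Illustrative only: make a table row a single focusable container for TalkBack,
// so its children are spoken together as one announcement.
fun groupRowForTalkBack(row: ViewGroup) {
    // Classic approach: a focusable container is announced as a unit.
    row.isFocusable = true
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.P) {
        // API 28+: mark the container focusable for screen readers specifically.
        row.isScreenReaderFocusable = true
    }
}
```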
28
+ [173.800 --> 181.000] You should manually test your app with TalkBack and eyes closed to understand how a blind user
29
+ [181.000 --> 186.120] may experience it. We also provide accessibility scanner as an app in Google Play.
30
+ [187.080 --> 192.760] It suggests accessibility improvements automatically by looking at content labels,
31
+ [192.760 --> 199.080] clickable items, contrast, and more. Visual impairment doesn't just refer to blindness.
32
+ [199.960 --> 207.800] 65% of our population is far-sighted, for example. With careful design, you can make sure that many
33
+ [207.800 --> 213.720] of your visually impaired users can have a positive experience without having to rely on TalkBack.
34
+ [214.360 --> 221.320] Begin by making sure that UI of your apps works with other accessibility settings, including
35
+ [221.320 --> 230.840] increased font size and magnification. Keep your touch targets large, at least 48 by 48 DP.
36
+ [231.400 --> 237.160] This makes them easier to distinguish and touch. Provide adequate color contrast.
37
+ [237.880 --> 243.480] The World Wide Web Consortium created color contrast accessibility guidelines to help.
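For reference, a small sketch of the contrast-ratio formula those W3C (WCAG 2.x) guidelines define; the colors and the 4.5:1 threshold shown (the WCAG AA value for normal-size text) are examples, not values from the video.

```kotlin
import kotlin.math.pow

// Relative luminance of an sRGB color (0xAARRGGBB), per the WCAG definition.
fun relativeLuminance(argb: Int): Double {
    fun channel(value: Int): Double {
        val c = value / 255.0
        return if (c <= 0.03928) c / 12.92 else ((c + 0.055) / 1.055).pow(2.4)
    }
    val r = channel((argb shr 16) and 0xFF)
    val g = channel((argb shr 8) and 0xFF)
    val b = channel(argb and 0xFF)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b
}

// WCAG contrast ratio between two colors: (L_lighter + 0.05) / (L_darker + 0.05).
fun contrastRatio(foreground: Int, background: Int): Double {
    val l1 = relativeLuminance(foreground)
    val l2 = relativeLuminance(background)
    return (maxOf(l1, l2) + 0.05) / (minOf(l1, l2) + 0.05)
}

// Example: dark grey text on white passes the 4.5:1 AA threshold for body text.
fun main() {
    val ratio = contrastRatio(0xFF444444.toInt(), 0xFFFFFFFF.toInt())
    println("contrast = %.2f, passes AA = %b".format(ratio, ratio >= 4.5))
}
```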
38
+ [244.040 --> 251.400] And to assist users with color deficiencies, use cues other than color to distinguish UI elements.
39
+ [252.200 --> 259.320] For example, more descriptive instructional text. If you're using custom views or drawing your app
40
+ [259.320 --> 268.120] window using OpenGL, you need to manually define accessibility metadata so that accessibility
41
+ [268.120 --> 274.440] services can interpret your app properly. The easiest way to achieve this goal is to rely on
42
+ [274.440 --> 281.240] the ExploreByTouchHelper class. With just a few methods, you can build a hierarchy of virtual views
43
+ [281.240 --> 287.960] that are accessible to TalkBack. Making your app accessible doesn't just open it to new users.
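A minimal, assumed sketch of what an ExploreByTouchHelper subclass can look like for a custom-drawn view; the bar-chart framing, item model and helper names are hypothetical illustrations, not code from the video, and the imports assume the androidx.customview artifact.

```kotlin
import android.graphics.Rect
import android.os.Bundle
import android.view.View
import androidx.core.view.ViewCompat
import androidx.core.view.accessibility.AccessibilityNodeInfoCompat
import androidx.customview.widget.ExploreByTouchHelper

// Hypothetical item drawn by a custom view: a label plus its on-screen bounds.
data class ChartBar(val label: String, val bounds: Rect)

// Exposes each drawn bar to TalkBack as a virtual view.
class ChartTouchHelper(host: View, private val bars: List<ChartBar>) :
    ExploreByTouchHelper(host) {

    override fun getVirtualViewAt(x: Float, y: Float): Int {
        val index = bars.indexOfFirst { it.bounds.contains(x.toInt(), y.toInt()) }
        return if (index >= 0) index else ExploreByTouchHelper.INVALID_ID
    }

    override fun getVisibleVirtualViews(virtualViewIds: MutableList<Int>) {
        bars.indices.forEach { virtualViewIds.add(it) }
    }

    override fun onPopulateNodeForVirtualView(virtualViewId: Int, node: AccessibilityNodeInfoCompat) {
        val bar = bars[virtualViewId]
        node.contentDescription = bar.label   // what TalkBack speaks
        node.setBoundsInParent(bar.bounds)    // where touch exploration finds it
        node.addAction(AccessibilityNodeInfoCompat.ACTION_CLICK)
    }

    override fun onPerformActionForVirtualView(virtualViewId: Int, action: Int, arguments: Bundle?): Boolean {
        // Handle ACTION_CLICK etc. for the virtual view; nothing to do in this sketch.
        return false
    }
}

// Wiring it up on the custom view (e.g. in its init block):
fun installHelper(customView: View, bars: List<ChartBar>) {
    ViewCompat.setAccessibilityDelegate(customView, ChartTouchHelper(customView, bars))
}
```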
44
+ [287.960 --> 294.040] It helps to make the world a better place, one app at a time. To read more about developing and
45
+ [294.040 --> 300.920] testing your apps for users with visual impairments, check out the links below. Also check out the video
46
+ [301.640 --> 310.760] on developing for users with motor impairments.
transcript/allocentric_2lfVFusH-lA.txt ADDED
@@ -0,0 +1,892 @@
1
+ [0.000 --> 7.840] Follow along:
2
+ [7.840 --> 10.600] we will have a conference talk,
3
+ [10.600 --> 14.080] Creativity across Modalities in Viewpoint
4
+ [14.080 --> 17.480] Constructions, with Professor Ives Weitzer
5
+ [17.480 --> 20.600] of the University of California at Berkeley.
6
+ [20.600 --> 23.600] Professor Ives Weitzer
7
+ [23.600 --> 26.580] CRAS
8
+ [31.520 --> 33.660] Vielen amor, é muito bom você
9
+ [33.660 --> 36.040] Estou vindo...
10
+ [38.520 --> 41.800] Eu esqueçando em mim
11
+ [41.800 --> 45.900] Éastllowoso e muita fields de canoe
12
+ [45.900 --> 48.640] Em underestimate aquele cogente
13
+ [48.640 --> 50.000] Porque umaνεção de geração
14
+ [50.000 --> 51.760] Finder a vida da�
15
+ [51.760 --> 55.040] seus opentes yay anos naraumafa
16
+ [55.200 --> 58.120] pros exemplos do dia final.
17
+ [58.280 --> 61.320] Então, spice þ faithful
18
+ [61.520 --> 63.800] porque fleeingiro isso em dólares
19
+ [63.880 --> 66.280] apre o ap Above.
20
+ [67.520 --> 69.400] ο Olivietrofedora temos nossa
21
+ [69.600 --> 72.660] proteção ku
22
+ [72.820 --> 75.760] modelling freaking
23
+ [75.860 --> 77.220] Phillipo
24
+ [77.580 --> 80.080] As thunderstorms,
25
+ [80.080 --> 89.460] sobre alguns weighsessos sobre predicted sem intersecuractions,
26
+ [89.660 --> 93.260] pra tanto a plaintor月 que vai鼓ender na peça.
27
+ [95.060 --> 98.260] Mas�pera está em um exemplo sobre preto te torcendo,
28
+ [98.520 --> 102.120] que depois me deixe uma bustedura.
29
+ [102.920 --> 107.240] Isso nu
30
+ [111.080 --> 114.540] e disse que ele tinha esse jarras na sincerityira
31
+ [115.560 --> 119.120] e acabou deixando a coração de lugar e comлоu
32
+ [121.780 --> 124.520] que o jarras sempre tinha c theological
33
+ [127.240 --> 129.360] por que foi Cana
34
+ [129.360 --> 132.280] e para Garcia ou caring
35
+ [135.440 --> 137.280] não poderia era um ecodeirro
36
+ [138.160 --> 139.520] é nada premise
37
+ [140.080 --> 147.800] nouns
38
+ [148.400 --> 152.800] o faz de 공ia
39
+ [152.800 --> 155.600] artefacto
40
+ [155.600 --> 158.600] de coisaились設ados a growing
41
+ [158.600 --> 161.840] é tranquilo
42
+ [161.840 --> 165.740] 197
43
+ [165.740 --> 167.160] genera a sua corralidade
44
+ [167.160 --> 169.200] né gente benta
45
+ [169.200 --> 169.520] Isso é mal Gulfe
46
+ [169.520 --> 172.400] eu altijdbroken como ele硬 faire 고민 Sun
47
+ [173.040 --> 175.940] chicarrora, transformeت uma
48
+ [176.700 --> 178.540] ]: apagСantedas em uma
49
+ [178.680 --> 180.100] col queda
50
+ [181.340 --> 182.340] sono
51
+ [182.900 --> 184.420] 저 Fighter3
52
+ [184.520 --> 185.540] ruim
53
+ [187.080 --> 189.900] Então ainda temos um
54
+ [190.180 --> 192.700] ROBERT
55
+ [192.940 --> 194.480] vez
56
+ [195.360 --> 196.780] falar que sä
57
+ [196.780 --> 202.380] our own viewpoint in order to give an account, but we now know from neuroscience
58
+ [202.380 --> 209.540] that we have to do more than that, including the construals and affordances of other
59
+ [209.540 --> 212.140] people, and perhaps of the animals present.
60
+ [212.140 --> 216.300] So, basically, when you are somewhere with other people,
61
+ [216.300 --> 221.420] some part of you is always aware not only of what you can see and what you cannot
62
+ [221.420 --> 228.420] see, but of what they can see and what they cannot see.
63
+ [228.420 --> 234.740] And, as we will see when we get to literature, we can also use
64
+ [234.740 --> 243.380] these abilities to construct the perceptions and affordances of people who are not there,
65
+ [243.380 --> 249.380] or even of imagined people.
66
+ [249.380 --> 255.780] So, an understanding of communication as involving simulation:
67
+ [255.780 --> 263.460] this is a term from cognitive science which says, essentially, that what I am doing
68
+ [263.460 --> 271.940] when I am trying to communicate with you is trying to get you to simulate, to go to the
69
+ [271.940 --> 278.940] situation that I am describing.
70
+ [278.940 --> 286.940] So simulation is what I am trying to get you to do: not to memorize a set of facts, but
71
+ [286.940 --> 292.020] to simulate the situation in the world. And that necessarily means that we have to
72
+ [292.020 --> 298.820] do the simulation somewhere. Since I don't know of anywhere else to simulate the situation, I
73
+ [298.820 --> 307.100] have to imagine it in some place. Ok, let's move on.
74
+ [307.100 --> 314.860] Ok. So, turning to language: I have all these
75
+ [314.860 --> 320.300] different spatial systems that I represent in language, and this includes things
76
+ [320.800 --> 326.020] muito, muito eager de tal, gew Acenteie.
77
+ [326.020 --> 334.160] Mulher da faz Valoriano temos te vertigo.
78
+ [334.160 --> 337.980] Tua a forma deельgar e estar replas.
79
+ [337.980 --> 344.960] Em Scotland vamos ao ponto de proph camping.
80
+ [344.960 --> 348.600] e eu só disse, por um momento em que vocês não sabem isso,
81
+ [348.600 --> 352.800] existem os systems de linguística no mundo,
82
+ [352.800 --> 355.440] onde eu não poderia dizer,
83
+ [355.440 --> 361.600] oh, olha, você tem um cronho ali, eu tenho que dizer,
84
+ [361.600 --> 367.700] oh, olha, você tem um cronho ali, ou um salto do céu,
85
+ [367.700 --> 370.080] eu tenho que saber o direção do céu do céu,
86
+ [370.080 --> 373.480] então eu tenho que ser able a gente saber isso.
87
+ [373.540 --> 377.200] Ok, assim antecedendo de onde eu vou.
88
+ [378.340 --> 381.020] Nabre, quando eu acho que tenho Yoga,
89
+ [381.020 --> 384.720] asiıyla comentava como o seu civilization planteou essas coisas,
90
+ [384.920 --> 387.500] um commonly douse de waveira слово de kitty,
91
+ [387.500 --> 389.700] é que lidasse à zmanda,
92
+ [390.060 --> 393.020] a 말씀드� comeu, o próximo e etc.
93
+ [394.320 --> 397.100] Uma outrailians differently do tipo,
94
+ [397.100 --> 399.020] um que vem de intervention longuama comiture,
95
+ [399.020 --> 402.020] mas é um意ousedlande,
96
+ [403.480 --> 417.860] Aqui aqui, highway Jimin K
97
+ [417.860 --> 420.640] Skills de pássito, se você raising seus hands
98
+ [420.640 --> 424.620] e seu Bash positive você pode ser mais e retreat lace.
99
+ [425.700 --> 429.520] Talvez chirping um blockedu comothing.
100
+ [429.880 --> 432.360] É um stuffed solve護loiero.
101
+ [432.360 --> 436.000] De sais como é o assessa do mutador,
102
+ [436.500 --> 439.300] tipo decide de um jogador ou outro.
103
+ [440.420 --> 444.520] O movimento do meu armado poderia representar o movimento e tal.
104
+ [444.520 --> 459.120] E foi tudo que depoispo fico se for endured, por exemplo, na snail queisa tivermos as funções
105
+ [459.120 --> 470.520] информ Mye Valley como pra parkedool Levi pitching a gente no primeiro timeline.
106
+ [470.520 --> 472.920] owing.
107
+ [473.580 --> 476.160] Nós já us demos noted,
108
+ [476.520 --> 480.240] que Hoísia aqui é feira eappeiro.
109
+ [480.400 --> 486.800] mas também está é um gourdeio �ardá dummy gestsky questo accountual de droogam Baptido lain
110
+ [487.960 --> 489.620] Aquela arrumado da avaient Lars
111
+ [489.720 --> 490.720] para reproverem isso.
112
+ [490.860 --> 492.820] Rather стран da докумência Motion
113
+ [493.280 --> 494.680] deance
114
+ [495.220 --> 497.120] pleche aailleurs
115
+ [497.120 --> 501.140] como na走了 Rumocion,
116
+ [501.140 --> 504.980] como se tenía uma conexão no lado da elasticсть,
117
+ [504.980 --> 506.960] e de particular,
118
+ [506.960 --> 510.580] eu poderia lá na errada do início e só me�� meτά.
119
+ [513.440 --> 517.040] 심, então aставляma de estar affectedas,
120
+ [517.040 --> 519.420] no quê avaliando esse tempo Chromecos Battle
121
+ [519.420 --> 521.620] raha é prevencido a matem sechs Caucasados
122
+ [521.620 --> 521.700] daortuntalka,
123
+ [521.700 --> 527.000] ou de demain, que eu faci pra esterbar...
124
+ [531.080 --> 537.280] wavelength ou blankamento eu sayo de gesto.
125
+ [537.280 --> 544.560] Na disputa helical, שהוא sempre percebe eu que não dentro do environment.
126
+ [544.560 --> 546.720] É tão bomule.
127
+ [546.720 --> 558.400] trap, então a gente detecta que o organismo todas essas las que acabava os Manuel e Neração
128
+ [558.900 --> 575.740] sua das asiaturas de outras pessoas, e minhas pacitudes alike assim como a execução de graphite initiate
129
+ [575.740 --> 581.560] waves do teng 화� הוא biếtil
130
+ [581.560 --> 586.620] não até questiones do
131
+ [586.620 --> 588.520] voce se Saintiu
132
+ [588.520 --> 591.820] ninguém conhece, se precisa近ar
133
+ [591.820 --> 594.000] então aqui estiverizam alguns
134
+ [594.000 --> 596.300] enconthos icómes
135
+ [596.300 --> 598.740] que dizer isso eta
136
+ [598.740 --> 599.340] ele vira
137
+ [599.340 --> 606.360] com umalona cruzista, Manhattan.
138
+ [606.360 --> 609.440] solta.
139
+ [612.400 --> 613.400] Sim, deixa meu cigetinho rozumerem, se não tiver您щado,
140
+ [613.420 --> 617.120] se temчто do geri,
141
+ [618.280 --> 620.500] se não tended dar como seria multifask,
142
+ [628.720 --> 630.300] o quê 뉴스 Veio com Eggman
143
+ [632.300 --> 633.160] työrtico retiga.
144
+ [633.160 --> 646.540] Now, I am going to try; this is going to be harder with the green over here.
145
+ [706.540 --> 736.540] Here is the center of the...
146
+ [736.540 --> 747.700] Bose
147
+ [747.700 --> 761.900] De novo economia, acho que seja só na Insten Stone aqui né então então era muito offensive para
148
+ [761.900 --> 766.840] sensitively shocking about anکta exponencial a umaonal traumática,
149
+ [766.840 --> 769.540] então tem måsa deichtigos exploitados para submitted
150
+ [769.540 --> 776.760] fixar o fato de você voltar em todaoffenda合防za Falls.
151
+ [778.320 --> 781.820] Est storyline,
152
+ [784.820 --> 787.000] esta espécie não muda,
153
+ [787.000 --> 793.180] pico, faço vazio é se manipulasse em
154
+ [793.280 --> 796.500] 49 até mesmo希 Paco.
155
+ [796.640 --> 798.580] repositística da Bethane
156
+ [798.600 --> 801.860] gaat e الا-nimic vousざ quer
157
+ [801.880 --> 803.680] esse tratamento, pois isso não灌çava
158
+ [805.060 --> 807.640] por papa que eu este assuming
159
+ [808.500 --> 809.920] ito que agora se бывает
160
+ [810.900 --> 812.160] com engines
161
+ [812.420 --> 814.280] ok
162
+ [815.280 --> 815.720] KN
163
+ [817.000 --> 834.760] Ok, então, em interacias, eu não tenho um espaço em mim, mas eu também tenho um espaço
164
+ [834.820 --> 839.220] de revalor parazugarmos a uma das pessoas na reção.
165
+ [846.740 --> 851.380] Como eu faço em uma Ugharl elevation.
166
+ [851.380 --> 858.380] Vamos dizer, é claro que não há mais uma vez que seus espécieuxes não se ativam.
167
+ [858.380 --> 862.380] Tem um pouco de espécie entre seus espécieuxes.
168
+ [862.380 --> 866.380] E agora, isso significa que há uma espécieuxa conversacional,
169
+ [866.380 --> 868.380] incluindo também o seu espécieuxo.
170
+ [868.380 --> 872.380] E, crucially, há uma espécieuxa de espécieuxes.
171
+ [872.380 --> 875.380] Então, o espécieuxo directo para a espécieuxa,
172
+ [875.380 --> 879.380] você vai ser directo para que ele tenha feito isso.
173
+ [882.380 --> 885.380] other espécieuxes podem ser aqui.
174
+ [885.380 --> 888.380] Mas, se eu quiser dizer, não se dizer nada agora,
175
+ [888.380 --> 891.380] ou se eu quiser fazer um ponto,
176
+ [893.380 --> 897.380] isso vai ser muito difícil se eu fazer isso,
177
+ [897.380 --> 899.380] ou isso,
178
+ [899.380 --> 902.380] se o address ele foi aqui.
179
+ [902.380 --> 905.380] Eu tenho que fazer isso por que ele tenha feito isso.
180
+ [905.380 --> 908.380] E o resto é um espécieuxo, um espécieuxo.
181
+ [908.380 --> 911.380] Agora, isso significa que, como eu disse,
182
+ [911.380 --> 916.380] que quer fazer um ponto, quer fazer um ponto de uma conversação de uma conversação.
183
+ [916.380 --> 922.380] Eles não podem fazer isso por que eles têm que fazer isso por que eles estão aqui.
184
+ [922.380 --> 927.380] E, no mesmo tempo, o que é o que é o que é o centro de uma conversação
185
+ [927.380 --> 931.380] é que não só se retenir sobre sua espécieuxa,
186
+ [931.380 --> 934.380] mas sobre a outra espécieuxa.
187
+ [934.380 --> 939.380] Isso realmente faz o que quer fazer, você quer fazer a conversação.
188
+ [939.380 --> 942.380] Então, tem um espécieuxo,
189
+ [942.380 --> 947.380] e é um espécieuxo,
190
+ [947.380 --> 951.380] mas um espécieuxo e um extrúcido de uma espécieuxa.
191
+ [952.380 --> 954.380] Ok.
192
+ [954.380 --> 956.380] E nós só precisamos fazer isso.
193
+ [956.380 --> 959.380] Ah, nós podemos dizer isso, agora, então, como eu disse,
194
+ [959.380 --> 961.380] para interaccional de ajustes interaccional,
195
+ [961.380 --> 963.380] você retenir sobre a espécieuxa ou,
196
+ [963.380 --> 966.380] ou, por exemplo, a outra espécieuxa,
197
+ [966.380 --> 969.380] que é extrainteraccional.
198
+ [969.380 --> 971.380] Isso não tem que fazer um ponto de uma espécieuxo.
199
+ [971.380 --> 973.380] Então, no McNeill Lab,
200
+ [973.380 --> 977.380] tem um lugar que tem que fazer um lugar onde os estudantes
201
+ [977.380 --> 980.380] retenir sobre a espécieuxa e a outra espécieuxa.
202
+ [980.380 --> 983.380] Então, o caso que eles estavam fazendo
203
+ [983.380 --> 986.380] foi uma espécieuxa ou um papel de papel.
204
+ [986.380 --> 989.380] Então, eles foram firstos com uma espécieuxa,
205
+ [989.380 --> 993.380] e eles tinham que fazer isso sem.
206
+ [993.380 --> 995.380] Ok.
207
+ [995.380 --> 997.380] E o instructor de uma espécieuxa
208
+ [997.380 --> 1001.380] foi realmente retenir sobre a espécieuxa
209
+ [1001.380 --> 1005.380] e ajudando a fazer uma espécieuxa
210
+ [1005.380 --> 1008.380] ou um papel de papel.
211
+ [1008.380 --> 1011.380] Então, por exemplo, quando você retenir sobre a espécieuxa
212
+ [1011.380 --> 1013.380] ou uma espécieuxa,
213
+ [1013.380 --> 1015.380] é extrainteraccional,
214
+ [1015.380 --> 1017.380] se isso é um lugar onde ele está,
215
+ [1017.380 --> 1019.380] como uma espécieuxa ou um papel de papel,
216
+ [1019.380 --> 1021.380] ou se eu estou tentando realmente fazer uma espécieuxa
217
+ [1021.380 --> 1025.380] e fazer uma conversação.
218
+ [1025.380 --> 1027.380] Ok.
219
+ [1035.380 --> 1037.380] Sim, isso é importante.
220
+ [1037.380 --> 1039.380] 13.
221
+ [1043.380 --> 1045.380] Ok.
222
+ [1045.380 --> 1048.380] Então, quando estamos interessando,
223
+ [1048.380 --> 1050.380] nós dissemos que nós somos fosos,
224
+ [1050.380 --> 1053.380] e nós só disse que não é necessariamente
225
+ [1053.380 --> 1055.380] o meu próprio fio,
226
+ [1055.380 --> 1057.380] o que eu estou fazendo,
227
+ [1057.380 --> 1060.380] eu estou imaginando fosos.
228
+ [1060.380 --> 1062.380] Então, se eu dizer que ele disse,
229
+ [1062.380 --> 1064.380] e eu disse,
230
+ [1064.380 --> 1066.380] ou não,
231
+ [1066.380 --> 1068.380] você sabe que esses gestos não estão direcidos
232
+ [1068.380 --> 1070.380] em você, eles estão direcidos
233
+ [1070.380 --> 1072.380] em uma espécieuxa ou um papel de papel
234
+ [1072.380 --> 1074.380] em uma conversação.
235
+ [1074.380 --> 1078.380] Mas você também se nota que eu já deixei os roles.
236
+ [1078.380 --> 1080.380] Eu disse que ele disse,
237
+ [1080.380 --> 1082.380] como isso, eu disse, ou não,
238
+ [1082.380 --> 1086.380] então eu não tinha nem mais tempo para ir lá.
239
+ [1086.380 --> 1088.380] Eu deixei os roles,
240
+ [1088.380 --> 1090.380] e isso é o que Perry Jansen costa
241
+ [1090.380 --> 1093.380] um pouco mais de 100 de 18 de agressão de rotação
242
+ [1093.380 --> 1096.380] em americanos e sangueis de exemplos.
243
+ [1096.380 --> 1098.380] Ok.
244
+ [1100.380 --> 1102.380] Também documento em sangueis de rotação
245
+ [1102.380 --> 1108.380] que os roles de rotação são de um tipo de espécieuxa.
246
+ [1108.380 --> 1110.380] Então, por exemplo, eu poderia dizer,
247
+ [1110.380 --> 1112.380] então ele disse,
248
+ [1112.380 --> 1114.380] e eu disse,
249
+ [1114.380 --> 1116.380] ou não,
250
+ [1116.380 --> 1120.380] então são dois ways de regras de rotação
251
+ [1120.380 --> 1122.380] em uma espécieuxa,
252
+ [1122.380 --> 1124.380] e o investimento de um tipo de espécieuxa,
253
+ [1124.380 --> 1126.380] e o que você quer fazer,
254
+ [1126.380 --> 1128.380] que ele pode fazer,
255
+ [1128.380 --> 1130.380] e o investimento de um tipo de espécieuxa.
256
+ [1130.380 --> 1132.380] Ainda mais,
257
+ [1132.380 --> 1134.380] eu posso representar,
258
+ [1134.380 --> 1136.380] isso é, como nós já dissem,
259
+ [1136.380 --> 1138.380] um bom app,
260
+ [1138.380 --> 1141.380] representando mais de um
261
+ [1141.380 --> 1143.380] uma pessoa,
262
+ [1143.380 --> 1145.380] e um ponto de tempo,
263
+ [1145.380 --> 1147.380] é a mesma.
264
+ [1147.380 --> 1150.380] Eu não posso literalmente não ser aware
265
+ [1150.380 --> 1153.380] de como você pode ver.
266
+ [1153.380 --> 1155.380] Então,
267
+ [1155.380 --> 1159.380] a funa é, como eu represento,
268
+ [1159.380 --> 1163.380] se eu tenho as duas coisas em que eu estou falando,
269
+ [1163.380 --> 1167.380] como eu represento as duas coisas em uma espécieuxa,
270
+ [1167.380 --> 1169.380] que é uma espécieuxa,
271
+ [1169.380 --> 1171.380] então a melhor forma de representar uma espécieuxa,
272
+ [1171.380 --> 1173.380] é uma espécieuxa.
273
+ [1173.380 --> 1175.380] Então, eu tenho uma espécieuxa,
274
+ [1175.380 --> 1177.380] eu tenho uma espécieuxa,
275
+ [1177.380 --> 1179.380] mas, por exemplo,
276
+ [1179.380 --> 1181.380] há dois pessoas que estão representando as duas coisas,
277
+ [1181.380 --> 1183.380] eu posso ir para a outra espécieuxa,
278
+ [1183.380 --> 1185.380] eu acho que estão entre essas duas coisas,
279
+ [1185.380 --> 1187.380] mas eu posso também te dar uma outra espécieuxa,
280
+ [1187.380 --> 1189.380] e então a Paul Dutas,
281
+ [1189.380 --> 1191.380] eu vou te dar uma espécieuxa,
282
+ [1191.380 --> 1193.380] mas também é uma espécieuxa,
283
+ [1193.380 --> 1195.380] então ele tem um grande exemplo
284
+ [1195.380 --> 1197.380] da história do American Sign Language,
285
+ [1197.380 --> 1200.380] e a história é sobre alguns dois coisas
286
+ [1200.380 --> 1202.380] que são os cardes e um novo.
287
+ [1202.380 --> 1205.380] Então ele tem alguém com seus cardes,
288
+ [1205.380 --> 1207.380] você vê os cardes em meu hand,
289
+ [1207.380 --> 1209.380] e a outra espécieuxa,
290
+ [1209.380 --> 1211.380] ele se enlue,
291
+ [1211.380 --> 1215.380] então, espera um minuto,
292
+ [1215.380 --> 1217.380] o meu corpo é agora representando
293
+ [1217.380 --> 1219.380] o card player que está sentindo,
294
+ [1219.380 --> 1222.380] e meu hand é representando
295
+ [1222.380 --> 1225.380] o card, que é a outra espécieuxa,
296
+ [1225.380 --> 1226.380] mas,
297
+ [1226.380 --> 1229.380] o meu hand é representando
298
+ [1229.380 --> 1233.380] o card, que é a outra espécieuxa,
299
+ [1233.380 --> 1235.380] e eu vou só dar uma espécieuxa,
300
+ [1235.380 --> 1237.380] para o meu hand é representado,
301
+ [1237.380 --> 1240.380] é só que eles tenham...
302
+ [1240.380 --> 1245.380] Então eu posso ir para pessoas,
303
+ [1245.380 --> 1248.380] que é o que é aqui,
304
+ [1249.380 --> 1252.380] ok, vamos para o quarto,
305
+ [1259.380 --> 1261.380] ok, vamos fazer 15 para o momento,
306
+ [1261.380 --> 1262.380] alright,
307
+ [1269.380 --> 1271.380] então, em gestora em sua espécie,
308
+ [1271.380 --> 1273.380] eu tenho constantemente
309
+ [1273.380 --> 1276.380] de uma espécieuxa e de uma espécieuxa
310
+ [1276.380 --> 1278.380] de processos.
311
+ [1280.380 --> 1281.380] Então,
312
+ [1282.380 --> 1284.380] se eu realmente representar
313
+ [1284.380 --> 1287.380] uma espécieuxa, não meditário,
314
+ [1287.380 --> 1289.380] eu posso dizer algo que,
315
+ [1290.380 --> 1291.380] ele veio para eu,
316
+ [1291.380 --> 1293.380] e eu vou mover o meu hand para eu,
317
+ [1294.380 --> 1296.380] e ele vai para a espécieuxa,
318
+ [1296.380 --> 1298.380] para eu me vencer.
319
+ [1298.380 --> 1301.380] Mas, se eu for descrubar para você,
320
+ [1301.380 --> 1302.380] por exemplo,
321
+ [1302.380 --> 1304.380] eu vou dizer como eu vou aguentar o gredo,
322
+ [1305.380 --> 1307.380] então eu posso dizer,
323
+ [1307.380 --> 1309.380] então, primeiro você vai para a espécieuxa,
324
+ [1309.380 --> 1311.380] você tem que fazer admissões,
325
+ [1311.380 --> 1313.380] então você vai fazer as classes basicamente,
326
+ [1313.380 --> 1315.380] então você vai fazer isso,
327
+ [1315.380 --> 1317.380] e meu hand vai para a espécieuxa,
328
+ [1317.380 --> 1319.380] e a espécieuxa,
329
+ [1319.380 --> 1322.380] eu não vou fazer isso,
330
+ [1322.380 --> 1325.380] e vai finalmente você vai aguentar o gredo.
331
+ [1325.380 --> 1326.380] Então,
332
+ [1326.380 --> 1328.380] uma espécieuxa,
333
+ [1328.380 --> 1330.380] que não é sobre a espécieuxa,
334
+ [1330.380 --> 1332.380] é sobre uma espécieuxa,
335
+ [1332.380 --> 1336.380] a Crisis EsெUSは processos como esté장.
336
+ [1336.380 --> 1337.380] right?
337
+ [1337.380 --> 1339.380] Estamos K cardiovascularerlo,
338
+ [1339.380 --> 1342.380] e processos muito mais ajudam por o começo,
339
+ [1343.380 --> 1345.380] mas tem que estar cansado errando esse gredo
340
+ [1345.380 --> 1348.380] e isso vai ser uma espécieuxa como eurem Eye.
341
+ [1348.380 --> 1350.380] Então, está idiotamente difícil,
342
+ [1350.380 --> 1351.380] consegue viver x altos,
343
+ [1351.380 --> 1354.380] gins com uma espécieuxa para eu tomar a forma de adam,
344
+ [1354.380 --> 1357.360] ela tem que governar com 300 ajudar
345
+ [1357.380 --> 1358.520] e que nos Liuan está supplementado para
346
+ [1358.520 --> 1362.040] eu já comecei a melodearTech e advisors,
347
+ [1362.380 --> 1363.380] Ok.
348
+ [1366.380 --> 1369.380] Nós só fiz 16 e 17, ok.
349
+ [1372.380 --> 1376.380] Ok, e vamos ver o 18 de uma segunda.
350
+ [1376.380 --> 1377.380] Ok.
351
+ [1378.380 --> 1382.380] Então, um facto crucial sobre estes espaises,
352
+ [1382.380 --> 1387.380] tem todos os tipos de interna-dictifices.
353
+ [1388.380 --> 1391.380] Ai tentamos dos testamentos de中共,
354
+ [1392.380 --> 1398.380] mas coke é dejointast чем a rede,
355
+ [1399.380 --> 1401.380] a dusca Prima,
356
+ [1402.380 --> 1408.380] Lenart e sua ação número de pessoas sem acreditar em sinal才 de emagreza.
357
+ [1409.380 --> 1414.380] Então, nós vamos ver digamos que você não está brincando,
358
+ [1414.380 --> 1440.580] nuggets al favorite você, ou parece um
359
+ [1440.580 --> 1443.200] para mim é só letter bind para você,
360
+ [1443.200 --> 1445.180] para luchar para você tomar attempted.
361
+ [1445.180 --> 1447.740] Esse é só um tecido indicatório
362
+ [1447.740 --> 1451.380] que o healthy tide aqui desderemlin
363
+ [1451.380 --> 1455.020] e isso é só um tecido formulário que avançará...
364
+ [1455.020 --> 1460.160] papa na monitoreia du paramilme langamente.
365
+ [1460.160 --> 1463.560] 事importante do país que é nomeado
366
+ [1463.560 --> 1467.140] ter o lugar do seuavoro que você pode ter na verdade.
367
+ [1467.140 --> 1474.820] Então o molho está na cor, o molho está na cor, o molho está na cor, o molho está na cor,
368
+ [1474.820 --> 1479.520] então o molho está na cor, e então o molho está na cor.
369
+ [1479.520 --> 1482.520] Ok.
370
+ [1482.520 --> 1486.360] So 19.
371
+ [1486.360 --> 1491.020] É só um dia abstracto aqui.
372
+ [1491.840 --> 1496.080] Sim, a criticismista do cucumbers sem masa para...
373
+ [1496.080 --> 1500.600] É a estruturalocks de sutura sobre Manhattan e SmartMan认?
374
+ [1500.600 --> 1502.560] Bem.
375
+ [1502.560 --> 1506.120] Ora.
376
+ [1506.120 --> 1510.540] Porqueений li devia averag화를 a gente ditch?
377
+ [1510.540 --> 1515.360] Como deveria ampler?
378
+ [1515.360 --> 1517.400] Aqui se responsabe armas a gente menos.
379
+ [1517.400 --> 1523.400] Hoje nós conseguimos ver resemblémosensionais todas asucksativas tortured
380
+ [1523.400 --> 1527.380] а tudo, como nos índoles são cometendo as pravhas?
381
+ [1527.400 --> 1534.900] O adolescências deука do ví� про� located em vários sites
382
+ [1534.900 --> 1538.380] Intoxinamos o essencial Sem Encon alive.
383
+ [1538.400 --> 1541.400] посanos podem 감사ádate isso, mas houverayou preso com houveray também
384
+ [1541.400 --> 1545.240] Na inválvio, é que é uma texta histórica.
385
+ [1545.560 --> 1550.120] É assim que se inventaria,
386
+ [1550.240 --> 1555.400] o meningolo é muito económico,
387
+ [1556.120 --> 1558.720] se foram tomados em outros inst профritos,
388
+ [1558.880 --> 1561.640] se seus linguais têm isso dos bosses.
389
+ [1561.640 --> 1567.120] O mesmo잖아요 queECTEM bom Festival Battery.
390
+ [1567.120 --> 1573.560] Outro disclosure é que elas não se conformar aweder doem dos bichos individuales.
391
+ [1574.580 --> 1578.080] Então aqui está... que estamos aqui agora para um problema de bonito,
392
+ [1579.940 --> 1582.120] itatedão bites yummy.
393
+ [1583.040 --> 1586.200] Então waveset��라고요.
394
+ [1587.140 --> 1589.600] Sem saudniejs裹 na inverse,
395
+ [1589.880 --> 1593.200] Trop payofften op crave a cortexia fortes antes de pular.
396
+ [1593.560 --> 1595.600] Então agora é uma exemplo quanto.
397
+ [1595.600 --> 1604.140] Politico, ela sabe então, Já negou Pathon, me escreve rapidamente Songs.
398
+ [1605.140 --> 1607.060] Professor E.
399
+ [1607.200 --> 1614.420] Eu acho que estou развитida com osifier 오늘은.
400
+ [1614.640 --> 1621.740] topia travail Jú unemployendo.
401
+ [1622.100 --> 1624.260] Speaker Yellow
402
+ [1624.260 --> 1628.900] Então em inglês em inglês, e a gente tem visto isso emranean
403
+ [1628.900 --> 1633.100] EQ a embracing quê, que temos que, assim como eu estou Repeat.
404
+ [1635.580 --> 1638.020] Mas ele é que?
405
+ [1638.140 --> 1641.240] O uso da maneira de terpie,
406
+ [1641.240 --> 1646.520] faz uma calendaria scientifica entre a Use do antecessorral delauseu Melissa
407
+ [1646.520 --> 1648.140] e spell, o ia não ao internationamento drank,
408
+ [1648.140 --> 1651.040] para que que que é um esea tournaments.
409
+ [1651.040 --> 1655.040] Não é o mesmo.
410
+ [1655.040 --> 1663.040] A Now é não o que o Narraturus é o que o Narraturus é o que o Narraturus é o primeiro aspecto de Narraturus.
411
+ [1663.040 --> 1671.040] E a Mami, nós estamos aqui, é o Narraturus Mami.
412
+ [1671.040 --> 1675.040] A Now é a minha equilíbrio.
413
+ [1675.040 --> 1683.040] Então, até agora, é o nosso prato, e o prato do Narraturus é o primeiro aspecto do Narraturus.
414
+ [1683.040 --> 1686.040] In this example that she brings up,
415
+ [1686.040 --> 1696.040] we have the introduction of a character, Chi.
416
+ [1696.040 --> 1698.040] That would be her.
417
+ [1698.040 --> 1706.040] But in what follows, where we have "Mami", there is a point of view,
418
+ [1706.040 --> 1710.040] because it is not simply the narrator's point of view.
419
+ [1710.040 --> 1719.040] That "now" and that zoom reveal that it is the character's perspective that is being placed there.
420
+ [1719.040 --> 1730.040] And "Mami", the act of reference that gets established, is also from Chi's perspective and not, properly speaking, the narrator's.
421
+ [1730.040 --> 1732.040] Ok.
422
+ [1732.040 --> 1736.040] Ok, so let's go to 23.
423
+ [1736.040 --> 1740.040] Ok.
424
+ [1740.040 --> 1742.040] Então,
425
+ [1742.040 --> 1746.040] Liven Bondelinote,
426
+ [1746.040 --> 1750.040] suggested that there's this added kind of indirect speech.
427
+ [1750.040 --> 1751.040] So,
428
+ [1751.040 --> 1754.040] o que nós estamos apenas olhando,
429
+ [1754.040 --> 1759.040] o passplus agora é o nosso exemplo de que alguns pessoas chamam
430
+ [1759.040 --> 1763.040] o pre-indirect speech and style.
431
+ [1763.040 --> 1765.040] Ok.
432
+ [1765.040 --> 1771.040] Ok. So, she is bringing up an example of constructions with "now" plus past tense.
433
+ [1771.040 --> 1775.040] In English, "now" with the past tense.
434
+ [1775.040 --> 1778.040] And what has been studied
435
+ [1778.040 --> 1781.040] is that these constructions in fact reveal
436
+ [1781.040 --> 1784.040] constructions of free indirect discourse,
437
+ [1784.040 --> 1787.040] in which you have the blending of one point of view,
438
+ [1787.040 --> 1791.040] the narrator's, with the point of view of a character.
439
+ [1791.040 --> 1793.040] Ok.
440
+ [1793.040 --> 1794.040] Ok.
441
+ [1794.040 --> 1796.040] Então, aqui em Liven Bondelinote,
442
+ [1796.040 --> 1802.040] ele nos nota um tipo de mixed viewpoint em language,
443
+ [1802.040 --> 1806.040] que ele chama distanced indirect speech and thought.
444
+ [1806.040 --> 1808.040] E aqui é um exemplo,
445
+ [1808.040 --> 1811.040] então você estava fazendo o que você estava.
446
+ [1811.040 --> 1813.040] Então, se você está na verdade,
447
+ [1813.040 --> 1817.040] em que os pronouns são exatamente correctos
448
+ [1817.040 --> 1820.040] para a sua corrente situais de distanced,
449
+ [1820.040 --> 1822.040] é uma direção de alguma coisa.
450
+ [1822.040 --> 1823.040] Ok.
451
+ [1823.040 --> 1825.040] Mas a experiência não é,
452
+ [1825.040 --> 1826.040] não é?
453
+ [1826.040 --> 1831.040] É uma coisa que não é a corrente situais que está sendo falando.
454
+ [1831.040 --> 1833.040] Ok. Então,
455
+ [1833.040 --> 1838.040] nesta construção, que mostram esse distanciamento,
456
+ [1838.040 --> 1842.040] uma das construções identificadas por vando ela note,
457
+ [1842.040 --> 1850.040] é essa construção de discurso indireto distanciado,
458
+ [1850.040 --> 1851.040] que é o que ele chama.
459
+ [1851.040 --> 1854.040] Quando você tem uma construção como essa do exemplo,
460
+ [1854.040 --> 1858.040] em que o Yu não é para se referir a uma situação
461
+ [1858.040 --> 1863.040] imediata de interação do falante com seu interlocutor.
462
+ [1863.040 --> 1866.040] Mas é para se referir alguma coisa passada
463
+ [1866.040 --> 1870.040] que vem marcado daí nesse exemplo com o oos.
464
+ [1870.040 --> 1872.040] Ok.
465
+ [1872.040 --> 1874.040] Ok.
466
+ [1874.040 --> 1881.040] Então, vamos dizer que esses combinadores de vôndios
467
+ [1881.040 --> 1884.040] são as marcas que são apropriadas
468
+ [1884.040 --> 1886.040] para o que o vôndo da narrativa,
469
+ [1886.040 --> 1888.040] e o vôndo da narrativa,
470
+ [1888.040 --> 1890.040] se mexer com um outro,
471
+ [1890.040 --> 1894.040] produz um imprenso de dois vôndios de corres,
472
+ [1894.040 --> 1896.040] o vôndo da narrativa,
473
+ [1896.040 --> 1898.040] e o vôndo da narrativa.
474
+ [1898.040 --> 1899.040] Ok.
475
+ [1899.040 --> 1902.040] E eles também vão ter uma forma para nós.
476
+ [1902.040 --> 1903.040] Ok.
477
+ [1903.040 --> 1907.040] Então, o que ela está mostrando é que, pela linguagem,
478
+ [1907.040 --> 1912.040] pelas construções e pelas marcas linguísticas que são oferecidas,
479
+ [1912.040 --> 1916.040] o que acontece é que algumas vão se especializar para o ponto de vista,
480
+ [1916.040 --> 1919.040] donar a dor, outras da personagem,
481
+ [1919.040 --> 1923.040] e nós como leitores aquilo que nós percebemos é que existe uma mezcla
482
+ [1923.040 --> 1927.040] desses dois pontos de vista no discurso.
483
+ [1927.040 --> 1928.040] Ok.
484
+ [1928.040 --> 1929.040] Ok.
485
+ [1929.040 --> 1932.040] Então, não se acontece em uma linguagem spoken,
486
+ [1932.040 --> 1934.040] que é uma coragem de gesto,
487
+ [1934.040 --> 1936.040] e eu vou ter que abrir um pouco aqui,
488
+ [1936.040 --> 1938.040] porque nós vamos fazer um pouco de tempo.
489
+ [1938.040 --> 1939.040] Mas, ok.
490
+ [1939.040 --> 1941.040] Vamos tentar 25.
491
+ [1941.040 --> 1944.040] Então, o que ela diz é que isso também foi percebido.
492
+ [1944.040 --> 1946.040] Não só na linguagem literária,
493
+ [1946.040 --> 1949.040] a linguagem verbal, mas também foi estudado
494
+ [1949.040 --> 1952.040] na linguagem de sinais americana.
495
+ [1953.040 --> 1955.040] Ok.
496
+ [1955.040 --> 1960.040] Então, isso é reconhecido em gesto que eu posso ter duas coisas de vista,
497
+ [1960.040 --> 1965.040] que são os quais são os quais é o seu ponto de vista e observar os pontos.
498
+ [1965.040 --> 1971.040] Então, vamos dizer que eu vou fazer isso para representar uma actividade de uma representação
499
+ [1971.040 --> 1974.040] de que alguém pode fazer se eles estão rolando.
500
+ [1974.040 --> 1977.040] Ou eu vou fazer isso para fazer, ok.
501
+ [1977.040 --> 1980.040] Acompanha como uma pessoa pode ser quando eles estão running.
502
+ [1980.040 --> 1983.040] Então, se eles estão rolando,
503
+ [1983.040 --> 1986.040] eles vão fazer isso.
504
+ [1986.040 --> 1989.040] Eu vou fazer um trajectory,
505
+ [1989.040 --> 1992.040] que é o mais um ponto de vista e observar.
506
+ [1992.040 --> 1994.040] Então, vamos fazer um ponto de vista.
507
+ [1994.040 --> 1997.040] Sim, nós temos um ponto de vista.
508
+ [1997.040 --> 2001.040] E eu acho que eu vou fazer isso para...
509
+ [2001.040 --> 2004.040] Ok. Então, vamos me ver.
510
+ [2004.040 --> 2008.040] Então, ela diz que na linguagem gesto,
511
+ [2008.040 --> 2011.040] ela se acompanha a fala corrente,
512
+ [2011.040 --> 2016.040] o que acontece é que podem ser misturados tanto o ponto de vista do falante,
513
+ [2016.040 --> 2018.040] quanto o ponto de vista de um observador.
514
+ [2018.040 --> 2021.040] Então, se eu faço esse gesto,
515
+ [2021.040 --> 2023.040] eu estou provavelmente falando,
516
+ [2023.040 --> 2028.040] falando de uma ação de remar ou se eu faço assim,
517
+ [2028.040 --> 2029.040] uma ação de correr.
518
+ [2029.040 --> 2032.040] Mas se eu faço isso para correr,
519
+ [2032.040 --> 2037.040] por exemplo, eu já estou incorporando o ponto de vista de um observador
520
+ [2037.040 --> 2040.040] que não é necessariamente da pessoa que está falando.
521
+ [2040.040 --> 2045.040] E aí o que acontece é que esses dois pontos de vista do falante,
522
+ [2045.040 --> 2047.040] do observador, eles se misturam o tempo todo
523
+ [2047.040 --> 2052.040] enquanto nós estamos produzindo fala com o gesto.
524
+ [2052.040 --> 2058.040] Ok, então, agora vou mostrar uma história de como isso funciona.
525
+ [2058.040 --> 2061.040] E eu acho que vai ser só para 29 no treino.
526
+ [2061.040 --> 2064.040] 29,31.
527
+ [2064.040 --> 2069.040] Então, essa é uma história que ela vai mostrar que foi documentado numa pesquisa.
528
+ [2069.040 --> 2072.040] Ok, então, em esse exemplo aqui,
529
+ [2072.040 --> 2074.040] esse é um americano storyteller,
530
+ [2074.040 --> 2076.040] telling a história para um amigo,
531
+ [2076.040 --> 2079.040] e ela está se enactando
532
+ [2079.040 --> 2082.040] um stronador de fico de fico,
533
+ [2082.040 --> 2084.040] de forma de se manter.
534
+ [2084.040 --> 2086.040] Então, o que está acontecendo aqui?
535
+ [2086.040 --> 2089.040] Nós temos duas pessoas e a moça.
536
+ [2089.040 --> 2091.040] É uma contadora de histórias,
537
+ [2091.040 --> 2096.040] e ela está contando para o interlocutor uma determinada história
538
+ [2096.040 --> 2102.040] em que o personagem é um oficial desconfiado que está segurando um documento.
539
+ [2102.040 --> 2107.040] Ok, então, você pode ver que ela está se atingindo na forma de se sentir,
540
+ [2107.040 --> 2112.040] na forma de se sentir, que os olhos estão se atingindo na forma de se sentir,
541
+ [2112.040 --> 2116.040] e você pode ver que o seu forehead rincole está se atingindo.
542
+ [2116.040 --> 2119.040] Então, você pode perceber que o olhar dela está direcionado
543
+ [2119.040 --> 2121.040] para esse documento imaginário,
544
+ [2121.040 --> 2123.040] a cabeça dela está abaixo,
545
+ [2123.040 --> 2126.040] como se ela tivesse interagindo com papel,
546
+ [2126.040 --> 2129.040] e a testa dela está enrugada,
547
+ [2129.040 --> 2132.040] num tom de setecismo.
548
+ [2132.040 --> 2136.040] Ok, agora, em 30,
549
+ [2140.040 --> 2142.040] ela está agora sendo o seu猿ô,
550
+ [2142.040 --> 2145.040] o猿ô está respondendo a oficial.
551
+ [2145.040 --> 2147.040] Então, ela está se atingindo na forma de se sentir,
552
+ [2147.040 --> 2151.040] ela está bien adoptadoaria e Adapt antsiej parecia depressず e escrutado.
553
+ [2151.040 --> 2170.860] Ela знакa se example de rep
554
+ [2170.860 --> 2175.060] poryangon sofema não por Bloco do Peço bora pela withdrawal da было intersectionacionada.
555
+ [2175.060 --> 2178.540] Now she changes point of view; she embodies both.
556
+ [2178.540 --> 2181.640] Because now she is her past self,
557
+ [2181.640 --> 2184.940] so she is acting all innocent, with a smile,
558
+ [2184.940 --> 2190.640] but her hands are still holding that imaginary form.
559
+ [2190.640 --> 2193.340] And she did not need to change her body posture;
560
+ [2193.340 --> 2196.860] the only thing she did was raise her head
561
+ [2196.860 --> 2199.120] and direct her gaze at the interlocutor.
562
+ [2199.120 --> 2202.960] So there, there is the embodiment of another point of view,
563
+ [2202.960 --> 2204.720] that of her past self.
564
+ [2205.720 --> 2209.720] Ok, então em todas essas pícitas,
565
+ [2209.720 --> 2212.720] acho que você pode ver, ela não está olhando
566
+ [2212.720 --> 2214.720] para a pessoa que ela está falando.
567
+ [2214.720 --> 2216.720] Ela está olhando para o espaço.
568
+ [2216.720 --> 2217.720] O primeiro ela está olhando para o espaço
569
+ [2217.720 --> 2220.720] eu tinha por ela tendo a ser oficial.
570
+ [2220.720 --> 2222.720] E então, como ela speaking para o seu passado,
571
+ [2222.720 --> 2224.720] ela está olhando para o espaço
572
+ [2224.720 --> 2225.720] e ela está olhando para ele,
573
+ [2225.720 --> 2229.720] e ela está olhando para o espaço oficial.
574
+ [2229.720 --> 2232.720] Então veja que nesse...
575
+ [2233.720 --> 2236.720] nessas duas figuras, nessas duas imagens o que acontece é
576
+ [2236.720 --> 2238.720] o ponto de vista dela direcionado
577
+ [2238.720 --> 2240.720] é primeiro para o formulário
578
+ [2240.720 --> 2242.720] e depois para um espaço distante,
579
+ [2242.720 --> 2246.720] não necessariamente para o interlocutor com quem ela está interagindo.
580
+ [2248.720 --> 2249.720] Então, ninguém que...
581
+ [2249.720 --> 2251.720] Então quando você vê uma pessoa contando uma história assim,
582
+ [2251.720 --> 2252.720] você nunca vai pensar,
583
+ [2252.720 --> 2254.720] para onde que essa pessoa está olhando.
584
+ [2254.720 --> 2256.720] Você sabe o que ela está olhando?
585
+ [2256.720 --> 2258.720] Então, ninguém que...
586
+ [2258.720 --> 2260.720] Então quando você vê uma pessoa contando uma história assim,
587
+ [2260.720 --> 2261.720] você nunca vai pensar, para onde que essa pessoa está olhando?
588
+ [2261.720 --> 2264.720] Você sabe o que eles estão fazendo e para onde eles estão olhando?
589
+ [2264.720 --> 2267.720] Você sabe o que eles estão fazendo e para onde eles estão olhando?
590
+ [2268.720 --> 2271.720] Então, agora o real world interlocutor
591
+ [2271.720 --> 2274.720] vai perguntar e agora vai perguntar 31.
592
+ [2274.720 --> 2276.720] E nesse slide agora,
593
+ [2276.720 --> 2282.720] o interlocutor do mundo real fez uma pergunta para contadora.
594
+ [2283.720 --> 2285.720] Então agora ele se torna a base de ele,
595
+ [2285.720 --> 2289.720] né, instead de entrar a história da land,
596
+ [2289.720 --> 2294.720] e se nos denuncia que uma pessoa ainda está olhando naquela documenta.
597
+ [2295.720 --> 2298.720] E agora ela se vira para ele, ela se dirige para ele,
598
+ [2298.720 --> 2302.720] só que note em que uma das mãos dela ainda está segurando
599
+ [2302.720 --> 2305.720] o formulário imaginário da história.
600
+ [2307.720 --> 2309.720] Então, o meu ponto aqui é que
601
+ [2309.720 --> 2311.720] esse tipo de mistura,
602
+ [2311.720 --> 2312.720] uma parte da verdade,
603
+ [2312.720 --> 2315.720] esse ponto é que o seu corpo é o narrador.
604
+ [2316.720 --> 2317.720] E a outra parte da verdade,
605
+ [2317.720 --> 2319.720] o narrador é o narrador,
606
+ [2319.720 --> 2323.720] esse é o tipo de mistura que nós estamos vendo
607
+ [2323.720 --> 2324.720] em freindira,
608
+ [2324.720 --> 2326.720] em fizesis e direitos.
609
+ [2327.720 --> 2330.720] Então, o que ela quer mostrar aqui
610
+ [2330.720 --> 2332.720] é exatamente o seguinte,
611
+ [2332.720 --> 2335.720] o que nós vimos na literatura, com os exemplos de literatura,
612
+ [2335.720 --> 2337.720] do discurso indireto livre,
613
+ [2337.720 --> 2340.720] o discurso indireto distanciado que eles chamam,
614
+ [2341.720 --> 2344.720] é que na fala normal, com o gesto,
615
+ [2344.720 --> 2346.720] nós conseguimos também fazer incorporação
616
+ [2346.720 --> 2348.720] desses múltiplos pontos de vista,
617
+ [2348.720 --> 2350.720] em que, por um lado,
618
+ [2350.720 --> 2352.720] ela está incorporando o narrador,
619
+ [2352.720 --> 2354.720] ela mesma como narradora,
620
+ [2354.720 --> 2357.720] e por outro lado, ela incorpora o ponto de vista de um personagem,
621
+ [2357.720 --> 2359.720] porque ela ainda mantém segura
622
+ [2359.720 --> 2361.720] aquele formulário imaginário.
623
+ [2366.720 --> 2368.720] Ok, então, quando nós vemos esses misturas,
624
+ [2368.720 --> 2371.720] é uma parte da sua body enacting a literatura,
625
+ [2371.720 --> 2373.720] uma parte da sua body enacting a sua character,
626
+ [2374.720 --> 2376.720] ou quando nós vemos esses misturas,
627
+ [2376.720 --> 2378.720] quando nós vemos,
628
+ [2378.720 --> 2380.720] agora é a parte da sua character,
629
+ [2380.720 --> 2382.720] e a parte da sua body enacting a literatura,
630
+ [2382.720 --> 2387.720] esses são os nossos síndios que a língua
631
+ [2387.720 --> 2390.720] ou a body enacting a uma parte da sua body
632
+ [2390.720 --> 2393.720] porque eles são incompatibles com a outra.
633
+ [2394.720 --> 2396.720] Então, o que acontece nessas construções
634
+ [2396.720 --> 2398.720] em que nós temos, por exemplo,
635
+ [2398.720 --> 2401.720] o corpo manifestando dois pontos de vista
636
+ [2401.720 --> 2403.720] ou por meio da linguagem também,
637
+ [2403.720 --> 2404.720] nos exemplos de literatura,
638
+ [2404.720 --> 2406.720] nós temos a mistura do agora,
639
+ [2406.720 --> 2408.720] com o passado,
640
+ [2408.720 --> 2411.720] o que acontece é que tudo isso serve de evidência
641
+ [2411.720 --> 2413.720] para mostrar que nós conseguimos
642
+ [2413.720 --> 2416.720] mezclar ou mergir esses dois,
643
+ [2416.720 --> 2419.720] esses múltiplos pontos de vista.
644
+ [2422.720 --> 2425.720] Ok, e vamos fazer 32,
645
+ [2425.720 --> 2428.720] nós estamos ficando no fim, ok?
646
+ [2428.720 --> 2432.720] Segurem aí que ela já está chegando no final.
647
+ [2488.720 --> 2491.720] Então, ela está mostrando como é que isso também pode ser visto na arte.
648
+ [2491.720 --> 2495.720] Ela se refere a uma pintura da cena da anunciação
649
+ [2495.720 --> 2497.720] do nascimento do menino Jesus,
650
+ [2497.720 --> 2500.720] em que um anjo conversa com a virgem.
651
+ [2500.720 --> 2503.720] E o que acontece é que nessa pintura
652
+ [2503.720 --> 2505.720] a luz é...
653
+ [2505.720 --> 2507.720] a luz é...
654
+ [2507.720 --> 2509.720] a luz é...
655
+ [2509.720 --> 2511.720] a luz é...
656
+ [2511.720 --> 2513.720] a luz é...
657
+ [2513.720 --> 2515.720] a luz é...
658
+ [2515.720 --> 2517.720] a luz é...
659
+ [2517.720 --> 2519.720] a luz é...
660
+ [2519.720 --> 2522.720] configurada de tal forma para que o observador
661
+ [2522.720 --> 2525.720] consiga participar da cena.
662
+ [2525.720 --> 2526.720] Mas na pintura,
663
+ [2526.720 --> 2528.720] nessa...
664
+ [2528.720 --> 2530.720] na capela em que essa pintura está,
665
+ [2530.720 --> 2534.720] o que acontece é que existe uma luz externa
666
+ [2534.720 --> 2537.720] que entra por uma janela sobre a virgem,
667
+ [2537.720 --> 2539.720] projetada desse modo.
668
+ [2539.720 --> 2541.720] Então, o que acontece é que nós temos aqui
669
+ [2541.720 --> 2543.720] um outro ponto de vista sendo colocado,
670
+ [2543.720 --> 2545.720] que é o olho de Deus.
671
+ [2546.720 --> 2549.720] Ok, 33, nós estamos ficando aqui, estamos almosto.
672
+ [2552.720 --> 2554.720] Então, em arte,
673
+ [2554.720 --> 2557.720] tem essa habilidade para usar secondary gaze,
674
+ [2557.720 --> 2560.720] então eu posso ter uma character depictiva
675
+ [2560.720 --> 2562.720] que está olhando em algo e isso faz eu,
676
+ [2562.720 --> 2564.720] como um vio de pintura,
677
+ [2564.720 --> 2566.720] olha para algo,
678
+ [2566.720 --> 2568.720] ou se vio pensar,
679
+ [2568.720 --> 2570.720] o que é que ele está olhando?
680
+ [2570.720 --> 2572.720] O que está lá?
681
+ [2572.720 --> 2575.720] E isso também é no Comics e Film,
682
+ [2575.720 --> 2577.720] o mesmo tipo de coisa que está lá com
683
+ [2577.720 --> 2580.720] um e-draucar a character,
684
+ [2580.720 --> 2583.720] e eu posso ir para fora,
685
+ [2583.720 --> 2585.720] entre o que está olhando em a character,
686
+ [2585.720 --> 2587.720] o primeiro um,
687
+ [2587.720 --> 2589.720] e aí que o outro é que está olhando.
688
+ [2589.720 --> 2592.720] Ou eu,
689
+ [2592.720 --> 2595.720] eu realmente vejo uma character,
690
+ [2595.720 --> 2597.720] e eu vejo eles,
691
+ [2597.720 --> 2601.720] sobre a parte de uma character que eu sei
692
+ [2601.720 --> 2603.720] que é o que está falando.
693
+ [2603.720 --> 2604.720] É certo então.
694
+ [2604.720 --> 2606.720] Ele tem duas coisas.
695
+ [2606.720 --> 2609.720] Então, em outro tipo de manifestação
696
+ [2609.720 --> 2611.720] de múltiplos pontos de vista na arte,
697
+ [2611.720 --> 2614.720] ela fala em relação as pinturas,
698
+ [2614.720 --> 2616.720] aos quadrinhos, aos filmes,
699
+ [2616.720 --> 2619.720] quando você tem, por exemplo,
700
+ [2619.720 --> 2620.720] numa pintura,
701
+ [2620.720 --> 2623.720] uma das pessoas sendo retratada
702
+ [2623.720 --> 2626.720] que têm um olhar direcionado para alguma coisa.
703
+ [2626.720 --> 2628.720] E você, como observador,
704
+ [2628.720 --> 2630.720] é levado direcionar o seu olhar também,
705
+ [2630.720 --> 2633.720] ou pelo menos imaginar o que é que aquilo,
706
+ [2633.720 --> 2636.720] o que a pessoa representada naquele quadro,
707
+ [2636.720 --> 2637.720] está olhando.
708
+ [2637.720 --> 2640.720] No caso dos quadrinhos ou dos filmes, por exemplo,
709
+ [2640.720 --> 2642.720] em que você tem alternância das personagens,
710
+ [2642.720 --> 2644.720] o que você tem na verdade
711
+ [2644.720 --> 2646.720] é o ponto de vista de uma,
712
+ [2646.720 --> 2647.720] o ponto de vista da outra,
713
+ [2647.720 --> 2649.720] ou às vezes quando você vê
714
+ [2649.720 --> 2651.720] um outro personagem
715
+ [2651.720 --> 2654.720] pelas costas de um personagem falante.
716
+ [2654.720 --> 2657.720] Então, essa negociação de ponto de vista
717
+ [2657.720 --> 2659.720] também é alcançada
718
+ [2659.720 --> 2663.720] nesses outros meios de expressão.
719
+ [2665.720 --> 2667.720] Ok, 2 horas e estamos fazendo.
720
+ [2667.720 --> 2669.720] 35.
721
+ [2669.720 --> 2671.720] É o que é 34.
722
+ [2673.720 --> 2675.720] Ok, então nós estamos dizendo que a viewpoint
723
+ [2675.720 --> 2678.720] é não só sempre,
724
+ [2678.720 --> 2680.720] mas é múltiplos.
725
+ [2680.720 --> 2682.720] É isso que estamos sempre experiencing
726
+ [2682.720 --> 2684.720] nós estamos todos nos aguardando,
727
+ [2684.720 --> 2685.720] estamos sempre experiencing
728
+ [2685.720 --> 2686.720] uma ação de ponto.
729
+ [2686.720 --> 2688.720] Nós estamos experiencing múltiplos de teus pontos
730
+ [2688.720 --> 2690.720] e a mesma é verdade,
731
+ [2690.720 --> 2692.720] a literatura e etc.
732
+ [2692.720 --> 2696.720] So, the main point of all this that she wants to show
733
+ [2696.720 --> 2699.720] is that there is always a point of view;
734
+ [2699.720 --> 2702.720] it permeates everything we do
735
+ [2702.720 --> 2704.720] and it never comes alone,
736
+ [2704.720 --> 2707.720] because we always have an awareness of the other,
737
+ [2707.720 --> 2709.720] of the other's presence,
738
+ [2709.720 --> 2711.720] and in the arts, in literature,
739
+ [2711.720 --> 2714.720] and even in ordinary everyday speech,
740
+ [2714.720 --> 2717.720] these points of view get mixed together.
741
+ [2719.720 --> 2723.720] Mas a ordenaria de que nós experiencingmos isso em vida
742
+ [2723.720 --> 2724.720] é que, por exemplo,
743
+ [2724.720 --> 2727.720] nós distribuímos um ponto de vista actual
744
+ [2727.720 --> 2728.720] do mundo,
745
+ [2728.720 --> 2730.720] então nós temos o meu próprio ponto de vista de vista.
746
+ [2730.720 --> 2732.720] Nós temos os outros,
747
+ [2732.720 --> 2734.720] nós temos os outros,
748
+ [2734.720 --> 2736.720] e aí os meus patens de mim
749
+ [2736.720 --> 2738.720] representam um combinação
750
+ [2738.720 --> 2740.720] de estas diferentes diferentes pontos de vista.
751
+ [2740.720 --> 2742.720] Na minha viewpoint, a sua viewpoint,
752
+ [2742.720 --> 2744.720] estão lá em lá,
753
+ [2744.720 --> 2746.720] e eu estou representando mais de uma,
754
+ [2746.720 --> 2748.720] não posso ver isso aqui.
755
+ [2748.720 --> 2750.720] Meu corpo é só fazendo isso.
756
+ [2750.720 --> 2753.720] E o modo básico, pelo qual nós experimentamos isso,
757
+ [2753.720 --> 2757.720] é primeiro que nós colocados no mundo,
758
+ [2757.720 --> 2760.720] temos a experiência do nosso ponto de vista,
759
+ [2760.720 --> 2762.720] do ponto de vista do nosso interlocutor,
760
+ [2762.720 --> 2764.720] e de todos os outros,
761
+ [2764.720 --> 2767.720] que pôvam um espaço que nós compartilhamos.
762
+ [2767.720 --> 2771.720] E essa percepção acaba sendo armazenado
763
+ [2771.720 --> 2773.720] no cérebro de algum modo
764
+ [2773.720 --> 2775.720] que permita que essas coisas elas vão
765
+ [2775.720 --> 2777.720] setersando nas nossas estruturas conceituais.
766
+ [2786.720 --> 2789.720] Só como eu kopo,
767
+ [2789.720 --> 2790.720] Snaxas que punk era acontecendo no embríno
768
+ [2790.720 --> 2792.720] de sua Construção generado
769
+ [2792.720 --> 2795.720] é portrayed para vocês tamos
770
+ [2795.720 --> 2796.720] por determinado spas carne,
771
+ [2796.720 --> 2799.720] mas não ruga um lançamento político e representaticie
772
+ [2799.720 --> 2801.720] contra os outros,
773
+ [2801.720 --> 2803.720] porque a gente não pode lembrar isso aqui.
774
+ [2803.720 --> 2807.720] por causa da representação indicação de um idioma de idioma,
775
+ [2807.720 --> 2809.720] como uma linguagem de idioma, sangue,
776
+ [2809.720 --> 2813.720] written, gesture, painting, film e etc.
777
+ [2813.720 --> 2817.720] Então, o fato de nós termos todas essas estruturas no nosso cérebro,
778
+ [2817.720 --> 2819.720] não quer dizer muita coisa na verdade.
779
+ [2819.720 --> 2823.720] A gente precisa sair desse cérebro dessa mente invisível
780
+ [2823.720 --> 2827.720] e para verificar como é que tudo isso se manifesta
781
+ [2827.720 --> 2830.720] nas múltiplas formas de linguagem.
782
+ [2831.720 --> 2838.720] E diferentes mídias têm diferentes formas de fazer um baleiro do ponto.
783
+ [2838.720 --> 2846.720] Então, em spoken and written language, eu tenho a possibilidade de usar agora
784
+ [2846.720 --> 2850.720] com os últimos tempos, a possibilidade de mudar pronúncias,
785
+ [2850.720 --> 2854.720] tempos, formos, labels, como mamães e etc.
786
+ [2855.720 --> 2860.720] Em spoken language, eu tenho a possibilidade de replicar um caráter intonation
787
+ [2860.720 --> 2863.720] e depois de uma outra caráter intonation.
788
+ [2870.720 --> 2872.720] Então, não é o que é o que é o gesture,
789
+ [2872.720 --> 2876.720] o gesture não tem como tens markers e pronúncias, etc.
790
+ [2876.720 --> 2881.720] Mas eu tenho as manhas de uma personagens de arte.
791
+ [2882.720 --> 2889.720] Então, o que acontece é que os diferentes meios ou as diferentes mídias
792
+ [2889.720 --> 2895.720] vão ter os seus recursos próprios para essas manifestações de múltiplas pontos de vista.
793
+ [2895.720 --> 2898.720] Então, na linguagem escrita e na linguagem falada,
794
+ [2898.720 --> 2901.720] por exemplo, um dos recursos que nós temos é, por exemplo,
795
+ [2901.720 --> 2905.720] essa construção com agora mais passado em que você consegue fazer
796
+ [2906.720 --> 2910.720] se distanciamente de ponto de vista, ou você consegue, pela troca,
797
+ [2910.720 --> 2915.720] de tempos verbais, também mudar o ponto de vista, ou pela referenciação,
798
+ [2915.720 --> 2919.720] como você usar a MAMI no meio de um discurso indireto livre
799
+ [2919.720 --> 2924.720] para mostrar, para revelar, o ponto de vista de uma determinada personagem,
800
+ [2924.720 --> 2930.720] ou na linguagem falada também que você consegue replicar intonação
801
+ [2931.720 --> 2935.720] daquilo que uma pessoa te disse, ou nos gestos em que você consegue,
802
+ [2935.720 --> 2940.720] com o seu corpo, realizar diferentes personagens ao mesmo tempo.
803
+ [2940.720 --> 2944.720] Então, as diferentes mídias cada uma vai ter o seu meio
804
+ [2944.720 --> 2950.720] pelo qual esses múltiplas pontos de vista podem ser expressos e negociados.
805
+ [2953.720 --> 2956.720] Ok, então, o que é o take-home do que é que essas mídias diferentes
806
+ [2957.720 --> 2960.720] de forma que prova SUN e DEA 정도 dos stating de botros,
807
+ [2960.720 --> 2961.660] que também é todo mundo que
808
+ [2961.660 --> 2964.720] era algo que pode ter que ter que ter a surrounded entre
809
+ [2964.720 --> 2968.080] z amendas, mas também que é um opposito de
810
+ [2968.080 --> 2971.240] os políticos, também, que a gente precisa falar sobre
811
+ [2971.240 --> 2974.720] fé, ela precisa falar de pessoa e subst aimada de senão,
812
+ [2974.720 --> 2979.080] o que você precisa falar watery, que você pode fazer nuntes
813
+ [2979.080 --> 2983.320] ou não sabemos전, que ou schön promocilada circumando
814
+ [2983.320 --> 2985.320] que nos apoiemos para a construção de diferentes pontos de vista.
815
+ [2985.320 --> 2989.320] Nós conseguimos construir uma rede complexa de pontos de vista,
816
+ [2989.320 --> 2993.320] para narrativa, em que, se eu na conversação,
817
+ [2993.320 --> 2997.320] eu tenho o meu ponto de vista, mas consigo mezclar com o outro valante,
818
+ [2997.320 --> 2999.320] então, eu tenho que ver que, na verdade,
819
+ [2999.320 --> 3001.320] a gente não tem que ver o ponto de vista,
820
+ [3001.320 --> 3003.320] o que você pode ver,
821
+ [3003.320 --> 3005.320] se eu não tenho que ver,
822
+ [3005.320 --> 3007.320] o que você pode ver,
823
+ [3007.320 --> 3009.320] o que você pode ver,
824
+ [3009.320 --> 3011.320] o que você pode ver,
825
+ [3011.320 --> 3013.320] você pode ver que você pode ver,
826
+ [3013.320 --> 3015.320] se eu não tenho que ver,
827
+ [3015.320 --> 3017.320] se não tenho que ver,
828
+ [3017.320 --> 3019.320] omicamente, eu tenho que ver o ponto de vista do narrador,
829
+ [3019.320 --> 3021.320] pelo ponto de vista de uma personagem,
830
+ [3021.320 --> 3023.320] o que acontece no final é que,
831
+ [3023.320 --> 3027.320] nós temos uma rede complexa de pontos de vista
832
+ [3027.320 --> 3031.320] tal como ela é revelada na linguagem.
833
+ [3031.320 --> 3034.320] E isso é tudo agora.
834
+ [3034.320 --> 3035.320] Muito obrigado.
835
+ [3035.320 --> 3037.320] E é isso.
836
+ [3037.320 --> 3039.320] Obrigado.
837
+ [3039.320 --> 3049.320] Temos perguntas?
838
+ [3049.320 --> 3062.440] Just one clarification: at the beginning we were having quite a bit of trouble with
839
+ [3062.440 --> 3066.440] the delay, and we thought it would interfere a bit with the translation.
840
+ [3066.440 --> 3075.840] So we took this final part precisely to recap everything
841
+ [3075.840 --> 3077.840] that she said.
842
+ [3078.840 --> 3081.840] Any questions?
843
+ [3081.840 --> 3084.840] We have questions.
844
+ [3091.840 --> 3094.840] Hello, I don't know if you know me.
845
+ [3094.840 --> 3095.840] Is that it?
846
+ [3095.840 --> 3101.840] I am Renata Mancini and, please, please, please... it was a good lecture.
847
+ [3102.840 --> 3103.840] It's not...
848
+ [3103.840 --> 3106.840] Is it better?
849
+ [3106.840 --> 3107.840] Is it better?
850
+ [3107.840 --> 3109.840] Now, there it is.
851
+ [3109.840 --> 3120.720] My question is not so much a question as a curiosity, because I was wondering whether
852
+ [3180.720 --> 3186.960] vocês们 não hunham uma ingüenza o诉quetes que-se entretophy竖ilecy se
853
+ [3186.960 --> 3193.700] fosse umamerka com a sério ideal deisko sotto Co pas destro da encarrag police
854
+ [3199.720 --> 3200.720] Ok?
855
+ [3200.720 --> 3206.160] Isso aqui oversa um século de sobre isso, agora Supercr downloaded performer nessa
856
+ [3206.160 --> 3219.600] ethical mas é 있는데 velvetiasichen essas estátocadas de Myancínios de
857
+ [3219.600 --> 3225.820] rapporto que eu conheci com essecho DeepHand deanson ou o ac Estado Ofance,
858
+ [3225.820 --> 3230.680] calories inteirais dividing, pr veulent.
859
+ [3230.680 --> 3236.140] está o strikes na descrição de했어요.
860
+ [3236.500 --> 3242.680] O trabalho nesse surround mathematical é muito o pouco que a gente faz no meio que eu instrumentali.
861
+ [3242.880 --> 3245.940] gentlemente acho vegetable eu não posso fazer nada.
862
+ [3246.440 --> 3250.400] Mas vocêceralza onde toda o trabalho se decir?
863
+ [3250.520 --> 3254.440] Nem tinha pouca Impactos Artes Computadores.
864
+ [3254.780 --> 3256.740] Mesteem naшьerière.
865
+ [3256.740 --> 3262.260] Whether the theory accommodates this: the professor's answer was yes.
866
+ [3262.260 --> 3268.100] And she took as an example a study that she conducted with a professor from the University
867
+ [3268.100 --> 3269.820] of San Diego, Rafael Núñez.
868
+ [3269.820 --> 3275.060] They worked with a language spoken in Chile, a language called Aymara.
869
+ [3275.060 --> 3282.100] And in that language, the past is in front and the future is behind.
870
+ [3282.100 --> 3287.100] The reverse of what we have in mind, right?
871
+ [3287.100 --> 3294.900] The ego is still the center, but the future is in front and the past is behind.
872
+ [3294.900 --> 3296.900] Sorry?
873
+ [3296.900 --> 3302.740] The future is behind and the past is in front.
874
+ [3302.740 --> 3304.740] Sorry, that's right.
875
+ [3304.740 --> 3309.180] My Western bias is getting in my way here.
876
+ [3309.180 --> 3316.060] But then she gave other examples: for instance, in some cultures, with pointing, it doesn't matter whether it's like this
877
+ [3316.060 --> 3319.580] or like this, the pointing can be done the same way.
878
+ [3319.580 --> 3327.300] You can, you are still pointing, and you can point with your mouth, like this, or with your head
879
+ [3327.300 --> 3333.820] if your hands are full of materials, full of books, and someone asks you where a certain thing is.
880
+ [3333.820 --> 3336.180] And you say: over there.
881
+ [3336.180 --> 3345.180] So regardless of the way you do it, you end up performing the pointing gesture, for example.
882
+ [3345.180 --> 3352.180] For us it's different, we can do it like this, like this, like this.
883
+ [3353.180 --> 3356.180] More questions?
884
+ [3362.180 --> 3364.180] Right.
885
+ [3367.180 --> 3369.180] Bad connection.
886
+ [3374.180 --> 3376.180] Okay.
887
+ [3376.180 --> 3380.180] Okay, so, we'd like to have a second.
888
+ [3383.180 --> 3385.180] Okay.
889
+ [3385.180 --> 3389.180] So, thank you for your lecture.
890
+ [3389.180 --> 3392.180] It was good.
891
+ [3392.180 --> 3394.180] Thank you.
892
+ [3394.180 --> 3397.180] Yes, thank you.
transcript/allocentric_2vwQyeV-LQ4.txt ADDED
@@ -0,0 +1,774 @@
1
+ [0.000 --> 10.320] Welcome.
2
+ [10.320 --> 12.160] Thank you for joining me this afternoon.
3
+ [12.160 --> 20.560] I'm Linda Silverman and I'm happy to share the visual spatial learner concept with you.
4
+ [20.560 --> 22.880] How many of you have heard this before?
5
+ [22.880 --> 24.880] Visual spatial?
6
+ [24.880 --> 27.640] Oh, that's why you're here.
7
+ [27.640 --> 29.640] How many of you are visual spatial?
8
+ [30.640 --> 32.640] You're not sure.
9
+ [32.640 --> 35.640] Well, we'll look at a slide.
10
+ [35.640 --> 41.640] How many of you are like this lady over here with the file cabinets where everything's neat?
11
+ [41.640 --> 45.640] How many of you are more like that fellow?
12
+ [45.640 --> 49.640] Definitely a visual spatial crowd here.
13
+ [50.640 --> 59.520] I don't think of visual spatial learners as being disorganized.
14
+ [59.520 --> 64.040] I think of them as being differently organized.
15
+ [64.040 --> 74.960] So if you are a neatnik, if you're like the very orderly person on the left and you want
16
+ [74.960 --> 83.160] to straighten out the materials of someone like that in your life, someone you live with
17
+ [83.160 --> 89.640] or someone you teach with, they will never find what they're looking for again because
18
+ [89.640 --> 94.160] there are filers and there are pilers.
19
+ [94.160 --> 100.040] And the people who make piles know what day of the week they put it down there and they
20
+ [100.040 --> 104.200] know how far down the pile to look.
21
+ [104.200 --> 110.040] And if you try to organize them the way you are organized, they can't ever find anything
22
+ [110.040 --> 112.080] again.
23
+ [112.080 --> 121.280] So the point of the cartoon is really to show that there are different organizational
24
+ [121.280 --> 122.280] systems.
25
+ [122.280 --> 131.960] It doesn't mean that the neat woman is the smart one and the disorganized male is not
26
+ [131.960 --> 133.120] smart.
27
+ [133.120 --> 138.120] There are just different ways of being in the world.
28
+ [138.120 --> 145.920] This gentleman has more potential for creativity.
29
+ [145.920 --> 149.280] So that's not to be looked down upon.
30
+ [149.280 --> 153.800] It's just because the organization system is different.
31
+ [153.800 --> 156.520] But I am not a real visual spatial learner.
32
+ [156.520 --> 164.400] You people are the real experts, the ones who identify as being visual spatial because
33
+ [164.400 --> 170.280] you live it and you have it from the inside out.
34
+ [170.280 --> 171.400] I don't.
35
+ [171.400 --> 174.240] I am spatially impaired.
36
+ [174.240 --> 183.240] So my job is just to help people who are like me understand people who are like you.
37
+ [183.240 --> 186.560] And I'll give you a perfect example.
38
+ [186.560 --> 196.160] I went to the gas station and I was late for work and I went, I mean you can probably
39
+ [196.160 --> 202.200] tell in the picture that I went up to the gas tank and got out of the car and realized
40
+ [202.200 --> 209.000] that the gas tank was on one side, the gas pumps were on the other side.
41
+ [209.000 --> 217.320] So I got back in the car and I drove around and when I got out of the car the gas pump
42
+ [217.320 --> 222.920] was on one side, the gas tank was still on the opposite side.
43
+ [222.920 --> 231.200] So I got back in the car, I drove around the pump again and I got out of the car and
44
+ [231.200 --> 234.000] I still had them in the wrong position.
45
+ [234.000 --> 240.840] By this time the guys inside the shop were laughing so hard, I couldn't even get gas,
46
+ [240.840 --> 242.960] I was too embarrassed.
47
+ [242.960 --> 249.080] And I ended up going 35 miles on the freeway with an empty gas tank.
48
+ [249.080 --> 251.080] It's true.
49
+ [251.080 --> 256.320] So I don't come to this from internal knowing.
50
+ [256.320 --> 263.280] But I did notice that a lot of the people who wrote about the visual spatial experience
51
+ [263.280 --> 273.480] were males who had been damaged by the school system, damaged and marginalized and made
52
+ [273.480 --> 276.000] to feel bad about themselves.
53
+ [276.000 --> 280.120] And they were not fond of teachers.
54
+ [280.120 --> 292.040] And I was a classroom teacher and I am more sequential and so I thought okay I can be
55
+ [292.040 --> 293.520] the translator.
56
+ [293.520 --> 301.600] I can be the medium for the people who can't explain how they think, how they get to their
57
+ [301.600 --> 303.000] answers.
58
+ [303.000 --> 305.120] They can't show their work.
59
+ [305.120 --> 310.400] They just know and they don't know how they know but they just know.
60
+ [310.400 --> 319.640] So when you hear the word visual spatial just what comes to mind for you, just shout out
61
+ [319.640 --> 322.400] something.
62
+ [322.400 --> 324.400] 3D.
63
+ [324.400 --> 327.200] Creative.
64
+ [327.200 --> 334.120] I'm repeating because we're filming this and I want everyone to be able to hear it.
65
+ [334.120 --> 336.400] No sound concept of time?
66
+ [336.400 --> 338.400] Yeah, that's true.
67
+ [338.400 --> 342.200] There's a reason for that.
68
+ [342.200 --> 347.440] Time is processed in the left hemisphere.
69
+ [347.440 --> 350.560] The right hemisphere lives in the eternal now.
70
+ [350.560 --> 353.000] There is no time.
71
+ [353.000 --> 359.240] So if you have no sense of time you're hanging out in your right hemisphere which is what
72
+ [359.240 --> 361.200] these people do.
73
+ [361.200 --> 367.280] And if you're very time conscious you're hanging out in your left hemisphere and time runs
74
+ [367.280 --> 368.800] your life.
75
+ [368.800 --> 373.600] Time dictates how you should spend your life.
76
+ [373.600 --> 380.040] But there are people who really do not have any time consciousness.
77
+ [380.040 --> 382.440] What else do you think of?
78
+ [382.440 --> 383.440] Yes.
79
+ [383.440 --> 390.040] I'm sorry I couldn't hear that.
80
+ [390.040 --> 393.040] I'm so helicopter view.
81
+ [393.040 --> 394.520] Oh the helicopter view.
82
+ [394.520 --> 396.480] Yes, yes, yes, yes.
83
+ [396.480 --> 397.680] That's something I don't have.
84
+ [397.680 --> 398.880] I can't do that.
85
+ [398.880 --> 403.680] I can't imagine what a building looks like from the top down.
86
+ [403.680 --> 405.160] Don't have that capacity.
87
+ [405.160 --> 411.080] How many of you can imagine what's up here or what's over there and you know you have
88
+ [411.080 --> 415.160] an internal map, internal sense of direction.
89
+ [415.160 --> 417.280] I have none whatsoever.
90
+ [417.280 --> 419.280] I have no idea where I am.
91
+ [419.280 --> 424.640] I'd have someone take me by the arm to the toilet and get me back otherwise I'd never
92
+ [424.640 --> 425.960] be found again.
93
+ [425.960 --> 431.360] Yes, it wasn't just that I didn't understand the words.
94
+ [431.360 --> 438.040] I don't even understand the concept.
95
+ [438.040 --> 439.040] What else?
96
+ [439.040 --> 440.040] Imagination.
97
+ [440.040 --> 444.360] Oh yes, imagination.
98
+ [444.360 --> 446.360] What else?
99
+ [446.360 --> 455.880] Well, I'll share with you what some of the teachers that I have worked with
100
+ [455.880 --> 464.520] have said when I've asked them this question: artistic, mathematical, blessed the computer,
101
+ [464.520 --> 473.840] great imagination, laughter, needs more time, wonderful synthesizer.
102
+ [473.840 --> 478.200] Needs to see you when you're talking.
103
+ [478.200 --> 484.040] Puts things together without the directions.
104
+ [484.040 --> 497.320] Chess club, can't spell, scattered, doesn't show the work, and illegible handwriting.
105
+ [497.320 --> 501.360] Does that, yes?
106
+ [501.360 --> 510.000] Is it possible that someone is a strong visual spatial learner and is very good at
107
+ [510.000 --> 511.000] spelling?
108
+ [511.000 --> 515.920] Yes, I knew you were going to ask that question because I was just thinking about the
109
+ [515.920 --> 518.920] man who wrote to me.
110
+ [518.920 --> 523.760] I put out a question on my website when I was writing the book Upside-Down Brilliance
111
+ [523.760 --> 526.320] about, do you relate to these?
112
+ [526.320 --> 529.160] Can you tell, share your stories?
113
+ [529.160 --> 538.120] And this man said, I relate to everything except I'm such a good speller and he misspelled
114
+ [538.120 --> 544.760] three words in those two little sentences.
115
+ [544.760 --> 548.640] But I can't explain people who really can spell.
116
+ [548.640 --> 552.360] It's called a photographic mind.
117
+ [552.360 --> 560.840] So if you can see it and you have that photographic image, you can remember how to spell it.
118
+ [560.840 --> 567.280] And that is the secret to teaching spelling to visual spatial learners.
119
+ [567.280 --> 570.360] You have to get them to visualize.
120
+ [570.360 --> 575.520] Can't visualize, can't spell because they can't sound it out.
121
+ [575.520 --> 578.040] They have to see it.
122
+ [578.040 --> 579.440] Do you see the words?
123
+ [579.440 --> 580.440] Yes.
124
+ [580.440 --> 581.880] That's the secret.
125
+ [581.880 --> 585.800] But I did immediately think about this man who said, yeah, but I spell.
126
+ [585.800 --> 588.880] I really can spell and he couldn't.
127
+ [588.880 --> 589.880] Yes.
128
+ [589.880 --> 590.880] Yes.
129
+ [590.880 --> 604.280] Oh, well, I don't know what it's like in the Netherlands, but in the United States, all
130
+ [604.280 --> 611.560] of our teachers are taught and all of our achievement tests say you have to show the
131
+ [611.560 --> 614.920] steps that you took to get to your answer.
132
+ [614.920 --> 618.840] If you can't show your work, you don't know anything.
133
+ [618.840 --> 622.840] You have to demonstrate how you got to the answer.
134
+ [622.840 --> 630.700] Well, if you didn't take a series of steps to get to an answer, how can you show your work?
135
+ [630.700 --> 634.040] Your work is, ah, I see it.
136
+ [634.040 --> 636.200] That's your work.
137
+ [636.200 --> 637.720] You see it all at once.
138
+ [637.720 --> 640.360] You just see it in your head.
139
+ [640.360 --> 647.780] And the people who write the American textbooks, the people who teach the teachers how to teach,
140
+ [647.780 --> 654.100] and the people who write the achievement tests all believe that everyone takes a series
141
+ [654.100 --> 656.800] of steps to get to an answer.
142
+ [656.800 --> 660.040] So you should be able to show your work.
143
+ [660.040 --> 662.080] That's what that means.
144
+ [662.080 --> 666.080] So, ah, let's see if you're a visual spatial learner.
145
+ [666.320 --> 675.280] I want you to write down if you have paper, you know, how many of these fit you.
146
+ [675.280 --> 681.280] This is a short list, but see if, just make a tally mark.
147
+ [681.280 --> 685.600] Are you a big picture thinker?
148
+ [685.600 --> 689.640] Do you solve problems in unusual ways?
149
+ [689.640 --> 692.240] Do you learn concepts all at once?
150
+ [692.240 --> 696.040] You get this, ah-ha, I got it.
151
+ [696.600 --> 701.360] Do you need to see relationships in order to learn?
152
+ [701.360 --> 705.000] Do you have a vivid imagination?
153
+ [705.000 --> 709.080] Can you feel what others are feeling?
154
+ [709.080 --> 712.440] Are you good at reading maps?
155
+ [712.440 --> 716.840] Do you often lose track of time?
156
+ [716.840 --> 719.440] Do you struggle with spelling?
157
+ [719.440 --> 723.960] Are you organizationally impaired?
158
+ [724.000 --> 729.480] Now you don't have to fit all of them, but if you fit the majority,
159
+ [729.480 --> 732.280] you probably are more visual spatial.
160
+ [732.280 --> 735.200] How many of you fit half of them?
161
+ [735.200 --> 736.880] Half fit you.
162
+ [736.880 --> 740.240] How many more than half fit you?
163
+ [740.240 --> 745.200] Yeah, we are, you're much more of a visual spatial audience.
164
+ [745.200 --> 750.040] So, this is going to be group therapy, I guess.
165
+ [750.040 --> 759.120] So maybe when the video tape is done and it gets posted on YouTube,
166
+ [759.120 --> 762.760] more people will understand what you already know.
167
+ [762.760 --> 766.840] So, how many of you are teachers?
168
+ [766.840 --> 771.200] Okay, so how can you tell if your students are visual spatial?
169
+ [771.200 --> 774.240] I mean, this is one of the cartoons.
170
+ [774.240 --> 779.480] I'm going to show you a series of cartoons from upside down brilliance.
171
+ [779.480 --> 785.840] Do they know things without being able to explain how or why?
172
+ [785.840 --> 787.040] How did you get this answer?
173
+ [787.040 --> 789.760] I don't know, I just know.
174
+ [789.760 --> 792.400] Do they lose track of time?
175
+ [792.400 --> 797.680] Do they have difficulty with timed tests?
176
+ [797.680 --> 806.960] Do they remember what they see but forget what they hear?
177
+ [806.960 --> 814.000] And then, do they have the most creative reason for not having their homework done that
178
+ [814.000 --> 817.720] you have ever encountered in all your years of teaching?
179
+ [817.720 --> 822.480] Don't you think they get extra credit for real creative excuses?
180
+ [822.480 --> 826.600] I think they should get extra credit.
181
+ [826.600 --> 830.920] So who are these visual spatial learners?
182
+ [830.920 --> 834.040] They are the children we call twice exceptional.
183
+ [834.040 --> 836.840] They're gifted with learning disabilities.
184
+ [836.880 --> 842.520] They're the underachievers who aren't exactly doing what we would like them to do.
185
+ [842.520 --> 851.000] They're your creative learners, your artists, musicians, mathematicians and builders, and
186
+ [851.000 --> 853.760] your future surgeons.
187
+ [853.760 --> 861.200] You really want your surgeon to know where everything is in relation to everything else and
188
+ [861.200 --> 865.040] put things back exactly where they were.
189
+ [865.040 --> 867.160] It's a visual art.
190
+ [867.160 --> 874.040] And in order to get into the field of surgery, you have to take a spatial test to show that
191
+ [874.040 --> 875.800] this is something you can do.
192
+ [875.800 --> 877.880] I would not make a good surgeon.
193
+ [877.880 --> 880.680] You don't want me operating on you.
194
+ [880.680 --> 885.000] So how many of you believe in learning styles?
195
+ [885.000 --> 891.720] And how do you differentiate for students in your classroom with different learning styles?
196
+ [891.720 --> 898.720] What models or methods or how do you help kids who learn differently?
197
+ [898.720 --> 901.720] Yes.
198
+ [901.720 --> 909.720] Oh, thank you.
199
+ [909.720 --> 913.720] Thank you.
200
+ [913.720 --> 935.720] Do you have a good role in the children's level of teaching?
201
+ [935.720 --> 937.720] Will you say that again, please?
202
+ [937.720 --> 945.960] We teach with goals and the people that choose how they will learn for that goal in their
203
+ [945.960 --> 949.720] own manner and learning style.
204
+ [949.720 --> 951.720] That's wonderful.
205
+ [951.720 --> 955.200] Who else?
206
+ [955.200 --> 958.200] Thank you, Mom.
207
+ [958.200 --> 962.400] Yes.
208
+ [962.400 --> 967.000] I differentiate in the instruction methods.
209
+ [967.000 --> 973.120] Sometimes I do it verbally, but with all the children, I take pens that I see how the
210
+ [973.120 --> 977.000] figures are visualized.
211
+ [977.000 --> 979.560] I had trouble hearing that.
212
+ [979.560 --> 982.760] The instruction I give in different modes.
213
+ [982.760 --> 989.360] Sometimes it's just verbally instruction and sometimes I use pens to visualize how it's
214
+ [989.360 --> 990.360] built up.
215
+ [990.360 --> 995.720] You're aware that some learn better verbally, some learn better visually.
216
+ [995.720 --> 1002.440] So I have heard several times since I got here that a lot of people are using the multiple
217
+ [1002.440 --> 1005.800] intelligence model by Howard Gardner.
218
+ [1005.800 --> 1009.760] How many of you are using Gardner's model?
219
+ [1009.760 --> 1013.720] How many of you have been taught Gardner's model?
220
+ [1013.720 --> 1020.720] So how many intelligence are there?
221
+ [1020.720 --> 1030.480] Eight, nine, ten?
222
+ [1030.480 --> 1032.200] It's a little confusing, isn't it?
223
+ [1032.200 --> 1036.840] It depends now what day is this.
224
+ [1036.840 --> 1042.920] The intelligences have evolved and changed over the years.
225
+ [1042.920 --> 1051.760] There's the original seven intelligences in frames of mind, which came out in 1983.
226
+ [1051.760 --> 1062.680] And that includes linguistic, which is your verbal, musical, logical mathematical, spatial,
227
+ [1062.680 --> 1067.960] bodily kinesthetic, interpersonal and intrapersonal.
228
+ [1067.960 --> 1072.400] And then afterwards some new intelligences came about.
229
+ [1072.400 --> 1077.000] Do you know what the new ones were?
230
+ [1077.000 --> 1079.200] Spiritual God asked.
231
+ [1079.200 --> 1080.520] Oh, natural.
232
+ [1080.520 --> 1087.800] Yeah, it sounds like it should be, or at least naturalistic, but it's the naturalist.
233
+ [1087.800 --> 1091.160] It doesn't have the same grammar.
234
+ [1091.160 --> 1099.440] But existential is one that has almost made it, but we're not quite sure.
235
+ [1099.440 --> 1105.800] I'm not sure how he decides when something is in or out, but last time I heard that it
236
+ [1105.800 --> 1107.720] was very close to being in.
237
+ [1107.720 --> 1111.240] How many of you thought spiritual was one?
238
+ [1111.240 --> 1120.800] It was, but it got canned because Gardner said that spirituality is not universal.
239
+ [1120.800 --> 1125.520] Okay.
240
+ [1125.520 --> 1134.800] So a lot of people think that because I'm the visual spatial learner person, that it comes
241
+ [1134.800 --> 1140.240] out of this model, but actually it doesn't.
242
+ [1140.240 --> 1148.960] The way in which I'm looking at visual spatial is through hemisphericity, not through multiple
243
+ [1148.960 --> 1151.160] intelligences.
244
+ [1151.160 --> 1156.640] And there is some overlap between Gardner's spatial intelligence and the visual spatial
245
+ [1156.640 --> 1158.600] learner, obviously.
246
+ [1158.600 --> 1167.320] But you notice that there is one major word missing in Gardners, and that's the word visual.
247
+ [1167.320 --> 1176.720] So that visual piece is not a part of that model.
248
+ [1176.720 --> 1183.480] There was another multiple intelligences model before Gardner that I'm seeing as a couple
249
+ [1183.480 --> 1185.240] nodding heads.
250
+ [1185.240 --> 1194.280] How many of you were exposed to Guilford, J.P. Guilford, and his Structure of Intellect?
251
+ [1194.280 --> 1201.680] He had at one time 120 intelligences.
252
+ [1201.680 --> 1210.360] And then before he died, he split figural into auditory figural and visual figural,
253
+ [1210.360 --> 1214.240] and ended up with 150 intelligences.
254
+ [1214.240 --> 1217.240] I wonder how many Gardner will have.
255
+ [1217.240 --> 1223.280] But Guilford's model was all the rage in the United States when I was teaching at the University
256
+ [1223.280 --> 1224.560] of Denver.
257
+ [1224.560 --> 1232.240] And we had to have all of our students learn the little names, acronyms, that went with
258
+ [1232.240 --> 1233.840] each cell.
259
+ [1233.840 --> 1239.720] So evaluation of figural units was EFU.
260
+ [1239.720 --> 1245.320] And they had to learn this when they were in graduate school and gifted education.
261
+ [1245.320 --> 1247.200] They didn't like that much.
262
+ [1247.200 --> 1256.800] But it has an interesting shape because Guilford was visual spatial.
263
+ [1256.800 --> 1264.160] How many of you know about Bloom's taxonomy? It's totally linear, sequential.
264
+ [1264.160 --> 1266.760] Compare that to this.
265
+ [1266.760 --> 1267.760] You've got a cube.
266
+ [1267.760 --> 1269.760] It's got dimensionality.
267
+ [1269.760 --> 1274.520] This is a visual spatial thinker.
268
+ [1274.520 --> 1279.360] Gardner's model is sequential.
269
+ [1279.360 --> 1286.040] So now we're going to change all together and talk about a whole other realm, which is
270
+ [1286.040 --> 1287.760] personality type.
271
+ [1287.760 --> 1292.760] How many of you know your personality type on the Myers-Briggs?
272
+ [1292.760 --> 1296.240] It's sometimes it's called the MBTI.
273
+ [1296.240 --> 1298.240] What are you?
274
+ [1298.240 --> 1300.240] I-N-F-P, that's the gifted type.
275
+ [1300.240 --> 1302.240] What are you?
276
+ [1302.240 --> 1303.240] Yes.
277
+ [1303.240 --> 1304.240] Pardon?
278
+ [1305.240 --> 1306.240] I-N-T-P.
279
+ [1306.240 --> 1307.240] OK.
280
+ [1307.240 --> 1316.800] I-N-T-P's make great college professors, but they don't ever write tests that anyone understands.
281
+ [1316.800 --> 1322.760] They're so intellectual, the concreteness that the students are looking for usually isn't
282
+ [1322.760 --> 1323.760] there.
283
+ [1323.760 --> 1327.680] But I-N-T-P and I-N-F-P are two gifted profiles.
284
+ [1327.680 --> 1328.680] Who else?
285
+ [1328.680 --> 1329.680] Brave.
286
+ [1329.680 --> 1330.680] Mote.
287
+ [1330.680 --> 1334.200] You had your hand raised.
288
+ [1334.200 --> 1335.200] Mote.
289
+ [1335.200 --> 1336.200] What are you?
290
+ [1336.200 --> 1337.200] I don't know.
291
+ [1337.200 --> 1338.200] You don't know.
292
+ [1338.200 --> 1339.200] I'm a mix.
293
+ [1339.200 --> 1340.200] I'm a mix.
294
+ [1340.200 --> 1344.200] And sometimes I'm just like that.
295
+ [1344.200 --> 1347.200] Sometimes you're one, sometimes you're other.
296
+ [1347.200 --> 1348.200] OK.
297
+ [1348.200 --> 1351.440] So you're in the middle.
298
+ [1351.440 --> 1362.080] The introverted, intuitive feeling perceiving is the most typical gifted child and gifted
299
+ [1362.080 --> 1364.480] adult profile.
300
+ [1364.480 --> 1370.960] And the extroverted, sensing, thinking, judging in the United States is the most typical
301
+ [1370.960 --> 1372.960] teacher profile.
302
+ [1372.960 --> 1379.160] So there's a real mismatch between the typical student and the typical teacher.
303
+ [1379.160 --> 1384.280] The reason this is up here when we're talking about learning styles is that there are a
304
+ [1384.280 --> 1393.120] lot of books about using the personality types as a basis for teaching styles and learning
305
+ [1393.120 --> 1395.320] styles in the classroom.
306
+ [1395.320 --> 1400.760] Have any of you ever seen learning styles based on the Myers-Briggs?
307
+ [1400.760 --> 1402.920] There's really good books on this.
308
+ [1402.920 --> 1412.000] So if we go by the Myers-Briggs, there are 16 different learning styles based on the
309
+ [1412.000 --> 1415.920] 16 different personality types.
310
+ [1415.920 --> 1423.000] If we go by Gardner's model, we've got eight and three quarters, maybe nine different
311
+ [1423.000 --> 1427.360] learning styles based on the multiple intelligences.
312
+ [1427.360 --> 1433.000] If we go by Guilford, we've got 150 different intelligences.
313
+ [1433.000 --> 1438.920] And if we have 150 learning styles to go with them, that would be challenging.
314
+ [1438.920 --> 1446.480] But this is supposed to be the very best and most comprehensive learning styles inventory
315
+ [1446.480 --> 1448.640] that's ever been developed.
316
+ [1448.640 --> 1452.120] Are any of you familiar with Dunn and Dunn?
317
+ [1452.120 --> 1455.720] Dunn and Dunn's elements of learning style?
318
+ [1455.720 --> 1457.120] You are.
319
+ [1457.120 --> 1458.120] Have you ever tried it?
320
+ [1458.120 --> 1460.560] Have you ever done it in the classroom?
321
+ [1460.560 --> 1462.560] You haven't done Dunn and Dunn.
322
+ [1462.560 --> 1463.560] Okay.
323
+ [1463.560 --> 1467.760] So this is the most comprehensive.
324
+ [1467.760 --> 1473.960] There's environmental, emotional, sociological, physical, psychological.
325
+ [1473.960 --> 1477.800] And then there are environmental elements.
326
+ [1477.800 --> 1479.600] Silence versus sound.
327
+ [1479.600 --> 1482.840] Are you more comfortable in a silent environment?
328
+ [1482.840 --> 1489.000] Bright versus low light, warm versus cool temperatures, formal versus informal design
329
+ [1489.000 --> 1490.960] of space.
330
+ [1490.960 --> 1496.640] Then there's emotional elements, motivation, persistence, responsibility, structure versus
331
+ [1496.640 --> 1498.080] options.
332
+ [1498.080 --> 1503.960] Then there are sociological elements, thinking and working with peers alone and pairs in teams
333
+ [1503.960 --> 1506.880] with adults and in several ways.
334
+ [1506.880 --> 1513.760] And then there are physical elements, perceptual strengths, auditory, visual, tactile, kinesthetic,
335
+ [1513.760 --> 1516.800] with or without intake of food or drink.
336
+ [1516.800 --> 1521.840] And time of day or night, I had to decide to just put day or night.
337
+ [1521.840 --> 1525.720] Otherwise, if I had to put time in there, I couldn't have done this.
338
+ [1525.720 --> 1528.240] Mobility versus passivity.
339
+ [1528.240 --> 1533.920] And then there are the psychological elements, global versus analytic, hemispheric preference
340
+ [1533.920 --> 1537.800] and impulsivity versus reflectivity.
341
+ [1537.800 --> 1545.400] So if we tried to come up with the number of different learning styles that this would
342
+ [1545.400 --> 1553.520] generate, we would have eight environmental, eight emotional, six sociological, three
343
+ [1553.520 --> 1559.320] perceptual, six other physical and six psychological elements.
344
+ [1559.320 --> 1566.480] How many possible learning styles do you think there might be, according to Dunn and Dunn?
345
+ [1566.480 --> 1570.280] That's a very good guess.
346
+ [1570.280 --> 1574.240] 41,472.
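[For reference, assuming the element counts just listed combine independently, the figure follows from simple multiplication: 8 × 8 × 6 × 3 × 6 × 6 = 41,472 possible learning-style combinations.]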
347
+ [1574.240 --> 1581.040] I was a classroom teacher and there are a limited number of hours in the day.
348
+ [1581.040 --> 1587.880] And while I respect what all of my colleagues have accomplished in terms of raising awareness
349
+ [1587.880 --> 1593.880] about learning style and I appreciate their work, I have to believe that there is an easier
350
+ [1593.880 --> 1597.440] way to prepare for students with different learning styles.
351
+ [1597.440 --> 1605.520] So the model I'm sharing with you only has two parts.
352
+ [1605.520 --> 1611.720] One that talks to the left hemisphere, one that talks to the right hemisphere.
353
+ [1611.720 --> 1616.160] And I'm not planning on adding another hemisphere.
354
+ [1616.160 --> 1621.160] So it's not going to grow, it's not going to change, it's going to stay the way it is,
355
+ [1621.160 --> 1623.400] and it gets even better.
356
+ [1623.400 --> 1631.520] You don't have to worry about one of them because you already know how to reach
357
+ [1631.520 --> 1634.400] auditory sequential learners.
358
+ [1634.400 --> 1642.120] Those are the happy campers who come to school, bring you flowers, love your lessons,
359
+ [1642.120 --> 1645.560] love the homework, and are doing a great job.
360
+ [1645.560 --> 1649.560] They're enjoying school and it all works for them.
361
+ [1649.560 --> 1654.920] So I don't need to give you any advice at all about working with children who are good
362
+ [1654.920 --> 1661.040] step-by-step learners, who are good listeners, who attend to details.
363
+ [1661.040 --> 1668.640] They learn by trial and error, they teach in words, they learn in words, and ideas.
364
+ [1668.640 --> 1673.160] If you ask what the right answer is, they know that there's a right answer that they
365
+ [1673.160 --> 1674.760] can get.
366
+ [1674.760 --> 1680.400] And they're time conscious, they get their homework in on time, and they're analytical.
367
+ [1680.400 --> 1688.960] So instead of our talking about how to create an environment where both types of students
368
+ [1688.960 --> 1690.200] are happy.
369
+ [1690.200 --> 1695.800] I think we have to be acknowledging of the fact that one group of these students is already
370
+ [1695.800 --> 1701.200] happy, and one group of these students is not so happy.
371
+ [1701.200 --> 1704.360] They're not as happy coming to school.
372
+ [1704.360 --> 1706.680] They're not as engaged.
373
+ [1706.680 --> 1709.320] They're sometimes marginalized.
374
+ [1709.320 --> 1712.440] They sometimes feel stupid.
375
+ [1712.440 --> 1717.120] They are often not picked for the gifted programs.
376
+ [1717.120 --> 1725.840] They're the ones who are going to be just below the cut-off score to qualify for provisions.
377
+ [1725.840 --> 1729.720] And they're the kids that we're missing.
378
+ [1729.720 --> 1731.960] They are the cameramen.
379
+ [1731.960 --> 1735.600] They are the photographers.
380
+ [1735.600 --> 1738.720] They are the architects.
381
+ [1738.720 --> 1741.120] They are the engineers.
382
+ [1741.120 --> 1743.360] They are the builders.
383
+ [1743.360 --> 1749.600] They are the people who invent paradigm shifts, and they're important.
384
+ [1749.600 --> 1757.840] And we have to recognize that they exist and start to make school at least visual-spatial
385
+ [1757.840 --> 1759.920] friendly.
386
+ [1759.920 --> 1766.880] The good news about just thinking about this one group of children is that it's been
387
+ [1766.880 --> 1774.720] demonstrated that if you make learning more accessible for visual-spatial learners,
388
+ [1774.720 --> 1778.800] everybody in the classroom learns better.
389
+ [1778.800 --> 1787.520] So the things that you do for this one group also turn on the brain for all of the students.
390
+ [1787.520 --> 1790.560] So everybody benefits.
391
+ [1790.560 --> 1795.960] The visual-spatial learner learns more all at once, whole part learning.
392
+ [1795.960 --> 1802.240] They have to see the big picture, and then they can understand how the parts relate to
393
+ [1802.240 --> 1803.840] the whole.
394
+ [1803.840 --> 1806.520] They're very keen observers.
395
+ [1806.520 --> 1812.560] If you are wearing a colored contact lens, they're the ones that will say, weren't your
396
+ [1812.560 --> 1815.080] eyes brown yesterday?
397
+ [1815.080 --> 1820.520] If you change a bulletin board, they're the first ones to notice.
398
+ [1820.520 --> 1822.240] Big picture thinkers.
399
+ [1822.240 --> 1825.560] They get this aha moment.
400
+ [1825.560 --> 1832.600] They have strong images, and those who are not good visualizers have strong feelings
401
+ [1832.600 --> 1833.880] of knowing.
402
+ [1833.880 --> 1839.200] So some of them don't visualize they just know intuitively or in their gut.
403
+ [1839.200 --> 1842.600] They come up with unusual solutions to problems.
404
+ [1842.600 --> 1847.000] They lose track of time, and they're intuitive.
405
+ [1847.000 --> 1854.920] And these are the kids that I'm hoping that we can pay more attention to.
406
+ [1854.920 --> 1862.680] And the person who influenced my thinking the most on this population is a brain researcher
407
+ [1862.680 --> 1867.120] in the United States named Jerre Levy.
408
+ [1867.120 --> 1870.160] There's a book called Left Brain, Right Brain.
409
+ [1870.160 --> 1872.080] I don't know whether any of you have come across it.
410
+ [1872.080 --> 1874.800] Well, I see one nod. Springer and Deutsch.
411
+ [1874.800 --> 1880.920] They credit Jerre Levy with having discovered the functions of the left hemisphere and the
412
+ [1880.920 --> 1883.920] functions of the right hemisphere in her research.
413
+ [1883.920 --> 1886.800] She was still a graduate student.
414
+ [1886.800 --> 1894.480] And she said that unless the right hemisphere is activated and engaged, this is not just
415
+ [1894.480 --> 1896.640] in visual spatial children.
416
+ [1896.640 --> 1905.120] This is in every human being, in every learner, unless the right hemisphere is activated and engaged.
417
+ [1905.120 --> 1910.320] Attention is low, and learning is poor.
418
+ [1910.320 --> 1914.160] Because we all have both hemispheres.
419
+ [1914.160 --> 1921.000] Even if we bring our left hemisphere to school, our right hemisphere comes with it.
420
+ [1921.000 --> 1927.000] And if we want a student to be alert and engaged, we have to get that right hemisphere into
421
+ [1927.000 --> 1931.520] the act for all of our students.
422
+ [1931.520 --> 1938.320] So these are how the two hemispheres work differently.
423
+ [1938.320 --> 1946.000] The left hemisphere is sequential, analytic, and temporal, meaning time bound.
424
+ [1946.000 --> 1950.120] Time exists because of the left hemisphere.
425
+ [1950.120 --> 1957.720] And the right hemisphere is much more aware of space, spatial relations, it's holistic.
426
+ [1957.720 --> 1965.520] And instead of being analytic and breaking things down, it's synthetic and brings things
427
+ [1965.520 --> 1967.000] together.
428
+ [1967.000 --> 1971.320] And these, how the parts can relate to the whole.
429
+ [1971.320 --> 1976.800] How many of you have heard that the left hemisphere is also verbal?
430
+ [1976.800 --> 1979.760] We're taught that a lot.
431
+ [1979.760 --> 1982.920] I don't think that that's accurate though.
432
+ [1982.920 --> 1986.400] And I'm going to give you an example of this.
433
+ [1986.400 --> 1989.320] I want you to pretend that I'm your mother.
434
+ [1989.320 --> 1993.040] I am old enough to be most of your mother's anyway.
435
+ [1993.040 --> 1995.960] And I want you to pretend that you're nine years old.
436
+ [1995.960 --> 1997.360] Can you do that?
437
+ [1997.360 --> 2001.600] Okay, you're downstairs, I'm upstairs.
438
+ [2001.600 --> 2004.720] And this is what you see in here.
439
+ [2004.720 --> 2007.560] Do you hear me?
440
+ [2007.560 --> 2010.800] Now what am I conveying to you?
441
+ [2010.800 --> 2019.280] What did you get out of my communication?
442
+ [2019.280 --> 2022.160] I'm angry.
443
+ [2022.160 --> 2025.520] How do you know I'm angry?
444
+ [2025.520 --> 2027.080] Tone of voice.
445
+ [2027.080 --> 2028.080] What else?
446
+ [2028.080 --> 2029.080] Volume.
447
+ [2029.080 --> 2030.080] Volume.
448
+ [2030.080 --> 2031.080] What else?
449
+ [2031.080 --> 2032.080] Volume.
450
+ [2032.080 --> 2033.080] Volume.
451
+ [2033.080 --> 2034.080] Volume.
452
+ [2034.080 --> 2035.080] Volume.
453
+ [2035.080 --> 2036.080] Volume.
454
+ [2036.080 --> 2037.080] Yeah.
455
+ [2037.080 --> 2040.320] And my facial expression?
456
+ [2040.320 --> 2044.040] My hands on my hips, my body language.
457
+ [2044.040 --> 2049.320] Your left hemisphere doesn't process any of that.
458
+ [2049.320 --> 2053.040] Only your right hemisphere is aware of all these elements.
459
+ [2053.040 --> 2056.800] There's something else that your right hemisphere is aware of.
460
+ [2056.800 --> 2063.040] Your right hemisphere remembers what happened to you the last time I looked like that.
461
+ [2063.040 --> 2068.400] Your right hemisphere is already figuring out what the consequences are going to be,
462
+ [2068.400 --> 2075.200] because it sees the big picture of what happened last time, what you're doing now, and
463
+ [2075.200 --> 2077.320] the trouble you're going to get into.
464
+ [2077.320 --> 2082.400] And what I'm going to do if you don't stop what you're doing that's getting me that
465
+ [2082.400 --> 2083.640] angry.
466
+ [2083.640 --> 2089.360] So the right hemisphere has the context.
467
+ [2089.360 --> 2098.720] In understanding verbal information, you have to have more than just an ability to decode
468
+ [2098.720 --> 2100.160] the words.
469
+ [2100.160 --> 2107.680] If your left hemisphere was all you had to work with and your right hemisphere wasn't operating,
470
+ [2107.680 --> 2112.280] the answer to my question would have been yes.
471
+ [2112.280 --> 2116.440] Do you hear me?
472
+ [2116.440 --> 2121.760] That left hemisphere is going to say yes, I hear you.
473
+ [2121.760 --> 2126.880] Because that's all that the left hemisphere got out of what I said.
474
+ [2126.880 --> 2133.960] It understood the words and it can produce words, and words are sequential.
475
+ [2133.960 --> 2138.120] If I said those same words out of order, I would have a thought disorder and you wouldn't
476
+ [2138.120 --> 2140.240] understand what I'm saying.
477
+ [2140.240 --> 2147.520] If you didn't understand the order of the words that I was saying, you couldn't follow
478
+ [2147.520 --> 2149.600] my discussion.
479
+ [2149.600 --> 2155.400] So for us to communicate, speech is sequential.
480
+ [2155.400 --> 2158.600] Listening is sequential.
481
+ [2158.600 --> 2161.600] It's auditory, sequential.
482
+ [2161.600 --> 2164.880] But it doesn't get at the full meaning.
483
+ [2164.880 --> 2169.680] You've got to have more than just an understanding of the words.
484
+ [2169.680 --> 2170.920] Do you hear me?
485
+ [2170.920 --> 2173.400] Yes, I hear you.
486
+ [2173.400 --> 2180.440] And begin to understand the meaning of what I just said.
487
+ [2180.440 --> 2187.800] The meaning came from your right hemisphere, from picking up all the rest of the information
488
+ [2187.800 --> 2190.560] and putting it together into a whole.
489
+ [2190.560 --> 2198.000] So the left hemisphere is dealing with the text, but the right hemisphere has the context.
490
+ [2198.000 --> 2206.280] The whole situation, background or environment relevant to something happening.
491
+ [2206.280 --> 2213.440] So the right hemisphere plays a very powerful role in understanding verbal communication.
492
+ [2213.440 --> 2217.160] Nonverbal is a part of verbal communication.
493
+ [2217.160 --> 2220.560] It gives you context.
494
+ [2220.560 --> 2226.240] The left hemisphere enables you to take things apart and analyze them and compare them.
495
+ [2226.240 --> 2229.360] And name them, name the parts.
496
+ [2229.360 --> 2234.720] But it's the right hemisphere that puts them all together and enables you to enjoy smelling
497
+ [2234.720 --> 2237.160] the flower.
498
+ [2237.160 --> 2243.040] So there are many, many gifts of our right hemisphere that we do not honor in school.
499
+ [2243.040 --> 2246.000] We're not teaching to these gifts.
500
+ [2246.000 --> 2249.400] We're not grading children on these gifts.
501
+ [2249.400 --> 2251.520] We're not giving them marks.
502
+ [2251.520 --> 2255.800] And they're not getting awards and excellence for these gifts.
503
+ [2255.800 --> 2261.040] But they're important life gifts.
504
+ [2261.040 --> 2266.360] You can't see the beginning for scientific, became tiffy for some reason.
505
+ [2266.360 --> 2275.520] But that said, scientific and technological proficiency, holistic and whole part thinking,
506
+ [2275.520 --> 2283.080] artistic expression, imagination, invention, discovery.
507
+ [2283.080 --> 2290.680] Back bottom one, for some reason you can't see the top word, but that's emotional responsiveness.
508
+ [2290.680 --> 2298.320] And the D is missing, or I guess it's black, whole, holographic understanding, intuitive
509
+ [2298.320 --> 2301.240] knowledge and spirituality.
510
+ [2301.240 --> 2303.680] These are the gifts of the right hemisphere.
511
+ [2303.680 --> 2305.640] And they're pretty important gifts.
512
+ [2305.640 --> 2309.760] I don't want to talk about just one of them.
513
+ [2309.760 --> 2313.760] How important is intuition?
514
+ [2313.760 --> 2319.200] How important is intuition to you?
515
+ [2319.200 --> 2326.080] Has intuition ever saved your life or saved the life of someone you know?
516
+ [2326.080 --> 2328.760] You just say it's pretty important.
517
+ [2328.760 --> 2333.520] Do you give marks in intuition in school?
518
+ [2333.520 --> 2338.320] Do you develop children's intuition?
519
+ [2338.320 --> 2339.320] You do.
520
+ [2339.320 --> 2344.160] I think it happens automatically, but that's more that the children just, exactly what you
521
+ [2344.160 --> 2347.920] just gave with the facial expression and the arms and the thing.
522
+ [2347.920 --> 2351.400] I think children learn that really quickly in school.
523
+ [2351.400 --> 2352.400] They do.
524
+ [2352.400 --> 2353.920] It's true.
525
+ [2353.920 --> 2356.720] But we have to acknowledge.
526
+ [2356.720 --> 2359.960] We have to say it's important.
527
+ [2359.960 --> 2367.360] We have to say that your intuition is valuable and good that you've got it and keep working
528
+ [2367.360 --> 2375.040] with it and keep counting on it because there is another way of knowing beside your logic.
529
+ [2375.040 --> 2379.360] Your intuition has a big picture.
530
+ [2379.360 --> 2384.720] It steps outside of time.
531
+ [2384.720 --> 2386.360] Think about that.
532
+ [2386.360 --> 2393.120] That's how it saves lives because it knows what's going to happen.
533
+ [2393.120 --> 2395.240] Your logic doesn't.
534
+ [2395.240 --> 2400.160] Your logic lives in time and it can't know the future.
535
+ [2400.160 --> 2402.560] But your intuition can.
536
+ [2402.560 --> 2403.960] You have to listen to it.
537
+ [2403.960 --> 2409.840] How many of you have had experiences where your intuition told you something and you didn't
538
+ [2409.840 --> 2415.480] listen and you regret it?
539
+ [2415.480 --> 2421.120] Because your logical mind says, well, how do you know that?
540
+ [2421.120 --> 2423.760] And you can't answer the question.
541
+ [2423.760 --> 2424.920] How do you know that?
542
+ [2424.920 --> 2426.240] You just know.
543
+ [2426.240 --> 2432.320] You don't know how you know but you're getting a message and the message knows something
544
+ [2432.320 --> 2437.480] but you can't explain how it knows what it knows.
545
+ [2437.480 --> 2447.720] That is a very powerful part of what you are born with that needs to be honored and developed
546
+ [2447.720 --> 2455.480] for your safety, your future and the future of everyone in your life.
547
+ [2455.480 --> 2459.080] So now I'm going to talk about two students.
548
+ [2459.080 --> 2466.600] We're going to assume that student A has a certain set of skills that student B doesn't
549
+ [2466.600 --> 2472.800] have and we're going to assume that student B has a certain set of skills that student
550
+ [2472.800 --> 2474.920] A doesn't have.
551
+ [2474.920 --> 2478.800] So student A has neat handwriting.
552
+ [2478.800 --> 2482.360] Student B types 60 words a minute.
553
+ [2482.360 --> 2485.040] Student A is good at spelling.
554
+ [2485.040 --> 2487.520] Student B is a good visualizer.
555
+ [2487.520 --> 2491.240] Student A has instant recall of facts.
556
+ [2491.240 --> 2494.720] Student B loves geometry and physics.
557
+ [2494.720 --> 2496.080] Student A is well-rounded.
558
+ [2496.080 --> 2499.840] Student B is brilliant in one area.
559
+ [2499.840 --> 2502.200] Student A is a convergent thinker.
560
+ [2502.200 --> 2504.480] Knows how to get to the right answer.
561
+ [2504.480 --> 2506.800] Student B is creative.
562
+ [2506.800 --> 2513.720] Student A is skilled at rote memorization and student B understands complex concepts.
563
+ [2513.720 --> 2515.800] Student A shows steps easily.
564
+ [2515.800 --> 2519.040] Student B sees the big picture.
565
+ [2519.040 --> 2521.000] Student A is a good analyzer.
566
+ [2521.000 --> 2523.960] B a good synthesizer.
567
+ [2523.960 --> 2525.640] A is punctual.
568
+ [2525.640 --> 2529.400] B has a more fluid sense of time.
569
+ [2529.400 --> 2531.520] A follows directions well.
570
+ [2531.520 --> 2536.280] B is an excellent problem solver.
571
+ [2536.280 --> 2540.040] Which of these students has a higher grade point average?
572
+ [2540.040 --> 2541.560] Higher marks.
573
+ [2541.560 --> 2551.600] A. And which of these students do you think is more employable in the 21st century?
574
+ [2551.600 --> 2559.160] But we continue our traditions and we continue to teach what we're commanded to teach in
575
+ [2559.160 --> 2567.200] the way we're commanded to teach it because that's what we're expected to do as teachers.
576
+ [2567.200 --> 2578.160] And if we want to keep our jobs, we continue to make all of the A group the important ones.
577
+ [2578.160 --> 2582.960] And we don't spend as much time on the B group.
578
+ [2582.960 --> 2588.840] Now I'm making assumptions here and please correct me if this doesn't apply to the Netherlands
579
+ [2588.840 --> 2589.840] at all.
580
+ [2589.840 --> 2592.720] It may only be an American phenomenon.
581
+ [2592.720 --> 2600.400] But in American schools, you are rewarded for following directions, turning in assigned
582
+ [2600.400 --> 2607.440] work on time, memorization of facts, fast recall, showing the steps of your work, neat
583
+ [2607.440 --> 2614.360] legible handwriting, accurate spelling, punctuality, good organization and tidiness.
584
+ [2614.360 --> 2617.160] Are those values in a Dutch school?
585
+ [2617.640 --> 2618.640] Still.
586
+ [2618.640 --> 2619.640] Okay.
587
+ [2619.640 --> 2628.040] So, what jobs in adult life require this set of skills?
588
+ [2628.040 --> 2630.320] What are we training our kids to be?
589
+ [2630.320 --> 2631.320] Yes.
590
+ [2631.320 --> 2632.320] Teachers.
591
+ [2632.320 --> 2634.720] Teachers, you got it.
592
+ [2634.720 --> 2639.040] We're training all of these kids to be teachers.
593
+ [2639.040 --> 2643.720] There are other jobs that this will equip them to do.
594
+ [2643.720 --> 2652.560] Middle management, good executive secretary, accountant, auditor.
595
+ [2652.560 --> 2656.200] I mean, there are some things, some good things.
596
+ [2656.200 --> 2658.240] I'm not saying these are bad things.
597
+ [2658.240 --> 2660.440] I'm saying they're not enough.
598
+ [2660.440 --> 2663.400] How many of you teach gifted children?
599
+ [2663.400 --> 2671.240] Are they all going to become teachers or middle managers or accountants or bookkeepers?
600
+ [2671.240 --> 2672.560] Probably not.
601
+ [2672.560 --> 2681.240] So I've actually inquired at higher level technical institutes.
602
+ [2681.240 --> 2685.080] What they're looking for in new hires.
603
+ [2685.080 --> 2687.240] What are the skills they want?
604
+ [2687.240 --> 2693.240] their new employees to have when they come into their positions.
605
+ [2693.240 --> 2696.200] And this is what I've been told.
606
+ [2696.200 --> 2705.560] If you want a job that's going to pay a considerable amount of money in a leadership position, these
607
+ [2705.560 --> 2710.240] are what you're going to have to come into that interview with.
608
+ [2710.240 --> 2715.640] The ability to predict trends.
609
+ [2715.640 --> 2720.080] The ability to grasp the big picture.
610
+ [2720.080 --> 2724.480] The ability to think outside the box.
611
+ [2724.480 --> 2728.160] Being a risk taker.
612
+ [2728.160 --> 2732.280] Problem finding as well as problem solving.
613
+ [2732.280 --> 2736.080] So that you find the problems to solve.
614
+ [2736.080 --> 2741.640] Combining your strengths with others' strengths to build a strong team.
615
+ [2741.640 --> 2744.160] Computer literacy.
616
+ [2744.160 --> 2746.600] Coping with complexity.
617
+ [2746.600 --> 2750.840] And the ability to read people well.
618
+ [2750.840 --> 2755.280] That's helpful if you're in some area where you have to sell your ideas.
619
+ [2755.280 --> 2759.880] You have to be able to read your audience.
620
+ [2759.880 --> 2763.200] Read the buyer.
621
+ [2763.200 --> 2770.680] Are we preparing our students for these higher level positions?
622
+ [2770.680 --> 2774.120] Are we giving them this set of skills?
623
+ [2774.120 --> 2775.720] We could.
624
+ [2775.720 --> 2780.920] If we weren't so worried about the other set of skills.
625
+ [2780.920 --> 2784.440] Because traditionally, that's what school was about.
626
+ [2784.440 --> 2785.440] Yes?
627
+ [2785.440 --> 2794.280] The miss match.
628
+ [2794.280 --> 2810.040] I'd like you to say that again so they can pick it up on the video.
629
+ [2810.040 --> 2811.040] It's important.
630
+ [2811.040 --> 2812.920] What you just said is important.
631
+ [2812.920 --> 2819.920] So it's even worse because most students and pupils already know this is going on.
632
+ [2819.920 --> 2831.560] They know this list is becoming more important than it becomes more important to have these qualities.
633
+ [2831.560 --> 2837.600] The gap between pupils and teachers becomes more and more obvious every day.
634
+ [2837.600 --> 2840.720] And then what happens to the student?
635
+ [2840.720 --> 2842.040] They lose interest.
636
+ [2842.040 --> 2843.040] They lack interest.
637
+ [2843.040 --> 2845.040] They become disengaged.
638
+ [2845.040 --> 2852.040] Just a sec.
639
+ [2852.040 --> 2859.240] Thank you.
640
+ [2859.240 --> 2865.640] I don't totally agree with the formal speaker because I think it's the difference, the gap,
641
+ [2865.640 --> 2869.040] between the system and the wishes of teachers.
642
+ [2869.040 --> 2870.840] I believe that that's true.
643
+ [2870.840 --> 2882.840] I have heard enough stories in the few days I've been here to know that you're caught between the expectations of you as a teacher within the system.
644
+ [2882.840 --> 2890.040] And the knowledge that your students have that in order for them to get a job, they need something different.
645
+ [2890.040 --> 2894.040] I understand that this is not your fault.
646
+ [2894.040 --> 2900.040] I'm not blaming because I was a classroom teacher and I know what that's like.
647
+ [2900.040 --> 2903.040] And I was fired enough times that I know what it's like.
648
+ [2903.040 --> 2906.840] So yeah, it's not easy.
649
+ [2906.840 --> 2915.840] It's not easy being a teacher today caught between these different agendas and expectations.
650
+ [2915.840 --> 2917.440] That's hard.
651
+ [2917.440 --> 2925.040] So how do you add this to what you're doing so that you can keep your job but still prepare your students?
652
+ [2925.040 --> 2927.040] Yes.
653
+ [2927.040 --> 2935.040] I think it's something we have to do because you also see this trend in business.
654
+ [2935.040 --> 2945.040] There is still, I was talking to her and I said, what if you put this on your CV, then you won't get a job.
655
+ [2945.040 --> 2955.040] But on the other side, there are businesses growing at this moment who just wants to have this on your CV and not the other one.
656
+ [2955.040 --> 2961.040] Because we have a lot of them in Holland at this moment and they're growing.
657
+ [2961.040 --> 2968.040] So we have to change it because the students won't fit into the new jobs.
658
+ [2968.040 --> 2978.040] So much of thank you, much of what we've been doing has been to prepare students for jobs for a different century.
659
+ [2978.040 --> 2981.040] Not the century they're in.
660
+ [2981.040 --> 2987.040] And yes, you are stuck in a teaching position.
661
+ [2987.040 --> 2998.040] But if you can begin the dialogue with whoever makes the decisions about what gets taught in school,
662
+ [2998.040 --> 3001.040] maybe you can begin to change things.
663
+ [3001.040 --> 3004.040] Somebody has to start somewhere.
664
+ [3004.040 --> 3006.040] We all have to.
665
+ [3006.040 --> 3009.040] Right.
666
+ [3009.040 --> 3015.040] How many of you are familiar with Daniel Pink, a whole new mind?
667
+ [3015.040 --> 3019.040] These are some quotes from his book.
668
+ [3019.040 --> 3021.040] I never pronounced this word right.
669
+ [3021.040 --> 3025.040] Is it seismic or seismic?
670
+ [3025.040 --> 3033.040] There is a seismic though as yet undetected shift now underway in much of the advanced world.
671
+ [3033.040 --> 3043.040] We are moving from an economy and a society built on the logical, linear, computer-like capabilities of the information age.
672
+ [3043.040 --> 3055.040] To an economy and a society built on the inventive, empathic, big-picture capabilities of what's rising in its place, the conceptual age.
673
+ [3055.040 --> 3065.040] Now one of the reasons why I think Daniel Pink can be helpful is that he's talking about an economic reality,
674
+ [3065.040 --> 3081.040] that the jobs that we're preparing students to hold in the 21st century are all going to be the ones that get outsourced to other countries where they can get the labor cheaper.
675
+ [3081.040 --> 3091.040] And if we want the students to have jobs, if we want the Netherlands to be strong economically,
676
+ [3091.040 --> 3101.040] we're going to have to teach them to do and to think in ways beyond what can be outsourced.
677
+ [3101.040 --> 3113.040] And that I think because the school system is an economic endeavor within the general economy of the country,
678
+ [3113.040 --> 3116.040] this can begin to reach people.
679
+ [3116.040 --> 3120.040] I think his words are very powerful.
680
+ [3120.040 --> 3125.040] He says the keys to the kingdom are changing hands.
681
+ [3125.040 --> 3137.040] The future belongs to a very different kind of person with a very different kind of mind, creators and empathizers, pattern recognizers and meaning makers.
682
+ [3137.040 --> 3153.040] These people, artists, inventors, designers, storytellers, caregivers, consolers, big picture thinkers will reap society's richest rewards and share its greatest joys.
683
+ [3153.040 --> 3159.040] That richest rewards is the piece that I think they'll understand.
684
+ [3159.040 --> 3177.040] What I notice in the United States is that all of the corporations with whom I deal, except the very biggest companies like Bank of America, are becoming more service oriented.
685
+ [3177.040 --> 3190.040] And you go into a hotel and the answer to any question is yes, or you go into a restaurant and the answer is you got it or perfect.
686
+ [3190.040 --> 3205.040] People are being trained to be more aware of service, being more responsive to what the public needs, fearful of the ratings that they're going to get on internet if they do a bad job.
687
+ [3205.040 --> 3210.040] Don't report us. Don't make us look bad.
688
+ [3210.040 --> 3227.040] So there is an economic benefit to the entire country and to the school system within the country to begin to be aware of the shifts in emphasis that are going on internationally.
689
+ [3227.040 --> 3237.040] It isn't enough to be a fast calculator. No one is going to wake you at four o'clock in the morning and say what's four times seven.
690
+ [3237.040 --> 3255.040] I mean, they're just not going to do that. There's a calculator now. And if a calculator can do it, we don't need to spend four years teaching somebody what a calculator can do.
691
+ [3255.040 --> 3271.040] Oh, my goodness. We have some missing pieces here. So how many of your students do you think are visual spatial? What would you guess based on what we've talked about? What percentage in your classroom?
692
+ [3271.040 --> 3291.040] What just a guess? What do you think? Over 50. Wow. I never would have guessed that. But I was wrong. But what would you think? Yeah. Pardon? 80%. Wow.
693
+ [3291.040 --> 3308.040] So maybe you have, I believe from what I've seen so far that you might be right. You, I think the Netherlands is more visual than the United States. I do. I think what I've seen. I think you might be right.
694
+ [3308.040 --> 3327.040] I have data from the United States from our studies, but I never dreamed that there were that many students. So we invented a visual spatial identifier. And it has a self report and an observer report.
695
+ [3327.040 --> 3343.040] And I'm just giving you a few of the sample items. It's not a lot. It was developed for teachers. So we've only got, I think, 14 items altogether. And then we've got a longer one that we're using in a clinical setting.
696
+ [3343.040 --> 3369.040] It's got 36 items and that's for clinicians. But the teacher version and the student version have things like: I hate speaking in front of a group. I think mainly in pictures instead of words. I know more than others think I know. I have a hard time explaining how I came up with my answers. This one, I am good at spelling, is a not.
697
+ [3369.040 --> 3396.040] I have a wild imagination. It was easy for me to learn my math facts, not. And what we found with that last one was interesting. We picked up visual spatial girls who never memorized their math facts. It was a more gender fair question. I never would have guessed that that would turn out like that. But we got more girls in our sample with that question.
698
+ [3396.040 --> 3406.040] So a few of them are reversed, not many. And this is what it looks like. And these are the...
699
+ [3407.040 --> 3410.040] These are the results of the study.
700
+ [3410.040 --> 3425.040] We worked with 4th, 5th and 6th graders in city schools and rural schools that were a mix of Caucasian and Hispanic.
701
+ [3425.040 --> 3450.040] A very large range of socioeconomic diversity. A lot of lower and lower middle class children in the sample. And about one third of them came out strongly visual spatial. Only a quarter of them came out strongly auditory sequential. And about 45% of them were mixed.
702
+ [3450.040 --> 3478.040] So we took a look at the group that was mixed that had a little of each. And we tried to see where were their preferences. And in that group twice as many of them lean toward visual spatial. They weren't strong, but that was their preference. They leaned in that direction only 15 of them, 15% lean toward auditory sequential.
703
+ [3478.040 --> 3491.040] So our research with 750, 4th, 5th and 6th graders, white, Hispanic, urban, rural, all socioeconomic ranges, all IQ ranges.
704
+ [3491.040 --> 3505.040] We saw that more than 60% in an American school were visual spatial. I'm guessing that it would be higher here, just from the people that I've met.
705
+ [3506.040 --> 3520.040] And we found much higher percentages in gifted classrooms and in Navajo and in twice exceptional. There's a school for gifted children with learning disabilities, a high school.
706
+ [3521.040 --> 3544.040] In California, I think we found 87% of them were visual spatial. So if you had to give a guess about just all of the children in Holland, what percentage of all the children do you think might be visual spatial?
707
+ [3544.040 --> 3557.040] All of the students. I mean there's no way to be wrong here because we don't know what's right. So what's just your best guess? What do you think?
708
+ [3558.040 --> 3564.040] I want to see how applicable you think this concept might be here. Yes.
709
+ [3574.040 --> 3591.040] Why do we have methods in the Netherlands which are based on learning through language instead of spatial learning, while we have so many students who prefer that?
710
+ [3591.040 --> 3611.040] Has it changed over the years? Yes. It certainly has in the United States. I don't know if it's changed here, but the percentage of visual spatial learners is increasing in the United States. Is it increasing here too, you think?
711
+ [3611.040 --> 3631.040] I think one of the reasons is that we are in an image oriented world. And that is that iconic world is increasing. The children are exposed to more visual. They weren't maybe a generation ago.
712
+ [3631.040 --> 3649.040] School was much more verbal, not much nonverbal. So yeah, I think the whole society, look at how many children are playing visual games and playing with cell phones and playing with iPads.
713
+ [3649.040 --> 3661.040] We have a very visual oriented society, but our teaching methods haven't become more visual. The children have.
714
+ [3661.040 --> 3681.040] So we have prized these left hemispheric skills for thousands of years. We're using a traditional model that was handed down to us generation after generation after generation.
715
+ [3681.040 --> 3699.040] But the right hemispheric skills of imagery, computer literacy, using your mind as a camera, this is becoming more important in the 21st century.
716
+ [3699.040 --> 3715.040] And for us to help our students become employable, I really think we have to prepare them for the visually oriented creative careers that await them, particularly our gifted kids.
717
+ [3715.040 --> 3729.040] And I believe that the visual spatial learners are going to become our next generation of leaders. The ones who were marginalized in school and felt stupid are going to end up being in leadership positions.
718
+ [3729.040 --> 3746.040] So this really finishes the first half of this session, not this session, but my presentation. And I'm going to continue in the next session talking about specific strategies.
719
+ [3746.040 --> 3758.040] But I separated out so that part one was about the theory and the construct. And part two was about how to teach the children.
720
+ [3758.040 --> 3770.040] What questions do you have about all of this information that I shared today? Well, that's handy.
721
+ [3770.040 --> 3785.040] I wonder, is it possible that all gifted children or people are from origin, visual, spatial thinkers? I work with gifted adults.
722
+ [3785.040 --> 3799.040] And I sometimes get people coming into my room and they seem ultimately on the rational side.
723
+ [3799.040 --> 3819.040] And I often can help them by discovering the visual spatial abilities.
724
+ [3819.040 --> 3833.040] Is that something known about that? I agree. I have to say yes, no, and yes. Many questions.
725
+ [3833.040 --> 3846.040] Were all of these children originally visual spatial? Yes. At some point in all of our development, we all were visual spatial.
726
+ [3846.040 --> 3859.040] And it is called eidetic memory, eidetic memory. And I probably misspelled that, didn't I?
727
+ [3859.040 --> 3874.040] Anyway, the eidetic memory is the early knowledge base that young children have until the age of around eight.
728
+ [3874.040 --> 3887.040] They learn visually, they take in information visually, they store it visually, they have almost a photographic memory.
729
+ [3887.040 --> 3903.040] But about nine years old, something happens. About nine years old, that left hemisphere really starts to kick in and take over.
730
+ [3903.040 --> 3923.040] And instead of the eidetic memory, you've got verbal mediation and categorical reasoning that supplants it. Eidetic memory goes only so far, developmentally, and then all of a sudden there's a switch.
731
+ [3923.040 --> 3952.040] And you start thinking with your left hemisphere, except the visual spatial learners. They don't stop. They don't make the switch. When everybody else becomes more auditory sequential, they don't give up that eidetic memory and start to use categorical, verbal, analytical reasoning in its place.
732
+ [3952.040 --> 3977.040] They keep that as their main way of knowing. But when you're gifted, something else happens. When you're gifted, you've got that left hemispheric, analytical, verbal connecting going on, the great ability to categorize.
733
+ [3978.040 --> 4002.040] And you also have the eidetic memory and the right hemisphere, and they work more complementarily. And the higher your intelligence, the higher your measured intelligence, the more likely you are to be visual spatial.
734
+ [4002.040 --> 4021.040] So when you do studies of the highly gifted, they lead with the visual spatial. And then they have no trouble going back and forth and back and forth because the brain is a very integrated organ, and it uses everything it has.
735
+ [4021.040 --> 4045.040] And so the fastest way to get to a solution is to take a picture of it in your mind, to see it, to see it all at once. And then if you have to explain it to somebody else, then you have to go back to that left hemisphere, and you have to do the translation and the integration.
736
+ [4045.040 --> 4058.040] So that the higher the intelligence, the more likely the person is to be both, but to have a visual spatial preference.
737
+ [4058.040 --> 4077.040] So I have two theories about your clients. You have both. You have both that left hemispheric facility, right, and you have the right hemispheric facility. And you have learned to integrate them.
738
+ [4077.040 --> 4090.040] My guess is that you attract people like yourself who are highly gifted, have both. And they're more likely to come to you, the highly gifted.
739
+ [4090.040 --> 4101.040] That's my guess. There's another hypothesis, and that is also, I've been playing with this in the last few days.
740
+ [4101.040 --> 4121.040] I think it's something about being Dutch. Serious. No, I'm serious, because I have noticed that the people I've had conversations with in the past few days think differently from Americans.
741
+ [4121.040 --> 4139.040] They think differently from people I've encountered in other countries. I've found a lot of people who think like you think in Denmark, but not a whole lot of people that I've talked with in other places, especially in the United States.
742
+ [4139.040 --> 4161.040] I have a feeling it has to do with being multilingual. There's something about being multilingual, which I think somehow integrate, I don't know, but I think it integrates the hemispheres in some way that us monolinguals don't get. We don't have that.
743
+ [4161.040 --> 4179.040] You are always interacting with people of different linguistic backgrounds. We're not. Those synapses aren't firing. Now, we don't have that experience, but they do in Denmark.
744
+ [4179.040 --> 4207.040] I think being surrounded by different linguistic bases somehow is causing some integration of the right and left hemisphere that's unusual. It's just a hypothesis. I don't know what I'm talking about. I'm just trying to make sense of either all of you are highly gifted or there's something about being Dutch.
745
+ [4209.040 --> 4223.040] I don't know. How are we doing time wise? We have time?
746
+ [4223.040 --> 4252.040] I'm sorry, I couldn't hear her. One question? Yes. You told us that at nine years old, something happens with the left hemisphere. Is that because of the way we teach children, or is it also with children who don't have any schooling?
747
+ [4252.040 --> 4262.040] Oh, that we switch to the left hemisphere. That's a natural part of child development. Your right hemisphere develops first.
748
+ [4262.040 --> 4280.040] Thank you. So the right hemisphere is interacting with the world for the very first eight years of life. And then developmentally, the left hemisphere really starts to kick in around nine.
749
+ [4280.040 --> 4296.040] Have you noticed changes in children around nine? Isn't there something different about nine, around nine? Yeah.
750
+ [4296.040 --> 4308.040] I wonder what would happen if we would have an education more directed towards the visual spatial learner? I wonder the same thing. Would we all become very gifted?
751
+ [4308.040 --> 4325.040] Maybe. If we integrate them. We're always hearing about how we only use a small percent of our intelligence. Maybe it's that right hemisphere that has all the gold in it that needs to be discovered and revealed and nurtured.
752
+ [4325.040 --> 4337.040] Maybe that's where all the rest of that brain power can come from. I'm guessing yes. I think I made a statement like that in Upside-Down Brilliance.
753
+ [4337.040 --> 4348.040] What would it be like if our whole school system, our whole structure of education worldwide became more visual spatial.
754
+ [4348.040 --> 4368.040] So that we have that left hemisphere analytical facility, but we also have the ability to visualize, the ability to synthesize, the ability to access our intuition, our intuitive knowing, our spirituality.
755
+ [4368.040 --> 4388.040] What if we had it all? What would life look like under those circumstances? It's a really good question. Is it?
756
+ [4388.040 --> 4400.040] Thank you. I would like to add something. In the way that you are an example of that, I missed one word and it is joy and humor.
757
+ [4400.040 --> 4409.040] Good. Very good. Very important part of being in the right hemisphere. You're absolutely right.
758
+ [4409.040 --> 4427.040] I see it in every word you're saying. So I would thank you for that. And at the same time I would like to ask every teacher to start tomorrow with joy and humor in your classes.
759
+ [4427.040 --> 4435.040] You're completely right on. There's no wisdom without humor.
760
+ [4435.040 --> 4447.040] The right hemisphere actually is the part of our brain that understands humor. The left hemisphere can understand puns.
761
+ [4447.040 --> 4464.040] But the right hemisphere is what gets most of the jokes. And the joy to feel joy. I don't know. I mean, I'm hearing different conflicting information about brain research that I don't understand.
762
+ [4464.040 --> 4479.040] But the book that suggests that you're right is the book by, now I'm blanking. It's My Stroke of Insight, by, who was it, who wrote that?
763
+ [4479.040 --> 4493.040] My Stroke of Insight. Jill Bolte Taylor. She says the same thing. She says if you want to know joy, you better step into your right hemisphere.
764
+ [4493.040 --> 4502.040] Because that's where it is. Yeah. And that book had a profound impact on me. It's a beautiful book.
765
+ [4502.040 --> 4517.040] If you haven't read it, what I'd recommend that you do is write down her name and look her up on her TED talk. It'll be the best 18 minutes you've spent in a long time.
766
+ [4517.040 --> 4539.040] It's Jill J. I. L. L. Bolte B-O-L-T-E Taylor T-A-Y-L-O-R. Jill Bolte Taylor. And you put that into YouTube or her TED talk will come up.
767
+ [4539.040 --> 4553.040] I must have watched it 40 times. And I get something different out of it every single time. She was, she's a brain researcher who experienced a massive left hemisphere stroke.
768
+ [4553.040 --> 4572.040] And then healed over a long period of time. And then was able to tell what happened to her. The spiritual awareness that came out of that loss of the left hemisphere completely.
769
+ [4572.040 --> 4593.040] It's so inspiring. And she does talk about peace and joy and humor. And then the other person who completely supports what you're saying is Robert Ornstein. And he wrote the book The Right Mind.
770
+ [4593.040 --> 4611.040] And he has throughout the book pictures that if you, he talks about sharing these pictures with individuals with left hemispheric strokes and individuals with right hemispheric strokes.
771
+ [4611.040 --> 4624.040] And the people who had right hemispheric strokes did not understand what was going on in the pictures. And they couldn't, they couldn't understand cartoons.
772
+ [4624.040 --> 4637.040] They couldn't understand a lot of visual humor. They missed it completely. Because that right hemisphere was so important to humor. Appreciation of humor.
773
+ [4637.040 --> 4650.040] Yeah. So we're going to be talking a little bit more about that in the next session. What time are we supposed to stop? Now. Thank you. You've been very kind.
774
+ [4650.040 --> 4655.040] Thank you.
transcript/allocentric_4F3xCBcsLFg.txt ADDED
@@ -0,0 +1,144 @@
1
+ [0.000 --> 11.840] Hello, my name is Ashley Sellers. I'm a speech language pathologist and the owner and
2
+ [11.840 --> 18.360] operator of Speech Language and Beyond. I'm coming to you today to introduce a video
3
+ [18.360 --> 26.600] of a session that I completed with a non-verbal child that is around three years old. I wanted
4
+ [26.600 --> 32.680] to do this video because I feel sometimes, as therapists, when we work with children that are non-verbal
5
+ [32.680 --> 38.800] or even with parents who have children that they're attempting to communicate with on a daily basis
6
+ [38.800 --> 44.480] that are non-verbal, we get so caught up in them using words that we're not paying attention to the
7
+ [44.480 --> 49.640] things that they're showing us that they do know or the ways that they are able to communicate.
8
+ [49.640 --> 56.320] Now mind you, I know the goal of the therapy is to lead them to the use of words, but we have
9
+ [56.320 --> 63.280] to be realistic in knowing that that may not come overnight, it may not even happen at all, or it may
10
+ [63.280 --> 69.160] not even happen when we expect it to. So I never promise parents that I can get their child to the
11
+ [69.160 --> 74.880] point that they are talking, but I can break down the ways that they are attempting to communicate
12
+ [74.880 --> 80.520] or the ways that they are building on their ability to be able to communicate. And I feel like a lot
13
+ [80.520 --> 86.280] of times we miss out on the things that they're showing us that they know and how they are attempting
14
+ [86.280 --> 91.960] to communicate with us, and when we miss out on those opportunities, we miss out on the things
15
+ [91.960 --> 97.720] that we can do to expand what they already know to get them closer to the point of being able to
16
+ [97.720 --> 104.960] use words as a way to communicate. So through this video, you're going to see the live recording, it was a
17
+ [104.960 --> 111.120] 20-minute session, really it was a 30-minute session, and I was only able to record 20 minutes of, but out of
18
+ [111.120 --> 117.120] that 20 minutes, I've really just broken it down to where it's pretty much like maybe six minutes of
19
+ [117.120 --> 122.520] the therapy where I can highlight to you when the child made eye contact, when they followed
20
+ [122.520 --> 127.960] through on a command, when they attempted to communicate. I just really want you to look at the
21
+ [127.960 --> 134.600] video, pay attention to the ways the child is showing us, look, I hear you, I understand you, and I just
22
+ [134.600 --> 140.080] need more time to get to the point that I can use words, but I am attempting to communicate with you in
23
+ [140.080 --> 145.440] other ways. I hope this video helps. I hope that it provides some strategies of some things that you
24
+ [145.440 --> 150.600] can do at home or within your therapy session, and also to encourage you to let you know that you're
25
+ [150.600 --> 157.800] doing more than what you think you are doing to help your child. The key is we cannot push them past
26
+ [157.800 --> 163.480] the point that they are ready to communicate. When they're ready to communicate with us, they will give
27
+ [163.480 --> 169.960] us what they have. It is our job whether they're using words or not at this particular point in time,
28
+ [169.960 --> 175.840] to stimulate their language, to build on their receptive vocabulary, to store the right information
29
+ [175.840 --> 181.920] within their long term and short term memory so that when they are to the point that they're ready to
30
+ [181.920 --> 189.240] give us that language, we've already demonstrated it to them within the appropriate context in order for them to
31
+ [189.240 --> 197.040] give it back. All we have to do is be patient, be prayerful, and always put forth a lot of effort in our daily
32
+ [197.040 --> 203.920] routines to make sure that we're giving them numerous language opportunities. So I hope that this video helps.
33
+ [203.920 --> 210.680] If you have any questions, please feel free to contact me. My contact information will be listed below in the
34
+ [210.680 --> 213.120] description box. So thank you, and I hope that you're
35
+ [273.120 --> 275.120] doing well.
36
+ [275.120 --> 277.120] Video cam, let me show you another one.
37
+ [277.120 --> 280.120] I guess this one. What's this?
38
+ [280.120 --> 284.120] Look, camera, take a picture.
39
+ [284.120 --> 288.120] Cheese! You try. Camera.
40
+ [288.120 --> 292.120] Can you take a picture? Camera.
41
+ [292.120 --> 298.120] You use it to take a picture. You hold it up.
42
+ [298.120 --> 300.120] Cheese!
43
+ [300.120 --> 304.120] Now you can take the picture. Camera.
44
+ [304.120 --> 307.120] Camera. So look, here's the other one.
45
+ [307.120 --> 311.120] Video. Look at the video cam.
46
+ [311.120 --> 314.120] Look, so you have the video cam.
47
+ [314.120 --> 317.120] You use it to record so you can see.
48
+ [317.120 --> 320.120] You have the camera. Cheese!
49
+ [320.120 --> 325.120] Take pictures. See?
50
+ [325.120 --> 328.120] You put it to your eye. Take my picture.
51
+ [328.120 --> 331.120] Here. Can you take my picture?
52
+ [331.120 --> 335.120] Can you take my picture?
53
+ [335.120 --> 339.120] Hold it up. See? Let me take a picture. Say cheese!
54
+ [339.120 --> 343.120] Cheese! Camera.
55
+ [343.120 --> 347.120] So look, camera.
56
+ [347.120 --> 349.120] Video recorder.
57
+ [349.120 --> 352.120] Look, what else do I have now? I have a tool.
58
+ [352.120 --> 355.120] Look at this. What is that?
59
+ [355.120 --> 358.120] You see the screwdriver?
60
+ [358.120 --> 361.120] screwdriver.
61
+ [361.120 --> 366.120] And then here is a screw.
62
+ [366.120 --> 369.120] screwdriver and screw.
63
+ [369.120 --> 372.120] We're fixing something. Can you try?
64
+ [372.120 --> 375.120] screwdriver.
65
+ [375.120 --> 381.120] Good job. Put it ahead.
66
+ [381.120 --> 384.120] Alright, here we go.
67
+ [384.120 --> 387.120] What's this?
68
+ [387.120 --> 391.120] Look, eyes.
69
+ [391.120 --> 393.120] Eyes.
70
+ [393.120 --> 396.120] And then eyes. Those are your eyes.
71
+ [396.120 --> 402.120] Look, eyes. Put the eyes on for me. Where they go?
72
+ [402.120 --> 405.120] Put eyes here.
73
+ [405.120 --> 408.120] Eyes.
74
+ [408.120 --> 411.120] Look, can you put them right there? Eyes.
75
+ [411.120 --> 414.120] Hold it.
76
+ [414.120 --> 419.120] Look, eyes. Put them right here, Danyang.
77
+ [419.120 --> 422.120] Very good. Eyes.
78
+ [422.120 --> 426.120] You see with your eyes. Danyang, where are your eyes?
79
+ [426.120 --> 431.120] Eyes. Good job. Eyes.
80
+ [431.120 --> 434.120] Danyang, where's your nose?
81
+ [434.120 --> 436.120] Where's nose?
82
+ [436.120 --> 438.120] Look.
83
+ [438.120 --> 440.120] Nose.
84
+ [440.120 --> 442.120] Nose.
85
+ [442.120 --> 444.120] Nose.
86
+ [444.120 --> 446.120] Nose.
87
+ [446.120 --> 448.120] Where are you going to put his nose?
88
+ [448.120 --> 452.120] Where are you going to put his nose?
89
+ [452.120 --> 456.120] Where's nose?
90
+ [456.120 --> 464.120] There it is. Good job. Nose.
91
+ [464.120 --> 466.120] Nose. That's right.
92
+ [466.120 --> 467.120] Look.
93
+ [467.120 --> 468.120] Jar.
94
+ [468.120 --> 470.120] Jar.
95
+ [470.120 --> 473.120] Look what we're going to put in this jar.
96
+ [473.120 --> 476.120] Danyang, sit up. What is this?
97
+ [476.120 --> 478.120] What is this?
98
+ [478.120 --> 480.120] What is it?
99
+ [480.120 --> 482.120] Cookie?
100
+ [482.120 --> 485.120] Cookie?
101
+ [485.120 --> 487.120] Cookie?
102
+ [487.120 --> 489.120] What are you doing to cookie?
103
+ [489.120 --> 491.120] Look, eat.
104
+ [491.120 --> 493.120] Cookie.
105
+ [493.120 --> 496.120] Eat. Cookie.
106
+ [496.120 --> 500.120] Now can we put it in the jar? Cookie?
107
+ [500.120 --> 502.120] Where is it?
108
+ [502.120 --> 504.120] Look. Phone.
109
+ [504.120 --> 505.120] Hello.
110
+ [505.120 --> 510.120] May I speak to Danyang? Can you talk on the phone?
111
+ [510.120 --> 513.120] Look. Look at me push the number.
112
+ [513.120 --> 518.120] Two, two, nine, three, four, seven, five, eight, seven, five.
113
+ [518.120 --> 521.120] Wing, wing, wing, wing, wing, wing.
114
+ [521.120 --> 523.120] Hello.
115
+ [523.120 --> 527.120] Good job. Hello.
116
+ [527.120 --> 529.120] May I speak to Danyang?
117
+ [529.120 --> 532.120] Look. Bye bye.
118
+ [532.120 --> 535.120] Bye bye. Phone.
119
+ [535.120 --> 539.120] Yep. Push the number. That's how you dial the number.
120
+ [539.120 --> 545.120] Can you call? Hello. Hello.
121
+ [545.120 --> 549.120] Can you talk on the phone? Hello.
122
+ [549.120 --> 552.120] Hello, Miss Ashley.
123
+ [552.120 --> 555.120] Look. Bye bye. Hang it up.
124
+ [555.120 --> 558.120] Bye bye.
125
+ [558.120 --> 561.120] Very good. So we got phone.
126
+ [561.120 --> 564.120] Phone.
127
+ [564.120 --> 567.120] Car. Drive the car.
128
+ [567.120 --> 569.120] Wing, wing, wing.
129
+ [569.120 --> 570.120] Let me stay here.
130
+ [570.120 --> 572.120] And truck.
131
+ [572.120 --> 573.120] Ro, ro, ro.
132
+ [573.120 --> 574.120] Truck.
133
+ [574.120 --> 576.120] All right. Listen.
134
+ [576.120 --> 578.120] Truck.
135
+ [578.120 --> 581.120] Phone.
136
+ [581.120 --> 583.120] Car.
137
+ [583.120 --> 584.120] Danyang.
138
+ [584.120 --> 587.120] Give me car.
139
+ [587.120 --> 588.120] Put the car in my hand.
140
+ [588.120 --> 589.120] Look.
141
+ [589.120 --> 591.120] Give me car. That's your mouth. Good job.
142
+ [591.120 --> 592.120] Give me car.
143
+ [592.120 --> 594.120] Mouth.
144
+ [594.120 --> 596.120] Mouth. Where's nose?
transcript/allocentric_4_5dayHDdBk.txt ADDED
@@ -0,0 +1,49 @@
 
1
+ [0.000 --> 10.100] Communication is an essential part of our daily lives.
2
+ [10.100 --> 15.700] It is how we express ourselves, share our thoughts and ideas, and connect with others.
3
+ [15.700 --> 20.740] In this video, you will learn about the two main types of communication.
4
+ [20.740 --> 25.980] Verbal and non-verbal communication.
5
+ [25.980 --> 31.100] Verbal communication is the use of speech or spoken words to exchange information,
6
+ [31.100 --> 34.060] emotions, and thoughts.
7
+ [34.060 --> 39.900] Non-verbal communication, on the other hand, is the use of body language, gestures, facial
8
+ [39.900 --> 44.260] expressions, and tone of voice to convey a message.
9
+ [44.260 --> 49.340] It is a powerful tool that can be used to communicate feelings, emotions, and attitudes
10
+ [49.340 --> 55.100] without the use of words.
11
+ [55.100 --> 59.620] Verbal and non-verbal communication are both important, and they often work together
12
+ [59.620 --> 62.980] to create a complete message.
13
+ [62.980 --> 68.380] Non-verbal cues can help us understand the tone and intention behind someone's words.
14
+ [68.380 --> 73.620] At the same time, verbal communication provides context and clarity to the message being
15
+ [73.620 --> 76.620] conveyed.
16
+ [76.620 --> 82.620] Verbal communication is essential in negotiations, where clear and explicit language is critical.
17
+ [82.620 --> 87.380] While non-verbal communication is essential in interpersonal communication where emotional
18
+ [87.380 --> 90.860] cues play an important role.
19
+ [90.860 --> 98.460] Here are some examples of verbal communication: face-to-face conversation, giving a speech,
20
+ [98.460 --> 105.020] telephonic conversation, sending voice note, taking interviews, group discussion in the
21
+ [105.020 --> 107.860] workplace.
22
+ [107.860 --> 111.380] Here are some examples of non-verbal communication.
23
+ [111.380 --> 113.460] Nodding head in approval.
24
+ [113.460 --> 117.740] Showing a thumbs up, sign to express positive feelings.
25
+ [117.740 --> 119.060] Smiling at someone.
26
+ [119.060 --> 122.660] A confident handshake is a welcoming gesture.
27
+ [122.660 --> 124.860] Giving a hug to show affection.
28
+ [124.860 --> 130.980] Talking in a raised voice while in anger.
29
+ [130.980 --> 136.740] Non-verbal communication can be more effective than verbal communication in some situations.
30
+ [136.740 --> 142.100] For example, when someone says something but their body language suggests something different,
31
+ [142.100 --> 147.180] we are more likely to believe their non-verbal cues over their words.
32
+ [147.180 --> 152.020] Non-verbal communication is also essential in situations where words are not enough to convey
33
+ [152.020 --> 153.460] a message.
34
+ [153.460 --> 158.540] Such as when comforting a loved one, expressing empathy or showing respect.
35
+ [158.540 --> 163.500] On the other hand, verbal communication is essential in negotiations, where clear and
36
+ [163.500 --> 166.580] explicit language is necessary.
37
+ [166.580 --> 171.700] But it is more easily influenced by external factors such as language barriers, background
38
+ [171.700 --> 178.180] noise, and distractions.
39
+ [178.180 --> 183.620] In today's world, we are increasingly relying on technology for communication.
40
+ [183.620 --> 187.860] And this has made it more challenging to convey non-verbal cues.
41
+ [187.860 --> 193.260] When communicating through text, for example, we lose the tone of voice and facial expressions
42
+ [193.260 --> 195.900] that help us understand the message.
43
+ [195.900 --> 200.580] It is therefore essential to be aware of the limitations of each type of communication
44
+ [200.580 --> 204.140] and use them appropriately.
45
+ [204.140 --> 208.820] Understanding the nuances of each type of communication can help us become better communicators
46
+ [208.820 --> 213.060] and build stronger relationships with others.
47
+ [213.060 --> 215.460] Thanks for watching this video.
48
+ [215.460 --> 219.820] If you find this video informative, please like the video and don't forget to subscribe
49
+ [219.820 --> 221.500] to EducationLeves Extra.
transcript/allocentric_4nAcRL-6ujk.txt ADDED
@@ -0,0 +1,389 @@
 
1
+ [0.000 --> 5.000] So now we're getting into where we started this whole journey last year.
2
+ [5.000 --> 7.000] It's about emotions in the face.
3
+ [7.000 --> 12.000] When we start looking at profiling people,
4
+ [12.000 --> 16.000] like I said before, I do it in a very systematic way.
5
+ [16.000 --> 19.000] The way I was taught, we learned all about Jing first,
6
+ [19.000 --> 22.000] then we learned all about Qi, which is personality and temperament,
7
+ [22.000 --> 25.000] then we learned about the Shen, which is the sexuality,
8
+ [25.000 --> 29.000] the romance, psychopathy, things like that.
9
+ [29.000 --> 34.000] There's a lot of things that are, it's all important,
10
+ [34.000 --> 37.000] but depending on where you're going to be applying or information,
11
+ [37.000 --> 40.000] some things are more important than others.
12
+ [40.000 --> 45.000] So the first thing we want to look at, we began with, was separating into Yin and Yang.
13
+ [45.000 --> 49.000] So we have this divide across, and we have Yin and Yang to the right,
14
+ [49.000 --> 52.000] Yin and Yang to the left.
15
+ [52.000 --> 57.000] So when we start to look at people, the first thing we look at is,
16
+ [57.000 --> 64.000] we try to look at the overall features that jump out at us.
17
+ [64.000 --> 70.000] We want to look at their inner nature versus their outer nature.
18
+ [70.000 --> 73.000] So we look at people through our right side,
19
+ [73.000 --> 78.000] we look at their right side, we will see the face they want us to see.
20
+ [78.000 --> 81.000] We look at them through our left eye, at their left side,
21
+ [81.000 --> 85.000] we'll see the face they keep behind closed doors.
22
+ [85.000 --> 88.000] But what do we look for?
23
+ [88.000 --> 91.000] We look for emotions.
24
+ [91.000 --> 94.000] We look for differences in the symmetry.
25
+ [94.000 --> 100.000] So as we look, if we have lines above our eye in this area,
26
+ [100.000 --> 105.000] this indicates somebody who has a healthy degree of skepticism.
27
+ [105.000 --> 108.000] These are people who don't take anything at face value.
28
+ [108.000 --> 112.000] They have to see it, they have to experience it, and an incident area.
29
+ [112.000 --> 115.000] So if you see someone like that, you say, you know, you don't really,
30
+ [115.000 --> 117.000] you might say something, when you talk about traits, by the way,
31
+ [117.000 --> 119.000] you don't say you're a skeptic.
32
+ [119.000 --> 122.000] That's not...
33
+ [122.000 --> 125.000] No, I'm not.
34
+ [125.000 --> 128.000] You might say something like, you know, you have these lines right here,
35
+ [128.000 --> 130.000] the indicator, a healthy dose of skepticism.
36
+ [130.000 --> 133.000] So you don't always take things at face value, do you?
37
+ [133.000 --> 135.000] No, that's...
38
+ [135.000 --> 138.000] You want to describe the trait not the label.
39
+ [138.000 --> 141.000] Does that make sense?
40
+ [141.000 --> 146.000] Because there's not a lot of people who like to admit they're stubborn.
41
+ [146.000 --> 150.000] Most stubborn people, I know, think they're pretty easy going.
42
+ [150.000 --> 154.000] And if you try to convince them otherwise...
43
+ [154.000 --> 157.000] Right?
44
+ [157.000 --> 160.000] So skepticism lines, very, very...
45
+ [160.000 --> 162.000] By the way, when would it be good to have somebody...
46
+ [162.000 --> 166.000] What kind of job would we want to put somebody in a position of skepticism?
47
+ [166.000 --> 167.000] An auditor?
48
+ [167.000 --> 169.000] An auditor might be good, right?
49
+ [169.000 --> 170.000] Quality control.
50
+ [170.000 --> 171.000] Quality control?
51
+ [171.000 --> 173.000] What's that?
52
+ [173.000 --> 174.000] Police.
53
+ [174.000 --> 175.000] Absolutely.
54
+ [175.000 --> 176.000] Right?
55
+ [176.000 --> 179.000] So, you know, and if you see if you work in with a cop and all of a sudden you see this stuff,
56
+ [179.000 --> 182.000] you know he's probably pretty good at his job.
57
+ [182.000 --> 183.000] Right?
58
+ [183.000 --> 184.000] Skepticism.
59
+ [184.000 --> 186.000] Next line, yes.
60
+ [186.000 --> 191.000] Can you say healthy skepticism?
61
+ [191.000 --> 195.000] Is there a point where you can tell me it's too much?
62
+ [195.000 --> 197.000] It's very deep.
63
+ [197.000 --> 198.000] Right?
64
+ [198.000 --> 205.000] Remember, the intensity of the trait is measured by the depth and the breadth of the marking.
65
+ [205.000 --> 213.000] Just like in handwriting analysis, the intensity of emotional expression is related to the handwriting pressure and the slant.
66
+ [213.000 --> 224.000] We can look at the intensity or the depth of a feeling or emotion or an issue or a trait by how deeply it's marked that area.
67
+ [224.000 --> 230.000] Maybe I'm missing something here, but for the skepticism on there it seems to be on the left side.
68
+ [230.000 --> 232.000] You keep pointing to your right side.
69
+ [232.000 --> 233.000] Is this a mirror?
70
+ [233.000 --> 234.000] This is a symmetrical.
71
+ [234.000 --> 235.000] Okay.
72
+ [235.000 --> 236.000] Yeah.
73
+ [236.000 --> 237.000] Right?
74
+ [237.000 --> 244.000] What we're going to do is we're going to look at, first we look at the big picture, which is where are they marked?
75
+ [244.000 --> 245.000] Right?
76
+ [245.000 --> 250.000] And then we look at Yin, right versus left, or internature versus outer nature.
77
+ [250.000 --> 251.000] Right?
78
+ [251.000 --> 255.000] So I can look at you and I can see how you're marked in general.
79
+ [255.000 --> 262.000] If I want to go deeper, now I split exterior persona, internal persona.
80
+ [262.000 --> 270.000] And I can, I can, because I may notice that on your external persona, your lips turn up.
81
+ [270.000 --> 274.000] On your internal persona, they turn down.
82
+ [274.000 --> 282.000] This could show somebody who tends to give a positive face to the outer world, but inside they're not very happy.
83
+ [282.000 --> 283.000] They're very disappointed.
84
+ [283.000 --> 284.000] Right?
85
+ [284.000 --> 285.000] Yes.
86
+ [285.000 --> 289.000] Would that show up as a smirk where you had an angle with it?
87
+ [289.000 --> 291.000] You know, where you got one side up?
88
+ [291.000 --> 294.000] I'm going to show you an understanding question.
89
+ [294.000 --> 297.000] Hold on a second, guys.
90
+ [297.000 --> 299.000] Let me have a, let's finish Daniel's question.
91
+ [299.000 --> 301.000] Restate your question.
92
+ [302.000 --> 307.000] Would that show up in a smirk where the person has a skew?
93
+ [307.000 --> 313.000] If there's a, if there's a defined asymmetry there, and there's no obvious reason for it, then yeah, it could be.
94
+ [313.000 --> 315.000] Well, I'm talking about it.
95
+ [315.000 --> 316.000] In expression, not necessarily.
96
+ [316.000 --> 318.000] Well, again, if we, we're looking, we're not looking at expressions.
97
+ [318.000 --> 324.000] We're looking at an expressionless face, more or less, and seeing what the wrinkles show us.
98
+ [324.000 --> 328.000] If we take, when you take your facing app, you're going to do this in a minute.
99
+ [328.000 --> 331.000] First, you're going to do just a generic reading of people.
100
+ [331.000 --> 334.000] Then you're going to take your facing app, and you're going to take picture of yourself.
101
+ [334.000 --> 335.000] And you're going to look.
102
+ [335.000 --> 339.000] You're going to look at how your face combines with itself.
103
+ [339.000 --> 342.000] And these things, you'll, you'll see a different person.
104
+ [342.000 --> 348.000] You'll see when all of the right side is together, you'll see what your public persona looks like.
105
+ [348.000 --> 353.000] When all of the left sides are together, you'll see what your inner nature looks like.
106
+ [353.000 --> 361.000] And you can do this reading based on what you have already, and get a better, a better snapshot.
107
+ [361.000 --> 366.000] When we're doing this in person, if I'm looking at Kim, if I look at her, I'm right-eyed dominant,
108
+ [366.000 --> 369.000] then I'm going to see her public persona.
109
+ [369.000 --> 373.000] Because I'm right-eyed dominant, so I focus on that information.
110
+ [373.000 --> 378.000] If I want to see her inner persona, then I look at her through my left eye, or I can cover my left eye,
111
+ [378.000 --> 381.000] and I'll see a different side of her.
112
+ [381.000 --> 384.000] Follow me.
113
+ [384.000 --> 390.000] Again, these can be very subtle, and sometimes they can be very, very obvious.
114
+ [390.000 --> 392.000] They can be very, very obvious.
115
+ [392.000 --> 396.000] Robert, you had a question?
116
+ [396.000 --> 401.000] I was just looking to build on what Daniel said, because one of Echman's, in micro expressions,
117
+ [401.000 --> 407.000] is if you turned down one corner of your mouth, that's skepticism, and I think that's kind of what you were getting at.
118
+ [407.000 --> 413.000] And so that could be, if you're skeptical enough, on your private side, then you could eventually get in.
119
+ [413.000 --> 418.000] Again, that particular characteristic doesn't have that definition in this system.
120
+ [418.000 --> 420.000] And it's a micro expression.
121
+ [420.000 --> 424.000] So we're not looking, like I said before, we're not looking at micro expressions.
122
+ [424.000 --> 428.000] We're looking at the consequences of a lifetime of expressions.
123
+ [428.000 --> 434.000] How the constant use of that trait, or that expression, or feeling of that emotion,
124
+ [434.000 --> 438.000] marks the face and the musculatures of the face.
125
+ [438.000 --> 441.000] Sort of like the canvases today.
126
+ [441.000 --> 445.000] Yeah. Anybody else?
127
+ [445.000 --> 448.000] Okay.
128
+ [448.000 --> 452.000] Let me make this picture bigger.
129
+ [452.000 --> 457.000] These are ones that we want to spend, by the way, when we look at people,
130
+ [457.000 --> 461.000] we spend most of our time looking here,
131
+ [461.000 --> 463.000] just so you know.
132
+ [463.000 --> 472.000] If you want to be systematic about it, I would divide this into three sections, but we'll cover that in a minute.
133
+ [472.000 --> 478.000] So when we look at the eyes now, we're going to do this in a counterclockwise rotation.
134
+ [478.000 --> 483.000] Looking at the sides of the eyes here, this is a joy.
135
+ [483.000 --> 490.000] When the lines go up, not past the eyebrow.
136
+ [490.000 --> 494.000] When the eyes are when they have a little crow's feet, we're looking at somebody who's experiencing a lot,
137
+ [494.000 --> 496.000] who has experienced a lot of joy.
138
+ [496.000 --> 506.000] Most of you have some degree of joy markings.
139
+ [506.000 --> 510.000] If you...
140
+ [510.000 --> 512.000] If the line... See, the trait...
141
+ [512.000 --> 516.000] I'm going to talk about it right here, but I'm going to diagram it over here.
142
+ [516.000 --> 524.000] The line travels up past the eyebrow.
143
+ [524.000 --> 527.000] You now have mania.
144
+ [527.000 --> 530.000] Excessive joy becomes mania.
145
+ [530.000 --> 534.000] This is your bipolar, your manic depressives.
146
+ [534.000 --> 539.000] These are people who are just up and up in the morning tweeting.
147
+ [539.000 --> 543.000] Right?
148
+ [543.000 --> 549.000] So extreme joy, mania.
149
+ [549.000 --> 556.000] When the lines come down this way, you're seeing sadness lines.
150
+ [556.000 --> 561.000] We've all had a healthy degrees of sadness in our life.
151
+ [561.000 --> 572.000] When they start to travel down the cheeks through the lung area, now you're dealing with sorrow.
152
+ [572.000 --> 575.000] These people may start to develop lung problems.
153
+ [575.000 --> 582.000] In fact, what you'll find out in cases like emphysema, COPD, asthma, allergies,
154
+ [582.000 --> 587.000] as you unpack them, usually a lot of times grief and anger come up.
155
+ [587.000 --> 591.000] Grief goes to the lungs, which is the next trait.
156
+ [591.000 --> 600.000] When those lines extend beyond here, now you're looking at grief.
157
+ [600.000 --> 601.000] So those are the three degrees.
158
+ [601.000 --> 611.000] You have sadness, sorrow, grief.
159
+ [611.000 --> 613.000] Humor lines, they're not real.
160
+ [613.000 --> 615.000] They don't show real well here.
161
+ [615.000 --> 622.000] Humor lines, if you were to look, let me do this.
162
+ [622.000 --> 624.000] Can you guys see that back there?
163
+ [624.000 --> 627.000] Okay, I made that a little bigger.
164
+ [627.000 --> 638.000] Humor lines are usually seen in the lips themselves.
165
+ [638.000 --> 641.000] They're usually seen with a little line down the center.
166
+ [641.000 --> 645.000] Sometimes you can have lines like this.
167
+ [645.000 --> 650.000] So if you see lines in the lips, they're usually some, especially a big one in the middle.
168
+ [650.000 --> 654.000] That's usually the indication that they have a pretty good sense of humor.
169
+ [654.000 --> 661.000] Some of you know people like this, right?
170
+ [661.000 --> 664.000] Am I not being bled in on the joke?
171
+ [664.000 --> 672.000] Who's the one that needs to have stick?
172
+ [672.000 --> 679.000] So humor here.
173
+ [679.000 --> 690.000] Okay, going from the center down, people were asking about this.
174
+ [690.000 --> 696.000] Two lines indicates impatience.
175
+ [696.000 --> 699.000] They're at the stoplight, the stoplight's only 30 seconds away from changing.
176
+ [699.000 --> 705.000] They're already gunning the engine.
177
+ [705.000 --> 712.000] When you see three lines, this is usually a bit of a gift of somebody who has managed,
178
+ [712.000 --> 718.000] has learned how to manage their temper, has managed, learned how to manage their anger.
179
+ [718.000 --> 724.000] So you might say something like, you know what, there was a time in your life when you really had a bad temper
180
+ [724.000 --> 726.000] when you really got impatient with people.
181
+ [726.000 --> 729.000] And over time, you seem to have learned to really manage it well.
182
+ [729.000 --> 732.000] You manage it much better than you used to.
183
+ [732.000 --> 735.000] Yeah.
184
+ [735.000 --> 738.000] Yes.
185
+ [738.000 --> 746.000] I really do need a mic runner for this.
186
+ [746.000 --> 757.000] Some of the ones you're going to see a lot in therapy are lost love lines.
187
+ [757.000 --> 760.000] Oh, actually disempowerment and lost love.
188
+ [760.000 --> 773.000] Lost love lines start at the inner canthus and they descend down, sometimes merging with or parallel to the sorrow,
189
+ [773.000 --> 777.000] the grief lines or the purpose lines.
190
+ [777.000 --> 787.000] Now, if you notice, lost love and the sadness, if you extend those lines out, they all end up at the same spot.
191
+ [787.000 --> 792.000] And don't they seem related and see there's an orderliness to it.
192
+ [792.000 --> 797.000] There's an organization to this that kind of floats to the surface.
193
+ [797.000 --> 800.000] Lost love does not necessarily mean romantic love.
194
+ [800.000 --> 807.000] Lost love means there was some part of your life that was extremely important to you.
195
+ [807.000 --> 814.000] That was a very big piece of who you were or are as a person that you enjoyed.
196
+ [814.000 --> 823.000] And at some point in your childhood or your teens or whatever, something happened and that part is no longer there.
197
+ [823.000 --> 831.000] What I mean is, it's not that it's no longer there; it's that your ability to do that is gone.
198
+ [831.000 --> 843.000] Sometimes athletes who are very, very strong, very, very talented, they have an injury and they can no longer play.
199
+ [843.000 --> 845.000] You don't get one of these.
200
+ [845.000 --> 854.000] Sometimes you'll meet somebody, you have a lifestyle that you love and things you enjoy doing, you meet somebody that you fall in love with.
201
+ [854.000 --> 861.000] That person doesn't like or approve of those things, you stop doing them.
202
+ [861.000 --> 864.000] It could also be a person.
203
+ [864.000 --> 867.000] It's something that was a big part of who you were as a human being.
204
+ [867.000 --> 871.000] That was in many cases part of your path.
205
+ [871.000 --> 873.000] You've lost it in some way.
206
+ [873.000 --> 875.000] Your face will mark.
207
+ [875.000 --> 878.000] Okay?
208
+ [878.000 --> 883.000] Question?
209
+ [883.000 --> 899.000] Do you find that the lines that come down coincide with the blockages for not following their road or their golden path in light?
210
+ [899.000 --> 901.000] Can you restate the question on that side?
211
+ [901.000 --> 905.000] So if they have a lot of sadness, grief and sorrow, that's creeping in.
212
+ [905.000 --> 910.000] Do you often find that there's a blockage where they're not following their path in life?
213
+ [910.000 --> 913.000] This could be caused by this.
214
+ [913.000 --> 916.000] No, in fact they're usually very different.
215
+ [916.000 --> 930.000] But they can be related in the sense that the person that caused them to not be able to do this is draining them,
216
+ [930.000 --> 933.000] forcing them to nurture and take care of them.
217
+ [933.000 --> 935.000] So there's separate but related.
218
+ [935.000 --> 937.000] Does that make sense?
219
+ [937.000 --> 941.000] Okay. Because this is a lot of what happens in bad relationships.
220
+ [941.000 --> 946.000] You get somebody who's a control freak who is very suspicious, very paranoid.
221
+ [946.000 --> 950.000] Somebody who's very demanding.
222
+ [950.000 --> 954.000] Many times what will happen is they'll start to slowly cut you off from your friends.
223
+ [954.000 --> 958.000] They won't let you do things with other people.
224
+ [958.000 --> 964.000] They'll start to demand all of your attention and all of your resources.
225
+ [964.000 --> 969.000] So now you'll develop lost love lines because you can no longer do the things you love to do.
226
+ [969.000 --> 976.000] And you'll start to develop bitterness and over nurturing lines because now all of your energy is being sucked by this person.
227
+ [976.000 --> 978.000] Does that make sense?
228
+ [978.000 --> 980.000] Okay, someone had a question.
229
+ [980.000 --> 989.000] I just, while you're on the eyes, I notice a lot that people kind of have like almost like a checkered lines under their eyes or the puffy bags.
230
+ [989.000 --> 990.000] Just one of those.
231
+ [990.000 --> 997.000] Well, the area under the eyes relates to the kidney and fluid management.
232
+ [997.000 --> 1005.000] So many times what you've got here is either tired kidneys, especially if they're dark or purplish.
233
+ [1005.000 --> 1012.000] Many times when you have these puffy bags under the eyes, these are tears we haven't finished shedding yet.
234
+ [1012.000 --> 1015.000] There's tears we haven't finished shedding.
235
+ [1015.000 --> 1021.000] When you have that crisscross pattern in an area like that, remember what we talked about what a dry riverbed looks like?
236
+ [1021.000 --> 1025.000] Those are areas where you've got zinc depletion.
237
+ [1025.000 --> 1029.000] You remember when we talked about what a dry riverbed looks like, how you get those cracks?
238
+ [1029.000 --> 1031.000] He was asking about these crisscross lines.
239
+ [1031.000 --> 1036.000] This is usually an indication that there's a zinc, there's a deficiency or a weakness of the zinc in that area.
240
+ [1036.000 --> 1041.000] It hasn't progressed to a big line because it's not trauma-based, it's just overuse.
241
+ [1041.000 --> 1043.000] Does that make sense?
242
+ [1043.000 --> 1047.000] This is kidneys, this is lung.
243
+ [1047.000 --> 1054.000] Okay, so if they have a lot of those wrinkles there, ask if they have lung problems or allergies or stuff like that.
244
+ [1054.000 --> 1061.000] Questions? We're good so far?
245
+ [1061.000 --> 1066.000] You go with this?
246
+ [1074.000 --> 1077.000] These are big two.
247
+ [1077.000 --> 1083.000] These are called disempowerment lines.
248
+ [1083.000 --> 1093.000] I don't call them disempowerment lines but that's what Lillian calls them because I'm much more interested in describing what this means.
249
+ [1093.000 --> 1096.000] Can you see that?
250
+ [1096.000 --> 1109.000] When you have lines that extend down almost in, I had one lady, looked like somebody took an exacto knife and just etched lines down the side of her nose from the inner canthus down.
251
+ [1109.000 --> 1119.000] In cases like this, in this behavior it's very, very similar to the suspended needle where somebody expressed anger.
252
+ [1119.000 --> 1126.000] The pushback, the ramifications of that anger, were so strong that they just held themselves in check.
253
+ [1126.000 --> 1136.000] It's not exactly the same though because with a disempowerment line, at some point in your life or at some point in the person's life, they expressed their feelings.
254
+ [1136.000 --> 1150.000] They expressed their opinion and the pushback, the negative pushback, the negative response was so overwhelming they felt the need to appease, to placate.
255
+ [1150.000 --> 1152.000] So I call them placating lines.
256
+ [1152.000 --> 1157.000] These are people who do whatever they do just to keep the peace.
257
+ [1157.000 --> 1160.000] They don't necessarily, they're not just simply choking back their anger.
258
+ [1160.000 --> 1165.000] They're trying to make amends for having a thought, for having an opinion.
259
+ [1165.000 --> 1168.000] So they spend their life appeasing people.
260
+ [1168.000 --> 1171.000] You'll see this a lot.
261
+ [1171.000 --> 1175.000] I see it a lot, especially where abuse is concerned.
262
+ [1175.000 --> 1178.000] Especially where abuse is concerned, molestations.
263
+ [1178.000 --> 1184.000] Molestation, not quite so much, but where I see spousal issues a lot.
264
+ [1184.000 --> 1189.000] People who always feel like they're apologizing for being alive.
265
+ [1190.000 --> 1192.000] You'll see this.
266
+ [1192.000 --> 1194.000] Right?
267
+ [1194.000 --> 1198.000] And if you've got them, it doesn't mean you're a bad person, it doesn't mean you're a wuss.
268
+ [1198.000 --> 1202.000] It means you did the best you could with the information you had.
269
+ [1202.000 --> 1204.000] None of these traits are bad.
270
+ [1204.000 --> 1208.000] They're just like the check engine light on the dashboard.
271
+ [1208.000 --> 1210.000] They really are.
272
+ [1210.000 --> 1213.000] Right? When you're driving down the road, the check engine light goes off.
273
+ [1213.000 --> 1216.000] Oh my god, I got to get the light fixed.
274
+ [1217.000 --> 1218.000] You don't do that.
275
+ [1218.000 --> 1221.000] Oh, oil needs changing.
276
+ [1221.000 --> 1223.000] Engine needs servicing.
277
+ [1223.000 --> 1225.000] Gotta put coolant in the radiator.
278
+ [1225.000 --> 1227.000] That's all these facial things mean.
279
+ [1227.000 --> 1230.000] They're the light on the, they're the check engine lights on the dashboard.
280
+ [1233.000 --> 1235.000] Yes sir.
281
+ [1235.000 --> 1237.000] It may be.
282
+ [1237.000 --> 1239.000] Is there light on?
283
+ [1239.000 --> 1242.000] Nope, it's a little bit longer.
284
+ [1243.000 --> 1250.000] And maybe an assumption, but when you work with children or adolescents, I'm assuming you see these less.
285
+ [1250.000 --> 1258.000] Yes, in fact, Lillian taught me that you shouldn't read children because they're very impressionable.
286
+ [1258.000 --> 1261.000] They're very impressionable.
287
+ [1261.000 --> 1266.000] And so the things you say can become prophecies for them.
288
+ [1266.000 --> 1272.000] So I was taught: encourage children, read adults.
289
+ [1272.000 --> 1279.000] But you can look at children's growing facial structures and kind of see things evolving.
290
+ [1279.000 --> 1280.000] Right?
291
+ [1280.000 --> 1282.000] But again, remember, they're still changing.
292
+ [1282.000 --> 1283.000] They're not stuck.
293
+ [1283.000 --> 1284.000] They're going to constantly grow.
294
+ [1284.000 --> 1289.000] So as you work on your own stuff, especially if you're working with, you know, if you have children,
295
+ [1289.000 --> 1294.000] the fastest way to fix your kids is to fix you.
296
+ [1294.000 --> 1297.000] And that's what the Chinese say.
297
+ [1297.000 --> 1306.000] The Chinese tell us that the Jing markings, the things that you bring from lifetime to lifetime, are always present.
298
+ [1306.000 --> 1312.000] So much, like it goes like nine generations back, nine generations forward.
299
+ [1312.000 --> 1317.000] If you fix something in the present moment, it fixes it nine generations back.
300
+ [1317.000 --> 1321.000] It's like that entanglement theory.
301
+ [1322.000 --> 1325.000] It'll fix it seven generations forward as well.
302
+ [1325.000 --> 1332.000] So as you resolve your stuff, you may find your kids moving through similar issues faster and easier,
303
+ [1332.000 --> 1336.000] or not even coming up at all.
304
+ [1336.000 --> 1339.000] Yeah.
305
+ [1339.000 --> 1349.000] I just had a revelation when you were saying that, because I was trying to figure out if my daughter was just maturing or just changing rapidly.
306
+ [1350.000 --> 1355.000] As I've been rapidly changing, I've been noticing her communications become more open.
307
+ [1355.000 --> 1358.000] She abandoned coloring her hair.
308
+ [1358.000 --> 1362.000] I guess that was focused on the past.
309
+ [1362.000 --> 1363.000] Yeah.
310
+ [1363.000 --> 1365.000] And the science is there now too.
311
+ [1365.000 --> 1367.000] They did it with insects.
312
+ [1367.000 --> 1373.000] They found out that if they caused a traumatic accident, a traumatic event for one generation, I think it was fruit flies.
313
+ [1373.000 --> 1375.000] I could be wrong.
314
+ [1375.000 --> 1381.000] It changed their genetic makeup, and their offspring had it too.
315
+ [1381.000 --> 1384.000] They've seen the same thing with Holocaust survivors.
316
+ [1384.000 --> 1392.000] Where the grandchildren of Holocaust survivors carry the genetic markers from the time the grandparents spent in the camps.
317
+ [1392.000 --> 1395.000] They just didn't realize it can go the other way, which is what the Chinese are saying.
318
+ [1395.000 --> 1397.000] It doesn't go just forward.
319
+ [1397.000 --> 1398.000] It goes backwards.
320
+ [1398.000 --> 1406.000] Now, I've had direct experience with genetic memory because I've actually worked with people who've taken on the memories of their transplanted organs.
321
+ [1406.000 --> 1413.000] I had to do parts therapy and regression on the organs.
322
+ [1413.000 --> 1414.000] I get the cool stuff.
323
+ [1414.000 --> 1416.000] I don't get smoking cessation or weight loss.
324
+ [1416.000 --> 1420.000] I get the interesting stuff.
325
+ [1420.000 --> 1423.000] Let's see where I'm at here.
326
+ [1423.000 --> 1434.000] If we go a little further down, and we talked about humor lines already, this is another one you're going to see a lot of.
327
+ [1434.000 --> 1440.000] This manifests as little dimpling on the chin.
328
+ [1440.000 --> 1443.000] They're not necessarily horizontal lines.
329
+ [1443.000 --> 1447.000] They're just like that little dimpling feeling.
330
+ [1447.000 --> 1448.000] You see that a lot.
331
+ [1448.000 --> 1452.000] You got someone who's got a lot of repressed fear.
332
+ [1452.000 --> 1456.000] They're usually very fearful people.
333
+ [1456.000 --> 1460.000] Now, that's modulated depending on how strong the chin is.
334
+ [1460.000 --> 1463.000] If it's a very big and jutting chin.
335
+ [1463.000 --> 1469.000] You're not going to see that much fear directly because they usually have a lot of stubbornness and willfulness.
336
+ [1469.000 --> 1476.000] But when you see a lot of lines and dimpling down in this area, and I see dimpling more than anything else.
337
+ [1476.000 --> 1481.000] Fear.
338
+ [1481.000 --> 1487.000] You guys are all standing here checking your face out.
339
+ [1487.000 --> 1490.000] Yes.
340
+ [1490.000 --> 1494.000] Could be.
341
+ [1494.000 --> 1503.000] My experience has been that this kind of fear is almost always early childhood.
342
+ [1503.000 --> 1506.000] I don't see a lot of PTSD marking this way.
343
+ [1506.000 --> 1512.000] Unless I'm thinking when I see PTSD, I'm thinking more wartime trauma.
344
+ [1512.000 --> 1517.000] But you can have PTSD from many different forms of influence.
345
+ [1517.000 --> 1519.000] But I usually see this in clinic.
346
+ [1519.000 --> 1521.000] Your experience may be different.
347
+ [1521.000 --> 1525.000] Clinically, when I see this, it's usually childhood stuff.
348
+ [1525.000 --> 1527.000] Lifetime stuff.
349
+ [1527.000 --> 1528.000] Does that make sense?
350
+ [1528.000 --> 1529.000] I don't know if it makes sense.
351
+ [1529.000 --> 1533.000] That's just what I've observed.
352
+ [1534.000 --> 1540.000] This is where I love my touchscreen.
353
+ [1540.000 --> 1545.000] Have we covered enough traits for you to start playing a little bit?
354
+ [1545.000 --> 1550.000] Or do you want to go through the whole thing and then play?
355
+ [1550.000 --> 1553.000] You guys want to read each other.
356
+ [1553.000 --> 1556.000] Here's what I want you to do.
357
+ [1556.000 --> 1558.000] You want to break up into groups of three.
358
+ [1558.000 --> 1562.000] We're going to take 45 minutes for this.
359
+ [1562.000 --> 1568.000] That's 10 to 15 minutes for each person.
360
+ [1568.000 --> 1569.000] You're going to connect.
361
+ [1569.000 --> 1573.000] You're going to just kind of get in rapport with them a little bit.
362
+ [1573.000 --> 1574.000] Small talk.
363
+ [1574.000 --> 1577.000] You don't have to talk about anything in particular.
364
+ [1577.000 --> 1582.000] And what you want to do is systematically, you want to start at the top of the head
365
+ [1582.000 --> 1588.000] and work clockwise.
366
+ [1588.000 --> 1591.000] That's if you want to be linear and logical about it.
367
+ [1591.000 --> 1595.000] If you want to do it the old school way, you just kind of connect with them
368
+ [1595.000 --> 1599.000] and notice whatever feature calls your attention first.
369
+ [1599.000 --> 1600.000] And talk about that feature.
370
+ [1600.000 --> 1602.000] And talk about those things.
371
+ [1602.000 --> 1611.000] This will make even more sense when we start putting in the headlines in there.
372
+ [1611.000 --> 1616.000] This is one of the oldest pictures of face reading.
373
+ [1616.000 --> 1619.000] This is like several thousand years old or so.
374
+ [1619.000 --> 1622.000] So this is not new.
375
+ [1622.000 --> 1626.000] So the first thing I want you to do, I think, you know, don't worry about reading too much.
376
+ [1626.000 --> 1632.000] So much as seeing the traits and noticing how people are marking.
377
+ [1632.000 --> 1636.000] If you want to, you can inquire about certain things.
378
+ [1636.000 --> 1640.000] Pay attention to what happens to their emotions when you do this.
379
+ [1640.000 --> 1644.000] But now it's just, it's just, work with as many different people as possible.
380
+ [1644.000 --> 1647.000] And just look.
381
+ [1647.000 --> 1652.000] Right? If you want to take out your face app and start looking at things in terms of,
382
+ [1652.000 --> 1656.000] well, what are they on privately versus what are they doing publicly?
383
+ [1656.000 --> 1657.000] You can do that.
384
+ [1657.000 --> 1661.000] But again, I just want you to kind of enjoy reading what, you know, playing with what you see.
385
+ [1661.000 --> 1665.000] And seeing if you can isolate and remember what each of the different things are.
386
+ [1665.000 --> 1669.000] So that makes sense. It's just kind of a little get-to-know-faces kind of a thing.
387
+ [1669.000 --> 1671.000] So let's break up into groups of three.
388
+ [1671.000 --> 1674.000] We'll come back and finish the facial map.
389
+ [1674.000 --> 1677.000] And we'll start talking about ears.
transcript/allocentric_4nCR3yBBCHE.txt ADDED
@@ -0,0 +1,2 @@
1
+ [0.000 --> 30.000] 1.0.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1.1
2
+ [30.000 --> 55.960] 1.0..........................................................................................................................................................................................................................
transcript/allocentric_7Dga-UqdBR8.txt ADDED
@@ -0,0 +1,180 @@
1
+ [0.000 --> 7.120] Hello everybody, my name is Dan, I'm an animator, and this is New Frame Plus, a series about video game animation.
2
+ [7.120 --> 13.520] I have a question, how do you communicate character and personality from a first-person view?
3
+ [13.520 --> 18.680] Conveying character through performance is one of the animator's most important jobs.
4
+ [18.680 --> 27.320] Animating proper physicality, applying the 12 principles, reinforcing gameplay, those things are all important and can be quite difficult to achieve,
5
+ [27.320 --> 29.600] but they are ultimately fundamentals.
6
+ [29.600 --> 35.640] On top of all of that, the character animator's job is to create appealing character performances,
7
+ [35.640 --> 39.480] to visually reinforce who these characters are through movement.
8
+ [39.480 --> 44.240] But how do you do that from a first-person view when you've got nothing but hands and a gun?
9
+ [44.240 --> 45.200] Back in the fight!
10
+ [45.200 --> 46.440] A lot of games don't.
11
+ [46.440 --> 53.240] Most shooters' first-person animation is strictly functional, intended to clearly convey what your character is doing,
12
+ [53.240 --> 56.400] but not exactly telling us anything about them.
13
+ [56.440 --> 57.800] Alright, I'm shooting.
14
+ [57.800 --> 59.040] Now I'm running.
15
+ [59.040 --> 61.200] Now I'm reloading, and so on.
16
+ [61.200 --> 65.600] Which is not to say that these animations don't tell us some things about the character.
17
+ [65.600 --> 72.200] In most any military shooter, the functional gun-handling animations reinforce the capability of our player character,
18
+ [72.200 --> 76.120] their familiarity with their weapon and their abilities as a soldier.
19
+ [76.120 --> 83.000] Mirror's Edge uses Faith's arms and legs to help the player understand what Faith is doing as she navigates the world,
20
+ [83.000 --> 89.400] which helps to show the physicality of the movement, and makes the player feel even more cool as they run and jump around.
21
+ [89.400 --> 95.800] But it doesn't necessarily tell us much about Faith herself, other than the fact that she's a very skilled free runner.
22
+ [95.800 --> 101.160] But there are games out there that manage to communicate a lot of character from a first-person view,
23
+ [101.160 --> 104.240] and one of those games is Blizzard's Overwatch.
24
+ [104.240 --> 109.480] In terms of game animation, Overwatch is a masterclass in character appeal.
25
+ [109.480 --> 112.360] This game is all about its characters.
26
+ [113.160 --> 122.160] Every single member of this cast is unique and just loaded with personality, more than in almost any game I have ever seen.
27
+ [122.160 --> 127.480] And almost all of that in-game personality is conveyed through animation.
28
+ [127.480 --> 136.400] There's very little dialogue or plot in the game itself, so animation and character design do the bulk of the heavy lifting in defining who these people are.
29
+ [136.400 --> 142.480] The way they carry themselves, their victory poses, their emotes, their play of the game glamour shots.
30
+ [142.480 --> 145.600] It all paints a picture of who these people are.
31
+ [145.600 --> 153.520] But this game is a first-person shooter, which means that you're going to spend the vast majority of the time seeing nothing but their hands and a weapon.
32
+ [153.520 --> 160.080] So how have Blizzard's animators managed to continue expressing personality using only these elements?
33
+ [160.080 --> 167.200] Ultimately, the answer is that they gave every single character their own completely unique set of first-person animations,
34
+ [167.200 --> 171.680] and built in lots of contrast between how each character goes about things.
35
+ [171.680 --> 174.000] But let's get into specifics.
36
+ [174.000 --> 178.560] First, and this is more of a character design point, but I think it's worth bringing up.
37
+ [178.560 --> 186.640] No matter who you're playing, the animators have made sure that the characters' hands and or weapon are almost always on screen.
38
+ [186.640 --> 191.440] This not only allows each weapon's unique design to show you who you're playing at a glance,
39
+ [191.440 --> 199.040] but also showcases each weapon's unique animation, which also just happens to reflect the personality of the weapon's owner.
40
+ [199.040 --> 203.760] Soldier 76's Assault Rifle is a finely tuned precision machine.
41
+ [203.760 --> 209.520] Everything on this weapon moves quickly and sharply, snapping perfectly into place, like a salute.
42
+ [209.520 --> 213.280] There is not a loose or flimsy part to be found on this weapon.
43
+ [213.280 --> 218.640] Like its owner, this weapon has a few signs of wear, but it is a well-maintained instrument.
44
+ [218.640 --> 224.480] Contrast that with Junkrat's launcher, which he clearly built himself from scrap and spare parts.
45
+ [224.480 --> 228.880] This weapon is rickety and crude, held together by duct tape and a wish,
46
+ [228.880 --> 234.160] which you can easily see by the way so many of the pieces vibrate and loosely shake.
47
+ [234.160 --> 238.640] Like none of these pieces were designed to fit together, but it gets the job done.
48
+ [238.640 --> 243.520] It perfectly reflects Junkrat's slightly unhinged, twitchy enthusiasm.
49
+ [243.520 --> 248.160] Lucio's Sonic Amplifier pulses rhythmically like a pounding subwoofer.
50
+ [248.160 --> 251.680] Bastion's machine gun constantly shudders just a little bit,
51
+ [251.680 --> 254.400] like an older, cruder generation of machine.
52
+ [254.400 --> 260.400] And a few of its moving parts, like the hinge on this site, seem to have gotten just a little looser with age.
53
+ [260.400 --> 265.520] Just having each weapon visible on screen reinforces personality in so many,
54
+ [265.520 --> 267.440] tiny, subtle little ways.
55
+ [267.440 --> 269.440] Same goes for each character's idle.
56
+ [269.440 --> 272.720] Even when the player is just standing there, not doing anything,
57
+ [272.720 --> 279.120] each character's idle animations and tiny little fidgets are unique and informed by their personality.
58
+ [279.120 --> 282.960] Junkrat is twitchy and antsy, eager to cause some mayhem.
59
+ [282.960 --> 288.560] Genji is very contained and controlled, prepared to strike when just the right moment comes.
60
+ [288.560 --> 292.080] Symmetra's hand movement is delicate and flowing like a dancer's,
61
+ [292.080 --> 294.880] especially the fingers on her free hand to the left.
62
+ [294.880 --> 300.080] McCree's grip and pistol aim are steady, while Mei's aim isn't quite as trained.
63
+ [300.080 --> 303.280] Her weapon bobs and drifts on screen much more.
64
+ [303.280 --> 306.160] D.Va constantly adjusts her grip on her controls,
65
+ [306.160 --> 310.160] and you can see lots of sharp, tiny nudges of the sticks as she sits there.
66
+ [310.160 --> 313.040] Little twitches like her arms are tensed with focus.
67
+ [313.040 --> 315.360] She is prepared to react in an instant.
68
+ [315.360 --> 320.400] And Zenyatta just gently hovers, his clasped hands drifting up and down.
69
+ [320.400 --> 324.000] And the best part is, unlike almost all of the other characters,
70
+ [324.000 --> 328.560] he doesn't even have a fidget. He is meditative and completely serene.
71
+ [329.120 --> 334.560] Characters even breathe differently. Watch the soft rising and lowering of the weapons.
72
+ [334.560 --> 340.240] Junkrat's breaths are quick and excited, the end of his launcher rises and falls pretty rapidly.
73
+ [340.240 --> 344.000] Roadhog's breathing is totally relaxed because he doesn't care.
74
+ [344.000 --> 348.320] Hanzo's breathing is controlled, there is very little drift on that bow.
75
+ [348.320 --> 352.000] And with D.Va you see almost no drift at all, which makes sense because her
76
+ [352.000 --> 355.520] mech controls are locked in place. Her breathing wouldn't affect them.
77
+ [355.520 --> 359.360] But outside the cockpit, the mech's guns do sway gently,
78
+ [359.360 --> 362.400] as if the machine itself has a little bit of life to it.
79
+ [362.400 --> 366.960] You can read a lot into this kind of subtlety that may or may not have been intended to convey
80
+ [366.960 --> 372.720] specific things, but the point remains, they are all different, and they all feel pretty appropriate.
81
+ [372.720 --> 377.280] But okay enough about stillness. Let's move around, because every Overwatch character has
82
+ [377.280 --> 381.600] their own distinct walk, with their own distinct rhythm and quality of movement.
83
+ [381.600 --> 386.160] Even though you can't see their feet, you can feel how they run by watching the movement of
84
+ [386.160 --> 390.800] their gun and hands, which is reinforced by some very subtle camera movement.
85
+ [390.800 --> 394.080] Reinhardt stomps around in his heavy armor like a Jaeger.
86
+ [394.080 --> 397.280] There's large vertical movement punctuating each stride,
87
+ [397.280 --> 401.520] and the wide horizontal sway on his hammer sells the shoulder rotation,
88
+ [401.520 --> 407.360] and the twist up his torso as he walks. Genji, on the other hand, runs with rapid quiet steps,
89
+ [407.360 --> 412.080] light on his feet like a ninja. There's very little vertical punctuation to his run,
90
+ [412.080 --> 417.520] he almost coasts along. Zenyatta literally coasts, so you feel no footsteps at all,
91
+ [417.520 --> 422.880] although you do see a slight increased bobbing in his hands, just a hint of increased effort for
92
+ [422.880 --> 428.240] motion, not to mention a dash of contrast just to make moving feel different from stillness.
93
+ [428.240 --> 432.960] Lucio skates around the field rather than running, and you can feel that difference when controlling
94
+ [433.360 --> 438.640] him. His free arm swings back and forth like a skater, and his gunhand very subtly pulls back
95
+ [438.640 --> 443.840] and forth in rhythm with it. The punctuated movement on him is much more horizontal than vertical,
96
+ [443.840 --> 448.960] because, you know, skates. D.Va's hands don't show a whole lot of vertical step movement either,
97
+ [448.960 --> 454.320] just a quick sharp bob, but we do see a much larger degree of movement on the guns outside,
98
+ [454.320 --> 459.040] suggesting that the cockpit keeps pretty steady even when the mech is stomping around.
99
+ [459.040 --> 464.080] Junkrat even runs with a slight gallop on his peg leg. See, watch his hands and his gun.
100
+ [467.920 --> 473.760] With just a few subtle variations in arm animation and camera bob, you can infer quite different
101
+ [473.760 --> 479.200] styles of walking and mobility on each character. Some characters even have completely unique
102
+ [479.200 --> 484.080] navigation options, completely custom animation work to accommodate those characters'
103
+ [484.080 --> 489.200] individual ways of getting around. Lucio can skate on walls, so they've built a system just
104
+ [489.200 --> 494.080] for him that has him holding out his free hand to brush against the surface he's riding on.
105
+ [494.080 --> 499.920] Winston, being a gorilla, uses his free front left hand to run, so that front arm plays into
106
+ [499.920 --> 506.880] his run cycle. Hanzo and Genji can climb walls. Soldier 76 has that classic call of duty sprint.
107
+ [506.880 --> 512.320] D.Va can rocket her mech forward. Widowmaker can grapple hook places, and a lot of these are
108
+ [512.320 --> 517.280] strictly functional in terms of animation, but they serve to create further contrast between
109
+ [517.280 --> 521.840] these characters, to make them each feel all the more different to inhabit as a player.
110
+ [521.840 --> 526.320] And it kinda helps you as a player get into that semi-role-playing mindset,
111
+ [526.320 --> 531.280] where you're just in tune with who that character is, where you feel like them when stepping into
112
+ [531.280 --> 537.280] their shoes. Like, yeah, I'm Lucio, I'm riding on walls. Hang on, let me just ninja up this here
113
+ [537.920 --> 542.080] no big deal. Can't catch me, can't catch me, can't catch me, whoops, I'm over here now.
114
+ [545.920 --> 551.600] And oh man, let's talk about reloads. Those are a great opportunity for a flair of personality.
115
+ [551.600 --> 559.520] Soldier 76's reload is quick, trained, and efficient. McCree does a combination of classic cowboy
116
+ [559.520 --> 564.320] revolver moves, a spin to empty the cylinder, and then a quick flick of the wrist to snap it back
117
+ [564.320 --> 569.840] into place. Reaper literally throws his guns away and pulls out new ones, because he saw it in
118
+ [569.840 --> 575.280] the matrix and thinks it makes him look cool. Tracer does a quick stylish spin. Bastion's gun
119
+ [575.280 --> 580.640] actually opens up to reload internally, and look at how all of these parts feel sort of loose and
120
+ [580.640 --> 586.800] wobbly, like he's an old printer. Junkrat just slaps the old mag out of its slot, jams a new one in
121
+ [586.800 --> 592.400] and yanks the bolt. He's really not careful with that weapon. May, on the other hand,
122
+ [592.400 --> 599.040] daintily twists this little knob, all set. Roadhog just crams a bunch of loose bolts and
123
+ [599.040 --> 605.440] springs and crap into his gun, because again, he do not care. Zenyatta doesn't really reload so much
124
+ [605.440 --> 612.960] as recenter himself. And I can't even tell for sure what Torb is doing, but it looks neat.
125
+ [615.280 --> 620.480] Or what about their hello emotes? McCree does this casual salute slash finger gun.
126
+ [620.960 --> 628.400] Reaper gives him the old claw. Pharah formally salutes, just like her mom does.
127
+ [629.520 --> 635.520] Sombra does this, which is just so perfect. And Bastion does a little robotic hand wave,
128
+ [635.520 --> 639.280] or if he's in turret form, he waves with his little repair arm instead.
129
+ [641.360 --> 645.360] Saying hello is one of the only emotes done from the first person in the game,
130
+ [645.360 --> 649.600] and the animators do not miss this great opportunity for some easy personality.
131
+ [651.040 --> 655.280] Ah, man, I could go on talking about all the awesome little touches in this game's first
132
+ [655.280 --> 660.560] person animation forever. The beautiful snap to Zenyatta's attacks, which strike this perfect
133
+ [660.560 --> 664.560] balance between conveying mechanical power and organic looseness.
134
+ [667.120 --> 672.320] The way that every one of Symmetra's graceful hand movements is informed by a combination of
135
+ [672.320 --> 677.200] finger-tutting and traditional Indian dances. Ooh, or the overlap and the follow-through that
136
+ [677.200 --> 681.600] happens on their weapons when you swing the camera around. Have you noticed this? Look at how the
137
+ [681.600 --> 687.200] gun drags slightly behind as the camera turns, and then overshoots as the camera stops and then
138
+ [687.200 --> 691.760] settles back into position. It's a neat little touch, right? Well, these are different for every
139
+ [691.760 --> 696.960] character, too. Sombra's holding her top heavy submachine gun one handed, so there's a bit more
140
+ [696.960 --> 703.040] wobble as she tries to keep it steady. McCree's revolver is lighter and he's a sharp marksman,
141
+ [703.040 --> 708.240] so he actually leads the turn a little bit with the barrel of his gun, so his aim will get to where
142
+ [708.240 --> 714.480] he's turning before his body does. Hanzo's bow drags behind, making it feel like he turns his body
143
+ [714.480 --> 720.240] first and then the bow follows after. And Winston's weapon rotates, but the rotation axis happens
144
+ [720.240 --> 725.440] closer to the top of the weapon because that's where he holds it. But perhaps the most important
145
+ [725.440 --> 730.960] thing to note here in all of this is that none of this emphasizing of character comes at the cost
146
+ [730.960 --> 736.880] of gameplay function. Each character may feel unique, but they all handle well, and they all feel
147
+ [736.880 --> 742.480] great to control. None of the personality touches are overly distracting or prioritized over the
148
+ [742.480 --> 748.640] immediacy of Overwatch's fast paced gameplay. Like just as a really extreme example, take McCree's
149
+ [748.640 --> 755.440] combat roll. Now, this ability technically involves a quick forward roll, and the animators could
150
+ [755.520 --> 761.600] absolutely have had the camera do a full 360 degree rotation to mimic the motion of rolling forward
151
+ [761.600 --> 766.640] in first person. But, thank goodness, they ultimately decided to have the camera do this little dip
152
+ [766.640 --> 771.840] instead to suggest the feeling of rolling forward without completely disorienting the player.
153
+ [772.400 --> 776.640] And I don't want to make it sound like Blizzard is the only studio out there doing this.
154
+ [776.640 --> 782.000] Other first-person games have succeeded here too. Team Fortress 2's animations don't express
155
+ [782.000 --> 786.640] nearly as much character, but it did do a lot of the same things. Each character does have their
156
+ [786.640 --> 792.000] own first-person animation set, and some of them do get some fun little flourishes. Games like
157
+ [792.000 --> 797.600] Titanfall 2 have mostly functional gameplay animations, but they use the hands and the arms in first
158
+ [797.600 --> 801.920] person during story moments to give a better sense of physicality to your camera view,
159
+ [801.920 --> 804.960] and to give your player character some acting moments at key points.
160
+ [805.920 --> 814.000] Firewatch is a game practically built entirely around first-person interactions like these.
161
+ [821.520 --> 826.080] And then there are games like Doom, which not only use the gameplay animations to reinforce
162
+ [826.080 --> 831.360] the tone of the game and the impatient brutality of Doom Guy, but they also include some bonus
163
+ [831.360 --> 833.600] moments of first-person animated comedy.
164
+ [837.840 --> 842.640] I guess the point I'm ultimately trying to make here is that first-person animations can still
165
+ [842.640 --> 848.480] be a rich opportunity for performance. As animators, just like with every other animation we make for
166
+ [848.480 --> 853.760] a game character, we have to always be mindful of who that character is as we work.
167
+ [853.760 --> 858.640] Now most of us may not have Blizzard's budget or production flexibility, but this sort of
168
+ [858.640 --> 863.440] characterization is absolutely achievable without those luxuries. I mean, you're going to be
169
+ [863.440 --> 866.960] animating all of these moves anyway. Why not give them an extra few minutes of thought?
170
+ [867.760 --> 872.560] So whenever you're tasked with animating a character, whether that animation is meant to be seen
171
+ [872.560 --> 878.720] close up, or far away, or even from a first-person perspective, look for every opportunity to let
172
+ [878.720 --> 883.760] character inform that performance. It only takes a little bit more time and thought to do,
173
+ [883.760 --> 889.200] and it can have a huge impact. Prioritizing gameplay doesn't have to come at the expense of
174
+ [889.200 --> 896.000] character. Thank you all for watching, and special thanks to Matt Bain, who gave a great talk about
175
+ [896.000 --> 900.960] Overwatch's first-person animation stuff at GDC. It's available for free in the GDC vault,
176
+ [900.960 --> 905.280] or you can go check it out on their YouTube channel. And if you happen to be in the mood for more
177
+ [905.280 --> 910.560] Overwatch animation talk from me, I did make another episode earlier about Tracer and pose design,
178
+ [910.640 --> 915.040] which you can check out here. And consider subscribing if you haven't yet, because I have
179
+ [915.040 --> 919.680] got more new frame-plus episodes in the works, but they're kind of big, so just hang in there,
180
+ [919.680 --> 923.920] I promise I'll try to not keep you waiting too long. Until then...
transcript/allocentric_8O3FC86WjWU.txt ADDED
@@ -0,0 +1,432 @@
1
+ [0.000 --> 2.000] Good morning.
2
+ [4.000 --> 6.000] Say hello.
3
+ [6.000 --> 8.000] What are you eating?
4
+ [8.000 --> 10.000] What are you eating?
5
+ [12.000 --> 14.000] Hey guys, welcome to another video. I think you're really going to like this one.
6
+ [14.000 --> 20.000] We're going to go over a bunch of different ways that Abigail, my non-verbal autistic daughter,
7
+ [20.000 --> 24.000] communicates and stay tuned to the end because we are going to be teaching her
8
+ [24.000 --> 28.000] a new word in sign language that she can use on the internet.
9
+ [28.000 --> 30.000] Abigail has a lot of different forms of communication.
10
+ [30.000 --> 34.000] She uses an iPad for communication which you'll see in a minute.
11
+ [34.000 --> 38.000] She does some modified sign language so that's different from ASL, American Sign Language.
12
+ [38.000 --> 42.000] And then she uses body language quite a bit.
13
+ [42.000 --> 46.000] A lot of her language that she uses like this body language
14
+ [46.000 --> 50.000] is just something that we learned from being around her all the time.
15
+ [50.000 --> 54.000] Of course, when she's happy, when she's sad, when she's upset about something,
16
+ [54.000 --> 58.000] she doesn't really need to communicate emotions.
17
+ [58.000 --> 62.000] And she doesn't really have the capacity to understand the need to communicate emotions,
18
+ [62.000 --> 64.000] how she's feeling.
19
+ [64.000 --> 70.000] I don't know that she necessarily understands emotions, as in being able to give them a definition.
20
+ [70.000 --> 76.000] She doesn't really have the capacity to understand the need to communicate emotions,
21
+ [76.000 --> 80.000] emotions as it may be able to give them a definition.
22
+ [80.000 --> 82.000] Like this is how sad feelings are.
23
+ [82.000 --> 86.000] This is how happy feelings, most of her communication,
24
+ [86.000 --> 92.000] is done through, or done for, wants and needs.
25
+ [92.000 --> 96.000] This one for example, she is signing for bathroom a lot
26
+ [96.000 --> 100.000] and she's not necessarily asking for bathroom.
27
+ [100.000 --> 102.000] She does scroll through her signs.
28
+ [102.000 --> 104.000] She had actually just gone to the bathroom.
29
+ [104.000 --> 108.000] That's more of like attention seeking.
30
+ [108.000 --> 112.000] So we really have to read what's going on around us at the time
31
+ [112.000 --> 118.000] to fully understand what she's communicating and what she's asking for.
32
+ [118.000 --> 122.000] I think it's really important to understand that nonverbal
33
+ [122.000 --> 126.000] is not necessarily a trait of autism.
34
+ [126.000 --> 132.000] Autism is an individual diagnosis, but there are comorbidities that go along
35
+ [132.000 --> 136.000] with autism, not all the time, sometimes.
36
+ [136.000 --> 138.000] Sometimes they go hand in hand, sometimes they're, you know,
37
+ [138.000 --> 140.000] some are more frequent than others.
38
+ [140.000 --> 144.000] Abigail also has a pica diagnosis, which means she will mouth inedible objects.
39
+ [144.000 --> 148.000] She did a lot more of that when she was younger.
40
+ [148.000 --> 152.000] And you often see that with autism, but it does not, it's not part of autism.
41
+ [152.000 --> 156.000] That makes sense. Same thing with her, with her communication,
42
+ [156.000 --> 160.000] or lack there, you know, lack there of a verbal communication.
43
+ [160.000 --> 166.000] She can't talk and that could be a diagnosis of a praxia
44
+ [166.000 --> 170.000] or it could be a diagnosis of anything else,
45
+ [170.000 --> 172.000] but that's not necessarily autism.
46
+ [172.000 --> 177.000] She also has sensory processing disorder that often times goes hand in hand with autism,
47
+ [177.000 --> 182.000] but there are children and adults that have sensory processing disorder
48
+ [182.000 --> 184.000] and don't have a diagnosis of autism.
49
+ [184.000 --> 188.000] So her behaviors are also communication.
50
+ [188.000 --> 192.000] When she ran, getting drinks, because she was excited, she was doing a good job.
51
+ [192.000 --> 196.000] And that's a behavior that is also a communication.
52
+ [196.000 --> 200.000] Abigail uses an iPad to communicate,
53
+ [200.000 --> 204.000] and we are pushing more and more use of that iPad.
54
+ [204.000 --> 208.000] She'll combine sign language with her iPad, quite a bit,
55
+ [208.000 --> 211.000] but the cool thing about the iPad is that it's universal.
56
+ [211.000 --> 215.000] Anybody can understand it, because it gives her a voice,
57
+ [215.000 --> 219.000] just a natural voice that she can use in the everyday world,
58
+ [219.000 --> 224.000] she doesn't just have to rely on her parents or caregivers to understand what she's saying
59
+ [224.000 --> 227.000] with her modified sign language or body language.
60
+ [227.000 --> 231.000] That stuff works at home and at therapy in at school,
61
+ [231.000 --> 235.000] but the iPad will give her much more access to the world.
62
+ [235.000 --> 239.000] So we really work on that in speech therapy
63
+ [239.000 --> 244.000] and just throughout the day, at home, getting her to use that more and more.
64
+ [244.000 --> 247.000] And here are some of Abigail's modified signs.
65
+ [247.000 --> 250.000] We'll just run through them real quick.
66
+ [250.000 --> 253.000] But if you've been watching our videos for a while,
67
+ [253.000 --> 256.000] you know that we always have an app for the beep,
68
+ [256.000 --> 260.000] and her one of her favorite signs that Abby does is that the app for the beep on this,
69
+ [260.000 --> 262.000] something that Summer taught her.
70
+ [262.000 --> 264.000] It's pretty cute.
71
+ [264.000 --> 266.000] Show me golf card.
72
+ [266.000 --> 268.000] A pie.
73
+ [268.000 --> 269.000] A pie.
74
+ [269.000 --> 270.000] A pie.
75
+ [270.000 --> 272.000] Golf card.
76
+ [272.000 --> 273.000] Like this.
77
+ [273.000 --> 274.000] Hey.
78
+ [274.000 --> 275.000] A.
79
+ [275.000 --> 282.000] You show me cereal.
80
+ [282.000 --> 283.000] Serial.
81
+ [283.000 --> 284.000] Yeah.
82
+ [284.000 --> 285.000] Show me cracker.
83
+ [285.000 --> 287.000] That's chip.
84
+ [287.000 --> 288.000] Show me cracker.
85
+ [288.000 --> 289.000] Yeah.
86
+ [289.000 --> 291.000] Can you show me cookie?
87
+ [291.000 --> 292.000] Cookie.
88
+ [292.000 --> 295.000] What else do we know?
89
+ [295.000 --> 296.000] All done.
90
+ [296.000 --> 297.000] Show me all done.
91
+ [297.000 --> 298.000] Show me all done.
92
+ [298.000 --> 300.000] All done.
93
+ [300.000 --> 301.000] All done.
94
+ [301.000 --> 302.000] Bath.
95
+ [302.000 --> 303.000] Hey.
96
+ [303.000 --> 306.000] Can you show me bath?
97
+ [306.000 --> 307.000] Bath.
98
+ [307.000 --> 308.000] Yeah.
99
+ [308.000 --> 310.000] What do you say?
100
+ [310.000 --> 311.000] Show me help.
101
+ [311.000 --> 312.000] Do you need help?
102
+ [312.000 --> 313.000] That's music.
103
+ [313.000 --> 314.000] That's...
104
+ [314.000 --> 315.000] Okay, stop.
105
+ [315.000 --> 316.000] Hands up.
106
+ [316.000 --> 317.000] Show me help.
107
+ [317.000 --> 318.000] Show me open.
108
+ [318.000 --> 319.000] Open.
109
+ [319.000 --> 320.000] Show me break.
110
+ [320.000 --> 321.000] Break.
111
+ [321.000 --> 322.000] Break.
112
+ [322.000 --> 323.000] Wait.
113
+ [323.000 --> 324.000] Snack.
114
+ [324.000 --> 330.000] I do want cookies under a two-valid book.
115
+ [330.000 --> 331.000] Which one?
116
+ [331.000 --> 333.000] Show me in your iPad.
117
+ [333.000 --> 334.000] Nature-bound box.
118
+ [334.000 --> 335.000] Okay.
119
+ [335.000 --> 336.000] There you go.
120
+ [336.000 --> 337.000] Okay.
121
+ [337.000 --> 338.000] Okay.
122
+ [338.000 --> 339.000] Okay.
123
+ [339.000 --> 340.000] Okay.
124
+ [340.000 --> 341.000] Okay.
125
+ [341.000 --> 342.000] Okay.
126
+ [342.000 --> 343.000] Okay.
127
+ [343.000 --> 344.000] Okay.
128
+ [344.000 --> 345.000] Okay.
129
+ [345.000 --> 346.000] Okay.
130
+ [346.000 --> 347.000] Okay.
131
+ [347.000 --> 348.000] Okay.
132
+ [348.000 --> 349.000] Okay.
133
+ [349.000 --> 350.000] Okay.
134
+ [350.000 --> 351.000] Okay.
135
+ [351.000 --> 352.000] Okay.
136
+ [353.000 --> 355.000] Okay.
137
+ [355.000 --> 357.000] Okay.
138
+ [357.000 --> 358.000] Okay.
139
+ [358.000 --> 378.920] Here is youronto uk muscle and other exercises that allow this activity to take out the
140
+ [378.920 --> 379.920] dive in.
141
+ [379.920 --> 380.920] Right.
142
+ [380.920 --> 384.200] She'll watch toy unboxing, openings, whatever.
143
+ [384.200 --> 386.160] But she says YouTube kids on there,
144
+ [386.160 --> 387.320] she navigates to that pretty well.
145
+ [387.320 --> 389.760] She has Spotify with a playlist.
146
+ [389.760 --> 392.800] I'll have to post one of her playlists sometime.
147
+ [392.800 --> 395.160] Yeah, it's not just a communication device.
148
+ [395.160 --> 397.200] We want her to love her iPad.
149
+ [397.200 --> 399.600] We want her to be able to communicate with it
150
+ [399.600 --> 402.720] and also just enjoy having it.
151
+ [402.720 --> 404.520] So it's on her at all times.
152
+ [404.520 --> 407.120] One problem we do have though is the battery runs out
153
+ [407.120 --> 410.080] super quick, since she's on it all day.
154
+ [410.400 --> 413.240] That's pretty typical for most kids.
155
+ [414.280 --> 417.360] The most important thing to me is that my daughter's happy.
156
+ [417.360 --> 420.080] And she's clearly very, very happy.
157
+ [421.280 --> 422.800] One of the keys to keeping her happy
158
+ [422.800 --> 424.760] is increasing her communication.
159
+ [424.760 --> 426.800] One of the biggest frustrations
160
+ [426.800 --> 429.520] and when she has her angry moments in her meltdowns
161
+ [429.520 --> 431.600] comes from an inability to communicate.
162
+ [431.600 --> 435.800] So it's our job to give her the tools that she needs
163
+ [435.800 --> 438.600] to communicate and have access to the world
164
+ [438.600 --> 440.640] and to stay happy.
165
+ [442.200 --> 444.240] Oh,
166
+ [444.240 --> 445.560] are you here?
167
+ [445.560 --> 447.240] Oh,
168
+ [447.240 --> 448.240] what's up?
169
+ [449.560 --> 450.680] You want to eat?
170
+ [450.680 --> 453.160] Will we have your brother in a moment to go eat, okay?
171
+ [455.160 --> 456.160] Oh,
172
+ [456.160 --> 458.360] yeah,
173
+ [458.360 --> 459.200] we are.
174
+ [459.200 --> 460.200] Me too.
175
+ [460.200 --> 464.320] Okay, so we have done this before in a video.
176
+ [464.320 --> 466.880] We taught you a sign.
177
+ [466.880 --> 469.520] Do you remember what that sign was?
178
+ [470.520 --> 472.000] Do you remember what that sign was?
179
+ [472.000 --> 473.080] See you.
180
+ [473.080 --> 474.520] I don't know.
181
+ [474.520 --> 477.200] We were at a fast food restaurant and we were traveling.
182
+ [478.120 --> 478.960] And we taught her a sign.
183
+ [478.960 --> 480.120] I know what.
184
+ [480.120 --> 481.400] I taught her this one.
185
+ [481.400 --> 482.560] Hey.
186
+ [482.560 --> 483.720] This sign's signed.
187
+ [483.720 --> 485.480] I don't remember this one.
188
+ [485.480 --> 486.800] I don't remember which one I remember this one.
189
+ [486.800 --> 487.800] Yep.
190
+ [487.800 --> 488.640] Yep.
191
+ [488.640 --> 489.480] She did learn this.
192
+ [489.480 --> 490.320] Yep.
193
+ [490.320 --> 492.440] So we have a sign that's going to be really useful
194
+ [492.440 --> 496.000] to Abigail because she always signs for the wrong thing.
195
+ [496.520 --> 497.520] Huh?
196
+ [497.520 --> 499.360] What is this?
197
+ [499.360 --> 500.200] That is close.
198
+ [500.200 --> 502.040] It is not a cookie.
199
+ [502.040 --> 502.880] It's a donut.
200
+ [502.880 --> 506.200] And I'm going to show you how to say donut, okay?
201
+ [506.200 --> 508.680] Here, look, we're going to do what's your preferred
202
+ [508.680 --> 509.520] signing hand?
203
+ [509.520 --> 510.440] What do you think?
204
+ [510.440 --> 511.440] I think it's her left.
205
+ [511.440 --> 512.280] Her left?
206
+ [512.280 --> 513.120] Okay.
207
+ [513.120 --> 513.800] Can you go like this?
208
+ [513.800 --> 514.640] Watch.
209
+ [514.640 --> 515.480] Watch.
210
+ [515.480 --> 516.320] Ready?
211
+ [516.320 --> 517.160] She'll live.
212
+ [517.160 --> 519.440] Look, we're going to go donut.
213
+ [520.280 --> 521.120] Donut.
214
+ [522.720 --> 523.560] Donut.
215
+ [524.560 --> 525.920] What is that?
216
+ [525.920 --> 528.280] That is a, look at me.
217
+ [528.280 --> 529.120] Donut.
218
+ [529.880 --> 531.000] Can you do it?
219
+ [533.800 --> 535.040] Donut.
220
+ [535.040 --> 536.520] Good job.
221
+ [536.520 --> 537.400] Would you like a bite?
222
+ [537.400 --> 538.240] Another donut?
223
+ [538.240 --> 539.080] Yes.
224
+ [539.080 --> 540.160] All right, there you go.
225
+ [540.160 --> 541.000] All right.
226
+ [544.120 --> 545.240] This is so good.
227
+ [545.240 --> 546.240] It's best, right?
228
+ [546.240 --> 547.120] What's that called?
229
+ [548.440 --> 549.560] It's not a cookie.
230
+ [549.560 --> 550.560] It's a donut.
231
+ [551.480 --> 552.320] Close.
232
+ [553.840 --> 554.840] Donut.
233
+ [554.840 --> 556.160] Ready?
234
+ [556.160 --> 557.360] Donut.
235
+ [557.360 --> 558.960] Hold your hand like that.
236
+ [558.960 --> 559.800] Donut.
237
+ [559.800 --> 562.440] So I'm just going to do less and less.
238
+ [562.440 --> 563.360] Hand over hand.
239
+ [563.360 --> 565.040] So I kind of just let go of her hand a little bit.
240
+ [565.040 --> 565.880] Donut.
241
+ [565.880 --> 566.720] Good job.
242
+ [566.720 --> 567.720] Show me again.
243
+ [569.000 --> 569.520] Donut.
244
+ [569.520 --> 570.880] Good job.
245
+ [570.880 --> 572.080] That was very good.
246
+ [572.080 --> 572.680] Ready?
247
+ [572.680 --> 574.280] What's that called?
248
+ [574.280 --> 575.120] What is that?
249
+ [577.840 --> 578.680] Donut.
250
+ [578.680 --> 579.760] Abigail with her muscle control.
251
+ [579.760 --> 582.200] She has to, she has to really focus on
252
+ [582.200 --> 583.480] what her hands are doing.
253
+ [586.800 --> 588.720] You're chocolate all over your face.
254
+ [588.720 --> 589.960] Say it's a really good donut.
255
+ [589.960 --> 590.640] OK, ready?
256
+ [590.640 --> 592.600] Look, we're going to make our hand like this.
257
+ [592.600 --> 594.000] Look at your hands, see it?
258
+ [594.000 --> 594.520] Like that?
259
+ [594.520 --> 595.680] OK.
260
+ [595.680 --> 600.600] We're going to go donut so close.
261
+ [600.600 --> 602.920] Like this.
262
+ [602.920 --> 603.760] Donut.
263
+ [603.760 --> 606.760] Good job.
264
+ [606.760 --> 607.880] Bring your hand to your face.
265
+ [607.880 --> 609.480] Not your face to your head.
266
+ [609.480 --> 611.800] Donut.
267
+ [611.800 --> 612.800] Donut.
268
+ [612.800 --> 614.600] So your hand feels ready?
269
+ [614.600 --> 616.120] You do it.
270
+ [616.120 --> 618.120] Donut.
271
+ [618.120 --> 619.120] Good job.
272
+ [619.120 --> 620.120] That was really good.
273
+ [620.120 --> 621.120] That was great.
274
+ [621.120 --> 623.120] That was excellent.
275
+ [623.120 --> 627.600] I'm going to take out smaller pieces so you can do it one.
276
+ [627.600 --> 628.360] OK.
277
+ [628.360 --> 629.160] What is that?
278
+ [629.160 --> 629.680] Hold on a minute.
279
+ [629.680 --> 630.200] I'll hurt you.
280
+ [630.200 --> 631.200] OK.
281
+ [633.520 --> 635.520] What do you want?
282
+ [635.520 --> 638.120] Yeah, what's that called?
283
+ [638.120 --> 640.920] Great, great, great approximation there.
284
+ [640.920 --> 641.920] Donut.
285
+ [641.920 --> 643.160] Yep, that's perfect.
286
+ [643.160 --> 646.400] Good job.
287
+ [646.400 --> 650.720] So some of us said the sign for donut is like this?
288
+ [650.720 --> 653.760] It's like a, the way they explain it on the website.
289
+ [653.760 --> 654.880] It's like a, can I see?
290
+ [654.880 --> 657.440] It's like a C. And then you're going up to your mouth like this.
291
+ [657.440 --> 658.760] Like donut.
292
+ [658.760 --> 662.160] Or there was, you made ours with both your hands.
293
+ [662.160 --> 665.120] And you did a circle, which is a big difference.
294
+ [665.120 --> 669.000] So Abby stims a lot of times with her fingers like this.
295
+ [669.000 --> 671.840] So we didn't think that would be a good way.
296
+ [671.840 --> 672.840] Right.
297
+ [672.840 --> 673.840] So that's why we chose this one.
298
+ [673.840 --> 675.840] Yes, that's why we do modified signs of that.
299
+ [675.840 --> 678.440] If you notice like, like Abby, give me a thumbs up.
300
+ [678.440 --> 680.080] It took a lot of, yep, there we go.
301
+ [680.080 --> 682.680] It took a lot of work to get her to develop and move her hand like that.
302
+ [682.680 --> 687.920] We had to manipulate her hand for her to get her to feel what that's like.
303
+ [687.920 --> 693.600] She does have some muscle development that's delayed in her hands.
304
+ [693.600 --> 695.720] So it's harder for her to do some of these.
305
+ [695.720 --> 696.720] Ready?
306
+ [696.720 --> 697.720] Show me.
307
+ [697.760 --> 699.760] Donut is also very hard.
308
+ [699.760 --> 701.840] She can't just look and do what we're doing.
309
+ [701.840 --> 702.840] Ready?
310
+ [702.840 --> 704.360] Show that.
311
+ [704.360 --> 705.840] Donut.
312
+ [705.840 --> 707.520] You do it.
313
+ [707.520 --> 709.040] Open up.
314
+ [709.040 --> 709.880] Donut.
315
+ [709.880 --> 711.160] Good job.
316
+ [711.160 --> 711.920] I like that.
317
+ [711.920 --> 716.280] I didn't even think about the sign for food being so much.
318
+ [716.280 --> 717.040] Yeah.
319
+ [717.040 --> 717.920] So she just did it.
320
+ [717.920 --> 718.920] I know.
321
+ [718.920 --> 721.520] Look, do this with your hand.
322
+ [721.520 --> 722.520] Open it.
323
+ [725.600 --> 727.280] Turn your head.
324
+ [727.280 --> 728.280] We're going to touch here.
325
+ [728.280 --> 729.280] Donut.
326
+ [729.280 --> 730.280] Ready?
327
+ [730.280 --> 732.280] That was good.
328
+ [732.280 --> 733.280] Donut.
329
+ [733.280 --> 734.280] Good job.
330
+ [734.280 --> 735.280] Good job.
331
+ [735.280 --> 736.280] Small bite.
332
+ [736.280 --> 737.280] Say my fingers.
333
+ [737.280 --> 738.280] Hold on.
334
+ [738.280 --> 739.280] You're trying so hard.
335
+ [739.280 --> 740.280] Donut.
336
+ [740.280 --> 741.280] Good job.
337
+ [741.280 --> 745.280] Now there's no motivation, right?
338
+ [745.280 --> 756.280] Ah, that's not a cookie.
339
+ [756.280 --> 757.280] What is that call?
340
+ [757.280 --> 758.280] No.
341
+ [758.280 --> 759.280] What is that call?
342
+ [759.280 --> 760.280] Donut.
343
+ [760.280 --> 761.280] Good job.
344
+ [761.280 --> 762.280] Listen, it's all gone.
345
+ [762.280 --> 763.280] All gone.
346
+ [763.280 --> 764.280] She's like, no, it's not.
347
+ [764.280 --> 765.280] I know there's another one in the bag.
348
+ [765.280 --> 766.280] You guys are lying.
349
+ [766.280 --> 767.280] I don't know.
350
+ [767.280 --> 768.280] At least that's all we're going to have tonight.
351
+ [768.280 --> 769.280] Okay, you ready?
352
+ [769.280 --> 770.280] What do we just eat?
353
+ [770.280 --> 771.280] It was so close.
354
+ [771.280 --> 772.280] I like how you're head up.
355
+ [772.280 --> 773.280] I like how you're doing your thumb.
356
+ [773.280 --> 774.280] Because that's different than eat.
357
+ [774.280 --> 775.280] Donut.
358
+ [775.280 --> 776.280] Donut.
359
+ [776.280 --> 777.280] Donut.
360
+ [777.280 --> 778.280] Donut.
361
+ [778.280 --> 779.280] Donut.
362
+ [779.280 --> 780.280] Donut.
363
+ [780.280 --> 781.280] Donut.
364
+ [781.280 --> 782.280] Donut.
365
+ [782.280 --> 783.280] Donut.
366
+ [783.280 --> 784.280] Donut.
367
+ [784.280 --> 785.280] Donut.
368
+ [785.280 --> 786.280] Donut.
369
+ [786.280 --> 787.280] Show me again.
370
+ [787.280 --> 788.280] Show me again.
371
+ [788.280 --> 789.280] Donut.
372
+ [789.280 --> 790.280] Donut.
373
+ [790.280 --> 791.280] I like it.
374
+ [791.280 --> 792.280] Good work.
375
+ [792.280 --> 793.280] Okay, we'll work on that.
376
+ [793.280 --> 794.280] So we'll just continue to use that every time that we go into Donut.
377
+ [794.280 --> 795.280] Yeah.
378
+ [795.280 --> 796.280] It's like every day.
379
+ [796.280 --> 797.280] You can do it every day.
380
+ [797.280 --> 798.280] You can do it every day.
381
+ [798.280 --> 799.280] Hey, good job.
382
+ [799.280 --> 800.280] I love you.
383
+ [800.280 --> 801.280] I'm so proud of you.
384
+ [801.280 --> 802.280] You do the greatest.
385
+ [802.280 --> 803.280] That's my biggest challenge.
386
+ [803.280 --> 804.280] Can I have a kiss?
387
+ [804.280 --> 805.280] You give me a kiss.
388
+ [805.280 --> 806.280] You give me a kiss.
389
+ [806.280 --> 807.280] Yeah.
390
+ [807.280 --> 808.280] I love you.
391
+ [808.280 --> 809.280] I love you.
392
+ [809.280 --> 810.280] I'm so proud of you.
393
+ [810.280 --> 811.280] You give me a kiss.
394
+ [811.280 --> 812.280] You give me a kiss.
395
+ [812.280 --> 813.280] You give me a kiss.
396
+ [813.280 --> 814.280] You give me a kiss.
397
+ [814.280 --> 815.280] You give me a kiss.
398
+ [815.280 --> 816.280] Thank you.
399
+ [816.280 --> 817.280] We're all done.
400
+ [817.280 --> 818.280] You can say bye to everybody.
401
+ [818.280 --> 819.280] Say thanks for watching.
402
+ [819.280 --> 820.280] Bye guys.
403
+ [820.280 --> 821.280] Say.
404
+ [821.280 --> 822.280] Say.
405
+ [822.280 --> 823.280] Say.
406
+ [823.280 --> 824.280] Say.
407
+ [824.280 --> 825.280] Say.
408
+ [825.280 --> 826.280] Say.
409
+ [826.280 --> 827.280] Say.
410
+ [827.280 --> 828.280] Say.
411
+ [828.280 --> 829.280] Say.
412
+ [829.280 --> 830.280] Say.
413
+ [830.280 --> 831.280] Say.
414
+ [831.280 --> 832.280] Say.
415
+ [832.280 --> 833.280] Say.
416
+ [833.280 --> 834.280] Say.
417
+ [834.280 --> 835.280] Say.
418
+ [835.280 --> 836.280] Say.
419
+ [836.280 --> 837.280] Say.
420
+ [837.280 --> 838.280] Say.
421
+ [838.280 --> 839.280] Say.
422
+ [839.280 --> 840.280] Say.
423
+ [840.280 --> 841.280] Say.
424
+ [841.280 --> 842.280] Say.
425
+ [842.280 --> 843.280] Say.
426
+ [843.280 --> 844.280] Say.
427
+ [845.280 --> 850.280] I love you.
428
+ [850.280 --> 853.280] You love yourself.
429
+ [853.280 --> 854.280] I know.
430
+ [854.280 --> 855.280] You good job.
431
+ [855.280 --> 856.280] It's nice job.
432
+ [856.280 --> 858.280] Are you all done?
transcript/allocentric_CISLJ2xL7UY.txt ADDED
@@ -0,0 +1,602 @@
1
+ [0.000 --> 1.640] Thank you, Kate.
2
+ [1.640 --> 3.400] I start showing my screen.
3
+ [3.400 --> 4.200] Can everyone hear me?
4
+ [4.200 --> 6.600] OK, my audience sometimes a bit.
5
+ [6.600 --> 7.600] OK.
6
+ [7.600 --> 9.800] OK, and I will show my screen.
7
+ [12.800 --> 15.800] Can everyone see my screen?
8
+ [15.800 --> 16.300] Yes.
9
+ [16.300 --> 17.100] Yes, that's great.
10
+ [17.100 --> 18.100] Thank you.
11
+ [18.100 --> 22.280] OK, so I'm going to start by saying,
12
+ [22.280 --> 26.000] I think that the pandemic changed the way that we live and the way
13
+ [26.000 --> 26.600] we work.
14
+ [26.600 --> 28.360] We interact with each other.
15
+ [28.360 --> 31.240] For many of us, it also changed the way that we experience space,
16
+ [31.240 --> 33.360] both locally and globally.
17
+ [33.360 --> 36.760] And core to this is the awareness of your body, the way that your body
18
+ [36.760 --> 41.560] situates itself in space and how we feel our body in many different ways
19
+ [41.560 --> 45.120] and the way that you inhabit and move in space.
20
+ [45.120 --> 50.160] So before we begin, I'd like to take a moment and allow you to situate
21
+ [50.160 --> 54.160] yourself in space, the space that you're inhabiting right now,
22
+ [54.160 --> 55.920] allow you to feel your body.
23
+ [55.920 --> 59.120] Most likely, everyone here is sitting in front of a screen
24
+ [59.120 --> 62.480] and you're kind of back against the back of a chair.
25
+ [62.480 --> 69.360] So I'd like to ask you to close your eyes and feel your feet on the ground.
26
+ [69.360 --> 72.720] Feel your back on the back of the chair.
27
+ [72.720 --> 75.760] Breathe in and out.
28
+ [75.760 --> 80.600] First, feel the tip of your toes, sensation traveling through your feet
29
+ [80.600 --> 84.960] into your heels, up into your ankles.
30
+ [85.000 --> 90.600] Further up through your calves, your knees, your thighs,
31
+ [90.600 --> 93.880] your hips, into your stomach.
32
+ [93.880 --> 98.920] Feel your stomach expanding and contracting with every breath you draw.
33
+ [98.920 --> 103.560] Your chest filling up with air and emptying itself of air
34
+ [103.560 --> 106.880] as you inhale and exhale.
35
+ [106.880 --> 113.840] Feel your fingers into your wrists, up your arms into your shoulders.
36
+ [113.840 --> 122.440] Draw your shoulders up as you inhale and then drop them down as you exhale.
37
+ [122.440 --> 127.840] Feel the nape of your neck, sensation radiating up into the back of your head.
38
+ [127.840 --> 131.160] Feel your eyelids on your eyes, the tip of your nose,
39
+ [131.160 --> 133.840] air flowing in and out of your nose.
40
+ [133.840 --> 136.280] Now visualize the space around you.
41
+ [136.280 --> 142.280] Its air flowing into your body, your breath exhaling back into the space.
42
+ [142.320 --> 145.200] Picture the extents of the space around you,
43
+ [145.200 --> 148.400] the height of the ceiling above you,
44
+ [148.400 --> 152.240] the distance to the walls in front and behind of you,
45
+ [152.240 --> 155.840] to your left and to your right.
46
+ [155.840 --> 159.320] Perhaps you can conjure up images of the textures around you,
47
+ [159.320 --> 161.320] the tactile properties that they have,
48
+ [161.320 --> 166.200] and what these feel like if you were to run your fingers over them.
49
+ [166.200 --> 168.680] Now zoom out.
50
+ [168.720 --> 172.720] Think of the room you're in, in the house you are in.
51
+ [172.720 --> 176.400] Think of the house as it sits on your street.
52
+ [176.400 --> 179.320] That street in relation to the city.
53
+ [182.400 --> 185.000] Open your eyes.
54
+ [185.000 --> 188.360] While you were breathing, you were focused on yourself.
55
+ [188.360 --> 193.840] Your view pointed inwards, aware of your body, your body in space.
56
+ [193.840 --> 196.680] It's likely that at some point you switched your mental view
57
+ [196.680 --> 199.720] from a first person to a third person view.
58
+ [199.720 --> 202.280] Or you might have held in mind simultaneously
59
+ [202.280 --> 205.240] a first person and a third person point of view,
60
+ [205.240 --> 207.880] creating a tension.
61
+ [207.880 --> 212.720] This tension, the tension between your body as felt within and in space
62
+ [212.720 --> 216.920] and your body as calculated from the outside and located in space
63
+ [216.920 --> 219.200] is a tension of spatial experience,
64
+ [219.200 --> 223.360] both being a body in space and having a body in space.
65
+ [223.360 --> 225.240] This tension or threshold
66
+ [225.280 --> 227.240] underlies a lot of my thinking
67
+ [227.240 --> 229.680] and the experimental performance and allocentric view
68
+ [229.680 --> 232.400] that I'm going to talk about today.
69
+ [232.400 --> 235.280] Thresholds both as literal spatial thresholds
70
+ [235.280 --> 238.240] and as abstract notions are interesting to me
71
+ [238.240 --> 243.680] as they are connectors and separators, spaces in and of themselves.
72
+ [243.680 --> 247.920] Moving through thresholds means going from one space to another,
73
+ [247.920 --> 249.760] a change happening.
74
+ [249.760 --> 252.000] I will in a sense talk about thresholds
75
+ [252.040 --> 254.680] as integration and dissociation
76
+ [254.680 --> 258.160] and introduce views and thinking from three different angles,
77
+ [258.160 --> 260.920] cognition, architecture and dance.
78
+ [260.920 --> 264.040] At times, I will refer directly from one to the other
79
+ [264.040 --> 266.720] to times parallels across the three are implied.
80
+ [268.160 --> 271.440] To do all of this, I will use the experimental performance
81
+ [271.440 --> 274.560] I designed with my colleagues Stephen Gage and Alexander Whitley
82
+ [274.560 --> 277.560] at the Bartlett School of Architecture, right before the pandemic.
83
+ [278.520 --> 281.160] For this, we designed a labyrinth on a floor
84
+ [281.200 --> 284.440] and a camera capturing a third person point of view
85
+ [284.440 --> 287.480] and VR goggles showing an oblique view down
86
+ [287.480 --> 289.560] as one moves in the space.
87
+ [289.560 --> 291.800] We were interested in questions such as
88
+ [291.800 --> 295.360] what experience might one have physically navigating space
89
+ [295.360 --> 297.120] while visually seeing oneself
90
+ [297.120 --> 299.400] through the eyes of a notional other.
91
+ [299.400 --> 301.520] What is it like to be watching oneself
92
+ [301.520 --> 303.920] to become the observer and the observed,
93
+ [303.920 --> 305.560] agent and body?
94
+ [305.560 --> 308.440] What kind of liminal embodiment might arise?
95
+ [308.480 --> 311.520] What is it like to be the tension, the threshold,
96
+ [311.520 --> 314.120] being and having a body emphasized?
97
+ [314.120 --> 316.480] In essence, rather than run experiments,
98
+ [316.480 --> 318.640] we designed an enactment of ideas
99
+ [318.640 --> 320.520] that one can inhabit and experience
100
+ [320.520 --> 323.000] dissociation and integration as threshold.
101
+ [325.040 --> 327.600] To form this, we needed to design a space.
102
+ [327.600 --> 329.760] So why did we use a labyrinth?
103
+ [329.760 --> 332.320] A labyrinth is a form of a complex journey
104
+ [332.320 --> 335.520] dating back to Minoan times that has lots of turns.
105
+ [335.520 --> 337.840] We thought that these might be difficult to navigate
106
+ [337.840 --> 340.640] with one's world and body view modified.
107
+ [340.640 --> 343.520] It might bring the mentioned tension to the fore.
108
+ [343.520 --> 345.520] There are also mystical associations.
109
+ [345.520 --> 348.440] The person walking a labyrinth is being observed
110
+ [348.440 --> 352.320] by a divine third party who becomes one with the pilgrim.
111
+ [352.320 --> 355.600] In a sense, a first person and third person experience
112
+ [355.600 --> 357.760] in one attention threshold.
113
+ [358.840 --> 361.200] Our aim was to use an immersive setup
114
+ [361.200 --> 364.440] as a way of reflecting on knowledge from different disciplines,
115
+ [364.440 --> 366.440] cognitive science, architecture dance
116
+ [366.440 --> 368.200] through embodied experience.
117
+ [368.200 --> 371.320] To here paraphrase the choreographer Steve Paxton,
118
+ [371.320 --> 374.440] we wanted to explore some of the physical possibilities
119
+ [374.440 --> 378.080] to refocus the focusing mind, time, space, gravity,
120
+ [378.080 --> 379.760] opening up the creativity.
121
+ [381.080 --> 383.360] Furthermore, labyrinths are also historically
122
+ [383.360 --> 385.840] intimately tied to movement and dance,
123
+ [385.840 --> 388.640] an important aspect within our enactment.
124
+ [388.640 --> 391.120] As history has it, Ariadne's dance floor
125
+ [391.120 --> 393.720] was the prototype that got Daedalus the commission
126
+ [393.720 --> 395.560] to build the labyrinth at Knossos.
127
+ [396.720 --> 401.440] Bringing in a simple foundation from cognitive science,
128
+ [401.440 --> 404.720] we can ask what kind of representation of space and body
129
+ [404.720 --> 406.720] are needed to navigate the world
130
+ [406.720 --> 409.200] and how might this representation be changed
131
+ [409.200 --> 410.680] in the enactment?
132
+ [410.680 --> 413.200] Well, of course, individual differences
133
+ [413.200 --> 416.160] make experience personal for all of us
134
+ [416.160 --> 418.680] as a basis for internal representation of space
135
+ [418.680 --> 420.280] and ourselves within it.
136
+ [420.280 --> 422.960] The brain's cognitive map provides a framework
137
+ [422.960 --> 425.480] for spatial experience that is filled with purpose
138
+ [425.520 --> 428.080] and is filled with personal experience.
139
+ [428.080 --> 430.600] The cognitive map is the brain's spatial model
140
+ [430.600 --> 433.640] of spatial relationships of the external world
141
+ [433.640 --> 435.640] in relation to itself.
142
+ [435.640 --> 437.680] The cognitive map underlies the ability
143
+ [437.680 --> 441.080] to successfully navigate and perform actions in space
144
+ [441.080 --> 443.400] and it charts both what is in the world
145
+ [443.400 --> 445.680] as well as what happens there.
146
+ [445.680 --> 448.240] Good differently and expressed in other terms,
147
+ [448.240 --> 450.720] geometry and phenomenology.
148
+ [450.720 --> 453.680] Cohesive spatial representations are established
149
+ [453.680 --> 458.120] by integrating both egocentric first person
150
+ [458.120 --> 461.120] and allocentric third person reference frames,
151
+ [461.120 --> 463.640] combining many egocentric positions
152
+ [463.640 --> 466.400] into an allocentric overview and so constructing
153
+ [466.400 --> 468.120] an internal model of the world.
154
+ [469.360 --> 472.200] An egocentric representation is where the location
155
+ [472.200 --> 475.640] and orientation of objects are relative to your body.
156
+ [475.640 --> 478.400] An allocentric representation is where the location
157
+ [478.400 --> 481.840] and orientation are constructed with respect to other objects
158
+ [481.840 --> 485.320] and environmental features independent of your body.
159
+ [485.320 --> 488.760] The term allocentric was adopted in our experimental performance.
160
+ [488.760 --> 491.240] However, it must be said that a truly allocentric view
161
+ [491.240 --> 493.120] is actually not really possible,
162
+ [493.120 --> 495.840] as no one view alone can be allocentric
163
+ [495.840 --> 498.320] and allocentric means observer independent.
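A minimal sketch of the distinction, assuming a flat 2D world and an observer whose own map position and heading are known; the function name and all coordinates below are invented purely for illustration. Converting one egocentric observation into allocentric coordinates is just a rotation by the observer's heading plus a translation by the observer's position.

import math

def ego_to_allo(observer_xy, observer_heading, bearing, distance):
    # Egocentric input: a landmark seen at some bearing (relative to the
    # body's facing direction) and at some distance from the body.
    # Allocentric output: the landmark's fixed position in the world frame.
    world_angle = observer_heading + bearing          # rotate into the world frame
    x = observer_xy[0] + distance * math.cos(world_angle)
    y = observer_xy[1] + distance * math.sin(world_angle)
    return (x, y)                                     # translate by the observer's position

# Example: standing at (2, 3) facing along the x-axis, a doorway seen 45
# degrees to the left at 4 metres has one fixed map position, whoever looks.
print(ego_to_allo((2.0, 3.0), 0.0, math.radians(45), 4.0))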
164
+ [500.640 --> 502.680] The cognitive map I mentioned is constructed
165
+ [502.680 --> 504.800] in the wider hippocampal network
166
+ [504.800 --> 506.600] where activity in the posterior,
167
+ [506.600 --> 508.920] the backside of the brain hippocampus
168
+ [508.920 --> 511.440] is sensitive to distances along the path
169
+ [511.440 --> 514.120] and therefore indicates a more egocentric role.
170
+ [514.120 --> 516.520] Activity in adjacent interrallal cortex
171
+ [516.520 --> 518.840] is correlated with euclidean distance
172
+ [518.840 --> 521.520] where a vector to a goal and is therefore directed
173
+ [521.520 --> 523.640] at a more alacentric spatial parsing.
174
+ [525.240 --> 527.720] So what kind of input do our bodies rely on
175
+ [527.720 --> 529.760] to construct the cognitive map?
176
+ [529.760 --> 534.520] Primary modalities are, in sighted individuals, vision and movement.
177
+ [534.520 --> 536.440] Both as translation through space
178
+ [536.440 --> 539.440] and as proprioceptive movement of the body.
179
+ [539.440 --> 541.920] Relevant to my interest, an important process
180
+ [541.920 --> 543.640] bringing together vision and movement
181
+ [543.640 --> 545.320] for cognitive mapping and navigation
182
+ [545.320 --> 547.360] is known as path integration.
183
+ [547.360 --> 550.520] Path integration combines egocentric information
184
+ [550.520 --> 552.640] from visual feedback and idiothetic,
185
+ [552.640 --> 555.200] that is, self-motion cues from movement,
186
+ [555.200 --> 557.880] into an alacentric representation.
187
+ [557.880 --> 559.840] Path integration helps to construct
188
+ [559.840 --> 562.320] an always current spatial representation
189
+ [562.320 --> 565.240] as location is continually and dynamically updated
190
+ [565.240 --> 567.040] by vision and movement feedback
191
+ [567.040 --> 568.800] as we travel around the world.
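A toy sketch of path integration as dead reckoning, using only the self-motion cues; the function name and step data are invented for illustration, and in the brain this running estimate is additionally corrected by visual feedback rather than left to drift.

import math

def integrate_path(start_xy, start_heading, steps):
    # Accumulate idiothetic cues (turn angle, step length) into an
    # allocentric position estimate, i.e. simple dead reckoning.
    x, y = start_xy
    heading = start_heading
    for turn, step_length in steps:
        heading += turn                       # rotation cue
        x += step_length * math.cos(heading)  # translation cue
        y += step_length * math.sin(heading)
    return (x, y), heading

# Walk a rough square: four 1 m steps with 90-degree left turns in between.
steps = [(0.0, 1.0)] + [(math.radians(90), 1.0)] * 3
print(integrate_path((0.0, 0.0), 0.0, steps))   # ends back near the start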
192
+ [569.640 --> 572.760] The specifics of resources available at any given time
193
+ [572.760 --> 574.320] as well as individual differences,
194
+ [574.320 --> 576.280] for example, in terms of background
195
+ [576.280 --> 578.000] such as being an architect or dancer,
196
+ [579.000 --> 581.240] influence when and how egocentric
197
+ [581.240 --> 584.120] or allocentric reference frames dominate.
198
+ [584.120 --> 585.360] Sorry about that.
199
+ [587.640 --> 589.680] There are interesting implications
200
+ [589.680 --> 591.760] in terms of spatial ability.
201
+ [591.760 --> 593.640] Spatial ability is not innately fixed
202
+ [593.640 --> 594.840] but it is trainable.
203
+ [594.840 --> 597.040] By using one's brain in specific ways,
204
+ [597.040 --> 600.600] connectivity and skill sets can be enhanced or altered.
205
+ [600.600 --> 603.280] Architects or dancers have trained themselves
206
+ [603.280 --> 606.600] to think about space very differently in different ways
207
+ [606.600 --> 608.680] and this might in turn influence
208
+ [608.680 --> 611.560] how they re-experience a space.
209
+ [611.560 --> 613.640] As an architect, I'm able to switch views
210
+ [613.640 --> 615.160] of the world frequently.
211
+ [615.160 --> 617.160] I'm able to mentally rotate the world
212
+ [617.160 --> 619.600] or take different perspectives easily.
213
+ [619.600 --> 621.320] It is possible that each of you
214
+ [621.320 --> 623.640] constructed different views in the exercise
215
+ [623.640 --> 625.280] we started with.
216
+ [625.280 --> 628.840] To me, space is both me and mine
217
+ [628.840 --> 630.400] and recalling William James,
218
+ [630.400 --> 633.240] the line, or threshold, is difficult to draw
219
+ [633.240 --> 636.280] as I negotiate and often overlay both.
220
+ [636.280 --> 638.840] I hold the first person and third person view
221
+ [638.840 --> 640.960] of the world in my mind quite easily.
222
+ [643.040 --> 646.600] Space as me and mine requires spatial representation
223
+ [646.600 --> 649.040] as a foundation for action and thinking
224
+ [649.040 --> 651.040] and operations like mental rotation
225
+ [651.040 --> 653.800] and perspective taking are key.
226
+ [653.800 --> 656.440] Mental rotation abilities allow people
227
+ [656.440 --> 658.800] to hold objects in mind and rotate them
228
+ [658.800 --> 662.040] so that you're able to see them from many different angles.
229
+ [662.040 --> 664.600] Perspective taking allows dynamic shifts
230
+ [664.600 --> 668.040] in one's imagination to inhabit specific positions
231
+ [668.040 --> 670.040] within a scene at will.
232
+ [670.040 --> 674.400] Skill-navigators employ these, for example, also in map reading.
233
+ [674.400 --> 676.200] As an architect, I'm fairly good at both
234
+ [676.200 --> 678.320] and I don't need to even think about performing
235
+ [678.320 --> 679.560] these operations.
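A minimal sketch of the geometry underlying mental rotation, rotating a remembered outline so it can be inspected from another angle; the shape, angle and function name here are invented for illustration.

import math

def rotate_points(points, angle):
    # Rotate 2D points about the origin by angle (radians), the geometric
    # core of imagining an object seen from a different orientation.
    c, s = math.cos(angle), math.sin(angle)
    return [(c * x - s * y, s * x + c * y) for x, y in points]

# An L-shaped outline "held in mind", then viewed rotated by 90 degrees.
l_shape = [(0, 0), (0, 2), (1, 2), (1, 1), (2, 1), (2, 0)]
print(rotate_points(l_shape, math.radians(90)))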
236
+ [683.800 --> 688.800] Our allocentric view setup would take ideas
237
+ [691.480 --> 694.440] of mentally rotating or taking perspectives
238
+ [694.440 --> 697.440] into an embodied realm and allow a literal enactment
239
+ [697.440 --> 700.040] of reflecting on one's own functioning,
240
+ [700.040 --> 703.360] but entails shifting the perspective of the same agent
241
+ [703.360 --> 706.000] rather than reifying different internal agents
242
+ [706.000 --> 708.200] self-regulating each other.
243
+ [708.200 --> 710.440] As was suggested by Arthur Widera,
244
+ [710.440 --> 712.400] responding to the William James quote
245
+ [712.400 --> 714.200] and his position that you saw earlier
246
+ [714.200 --> 717.000] on the duplex nature of understanding oneself
247
+ [717.000 --> 719.760] from within and from the outside.
248
+ [719.760 --> 722.000] In essence, the aim was to experience
249
+ [722.000 --> 725.240] how rather than splitting oneself into object and agent,
250
+ [725.240 --> 728.920] one was simultaneously being agent and object.
251
+ [728.920 --> 731.600] The interest lying as much as being an agent
252
+ [731.600 --> 735.720] reflecting on experience as in executing actions.
253
+ [735.720 --> 738.120] An architect's or choreographer's process
254
+ [738.120 --> 740.960] is an inversion of sorts of the cognitive construction
255
+ [740.960 --> 744.360] process of going from egocentric to allocentric.
256
+ [744.360 --> 747.120] However, even when going from an allocentric overview
257
+ [747.120 --> 750.040] to egocentric experience in the planning process,
258
+ [750.040 --> 752.120] architects and also choreographers
259
+ [752.120 --> 754.680] often continuously loop between both.
260
+ [757.800 --> 760.040] At the heart of many architectural queries
261
+ [760.040 --> 762.040] is this gap between first person
262
+ [762.040 --> 764.400] and third person experience.
263
+ [764.400 --> 767.840] Architects, notionally inhabit an external viewpoint,
264
+ [767.840 --> 769.640] looking down and feeling down
265
+ [769.640 --> 772.080] into notional buildings in the design process
266
+ [772.080 --> 774.600] and operating from an allocentric understanding
267
+ [774.600 --> 778.880] quasi-uncentric to construct egocentric experiences.
268
+ [778.880 --> 782.400] In doing so, we inhabit both viewpoints,
269
+ [782.400 --> 785.280] where both agent and object both me and mine
270
+ [785.280 --> 787.840] and we shift the perspective of the same agent
271
+ [787.840 --> 791.200] rather than reify different internal agents or selves.
272
+ [793.400 --> 795.880] This ability to switch from third person
273
+ [795.880 --> 798.760] to first person is decisive in design thinking.
274
+ [798.760 --> 801.840] But as humans experience architecture
275
+ [801.840 --> 804.360] not only from a static view, but dynamically,
276
+ [804.360 --> 808.720] thinking about and designing for movement as a link is key.
277
+ [808.720 --> 810.240] The Swiss architect Le Corbusier
278
+ [810.240 --> 813.720] described experiencing space in the following way.
279
+ [813.720 --> 816.240] Architecture is appreciated while on the move
280
+ [816.240 --> 818.080] with one's feet while walking,
281
+ [818.080 --> 820.440] moving from one place to another.
282
+ [820.440 --> 822.320] A true architectural promenade
283
+ [822.320 --> 824.200] offers constantly changing views,
284
+ [824.200 --> 826.520] unexpected, at times surprising.
285
+ [829.480 --> 832.480] Corbusier developed this idea inspired by himself
286
+ [832.480 --> 834.880] moving through the Athenian Propylaea,
287
+ [834.880 --> 837.360] and then he built his first architectural promenade
288
+ [837.360 --> 839.440] in his famous Villa Savoye.
289
+ [839.440 --> 842.320] His description of an experience of somebody
290
+ [842.320 --> 843.600] moving through a building
291
+ [843.600 --> 846.040] and the way he used this in his design
292
+ [846.040 --> 848.680] is different from what we otherwise often find
293
+ [848.680 --> 850.800] in processes of designing a building.
294
+ [852.400 --> 854.640] In the process of designing a building,
295
+ [854.640 --> 857.520] we often find that concepts and tools are being used
296
+ [857.520 --> 858.880] that do not place the body
297
+ [858.880 --> 861.560] and the importance of movement at the center.
298
+ [861.560 --> 864.080] And design ideas are frequently developed in plan
299
+ [864.080 --> 867.120] or using simple overall concepts of arrangements
300
+ [867.120 --> 871.560] referred to by the Beaux-Arts term of an organizing parti pris,
300
+ [871.560 --> 872.920] or parti.
301
+ [872.920 --> 875.400] A parti describes a relationship of parts
303
+ [875.400 --> 878.120] that is notionally independent of the observer
304
+ [878.120 --> 879.680] experiencing on the ground
305
+ [879.680 --> 883.440] and thus third person in terms of spatial reference frames.
306
+ [883.440 --> 887.480] As an initial idea, it is somewhat allocentric
307
+ [887.480 --> 890.400] and has yet to consider first person experience.
308
+ [890.400 --> 892.400] Could differently, it is conceptual
309
+ [892.400 --> 895.080] but that does not yet address the perceptual
310
+ [895.080 --> 897.440] which, as much as Corbusier and the modern movement
311
+ [897.440 --> 898.480] have been criticized,
312
+ [898.480 --> 900.560] their architecture then did achieve
313
+ [900.560 --> 903.160] as much as often their design method did achieve.
314
+ [904.840 --> 907.080] Buildings that do not infer the development
315
+ [907.080 --> 909.200] of a simple idea or a parti
316
+ [909.200 --> 912.680] consider dynamic interpretation can remain static.
317
+ [912.680 --> 915.480] Spaces can lack fluidity and movement capabilities
318
+ [915.480 --> 917.800] and wayfinding of buildings is impeded.
319
+ [917.800 --> 920.480] Building experience can be diminished.
320
+ [920.480 --> 922.480] While buildings such as the Seattle library
321
+ [922.480 --> 924.800] which you see here designed by OMA's
322
+ [924.800 --> 929.080] Diagrammatic method include a range of interesting spaces
323
+ [929.080 --> 930.840] for dwelling in, they are difficult
324
+ [930.840 --> 933.440] and often not enjoyable to navigate.
325
+ [933.440 --> 936.080] The Seattle Public Library has indeed required a lot
326
+ [936.080 --> 939.640] of post-op fancy analysis and wayfinding improvement.
327
+ [942.880 --> 945.160] A split between conception and perception
328
+ [945.160 --> 947.760] has also brought about understandings of architecture
329
+ [947.760 --> 949.840] as networks of relationships
330
+ [949.840 --> 952.440] that allows architects and architectural theoreticians
331
+ [952.440 --> 955.120] to describe architectures as a system.
332
+ [955.120 --> 957.640] The philosopher Vilém Flusser here suggests
333
+ [957.640 --> 959.960] the architect does not design objects anymore
334
+ [959.960 --> 961.240] but relations.
335
+ [961.240 --> 963.320] Instead of thinking in geometric terms
336
+ [963.320 --> 966.880] the architect has to project networks of equations.
337
+ [966.880 --> 969.280] Effectively what such avenues have in common
338
+ [969.280 --> 970.840] is a shift in viewpoint
339
+ [970.840 --> 973.880] and an explicit dissociation of experimental,
340
+ [973.880 --> 977.480] experiential composites, a divorce of conception
341
+ [977.480 --> 979.240] from perception.
342
+ [980.480 --> 984.240] Overall, critiquing this conceptual approach to architecture
343
+ [984.240 --> 986.760] the philosopher Bernard Bormer suggests
344
+ [986.760 --> 988.720] that buildings and spaces and reality
345
+ [988.720 --> 991.440] are not freely and effortlessly available.
346
+ [991.440 --> 993.280] They have to be walked through.
347
+ [993.280 --> 995.960] Bermett argues for an integration of the perceptual
348
+ [995.960 --> 998.480] with the conceptual architecture designed
349
+ [998.480 --> 1000.680] to achieve atmosphere.
350
+ [1000.680 --> 1003.520] Architects like Peter Zumthor are at the forefront
351
+ [1003.520 --> 1006.280] of achieving atmospheric architecture.
352
+ [1006.280 --> 1008.440] Architects like him often do this
353
+ [1008.440 --> 1010.720] by inferring a first person experience
354
+ [1010.720 --> 1013.120] in a third person's spatial representations,
355
+ [1013.120 --> 1014.440] such as a plan.
356
+ [1014.440 --> 1018.040] This way they can walk themselves around a hypothetical building
357
+ [1018.040 --> 1020.840] after constructing it as a three-dimensional entity
358
+ [1020.840 --> 1022.280] in their mind.
359
+ [1022.280 --> 1024.680] Drawing on processes like perspective taking,
360
+ [1024.680 --> 1029.120] mental rotation, or perhaps an intuitive understanding
361
+ [1029.120 --> 1031.600] of processes such as path integration,
362
+ [1031.600 --> 1034.080] architects such as Zumthor or Corbusier
363
+ [1034.080 --> 1036.840] effortlessly switch.
364
+ [1036.840 --> 1039.800] Architects like this still draw on conceptual tools,
365
+ [1039.800 --> 1042.320] such as simple sketches or parties
366
+ [1042.320 --> 1044.800] in the stage of design ideation,
367
+ [1044.800 --> 1047.200] but have the ability to, even in this stage,
368
+ [1047.200 --> 1050.000] already integrate first person experience.
369
+ [1052.760 --> 1054.600] Of course, architecture is not alone
370
+ [1054.600 --> 1057.040] in the ability of switching and simultaneously
371
+ [1057.040 --> 1059.240] inhabiting space through movement.
372
+ [1059.240 --> 1061.440] Indeed, most architects hold fairly conceptual
373
+ [1061.440 --> 1063.600] internal understandings of this in mind
374
+ [1063.600 --> 1066.040] when designing space for moving bodies.
375
+ [1066.040 --> 1069.360] Dancers and choreographers, as a contrast, approach this
376
+ [1069.360 --> 1073.040] from a rather more perceptual perspective,
377
+ [1073.040 --> 1075.680] designing movement of bodies itself.
378
+ [1075.680 --> 1078.680] Working from or switching between different perspectives
379
+ [1078.680 --> 1081.080] is a strong feature of dance practice,
380
+ [1081.080 --> 1084.720] both in training and in the choreographic process.
381
+ [1084.720 --> 1087.400] A choreographer often makes perspectival shifts
382
+ [1087.400 --> 1090.680] by stepping in and out of the choreography
383
+ [1090.680 --> 1093.200] in order to understand both the shape and effect
384
+ [1093.200 --> 1095.360] from the outside and the feeling and functioning
385
+ [1095.360 --> 1096.360] from the inside.
386
+ [1098.920 --> 1101.840] Visual and self-motion feedback is an important part
387
+ [1101.840 --> 1104.800] of dance practice with mirrors traditionally used
388
+ [1104.800 --> 1108.280] as a way of giving a dancer an outside eye on their movement,
389
+ [1108.280 --> 1110.600] as they're moving and switching views.
390
+ [1110.600 --> 1112.720] This helps in achieving a desired aesthetic,
391
+ [1112.720 --> 1114.720] such as the clean lines and alignment
392
+ [1114.720 --> 1116.200] and postures of the body,
393
+ [1116.200 --> 1119.280] associating an image of their body in movement
394
+ [1119.280 --> 1121.000] with the feeling they're experiencing
395
+ [1121.000 --> 1122.400] as they're executing it.
396
+ [1124.480 --> 1126.440] The kinesphere, for example,
397
+ [1126.440 --> 1128.680] is a conceptual way of understanding space
398
+ [1128.680 --> 1130.800] around the body that helps them do this
399
+ [1130.800 --> 1134.280] in order to visualize themselves in different ways
400
+ [1134.280 --> 1138.200] as they are practicing or choreographing dance movements.
401
+ [1138.200 --> 1141.440] It is composed of personal and peri-personal space
402
+ [1141.440 --> 1143.520] and integrates all the movement potential
403
+ [1143.520 --> 1145.000] spatial planes and connections
404
+ [1145.000 --> 1147.840] that are available in this process.
405
+ [1147.840 --> 1151.440] Rudolf Laban, who is the inventor of the kinesphere,
406
+ [1151.440 --> 1153.960] explained it as the sphere around the body
407
+ [1153.960 --> 1157.360] whose periphery can be reached by easily extended limbs
408
+ [1157.360 --> 1160.000] without stepping away from that place,
409
+ [1160.000 --> 1161.440] which is the point of support
410
+ [1161.440 --> 1163.080] when standing on one foot.
411
+ [1177.840 --> 1201.840] I'm sorry, I'm not sure why this isn't moving up.
412
+ [1201.840 --> 1203.240] Here we go.
413
+ [1203.240 --> 1205.120] Expanding on the idea of the kinosphere,
414
+ [1205.120 --> 1207.120] choreographers such as William Forsythe,
415
+ [1207.120 --> 1209.960] who you just saw, have worked with types of body image
416
+ [1209.960 --> 1212.040] that exist in the imagination
417
+ [1212.040 --> 1214.560] as a way of providing creative tools for dancers
418
+ [1214.560 --> 1217.080] to work with while improvising or creative move,
419
+ [1217.080 --> 1219.040] creating movement material.
420
+ [1219.040 --> 1222.240] For example, a dancer might imagine one of their
421
+ [1222.240 --> 1224.320] previous positions or movements,
422
+ [1224.320 --> 1226.600] freeze it in space and use this as a basis
423
+ [1226.600 --> 1228.120] for generating new movement.
424
+ [1228.120 --> 1230.640] By moving around the space in the imagined body,
425
+ [1230.640 --> 1232.200] it is occupied.
426
+ [1232.200 --> 1234.320] The imagined body is occupied.
427
+ [1234.320 --> 1246.960] Equally, this can be done by holding in mind other volumes
428
+ [1246.960 --> 1248.640] in the space and moving in relation to them.
429
+ [1248.640 --> 1252.400] Mental rotation and perspective taking, again,
430
+ [1252.400 --> 1253.280] are key.
431
+ [1253.280 --> 1256.040] Like any skill, the ability to hold images in mind
432
+ [1256.040 --> 1257.680] while performing physical actions
433
+ [1257.680 --> 1260.160] requires a substantial amount of practice
434
+ [1260.160 --> 1263.560] that becomes a powerful tool for a dancer or an architect
435
+ [1263.560 --> 1267.520] once acquired, enabling them to execute complex cognitive tasks
436
+ [1267.520 --> 1269.840] as they're dancing or designing.
437
+ [1269.840 --> 1271.840] In this, they hold multiple representations
438
+ [1271.840 --> 1274.000] of themselves in space and in mind.
439
+ [1280.480 --> 1282.720] Stepping outside of themselves, an understanding
440
+ [1282.720 --> 1285.720] of their bodies can also be explored using techniques
441
+ [1285.720 --> 1287.880] such as contact improvisation, where
442
+ [1287.880 --> 1291.200] understanding of oneself through the other is formed.
443
+ [1291.200 --> 1294.320] In this, dancers are both object and agent at once,
444
+ [1294.320 --> 1296.960] blurring lines of me and mine to explore
445
+ [1296.960 --> 1299.760] some of the physical possibilities.
446
+ [1299.760 --> 1301.880] Linking in different sensory modalities
447
+ [1301.880 --> 1303.880] in different ways for linking conception
448
+ [1303.880 --> 1306.600] of movement to sensory execution.
449
+ [1306.600 --> 1309.640] In contact improvisation, this means modifying action
450
+ [1309.640 --> 1312.600] in response to tactile kinesthetic information
451
+ [1312.600 --> 1317.400] through a contact point with another person's body.
452
+ [1317.400 --> 1320.000] So with all of this, what did we do?
453
+ [1320.040 --> 1322.160] Using ideas and knowledge from architecture,
454
+ [1322.160 --> 1325.200] dance and cognition, we want to create a setting
455
+ [1325.200 --> 1327.200] of both being and having a body
456
+ [1327.200 --> 1329.640] and see how a skilled dancer in the first instance,
457
+ [1329.640 --> 1331.400] Tier, would navigate our labyrinth
458
+ [1331.400 --> 1334.160] with third person vision
459
+ [1334.160 --> 1335.680] and first person movement.
460
+ [1340.600 --> 1343.720] The labyrinth in the setup was seen in two different projections
461
+ [1343.720 --> 1344.480] in the headset.
462
+ [1344.480 --> 1346.880] In one, the dancer observed herself
463
+ [1346.880 --> 1349.200] in an axonometric view to reflect
464
+ [1349.200 --> 1351.480] on the geometry of space per se.
465
+ [1351.480 --> 1353.960] In the other view, a perspectival view
466
+ [1353.960 --> 1357.600] to reflect on the way the human visual system sees space.
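A rough sketch of the difference between the two projections, assuming a simple pinhole model rather than the actual rendering used in the headset, with function names and numbers invented for illustration: an axonometric (parallel) projection keeps measured size constant with depth, while a perspective projection shrinks things with distance.

def project_orthographic(x, y, z):
    # Axonometric/parallel projection: depth (z) does not change apparent size.
    return (x, y)

def project_perspective(x, y, z, focal_length=1.0):
    # Pinhole perspective projection: points farther from the camera
    # (larger z) map closer to the image centre, so they appear smaller.
    return (focal_length * x / z, focal_length * y / z)

# The same point, 1 unit above the axis, at depths 2 and 8:
for z in (2.0, 8.0):
    print(z, project_orthographic(0.5, 1.0, z), project_perspective(0.5, 1.0, z))
# The orthographic image is unchanged by depth; the perspective image shrinks.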
467
+ [1357.600 --> 1360.720] In the axonometric view, while the space was viewed
468
+ [1360.720 --> 1363.400] with true measurement, a vertical distortion of the body
469
+ [1363.400 --> 1366.920] was seen when she moved backwards from the picture plane.
470
+ [1366.920 --> 1370.120] This was purely because of technical reasons.
471
+ [1371.680 --> 1374.800] In the first instance, we asked her to travel the labyrinth
472
+ [1374.800 --> 1377.240] from a normal view without the VR goggles
473
+ [1377.240 --> 1379.600] and then after that, using the VR goggles
474
+ [1379.600 --> 1381.800] with a third person point of view.
475
+ [1384.240 --> 1386.000] She then navigated, seeing herself
476
+ [1386.000 --> 1387.840] from a third person point of view
477
+ [1387.840 --> 1389.640] first with the normal perspectival view,
478
+ [1389.640 --> 1393.320] then with this kind of quasi-axonometric view of space.
479
+ [1393.320 --> 1395.040] In both conditions, the time spent
480
+ [1395.040 --> 1398.720] negotiating the labyrinth stabilized after a while,
481
+ [1398.720 --> 1401.080] with a shorter time taken to navigate in the perspective
482
+ [1401.080 --> 1403.600] than the time in the axonometric view.
483
+ [1403.600 --> 1406.560] In following conversations, tier described the sense
484
+ [1406.560 --> 1408.640] of being in the perspective as normal,
485
+ [1408.640 --> 1411.760] her body, for her, seeming to remain the same size,
486
+ [1411.760 --> 1413.880] although visually it was diminishing
487
+ [1413.880 --> 1416.160] the further she moved away from the camera.
488
+ [1417.160 --> 1420.520] In both views, there were errors seen on the curving pathways
489
+ [1420.520 --> 1423.400] and deviations often from the line of the path.
490
+ [1423.400 --> 1425.920] This type of error became more
491
+ [1425.920 --> 1429.480] frequent when Tier moved quickly, sharp turns were difficult
492
+ [1429.480 --> 1433.880] and slow, and she used, as she says, her feet like arrows.
493
+ [1433.920 --> 1437.760] Interestingly, when information from idiothetic self-motion
494
+ [1437.760 --> 1440.200] and the visual system diverged,
495
+ [1440.200 --> 1443.040] there's a tendency to rely on vision for movement,
496
+ [1443.040 --> 1444.600] and it seems even a dancer,
497
+ [1444.600 --> 1447.640] so highly attuned to her body's movement in space,
498
+ [1447.640 --> 1449.800] still relied heavily on vision
499
+ [1449.800 --> 1451.400] to correct what she was doing.
500
+ [1457.520 --> 1460.840] Following a series of runs through the labyrinth,
501
+ [1460.840 --> 1463.320] we wanted to extend the demand on movement
502
+ [1463.320 --> 1465.720] include more diverse gestures.
503
+ [1465.720 --> 1468.440] We asked her to perform three scenarios,
504
+ [1468.440 --> 1470.400] a previously learned dance phrase
505
+ [1470.400 --> 1473.760] to perform, then second to learn a new movement sequence
506
+ [1473.760 --> 1476.360] and to play a chasing game with Alex,
507
+ [1476.360 --> 1480.040] who was one of the people coming up with the design itself
508
+ [1480.040 --> 1482.400] and is also Tier's choreographer.
509
+ [1482.400 --> 1484.280] Tier first performed a dance phrase
510
+ [1484.280 --> 1486.160] which she was already in command of.
511
+ [1486.160 --> 1488.840] This drew primarily on proprioceptive movement
512
+ [1488.840 --> 1490.680] without translating through space,
513
+ [1490.680 --> 1492.760] but she still had to maintain correct position
514
+ [1492.800 --> 1494.360] and heading direction.
515
+ [1494.360 --> 1495.880] She performed this competently
516
+ [1495.880 --> 1498.120] in terms of detail of bodily movement,
517
+ [1498.120 --> 1500.640] likely by drawing on her internal sense of movement
518
+ [1500.640 --> 1503.400] and not needing as much visual input.
519
+ [1503.400 --> 1505.720] Her ability to maintain her spacing,
520
+ [1505.720 --> 1508.400] the position relative to the performance area
521
+ [1508.400 --> 1511.040] was compromised, her sense of direction affected
522
+ [1511.040 --> 1514.520] by the unusual view of her body in space.
523
+ [1514.520 --> 1517.640] Tier's ability to perform a standard ballet exercise
524
+ [1517.640 --> 1519.080] of balance,
525
+ [1519.080 --> 1522.560] the adage sequence on one leg, was incredibly difficult
526
+ [1522.560 --> 1525.680] with her vestibular system somewhat disrupted.
527
+ [1525.680 --> 1527.040] After all of this, Alex,
528
+ [1527.040 --> 1528.320] her choreographer, taught her
529
+ [1528.320 --> 1529.760] a new movement phrase,
530
+ [1529.760 --> 1531.640] which he demonstrated for her.
531
+ [1531.640 --> 1533.960] Her ability to keep hold of information
532
+ [1533.960 --> 1536.360] both in movement, detail and spacing
533
+ [1536.360 --> 1539.560] was better in this situation than in all others.
534
+ [1539.560 --> 1541.600] Illustrating just how adept she is
535
+ [1541.600 --> 1543.400] at translating visual information
536
+ [1543.400 --> 1545.120] by viewing a demonstrator
537
+ [1545.120 --> 1548.200] and copying the movement without focusing on herself.
538
+ [1553.080 --> 1557.560] The final scenario was then a chasing game.
539
+ [1557.560 --> 1560.480] Tier chased Alex in order to capture him.
540
+ [1560.480 --> 1563.480] They also attempted performing collaborative gestures
541
+ [1563.480 --> 1564.680] such as touching hands.
542
+ [1564.680 --> 1567.880] Both novel translation and proprioceptive actions
543
+ [1567.880 --> 1570.360] were required and she could not use
544
+ [1570.360 --> 1573.040] already internalized sequences of movement.
545
+ [1573.040 --> 1576.360] Tier was here also not able to remain
546
+ [1576.360 --> 1579.360] within a dance framework that she was highly skilled at
547
+ [1579.360 --> 1580.800] and we witnessed her.
548
+ [1580.800 --> 1582.080] She was highly skilled at
549
+ [1582.080 --> 1585.160] and we witnessed the highest experience of dissociation
550
+ [1585.160 --> 1586.960] with tier moving somewhat clumsily
551
+ [1586.960 --> 1590.640] and mixing up left and right more than in the other tasks.
552
+ [1601.720 --> 1603.440] Our design constructed scenarios
553
+ [1603.440 --> 1606.400] where sensory input from the visual and self-motion system
554
+ [1606.400 --> 1607.600] are dissociated.
555
+ [1607.600 --> 1608.800] It's interesting to see
556
+ [1608.800 --> 1612.080] how spatially directed motor activity was curtailed.
557
+ [1612.080 --> 1615.680] Speed slowed down and movement lacking in precision.
558
+ [1615.680 --> 1619.880] While tier had continuous access to visual snapshots
559
+ [1619.880 --> 1622.160] optic flow was modified.
560
+ [1622.160 --> 1623.960] The visual appearance of her environment
561
+ [1623.960 --> 1627.240] remained fixed and she only saw her own body move.
562
+ [1627.240 --> 1629.280] The integration of multisensory input
563
+ [1629.280 --> 1632.280] to enable sensible action is modified
564
+ [1632.280 --> 1636.080] and perhaps visual-idiothetic dissociation
565
+ [1636.080 --> 1639.240] above all impacted path integration abilities
566
+ [1639.240 --> 1641.960] to preserve and estimate accurate movement angles
567
+ [1641.960 --> 1645.080] and distance ratios to reference points.
568
+ [1645.080 --> 1647.720] When running the labyrinth, her internal sense of movement,
569
+ [1647.720 --> 1652.560] vision and movement through space felt disjointed,
570
+ [1652.560 --> 1655.000] especially the comparison of across,
571
+ [1655.000 --> 1658.760] versus up and down, X and Y axis of her field of view.
572
+ [1658.760 --> 1661.880] This was heightened in this quasi-axonometric view
573
+ [1661.880 --> 1664.120] and Tier did not experience this higher
574
+ [1664.120 --> 1666.720] disjointment when seeing herself in the perspective.
575
+ [1674.360 --> 1677.400] Coming back to this work now as we re-enter buildings
576
+ [1677.400 --> 1680.120] and can carry out work like this more easily,
577
+ [1680.120 --> 1682.160] we hope to expand our exploration
578
+ [1682.160 --> 1685.040] and in thinking through our bodies and enacting knowledge,
579
+ [1685.040 --> 1687.560] speculate on implications for architecture,
580
+ [1687.560 --> 1690.680] dance, cognitive science and other fields.
581
+ [1690.680 --> 1693.440] But we find ways to design navigation and movement
582
+ [1693.440 --> 1695.160] that brings together a first person
583
+ [1695.160 --> 1697.320] and a third person view of seeing the world
584
+ [1697.320 --> 1699.400] in more enjoyable and somatic ways
585
+ [1699.400 --> 1701.000] than current mapping technology
586
+ [1701.000 --> 1702.880] or building navigation allows.
587
+ [1704.440 --> 1706.400] What you see here on this final slide
588
+ [1706.400 --> 1709.880] is another moment when we set up this performance
589
+ [1709.880 --> 1712.320] and allowed people to navigate this in ways
590
+ [1712.320 --> 1713.680] that they felt comfortable with
591
+ [1713.680 --> 1716.880] and there was a handstand artist that wanted to try it out.
592
+ [1716.880 --> 1720.160] And so he was able to navigate this entire set up,
593
+ [1720.160 --> 1723.400] seeing himself the way that you see on the space
594
+ [1723.400 --> 1727.800] being mounted in the back while navigating on his hands.
595
+ [1727.800 --> 1731.480] So that was a rather enjoyable thing to be watching.
596
+ [1738.520 --> 1740.680] So I'd like to thank you all and hopefully,
597
+ [1740.680 --> 1743.520] there's some feedback, some interesting thoughts
598
+ [1743.520 --> 1746.800] that you might bring to it or some questions you have for me
599
+ [1746.800 --> 1749.680] on this work that is still very much in its inception
600
+ [1749.680 --> 1752.800] and early on in thinking.
601
+ [1754.300 --> 1755.840] Bye.
602
+ [1774.000 --> 1783.200] Vaiajata andajad Mmmata
transcript/allocentric_HAnw168huqA.txt ADDED
@@ -0,0 +1,558 @@
1
+ [0.000 --> 12.240] Welcome. I'm very excited today to talk about effective speaking in spontaneous situations.
2
+ [12.240 --> 16.760] I thank you all for joining us, even though the title of my talk is grammatically incorrect.
3
+ [16.760 --> 20.120] I thought that might scare a few of you away. But I learned teaching here at the business
4
+ [20.120 --> 24.320] school, catching people's attention is hard. So something as simple as that, I thought
5
+ [24.320 --> 28.760] might draw a few of you here. So this is going to be a highly interactive and
6
+ [28.760 --> 34.440] participative workshop today. If you don't feel comfortable participating, that's completely
7
+ [34.440 --> 38.360] fine. But do know I'm going to ask you to talk to people next to you. There'll be opportunities
8
+ [38.360 --> 43.640] to stand up and practice some things because I believe the way we become effective communicators
9
+ [43.640 --> 48.880] is by actually communicating. So let's get started right away. I'd like to ask you all to
10
+ [48.880 --> 55.040] read this sentence. And as you read this sentence, what's most important to me is that you count
11
+ [55.040 --> 60.360] the number of F's that you find in this sentence. Please count the number of F's. Keep
12
+ [60.360 --> 76.000] it quiet to yourself. Give you just another couple seconds here. Three, two, one. Raise
13
+ [76.000 --> 80.800] your hand please if you found three and only three F's. Excellent. Great. Did anybody
14
+ [80.800 --> 90.760] find four? Okay. Anybody find only five F's? Anybody find six? There's six F's. What two
15
+ [90.760 --> 98.440] letter word ending in F did many of us miss? Of. We'll make sure to get this to you so you
16
+ [98.440 --> 103.760] can torment your friends and family at a later date. When I first was exposed to this
17
+ [103.760 --> 108.800] over 12 years ago, I only found three and I felt really stupid. So I like to start every
18
+ [108.800 --> 114.360] workshop, every class I teach with this to pass that feeling. No, no, that's not why I
19
+ [114.360 --> 119.680] do this. I do this because this is a perfect analogy for what we're going to be talking about
20
+ [119.680 --> 124.440] today. The vast majority of us in this room, very smart people in this room, were not as
21
+ [124.440 --> 130.280] effective as we could have been in this activity. We didn't get it right. And the same is true
22
+ [130.280 --> 136.600] when it comes to speaking in public, particularly when spontaneous speaking. It's little things
23
+ [136.600 --> 141.720] that make a big difference in being effective. So today we're going to talk about little things
24
+ [141.720 --> 147.360] in terms of your approach, your attitude, your practice that can change how you feel when
25
+ [147.360 --> 152.560] you speak in public. And we're going to be talking primarily about one type of public
26
+ [152.560 --> 157.960] speaking. Not the type that you plan for in advance, the type that you actually spend
27
+ [157.960 --> 163.280] time thinking about, you might even create slides for. These are the keynotes, the conference
28
+ [163.280 --> 170.000] presentation, the formal, toasts. That's not what we're talking about today. We're talking
29
+ [170.000 --> 176.040] about spontaneous speaking. When you're in a situation that you're asked to speak off the
30
+ [176.040 --> 181.800] cuff and in the moment, what we're going through today is actually the result of a workshop
31
+ [181.800 --> 187.080] I created here for the business school. Several years ago, a survey was taken among the students
32
+ [187.080 --> 190.880] and they said, what's one of the, what are things we could do to help make you more successful
33
+ [190.880 --> 196.880] here? And at the top of that list was this notion of responding to cold calls. Does everybody
34
+ [196.880 --> 200.880] know what a cold call is? It's where the mean professor, like me, looks at some students,
35
+ [200.880 --> 206.720] what do you think? And there was a lot of panic and a lot of silence. So as a result of
36
+ [206.720 --> 210.920] that, this workshop was created in a vast majority of first year students here at the
37
+ [210.920 --> 215.600] GSB go through this workshop. So I'm going to walk you through sort of a hybrid version
38
+ [215.600 --> 222.680] of what they do. The reality is that spontaneous speaking is actually more prevalent than plan
39
+ [222.680 --> 226.520] speaking. Perhaps it's giving introductions. You're at a dinner and somebody says, you know
40
+ [226.520 --> 231.320] so and so would you mind introducing them? Maybe it's giving feedback in the moment. Your
41
+ [231.320 --> 236.760] boss turns you and says, would you tell me what you think? It could be a surprise toast.
42
+ [236.760 --> 241.840] Or finally, it could be during the Q&A session. And by the way, we will leave plenty of time
43
+ [241.840 --> 246.640] at the end of our day today for Q&A. I'd love to hear the questions you have about this topic
44
+ [246.640 --> 253.040] or other topics related to communicating. So our agenda is simple. In order to be an effective
45
+ [253.040 --> 258.960] communicator, regardless of if it's planned or spontaneous, you need to have your anxiety
46
+ [258.960 --> 265.760] under control. So we'll start there. Second, what we're going to talk about is some ground
47
+ [265.760 --> 269.840] rules for the interactivity we'll have today. And then finally, we're going to get into
48
+ [269.840 --> 274.240] the heart of what we will be covering. And again, as I said, lots of activity and I invite
49
+ [274.240 --> 282.600] you to participate. So let's get started with anxiety management. 85% of people tell us
50
+ [282.600 --> 287.920] that they're nervous when speaking in public. And I think the other 15% are lying. We could
51
+ [287.920 --> 294.040] create a situation where we could make them nervous too. In fact, just this past week,
52
+ [294.040 --> 299.720] a study from Chapman University asked Americans, what are the things you fear most? And among
53
+ [299.720 --> 305.200] being caught in a surprise terrorist attack, having your identity stolen, was
54
+ [305.200 --> 310.960] public speaking. Among the top five was speaking in front of others. This is a ubiquitous
55
+ [310.960 --> 316.600] fear. And one that I believe we can learn to manage. And I use that word manage very carefully
56
+ [316.600 --> 322.400] because I don't think we ever want to overcome it. Anxiety actually helps us. It gives us
57
+ [322.400 --> 326.720] energy, helps us focus, tells us what we're doing is important. But we want to learn to
58
+ [326.720 --> 331.280] manage it. So I'd like to introduce you to a few techniques that can work and all of
59
+ [331.280 --> 336.920] these techniques are based on academic research. But before we get there, I'd love to ask
60
+ [336.920 --> 342.080] you, what does it feel like when you're sitting in the audience watching a nervous speaker
61
+ [342.080 --> 348.200] present? How do you feel? Just shout out a few things. How do you feel? Uncomfortable. I
62
+ [348.200 --> 352.440] heard many of you going, yes, uncomfortable. It feels very awkward, doesn't it? So what
63
+ [352.440 --> 357.360] do we do? Now a couple of you probably like watching somebody suffer, but most of us
64
+ [357.360 --> 364.200] don't. So what do we do? We sit there and we nod and we smile or we disengage. And
65
+ [364.200 --> 367.800] to the nervous speaker looking out at his or her audience seeing a bunch of people nodding
66
+ [367.800 --> 373.240] or disengage, that does not help. So we need to learn to manage our anxiety because fundamentally
67
+ [373.240 --> 378.600] your job as a communicator, rather, regardless of if it's planned or spontaneous, is to make
68
+ [378.600 --> 383.200] your audience comfortable. Because if they're comfortable, they can receive your message.
69
+ [383.200 --> 388.080] And when I say comfortable, I am not referring to the fact that your message has to be sugar
70
+ [388.080 --> 392.800] coated and nice for them to hear. It can be a harsh message, but they have to be in a
71
+ [392.800 --> 399.400] place where they can receive it. So it's incumbent on you as a communicator to help your audience
72
+ [399.400 --> 404.080] feel comfortable. And we do that by managing our anxiety. So let me introduce you to a few
73
+ [404.080 --> 410.240] techniques that I think you can use right away to help you feel more comfortable.
74
+ [410.240 --> 415.000] The first has to do with when you begin to feel those anxiety symptoms. For most people,
75
+ [415.000 --> 420.760] this happens in the initial minutes prior to speaking. In this situation, what happens
76
+ [420.760 --> 424.920] is many of us begin to feel whatever it is that happens to you. Maybe your stomach gets
77
+ [424.920 --> 429.760] a little gurgly, maybe your legs begin to shake, maybe you begin to perspire. And then
78
+ [429.760 --> 434.680] we start to say to ourselves, oh my goodness, I'm nervous. Uh oh, they're going to tell
79
+ [434.680 --> 439.600] I'm nervous. This is not going to go well. And we start spiraling out of control. So
80
+ [439.600 --> 445.960] research on mindful attention tells us that if when we begin to feel those anxiety symptoms,
81
+ [445.960 --> 452.240] we simply greet our anxiety and say, hey, this is me feeling nervous. I'm about to do something
82
+ [452.240 --> 458.280] of consequence. And simply by greeting your anxiety and acknowledging it that it's normal
83
+ [458.280 --> 464.600] and natural. Heck, 85% of people tell us they have it. You actually can stem the tide
84
+ [464.600 --> 469.800] of that anxiety spiraling out of control. It's not necessarily going to reduce the anxiety,
85
+ [469.800 --> 474.760] but it will stop it from spinning up. So the next time you begin to feel those anxiety
86
+ [474.760 --> 481.680] signs, take a deep breath and say, this is me feeling anxious. I notice a few of you
87
+ [481.680 --> 486.200] taking some notes. There's a handout that will come at the end that has everything that
88
+ [486.200 --> 492.640] I'm supposed to say. Okay. Can't guarantee I'm going to say it, but you'll have it there.
89
+ [492.640 --> 496.600] In addition to this approach, a technique that works very well, and this is a technique
90
+ [496.600 --> 500.840] that I helped do some research on way back when I was in graduate school, has to do with
91
+ [500.840 --> 508.040] reframing how you see the speaking situation. Most of us, when we are up presenting planned
92
+ [508.040 --> 514.640] or spontaneous, we feel that we have to do it right. And we feel like we are performing.
93
+ [514.640 --> 519.000] How many of you have ever acted, done singing or dancing? I'm not going to ask for performances.
94
+ [519.000 --> 523.520] No. Okay. Many of you have. We should note that we could do next year, maybe a talent
95
+ [523.520 --> 529.200] show of alums. It looks like we got the talent there. That's great. So when you perform,
96
+ [529.200 --> 533.560] you know that there's a right way and a wrong way to do it. If you don't hit the right
97
+ [533.560 --> 538.720] note or your right line at the right time, at the right place, you've made a mistake.
98
+ [538.720 --> 545.040] It messes up the audience. It messes up the people on stage. But when you present, there
99
+ [545.040 --> 549.720] is no right way. There's certainly better and worse ways, but there is no one right way.
100
+ [549.720 --> 554.600] So we need to look at presenting as something other than performance. And what I'd like
101
+ [554.600 --> 559.720] to suggest is what we need to see this as is a conversation. Right now, I'm having a
102
+ [559.720 --> 566.000] conversation with 100 plus people, rather than saying I'm performing for you. But it's
103
+ [566.000 --> 571.120] not enough just to say this is a conversation. I want to give you some concrete things you
104
+ [571.120 --> 578.760] can do. First, start with questions. Questions by their very nature are dialogic. They're
105
+ [578.760 --> 584.480] two way. What was one of the very first things I did here for you? I had you count the number
106
+ [584.480 --> 589.240] of F's and raise your hands. I asked you a question that gets your audience involved.
107
+ [589.240 --> 594.880] It makes it feel to me as the presenter as if we are in conversation. So use questions.
108
+ [594.880 --> 598.760] It can be rhetorical. They can be polling. Perhaps I actually want to hear information from
109
+ [598.760 --> 605.300] you. In fact, I use questions when I create an outline for my presentations. Rather than
110
+ [605.300 --> 609.840] writing bullet points, I list questions that I'm going to answer. And that puts me in
111
+ [609.840 --> 614.360] that conversational mode. If you were to look at my notes for today's talk, you'll see
112
+ [614.360 --> 619.360] it's just a series of questions. Right now, I'm answering the question, how do we manage
113
+ [619.360 --> 623.840] our anxiety? Beyond questions, another very useful
114
+ [623.840 --> 630.920] technique for making us conversational is to use conversational language. Many nervous
115
+ [630.920 --> 635.960] speakers distance themselves physically. If you've ever seen a nervous speaker present,
116
+ [635.960 --> 641.760] he or she will say something like this, welcome. I am really excited to be here with you.
117
+ [641.760 --> 646.560] They pull as far away from you as possible because you threaten us, speakers. You make
118
+ [646.560 --> 651.000] us nervous, so we want to get away from you. We do the same thing linguistically. We
119
+ [651.000 --> 655.960] use language that distances ourselves. It's not unusual to hear a nervous speaker say
120
+ [655.960 --> 662.240] something like, one must consider the ramifications or today we're going to cover step one, step
121
+ [662.240 --> 668.320] two, step three. That's very distancing language. To be more conversational, use conversational
122
+ [668.320 --> 672.420] language. Instead of one must consider, say, this is important to you. We all need to be
123
+ [672.420 --> 677.320] concerned with. Do you hear that? Inclusive conversational language has to do with the
124
+ [677.320 --> 683.120] pronouns. Instead of step one, step two, step three, first what we need to do is this.
125
+ [683.120 --> 689.520] The second thing you should consider is here. Use conversational language. Being conversational
126
+ [689.520 --> 695.480] can also help you manage your anxiety. The third technique I'd like to share is research
127
+ [695.480 --> 699.760] that I actually started when I was an undergraduate here. I was very fortunate to study with Phil
128
+ [699.760 --> 706.520] Zimbardo of the Stanford Prison Experiment Fame. Many people don't know that Zim actually
129
+ [706.520 --> 712.560] was instrumental in starting one of the very first shyness institutes in the world, especially
130
+ [712.560 --> 718.440] in the country. I did some research with him that looked at how your orientation to time
131
+ [718.440 --> 724.840] influences how you react. What we learned is if you can bring yourself into the present
132
+ [724.840 --> 730.040] moment rather than being worried about the future consequences, you can actually be less
133
+ [730.040 --> 735.400] nervous. Most of us when we present are worried about the future consequences. My students
134
+ [735.400 --> 738.400] are worried they're not going to get the right grade. Some of you are worried you might
135
+ [738.400 --> 742.040] not get the funding, you might not get the support, you might not get the laughs that you
136
+ [742.040 --> 748.560] want. All of those are future states. So if we can bring ourselves into the present moment,
137
+ [748.560 --> 752.280] we're not going to be as concerned about those future states and therefore we'll be less
138
+ [752.280 --> 758.520] nervous. There are lots of ways to become present oriented. I know a professional speaker.
139
+ [758.520 --> 764.840] He's paid $10,000 an hour to speak. It's a good gig. He gets very nervous. He's up
140
+ [764.840 --> 770.000] in front of crowds of thousands. Behind the stage, what he does is 100 push-ups right before
141
+ [770.000 --> 775.120] he comes out. You can't be that physically active and not be in the present moment. Now
142
+ [775.120 --> 779.120] I'm not recommending all of us go to that level of exertion because he starts out out of breath
143
+ [779.120 --> 785.400] and sweaty. But a walk around the building before you speak, that can do it. There are
144
+ [785.400 --> 790.000] other ways. If you've ever watched athletes perform and get ready to do their event, they
145
+ [790.000 --> 795.240] listen to music. They focus on a song or a playlist that helps get them in the moment.
146
+ [795.240 --> 801.560] You can do things as simple as counting backwards from 100 by tough numbers like 17. I'm going
147
+ [801.560 --> 805.200] to pause because I know people in the room are trying. Yeah. It gets hard after that
148
+ [805.200 --> 809.880] third or fourth one. I know. My favorite way to get present oriented is to say tongue
149
+ [809.880 --> 814.840] twisters. Saying a tongue twister forces you to be in the moment otherwise you'll say it
150
+ [814.840 --> 819.960] wrong. And it has the added benefit of warming up your voice. Most nervous speakers
151
+ [819.960 --> 823.720] don't warm up their voice. They retreat inside themselves and start saying all these
152
+ [823.720 --> 828.600] bad things to themselves. So saying a tongue twister can help you be both present oriented
153
+ [828.600 --> 833.640] and warm up your voice. Remember I said today we're going to have a lot of participation.
154
+ [833.640 --> 837.920] I'm going to ask you to repeat after me my favorite tongue twister. And I like this tongue
155
+ [837.920 --> 842.440] twister because if you say it wrong, you say a naughty word. And I'm going to be listening
156
+ [842.440 --> 846.840] to see if I hear any naughty words this morning. Okay. Repeat after me. It's only three
157
+ [846.840 --> 862.320] phrases. I slit a sheet. A sheet I slit. And on that slitted sheet I sit. Oh very good.
158
+ [862.320 --> 870.880] No shits. Excellent. Very good. Now in that moment, in that moment, you weren't worried
159
+ [870.880 --> 875.440] about I'm in front of all these people. This is weird. This guy is having me do that. You
160
+ [875.440 --> 879.920] were so focused on saying it right and trying to figure out what the naughty word was that
161
+ [879.920 --> 885.840] you were in the present moment. That's how easy it is. So it's very possible for us to
162
+ [885.840 --> 891.040] manage our anxiety. We can do it initially by greeting the anxiety when we begin to
163
+ [891.040 --> 898.360] feel those signs. We can do it when we reframe the situation as a conversation. And we do
164
+ [898.360 --> 903.520] it when we become present oriented. Those are three of many tools that exist to help
165
+ [903.520 --> 909.200] you manage your anxiety. If you have questions about other ways, I'm happy to chat with you.
166
+ [909.200 --> 912.920] And at the end, I'm going to point you to some resources that you can refer to to help
167
+ [912.920 --> 920.120] you find additional sources. So let's get started on the core part of what we're
168
+ [920.120 --> 925.640] doing today, which is how to feel more comfortable speaking in spontaneous situations. Some very
169
+ [925.640 --> 932.160] simple ground rules for you. First, I'm going to identify four steps that I believe are
170
+ [932.160 --> 937.520] critical to becoming effective at speaking in a spontaneous situation. With each of those
171
+ [937.520 --> 941.920] steps, I'm going to ask you to participate in an activity. None of them are more painful than
172
+ [941.920 --> 946.400] saying the tongue twister out loud. They may require you to stand up. They might require you to
173
+ [946.400 --> 950.960] talk to the person next to you, but none of them are painful. And then finally, I'm going to
174
+ [950.960 --> 957.920] conclude with a phrase or saying that comes from the wonderful world of improvisation. Through the
175
+ [957.920 --> 962.880] Continuing Studies program here at Stanford, for the past five years, I have co-taught a class with
176
+ [962.880 --> 969.760] Adam Tobin. He is a lecturer in the Creative Arts Department. He teaches film and new media.
177
+ [969.760 --> 975.360] And he's an expert at improv. And we've partnered together to help people learn how to speak more
178
+ [975.360 --> 981.200] spontaneously. We call it improvisationally speaking. And Adam has taught me wonderful phrases and
179
+ [981.200 --> 985.840] ideas from improv that I want to impart to you. They really stick. That's why I'm sharing them
180
+ [985.840 --> 989.600] with you to help you remember these techniques. And again, at the end of all this, you'll get a
181
+ [989.600 --> 996.560] handout that has this listed. So let's get started. The very first thing that gets in people's
182
+ [996.560 --> 1005.040] way when it comes to spontaneous speaking is themselves. We get in our own way. We want to be
183
+ [1005.040 --> 1010.720] perfect. We want to give the right answer. We want our toast to be incredibly memorable.
184
+ [1011.520 --> 1018.960] These things are burdened by our effort, by our trying. The best thing we can do, the first step
185
+ [1018.960 --> 1026.960] in our process is to get ourselves out of the way. Easier said than done. Most of us in this room
186
+ [1026.960 --> 1032.640] are in this room because we are type A personalities. We work hard. We think fast. We make sure that we
187
+ [1032.640 --> 1039.200] get things right. But that can actually be a disservice as we try to speak in the moment.
188
+ [1040.800 --> 1044.240] I'd like to demonstrate a little of this for you and I need your help to do that. So we're going
189
+ [1044.240 --> 1050.080] to do our first activity. We are going to do an activity that's called Shout the Wrong Name.
190
+ [1051.200 --> 1056.960] In a moment, if you are able and willing, I'm going to ask you to stand. And I am going to ask you
191
+ [1056.960 --> 1062.400] for about 30 seconds to look all around you in this environment. And you are going to point at
192
+ [1062.400 --> 1066.400] different things. And I know it's rude to point, but for this exercise, please point. I want you
193
+ [1066.400 --> 1071.520] to point to things and you are going to call the things you are pointing to out loud anything,
194
+ [1071.520 --> 1079.440] but what they really are. So I might point to this and say refrigerator. I might point to this and say
195
+ [1079.440 --> 1085.120] cat. I am pointing to anything in your environment around you. It can be the person sitting next to you,
196
+ [1085.120 --> 1090.720] standing next to you. You will just shout and shouting is important. The wrong name.
197
+ [1091.360 --> 1098.480] So in a moment, I'm going to ask you to stand and do that. Please raise your hand if you already
198
+ [1098.480 --> 1105.440] have the first five or six things you're going to call out. Yeah, that's what I'm talking about.
199
+ [1106.320 --> 1113.520] We stockpile. You all are excellent game players. I told you the game. Shout the wrong name.
200
+ [1113.520 --> 1118.800] And you have already begun figuring out how you're going to master the game. That's your brain
201
+ [1118.800 --> 1125.680] trying to help you get it right. I'd like to suggest the only way you can get this activity wrong
202
+ [1126.480 --> 1134.240] is by doing what you've just done. There is no way to get this wrong. Okay, even if I call this a
203
+ [1134.240 --> 1141.600] chair, no penalty will be bestowed upon you. Okay, because I won't know what you were pointing at.
204
+ [1141.600 --> 1145.600] You could have been pointing at the floor under the chair and you called the floor the chair and
205
+ [1145.600 --> 1152.160] you were fine. The point is we are planning and working to get it right. And there is no way to
206
+ [1152.160 --> 1157.760] get it right. Just doing it gets it right. Okay, so let's try this now. We're going to play this game
207
+ [1157.760 --> 1162.240] twice again. It's for 30 seconds. If you are willing and able, will you please stand up? You can do
208
+ [1162.240 --> 1167.040] this seated by the way, but if you're willing and able, let's stand up. Okay, in a moment, I am about
209
+ [1167.040 --> 1173.280] to say go and I would like for you to point at anything around here, including me. It's okay to
210
+ [1173.280 --> 1177.520] point at me. I hope it's not a bad thing you say when you point at me, but point at different
211
+ [1177.520 --> 1184.720] things and loudly and proudly call them different than what they are. Ready? Begin!
212
+ [1184.720 --> 1200.800] Portchia Pine, California, Salt Shaker, Car, Library, Tennis Racket, Purple, Orange,
213
+ [1200.800 --> 1218.000] Putrid. Hello. Time. Time. You can stay standing because in a mere moment, we're going
214
+ [1218.000 --> 1222.080] to do it again. So if you're comfortable standing, we're about to do it again. First, thank you. That
215
+ [1222.080 --> 1226.880] was wonderful. I heard great words being called out. It was fun. Some of you in the back were
216
+ [1226.880 --> 1231.280] doing it in sync. So it looked like you were doing some 70s disco dance. It was awesome. Okay,
217
+ [1231.920 --> 1238.160] this, this was great. Now, let me ask you just a few questions. Did you notice anything about the
218
+ [1238.160 --> 1245.280] words that you were saying? Did we find patterns perhaps? Maybe some of you were going through fruits
219
+ [1245.280 --> 1252.000] and vegetables. A few of you were going through things that started with the letter A. Right?
220
+ [1252.000 --> 1256.640] That's your brain saying, okay, you told me not to stockpile. So I'm going to try to be a little
221
+ [1256.640 --> 1264.560] more devious and I'm going to give you patterns. Okay, same problem. When we teach that class,
222
+ [1264.560 --> 1269.040] I told you about that improvisationally speaking class. We'd like to say your brain is there to help
223
+ [1269.040 --> 1274.240] you. These things it's doing have helped you be successful. But like a windshield wiper, we just
224
+ [1274.240 --> 1280.320] want to wipe those suggestions away and see what happens. Okay, so we're going to do this activity
225
+ [1280.320 --> 1286.880] again. This time, try the best you can to thank your brain if it provides you with patterns or
226
+ [1286.880 --> 1292.400] stockpiles and just say thank you brain and disregard them. Okay, so let's see what happens when we're
227
+ [1292.400 --> 1298.240] not stockpiling and we're not playing off patterns. We'll do this for only 15 seconds. See how this
228
+ [1298.240 --> 1314.480] feels, baby steps. Ready? Begin. Codec. Bicycle chain. Skateboard. Bananas. Purple.
229
+ [1314.480 --> 1332.160] Dutrid. Time. Please have a seat. Thank you again. Did you notice a difference between the
230
+ [1332.160 --> 1341.120] second time and the first time? Yes, was it a little easier that second time? No. That's okay.
231
+ [1341.120 --> 1345.920] We're just starting. These skills are not like a light switch. It's not like you learn these
232
+ [1345.920 --> 1351.680] skills and then all of the sudden you can execute on them. This is a wonderful game. This is a
233
+ [1351.680 --> 1358.400] wonderful game to train your brain to get out of its own way. You can play this game anywhere,
234
+ [1358.400 --> 1363.520] anytime. I like to play this game when I'm sitting in traffic. It makes me feel better when
235
+ [1363.520 --> 1368.000] I shout things out. They're not the naughty things that I want to be shouting out, but I shout out
236
+ [1368.000 --> 1372.480] things and it helps. You're training yourself to get out of your own way. You're working against
237
+ [1372.480 --> 1377.760] the muscle memory that you've developed over the course of your life with a brain that acts very
238
+ [1377.760 --> 1382.800] fast to help you solve problems. But in essence, in spontaneous speaking situations, you put too
239
+ [1382.800 --> 1389.600] much pressure on yourself trying to figure out how to get it right. So a game like this teaches
240
+ [1389.600 --> 1396.640] us to get out of our own way. It teaches us to see the things that we do that prevent us from acting
241
+ [1396.640 --> 1405.360] spontaneously. In essence, we are reacting rather than responding. To react means to act again.
242
+ [1406.320 --> 1410.720] You've thought it and now you're acting on it that takes too long and it's too thoughtful. We want
243
+ [1410.720 --> 1418.480] to respond in a way that's genuine and authentic. So the maxim I would like for you to take from
244
+ [1418.480 --> 1425.760] this, and again these maxims come from improvisation, is one of my favorites: dare to be dull. In a room like
245
+ [1425.760 --> 1431.840] this telling you dare to be dull is offensive and I apologize, but this will help rather than
246
+ [1431.840 --> 1440.560] striving for greatness dare to be dull. And if you dare to be dull and allow yourself that,
247
+ [1440.560 --> 1446.560] you will reach that greatness. It's when you set greatness as your target that it gets in the way
248
+ [1446.560 --> 1453.440] of you ever getting there because you over evaluate you over analyze you freeze up. So the first step
249
+ [1453.520 --> 1461.040] in our process today is to get out of our own way. Dare to be dull. Easier said than done, but
250
+ [1461.040 --> 1467.600] practicing a game just as simple as the one we played is a great way to do it. But that's not
251
+ [1467.600 --> 1474.480] enough getting out of our own way is important, but the second step of our process has us change how
252
+ [1474.480 --> 1481.200] we see the situation we find ourselves in. We need to see the speaking opportunity that we are a part
253
+ [1481.840 --> 1490.080] of as an opportunity rather than a challenge and a threat. When I coach executives on Q&A skills
254
+ [1490.800 --> 1498.640] when they go in front of the media or whatever investors, they see it as an adversarial experience,
255
+ [1499.280 --> 1505.040] me versus them. And one of the first things I work on is change the way you approach it.
256
+ [1505.840 --> 1510.720] A Q&A session for example is an opportunity for you. It's an opportunity to clarify. It's an
257
+ [1510.720 --> 1516.480] opportunity to understand what people are thinking. So if we look at it as an opportunity it feels very
258
+ [1516.480 --> 1522.960] different. We see it differently and therefore we have more freedom to respond. When I feel that you
259
+ [1522.960 --> 1529.920] are challenging me I am going to do the bare minimum to respond and protect myself. If I see this
260
+ [1529.920 --> 1535.280] as an opportunity where I have a chance to explain and expand I'm going to interact differently
261
+ [1535.360 --> 1541.200] with you. So spontaneous speaking situations are ones that afford you opportunities.
262
+ [1542.240 --> 1546.240] So when you're at a corporate dinner and your boss turns to you and says, oh, you know him better than
263
+ [1546.240 --> 1551.280] the rest, would you mind introducing him? You say, great, thank you for the opportunity, rather than
264
+ [1552.480 --> 1562.080] right I better get this right. So see things as an opportunity. I have a game to play to help us with
265
+ [1563.040 --> 1568.080] this. This is a fun one. The holidays are approaching. We all in this room are going to give and
266
+ [1568.080 --> 1574.720] receive gifts. Here's how this game will work. It works best if you have a partner. So I'm hoping
267
+ [1574.720 --> 1578.720] you can work with somebody sitting next to you. If there's nobody sitting next to you turn around
268
+ [1578.720 --> 1583.200] introduce yourself great way to connect. If not you can play this game by yourself it's just a
269
+ [1583.200 --> 1588.080] little harder and you can't do the second part of the game. So after I explain the game this gives
270
+ [1588.080 --> 1593.360] you a chance to get to know somebody. Here's how it works. If you have a partner you and your
271
+ [1593.360 --> 1599.840] partner are going to exchange imaginary gifts. Pretend you have a gift. It can be a big gift.
272
+ [1599.840 --> 1606.400] It can be a small gift and you will give your gift to your partner. Your partner will take the gift
273
+ [1606.400 --> 1611.680] and open it up and will tell you what you gave them because you have not you just gave them a gift.
274
+ [1611.680 --> 1615.840] So you are going to open up the box and you're going to look inside and you are going to say the
275
+ [1615.840 --> 1619.920] first thing that comes to your mind in the moment. Not the thing you have all just thought of.
276
+ [1622.400 --> 1627.040] Or the thing after that. Remember what we talked about before? That still applies. That's still in
277
+ [1627.040 --> 1633.040] play. Okay, you're stockpiling. Look in there. My favorite thing that I said: somebody gave me
278
+ [1633.040 --> 1638.160] a gift while playing this game. I looked inside and I saw a frog leg. I don't know why I saw a
279
+ [1638.160 --> 1645.360] frog leg but that's what I said. That's the first part of the activity. Now the opportunity is
280
+ [1645.360 --> 1650.160] twofold in this game. The opportunity is for you, the gift receiver to name a gift. That's kind
281
+ [1650.160 --> 1655.760] of fun. That's an opportunity. It's not a threat. But the real opportunity is for the gift giver
282
+ [1655.760 --> 1661.120] because the gift giver then has to say, so you look and you say thank you for giving me a frog's
283
+ [1661.120 --> 1667.840] leg and the person will look at you and say, I knew you wanted a frog's leg because. So whatever
284
+ [1667.840 --> 1673.680] you find, the person who has received it is going to say, absolutely, I'm so glad you're happy. I
285
+ [1673.680 --> 1680.800] got it for you because. So you have to respond to whatever they say. What a great opportunity.
286
+ [1680.800 --> 1683.360] Now some of you are sitting there and you're like, oh that's hard. I don't want to make a
287
+ [1683.360 --> 1688.000] fool of myself. Others of you, if you're following this advice, are saying, what a great opportunity.
288
+ [1689.200 --> 1693.920] So the game again is played like this. You and your partner will exchange, each will exchange a gift.
289
+ [1693.920 --> 1698.240] One will start and the other will follow. The first person will give a gift to the second person,
290
+ [1698.240 --> 1703.120] second person opens the box. However big the box is. And if the box is big and you find a penny in it,
291
+ [1703.120 --> 1707.600] perfect, doesn't matter. The box is heavy and you find a feather in it, fine. It does, there's
292
+ [1707.600 --> 1711.760] no way to get it wrong. Okay. Whatever's in the box is in the box. You can return it and get what
293
+ [1711.760 --> 1720.000] you wanted later. Okay. Then you will name it. You will say thank you for the, whatever
294
+ [1720.000 --> 1725.360] you saw in the box. The person who gave it to you will say, I'm so glad you're excited. I got it
295
+ [1725.360 --> 1731.280] for you because. And you will give a reason that you got them whatever they decided you gave them.
296
+ [1731.920 --> 1736.720] Makes sense? All right. So very quickly just in five seconds, find a partner if you're
297
+ [1736.720 --> 1739.840] willing to do this with a partner. Everybody have a partner? Okay.
298
+ [1745.200 --> 1752.400] All right. In your partnerships, in your partnerships, pick an A person and a B person.
299
+ [1752.400 --> 1759.280] You may stand or sit. It's totally up to you. Pick an A and pick a B. Okay.
300
+ [1760.400 --> 1770.720] B goes first. Ha ha ha. All right. B give A a gift. B give A a gift. A thank them.
301
+ [1771.760 --> 1774.800] And then B will name and give the reason they gave it to him.
302
+ [1782.400 --> 1811.040] If you have not switched, switch please. If you have not switched, switch please.
303
+ [1812.400 --> 1841.520] Let's wrap it up in 30 seconds please. Let's wrap it up.
304
+ [1842.960 --> 1853.680] All right. If we can all have our seats.
305
+ [1859.600 --> 1869.040] If we can all take our seats please. I know I'm telling a room of many
306
+ [1869.520 --> 1873.440] NBA Alums to stop talking and that's hard.
307
+ [1879.280 --> 1883.760] All right. Ladies and gentlemen, did you get what you wanted?
308
+ [1885.200 --> 1890.480] Pretty neat. Hi. You always get what you want. Now for some of you, this was really hard
309
+ [1890.480 --> 1895.520] because you were really taking the challenge and not seeing what was in the box until you looked
310
+ [1895.520 --> 1901.520] in there. Was anybody surprised by what you found in the box? What did you find, sir?
311
+ [1901.520 --> 1911.520] What was in the box? Wow. Nice. Nice. If you've got a Ferrari, you need a transmission.
312
+ [1911.520 --> 1914.880] I like it. Who else found something that was surprising? What did you find?
313
+ [1916.080 --> 1923.920] A live unicorn. That's a great gift. Right? How was it as the gift giver? Were you surprised
314
+ [1924.000 --> 1928.800] at what your partner found in the box? Isn't it interesting that when we give an imaginary gift,
315
+ [1928.800 --> 1931.920] knowing that the person's going to name it, we already have in mind what they're going to find?
316
+ [1932.800 --> 1936.320] And when they say live unicorn, we go, well, that's interesting, right?
317
+ [1938.720 --> 1944.720] The point of this game is to one, remind ourselves we have to get out of our own way like we talked
318
+ [1944.720 --> 1951.360] about before. But to see this as an opportunity and to have fun, I love watching people play this game.
319
+ [1951.360 --> 1956.240] The number of smiles that I saw amongst you. And I have to admit, when I first started,
320
+ [1956.240 --> 1961.440] some of you looked a little dour, a little doubting. But in that last game, you were all smiling and
321
+ [1961.440 --> 1968.160] look like you were having fun. So when you reframe the spontaneous speaking opportunity as an opportunity,
322
+ [1968.160 --> 1975.920] as something that you can co-create and share, all of a sudden you are less nervous, less defensive,
323
+ [1976.480 --> 1980.560] and you can accomplish something pretty darn good, in this case, a fun outcome.
324
+ [1981.600 --> 1989.440] This reminds us of perhaps the most famous of all improvisation sayings. Yes, and. A lot of us live
325
+ [1989.440 --> 1996.960] our communication lives saying no but. Yes, and opens up a tremendous amount of opportunities.
326
+ [1996.960 --> 2000.960] And this doesn't mean you have to say yes, and to a question somebody asks, this just means the
327
+ [2000.960 --> 2006.960] approach you take to the situation. So you're going to ask me questions, that's an opportunity.
328
+ [2006.960 --> 2014.240] Yes, and I will follow through, versus no, and being defensive. So we've accomplished the first two
329
+ [2014.240 --> 2020.400] steps of our process. First we get out of our own way, and second, we reframe the situation as an
330
+ [2020.400 --> 2030.560] opportunity. The next phase is also hard but very rewarding. And that is to slow down and listen.
331
+ [2031.440 --> 2037.360] You need to understand the demands of the situation you find yourself in, in order to respond
332
+ [2037.360 --> 2044.720] appropriately. But often we jump ahead. We listen just enough to think we got it and then we go
333
+ [2044.720 --> 2050.080] ahead starting to think about what we're going to respond and then we respond. We really need to
334
+ [2050.080 --> 2055.680] listen because fundamentally as a communicator your job is to be in service of your audience. And if
335
+ [2055.680 --> 2060.800] you don't understand what your audience is asking or needs, you can't fulfill that obligation. So we
336
+ [2060.800 --> 2072.560] need to slow down and listen. I have a fun game to play. In this game you are going to S-P-E-L-L, E-V-E-R-Y,
337
+ [2072.560 --> 2087.520] T-H-I-N-G, Y-O-U, S-A-Y, T-O, Y-O-U-R, P-A-R-T-N-E-R. I will translate. You're going to get with the same
338
+ [2087.520 --> 2092.240] partner you just worked with. And you are going to have a very brief conversation about something
339
+ [2092.240 --> 2096.800] fun that you plan to do today. I know this is the most fun you're going to have all day but the
340
+ [2096.800 --> 2100.560] next fun thing you're going to do today. You are going to tell your partner what you are going to
341
+ [2100.560 --> 2111.200] do that will be fun today but you are going to do so by S-P-E-L-L-I-N-G I-T. So you're going to spell it.
342
+ [2111.200 --> 2121.440] It's okay if you are not a good speller. You'll see the benefit of doing this. So with the partner
343
+ [2121.440 --> 2126.400] you just worked with, person A is going to go first this time. You are simply going to tell your
344
+ [2126.400 --> 2132.800] partner, actually you're going to spell to your partner, something fun that you're
345
+ [2132.800 --> 2140.560] going to do today. Do what you are really going to do for fun and not do things like F-E-E-D-T-H-E-C-A-T.
346
+ [2140.560 --> 2146.240] Right? Just because you don't want to spell. Right? So you can use big words. All right. 30 seconds
347
+ [2146.240 --> 2149.200] each. Spell to your partner something fun that you're going to do today.
348
+ [2154.960 --> 2159.040] Would you like to play? Go ahead.
349
+ [2159.040 --> 2161.840] T-G-R-T-H-E-G-A-M-E.
350
+ [2162.960 --> 2166.560] Oh my goodness. Say it again. Spell it again. Yep. Yep.
351
+ [2168.240 --> 2175.520] E-X-C-E-L-L-E-N-T. I-H-O-P-E-T-H-A-T-H-E-Y-W-I-N.
352
+ [2180.960 --> 2182.720] Thank you. That was very good. Thank you.
353
+ [2189.040 --> 2209.840] If you have not switched, switch. Take 30 more seconds with the new partner spelling.
354
+ [2219.040 --> 2244.080] G-R-E-T-E-X-C-A-M-E-Y-O-U-P-L-E-A-S-E-T-A-K-E-Y-O-U-R-S-E-A-T.
355
+ [2249.840 --> 2256.640] So what did we learn? What did we learn besides that we're not so good at spelling?
356
+ [2259.280 --> 2266.560] You have to pause between the words. How did this change your interaction with the person you
357
+ [2266.560 --> 2275.120] were interacting with? What did you have to do? Focus and listen and you can't be thinking ahead.
358
+ [2275.120 --> 2281.920] You have to be in the moment. When you listen and truly understand what the person is trying to say,
359
+ [2281.920 --> 2287.760] then you can respond in a better way, a more targeted response. We often don't listen.
360
+ [2289.360 --> 2297.040] So we start by getting out of our own way. We then reframe the situation as an opportunity.
361
+ [2297.040 --> 2301.760] Those are things we do inside our head. But in the moment of interacting, we have to listen first
362
+ [2301.760 --> 2309.920] before we can respond to the spontaneous request. Perhaps my most favorite maxim comes from this
363
+ [2309.920 --> 2320.240] activity. Don't just do something. Stand there. Listen, listen, and then respond.
364
+ [2322.160 --> 2328.480] Now how do we respond? That brings us to the fourth part of our process. And that is we have to
365
+ [2328.480 --> 2334.960] tell a story. We respond in a way that has a structure. All stories have structure. We have to
366
+ [2334.960 --> 2341.600] respond in a structured way. The key to successful spontaneous speaking and by the way, plan speaking
367
+ [2341.600 --> 2348.640] is having a structure. I would like to introduce you to two of the most prevalent and popular and
368
+ [2348.640 --> 2354.480] useful structures you can use to communicate a message in a spontaneous situation. But before we
369
+ [2354.480 --> 2359.440] get there, we have to talk about the value of structure. It increases what is called processing
370
+ [2359.440 --> 2366.400] fluency, the effectiveness of which or through which we process information. We actually process
371
+ [2366.400 --> 2372.640] structured information roughly 40% more effectively and efficiently than information that's not structured.
372
+ [2373.600 --> 2378.800] I love looking out in this audience because you will remember as I remember phone numbers when you
373
+ [2378.800 --> 2384.720] had to remember them if you wanted to call somebody. Young folks today don't need to remember phone
374
+ [2384.720 --> 2388.240] numbers. They just need to look at a picture, push a button, and then the voice starts talking on
375
+ [2388.240 --> 2392.560] the other end. Ten digit phone numbers, it's actually hard to remember ten digit phone numbers.
376
+ [2392.560 --> 2398.800] How did you do it? You chunked it into a structure. Three, three, and four. Structure helps us remember.
377
+ [2400.080 --> 2404.960] The same is true when speaking spontaneously or in a planned situation. So let me introduce you
378
+ [2404.960 --> 2409.440] to two useful structures. The first useful structure you have probably heard or used in some
379
+ [2409.440 --> 2415.680] incarnation, it is the problem, solution, benefit structure. You start by talking about what the issue
380
+ [2415.680 --> 2421.440] is, the problem. You then talk about a way of solving it and then you talk about the benefits of
381
+ [2421.440 --> 2426.320] following through on it. Very persuasive, very effective. Helps you as the speaker remember, it
382
+ [2426.320 --> 2431.920] helps your audience know where you're going with it. When I was a tour guide on this campus many,
383
+ [2432.320 --> 2437.680] many years ago. What do you think the single most important thing they drilled into our head? It
384
+ [2437.680 --> 2442.880] took a full quarter, by the way, to train to be a tour guide here. They used to line us up at one
385
+ [2442.880 --> 2448.000] end of the quad and have us walk backwards straight and if you failed you had to start over. To this
386
+ [2448.000 --> 2452.560] day I can walk backwards in a straight line because of that. As part of that training, what do you
387
+ [2452.560 --> 2462.240] think the most important thing they taught us was? Never lose your tour group. I'm not sure,
388
+ [2462.240 --> 2469.440] never lose your tour group. The same is true as a presenter. Never lose your audience. The way
389
+ [2469.440 --> 2474.080] you keep your audience on track is by providing structure. None of you would go on a tour with me.
390
+ [2474.080 --> 2480.000] If I said, hi, my name is Matt, let's go. You want to know where you're going, why you're going
391
+ [2480.160 --> 2484.240] there? How long it's going to take? You need to set expectations and structure does that.
392
+ [2484.240 --> 2489.760] Problems solution benefit is a wonderful structure to have in your back pocket. It's something
393
+ [2489.760 --> 2495.920] that you can use quickly when you're in the moment. It can be reframed so it's not always a problem
394
+ [2495.920 --> 2499.920] you're talking about. Maybe it's an opportunity. Maybe there's a market opportunity you want to go
395
+ [2499.920 --> 2504.160] out and capture. It's not a problem that we're not doing it but maybe we'd be better off if we did.
396
+ [2504.160 --> 2509.280] So it becomes opportunity solution which are the steps to achieve it and then the benefit.
397
+ [2510.880 --> 2520.320] Another structure which works equally well is the what, so what, now what structure. You start by
398
+ [2520.320 --> 2526.880] talking about what it is. Then you talk about why it's important and then what the next steps are.
399
+ [2527.840 --> 2535.840] This is a wonderful formula for answering questions, for introducing people. So if I'm in the moment
400
+ [2535.840 --> 2540.080] somebody asks me to introduce somebody, I change the what to who. I say who they are, why they're
401
+ [2540.080 --> 2543.840] important and what we're going to do next. Maybe listen to them, maybe drink our wine, whatever.
402
+ [2545.280 --> 2549.840] What, so what, now what. The reality is this. When you are in a spontaneous speaking situation,
403
+ [2549.840 --> 2555.200] you have to do two things simultaneously. You have to figure out what to say and how to say it.
404
+ [2555.200 --> 2558.960] These structures help you by telling you how to say it.
405
+ [2561.840 --> 2566.240] If you can become comfortable with these structures, you can be in a situation where you can
406
+ [2566.240 --> 2573.040] respond very ably to spontaneous speaking situations. We're going to practice because that's what we do.
407
+ [2574.000 --> 2577.440] Here's the situation. Is everybody familiar with this child's toy? It's a slinky.
408
+ [2577.760 --> 2587.120] You are going to sell this slinky to your partner using either problem solution benefit or
409
+ [2587.120 --> 2593.120] opportunity solution benefit. What does the slinky provide you? Or you could use what, so what,
410
+ [2593.120 --> 2596.640] now what. What is it? Why is it important? The next steps might be to buy it.
411
+ [2597.440 --> 2602.480] By using that structure, see how already it helps you? It helps you focus.
412
+ [2603.440 --> 2608.400] We're only going to have one partner sell to the other partner.
413
+ [2609.840 --> 2613.840] So get with your partner. One of you will volunteer to sell to the other.
414
+ [2614.880 --> 2620.880] Sell a slinky using problem solution benefit or what, so what, now what. Please begin.
415
+ [2632.480 --> 2642.400] So we have the handouts but I'm also going to be doing the microphone.
416
+ [2642.400 --> 2646.240] So when I debrief this, you can go ahead and pass them out. Does that make sense?
417
+ [2647.440 --> 2649.600] No, no, after this activity.
418
+ [2692.480 --> 2710.000] 30 more seconds please.
419
+ [2710.000 --> 2726.000] Excellent. Let's all close the deal, seal the deal.
420
+ [2731.040 --> 2735.920] I have never seen more people in one place doing this at the same time.
421
+ [2735.920 --> 2741.680] I love it. I teach people to gesture and gesture big. It's great. I love it.
422
+ [2741.680 --> 2748.160] So if you were the recipient of the sales pitch, thumbs up. Did they do a good job?
423
+ [2748.160 --> 2754.720] Did they use the structure? Awesome. I'm recruiting you all for my next business as my sales people.
424
+ [2754.720 --> 2761.040] Please try to ignore this but as we're speaking, the handout I told you about is coming around.
425
+ [2761.760 --> 2766.960] On the back of that handout, you are going to see a list of structures, the two we talked about,
426
+ [2766.960 --> 2770.800] and several others that can help you in spontaneous speaking situations.
427
+ [2771.440 --> 2776.720] These structures help because they help you understand how you're going to say what you say.
428
+ [2777.440 --> 2781.680] Structure sets you free and I know that's kind of ironic but it's true. If you have that
429
+ [2781.680 --> 2785.040] structure, then you're free to think about what it is you're going to say.
430
+ [2786.160 --> 2790.800] It reduces the cognitive load of figuring out what you're saying and how you're going to say it.
431
+ [2791.360 --> 2792.720] All of this is on that handout.
432
+ [2795.520 --> 2800.400] So what does this all mean? It means that we have within our ability
433
+ [2802.320 --> 2807.040] the tools and the approaches to help us in spontaneous speaking situations.
434
+ [2807.040 --> 2812.000] The very first thing we have to do is manage our anxiety because you can't be an effective speaker
435
+ [2812.720 --> 2818.320] if you don't have your anxiety under control. And we talked about how you can do that by greeting
436
+ [2818.320 --> 2821.920] your anxiety, reframing as a conversation and being in the present moment.
437
+ [2823.440 --> 2830.240] Once you do that, you need to practice a series of four steps that will help you speak spontaneously.
438
+ [2830.240 --> 2835.680] First, you get out of your own way. I would love it if all of you on your way from here to the football
439
+ [2835.680 --> 2841.920] game, point at things and call them the wrong name. It'll be fun. If most of us do it, then it won't
440
+ [2841.920 --> 2848.960] be weird. If only one and two of us do it will be weird. Second, give gifts. By that I mean see
441
+ [2848.960 --> 2856.240] your interactions as ones of opportunity, not challenges. Third, take the time to listen.
442
+ [2857.600 --> 2864.800] Listen. And then finally, use structures. And you have to practice these structures. I practice
443
+ [2864.800 --> 2868.640] these structures on my kids. I have two kids. When they ask me questions, I usually answer them
444
+ [2868.640 --> 2874.080] in what, so what, now what. They don't know it. But when they go over to their friends' houses and
445
+ [2874.080 --> 2878.480] they see their friends ask their dads questions, they don't get what, so what, now what. So, you know,
446
+ [2878.480 --> 2882.160] you have to practice. The more you practice, the more comfortable you will become.
447
+ [2883.600 --> 2888.320] Ultimately, you have the opportunity before you to become more compelling, more confident,
448
+ [2888.320 --> 2895.440] more connected as a speaker if you leverage these techniques. If you're interested in learning
449
+ [2895.440 --> 2900.000] more, this is where I do a little plug. I've written a book, many of the MBA students who take the
450
+ [2900.000 --> 2904.160] strategic communication classes here that I and others teach, read it. It's called Speaking Up
451
+ [2904.160 --> 2909.840] Without Freaking Out. More importantly, there's a website here that I curate called No Freaking
452
+ [2909.840 --> 2915.200] Speaking. And it has lots of information that I've written and others have written about how to
453
+ [2915.200 --> 2920.720] become more effective at speaking. So that's the end of my plug. What I'd really like to do is enter
454
+ [2920.720 --> 2926.160] into a spontaneous speaking situation with you. And I would love to entertain any questions that
455
+ [2926.160 --> 2930.560] you have. There are two people who are running around with microphones. So some of us who remember
456
+ [2930.560 --> 2935.280] the Phil Donahue show, we're going to do a little bit of that. If you have a question, the microphone
457
+ [2935.280 --> 2941.680] will come and I'm happy to answer it. I think if you can do it on. Yes, yeah. A week in here.
458
+ [2941.680 --> 2948.400] You can talk about hostile situations. Hostile situations. Yes. So when you find yourself in a
459
+ [2948.400 --> 2954.400] challenging situation, first, it should not be a surprise to you. It should not be a surprise.
460
+ [2954.400 --> 2959.200] Before you ever speak, you should think about what is the environment going to be like. So it
461
+ [2959.200 --> 2965.280] shouldn't surprise you that there might be some challenges in the room. When there are hostile
462
+ [2965.280 --> 2970.960] situations that arise, you have to acknowledge it. So if somebody says, that's a ridiculous idea.
463
+ [2970.960 --> 2975.680] Why did you come up with that? To simply say, so the idea I came up with was, right?
464
+ [2975.680 --> 2979.120] That doesn't acknowledge the emotion. I recommend not naming the emotion.
465
+ [2980.560 --> 2984.240] So you sound really angry. I'm not angry. I'm frustrated. Now we're arguing over their
466
+ [2984.240 --> 2989.120] mental state, emotional state. So I say something like, I hear you have a lot of passion on this
467
+ [2989.120 --> 2993.200] issue or I hear there's great concern from you. So you acknowledge the emotion because otherwise
468
+ [2993.200 --> 2998.800] it sits in the room and then reframe and respond the way that makes sense. So if somebody raises
469
+ [2998.800 --> 3003.040] their hand and says, your product is ridiculously priced. Why do you charge so much?
470
+ [3003.440 --> 3009.280] I might say, I hear great concern. And what you're really asking about is the value of our product.
471
+ [3009.280 --> 3013.040] And I would give my value proposition and then I would come back and say, and because of the value
472
+ [3013.040 --> 3018.480] we provide, we believe it's priced fairly. So you answer the question about price, but you've
473
+ [3018.480 --> 3025.760] reframed it in a way that you feel more comfortable answering it. So the way to do this is to practice
474
+ [3025.760 --> 3031.200] all the skills we just talked about. The only skill that I'm adding to this is the awareness
475
+ [3031.200 --> 3036.480] in advance that you might be in that situation. First, I have to truly listen to what I'm hearing.
476
+ [3036.480 --> 3041.840] Right? It's very easy for me when I hear a challenging question to get all defensive and not
477
+ [3041.840 --> 3047.040] hear what the person's asking. I see it as an opportunity to reframe and explain. Okay? So again,
478
+ [3047.040 --> 3051.760] you have to practice, but that's how I think you address it. Other other questions? I see a
479
+ [3051.760 --> 3055.360] question back here. Yes, please. That's first of all, thank you very much. Great, great presentation.
480
+ [3055.360 --> 3059.920] Thank you. For a lot of the speaking I do, I have remote audiences.
481
+ [3059.920 --> 3065.680] audiences distributed all over the country with telecom. Any tips for those kinds of audiences?
482
+ [3065.680 --> 3071.360] So when you are speaking in a situation where not everybody is co-located, okay? In fact,
483
+ [3071.360 --> 3074.960] right at this very moment, there are people watching this presentation remotely.
484
+ [3076.480 --> 3083.360] What you need to do is be mindful of it. Second, try to include engagement techniques where the
485
+ [3083.360 --> 3089.280] audience actually has to do something. So physical participation is what we did here through the
486
+ [3089.280 --> 3094.480] games. You can ask your audience to imagine something. Imagine what it would be like if,
487
+ [3094.480 --> 3098.000] when we try to achieve a goal. Rather than say, here's the goal we're trying to achieve.
488
+ [3098.000 --> 3101.760] Say, imagine what it would be like if. See what that does to you. It pulls you in.
489
+ [3101.760 --> 3106.000] I can take polling questions. Most of the technology that you're referring to has some kind of
490
+ [3106.000 --> 3111.760] polling feature. You can open up some kind of wiki or Google Doc or some collaborative tool
491
+ [3111.760 --> 3116.000] where people can be doing things and you can be monitoring that while you're presenting.
492
+ [3116.560 --> 3121.120] So I might take some breaks. I talk for 10, 15 minutes and say, okay, let's apply this and let's
493
+ [3121.120 --> 3126.400] go into this Google Doc I've created and I see what people are doing. So it's about variety
494
+ [3126.400 --> 3131.280] and it's about engagement. Those are the ways that you really connect to people who are remote from
495
+ [3131.280 --> 3137.280] you. Other questions? You're pointing out, I've got to look for where the mic is. Yes, please.
496
+ [3137.280 --> 3142.000] This may be a similar to the first question, but I do a lot of expert witness testimony.
497
+ [3142.000 --> 3145.920] What's your recommendation for handling cross examination? Specifically.
498
+ [3146.960 --> 3149.040] Specifically, I feel like I'm being cross-examined. Right.
499
+ [3150.560 --> 3156.080] So in any speaking situation that you go into that has some planned element to it,
500
+ [3156.080 --> 3160.800] I recommend identifying certain themes that you think are important or believe need to come out.
501
+ [3160.800 --> 3164.960] And then with each one of those themes, have some examples and concrete evidence that you can
502
+ [3164.960 --> 3171.760] use to support it. You don't go in with memorized terms or ways of saying it. You just have ideas
503
+ [3171.760 --> 3176.000] and themes and then you put them together as necessary. So when I'm in a situation where people
504
+ [3176.000 --> 3181.440] are interrogating me, I have certain themes that I want to get across and make sure that I can do
505
+ [3181.440 --> 3189.760] that in a way that fits the needs in the moment. If it's hostile, again, the single best tool you have
506
+ [3189.760 --> 3194.480] to buy yourself time and to help you answer a question efficiently is paraphrasing.
507
+ [3194.480 --> 3200.080] The paraphrase is like the Swiss Army knife of communication. If you remember the show MacGyver,
508
+ [3200.080 --> 3206.320] it's your MacGyver tool. Right. So when a question comes in, the way you paraphrase it allows you the
509
+ [3206.320 --> 3213.600] opportunity to reframe it, to think about your answer, and to pause and make sure you got it right.
510
+ [3213.600 --> 3216.880] So when you're under those situations, if you have the opportunity to paraphrase, say so what you're
511
+ [3216.880 --> 3223.120] really asking about is x, y, and z, that gives you the opportunity to employ one of these techniques.
512
+ [3223.120 --> 3228.320] Now I've never been an expert witness because I'm not an expert on anything, but those tools I believe
513
+ [3228.400 --> 3234.000] could be helpful. The microphone is back there. Thank you. Thank you so much. This has been so
514
+ [3234.000 --> 3238.640] helpful and enjoyable this morning. Thank you. Would you please show the last screen so we can get
515
+ [3238.640 --> 3243.680] down the name of the book that you've written and the information? Absolutely. Thank you.
516
+ [3244.400 --> 3248.240] I think they actually, you might even have an opportunity to, you know, it's on the sheet too.
517
+ [3248.240 --> 3252.480] Everything I said is on the back of that sheet, but I'm happy to have this behind me while I talk.
518
+ [3253.200 --> 3262.240] Other questions? Yes, please. Yes. I work with groups that represent many different cultural
519
+ [3262.240 --> 3268.160] backgrounds. So are there any caveats or is this a universal strategy?
520
+ [3269.280 --> 3275.680] So in terms of, from your perspective as the speaker, I believe this applies. But whenever you
521
+ [3275.680 --> 3281.360] communicate, part of the listening aspect is also thinking about is, who is my audience
522
+ [3281.360 --> 3285.680] and what are their expectations? So what are the cultural expectations of the audience that I'm
523
+ [3285.680 --> 3291.360] presenting to? So there might be certain norms and rules that are expected. So when I travel and do
524
+ [3291.360 --> 3298.080] talks, I have to take into account where I'm doing the presentation. So I help present in the
525
+ [3298.080 --> 3303.200] Ignite program. And if you have not heard about the Ignite program, here at the GSB, it's fantastic.
526
+ [3303.200 --> 3307.840] And I just did a presentation standing in one of these awesome classrooms that have all these
527
+ [3307.840 --> 3315.440] cameras. And I just taught 35 people in Santiago, Chile. And I needed to understand the cultural
528
+ [3315.440 --> 3321.280] expectations of that area and what they expect and what they're willing to do when I ask them to
529
+ [3321.280 --> 3326.640] participate. So it's part of that listening step where you reflect on what are the expectations of
530
+ [3326.640 --> 3330.720] the audience. I think we have time for two more questions. And then I'm going to hang around afterwards
531
+ [3330.720 --> 3334.880] if anybody has individual questions. But some of these folks really want me to keep on sketching.
532
+ [3334.880 --> 3338.160] Yes, please. I wanted to ask a question. One of the things that you've done effectively in your
533
+ [3338.160 --> 3343.360] talking and I've seen other effective speakers do is interject humor in their talk. How what are
534
+ [3343.360 --> 3348.480] the risks and rewards of trying to do that? Well, first, thank you. And I appreciate all of you laughing.
535
+ [3348.480 --> 3352.640] Those are the sum total of all my jokes. You've heard them. I am not funny beyond those jokes.
536
+ [3353.520 --> 3359.120] So humor is wonderfully connecting. It's wonderfully connecting. It's a great tool for connection.
537
+ [3359.120 --> 3365.680] It is very, very risky. Cultural reasons get in the way. Sometimes what you think is funny isn't
538
+ [3365.680 --> 3372.000] funny to other people. What research tells us is that if you're going to try to be funny, self-deprecating
539
+ [3372.000 --> 3378.960] humor is your best bet. Because it is the least risky. There is nothing worse than putting out a joke
540
+ [3378.960 --> 3385.040] and having no response. It actually sets you back farther than the boost you
541
+ [3385.040 --> 3390.560] would have gotten if the joke had hit. So basic fundamentals you need to think about with humor.
542
+ [3390.560 --> 3396.960] One, is it funny? How do I know? I ask other people first. Second, what happens if it doesn't work?
543
+ [3396.960 --> 3402.320] Have a backup plan. And then third, if you're worried about the answers to those first two,
544
+ [3402.320 --> 3407.440] don't do it. One last question, please. The microphone is right here. And then like I said,
545
+ [3407.440 --> 3413.360] I will hang around afterwards. Yes, please. I am sort of on the opposite side of this since I am a
546
+ [3413.360 --> 3419.200] journalist and I frequently have to ask spontaneous questions of people who have been through media
547
+ [3419.200 --> 3431.680] training. Yes. So any tips for chinks in the armor? A way to ask a question without being antagonistic,
548
+ [3431.680 --> 3438.000] but get a facsimile of a straight answer. Well, so let me give you two answers. One is I have young
549
+ [3438.160 --> 3444.560] boys and the power of the why is great. Just ask why a couple times and you can get through the first two
550
+ [3444.560 --> 3450.800] layers of training. Why do you say that? How do you feel about that? The second bit is
551
+ [3452.400 --> 3456.720] what I have found successful in getting people to answer in a more
552
+ [3456.720 --> 3461.840] authentic way. What I'll do is I'll ask them to give advice. So what advice would you give
553
+ [3461.840 --> 3466.480] somebody who's challenged with this or what advice would you give to somebody in this situation?
554
+ [3466.480 --> 3471.520] And by asking for the advice, it changes the relationship they have to me as the question
555
+ [3471.520 --> 3476.560] asker and I often get much more rich detailed information. So the power of the why and then put
556
+ [3476.560 --> 3483.040] them in a position of providing guidance and that can really work. With that, I am going to thank
557
+ [3483.040 --> 3488.560] you very much. I welcome you to ask questions later and enjoy the rest of your reunion weekend. Thank
558
+ [3488.560 --> 3494.780] you.
transcript/allocentric_HlEWIAiqSoc.txt ADDED
@@ -0,0 +1,160 @@
 
 
 
 
1
+ [0.000 --> 4.680] Hang on a second.
2
+ [4.680 --> 7.200] I think that's an autistic person.
3
+ [7.200 --> 15.640] Alrighty then, these are the top signs and traits to look out for if you think an adult
4
+ [15.640 --> 17.600] in your life may be autistic.
5
+ [17.600 --> 22.800] The first sign to spot an autistic adult is that they prefer alone time rather than
6
+ [22.800 --> 24.600] the company of others.
7
+ [24.600 --> 28.960] So while they may like spending time with you, you might be their partner or their friend,
8
+ [28.960 --> 33.000] they prefer not to entertain others in their own home.
9
+ [33.000 --> 36.560] As an autistic adult, our home really is our safe space, and it's no different for
10
+ [36.560 --> 37.560] kids.
11
+ [37.560 --> 43.000] But as you get older and there's more stresses thrown upon you, more demands placed upon
12
+ [43.000 --> 46.000] you, your home really becomes this fortress of solitude.
13
+ [46.000 --> 51.680] I'd also say autistic adults, including me, can be very protective in maintaining our
14
+ [51.680 --> 53.200] safe place.
15
+ [53.200 --> 56.680] And I'd go as far as to say to the detriment of others.
16
+ [56.840 --> 59.080] Now you might think, what?
17
+ [59.080 --> 64.760] If this is our safe place and other people want to come into that, it doesn't really matter
18
+ [64.760 --> 68.480] what effect we have on them to make that go away.
19
+ [68.480 --> 73.680] You know, we're super protective of this safe zone to the detriment of others which really
20
+ [73.680 --> 76.120] doesn't even appear on our radar.
21
+ [76.120 --> 79.880] And the last thing I'd say about safe zones or your home for an autistic adult or someone
22
+ [79.880 --> 86.640] you think may be an autistic adult is this disproportionate response, this overreaction
23
+ [86.640 --> 91.120] in your mind to the simplest things like a door knock or an uninvited guest.
24
+ [91.120 --> 93.840] And for me, you could throw in just too many people in my home.
25
+ [93.840 --> 96.760] These are the things where you might think, who cares, someone's at the door, someone just
26
+ [96.760 --> 99.840] rocked up to say hello, or you know, there's lots of people here and we're all having
27
+ [99.840 --> 100.840] fun.
28
+ [100.840 --> 102.480] You might think that for me that's not the case.
29
+ [102.480 --> 105.240] This is not anything minor.
30
+ [105.240 --> 108.480] This is a major intrusion on my safe zone.
31
+ [108.480 --> 111.880] So yeah, there's going to be different reactions and they're going to seem disproportionate.
32
+ [111.880 --> 117.600] Another sign to spot an autistic adult in your life is, do they have communication challenges
33
+ [117.600 --> 120.160] or do they communicate in a very different way?
34
+ [120.160 --> 125.480] Like I do, do you find them constantly asking questions or interrupting you?
35
+ [125.480 --> 127.080] Well, you're trying to tell them something.
36
+ [127.080 --> 132.960] Do you find yourself being peppered with follow-up questions that aren't always even relevant
37
+ [132.960 --> 134.560] to the topic of the conversation?
38
+ [134.560 --> 140.360] Autistic adults often like to question every point of a conversation, dissecting every
39
+ [140.360 --> 141.360] last word.
40
+ [141.720 --> 143.600] I do this to my wife all the time.
41
+ [143.600 --> 151.120] I do it to process what I'm hearing so I can understand it and I can contribute.
42
+ [151.120 --> 156.120] Of course that doesn't mean it's not incredibly frustrating for the people in the conversation
43
+ [156.120 --> 157.120] with me.
44
+ [157.120 --> 158.120] I get that.
45
+ [158.120 --> 164.280] But critically, without the endless questions for the most part, autistic people will
46
+ [164.280 --> 167.440] tend to simply misinterpret what you're saying.
47
+ [167.440 --> 172.120] So but for all these endless questions, we may never interpret correctly what you're
48
+ [172.120 --> 173.760] trying to convey to us.
49
+ [173.760 --> 176.400] So there's a point to them that is frustrating.
50
+ [176.400 --> 179.920] So as an autistic adult, let's say with my wife, if I'm having a conversation or she's
51
+ [179.920 --> 184.320] trying to tell me something and let's say I decide, I'm just going to listen from start
52
+ [184.320 --> 190.760] to finish, suppress all urges, the chances are I'll misinterpret what she says and I'll
53
+ [190.760 --> 193.120] launch some sort of counter attack.
54
+ [193.120 --> 196.160] So I'll take it the wrong way and attack.
55
+ [196.160 --> 200.080] Taking what I don't understand as a personal attack on me, and I must attack back, or I'll
56
+ [200.080 --> 202.680] just go off on a tangent that's completely irrelevant.
57
+ [202.680 --> 206.480] Autistic adults can also become disinterested in conversations really quickly.
58
+ [206.480 --> 211.120] We can lose focus and patience and honestly sometimes I'll just say to my wife, can you
59
+ [211.120 --> 212.120] just get to the point?
60
+ [212.120 --> 213.400] What are you trying to tell me?
61
+ [213.400 --> 215.040] Can you just tell me what you're trying to tell me?
62
+ [215.040 --> 216.880] And often there is no point.
63
+ [216.880 --> 221.280] See as an autistic person, it doesn't occur to me that people would talk when they have
64
+ [221.280 --> 222.720] no point to make.
65
+ [222.720 --> 223.800] They would just talk.
66
+ [223.800 --> 228.880] My wife is entitled to just vent, to just debrief, to just bitch.
67
+ [228.880 --> 231.720] She's entitled to just tell me a story.
68
+ [231.720 --> 233.920] No point, just a story she wants to share.
69
+ [233.920 --> 236.400] For an autistic person, this can be very confusing.
70
+ [236.400 --> 240.840] So it works both ways to understand where it's coming from from both sides.
71
+ [240.840 --> 248.040] The next sign to spot an autistic adult is that they seem to focus their time and energy
72
+ [248.040 --> 253.760] inwardly as an inward focus rather than say outwardly focusing.
73
+ [253.760 --> 256.920] Like, many neurotypical non-autistic people.
74
+ [256.920 --> 262.960] It's been said that women focus on people, while men focus on things, and that may be
75
+ [262.960 --> 267.360] right or wrong, but for autistic people it's even more specific than that.
76
+ [267.360 --> 272.520] Autistic adults tend to spend a lot of their time, if not all their time, focusing on their
77
+ [272.520 --> 275.200] passions, their special interests.
78
+ [275.200 --> 279.720] In other words, we adopt an inward focus by default.
79
+ [279.720 --> 281.640] It's not something we've chosen to do.
80
+ [281.640 --> 285.800] We just wake up and, by default, focus inwardly.
81
+ [285.800 --> 291.040] So there's a clear favoring of our passions, our interests over everything else.
82
+ [291.040 --> 298.080] And part of that inward focus is a tendency to mask or suppress our true selves and our
83
+ [298.080 --> 301.000] true emotions and feelings to keep them inside.
84
+ [301.000 --> 306.440] While at the same time, still struggling to interpret, to process and deal with these
85
+ [306.440 --> 310.360] emotions and feelings that we're trying to hide.
86
+ [310.360 --> 315.080] This next sign to spot an autistic adult is, do they seem to live in a world of their
87
+ [315.080 --> 316.080] own?
88
+ [316.080 --> 319.960] Autistic adults can sometimes just appear clueless to what's happening around them.
89
+ [319.960 --> 321.920] Unaware of what's going on around them.
90
+ [321.920 --> 323.000] Stuck in their own little world.
91
+ [323.000 --> 327.440] I absolutely can struggle with the awareness of others around me or the awareness of others
92
+ [327.440 --> 328.440] in general.
93
+ [328.440 --> 333.920] And this would include a lack of awareness of presence, of wants, needs, feelings, and
94
+ [333.920 --> 336.200] the intentions of people we're spending our time with.
95
+ [336.200 --> 341.960] We can also lack an awareness of time and space, our surroundings, environment and our
96
+ [341.960 --> 343.240] own personal needs.
97
+ [343.240 --> 347.520] We can also appear to be living in a world of our own because we can really struggle with
98
+ [347.520 --> 354.120] identifying body language, verbal and nonverbal cues, voice tone, and just generally language
99
+ [354.120 --> 358.680] that can make us feel like we're an alien living on a foreign planet.
100
+ [358.680 --> 364.920] The next sign to spot an autistic adult is that they tend to struggle with multitasking.
101
+ [364.920 --> 368.600] So managing multiple tasks demands or even interactions.
102
+ [368.600 --> 375.240] For me as an autistic adult, I have a strong urge or need that I must complete a task before
103
+ [375.240 --> 378.240] moving on to another task.
104
+ [378.240 --> 382.360] And there may not be any logical reason why one task is more important than another to other
105
+ [382.360 --> 383.360] people.
106
+ [383.360 --> 385.840] But for me, this must be done before I can do this.
107
+ [385.840 --> 388.480] And I would put this sign under the banner of executive function.
108
+ [388.480 --> 392.080] Okay, so we have executive function challenges.
109
+ [392.080 --> 397.960] Like for example, in my case, not being able to appropriately prioritize tasks.
110
+ [397.960 --> 401.400] So an example for me is I can put certain tasks first.
111
+ [401.400 --> 403.880] I can make them a priority.
112
+ [403.880 --> 407.080] While to others, they're not actually important or the priority.
113
+ [407.080 --> 408.280] But in my mind, they are.
114
+ [408.280 --> 413.920] I can also feel like a strong sense of resentment towards other people or other tasks.
115
+ [413.920 --> 417.240] Things that are not remotely connected to my interest or passion.
116
+ [417.240 --> 423.840] Being in the way of me doing tasks that are connected to my interests, passions that
117
+ [423.840 --> 426.160] are a priority of mine.
118
+ [426.160 --> 428.280] People, tasks come up and get in the way.
119
+ [428.280 --> 430.160] I'm doing what I want to do.
120
+ [430.160 --> 431.160] Bad.
121
+ [431.160 --> 434.400] At uni when I was studying law, and this was obviously very bad.
122
+ [434.400 --> 439.480] I had to complete one assessment or essay or whatever you want to call those kind of
123
+ [439.480 --> 442.920] in-semester assessments one at a time.
124
+ [442.920 --> 448.600] It didn't matter if multiple assessments were due at the same time.
125
+ [448.600 --> 452.760] I could only work on one at a time for moving on to the next assessment.
126
+ [452.760 --> 457.480] I guess I struggled to switch from thoughts and themes and I thought, well, if I'm doing
127
+ [457.480 --> 463.440] an assessment on criminal law, how could I possibly concurrently do an assessment on
128
+ [463.440 --> 465.160] property law?
129
+ [465.160 --> 468.720] I can't, whoa, that's, no, that doesn't compute.
130
+ [468.720 --> 470.440] Not even if they're due at the same time.
131
+ [470.440 --> 475.440] Another sign to spot an autistic adult in your life is they appear just generally super
132
+ [475.440 --> 480.280] sensitive to things like smells and tastes and noises and lights.
133
+ [480.280 --> 486.440] And I'm talking sensitive to a level that doesn't seem right to you or other people.
134
+ [486.440 --> 491.800] In other words, they may be sensitive to smells or tastes or noises or lights that don't
135
+ [491.800 --> 492.800] bother anyone else.
136
+ [492.800 --> 498.040] So on the surface, it can seem unbelievable, disproportionate, just plain made up.
137
+ [498.040 --> 505.040] But sensory processing challenges and hypersensitivity to sensors like smell, touch, taste, noise,
138
+ [505.040 --> 506.040] light.
139
+ [506.040 --> 511.480] These are very common challenges for autistic people.
140
+ [511.480 --> 516.480] A particular paradox that can really frustrate my family is I can be really hypersensitive
141
+ [516.480 --> 520.880] to noises so I can get really startled so quickly.
142
+ [520.880 --> 526.040] I get startled all the time and a lot of times I end up just putting my hands on my ears
143
+ [526.040 --> 530.680] because I can't hear this noise anymore or I don't know how to get past this noise.
144
+ [530.680 --> 536.080] But the paradox being hypersensitive to noise, but why are you so bloody loud, Ryan?
145
+ [536.080 --> 540.160] You're always talking loud, you're so loud, you're banging and clanging, it's funny,
146
+ [540.160 --> 541.560] it's a paradox, I guess.
147
+ [541.560 --> 546.200] It's interesting and I think it's pretty common as an autistic person, I am really super sensitive
148
+ [546.200 --> 550.160] to banging and clanging and noises, but I am that person.
149
+ [550.160 --> 558.120] Also, and this is a sign you may have noticed, certain voices or noises or actions can set
150
+ [558.120 --> 562.720] off an autistic person straight away out of nowhere and it just makes no sense how that's
151
+ [562.720 --> 563.720] possible.
152
+ [563.720 --> 566.160] For me, a squeaky door.
153
+ [566.160 --> 569.080] Loud eaters can't be in the room with loud eaters.
154
+ [569.080 --> 570.800] You know what's worse?
155
+ [570.800 --> 571.800] Sloppy drinkers.
156
+ [571.800 --> 573.320] Do you know what's worse than that?
157
+ [573.320 --> 577.880] I'm a loud eater and I'm a sloppy drinker.
158
+ [577.880 --> 581.080] You being part of this community means so much to me, so thank you.
159
+ [581.080 --> 585.720] For clicking subscribe, joining the community and supporting me, I'm Ryan Kelly, that autistic
160
+ [585.720 --> 587.920] guy and till my next video, we'll talk soon.
transcript/allocentric_I2azLvESwDY.txt ADDED
@@ -0,0 +1,355 @@
1
+ [60.000 --> 68.440] While mobility techniques themselves are fairly standard, some modifications might be necessary
2
+ [68.440 --> 70.240] for deafblind children.
3
+ [70.240 --> 74.280] Although the techniques are similar to those used by blind youngsters, the manner in which
4
+ [74.280 --> 79.320] these techniques are taught will differ considerably, in that an instructor may have to rely on
5
+ [79.320 --> 84.160] a far greater nonverbal component of instruction when working with the deafblind.
6
+ [90.000 --> 98.260] During the pre-cane phase of training, a student learns various forms of protective
7
+ [98.260 --> 100.940] arm techniques in a familiar area.
8
+ [100.940 --> 105.080] The various trailing techniques will be used to develop a good line of travel with a fast
9
+ [105.080 --> 114.680] and effective speed so there won't be too great a tendency to veer.
10
+ [114.680 --> 118.580] With the knowledge of their own bodies and the means to move through space, students
11
+ [118.580 --> 127.500] can not only move purposefully, but protect themselves appropriately as well.
12
+ [127.500 --> 132.280] Deafblind students need to learn the concepts of the sighted guide technique very early
13
+ [132.280 --> 135.620] and become sensitive to the movements of their guides.
14
+ [135.620 --> 141.040] Most of this is done nonverbaly, the child relying on a developing sense of touch and
15
+ [141.040 --> 150.760] ability to respond to the movements of the guide.
16
+ [150.760 --> 155.240] The specific age at which cane training should begin varies considerably.
17
+ [155.240 --> 159.480] There are a number of factors involved related to the individual student.
18
+ [159.480 --> 163.680] Rather than citing a chronological age, the instructor might consider a student's attitude
19
+ [163.680 --> 169.440] and interest in the cane, the student's level of maturity, the residual vision and hearing,
20
+ [169.440 --> 174.040] balance and coordination, their level of self-awareness and body image and their need
21
+ [174.040 --> 176.200] to learn the skill.
22
+ [176.200 --> 180.240] Sometimes an adventitiously deaf blind child might have a real potential for adopting the
23
+ [180.240 --> 185.360] cane and learning some basic skills rather quickly, while on the other hand a congenitally
24
+ [185.360 --> 190.040] blind child, one who has a great deal of difficulty with travel, might take longer to
25
+ [190.040 --> 195.400] learn an appropriate technique but might have a real need to travel.
26
+ [195.400 --> 200.280] The students learn that the cane is an extension of their own tactile sense and gradually learn
27
+ [200.280 --> 203.960] to trust the cues they receive through the cane.
28
+ [203.960 --> 209.600] Young children need to learn that the cane is a tool instead of a toy and use it in context
29
+ [209.600 --> 216.000] with root travel and various recurring roots such as going to recess, lunch, the toilet,
30
+ [216.000 --> 220.160] the play yard, the swimming pool.
31
+ [220.160 --> 224.280] A great deal of the usefulness of the cane lies in its coordination with the movement of
32
+ [224.280 --> 229.720] the feet, being in step with the cane so that the cane tip covers the area where the
33
+ [229.720 --> 231.920] next footstep will fall.
34
+ [231.920 --> 235.720] Even from the beginning it's advisable to have the students in step with the movements
35
+ [235.720 --> 237.040] of the cane.
36
+ [237.040 --> 241.840] The instructor might start the student with one foot back and as the rear foot comes forward
37
+ [241.840 --> 245.360] the instructor moves the cane across the student's body.
38
+ [245.360 --> 249.760] Then as the student steps again the instructor moves the cane back.
39
+ [249.760 --> 253.360] The relaxation and smoothness are real factors here.
40
+ [253.360 --> 258.440] An instructor should be interested in smoothness, not a jerky robot-like motion which has a tendency
41
+ [258.440 --> 262.600] to tighten the arm and stiffen the student's entire body.
42
+ [262.600 --> 267.200] A stiff arm receives fewer and weaker cues through the cane.
43
+ [267.200 --> 273.760] The student has to feel what it's like to be in step and begin to internalize that feeling.
44
+ [273.760 --> 279.200] When the student gets out of step simply stop the student and begin again, skipping twice
45
+ [279.200 --> 283.760] or shifting the cane suddenly might have no meaning at all for a deafblind student with
46
+ [283.760 --> 287.080] little inner language or poor communication skills.
47
+ [287.080 --> 292.160] For this type of student it's easier to teach the feeling of actually being in step
48
+ [292.160 --> 294.840] as opposed to making modifications.
49
+ [294.840 --> 300.280] The best means of correction is to stop, get the student into a starting position, and
50
+ [300.280 --> 305.200] begin again in step.
51
+ [305.200 --> 308.720] There are a number of ways to teach the length and movement of the arc.
52
+ [308.720 --> 311.120] One way is to use the auditory sense.
53
+ [311.120 --> 316.480] Even if the student is profoundly deaf, most can hear and feel the sound generated by
54
+ [316.480 --> 318.760] clapping two wooden blocks together.
55
+ [318.760 --> 323.360] Another sound that most profoundly deaf blind students can hear is two sections of metal
56
+ [323.360 --> 325.400] pipe banging together.
57
+ [325.400 --> 329.560] It's important for the students to realize the approximate width of the cane arc, the
58
+ [329.560 --> 334.240] sound can be used to indicate the far reaches of the arc to the student.
59
+ [334.240 --> 339.480] Auditory means may be inappropriate for some students who may require a close hands-on approach,
60
+ [339.480 --> 344.680] but may work well for other students in the early stages of instruction and technique.
61
+ [344.680 --> 349.120] If the arc is wider on one side of the student's body than the other, he'll usually tend to
62
+ [349.120 --> 350.800] veer in that direction.
63
+ [350.800 --> 354.820] At this point the instructor may want to straighten the student's cane arm and reposition
64
+ [354.820 --> 358.400] his wrist in the center of his body to balance the arc.
65
+ [358.400 --> 362.180] This may have to be done frequently in the beginning of training because the student
66
+ [362.180 --> 367.320] unused to holding his arm in this position for extended periods of time tends to fatigue.
67
+ [367.320 --> 371.900] The student's arm relaxes and slumps closer to the body and the cane arc becomes more
68
+ [371.900 --> 374.260] pronounced to that side.
69
+ [374.260 --> 378.100] A student can learn to make his own center line check by grasping his own wrist with
70
+ [378.100 --> 379.780] his opposite hand.
71
+ [379.780 --> 383.620] An instructor can tap the student's wrist a couple of times as he makes this check to
72
+ [383.620 --> 387.700] foster an association between the tapping on the wrist and the need for a center line
73
+ [387.700 --> 388.820] check.
74
+ [388.820 --> 393.500] The development of communication and cues must develop concurrently with development of
75
+ [393.500 --> 398.480] cane technique.
76
+ [398.480 --> 403.540] The slide technique is so called because the cane tip slides along in constant contact
77
+ [403.540 --> 405.260] with the ground.
78
+ [405.260 --> 410.420] A touch technique, in moving laterally, may move off the edge of a curb at such an angle
79
+ [410.420 --> 414.820] that a deafblind student may not detect it, then suddenly trip off the curb.
80
+ [414.820 --> 419.060] The advantage of the slide technique is that the cane tip can detect a drop-off from any point
81
+ [419.060 --> 420.460] on the arc.
82
+ [420.460 --> 425.180] Even travelers with use of their hearing usually switch from a touch to a slide technique
83
+ [425.180 --> 429.900] when their auditory sense tells them that a corner is near.
84
+ [429.900 --> 434.580] In some cases, the students will tend to use the cane to trail along a wall, a raised
85
+ [434.580 --> 436.180] edge or curb.
86
+ [436.180 --> 440.060] Deafblind students who have very little use of the auditory sense don't have the same
87
+ [440.060 --> 444.660] use of additional cues that would help them parallel such sounds as pedestrian traffic
88
+ [444.740 --> 446.820] or light vehicle flow.
89
+ [446.820 --> 451.780] Few deafblind travelers can use sound reflections from buildings and walls to keep a constant
90
+ [451.780 --> 453.140] distance.
91
+ [453.140 --> 457.980] This is especially true if they use only one hearing aid and the balance of the aided
92
+ [457.980 --> 460.780] and un-aided ear and not very close.
93
+ [460.780 --> 465.740] Many deafblind travelers, especially those whose impairment is congenital, tend to stay close
94
+ [465.740 --> 468.220] to the security of a guiding edge.
95
+ [468.220 --> 471.380] This is a slower technique and it's not ideal.
96
+ [471.380 --> 476.180] There are places in times where this is extremely inconvenient, for instance a sidewalk during
97
+ [476.180 --> 480.900] heavy pedestrian use or near shopping areas where pedestrians are more interested in the
98
+ [480.900 --> 485.420] merchandise in the windows than the travelers walking near the walls.
99
+ [485.420 --> 490.420] The touch and drag technique is useful for finding the ends of walls, intersecting hallways
100
+ [490.420 --> 491.900] and paths.
101
+ [491.900 --> 495.780] The students must take care to keep the arc wide enough in the side opposite the wall
102
+ [495.780 --> 501.460] or edge to cover themselves from oncoming pedestrians or obstructions.
103
+ [501.460 --> 509.380] A mobility instructor working with deafblind travelers must necessarily remain closer because
104
+ [509.380 --> 514.340] speech may not be the most effective means of teaching and monitoring a student's techniques.
105
+ [514.340 --> 518.500] While a teacher may begin by physically moving into controlling the student's cane, the
106
+ [518.500 --> 523.260] touch becomes progressively lighter, the number of adjustments fewer than the instructor
107
+ [523.260 --> 525.700] moves gradually further away.
108
+ [525.700 --> 530.300] What began as a hand on the student's cane becomes later a hand pressing in the shoulders
109
+ [530.300 --> 534.140] and later perhaps a light touch to remind them that they have to make slight adjustments
110
+ [534.140 --> 535.660] in the technique.
111
+ [535.660 --> 540.620] This gradual moving away places a greater sense of control and responsibility into the
112
+ [540.620 --> 542.780] student's own hands.
113
+ [542.780 --> 547.420] Gestures and fingerspelling might be used to guide a student to make certain adjustments.
114
+ [547.420 --> 552.540] Still, some situations might require that the instructor be right there and make an immediate
115
+ [552.540 --> 555.660] check by taking a direct hand on the technique.
116
+ [555.660 --> 559.140] This is especially true in situations that are potentially dangerous.
117
+ [559.140 --> 563.140] The instructor should be in a position to ensure that the student navigates difficult
118
+ [563.140 --> 565.540] or dangerous areas safely.
119
+ [565.540 --> 573.900] Safety is always the primary concern.
120
+ [573.900 --> 576.820] Stairs pose a number of problems for any blind traveler.
121
+ [576.820 --> 580.980] The deafblind have additional problems and that they have difficulty hearing people coming
122
+ [580.980 --> 583.740] up or down the stairs opposite them.
123
+ [583.740 --> 588.500] If two deafblind travelers meet on the stairs, the problems may be compounded.
124
+ [588.500 --> 592.420] An instructor may want to have the deafblind traveler exaggerate the turning out of the
125
+ [592.420 --> 597.420] wrist used while on the stairs to provide that strength and leverage needed to protect
126
+ [597.420 --> 600.700] them against people bumping into them or falling over them.
127
+ [600.700 --> 605.140] The arm is in a better position to ward off people who veer into it and still provide a
128
+ [605.140 --> 607.740] good position for sensing the stairs.
129
+ [607.740 --> 615.260] The arm is stronger pushing when the wrist is turned out like this.
130
+ [615.260 --> 619.660] The student may choose to use side handrails or banisters in the middle or on either side
131
+ [619.660 --> 620.660] of the stairs.
132
+ [620.660 --> 625.660] An important consideration here is will the line of travel from the stairs place the student
133
+ [625.660 --> 631.060] in a good position to contact the next landmark or continue that line of travel?
134
+ [631.060 --> 635.340] Instructor might want to consider that at the top or bottom of the stairs when deciding
135
+ [635.340 --> 644.140] which side to use when going up or down.
136
+ [644.140 --> 648.740] The touch and slide technique combines the advantages of the speed of the touch technique
137
+ [648.740 --> 652.340] with the advantages of the sensitivity of the slide technique.
138
+ [652.340 --> 656.580] The cane remains in contact with the ground a bit longer at the extremes of the arc where
139
+ [656.580 --> 660.820] it touches down and raises very slightly from the movement across the arc in front of
140
+ [660.820 --> 662.260] the students.
141
+ [662.260 --> 666.740] Those blind travelers with hearing can hear the cane tip moving back and forth and can
142
+ [666.740 --> 676.940] make the necessary adjustments on the arc according to the sound.
143
+ [676.940 --> 679.380] Deafblind travelers must do it by feel.
144
+ [679.380 --> 683.300] The instructor can guide the student by light touch on the cane and provide frequent
145
+ [683.300 --> 687.860] checks until the student becomes proficient with the technique.
146
+ [687.860 --> 691.540] Gestures can be used to indicate the movements of the cane and the position at which the
147
+ [691.540 --> 695.180] cane touches.
148
+ [695.180 --> 701.540] The touch technique.
149
+ [701.540 --> 710.300] The slide technique.
150
+ [710.300 --> 722.540] The touch and slide technique.
151
+ [722.540 --> 727.580] The actual points of contact can be illustrated by a pentail marker attached to the tip of
152
+ [727.580 --> 731.140] a cane.
153
+ [731.140 --> 733.700] The touch technique.
154
+ [733.700 --> 736.460] The slide technique.
155
+ [736.460 --> 739.380] The touch and slide technique.
156
+ [739.380 --> 743.820] The three point technique is so called because the cane touches three times.
157
+ [743.820 --> 745.580] The first on the far side.
158
+ [745.580 --> 751.020] The second as it drags back across the student's body to find the curb, drain or other landmark.
159
+ [751.020 --> 754.740] The third time in the near side where it clears the area before resuming its arc on the
160
+ [754.740 --> 756.260] opposite side.
161
+ [756.260 --> 760.260] The three point is especially useful when the student is looking for some feature along
162
+ [760.260 --> 768.020] the edge of a sidewalk such as a landmark that would indicate a turn.
163
+ [768.020 --> 772.660] Spanning is a way to use one landmark, and a sense of relative direction, to find another
164
+ [772.660 --> 777.100] landmark or reference point within one or two cane lengths from the body and extended
165
+ [777.100 --> 778.100] arm.
166
+ [778.100 --> 782.260] A student can either take a new line of direction from the reference point or use the reference
167
+ [782.260 --> 787.260] point to contact an additional landmark as a check on his position.
168
+ [787.260 --> 790.540] Cross spanning entails changing cane hands.
169
+ [790.540 --> 794.460] It can be used to find reference points which are further from the line of travel than
170
+ [794.540 --> 803.820] one cane length or to find the middle of two points to get a sense of relative position.
171
+ [803.820 --> 807.620] Squaring off is a technique used to initiate a straight line of travel.
172
+ [807.620 --> 812.060] A student may use the flat surface of a wall, balance the shoulder blades on it to get
173
+ [812.060 --> 817.300] flat against it and move forward in a straight line.
174
+ [817.300 --> 821.900] A student might also use a pole and a curb using the cane to make sure his line of direction
175
+ [821.900 --> 826.860] is straight and use the pole more for a positional reference than a directional one.
176
+ [826.860 --> 830.420] This technique may be used to ensure that the student is on the right position for a
177
+ [830.420 --> 834.340] street crossing or for crossing a wide area where there are a few other landmarks or
178
+ [834.340 --> 836.300] positional checks.
179
+ [836.300 --> 840.340] When a student gets familiar with the crossing, he may not need to back up against the landmark
180
+ [840.340 --> 844.500] and square off, but might choose to use it merely as a reference point to get into position
181
+ [844.500 --> 846.300] for the crossing.
182
+ [846.300 --> 851.860] Just teaching shorelining, relating to walls and edges, is not a complete set of techniques.
183
+ [851.860 --> 857.220] Some congenitally deafblind students can't feel subtle changes in the shoreline and without
184
+ [857.220 --> 862.420] clear landmarks to indicate specific turns tend to veer and get lost.
185
+ [862.420 --> 866.620] They need to balance shorelining techniques with the touch technique and sufficient rate
186
+ [866.620 --> 872.580] of speed to bridge open areas even if they tend to rely on shorelining.
187
+ [872.580 --> 876.900] The signal to speed up can be done with gentle pressure of the instructor's palm and the
188
+ [876.900 --> 878.420] students back.
189
+ [878.420 --> 881.420] To slow down, gentle pressure on the chest.
190
+ [881.420 --> 885.660] It's useful to have the student get used to responding, so he'll react quickly to hand
191
+ [885.660 --> 889.500] pressure if there's an obstacle that could injure him.
192
+ [889.500 --> 896.060] If a deafblind student has a considerable amount of residual vision and can use it effectively
193
+ [896.060 --> 900.140] as a low vision traveler, a folding cane might be more appropriate.
194
+ [900.140 --> 905.220] The diagonal technique can be used as a backup sensory system, detecting curbs, stairs
195
+ [905.220 --> 909.020] or objects just out of the range of the student's peripheral vision.
196
+ [909.020 --> 915.660] This technique also serves as a double check on depth perception.
197
+ [915.660 --> 919.620] It can also call attention to the fact that the student has a visual impairment.
198
+ [919.620 --> 924.180] This may be critical at corners when a student undertakes a crossing and a car suddenly approaches
199
+ [924.180 --> 940.260] at high speed.
200
+ [940.260 --> 944.220] A low vision deafblind student can switch to a regular cane technique in unfamiliar
201
+ [944.220 --> 952.220] or ambiguous terrain and return to a diagonal technique when they get back to familiar territory.
208
+ [982.220 --> 991.220] There is no absolutely ideal technique for every specific location or terrain.
209
+ [991.220 --> 998.220] A student might have a wide range of cane techniques or just more simple enough for the few routes they travel.
210
+ [998.220 --> 1004.220] If the techniques get the students where they want to go and they feel comfortable with them, the techniques are effective.
211
+ [1004.220 --> 1011.220] Different cane techniques might be taught the easier to use and afford the student less of a chance of getting lost or missing a landmark.
212
+ [1011.220 --> 1017.220] Techniques should be kept as simple as possible with the fewest number of modifications necessary.
213
+ [1017.220 --> 1026.220] If a student can master only one simple technique, he can practice it over a variety of different terrains and learn to maximize the effectiveness of that particular technique.
214
+ [1026.220 --> 1033.220] In many cases, a student makes modifications according to his own level of skill and his own specific needs.
215
+ [1033.220 --> 1040.220] An instructor should ensure that the modification affords sufficient protection for the student as well as serving effectively in a sensory function.
216
+ [1040.220 --> 1050.220] Techniques are building blocks of those skills that will enable the student to have access to his world and to become as much a part of that world as he can be.
217
+ [1050.220 --> 1062.220] The technique should be taught in familiar areas so that the child will have a chance to work in those techniques in a meaningful context with enough rate of repetition to ensure being internalized.
218
+ [1062.220 --> 1069.220] The technique should fit into an orderly and consistent body of skills the student can master.
219
+ [1069.220 --> 1077.220] Techniques should fit the student's physical capabilities and needs. The student shouldn't be forced to learn a classic technique with absolute perfection.
220
+ [1077.220 --> 1084.220] The standard techniques are guides and capable of enormous modification while still retaining their usefulness.
221
+ [1084.220 --> 1089.220] The techniques should be fitted to the student rather than the student to the technique.
222
+ [1090.220 --> 1097.220] There is no ideal technique what works best for the student in any particular circumstance or environment is the best technique.
223
+ [1097.220 --> 1103.220] There are no right or wrong techniques, only effective and ineffective.
224
+ [1103.220 --> 1118.220] Techniques are the means to use the long cane as a tool to enable students to use their orientation skills, their sensory training, their inner sense of direction to express their needs as travelers to find their own way.
234
+ [1233.220 --> 1238.220] A route is basically a travel path to an objective.
235
+ [1238.220 --> 1240.220] But it's more than that.
236
+ [1240.220 --> 1251.220] A route is an opportunity for students to leave home and school for a time and travel into the world, to make contact with members of the community and partake of the goods and services that fit their specific interests and needs.
237
+ [1252.220 --> 1260.220] A route provides the opportunities to use language and sensory skills and to enable the students to taste whatever measure of freedom lies within their capabilities.
238
+ [1260.220 --> 1263.220] A route is a way out and a way back.
239
+ [1263.220 --> 1270.220] And along that route are a number of experiences, many set up by the mobility instructor, others that are a function of chance.
240
+ [1270.220 --> 1275.220] Different types of routes provide the opportunities for learning different concepts and types of skills.
241
+ [1276.220 --> 1287.220] It's along the mobility route that the techniques will be taught, developed, practiced and honed into a workable system of skills that will enable a child to venture into the world successfully and with confidence.
242
+ [1296.220 --> 1299.220] Routes are necessary components of a student's program.
243
+ [1299.220 --> 1305.220] At certain times of day, a student goes to different locations. A route can be as simple as a trip to a bathroom.
244
+ [1305.220 --> 1312.220] A route incorporates a student's need. It's a movement along a regular path to a destination and return.
245
+ [1312.220 --> 1321.220] On a daily trip to the snack room, the root is part of that schedule, part of the structure of the time space continuum, which breaks up a student's day.
246
+ [1321.220 --> 1330.220] It's along these roots that premobility skills, cane techniques and sensory training come into play and become allied with purpose and direction.
247
+ [1330.220 --> 1340.220] The roots themselves are part of the pattern of changes in location, direction, and duration of time that the student experiences on an ongoing basis.
248
+ [1340.220 --> 1346.220] The root entails a set of sensory and sensory motor experiences, sequenced in memory.
249
+ [1347.220 --> 1354.220] The first basic roots may have to be repeated, innumerable times, and the experiences fit into the student's personal needs.
250
+ [1354.220 --> 1362.220] The landmarks must be very easily distinguished. Later on, these experiences serve to divide the root into understandable segments.
251
+ [1362.220 --> 1369.220] The experiences themselves become part of the inner language that enables the child to structure movement and direction.
252
+ [1370.220 --> 1384.220] The first roots may be learned motorically. The first wall in the root on the right side, turn at the corner and the return is on the left side.
253
+ [1384.220 --> 1393.220] Even before we learned the dichotomy of left and right, the sensory motor experiences had been presented. The feelings are there.
254
+ [1394.220 --> 1400.220] Trees and headrails occur in certain sides of a root and serve as guides toward objectives.
255
+ [1400.220 --> 1410.220] The features and objects in a root become references for structuring some pattern to where the child is and form the experiential basis for learning the language associated with those features.
256
+ [1411.220 --> 1424.220] The first roots moving out in a way from a familiar, non-threatening point of origin and return make way for travel along edges.
257
+ [1424.220 --> 1432.220] Later the edges lean to turns. This memory of turns both indoor and outside are sets of sequential memory.
258
+ [1441.220 --> 1455.220] Things that the child can sense such as the sun detected visually or with the thermal sense in the skin, the wind, the scent of a certain tree or flower, the feelings of the terrain, the rough texture of the ground.
259
+ [1455.220 --> 1464.220] These are called cues. Cues call a student's attention to certain features of the root using the student's own sensory channels.
260
+ [1465.220 --> 1476.220] Objects or features on the root that the student can touch or contact with the cane are called landmarks. These serve to give a clue to relative direction or distance on a root.
261
+ [1476.220 --> 1489.220] There's an interplay of landmarks and cues on a root. The cues call a student's attention through sensory channels that certain features are in the immediate vicinity and the landmarks act as guides and double checks and position.
262
+ [1490.220 --> 1502.220] Intervening landmarks are those which give a sense of progress along a root. Some roots, with very long stretches of straight sidewalk, have very few clues to indicate how far along a student is progressed.
263
+ [1502.220 --> 1512.220] An advanced traveler may internalize a sense of relative time and distance, but this may or may not develop in a congenitally deafblind student or one with some degree of retardation.
264
+ [1512.220 --> 1522.220] It's helpful to have as many intervening cues as possible to give some sense of where a student is on a root and how far it is to the destination.
265
+ [1522.220 --> 1538.220] It's ideal, but not always possible, to have three factors involved at each point on a route where a landmark is used. The terrain itself, flat, rough or hilly, combinations with other features like a grass edge and concrete, a hedge and a driveway.
266
+ [1538.220 --> 1561.220] A specific permanent object, like a tree, lamppost, fire hydrant, and some environmental cue. If a student travels a certain root at the same time each day, there may be prevailing wind, the sun may be on a certain side, there may be a scent from a bakery, a gas station, a delicatessen, there may be traffic sounds in a certain direction.
267
+ [1568.220 --> 1584.220] Wherever possible, the landmarks can be paired, so there's a double check on a landmark, a cluster of clues, a pole which is next to a tree gives more of a distinction than just the pole itself.
268
+ [1584.220 --> 1592.220] When the landmarks are within the span of a cane, the grouping will serve as ready identification of a specific place on the root.
269
+ [1592.220 --> 1602.220] As many factors as possible should be used to aid recognition. A student might break off a leaf from a hedge and smell it, take a piece of bark, a mature rub it between his fingers.
270
+ [1602.220 --> 1608.220] A student might touch the landmark with the cane to generate a certain sound or vibration.
271
+ [1608.220 --> 1628.220] At several stages on a root, the student should take bearings and just where he has come from and where he's going. The student might take bearings at certain points in a sidewalk by finding the curb or positioning whatever traffic sounds heard by residual hearing on a certain side of his body.
272
+ [1639.220 --> 1649.220] A pivot point on a root is one from which the student can start a number of other roots. This is the point at which a decision is made regarding the direction and the eventual destination.
273
+ [1649.220 --> 1657.220] At the pivot point, a mobility instructor may wish to periodically review the relative direction of a number of destinations.
274
+ [1668.220 --> 1672.220] If the student is engaged in a vocational program, a number of roots can be linked together.
275
+ [1688.220 --> 1696.220] Students learn the root to do the job, learn to notice the day and time of getting paid, learn the root to and within a bank to engage in necessary banking skills.
276
+ [1696.220 --> 1703.220] Other roots to stores, post offices and shops can be learned, whatever suits their individual needs and interests.
277
+ [1703.220 --> 1716.220] If something unexpected occurs or if a unique educational opportunity presents itself, something which will pique a student's curiosity about his environment, it may be a good idea to stop for a moment and explore.
278
+ [1716.220 --> 1721.220] Experiences and objects enrich in a travel route, give meaning to the root for students.
279
+ [1722.220 --> 1731.220] The root is not only a memory of distance and duration of time in relative direction, but of the experiences along that root as well.
280
+ [1731.220 --> 1741.220] If there are problems with distance to objectives, lack of public transportation, or problem with bus schedules, it may be feasible to use a system of drop-offs.
281
+ [1742.220 --> 1752.220] This student has learned several roots within a shopping center, and has earned a mobility pass, and the right to board the shuttle bus to shop at stores of his own choice.
282
+ [1752.220 --> 1765.220] When the student is traveling during the hour-lotted at the shopping center, he must draw upon protective skills, orientation and sensory skills, and use these to arrive at the destinations of his own choice.
283
+ [1765.220 --> 1776.220] Once in the stores, the student must combine communication and social skills and money management techniques to buy things of his own choosing.
284
+ [1776.220 --> 1785.220] In more complex travel environments, such as supermarkets, it may be more effective to ask for assistance in finding the items desired.
285
+ [1785.220 --> 1796.220] The mobility instructor, nor the student, might in turn assist the clerks and managers by teaching them sighted guide techniques and calling their attention to the special needs of the blind.
286
+ [1796.220 --> 1804.220] The wide range of choices and the interaction with the sighted guide imparts a depth and richness to the special hour.
287
+ [1804.220 --> 1814.220] Sighted guides who work with our students a number of times often show a surprising sensitivity and make the mobility experience positive, successful, and enjoyable.
288
+ [1834.220 --> 1857.220] This type of semi-independent mobility, bounded only by prior instruction and the schedule of drop-offs and pickups, gives students an experience with freedom and a taste for independence.
289
+ [1857.220 --> 1868.220] It provides a valuable arena for mobility instructors to monitor and assess the student's skills in a fine areas that require further instruction.
290
+ [1868.220 --> 1880.220] From these outings where purpose and destinations are chosen by the student, come the feelings of confidence, self-esteem, and the self-image of a successful traveler.
291
+ [1888.220 --> 1896.220] While route training is in progress, a mobility instructor can be working on the links to these route simultaneously.
292
+ [1896.220 --> 1907.220] The student can assist the instructor in devising a series of large cards to alert the drivers of particular buses of the destinations and the need for special assistance.
293
+ [1907.220 --> 1915.220] The signs can be laminated for durability and brailt so the student can tell which one to use at each stage of the trip.
294
+ [1915.220 --> 1944.220] The students need to have a great many successful experiences with public transportation to become comfortable enough to use buses on their own.
295
+ [1944.220 --> 1972.220] After their period of mobility instruction ends.
296
+ [1972.220 --> 1980.220] To further ensure a successful bus trip, the letters in the sign should be large enough so the sign can be read by the bus driver as he opens the door.
297
+ [1980.220 --> 1989.220] Although many pedestrians may try to initiate contact, when the Depline Traveler does not respond, many may move away or board the bus along.
298
+ [1989.220 --> 1995.220] It's often the driver who assists Depline travelers to their seat and notes their destination.
299
+ [2002.220 --> 2017.220] Shopping malls, which in many areas have replaced small business districts and family-operated stores, provide a wide range of mobility features and a variety of links between routes.
300
+ [2032.220 --> 2061.220] Although shopping malls may be a considerable distance from the point of the routes origin, possibly requiring access by public transportation, they provide an area protected from traffic and feature a very high frequency of pedestrian movement ensuring valuable assistance and travel and making purchases.
301
+ [2092.220 --> 2103.220] It offers free foreign 잠jeong route with conditions based on public transportation and an open message.
302
+ [2103.220 --> 2133.200] Features which in one context may be bridges between routes
303
+ [2133.200 --> 2143.200] and can be barriers to the deafblind travelers.
304
+ [2143.200 --> 2148.200] Deafness and combination with blindness imposes additional constraints in route travel.
305
+ [2148.200 --> 2155.200] Cues to obstructions that blind travelers can detect by hearing are not available to most hearing impaired blind travelers.
306
+ [2155.200 --> 2160.200] They must rely almost completely on tactile cues.
307
+ [2160.200 --> 2170.200] Signs, advertising boards, planters, things that are useful and pleasant for the sighted cause serious problems for blind and deaf blind travelers.
308
+ [2170.200 --> 2177.200] The mobility instructor must call special attention to obstructions that the cane may easily miss.
309
+ [2177.200 --> 2184.200] The traveler can be alerted to slow down, widen the cane arc, straighten the cane arm to provide more reaction time,
310
+ [2184.200 --> 2191.200] and possibly to ready a protective arm position to encounter the oncoming obstruction.
311
+ [2191.200 --> 2195.200] Wide open spaces are special problem areas for the deafblind.
312
+ [2195.200 --> 2208.200] If there are no easily detected, closely spaced landmarks, they must rely on edges taking the long way around to be assured of reliable landmarks.
313
+ [2209.200 --> 2214.200] Root training entails learning a number of orientation skills during the course of travel.
314
+ [2214.200 --> 2224.200] Students with low vision must learn to use the residual vision to structure components of the route and to be able to recognize those components upon the return to the point of origin.
315
+ [2224.200 --> 2231.200] A department store affords considerable opportunity to use the techniques and travel skills learned in other settings.
316
+ [2231.200 --> 2235.200] The departments and the merchandise in different sections serve as distinctive landmarks.
317
+ [2236.200 --> 2244.200] If a student can remember the words or signs associated with those landmarks, the words and signs serve as orientation sequence clues.
318
+ [2244.200 --> 2250.200] In one store, for example, the dresses might come before the book section, then the shooting department where a student might make a turn.
319
+ [2250.200 --> 2257.200] The student walks past the cases where wallets and purses are displayed and turns at the corner of the display cases at the escalator.
320
+ [2258.200 --> 2265.200] At the top of the escalators, the student makes another turn.
321
+ [2265.200 --> 2272.200] It's useful to stop the student at different points along the route to review the direction of origin and destination.
322
+ [2272.200 --> 2281.200] There was purpose on a route to see someone, to communicate with somebody, or to carry out something specific like to buy a gift for a friend.
323
+ [2281.200 --> 2288.200] The route affords opportunity to practice money management and those numerous skills that are necessary to function effectively in life.
324
+ [2292.200 --> 2296.200] Street crossings are often an unavoidable link and numerous mobility routes.
325
+ [2296.200 --> 2302.200] This one aspect of deafblind mobility probably engenders more controversy than any other.
326
+ [2302.200 --> 2304.200] It's not a yes or no question.
327
+ [2304.200 --> 2311.200] The mobility instructor must carefully weigh the decision whether a child has the ability to make a particular crossing in relative safety.
328
+ [2316.200 --> 2319.200] Many of the students have considerable residual vision.
329
+ [2319.200 --> 2326.200] They require a great deal of training, but may be very effective in providing enough visual information to cross the street safely.
330
+ [2326.200 --> 2333.200] If a child has low vision, it might be advisable to make clockwise crossing at an intersection rather than a counterclockwise.
331
+ [2334.200 --> 2346.200] Whereas blind students might offer a counterclockwise crossing so as to go with the traffic sounds, deafblind travelers with some residual vision might want to get as close to the stopping cars as possible.
332
+ [2346.200 --> 2352.200] Wait one light cycle and actually see a car stop to get a clear signal to go.
333
+ [2352.200 --> 2367.200] This may require a greater distance to cross the street, but the signal to go is clearer and may be well worth the time in terms of safety.
334
+ [2367.200 --> 2384.200] As low vision travelers near the middle of the street during a crossing, they must learn to turn to the side to detect cars that will cross their path.
335
+ [2384.200 --> 2390.200] The travelers can either stop, signal, or vary their speed accordingly.
336
+ [2391.200 --> 2401.200] The age, size, maturity, appearance, intelligence and motivation of the student are further factors in safe street crossings.
337
+ [2401.200 --> 2409.200] If the child is obviously handicapped, or gives the appearance of being blind, a motorist might exhibit caution.
338
+ [2409.200 --> 2419.200] The color of the student's clothing, the time of day as far as the light is concerned, and even the weather, are factors that affect the visibility of a traveler to motorists.
339
+ [2419.200 --> 2426.200] The combinations of residual vision and hearing and how the student uses them are critical factors involved in making safe street crossings.
340
+ [2426.200 --> 2440.200] The student's level of understanding of concepts and their good sense in making decisions that affect their safety will weigh heavily in determining the safe limits of travel on the route.
341
+ [2441.200 --> 2447.200] The rate, speed, volume and grouping of traffic will be important factors.
342
+ [2447.200 --> 2453.200] These can vary according to time of day or even time of year in some places with seasonal traffic.
343
+ [2453.200 --> 2458.200] Traffic itself is not intrinsically dangerous, uncontrolled traffic is.
344
+ [2458.200 --> 2469.200] In some cases, heavy traffic generates low frequency sound cues that can be instrumental in keeping a deafblind traveler oriented to direction.
345
+ [2469.200 --> 2486.200] Confusing configurations of crosswalks, islands and combinations of traffic controls may complicate crossing to such an extent that the only travelers who may safely negotiate the street are students with a great deal of residual vision or excellent use of what little vision they may possess.
346
+ [2486.200 --> 2497.200] So many factors interrelate and the combinations of those factors are so unique that there can be no final answer as to whether the deafblind can make street crossings.
347
+ [2498.200 --> 2518.200] It's up to the mobility instructor to carefully weigh all the pertinent factors involved with each individual student at every crossing in terms of the student's safety and to minimize the dangers by judicious route design, timing of the crossing and effective instruction involving the concepts of safety and danger.
348
+ [2519.200 --> 2526.200] If the student's residual senses are such that they can master a crossing then it may become an integral part of the route.
349
+ [2526.200 --> 2539.200] That decision involves a tremendous responsibility on the part of the mobility instructor. Freedom always entails some degree of risk, but safety must always clearly and overwhelmingly outweigh any possible danger.
350
+ [2548.200 --> 2563.200] Communication skills might be employed, signs of various kinds used to solicit pedestrian aid. This works well in areas of high pedestrian traffic.
351
+ [2564.200 --> 2578.200] If street crossings are to be integrated within the route the child must have an extremely high degree of success with them.
352
+ [2579.200 --> 2595.200] The route is the proving ground for a number of sensory skills and techniques. When linked to the community through consumer lessons the route serves as an arena to work on people skills as well.
353
+ [2595.200 --> 2605.200] There's a refreshing measure of confidence that develops from the ability to make a decision to go somewhere, travel with as little assistance as possible and do something along the way.
354
+ [2606.200 --> 2625.200] The route is a channel through which a student travels toward people, toward experience. The route is a learning process itself a continuum of growth, change, and acquisition of abilities in purpose, a journey of self discovery, a route implies direction, internal direction as well as geographical.
355
+ [2625.200 --> 2643.200] It might be said that the route is one of the most important fundamentals of mobility training, the crucible where the skills, techniques, and the willingness to use them come together. Mobility itself is about people, allowing those people the chance to find their own way.
transcript/allocentric_I6IAhXM-vps.txt ADDED
@@ -0,0 +1,21 @@
 
1
+ [0.000 --> 7.880] We all use words and language every day to interact with people at work.
2
+ [7.880 --> 11.340] But do we really communicate effectively?
3
+ [11.340 --> 14.920] Effective communication can be broken down into three parts.
4
+ [14.920 --> 17.560] Listening, understanding and responding.
5
+ [17.560 --> 21.240] Let's look at these one by one.
6
+ [21.240 --> 26.280] Listening involves hearing the words that are being said, taking in non-verbal cues,
7
+ [26.280 --> 32.400] such as body language and facial expressions, plus paying attention to voice modulation.
8
+ [32.400 --> 38.840] We then move on to the next stage, understanding or giving meaning to what we have heard.
9
+ [38.840 --> 43.520] Most communication breakdowns happen at this stage, because we often misunderstand or
10
+ [43.520 --> 46.080] misinterpret what is being said.
11
+ [46.080 --> 52.000] When we make errors in interpretation, we are likely to respond incorrectly as well.
12
+ [52.000 --> 58.560] For example, your boss asks you if the task that he assigned to you has been completed.
13
+ [58.560 --> 63.800] If you interpret that as the boss blaming you for not completing the task, you are likely
14
+ [63.800 --> 65.680] to respond with anger.
15
+ [65.680 --> 71.320] However, if you interpret that as your boss wanting to just know the status of the task,
16
+ [71.320 --> 75.080] you are likely to feel less angry and defensive.
17
+ [75.080 --> 80.880] How we interpret what we hear is affected by the thoughts that pop up in our minds when
18
+ [80.880 --> 82.680] we are listening.
19
+ [82.680 --> 88.840] At Way Forward, we help you catch these automatic thoughts so you can reduce communication errors
20
+ [88.840 --> 91.360] and be more productive at work.
21
+ [91.360 --> 96.800] For more information, reach out at www.wayforward.co.in
transcript/allocentric_IhITqkNTaNo.txt ADDED
@@ -0,0 +1,4 @@
1
+ [0.000 --> 10.180] slow disabilities.
2
+ [60.000 --> 80.080] lingon
3
+ [80.080 --> 82.740] All right.
4
+ [83.080 --> 86.560] Good
transcript/allocentric_JFkHlqLIuD8.txt ADDED
The diff for this file is too large to render. See raw diff
 
transcript/allocentric_Ks-_Mh1QhMc.txt ADDED
@@ -0,0 +1,210 @@
1
+ [0.000 --> 23.840] So I want to start by offering you a free no tech life hack and all it requires of you
2
+ [23.840 --> 30.960] is this that you change your posture for two minutes. But before I give it away, I want to ask you to
3
+ [30.960 --> 36.080] right now do a little audit of your body and what you're doing with your body. So how many of you
4
+ [36.080 --> 41.200] are sort of making yourself smaller, maybe you're hunching, crossing your legs, maybe wrapping your
5
+ [41.200 --> 52.080] ankles, sometimes we hold onto our arms like this, sometimes we spread out. I see you. So I want you
6
+ [52.080 --> 56.160] to pay attention to what you're doing right now. We're going to come back to that in a few minutes
7
+ [56.160 --> 61.040] and I'm hoping that if you sort of learn to tweak this a little bit, it could significantly change
8
+ [61.040 --> 68.720] the way your life unfolds. So we're really fascinated with body language and we're particularly
9
+ [68.720 --> 73.920] interested in other people's body language. You know, we're interested in like, you know,
10
+ [73.920 --> 85.360] an awkward interaction or a smile or a contemptuous glance or maybe a very awkward wink or maybe even
11
+ [85.360 --> 91.120] something like a handshake. Here they are arriving at number 10 and look at this lucky policeman
12
+ [91.120 --> 95.680] gets to shake hands with the president of the United States. Oh, here comes the prime minister.
13
+ [95.920 --> 96.880] No.
14
+ [102.880 --> 109.440] So a handshake or the lack of a handshake can have us talking for weeks and weeks and weeks,
15
+ [109.440 --> 115.760] even the BBC and the New York Times. So obviously when we think about nonverbal behavior or body
16
+ [115.760 --> 121.200] language, but we call it nonverbal as social scientists, it's language. So we think about communication.
17
+ [121.200 --> 125.520] When we think about communication, we think about interactions. So what is your body language
18
+ [125.520 --> 132.080] communicating to me? What's mine communicating to you? And there's a lot of reason to believe that
19
+ [132.080 --> 137.040] this is a valid way to look at this. So social scientists have spent a lot of time looking at the
20
+ [137.040 --> 142.480] effects of our body language or other people's body language on judgments and we make sweeping
21
+ [142.480 --> 148.480] judgments and inferences from body language and those judgments can predict really meaningful
22
+ [148.480 --> 153.600] life outcomes like who we hire or promote, who we ask out on the date. For example,
23
+ [155.280 --> 161.840] Nalini Ambady, a researcher at Tufts University, shows that when people watch 30-second soundless
24
+ [161.840 --> 167.440] clips of real physician patient interactions, their judgments of the physician's niceness
25
+ [168.160 --> 172.480] predict whether or not that physician will be sued. So it doesn't have to do so much with whether
26
+ [172.480 --> 176.240] or not that physician was incompetent, but do we like that person and how they interacted?
27
+ [176.480 --> 183.760] Even more dramatic, Alex Todorov at Princeton has shown us that judgments of political candidates
28
+ [183.760 --> 193.360] faces in just one second predict 70% of US Senate and gubernatorial race outcomes. And even,
29
+ [193.360 --> 199.920] let's go digital, emoticons used well in online negotiations can lead you to claim more value
30
+ [199.920 --> 206.880] from that negotiation if you use them poorly, bad idea. So when we think of non-verbals, we think
31
+ [206.880 --> 211.600] of how we judge others, how they judge us and what the outcomes are, we tend to forget the
32
+ [211.600 --> 217.920] other audience that's influenced by our non-verbals and that's ourselves. We are also influenced by
33
+ [217.920 --> 223.760] our non-verbals, our thoughts and our feelings and our physiology. So what non-verbals am I talking
34
+ [223.760 --> 230.320] about? I'm a social psychologist, I study prejudice, and I teach it at a competitive business school.
35
+ [230.320 --> 237.280] So it was inevitable that I would become interested in power dynamics. I became especially interested in
36
+ [237.280 --> 243.040] non-verbal expressions of power and dominance. And what are non-verbal expressions of power and
37
+ [243.040 --> 248.800] dominance? Well, this is what they are. So in the animal kingdom, they are about expanding. So you
38
+ [248.800 --> 255.440] make yourself big, you stretch out, you take up space, you're basically opening up, it's about opening
39
+ [255.440 --> 262.480] up. And this is true across the animal kingdom, it's not just limited to primates and humans do the
40
+ [262.480 --> 269.040] same thing. So they do this both when they have power sort of chronically and also when they're
41
+ [269.040 --> 274.160] feeling powerful in the moment. And this one is especially interesting because it really shows us
42
+ [274.240 --> 280.560] how universal and old these expressions of power are. This expression, which is known as pride,
43
+ [281.200 --> 286.720] Jessica Tracy has studied, she shows that people who are born with sight and people who are
44
+ [286.720 --> 292.080] congenitally blind do this when they win at a physical competition. So when they cross the
45
+ [292.080 --> 296.960] finish line and they won, it doesn't matter if they've never seen anyone do it, they do this. So the
46
+ [296.960 --> 302.000] arms up in the V, the chin is slightly lifted. What are we doing when we feel powerless? We do
47
+ [302.000 --> 307.920] exactly the opposite. We close up, we wrap ourselves up, we make ourselves small, we don't want to
48
+ [307.920 --> 313.760] bump into the person next to us. So again, both animals and humans do the same thing. And this is
49
+ [313.760 --> 319.840] what happens when you put together high and low power. So what we tend to do when it comes to power
50
+ [319.840 --> 324.800] is that we complement the other's non-verbals. So if someone's being really powerful with us,
51
+ [324.800 --> 329.440] we tend to make ourselves smaller. We don't mirror them, we do the opposite of them. So
52
+ [330.320 --> 336.000] I'm watching this behavior in the classroom. And what do I notice? I notice that
53
+ [337.840 --> 343.520] MBA students really exhibit the full range of power non-verbals. So you have people who are like
54
+ [343.520 --> 347.840] caricatures of alphas, like really coming to the room, they get right into the middle of the room,
55
+ [348.400 --> 353.360] before class even starts, like they really want to occupy space. When they sit down, they're sort of
56
+ [353.360 --> 358.800] spread out, they raise their hands like this. You have other people who are virtually collapsing when
57
+ [358.800 --> 362.960] they come in, as soon as they come in, you see it. You see it on their faces and their bodies,
58
+ [362.960 --> 367.520] and they sit in their chair and they make themselves tiny, and they go like this when they raise their hand.
59
+ [368.560 --> 372.960] I notice a couple things about this. One, you're not going to be surprised. It seems to be related
60
+ [372.960 --> 381.280] to gender. So women are much more likely to do this kind of thing than men. Women feel chronically
61
+ [381.280 --> 386.560] less powerful than men, so this is not surprising. But the other thing I noticed is that it also
62
+ [386.560 --> 391.760] seemed to be related to the extent to which the students were participating and how well they
63
+ [391.760 --> 396.800] were participating. And this is really important in the MBA classroom because participation counts
64
+ [396.800 --> 402.880] for half the grade. So business schools have been struggling with this gender grade gap. You get
65
+ [402.880 --> 408.000] these equally qualified women and men coming in, and then you get these differences in grades,
66
+ [408.000 --> 413.600] and it seems to be partly attributable to participation. So I started to wonder, you know, okay,
67
+ [414.080 --> 418.640] so you have these people coming in like this and they're participating. Is it possible that we
68
+ [418.640 --> 424.240] could get people to fake it and would it lead them to participate more? So my main collaborator,
69
+ [424.240 --> 430.640] Dana Carney, who's at Berkeley, and I really wanted to know, can you fake it till you make it?
70
+ [430.640 --> 435.520] Like, can you do this just for a little while and actually experience a behavioral outcome that
71
+ [435.520 --> 441.120] makes you seem more powerful? So we know that our non-verbals govern how other people think and
72
+ [441.120 --> 445.920] feel about us. There's a lot of evidence, but our question really was, do our non-verbals
73
+ [445.920 --> 452.880] govern how we think and feel about ourselves? There's some evidence that they do. So, for example,
74
+ [453.920 --> 459.200] when we smile when we feel happy, but also when we're forced to smile by holding a pen in our
75
+ [459.200 --> 465.440] teeth like this, it makes us feel happy. So it goes both ways. When it comes to power,
76
+ [466.400 --> 473.120] it also goes both ways. So when you feel powerful, you're more likely to do this, but it's also
77
+ [473.120 --> 481.840] possible that when you pretend to be powerful, you are more likely to actually feel powerful.
78
+ [482.800 --> 488.240] So the second question really was, you know, so we know that our minds change our bodies,
79
+ [488.240 --> 494.560] but is it also true that our bodies change our minds? And when I say minds in the case of the
80
+ [494.560 --> 499.680] powerful, what am I talking about? So I'm talking about thoughts and feelings and the sort of
81
+ [499.680 --> 504.080] physiological things that make up our thoughts and feelings. And in my case, that's hormones.
82
+ [504.080 --> 509.120] I look at hormones. So what do the minds of the powerful versus the powerless look like?
83
+ [510.080 --> 516.080] So powerful people tend to be not surprisingly more assertive and more confident,
84
+ [516.640 --> 520.800] more optimistic. They actually feel that they're going to win even at games of chance.
85
+ [521.280 --> 526.880] They also tend to be able to think more abstractly. So there are a lot of differences.
86
+ [526.880 --> 530.400] They take more risks. There are a lot of differences between powerful and powerless people.
87
+ [531.040 --> 537.360] Physiologically, there are also our differences. On two key hormones, testosterone, which is the
88
+ [537.360 --> 543.920] dominant hormone, and cortisol, which is the stress hormone. So what we find is that
89
+ [544.080 --> 551.360] high power alpha males in primate hierarchies have high testosterone and low cortisol.
90
+ [552.240 --> 558.880] And powerful and effective leaders also have high testosterone and low cortisol. So what does
91
+ [558.880 --> 562.800] that mean? When people think about power, they tended to think only about testosterone,
92
+ [562.800 --> 567.920] because that was about dominance. But really, power is also about how you react to stress.
93
+ [567.920 --> 573.040] So do you want the high power leader that's dominant, high on testosterone, but really
94
+ [573.040 --> 578.960] stress reactive? Probably not. You want the person who's powerful and assertive and dominant,
95
+ [578.960 --> 586.800] but not very stress reactive. The person who's laid back. So we know that in primate hierarchies,
96
+ [587.280 --> 592.880] if an alpha needs to take over, if an individual needs to take over an alpha role,
97
+ [592.880 --> 598.720] sort of suddenly. Within a few days, that individual's testosterone has gone up significantly,
98
+ [598.720 --> 604.480] and cortisol has dropped significantly. So we have this evidence, both that the body can shape the
99
+ [604.480 --> 611.600] mind, at least at the facial level, and also that role changes can shape the mind. So what happens?
100
+ [611.600 --> 616.560] Okay, you take a role change. What happens if you do that at a really minimal level? Like this
101
+ [616.560 --> 621.200] tiny manipulation, this tiny intervention, for two minutes, you say, I want you to stand
102
+ [621.200 --> 628.000] like this and it's going to make you feel more powerful. So this is what we did. We decided to
103
+ [628.000 --> 634.720] bring people into the lab and run a little experiment. And these people adopted for two minutes,
104
+ [634.720 --> 640.400] either high power poses or low power poses. And I'm just going to show you five of the poses,
105
+ [640.400 --> 649.920] although they took on only two. So here's one, a couple more. This one has been dubbed the Wonder Woman
106
+ [650.000 --> 655.360] by the media. Here are a couple more. So you can be standing or you can be sitting.
107
+ [656.320 --> 659.600] Here are the low power poses. So you're folding up, you're making yourself small.
108
+ [662.080 --> 666.720] This one is very low power. When you're touching your neck, you're really kind of protecting yourself.
109
+ [667.600 --> 673.920] So this is what happens. They come in, they spit into a vial. For two minutes, say, you need to do
110
+ [673.920 --> 677.680] this or this. They don't look at pictures of the poses. We don't want to prime them with a concept
111
+ [677.680 --> 683.680] of power. We want them to be feeling power. So two minutes, they do this. We then ask them how
112
+ [683.680 --> 688.160] powerful do you feel on a series of items. And then we give them an opportunity to gamble.
113
+ [688.880 --> 693.680] And then we take another saliva sample. That's it. That's the whole experiment. So this is what we
114
+ [693.680 --> 699.280] find. Risk tolerance, which is the gambling. But we find is that when you're in the high power
115
+ [699.280 --> 705.920] pose condition, 86% of you will gamble. When you're in the low power pose condition, only 60%.
116
+ [705.920 --> 710.240] And that's a pretty whopping significant difference. Here's what we find on testosterone.
117
+ [711.360 --> 716.640] From their baseline, when they come in, high power people experience about a 20% increase.
118
+ [718.000 --> 724.000] And low power people experience about a 10% decrease. So again, two minutes and you get these changes.
119
+ [724.560 --> 729.760] Here's what you get on cortisol. High power people experience about a 25% decrease.
120
+ [730.720 --> 736.080] And the low power people experience about a 15% increase. So two minutes leads to these
121
+ [736.080 --> 743.280] hormonal changes that configure your brain to basically be either a sort of confident and comfortable,
122
+ [743.280 --> 749.760] or really stress reactive. And you know, feeling sort of shut down. And we've all had that feeling,
123
+ [749.760 --> 756.320] right? So it seems that our non-verbals do govern how we think and feel about ourselves. So it's
124
+ [756.320 --> 762.320] not just others, but it's also ourselves. Also, our bodies change our minds. But the next
125
+ [762.320 --> 767.360] question, of course, is can power posing for a few minutes really change your life in meaningful
126
+ [767.360 --> 772.240] ways? So this isn't the lab. It's this little task. It's just a couple of minutes. Where can you
127
+ [772.240 --> 779.200] actually apply this? Which we cared about, of course. And so we think it's really what matters.
128
+ [779.200 --> 784.720] I mean, where you want to use this is evaluative situations, like social threat situations. Where
129
+ [784.720 --> 790.320] are you being evaluated, either by your friends, like for teenagers at the lunchroom table? It could be,
130
+ [790.320 --> 796.240] you know, for some people speaking at a school board meeting, it might be giving a pitch or giving
131
+ [796.240 --> 802.400] a talk like this or doing a job interview. We decided that the one that most people could relate
132
+ [802.400 --> 808.400] to because most people had been through was the job interview. So we published these findings
133
+ [808.400 --> 812.800] and the media are all over it and they say, okay, so this is what you do when you go in for the job
134
+ [812.880 --> 818.800] interview, right? So we were, of course, horrified and it said, oh my god, no, no, no, that's not what we
135
+ [818.800 --> 824.400] meant at all for numerous reasons. No, no, no, don't do that. Again, this is not about you talking to
136
+ [824.400 --> 828.640] other people. It's you talking to yourself. What do you do before you go into a job interview? You do
137
+ [828.640 --> 833.200] this, right? You're sitting down. You're looking at your iPhone or your Android and not trying to
138
+ [833.200 --> 838.160] leave anyone out. You are, you know, you're looking at your notes. You're hunching up, making yourself small.
139
+ [838.160 --> 843.200] And really what you should be doing maybe is this like in the bathroom, right? Do that, find two
140
+ [843.200 --> 848.480] minutes. So that's what we want to test, okay? So we bring people into a lab and they do a cup,
141
+ [848.480 --> 853.600] they do either high or low power poses again. They go through a very stressful job interview.
142
+ [853.600 --> 861.360] It's five minutes long. They are being recorded. They're being judged also and the judges are trained
143
+ [861.360 --> 866.080] to give no nonverbal feedback. So they look like this. Like imagine this is the person
144
+ [866.080 --> 873.200] interviewing you. So for five minutes, nothing. And this is worse than being heckled. People hate
145
+ [873.200 --> 878.800] this. It's what Marianne LaFrance calls standing in social quicksand. So this really spikes your
146
+ [878.800 --> 882.720] cortisol. So this is the job interview we put them through because we really wanted to see what
147
+ [882.720 --> 888.960] happened. We then have these coders look at these tapes. Four of them. They're blind to the hypothesis.
148
+ [888.960 --> 894.240] They're blind to the conditions. They have no idea who's been posing in what pose. And they,
149
+ [894.800 --> 900.160] they end up looking at these sets of tapes and they say, oh, we want to hire these people,
150
+ [900.160 --> 905.600] all the high power posers. We don't want to hire these people. We also evaluate these people much
151
+ [905.600 --> 911.600] more positively overall. But what's driving it? It's not about the content of the speech. It's
152
+ [911.600 --> 915.600] about the presence that they're bringing to the speech. We also, because we rate them on all
153
+ [915.600 --> 919.920] these variables related to sort of competence. Like how well structured is the speech?
154
+ [920.000 --> 925.040] How good is it? What are their qualifications? No effect on those things. This is what's affected.
155
+ [925.040 --> 929.920] These kinds of things. People are bringing their true selves, basically. They're bringing themselves.
156
+ [929.920 --> 936.480] They bring their ideas, but as themselves with no residue over them. So this is what's driving
157
+ [936.480 --> 944.080] the effect or mediating the effect. So when I tell people about this, that our bodies change
158
+ [944.080 --> 948.080] our minds and our minds can change our behavior and our behavior can change our outcomes, they say to
159
+ [948.080 --> 954.480] me, I don't, it feels fake, right? So I said fake it till you make it. Like I don't, it's not me.
160
+ [954.480 --> 958.720] Like I don't want to get there and then still feel like a fraud. I don't want to feel like an
161
+ [958.720 --> 965.040] imposter. I don't want to get there only to feel like I'm not supposed to be here. And that really
162
+ [965.040 --> 969.600] resonated with me because I want to tell you a little story about being an imposter and feeling like
163
+ [969.600 --> 975.600] I'm not supposed to be here. When I was 19, I was in a really bad car accident. I was thrown out of a car
164
+ [975.760 --> 983.200] rolled several times. I was thrown from the car and I woke up in a head injury rehab ward and I had
165
+ [983.200 --> 990.240] been withdrawn from college. And I learned that my IQ had dropped by two standard deviations,
166
+ [990.960 --> 996.480] which was very traumatic. I knew my IQ because I had identified with being smart and I had been
167
+ [996.480 --> 1001.760] called gifted as a child. So I'm taken out of college. I keep trying to go back. They say you're
168
+ [1001.760 --> 1006.960] not going to finish college. There are other things for you to do but that's not going to work out
169
+ [1006.960 --> 1013.360] for you. So I really struggled with this and I have to say having your identity taken from you,
170
+ [1013.360 --> 1018.320] your core identity and for me it was being smart. Having that taken from you, there's nothing that
171
+ [1018.320 --> 1023.280] leaves you feeling more powerless than that. So I felt entirely powerless. I worked and worked and
172
+ [1023.280 --> 1027.840] worked and I got lucky and worked and got lucky and worked. Eventually I graduated from college.
173
+ [1028.560 --> 1035.520] Took me four years longer than my peers and I convinced someone, my angel advisor, Susan Fiske,
174
+ [1035.520 --> 1041.120] to take me on. And so I ended up at Princeton and I was like, I am not supposed to be here. I am
175
+ [1041.120 --> 1045.520] an imposter. And the night before my first year of talking, the first year of talking at Princeton is
176
+ [1045.520 --> 1051.760] a 20 minute talk to 20 people. That's it. I was so afraid of being found out the next day
177
+ [1051.760 --> 1057.280] that I called her and said, I'm quitting. She was like, you are not quitting because I took a gamble
178
+ [1057.600 --> 1061.600] on you and you're staying. You're going to stay and this is what you're going to do. You're going
179
+ [1061.600 --> 1066.880] to fake it. You're going to do every talk that you ever get asked to do. You're just going to do it
180
+ [1066.880 --> 1072.800] and do it and do it even if you're terrified and just paralyzed and having an out of body experience.
181
+ [1072.800 --> 1078.800] Until you have this moment where you say, oh my gosh, I'm doing it. I have become this. I am actually
182
+ [1078.800 --> 1083.520] doing this. So that's what I did. Five years in grad school. A few years, I'm at Northwestern,
183
+ [1083.520 --> 1088.640] I moved to Harvard. I'm at Harvard. I'm not really thinking about it anymore. But for a long time,
184
+ [1088.640 --> 1093.120] I had been thinking not supposed to be here. I'm not supposed to be here. So the end of my first year
185
+ [1093.120 --> 1100.080] at Harvard, a student who had not talked in class the entire semester who I had said, look, you
186
+ [1100.080 --> 1104.240] got to participate or else you're going to fail, came into my office. I really didn't know her at all.
187
+ [1104.960 --> 1110.960] And she said, she came in totally defeated and she said, I'm not supposed to be here.
188
+ [1114.160 --> 1121.120] And that was the moment for me because two things happened. One was that I realized, oh my gosh,
189
+ [1121.120 --> 1126.160] I don't feel like that anymore. I don't feel that anymore, but she does and I get that feeling.
190
+ [1126.160 --> 1131.360] And the second one, she is supposed to be here. Like she can fake it. She can become it. So I was like,
191
+ [1131.920 --> 1136.240] yes, you are. You are supposed to be here. And tomorrow you're going to fake it. You're going to
192
+ [1136.240 --> 1139.680] make yourself powerful. And you're going to.
193
+ [1143.920 --> 1150.400] And you're going to go into the classroom and you are going to give the best comment ever.
194
+ [1151.520 --> 1155.520] And she gave the best comment ever. And people turned around and they were like, oh my god,
195
+ [1155.520 --> 1161.360] I didn't even notice her sitting there. She comes back to me months later and I realized that she
196
+ [1161.360 --> 1166.960] had not just faked it till she made it. She had actually faked it till she became it. So she had
197
+ [1166.960 --> 1173.280] changed. And so I want to say to you, don't fake it till you make it. Fake it till you become it.
198
+ [1174.400 --> 1179.120] Do it enough until you actually become it and internalize it. The last thing I'm going to
199
+ [1179.120 --> 1188.640] leave you with is this, tiny tweaks can lead to big changes. So this is two minutes, two minutes,
200
+ [1188.640 --> 1193.040] two minutes, two minutes. Before you go into the next stressful evaluative situation,
201
+ [1193.040 --> 1198.720] for two minutes, try doing this in the elevator in a bathroom stall at your desk behind closed doors.
202
+ [1198.720 --> 1203.360] That's what you want to do. Configure your brain to cope the best in that situation.
203
+ [1203.360 --> 1208.080] Get your testosterone up. Get your cortisol down. Don't leave that situation feeling like,
204
+ [1208.080 --> 1212.800] oh, I didn't show them who I am. Leave that situation feeling like, oh, I really feel like I got to
205
+ [1212.800 --> 1219.920] say who I am and show who I am. So I want to ask you first, you know, both to try power posing.
206
+ [1220.800 --> 1227.040] And also I want to ask you to share this science because this is simple. I don't have ego involved in
207
+ [1227.040 --> 1231.440] this. Give it away. Like share it with people because the people who can use it the most are the
208
+ [1231.440 --> 1238.880] ones with no resources and no technology and no status and no power. Give it to them because they
209
+ [1238.880 --> 1244.480] can do it in private. They need their bodies, privacy and two minutes and it can significantly change
210
+ [1244.480 --> 1254.480] the outcomes of their life. Thank you.
transcript/allocentric_M5i5c9kNbOQ.txt ADDED
@@ -0,0 +1,46 @@
1
+ [0.000 --> 7.120] April is autism awareness month and in that spirit we want to introduce you to a very special young man named Chase.
2
+ [7.120 --> 13.920] David, Action News reporter Kim Russell took to Twitter asking if you would like to see more happy and positive stories right here on the news.
3
+ [15.760 --> 24.560] This one is about Chase, the little boy who couldn't communicate with the world. His parents trying one last therapy and hopes for change.
4
+ [24.560 --> 29.920] The results? Happy both day to day. Happy both day to years.
5
+ [29.920 --> 36.800] Here's how it happened. Imagine living every day not only unable to speak but to make sounds.
6
+ [36.800 --> 38.480] All through this.
7
+ [38.480 --> 45.440] This is video of therapists giving Chase a little boy from Macomb County a way to communicate with technology when he was three years old.
8
+ [45.440 --> 47.440] Switch, switch, switch, no.
9
+ [49.440 --> 50.640] All good job.
10
+ [51.440 --> 56.400] The idea is once you teach someone how to communicate one way, sometimes speech follows.
11
+ [56.400 --> 57.440] Cookie.
12
+ [57.440 --> 64.800] A year later Chase had made some progress but not as much as hoped. His parents have been told by some that it was going to be too late for him to learn.
13
+ [64.800 --> 69.520] There was a point when we thought at four if he's not talking there's a good chance he's not going to talk.
14
+ [69.520 --> 71.520] We're like well he's going to be nonverbal.
15
+ [71.520 --> 76.960] They decided to try one more thing. They came here to the Kaufman Children Center for an evaluation.
16
+ [77.200 --> 81.360] Experts here recognized Chase didn't just have autism.
17
+ [81.360 --> 87.840] He had severe apraxia. That meant his brain could actually organize the words but he didn't have the motor ability to say them.
18
+ [87.840 --> 93.440] This called for different treatments. They taught him sign language in a fun way to expand his ability to communicate.
19
+ [96.320 --> 98.960] At the same time they treated his apraxia.
20
+ [98.960 --> 102.320] They had to actually put tools in his mouth to get the sound.
21
+ [102.960 --> 106.720] You know to get like to even get his position of his jaw correct.
22
+ [106.720 --> 112.160] When Chase first came to us he had no vocal imitation. He couldn't imitate any sounds.
23
+ [112.160 --> 112.960] Hamsters.
24
+ [112.960 --> 113.760] Hamsters.
25
+ [113.760 --> 119.520] Now to see him today he can talk in sentences. He has a sense of humor.
26
+ [119.520 --> 122.640] Happy birthday to you. Happy birthday.
27
+ [122.640 --> 125.520] It's it's just like a miracle.
28
+ [125.520 --> 131.920] Nancy Kaufman owns the Kaufman Children Center. She says miraculous things happen when the right therapies are brought together.
29
+ [131.920 --> 138.720] It's about the techniques and so we may be doing something that isn't working for the child
30
+ [138.720 --> 143.280] and we may not have known that there was another way to approach the issues.
31
+ [143.280 --> 148.480] Chase's parents are sharing their story because they want other parents of children who are nonverbal to know
32
+ [148.480 --> 152.640] that sometimes a change of therapy can make all the difference.
33
+ [152.640 --> 154.400] His journey has just been amazing.
34
+ [154.400 --> 157.840] The band by Lake TLC.
35
+ [157.840 --> 162.080] I see a web or a truck around me.
36
+ [162.080 --> 164.400] It's long. It's tough.
37
+ [164.400 --> 165.600] But you got to stay with it.
38
+ [165.600 --> 167.760] He surprises us all the time.
39
+ [167.760 --> 170.320] So you got to stay positive.
40
+ [170.320 --> 173.280] In West Bloomfield, Kim Russell, Seven Action News.
41
+ [173.280 --> 179.040] Wow and you can see the tears in their eyes and boy I can't wait until that young man grows up.
42
+ [179.040 --> 182.320] I mean he's a little boy now but I mean with the progress who knows what will happen.
43
+ [182.320 --> 188.320] Thank you and very special right and happy for that family to see this progress and thanks to
44
+ [188.320 --> 190.400] Kim Russell for bringing that to us.
45
+ [190.400 --> 193.440] For sure I'm sure you inspired a lot of people today.
46
+ [193.440 --> 193.840] The recent
transcript/allocentric_MuRVOQY8KoY.txt ADDED
The diff for this file is too large to render. See raw diff
 
transcript/allocentric_OOpVTlrTYXw.txt ADDED
@@ -0,0 +1,55 @@
1
+ [0.000 --> 16.240] People on Earth use nonverbal ways to communicate every day, like facial expressions, hand signals,
2
+ [16.240 --> 23.240] body language, and American Sign Language. Astronauts in space have their own nonverbal way
3
+ [23.240 --> 25.480] to communicate too.
4
+ [25.480 --> 30.280] During the space walk and just generally during space operations all the time communication
5
+ [30.280 --> 34.320] is hugely important. Talking to the people who are outside, talking to people on the ground,
6
+ [34.320 --> 37.360] and obviously we have radios to do that, but a lot of times we wind up having to do that
7
+ [37.360 --> 38.860] nonverbal way.
8
+ [38.860 --> 41.360] Hold on, stop.
9
+ [41.360 --> 45.800] The hold signal. So maybe sometimes your ears may not be clearing fast enough as the
10
+ [45.800 --> 50.040] pressure is changing, maybe someone's helping rescue you, but you're still attached and
11
+ [50.040 --> 54.560] you realize that. In any case you give them a hold signal and that should tell everyone
12
+ [54.560 --> 58.160] to stop everything, stop all the movement, and kind of look around for something
13
+ [58.160 --> 60.440] that doesn't seem to be normal.
14
+ [60.440 --> 64.080] You okay? I'm okay.
15
+ [64.080 --> 67.560] We really want to check on each other, check on our buddies. So the way we usually do that
16
+ [67.560 --> 74.840] is we use the okay hand symbol and so we'll use it as a question and as an answer. So if
17
+ [74.840 --> 80.680] I'm pointing at Raja and then giving him the okay sign, I'm saying, are you okay? And
18
+ [80.680 --> 83.520] if he is, he'll tell me, I am okay.
19
+ [83.520 --> 86.240] I see what you're saying.
20
+ [86.240 --> 89.920] There's a lot of nonverbal that just comes from knowing and working with people that makes
21
+ [89.920 --> 93.620] a big difference when you're working day in and day out, especially on a high stress
22
+ [93.620 --> 97.720] thing like a spacewalk, where this to look at someone's face can tell you like either,
23
+ [97.720 --> 101.960] yeah, I'm good with this plan or I've got reservations, maybe we should stop and talk
24
+ [101.960 --> 107.560] about this and you can do all that with just a glance even through the glass of the space
25
+ [107.560 --> 108.560] helmets.
26
+ [108.560 --> 111.760] A handful of numbers.
27
+ [111.760 --> 116.000] If you're flying formation, which we practice in the T38, we also use hand signals just
28
+ [116.000 --> 120.160] to keep up with those skills. And so one of the most common things is transmitting numbers
29
+ [120.160 --> 124.640] with your hands. And so one, two, three, four, and five are pretty easy. And then the way
30
+ [124.640 --> 130.200] we do six, seven, eight, nine, and ten without taking your hand off the stick is to turn your hand
31
+ [130.200 --> 135.080] horizontal. And so you can do the same thing with air pressure. So for example, if I had
32
+ [135.080 --> 138.600] a problem with my suit and I was trying, she was trying to tell me, you know, what is
33
+ [138.600 --> 142.240] your oxygen pressure? And I couldn't talk because I had a communications problem. I could
34
+ [142.240 --> 147.520] still tell Kayla, you know, I could tell her a one and then this would tell her one and
35
+ [147.520 --> 153.160] six. And then, you know, I could do a combination of those numbers to transmit to her nonverbally
36
+ [153.160 --> 159.240] what the state of any of my values, my suit, whether it's suit pressure, water pressure,
37
+ [159.240 --> 163.320] temperature, all the different numerical values we can use hand signals for that.
38
+ [163.320 --> 167.520] Maybe we could demonstrate a few for each other and see if we can tell what the other
39
+ [167.520 --> 173.120] person's hand signals are. So I'll go first, Raja, and you can see if you know what I'm
40
+ [173.120 --> 180.480] trying to tell you. What do you think Kayla is trying to communicate? Is she telling Raja
41
+ [180.480 --> 187.280] she can't hear, that he needs to clean his helmet visor, or asking him what song he's listening
42
+ [187.280 --> 193.560] to? Alright, so what Kayla is telling me there is she's pointing to herself, which is
43
+ [193.560 --> 197.880] indicating that the person who has the problem, you could also point at someone else, but
44
+ [197.880 --> 201.440] in her case, she's pointing at herself so she's telling me she has a problem and then she
45
+ [201.440 --> 206.400] waved across her ears, which is telling me she can't hear. Okay, so let's say we have
46
+ [206.400 --> 210.880] that same scenario. So we've had some kind of loss of comm and Kayla came to check on me
47
+ [210.880 --> 216.760] while I was out on a spacewalk. When she got there, I might give her a signal like this.
48
+ [216.760 --> 222.320] Can you figure out what Raja is trying to communicate? That they need to move to the other side
49
+ [222.320 --> 228.880] of the space station, that they need to wrap up and finish what they're doing. Or is he
50
+ [228.880 --> 235.320] asking her to do a flip in microgravity? So there Raja would be trying to communicate to me
51
+ [235.320 --> 239.360] that we need to speed things up. Maybe he has a problem that's accelerating or getting
52
+ [239.360 --> 244.240] worse, so he's saying it's kind of an urgent situation here. Let's get a move on, or
53
+ [244.240 --> 250.480] less. Next time you see astronauts on a spacewalk, look out for some of the hand signals you
54
+ [250.480 --> 255.800] learned today. You can even try them out with your friends to talk in your own non-verbal
55
+ [255.800 --> 261.320] code. For more fun with STEM, visit stem.nasa.gov.
transcript/allocentric_OdFJuKhtBWU.txt ADDED
@@ -0,0 +1,531 @@
1
+ [0.000 --> 12.480] The fourth talk is by Michael Hasselmo, who comes to us from Boston University.
2
+ [12.480 --> 16.720] I discovered he actually doesn't have a PhD, which disturbed me, but then I found out
3
+ [16.720 --> 19.760] that he has a DPhil from Oxford.
4
+ [19.760 --> 22.560] So that's good news.
5
+ [22.560 --> 27.680] So he is of course very well known for his really interesting work, especially on
6
+ [27.680 --> 36.800] neuromodulators and computational work on understanding the entorhinal-hippocampal
7
+ [36.800 --> 39.760] circuits, how memory is formed.
8
+ [39.760 --> 48.720] And he really is working hard and asking biophysical questions, how ion channels play a role
9
+ [48.720 --> 54.480] in coding, how specific synapses, synaptic plasticity properties are encoded.
10
+ [54.480 --> 61.600] For example, during rhythmic activities like the Theta rhythm.
11
+ [61.600 --> 70.720] And he is also, I think, the one who is perhaps most out of the speakers, really like forming
12
+ [70.720 --> 77.240] computational hypotheses that then he really tests with experiments.
13
+ [77.240 --> 87.520] And some of these computational models are motivated heavily by biophysical facts and
14
+ [87.520 --> 88.880] others are high level.
15
+ [88.880 --> 94.520] And it is very interesting how he can move between these two levels.
16
+ [94.520 --> 102.200] He has trained many people, and he actually trained Lisa as well.
17
+ [102.200 --> 104.480] And he is widely known for his work.
18
+ [104.480 --> 107.400] And he also wrote a very interesting book if you want to buy it.
19
+ [107.400 --> 109.400] I don't get that cut.
20
+ [109.400 --> 112.240] So this is all just for your interest.
21
+ [112.240 --> 113.840] It's really a true pleasure to have you here, Mike.
22
+ [113.840 --> 115.840] I'm looking forward to your talk.
23
+ [115.840 --> 119.680] Thanks very much.
24
+ [119.680 --> 120.680] Thanks very much.
25
+ [120.680 --> 126.160] Ivan for inviting me and including me in this symposium and also to Jay for including
26
+ [126.160 --> 127.160] me.
27
+ [127.160 --> 128.160] It's a lot of fun.
28
+ [128.160 --> 132.280] I've really enjoyed the other talks and it's marvelous to see the interaction of the
29
+ [132.280 --> 134.440] research in the different labs.
30
+ [134.440 --> 139.960] So I really followed the theme of the conference in terms of using space and time in my talk.
31
+ [139.960 --> 143.280] And I tried to predict what the other speakers were going to talk about.
32
+ [143.280 --> 148.080] And neither Jill nor Edvard really talked as much about time as I expected, but I actually
33
+ [148.080 --> 149.400] refer a little bit to it.
34
+ [149.400 --> 151.880] Edvard had it there, but he didn't have time to get to it.
35
+ [151.880 --> 158.160] So just right up front I wanted to thank the people that did the work that I'll be presenting.
36
+ [158.160 --> 163.200] So I'll talk about work that was done by Jake Hinman in my laboratory as well as Andy
37
+ [163.200 --> 168.920] Alexander and worked on by Jennifer Robinson in collaboration with Mark Brandon.
38
+ [168.920 --> 172.800] And then Mark is an alumnus, but I'll be talking about some of his work as well as work
39
+ [172.800 --> 176.240] that Ben Krauss and Caitlin Monahan did.
40
+ [176.240 --> 181.000] And some modeling work done by Florian Raudies. And, you know, Lisa worked with me, but she was
41
+ [181.000 --> 185.040] doing intracellular work and I won't talk about that work.
42
+ [185.040 --> 186.040] So this is just an overview.
43
+ [186.040 --> 191.720] I'll talk about some data on neurons that code both time and space.
44
+ [191.720 --> 195.200] And then I'll talk about mechanisms, potential mechanisms for coding of time.
45
+ [195.200 --> 200.120] We don't have the final answers for that, but some potential mechanisms as well as mechanisms
46
+ [200.120 --> 204.680] for coding space and particularly new work that hasn't been published yet on the influence
47
+ [204.680 --> 208.240] of environmental boundaries.
48
+ [208.240 --> 214.920] So as Ivan nicely mentioned, I have a book about modeling of episodic memory and it really
49
+ [214.920 --> 221.160] talks about a particular framework for modeling how you could get the spatial temporal trajectories
50
+ [221.160 --> 222.160] of memory.
51
+ [222.160 --> 228.400] So, Tulving defined memory in terms of what did you do at time t in place p. And in the
52
+ [228.400 --> 233.240] book I describe how you could potentially have a circuit through the hippocampus and entorhinal
53
+ [233.240 --> 237.600] cortex that allows you to encode and retrieve particular trajectories.
54
+ [237.600 --> 241.680] Just so you know, I'm using the cursor because they told me that the laser pointer doesn't
55
+ [241.680 --> 243.360] show up well on the video camera.
56
+ [243.360 --> 247.760] So I'm trying to follow the instructions here, but it's a little bit slow actually.
57
+ [247.760 --> 253.560] But one another important point about it is that the trajectories would not only be trajectories
58
+ [253.560 --> 257.000] through space, but you can often have events where you're sitting in the same location as
59
+ [257.000 --> 260.720] you're doing today and you hopefully will have a very clear distinct episodic memory
60
+ [260.720 --> 262.640] of the different speakers today.
61
+ [262.640 --> 265.280] And so you really need some way of discriminating different times.
62
+ [265.280 --> 269.360] If somebody asks you what happened at the beginning of the symposium versus what happened
63
+ [269.360 --> 272.680] at the end of the symposium, you can do that even though you've been in the same location
64
+ [272.680 --> 276.360] the entire time.
65
+ [276.360 --> 280.320] So just to kind of give the same summary that other people gave, of course there's plenty
66
+ [280.320 --> 286.240] of evidence that the hippocampus, entorhinal cortex, and adjacent structures are involved
67
+ [286.240 --> 291.040] in encoding of episodic memory, both from the lesion and patient data, but also from
68
+ [291.040 --> 295.920] fMRI studies of encoding activity in these structures.
69
+ [295.920 --> 301.800] And the rodent system is a nice system for studying these structures because the hippocampus
70
+ [301.800 --> 308.120] and entorhinal cortex are disproportionately large in the rodent relative to other structures
71
+ [308.120 --> 313.040] and so it makes it nice for doing the in vivo unit recording of the sort that's been used
72
+ [313.040 --> 318.240] to discover the various different functional cell types that Edvard and others already
73
+ [318.240 --> 319.840] summarized in their talks.
74
+ [319.840 --> 325.560] And I'll actually talk about a lot of these different functional subtypes in the various
75
+ [325.560 --> 328.400] components of the talk.
76
+ [328.400 --> 332.800] So first I want to kind of hone in on this kind of the main theme of the symposium which
77
+ [332.800 --> 337.520] is the coding of space and time and the fact that there are actually individual neurons
78
+ [337.520 --> 340.920] that will simultaneously code both space and time.
79
+ [340.920 --> 346.240] So I'm going to show you a video of cells that were referred to as time cells but they
80
+ [346.240 --> 349.400] actually are also coding place.
81
+ [349.400 --> 353.400] So this is a video of a rat running on a spatial alternation task.
82
+ [353.400 --> 357.800] This is actually a project that I did in collaboration with Howard Eichenbaum's lab with Ben Krauss
83
+ [357.800 --> 362.880] as a senior author and the colors that you see in the tones you hear are indicating different
84
+ [362.880 --> 363.880] neurons recorded.
85
+ [363.880 --> 368.680] So three different neurons, red coded by red, green and blue and you can see the rat is
86
+ [368.680 --> 371.560] not really leaving the treadmill.
87
+ [371.560 --> 376.520] It's running on this treadmill in the center of the spatial alternation task but you can
88
+ [376.520 --> 381.840] see that at different times during running the different cell types or the different cells
89
+ [381.840 --> 383.040] are firing.
90
+ [383.040 --> 385.800] And then the rat goes and starts to do the spatial alternation.
91
+ [385.880 --> 390.880] You can see the cell coded in red actually fires in a particular location on the maze
92
+ [390.880 --> 392.160] as a place cell.
93
+ [392.160 --> 396.560] Then it gets its reward and then you'll see another firing here where the cell coded
94
+ [396.560 --> 399.840] in blue is actually firing as a place cell.
95
+ [399.840 --> 405.720] But then when it gets back on the treadmill you can see the cell coded in red fires.
96
+ [405.720 --> 411.160] Then the cell coded in green is firing during the middle of the period.
97
+ [411.160 --> 414.360] And then the cell coded in blue is firing at the end of the period.
98
+ [414.360 --> 420.360] And you can see the rat's location and direction is not changed during this period of running
99
+ [420.360 --> 421.360] on the treadmill.
100
+ [421.360 --> 425.200] And in relation to Jeff's talk I want to point out this treadmill doesn't have features
101
+ [425.200 --> 426.680] on the treadmill.
102
+ [426.680 --> 428.720] So it's a featureless treadmill.
103
+ [428.720 --> 433.960] So the animal doesn't have any indication on the treadmill surface to cue this firing.
104
+ [433.960 --> 438.640] And the firing is relative to the start of each of these 16 second trials.
105
+ [438.640 --> 442.160] Not the particular portion of the treadmill that the rat is on.
106
+ [442.160 --> 447.520] And you can see there's actually tiling across similar to the place cell tiling that
107
+ [447.520 --> 450.840] I think Jeff showed, with the density around the reward locations.
108
+ [450.840 --> 453.240] There's also tiling of different time intervals.
109
+ [453.240 --> 458.440] So during this 16 second period of running different neurons if you sort them here according
110
+ [458.440 --> 462.440] to what time interval they're coding you can see they're coding a number of different
111
+ [462.440 --> 463.440] time intervals.
112
+ [463.440 --> 466.040] And this is actually a recurring theme now in the data.
113
+ [466.040 --> 471.200] A large number of different groups have shown this type of coding across a range of
114
+ [471.200 --> 472.520] intervals.
115
+ [472.520 --> 477.720] But as you can see for any given cell if you line up all of the different trials, these
116
+ [477.720 --> 483.600] 16 second trials, there's cells that are this one cell here is reliably coding the beginning
117
+ [483.600 --> 485.400] of the running period.
118
+ [485.400 --> 489.400] This cell is reliably coding the middle of the running period and this cell is reliably
119
+ [489.400 --> 491.280] coding the end of the running period.
120
+ [491.280 --> 498.880] So the same cells that are coding particular spatial locations are also coding time.
121
+ [498.880 --> 507.240] Now using an endoscope that was developed by Mark Schnitzer, from Inscopix, Will
122
+ [507.240 --> 512.280] Mau in Howard Eichenbaum's laboratory did experiments looking at the time cells with
123
+ [512.280 --> 518.360] imaging in hippocampal region CA1 and you can see here it's plotted for just individual
124
+ [518.360 --> 523.520] trials but you'll see a repeating motif where there's a time cell firing up here on this
125
+ [523.520 --> 528.800] trial and then the time field cell firing here and then a time cell firing down here.
126
+ [528.800 --> 534.320] And then on the next cell, the next trial, this is all from a rat running on a treadmill.
127
+ [534.320 --> 538.400] Here's again the time cell here and then the time cell over here and then the time cell
128
+ [538.400 --> 540.480] down in the bottom.
129
+ [540.480 --> 545.960] So similar to the unit recording, the electrophysiological recording, the calcium imaging shows
130
+ [545.960 --> 551.760] consistent firing for this particular unit of firing near the beginning of each trial
131
+ [551.760 --> 553.960] for this one firing near the end.
132
+ [553.960 --> 560.960] And Will could then sort these large numbers of units according to what period during the
133
+ [560.960 --> 562.880] 10 second interval they were coding.
134
+ [562.880 --> 567.640] You can see there's actually a greater number density of cells coding the beginning of the
135
+ [567.640 --> 574.920] running and it actually progressively decreases in a very consistent manner to get fewer firing
136
+ [574.920 --> 579.520] fields later on in the interval but actually slightly wider firing fields.
137
+ [580.000 --> 584.080] And the advantage of the imaging is that these were then lined up with each other over
138
+ [584.080 --> 589.120] days and so the same population of neurons or at least many of the same neurons could be
139
+ [589.120 --> 596.120] sampled over many days and you can see a similar coding across the different days by these
140
+ [596.120 --> 597.360] neurons.
141
+ [597.360 --> 601.600] But this has allowed us then to do what Jill Leutgeb's laboratory did.
142
+ [601.600 --> 607.400] The Emily Mankin studies in 2012 and 2015 where they looked at correlations in their
143
+ [607.400 --> 614.120] case neuronal activity of place cells in CA2 or CA3 or CA1 and we saw a similar in this
144
+ [614.120 --> 617.920] experiment, a similar decrease in correlation.
145
+ [617.920 --> 622.280] In this case across different trials it was a shorter period than the many hours they
146
+ [622.280 --> 629.360] study but you see a gradual decrease in the correlation over time which could then be
147
+ [629.360 --> 634.880] the basis for forming an episodic representation that is distinct not only for times within
148
+ [634.880 --> 641.280] a trial that could be coded by different neurons but also by the correlation across the
149
+ [641.280 --> 646.480] whole population which decreases suggesting that you have a change in representation between
150
+ [646.480 --> 652.440] trials near the beginning of each day and trials near the end, and then Will Mau also looked
151
+ [652.440 --> 657.880] at the correlation in this case the decoding error across days and so there's less decoding
152
+ [657.880 --> 662.600] error for within a day versus for a one day interval versus a two day interval.
153
+ [662.680 --> 667.360] There's actually a progressive change in the population representation across days that
154
+ [667.360 --> 673.960] could allow you to allow the animal to retrieve episodic memories for different days and
155
+ [673.960 --> 677.400] that would be analogous to what might be necessary if you were going to remember where you
156
+ [677.400 --> 682.320] parked your car today versus where you parked your car yesterday.
157
+ [682.320 --> 687.040] So this is intriguing for potential mechanisms for episodic memory.
158
+ [687.040 --> 692.600] Now as Edvard showed of course there's also grid cells in the medial entorhinal cortex
159
+ [692.600 --> 697.480] and this is just showing a video of a rat foraging in an open field environment.
160
+ [697.480 --> 703.000] This is recording done by Caitlin Monahan in my lab showing recording of a grid cell but
161
+ [703.000 --> 707.040] I'm only pointing this out because the question is you know they're coding space but are
162
+ [707.040 --> 712.960] they similar to place cells in that they would also code the time of the running.
163
+ [712.960 --> 718.120] So here is the similar sort of paradigm where we are in this case Ben Krauss was taking
164
+ [718.120 --> 722.120] neurons that Mark Brandon had found and identified as grid cells and then having the animal
165
+ [722.120 --> 726.880] run on the same sort of treadmill task with the spatial alternation and now this is
166
+ [726.880 --> 731.840] a single grid cell that's being recorded but you'll see that it actually fires at three
167
+ [731.840 --> 736.480] different distinct times so here's the start of a trial fired right at the beginning and
168
+ [736.480 --> 741.480] it stops firing for a period of time then it fires for a period of time then it stops.
169
+ [743.520 --> 748.920] Then it starts firing again at the end of the trial.
170
+ [748.920 --> 753.720] So here this is similar to this cell it's not the same cell but it's similar to this
171
+ [753.720 --> 758.920] cell and that it has multiple firing fields in a two-dimensional open field environment
172
+ [758.920 --> 763.720] but as you can see from the movie this similar cell fires at the beginning doesn't fire
173
+ [763.720 --> 768.520] then fires again then doesn't fire then fires at the end so even in the case where a cell
174
+ [768.520 --> 772.000] that's identified as a grid cell in a two-dimensional environment with different firing
175
+ [772.080 --> 776.160] fields when it's running on a treadmill in the same location with the same direction
176
+ [776.160 --> 781.560] shows distinct coding of different time intervals during that period of running so again the
177
+ [781.560 --> 786.680] same cells can code both spatial location and time.
178
+ [786.680 --> 793.400] So you know we've seen quite robust coding of both dimensions now the question is what
179
+ [793.400 --> 798.600] are the potential mechanisms for coding these dimensions and right up front here is
180
+ [798.680 --> 803.160] where I tried to predict what Edvard would end up speaking about and actually put in a slide
181
+ [803.160 --> 806.720] he did show this slide so I guess my prediction was correct he just didn't have that much
182
+ [806.720 --> 811.000] time to show it but so this is this interesting experiment that they published where they
183
+ [811.000 --> 818.000] saw in a relatively long period of the animal foraging in environments with different colors
184
+ [818.160 --> 823.040] black environments or white environments, they would often observe neurons that would
185
+ [823.040 --> 829.120] show an exponential decay of firing rate during the time of foraging within an individual
186
+ [829.120 --> 834.040] environment this is in lateral entorhinal cortex in contrast to medial entorhinal cortex
187
+ [834.040 --> 839.040] which is where the grid cells have been described as well as the time coding that I just showed
188
+ [839.040 --> 842.560] you but this is lateral entorhinal cortex where they show this interesting exponential
189
+ [842.560 --> 848.760] decay they also often saw cells that would reduce exponentially and seemed to show
190
+ [848.760 --> 854.240] it fit with an exponential decay over a very long period of time and this was really exciting
191
+ [854.240 --> 860.240] for myself and for Mark Howard at Boston University because it fits very well with the framework
192
+ [860.240 --> 869.440] that Mark has been using at Boston University for many years which is to have an assumption
193
+ [869.440 --> 875.880] of an exponentially decaying representation of time that then is combined and can be used
194
+ [875.880 --> 882.120] to generate time cell responses so we had actually already done this model and this is a
195
+ [882.120 --> 888.720] in a paper by Yue Liu, Zoran Tiganj, myself and Mark Howard we'd already actually written
196
+ [888.720 --> 894.720] up this paper and submitted it at the time that we saw Edvard's paper so it was perfect
197
+ [894.720 --> 903.720] for us because in our paper we had we were actually motivated by slice physiology data by
198
+ [903.720 --> 909.400] my former collaborator Angel Alonso who sadly passed away in 2005 but Angel Alonso's group
199
+ [909.400 --> 915.200] had observed neurons that had firing in slice preparations that would have firing rates that
200
+ [915.200 --> 920.160] would in some cases decay exponentially for periods of time and this work has also been done
201
+ [920.160 --> 925.640] by Motoharu Yoshida my former postdoctoral fellow this is actually here an intracellular
202
+ [925.640 --> 930.520] recording done by Schwind in Cortex there's a number of different preparations that show
203
+ [930.520 --> 936.280] an intracellular recording this type of exponential decay so we'd use this as a justification for
204
+ [936.280 --> 943.080] a spiking neuronal model that could model how neurons could show an exponential decay of firing
205
+ [943.080 --> 950.720] over time at different time constants similar very similar to what the cell paper shows from
206
+ [950.720 --> 957.400] the Moser laboratory so this was the input set of spiking neurons with different time constants
207
+ [957.800 --> 965.480] were the input of this network that we recently published and then the output was generated by
208
+ [965.480 --> 970.840] having these inputs go to neurons that were either excitatory or inhibitory and then would connect
209
+ [970.840 --> 977.400] to output neurons that would in a sense sum up the addition or subtraction of these different
210
+ [977.400 --> 982.360] exponential functions and would be able to generate the time cells that look I hope you see
211
+ [982.440 --> 988.360] this looks quite similar to the distribution of time cell responses in the data from the paper
212
+ [988.360 --> 993.080] by Will Mau where you have the higher number density of time cells coding the beginning of the trial
213
+ [993.080 --> 1000.840] with shorter periods of firing and then a smaller number of cells coding longer periods at later
214
+ [1000.840 --> 1004.760] portions during the trial and this is an important part of the model that Mark Howard has been
215
+ [1004.760 --> 1011.320] describing which is the idea that you would have a scale invariant representation and this is
216
+ [1011.320 --> 1017.720] consistent with a lot of data both physiology as we're showing here but also behavior that if you ask
217
+ [1017.720 --> 1024.600] a any animal or a human to discriminate different time intervals that are one second versus two
218
+ [1024.600 --> 1029.160] second their resolution will be more accurate than if you ask them to discriminate time intervals
219
+ [1029.160 --> 1034.840] of 10 seconds versus 20 seconds and so this scale invariance appears here in this model where if you
220
+ [1034.840 --> 1039.800] normalize them to the same interval and same magnitude you'll get the same shape and this is
221
+ [1039.800 --> 1043.160] important if you're trying to remember you want to remember oh you know you know what did he
222
+ [1043.160 --> 1047.720] just do with the cursor three seconds ago or you want to remember what did I just talk about five
223
+ [1047.720 --> 1052.760] minutes ago your ability to discriminate that your ability to encode those types of memories on
224
+ [1052.760 --> 1057.880] different time scales depend upon having some representation of time on multiple different time
225
+ [1057.880 --> 1063.240] scales but so we were very excited when we saw the data from the Moser laboratory because we
226
+ [1063.240 --> 1068.120] could essentially use that as another justification for the input you know we could imagine this is
227
+ [1068.120 --> 1073.400] lateral entorhinal cortex and that we are giving inputs that have different time constants in
228
+ [1073.400 --> 1077.880] their case of course they have time constants on the order of minutes and we were using time constants
229
+ [1077.880 --> 1083.960] or the order of seconds but that's perfect again for this problem of scaling multiple or remembering
230
+ [1083.960 --> 1090.360] multiple different scales of time and episodic memory and as I mentioned in terms of the output we
231
+ [1090.360 --> 1096.920] were replicating it in terms of the data from Ben Krauss's experiment where you have different
232
+ [1096.920 --> 1102.600] cells time cells coding different intervals during time and in particular getting at this change in
233
+ [1102.600 --> 1109.240] the number density of the cells with the higher number of cells at the shorter time intervals and
234
+ [1109.240 --> 1114.920] then the broader distribution of firing and the smaller number of cells at the longer time interval
235
+ [1114.920 --> 1120.280] so that's something that's effectively generated by this model now there's other ways of generating
236
+ [1120.280 --> 1125.880] these models you could have a chaining model or we could use a different types of models and if we
237
+ [1125.880 --> 1130.680] you know we can have more discussion about that if we want during the discussion period
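To give a concrete feel for this class of model, here is a minimal Python sketch of the idea described above: a bank of inputs that decay exponentially from the start of the interval with a spectrum of time constants, combined as excitation minus inhibition to produce sequentially active time fields whose width grows with their peak time. The time constants and the pairing rule are invented for illustration; this is a sketch, not the published model code.

```python
import numpy as np

# Time axis (seconds) and a spectrum of decay time constants for the inputs,
# analogous to the exponentially decaying cells described above (values made up).
t = np.linspace(0.0, 16.0, 1601)
taus = np.geomspace(0.25, 8.0, 40)

# Each input decays exponentially from the salient event at t = 0.
inputs = np.exp(-t[None, :] / taus[:, None])      # shape (n_inputs, n_timepoints)

def time_field(slow_idx, fast_idx):
    """Simplest 'time cell': one excitatory (slow) input minus one
    inhibitory (fast) input, rectified, then peak-normalized."""
    field = np.clip(inputs[slow_idx] - inputs[fast_idx], 0.0, None)
    return field / field.max()

# Build a small population of time cells tuned to progressively later times
# by pairing progressively slower excitatory and inhibitory inputs.
population = np.array([time_field(i + 8, i) for i in range(0, 32, 4)])

for row in population:
    peak = t[np.argmax(row)]
    width = np.sum(row > 0.5) * (t[1] - t[0])
    print(f"peak at {peak:5.2f} s, half-height width {width:5.2f} s")
# Later-peaking cells have proportionally wider fields: the scale-invariant
# spread described in the talk, with more cells packed at short delays.
```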
238
+ [1132.520 --> 1136.280] right so I also wanted to talk about potential mechanisms for coding of space
239
+ [1138.520 --> 1144.600] there's a number of different ways that you can generate grid cell or place cell type responses
240
+ [1145.160 --> 1149.640] I'm just kind of broadly summarizing there's a lot of different models in this domain
241
+ [1149.640 --> 1154.680] but I'm broadly summarizing two different types that I'll talk about one type is doing integration
242
+ [1154.680 --> 1159.880] of self-motion velocity so that would be the speed and direction of the animal if it can integrate
243
+ [1159.880 --> 1165.640] its movements at each point and time then it can estimate where it is where it is you know just
244
+ [1165.640 --> 1169.960] integrating that velocity where you can estimate where it is relative to its starting point and this
245
+ [1169.960 --> 1175.400] is what the attractor dynamic model of grid cells uses and the oscillatory interference model of grid cells
246
+ [1175.400 --> 1180.600] uses this mechanism too and later I'll also talk about the alternate model using a transformation
247
+ [1180.600 --> 1186.760] of sensory input and actually both of these components have been described in this recent paper
248
+ [1186.760 --> 1192.120] from Lisa Giocomo's lab by Malcolm Campbell these different potential influences on firing
249
+ [1193.400 --> 1198.040] now it's reasonable to assume in these models the path integration models I'll talk about first
250
+ [1198.040 --> 1203.240] it's reasonable to assume that you have a self-motion signal available in entorhinal cortex because
251
+ [1203.240 --> 1209.480] there are cells coding both direction and running speed so Edvard already summarized the
252
+ [1209.560 --> 1217.080] head direction cells that were described by Jeff Taube and Jim Ranck and also by Sargolini and the
253
+ [1217.080 --> 1223.400] Moser lab this is one recorded by Mark Brandon in my laboratory showing tuning for a southwest
254
+ [1223.400 --> 1229.480] direction and not firing for the north east or south in this polar plot and then as Edvard also
255
+ [1229.480 --> 1235.240] mentioned there are many cells in entorhinal cortex that will code speed by showing a linear
256
+ [1235.240 --> 1242.280] change in firing rate based on running speed and these were actually in some early papers by O'Keefe
257
+ [1242.280 --> 1247.320] but the Kropff paper from the Moser lab was calling them speed cells specifically for cells that
258
+ [1247.320 --> 1252.200] weren't coding other factors that were only coding speed but there's a number of papers showing
259
+ [1252.200 --> 1262.040] cells coding both speed and other factors and these if I can get this to start so this is just one
260
+ [1262.040 --> 1266.920] of the types of models this is the oscillatory interference model that can use a measure of
261
+ [1266.920 --> 1274.680] velocity to generate a code of location and this is just showing how in this case you in this model
262
+ [1274.680 --> 1280.520] the oscillations here shown here are being driven by the velocity of the animal relative to
263
+ [1280.520 --> 1284.200] different directions in the environment and then you sum up the oscillations when they cross
264
+ [1284.200 --> 1290.200] threshold and you can generate a grid cell firing field that in the sense is based on the integration
265
+ [1290.200 --> 1297.400] of the velocity over time now this overall framework of using direction and running speed is
266
+ [1297.400 --> 1306.200] consistent with some data Jeff Taube's lab did inactivation of the anterior thalamus to block the
267
+ [1306.200 --> 1311.720] head direction input to the entorhinal cortex and showed that grid cells that they recorded in
268
+ [1311.720 --> 1315.800] the entorhinal cortex in the baseline condition essentially lost their spatial
269
+ [1315.800 --> 1320.680] specificity when they inactivated the head direction cells and then they recovered afterwards
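To make the oscillatory interference mechanism from a moment ago concrete, here is a toy Python sketch (not the actual model from any of the papers discussed): each velocity-controlled oscillator accumulates phase in proportion to displacement along its preferred direction, the oscillations are summed, and firing occurs where the sum crosses threshold, which happens on a triangular lattice of locations. Every number here (grid spacing, speed, threshold, trajectory statistics) is invented.

```python
import numpy as np

# Toy trajectory: a smooth random walk (speed and turning statistics are invented).
rng = np.random.default_rng(0)
dt, steps, speed = 0.02, 50000, 0.15                      # s, samples, m/s
heading = np.cumsum(rng.normal(0.0, 0.2, steps))          # slowly drifting heading (rad)
vel = speed * np.stack([np.cos(heading), np.sin(heading)], axis=1)
pos = np.cumsum(vel * dt, axis=0)                         # path-integrated position (m)

# Velocity-controlled oscillators: each preferred direction accumulates phase in
# proportion to displacement along that direction; beta sets the grid spacing.
beta = 2 * np.pi / 0.4                                    # one cycle per 0.4 m
pref_dirs = np.deg2rad([0, 60, 120])

interference = np.zeros(steps)
for d in pref_dirs:
    disp = pos[:, 0] * np.cos(d) + pos[:, 1] * np.sin(d)  # displacement along direction d
    interference += np.cos(beta * disp)                   # sum the three oscillations

spikes = interference > 2.5                               # threshold crossing -> firing
print(f"{spikes.sum()} samples above threshold; plotting pos[spikes] shows a "
      f"triangular, grid-like lattice of firing locations")
```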
270
+ [1323.320 --> 1329.640] similarly Mark Brandon in my laboratory did an experiment where he recorded grid cells in a
271
+ [1329.640 --> 1335.640] baseline condition and did inactivation of the medial septum a similar experiment was also
272
+ [1335.640 --> 1342.680] done by Julie Koenig with Jill and Stefan Leutgeb and in this case it also had the effect of wiping
273
+ [1342.680 --> 1348.360] out the grid cell spatial specificity so you can see during medial septum inactivation you lose
274
+ [1348.360 --> 1353.800] that spatial specificity and this is associated with a change from theta rhythmic oscillatory dynamics
275
+ [1353.800 --> 1359.160] in the entorhinal cortex in the baseline condition to the loss of theta rhythm oscillations
276
+ [1359.160 --> 1364.360] during the medial septum inactivation consistent with the role of these neuronal inputs in generating
277
+ [1364.360 --> 1369.080] a theta rhythm and this is something that Ivan and others have done a lot of work on, the
278
+ [1369.080 --> 1374.120] role of the medial septum in driving the theta rhythm oscillations then when the theta rhythm oscillations
279
+ [1374.120 --> 1380.200] recover you see a recovery of the spatial periodicity now this is in a sense the opposite of the
280
+ [1380.200 --> 1386.920] head direction manipulation done by the Taube lab because we showed in the same paper that the
281
+ [1386.920 --> 1392.760] spatial periodicity the grid cells these conjunctive grid by head direction cells the spatial
282
+ [1392.760 --> 1397.960] periodicity of these cells is lost but not the head direction coding so it's the opposite of the
283
+ [1397.960 --> 1402.600] case where they are blocking the head direction input and seeing a loss of grid cells here we have
284
+ [1402.600 --> 1407.800] a loss of the grid cell firing but we have the maintenance of this head direction coding
285
+ [1408.520 --> 1414.440] in the environment so the logical or one our first assumption actually was that we had wiped out
286
+ [1414.440 --> 1420.520] the speed code coming into the entorhinal cortex and one of the first things we did was to
287
+ [1420.520 --> 1425.800] look at the speed coding by the different neurons and we were nothing's ever simple of course and
288
+ [1425.800 --> 1430.760] so we were disappointed to find that the speed coding wasn't lost so I'm first going to show you
289
+ [1430.760 --> 1436.280] just that there is speed coding in a number of different cell types so the grid cells show linear
290
+ [1436.280 --> 1440.920] changes in firing rate with running speed the conjunctive grid by head direction cells show it the
291
+ [1440.920 --> 1446.840] head direction cells as well as the pure speed cells and here they all are all showing the firing rate
292
+ [1446.840 --> 1451.960] change with running speed just before I show you the results of the medial septum inactivation
293
+ [1451.960 --> 1457.480] I was going to show you that there's also a change in theta rhythmicity with the running speed so
294
+ [1457.480 --> 1463.640] if you do an autocorrelogram on the firing and shift the spiking relative to itself
295
+ [1463.640 --> 1469.640] it'll peak at zero and then as you shift it'll peak again at 125 milliseconds corresponding to
296
+ [1469.640 --> 1474.520] a rhythmicity of about eight hertz and then it'll peak again at 250 so that's what's shown here
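For readers unfamiliar with the analysis, a bare-bones spike-train autocorrelogram can be computed as in the sketch below; the simulated cell is modulated at 8 Hz, so the first side peak falls near 125 ms (1 / 0.125 s = 8 Hz). The simulation parameters are invented and this is not the analysis code used for the data shown.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate 600 s of spiking whose rate is modulated at 8 Hz (period 125 ms).
dt = 0.001                                          # 1 ms bins
t = np.arange(0.0, 600.0, dt)
rate = 20.0 * (1.0 + np.cos(2 * np.pi * 8.0 * t))   # Hz
spikes = rng.random(t.size) < rate * dt             # Bernoulli approximation of Poisson

# Autocorrelogram: count coincident spike pairs at each lag by shifting the
# spike train against itself (the "shift the spiking relative to itself" above).
lags_ms = np.arange(1, 301)                         # 1..300 ms
acg = np.array([np.sum(spikes[:-lag] & spikes[lag:]) for lag in lags_ms])

search = slice(50, 200)                             # look for the first side peak
peak_ms = lags_ms[search][np.argmax(acg[search])]
print(f"first side peak near {peak_ms} ms, i.e. ~{1000.0 / peak_ms:.1f} Hz rhythmicity")
```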
297
+ [1475.480 --> 1480.680] and with running speed you actually get a narrower period between these peaks indicating that
298
+ [1480.680 --> 1486.440] the rhythmicity is shifting from about eight hertz to slightly higher frequencies as the running
299
+ [1486.440 --> 1493.000] speed increases and this is also seen in all the different cell types but so we looked at the
300
+ [1493.000 --> 1497.400] effects during medial septum inactivation and as you can see here's a cell that's showing a
301
+ [1497.400 --> 1502.920] general coding of firing rate with running speed and then we did medial septum
302
+ [1502.920 --> 1509.240] inactivation so a loss of overall rhythmicity and we actually see in this case and in many cells a
303
+ [1509.240 --> 1516.520] better coding of firing rate with running speed so it isn't that the signal of running speed was
304
+ [1516.520 --> 1523.320] lost but we did see of course in some cases a complete loss of the theta rhythmicity of
305
+ [1523.320 --> 1529.160] the neurons and which of course would prevent any kind of coding of running speed by rhythmicity
306
+ [1529.960 --> 1535.400] or in this case we actually saw a maintenance of some rhythmicity but the coding of running speed
307
+ [1535.400 --> 1541.320] by rhythmicity is perturbed so here the rhythmicity is increasing in frequency with higher running speed
308
+ [1541.320 --> 1546.440] here it's decreasing in frequency with higher running speed so the rhythmicity representation has
309
+ [1546.440 --> 1552.600] been perturbed even though the firing rate code for speed is not perturbed. Now of course the
310
+ [1552.600 --> 1558.760] question for many years since then was well what particular subpopulation of neurons in the
311
+ [1558.760 --> 1564.120] medial septum is important for this influence on grid cells and this is something that we've been
312
+ [1564.120 --> 1568.520] working on for a number of years Holger Dannenberg in my own laboratory was working on
313
+ [1568.520 --> 1574.760] perturbing the cholinergic neurons we haven't yet seen effects from that on the spatial coding
314
+ [1575.400 --> 1583.400] but Jennifer Robinson in Mark Brandon's laboratory did selective optogenetic inhibition of
315
+ [1584.520 --> 1589.720] GABAergic neurons in the medial septum so she did viral infusions of archaerhodopsin and then could
316
+ [1589.720 --> 1594.760] selectively inactivate the GABAergic neurons and consistent with what I told you before about
317
+ [1594.760 --> 1603.400] the overall medial septum inactivation when she did the inactivation of the GABAergic neurons she
318
+ [1603.400 --> 1609.720] saw a loss of theta rhythmicity so here's the field potential the power spectra of the field potential
319
+ [1609.720 --> 1615.800] during the laser off periods it's very strong eight hertz rhythmicity and then it's greatly reduced
320
+ [1615.800 --> 1621.320] during the laser on period and she's had now a number of grid cell recordings in baseline
321
+ [1621.320 --> 1626.040] conditions where she has spatial periodicity of grid cells here's two different cells shown here
322
+ [1626.040 --> 1633.960] and here and then in the laser on condition she actually sees a loss of the spatial periodicity
323
+ [1633.960 --> 1641.000] of the grid cells in both of these cases the one thing is that this was a 30 second laser on 30
324
+ [1641.000 --> 1647.160] second laser off and it apparently wasn't a long enough period for the cells to regain their
325
+ [1647.160 --> 1652.040] grid cell periodicity during the laser off period so somehow the networks getting perturbed strongly
326
+ [1652.040 --> 1658.120] enough that the grid cells are not firing consistently throughout the period but at least this
327
+ [1658.120 --> 1664.360] implicates specifically the GABAergic input for the generation of the grid cell firing response
328
+ [1664.760 --> 1672.120] now I've just given you some data that's supportive of the idea of path integration being involved
329
+ [1672.120 --> 1677.000] but there's actually a number of potential problems with path integration both for the attractor
330
+ [1677.000 --> 1681.720] dynamic models that are doing path integration and the oscillatory interference model
331
+ [1682.440 --> 1687.560] one of these was in a number of the papers on the speed coding which is that many of the neurons
332
+ [1687.560 --> 1694.120] will actually show an exponentially saturating code of firing rate with running speed where they'll
333
+ [1694.120 --> 1699.080] code it for a period of time but then they'll saturate this has been shown in a couple of our
334
+ [1699.080 --> 1703.320] papers and there's actually throughout these different classes there's actually a number of
335
+ [1703.320 --> 1709.080] cells that show this saturating exponential distribution of firing and that's problematic for
336
+ [1709.080 --> 1713.720] doing path integration it's you know you're better off with a linear code of running speed
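A quick hypothetical calculation illustrates why a saturating speed code is a problem: if a downstream path integrator assumes firing rate is proportional to running speed, a rate that saturates makes fast running look slow and the integrated distance comes out short. All tuning parameters and speeds below are invented.

```python
import numpy as np

def saturating_rate(v, r_max=10.0, v0=10.0):
    """Firing rate that rises with running speed but saturates (speed in cm/s, rate in Hz)."""
    return r_max * (1.0 - np.exp(-v / v0))

def linear_rate(v, gain=1.0):
    """Firing rate proportional to running speed."""
    return gain * v

dt = 0.1                                                          # s per sample
speeds = np.concatenate([np.full(100, 5.0), np.full(100, 40.0)])  # slow lap, then fast lap (cm/s)
true_distance = np.sum(speeds * dt)                               # 450 cm

# A downstream "path integrator" that assumes rate is proportional to speed
# (slope 1 Hz per cm/s for both codes) and integrates the rate over time.
dist_from_linear = np.sum(linear_rate(speeds) * dt)
dist_from_saturating = np.sum(saturating_rate(speeds) * dt)

print(f"true distance              : {true_distance:6.1f} cm")
print(f"decoded from linear code   : {dist_from_linear:6.1f} cm")
print(f"decoded from saturating    : {dist_from_saturating:6.1f} cm  <- badly underestimates the fast lap")
```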
337
+ [1715.240 --> 1719.640] another important thing that you might have noticed is that I kept I kept referring to the
338
+ [1719.640 --> 1724.440] fact that you need movement direction for the path integration model and yet the citations
339
+ [1724.440 --> 1729.960] are always to head direction cells that's what all these models have cited in the past so we
340
+ [1729.960 --> 1736.520] decided in our laboratory to test whether movement direction equals head direction and we found
341
+ [1736.520 --> 1741.160] that it doesn't and you know you can walk around and turn your head back and forth and you know
342
+ [1742.520 --> 1748.120] your head is not correlated all the time with your movement and neither is that the case in rodents
343
+ [1749.640 --> 1754.440] we actually looked at periods of time when the rodent head direction was more than 30 degrees
344
+ [1754.440 --> 1759.800] away from the movement direction to see what the neurons were actually coding and we found many
345
+ [1759.800 --> 1765.640] cells consistent with previous studies many cells coding head direction during these periods of time
346
+ [1765.640 --> 1771.160] and no cells were coding pure movement direction so there were none that stayed focused on the movement
347
+ [1771.160 --> 1777.560] direction of the animal independent of its head direction so this indicates that we don't have
348
+ [1778.520 --> 1783.640] a clear code for movement direction in the entorhinal cortex and then we gave these different
349
+ [1783.640 --> 1789.160] inputs to the attractor model which is using path integration as well as the oscillatory interference
350
+ [1789.160 --> 1795.560] model if we give movement direction input we get nice spatial periodicity of grid cells if we give
351
+ [1795.560 --> 1802.040] the head direction input we don't get this clear spatial signal so the head direction signal from
352
+ [1802.040 --> 1807.800] the behavioral data is not going to give you the necessary overall movement direction signal you
353
+ [1807.800 --> 1812.680] need and people various people suggested well maybe the head direction if you average it over a
354
+ [1812.680 --> 1817.800] one second period or a two second period or some period of time on average head direction will add
355
+ [1817.800 --> 1822.760] up to movement direction we tried that and we got distributions that looked similar to this in
356
+ [1822.760 --> 1829.960] the model so so you can't use head direction as an input to these models so this leads us then to
357
+ [1830.920 --> 1836.920] the case that I'll talk about for the rest of the talk which is that many models have proposed
358
+ [1836.920 --> 1844.440] that the grid cell spatial code could be using some transformation of sensory input instead where
359
+ [1844.440 --> 1848.840] there's an egocentric view of the world and then you can combine it with head direction coding to
360
+ [1848.840 --> 1855.720] generate an allocentric spatial location so this is where I'm going to talk about the influence
361
+ [1855.720 --> 1861.320] of environmental boundaries because in most of these experiments the most salient visual features
362
+ [1861.320 --> 1868.760] have to do with the features on the boundaries so there's plenty of of previous studies showing that
363
+ [1868.760 --> 1873.960] movement of the boundaries in the environment will influence the spatial coding by grid cells
364
+ [1875.080 --> 1879.880] this was initially done by Caswell Barry who recorded grid cells in a one meter square environment
365
+ [1879.880 --> 1885.640] and then compressed them in different directions and saw that the spacing of the grid cell
366
+ [1885.640 --> 1892.360] firing fields would compress in the direction of the boundary movements the Moser laboratory showed
367
+ [1892.360 --> 1897.400] this similar effect here's a case where the neurons have relatively the grid fields are relatively
368
+ [1897.400 --> 1901.400] widely spaced in the environment and then the movement of one of the boundaries will compress
369
+ [1901.400 --> 1907.080] them in that direction though they interestingly in this study saw that for narrow spacing between
370
+ [1907.080 --> 1913.080] the firing fields you don't get that compression effect so we've modeled this based this was
371
+ [1913.080 --> 1918.200] worked on with Florian Raudies where we modeled how you could take an input like this where you have
372
+ [1918.200 --> 1923.880] features that are either on the ground plane giving you optic flow or on the walls giving you the
373
+ [1923.880 --> 1931.640] angle of particular features and then we use this to model grid cells the optic flow on the ground
374
+ [1932.040 --> 1937.640] plane we used a template matching technique developed by Perrone the visual features on the walls
375
+ [1937.640 --> 1943.800] we just took the feature angles on opposite walls to generate the particular distance for the
376
+ [1943.800 --> 1949.560] grid cell models and we were able to replicate the compression so if we had visual features on
377
+ [1949.560 --> 1955.560] the walls we could replicate the compression of the grid cell firing spacing in that dimension
378
+ [1955.560 --> 1960.440] but in we could also replicate the Moser lab data showing that in some cases the narrower spacing
379
+ [1960.440 --> 1966.520] would not be shifted by the walls if we modeled the generation of these grid cells based on optic
380
+ [1966.520 --> 1972.040] flow from the ground plane so this is showing you know potential different visual influences on the
381
+ [1972.040 --> 1980.440] grid cells now in terms of the transformation to create the allocentric representation of space
382
+ [1980.440 --> 1987.000] Neil Burgess had published a paper in 2007 proposing that there might be a transformation from an
383
+ [1987.000 --> 1992.280] egocentric view of the world that was combined with head direction cells in the retro-spleenial cortex
384
+ [1992.280 --> 1998.120] to generate what he called allocentric boundary cells and that these could then drive place cells
385
+ [1999.320 --> 2004.520] and this is something that had arisen out of early work that the O'Keefe lab did where they had
386
+ [2004.520 --> 2009.960] place cell firing in a one meter square environment and then they expanded the environment and saw
387
+ [2009.960 --> 2014.600] that the place cell firing field would often get stretched out and based on this Neil Burgess
388
+ [2014.600 --> 2020.360] proposed these allocentric boundary vector cells it would respond to boundaries at a particular
389
+ [2020.360 --> 2027.320] orientation relative to the environment so this would be in the sense coding the east boundary
390
+ [2028.200 --> 2032.120] and when I first saw this model I thought oh there's no way that neurons are actually coding
391
+ [2032.120 --> 2039.320] boundaries in that way and I was very surprised when both the O'Keefe lab, Colin Lever and Caswell
392
+ [2039.320 --> 2045.320] Barry, published these cells as well as the Moser lab with what they call border cells both of these
393
+ [2045.320 --> 2049.880] labs have shown these types of allocentric boundary cells here's one that's responding to the
394
+ [2049.880 --> 2054.840] west boundary of the environment here's one that's responding to kind of the southeast boundary of
395
+ [2054.840 --> 2059.720] the environment and they have the characteristic they'll respond to the you know walls they'll
396
+ [2059.720 --> 2066.040] respond to inserted walls so this is showing firing to an inserted wall they'll also respond to
397
+ [2066.040 --> 2071.240] the edge of a tabletop and if you pull the tabletop two tabletops apart they'll actually respond
398
+ [2071.240 --> 2077.800] to the gap between the two tabletops even though the animal can still cross those so there's a very
399
+ [2077.800 --> 2085.640] salient representation of boundaries actually in entorhinal cortex and other areas such as subiculum
400
+ [2086.760 --> 2091.400] so as I mentioned they the Neil Burgess had proposed that these boundary cells could be originally
401
+ [2091.400 --> 2097.720] driven by egocentric view cells and this is where we were very excited to find evidence for this
402
+ [2097.720 --> 2105.160] type of response so Jake Hinman in my laboratory was recording in dorsomedial striatum and he
403
+ [2105.160 --> 2109.880] was actually recording in the region getting input from retrosplenial and entorhinal cortex
404
+ [2111.080 --> 2116.280] and he found cells that essentially have the same sort of egocentric representation that if you look
405
+ [2116.280 --> 2123.320] back at the Neil Burgess paper he has plots that are very similar to this so Jake was recording from
406
+ [2123.320 --> 2130.360] a neuron as a rat was foraging in an open field environment and he saw firing when the rat was near
407
+ [2130.360 --> 2135.160] the south wall if it was going east but if it was near the north wall you'd see firing if it was
408
+ [2135.160 --> 2141.080] going to the west and he correctly assumed that this meant that the firing was in response to the
409
+ [2141.080 --> 2147.720] position of the wall relative to the animal the egocentric coordinates of the wall so he did
410
+ [2147.720 --> 2155.480] egocentric plots that where you have the animal facing forward here forward is up back is down
411
+ [2155.480 --> 2160.520] and then left and right and he would plot for each spike he would plot the position of the boundary
412
+ [2160.520 --> 2166.120] when that spike was generated so here's the position of the boundary for three different spikes
413
+ [2166.840 --> 2172.120] here's the position of the boundary average it over 222 different spikes and you can see this
414
+ [2172.120 --> 2179.160] cell is consistently firing when the boundary is to the front right of the animal and then you could
415
+ [2179.160 --> 2184.440] divide by the occupancy of the wall overall and the behavior to get occupancy normalized
416
+ [2184.440 --> 2189.240] firing so here's a clear example of a cell that's responding to the egocentric position of a boundary
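As a rough sketch of the kind of analysis described here (not the lab's actual code; the function name, bin counts, and distance cutoff are invented), the egocentric boundary ratemap can be built by rotating wall positions into the animal's reference frame at every spike and normalizing by occupancy. Note that adding the head direction back onto an egocentric bearing is exactly the egocentric-to-allocentric transformation in Neil Burgess's model.

```python
import numpy as np

def egocentric_boundary_map(spike_idx, pos, hd, boundary_pts,
                            n_angle=36, n_dist=20, max_dist=50.0):
    """Occupancy-normalized map of where boundaries were, in egocentric
    coordinates, when the cell fired.

    pos          : (T, 2) animal position per video frame (cm)
    hd           : (T,)   head direction per frame (radians, allocentric)
    boundary_pts : (B, 2) points sampled densely along the walls (cm)
    spike_idx    : frame indices at which the cell fired
    """
    def histogram(frames):
        counts = np.zeros((n_angle, n_dist))
        for f in frames:
            rel = boundary_pts - pos[f]                      # wall points relative to the animal
            ang = np.arctan2(rel[:, 1], rel[:, 0]) - hd[f]   # rotate so 'ahead' = 0 (egocentric)
            dist = np.hypot(rel[:, 0], rel[:, 1])
            a = ((ang + np.pi) % (2 * np.pi) / (2 * np.pi) * n_angle).astype(int)
            d = np.clip((dist / max_dist * n_dist).astype(int), 0, n_dist - 1)
            np.add.at(counts, (a % n_angle, d), 1)
            # Going the other way (egocentric bearing plus hd[f]) would recover the
            # allocentric bearing of the boundary, the Burgess-style transform.
        return counts

    spike_counts = histogram(spike_idx)
    occupancy = histogram(range(len(pos)))                   # how often boundaries sat in each bin
    return np.divide(spike_counts, occupancy,
                     out=np.zeros_like(spike_counts), where=occupancy > 0)
```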
417
+ [2189.240 --> 2198.360] and he's found many cells of this type hopefully this will be published soon here are multiple
418
+ [2198.360 --> 2204.600] examples of neurons coding an egocentric position of the boundary just to the right here in neurons
419
+ [2204.600 --> 2209.320] coding position just to the left of the animal here's neurons coding position at greater distance
420
+ [2209.320 --> 2216.360] from the animal and so this is exactly what Neil Burgess had originally proposed in fact they
421
+ [2216.360 --> 2221.800] even as I mentioned plotted it with the exact same format of egocentric coding that could be
422
+ [2221.800 --> 2227.400] combined with head direction cells to generate the allocentric representation but they had proposed
423
+ [2227.400 --> 2231.720] that these cells would be appearing in retrosplenial cortex or at least the transformation would be
424
+ [2231.720 --> 2238.920] coded in retrosplenial cortex and so Andrew Alexander in my laboratory went and recorded in retrosplenial
425
+ [2238.920 --> 2245.000] cortex and has seen these same types of egocentric boundary responses you know coding left or right
426
+ [2245.000 --> 2250.520] side boundaries or even right to the you know to the back of the animal in the retrosplenial cortex
427
+ [2250.520 --> 2257.800] consistent with Neil Burgess's original model from over over 10 years ago so this is supportive of
428
+ [2257.800 --> 2263.800] this notion that the allocentric spatial code could be generated by taking the egocentric input combining
429
+ [2263.800 --> 2269.880] it with head direction cells and generating the allocentric representation and finally just to
430
+ [2269.880 --> 2274.360] briefly bring it back to modeling of episodic memory if you think about your episodic memory of
431
+ [2274.360 --> 2278.920] walking in I remember walking in here coming from the cafeteria and going to the elevator and
432
+ [2278.920 --> 2284.040] coming up the stairs and walking into the room in a sense Tulving described your episodic
433
+ [2284.040 --> 2289.000] memory as a series of kind of movie frames where you can imagine oh yeah you know what it
434
+ [2289.000 --> 2294.280] looked like when I was walking into the room and so on and so somehow you want to combine your
435
+ [2294.280 --> 2299.480] spatial temporal trajectory with these egocentric views of the world and now this is a relatively
436
+ [2299.480 --> 2306.440] abstract high level model but I've modeled how you could store a spatial temporal trajectory by
437
+ [2306.440 --> 2312.680] having speed modulated head direction cells driving grid cells that could drive place cells and
438
+ [2312.680 --> 2318.360] then you could form associations via Hebbian LTP of the place cells with the speed modulated
439
+ [2318.360 --> 2323.320] head direction cells of course this stage here from grid cells to place cells could be using the
440
+ [2323.320 --> 2328.920] mechanisms that Jeff Magee talked about but then you could also form associations between the
441
+ [2328.920 --> 2334.280] place cell representations and these egocentric views of the boundaries in for instance
442
+ [2334.280 --> 2341.000] retrosplenial cortex as the animal behaves and then during retrieval when there's no behavioral input you
443
+ [2341.000 --> 2346.840] could have this loop running to retrieve the spatial temporal trajectory and thereby retrieve
444
+ [2346.840 --> 2354.680] these kind of movie frame views of the world in a sense as your recall of an episodic memory
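Purely as an illustration of the loop just described, and not the actual model architecture (which uses Hebbian LTP between real cell populations rather than lookup tables), a toy Python sketch of "encode a trajectory with its views, then replay it with no behavioral input" could look like this:

```python
# Encoding: each visited location (standing in for a place cell representation) is
# associated with the movement made there and with an egocentric "movie frame" view.
# Retrieval: the loop replays the trajectory and reads the views back out.
place_to_move = {}
place_to_view = {}

def encode(trajectory, views):
    for (place, move), view in zip(trajectory, views):
        place_to_move[place] = move        # association: place -> movement taken there
        place_to_view[place] = view        # association: place -> view seen there

def retrieve(start, n_steps):
    here, frames = start, []
    for _ in range(n_steps):
        frames.append(place_to_view[here])
        dx, dy = place_to_move[here]
        here = (here[0] + dx, here[1] + dy)
    return frames

walk = [((0, 0), (1, 0)), ((1, 0), (1, 0)), ((2, 0), (0, 1)), ((2, 1), (0, 1))]
views = ["cafeteria", "hallway", "elevator", "stairs"]
encode(walk, views)
print(retrieve((0, 0), 4))   # ['cafeteria', 'hallway', 'elevator', 'stairs']
```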
445
+ [2355.320 --> 2359.800] all right so I think I'm just about on time and I'll close there thanks very much
446
+ [2376.280 --> 2377.880] oh you're signing something
447
+ [2384.840 --> 2390.440] Mike, thanks for being here and thanks for the very nice talk um simple question there were a lot
448
+ [2390.440 --> 2395.640] of people studied the behavior at the behavioral level the coding of time you know just simple
449
+ [2395.640 --> 2400.120] little things like pressing a bar releasing at a certain number of seconds lighter and you get a
450
+ [2400.120 --> 2405.960] distribution of accuracy or even licking responses right and I'm wondering do we know is it known
451
+ [2405.960 --> 2411.640] whether hippocampus inactivation affects judgments of time in simple time-specific tasks
452
+ [2412.520 --> 2418.840] yeah actually I mean a lot of the work on that has focused on striatum like Warren Meck has focused
453
+ [2418.840 --> 2425.640] on striatum coding this so it looks I mean rather than focusing on it being only hippocampus
454
+ [2425.640 --> 2431.640] and entorhinal cortex for that type of timing behavior instead I would argue that it seems
455
+ [2431.640 --> 2437.640] like this this temporal coding of intervals is a general brain wide sort of phenomenon and
456
+ [2437.640 --> 2442.920] Mark Howard has actually analyzed data from the striatum and seen the same sort of distribution
457
+ [2442.920 --> 2448.440] of time cell responses with the number you know density changing he's analyzed it in data from
458
+ [2448.440 --> 2454.760] prefrontal cortex and both rodents and then monkey prefrontal cortex from Earl Miller so
459
+ [2455.640 --> 2462.520] so it does seem to be that this this type of model could be a relatively general model for mechanisms
460
+ [2462.520 --> 2470.120] of timing and so I would argue that the hippocampus and entorhinal cortex is really more for timing in
461
+ [2470.120 --> 2474.520] context of episodic memory which isn't usually being tested in those types of experiments
462
+ [2482.520 --> 2489.320] so very nice talk I was wondering so our sense of time most of time is absolute but sometimes
463
+ [2489.320 --> 2496.040] that's when we lose track of time right so do you think it's so the time encoding cells do you
464
+ [2496.040 --> 2500.920] think that they're influenced by the state of the brain or maybe the level of neuromodulator
465
+ [2500.920 --> 2508.120] have you seen regulation of their activity you know in terms of sometimes their silent or how
466
+ [2508.760 --> 2513.560] how spaced out they're they're firing you know from one cell to another can that be modulated by
467
+ [2514.520 --> 2518.120] the actual state of the animal yeah I'd love to do that experiment I mean we all have this
468
+ [2518.120 --> 2523.320] subjective experience of you know kind of exciting conversations going really quickly and boring
469
+ [2524.120 --> 2531.400] boring talks going really slowly you know so so I agree that there probably is a very strong
470
+ [2531.400 --> 2536.280] influence of neuromodulators on this and if we we actually describe this in the paper the
471
+ [2536.280 --> 2544.440] Liu, the Yue Liu paper, how cholinergic modulation changing the slope of the f-I curve for neurons could
472
+ [2544.440 --> 2550.440] essentially rescale the coding very effectively you know cross a whole population of different neurons
473
+ [2550.440 --> 2555.240] with different time constants so we we propose that but it hasn't been tested the experiments with
474
+ [2555.240 --> 2560.600] Ben Kraus we didn't do specific manipulations of neuromodulation though I should point out
475
+ [2561.000 --> 2567.240] the Paton group in Lisbon actually did do experiments where they're doing dopaminergic
476
+ [2567.240 --> 2571.800] modulation and did see changes in this subjective coding of time
477
+ [2577.400 --> 2583.480] thanks Mike I was wondering how you square this idea of head direction versus movement
478
+ [2583.480 --> 2587.720] direction and movement direction being what's needed with the fact that lesions of ATN
479
+ [2587.880 --> 2596.120] disrupt grid cells yes so I guess I would argue that it shifts you over to the sensory
480
+ [2596.120 --> 2601.000] processing model I mean there's it's it's possible that both mechanisms are working and you know
481
+ [2601.000 --> 2605.800] there's some suggestion that maybe you could have reset from visual inputs and then do path
482
+ [2605.800 --> 2610.600] integration for periods of time and then have reset again but I would I would say probably the
483
+ [2610.600 --> 2617.000] taube paper result is due to not the loss of path integration but the loss of the ability to
484
+ [2617.000 --> 2622.680] have an update of head direction so that you can take your current egocentric input and code
485
+ [2622.680 --> 2627.720] your location so if I don't know what direction my head is oriented at then I could get very disoriented
486
+ [2627.720 --> 2631.880] in terms of the the visual features being translated into the allocentric
487
+ [2631.960 --> 2644.920] go back to the earlier part of the talk you talked about these cells that had time fields in the
488
+ [2645.960 --> 2652.280] little treadmill and then place fields around the track and it their place fields around the track
489
+ [2652.280 --> 2658.840] were in the same order as their time fields on the on the treadmill and I just never thought of that
490
+ [2662.200 --> 2668.440] is the animal replaying its future trajectory around the track I mean that was the thing that
491
+ [2668.440 --> 2673.080] occurred to me when I saw that and I just wondered yeah that's a great idea all these these
492
+ [2673.080 --> 2678.680] questions are all very interrelated right we space and time I guess in looking at your book
493
+ [2678.680 --> 2684.680] not that long ago I realized that they're very intimately interrelated in a certain way and so
494
+ [2684.680 --> 2690.600] maybe that's not surprising but I just wondered if you thought about whether the coding of time and
495
+ [2690.600 --> 2695.640] space was in fact you know related in those kinds of neurons yeah and that would be kind of
496
+ [2695.640 --> 2699.960] consistent with with Jill saw maybe that you know you're doing some the animals doing some replay
497
+ [2700.840 --> 2707.240] we didn't we didn't see that specifically but I should mention that you know I showed examples of
498
+ [2707.240 --> 2711.800] three cells that had both time fields in place field but not all the cells had that there's plenty of
499
+ [2711.800 --> 2717.320] place cells that don't have time fields and plenty of time cells that don't have place fields so it
500
+ [2717.320 --> 2722.920] maybe that we just didn't have enough data to analyze that one thing I should mention is yeah
501
+ [2722.920 --> 2728.440] that the time cells have this tendency to spread out near the end of the interval and we've actually
502
+ [2728.440 --> 2731.960] been interested in whether or not that would happen with place cells but the thing about place
503
+ [2731.960 --> 2737.880] cells is they have kind of ongoing sensory update so that they could in sense reset more accurately
504
+ [2737.880 --> 2742.520] whereas the time cells they have this one salient stimulus at the start of the interval and then
505
+ [2742.520 --> 2748.040] they're you know essentially coding subsequent time relative to that salient event
506
+ [2756.280 --> 2761.160] yeah a question about the egocentric cells that you presented so as you know
507
+ [2761.720 --> 2767.800] Jim Knierim has seen similar cells in lateral entorhinal cortex and they've been reported in CA1
508
+ [2767.800 --> 2775.320] so how similar are they and you see these cells as part of a wider network that actually is not
509
+ [2775.320 --> 2780.600] localized to any particular region you know what kind of network could that be yeah I mean that's
510
+ [2780.600 --> 2785.480] that's certainly I should have mentioned that Jim's study he actually had took a somewhat different
511
+ [2785.480 --> 2791.560] perspective instead of you know coding it in terms of the position of the barriers they were
512
+ [2791.560 --> 2795.720] coding it in terms of can the animal keep track of the center of the environment but it is a very
513
+ [2795.720 --> 2800.200] similar characteristic and so I think it's perfectly reasonable that they're in lateral
514
+ [2800.200 --> 2807.240] entorhinal cortex you know in retrosplenial the dorsomedial striatum response is probably due
515
+ [2807.240 --> 2813.960] to inputs from entorhinal and retrosplenial I wouldn't you know necessarily expect to see them
516
+ [2813.960 --> 2818.520] everywhere I don't think you know they'd be that likely to show up for instance in hippocampus
517
+ [2818.520 --> 2819.960] but it'll be interesting to see
518
+ [2826.360 --> 2834.680] so I guess I was wondering why we want such a strong distinction between time cells and space
519
+ [2834.680 --> 2839.560] cells because you gave an example of why of how you could be in the same space but in at a different
520
+ [2839.560 --> 2844.760] time it doesn't seem like we can ever be in a different place at the same time and you might
521
+ [2844.760 --> 2848.360] think that they're just coding something like context and sometimes the context is primarily
522
+ [2848.360 --> 2853.640] determined by spatial cues and sometimes by temporal cues so is there is there really like an
523
+ [2853.640 --> 2858.120] imprensive distinction between the two? No and in fact that's something I should say I
524
+ [2858.120 --> 2864.760] in-principle distinction between the two? No and in fact that's something I should say I
525
+ [2864.760 --> 2871.320] different categories placed on a continuum of responses and Lisa has a paper from her lab
526
+ [2872.440 --> 2879.720] Hardcastle that specifically did analysis of the coding characteristics of neurons
527
+ [2879.720 --> 2886.840] and saw all sorts of different combinations of responses so and in fact we did a GLM analysis on
528
+ [2886.840 --> 2891.720] the I didn't show this slide but on the time cells we looked at whether they were coding time
529
+ [2891.720 --> 2896.840] or distance running on the treadmill and we saw some cells clearly coding time and some clearly
530
+ [2896.840 --> 2901.960] coding distance but some were actually kind of coding both time and distance so so there really is
531
+ [2901.960 --> 2908.360] probably a continuum of you know coding all of these different dimensions
transcript/allocentric_P7Q2fE4Qm2w.txt ADDED
@@ -0,0 +1,35 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ [0.000 --> 4.640] Year 12's, the next issue on our agenda is spatial neglect.
2
+ [4.640 --> 12.560] As we know from our video on the parietal lobe, one of the functions that it largely is responsible for is the perception of space.
3
+ [12.560 --> 17.080] Damage to the parietal lobe then may result in spatial neglect.
4
+ [17.080 --> 18.960] What is spatial neglect then?
5
+ [18.960 --> 27.200] Well, it's possibly best described as a phenomenon whereby an individual consistently ignores stimuli presented from one side of the body.
6
+ [27.200 --> 33.720] Now this is more than consciously deciding to block out somebody speaking or an annoying noise on either your left or right side.
7
+ [33.720 --> 39.360] In spatial neglect, stimuli from one side of the body are systematically ignored.
8
+ [39.360 --> 42.240] Some sufferers aren't even aware of their condition.
9
+ [42.240 --> 46.840] Now most often, the individual will neglect stimuli from the left side of their body.
10
+ [46.840 --> 51.960] Remembering that the left hemisphere of the brain controls the right side of the body and vice versa.
11
+ [51.960 --> 57.440] Ignoring stimuli from the left side of the body means that the right hemisphere is mostly affected.
12
+ [57.440 --> 62.920] That is, spatial neglect is most often the result of damage to the right parietal lobe.
13
+ [62.920 --> 66.520] The consequences of spatial neglect can be considerable.
14
+ [66.520 --> 74.120] For example, asking an individual with spatial neglect to draw you a clock or a house may result in something like this.
15
+ [74.120 --> 79.480] In these situations, the individual is only aware of the right half of the object at hand.
16
+ [79.480 --> 85.000] Sufferers may also only eat the right side of their dinner or acknowledge people on their right side.
17
+ [85.000 --> 89.720] And this is because they simply are unaware of stimuli presented to their left.
18
+ [89.720 --> 95.400] More than that, individuals with spatial neglect may even experience reconstructed memories.
19
+ [95.400 --> 102.960] That is, they may only be able to remember the right side of memories that were encoded before they damaged their right parietal lobe.
20
+ [102.960 --> 105.640] Things that they saw fully at the time.
21
+ [105.640 --> 112.360] So with that information at our disposal, this question from the 2013 VCAA exam seems fairly straightforward.
22
+ [112.360 --> 113.120] It reads,
23
+ [113.120 --> 117.720] Before suffering a stroke, Bettina was a healthy 36-year-old woman.
24
+ [117.720 --> 122.200] Since her stroke, she applies makeup to the right side of her face only.
25
+ [122.200 --> 125.920] Bettina's behavior since the stroke suggests that she has.
26
+ [125.920 --> 128.000] A. spatial neglect.
27
+ [128.000 --> 130.080] B. Broca's aphasia.
28
+ [130.080 --> 132.240] C. Wernicke's aphasia.
29
+ [132.240 --> 135.400] Or D. Had split brain surgery.
30
+ [135.400 --> 140.080] As I'm sure you can guess, the correct answer here is A. spatial neglect.
31
+ [140.080 --> 146.760] It is likely that Bettina's stroke affected her right parietal lobe, thereby resulting in spatial neglect.
32
+ [146.760 --> 151.320] This means that she systematically ignores stimuli from the left side of her body,
33
+ [151.320 --> 155.960] which explains why she only applies makeup to the right side of her face.
34
+ [155.960 --> 158.760] In the next video, we will look at split brain.
35
+ [158.760 --> 161.560] Keep working hard and have a psychedelic day.
transcript/allocentric_Q1Tczf8vxCM.txt ADDED
@@ -0,0 +1,162 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ [0.000 --> 22.000] Hello, my name is Nick Aoy-Jane. I'm a game designer at that game company. And today, I'm going to talk to you all about cognitive maps and how to prevent your players from getting lost in your levels.
2
+ [22.000 --> 28.000] So, as I said, I'm a game designer at that game company, but before I was a game designer, I was actually an architect.
3
+ [28.000 --> 37.000] And so, a lot of the references and imagery and knowledge that I'll be using in this presentation actually come from the domains of architecture and urban planning.
4
+ [37.000 --> 43.000] One of my big interests has always been wayfinding and navigation, which is how we get here.
5
+ [44.000 --> 58.000] So, we're going to jump right in and let's first start by defining what maps are. So, firstly, a map is a tool. A map helps you achieve something. Most of the time, it's orienting yourself in relation to other things.
6
+ [58.000 --> 68.000] A map is made. So, I can't go out into nature and find a map just sitting there on the ground. It's either made by me or by somebody else.
7
+ [69.000 --> 81.000] It represents spaces or concepts. It's pretty self-explanatory. And it's relational. So, if I show you a piece of paper, and it's got a dot on it, and that dot is labeled Paris, that isn't a map just yet.
8
+ [81.000 --> 94.000] However, if I show you that same piece of paper with a dot that says Paris, another dot that says Cairo, now we're beginning to form a map and in those two dots can begin to anchor each other in space.
9
+ [94.000 --> 106.000] Also, a map has edges or limits. Even if it's a globe, which is a map of the entire earth, you're going to be limited by how much information you can get at particular scales, etc.
10
+ [106.000 --> 120.000] There are a couple things maps aren't. So, maps aren't quote the truth. They are not always orthographic. So, that kind of top-down view of something without any perspective that we associate with maps.
11
+ [120.000 --> 140.000] Maps don't always have to be that. They're not always flat. That Polynesian stick chart that I'm showing on the top right is an example of a kind of three-dimensional, very tactile map that maps ocean swells and islands to help navigators navigate wide expanses of ocean, for example.
12
+ [141.000 --> 151.000] They're also not always physical, which means that they can take place in our minds, which is, as you guessed, what a cognitive map is. And they're prescriptive.
13
+ [151.000 --> 165.000] So, if I show you a map of a part of the world that you're unfamiliar with and I label something incorrectly, as far as you're concerned, that's that incorrect labels is reality to you.
14
+ [166.000 --> 175.000] So, let's talk about cognitive mapping. This term was originally coined by Edward Tolman in his lab in 1948.
15
+ [175.000 --> 184.000] So, if you've seen imagery of rats in a maze running around trying to get a piece of cheese, that's where this is where it comes from.
16
+ [185.000 --> 198.000] So, the experiment was as follows. They would take a rat and place a rat in the apparatus that you see on the top left, this kind of circular room, and they would hide a piece of cheese where that letter G is located in the yellow diamond.
17
+ [198.000 --> 213.000] And the rat would eventually find that piece of cheese and then once consumed it would once the cheese was consumed, they would put the rat back in the circle and have the rat do this over and over until it was almost like muscle memory, you know, the rat would go in and do exactly what routes to take and it would get the cheese.
18
+ [214.000 --> 223.000] Then, they would take that same rat and put it in the apparatus that you see in the middle. So, that same kind of rounded room, but that original pathway was now blocked.
19
+ [223.000 --> 237.000] The researchers wanted to know, would the rat once it realizes that path is blocked, have some sort of intuition as to where this piece of cheese was.
20
+ [237.000 --> 246.000] If it did, it would probably bias channel six, which is geographically where you would go if you just wanted to get the cheese.
21
+ [246.000 --> 255.000] But if it didn't, we would, if the rat didn't have a kind of understanding of what its world was in its own mind, it would just bias all of the paths equally.
22
+ [255.000 --> 267.000] As the researchers and Tolman found out, rats did create some sort of mental map of their environment, as you can see from the bar on the right hand side that is really tall.
23
+ [267.000 --> 280.000] That really tall bar is how many times those rats picked Avenue six. So that is what these cognitive maps are for rats, but of course, we're not rats.
24
+ [280.000 --> 295.000] We are people. We have our own people brains and we inhabit non-trillion spaces. We live in cities, suburbs, all kinds of different environments that demand us to use our own cognitive maps on a daily basis.
25
+ [295.000 --> 301.000] So, we can do a little exercise to help you understand what your own cognitive map is like.
26
+ [301.000 --> 313.000] So, take five minutes and draw your neighborhood from memory on a piece of paper. Don't look anything up. Just try to let your mind guide your hand and just spend five minutes to do that.
27
+ [313.000 --> 317.000] So if you want to do that exercise, go ahead and pause the video now.
28
+ [317.000 --> 324.000] If you've done that exercise, you might have a drawing similar to the illustrations we see here on the top.
29
+ [324.000 --> 337.000] These illustrations actually came from the image of the city, which is a book that urbanist Kevin Lynch published and Kevin Lynch went to a bunch of cities and asked people to do this very same thing. Hey, can you draw me your neighborhoods?
30
+ [337.000 --> 347.000] And after parsing through all of those different illustrations, he was able to discern five elements that people use to make sense of the spaces around them.
31
+ [347.000 --> 361.000] Paths, landmarks, districts, edges, and nodes. The rest of this talk is going to explain each one of these elements in detail and how we can use those to make really strong cohesive cognitive maps.
32
+ [362.000 --> 370.000] But of course, this talk is also about not getting lost. So we need to explain what getting lost is now that we have a clear idea of a cognitive map.
33
+ [370.000 --> 385.000] Getting lost is simply a misalignment of your cognitive map with what the world around you is with your surroundings. Is that feeling of feeling like an area or a space is new, but knowing for a fact that it isn't.
34
+ [385.000 --> 393.000] And generally a bad time. This can result from changes in your environment or changes in your place within the environment.
35
+ [393.000 --> 404.000] Or it can result from insufficiently broad or insufficiently clear cognitive maps because you're unable to respond to those changes in an adequate way.
36
+ [404.000 --> 414.000] Similar to this image, which is a map of the world, you know, it's upside down. We might be able to tell it's a map of the world, but I'd be really hard pressed to identify any particular country with the map upside down.
37
+ [415.000 --> 426.000] So let's talk about the first element. The first are paths. This is also the most self explanatory element. It's a linear space that directs movement and travel.
38
+ [426.000 --> 438.000] And it also tends to be dominant in cognitive maps. If you did the exercise earlier, one of the first things you might have done was started diagramming all of the paths that you are aware in your neighborhood.
39
+ [439.000 --> 448.000] And these are things like sidewalk streets trails, etc. Travel tends to be concentrated on them. And because of that, we tend to treat them differently.
40
+ [448.000 --> 455.000] We pave our roads. We cut channels to make sure that water can flow through correctly. That sort of thing.
41
+ [455.000 --> 459.000] And interestingly enough, paths are the most temporal element.
42
+ [459.000 --> 469.000] So a path isn't useful to you if you're not moving along it and moving along that is a moving along a path is inherently tied to time.
43
+ [469.000 --> 477.000] And so that process of incrementing your way along a path is what Lynch called scaling.
44
+ [477.000 --> 487.000] And there are a few limitations though when we're dealing with paths. We kind of digest them and use them in two ways. One is dead reckoning and the other is path integration.
45
+ [487.000 --> 503.000] And they're technically different. But for the sake of this presentation, we can just think about both of these concepts as knowledge that where I am now is where I was previously, plus all of the steps that I took since that last reference point.
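As a concrete and purely hypothetical illustration of that idea (not something shown in the talk), dead reckoning in code is just this accumulation of steps from the last reference point:

```python
# A minimal dead-reckoning / path-integration update:
# new position = last known reference point plus every step taken since it.
def dead_reckon(reference_point, steps):
    x, y = reference_point
    for dx, dy in steps:                 # each step is a displacement since the last frame
        x += dx
        y += dy
    return (x, y)

# Example: start at a known landmark and take three steps.
print(dead_reckon((0.0, 0.0), [(1.0, 0.0), (1.0, 0.5), (0.0, 1.0)]))   # (2.0, 1.5)
```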
46
+ [503.000 --> 514.000] This relies on continuity, proprioception or, you know, a good understanding of your own body and its movement throughout spaces, a calculation intuition, etc.
47
+ [514.000 --> 526.000] However, it can be difficult or impossible to properly utilize a map using using these techniques without knowledge of those things.
48
+ [526.000 --> 542.000] So a lot of times in games in particular, you know, I might not be fully aware of the movement mechanics. I might not be fully aware of how the camera or the player or what have you that I'm controlling can experience that space.
49
+ [542.000 --> 549.000] And so my sense of proprioception there is really limited. So we should keep that in mind when we're using paths.
50
+ [549.000 --> 559.000] And then they help us, you know, prevent players from getting lost. Paths are really good at catching lost players. You can think of, you know, being lost in a desert.
51
+ [559.000 --> 572.000] You're lost in a desert. You really just need to pick a direction and start walking because you don't know where to go. But once you come across a path or a street, you've automatically eliminated one vector from your field of possibilities.
52
+ [572.000 --> 582.000] Now you just have to choose: am I going to go left or am I going to go right? So having a path is a really good way to catch players who might be deviating away and starting to get lost.
53
+ [582.000 --> 589.000] Obviously, paths also do great level design things like establishing player flow and connecting large areas together, etc.
54
+ [589.000 --> 604.000] An inadequate path in your space can make it difficult to connect areas of the cognitive map together. And we have those limitations from dead reckoning and path integration that manifest themselves even more in games.
55
+ [604.000 --> 612.000] One of the things to look out for as well with paths is that moving one way along a path does not necessarily mean moving the other way.
56
+ [612.000 --> 622.000] If you've had the experience of going on a hike, reaching the end destination, turning around and going back and not being fully sure if you're going down the same trail, you've experienced this.
57
+ [622.000 --> 633.000] So just because you've placed paths inside of your level doesn't automatically mean that players suddenly won't get lost. You need to make sure that you're addressing that concern as well.
58
+ [633.000 --> 640.000] The next element is landmarks, also super self-explanatory and loved by level designers all over the place.
59
+ [640.000 --> 648.000] They're single localized and memorable features, you know, paths were these linear elements. Now we've got point references with these landmarks.
60
+ [648.000 --> 657.000] They tend to be things you want to take pictures of and they're recognizable either visually, excuse me, narratively or experientially.
61
+ [657.000 --> 666.000] You know, it can be this Randy's Donuts shop or it could be the bench that the character had their first kiss on, for example.
62
+ [666.000 --> 676.000] Landmarks can be useful in a number of different ways. One of those is orienting players from a distance. A lot of the time landmarks are tall.
63
+ [676.000 --> 685.000] And so you can see them from far away, which is great. But they're also useful for orienting players when they're going down new paths and new journeys.
64
+ [685.000 --> 699.000] If I'm going to an area that I haven't been to before, but I maintain a reference point in a previous landmark that I've already established, that'll help anchor the new information that I'm receiving in relation to that previous landmark.
65
+ [699.000 --> 707.000] Also, they tend to situate elements of your spaces among themselves, which is super important.
66
+ [707.000 --> 721.000] But also to note, landmarks are essentially only useful if they're stationary. So if you've got a large creature, let's say, that is walking around a big map, that creature is no longer that useful as a landmark because it's moving all the time.
67
+ [721.000 --> 730.000] Also, they're much better when they're directional. So I have this Statue of Liberty here and the Eiffel Tower. The Eiffel Tower is radially symmetric.
68
+ [730.000 --> 737.000] So if I'm north of the Eiffel Tower looking back or south of the Eiffel Tower looking back, the Eiffel Tower is basically going to look the same to me.
69
+ [737.000 --> 748.000] But if I'm north of the Statue of Liberty or I'm south of the Statue of Liberty, the way that the Statue of Liberty is made makes it very easy for me to discern that, hey, I'm in a different spot now.
70
+ [748.000 --> 761.000] So whenever you can, if making sure that people can use that landmark to situate themselves around it is important to you, try to make your landmarks directional.
71
+ [761.000 --> 769.000] Lastly, photogrammetry is the process of taking multiple pictures of an object and then making a 3D model from that.
72
+ [769.000 --> 771.000] You want to think of your landmarks in a similar way.
73
+ [771.000 --> 778.000] If a landmark is only referenced one time, it's not really going to be useful for the player to create their own cognitive map.
74
+ [778.000 --> 786.000] You want to make sure they're able to reference it as many times as possible so that that reference point can be continually reinforced.
75
+ [786.000 --> 791.000] Next up, we have districts. This is also self-explanatory.
76
+ [791.000 --> 800.000] A district is a region identified by a characteristic or quality. We have point references with landmarks, linear references with paths.
77
+ [800.000 --> 808.000] And now we've got these zonal references with districts. Here are a bunch of examples, industrial zones, downtowns, nature preserves, etc.
78
+ [808.000 --> 813.000] A good way to identify districts is to do what I call a squint test.
79
+ [813.000 --> 825.000] So if you look at your map and you squint your eyes and everything gets all blurry, if you can start to distinguish different parts of that map, you know, like this area is a little red, this area is, you know, looking a little different.
80
+ [825.000 --> 829.000] You can most of the time assume that those are your districts.
81
+ [829.000 --> 837.000] The districts have edges and you go through them. So you enter into these new kinds of spaces.
82
+ [837.000 --> 843.000] They tend to be mid to large scale and another good way of thinking about districts is a color by number image.
83
+ [843.000 --> 858.000] So a color by number image is a kind of cohesive and consistent portrait as a whole, but it's comprised of these unique and recognizable colors that work together to make it look like a real image,
84
+ [858.000 --> 868.000] that work together to make it all work out. Likewise, in our cognitive maps, having clear districts really helps to differentiate areas from each other.
85
+ [868.000 --> 878.000] An important thing with districts is the concept of clustering. So I have two clusters here. I've got a set of five clusters on the left and a set of five clusters on the right.
86
+ [878.000 --> 889.000] And the set of clusters that are on the left are way easier to remember than the ones on the right. And that's because they are grouped with like objects.
87
+ [889.000 --> 900.000] This type of clustering can be semantic or mechanical, not just visual. So, you know, it doesn't just have to be, you know, three pyramids here and three box buildings over here.
88
+ [900.000 --> 908.000] It can be, you know, this is an area where I can jump really high. This is an area where I get in cars and drive around, et cetera.
89
+ [908.000 --> 917.000] Isolating these qualities in different areas like this is just generally good practice for world building and game design.
90
+ [917.000 --> 927.000] But it really helps reinforce a cognitive map, because I could probably navigate away from this slide and, because they're clustered in such a way, that image on the left might still be something you remember.
91
+ [927.000 --> 933.000] And that image on the right, you might forget the moment that I move away from it.
92
+ [933.000 --> 949.000] Second to last, we have edges. So an edge is a linear reference, but it's not a path. These tend to control continuity or they separate things. Examples are gates, walls, cliffs, borderlines, et cetera.
93
+ [949.000 --> 962.000] They tend to be elevational simply because we tend to move around the world horizontally. So if we were, if we, you know, flew around a lot or climbed a lot of trees, edges could be barriers that are horizontal.
94
+ [962.000 --> 970.000] But most of the time, they're vertical things. It's stuff that you tend to go around or go along or things that you also go through.
95
+ [970.000 --> 977.000] For example, being on the outside of a building, opening a door and entering inside of that building.
96
+ [977.000 --> 994.000] You want to be deliberate when you're working with edges. Crossing that threshold of the edge, or being blocked by the edge, are really memorable experiences.
97
+ [994.000 --> 1006.000] And if that edge is really crisp and clean, I noticed that sometimes we have this tendency to want to blur things like we don't want these like hard, hard lines cutting through our landscapes.
98
+ [1006.000 --> 1030.000] This isn't to say that you need to make hard lines, but you should be deliberate in the limits and boundaries of these districts in the form of these edges, because if the edges are blurred, the cognitive map of players is also going to reflect that blurred nature and it's not going to be as good at anchoring them when they do start getting lost.
99
+ [1030.000 --> 1039.000] These occur a lot in games. There's level transitions, level boundaries, mechanical boundaries, portals, ledges, walls, cliffs, etc.
100
+ [1039.000 --> 1048.000] So we have plenty of opportunity to liberally use edges and we want to take full advantage of those.
101
+ [1048.000 --> 1062.000] And lastly, we have nodes. So a node is a convergence of paths. It's a point reference, but it's a point reference that is defined by paths, which is the other element.
102
+ [1062.000 --> 1074.000] Most of the time, these are things like traffic intersections, transit hubs or home spaces or hub spaces that allow you to access multiple different locations of the game from one spot.
103
+ [1074.000 --> 1090.000] If there is a place that has many ins or many outs, that's usually a node. They tend to be denser than the adjacent areas, just because these are places that players and people flock to when they're navigating in general.
104
+ [1090.000 --> 1094.000] So they're good to have.
105
+ [1094.000 --> 1114.000] Excuse me. You know, there's that expression, all roads lead to Rome. And in this example, Rome is a node. Having a location that is repeatedly used by people on their way to get somewhere is really important and really valuable.
106
+ [1114.000 --> 1126.000] So when you're making a node or when you've recognized that you have created a node in a game that you're working on, you really want to start working on place making there in order to make it as recognizable as possible.
107
+ [1126.000 --> 1132.000] These nodes can be destinations. They don't just have to be things that you go in or go through.
108
+ [1132.000 --> 1142.000] And a good way to facilitate that is just make sure that people can spend some time there as opposed to just, you know, being transient and moving through something.
109
+ [1142.000 --> 1156.000] I've been on plenty of highway interchanges, having grown up my whole life in LA. But if you were to drop me in one of these big highway interchanges, I might be hard pressed to know where I am without all the signage.
110
+ [1156.000 --> 1164.000] If you want, if you're going to make a node, try to turn it into a place and not just some, you know, arbitrary thing you pass through.
111
+ [1164.000 --> 1171.000] So those are the five elements, you know, what they are, how you can identify them, how you can leverage them and why they're useful.
112
+ [1171.000 --> 1185.000] But now let's talk about more specifically implementing them in our design practice. So the first thing I like to do is do an audit, try to look at levels and spaces that have already been made and identify existing paths, landmarks, districts, edges.
113
+ [1185.000 --> 1200.000] And nodes in those areas. Then once I've done that, I want to assess the clarity and readability of those things. If I'm kind of doing this audit and I'm saying that could be a landmark or that looks like a good enough district, etc.
114
+ [1200.000 --> 1216.000] It probably isn't a good enough landmark and it probably isn't a district. So that's a cue for me to go back and, you know, clarify those things, you know, really, really not be shy about these design decisions that I'm making.
115
+ [1217.000 --> 1230.000] Then I want to organize my space. So, because of my architectural background, I like to work in plan first, and I like to think of plan as the organization or structure for everything.
116
+ [1230.000 --> 1240.000] I think the plan should be clear and legible and by looking at that plan, you should be able to easily recognize almost all of these elements there.
117
+ [1240.000 --> 1247.000] Once the plan really pops with all of these elements and they're really distinct, then I like to move on to section.
118
+ [1247.000 --> 1253.000] Section is where I think the emotion comes out, where the storytelling happens, where the experience really takes place.
119
+ [1253.000 --> 1261.000] Just because I've got, you know, one of these elements in my plan, it doesn't mean it's actually going to come through in my section when I'm experiencing the game.
120
+ [1261.000 --> 1268.000] So if I see on my plan, there's this really strong landmark, but then when I start playing the game, that landmark isn't really reading.
121
+ [1268.000 --> 1275.000] That's a cue for me to go back and edit that landmark to make sure that it's legible when I'm playing the game.
122
+ [1275.000 --> 1284.000] Speaking of playing the game, it's super important to test if you can without a HUD or a mini map or UI, et cetera.
123
+ [1284.000 --> 1288.000] Why is this? A couple of things.
124
+ [1288.000 --> 1301.000] Using tools like a radar, GPS or a mini map can lead us into digesting spaces using an egocentric frame of reference as opposed to an allocentric frame of reference.
125
+ [1301.000 --> 1303.000] So what do these mean?
126
+ [1303.000 --> 1310.000] Ego-centric frame of reference means I'm the center of the universe and the universe kind of revolves around me as I go through it.
127
+ [1310.000 --> 1316.000] If you've used a GPS navigation system on your phone, that's egocentric.
128
+ [1316.000 --> 1325.000] This can be difficult because you're navigating in this piecemeal fashion where you're primarily focused on what your next maneuver is going to be.
129
+ [1325.000 --> 1330.000] I need to turn right in 500 feet and then turn left, et cetera.
130
+ [1330.000 --> 1337.000] And breaking down the journey in that way has been shown to result in a decrease in route memory.
131
+ [1337.000 --> 1346.000] It can also disengage you from the environment because you're exploring the map or the UI or the HUD instead of the space itself.
132
+ [1346.000 --> 1357.000] What you're trying to do is actually take that triangle and move it to that circle and your character moving through the world is just a byproduct of you trying to get those shapes to align.
133
+ [1357.000 --> 1367.000] However, if you test without the aid of a heads up display, et cetera, you tend to do more allocentric mapping.
134
+ [1367.000 --> 1370.000] Allocentric mapping is I am not the center of the world.
135
+ [1370.000 --> 1373.000] The world exists and I am simply moving through it.
136
+ [1373.000 --> 1375.000] It's this kind of big picture navigation.
137
+ [1375.000 --> 1379.000] And when you do things this way, there tends to be an increase in route memory.
138
+ [1379.000 --> 1386.000] You're engaged more with your environment and you're exploring the space instead of just exploring the map itself.
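As a rough illustration of the two frames of reference (again just a sketch; the function and variable names here are made up for the example and are not from the talk), an egocentric observation is "the landmark is 60 degrees to my left, 20 meters away", while the allocentric version pins that same landmark to fixed world coordinates that do not depend on which way the player happens to be facing.

import math

def ego_to_allo(player_xy, player_heading_deg, bearing_deg, distance):
    # Convert an egocentric observation (a bearing relative to the direction the
    # player is facing, plus a distance) into allocentric world coordinates.
    world_angle = math.radians(player_heading_deg + bearing_deg)
    px, py = player_xy
    return (px + distance * math.cos(world_angle),
            py + distance * math.sin(world_angle))

# Two different headings and two different egocentric bearings, but the same
# landmark: the allocentric coordinates come out the same either way.
print(ego_to_allo((0, 0), 0, 60, 20))   # facing east, landmark 60 degrees to the left
print(ego_to_allo((0, 0), 60, 0, 20))   # facing 60 degrees, landmark dead ahead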
139
+ [1386.000 --> 1390.000] I have this image here of the fish in the lake and the bird in the trees.
140
+ [1390.000 --> 1399.000] So the fish in the lake can see the bird and it can see the trees, but it can't see the lake because that's where it is.
141
+ [1399.000 --> 1407.000] Likewise, the bird can identify the fish and the lake, but not the forest, because it's understanding things egocentrically.
142
+ [1407.000 --> 1417.000] What we want to do to get a clear picture of everything is try to get what the image in the center is, which is essentially an allocentric frame of reference, which is, you know,
143
+ [1417.000 --> 1423.000] this is the environment and the player is simply moving through the environment.
144
+ [1423.000 --> 1431.000] So the TLDR of this talk is one, there exist things called cognitive maps.
145
+ [1431.000 --> 1438.000] It is the digestion of the environments that you go through, and it exists in everybody's head.
146
+ [1438.000 --> 1448.000] Getting lost is when that cognitive map is misaligned with the environment, misaligned with what the environment is currently telling you.
147
+ [1448.000 --> 1456.000] So how can we, how can we prevent players from getting lost? Well, we can try to foster clear cognitive maps and how can we do that?
148
+ [1456.000 --> 1468.000] We can use a toolkit using paths, landmarks, districts, edges and nodes to make sure that our spaces are robust and can actually foster these robust cognitive maps.
149
+ [1468.000 --> 1485.000] And a good way to do that is just be deliberate. I've included this lovely food tray here because I could take the same food tray with all of those same ingredients and just kind of, like, mix it all up in there so everything's, you know, all up on top of each other,
150
+ [1485.000 --> 1489.000] all mixed in with each other and blended.
151
+ [1489.000 --> 1493.000] But it wouldn't be as appetizing first of all.
152
+ [1493.000 --> 1497.000] And secondly, it wouldn't be as memorable to me.
153
+ [1497.000 --> 1504.000] Excuse me, we would have the same nutritional value, but by virtue of the way it's organized, it's not going to read as clearly.
154
+ [1504.000 --> 1509.000] And with this one, you know, I can do my squint tests, you know, and I can begin to start making out districts.
155
+ [1509.000 --> 1518.000] And then we can start to make out landmarks even with this food. So you can really apply these design strategies to nearly everything.
156
+ [1518.000 --> 1529.000] Lastly, I wanted to just show you all of these different references that I have used to try to, you know, understand these things for myself.
157
+ [1529.000 --> 1538.000] These are all people who are way more intelligent than I am. And if you're interested in pursuing this topic further, I would encourage you to go here.
158
+ [1538.000 --> 1544.000] And I would encourage you to go ahead and pause the video.
159
+ [1544.000 --> 1549.000] And any one of these is going to be an exciting paper for you to enjoy.
160
+ [1549.000 --> 1552.000] So I will go back.
161
+ [1552.000 --> 1567.000] Thank you very much. I hope that this helped enlighten you and helped you understand how to prevent your players from getting lost and introduced a new paradigm for understanding how you can design your levels specifically for that use case.
162
+ [1567.000 --> 1576.000] My name is Nick Aoyjin. I hope you have a great day. And that's it. Thank you so much. All right. Bye.
transcript/allocentric_Qpa0nrKPYgc.txt ADDED
@@ -0,0 +1,711 @@
1
+ [0.000 --> 9.160] My second presentation today, a little bit different, I want to talk to you now about some
2
+ [9.160 --> 14.240] projects that we've been doing previously before we got very, very heavy into the CVI aspect.
3
+ [14.240 --> 19.680] This was a large scale study about five, six years looking at video game use in blind
4
+ [19.680 --> 25.440] children, blind individuals in general, ocular blind, to try to develop navigation skills
5
+ [25.440 --> 27.480] and orientation mobility skills.
6
+ [27.480 --> 30.880] So very, very different in terms of what we talked about earlier this morning.
7
+ [30.880 --> 37.600] But nonetheless, trying to come towards this, using this evidence-based neuroscience driven
8
+ [37.600 --> 40.600] approach, and hopefully you'll have questions for this as well.
9
+ [40.600 --> 44.880] And I am available to stay and discuss also the CVI talk as well.
10
+ [44.880 --> 45.880] So let's get started.
11
+ [45.880 --> 50.520] In the same way that I started off my first presentation, I kind of want to get sort
12
+ [50.520 --> 54.760] of the lay of the land with you and try to give you a sense of how the thought process
13
+ [54.760 --> 56.600] came about for this project.
14
+ [56.600 --> 60.560] And understand that I'm going to show you in about four or five slides what I was thinking
15
+ [60.560 --> 64.440] about for like three, four years, so I'm trying to compress all that.
16
+ [64.440 --> 66.440] So here's the first thing to think about.
17
+ [66.440 --> 71.240] Rehabilitation in the case of wayfinding and navigation, obviously a big, big challenge
18
+ [71.240 --> 73.080] for all the people that we work with, right?
19
+ [73.080 --> 77.200] So fortunately we have a structured way to teach people with visual impairments
20
+ [77.200 --> 78.200] how to find their way around.
21
+ [78.200 --> 81.400] And we call that, of course, orientation and mobility instruction.
22
+ [81.400 --> 87.680] From a cane to a guide dog, for example, all very structured, well established techniques
23
+ [87.680 --> 90.840] that are really part and parcel to promote an individual's independence.
24
+ [90.840 --> 93.280] There are limitations, of course, with this.
25
+ [93.280 --> 96.400] And there are always people and O&M instructors reaching out to me and saying, you know,
26
+ [96.400 --> 97.400] what do you think about this technology?
27
+ [97.400 --> 99.240] What do you think about this approach and so on?
28
+ [99.240 --> 103.760] Is this a way that we can study this and incorporate it in a more structured fashion?
29
+ [103.760 --> 106.760] And I became very, very interested in this idea.
30
+ [106.760 --> 110.200] Some other individuals, Dan Kish, for example, you probably have heard about this guy who
31
+ [110.200 --> 111.200] uses echolocation.
32
+ [111.200 --> 113.640] They call him the human Batman.
33
+ [113.640 --> 120.120] He walks around making click noises and using the reflections off the surfaces of objects.
34
+ [120.120 --> 121.960] He's able to identify various objects.
35
+ [121.960 --> 128.200] And in this particular case, you see him riding his bicycle, even though he has prosthetic
36
+ [128.200 --> 129.200] eyes.
37
+ [129.200 --> 130.200] He has absolutely no light perception.
38
+ [130.200 --> 133.000] I don't know if everybody can learn this skill.
39
+ [133.000 --> 134.600] It's certainly really quite remarkable.
40
+ [134.600 --> 139.200] And there have been some groups in Canada who have done fMRI on him and studied his brain
41
+ [139.200 --> 140.720] and how he's able to do it.
42
+ [140.720 --> 144.720] But it's quite a remarkable skill that he's developed.
43
+ [144.720 --> 148.280] Some technology that I think is quite interesting as well.
44
+ [148.280 --> 149.560] This is an interesting one.
45
+ [149.560 --> 151.840] This is from the Sendero group out of California.
46
+ [151.840 --> 156.320] And the idea is that you walk around with a GPS monitor, which tracks you.
47
+ [156.320 --> 159.120] And as you're walking through the city, and you connect this with, say, your BrailleNote,
48
+ [159.120 --> 163.320] you get information about, for example, the name of the street, how far you are from
49
+ [163.320 --> 165.240] a particular destination.
50
+ [165.240 --> 170.000] You may use a Bluetooth connector as well to get some auditory input, some very, very
51
+ [170.000 --> 175.640] nice technology that's coming together to help enhance these skills, if you will.
52
+ [175.640 --> 176.640] Certainly limitations.
53
+ [176.640 --> 179.560] The big one with GPS, of course, is that it's only for outdoors.
54
+ [179.560 --> 182.440] GPS doesn't work in an indoor environment.
55
+ [182.440 --> 186.240] It also is quite limited when you're in a situation of being downtown, where there's
56
+ [186.240 --> 188.840] a lot of reflections from buildings and so on.
57
+ [188.840 --> 189.840] The satellite signal doesn't get captured.
58
+ [189.840 --> 193.040] You have to be visible in order for this to work.
59
+ [193.040 --> 197.480] So we were thinking about what was out there, what could we change, and in particular,
60
+ [197.480 --> 201.400] we were very, very motivated, or trying to get to this idea of motivation, I should say,
61
+ [201.400 --> 205.480] how can we leverage motivation as a way to improve navigation skills?
62
+ [205.480 --> 208.880] So let's talk about a few things as well.
63
+ [208.880 --> 213.280] First point I want to make from a clinical rehabilitation standpoint, a general comment
64
+ [213.280 --> 214.880] that I'd like to make with you.
65
+ [214.880 --> 219.560] So in traditional therapy session, the patient works one-on-one with a therapist to address
66
+ [219.560 --> 223.640] specific goals like psychological issues, could be movement, a phobia, or a particular
67
+ [223.640 --> 228.160] skill in the hopes of improving that particular deficit or that particular function.
68
+ [228.160 --> 232.200] So for example, if a person has a phobia or a particular psychological issue, they work
69
+ [232.200 --> 236.680] one-on-one with a therapist, addressing those concerns, walking through those issues, and
70
+ [236.680 --> 241.320] trying to use that one-on-one face time, that exchange, to work through that issue.
71
+ [241.320 --> 246.200] If you are working on the motor side, very, very often, what we see is a lot of repetition,
72
+ [246.200 --> 247.200] working with various tasks.
73
+ [247.200 --> 251.160] A particular skill or motion deficit that's trying to get enhanced through repetition
74
+ [251.160 --> 253.400] and repetitive exercises and so on.
75
+ [253.400 --> 257.560] That's sort of like the state of the affairs right now.
76
+ [257.560 --> 258.560] Here's my problem.
77
+ [258.560 --> 260.240] A couple of things to think about.
78
+ [260.240 --> 264.720] What is the ecological validity and the effect of context on therapy?
79
+ [264.720 --> 269.400] If I'm sitting with a therapist talking about my problems and I'm not having the problem,
80
+ [269.400 --> 272.760] how good am I transmitting that issue?
81
+ [272.760 --> 276.520] Similarly, if the therapist is providing me some strategies and I'm still not going
82
+ [276.520 --> 280.880] through that problem, how good am I in terms of transferring that into that situation,
83
+ [280.880 --> 282.200] into that scenario.
84
+ [282.200 --> 286.480] So the context, the immersion of learning the skill is extremely important.
85
+ [286.480 --> 288.440] That's the first thing I want to say.
86
+ [288.440 --> 293.200] The second aspect, if we look on the motor side of things, boredom kills us when it comes
87
+ [293.200 --> 294.200] to rehabilitation.
88
+ [294.200 --> 297.920] Everybody recognize this toy, this little stacker thing.
89
+ [297.920 --> 301.360] I don't know, I was three when I had one.
90
+ [301.360 --> 304.560] Here's a woman who just had a stroke in her 40s.
91
+ [304.560 --> 308.680] She's given something to do, something that she knows was designed for a three year old.
92
+ [308.680 --> 312.760] What does that do in terms of her motivation and struggles and so on?
93
+ [312.760 --> 316.800] So I really think the ecological validity and the context of therapy is extremely important.
94
+ [316.800 --> 318.320] We certainly can do better.
95
+ [318.320 --> 321.040] So the immersion aspect, I think, is extremely important.
96
+ [321.040 --> 326.520] And creating scenarios that are meaningful for that individual are also extremely important.
97
+ [326.520 --> 330.560] So let's get to some other pieces of the puzzle and I'm slowly going to edge into this
98
+ [330.560 --> 332.760] idea of gaming and how we got into that.
99
+ [332.760 --> 336.000] The times are changing, definitely.
100
+ [336.000 --> 339.920] For example, here, daily emails: in 2000, 12 billion emails being sent.
101
+ [339.920 --> 342.920] We're now at 247 billion in 2010.
102
+ [342.920 --> 346.400] Text messages, 400,000 up to 4.5 billion.
103
+ [346.400 --> 351.280] Time spent online, 2.7 hours a week to 18 hours a week.
104
+ [351.280 --> 354.800] More of the story, we are a tech-driven society.
105
+ [354.800 --> 359.080] A lot of what we do is intimately related to what we do with technology.
106
+ [359.080 --> 363.920] As I said, despite my early screen saver problems, I believe that technology is an
107
+ [363.920 --> 364.920] enabler.
108
+ [364.920 --> 366.520] We should try to leverage that somehow.
109
+ [366.520 --> 369.720] And we're getting very, very good at it because costs are going down.
110
+ [369.720 --> 371.000] Everybody has a cell phone.
111
+ [371.000 --> 372.000] Everybody has email.
112
+ [372.000 --> 374.080] Everybody has ways to stay connected.
113
+ [374.080 --> 377.240] There's an opportunity here that I think we need to leverage.
114
+ [377.240 --> 379.120] A couple of other things to talk about.
115
+ [379.120 --> 381.920] The case for play, as I mentioned, my mother is a preschool teacher.
116
+ [381.920 --> 385.800] And she used to always tell me, a child who plays is a healthy child, right?
117
+ [385.800 --> 387.360] It's intimately related.
118
+ [387.360 --> 390.880] And indeed, play is extremely important in the development of a child.
119
+ [390.880 --> 395.440] Role playing, social interactions, what's fair, what's not, establishing a rapport with
120
+ [395.440 --> 396.440] kids.
121
+ [396.440 --> 398.960] All that is done at a very, very early age.
122
+ [398.960 --> 402.200] And I think that's also what makes games later in life very, very exciting as well.
123
+ [402.200 --> 406.320] It's a way to be somebody in a sense that you can't otherwise be.
124
+ [406.320 --> 407.880] Animals know this, right?
125
+ [407.880 --> 409.200] Young animals play fight.
126
+ [409.200 --> 411.560] And they know when they can bite, when they can't and so on.
127
+ [411.560 --> 415.520] So there's something very important about play and brain development that I think is very,
128
+ [415.520 --> 416.520] very interesting.
129
+ [416.520 --> 419.120] And think of the counter example, this is a child with autism.
130
+ [419.120 --> 421.480] It's a child who doesn't play, right?
131
+ [421.480 --> 424.440] And that's one of the hallmark signs of a child with autism as well.
132
+ [424.440 --> 428.680] So I think there is somehow an association between playing and brain development and so
133
+ [428.680 --> 429.680] on.
134
+ [429.680 --> 432.720] And many, many news stories that have been out there trying to get at this point.
135
+ [432.720 --> 433.720] Here's an interesting study.
136
+ [433.720 --> 435.880] I don't know if you heard about this one called the high scope study.
137
+ [435.880 --> 439.640] This was done in the state of Michigan, done by the Educational Research Foundation in
138
+ [439.640 --> 442.160] Michigan, a longitudinal study by Stuart Brown.
139
+ [442.160 --> 446.600] So what he found that by age 23, he compared individuals who went through a very, very
140
+ [446.600 --> 452.760] structural, didactic school program versus schools who had very, very, a lot of hours of play
141
+ [452.760 --> 454.280] time and interaction.
142
+ [454.280 --> 459.200] And what he found by age 23, more than a third of the kids who had attended an instruction
143
+ [459.200 --> 463.920] oriented preschool had been arrested for a felony as compared to fewer than one tenth
144
+ [463.920 --> 466.640] of the kids who had been in a play oriented preschool.
145
+ [466.640 --> 470.840] Now, that doesn't mean if you don't play you'll rob a bank, right?
146
+ [470.840 --> 476.920] This isn't causality, but a very, very interesting association that having this early on in development
147
+ [476.920 --> 479.920] certainly seems to have a benefit for brain development as well.
148
+ [479.920 --> 484.880] So those questions earlier on about CVI, how do I wake up the visual brain?
149
+ [484.880 --> 489.080] Consider play as one of the ways to do it from an engagement standpoint.
150
+ [489.080 --> 492.440] And I hope to convince you that there's a neuroplastic and a neuroscience reason behind
151
+ [492.440 --> 494.880] this as well.
152
+ [494.880 --> 498.400] Learning through simulation, another piece of the puzzle, very, very important.
153
+ [498.400 --> 501.240] The best example to give you is flight simulators.
154
+ [501.240 --> 506.000] If you are a pilot wanting to learn how to fly a new plane or how to land at a new airport
155
+ [506.000 --> 511.960] or how to fly in very, very challenging conditions, much better that you do this in a simulator
156
+ [511.960 --> 514.880] than with a complement of 350 people behind you, right?
157
+ [514.880 --> 519.840] If you make a mistake, better you learn it there than you do it in the real world, right?
158
+ [519.840 --> 523.480] So pilots spend an enormous amount of time in flight simulators and this has been extremely
159
+ [523.480 --> 526.280] effective and has revolutionized the airline industry.
160
+ [526.280 --> 530.440] They have something called the transfer effectiveness ratio, which is about 50%, which
161
+ [530.440 --> 534.300] means every two hours that you spend on a flight simulator is the equivalent of one
162
+ [534.300 --> 535.900] hour real flight time.
163
+ [535.900 --> 540.820] So what you learn in the simulator, going through the motions, preparing yourself mentally,
164
+ [540.820 --> 542.420] transfers into the real world.
165
+ [542.420 --> 547.720] And the closer that immersion is, the better it is in terms of the transference.
166
+ [547.720 --> 551.640] Other people have learned this too. The medical field, for example, is spending a lot
167
+ [551.640 --> 554.360] of money looking at surgical simulations.
168
+ [554.360 --> 558.440] Better I make the mistake resecting a tumor in a simulation than I do in the real
169
+ [558.440 --> 559.440] world.
170
+ [559.440 --> 561.120] The military is also spending a lot of money in this as well.
171
+ [561.120 --> 565.280] So learning by simulation seems to be another thing that the brain likes.
172
+ [565.280 --> 567.520] And again, I'll show you some evidence of that.
173
+ [567.520 --> 572.240] Here are some great examples of how video games and virtual reality are being used in therapy
174
+ [572.240 --> 573.720] in your own world.
175
+ [573.720 --> 578.720] This is work by Elizabeth Strickland, and what she has is children with cognitive development
176
+ [578.720 --> 583.560] issues and trying to teach them basic skills like crossing the street safely.
177
+ [583.560 --> 587.680] So she has these kids wearing a virtual reality helmet and they do associations.
178
+ [587.680 --> 590.080] They call this game Street Safety.
179
+ [590.080 --> 594.680] They associate good behaviors with certain friends, bad behaviors with other individuals.
180
+ [594.680 --> 597.680] And they go through these simulations learning to cross safely.
181
+ [597.680 --> 602.240] Better you learn this in the safe controlled environment of a classroom than learning this
182
+ [602.240 --> 603.520] in the real world the hard way.
183
+ [603.520 --> 606.200] So to speak, you learn these skills in a safe controlled environment.
184
+ [606.200 --> 609.720] You have reinforcement, you have repetition again, things that the brain likes.
185
+ [609.720 --> 611.400] And then you transfer that to the real world.
186
+ [611.400 --> 613.120] So a very, very interesting approach.
187
+ [613.120 --> 614.520] Here's another one that's quite nice.
188
+ [614.520 --> 616.400] This is called IREX.
189
+ [616.400 --> 617.400] This is by GestureTek.
190
+ [617.400 --> 620.520] This is a group out of Israel that have developed an interesting system.
191
+ [620.520 --> 623.840] This is a child with cerebral palsy who doesn't want to go to rehab.
192
+ [623.840 --> 627.360] Sorry, it's sounding like Amy Winehouse there.
193
+ [627.360 --> 628.960] How do you get him to go to rehab?
194
+ [628.960 --> 630.440] No, no, no.
195
+ [630.440 --> 634.600] But what I do like is soccer.
196
+ [634.600 --> 637.560] So he has this system here where they use a small camera.
197
+ [637.560 --> 638.560] They film him.
198
+ [638.560 --> 639.880] They project it on a blue screen.
199
+ [639.880 --> 642.400] And he's the goalie while people take shots.
200
+ [642.400 --> 645.700] And the idea is that he reaches over one side blocks the ball, reaches to the other side
201
+ [645.700 --> 646.700] blocks the ball.
202
+ [646.700 --> 650.720] Then they can go systematically crossing the hemifield, the other, the other hemispace
203
+ [650.720 --> 651.720] as well.
204
+ [651.720 --> 654.680] And all this is quantified as you can see on the bottom row there.
205
+ [654.680 --> 657.600] They've got his favorite team playing, his favorite players are playing.
206
+ [657.600 --> 658.600] He's engaged.
207
+ [658.600 --> 659.600] Now he wants to go.
208
+ [659.600 --> 660.600] He's engaged.
209
+ [660.600 --> 663.720] So again, you can do good work with play and under simulation.
210
+ [663.720 --> 667.600] This is a chance to try to awaken the brain and motivate individuals.
211
+ [668.440 --> 672.240] Let's talk specifically more about video games, why I think, whether or not I think this
212
+ [672.240 --> 673.960] is a good idea.
213
+ [673.960 --> 678.960] You probably all remember when Pong came out, we thought this was, oh my god, I got to get
214
+ [678.960 --> 679.960] Pong, right?
215
+ [679.960 --> 682.440] This is revolutionary, right?
216
+ [682.440 --> 684.720] Now I think about how games have evolved, right?
217
+ [684.720 --> 685.720] It's really interesting.
218
+ [685.720 --> 689.640] We've moved outside of the arcade and now moved to our own personal devices.
219
+ [689.640 --> 693.440] So it's really interesting that the goals remain the same, but the space that we work in
220
+ [693.440 --> 695.120] has changed dramatically.
221
+ [695.120 --> 696.680] Here's some interesting statistics.
222
+ [696.680 --> 701.160] World of Warcraft, which is a role-playing game, is a very, very interesting one because
223
+ [701.160 --> 703.400] they actually log on the time that people spend.
224
+ [703.400 --> 705.040] And here are some interesting stats.
225
+ [705.040 --> 711.040] Since 1994, collectively gamers have spent close to six million years playing this game.
226
+ [711.040 --> 712.960] That's geological time scale, right?
227
+ [712.960 --> 714.960] The Grand Canyon was carved in about that amount of time.
228
+ [714.960 --> 717.280] They saved millions more.
229
+ [717.280 --> 718.280] One game, right?
230
+ [718.280 --> 720.440] And the average gamer spends 22 hours a week.
231
+ [720.440 --> 722.560] That's a part-time job, right?
232
+ [722.560 --> 726.320] These people spend a lot of time playing video games.
233
+ [726.320 --> 728.920] Another interesting book called Reality is Broken by Jane McGonigal.
234
+ [728.920 --> 733.400] She's a game designer and also a sociologist, very, very interested in that.
235
+ [733.400 --> 739.000] And she says in countries with strong gaming culture, by the age of 21, the average gamer
236
+ [739.000 --> 744.040] will spend close to 10,000 hours playing video games, which is the equivalent of time you
237
+ [744.040 --> 749.080] spend from the fifth grade to high school graduation if you have perfect attendance.
238
+ [749.080 --> 751.920] That's a lot of time spending in front of a monitor.
239
+ [751.920 --> 754.680] So they're doing it is what I'm trying to say.
240
+ [754.680 --> 758.360] Can we leverage this somehow, right?
241
+ [758.360 --> 759.360] Other things.
242
+ [759.360 --> 761.000] Are video games useful from a rehabilitation standpoint?
243
+ [761.000 --> 763.120] I'm going to give you a couple of other recent examples.
244
+ [763.120 --> 765.840] Wii-habilitation. You're all familiar with the Wiimote, right?
245
+ [765.840 --> 770.120] The idea is it basically has a gyroscope in it and it can sense directions in three
246
+ [770.120 --> 774.520] axes of motion and translates that onto a monitor as you interact.
247
+ [774.520 --> 777.040] Some interesting work with stroke recovery.
248
+ [777.040 --> 781.360] Again, I don't think the evidence is very, very clear how positive it is and there could
249
+ [781.360 --> 785.560] be a huge placebo effect of just simply getting engaged with a group and so on, which may
250
+ [785.560 --> 787.560] account for some of the benefits that are there.
251
+ [787.560 --> 791.480] But indeed, people are studying this and looking how to engage individuals.
252
+ [791.480 --> 796.680] The social interaction, for example, as well. Wii bowling, for example, in a lot of situations,
253
+ [796.680 --> 799.320] promoting social interaction and so on.
254
+ [799.320 --> 803.480] A lot easier to do this in a virtual living room than it is to actually take them all into
255
+ [803.480 --> 804.480] particular sites.
256
+ [804.480 --> 808.160] So there's some benefit in that as well, that I think is quite interesting.
257
+ [808.160 --> 809.560] Another interesting study here.
258
+ [809.560 --> 813.120] It looked at the impact of video games on training surgeons in the 21st century.
259
+ [813.120 --> 819.640] There was a link, and I quote here, a link between skill at video gaming and skill at laparoscopic
260
+ [819.640 --> 820.640] surgery.
261
+ [820.640 --> 827.760] Current video game players made 31% fewer errors, were 24% faster and scored 26% better overall
262
+ [827.760 --> 829.080] than non-player colleagues.
263
+ [829.080 --> 830.080] Again, not causal.
264
+ [830.080 --> 833.400] Doesn't mean you should be playing video games, then go to medical school.
265
+ [833.400 --> 836.600] The point is that there was an association between the two, right?
266
+ [836.600 --> 838.040] It's an observational study.
267
+ [838.040 --> 840.520] It was something again, kind of connecting from a skill standpoint.
268
+ [840.520 --> 843.720] And the last one I'll share you, which I was really, really struck with, this was a study
269
+ [843.720 --> 846.040] that was published in nature a couple years ago.
270
+ [846.040 --> 848.280] And to give you sort of a background, I'm not a biochemist.
271
+ [848.280 --> 852.200] But apparently when it comes to figuring out the three-dimensional shape of a protein or
272
+ [852.200 --> 855.160] a molecule, it's really, really difficult.
273
+ [855.160 --> 856.160] It's really complicated.
274
+ [856.160 --> 858.640] It's like a mental teaser or a puzzle and so on.
275
+ [858.640 --> 863.080] And even with some of the fastest computers, it takes months and months and years to figure
276
+ [863.080 --> 865.080] out this three-dimensional shape.
277
+ [865.080 --> 869.000] So this group of investigators decided to come up with a game called FoldIt.
278
+ [869.000 --> 873.240] And the idea was to go online, there were various rules of how you could fold this particular
279
+ [873.240 --> 874.400] shape.
280
+ [874.400 --> 878.160] And they just kind of left it out into the world to see what would happen.
281
+ [878.160 --> 882.720] And they said that using FoldIt, the three-dimensional structure of a protein was solved in roughly
282
+ [882.720 --> 884.760] one week, right?
283
+ [884.760 --> 887.400] By individuals with no specific training in biochemistry.
284
+ [887.400 --> 891.520] The best scientists were trying to figure this out and they couldn't do it with the fastest
285
+ [891.520 --> 892.520] computers.
286
+ [892.520 --> 895.440] And all these guys went online, who had no interest in biochemistry whatsoever, and they
287
+ [895.440 --> 897.320] solved it in a week.
288
+ [897.320 --> 901.960] So gaming somehow brings the best out of us.
289
+ [901.960 --> 906.360] We think in a way that we don't typically think in more sort of didactic fashions.
290
+ [906.360 --> 909.760] And I think again, that's another aspect that I want to submit to you.
291
+ [909.760 --> 914.720] Another interesting example in our field, this was a work by Dennis Levi at University of
292
+ [914.720 --> 920.200] California Berkeley using video games to try to improve amblyopia, visual acuity.
293
+ [920.200 --> 923.840] It was a very preliminary study, but what they found that a lot of the kids were going
294
+ [923.840 --> 927.720] through this and interacting with video games showed improvement in their visual acuity
295
+ [927.720 --> 929.040] a couple of lines.
296
+ [929.040 --> 932.800] Again, largely an observational study, there were a lot of randomization and control issues
297
+ [932.800 --> 934.800] and I think it needs to be replicated.
298
+ [934.800 --> 938.880] But showing you that we can take this also directly from the visual acuity standpoint or
299
+ [938.880 --> 942.520] a visual performance standpoint as well.
300
+ [942.520 --> 945.080] Now why do I think games work?
301
+ [945.080 --> 948.160] I'm going to give you what I call my neuroscience rationale.
302
+ [948.160 --> 951.600] So all video games have three really important aspects.
303
+ [951.600 --> 954.240] It doesn't matter whether it's Pac-Man or World of Warcraft or so.
304
+ [954.240 --> 956.640] They all kind of have these three basic features.
305
+ [956.640 --> 961.840] The first one is that there's always attainable rewards, jewels, points, munitions, portals,
306
+ [961.840 --> 966.320] the epic win, the epic win feeling and so on, for my World of Warcraft colleagues.
307
+ [966.320 --> 969.760] There's also task novelty and graded difficulty.
308
+ [969.760 --> 973.320] All games start off really, really easy and they get a little bit harder and they seem
309
+ [973.320 --> 975.520] to be almost perfectly paced with you.
310
+ [975.520 --> 977.840] And they're just at the point where you don't give up, right?
311
+ [977.840 --> 979.360] You never just say, I don't want to play this anymore.
312
+ [979.360 --> 981.640] You just, okay, one more try, one more try, one more try.
313
+ [981.640 --> 986.240] And figuring out that gradation is obviously a big, big key and this idea of having attainable
314
+ [986.240 --> 987.240] goals.
315
+ [987.240 --> 991.680] And last but not least, they always have high attention demands, survival pressure, death,
316
+ [991.680 --> 996.160] time constraints, monsters, all this sort of thing keeps you engaged obviously into the
317
+ [996.160 --> 997.160] game, right?
318
+ [997.160 --> 998.160] Well, what does this mean?
319
+ [998.160 --> 1004.440] Well, first of all, reward is intimately related to dopamine, right?
320
+ [1004.440 --> 1008.640] Task novelty is intimately related to serotonin and noradrenaline.
321
+ [1008.640 --> 1013.840] And finally, attention is intimately related with acetylcholine or the cholinergic system.
322
+ [1013.840 --> 1017.280] The point here is that we are wired for this, right?
323
+ [1017.280 --> 1022.800] We like this. Good video game designers know and understand our brain chemistry and in a
324
+ [1022.800 --> 1026.040] sense are tapping into this to get us engaged.
325
+ [1026.040 --> 1030.080] And my argument is that there's an opportunity here that we don't typically have.
326
+ [1030.080 --> 1032.280] And how do we jump start the brain?
327
+ [1032.280 --> 1034.760] This may be one way to do it.
328
+ [1034.760 --> 1037.600] So here is the study that I want to share with you.
329
+ [1037.600 --> 1039.880] You probably remember this video game, Doom, right?
330
+ [1039.880 --> 1042.560] Came out early 90s, yeah?
331
+ [1042.560 --> 1047.200] I wasted years of my life playing this game.
332
+ [1047.200 --> 1051.080] It is really, really addictive for lack of a better term.
333
+ [1051.080 --> 1052.080] It is amazing.
334
+ [1052.080 --> 1055.440] It was one of the first games of its type, what's called a 3D first person shooter game where
335
+ [1055.440 --> 1059.560] you walk through a virtual labyrinth and I'm just going to show you a video if you're
336
+ [1059.560 --> 1060.560] not familiar with this.
337
+ [1060.560 --> 1061.560] We're going to be doing this.
338
+ [1061.560 --> 1066.360] We've got loud music going on, very, very high-paced, you're walking through this three-
339
+ [1066.360 --> 1070.080] dimensional environment, you got to kill the bad guys, you got to find your way back
340
+ [1070.080 --> 1071.080] through the four doors.
341
+ [1071.080 --> 1073.080] Very, very high-paced, very engaged.
342
+ [1073.080 --> 1074.080] You get a sense right away.
343
+ [1074.080 --> 1076.080] of what's going on here.
344
+ [1076.080 --> 1081.920] Oh, I'll show you the violin.
345
+ [1081.920 --> 1086.680] The point here is that to play this game, to succeed in this game, you have to build
346
+ [1086.680 --> 1089.680] a mental map in your mind of the world that you're walking through.
347
+ [1089.680 --> 1093.400] You have to get a sense that, I walked through this corridor, I've been in this room before,
348
+ [1093.400 --> 1095.280] I came in from another perspective.
349
+ [1095.280 --> 1100.800] As you play the game, you develop a cognitive map in your mind.
350
+ [1100.800 --> 1106.800] That being said, I have a colleague from the University of Chile, who is a computer scientist
351
+ [1106.800 --> 1109.040] and develops video games for blind children.
352
+ [1109.040 --> 1113.320] The game that he developed was Audio Doom, which is exactly the same thing except it's
353
+ [1113.320 --> 1115.200] based purely on audio cues.
354
+ [1115.200 --> 1117.800] I'll explain to you more on how that works.
355
+ [1117.800 --> 1123.240] As you see here, just like sighted kids, their peers, these blind kids play
356
+ [1123.240 --> 1124.600] the game forever.
357
+ [1124.600 --> 1125.600] They love it.
358
+ [1125.600 --> 1126.600] They're completely engaged.
359
+ [1126.600 --> 1129.000] Hours and hours playing the game.
360
+ [1129.000 --> 1133.360] The other thing he noticed from an observational standpoint was the kids who played the game
361
+ [1133.360 --> 1135.120] were doing better in school.
362
+ [1135.120 --> 1136.480] They seemed to be better at math.
363
+ [1136.480 --> 1138.200] They seemed to be better at spatial reasoning.
364
+ [1138.200 --> 1141.240] They were much more engaged socially with their peers than the kids who didn't seem to like
365
+ [1141.240 --> 1142.240] video games.
366
+ [1142.320 --> 1146.160] Not causal, but an interesting association nonetheless.
367
+ [1146.160 --> 1148.160] The thought was, is there an opportunity here?
368
+ [1148.160 --> 1152.480] There was another interesting piece that really sparked my interest when I first saw this.
369
+ [1152.480 --> 1157.240] For example, here, if I give you a target environment like this, this is the lab, the child comes
370
+ [1157.240 --> 1161.840] in here, this is another door, a dead end, they go through another dead end, series of monsters
371
+ [1161.840 --> 1165.880] and so on, and they've got to find their way to a portal that takes them to the next level.
372
+ [1165.880 --> 1170.040] If you give the child Lego pieces and ask them to build the map that they walk through,
373
+ [1170.040 --> 1175.080] they can build a perfect one-to-one representation of the virtual world they walk through.
374
+ [1175.080 --> 1180.320] They have the map in their mind, even though they've never seen the map all based on auditory
375
+ [1180.320 --> 1181.320] cues.
376
+ [1181.320 --> 1184.080] In fact, these are congenitally blind children, so they've never seen the world period,
377
+ [1184.080 --> 1187.000] but nonetheless, they can build the map in their mind.
378
+ [1187.000 --> 1189.600] They can generate this through non-visual cues.
379
+ [1189.600 --> 1191.400] The question is this.
380
+ [1191.400 --> 1197.000] Why not play the game in a world that actually exists and use that as a way to teach orientation
381
+ [1197.000 --> 1198.000] mobility and navigation?
382
+ [1198.000 --> 1199.240] That's exactly what we did.
383
+ [1199.240 --> 1203.440] We invented a game using the same sort of strategy, and this is the layout of an actual
384
+ [1203.440 --> 1206.040] physical building at the Carroll Center for the Blind.
385
+ [1206.040 --> 1209.080] We have the kids play the game, and the goal here is kind of like Pac-Man, you got to
386
+ [1209.080 --> 1212.880] roll through this building, you got to find these little jewels that you see in blue squares,
387
+ [1212.880 --> 1215.080] and I'll show you a video how the game has played.
388
+ [1215.080 --> 1218.480] You also have to be careful, these red guys, these are the monsters, right?
389
+ [1218.480 --> 1221.440] If they catch you with the jewel, they hide the jewels somewhere else.
390
+ [1221.440 --> 1225.480] It forces you to keep exploring the building and so on.
391
+ [1225.480 --> 1228.400] You have to catch as many jewels as you can and not get caught by the monsters.
392
+ [1228.400 --> 1229.840] We engage them to do this.
393
+ [1229.840 --> 1233.960] They then play the game, then we physically take them to the building and say, okay, now
394
+ [1233.960 --> 1236.800] that you have this map in your mind, can you find your way?
395
+ [1236.800 --> 1240.960] The important thing to keep in mind is that at no time do we tell them this is the goal
396
+ [1240.960 --> 1241.960] of the study.
397
+ [1241.960 --> 1246.120] We just simply say, this is a game, this is how you play it, and then we see what happens.
398
+ [1246.120 --> 1247.120] All right?
399
+ [1247.120 --> 1248.120] So a little bit more details about this.
400
+ [1248.120 --> 1250.840] As I said, this was done at the Carroll Center for the Blind in Newton, Massachusetts,
401
+ [1250.840 --> 1252.240] a little bit outside of Boston.
402
+ [1252.240 --> 1255.160] We chose this building here, which is the St. Paul building.
403
+ [1255.160 --> 1258.780] The reason why is because this is an administrative building, or at least it was at the time, and
404
+ [1258.780 --> 1261.720] the kids had no prior knowledge of the layout of the building there.
405
+ [1261.720 --> 1265.000] It's a two-story building with about 20 rooms.
406
+ [1265.000 --> 1269.440] It allows us to do sort of a real world scenario with two floors, looking at, for example, interactions
407
+ [1269.440 --> 1271.160] between floors and so on.
408
+ [1271.160 --> 1275.960] As I said, they don't have any prior experience with this building when they come to the campus.
409
+ [1275.960 --> 1277.720] They play the game, as I mentioned.
410
+ [1277.720 --> 1280.840] We never say, you know, memorize the layout or anything along those lines.
411
+ [1280.840 --> 1283.680] We just say, play the game, and then we take them physically there, and we have a series
412
+ [1283.680 --> 1287.280] of outcomes to see how well they're able to learn the routes.
413
+ [1287.280 --> 1288.280] All right.
414
+ [1288.280 --> 1289.840] So more details about how the game works.
415
+ [1289.840 --> 1293.360] We call this AbES, for audio-based environment simulator, or AbES.
416
+ [1293.360 --> 1297.240] We don't have as clever a name as AudioDoom, but this is how it works.
417
+ [1297.240 --> 1299.120] So you're all familiar with icons, right?
418
+ [1299.120 --> 1302.920] The wastepaper basket, for example, on your computer where you put documents you
419
+ [1302.920 --> 1303.920] don't like.
420
+ [1303.920 --> 1305.840] So we use earcons, exactly the same thing.
421
+ [1305.840 --> 1308.800] So to give you an example, here is a knocking sound.
422
+ [1308.800 --> 1312.880] So I think I can play this one here.
423
+ [1312.880 --> 1316.600] If you hear that sound, you know that that's the presence of the door.
424
+ [1316.600 --> 1321.120] If I hear that knocking sound in my right ear, that means the door is on my right side.
425
+ [1321.120 --> 1325.040] If I hear the knocking sound in my left ear, I know the door is on my left side.
426
+ [1325.040 --> 1328.280] If I hear it in front of me, the door is in front of me.
427
+ [1328.280 --> 1331.520] Keep in mind that also when I'm walking through the environment, and I hear that knocking
428
+ [1331.520 --> 1336.320] sound in my right ear, if I turn around 180 degrees and come back, I now need to hear
429
+ [1336.320 --> 1338.240] the knocking sound in my left ear, right?
430
+ [1338.240 --> 1342.840] So what the software is doing is keeping track of your egocentric heading and presenting
431
+ [1342.840 --> 1347.600] the sounds in a spatialized manner so that you can build the spatial map in your mind
432
+ [1347.600 --> 1349.840] as you interact with it.
433
+ [1349.840 --> 1354.520] So we use cardinal coordinates north, south, west, east so they can always kind of work
434
+ [1354.520 --> 1357.360] in that rigid cardinal coordinate system.
435
+ [1357.360 --> 1360.840] As I said, left ear, right ear, either with speakers or with headphones.
436
+ [1360.840 --> 1365.480] And every step they take is measured or scaled to an actual physical step in that building.
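To make the spatialization concrete: a minimal sketch of the kind of bookkeeping described above, panning an earcon between the left and right ear from the player's egocentric heading and scaling it by distance. This is purely illustrative and not the actual AbES code; the linear panning and loudness rules are assumptions.

```python
import math

def render_earcon(player_pos, heading_deg, source_pos, max_hearing_dist=10.0):
    # Return (left_gain, right_gain) for an earcon such as the door knock,
    # given the player's position, egocentric heading, and the sound source.
    dx = source_pos[0] - player_pos[0]
    dy = source_pos[1] - player_pos[1]
    dist = math.hypot(dx, dy)
    if dist >= max_hearing_dist:
        return 0.0, 0.0                          # too far away to hear at all

    # Bearing of the source, then its angle relative to where the player faces.
    bearing = math.degrees(math.atan2(dy, dx))
    rel = (bearing - heading_deg + 180.0) % 360.0 - 180.0

    # Simple linear pan: a positive relative angle puts the source on the left.
    pan = max(-1.0, min(1.0, rel / 90.0))
    loudness = 1.0 - dist / max_hearing_dist     # closer source = louder earcon
    return loudness * (1.0 + pan) / 2.0, loudness * (1.0 - pan) / 2.0

# Facing along +x, a door at (0, 3) sits to the player's left: left ear only.
print(render_earcon((0, 0), 0.0, (0, 3)))
# Turn around 180 degrees and the same door is now heard in the right ear.
print(render_earcon((0, 0), 180.0, (0, 3)))
```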
437
+ [1365.480 --> 1368.200] So here's a video of a child playing the game.
438
+ [1368.200 --> 1371.000] And remember, they don't see anything on the screen, right?
439
+ [1371.000 --> 1374.520] This is, I'm just simply showing you this so we can track what their actual movement
440
+ [1374.520 --> 1375.520] is.
441
+ [1375.520 --> 1376.520] So here they go.
442
+ [1376.520 --> 1378.520] Up on the door.
443
+ [1378.520 --> 1379.520] Open it.
444
+ [1379.520 --> 1380.520] Lock in.
445
+ [1380.520 --> 1383.360] That's the ringing sound, that means there's a jewel.
446
+ [1383.360 --> 1386.840] As they get closer and closer to the jewel, the loudness of the sound increases.
447
+ [1386.840 --> 1387.840] That allows them to get oriented.
448
+ [1387.840 --> 1390.040] They're getting oriented to where that sound is.
449
+ [1390.040 --> 1391.040] They get the jewel.
450
+ [1391.040 --> 1393.400] They got to go outside now.
451
+ [1393.400 --> 1394.400] Take it outside.
452
+ [1394.400 --> 1399.640] The red dots, as I mentioned, are the monsters moving around trying to catch you.
453
+ [1399.640 --> 1400.640] Yes.
454
+ [1400.640 --> 1401.640] An obstacle.
455
+ [1401.640 --> 1402.640] North.
456
+ [1402.640 --> 1406.640] Outside, way of louds points.
457
+ [1406.640 --> 1407.640] Back.
458
+ [1407.640 --> 1409.640] First, stairwell one.
459
+ [1409.640 --> 1410.640] West.
460
+ [1410.640 --> 1411.640] This is the stairwell.
461
+ [1411.640 --> 1420.640] As they climb the stairs, the pitch increases.
462
+ [1420.640 --> 1421.640] Second floor.
463
+ [1421.640 --> 1424.640] And they get to the top there.
464
+ [1424.640 --> 1428.640] And they keep exploring and exploring and exploring.
465
+ [1428.640 --> 1432.640] They play for a total of about an hour and 30 minutes, an hour and a half.
466
+ [1432.640 --> 1435.320] Completely engaged, as I said, they see nothing on the screen.
467
+ [1435.320 --> 1438.040] That's just for us to track them to see where they're heading.
468
+ [1438.040 --> 1440.640] And believe me, it's tough to get this out of their hands.
469
+ [1440.640 --> 1443.200] They're really, really engaged in playing this game.
470
+ [1443.200 --> 1444.200] Here's the study design.
471
+ [1444.200 --> 1446.720] As I said, we did this as a randomized clinical trial.
472
+ [1446.720 --> 1448.720] So we took all comers into the study.
473
+ [1448.720 --> 1450.000] I'll give you more details.
474
+ [1450.000 --> 1452.720] And we randomized them into three actual groups.
475
+ [1452.720 --> 1457.120] And before I show you the three groups, let me show you the breakdown of the various aspects.
476
+ [1457.120 --> 1461.880] You can play AbES, the video game, if you will, in directed navigation mode.
477
+ [1461.880 --> 1466.840] This means that I give you a start place and an end place and you learn the layout of
478
+ [1466.840 --> 1467.840] the building.
479
+ [1467.840 --> 1471.840] And what we did is we pair each child or each individual in the study with an orientation
480
+ [1471.840 --> 1477.440] mobility instructor who sits next to them and teaches them step by step the layout of
481
+ [1477.440 --> 1478.440] the building.
482
+ [1478.440 --> 1482.200] The same way, a virtual replication, if you will, of what they would do in an actual
483
+ [1482.200 --> 1485.360] O&M instruction of that building.
484
+ [1485.360 --> 1488.320] So they work one-on-one with an orientation mobility instructor.
485
+ [1488.320 --> 1492.880] So that's called structured learning, or the directed navigator arm of the study.
486
+ [1492.880 --> 1496.200] The other arm is the one, really, the intervention of interest, is the gaming arm.
487
+ [1496.200 --> 1499.160] Exactly as I said, there are monsters, there are jewels, and so on.
488
+ [1499.160 --> 1502.840] We simply explain to the child or the individual, this is how the game is played.
489
+ [1502.840 --> 1503.840] This is the goal.
490
+ [1503.840 --> 1505.600] You've got to find these jewels that are hidden throughout the building.
491
+ [1505.600 --> 1506.600] You've got to avoid the monsters.
492
+ [1506.600 --> 1509.200] If they catch you, they hide the jewels somewhere else.
493
+ [1509.200 --> 1512.800] And the more jewels you can find, the better it is in terms of your score.
494
+ [1512.800 --> 1516.320] We never tell them you have to explicitly learn the layout of the building.
495
+ [1516.320 --> 1519.680] We just simply say, this is how you play the game.
496
+ [1519.680 --> 1522.520] So now, as I said, this was a three-arm randomized clinical trial.
497
+ [1522.520 --> 1524.480] We had three arms in the study.
498
+ [1524.480 --> 1529.280] Some were enrolled or randomized to the directed navigator arm, again, working with an orientation
499
+ [1529.280 --> 1530.720] mobility instructor.
500
+ [1530.720 --> 1534.440] Some were playing the game arm and some were in the control group.
501
+ [1534.440 --> 1538.200] So this was a game, but the building had nothing to do with the target that we're trying
502
+ [1538.200 --> 1539.200] to get.
503
+ [1539.200 --> 1542.200] So we wanted to see the potential benefit of actually playing the game itself, even though
504
+ [1542.200 --> 1545.400] the overall target wasn't matching.
505
+ [1545.400 --> 1548.640] They go through, they play, as I said, for about an hour and a half.
506
+ [1548.640 --> 1552.720] Each one arm, we look at their proficiency of virtual navigating, so going from target
507
+ [1552.720 --> 1556.880] A to target B or target C to target D, and so on, we look whether or not they can do
508
+ [1556.880 --> 1558.520] it and how long they take.
509
+ [1558.520 --> 1560.360] We also then transfer them to the real world.
510
+ [1560.360 --> 1563.920] We take them to the physical building and see whether or not they can use those transfer
511
+ [1563.920 --> 1566.040] skills, what they learn in terms of their map.
512
+ [1566.040 --> 1569.280] And then the last thing we look at is what are called drop off tasks, which you're all
513
+ [1569.280 --> 1570.760] familiar with in the O&M world.
514
+ [1570.760 --> 1575.920] So in other words, instead of asking to just go A and B, C and D, E and F, we bring them
515
+ [1575.920 --> 1580.280] to various positions in the building and we say, where you're standing now, what's the
516
+ [1580.280 --> 1582.640] shortest way out of the building?
517
+ [1582.640 --> 1586.800] So we give them sort of a task to force them to manipulate the information in their mind.
518
+ [1586.800 --> 1588.640] So that's the drop off task in this.
519
+ [1588.640 --> 1593.360] So just to remind you, the comparison of direct navigation versus game tells us something
520
+ [1593.360 --> 1595.720] about the method of instruction.
521
+ [1595.720 --> 1599.240] The comparison of the control group versus the game tells us something about the gaming
522
+ [1599.240 --> 1600.240] context.
523
+ [1600.240 --> 1601.640] And that's why we have the three arms.
524
+ [1601.640 --> 1602.640] Okay?
525
+ [1602.640 --> 1604.040] So let's take a look at some of the data.
526
+ [1604.040 --> 1606.720] Before I do here, here's some more information.
527
+ [1606.720 --> 1611.200] The inclusion criteria: we took adults, anywhere aged between 18 and 45.
528
+ [1611.200 --> 1614.280] I'll show you a youth study that we did specifically right after that.
529
+ [1614.280 --> 1615.280] Male and female.
530
+ [1615.280 --> 1618.400] We documented legal blindness before the age of three.
531
+ [1618.400 --> 1623.200] And blindness of ocular cause, regardless of the level of visual acuity or residual
532
+ [1623.200 --> 1628.200] function, they were all blindfolded throughout the study as they played the game.
533
+ [1628.200 --> 1632.760] Outcome measures were things like the number of paths that they got correct, the time to target,
534
+ [1632.760 --> 1634.240] and also what we call creativity points.
535
+ [1634.240 --> 1638.280] In other words, how well were they able to find their way out, the quickest way possible?
536
+ [1638.280 --> 1640.600] I'll explain to you more specifically.
537
+ [1640.600 --> 1644.840] A qualitative thing, like the types of errors they made, things, for example, like strategies
538
+ [1644.840 --> 1649.480] employed, all that was documented as well to get a sense of what was happening.
539
+ [1649.480 --> 1656.480] In the first few months, we had a number of cases that were in the process of finding
540
+ [1656.480 --> 1659.480] the right way out of the process.
544
+ [1674.480 --> 1677.480] We had these stopping rules in there to try to get away from any concerns.
545
+ [1677.480 --> 1679.480] I can tell you that it actually never happened.
546
+ [1679.480 --> 1681.480] It was actually quite straightforward.
547
+ [1681.480 --> 1686.480] The analysis, again, just some details here in terms of how we were able to do that.
548
+ [1686.480 --> 1691.480] Just to give you some details, the software, the nice thing about it is it allows us to quantify a lot of things.
549
+ [1691.480 --> 1693.480] Here's the path that the individual took, right?
550
+ [1693.480 --> 1698.480] How much time they took in the various parts, all this can be quantified and entered into the spreadsheet
551
+ [1698.480 --> 1703.480] so we can break down the path and see areas that they struggled with versus which paths were more challenging than others.
552
+ [1704.480 --> 1707.480] Let's take a look at some of the quick data before I give you the group analysis.
553
+ [1707.480 --> 1714.480] Here are two individuals. One was in the directed navigator group and the other one was in the gamer group.
554
+ [1714.480 --> 1721.480] Notice that when we asked them to virtually navigate from the lobby of the building all the way through up the stairwell to the second floor of bedroom 6,
555
+ [1721.480 --> 1724.480] it took them about a minute and 42 seconds to do it.
556
+ [1724.480 --> 1728.480] The reason why I chose this path is it's actually the longest path in the building, physically.
557
+ [1728.480 --> 1735.480] They took about a minute, 42 seconds to do it. When we take them to the building and ask them to do the same thing, they can do it in a little bit shorter time.
558
+ [1735.480 --> 1740.480] Part of that is the physical translation and the second part is the fact that they're doing the task twice, obviously.
559
+ [1740.480 --> 1745.480] The thing that's noticeable is notice that the gamers did it equally well in about the same amount of time as well.
560
+ [1745.480 --> 1750.480] Whether you learn this as a directed navigator or you learn this as a gamer, you are able to do this.
561
+ [1750.480 --> 1755.480] Those individuals in the third arm, the control group, weren't able to transfer it all as you might imagine.
562
+ [1755.480 --> 1759.480] They got there and we say get to bedroom 6 and they're like, what's bedroom 6?
563
+ [1759.480 --> 1765.480] So the context aspect, obviously, was crucial. None of the individuals in that control group, that third arm, were able to do the task.
564
+ [1765.480 --> 1773.480] Here's what's interesting. Once they're at bedroom 6, for example, this particular location, we asked them, what's the quickest way out of the building?
565
+ [1773.480 --> 1784.480] The gamers always find the quickest way out. The directed navigators just retrace their path, the way that they came in, which is probably not surprising to you.
566
+ [1784.480 --> 1793.480] So that tells us something: the way that they manipulate the information, learning it through gaming versus how they do it through directed navigation, is probably different.
567
+ [1793.480 --> 1799.480] Even though they're very similar on the first task, getting from A to B, C to D, how they manipulate that information seems to be very different.
568
+ [1799.480 --> 1805.480] This is how we quantify it. If you can get the quickest way out, there was always three ways at least to get out of the building.
569
+ [1805.480 --> 1810.480] If you find the shortest route, we give you three points. If you find the second shortest route, we give you two points.
570
+ [1810.480 --> 1815.480] If you find the longest route, that's one. If you get lost, you can't find your way in the six minutes, you get zero.
571
+ [1815.480 --> 1818.480] Pretty simple. We call those creativity points.
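That scoring rule is simple enough to write down directly. A small sketch with hypothetical trials (the study's actual scoring scripts are not shown in the talk):

```python
def creativity_points(route_rank, lost=False):
    # 3 points for the shortest of the (at least three) exit routes,
    # 2 for the second shortest, 1 for the longest, 0 if the participant
    # could not find the way out within the six-minute limit.
    if lost:
        return 0
    return {1: 3, 2: 2, 3: 1}[route_rank]       # rank 1 = shortest exit route

# Hypothetical trials: a gamer finds the shortest exit, a directed navigator
# retraces the longest learned route, a control participant gets lost.
print(creativity_points(1))              # -> 3
print(creativity_points(3))              # -> 1
print(creativity_points(1, lost=True))   # -> 0
```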
572
+ [1818.480 --> 1824.480] So, the other thing, just very, very quick to notice. Notice how they're actually shorelining, very, very similar to what they actually do in the real world as well.
573
+ [1824.480 --> 1830.480] They use very, very much the same strategies in the virtual world that they actually do in the real world as well.
574
+ [1830.480 --> 1837.480] Okay, here's the hard data. There were 31, I have the number here, 31 subjects who participated in this study.
575
+ [1837.480 --> 1846.480] Again, I'm not showing you the control arm group because none of the people were able to do that. I'm doing a head-to-head comparison of those in the directed navigator arm versus those in the gaming arm.
576
+ [1846.480 --> 1851.480] And I separated them into early blind and late blind as well, into two groups.
577
+ [1851.480 --> 1857.480] Because we wanted to see whether prior visual imagery had somehow an effect on the possible performance as well.
578
+ [1857.480 --> 1866.480] And here's the data. So, in the early blind group, whether you learned through gaming, in red, or directed navigating, in blue, you had almost 90% correct.
579
+ [1866.480 --> 1871.480] You could find your route very, very easily in this regard. There was no statistical difference between the two groups.
580
+ [1871.480 --> 1878.480] In the case of late blind, very, very similar performance as well. So, take home message one, they are able to do this task quite well.
581
+ [1878.480 --> 1883.480] Almost 90%, anywhere between 80% to 90% performance or correctness, trying to find that route.
582
+ [1883.480 --> 1888.480] And it's the same whether you were a directed navigator or a gamer, and it was the same whether you're early blind versus late blind.
583
+ [1888.480 --> 1893.480] The drop off task, the creativity task, if you will, is where we saw the biggest difference.
584
+ [1893.480 --> 1902.480] The gamers, in general, were always able to find the shorter routes, whereas the directed navigators always chose the longer paths.
585
+ [1902.480 --> 1907.480] And this was true whether you were early blind or late blind, and this was statistically significant as well.
586
+ [1908.480 --> 1920.480] So, we also did a follow-up study in adolescents, because a lot of the concerns that we had is that, well, if you do the virtual navigation first, you're basically consolidating the path in your mind, and then when I take you to the physical place, you're just executing on that path.
587
+ [1920.480 --> 1925.480] So, we re-did the study design in a way to try to get specifically at that question.
588
+ [1925.480 --> 1931.480] And we did this specifically also in teens, between 14 and 18 years old. And this is how we designed it in this particular case.
589
+ [1931.480 --> 1943.480] We enrolled them, they played AbES, the video game, and there were two randomized arms. In the first case, you did the direct route, then you did the drop off task, and in the second arm, you did the drop off task, then you did the route.
590
+ [1943.480 --> 1948.480] So, there was no carryover effect of what you did on the first task versus the other. It was a wash in this.
591
+ [1948.480 --> 1955.480] And what we found in this case, very, very similar performance. The direct navigation route, task one, mean performance was about 70%.
592
+ [1955.480 --> 1964.480] Task two, the drop off task, mean performance was 97%. So, the gamers all did very, very well in this. And the other thing too is the mean shortest path was about 71%.
593
+ [1964.480 --> 1977.480] So, 71% of the time, they would typically take the shortest path. So, the gaming itself, whatever the task order was, seemed to indeed allow this potential benefit of transfer in the navigation task.
594
+ [1978.480 --> 1990.480] A couple other things to think about, which was interesting. In terms of their performance, we noticed that the more jewels they found, in other words, the better they played the game, the better their overall performance was. This is true for task one and task two.
595
+ [1990.480 --> 1996.480] So, the better you played the game, the better you actually learned, and the better you actually transferred into the real world.
596
+ [1996.480 --> 2008.480] In contrast, if you look at performance as a function of the number of years of O&M skill, there was no association. So, it wasn't biased by the fact that these kids may have had more independence or were better at orientation and mobility.
597
+ [2008.480 --> 2015.480] We found that there was no significant association between the two. Performance was actually directly correlated to how well you played the game.
598
+ [2016.480 --> 2024.480] So, various results to think about. So, first of all, as I said, it's about an 85% success rate when it comes to just going from one target to another, A to B.
599
+ [2024.480 --> 2035.480] There was a correlation between navigation success and gameplay. Other things that we noticed: alternate routes were typically found when you learned it through gaming as opposed to directed, didactic instruction.
600
+ [2035.480 --> 2046.480] And recall also, the gamers were never told to learn the layout. They basically learned the layout for free. We simply said, this is a game, this is how you play it, and they got the map for free just by interacting with the map.
601
+ [2046.480 --> 2057.480] And the argument that I would make to you is that those maps were more flexible. The way that they manipulated the information in their mind as a gamer was very, very different than the case of a direct navigator, which is what I'm summarizing here.
602
+ [2058.480 --> 2069.480] So, in both cases, they're able to form the map in their mind. They're able to transfer that to a real world setting. But what I would submit to you is that in the case of directed navigators, they were somewhat constrained, if you will, because of the didactic learning.
603
+ [2069.480 --> 2086.480] They could only use the information that they were taught. So, structured, basically what you would call route knowledge in terms of O&M, whereas the gamers' exploratory learning, self-discovery and pace allowed a certain cognitive flexibility that you didn't get in the case of directed navigation.
604
+ [2086.480 --> 2100.480] So, in the case of the gamers, you might want to call this survey knowledge, for example, in terms of O&M. So, big difference in terms of how they were able to perform, even though similar performances in some aspects, very different performances in other aspects as well.
605
+ [2100.480 --> 2110.480] So, how does the brain do this? At the end of the day, we stick everybody into the scanner. That's what we did. It's how we're getting to the scanner.
606
+ [2110.480 --> 2120.480] So, let's talk about the neuroscience behind this. That was the behavioral aspect. How do they do this? Now, so let me tell you a little background behind navigation and so on, and how the brain does this.
607
+ [2120.480 --> 2133.480] A lot of the first initial work about navigation and finding your way, interestingly enough, was done studying London taxi drivers. If you're going to London, you know that that's just a terrible place to drive. Imagine being a taxi driver.
608
+ [2133.480 --> 2146.480] And if you want to be licensed in the city of London to drive a taxi, you have to do something called the Knowledge, where you spend two years of intensive teaching, I should say, intensive learning, memorizing the map of London.
609
+ [2146.480 --> 2157.480] And they go through very, very interesting exercises where they have to close their eyes and mentally imagine the route that they would take. So, they close their eyes and the instructor will say, okay, you pick up a fare at Piccadilly, how do you bring them to Buckingham Palace?
610
+ [2157.480 --> 2168.480] So, like this, like this, I turn this right, go streets, streets, turn right, left, so on. And they memorize that map through this mental exercise all the time. They go two years of this before they actually get the license to drive.
611
+ [2168.480 --> 2182.480] Very clever. It was Eleanor Maguire who studied this in London, along with her students. And they had a very, very clever idea. They decided to take these taxi drivers and look at their hippocampus, which you know is the part of the brain responsible for memory and spatial learning.
612
+ [2182.480 --> 2201.480] And what they found is, not only was it larger in these London taxi drivers, it was actually correlated with the number of years that they drove the taxi. So, there's structural evidence that that part of the brain physically changed. They then compared that to Londoners who drive in London but weren't taxi drivers and there was absolutely no change over time.
613
+ [2201.480 --> 2212.480] So, their hippocampus was bigger than an age-matched Londoner who didn't drive a taxi, and it got bigger the longer you drove a taxi as well. So, interesting, associative evidence between the two.
614
+ [2212.480 --> 2227.480] The other thing that they figured out was the network of brain areas that was responsible for it. So, interesting pieces: the parietal cortex, you probably know, again responsible for spatial processing; the hippocampus, as I mentioned, in terms of memory and route learning.
615
+ [2227.480 --> 2239.480] The frontal cortex involved with executive decisions, right? And of course the visual cortex because you have to use visual information around you and integrate that. And how did they figure all that out? Using video games.
616
+ [2239.480 --> 2251.480] So, this is crazy taxi in London. They took London taxi drivers and asked them to play the video game in the scanner. And they identified all these areas that I mentioned, you know, parietal cortex, visual cortex, frontal areas and so on.
617
+ [2251.480 --> 2263.480] So, here's video games showing up again, allowing us to figure out what a taxi driver's brain looks like in terms of the map. So, with that in mind, we went back to our gamers and this is how we did it.
618
+ [2263.480 --> 2273.480] So, this is a sighted control in the FMRI scanner. He's looking through a mirror here, going through the video game like this. He's using headphones, looking through the mirror. He can see the screen projected behind him.
619
+ [2273.480 --> 2281.480] He's using a series of keys to move left and right and so on exactly the same way that you would with the keyboard of a laptop. And he's doing this visually.
620
+ [2281.480 --> 2290.480] We then bring in our blind participants and doing exactly the same thing. Obviously the monitor is on and we compare the two in terms of how they're able to do that.
621
+ [2290.480 --> 2301.480] What we found is sure enough a network of activation, very, very similar to what we saw in the taxi drivers as well. So, visual areas, auditory cortex is active, right? Because they're hearing the sounds.
622
+ [2301.480 --> 2315.480] Frontal areas, very, very important for executive decisions. We also saw activations and motor areas. This is because they're using the keys moving around. And sure enough, activation of visual cortex, as well as a parahippocampus, which you see right now.
623
+ [2315.480 --> 2322.480] So, all the areas that were identified in the London taxi study, we found the same network in our sighted participants as well.
624
+ [2322.480 --> 2335.480] What do you think the brain looks like in our ocular blind participants? The same. Yeah, exactly the same. Same areas, auditory motor, frontal, visual cortex, parahippocampus.
625
+ [2335.480 --> 2343.480] Again, the connectivity is all there. They're using the same network, even though they're not necessarily using that visual information the same way.
626
+ [2343.480 --> 2350.480] So, they're driving the same system through another portal, if you will. Let's look at this a little bit more systematically, all right?
627
+ [2350.480 --> 2357.480] So, for example, is all this brain activation and so on indeed related to the actual video game? Here's the blind individual.
628
+ [2357.480 --> 2364.480] We asked them to just simply listen to the instructions. Don't play the game, don't move, don't do anything. Just listen to the instructions.
629
+ [2364.480 --> 2372.480] And we have activation in auditory cortex. And we also have activation in sensory motor areas because they're mentally imagining doing that motion.
630
+ [2372.480 --> 2380.480] Here's the same individual just randomly walking. So, they're listening to the cues and we just say, you know, walk in a circle. Don't go anywhere.
631
+ [2380.480 --> 2390.480] And again, activation in auditory cortex and sensory motor areas as well. Now we asked the individual, we want you to walk from A to B, C to D in a goal-directed fashion.
632
+ [2390.480 --> 2399.480] And that's where you see everything light up, right? So, it's the engagement that does this, right? Again, going back to that earlier question, how do you turn the brain on?
633
+ [2399.480 --> 2405.480] Give it something really hard to do; it really likes that. That's how you create all that engagement.
634
+ [2405.480 --> 2413.480] Other things that were kind of interesting. We started looking at all our participants one by one and we saw a really, really wide variability in terms of activation.
635
+ [2413.480 --> 2421.480] We saw some people really, really locked in, all sorts of activation everywhere. Another individual, less so, right?
636
+ [2421.480 --> 2429.480] Now we saw other individuals that had really, really strong visual cortex activation. Other individuals? Basically nothing, or were using other areas of the brain.
637
+ [2429.480 --> 2446.480] So, we wanted to make sense of this. Why was everybody who was playing this game using different parts of their brain? Right? What was behind this? Is there some way we can take these individuals and associate that with their performance in terms of the game and how they were actually using information?
638
+ [2446.480 --> 2456.480] So, the way we started to do this, trying to associate brain activity with behavior, is we used a rationale. And this gets back to the earlier question about ROP that I promised I would not lose track of.
639
+ [2456.480 --> 2474.480] So, we used something called the Development of a Self-Report Measure of Environmental Spatial Ability. This was done by Mary Hegarty. And she developed a scale that was first developed for sighted individuals, translated for the blind, in terms of trying to figure out how independent an individual was in terms of their orientation, mobility, and navigation skills.
640
+ [2474.480 --> 2490.480] So, the question is like, I'm very good at giving directions. On a scale of say 1 to 8, 8 being very good, 1 being very poor. I have trouble understanding directions. Right? So, notice the negativity on this one. So, we ask it in both ways to make sure that we don't get biased by one polarity versus the other.
641
+ [2490.480 --> 2504.480] Or, I usually remember a new route after I have traveled it only once, on a scale of 1 to 8. And then it goes through analysis and it gives you an independence score. The higher the number, the more theoretically independent you are, and the more confident you are in terms of your travel.
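A rough sketch of how a self-report scale like this is typically scored: negatively worded items are reverse-coded so that a higher number always means more independence, and the items are then averaged. The item values and the averaging rule here are illustrative assumptions, not the published scoring procedure.

```python
def independence_score(responses):
    # responses: list of (rating_1_to_8, is_negatively_worded).
    # Negatively worded items ("I have trouble understanding directions")
    # are reverse-coded before averaging.
    adjusted = [(9 - r) if negative else r for r, negative in responses]
    return sum(adjusted) / len(adjusted)

# Hypothetical participant: very good at giving directions (7), has trouble
# understanding directions (2, reverse-coded to 7), remembers new routes (6).
print(round(independence_score([(7, False), (2, True), (6, False)]), 2))  # 6.67
```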
642
+ [2504.480 --> 2513.480] All right? Here are our nine participants in the FMRI study, rank ordered by their independence score from 1 to 9. And what do you notice?
643
+ [2513.480 --> 2525.480] Right away, the prematurity cases are in the lower half. So, just a first piece about this aspect of whether or not ROP is somehow related with spatial aspects. But I'll get back to that during the question period.
644
+ [2525.480 --> 2543.480] The main thing that I thought was quite interesting is that their independent score was really interestingly related with their primary mobility. So, the top independence were long cane users. The middle scores were using guide dogs. And the lower scores were all people, for example, using the ride program or having a driver taking them around and so on.
645
+ [2543.480 --> 2559.480] We don't ask this specifically. This actually came out as a function of the questionnaire that we asked. So, they were rank ordered and it seemed to parallel very, very much their primary mobility aids as well. So, we had a good sense that we were able to rank order these individuals in a real world setting.
646
+ [2559.480 --> 2576.480] When we do that, we take the scale. We put that into an equation based on brain activity. And we asked the software to tell us what part of the brain correlates the best with their independence. And the part of the brain was this area here called the temporal parietal junction or TPJ.
647
+ [2576.480 --> 2592.480] So, there's the correlation analysis. Brain activation as a function of their independence score. And of all the parts of the brain, this is the one that was intimately related to their independence level. And the reason why I think this is interesting is TPJ is the part of the brain that's normally active when we tell ourselves stories.
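The analysis being described, relating each participant's activation in a region to their independence score, boils down to a simple correlation. The numbers below are made up purely for illustration; they are not the study's data.

```python
from statistics import correlation  # Python 3.10+

# Hypothetical per-participant values: region-of-interest activation
# (e.g. TPJ beta weights) and the questionnaire-derived independence score.
tpj_activation = [0.2, 0.5, 0.4, 0.9, 1.1, 1.0, 1.4, 1.6, 1.8]
independence   = [2.1, 2.8, 3.0, 4.2, 4.9, 5.1, 6.0, 6.4, 7.2]

r = correlation(tpj_activation, independence)
print(f"Pearson r between TPJ activation and independence score: {r:.2f}")
```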
648
+ [2592.480 --> 2618.480] So, it's kind of interesting that those individuals who are the most independent in terms of navigation are probably the people who somehow are able to rehearse that story in their mind of where they're going. It wasn't necessarily the visual cortex, it wasn't necessarily frontal cortex. It's the part of the brain that kind of sits in the middle of everything: parietal, visual, temporal, and frontal. The nexus, if you will, of all of these brain areas. That was the part of the brain most correlated with it.
649
+ [2618.480 --> 2632.480] Okay, where are we heading now? So, we started very, very simply with this idea. We took one building, tried to map it, tried to learn it, and tried to get a sense of the neuroscience and all these aspects. Our goal now is to map out the entire campus, as you might imagine.
650
+ [2632.480 --> 2645.480] You can imagine going from one building to another to another to another, and we call this sort of audio Zelda, where you find the key in one building, which gives you the map to another building, which forces you to find the other building, to, again, sort of engage them and map out the whole entire campus.
651
+ [2645.480 --> 2652.480] You can also think that this might be something that you can put on a CD, or maybe make downloadable from the cloud, I should say.
652
+ [2652.480 --> 2661.480] And if you have a client coming to the Carroll Center, they can go and play the game on their own time, and once they arrive at Carroll, they already have a good idea of what the layout is of the campus.
653
+ [2661.480 --> 2665.480] That's our goal right now, in a particular study, and we are working towards that.
654
+ [2665.480 --> 2672.480] We've changed the platform, we're using something called Unity, which is a very, very simple way to program virtual environments.
655
+ [2672.480 --> 2683.480] The nice thing about it is that you can use virtually any platform you want. Android, Mac, Playstation, however they want to interact with the game, they can use whatever interface they want.
656
+ [2683.480 --> 2688.480] So you build it once, play it everywhere, sort of thing. So very, very fast. It allows us to create these environments.
657
+ [2688.480 --> 2694.480] Just showing you here, we call it HAGA now, for haptic audio game application, because we're adding tactile components to it as well.
658
+ [2694.480 --> 2699.480] Here's the indoor environment, there's the outdoor environment, here's a blind individual, you can see interacting with it.
659
+ [2699.480 --> 2704.480] So using the audio, as I mentioned before, and also using the rumble feature of the Xbox controller as well.
660
+ [2704.480 --> 2710.480] So when they hit an obstacle, they get the feedback, and as they strafe, the frequency changes as well.
661
+ [2710.480 --> 2716.480] So there's tactile feedback as well as audio. You also notice this device here, it's called the Falcon, the Novint Falcon.
662
+ [2716.480 --> 2722.480] This is a force feedback device, so they can also use that to knock on the door, to use it also as a virtual cane.
663
+ [2722.480 --> 2730.480] So all sorts of immersion between the audio as well as the haptic tactile as well, to give them that sense of the indoor and outdoor environment.
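A rough sketch of how that kind of rumble feedback might be driven. The mapping from obstacle distance and strafing to rumble strength and frequency is entirely an assumption here, not the actual HAGA implementation.

```python
def rumble_feedback(dist_to_obstacle, strafing, contact_dist=0.5, max_dist=3.0):
    # Return (strength 0..1, frequency_hz) for a controller rumble.
    # Hitting an obstacle gives a full-strength pulse, nearby walls a weaker
    # one, and strafing along a wall shifts the rumble frequency.
    if dist_to_obstacle <= contact_dist:
        strength = 1.0
    elif dist_to_obstacle >= max_dist:
        strength = 0.0
    else:
        strength = 1.0 - (dist_to_obstacle - contact_dist) / (max_dist - contact_dist)
    frequency_hz = 80.0 if strafing else 40.0    # assumed frequencies
    return strength, frequency_hz

print(rumble_feedback(0.3, strafing=False))   # bumping straight into a wall
print(rumble_feedback(1.2, strafing=True))    # sliding along a wall sideways
```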
664
+ [2730.480 --> 2739.480] And this is in the works right now. Other things we've played with: the Wiimote, as I mentioned, is an interesting way to use as a virtual cane.
665
+ [2739.480 --> 2746.480] So you get that rumble feature, but the problem with the Wiimote is that if you hit an obstacle, you can still put your arm through it.
666
+ [2746.480 --> 2751.480] So it starts vibrating. So the nice thing about the Falcon is that it actually gives you that force feedback.
667
+ [2751.480 --> 2755.480] The Wiimote is just an alarm, it doesn't really give you that same sort of tactile immersion.
668
+ [2755.480 --> 2760.480] But nonetheless, interesting, this was work that we did in Chile. A child learned the layout of a park.
669
+ [2760.480 --> 2768.480] We take the child there, and we track them, and using that sort of sense, they build the map of the layout of the park using the Wiimote as one way to do that.
670
+ [2769.480 --> 2777.480] Audiopolis, another interesting one. This is a fictional environment. Same sort of strategy. We have the child play again.
671
+ [2777.480 --> 2783.480] Audio as well as the Wiimote. The goal here is to chase a thief who is stealing things in this virtual city.
672
+ [2783.480 --> 2787.480] You have to find the thief; every building you find leaves clues for the next building you have to find.
673
+ [2787.480 --> 2794.480] And you kind of go through in a structured fashion. At the end, we give you just the blocks, and you have to rebuild the environment that you worked with.
674
+ [2794.480 --> 2798.480] And we again kind of get a sense of the child's spatial skills, how they put the environment together.
675
+ [2798.480 --> 2807.480] We've seen, for example, a lot of kids flip it, for example. They know the very short linear relationship between buildings, but globally, they have distortion.
676
+ [2807.480 --> 2813.480] So it allows us to kind of diagnose what aspect of their spatial representation seems to be impaired, if any at all.
677
+ [2813.480 --> 2822.480] Other things that we've done. We've now embarked with the Massachusetts Bay Transportation Authority, the MBTA. This runs the subway and the bus system in Boston.
678
+ [2822.480 --> 2830.480] If you're from Boston, you might recognize this. This is Park Street Station. And we have a situation now where we've modeled Park Street Station in our virtual environment,
679
+ [2830.480 --> 2836.480] hoping that this could be a similar strategy outside of the Carroll Center, now using this in public spaces as well.
680
+ [2836.480 --> 2845.480] We can use this as an offline survey. You can learn how to explore the station before you go. You can maybe use this as an online system as well.
681
+ [2845.480 --> 2853.480] You can get information when you're in the station. It's also a way of tracking. You can use this as a way of seeing what the most common exits people use are, for example.
682
+ [2853.480 --> 2865.480] You put that into a pool of data as well. Antonio Grimache is a student in my lab. He has Leber's and is a very proficient traveler on the bus and in the Boston subway system.
683
+ [2865.480 --> 2877.480] He had this very intriguing idea of developing a strip map. You're all familiar with what a strip map is. If you take the subway, it's basically a linear representation of all the stations in sequence, and where the connections are.
684
+ [2877.480 --> 2885.480] He's developing an app called StripMap, for exactly that purpose, for the Boston metro system, the subway system. It looks something like this.
685
+ [2885.480 --> 2899.480] The first thing you do, taking advantage of the tactile interface, the gestures, and the audio that you get from your iPhone, you can ask, for example, what direction do you want to head in, how do I get from one station to another, or find the best route between two stations.
686
+ [2899.480 --> 2906.480] You can choose the line that you want, again, just scrolling through in a strip map fashion, and it will calculate the optimal route for you.
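Route-finding over a map like this is, at bottom, a shortest-path search over stations. A minimal sketch with a tiny, hypothetical fragment of a station graph (not the real MBTA feed or the StripMap code):

```python
from collections import deque

# Hypothetical fragment of a subway graph: station -> adjacent stations.
adjacent = {
    "Park Street": ["Downtown Crossing", "Boylston", "Government Center"],
    "Downtown Crossing": ["Park Street", "South Station"],
    "Boylston": ["Park Street"],
    "Government Center": ["Park Street"],
    "South Station": ["Downtown Crossing"],
}

def best_route(start, goal):
    # Breadth-first search: the route with the fewest stops between stations.
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in adjacent[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(best_route("Boylston", "South Station"))
# -> ['Boylston', 'Park Street', 'Downtown Crossing', 'South Station']
```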
687
+ [2906.480 --> 2915.480] Then you get all sorts of feedback, for example, crowdsourced tips that people post: this is a good place to get a coffee, or this is a particularly complicated place.
688
+ [2915.480 --> 2924.480] Live schedules: we get an immediate feed from the MBTA, because of our association, of when the trains are coming, when the subway is coming, if there are any delays; they have it immediately on their phone.
689
+ [2924.480 --> 2930.480] Other accessibility services, like calling a taxi, and so on. This is in the works as well.
690
+ [2930.480 --> 2939.480] So, again, an original idea, neuroscience, now trying to translate that into real world applications that I think people can use.
691
+ [2939.480 --> 2945.480] The last example I'll give you, I think, a very, very interesting one. This isn't a project that I was involved with. This is my collaborators back in Chile.
692
+ [2945.480 --> 2955.480] This is a project called IDAC, which stands in Spanish for inclusion in the sciences. This was a project where they used video games to teach basic anatomy and biology to blind students.
693
+ [2955.480 --> 2962.480] In Chile, much like here in the United States, a lot of the kids are mainstreamed. They spend a lot of time in the public school system.
694
+ [2962.480 --> 2967.480] And what they do is they've invented a game where a blind individual has to play with two or three sighted classmates.
695
+ [2967.480 --> 2974.480] And the idea is to explore the world through these various rooms. And as they go through the world, they learn the anatomy together.
696
+ [2974.480 --> 2981.480] The sighted kids see each other as they move through, and the blind child is using the audio cues to navigate with them. So, they're playing the game together.
697
+ [2981.480 --> 2987.480] So, it's a combination of teamwork, as well as concrete tools, and various models and things which they use together.
698
+ [2987.480 --> 2995.480] Notice that things are labeled visually as well as in Braille. Also, all the material is exactly the same. The reading material, visual versus tactile Braille, is identical.
699
+ [2995.480 --> 3002.480] The idea is to integrate and force the teamwork, have the kids working together. Very, very interesting results. This is a long-term study with the Ministry of Education that they're doing.
700
+ [3002.480 --> 3007.480] So, I think a very clever idea, a very interesting approach to engage them.
701
+ [3007.480 --> 3015.480] So, final slides. Great, great, great quote that I love. You can discover more about a person in an hour of play than a year of conversation.
702
+ [3015.480 --> 3025.480] Plato said that. I think that's a really, really interesting idea. The fact that play somehow brings out our better nature, if you will, I think is a very interesting and intriguing one.
703
+ [3025.480 --> 3035.480] We solve problems in ways that we don't normally solve them in the real world. Great picture that I love. This is another one that's hanging in my office, from the National Sports Center for the Disabled.
704
+ [3035.480 --> 3047.480] You see this guide dog looking up at his master, presumably climbing this wall. I love this picture because we have the sense that individuals are only as good as their technology.
705
+ [3047.480 --> 3057.480] I think the idea is to go beyond that. I think the idea is to create independence, create confidence, create a level of functioning beyond the tools that are available.
706
+ [3057.480 --> 3067.480] That's what I think this picture symbolizes. And the last one that I want to share with you, just to close out, is straight out of Texas folklore, which I think is very, very fitting.
707
+ [3067.480 --> 3080.480] Let me thank a couple of individuals for this. As I said, my collaborator, a professor at the University of Chile; Aaron Connors, who was a research assistant working on this project; Mark Halco, who did the FMRI work; and of course the Carroll Center for the Blind, where we did a lot of the work.
708
+ [3080.480 --> 3090.480] And now my final slide that I want to share with you again, I think very, very appropriate and right out of Texas folklore. I love this. Clear eyes, full hearts can't lose.
709
+ [3090.480 --> 3099.480] Every time I work with a blind child, this is what I think about. It's exactly that. There's so much more to seeing than the health of the eyes and also the health of the brain.
710
+ [3099.480 --> 3107.480] I think you can experience the world in so many different ways. And I think that's our goal. And if we do that, keep our hearts full, can't lose.
711
+ [3107.480 --> 3109.480] So again, thank you very much to our audience.
transcript/allocentric_RSlc9IxdBw8.txt ADDED
@@ -0,0 +1,268 @@
1
+ [0.000 --> 15.420] Is it possible to understand everyone at a deep and meaningful level to get what really
2
+ [15.420 --> 19.440] matters to people no matter how different they are from you?
3
+ [19.440 --> 23.840] That proposition sounds a little absurd after all.
4
+ [23.840 --> 26.760] Human psychology is really complex.
5
+ [26.760 --> 32.200] Some people are abused as children, others are loved and supported.
6
+ [32.200 --> 37.840] The brain of an 18-year-old girl who sleeps with her cell phone is different than an 80-year-old
7
+ [37.840 --> 42.880] man who can't remember the names of his children.
8
+ [42.880 --> 48.320] There's no one way to understand everyone, no broad operating principle.
9
+ [48.320 --> 49.520] That's the conventional wisdom.
10
+ [49.520 --> 51.480] It makes perfect sense.
11
+ [51.480 --> 54.560] And yet, it's a myth.
12
+ [54.560 --> 58.400] A few years ago, I was watching TV, scenes from Afghanistan.
13
+ [58.400 --> 63.600] A group of teenage boys was standing in the back of a dusty pickup waving rifles.
14
+ [63.600 --> 70.320] And one boy wrapped in a white cloth with dazzling blue-green eyes, was staring directly
15
+ [70.320 --> 72.120] into the camera.
16
+ [72.120 --> 77.000] He looked intent, menacing, and that was the point of the piece.
17
+ [77.000 --> 83.280] We should be afraid because young men were passionate about killing Americans.
18
+ [83.280 --> 87.640] Let me tell you about another boy, my nephew, Rory.
19
+ [87.640 --> 93.920] At the time I saw this piece, Rory was a freshman in college at Harvard, but Rory's not full
20
+ [93.920 --> 94.920] of himself.
21
+ [94.920 --> 97.000] In a word, he's sweet.
22
+ [97.000 --> 102.200] He's not a hugger, but always hugs me because he knows that I am.
23
+ [102.200 --> 105.320] He bakes brownies with his young cousins.
24
+ [105.320 --> 108.320] He wants to be a doctor one day.
25
+ [108.320 --> 113.240] I'm proud of Rory, and I can't imagine a kid more different than that one from Afghanistan.
26
+ [113.240 --> 120.160] Except, at a fundamental level, these two boys are exactly the same.
27
+ [120.160 --> 123.160] They've chosen their respective paths.
28
+ [123.160 --> 125.240] Join the Taliban.
29
+ [125.240 --> 128.320] Go to Harvard for the same internal reasons.
30
+ [128.320 --> 131.000] They both would like respect.
31
+ [131.000 --> 134.720] Everyone knows that when you go to Harvard, people look up to you for the rest of your
32
+ [134.720 --> 136.040] life.
33
+ [136.040 --> 141.800] And when you join the Taliban, little kids look on in awe as you drive by in that dusty
34
+ [141.800 --> 143.360] vehicle.
35
+ [143.360 --> 147.640] They also want community belonging.
36
+ [147.640 --> 153.160] Rory's got close friends, the men of Harvard, but no closer, I bet, than the men of the
37
+ [153.160 --> 155.080] Taliban.
38
+ [155.080 --> 160.680] And lastly, and probably most important to both, they want to make a difference in their
39
+ [160.680 --> 162.320] worlds.
40
+ [162.320 --> 165.720] They want to help those they love.
41
+ [165.720 --> 171.760] What's amazing and horrifying is that one will learn to be a doctor and the other
42
+ [172.440 --> 174.760] will learn to kill.
43
+ [174.760 --> 181.960] It's true that human behavior is amazingly varied and complex, but at the level of motivation,
44
+ [181.960 --> 188.840] at the level of what drives us to do all those different things, we're actually identical.
45
+ [188.840 --> 194.720] There's a formula for understanding why we do, what we do, and once you get it, you get
46
+ [194.720 --> 195.720] it.
47
+ [195.720 --> 199.080] There are 30 basic human motivations.
48
+ [199.080 --> 201.440] Let me give you a quick primer.
49
+ [201.440 --> 203.280] Here's the obvious, the physical.
50
+ [203.280 --> 204.280] We want to survive.
51
+ [204.280 --> 206.120] We need air, food, and water.
52
+ [206.120 --> 212.800] There's a second category of relational needs that help us understand how to balance
53
+ [212.800 --> 215.400] our self-interest and that of the community.
54
+ [215.400 --> 219.360] We all want to receive care, understanding, love.
55
+ [219.360 --> 224.440] But at the same time, we want to give our love to help others in our lives.
56
+ [224.440 --> 228.320] And then there's a third category of needs.
57
+ [228.320 --> 231.400] We need to call aspirational or spiritual.
58
+ [231.400 --> 232.680] We want to grow.
59
+ [232.680 --> 236.280] We all crave adventure and beauty.
60
+ [236.280 --> 240.280] I'm not going to go through the whole list because everything on the list you're already
61
+ [240.280 --> 242.080] familiar with.
62
+ [242.080 --> 247.560] But don't then mistake this for that old high school sociology lesson where the teacher
63
+ [247.560 --> 253.560] says human beings have needs, and if they're not fulfilled, unhappiness and war.
64
+ [253.560 --> 254.560] That's all true.
65
+ [254.560 --> 258.480] But I'm not here to make that macro sociological point.
66
+ [258.480 --> 265.520] I'm here to help you understand the micro, the human individual in any given moment.
67
+ [265.520 --> 272.680] What drives your mother, your spouse, your boss, human behavior no matter how seemingly
68
+ [272.680 --> 281.120] bizarre or mundane is designed internally to fulfill one or some of the common needs.
69
+ [281.120 --> 286.760] If you want to understand what really matters to a person at the level of deep motivation,
70
+ [286.760 --> 292.080] ask which of the common needs have they been pursuing?
71
+ [292.080 --> 294.520] Here's a story from my personal life.
72
+ [294.520 --> 298.640] My wife Shelley sometimes gets upset with me for not cleaning the dishes to her exacting
73
+ [298.640 --> 300.360] standard.
74
+ [300.360 --> 306.840] I can see her there as I'm cleaning over my left shoulder, pretending to read the mail,
75
+ [306.840 --> 308.400] watching me.
76
+ [308.400 --> 312.680] Now I could easily conclude that's a little weird.
77
+ [312.680 --> 315.880] She might be OCD.
78
+ [315.880 --> 321.280] But these brilliant observations don't get me very far.
79
+ [321.280 --> 327.880] If I want to understand my wife and I do, I ask a basic question, what needs are driving
80
+ [327.880 --> 328.880] her?
81
+ [328.880 --> 330.520] Shelley's a busy woman.
82
+ [330.520 --> 332.480] She teaches high school full-time.
83
+ [332.480 --> 334.320] She drives our kids everywhere.
84
+ [334.320 --> 336.920] She calls my mom to say hi.
85
+ [336.920 --> 338.480] And I love you.
86
+ [338.480 --> 340.480] Excuse me.
87
+ [340.480 --> 344.520] I got a little emotional.
88
+ [344.520 --> 346.360] She calls my mom to say hi.
89
+ [346.360 --> 348.600] And I love you.
90
+ [348.600 --> 352.280] Clean dishes neatly stacked and put away.
91
+ [352.280 --> 356.640] They fulfill in her the common needs for order and rest.
92
+ [356.640 --> 359.760] Finally, some peace of mind.
93
+ [359.760 --> 364.440] And there's one more huge need motivating her dishwasher spying.
94
+ [364.440 --> 371.040] When I leave stuff on the dishes, like that big piece of vermicelli hanging off the
95
+ [371.040 --> 377.840] back that's so super obvious to her, after she said, Larry, do a good job this time?
96
+ [377.840 --> 380.320] This time, please do a good job.
97
+ [380.320 --> 383.240] She concludes, I don't care about her.
98
+ [383.240 --> 389.080] If you want to understand everyone, including Shelley, the outside world matters to us only
99
+ [389.080 --> 393.880] because we're trying to fulfill needs internally.
100
+ [393.880 --> 396.760] She doesn't really care about clean dishes.
101
+ [396.760 --> 402.360] At depth, she, like everyone else, wants respect to be loved.
102
+ [402.360 --> 407.800] Human behavior is complex, but human motivation is actually simple.
103
+ [407.800 --> 411.360] We seek these common needs and nothing else.
104
+ [411.360 --> 415.960] I didn't myself discover that common needs drive human behavior.
105
+ [415.960 --> 420.660] The idea was proposed around 50 years ago by the psychologist, Carl Rogers, and then
106
+ [420.660 --> 426.080] further developed by the extraordinary peacemaker, Marshall Rosenberg.
107
+ [426.080 --> 431.960] I came across their concepts around 15 years ago and they made good sense to me.
108
+ [431.960 --> 436.480] So I began to implement them in my personal life to decode family and friends.
109
+ [436.480 --> 438.200] And I was understanding people.
110
+ [438.200 --> 439.960] I was intrigued.
111
+ [439.960 --> 441.840] But I was also skeptical.
112
+ [441.840 --> 449.760] I asked Marshall Rosenberg, why 30 needs and not 755?
113
+ [449.760 --> 453.440] And he said, oh, it could be 30 or 755.
114
+ [453.440 --> 457.280] The need to survive, for example, could be further broken down into the needs to not
115
+ [457.280 --> 461.680] walk off a cliff or to not be eaten by predators.
116
+ [461.680 --> 464.200] 30 is just a useful level of aggregation.
117
+ [464.200 --> 465.480] I thought, okay, that's a good answer.
118
+ [465.480 --> 467.360] But what about this, Marshall?
119
+ [467.360 --> 470.880] What are needs from a neurological perspective?
120
+ [470.880 --> 471.880] What's happening in the brain?
121
+ [471.880 --> 474.360] How do they actually motivate us?
122
+ [474.360 --> 478.040] And here, Marshall said, oh, that's simple.
123
+ [478.040 --> 483.120] Needs are life force, human life force.
124
+ [483.120 --> 488.160] And I thought, whoa, that's not science at all.
125
+ [488.160 --> 493.320] And so I spent the next two years meeting with neuropsychologists and speaking with evolutionary
126
+ [493.320 --> 497.640] biologists and reading cognitive journals with footnotes.
127
+ [497.640 --> 503.640] And I eventually concluded, this need stuff is grounded in solid science.
128
+ [503.640 --> 512.600] And because research shows that if you mention the word neuroscience or brain in a big talk,
129
+ [512.600 --> 516.280] it's a thousand times more likely to go viral.
130
+ [516.280 --> 520.440] Let me say, this is neuroscience.
131
+ [520.440 --> 521.920] Brain science.
132
+ [521.920 --> 523.720] Neuro-N brain.
133
+ [523.720 --> 524.720] Neuro-Brain.
134
+ [525.720 --> 528.560] Now, I'm not a scientist.
135
+ [528.560 --> 532.520] I'm a lawyer, a mediator, and a writer.
136
+ [532.520 --> 538.760] But being a layperson has allowed me to unravel the science, to translate it away from chemicals
137
+ [538.760 --> 544.040] like oxytocin and dopamine, and into what I believe is a useful narrative.
138
+ [544.040 --> 550.040] And so here's what I believe is going on in the human brain with needs.
139
+ [550.040 --> 555.920] The human unconscious evaluates the world, telling us whether it's dangerous or friendly.
140
+ [555.920 --> 557.640] That's its job.
141
+ [557.640 --> 562.040] Once it reaches its conclusion, it's got to motivate the whole system, including the conscious
142
+ [562.040 --> 564.840] mind, to do something about it.
143
+ [564.840 --> 566.120] How?
144
+ [566.120 --> 571.720] If it concludes that the world's dangerous, we naturally feel fear or anxiety.
145
+ [571.720 --> 573.840] We try to get less of what caused it.
146
+ [573.840 --> 579.720] If it concludes the world is friendly, we naturally feel happy or excited, and we try
147
+ [579.720 --> 581.240] to get more.
148
+ [581.240 --> 590.040] But, and this is the key, how does the unconscious determine what's dangerous and what's friendly?
149
+ [590.040 --> 593.120] It's not just left up to each of us individually.
150
+ [593.120 --> 599.920] Rather, the criteria upon which we evaluate the world is born into you and born into me
151
+ [599.920 --> 602.040] and born into all of us.
152
+ [602.040 --> 604.400] Those are the human needs.
153
+ [604.400 --> 611.920] Those specific criteria were honed through evolution because they allow us to survive,
154
+ [611.920 --> 616.480] to relate to other people and ultimately to make more people.
155
+ [616.480 --> 618.520] Am I being respected?
156
+ [618.520 --> 622.080] Am I making a contribution in the world?
157
+ [622.080 --> 625.320] Does she think I'm cute?
158
+ [625.320 --> 632.560] If so, pleasure, get more of that, if not pain, change the world.
159
+ [632.560 --> 638.880] It took me several years to unravel the science in a way that made narrative sense to me.
160
+ [638.880 --> 643.680] And yet, in that time, I actually stopped caring so much about what was happening in the
161
+ [643.680 --> 644.840] brain.
162
+ [644.840 --> 650.480] I was using this and understanding people in a way that I didn't think was possible.
163
+ [650.480 --> 652.600] I was seeing their hearts.
164
+ [652.600 --> 653.600] It worked.
165
+ [653.600 --> 657.600] And really, that's what counts.
166
+ [657.600 --> 660.080] I'd like to tie this together with a, with a story.
167
+ [660.080 --> 662.080] As I said, I'm a mediator.
168
+ [662.080 --> 666.520] When people are at war, they come to me and I help them work it out.
169
+ [666.520 --> 671.720] Not too long ago, I was visited by a couple that had already been divorced.
170
+ [671.720 --> 675.720] The ex-wife Sophia said a precious object had gone missing.
171
+ [675.720 --> 677.800] What was it?
172
+ [677.800 --> 682.440] Sophia had never met her father and her mother died when she was a little girl.
173
+ [682.440 --> 685.120] She was raised by her grandmother.
174
+ [685.120 --> 690.640] And in her grandmother's house hung this large painting, painted by Sophia's grandmother
175
+ [690.640 --> 693.320] of Sophia's mother.
176
+ [693.320 --> 697.680] Sophia used to look at this painting when she was a little girl and imagine herself holding
177
+ [697.680 --> 703.600] her mom's hand and kissing her mom's cheek.
178
+ [703.600 --> 708.720] Sophia's grandmother, the painter, died a few weeks before the mediation.
179
+ [708.720 --> 712.640] And in her final hours, she signed the picture.
180
+ [712.640 --> 719.400] Sophia described this with tears and finally looked to her ex-husband and she said, Frank
181
+ [719.400 --> 722.000] took the picture.
182
+ [722.000 --> 727.360] Frank, when are you going to stop trying to punish me for the affair?
183
+ [727.360 --> 732.760] I looked at the guy and his face was cold as stone and I thought, whoa.
184
+ [732.760 --> 735.880] People come to see me because I can help solve their problems.
185
+ [735.880 --> 738.240] But I'm kind of a one trick pony.
186
+ [738.240 --> 740.320] The thing is I have this excellent trick.
187
+ [740.320 --> 744.000] I can help them understand each other's hidden motivations.
188
+ [744.000 --> 747.720] And I knew something that Sophia didn't.
189
+ [747.720 --> 750.800] Frank wasn't trying to punish her.
190
+ [750.800 --> 755.160] People often think revenge is a human motive.
191
+ [755.160 --> 758.120] But hurting another person is not a human need.
192
+ [758.120 --> 760.040] Now, how do I know?
193
+ [760.040 --> 765.000] Well, here's a trick I developed a few years ago that I find very useful.
194
+ [765.000 --> 769.960] If you ever think that somebody is motivated by something that doesn't personally give
195
+ [769.960 --> 773.600] you pleasure, you actually haven't found their motivation.
196
+ [773.600 --> 775.480] Go deeper.
197
+ [775.480 --> 778.400] I don't get pleasure from hurting other people.
198
+ [778.400 --> 781.160] If it's not in me, it's not a common need.
199
+ [781.160 --> 784.720] And if it's not a common need, it's not a human motivation.
200
+ [784.720 --> 786.040] Go deeper.
201
+ [786.040 --> 789.400] Revenge is pursued to fulfill another need.
202
+ [789.400 --> 790.800] But what?
203
+ [790.800 --> 793.880] It varies, but very often it's a need for understanding.
204
+ [793.880 --> 800.440] If I hurt you, you will understand at the level of personal pain, at the level of intense
205
+ [800.440 --> 804.560] personal suffering, what you did to me.
206
+ [804.560 --> 807.040] You'll finally get it.
207
+ [807.040 --> 809.560] This wasn't the case for Frank.
208
+ [809.560 --> 816.120] My theory that he had taken the picture in order to be understood for the pain of the
209
+ [816.120 --> 817.720] affair was wrong.
210
+ [817.720 --> 819.680] I often guess wrong.
211
+ [819.680 --> 824.200] But that I was guessing and without blame, convinced him to share something else.
212
+ [824.200 --> 826.480] His eyes welled with tears.
213
+ [826.480 --> 831.200] And he looked over at his ex-wife, Sophia, and he said, Soph.
214
+ [831.200 --> 834.040] She had become my grandmother too.
215
+ [834.040 --> 837.200] She was all that I had.
216
+ [837.200 --> 841.080] You were all that I had.
217
+ [841.080 --> 844.480] Frank was an orphan too, just like Sophia.
218
+ [844.480 --> 851.880] He took the painting to fulfill a common human need of connection.
219
+ [851.880 --> 854.480] Hurting Sophia was never the point.
220
+ [854.480 --> 859.120] Sophia moved next to Frank on the couch, and she wrapped her arms around him, and they
221
+ [859.120 --> 862.280] sobbed together for ten minutes.
222
+ [862.280 --> 863.280] And I cried too.
223
+ [863.400 --> 864.560] I had ten minutes.
224
+ [864.560 --> 866.720] What was I going to do?
225
+ [866.720 --> 875.720] Frank ultimately returned the painting to Sophia, and she dug up a trove of old photos of
226
+ [875.720 --> 880.320] Frank with her grandmother, so that he could remember his family.
227
+ [880.320 --> 881.320] Understand what happened here.
228
+ [881.320 --> 887.120] We didn't make the common and easy mistake of thinking that revenge is a motive.
229
+ [887.120 --> 892.640] Instead we went to the source of all human motivation to the common needs.
230
+ [892.640 --> 897.920] And Sophia understood that Frank had simply needed connection, human connection, and in
231
+ [897.920 --> 900.680] particular to her grandmother, she got it.
232
+ [900.680 --> 905.880] She could feel it, and then the magic, and then solutions.
233
+ [905.880 --> 911.760] Now many people, including some of this audience, are wary of understanding others, and especially
234
+ [911.760 --> 913.480] during conflict.
235
+ [913.480 --> 919.760] The thought goes like this, if I understand the reasons you did what you did, I'm basically
236
+ [919.760 --> 922.560] saying you were justified.
237
+ [922.560 --> 925.520] Understanding seems like condoning.
238
+ [925.520 --> 930.000] And for this reason, people often say don't go inside the mind of a terrorist, don't
239
+ [930.000 --> 931.760] get them.
240
+ [931.760 --> 936.920] To get a terrorist is to legitimate terrorism.
241
+ [936.920 --> 938.760] It's to be an apologist.
242
+ [938.760 --> 942.960] And for this reason, it was suggested to me that I drop from my talk the piece about
243
+ [942.960 --> 950.720] the Taliban teenager, because then people might think I condone terrorism.
244
+ [951.720 --> 955.600] Let me make something perfectly clear.
245
+ [955.600 --> 960.680] Understanding reasons is different than condoning.
246
+ [960.680 --> 964.560] I've learned through thousands of mediations.
247
+ [964.560 --> 971.440] Understanding is a power to shape the world far greater than any sword or gun.
248
+ [971.440 --> 975.240] Understanding is exactly how you create the world that you want.
249
+ [975.240 --> 980.720] I began this talk asking, is it possible to understand everyone at a deep and meaningful
250
+ [980.720 --> 983.720] level, even those that are different from you?
251
+ [983.720 --> 995.520] And the answer is yes, when your teenage daughter asks you for that hair straightener, and
252
+ [995.520 --> 1002.280] just one week after you bought her that hair crimper, and she's standing at the top of
253
+ [1002.280 --> 1008.600] the stairs with this crazy crimped hair.
254
+ [1008.600 --> 1014.640] Screaming, you just don't understand. This is how you understand.
255
+ [1014.640 --> 1016.440] What is she needing?
256
+ [1016.440 --> 1026.680] She wants to be accepted, liked, the desire to be accepted, to be liked, is in you, is
257
+ [1027.680 --> 1035.000] in everyone in this audience, and so you can understand exactly what she feels.
258
+ [1035.000 --> 1040.680] And that alone will transform your relationship, and then come the solutions, even if it's
259
+ [1040.680 --> 1048.360] only I see you, my beautiful little girl, I get you.
260
+ [1048.360 --> 1052.960] There's a formula for understanding why we do what we do, and once you get it, you get
261
+ [1052.960 --> 1055.280] it.
262
+ [1055.280 --> 1058.800] Human behavior is complex, but human motivation is simple.
263
+ [1058.800 --> 1061.880] We seek the common needs, and nothing else.
264
+ [1061.880 --> 1064.480] We seek the common needs, and nothing else.
265
+ [1064.480 --> 1069.000] The common needs are human motivation.
266
+ [1069.000 --> 1075.400] Learn this language of the unconscious, this language of the heart, and you'll improve
267
+ [1075.400 --> 1080.080] every relationship in your life.
268
+ [1080.080 --> 1081.080] Thank you.
transcript/allocentric_T6INaET_Lnw.txt ADDED
@@ -0,0 +1,6 @@
1
+ [0.000 --> 5.520] Ego scanning is a video fast forwarding interface to quickly find events of interest from first-person videos.
2
+ [5.760 --> 10.880] The interface features an elastic timeline that emphasizes Ego-centric cues based on users' inputs.
3
+ [11.120 --> 15.520] Playback speeds are adaptively changed in emphasized scenes to show corresponding events.
4
+ [15.760 --> 21.280] Computer vision techniques automatically extract Ego-centric cues such as movements, hands, and person.
5
+ [21.840 --> 25.840] Our user study compared Ego-scanning with a simple fast forwarding interface.
6
+ [25.840 --> 29.840] As a result, we confirmed 38% faster average scanning speeds.
transcript/allocentric_UTiFshG_xuk.txt ADDED
@@ -0,0 +1,3 @@
1
+ [0.000 --> 8.000] Here's a challenge. Tell me the opposite of these five words in order. Always staying, take me down.
2
+ [9.200 --> 12.000] Always staying, take me down.
3
+ [12.800 --> 28.000] Never going, like, give, you down, up. Never going, give you up.
transcript/allocentric_UpupNS6aF7o.txt ADDED
@@ -0,0 +1,864 @@
1
+ [0.000 --> 3.400] Okay, thanks for joining us.
2
+ [3.400 --> 7.840] This is a thousand brains hangout with Jeff Hawkins and Subutai Ahmad.
3
+ [7.840 --> 9.040] We're all from Numenta.
4
+ [9.040 --> 13.400] I'm Matt Taylor and we're going to talk about our most recent paper, the framework for
5
+ [13.400 --> 16.080] intelligence and do some Q&A.
6
+ [16.080 --> 20.160] So if you're just watching this, there's a discussion on our forum that's linked in
7
+ [20.160 --> 24.440] the show description down there and there's also a link to the paper down there if you
8
+ [24.440 --> 25.440] want to read it.
9
+ [25.440 --> 29.360] So that should provide all the context that we need for this discussion.
10
+ [29.360 --> 33.920] So that being said, we have questions that are already on the forum that we could go
11
+ [33.920 --> 37.760] through or we could just kind of roll through people that are joined right now since they've
12
+ [37.760 --> 38.760] been waiting.
13
+ [38.760 --> 41.360] So unless do you have anything you want to start off with?
14
+ [41.360 --> 43.360] I don't know.
15
+ [43.360 --> 47.360] You didn't know that before.
16
+ [47.360 --> 49.040] That was the last question.
17
+ [49.040 --> 50.040] It's a Q&A.
18
+ [50.040 --> 52.920] So let's go straight to Q&A then.
19
+ [52.920 --> 58.400] I think Paul was here first and Paul, do you actually have anything Samana unmute you?
20
+ [58.400 --> 59.400] I don't know if I can.
21
+ [59.400 --> 60.400] I'm not sure if I can.
22
+ [60.400 --> 61.400] You might have unmute you.
23
+ [61.400 --> 62.400] Hi, hi, hi, guys.
24
+ [62.400 --> 63.400] Yeah.
25
+ [63.400 --> 65.400] So yeah, I didn't have any specific questions.
26
+ [65.400 --> 67.400] I mean, they just came to listen to the question.
27
+ [67.400 --> 68.400] All right.
28
+ [68.400 --> 69.400] All right.
29
+ [69.400 --> 70.400] I was an easy one.
30
+ [70.400 --> 71.400] Thanks, Paul.
31
+ [71.400 --> 75.400] I didn't have any, I didn't have any specific questions.
32
+ [75.400 --> 78.400] I mean, they just came to listen to the question.
33
+ [78.400 --> 79.400] All right.
34
+ [79.400 --> 80.400] All right.
35
+ [80.400 --> 81.400] I was an easy one.
36
+ [81.400 --> 82.400] Thanks, Paul.
37
+ [82.400 --> 83.400] Thank you.
38
+ [83.400 --> 84.400] Thank you.
39
+ [88.400 --> 89.400] Thank you.
40
+ [89.400 --> 90.400] Thank you.
41
+ [90.400 --> 91.400] Thank you.
42
+ [91.400 --> 92.400] Thank you.
43
+ [92.400 --> 93.400] Thank you.
44
+ [93.400 --> 94.400] Thank you.
45
+ [94.400 --> 95.400] Thank you.
46
+ [95.400 --> 96.400] Thank you.
47
+ [96.400 --> 97.400] Thank you.
48
+ [97.400 --> 98.400] Thank you.
49
+ [98.400 --> 99.400] Thank you.
50
+ [99.400 --> 100.400] Thank you.
51
+ [100.400 --> 101.400] Thank you.
52
+ [101.400 --> 102.400] Thank you.
53
+ [102.400 --> 103.400] Thank you.
54
+ [103.400 --> 104.400] Thank you.
55
+ [104.400 --> 105.400] Thank you.
56
+ [105.400 --> 106.400] Thank you.
57
+ [106.400 --> 107.400] Thank you.
58
+ [107.400 --> 108.400] Thank you.
59
+ [108.400 --> 109.400] Thank you.
60
+ [109.400 --> 110.400] Thank you.
61
+ [110.400 --> 111.400] Thank you.
62
+ [111.400 --> 112.400] Thank you.
63
+ [112.400 --> 113.400] Thank you.
64
+ [113.400 --> 114.400] Thank you.
65
+ [114.400 --> 115.400] Thank you.
66
+ [115.400 --> 116.400] Thank you.
67
+ [116.400 --> 117.400] Thank you.
68
+ [117.400 --> 118.400] Thank you.
69
+ [118.400 --> 119.400] Thank you.
70
+ [119.400 --> 120.400] Thank you.
71
+ [120.400 --> 121.400] Thank you.
72
+ [121.400 --> 122.400] Thank you.
73
+ [122.400 --> 123.400] Thank you.
74
+ [123.400 --> 124.400] Thank you.
75
+ [124.400 --> 125.400] Thank you.
76
+ [125.400 --> 126.400] Thank you.
77
+ [126.400 --> 127.400] Thank you.
78
+ [127.400 --> 128.400] Thank you.
79
+ [128.400 --> 129.400] Thank you.
80
+ [129.400 --> 130.400] Thank you.
81
+ [130.400 --> 131.400] Thank you.
82
+ [131.400 --> 132.400] Thank you.
83
+ [132.400 --> 133.400] Thank you.
84
+ [133.400 --> 134.400] Thank you.
85
+ [134.400 --> 135.400] Thank you.
86
+ [135.400 --> 136.400] Thank you.
87
+ [136.400 --> 137.400] Thank you.
88
+ [137.400 --> 138.400] Thank you.
89
+ [138.400 --> 139.400] Thank you.
90
+ [139.400 --> 140.400] Thank you.
91
+ [140.400 --> 141.400] Thank you.
92
+ [141.400 --> 142.400] Thank you.
93
+ [142.400 --> 143.400] Thank you.
94
+ [143.400 --> 144.400] Thank you.
95
+ [144.400 --> 145.400] Thank you.
96
+ [145.400 --> 146.400] Thank you.
97
+ [146.400 --> 147.400] Thank you.
98
+ [147.400 --> 148.400] Thank you.
99
+ [148.400 --> 149.400] Thank you.
100
+ [149.400 --> 150.400] Thank you.
101
+ [150.400 --> 151.400] Thank you.
102
+ [151.400 --> 152.400] Thank you.
103
+ [152.400 --> 153.400] Thank you.
104
+ [153.400 --> 154.400] Hello.
105
+ [154.400 --> 155.400] Hello.
106
+ [155.400 --> 156.400] Hello.
107
+ [156.400 --> 157.400] Are you guys still there?
108
+ [157.400 --> 158.400] Is anybody here?
109
+ [158.400 --> 159.400] Yeah.
110
+ [159.400 --> 160.400] Thank you.
111
+ [160.400 --> 161.400] Thank you.
112
+ [161.400 --> 162.400] That was weird.
113
+ [162.400 --> 163.400] I don't know what happened.
114
+ [163.400 --> 164.400] It completely kicked us out.
115
+ [164.400 --> 165.400] I had to.
116
+ [165.400 --> 167.400] It signed me out of my Google account.
117
+ [167.400 --> 168.400] Okay.
118
+ [168.400 --> 169.400] We're back.
119
+ [169.400 --> 170.400] Okay.
120
+ [170.400 --> 172.400] Let's jump right back in with, uh,
121
+ [172.400 --> 174.400] Hey, Marty, you joined it pretty soon after that.
122
+ [174.400 --> 176.400] You have anything you want to ask?
123
+ [176.400 --> 177.400] No, not particularly.
124
+ [177.400 --> 179.400] I'm not that deep into the series yet.
125
+ [179.400 --> 180.400] Okay.
126
+ [180.400 --> 182.400] Constantine, you're up.
127
+ [182.400 --> 184.400] You want to talk about anything?
128
+ [184.400 --> 185.400] Hello.
129
+ [185.400 --> 186.400] Uh, yes.
130
+ [186.400 --> 187.400] Can you hear me?
131
+ [187.400 --> 188.400] Yes.
132
+ [188.400 --> 189.400] Yes.
133
+ [189.400 --> 190.400] Okay.
134
+ [190.400 --> 191.400] Thank you very much.
135
+ [191.400 --> 192.400] So, uh,
136
+ [192.400 --> 194.400] So my plan now is to
137
+ [194.400 --> 199.400] actually working on my master thesis on an idea with HTM.
138
+ [199.400 --> 200.400] And, uh,
139
+ [200.400 --> 203.400] basically, the core idea is to try and do anomaly detection.
140
+ [203.400 --> 206.400] But in the cross correlation space of multiple metrics.
141
+ [206.400 --> 212.400] And there, I think that it could be a very useful analogy with the object,
142
+ [212.400 --> 215.400] with the object recognition work.
143
+ [215.400 --> 217.400] And, uh, in that context,
144
+ [217.400 --> 222.400] I wonder if you think about how this theory can help us to learn new objects.
145
+ [222.400 --> 224.400] Because especially in the 2017 paper,
146
+ [224.400 --> 230.400] the process of learning a new object was kind of final stated with extrinsic information.
147
+ [230.400 --> 233.400] So, you know,
148
+ [233.400 --> 236.400] basically, they're like the model that now we're looking at the new model,
149
+ [236.400 --> 238.400] at the new object, learn this.
150
+ [238.400 --> 245.400] But the model on its own could not understand that it was transitioning from an already known model to a new model now.
151
+ [245.400 --> 246.400] Yeah.
152
+ [246.400 --> 250.400] So, is that a question about how do we handle continuous learning?
153
+ [250.400 --> 251.400] Yeah.
154
+ [251.400 --> 253.400] So I think in the column's paper,
155
+ [253.400 --> 256.400] we explicitly told the system whenever we're learning a new object.
156
+ [256.400 --> 259.400] We didn't, uh, smoothly transition like we did in the temporal memory.
157
+ [259.400 --> 260.400] Yeah.
158
+ [260.400 --> 261.400] Um, you want to channel?
159
+ [261.400 --> 265.400] I think we've explored with a few different, um, ideas there.
160
+ [265.400 --> 267.400] But I don't think we've really settled on anything.
161
+ [267.400 --> 269.400] I think the whole theory has been moving quite fast.
162
+ [269.400 --> 273.400] And, um, so we haven't really focused on the continuous learning aspect so much.
163
+ [273.400 --> 275.400] But there's some general ideas.
164
+ [275.400 --> 278.400] And even on the forum, there are some ideas about, um,
165
+ [278.400 --> 280.400] you know, when you're learning an object,
166
+ [280.400 --> 282.400] um, if you're learning fairly slowly,
167
+ [282.400 --> 284.400] and then you detect, uh,
168
+ [284.400 --> 287.400] and, but you get a lot of unpredictable behavior.
169
+ [287.400 --> 292.400] Um, you can use that as a way of, um, kind of triggering,
170
+ [292.400 --> 295.400] uh, the system, or, or, uh, notifying the system.
171
+ [295.400 --> 297.400] Somehow that, uh, there's a new object.
172
+ [297.400 --> 299.400] The same way that in the temporal memory,
173
+ [299.400 --> 301.400] a lot of bursting kind of triggers, uh,
174
+ [301.400 --> 302.400] learning of new sequences,
175
+ [302.400 --> 304.400] you could potentially do something like that with, uh,
176
+ [304.400 --> 306.400] with the columns paper as well.
177
+ [306.400 --> 308.400] But I wouldn't say we really explored it or simulated this.
178
+ [308.400 --> 309.400] Yeah.
179
+ [309.400 --> 310.400] Too much.
180
+ [310.400 --> 313.400] But surprise is a good signal always for, for learning.
181
+ [313.400 --> 316.400] Uh, there's another idea we've explored too,
182
+ [316.400 --> 319.400] uh, that we haven't really taken very far.
183
+ [319.400 --> 325.400] This is that, um, that any individual network can be doing
184
+ [325.400 --> 329.400] inference, meaning trying to recognize existing, uh,
185
+ [329.400 --> 332.400] objects and learning, uh, simultaneously,
186
+ [332.400 --> 335.400] sort of on alternate phases of a cycle.
187
+ [335.400 --> 337.400] And there's a lot of evidence for this in some,
188
+ [337.400 --> 340.400] some parts of the brain where you literally every, uh,
189
+ [340.400 --> 343.400] phase of a cycle, you, you,
190
+ [343.400 --> 345.400] the, the neuron switch between, um,
191
+ [345.400 --> 348.400] assuming that you're learning something new and then assuming
192
+ [348.400 --> 350.400] that you're trying to infer something, uh,
193
+ [350.400 --> 351.400] that was already learned.
194
+ [351.400 --> 354.400] Um, that sounds crazy, but
195
+ [354.400 --> 356.400] there really is actually a lot of evidence for that.
196
+ [356.400 --> 358.400] So, um, that's a, we've,
197
+ [358.400 --> 360.400] we've taken this problem,
198
+ [360.400 --> 363.400] the concept that you asked about and we sort of put it on the
199
+ [363.400 --> 364.400] back burner for now.
200
+ [364.400 --> 367.400] Um, that may not be happy for you,
201
+ [367.400 --> 370.400] but, uh, because we feel that there's a solution there,
202
+ [370.400 --> 372.400] we don't know it with a subitized suggestion,
203
+ [372.400 --> 374.400] but, um,
204
+ [374.400 --> 377.400] but we're trying to get to sort of the basic mechanisms
205
+ [377.400 --> 379.400] down first before we decide exactly,
206
+ [379.400 --> 382.400] okay, is this continuing to learning happening exactly?
207
+ [382.400 --> 384.400] How is it happening and under what conditions,
208
+ [384.400 --> 386.400] maybe it's different in different parts of the brain?
209
+ [386.400 --> 389.400] Um, I guess that's a punting on that question in some sense.
210
+ [389.400 --> 390.400] We have some ideas,
211
+ [390.400 --> 392.400] but we just haven't, uh,
212
+ [392.400 --> 394.400] um, it hasn't been our main focus.
213
+ [394.400 --> 395.400] Yeah.
214
+ [395.400 --> 397.400] But those are two main ideas.
215
+ [397.400 --> 399.400] Yeah, one, one area that I like to, um,
216
+ [399.400 --> 401.400] that I think also applies here is when are you talking about
217
+ [401.400 --> 404.400] multiple modalities or things in context of other things, right?
218
+ [404.400 --> 408.400] So I can know that it's something new when I'm seeing a lot of,
219
+ [408.400 --> 410.400] the anomalies across all of the,
220
+ [410.400 --> 412.400] all of the different inputs, you know,
221
+ [412.400 --> 413.400] you sort of, but,
222
+ [413.400 --> 415.400] you know, whereas versus an anomaly just in one area.
223
+ [415.400 --> 417.400] Yeah, I think that was Subutai's comment.
224
+ [417.400 --> 418.400] Yeah, that's a good idea.
225
+ [418.400 --> 419.400] Yeah.
226
+ [419.400 --> 420.400] I think it's a, you know,
227
+ [420.400 --> 422.400] people in the farm often ask for ideas of projects to try.
228
+ [422.400 --> 425.400] This would be a great one to try it.
229
+ [425.400 --> 427.400] There's been a lot of problem.
230
+ [427.400 --> 430.400] Yeah, you worked out.
231
+ [430.400 --> 431.400] Yeah.
232
+ [431.400 --> 435.400] I actually think that you allude a little bit into this,
233
+ [435.400 --> 439.400] uh, with, uh, with framework paper with a thousand brains paper.
234
+ [439.400 --> 443.400] When you, when you talk about the re-anchoring of grid cells,
235
+ [443.400 --> 445.400] maybe we could, uh,
236
+ [445.400 --> 448.400] rephrase the relevant question as: what exactly triggers the re-anchoring
237
+ [448.400 --> 449.400] of grid cells?
238
+ [449.400 --> 453.400] What makes an environment actually new?
239
+ [453.400 --> 454.400] Yeah.
240
+ [454.400 --> 457.400] That is, that's another perfect example,
241
+ [457.400 --> 460.400] although the mechanisms there may be different than, for example,
242
+ [460.400 --> 463.400] mechanisms and sequence memory.
243
+ [463.400 --> 465.400] Um, what, there was a,
244
+ [465.400 --> 466.400] um,
245
+ [466.400 --> 468.400] I have some thoughts about that.
246
+ [468.400 --> 469.400] I think it was.
247
+ [469.400 --> 470.400] I think it was.
248
+ [470.400 --> 471.400] I mean,
249
+ [471.400 --> 474.400] one of the things that's generally accepted.
250
+ [474.400 --> 476.400] There was a more kind of,
251
+ [476.400 --> 477.400] what was that?
252
+ [477.400 --> 478.400] There's a problem.
253
+ [478.400 --> 479.400] Yeah, you're good.
254
+ [479.400 --> 481.400] Uh, one of the things that,
255
+ [481.400 --> 482.400] uh, there's,
256
+ [482.400 --> 484.400] there's literature about this in,
257
+ [484.400 --> 486.400] grid cells in the entorhinal cortex,
258
+ [486.400 --> 488.400] uh, related to,
259
+ [488.400 --> 490.400] um, it's, you know,
260
+ [490.400 --> 494.400] the grid, the grid cells are driven by several different factors.
261
+ [494.400 --> 496.400] Uh, one factor is, of course,
262
+ [496.400 --> 498.400] they're, they're updated by, um,
263
+ [498.400 --> 499.400] motor commands.
264
+ [499.400 --> 501.400] So that's how they're known to do it.
265
+ [501.400 --> 503.400] But they're also, um,
266
+ [503.400 --> 504.400] they're anchored.
267
+ [504.400 --> 506.400] It's believed they're anchored by sensory input.
268
+ [506.400 --> 507.400] And so one of the theories is,
269
+ [507.400 --> 510.400] and one of the things that we sort of subscribe to is that place cells.
270
+ [510.400 --> 512.400] Uh, once you've learned,
271
+ [512.400 --> 514.400] once you've learned a connection between,
272
+ [514.400 --> 518.400] uh, place cells and, uh, grid cells,
273
+ [518.400 --> 521.400] that the place cells are constantly re-anchoring the grid cells.
274
+ [521.400 --> 523.400] It's, it's, it's constantly happening.
275
+ [523.400 --> 526.400] You're getting input to place cells and it's constantly trying to re-anchor the grid cells.
276
+ [526.400 --> 528.400] Um, but then there's a question of,
277
+ [528.400 --> 529.400] well, what if it can't?
278
+ [529.400 --> 532.400] And then how does it decide to pick a random anchor?
279
+ [532.400 --> 535.400] Um, that, that question is unknown still,
280
+ [535.400 --> 538.400] uh, still to, uh, in that literature.
281
+ [538.400 --> 539.400] So it's a good question.
282
+ [539.400 --> 541.400] I don't think we have a good answer for it yet.
283
+ [541.400 --> 542.400] Right.
284
+ [542.400 --> 546.400] I'm going to roll down the line because I know we'll talk a bit more about that when we go down the form questions.
285
+ [546.400 --> 547.400] So I mean,
286
+ [547.400 --> 550.400] are constantly having to follow up to that?
287
+ [550.400 --> 551.400] Uh, well,
288
+ [551.400 --> 552.400] I slide follow up in the same, uh,
289
+ [552.400 --> 553.400] area is that, uh,
290
+ [553.400 --> 554.400] that in the native paper,
291
+ [554.400 --> 555.400] you mentioned that during learning,
292
+ [555.400 --> 558.400] the location layer doesn't update in response to sensory input.
293
+ [558.400 --> 560.400] So I was simply wondering there if, uh,
294
+ [560.400 --> 562.400] this separation of learning and inference,
295
+ [562.400 --> 564.400] is that all neurologically plausible,
296
+ [564.400 --> 566.400] or simply an artifact of the model.
297
+ [566.400 --> 567.400] What was it?
298
+ [567.400 --> 572.400] What was the premise, that the location layer doesn't update with input?
299
+ [572.400 --> 573.400] In the paper,
300
+ [573.400 --> 574.400] you say during learning,
301
+ [574.400 --> 578.400] the location layer doesn't update in response to sensory input.
302
+ [578.400 --> 580.400] Whereas during inference,
303
+ [580.400 --> 581.400] it does.
304
+ [581.400 --> 582.400] Yeah.
305
+ [582.400 --> 583.400] Yeah.
306
+ [583.400 --> 584.400] Yeah.
307
+ [584.400 --> 585.400] Yeah.
308
+ [585.400 --> 589.400] Because it's learning the connections between the sensory and the right learning.
309
+ [589.400 --> 590.400] You're basically doing, uh,
310
+ [590.400 --> 591.400] you're relying on the,
311
+ [591.400 --> 592.400] the location layer being,
312
+ [592.400 --> 594.400] um, updated by most of the data.
313
+ [594.400 --> 595.400] I'm just wondering,
314
+ [595.400 --> 596.400] um, updated by motor commands.
315
+ [596.400 --> 597.400] And then you're constantly learning,
316
+ [597.400 --> 599.400] well, what's the new sensory input for that location?
317
+ [599.400 --> 600.400] What's the new sensory input for that location?
318
+ [600.400 --> 601.400] What's the new sensory input for that location?
319
+ [601.400 --> 602.400] Um,
320
+ [602.400 --> 603.400] it's however this separation
321
+ [603.400 --> 605.400] between learning and inferencing at all levels.
322
+ [605.400 --> 606.400] Yeah,
323
+ [606.400 --> 607.400] it's a slow.
324
+ [607.400 --> 608.400] Oh, sorry.
325
+ [608.400 --> 610.400] Interrupt you on the.
326
+ [610.400 --> 612.400] The separation between learning and inference.
327
+ [612.400 --> 613.400] Is that logical in the brain?
328
+ [613.400 --> 614.400] Is that biologically plausible?
329
+ [614.400 --> 615.400] Uh, totally.
330
+ [615.400 --> 616.400] Yeah.
331
+ [616.400 --> 618.400] Uh,
332
+ [618.400 --> 619.400] uh,
333
+ [619.400 --> 620.400] uh,
334
+ [620.400 --> 621.400] yeah.
335
+ [621.400 --> 622.400] Uh,
336
+ [622.400 --> 623.400] yeah.
337
+ [623.400 --> 624.400] Well, as I said,
338
+ [624.400 --> 627.400] that, that idea that is occurring on different oscillatory cycles
339
+ [627.400 --> 628.400] comes from reasonably observation,
340
+ [628.400 --> 629.400] that,
341
+ [629.400 --> 630.400] uh,
342
+ [630.400 --> 631.400] uh,
343
+ [631.400 --> 632.400] that,
344
+ [632.400 --> 633.400] that there's evidence,
345
+ [633.400 --> 634.400] empirical evidence,
346
+ [634.400 --> 636.400] and that is actually what's going on.
347
+ [636.400 --> 637.400] We didn't make that up.
348
+ [637.400 --> 639.400] I wouldn't have ever thought of that.
349
+ [639.400 --> 640.400] Um,
350
+ [640.400 --> 641.400] so, uh,
351
+ [641.400 --> 642.400] there is,
352
+ [642.400 --> 645.400] the evidence that cells go through these two different phases,
353
+ [645.400 --> 646.400] and different,
354
+ [646.400 --> 647.400] uh, actual activation cycles,
355
+ [647.400 --> 648.400] um,
356
+ [648.400 --> 649.400] two different,
357
+ [649.400 --> 650.400] like,
358
+ [650.400 --> 651.400] in the interrionic culture,
359
+ [651.400 --> 657.760] cycle. Now we don't know if that's happening in the neocortex, but there's strong evidence
360
+ [657.760 --> 663.920] for that. That comes from empirical evidence. That's not something we made up.
361
+ [663.920 --> 671.240] Thanks, Constantine. I know Chris was, it was, had a question too. You're ready, Chris.
362
+ [671.240 --> 678.240] I'm really here to listen. I'm just getting started on this whole thing. I tripped across
363
+ [678.240 --> 689.280] a, I got rye a light, which are really bad tailor. We didn't know that. So I tripped across
364
+ [689.280 --> 695.760] your November hangout and that led me to your forums and that got me, it looks like a good
365
+ [695.760 --> 700.600] place to ask a lot of questions. I have. I have a question today. Is that right? Well,
366
+ [700.600 --> 709.840] I'm not sure relative to this discussion. We talked before the hangout about the fovea
367
+ [709.840 --> 716.040] in Matti. I think there was an interesting mix between what something you were doing.
368
+ [716.040 --> 719.960] And I can ask my question, but it's not actually relevant to these grid cells and I don't
369
+ [719.960 --> 723.560] want to distract from it. Well, I'll pay our surveys. I'll get a talk
370
+ [723.560 --> 728.120] in a minute earlier. I think everybody who goes on this journey of understanding, I think
371
+ [728.120 --> 732.280] of the brain, at some point realizes, holy cow, it's all a simulation of reality and
372
+ [732.280 --> 736.040] we live in a simulation. And you can't become a cryptographer of that. We were talking about
373
+ [736.040 --> 743.000] that earlier. But I mean, his thing was, you know, the eye, the fovea has such a small field
374
+ [743.000 --> 748.920] of view. And as far as the whole thousand brains idea, how does that relate to when you've
375
+ [748.920 --> 752.920] got such a small field of view and things around it that get such abstract detail about
376
+ [752.920 --> 758.840] what's going on? How does the thousand brains model, you know, make that work?
377
+ [758.840 --> 764.680] Well, first of all, you know, it's a small field of view, but that of course gets expanded
378
+ [764.680 --> 771.240] to represent a huge part of V1 in the brain. Oh, yeah. So, so you can say it's a small field
379
+ [771.240 --> 775.320] of view. It's like your thumb, right? Yeah, something like that. And it gives up a huge
380
+ [775.320 --> 781.240] amount. But then it expands and occupies the majority of V1, which is just one of the
381
+ [781.240 --> 786.920] largest regions in the cortex of the human. So it's not like it's a small thing. It's a huge
382
+ [786.920 --> 794.280] amount of processing going on with the fovea. And I don't see, there's no inherent reason to
383
+ [796.040 --> 800.680] change the thousand brain theory or the frameworks theory at all related to that.
384
+ [801.720 --> 806.200] The framework does not rely on that. It doesn't rely on a fovea. There's other animals that don't
385
+ [806.200 --> 811.080] have a fovea. Rats have vision without a fovea. They have opposing eyes, largely opposing eyes.
386
+ [811.560 --> 816.760] The theory doesn't really care about those, that those differences. All that says is you have a
387
+ [816.760 --> 822.600] sensory array. And the sensory array is observing different parts of an object. The fovea wouldn't
388
+ [822.600 --> 827.480] be doing that. I'm looking at the camera in front of me right now, which it's actually occupying
389
+ [827.480 --> 833.800] a small part of my visual field. But it's I see its details. And so the different parts of my
390
+ [833.800 --> 838.040] fovea would be attending to the different parts of that camera. And they would, as a group,
391
+ [838.040 --> 843.800] the different columns would be voting on what that object is. So it just says that I wouldn't be very
392
+ [843.800 --> 850.840] good. If this object, the camera, I could probably like, you know, huge part of my visual field,
393
+ [850.840 --> 854.120] that might make it difficult to see what it is. But the fact that it's small and it occupies a
394
+ [854.120 --> 859.400] small part of my visual field, it allows me to do object recognition at a long distance.
395
+ [860.520 --> 864.600] So I can see things that are actually, you know, just certain details and things that are very far
396
+ [865.560 --> 868.360] away. I don't know who wrote that. Oh, Falco says Braille readers can read, you know,
397
+ [868.360 --> 872.200] a whole book with one finger. Yeah, yeah. That's right. It's a finger sort of like the
398
+ [872.200 --> 876.040] equivalent of a fovea. You know, we have a very high acuity on the tip of our finger. You develop
399
+ [876.040 --> 881.640] that. Yeah. Well, actually, you know, physiologically, your body is built with the high acuity on
400
+ [881.640 --> 889.160] your tip of your finger. And Braille readers actually develop, you know, apparently they learn how
401
+ [889.160 --> 892.360] to discern those patterns. You're not a Braille reader. You try to discern those patterns, you know,
402
+ [892.360 --> 896.520] it's really hard. Yeah, I know. But that's sort of like saying, if I'm not a Russian speaker and
403
+ [896.520 --> 899.880] I hear Russian, I don't understand it. And it would be like, if I never really learned to see,
404
+ [899.880 --> 903.640] and also, like I'd vision, I really can't see either. So you have to train the system,
405
+ [903.640 --> 907.960] has to train to recognize these patterns. But my point is that the finger is like a fovea,
406
+ [907.960 --> 911.480] because it's an area of high acuity. Maybe you can discern very small differences in the
407
+ [911.480 --> 915.640] area represented by your finger in the cortex as well, it's a very large compared to say the back
408
+ [915.640 --> 921.640] your hand or something like that. Since I don't get this opportunity much, to ask
409
+ [921.640 --> 927.560] people as knowledgeable as you: is it fair to say I've never actually seen through my eyes,
410
+ [927.560 --> 934.360] I've only ever seen the updated model in my head? Well, it depends what you mean by C.
411
+ [935.080 --> 939.880] Obviously, the vision comes through your eyes and those of the patterns are going to the brain.
412
+ [939.880 --> 949.480] I think the general consensus of brain researchers is that your perception of the world is really the
413
+ [949.480 --> 953.400] model that you have into the world. So you've built an internal model of the world, that's what the
414
+ [953.400 --> 960.280] framework is all about. How do you build models of the world? And what you perceive, what you sense
415
+ [960.280 --> 965.960] is really based on that model. And that doesn't mean it's wrong, it doesn't mean it's fake,
416
+ [965.960 --> 970.600] it just means it's a model of what you've experienced and under certain conditions we can see
417
+ [970.600 --> 975.480] different things because we have different models of the world. Right. Thank you for that. Yeah.
418
+ [976.520 --> 984.040] Thanks Chris for your question. Okay, Hey Falco, you ready? You got something to say? He's been
419
+ [984.040 --> 992.360] on our forum a lot lately. Yeah, I have a thousand questions for that. Okay, prioritize.
420
+ [992.840 --> 998.440] Yeah, sure. Sure. Okay, there's one I posted on the forum and maybe it's been answered on all the
421
+ [998.440 --> 1004.440] parts I'm not always up to date. One of the things I don't really understand is that
422
+ [1005.880 --> 1011.160] when you look at an object, you obviously have to make a model. And when you touch an object,
423
+ [1011.160 --> 1018.440] you have to make a model. You need this model to navigate the world. But the Neocortex does a lot
424
+ [1018.440 --> 1026.040] of other things, very abstract things. And somehow I don't understand how this hardware that you
425
+ [1026.040 --> 1034.920] have there to make these models, how it is also useful, for instance, for language. And I suppose
426
+ [1034.920 --> 1041.320] you can put words together and they go next to each other or they represent something abstract.
427
+ [1041.800 --> 1048.120] But it seems to me like you have a tremendous amount of hardware in every cortical column.
428
+ [1048.920 --> 1057.320] And it's very useful for looking and perhaps even hearing things. I orientate yourself based on
429
+ [1057.320 --> 1065.160] what you hear. But for a lot of other things, I don't really understand how this is useful or
430
+ [1066.120 --> 1071.240] well, I guess you understand what I mean. But I don't say it's not right. I don't say it's wrong.
431
+ [1072.200 --> 1076.520] But it doesn't make much sense to me and I don't get it. Yeah.
432
+ [1078.840 --> 1082.680] I can take that. You want to take that? All right. We tried to address this a little bit in the
433
+ [1082.680 --> 1090.920] frameworks paper, but it is confusing. So let's just review a few facts. One of the facts is
434
+ [1091.000 --> 1096.280] everywhere you look in the neocortex, the architecture is extremely similar. There are differences,
435
+ [1096.600 --> 1103.480] but the similarities are remarkable. And the differences are more tweaks, apparently, than
436
+ [1104.200 --> 1110.520] fundamental differences. And many parts in the neocortex, if you look at them in great detail,
437
+ [1110.520 --> 1116.120] you cannot discern what they're doing or how they're different than another part. And so the one
438
+ [1116.200 --> 1120.440] of the basic themes of neuroscience is that there is this common circuitry that does everything.
439
+ [1121.880 --> 1126.280] That's true of language areas. Really, language areas. They look remarkably similar as the touch
440
+ [1126.280 --> 1131.960] of the vision areas and so on. It's incredible. And so for a long time, it's been
441
+ [1131.960 --> 1138.200] believed that there's some underlying computation that's done everywhere that somehow applies to
442
+ [1138.200 --> 1143.800] all the different things in your cortex does. There's little evidence that yes, that's not true.
443
+ [1144.680 --> 1150.280] And so now we've attacked it from two different parts. We've been looking at it from how it is
444
+ [1150.280 --> 1156.120] that we build models of the world through sense and vision and hearing at a low level. And the
445
+ [1156.120 --> 1162.920] basic, what we deduce there is that the cortex does this by building models using spaces.
446
+ [1163.560 --> 1168.360] And they can be three-dimensional spaces or two-dimensional spaces. And we assign features
447
+ [1168.360 --> 1171.160] all up through those spaces. And we move through those spaces. And we move our fingers,
448
+ [1171.160 --> 1178.520] or move our eyes, or move our bodies. And so if that explains, that's a very powerful idea,
449
+ [1178.520 --> 1185.320] modeling through movement and building structures of almost like CAD models of things,
450
+ [1185.320 --> 1192.680] like these locations and features and spaces. That's a very powerful idea. And even beyond what
451
+ [1192.680 --> 1196.440] we've written so far, it looks like you can explain the vast majority of the circuitry in the
452
+ [1196.440 --> 1200.040] neocortex. So we didn't really get into that in the frameworks paper, but the next paper is going
453
+ [1200.040 --> 1205.560] to get into that. So now we say to ourselves, okay, well, if that's true, how would it apply
454
+ [1205.560 --> 1209.800] to language? And how would it apply to other things? At the same time, there are people who've been,
455
+ [1209.800 --> 1213.800] and we referenced some of these in the framework's paper. There are people who've been looking at
456
+ [1214.920 --> 1219.480] fMRI data, which suggests that there are grid cells in the neocortex, while people do
457
+ [1219.480 --> 1225.880] quote high-level tasks thinking about things. And so they not like how do I sense what something
458
+ [1225.960 --> 1230.920] is or see something is, but when I'm thinking about birds or I'm thinking about sort of mental
459
+ [1230.920 --> 1236.200] cognitive task, they find evidence that grid cells underlying that. So there's some empirical
460
+ [1236.200 --> 1240.200] evidence saying, yes, some high-level thought processes are also somehow built on grid cells,
461
+ [1240.200 --> 1245.640] and that we mentally mapped out things in the world in a space. So the classic example I think
462
+ [1245.640 --> 1250.440] was the Doeller paper, where they had people thinking about birds and the different attributes
463
+ [1250.440 --> 1253.160] of birds. And when you're thinking about the different attributes of birds, they have the evidence
464
+ [1253.160 --> 1257.560] that you're assigning them to location spaces or locations in a space. You're not aware of
465
+ [1257.560 --> 1261.960] you're doing this, but that's how you categorize data about something. Birds of
466
+ [1261.960 --> 1265.960] taller or smaller or different attributes, you put them on these dimensional axes, and it looks
467
+ [1265.960 --> 1271.800] like grid cells are modifying them. So there's a lot of evidence, which is sort of triangulating on this,
468
+ [1271.800 --> 1274.680] and then you can say, well, how does that really apply to something like language? Well, we don't
469
+ [1274.680 --> 1280.840] really know, but the evidence, which suggests it does. And one way you might think about it, you
470
+ [1280.840 --> 1284.920] can think about words is objects. You know, there are visual objects, there are auditory objects,
471
+ [1286.360 --> 1291.400] and now you have a series of objects and you're going through them in sequence, and they have a
472
+ [1291.400 --> 1296.520] relationship to one another. Those objects have a, literally a written word, has a spatial
473
+ [1297.560 --> 1303.640] structure to it, an auditory word has an auditory structure to it as well. And now you're
474
+ [1303.640 --> 1309.160] sticking together these objects in that sort of like, then you're putting them on a timeline,
475
+ [1309.160 --> 1313.160] or you're putting them in a reference frame. And so there's some, there's some ideas there that
476
+ [1315.640 --> 1321.560] we can start hinting at how language might be done like this. But we don't know, but I'm very
477
+ [1321.560 --> 1327.960] confident in saying that there's everything we've learned so far says that all high-level concepts,
478
+ [1327.960 --> 1336.360] all thought processes are built on a framework of spaces and reference frames, and they may not
479
+ [1336.760 --> 1339.640] happen. And I would say in the frame of this paper, those reference frames don't have to correspond
480
+ [1339.640 --> 1345.640] to physical things in the world. You know, it's like the birds, you know, we put these
481
+ [1345.640 --> 1349.160] birds on these sort of reference frames that don't really correspond to locations in the world,
482
+ [1349.160 --> 1353.480] but the brain seems to build this reference frame for how to
483
+ [1353.480 --> 1360.440] place knowledge, and we move through it. So we took an attempt at describing this the best we
484
+ [1360.440 --> 1367.640] could in a discussion section in the framework's paper. And we're not the only people starting to
485
+ [1367.640 --> 1374.680] talk about this. So it is an interesting question, your questions are correct, no one really knows
486
+ [1374.680 --> 1380.760] exactly how this works yet, but the evidence is very strong that everything we do is built on
487
+ [1380.760 --> 1385.240] reference frames. And if you haven't read it, go back, you know, the best I can explain is what we
488
+ [1385.320 --> 1389.640] wrote in the frameworks paper about it. What we gave these references, we talked about this,
489
+ [1390.840 --> 1395.000] but we also admit we don't really understand it. But it seems like that's going to be part of the
490
+ [1395.000 --> 1398.440] answer. And there doesn't seem to be something else. There's no other magic things going on
491
+ [1398.440 --> 1402.760] elsewhere in the cortex would say, oh yeah, language works differently. It doesn't seem to be that way.
492
+ [1406.280 --> 1410.760] We've spent a lot of time talking about this here too. It's like, how the hell does it work,
493
+ [1410.840 --> 1415.080] but it seems to be doing it so. David, do you have any questions? I'm going to unmute.
494
+ [1418.680 --> 1423.240] I have to text people all to find the mute button. Yeah, oh my god, I'm going to go on the
495
+ [1423.240 --> 1430.600] fun. You may just be out. I'm going to skip you, David, if you figure out how to unmute, let me know.
496
+ [1430.600 --> 1437.800] Ryan, are you there? You have a question. Yeah. Yeah, hey guys, so I'm pretty new to this. So
497
+ [1437.880 --> 1444.920] sorry, this question has been answered elsewhere. But in regards to grid cells and cortical columns,
498
+ [1444.920 --> 1453.960] do we have any idea? Kind of like the similarity between different columns of how would you say,
499
+ [1453.960 --> 1459.960] like, I guess like granularity between, or like similarity between different cortical columns
500
+ [1459.960 --> 1467.080] and how the grid cells are arranged? Is this question sort of like where actually, what is the
501
+ [1467.080 --> 1471.880] arrangement of grid cells in the cortical column? What is the physical structure? Right, and kind of
502
+ [1471.880 --> 1477.080] like similarity between different cortical columns. I mean, whether they're comparable or not
503
+ [1477.080 --> 1483.800] between the cortical columns? Yeah, exactly. Yeah. Well, everything we do assumes that cortical columns
504
+ [1483.800 --> 1492.040] are very similar. We don't make any assumptions about that standard neuroscience dogma. So we don't
505
+ [1492.360 --> 1498.280] we don't have any evidence that columns treat are different. But there is a very interesting question
506
+ [1498.280 --> 1506.440] as where are the grid cells and exactly what is their structure in a cortical column? And I can
507
+ [1506.440 --> 1512.200] talk about this for hours because we're spending a lot of time on I'm spending a lot of time on it.
508
+ [1512.840 --> 1517.000] So I can give you, I don't know how much we want to go into this since you say you're relatively new to it.
509
+ [1517.000 --> 1526.760] We probably want to wrap up in 30 minutes. Okay, okay. So let me let me give you a sort of a big picture
510
+ [1526.760 --> 1532.440] of this. Okay, grid cells, of course, were discovered not in the neocortex, but in the in the
511
+ [1532.440 --> 1537.960] hippocampal complex, or in the entorhinal cortex. And grid cells in the entorhinal cortex represent,
512
+ [1537.960 --> 1544.600] you know, they've been studied mostly in rats running around in mazes or rooms. And in that situation,
513
+ [1544.600 --> 1550.280] the grid cells represent a 2D space, a 2D dimensional space where the rat is on, you know, rats don't
514
+ [1550.280 --> 1555.880] fly to the space. They kind of stay on the ground and they move around to D. And so everything that's
515
+ [1555.880 --> 1562.840] been written about grid cells is about 2D representations of space. And that system was evolved
516
+ [1564.360 --> 1571.160] to represent a location of an animal in a 2D environment. Now we have hypothesized that the same
517
+ [1571.160 --> 1576.680] basic mechanism exists in the neocortex. But the neocortex isn't necessarily doing with 2D
518
+ [1576.680 --> 1584.040] spaces. We move in 3D spaces, objects of 3D dimensions. They might even, we might even be modeling
519
+ [1584.040 --> 1589.400] higher dimensional spaces, but at minimum, we know they're modeling 3D dimensions spaces. So how does a
520
+ [1589.400 --> 1595.720] 2D dimensional grid cell represent 3D spaces? We have a paper that's being written by a couple of
521
+ [1595.720 --> 1602.280] our researchers right now, which is very close to being submitted. About this very topic,
522
+ [1602.280 --> 1607.240] about how you could represent higher dimensional spaces using 2 dimensional grid cell modules.
523
+ [1607.240 --> 1612.280] And this is getting to your question in a moment. So what this tells us is that you need to have,
524
+ [1612.280 --> 1617.000] to represent a 3D dimensional space, you need to at least have multiple 2D grid cell modules that
525
+ [1617.000 --> 1621.560] some sense slice up the 3D space differently. A 2D, you can think about 2D grid cell module is
526
+ [1621.560 --> 1629.320] representing a projection of 3D space onto 2D to achieve a grid cell space. And so you need more
527
+ [1629.320 --> 1635.800] than one slice to 3D dimensional space to represent 3D dimensional space. You can represent
528
+ [1635.800 --> 1640.120] a multiple 2D modules that are basically intersecting the 3D space at different projections.
529
+ [1642.440 --> 1646.120] So that's one cool thing. We know that it's definitely going to be different in the neocortex
530
+ [1646.760 --> 1655.080] and in the antironic cortex. It's I'm currently working on the idea that it's possible that in the
531
+ [1655.080 --> 1660.200] neocortex, the grid cell modules are one dimensional. We know they already have to be different.
532
+ [1661.560 --> 1668.040] And there's some evidence to suggest this might be true. And so you can say, what does that mean?
533
+ [1668.040 --> 1672.360] Basically, if I want to represent a 3D space or a 2D space, I have to have a whole bunch of
534
+ [1672.360 --> 1679.320] 1D modules that are basically projections of the 3D space onto a 1D line. And hopefully,
535
+ [1679.320 --> 1686.360] that's that you can imagine that in your head, what that means. So much of the movement through
536
+ [1686.360 --> 1691.080] 3D space would not be reflected on all these 1D modules because they don't all move depending
537
+ [1691.080 --> 1695.320] on the projections. So I'm moving perpendicular to the 1D module, it's not going to reflect that
538
+ [1695.320 --> 1700.520] change, but some other 1D modules would. So this is a long way of saying that in a cortical column,
539
+ [1700.520 --> 1705.240] we believe there have to be multiple grid cell modules. So in one square millimeter, for example,
540
+ [1705.240 --> 1710.360] we can deduce logically that there has to be more than one grid cell module. There have to be
541
+ [1710.360 --> 1714.520] multiple ones, especially if they're 1D, but even if they're 2D, they have to be multiple ones.
542
+ [1714.840 --> 1721.400] They have to represent different projections in 3D space. And then we know something about how
543
+ [1721.400 --> 1727.000] these physically look in the entorhinal cortex. There's a nice paper that came out recently by David Tank,
544
+ [1727.640 --> 1732.680] at Princeton, I think, where he talks about the structure of what these actually look like in the entorhinal
545
+ [1732.680 --> 1738.120] cortex. I'm working on the idea right now, which I would consider very speculative, but just throw
546
+ [1738.120 --> 1745.560] it out, that actually the mini columns in the neocortex actually each mini column could correspond to
547
+ [1747.320 --> 1756.040] a unique grid cell module and actually unique orientation module, head direction cell module.
548
+ [1756.040 --> 1761.080] And so that in a quarter of a column of a square millimeter, you have several hundred mini columns.
549
+ [1761.080 --> 1767.800] Each one could be a unique grid cell module and that together they represent that entire space.
550
+ [1767.800 --> 1774.600] And they might be 1D grid cell modules. This is some evidence for this. It's got a lot of,
551
+ [1774.600 --> 1780.040] but it's very speculative still, but it's elegant in some ways. If it's not in the mini columns,
552
+ [1780.040 --> 1784.600] it still has to be divided up somehow. A cortical column has to have multiple grid cell modules
553
+ [1784.600 --> 1790.600] that are acting independently, slicing up space in different ways. So that's a very long answer
554
+ [1790.600 --> 1796.360] to your question. But right now, the simplest explanation I can come up with is that each mini column
555
+ [1796.360 --> 1801.560] is doing this. I think another part of it, maybe, if you think about the structure of the cortical
556
+ [1801.560 --> 1806.440] column and where grid cells might be within the layers, there are some anatomical constraints that
557
+ [1806.440 --> 1813.000] have to be met as well. So we know that grid cells update their representation based on motor
558
+ [1813.000 --> 1818.040] commands. So wherever the grid cells are, they should be receiving some sort of a motor copy
559
+ [1818.040 --> 1822.280] or a motor command coming in. And there are only a couple of layers in the neocortex where that
560
+ [1822.280 --> 1828.440] happens. And the other anatomical constraint is that we think there's this sort of back and forth
561
+ [1828.440 --> 1833.320] between the location representation and the sensory or the place cell analog representation. So
562
+ [1833.320 --> 1840.440] there has to be sort of strong recurrent connectivity between the sensory layers and the grid cell
563
+ [1840.520 --> 1845.080] layers. So we talked about that in the frameworks paper, or a little bit in the columns paper, and it
564
+ [1845.080 --> 1851.800] kind of suggests that the grid cell modules could be in the subgranular layers of the lower layers
565
+ [1851.800 --> 1857.480] of the cortex because they kind of match these anatomical constraints. So there we have to say
566
+ [1857.480 --> 1861.000] should or could, but I actually feel really confident about that. I think they're really in the
567
+ [1861.000 --> 1867.640] layer six and we know which cell types they are. But it is obviously theory. So, but we can still
568
+ [1867.640 --> 1873.080] put different levels of confidence on these things. So I'm very confident that those layer six cells
569
+ [1873.640 --> 1878.120] but it could be wrong but I'm very confident then. But other things where you know this
570
+ [1878.120 --> 1882.360] thing I just mentioned about minicolumns, well that's much more speculative and we don't know yet.
571
+ [1883.480 --> 1887.720] All right, let me give David a chance to unmute himself for a few seconds.
572
+ [1887.720 --> 1894.920] in case he has something he wants to say, and if not then we'll go to Walter. He's been to
573
+ [1894.920 --> 1901.560] a few of our hackathons in the past. I remember you, Walter, or a hacker, I think, from the old
574
+ [1901.560 --> 1906.920] HKK. Oh, okay. Oh, huh. Looks like David's not figured out how to unmute. So Walter, you want to say
575
+ [1906.920 --> 1917.080] anything? If not we'll go to chat questions. He said no. Okay. So there were a couple interesting
576
+ [1917.080 --> 1922.200] things here. Oh, someone wanted to know what we're working on for research. Well, Jeff sort of
577
+ [1922.200 --> 1926.680] just talked a bit about that. Well, I think we should we should be a little clearer. If you've
578
+ [1926.680 --> 1930.600] been I don't know how if you've been following them at the very close so you might know this. But
579
+ [1931.560 --> 1935.480] but Subutai and I, Subutai and I are sort of going through a divorce right now.
580
+ [1937.720 --> 1944.840] Just a joke. You have to make it awkward. Yeah, let me just explain that. We've been focusing
581
+ [1944.840 --> 1950.040] purely on the neuroscience side lately and I am continuing to focus on the neuroscience side
582
+ [1950.040 --> 1955.480] so I can talk about what we're doing on the neuroscience side. Suiton is now this all about
583
+ [1955.480 --> 1959.560] we're doing this together. There's no acrimony here. I don't know who I'm going to live with. Yeah,
584
+ [1959.560 --> 1965.960] with the plant. Subutai and Lewis, one of our other researchers, are starting to focus on how to apply
585
+ [1967.000 --> 1972.360] some of what we've learned to machine learning techniques. So going back in that direction. I don't
586
+ [1972.360 --> 1978.520] know how to put it. Do you want to talk about that more? Yeah, I can, I can talk a little bit more. I
587
+ [1978.520 --> 1984.760] wouldn't really call it a divorce at all. I'm trying to live it up here. I'm sorry. Yeah, but
588
+ [1985.720 --> 1991.080] of course I'm continuing to be extremely interested in the neuroscience. But you know, Numenta's
589
+ [1991.080 --> 1995.560] always had this kind of two-pronged mission of understanding the neuroscience side of it and then
590
+ [1995.560 --> 1999.960] trying to see if the principles that we learned from the neuroscience can be applied to practical
591
+ [1999.960 --> 2004.200] problems and to machine intelligence. And we've done a little bit of that in the past but the
592
+ [2004.200 --> 2009.160] last few years I've been really focused primarily on the neuroscience. And I got pretty excited,
593
+ [2009.160 --> 2015.480] you know, with the frameworks paper. I felt that we had an almost kind of complete kind of structure
594
+ [2015.480 --> 2020.680] about how a cortical column works. And there are a number of principles that are embodied in there
595
+ [2020.680 --> 2024.520] and some of which that we talked about. And if you look at the world of deep learning and machine
596
+ [2024.520 --> 2029.160] learning, there are kind of fundamental problems there. And I could almost see that if applying these
597
+ [2029.160 --> 2033.560] principles from this framework could actually help solve some of these really big problems in deep
598
+ [2033.560 --> 2040.280] learning. So the kind of the research direction that I'm pursuing a little bit now is to take some
599
+ [2040.280 --> 2046.360] of the concepts that we've found from the neuroscience and apply them more directly to machine learning.
600
+ [2046.360 --> 2053.160] And I think in that research, which is still very speculative and exploratory at this point,
601
+ [2053.160 --> 2059.160] I think there are basically two components to it. If I look at everything we've done, I think there's
602
+ [2059.240 --> 2064.760] like two fundamental pieces. One is kind of a representational component. And a lot of you on the
603
+ [2064.760 --> 2069.560] forum know about how much we rely on sparse distributed representations and the properties of
604
+ [2069.560 --> 2075.320] SDRs. And deep learning systems don't really embody SDRs today. They're primarily dense
605
+ [2075.960 --> 2081.320] representations. So the question is, can we embody SDRs into deep learning systems,
606
+ [2081.320 --> 2086.280] or machine learning systems, and take advantage of some of their properties? And the second part of
607
+ [2086.280 --> 2091.880] it is just looking at the cortical column as a structure. If you look at a deep learning system
608
+ [2091.880 --> 2096.920] or neural network today, it's extremely simplistic feed forward structure, whereas the cortical column
609
+ [2096.920 --> 2103.640] structure is a lot more complex. So can we take that structure and along with SDRs, improve
610
+ [2103.640 --> 2108.200] machine learning and deep learning to embody everything that's in this kind of common algorithm
611
+ [2108.200 --> 2113.400] or the common cortical microcircuit? So that's a very quick description of the kind of research
612
+ [2113.400 --> 2118.600] that I'm just really starting on. He's getting really interesting results already.
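As a rough illustration of the sparse-versus-dense point made above (a sketch only, assuming a simple k-winners-take-all step; this is not Numenta's actual machine learning code):

```python
# Toy illustration of sparse distributed representations (SDRs) versus the
# dense activations typical of deep nets; k-winners-take-all is the sketch here.
import numpy as np

def k_winners(x, k):
    """Keep the k largest activations, zero out the rest (a sparse code)."""
    out = np.zeros_like(x)
    top = np.argsort(x)[-k:]
    out[top] = x[top]
    return out

rng = np.random.default_rng(1)
dense = rng.normal(size=2048)
sparse = k_winners(dense, k=40)          # ~2% of units active
print(np.count_nonzero(sparse), "of", dense.size, "units active")

# Two SDRs for unrelated inputs rarely share active units by chance,
# which is one property Numenta argues makes matching robust to noise.
a = set(np.flatnonzero(k_winners(rng.normal(size=2048), 40)))
b = set(np.flatnonzero(k_winners(rng.normal(size=2048), 40)))
print(len(a & b), "units overlap out of 40")
```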
613
+ [2120.520 --> 2125.720] I'm going to focus on, if you don't mind, I can just say, what my work for this year is still on
614
+ [2125.720 --> 2130.440] the biology side. And I'm trying to fill in all these missing pieces of a cortical column.
615
+ [2131.560 --> 2136.280] And specifically the role of orientation, which is like head direction cells and the equivalent
616
+ [2136.280 --> 2142.760] of the place cells. And so I'm working on the idea that I actually mentioned a year ago,
617
+ [2142.760 --> 2148.120] and I talked about MIT, but I'm back to it with the vengeance now, is that in a cortical column,
618
+ [2148.120 --> 2155.640] there's actually two different sensory motor inference mechanisms being done. One is movement
619
+ [2155.640 --> 2160.840] through space is what the framework paper talks about a lot. And that's the idea of grid cells
620
+ [2160.840 --> 2166.440] and moving through space. And the other is a sensory motor mechanism which has to do with orientation
621
+ [2166.440 --> 2173.000] or changing orientation to an environment. And that produces the equivalent of place
622
+ [2173.000 --> 2178.200] cells. So I think the cortex, to fill out the framework, and many of the details, we can understand
623
+ [2178.200 --> 2183.640] a cortical column is doing two types of inference at the same time. One is angular movement,
624
+ [2183.640 --> 2189.240] which is your orientation to the world, and that's figuring out, like place cells, where am I,
625
+ [2189.240 --> 2193.640] based on my sensory input? And then there is the movement through space, which is a more
626
+ [2193.640 --> 2199.160] of a linear sensory motor inference. And I believe you can map these two inference mechanisms
627
+ [2199.160 --> 2204.840] precisely onto different cortical layers and adding orientation. And it really fills out the
628
+ [2204.840 --> 2210.040] complete picture of what a cortical column does. So that's a paper I hope to get
629
+ [2210.040 --> 2214.040] written by the end of the year. Is there a related question on chat from Eric Collins? How are
630
+ [2214.040 --> 2221.880] features selected to generate place cell representations? Oh boy. First of all, place cells
631
+ [2222.680 --> 2227.640] are in the hippocampus, right? That's where the term comes from, these are cells in the hippocampus.
632
+ [2227.640 --> 2231.640] We think there are equivalent cells in the neocortex, although we have not really
633
+ [2231.640 --> 2235.160] talked about them as much. We didn't mention that in the in the framework's paper.
634
+ [2236.600 --> 2242.360] So how are they selected? It's more, it's, here's one way to think about it.
635
+ [2244.360 --> 2252.360] First of all, what do place cells do? Place cells represent some sensory input that encodes
636
+ [2252.360 --> 2257.880] your location. So it's like when an animal is in a particular location, based on the
637
+ [2257.880 --> 2263.640] sensory inputs around the animal, these place cells represent that. But they don't, they represent
638
+ [2263.640 --> 2268.360] independent of the orientation of the animal. So it's not like I see something in front of me.
639
+ [2268.360 --> 2274.280] It's like there's something relative to the room in front of me. So the place cells don't change
640
+ [2274.280 --> 2278.520] when the animal changes its orientation to the room. It's not pure sensory because when you're
641
+ [2278.520 --> 2283.160] sensory input changes when you rotate your position relative to the room, the place cells do not.
642
+ [2283.720 --> 2287.720] And this is the inference I was talking about. There's a, we believe what's going on is there's
643
+ [2287.720 --> 2292.360] a sensory motor inference which, which says, given these features that are relative to me, and as I move
644
+ [2292.360 --> 2297.960] around, I'm going to form a representation which is oriented to the environment. And it's stable
645
+ [2297.960 --> 2303.560] relative to the environment independent of my movement. And what features you use to select that
646
+ [2303.560 --> 2309.400] can vary in, and it, there's all kinds of literature about what actually goes on in a rat's brain
647
+ [2309.400 --> 2314.840] in this regard. But it could be whiskers, it could be vision, it could be hearing. It doesn't really
648
+ [2314.840 --> 2320.520] matter. It's, as long as I sense something that I can then turn it into a representation of
649
+ [2320.520 --> 2326.200] the location in the room based on that one thing. So there's, it's, it's not really critical to
650
+ [2326.200 --> 2331.560] what senses you sense. It's more critical to how you do the sensory motor inference. And that's
651
+ [2331.560 --> 2336.920] the long topic. So I don't think the actual features are really that important. It could work
652
+ [2336.920 --> 2343.160] of any kind of sensory modality. Okay. I want to, I promise we would answer the forum questions.
653
+ [2343.160 --> 2346.840] So let me go through these and because we only got about 15 more minutes because that might generate
654
+ [2346.840 --> 2352.760] more topics and I'll get to rest of the chat stuff. So, so someone was asking about, is there any
655
+ [2352.760 --> 2358.040] relationship between grid cells and the orientation stripes or bands that were observed in the Hubel
656
+ [2358.040 --> 2363.240] and Wiesel papers? Yeah. So remember earlier I was saying that I'm working on the hypothesis
657
+ [2363.240 --> 2369.880] that there's a grid cell module per minicolumn. And each minicolumn
658
+ [2369.880 --> 2376.360] in the Hubel and Wiesel model of V1 has a specific orientation. So one responds to lines of
659
+ [2376.360 --> 2379.480] one orientation, the next minicolumn might be a slightly different orientation. It responds to
660
+ [2379.480 --> 2384.680] visual stimulus at that orientation. But also very importantly, those
661
+ [2384.680 --> 2389.160] cells, many of those cells respond to motion. So they're actually not just orientation, but they're
662
+ [2389.160 --> 2395.240] actually, that line is moving this way or this way, that's what they prefer. I won't have time
663
+ [2395.240 --> 2403.560] to explain all of this, but that is exactly the signal you would need to, to update and create a
664
+ [2403.560 --> 2409.160] one-dimensional grid cell module. That movement command, it would tell you which way the bump
665
+ [2409.160 --> 2413.880] should move on a one-dimensional grid cell module. It already, it already represents a one-dimensional
666
+ [2413.880 --> 2421.160] slice through a three-dimensional visual space. And so that's the hard concept to get across.
667
+ [2421.160 --> 2427.080] I'm still struggling with the words for it, but it is possible that those orient, it's possible
668
+ [2427.080 --> 2431.240] that the interpretation of those orientation columns that Hubel and Wiesel gave is completely wrong,
669
+ [2431.240 --> 2438.840] or mostly wrong. It's possible that they actually represent in some sense, like they represent
670
+ [2438.840 --> 2444.920] essentially an orientation conjunctive type of cell, where they're defining the grid cell modules
671
+ [2444.920 --> 2451.240] and they're defining orientation less than visual features. They are visual features, but they
672
+ [2451.240 --> 2455.960] actually, the movement defines those the metrics we need to create grid cell modules and orientation
673
+ [2455.960 --> 2461.560] modules, the head direction cells equivalent. So that's an interesting idea that I don't know
674
+ [2461.560 --> 2467.320] of anyone else that I've ever thought of before. I said earlier, it's very speculative, but I'm working
675
+ [2467.320 --> 2475.240] on it. Okay. The next question is about invariance with respect to object representation. Does this
676
+ [2475.240 --> 2479.880] does our own model help? How does it help with invariance? Why don't you take that one?
677
+ [2480.680 --> 2484.040] Yeah, I think we were talking about this a little bit earlier. There's many different aspects to
678
+ [2484.040 --> 2490.840] invariance, but I would say this whole idea of having a location signal within a cortical column
679
+ [2490.840 --> 2498.040] came from the, came in part from thinking about invariance and what, and the idea of reference
680
+ [2498.040 --> 2503.640] spaces. So if you think about what invariance is, you want to have some sort of a signal that's
681
+ [2503.640 --> 2508.440] stable while you are sensing different aspects of the same thing, that sort of one way you can think
682
+ [2508.440 --> 2516.040] about invariance. And in order to do that for an object, if as I'm sensing an object, I have to
683
+ [2516.040 --> 2521.480] have a representation of an object that's in the reference frame of the object itself. That way my,
684
+ [2522.200 --> 2528.120] the output of my system of our system can be invariant regardless of the pose of this object
685
+ [2528.680 --> 2536.440] relative to me. So grid cells and a location signal by encoding relative positions of features
686
+ [2536.440 --> 2542.200] within the reference frame of the object allow you to have a very invariant kind of predictive model
687
+ [2542.200 --> 2548.200] of the object itself. So there's that's at least sort of one, you know, relationship between those
688
+ [2548.200 --> 2552.440] concepts and the, that's probably the biggest one, right? I mean, essentially you're going from
689
+ [2552.440 --> 2557.080] some presentation on a two-dimensional sensory array, whether it's your fingers or eyes or
690
+ [2557.080 --> 2561.720] something like that, and you're turning into an internal representation, which is completely
691
+ [2561.720 --> 2567.640] independent of your pose relative to that object. It's a 3D model of the object. It doesn't matter,
692
+ [2567.640 --> 2572.280] you know, once you have a 3D model of the object, that 3D model is invariant to any other position
693
+ [2572.280 --> 2579.320] and orientation to anything else. I think one other aspect is for the thousands brain theory to work,
694
+ [2579.320 --> 2583.560] every cortical column has to have some sort of invariant representations of objects. In order to
695
+ [2584.040 --> 2588.840] the voting to occur, you have to have stable representations of objects of the same object in
696
+ [2588.840 --> 2594.120] multiple particle columns, even though they're actually sensing completely different inputs. It's
697
+ [2594.120 --> 2598.840] that stability that allows the kind of the voting mechanism to work. I think one of the interesting
698
+ [2598.840 --> 2603.240] things is that the internal representations of each of these cortical columns is entirely different
699
+ [2603.240 --> 2608.040] because they've got different sensory input coming in. Yeah, they can't have the same
700
+ [2608.040 --> 2613.320] variable representations at a low level. But if they're stable, then you can form associations
701
+ [2613.320 --> 2620.120] between them. That's less the voting work. Yeah, so a tactile coffee cup model and a visual
702
+ [2620.120 --> 2625.480] coffee cup model, the actual details are completely different. But if they both agree that it's a coffee
703
+ [2625.480 --> 2631.800] cup, then they can vote independent of how that was derived. That's the basic idea of the long way.
704
+ [2631.800 --> 2634.600] Even if they're all modeling this object in different ways.
705
+ [2634.600 --> 2639.320] Different modalities. Yeah, different reference frames. Different modalities.
706
+ [2640.840 --> 2646.760] And as I suppose I pointed out, the key thing about invariance is you have a stable representation
707
+ [2646.760 --> 2651.240] while inputs are changing. That is in some sense the definition of invariance.
708
+ [2652.200 --> 2656.920] And we propose there's this very specific mechanism for that, which I think is pretty good. This
709
+ [2656.920 --> 2663.720] is the temporal pooler, which is in the columns paper and the columns plus paper. And I'm very
710
+ [2663.720 --> 2669.640] confident that that's basically happening.
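A toy sketch of the "stable output over changing inputs" idea just described (illustrative only; the object names and cell indices are invented, and this is not the actual temporal pooler algorithm):

```python
# Rough sketch of the invariance idea behind temporal pooling: an output layer
# keeps the same set of active cells while the input layer's pattern changes,
# as long as every input pattern has been associated with the same object.
object_to_output = {"coffee_cup": frozenset({3, 17, 42, 99})}   # stable code

def pooled_output(sensed_feature, feature_to_object):
    obj = feature_to_object[sensed_feature]   # which learned object predicts this input?
    return object_to_output[obj]              # same output cells, whatever the feature

feature_to_object = {"rim": "coffee_cup", "handle": "coffee_cup", "logo": "coffee_cup"}
codes = {pooled_output(f, feature_to_object) for f in ["rim", "handle", "logo"]}
print(len(codes))   # 1 -> the output stays stable while the inputs vary
```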
711
+ [2669.640 --> 2674.040] Okay, we have about 10 more minutes. Another question about lateral connections: if there are long-range lateral connections, is there any problem with temporal
712
+ [2674.040 --> 2680.280] variation for syncing up the activity across? Yeah, we were talking about this question
713
+ [2680.280 --> 2684.920] before the hangout started. And it's a great question. And I don't think we've actually talked
714
+ [2684.920 --> 2691.560] about it or really thought about it. Yeah, the idea is that you want essentially the
715
+ [2692.440 --> 2697.720] ideally you want the sort of the axons on a particular neuron. You want sort of the action
716
+ [2697.720 --> 2701.960] potentials arriving sort of at the same time. No latency. Well, not latency is not the problem.
717
+ [2701.960 --> 2705.640] It's not the latency is the problem. You want to sort of arrive at the same time. They can be
718
+ [2705.640 --> 2713.400] all delayed. Oh, great. It's like if you go back to the neuron paper, the sequence memory paper,
719
+ [2713.400 --> 2719.880] we laid out a very detailed model of the neuron and how the dendrites work and what they're
720
+ [2719.880 --> 2724.360] computing. And part of that was that they have to detect these coincidence patterns on a
721
+ [2724.360 --> 2730.040] dendritic branch. And the biology tells us that those synapses have to be active within a few
722
+ [2730.040 --> 2734.920] milliseconds of each other. So there needs to be, you'd like to have some sort of synchronizing
723
+ [2734.920 --> 2739.720] abilities to get the action potential to arrive at the same time as opposed to scattered over time.
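A small sketch of the timing constraint being described, assuming an illustrative ~4 ms coincidence window and a threshold of 8 active synapses (both numbers are placeholders, not measured values):

```python
# Sketch of the constraint: a dendritic segment only "detects" its pattern if
# enough of its synapses receive spikes within a few milliseconds of each other.
def segment_fires(spike_times_ms, threshold=8, window_ms=4.0):
    times = sorted(spike_times_ms)
    for i in range(len(times)):
        # count spikes that land inside a window starting at times[i]
        in_window = sum(1 for t in times[i:] if t - times[i] <= window_ms)
        if in_window >= threshold:
            return True
    return False

synchronous = [10.0 + 0.3 * i for i in range(10)]   # all within ~3 ms
scattered   = [10.0 + 3.0 * i for i in range(10)]   # spread over ~30 ms
print(segment_fires(synchronous), segment_fires(scattered))   # True False
```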
724
+ [2743.720 --> 2748.840] That's just a biological. Seems to be biological requirement. And this question is saying, how is
725
+ [2749.320 --> 2755.240] it guaranteed? I believe that's what the question is asking. And we don't know. There are lots of ways it
726
+ [2755.240 --> 2763.080] could occur. The basic belief is that there are cycles in the brain and the cycles will, the
727
+ [2763.080 --> 2767.400] cells will tend to fire on the peaks of these cycles and not on the troughs of these cycles. And
728
+ [2767.400 --> 2770.520] therefore they, if they're going to make a, they're going to spike, they tend to do it at the same
729
+ [2770.520 --> 2776.360] time. But this question is saying, if they're traveling long distances and there are delays,
730
+ [2776.360 --> 2779.080] and the delays would be different. So now they're not going to arrive at the same time.
731
+ [2779.720 --> 2783.720] It's a good question. I don't have any answers to it. There you go. But there will be an answer to it.
732
+ [2784.760 --> 2790.520] About that. But yeah, it's not hard to imagine how the answer could, you know, there's so much it's
733
+ [2790.520 --> 2795.080] not known about some of this stuff. Maybe the dendrites aren't as critical as people think they are.
734
+ [2796.200 --> 2802.280] Maybe there's local dynamics which make these things happen. There are some, many synapses have
735
+ [2802.760 --> 2808.120] metabotropic responses, meaning that they lead to a long-term depolarization that would
736
+ [2808.120 --> 2813.560] bridge these time gaps. So there's lots of possibilities, but it's not an area that we focused on.
737
+ [2813.560 --> 2819.400] Okay, the last forum question is about displacement cells in L5. And they're saying, are these
738
+ [2819.400 --> 2825.320] like multiplexed representations for movement vectors and object compositions? They're asking for more detail
739
+ [2826.040 --> 2831.640] about that displacement cell layer? Yeah, okay.
740
+ [2833.720 --> 2838.280] I always say in the video that I made, there's two types of displacement sort of,
741
+ [2838.280 --> 2841.960] when you're moving within an object reference frame that's one displacement. We make this
742
+ [2841.960 --> 2847.560] really clear in the paper too. I don't know if you want to rest out or I've thought about it.
743
+ [2847.960 --> 2854.680] I'll go for it. Okay, so some of this, a few more speculates in other parts.
744
+ [2855.800 --> 2861.160] The idea for displacement cells originated with Marcus and maybe Scott, I'm not sure
745
+ [2863.000 --> 2866.760] the idea there, but we were trying to come up with a mechanism for object composition,
746
+ [2867.320 --> 2872.360] how objects align with other objects. And the mechanism that was outlined in the
747
+ [2873.160 --> 2881.240] in the frameworks paper addresses some of that. But we also realized that that mechanism would
748
+ [2882.600 --> 2889.000] allow the system to figure out the distance or to navigate from a point to another point.
749
+ [2889.720 --> 2895.720] And in fact, some of the research which Marcus and Scott used to come up with the displacement
750
+ [2895.720 --> 2900.520] was literally, and we referenced this in the paper, literally came about from people trying to
751
+ [2900.520 --> 2904.280] figure out how we navigate, how you know how to get from point A to point B in the same space.
752
+ [2905.000 --> 2908.920] Now we have this mechanism which we were trying to figure out how to do object compositionality,
753
+ [2908.920 --> 2915.400] but clearly could also do navigation within the same space. So now we have these two dual ideas.
754
+ [2915.400 --> 2920.680] One is like between, and this is very clearly written in the frameworks paper, that this concept
755
+ [2920.680 --> 2924.920] of displacement cells could do both of these things. Could say, hey, here's how I get from point A to
756
+ [2924.920 --> 2930.440] point B in one object in a space. And here's how I relate two different points and two different
757
+ [2931.160 --> 2938.120] reference frames. Now, as we go forward in time, it's clear that one of those still works really
758
+ [2938.120 --> 2943.320] well. That's the how to get from point A to point B, how to generate behavior. The compositionality
759
+ [2943.320 --> 2948.200] one is starting to have some problems. We're struggling with trying to get the details working.
760
+ [2948.200 --> 2952.840] So there's issues of orientation and scale that we haven't quite figured out how to get
761
+ [2952.840 --> 2959.320] working in the displacement as an object compositionality problem. So I'm now far more comfortable
762
+ [2959.320 --> 2966.840] that the displacement cells exist and they're doing movement. I'm confused now exactly how they're
763
+ [2966.840 --> 2971.400] doing object compositionality. And maybe we might move to slightly different mechanisms for that.
764
+ [2971.400 --> 2976.680] Maybe we'll separate them out. There's two different things. So we wrote them as the displacement
765
+ [2976.680 --> 2982.520] cells could do both that may still be true, maybe not. But I do know that they could do movement.
766
+ [2983.160 --> 2987.720] So this is an area where we're trying to, it's very difficult to think about, but we're trying to
767
+ [2987.720 --> 2993.080] really get the core of how we do object compositionality exactly, how to deal with these problems of
768
+ [2993.080 --> 2997.240] orientation. And meaning like, imagine we used the coffee cup example and we said, oh, there's a logo
769
+ [2997.240 --> 3000.600] on the coffee cup. Well, we didn't really address what happens if the logo is oriented,
770
+ [3000.600 --> 3004.120] change an orientation to the coffee cup. We didn't address that. We didn't address how the
771
+ [3004.120 --> 3008.600] logo wraps around in three dimensions on the coffee cup. We didn't really address the issue of
772
+ [3008.600 --> 3012.360] how the scale of the logo can change on the coffee cup. So there's a lot of things with
773
+ [3012.360 --> 3016.280] the displacement cells where we really didn't address those issues. We pointed those out in the paper.
774
+ [3016.280 --> 3020.680] We made it clear like, hey, we don't understand this stuff. But as we get into it, it's getting
775
+ [3020.680 --> 3024.280] more complicated. So I'm sticking with the idea that the displacement cells exist. They're
776
+ [3024.280 --> 3031.080] doing, they're definitely doing the motor behavior. But the compositionality part is under flux right now.
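In the "movement" reading that Jeff says he is confident about, a displacement is just the vector between two locations expressed in the same reference frame. A minimal sketch (the coordinates are invented for illustration):

```python
# Sketch of displacements in their "movement" reading: a displacement is the
# vector between two locations in the same reference frame, so it can be read
# out as "how to get from A to B" independent of where A actually is.
import numpy as np

cup_frame = {"rim": np.array([0.0, 0.0, 10.0]),
             "handle": np.array([4.0, 0.0, 5.0])}

displacement = cup_frame["handle"] - cup_frame["rim"]   # same vector anywhere
print(displacement)        # a movement command: go here, relative to the rim

# The compositional reading would store such a vector as "logo relative to cup",
# which is where the scale and orientation issues described above come in.
```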
777
+ [3032.760 --> 3036.440] Okay. Going through some of these questions, don't skip some of them.
778
+ [3037.400 --> 3041.240] How do you envision the transformation of reference frames to allow the invariance of objects?
779
+ [3041.240 --> 3045.560] I think we're talking about the displacement cells represent that transformation.
780
+ [3046.840 --> 3051.240] Displacement cells, you know, you think of them as a movement between two points, not the two
781
+ [3051.240 --> 3057.640] points, but the movement between the two points, right? Yeah. And then, so Mark Brown,
782
+ [3058.920 --> 3065.160] how does the local grid in a mini column square with the known repeating grid patterns across
783
+ [3065.160 --> 3073.800] the entorhinal cortex? Yeah. So the idea here is a grid cell responds at multiple locations,
784
+ [3074.840 --> 3082.120] right? And those are spaced out in the entorhinal cortex. They're on a 2D sheet and they're at
785
+ [3082.120 --> 3089.560] these, you know, sort of a 60 degree hexagonal patterns. But if you had a linear,
786
+ [3090.520 --> 3096.280] a one dimensional grid cell module, that means as you go in one dimension, you have a series of
787
+ [3096.280 --> 3101.320] cells become active and they become active at various repeating points along the line. It's the
788
+ [3101.320 --> 3108.600] same basic idea. You're just repeating on a linear line versus repeating on a hexagonal grid on
789
+ [3109.160 --> 3115.720] a 2D sheet. So it's same basic idea. And I didn't know if I answered the question.
790
+ [3116.680 --> 3120.440] Oh, yeah. How does it square with the
791
+ [3121.400 --> 3125.480] repeating grid pattern across the entorhinal cortex? Yeah. It could be exactly the same thing
792
+ [3125.480 --> 3130.440] in the neocortex. So you might have a 2D grid cell module. We haven't eliminated that possibility.
793
+ [3130.440 --> 3136.360] That's sort of the first-order assumption. In which case you'd have cells that repeat. If I,
794
+ [3136.360 --> 3143.560] you know, as I move over objects, they would repeat. And in the same sort of, but now in a 2D
795
+ [3143.560 --> 3150.360] projection of a three dimensional space, which is a little bit odd to think about. But imagine if
796
+ [3150.360 --> 3156.920] I could just move through some space continuously relative to some object, the cell would repeat this.
797
+ [3156.920 --> 3162.840] And I can move those 2D projection of that space. Well, then the cell would repeat over that 2D
798
+ [3162.840 --> 3168.040] in an hexagonal way. I think one thing that came out of the work that Marco and Marcus are
799
+ [3168.040 --> 3173.080] working on is that the dimensionality of the grid cell modules is kind of independent of the
800
+ [3173.080 --> 3178.120] dimensionality of the location space itself. You can take any dimensional location space and
801
+ [3178.120 --> 3183.080] represent it with almost any dimensional grid cell modules. As long as you have enough of them,
802
+ [3183.080 --> 3187.480] any of these random projections that do it. So you can kind of divorce the 2 of them to some
803
+ [3187.480 --> 3192.120] extent. There's capacity issues and stuff like that. But generally speaking, any
804
+ [3192.120 --> 3197.720] n dimensional space can be represented by a set of 1D modules or 2D modules or 3D modules or whatever.
805
+ [3197.720 --> 3201.480] The same thing happens with orientation, by the way. We think there's an orientation of your
806
+ [3201.480 --> 3206.280] finger to the cup just like the rat has an orientation into the room. You can think of rat in the
807
+ [3206.280 --> 3210.600] room. The orientation is a 1D vector. The head direction cells, they're just like, you know,
808
+ [3210.600 --> 3218.120] there's just one, it's an angular, it represented angular position in its 1D. And if you go all the
809
+ [3218.120 --> 3221.960] around, then you're back to the same cells again. So it's a repeating pattern, but it's a closed
810
+ [3222.040 --> 3227.240] space because you're doing angular movement. But how would I represent my orientation to my
811
+ [3227.240 --> 3231.880] finger to this cup? That's not a one-dimensional orientation. There's all kinds of movements here.
812
+ [3231.880 --> 3236.760] I can do where I'm on the same location of the cup, but it's different orientations. And so even
813
+ [3236.760 --> 3242.120] there, if I represented orientation with 1D orientation modules, I would need multiple
814
+ [3242.120 --> 3247.160] of them to represent the orientation of my finger to this cup. So it's the same basic problem.
815
+ [3247.720 --> 3255.480] And so I conjecture that you have multiple slices of orientation space or
816
+ [3255.480 --> 3262.440] multiple slices of location space in each cortical column. So Mark's really interested in long-distance
817
+ [3262.440 --> 3267.480] coordination, especially between cortical columns and representation at a level above that. So
818
+ [3267.480 --> 3273.560] he's continued to ask, what is the long distance coordination mechanism? Are these cortical
819
+ [3273.560 --> 3279.480] columns local? We already addressed one aspect of it, right? With the voting thing? I'm not sure.
820
+ [3281.000 --> 3285.240] Mark doesn't understand that. We can go over that again. We've also talked about this as a long distance
821
+ [3285.240 --> 3291.080] coordinate. That is a long distance coordinate. Yes. It's voting on what? There's only two cellular
822
+ [3291.080 --> 3295.720] layers in the cortex, which send long distance connections to other parts of the cortex.
823
+ [3296.600 --> 3301.560] There are cells in basically the layer 2, 3, and there are certain cells in layer 5.
824
+ [3302.280 --> 3308.600] And those are the only, that's a subset of layer 5. And those are the only two cell types that
825
+ [3308.600 --> 3313.640] project long distances. And the current theory, which goes a little bit beyond,
826
+ [3313.640 --> 3318.360] what was in the frameworks paper, is that one of
827
+ [3318.360 --> 3322.920] those layers represents the object. We actually talked about this initially in the columns paper; we modeled it
828
+ [3322.920 --> 3328.200] in last year's columns paper as representing the object. And so as
829
+ [3328.200 --> 3332.040] we talked about earlier, everybody can be looking at different parts of the world. But if they
830
+ [3332.040 --> 3336.520] are modeling the same object, then all you have to do is have an associative memory that links
831
+ [3336.520 --> 3341.480] a pattern in this column to the pattern in that column. And they vote. And they learn to say, when we're both
832
+ [3341.480 --> 3344.840] looking at the same thing, we can make those connections. And so they vote to decide what they're doing.
833
+ [3344.840 --> 3352.120] He's asking what's the neural substrate? The neural substrate is long range axons in layer 2, 3,
834
+ [3352.120 --> 3357.800] to other cells in layer 2, 3, anywhere in the cortex that might be modeling the same object.
835
+ [3358.200 --> 3362.520] And all you have to do is take a population of cells and another population of two sparse
836
+ [3362.520 --> 3368.360] populations. And you say, okay, we're both learning the coffee cup now. Let's form these long range
837
+ [3368.360 --> 3373.240] connections, basically just to associate this pattern with that pattern. And they can do that from
838
+ [3373.240 --> 3377.480] hundreds of different patterns. And now when you see this pattern, it's going to invoke that pattern
839
+ [3377.480 --> 3381.640] over here. So why touch the coffee cup? It's going to, it's going to bias the visual
840
+ [3382.440 --> 3387.480] cortical columns to say, you're probably going to be seeing a coffee cup. That kind of idea.
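A toy version of that association idea (illustrative only; the sparse codes below are made up): each column has its own arbitrary code for the object, and the long-range connections simply map one code onto the other, so activity in one column biases the other toward the same object.

```python
# Toy version of the voting/association idea between two cortical columns.
touch_code  = frozenset({5, 81, 200})      # tactile column's coffee-cup code
vision_code = frozenset({12, 44, 310})     # visual column's coffee-cup code

# Learned long-range associations: "when we're both looking at the same thing,
# link this pattern with that pattern."
associations = {touch_code: vision_code, vision_code: touch_code}

def bias_other_column(active_code):
    """Return the pattern this code invokes in the other column, if learned."""
    return associations.get(active_code, frozenset())

print(bias_other_column(touch_code))   # touching the cup primes the visual code
```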
841
+ [3387.480 --> 3391.560] Yeah, maybe I know we only have a minute or two. I see Seth asked me a question about Hinton's
842
+ [3391.560 --> 3396.360] capsules. Oh, I'm not there. So let me just address that. That's good. So yeah, there is, there are
843
+ [3396.360 --> 3404.200] links between the frameworks and the 1000 brains idea and capsules. I wrote actually a whole blog post
844
+ [3404.200 --> 3409.240] about it about a year and a half ago. So you can search for that on our website. But I think
845
+ [3409.240 --> 3416.280] Hinton's capsules includes the idea of having representing objects based on their relative
846
+ [3416.280 --> 3421.000] locations and doing kind of a voting mechanism to come up with a consistent interpretation of
847
+ [3421.000 --> 3425.400] everything. So to that extent, there are analogies. I think the framework's idea and the
848
+ [3425.400 --> 3429.560] cortical columns idea goes quite a bit beyond that because we're dealing with sensory motor
849
+ [3430.920 --> 3435.960] information, reference frames and a whole bunch of other things in there as well. And of
850
+ [3435.960 --> 3439.880] course, we're trying to model the actual biology and the neuroscience. But there are some really
851
+ [3439.880 --> 3444.920] interesting relationships with Hinton that you can look up my blog post if you want to know more.
852
+ [3446.680 --> 3454.200] Okay, so we, I think we need to wrap it up because we have a hard stop. A link to the blog post.
853
+ [3455.640 --> 3459.720] Yeah, I'll put it on the forum. Maybe that will make it easy to find. It's on numenta.com slash blog.
854
+ [3461.960 --> 3465.080] All right, that's it. Closing thoughts at all. Thanks to everybody.
855
+ [3465.080 --> 3469.400] I have a couple of closing talks as always. I want to thank Matt for organizing and running the
856
+ [3469.400 --> 3474.200] community. And I love, I really want to appreciate everyone out there who's actually following this
857
+ [3474.200 --> 3480.280] work and trying to understand it and contributing to it. I think the quality of the questions was
858
+ [3480.280 --> 3484.840] great. The fact that there are so many questions that are just at the edge of what we're researching
859
+ [3484.840 --> 3488.920] right now is that I think people are really understanding what we're doing and following it.
860
+ [3488.920 --> 3495.320] Yeah, we appreciate that. These questions push our our own knowledge and make us think about like,
861
+ [3495.320 --> 3499.880] hey, what do we understand? And sometimes we get good suggestions from the community. So
862
+ [3500.840 --> 3505.560] anyway, just want to make sure that one knows we appreciate that. Thanks community.
863
+ [3507.480 --> 3510.920] All right, take care everybody. We'll see you on the forums. Join the HTM Forum.
864
+ [3512.280 --> 3514.280] Bye.
transcript/allocentric_VGSDUFAtf1E.txt ADDED
@@ -0,0 +1,216 @@
1
+ [0.000 --> 8.160] Hello everybody. Thank you very much for inviting me to talk here today. So I'm going to switch
2
+ [8.160 --> 15.920] here a little bit from brain activity to just pure behavior. And I'm going to tackle a relatively
3
+ [15.920 --> 21.760] simple question, but it'll get a little more complicated as we go. So when we make saccades
4
+ [21.760 --> 26.800] looking around in the world, we can actually move our eyes in all directions. We move our eyes
5
+ [26.800 --> 33.440] horizontally, vertically. We can make oblique saccades. But it turns out that at least for humans,
6
+ [34.400 --> 40.880] all these directions of saccades are not equally likely. We make a lot more saccades, as shown here
7
+ [40.880 --> 47.760] in this cartoon depiction by arrows: we make more horizontal saccades. And then we make quite a
8
+ [47.760 --> 53.600] bit of vertical saccades, but we make very few oblique saccades. So what we're going to try to
9
+ [53.600 --> 63.040] start to tackle today is kind of why this may be. And here is an actual data recording and summary
10
+ [63.040 --> 68.160] of this effect. And we're going to see quite a bit of this plot today. So I'm going to try to
11
+ [68.160 --> 74.640] describe it well right now. So here we have a plot that shows the frequencies of saccades
12
+ [74.640 --> 81.200] for different directions. So the blue line shows for each direction how many saccades, or what was
13
+ [81.200 --> 87.680] the probability of a saccade in a given direction. And for this example here, which was recorded
14
+ [87.680 --> 94.560] during free viewing of static images, we see that there are many saccades towards the left and the
15
+ [94.560 --> 100.560] right. There are quite a few going up, a little less going down and even less in oblique
16
+ [101.440 --> 108.080] 45 degree approximately directions. So this is a very robust effect. It's seen in humans in
17
+ [108.080 --> 117.200] multiple tasks. And I wonder why.
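For readers who want to reproduce this kind of plot, here is a minimal sketch of computing a direction-probability histogram from saccade vectors (the data are simulated with a horizontal bias; this is not the authors' code):

```python
# Sketch of the plot described above: probability of saccades per direction,
# computed from saccade displacement vectors (dx, dy). Names are illustrative.
import numpy as np

rng = np.random.default_rng(2)
dx, dy = rng.normal(size=1000), 0.4 * rng.normal(size=1000)  # horizontal bias

angles = np.degrees(np.arctan2(dy, dx)) % 360.0
bins = np.arange(0, 361, 10)                 # 10-degree direction bins
counts, _ = np.histogram(angles, bins=bins)
prob = counts / counts.sum()
print(prob[:3])   # plotted over direction this gives the polar curve in the talk
```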
18
+ [117.200 --> 122.400] So if we start to think of possible hypotheses, I think the first one that usually comes to mind is just thinking about the symmetry of the system. So if we
19
+ [122.400 --> 127.920] think of the eye muscles or the brain areas that control saccades, they have this horizontal symmetry
20
+ [127.920 --> 133.040] in general. We're pulling one eye to one side. It's controlled by one side of the brain and one
21
+ [133.120 --> 139.520] muscle on one side and pulling the eye towards the other side is symmetrically control in the other
22
+ [139.520 --> 144.800] side. On the other hand, for vertical eye movements, things get a little more complicated. We have a
23
+ [144.800 --> 151.840] pair of muscles that also produce torsion and it takes more computation and more control to move the
24
+ [151.840 --> 158.640] eye perfectly vertical or obliquely. So maybe horizontal saccades are just easier to program for the
25
+ [158.640 --> 163.360] brain or somehow more energy efficient. So that's where we make more of them.
26
+ [165.760 --> 172.720] But this gets very soon kind of this proof or at least challenged by a fact that has been shown
27
+ [172.720 --> 178.560] by previous studies already that if you simply tilt the visual scene that a subject is looking at,
28
+ [179.680 --> 185.680] then the direction of saccades rotates with the image. So when the image is upright, most of the
29
+ [185.680 --> 190.960] saccades the subjects make are horizontal, but when the image is tilted and the head is still upright,
30
+ [192.000 --> 197.760] now oblique saccades become the most likely. So if it was really that much more costly, that shouldn't
31
+ [197.760 --> 206.880] happen. You should still make saccades horizontally most of the time. But then we're going to see that
32
+ [206.880 --> 213.200] it's not really just the image itself. So for example, if we look at saccades that we make when we
33
+ [213.200 --> 219.600] just try to fixate at the spot, as Jake showed when you fixate at the spot, your eyes are still
34
+ [219.600 --> 225.440] moving and they'll produce these small saccades, usually called microsaccades. So if we look at
35
+ [225.440 --> 231.440] the direction of those microsaccades made during fixation, they show a very strong directional bias as
36
+ [231.440 --> 239.120] well. So most of the microsaccades in humans tend to be horizontal. And this happens when people are
37
+ [239.120 --> 245.360] looking at a simple fixation spot in the middle of a blank scene. So that would suggest that
38
+ [245.360 --> 249.600] is not really just about the visual scene, at least not the visual scene that you're looking at,
39
+ [250.240 --> 255.440] but there may be something about the regularities on the visual images that we look at or the behaviors
40
+ [255.440 --> 263.760] we usually have that produce this bias. Okay, so what are we going to show today is a few experiments
41
+ [263.840 --> 270.480] and analysis that try to at least tackle some questions to start to understand where this bias
42
+ [270.480 --> 276.640] comes from. First, we are going to show experiments where we are trying to really understand better
43
+ [276.640 --> 281.440] what the reference frame of this bias is. So we are going to try to decouple the head orientation
44
+ [281.440 --> 287.360] in space, the orientation of the image in the world, and a little bit of the eye orientation of the
45
+ [288.320 --> 296.080] to see where this bias really stays anchored. Then we look at how the bias is different, maybe for
46
+ [296.080 --> 302.560] different saccade sizes. And finally, we are going to try to analyze the images to try to predict
47
+ [302.560 --> 307.200] what are the features in an image that may influence this saccade bias.
48
+ [310.480 --> 314.880] And this is all work that has been done by a graduate student in the lab, Stephanie Rives,
49
+ [315.680 --> 320.560] and she's a vision science graduate in Berkeley. So in the first experiment,
50
+ [321.440 --> 327.600] we are using virtual reality. So we have this headset, the FOVE, that is equipped with eye tracking
51
+ [327.600 --> 334.560] internally. And we are going to show subjects either these fractal scenes that have the main
52
+ [334.560 --> 339.600] characteristic that they are rotationally symmetric, so they don't have any cue for where upright is.
53
+ [340.320 --> 345.120] And then they are also going to look at natural scenes that, of course, give you a cue for where
54
+ [345.120 --> 352.640] upright is. And then we are going to keep their head either upright or tilt. Here is the complete
55
+ [352.640 --> 359.440] set of conditions in this experiment. As we see, there are three different head tilts. And then
56
+ [359.440 --> 364.400] within each head tilt, we have three different image tilts. So we always have either the image
57
+ [364.400 --> 369.280] aligned with the head or tilted 30 degrees more than the head or 30 degrees less than the head.
58
+ [369.840 --> 374.480] And as I said before, the images could be the fractal images or the natural scenes.
59
+ [376.240 --> 383.920] So first we are going to focus on those fractal images. Now it's a simple case where we have just a
60
+ [383.920 --> 389.760] head tilt and an image that really doesn't have any tilt information is always the same.
61
+ [389.920 --> 395.440] So in this scenario, we could think of two possible hypotheses. So
62
+ [398.480 --> 404.320] when the head is upright, the two reference frame, either the world reference frame or the head
63
+ [404.320 --> 409.760] reference frame, they are really the same. So with the two hypotheses, we get the same saccade
64
+ [409.760 --> 416.080] distribution, depicted here by these ellipses. But then when the head is tilted, we could get
65
+ [416.160 --> 420.480] these two different results. The saccades could go with the head or they could stay with the world.
66
+ [421.680 --> 427.120] So let's see what happens. So here we have the typical distribution for upright.
67
+ [427.840 --> 433.440] And here subjects were just freely viewing these fractal scenes. But then when the head is tilted,
68
+ [434.400 --> 440.000] we see that the saccades rotate with the head. So they stay in a head reference frame. When the
69
+ [440.000 --> 446.400] head is tilted to the right, the distributions shown here show a tilt to the right. And when
70
+ [446.400 --> 450.240] this head is tilted to the left, the saccades tilt to the left. They go with the head.
71
+ [452.160 --> 457.440] But if we start to look into the data, it may seem that it's not exactly with the head. So we're
72
+ [457.440 --> 461.760] going to do a further analysis. And this is how the data is going to be shown most often today,
73
+ [462.560 --> 468.400] where we actually just look at how much this distribution deviate from a head reference.
74
+ [468.640 --> 476.160] So now we've rotated the distribution. So the horizontal line represents the head orientation.
75
+ [476.160 --> 481.920] And we are comparing in black the distribution when the head is upright and in blue or red,
76
+ [481.920 --> 487.280] the distribution with the head is tilted. And when you do these plots, just start to getting a
77
+ [487.280 --> 493.680] hint that there is a small rotation. But we can further quantify that. We use some cross correlation
78
+ [493.680 --> 499.040] analysis to measure how much you need to rotate one distribution to better match the other
79
+ [499.040 --> 507.840] distribution. And we see that there is actually a small shift of these distributions where when the
80
+ [507.840 --> 513.680] head is tilted to the left, the saccades directions rotate a little bit to the right. And when the
81
+ [514.400 --> 522.160] head is tilted to the right, the saccades rotate a little bit to the left. And initially we try to
82
+ [522.160 --> 529.920] quantify with just a summary index for each subject how much the saccades remain in a head
83
+ [529.920 --> 536.800] orientation reference frame or in a world orientation reference frame. And we see, as we showed before, it's very close
84
+ [536.800 --> 541.600] to the head, but not exactly right there.
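One way to implement the cross-correlation analysis just mentioned, as a sketch (the bin width and test histograms are my assumptions, not the authors' exact procedure): circularly shift one direction histogram until it best matches the other, and report the shift in degrees.

```python
# Sketch of estimating how much one direction histogram must be rotated to
# match another; 10-degree bins assumed.
import numpy as np

def best_rotation_deg(hist_a, hist_b, bin_deg=10):
    """Circular shift of hist_b that maximizes its correlation with hist_a."""
    scores = [np.dot(hist_a, np.roll(hist_b, s)) for s in range(len(hist_a))]
    best = int(np.argmax(scores))
    shift = best if best <= len(hist_a) // 2 else best - len(hist_a)
    return shift * bin_deg

upright = np.exp(-np.linspace(-3, 3, 36) ** 2)   # a direction histogram (36 bins)
tilted  = np.roll(upright, 3)                    # same histogram rotated 30 degrees
print(best_rotation_deg(upright, tilted))        # -30: rotate back 30 deg to match
```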
85
+ [552.880 --> 570.960] okay how is this good? So yeah I was mentioning that there is this small deviation from a pure
86
+ [571.680 --> 577.200] head reference frame. And this amount is actually consistent with the amount of eye movements that
87
+ [577.360 --> 583.360] are produced usually with a head tilt. So when the head tilt, we get a torsional rotation of the eye,
88
+ [583.920 --> 589.760] which is a rotation around the line of sight. And typically for a head tilt like here of 30 degrees,
89
+ [589.760 --> 593.520] the eye is going to rotate around four or five degrees in the opposite direction.
90
+ [594.560 --> 599.280] So this starts to suggest that maybe this bias is not in a pure head reference frame, but actually
91
+ [599.360 --> 609.360] maybe in an eye reference frame. Okay so next we are going to focus on another set of conditions
92
+ [610.080 --> 615.680] where the image is always horizontal with respect to the world, but now the head may be tilted.
93
+ [616.720 --> 621.760] Remember this is still done in virtual realities, it's not a completely natural condition, but it's
94
+ [621.760 --> 626.640] the set of conditions that more mimic maybe the natural behavior where you may be looking at the world
95
+ [626.640 --> 632.400] when your head is tilted. In this case again we find that the saccade directions
96
+ [632.400 --> 637.680] deviate even more. So we don't get just this eye reference frame, but the saccade directions
97
+ [638.480 --> 644.800] rotate to align themselves closer to where the horizon in the world is.
98
+ [645.440 --> 650.880] Remember, these graphs here are now showing head orientation, so horizontal means aligned
99
+ [650.880 --> 656.960] with the head, so when the head is tilted this moves in the direction that will align it closer to
100
+ [656.960 --> 662.960] the world. Okay and finally we have sort of the opposite condition where the head stays just
101
+ [662.960 --> 668.800] upright and now the image may be tilted and this is just replicating previous results, but it shows
102
+ [669.680 --> 677.600] also that the saccades rotate so they align with the image, but only partially. If we measure the
103
+ [678.480 --> 683.120] angle that the saccades rotate we see that even though the image was tilted 30 degrees,
104
+ [684.800 --> 693.120] the saccade distribution rotates about 10 to 15 degrees, and we can represent this again here with
105
+ [693.120 --> 699.840] this reference frame index where one would be a perfect orientation with the image and zero perfect
106
+ [699.840 --> 704.320] orientation with the head and for most subjects we end up getting somewhere in between.
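The reference frame index described here can be sketched as the observed rotation of the saccade distribution divided by the applied tilt (a simplification of however the authors actually computed it):

```python
# Sketch of the reference-frame index: 0 means the saccade distribution stayed
# with the head, 1 means it rotated fully with the image (or world).
def reference_frame_index(observed_rotation_deg, image_tilt_deg):
    return observed_rotation_deg / image_tilt_deg

# e.g. a 12-degree rotation of the saccade distribution for a 30-degree image tilt
print(reference_frame_index(observed_rotation_deg=12.0, image_tilt_deg=30.0))  # 0.4
```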
107
+ [705.280 --> 711.840] So that's what the next experiment is going to start to tackle. We just purely here we're going to
108
+ [711.840 --> 717.840] use image tilt, the head is always going to be upright and we're going to have two different
109
+ [718.480 --> 723.040] behavioral conditions: one where they are freely viewing the image and one where they are
110
+ [723.040 --> 729.760] fixating at the center of the screen, where there is a small dot, and now the image can be tilted
111
+ [729.760 --> 736.320] 30 degrees to the left or 30 degrees to the right. Now overall this is the result we get and we've
112
+ [736.320 --> 741.600] seen a bit of this already: when the image tilts and subjects are free viewing, the saccade directions
113
+ [741.600 --> 750.240] rotate. Now, perhaps surprisingly at first, when subjects are fixating on a dot and the
114
+ [750.240 --> 758.800] image in the background is tilted, these microsaccades that are made during fixation don't change at
115
+ [758.800 --> 763.920] all. So the distributions of these micro saccades remain exactly the same no matter if there is an
116
+ [763.920 --> 771.760] image in the background that is tilted or upright. Okay and we can quantify this in the same way as
117
+ [771.760 --> 778.560] shown before. So for free viewing we get a big rotation, about 10 degrees, with a reference frame
118
+ [778.560 --> 785.040] that we would think is closer to egocentric, so a reference frame in the head, but it's still
119
+ [785.120 --> 791.440] very affected by the image. On the other hand for fixation we get no effect and the saccades are
120
+ [791.440 --> 799.040] made in a purely egocentric head reference frame. Now of course this could be about the task: in one
121
+ [799.040 --> 803.600] case they're fixating they may be ignoring the background so it may make sense that they are not
122
+ [803.600 --> 807.600] affected by the background because they're just looking at the dot and in the other case they're
123
+ [807.600 --> 813.120] actually free viewing and engaging with the image. So then we did a further analysis where we just
124
+ [813.120 --> 820.160] look at the free viewing data but we group the saccades depending on their size. So we did four
125
+ [820.160 --> 828.480] quartiles, where we get the smallest saccades, less than one degree, then we
126
+ [828.480 --> 834.720] have other groups from one to two more or less two to four and bigger than four. What we clearly see
127
+ [834.720 --> 841.440] is that there is a pattern that changes so for the big saccades we get a very strong effect of the
128
+ [841.440 --> 847.840] tilt of the image but for the small saccades we get almost no rotation and again we can quantify
129
+ [848.640 --> 855.840] with this reference frame index, where zero means aligned with the head and one means aligned with the
130
+ [855.840 --> 862.560] image and the small saccades remain aligned with the head but big saccades align more and more with the
131
+ [862.800 --> 872.160] image. Okay so after doing this when we look more closely at the data we find that not all images
132
+ [873.200 --> 879.200] are the same. If we were to show what these effects are for different images we'll find images that
133
+ [879.200 --> 883.920] have a very big effect meaning they pull the saccades to be oriented with that image when it's
134
+ [883.920 --> 889.760] tilted and other images that don't seem to have the same effect. So what we are trying is to
135
+ [889.760 --> 894.640] find what are the features the characteristics of those images that would predict which images
136
+ [894.640 --> 900.480] affect the saccades and which ones don't. So in the first option that we thought of we are
137
+ [900.480 --> 906.240] studying the the saliency of an image so this is something that has been done a lot in the field
138
+ [906.240 --> 912.640] of eye movements, where you extract the most salient features of an image by contrast, orientation,
139
+ [913.280 --> 920.880] etc and we can build a saliency map as shown here that essentially would predict the positions
140
+ [920.880 --> 926.640] in the image where it's more likely to fixate. So now you could end up with some images
141
+ [927.520 --> 934.160] that have some structure on this saliency and that the structure would potentially induce a bias.
142
+ [934.160 --> 939.120] So if you have like here only two very salient targets you could predict that the subject is
143
+ [939.120 --> 944.000] going to be looking between the targets a lot so you're going to get more saccades in particular
144
+ [944.000 --> 948.960] directions. Well on the other hand you may have other images where the saliency map is more uniform
145
+ [950.320 --> 955.120] so it would not predict a lot of bias in the directions just purely caused by this
146
+ [955.120 --> 962.080] structure in the saliency map. As a second option we are going to look at the spatial frequency
147
+ [962.080 --> 966.880] distribution similar to again what Jake was doing but in this case we're going to focus at the
148
+ [966.880 --> 973.920] power of the spectrum at different orientations. So here I have two examples, one where there is
149
+ [973.920 --> 979.920] a very strong orientation signal, in this case probably more in the low frequencies but also in the high,
150
+ [980.960 --> 987.600] where you have a very distinctively biased distribution of power across orientations;
151
+ [988.640 --> 992.560] in other images you may have a more uniform power in all directions.
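A sketch of how the orientation bias of an image's spatial frequency content could be measured with a 2D Fourier transform (details such as bin count and the test image are my assumptions, not the authors' pipeline):

```python
# Sketch of orientation-resolved spectral power: how much energy the image's
# 2D Fourier spectrum carries at each orientation.
import numpy as np

def orientation_power(image, n_bins=36):
    f = np.fft.fftshift(np.fft.fft2(image))
    power = np.abs(f) ** 2
    h, w = image.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    theta = np.degrees(np.arctan2(yy, xx)) % 180.0          # orientations 0..180
    idx = np.digitize(theta.ravel(), np.linspace(0, 180, n_bins + 1)) - 1
    return np.bincount(idx.clip(0, n_bins - 1), weights=power.ravel(),
                       minlength=n_bins)

img = np.tile(np.sin(np.linspace(0, 20 * np.pi, 128)), (128, 1))  # oriented gratings
p = orientation_power(img)
print(int(np.argmax(p)) * 180 // 36, "degree bin dominates")      # strongly biased
```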
152
+ [992.720 --> 1001.600] And finally the third option of feature we are looking at is maybe the harder to analyze but
153
+ [1001.600 --> 1009.120] is the more cognitive one, the cues about where gravity or the floor is. So this cannot be
154
+ [1009.120 --> 1014.960] directly studied with low-level features, so we are using a deep learning network trained with
155
+ [1014.960 --> 1022.160] actual images of known orientation and that network can tell us what the orientation of that
156
+ [1022.160 --> 1028.640] image is of any image and how certain the network is of that orientation. So then we can end up with
157
+ [1028.640 --> 1035.360] images that clearly tell us where upright is and other images that do not give such a clear signal.
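A rough sketch of the kind of orientation network described here, under heavy assumptions: a small PyTorch classifier trained on 90-degree rotations of images (RotNet-style), whose softmax probability serves as the "certainty" about upright. The architecture, the 90-degree rotation bins, and the toy data are all placeholders, not the actual network used in the study:

    import torch
    import torch.nn as nn

    class RotNet(nn.Module):
        """Tiny 4-way rotation classifier (0/90/180/270 degrees)."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(32, 4)

        def forward(self, x):
            return self.head(self.features(x).flatten(1))

    def rotation_certainty(model, img):
        """Return (predicted rotation in degrees, softmax confidence) for one (1,H,W) image."""
        model.eval()
        with torch.no_grad():
            probs = torch.softmax(model(img.unsqueeze(0)), dim=1).squeeze(0)
        k = int(torch.argmax(probs))
        return 90 * k, float(probs[k])

    def make_batch(imgs):
        # Self-supervised labels: every image is presented at all four rotations.
        xs, ys = [], []
        for im in imgs:
            for k in range(4):
                xs.append(torch.rot90(im, k, dims=(1, 2)))
                ys.append(k)
        return torch.stack(xs), torch.tensor(ys)

    model = RotNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    imgs = [torch.rand(1, 64, 64) for _ in range(8)]    # stand-in for real photographs
    x, y = make_batch(imgs)
    for _ in range(5):                                  # a few toy training steps
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    print(rotation_certainty(model, imgs[0]))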
158
+ [1038.320 --> 1042.480] So with these three options we are going to do essentially the same analysis: with all of them we get
159
+ [1042.480 --> 1049.120] a metric that tells us how strongly the saliency map is biased, how strongly the frequencies are
160
+ [1049.120 --> 1054.080] biased, or how well the deep neural network can tell us where upright is.
161
+ [1054.960 --> 1062.160] And we can correlate that with the strength of the effect on rotating the saccades that I showed before.
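As an illustration of this correlation step, with entirely hypothetical placeholder numbers (one entry per image); the use of a Spearman rank correlation is an assumption, since the talk does not specify the statistic:

    import numpy as np
    from scipy.stats import spearmanr

    # Hypothetical per-image values, for illustration only.
    saliency_bias  = np.array([0.12, 0.40, 0.33, 0.08, 0.55])   # structure in the saliency map
    freq_bias      = np.array([0.20, 0.61, 0.45, 0.15, 0.70])   # orientation bias of spectral power
    upright_conf   = np.array([0.90, 0.50, 0.75, 0.95, 0.40])   # network certainty about "up"
    saccade_effect = np.array([0.10, 0.52, 0.38, 0.05, 0.66])   # how strongly saccades rotate with the image

    for name, metric in [("saliency", saliency_bias),
                         ("spatial frequency", freq_bias),
                         ("upright certainty", upright_conf)]:
        rho, p = spearmanr(metric, saccade_effect)
        print(f"{name}: rho = {rho:.2f}, p = {p:.3f}")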
162
+ [1063.680 --> 1068.320] And this is the summary of the result, and what we can see, at least preliminarily because this is
163
+ [1068.320 --> 1074.320] still a small set of images, is that the spatial frequency, either low or high, seems to be
164
+ [1074.880 --> 1084.800] the strongest contributor to this effect. So to summarize, today we have shown that saccade generation is
165
+ [1084.800 --> 1090.640] not uniform in all directions. We humans make especially horizontal saccades, and this may not be
166
+ [1090.640 --> 1097.040] true for other animal species which is another interesting line of approach to this project problem.
167
+ [1097.840 --> 1104.640] We have this big bias towards horizontal directions, and less so towards vertical, and this bias is not
168
+ [1104.640 --> 1111.920] really fixed neither to the head or the world it's probably initially biased towards the head but it
169
+ [1111.920 --> 1118.320] can be affected by the image content. And this is especially true for larger saccades.
170
+ [1119.120 --> 1125.760] Larger saccades seem to take more information about the tilt of the image and reorient themselves
171
+ [1125.760 --> 1130.080] with the image while the small saccades are more tightly tied to the head.
172
+ [1131.920 --> 1136.560] And then the spatial frequency content and how it is directionally biased seems to be the best
173
+ [1136.560 --> 1145.040] predictor for now about the effect of different images on this saccade tilt. So thank you everybody
174
+ [1145.040 --> 1149.360] and I want to thank also the people in my lab especially Stephanie Rives and Raul Rodriguez
175
+ [1149.360 --> 1155.280] which contributed to this work and the funding agencies. And that's my eye doing a little bit of
176
+ [1155.280 --> 1165.280] torsion. Okay thank you.
177
+ [1173.280 --> 1180.480] So that was really interesting basically telling us that saccade directions are modulated by the
178
+ [1180.480 --> 1185.680] information in the image; the saccades are seeking the information. Do you guys have any questions?
179
+ [1187.200 --> 1188.160] Laura is that a hand?
180
+ [1198.480 --> 1203.840] Yeah thank you very much nice talk and very nice project. Do you think that is related somehow
181
+ [1203.920 --> 1212.080] to Listing's plane? So the initial data that I showed, with the small effect, that could be
182
+ [1212.720 --> 1217.120] but the effect of the image is so much bigger than whatever anything that you could really predict
183
+ [1217.120 --> 1222.640] with Listing's plane. Okay, thinking rather of the microsaccades, which are less sensitive
184
+ [1224.240 --> 1231.760] to the image. Yeah no certainly but still I don't think that all saccades are generated by the
185
+ [1231.760 --> 1238.960] same circuits as far as we know right now. So I don't think enforcing the restrictions of Listing's
186
+ [1238.960 --> 1243.760] plane would necessarily predict why the small ones versus the big ones would be differently affected
187
+ [1243.760 --> 1260.480] by the image. Thank you. Thank you Hoi for your lovely talk. Being somebody who studies development
188
+ [1260.480 --> 1271.360] you know where I'm going to go. And so I can imagine the spatial frequency piece being quite reflexive
189
+ [1272.000 --> 1278.080] and then I can imagine the world and the task and the free viewing being more learned.
190
+ [1279.440 --> 1281.760] Are you willing to speculate about any of that?
191
+ [1282.720 --> 1292.560] Not too much but I think there are some studies that have shown so people have looked at the
192
+ [1292.560 --> 1299.920] horizontal bias across ages; there may be one study that I know of, and it seems to become more
193
+ [1299.920 --> 1306.240] and more biased with age. So it starts more with saccades made in all directions and with age
194
+ [1306.320 --> 1313.120] the distribution becomes tighter and tighter for some time. But I don't know if the effect of
195
+ [1313.120 --> 1320.480] tilting the image would independently change with age or not. No idea. Thank you.
196
+ [1324.480 --> 1330.720] Hi you said that spatial frequency influences. I want to know what spatial frequencies do what?
197
+ [1331.200 --> 1339.920] Yeah so right now we essentially group we tried to look at different spatial frequency bands
198
+ [1340.640 --> 1345.760] but we didn't find a different effect. So we found the same result for low or high right now.
199
+ [1346.640 --> 1353.520] So if we just look at how each band of frequencies is biased in directions we see the same effect.
200
+ [1361.680 --> 1367.600] Hello. Hello. Very cool talk. So when you say it goes with the head I mean there's two ways
201
+ [1367.600 --> 1372.240] you can think of that, right? It can go with the eye line or it can go with the vestibular signal of
202
+ [1372.240 --> 1378.240] the head right. So if you were to have some to have a situation where the tilt does not go with
203
+ [1378.240 --> 1383.200] gravity let's say a person's lying down right and they're doing the same head tilt. How would you
204
+ [1383.200 --> 1388.560] predict that the behavior would change? Yes, good question, but I think it,
205
+ [1388.880 --> 1396.160] since we see that still stay with the head so it's not aligned with gravity I would expect to
206
+ [1396.160 --> 1405.040] still go with the head mostly, even if it is the eye line, yeah, or the roll direction of the head.
207
+ [1405.040 --> 1413.440] Yeah thank you. My question is how do people scan an entire scene if they're primarily only
208
+ [1413.520 --> 1419.360] using horizontal saccades? Are these coming back to the same place and then going up, or are they
209
+ [1419.360 --> 1423.840] actually going to unique targets? It's a good point, I forgot to clarify: in those directions we ignore
210
+ [1423.840 --> 1429.840] size; when we say there are more saccades in one direction, they could be more small ones while
211
+ [1429.840 --> 1439.280] you have more big ones in the other direction, so I think in general you can cover
212
+ [1439.280 --> 1444.160] the entire field but you're going to switch more for each other and still those saccades are not
213
+ [1444.160 --> 1450.320] perfectly horizontal they still have a oblique component so you can zigzag around the image.
214
+ [1451.520 --> 1456.480] Yep, it's interesting; it doesn't seem the most efficient thing to do to extract information.
215
+ [1457.280 --> 1462.720] Okay so we have a longer discussion section at the end of every session so if you have more
216
+ [1462.720 --> 1466.640] questions let's reserve it for then thank you Jorge.
transcript/allocentric_WmtANkx6Bok.txt ADDED
@@ -0,0 +1,24 @@
1
+ [0.000 --> 2.880] Why is Arvind Kejriwal not wearing a suit and a tie?
2
+ [2.880 --> 4.560] The way that Ankar was wearing.
3
+ [4.560 --> 5.880] Is it because he cannot afford it?
4
+ [5.880 --> 8.840] Why is Kejriwal-ji casually wearing a shirt?
5
+ [8.840 --> 11.440] Or if you think about it, it's reflective of the middle class.
6
+ [11.440 --> 14.800] Why the hell is there a Reynolds pen tucked into one of his pockets?
7
+ [14.800 --> 15.880] And camera capturing it.
8
+ [15.880 --> 18.520] Why is Modiji not wearing a T-shirt or sherwani?
9
+ [18.520 --> 19.440] He should have been bleeding blue.
10
+ [19.440 --> 22.160] He's rather wearing a blue colored jacket, a blue colored stole.
11
+ [22.160 --> 26.240] At a time when the temperature was 32 degrees and humidity was 36 percent in Ahmedabad.
12
+ [26.240 --> 28.000] Why is Gandhiji scantily dressed?
13
+ [28.000 --> 31.920] Why is he just wearing sandals to attend the second round table conference?
14
+ [31.920 --> 33.480] At a time when London is freezing.
15
+ [33.480 --> 36.960] Each of these individuals, they choose their attire as a form of communication.
16
+ [36.960 --> 39.120] The timings may change, the rears may change.
17
+ [39.120 --> 41.760] But each of those leaders want to convey a message.
18
+ [41.760 --> 44.480] A message about their values, their beliefs, their affiliations.
19
+ [44.480 --> 48.320] Many people have this wrong perception that polity, governance, society,
20
+ [48.320 --> 52.160] it is all about understanding the constitution or judiciary, fundamental rights,
21
+ [52.160 --> 53.640] DPSPs, parliament.
22
+ [53.640 --> 53.880] No.
23
+ [53.880 --> 57.080] These are the people who merely stick to the books or mechanically revise those subjects.
24
+ [57.080 --> 59.080] And they really become the content of the book.
transcript/allocentric_WwYDMpD7j4Q.txt ADDED
@@ -0,0 +1,178 @@
1
+ [0.000 --> 7.000] All right, excellent.
2
+ [7.000 --> 8.000] Excellent.
3
+ [8.000 --> 9.000] Okay. Sorry about that.
4
+ [9.000 --> 10.000] Thanks for the introduction.
5
+ [10.000 --> 12.000] I'm sorry for wasting a minute or two there.
6
+ [12.000 --> 14.000] Also thanks to the Neuromatch organizers,
7
+ [14.000 --> 16.000] who are putting on this greatly needed meeting.
8
+ [16.000 --> 18.000] So yeah, I'm a computational neuroscientist.
9
+ [18.000 --> 21.000] And much of my work is focused on the theories of the neural
10
+ [21.000 --> 23.000] computations underlying spatial cognition,
11
+ [23.000 --> 26.000] including interactions between place cells and grid cells.
12
+ [26.000 --> 29.000] But in this talk, I'd like to describe some data and some modeling
13
+ [29.000 --> 33.000] that potentially supports a theoretical role for septal neurons
14
+ [33.000 --> 37.000] in maintaining the allocentric reference frame
15
+ [37.000 --> 41.000] of the spatial cognitive system, at least in rats.
16
+ [41.000 --> 46.000] Particularly, I'm going to focus on the question of path integration
17
+ [46.000 --> 57.000] and how, and how you might be able to maintain an allocentric reference frame
18
+ [57.000 --> 61.000] of the path integration, which basically comes down to the problem
19
+ [61.000 --> 66.000] of how to reset the accumulation of encoding errors.
20
+ [66.000 --> 72.000] So path integration as has been described is the critical component of navigation
21
+ [72.000 --> 76.000] that contributes spatial information by integrating self-motion signals.
22
+ [76.000 --> 77.000] So resetting is necessary.
23
+ [77.000 --> 83.000] because integrating self-motion accumulates errors in position estimates over time.
24
+ [83.000 --> 87.000] And that's illustrated by this green trajectory here,
25
+ [87.000 --> 90.000] starting with this journey.
26
+ [90.000 --> 92.000] And so there's a lot of different ways of doing that.
27
+ [92.000 --> 97.000] There's been two main kind of camps in a theoretically speaking for how to model
28
+ [97.000 --> 102.000] path integration and how it might interact with landmarks and queues in the environment.
29
+ [102.000 --> 106.000] One of my much older papers kind of explored the oscillatory interference mechanism,
30
+ [106.000 --> 110.000] which is where if you have a theta-rhythmic oscillators that are also modulated
31
+ [110.000 --> 115.000] by direction and speed, then you can combine them in different ways to form grid cells.
32
+ [115.000 --> 120.000] For instance, those were models from John O'Keefe and others and Neil Burgess.
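A toy sketch of the oscillatory-interference idea being described, assuming Python/NumPy; the gain, speed, and threshold are arbitrary illustrative values and this is not the published model. Each velocity-controlled oscillator runs slightly faster than a baseline theta oscillator in proportion to velocity along its preferred direction, so its phase relative to baseline integrates displacement, and summing several such oscillators yields spatially periodic, grid-like activity:

    import numpy as np

    dt = 0.002                      # time step (s)
    f0 = 8.0                        # baseline theta frequency (Hz)
    beta = 0.05                     # assumed phase gain (cycles per cm)

    rng = np.random.default_rng(0)
    steps = 50_000
    heading = np.cumsum(rng.normal(0, 0.05, steps))    # smoothly wandering heading
    speed = 15.0                                       # cm/s, held constant
    vx, vy = speed * np.cos(heading), speed * np.sin(heading)

    def vco_phase(pref_dir):
        # Instantaneous frequency = f0 + beta * (velocity projected on the preferred direction).
        v_along = vx * np.cos(pref_dir) + vy * np.sin(pref_dir)
        return 2 * np.pi * np.cumsum(f0 + beta * v_along) * dt

    base_phase = 2 * np.pi * f0 * dt * np.arange(steps)
    # Interfere three VCOs with preferred directions 60 degrees apart, then rectify.
    signal = sum(np.cos(vco_phase(d) - base_phase) for d in (0, np.pi / 3, 2 * np.pi / 3))
    rate = np.maximum(signal - 1.5, 0)                 # grid-like "firing rate"
    print("fraction of time above threshold:", float((rate > 0).mean()))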
33
+ [120.000 --> 126.000] But I showed that you could generalize this or generisize this into a place field,
34
+ [126.000 --> 131.000] one of a generic spatial mapping model where if you randomly,
35
+ [131.000 --> 134.000] if you have random preferred directions with these VCOs,
36
+ [134.000 --> 139.000] these velocity control oscillators, then you can build place fields.
37
+ [139.000 --> 143.000] That are stable and based basically on path integration.
38
+ [143.000 --> 148.000] And this was work with Jim Knierim's lab, modeling his data on circular tracks.
39
+ [148.000 --> 153.000] And so this is just kind of showing how the amplitude of synchrony can map into space
40
+ [153.000 --> 156.000] through path integration with oscillators.
41
+ [156.000 --> 160.000] But one of the things that I was really concerned about in this paper was,
42
+ [160.000 --> 165.000] was how does a sensory cue actually feed back into the phase code in order to stabilize a phase code
43
+ [165.000 --> 167.000] from drifting away.
44
+ [168.000 --> 172.000] And so in the bottom panels here, you can see that if you have,
45
+ [172.000 --> 174.000] if you have a positive sum amount of phase error,
46
+ [174.000 --> 177.000] and you want that to be corrected at some point in time,
47
+ [177.000 --> 180.000] that represents the interaction with an external cue,
48
+ [180.000 --> 184.000] then I simply posited a very abstract,
49
+ [184.000 --> 190.000] an abstract feedback process represented in this diagram on the bottom right,
50
+ [190.000 --> 195.000] which is what I needed to be able to study this in terms of remapping and partial remapping.
51
+ [195.000 --> 198.000] But this is definitely a black box.
52
+ [198.000 --> 201.000] So this kind of really raised the big question,
53
+ [201.000 --> 207.000] like, is there an actual neurobiological basis for having this kind of spatial feedback,
54
+ [207.000 --> 209.000] particularly in the phase domain?
55
+ [209.000 --> 217.000] So the question is, is there a phase code, an oscillatory code, potentially outside of the hippocampus
56
+ [217.000 --> 221.000] that could serve a resetting function for path integration?
57
+ [222.000 --> 226.000] And so this data is a, is a result of collaboration with,
58
+ [226.000 --> 229.000] with Tad Blair's lab at UCLA,
59
+ [229.000 --> 234.000] where he performed these very long duration recordings in an 80 centimeter cylindrical arena,
60
+ [234.000 --> 239.000] basically standard random foraging tasks.
61
+ [239.000 --> 242.000] While recording a reference hippocampal LFP signal,
62
+ [242.000 --> 248.000] he recorded essentially from all of the subcortical brain areas highlighted here,
63
+ [249.000 --> 256.000] that are in one way or another interconnected with the hippocampal and entorhinal formation.
64
+ [256.000 --> 259.000] And so he recorded from all of these areas,
65
+ [259.000 --> 268.000] and I basically took this into an ET analysis paradigm to look at the amount of spatial information carried in phase,
66
+ [268.000 --> 275.000] in the theta phase, as well as correlating that to velocity and position and other characteristics.
67
+ [276.000 --> 280.000] And I just kind of give the hint of where things are going.
68
+ [280.000 --> 286.000] I found this kind of spatial phase code, which I think might serve the theoretical role that I described,
69
+ [286.000 --> 289.000] only in one place, and that was in lateral septum.
70
+ [289.000 --> 294.000] And lateral septum happens to be the primary subcortical output target of the hippocampus.
71
+ [294.000 --> 299.000] So it's possibly the entrance point to an interesting feedback loop,
72
+ [299.000 --> 304.000] if we consider all the interconnectivity within these networks.
73
+ [304.000 --> 308.000] And so I'm going to take you through what some of this lateral septal data looks like.
74
+ [308.000 --> 311.000] And just as a brief overview, which is recorded,
75
+ [311.000 --> 314.000] what the theta signal is, probably this crowd doesn't need that.
76
+ [314.000 --> 318.000] You can record the LFP, you can do a band pass filter, find where the peaks are,
77
+ [318.000 --> 325.000] and then we can take the spike timing from a relative to peak to peak within each theta cycle.
78
+ [325.000 --> 329.000] And we map that to the phase domain that goes from zero to two pi.
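A minimal sketch of the peak-to-peak phase assignment just described, assuming NumPy/SciPy; the filter band and order are assumptions, not values taken from the study:

    import numpy as np
    from scipy.signal import butter, filtfilt, find_peaks

    def spike_theta_phase(lfp, spike_times, fs, band=(6.0, 10.0)):
        """Assign each spike a theta phase in [0, 2*pi), interpolating between theta peaks."""
        b, a = butter(3, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        theta = filtfilt(b, a, lfp)

        peaks, _ = find_peaks(theta, distance=int(fs / band[1]))
        peak_t = peaks / fs                                  # peak times in seconds

        # Phase grows linearly from 0 at one theta peak to 2*pi at the next.
        cycle = np.searchsorted(peak_t, spike_times, side="right") - 1
        valid = (cycle >= 0) & (cycle < len(peak_t) - 1)
        phase = np.full(len(spike_times), np.nan)
        t0, t1 = peak_t[cycle[valid]], peak_t[cycle[valid] + 1]
        phase[valid] = 2 * np.pi * (spike_times[valid] - t0) / (t1 - t0)
        return phase

    # Toy example: a noisy 8 Hz "LFP" and a handful of spike times.
    fs = 1000.0
    t = np.arange(0, 10, 1 / fs)
    lfp = np.cos(2 * np.pi * 8 * t) + 0.2 * np.random.randn(t.size)
    spikes = np.array([1.01, 2.35, 5.50, 7.77])
    print(spike_theta_phase(lfp, spikes, fs))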
79
+ [330.000 --> 335.000] And just so that explains the y-axis of a lot of the plots that I will show.
80
+ [335.000 --> 340.000] And so here's just a standard spike trajectory plot of one of these cells.
81
+ [340.000 --> 344.000] And you can clearly see this is a very long, I think a two hour recording.
82
+ [344.000 --> 349.000] So you've got the gray trajectory there with the random foraging and the red spikes.
83
+ [349.000 --> 351.000] And you can see clear spatial modulation.
84
+ [351.000 --> 357.000] So if we look at the top map here, you can see that the firing rate clearly illustrates
85
+ [357.000 --> 362.000] that it is a broad, place like field and all the west side of the arena,
86
+ [362.000 --> 363.000] which you know, that's great.
87
+ [363.000 --> 368.000] We've got spatial modulation and lateral septum that's been shown sparsely in the literature,
88
+ [368.000 --> 369.000] but it hasn't shown before.
89
+ [369.000 --> 376.000] What hasn't really been shown is that relationship to the ongoing theta oscillation and the difficult campus system.
90
+ [376.000 --> 383.000] So if you look at the bottom map here, this is a phase map of the average phase at every location.
91
+ [384.000 --> 393.000] And you can see that there is a correspondence between the pattern of modulation in the rate map on the top and the phase map on the bottom.
92
+ [393.000 --> 395.000] And so this is, this is interesting.
93
+ [395.000 --> 397.000] This is over a very long period of time, a very long recording.
94
+ [397.000 --> 401.000] And there appears to be a very strong relationship between rate and phase.
95
+ [401.000 --> 405.000] So that's kind of the basis of the idea going forward.
96
+ [405.000 --> 409.000] And that was an I quantified that by looking at the phase rate correlation,
97
+ [409.000 --> 414.000] a circular-linear correlation over the pixels in these maps.
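For reference, one standard circular-linear correlation (Mardia's formulation) that could be used for such phase-versus-rate maps; whether this exact statistic matches the one used in the talk is an assumption:

    import numpy as np

    def circular_linear_corr(theta, x):
        """Mardia's circular-linear correlation between angles theta (radians) and values x; returns r in [0, 1]."""
        rxc = np.corrcoef(x, np.cos(theta))[0, 1]
        rxs = np.corrcoef(x, np.sin(theta))[0, 1]
        rcs = np.corrcoef(np.cos(theta), np.sin(theta))[0, 1]
        return np.sqrt((rxc**2 + rxs**2 - 2 * rxc * rxs * rcs) / (1 - rcs**2))

    # Toy check: phase that advances (gets earlier) as rate increases, plus noise.
    rng = np.random.default_rng(1)
    rate = rng.uniform(0, 10, 500)                       # e.g. firing rate per map pixel
    phase = np.mod(np.pi - 0.15 * rate + 0.3 * rng.standard_normal(500), 2 * np.pi)
    print(f"r = {circular_linear_corr(phase, rate):.2f}")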
98
+ [414.000 --> 416.000] And you can see what that looks like.
99
+ [416.000 --> 420.000] So you get kind of the expected negative phase rate relationship.
100
+ [420.000 --> 424.000] And I termed these cells, I called them phaser cells.
101
+ [424.000 --> 430.000] So these are lateral septal phaser cells for want to the better word.
102
+ [430.000 --> 432.000] We'll see if it catches on.
103
+ [432.000 --> 435.000] But basically we're going to analyze that correlation.
104
+ [435.000 --> 442.000] But the correlation kind of immediately brings to mind that like you can explain this with a fairly simple phase coding mechanism.
105
+ [442.000 --> 450.000] So if you posit that there's some cell that receives an inhibitory theta-rhythmic input, such as the magenta sinusoid that you see here.
106
+ [450.000 --> 462.000] And then you also posit that well maybe it also receives a slowly changing or slowly ramping depolarizing input like the green triangle wave here.
107
+ [462.000 --> 470.000] And that's all you need to get kind of the phase coding relationship that we just saw where as the input increases.
108
+ [470.000 --> 476.000] You start to get activity in the cell and then you get more and more activity within each data cycle as the input goes up.
109
+ [476.000 --> 482.000] But then the activity within each data cycle also occurs an earlier time or it initiates an earlier time.
110
+ [482.000 --> 487.000] So you have this joint modulation of phase and and rate.
111
+ [487.000 --> 494.000] And critically, once the input stops slowly ramping up and gets slowly ramped back down, symmetrically you see the same exact thing.
112
+ [494.000 --> 502.000] The phase will deflect all the way back up to the baseline phase once that input has gone away.
113
+ [502.000 --> 505.000] So there's no kind of hysteresis or learning going on here.
114
+ [505.000 --> 516.000] So this is a symmetric bidirectional phase coding mechanism, and something like this has been posited for place cells in hippocampus before learning or, you know, any kind of
115
+ [516.000 --> 519.000] network effects come into play.
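A toy numerical illustration of this symmetric phase-coding mechanism, with arbitrary made-up parameters (not a biophysical model): a theta-rhythmic "inhibition" plus a slow up-then-down ramp produces earlier threshold crossings (phase advance) and a larger active fraction of each cycle as the ramp grows, and the reverse as it shrinks:

    import numpy as np

    fs = 2000.0                       # samples per second
    f_theta = 8.0
    t = np.arange(0, 4.0, 1 / fs)     # 4 s: ramp up for 2 s, then back down
    ramp = np.where(t < 2.0, t / 2.0, (4.0 - t) / 2.0)          # 0 -> 1 -> 0
    inhibition = 0.5 * (1 + np.cos(2 * np.pi * f_theta * t))    # theta-rhythmic inhibition
    drive = ramp - inhibition                                    # net "membrane" drive

    phase = np.mod(2 * np.pi * f_theta * t, 2 * np.pi)
    cycle = np.floor(f_theta * t).astype(int)

    for c in range(0, int(4 * f_theta), 4):                      # sample a few theta cycles
        in_cycle = cycle == c
        above = in_cycle & (drive > 0)                           # suprathreshold = "spiking"
        if above.any():
            first_phase = phase[above][0]                        # earliest firing phase this cycle
            frac = above.sum() / in_cycle.sum()                  # proxy for firing rate
            print(f"cycle {c:2d}: first-spike phase = {first_phase:4.2f} rad, "
                  f"active fraction = {frac:.2f}")
        else:
            print(f"cycle {c:2d}: below threshold")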
116
+ [519.000 --> 523.000] But the idea is that this gives you a scalar code.
117
+ [523.000 --> 533.000] So this is basically the co modulation of phase and rate means that the phase is basically just a conversion of rate.
118
+ [533.000 --> 540.000] It's analogous to taking the spatial information in a rate code and then putting it into the phase domain.
119
+ [540.000 --> 544.000] And so we can think about functionally why would you want to do that.
120
+ [544.000 --> 553.000] And it is in particular a high contrast of what you see in and in typical place cells and hippocampus.
121
+ [553.000 --> 562.000] So on the right I've taken a figure from the Souza and Tort 2017 paper, where they analyzed a large place cell dataset and showed this kind of canonical
122
+ [562.000 --> 567.000] unidirectional asymmetric relationship between phase and rate.
123
+ [567.000 --> 573.000] So as the animal goes through a field the the phase will continually go down and does not return.
124
+ [573.000 --> 587.000] However, if you had a phaser cell field, as the rate goes up the phase would advance, and then as the rate goes down when the animal leaves the field, the phase would delay back up to the previous level.
125
+ [587.000 --> 594.000] And so looking for the type of phase code I set up a number of different criteria of four criteria three important ones.
126
+ [594.000 --> 604.000] So looking at the spatial phase information, looking at the total phase shift, how much does it change within that correlation and then the strength of that phase rate correlation essentially.
127
+ [604.000 --> 615.000] And then using those criteria I was able to filter Tad's entire dataset, this subcortical dataset, all those single-unit recordings, into
128
+ [615.000 --> 624.000] cells that meet these criteria and those that don't, and particularly it's important to look at whether this is actually a stable code or not.
129
+ [625.000 --> 634.000] So just looking within session I compared the up to the first hour of a session the early part of to the up to the last hour, the late part.
130
+ [634.000 --> 649.000] And so if you look at on the left of the spatial correlation and on the right a change in that total phase shift, this kind of illustrates that you do maintain spatial correlations across these long duration recordings.
131
+ [649.000 --> 667.000] And the phase shift that constitutes that phaser cell code does not significantly change; most cells remain within about pi over four, or about 45 degrees, across these multi-hour recordings.
132
+ [667.000 --> 689.000] And then you also want that to exist across days, and that is basically what we found, with the curves looking very similar between days; for the identified units we can track, individual cells look alike across days, and the vast majority of them do not have significant changes or flips in the direction of their phase shifts for this phase code.
133
+ [690.000 --> 696.000] A couple of them do the most of them are pretty stable, which is, which is pretty good. That's what you want to see.
134
+ [696.000 --> 706.000] But then the last thing besides stability is that you want to make sure these spatial responses really are spatial and not just a confound of spatial correlations of other aspects of the trajectory.
135
+ [706.000 --> 716.000] And then to kind of deconfound that, I trained a GLM, a generalized linear model, with both spatial predictors and trajectory-based predictors.
136
+ [716.000 --> 722.000] And so these variables called LMQ or just linear and quadratic sources up to second order spatial variation.
137
+ [722.000 --> 727.000] And then the trajectory predictors are wall distance speed and direction basically.
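An illustrative-only Poisson GLM in the spirit of what is described, assuming statsmodels is available; the simulated cell, arena size, and predictor definitions are placeholders rather than the study's actual design matrix:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n = 5000
    x, y = rng.uniform(-40, 40, n), rng.uniform(-40, 40, n)      # cm, assumed 80 cm arena
    speed = rng.gamma(2.0, 5.0, n)                                # cm/s
    heading = rng.uniform(0, 2 * np.pi, n)
    wall_dist = 40 - np.maximum(np.abs(x), np.abs(y))             # crude distance to the wall

    # Simulated cell: prefers the west side of the arena, weak speed dependence.
    rate = np.exp(-1.0 - 0.04 * x + 0.01 * speed)
    counts = rng.poisson(rate)

    X = np.column_stack([
        x, y, x**2, y**2, x * y,                 # spatial terms: linear and quadratic
        speed, wall_dist,
        np.cos(heading), np.sin(heading),        # direction enters as sine/cosine
    ])
    X = sm.add_constant(X)
    fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
    print(fit.params.round(3))                   # spatial terms should dominate here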
138
+ [727.000 --> 740.000] So this top grid. This top grid is showing you that the responses are utterly dominated by the spatial factors and almost not at all the trajectory based factors and other confounds.
139
+ [740.000 --> 749.000] And even if we look at the maximum possible contribution that each of these predictors made, which is the bottom plot here, you still see that dominance of the spatial relationships.
140
+ [750.000 --> 759.000] Though that does kind of also reveal that there is this hint of a trade off between how spatial the cells are and then how tuned to speed they are.
141
+ [759.000 --> 763.000] If you look at the sorting along the fourth column of S here.
142
+ [763.000 --> 768.000] I'm not sure if people can see my maps actually. I'm waving my mouse around on them.
143
+ [768.000 --> 772.000] So once that's kind of established, these are spatial cells and they are pretty stable.
144
+ [772.000 --> 777.000] We have those criteria. We can kind of see where these cells fall.
145
+ [777.000 --> 784.000] So on this plot, I'm showing all cells that have significant spatial phase information.
146
+ [784.000 --> 791.000] And so that spatial phase information on the x axis and then the total phase shift that phase modulation is on the y axis.
147
+ [791.000 --> 797.000] And you can see we have these these cells with a negative phase rate correlation here in the bottom part of the plot.
148
+ [797.000 --> 803.000] And these these circles are the size of the circle correlates to how strong correlation is.
149
+ [803.000 --> 813.000] It's like nice strong phase rate correlations. I'm showing these strong negative phase modulation giving us lots of information about space and phase, which is great.
150
+ [813.000 --> 817.000] But then the kind of a surprising thing was that if we look at the top, we also saw cells.
151
+ [817.000 --> 825.000] So these cells appear with positive phase shifts. This was surprising because of that simple mechanism that I described earlier.
152
+ [825.000 --> 831.000] You wouldn't expect higher firing rate to correspond to later firing, at least not under that model.
153
+ [831.000 --> 837.000] So there's probably something else going on here and this potentially has interesting implications.
154
+ [837.000 --> 846.000] So this shows some examples. These are five different examples of those negative phaser cells from different animals.
155
+ [846.000 --> 855.000] The top row shows you the rate maps, the middle row shows those phase maps. And then the bottom shows the phase rate correlations, just like the example cells we showed earlier.
156
+ [855.000 --> 865.000] And you can see those like wall responses, there's place like responses, place boundary conjunctive responses and kind of a broader but still spatial responses.
157
+ [865.000 --> 867.000] So it's a strong diversity.
158
+ [867.000 --> 869.000] Joe, Joe, you have one minute left.
159
+ [870.000 --> 876.000] Darn, okay. And so here's an example of the positive phaser cells, which are not as strongly spatial.
160
+ [876.000 --> 883.000] So if you look at the phase rate, the trajectories of these cells, you can see across phase.
161
+ [883.000 --> 891.000] They interleave very nicely. You can see in the spot here. And then if you look at the typical firing phase of these populations, you can see just how interleaved they are.
162
+ [891.000 --> 897.000] So at any moment in time, you've got information coming in the phase domain about space from one population or the other.
163
+ [897.000 --> 911.000] And so very quickly, the modeling of this is that I had a dynamical circuit model with a very simple structure here, using feedforward suppression of the positive cells by the negative cells.
164
+ [911.000 --> 923.000] And you can get this complimentary relationship pretty much exactly, which is very nice. And then I use that GLM as a generative model to generate spatial tuning curves for both a thousand negative.
165
+ [924.000 --> 929.000] And so these are just random target bursting.
166
+ [929.000 --> 935.000] Is there random theta bursting neurons that are not path integrating our spatial.
167
+ [935.000 --> 940.000] The questions can we make can we make them reset to a path integration signal.
168
+ [940.000 --> 946.000] And so with different types of supervised base codes, which I won't get into detail because I'm running at a time.
169
+ [947.000 --> 957.000] You can actually see how well these these codes were learned by by down by these downstream target cells based on a very simple competitive learning mechanism.
170
+ [957.000 --> 963.000] And so once you have that, we have a very small number of these cells, you can actually do population decoding of just the phase.
171
+ [963.000 --> 972.000] And you can see that well, this this top structure didn't work very well, but bottom one did. And that is going to lead to a very rapid phase resetting.
172
+ [973.000 --> 978.000] The mechanism basically a sub second reset mechanism for path integration.
173
+ [978.000 --> 984.000] So that's basically the idea that we've got fairly simple network structures and circuits.
174
+ [984.000 --> 996.000] And using this kind of single location-based synchrony idea, there might be a subcortical pathway for phase feedback in the spatial system that might support path integration and other elements of navigation.
175
+ [997.000 --> 999.000] And I would just end there.
176
+ [999.000 --> 1009.000] Perfect. Okay. So we have to move swiftly on to make sure that Balaash has enough time for his talk. You do have a few questions in the Q&A from Eleanor.
177
+ [1009.000 --> 1014.000] So I would encourage you to go check a look at that. And thank you again for this great talk.
178
+ [1014.000 --> 1017.000] I'm going to move swiftly swiftly on.
transcript/allocentric_XhhkhpK-3L4.txt ADDED
@@ -0,0 +1,209 @@
1
+ [0.000 --> 9.880] Say you're at a cookout when you notice that there's a giant spider hanging out on your
2
+ [9.880 --> 10.880] friend's shoulder.
3
+ [10.880 --> 15.360] You want to avoid total pandemonium, so you casually wave to get their attention, then
4
+ [15.360 --> 17.400] make a brushing motion on your left shoulder.
5
+ [17.400 --> 21.360] But instead of realizing that there inches away from certain death, your friend thinks
6
+ [21.360 --> 25.160] that you're busting out a new dance move, and the whole cookout starts breaking it down.
7
+ [25.160 --> 29.040] Waving to say hello, yelping when you get hurt or brushing at your shoulder to try to
8
+ [29.040 --> 34.000] save your friend from mortal danger are all examples of non-verbal communication.
9
+ [34.000 --> 38.200] Non-verbal communication is the process of sharing thoughts and ideas using behavior other
10
+ [38.200 --> 39.200] than words.
11
+ [39.200 --> 43.800] The gestures, movements, and facial expressions we use to share information with one another
12
+ [43.800 --> 46.160] are all forms of this type of communication.
13
+ [46.160 --> 50.040] It also includes things like smiling to show you're happy, or giving a thumbs up to say
14
+ [50.040 --> 51.040] okay.
15
+ [51.040 --> 54.640] In other words, non-verbal communication is kind of like a game of charades.
16
+ [54.640 --> 57.920] Only you're playing it all the time, even if you don't realize it.
17
+ [57.920 --> 63.720] In fact, around 65% of the meaning we get from communication comes from non-verbal signals.
18
+ [63.720 --> 68.000] So understanding how non-verbal communication works can help you better express yourself
19
+ [68.000 --> 69.640] and avoid being misunderstood.
20
+ [69.640 --> 74.480] I'm Cisandra Ryder, and this is Study Hall, intro to human communication.
21
+ [74.480 --> 82.840] But non-verbal communication isn't a solo act.
22
+ [82.840 --> 84.240] It's more like a duet.
23
+ [84.240 --> 87.560] This is because our non-verbal and verbal communication work together as part of the
24
+ [87.560 --> 88.560] same system.
25
+ [88.560 --> 93.040] Verbal communication uses words to share ideas, and non-verbal communication uses gestures
26
+ [93.040 --> 94.040] and sounds.
27
+ [94.040 --> 98.120] It's like verbal communication is the melody, and non-verbal communication is the harmony.
28
+ [98.120 --> 101.880] And when their powers combine, our messages become even more meaningful.
29
+ [101.880 --> 106.120] For instance, we tend to rely on verbal communication to share complex ideas and express ourselves
30
+ [106.120 --> 107.120] clearly.
31
+ [107.120 --> 110.920] Like when someone asks us for directions, we use spoken or written words to explain which
32
+ [110.920 --> 111.920] route they should take.
33
+ [111.920 --> 116.120] You know, like turn left to the library, or it's the second door on your right.
34
+ [116.120 --> 120.320] Because to help someone get from point A to point B, they need as much specific information
35
+ [120.320 --> 121.320] as possible.
36
+ [121.320 --> 123.720] And that's where verbal communication really shines.
37
+ [123.720 --> 128.560] Non-verbal communication, on the other hand, adds extra context to the words that we use.
38
+ [128.560 --> 132.680] So along with using words to give directions, we can also use our hands to point out which
39
+ [132.680 --> 134.080] way someone should go.
40
+ [134.080 --> 137.840] Non-verbal cues can also clear things up when our words might be misinterpreted.
41
+ [137.840 --> 140.280] Like telling someone, go that way.
42
+ [140.280 --> 143.440] It'd be confusing unless you also pointed to where you wanted them to go.
43
+ [143.440 --> 147.080] We also use non-verbal communication to convey emotions and connect with others.
44
+ [147.080 --> 150.680] For instance, you'd probably smile while giving directions so the other person knows
45
+ [150.680 --> 152.440] that you're friendly and willing to help.
46
+ [152.440 --> 157.280] And finally, non-verbal communication also helps us make judgments about a person's credibility
47
+ [157.280 --> 158.560] or trustworthiness.
48
+ [158.560 --> 162.000] Like someone whose lost might not ask you for help if you're looking around and have
49
+ [162.000 --> 163.000] your arms crossed.
50
+ [163.000 --> 166.600] In this case, you're broadcasting that you're probably waiting for someone and don't have
51
+ [166.600 --> 168.560] time to answer a stranger's questions.
52
+ [168.560 --> 172.800] So if non-verbal communication can do all of these things, does that make it more important
53
+ [172.800 --> 174.120] than verbal communication?
54
+ [174.120 --> 176.400] Well, it depends on the context.
55
+ [176.400 --> 180.200] Like verbal communication is probably more important when you're making a big business
56
+ [180.200 --> 182.920] deal and want to make sure everyone's on the same page.
57
+ [182.920 --> 186.240] But if you're disagreeing with a friend, paying attention to their tone of voice and body
58
+ [186.240 --> 189.240] postures can clue you into how they're really feeling.
59
+ [189.240 --> 190.520] And that's normal.
60
+ [190.520 --> 194.400] Because non-verbal and verbal messages play different roles in how we communicate.
61
+ [194.400 --> 196.480] But they also have a few things in common.
62
+ [196.480 --> 201.640] Like both verbal and non-verbal communication include non-vocal and vocal elements.
63
+ [201.640 --> 207.040] For instance, writing in American Sign Language are non-vocal elements of verbal communication
64
+ [207.040 --> 209.200] because they both use symbols to make meaning.
65
+ [209.200 --> 211.360] And you don't actually speak them with your voice.
66
+ [211.360 --> 214.760] We also use non-vocal elements during non-verbal communication.
67
+ [214.760 --> 218.600] According to the field of kinesics, which is the study of movement, there are three main
68
+ [218.600 --> 224.120] types of non-vocal, non-verbal cues, gestures, facial expressions, and postures.
69
+ [224.120 --> 228.680] These are non-vocal and non-verbal because most gestures don't refer to a specific word
70
+ [228.680 --> 230.920] like a written or signed symbol does.
71
+ [230.920 --> 235.360] Like when you wave to your friend at the cookout, you could have been saying, hello, goodbye,
72
+ [235.360 --> 236.840] or trying to get their attention.
73
+ [236.840 --> 241.120] Because there isn't one single word that we associate with waving, we have to use context
74
+ [241.120 --> 246.360] clues, like facial expressions or spoken words to understand what the wave really means.
75
+ [246.360 --> 250.680] And while many gestures have more than one meaning, kinesics lets us sort them into different
76
+ [250.680 --> 253.680] categories based on the type of information they're sharing.
77
+ [253.680 --> 257.560] For instance, gestures that describe something are called illustrators.
78
+ [257.560 --> 260.840] Illustrators are used to clarify or reinforce a verbal message.
79
+ [260.840 --> 264.720] Like if you'd pointed at your friend's shoulder during the cookout and said, there's a huge
80
+ [264.720 --> 265.720] spider.
81
+ [265.720 --> 269.720] They would know exactly what you're communicating, in this case, that they need to brush
82
+ [269.720 --> 270.800] the spider off.
83
+ [270.800 --> 275.560] And by using an illustrator to clarify your verbal message, you can save your friend and
84
+ [275.560 --> 276.560] the cookout.
85
+ [276.560 --> 280.000] Then there are emblems, or gestures that have a meaning that people in a community or
86
+ [280.000 --> 281.400] culture have agreed upon.
87
+ [281.400 --> 284.760] Some common emblems include shaking your head to say no, or shrugging to show that
88
+ [284.760 --> 285.920] you don't know something.
89
+ [285.920 --> 289.720] In the cookout scenario, if your friend went to brush the spider off and asked if it was
90
+ [289.720 --> 293.680] gone, you might use the emblem of nodding your head instead of saying, yes.
91
+ [293.680 --> 298.000] Or if they asked how many spiders were on their shoulder, you could hold up one finger,
92
+ [298.000 --> 299.680] which would also be an emblem.
93
+ [299.680 --> 303.760] Basically, emblems are super helpful because they give us a way to communicate clearly without
94
+ [303.760 --> 305.440] using words at all.
95
+ [305.440 --> 309.840] We can also use gestures called regulators to manage our conversations with others.
96
+ [309.840 --> 313.520] They help keep the conversation flowing, like when we lean forward to show that we want someone
97
+ [313.520 --> 314.520] to keep talking.
98
+ [314.520 --> 317.200] But we can also use regulators to pause a conversation.
99
+ [317.200 --> 320.880] Like if your friend is telling a wild story, but you really need to tell them about the
100
+ [320.880 --> 324.600] spider on their shoulder, you might hold your hand out with your palm open to get them
101
+ [324.600 --> 325.600] to pause.
102
+ [325.600 --> 329.360] And in any scenario, regulators help us keep the conversation flowing and ensure everyone's
103
+ [329.360 --> 330.360] voice is heard.
104
+ [330.360 --> 333.840] Then there are adapters, which are gestures that help our bodies release tension during
105
+ [333.840 --> 338.280] stressful situations, like twirling our hair or clicking a pen during a job interview.
106
+ [338.280 --> 341.680] These are different from the other types of gestures because we usually aren't aware
107
+ [341.680 --> 342.680] that we're doing them.
108
+ [342.680 --> 346.800] And while they make us feel better in a tough situation, adapters can actually distract
109
+ [346.800 --> 348.360] the people we're communicating with.
110
+ [348.360 --> 352.280] Like hair twirling during an interview totally steals a spotlight from your awesome story
111
+ [352.280 --> 354.680] about how you saved your friend from a deadly spider bite.
112
+ [354.680 --> 358.800] Because even when we don't realize it, our non-verbal cues still send messages to other
113
+ [358.800 --> 359.800] people.
114
+ [359.800 --> 361.760] Even our subconscious hair twirling and pen clicking.
115
+ [361.760 --> 366.000] But with a little self-awareness, we can recognize and monitor our adapters and project confidence
116
+ [366.000 --> 367.760] in any situation.
117
+ [367.760 --> 371.520] Illustrators, emblems, regulators and adapters are important because they add meaning to
118
+ [371.520 --> 375.480] what we say and even replace verbal communication when the moment is right.
119
+ [375.480 --> 379.240] But gestures aren't the only non-vocal elements of non-brible communication.
120
+ [379.240 --> 383.600] We also use things like eye contact to create connections, share information, establish
121
+ [383.600 --> 387.000] our credibility, and even make a good impression when meeting someone new.
122
+ [387.000 --> 390.480] But eye contact can also be used to intimidate others.
123
+ [390.480 --> 394.880] Like we probably all remember disobeying the rules as a kid and getting the look from our
124
+ [394.880 --> 395.880] parents.
125
+ [395.880 --> 400.680] And they made eye contact, oh man, you knew you were in big trouble and needed to clean
126
+ [400.680 --> 402.200] your room right away.
127
+ [402.200 --> 407.040] Eye contact also interacts with other non-brible cues, like facial expressions, so we can better
128
+ [407.040 --> 409.360] understand what people are thinking and feeling.
129
+ [409.360 --> 413.880] For example, if you smile at a baby, they'll know your friendly and might even smile back.
130
+ [413.880 --> 417.880] Facial expressions, like smiles, are often viewed as innate, emotional reactions to the
131
+ [417.880 --> 418.880] world around us.
132
+ [418.880 --> 422.440] Like, smiling at strangers in public might feel totally involuntary to you.
133
+ [422.440 --> 428.280] But the truth is that all of our facial expressions, including smiles, are also social behaviors.
134
+ [428.280 --> 431.600] In many cultures, we smile to make other people feel at ease.
135
+ [431.600 --> 435.440] And because we wear those social smiles for the benefit of others, we view them differently
136
+ [435.440 --> 440.200] than the genuine smiles we put on when we're feeling strong emotions, like joy or excitement.
137
+ [440.200 --> 444.080] So like waving or giving the thumbs up, most facial expressions have different meanings
138
+ [444.080 --> 446.400] depending on how we use them in different contexts.
139
+ [446.400 --> 450.400] And the better we are at pairing facial expressions with our verbal communication, the more
140
+ [450.400 --> 452.080] effective our messages can be.
141
+ [452.080 --> 455.320] But there are also vocal elements of non-verbal communication.
142
+ [455.320 --> 457.560] Yep, you heard that right.
143
+ [457.560 --> 460.960] Some of the sounds we make count as non-verbal communication.
144
+ [460.960 --> 462.600] I know, I know.
145
+ [462.600 --> 463.760] That's pretty confusing.
146
+ [463.760 --> 467.440] But we often use sounds to add meaning to the words we speak, like when you raise your
147
+ [467.440 --> 470.360] voice when you're angry or speak quickly when you're excited.
148
+ [470.360 --> 474.760] Because these sounds aren't included in our grammar system, we call them paralanguage,
149
+ [474.760 --> 477.400] which literally means alongside language.
150
+ [477.400 --> 482.560] Paralanguage refers to the vocalized but non-verbal parts of a message, like pitch, volume,
151
+ [482.560 --> 484.360] rate of speech, and verbal fillers.
152
+ [484.360 --> 488.440] Like if I start talking loud and really fast, you might think something exciting is about
153
+ [488.440 --> 489.440] to happen.
154
+ [489.440 --> 493.000] Once we learn how paralanguage works, we can use it to convey meaning and emotion in our
155
+ [493.000 --> 494.480] conversations with others.
156
+ [494.480 --> 498.720] For instance, in English, we use a rising pitch to indicate that we're asking a question,
157
+ [498.720 --> 499.720] like this.
158
+ [499.720 --> 501.320] Is there a spider on my shoulder?
159
+ [501.320 --> 505.200] And if we want to emphasize the intensity of a verbal message, we might increase the volume
160
+ [505.200 --> 507.080] of our voice like this.
161
+ [507.080 --> 509.240] There's a giant spider on your shoulder.
162
+ [509.240 --> 513.880] Vocal elements of non-verbal communication make our words more expressive, and they can
163
+ [513.880 --> 519.200] even stand in for words when we need to express sudden feelings, like surprise or fright.
164
+ [519.200 --> 523.040] Without these vocal cues, our verbal communication just wouldn't be as exciting.
165
+ [523.040 --> 526.800] So if non-verbal communication is so important, how do we learn to do it?
166
+ [526.800 --> 530.480] It's not like you take classes on when to use an illustrator versus an emblem in school.
167
+ [530.480 --> 534.680] Instead, we learn how to use non-verbal communication by participating in our culture.
168
+ [534.680 --> 538.360] Cultures have unique norms or guidelines for how to use non-verbal
169
+ [538.360 --> 539.360] cues.
170
+ [539.360 --> 543.520] For example, pointing is fine if you're from the United States, but in China and Indonesia,
171
+ [543.520 --> 545.320] it's considered really rude.
172
+ [545.320 --> 549.200] Artifacts or objects and possessions we use are another form of non-verbal communication
173
+ [549.200 --> 551.160] that's shaped by the culture we live in.
174
+ [551.160 --> 555.920] Most cultures have rules about how we use artifacts, which include our clothes, jewelry, and
175
+ [555.920 --> 557.760] the decorations we put up in our spaces.
176
+ [557.760 --> 562.600] For example, on some college campuses, it's the norm for students to wear pajamas to class.
177
+ [562.600 --> 566.960] There's a good chance no one told students that wearing fuzzy slippers to class is cool.
178
+ [566.960 --> 569.920] They just saw older classmates doing it and assumed it was okay.
179
+ [569.920 --> 574.040] But some cultures have explicit rules about how artifacts should be used, like wearing
180
+ [574.040 --> 576.560] a wedding ring on your third finger on your left hand.
181
+ [576.560 --> 579.880] And using artifacts to express ourselves can also be fun.
182
+ [579.880 --> 583.480] Like if you're a huge Lord of the Rings fan, you might have a bumper sticker of the
183
+ [583.480 --> 585.360] ring of power on the back of your car.
184
+ [585.360 --> 588.800] But someone who hasn't seen Lord of the Rings might think your bumper sticker represents
185
+ [588.800 --> 593.400] your passion for ancient jewelry, instead of your undying devotion to the fellowship.
186
+ [593.400 --> 596.640] Navigating non-verbal communication can be a little confusing if you're not familiar
187
+ [596.640 --> 598.480] with cultural rules and norms.
188
+ [598.480 --> 603.280] But it's impossible to know all the non-verbal norms from every culture in the entire world.
189
+ [603.280 --> 606.840] So it's inevitable that non-verbal messages are going to get mixed up sometimes.
190
+ [606.840 --> 611.080] It's just a normal part of living in a world with so many amazing cultures and traditions.
191
+ [611.080 --> 615.600] But just like we use context clues to figure out what unfamiliar words mean, we can also
192
+ [615.600 --> 618.800] look for context clues to understand non-verbal communication.
193
+ [618.800 --> 622.880] For instance, if you notice young people bowing to older people, you can infer that bowing
194
+ [622.880 --> 624.360] is a sign of respect.
195
+ [624.360 --> 626.800] And add that to your non-verbal vocabulary too.
196
+ [626.800 --> 631.040] At the end of the day, we can't not communicate when it comes to non-verbal communication.
197
+ [631.040 --> 635.040] Our non-verbal cues are a window into our feelings and emotions, and they're constantly
198
+ [635.040 --> 636.760] seeping out of us.
199
+ [636.760 --> 637.880] Even if we don't realize it.
200
+ [637.880 --> 642.120] So to make sure our non-verbal communication reflects what we truly want to say, we have
201
+ [642.120 --> 643.640] to be extra thoughtful.
202
+ [643.640 --> 648.120] Because a single hand gesture can be the difference between squashing a giant spider and accidentally
203
+ [648.120 --> 649.120] starting a dance party.
204
+ [649.120 --> 652.840] Thanks for watching Study Hall, Intro to Human Communication, which is part of the Study
205
+ [652.840 --> 655.920] Hall project, a partnership between ASU and Crash Course.
206
+ [655.920 --> 658.800] If you liked this video and want to keep learning with us, be sure to subscribe.
207
+ [658.800 --> 662.920] You can learn more about Study Hall and the videos produced by Crash Course and ASU in the
208
+ [662.920 --> 664.440] links in the description.
209
+ [664.440 --> 665.040] See you next time!
transcript/allocentric_YSd6nSYr2ZA.txt ADDED
@@ -0,0 +1,6 @@
1
+ [0.000 --> 9.940] Oh, little, uh patient.
2
+ [10.140 --> 13.100] I'm ag sympathized.
3
+ [20.120 --> 25.680] And jib darauf que ain et en participate.
4
+ [25.680 --> 27.680] Do you need anything?
5
+ [29.680 --> 33.680] You may have gone to the law school for you to have breakfast.
6
+ [34.680 --> 37.680] I brought this ice cream.
transcript/allocentric_YrMiKxPV_Ig.txt ADDED
The diff for this file is too large to render. See raw diff
 
transcript/allocentric_Z550DeGoTgU.txt ADDED
@@ -0,0 +1,435 @@
1
+ [0.000 --> 17.500] It's my privilege and my honor to be able to introduce our keynote speaker today.
2
+ [17.500 --> 19.000] And I just want to spend a couple of minutes.
3
+ [19.000 --> 22.380] I don't want to eat up too much of his time because it's already been long enough that I've
4
+ [22.380 --> 23.380] taken.
5
+ [23.380 --> 27.120] But I just want to tell you a few things about him.
6
+ [27.120 --> 30.640] He is a professor of neuroscience and director of the Kavli Institute for Systems
7
+ [30.640 --> 36.040] Neuroscience at the Norwegian University of Science and Technology in Trondheim, Norway.
8
+ [36.040 --> 39.240] He did most of his formative training at the University of Oslo with Per Andersen,
9
+ [39.240 --> 43.960] which I think he shares in common actually with many people in this audience.
10
+ [43.960 --> 48.840] He then did a postdoctoral fellowship with John O'Keefe and with Richard Morris at University
11
+ [48.840 --> 52.480] of Edinburgh and UCL.
12
+ [52.480 --> 57.560] And the work that he's going to talk about today, the large corpus of work that he has
13
+ [57.560 --> 63.160] done in his career in collaboration with May-Britt Moser, focuses on how spatial memories
14
+ [63.160 --> 69.160] and spatial locations are encoded in the brain and the mechanisms that are required to formulate
15
+ [69.160 --> 73.480] some sense of where you are and how you can navigate in space.
16
+ [73.480 --> 77.400] Now this work was incredibly influential and transformative.
17
+ [77.400 --> 85.280] It earned him, May-Britt Moser, and John O'Keefe the 2014 Nobel Prize in Physiology or Medicine.
18
+ [85.280 --> 88.680] Some of our speakers, more recent work, which I hope you will have an opportunity to talk
19
+ [88.680 --> 95.600] about today also, focuses on taking this premise of understanding neural computation underlying
20
+ [95.600 --> 100.960] space and memory in the brain to try and understand time and understand how time is also process
21
+ [100.960 --> 101.960] in the brain.
22
+ [101.960 --> 104.160] And maybe there are shared mechanisms there.
23
+ [104.160 --> 107.480] There are differences, I think, we'll have to wait to hear from him on that.
24
+ [107.480 --> 110.600] But that's something I'm particularly excited to hear from him about as of these new directions
25
+ [110.600 --> 111.920] in the work.
26
+ [111.920 --> 114.880] One of the things I want to mention about our speaker also is that if you have a chance
27
+ [114.880 --> 120.080] to spend more than a couple of minutes with him, you'll realize something very, very special.
28
+ [120.080 --> 125.600] Aside from his global renown and his accomplishments, he's also one of the most humble people I have
29
+ [125.600 --> 127.120] ever met.
30
+ [127.120 --> 129.560] And I think you'll know this just by talking to him for a few minutes.
31
+ [129.560 --> 133.760] He's very generous with his time, with students, with colleagues.
32
+ [133.760 --> 138.120] And I've always had a listening ear when I've tried to reach out to him and chat about data
33
+ [138.120 --> 140.120] and science.
34
+ [140.120 --> 143.880] I also want to thank, take this opportunity to thank Neuralinks, who have sponsored this
35
+ [143.880 --> 145.560] keynote lecture.
36
+ [145.560 --> 147.920] Neuralinks and our speaker actually have kind of a history.
37
+ [147.920 --> 149.720] They go way back.
38
+ [149.720 --> 154.960] And this is something that I think is really just spectacular for us to be able to have
39
+ [154.960 --> 158.400] their support for this conference in particular for this keynote lecture.
40
+ [158.880 --> 160.400] Now, I know you're in for quite a treat.
41
+ [160.400 --> 162.000] I don't want to take up any more of your time.
42
+ [162.000 --> 166.120] So with that, ladies and gentlemen, please help me give a warm welcome to our speaker, Edvard
43
+ [166.120 --> 167.120] Moser.
44
+ [167.120 --> 191.920] So thank you, Edvard, for the nice introduction.
45
+ [191.920 --> 198.360] Thanks to both you and Manuel and everyone else who has organized and prepared this conference.
46
+ [198.360 --> 208.080] I think the size of the audience, the number of people here, testifies to...
47
+ [208.080 --> 213.080] You don't want to have that slide up the whole time?
48
+ [213.080 --> 220.960] Not only to the great work that is being done here, but also to the importance of this
49
+ [220.960 --> 228.840] center in the history of modern neuroscience and especially with the focus on learning and
50
+ [228.840 --> 229.840] memory.
51
+ [229.840 --> 236.440] And my congratulations especially to Jim McGaugh for starting all of this and for leading this
52
+ [236.440 --> 245.240] for 35 years.
53
+ [245.240 --> 247.920] So my talk will be...
54
+ [247.920 --> 255.160] I was told explicitly when we started, when I prepared this, that this is a combined
55
+ [255.160 --> 260.440] public talk and scientific talk, which is something that's really hard to achieve actually.
56
+ [260.440 --> 266.640] But I will start out at primary school level in the beginning and then I will go gradually
57
+ [266.640 --> 274.040] up and during the second half of my talk, I will move into unpublished territory and
58
+ [274.040 --> 279.840] include some new principles of the position coding in the internal cortex and then move
59
+ [279.840 --> 287.320] over, as Mike said, to time, which is the most recent work and which will serve as an
60
+ [287.320 --> 296.480] introduction to another talk that my former PhD student, Albert Tsao, will go into more detail
61
+ [296.480 --> 298.320] on tomorrow.
62
+ [298.320 --> 302.560] But let's begin with location and space.
63
+ [302.560 --> 314.360] So I thought I could not be worse than both Jim and Lynn, who both had pictures of Gall,
64
+ [314.360 --> 316.240] so I'll do the same.
65
+ [316.240 --> 319.320] And here's my Gall brain.
66
+ [319.320 --> 326.440] It shows the different faculties or abilities or properties and how they are labeled onto
67
+ [326.440 --> 335.840] the brain and I think both speakers made the important point that this had actually tremendous
68
+ [335.840 --> 340.080] influence on neuroscience.
69
+ [340.080 --> 348.360] It set the stage and then was forgotten for many years, but actually today with trajectories
70
+ [348.360 --> 355.720] into circuits and not only areas and principles for cooperation, collaboration, interaction
71
+ [355.720 --> 356.720] between many cells.
72
+ [356.720 --> 364.120] We are actually getting back to the point where we can start to understand some of the psychological
73
+ [364.120 --> 370.760] functions that are enabled by the brain and especially the cortex.
74
+ [370.760 --> 380.440] However, since the 19th century, concepts have moved forward too and that's also one
75
+ [380.440 --> 391.920] of the reasons now with more conceptual advances and better ideas and models for how the
76
+ [391.920 --> 394.360] brain might work at the psychological level.
77
+ [394.360 --> 396.600] We are actually making some advances.
78
+ [396.600 --> 402.240] But there's more advance in some areas than others and one of those areas that over the
79
+ [402.240 --> 413.200] last 40 years or so really have seen a lot of advance is our understanding of how space
80
+ [413.200 --> 414.440] is represented.
81
+ [414.440 --> 421.360] Because this is one of the first in mammals, one of the first high-order, non-sensory
82
+ [421.360 --> 429.080] and non-motor functions that are really beginning to be understood in neural language in terms
83
+ [429.080 --> 435.920] of how cells work together and how cells have different functions and how this is all put
84
+ [435.920 --> 443.280] together to produce something that probably gives rise to our sense of location.
85
+ [443.280 --> 451.200] So, I want to start very simply and now I will go to primary school level and simply
86
+ [451.200 --> 460.000] ask what would it be like if we didn't have this ability to conceive of space and where
87
+ [460.000 --> 461.760] we are in space.
88
+ [461.760 --> 469.000] So I have an animation that we made, not for this purpose, but I will show
89
+ [469.000 --> 478.760] this and that takes about two minutes and so let's begin with this.
90
+ [478.760 --> 482.600] How sound should be on?
91
+ [482.600 --> 483.600] We'll wait.
92
+ [483.600 --> 485.600] Sound on?
93
+ [485.600 --> 490.400] Okay, try again.
94
+ [490.400 --> 497.320] Nope, I'll wait, because the sound is essential.
95
+ [497.320 --> 499.560] It worked one minute ago.
96
+ [499.560 --> 504.120] So I don't know.
97
+ [504.120 --> 506.560] Okay, try once again.
98
+ [506.560 --> 507.560] There.
99
+ [507.560 --> 508.560] Good.
100
+ [508.560 --> 509.560] Fantastic.
101
+ [509.560 --> 510.560] Yeah.
102
+ [510.560 --> 511.560] Yeah.
103
+ [511.560 --> 512.560] Yeah.
104
+ [512.560 --> 516.560] How does life in world space?
105
+ [516.560 --> 518.560] Nine minutes.
106
+ [518.560 --> 526.560] Life for us development came over time.
107
+ [526.560 --> 534.080] Abilities and traits that proved useful for survival are retained across generations through
108
+ [534.080 --> 539.080] a succession of species from the common ancestor to its progeny.
109
+ [539.080 --> 543.080] These are the mechanisms of evolution.
110
+ [543.080 --> 548.080] Natural selection has favored the species with the best ability to navigate.
111
+ [548.080 --> 553.080] An animal that moves can escape from danger and find shelter.
112
+ [553.080 --> 559.080] Navigation also allows us to actively find food.
113
+ [559.080 --> 566.080] The safety of a flock.
114
+ [566.080 --> 573.080] Or a suitable mate.
115
+ [573.080 --> 582.080] Scientists have discovered a navigation system in the brain that is common to many species,
116
+ [582.080 --> 588.080] such as bats, rats, mice, monkeys, and even humans.
117
+ [588.080 --> 595.080] These commonalities suggest that this positioning system evolved from a common ancestor of mammals
118
+ [595.080 --> 596.080] or earlier.
119
+ [596.080 --> 607.080] We all share a system.
120
+ [607.080 --> 609.080] So where is this system?
121
+ [609.080 --> 615.080] Well, many parts of the brain are involved in space.
122
+ [615.080 --> 621.080] But still, as we have learned earlier today, there are two areas that have received a lot of attention
123
+ [621.080 --> 628.080] that are critically involved in representation of space.
124
+ [628.080 --> 635.080] So that's the hippocampus and the entorhinal cortex.
125
+ [635.080 --> 639.080] Is this the pointer?
126
+ [639.080 --> 640.080] Let's see.
127
+ [640.080 --> 646.080] Okay. So this shows the human brain and this shows rat brain.
128
+ [646.080 --> 650.080] This is all from collaborative work with Menno Witter.
129
+ [650.080 --> 656.080] It shows the human brain, and the red area is the hippocampus, which is embedded deep in the cortex here.
130
+ [656.080 --> 659.080] And the blue area here is the entorhinal cortex.
131
+ [659.080 --> 665.080] In the rat brain, it's located somewhat differently but very far back.
132
+ [665.080 --> 668.080] The hippocampus here, entorhinal cortex here.
133
+ [668.080 --> 682.080] And these areas turned out to be important, but because it is all much easier to investigate in animals,
134
+ [682.080 --> 692.080] then a lot of major advance was made about 45 years ago, as you've heard earlier in this meeting,
135
+ [692.080 --> 701.080] when John O'Keefe and Jonathan Dostrovsky started to record electrical activity or action potential spikes
136
+ [701.080 --> 710.080] from single neurons in the hippocampus of rats.
137
+ [710.080 --> 716.080] So this shows a rat that is walking freely in a box, or it could be other types of apparatus,
138
+ [716.080 --> 726.080] for example mazes. But in any case, what John did was that he recorded activity from single cells
139
+ [726.080 --> 736.080] and viewed those on the screen, on the oscilloscope, and stored them and then found that single neurons in the hippocampus
140
+ [736.080 --> 739.080] are responsive to the location of the rat.
141
+ [739.080 --> 742.080] So I will illustrate this with a movie.
142
+ [742.080 --> 745.080] Now we see the rat from above.
143
+ [745.080 --> 749.080] Rat is walking in a box. Box is one meter by one meter.
144
+ [749.080 --> 754.080] In the box, there are occasionally thrown out crumbles of chocolate, which rats like,
145
+ [754.080 --> 759.080] but keep them walking around in the box and visiting every possible place.
146
+ [759.080 --> 765.080] And at the same time, we are recording cells from the hippocampus.
147
+ [765.080 --> 772.080] You will hear those soon as spikes sounds or noise, sounds like noise.
148
+ [772.080 --> 778.080] But each time there is a sound, a popcorn sound, then the cell is active.
149
+ [778.080 --> 784.080] And you will notice that this cell, example cell, is active only at certain places in the box.
150
+ [784.080 --> 789.080] So let's start the movie.
151
+ [789.080 --> 797.080] And each time the cell is active or fires, you will also see a red dot up on the screen.
152
+ [797.080 --> 804.080] So you probably already know and notice that this cell is active only at one place in the box.
153
+ [804.080 --> 809.080] In this case, in the upper left part, and otherwise the cell is very silent.
154
+ [809.080 --> 818.080] This can also be illustrated with a heat map, a color code, where red is high activity and blue is low or no activity.
155
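A minimal sketch of how such a heat map, an occupancy-normalised firing-rate map, can be computed, assuming tracked positions and spike times are already available (the function name, bin size and nearest-sample spike assignment are illustrative assumptions, not the speaker's actual pipeline):

import numpy as np

def rate_map(pos_x, pos_y, pos_t, spike_t, box_size=1.0, n_bins=20):
    """Occupancy-normalised rate map: spikes per second spent in each spatial bin."""
    edges = np.linspace(0.0, box_size, n_bins + 1)
    dt = np.median(np.diff(pos_t))                     # tracker sampling interval (s)
    occupancy = np.histogram2d(pos_x, pos_y, bins=[edges, edges])[0] * dt
    # assign each spike to the nearest position sample
    idx = np.searchsorted(pos_t, spike_t).clip(0, len(pos_t) - 1)
    spikes = np.histogram2d(pos_x[idx], pos_y[idx], bins=[edges, edges])[0]
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(occupancy > 0, spikes / occupancy, np.nan)   # Hz; unvisited bins are NaN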
+ [818.080 --> 829.080] And it turned out that during the years to come after this discovery that different cells have different preferred areas in the hippocampus.
156
+ [829.080 --> 840.080] And together, it became clear that these cells cover the entire environment, visited by the rat.
157
+ [841.080 --> 855.080] Based on these data, then, John O'Keefe and Lynn Nadel suggested in 1978 that the hippocampus is actually the basis of a cognitive map,
158
+ [855.080 --> 860.080] or a Tolmanian map, you heard in the morning about Tolman's contributions.
159
+ [860.080 --> 874.080] A map that encodes spaces, locations in the environment, but also more than that also experiences associated with those locations.
160
+ [874.080 --> 878.080] This was a major conceptual advance.
161
+ [878.080 --> 885.080] It put things together, tied it up as you heard this morning, a very chaotic literature.
162
+ [885.080 --> 903.080] So during the next decades, a lot was learned about place cells, but there were a few things that still weren't clear when we came into the picture after a three-month visit with John, where he taught us all the essentials.
163
+ [903.080 --> 920.080] So when we started up in 1996, in our own lab in Norway, there were several questions that were interesting, but one, perhaps the most important one, which hadn't been resolved, was where and how is this place cell signal generated.
164
+ [920.080 --> 939.080] Because this, remember, this is not sensory cortex. So these signals, they have properties that are as clear as you might see in sensory areas, because they really strictly respond to the location of the rat.
165
+ [939.080 --> 949.080] So you all know, you don't have space sensors on your fingers, not in your ears, not in your eyes. So how is this generated, where does it come from?
166
+ [949.080 --> 961.080] So to large extent, this is probably generated inside the brain, by the brain itself, based on with the help of sensory inputs, but this was really not well understood.
167
+ [961.080 --> 975.080] And one idea that was around in the 1990s was that if anything, this signal, if it wasn't created in hippocampus, it was at least enhanced quite significantly in the hippocampus.
168
+ [975.080 --> 998.080] And because the hippocampus operates to a large extent like a circuit, a unidirectional circuit consisting of subareas that project from one to the other in a loop through the hippocampus, in, through it and out, then most of the cells have been recorded in CA1, which is one of the last stages of the circuit.
169
+ [998.080 --> 1009.080] Then it was believed by many people at that time that essential things happened in the earlier stages, just before the area where most of the activity had been recorded.
170
+ [1009.080 --> 1027.080] So an obvious thing to do, when we started out, was simply just to try to get rid of the inputs from CA3 that were postulated to be so important.
171
+ [1027.080 --> 1043.080] And what we found, which actually was in agreement with earlier work using other methods from the McNaughton and Barnes lab, was that a lot of the activity actually survived.
172
+ [1043.080 --> 1052.080] So this shows examples of seven different cells from a recording when CA3 is inactivated or lesioned.
173
+ [1052.080 --> 1062.080] And you can see that these seven cells, this is the box in from above and color indicates, color indicates the activity of the cell.
174
+ [1062.080 --> 1069.080] You see that these cells are still spatially selective. They still fire in certain areas and not in other areas.
175
+ [1069.080 --> 1082.080] So although the spatial firing was not as strong maybe as it is in the control animal, it was still not really noticeably different.
176
+ [1082.080 --> 1095.080] That then led us to get interested in the entorhinal cortex, an area that feeds in most of the cortical input into the hippocampus.
177
+ [1095.080 --> 1109.080] And by that time we had strengthened connections with Menno Witter, who then participated in this study and was one of the world's experts on just this area.
178
+ [1109.080 --> 1129.080] And in this work with Menno, we showed that this was the case, that the place signal survived in animals where there was absolutely no input from CA3 left, which we showed by using anatomical methods.
179
+ [1129.080 --> 1144.080] So that led us to the entorhinal cortex, and we tried to record directly from that area together with Menno and with the students Marianne Fyhn and Torkel Hafting.
180
+ [1144.080 --> 1152.080] And we put in the recording electrodes into the dorsal part of the medial entorhinal cortex.
181
+ [1152.080 --> 1164.080] And this dorsal part is the area that has the strongest inputs into the dorsal hippocampus where almost all the place cells had been recorded.
182
+ [1164.080 --> 1173.080] So it was an obvious area to go to, but at that time, in that part of the entorhinal cortex, I don't think anyone really had recorded yet.
183
+ [1173.080 --> 1176.080] So it was a new territory.
184
+ [1176.080 --> 1183.080] And what happened was that cells in that area had a different type of pattern.
185
+ [1183.080 --> 1187.080] First of all, they were very strongly spatially modulated.
186
+ [1187.080 --> 1198.080] So what you see here now to the bottom right is the box again. Now it's a bigger box. In this case, a 220 by 220 centimeter large box.
187
+ [1198.080 --> 1204.080] The gray trace is where the animal walked, so it shows the path of the animal over half an hour.
188
+ [1204.080 --> 1213.080] And each black dot is where that one particular cell was active when the rat was running around.
189
+ [1213.080 --> 1228.080] So what you can see is that this cell, like other cells in the area, was active in certain places, but no longer just in one place; it was active in many places.
190
+ [1228.080 --> 1242.080] And the other thing that you may notice is how regular this pattern is, which you can see when you put these lines on top, which I did in the left diagram here: it is actually a repeating triangular, hexagonal pattern.
191
+ [1242.080 --> 1253.080] That in many ways expresses a metric that was certainly not present in the place cell signals of the hippocampus.
192
+ [1253.080 --> 1269.080] So apparently here we had another component of this spatial or cognitive map that contain information about distances and directions that were not so easily extractable from the hippocampus.
193
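A common idealisation of such a hexagonal pattern in the modelling literature writes the firing rate as a sum of three cosine gratings 60 degrees apart; a sketch under that assumption (spacing, orientation and phase parameters are illustrative, and this is a textbook-style model, not the recorded data):

import numpy as np

def ideal_grid_rate(x, y, spacing=0.5, orientation=0.0, phase=(0.0, 0.0), peak_rate=10.0):
    """Idealised grid-cell rate at position (x, y): three plane waves 60 degrees apart."""
    k = 4 * np.pi / (np.sqrt(3) * spacing)             # wave number giving the chosen grid spacing
    angles = orientation + np.array([0.0, np.pi / 3, 2 * np.pi / 3])
    g = sum(np.cos(k * ((x - phase[0]) * np.cos(a) + (y - phase[1]) * np.sin(a))) for a in angles)
    return peak_rate * np.maximum(g / 3.0, 0.0)        # rectified so the rate is non-negative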
+ [1269.080 --> 1274.080] So this is now 2004-05.
194
+ [1274.080 --> 1280.080] So one of the things that became clear from the beginning is that the grid cells, there were many of them.
195
+ [1280.080 --> 1289.080] And especially they were abundant in the superficial layers of the medial entorhinal cortex, which project into the hippocampus.
196
+ [1289.080 --> 1301.080] But they varied in various ways. So they could have different phases, they could have different scales, or they could have different orientations relative to the environment.
197
+ [1301.080 --> 1308.080] So phases means that the grid patterns are shifted in XY space relative to each other.
198
+ [1308.080 --> 1312.080] So you see that illustrated here for a green grid cell and for a blue one.
199
+ [1312.080 --> 1318.080] And you can see that the peaks of the grid patterns are at different places.
200
+ [1318.080 --> 1323.080] Or they might differ in the scale, which you see here with the blue one compared to the green one.
201
+ [1323.080 --> 1327.080] So you see the blue one has both larger fields and larger distances.
202
+ [1327.080 --> 1333.080] And the third is then that they may be tilted relative to each other as well.
203
+ [1333.080 --> 1342.080] So we asked then early on whether there is any organization according to these dimensions.
204
+ [1342.080 --> 1350.080] And both yes and no. So first of all, for the phase of the grid, there was no very striking organization.
205
+ [1350.080 --> 1364.080] That means that whatever we recorded, and this illustrates the recording electrodes, tetrodes, which I don't have to explain how they work.
206
+ [1364.080 --> 1373.080] But anyway, it picks up signals in a way that makes it possible to differentiate between cells and to isolate from each other.
207
+ [1373.080 --> 1377.080] So here you have a blue cell, a green cell and a red cell.
208
+ [1377.080 --> 1381.080] And you can see that the grid patterns on the three cells.
209
+ [1381.080 --> 1385.080] All of them have grid patterns, but they are shifted in XY space.
210
+ [1385.080 --> 1388.080] And this is pretty representative of what you get in most places.
211
+ [1388.080 --> 1399.080] So that it is similar to what in sensory or visual neuroscience often is referred to as salt and pepper organizations, pretty mixed.
212
+ [1399.080 --> 1403.080] Whether it is totally mixed is still uncertain.
213
+ [1403.080 --> 1408.080] And there are various indications recently that there may be some organization to it.
214
+ [1408.080 --> 1413.080] There may also not be an equal distribution of phases.
215
+ [1413.080 --> 1425.080] But by and large, every location is represented at every place, every anatomical location, in the entorhinal cortex.
216
+ [1425.080 --> 1437.080] Which is very different from the spacing of the grid, because it was clear from the outset that spacing varied depending on how far up or down you are in the brain.
217
+ [1437.080 --> 1444.080] So this is a side view of the hippocampus and entorhinal cortex.
218
+ [1444.080 --> 1451.080] The hippocampus is this ear-like structure here, and the red structure here is the medial entorhinal cortex.
219
+ [1451.080 --> 1468.080] If you start at the top, which you often refer to as the dorsal part, and then go down towards the ventral or the bottom, what we typically see is that it starts out with only a small scale grid cells, dots are small and very close to each other.
220
+ [1468.080 --> 1478.080] This is a box of 220 by 220 centimeters, and the distance here is down to something like 30 centimeters between each node.
221
+ [1478.080 --> 1498.080] Once you go down, this increases, and so you get into the middle here, it may already be a meter or more, and if you go even further, then it's difficult to assess, because we don't have environments that are big enough or didn't at least at that time.
222
+ [1498.080 --> 1506.080] There is a clear gradient, topographical gradient, where it begins with the smallest at the top and goes towards the largest at the bottom.
223
+ [1506.080 --> 1521.080] So this can be organized in many ways, but one important question was whether is this a continuous gradient, where you go smoothly from smallest to largest, or are there actually discrete steps?
224
+ [1521.080 --> 1529.080] So does this consist of subnetworks that each have their own scale?
225
+ [1529.080 --> 1555.080] So certain ideas about how grid cells arise actually require the latter, so we did look more into this, and this is work with Hanne and Tor Stensola in about 2012, who were able to record up to almost 200 grid cells from the same animal, which at that time was quite unique.
226
+ [1555.080 --> 1566.080] And by doing so, they were able to plot the scale of grid cells from the same animal in one diagram.
227
+ [1566.080 --> 1582.080] So what you have here is, on the x-axis, you have dorsal to ventral, so top to bottom in the medial entorhinal cortex, and then on the y-axis you have the scale of the grid, or the distance between the peaks, and then each dot is one cell.
228
+ [1582.080 --> 1592.080] And what you can see is, first of all, as I told you, as you start from Dorsal and go to ventral, then the scale generally gets larger and larger and larger.
229
+ [1592.080 --> 1606.080] But what you also see is that it is a step-like increase, where there is actually just a small number of scales present, and almost every cell can be put into one of these steps.
230
+ [1606.080 --> 1616.080] So these steps, we call them modules, and we call them module 1, the smallest one, and then module 2, 3, and module 4.
231
+ [1616.080 --> 1629.080] So it turns out they even have a certain relationship, so that when we asked, what is the factor that you have to multiply M1 with in order to get M2?
232
+ [1629.080 --> 1633.080] How much do you have to multiply M2 with to get M3 and so on?
233
+ [1633.080 --> 1645.080] It turns out that it is actually a constant factor, and in this case, under those conditions in rats it was approximately, or on average the mean was 1.42.
234
+ [1645.080 --> 1658.080] Of course, there is a lot of variation, but still the scale factor is the same, so that you can actually describe the levels of the grid.
235
+ [1658.080 --> 1668.080] Of the grid cells, or the modules of the grid cells, as organized in something like a geometric progression.
236
+ [1668.080 --> 1684.080] So, and what's the advantage of that? Well, that is still not clear, but it has been hypothesized, at least by various people, that this might be the best way to organize grid scales.
237
+ [1684.080 --> 1694.080] If you want to represent space in the most, possibly the most efficient manner with the fewest number of cells.
238
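Just to make the arithmetic of that geometric progression concrete (the 40 cm starting spacing below is purely illustrative; only the ratio of roughly 1.42 comes from the talk):

ratio = 1.42                                   # mean scale factor between successive modules
m1 = 40.0                                      # illustrative spacing of module 1, in cm
scales = [m1 * ratio ** k for k in range(4)]   # modules 1 to 4
print([round(s, 1) for s in scales])           # -> [40.0, 56.8, 80.7, 114.5]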
+ [1694.080 --> 1702.080] So, I want to emphasize at least one major difference between the place cell map and the grid cell map.
239
+ [1702.080 --> 1705.080] So, we go back to the place cells now.
240
+ [1705.080 --> 1718.080] You heard also again from the morning talks, and also from Karl's talk, that one property of place cells is that they remap, as we say.
241
+ [1718.080 --> 1730.080] That means that you have different maps or different combinations of place cell maps in different environments.
242
+ [1730.080 --> 1743.080] So, we can also say that the map is high dimensional, because it just, what this means is that the maps are uncorrelated, or as different as possible.
243
+ [1743.080 --> 1753.080] This was shown already, starting with Muller and Kubie, and then has been developed by many labs over the years.
244
+ [1753.080 --> 1765.080] But I like to show this experiment that we did quite recently, because we demonstrated effect in as many as 11 different recording rooms.
245
+ [1765.080 --> 1770.080] So, here's a picture of 11 labs where rats were tested.
246
+ [1770.080 --> 1775.080] And I like to show them, because those labs are so similar that I can't tell the difference.
247
+ [1775.080 --> 1783.080] I have no way that I can say the difference between lab number N8 and lab number N2, for example.
248
+ [1783.080 --> 1797.080] But the question is whether rats are able to, so what Charlotte Alme in our lab did was that she recorded many, many place cells from the same rat in sequence,
249
+ [1797.080 --> 1812.080] where rats were tested sequentially in all these rooms, one familiar room which has the label F, and then 10 different novel rooms where they were exposed for the first time, labeled from N1 to N10.
250
+ [1812.080 --> 1830.080] And then asked whether places in those rooms are similar, is it one map that is carried over, or as we expected, because of the ability to remap from one room to other, that they are uncorrelated.
251
+ [1830.080 --> 1845.080] So, what we found in this experiment is that all combinations of maps in all of these rooms are actually as different as it is possible.
252
+ [1845.080 --> 1859.080] So, what you see here is first of all maps from place cells in cell number 1, cell number 2, 3, 4, and so on until cell number N, and then they are correlated using a population vector approach.
253
+ [1859.080 --> 1871.080] And this is a correlation matrix, and this is all the different rooms on one axis and all the different rooms on the other axis, and then the color indicates the correlation between the maps.
254
+ [1871.080 --> 1882.080] And of course, along the diagonal, when you correlate the rooms with themselves, you get a correlation of 1, so that is no surprise, the same recording correlated with itself.
255
+ [1882.080 --> 1897.080] But on all other combinations, you see that it is in the deep blue range, which means essentially what you would get by chance is absolutely not different if you just shuffle the data completely,
256
+ [1897.080 --> 1904.080] except for a very few places here, which all are marked by a star or asterisk.
257
+ [1904.080 --> 1921.080] And those are the instances where the room was actually repeated, a second exposure to the same room, and then so that shows that it is not just the fact that the new map is pulled up each time, but it actually the same map is re-expressed when they go back to the environment.
258
+ [1921.080 --> 1938.080] But otherwise, those maps or place cells are as different as they can be, which is probably quite useful, and what you want to have in a structure that stores memories, including spatial memories for many, many places.
259
+ [1938.080 --> 1943.080] You don't want to mix them up, you want to keep them separate.
260
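A minimal sketch of the population-vector comparison described above, assuming each cell's rate map has already been computed and flattened for both rooms (array shapes and the function name are assumptions for illustration):

import numpy as np

def pv_correlation(maps_a, maps_b):
    """Mean population-vector correlation between two rooms.

    maps_a, maps_b: arrays of shape (n_cells, n_bins), each row a cell's flattened rate map.
    For every spatial bin, the activity of all cells forms one population vector;
    the function returns the average Pearson correlation of these vectors across bins."""
    corrs = []
    for b in range(maps_a.shape[1]):
        va, vb = maps_a[:, b], maps_b[:, b]
        if va.std() > 0 and vb.std() > 0:
            corrs.append(np.corrcoef(va, vb)[0, 1])
    return float(np.mean(corrs))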
+ [1943.080 --> 1951.080] So this is in line with all the work that has implied a role for the hippocampus in memory, which you heard more about this morning.
261
+ [1951.080 --> 1965.080] So this is quite different from what we see in the entorhinal cortex, because in the entorhinal cortex, there is not this scrambling between environments.
262
+ [1965.080 --> 1985.080] So I illustrate this first again, now same type of approach. You compare two different rooms. This is room A, this is room B. In one of the rooms, it was a circle, in the other it was a square, but anyway, many cells were recorded at the same time in both rooms.
263
+ [1985.080 --> 2000.080] And then cross-correlated, so similar correlation between the maps for different environments. And this shows the result for 1, 2, 3, 4, 5, 6 cells.
264
+ [2000.080 --> 2014.080] And the first row shows when you correlate the map with the same environments, or A times A. And of course you get the grid map, because there is no reason why it should change.
265
+ [2014.080 --> 2037.080] And of course you also get a peak in the center, because there is no reason why the map should move. But if you now correlate the A versus the B, you also get a grid map, except that the map is shifted slightly to the right in this case, which means that the cell has the same pattern, it's just slightly displaced in one direction.
266
+ [2037.080 --> 2056.080] But the important thing here is that this shift is expressed in every single cell that was recorded. They all show the same shift, which then means that actually here you have one map that is just shifted in X or Y, or maybe it could even be rotated. But it's the same map, same map.
267
+ [2056.080 --> 2068.080] And this is even the case if you put all the cells on top of each other and cross-correlated them all together, again you get the same map, but it's just slightly shifted.
268
+ [2068.080 --> 2079.080] So what this suggests is that it's really just one map that is used over and over again, at least as long as you stay within one of the modules.
269
+ [2079.080 --> 2088.080] So at that time we didn't know about modules, but there's still reasons to believe that most of the cells were from one single module.
270
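The shift of the grid map between room A and room B can be estimated with an ordinary two-dimensional cross-correlation; a minimal sketch assuming equal-sized rate maps as input (the FFT-based circular correlation here is one of several reasonable implementations, not necessarily the one used in the study):

import numpy as np

def map_shift(map_a, map_b):
    """Offset (dx, dy), in bins, of the cross-correlation peak between two rate maps."""
    a = np.nan_to_num(map_a - np.nanmean(map_a))
    b = np.nan_to_num(map_b - np.nanmean(map_b))
    xcorr = np.fft.fftshift(np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b)))))
    peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    centre = np.array(xcorr.shape) // 2
    return int(peak[1] - centre[1]), int(peak[0] - centre[0])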
+ [2088.080 --> 2107.080] So based on those data, which are essentially all from a study by Marianne Fyhn in 2007, in collaboration with Alessandro Treves, then we asked more recently,
271
+ [2107.080 --> 2121.080] how much does this, is this a single map that actually is expressed even when the rat has no behavior, for example when it's sleeping and not walking around?
272
+ [2121.080 --> 2136.080] So this is a study that we did, and actually the same result has been shown also in Laura Colgin's lab, where they did the same at the same time, with exactly the same result.
273
+ [2136.080 --> 2150.080] So then I believe it. So what we found, and this is Richard Gardner's work in our lab, what he did was first to compare pairs of cells that were in phase,
274
+ [2150.080 --> 2161.080] that means that the grid pattern is more or less overlapping. So you see two grid cells here, and you can see that the peaks are more or less in the same place.
275
+ [2161.080 --> 2172.080] You can also see that from the collar, but maybe easier to see here. So these are examples of two cells that fire in the same place when the rat is awake and walks around in the box.
276
+ [2172.080 --> 2188.080] And as you would expect, if you cross correlate those two cells in time, show what is the probability of cell two to fire when cell one is active, you get the strong peak around zero because when one fire, then the other fires two.
277
+ [2188.080 --> 2203.080] So that's all as expected. But if you then record from the same cells in sleep, in slow wave sleep, you get the same peaks. So that means that those cells that fire together in the wake state also fire together in sleep.
278
+ [2203.080 --> 2218.080] Conversely, if the cells are out of phase, if they have their dots or peaks at different places, then they fire out of phase in the wake state, and they also fire out of phase in sleep.
279
+ [2218.080 --> 2233.080] So it's the same thing. And if you do this now for 1267 combinations or pairs of grid cells, what you find is that, and you can plot that then with one line per cell pair.
280
+ [2233.080 --> 2245.080] And then now you can transform this one, this plot to color so that yellow is high cross correlation and black is low.
281
+ [2245.080 --> 2259.080] So what you then find is that those pairs that have the high cross correlation in the wake state when the rat is walking in this open field environment, they are also the ones that have the highest in sleep.
282
+ [2259.080 --> 2269.080] And those that have the lowest correlation in the wake state when it walks in the box are the ones that have the lowest correlation in the sleep state.
283
+ [2269.080 --> 2281.080] And the same applies also in a different type of sleep, REM sleep, which in humans corresponds to when we dream. You say it's the same thing again, a bit more noisy because there's much less data from that.
284
+ [2281.080 --> 2305.080] But what this essentially shows is that confirms the suggestion that this entire map is really low dimensional or has only one or at least only a few ways to express itself very, very differently from the hippocampal maps, which can appear in all kinds of combinations.
285
+ [2305.080 --> 2328.080] And this is exactly as would be predicted by attractor-type models for grid cells, or models that propose that grid cells actually arise as a consequence of how the network is wired together, and these connections and these interactions are present also in the sleep state, even if these animals don't walk around.
286
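The wake-versus-sleep comparison rests on temporal cross-correlograms of simultaneously recorded cell pairs; a minimal sketch of how such a correlogram can be computed from two spike-time arrays (the half-second window and 10 ms bins are arbitrary illustrative values):

import numpy as np

def cross_correlogram(spikes_1, spikes_2, window=0.5, bin_size=0.01):
    """Counts of cell-2 spikes at each time lag around every cell-1 spike."""
    edges = np.arange(-window, window + bin_size, bin_size)
    counts = np.zeros(len(edges) - 1)
    for t in spikes_1:
        d = spikes_2 - t
        counts += np.histogram(d[np.abs(d) <= window], bins=edges)[0]
    return edges[:-1] + bin_size / 2, counts

# Comparing states then amounts to asking, for each pair, whether the wake and sleep
# correlograms have the same shape, e.g. np.corrcoef(counts_wake, counts_sleep)[0, 1].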
+ [2328.080 --> 2346.080] So with that, I then, that's a little bit of introduction about grid cells. I should also mention though that there are other types of cells, which already some of you may have heard that from the symposium earlier today and also was also mentioned in earlier.
287
+ [2346.080 --> 2357.080] But not the least the head direction cells. The head direction cells are cells that fire in when the animals face is pointing only in a certain direction.
288
+ [2357.080 --> 2367.080] These cells were discovered by Jim Ranck, and then Jim Ranck with a student, Jeff Taube, followed up and showed them.
289
+ [2367.080 --> 2379.080] They found them originally in the dorsal presubiculum, which is adjacent to the dorsal medial entorhinal cortex.
290
+ [2379.080 --> 2385.080] But it turned out then that they are very abundant also in the medial entorhinal cortex.
291
+ [2385.080 --> 2394.080] So this shows again a side view of the rat brain, and the area between the two red lines here is the medial entorhinal cortex.
292
+ [2394.080 --> 2401.080] And what you see here is that these cells, they don't really have these grid dots that you saw in the other cells.
293
+ [2401.080 --> 2409.080] But what they have, as you see here in this is a polar plot that shows firing rate as a function of direction of the rat's head.
294
+ [2409.080 --> 2418.080] You can see that this cell, for example, only fires only active when the rat has its head pointing in the left or west direction.
295
+ [2418.080 --> 2425.080] This cell is only active when the rat is walking from bottom right to top left.
296
+ [2425.080 --> 2428.080] So these are strongly directionally tuned cells.
297
+ [2428.080 --> 2431.080] Some of them are very, very sharply directionally tuned.
298
+ [2431.080 --> 2439.080] Others are a little bit broader and some of them can also be head direction cells and grid cells at the same time.
299
+ [2440.080 --> 2455.080] There are also other cells, border cells we name them, cells that fire exclusively when the rat is walking along one or several borders of the local environment.
300
+ [2455.080 --> 2459.080] So here again you see the box from the top.
301
+ [2459.080 --> 2463.080] The color indicates activity of firing rate.
302
+ [2463.080 --> 2471.080] Red is the highest rate and you can see an example here of a cell that fires only when the rat is on the right part of the box.
303
+ [2471.080 --> 2480.080] And that happens even if you stretch the box either in the horizontal or in the right or either in the x or in the y direction.
304
+ [2480.080 --> 2484.080] Still just fire at or along that particular wall.
305
+ [2484.080 --> 2488.080] This shows the same cell in a different room.
306
+ [2488.080 --> 2492.080] So now the cell chooses the left wall instead.
307
+ [2492.080 --> 2501.080] And what you see here in the middle is that if a wall is inserted in the middle here, then the cell actually fires along that wall too.
308
+ [2501.080 --> 2503.080] And on the corresponding side.
309
+ [2503.080 --> 2509.080] So on the right side here, just as it does on the right side for the peripheral wall.
310
+ [2509.080 --> 2512.080] So it's a very different type of cell.
311
+ [2512.080 --> 2519.080] A grid cell is never a border cell and a border cell is never a grid cell at least not in our hands.
312
+ [2519.080 --> 2530.080] So different classes of cells and as some of you may have heard in the morning, they respond differently to sensory inputs, visual inputs versus locomotion for example.
313
+ [2530.080 --> 2539.080] But these cells coexist. They are intermingled in the superficial layers of the entorhinal cortex.
314
+ [2539.080 --> 2549.080] And very closely associated also with the head direction cells, which are also there, but shifted slightly more into the deeper layers.
315
+ [2549.080 --> 2556.080] And more cells that many of which are actually heard about some of you may have heard about speed cells.
316
+ [2556.080 --> 2561.080] These are cells that don't really, as you'll see, the 12 example cells here.
317
+ [2561.080 --> 2564.080] They don't really have a preferred location of firing.
318
+ [2564.080 --> 2570.080] The color code heat maps here show that they are active anywhere in the box.
319
+ [2570.080 --> 2587.080] But what the line diagrams here show is that their activity is strongly correlated, linearly correlated with the firing rate or with the speed of the animal.
320
+ [2587.080 --> 2589.080] And that's also clear from the examples here.
321
+ [2589.080 --> 2595.080] Seven different cells shown in different color on the background of the speed of the rat.
322
+ [2595.080 --> 2601.080] So the speed is shown in gray over a period of two minutes and then the color shows the firing rate of the cell.
323
+ [2601.080 --> 2609.080] And you can see for example if you focus on the yellow one here, you can see how closely the cells firing rate actually follows the speed of the rat.
324
+ [2609.080 --> 2613.080] It's extremely closely tied to the speed.
325
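A minimal sketch of the speed tuning this implies: bin running speed and firing rate on a common time base and take their linear correlation (the bin size and variable names are illustrative assumptions, not the study's actual analysis):

import numpy as np

def speed_score(pos_x, pos_y, pos_t, spike_t, bin_size=0.5):
    """Pearson correlation between running speed and firing rate in fixed time bins."""
    t_edges = np.arange(pos_t[0], pos_t[-1], bin_size)
    speed = np.hypot(np.diff(pos_x), np.diff(pos_y)) / np.diff(pos_t)   # cm/s per tracker sample
    speed_binned = np.interp(t_edges[:-1], pos_t[:-1], speed)
    rate = np.histogram(spike_t, bins=t_edges)[0] / bin_size            # Hz per time bin
    return np.corrcoef(speed_binned, rate)[0, 1]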
+ [2613.080 --> 2627.080] So the existence of these cells is also kind of predicted because the self-motion is necessary for updating these cells.
326
+ [2627.080 --> 2641.080] These cells actually use path integration to decide where to fire, and in path integration the speed input is just as essential as the direction input.
327
+ [2641.080 --> 2650.080] And finally, there are many cells that actually have spatially localized firing fields that aren't anything really.
328
+ [2650.080 --> 2655.080] They're not borders. They're just blobs of firing at particular locations.
329
+ [2655.080 --> 2669.080] Of course there could be grid cells that for some reason either they have so large grid patterns as you just see one peak or the other grid fields are so low in rate that you don't see them.
330
+ [2669.080 --> 2676.080] But nonetheless they are hard to explain and there are many many of them. They have been around for a long time.
331
+ [2676.080 --> 2680.080] But until quite recently at least I thought they were mostly just garbage.
332
+ [2680.080 --> 2684.080] They were cells that you couldn't really put into any category.
333
+ [2684.080 --> 2698.080] But I changed my mind slightly when Chenglin Miao in our lab showed that they are modulated in a different way than many of the other spatial cells.
334
+ [2698.080 --> 2727.080] So what he found, which you can see an example of here, is that if you block somatostatin-expressing cells, interneurons, and just silence them using chemogenetic methods, then what you find, as you see in the middle column here, is that those cells under blockade of these somatostatin-expressing interneurons
335
+ [2727.080 --> 2734.080] actually have much more dispersed firing and then when the drug is out of the body again then they go back to what they were.
336
+ [2734.080 --> 2744.080] So this happens only to these cells. So a grid cell for example would not respond to that treatment.
337
+ [2744.080 --> 2755.080] And conversely, if you block another type of interneurons, the parvalbumin type of interneurons, then there is no effect on these cells, as you can see here.
338
+ [2755.080 --> 2765.080] But there is a very strong effect on grid cells instead. So it seems like these are actually different classes of cells that are modulated separately.
339
+ [2765.080 --> 2783.080] So all in all this then brings me back to the movie where I started which suggests that these cells are widely expressed.
340
+ [2783.080 --> 2792.080] Actually they are present in many species. They were found first in rats and came then in mice, not surprisingly.
341
+ [2792.080 --> 2803.080] But then they were discovered in bats in the Ulanovsky group. And bats are on a completely different branch of the mammalian evolutionary tree.
342
+ [2803.080 --> 2819.080] And then grid cells, or at least grid-like cells, were found in monkeys with Beth Buffalo's work, and then finally by Josh Jacobs and Itzhak Fried in humans.
343
+ [2819.080 --> 2829.080] The fact that they were spread around among mammals probably suggests that they arose quite early on or present widely among mammals.
344
+ [2829.080 --> 2838.080] And this applies not only to grid cells but also to at least several other types of cells, like border cells and head direction cells.
345
+ [2838.080 --> 2847.080] So that was my long, long introduction, but I did want to save some time for a few new things.
346
+ [2847.080 --> 2865.080] So one of the first questions probably come up already to everyone who is here who is not working in the field so may ask why do they only test these animals in these empty boxes because rats don't really walk in empty boxes in their natural lives.
347
+ [2865.080 --> 2879.080] So how about more realistic environments? And realistic environments, how are they different from empty boxes? Well at least they contain objects, there are things in the environments.
348
+ [2879.080 --> 2893.080] So there is some precedence from other approaches that suggest that rats or animals may actually use objects for navigation.
349
+ [2893.080 --> 2915.080] And that includes both behavioral work and especially the work of Tim Collett which is illustrated here and just the five second version of it is that they tested garbels in an area which contained two landmarks, two circles here and then the X indicates a location where they could dig for food.
350
+ [2915.080 --> 2941.080] And they were tested over and over and over and over and over again but then on a test trial the two landmarks, the two objects were pulled apart and then what they observed was that the animals did not search in the middle here but they actually searched at a certain distance away from each of those objects suggesting that they actually encoded the distance and direction from individual objects to find the food.
351
+ [2941.080 --> 2967.080] And this, together with theoretical work that was inspired partly by this, and that includes the work of McNaughton et al., Jim Knierim was also on that paper, which suggested that there must be cells in the hippocampal system somewhere that actually respond to locations defined by distances and directions, or vectors, from the
352
+ [2967.080 --> 2995.080] individual objects, and the idea of vector encoding was also proposed by O'Keefe and Burgess based on their work, but they suggested it was walls or boundaries that were used by animals to encode positions in the open space.
353
+ [2995.080 --> 3024.080] So the idea was there, so based on this, Øyvind Høydal, who is a PhD student in our lab, recorded from mice when these mice were running around in very simple environments like the ones we have seen already, but there was now an object, a very prominent tower-like object, in the environment, and it turned out that there were actually indeed very many cells that responded to the location.
354
+ [3025.080 --> 3047.080] Of the object they did not fire at the location of the object but they fired at some distance away from it in a certain direction and such a cell an example cell is shown here you see it has a one single peak of one single area of firing and that area is displaced from the object in a certain direction.
355
+ [3047.080 --> 3076.080] So the typical design is like this starts out with no object trial there is no object in the circle environment then an object is introduced somewhere near the middle and then the object is displaced and what he sees like in the two examples shown here is that the cell starts to express a strong field, a strong area of activity at a certain place defined to the object in this direction.
356
+ [3077.080 --> 3106.080] So in this case on the north side of the object, some 20-30 centimeters away; then the object is moved, so in this case the object is moved down, you see the white circle here, and still the cell fires some 20-30 centimeters north of the object, and the same thing for this cell shown here, and that can be plotted, so you could plot the firing rate as a function of distance from the object and the orientation relative to the object.
357
+ [3107.080 --> 3122.080] And you can measure that on trials with objects in different places and then you find for these cells that they have that they have correlations between those two trials that are way beyond what you would expect by chance.
358
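A sketch of how firing rate as a function of distance and direction from the object can be computed, by analogy with an ordinary rate map but in object-centred polar coordinates (the bin counts and 50 cm maximum distance are illustrative, not the study's actual parameters):

import numpy as np

def object_vector_map(pos_x, pos_y, pos_t, spike_t, obj_xy, n_dist=20, n_ang=36, max_dist=50.0):
    """Rate map in object-centred coordinates: distance (cm) by bearing (degrees)."""
    dx, dy = pos_x - obj_xy[0], pos_y - obj_xy[1]
    dist = np.hypot(dx, dy)
    ang = np.mod(np.degrees(np.arctan2(dy, dx)), 360.0)
    dt = np.median(np.diff(pos_t))
    bins = [np.linspace(0, max_dist, n_dist + 1), np.linspace(0, 360, n_ang + 1)]
    occ = np.histogram2d(dist, ang, bins=bins)[0] * dt
    idx = np.searchsorted(pos_t, spike_t).clip(0, len(pos_t) - 1)
    spk = np.histogram2d(dist[idx], ang[idx], bins=bins)[0]
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(occ > 0, spk / occ, np.nan)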
+ [3122.080 --> 3151.080] So how is this object vector field distributed? So this shows just a part of the data, actually many more cells now since I made this figure, but what you get here is still the point: this shows one line per cell, and color indicates firing rate, and this shows orientation relative to the object, and what you can see is that essentially all orientations.
359
+ [3152.080 --> 3170.080] So these are expressed if it was perfectly distributed you would have a line along the diagonal here so this is the distribution of orientation slight bias towards 90 degree cycles but actually that bias has been almost gone now in the larger data set.
360
+ [3170.080 --> 3193.080] This shows the distribution of the distances so you can see typically firing field is about 5-10-15 centimeters away from the object but it can be anything up to 45 probably more than but we couldn't test beyond that because the environment wasn't really larger than that if you wanted to have the object in different places.
361
+ [3193.080 --> 3222.080] It's not dependent on the exact type of object so this shows 13 different types of objects so many of them quite similar they are tower like they could be prisms or they could be cylinders but looked quite differently and if you then use several of them or replace them it doesn't really matter so this is shown here for five example cells so you can for example see cell number two here.
362
+ [3223.080 --> 3252.080] It responds in the same way to two objects placed here so there are two circles and you can see it fires on the left side at a certain distance away from each of the two objects so you can also see it here and this cell three different objects among this and again on the southeast side of each of them and this one here you can even construct the grid cell if you like because if you put the objects in a certain pattern you can get fields on the in this case on the north east.
363
+ [3253.080 --> 3281.080] So this suggests that it's not really the identity of the object that is encoded but more like positions and actually vectors directions and distances away from any prominent object in the environment and that even includes some that are very different like flat cylinder here and even a wall like this.
364
+ [3281.080 --> 3310.080] So this cells is this something that has to be learned while it turned out not to be because these cells that were recorded multiple times in familiar environments they were also tested in a novel environment, novel room with a novel object and you can see that you get the same exactly the same type of firing both in the familiar and in the novel room and their own if anything just very very minor differences in the information.
365
+ [3311.080 --> 3318.080] So this is a very important information content or in how spatially selective they are.
366
+ [3318.080 --> 3330.080] So we also wonder if is the intrinsic relationship between different cells of this kind maintained so what you see here is two simultaneously recorded cell.
367
+ [3330.080 --> 3358.080] So this is the field on the south east side and one has the field on the north east side in room A and if the right is then recorded in room B well then it all rotates for this cell so this one goes or flips almost 180 degrees and now you see that the file field is on the northwest side and this one also then flips by 180 degrees almost and the same happens to a head direction cell that's recorded to the same time.
368
+ [3358.080 --> 3387.080] So this also suggests that actually the intrinsic relationships between these cells, and even between these cells and other directionally oriented cells, are maintained between environments, so again this is part of the low dimensionality of the entorhinal map, where both grid cells and head direction cells, which I didn't mention, actually turn out to be more or less one map that is maintained across environments.
369
+ [3388.080 --> 3394.080] So are these cells different from other cells like grid cells?
370
+ [3394.080 --> 3416.080] Well largely yes so we calculated scores for both border cells and grid cells and speed cells and a direction cells using different criteria that we have used previously to identify such cells and essentially what you can see here is that for example grid scores are around zero.
371
+ [3416.080 --> 3444.080] That means that it's not different from what you would expect by chance and also for the head direction tuning it's what you see in the middle column is the object vector cells when there's no object and then to the right you see the object vector cells when there is the object present and what you see in the left column here or the left one is the rest of the cell so it's not really different from the population.
372
+ [3444.080 --> 3454.080] So low head direction tuning, low grid tuning and not really definitely not more border like activity than border cells.
373
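For reference, one widely used form of the grid score correlates a cell's spatial autocorrelogram with rotated copies of itself; a minimal sketch (the exact criteria and thresholds used in the study may differ):

import numpy as np
from scipy.ndimage import rotate

def grid_score(autocorr):
    """Grid score: 60/120-degree rotational symmetry minus 30/90/150-degree symmetry."""
    def corr_at(angle):
        r = rotate(autocorr, angle, reshape=False, order=1, mode="constant", cval=np.nan)
        ok = ~np.isnan(autocorr) & ~np.isnan(r)
        return np.corrcoef(autocorr[ok], r[ok])[0, 1]
    return min(corr_at(60), corr_at(120)) - max(corr_at(30), corr_at(90), corr_at(150))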
+ [3454.080 --> 3473.080] However, yeah I'll skip this but however there is some overlap with border cells and that may be not so surprising because border is also an object so I mean how you really distinguish those because a border is just an object that is elongated in one direction so when does the border become an object and when there is no object.
374
+ [3474.080 --> 3503.080] So this is an object becoming a border I think this is not totally obvious where where the boundary is so but using the criteria we have used to identify both object vector cells and border cells and we do find that there's a small subset 11 out of approximately 150 cells that actually satisfy both criteria and you see some examples over here so if you look at the bottom first you see a typical border cell recorded with no object present then the object is interesting.
375
+ [3503.080 --> 3532.080] And then the object is introduced here in the middle you see a white dot here and then the cell adopts a field on one side just like it helps for the border and here at the bottom you see another cell of the same type which flies along the border of this cylinder and then when you introduce the object in the middle here you get another field as well but many many don't most actually don't so you see an example at the top here.
376
+ [3532.080 --> 3560.080] Where there's no object that is a border field along one border here and then you also see that in the neighboring cell or in the one below here and you introduce the object and nothing happens so what is the difference between these cells it's not quite clear but there are many many things that suggest they are not just they are not object vectors cells are not just border cells that
377
+ [3560.080 --> 3589.080] in there are different in many ways so what we show here is that this shows just the relationship between the orientation of firing of these cells so this is the direction of firing relative to the border so this is for the border cells and you can see that they essentially line up along the orientations of the walls as expected.
378
+ [3590.080 --> 3619.080] But when it comes to the object vectors cells which you see here to write they have all kinds of orientations and the same with the distance from the object versus the wall so this shows this shows for a border cells so this shows the orientation of the object vector field versus orientation of the border field for those cells that had fields in both and you can see there is really no correlation you would expect bands along the parallel with the diagonal if they were in the middle.
379
+ [3620.080 --> 3643.080] And the field distance so this is the distance from the wall or the border this is the distance from the object for those cells that fired in relation to both and you can see that the distance from to the object is much larger so this could be because all the way the cells are defined but nonetheless they are different in many ways.
380
+ [3643.080 --> 3672.080] So then to try to tidy up in that then we have an ongoing work which is the Boston Anderson's work where he tried to manipulate the shape of the objects to make them more or less border like so first of all he tried to ask whether it's the height of the objects that matters so he had small objects and then big objects and then put them in different same place in the same environment and then could see that if they're very very small
381
+ [3673.080 --> 3696.080] sometimes they don't elicit object vector fields but consistently as they get larger and the same when he changes the width of the objects so this shows anything from about 2 cm width to 30 cm width and you can see that the cells fire in the same orientation in the same way regardless.
382
+ [3696.080 --> 3725.080] So he even tried to morph the objects from what we called an object originally to a border or a wall like you see the object here getting bigger and bigger and then going back so essentially the cell fires all the time but this cell which clearly is an object vector cell by definition as we had it still doesn't even it fires along most of the wall here it never fires along the peripheral walls.
383
+ [3726.080 --> 3740.080] So I think these cells have many properties that distinguish them although it still yet to be finally determined what is the difference between an object and the border.
384
+ [3740.080 --> 3769.080] So finally, before I leave this topic, I just want to say that these cells have some similarities with cells that have been recorded before, so first of all I want to mention the object cells, not object vector cells but the object cells, of the lateral entorhinal cortex, which Jim Knierim and his students have observed for many years, but these cells essentially fire at or around the object.
385
+ [3770.080 --> 3798.080] So they are different in that sense but the object vector cells are more similar possibly identical or at least more similar to what they call the landmark vector cell in the hippocampus which are cells that also fire displaced from the object so this shows four objects and this shows the firing fields which are in this case on the southeast side or two of the objects.
386
+ [3798.080 --> 3815.080] So they are different in some ways, like for example many of them fire only in relation to some objects and not others, and as I understood it, it also took quite a while for many of them to develop, whereas the ones in the entorhinal cortex are expressed from the outset.
387
+ [3815.080 --> 3844.080] So finally, just to sum up again and come back to where I started with these cells: these cells suggest that the medial entorhinal cortex may encode position in several ways, not only by a metric defined by a regular grid, but also by actually using a completely different, vector-based principle, based on locations relative to objects in these areas.
388
+ [3845.080 --> 3874.080] So I will now move on from the environment and individual objects in the environment, and then, as I promised, come back at the end to another dimension, time. We heard this morning, especially from Lynn Nadel's talk, that the hippocampus is very important for episodic memory, where space has an absolutely essential role, but space isn't all there is; there is also, in episodic memory, a time component, and our understanding of how
389
+ [3875.080 --> 3897.080] time is really encoded has not been at the level of our understanding of space. So this is the work of Albert Tsao, who was a PhD student in our lab, and it is also a collaboration with the Knierim lab, which has contributed some of the data.
390
+ [3897.080 --> 3918.080] This will be presented in more detail tomorrow in symposium so I don't want to steal the whole show from Albert. So I will just present it very briefly and put it into context and then hopefully many of you will find an opportunity to listen to Albert himself tomorrow.
391
+ [3918.080 --> 3932.080] But let's put it in some background. So what do we know about encoding of time in the hippocampus? There are at least two aspects that are worth emphasizing.
392
+ [3932.080 --> 3961.080] First we have the so-called time cells, which are cells that were described initially by Pastalkova et al. from the Buzsáki lab and then followed up more extensively by a series of studies from the Eichenbaum lab, which showed that, in the original task, when rats run in a certain pattern, like in a figure-eight pattern like you see here,
393
+ [3962.080 --> 3976.080] and then stop in a running wheel to run for delay until they continue to run in the maze again. Then during that delay the cells actually fire at certain times in the interval.
394
+ [3976.080 --> 4000.080] This is plotted here by neuron number 1 to neuron 30 here, and then you have time in the wheel when they run, and what you can see is that these cells fire at specific times during the interval, and that it's a very orderly firing: just like cells fire orderly in space when rats for example run on a linear track, they fire in a certain order when they run on the wheel.
395
+ [4000.080 --> 4012.080] Even though they don't move through space at all, so it is not position, and this even happens when they control for movement.
396
+ [4012.080 --> 4024.080] So this was proposed then to show that hippocampus cells can actually also express time the very same cells that express space in other contexts.
397
+ [4024.080 --> 4042.080] So this is so called time cells there's a lot of attention to that now but nonetheless these cells this is described across time scales of not much more than 10 seconds or a little bit more and probably this also has to be learned.
398
+ [4042.080 --> 4066.080] But then there's a totally different expression of time in the hippocampus. Going first back to studies again by Eichenbaum, this is a study from 2007 where rats were trained in an odor sequence memory task, but the essence of it is shown in this figure.
399
+ [4066.080 --> 4083.080] So this is the trial lag or the distance between trials and on the y-axis you have the differences in the population activity and what you can see is that regardless of where they actually find their food the distance increases with time.
400
+ [4083.080 --> 4094.080] So slowly there's a change in which cells are active at any given time in the hippocampus, and that could be an expression of time.
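The slow change in which cells are active can be quantified as a population-distance-versus-lag curve. Below is a minimal sketch of that idea, not the original analysis: the array shapes, the correlation-based distance and the synthetic drifting data are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_cells = 24, 80
# rates[i, j] = mean firing rate of cell j on trial i (synthetic, with a slow drift added)
rates = rng.gamma(shape=2.0, scale=1.0, size=(n_trials, n_cells))
rates += 0.05 * np.arange(n_trials)[:, None]

def population_distance(a, b):
    """1 - Pearson correlation between two population rate vectors."""
    return 1.0 - np.corrcoef(a, b)[0, 1]

# Average the pairwise distance at every possible trial lag
lags = np.arange(1, n_trials)
mean_dist = [np.mean([population_distance(rates[i], rates[i + lag])
                      for i in range(n_trials - lag)]) for lag in lags]

for lag, d in zip(lags[:5], mean_dist[:5]):
    print(f"lag {lag:2d}: mean population distance {d:.3f}")
```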
401
+ [4094.080 --> 4116.080] And then work from the Leutgeb lab, and also a study done in Mark Schnitzer's lab, showed that in CA1 this is strong, yes, but it is even stronger in the CA2 area of the hippocampus.
402
+ [4116.080 --> 4145.080] This is where we were when Albert Tsao started, but we wanted to find out more about this, and also where it came from, and try to understand how such a code is expressed outside of the hippocampus. We then directed our attention to the lateral entorhinal cortex, which I haven't talked much about at all today, but where cells are not really very strongly spatially selective.
403
+ [4146.080 --> 4162.080] This was shown by Jim Knierim's group about the same time as we found the grid cells, but we wondered whether much of the activity of the lateral entorhinal cortex could actually be explained by a role in coding of time.
404
+ [4162.080 --> 4191.080] What Albert did was to test rats in a sequence of trials extending over a period of more than one hour, alternating between two types of environments, a black environment and a white environment, meaning that the walls are either black or white but otherwise totally similar, alternating in a semi-random sequence over a series of 12 trials, and then in between there are rest trials, or post trials.
405
+ [4192.080 --> 4201.080] So that total is 24 different recording epochs and as I said total a little bit more than one hour.
406
+ [4201.080 --> 4210.080] Then he asked: what is the activity of cells in the lateral entorhinal cortex during recording over this time sequence?
407
+ [4210.080 --> 4234.080] And first of all, he did find some cells in the lateral entorhinal cortex that are strongly modulated by time, meaning that their firing rates change in various ways across the experiment, and this is not due to instability of the recordings, because he showed in many ways that they are totally stable.
408
+ [4234.080 --> 4262.080] You see the activity of the cells shown here for four different types of cells, across the alternating trials or sessions, the black and white trials. What you can see here, it is maybe a little bit difficult to see, is that the cells ramp either up or down within trials, as in this one: firing begins low and gets higher and higher, then the next trial begins and again it gets higher and higher, and so on.
409
+ [4262.080 --> 4283.080] Or it might have activity that could ramp up or down over the whole one hour or one hour plus session or you may have combinations where the activity ramps up or down just in certain just in the black boxes or in the white boxes and so on.
410
+ [4283.080 --> 4309.080] So based on this, he then performed a general linear model analysis, a GLM, and identified the fraction of cells that have significant modulation by the various factors that he put into the analysis, which included the color of the wall, black or white, the position of the rat, or the combination of the two, or time, or mixtures of them all.
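To make the logic of such a per-cell test concrete, here is a minimal hedged sketch, not the published pipeline: a Poisson GLM of one cell's spike counts is fit with and without a time regressor, and a likelihood-ratio test decides whether the cell counts as significantly time-modulated. The regressor names, bin counts and synthetic data are assumptions.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(1)
n_bins = 2000
pos_x = rng.uniform(0, 1, n_bins)          # animal position per time bin
pos_y = rng.uniform(0, 1, n_bins)
wall = rng.integers(0, 2, n_bins)          # 0 = black box, 1 = white box
t = np.linspace(0, 1, n_bins)              # elapsed session time, normalised

# Synthetic spike counts that ramp with time, so the test should flag "time"
rate = np.exp(-1.0 + 1.5 * t + 0.2 * wall)
spikes = rng.poisson(rate)

X_reduced = sm.add_constant(np.column_stack([pos_x, pos_y, wall]))
X_full = sm.add_constant(np.column_stack([pos_x, pos_y, wall, t]))

fit_reduced = sm.GLM(spikes, X_reduced, family=sm.families.Poisson()).fit()
fit_full = sm.GLM(spikes, X_full, family=sm.families.Poisson()).fit()

lr_stat = 2 * (fit_full.llf - fit_reduced.llf)   # likelihood-ratio statistic
p_value = chi2.sf(lr_stat, df=1)                 # one extra parameter (time)
print(f"LR statistic {lr_stat:.1f}, p = {p_value:.2g} -> counted as time-modulated if p < 0.05")
```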
411
+ [4309.080 --> 4338.080] And what he found, first of all, if you begin at the bottom with the medial entorhinal cortex and with CA3: not surprisingly, there's a very strong influence of position, which you can see both here in the medial entorhinal cortex and also to some extent in CA3, and of course not much in the lateral entorhinal cortex. When it comes to the color of the wall, it is quite low in all of them, but there is some in CA3 and in the lateral entorhinal cortex.
412
+ [4339.080 --> 4358.080] But when it comes to time, you see that both CA3 and the medial entorhinal cortex are quite low, whereas in the lateral entorhinal cortex it's a very high proportion that has significant modulation, about 25 to 30% that passed the significance threshold.
413
+ [4358.080 --> 4374.080] It doesn't mean that the others don't; they may have weaker influences. And then CA2 is somewhat in between, but none of them really reach the level of the lateral entorhinal cortex.
414
+ [4374.080 --> 4386.080] But then you could ask: well, these are individual cells, but could it be that those cells that don't pass that threshold perhaps also contribute to the coding? So he took a totally different approach then.
415
+ [4386.080 --> 4406.080] He looked at the whole population instead and used a machine learning approach, a linear support vector machine, to determine the contribution of time for the three areas: lateral entorhinal cortex, CA3 and medial entorhinal cortex.
416
+ [4406.080 --> 4425.080] So what he did was that he chopped the data into 10 blocks, trained the classifier on 9 of them, and then used that to predict, for the 10th one as a test case, what time the activity was actually recorded in, across those 24 epochs.
417
+ [4425.080 --> 4452.080] And the success of that is shown here, in this confusion matrix: what you have is the predicted epoch on the horizontal axis and the actual epoch on the y-axis, and the color indicates the proportion of hits. You can see that almost every case is a hit here, and very few were actually misses.
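A minimal sketch of this kind of population decoding, under assumptions (synthetic data, scikit-learn's LinearSVC standing in for whatever solver was actually used): each population rate vector is labelled with its epoch, a linear support vector machine is cross-validated over ten blocks, and the held-out predictions are summarised in a normalised confusion matrix.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(2)
n_epochs, samples_per_epoch, n_cells = 24, 10, 60

# Synthetic data: each epoch gets its own mean population vector plus noise
epoch_means = rng.normal(size=(n_epochs, n_cells))
X = np.vstack([m + 0.5 * rng.normal(size=(samples_per_epoch, n_cells)) for m in epoch_means])
y = np.repeat(np.arange(n_epochs), samples_per_epoch)

clf = LinearSVC(C=1.0, max_iter=10000)
# 10-fold cross-validation: train on 9 blocks, predict the left-out block
y_pred = cross_val_predict(clf, X, y, cv=10)

cm = confusion_matrix(y, y_pred, normalize="true")   # rows = actual epoch, columns = predicted
print(f"decoding accuracy: {np.mean(y_pred == y):.2f}")
print("per-epoch hit rate (diagonal):", np.round(np.diag(cm), 2))
```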
418
+ [4452.080 --> 4464.080] So you could almost all the time predict when the recording actually took place. This is not the case in CA3, as you can see here, and only weakly the case in the medial entorhinal cortex.
419
+ [4464.080 --> 4493.080] This is corrected for the size of the sample; I don't want to go into that. And it was not only able to predict epochs or blocks of trials: it could even go within trials and predict the right 20-second block, or even 10-second block, or even one second, of course with much lower success if it was one second, but you still see the diagonal here at the bottom; it is the same.
420
+ [4494.080 --> 4523.080] So it actually means that this representation of time was present at multiple timescales. Finally, then, one could ask: is this an internal clock that is present in the lateral entorhinal cortex, a clock-like thing that goes on regardless of what happens, or is it actually dependent on the experience of the animal? It may turn out to be the latter, which I will show in this final data set.
421
+ [4524.080 --> 4540.080] So the final slide, which again now is a new type of task: the rat is not walking in the open field box like it did before, but now it's running in a maze in a figure-eight pattern, so alternating left and right on every second trial.
422
+ [4540.080 --> 4569.080] What he found in this task, and also in one other task where the animals just run in a circle over and over again, is that in those tasks, if you then decode the identity of the trials, which trial the recording was from, the success is actually lower than it was in the open field; it's much reduced in those two tasks, in the figure-eight task and on the circle track,
423
+ [4569.080 --> 4579.080] compared to the task where the rat was walking in the open field, and you can see here that there's very little really, very little.
424
+ [4579.080 --> 4598.080] There is little along the diagonal here. But at the same time as the success of hitting the right trial was lower, whether it was trial number five or seven or nine, when it comes to when in the trial the recording was from, it's reversed.
425
+ [4598.080 --> 4616.080] So what he did was that he chopped up segments each time the rat passed a certain point on this track, and he did that for every lap that the rat went through, and then he asked: is this from an early lap or a late lap in that trial? Then the success is actually reversed.
426
+ [4616.080 --> 4626.080] So now, in this task, the hit success is much higher than it used to be when the rat was running freely in the open field.
427
+ [4626.080 --> 4645.080] So this then suggests that the encoding of time in the lateral entorhinal cortex is not a fixed thing; it depends on the experience that the animal has, and in this network the representation of time can actually be adapted to what the animal experiences, going
428
+ [4645.080 --> 4674.080] from a kind of absolute, in quotation marks, representation of time when it is just running freely, to one where you actually encode time relative to some temporal landmark, like for example the start of the trial or the passage of a certain point on the track. And that then brings me back to the person who introduced me: Mike was kind enough to allow me to show one slide from his own data.
429
+ [4674.080 --> 4693.080] I asked him for it because Maria Montchal from his lab has done work in humans, human fMRI studies, where they actually have data that are entirely consistent with the data from the lateral entorhinal cortex in the rats.
430
+ [4693.080 --> 4704.080] So what they find very very very briefly summarized is that they let subjects view a movie a famous TV thing that I never heard about.
431
+ [4704.080 --> 4717.080] But the point is that after the movie they were asked to put on a timeline when a still image from the movie was shown was this early or later whenever and then they could measure how well they hit.
432
+ [4717.080 --> 4727.080] And the hit rate was actually very highly correlated with activity in the lateral entorhinal cortex, with no correlation with the medial entorhinal cortex, and it was also high in the
433
+ [4727.080 --> 4749.080] perirhinal cortex, which is very strongly linked to the lateral entorhinal cortex. So I think those two sets of data fit very well together, and with that I come to the conclusion: these are people from the institute and from the lab, as Mike mentioned.
434
+ [4749.080 --> 4773.080] May-Britt Moser participated in all of it; there are also other people listed here, so many that I can't really mention them all, but for the new work I would again mention Albert Tsao, who is going to present it tomorrow, and also, for the object vector cells, especially Øyvind Høydal, and for the more recent work, Sebastian.
435
+ [4773.080 --> 4788.080] A lot of people paid for this, and thanks to those who sponsor the lecture. So with that I'm done, and thank you for your patience sitting here so long; I hope you'll have a nice evening.
transcript/allocentric_Z8ckbP8bHSs.txt ADDED
@@ -0,0 +1,447 @@
1
+ [0.000 --> 26.080] Welcome to your participants.
2
+ [26.080 --> 32.520] In today's module we shall discuss chronemics, which is the study of time in the context of
3
+ [32.520 --> 35.720] non-verbal communication.
4
+ [35.720 --> 42.640] Chronemics is a subcategory of the non-verbal aspects of communication which has emerged as the
5
+ [42.640 --> 45.120] studies in this field broadened.
6
+ [45.120 --> 51.840] Conventionally, time has been treated as an abstract concept and it is in this context
7
+ [51.840 --> 58.280] that linguistically we have responded to this idea representing it in different idioms
8
+ [58.280 --> 59.320] and phrases.
9
+ [59.320 --> 65.520] For example, quality time, or time and tide wait for none.
10
+ [65.520 --> 72.560] However, we find that as the studies in the field of non-verbal aspects of communication
11
+ [72.560 --> 79.840] started to broaden their perspective in the areas of organizational behavior, business
12
+ [79.920 --> 87.080] communication as well as in anthropology people started to study the dimensions of time in
13
+ [87.080 --> 89.800] particular contexts.
14
+ [89.800 --> 97.520] A communication based study of time is dependent on how people in different cultures, in different
15
+ [97.520 --> 103.600] work cultures perceive and structure time in their interactions with other, in their
16
+ [103.600 --> 107.600] dialogues as well as in their relationships with others.
17
+ [107.600 --> 114.040] In the area of communication we also study how in different ways people respond to it
18
+ [114.040 --> 119.720] and thereby what type of non-verbal messages they try to communicate with it.
19
+ [119.720 --> 125.800] Our values in the context of time are reflected in our attitudes as well as in other aspects
20
+ [125.800 --> 133.280] of non-verbal communication and these can be understood in terms of how do we spend our
21
+ [133.280 --> 139.720] time, do we waste it, do we keep on postponing things, are we able to utilize the time to
22
+ [139.720 --> 141.720] its maximum.
23
+ [141.720 --> 147.280] There are of course individual variations in the way we respond to our understanding of
24
+ [147.280 --> 154.080] time and we evaluate, but at the same time we find that the cultural impact on this aspect
25
+ [154.080 --> 157.440] of NVC is also palpable.
26
+ [157.440 --> 164.760] As human beings we have a complex temporal identity which is constructed at different
27
+ [164.760 --> 171.280] levels, at personal as well as social, cultural and professional levels.
28
+ [171.280 --> 178.280] All types of verbal messages as well as non-verbal messages have their own temporalities, they
29
+ [178.280 --> 183.120] have a point of beginning and a point at which they end.
30
+ [183.120 --> 188.320] There has been something happening before that point and there would of course something
31
+ [188.320 --> 190.760] else would take place after that.
32
+ [190.760 --> 197.440] So our communication in the context of time or in the context of the larger phenomena of
33
+ [197.440 --> 203.320] non-verbal aspects of communication is not outside the context.
34
+ [203.320 --> 210.720] Chronemics asks for a more dynamic way of studying our professional interactions with the emotional
35
+ [210.720 --> 217.360] understandings and connotations which we have individually, socially and culturally with
36
+ [217.360 --> 219.520] time.
37
+ [219.520 --> 225.800] Studies of chronemics have developed from the interdisciplinary literature on time and they
38
+ [225.800 --> 234.360] have been also supported by researches in diversified fields of biology or sociology,
39
+ [234.360 --> 237.800] psychology as well as anthropology.
40
+ [237.800 --> 243.800] People have always been associated with studies of time in different ways.
41
+ [243.800 --> 250.320] But before we started using the term chronemics, or even before we applied these understandings
42
+ [250.320 --> 255.800] in the area of business and professional communication, a number of scholars have to
43
+ [255.800 --> 261.280] be listed to acknowledge their contribution for the development of this idea.
44
+ [261.280 --> 268.040] From the modern perspectives we find that the idea was first of all developed by E. Robert
45
+ [268.040 --> 275.040] Kelly who is better known as E. R. Clay and the same idea was carried forward by William
46
+ [275.040 --> 282.600] James whom we students of English literature recognize primarily for his use of the phrase
47
+ [282.600 --> 286.720] stream of consciousness technique in his works.
48
+ [286.720 --> 292.320] The idea was also carried forward by George Herbert Mead and these leading developers of
49
+ [292.320 --> 301.200] the study of human acts and presentness alerted us to this idea that the time is not governed
50
+ [301.200 --> 304.800] only by the external clock time.
51
+ [304.800 --> 311.280] William James suggested that there is also an internal dimension of time which he called
52
+ [311.280 --> 312.680] as the re.
53
+ [312.680 --> 318.440] Another philosopher whom we have to acknowledge at this stage is Harold Innis, the famous
54
+ [318.440 --> 326.880] Canadian communicologist who published his famous book, Changing Concepts of Time in 1952.
55
+ [326.880 --> 336.640] He studied the impact of time as well as space for the development of civilization.
56
+ [336.640 --> 344.320] The ideas of Harold Innis were further enriched by Marshall McLuhan, who discussed time and
57
+ [344.320 --> 347.240] human communication in several works.
58
+ [347.240 --> 354.320] We primarily know McLuhan for introducing us to the term global village in his works
59
+ [354.320 --> 359.640] but he has also talked about the concept of time.
60
+ [359.640 --> 367.360] In 1952 only the same year in which Harold Innis has published his book, Edward T. Hall
61
+ [367.360 --> 372.040] also published his book, The Process of Change.
62
+ [372.040 --> 378.320] Hall was to write periodically about time and the socio cultural relations over the next
63
+ [378.320 --> 386.200] four decades and his ideas have encouraged other researchers to take up similar studies.
64
+ [386.200 --> 393.920] The actual term chronemics was coined in 1972 by Fernando Poyatos, a Canadian linguist
65
+ [393.920 --> 396.000] and semiotician.
66
+ [396.000 --> 401.200] In dealing with the communication system of the speaker actor, Poyatos briefly discussed
67
+ [401.200 --> 407.080] chronemics, which concerned conceptions in the handling of time as a biopsychological and
68
+ [407.080 --> 410.720] cultural element of social interactions.
69
+ [410.720 --> 416.640] He had introduced this idea in a cross-cultural study of paralinguistic alternants
70
+ [416.640 --> 422.520] in face to face interaction which was published in 1975.
71
+ [422.520 --> 430.680] As examples of chronemically significant aspects in communication, he included the cross-cultural
72
+ [430.680 --> 436.860] differences in the duration of ordinary social visits, response latency among different
73
+ [436.860 --> 443.740] cultural groups when a question is asked or for example a decision is to be made.
74
+ [443.740 --> 450.620] He also looked at conversational silences and pauses as part of cultural chronemics.
75
+ [450.620 --> 457.540] Continuing with these observations, later researchers suggest that since temporal
76
+ [457.540 --> 462.900] experience depends on the changing of something, chronemics is probably best conceived of
77
+ [462.980 --> 467.300] as a kind of paralinguistic or suprasegmental feature.
78
+ [467.300 --> 474.660] Tom Bruneau wrote the first article on time and non-verbal communication in 1974 and
79
+ [474.660 --> 482.060] he also attempted to define chronemics and outlined its characteristics in 1977.
80
+ [482.060 --> 490.700] So it is in this decade of the 1970s that the impact of chronemics
81
+ [490.780 --> 494.900] was being talked about by various research scholars.
82
+ [494.900 --> 502.460] Since these early works, we find that a number of works and commentaries have come out on
83
+ [502.460 --> 507.060] the significance of chronemics in the field of professional communication.
84
+ [507.060 --> 514.620] I would base my initial discussions or this concept on the findings of Edward T. Hall.
85
+ [514.620 --> 522.540] He has recognized three time systems and named them as technical formal and informal.
86
+ [522.540 --> 527.740] Technical time according to him is the scientific measurement of time which is associated with
87
+ [527.740 --> 529.980] the precision of keeping the time.
88
+ [529.980 --> 537.980] The way different mechanical devices for example, clock and watches primarily are used to keep
89
+ [537.980 --> 538.980] time.
90
+ [538.980 --> 546.100] Formal time is the time which we learn on the basis of our social conditioning.
91
+ [546.100 --> 552.060] Oveston Turner have quoted the example of the USA and have talked about how the American
92
+ [552.060 --> 555.740] society is being governed by the clock and calendar.
93
+ [555.740 --> 561.380] People have been socially conditioned to think that when it is 1 p.m. it is normally the
94
+ [561.380 --> 567.540] time to work and when it is 1 a.m. it is normally the time to sleep.
95
+ [567.540 --> 575.740] At the same time we find that in our contemporary cultures, our arrangement of time is broadly fixed
96
+ [575.740 --> 577.820] and rather methodical.
97
+ [577.820 --> 584.340] So to say that the majority of the people follow similar patterns at workplace and in their
98
+ [584.340 --> 585.940] personal lives also.
99
+ [585.940 --> 591.260] Informal time is normally our understanding of time at a personal level.
100
+ [591.340 --> 599.620] Hall has included three different concepts within it, and these are duration, punctuality and activity.
101
+ [599.620 --> 606.540] Duration is related with the time which is formally allocated to a particular event.
102
+ [606.540 --> 614.020] For example, in a meeting for a particular agenda item we might have allocated 40 minutes.
103
+ [614.020 --> 621.980] But at the same time, sometimes in certain cultures our estimates can be normally imprecise
104
+ [621.980 --> 628.620] whereas in some cultures as we will later see these estimates have to be as close to
105
+ [628.620 --> 634.620] precision as possible and at the same time there are personal definitions also.
106
+ [634.620 --> 640.140] For example, if I say I would be there within 2 minutes then what exactly I mean by these
107
+ [640.140 --> 648.740] 2 minutes would it be 1 hour or exactly 2 minutes or maybe somewhere around 15 to 20 minutes.
108
+ [648.740 --> 655.620] Another aspect which is associated by Hall with informal time is punctuality, which is basically
109
+ [655.620 --> 660.140] our promptness associated with the way we keep time.
110
+ [660.140 --> 665.580] We are normally considered to be punctual when we arrive at the designated place at the
111
+ [665.580 --> 667.260] given time.
112
+ [667.260 --> 674.820] Some people are tardy and habitual latecomers, and at the same time there are cultural associations
113
+ [674.820 --> 676.140] also.
114
+ [676.140 --> 682.700] In certain cultures for example punctuality is not exactly a value because late-coming
115
+ [682.700 --> 687.700] is often associated with our status and perceptions of power.
116
+ [687.700 --> 694.260] Activity is also another chronemic value; our use and management of time is defined in a cultural
117
+ [694.260 --> 695.260] manner too.
118
+ [695.260 --> 701.380] Other aspects which may be associated with our concept of time is our willingness to
119
+ [701.380 --> 710.740] wait the way we maintain time during our interactions and to what extent the use of time punctuality
120
+ [710.740 --> 717.020] etc are a reflection of our status and a part of the power game.
121
+ [717.020 --> 724.700] The way we look at time we maintain our association with it and the way we value it affects
122
+ [724.700 --> 726.100] the life is time.
123
+ [726.100 --> 733.020] It is also a reflection of our own work culture as well as at a larger scale it becomes a reflection
124
+ [733.020 --> 736.420] of the work culture of an organization.
125
+ [736.420 --> 741.620] It also affects our communication and professional relationships too in the long run.
126
+ [741.620 --> 747.780] Hall has also pointed out that time can be an enigmatic characteristic as far as our
127
+ [747.780 --> 750.340] social pressures are concerned.
128
+ [750.340 --> 756.700] We are encouraged to use time wisely and at the same time we may also be cautioned not
129
+ [756.700 --> 759.100] to be too obsessive about it.
130
+ [759.100 --> 765.900] The way different cultures understand the function of time can be understood from several
131
+ [765.900 --> 768.300] different angles.
132
+ [768.300 --> 775.420] Hall has treated time as a language as a thread which runs through cultures.
133
+ [775.420 --> 782.460] In his opinion it acts as an organizer and at the same time it also acts as a message
134
+ [782.460 --> 783.660] system.
135
+ [783.660 --> 789.780] It reveals how people treat each other and at the same time it also tells us about the
136
+ [789.780 --> 791.940] things which people value.
137
+ [791.940 --> 800.220] Hall has taken a historical perspective as far as the human concept of time is concerned.
138
+ [800.220 --> 806.820] He suggests that our consciousness of time has emerged from the way we learn to respond
139
+ [806.820 --> 813.220] to natural rhythms which were associated with changes in the season, with changes during
140
+ [813.220 --> 819.700] the days, annual cycles of different crops etc.
141
+ [819.700 --> 825.940] Though the hidden dimensions of time remain to be exceedingly complex, basic time systems
142
+ [825.940 --> 832.700] can be termed as possessing either monochronic or polychronic orientations.
143
+ [832.700 --> 838.900] Hall suggests that most of our cultures are either monochronic or polychronic.
144
+ [838.900 --> 845.740] Although these patterns which are almost polar opposites cannot be applied rigidly to all
145
+ [845.740 --> 852.500] the cultures, a given culture is likely to have a preference for either one of these
146
+ [852.500 --> 855.500] and would be more inclined towards it.
147
+ [855.500 --> 862.820] However, there may be cultural and ethnic variations, a particular culture may be inclined towards
148
+ [862.820 --> 867.060] a particular preference or orientation in terms of time.
149
+ [867.060 --> 870.620] But within that culture we may find some smaller groups.
150
+ [870.620 --> 878.700] For example, ethnic groups or subcultural groups who are disposed in a different manner and
151
+ [878.700 --> 882.300] have retained a different association with time.
152
+ [882.300 --> 888.620] In general, Hall suggests that northern European and American cultures are monochronic and
153
+ [888.620 --> 891.900] Mediterranean cultures are polychronic.
154
+ [891.900 --> 897.580] So, how do we look at the differences between the monochronic and polychronic orientations
155
+ [897.580 --> 899.260] of time?
156
+ [899.260 --> 906.180] A monochronic understanding of time is linear and it is governed by our clock.
157
+ [906.180 --> 913.820] In comparison to it, a polychronic culture is a non-linear one and it is more oriented
158
+ [913.820 --> 915.460] towards time.
159
+ [915.460 --> 922.020] It prefers relationships in terms of the idea of keeping time.
160
+ [922.020 --> 928.620] Monochronic culture also has a short term orientation in relation with the polychronic
161
+ [928.620 --> 930.780] which is a long term orientation.
162
+ [931.340 --> 938.140] Whereas monochronic cultures prefer precision, we find that the polychronic cultures understand
163
+ [938.140 --> 940.740] that time has a particular flow.
164
+ [940.740 --> 947.220] The basic difference between these two orientations has been beautifully summed up by McCool,
165
+ [947.220 --> 953.420] when he says that the monochronic cultures are based primarily on clock time, whereas
166
+ [953.420 --> 958.220] polychronic cultures are typically based on people time.
167
+ [958.220 --> 963.500] And this is by far the most significant difference between the two.
168
+ [963.500 --> 970.020] These cultural orientations towards the way we value time as people are reflected in
169
+ [970.020 --> 973.540] our day to day activities also.
170
+ [973.540 --> 981.020] A culture which has a monochronic orientation assumes a linear order of things and it suggests
171
+ [981.020 --> 986.580] that things have to be completed in a sequential pattern.
172
+ [986.580 --> 995.380] One thing has to follow the other and A should always proceed B and A should end before
173
+ [995.380 --> 998.060] the task B begins.
174
+ [998.060 --> 1004.660] And therefore, monochronic cultures value those tools and systems which increase focus
175
+ [1004.660 --> 1007.540] and help us in saving time.
176
+ [1007.540 --> 1015.540] They look at time as money as value which has to be structured and therefore, their culture
177
+ [1015.540 --> 1022.740] and therefore, the work cultures in these monochronic cultures are governed by well-structured
178
+ [1022.740 --> 1025.620] and well-defined schedules.
179
+ [1025.620 --> 1032.300] The focus in these cultures is somehow to reduce distractions during plant interactions
180
+ [1032.300 --> 1038.140] and they always try to save time as much as possible.
181
+ [1038.140 --> 1044.660] The non-verbal clues which can be associated with this orientation are linked with certain
182
+ [1044.660 --> 1049.900] tendencies which are exhibited in individual and it over cultures.
183
+ [1049.900 --> 1056.620] For example, the capability and tendency to plan ahead to schedule things to schedule meetings
184
+ [1056.620 --> 1057.620] etc.
185
+ [1057.620 --> 1061.420] So, that there is no fuzziness during the day.
186
+ [1061.420 --> 1067.060] Punctuality as a value has to be there, and at the same time there is a tendency to push
187
+ [1067.060 --> 1072.620] things through the agenda so that things can end on time.
188
+ [1072.620 --> 1077.820] After at the same time, they do not want to double with so many things simultaneously
189
+ [1077.820 --> 1081.140] and they prefer to do one thing at a time.
190
+ [1081.140 --> 1087.540] The countries which are typically associated with a monochronic orientation are most of
191
+ [1087.540 --> 1094.060] the countries in northern Europe, the Scandinavian countries, Germany, the USA and Japan.
192
+ [1094.060 --> 1099.940] Hall has also pointed out that the monochronic perceptions and preferences in the cultures
193
+ [1099.940 --> 1104.860] of northern Europe and the USA are not natural.
194
+ [1104.860 --> 1112.660] They are learnt, social and cultural values and at the same time, they happen to be arbitrary.
195
+ [1112.660 --> 1118.900] He has traced the development of this attitude to the early days of industrial revolution
196
+ [1118.900 --> 1126.540] which had occurred during 1760 to 1820 and some people stretch it to 1840 also in Europe
197
+ [1126.540 --> 1128.300] in the USA.
198
+ [1128.300 --> 1135.240] The factory life required that the labour has to report at a given time and the appointed
199
+ [1135.240 --> 1141.180] hour was always announced using different types of bells or whistles etc.
200
+ [1141.180 --> 1147.740] This punctuality was necessary to maintain and sustain industrial revolution and gradually
201
+ [1147.740 --> 1151.780] these attitudes have seeped into these cultures.
202
+ [1151.780 --> 1158.660] And therefore monochronic cultures place a paramount value on schedules, on task, on
203
+ [1158.660 --> 1165.260] completing the things by the deadline and therefore hall has gone to the extent to say that
204
+ [1165.260 --> 1172.140] in the American business world, the schedule is sacred and time is tangible.
205
+ [1172.140 --> 1177.340] Because our preference for the monochronic attitude encourages us to take up only one
206
+ [1177.420 --> 1183.380] thing at a time, people who are governed by it do not like to be interrupted and also
207
+ [1183.380 --> 1188.020] do not prefer to suddenly change the pre-decided scheduling.
208
+ [1188.020 --> 1194.340] Hall has also been able to point out certain constraints which are associated in his opinion
209
+ [1194.340 --> 1197.580] with the monochronic preference for time.
210
+ [1197.580 --> 1205.500] He says that this perception of time seals people from one another and as a result intensifies
211
+ [1205.580 --> 1208.980] some relationships at the cost of others.
212
+ [1208.980 --> 1215.660] He has suggested that this time preference is like a room in which some people are allowed
213
+ [1215.660 --> 1219.020] to enter while others are kept out of it.
214
+ [1219.020 --> 1226.540] The rigidity and the focus to keep the schedules intact, conditions people to think that those
215
+ [1226.540 --> 1233.860] people who do not subscribe to similar value system in the context of time are basically
216
+ [1233.940 --> 1240.540] inefficient and unreliable and at the same time they are rather disrespectful.
217
+ [1240.540 --> 1245.660] Hall feels that even though most of the Western cultures are dominated by the monochronic
218
+ [1245.660 --> 1252.140] perception of time, it is not a natural focus of the way human beings have evolved
219
+ [1252.140 --> 1259.140] and in his opinion this preference seems to violate many of humanity's innate rhythms.
220
+ [1259.860 --> 1265.420] It does not mean however that he prefers a different perception of time.
221
+ [1265.420 --> 1271.260] It is a part of his analysis only and has to be perceived in the same manner.
222
+ [1271.260 --> 1280.260] In contrast we find that polychronic orientation encourages a certain flux and non-linearity.
223
+ [1280.660 --> 1288.660] These cultures value relationship and predictions more than they value rigidity towards time.
224
+ [1289.620 --> 1296.460] There is always more emphasis on finishing the natural agenda first rather than keeping
225
+ [1296.460 --> 1299.660] the schedule in a mechanical manner.
226
+ [1299.660 --> 1305.380] For example if two people who belong to this culture meet on the street corner after
227
+ [1305.380 --> 1311.260] a long time they would prefer to catch on what is going on in other life first rather
228
+ [1311.260 --> 1314.060] than rushing to a 10 o clock meeting.
229
+ [1314.060 --> 1318.140] A slight delay is understandable.
230
+ [1318.140 --> 1325.140] The non verbal clues which seep into our work environment in such cultures are reflected
231
+ [1325.140 --> 1329.260] in being non punctual during the meetings.
232
+ [1329.260 --> 1336.260] Non punctuality is not necessarily related with a negative work culture rather it has
233
+ [1336.260 --> 1342.220] to be understood with a certain empathy if people tend to arrive late.
234
+ [1342.220 --> 1345.260] Meetings are used for building relationships.
235
+ [1345.260 --> 1350.700] The focus on finishing the agenda is not typically over there.
236
+ [1350.700 --> 1357.780] In these cultures we find that multitasking is considered as a value and therefore a certain
237
+ [1357.780 --> 1361.500] flexibility is encouraged.
238
+ [1361.500 --> 1368.220] In Latin American countries in most of the African and Arabic countries as well as in some
239
+ [1368.220 --> 1376.060] countries and certain segments in South Asia we find that a polychronic orientation towards
240
+ [1376.060 --> 1378.220] time is followed.
241
+ [1378.220 --> 1385.140] It is also followed in those sections of the society the world over which are basically
242
+ [1385.140 --> 1392.260] rural and agrarian because they follow the larger cycles of the crop and production etc.
243
+ [1392.260 --> 1397.460] And at the same time those societies which rigorously follow the religious calendars this
244
+ [1397.460 --> 1400.020] orientation is normally found.
245
+ [1400.020 --> 1407.900] In those cultures where a polychronic understanding of time is prevalent multiple timelines are
246
+ [1407.900 --> 1409.860] routinely followed.
247
+ [1409.860 --> 1416.500] It is understood if people are not able to follow the deadlines because they have preferred
248
+ [1416.500 --> 1420.580] to do some other thing within the allotted hour.
249
+ [1420.580 --> 1427.940] The tendency to view this attitude from a monochronic perspective is to view them as
250
+ [1427.940 --> 1430.540] basically chaotic or random.
251
+ [1430.540 --> 1436.200] The monochronic cultures are also primarily known as the clock cultures because for them
252
+ [1436.200 --> 1439.460] time is measured and it is of essence.
253
+ [1439.460 --> 1444.340] The punctuality which is practiced over there and the precision which is preferred in these
254
+ [1444.340 --> 1448.740] cultures is reflected in various routines also.
255
+ [1448.740 --> 1454.400] For example, keeping the time as far as the public transport is concerned is reflected
256
+ [1454.400 --> 1457.140] because of this cultural preference also.
257
+ [1457.140 --> 1463.820] In the context of the business world sometimes we find that too much of an emphasis on monochronic
258
+ [1463.820 --> 1470.820] perspective can backfire in a multicultural setting because the idea that sometimes it
259
+ [1470.820 --> 1477.540] may take years to develop a loyal customer base is not understood by such people.
260
+ [1477.540 --> 1484.580] The different ways in which cultures respond to punctuity and other time related values
261
+ [1484.580 --> 1490.860] is nicely displayed in this video.
262
+ [1490.860 --> 1496.900] I guess we all believe that time is pretty constant but around the world attitudes to
263
+ [1496.900 --> 1499.380] it differ greatly.
264
+ [1499.380 --> 1504.700] While you can set your watch by Swiss trains not all cultures break the day down into minutes
265
+ [1504.700 --> 1506.420] and seconds.
266
+ [1506.420 --> 1517.460] For other cultures punctuality is a very different matter.
267
+ [1517.460 --> 1521.660] A German sales executive trying to open doors in a number of African countries scheduled
268
+ [1521.660 --> 1523.500] two meetings a day.
269
+ [1523.500 --> 1526.260] For him quite easy going.
270
+ [1526.260 --> 1529.980] His first meeting didn't even take place till a day later.
271
+ [1529.980 --> 1534.180] By the end of his trip he was so stressed out he could hardly operate.
272
+ [1534.180 --> 1541.940] He mistakenly thought his hosts would look at time like he did.
273
+ [1541.940 --> 1547.860] In Africa like in the Middle East or South America there they work in blocks of time half
274
+ [1547.860 --> 1550.700] a day maybe certainly not in minutes.
275
+ [1550.700 --> 1554.860] As long as they can achieve what they need in that block of time then exactly when is
276
+ [1554.860 --> 1556.580] less important.
277
+ [1556.580 --> 1560.460] That's not to say that they're less efficient or effective it's just that they work at
278
+ [1560.460 --> 1562.260] their own pace.
279
+ [1562.260 --> 1567.020] If you work in seconds then you need to adapt otherwise you're going to set yourself
280
+ [1567.020 --> 1586.300] up for a lot of resistance from your hosts and you're going to get constant disappointment.
281
+ [1586.300 --> 1588.740] And then there are cultural anomalies.
282
+ [1588.740 --> 1592.580] In French society absolute punctuality is not the highest priority.
283
+ [1592.580 --> 1598.140] But if you arrive late at a French restaurant don't expect a warm welcome.
284
+ [1598.140 --> 1603.380] The French take their food very seriously and consider lateness a sign of disrespect
285
+ [1603.380 --> 1605.580] for their culinary efforts.
286
+ [1605.580 --> 1609.580] You'd better pay some serious compliments to the waiters if you want to get back in their
287
+ [1609.580 --> 1621.620] good books.
288
+ [1621.620 --> 1626.460] The American expression time is money can be taken very literally in the US.
289
+ [1626.460 --> 1632.220] A chatty bank teller whose line's moving slowly will cause customers to become impatient.
290
+ [1632.220 --> 1635.620] And you'll also get a near full if the line has to wait because you haven't filled out
291
+ [1635.620 --> 1638.420] your forms ahead of time.
292
+ [1638.420 --> 1644.540] Certain tendencies of monochronic and polychronic orientations which we have already discussed
293
+ [1644.540 --> 1647.660] are related with punctuality.
294
+ [1647.660 --> 1652.940] Monochronic orientation prefers punctuality which is considered to be almost sacred.
295
+ [1652.940 --> 1658.780] So ten o'clock meeting means that the discussions have to begin at ten o'clock.
296
+ [1658.780 --> 1664.980] On the other hand polychronic cultures are more people centered and for them at ten o'clock
297
+ [1664.980 --> 1670.580] meeting means at ten o'clock people would start assembling there and start greeting each
298
+ [1670.580 --> 1671.580] other.
299
+ [1671.580 --> 1677.900] In the polychronic orientation punctuality is largely ignored to the rhythm of the people.
300
+ [1677.900 --> 1683.200] And the rigid adherence to completing the projects and deliverables according to a
301
+ [1683.200 --> 1686.700] rigid schedule is sometimes overlooked.
302
+ [1686.700 --> 1692.740] The cultural variations in the perception of time are also discussed in this particular
303
+ [1692.740 --> 1693.260] video.
304
+ [1695.140 --> 1698.180] Every culture has its own perception of time.
305
+ [1698.180 --> 1702.460] Every culture has its own perception of time and perception of time in a separate light.
306
+ [1702.460 --> 1706.220] In some countries people dedicate their lives to build a strong relationship with their
307
+ [1706.220 --> 1708.500] families like the Arabic people.
308
+ [1708.500 --> 1714.060] Or others merely dedicate their lives with their career like the Japanese.
309
+ [1714.060 --> 1716.500] I have to rush, says the American.
310
+ [1716.500 --> 1717.500] My time is up.
311
+ [1717.500 --> 1722.140] The Arab, scornful of this submissive attitude to schedules, would only use this expression
312
+ [1722.140 --> 1724.740] if death were imminent.
313
+ [1724.740 --> 1729.580] The Western European and North American countries view time as a linear vision: time
314
+ [1729.580 --> 1731.540] has a beginning and an end.
315
+ [1731.540 --> 1734.900] This culture is fast-paced compared to other cultures.
316
+ [1734.900 --> 1738.340] When Western cultures make a decision about business, they will see it as final when
317
+ [1738.340 --> 1739.860] they come to an agreement.
318
+ [1739.860 --> 1743.500] And so, they don't rethink or adjust the agreement.
319
+ [1743.500 --> 1748.020] They want to do as much as possible in the time they have.
320
+ [1748.020 --> 1752.500] The Arabic countries view the perception of time as flexible: being late to an
321
+ [1752.500 --> 1756.780] appointment or taking a long time to get down to business is the norm in most
322
+ [1756.780 --> 1758.420] Arabic countries.
323
+ [1758.420 --> 1762.980] For flexible time cultures, schedules are less important than human feelings.
324
+ [1762.980 --> 1767.980] When people and relationships demand attention or require nurture, time becomes a subjective
325
+ [1767.980 --> 1771.340] commodity that can be manipulated or stretched.
326
+ [1771.340 --> 1776.220] Meeting should not be rushed or cut short for the sake of an arbitrary schedule.
327
+ [1776.220 --> 1778.420] Time is an open-ended resource.
328
+ [1778.420 --> 1782.220] Communication is not regulated by a clock.
329
+ [1782.220 --> 1787.220] In Asia, the people view the perception of time as a cyclical vision.
330
+ [1787.220 --> 1790.340] This cyclical vision takes the concept to the next step.
331
+ [1790.340 --> 1794.380] When the process of life ends, the Asian countries will start at birth again.
332
+ [1794.380 --> 1798.260] The Asian countries are slower-paced than the Western European countries.
333
+ [1798.260 --> 1801.940] For instance, when the Chinese people make an appointment, for for example a business
334
+ [1801.940 --> 1805.020] deal, they will always arrive early so they won't be wasting your time.
335
+ [1805.020 --> 1807.660] They have more focus on their career.
336
+ [1807.660 --> 1811.220] When Asian people make a decision, they will often revisit their decision later on
337
+ [1811.220 --> 1813.220] to see if it's still the right choice.
338
+ [1813.220 --> 1817.980] If this is not a case, they will adjust accordingly.
339
+ [1817.980 --> 1822.060] For instance, when European businessmen want to make a deal or sign a contract with
340
+ [1822.060 --> 1826.300] Chinese businessmen, they expect to make the deal fast and only think about the future.
341
+ [1826.300 --> 1830.060] While the Chinese businessmen will always look for long-term solution and rethink the
342
+ [1830.060 --> 1831.060] deal several times.
343
+ [1831.060 --> 1836.780] If the deal isn't made quickly, the Western cultures will see it as a waste of time.
344
+ [1836.780 --> 1844.060] Our cultural preferences as far as our understanding of time is concerned are reflected not only
345
+ [1844.060 --> 1849.900] in our relationship with other people, but also in our relationship with technology.
346
+ [1849.900 --> 1855.940] A clear example of it is the way the global websites are designed.
347
+ [1855.940 --> 1862.060] We find that monochronic users are quick and decisive and usually task-oriented and they
348
+ [1862.060 --> 1865.340] design the websites in the same manner.
349
+ [1865.340 --> 1871.860] On the other hand, we find that polychronic users emphasize process over results and prefer
350
+ [1871.860 --> 1877.180] to gain a high level of understanding over a practical implementation.
351
+ [1877.180 --> 1885.300] And this difference is easily visible in the way technology is used by different cultures.
352
+ [1885.300 --> 1892.260] In the fast-changing pace of our work cultures, where we may have to work with people from
353
+ [1892.260 --> 1894.620] different cultural background.
354
+ [1894.620 --> 1900.860] Our awareness of how time is perceived differently in different cultures has become almost
355
+ [1900.860 --> 1902.940] a must.
356
+ [1902.940 --> 1908.780] People who work at an international level must know what are the different definitions
357
+ [1908.780 --> 1913.980] of time and how do people relate to it differently.
358
+ [1913.980 --> 1920.100] A particularly interesting word which is used in Latin American countries is Manana.
359
+ [1920.100 --> 1927.620] In the Middle East, a synonymous word is bukra, which indicates a particular attitude.
360
+ [1927.620 --> 1932.740] That means that what cannot be done today would be done tomorrow.
361
+ [1932.740 --> 1940.860] So this laid-back attitude in terms of time is a cultural aspect of looking at our values
362
+ [1940.860 --> 1943.660] and our relationships with other people.
363
+ [1943.660 --> 1949.980] In the monochronic cultures, we find that time is divided and further subdivided into
364
+ [1949.980 --> 1951.900] identifiable units.
365
+ [1951.900 --> 1957.500] However, in polychronic cultures, we find that time is a happy mixture of past, present
366
+ [1957.500 --> 1962.940] and future and these segments are not strictly segregated.
367
+ [1962.940 --> 1969.700] So we have to understand whether the people with whom we work look at time in a formal
368
+ [1969.700 --> 1975.500] and task-oriented fashion or do they look at time as an opportunity to spend time and
369
+ [1975.500 --> 1978.340] develop interpersonal relationships.
370
+ [1978.340 --> 1987.820] In some cultures, we find that lack of punctuality is associated with our social prestige.
371
+ [1987.820 --> 1993.900] It is very common in certain societies as well as in certain organizations to make the subordinates
372
+ [1993.900 --> 2000.580] wait for the appointments so that they can internalize the significance and importance
373
+ [2000.580 --> 2003.740] of the higher rank of their superior.
374
+ [2003.740 --> 2010.580] Power and dignity are often shown by arriving late and it is also used as a tactic in certain
375
+ [2010.580 --> 2016.220] countries, particularly we can refer to the work culture of the Middle East and countries.
376
+ [2016.220 --> 2022.900] However, we find that in monochronic cultures, lack of punctuality is always frowned upon.
377
+ [2022.900 --> 2028.980] A very interesting example is that of Michael Jackson, who angered the judge when he arrived
378
+ [2028.980 --> 2033.700] late in court in 2005.
379
+ [2033.700 --> 2040.580] Punctuality is considered by monochronic cultures as a value and it is not relaxed even for those
380
+ [2040.580 --> 2046.300] people who are considered to be as social or cultural leaders in different fields.
381
+ [2046.300 --> 2052.180] It is interesting to note that in certain international situations, the name of a country
382
+ [2052.260 --> 2058.100] is also inserted after the time of the meeting is given and the insertion of the name of
383
+ [2058.100 --> 2066.420] a country indicates that one also has to understand how the particular country associates itself
384
+ [2066.420 --> 2067.260] with time.
385
+ [2067.260 --> 2071.980] The insertion of the name of a country allows the participants from different cultural
386
+ [2071.980 --> 2079.020] backgrounds to understand if the time is fixed or fluid as far as the invitation is concerned.
387
+ [2079.020 --> 2085.580] I take this example from Martin and Cheney, who have cited this example of an invitation
388
+ [2085.580 --> 2091.740] where the meeting is announced at 9 a.m., in quotes, Malaysian time.
389
+ [2091.740 --> 2098.540] Now Malaysian time is an indication that the punctuality would be practiced in a fluid
390
+ [2098.540 --> 2099.540] fashion.
391
+ [2099.540 --> 2104.500] Work time and personal times are strictly separated in monochronic cultures.
392
+ [2104.500 --> 2111.060] However, in polychronic cultures, we find that the work time and personal time are not
393
+ [2111.060 --> 2113.300] strictly separated.
394
+ [2113.300 --> 2116.380] They often intertwine with each other.
395
+ [2116.380 --> 2124.380] These cultural aspects percolate further into different organizations and it is reflected
396
+ [2124.380 --> 2126.500] in their work culture.
397
+ [2126.500 --> 2133.300] For example, how much time is given during a work day to the company task and how much
398
+ [2133.620 --> 2136.380] time is given to socializing?
399
+ [2136.380 --> 2143.740] In monochronic cultures, we find that the division is typically 80 percent task and 20 percent
400
+ [2143.740 --> 2144.900] social.
401
+ [2144.900 --> 2152.500] On the other hand, in polychronic countries, we find that it may be rather skewed.
402
+ [2152.500 --> 2159.140] Understanding appropriate connotations of time is therefore important in international situations.
403
+ [2159.140 --> 2164.820] Globalization of business is influencing how the concept of time is viewed around the
404
+ [2164.820 --> 2170.700] world, particularly at the level of the individual, at the level of the organization.
405
+ [2170.700 --> 2177.100] So, more than the country we find that it is the organization which is reflecting the
406
+ [2177.100 --> 2180.140] cultural associations with time.
407
+ [2180.140 --> 2186.820] It is interesting to note that the work cultures and the offices of the same company which
408
+ [2186.900 --> 2191.620] are located in different countries may follow different patterns.
409
+ [2191.620 --> 2198.620] A head office situated in a country where the preferences for monochronic attitudes would
410
+ [2200.380 --> 2207.060] work in a different atmosphere in comparison to another office which is situated in a
411
+ [2207.060 --> 2211.060] country which is governed by the polychronic attitude.
412
+ [2211.060 --> 2217.420] These differences alert us to the manner in which time is perceived in different ways
413
+ [2217.420 --> 2223.980] and the extent to which we are conditioned by our social and cultural parameters.
414
+ [2223.980 --> 2230.180] And at the same time, the necessity to adapt ourselves in an empathetic manner to different
415
+ [2230.180 --> 2235.540] viewpoints as far as our associations with time is concerned.
416
+ [2235.540 --> 2242.380] The differences of attitude between monochronic and polychronic individuals can be further understood
417
+ [2242.380 --> 2243.860] with the help of this video.
418
+ [2265.540 --> 2273.460] In this scenario, we have Bob.
419
+ [2273.460 --> 2275.660] Bob is what we call polychronic.
420
+ [2275.660 --> 2279.460] Polychronic people are frequently late and are easily distracted and do many things at
421
+ [2279.460 --> 2280.460] once.
422
+ [2280.460 --> 2284.100] For Bob, it is normal to quickly change appointments, schedules and not meet deadlines.
423
+ [2284.100 --> 2290.100] This behavior is common in Latin America and in the Middle Eastern countries.
424
+ [2291.100 --> 2297.100] When monochronic and polychronic people interact in groups, the results can be frustrating.
425
+ [2297.100 --> 2302.940] Monochronic people can become distressed by how polychronic people seem to disrespect deadlines
426
+ [2302.940 --> 2306.500] and schedules.
427
+ [2306.500 --> 2310.780] In order to work together smoothly, monochronic members need to take responsibility for
428
+ [2310.780 --> 2312.700] the time-sensitive tasks.
429
+ [2312.700 --> 2316.580] While accepting that polychronic members will vary their promptness based on the nature
430
+ [2316.580 --> 2318.580] and importance of a situation.
431
+ [2318.580 --> 2326.020] As this video very aptly suggests, time is not only a measuring instrument.
432
+ [2326.020 --> 2329.260] It also indicates human behavior.
433
+ [2329.260 --> 2332.780] It also indicates our cultural preferences.
434
+ [2332.780 --> 2338.900] It also indicates our attitudes towards relationships.
435
+ [2338.900 --> 2345.420] Business and other professional activities are planned within time and diverse understandings
436
+ [2345.420 --> 2349.100] about our preferences can also cause confusion.
437
+ [2349.100 --> 2355.860] For an American, time is truly money and therefore it is always considered to be precious.
438
+ [2355.860 --> 2359.740] Because this society is basically a profit-oriented society.
439
+ [2359.740 --> 2365.740] Germans tend to link time with their sense of order, tidiness and planning.
440
+ [2365.740 --> 2371.940] In certain other cultures, for example in the Spanish culture as well as in Italian and
441
+ [2371.940 --> 2378.940] Arabic cultures, we find that the considerations of time are usually subjected to human feelings.
442
+ [2378.940 --> 2384.940] The understanding of the French as far as punctuality is concerned is also closer to
443
+ [2384.940 --> 2386.900] a polychronic attitude.
444
+ [2386.900 --> 2393.740] Our understanding of time helps us to organize our non-verbal communication in a better
445
+ [2393.740 --> 2401.220] way and to modulate our dialogue and conversations in such a way that the other people can also
446
+ [2401.220 --> 2404.020] empathetically understand it.
447
+ [2404.020 --> 2404.340] Thank you.
transcript/allocentric_Zd71719_G8Y.txt ADDED
@@ -0,0 +1,65 @@
1
+ [0.000 --> 20.000] When we park in a big parking lot, how do we remember where we parked our car?
2
+ [20.000 --> 26.000] Here's the problem facing Homer, and we're going to try to understand what's happening in his brain.
3
+ [26.000 --> 30.000] We start with the hippocampus shown in yellow, which is the organ of memory.
4
+ [30.000 --> 35.000] If you have damage there, like in Alzheimer's, you can't remember things, including where you parked your car.
5
+ [35.000 --> 38.000] It's named after Latin for seahorse, which it resembles.
6
+ [38.000 --> 41.000] Like the rest of the brain, it's made of neurons.
7
+ [41.000 --> 44.000] The human brain has about 100 billion neurons in it.
8
+ [44.000 --> 52.000] The neurons communicate with each other by sending little pulses or spikes of electricity via connections to each other.
9
+ [52.000 --> 56.000] The hippocampus is formed of two sheets of cells, which are very densely interconnected.
10
+ [56.000 --> 68.000] Scientists have begun to understand how spatial memory works by recording from individual neurons in rats or mice while they forage or explore an environment looking for food.
11
+ [68.000 --> 75.000] We're going to imagine we're recording from a single neuron in the hippocampus of this rat here.
12
+ [75.000 --> 80.000] When it fires a little spike of electricity, there's going to be a red dot and a click.
13
+ [80.000 --> 87.000] What we see is that this neuron knows whenever the rat has gone into one particular place in its environment.
14
+ [87.000 --> 91.000] It signals to the rest of the brain by sending a little electrical spike.
15
+ [91.000 --> 97.000] We could show the firing rate of that neuron as a function of the animal's location.
16
+ [97.000 --> 105.000] If we record from lots of different neurons, we'll see that different neurons fire when the animal goes into different parts of its environment, like in this square box shown here.
17
+ [105.000 --> 113.000] Together they form a map for the rest of the brain telling the brain continually where am I now within my environment.
18
+ [113.000 --> 120.000] Place cells are also being recorded in humans, so epilepsy patients sometimes need the electrical activity in their brain monitoring.
19
+ [120.000 --> 124.000] Some of these patients played a video game where they drive around a small town.
20
+ [124.000 --> 133.000] Place cells in their hippocampus would fire, become active with sending electrical impulses whenever they drove through a particular location in that town.
21
+ [133.000 --> 139.000] How does a place cell know where the rat or person is within its environment?
22
+ [139.000 --> 144.000] These two cells here show us that the boundaries of the environment are particularly important.
23
+ [144.000 --> 151.000] The one on the top likes to fire midway between the walls of the box that the rat is in.
24
+ [151.000 --> 154.000] When you expand the box, the firing location expands.
25
+ [154.000 --> 159.000] The one below likes to fire whenever there's a wall close by to the south.
26
+ [159.000 --> 169.000] If you put another wall inside the box, then the cell fires in both places, wherever there's a wall to the south, as the animal explores around in its box.
27
+ [169.000 --> 178.000] This predicts that sensing the distances and directions of boundaries around you, extended buildings and so on, is particularly important for the hippocampus.
28
+ [178.000 --> 187.000] Cells are found which project into the hippocampus and which do respond exactly to detecting boundaries or edges,
29
+ [187.000 --> 192.000] particularly distances and directions from the rat or mouse as it's exploring around.
30
+ [192.000 --> 205.000] The cell on the left, you can see it fires whenever the animal gets near to a wall or a boundary to the east, whether it's the edge or the wall of a square box or the circular wall of a circular box,
31
+ [205.000 --> 209.000] or even the drop at the edge of a table which the animals are running around.
32
+ [209.000 --> 217.000] The cell on the right there fires whenever there's a boundary to the south, whether it's the drop at the edge of the table or a wall or even the gap between two tables that have pulled apart.
33
+ [217.000 --> 222.000] That's one way in which we think place cells determine where the animal is as it's exploring around.
34
+ [222.000 --> 230.000] We can also test where we think objects are, like this gold flag in simple environments or indeed where your car would be.
35
+ [230.000 --> 236.000] We can have people explore an environment and see the location they have to remember.
36
+ [236.000 --> 243.000] If we put them back in the environment, generally they're quite good at putting a marker down where they thought that flag or their car was.
37
+ [243.000 --> 249.000] On some trials, we could change the shape and size of the environment like we did with the place cell.
38
+ [249.000 --> 257.000] In that case, we can see how where they think the flag had been changes as a function of how you change the shape and size of the environment.
39
+ [257.000 --> 266.000] What you see, for example, if the flag was where that cross was in a small square environment and then you asked people to say where it was but you've made the environment bigger,
40
+ [266.000 --> 272.000] where they think the flag had been stretches out in exactly the same way that the place cell firing pattern stretched out.
41
+ [272.000 --> 278.000] It's as if you remember where the flag was by storing the pattern of firing across all of your place cells at that location.
42
+ [278.000 --> 287.000] Then you can get back to that location by moving around so that you best match the current pattern of firing of your place cells with that stored pattern.
43
+ [287.000 --> 290.000] That guides you back to the location that you want to remember.
44
+ [290.000 --> 293.000] We also know where we are through movement.
45
+ [293.000 --> 300.000] If we take some outbound path, perhaps we park and we wander off, we know, because of our own movements, which we can integrate over this path,
46
+ [300.000 --> 303.000] roughly what the heading direction is to go back.
47
+ [303.000 --> 310.000] Place cells also get this kind of path integration input from a kind of cell called a grid cell.
48
+ [310.000 --> 318.000] Grid cells are found again on the inputs of the hippocampus and they're a bit like place cells, but now as the rat explores around,
49
+ [318.000 --> 329.000] each individual cell fires in a whole array of different locations, which are laid out across the environment in an amazingly regular triangular grid.
50
+ [330.000 --> 343.000] If you record from several grid cells shown here in different colors, each one has a grid-like firing pattern across the environment and each cell's grid-like firing pattern is shifted slightly relative to the other cells.
51
+ [343.000 --> 348.000] The red one fires on this grid and the green one on this one and the blue one on this one.
52
+ [348.000 --> 360.000] So together, it's as if the rat can put a virtual grid of firing locations across its environment, a bit like the latitude and longitude lines that you'd find on a map but using triangles.
53
+ [360.000 --> 371.000] And as it moves around, the electrical activity can pass from one of these cells to the next cell to keep track of where it is so that it can use its own movements to know where it is in its environment.
54
+ [372.000 --> 386.000] Do people have grid cells? Well, because all of the grid-like firing patterns have the same axes of symmetry, the same orientations of grid shown in orange here, it means that the net activity of all of the grid cells, in a particular part of the brain,
55
+ [386.000 --> 392.000] should change according to whether we're running along one of these six directions or running along one of these six directions in between.
56
+ [393.000 --> 407.000] So we can put people in an MRI scanner and have them do a little video game like the one I showed you and look for this signal and indeed you do see it in the human entorhinal cortex which is the same part of the brain that you see grid cells in rats.
57
+ [407.000 --> 421.000] So back to Homer, he's probably remembering where his car was in terms of the distances and directions to extended buildings and boundaries around the location where he parked and that would be represented by the firing of boundary detecting cells.
58
+ [421.000 --> 443.000] He's also remembering the path he took out of the car park which would be represented in the firing of grid cells. Now both of these kinds of cells can make the place cells fire and he can return to the location where he parked by moving so as to find where it is that best matches the firing pattern of the place cells in his brain currently with the stored pattern where he parked his car.
59
+ [443.000 --> 453.000] And that guides him back to that location irrespective of visual cues like whether his car is actually there. Maybe it's been towed but he knows where it was so he knows to go and get it.
60
+ [453.000 --> 470.000] So beyond spatial memory, if we look for this grid-like firing pattern throughout the whole brain, we see it in a whole series of locations which are always active when we do all kinds of autobiographical memory tasks, like remembering the last time you went to a wedding for example.
61
+ [470.000 --> 485.000] So it may be that the neural mechanisms for representing the space around us are also used for generating visual imagery so that we can recreate the spatial scene at least of the events that have happened to us when we want to imagine them.
62
+ [485.000 --> 500.000] So if this was happening, your memories could start by place cells activating each other via these dense interconnections, and then reactivating boundary cells to create the spatial structure of the scene around your viewpoint, and grid cells could move this viewpoint through that space.
63
+ [500.000 --> 514.000] Another kind of cell, head direction cells, which I didn't mention yet: they fire like a compass according to which way you're facing. They could define the viewing direction from which you want to generate an image for your visual imagery, so you can imagine what happened when you were at this wedding, for example.
64
+ [515.000 --> 533.000] So this is just one example of a new era really in cognitive neuroscience where we're beginning to understand psychological processes like how you remember or imagine or even think in terms of the actions of the billions of individual neurons that make up our brains.
65
+ [533.000 --> 535.000] Thank you very much.
transcript/allocentric__n_vDvne5yo.txt ADDED
@@ -0,0 +1,181 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ [0.000 --> 7.000] Okay, it's a real pleasure to be here presenting to everyone.
2
+ [7.000 --> 10.360] And let's start things off since we're talking about navigation.
3
+ [10.360 --> 16.560] Most of you drove; possibly a small number of you walked or biked from somewhere else.
4
+ [16.560 --> 23.160] How many of you used GPS, a mobile device on your phone or in your car, to get you here?
5
+ [23.160 --> 24.160] How many are willing to?
6
+ [24.160 --> 29.280] I did not because I live in Davis, but okay, so a lot of you did not, okay, that's good.
7
+ [29.280 --> 37.280] Okay, so you used some kind of navigational computer assist today.
8
+ [37.280 --> 39.280] So maybe about 50% of you.
9
+ [39.280 --> 48.080] Do any of you remember the days when you moved to a new city and you had to go to a gas station and you bought a map or, even worse, those
10
+ [48.080 --> 52.080] books of maps, and you could look it up in that.
11
+ [52.080 --> 59.080] You get a 50 and you're pulled over on the side of the road and you're trying to figure out where that met chance.
12
+ [59.080 --> 65.080] And then you try to stick it in your glove compartment, and the next time you get it, you can't read the city because it's all illegible.
13
+ [65.080 --> 74.080] So many of you have the experience now that mobile devices like your phone can really enhance how you navigate.
14
+ [74.080 --> 80.080] But now an interesting question in navigation and we're going to just sort of touch on this briefly and this talk.
15
+ [80.080 --> 87.080] Is this destroying your ability to represent and learn about your spatial environment?
16
+ [87.080 --> 96.080] And there are even some people in my field of studying human spatial navigation whose advice for healthy aging is to turn off your GPS.
17
+ [96.080 --> 98.080] I'm not of that ilk by the way.
18
+ [98.080 --> 104.080] I think your brain is involved in a lot of different things, not necessarily just navigation.
19
+ [104.080 --> 108.080] That would be a little myopic of me given that I study navigation.
20
+ [108.080 --> 111.080] But that is an opinion that's out there.
21
+ [111.080 --> 118.080] Now why do we think that representing spatial environments and learning to do that is so important?
22
+ [118.080 --> 128.080] Well, if you imagine learning your surrounding environments, if you live in the city of Davis, remembering the basic layout and organization of the city of Davis,
23
+ [128.080 --> 133.080] why do we think that that's important and why is that a difficult task for your brain in the first place?
24
+ [133.080 --> 141.080] Well, imagine when you walk around even just campus, there's a wealth of different things that you see at different times.
25
+ [141.080 --> 148.080] You may see one landmark stand out to you, you know, these funny upside-down heads and other things that sticks out.
26
+ [148.080 --> 151.080] You see these at different points, these bridges.
27
+ [151.080 --> 157.080] And what you try to do in your head is come up with something rough like a mental map.
28
+ [157.080 --> 160.080] A cartographic map would be the more accurate version.
29
+ [160.080 --> 168.080] What you're trying to do is construct some rough idea of the distance and direction of things as you experience them.
30
+ [168.080 --> 177.080] And the problem is not easy from the standpoint of translating behavior into brain, which is that we may experience multiple routes from different angles.
31
+ [177.080 --> 184.080] Many of you entered this building from the parking lot, but you could just as easily have entered the building from one of the back doors.
32
+ [184.080 --> 187.080] There are many different ways you could experience the same location.
33
+ [187.080 --> 193.080] And your brain needs a way of taking that very different visual information and knowing that it's the same location.
34
+ [193.080 --> 203.080] We need to take these buildings that stand out and realize this is a useful landmark and then ignore all the other things that aren't useful landmarks,
35
+ [203.080 --> 209.080] like, say, bikes that could change their location, or brightly colored cars, which are not going to be constant and thus not a useful landmark.
36
+ [209.080 --> 212.080] So we need to figure out what to use some as a landmark.
37
+ [212.080 --> 221.080] And we're going to learn this information at different times if we have lived in the city of Davis for much of our life or Sacramento or somewhere else.
38
+ [221.080 --> 226.080] We experience this information in some cases over decades of a lifetime.
39
+ [226.080 --> 232.080] And so our brain needs a way of taking all this information and fitting it together into a way that is accurate.
40
+ [232.080 --> 236.080] And, in principle, it does not require us to use our phones.
41
+ [237.080 --> 243.080] Now, in the literature, we've often referred to this idea of a mental map as something called a cognitive map.
42
+ [243.080 --> 253.080] And those of you who are familiar with the Nobel Prize will know that a couple of years ago, the Nobel Prize was awarded for work on the mental map, or the cognitive map.
43
+ [253.080 --> 255.080] But all that work was in rodents.
44
+ [255.080 --> 262.080] So the interest of my lab is understanding how does this apply to the more interesting species, in my opinion?
45
+ [262.080 --> 267.080] Us. No offense, Alex, or others who study rodents.
46
+ [267.080 --> 270.080] Nothing wrong with rodents. Nothing wrong with them.
47
+ [270.080 --> 275.080] But ultimately, we are interested in us, so let's help.
48
+ [275.080 --> 278.080] So I'm going to give you a quick crash course on navigation.
49
+ [278.080 --> 285.080] And I'm going to try to give you an understanding of why we might think turning off our GPS device could be a good idea.
50
+ [286.080 --> 293.080] So there are several different ways we know from the study of navigation that we can learn where things are in our environment.
51
+ [293.080 --> 299.080] And we believe that the optimal way to do this is something called allocentric coordinates.
52
+ [299.080 --> 308.080] It sounds like a really technical term. It is. It means a coordinate system, a way of thinking that is referenced outside of our body position.
53
+ [308.080 --> 317.080] So that's exactly what a cartographic map is. That map you buy at the gas station tells you how landmarks are arranged relative to each other.
54
+ [317.080 --> 327.080] So for example, knowing that Davis is approximately 10 miles south of Woodland, and approximately 15 miles west of Sacramento,
55
+ [327.080 --> 332.080] would be a landmark-referenced, or allocentric, way of thinking about things.
56
+ [332.080 --> 337.080] Okay, allocentric again means referenced to something outside of our body position.
57
+ [337.080 --> 347.080] Okay, so here's an example of this. If we want to remember where something is, we can remember it based on the relative position from these other landmarks.
58
+ [347.080 --> 357.080] Okay, and landmarks can be cities, they can be buildings, anything that stays constant over time that we can use to remember where things are.
59
+ [357.080 --> 367.080] So the essence of the cognitive map, and what we think the most effective form of spatial memory is, is an allocentric form of memory.
60
+ [367.080 --> 372.080] Now, you're not going to be surprised to know that there are some other forms of spatial representation.
61
+ [372.080 --> 381.080] And one of these is called the egocentric form. It sounds bad, right? You don't want to be an egocentric person, right?
62
+ [381.080 --> 387.080] And similarly, in general, we don't think that egocentric coordinates are a great way to remember how to navigate.
63
+ [387.080 --> 395.080] Now, egocentric coordinates are extremely useful in many, many contexts in everyday life, which would be where things are in front of your body.
64
+ [395.080 --> 401.080] So if you want to reach for a cup of coffee, you need to put your glasses down on your bedside when you go to bed,
65
+ [401.080 --> 409.080] anything like that involves egocentric coordinates, right? Because you need to know where to reach your hand relative to the current position of your body.
66
+ [409.080 --> 419.080] When you get up from this chair, if you were oriented 180 degrees egocentrically incorrectly, that could cause a big problem because you could ram into the chair behind you.
67
+ [419.080 --> 427.080] So it's very important to know the position of your body relative to objects. Now, for navigation, there's a problem.
68
+ [427.080 --> 437.080] Egocentric coordinates change constantly as you move, right? So the position of you, everyone in this audience, relative to me right now, defined in egocentric coordinates, is in one system now.
69
+ [437.080 --> 447.080] And as soon as I move, that changes. So imagine you've driven from Vacaville to here; your egocentric coordinates are constantly changing, right?
70
+ [447.080 --> 455.080] The relative position of Vacaville is constantly changing relative to your body as you're moving. But its allocentric coordinates are staying constant, because Vacaville,
71
+ [455.080 --> 465.080] at least as far as we know, unless there's some major seismic activity, is not changing its position relative to Davis over time. But your body position is constantly changing.
72
+ [465.080 --> 473.080] So egocentric coordinates are extremely important for things like reaching for things in front of us, but they're not in general a great way to navigate.
73
+ [473.080 --> 487.080] Now, perhaps, given the navigation literature, the worst way to navigate is what's called a response strategy or a beacon strategy.
74
+ [487.080 --> 501.080] What is that? That's essentially what GPS is giving you. So that would be that you want to walk, say to the gas station across the street, and then the next thing you're going to do is walk to Carl's Jr. across the street from there,
75
+ [501.080 --> 513.080] and the next thing you're going to do is walk back to the Center for Neuroscience. You haven't had to use any relative positions of things. All you've had to do is just remember a sequence of things and turns.
76
+ [513.080 --> 529.080] And that's essentially what GPS is giving you. GPS is saying, take a left at this place, take a right at this junction. So we tend to think that a beaconing strategy, also called a response strategy, in general, is not a great way to engage your brain in a meaningful way,
77
+ [529.080 --> 549.080] with all the things around. So these are the three fundamental ways that we believe typically people navigate in the wild. And the current thinking is that we've switched a lot of our ways of navigating to these other forms, like egocentric and beacon forms of navigation.
78
+ [549.080 --> 559.080] Again, a beacon would just be like a big thing that's in front of you and you just walk to. You don't really remember where anything is placed relative to that.
79
+ [559.080 --> 569.080] So in my lab, what we try to do is we try to understand how the brain remembers spatial locations and how the brain uses these different forms of representations.
80
+ [569.080 --> 580.080] So some of our research naturally involves virtual reality, because virtual reality is something we can build on a computer and have a lot of control of.
81
+ [580.080 --> 593.080] So I'm going to show you an example of a technique that we've been developing in my lab to allow people to navigate in the lab on an environment, in an environment that is generated on a computer.
82
+ [593.080 --> 604.080] So we can get a really detailed sense of how people navigate in new environments. And you might wonder, why not just have people walk around downtown Davis, right, and see how they learn this?
83
+ [604.080 --> 613.080] Well, there's a problem there from an experimental standpoint: some of you probably have had more exposure to downtown Davis than others. Some of you may have viewed maps.
84
+ [614.080 --> 623.080] So ideally, as experimentalists, as scientists, we want to have a situation where we can control for your exposure and knowledge of the city.
85
+ [623.080 --> 634.080] And there are other parts about real world environments that are a little bit complicated, like you're walking around and someone asks you for directions or you have to stop for a car or something like that.
86
+ [634.080 --> 638.080] So ideally, we want to have people just walk around and navigate as much as possible.
87
+ [638.080 --> 649.080] So here's an experimental setup that we've been working on in my lab. You can see this thing here; it looks a little bit like a disc that my student is standing on, and he's wearing goggles here.
88
+ [649.080 --> 655.080] So what I'm going to show you is what he experienced when he walks on this treadmill.
89
+ [655.080 --> 668.080] So this is what he is seeing through the goggles. And the image is actually fused.
90
+ [668.080 --> 676.080] So what's happening in your retina is you're seeing two different pieces of the environment, but your brain is actually fusing these two images together.
91
+ [676.080 --> 682.080] So it's appearing as one big environment even though the way we've rendered it is two separate images.
92
+ [682.080 --> 689.080] So what my student is doing is he's walking around this environment, you can see how he's doing it. He is moving his feet on this treadmill.
93
+ [689.080 --> 693.080] So you can imagine now we have a situation.
94
+ [693.080 --> 699.080] He's running. It's going to go backwards in a second.
95
+ [699.080 --> 706.080] So you can imagine now we have a situation where we can control a lot of variables that previously we did not have an ability to control.
96
+ [706.080 --> 713.080] And we can start to study how people learn large-scale environments in the...
97
+ [713.080 --> 717.080] In the wild, so to speak.
98
+ [717.080 --> 723.080] So what we do is we have people walk around this environment on the treadmill.
99
+ [723.080 --> 728.080] And then we have them point to the locations of objects in this environment.
100
+ [728.080 --> 735.080] So we have you do a classic task in the navigation literature called the judgments of relative direction task.
101
+ [735.080 --> 742.080] And this is tapping largely into your allocentric knowledge of where things are located in your environment.
102
+ [742.080 --> 748.080] So it's providing a relatively stripped-down test of how well you've learned where landmarks are placed.
103
+ [748.080 --> 755.080] So what you're doing is you're imagining standing at one store with your back facing another and you're pointing to another.
104
+ [755.080 --> 762.080] So imagine for example you are standing in downtown Davis facing south.
105
+ [762.080 --> 768.080] And then you want to point to approximately where Sacramento is.
106
+ [768.080 --> 770.080] So you just imagine yourself in that situation.
107
+ [770.080 --> 772.080] That's the types of questions that we're asking you.
108
+ [772.080 --> 777.080] What we do is we have this done repeatedly over and over again throughout the experiment.
109
+ [777.080 --> 780.080] And we do something a little tricky here.
110
+ [780.080 --> 783.080] The environment is shaped like a big rectangle.
111
+ [783.080 --> 791.080] And we have people answer questions where their body is either aligned or misaligned with the surrounding boundaries.
112
+ [791.080 --> 803.080] And the reason why we're going to do this is we're going to see to what extent people start to form knowledge that is based on the structure or the shape of the allocentric nature of the environment.
113
+ [803.080 --> 813.080] And to make sure that people really know this environment before they get into it, we have them study a map beforehand just to make sure that they really know where things are.
114
+ [813.080 --> 818.080] And we compare that with a situation where they don't study a map before it.
115
+ [818.080 --> 823.080] And not surprisingly when you study a map beforehand in general your knowledge is better.
116
+ [823.080 --> 830.080] You make fewer errors when pointing to the locations of landmarks or stores in the environment.
117
+ [830.080 --> 840.080] And what we find is that over trials not surprisingly as you walk through this environment and then point to the locations of objects you get better and better at the task.
118
+ [840.080 --> 849.080] But interestingly on the first couple of trials we find that people typically do not use the shape of the environment to anchor their knowledge.
119
+ [849.080 --> 859.080] So this suggests that it takes a certain amount of time to integrate what you've learned as you freely navigate through an environment with the surrounding structure of the environment.
120
+ [859.080 --> 863.080] So in other words learning how to represent stuff takes time.
121
+ [863.080 --> 873.080] And that is why we might sound the cautionary note about GPS. When you have your GPS on, or your mobile phone on, you are short-circuiting some of that normal process.
122
+ [873.080 --> 880.080] Okay, I'm going to skip through the conclusions because I did want to talk about healthy aging and navigation.
123
+ [880.080 --> 884.080] I know many of us are interested in this topic.
124
+ [884.080 --> 888.080] So what happens as we age with regard to our ability to navigate?
125
+ [888.080 --> 906.080] And one of the ideas in the literature is that because of some changes in the structure of your brain we switch from these allocentric strategies to non-allocentric strategies, like egocentric positioning and what I call the beaconing strategy.
126
+ [906.080 --> 912.080] And the issue is that these ultimately may not be the best way to navigate.
127
+ [912.080 --> 924.080] And remember I said that the goal that we hope happens when people navigate is they form rich representations that tell them about the relative positions of objects within their environment to each other.
128
+ [924.080 --> 936.080] So if we think that one of the things that happens with aging is that there is a decrement in this process, is there a way that we could potentially try to reverse or stop this?
129
+ [936.080 --> 953.080] And I've been collaborating with Beth Over in the Department of Human Development on this issue where we show young healthy undergraduates and then individuals who were 80 and over.
130
+ [953.080 --> 963.080] Maps as well as have them navigate in an environment just like what you saw with the treadmill but instead of being on a treadmill they're using a joystick.
131
+ [963.080 --> 971.080] And that makes life a little easier; that treadmill can be a little crazy for some people. Although my son, who's six, loves it.
132
+ [971.080 --> 974.080] That says something about virtual reality.
133
+ [974.080 --> 981.080] So for example what we find is that when we show individuals 80 and older.
134
+ [981.080 --> 989.080] Maps of an environment in general they tend to use this information quite well compared to when they freely navigate an environment.
135
+ [989.080 --> 995.080] So there appears to be a benefit to showing older adults a map compared to having them freely navigate.
136
+ [995.080 --> 1004.080] Compared to younger people who admittedly generally do better on the task but don't show the same proportional benefit from studying a map.
137
+ [1004.080 --> 1022.080] So one of the areas that we are starting to investigate is whether we can use maps and the general structure of environments, because many of our cities are shaped like rectangles or have grid shapes to them, or other shapes that could be very useful for remembering where things are.
138
+ [1022.080 --> 1034.080] Can we use this type of iterative training to help rescue or encourage use of allocentric spatial memory strategies?
139
+ [1034.080 --> 1045.080] That's something we're currently investigating in the lab. We've been very fortunate to get a small amount of money from the UC Davis Alzheimer's Center to investigate this issue, thanks to Charlie DeCarli and others at that center.
140
+ [1045.080 --> 1050.080] But we're really just getting started on this and we hope there's a lot more that we can learn.
141
+ [1050.080 --> 1065.080] So I did want to talk a little bit about the brain and I want to give some what I think is good news and some major changes that I think have happened in how we think about the brain more generally and what that could mean for at least many of you.
142
+ [1065.080 --> 1077.080] So our classic way of thinking about the brain is what we would call the localizationist perspective, and it's essentially this: one brain region, one function.
143
+ [1077.080 --> 1089.080] And this works somewhat well in some context. So we have vision here at the back of the brain. In general we know that there are many neurons responsive to visual features in the back of the brain.
144
+ [1089.080 --> 1099.080] And if we damage the back of the brain called visual cortex people will be blind. We also know if we damage an area called the cerebellum that we severely impair motor control.
145
+ [1099.080 --> 1109.080] We know the cerebellum is important for motor control. But how about some of these higher cognitive functions that I've been talking about like allocentric navigation or egocentric navigation.
146
+ [1109.080 --> 1120.080] Can we stick them in one part of the brain? Well, we used to think that, and we spent decades investigating that issue. And in general I think the answer is that it has not been borne out the way that we thought.
147
+ [1120.080 --> 1133.080] And increasingly we started to move to a different perspective on the brain. I don't know if any of you have been on a flight lately, been through an airport, maybe hopefully not.
148
+ [1133.080 --> 1145.080] But if you have had that displeasure or pleasure depending on your perspective, you may have glanced at one of these maps of how airlines are interconnected in the continental United States.
149
+ [1145.080 --> 1156.080] Which is that we have these things called hubs, which we would call the areas where airlines typically have most of their flights taking off and landing at.
150
+ [1156.080 --> 1162.080] And then we have other areas depending on the airline where they just don't have as many flights to them.
151
+ [1162.080 --> 1171.080] So we already think about air travel, and a lot of travel, in this highly interconnected, dynamic fashion. What do I mean by dynamic?
152
+ [1171.080 --> 1181.080] If we looked at this airline map of the United States at any given time, things would look really really different. We might see a lot of flights coming into Phoenix, of course Southwest.
153
+ [1181.080 --> 1191.080] We might see fewer flights going into areas in the Pacific Northwest. But in general looking at any given time would reveal very different things. So that's what we mean by dynamic.
154
+ [1191.080 --> 1203.080] And there's a new method that has been developed really in the last two decades called graph theory analysis, which lets us take these highly interconnected maps and try to make some sense of it.
155
+ [1203.080 --> 1214.080] Now you might not be surprised to hear that the brain also has some of these similar properties. In other words, there are areas that serve as hubs that are highly interconnected with other areas.
156
+ [1214.080 --> 1227.080] And that this can be highly dynamic depending on what we were doing with our brain. So an area that my graduate student and I have been investigating is: can we apply these methods to understanding something like memory?
157
+ [1227.080 --> 1233.080] And in particular, our memory for spatial locations and the order in which things happen.
158
+ [1234.080 --> 1244.080] And again, remember that we used to think in a very localized fashion about the brain. And one of the areas we've historically focused a lot on is called the hippocampus.
159
+ [1244.080 --> 1248.080] We used to think if you lose your hippocampus, you lose all your memory.
160
+ [1248.080 --> 1259.080] The new perspective that is starting to emerge in cognitive neuroscience is that that is simply not true. That there are many other areas that participate meaningfully in memory.
161
+ [1259.080 --> 1275.080] So the good news is that if you suffer, hopefully not, but if you do at some point in your life suffer damage to any part of your brain, other parts of your brain may be able to dynamically reconfigure and take over for some of that lost function.
162
+ [1275.080 --> 1285.080] And that is a new emerging area which my lab is very interested in. And again, this really contrasts from how we used to think about the brain as a more static structure.
163
+ [1285.080 --> 1296.080] We used to think one brain region, one function, lose that brain region, lose that function. We are now starting to see behavior and cognition as a more distributed phenomenon.
164
+ [1296.080 --> 1302.080] In other words, many other parts of the brain can take over for that lost function.
165
+ [1302.080 --> 1314.080] So we still view brain areas like the hippocampus as important for memory. But increasingly we are starting to see that other brain areas are playing critical roles in how this works too.
166
+ [1314.080 --> 1319.080] And if you're interested in a more technical discussion about that, I'm happy to do it.
167
+ [1319.080 --> 1327.080] But the important implication of this is that there is a possibility for other brain structures to take over for lost function.
168
+ [1327.080 --> 1337.080] And you might ask, how? The two areas that are active areas of research in many labs, including my lab, are cognitive rehabilitation and neurostimulation therapy.
169
+ [1337.080 --> 1342.080] So I'm going to show you a future direction and then I should take some questions because I think I'm already over.
170
+ [1342.080 --> 1344.080] That's the sign of a work.
171
+ [1344.080 --> 1351.080] So let me show you what my lab has just started working on. And this is taking people on our treadmill.
172
+ [1351.080 --> 1364.080] Taking people on the treadmill. And what we're going to do is we record from their brain while they navigate in these large-scale environments.
173
+ [1364.080 --> 1372.080] So this is an individual who is wearing a scalp EEG cap. It has electrodes that can access signals in the brain.
174
+ [1372.080 --> 1380.080] He's walking on the treadmill. You can see what he is seeing as he is navigating. And you can see the brain signals that we are continuously recording while he navigates.
175
+ [1380.080 --> 1388.080] So this will give us new insight into how the brain codes things like spatial distance and spatial direction.
176
+ [1388.080 --> 1394.080] Okay, since I'm out of time, I'm going to mention really quickly my very generous sponsors from the federal government.
177
+ [1394.080 --> 1399.080] But I should mention I'm fortunate and further along in my career where I've been able to get some of these things.
178
+ [1399.080 --> 1408.080] But the early work in my career was in part funded by generous donations from family foundations and people like you.
179
+ [1408.080 --> 1418.080] And I can't emphasize enough how important these sources are for fueling innovation and new ideas, because government funding is drying up, unfortunately.
180
+ [1418.080 --> 1425.080] And in addition, government funding doesn't tend to fund the high risk types of things.
181
+ [1425.080 --> 1429.080] All right, I want to thank you very much and I'm happy to take any questions.
transcript/allocentric_akfatVK5h3Y.txt ADDED
@@ -0,0 +1,47 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ [0.000 --> 6.000] Hello friends, I am Surbi and once again welcome to my channel Key Differences.
2
+ [6.000 --> 13.000] Today in this video tutorial I am going to explain you the difference between verbal and non-verbal communication.
3
+ [13.000 --> 16.000] So friends, let's get started.
4
+ [16.000 --> 29.000] After watching this video you will be able to understand what is communication, what is the process of communication,
5
+ [29.000 --> 43.000] and what are its types, what is verbal communication and its types, what is non-verbal communication and what are the types of non-verbal communication.
6
+ [43.000 --> 49.000] Lastly, what is the difference between verbal and non-verbal communication?
7
+ [49.000 --> 53.000] Now come let's understand the meaning of communication.
8
+ [53.000 --> 57.000] Communication is the process of interacting with people.
9
+ [57.000 --> 65.000] No matter whether you speak something or not, but your behavior, attitude or body language conveys a message to the other party.
10
+ [65.000 --> 73.000] Meaning that communication is not dependent on words. It is possible even without the use of words.
11
+ [73.000 --> 78.000] So now we are going to understand the process of communication.
12
+ [78.000 --> 90.000] In the process of communication, the sender encodes a message through a proper channel that is email, phone, SMS, etc. to the receiver.
13
+ [90.000 --> 101.000] The receiver decodes the message and after interpreting it gives a proper feedback through a proper channel to the sender.
14
+ [101.000 --> 105.000] So in this way the process of communication continues.
15
+ [105.000 --> 110.000] So now we are going to understand the types of communication.
16
+ [110.000 --> 120.000] So on the basis of channel there are two types of communication, verbal communication and non-verbal communication.
17
+ [120.000 --> 125.000] Let's understand the meaning of verbal communication.
18
+ [125.000 --> 136.000] The communication in which we use words and language to communicate the intended message to the other party is called verbal communication.
19
+ [136.000 --> 140.000] It can be performed in two ways.
20
+ [140.000 --> 146.000] That is oral communication and written communication.
21
+ [146.000 --> 159.000] Oral communication is a communication through spoken words. That is face to face communication, voice chat, video conferencing or communication over the telephone or mobile phone.
22
+ [159.000 --> 173.000] On the other hand, written communication entails the use of letters, documents, emails, SMS, various chat platforms, social media, etc. to interact with people.
23
+ [173.000 --> 183.000] What is non-verbal communication? Non-verbal communication is a wordless communication as it does not use words.
24
+ [183.000 --> 196.000] The communication takes place through signals such as facial expressions, body language, nodding of head, gestures, postures, eye contact, physical appearance and so forth.
25
+ [196.000 --> 201.000] Now come let's understand the types of non-verbal communication.
26
+ [201.000 --> 214.000] The communication through body language, facial expressions, gestures, postures and eye contact is called kinesics.
27
+ [214.000 --> 227.000] In artifacts, you learn how the appearance of a person speaks a lot about his personality. That is the way he or she is dressed, accessories carried by him, etc.
28
+ [227.000 --> 237.000] Proxemics. The distance maintained by a person while communicating with another tells you a lot about their relationship.
29
+ [237.000 --> 249.000] Chronemics is the use of time in communication. It tells you about how punctual or disciplined a person is or how serious the person is regarding the matter.
30
+ [249.000 --> 260.000] Vocalics. The volume, tone of voice and pitch used by the sender to transmit information is called vocalics.
31
+ [260.000 --> 270.000] The use of touch in communication to express emotions and feelings is called haptics.
32
+ [270.000 --> 284.000] Come let's discuss the difference between verbal and non-verbal communication. Meaning, verbal communication is the process of communication in which words and language is used to transmit the message to another person.
33
+ [284.000 --> 299.000] Whereas in non-verbal communication, we do not use words. Instead, we use signals to transmit the message. The signals can be facial expressions, eye contact, body language, paralanguage, sign language, etc.
34
+ [299.000 --> 308.000] Next. In verbal communication, the transmission of message is very fast and feedback can also be provided instantly.
35
+ [308.000 --> 317.000] Whereas non-verbal communication relies on the understanding of the receiver, so it consumes a lot of time.
36
+ [317.000 --> 325.000] When it comes to delivery of message, there are very less chances of confusion in case of verbal communication.
37
+ [325.000 --> 335.000] Contrary to this, the chances of confusion and misunderstanding are relatively high in non-verbal communication.
38
+ [335.000 --> 344.000] In verbal communication, the presence of both the parties that is sender and receiver at the place of communication is not necessary.
39
+ [344.000 --> 353.000] As against, in non-verbal communication, the presence of both the parties at the time of communication is a must.
40
+ [353.000 --> 362.000] The best thing about verbal communication is that the message can be clearly understood and feedback can also be provided immediately.
41
+ [362.000 --> 375.000] Whereas, the best thing about non-verbal communication is that it complements verbal communication. That is, it helps in understanding the lifestyle and emotions of the sender.
42
+ [376.000 --> 385.000] Okay guys, this is all for this video. Now if you want to study the topic in detail, you can visit our official website that is keydifference.com.
43
+ [385.000 --> 395.000] Here you can find a detailed comparison of the two types of communication along with their definitions.
44
+ [395.000 --> 398.000] We have also provided the links in the description below.
45
+ [399.000 --> 404.000] So friends, I hope you enjoyed watching this video. Please like and share this video.
46
+ [404.000 --> 409.000] And if you have any queries or feedback for me, don't hesitate to comment below.
47
+ [409.000 --> 415.000] And please like our channel to never miss a video from key differences. Okay then, bye bye for now.
transcript/allocentric_bQLya0OLd2A.txt ADDED
@@ -0,0 +1,45 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ [0.000 --> 13.660] Everyone, quick look now at the nonverbal communications which are absolutely brilliant.
2
+ [13.660 --> 17.680] So I'm going to click on My Status.
3
+ [17.680 --> 28.040] Okay, so there we are, we've got a wonderful, happy, surprised, faster, sad, confused, slower.
4
+ [28.040 --> 29.520] And agree and disagree.
5
+ [29.520 --> 32.120] So if you want to do instant polls, are you happy?
6
+ [32.120 --> 33.520] Are you understanding?
7
+ [33.520 --> 34.720] Are you with me?
8
+ [34.720 --> 36.040] I can agree.
9
+ [36.040 --> 39.760] Now we can see here that I agree.
10
+ [39.760 --> 43.680] My iPad might disagree.
11
+ [43.680 --> 46.840] So he is not happy at all.
12
+ [46.840 --> 48.720] But you can't see that.
13
+ [48.720 --> 52.520] So let's go to the Magic Purple button.
14
+ [52.520 --> 53.520] And there we are.
15
+ [53.520 --> 55.280] Look, it went straight to people.
16
+ [55.280 --> 57.920] It knew what we were looking for.
17
+ [57.920 --> 61.840] And you can see that Carola agrees, but the iPad disagrees.
18
+ [61.840 --> 67.480] Okay, now then if the iPad is happy, I managed to click surprised.
19
+ [67.480 --> 73.120] If the iPad is happy, there we are, the iPad is suddenly happy.
20
+ [73.120 --> 77.960] And I might be happy here.
21
+ [77.960 --> 82.080] If I'm sad, it shows up.
22
+ [82.080 --> 83.640] Look at that little sad face.
23
+ [83.640 --> 90.680] And if my iPad is confused, there we are, we've got a very confused face.
24
+ [90.680 --> 92.440] So that's really useful.
25
+ [92.440 --> 94.160] Now there's one more thing.
26
+ [94.160 --> 96.960] And there's a hands up symbol.
27
+ [96.960 --> 97.960] There you are.
28
+ [97.960 --> 102.960] The iPad put his hand up.
29
+ [102.960 --> 104.960] Okay.
30
+ [104.960 --> 110.160] And even though I wouldn't generally do it as a teacher, I can raise my own hand.
31
+ [110.160 --> 111.160] And there you are.
32
+ [111.160 --> 113.360] You can see that there are hands up.
33
+ [113.400 --> 118.520] I can lower the iPad's hand when I've answered his problem.
34
+ [118.520 --> 120.960] And hopefully he's no longer confused.
35
+ [120.960 --> 123.800] And I can lower my own hand.
36
+ [123.800 --> 128.240] So buttons at the bottom.
37
+ [128.240 --> 133.080] Put my hand up.
38
+ [133.080 --> 134.600] Look at the attendees.
39
+ [134.600 --> 138.280] I've got my hand up.
40
+ [138.280 --> 142.920] If I close all the buttons, and my iPad puts a hand up,
41
+ [143.800 --> 147.040] I get a notice.
42
+ [147.040 --> 152.760] Okay, yeah, that more or less covers those nonverbal communications.
43
+ [152.760 --> 155.560] So I can lower the iPad's hand.
44
+ [155.560 --> 156.840] Right? Thanks very much.
45
+ [156.840 --> 159.840] Bye for now.
transcript/allocentric_c-N8Qtz_g-o.txt ADDED
The diff for this file is too large to render. See raw diff
 
transcript/allocentric_cM4ISxZYLBs.txt ADDED
The diff for this file is too large to render. See raw diff
 
transcript/allocentric_csaYYpXBCZg.txt ADDED
@@ -0,0 +1,63 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ [0.000 --> 4.880] Hi, I'm Jacob Taxis for About.com.
2
+ [4.880 --> 8.760] In this video, you will learn 8 types of nonverbal communication.
3
+ [8.760 --> 12.400] This is information from About.com's Psychology Site.
4
+ [12.400 --> 13.720] Number 1.
5
+ [13.720 --> 18.920] Facial Expression Facial expression is one type of nonverbal communication that is nearly
6
+ [18.920 --> 21.040] universal in meaning.
7
+ [21.040 --> 25.560] Though different cultures generally ascribe different meanings to various types of nonverbal
8
+ [25.560 --> 29.960] communication, the meanings attributed to certain facial expressions like this one.
9
+ [29.960 --> 34.480] A smile or the frown remain quite similar throughout the world.
10
+ [34.480 --> 38.720] For example, a downcast look in New York will be a downcast look in Moscow.
11
+ [38.720 --> 43.680] A smile in Belize will signal happiness or joy just as it would in Barcelona.
12
+ [43.680 --> 44.840] Number 2.
13
+ [44.840 --> 45.840] Gestures
14
+ [45.840 --> 50.320] Hand gestures are a vitally important type of nonverbal communication that take on various
15
+ [50.320 --> 53.960] meanings as you navigate the world's cultures.
16
+ [53.960 --> 58.840] One might immediately think of waving, giving a peace sign or a thumbs up.
17
+ [58.840 --> 64.280] One might see a raised index finger to signal that a person's team is number 1.
18
+ [64.280 --> 67.640] Politicians will use specially designed gestures to emphasize points.
19
+ [67.640 --> 68.640] Number 3.
20
+ [68.640 --> 69.960] Paralinguistics
21
+ [69.960 --> 75.480] Paralinguistics simply means a type of vocal communication without the use of language.
22
+ [75.480 --> 79.920] This includes voice inflection, pitch, rhythm, loudness, and tone.
23
+ [79.920 --> 85.400] A slow rhythm and hushed tone might signify gentleness or concern, while a heavy pitch and
24
+ [85.400 --> 89.720] rising inflection might be attributed to anger or enthusiasm.
25
+ [89.720 --> 90.720] Number 4.
26
+ [90.720 --> 92.040] Body language
27
+ [92.040 --> 96.320] Though body language and posture can be quite subtle, they can have an enormous impact
28
+ [96.320 --> 97.920] on communication.
29
+ [97.920 --> 101.800] Crossed arms might signify a closed-off or defensive attitude.
30
+ [101.800 --> 105.360] Slumped shoulders and excessive leaning might signify boredom.
31
+ [105.360 --> 108.680] Again, these cues are subtle but powerful.
32
+ [108.680 --> 109.680] Number 5.
33
+ [109.680 --> 110.760] Proxemics
34
+ [110.760 --> 113.600] Proxemics refers to personal space.
35
+ [113.600 --> 118.160] Different individuals prefer different distances when it comes to speaking with others.
36
+ [118.160 --> 122.400] Obviously, standing too close to someone while she or he is talking might bring about
37
+ [122.400 --> 125.240] feelings of discomfort or annoyance.
38
+ [125.240 --> 129.600] When speaking to groups, individuals tend to need larger distances in order to feel
39
+ [129.600 --> 130.600] heard.
40
+ [130.600 --> 131.600] Number 6.
41
+ [131.600 --> 132.600] Eye gaze
42
+ [132.600 --> 136.880] Eye gazing is a fascinating type of nonverbal communication.
43
+ [136.880 --> 141.120] For example, the rate of blinking might actually increase and the pupils dilate when
44
+ [141.120 --> 143.520] friends or loved ones are encountered.
45
+ [143.520 --> 145.920] This goes for interesting objects as well.
46
+ [145.920 --> 151.360] The eyes react very differently to outside stimulus depending on personal interpretation.
47
+ [151.360 --> 152.640] Number 7.
48
+ [152.640 --> 154.160] Haptics
49
+ [154.160 --> 157.640] Haptics simply refers to communicating through touch.
50
+ [157.640 --> 161.320] Touching is used to signify love, affection, and familiarity.
51
+ [161.320 --> 165.920] It might also be employed in times of stress or sadness when comfort is needed.
52
+ [165.920 --> 170.960] The force of a handshake might signify extra enthusiasm between close friends, while
53
+ [170.960 --> 176.040] a firm, standard grip might be more appropriate for a professional introduction.
54
+ [176.040 --> 177.040] Number 8.
55
+ [177.040 --> 178.040] Appearance
56
+ [178.040 --> 182.000] Appearance is a very important type of nonverbal communication.
57
+ [182.000 --> 186.480] Physical appearance, including clothing style and neatness, is the first thing people see
58
+ [186.480 --> 188.840] when encountering one another.
59
+ [188.840 --> 192.920] Studies in the area of color psychology suggest that the colors of clothing can have big
60
+ [192.920 --> 195.600] effects on mood and attitude.
61
+ [195.600 --> 199.480] People make quick judgements of character according to dress and appearance.
62
+ [199.480 --> 200.720] Thank you for watching.
63
+ [200.720 --> 202.600] For more, visit about.com.
transcript/allocentric_d_J9UxKBl7o.txt ADDED
@@ -0,0 +1,72 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ [0.000 --> 2.240] Welcome to this module, guys.
2
+ [2.240 --> 7.120] And in this module, we're going to explore the concept of proxemics,
3
+ [7.120 --> 11.360] which was developed by anthropologist Edward T. Hall.
4
+ [11.360 --> 13.480] As we discussed earlier in the course,
5
+ [13.480 --> 19.760] Proxemics actually focuses on the use of space and distance when it comes to nonverbal communication.
6
+ [19.760 --> 24.760] And it looks into how it influences our interactions with those around us.
7
+ [24.760 --> 29.040] And by actually better understanding the principles of proxemics
8
+ [29.320 --> 33.000] through Hall's proxemics model,
9
+ [33.000 --> 37.840] we can actually create an environment that's conducive to effective communication,
10
+ [37.840 --> 39.800] trust building and collaboration.
11
+ [39.800 --> 43.480] And we can also learn that in different types of situations,
12
+ [43.480 --> 48.240] how we can use the nonverbal communication aspect of space
13
+ [48.240 --> 53.240] to actually optimize and maximize our communication for success.
14
+ [53.240 --> 58.840] So let's go into the four zones of personal space that Hall identified.
15
+ [58.840 --> 63.840] So Hall has identified four distinct zones of personal space
16
+ [63.840 --> 67.240] that people maintain in their interactions with others.
17
+ [67.240 --> 73.240] We have the intimate zone, the personal zone, the social zone and the public zone.
18
+ [73.240 --> 79.040] Now, the intimate zone is around 0 to 50 centimeters from you.
19
+ [79.040 --> 82.840] And this zone is only reserved for close relationships,
20
+ [82.840 --> 85.720] such as family members, romantic partners,
21
+ [85.720 --> 87.800] or in some cases close friends.
22
+ [87.800 --> 92.600] So entering someone's intimate zone without permission can cause discomfort.
23
+ [92.600 --> 94.200] So what does that tell us?
24
+ [94.200 --> 97.800] This means that when we're talking to someone we've met for the first time,
25
+ [97.800 --> 104.400] or we're talking to a team member or a professional who we don't have an intimate relationship with,
26
+ [104.400 --> 107.600] this is the zone that we don't want to get into.
27
+ [107.600 --> 112.200] Being 0 to 50 centimeters close to someone can often feel intrusive
28
+ [112.200 --> 114.000] like you're going into their personal space.
29
+ [114.000 --> 120.400] And we never want to get this close to someone unless we have the right relationship.
30
+ [120.400 --> 123.600] So this is a zone we probably want to stay away from.
31
+ [123.600 --> 126.400] Next, we move on to the personal zone,
32
+ [126.400 --> 131.600] which is around 0.5 to 1 meters away from you.
33
+ [131.600 --> 138.400] And now this zone is for interactions with friends, acquaintances, and professional colleagues.
34
+ [138.400 --> 142.600] It allows for casual conversations and personal connections
35
+ [142.600 --> 145.400] without invading someone's intimate space.
36
+ [145.400 --> 149.200] So when you're having a conversation with someone,
37
+ [149.200 --> 151.400] whether it's a casual conversation,
38
+ [151.400 --> 155.000] whether it's like a lunch conversation,
39
+ [155.000 --> 158.600] or you're just having a casual conversation around work,
40
+ [158.600 --> 162.200] whether it's with like a work colleague or an acquaintance or a friend,
41
+ [162.200 --> 164.200] this is the zone that you want to be in.
42
+ [164.200 --> 168.400] You want to be around 0.5 to 1 meters away from them,
43
+ [168.400 --> 173.200] because this is probably the optimal zone where someone will feel like
44
+ [173.200 --> 177.000] you're not invading their personal space, which is a good thing.
45
+ [177.000 --> 184.400] Next guys, we have our social zone and our social zone is around 1 to 4 meters away from you.
46
+ [184.400 --> 187.800] Now this zone is used for more formal interactions,
47
+ [187.800 --> 192.000] such as business meetings, presentations, or group discussions.
48
+ [192.000 --> 196.800] It allows for clear communication while maintaining a sense of professionalism.
49
+ [196.800 --> 199.600] So guys, if you're doing a team presentation,
50
+ [199.600 --> 202.600] or you want to facilitate a bit of a group discussion,
51
+ [202.600 --> 206.600] or you're having a business meeting with someone you haven't met for the first time,
52
+ [206.600 --> 210.600] this is probably the distance that you want to keep from them,
53
+ [210.600 --> 213.400] whether it's through a cleverly arranged meeting room,
54
+ [213.400 --> 217.400] or sitting across the table, or keeping a little bit of distance
55
+ [217.400 --> 223.000] so you can project to everybody and not just feel like you're talking to just one person.
56
+ [223.000 --> 225.400] This is the zone that you want to stay in.
57
+ [225.400 --> 229.200] This distance that is 1 to 4 meters away from you.
58
+ [229.200 --> 234.200] And finally guys, we have the public zone, which is 4 meters or more.
59
+ [234.200 --> 239.000] And now this zone is for public speaking lectures or performances,
60
+ [239.000 --> 242.400] because it actually creates a sense of detachment,
61
+ [242.400 --> 245.200] which is useful when addressing large audiences.
62
+ [245.200 --> 249.000] Because if you get any closer, it's a little bit uncomfortable for the audience
63
+ [249.000 --> 252.800] seeing someone speak from such a close distance for so long.
64
+ [252.800 --> 257.400] It often, sometimes what will happen is like the people at the back might not even see you,
65
+ [257.400 --> 260.200] or they might just think that you're talking to a few people.
66
+ [260.200 --> 263.600] But actually, if you want to address a large group,
67
+ [263.600 --> 266.800] whether it's for public speaking lecture performances,
68
+ [266.800 --> 271.800] you want to be in this public zone or of 4 meters or more.
69
+ [271.800 --> 276.600] So as we mentioned, understanding the appropriate
70
+ [276.600 --> 279.600] proxemic zones for different types of interactions
71
+ [279.600 --> 284.000] can really help you communicate more effectively with your team members,
72
+ [284.000 --> 288.200] your clients, your stakeholders, and also in different communication.
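As a quick illustration of the zone boundaries quoted in this talk, here is a minimal sketch (an editorial addition, not part of the transcript) that classifies an interpersonal distance in meters into the four zones described above; the 0.5 m, 1 m, and 4 m cut-offs are the speaker's approximate figures, not exact limits.

```python
# Illustrative sketch only: classify a distance (in meters) into the four
# proxemic zones described in the talk. Thresholds are the speaker's rough
# figures (intimate < 0.5 m, personal < 1 m, social < 4 m, public >= 4 m).

def proxemic_zone(distance_m):
    """Return the zone name for a given interpersonal distance in meters."""
    if distance_m < 0:
        raise ValueError("distance must be non-negative")
    if distance_m < 0.5:
        return "intimate"   # close relationships only
    if distance_m < 1.0:
        return "personal"   # friends, acquaintances, colleagues
    if distance_m < 4.0:
        return "social"     # meetings, presentations, group discussions
    return "public"         # lectures, performances, large audiences

if __name__ == "__main__":
    for d in (0.3, 0.8, 2.5, 6.0):
        print(f"{d} m -> {proxemic_zone(d)}")
```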
transcript/allocentric_eK3T5UIwr3E.txt ADDED
@@ -0,0 +1,1084 @@
1
+ [0.000 --> 5.000] This program is presented by University of California Television.
2
+ [5.000 --> 13.000] Like what you learn? Visit our website or follow us on Facebook and Twitter to keep up with the latest UCTV programs.
3
+ [30.000 --> 35.000] Visit our website or follow us on Facebook and Twitter to keep up with the latest UCTV programs.
4
+ [60.000 --> 65.000] Earlier this program has been funded by University of California Television, ago.
5
+ [65.040 --> 70.040] Detailed and Climate Assessment, China Education to welcome protects communities and values of Italy.
6
+ [70.040 --> 73.560] specified by US Tale County National Journal and Transportation Professional Legal Policy.
7
+ [73.560 --> 76.000] Let's begin to get creative and creative together.
8
+ [76.000 --> 79.540] Today, you're gonna hear 3 talks
9
+ [79.900 --> 82.400] from 3 members of the Memory and Aging Center.
10
+ [82.520 --> 85.320] Thank you for coming today!
11
+ [85.680 --> 87.880] Thank you.
12
+ [87.880 --> 95.120] So first I'm going to talk about brain games that capture brain circuits, specifically how
13
+ [95.120 --> 100.280] to use brain games to make inferences about memory systems.
14
+ [100.280 --> 102.680] And after me you'll hear from Brie Bettcher.
15
+ [102.680 --> 107.120] She's also a neuropsychologist at the Memory and Aging Center as I am.
16
+ [107.120 --> 112.360] And she's going to talk about the evidence for using brain games to improve your cognitive
17
+ [112.360 --> 114.200] function.
18
+ [114.200 --> 118.880] And lastly you'll hear from one of our neurology fellows Winston Chung who's going to talk
19
+ [118.880 --> 121.480] about neuroscience and philosophy.
20
+ [121.480 --> 124.440] So I think it'll be an interesting evening.
21
+ [124.440 --> 129.400] So in my talk I hope that you'll learn that there are multiple distinct memory systems
22
+ [129.400 --> 131.640] in the brain.
23
+ [131.640 --> 137.320] And by using carefully designed cognitive tests we can measure separately how well each
24
+ [137.320 --> 142.840] of these systems is functioning.
25
+ [142.840 --> 148.680] During the first half of my talk I'll focus on the distinction between working memory
26
+ [148.680 --> 152.520] and long-term memory consolidation.
27
+ [152.520 --> 158.040] I'll start with the story of a famous patient known as HM who taught us that there are
28
+ [158.040 --> 162.000] multiple memory systems in the brain.
29
+ [162.000 --> 166.320] Then we'll try out some tests of working memory and long-term memory like the ones that
30
+ [166.320 --> 169.800] we use at the Memory and Aging Center.
31
+ [169.800 --> 174.880] And with that section I'll end with some tips about how you can maximize your memory
32
+ [174.880 --> 179.920] function using these insights from neuroscience.
33
+ [179.920 --> 185.560] During the second half of my talk I'll focus on the distinction between allocentric and
34
+ [185.560 --> 189.360] egocentric navigation memory strategies.
35
+ [189.360 --> 194.240] So there's two major ways that we can navigate how we can find our way around without getting
36
+ [194.240 --> 196.560] lost.
37
+ [196.560 --> 202.400] And I'll ask you which strategy do you prefer to use?
38
+ [202.400 --> 208.680] At the Memory and Aging Center the most common reason why new patients come to us is because
39
+ [208.680 --> 212.240] they have a memory problem.
40
+ [212.240 --> 218.600] When a patient tells us that they have a memory problem we ask them to give us some examples.
41
+ [218.600 --> 222.680] And when we ask this question we get very different answers.
42
+ [222.680 --> 227.280] So here are some of the most common answers that we get.
43
+ [227.280 --> 232.640] I have trouble finding words or names when I need them.
44
+ [232.640 --> 238.440] Sometimes I can't remember why I walked into a room.
45
+ [238.440 --> 242.440] Especially if I get distracted on the way.
46
+ [242.440 --> 248.080] I forget where I put my keys or parked my car.
47
+ [248.080 --> 256.680] I can't remember the meanings of words or even what objects are used for.
48
+ [256.680 --> 263.320] I sometimes forget what I did yesterday or last week and even when I am reminded I sometimes
49
+ [263.320 --> 265.760] can't remember.
50
+ [265.760 --> 271.280] These are all very different memory problems and in fact they rely on different memory
51
+ [271.280 --> 276.000] circuits in the brain.
52
+ [276.000 --> 280.640] We learned that there are different memory systems from a famous patient who is known
53
+ [280.640 --> 282.640] as HM.
54
+ [282.640 --> 287.800] HM had a seizure disorder that was not well treated with medications.
55
+ [287.800 --> 294.680] And so his surgeon Dr. William Scoville performed a bilateral medial temporal lobe resection
56
+ [294.680 --> 300.800] cutting out the middle parts of his temporal lobe including the hippocampus on each side.
57
+ [300.800 --> 306.720] You can see in the figure there on the left, in HM's brain, there is a big chunk of brain
58
+ [306.720 --> 310.240] that is missing.
59
+ [310.240 --> 315.800] So the good thing about this surgery was that it cured his seizures but it had a horrible
60
+ [315.800 --> 317.880] side effect.
61
+ [317.880 --> 322.640] He could no longer commit new events to his long-term memory.
62
+ [322.640 --> 327.680] He actually lived a long life and he would see the same doctors sometimes day after day
63
+ [327.680 --> 332.560] and it was like he was meeting them for the first time.
64
+ [332.560 --> 341.120] So HM was impaired at laying down new memories, long-term memory consolidation.
65
+ [341.120 --> 345.560] This is the type of memory that we often mean when we talk about memory.
66
+ [345.560 --> 349.600] It's what we use when we are a student and we study a subject so that we'll remember
67
+ [349.600 --> 351.800] the information later.
68
+ [351.800 --> 354.720] It's memory for facts and events.
69
+ [354.720 --> 361.480] It seems to have almost an unlimited capacity.
70
+ [361.480 --> 366.160] One of the important findings with HM was that there were memory functions that were
71
+ [366.160 --> 368.040] spared.
72
+ [368.040 --> 373.120] So we know that the medial temporal lobe is critical for long-term memory consolidation
73
+ [373.120 --> 379.280] from HM but we know that it's not critical for some other memory functions.
74
+ [379.280 --> 383.320] For example, HM was able to learn new skills.
75
+ [383.320 --> 389.320] We call this type of memory procedural memory, like learning to dance the salsa or learning
76
+ [389.320 --> 390.720] to ride a bike.
77
+ [390.720 --> 392.560] It becomes a habit after a while.
78
+ [392.560 --> 394.200] You don't even really have to think about it.
79
+ [394.200 --> 398.240] You just remember how to do it almost effortlessly.
80
+ [398.240 --> 403.080] So this is called procedural memory and it's subserved by a very different brain circuit
81
+ [403.080 --> 407.560] than the long-term memory consolidation.
82
+ [407.560 --> 411.280] Short-term memory was also relatively preserved in HM.
83
+ [411.280 --> 416.680] It's also called working memory because we use this kind of information to hold small
84
+ [416.680 --> 421.600] amounts of information in our mind so that we can work with the information.
85
+ [421.600 --> 426.600] This type of memory is very temporary and has a very small capacity.
86
+ [426.600 --> 432.320] So again, these two types of memory were preserved in HM despite the fact that he had those
87
+ [432.320 --> 435.800] big chunks of his medial temporal lobe removed.
88
+ [435.800 --> 442.360] So HM taught us that there are multiple distinct memory systems.
89
+ [442.360 --> 446.160] So I'll talk a bit more now about this short term or working memory.
90
+ [446.160 --> 450.040] It's the type of memory you're using right now to listen to this talk and process it in
91
+ [450.040 --> 455.440] your mind and think about how the stuff you're learning may apply to you or people you know.
92
+ [455.440 --> 459.760] You're processing or working with this memory as you listen.
93
+ [459.760 --> 465.120] So working memory holds information in conscious awareness so we can use it.
94
+ [465.160 --> 470.160] The information can come from our senses like right now you're listening to me talk and
95
+ [470.160 --> 473.280] that information is going into your working memory.
96
+ [473.280 --> 477.960] The information can also come from your long-term memory stores.
97
+ [477.960 --> 480.400] The duration is seconds.
98
+ [480.400 --> 486.200] It only lasts up to maybe 20 or 30 seconds unless you keep rehearsing the information over
99
+ [486.200 --> 488.880] and over again in your mind.
100
+ [488.880 --> 494.400] For example, if someone gave a phone number to you and then you walked over to the phone
101
+ [494.440 --> 499.400] to dial it you would hold that phone number in your working memory so that you could remember
102
+ [499.400 --> 501.960] it when you need to dial on the phone.
103
+ [501.960 --> 506.960] But if someone distracts you when you're on your way to the phone you're likely to lose
104
+ [506.960 --> 512.200] it and that's because this working memory has a very limited capacity and so distracting
105
+ [512.200 --> 517.720] information can compete with the information you want to pay attention to and then you
106
+ [517.720 --> 520.440] can lose it.
107
+ [520.440 --> 525.000] So it can only hold about five to seven items in your mind at a time which is perfect because
108
+ [525.000 --> 528.840] phone numbers are about seven digits long or nine digits.
109
+ [528.840 --> 533.440] If you can chunk the information you can hold onto it longer.
110
+ [533.440 --> 539.120] So for example if you recognize an area code in the phone number you can chunk that and
111
+ [539.120 --> 545.000] it becomes one unit and then you only have to remember the other seven digits.
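To make the arithmetic of chunking concrete, here is a small hypothetical sketch (an editorial addition, not from the talk); the area codes and the idea of counting one familiar code as a single chunk are assumptions for illustration.

```python
# Hypothetical illustration of chunking: a familiar area code collapses into
# one working-memory "chunk", leaving only the remaining seven digits to hold.

KNOWN_AREA_CODES = {"415", "510", "650"}   # made-up set of familiar codes

def working_memory_load(phone_digits):
    """Rough count of items a 10-digit number costs in working memory."""
    area, rest = phone_digits[:3], phone_digits[3:]
    if area in KNOWN_AREA_CODES:
        return 1 + len(rest)        # one chunk + seven digits = 8 items
    return len(phone_digits)        # no chunking: ten separate digits

if __name__ == "__main__":
    print(working_memory_load("4155551234"))   # 8
    print(working_memory_load("9075551234"))   # 10
```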
112
+ [545.000 --> 550.560] So the better you inhibit irrelevant information the more information you can hold in your
113
+ [550.560 --> 553.160] working memory.
114
+ [553.160 --> 558.720] Now I think this is why people who are under a lot of stress have trouble with their memory.
115
+ [558.720 --> 563.960] They may have a lot of distressing thoughts that are interfering with their working memory.
116
+ [563.960 --> 570.920] So there's not enough room in their working memory for what they want to pay attention to.
117
+ [570.920 --> 577.440] So good strategies for improving your working memory are to reduce your stress and also
118
+ [577.440 --> 581.000] just try to reduce distracting information.
119
+ [581.000 --> 585.320] If you need to concentrate on something or concentrate on important conversation try to
120
+ [585.320 --> 590.160] do in a quiet place with fewer distractions.
121
+ [590.160 --> 593.240] So let's try a working memory test now.
122
+ [593.240 --> 599.960] I'm going to administer to you a very popular neuropsychological test of working memory.
123
+ [599.960 --> 603.480] I'm going to say some letters and numbers to you.
124
+ [603.480 --> 610.680] I'll jump up and I want you to say them back to me with the letters in order first followed
125
+ [610.680 --> 613.320] by the numbers in order.
126
+ [613.320 --> 615.120] Okay are you ready?
127
+ [615.120 --> 618.120] All right.
128
+ [618.120 --> 622.160] F, 3, A, 8.
129
+ [622.160 --> 629.160] All right good let's try a longer one now.
130
+ [629.160 --> 632.160] All right.
131
+ [632.160 --> 639.160] K, W, 9, 2, P.
132
+ [639.160 --> 645.160] All right good.
133
+ [645.160 --> 649.960] So that's a test of what we would call verbal working memory where you have to hold online
134
+ [649.960 --> 656.840] those letters and numbers and manipulate them, reorder them in your mind.
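As an aside, the instruction for this task pins down the expected answer exactly, so it can be written as a tiny sorting routine. The sketch below is an editorial illustration, not the clinical test's actual scoring code; it assumes the rule the speaker describes, letters repeated back first in alphabetical order, then numbers in ascending order.

```python
# Editorial sketch of the response rule for the letter-number task above:
# letters first in alphabetical order, then numbers in ascending order.

def expected_response(items):
    """items: iterable of single characters, e.g. ['F', '3', 'A', '8']."""
    letters = sorted(ch.upper() for ch in items if ch.isalpha())
    numbers = sorted((ch for ch in items if ch.isdigit()), key=int)
    return letters + numbers

if __name__ == "__main__":
    print(expected_response(list("F3A8")))               # ['A', 'F', '3', '8']
    print(expected_response(["K", "W", "9", "2", "P"]))  # ['K', 'P', 'W', '2', '9']
```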
135
+ [656.840 --> 661.240] So now let's try another test of working memory that we're using in our research right
136
+ [661.240 --> 662.240] now.
137
+ [662.240 --> 686.480] I want you to remember the last three locations that are shown.
138
+ [686.480 --> 689.600] So this is a test of spatial working memory.
139
+ [689.600 --> 694.640] It turns out that spatial working memory and verbal working memory the test we just did
140
+ [694.640 --> 699.680] have some similar neural underpinnings but also have some separate neural underpinnings.
141
+ [699.680 --> 704.600] So for example some patients might be impaired in spatial working memory but not verbal working
142
+ [704.600 --> 708.960] memory or vice versa.
143
+ [708.960 --> 715.040] In Alzheimer's disease early on working memory is actually pretty good.
144
+ [715.040 --> 719.920] But the type of memory that they have problems with is the same type that HM had problems
145
+ [719.920 --> 724.360] with long term memory consolidation.
146
+ [724.360 --> 725.560] Why is that?
147
+ [725.560 --> 731.440] Well you can see in this healthy control brain this is the hippocampus it's nice and tight
148
+ [731.440 --> 735.600] and plump and lots of neurons there.
149
+ [735.600 --> 741.680] But in the Alzheimer's brain there's a lot of black which is the cerebral spinal fluid
150
+ [741.760 --> 746.360] that has come in to fill in where the neurons have died.
151
+ [746.360 --> 752.560] So this is an early target of Alzheimer's disease and this is why early in the disease many
152
+ [752.560 --> 758.840] patients have trouble laying down new information into long term memory stores.
153
+ [758.840 --> 764.560] They may have trouble telling you what movie they saw last week for example.
154
+ [765.560 --> 772.560] Well let's try a test of memory, a test of long term consolidation.
155
+ [772.560 --> 777.200] So I'm going to read a list of words to you and want you to listen carefully, and when
156
+ [777.200 --> 778.200] I'm through.
157
+ [778.200 --> 782.520] I want you to say them back in your mind in any order.
158
+ [782.520 --> 786.920] And if you want you can try to keep track of how many you're remembering on your fingers
159
+ [786.920 --> 790.520] or tallying but don't write down the words.
160
+ [790.520 --> 794.480] So I'll read the list of words to you and then you can repeat them back to yourself when
161
+ [794.480 --> 797.400] I'm done in your mind.
162
+ [797.400 --> 811.920] Arugula, paperclip, apple, stapler, telephone, gorgonzola, scissors, red onion.
163
+ [811.920 --> 812.920] I'm finished.
164
+ [812.920 --> 814.920] Repeat them back in your mind.
165
+ [814.920 --> 815.920] Okay.
166
+ [815.920 --> 816.920] Let's try it again.
167
+ [817.480 --> 819.520] Let's see if you can remember more this time.
168
+ [819.520 --> 821.880] It'll be the same list.
169
+ [821.880 --> 834.880] Arugula, paperclip, apple, stapler, telephone, gorgonzola, scissors, red onion.
170
+ [834.880 --> 837.880] Okay.
171
+ [837.880 --> 844.240] We administer a test like that one to all the patients who come in our clinic.
172
+ [844.240 --> 850.700] And what we find is the first time we read the list of words, the Alzheimer's patients
173
+ [850.700 --> 852.960] perform pretty similarly to controls.
174
+ [852.960 --> 855.560] So this test actually I think has 16 words.
175
+ [855.560 --> 858.880] It's a different test than the one I just gave you but it's similar.
176
+ [858.880 --> 864.440] And at trial one, they repeat back a similar number of words.
177
+ [864.440 --> 869.040] But then over the learning trials, we actually administer five learning trials.
178
+ [869.040 --> 872.880] You can see the controls get better every time.
179
+ [872.880 --> 875.360] Every time they remember more words.
180
+ [875.360 --> 879.920] And this is because their hippocampus is helping them to consolidate the information.
181
+ [879.920 --> 884.720] But in Alzheimer's disease, they don't show as much improvement over the learning trials.
182
+ [884.720 --> 888.040] Because their hippocampus is not as effective at this.
183
+ [888.040 --> 893.480] And importantly, over the long delay, which is 20 minutes, we see that the Alzheimer's
184
+ [893.480 --> 896.760] patients remember almost none of the words.
185
+ [896.760 --> 901.440] In fact, many of the patients don't remember that a list had been read to them.
186
+ [901.440 --> 908.800] So this is a problem with long-term memory consolidation.
187
+ [908.800 --> 916.000] How does the hippocampus consolidate new information into long-term memory stores?
188
+ [916.000 --> 923.000] Well, it consolidates the memories in a widely distributed network of brain regions
189
+ [923.000 --> 925.920] in neocortex.
190
+ [925.920 --> 931.120] So for example, let's say you went to an important family wedding several years ago.
191
+ [931.120 --> 936.520] Well, the brain doesn't just consolidate your memory of that wedding into one node in
192
+ [936.520 --> 939.320] the brain and its connection to hippocampus.
193
+ [939.320 --> 945.600] Rather, it consolidates the memory in a widely distributed network of brain regions, the
194
+ [945.600 --> 951.600] same brain regions that you used when you process the information at the wedding.
195
+ [951.600 --> 957.040] So the same brain regions that processed the sights of the wedding, the taste of the
196
+ [957.040 --> 962.440] cake, the sound of the music, the conversations that you had there, the emotions that you
197
+ [962.440 --> 969.000] felt there, those same brain regions are involved in the memory for the event.
198
+ [969.000 --> 974.320] Emotion in particular seems to be a really important organizing force for these memories.
199
+ [974.320 --> 979.600] So these nodes in your brain that represent the event are all interconnected functionally
200
+ [979.600 --> 983.400] for this memory and connected with the hippocampus.
201
+ [983.400 --> 989.720] And every time you recall that wedding over the years, these same regions are active and
202
+ [989.720 --> 991.040] interact.
203
+ [991.040 --> 996.480] The hippocampus is critically important for bringing up that memory.
204
+ [996.480 --> 1002.200] Over time, however, the hippocampus becomes less and less important for bringing up that
205
+ [1002.200 --> 1004.000] memory.
206
+ [1004.000 --> 1009.400] So many years after the wedding, the hippocampus may hardly be important at all for
207
+ [1009.400 --> 1012.080] bringing up that memory.
208
+ [1012.080 --> 1017.840] This is why patients with Alzheimer's disease can remember better events from earlier in
209
+ [1017.840 --> 1021.840] their life than the movie they saw last week.
210
+ [1021.840 --> 1026.840] They may be able to tell you stories from their childhood, but they can't remember that
211
+ [1026.840 --> 1029.440] you went to a party with them last week.
212
+ [1029.440 --> 1034.480] And this is because the hippocampus is less important for recalling memories from earlier
213
+ [1034.480 --> 1040.920] in your life than for more recent events.
214
+ [1040.920 --> 1046.440] So we've talked about the brain's circuits important for memory and the differences between
215
+ [1046.440 --> 1050.120] short-term memory and long-term memory consolidation.
216
+ [1050.120 --> 1053.680] What can we take from all of this to maximize our memories?
217
+ [1053.680 --> 1057.520] Well, I'm going to leave you with two tips here.
218
+ [1057.520 --> 1061.440] The first is we remember when we pay attention.
219
+ [1061.440 --> 1065.280] So when you focus and you reduce distractions.
220
+ [1065.280 --> 1070.880] And the second is we remember when we make it meaningful.
221
+ [1070.880 --> 1076.720] So when you make associations that give new information, context or significance in terms
222
+ [1076.720 --> 1082.600] of all the other things you have in your mind, the reason this works is because memories
223
+ [1082.600 --> 1090.120] are stored based on their associations to other events or memories.
224
+ [1090.120 --> 1094.840] So let's try this technique out.
225
+ [1094.840 --> 1097.040] So Brie is someone you're going to meet in a few moments.
226
+ [1097.040 --> 1099.040] She's going to give the next talk.
227
+ [1099.040 --> 1103.320] And I don't know about all of you, but sometimes when someone introduces themselves to me, I hear
228
+ [1103.320 --> 1106.720] the name and then second later it's gone.
229
+ [1106.720 --> 1112.440] So I encourage you, when someone tells you their name, to stop a moment and focus and make
230
+ [1112.440 --> 1115.000] associations.
231
+ [1115.000 --> 1120.840] So for Brie, you might imagine a plate of Brie cheese.
232
+ [1120.840 --> 1125.760] And just think about how delicious that cheese is and imagine Brie eating that big plate
233
+ [1125.760 --> 1126.760] of Brie cheese.
234
+ [1126.760 --> 1129.920] You'll probably never forget her name again.
235
+ [1129.920 --> 1133.360] And if that doesn't work, I have an even better strategy for you.
236
+ [1133.360 --> 1137.800] And you can think of someone else you knew by the name of Brie.
237
+ [1137.800 --> 1141.840] Maybe there was a girl back in high school with the name of Brie.
238
+ [1141.840 --> 1145.680] So even better, let's say that she stole your boyfriend.
239
+ [1145.680 --> 1149.400] So you just remember that girl, Brie, who stole your boyfriend.
240
+ [1149.400 --> 1154.440] Remember, emotions are a very powerful organizing force for memories.
241
+ [1154.440 --> 1158.280] So if you can activate your emotions while you're trying to remember something, you're
242
+ [1158.280 --> 1160.880] much more likely to remember it.
243
+ [1160.880 --> 1163.560] All right, let's try another one.
244
+ [1163.560 --> 1164.560] So Winston.
245
+ [1164.560 --> 1170.880] Winston's going to be giving a talk on philosophy and neuroscience later this evening.
246
+ [1170.880 --> 1172.520] And I think it's going to be a really good talk.
247
+ [1172.520 --> 1176.560] So you could remember Winston, he's a real winner.
248
+ [1176.560 --> 1179.760] Or you might think of Winston Churchill.
249
+ [1179.760 --> 1182.520] Winston Churchill was always smoking cigars.
250
+ [1182.520 --> 1186.280] So you might visualize Winston smoking a cigar.
251
+ [1186.280 --> 1191.800] So the more you engage your different senses, I find visualization in particular to be helpful,
252
+ [1191.800 --> 1197.600] the more likely you're going to be able to remember new information.
253
+ [1197.600 --> 1202.640] So we've talked about short-term memory and long-term memory and how to transition information
254
+ [1202.640 --> 1204.960] into long-term memory.
255
+ [1204.960 --> 1213.600] And again, the tips I have for you are one, stop and pay attention and two, make associations.
256
+ [1213.600 --> 1218.160] Because we consolidate long-term memories in terms of their associations to other memories
257
+ [1218.160 --> 1219.720] or concepts.
258
+ [1219.720 --> 1226.240] The most effective associations are original, even absurd.
259
+ [1226.240 --> 1228.640] They engage multiple senses.
260
+ [1228.640 --> 1230.920] They engage emotions.
261
+ [1230.920 --> 1234.200] Or they're personally salient.
262
+ [1234.200 --> 1239.000] So before we shift to the second half of the talk, I'll just review the brain-basis of
263
+ [1239.000 --> 1241.280] these two memory systems.
264
+ [1241.280 --> 1246.080] So the short-term memory or the working memory relies principally on the frontal lobes
265
+ [1246.080 --> 1249.080] and frontal-parietal circuits.
266
+ [1249.080 --> 1253.920] But the long-term memory consolidation relies critically on the hippocampus.
267
+ [1253.920 --> 1261.160] And over time, the hippocampus lays down memory throughout neocortex.
268
+ [1261.160 --> 1267.360] And after many years, the hippocampus isn't even really that critical to recall the memory.
269
+ [1267.360 --> 1271.840] So now I'm going to move to the little talk on navigation memory.
270
+ [1271.840 --> 1278.880] So I want you to think for a moment, how will you find your way home after this talk?
271
+ [1278.880 --> 1285.680] If your GPS isn't working.
272
+ [1285.680 --> 1292.080] There are two primary strategies that we use to find our way around.
273
+ [1292.080 --> 1298.840] The first that I'll talk about is the allocentric system, which means other-centered.
274
+ [1298.840 --> 1305.360] When we use this system, we represent where locations are relative to major landmarks
275
+ [1305.360 --> 1307.840] in three-dimensional space.
276
+ [1307.840 --> 1313.760] We often anchor our allocentric cognitive maps in Cartesian coordinates, north, south,
277
+ [1313.760 --> 1316.080] east, west.
278
+ [1316.080 --> 1321.640] For example, if you are using the allocentric navigation system, you might think my house
279
+ [1321.640 --> 1327.240] is north of UCSF, between Coit Tower and Fisherman's Wharf.
280
+ [1327.240 --> 1332.480] So you're appreciating the relationship between these major landmarks in space.
281
+ [1332.480 --> 1338.880] Your allocentric cognitive map of San Francisco does not change if you are at UCSF, if you're
282
+ [1338.880 --> 1341.800] at the Golden Gate Bridge, if you're in New York City.
283
+ [1341.800 --> 1343.000] It's the same map.
284
+ [1343.000 --> 1347.880] It doesn't depend on your position in space.
285
+ [1347.880 --> 1353.440] This system relies critically on the hippocampus, more so on the right hippocampus in the posterior
286
+ [1353.440 --> 1356.040] portion.
287
+ [1356.040 --> 1363.520] So in contrast to the allocentric system, the egocentric system is self-centered.
288
+ [1363.520 --> 1369.600] When we use this system, we chain responses with local cues.
289
+ [1369.600 --> 1375.640] For example, you might think to get to my house, I take a left on third street, I take
290
+ [1375.640 --> 1379.720] a right on King Street, and follow along the water.
291
+ [1379.720 --> 1383.120] After I pass the ferry building, I take a left.
292
+ [1383.120 --> 1387.120] You can see with this system, you don't have to appreciate the relationship between
293
+ [1387.120 --> 1389.440] these locations and three-dimensional space.
294
+ [1389.440 --> 1393.880] You just need to know when you get to the ferry building you take a left.
295
+ [1393.880 --> 1398.960] This type of system is very efficient when you've navigated along the same route so
296
+ [1398.960 --> 1403.440] many times that it becomes routine.
297
+ [1403.440 --> 1407.640] But let's say you're going to work on the same route that you take every day, and there's
298
+ [1407.640 --> 1408.840] a detour.
299
+ [1408.840 --> 1413.040] Well, your egocentric system isn't going to work anymore, and you need to pull up your
300
+ [1413.040 --> 1418.680] allocentric cognitive map to come up with another way to get home.
301
+ [1418.680 --> 1423.960] So this system, this habit learning system, relies critically on the caudate nucleus,
302
+ [1423.960 --> 1429.560] which is a structure in the basal ganglia deep inside your brain.
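One way to see the contrast between the two strategies is to write each down as a data structure. The sketch below is an editorial illustration with made-up landmarks and coordinates, not part of the speakers' tests: the allocentric map is a viewpoint-independent table of landmark positions, while the egocentric route is a chain of cue-to-response habits that fails as soon as a detour removes the expected cue.

```python
# Editorial illustration: allocentric map vs. egocentric route (made-up data).

# Allocentric: landmark positions in one shared coordinate frame. The map is
# the same no matter where you happen to be standing.
ALLOCENTRIC_MAP = {
    "UCSF": (0.0, 0.0),
    "Coit Tower": (2.0, 5.5),
    "Ferry Building": (3.5, 4.0),
    "home": (2.5, 6.0),
}

# Egocentric: a chained sequence of (local cue, habitual response) pairs.
EGOCENTRIC_ROUTE = [
    ("Third Street", "turn left"),
    ("King Street", "turn right"),
    ("Ferry Building", "turn left"),
]

def egocentric_next_action(cue):
    """Return the habitual response for a cue, or None if the chain breaks."""
    for landmark, action in EGOCENTRIC_ROUTE:
        if landmark == cue:
            return action
    return None   # unfamiliar cue (e.g. a detour): fall back on the map

if __name__ == "__main__":
    print(egocentric_next_action("King Street"))   # turn right
    print(egocentric_next_action("detour sign"))   # None
```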
303
+ [1429.560 --> 1434.440] The reason we know so much about the neural circuits important for navigation learning
304
+ [1434.440 --> 1439.520] is because if you want to know how a rodent's cognition is working, you put them in a maze
305
+ [1439.520 --> 1444.240] and you see if they can find their way out or get some food.
306
+ [1444.240 --> 1450.240] So when people are looking at rodent models of Alzheimer's disease, for example, they
307
+ [1450.240 --> 1455.040] evaluate how well the treatment's working by seeing how well the rodents can find their
308
+ [1455.040 --> 1458.680] way out of a maze.
309
+ [1458.680 --> 1462.400] So this is the most popular cognitive test for rodents.
310
+ [1462.400 --> 1464.200] It's the Morris Water Maze.
311
+ [1464.200 --> 1470.040] On this task, the mouse is put into a cloudy, cold pool, and the mouse is swimming around
312
+ [1470.040 --> 1475.560] trying to find the hidden submerged platform so he can escape.
313
+ [1475.560 --> 1481.480] He does this over many trials, and the platform is always hidden in the same place.
314
+ [1481.480 --> 1486.000] However, the rodent starts from a different position on every trial.
315
+ [1486.000 --> 1490.000] So the only way to get better and better at finding that hidden platform, which he's
316
+ [1490.000 --> 1495.840] sitting on right now, is to learn the relationship between the hidden platform and the cues that
317
+ [1495.840 --> 1502.040] surround the pool in three-dimensional space, just like you know the relationship between
318
+ [1502.040 --> 1506.600] Coit Tower and the Golden Gate Bridge and UCSF when you pull up a map of San Francisco
319
+ [1506.600 --> 1509.600] in your mind.
320
+ [1509.600 --> 1515.120] So we've developed some virtual reality tests of those two navigation strategies that
321
+ [1515.120 --> 1519.400] we're using in our lab because we think that they're sensitive to different brain circuits
322
+ [1519.440 --> 1523.440] and are disrupted by different diseases.
323
+ [1523.440 --> 1526.120] So I'll show you a couple examples of these.
324
+ [1526.120 --> 1529.960] So this is the human version of the test I just showed you.
325
+ [1529.960 --> 1534.800] We actually have a version of this test outside and I invite you to try it after the talks.
326
+ [1534.800 --> 1541.280] So on this test, you drive around in a circular field looking for the buried treasure.
327
+ [1541.280 --> 1544.840] When you drive over it, it will appear.
328
+ [1544.840 --> 1549.640] So you have many trials to find it and for you to get faster and faster at finding it,
329
+ [1549.640 --> 1555.240] you need to appreciate the relationship between the external cues, the houses and watertower
330
+ [1555.240 --> 1561.080] and mountains and so forth and the location of the buried treasure.
331
+ [1561.080 --> 1566.440] Just like the mouse had to learn where the hidden platform was relative to the cues around
332
+ [1566.440 --> 1567.440] the pool.
333
+ [1567.440 --> 1574.600] So we think this test is very sensitive to hippocampal system dysfunction and we're finding that
334
+ [1574.600 --> 1579.440] it's particularly impaired in the earliest stages of Alzheimer's disease, which targets
335
+ [1579.440 --> 1581.600] that system.
336
+ [1581.600 --> 1590.000] I'm going to show you now another test that we're using.
337
+ [1590.000 --> 1595.120] This one to measure specifically the egocentric navigation strategy.
338
+ [1595.120 --> 1599.720] On this test, the subject navigates through a long route through a neighborhood.
339
+ [1599.720 --> 1604.120] It's always the same route and you learn it by trial and error.
340
+ [1604.120 --> 1608.880] Each time you get to an intersection, you take a guess about which way you think it goes
341
+ [1608.880 --> 1613.360] and if you get it wrong, you're prompted to guess again until you get it right.
342
+ [1613.360 --> 1618.000] Over time, subjects get much more accurate at this test and it becomes almost a habit
343
+ [1618.000 --> 1620.560] for them.
344
+ [1620.560 --> 1625.280] So to do this test well, you just have to chain responses with local cues.
345
+ [1625.280 --> 1630.400] When I get to this cactus, I turn right, for example.
346
+ [1630.400 --> 1634.540] So we think these two types of navigation memory are really tapping different brain
347
+ [1634.540 --> 1639.760] circuits in our brains and that they're affected by different diseases.
348
+ [1639.760 --> 1644.540] I think we all use both of these strategies, but I think some of us tend to use one more
349
+ [1644.540 --> 1645.860] than the other.
350
+ [1645.860 --> 1649.840] So think to yourself, which strategy do you tend to use?
351
+ [1649.840 --> 1655.960] There are actually some sex differences on these tasks as well, and men tend to be a little
352
+ [1655.960 --> 1661.440] bit better on average on the allocentric navigation paradigm.
353
+ [1661.440 --> 1667.360] Although I've definitely had some women volunteers who have done amazingly well and one explanation
354
+ [1667.360 --> 1670.800] for this comes from evolutionary psychology.
355
+ [1670.800 --> 1676.280] You think about hunters back in prehistoric days, they had to wander long distances through
356
+ [1676.280 --> 1680.440] winding paths to try to search for prey and find their way home.
357
+ [1680.440 --> 1685.920] They really needed to rely on the allocentric memory system.
358
+ [1685.920 --> 1688.840] So I'm going to finish now with some take home points.
359
+ [1688.840 --> 1690.880] There are several types of memory.
360
+ [1690.880 --> 1696.720] We've focused on the distinction between working memory and long term consolidation, as well
361
+ [1696.720 --> 1702.280] as the distinction between allocentric and egocentric navigation memory.
362
+ [1702.280 --> 1708.000] Each type of memory relies on a set of brain regions and circuits.
363
+ [1708.000 --> 1713.800] By measuring the function of different types of memory, neuropsychologists can make inferences
364
+ [1713.800 --> 1720.080] about the integrity of the different underlying brain circuits.
365
+ [1720.080 --> 1721.360] Why is this important?
366
+ [1721.360 --> 1727.280] Why do we need to understand the links between memory and brain circuits?
367
+ [1727.280 --> 1731.320] Well memory disorders tend to target specific circuits.
368
+ [1731.320 --> 1737.360] And so to treat these diseases, we need to understand how these memory systems work and
369
+ [1737.360 --> 1740.960] why they fail.
370
+ [1740.960 --> 1745.280] So even healthy people can benefit from these understandings.
371
+ [1745.280 --> 1752.760] They can maximize their memories by understanding how memory systems work.
372
+ [1752.760 --> 1753.760] Thank you.
373
+ [1753.760 --> 1762.440] All right, good evening everyone.
374
+ [1762.440 --> 1768.320] As Kate mentioned, my name is Brie Bettcher and I also often introduce myself to patients
375
+ [1768.320 --> 1771.600] by saying that it's like the cheese.
376
+ [1771.600 --> 1775.440] And I feel very fortunate that I wasn't named after Gouda.
377
+ [1775.440 --> 1778.800] So I want to be talking to you tonight about something I think is really salient to all
378
+ [1778.800 --> 1785.120] of us, which is forestalling cognitive decline, so preventing any decline over time in
379
+ [1785.120 --> 1786.360] our thinking.
380
+ [1786.360 --> 1793.080] All right, so just to begin, I think one of the main questions that dominates our field
381
+ [1793.080 --> 1797.280] is how do we slow the cognitive aging process?
382
+ [1797.280 --> 1803.040] And by cognitive aging, what I mean is this is typically gradual decline in our ability
383
+ [1803.040 --> 1805.880] to process and manipulate information quickly.
384
+ [1805.880 --> 1809.720] And this isn't restricted to middle or older age.
385
+ [1809.720 --> 1814.120] We actually start to experience declines in how quickly we process things pretty early
386
+ [1814.120 --> 1816.800] even after our 20s.
387
+ [1816.800 --> 1821.360] And so in terms of the research landscape, I think what has been most remarkable in the
388
+ [1821.360 --> 1824.240] past few years is the transition in focus.
389
+ [1824.240 --> 1828.840] So for quite a while, we've had an anchor in looking at preventing dementia.
390
+ [1828.840 --> 1832.160] And this still is a very important focus of our work.
391
+ [1832.160 --> 1836.680] But over the years, particularly the last decade, there's been even more focus on staving
392
+ [1836.680 --> 1842.320] off decline, so not even necessarily dementia, but just preventing any cognitive decline.
393
+ [1842.320 --> 1846.640] And in addition to that, I think in the last couple of years, we've seen a lot more information,
394
+ [1846.640 --> 1852.640] a lot more media buzz around remaining cognitively robust throughout our life and maybe even improving
395
+ [1852.640 --> 1854.520] our cognition.
396
+ [1854.520 --> 1859.800] I think this Newsweek article actually sort of personifies this interest that's developed
397
+ [1859.800 --> 1864.800] over the past few years of how do we maintain our abilities and can we even get smarter
398
+ [1864.800 --> 1868.680] over time.
399
+ [1868.680 --> 1875.400] So transitioning from Dr. Possin's talk on spatial cognition and verbal memory, I plan
400
+ [1875.400 --> 1880.920] to talk a little bit tonight about cognitive plasticity and brain games.
401
+ [1880.920 --> 1884.680] And I'm also going to follow it up with a brief discussion of physical exercise.
402
+ [1884.680 --> 1889.960] So how physical activity is related to brain health and what are the mechanisms by which
403
+ [1889.960 --> 1896.560] physical exercise might actually impact our thinking.
404
+ [1896.560 --> 1903.280] So just to provide some context for how this evolution in aging research has transpired,
405
+ [1903.280 --> 1908.160] I think it's really important to examine early studies of plasticity and cognitive reserve.
406
+ [1908.560 --> 1913.360] I think one of really the most striking examples of this comes from the early Nun Study findings,
407
+ [1913.360 --> 1915.640] which I think some of you might be probably familiar with.
408
+ [1915.640 --> 1918.600] We talked about it a little bit last year.
409
+ [1918.600 --> 1922.600] The Nun Study refers to this longitudinal study of Catholic sisters.
410
+ [1922.600 --> 1927.600] They were members of the school sisters of Notre Dame congregation.
411
+ [1927.600 --> 1931.520] There's actually a book on this topic.
412
+ [1931.520 --> 1938.520] And this included approximately, it was a little bit over 650, between 650 and 700 Catholic
413
+ [1938.520 --> 1940.080] sisters were enrolled.
414
+ [1940.080 --> 1947.960] And their ages ranged from 75 to 102 years old when the study began in 1991.
415
+ [1947.960 --> 1952.400] And what was great about this study is that the sisters received annual examinations and
416
+ [1952.400 --> 1957.160] they all agreed to donate their brains upon autopsy.
417
+ [1957.160 --> 1961.440] Or rather, they agreed to donate their brains for autopsy upon death.
418
+ [1961.440 --> 1965.160] Probably an important distinction there.
419
+ [1965.160 --> 1970.560] So the Nun Study provided this really controlled means of evaluating predictors of cognitive
420
+ [1970.560 --> 1975.960] resilience and also cognitive decline in a group of individuals who clearly had very similar
421
+ [1975.960 --> 1976.960] lifestyle.
422
+ [1976.960 --> 1984.040] So we didn't have to worry about any multiple partners across the lifespan, any exposure
423
+ [1984.040 --> 1985.040] to particular diseases.
424
+ [1985.040 --> 1990.840] It's a pretty clean sample though that they had to look at.
425
+ [1990.840 --> 1996.200] And from this study, the researchers led by Dr. Snowdon at the University of Kentucky reported
426
+ [1996.200 --> 2000.440] several important findings that I think has really changed how we think about cognition
427
+ [2000.440 --> 2002.840] over the lifetime.
428
+ [2002.840 --> 2007.960] And one of these findings includes the observation that some nuns had brains that were riddled
429
+ [2007.960 --> 2015.360] with Alzheimer's disease pathology but did not show any manifestations of a dementia.
430
+ [2015.360 --> 2018.240] And Dr. Snowdon reported several case examples.
431
+ [2018.240 --> 2023.520] So including one of Sister Mathia shown there to illustrate the individual differences
432
+ [2023.520 --> 2027.240] he noted in pathology and clinical manifestation.
433
+ [2027.240 --> 2033.760] So Sister Mathia reportedly died at 104 years of age, relatively healthy, dementia-free.
434
+ [2033.760 --> 2038.960] And upon autopsy, they noted that the severity of Alzheimer's disease pathology in her brain
435
+ [2038.960 --> 2043.920] was at around a stage four, suggesting that there was moderate spread of the disease in
436
+ [2043.920 --> 2048.960] her brain, including the areas that Dr. Poseen mentioned that are very important for memory,
437
+ [2048.960 --> 2053.960] namely your hippocampus.
438
+ [2053.960 --> 2060.080] So stemming from this research is the question of how there can be such heterogeneity in clinical
439
+ [2060.080 --> 2065.720] outcome among individuals have a pretty similar degree of pathology in their brain.
440
+ [2065.720 --> 2071.160] So importantly, I think it's an important fact to highlight that what we see under a microscope
441
+ [2071.160 --> 2075.320] does not always reflect what we see in everyday life.
442
+ [2075.320 --> 2079.880] It's not necessarily a one-to-one correspondence.
443
+ [2079.880 --> 2085.360] So when you examine individuals with the same severity of Alzheimer's disease in their brain,
444
+ [2085.360 --> 2091.880] some may show Alzheimer's disease related to dementia, and some may be clinically normal
445
+ [2091.880 --> 2094.360] with no dementia.
446
+ [2094.360 --> 2096.520] And so the question really is, why is this?
447
+ [2096.520 --> 2102.000] And how can we tip the scales towards clinically normal with no dementia?
448
+ [2102.000 --> 2105.120] All right.
449
+ [2105.120 --> 2110.720] And one theory that has led to an influx of research on cognitive exercise and training is
450
+ [2110.720 --> 2113.520] the theory of cognitive reserve.
451
+ [2113.520 --> 2119.200] And cognitive reserve was proposed by Dr. Yaakov Stern, who's at Columbia University,
452
+ [2119.200 --> 2124.520] and he developed this idea to account for the disparity between the degree of pathology
453
+ [2124.520 --> 2127.960] someone has in their brain and their clinical presentation.
454
+ [2127.960 --> 2129.760] So what is cognitive reserve?
455
+ [2129.760 --> 2135.160] It really relies on the idea that there are individual differences in how tasks are processed
456
+ [2135.160 --> 2141.720] that permit some people to cope better than others with brain changes, brain pathology,
457
+ [2141.720 --> 2144.000] damage or degeneration.
458
+ [2144.000 --> 2149.920] So in the face of aging, or even Alzheimer's disease pathology, a brain with higher cognitive
459
+ [2149.920 --> 2156.480] reserve may try to cope with impending changes by using pre-existing cognitive strategies
460
+ [2156.480 --> 2163.040] more efficiently, or they may flexibly use different strategies for the same task.
461
+ [2163.040 --> 2167.960] So cognitive reserve is really hard to measure because in many ways it's a theoretical construct.
462
+ [2167.960 --> 2173.520] So we can't measure it the same way that we measure plaques and tangles in the brain.
463
+ [2173.520 --> 2179.400] Because of that, researchers often rely on proxy measures to assess cognitive reserve.
464
+ [2179.400 --> 2184.360] So this would be something like educational attainment, how far you went in school, your
465
+ [2184.360 --> 2192.960] occupation, your mental activities, which is sort of a nebulous term, and your IQ.
466
+ [2192.960 --> 2198.240] So consistent with what we saw in the Nun Study, this also suggests that individuals with more
467
+ [2198.240 --> 2203.600] cognitive reserve may be able to tolerate or handle greater amounts of damage to the
468
+ [2203.600 --> 2206.920] brain before clinical impairment is evident.
469
+ [2206.920 --> 2212.520] So I think the figure here illustrates this model nicely as it shows that at the same level
470
+ [2212.520 --> 2217.720] of brain pathology, individuals with higher cognitive reserve are performing much better
471
+ [2217.720 --> 2219.720] on the same tasks.
472
+ [2219.720 --> 2223.600] So an alternative way to look at this is that individuals with higher cognitive reserve
473
+ [2223.600 --> 2228.720] only start to approximate the lower levels of performance when they have more pathology
474
+ [2228.720 --> 2231.120] in their brains.
475
+ [2231.120 --> 2235.440] And there's been a tremendous amount of support for the benefits of high cognitive reserve.
476
+ [2235.440 --> 2240.760] And I think what's nice about this conceptualization is that it's an active model.
477
+ [2240.760 --> 2245.720] So it doesn't assume that you need a certain amount of change to your brain before you start
478
+ [2245.720 --> 2248.120] to show difficulties in everyday life.
479
+ [2248.120 --> 2253.280] And instead it focuses on the processes that actually allow individuals to experience
480
+ [2253.280 --> 2258.840] these changes and still maintain a similar level of function.
481
+ [2258.840 --> 2263.000] What's also helpful I think about the recent data is that even late stage interventions
482
+ [2263.000 --> 2266.720] to improve cognitive reserve look promising.
483
+ [2266.720 --> 2270.000] So that could ultimately delay or even prevent dementia.
484
+ [2270.000 --> 2273.960] And this is, I think, really intimately related to that concept of plasticity, which
485
+ [2273.960 --> 2278.520] really relates to the brain's ability to modify its structure and its function in light
486
+ [2278.520 --> 2281.720] of new experiences that we have.
487
+ [2281.720 --> 2284.240] All right.
488
+ [2284.240 --> 2289.560] And I think a natural extension of this topic is that of cognitive exercise and brain
489
+ [2289.560 --> 2291.080] games.
490
+ [2291.080 --> 2295.640] So translating the cognitive reserve and plasticity research into interventions has been
491
+ [2295.640 --> 2298.240] kind of a difficult process, I would say.
492
+ [2298.240 --> 2303.840] And there is an extensive scientific literature that is messy and difficult to interpret.
493
+ [2303.840 --> 2310.100] So brain games, Sudoku and intellectual engagement have been heavily fed to the media as this
494
+ [2310.100 --> 2312.920] sort of ultimate panacea for cognitive decline.
495
+ [2312.920 --> 2317.760] I'm sure most people here have seen these in the mainstream media before.
496
+ [2317.760 --> 2323.600] And I should say that some of this has occurred without research supporting it.
497
+ [2323.600 --> 2326.240] So what do we know about these things?
498
+ [2326.240 --> 2330.120] There's been some encouraging results in terms of leisure activities that were reported
499
+ [2330.120 --> 2332.000] in the last couple of years.
500
+ [2332.000 --> 2337.520] And they've shown that people who have high rates of intellectual leisure activity, which
501
+ [2337.520 --> 2346.200] they define as things like reading books, going out to operas, playing games at home,
502
+ [2346.200 --> 2351.880] playing cards, taking a new class, that all of these were protective.
503
+ [2351.880 --> 2356.840] And that individuals who did this had cognitive decline that started much later in life than
504
+ [2356.840 --> 2361.080] individuals who did not report doing these activities.
505
+ [2361.080 --> 2365.560] And so we're still a little unclear about the mechanisms by how this works.
506
+ [2365.560 --> 2372.240] But there has been some promising evidence to suggest that even intellectual leisure activities
507
+ [2372.240 --> 2374.320] might be helpful.
508
+ [2374.320 --> 2379.880] Similarly, there's been a lot of buzz and rightfully so about cognitive interventions.
509
+ [2379.880 --> 2381.800] These findings have also been mixed.
510
+ [2381.800 --> 2386.440] And with some studies demonstrating a lot of benefit and then some studies showing absolutely
511
+ [2386.440 --> 2387.440] no benefit.
512
+ [2387.440 --> 2392.400] And I think this is where being an educated consumer is critically important, particularly
513
+ [2392.400 --> 2397.520] given the sheer volume of brain games that are being marketed to the mainstream public.
514
+ [2397.520 --> 2402.120] So on one hand, we have studies that have shown no benefit from brain games.
515
+ [2402.120 --> 2406.840] And by no benefit, I mean that when individuals are trained on these tasks, they do get better
516
+ [2406.840 --> 2407.840] on these tasks.
517
+ [2407.840 --> 2412.720] But that it's not generalizing to other things, to other important activities in someone's
518
+ [2412.720 --> 2413.720] life.
519
+ [2413.720 --> 2420.360] So for example, in a study reported in Nature back in 2010, researchers randomly assigned
520
+ [2420.360 --> 2425.520] over 4,000 people to two different experimental groups where they were being trained on things
521
+ [2425.520 --> 2430.440] like memory tasks or reasoning tasks and then a control group.
522
+ [2430.440 --> 2435.200] And they completed training sessions over a period of six weeks.
523
+ [2435.200 --> 2439.960] And while again, like I said, they show significant improvement in the tasks that they trained
524
+ [2439.960 --> 2444.440] on, they did not show any transfer of benefit from that.
525
+ [2444.440 --> 2446.000] And that's really what you want.
526
+ [2446.000 --> 2449.440] You really want to have transfer of benefits to other things in your life for these to be
527
+ [2449.440 --> 2450.760] most meaningful.
528
+ [2450.760 --> 2452.320] So that's one side of the coin.
529
+ [2452.320 --> 2456.840] I think on the other side, what we're seeing is in the last couple of years, there have
530
+ [2456.840 --> 2459.680] been very encouraging studies that have been coming out.
531
+ [2459.680 --> 2465.040] And I think what's different about them is that they are training people on very targeted
532
+ [2465.040 --> 2466.440] cognitive processes.
533
+ [2466.440 --> 2472.040] So they're very specific about what they're training the individual on.
534
+ [2472.040 --> 2479.400] And it seems like that's probably most important in terms of reaping the cognitive benefits.
535
+ [2479.400 --> 2481.160] So the findings are promising but mixed.
536
+ [2481.160 --> 2485.440] And on the positive side, I want to give you an example from some studies that are
537
+ [2485.440 --> 2487.000] occurring at UCSF.
538
+ [2487.000 --> 2491.880] And these are going to be talked about a little bit later in the interactive portion of
539
+ [2491.880 --> 2493.280] the night.
540
+ [2493.280 --> 2499.200] So this is Dr. Adam Gazzaley's lab, and Dr. Anguera, who also works with him and has really
541
+ [2499.200 --> 2503.160] spearheaded some of these studies will be here in the interactive portion to talk with you
542
+ [2503.160 --> 2505.120] about these.
543
+ [2505.120 --> 2510.840] So for these studies, the Gazzaley lab in collaboration with colleagues at LucasArts,
544
+ [2510.840 --> 2514.840] they have developed this game called NeuroRacer.
545
+ [2514.840 --> 2520.120] And as you can see here, I mean, I think it even just looks really exciting when you see
546
+ [2520.120 --> 2521.120] it.
547
+ [2521.520 --> 2526.400] So this is really thinking about cognitive training in the context of single task versus
548
+ [2526.400 --> 2527.400] multitasking.
549
+ [2527.400 --> 2531.600] I think multitasking is something that comes up a lot for people, something that can be
550
+ [2531.600 --> 2533.760] difficult over time.
551
+ [2533.760 --> 2539.960] So in this study, they are doing some pre-testing where they have people come in and they do
552
+ [2539.960 --> 2545.920] some tests on them and then they have people go home with these laptops and they do trainings
553
+ [2545.920 --> 2547.720] at home on this task.
554
+ [2547.720 --> 2554.120] And then they come back in later and do some more testing in the laboratory.
555
+ [2554.120 --> 2559.240] So and these tests are really designed to emulate multitasking in everyday life while
556
+ [2559.240 --> 2562.240] controlling for specific cognitive processes.
557
+ [2562.240 --> 2566.760] And so you can see here what we have is the, there's a single task version and there's
558
+ [2566.760 --> 2567.760] a multitask.
559
+ [2567.760 --> 2573.720] And with the single task, there is a sign that the participant will have to respond
560
+ [2573.720 --> 2574.720] to.
561
+ [2574.720 --> 2578.920] And with the multitask, they will still have to respond to that but they're also driving a car.
562
+ [2578.920 --> 2583.160] And so again to really emulate the kinds of things that we're dealing with on an everyday
563
+ [2583.160 --> 2584.400] basis.
564
+ [2584.400 --> 2587.840] And from this, they calculate a multitasking cost.
565
+ [2587.840 --> 2591.080] So what is the cost to your performance just by multitasking?
566
+ [2591.080 --> 2596.120] And it's a fairly basic calculation that they use there.
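For readers who want the arithmetic spelled out: a common way to express such a cost (this particular formula is an illustration, not quoted from the talk) is the percentage change in performance when the second task is added.

    def multitasking_cost(single_task_score, multitask_score):
        # Percent change in performance when a second task is added;
        # negative values mean performance got worse under multitasking.
        return (multitask_score - single_task_score) / single_task_score * 100.0

    # Hypothetical scores: sign-discrimination performance of 2.0 alone, 1.3 while also driving.
    print(multitasking_cost(2.0, 1.3))  # -35.0, i.e. a 35% multitasking cost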
567
+ [2596.120 --> 2601.520] So using the index shown before, you can see the cost of multitasking increases across
568
+ [2601.520 --> 2603.400] the lifespan.
569
+ [2603.400 --> 2610.240] So in other words, the ability to efficiently handle and respond to multiple sources of information
570
+ [2610.240 --> 2612.120] worsens over an individual's life.
571
+ [2612.120 --> 2617.360] And you can see this actually starts even in your, in your 20s.
572
+ [2617.360 --> 2623.920] Now, these results start to look very different upon providing multitasking training.
573
+ [2623.920 --> 2631.200] So specifically, individuals who did not receive any training remained at around the same
574
+ [2631.200 --> 2634.480] level a month later.
575
+ [2634.480 --> 2639.520] Individuals who obtained the single task training at home, so again, they were trained on just
576
+ [2639.520 --> 2642.960] responding to those signs without actually driving the car.
577
+ [2642.960 --> 2645.200] Their performance, it looks a little bit better.
578
+ [2645.200 --> 2648.320] This is not statistically significant.
579
+ [2648.320 --> 2653.600] And then those who were trained on the multitasking component, you see this striking difference.
580
+ [2653.600 --> 2657.720] And striking improvement in terms of how much cost there is there.
581
+ [2657.720 --> 2663.960] And most importantly, what we see is that these actually hold over time.
582
+ [2663.960 --> 2670.200] And so after a period of about six months, you're still seeing much better improvement with
583
+ [2670.200 --> 2674.800] individuals that were trained on the multitasking component.
584
+ [2674.800 --> 2679.600] So again, Dr. Anguera will be demonstrating the latest version of these games on an iPad
585
+ [2679.600 --> 2683.200] during the interactive portion of the night.
586
+ [2683.200 --> 2687.000] All right.
587
+ [2687.720 --> 2696.000] Okay, so just to briefly review the cognitive engagement and brain game section of this,
588
+ [2696.000 --> 2700.560] just want to say that in terms of plasticity and cognitive reserve, I think there's really
589
+ [2700.560 --> 2704.800] strong evidence that our brains continue to change and adapt.
590
+ [2704.800 --> 2707.120] That's part of what plasticity is.
591
+ [2707.120 --> 2712.040] And research, I think, has really uncovered a lot of protective and risk factors for this.
592
+ [2712.040 --> 2716.120] And in terms of the actual brain games, I think, as I said, there's a lot of new research
593
+ [2716.120 --> 2721.880] suggesting that if it's targeting specific cognitive processes, that's the most helpful.
594
+ [2721.880 --> 2726.440] And this is, I think, a really promising area of research, but also requires a critical
595
+ [2726.440 --> 2732.960] eye and thinking about the fact that not all of these games have a lot of research behind
596
+ [2732.960 --> 2733.960] them.
597
+ [2733.960 --> 2737.080] So being an educated consumer about this, I think, is one of the most important facets
598
+ [2737.080 --> 2739.440] of it.
599
+ [2739.440 --> 2747.440] So something that I am increasingly excited about is the role of physical exercise in brain
600
+ [2747.440 --> 2748.440] health.
601
+ [2748.440 --> 2754.280] And in particular, how exercising might actually improve cognition and potentially delay
602
+ [2754.280 --> 2757.800] or even prevent dementia.
603
+ [2757.800 --> 2760.720] So what have studies shown?
604
+ [2760.720 --> 2765.960] In general, what we've shown is that what's good for your heart is good for your brain.
605
+ [2765.960 --> 2771.960] So individuals who participate in physical activity, particularly aerobic activity,
606
+ [2771.960 --> 2777.760] they've shown in various studies that you could have up to 30% reduction in the risk of
607
+ [2777.760 --> 2783.000] cognitive decline and dementia, which I think is a very striking and exciting finding.
608
+ [2783.000 --> 2786.440] Because it's something that we can do something about at any stage.
609
+ [2786.440 --> 2795.480] And in particular, just to answer that question, Kristine Yaffe and Dr. Middleton at both UCSF
610
+ [2795.480 --> 2800.200] and the San Francisco VA have conducted studies trying to answer the question, does it matter
611
+ [2800.200 --> 2802.080] when you become physically active.
612
+ [2802.080 --> 2808.160] So is it too late to start if I wasn't doing any sort of activity as a teenager, is it
613
+ [2808.160 --> 2810.960] too late to start in middle or late life?
614
+ [2810.960 --> 2817.080] So what they found was that women who reported that they had been physically active, particularly
615
+ [2817.080 --> 2821.600] during their teenage years, showed the lowest likelihood of cognitive impairment.
616
+ [2821.600 --> 2823.920] So they seem to be the most protected.
617
+ [2823.920 --> 2830.120] However, individuals who became active later in life also showed a reduced risk of developing
618
+ [2830.120 --> 2831.640] cognitive impairment.
619
+ [2831.640 --> 2836.160] So even though it seems like a lifetime of physical activity is most helpful, people are
620
+ [2836.160 --> 2841.440] reaping benefits from this even if they start late in life to become physically active.
621
+ [2841.440 --> 2847.400] So it seems to be a really critical component to brain health.
622
+ [2847.400 --> 2852.280] So in addition, that was more of what we call an epidemiological study on cognitive decline.
623
+ [2852.280 --> 2856.360] So what if we turn our attention to what the brain looks like in those who are physically
624
+ [2856.360 --> 2857.680] active?
625
+ [2857.680 --> 2862.960] So research with animal models has shown that a molecule in the brain called brain-derived
626
+ [2862.960 --> 2869.440] neurotrophic factor, or BDNF, is critical for neuron health and is really important for
627
+ [2869.440 --> 2872.240] plasticity or synapses.
628
+ [2872.240 --> 2878.360] And exercise has been shown to have a really robust effect on BDNF levels in the brain.
629
+ [2878.360 --> 2884.480] So in this case, if you have rats run on a wheel for as little as a week, what you can
630
+ [2884.480 --> 2891.040] see is that they have nearly a one and a half-fold increase in BDNF expression in their hippocampus.
631
+ [2891.040 --> 2893.440] And these effects were also still noted.
632
+ [2893.440 --> 2897.360] They were still raised three months later in these animals.
633
+ [2897.360 --> 2902.960] So you can see here that there's this induction of BDNF in various parts of the hippocampus.
634
+ [2902.960 --> 2904.440] So that's the dentate gyrus.
635
+ [2904.440 --> 2908.880] This is the CA3 and CA1 regions.
636
+ [2908.880 --> 2913.400] If we kind of try to take that literature from animals and apply it to humans, we're also
637
+ [2913.400 --> 2915.480] starting to see some really exciting results.
638
+ [2915.480 --> 2918.320] And this study came out in the last year.
639
+ [2918.320 --> 2925.580] And this was looking at 120 adults who were randomized to either a walking group or a
640
+ [2925.580 --> 2928.000] stretching and toning group.
641
+ [2928.000 --> 2932.180] And these groups were completely identical except that the walking group participated
642
+ [2932.180 --> 2938.460] in moderate intensity walking for about 30 to 45 minutes per day, three times per week.
643
+ [2938.460 --> 2943.180] So both groups received the same amount of social interaction and health instruction.
644
+ [2943.180 --> 2946.780] So they really controlled for a lot of variables here.
645
+ [2946.780 --> 2952.540] And then brain MRI scans were conducted before randomization, after six months, and again
646
+ [2952.540 --> 2955.500] after the completion of the one-year trial.
647
+ [2955.500 --> 2959.100] So if you can see here that what they were really focusing on was the hippocampus.
648
+ [2959.100 --> 2963.940] The hippocampus is a very metabolically active area that seems to be very sensitive to plasticity
649
+ [2963.940 --> 2966.980] and where there's been probably the most research in terms of plasticity.
650
+ [2966.980 --> 2970.620] So that's where they were really focusing on.
651
+ [2970.620 --> 2975.100] The caudate and the thalamus were also regions that they looked at, more as control areas.
652
+ [2975.100 --> 2979.740] So with the hippocampus, what they noted was that for the individuals that were in the
653
+ [2979.740 --> 2985.940] stretching and toning group, they had about a 1.5% decline in their hippocampal volume over
654
+ [2985.940 --> 2986.940] the one-year.
655
+ [2986.940 --> 2990.820] And this is very consistent with normal aging research.
656
+ [2990.820 --> 2995.180] So this is something that we often see when we're following adults over time.
657
+ [2995.180 --> 3002.020] But in contrast, what they found was that individuals who were in this more aerobically active group
658
+ [3002.020 --> 3007.460] that they actually had a 2% increase in the size of the hippocampus, particularly the anterior
659
+ [3007.460 --> 3010.260] part of the hippocampus, over one year.
660
+ [3010.260 --> 3013.260] And this was a significant difference in the two.
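To make those percentages concrete, a percent volume change is just the before/after difference scaled by the baseline volume; the numbers below are invented purely to illustrate the arithmetic.

    def percent_change(volume_before, volume_after):
        # Percent change of a regional volume between two MRI time points.
        return (volume_after - volume_before) / volume_before * 100.0

    print(percent_change(4000.0, 3940.0))  # -1.5, like the stretching-and-toning group
    print(percent_change(4000.0, 4080.0))  #  2.0, like the walking group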
661
+ [3013.260 --> 3017.660] So this is one of the first studies to really robustly show this
662
+ [3017.660 --> 3023.500] in a regimented way.
663
+ [3023.500 --> 3027.860] So these observational studies, along with others, provide considerable support for the
664
+ [3027.860 --> 3034.740] hypothesis that physical activity may reduce the risk of cognitive decline and dementia.
665
+ [3034.740 --> 3035.740] But how does this actually happen?
666
+ [3035.740 --> 3041.140] And I think this is an important question to ask any time we're reading literature about
667
+ [3041.140 --> 3042.140] something new.
668
+ [3042.140 --> 3044.860] What is the possible mechanism behind this?
669
+ [3044.860 --> 3046.620] How could this possibly happen?
670
+ [3046.620 --> 3049.380] How does this confer benefit?
671
+ [3049.380 --> 3053.940] And as you might guess, physical activity is related to lower rates of obesity.
672
+ [3053.940 --> 3057.500] Like I mentioned before, what's good for your heart is good for your brain.
673
+ [3057.500 --> 3062.100] So obesity, particularly in middle age, has been shown to associate significantly with dementia
674
+ [3062.100 --> 3064.260] in later life.
675
+ [3064.260 --> 3068.700] Physical activity is also linked to reduced vascular risks.
676
+ [3068.700 --> 3072.980] So again, anything having to do with your cardiovascular system, blood being innervated
677
+ [3072.980 --> 3078.100] up to your brain, it has significant benefit for any sort of vascular risk factors that
678
+ [3078.100 --> 3079.100] someone might have.
679
+ [3079.100 --> 3085.540] So this could be diabetes, hypertension, cardiovascular disease.
680
+ [3085.540 --> 3090.260] And as I just mentioned before, it also seems to induce BDNF, which is incredibly important
681
+ [3090.260 --> 3092.740] for neuronal function.
682
+ [3092.740 --> 3097.420] And something that's more near and dear to my heart is its relationship to inflammation,
683
+ [3097.420 --> 3100.300] which is something I study in healthy, older adults.
684
+ [3100.300 --> 3105.700] And people who are very physically active seem to have lower levels of inflammation in their
685
+ [3105.700 --> 3106.700] bodies.
686
+ [3106.700 --> 3111.260] And inflammation has been shown to be related to your brain structure.
687
+ [3111.260 --> 3117.060] In particular, what we have shown is that inflammation, people who have higher levels of
688
+ [3117.060 --> 3120.860] inflammation, so they're just healthy people who do not have cognitive impairment.
689
+ [3120.860 --> 3125.740] But if they have higher levels of inflammation, they seem to have lower integrity in the
690
+ [3125.740 --> 3128.140] white matter areas of the brain.
691
+ [3128.140 --> 3133.860] And so you can see here, actually, the white parts here and the green tracts; these are
692
+ [3133.860 --> 3138.020] not exactly what your tracts look like, but they are color coded here.
693
+ [3138.020 --> 3142.300] And you actually have lower integrity in something in particular called the corpus callosum
694
+ [3142.300 --> 3145.340] that connects the two hemispheres of your brain together.
695
+ [3145.340 --> 3147.900] And this seems to be highly related to inflammation.
696
+ [3147.900 --> 3153.220] So people who are physically active seem to have lower levels of inflammation.
697
+ [3153.220 --> 3158.740] So in terms of lower integrity, we use something called diffusion tensor imaging, which basically
698
+ [3158.740 --> 3163.100] looks to see how well water molecules move along a tract.
699
+ [3163.100 --> 3167.420] And if something is really intact, like if you think about any sort of, if you think about
700
+ [3167.420 --> 3171.900] a fiber or anything that's a really intact tract, things should move along very easily.
701
+ [3171.900 --> 3177.620] If it's starting to degrade at all, you will have lower directionality of the water.
702
+ [3177.620 --> 3179.860] You can think about water just starting to spread out.
703
+ [3179.860 --> 3181.260] And so that's how we measure that.
704
+ [3181.260 --> 3186.740] So it's unclear exactly what's degrading, whether it's the outer
705
+ [3186.740 --> 3194.260] sheath of it, but the structure of it doesn't seem to be
706
+ [3194.260 --> 3197.180] quite as intact as it was before.
707
+ [3197.180 --> 3201.140] These white matter tracts are really important for processing information quickly.
708
+ [3201.140 --> 3205.780] So they connect all these different parts in your brain, the gray areas, they connect
709
+ [3205.780 --> 3208.740] these so that you can think more efficiently.
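The usual summary number for that directionality in diffusion tensor imaging is fractional anisotropy; the talk doesn't name it, so treat the sketch below as an illustration rather than the speaker's own analysis. It is computed from the three eigenvalues of the diffusion tensor.

    import numpy as np

    def fractional_anisotropy(l1, l2, l3):
        # FA is near 1 when water diffuses mainly along one direction (an intact tract)
        # and near 0 when it spreads out equally in all directions.
        lam = np.array([l1, l2, l3], dtype=float)
        numerator = np.sqrt(((lam - lam.mean()) ** 2).sum())
        denominator = np.sqrt((lam ** 2).sum())
        return float(np.sqrt(1.5) * numerator / denominator)

    # Hypothetical eigenvalues (in units of 10^-3 mm^2/s):
    print(fractional_anisotropy(1.7, 0.3, 0.3))  # elongated diffusion -> FA around 0.8
    print(fractional_anisotropy(0.8, 0.8, 0.8))  # isotropic diffusion -> FA of 0.0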
710
+ [3208.740 --> 3210.420] All right.
711
+ [3210.420 --> 3216.460] And so just to start to conclude a little bit, what I want to really highlight here is
712
+ [3216.460 --> 3222.160] that based on this evidence with physical activity, I think it's pretty clear that there's
713
+ [3222.160 --> 3226.060] considerable evidence that you can reap the benefits of physical exercise at any age.
714
+ [3226.060 --> 3231.580] And it's actually what we tell our patients the most often, I would say, in our clinics,
715
+ [3231.580 --> 3236.100] is that this is something that you can do at any point in time and it will really benefit
716
+ [3236.100 --> 3240.260] your neuronal health, as well as benefiting cardiovascular health.
717
+ [3240.260 --> 3244.660] And I think there's also ample evidence to suggest that exercise really reduces vascular
718
+ [3244.660 --> 3251.020] risk factors, obesity, and inflammatory markers, and may alter brain structure as well.
719
+ [3251.020 --> 3255.300] So I think the combination of these, these cognitive training, cognitive exercise and thinking
720
+ [3255.300 --> 3261.860] about physical exercise are two very tightly interwoven facets of how we can improve our
721
+ [3261.860 --> 3265.140] cognitive health over time and hopefully stave off dementia.
722
+ [3265.140 --> 3266.140] All right.
723
+ [3267.140 --> 3268.140] All right.
724
+ [3268.140 --> 3271.900] So I just want to thank my colleagues and I really appreciate everyone's attention and
725
+ [3271.900 --> 3275.180] letting me talk to you about this topic and I really look forward to speaking with you
726
+ [3275.180 --> 3277.180] afterwards in the atrium there.
727
+ [3277.180 --> 3283.820] So moving in a slightly different direction, my name is Winston Chiong.
728
+ [3283.820 --> 3287.780] I'm a neurologist and neuroscientist at the Memory and Aging Center.
729
+ [3287.780 --> 3293.380] And I bring a sort of an interdisciplinary perspective in that my PhD was actually in philosophy.
730
+ [3293.380 --> 3298.020] One of my areas of interest is actually in kind of points of contact between philosophy
731
+ [3298.020 --> 3302.380] and other kind of more humanistic disciplines and clinical medicine and neuroscience.
732
+ [3302.380 --> 3307.060] And so what I'll be talking about today is a little bit more speculative, but I'm really
733
+ [3307.060 --> 3311.620] trying to take a look at some points of contact, some recent findings in neuroscience and
734
+ [3311.620 --> 3317.660] how we might use these in connection with some older ideas to think a little bit more about
735
+ [3317.660 --> 3322.060] what makes us kind of uniquely human and kind of what contributes to our sense of self.
736
+ [3322.060 --> 3325.620] So I hope you'll bear with me on that.
737
+ [3325.620 --> 3329.660] So before I talked about the self though, I wanted to start by talking about kind of a
738
+ [3329.660 --> 3334.300] more general principle, which is the idea that brain diseases tell us about how the healthy
739
+ [3334.300 --> 3339.300] brain is organized, that when we pay attention to what goes wrong when something happens
740
+ [3339.300 --> 3344.100] in the brain, that gives us important clues about how things are connected
741
+ [3344.100 --> 3347.020] in normal function.
742
+ [3347.020 --> 3351.340] And one of my favorite examples of this actually comes from this passage from the Bible, which
743
+ [3351.420 --> 3355.180] many of you will already be familiar with, but after tonight, I hope after you leave,
744
+ [3355.180 --> 3357.620] you'll think about it in a slightly different way.
745
+ [3357.620 --> 3362.740] So this is Psalm 137 from the King James Version, and this is after the conquest of Jerusalem
746
+ [3362.740 --> 3364.620] by the Babylonians.
747
+ [3364.620 --> 3368.740] And what's interesting about the Psalm is that it describes two kind of divine punishments
748
+ [3368.740 --> 3374.260] that the speaker would wish upon himself if he would forget about Jerusalem.
749
+ [3374.260 --> 3380.020] And the two punishments are, let my right hand forget her cunning and let my tongue
750
+ [3380.020 --> 3382.340] leave to the roof of my mouth.
751
+ [3382.340 --> 3387.460] And so if you think about this, either of these in its own right would be a very severe punishment.
752
+ [3387.460 --> 3392.100] So for the first one, we're talking about essentially losing the use of the hand that
753
+ [3392.100 --> 3396.780] 90% of us used to do pretty much everything, and we're talking about the loss of the ability
754
+ [3396.780 --> 3398.180] to speak.
755
+ [3398.180 --> 3401.940] And so you might think originally, well, this seems like a bit much.
756
+ [3401.940 --> 3405.180] Why should they both, why should they happen at the same time?
757
+ [3405.180 --> 3409.380] But I think that what's very striking as a neurologist when you read this is actually
758
+ [3409.380 --> 3411.700] that these two problems often come together.
759
+ [3411.700 --> 3415.740] We actually do tend to see people with both of these problems at the same time.
760
+ [3415.740 --> 3420.020] And I'm assuming that the ancient Israelites observed this also.
761
+ [3420.020 --> 3423.340] So to understand why, it helps to take a look at the brain.
762
+ [3423.340 --> 3424.980] So this is a picture of the brain from the left side.
763
+ [3424.980 --> 3429.020] So if you're looking at my left ear, if you could see through my skull, this is what
764
+ [3429.020 --> 3430.540] you'd see.
765
+ [3430.540 --> 3434.220] And I wanted to call your attention to a couple brain regions.
766
+ [3434.220 --> 3439.340] So this region here in yellow is what we might call a motor speech area.
767
+ [3439.340 --> 3444.220] And among other things that's done by this area is basically, it helps us go from words
768
+ [3444.220 --> 3448.460] to the actual movements that you have to make, again, with your lips, your tongue, and
769
+ [3448.460 --> 3449.460] so forth.
770
+ [3449.460 --> 3453.580] And one thing that we don't think about, because we're all fluent speakers of a language,
771
+ [3453.580 --> 3457.620] is what a skillful and coordinated action it is to speak.
772
+ [3457.620 --> 3461.820] Because basically, you're talking about coordinating the movements again of your jaw, your lips,
773
+ [3461.820 --> 3465.580] your tongue, your vocal cords, your breathing to produce each word.
774
+ [3465.580 --> 3468.740] And ordinarily, you don't have to think about how to do that.
775
+ [3468.740 --> 3473.100] And that's partly because the sort of motor program for how to perform all of those actions
776
+ [3473.100 --> 3478.980] correctly is kind of stored on the left side in this sort of yellow region.
777
+ [3478.980 --> 3483.460] Then close by along this red strip, there's another region.
778
+ [3483.460 --> 3486.620] So you may know that the left side of the brain controls the right side of the body, the
779
+ [3486.620 --> 3488.980] right side of the brain controls the left side of the body.
780
+ [3488.980 --> 3493.700] And so along this red strip is a region that controls basically the movements of the
781
+ [3493.700 --> 3494.700] right hand.
782
+ [3494.700 --> 3499.460] So the neurons in this region send signals down to the spinal cord that in turn send
783
+ [3499.460 --> 3502.900] other signals down to the hand and basically control those movements.
784
+ [3502.900 --> 3507.460] And so you can imagine that if something happens to the brain here, that it's likely also
785
+ [3507.460 --> 3509.820] to affect this region and vice versa.
786
+ [3509.820 --> 3513.420] And in fact, if you take a look at the map of the blood supply to the brain, there's
787
+ [3513.420 --> 3517.060] a very important blood vessel that comes up through the neck, comes into the skull, and
788
+ [3517.060 --> 3520.420] basically gives off this branch that supplies this whole region.
789
+ [3520.420 --> 3524.820] So something where it happened like a blood clot, where to migrate or develop here, you
790
+ [3524.820 --> 3528.620] can easily see how it would affect the blood supply to this region of brain.
791
+ [3528.620 --> 3532.660] So this region of brain would be permanently injured, leading to loss of the ability to
792
+ [3532.660 --> 3536.900] speak, as well as loss of the movement of the right hand.
793
+ [3536.900 --> 3541.820] And it's sort of fitting, I think to me, that we talk about this as a divine punishment
794
+ [3541.820 --> 3547.020] or in a theological context because our English word stroke, which is the modern
795
+ [3547.020 --> 3551.580] term we use for this disease when you have a blood clot that blocks this vessel, comes
796
+ [3551.580 --> 3553.620] from the term the stroke of God's hand.
797
+ [3553.620 --> 3554.620] Right?
798
+ [3554.620 --> 3557.900] So this expressed again the idea that this is a sudden devastating loss of neurological
799
+ [3557.900 --> 3558.900] function.
800
+ [3558.900 --> 3562.860] And while the ancient Israelites probably did not know that this is the way things were
801
+ [3562.860 --> 3567.940] connected, we can learn from this observation that has been made for a long period of time
802
+ [3567.940 --> 3570.820] that this is how these parts of the brain are connected.
803
+ [3570.820 --> 3575.900] So that's just an illustration that I like about how we can learn from these brain diseases
804
+ [3575.900 --> 3582.380] about how these things come together, even if we didn't know about the brain itself.
805
+ [3582.380 --> 3587.620] So this is the way we've learned about a lot about how particular parts of the brain
806
+ [3587.620 --> 3588.620] work.
807
+ [3588.620 --> 3592.460] We take what we call these focal lesions, these diseases that affect particular parts
808
+ [3592.460 --> 3598.180] of the brain, so strokes, tumors, Dr. Prasin talked about side effects of brain surgery.
809
+ [3598.180 --> 3602.780] And so we've learned in this way about how these particular parts of the brain are important
810
+ [3602.780 --> 3607.540] for functions like vision, language, memory, our control of movement, our sense of touch
811
+ [3607.540 --> 3609.140] and so forth.
812
+ [3609.140 --> 3613.540] What is kind of a new frontier though in neuroscience, and this is what I want to talk to you about
813
+ [3613.540 --> 3616.900] today, is a little bit more distributed in the brain.
814
+ [3616.900 --> 3617.900] Right?
815
+ [3617.900 --> 3620.340] And that's the question about how do these parts all work together?
816
+ [3620.340 --> 3621.340] Right?
817
+ [3621.340 --> 3623.620] How are these different functions brought together to make us who we are?
818
+ [3623.620 --> 3624.620] Okay?
819
+ [3624.620 --> 3627.780] Because we're not just language, we're not just vision, we're all of these things brought
820
+ [3627.780 --> 3628.780] together.
821
+ [3628.780 --> 3633.260] And then the suggestion I'm going to try to present today is the idea that it's really
822
+ [3633.260 --> 3637.180] the coordinated activity of multiple parts of the brain working together, and there's
823
+ [3637.180 --> 3642.220] something we can really learn about how the brain is organized kind of in this way.
824
+ [3642.220 --> 3647.140] So here we're getting, again, from sort of more hard clinical neuroscience to something
825
+ [3647.140 --> 3652.220] that's a little bit more ineffable, a little bit more intellectual, and philosophical maybe,
826
+ [3652.220 --> 3656.380] and starting to talk about, again, the topic for tonight, which is kind of neuroscience
827
+ [3656.380 --> 3657.780] and the self.
828
+ [3657.780 --> 3662.020] And I think one of the problems that we have as a starting point is just the observation
829
+ [3662.020 --> 3666.980] that when we talk about the self, people use this language in a lot of different ways,
830
+ [3666.980 --> 3672.060] and seem to refer into many different things that might be related, but it's helpful to
831
+ [3672.060 --> 3673.980] think about these differences.
832
+ [3673.980 --> 3677.340] And so one thing that philosophers like to do is to kind of catalog the ways that people
833
+ [3677.340 --> 3679.820] use natural language.
834
+ [3679.820 --> 3684.860] And so if we think about different ways that people speak about their selves, we can identify
835
+ [3684.860 --> 3686.980] maybe hopefully a few themes.
836
+ [3686.980 --> 3691.540] So obviously the self, the idea of the self is very related to ideas of individuality.
837
+ [3691.540 --> 3697.220] So myself, yourself, and also this difference between self and others, right?
838
+ [3697.220 --> 3700.700] There's sort of boundary where I end and the rest of the world begins.
839
+ [3700.700 --> 3704.300] We also see this, you know, in immunology when we talk about, you know, our immune system
840
+ [3704.300 --> 3708.100] is recognizing self versus other.
841
+ [3708.100 --> 3712.620] Another step that's related to this is the idea of reflexivity or reflectiveness that
842
+ [3712.620 --> 3718.300] we think about self-awareness, our ability to take ourselves as kind of the object of
843
+ [3718.300 --> 3720.740] our thought or perception.
844
+ [3720.740 --> 3722.340] And then there's a lot of ethical ideas, right?
845
+ [3722.340 --> 3725.820] So there's an idea of personhood, that there's something special about beings that are
846
+ [3725.820 --> 3727.580] ourselves that have a self.
847
+ [3727.580 --> 3732.300] There's a relationship to identity, and this is the problem that kind of most philosophers
848
+ [3732.300 --> 3734.380] would think about in terms of the problem of self.
849
+ [3734.380 --> 3738.700] And that's kind of almost the problem of what makes you the same person over time, right?
850
+ [3738.700 --> 3745.580] So you're a you today, and there's also you, you know, five days ago or five years ago,
851
+ [3745.580 --> 3749.500] and what's the connection between them that makes it all a continuous shared life, a shared
852
+ [3749.500 --> 3750.500] self?
853
+ [3750.500 --> 3753.900] And finally, you know, there's this ethical idea of autonomy, which again, we go back
854
+ [3753.900 --> 3759.100] to the Greek roots, is really giving a law to yourself, being the kind of being that
855
+ [3759.100 --> 3762.020] you know, can self-legislate in this way.
856
+ [3762.020 --> 3765.460] And so what I'm going to suggest today, when I try to translate this language into sort
857
+ [3765.460 --> 3770.180] of more the language of neuroscience, is that these different senses are related to,
858
+ [3770.180 --> 3774.220] again, more global processes, not things that one particular part of the brain does, but
859
+ [3774.220 --> 3778.260] rather these more general processes that integrate the activity of these different parts
860
+ [3778.260 --> 3780.260] of the brain.
861
+ [3780.260 --> 3785.420] So if we take again our lesson that brain diseases tell us about how the brain is organized,
862
+ [3785.420 --> 3789.980] then one thing that we might look to for inspiration is to think about brain diseases that are
863
+ [3789.980 --> 3794.780] not like strokes or tumors, but brain diseases that affect many different parts of the brain
864
+ [3794.780 --> 3796.460] at the same time.
865
+ [3796.460 --> 3800.380] And I'm going to get into a slightly controversial area, and I hope I don't get myself into
866
+ [3800.380 --> 3806.860] too much trouble, but there is an idea that dementia is a disease that's very threatening
867
+ [3806.860 --> 3809.460] to people's self.
868
+ [3809.460 --> 3811.580] And I say it's controversial.
869
+ [3811.580 --> 3814.660] So you know, there are some people that say that this is something that happens.
870
+ [3814.660 --> 3818.660] So here's a popular book for family members of patients with Alzheimer's disease, and
871
+ [3818.660 --> 3822.100] the title of the book would suggest that yes, this is something that happens, this is
872
+ [3822.100 --> 3826.220] something we see that family members need to know about that patients with Alzheimer's
873
+ [3826.220 --> 3831.060] disease can lose their self in the course of the disease.
874
+ [3831.060 --> 3834.820] But at the same time, you'll have other people who say, no, how could you say that?
875
+ [3834.820 --> 3840.060] The self isn't lost in Alzheimer's disease, the self endures in Alzheimer's disease.
876
+ [3840.060 --> 3844.220] And what I'm going to suggest in part is that some of this controversy reflects actually
877
+ [3844.220 --> 3850.300] different neurobiological and neuroscientific aspects that are related to the self that
878
+ [3850.300 --> 3856.020] might be preserved or might be lost in these different diseases.
879
+ [3856.020 --> 3859.460] So hoping to broker a compromise of sorts in this.
880
+ [3859.460 --> 3864.460] So I'm going to unfortunately introduce a little bit of philosophical jargon, but I hope
881
+ [3864.460 --> 3866.500] it'll be helpful.
882
+ [3866.500 --> 3872.340] Thinking about us as people as agents that have to move around and be effective in the world
883
+ [3872.340 --> 3877.540] and make sense of the world, there are two kinds of problems of sort of integration.
884
+ [3877.540 --> 3881.580] So we talk already about the different functions that these different parts of our brain
885
+ [3881.580 --> 3886.380] do, but they've got to be brought together in a coherent way that allows us to deal with
886
+ [3886.380 --> 3889.100] the world and to be effective in the world.
887
+ [3889.100 --> 3894.100] And two problems in particular that I want to focus on, the first I'll call a problem
888
+ [3894.100 --> 3895.860] of synchronic unity.
889
+ [3895.860 --> 3901.820] And by this, I mean unification of kind of your activity at a given point in time.
890
+ [3901.820 --> 3905.940] And the first observation to make is that at any point in time, there are hundreds of
891
+ [3905.940 --> 3909.100] different things that are all competing for your attention.
892
+ [3909.100 --> 3913.100] So you might be trying to pay attention to what I'm saying, but you might find your mind
893
+ [3913.100 --> 3917.500] wandering to think about what kind of cheese they're going to be serving in the reception
894
+ [3917.500 --> 3923.620] after work or how can I get my hands on one of those brain games.
895
+ [3923.620 --> 3927.020] And in addition, there's also kind of sensory information.
896
+ [3927.020 --> 3933.100] So you're listening to my voice, you're looking at the slides, but you might also be distracted
897
+ [3933.100 --> 3938.140] by an itchy feeling on your leg or the way the tag of your shirt is rubbing against
898
+ [3938.140 --> 3939.980] your neck, things like that.
899
+ [3939.980 --> 3943.500] And your brain is being bombarded by this information all the time.
900
+ [3943.500 --> 3945.700] All of these things are actually being represented in your brain.
901
+ [3945.700 --> 3951.580] These things are happening, but you can't be responding to all of those at the same time.
902
+ [3951.580 --> 3955.820] And similarly, from the point of view of motivation, we all have conflicting aims and desires
903
+ [3955.820 --> 3957.860] that can't all be satisfied at once.
904
+ [3957.860 --> 3962.780] So you came here to learn about the brain, to learn about brain games and so forth, but
905
+ [3962.780 --> 3967.140] you might have also hoped to go to a movie or meet some friends for dinner and so forth.
906
+ [3967.140 --> 3970.780] And we know that we can't satisfy all these different aims and desires at once.
907
+ [3970.780 --> 3974.900] And again, so in both cases, you've got to focus, you have to prioritize and allocate
908
+ [3974.900 --> 3975.900] attention.
909
+ [3975.900 --> 3979.060] And that's just in any given moment.
910
+ [3979.060 --> 3982.940] Then in addition, there are problems that I'll call the problem of diachronic unity.
911
+ [3982.940 --> 3985.860] And that's kind of being coherent, in a sense, across time, right?
912
+ [3985.860 --> 3991.060] Because your life extends far beyond this moment, far beyond this room.
913
+ [3991.060 --> 3993.180] It extends forward and backward in time.
914
+ [3993.180 --> 3997.820] And we all have important plans and projects that extend over the course of our lifetimes
915
+ [3997.820 --> 4002.340] or when we think about things for the sake of our children or important causes we have,
916
+ [4002.340 --> 4007.140] we actually have important projects and plans that extend beyond our own lifetimes.
917
+ [4007.140 --> 4014.020] And so part of being human is kind of the ability to think about yourself extended beyond
918
+ [4014.020 --> 4015.020] the present moment.
919
+ [4015.020 --> 4016.660] You plan for the future.
920
+ [4016.660 --> 4020.620] And then in order to do this, you've also got to be able to recall prior intentions.
921
+ [4020.620 --> 4024.620] So you signed up for this course maybe a week or two ago and then you had to remember
922
+ [4024.620 --> 4027.380] today that today was the day you're going to come.
923
+ [4027.380 --> 4030.620] You've also got to be able to keep track of different things that you do in order to
924
+ [4030.620 --> 4032.180] realize these long-term goals.
925
+ [4032.180 --> 4036.020] So yes, I already did step one and step two and then now I have to think about step three
926
+ [4036.020 --> 4037.620] and step four.
927
+ [4037.620 --> 4042.860] And some psychologists have suggested that one way of thinking about this task that we
928
+ [4042.860 --> 4048.580] all have as human beings is in terms of a faculty they call mental time travel, right?
929
+ [4048.580 --> 4054.940] And so that's kind of the ability to project your perspective and to imagine yourself kind
930
+ [4054.940 --> 4056.860] of in the future or in the past.
931
+ [4056.860 --> 4063.660] And we use this when we recall old experiences, when we think about particularly moving experiences
932
+ [4063.660 --> 4065.540] that we had in life.
933
+ [4065.540 --> 4068.020] But we also use it when we think about the future.
934
+ [4068.020 --> 4072.900] So maybe you've never been to Barcelona before but you'd like to go and you can imagine
935
+ [4072.900 --> 4077.060] yourself walking along Las Ramblas or standing at the base of the Sagrada Família
936
+ [4077.060 --> 4080.620] and looking up at the spires, right?
937
+ [4080.620 --> 4086.500] So it's helpful that I have two problems of the self because there's also two different
938
+ [4086.500 --> 4092.180] forms of dementia that I think might be relevant as disease models that tell us about the
939
+ [4092.180 --> 4095.660] way that, you know, again, we think about the way the brain is organized in health and
940
+ [4095.660 --> 4100.260] we also think about the way that things can go wrong in the case of disease.
941
+ [4100.260 --> 4104.460] And so one of them, all the time, is diseases one that you've already heard a lot about.
942
+ [4104.460 --> 4107.780] One that may be less familiar is this disease called frontotemporal dementia.
943
+ [4107.780 --> 4112.820] I guess those of you who were here last week would have heard a lot about it as well.
944
+ [4112.820 --> 4117.260] But you know, these diseases tell us actually about different brain systems that seem to
945
+ [4117.260 --> 4118.260] be involved.
946
+ [4118.260 --> 4122.020] I would suggest in these different aspects of self-integration, these different kinds
947
+ [4122.020 --> 4124.620] of integrative problems of the self.
948
+ [4124.620 --> 4129.540] So patients with frontotemporal dementia, these patients are very, very unique.
949
+ [4129.540 --> 4134.860] They're very tragic in that they really have an inability to make their actions coherent,
950
+ [4134.860 --> 4137.260] particularly, you know, just even in the moment.
951
+ [4137.260 --> 4139.220] So these patients are often disinhibited.
952
+ [4139.220 --> 4143.940] So these patients are prone to do things like, you know, they might see people in the supermarket,
953
+ [4143.940 --> 4148.180] complete strangers and say that they're fat or that they would like to have sex with
954
+ [4148.180 --> 4149.180] them.
955
+ [4149.180 --> 4152.980] And, you know, one thing I would say is that, you know, these are thoughts that even in
956
+ [4152.980 --> 4157.620] normal people might occur to somebody in the course of their interactions, you know,
957
+ [4157.620 --> 4158.700] kind of being out in the world.
958
+ [4158.700 --> 4161.500] But, you know, we know not to say these things.
959
+ [4161.500 --> 4165.540] And, sadly, these patients don't have that ability anymore.
960
+ [4165.540 --> 4167.260] These patients can be very distractible.
961
+ [4167.260 --> 4171.620] So even when they're focused on a particular goal, they can be easily distracted and so
962
+ [4171.620 --> 4173.300] they wind up doing something else.
963
+ [4173.300 --> 4177.620] They have a certain loss of concern for other people, loss of empathy that might be connected
964
+ [4177.620 --> 4182.860] more broadly to a loss of a sense of kind of the importance of other people.
965
+ [4182.860 --> 4185.220] They tend to perform a lot of compulsive and repetitive movements.
966
+ [4185.220 --> 4189.020] They might tap their leg in a certain way or we've seen patients that rub their skin
967
+ [4189.020 --> 4194.540] raw because they just have a tendency to rub in a certain way or they might make repetitive
968
+ [4194.540 --> 4200.820] kind of vocalizations like, in a certain way, that can be kind of very inappropriate to
969
+ [4200.820 --> 4202.980] the setting.
970
+ [4202.980 --> 4207.060] They kind of overeat so if there's food in front of them, they're likely to eat especially
971
+ [4207.060 --> 4208.060] sweets.
972
+ [4208.060 --> 4210.980] Something I put in gray because it's not part of our formal criteria anymore, but it's
973
+ [4210.980 --> 4213.180] something that we use clinically as a loss of insight.
974
+ [4213.180 --> 4217.340] So these patients seem especially unable to reflect upon kind of the changes in their
975
+ [4217.340 --> 4223.460] personality and the ways that their behavior is affected to other people.
976
+ [4223.460 --> 4227.740] So that's frontotemporal dementia, and then we've also talked about Alzheimer's disease.
977
+ [4227.740 --> 4232.740] And two of the things that we already talked about are that they kind of forget these episodic
978
+ [4232.740 --> 4237.100] memories so they kind of lose the ability to lay down these memory traces and refer
979
+ [4237.100 --> 4238.420] back to them.
980
+ [4238.420 --> 4242.940] And as Dr. Prasin pointed out, they also have trouble even in learning and acquiring
981
+ [4243.020 --> 4244.300] these memories.
982
+ [4244.300 --> 4248.940] These patients are often disoriented in time, so they lose track of the day of the week,
983
+ [4248.940 --> 4251.060] the month, even the year.
984
+ [4251.060 --> 4252.780] And they also have difficulties in navigation, right?
985
+ [4252.780 --> 4255.580] So these patients don't get lost in time but they get lost in space.
986
+ [4255.580 --> 4260.660] So these patients tend to wander or they tend to lose track of where they are.
987
+ [4260.660 --> 4266.180] So one of the things that's important to know about these diseases is that they
988
+ [4266.180 --> 4268.420] don't just strike sort of randomly.
989
+ [4268.420 --> 4270.180] They actually tend to occur in patterns.
990
+ [4271.140 --> 4276.540] They affect different parts of the brain but they're quite repeatable in terms of which
991
+ [4276.540 --> 4279.060] parts of the brain these particular diseases affect.
992
+ [4279.060 --> 4282.380] And so here I have a map in blue.
993
+ [4282.380 --> 4285.940] It might be familiar to those of you who were at Dr. Seeley's talk last week.
994
+ [4285.940 --> 4289.100] But there are certain parts of the brain that are affected in certain parts that are
995
+ [4289.100 --> 4293.060] spared in frontotemporal dementia, and similarly for Alzheimer's disease.
996
+ [4293.060 --> 4297.340] And then what's quite interesting is that when we go back and look in the healthy brain,
997
+ [4297.340 --> 4300.420] we've done a lot of research that looks at these sort of networks that are distributed
998
+ [4300.420 --> 4301.420] across the brain.
999
+ [4301.420 --> 4306.420] So again, not asking about what one particular part of the brain does on its own, but more
1000
+ [4306.420 --> 4309.900] research that's devoted to how these different parts of the brain are connected.
1001
+ [4309.900 --> 4311.780] And so we've identified these networks.
1002
+ [4311.780 --> 4316.580] But when you look at them, there's a significant amount of correspondence between the areas
1003
+ [4316.580 --> 4321.620] of the brain that are affected, that are atrophied and lost in these dementia syndromes
1004
+ [4321.620 --> 4325.140] and these networks that we see even in the healthy brain.
1005
+ [4325.140 --> 4329.060] And we're calling these the salience network and the default network.
1006
+ [4329.060 --> 4333.860] So I should say that while Alzheimer's disease affects the hippocampus, as was mentioned,
1007
+ [4333.860 --> 4337.260] there is actually this broader constellation of brain regions that's also affected in
1008
+ [4337.260 --> 4338.380] this disease.
1009
+ [4338.380 --> 4343.500] So for the salience network, when we look at this network that's affected in frontotemporal
1010
+ [4343.500 --> 4349.540] dementia and we ask, what are we learning about what this network does in healthy people?
1011
+ [4349.540 --> 4354.580] We're finding that it's related to a lot of the functions that you might guess just based
1012
+ [4354.660 --> 4356.820] upon our knowledge of the disease.
1013
+ [4356.820 --> 4361.060] So we know that regions of this network are very important for things like value, what
1014
+ [4361.060 --> 4367.140] value we attach to things, even the value of things like money or relationships, emotion.
1015
+ [4367.140 --> 4371.340] There are nodes of this that are very closely associated with motivation and drive, with
1016
+ [4371.340 --> 4375.140] kind of the will to get up and do things.
1017
+ [4375.140 --> 4380.020] And then even very basic cognitive processes like paying attention and being alert or staying
1018
+ [4380.020 --> 4384.180] on task or all related to this network and are all very closely related to deficits
1019
+ [4384.260 --> 4387.860] that we see in patients with these diseases.
1020
+ [4387.860 --> 4393.140] On the other hand, we know from, again, studies of healthy people that this default network
1021
+ [4393.140 --> 4396.900] is important for things like autobiographical memory and envisioning the future.
1022
+ [4396.900 --> 4400.740] So these are things that we would relate again to this idea of mental time travel.
1023
+ [4400.740 --> 4405.260] A couple other things that the default network seems to be involved with that may be related.
1024
+ [4405.260 --> 4410.060] One has to do again with navigation, with certain kinds of tasks where we have to orient
1025
+ [4410.060 --> 4411.380] ourselves in space.
1026
+ [4411.940 --> 4414.740] The one that's also interesting is adopting other perspectives.
1027
+ [4414.740 --> 4419.180] So imagining myself in your shoes, knowing the things that you know, which might be
1028
+ [4419.180 --> 4422.580] different from the things that I know, this seems to be involved.
1029
+ [4422.580 --> 4428.380] And also mind wandering, which might be actually tapping into some of the memory and envisioning
1030
+ [4428.380 --> 4429.380] the future, right?
1031
+ [4429.380 --> 4433.140] So when your mind wanders, you're often likely to think about maybe something that you're
1032
+ [4433.140 --> 4436.980] doing or a conversation you were having yesterday, or you might find your mind wandering to
1033
+ [4436.980 --> 4439.540] something that you'd like to do in the future.
1034
+ [4439.540 --> 4443.820] And kind of a broader picture that includes the idea of mental time travel that some people
1035
+ [4443.820 --> 4448.700] have proposed is that this default network is involved in engaging in these sort of dynamic
1036
+ [4448.700 --> 4451.540] simulations of possible states of affairs.
1037
+ [4451.540 --> 4455.860] So that when we recall a memory, one thing that we do when we kind of reconstruct that
1038
+ [4455.860 --> 4460.700] experience is that we draw upon things that we've stored in the brain to recreate the
1039
+ [4460.700 --> 4462.580] experience that we had of that memory.
1040
+ [4462.580 --> 4466.300] And that we might use a very similar system when we think about something we're going
1041
+ [4466.300 --> 4467.300] to do in the future.
1042
+ [4467.300 --> 4472.100] And there we're not drawing upon memory per se, but we use a similar system along with
1043
+ [4472.100 --> 4477.100] information that we already know about some future event that allows us to simulate it
1044
+ [4477.100 --> 4478.860] in a similar way.
1045
+ [4478.860 --> 4484.020] So in conclusion, I've talked about two distributed networks, right?
1046
+ [4484.020 --> 4487.580] So again, we're moving beyond thinking about things that any particular region of the
1047
+ [4487.580 --> 4493.020] brain does in isolation, and instead thinking about what the coordinated activity of these
1048
+ [4493.020 --> 4495.660] distributed parts of the brain do together.
1049
+ [4495.660 --> 4500.460] And these networks seem to be very central to the activity of other parts of the brain.
1050
+ [4500.460 --> 4506.060] And I think that when we think about what is the upshot of this for us as human beings,
1051
+ [4506.060 --> 4509.700] that the function of these networks seems to be to give some coherence to our thoughts,
1052
+ [4509.700 --> 4511.900] our motivations, and our actions.
1053
+ [4511.900 --> 4516.780] And my suggestion is that this problem that I've mentioned of synchronic unity of kind
1054
+ [4516.780 --> 4522.500] of being a unified agent at a particular point in time able to kind of deal with all of
1055
+ [4522.500 --> 4527.060] the potentially distracting information, the conflicting desires that we have, and so
1056
+ [4527.060 --> 4531.740] forth, is really something that is served by this salience network.
1057
+ [4531.740 --> 4536.340] And meanwhile, this problem of diachronic unity, of being an agent that's extended over
1058
+ [4536.340 --> 4541.260] time, whose agency can go kind of back and forth beyond the present moment, is something
1059
+ [4541.260 --> 4545.140] that's served in part by this default network.
1060
+ [4545.140 --> 4549.900] And then getting back to the controversy that I mentioned before, if people were to ask
1061
+ [4549.900 --> 4556.860] the question, is the self-lost in dementia, my suggestion would be that to answer this
1062
+ [4556.860 --> 4560.740] question, we really have to distinguish between different kinds of unity that are important
1063
+ [4560.740 --> 4563.620] to being a coherent coordinated self.
1064
+ [4563.620 --> 4567.860] So I think that one thing that we definitely do see in Alzheimer's disease is a loss of
1065
+ [4567.860 --> 4570.380] unity, a loss of self across time.
1066
+ [4570.380 --> 4575.140] So these are patients who have trouble linking one moment to the next.
1067
+ [4575.140 --> 4580.260] And so it can be very, very difficult for these patients to make plans or to rely upon
1068
+ [4580.260 --> 4584.260] their knowledge of past events and being effective in the present and future.
1069
+ [4584.260 --> 4588.460] But we also know that these patients can be very, very present in the moment.
1070
+ [4588.460 --> 4593.220] These patients can be very sensitive to other people's needs and emotions.
1071
+ [4593.220 --> 4598.620] They can respond in very socially appropriate, very graceful ways to all kinds of challenging
1072
+ [4598.620 --> 4599.940] situations.
1073
+ [4599.940 --> 4604.420] And this is, I think, what a lot of people refer to when they say that this is the preserved
1074
+ [4604.420 --> 4607.380] part of the self in Alzheimer's disease.
1075
+ [4607.380 --> 4610.740] Meanwhile, when we see patients with frontotemporal dementia, I think one of the things
1076
+ [4610.740 --> 4616.020] that's very striking about them is this loss of unity and coherence even at a given time.
1077
+ [4616.020 --> 4619.820] So just in a single interaction with one of these patients, you might find that they're
1078
+ [4619.820 --> 4625.100] distracted, that they're emotionally disengaged, they act in somewhat bizarre ways.
1079
+ [4625.100 --> 4629.420] But if they're paying attention, their memory of these past events and their ability to
1080
+ [4629.420 --> 4631.780] project forward and backward can be preserved.
1081
+ [4631.780 --> 4638.060] And so I think that overall, I'd say that when we think about these diseases, we might
1082
+ [4638.060 --> 4643.260] think about different aspects, different tasks of self-integration and see ways that they
1083
+ [4643.260 --> 4645.260] can stay together or come apart.
1084
+ [4645.260 --> 4646.260] Thanks.
transcript/allocentric_ePP0G7FJGPI.txt ADDED
@@ -0,0 +1,317 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ [0.000 --> 3.680] The famous work of Hubel and Wiesel, mostly done with cats.
2
+ [3.680 --> 5.920] Sorry, I realize it's going to upset some people.
3
+ [5.920 --> 8.160] That's just how it was done.
4
+ [8.160 --> 10.640] Cat is lying on a table looking at a stimulus.
5
+ [10.640 --> 15.560] There's an electrode in the cat's visual cortex.
6
+ [15.560 --> 18.440] And so here's the view looking on at the cat.
7
+ [18.440 --> 19.680] He's in an apparatus here.
8
+ [19.680 --> 21.080] There are electrodes there.
9
+ [21.080 --> 22.720] And out here in front of the cat,
10
+ [22.720 --> 24.280] Hubel and Wiesel and their colleagues
11
+ [24.280 --> 27.600] are flashing up light in different shapes
12
+ [27.600 --> 30.800] in different positions in the cat's visual field.
13
+ [30.800 --> 33.200] And recording from neurons.
14
+ [33.200 --> 36.080] OK, so here is an example.
15
+ [36.080 --> 38.360] So this is one of Hubel and Wiesel's movies.
16
+ [38.360 --> 39.680] This is what the cat is seeing.
17
+ [43.960 --> 46.880] So they're flashing a bar of light in front of the cat.
18
+ [46.880 --> 48.800] And what you hear are the action potentials
19
+ [48.800 --> 52.680] of a single neuron in the cat's visual cortex.
20
+ [52.680 --> 55.240] And so they're marking on a piece of paper
21
+ [55.240 --> 58.320] what the receptive field of that cell is.
22
+ [61.240 --> 65.160] See, they keep moving it around.
23
+ [65.160 --> 67.640] And once it gets to that edge, there's the response.
24
+ [76.040 --> 77.160] OK, everybody got the idea.
25
+ [77.160 --> 80.520] That's how you map out a receptive field.
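The mapping procedure is simple enough to caricature in a few lines of code: flash a stimulus at many positions, count spikes at each one, and call the region where firing clearly exceeds baseline the receptive field. The "neuron" below is a made-up Gaussian stand-in, not a model fit to Hubel and Wiesel's recordings.

    import numpy as np

    rng = np.random.default_rng(0)

    def toy_firing_rate(x, y, center=(0.0, 0.0), width=1.0, peak=50.0, baseline=2.0):
        # Hypothetical firing rate (spikes/s): high near the field center, low elsewhere.
        d2 = (x - center[0]) ** 2 + (y - center[1]) ** 2
        return baseline + peak * np.exp(-d2 / (2.0 * width ** 2))

    # Flash the stimulus on a grid of positions and record a spike count at each one.
    positions = np.linspace(-5.0, 5.0, 21)
    counts = np.array([[rng.poisson(toy_firing_rate(x, y)) for x in positions] for y in positions])

    # Positions whose count clearly exceeds baseline are marked as inside the field.
    receptive_field = counts > 10
    print(receptive_field.sum(), "of", receptive_field.size, "positions drove the cell")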
26
+ [80.520 --> 82.920] Now, there's another movie that I'm running low on time.
27
+ [82.920 --> 84.960] So I think I'll just post it on the Stellar site
28
+ [84.960 --> 86.160] and you can look at it offline.
29
+ [86.160 --> 87.640] Where this goes on for minutes.
30
+ [87.640 --> 89.480] And they change the orientation of that bar
31
+ [89.480 --> 91.480] and they do all kinds of stuff and it's pretty cool.
32
+ [91.480 --> 94.200] OK, but I'm not going to take the time to run through it.
33
+ [94.200 --> 98.560] The upshot of that is that what you find in primary visual
34
+ [98.560 --> 101.200] cortex is that neurons have a property
35
+ [101.200 --> 103.640] called orientation selectivity.
36
+ [103.640 --> 106.720] And that means that each neuron likes bars
37
+ [106.720 --> 108.880] of a certain orientation.
38
+ [108.880 --> 111.800] That neuron we just saw liked, what was it, like this.
39
+ [111.800 --> 115.560] Later on in that movie, they present lines like this.
40
+ [115.560 --> 116.320] Bars of light.
41
+ [116.320 --> 117.800] It doesn't like those.
42
+ [117.800 --> 120.080] It likes this.
43
+ [120.080 --> 121.080] OK.
44
+ [121.080 --> 123.280] But you say it doesn't like the means of it.
45
+ [123.280 --> 124.760] It doesn't fire as much.
46
+ [124.760 --> 125.280] Yeah.
47
+ [125.280 --> 128.040] Yeah.
48
+ [128.040 --> 130.040] OK, so here is a depiction of that.
49
+ [130.040 --> 131.600] This is all one neuron.
50
+ [131.600 --> 133.480] The dotted bars are receptive field
51
+ [133.480 --> 136.920] where you have to put stuff to make that neuron fire.
52
+ [136.920 --> 140.280] And here's the firing over time when you put stuff in there.
53
+ [140.280 --> 142.800] And what you see is that this neuron responds
54
+ [142.800 --> 146.720] to bars tilted like this, not bars tilted like that or like that.
55
+ [146.720 --> 149.000] Everybody see that that's orientation selectivity
56
+ [149.000 --> 151.480] of that single neuron?
57
+ [151.480 --> 152.360] OK.
58
+ [152.360 --> 157.160] If you plot that, you smoothly vary the orientation of that bar
59
+ [157.160 --> 158.160] in the receptive field.
60
+ [158.160 --> 160.600] And you measure the firing rate of that neuron,
61
+ [160.600 --> 164.560] you get a curve like this, also showing orientation selectivity.
62
+ [164.560 --> 165.280] Everybody got that?
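That plot of firing rate against bar angle is a tuning curve, and a standard textbook way to sketch one is a Gaussian bump around the preferred orientation, wrapped at 180 degrees; the parameters below are illustrative, not Hubel and Wiesel's data.

    import numpy as np

    def tuning_curve(theta_deg, preferred_deg=45.0, width_deg=20.0, peak_rate=40.0, baseline_rate=3.0):
        # Firing rate (spikes/s) of an idealized orientation-selective neuron.
        # Orientation wraps every 180 degrees: a bar at 10 deg and one at 190 deg are the same stimulus.
        diff = (theta_deg - preferred_deg + 90.0) % 180.0 - 90.0
        return baseline_rate + peak_rate * np.exp(-(diff ** 2) / (2.0 * width_deg ** 2))

    for angle in (0, 25, 45, 65, 90, 135):
        print(angle, "deg ->", round(float(tuning_curve(angle)), 1), "spikes/s")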
63
+ [168.080 --> 169.800] Why would a brain, why would a visual system
64
+ [169.800 --> 171.680] have orientation selectivity?
65
+ [171.680 --> 172.520] What could it be?
66
+ [175.000 --> 178.000] If you were writing up the code to do object recognition,
67
+ [178.000 --> 179.160] would you do this at the front end?
68
+ [182.240 --> 183.440] Why?
69
+ [183.440 --> 185.440] So one way to look at it is this is sort
70
+ [185.440 --> 189.600] of the very primitive beginning alphabet out
71
+ [189.600 --> 191.560] of which we will build shape.
72
+ [191.560 --> 193.360] Remember what the visual system is doing
73
+ [193.360 --> 195.200] is trying to tell you what's out there.
74
+ [195.200 --> 197.880] It's got to somehow go from spots of light hitting
75
+ [197.880 --> 202.880] your retina to Miguel, right, or Stephanie.
76
+ [202.880 --> 203.880] And how's it going to do that?
77
+ [203.880 --> 205.960] We don't know, but it's going to have
78
+ [205.960 --> 209.760] to extract information with some kind of building blocks out
79
+ [209.760 --> 213.120] of which it's going to construct some perceptual representation.
80
+ [213.120 --> 213.320] OK.
81
+ [213.320 --> 216.480] So what we're seeing here is one of the very early stages
82
+ [216.480 --> 219.240] of building up a perceptual representation.
83
+ [219.240 --> 222.480] First, finding the bits that matter in the visual field.
84
+ [222.480 --> 224.080] That's what you do with retinal ganglion cells.
85
+ [224.080 --> 226.320] Find things that change over space and time.
86
+ [226.320 --> 229.360] Next, start getting primitives of shape.
87
+ [229.360 --> 229.600] Right?
88
+ [229.600 --> 231.680] If you're going to describe the shape of an object,
89
+ [231.680 --> 235.680] you need to know how the edges are oriented.
90
+ [235.680 --> 237.200] That's what you see right here.
91
+ [237.200 --> 239.400] This one fires a lot like that.
92
+ [239.400 --> 242.960] It fires a little bit like that, and not at all like that.
93
+ [242.960 --> 244.720] "Not at all," often in the nervous system,
94
+ [244.720 --> 246.080] means a background firing rate.
95
+ [246.080 --> 248.760] The occasional spike that just happens now and then,
96
+ [248.760 --> 251.760] but a much lower firing rate.
97
+ [251.760 --> 252.960] That's what this represents.
98
+ [252.960 --> 256.280] Much more firing to the preferred orientation
99
+ [256.280 --> 259.720] than the non-preferred orientation.
100
+ [259.720 --> 263.720] Now, all of this is sticking electrodes right in, in this case,
101
+ [263.720 --> 267.840] cat V1, and recording the firing rate
102
+ [267.840 --> 271.400] as a function of the orientation of that bar.
103
+ [271.400 --> 272.320] That's cool.
104
+ [272.320 --> 275.200] That seems like a sensible way to do it.
105
+ [275.200 --> 278.360] Is there any way to detect orientation selectivity
106
+ [278.360 --> 281.240] just with behavior?
107
+ [281.240 --> 282.720] It seems like how the hell would you do that?
108
+ [282.720 --> 284.400] We're looking in the middle of the system.
109
+ [284.400 --> 285.960] We're recording from neurons.
110
+ [285.960 --> 287.920] What could we do with behavior that would tell us
111
+ [287.920 --> 292.800] about the orientation selectivity or lack thereof in the brain?
112
+ [292.800 --> 293.440] But there's a way.
113
+ [293.440 --> 297.200] In fact, this was hypothesized way before,
114
+ [297.200 --> 298.080] Hubel and Wiesel.
115
+ [298.080 --> 300.680] Question is, could we discover the same thing?
116
+ [300.680 --> 302.200] Could we discover the idea that there
117
+ [302.200 --> 305.600] are neurons in your visual system tuned to orientations,
118
+ [305.600 --> 307.600] to specific orientations?
119
+ [307.600 --> 310.320] Could we discover that without making a measurement
120
+ [310.320 --> 313.720] from neurons, just measuring behavior?
121
+ [313.720 --> 315.120] We're going to discover it right now.
122
+ [315.120 --> 318.640] This is a slightly weak demo, but I hope it will work.
123
+ [318.640 --> 320.000] OK, so now here's what you need to do.
124
+ [320.000 --> 321.680] First, look here.
125
+ [321.680 --> 323.400] Everybody see nice vertical lines?
126
+ [323.400 --> 324.800] Got that?
127
+ [324.800 --> 329.400] OK, now, your job is to fixate right on that horizontal bar.
128
+ [329.400 --> 331.640] You can move back and forth along the width of the bar,
129
+ [331.640 --> 332.920] but you can't leave the bar.
130
+ [332.920 --> 336.400] And you have to keep fixating for a pretty good while,
131
+ [336.400 --> 337.320] as it's a subtle effect.
132
+ [337.320 --> 339.680] So you're going to have to keep doing this for another 20 seconds
133
+ [339.680 --> 343.120] or so while I fill in airtime.
134
+ [343.120 --> 347.520] And what you're doing now, as you stare at that, hopefully,
135
+ [347.520 --> 351.760] is tiring out those orientation selective neurons
136
+ [351.760 --> 353.960] above and below your visual field.
137
+ [353.960 --> 355.200] Keep fixating there.
138
+ [355.200 --> 358.720] Keep tiring out those neurons.
139
+ [358.720 --> 361.800] And the idea is, if you do that long enough,
140
+ [361.800 --> 364.920] the signal that your brain will be sending up
141
+ [364.920 --> 368.640] to you, the conscious perceiver, wherever that is,
142
+ [368.640 --> 373.520] will be a code in which the representation of those orientations
143
+ [373.520 --> 376.080] has been diminished, because you burn them out.
144
+ [376.080 --> 378.280] You adapted them out.
145
+ [378.280 --> 380.120] OK, keep looking for another few seconds.
146
+ [380.120 --> 383.480] Don't do this yet, but when I say what you'll do
147
+ [383.480 --> 388.120] is you'll shift your gaze over to the horizontal bar
148
+ [388.120 --> 388.880] to the left.
149
+ [388.880 --> 391.400] And it's pretty subtle, but you can tell me if you see anything.
150
+ [391.400 --> 394.720] OK, try shifting now.
151
+ [394.720 --> 396.520] Did it work?
152
+ [396.520 --> 397.120] Did you see?
153
+ [397.120 --> 399.560] Did these guys tilt it a little bit more like that?
154
+ [399.560 --> 400.800] Awesome.
155
+ [400.800 --> 403.800] OK, this is a tilt after effect.
156
+ [403.800 --> 406.480] And isn't that cool, like, right here in this class
157
+ [406.480 --> 409.360] with a projector and a bunch of people,
158
+ [409.360 --> 412.920] we discovered the properties of neurons in your visual system
159
+ [412.920 --> 416.800] just by looking at what you see after you stare at this.
160
+ [416.800 --> 420.000] Does everybody get the gist of why that would happen?
161
+ [420.000 --> 423.960] Think of these pools of neurons in your primary visual cortex
162
+ [423.960 --> 426.200] tuned to each of these different orientations.
163
+ [426.200 --> 428.400] And now what we did was we made you really
164
+ [428.400 --> 430.920] tire out the neurons that like this, or whatever it was.
165
+ [430.920 --> 432.040] Yeah.
166
+ [432.040 --> 433.040] Look at that long enough.
167
+ [433.040 --> 435.920] They adapt, just like retinal ganglion cells adapt.
168
+ [435.920 --> 437.280] OK, those neurons adapt.
169
+ [437.280 --> 438.160] They tire out.
170
+ [438.160 --> 440.480] They're less interested in firing, just like you run a marathon.
171
+ [440.480 --> 441.320] You don't want to run anymore.
172
+ [441.320 --> 443.200] They're done, right?
173
+ [443.200 --> 444.680] And so they are firing less.
174
+ [444.680 --> 447.400] And so the net average orientation
175
+ [447.400 --> 449.200] indicated by the whole pool of neurons
176
+ [449.200 --> 452.000] is shifted in the direction of the other ones,
177
+ [452.000 --> 455.080] because they're kind of taken out of your representation.
178
+ [455.080 --> 456.600] Does that make sense?
179
+ [456.600 --> 458.680] And it gives you an opposite after effect.
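(Not part of the lecture transcript: a small sketch of the population-coding story behind the tilt aftereffect as just described. It assumes a bank of Gaussian-tuned units read out with a population vector; "tiring out" the units near the adapting orientation by reducing their gain pushes the decoded orientation of a vertical test bar away from the adaptor. All parameters are illustrative.)

import numpy as np

prefs = np.arange(0.0, 180.0, 5.0)      # preferred orientations (degrees)

def responses(stim_deg, gain):
    # Gaussian tuning of each unit around its preferred orientation.
    diff = (stim_deg - prefs + 90.0) % 180.0 - 90.0
    return gain * np.exp(-0.5 * (diff / 20.0) ** 2)

def decode(rates):
    # Population-vector readout; angles are doubled because orientation
    # repeats every 180 degrees, then halved again after averaging.
    vec = np.sum(rates * np.exp(1j * np.deg2rad(2.0 * prefs)))
    return (np.rad2deg(np.angle(vec)) / 2.0) % 180.0

gain = np.ones_like(prefs)
print("vertical test before adaptation:", round(decode(responses(90.0, gain)), 1))

# "Tire out" the units tuned near a 75-degree adaptor by reducing their gain.
adapt_diff = (75.0 - prefs + 90.0) % 180.0 - 90.0
gain = gain * (1.0 - 0.5 * np.exp(-0.5 * (adapt_diff / 20.0) ** 2))
print("vertical test after adaptation: ", round(decode(responses(90.0, gain)), 1))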
180
+ [458.680 --> 461.560] OK, I mentioned that just to say that it's kind of cheating
181
+ [461.560 --> 463.120] to record from neurons.
182
+ [463.120 --> 466.240] The really hip thing is to infer what the neurons are doing
183
+ [466.240 --> 468.040] with a nice low-tech method. I'm sort of kidding.
184
+ [468.040 --> 470.760] But it's pretty cool to be able to do this without actually
185
+ [470.760 --> 471.720] recording.
186
+ [471.720 --> 473.760] The coolest thing actually is having both
187
+ [473.760 --> 476.440] to really make it a strong argument.
188
+ [476.440 --> 478.200] OK.
189
+ [478.200 --> 480.480] All right, so this adaptation is sometimes
190
+ [480.480 --> 483.360] called the psychophysicist's microelectrode.
191
+ [483.360 --> 485.720] Psychophysicists are people who do stuff just like this.
192
+ [485.720 --> 488.240] They present visual or sensory stimuli
193
+ [488.240 --> 489.840] and measure behavioral responses.
194
+ [489.840 --> 493.040] And from that, they try to infer how the system works.
195
+ [493.040 --> 495.240] And in this case, they infer the properties of neurons
196
+ [495.240 --> 498.200] just from behavior.
197
+ [498.200 --> 500.680] And there's like a million variations of this.
198
+ [500.680 --> 503.640] OK, so now we know that there's neurons in your visual
199
+ [503.640 --> 504.840] cortex.
200
+ [504.840 --> 507.360] The tilt after effect doesn't tell you where
201
+ [507.360 --> 508.800] in the brain those neurons are.
202
+ [508.800 --> 510.640] It just says somewhere on your processing chain,
203
+ [510.640 --> 513.200] you have neurons that do that and that adapt out.
204
+ [514.000 --> 516.640] You need physiology to tell you where.
205
+ [516.640 --> 519.560] OK, so now we know that there are neurons in your primary
206
+ [519.560 --> 522.560] visual cortex that have orientation selectivity.
207
+ [522.560 --> 525.760] OK, how do you compute that?
208
+ [525.760 --> 528.840] I keep making all this loose talk about how vision is visual
209
+ [528.840 --> 531.560] information processing and you're computing things
210
+ [531.560 --> 533.280] on representations.
211
+ [533.280 --> 535.080] This is actually one of the few cases where
212
+ [535.080 --> 537.920] there's a pretty good idea of how that's actually computed
213
+ [537.920 --> 540.320] in a simple neural circuit.
214
+ [540.400 --> 544.520] So remember that we're going to try to derive this property
215
+ [544.520 --> 548.120] from a simple circuit starting with the properties of retinal
216
+ [548.120 --> 549.520] ganglion cells.
217
+ [549.520 --> 551.040] It's true there's an LGN in between,
218
+ [551.040 --> 554.200] but the LGN responds much like the retinal ganglion cells.
219
+ [554.200 --> 559.240] OK, so this is what Hubel and Wiesel proposed, for which
220
+ [559.240 --> 560.920] there's some evidence and still some dispute
221
+ [560.920 --> 562.960] about exactly how this works.
222
+ [562.960 --> 567.720] But imagine just taking a bunch of those retinal ganglion cells,
223
+ [567.720 --> 570.280] or I'm sorry, lateral geniculate cells
224
+ [570.280 --> 572.080] that behave like retinal ganglion cells.
225
+ [572.080 --> 573.280] Here are four of them.
226
+ [573.280 --> 580.200] Each of them is an on-center off-surround spot detector.
227
+ [580.200 --> 582.680] And if you have them aligned in a row in space,
228
+ [582.680 --> 585.920] that is the receptive fields are aligned, not the cells.
229
+ [585.920 --> 588.920] They respond to different parts of space like this.
230
+ [588.920 --> 593.400] And now you have all of them feed into a V1 cell.
231
+ [593.400 --> 597.320] If it functions as a kind of AND gate, which neurons can do,
232
+ [597.320 --> 601.800] more or less, then this neuron is going to detect bars
233
+ [601.800 --> 603.840] of that orientation.
234
+ [603.840 --> 605.800] Everybody see how that works?
235
+ [605.800 --> 607.800] Nice and simple and low-tech.
236
+ [607.800 --> 611.240] So here's this basic building block in your visual system
237
+ [611.240 --> 613.520] that you can detect indirectly with adaptation
238
+ [613.520 --> 616.320] behaviorally, that you can measure neurally.
239
+ [616.320 --> 621.160] And here we have an idea of how that simple thing is computed.
240
+ [621.160 --> 623.320] We won't be able to do this for, say, face recognition.
241
+ [623.320 --> 624.720] We don't have the circuit for that.
242
+ [624.720 --> 626.800] But for these simple early building blocks,
243
+ [626.800 --> 628.120] there are very sensible circuits that
244
+ [628.120 --> 632.640] can do these first few computations.
245
+ [632.640 --> 635.200] All right, so how's this thing going to behave?
246
+ [635.200 --> 639.640] Let's imagine a row of these, just as before, the same thing.
247
+ [639.640 --> 643.600] But what happens here is if you add up the on-center
248
+ [643.600 --> 647.280] and the off-surround across those neurons aligned like this,
249
+ [647.280 --> 651.680] you will get a receptive field of the primary visual cortex
250
+ [651.680 --> 653.160] neuron that looks like this.
251
+ [653.160 --> 656.240] Everybody see if you can average that, you get this?
252
+ [656.960 --> 660.280] So it has orientation sensitivity as we just described.
253
+ [660.280 --> 663.640] But it's also got these flanking fields here,
254
+ [663.640 --> 666.360] these inhibitory flanking fields here.
255
+ [666.360 --> 671.320] So if you put stimulus A right in the center like that,
256
+ [671.320 --> 675.840] it'll turn on like that, with it turning it on in the middle
257
+ [675.840 --> 679.440] right there, you get an activation.
258
+ [679.440 --> 685.080] If you put in a bar right here, right on top of the inhibitory
259
+ [685.080 --> 690.000] flanker, you're going to get an inhibition in that neuron.
260
+ [690.000 --> 693.960] And if you put it diagonally like C, there's no change,
261
+ [693.960 --> 696.080] because the excitation from the center of the field
262
+ [696.080 --> 699.680] is canceled by the inhibition from the flankers.
263
+ [699.680 --> 701.560] Everybody get that?
264
+ [701.560 --> 704.400] So this is just how these are called simple cells,
265
+ [704.400 --> 707.560] basic orientation selective cells in primary visual cortex.
266
+ [707.560 --> 710.920] That's how they behave and how they're computed
267
+ [710.920 --> 715.920] from the properties of LGN input.
268
+ [715.920 --> 717.360] Makes sense?
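(Not part of the lecture transcript: a minimal sketch of the Hubel-and-Wiesel-style feedforward idea described above. It assumes a few on-center, off-surround difference-of-Gaussians subunits whose receptive-field centers line up along one axis and feed a single "simple cell"; the AND gate mentioned in the lecture is approximated here by a plain linear sum, and all sizes and parameters are made up.)

import numpy as np

N = 41                                   # toy image, N x N "degrees"
yy, xx = np.mgrid[0:N, 0:N] - N // 2

def dog(cx, cy, sigma_c=1.5, sigma_s=3.0):
    # One on-center, off-surround subunit centered at (cx, cy).
    r2 = (xx - cx) ** 2 + (yy - cy) ** 2
    center = np.exp(-r2 / (2 * sigma_c ** 2)) / (2 * np.pi * sigma_c ** 2)
    surround = np.exp(-r2 / (2 * sigma_s ** 2)) / (2 * np.pi * sigma_s ** 2)
    return center - surround

# Four subunits whose centers line up vertically -> an oriented receptive field.
rf = sum(dog(0, cy) for cy in (-9, -3, 3, 9))

def bar(angle_deg, half_width=2):
    # A bright bar through the middle of the image; angle 0 is aligned
    # with the row of subunit centers.
    a = np.deg2rad(angle_deg)
    dist = np.abs(xx * np.cos(a) - yy * np.sin(a))
    return (dist <= half_width).astype(float)

# The summed input is largest for the aligned bar and near zero or negative
# for the misaligned ones, i.e. orientation selectivity from unoriented parts.
for angle in (0, 45, 90):
    drive = np.sum(rf * bar(angle))
    print(f"bar at {angle:3d} deg -> summed input {drive:+.3f}")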
269
+ [717.360 --> 718.520] There's much more to V1.
270
+ [718.520 --> 720.600] There are all other kinds of selectivities,
271
+ [720.600 --> 721.440] and we'll skip all that.
272
+ [721.440 --> 723.440] Here's the basic idea.
273
+ [723.440 --> 726.240] So that's one neuron.
274
+ [726.240 --> 729.560] How are these guys organized spatially across the brain?
275
+ [729.560 --> 732.200] I'm going to go rather quick through a few slides here
276
+ [732.200 --> 735.480] and then get to some more basic facts.
277
+ [735.480 --> 738.960] Turns out that they're clustered together,
278
+ [739.800 --> 742.080] they progress systematically across the cortex.
279
+ [742.080 --> 744.160] So here's a piece of cortex outside the head,
280
+ [744.160 --> 746.640] inside the head, piece of slab of cortex.
281
+ [746.640 --> 748.640] And what you see is if you send an electrode
282
+ [748.640 --> 752.760] along the length of cortex, you see this even smooth progression
283
+ [752.760 --> 756.280] in orientation selectivity.
284
+ [756.280 --> 758.040] So it's not like random cells.
285
+ [758.040 --> 759.480] Right next to a cell that likes this,
286
+ [759.480 --> 761.120] there's a cell that likes that.
287
+ [761.120 --> 765.440] No, they progress smoothly and evenly across the cortex.
288
+ [765.440 --> 768.480] So there's like a little map, a little fine scale map
289
+ [768.480 --> 770.680] of orientation selectivity spatially
290
+ [770.680 --> 773.520] across primary visual cortex.
291
+ [773.520 --> 777.960] These are sometimes called orientation columns.
292
+ [777.960 --> 780.800] And it's another kind of functional organization
293
+ [780.800 --> 784.760] on top of retinotopy, all in the same chunk of cortex.
294
+ [784.760 --> 786.880] So primary visual cortex is getting complicated.
295
+ [786.880 --> 788.080] It isn't just a map.
296
+ [788.080 --> 789.160] It's a map.
297
+ [789.160 --> 791.440] And then on top of that map is a smooth progression
298
+ [791.440 --> 794.520] of orientation happening all over the place.
299
+ [794.520 --> 795.560] OK?
300
+ [795.560 --> 797.520] All right.
301
+ [797.520 --> 798.840] Can we see this with humans?
302
+ [798.840 --> 800.080] OK, I'll do this really fast.
303
+ [800.080 --> 802.680] So here's another study with 7 Tesla,
304
+ [802.680 --> 804.680] super fancy high resolution.
305
+ [804.680 --> 806.960] Here's a little piece through the back of the brain.
306
+ [806.960 --> 809.560] Here's the sulcus between the two hemispheres.
307
+ [809.560 --> 813.760] Here's a piece of V1 in a human subject,
308
+ [813.760 --> 816.000] scanned at 7 Tesla.
309
+ [816.000 --> 818.480] And in fact, it's claimed that you
310
+ [818.480 --> 820.720] can see orientation columns like that
311
+ [820.720 --> 822.120] across the cortex in humans.
312
+ [822.120 --> 823.520] If you have high enough resolution,
313
+ [823.520 --> 826.640] it needs to be down to around a millimeter or less.
314
+ [826.640 --> 829.400] Each of those colors is a preferential response
315
+ [829.400 --> 833.240] to a different orientation.
316
+ [833.240 --> 835.080] This can be shown much better in animals,
317
+ [835.080 --> 838.400] but you can see it here even in humans.
transcript/allocentric_fLaslONQAKM.txt ADDED
@@ -0,0 +1,194 @@
1
+ [0.000 --> 5.220] That is trulyigious.
2
+ [5.280 --> 9.780] That, as allflower in the world,
3
+ [9.860 --> 12.500] has been my passion for Uni,
4
+ [12.580 --> 16.520] you followed the message of God,
5
+ [17.580 --> 24.460] and become everything my God has done to us,
6
+ [25.420 --> 27.460] but the power I stand in this beauty and the powerove
7
+ [27.460 --> 29.880] The things that you attach to yourself,
8
+ [31.640 --> 36.640] a purse, a pen, a fancy car, all these things are communicating.
9
+ [38.200 --> 41.640] How you look at others communicate.
10
+ [43.180 --> 47.680] And all day long, we are communicating non-verbally.
11
+ [49.760 --> 50.660] All day long.
12
+ [52.240 --> 54.940] You can look in on your child as they sleep
13
+ [54.940 --> 57.180] and you can tell if they're having a nightmare
14
+ [57.180 --> 58.980] or they're sleeping soundly.
15
+ [60.480 --> 65.160] As you sit there, and now I'm starting to see some of you,
16
+ [66.980 --> 69.280] you're giving information up,
17
+ [70.560 --> 73.500] even as I'm giving information up.
18
+ [73.500 --> 74.820] You're assessing me.
19
+ [76.960 --> 81.080] If I can speak to you from an anthropological standpoint,
20
+ [81.780 --> 86.080] I am transmitting information about myself,
21
+ [86.420 --> 91.420] my beliefs, the things that I value, even as you are.
22
+ [95.940 --> 98.260] Now that I can see you a little clearer,
23
+ [98.260 --> 101.460] how many of you were dressed by your parents today?
24
+ [101.460 --> 102.780] Raise your hand.
25
+ [103.780 --> 104.600] Wow.
26
+ [108.780 --> 111.780] Spouses, that's okay, your spouse is gonna draw.
27
+ [113.780 --> 117.780] So you chose to dress the way you did,
28
+ [118.780 --> 121.780] even as I chose to dress the way I did.
29
+ [121.780 --> 124.780] They said, well it's Ted Talks, you can dress down.
30
+ [125.780 --> 128.780] I said, you know, I was in the FBI for 25 years.
31
+ [128.780 --> 130.780] I don't know how else to dress.
32
+ [131.780 --> 133.780] It would be such a disappointment.
33
+ [133.780 --> 137.780] It's like on TV they always have suits,
34
+ [137.780 --> 140.780] even when they're walking through the marsh.
35
+ [141.780 --> 143.780] It's true.
36
+ [143.780 --> 146.780] I can't tell you how many crime scenes I went through,
37
+ [146.780 --> 150.780] that ruined, really inexpensive suits.
38
+ [152.780 --> 153.780] But we look good.
39
+ [153.780 --> 155.780] We look good.
40
+ [161.780 --> 163.780] I guess humor is allowed.
41
+ [164.780 --> 170.780] And so all day long, we're making choices.
42
+ [171.780 --> 172.780] We're making choices.
43
+ [172.780 --> 175.780] They're based on culture.
44
+ [177.780 --> 181.780] They're based on peer pressure, on personal preferences.
45
+ [183.780 --> 187.780] And so the things we wear and attach to ourselves
46
+ [187.780 --> 191.780] are transmitting; our bodies are transmitting information.
47
+ [193.780 --> 197.780] And the question that I'm often asked is, well, how authentic is it?
48
+ [201.780 --> 203.780] How authentic is it?
49
+ [203.780 --> 208.780] And as I pondered this, I said, you know what?
50
+ [208.780 --> 214.780] What do we think of the power of nonverbal communication?
51
+ [218.780 --> 222.780] But let's do it by taking the myths out of it
52
+ [222.780 --> 226.780] and plugging in what really has value.
53
+ [226.780 --> 230.780] What really is of value when it comes to nonverbals?
54
+ [231.780 --> 234.780] How many of you have had a bad handshake?
55
+ [237.780 --> 242.780] And normally, of course, now we have the coronavirus.
56
+ [242.780 --> 246.780] I would have you turn to each other and give each other
57
+ [246.780 --> 249.780] a handshake that's really bad.
58
+ [249.780 --> 251.780] But I'm not going to do that.
59
+ [251.780 --> 254.780] I want you to just put your hand in front of you
60
+ [254.780 --> 257.780] and pretend to give someone a bad handshake.
61
+ [257.780 --> 259.780] Ready? Let's do it.
62
+ [259.780 --> 261.780] Let's do it.
63
+ [261.780 --> 263.780] Yeah.
64
+ [263.780 --> 264.780] Good.
65
+ [264.780 --> 267.780] Do you realize the funny faces you make?
66
+ [267.780 --> 270.780] It's like, I didn't ask you to make a funny face.
67
+ [270.780 --> 272.780] And yet you did.
68
+ [272.780 --> 275.780] Why is that?
69
+ [275.780 --> 278.780] Because you're human.
70
+ [278.780 --> 284.780] And humans betray what we feel, what we think,
71
+ [284.780 --> 288.780] what we desire, what we intend,
72
+ [288.780 --> 293.780] what makes us anxious and what we fear.
73
+ [293.780 --> 296.780] And we do it in real time.
74
+ [296.780 --> 299.780] We don't have to wait 20 minutes.
75
+ [299.780 --> 302.780] It happens now.
76
+ [302.780 --> 306.780] And our body language, in a way, it's exquisite
77
+ [306.780 --> 310.780] because there's an area of the brain that is elegant.
78
+ [310.780 --> 314.780] And it's elegant because it takes shortcuts.
79
+ [314.780 --> 317.780] It doesn't think.
80
+ [317.780 --> 322.780] If I bring in a Bengal tiger here and walk it around,
81
+ [322.780 --> 325.780] nobody sits around and waves at it.
82
+ [325.780 --> 330.780] That's like, you know, eat me.
83
+ [330.780 --> 333.780] No. Everybody freezes.
84
+ [333.780 --> 336.780] And that's because of the limbic system.
85
+ [336.780 --> 341.780] This rather primitive area of the brain that reacts to the world
86
+ [341.780 --> 344.780] doesn't have to think about the world.
87
+ [344.780 --> 350.780] And everything that comes from the limbic brain is so authentic.
88
+ [350.780 --> 354.780] You hear a loud noise and you freeze.
89
+ [354.780 --> 355.780] Right?
90
+ [355.780 --> 358.780] What was that?
91
+ [358.780 --> 361.780] You see bad news or you see something on TV
92
+ [361.780 --> 364.780] and you cover your mouth.
93
+ [364.780 --> 366.780] Why is that?
94
+ [366.780 --> 371.780] When the conquistadores arrived in the new world,
95
+ [371.780 --> 377.780] they didn't have any problem finding out who was in authority.
96
+ [377.780 --> 384.780] The same behaviors that they had just left in Queen Isabella's court,
97
+ [384.780 --> 387.780] they saw in the new world.
98
+ [387.780 --> 391.780] They had better clothing and an entourage.
99
+ [391.780 --> 398.780] They didn't have their own show on television, but pretty close.
100
+ [398.780 --> 406.780] All these behaviors are very authentic because the limbic system
101
+ [406.780 --> 409.780] resides within that human brain.
102
+ [409.780 --> 412.780] It's part of our paleo circuits.
103
+ [412.780 --> 419.780] So when we see the furrowed forehead on a baby that's three weeks old,
104
+ [419.780 --> 423.780] we know what this little area called the glabella is telling us.
105
+ [423.780 --> 427.780] Something is wrong. There's an issue.
106
+ [427.780 --> 429.780] When we see the bunny nose, right?
107
+ [429.780 --> 431.780] When you wrinkle the nose.
108
+ [431.780 --> 433.780] Yeah, we know what that means.
109
+ [433.780 --> 435.780] Ooh, I don't like that.
110
+ [435.780 --> 437.780] I don't want that.
111
+ [437.780 --> 440.780] Ooh. Right?
112
+ [440.780 --> 446.780] Did I just say that in public?
113
+ [446.780 --> 452.780] When we squint, we're focusing, but we have concerns.
114
+ [452.780 --> 458.780] Ah, when the eyelids close, you want me to do what?
115
+ [458.780 --> 468.780] And if things are really bad, you want me to talk for 15 minutes.
116
+ [468.780 --> 470.780] Here's what's interesting.
117
+ [470.780 --> 476.780] Children who are born blind, when they hear things they don't like,
118
+ [476.780 --> 478.780] here's the thing:
119
+ [478.780 --> 480.780] They don't cover their ears.
120
+ [480.780 --> 484.780] They cover their eyes. Eyes they've never seen with.
121
+ [484.780 --> 491.780] This is millions of years old.
122
+ [491.780 --> 495.780] Smiles are important.
123
+ [495.780 --> 502.780] Smiles. The lips begin to disappear when we're stressed.
124
+ [502.780 --> 507.780] Most politicians look something like that.
125
+ [507.780 --> 511.780] Right before they're indicted, they look like that.
126
+ [511.780 --> 516.780] Dramatic lip pulls, jaw shifting.
127
+ [516.780 --> 519.780] Covering of the neck.
128
+ [519.780 --> 523.780] You've seen that clutching up the pearls.
129
+ [523.780 --> 529.780] Where's that creep? Oh, he's gone now. He's back.
130
+ [529.780 --> 532.780] But did you know why?
131
+ [532.780 --> 535.780] Large felines.
132
+ [535.780 --> 546.780] We have seen large felines for so long taking down prey that we immediately cover our neck.
133
+ [546.780 --> 555.780] How many of you have been told that you can detect deception by the use of non-verbals?
134
+ [555.780 --> 559.780] I'm here to clear that up.
135
+ [559.780 --> 563.780] When you leave here today, you say, well, I heard that Navarro fellow.
136
+ [563.780 --> 568.780] And he did about 13,000 interviews in the FBI.
137
+ [568.780 --> 572.780] He said there is no Pinocchio effect.
138
+ [572.780 --> 577.780] Not one single behavior indicative of deception.
139
+ [577.780 --> 580.780] Not one.
140
+ [580.780 --> 583.780] And we mustn't propagate that.
141
+ [583.780 --> 588.780] We must not tell people that we can detect they're lying because of behaviors.
142
+ [588.780 --> 590.780] They may be anxious.
143
+ [590.780 --> 592.780] They may be stressed.
144
+ [592.780 --> 595.780] But not deceptive.
145
+ [595.780 --> 598.780] How many of you have been told that if you cross your arms,
146
+ [598.780 --> 601.780] that you're blocking the people away?
147
+ [601.780 --> 603.780] And you say that.
148
+ [603.780 --> 605.780] There's a clinical term for that.
149
+ [605.780 --> 608.780] It's called crap.
150
+ [608.780 --> 612.780] Yeah, I said it.
151
+ [612.780 --> 615.780] Get over it.
152
+ [615.780 --> 618.780] It's crap. It's a self-hug.
153
+ [618.780 --> 620.780] You're comfortable?
154
+ [620.780 --> 621.780] Yeah.
155
+ [621.780 --> 626.780] Where does this nonsense come from?
156
+ [626.780 --> 629.780] I asked the question often.
157
+ [629.780 --> 632.780] You were a spy catcher.
158
+ [632.780 --> 635.780] You use nonverbals every day.
159
+ [635.780 --> 637.780] What do you use it for?
160
+ [637.780 --> 640.780] To make sure people are comfortable.
161
+ [640.780 --> 644.780] To make sure that we are empathetic.
162
+ [644.780 --> 651.780] The only way to be truly empathetic is by understanding nonverbals.
163
+ [651.780 --> 658.780] Carl Sagan, the famous cosmologist, said, who are we?
164
+ [658.780 --> 660.780] What are we?
165
+ [660.780 --> 662.780] You think about that.
166
+ [662.780 --> 667.780] It really takes a smart person to ask that question.
167
+ [667.780 --> 670.780] What are we in this universe?
168
+ [670.780 --> 673.780] And he summed it up this way.
169
+ [673.780 --> 676.780] And I think it's rather exquisite.
170
+ [676.780 --> 678.780] He said, all we are
171
+ [678.780 --> 684.780] is the sum total of our influence on others.
172
+ [684.780 --> 686.780] That's all we are.
173
+ [686.780 --> 689.780] It's not how much you earn.
174
+ [689.780 --> 691.780] It's not how many cars you have.
175
+ [691.780 --> 694.780] It's our influence on each other.
176
+ [694.780 --> 700.780] And what's interesting is that the primary way that we influence each other
177
+ [700.780 --> 703.780] is through nonverbals,
178
+ [703.780 --> 706.780] it's that nice handshake,
179
+ [706.780 --> 708.780] it's a pat on the shoulder,
180
+ [708.780 --> 711.780] it's that touch of the hand,
181
+ [711.780 --> 715.780] it is that behavior that communicates love
182
+ [715.780 --> 721.780] in a way that words simply can't do it.
183
+ [721.780 --> 725.780] When you leave here, you're going to have choices.
184
+ [725.780 --> 727.780] You always have choices.
185
+ [727.780 --> 731.780] You have free agency.
186
+ [731.780 --> 735.780] And one of the things that you should think about is,
187
+ [735.780 --> 739.780] how do I change my nonverbals?
188
+ [739.780 --> 744.780] How do I become that person of influence?
189
+ [744.780 --> 748.780] Because if there's one thing we need in this world,
190
+ [748.780 --> 752.780] it's truly to be more empathetic.
191
+ [752.780 --> 757.780] And so when I see this, it says it all.
192
+ [757.780 --> 760.780] That's why we use nonverbals.
193
+ [760.780 --> 763.780] Because they're powerful.
194
+ [763.780 --> 764.780] Thank you.
transcript/allocentric_gLUcuv2PxuU.txt ADDED
@@ -0,0 +1,7 @@
1
+ [0.000 --> 15.000] Do you mind?
2
+ [15.000 --> 16.000] Who?
3
+ [16.000 --> 17.000] Me?
4
+ [17.000 --> 18.000] Yes, you.
5
+ [18.000 --> 19.000] Do you mind?
6
+ [19.000 --> 20.000] Mind what?
7
+ [20.000 --> 24.000] Learn more at www.9thplanet.org.