
Language learning and development – How neural methods can clarify what we know from behavior alone

March 12, 2024

YCSC Grand Rounds March 12, 2024
Richard Aslin
Senior Research Scientist/Senior Lecturer, Yale Child Study Center


Transcript

  • 00:00OK. Good afternoon, everyone,
  • 00:02and welcome to Grand Rounds.
  • 00:05I'm Sara Sanchez-Alonso.
  • 00:07I'm an Associate Research Scientist
  • 00:09at the Child Study Center.
  • 00:10I joined the Child Study
  • 00:11Center about a year ago,
  • 00:13and I've been conducting research on
  • 00:15language neurodevelopment since then.
  • 00:17And first, before moving on to today's talk,
  • 00:20I want to start with a reminder
  • 00:21that next week we're going to
  • 00:23hear from Doctor David Yan,
  • 00:24and he will be speaking about the Asian
  • 00:27American experience in healthcare.
  • 00:29So we hope to see many of you there.
  • 00:31And now moving on to today's talk,
  • 00:34it is my great pleasure to welcome
  • 00:36you all to today's presentation
  • 00:39featuring Doctor Dick Aslin.
  • 00:41So I've known Dick Aslin
  • 00:43for about 6 years now.
  • 00:44first as a postdoctoral fellow in his lab
  • 00:45and more recently
  • 00:48as a colleague and collaborator
  • 00:49here at the Child Study Center.
  • 00:52And as an introductory note
  • 00:54to his presentation,
  • 00:55I'd like to emphasize a couple of
  • 00:57qualities that make him a very
  • 00:59unique individual and scientist to work with.
  • 01:02So first,
  • 01:03as many of you know,
  • 01:04Dick is a truly remarkable scientist.
  • 01:07He's made ground breaking
  • 01:08contributions to a wide range of
  • 01:10fields including infant perception,
  • 01:12language acquisition,
  • 01:13cognitive neuroscience,
  • 01:14and I've always been inspired
  • 01:17by his curiosity to learn and
  • 01:19to delve deep into new fields.
  • 01:21And indeed, as we'll see in today's talk,
  • 01:23he has a very interdisciplinary
  • 01:26approach to science and he
  • 01:29integrates insights from psychology,
  • 01:31linguistics,
  • 01:31cognitive neuroscience, and
  • 01:33computational modelling.
  • 01:34And he has received a number of awards
  • 01:37for his scientific contributions,
  • 01:39most recently the Atkinson Prize in
  • 01:41Psychological and Cognitive Sciences
  • 01:43by the National Academy of Sciences.
  • 01:46He's also a member of the American
  • 01:48Academy of Arts and Sciences and a
  • 01:50member of the National Academy of Sciences.
  • 01:52And the second quality that I want to
  • 01:54emphasize is that he is and continues
  • 01:57to be an extraordinary role model
  • 01:58as a mentor.
  • 01:59He's able to create a supportive and
  • 02:01nurturing environment for his mentees.
  • 02:03So he has this truly unique quality.
  • 02:05I think that he's able to see the strength
  • 02:07in every individual and helps them grow.
  • 02:10And as a personal note,
  • 02:11I mentioned that as I was transitioning
  • 02:13to his lab as a post doctoral fellow,
  • 02:15I talked to some of his prior
  • 02:17mentees and they could only
  • 02:19say positive things about him.
  • 02:21I couldn't believe it, but you know,
  • 02:24I was convinced this was the way to
  • 02:26go. And fast forward six years later,
  • 02:29I only have positive things to say
  • 02:31about him and I think that it is true.
  • 02:33I think that this is because
  • 02:37he's committed to the growth
  • 02:38and success of his mentees
  • 02:40and truly able to create this
  • 02:42nurturing environment for them.
  • 02:44And he has indeed received
  • 02:46several mentorship awards,
  • 02:48and I want to mention a couple of them.
  • 02:49So in 2015,
  • 02:50he received the Mentor Award for
  • 02:52Lifetime Achievement by the American
  • 02:55Psychological Association that
  • 02:56acknowledges his extraordinary
  • 02:58leadership to increase the participation
  • 03:00of women of all racial and ethnic groups.
  • 03:03And in 2018, he received the honorary
  • 03:06award for Enduring Leadership
  • 03:08by Women in Cognitive Science that
  • 03:11recognizes his important role in
  • 03:13advancing the career of women scientists.
  • 03:17And without further ado,
  • 03:18I invite you to join me in extending
  • 03:20a warm welcome to Doctor Aslin.
  • 03:27Well, thank you so much, Sara. And
  • 03:32it's very easy to mentor people when
  • 03:34they're really outstanding people
  • 03:35and scientists in their own right.
  • 03:37And so I've been really
  • 03:38fortunate to have many,
  • 03:39many talented students and post
  • 03:41docs working with me over the years.
  • 03:44So thanks to the Child Study Center,
  • 03:46Kieran and Linda for inviting
  • 03:48me to give this talk today.
  • 03:51This is kind of an overview talk.
  • 03:52I wanted to give you a flavor for what's
  • 03:55going on in the lab over the past two years.
  • 03:57So here's a road map for today's talk.
  • 04:00I want first of all,
  • 04:01for those of you who are not so
  • 04:03familiar with the methods that
  • 04:04are used to study human infants,
  • 04:05talk about those behavioral methods briefly.
  • 04:08And then give some examples of the
  • 04:10findings that you can obtain from
  • 04:12infants using those behavioral methods.
  • 04:14And then review the neural methods that
  • 04:16are available for use with infants,
  • 04:17which are quite constrained.
  • 04:19And then some findings that have
  • 04:21illustrated how that neural information
  • 04:24can advance our understanding
  • 04:25of the behavioral manifestations
  • 04:27of language development.
  • 04:28And then at the end,
  • 04:30for those of you who are interested
  • 04:32in some practical applications of
  • 04:33this work, allude to some research
  • 04:35that I think has some translational
  • 04:38components to it.
  • 04:39So first of all,
  • 04:40reviewing behavioral methods,
  • 04:41really almost everything we know
  • 04:43about psychological development in
  • 04:45infants has come initially
  • 04:47from behavioral responses such
  • 04:48as crying and facial expressions,
  • 04:50high amplitude sucking,
  • 04:51which has been used to study learning,
  • 04:54reaching and grasping responses.
  • 04:55You know,
  • 04:55Arnold Gesell did the classic
  • 04:57work here at Yale on that and
  • 04:59crawling and walking.
  • 05:01And what drove this increase
  • 05:04in our knowledge over the past 30
  • 05:06or 40 years was capitalizing on
  • 05:08a particular kind of behavior.
  • 05:09which I've reviewed in an article:
  • 05:12it's the looking behavior that
  • 05:14infants exhibit toward different stimuli,
  • 05:16not just visual stimuli but
  • 05:17auditory stimuli as well,
  • 05:19which I will summarize briefly.
  • 05:21So these looking paradigms really
  • 05:23are quite powerful and they apply
  • 05:26in different content domains.
  • 05:27You can measure spontaneous preferences
  • 05:29that infants bring to the laboratory.
  • 05:31You can expose them to stimuli and see
  • 05:34how they become familiarized with them.
  • 05:37You can study learning with these
  • 05:39techniques and you can also study
  • 05:41how they explore their environment
  • 05:43with their visual attention.
  • 05:45And this has been used to measure
  • 05:47all sorts of content domains within
  • 05:49infancy such as sensory thresholds,
  • 05:51visual acuity, for example,
  • 05:54cross modal integration.
  • 05:55It's also been used to study
  • 05:58discrimination and categorization,
  • 05:59such as for faces and speech.
  • 06:02And it's even been used to study what
  • 06:04you might consider sort of higher
  • 06:06level cognitive processes like space, number,
  • 06:08the grammar of languages, and theory of mind,
  • 06:12among others.
  • 06:13So these behavioral methods then
  • 06:15have been deployed to the study of
  • 06:18language development in many different ways.
  • 06:20And I'm just going to summarize
  • 06:223 very briefly.
  • 06:23One is the discrimination of speech contrasts,
  • 06:26the ability of infants to tell
  • 06:28that one speech sound
  • 06:30is different from another speech sound.
  • 06:32The 2nd is statistical learning,
  • 06:34a very rapid form of implicit learning
  • 06:36that infants have quite early in life.
  • 06:39And then spoken word recognition,
  • 06:40which is obviously relevant to
  • 06:42language: understanding the spoken items
  • 06:44that are presented to you and what
  • 06:46they mean in the real world.
  • 06:48So one of the techniques that
  • 06:50employs looking time is called the
  • 06:52head turn preference procedure.
  • 06:54This is a procedure in which the
  • 06:56baby is seated on a parent's
  • 06:58lap inside of a soundproof room,
  • 07:00and there is very uninteresting visual
  • 07:02stimuli, like a blinking light.
  • 07:03And it turns out if you present out of a
  • 07:06loudspeaker adjacent to that blinking light,
  • 07:08an auditory stimulus instance
  • 07:09will look at it for some period
  • 07:11of time before they look away.
  • 07:13Not to measure their preference
  • 07:15for listening to the sound.
  • 07:16It's not about the blinking light,
  • 07:17it's about the sound.
  • 07:20And you can use this to study
  • 07:23perceptual discrimination.
  • 07:23For example,
  • 07:24some sounds will be looked at
  • 07:26longer than other sounds,
  • 07:27or after you've been familiarized
  • 07:29to one class of sounds and
  • 07:30then changed to a new sound,
  • 07:32they will show an increase in their
  • 07:34visual attention to that sound.
  • 07:36And one of the classic findings in
  • 07:38the field, by Janet Werker and Richard
  • 07:40Tees, is that speech stimuli
  • 07:43that are used in a particular language
  • 07:46but are not used in the language
  • 07:49of the infants who are being tested,
  • 07:51so these are non-native speech
  • 07:54contrasts, show an interesting
  • 07:57phenomenon called perceptual narrowing.
  • 07:59That is, 6 and seven month old babies,
  • 08:02even though that is not a native-language
  • 08:04contrast, are nevertheless
  • 08:06able to discriminate it.
  • 08:07But you can see over the
  • 08:09course of several months,
  • 08:10by 12 months of age,
  • 08:11they're essentially unable to
  • 08:14discriminate that.
  • 08:15If you ask whether infants from that native
  • 08:18speaking environment can discriminate it,
  • 08:20the answer is yes,
  • 08:21of course.
  • 08:22So you have this interesting phenomenon
  • 08:25whereby universal properties of
  • 08:26discrimination are present in infants
  • 08:28at six months of age and then they're
  • 08:31kind of winnowed away as a result of
  • 08:33exposure to their native language.
  • 08:35Kind of use it or lose it is the
  • 08:37expression that's used here.
  • 08:39So that's in the domain
  • 08:41of speech discrimination.
  • 08:42But what about how infants acquire
  • 08:45information about new combinations of sounds?
  • 08:48And one of the things that we were
  • 08:51very interested in a number of years
  • 08:53ago is how do infants understand where
  • 08:55one word ends and the next word begins?
  • 08:58Because as you're listening to me
  • 09:00speak and hopefully fluent sentences,
  • 09:02there's no obvious boundary between the
  • 09:04words except at the end of an utterance.
  • 09:07So if you take an example of a set
  • 09:09of sentences that a mother might
  • 09:11speak to an infant like these,
  • 09:13you can ask whether there's statistical
  • 09:16information that defines a word.
  • 09:18So for example,
  • 09:19'bay' followed by 'bee' happens
  • 09:21every time the word 'baby' is spoken.
  • 09:24But those syllable pairs that
  • 09:26span word boundaries, like 'ty
  • 09:29bay' in 'pretty baby,' happen relatively infrequently.
  • 09:30So you have word combinations of
  • 09:33syllables and non word or word
  • 09:36boundary combinations of syllables.
  • 09:38So what we did is we created a synthetic stream
  • 09:40of speech in which there were no pauses.
  • 09:43So there's no person who's taking a pause,
  • 09:46taking a breath at the end of an utterance.
  • 09:47It's just a continuous stream.
  • 09:49Sounds like this.
  • 09:52Oh, unfortunately, no one can hear that.
  • 09:55That's OK. You'll have to imagine that
  • 09:57this is just concatenated speech.
  • 10:00It's continuous, there's no pauses.
  • 10:02And then the question is,
  • 10:05can you extract from that stream
  • 10:07of speech the underlying structure
  • 10:10that's defined statistically?
  • 10:11In fact, in this stream of speech,
  • 10:14it consisted only of these
  • 10:18four 3-syllable words,
  • 10:20and those words were
  • 10:23presented in random order,
  • 10:25but concatenated together as
  • 10:27a continuous stream with no pauses.
  • 10:29So the question is,
  • 10:31can infants extract the structure
  • 10:33by merely listening to it?
  • 10:36And So what you do is present the
  • 10:38stream of speech for two minutes to 8
  • 10:41month olds and then give them a test.
  • 10:44And the test is the following critical test:
  • 10:47they're going to hear a word which
  • 10:49would be one of those triples that's
  • 10:52actually a part of the structure of the
  • 10:55stream versus, on other test trials,
  • 10:57what's called a part word.
  • 10:58So it's the last syllable of one word
  • 11:01and the next two syllables of the next word.
  • 11:03Now that is something that they've heard,
  • 11:06but it has a statistical property
  • 11:09highlighted here: the likelihood
  • 11:11that that particular syllable
  • 11:14transition occurs is 1/3, because there are
  • 11:18four words and each word can be followed
  • 11:20by one of the other three words.
  • 11:22So it's a very subtle probabilistic
  • 11:24relationship that they would have to
  • 11:26extract from this 2 minutes of speech.
  • 11:28And the answer is that they do
  • 11:31discriminate between these words
  • 11:32and part words after only two
  • 11:34minutes of exposure.
  • 11:35And they do so by showing
  • 11:37a novelty preference.
  • 11:37That is,
  • 11:38they listen longer to that slightly
  • 11:41less statistically coherent part word
  • 11:43than to the words themselves.
  • 11:45And this is not unique to language.
  • 11:48We did follow up experiments with musical
  • 11:50tones that had the same underlying
  • 11:52structure and you see the same phenomenon.
  • 11:54We've even done it in the visual domain.
  • 11:56So it's a domain general property.
  • 11:59It's not specific to language,
  • 12:00but obviously language capitalizes on it.
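
To make the transitional-probability statistic described above concrete, here is a minimal Python sketch; the particular syllables, the stream length, and the seed are illustrative assumptions, not the actual stimuli.

```python
from collections import Counter
import random

# Four 3-syllable "words" (illustrative syllables, not the actual stimuli).
words = [["tu", "pi", "ro"], ["go", "la", "bu"],
         ["bi", "da", "ku"], ["pa", "do", "ti"]]

# Concatenate words in random order with no pauses and no immediate repeats.
random.seed(0)
stream, prev = [], None
for _ in range(300):
    word = random.choice([w for w in words if w is not prev])
    stream.extend(word)
    prev = word

pair_counts = Counter(zip(stream, stream[1:]))
syllable_counts = Counter(stream[:-1])

def transitional_probability(s1, s2):
    """P(s2 | s1): how often syllable s2 immediately follows s1."""
    return pair_counts[(s1, s2)] / syllable_counts[s1]

print(transitional_probability("bi", "da"))  # within a word -> 1.0
print(transitional_probability("ku", "pa"))  # across a word boundary -> ~1/3
```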
  • 12:04So that's a learning effect.
  • 12:05We've seen a discrimination
  • 12:07effect and a learning effect.
  • 12:08And now what about recognizing words?
  • 12:11So a canonical example of this
  • 12:13would be two known objects,
  • 12:15that is, objects that have words
  • 12:17that are known by the infant,
  • 12:19apple and ball,
  • 12:20and simply an utterance while they're
  • 12:23faced with these two visual stimuli.
  • 12:26Where's the apple or where's the ball?
  • 12:28And what you might expect is that the
  • 12:30infant would look at the appropriate
  • 12:32referent of that word apple or ball.
  • 12:34And in fact,
  • 12:35that's exactly what you find
  • 12:36at 14 month olds.
  • 12:38You see that when the
  • 12:40word is spoken, they're gonna
  • 12:42move their eyes to the target,
  • 12:44and they're not gonna move
  • 12:45their eyes to the distractor.
  • 12:46It's a highly reliable
  • 12:48effect in 14 month olds.
  • 12:50Moreover, you can teach infants a new word.
  • 12:53So this is an experiment in
  • 12:55which we took two novel objects
  • 12:57that they'd never seen before,
  • 12:58and we held them up in front of the baby
  • 13:01and said the word for that object 10 times.
  • 13:04So it only took like 2 minutes.
  • 13:06And then we did a test using
  • 13:07this very same procedure here,
  • 13:09right, for those novel objects.
  • 13:12And we picked these novel words for the
  • 13:15novel objects during the teaching phase,
  • 13:18so that in one circumstance,
  • 13:20the MEB circumstance,
  • 13:21there is no other word in the child's
  • 13:24vocabulary that sounds like MEB.
  • 13:27We had another group of subjects and
  • 13:29we labeled that object Tog and we
  • 13:31picked that that non word because
  • 13:33it's very similar sounding to a word
  • 13:35that's already in the vocabulary,
  • 13:37the word dog.
  • 13:38And we had another counterbalance
  • 13:40condition where we had shang and gal.
  • 13:42Again, the same logic.
  • 13:43And what we found is that when
  • 13:45we presented these novel objects
  • 13:47with novel words to infants,
  • 13:49they could readily learn them in
  • 13:51the course of just a few minutes,
  • 13:53but only when they didn't have another
  • 13:56word in the vocabulary that sounded like it.
  • 13:59So when we ran that condition,
  • 14:01they failed on that circumstance.
  • 14:02And that's mirroring effects in adults,
  • 14:05where you have a much more
  • 14:07complicated vocabulary,
  • 14:07but it's harder to learn words
  • 14:09that sound alike.
  • 14:12But infants are not slaves to the particular
  • 14:16words that they've heard in the past,
  • 14:19because they can rapidly adjust
  • 14:21how they interpret words based
  • 14:24on the accent of the speaker.
  • 14:26So what we did is we had two
  • 14:28conditions in which infants came into
  • 14:30the lab and they listened to a person
  • 14:33just saying the name of an object.
  • 14:35For example, in the first condition the
  • 14:38talker would speak in a normal accent:
  • 14:41they'd hold up the block
  • 14:43and they'd say block.
  • 14:44In another condition, for other infants,
  • 14:47when they came into the lab a
  • 14:49different talker would hold up
  • 14:51the block and would say black.
  • 14:53And the question then is,
  • 14:55after a brief exposure to the
  • 14:57accent of this this novel talker,
  • 15:01would infants respond appropriately
  • 15:03when that mispronounced word was
  • 15:06used to identify that object?
  • 15:08And so they have a canonical representation
  • 15:09of what that word should sound like.
  • 15:11But now they're getting new information that
  • 15:13this talker speaks a little bit differently.
  • 15:16And the answer is that 18 month olds would
  • 15:19readily look at the object that was labeled,
  • 15:21even when it was labeled by
  • 15:23this incorrect pronunciation,
  • 15:25only in the condition in which they had
  • 15:27heard it used by that particular talker.
  • 15:29And moreover,
  • 15:29even though they had not been
  • 15:32exposed to this word here,
  • 15:33which is the canonical bottle:
  • 15:35even though they had heard
  • 15:38the talker who spoke in the funny
  • 15:40accent calling a block a black,
  • 15:42they had not heard them say the word battle.
  • 15:44When they were tested with the word battle,
  • 15:46they generalized to that.
  • 15:48So they rapidly are able to adapt
  • 15:51their representation of the talker's
  • 15:53spoken words and match them to
  • 15:56objects in the real world.
  • 15:58And finally you can ask,
  • 15:59well what about what you might
  • 16:01call semantic competition?
  • 16:02Is it the case that infants have
  • 16:05semantic categories of objects and
  • 16:07that that influences how readily
  • 16:09they can recognize the spoken
  • 16:11word that refers to that object?
  • 16:13So Elika Bergelson did this really
  • 16:16interesting experiment in which half
  • 16:17of the object pairs were related.
  • 16:19For example, foot and hand, right,
  • 16:21they're kind of body parts, right?
  • 16:23Or juice and milk.
  • 16:24They're both things that you can
  • 16:26drink compared to random pairings
  • 16:28on the right hand side.
  • 16:30And what we found,
  • 16:31even in six month old infants who have
  • 16:33a very, very rudimentary vocabulary,
  • 16:35even at that early age,
  • 16:37they were more readily able to
  • 16:39look to the object that was labeled
  • 16:42when it was in the presence of
  • 16:44an unrelated competitor.
  • 16:46When it was in the presence
  • 16:47of a related competitor,
  • 16:48they had more difficulty,
  • 16:49just as in the previous case
  • 16:51that I referred to,
  • 16:52where if they sound alike,
  • 16:54that is a competition effect.
  • 16:56So it's both at the phonological
  • 16:58level and at the semantic level.
  • 17:00Moreover,
  • 17:01Elika did this heroic study
  • 17:03called the SEEDLingS Project,
  • 17:04in which she studied infants
  • 17:06from 6 to 18 months of age in
  • 17:08the home by using what's called a
  • 17:09head camera.
  • 17:10And it was a clever design at the time.
  • 17:13Technology is much better now to use
  • 17:16two different cameras at the same time.
  • 17:18So there's one camera that's looking
  • 17:20tilted down so you can see what
  • 17:22the baby's holding in their hands,
  • 17:24and another camera that's tilted up
  • 17:26because typically adults are higher
  • 17:28in in posture than in the baby.
  • 17:30So we could see both what the
  • 17:32mother was doing and what the
  • 17:34baby was doing with their hands.
  • 17:36And then the babies were brought
  • 17:37back into the laboratory so we could
  • 17:39look at the relationship between
  • 17:40what they saw in their natural
  • 17:41environment and how they performed on
  • 17:43one of these word recognition tasks.
  • 17:46And what we wanted to know is
  • 17:47what predicts word learning.
  • 17:49So for particular objects in the environment,
  • 17:51what allowed babies to perform
  • 17:52well in the lab?
  • 17:54And the answer is that what was
  • 17:57present in the field of view of
  • 17:59the infant while the mother spoke
  • 18:01a particular word was the best
  • 18:03predictor of them learning.
  • 18:04So it's the joint attention while
  • 18:06they were listening to the word
  • 18:08that the mother was speaking that
  • 18:10had the best prediction for their
  • 18:12performance in the laboratory.
  • 18:14So I'm gonna segue now from
  • 18:16these behavioral results,
  • 18:17which I think are incredibly powerful,
  • 18:19but they have some limitations.
  • 18:20Why would we want to study the brain?
  • 18:23Well,
  • 18:23we can infer that there's some
  • 18:26brain mechanism that must
  • 18:28be controlling the behavior.
  • 18:31And the behavior is,
  • 18:32you know,
  • 18:33it's an existence proof that that
  • 18:35brain mechanism is functioning
  • 18:36at that particular age.
  • 18:38And in invasive neuroscience,
  • 18:40animal studies,
  • 18:41for example our studies of patients,
  • 18:43under some circumstances you can
  • 18:45do things that are just simply
  • 18:47not possible to do with your
  • 18:49typically developing infant for
  • 18:50ethical reasons among others.
  • 18:53So we have to use non invasive
  • 18:56imaging techniques with infants,
  • 18:57and each one of these imaging
  • 19:01techniques has its pros and cons.
  • 19:03So we have EEG,
  • 19:04very easy to record EEG,
  • 19:06very difficult to get clean EEG
  • 19:09signals because of movement artifacts.
  • 19:11We have MEG, which is super expensive,
  • 19:13and there are very few labs that
  • 19:16have infant-friendly MEG systems.
  • 19:17The new facility at 100 College
  • 19:20will be one such place.
  • 19:22We have MRI which is great because
  • 19:24it has exquisite spatial resolution,
  • 19:26not so good temporal resolution.
  • 19:29Lots of complicating factors for
  • 19:31studying infants and young children,
  • 19:32but I'll comment on that in a few minutes.
  • 19:36And NIRS, which is near
  • 19:37infrared spectroscopy,
  • 19:38which has certain advantages in terms
  • 19:40of recording from babies while they're
  • 19:42sort of in naturalistic conditions.
  • 19:44So you have to think about why
  • 19:47would we want to expend a lot
  • 19:49of time and energy studying the
  • 19:51brains of infants when behavior has
  • 19:53revealed so many interesting things.
  • 19:55Well,
  • 19:56I think for me one of the fundamental
  • 19:58reasons for studying the brain,
  • 20:00when I started doing this work 15 years ago,
  • 20:02is that you can imagine some
  • 20:05qualitative change that occurs at the
  • 20:07behavioral level during development.
  • 20:09And, for example,
  • 20:10babies go from crawling to walking.
  • 20:13What allows that to happen?
  • 20:14And it's sort of seductive
  • 20:15to think that, well,
  • 20:16they have this huge qualitative change.
  • 20:19It must be because something
  • 20:21in the brain changed.
  • 20:22Now what is it that changed in the brain?
  • 20:24Maybe there's a new mechanism that
  • 20:26was latent and that suddenly appears.
  • 20:29Or maybe it's the case that
  • 20:30the brain is just noisy.
  • 20:31Well, how would you know that?
  • 20:32You'd have to study the brain, right?
  • 20:35Similarly, what if there's an
  • 20:36absence of qualitative change?
  • 20:38What if it looks like a
  • 20:39just continuous development?
  • 20:40Well, then it's sort of seductive to think,
  • 20:42well, there's really not a
  • 20:44fundamental change in the brain.
  • 20:45It's just getting better.
  • 20:47But you don't know that, right?
  • 20:48The only way you would know: you
  • 20:50could have the same behavior,
  • 20:51but it could be mediated by two different
  • 20:53brain mechanisms at different ages.
  • 20:55And the only way to know that
  • 20:56is to study the brain.
  • 20:56I mean, it seems obvious,
  • 20:57but that's a rationale
  • 20:59for studying the brain.
  • 21:01And another reason which is more
  • 21:03practical is that typically you would
  • 21:05expect the development of the brain to
  • 21:06precede the development of behavior,
  • 21:08right?
  • 21:08Behavior has to be assembled from
  • 21:10a whole series of brain mechanisms,
  • 21:12and therefore if you could find a
  • 21:14brain mechanism that's predictive of
  • 21:16the subsequent behavioral development,
  • 21:18then that allows you not only
  • 21:20to intervene earlier,
  • 21:21but to understand the mechanism
  • 21:23itself that led to the behavior.
  • 21:25So how have these neural methods been
  • 21:28applied to language development?
  • 21:30I'm gonna review some of these really
  • 21:32quickly here.
  • 21:33Phonetic discrimination,
  • 21:34which we've already talked about.
  • 21:36Statistical learning,
  • 21:37which we've already talked about,
  • 21:38Spoken word recognition,
  • 21:39which we've already talked about.
  • 21:41And then talk about some really
  • 21:44interesting work that's being
  • 21:46conducted that uses sort of
  • 21:48modern neuroimaging and machine
  • 21:50learning techniques to understand
  • 21:52the functioning of the brain.
  • 21:54And importantly,
  • 21:55because infants and young children
  • 21:58are not terribly cooperative subjects,
  • 22:00at least not all the time,
  • 22:02using naturalistic viewing conditions,
  • 22:04particularly movie watching as a
  • 22:07way to extend our data collection
  • 22:09from typical behavioral laboratory
  • 22:11experiments where you might get four
  • 22:14or five minutes worth of data to
  • 22:16longer periods of time when we can
  • 22:18make sense of the underlying brain signals.
  • 22:20So the classic EEG or ERP event
  • 22:24related potential approach is to repeat
  • 22:27a stimulus some number of times,
  • 22:28typically dozens of times,
  • 22:31and do some sort of stimulus
  • 22:34manipulation and find a component
  • 22:36in the average waveform that is
  • 22:38indicative of the underlying
  • 22:41process that you believe is
  • 22:43being triggered by the stimuli.
  • 22:45So for example you can do a
  • 22:47classic ERP study in which you show
  • 22:50a mismatch negativity,
  • 22:51a response to the odd stimulus
  • 22:54among a series of stimuli,
  • 22:55and that's been quite powerful.
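
As a concrete illustration of that averaging logic, here is a minimal Python sketch of an oddball design with synthetic data; the trial counts, time base, and the shape of the injected mismatch response are made-up assumptions, not values from any of the studies discussed.

```python
import numpy as np

rng = np.random.default_rng(0)
n_std, n_dev, n_samples = 80, 20, 300      # epochs x time points (1 ms steps)
t = np.arange(n_samples)

# Single-trial EEG: a small evoked bump buried in much larger noise; the
# deviant also carries an extra negativity around 150-250 ms (the "MMN").
evoked = np.exp(-((t - 100) ** 2) / 500.0)
std_epochs = evoked + rng.normal(0, 2.0, (n_std, n_samples))
dev_epochs = (evoked - 0.8 * np.exp(-((t - 200) ** 2) / 800.0)
              + rng.normal(0, 2.0, (n_dev, n_samples)))

# Averaging across repeated trials cancels the noise and leaves the ERP.
erp_std = std_epochs.mean(axis=0)
erp_dev = dev_epochs.mean(axis=0)
difference_wave = erp_dev - erp_std        # the mismatch component
print(difference_wave[180:220].mean())     # negative deflection in that window
```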
  • 22:58It's been used by Pat Kuhl and others
  • 23:00to study phonetic discrimination, the
  • 23:01discrimination of different speech sounds,
  • 23:04showing that native speech sounds
  • 23:07are discriminated better by this
  • 23:10ERP component than
  • 23:12non-native speech sounds,
  • 23:14and moreover that the difference
  • 23:17between native and non-native responding
  • 23:19from the ERP signal is predictive
  • 23:22of their subsequent vocabulary
  • 23:24development in terms of word production.
  • 23:27So it's a converging operation
  • 23:30between the behavioral results and
  • 23:32the underlying brain mechanism.
  • 23:34Within the domain of statistical learning,
  • 23:36an interesting technique that's
  • 23:37been used, again with EEG, is called
  • 23:40frequency tagging and the basic idea is
  • 23:43illustrated here in a visual example.
  • 23:45Let's imagine we're interested
  • 23:47in face discrimination.
  • 23:48Well, what you can do is present a
  • 23:51series of images very rapidly and notice
  • 23:54that every 5th stimulus is a face.
  • 23:58And So what that means is you're
  • 24:00going to get a component in the
  • 24:02EEG that is oscillating at the
  • 24:04rate of each individual stimulus,
  • 24:07which is that fairly high rate
  • 24:09of 6 per second.
  • 24:11But every 5th stimulus,
  • 24:12there's going to be a component
  • 24:14that is specific to faces,
  • 24:16and if that component is present,
  • 24:18then you can conclude that the
  • 24:20faces have been discriminated
  • 24:21from all of the other stimuli
  • 24:23that are presented in the stream.
  • 24:25And this has been used in
  • 24:27a very interesting series of
  • 24:28experiments by Laura Batterink
  • 24:32and colleagues in which they
  • 24:34looked at statistical learning.
  • 24:36These are the same kinds of stimuli
  • 24:38that I described earlier with
  • 24:39regard to behavioral studies.
  • 24:41So these are syllables that
  • 24:43are grouped into triples.
  • 24:45And the question then is,
  • 24:46in the EEG signal,
  • 24:48you're going to see a component at
  • 24:52each individual syllable, right?
  • 24:54That's a relatively high rate.
  • 24:56But the question is,
  • 24:57will you see a component at
  • 24:59the level of the triple which
  • 25:01will be 1/3 of that that rate?
  • 25:03And the answer is yes,
  • 25:05you see a big component at
  • 25:06the syllable frequency,
  • 25:07but you also see a reliable
  • 25:09component at the word frequency,
  • 25:11which tells you that that word
  • 25:13information has been extracted
  • 25:15from this stream of stimuli.
  • 25:16And the nice thing about this is
  • 25:18there's no behavioral component.
  • 25:20It doesn't require looking time.
  • 25:22It's just simply passive
  • 25:23listening to stimuli,
  • 25:25which can have some clinical importance.
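
Here is a minimal Python sketch of the frequency-tagging analysis itself; the sampling rate, syllable rate, and amplitudes are illustrative assumptions, not the actual recording parameters.

```python
import numpy as np

fs = 250.0                        # sampling rate in Hz (assumed)
syllable_hz = 3.3                 # ~300 ms per syllable (assumed)
word_hz = syllable_hz / 3.0       # one word per three syllables
t = np.arange(0, 120, 1 / fs)     # two minutes of synthetic "EEG"

rng = np.random.default_rng(0)
eeg = (np.sin(2 * np.pi * syllable_hz * t)      # response to every syllable
       + 0.4 * np.sin(2 * np.pi * word_hz * t)  # appears only if words are extracted
       + rng.normal(0, 1.0, t.size))            # background noise

spectrum = np.abs(np.fft.rfft(eeg))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
for target in (syllable_hz, word_hz):
    i = np.argmin(np.abs(freqs - target))
    print(f"{target:.2f} Hz peak: {spectrum[i]:.0f}")  # both rise above the noise floor
```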
  • 25:29As I alluded to, there are other techniques
  • 25:33that can be used with both MRI and EEG.
  • 25:36So let me give you an example from MRI first.
  • 25:40So imagine that you're interested in two
  • 25:42different categories of visual stimuli.
  • 25:43This is just a simple example here
  • 25:47from Jim Haxby where you have one
  • 25:50class of stimuli, the bottle class,
  • 25:53and another class of stimuli, the shoe class.
  • 25:55Now you've got a whole set of voxels
  • 25:58in the brain that you can record from,
  • 26:00and what you're doing is you're
  • 26:02looking for a pattern of activation.
  • 26:04You're not looking for a hot
  • 26:05spot in the brain,
  • 26:06but you're looking for a pattern of
  • 26:08activation across a number of voxels that
  • 26:11discriminates reliably between the first
  • 26:14category bottle and the second category shoe.
  • 26:19And so you train the model to look for
  • 26:22that discriminating pair of patterns
  • 26:24and then apply that to novel data that
  • 26:27are not involved in the training set.
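
Here is a minimal Python sketch of that train-then-test logic using scikit-learn and synthetic "voxel" data; the trial counts, voxel counts, and effect size are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 500
labels = np.repeat([0, 1], n_trials // 2)        # 0 = bottle, 1 = shoe

# Each category evokes its own weak spatial pattern across many voxels.
category_patterns = rng.normal(0, 1, (2, n_voxels))
X = category_patterns[labels] * 0.3 + rng.normal(0, 1, (n_trials, n_voxels))

# Cross-validation: the classifier is always tested on trials it never saw.
scores = cross_val_score(LinearSVC(), X, labels, cv=5)
print(scores.mean())   # well above 0.5 -> the pattern separates the categories
```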
  • 26:29And you can employ that very
  • 26:32same technique with EEG.
  • 26:34So instead of looking at voxels,
  • 26:36you can look at patterns of activation
  • 26:38across different channels from
  • 26:40different electrodes on the scalp.
  • 26:42And the additional advantage of EEG is
  • 26:44you can do this at each time point.
  • 26:47So in the MRI example,
  • 26:50you're taking one moment in time and
  • 26:52recording the patterns that are present.
  • 26:55But with EEG,
  • 26:56because it has much better
  • 26:57temporal resolution,
  • 26:58you can do that literally
  • 27:00at every millisecond.
  • 27:01And what you would expect is that
  • 27:03if this pattern is reliable,
  • 27:05then in,
  • 27:05let's say, the first 50 milliseconds, before the
  • 27:08information has even gotten into the brain,
  • 27:10it's going to be at chance,
  • 27:12and then it's gonna grow in amplitude.
  • 27:14That is, you're gonna be able to more
  • 27:16reliably detect those differences.
  • 27:18And then presumably as memory declines,
  • 27:20right, that's gonna fade away.
  • 27:21So you would expect a pattern like this.
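
Here is a minimal Python sketch of that time-resolved decoding scheme, fitting a separate classifier at every time point of synthetic data; the channel counts, the timing of the injected signal, and the classifier choice are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 100, 32, 120    # e.g., 10 ms steps
y = np.repeat([0, 1], n_trials // 2)

X = rng.normal(0, 1, (n_trials, n_channels, n_times))
# A class-specific scalp pattern that ramps up after stimulus onset and fades.
time_course = np.exp(-((np.arange(n_times) - 40) ** 2) / 600.0)
X[y == 1] += rng.normal(0, 1, n_channels)[None, :, None] * time_course * 0.5

# One classifier per time point: accuracy traces out the expected curve.
accuracy = np.array([
    cross_val_score(LogisticRegression(max_iter=1000), X[:, :, i], y, cv=5).mean()
    for i in range(n_times)])
print(accuracy[:5].mean(), accuracy.max())      # near chance early, then a peak
```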
  • 27:24And that's exactly what Laurie Bayet
  • 27:26and colleagues did in our lab,
  • 27:28where they took EEG data from 12 to
  • 27:3115 month old babies viewing
  • 27:328 different visual stimuli,
  • 27:34and asked whether or not there's a
  • 27:37pattern that is uniquely linked to each
  • 27:39one of those eight different stimuli.
  • 27:42In adults, that pattern in the EEG
  • 27:45is definitely present.
  • 27:47You can see here that you're getting
  • 27:49accuracies of discriminating one
  • 27:51of those stimuli, like the dog,
  • 27:53from all of the other stimuli with
  • 27:55an accuracy of about 75% correct.
  • 27:57That means on each trial you can
  • 28:00say with pretty high reliability
  • 28:02that it was a dog that the person
  • 28:05was seeing. The results
  • 28:07from the infants were noisier,
  • 28:08not unexpectedly, but still highly reliable.
  • 28:11So we have a technique where we can
  • 28:14identify from the brain patterns
  • 28:15alone in the EEG what stimulus
  • 28:17the baby is being exposed to,
  • 28:19and it doesn't have to be a visual stimulus.
  • 28:21So with Bob McMurray and colleagues,
  • 28:24we asked whether or not we could
  • 28:27do the same kind of EEG based
  • 28:29decoding but in the auditory domain,
  • 28:31in the speech domain.
  • 28:33So the goal was to determine on a
  • 28:36millisecond by millisecond basis
  • 28:37what is the speech signal that you're
  • 28:39hearing and how does it relate to
  • 28:42other stimuli that are similar sounding
  • 28:44to that particular target stimulus.
  • 28:47And here I have to take a pause and
  • 28:49just review briefly what we know
  • 28:52about this phenomenon behaviorally.
  • 28:53Basically because it's been studied a
  • 28:55lot and the paradigm that's been used to
  • 28:58study it is called the Visual World paradigm.
  • 29:00In the visual World paradigm,
  • 29:02there are typically 4 stimuli present,
  • 29:04so these are pictures.
  • 29:06And then there's a word that is spoken.
  • 29:08So it's just like the paradigm
  • 29:10I described with babies,
  • 29:11except it's a little bit more complicated.
  • 29:12So for example,
  • 29:14where is the bug in this particular example?
  • 29:17And your eyes will fairly automatically,
  • 29:20as an adult,
  • 29:21land on the bug stimulus.
  • 29:23Notice that there is another stimulus
  • 29:26in this array that sounds like bug
  • 29:28at the beginning of the word bus,
  • 29:30but of course it's not the same.
  • 29:33The ending is different and then
  • 29:35there is 2 unrelated stimuli and
  • 29:37across a whole series of trials.
  • 29:39Then you can ask,
  • 29:40well where do the eyes go when
  • 29:42you hear the word bug?
  • 29:44And every trial can be slightly different.
  • 29:47Sometimes you will immediately look at
  • 29:48the bug as in the first example there.
  • 29:50Sometimes you will actually go to
  • 29:53the bus and then correct and go to the bug,
  • 29:55like in the third case, etcetera.
  • 29:57But if you sum across a
  • 29:58whole series of trials,
  • 29:59you get a probability function.
  • 30:01It'll look roughly like this.
  • 30:03Obviously,
  • 30:04this is a cartoon illustrating the
  • 30:06fact that across a series of trials,
  • 30:08you more reliably will look at the
  • 30:10target of the word that is spoken.
  • 30:12But occasionally you look at the one
  • 30:13that sounded like it at the beginning,
  • 30:15which is that red line there.
  • 30:17Those are cartoon data.
  • 30:18These are real data:
  • 30:20out of adults
  • 30:22you get exactly this kind of behavioral
  • 30:24performance across a series of trials.
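
Here is a minimal Python sketch of how those fixation-proportion curves are built from trial-level gaze samples; the bin size, trial count, and the simulated gaze probabilities are illustrative assumptions, not the real data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_bins = 60, 50                 # e.g., 20 ms bins after word onset
t = np.arange(n_bins)

# Relative tendency to fixate each object over time:
# 0 = target ("bug"), 1 = cohort ("bus"), 2 and 3 = unrelated pictures.
w_target = 1 + 5 / (1 + np.exp(-(t - 25) / 4))    # rises toward the target
w_cohort = 1 + 2 * np.exp(-((t - 10) ** 2) / 50)  # brief early competitor bump
weights = np.stack([w_target, w_cohort, np.ones(n_bins), np.ones(n_bins)])
probs = weights / weights.sum(axis=0)

looks = np.array([[rng.choice(4, p=probs[:, b]) for b in range(n_bins)]
                  for _ in range(n_trials)])

# Averaging over trials yields the canonical probability functions.
prop_target = (looks == 0).mean(axis=0)
prop_cohort = (looks == 1).mean(axis=0)
print(prop_target[0], prop_target[-1])    # climbs from ~chance to a high asymptote
print(prop_cohort[:20].max())             # transient early competitor advantage
```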
  • 30:26Now one limitation of this is that
  • 30:28you have to have pictures, right?
  • 30:30If we wanted to understand your
  • 30:33spoken knowledge of democracy,
  • 30:35what would the picture be that
  • 30:35we would put up there, right?
  • 30:37Well, we can't; for democracy,
  • 30:40we couldn't imagine a picture.
  • 30:43So it has that limitation.
  • 30:44It also has a limitation that the
  • 30:46eye movements themselves are a
  • 30:48behavior that some individuals,
  • 30:49particularly clinical populations,
  • 30:50might not have control over.
  • 30:52So it would be ideal if you could just
  • 30:54tap into the EEG responses of the
  • 30:56brain and get a function that looks like that.
  • 30:59So that's exactly what we did.
  • 31:00And remember, we have already
  • 31:01shown that this works in babies.
  • 31:03It already works in toddlers
  • 31:04at the behavioral level.
  • 31:06The question is,
  • 31:06can we see it in the EEG pattern?
  • 31:09So these are all adult data.
  • 31:10You have EEG channels on
  • 31:12the adult brain, and you've got a lot
  • 31:14of wiggles that come
  • 31:15off of those channels.
  • 31:17And at each time step,
  • 31:18after the stimulus is spoken,
  • 31:20we're going to ask,
  • 31:21is there a pattern in that EEG
  • 31:24that predicts that particular word?
  • 31:26And we chose words that sounded
  • 31:28alike at the beginning,
  • 31:30like badger and baggage and
  • 31:31muscle and mushroom. OK.
  • 31:33And then what we're doing is just
  • 31:35simply having adults passively listen.
  • 31:37There's no task,
  • 31:39there's no visual referent,
  • 31:41and we're going to train the
  • 31:42statistical model at each time
  • 31:44point to predict as best it can
  • 31:46which of those words is spoken,
  • 31:48and these are the results.
  • 31:49It looks a lot like the behavioral results.
  • 31:52There are no eye movements,
  • 31:54there's no task.
  • 31:55It's just passive listening.
  • 31:57And moreover,
  • 32:00it happens at the
  • 32:02individual subject level.
  • 32:03So there's enough data at each
  • 32:05individual subject level to
  • 32:06make this clinically relevant.
  • 32:08And we can see in most of these cases that
  • 32:10they're showing the canonical pattern,
  • 32:13greater accuracy to the target
  • 32:16than to the non-targets.
  • 32:20And in an interesting
  • 32:21experiment that's ongoing with
  • 32:22Elizabeth Simmons,
  • 32:23who is at Sacred Heart University
  • 32:26but affiliated with the Child Study Center,
  • 32:29there's a grant in which we are
  • 32:31looking at this in toddlers.
  • 32:33So these are so-called late talkers.
  • 32:35These are children about whose
  • 32:37speech comprehension we don't know much,
  • 32:40but we know that they do not speak
  • 32:42at the canonical age at which
  • 32:43you would expect them to speak.
  • 32:45And in addition to getting eye
  • 32:47tracking data on, for example,
  • 32:49'kitty' versus 'kitchen,' which would
  • 32:51be a child-friendly example,
  • 32:53we'll also be gathering EEG data.
  • 32:56OK.
  • 32:57So let me just mention near infrared
  • 33:00spectroscopy and how it works.
  • 33:01It's basically an optical imaging
  • 33:04technique that uses something like
  • 33:08an EEG cap that is placed on the baby's
  • 33:13head, and that cap contains a
  • 33:17set of optical emitters in the near
  • 33:20infrared range which are able to
  • 33:23penetrate the biological tissue through
  • 33:25the scalp and the skull into the brain.
  • 33:28And the photons coming back out from that
  • 33:32light emitted into the brain modulate with
  • 33:36the absorption of oxygenated hemoglobin.
  • 33:39And that is comparable
  • 33:41to the signal in fMRI,
  • 33:43the BOLD signal.
  • 33:45So typically these are arrays of emitters
  • 33:49and detectors called channels and
  • 33:52they're placed on a cap on the baby's head,
  • 33:54so we can cover the entire head
  • 33:56of the baby with these channels,
  • 33:58roughly 100 channels covering
  • 34:00the entire head.
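
To give a sense of the arithmetic behind those channels, here is a minimal Python sketch of the modified Beer-Lambert law commonly used to turn NIRS intensity changes into hemoglobin concentration changes; the wavelengths are typical, but the extinction coefficients, distances, and intensity values are placeholders, not calibrated numbers.

```python
import numpy as np

# Change in optical density at two wavelengths for one channel (made-up values).
wavelengths = [760, 850]                   # nm
delta_od = np.array([-0.012, 0.008])

# epsilon[i, j]: extinction of chromophore j (HbO, HbR) at wavelength i.
# These are placeholder magnitudes, not tabulated coefficients.
epsilon = np.array([[1.4, 3.8],            # 760 nm
                    [2.5, 1.8]])           # 850 nm
distance, dpf = 3.0, 6.0                   # source-detector separation (cm), path factor

# Solve delta_od = (epsilon @ [dHbO, dHbR]) * distance * dpf for the two unknowns.
d_hbo, d_hbr = np.linalg.solve(epsilon * distance * dpf, delta_od)
print(d_hbo, d_hbr)   # here HbO rises and HbR falls, the usual activation signature
```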
  • 34:01Now,
  • 34:02there's been kind of the classic approach
  • 34:05to studying brain activation using NIRS,
  • 34:08and that's to look for a hotspot.
  • 34:11And this is an early study out
  • 34:13of Jacques Mehler's group,
  • 34:15in which the contrast was simply
  • 34:16listening to speech that goes in
  • 34:18the forward direction versus the
  • 34:20speech that is simply reversed.
  • 34:22And of course, reverse speech sounds weird.
  • 34:24It doesn't contain meaning.
  • 34:25The phonemes are all kind of screwed up.
  • 34:27So it's a kind of a crude contrast,
  • 34:29but it gives you some insight about
  • 34:32what's going on in the brain. These
  • 34:34were newborns, just to be clear,
  • 34:36newborn babies in the first week.
  • 34:39And generally speaking you see a left
  • 34:41hemisphere dominance which is what
  • 34:43you would expect from the canonical
  • 34:45language related brain areas in adults.
  • 34:50But you can go beyond this sort
  • 34:52of standard approach and ask,
  • 34:54just as we asked in the case of EEG,
  • 34:57whether you can do this multivariate
  • 35:01voxel type analysis to identify
  • 35:04particular types of stimuli
  • 35:06from the pattern of activation.
  • 35:09And so Ben Zinszer and colleagues
  • 35:11did an interesting study.
  • 35:12This is an adult study now using
  • 35:14those same baby friendly stimuli.
  • 35:17So we have these eight different stimuli.
  • 35:19And now the question is,
  • 35:22can we use NIRS, not EEG?
  • 35:25Can we use NIRS to identify which one of
  • 35:27these eight stimuli has been presented
  • 35:30by looking at the pattern of activation?
  • 35:32The pattern of activation
  • 35:34where we have, you know,
  • 35:36Bunny versus foot and Bunny versus teddy
  • 35:39bear etcetera and seeing whether or not
  • 35:41there's a reliable pattern for each
  • 35:43one of those eight different stimuli.
  • 35:45And the answer is yes.
  • 35:46The overall decoding accuracy as
  • 35:47you can see on the right hand
  • 35:49side here is about 70% correct,
  • 35:51which is pretty good.
  • 35:53There were 44 NIRS channels.
  • 35:55It wasn't even the whole head
  • 35:57in this particular study,
  • 35:58but you can also see individual
  • 36:00differences in the performance.
  • 36:01So some participants are better
  • 36:03than others in terms of their
  • 36:06decoding accuracy.
  • 36:09So of course doing it in adults is one thing,
  • 36:12doing it in infants is another.
  • 36:13We did a follow up experiment with Lauren
  • 36:15Emberson and colleagues in which we asked
  • 36:17a more rudimentary question of infants.
  • 36:21And so it was two stimuli;
  • 36:22this is just the
  • 36:25same cartoon to orient you.
  • 36:26So we have a set of NIRS channels,
  • 36:29and we have two different stimuli.
  • 36:31We have an auditory visual pair of stimuli
  • 36:34and another auditory visual pair of stimuli.
  • 36:38So both stimuli have auditory and
  • 36:40visual information, but they differ
  • 36:41in the pairing of those.
  • 36:43So it's subtle.
  • 36:47One of the problems that you run into
  • 36:49when you're doing infinite experiments
  • 36:51is you need quite a number of trials
  • 36:54of each one of the stimuli to train
  • 36:56the machine learning algorithm.
  • 36:58Infants are notoriously not cooperative in
  • 37:01terms of giving lots of trials of data.
  • 37:03You have an advantage with the EEG
  • 37:05because the stimuli can occur very rapidly.
  • 37:08So in 5 minutes with a baby you
  • 37:10can have several 100 stimuli.
  • 37:12But in NIRS the signal is slow.
  • 37:14It's like the fMRI BOLD signal.
  • 37:16And so we couldn't get enough
  • 37:18data from each individual infant.
  • 37:19So we did a manipulation that is interesting
  • 37:22only in retrospect,
  • 37:24interesting because it worked.
  • 37:25What we did is we aggregated all of
  • 37:28the data across all of the trials
  • 37:31from all the infants except 1 infant.
  • 37:34And then we trained the model on
  • 37:36all of the infants except one and
  • 37:39then determined whether or not
  • 37:40we could decode
  • 37:42the withheld infant's data.
  • 37:44And the answer is that it could.
  • 37:46It was 72% decoding accuracy.
  • 37:48So for these subtle auditory visual
  • 37:50pairs of stimuli,
  • 37:52we could, on a trial by trial basis
  • 37:54for that withheld baby's data, tell
  • 37:57you with fairly high reliability whether
  • 38:00it was pair one or pair two.
  • 38:02These babies were six months of age,
  • 38:05so quite young. OK.
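
Here is a minimal Python sketch of that leave-one-infant-out scheme using scikit-learn; the infant, trial, and channel counts and the injected pattern are synthetic stand-ins for the real NIRS data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n_infants, trials_per_infant, n_channels = 20, 12, 44
y = np.tile([0, 1], n_infants * trials_per_infant // 2)      # AV pair 1 vs pair 2
groups = np.repeat(np.arange(n_infants), trials_per_infant)  # which infant each trial is from

# A weak pair-specific channel pattern shared across infants, plus noise.
pair_pattern = rng.normal(0, 1, n_channels)
X = rng.normal(0, 1, (y.size, n_channels)) + np.outer(y - 0.5, pair_pattern)

# Each fold trains on every infant but one and tests on the withheld infant.
logo = LeaveOneGroupOut()
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         groups=groups, cv=logo)
print(scores.mean())   # trial-by-trial accuracy on withheld infants
```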
  • 38:08So if we segue then to fMRI, sort
  • 38:11of the gold standard of spatial
  • 38:13resolution in imaging: the classic
  • 38:15approach, classic because it
  • 38:18has been around for a long time.
  • 38:21One example is out of Ghislaine
  • 38:23Dehaene-Lambertz's lab in Paris,
  • 38:24again using that forward versus
  • 38:26backward speech contrast.
  • 38:27It's a crude manipulation, but believe me,
  • 38:31in 2002 this is a heroic experiment.
  • 38:35And what they found again was
  • 38:37a left hemisphere bias,
  • 38:38as you would see in adults where
  • 38:41there's greater activation to
  • 38:42the forward going speech than
  • 38:44the backward going speech.
  • 38:46In subsequent work that's come out
  • 38:48of Rebecca Saxe's lab at MIT,
  • 38:50they've been interested in visual stimuli.
  • 38:52There's a classic distinction in
  • 38:55the ventral pathway in the visual
  • 38:58extrastriate areas of the brain
  • 38:59between an area that is responsive
  • 39:02to faces versus an adjacent area
  • 39:05that's responsive to scenes,
  • 39:07right, outdoor scenes, for example.
  • 39:11And interestingly enough,
  • 39:13in this experiment that was
  • 39:14published a number of years ago,
  • 39:16you see that same kind of dissociation
  • 39:19between scenes and faces in
  • 39:21approximately the same regions
  • 39:23of the brain in young infants.
  • 39:25These were roughly 6 to 18 month
  • 39:28old infants and adults.
  • 39:29So those red and blue bars on the
  • 39:31bottom are on the left for the infants,
  • 39:32on the right for the adults.
  • 39:33And you can see that the canonical areas
  • 39:35are being activated in a very similar way.
  • 39:38So these two results suggest that
  • 39:40the fundamental architecture of
  • 39:42the brain in early infancy is set up
  • 39:44in a way that's similar to adults,
  • 39:47both for speech and for visual stimuli.
  • 39:50But the limitation of fMRI with
  • 39:54awake infants is quite severe.
  • 39:57And our colleague Nick Turk-Browne
  • 39:59has been a pioneer in trying to
  • 40:01set up situations in the scanner
  • 40:03environment that maximize the amount
  • 40:05of data that you can get from babies.
  • 40:08And here is just a summary slide
  • 40:09that they put together a number of
  • 40:11years ago showing that the average
  • 40:12baby is giving you about 10 minutes
  • 40:14of data in the scanner.
  • 40:15And one of the things that they used
  • 40:17is really powerful.
  • 40:19Let me just talk about a result
  • 40:21first and then tell you about why
  • 40:23they got such good data.
  • 40:27They were able to study a structure
  • 40:29in the brain that you cannot
  • 40:31access with either EEG or NIRS,
  • 40:33and that's the hippocampus.
  • 40:34And their adult work had shown
  • 40:36that the hippocampus was involved
  • 40:38in statistical learning.
  • 40:39And there also is suggestive evidence
  • 40:41that the hippocampus is really not
  • 40:43functioning very well early in infancy.
  • 40:45And yet,
  • 40:46Nick and his colleague Cameron Ellis
  • 40:48showed that in the statistical learning task,
  • 40:52infants as young as 12 months of age
  • 40:54are showing reliable hippocampal activation,
  • 40:57which you would not be able to see
  • 40:59with any technique other than F MRI,
  • 41:01suggesting that the hippocampus is
  • 41:04in fact more involved in in early
  • 41:07learning effects in infants than
  • 41:09was previously thought possible.
  • 41:11But let me go back to this issue here about
  • 41:15how much data you can get out of an infant.
  • 41:17As I said, infants are not the most
  • 41:19cooperative subjects in the world,
  • 41:21no matter how much we motivate
  • 41:23them or their parents.
  • 41:24And so setting up an environment
  • 41:25in which you get the most amount
  • 41:27of data is really important.
  • 41:29And the MRI scanner environment is
  • 41:31not a terribly friendly environment.
  • 41:33So what Nick has discovered and
  • 41:35other people have discovered as well
  • 41:37is that putting them in a situation
  • 41:39in which you have a naturalistic,
  • 41:41seemingly complicated kind
  • 41:42of stimulus situation,
  • 41:44which seems kind of counterintuitive, right?
  • 41:46The typical way scientists
  • 41:48proceed is to simplify everything,
  • 41:50make it just one variable that
  • 41:52you're studying and you know prune away
  • 41:55all the other distracting variables.
  • 41:56The problem with that is the
  • 41:58stimuli are so simple,
  • 41:58the babies are bored and they
  • 42:00don't give you a lot of data.
  • 42:01So by using a naturalistic
  • 42:03task like movie watching,
  • 42:05babies are much more engaged,
  • 42:07are able to maintain their attention
  • 42:08for longer periods of time,
  • 42:09and you can gather more data.
  • 42:11Then you have to parse that data in
  • 42:13such a way that you can interpret it
  • 42:16because the stimuli are very complicated.
  • 42:18So two kinds of metrics that you
  • 42:20can get in addition to where in the
  • 42:22brain there's a hotspot of activation
  • 42:24is how are the different areas in
  • 42:26the brain connected to each other?
  • 42:28That is,
  • 42:28how are they correlated with each
  • 42:31other while you're watching the movie?
  • 42:34Or how two different
  • 42:34brains watching the same movie are
  • 42:36correlated with each other, right.
  • 42:38So it's not the internal connectivity,
  • 42:40but it's the correspondence between
  • 42:42the two brains' activity. And Sara
  • 42:47Sanchez-Alonso has done a really
  • 42:49interesting analysis of a large
  • 42:51data set that was available through
  • 42:53the Healthy Brain Network in which
  • 42:55children, now these are 6 to 18 year
  • 42:58old children, are watching the same movie.
  • 43:01So it's this movie watching paradigm
  • 43:03in the scanner.
  • 43:04You parcellate the brain into a
  • 43:07relatively small number of regions,
  • 43:09parcels, compared to the number
  • 43:11of voxels in the brain.
  • 43:12And then ask how these functional
  • 43:15connectivity analyses differentiate
  • 43:17between when you're watching the
  • 43:20movie versus when you're at rest,
  • 43:22right, when there's no stimulation.
  • 43:25And without going through all
  • 43:27of the gory details,
  • 43:28There are different regions of the
  • 43:30brain that show different functional
  • 43:32connectivity patterns during
  • 43:33rest and during movie watching.
  • 43:36And those are so reliable that with
  • 43:38only a 3 minute movie you can decode,
  • 43:41that is tell whether the person
  • 43:43is watching a movie or in a
  • 43:46resting state with 89% accuracy.
  • 43:48So there's a very robust decoding
  • 43:51that you can do from these brain
  • 43:54functional connectivity networks.
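
Here is a minimal Python sketch of that functional-connectivity decoding idea: correlate parcel time series, vectorize the correlation matrix, and classify movie versus rest; the parcel count, scan length, and the way the movie condition is simulated are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects, n_parcels, n_timepoints = 60, 50, 180   # ~3 min at a 1 s TR

def connectivity_features(ts):
    """Upper triangle of the parcel-by-parcel correlation matrix."""
    c = np.corrcoef(ts)
    return c[np.triu_indices(n_parcels, k=1)]

X, y = [], []
for _ in range(n_subjects):
    for condition in (0, 1):                        # 0 = rest, 1 = movie
        ts = rng.normal(0, 1, (n_parcels, n_timepoints))
        if condition:                               # the movie adds shared drive
            ts[:10] += 0.8 * rng.normal(0, 1, n_timepoints)
        X.append(connectivity_features(ts))
        y.append(condition)

scores = cross_val_score(LinearSVC(), np.array(X), np.array(y), cv=5)
print(scores.mean())                                # decoding well above chance
```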
  • 43:56Moreover,
  • 43:57that relationship between rest and movie
  • 44:00watching changes with age because of course
  • 44:03the child is acquiring more knowledge,
  • 44:05both linguistically 'cause they're
  • 44:06listening to the audio track,
  • 44:08but also visually in terms of interpreting
  • 44:11the visual stimuli in the movie.
  • 44:13And as a result, that developmental
  • 44:16function can be predictive of the relative
  • 44:20maturational state of a particular child.
  • 44:24We've extended that with Isabel
  • 44:26Nickerson and Sara over the last
  • 44:29couple of years in which we wanted to
  • 44:32target the language stimuli themselves.
  • 44:34And so we switched from MRI to NIRS.
  • 44:38These again are adults, and we
  • 44:42presented the very same movies
  • 44:45that were used in that previous study.
  • 44:48Happens to be the movie Despicable Me.
  • 44:50I highly recommend it.
  • 44:52But we dubbed into
  • 44:55the movie 3 different audio tracks.
  • 44:58One is English,
  • 45:00the language that the movie was made in,
  • 45:03another is Spanish,
  • 45:04and the third is a non speech
  • 45:06stimulus that you can't understand.
  • 45:09And so the question is can we look at
  • 45:11the NIRS responses in adults while
  • 45:13they're watching this naturalistic
  • 45:14movie with the three audio tracks
  • 45:16and discriminate between their native
  • 45:18language and a non-native language.
  • 45:21So it's more subtle than movie versus
  • 45:24rest, and the answer is yes:
  • 45:26you have greater left hemisphere
  • 45:27activation when you're listening to your
  • 45:29native language than non-native language.
  • 45:31Perhaps not surprising,
  • 45:32but moreover,
  • 45:33the functional connectivity network
  • 45:35is different.
  • 45:36If you start with a seed region
  • 45:37that's in the canonical language area
  • 45:39and ask what is it connected to?
  • 45:41It's connected in a much richer
  • 45:42way when you're listening to your
  • 45:44native language than when you're
  • 45:46listening to a non-native language.
  • 45:47So we're in the process with Virginia
  • 45:50Chambers and others in the lab
  • 45:52to begin to do this with children
  • 45:55moving from adults to children.
  • 45:57So in the last five or six minutes,
  • 46:01I just want to talk briefly about
  • 46:04some applications to particular
  • 46:06problems that have to do with special
  • 46:09populations and how these neural
  • 46:12methods can inform us about them.
  • 46:15So I want to talk about this notion of
  • 46:18prediction and its relationship
  • 46:20to prematurity; to storybook reading,
  • 46:22which I think is kind of
  • 46:23an interesting phenomenon;
  • 46:25to hyperscanning, that is, looking at
  • 46:27the social interaction between
  • 46:29individuals; and to the bilingual brain.
  • 46:33So prediction is something that
  • 46:34we do all the time.
  • 46:36It's extremely important it what it's
  • 46:39what allows you to interpret my perhaps
  • 46:42overly rapid speech behavior at the moment.
  • 46:46Didn't to know what the next word
  • 46:47is that I'm going to say before I've
  • 46:49even said it because we have learned
  • 46:51all sorts of structures to our
  • 46:52language and prediction is a really
  • 46:54important process in doing that.
  • 46:56Imagine that we had to wait to the end
  • 46:58of every word before we knew what it was.
  • 47:01We would continually fall behind our
  • 47:04interpretation of a speaker's utterances.
  • 47:07So there's a really interesting
  • 47:09case, you know, of an
  • 47:10epilepsy patient that was studied
  • 47:13by Hughes et al. in 2001, and these were
  • 47:18direct recordings from the brain of a
  • 47:21presurgical epilepsy patient, and
  • 47:23the paradigm was really, really simple.
  • 47:26They're just hearing tone,
  • 47:27tone, tone, tone, right?
  • 47:29It's a double tone burst.
  • 47:33But then every once in a while
  • 47:35they omitted the second tone.
  • 47:37And what they found is that of
  • 47:39course if there's just one tone,
  • 47:41as in that first little squiggle there,
  • 47:43you get one bump.
  • 47:44If there's two tones, you get 2 bumps.
  • 47:46But if you occasionally omit that
  • 47:49second tone, you still get 2 bumps.
  • 47:51That's a prediction effect.
  • 47:53And Lauren Emberson thought, wow,
  • 47:55this is a great paradigm to use with
  • 47:58babies because what we can do is we can
  • 48:00pair an auditory and a visual stimulus.
  • 48:02We can record from the temporal
  • 48:04cortex where the auditory signal
  • 48:06is going and from the visual cortex
  • 48:07where the visual signal is going.
  • 48:09And we can ask, well,
  • 48:10what happens after we've paired the
  • 48:12stimuli over and over again?
  • 48:14And then occasionally we just
  • 48:16don't present the visual stimulus.
  • 48:17So it's analogous to the Hughes study.
  • 48:20So they get 100% pairing, and then they go
  • 48:24into a test phase where they have 80%
  • 48:27pairing and 20% of trials omitting the visual stimulus.
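
A small sketch of how such a test-phase trial list could be built; the trial counts and labels here are hypothetical, and only the 80/20 proportions come from the description in the talk.

```python
import random

def make_test_trials(n_trials=60, omit_prob=0.20, seed=0):
    """Return a randomized test-phase trial list: mostly audio+visual
    pairs, with a minority of audio-only (visual-omission) trials."""
    n_omit = round(n_trials * omit_prob)
    trials = ["AV_pair"] * (n_trials - n_omit) + ["A_only_omission"] * n_omit
    random.Random(seed).shuffle(trials)  # randomize trial order
    return trials

trials = make_test_trials()
print(trials[:10])
print("omission trials:", trials.count("A_only_omission"), "of", len(trials))
```
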
  • 48:32And so what you see
  • 48:33here is a cartoon, for simplicity;
  • 48:35trust me, the data looked
  • 48:37just like this.
  • 48:38When you're testing them on the
  • 48:4080% of the trials where they get
  • 48:42auditory and visual information,
  • 48:44well then you get temporal cortex
  • 48:46activation and occipital cortex activation.
  • 48:48That's not surprising.
  • 48:50What's surprising is that when,
  • 48:52on those 20% of the trials,
  • 48:54you present the auditory stimulus
  • 48:55but you don't present the visual stimulus,
  • 48:57you get the same response.
  • 48:59So the occipital cortex is responding
  • 49:01even though there's no physical stimulus
  • 49:03present because it's a predicted response.
  • 49:06And we ran a control condition in which
  • 49:08they never got the two stimuli paired.
  • 49:10They were just always in random order,
  • 49:11no pairing.
  • 49:12And then you get this pattern, right?
  • 49:14When you present an auditory stimulus,
  • 49:16you get an auditory temporal
  • 49:18cortex response; present a visual,
  • 49:19you get a visual response.
  • 49:20So what's interesting here is that
  • 49:23the high bar on the left is the
  • 49:26unexpected absence of a stimulus,
  • 49:29and the low bar on the right is the
  • 49:31expected absence of a stimulus, right?
  • 49:33So one is expected, one is unexpected,
  • 49:36and you get a hugely different
  • 49:38response in the brain.
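
A minimal sketch of that comparison on synthetic response amplitudes; the numbers and the particular test (a between-group t-test) are assumptions for illustration, not the published analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Hypothetical occipital response amplitudes (a.u.) per infant:
unexpected = rng.normal(0.5, 0.2, 20)  # paired group, visual omitted
expected = rng.normal(0.1, 0.2, 20)    # control group, nothing to predict

t, p = stats.ttest_ind(unexpected, expected)
print(f"unexpected vs. expected absence: t = {t:.2f}, p = {p:.4g}")
```
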
  • 49:39Now the reason I'm raising this is
  • 49:41because prediction effects have
  • 49:42been shown to be kind of interesting
  • 49:44with regard to special populations.
  • 49:46Lauren did a follow-up study with
  • 49:49about 100 prematurely born infants
  • 49:51and showed that that prediction
  • 49:52response is not present in the brain.
  • 49:55Now,
  • 49:55these babies appear to be behaviorally
  • 49:58typically developing,
  • 49:59and yet they have this neural problem,
  • 50:01and the question is whether there will be
  • 50:04a cascading effect later, right?
  • 50:06But we also did follow-up experiments
  • 50:09with our colleagues
  • 50:10in Taiwan in which we asked whether
  • 50:12this prediction effect is predictive
  • 50:17of subsequent language development.
  • 50:19If you look at
  • 50:21this prediction effect in six-month-olds,
  • 50:23just like in the original study,
  • 50:25and then ask how it is related to
  • 50:27subsequent language development,
  • 50:29the answer is that it is reliably
  • 50:32related to productive language
  • 50:34behavior at 12 and 18 months of age.
  • 50:37In addition, in a follow-up experiment,
  • 50:40Shin then asked, well,
  • 50:41what is it about the language environment
  • 50:44that is causing better language performance?
  • 50:47One thing that has been known behaviorally
  • 50:51is that storybook reading seems to be
  • 50:53a predictor of subsequent language,
  • 50:55and that's what's shown in this diagram here.
  • 50:57The more storybooks mothers read
  • 50:59to their infants between 6 and
  • 51:0112 months of age, the more likely
  • 51:03they were to have a better language
  • 51:05outcome at 18 months of age.
  • 51:08But moreover,
  • 51:09that predictive effect in
  • 51:11the fNIRS response,
  • 51:12the visual omission effect,
  • 51:15also predicted vocabulary development,
  • 51:17and it had a separate additive
  • 51:20component to the prediction.
  • 51:22So it's not just the experience
  • 51:23that they get with the mother,
  • 51:25it's the kind of changes that it
  • 51:27implements in the brain that causes
  • 51:30this subsequent language behavior.
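
A minimal sketch of that additive logic on synthetic data, entering both predictors in one regression; the variable names, units, and effect sizes are invented, and the real analysis may have used a different model.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 50
storybook = rng.normal(5, 2, n)      # hypothetical books read per week
omission = rng.normal(0.4, 0.15, n)  # hypothetical fNIRS omission effect (a.u.)
# Simulated vocabulary scores with independent contributions from each.
vocab = 10 * storybook + 80 * omission + rng.normal(0, 15, n)

X = sm.add_constant(np.column_stack([storybook, omission]))
fit = sm.OLS(vocab, X).fit()
print("coefficients:", np.round(fit.params, 2))
print("p-values:    ", np.round(fit.pvalues, 4))
# If both slopes stay significant with the other in the model, the neural
# effect adds predictive power beyond the home-reading measure.
```
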
  • 51:35And that suggests that this
  • 51:37interactive nature of mothers
  • 51:39and infants, typically mothers,
  • 51:41sometimes fathers, of course,
  • 51:42could be important.
  • 51:44And there is a paradigm called hyperscanning,
  • 51:46which many of you might know that Joy
  • 51:48Hirsch's lab studies here at Yale.
  • 51:50And this is the first study that I
  • 51:52know of, out of Elise Piazza's work,
  • 51:54actually in Casey Lew-Williams's
  • 51:56lab at Princeton,
  • 51:57but Elise Piazza is now in
  • 51:59her own faculty position,
  • 52:01showing fNIRS in a hyperscanning
  • 52:02paradigm where the mother is
  • 52:04wearing an apparatus and the
  • 52:06baby's wearing an apparatus.
  • 52:07And the question is what is the
  • 52:09relationship between the back-and-forth
  • 52:11in social communication
  • 52:12between the two brains.
  • 52:14And the answer is that they are
  • 52:17statistically significantly correlated.
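
A minimal sketch of one way to quantify such inter-brain coupling on synthetic time series, using a circular-shift permutation null as a guard against autocorrelation; the published analyses may well differ from this.

```python
import numpy as np

rng = np.random.default_rng(5)
t = 3000
shared = rng.normal(size=t)                # simulated shared interaction signal
mother = 0.4 * shared + rng.normal(size=t)
infant = 0.4 * shared + rng.normal(size=t)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

observed = corr(mother, infant)
# Null distribution: circularly shift one brain's signal to break alignment
# while preserving its autocorrelation structure.
null = [corr(mother, np.roll(infant, rng.integers(200, t - 200)))
        for _ in range(1000)]
p = (np.sum(np.abs(null) >= abs(observed)) + 1) / 1001
print(f"inter-brain r = {observed:.2f}, permutation p = {p:.4f}")
```
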
  • 52:18Now,
  • 52:19what that correlation implies for other
  • 52:22aspects of behavior is not yet known,
  • 52:25because this paradigm really hasn't been
  • 52:27used very much with young infants yet.
  • 52:29But it's suggestive of the fact
  • 52:32that that synchrony between the
  • 52:33brains may have causal effects on
  • 52:36a variety of subsequent behaviors,
  • 52:38including language development.
  • 52:42Just two more things and then I will stop.
  • 52:46I wanted to mention briefly a study
  • 52:49that just came out from a grant
  • 52:51that we got from the Gates Foundation.
  • 52:54And this is a study in which
  • 52:56infants from low-resource countries,
  • 53:00in this particular case Bangladesh,
  • 53:03were studied at two different
  • 53:05ages, 6 and 24 months of age,
  • 53:07using fNIRS.
  • 53:13And roughly half of the babies
  • 53:15were from low-income families in Bangladesh
  • 53:18versus middle-income families in Bangladesh.
  • 53:20And the stimuli were social stimuli.
  • 53:22They're depicted on a video screen.
  • 53:25One is a person who's interacting
  • 53:27with the baby, and the other is
  • 53:29an inanimate object that is,
  • 53:30you know,
  • 53:31dynamic but doesn't have a social
  • 53:33component to it.
  • 53:34And what they studied was the functional
  • 53:36connectivity network within the brain.
  • 53:37At six months and 24 months, between
  • 53:40the low-income and the middle-income groups,
  • 53:42there was a statistically significant
  • 53:44difference.
  • 53:45But interestingly enough,
  • 53:46if you look at the change in
  • 53:49functional connectivity,
  • 53:51in the low income group,
  • 53:54there's an increase in functional
  • 53:56connectivity between 6 and 24 months
  • 53:58and in the middle income group,
  • 54:00there's a decrease in
  • 54:01functional connectivity.
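
A minimal sketch of that group-by-age summary on synthetic data, where "connectivity" is taken as the mean off-diagonal channel correlation; the coupling values are invented to mirror the described pattern, not the study's data.

```python
import numpy as np

def mean_connectivity(data: np.ndarray) -> float:
    """data: (n_channels, n_timepoints). Mean off-diagonal correlation."""
    c = np.corrcoef(data)
    return c[~np.eye(len(c), dtype=bool)].mean()

rng = np.random.default_rng(6)

def simulate(coupling, n_ch=16, t=2000):
    """Channels sharing a common signal scaled by `coupling`."""
    shared = rng.normal(size=t)
    return coupling * shared + rng.normal(size=(n_ch, t))

# Hypothetical coupling at 6 vs. 24 months for each group.
for group, (c6, c24) in {"low income": (0.2, 0.5),
                         "middle income": (0.5, 0.2)}.items():
    delta = mean_connectivity(simulate(c24)) - mean_connectivity(simulate(c6))
    print(f"{group}: change in connectivity 6->24 months = {delta:+.2f}")
```
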
  • 54:02We know that there are a variety
  • 54:04of processes that go on in the brain
  • 54:05that involve pruning and the
  • 54:07reduction in connections because
  • 54:09of maturation and noise reduction.
  • 54:12And so this suggests that perhaps
  • 54:14these low-income infants are immature,
  • 54:17that is,
  • 54:18they will show the same decrease effect,
  • 54:21but they'll show it at a later age.
  • 54:22And so Chuck Nelson's group is
  • 54:24following up with these babies.
  • 54:26And finally,
  • 54:27I just want to say one thing
  • 54:28about bilingual infants.
  • 54:29It's long been thought that
  • 54:31individuals who are confronted
  • 54:33with two native languages
  • 54:35simultaneously have certain kinds
  • 54:36of cognitive processes that are
  • 54:38more flexible because they do a
  • 54:39lot of switching between the two
  • 54:41different languages.
  • 54:42And one instance of that behaviorally
  • 54:44is that they're able to deploy
  • 54:47their attention more flexibly.
  • 54:49I'm not going to go through the results here,
  • 54:50but that was definitely true in
  • 54:52a study with Maria Arredondo and
  • 54:54Janet Werker in which it was
  • 54:55shown that the bilinguals have a
  • 54:58reaction time advantage under these
  • 55:00circumstances behaviorally and that
  • 55:02it's correlated with how often the
  • 55:05parent does language switching.
  • 55:07That's behavioral results.
  • 55:08But in addition,
  • 55:09there was a fNIRS study in
  • 55:12a follow-up in which recordings
  • 55:13were made from the babies' brains
  • 55:15at six and ten months of age.
  • 55:17And interestingly enough,
  • 55:19the bilingual infants show this
  • 55:22greater left frontal
  • 55:25activation on these mismatched trials
  • 55:27that I didn't describe very well.
  • 55:29But basically the neural results
  • 55:32show a difference that you
  • 55:36wouldn't ordinarily have seen by
  • 55:37just looking at the behavior alone.
  • 55:39That is, there is a particular
  • 55:41brain region that seems to be different
  • 55:44between bilinguals and monolinguals.
  • 55:45So let me just wrap up.
  • 55:47These behavioral studies of infant
  • 55:50language development have been
  • 55:51very powerful, with a long history;
  • 55:54neural measures in
  • 55:55infants add, I think, important
  • 55:57insights about these behavioral changes.
  • 56:00These more modern multivariate
  • 56:01and machine learning techniques
  • 56:03I think are now being used much
  • 56:05more widely with infants.
  • 56:06And we've been,
  • 56:08you know,
  • 56:09limited in terms of practical
  • 56:11constraints on how much data we can
  • 56:13get from babies; naturalistic viewing
  • 56:15is one potential solution to that.
  • 56:17And all of these things conspire to
  • 56:19make me, I think, reasonably optimistic
  • 56:21that we can look at individual
  • 56:23differences in special populations.
  • 56:24So with that,
  • 56:25let me conclude by saying that Nick
  • 56:27Turk-Browne and I have a paper coming
  • 56:29out next month in Trends in
  • 56:31Neurosciences that summarizes a lot of the
  • 56:34methodological things that I talked
  • 56:35about today in much more detail.
  • 56:38And with that,
  • 56:38thanks very much for your attention.
  • 56:47Right. So now we're going to take
  • 56:49some questions from the audience.
  • 56:56Professor Lewkowicz.
  • 56:59Professor Aslin, wonderful talk.
  • 57:02Thank you so much. So much to
  • 57:04think about. One question that
  • 57:06came to mind: I find the work
  • 57:10on prediction very interesting.
  • 57:11And of course, prediction is so
  • 57:13fundamental to learning and all of that.
  • 57:16And I wondered whether you have ever
  • 57:19looked at the ability of babies,
  • 57:22or even children, to predict,
  • 57:24especially in light of
  • 57:27multisensory information. That is to say,
  • 57:30one of the things that we know is that
  • 57:32multisensory integration is a
  • 57:34very long developmental process,
  • 57:36it takes a long time for babies and
  • 57:39then children to begin to learn
  • 57:40how to connect what they
  • 57:41see with what they hear,
  • 57:43particularly in the social domain.
  • 57:45So I'm just wondering whether, if you
  • 57:48were to use multisensory stimuli,
  • 57:50auditory and visual in particular,
  • 57:52you would get different patterns
  • 57:54of prediction that would be visible
  • 57:57in the brain response.
  • 58:00So let me just restate, for those of you on Zoom
  • 58:02who might not have heard David's question.
  • 58:04He's asking whether or not the
  • 58:07prediction effects have been studied in
  • 58:09the multisensory domain, where you have
  • 58:11combinations of sensory stimuli presented.
  • 58:14And if I can make just a few
  • 58:17quick comments about that.
  • 58:20Some of the studies we're doing
  • 58:22involve simultaneous presentation of
  • 58:23both auditory and visual information,
  • 58:26and so in principle you could tease apart
  • 58:28what aspect of that combination of stimuli
  • 58:31is leading to the prediction effect.
  • 58:33Moreover, it would be interesting
  • 58:35if you could train one of these
  • 58:37machine learning models to identify
  • 58:39which of those two stimuli,
  • 58:42or possibly both,
  • 58:43are driving the effect in the brain,
  • 58:46because all we have, you know,
  • 58:48are these simple examples of prediction.
  • 58:52And the third comment is that Alexis Black,
  • 58:54who used to be in my lab,
  • 58:55and I have long wanted to do prediction
  • 58:58experiments in the language domain
  • 59:00in which you're listening to a
  • 59:02sentence and then the very last word
  • 59:04of the sentence is omitted, right?
  • 59:06It's not there.
  • 59:08And see whether or not even
  • 59:09in the absence of a word,
  • 59:11you can identify the thought
  • 59:13of that word in the brain.
  • 59:15So that could be based on
  • 59:17the auditory information,
  • 59:19it could be based on
  • 59:20orthographic information, right,
  • 59:21Like text.
  • 59:22Or it could be based on a
  • 59:25visual referent of that word.
  • 59:27So I think there are clever ways
  • 59:28that you could use these techniques
  • 59:29to tease that apart.
  • 59:35Austin, I just wanna read.
  • 59:40Hi, Dorothy Stubbe. You talked
  • 59:44about premature babies and that they are
  • 59:48taking longer to mature.
  • 59:52Do we know how that goes?
  • 59:54And in terms of interventions, do we
  • 59:58have ideas about interventions to help
  • 01:00:00the language development? Yeah.
  • 01:00:01So the question is about the premature
  • 01:00:04babies and long-term outcome with
  • 01:00:06regard to language delay.
  • 01:00:10We know that statistically speaking they're
  • 01:00:12more likely to have language problems,
  • 01:00:15but that doesn't of course say for any
  • 01:00:18given premature baby whether they will.
  • 01:00:21We also did not have access to follow
  • 01:00:24up of those babies that showed the
  • 01:00:26absence of a prediction effect.
  • 01:00:29And to be clear,
  • 01:00:30they showed the absence of that
  • 01:00:31prediction effect at six months
  • 01:00:33corrected age, right?
  • 01:00:35So they actually were nine or ten
  • 01:00:37months of age when they were tested.
  • 01:00:39So it's a highly reliable and
  • 01:00:42prevalent absence of prediction
  • 01:00:44in those babies' brains.
  • 01:00:46But we did test those babies on
  • 01:00:48a behavioral task that involved
  • 01:00:50prediction, and they were no
  • 01:00:52different than full-term babies.
  • 01:00:54So if there is a lag effect,
  • 01:00:57it obviously would happen later.
  • 01:00:59And it's possible that there are
  • 01:01:01compensatory mechanisms that have been
  • 01:01:03triggered by the prematurity that are
  • 01:01:06buffering them from that brain problem,
  • 01:01:09if we can even call it a problem, and that
  • 01:01:11allow their behavior to be typical.
  • 01:01:14But it is possible that it could
  • 01:01:15be what's called a sleeper effect,
  • 01:01:17right?
  • 01:01:17That perhaps later in life
  • 01:01:20they might show a subtle deficit
  • 01:01:22that is not obvious in these kinds of
  • 01:01:25crude tasks that we use in infancy.
  • 01:01:28And so that's a possibility,
  • 01:01:29but unfortunately,
  • 01:01:30we haven't been able to follow
  • 01:01:31up on those babies. Thank you.
  • 01:01:35Is there a Zoom question?
  • 01:01:38No, there's
  • 01:01:40nothing on Zoom. I was just
  • 01:01:41going to ask a real quick question.
  • 01:01:43It's unfair because you've presented
  • 01:01:44so much beautiful work from your own lab,
  • 01:01:46but about the work that you presented
  • 01:01:48from Elise Piazza's group,
  • 01:01:50looking at the
  • 01:01:52dual fNIRS data collection:
  • 01:01:53I was just wondering if they
  • 01:01:54had looked at that with a
  • 01:01:56caregiver and a non-caregiver.
  • 01:01:57Differences in synchrony? Yeah.
  • 01:01:59So Karen's question is whether or
  • 01:02:02not the hyperscanning between mom
  • 01:02:03and baby has been done between,
  • 01:02:05for example, baby and caregiver,
  • 01:02:07non-caregiver, stranger, etcetera.
  • 01:02:09To the best of my knowledge, no.
  • 01:02:11But I'm sure they're working on that.
  • 01:02:14What I would personally be interested
  • 01:02:17in is the kinds of social cueing that
  • 01:02:20go on in the communicative context.
  • 01:02:24And you can introduce perturbations
  • 01:02:26in the behavior of the parent to
  • 01:02:30see whether or not it has an effect
  • 01:02:31on the synchrony relationship
  • 01:02:33that they would normally have.
  • 01:02:35And I think that would be really interesting.
  • 01:02:37Yeah.
  • 01:02:40Well, thank you so much.
  • 01:02:41OK, thanks everybody.