May 13, 2026
Replication, Rigor, and Rejection: What 10,000 Manuscripts Taught Me That Cardiology Didn't
Transcript
- 04:16Hello, everyone. I think we
- 04:18can get started.
- 04:22Welcome to Cardiovascular Grand Rounds.
- 04:27Take a moment for people
- 04:28to settle.
- 04:30So,
- 04:31you know, we can get
- 04:32started. I think this is the CME code up on screen.
- 04:36These are the
- 04:37the talks that are upcoming
- 04:39in our,
- 04:40CVR grand round series. A
- 04:42lot of, really exciting speakers,
- 04:44both internal and external.
- 04:46This is the
- 04:47disclosure slide, and these are
- 04:49specific disclosures for Dr. Nallamothu.
- 04:51And with that, it's my distinct honor to welcome Dr. Nallamothu, who made the trip from Michigan to see us yesterday. He's the Stevo Julius Research Professor of Cardiovascular Medicine and a professor of internal medicine at Michigan. He's an
- 05:07interventional cardiologist,
- 05:08a world renowned outcomes and
- 05:10health services researcher.
- 05:11He has been, you know, instrumental in moving the field and its scholarship forward through both mentorship as well as editorship. He is the outgoing editor-in-chief of Circulation: Cardiovascular Quality and Outcomes and has really defined what this field has been, especially in a time when data sciences have undergone a massive revolution with the emergence of AI.
- 05:36He's the program director of MiCHAMP, an AI and pragmatic research group over at Michigan. He has numerous NIH grants on quality improvement across domains, from interventional cardiology to resuscitation science, and he has spoken at many venues. And today,
- 05:55we have the honor of
- 05:56him, you know, summarizing many
- 05:58of the key learnings that
- 05:59he has seen across his
- 06:01research and his academic journey, and I encourage folks to ask questions at the
- 06:07end. There's no better mentor,
- 06:09no kinder human being out
- 06:10there than Brahmajee, who is
- 06:12a generous friend and collaborator
- 06:13and has always been one
- 06:14of the people we look to. "What would Brahmajee do?" is what people ask often, and that's the thing you're supposed to do in science.
- 06:20Hey. Welcome, Brahmajee.
- 06:31Thank you so much, Rohan.
- 06:32It's, it's really an honor
- 06:33to be here.
- 06:35You know, my my first
- 06:36mentor
- 06:37actually was Harlan Krumholz.
- 06:39And,
- 06:40I I say that because
- 06:42I've been, lucky enough to
- 06:43be visiting Yale a couple
- 06:44of times over the years.
- 06:46This weekend has been particularly
- 06:47special for me. I have
- 06:48a niece who's graduating
- 06:49on Monday, and, you know,
- 06:51what a wonderful institution. What
- 06:53a wonderful
- 06:54place. And and Rohan was
- 06:56kind enough to bring some
- 06:57sun for me yesterday, so
- 06:58that was wonderful to walk
- 06:59around campus.
- 07:04These are my disclosures.
- 07:08I'm gonna start today, with
- 07:10a story.
- 07:12And this is a story
- 07:14about a,
- 07:15psychologist and a researcher
- 07:17named Brian Nosek. And
- 07:19it was a really,
- 07:21you know, fascinating story for
- 07:23me. So Brian Nosek is,
- 07:25as I mentioned, a psychologist
- 07:26at the University of Virginia,
- 07:28and he was really interested at a certain point in the question of whether or not light-skinned players get fewer red cards than dark-skinned players. Very interesting,
- 07:39kinda quirky question, and there's
- 07:41a point to this. So
- 07:42so bear with me for
- 07:42a moment.
- 07:44So he's you know, he
- 07:45he he was thinking about
- 07:47this question. And and for
- 07:48those of you who aren't
- 07:49as familiar with football or
- 07:50soccer, you know, there's a
- 07:51couple of things about this
- 07:52that are really intriguing. Right?
- 07:54So one of the things
- 07:55about this is that red
- 07:57cards are given for a
- 07:57couple of different reasons. Usually,
- 07:59it's a major infraction.
- 08:01But the second part is
- 08:02that there's a certain subjectivity,
- 08:04that's associated with it. It's
- 08:05not just what happened, but
- 08:06the intent of the infraction.
- 08:07And so there's some judgment
- 08:08around this.
- 08:09And so what Nosek did as a psychologist and a researcher is what most of us would do. Right? He thought, I'm gonna go out and I'm gonna get some data, and I'm gonna call this the Nosek project.
- 08:19Right? And just to summarize
- 08:21what he was able to
- 08:23collect over a pretty short
- 08:24period of time was he
- 08:26got data on two thousand
- 08:27players and about three thousand
- 08:29referees
- 08:30from multiple countries in Europe.
- 08:33He got a bunch of
- 08:34data on each of the
- 08:35players,
- 08:36as well as the referees.
- 08:38And then what was kind
- 08:39of unique was he was
- 08:40able to scrape pictures of
- 08:42many of these players from
- 08:43the web, and he had
- 08:44them coded by two different
- 08:45reviewers so that he could
- 08:47objectively tell who was light
- 08:48skinned or dark skinned. Right?
- 08:50And then with this, he
- 08:51created this, like, really unique
- 08:53analytic data file of about
- 08:55a hundred and fifty thousand
- 08:57player referee dyads.
- 08:58I'm gonna pause for a
- 08:59second because I think the
- 09:01traditional story at this point
- 09:02in this type of research
- 09:04is, you know, he would
- 09:05go into the backroom.
- 09:07Right? He'd find a number
- 09:09of, like, really,
- 09:11you know, talented postdocs,
- 09:13you know, students,
- 09:15researchers,
- 09:16and they would start to
- 09:17run some
- 09:18analyses and come up with
- 09:19an answer. And depending on
- 09:20the answer, we all know
- 09:21what would happen, right? It
- 09:22would be picked up by
- 09:23the New York Times and
- 09:24the Wall Street Journal and
- 09:26would show up everywhere.
- 09:28But here's where the whole
- 09:30project gets very interesting because
- 09:32actually Nosek really wasn't interested
- 09:34in whether light skinned or
- 09:36dark skinned
- 09:37players got red cards more
- 09:38often. Actually what he was
- 09:40interested in is, what do
- 09:41you think about this question
- 09:42and how would you answer,
- 09:44you know, this particular hypothesis?
- 09:47And so what he decided
- 09:48to do was take that
- 09:49data and actually crowdsource it.
- 09:52So he has a huge
- 09:53social network, so he decided
- 09:55to recruit analytic teams from
- 09:56around the world. And he
- 09:57just said, listen. Here's the
- 09:58question you need to answer.
- 10:00Are soccer referees more likely
- 10:02to give red cards to
- 10:03dark skinned players than to
- 10:04light skinned players? That's it.
- 10:06Answer that question in whatever
- 10:07way you want and I'm
- 10:08gonna give you the data.
- 10:09And what's fascinating was seventy
- 10:11seven teams expressed serious interest,
- 10:13right, thirty three ended up
- 10:14submitting proposals,
- 10:16and then twenty nine actually
- 10:18went through and they included
- 10:19about sixty one analysts and
- 10:21generated final reports.
- 10:24Alright.
- 10:25The key thing is that
- 10:26the teams made all their
- 10:28analytic choices independently
- 10:30of each other, but
- 10:32they were able to view
- 10:33others' plans before they actually
- 10:35carried them out.
- 10:36And then this paper was
- 10:37published in two thousand and eighteen as a result of
- 10:40this study.
- 10:41You know, the title is
- 10:42many analysts, one dataset.
- 10:44And I'm gonna walk you
- 10:45through the key finding here.
- 10:46Right? So this is the
- 10:47summary finding from this study.
- 10:50And what it shows is
- 10:52these twenty nine results from
- 10:53these different teams.
- 10:55And what you can see
- 10:56here is there's this line
- 10:57of unity.
- 10:59You know, things above the line indicate a higher likelihood of a dark-skinned player getting a red card. Those below indicate a lower likelihood.
- 11:12And you can see that
- 11:13the results vary. Their confidence
- 11:15intervals vary.
- 11:17And in general, the summary was that the effect sizes ranged from about point eight nine, where dark-skinned players were slightly less likely to get a red card, to almost a threefold higher risk of getting a red card.
- 11:33Twenty of these were found
- 11:35to be significant based on
- 11:36kind of traditional hypothesis testing.
- 11:40And then, you know, not
- 11:41surprisingly, the variation was explained
- 11:43by the analyst choices in
- 11:45statistical modeling.
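[The kind of analyst-driven variation described above is easy to reproduce on synthetic data. A minimal sketch, with made-up numbers rather than the actual crowdsourced dataset: two defensible analytic choices, pooling all dyads versus stratifying by player position, yield noticeably different rate ratios from the very same data.]

```python
import random

random.seed(1)

# Hypothetical dyad-level data (NOT the real crowdsourced dataset).
# Position, not skin tone, drives red cards here, but defenders are
# more often coded dark-skinned, so the two analyses disagree.
dyads = []
for _ in range(200_000):
    defender = random.random() < 0.4
    dark = random.random() < (0.6 if defender else 0.3)
    red = random.random() < (0.03 if defender else 0.01)
    dyads.append((dark, defender, red))

def red_rate(rows):
    return sum(red for _, _, red in rows) / len(rows)

# Analyst A: crude comparison over all dyads pooled
crude = red_rate([d for d in dyads if d[0]]) / red_rate([d for d in dyads if not d[0]])

# Analyst B: compare within position strata, then average the stratum ratios
stratum_ratios = []
for pos in (True, False):
    stratum = [d for d in dyads if d[1] == pos]
    stratum_ratios.append(
        red_rate([d for d in stratum if d[0]]) / red_rate([d for d in stratum if not d[0]])
    )
adjusted = sum(stratum_ratios) / len(stratum_ratios)

print(round(crude, 2), round(adjusted, 2))  # crude looks like an effect; adjusted sits near 1
```

[Both analysts ran something defensible on identical data; only the stratification choice separates a "finding" from a null result.]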
- 11:47Alright.
- 11:48So
- 11:49I think I want you
- 11:50to just sit with this
- 11:51for a moment and think
- 11:52about this because I think
- 11:53what's kind of, you know,
- 11:56disturbing a bit is the
- 11:58wide variability
- 11:59in what people
- 12:01could see or expect from
- 12:02these. Because the the truth
- 12:04of the matter is that
- 12:05Nosek could have run any
- 12:07one of those experiments.
- 12:09We would have never known
- 12:10which one he had run
- 12:11or chosen, and then he
- 12:12could have written a story
- 12:14behind those.
- 12:15So
- 12:16when you look at, like,
- 12:17how people responded to this,
- 12:18I'm gonna give you a
- 12:19few voices that I think
- 12:20are very important to think
- 12:21through. So one is, you
- 12:23know, a name that's gonna
- 12:24be obviously
- 12:25very familiar to many of
- 12:26you here, Nicholas Christakis,
- 12:28who's probably the most famous,
- 12:31you know, physician sociologist in
- 12:32the world today. So Christakis
- 12:35said very disturbing.
- 12:36Twenty nine analytic teams tackle
- 12:38whether a player skin tone
- 12:39affects red cards in soccer
- 12:41and find, you know, variation
- 12:43in the results.
- 12:44And this is a
- 12:45key point, even by experts
- 12:46with honest intentions. Right? None
- 12:48of these teams
- 12:49that were analyzing this data
- 12:51had a huge stake in
- 12:52this question.
- 12:54John Mandrola, who's a very
- 12:56famous, like, physician blogger,
- 12:59you know,
- 13:00wrote, "Rare is a study that forever changes your view."
- 13:03And then I
- 13:04certainly don't have to introduce
- 13:06this guy to you. Harlan
- 13:07Krumholz wrote, this is one
- 13:08of the most important studies
- 13:10published this century.
- 13:12So I think at the
- 13:13core, you know, what
- 13:14these, like,
- 13:16observers were noting was that
- 13:18when we think of science,
- 13:19we think of science having
- 13:20variability in many aspects,
- 13:22you know, concerns about replication
- 13:24and reproducibility,
- 13:26but not when it comes
- 13:27to this fundamental aspect of
- 13:28analyzing data. We think that
- 13:29data itself is true and
- 13:31that we are revealing
- 13:32some underlying,
- 13:34you know, fundamental facts about
- 13:35nature.
- 13:36So one of the things
- 13:38that I wanted to talk
- 13:38about today is this question
- 13:40of is science broken and
- 13:41then the problem of replication
- 13:43in research.
- 13:44I want to talk about
- 13:45why it occurs and its recent drivers.
- 13:47And then finally, I want
- 13:48to describe potential lessons and
- 13:50solutions. And I'm gonna do
- 13:52this in a couple of
- 13:53different ways, but as my
- 13:54title alluded to, I think a lot of this has evolved over my time in the last ten years. And, you know, Eric and I were talking earlier
- 14:04today about how, you know,
- 14:05we all wear lots of
- 14:06hats. Right? Like, many of
- 14:08us here, you know, work
- 14:09clinically,
- 14:10we work as researchers, and
- 14:12we work as editors.
- 14:13And one of the fundamental
- 14:14things that's been really transformative
- 14:16for me, you know, over
- 14:17the last ten years is
- 14:19this understanding of how you
- 14:20think about things from an
- 14:22editor's perspective. And and the
- 14:24way that I've been framing
- 14:25it lately is
- 14:26when I'm a clinician, I
- 14:28often think about the numerator.
- 14:29Right? The patient that's in
- 14:31front of me, there is
- 14:32no outlier when you're a
- 14:33clinician. Right? Everybody has their
- 14:35own unique story. They come
- 14:37to you with their own
- 14:38unique needs.
- 14:40But when you're an editor,
- 14:41you're on the far end.
- 14:42Right? You think a lot
- 14:43about the denominator. Right? You
- 14:45think about how generalizable is
- 14:47this? What does this mean
- 14:48beyond, you know, this particular
- 14:51example? And how important and
- 14:52impactful is it to the
- 14:53field?
- 14:54And one of the things
- 14:55that I've realized, like, over
- 14:56the years is my
- 14:57own transformation as I look
- 14:59at science and studies,
- 15:01and that's been a really
- 15:03amazing,
- 15:04you know, opportunity and a
- 15:05privilege.
- 15:06You know, my editor's perspective
- 15:08comes through Circulation: Cardiovascular Quality
- 15:10and Outcomes. This is a
- 15:11journal that was founded by
- 15:13by Harlan.
- 15:15It's now been rebranded as
- 15:17Circulation: Population Health and Outcomes,
- 15:18but it's part of the
- 15:20family of journals.
- 15:21It deals with mainly observational
- 15:23research but also clinical trials
- 15:25and qualitative studies. And we
- 15:26publish about eighty to a
- 15:28hundred articles a year. We
- 15:30have about a ten percent
- 15:31acceptance rate. And just doing
- 15:33the math over ten years,
- 15:34I've realized, like, I
- 15:36have looked at about ten
- 15:37thousand, you know, papers that
- 15:39have come across my desk.
- 15:40Now I'm not gonna lie
- 15:42to you and tell you
- 15:42I've read every one of
- 15:43them,
- 15:44but, like, I certainly have
- 15:46read their titles and abstracts
- 15:47and learned a bit from
- 15:49each one.
- 15:50But the thing about Circ
- 15:52Outcomes that's fascinating to
- 15:53me is it sits right
- 15:55in the middle tier. Right?
- 15:56We we're not the New
- 15:57England Journal. We're not JAMA.
- 15:59We're not JACC.
- 16:01You know, thankfully, we're not
- 16:02at the bottom tier either.
- 16:03Right? And we get good
- 16:04science. And it's a very
- 16:06important perspective because in many
- 16:07ways, this is where most
- 16:08of us spend our career.
- 16:10Right? Once in a while,
- 16:10we'll kinda reach and
- 16:12get one of those, you
- 16:13know, high profile articles. But
- 16:15science happens in this middle
- 16:17tier, and it's a fascinating
- 16:18way in which you can
- 16:19look at both the good,
- 16:20the bad, and and sometimes
- 16:21the ugly.
- 16:23So I'm gonna come back
- 16:24to the statement of that
- 16:25core problem. Right? And this
- 16:26is it in
- 16:27a nutshell. The idea that
- 16:28the published scientific literature is
- 16:31producing too many false positive
- 16:32findings that are overrated
- 16:35and not replicable.
- 16:36And this leads to substantial
- 16:38inefficiencies
- 16:39and waste in research.
- 16:40And the key,
- 16:42characteristic here is this this
- 16:43idea of too many. Right?
- 16:45What is too many? Because
- 16:46we all know that science
- 16:47is exploration, and we're gonna
- 16:48go down some dead ends.
- 16:51And I wanna point out
- 16:52that
- 16:53when you think about the
- 16:54too many, that's been really
- 16:56the the tagline of this
- 16:57idea of the replication crisis.
- 16:59And this has been documented
- 17:01not just recently, but I
- 17:02wanna go back to even
- 17:03Charles Babbage, who, writing almost two hundred years ago, described science as filled with too many errors and too many replication issues.
- 17:19Okay. And then the next
- 17:20thing I just wanna say
- 17:21is as I'm approaching this,
- 17:22I I I definitely wanna
- 17:23share with you. I'm not
- 17:25trying to be sanctimonious at
- 17:26all. In fact, many of
- 17:28the things I'm gonna tell
- 17:29you are things that if
- 17:30you even look back at
- 17:31my own research over the
- 17:32years and how we did
- 17:34it,
- 17:35you know, it's
- 17:36kind of one of those
- 17:37things where it's been an
- 17:38interesting transformation in my own
- 17:39career, but it is definitely
- 17:41a perspective that's been changed
- 17:42with this idea of an
- 17:44editorial lens.
- 17:46So, again, you know, this
- 17:47isn't about good or bad.
- 17:48This isn't about, like, you
- 17:50know, angels and devils.
- 17:53There is a whole talk
- 17:55about fraudulent research,
- 17:58that that could be given,
- 17:59but that's not what I'm
- 18:00talking about. I'm talking about
- 18:01good people trying to do
- 18:02good work, but then sometimes
- 18:04getting caught up in some
- 18:05of the limitations of what
- 18:06we can derive from data
- 18:08itself.
- 18:09Alright. So I told you
- 18:10the story about Brian Nosek,
- 18:11and I'm sure that many
- 18:12of you are like, okay.
- 18:13That's great. But, I mean,
- 18:14come on. You're talking about
- 18:15a psychologist and, like, you
- 18:16know,
- 18:18skin tone and red cards.
- 18:20I mean, what does that
- 18:20have to do with anything
- 18:21in medicine? And I'm gonna
- 18:22spend a slide just telling
- 18:24you how critical this is.
- 18:26And one of the areas
- 18:27is that this is just
- 18:28broadly applicable.
- 18:29You know? In fact, like,
- 18:30you know, we can start
- 18:31with, like, health policy
- 18:33and the Hospital Readmissions Reduction Program and mortality. And I'm just gonna mention this because this is, like, one of the homes of this debate. Right?
- 18:42And I point
- 18:43this out. These are two
- 18:44articles that are published in
- 18:45JAMA a year apart. You
- 18:47know, one is from Yale.
- 18:48The other is from
- 18:49the the group at the
- 18:50BI, at the Smith Center.
- 18:52And what I just find
- 18:53fascinating about this is if
- 18:55you if you look at
- 18:55just the conclusions, right,
- 18:57they are literally the exact
- 18:59opposite. Right? So in one
- 19:01conclusion, it says
- 19:03that the HRRP was significantly
- 19:05correlated with reductions in in
- 19:07hospital thirty day mortality after
- 19:09discharge.
- 19:10And then in the next
- 19:11one,
- 19:12you know, it suggests that
- 19:13there was an increase in
- 19:14thirty day post discharge mortality.
- 19:17Now the highlight here
- 19:19is that
- 19:20these investigators
- 19:21on both sides are amongst
- 19:23the world's best. Right? These
- 19:24are experts who use these
- 19:26data all the time. They're
- 19:27using the same exact data
- 19:29set,
- 19:30and they're coming
- 19:31to drastically different conclusions.
- 19:34And the problem is not
- 19:35so much, like, the inconsistencies,
- 19:38but they're published in the
- 19:39same journal,
- 19:40one of our greatest journals.
- 19:42Right?
- 19:43It's like a physics journal,
- 19:45like, publishing that electrons are
- 19:46positively charged one year and
- 19:48then saying it's negatively charged
- 19:50the next year. And then
- 19:51nobody really thinks that we
- 19:53have to reconcile this. We
- 19:54just move on. Right? And
- 19:55there are still to this
- 19:56day people who believe the
- 19:58article on the left and
- 19:59people who are passionately
- 20:01convinced of the truth of
- 20:02the article on the right.
- 20:04Okay. So I pointed
- 20:06this out for health policy,
- 20:07and you guys might say,
- 20:08well, like, look, health policy
- 20:09is is like one step
- 20:10away from the social sciences,
- 20:12you know, give me something
- 20:13more. I mean, if you
- 20:14look at epidemiology,
- 20:16I love this example that
- 20:17I oftentimes give to students
- 20:19about bisphosphonates
- 20:20and cancer.
- 20:21Two articles published just months
- 20:24apart, one in JAMA and
- 20:25the other
- 20:26in the BMJ
- 20:28using the exact same datasets.
- 20:30Right? And on the one
- 20:31on the left, it suggests
- 20:33that there's no significant association
- 20:35with bisphosphonates
- 20:36and GI cancers, and the
- 20:37one on the right suggests
- 20:38the exact opposite.
- 20:41Which one do you think
- 20:42is actually cited more? I'm
- 20:44curious. Do you think the
- 20:45one that shows no association
- 20:47or the one that shows,
- 20:48a positive association? Anybody have
- 20:50any guesses?
- 20:53Yeah. Significantly. Right? So in
- 20:55a lower impact journal, the
- 20:56BMJ, that article is cited
- 20:58much more often.
- 21:01And it just raises the
- 21:02question of, again, you know,
- 21:03we we just live with
- 21:04these kind of,
- 21:06dichotomies and just feel comfortable.
- 21:09You know, okay, that's epidemiology.
- 21:11What about randomized clinical trials?
- 21:13You know, I I'm an
- 21:14interventional cardiologist and it's been
- 21:16fascinating. We've done four studies
- 21:18now on the MitraClip.
- 21:20Two have suggested
- 21:21the MitraClip is is incredibly,
- 21:25you know, important, has survival
- 21:27benefit. Two suggest the exact
- 21:29opposite.
- 21:32And, you know, this isn't
- 21:33gonna be surprising, I think,
- 21:34with randomized clinical trials. You
- 21:35know, this is a
- 21:36great paper from John Concato,
- 21:39who I know is at
- 21:40Yale,
- 21:41and then Ralph Horwitz who
- 21:42was here at the time.
- 21:43But just describing
- 21:45overall that there's always going
- 21:46to be conflicting results from
- 21:47RCTs because they oftentimes represent
- 21:50a range of real outcomes
- 21:51that you expect to see
- 21:52in a clinical setting.
- 21:55What's fascinating is it can
- 21:56even go deeper into the
- 21:58preclinical research setting, right? And
- 22:00you know, this is a
- 22:02paper in
- 22:03Nature in twenty twelve that was
- 22:05written by Glenn Begley and
- 22:07Lee Ellis. These were both
- 22:09individuals that were actually in
- 22:11industry at the time.
- 22:13You just have to read it to kind of understand this, but one of the quotes from this paper that's just so fascinating is
- 22:22that these,
- 22:23investigators
- 22:24talked about Amgen
- 22:25going back and targeting
- 22:28fifty three landmark papers that
- 22:30were published in, like, Nature
- 22:31and Science. So they went
- 22:32back and they found these
- 22:33fifty three papers. Everybody considered
- 22:35them, you know, transformative.
- 22:37And so they wanted to
- 22:39go and see if they
- 22:40could reproduce
- 22:42these results. Right? So to
- 22:43replicate them.
- 22:44And what was fascinating was
- 22:47that after doing all that
- 22:48work in that space,
- 22:50they could only confirm findings
- 22:52in six cases.
- 22:53And, you know, they write,
- 22:54like, even knowing the limitations
- 22:56of preclinical research, this was
- 22:57a shocking result. In fact,
- 22:59like, you know, Begley's gone
- 23:01on to write in other
- 23:02areas that many people in
- 23:04industry
- 23:05don't even trust the
- 23:06studies that come out of
- 23:07Nature and Science at times
- 23:08because sometimes they can be
- 23:10so,
- 23:11like,
- 23:11extreme in their results. And
- 23:13so here, again, are our highest scientific journals, and there's a question about what we're doing with the results that we're discovering.
- 23:22Alright. So
- 23:24I hope I've convinced you
- 23:25a little bit that there
- 23:26is this issue around replication
- 23:27in research and, you know,
- 23:29one of the issues that's,
- 23:31you know, very important for
- 23:32us to kind of,
- 23:33understand.
- 23:34I'm gonna talk a little
- 23:35bit now about why it
- 23:37may be occurring and recent
- 23:38drivers in it.
- 23:41So I think it all
- 23:42comes back to this idea
- 23:44or concept of, like, metascience.
- 23:45And metascience is a very
- 23:47interesting term.
- 23:48It really refers to using
- 23:50the tools
- 23:51of science itself to study
- 23:54the science. Right? And, you
- 23:55know, probably,
- 23:57you know, the person that's
- 23:58been most identified with the
- 24:00idea of metascience
- 24:02and and certainly one of
- 24:03the most famous papers in
- 24:04this space is this one
- 24:05by John Ioannidis, a single
- 24:07author study that was in
- 24:08PLoS Medicine,
- 24:10and it had the provocative
- 24:11title of why most published
- 24:12research findings are false.
- 24:15You know, in this, he
- 24:17goes through this entire simulation
- 24:19modeling
- 24:20around, you know, the the
- 24:22scientific enterprise and why it
- 24:23seems to generate false positive
- 24:25results.
- 24:27He actually created a specific
- 24:29term within his models of
- 24:31bias, and he defined bias
- 24:33or mu as the combination
- 24:34of various design data analysis
- 24:36and presentation factors
- 24:38that tend to produce research
- 24:40findings when they should not
- 24:41be produced.
- 24:42And he went on to
- 24:43say that studies in general
- 24:46are less likely to be
- 24:47true based on several factors.
- 24:49Some of these are,
- 24:51you know, pretty,
- 24:53pretty understandable.
- 24:55The first is obviously the
- 24:56smaller the study, the more likely a finding is an outlier.
- 24:59The smaller the true
- 25:01effect size, the less likely
- 25:03it is to be true.
- 25:05The greater the number of
- 25:06relationships studied
- 25:07with less discriminate selection, that
- 25:09also is gonna increase the
- 25:11likelihood of a false positive
- 25:12finding.
- 25:13Some of these things, though,
- 25:14were actually really interesting in
- 25:16his modeling.
- 25:17One of them was that the greater the flexibility in design, the less likely the study was to be true.
- 25:26The greater the financial and intellectual conflicts of interest, and the more teams engaged in that area of science, the less likely findings were to be true. And then, obviously, the hotter the topic.
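[Ioannidis's model can be written down in a few lines. A sketch of his post-study probability (PPV) formula, where R is the pre-study odds that a probed relationship is true, alpha the type I error, beta the type II error, and u the bias term described above; the two illustrative scenarios at the bottom are my own, not numbers from the talk.]

```python
def ppv(R, alpha, beta, u):
    """Post-study probability that a claimed finding is true (Ioannidis 2005).

    R:     pre-study odds that a probed relationship is true
    alpha: type I error rate (significance threshold)
    beta:  type II error rate (1 - power)
    u:     bias, the fraction of analyses that would not otherwise have been
           positive but get reported as positive anyway
    """
    true_pos = R * (1 - beta) + u * beta * R   # true relationships claimed positive
    false_pos = alpha + u * (1 - alpha)        # null relationships claimed positive
    return true_pos / (true_pos + false_pos)

# Well-powered test of a 1:1 hypothesis with no bias: findings are usually true
print(round(ppv(R=1.0, alpha=0.05, beta=0.2, u=0.0), 2))   # 0.94

# Underpowered long shot (1:10 odds) with modest bias: most findings are false
print(round(ppv(R=0.1, alpha=0.05, beta=0.8, u=0.3), 2))   # 0.12
```

[The second scenario is exactly the combination of factors listed above: low pre-study odds, low power, and bias, which together push the probability a claimed finding is true well below half.]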
- 25:39Okay. So
- 25:41if studies are likely in
- 25:43this way to be untrue,
- 25:45you know, why might that
- 25:46be driving it? And I'm
- 25:47gonna point to three things
- 25:48that I think have been
- 25:49kind of key aspects here.
- 25:51The first is I think
- 25:53we have to have a
- 25:53little bit more,
- 25:55understanding of statistical limitations of
- 25:57current methods.
- 25:59I think the first is
- 26:00one that's that's well described
- 26:02which is the tyranny of
- 26:03the p value.
- 26:04You know, we live
- 26:05in a world of these
- 26:06frequentist statistics
- 26:08and they ignore prior evidence
- 26:10when interpreting findings. And
- 26:12the way that this has
- 26:12an impact is if
- 26:14you think about p values,
- 26:15you know, p values are
- 26:17not how likely a hypothesis
- 26:19is to be true,
- 26:20but it really just is
- 26:21simply how surprised
- 26:23should you be,
- 26:25with the data you've collected
- 26:27if no relationship exists. Right?
- 26:29That's the exact,
- 26:31term and definition of it.
- 26:32And when you look
- 26:33at the implications of this,
- 26:35it does have,
- 26:36like, some striking,
- 26:38consequences.
- 26:40So if you have, like,
- 26:41a toss-up idea, a hypothesis that's
- 26:43likely to be true or
- 26:44not true at a rate
- 26:45of about fifty percent,
- 26:47and you have a p
- 26:48value of point zero five,
- 26:50you've nudged
- 26:51that hypothesis
- 26:52in its likelihood of being
- 26:54true from fifty percent to
- 26:55about seventy one percent. If the
- 26:57p value is point zero
- 26:58one, it goes up to about
- 26:59eighty nine percent. You feel a
- 27:01little bit more confident about
- 27:02it.
- 27:03Now,
- 27:04traditionally, science is about this.
- 27:05Right? When you're gonna invest
- 27:07resources, you wanna have,
- 27:09you know, a hypothesis that's
- 27:11likely to be true as
- 27:12not likely. Right? I mean,
- 27:13you know, we even use
- 27:14this language when we talk
- 27:16to patients and recruit them
- 27:17for studies. We say a
- 27:18flip of the coin.
- 27:20Now
- 27:21if you're studying something that
- 27:22you already know that works,
- 27:24yeah, you know, p values
- 27:25that are significant are gonna
- 27:26nudge it even further, but
- 27:28it seems a little bit
- 27:29pointless at that point.
- 27:31But here's the concern:
- 27:32increasingly,
- 27:34I think studies are actually
- 27:35investigating things on the long
- 27:36shot. And I'm not sure
- 27:38if we're being as, you
- 27:39know, transparent about that when
- 27:41it happens. Right?
- 27:42When you have a five
- 27:43percent chance of a real
- 27:44effect, but you're studying it,
- 27:46you can get a significant p
- 27:47value, and all that's done
- 27:48is change that to an
- 27:49eleven percent chance of a
- 27:51real effect.
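[The three updates quoted here (fifty percent to about seventy-one, to about eighty-nine at p = .01, and five percent to eleven for a long shot) can be reproduced with the Sellke–Bayarri–Berger bound, which converts a p value into a best-case Bayes factor of -e·p·ln(p). A sketch, assuming that calibration is what lies behind the speaker's figures:]

```python
import math

def posterior_prob_real(prior, p):
    """Upper bound on the posterior probability of a real effect, using the
    Sellke-Bayarri-Berger calibration: the Bayes factor favoring the null
    is at least -e * p * ln(p), valid for p < 1/e."""
    bf_null = -math.e * p * math.log(p)   # best case for the alternative
    prior_odds = prior / (1 - prior)      # odds the hypothesis is true before the study
    post_odds = prior_odds / bf_null
    return post_odds / (1 + post_odds)

print(round(posterior_prob_real(0.50, 0.05), 2))  # 0.71: a toss-up nudged to ~71%
print(round(posterior_prob_real(0.50, 0.01), 2))  # 0.89: p = .01 gets you to ~89%
print(round(posterior_prob_real(0.05, 0.05), 2))  # 0.11: a long shot only reaches ~11%
```

[The long-shot line is the point of the passage: the same "significant" p value moves a 5% prior only to about 11%, nowhere near a confident finding.]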
- 27:52And the comments I'll make a little bit later about the increase in data sizes, the increase in exploration, and just the volume
- 27:59of studies that are coming
- 28:01through, I have to be
- 28:02honest. I think that more
- 28:03and more studies are being
- 28:04done with the mindset of
- 28:06the long shot.
- 28:09What else can make a
- 28:09difference? Well, I mean, there's
- 28:11obviously the concern around researcher
- 28:13degrees of freedom.
- 28:14So what do I mean
- 28:15by that? I mean that
- 28:17researchers don't just conduct an
- 28:18experiment. They conduct many experiments.
- 28:20You know, just go back
- 28:21to the Nosek example. Right?
- 28:23Like I said, he could
- 28:24have done any one of
- 28:25those twenty nine studies and
- 28:26picked and chosen which one,
- 28:28and we would have never
- 28:29known which one his
- 28:30group had actually, you know,
- 28:32settled
- 28:33on. And that leads to
- 28:35the potential for p hacking.
- 28:36Right?
- 28:37And it doesn't have to
- 28:38be in a nefarious way.
- 28:40Right? As researchers, anyone who's
- 28:41in the trenches knows we
- 28:43all make these decisions on
- 28:44a day to day basis.
- 28:45Right? We have to think: should we collect more data? Should some observations be excluded because they just don't make sense? Which control variables should actually even be considered?
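Those day-to-day degrees of freedom can be simulated. A hypothetical sketch (all numbers invented): there is no real effect, but the analyst re-tests after every batch of subjects and stops as soon as p < 0.05, which inflates the nominal five percent false-positive rate:

```python
import math
import random

random.seed(42)

def p_value(x, y):
    """Two-sided p for a two-sample z-test (normal approximation)."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    z = (mx - my) / math.sqrt(vx / nx + vy / ny)
    return math.erfc(abs(z) / math.sqrt(2))

def one_study(batch=20, looks=4):
    """True null, but test after every batch and stop at p < .05."""
    x, y = [], []
    for _ in range(looks):
        x += [random.gauss(0, 1) for _ in range(batch)]
        y += [random.gauss(0, 1) for _ in range(batch)]
        if p_value(x, y) < 0.05:
            return True   # a "significant" result despite no real effect
    return False

trials = 2000
fp = sum(one_study() for _ in range(trials)) / trials
print(f"false-positive rate with peeking: {fp:.3f} (nominal 0.05)")
```

Optional stopping is just the easiest degree of freedom to simulate; outcome switching and covariate selection inflate error rates in the same way.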
- 28:55And then, you know, we
- 28:57all have a limited amount
- 28:58of bandwidth. Right? So we
- 28:59end up
- 29:01being biased to report and
- 29:02publish only what works. Right?
- 29:04This is the classic file
- 29:05drawer problem.
- 29:06You know, we've done
- 29:08a number of analyses that
- 29:10just didn't seem like they
- 29:11were going anywhere.
- 29:12And, you know, to spend the limited amount of
- 29:15time we have as a
- 29:16lab to report those out
- 29:18and then try to find
- 29:18someone who's willing to publish
- 29:20it just doesn't happen. And
- 29:21so those actually sit, again,
- 29:23in someone's file drawer while
- 29:25the positive studies end up
- 29:26getting,
- 29:27pushed out to our literature.
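The file-drawer dynamic can also be made concrete with a small simulation (hypothetical numbers): many underpowered studies of a small true effect, of which only the "significant" ones reach the literature, so the published average overstates the truth:

```python
import math
import random

random.seed(1)

true_effect = 0.2   # small real effect (in standard-deviation units)
n = 30              # small, underpowered studies
sims = 5000

published = []
for _ in range(sims):
    xs = [random.gauss(true_effect, 1) for _ in range(n)]
    mean = sum(xs) / n
    se = 1 / math.sqrt(n)        # sd is known to be 1 in this toy setup
    if abs(mean) / se > 1.96:    # only "significant" studies leave the drawer
        published.append(mean)

mean_pub = sum(published) / len(published)
print(f"true effect {true_effect}, mean published effect {mean_pub:.2f}")
```

The mean published effect comes out well above the true 0.2: selective publication and the winner's curse are two faces of the same file drawer.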
- 29:29And then finally, I just
- 29:30wanna talk about study design
- 29:32limitations.
- 29:34You know, it's interesting. Yuan and
- 29:36I had a great conversation
- 29:37earlier today about just the
- 29:39questions of data collection and
- 29:41outcomes measurements. Right?
- 29:43Like, we live in a
- 29:44world that's changed significantly
- 29:46from when I was a
- 29:47fellow.
- 29:48You know, we do studies
- 29:49now on digital health tools.
- 29:53And, you know, we have
- 29:54analytic files that literally have,
- 29:56like, billions of cells.
- 29:58And there's
- 29:59so much analytic complexity to,
- 30:01like, condensing that down to,
- 30:02like, actual manageable data.
- 30:05You know, with these data, in many ways you have to have almost a leap of faith, right, around whether this is actually measuring what we think it's measuring. How do we set up guardrails around that?
- 30:17And I'll talk a little
- 30:18bit about our own team's
- 30:20experience with some of the
- 30:21complexity,
- 30:22in that area.
- 30:23And then, obviously, there's always
- 30:25these challenges of causal inference.
- 30:27Right? And this is an
- 30:28area that, you know, real
- 30:30world evidence has made its
- 30:31way back into a lot
- 30:32of discussion.
- 30:33I think there's definitely some
- 30:35opportunities,
- 30:36for understanding how that data
- 30:38can kind of complement,
- 30:40questions. But, you know, is
- 30:42it enough to always draw
- 30:43that causal inference? And this
- 30:45is something that's real. Right?
- 30:46Like,
- 30:48Kirsten Bibbins-Domingo, who's the editor-in-chief of JAMA, has really
- 30:52talked about how we need
- 30:54to think about ways in
- 30:55which we can leverage observational
- 30:56studies
- 30:57to to draw causal inferences,
- 30:59but there's always concerns and
- 31:01dangers about that path. It's a path that in medicine we've certainly gone down a number of times.
- 31:08And then finally, there's the question that even ideal study designs are vulnerable, particularly given the relevance of the question and population. And,
- 31:17you know, one of my
- 31:18favorite examples is a study
- 31:19that we were involved with.
- 31:20We were able to do
- 31:21with
- 31:22Bobby Yeh and,
- 31:24you know, the Smith Center.
- 31:26You know, we we published
- 31:27a paper about how parachute
- 31:29use, which has oftentimes been
- 31:31described as,
- 31:32you know,
- 31:33something that doesn't need a
- 31:34randomized controlled trial and has never had one.
- 31:38You know, so we went out and we tested
- 31:41parachute use, and we found
- 31:42that it actually,
- 31:44as you can see, did
- 31:45not reduce death or a
- 31:46major traumatic injury when jumping
- 31:48from an aircraft.
- 31:50And the whole tongue
- 31:51in cheek play of that
- 31:52was the aircraft were on
- 31:53the ground, so the jump
- 31:54was about four feet.
- 31:56And so it just makes
- 31:57you realize, like, even if
- 31:58you have that label
- 31:59of, like, a randomized controlled
- 32:01trial, it doesn't necessarily mean
- 32:03that that's gonna give you
- 32:04the answer you want. And
- 32:06if you don't think that this applies, ask anybody who's tried to do studies in devices like the Impella or in high-risk situations like cardiac arrest care; it's very applicable. Right?
- 32:18Many people think that the randomized clinical trials being null in the cardiac arrest space are largely because most people are already dead by the time you give them an intervention. And trying to find that spot in which you can actually make an impact and show a benefit is more challenging than one would imagine.
- 32:38Okay. So if these are
- 32:40all true, what about drivers
- 32:41in recent years?
- 32:43Well, I wanna speak about this. I think
- 32:45that this is a very
- 32:46important and core problem we
- 32:48have. You know, the first
- 32:49is this just this idea
- 32:51of
- 32:52wide availability of data and
- 32:54analytical tools. Now there's obviously
- 32:55a positive side to this.
- 32:57I'm gonna talk about a
- 32:58little bit about how this
- 32:59has an underbelly too. And
- 33:02one of my favorite
- 33:03papers of all time is
- 33:04Rohan's paper.
- 33:06I'm sure many of
- 33:07you have,
- 33:08you know, seen it or
- 33:09remembered it, but he did
- 33:10this very interesting analysis of
- 33:12the national inpatient sample.
- 33:14You know, the NIS is
- 33:16a dataset that's provided
- 33:18by the federal government.
- 33:20It uses a sample
- 33:22of hospital admissions,
- 33:24in the country, and it's
- 33:25weighted in a way that
- 33:26you can make nationwide assessments.
- 33:29And it's a very complicated sampling scheme, and there are certain rules: you have to follow these methodologic recommendations, otherwise you can draw completely incorrect inferences.
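As a toy illustration of why those weighting rules matter (the numbers below are invented, not real NIS data): each sampled discharge carries a weight for how many national discharges it represents, and ignoring the weights biases any "national" estimate:

```python
# Each tuple: (died_in_hospital, discharge_weight). The weight says how many
# national admissions this one sampled admission stands for.
sample = [
    (1, 5.0),  # small-hospital stratum: heavily weighted
    (0, 5.0),
    (0, 1.0),  # large-hospital stratum: lightly weighted
    (0, 1.0),
    (0, 1.0),
]

naive = sum(d for d, _ in sample) / len(sample)                   # ignores design
weighted = sum(d * w for d, w in sample) / sum(w for _, w in sample)
print(f"naive in-sample rate {naive:.3f}; weighted national estimate {weighted:.3f}")
```

Real NIS analyses also have to respect the strata and cluster variables for variance estimation; this sketch only shows the point-estimate side.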
- 33:42And what Rohan did was
- 33:43he just basically,
- 33:45you know, looked through the
- 33:46literature, and he found, number
- 33:47one, that the number of
- 33:49studies using this data set
- 33:50is exploding.
- 33:52But then more interestingly, he counted the number of instances of nonadherence to required practices.
- 33:59Right? So what he found
- 34:01was that, you know, the
- 34:03majority of studies, even in
- 34:04journals with high impact factors,
- 34:07you know, showed at least
- 34:08one instance where,
- 34:10the study was not,
- 34:12being done in an appropriate
- 34:14way. And in fact, in
- 34:15one fifth of the studies,
- 34:17they had ignored three of
- 34:18the required practices.
- 34:20And this is a problem.
- 34:21Right? As we get more
- 34:22data and more analytic tools
- 34:23out there, what are we
- 34:25doing,
- 34:26in terms of, you know,
- 34:28protecting,
- 34:30like, our ability to to
- 34:32make sure that the analyses
- 34:33are correct?
- 34:34And,
- 34:35you know, if you think
- 34:37this is just happening in
- 34:38terms of the data side,
- 34:39I mean, I think we're
- 34:40all getting ready for, like,
- 34:41what's already been shown to
- 34:43be happening with the next
- 34:45generation of AI tools.
- 34:48You know, just even this
- 34:49past week in The Lancet,
- 34:51the the cover article was
- 34:53about how there was a
- 34:55an increase in the number
- 34:56of studies that are being
- 34:58published right now in the
- 34:59literature
- 35:00with, like, references that just
- 35:01don't exist. Right?
- 35:04And so it's just becoming more and more of a concern that
- 35:08we have to have, and
- 35:09we had a great debate
- 35:10last night about this.
- 35:12And part of it was
- 35:13this whole discussion of, like,
- 35:14what are the pluses and
- 35:15minuses of these tools?
- 35:17I will say that these
- 35:18tools do have,
- 35:21you know, several advantages.
- 35:23You know? This is a really fascinating article from
- 35:26Science,
- 35:27last year that talked about
- 35:28the scientific production in the
- 35:30era of large language models.
- 35:32And what you can see
- 35:33is this discontinuity.
- 35:35Right?
- 35:35So what they did was
- 35:36they asked investigators: do you use LLMs in your scientific work, and if you do, when did you start using them? And for investigators who ended up using them, this was their scientific productivity in the pre-adoption period, in terms of just papers and publications.
- 35:57And then post adoption,
- 35:59you know, there was a
- 36:00substantial and significant rise in
- 36:02the number of papers that
- 36:03they were putting out.
- 36:05And I do include this because one of the interesting subanalyses that they did,
- 36:13you know, really spoke to
- 36:14the idea that in many
- 36:16ways, these can be very
- 36:17powerful.
- 36:18In non native English speaking
- 36:20geographies, this seemed to have
- 36:22even more of a pronounced
- 36:23impact.
- 36:24And it does raise the
- 36:25question of, like, if it's
- 36:26a good idea and it's
- 36:27good science, shouldn't we use
- 36:29these tools to express those
- 36:30ideas in the most powerful
- 36:32way possible?
- 36:35But the flip side of
- 36:36it is that, you know,
- 36:39if you just start to
- 36:40increase the volume of science,
- 36:41you know, what what are
- 36:42we actually accomplishing? And and,
- 36:44actually,
- 36:45at dinner last night again,
- 36:47you know, one of the
- 36:48things I was talking to
- 36:48Bob about is that the
- 36:50way I see AI at
- 36:51this point, it's kind of
- 36:52like science fertilizer.
- 36:54It's really a force
- 36:55multiplier,
- 36:56And it doesn't, at this
- 36:57point in time, distinguish weeds
- 36:59from crops. Right? And so
- 37:01if your field is messy,
- 37:02you just get more weeds
- 37:03faster. And that's kind of
- 37:04the spot we're in
- 37:06because AI is definitely boosting
- 37:08productivity,
- 37:09but it's currently agnostic to
- 37:11quality, and that that does
- 37:12raise a lot of challenges.
- 37:16Alright.
- 37:18I think that AI
- 37:20and some of these data
- 37:21tools would not be as
- 37:22big of a problem if there weren't this huge, growing inflationary incentive to publish
- 37:27more and more.
- 37:29And, you know, there's so many examples of
- 37:32this in the literature. This
- 37:33is one that we published in Circulation: Cardiovascular Quality and Outcomes,
- 37:36that was just a really quirky
- 37:38little take on this. At
- 37:40the time,
- 37:41you know, we were getting
- 37:42so many systematic reviews and
- 37:43meta analyses, like, every week.
- 37:46And,
- 37:47one of them that was
- 37:48very interesting
- 37:49was, you know, this perspective
- 37:51on it. Again, John Ioannidis
- 37:53as well as Kostas Siontis,
- 37:54who's a cardiologist at the
- 37:56Mayo Clinic,
- 37:57they did a review. And
- 37:58what they pointed out
- 38:00was simply, at the time
- 38:01they were looking at DOACs
- 38:03in
- 38:04AFib for stroke prevention,
- 38:06there were fourteen clinical trials
- 38:08that had been done to
- 38:09date on that particular topic,
- 38:10and there were nearly sixty
- 38:12meta analyses of those fourteen
- 38:14clinical trials. Right? Which is
- 38:15just raising the question of,
- 38:16like, what what are we
- 38:17actually doing?
- 38:19And then, you know, on
- 38:20top of that, you know,
- 38:21there's this huge industry that's
- 38:24growing. I know I didn't
- 38:25think I was gonna spend
- 38:26a lot of time on
- 38:27this, but, you know, of paper mills and predatory publishing that's lowering the barriers even further toward pushing these types of work out there.
- 38:38And, you know, this Wall Street Journal article had a wonderful,
- 38:42you know, I think, summary
- 38:43of it, which is that
- 38:45world over, scientists are under
- 38:46pressure to publish in peer
- 38:48reviewed journals, sometimes to win
- 38:49grants, other times as conditions
- 38:51for promotions.
- 38:53And that really can motivate people to think through the currency. If
- 38:57that's the currency, how do
- 38:59I get more and more
- 39:00out there without really, you
- 39:02know, thinking carefully about the
- 39:03quality?
- 39:06The third thing that I just wanna describe as a driver of the replication crisis, and I do think this is a really important concern these days, is that we've become caught up in this hype cycle in contemporary science. Right? And what I
- 39:20mean by that is, you
- 39:22know, when I started
- 39:24in in science, we didn't
- 39:25really use the words that
- 39:26I think we're adopting now
- 39:28often that seem like they're
- 39:29coming from, like, the tech
- 39:30industry in Silicon Valley. I
- 39:32mean, we talk about
- 39:33moonshots
- 39:34and hacking health and, you
- 39:36know, one brave idea.
- 39:37You know? And I love this centerpiece here, right, because
- 39:42it's such a telling,
- 39:44image. This is from the
- 39:45New York Times, and this
- 39:47is a very generous gift,
- 39:48right, by Mark Zuckerberg and
- 39:50Priscilla Chan. And they pledged
- 39:51three billion dollars. But if
- 39:53you look in the background,
- 39:54right, they have this statement,
- 39:56can we cure all diseases
- 39:57in our children's lifetime, as
- 39:59if we're only three billion
- 40:00dollars short
- 40:02from from this goal. Right?
- 40:04And it just makes you
- 40:05kinda pause and think. And
- 40:07all of this, I think,
- 40:08came to a head when
- 40:09we just saw, like, five
- 40:10or six years ago with
- 40:12the COVID nineteen pandemic. In
- 40:13many ways, this was like
- 40:15a stress test for science.
- 40:16Right?
- 40:17And I'm not sure how
- 40:18well we did because
- 40:21science basically was facing this
- 40:22idea of this huge growth
- 40:24of data sources,
- 40:25this, you know, opportunity to
- 40:27publish more and more, and
- 40:28then this hype cycle. And
- 40:30and you can see all
- 40:30these little examples. Right? Just
- 40:32even in the data sources,
- 40:34the Surgisphere scandal, right, where papers had to get retracted from the New England Journal and The Lancet; publishing more and more with the rise of many of these preprint servers. Right? And then
- 40:47the clutter of low value
- 40:48science, especially when it went
- 40:50straight from preprint server to,
- 40:52like, front page of, like,
- 40:53The Wall Street Journal. And
- 40:55then finally,
- 40:56you know, the hype cycle
- 40:58had a really,
- 40:59huge impact in terms of
- 41:01the idea of the pandemic
- 41:02science mixing with politics. And
- 41:04I'll just give you a
- 41:05quick case study of this
- 41:06that was just, you know,
- 41:07very telling for our institution
- 41:09for reasons you'll see. But,
- 41:10you know, when COVID nineteen
- 41:11came out, there was some
- 41:13initial,
- 41:14identification of cardiac complications,
- 41:17with COVID nineteen. You
- 41:19know, people started to recognize
- 41:20this almost on, you know,
- 41:22day one.
- 41:23There was this really interesting
- 41:25study
- 41:26of about a hundred patients,
- 41:29from Europe
- 41:30that described some MRI abnormalities,
- 41:34in about seventy eight of
- 41:35these individuals. And so then
- 41:37the whole question came up
- 41:38around, you know, myocarditis,
- 41:40especially amongst young people.
- 41:43This had a huge impact.
- 41:44Right? This happened. This
- 41:46paper came out right around
- 41:47the time that the football
- 41:48season was starting in Ann
- 41:49Arbor.
- 41:50And, you know, they
- 41:52care a lot about football
- 41:53in Ann Arbor.
- 41:54And even though, almost immediately upon this paper getting published,
- 42:00there were data concerns. There
- 42:02were questions about inadequate
- 42:03controls. There were unclear clinical
- 42:05implications of, like, whatever MRI
- 42:07findings they were discovering,
- 42:09but they actually canceled the
- 42:10football season.
- 42:13And then what was really
- 42:14interesting was a few months
- 42:15later, a group from the
- 42:17University of Wisconsin
- 42:18replicated this study in more
- 42:20patients
- 42:21and,
- 42:22found that
- 42:24the actual incidence of cardiac MRI findings associated with myocarditis was more like around
- 42:29one percent. Right? That's just
- 42:31the imaging. Right? Not even
- 42:32talking about clinical implications.
- 42:34But what's fascinating about this
- 42:36all is that
- 42:37these numbers were actually drawn just this morning because I always update them. Like, I'm always
- 42:41curious about this. Right? So
- 42:43to date, this study that's
- 42:45published just six months after
- 42:47that other study has been
- 42:48cited a hundred and sixty
- 42:49four times.
- 42:52The other study, right, has been cited fifteen hundred times.
- 42:58And then you can look
- 42:59at, like, the number of
- 43:00views, and you can also
- 43:02look at just the altmetric
- 43:03score in general. Right? This
- 43:04is one of the highest
- 43:05altmetric scores that's ever been
- 43:07produced
- 43:07versus, you know, one that's,
- 43:09you know, good paper but
- 43:10not getting the attention that
- 43:11it probably deserves.
- 43:12Again, you know, raising all
- 43:14these issues. Now
- 43:16I'm not gonna leave you
- 43:17drowning. Okay? I told you
- 43:19I did learn something in ten years here, and I'm gonna
- 43:21give you some lessons,
- 43:22that I'm gonna come back
- 43:23to because I do think
- 43:24that there is a way
- 43:25out of this, and I
- 43:26think we're actually already,
- 43:28you know, on that path.
- 43:29And I I wanna just
- 43:31point out some of this
- 43:32stuff,
- 43:33especially these, you know, potential
- 43:35lessons and solutions. And, you
- 43:37know, I I'm gonna come
- 43:38back to this is something
- 43:39I learned as a clinician,
- 43:41and I brought it back
- 43:42to kind of the editorial
- 43:43role. And I think it's
- 43:45something that anybody who sees
- 43:46patients on a day to
- 43:47day basis will agree with
- 43:48is,
- 43:49I I think there's no
- 43:50lesson like humility, and I
- 43:52think that is hitting
- 43:53a number of, like, you
- 43:55know, scientific areas. And, you
- 43:57know, so there's five things
- 43:58that I I hope we
- 43:59can start to accomplish over
- 44:00the next few years and
- 44:01build on, because many of
- 44:03these are already underway.
- 44:05The first is we obviously need open science, and, you know, transparency in experimental methodology, observation, and data collection has just gotta move forward.
- 44:16You know, the public availability
- 44:17and reusability of data, I think, is a great
- 44:20idea. I think we have
- 44:22to start figuring out guardrails
- 44:23around this.
- 44:25And then also making sure
- 44:27that there's more transparency in
- 44:28the scientific communication side.
- 44:32This is, again, this is
- 44:33an old idea. Right? I
- 44:34mean, Michael Faraday, the great,
- 44:36like, nineteenth century chemist,
- 44:39when a young person asked him, you know, what they should do, he had the famous line: you should work, you should finish, you should publish. And I think many have pointed out that we need one more step. You need to release. Right?
- 44:51And a great example of,
- 44:53like, when we do it
- 44:54the best
- 44:55is is this, this study
- 44:57that was done,
- 44:58out of Boston,
- 45:00and was published in the
- 45:01New England Journal. And if
- 45:03you guys remember, in in
- 45:04two thousand and seventeen,
- 45:06hurricane Maria
- 45:07tore through Puerto Rico.
- 45:09And at the time, there
- 45:10was a lot of confusion
- 45:11about, you know, what was
- 45:13the actual impact of this.
- 45:15Now
- 45:16when,
- 45:17The New York Times and
- 45:18a couple of other media
- 45:19outlets did some estimates, they thought that there were probably about a thousand to twelve hundred deaths that had happened because of that. The Trump administration thought there were sixty-four.
- 45:30Right?
- 45:31And so then these guys
- 45:32went out and they did,
- 45:34a very elegant study where
- 45:35they actually did population based
- 45:37sampling the right way, the way science should be done. And they came up with
- 45:41this conclusion.
- 45:42And what I love about
- 45:43this is a couple things.
- 45:44One is that, you know,
- 45:45you can see that the
- 45:46the point estimates are much
- 45:48higher than what, you know,
- 45:49was previously reported. But look
- 45:51at these confidence intervals. Right?
- 45:53The uncertainty is actually marked
- 45:55here.
- 45:56And even more brilliant was
- 45:58on the date that they
- 45:59published this, they released the
- 46:01full data and analysis so
- 46:03the entire study could be
- 46:04replicated by anybody. Right? So
- 46:06they just said,
- 46:07okay.
- 46:09You know, we want a
- 46:10real debate, a real time
- 46:12debate of these findings. Right?
- 46:13This is what we found.
- 46:14You tell us where we
- 46:15got this wrong or how
- 46:16you would have done this
- 46:17better.
- 46:18And Andrew Gelman, who runs,
- 46:20you know, the stat modeling site, had
- 46:23this wonderful line about, you
- 46:25know, these adjustments represent one
- 46:27simple way to account for
- 46:28biases, but we have made
- 46:29our data publicly available for
- 46:31additional analyses. I I love
- 46:32that thought.
- 46:34The second thing we need
- 46:34to do is we need
- 46:35to think about preregistration
- 46:37more broadly.
- 46:38This has had a dramatic impact on RCTs.
- 46:43You know, this is a
- 46:44study from, you know, years
- 46:46ago. This is the time
- 46:47when RCTs
- 46:49started to require preregistration
- 46:51before publication.
- 46:53And what they found was
- 46:54that,
- 46:55you know, prior to two
- 46:57thousand, when this became a
- 46:58mandate across the,
- 47:01medical journals and community,
- 47:03seventeen of thirty studies had
- 47:05a significant benefit for the
- 47:06intervention
- 47:07on the primary outcome. Now
- 47:09I'm not saying this is
- 47:10the only thing, but, you
- 47:12know, after two thousand, after
- 47:14preregistration
- 47:15were required, only two of
- 47:17the twenty five trials funded
- 47:18by the NIH,
- 47:21had a positive finding.
- 47:24Again, just speaking to the
- 47:25importance of, like, stating your
- 47:27claim,
- 47:28before you collect and analyze
- 47:30the data. And and I
- 47:31think that there is a
- 47:33role in terms of expanding
- 47:34this to preclinical research, to
- 47:36epidemiology, and to observational studies.
- 47:38It's something that we oftentimes
- 47:40ask,
- 47:41authors for at our journal.
- 47:43And there are also guidelines
- 47:44that are starting to come
- 47:45out. There's also many websites
- 47:47also that allow for preregistration,
- 47:50too.
- 47:51The third,
- 47:52lesson, I think, is we
- 47:53need to accept corrections.
- 47:56Just do this as a
- 47:57community. Right? Like, a a
- 47:59great example is the PREDIMED
- 48:00study,
- 48:01which was the study around
- 48:02the Mediterranean
- 48:03diet and cardiovascular
- 48:06disease.
- 48:07When this was originally published,
- 48:10there were
- 48:11some data sleuths who noticed
- 48:12some irregularities
- 48:14in the data that were
- 48:15presented. And a few years
- 48:16later,
- 48:18they realized there had been a break in the protocol in one region. And they went back,
- 48:23and they reanalyzed the data.
- 48:25And then they put the
- 48:26correct data out there. And
- 48:28I love this because it
- 48:29was a way in which
- 48:31the scientific community,
- 48:33actually responded in a positive
- 48:35manner. Right? They didn't immediately
- 48:37throw this out. And the
- 48:38authors had great intention
- 48:40in terms of correcting the
- 48:41record.
- 48:42But I have to say
- 48:43that, you know, this is
- 48:43where it gets a little
- 48:44personal for me is, like,
- 48:45this question of, does science
- 48:47really self correct?
- 48:48You know, we published a
- 48:49paper in our own journal, right, my own journal. And I think I'm
- 48:53the only editor to ever
- 48:55retract from his own journal.
- 48:57And, you know, to talk about this, it really is a complicated space. Right?
- 49:05There's embarrassment. There's unclear responsibility.
- 49:09It can be very time
- 49:09consuming.
- 49:11We don't make it easy
- 49:12to retract even when you're
- 49:13the editor in chief. Like,
- 49:15I had to, you know, push every week: hey, what are we gonna do with this? Because,
- 49:20you know, we need to
- 49:21retract this. And the story
- 49:23is actually quite interesting.
- 49:25The,
- 49:26the PhD,
- 49:28student who is kind of
- 49:29responsible for the analyses,
- 49:31she felt awful about this.
- 49:32And the only way we
- 49:33discovered it was when we
- 49:35tried to apply the same
- 49:36tool in a different population,
- 49:38and we recognized that the
- 49:39results were absurd. Right? They
- 49:41were just nonsensical.
- 49:42So then we went back,
- 49:43and it was a small
- 49:44coding error.
- 49:45And,
- 49:46you know, again, thinking through
- 49:48this, like,
- 49:49she was in a very vulnerable position, and I
- 49:52always, like, come back to
- 49:53the fact that she was
- 49:55brave enough to kind of
- 49:56come and tell us.
- 49:57And, you know, we we
- 49:59tried to encourage that, but
- 50:00I don't think that this
- 50:01happens enough. And I don't
- 50:02think we've created a culture
- 50:04of that,
- 50:05to the extent we need
- 50:06to.
- 50:08Lesson four, I think, is
- 50:09we need to accept no
- 50:10easy answers.
- 50:12Many changes will require improving
- 50:15training in research at all
- 50:16levels. That's why we need
- 50:17institutions like Yale,
- 50:19that produce, like, really good,
- 50:21physician scientists and clinical researchers.
- 50:24You know, we need to
- 50:25just know the limitations
- 50:26of the ways in which
- 50:27we analyze data. We need
- 50:29to push towards the use
- 50:30of stronger study designs.
- 50:33And then, you know, the
- 50:34idea that also better long
- 50:36term education of the public
- 50:37on scientific discourse overall and
- 50:39just communication. And I do
- 50:41really feel that editors and
- 50:42journals must lead in this
- 50:44space, and I know that
- 50:45this is the vision that
- 50:46that Harlan has certainly for
- 50:48JACC, which will be important
- 50:49because
- 50:50it's gonna take, like, our flagship journals, JACC, Circulation, and the EHJ, to really
- 50:56push us towards this.
- 50:58There's an example that that's
- 50:59really telling of this idea
- 51:01of hacking journals. Right?
- 51:04There's a fascinating, oh, it doesn't show up, but there was a paper that
- 51:12was published in Nature about
- 51:14a year and a half
- 51:15ago,
- 51:16on the climate science topic.
- 51:18And right after the author
- 51:19published it, he wrote a piece in The Free Press
- 51:25that was titled, I overhyped
- 51:26climate change to get it
- 51:28published.
- 51:29And he went through, almost like a tell-all, how, he wrote, if you adhere to the mainstream narrative, if you focus on problems rather than solutions even when improvement exists, like the fact that climate change has actually slowed down in some aspects, and if you report the eye-popping statistics rather than the ones that show improvement, then you can really get the mainstream journals excited. And whether you agree with him or not, his thought process was very fascinating to kinda go through. And
- 52:07then finally, this lesson of,
- 52:08like, starting to ask what
- 52:09I call the hard questions.
- 52:11You know, we do need
- 52:12to understand funders and policymakers'
- 52:14role in reform.
- 52:15They played a major role
- 52:17in open science and protocol
- 52:19preregistration.
- 52:20And then many of you
- 52:20have probably seen the NIH
- 52:22director has in recent months pushed this idea that the NIH needs to be focused on replication science. The challenge has been that they haven't really funded that aspect of it, and as an unfunded mandate, I'm not sure it's gonna really move the needle.
- 52:38But I think that at
- 52:39least it's, like, starting to
- 52:40make its way to the
- 52:41highest levels of, of sponsors.
- 52:44And then really just, you
- 52:46know, this, like, fundamental idea
- 52:47at the end of the
- 52:48day of too much research.
- 52:50You know? How much real
- 52:51value have tens of thousands
- 52:53of COVID nineteen studies brought
- 52:54us? I mean, I've
- 52:55seen so many of those
- 52:56studies come across my desk
- 52:57and, you know, when I try to understand how they actually change clinical care or impact us, the answer is marginal at best.
- 53:04And how we think
- 53:05about that when, you know,
- 53:06we think about this idea
- 53:08of democratizing science, which sounds
- 53:10like a great idea on
- 53:11paper, but the implications of
- 53:13that,
- 53:14can be tremendous.
- 53:16So,
- 53:17you know, I I'm gonna,
- 53:19finish off with just this
- 53:20last slide,
- 53:21which is,
- 53:23you know, one of my
- 53:24favorites. I'm a rational optimist, I think, at
- 53:26heart.
- 53:27And, you know, I love
- 53:29this slide.
- 53:31When the New England Journal
- 53:32was celebrating its two hundredth anniversary,
- 53:37the very first article in
- 53:38their series
- 53:39of reflections
- 53:41was by Betsy Nabel and
- 53:43Eugene Braunwald, and it was
- 53:44titled A Tale of Coronary
- 53:46Artery Disease and Myocardial Infarction.
- 53:49And if you look at
- 53:50this slide, it's just really
- 53:51remarkable. Right? You look at
- 53:53these
- 53:54deaths per hundred thousand population
- 53:56rates, you know, going north
- 53:58of, you know, four hundred
- 54:00down over the years from
- 54:02nineteen fifty to about two
- 54:03thousand and ten
- 54:05to, you know, almost like
- 54:07a seventy five percent decrease.
- 54:08And that's just incredible when
- 54:10you think about it. But
- 54:11what I really love more
- 54:13about this slide than anything
- 54:14else is
- 54:15what you just see is
- 54:16this steady,
- 54:17progressive decline. Right?
- 54:19There was no, like, moonshot
- 54:21that did this. Right?
- 54:23Nobody hacked health.
- 54:25It was just you know,
- 54:26you just have this slow
- 54:27decline,
- 54:28you know, based on real
- 54:30advancements in science that happen
- 54:32in a way that I
- 54:33think, you know, incrementalism
- 54:34gets oftentimes,
- 54:36you know,
- 54:37diminished. But but I think
- 54:38it's at the core of
- 54:39of how we progress,
- 54:41because, you know, really, science
- 54:42is not as much about
- 54:44being right as about being
- 54:45less wrong over time. So,
- 54:47anyway, thank you. It's it's
- 54:48wonderful to be here. It's
- 54:49wonderful to visit with everyone.
- 54:55Brahmajee,
- 54:57that was excellent. Thank you
- 54:58for joining us and and
- 54:59for this great visit. Maybe
- 55:01I'll start it off. I'm
- 55:02sure there's gonna be millions
- 55:03of questions.
- 55:04You you introduced the concept
- 55:05of the need for preregistration,
- 55:07which I think,
- 55:08I I completely agree with
- 55:09you.
- 55:12Can you speak
- 55:13to how you would see
- 55:14that happening? We've done it
- 55:16in the clinical trial
- 55:17realm, I think, particularly well.
- 55:18And by the way, I
- 55:19think preregistration step one, you
- 55:21know,
- 55:23submitting
- 55:24planned analysis plans
- 55:26before that first patient's enrolled,
- 55:27I think probably would be
- 55:28a step in the right
- 55:29direction, which is not preregistration.
- 55:31It's actually a
- 55:33but how do you see
- 55:34that actually happening in the
- 55:35outcomes
- 55:36arena?
- 55:37And does it need to
- 55:38be applied
- 55:39to the work we do
- 55:41in preclinical
- 55:42spaces as well? Because I
- 55:43do think that, you know,
- 55:45putting
- 55:46your analysis,
- 55:48your goals, in front
- 55:50should be almost a a
- 55:52way to define
- 55:54the quality of the science
- 55:55that comes out of it.
- 55:56So I'm just curious how
- 55:57you see it kind of
- 55:58rolling out in the outcome
- 56:00space as an example.
- 56:02Yeah. I I think it's
- 56:03a great question.
- 56:05Eric, I I really appreciate
- 56:06it. I I think that
- 56:07there's two things I'd say.
- 56:08One is that
- 56:10I believe that,
- 56:12there are ways in which
- 56:13we can do this even
- 56:14immediately.
- 56:16You know, Brian Nosek
- 56:17is one example.
- 56:19The Open Science Framework,
- 56:21OSF. You can go on
- 56:22his site. You can,
- 56:25register your observational study. You
- 56:27can register the analytic plan,
- 56:29and you can date and
- 56:30time stamp it,
- 56:32which is wonderful.
- 56:35You know, it's a
- 56:35little harder, especially when you're
- 56:37doing secondary data analysis
- 56:38because sometimes the data have
- 56:40been around for a while.
- 56:41You know, you
- 56:42have to have some
- 56:43faith in, like, what the
- 56:44investigator and team are doing.
- 56:46I do think that there's
- 56:47a frame shift of mindset
- 56:48too that needs to happen.
- 56:50I'm not saying that exploratory
- 56:51analysis is not worthwhile. Right?
- 56:53I mean, I think, you
- 56:54know, many of the things
- 56:55we discover, we discover
- 56:57accidentally,
- 56:58and I think there's a
- 56:59role for it. I think
- 57:00what is troubling is when
- 57:03you're doing data exploration,
- 57:05but you're reporting it as
- 57:07if it's hypothesis testing.
- 57:09That's the disconnect. I think
- 57:10there's a role for both
- 57:11types of science for sure,
- 57:13but I think that that's
- 57:14the challenge when you're telling
- 57:15a different type of story
- 57:17from what actually happened.
- 57:19And that's where we can
- 57:20get down these these different
- 57:21rabbit holes.
- 57:24You've given us some great
- 57:26solutions that I think will
- 57:28incrementally improve the quality of
- 57:30our data.
- 57:31But tomorrow,
- 57:32what would you say to
- 57:33your anti-vax MAHA sister-in-law
- 57:37when she asks you about
- 57:38the Nosek study?
- 57:45Yeah.
- 58:47Yeah. No. It's a really
- 58:50tough question. I mean, I
- 57:54okay. So I'm gonna I'm
- 57:55gonna answer it. I'm gonna
- 57:56tread carefully here.
- 57:58I think that,
- 58:00you know, the two
- 58:01things that I just think
- 58:02about immediately are, you know,
- 58:04if you go back,
- 58:06about ten or twenty years
- 58:07ago, I was probably
- 58:08one of the strongest advocates
- 58:09for saying, oh, just release
- 58:11data. Right?
- 58:14And I do believe that
- 58:15that's still probably the right
- 58:16way to do it. But
- 58:17what we've seen over the
- 58:19years is people can take
- 58:21data and they can, you
- 58:22know, manipulate it to a
- 58:23prior story. In fact, like,
- 58:25I'll be honest. I think
- 58:26all human beings, it doesn't
- 58:27matter your political spectrum. We
- 58:28all tend to do it.
- 58:30You know?
- 58:31We we have some answer
- 58:32in mind, and then we
- 58:33kind of selectively go looking
- 58:35for the answers that support
- 58:37it.
- 58:38So I think
- 58:40that that's challenging.
- 58:42I think the second thing
- 58:43I'll just say about,
- 58:44you know, that I
- 58:46you know, Michigan's a purple
- 58:47state. Right? And,
- 58:49you know, it was very
- 58:51interesting because
- 58:52if you go outside of
- 58:53Ann Arbor,
- 58:55like,
- 58:56just even thirty miles, you're
- 58:57in a much different place
- 58:59than you are in the
- 59:00heart of Ann Arbor.
- 59:02I've tried to
- 59:04I don't know. I try
- 59:06to listen a little bit
- 59:07more to my MAHA sister-in-law,
- 59:10but, like, it
- 59:11can be challenging.
- 59:13But the one thing I've
- 59:14realized is nobody wants to
- 59:15be told they're wrong. And,
- 59:17you know, I I don't
- 59:18know.
- 59:19If you have an
- 59:20answer, I'd love to hear
- 59:21it, but
- 59:23data itself is not gonna
- 59:25get us out of, like,
- 59:26you know, the
- 59:27situation I feel like we're
- 59:28sometimes in.
- 59:31Yeah. I'm I wish I
- 59:32had a better answer.
- 59:35Brahmajee, that was a wonderful
- 59:37talk,
- 59:38and thank you for your
- 59:39visit. I,
- 59:41as a basic scientist, I
- 59:43couldn't help but continually
- 59:45compare
- 59:46a lot of your discussion
- 59:48with what I think about
- 59:49in the preclinical or basic
- 59:51science world. And I wanna
- 59:53go back to
- 59:54your concept about hypothesis driven
- 59:56versus
- 59:57sort of observational
- 59:59science. And I've I've joked
- 01:00:00with Harlan about this over
- 01:00:02the years. At least, I
- 01:00:03thought it was a joke.
- 01:00:03I'm not sure he did.
- 01:00:04But,
- 01:00:07you know, I wonder if
- 01:00:08you see a difference
- 01:00:10in how much
- 01:00:11the science is, and I
- 01:00:13use the word pushed if
- 01:00:15it's hypothesis driven. When we
- 01:00:17make a hypothesis,
- 01:00:19we're intellectually
- 01:00:20and emotionally
- 01:00:22invested in that hypothesis.
- 01:00:24And I think science basic
- 01:00:26science gets pushed
- 01:00:28based on hypotheses
- 01:00:31in a bad way often.
- 01:00:33And I I wouldn't think
- 01:00:35that that would happen in
- 01:00:37data observational data analysis
- 01:00:40or outcomes
- 01:00:42analysis where you're looking at
- 01:00:44data without a preconceived
- 01:00:46hypothesis. So I'm I'm wondering
- 01:00:48if you see any
- 01:00:50any advantage or disadvantage to
- 01:00:52coming in science that way.
- 01:00:55Well, I think,
- 01:00:56first of all,
- 01:00:56you're giving too much credit
- 01:00:57to us as, like, outcomes
- 01:00:59researchers.
- 01:00:59I think we come with
- 01:01:01extreme,
- 01:01:02like,
- 01:01:03intellectual biases. In fact, like,
- 01:01:06you know, in our
- 01:01:07journal,
- 01:01:08one of the things that
- 01:01:09always comes up is, you
- 01:01:10know, sometimes, like, we'll get
- 01:01:11a paper, and it'll have
- 01:01:12a number of industry collaborators
- 01:01:14on it, sometimes even first
- 01:01:16authors or senior authors.
- 01:01:17And someone will inevitably, in
- 01:01:19the editorial team meeting, raise
- 01:01:21that question and say, well,
- 01:01:22you know,
- 01:01:26what about this, like, conflict?
- 01:01:27And I always tell folks,
- 01:01:28you know,
- 01:01:30tell me where the science
- 01:01:31is wrong, but, like, we
- 01:01:32can't, like, be stuck in
- 01:01:34this model because,
- 01:01:36I think intellectual
- 01:01:38conflicts like, when you've dedicated
- 01:01:40your entire life and career
- 01:01:41to, like, one particular model,
- 01:01:44like, you have incredible
- 01:01:45conflicts,
- 01:01:46in that space. In fact,
- 01:01:48more powerful sometimes
- 01:01:49than the financial ones. Right?
- 01:01:52And I I don't know
- 01:01:53if we acknowledge it enough.
- 01:01:55I think outcomes researchers come
- 01:01:57with just the same types
- 01:01:58of biases. You know, again,
- 01:02:00it's a cute example,
- 01:02:02the one Brian Nosek ran,
- 01:02:04but we all know. Right?
- 01:02:05If that dataset were analyzed
- 01:02:07in one way,
- 01:02:08Fox News would be all
- 01:02:10over it. If it were
- 01:02:11analyzed in a different way,
- 01:02:12The New York Times would
- 01:02:13be all over it. And,
- 01:02:14you know, and nobody knows.
- 01:02:16Right? And you could see
- 01:02:17people, you know, to the
- 01:02:19point that was raised earlier,
- 01:02:20you know, just closing in
- 01:02:22on that and just, you
- 01:02:24know,
- 01:02:25reporting or choosing which narrative
- 01:02:27is more impactful. So I
- 01:02:28I think outcomes researchers, observational
- 01:02:30researchers, we've got, like, Rohan
- 01:02:32and several others, Bob here.
- 01:02:33I I think we have
- 01:02:34the same biases.
- 01:02:37So Mhmm. Yeah.
- 01:02:39Well, first of all, we're at the end
- 01:02:40of the hour. But thank
- 01:02:41you, Brahmajee, for for coming
- 01:02:42and spending the day with
- 01:02:44us and, for a fantastic
- 01:02:45talk. Really, let's everyone
- 01:02:47give him a hand.
- 01:02:49Thank you.