Genentech’s John Marioni on enhancing drug discovery with data and AI

Over his decades-long career working at the intersection of biology and technology, John Marioni has seen how advancements in data and AI have changed the landscape of drug discovery. At Genentech Research and Early Development, a biotechnology company and member of the Roche Group, Marioni—senior vice president and head of computational sciences—and his team are leveraging a lab-in-the-loop approach, which applies AI and machine learning to clinical and experimental data to inform models that can predict the next steps of drug discovery experiments. Ultimately, this strategy can help patients with myriad types of diseases get more-effective therapies faster.

In this episode of Eureka!, a McKinsey podcast on innovation in life sciences R&D, hosts Alex Devereson, Laura Furstenthal, and Navraj Nagra talk with Marioni about the influence of AI on biopharma, the collaborations required to stay on the cutting edge of discovery, and how he and his team balance risk and reward. An edited version of their conversation follows.

The overlap of academia and the biopharma industry

Navraj Nagra: How did your academic background inform the work you’re currently doing? How do innovation and new developments differ between academia and the biotech industry?

John Marioni: There are a lot of similarities between academia and the biotech industry. Both push at the cutting edge, innovate, and then apply those findings to some of the most important problems we’re trying to solve as a community in biology, chemistry, or the clinical sciences.

I spent 12 years in academia running a group at the University of Cambridge that focused on developing methods for analyzing high-throughput genomics data and applying those methods to study how cells decide which fate to choose, particularly during early development. That taught me how to think about data, how to apply computational methods to important questions, and how best to collaborate across disciplines. Science—in academia or in industry—is increasingly interdisciplinary, and understanding how to interface with people with different skills and backgrounds is critical both to accelerating the science and to building a functional team.

I made the transition to work in the industry in 2022 because there was an exciting confluence of data availability, compute power, and methodological developments, as well as an openness to apply these tools in the pharmaceutical industry. I wanted to be part of that and see how I could contribute so we could discover better targets, develop better medicines, and bring them to patients faster.

AI’s effect on the biopharma industry

Navraj Nagra: What cutting-edge innovations in AI and machine learning do you think have the potential to create industry-wide impact?

John Marioni: In our daily lives, we see the effect of AI when we’re using a product such as ChatGPT or an online agent. In the pharmaceutical sector, and the biotech sector more generally, we see similar effects across the pipeline, from developing foundational models to eventually predicting what cell types will do when a specific perturbation happens. The Nobel Prizes awarded in the past year were for foundational models and their downstream applications in designing better trials and speeding up our processes so we can file results from trials more quickly.

I’m excited about agents, particularly autonomous agents. These agents democratize many of these AI tools—they help make them available for people who don’t have a strong computational background, and they’re tunable for specific tasks. People can deploy them effectively in many lines of work and can build their own bots for particular purposes. As more people embed agents and similar tools into their daily work, their impact on the wider population will be tremendous.

Laura Furstenthal: What is the role of data, including multimodal data, in innovations in AI? How is data informing the next generation of treatment for patients?

John Marioni: Data is critical, obviously. To a first approximation, the power of AI models is proportional to the amount of data they ingest. One challenge is ensuring that the data is organized, which is especially difficult for larger companies with legacy systems and complex infrastructures. Other big challenges are ensuring that we can access that data, organize it effectively, and allow it to be ingested not only by individuals but also by computational models. We have to change how we build data foundations so that this corpus of data, and the algorithms built on it, works not only for an individual end user but also for the broader scientific community. If we do that right, these models will have so much more potential.

The second part of the data challenge is generating it for models. We want to be able to generate powerful types of data at scale. Across the sector, people are investing in capabilities to generate data, whether it’s perturbation data, optical pooled screening data, or other data modalities. Generating this data at scale will be necessary so we can train AI models to produce good predictions that help us accelerate our work.

Last, once we’re able to generate predictions, we’ll have to test them to make sure they’re useful. We can test that either in the lab or by collecting other types of relevant data. That process forms the basis of the lab-in-the-loop construct we have at Genentech and Roche: We start with the model, receive a prediction, validate it, and then improve the model. It’s a virtuous circle. You keep doing that until the model generates predictions good enough to complement and guide the next experiments. This process can be applied to target discovery, molecule design, and the clinical context.
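To make that cycle concrete, here is a minimal sketch of a lab-in-the-loop iteration in Python. It is purely illustrative: the functions train_model, predict_candidates, and validate_in_lab are hypothetical placeholders standing in for the real modeling and wet-lab steps, not Genentech’s actual system.

```python
# Minimal conceptual sketch of a lab-in-the-loop cycle.
# All functions and data below are hypothetical placeholders for illustration only.
import random

def train_model(training_data):
    """Placeholder: fit a predictive model on the accumulated data."""
    return {"n_examples": len(training_data)}

def predict_candidates(model, n=5):
    """Placeholder: the model proposes candidates (e.g., targets or molecules) to test."""
    return [f"candidate_{random.randint(0, 999)}" for _ in range(n)]

def validate_in_lab(candidates):
    """Placeholder: a wet-lab or clinical-data readout for each predicted candidate."""
    return [(c, random.random()) for c in candidates]  # (candidate, measured score)

# The virtuous circle: model -> prediction -> lab validation -> better model.
training_data = [("seed_observation", 0.5)]
for cycle in range(3):
    model = train_model(training_data)
    candidates = predict_candidates(model)
    results = validate_in_lab(candidates)
    training_data.extend(results)  # feed experimental results back into the model
    best = max(score for _, score in results)
    print(f"cycle {cycle}: model saw {model['n_examples']} examples, best measured score {best:.2f}")
```

The point of the sketch is simply the feedback structure: each round of experimental validation enlarges the training set that the next round's model is built on.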

Laura Furstenthal: What have you learned from embedding this approach into scientific workflows?

John Marioni: We learned that there’s a process and a design behind this, and that these efforts exist within a larger ecosystem. A lot of the challenges we face are around adapting our processes and ways of working to embed this system properly. We’ve taken great strides, but this is an area where there is still more to do. Beyond the data storage, compute, methods, and data generation challenges, we have to make this model work from a people and a process perspective. All of that needs to come together if we’re going to succeed. So don’t underestimate that holistic approach—80 percent of the challenge in many of these efforts is the people part, so getting that part to work makes an enormous difference. And if we do that right, the rest follows.

Alex Devereson: What are some challenges and opportunities that are unique to biopharma R&D, where the individuals you’re working with are scientists and academics?

John Marioni: We intentionally hire extremely smart people, but we all come with our own preferences for how things work and what we think is the right way to do something. Whenever there’s a new approach, depending upon the individual, there are different degrees of skepticism. And frankly, we should be skeptical—otherwise we would adopt anything without thinking. Working with our colleagues to show the potential and to prove that the lab-in-the-loop approach works helps us get buy-in. In biopharma, where people have strong technical backgrounds, everyone knows their area really well, so they need to be convinced. We need to help them see the potential as we move forward.

In academia or biological research, you need to be stubborn and used to failure because many things don’t work. Being skeptical about new tools is important. Nonetheless, this is an industry that is driven by changes in technology, such as the polymerase chain reaction, next-generation sequencing, or cryogenic electron microscopy. When these technologies come out, people adopt them, and AI is another example of that—and perhaps an even more transformative one.

Laura Furstenthal: What are some insights you’ve seen from the lab-in-the-loop approach so far that spark hope?

John Marioni: The most advanced part of our lab-in-the-loop approach is on the molecule design side, which is where we started. The idea is to take a variety of data, say, around the sequence, structure, or other properties of antibodies, and build a foundational model based on those. Starting with a seed sequence, we can generate predictions of other molecules with desired properties. We’re starting to see computationally predicted molecules get embedded into everyday processes and move into the pipeline. We’ll see whether they will succeed—this is a long game—but it’s been exciting to see them in the lead.
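As a rough illustration of that seed-guided design idea, the sketch below proposes random variants of a toy seed sequence and ranks them with a stand-in property scorer. Everything here—the mutation scheme, the scoring function, and the seed string—is an invented placeholder for illustration, not a real antibody model or real data.

```python
# Illustrative-only sketch of seed-guided molecule design:
# propose variants of a seed sequence and rank them by a predicted property score.
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def propose_variants(seed, n_variants=10, n_mutations=2):
    """Placeholder generator: randomly mutate a few positions of the seed sequence."""
    variants = []
    for _ in range(n_variants):
        seq = list(seed)
        for pos in random.sample(range(len(seq)), n_mutations):
            seq[pos] = random.choice(AMINO_ACIDS)
        variants.append("".join(seq))
    return variants

def predict_property(sequence):
    """Placeholder scorer: stands in for a learned model of a desired property
    (e.g., binding affinity or developability)."""
    return sum(ord(ch) for ch in sequence) % 100 / 100.0

seed = "QVQLVQSGAEVKKPGASVKVSCKAS"  # toy seed string, not a real antibody sequence
ranked = sorted(propose_variants(seed), key=predict_property, reverse=True)
print("top candidate:", ranked[0], "score:", predict_property(ranked[0]))
```

In a real system, a generative foundation model would replace the random mutations and a trained property predictor would replace the toy scorer; the ranked candidates would then go back into the lab for validation, closing the loop described above.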

Building successful ecosystems in an ever-changing environment

Navraj Nagra: What kinds of collaborations are happening across the life sciences ecosystem, the broader technical ecosystem, and academia? What does it take to make those partnerships successful?

John Marioni: Everybody brings complementary skills to solve the problem, and that makes it fun to work together. You need all those different elements—the technical expertise; the biological, chemical, and clinical insight; and the compute infrastructure—and you need to be able to combine them sensibly. One company is unlikely to have all those elements in-house, so on the compute side we collaborate closely with Amazon Web Services and NVIDIA to speed up how quickly we can train and deploy the models we’re developing together. In both instances, the experience has been synergistic and complementary. Acknowledging that we are working together in a partnership to drive this forward has been important so it doesn’t feel like we’re two organizations trying to do the same thing.

Alex Devereson: How do you evolve your thinking about the ecosystem, given how quickly this space is evolving and commoditizing? How do you keep track of where the right partners are and where the right innovation is happening?

John Marioni: In January 2024, some of what is possible now in AI was no longer purely academic, but it certainly wasn’t mainstream. Now, we’re beginning to see fine-tuning capabilities and the deployment of agents at a scale that wasn’t there in January [2024]. That’s a rapid change in the environment, and keeping up with it is challenging from a technical perspective, an upskilling perspective, and an investment perspective. Where do you put your effort? How do you partner? How do you work out what is going to be at the cutting edge? We have great scientists internally in this space who can guide us toward what we think are the most important problems, but we still have to move quickly and accept that a year from now this field is not going to look like it does today.

The types of tools we deploy are going to change, especially at the app layer, where the tools and types of approaches turn over very fast, so we have to get comfortable iterating more quickly than we have in the past. The data element—and the organization of the data—is more foundational; you can’t keep iterating the lower layers of your tech stack all the time, though you have to be cognizant of changes.

Laura Furstenthal: We talk a lot about the taker, maker, and shaper framework.1 How do you use that framework in the work you’re doing?

John Marioni: Some AI tools will be pretty much off-the-shelf, such as those that summarize your emails. Other tools, such as those used for protocol generation, will be available, but you will have to adapt them to particular constraints or issues you have internally. And then there are areas where you’ll want to innovate and take more of a risk.

For the first category, the everyday tools, you’ll want to ensure everybody in the organization has the right skills to deploy them and can take advantage of them. For the second category, where you’re fine-tuning tools so they’re fit for purpose, a smaller group of users will be focused on that. Those tools will probably have targeted applications, so working with that group to understand business processes and how they need to adapt to embed these tools is important. For the more cutting-edge work, you probably don’t want to invest in a large number of tools. These investments should be made in a targeted way, and they need to be resourced appropriately because they require bespoke infrastructure and can require specialized skill sets. You want to ensure the tools you decide to invest in have the highest chance of success.

A new era of discovery in biopharma

Navraj Nagra: There have recently been some exciting announcements from the Human Cell Atlas project2 and from MIT around the Boltz effort,3 which can now predict 3D structures of complex proteins. How are you working with academia to help shape the landscape of AI and biological predictions? And how is academia shaping the work that you’re doing at Genentech and Roche?

John Marioni: We have numerous partnerships across academia in the Bay Area and internationally that range from focusing on generating specific data sets to collaborating on research projects to pure academic postdoc research without a pipeline application. The combination of those is important for us because it lets us remain at the cutting edge alongside the best in academia.

One thing that surprised me when I transitioned from academia to the industry is that the timescale on which success is judged is long. Even when we use these technologies to speed up the research process to get better targets, develop drugs faster, or do the filings faster, it’s still a long process. Having a sense of whether you’re moving in the right direction can be harder in the industry than it is in academia because you don’t have the validation you get from your peers reviewing your paper.

For example, in biopharma, when you’re looking for a molecule that’s going to succeed in a phase three trial, it’s a long slog from the gestation to the birth of that molecule to giving it to patients. It’s important to understand that and know where the intermediary points are along the journey that can be milestones to show you’re moving in a good direction. That sense of scale differs between the two environments.

Alex Devereson: How do you keep energy and engagement up over these long timescales? How do you balance that with the people on your teams?

John Marioni: The motivation is in knowing that you’re creating a medicine that could change someone’s life for the better. That’s a very strong motivator, and it’s one that keeps you enthused all the time. It’s why we’re here. So in some sense, it’s easy to stay excited, even if the process of getting to that medicine is not easy.

From a computational or AI application perspective, it’s important to have some smaller wins. For example, a design that came out of a project might be used and moved forward into the next phase, or you might have been part of the diligence that enabled an amazing deal. You need to celebrate those wins as well because the big prize for any of these efforts is far downstream. It’s about keeping people focused on that big picture while motivating them with shorter-term wins. Otherwise, the reward becomes an abstract thing, and there is a risk of people losing motivation, especially if they’re used to the quick gratification of publications or the other awards and promotions you get within the academic system.

Big rewards from taking responsible risks

Alex Devereson: You mentioned that in academia, you have to be prepared to be stubborn and to fail. Is there a different angle for big bets and failures in biopharma? How does that influence what risks you take?

John Marioni: Risk tolerance can vary among companies, and it can vary depending on whether you’re a start-up, a medium-size company, or a large international pharma like Roche. I think if you’re not a little scared by something you’re doing, you’re probably not being ambitious enough. There has to be some sense that it might fail; otherwise, you’re being complacent. You have to push the envelope with reasonable grounds to expect that it will succeed, but if you know everything is going to work, you may not be pushing the envelope as much as you need to. That’s not to say you should take crazy, uncalculated risks. It’s about getting the balance right between being ambitious and being measured.

Laura Furstenthal: Apart from this motivating fear or willingness to take risks, what other aspects of a culture are most important to get the most out of people in that kind of environment?

John Marioni: You need to have purpose, be collaborative, be curious, have belief, and be nice. Why nice? Quite often, science sucks. Things don’t work. And if you’re in an environment that is not supportive and you’re surrounded by people who are picking at you every time something doesn’t work in a destructive rather than a constructive way, it’s impossible to get anything done. A lot of things don’t work in science and in life. You need nice people who are going to be constructive about how you could move on to the next phase rather than destructive when things don’t work. Without those qualities, it’s harder to get things done, and it burns people out quickly.

Alex Devereson: How do you integrate AI responsibly into your work?

John Marioni: Pharma operates in a heavily regulated environment—rightly so. From our perspective, we want regulation and our AI framework to fit together rather than oppose each other. We are trying to be good stewards of that regulation and to embed any additional requirements into our AI framework, which is the way to move forward. We will have to continually adapt to ensure we are always working in the most appropriate way.

Alex Devereson: What do you think is on the horizon for AI in biopharma?

John Marioni: I think many of these tools we’re developing will be democratized. So far, they have been accessible to relatively small cohorts because it’s been difficult to wrangle the data and fine-tune the tools. Democratization will be transformative because it will bring many of these tools to real life.

In the slightly further future, though not that far off, robotics and automation will have a bigger role in a variety of processes, especially in the lab environment. We can already see this, and it’s going to transform what a lab looks like, especially an experimental lab. It will be interesting to see how that evolves as new types of gen AI technology come together with robotics.
