MCP Users Meeting: Speaker Abstracts

10:30 AM - 11:10 AM Talk 1: Electron Videography and Its Automation for Soft and Energy Materials

Qian Chen | University of Illinois Urbana-Champaign

Abstract:

I will present our group’s recent progress in establishing and using “electron videography” to image, understand, and engineer synthetic and natural nanoparticle systems in space and time at nanometer resolution. These systems underpin the fundamental structure–function relationships behind a wide range of phenomena and applications. In this talk, we will discuss two such systems in detail. The first focuses on metallic nanoparticles assembling into various complex lattices, including a Maxwell lattice, a chiral pinwheel lattice, a colloidal moiré pattern, and nanoparticle swarms, as promising optical and mechanical metamaterials. The second concerns the structural fluctuations and fingering dynamics of membrane protein–lipid assemblies. We will show how we build electron videography upon liquid-phase transmission electron microscopy, electron tomography, and four-dimensional scanning transmission electron microscopy, coupling them with machine learning, automation, and molecular dynamics simulations. I will close by discussing the prospects of autonomous electron videography for understanding and discovering dynamic multifunctional nanoparticle systems in liquid and in operation, at otherwise inaccessible spatiotemporal precision.


11:20 AM - 12:00 PM Talk 2: Intelligent Automation in the Materials Lab: From Statistical Learning to Multi-Agent Reasoning

Maxim Ziatdinov | Pacific Northwest National Laboratory

Abstract:

This talk will explore the evolution of intelligent automation in a materials lab, from powerful statistical learning techniques for optimization to advanced multi-agent reasoning for open-ended discovery.

We begin with the foundational role of statistical learning, exemplified by Bayesian optimization. This technique is a cornerstone for efficiently navigating vast experimental parameter spaces to achieve specific, targeted goals. We will illustrate the versatility of this approach by highlighting different surrogate models – including Gaussian Processes, Bayesian Neural Networks, and Deep Kernel Learning – as essential tools for accelerating hypothesis-driven research, and demonstrate its application to microscopy/spectroscopy and materials synthesis.

At the same time, a singular focus on optimization risks overlooking crucial, unplanned findings. This creates an opportunity for advanced multi-agent reasoning to broaden the scope of discovery. To this end, we introduce SciLink, a multi-agent AI framework designed to “operationalize serendipity” in research. Moving beyond a single objective, SciLink employs an observation-driven pathway that autonomously converts all experimental data into falsifiable scientific claims. These claims are then quantitatively scored for novelty against the existing body of scientific knowledge, allowing the system to identify unexpected results and automatically link novel findings to theoretical simulations for deeper mechanistic insights. By combining the efficiency of statistical learning with the exploratory power of multi-agent reasoning, we can create a holistic scientific process that accelerates both planned goals and serendipitous breakthroughs.


12:25 PM - 1:20 PM Talk 3: Revamping Neutron Scattering Infrastructure for Non-Equilibrium Measurements with FAIR Data at NIST

Craig Brown | NIST Center for Neutron Research

Abstract:

The NIST Center for Neutron Research (NCNR) is a user facility serving thousands of scientific researchers per year. The Center for High Resolution Neutron Scattering (CHRNS), supported by NCNR and NSF, is the heart of the user program. Over recent years, we have completed a time-resolved capability upgrade to several CHRNS instruments and improved aspects of the entire experiment lifecycle based on FAIR principles. While raw data from the instruments have been available on our website for decades, we have sought to increase the value of those data to the experimenters and the scientific community. We developed an improved metadata workflow to help users add sample-specific metadata to their stored data files, automate their data processing, and make the experimental process more robust and transparent. Combined with the automated creation of a DOI for each user experiment, tooling for adding sample metadata to our data streams, capture of process metadata, and an expanded search infrastructure for data and metadata, we believe these improvements have added value for experimental teams and laid the basis for retaining the value of the data beyond its original use.