Klymkowsky will speak about the innovative ways he and his collaborators use AI bots in their classes. His bots, named Dewey and Rita, act as “tutors” for his students. They can also analyze students’ work and instructors’ questions, providing feedback and suggestions that help instructors and course designers guide students past conceptual obstacles and improve their learning skills and outcomes. Recent surveys show that a large majority of both undergraduate and graduate students use artificial intelligence tools, encouraging new ideas for incorporating generative AI strategically into college and university coursework.
As the shape and demands of large-scale computing environments have evolved, so have the needs of those responsible for keeping them in tip-top shape. HPC administrators are challenged with knowing what the data on their systems looks like, who is doing what to that data, and what jobs are running on the system. In this talk we’ll cover how the VAST data platform’s powerful structured-data component and analytics make these tasks easy.
Scientists, Engineers, and Technologists live in a world of data, methodical analysis, and measurable outcomes. Scientific research is increasingly being held to regulatory standards that are vague and ambiguous. How can the scientific community understand and adapt to this shifting regulatory landscape?
In today's research landscape, protecting scientific data from tampering and unauthorized access is critical. Effective data management solutions must ensure integrity and immutability while enabling secure collaboration across institutions and research labs. By implementing policy-driven access controls, encryption, and compliance-driven security measures, organizations can safeguard sensitive research data against breaches and unauthorized modification. This approach not only enhances data security but also ensures adherence to regulatory frameworks, fostering trust in the integrity and reproducibility of scientific discoveries while enabling secure, compliant data sharing.
As quantum computing continues to develop, quantum-inspired algorithms are emerging as powerful tools that leverage quantum principles on classical hardware to solve complex optimization problems. Universities are uniquely positioned to support research in this area by enabling access to compute resources tailored to the specific demands of quantum-inspired techniques. This session explores the strategies, challenges, advantages, and best practices for supporting quantum-inspired optimization workloads on a university compute cluster. We will discuss software frameworks (such as pyqubo, MATLAB tools, D-Wave's Ocean tools, and NEC's Vector Annealing software) and specialized hardware considerations. The session will also highlight case studies from active research projects.
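To make the workload concrete, the sketch below brute-forces a tiny QUBO (quadratic unconstrained binary optimization) instance, the problem form that frameworks like pyqubo and D-Wave's Ocean tools construct and that vector annealers solve. The max-cut encoding and all names here are illustrative, not taken from the session.

```python
from itertools import product

def solve_qubo_bruteforce(Q):
    """Exhaustively minimize x^T Q x over binary vectors x.
    Q is a dict {(i, j): weight}; only feasible for small n,
    which is exactly why annealers and quantum-inspired
    heuristics are used for realistic problem sizes."""
    n = 1 + max(max(i, j) for i, j in Q)
    best_x, best_e = None, float("inf")
    for bits in product((0, 1), repeat=n):
        e = sum(w * bits[i] * bits[j] for (i, j), w in Q.items())
        if e < best_e:
            best_x, best_e = bits, e
    return best_x, best_e

# Max-cut on a triangle encoded as a QUBO: each edge (i, j) contributes
# -(x_i + x_j - 2*x_i*x_j), so every cut edge lowers the energy by 1.
Q = {(0, 0): -2, (1, 1): -2, (2, 2): -2,
     (0, 1): 2, (0, 2): 2, (1, 2): 2}
x, e = solve_qubo_bruteforce(Q)  # a triangle's maximum cut has 2 edges
```

The same `Q` dictionary could be handed to an annealing backend instead of the brute-force loop; only the solver changes, not the problem encoding.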
In this presentation, I will describe our group’s efforts in leveraging high-performance computing (HPC), particularly the RMACC cluster, to tackle computationally intensive optimization problems in quantum computing. In our first case study, we address a quantum optimal control problem that requires fine-tuning numerous pulse parameters to optimize the performance of a quantum gate on a superconducting quantum computer. This optimization is carried out through large-scale parallel executions of a stochastic gradient descent (SGD) algorithm with multiple random seeds on RMACC. In the second case, we apply RMACC to solve a maximum likelihood estimation (MLE) problem for learning structured quantum states from synthetic measurement data. Our approach involves generating a vast number of measurement samples in parallel using a novel autoregressive method and subsequently performing MLE via SGD. In both applications, our approach yields near-optimal results that align with theoretical upper bounds, demonstrating that RMACC could provide an efficient, cost-effective HPC solution for state-of-the-art quantum research in local institutions.
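The multi-seed SGD strategy described above can be sketched as follows. The 1-D objective, seed count, and hyperparameters are placeholders for illustration; in the actual work the parameters are pulse settings and the objective is a gate-fidelity measure.

```python
import random

def sgd_run(seed, steps=2000, lr=0.1):
    """One SGD trajectory from a random starting point on a toy
    1-D objective f(x) = (x - 3)^2 with noisy gradients (a
    stand-in for a high-dimensional pulse-parameter landscape)."""
    rng = random.Random(seed)
    x = rng.uniform(-10.0, 10.0)
    for _ in range(steps):
        grad = 2.0 * (x - 3.0) + rng.gauss(0.0, 0.5)  # stochastic gradient
        x -= lr * grad
    return (x - 3.0) ** 2, x

# On the cluster, each seed would run as an independent job-array task;
# here we loop over seeds serially and keep the best result.
results = [sgd_run(seed) for seed in range(8)]
best_loss, best_x = min(results)
```

Because each seed is independent, the restarts parallelize trivially across cluster nodes, which is what makes the approach a good fit for a shared HPC resource.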
In this session we lead a discussion on facilitation for RCD professionals. The discussion will be free-flowing, but will include topics such as how teams are organized, what their responsibilities are, how small teams handle all of those responsibilities, professional development and career advancement, useful tools, and successes and failures.
This panel will explore diverse approaches to integrating AI in teaching and research. Each panelist will briefly share their current work with AI before opening up to an informal, generative discussion with the audience about the opportunities and challenges AI presents in higher education. Lee Frankel-Goldwater (Environmental Studies) will discuss the AI Literacy Ambassadors program and the use of citizen science lessons to build research and AI literacy skills. Bobby Hodgkinson (Aerospace Engineering) will share insights from embedding AI-generated feedback in engineering courses, highlighting both its pedagogical strengths and limitations. Diane Sieber (Engineering, Ethics and Society) will focus on shifting campus culture through new curricula and the Generative Futures Lab for AI — a collaborative space for experimentation and knowledge-sharing. Together, their work spans interdisciplinary areas of application, classroom integration, and institutional transformation — offering a rich conversation on the evolving role of AI in academia.
Rocky Linux is a community-based Enterprise Linux distribution established in the wake of the announcement that CentOS would shift to a new model focused on being a development branch rather than a rebuild of Red Hat Enterprise Linux. The Rocky Linux community includes Special Interest Groups (SIGs) that help push Rocky Linux further in areas like security, cloud, additional architectures, and HPC. CIQ is the founding sponsor of Rocky Linux and provides support and optimizations for joint customers. CIQ also pushes the boundaries of Rocky Linux by providing variants for specific use cases, such as security, as well as applications for the next-generation IT stack. In this session, we will go over the history of Rocky Linux, describe how SIGs are driving new features and how to get involved, and cover how CIQ supports and enhances Rocky Linux today and what is coming in the future for both CIQ and Rocky Linux.
tmux (“terminal multiplexer”) is a Linux tool with two useful functions: 1) a user may run several terminal windows within a single tmux session, and 2) the tmux session can run in the background. The latter is particularly useful on remote computing systems (e.g., supercomputers) that require user access via ssh, because users may “reattach” to their tmux session each time they log in, minimizing loss of work between logins. This presentation will provide an overview, hands-on exercises, and a discussion of useful tools and best practices to streamline use of tmux. To follow along with the hands-on exercises, register for a Research Computing account beforehand.
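A minimal command sequence for the detach/reattach workflow described above (the session name is arbitrary):

```shell
# On the remote system, start a new named session
tmux new -s analysis

# Inside the session: Ctrl-b c creates a new window, Ctrl-b n / Ctrl-b p
# cycle between windows, and Ctrl-b d detaches, leaving everything running.

# After logging out and ssh-ing back in later:
tmux ls                  # list sessions still running in the background
tmux attach -t analysis  # reattach and pick up where you left off
```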
We will show how projectEureka can increase researcher productivity by providing an interactive research and data science platform. projectEureka provides a convenient, easy-to-use web interface for researchers to access HPC resources, bridging data and compute in the cloud and on premises through Kubernetes.
Quantum computers (QC) can significantly enhance high-performance computing (HPC) as accelerators with unique capabilities for solving challenging chemistry, materials science, and optimization problems. Hybrid HPC+QC systems offer unique advantages that neither classical nor quantum simulations can achieve independently. Our collaboration between IQM, a leading quantum hardware company, and a premier HPC center has demonstrated practical integration of quantum and classical resources. We present the details of our technical implementation, including hardware and software requirements, networking, and selection of the appropriate space to house the quantum computer, as well as initial scientific results coming out of that collaboration. We also discuss how QC can be integrated with minimal disruption into HPC workflows and the benefits of on-prem QC.
Microsoft Windows through Open OnDemand (Paige Despain)
7lbd is an innovative open-source project that simplifies Windows deployment in HPC environments by treating Windows as an application within Open OnDemand. The solution eliminates traditional infrastructure complexities by using technologies like Apache Guacamole, network namespaces, and a simplified Windows VM configuration to provide secure, isolated Windows desktops across computing clusters. It simplifies Windows to a level that even Linux systems administrators will find easy to maintain, while preserving robust security and accessibility, with no Active Directory required.
An Approach to SLURM Configuration Verification (Kyle Reinholt)
Ensuring the correctness of Slurm configurations is crucial for maintaining high-performance computing (HPC) environments, but validating these configurations effectively remains a challenge. In this lightning talk, we will explore existing approaches to Slurm configuration verification, including manual checks, custom scripts, and automated validation tools. While these methods offer some benefits, they often fall short in scalability and flexibility. The talk will then shift focus to exploring potential solutions for improving configuration verification, discussing innovative strategies and tools that could streamline the process, reduce errors, and enhance cluster reliability.
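To make the problem concrete, here is a toy validator sketch of the custom-script approach. The required-key list, the regex, and the duplicate-key rule are simplifications invented for illustration, not any of the tools discussed in the talk.

```python
import re

REQUIRED_KEYS = {"ClusterName", "SlurmctldHost"}  # illustrative subset only

def check_slurm_conf(text):
    """Toy check of a slurm.conf-style file: flags missing required
    keys, unparseable entries, and silently duplicated parameters
    (repeatable keys like NodeName/PartitionName are exempt)."""
    seen, problems = {}, []
    for lineno, raw in enumerate(text.splitlines(), 1):
        line = raw.split("#", 1)[0].strip()  # drop comments and blanks
        if not line:
            continue
        m = re.match(r"(\w+)\s*=", line)
        if not m:
            problems.append(f"line {lineno}: unparseable entry")
            continue
        key = m.group(1)
        if key in seen and key not in ("NodeName", "PartitionName"):
            problems.append(f"line {lineno}: duplicate key {key}")
        seen[key] = lineno
    for key in sorted(REQUIRED_KEYS - seen.keys()):
        problems.append(f"missing required key {key}")
    return problems

example = """\
ClusterName=alpine
ProctrackType=proctrack/cgroup
ProctrackType=proctrack/linuxproc
"""
problems = check_slurm_conf(example)  # duplicate key + missing SlurmctldHost
```

Real validation is harder than this sketch suggests (includes, node ranges, cross-parameter constraints), which is precisely the scalability gap the talk addresses.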
Since their discovery in the late 1800s, electrons have been a constant source of study for scientists. Their properties and behavior have been studied and harnessed to produce some of the greatest inventions of the past century, including electron microscopes and particle accelerators. However, one fundamental question about their behavior still remains: how do electrons move inside atoms and molecules? Electron motion within atoms has proved difficult to study due to the incredibly short timescale on which it occurs (the attosecond timescale, or 10⁻¹⁸ seconds). One method of capturing electron motion is to use very short laser pulses to take a series of snapshots of the system. This requires laser pulses shorter than the duration of the dynamics we want to observe (similar to using a short flash on a camera to obtain an image of a fast-moving object). The means to do this have only become possible in the past decade with the advent of new ultrashort (less than 100 as) lasers, which have become feasible due to a process called high‐harmonic generation (HHG). However, these ultrashort lasers are difficult to produce and characterize experimentally, so theoretical and computational methods are often used in the field of attoscience. These methods are also not without their limitations: modelling the correlated behavior of electrons requires significant computing resources, and so high-performance computing (HPC) resources are often used to perform these calculations. In this seminar I will present recent results obtained using R-Matrix with Time-dependence (RMT) method calculations performed on national HPC resources, firstly to treat high-harmonic generation in two-color laser fields, and then on applications of the attosecond pulses generated during the HHG process to measure ionization delays.