Klymkowsky will speak about the innovative ways he and his collaborators use AI bots in their classes. His bots, named Dewey and Rita, act as “tutors” for his students. They can also analyze students’ work and instructors’ questions, providing feedback and suggestions that help instructors and course designers identify conceptual obstacles and improve students’ learning skills and outcomes. Recent surveys show that a large majority of both undergraduate and graduate students are using artificial intelligence tools, spurring new ideas on how to incorporate generative AI strategically into college and university coursework.
Scientists, engineers, and technologists live in a world of data, methodical analysis, and measurable outcomes, yet scientific research is increasingly being held to regulatory standards that are vague and ambiguous. How can the scientific community understand and adapt to this shifting regulatory landscape?
This panel will explore diverse approaches to integrating AI in teaching and research. Each panelist will briefly share their current work with AI before opening up to an informal, generative discussion with the audience about the opportunities and challenges AI presents in higher education. Lee Frankel-Goldwater (Environmental Studies) will discuss the AI Literacy Ambassadors program and the use of citizen science lessons to build research and AI literacy skills. Bobby Hodgkinson (Aerospace Engineering) will share insights from embedding AI-generated feedback in engineering courses, highlighting both its pedagogical strengths and limitations. Diane Sieber (Engineering, Ethics and Society) will focus on shifting campus culture through new curricula and the Generative Futures Lab for AI — a collaborative space for experimentation and knowledge-sharing. Together, their work spans interdisciplinary areas of application, classroom integration, and institutional transformation — offering a rich conversation on the evolving role of AI in academia.
Microsoft Windows through Open OnDemand by Paige Despain

7lbd is an innovative open-source project that simplifies Windows deployment in HPC environments by treating Windows as an application within Open OnDemand. The solution eliminates traditional infrastructure complexities by using technologies such as Apache Guacamole, network namespaces, and a simplified Windows VM configuration to provide secure, isolated Windows desktops across computing clusters. It reduces Windows administration to a level that even Linux systems administrators will find easy to manage, while preserving robust security and accessibility, with no Active Directory required.
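To make the architecture concrete, here is a minimal sketch, in Python, of the general pattern the abstract describes; the namespace name, disk image, and VM sizing below are illustrative assumptions, not 7lbd's actual configuration. Each session boots a Windows VM inside its own network namespace, exposing only an RDP port for Apache Guacamole to proxy to the user's browser.

import subprocess

# Hypothetical per-session values; a real deployment would derive
# these from the Open OnDemand job context.
NAMESPACE = "win-session-01"
DISK_IMAGE = "windows.qcow2"

def run(cmd):
    """Run a command, raising an error if it fails."""
    subprocess.run(cmd, check=True)

# Create an isolated network namespace so each user's VM is
# invisible to other sessions on the same compute node. (The veth
# pair that lets the Guacamole proxy reach into the namespace is
# elided here.)
run(["ip", "netns", "add", NAMESPACE])
run(["ip", "netns", "exec", NAMESPACE, "ip", "link", "set", "lo", "up"])

# Boot the Windows VM inside the namespace, forwarding only its RDP
# port (3389) for the Guacamole proxy to reach. Memory, CPU, and
# image settings are illustrative.
run(["ip", "netns", "exec", NAMESPACE,
     "qemu-system-x86_64",
     "-enable-kvm", "-m", "8192", "-smp", "4",
     "-drive", f"file={DISK_IMAGE},format=qcow2",
     "-nic", "user,hostfwd=tcp::3389-:3389",
     "-display", "none"])

The design point is isolation without shared infrastructure: because each VM lives in its own namespace and speaks only RDP to the Guacamole proxy, no domain controller or cluster-wide Windows networking is needed.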
An Approach to SLURM Configuration Verification by Kyle Reinholt

Ensuring the correctness of Slurm configurations is crucial for maintaining high-performance computing (HPC) environments, but validating these configurations effectively remains a challenge. In this lightning talk, we will explore existing approaches to Slurm configuration verification, including manual checks, custom scripts, and automated validation tools. While these methods offer some benefits, they often fall short in scalability and flexibility. The talk will then shift focus to potential solutions for improving configuration verification, discussing innovative strategies and tools that could streamline the process, reduce errors, and enhance cluster reliability.
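As a concrete example of the "custom scripts" category, here is a minimal validation sketch in Python; the checks are illustrative assumptions, and a real tool would also expand hostlist expressions such as node[01-16] and validate resource limits. It parses slurm.conf and confirms that every partition's Nodes= list references a defined NodeName.

import sys

def parse_conf(path):
    """Collect defined node names and each partition's node list."""
    nodes, partitions = set(), {}
    for line in open(path):
        line = line.split("#", 1)[0].strip()  # drop comments
        if line.startswith("NodeName="):
            nodes.add(line.split("=", 1)[1].split()[0])
        elif line.startswith("PartitionName="):
            fields = dict(f.split("=", 1) for f in line.split() if "=" in f)
            partitions[fields["PartitionName"]] = fields.get("Nodes", "")
    return nodes, partitions

def check(path):
    """Report partitions that reference undefined nodes."""
    nodes, partitions = parse_conf(path)
    ok = True
    for part, nodelist in partitions.items():
        for node in nodelist.split(","):
            if node and node != "ALL" and node not in nodes:
                print(f"Partition {part} references undefined node {node}")
                ok = False
    return ok

if __name__ == "__main__":
    sys.exit(0 if check(sys.argv[1]) else 1)

Run against a cluster's configuration (for example, python check_slurm.py /etc/slurm/slurm.conf), a script like this can gate configuration changes in version control before they reach a production scheduler.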
Cyberinfrastructure provides a foundation that supports teaching and research on college campuses—but how do we ensure faculty and researchers can fully leverage its potential? This keynote focuses on the critical role of community engagement and development in delivering on cyberinfrastructure’s promises. From workforce development strategies to proof-of-concept examples, Ana Hunsinger will reflect on key lessons learned from intentional community engagement with MS-CC campuses to drive impactful research outcomes. She will share insights into fostering institutional participation in and contributions to cyberinfrastructure, supporting researchers and educators in leveraging it, and building sustainable engagement strategies that drive long-term success.
Publication of research datasets is now a requirement of most funding agencies and journals. Data curation is the process of ensuring that these datasets are findable, accessible, and usable. In the era of Big Data, the generation of datasets hundreds of gigabytes in size and larger is increasingly common. Such large datasets create challenges for both curation and publishing, as they often cannot be accessed on standard computer hardware or hosted in traditional online repositories. This presentation provides an overview of a collaborative process between the CU Boulder Libraries and CU Boulder Research Computing in which high-performance computing infrastructure is used to curate and publish gigabyte- and terabyte-scale datasets in a manner that makes them accessible to the research community.
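One curation step such a workflow might include, sketched here in Python as an assumption rather than a description of the actual CU Boulder pipeline, is generating a checksum manifest so a dataset's integrity can be verified after transfer and publication:

import hashlib
import os
import sys

def sha256(path, chunk_size=1 << 20):
    """Hash a file in 1 MB chunks so large files never load into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(dataset_dir, manifest_path):
    """Walk the dataset tree and record a checksum for every file."""
    with open(manifest_path, "w") as out:
        for root, _, files in os.walk(dataset_dir):
            for name in sorted(files):
                path = os.path.join(root, name)
                rel = os.path.relpath(path, dataset_dir)
                out.write(f"{sha256(path)}  {rel}\n")

if __name__ == "__main__":
    write_manifest(sys.argv[1], sys.argv[2])

On an HPC system, the per-file hashing would typically be distributed across a job array; the serial loop here is kept for clarity.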
The growing integration of Artificial Intelligence (AI) across diverse research disciplines necessitates a comprehensive understanding of researchers’ current and anticipated AI-related needs. The Research Computing group at the University of Colorado Boulder recently conducted a survey among campus researchers and the broader RMACC community to evaluate AI usage trends, associated computational demands, and challenges faced by researchers. The survey examined discipline-specific differences in each area. Our findings highlight the importance of understanding the composition of a research community when investing in infrastructure and developing training materials.
This talk shares the story of an NSF-funded experiential learning opportunity for undergraduate and graduate students at RMACC institutions. Students developed practical skills in HPC system administration by learning from and shadowing CU Boulder Research Computing staff. A total of 17 students participated across two in-person experiences and took part in various aspects of system design, deployment, and teardown.
This workshop is a continuation of last year's presentation and discussion on quantum-centric high-performance computing (QCHPC). We will again look at realistic use cases in the current noisy intermediate-scale quantum (NISQ) era, how these (and other use cases) might scale with quantum hardware, and strategies for integrating quantum resources with HPC. We will consider approaches to putting these hybrid paradigms into practice; Symposium members are encouraged to contribute efforts made by their HPC departments, from both emulation/simulation and hardware integration perspectives. The workshop is intended as a space to explore ideas, share experiences, and gather knowledge to advance quantum-centric HPC in our region.
As research demands grow and infrastructure ages, some institutions are turning to the public cloud to supplement or replace traditional on-prem systems. This talk shares the journey of launching a public cloud pilot for research computing at a small state university. We’ll explore the drivers behind the shift—including scalability, agility, and cost transparency—and walk through key decisions, from selecting a cloud provider to identifying test users. Drawing from real-world experience six months post-launch, we’ll cover what’s worked, what’s surprised us, and what we’re planning next. Attendees will leave with a practical roadmap and lessons learned to guide their own cloud adoption efforts—whether starting small or scaling up.
NVIDIA's GH200 Grace Hopper Superchip offers strong potential for accelerating large-scale AI and HPC workflows through its tightly integrated CPU-GPU architecture. In this talk, we share CU Boulder Research Computing’s first-hand experience providing GH200 nodes to users. We'll cover the GH200 architecture, our approach to the software stack, an overview of our beta testing phase, and successful use cases run on the GH200s. We'll conclude with potential future directions for CURC’s GH200 resources and describe how RMACC members can access the nodes for their research or educational purposes.
The National Artificial Intelligence Research Resource (NAIRR) is a federal initiative aimed at providing researchers with greater access to advanced AI tools, datasets, and computing infrastructure. By connecting academic institutions, national labs, and government agencies, NAIRR is building a shared ecosystem to support AI-driven discovery across scientific disciplines. This talk will highlight the goals of the NAIRR pilot, outline its current offerings, and explore how institutions in the RMACC community can engage with and benefit from this growing national effort.
Many research computing providers are currently working to comply with enhanced data security requirements mandated by U.S. funding agencies to protect Controlled Unclassified Information (CUI) and other sensitive data used on their cyberinfrastructure. The most common data security frameworks include the National Institute of Standards and Technology (NIST) 800-171 for CUI, the Cybersecurity Maturity Model Certification (CMMC) for defense-related CUI, and the Health Insurance Portability and Accountability Act (HIPAA) for health information. Achieving compliance with these frameworks is a complex and iterative process, and numerous approaches have been undertaken. This panel discussion provides a forum to share information, experiences, and ideas regarding secure HPC.