As research data volumes grow and mandates for data publication become more pervasive, automated tools for managing complex data workflows and ensuring data integrity play a growing role in modern science. In this session we will introduce Globus Flows, a foundational service for orchestrating secure and reliable data management tasks at scale, and Globus Compute, a service that enables you to execute functions on diverse remote systems. We will describe how Globus Flows and Globus Compute fit into the Globus ecosystem of data and compute management services, and how flows can feed downstream data portals, science gateways, and data commons, enabling search and discovery of data by the broader community. We will demonstrate how to run various Globus-provided flows, and discuss initiating flows with triggers and inserting compute tasks into flows. We will conclude with an interactive tutorial showing how to build custom flows using Jupyter notebooks and the Globus web app.
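As a brief preview of the kind of flow the tutorial covers, the sketch below is a minimal, hypothetical flow definition in the Globus Flows definition language (which is modeled on the Amazon States Language). It describes a single Transfer step that moves data between two Globus collections. All endpoint and path parameter names here are illustrative assumptions, not part of the session materials.

```python
# A minimal, hypothetical Globus flow definition with one Transfer step.
# Keys ending in ".$" are JSONPath references resolved from the flow's
# runtime input; the endpoint/path names are placeholders.
flow_definition = {
    "StartAt": "TransferData",
    "States": {
        "TransferData": {
            "Type": "Action",
            # Globus-operated Transfer action provider
            "ActionUrl": "https://actions.globus.org/transfer/transfer",
            "Parameters": {
                "source_endpoint_id.$": "$.input.source_endpoint",
                "destination_endpoint_id.$": "$.input.destination_endpoint",
                "transfer_items": [
                    {
                        "source_path.$": "$.input.source_path",
                        "destination_path.$": "$.input.destination_path",
                    }
                ],
            },
            "ResultPath": "$.TransferResult",
            "End": True,
        }
    },
}

# Deploying and running this definition requires an authenticated Globus
# SDK client (not shown here), e.g. roughly:
#   flows_client.create_flow(title="Transfer flow",
#                            definition=flow_definition,
#                            input_schema={})
```

In a real deployment, the flow's input schema would constrain the `$.input` fields above, and the resulting flow could be started from the Globus web app, the CLI, or a trigger.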