Scientific Computing Speaker Abstract & Bios
Abstract: In biomedical research, the growing array of high-throughput, high-resolution instruments has given rise to large imaging datasets. These datasets provide highly detailed views of tissue structures at the cellular level and have strong potential to revolutionize biomedical translational research. However, traditional human-based tissue review cannot extract this wealth of imaging information, due to the overwhelming data scale and unacceptable inter- and intra-observer variability. In this talk, I will first describe how to efficiently process two-dimensional (2D) digital microscopy images for highly discriminating phenotypic information, through the development of microscopy image analysis algorithms and Computer-Aided Diagnosis (CAD) systems that process and manage massive in-situ micro-anatomical imaging features with high-performance computing (HPC). Additionally, I will present novel algorithms that support three-dimensional (3D), molecular, and time-lapse microscopy image analysis with HPC. Specifically, I will demonstrate an on-demand registration method built on dynamic multi-resolution transformation mapping and an iterative transformation propagation framework, which allows us to efficiently scrutinize volumes of interest on demand in a single 3D space. For segmentation, I will present a scalable two-step framework for histopathological structures: 1) initialization with joint information drawn from spatial connectivity, edge maps, and shape analysis, and 2) variational level-set contour deformation with data-driven sparse shape priors. For 3D reconstruction, I will present a novel cross-section association method for 3D vessel reconstruction that leverages integer programming, Markov chain based posterior probability modeling, and Bayesian Maximum A Posteriori (MAP) estimation.
I will also present new methods for multi-stain image registration, biomarker detection, and 3D spatial density estimation for molecular imaging data integration. For time-lapse microscopy images, I will present a new 3D cell segmentation method that uses gradient partitioning and local structure enhancement based on eigenvalue analysis of the Hessian matrix. A derived tracking method will also be presented that combines Bayesian filters with a sequential Monte Carlo method, jointly using location, velocity, 3D morphology features, and intensity profile signatures. Our proposed methods, spanning 2D, 3D, molecular, and time-lapse microscopy image analysis, will help researchers and clinicians extract accurate histopathology features, integrate spatially mapped pathophysiological biomarkers, and model disease progression dynamics at high cellular resolution. They are therefore essential for improving clinical decisions, enhancing prognostic predictions, inspiring new research hypotheses, and realizing personalized medicine.
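The time-lapse segmentation step above enhances local structures via eigenvalue analysis of the Hessian matrix. As a generic illustration of that building block (a minimal NumPy sketch, not the speaker’s actual implementation; the function names and the toy Gaussian-blob volume are invented for this example), the snippet below computes per-voxel Hessian eigenvalues of a 3D volume and flags bright blob-like voxels, where all three eigenvalues are negative:

```python
import numpy as np

def hessian_eigenvalues(volume):
    """Per-voxel eigenvalues of the Hessian of a 3D volume.

    Second derivatives are approximated with finite differences
    (np.gradient); np.linalg.eigvalsh returns the eigenvalues of each
    symmetric 3x3 matrix in ascending order.
    """
    grads = np.gradient(volume)          # first derivatives along z, y, x
    hess = np.empty(volume.shape + (3, 3))
    for i, g in enumerate(grads):
        second = np.gradient(g)          # derivatives of the i-th gradient
        for j in range(3):
            hess[..., i, j] = second[j]
    return np.linalg.eigvalsh(hess)      # shape: volume.shape + (3,)

def bright_blob_mask(volume):
    """Crude blob indicator: at a bright blob, all three eigenvalues are negative."""
    lam = hessian_eigenvalues(volume)
    return np.all(lam < 0, axis=-1)

# Toy volume: a single bright Gaussian blob centered in a 17^3 grid.
z, y, x = np.mgrid[-8:9, -8:9, -8:9]
vol = np.exp(-(x**2 + y**2 + z**2) / (2 * 3.0**2))
mask = bright_blob_mask(vol)             # True near the blob center
```

Vesselness-style filters extend the same idea by examining eigenvalue sign patterns and magnitude ratios, e.g., two large negative eigenvalues and one near zero suggest a tubular structure such as a vessel.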
Bio: Dr. Jun Kong is an Associate Professor of Mathematics and Statistics at GSU and an adjunct faculty member in the Department of Biomedical Informatics and the Department of Computer Science at Emory University. Dr. Kong is also a member of the Cancer Cell Biology Program of the Winship Cancer Institute and directs a lab that develops biomedical image analysis algorithms, computer-aided diagnosis systems, large-scale integrative approaches, and high-performance computing methods for quantitative cancer research. He is particularly interested in advancing translational oncology research with quantitative computational, mathematical, engineering, and informatics methods. He has developed a large number of image analysis systems and quantitative data integration methods for numerous diseases, with intensive development and use of image analysis and pattern recognition techniques for the automated processing of microscopy images of histological specimens.
Shedding Light on Proteins with Computers
Abstract: Light-responsive proteins have evolved to use light to drive complex processes ranging from photosynthesis to vision. These systems not only serve as an inspiration for technology, but have also been implemented directly in biotechnologies such as bioimaging, biosensing, optogenetics, and photodynamic therapy. However, engineering photoreceptors for use in biotechnology requires a fundamental understanding of how these systems operate on a molecular level. Therefore, we develop and employ computer models of light-responsive proteins to understand how they respond to light. These models are rooted in both quantum mechanical and classical theories and methods. We will describe our computational models and how they can be used to understand photoinduced events in photoreceptor proteins such as rhodopsins and Light-Oxygen-Voltage (LOV) sensing domains.
Bio: Samer Gozem is an Assistant Professor of Chemistry at Georgia State University. He obtained his BSc in Chemistry in 2008 from the American University of Beirut in Lebanon and his Ph.D. in Photochemical Sciences in 2013 at Bowling Green State University. At Bowling Green, he studied biological photoreceptors under the mentorship of Prof. Massimo Olivucci. Samer then did his postdoctoral training at the University of Southern California with Prof. Anna Krylov, where he worked on modeling photoionization and photodetachment processes. He joined the faculty at Georgia State University in 2017. His interests include using both classical mechanics and quantum mechanics to study light-responsive chemical and biological systems.
Secrets of High Performance Computing Research
- Optimizing Your Research Speed and Efficiency
- Maximizing Your Chances of Getting NSF Funding
- Best Practices and Case Studies of How HPC Is Applied in Higher Education
Abstract: This session will focus on how to speed up your research by tiering data, how to maximize your chances of getting NSF grant funding, and case studies and best practices of HPC that are flourishing today in Georgia’s most elite higher education institutions. These blueprints have accelerated research 15x in some cases. This session will dive into how to optimize the speed of your research with tiered data systems, where the pitfalls of HPC are and how to avoid them, where you can save money, tips for earning research grants, and types of designs outside of a traditional HPC architecture.
Bio: Bestselling, award-winning author Anthony R. Howard has been an industry-recognized systems consultant and technology expert for Dell for over 15 years. He was named #1 IT Super Hero by InfoWorld and Computerworld, won the National Federal Office Systems Award (FOSE, the nation’s largest information technology exposition serving the government marketplace), and won the Government Computer News Best New Technology Award. Several case studies have been published on Howard’s solutions across the information technology industry. Currently he provides enterprise technology solutions and advisement for America’s most distinguished clients, including a sizeable amount of work for Georgia higher education and research, the U.S. defense sector, the Department of Justice, and the Department of Homeland Security. His projects have been featured in dozens of national media outlets, including Fox News. After founding his own technology firm, Howard completed his formal education with a Master of Business Administration with a concentration in Information Technology from Florida A&M University. He is also the bestselling author of The Invisible Enemy: Black Fox, The Invisible Enemy II: Vendetta, and Devil’s Diary: The Coming.
Multidisciplinary/Interdisciplinary Research and Education and NSF Office of Advanced Cyberinfrastructure Programs
Abstract: Multidisciplinary/interdisciplinary research is challenging as well as rewarding. The National Science Foundation Office of Advanced Cyberinfrastructure (OAC) has growing research and education programs, including programs for early-career multidisciplinary faculty such as CAREER and the CISE Research Initiation Initiative (CRII). OAC is pleased to announce its newest program, its core research program solicitation (NSF 18-567), with the goal of supporting all aspects of advanced cyberinfrastructure (CI) research that will significantly impact the future capabilities of advanced research CI, as well as the research career paths of computer, computational, and data-driven scientists and engineers. Through this solicitation, OAC seeks to foster the development of new knowledge in the innovative design, development, and utilization of robust research CI. The OAC core research areas include architectures and middleware for extreme-scale systems; scalable algorithms and applications, including simulation and modeling; and the advanced CI ecosystem, including tools and sociotechnical aspects.
OAC has also introduced a CyberTraining program (NSF 18-516) for education and training, aimed at fully preparing the scientific workforce for the nation’s research enterprise to innovate and utilize high-performance computing resources, tools, and methods. The community response in its two rounds of competition has exceeded expectations. OAC also has programs for research training of undergraduate students (REU Sites).
I will introduce these programs and share some of the recent awards. I will also touch on NSF’s Ten Big Ideas, including Harnessing the Data Revolution.
Bio: Sushil K. Prasad is a Program Director at the National Science Foundation in its Office of Advanced Cyberinfrastructure (OAC), within the Computer and Information Science and Engineering (CISE) directorate, leading its emerging research and education programs such as CAREER, CRII, Expeditions, CyberTraining, and the most recently introduced OAC Core research program. He is an ACM Distinguished Scientist and a Professor of Computer Science at Georgia State University. He is the director of the Distributed and Mobile Systems Lab, which carries out research in parallel, distributed, and data-intensive computing and systems. He has twice been elected chair of the IEEE-CS Technical Committee on Parallel Processing (TCPP), and leads the NSF-supported TCPP Curriculum Initiative on Parallel and Distributed Computing for undergraduate education.
Coding vs Clicking: Clashes and Compromises in Scientific Computing
Abstract: The panelists and attendees will engage in a conversation about the pros and cons of performing analytical computing via point-and-click interfaces vs. coding/programming. The following prompts will guide the discussion:
1. Coding vs. clicking – if you were forced to pick one side, which would you pick, and why? When do you think there is room for compromise?
2. How do you think the increasing emphasis on research transparency and replication will influence the coding vs. clicking issue?
3. How do you see disciplinary practices and traditions influencing the coding vs. clicking issue?
4. When teaching analysis tools and/or approaches that are new to novice researchers such as undergraduates, grad students, research staff, or other faculty, which is your go-to approach: Coding? Clicking? Both? And why?
5. If someone says “I will not or cannot learn to code because [it’s too hard, I don’t have time to invest in learning it, the software I use works perfectly fine for what I need to do, etc.],” what would you say to convince them to think otherwise?
Panelists and Moderator Bios:
DR. RAEDA ANDERSON is an Assistant Professor and the Quantitative Data Specialist for Research Data Services. As a member of the Library’s Research Data Services Team, Dr. Anderson provides quantitative/statistical data support to GSU researchers, including undergraduates, graduate students, staff, and faculty. She has experience with both code-based and point-and-click analysis approaches across multiple statistical analysis programs. Dr. Anderson’s background is in social science research and survey methodology, and her research interests include survey implementation and biopsychosocial analysis of physical disability.
MR. DAVID FIKIS is a research associate with the Department of Educational Policy Studies in the College of Education and Human Development at Georgia State University’s Atlanta Campus. David works with a federally sponsored external evaluation project and collaborates with faculty members and students working on manuscripts in the field of psychometrics and other related topics in research, measurement, and statistics. His primary line of research involves Item Response Theory, but his interactions with scientific computing occur under the auspices of research in topic modeling, multilevel modeling, and other simulation-based studies.
DR. MATTHEW TURNER is a research scientist in the Department of Psychology and the research and technology mentor for the GSU Center for Information, Modeling & Simulation. He is a mathematician and statistician whose research work is on problems in data science, machine learning, ontologies, and signal processing. He applies these methods to problems in curating and organizing the scientific literature, understanding terrorism, characterizing brain states in meditation, and the automatic processing of brain imaging data. His primary teaching focus is on developing quantitative thinking and skills in students of social and behavioral science.
DR. DROR WALTER is an Assistant Professor of Digital Persuasion in the Department of Communication at GSU. His research centers on the intersection between classic media effects theories and computational social science methods. His research and teaching address the ways unsupervised machine learning methods, such as semantic network analysis and topic modeling, can aid in understanding various political communication processes, with emphasis on election campaigns, international conflicts, and the representation and perception of foreign countries in news and entertainment media. His current research examines the role of discourse structure in shaping public opinion in various contexts, focusing on the conceptualization, measurement, and impact of framing and thematic diversity.
DR. MANDY SWYGART-HOBAUGH is an Associate Professor and the leader of the Library’s Research Data Services Team, which supports GSU students, faculty, and staff in the areas of data analysis tools and methods, mapping and data visualization, finding data and statistics, survey design, and data management.