
Previous Scientific Computing Day Conferences


Scientific Computing Day, October 5 and 6, 2017

Scientific Computing Day (SCD) is a symposium for fostering interactions and collaborations among researchers at Georgia State University and its affiliates. SCD provides researchers a venue to present their work, and for the GSU scientific, computational and business communities to exchange views on today’s multidisciplinary computational challenges and state-of-the-art developments.
With more than a dozen industry and academic collaborators, this year’s SCD evolves from a single-day conference into a two-day symposium featuring a full day of tutorials on analytics.

What to expect?
Day 1 – October 5: Tutorials and hands-on labs led by Amazon Web Services (AWS) and Microsoft experts. Modules and labs provide an overview of, and practice in, data analytics, with a focus on dashboards and training deep learning models.
Day 2 – October 6: The technical program features presentations by renowned experts on topics such as data analytics, artificial intelligence, and advanced cyber-infrastructure, along with a panel discussion, student networking, and a poster reception.
Researchers and aspiring researchers from all disciplines are welcome.

SCD 2017 Collaborators:

Organizing Committee
Organizer: Research Solutions
Chair: Semir Sarajlic (ssarajlic1@gsu.edu)
Poster Chair: Suranga Edirisinghe (neranjan@gsu.edu)
Logistics Chair: Charnae Knight (cknight4@gsu.edu)

Conference Schedule

Day 1 – October 5

8:30 a.m. Registration, Check-In, and Breakfast
Located in Student Center Ballroom

9:15 a.m. - 10:15 a.m.

(Amazon Web Services) Module 1: Introduction to Amazon Web Services with a Focus on Researchers
Bill Richmond, Senior Solutions Architect at AWS
Tracy Applegate, Account Manager at AWS

(Microsoft) Module 1: Introduction to PowerBI
Dustin Ryan, Data Platform Solutions Architect at Microsoft

10:30 a.m. - 12:30 p.m.

(Amazon Web Services) Module 2: Introduction to AWS AI and Machine Learning Services and Hands-on Lab 1: Building a Chat Bot Using Amazon Lex
Bill Richmond, Senior Solutions Architect at AWS

(Microsoft) Module 2: Overview of PowerBI Desktop - Consuming Data, Transforming Data, and Visualizing Data and Hands-on Lab
Dustin Ryan, Data Platform Solutions Architect at Microsoft

12:30 p.m. - 1:30 p.m.
Lunch

1:30 p.m. - 3 p.m.

(Amazon Web Services) Module 3: Data Science Process and Module 4: Introduction to Deep Learning and MXNet
Bill Richmond, Senior Solutions Architect at AWS

(Microsoft) Module 3: PowerBI Service - Publishing and Sharing Content and Hands-on Lab
Dustin Ryan, Data Platform Solutions Architect at Microsoft

3:15 p.m. - 5 p.m.

(Amazon Web Services) Hands-on Lab 2: Training Deep Learning Models with MXNet, Next Steps, and AWS Workshop Concluding Remarks
Bill Richmond, Senior Solutions Architect at AWS
Tracy Applegate, Account Manager at AWS

(Microsoft) Module 4: Building a Dashboard with PowerBI and Hands-on Lab
Dustin Ryan, Data Platform Solutions Architect at Microsoft

Day 2 – October 6

8:30 a.m. Registration, Check-In, and Breakfast
Located in the College of Law

9 a.m. - Conference Opening and Welcome

9:15 a.m. - Letting Your Data Build Your System with Amazon Web Services Artificial Intelligence
Bill Richmond, Senior Solutions Architect at Amazon Web Services 

Advancements in data and analytics, hardware acceleration, and advanced libraries and services in machine learning and deep learning have unleashed the power to learn your business logic rather than “try to code” for it. In this session, we’ll dive into design paradigms and architectures that allow you to drive your logic from your data and add intelligence to your applications. The session will describe key ways customers build intelligent AI systems with AWS AI services, platforms, frameworks, and infrastructure.
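For a taste of the Day 1 hands-on lab on Amazon Lex, the sketch below sends a single text utterance to a Lex (V1) bot through the boto3 SDK. The bot name, alias, region, and utterance are hypothetical placeholders, not details from the workshop.

```python
# Minimal sketch: sending one utterance to an Amazon Lex (V1) bot with boto3.
# The bot name, alias, region, and utterance are hypothetical placeholders;
# a bot must first be created in the Lex console or via the API.
import boto3

lex = boto3.client("lex-runtime", region_name="us-east-1")

response = lex.post_text(
    botName="ExampleResearchBot",   # placeholder bot name
    botAlias="prod",                # placeholder alias
    userId="demo-user-001",         # any unique conversation/session id
    inputText="What GPU nodes are available this week?",
)

# Lex returns the recognized intent, extracted slot values, and a reply.
print(response.get("intentName"), response.get("slots"))
print(response.get("message"))
```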

About the Speaker

Bill Richmond is a Senior Solutions Architect for the AWS Worldwide Public Sector, and has been with AWS since 2015. Prior to joining AWS, Bill spent time at IBM, Northrop Grumman, Martin Marietta, and a number of other integrators and software companies architecting, building or leading teams in building complex systems. More recently, his focus has been on supporting organizations in the Financial Services sector, including FINRA and the U.S. Department of the Treasury. Bill has degrees in applied mathematics from The University of Central Florida and The Florida State University.

10 a.m. - The Transformation of Science with HPC, Big Data, and AI
Dr. Jay Boisseau, HPC and AI Strategist at Dell EMC

Computing has fundamentally transformed the conduct of science, enabling us to run powerful simulations based on theoretical models and to analyze data from those simulations as well as from observations and experiments. With computing capabilities—including storage and networking—increasing exponentially, we can solve an increasing number of problems directly, and many more through realistic simulations and analysis of much larger data. High performance computing technologies underlie much of our computational science progress: through parallelism in systems, algorithms, and workloads, we extend our capabilities far beyond Moore’s Law-level progress. Now, in the era of Big Data and at the onset of the Internet of Things, we are presented with additional opportunities for understanding our world and universe through new techniques and ever greater volumes and variety of data. The emergence of deep learning, leveraging both HPC technologies and data analytics techniques, exemplifies how advances in both computing and data can enable new modes of science. In this talk we review our progress, the current state and trends of computational sciences, and the opportunities on our horizon.

About the Speaker


Jay Boisseau is an experienced supercomputing leader with over 20 years in the field, having worked at three supercomputing centers—including founding one—and for two technology companies. Jay is currently working for Dell EMC as the HPC & AI Technology Strategist. In this role, he has been working to develop and implement a new HPC strategy, business plan, solutions, and programs to help broaden the usage of HPC by more companies and organizations, for more kinds of applications and workloads. He has added the development of AI strategy and solutions to his role at Dell EMC in the past year, and is helping to build a new team, strategy, and solutions for machine learning and deep learning.

Prior to joining Dell EMC, Jay founded the Texas Advanced Computing Center (TACC) in 2001. He led TACC’s growth in impact, stature, and size from a small group of experts into one of the leading academic advanced computing centers in the world, with over 100 staff, world-class supercomputing systems, and many competitively awarded, multi-million-dollar federal grants. He established a strong research and development program at TACC and expanded its computational resources by winning two of the largest ($50M+) NSF awards in UT Austin history: for Stampede, deployed in January 2013, which debuted at #4 in the Top500 (2012) and remains one of the ten most powerful computing systems in the world, and for Ranger, which debuted as the #3 system in the world (2007). Jay was also one of the co-principal investigators in the National Science Foundation (NSF)-sponsored Extreme Science and Engineering Discovery Environment (XSEDE) project, the most powerful and robust collection of integrated advanced digital resources and services for open science research in the world.

Jay’s career in supercomputing was fueled by his graduate research in astronomy at The University of Texas at Austin. After obtaining his master’s degree in 1990, Jay initiated his dissertation research on modeling the dynamics of Type Ia supernovae using Cray supercomputers. This work stimulated his interest in high performance computing, and led him to join the staff of the Arctic Region Supercomputing Center as a programmer analyst in 1994 while continuing his supernova modeling research. At ARSC, Jay helped develop and lead several projects and activities in the relatively new center while supporting a growing scientific user community. While at ARSC, Jay completed his dissertation with The University of Texas at Austin and joined the San Diego Supercomputer Center (SDSC) in 1996 to advance his career in high performance computing. At SDSC, Jay became an Associate Director and created the Scientific Computing Department, with groups specializing in applications optimization, performance modeling, parallel tools development, grid portals development, and user support. He led several major SDSC projects for the National Partnership for Advanced Computational Infrastructure (NPACI) and also led SDSC’s participation in the Department of Defense (DoD) Programming Environments and Training (PET) program. This experience led him to tackle the job of creating TACC in 2001.

Jay graduated with a bachelor’s degree in astronomy and physics from the University of Virginia in 1986 while also working as a computer consultant. He continued to work in Charlottesville for an additional year as a scientific programmer, where he gained his first exposure to HPC for astronomy. This influenced him to enter the graduate program in astronomy at The University of Texas at Austin, which in turn led to his 20+ year career in HPC.

10:45 a.m. - Coffee Break

11 a.m. - Panel Discussion
How data analytics impacts decision making at organizations, and opportunities for addressing workforce development in data analytics and advanced cyber-infrastructure

Panelists: Dr. Dan Stanzione, TACC; Dr. Rob Gardner, University of Chicago; Dr. Mehmet Belgin, Georgia Tech; Gregori Faroux, Georgia State University; Sanjay Mistry, Mölnlycke; Dr. Renata Rawlings-Goss, South Big Data Hub; Dr. Andy Rindos, IBM
Moderator: Semir Sarajlic, SCD Chair, Georgia State University

12 p.m. - Lunch (Student Networking)
1 p.m. - Featured Guest Speaker
Training-Based Workforce Development in Data Analytics and Augmented Intelligence
Dr. Mahmoud Ghavi, Professor and Director of the Center for Nuclear Studies at Kennesaw State University and Chair of Consort Institute at Emory University

The Fourth Industrial Revolution is underway, and it will have a profound and transformative impact on the workforce landscape, among other things. The key driving forces behind this revolution include big data analytics, machine learning, and augmented/artificial intelligence (AI). Recent studies indicate that 38% of US jobs are at high risk of elimination by 2030 due to automation. In order to survive and thrive in the midst of this major disruptive force, it is necessary to rethink our approach to workforce education and training. Current training and educational programs need to be agile, application-oriented, focused, up to date, and relevant to the exact requirements of the marketplace. The looming workforce readiness challenges are simply too great to be relegated only to traditional colleges and universities. Those institutions play a significant role in providing core sets of basic knowledge and skills to their students. That level of education, delivered in traditional formats, however, is not quite responsive to the prevailing technological conditions. In augmenting the conventional roles of the universities, some certificate-based programs have proven effective in providing fast-paced, advanced training courses in an environment that demands active, lifelong learning. It is important to note that not all technical courses lend themselves to this type of program; however, for the ones that do, fast-paced, applied training has proven to be a powerful delivery mechanism.

About the Speaker


Dr. Ghavi is an accomplished scientist, educator and corporate executive with extensive experience and proven leadership in the fields of information technology, data management, data analytics, and healthcare informatics. His experience includes a unique blend of corporate and academic achievements. He has founded and run highly successful companies focusing on innovative products and services in information technology, data analytics, and electronic health records. He is a pioneer and subject matter expert in areas of big data analytics, business intelligence, large data management, and data fusion.

As Chair and Chief Academic Officer of Consort Institute, a post-graduate professional workforce development and training organization, he is responsible for the development and successful delivery of intensive educational courses focused on big data analytics, business intelligence, big data management, healthcare informatics, and information technology. He is also professor of nuclear engineering and Director of the Center for Nuclear Studies at the Southern Polytechnic College of Engineering at Kennesaw State University, where he was previously Director of the School of Computer Science and Software Engineering (CSE) Center for Health IT. Dr. Ghavi is also the CEO and lead technology officer of Consort Systems, a healthcare information technology company.

2:00 p.m. - Accelerating Research with Open Science Grid
Dr. Rob Gardner, Research Professor of Physics, Enrico Fermi Institute Senior Fellow, Computation Institute at University of Chicago

The Open Science Grid is the nation’s shared high throughput computing fabric comprised of computing resources from more than 120 institutions.  While originally driven by the computing requirements of the Large Hadron Collider experiments at CERN, which used the OSG to help discover the Higgs boson in 2012, it is currently being used by hundreds of researchers from dozens of science domains including astrophysics, economics, evolutionary biology, genomics and engineering.  We will describe how students and faculty can use this open (and free) resource to speed up their research.
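OSG workloads are typically expressed as HTCondor jobs. The following generic sketch (not material from the talk) assumes a submit host with the condor_submit command available; it writes a minimal submit description for 100 independent tasks and hands it to HTCondor.

```python
# Generic sketch of a high-throughput submission, assuming an OSG/HTCondor
# submit host with condor_submit on the PATH. The executable name and
# resource requests are illustrative placeholders.
import os
import subprocess
import textwrap

os.makedirs("logs", exist_ok=True)   # HTCondor will not create log dirs itself

submit_description = textwrap.dedent("""\
    executable      = run_analysis.sh
    arguments       = $(Process)
    output          = logs/job_$(Process).out
    error           = logs/job_$(Process).err
    log             = logs/jobs.log
    request_cpus    = 1
    request_memory  = 2GB
    queue 100
""")

with open("analysis.sub", "w") as f:
    f.write(submit_description)

# Queue 100 independent tasks; HTCondor matches them to available OSG slots.
subprocess.run(["condor_submit", "analysis.sub"], check=True)
```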

About the Speaker

Rob Gardner is Research Professor of Physics in the Enrico Fermi Institute and Senior Fellow in the Computation Institute at the University of Chicago.  He directs the Midwest Tier2 Center for the ATLAS experiment at the CERN Large Hadron Collider and is the integration program manager for the U.S. ATLAS Collaboration's Computing Facilities, which includes the Tier1 center at Brookhaven National Laboratory and ten university Tier2 sites.  He leads the Open Science Grid user support team, is co-principal investigator of VC3: Virtual Clusters for Community Computation, a DOE ASCR award to deploy virtual cluster systems over diverse HPC centers, and is the PI of NSF CIF21 DIBBs: EI: SLATE and the Mobility of Capability.

2:45 p.m. - Coffee Break

3:00 p.m. - Unlocking Digital Transformation with Cortana Intelligence Suite
Dustin Ryan, Data Platform Solutions Architect at Microsoft

With the explosion of cloud technologies, the proliferation of data, and the quest for intelligent insights, organizations across the globe are striving for digital transformation. Microsoft’s Cortana Intelligence Suite is making it easier than ever before for researchers to build intelligent applications. In this session, we will discuss how the Cortana Intelligence Suite unlocks digital transformation, review real-world examples of intelligent applications built on the Cortana Intelligence Suite, and then build a social media sentiment analysis solution.
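As a rough, vendor-neutral illustration of the sentiment-scoring step in such a solution (using NLTK’s VADER analyzer as a stand-in rather than any Cortana Intelligence service), a few lines of Python are enough:

```python
# Generic sentiment-scoring sketch using NLTK's VADER analyzer as a stand-in
# for the hosted text-analytics step in a social media pipeline.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")       # one-time download of the VADER lexicon

analyzer = SentimentIntensityAnalyzer()
posts = [
    "Loved the Scientific Computing Day tutorials!",
    "The parking situation downtown was frustrating.",
]

for text in posts:
    scores = analyzer.polarity_scores(text)   # neg/neu/pos plus compound score
    print(f"{scores['compound']:+.2f}  {text}")
```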

About the Speaker


Dustin Ryan is a Technology Solutions Professional on the Education Specialist Team Unit at Microsoft. Dustin has worked in the business intelligence and data warehousing field since 2008, has spoken at community events such as SQL Saturday, SQL Rally, and PASS Summit, and has a wide range of experience using the Microsoft business intelligence stack of products across multiple industries. Prior to his time at Microsoft, Dustin worked as a business intelligence consultant and trainer for Pragmatic Works, a Microsoft partner. Dustin is also an author, contributor and technical editor of books such as Applied Microsoft Business Intelligence, Professional Microsoft SQL Server 2012 Analysis Services with MDX and DAX, and others.

Dustin resides in Jacksonville, Florida with his wife, three children, and three-legged cat. You can find Dustin spending time with his family and serving at his local church.

3:45 p.m. - Accelerating Artificial Intelligence with GPUs
Dr. Jeff Layton, Solutions Architect at NVIDIA

Data scientists in both industry and academia have been using GPUs for AI and machine learning to make groundbreaking improvements across a variety of applications including image classification, video analytics, speech recognition and natural language processing. In particular, Deep Learning – the use of sophisticated, multi-level “deep” neural networks to create systems that can perform feature detection from massive amounts of unlabeled training data – is an area that has been seeing significant investment and research.

Although AI has been around for decades, two relatively recent trends have sparked widespread use of Deep Learning within AI: the availability of massive amounts of training data, and powerful and efficient parallel computing provided by GPU computing.  Early adopters of GPU accelerators for machine learning include many of the largest web and social media companies, along with top tier research institutions in data science and machine learning. With thousands of computational cores and 10-100x application throughput compared to CPUs alone, GPUs have become the processor of choice for processing big data for data scientists.
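To make the MXNet thread of the program concrete (both the Day 1 hands-on lab and the GPU acceleration discussed here), below is a minimal sketch of one Gluon training step, assuming MXNet 1.x; it runs on a GPU when one is present and falls back to the CPU otherwise. It is an illustration, not the workshop lab code.

```python
# Minimal MXNet 1.x / Gluon sketch: one gradient step of a small classifier,
# run on a GPU if one is available, otherwise on the CPU. Illustrative only;
# not the AWS workshop lab code.
import mxnet as mx
from mxnet import autograd, gluon, nd

ctx = mx.gpu() if mx.context.num_gpus() > 0 else mx.cpu()

net = gluon.nn.Sequential()
net.add(gluon.nn.Dense(64, activation="relu"),
        gluon.nn.Dense(10))                      # 10-class output layer
net.initialize(mx.init.Xavier(), ctx=ctx)

loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()
trainer = gluon.Trainer(net.collect_params(), "sgd", {"learning_rate": 0.1})

# Synthetic batch standing in for real training data.
X = nd.random.normal(shape=(32, 784), ctx=ctx)
y = nd.random.randint(0, 10, shape=(32,), ctx=ctx).astype("float32")

with autograd.record():                          # record operations for backprop
    loss = loss_fn(net(X), y)
loss.backward()
trainer.step(batch_size=32)
print("mean batch loss:", loss.mean().asscalar())
```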

About the Speaker
Jeff Layton is a Senior Solutions Architect in the Worldwide Field Organization and a Certified Deep Learning Institute (DLI) Instructor at NVIDIA. His primary roles are to support high performance computing and deep learning, with a focus on applying deep learning within artificial intelligence. Prior to joining NVIDIA, Jeff spent time at Amazon Web Services and Dell providing high performance computing architecture and computational science support to government and educational organizations. Jeff holds a Ph.D. in Aeronautical and Astronautical Engineering from Purdue University. He is also an active contributing writer for ADMIN Magazine, HPC ADMIN Magazine, and QuinStreet.

4:30 p.m. - Poster Reception
Over 30 accepted posters from multiple disciplines across the College of Arts and Sciences, Robinson College of Business, Neuroscience Institute, Center for Nano Optics, and more - including contributions from the Georgia Institute of Technology, University of Texas, Emory University, Kennesaw State University, and Gwinnett Technical College, among others. This year’s submissions represent contributions from more than 85 authors.

View our 2017 Poster Presenters >

5:30 p.m. - Best Poster Awarded, Closing Remarks, Post-Conference Networking
Gregori Faroux, Assistant Vice President, Georgia State University
Semir Sarajlic, SCD Chair, Georgia State University

October 5 Parking Details for GSU Student Center

M Deck Parking
Visitor parking is available in the M Deck for $7 (cash only)
Address: 33 Auditorium Place, Atlanta, GA 30303 | Map

To get to the Student Center from M Deck:

  1. Once you park in the M Deck, exit via the M Deck Pedestrian Entrance
  2. The Student Center will be directly across the street

MARTA
To get to the Student Center from MARTA:

  1. Take East/West rapid-rail line to the Georgia State Station.
  2. Exit station onto Piedmont Avenue. Walk right two blocks to Gilmer Street (you will cross over Decatur Street).
  3. Cross Piedmont and enter Student Center East at the corner of Piedmont and Gilmer.

October 6 Parking Details for GSU College of Law

M Deck and T Deck are available for parking for $7 (cash only)

M Deck Parking
M Deck address: 33 Auditorium Place | M Deck Map

To get to the College of Law from M Deck:

  1. Exit Deck M onto Gilmer Street
  2. Head northwest (up) Gilmer Street SE and walk to Edgewood Avenue (about 0.2 miles).
  3. Turn left on to Edgewood Avenue (about 300 feet) and walk to Equitable Place NE (about 400 feet)
  4. Turn left onto Auburn Avenue NE (about 98 feet) then turn right onto Park Place NE.
  5. The college will be on your right. If you reach the Georgia-Pacific Center, you have gone too far.

T Deck Parking
T Deck address: 43 Auburn Avenue | T Deck Map

To get to the College of Law from T Deck:

  1. Exit Deck T onto Auburn Ave, head west (up) Auburn Ave and walk to Park Place (about 0.2 miles).
  2. Turn right on to Park Place NE and walk about 98 feet.
  3. The college will be on your right. If you reach the Georgia-Pacific Center, you have gone too far.

MARTA
To get to the College of Law from MARTA:

  1. Travel to the Peachtree Center Station on the North/South rapid rail line.
  2. Look for the Ellis Street exit in the station. Go up those escalators then take the Peachtree Street West exit out of the station.
  3. Turn right. The college is less than a block from the station at the corner of John Wesley Dobbs Avenue and Park Place, next door to the Georgia-Pacific Center.

Other Public Parking Options
The closest ones include:

  • 150 Carnegie Way: (the old Macy’s Department Store lot). The cost is $2 every 20 minutes with a maximum of $16 a day.
  • 141 John Wesley Dobbs Avenue Parking Lot: Corner of Peachtree Center Avenue and John Wesley Dobbs Avenue, Atlanta, GA 30303: $5 subject to availability.


Scientific Computing Day, September 30, 2016

Scientific Computing Day (SCD) is a symposium for fostering interactions and collaborations among researchers at Georgia State University and its affiliates.

SCD provides researchers a venue to present their work, and for the GSU scientific, computational and business communities to exchange views on today’s multidisciplinary computational challenges and state-of-the-art developments.

Organizing Committee

Organizer: Research Solutions
2016 Chair: Semir Sarajlic (ssarajlic1@gsu.edu)
2015 Chair: Suranga Edirisinghe (neranjan@gsu.edu)

Conference Schedule

8:30 a.m. – Conference Welcome
Dr. James Weyhenmeyer, Vice President for Research and Economic Development at Georgia State University

8:45 a.m. – Making Your Own Data in Social Science Research
Dr. Matthew DeAngelis, School of Accountancy, J. Mack Robinson College of Business


Making Your Own Data in Social Science Research
The availability of large quantities of unstructured data creates opportunities to explore new research ideas, test existing theories in new ways, and extend prior empirical research to new settings. Using examples from financial disclosure research, I discuss the benefits and pitfalls of “making your own data.” I also discuss the decision to use “big” data versus more targeted samples.


About the Author
Matthew DeAngelis is an Assistant Professor of Accounting in the Robinson College of Business at Georgia State University. His research focuses on the properties of qualitative firm disclosures to capital markets, with a special emphasis on structural and other characteristics of disclosure that affect information processing. He holds a doctorate in business administration from Michigan State University and a Master of Science in business administration and a Bachelor of Arts in political science from Penn State.

9:15 a.m. – Maximizing Your Chances of Getting NSF Funding
Dr. Jay Boisseau, Chief HPC Technology Strategist at Dell


Maximizing Your Chances of Getting NSF Funding
This session will focus on how to get your project funded through NSF grants. The National Science Foundation funds research and education in most fields of science and engineering. It does this through grants and cooperative agreements to more than 2,000 colleges, universities, K-12 school systems, businesses, informal science organizations, and other research organizations. NSF receives approximately 40,000 proposals each year for research, education, and training projects, of which approximately 11,000 are funded. This session will discuss how to become one of the 11,000.

About the Author
John (“Jay”) Boisseau, Ph.D. is the Chief HPC Technology Strategist for Dell EMC. Jay is an experienced supercomputing leader with over 20 years in the field, having worked at three supercomputing centers—including founding one—and consulted for two technology companies. Jay’s recent work at Dell EMC includes helping to develop the high performance computing (HPC) vision, strategy, and new solutions for broader use of HPC in 2014-2015; in 2016, he is developing Dell EMC’s new HPC strategies for cloud-enabled HPC and for machine learning. Jay also tracks advances in the HPC field and meets with leading customers in strategic HPC market segments to understand needs for new, innovative Dell EMC HPC solutions. Jay also has a number of related activities, including CEO of Vizias (a technology consulting company), founder and director of the Austin Forum (a technology outreach and networking organization), and founder and president of Austin CityUP, the new smart city consortium for Austin. All of these activities build on Jay’s long career in technology at leading supercomputing centers and vendors—and Dell EMC is a major partner or contributor in all of them.

Jay’s prior HPC experience includes creating and leading the Texas Advanced Computing Center (TACC) at The University of Texas at Austin (2001-2012). Under his direction, TACC grew in size and stature to become one of the leading academic advanced computing centers in the world, with well over 100 staff, world-class supercomputing systems, and several competitively awarded, multi-million-dollar federal grants. He established a strong research and development program at TACC and expanded the computational resources by winning two of the largest National Science Foundation (NSF) awards in UT Austin history: for Stampede, deployed in January 2013, which remains one of the ten most powerful computing systems in the world, and for Ranger (now retired), which debuted as a top 5 system in the world and was the largest NSF award in UT Austin’s history at $59 million in 2007. Jay was also one of the leaders in the NSF-sponsored Extreme Science and Engineering Discovery Environment (XSEDE) project, the most powerful and robust collection of integrated advanced digital resources and services in the world. Prior to TACC, Jay worked at the San Diego Supercomputer Center (SDSC, 1996-2001), where he created the Scientific Computing Department, with groups specializing in applications optimization, performance modeling, parallel tools development, grid portals development, and user support. He led several major SDSC projects for the National Partnership for Advanced Computational Infrastructure (NPACI) and also led SDSC’s participation in the Department of Defense (DoD) Programming Environments and Training (PET) program. Jay began his supercomputing career at the Arctic Region Supercomputing Center (1994-1996), where he led the user services activities.

Jay graduated with a bachelor’s degree in astronomy and physics from the University of Virginia in 1986 while also working as a computer consultant. He continued to work in Charlottesville for an additional year as a scientific programmer, then he entered the graduate program in astronomy at The University of Texas at Austin. After obtaining his master’s degree in 1990, Jay conducted his dissertation research on modeling the dynamics of Type Ia supernovae using Cray supercomputers, leading to his PhD in astronomy.

10:00 a.m. – Collaborative Neuroimaging of Hallucinations  
Dr. Jessica Turner, Psychology, College of Arts & Sciences


Collaborative Neuroimaging of Hallucinations
The International Consortium on Hallucination Research (ICHR) is made up in part of researchers from around the world in psychiatry, psychology, and neurology, who are studying both the neuroscience and phenomenology of hallucinations.  In a recent collaborative review of neuroimaging studies of individuals with hallucinations, the group recommended that the data existing to date be shared and analyzed together.  The GSU Scientific Computing Center is hosting the data and providing support for the analyses, for the aggregated neuroimaging datasets from hundreds of subjects around the world. We will be leveraging standardized neuroimaging pipelines to examine in multiple analyses the roles of different brain circuits in individuals who hallucinate, both with and without various psychological and neurological diagnoses.  The project is currently underway; this presentation will review the background, motivation, and methods for the ICHR efforts.


About the Author
Dr. Jessica Turner is an Associate Professor in Psychology and Neuroscience here at Georgia State University, and she runs the Imaging Genetics and Informatics Lab (http://sites.gsu.edu/igil/). Her PhD is in Experimental Psychology, and her research in part involves large-scale neuroimaging, data sharing and neuroinformatics, and the imaging & genetics studies that are facilitated by large-scale data sharing, in the study of mind/brain relationships.

11 a.m. – Panel Discussion
How Cyberteams and Technology Impact Cross Organizational Research
Featured panelists: Dr. Jay Boisseau, Dr. Renata Rawlings-Goss, Dr. Jessica Turner, Dr. Peter Molnar, Dr. Matthew DeAngelis, Anthony Howard, Gregori Faroux, and Bryan Sinclair
Moderator: Semir Sarajlic

Noon – Poster Session
27 accepted poster presentations from the College of Arts and Sciences, Robinson College of Business, Andrew Young School of Policy Studies, Neuroscience Institute, Center for Nano Optics, and more. This year’s submissions represent contributions from more than 55 authors.
To view the presentations, please visit our SCD16 Conference System >

Awards presented by Phil Ventimiglia, Chief Innovation Officer at Georgia State University.

1:30 p.m. – Featured Guest Speaker
Big Data: Challenges and Opportunities 
Dr. Margaret A. Jezghani, Data Scientist at Space & Naval Warfare Systems Center Atlantic (SPAWARSYSCEN) 


Big Data: Challenges and Opportunities
“Big data” has quickly become one of the most popular tech buzzwords used across academia, industry, and government. In the midst of this big data era, the data scientist has been named the “hot job of the decade” by the Harvard Business Review and this year’s “best job of the year” by Glassdoor. As it becomes increasingly easier to generate massive amounts of data, growing and maintaining a skilled data science workforce to manage and derive value from that data is more important than ever. Earlier this year, Wikibon estimated the worldwide big data market to be more than $18 billion, and predicted that it will grow at a rate of 14.5% per year over the next decade. Despite the number of big data projects that are currently being funded, data scientists who are capable of extracting meaningful insight from all of the noise are rarer than one might assume. In fact, Gartner reported last year that 60% of all big data projects fail early on due to poor strategy and misconceptions about big data. In this talk, I will introduce both the challenges and opportunities associated with big data, starting with an overview of the booming field. Next, I will discuss how big data is handled by Georgia State University scientists who conduct research on the PHENIX experiment at the Relativistic Heavy-Ion Collider. As another example, I will discuss what big data can do for the Navy and the Department of Defense as a whole. The audience can expect to walk away with a greater understanding of big data challenges and opportunities, providing new knowledge that could potentially be leveraged in their own scientific computing research.


About the Author
Dr. Jezghani serves as Data Scientist at Space and Naval Warfare (SPAWAR) Systems Center Atlantic, a Department of the Navy organization. Dr. Jezghani earned a Ph.D. in Nuclear Physics from Georgia State University. For her doctoral research, she led a team of scientists on the PHENIX experiment at Brookhaven National Laboratory to make the first phi meson measurement at forward rapidity in heavy-ion collisions in 15 years of running the Relativistic Heavy Ion Collider (RHIC). Throughout her education she held appointments at Fermi National Accelerator Laboratory, Lawrence Berkeley National Laboratory, and Los Alamos National Laboratory.

2:30 p.m. – IT Security Data Analytics on Hunk Hadoop
Josephine Palencia, Office of Information Technology, Georgia Institute of Technology


IT Security Data Analytics on Hunk Hadoop
A framework that enables real-time cybersecurity analytics capable of handling large volumes of IT data is described. It is used by the cybersecurity team to handle the daily influx of massive amounts of data, on the order of 2.5 TB/day. The data sources consist of all the campus systems and security logs, which undergo immediate, continuous query and analysis. The previous status quo, a Splunk license handling only a few hundred GB/day from satellite data forwarders, imposed limitations on cost, high-performance computing, and scalable storage. We present our Splunk and Hunk-Hadoop infrastructure with a 2-node, 200 TB-capacity Hunk-Hadoop Cluster (HHc). Initial work is performed on the two nodes while eight other servers are being added to the system. We investigate viable methodologies for big data movement and management across hundreds of remote sources to transfer these data to HDFS and MapR-FS. We demonstrate the HHc’s ability to deduce useful analytics from concrete, working use cases studying phishing email attacks and firewall port scanners to aid in early intrusion detection and mitigation.
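As a generic illustration of the bulk data-movement step described above, the sketch below pushes rolled-up log bundles into HDFS with the standard hdfs command-line client; the paths and layout are placeholders and not the Georgia Tech team’s actual ingest tooling.

```python
# Generic sketch of pushing rolled-up log bundles into HDFS with the standard
# `hdfs dfs` client; the paths are illustrative placeholders, not the actual
# campus ingest layout.
import subprocess
from pathlib import Path

LOCAL_SPOOL = Path("/var/spool/security-logs")   # placeholder local drop zone
HDFS_TARGET = "/data/security/raw"               # placeholder HDFS directory

# Ensure the target directory exists, then upload each compressed bundle.
subprocess.run(["hdfs", "dfs", "-mkdir", "-p", HDFS_TARGET], check=True)
for bundle in sorted(LOCAL_SPOOL.glob("*.log.gz")):
    subprocess.run(["hdfs", "dfs", "-put", "-f", str(bundle), HDFS_TARGET],
                   check=True)
    print(f"uploaded {bundle.name} -> {HDFS_TARGET}")
```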


About the Author
Josephine's background is in physics (nonlinear dynamics) from Drexel University. Her interest is in extracting topological patterns from chaotic attractors and dynamical systems and creating models for them. She worked at NASA Goddard Space Flight Center for six years as a Principal HPC System Administrator, where she managed over a dozen HPC clusters for space missions. She built and managed the cluster used by the general relativity group to simulate the production of gravitational waves with Einstein's equations. She then moved to Carnegie Mellon University and spent 10 years at the Pittsburgh Supercomputing Center (PSC), where she later became a Senior Scientific Data Specialist and was a technical lead for ExTENCI, an NSF project involving Fermi National Accelerator Laboratory, PSC, and half a dozen universities in the state of Florida. Last year, she started at Georgia Tech as a Research Scientist II faculty member and works full time supporting PACE. She is also pursuing an M.S. in Analytics.

3:30 p.m. – Best Practices and Case Studies of How HPC is Applied in Higher Education
Anthony Howard, Chief Systems Architect at Dell


Best Practices and Case Studies of How HPC is Applied in Higher Education
This session will focus on best practices and case studies in HPC, based on proven HPC designs that are deployed in production and flourishing today in Georgia's most elite higher education institutions. These blueprints have advanced research 15x in some cases. The session will dive into where the pitfalls of HPC are and how to avoid them, where you can save money, and types of designs outside of a traditional HPC architecture. Many understand the basics of HPC design. However, it isn't until you've created a successful HPC system, deployed it, put it into production, expanded it, and advanced your research or goals by leaps and bounds that you realize just how much of an impact seemingly trivial mistakes can make. They can degrade performance, cost money, and, perhaps worst of all, inhibit adoption and usage of the new system. Learn how to avoid many common mistakes and grasp different ideas of what is actually working and deployed in higher education.


About the Author:
Bestselling, award-winning author Anthony R. Howard has been an industry-recognized systems consultant and technology expert at Dell for over 15 years. He was named the #1 IT Super Hero by InfoWorld and Computerworld, was the winner of the National Federal Office Systems Award (FOSE, the nation's largest information technology exposition serving the government marketplace), and the winner of the Government Computer News Best New Technology Award. Several case studies have been published on Howard's solutions across the information technology industry. Currently he provides enterprise technology solutions and advisement for America's most distinguished clients, including a sizeable amount of work for the U.S. defense sector, the Department of Justice, and the Department of Homeland Security. His projects have been featured in dozens of national media outlets, including Fox News. After founding his own technology firm, Howard completed his formal education with a Master of Business Administration with a concentration in Information Technology from Florida A&M University. He is also the bestselling author of The Invisible Enemy: Black Fox and The Invisible Enemy II: Vendetta.

4:15 p.m. – Machine Learning for Astronomical Interferometry
Dr. Fabien Baron, Physics and Astronomy at Georgia State University and Director of Hard Labor Creek Observatory


Machine Learning for Astronomical Interferometry
Stellar astronomy has undergone a revolution in the last ten years. Stars can now be imaged in unprecedented detail using optical interferometry, an observational technique that fully taps into the power of multiple-telescope arrays. Georgia State University's Center for High Angular Resolution Astronomy is the world leader in this blooming field, and its astronomers have produced images of stellar environments that are testing the limits of astronomical models. At the heart of interferometric imaging lie cutting-edge numerical methods: GPU computing, MCMC inference methods, wavelet-based compressed sensing, and most recently machine learning. Could this last technique be the final key to the interferometric quest for the ultimate image quality?


About the Author:
Fabien Baron is currently an Assistant Professor in the Department of Physics and Astronomy and the Director of Hard Labor Creek Observatory. After getting his PhD from the University of Paris, he worked on instrumentation at the University of Cambridge (UK) and the University of Michigan. He now specializes in numerical methods applied to inverse problems in stellar astronomy.

5:00 p.m. – Closing Remarks
Semir Sarajlic, SCD16 Chair
Gregori Faroux, Director of Research Solutions, Georgia State University 
Post-Conference Networking and Coffee
Please join us after the conference for some light refreshments and networking with colleagues and our poster presenters.

Scientific Computing Day, September 18, 2015

Conference Schedule

8:30 a.m. – Conference Welcome
Dr. James Weyhenmeyer, Vice President for Research and Economic Development at Georgia State University

8:30 a.m. – Classifying Cancers from RNAseq Data through Machine Learning
Sergey Klimov, Daniel Kneller, Robert Stone – Department of Mathematics and Statistics


Classifying Cancers from RNAseq Data through Machine Learning
Cancer is the second leading cause of death in the United States, with a lifetime probability of cancer diagnosis of 43% for men and 38% for women. Early detection of cancer is paramount to effective treatment. Identification of tumor type can allow clinicians to provide personalized treatments, especially if clinical analysis of tumor type is inconclusive. A machine learning approach to the analysis of transcriptomic data can identify primary tumor tissue type with 98% accuracy for five specific cancer types. Combined with an easy-to-use web interface, this makes instant tumor type prediction accessible to clinicians.
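The kind of supervised workflow described, classifying tumor tissue type from expression profiles, can be sketched with scikit-learn; the synthetic data, array shapes, and classifier choice below are illustrative assumptions, not the authors' pipeline.

```python
# Illustrative sketch of classifying tumor tissue type from expression profiles
# with scikit-learn; synthetic data stands in for the real cohort, and the
# classifier choice is an assumption, not the authors' pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_genes, n_types = 500, 2000, 5
X = rng.normal(size=(n_samples, n_genes))      # expression matrix (samples x genes)
y = rng.integers(0, n_types, size=n_samples)   # tumor type labels (five cancer types)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)      # 5-fold cross-validated accuracy
print(f"mean accuracy: {scores.mean():.2f}")
```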

8:55 a.m. – Modeling Transmission and Control of the Ebola Epidemic In West Africa
Dr. Gerardo Chowell – Department of Public Health


While many infectious disease epidemics are initially characterized by an exponential growth in time, we show that district-level Ebola virus disease (EVD) outbreaks in West Africa follow slower polynomial-based growth kinetics over several generations of the disease.

We analyzed epidemic growth patterns at three different spatial scales (regional, national, and subnational) of the Ebola virus disease epidemic in Guinea, Sierra Leone and Liberia by compiling publicly available weekly time series of reported EVD case numbers from the patient database available from the World Health Organization website for the period 05-Jan to 17-Dec 2014. We found significant differences in the growth patterns of EVD cases at the scale of the country, district, and other subnational administrative divisions. The national cumulative curves of EVD cases in Guinea, Sierra Leone, and Liberia show periods of approximate exponential growth. In contrast, local epidemics are asynchronous and exhibit slow growth patterns during 3 or more EVD generations, which can be better approximated by a polynomial than an exponential function. The slower than expected growth pattern of local EVD outbreaks could result from a variety of factors, including behavior changes, success of control interventions, or intrinsic features of the disease such as a high level of clustering.

Quantifying the contribution of each of these factors could help refine estimates of final epidemic size and the relative impact of different mitigation efforts in current and future EVD outbreaks.
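The core comparison in this analysis, whether cumulative incidence grows exponentially or polynomially over the first few disease generations, can be sketched with SciPy curve fitting; the weekly counts below are synthetic placeholders, not the WHO data used in the study.

```python
# Sketch of comparing exponential versus polynomial fits to a cumulative case
# curve with SciPy; the weekly counts are synthetic placeholders, not the WHO
# EVD time series analyzed in this work.
import numpy as np
from scipy.optimize import curve_fit

weeks = np.arange(1, 16, dtype=float)
cases = 5.0 * weeks**2.2 + np.random.default_rng(1).normal(0, 20, weeks.size)

def exponential(t, c0, r):
    return c0 * np.exp(r * t)

def polynomial(t, a, m):
    return a * t**m

for name, model, p0 in [("exponential", exponential, (5.0, 0.3)),
                        ("polynomial", polynomial, (5.0, 2.0))]:
    params, _ = curve_fit(model, weeks, cases, p0=p0, maxfev=10000)
    sse = np.sum((cases - model(weeks, *params)) ** 2)   # goodness of fit
    print(f"{name}: params={np.round(params, 2)}, SSE={sse:.0f}")
```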

10 a.m. – UAVs and Archaeology: How New Technologies Are Changing Our View of the Past
Dr. Jeffrey B. Glover – Department of Anthropology


Over the past 20+ years, technological advances in computing and remote sensing have dramatically impacted the field of archaeology. These technological advances are often the result of collaboration between computer scientists and archaeologists. While this new digital toolkit cannot completely replace the shovel and trowel, it can be harnessed to unlock secrets about humanity’s past that would not have been revealed otherwise. This talk provides a broad overview of how technology has impacted archaeological practice, and then focuses on how Unmanned Aerial Vehicles (UAVs), aka drones, are transforming archaeological survey, mapping, and site protection.

10:50 a.m. – Panel Discussion: How Technology Will Shape the Future of Humanities Research
Featured panelists: Dr. Brennan Collins – Department of English, Dr. Robin Wharton – Department of English, Dr. Glenn Gunhouse – School of Art & Design, Dr. Jeffrey Glover – Department of Anthropology, Joseph Hurley – University Library, Dr. Robert Bryant – Department of Anthropology

Noon – Poster Session and Lunch
1. Dynamical Analysis of FitzHugh-Nagumo Neuron Network CPGs Using GPGPU Computing with OpenACC and Open MPI
Krishna Pusuluri, Andrey Shilnikov
Dynamical analysis of networks of neurons forming central pattern generators (CPGs) is an active area of research in computational neuroscience. Previously, networks of 3 cells have been explored thoroughly using various GPGPU tools. We have now built a comprehensive framework to explore large generic networks of neurons that can run across multiple GPUs for faster computations using OpenACC and Open MPI.
We present some of our results using this framework for 4-neuron and 6-neuron networks with two connected motifs running on a single NVIDIA Tesla K40 GPU. We discuss the performance improvements achieved using GPGPU computing versus various CPU parallel computing approaches. We also discuss significant improvements in development time and maintainability achieved using OpenACC versus approaches like CUDA. Lastly, we present how the computational approach in this framework can scale to study large networks of neurons.

2. R-Tree Construction and Query on the GPU
Sushil Prasad, Michael McDermott, Xi He
Efficient spatial indexing is becoming increasingly important in today’s world, where large amounts of data are not just indexed and collated but also located at very specific points in space and time. This matters not only for climate science but for any domain that utilizes spatial databases, such as agriculture, oceanography, and geopolitics, as well as more commonplace areas such as simple route planning. One of the most widely used data structures for easy and space-efficient indexing of spatial data is the R-tree. It is commonly implemented sequentially due to its hierarchical structure; here we present a non-trivial construction algorithm on the GPU that achieves a 226-fold speedup in construction, and a query algorithm that achieves 91-fold to 180-fold speedups.

3. Computer Model of the β1-Adrenergic Signaling in Mouse Ventricular Myocytes
Vladimir Bondarenko
The β1-adrenergic signaling system plays an important role in cardiac function. Activation of this system increases the rate of heart contraction, blood flow, and contraction force. Dysfunction of the β1-adrenergic signaling system results in cardiac hypertrophy, which leads to heart failure. Therefore, comprehensive experimental study and modeling of the β1-adrenergic signaling system in the heart is of significant importance. We developed an experimentally based computer model of β1-adrenergic regulation of the action potential and Ca2+ dynamics in mouse ventricular myocytes. The model describes biochemical reactions, electrical activity, and protein phosphorylation during activation of β1-adrenergic receptors. The model cell consists of three major compartments: caveolae, extracaveolae, and cytosol. In the model, β1-adrenergic receptors are stimulated by the catecholamine isoproterenol. This leads to activation of the Gs-protein signaling pathway, which ultimately increases cyclic AMP concentration and protein kinase A activity to different degrees in different compartments. The catalytic subunit of protein kinase A phosphorylates ion channels and Ca2+ handling proteins, leading to an increase or decrease in their function. Dephosphorylation is performed by protein phosphatases 1 and 2A. Our model reproduces the time-dependent behavior of a number of biochemical reactions and voltage-clamp data on ionic currents in mouse ventricular myocytes. The model also predicts action potential prolongation as well as an increase in intracellular Ca2+ transients upon stimulation of β1-adrenergic receptors.

4. Transcriptomic Analysis of Syrian Hamster Brain
Katharine McCann, David Sinkiewicz, Kim Huhman
Transcriptomic analysis is a powerful tool with which to study gene expression changes across conditions in a variety of organisms. This type of analysis is becoming increasingly popular to identify differential gene expression in animal models of various neuropsychiatric disorders. Using Georgia State’s high performance computing resources, our lab has begun to identify gene expression changes in Syrian hamster brain. Our lab specifically studies the behavioral and neurophysiological changes that occur after exposure to social stress, which has been shown to cause behavioral changes that are similar to those seen in human mood and anxiety disorders (e.g., changes in sociality, sleep, feeding, activity). First, we sequenced the entire brain transcriptome of male and female hamsters and compared baseline expression levels between sexes. Because social stress-induced behavioral changes vary between males and females, we also used transcriptomic analysis to identify sexually dimorphic gene expression in defeated versus non-defeated hamsters. We have shown that the basolateral amygdala (BLA) is an important component of the neural circuit underlying behavioral responses to social stress, and thus we investigated differential gene expression in the BLA of males and females of different social status (i.e., dominant, subordinate, or socially isolated). We are currently analyzing these data to determine if the genes that are differentially expressed based on social status are similar between males and females and what role, if any, these genes play in the striking behavioral changes that are observed after social stress.

5. An Application of Multithreaded Data Mining in Educational Leadership Research
David Fikis, Alex Bowers, Yinying Wang
This proposed study aims to provide a practical example of applying high-performance computing to the field of educational leadership. This interdisciplinary study bridges text mining (in particular the emerging technique of topic modeling), educational leadership, several key software tools (CasperJS, various GNU utilities, R, etc.), and hardware (the VELA batch computer and other multi-threaded environments) to facilitate efficient data analysis. The guiding research question of the proposed study is, “How are leading-edge research techniques in the educational leadership field adapted to take advantage of high-performance scientific computing?”
Data collection: After receiving permission from both the journal editor and the database involved, all articles from the Educational Administration Quarterly journal were indexed with CasperJS, a headless webkit, retrieved with GNU wget, processed when required by the Tesseract OCR engine, and cleaned with GNU utilities such as sed. A corpus of 1,631 plain text articles was generated.
Data analysis: Data were analyzed using the topicmodels package in R, first published July 11, 2014. After initial speed tests were unsatisfactory, sample code from a previous study was refactored to take advantage of both R’s vectorization and multi-threaded capabilities. The analysis code was run on the VELA Linux batch computer. The final compilation of findings was completed on a local workstation using R Markdown.
Implications: The speed increases provided by taking advantage of high-performance computing (the data analysis time was shortened from days to less than six hours) allow the research to proceed at an efficient pace, enable a more rigorous examination of the data, and reduce turnaround time for the research writing process. The systemic approaches developed for this particular research could be applied in other areas, and some of the challenges experienced may be encountered in other disciplines. A brief overview of “lessons learned” as well as “further opportunities” will be provided.

6. Intra-miR-ExploreR, a Novel Bioinformatics Platform for Integrated Discovery of miRNA:mRNA Gene Regulatory Networks
Surajit Bhattacharya, Atit Patel, Daniel Cox
miRNAs have emerged as key post-transcriptional regulators of gene expression; however, identification of biologically relevant target genes for this epigenetic regulatory mechanism remains a significant challenge. To address this knowledge gap, we have developed a novel tool, Intra-miR-ExploreR, that facilitates integrated discovery of miRNA targets by incorporating target databases and novel target prediction algorithms to arrive at high-confidence intragenic miRNA target predictions. We have explored the efficacy of this tool using Drosophila melanogaster as a model organism for bioinformatic analyses and functional validation. Moreover, we are constructing interaction maps of intragenic miRNAs, focusing on neural tissues, to uncover the regulatory codes via which these molecules regulate gene expression to direct cellular development.

7. Optimizing the Selection of Molecular Dynamics Frames for Virtual Screening of Human α1 Adrenergic Receptors
Mengyuan Zhu
Structure-based drug design has become a well-accepted method for discovering new drug scaffolds and modifying known ligands, and it relies heavily on the availability of structural information for the target. G protein-coupled receptors (GPCRs) account for one third of the targets of drugs on the market and are important for future drug discovery efforts as well. However, crystal structures are available for only a limited number of GPCRs. Computer-based structure model building and optimization is an inexpensive alternative for obtaining structural models. One can use molecular dynamics simulations to sample a large number of possible structures; however, it was not clear how to efficiently select frames along the simulation trajectory for docking. In the present study, we generated homology models of human α1 adrenergic receptors, performed 220 ns molecular dynamics simulations, and then studied them in virtual screening. Our results suggest that ensemble docking provides the most accurate performance. Other methods, including time-based, cluster-based, and energy-based selection, are not valid for evaluating whether a model can be selected for virtual screening. We also developed a series of scripts that can be used for post-processing of FRED docking results.

8. Computer Assisted Data Processing for Flow Adjusted Microcalorimetric Enthalpies
Christopher Tran
At Georgia State University, Dr. Kabengi is working with experimental flow calorimeters. These novel devices do not come with software to process their data; therefore, all data processing was previously carried out by hand using Excel and SigmaPlot. This required a significant amount of human input and time that could have been allocated elsewhere, and introduced human error. The program Graph Utility was tailored to the precise specifications of the flow calorimeters and has many of the features of proprietary software in similar genres. It intelligently parses raw data from different sources, detects outliers within a user-specified interval, automatically detects the start and stop of data curves, performs rotation and x/y rectification, detects and adjusts for peak drift, handles multiple files, exports to Excel, and integrates curve area using the trapezoidal rule.

With millions of computations per minute, Graph Utility processes data which previously required several hours of human work in mere seconds.

9. Molecular Dynamics of an Allosteric Activator of Pyruvate Kinase M2
Kenneth Huang, Chunli Yan, George Wang
Cancer cells have altered metabolic functions that allow them to support rapid cell proliferation, known as the Warburg effect. The pyruvate kinase isoform M2 (PKM2) is expressed in cancer cells in order to drive this effect and has been designated a biomarker for the diagnosis of various cancers. Regulation of PKM2 between its dimer and tetramer states allows cancer cells to meet both glycolytic and biosynthetic demands. Increasing pyruvate kinase activity by activating PKM2 is capable of suppressing tumor growth, making it a promising target for reducing cancer cell proliferation. We have found a novel compound that is capable of activating PKM2 by covalently binding to the protein, stabilizing the PKM2 tetramer. The compound is able to induce tetramer formation both with and without FBP present under normal conditions, a quality unseen in other known PKM2 activators. Using molecular dynamics simulations, we are able to ascertain the mechanism by which this occurs.

10. Digital Critical Editions of Medieval Texts: the Hoccleve Archive and the Digital Humanities
Dylan Ruediger, Sruthi Vuppala
The Hoccleve Archive, an ambitious effort to create a digital variorum edition of the works of the fifteenth-century poet Thomas Hoccleve, attempts not only to conserve several generations of archival materials but also to sustain the future of textual scholarship using digital tools. Our work, which is in its early stages, is currently creating the computing infrastructure to transform a rich collection of data from earlier technologies, including microfilmed manuscripts, over 6,000 handwritten collation sheets, and nearly 150 text-based computer files containing an archaic, but still legible, orthographic and lexiconic mark-up of Hoccleve’s holograph manuscripts. By creating a constellation of artefacts, the Hoccleve Archive simultaneously records both Hoccleve’s Middle English text and three historical levels of editorial processing, ranging from the conventions of late medieval manuscript production and transmission through the digital age. One of the project’s major long-term goals is to create a robust electronic platform for the practice of scholarly editing, which will allow a community of interested researchers and students to contribute to and sustain the ongoing work of our edition and gain experience in the complexities of archival work as they transcribe, edit, and tag Hoccleve’s poems. Our poster presentation will highlight the intersection of multiple forms of text and images which form the digital artefacts of Hoccleve’s texts and the history of the editorial practices which have shaped them for contemporary readers. We will illustrate in schematic and graphic form the community- and peer-based tools we will develop as our editorial and pedagogical platform. Finally, a media setup will allow attendees to view and interact with a beta version of the Hoccleve Archive’s website.

11. 3D Cell Migration, Amoeboid, Mesenchymal, Cancer Invasion, Signal Pathway
Xiuxiu He, Bong Hwan Sung, Kristin Riching, Patricia Keeley, Kevin Eliceiri, Alissa Weaver, Yi Jiang
Cell migration is important for development, wound healing, and cancer invasion. It is a complex process that involves multi-scale interactions between cells and the extracellular matrix (ECM). Empirical evidence has shown that cell-substrate interaction through focal adhesions is a key mechanism regulating cell migration plasticity. How the cell integrates the biomechanical properties of its microenvironment with cytoskeleton remodeling to initiate polarity and adhesion and to regulate migration modes is still not clear. Increasing experimental evidence suggests that migration behaviors differ and transition across physical parameters, including substrate rigidity, topography, and cell properties. We built a 3-D cell model with a cell motility signaling pathway and an explicit cell membrane, cytoskeleton, and nucleus, and simulated cell migration on 1-D and 2-D substrates with varied distribution and intensity. The model provides a flexible platform for investigating cell migration plasticity in complex microenvironments through biomechanical cell-substrate interactions.
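
For readers unfamiliar with substrate-dependent migration models, the toy sketch below illustrates the general idea on a far smaller scale than the 3-D model described above: a single cell takes biased random-walk steps on a 2-D substrate whose intensity favors movement up the gradient. The intensity function, bias parameter, and acceptance rule are hypothetical and are not part of the authors' model.

# A hedged toy sketch, far simpler than the authors' 3-D model: a biased
# random walk on a 2-D substrate whose intensity grows along x (hypothetical).
import numpy as np

def substrate_intensity(x, y):
    return x                                   # hypothetical substrate field

def migrate(steps=500, bias=0.5, step_len=1.0, seed=0):
    rng = np.random.default_rng(seed)
    pos = np.zeros(2)
    path = [pos.copy()]
    for _ in range(steps):
        angle = rng.uniform(0.0, 2.0 * np.pi)
        trial = pos + step_len * np.array([np.cos(angle), np.sin(angle)])
        # accept steps up the intensity gradient preferentially (Metropolis-like)
        gain = substrate_intensity(*trial) - substrate_intensity(*pos)
        if gain > 0 or rng.random() < np.exp(bias * gain):
            pos = trial
        path.append(pos.copy())
    return np.array(path)

path = migrate()
print("net displacement along the gradient:", path[-1, 0] - path[0, 0])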

12. Influenza A Infection: A Kinetic Model for Viral Replication at a Cellular Level
Ivan Solis, Neranjan Edirisinghe, Yi Jiang
Influenza A belongs to the family Orthomyxoviridae. It is an enveloped virus with a genome made up of negative-sense, single-stranded, segmented RNA; its eight segments encode the 11 viral genes. Our goal is to build a simple kinetic model that simulates viral replication in a single cell, computationally measuring different outcomes across different parameter values. The simulations presented were run on the TACC Stampede system, stepping the model forward through discrete time steps. We hope that by learning how the virus replicates in host cells using a kinetic model, we can develop a better system for following pandemics and help develop drugs and vaccines to inhibit infection, based on a better understanding of the virus' replication cycle.
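
The sketch below shows the general shape of such a single-cell kinetic model, not the authors' actual model: a small system of mass-action ODEs for viral RNA, viral protein, and assembled virions, integrated over time. All species and rate constants are hypothetical.

# Minimal sketch of a single-cell kinetic model (hypothetical species and rates).
import numpy as np
from scipy.integrate import odeint

def kinetics(state, t, k_rep, k_trans, k_asm, deg_rna, deg_prot):
    rna, prot, _ = state
    drna  = k_rep * rna - k_asm * rna * prot - deg_rna * rna       # replication, packaging, decay
    dprot = k_trans * rna - k_asm * rna * prot - deg_prot * prot   # translation, packaging, decay
    dvir  = k_asm * rna * prot                                     # virion assembly
    return [drna, dprot, dvir]

t = np.linspace(0.0, 24.0, 241)        # hours post-infection
y0 = [1.0, 0.0, 0.0]                   # start with one genome-equivalent of RNA
params = (0.8, 2.0, 0.05, 0.2, 0.3)    # hypothetical rate constants (1/h)
sol = odeint(kinetics, y0, t, args=params)
print("virions at 24 h:", sol[-1, 2])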

13. Democratization of HPC Resources for Utilization by Students from Traditional and Non-traditional Disciplines
Semir Sarajlic, Neranjan Edirisinghe
At Georgia State University (GSU), we simplified HPC utilization by abstracting away the computer science, following the ideology that HPC is a service and not a tool. This approach allowed us to broaden our user base from traditional researchers (e.g., chemistry, physics) to more non-traditional researchers (e.g., public studies, neuroscience, economics). GSU's HPC system, VELA, which we used as a testing environment, consists of 10 mutually exclusive servers loosely coupled through IBM's Load Sharing Facility (LSF) workload manager. Each of the 7 compute nodes has Intel Xeon E7-4850 processors (40 cores per server), 512 GB RAM, and 4 TB of local storage. In addition, there are two accelerator-based nodes, each with 20 cores and 128 GB RAM; one hosts 2x Intel Xeon Phi cards and the other 2x NVIDIA K40m GPUs. Our approach is implemented through custom scripts (wrappers) developed for the various scientific applications available on VELA. These wrappers move data between the user's local space and the compute nodes and execute the job submission scripts. In this way, our model effectively abstracts the computer science involved in submitting batch jobs on an HPC system. This ideology, and the model facilitating it, has resulted in an increased user base for VELA; more importantly, it has allowed students from non-computer-science backgrounds to access HPC resources as a service and obtain the results they need. Our usage data for VELA show a 316% growth in student users from 2013 to 2014. VELA, coupled with our customized scripts, has become the backbone for non-traditional HPC users on our campus.
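
A minimal sketch of the kind of wrapper described above is shown below. It is not GSU's actual VELA script; the application name, queue, and directory layout are hypothetical. The wrapper stages an input file, writes an LSF batch script, and submits it with bsub so the user never edits a job script by hand.

# Hedged sketch of a job-submission wrapper (hypothetical paths, queue, and app).
import shutil
import subprocess
from pathlib import Path

def submit(app_cmd, input_file, queue="normal", cores=4):
    work = Path("run_dir")
    work.mkdir(exist_ok=True)
    shutil.copy(input_file, work)              # stage the input next to the job

    script = f"""#BSUB -J wrapped_job
#BSUB -q {queue}
#BSUB -n {cores}
#BSUB -o wrapped_job.%J.out
cd {work.resolve()}
{app_cmd} {Path(input_file).name}
"""
    # LSF reads the job script from standard input: bsub < script
    subprocess.run(["bsub"], input=script, text=True, check=True)

# Example (hypothetical application and input file):
# submit("myapp", "sample.inp", queue="normal", cores=8)

In use, a researcher would call something like submit("myapp", "sample.inp") and collect results from the run directory once LSF completes the job, without ever touching a batch script directly.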

1:25 p.m. – Featured Guest Speaker
The World of Computing 
Dr. Vincent Betro – University of Tennessee National Institute for Computational Sciences at Oak Ridge National Laboratory 

The computers are taking over! There's no fighting it…just join in the fun! No matter what your discipline, you will have to be a competent computational thinker if you want to find a job or get funding to go to graduate school. More importantly, your generation is the one that will finally have the raw computing power to solve some of humanity's most interesting grand challenges: creating a model of the human brain, unlocking the patterns of weather to save lives and help with food supplies, and keeping people safe and mobile in an increasingly cramped world.

How can a computer do all this, you ask? The answer: it's not about any one computer but thousands of them working together to solve a problem (many hands make for light work, right?). And it's not just physicists and engineers looking for the birth of the universe or designing a fuel-efficient vehicle. It's historians looking at social justice issues and political changes by digitizing hundreds of years of census data. It's cinematographers scanning through the entire catalog of Hollywood movies to determine how the portrayals of women and minorities have changed. And it will be you trying to stay above the deluge of data that is at your fingertips—you will need tools to sort through all of it and extract useful information to change the world. Please come join us and learn about all the excitement the world of computing has to offer to all disciplines in a data-centric world.

 

Dr. Vincent Charles Betro received his Ph.D. in Computational Engineering from the University of Tennessee at Chattanooga in 2010, after which he became research faculty and the STEM outreach coordinator there. In 2012, he joined the University of Tennessee National Institute for Computational Sciences at Oak Ridge National Laboratory as a computational scientist, where he focused his research on porting and optimizing applications for the Intel Xeon Phi. He also served as the training manager for the NSF eXtreme Science and Engineering Discovery Environment (XSEDE) project and now serves as a mathematics instructor at the Baylor School in Chattanooga, Tennessee.

2:15 p.m. – Multiscale Modeling of Tumor Progression
Dr. Yi Jiang – Department of Mathematics and Statistics


Cancer is one of the leading causes of death from disease. Despite recent advances in our understanding of cancer mechanisms, significant gaps remain, particularly at the interface between biology and physics/mechanics. Our work has focused on studying how biophysical and biomechanical interactions contribute to cancer development. I will present a few examples of using mathematical models, specifically multiscale models, to understand tumor growth, tumor angiogenesis, and cancer invasion.

3:05 p.m. – Dynamical Network of Protein Residue-Residue Contacts Provides Insights Into Enzyme Function
Dr. Donald Hamelberg – Department of Chemistry


Although the relationship between structure and function in biomolecules is well established, it is not always adequate to provide a complete understanding of biomolecular function. It is now generally believed that dynamical fluctuations of biomolecules can also play an essential role in function. However, the precise nature of this dynamical contribution remains unclear. In this talk, I will discuss the development and use of theoretical and computational methods to understand how dynamical motions of enzymes are coupled to their function. I will present computational studies on members of a ubiquitous family of enzymes that catalyze the isomerization of peptidyl-prolyl bonds and regulate many sub-cellular processes. We map key dynamical features of the prototypical enzyme by defining dynamics in terms of residue-residue contacts. By analyzing large amounts of time-dependent, multi-dimensional data with a coarse-grained approach, we could capture the variation in contact dynamics during catalysis. I will present a rationale for how enzyme dynamics is coupled to the reaction step and affects catalysis. Our results provide insights into the general interplay between enzyme conformational dynamics and catalysis from an atomistic perspective, with implications for structure-based drug design and protein engineering.
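
As a rough illustration of what "defining dynamics in terms of residue-residue contacts" can mean in practice, the sketch below builds a per-frame contact map from Cα coordinates and ranks residue pairs by how much their contact state fluctuates over a trajectory. It is a generic, hypothetical example (random coordinates, an assumed 8 Å cutoff), not the speaker's method or code.

# Generic, hypothetical contact-dynamics sketch (not the speaker's code).
import numpy as np

def contact_trajectory(coords, cutoff=8.0):
    """coords: (n_frames, n_residues, 3) Cα positions in Å.
    Returns a boolean (n_frames, n_residues, n_residues) contact array."""
    diff = coords[:, :, None, :] - coords[:, None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    return dist < cutoff

rng = np.random.default_rng(0)
traj = rng.normal(scale=10.0, size=(100, 50, 3))   # toy "trajectory"
contacts = contact_trajectory(traj)

occupancy = contacts.mean(axis=0)       # fraction of frames each pair is in contact
fluctuation = contacts.std(axis=0)      # pairs near 0.5 occupancy fluctuate most
i, j = np.unravel_index(np.argmax(fluctuation), fluctuation.shape)
print(f"most dynamic contact: residues {i}-{j}, occupancy {occupancy[i, j]:.2f}")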

4:05 p.m. – Does Computational Science Show us the Future of Business Analytics?
Dr. Peter Molnar – Institute of Insight


Business analytics, data science, or simply Big Data have more in common with the computational sciences than just their hunger for immense processing power, huge storage capacity, and performance-optimized linear algebra.

Still, people usually point out their differences: analytics is heavily rooted in statistics, classification, and predictions based on similarity to observed data points. Computational science, on the other hand, allows us to explore the unknown…for example, understanding the beginning of the universe and discovering new materials and drugs.
The key is understanding causality. It is true that we have a much deeper understanding of cause and effect in the physical world. However, our ability to gain similar insights into human behavior is catching up. Soon, we may say "we developed the company's business model in silico."

4:55 p.m. – Transcriptomic Approaches to Studying Small Neural Networks In Exotic Species
Dr. Paul Katz – Neurosciences Institute

There are sometimes advantages to studying non-standard species. For example, nudibranch molluscs produce simple behaviors that are controlled by a small number of identified neurons. Furthermore, homologous neurons can be identified across species, permitting neural circuits to be compared. However, exotic species such as these have the disadvantage of lacking genetic information. Next-generation sequencing and de novo assembly of transcriptomes provide a means to overcome this problem. We have sequenced and assembled the brain transcriptomes of several nudibranch species, which produce two different types of swimming behaviors. We are using the results from these sequencing efforts to identify genes, such as neurotransmitter receptors, that play important roles in producing the swimming behaviors. Furthermore, the sequence information is providing us with additional molecular markers for identifying homologous neurons. In this way, we can study the evolution of the neural circuitry underlying natural behaviors.

5:45 p.m. – Brain Transcriptome of Adult Green Treefrogs
David Sinkiewicz, Walter Wilczynski – Neurosciences Institute

The ‘omics have brought increasingly large data sets to the life sciences. The first foray was the sequencing and annotation of genomes, which gave us insight into the genes an organism is capable of expressing. Since then, transcriptomics, proteomics, and metabolomics have been added in an effort to understand how the genome is expressed in an organism. Here at GSU we have recently begun investigating mechanisms that lead to differences in gene expression in the brains of several different species. One such mechanism is the role sex plays in regulating the expression of genes in adult green treefrogs (Hyla cinerea). These animals present a unique opportunity to connect sex-linked gene expression to the production of an overt behavior, vocalization. We have used HPC resources to produce a de novo representative transcriptome of green treefrog brain tissue and have begun analysis of genes differentially expressed between the sexes. Additionally, we are using computing resources to annotate the transcriptome with homologous gene names from BLAST and with gene ontology (GO) terms describing the molecular function, biological process, and cellular component of the genes in the transcriptome, as well as of those identified as differentially expressed. These analyses will enable us to understand the genes responsible for behavioral sex differences in green treefrogs.
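
As one concrete flavor of the annotation step mentioned above, the sketch below assigns each assembled transcript its best BLAST hit from a tabular (-outfmt 6) results file, keeping only hits below an E-value cutoff. The file name and cutoff are hypothetical; this is not the authors' pipeline.

# Hedged sketch of one annotation step (hypothetical file name and cutoff).
import csv

def best_hits(blast_tsv, evalue_cutoff=1e-5):
    """Return {query_id: (subject_id, bitscore)} for the best hit per transcript.
    Assumes BLAST tabular output (-outfmt 6): evalue in column 11, bitscore in column 12."""
    best = {}
    with open(blast_tsv) as fh:
        for row in csv.reader(fh, delimiter="\t"):
            query, subject = row[0], row[1]
            evalue, bitscore = float(row[10]), float(row[11])
            if evalue > evalue_cutoff:
                continue
            if query not in best or bitscore > best[query][1]:
                best[query] = (subject, bitscore)
    return best

# Example (hypothetical file):
# annotations = best_hits("treefrog_vs_swissprot.outfmt6")
# print(len(annotations), "transcripts annotated")
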
Post-Conference Networking and Coffee
Please join us after the conference for some light refreshments and networking with colleagues and our poster presenters.