Organised by the Next Generation Arithmetic (NGA) team, the Conference for Next Generation Arithmetic (CoNGA) is an annual conference that provides a platform for speakers to showcase their recent developments in NGA.
After the first successful taskforce meeting, held in Singapore from 6 to 7 September 2019, the second meeting is being organised to begin preparations on the HPC system configuration that will best suit the shared ASEAN HPC facility.
In this workshop we will show you how to write pipelines with Snakemake, a free workflow management system. Snakemake greatly reduces the complexity of writing and executing your pipelines. We will demonstrate and discuss in detail how the system can be used to analyze genomics data and how execution transparently scales from your laptop to supercomputers.
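To give a flavour of the system, here is a minimal Snakefile sketch (the file names, sample names and shell commands are illustrative placeholders, not material from the workshop):

```snakemake
# Hypothetical Snakefile: count lines per sample, then combine the counts.
rule all:
    input:
        "results/summary.txt"

rule count_lines:
    input:
        "data/{sample}.fastq"
    output:
        "results/{sample}.count"
    shell:
        "wc -l {input} > {output}"

rule summarise:
    input:
        expand("results/{sample}.count", sample=["A", "B"])
    output:
        "results/summary.txt"
    shell:
        "cat {input} > {output}"
```

Given the targets requested in `rule all`, Snakemake infers the dependency graph from the input/output declarations and runs only the jobs that are missing or out of date, in parallel where possible.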
Target Audience: Users of A*CRC SGI ICE XA system
Account on SGI ICE XA system - participants are requested to register on the NSCC portal page at https://user.nscc.sg/saml. Basic knowledge of Linux, Fortran/C/C++ programming and concepts of distributed (e.g. MPI) or shared-memory parallelisation (e.g. OpenMP) . . .
Git is a collaborative code development tool, which is intrinsically distributed. This makes it more powerful than CVS and SVN. Branching and merging, which are a nightmare in CVS and SVN, are common use scenarios in Git. This workshop briefly introduces the common usage model of Git and gives a brief introduction to GitHub . . .
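The branch-and-merge workflow the workshop refers to can be sketched in a few commands (the repository, branch and file names here are made up for illustration):

```shell
# Create a throwaway repository to experiment in
git init demo
cd demo
git config user.email "student@example.com"   # placeholder identity
git config user.name "Workshop Student"
git commit --allow-empty -m "initial commit"

# Branching is cheap: create and switch to a feature branch
git checkout -b feature
echo "new work" > feature.txt
git add feature.txt
git commit -m "add feature"

# Merging the branch back is a routine operation
git checkout -          # return to the original branch
git merge feature
```

Because every clone carries the full history, these operations are local and fast, which is what makes branching and merging everyday actions rather than exceptional events.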
Julia is a high-level, high-performance dynamic programming language for technical computing, with syntax that is familiar to users of other technical computing environments . . .
The foundations for ADF (Amsterdam Density Functional) were laid in the 1970s, when Prof. Baerends pursued his Ph.D. at the VU University in Amsterdam. Prof. Ziegler joined the development effort early on as a post-doc, as did the Snijders group in Groningen. These theoretical chemists actively advocated the use of DFT methods to gain insight into chemistry and materials, at a time when this was not yet accepted in the chemical community.
The Python for Finance workshop is addressed to everyone who wishes to learn programming in the Python language (Day 1) and to begin coding a variety of financial models and ideas effortlessly (Day 2).
The workshop will cover the fundamentals of Python 3.5+, numerical aspects of coding and over 100 individually crafted examples covering various applications from finance, risk management, data analysis, statistics, and machine learning techniques in finance and beyond.
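As a taste of the kind of exercise covered on Day 2, here is a small standard-library sketch pricing a European call with the Black-Scholes formula (the parameter values are illustrative, not from the workshop materials):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF, expressed via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option.

    S: spot price, K: strike, T: time to expiry in years,
    r: risk-free rate, sigma: annualised volatility.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Example: an at-the-money one-year call
price = black_scholes_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2)
print(round(price, 2))  # approximately 10.45
```

With NumPy (covered in the numerical part of the workshop), the same function vectorises over whole arrays of strikes or spots with no code changes beyond swapping in `numpy` equivalents of the math functions.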
The growth of sequencing data now far outstrips today’s computer technologies, with genomic data quadrupling every year while compute power at best doubles. Many bioinformatics algorithms rely on direct comparisons of nucleotide sequences and on optimization combined with statistical techniques that do not scale to massive datasets.
This talk describes a recently initiated funded international partnership, called CENTRA, to facilitate research collaborations on transnational cyberinfrastructure and its applications. The rationale, goals, progress and opportunities of CENTRA are presented. A brief introduction is made to the scientific advances being sought by CENTRA, the important societal problems it targets and the objective of creating international networks of scientists working on cyberinfrastructure and its applications. The CENTRA framework, including mechanisms to engage new institutions and researchers, is also discussed briefly.
Supercomputing Frontiers 2016 (SCF2016) is back again and will be held from March 15 – 18, 2016 at Matrix Building, Biopolis. Organised by A*STAR Computational Resource Centre (A*CRC), SCF2016 is a platform for thought leaders from both academia and industry to interact and discuss visionary ideas, important global trends and substantial innovations in supercomputing.
This session will provide an overview of the Power 8 processor and the systems built with Power 8 technology. With 96 hardware threads over twelve cores running at up to 4GHz, 96 MB of on-chip L3, 128 MB of L4, up to 230GB/s of main memory bandwidth, and numerous architectural innovations such as transactional memory, the Power 8 processor provides leading throughput performance.
Bright Cluster Manager (BCM) is one of the leading tools in the area of cluster management. In this workshop, we will learn how to set up an HPC cluster with BCM through demonstration and hands-on activities. Another new and important feature of BCM is its integration with OpenStack to provision virtual nodes (a private cloud) on an HPC cluster (aka an HPC Cloud); these virtual nodes support InfiniBand connectivity and are ready to accept jobs dispatched by the workload manager installed on the HPC cluster.
Supercomputing Frontiers 2015 is Singapore’s inaugural conference on trends and innovations in the world of high performance computing.
It will be held on March 17 – 20, 2015 at Biopolis’ Matrix Building in Singapore.
VisNow is a generic visualization framework in Java technology, developed by the Interdisciplinary Centre for Mathematical and Computational Modelling at the University of Warsaw.
It is a modular, data-flow-driven platform enabling users to create schemes for data visualization, visual analysis, data processing and simple simulations. Motivated by the 'Read and Watch' idea, VisNow shows the data as soon and as fast as possible, giving further opportunity for processing and more in-depth visualization. In a few steps it can create professional images and movies, as well as discover unknown information hidden in datasets.
The HPC Advisory Council, in association with the A*STAR Computational Resource Centre, will hold the HPC Advisory Council Singapore Conference 2014 on October 7, 2014. The conference will focus on High-Performance Computing (HPC) usage models and benefits, the future of supercomputing, the latest technology developments, best practices and advanced HPC topics. The conference is open to the public and will bring together system managers, researchers, developers, computational scientists and industry affiliates.
He is the co-author of MADNESS (Multiresolution Adaptive Numerical Environment for Scientific Simulation). Please refer to the following link for more information on MADNESS: https://code.google.com/p/m-a-d-n-e-s-s/
An overview of supercomputing activities in Korea will be presented. Emphasis will be given to the National Supercomputing Promotion Act passed by the National Assembly of Korea in 2011. With this law, increased investment in the nation's supercomputing ecosystem and better coordination among government agencies are expected. The five-year master plan for national supercomputing, established in 2012, will be described in areas such as applications, infrastructure, and R&D. The talk will conclude with a brief description of the newly created National Institute of Supercomputing and Networking.
The Interdisciplinary Centre for Mathematical and Computational Modelling (ICM) was founded in 1993 as an institution new to the then-existing Polish system of science. ICM’s interdisciplinary attitude has been a driving force in all forms of its activity: research, research infrastructures, education and promotion.
D-Wave is the first company to claim to have built a quantum computer. We will be hosting D-Wave experts at A*CRC on the 17th and 18th of February.
Graphics processing units (GPUs) contain hundreds of arithmetic units and can be harnessed to provide tremendous acceleration for scientific applications such as computational fluid dynamics, climate-weather modeling and computational mechanics.
Modern graphics processing units (GPUs) contain hundreds of arithmetic units and can be harnessed to provide tremendous acceleration for scientific applications such as molecular modeling, computational biology & chemistry and bioinformatics.
Learn about Life Science applications that leverage the power of the GPU. Hear case studies of teams that shortened research cycles and sped up the discovery process through GPU acceleration.
The talk will cover the differences between traditional cloud setups, which focus on low cost and high workload density, and the HPC approach, which aims for top performance. This will include differences in general system design, interconnect, local and shared storage, and the configuration of cloud software for optimal performance in high performance computing.
As concurrency continues to increase on high-end machines, from both the number of cores and storage devices, we must look for a revolutionary way to treat Input/Output (I/O). As a matter of fact, one of the major roadblocks to exascale is how to write and read big datasets quickly and efficiently on high-end machines. On the other hand, applications often want to process data in an efficient and flexible manner, in terms of data formats and operations performed (e.g., files, data streams). We will show how users can do that and get high performance with ADIOS on 100,000+ cores.
Molecular spectroscopy is the physical description of chemical systems. With the advancement of instrumental techniques, such as synchrotron-sourced spectroscopy with better detectors and more powerful lasers, our knowledge of matter, molecules and their interactions has constantly improved, and earlier conclusions can sometimes even be overturned. The role of theory is not only the interpretation of experimental results but also the power of prediction. Computational spectroscopy powered by supercomputers has been integrated into the discovery process.
Supercomputers are capable of performing over 10^16 floating-point operations per second (34 PFlops). The greatest challenges facing computer and computational scientists are to further increase computer speeds and, more challengingly, to develop programming models that enable realization of the potential of such massive systems.
Supercomputers often stress their FLOPS as their primary benefit. While this is true in a classical sense, there is a growing need for very fast I/O capabilities for handling large quantities of data, namely “Big Data”. Many are led to believe that current-day cloud infrastructures are more suitable for big data processing than supercomputers, but this is simply not true, considering both current-day technologies and future technological trajectories. Tsubame2.0, Tokyo Tech’s petascale supercomputer, is often touted for its FLOPS and greenness, but another of its highlighted characteristics is that it is likely the world’s first supercomputer to facilitate fast I/O for both resilience and big data processing.
Hardware-assisted acceleration is a well-known, cost-effective way of obtaining performance gains on HPC systems. While various kinds of hardware accelerators exist (most notably FPGAs and GP-GPUs), this seminar will focus on the Intel Xeon Phi co-processor. In this seminar, Jeff Aide from SGI will talk about his experience with the Intel Xeon Phi accelerator. Kenny Sng from Intel will give a brief outline of the Phi architecture and roadmap.
NVIDIA Graphic Processor Units (GPUs) technology is increasingly used to accelerate compute-intensive HPC applications across various disciplines in the scientific and engineering communities. OpenFOAM® simulations can require a significant amount of computing time which can potentially lead to higher simulation costs. Enabling faster research and discovery using CFD is of key importance, and GPU technology can help speed up simulations and accelerate science.
Purpose: The purpose of this workshop is to provide an opportunity for users (application developers) to learn how to get started on the BG/Q and to take advantage of it for their codes. This will be accomplished through brief lectures introducing different topics and extensive hands-on sessions, with IBM consultants guiding participants through their own codes. This allows participants to gain as much experience as possible using the compilers, tools and techniques on their own codes.
A series of 3 NAG seminars will be conducted by Dr Jonathan Gibson, Technical Consultant, Numerical Algorithms Group. The venue is the Charles Babbage Room, Level 17, 1 Fusionopolis Way, Connexis South, Singapore 138632.
Parallel computing with GPUs is becoming more and more widely used in demanding general-purpose scientific and engineering applications. CUDA has been widely adopted throughout the world as the most accessible and intuitive way to achieve massive parallelism, and this is reflected in the large number of universities that include CUDA as part of their standard curriculum, the hundreds of technical papers, and the many parallel programming textbooks.
OpenFOAM is a free, open source CFD software package developed by OpenCFD Ltd at ESI Group and distributed by the OpenFOAM Foundation. It has a large user base across most areas of engineering and science, from both commercial and academic organisations. OpenFOAM has an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics.
Computational Fluid Mechanics is the application of numerical methods and algorithms to problems involving fluid flows. Famously one of the most difficult and computationally intensive areas of classical mechanics, this two-part class will provide an introduction to CFD for neophytes. Topics to be covered during the first part of the lecture include: What is CFD, fluid characteristics, fluid modelling, turbulence modelling, and CFD workflow
High Performance Computing represents the absolute zenith of computing power. HPC systems consist of thousands or millions of processor cores, joined together with high-speed networks, terabytes or petabytes of RAM and storage, and accelerators such as GPUs in order to process massive, computationally intensive datasets. Using and programming HPC systems requires a different approach to traditional serial programming, with unique challenges and opportunities.
“How sensitive are the values of the outputs of my computer program with respect to changes in the values of the inputs? How sensitive are these first-order sensitivities with respect to changes in the values of the inputs? How sensitive are the second-order sensitivities with respect to changes in the values of the inputs? ”
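These questions can be made concrete with a small numerical sketch. The central-difference estimator below is only an illustration of what first- and second-order sensitivities mean (the talk itself concerns more sophisticated techniques; `program` is a made-up stand-in for a real computer program):

```python
def first_sensitivity(f, x, h=1e-5):
    """Central-difference estimate of df/dx at x."""
    return (f(x + h) - f(x - h)) / (2 * h)

def second_sensitivity(f, x, h=1e-4):
    """Central-difference estimate of d2f/dx2 at x."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

def program(x):
    """A trivial stand-in for 'my computer program'."""
    return x ** 3

print(first_sensitivity(program, 2.0))   # close to 12, since f'(x) = 3x^2
print(second_sensitivity(program, 2.0))  # close to 12, since f''(x) = 6x
```

Finite differences need one extra program run per input per order, which is exactly why algorithmic approaches that compute exact sensitivities become attractive as the number of inputs grows.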
NWChem is a computational quantum chemistry package for studying the electronic structure, geometry and properties of molecules and periodic systems. It also includes classical and quantum (Car-Parrinello) molecular dynamics simulations. The package exhibits excellent parallel scaling and has been shown to run on hundreds of thousands of cores on Jaguar and other top supercomputing systems. Computations performed using NWChem have been awarded several Gordon Bell Prizes for the best supercomputer programs.
The goal of this day-long series of talks is to introduce advanced high performance scientific computing, including parallel and distributed algorithms and method designs, regular and irregular data structure adaptations, and programming paradigms and methodologies. The course will cover all the main existing high performance execution and programming paradigms, including flux parallelism (pipelined vector computing), data parallelism, control-flow parallelism, SIMD, SPMD and MSPMD, along with linear algebra examples.
The Challenge: Big Data. Technology advances have made data storage relatively inexpensive and bandwidth abundant, resulting in voluminous datasets from modeling and simulation, high-throughput instruments, and system sensors. Such data stores exist in a diverse range of application domains, including scientific research (e.g., bioinformatics, climate change), national security (e.g., cyber security, ports-of-entry), environment (e.g., carbon management, subsurface science) and energy (e.g., power grid management).
NAG produces numerical, data-mining, statistical and visualisation software components, compilers and application development tools for the solution of problems in a wide range of areas such as science, engineering, financial analysis and research. Produced by experts for use in a variety of applications, the NAG Library is the largest commercially available collection of numerical and statistical algorithms in the world. With over 1,600 tried and tested routines that are both flexible and portable, it remains at the core of thousands of programs and applications spanning the globe.
A*STAR COMPUTATIONAL RESOURCE CENTRE (A*CRC)
1 Fusionopolis Way
#17-01 Connexis (South Tower)
Telephone: (65) 6419 1510