Date: 09 May 2013, Thursday
Time: 6:00PM – 9:00PM
Venue: Level 17, Charles Babbage Room, 1 Fusionopolis Way, Connexis South, Singapore 138632
Parallel computing with GPUs is increasingly used in demanding general-purpose scientific and engineering applications. CUDA has been widely adopted throughout the world as the most accessible and intuitive way to achieve massive parallelism, as reflected by the large number of universities that include CUDA in their standard curricula, and by hundreds of technical papers and parallel programming textbooks.
The following topics will be covered in this lecture:
- Introduction to GPU computing
- CUDA programming basics
- CUDA API and data allocation
- Matrix multiplication in CUDA
- CUDA memory model and tiled parallel algorithm
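As a taste of the "CUDA API and data allocation" and "Matrix multiplication in CUDA" topics above, a minimal sketch (illustrative only, not the lecturer's actual material) of a naive matrix-multiplication kernel and its host-side launch might look like:

```cuda
#include <cuda_runtime.h>

// Naive CUDA matrix multiplication: one thread computes one element
// of C = A * B for square N x N row-major matrices. The lecture's
// tiled algorithm improves on this by staging blocks in shared memory.
__global__ void matMul(const float *A, const float *B, float *C, int N)
{
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < N && col < N) {
        float sum = 0.0f;
        for (int k = 0; k < N; ++k)
            sum += A[row * N + k] * B[k * N + col];
        C[row * N + col] = sum;
    }
}

// Host-side sketch: allocate device memory with the CUDA API, copy
// inputs over, launch a 16x16 thread-block grid, and copy the result back.
void launchMatMul(const float *hA, const float *hB, float *hC, int N)
{
    size_t bytes = (size_t)N * N * sizeof(float);
    float *dA, *dB, *dC;
    cudaMalloc(&dA, bytes);
    cudaMalloc(&dB, bytes);
    cudaMalloc(&dC, bytes);
    cudaMemcpy(dA, hA, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, bytes, cudaMemcpyHostToDevice);
    dim3 block(16, 16);
    dim3 grid((N + block.x - 1) / block.x, (N + block.y - 1) / block.y);
    matMul<<<grid, block>>>(dA, dB, dC, N);
    cudaMemcpy(hC, dC, bytes, cudaMemcpyDeviceToHost);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
}
```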
Kyle Rupnow is an Assistant Professor at Nanyang Technological University (Singapore) and a Research Scientist at the Advanced Digital Sciences Center, a University of Illinois research center (http://www.adsc.com.sg). He received a BS in Computer Engineering and Mathematics in 2003 from the University of Wisconsin-Madison. Dr. Rupnow received his PhD in Electrical Engineering in 2010, working with Prof. Katherine Compton on operating system support for reconfigurable computing systems. During his PhD studies, Dr. Rupnow was supported as a Sandia National Laboratories Excellence in Engineering Fellow. In addition, he received the Gerald Holdridge tutorial development award and the UW-Madison PhD Capstone teaching award for his work as a teaching assistant and lecturer during his time at UW-Madison.
No of Participants: 40