Design Methodologies


Developing novel design methodologies for both individual processors and whole computing systems.


Research topics

  • High-level GPU Programming

To exploit the performance potential of massively parallel accelerators like GPUs, expert programmers need to tune their code in low-level languages like OpenCL or CUDA C for the parameters and features of specific GPU platforms and models. This hampers the use of those accelerators by non-GPU experts, whose adoption is needed in domains such as artificial intelligence, machine vision, and big data.
We focus on the high-level programming language Julia. We build on its metaprogramming, just-in-time compilation, and strong type inference capabilities, and we redesign, extend, and open up compiler interfaces so that the existing Julia compiler infrastructure and code can be reused as much as possible while still mapping compute kernels to efficient GPU code. The result is code that is as efficient as CUDA C code, can be written with an order of magnitude less effort, and does not need to be retuned for specific GPU models.
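To illustrate the programming model, the sketch below shows a GPU kernel written in plain Julia using the CUDA.jl package that grew out of this line of work. The kernel code and names (`vadd!`) are an illustrative example, not code from the cited publications; the compiler specializes the generic kernel for the actual argument types and the target GPU at first launch, so no manual retuning is needed. (Running it requires an NVIDIA GPU.)

```julia
using CUDA

# A generic kernel in plain Julia: type inference and JIT compilation
# specialize it for the concrete array element types at launch time.
function vadd!(c, a, b)
    i = (blockIdx().x - 1) * blockDim().x + threadIdx().x
    if i <= length(c)
        @inbounds c[i] = a[i] + b[i]
    end
    return nothing
end

a = CUDA.rand(Float32, 1024)
b = CUDA.rand(Float32, 1024)
c = similar(a)

# Launch with enough threads to cover the arrays; the @cuda macro drives
# the GPU compilation pipeline built on the reusable compiler interfaces.
@cuda threads=256 blocks=cld(length(c), 256) vadd!(c, a, b)
```

Note that the kernel body contains no GPU-specific types: the same source would also compile for `Float64` or integer arrays, which is exactly the reuse of existing Julia code the paragraph above describes.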

Staff: Bjorn De Sutter, Koen De Bosschere
Researchers: Thomas Faingnaert, Tim Besard, Christophe Foket
Projects: IWT-grant Tim Besard, FWO G051318N, GOA BOF11/GOA/021
Publications: Rapid software prototyping for heterogeneous and distributed platforms; Effective extensible programming: unleashing Julia on GPUs; Dynamic automatic differentiation of GPU broadcast kernels

Completed projects

  • CGRA: Compiler Techniques for Coarse-Grained Reconfigurable Arrays
  • KIS: High-Performance Embedded Systems
  • FLEXWARE: Hardware acceleration of massively parallel applications by exploiting flexible parallel hardware platforms
  • HOME-MATE: Home-compatible Multimodal Alarm Triggering for Epilepsy