Optimisation and Neural Network Training on GPUs

by Wulf Dettmer

Departments: Zienkiewicz Institute for Modelling, Data and AI
Description: In recent years, computational optimisation and machine learning have become fundamental to progress in many areas of science and engineering. The efficiency of optimisation strategies and of artificial neural network training, in terms of hardware and computation time, is therefore crucial both to enable further progress and to minimise the carbon footprint. This project is concerned with the implementation of state-of-the-art black-box optimisation strategies on GPU systems. The data structures and the management of threads and GPU memory must be designed carefully to ensure the efficiency and scalability of the software. Strategies to be considered include particle swarm optimisation and approximate gradient descent; an illustrative sketch of a GPU particle swarm update is given at the end of this listing. The methodologies developed will be applied to benchmark problems and to recurrent neural network training. The student should have a background in programming and scripting languages such as C++, Python or Matlab. Experience with CUDA is beneficial but not required.
Preparation: background reading on particle swarm optimisation, CUDA and recurrent neural networks
Project Categories: Artificial Intelligence (AI), Software Engineering
Project Keywords: Machine Learning, Neural Networks, Optimisation


Level of Studies

Level 6 (Undergraduate Year 3): yes
Level 7 (Masters): yes
Level 8 (PhD): yes
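
Illustrative sketch (not part of the formal project brief): the code below shows one way a particle swarm optimiser might be organised in CUDA, with one thread per particle updating its velocity, position and personal best. The particle count, swarm coefficients, the sphere objective and the host-side global-best reduction are all assumptions chosen for brevity; a full implementation would tune these, add error checking, and perform the global-best reduction on the GPU.

// Minimal particle swarm optimisation sketch in CUDA (illustrative only).
// One thread per particle; the toy objective is the sphere function f(x) = sum x_i^2.
#include <cstdio>
#include <cfloat>
#include <curand_kernel.h>

constexpr int N_PARTICLES = 256;
constexpr int DIM         = 8;
constexpr int N_ITERS     = 200;

__device__ float sphere(const float* x)            // toy objective: global minimum at the origin
{
    float s = 0.0f;
    for (int d = 0; d < DIM; ++d) s += x[d] * x[d];
    return s;
}

__global__ void init_particles(float* x, float* v, float* pbest_x, float* pbest_f,
                               curandState* rng, unsigned long long seed)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= N_PARTICLES) return;
    curand_init(seed, i, 0, &rng[i]);              // one RNG state per particle
    for (int d = 0; d < DIM; ++d) {
        x[i * DIM + d] = 10.0f * (curand_uniform(&rng[i]) - 0.5f);   // positions in [-5, 5]
        v[i * DIM + d] = 0.0f;
        pbest_x[i * DIM + d] = x[i * DIM + d];
    }
    pbest_f[i] = sphere(&x[i * DIM]);
}

__global__ void pso_step(float* x, float* v, float* pbest_x, float* pbest_f,
                         const float* gbest_x, curandState* rng)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= N_PARTICLES) return;
    const float w = 0.7f, c1 = 1.5f, c2 = 1.5f;    // typical inertia and acceleration coefficients
    for (int d = 0; d < DIM; ++d) {
        float r1 = curand_uniform(&rng[i]);
        float r2 = curand_uniform(&rng[i]);
        int k = i * DIM + d;
        v[k] = w * v[k] + c1 * r1 * (pbest_x[k] - x[k]) + c2 * r2 * (gbest_x[d] - x[k]);
        x[k] += v[k];
    }
    float f = sphere(&x[i * DIM]);                 // update the personal best if improved
    if (f < pbest_f[i]) {
        pbest_f[i] = f;
        for (int d = 0; d < DIM; ++d) pbest_x[i * DIM + d] = x[i * DIM + d];
    }
}

int main()
{
    float *x, *v, *pbest_x, *pbest_f, *gbest_x;
    curandState* rng;
    cudaMallocManaged(&x,       N_PARTICLES * DIM * sizeof(float));
    cudaMallocManaged(&v,       N_PARTICLES * DIM * sizeof(float));
    cudaMallocManaged(&pbest_x, N_PARTICLES * DIM * sizeof(float));
    cudaMallocManaged(&pbest_f, N_PARTICLES * sizeof(float));
    cudaMallocManaged(&gbest_x, DIM * sizeof(float));
    cudaMalloc(&rng, N_PARTICLES * sizeof(curandState));

    int threads = 128, blocks = (N_PARTICLES + threads - 1) / threads;
    init_particles<<<blocks, threads>>>(x, v, pbest_x, pbest_f, rng, 1234ULL);
    cudaDeviceSynchronize();

    float gbest_f = FLT_MAX;
    for (int it = 0; it < N_ITERS; ++it) {
        // Simple global-best reduction on the host; a scalable version would reduce on the GPU.
        for (int i = 0; i < N_PARTICLES; ++i)
            if (pbest_f[i] < gbest_f) {
                gbest_f = pbest_f[i];
                for (int d = 0; d < DIM; ++d) gbest_x[d] = pbest_x[i * DIM + d];
            }
        pso_step<<<blocks, threads>>>(x, v, pbest_x, pbest_f, gbest_x, rng);
        cudaDeviceSynchronize();
    }
    printf("best objective value found: %g\n", gbest_f);
    return 0;
}

Compiled with nvcc and run, the sketch prints the best objective value found, which should approach zero for the sphere function; swapping in a harder benchmark or a network training loss is where the project work would begin.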