Content

Due to the availability of general-purpose programming interfaces like CUDA, the immense speed of graphics cards can be put to work for a multitude of parallel tasks. Algorithms for the analysis of images mostly work independently on different regions of an image. These algorithms are therefore inherently parallel and can greatly profit from parallel hardware. Speedup factors on the order of two orders of magnitude make it possible to process and extract information from huge datasets, for example the images of the ImageNet Large Scale Visual Recognition Challenge. When experimenting with learning algorithms, the experiment duration is drastically reduced. In the lab, we will learn how to implement learning algorithms from the area of visual pattern recognition and accelerate them using graphics processing units (GPUs).
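As a loose illustration of this kind of data parallelism (a minimal sketch, assuming PyTorch is installed and, for the GPU path, a CUDA-capable device is present; the batch size, image size, and filter are arbitrary placeholders), the same convolution over a batch of images can be timed on CPU and GPU:

```python
import time

import torch

def time_convolution(device: torch.device) -> float:
    """Time one 2D convolution over a batch of images on the given device."""
    images = torch.randn(64, 3, 224, 224, device=device)  # random stand-in images
    conv = torch.nn.Conv2d(3, 32, kernel_size=3).to(device)
    with torch.no_grad():
        conv(images)  # warm-up run, so one-time setup cost is not timed
        if device.type == "cuda":
            torch.cuda.synchronize()  # GPU kernels run asynchronously
        start = time.perf_counter()
        conv(images)
        if device.type == "cuda":
            torch.cuda.synchronize()
    return time.perf_counter() - start

cpu_time = time_convolution(torch.device("cpu"))
if torch.cuda.is_available():
    gpu_time = time_convolution(torch.device("cuda"))
    print(f"CPU {cpu_time:.4f}s vs. GPU {gpu_time:.4f}s "
          f"(speedup: {cpu_time / gpu_time:.0f}x)")
else:
    print(f"CPU {cpu_time:.4f}s (no CUDA device available)")
```

The convolution treats every image region independently, which is exactly why such workloads map so well onto the thousands of parallel cores of a GPU.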
We will implement learning algorithms with the help of PyTorch, a popular deep learning framework.
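To give a first impression of what this looks like in practice (a minimal, hypothetical sketch rather than an actual lab exercise; the model, data, and hyperparameters are placeholders), a single training step of a small classifier in PyTorch could look like this:

```python
import torch
import torch.nn as nn

# Hypothetical tiny classifier; the lab's actual models will differ.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
).to(device)  # placing the model on the GPU is all the acceleration requires

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# One training step on a random stand-in batch (real data would come from a DataLoader).
inputs = torch.randn(32, 1, 28, 28, device=device)
targets = torch.randint(0, 10, (32,), device=device)

optimizer.zero_grad()                     # clear gradients from the previous step
loss = loss_fn(model(inputs), targets)    # forward pass and loss
loss.backward()                           # backpropagate
optimizer.step()                          # update the parameters
print(f"loss: {loss.item():.4f}")
```

Note that moving the computation to the GPU only requires placing the model and the tensors on the `cuda` device; the training loop itself is unchanged.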
The lab will have weekly meetings. We'll agree on a time slot for these meetings in the first session.

Prerequisites