We installed and set up JupyterHub in the previous post. To make use of the GPU card in the server, we are also going to install and configure CUDA and cuDNN from NVIDIA.
Setup CUDA and cuDNN

According to NVIDIA, CUDA is not just an API or a programming language:
CUDA is a parallel computing platform and programming model that makes using a GPU for general purpose computing simple and elegant.
We recently added a computing node equipped with a Tesla V100 for some machine learning projects. This node will also serve as a testbed for exploring opportunities to migrate our current C and Fortran programs to the CUDA platform, but that will be the next phase.
To conveniently serve Python development for multiple users, we are going to deploy JupyterHub on the Ubuntu node. Here is a brief note on the installation procedure.
The initial title of this post was "A Simple Docker Example", which was going to show how to pack a simple Python program into a Docker image and run it. But it turns out that I chose a not-so-simple one as the example.
A few days ago, I annotated several hundred bib images with their corresponding bib numbers for my Bib-Racer-Recognition project. I want to double-check whether I tagged the bib numbers correctly, so I created a simple program to show the original images together with their corresponding bib numbers.
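The checking program can be sketched roughly as below. This is a minimal sketch, not the original post's code: I assume the annotations are stored as a simple two-column CSV (filename, bib number), and the file and directory names are placeholders.

```python
import csv
import os


def load_annotations(csv_path):
    """Read (filename, bib_number) pairs from a two-column CSV."""
    with open(csv_path, newline="") as fh:
        return [(row[0], row[1]) for row in csv.reader(fh)]


if __name__ == "__main__":
    # Display each image with its tagged bib number as the title,
    # so mislabelled images can be spotted by eye.
    import matplotlib.image as mpimg
    import matplotlib.pyplot as plt

    for filename, number in load_annotations("annotations.csv"):
        plt.imshow(mpimg.imread(os.path.join("bibs", filename)))
        plt.title(f"bib number: {number}")
        plt.show()
```

The pairing logic is kept in `load_annotations` so the display loop stays a dumb viewer; pressing the plot window's close button advances to the next image.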
Some notes while learning Docker with the Docker Getting Started Tutorial.
Basic concepts: a container is an isolated environment in which to build, run, deploy, and share applications. A container interacts with its own private filesystem, which is provided by a Docker image, and runs in its own namespaces.
Relevant commands

Images:
- Build an image from the Dockerfile in the current directory: docker build -t <image[:tag]> .
- Pull an image from a registry: docker image pull <image>
- List images: docker image ls

Containers:
- Start a container (for example, a web application) from a built image: docker run -d -p <host_port>:<container_port> --name <container_name> <image>
- Show available containers: docker ps -a
- Stop a container: docker stop <container_name> or docker stop <container_id>
- Start a stopped container: docker start <container_name> or docker start <container_id>
- Start a container and run a specific command: docker container run -w <working_dir> <image> <command>
- Send a command to a running container: docker exec <container_name> <command>
- Remove a container: docker rm <container_name>, or docker rm -f <container_name> to stop and remove it in one command.
We purchased several DELL PowerEdge R640 servers a few months ago as computation nodes. What we want is to build a small computational cluster so that they can run users' programmes in parallel. Users prefer to work on Linux, and most of the programmes will be written in C / C++ / Fortran. Therefore we are going to build a Beowulf cluster with a head node, or server node, and several computation nodes.
In the previous post, I talked about how to use the h5py package to read the MAT-file that contains the bounding box information of the SVHN dataset. After successfully reading the bounding box data, we can start to train a neural network for the SVHN recognition task. The bounding box data provided in the dataset give the position, size and label of each digit in the image, which means that for a 4-digit house number there are 4 boxes in total, each bounding exactly 1 digit, as shown below:
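Since each box covers only one digit, a common preprocessing step before training a whole-number detector is to merge the per-digit boxes into a single box enclosing the full house number. A minimal sketch, assuming the (left, top, width, height) convention used by digitStruct (the function name is my own):

```python
def merge_boxes(boxes):
    """Merge per-digit boxes (left, top, width, height) into one
    box that encloses the whole house number."""
    lefts = [b[0] for b in boxes]
    tops = [b[1] for b in boxes]
    rights = [b[0] + b[2] for b in boxes]   # left + width
    bottoms = [b[1] + b[3] for b in boxes]  # top + height
    left, top = min(lefts), min(tops)
    return (left, top, max(rights) - left, max(bottoms) - top)
```

For example, two adjacent digit boxes `(10, 5, 8, 20)` and `(20, 6, 8, 19)` merge into one box starting at the leftmost/topmost corner and extending to the rightmost/bottommost edge.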
Several days ago I was trying to train a neural network on the Street View House Numbers (SVHN) Dataset. I worked on the test set for its relatively smaller size, with only 13068 images. The bounding box information is recorded in digitStruct.mat, which can be loaded with Matlab. There are two fields for each record in digitStruct: name, the name of the image file; and bbox, the bounding box information of that image.
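In Python, the name field can be pulled out with h5py along these lines. This is a sketch assuming the usual MAT v7.3 layout of digitStruct.mat, where each entry of digitStruct/name is an HDF5 object reference to an array of uint16 character codes; the helper names are my own.

```python
def decode_name(codes):
    """MAT v7.3 stores strings as arrays of uint16 character codes;
    join them back into a Python string."""
    return "".join(chr(int(c)) for c in codes)


def get_image_name(f, index):
    """Return the image filename for a given record.

    f is an open h5py.File on digitStruct.mat; each entry of
    digitStruct/name is an object reference that must be
    dereferenced through the file to reach the character array.
    """
    ref = f["digitStruct/name"][index][0]
    return decode_name(f[ref][()].flatten())
```

Usage would look like `with h5py.File("digitStruct.mat", "r") as f: print(get_image_name(f, 0))`, which should print the first image's filename (e.g. `1.png`).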
I just made a very simple face and bib detection program following the post by Adrian Rosebrock, with weights trained on the downloaded trail running images using the method described in the previous post. The speed is not very fast: it takes more than 1 second per image. Of course, this is the Google Colab free tier, so there are lots of variables that we cannot control or even know about.
In the previous post, we talked about how to scrape and download photos from an online photo album of a trail running event using Selenium and BeautifulSoup. The next step is to identify the bib numbers in the photos automatically. This can be divided into two subtasks: the first is to locate the bib in an image, and the second is to recognize the bib number within the identified region.
I have participated in trail running races for 2 years. All of those races were exciting and unforgettable, with stunning scenic views and different kinds of challenges. During each event, there are many enthusiastic photographers, amateur and professional, taking numerous pictures of racers and putting them online for download, either freely or for a fee. To find the photos of a particular racer, one either has to search countless albums, each containing hundreds of photos, one by one with the naked eye, or use one of the websites that let you input a bib number and fetch the photos containing that number.