Object recognition using CNN
Description: Using the ImageNet database, implement a smaller version of AlexNet (https://proceedings.neurips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf) for object classification. The original AlexNet contains 5 convolutional layers + 3 fully-connected (MLP) layers. This project is meant for high school students who have a working knowledge of Python for machine and deep learning.
Implementation: Implement the missing functions in the reduced form of AlexNet with 3 convolutional layers and 2 MLP layers. Train the network so that it can recognize 5 different objects (e.g. dog, bicycle, car, bird, cat). You (or your TAs) will decide how much training and testing data to use. The team with the highest accuracy wins.
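The reduced architecture described above (3 convolutional layers + 2 MLP layers, 5 output classes) could be sketched in PyTorch as follows. The channel counts, kernel sizes, and hidden width are illustrative assumptions, not values fixed by the project:

```python
import torch
import torch.nn as nn

class MiniAlexNet(nn.Module):
    """Reduced AlexNet: 3 conv layers + 2 fully-connected layers, 5 classes.
    All layer sizes below are assumptions chosen for illustration."""
    def __init__(self, num_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2),  # conv 1
            nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),           # conv 2
            nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1),          # conv 3
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d((6, 6)),  # fixed-size features regardless of input
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 6 * 6, 256),   # MLP layer 1
            nn.ReLU(inplace=True),
            nn.Linear(256, num_classes),   # MLP layer 2 -> class scores
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```

Training would pair the class scores with nn.CrossEntropyLoss, which applies the softmax internally.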
Node classification using GCN
Description: Use the Cora dataset to classify papers that do not have labels. The dataset contains 2708 papers with 5429 links, divided into 7 categories. Each paper is described by a 0/1 word vector of size 1433 that indicates the presence or absence of each keyword in a dictionary of keywords. This project is meant for undergraduate students who have a working knowledge of Python for machine and deep learning and a background in linear algebra, especially graph theory.
Implementation: Using a 2-layer Graph Convolutional Network (GCN), where the weight matrix of the first layer outputs 64 features (i.e. 1433 x 64) and the weight matrix of the second layer outputs the probabilities of the 7 categories (i.e. 64 x 7), fill in the missing functions and train the model using 80% of the labeled data.
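A minimal sketch of the 2-layer GCN above, assuming the standard Kipf-Welling propagation rule with a symmetrically normalized adjacency matrix; the weight shapes (1433 x 64 and 64 x 7) match the project spec, everything else is an assumption:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCN(nn.Module):
    """2-layer GCN: H = ReLU(A_hat X W0), Z = log_softmax(A_hat H W1),
    where A_hat is the symmetrically normalized adjacency with self-loops."""
    def __init__(self, in_feats=1433, hidden=64, num_classes=7):
        super().__init__()
        self.w0 = nn.Linear(in_feats, hidden, bias=False)   # 1433 x 64
        self.w1 = nn.Linear(hidden, num_classes, bias=False) # 64 x 7

    @staticmethod
    def normalize_adj(adj):
        # A_hat = D^{-1/2} (A + I) D^{-1/2}
        a = adj + torch.eye(adj.size(0))
        d_inv_sqrt = a.sum(1).pow(-0.5)
        return d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)

    def forward(self, x, a_hat):
        h = F.relu(a_hat @ self.w0(x))
        return F.log_softmax(a_hat @ self.w1(h), dim=1)
```

Training with 80% of the labeled nodes would mask the loss (e.g. nn.NLLLoss on the log-probabilities) to those node indices only.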
Surveillance using model-based CNN
Description: In this day and age, deep neural network design is mostly based on prior knowledge and intuition, as there is no methodological way to optimally design networks for a given task. Recently, model-driven (or model-based) neural network design has made the design of DNNs tractable and explainable, usually by unrolling an iterative optimization algorithm. One of the first such algorithms is the Learned Iterative Shrinkage-Thresholding Algorithm, or LISTA. This has led to the development of a Robust Principal Component Analysis (RPCA) algorithm that can also be unrolled into a DNN. This project is meant for undergraduate students who have a working knowledge of Python for machine and deep learning and a background in linear algebra.
Implementation: Implement the missing functions in the model-driven RPCA neural network to perform video surveillance using the SBMI database (https://sbmi2015.na.icar.cnr.it/SBIdataset.html) for training and testing. The purpose is to separate the foreground (moving objects) from the background (static objects). The SBMI database contains the original video and background, so a background subtraction step needs to be implemented to extract the ground-truth foreground.
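The background subtraction step mentioned above could be sketched as simple differencing against the clean background provided by SBMI. The function name and the threshold value are assumptions; in practice the threshold should be tuned per sequence:

```python
import numpy as np

def extract_foreground_mask(frame, background, threshold=25):
    """Binary ground-truth foreground mask obtained by thresholding the
    absolute difference between a video frame and the clean background
    (both uint8 grayscale arrays of the same shape). The threshold is an
    illustrative assumption to be tuned per SBMI sequence."""
    # Cast to a signed type so the subtraction cannot wrap around.
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > threshold).astype(np.uint8)
```

These masks would serve as targets when training the unrolled RPCA network to output the sparse (foreground) component.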
Handwritten digits recognition using 3-layer MLP
Description: Use the MNIST dataset to classify handwritten digits (i.e. 10 categories). The dataset contains 60,000 training images and 10,000 testing images. This project is meant for high school students who have a working knowledge of Python for machine and deep learning.
Implementation: Fill in the missing functions for the 3-layer MLP that outputs probabilities for the 10 categories.
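A minimal PyTorch sketch of the 3-layer MLP; the hidden-layer widths are illustrative assumptions, only the 28x28 input and 10-class output are fixed by MNIST:

```python
import torch
import torch.nn as nn

# 784 -> 128 -> 64 -> 10; hidden sizes (128, 64) are assumptions.
mlp = nn.Sequential(
    nn.Flatten(),                 # 28x28 image -> 784-dim vector
    nn.Linear(28 * 28, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 10),            # class logits
)
```

The final layer produces raw logits; category probabilities come from applying a softmax, which nn.CrossEntropyLoss does internally during training.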