
School of Mathematics, Computer Science and Engineering

IN3060: Computer Vision

Resit

Coursework for UG students:

Face recognition


Submission details

You must submit your project to Moodle by 5pm, Wednesday 26 August 2020. Unlike in the sit period, no module-wide extensions will be granted. The submission must include your project report (PDF format), all the produced source code, a readme.txt file (with instructions on how to run the implemented methods), the trained models, and a screen capture video (mp4, or any other format playable by the latest version of the VLC player).

Standard lateness penalties (late submission = 0 marks) and extensions policy apply. This assignment constitutes 100% of the total mark for this module.

Concept and task

Have you ever been introduced to a number of people, only to forget their names? Humans are exceedingly capable of recognising faces they have seen before, but matching names to faces can be difficult. Imagine having an app running on your device (e.g. smartphone, glasses) that recognises people and prompts you with their name, to help you avoid blurting out an awkward “I’m sorry, but I forgot your name?”.

Person identification is important for a variety of practical tasks, including cybersecurity and authentication. In this project, you will develop a computer vision system to detect and identify individuals using a database of known face images (specifically, students in this module).


Code specifications

It is recommended that you use Matlab (preferably 2018a or later) for the development of the coursework in this module. Matlab is an exceedingly powerful programming language and environment for imaging and computer vision, and includes a variety of toolboxes that provide functionality of interest to this project, including the Computer Vision System Toolbox, Image Processing Toolbox, Optimization Toolbox, Neural Network Toolbox, and Statistics and Machine Learning Toolbox. As an alternative to Matlab, you are allowed to use Python with relevant libraries (be sure to follow the “Coursework Python guidelines” document on Moodle). This is, however, not recommended, since the lecture notes, code samples and lab exercises will be based on Matlab. While some help might be provided, academic support from the lecturers and tutors will be limited to Matlab. If the code is not properly packaged to allow for smooth testing, the submission will be marked as a fail.

Deliverables

Project reporting (30%):

Produce relevant documents and materials to describe how you carried out the coursework. In particular:

• Prepare a project report (maximum 10 pages) in PDF format (20%):

o The report should include an overview of your project followed by a description of your implemented methods (with more emphasis placed on your best performing one). Include a brief block diagram showing the key algorithmic components that are part of your best performing method;

o For each task (i.e. other deliverables), briefly list the requirements that were implemented. Describe your implementation, and review and discuss any relevant theory underpinning the methodology. If a requirement was skipped, say so;

o If you initially tried approaches that failed, mention these briefly, and provide a rationale for why they were ineffective;

o Provide qualitative and quantitative analyses of the accuracy of the implemented methods, and discuss conditions under which your best performing method may fail to achieve the desired result;

o Reference any external sources or pretrained models used;

o Include a discussion section reflecting on the project. Consider the strengths and weaknesses of what you achieved in the time provided. Also discuss how you might expand the project to improve performance or take it in new directions.

At times you may wish to discuss the methods used with your colleagues. This is encouraged. However, the coursework is individual. Plagiarism will be dealt with directly by the Department’s senior staff and may lead to course disqualification. Your work must be your own and you must cite any and all resources used. Importantly, while using libraries/code/models found online is allowed and encouraged, you must not reuse other students’ submissions for this module’s Computer Vision coursework (either from the sit period or from previous years). Doing so, regardless of proper referencing and acknowledgement, will be treated as academic misconduct and may lead to course disqualification.


• Provide your commented source code (5%):

o Follow a logical design, organisation, and tidy coding style;

o All code has to be commented (block-level comments are sufficient);

o Reference any external source code used;

o Use appropriate naming conventions and display an organised code strategy through the use of functions.

• Provide a screen capture video in which you test the implemented methods on images from the evaluation set (which you will create from the provided datasets) (5%):

o The video must be 5 minutes maximum (anything beyond 5 minutes will not be watched);

o It should show the results produced by the implemented methods by displaying the obtained results on images from the evaluation set;

o The video does not have to be exhaustive, but it has to show that your code actually works and produces some results;

o It has to contain an audio commentary (recorded by you) providing a short but relevant description of the methods shown as they execute.

Face recognition (70%)

Develop a Matlab function,

P = RecogniseFace(I, featureType, classifierType)

that returns a matrix P describing the student(s) present in an RGB image I, with I being a string containing the address of the image file (e.g. I = "test_folder/IMG_6817.JPG"), and not the image data itself. P should be a matrix of size N x 3, where N is the number of people detected in the image. The three columns should represent:

o ID: a unique number associated with each person. If the person is present in the database of individual pictures, the ID is the number held by the person;

o x: the x coordinate of the centre of the face for the person detected in the image;

o y: the y coordinate of the centre of the face for the person detected in the image.

For example, if the function detects two students in the image, with the student with ID=03 having his/her face centred at position [144, 153], and the student with ID=13 at position [312, 123], the P matrix would be

P =
03 144 153
13 312 123

During marking, your function will be tested on images not available in the provided datasets.
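If you take the Python route, the mapping from detected faces to the required P matrix can be sketched as follows. This is only an illustrative helper, assuming that face detection (e.g. a cascade or CNN detector) and ID prediction happen elsewhere; the function name and layout are not part of the specification:

```python
import numpy as np

def boxes_to_P(ids, boxes):
    """Build the N x 3 matrix [ID, x, y] from predicted student IDs and
    face bounding boxes given as (left, top, width, height) tuples.

    The (x, y) columns are the centres of the detected faces; an empty
    0 x 3 matrix is returned when no faces were detected, as required.
    """
    rows = [[sid, left + w // 2, top + h // 2]
            for sid, (left, top, w, h) in zip(ids, boxes)]
    return np.array(rows, dtype=int).reshape(-1, 3)
```

For instance, `boxes_to_P([3, 13], [(100, 120, 88, 66), (280, 90, 64, 66)])` reproduces the example matrix above.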

• Implement RecogniseFace so that it correctly identifies the location of faces in both individual and group images (automatically managing different image orientations). Executing the function must also automatically display the input image with the face locations superimposed as bounding boxes, together with the students’ IDs (15%);


• The RecogniseFace function must have two additional arguments, featureType and classifierType, which allow different features (e.g. SURF, HOG) or classifiers (e.g. SVMs, MLPs, CNNs) to be used (up to 40%). Use at least three feature-classifier combinations. Different points will be awarded for different combinations:

o For each general feature-classifier combination (8%);

o For each feature-classifier combination using an MLP as classifier with hyperparameter tuning (16%, see below for details);

o For using a CNN classifier (24%). In this case, the content of featureType is not used.

The points scored for this task will be given by the sum of the implemented combinations (e.g. correctly implementing a SURF-SVM, a SURF-MLP as well as a HOG-MLP will provide up to 40%). Importantly, at least one combination needs to include an MLP (multi-layer perceptron) as classifier. For this case, discuss the choice of the hyperparameters involved (e.g. number of hidden layers) by evaluating at least a couple of different implementations (which will still count as one single classifier);

• Add to your report performance metrics (e.g. percentage of faces correctly identified, confusion matrices) obtained on the evaluation dataset, at least for the individual images (15%), using the implemented models. At least for your best performing model, show examples of correct and, if any, incorrect detections on both individual and group images. For your best performing model, also estimate the quantitative accuracy on at least one group image by manually annotating the students appearing in it, using the individual images as reference.
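As a rough Python illustration of the train-then-evaluate pipeline above (assuming scikit-learn is among your chosen libraries), the sketch below fits a linear SVM and reports accuracy and a confusion matrix. The randomly generated vectors stand in for real HOG/SURF descriptors of face crops, and the IDs 3, 13 and 21 are arbitrary placeholders:

```python
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Toy stand-in for the labelled face dataset: 60 descriptor vectors for
# three hypothetical student IDs. In the real pipeline each row would be
# a HOG/SURF descriptor extracted from a face crop.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 64))
y = np.repeat([3, 13, 21], 20)
X[y == 3] += 1.0   # shift the class means so the toy data is separable
X[y == 13] -= 1.0

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)
clf = SVC(kernel="linear").fit(X_tr, y_tr)
pred = clf.predict(X_te)

# These two calls produce exactly the metrics requested for the report.
print("accuracy:", accuracy_score(y_te, pred))
cm = confusion_matrix(y_te, pred, labels=[3, 13, 21])
print(cm)
```

The same evaluation code applies unchanged whichever feature-classifier combination produced `pred`, which makes side-by-side comparison of the combinations straightforward.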

Implementation notes

• Create a sensible training/evaluation split of the “Coursework Datasets” available on Moodle. You can use both individual and group images and videos as you see fit;

• For the images in your training set, perform face detection to find faces, and save the extracted faces as image samples. Associate each sample with a unique label, which has to be the number held by the person in the individual images. This will produce a labelled dataset of image samples that can be used for training;

• Throughout the project, feel free to use pre-trained models (excluding ones trained by other present or past students of this module), but carefully reference them in your report and produced code;

• (Optional, up to 10%) The association between faces and labels can be done manually, without the need to apply OCR algorithms to the handheld signs. However, submissions that implement this feature in their code will be awarded additional points;

• (Optional, up to 10%) RecogniseFace should be able to assign as many correct IDs as possible on a previously unseen group image. If some faces do not correspond to any of the ones available in the individual images (i.e. for students who did not have their individual picture taken), it could assign individual “unknown” labels (i.e. “unknown 01”, “unknown 02”, etc.). These labels should be dynamically assigned at test time (i.e. avoid creating the “unknown” labels ahead of time);

• If no faces are detected, RecogniseFace should return an empty matrix;

• Do not panic if some feature-classifier combinations do not provide very accurate results: this is expected for some of them. The aim of the coursework is to assess your ability to properly and effectively implement and evaluate the methods you have learned in class, not necessarily to achieve extremely high accuracy in all cases;

• Please note that a perfect mark (100%) can be obtained for a perfect submission without implementing the optional tasks. The optional marks will be awarded only to cover potential deficiencies in the Face Recognition deliverable, and not in the Project reporting.
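The second implementation note above (building a labelled training set from detected faces) might be organised along the following lines in Python. The function name and the in-memory dict layout are illustrative only; in practice you would load real images and save the crops to disk:

```python
import numpy as np

def build_labelled_dataset(images, annotations):
    """Group face crops by student ID to form a labelled training set.

    images:      dict mapping image name -> H x W x 3 array
    annotations: dict mapping image name -> list of (student_id, x, y, w, h)
                 boxes, where student_id is the number held on the sign
                 (assigned manually, or via OCR for the optional marks)
    Returns a dict mapping student ID -> list of face crops.
    """
    dataset = {}
    for name, boxes in annotations.items():
        image = images[name]
        for student_id, x, y, w, h in boxes:
            crop = image[y:y + h, x:x + w]   # crop the detected face
            dataset.setdefault(student_id, []).append(crop)
    return dataset
```

Each list of crops can then be written to disk (one folder per ID is a common layout) and fed to the chosen feature extractor and classifier.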

Submission notes

• You should deliver your project by uploading the produced files to Moodle. Avoid uploading a single .zip file. Instead, your submission must consist of the following, separately uploaded files:

o The report (PDF format)

o The screen capture video (preferably mp4, but at least readable by VLC player)

o The RecogniseFace function (commented)

o A readme.txt file

o All the other produced code (commented and zipped)

o The trained models (zipped)

• You must upload both the report and the RecogniseFace function separately and unzipped to allow for Turnitin-based plagiarism checks;

• The readme.txt file must describe all the valid ways to call your RecogniseFace function (i.e. the arguments to pass in as featureType and classifierType) to allow easy testing on unseen images. It also has to indicate where to place the RecogniseFace function in the code folder to allow for smooth testing;

• Make sure that your RecogniseFace function can properly execute without runtime errors by testing your submission on a different computer (perhaps using the facilities at City, University of London through Remote Desktop). To this end, remember to use Matlab’s filesep in path descriptions to accommodate a different OS. If functions/libraries/packages are used beyond those in the standard City, University of London Matlab distribution, make sure that these are sufficiently packaged with the provided source code so that RecogniseFace can run on a machine with Matlab without having to set up any complicated file paths;

• Include all trained models so that the implemented function can run autonomously without the need for re-training: your models will not be trained during marking;

• Add the code you produced to train your models to your code submission, but do not include the image datasets used for training;

• If your models exceed Moodle’s 200MB limit, you can upload your models alone on a personal cloud space (e.g. Dropbox) and add a link to them in the readme.txt file, together with an indication of where to store them to allow the seamless execution of your methods (the report, source code, readme.txt file and screen capture video must still be submitted to Moodle). Do not send the link directly to the teaching staff. Note that, apart from the models, all other files need to be uploaded to Moodle: submissions not following this rule will be marked as fail.
