
# VGG Face Search

Author:

License: BSD (see LICENSE.md)

## Overview

This repository includes a face-search service intended to serve requests from the vgg_frontend web interface, although it can easily be adapted to serve other clients. The source code for the service is located in the service folder. See the README file inside service for more information.

The repository also includes a data-ingestion pipeline that extracts face features from a user-defined dataset of images or a collection of videos. The output of this pipeline is then used by the face-search service to compute the results of any search. The source code for the data ingestion is located in the pipeline folder. See the README file inside pipeline for more information.

The most important dependency is the RetinaFace detector. When ingesting videos, ffmpeg is also a major dependency. Both must be installed in the dependencies folder. See the LICENSE.md file for links to the licenses of these dependencies.

The PyTorch model used for feature extraction must be located in the models folder. It corresponds to the SE-ResNet-50-256D VGGFace2 model. See the LICENSE.md file for a link to the license of this model.

## Supported platforms

Successfully deployed on Ubuntu 14/16 LTS, macOS High Sierra v10.13.3, and Windows 10 x64.

## Installation Instructions

Currently, the Pytorch_Retinaface implementation of RetinaFace is used for face detection on all platforms, which requires Python 3 and PyTorch.

The install directory contains installation scripts for Ubuntu and macOS. GPU usage is supported as long as the PyTorch installation has access to the GPU. See the next section for instructions on how to enable GPU support.
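As a quick sanity check before enabling GPU support, you can verify whether PyTorch can actually see a GPU. The snippet below is a minimal sketch and only assumes a working PyTorch installation:

```python
# Minimal sketch: check whether the installed PyTorch build can access a GPU.
# If this reports that CUDA is not available, leave CUDA_ENABLED set to False.
import torch

if torch.cuda.is_available():
    print("CUDA is available:", torch.cuda.get_device_name(0))
else:
    print("CUDA is not available; the service will run on the CPU.")
```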

If you want to install the service on Windows, see the Windows installer for VFF to get a rough idea of how to perform the deployment.

## Usage

Before running the service for the first time, please check the service/settings.py file (a sketch of the relevant settings is shown after the list below):

  1. Set the CUDA_ENABLED flag to False if you want to use the CPU, or to True if you want to use the GPU.
  2. Make sure that DEPENDENCIES_PATH points to the directory where the dependency libraries (e.g. Pytorch_Retinaface) are installed.
  3. Make sure that DATASET_FEATS_FILE points to the location of your dataset features file. If you do not have one, you will not be able to run the service until you run the feature-computation pipeline. See the README in the pipeline directory.
  4. Adjust MAX_RESULTS_RETURN as needed.
  5. Only change the rest of the settings if you really know what you are doing.
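For reference, the relevant entries in service/settings.py might look roughly like the sketch below. The setting names come from the list above; the values are only illustrative and must be adapted to your own installation:

```python
# Illustrative values only -- adjust them to match your local setup.
CUDA_ENABLED = False                          # set to True to use the GPU
DEPENDENCIES_PATH = 'dependencies/'           # where Pytorch_Retinaface (and ffmpeg, if needed) are installed
DATASET_FEATS_FILE = 'features/dataset.pkl'   # example path; produced by the data-ingestion pipeline
MAX_RESULTS_RETURN = 100                      # example value; maximum number of results to return
```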

Once you have adjusted the settings and have a dataset features file, you should be ready to start the service. To do so, open a command-line terminal, go into the service folder, and execute the start_backend_service.sh script (start_backend_service.bat on Windows). Use that script file to define or modify any environment variables required by your local setup.

The service should be reachable at the HOST and PORT specified in the settings.
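If you want to confirm that the backend is up, a simple TCP connection test such as the sketch below can help. The HOST and PORT values shown are placeholders; use the ones from your service/settings.py. Note that this only checks that the port is open, not that the search API is answering correctly:

```python
# Minimal reachability check: verifies that something is listening on HOST:PORT.
# Replace the placeholders below with the HOST and PORT from service/settings.py.
import socket

HOST = '127.0.0.1'   # placeholder
PORT = 35200         # placeholder

try:
    with socket.create_connection((HOST, PORT), timeout=5):
        print('The face-search service is reachable at %s:%d' % (HOST, PORT))
except OSError as err:
    print('Could not reach the service: %s' % err)
```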

## Wiki

The Wiki explains the details of the communication API used by the service, and includes other useful links and information.