If the Android device running the Slideshow app has a camera (either an integrated one, such as a tablet's front camera, or a USB camera), it can be used to detect how many people are currently in the area in front of the camera. This number estimates how many people can see what is on the screen at the moment and can be used either to interactively trigger content on the screen or for further analytical and statistical purposes.

Face detection is performed by a machine learning algorithm that runs entirely on the Android device. No internet connection or special hardware is needed; all calculations are done on the CPU.

If you would like to further integrate the face detection feature with an external system, please contact us.


The following steps are needed to enable face detection:

  1. On-screen menu – Basic settings → Request camera permissions – allow Slideshow to access the camera on the Android device. If this option is missing from the Basic settings, the permission has already been granted and it is not necessary to grant it again.
  2. Web interface – menu Settings → Device settings – select the input camera under Camera for face detection and set up other options if needed. If you are using a USB camera and it is not listed in Device settings, please verify that it is correctly plugged in and supported by the Android system (not all Android builds support all USB cameras).
  3. Reload the Slideshow app to apply the settings.
  4. Wait 10 seconds after startup, then navigate to web interface – menu Information → Face detection to check whether the camera image is correct. Refresh this page to get an updated image and statistics.


A quick overview of the face detection status can be seen on the Home page of the web interface. If face detection is active, a Face detection widget containing statistics is displayed. By clicking the Number of faces detected in the last frame link (or via menu Information → Face detection) you can open a page with detailed statistics for the last 60 frames, as well as the last processed image from the camera. All detected faces are marked in red together with their tracking IDs, and the probability that each eye is open is marked in blue and green.
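If you collect the per-frame face counts yourself (for example by periodically reading the statistics page), they can be summarized for analytical purposes. The sketch below is illustrative only; the list of face counts and the summary fields are assumptions, not part of the Slideshow interface:

```python
# Illustrative sketch: summarizing hypothetical per-frame face counts
# over a 60-frame window. The data shape is an assumption, not part
# of the Slideshow API.

def summarize_face_counts(counts):
    """Return basic audience statistics for a window of frames."""
    if not counts:
        return {"frames": 0, "max_faces": 0, "avg_faces": 0.0, "occupancy": 0.0}
    return {
        "frames": len(counts),                   # frames in the window
        "max_faces": max(counts),                # peak audience size
        "avg_faces": sum(counts) / len(counts),  # average audience size
        # share of frames with at least one face in front of the screen
        "occupancy": sum(1 for c in counts if c > 0) / len(counts),
    }

# Example: 60 frames, a single person present during the second half
window = [0] * 30 + [1] * 30
print(summarize_face_counts(window))
# → {'frames': 60, 'max_faces': 1, 'avg_faces': 0.5, 'occupancy': 0.5}
```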

Results of face detection can be used in Triggers to change playback. For example, detecting a person close to the camera can trigger a different playlist on the screen.
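A trigger like this is usually made robust against single-frame detection glitches by requiring the condition to hold over several consecutive frames. The following sketch illustrates that idea with hypothetical per-frame face counts and playlist names; in Slideshow the actual trigger conditions are configured in the web interface, not in code:

```python
# Illustrative debounce logic for a face-based trigger: switch to the
# "attract" playlist only after a face has been present for several
# consecutive frames, and switch back after several empty frames.
# Frame data and playlist names are hypothetical.

def choose_playlist(face_counts, hold_frames=3,
                    default="default", attract="attract"):
    """Walk through per-frame face counts and return the final playlist."""
    playlist = default
    streak = 0          # consecutive frames contradicting the current state
    for count in face_counts:
        wants_attract = count > 0
        if (playlist == attract) == wants_attract:
            streak = 0  # current state already matches; nothing pending
        else:
            streak += 1
            if streak >= hold_frames:
                playlist = attract if wants_attract else default
                streak = 0
    return playlist

# A single-frame glitch does not switch playlists...
print(choose_playlist([0, 0, 1, 0, 0]))   # → default
# ...but a sustained presence does.
print(choose_playlist([0, 1, 1, 1, 1]))   # → attract
```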

Face detection can’t be used at the same time as video input from the same camera, because two different processes can’t access one camera simultaneously.