This function analyzes attributes of facial expressions within a video file. There are two ways to supply the video information. First, you can provide the video file itself; the function will then break it into still frames using the grabVideoStills() function. Second, you can use the videoImageDirectory argument to give the location of a directory where still frames have already been saved.

videoFaceAnalysis(
  inputVideo,
  recordingStartDateTime,
  sampleWindow,
  facesCollectionID = NA,
  videoImageDirectory = NULL,
  grabVideoStills = FALSE,
  overWriteDir = FALSE
)
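
As a minimal sketch of the two input modes described above (the file paths, timestamp, and directory are placeholders):

# Mode 1: supply the raw video and let the function extract still frames
# into videoImageDirectory via grabVideoStills().
faces.from.video = videoFaceAnalysis(
  inputVideo = "meeting001_video.mp4",
  recordingStartDateTime = "2020-04-20 13:30:00",
  sampleWindow = 1,
  videoImageDirectory = "~/Documents/meetingImages",
  grabVideoStills = TRUE,
  overWriteDir = TRUE
)

# Mode 2: point to a directory of pre-saved frames and skip extraction.
faces.from.stills = videoFaceAnalysis(
  inputVideo = "meeting001_video.mp4",
  recordingStartDateTime = "2020-04-20 13:30:00",
  sampleWindow = 1,
  videoImageDirectory = "~/Documents/meetingImages",
  grabVideoStills = FALSE
)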

Arguments

inputVideo

string giving the path to the video file (a gallery-view recording is ideal)

recordingStartDateTime

start date-time of the recording, formatted as YYYY-MM-DD HH:MM:SS

sampleWindow

how frequently to sample frames for the analysis, in seconds between sampled frames

facesCollectionID

name of an 'AWS Rekognition' collection containing identified faces

videoImageDirectory

path to a directory that either contains pre-saved image files or where extracted image files should be saved

grabVideoStills

logical indicating whether the function should split the video file into still frames

overWriteDir

logical indicating whether to overwrite videoImageDirectory if it exists

Value

data.frame with one record for every face detected in each sampled frame. For each face, the record includes detailed information from 'AWS Rekognition'. Note that the number of faces detected per sampled frame will vary. Imagine that you have sampled the meeting at regular moments and had someone rate each person's face within each sampled moment.
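
For example, because the number of faces varies by frame, a common first step is to tally detections per sampled frame. This is a sketch, not the package's own method; the grouping column (here hypothetically named frameId) depends on the actual columns in the returned data.frame, and vid.out refers to the object created in the Examples below:

# Hypothetical: count detected faces per sampled frame.
# Replace frameId with whichever column in vid.out identifies the frame.
facesPerFrame = as.data.frame(table(vid.out$frameId))
names(facesPerFrame) = c("frameId", "numFaces")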

Examples

if (FALSE) {
vid.out = videoFaceAnalysis(
  inputVideo = "meeting001_video.mp4",
  recordingStartDateTime = "2020-04-20 13:30:00",
  sampleWindow = 1,
  facesCollectionID = "group-r",
  videoImageDirectory = "~/Documents/meetingImages",
  grabVideoStills = FALSE,
  overWriteDir = FALSE
)
}