The Yassifier works with two cameras: the iPad's front camera, which drives the face filter built in Spark AR and displays it live on the screen, and a webcam mounted on top, connected to a laptop. On the laptop, a facial expression recognition algorithm written in JavaScript runs continuously, identifying seven different emotions and converting each into numerical data. In Max/MSP, each emotion's data is then fed to its own RNBO audio effect, which alters the sound live. For example, when the user smiles while talking, Max outputs their voice at a much higher pitch.
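The write-up does not name the recognition library or the transport into Max, but a minimal sketch of the laptop side might look like the following, assuming face-api.js for the seven-class expression detection and a WebSocket bridge into Max. The model path, port, and `/emotion/...` addresses are placeholders, not the project's actual configuration:

```javascript
// Hypothetical sketch: per-frame expression detection with face-api.js,
// forwarding each emotion's score toward Max/MSP over a WebSocket bridge.
// Library choice, model path, port, and address scheme are all assumptions.

import * as faceapi from 'face-api.js';

// The seven expression classes face-api.js reports, matching the
// seven emotions described above.
const EMOTIONS = ['neutral', 'happy', 'sad', 'angry',
                  'fearful', 'disgusted', 'surprised'];

const socket = new WebSocket('ws://localhost:8080'); // assumed bridge into Max
const video = document.getElementById('webcam');

async function start() {
  // Load the lightweight face detector and the expression classifier
  // (assumed model directory).
  await faceapi.nets.tinyFaceDetector.loadFromUri('/models');
  await faceapi.nets.faceExpressionNet.loadFromUri('/models');

  // Attach the webcam feed to the <video> element.
  video.srcObject = await navigator.mediaDevices.getUserMedia({ video: true });

  setInterval(async () => {
    const result = await faceapi
      .detectSingleFace(video, new faceapi.TinyFaceDetectorOptions())
      .withFaceExpressions();
    if (!result) return; // no face in frame this tick

    // result.expressions holds a 0..1 confidence per emotion; sending each
    // on its own address lets Max route it to the matching RNBO effect.
    for (const emotion of EMOTIONS) {
      socket.send(JSON.stringify({
        address: `/emotion/${emotion}`,
        value: result.expressions[emotion],
      }));
    }
  }, 100); // roughly 10 updates per second
}

start();
```

On the Max side, a small relay (for instance a [node.script] object or a UDP bridge) could map each incoming address onto a parameter of the corresponding RNBO device, such as the `happy` score scaling a pitch-shift ratio to produce the higher-pitched voice described above.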