event
Synthux Hackathon 2023

The Yassifier

Mirror mirror on the wall, why don't I recognise myself at all?
MaxMSP
Electret microphone
Online data
Spark AR
prompt
A humanised algorithm

Coaches & collaborators

institute
Design Academy Eindhoven
Poreless skin, pillow lips, a tiny nose and high cheekbones. Apps like FaceApp let us edit our faces however we want, but they also set an impossible beauty standard in which we end up competing with a pretend version of ourselves. The Yassifier (from "yassifying": applying beauty filter after beauty filter until one becomes unrecognisable) is a humorous response to this very real issue. The heavy face filter provokes a reaction in the user, and depending on their facial expression, their voice gets further and further distorted, widening the divide between our real and digital selves.
the tech

How does it work?

The Yassifier runs on two cameras: the iPad's front camera, which drives the Spark AR face filter displayed live on the screen, and a webcam mounted on top, connected to a laptop. On the laptop, a facial-expression-recognition algorithm written in JavaScript identifies seven different emotions and turns each of them into numerical data. In MaxMSP, each emotion's data is then fed to its own RNBO audio effect, which alters the sound live. For example, when the user smiles and talks at the same time, the sound coming out of MaxMSP is their voice at a much higher pitch.
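As a rough illustration of the laptop side, here is a minimal sketch of a browser script that tracks the seven expression scores and streams them out. It assumes face-api.js as the recognition library and a local WebSocket bridge on port 8080; the write-up only says the algorithm is written in JavaScript, so the library, the port and the model path are assumptions, not the project's actual code.

```js
// Browser-side sketch: webcam -> expression scores -> WebSocket.
// Assumes face-api.js is loaded globally (e.g. <script src="face-api.min.js">)
// and its model files are served from a local /models folder.

const video = document.getElementById('webcam');       // a <video autoplay muted> element
const socket = new WebSocket('ws://localhost:8080');   // hypothetical bridge address

async function start() {
  // Load a lightweight face detector and the expression classifier
  await faceapi.nets.tinyFaceDetector.loadFromUri('/models');
  await faceapi.nets.faceExpressionNet.loadFromUri('/models');

  // Attach the webcam stream to the video element
  video.srcObject = await navigator.mediaDevices.getUserMedia({ video: true });

  setInterval(detect, 100); // roughly ten updates per second
}

async function detect() {
  const result = await faceapi
    .detectSingleFace(video, new faceapi.TinyFaceDetectorOptions())
    .withFaceExpressions();
  if (!result || socket.readyState !== WebSocket.OPEN) return;

  // result.expressions holds seven scores between 0 and 1:
  // neutral, happy, sad, angry, fearful, disgusted, surprised
  socket.send(JSON.stringify(result.expressions));
}

start();
```

On the MaxMSP side, one plausible way to get those numbers into the patch is a small Node for Max script running inside a [node.script] object; again, this is a sketch of the general approach rather than the project's published patch.

```js
// Node for Max sketch: receives the JSON expression scores over WebSocket
// and sends each one out of the node.script outlet as "<emotion> <value>",
// so the patch can [route] them to the matching RNBO effects.

const Max = require('max-api');
const WebSocket = require('ws');

const wss = new WebSocket.Server({ port: 8080 }); // must match the browser sketch

wss.on('connection', (socket) => {
  socket.on('message', (data) => {
    const scores = JSON.parse(data.toString());   // { happy: 0.92, sad: 0.01, ... }
    for (const [emotion, value] of Object.entries(scores)) {
      Max.outlet(emotion, value);                 // e.g. "happy 0.92"
    }
  });
});
```

From there, the patch can map each emotion's value onto a parameter of its RNBO effect, for instance scaling the "happy" score into the amount of pitch shift applied to the microphone signal.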
Open source code
Click here to grab this project's code
About the instrument
MaxMSP: visual programming language for music and multimedia
Electret microphone: records audio and detects changes in gain (volume)
Online data: captures real-time data from web APIs
Spark AR: AR design tool