Project Description
For the final phase of my project, I created an interactive website that turns human vocal expressions into real-time visual reactions. Using Gemini to build the system, I designed a blank digital canvas that responds only when the user engages through sound. An external Bluetooth microphone serves as the input device, capturing the user's voice and analyzing its frequency, volume, and tonal shifts. Each type of vocalization, whether laughing, whispering, shouting, gasping, or even a subtle breath, triggers a different visual response on the canvas. Higher frequencies produce sharper, more energetic movements, while softer sounds generate delicate, flowing forms. These variations form a spectrum of visual reactions that mirror the emotional range we naturally experience when watching K-dramas, movies, or any impactful media. Each visual also leaves an imprint after it appears, so repeated interactions gradually accumulate into a composite visual piece.

The purpose of this phase was to explore how sound can become a reactive, expressive system. By mapping vocal frequency to visual behavior, the website transforms spontaneous human reactions into graphic output, creating a dynamic conversation between voice and screen. Rather than consuming a fixed image or animation, the user generates the visuals through their own emotional cues. This final outcome ties together interactivity, sound analysis, and generative visuals to create a responsive environment that visualizes the way we feel, react, and express ourselves through voice.
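The write-up doesn't include code, but the pipeline it describes (microphone in, frequency and volume analysis, visuals out) maps naturally onto the browser's Web Audio API and a 2D canvas. Below is a minimal TypeScript sketch of that idea, assuming an AnalyserNode supplies the spectrum each animation frame; the function names, thresholds, and the specific shape and color mappings are illustrative stand-ins, not the project's actual implementation.

```ts
// Hypothetical sketch: start() should be called from a user gesture
// (e.g., a button click) so the browser allows the AudioContext to run.
async function start(): Promise<void> {
  const canvas = document.querySelector("canvas")!;
  const ctx = canvas.getContext("2d")!;

  // Route the (Bluetooth) microphone into an analyser node.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const audioCtx = new AudioContext();
  const source = audioCtx.createMediaStreamSource(stream);
  const analyser = audioCtx.createAnalyser();
  analyser.fftSize = 2048;
  source.connect(analyser);

  const spectrum = new Uint8Array(analyser.frequencyBinCount);

  function frame(): void {
    analyser.getByteFrequencyData(spectrum);

    // Volume: mean magnitude across bins; pitch: the loudest bin.
    let sum = 0;
    let peakBin = 0;
    for (let i = 0; i < spectrum.length; i++) {
      sum += spectrum[i];
      if (spectrum[i] > spectrum[peakBin]) peakBin = i;
    }
    const volume = sum / spectrum.length / 255;               // 0..1
    const freq = (peakBin * audioCtx.sampleRate) / analyser.fftSize; // Hz

    // Map loud, high-frequency input to sharp, energetic marks and
    // soft, low input to delicate, flowing ones. Marks are never
    // erased, so each reaction leaves an imprint on the canvas.
    if (volume > 0.02) {
      const x = Math.random() * canvas.width;
      const y = Math.random() * canvas.height;
      const size = 4 + volume * 60;
      const hue = Math.min(360, freq / 10);                   // pitch -> color
      ctx.fillStyle = `hsla(${hue}, 80%, 60%, ${0.2 + volume * 0.6})`;
      ctx.beginPath();
      if (freq > 1000) {
        // Sharper vocalizations: a jagged triangle.
        ctx.moveTo(x, y - size);
        ctx.lineTo(x + size, y + size);
        ctx.lineTo(x - size, y + size);
      } else {
        // Softer sounds: a flowing circle.
        ctx.arc(x, y, size, 0, Math.PI * 2);
      }
      ctx.fill();
    }
    requestAnimationFrame(frame);
  }
  requestAnimationFrame(frame);
}
```

The finished piece presumably uses richer mappings (tonal shifts, distinguishing a gasp from a laugh, and so on), but the structure would be the same: one analysis pass per animation frame, and marks that are never cleared, so the canvas accumulates a visible record of the viewer's reactions.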