To realize this idea, I had to implement machine learning in my system so that it could detect the entire audience walking in front of the screen.


To do so, I used VL (vvvv gamma), a newer version of vvvv that supports multithreading. I started from their sample patch for CPU-based YOLO detection and converted it to run on the GPU.


Once the machine learning part was done, I took the output of YOLO and applied various effects, such as edge detection and optical flow, to the bounded human texture.
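The idea of cropping the detected region and filtering it can be sketched in plain Python/NumPy. This is only a conceptual illustration, not the actual vvvv/VL patch: `crop_box` and `sobel_edges` are hypothetical names, and the real installation ran these steps as GPU shaders.

```python
import numpy as np

def crop_box(frame, box):
    """Crop an (x, y, w, h) bounding box out of a 2D grayscale frame."""
    x, y, w, h = box
    return frame[y:y + h, x:x + w]

def sobel_edges(patch):
    """Apply a simple Sobel edge filter to a 2D grayscale patch."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = patch.shape
    out = np.zeros((h, w), dtype=float)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = patch[y - 1:y + 2, x - 1:x + 2]
            # gradient magnitude from horizontal and vertical responses
            out[y, x] = np.hypot(np.sum(win * kx), np.sum(win * ky))
    return out

# a frame with a vertical brightness edge, and a YOLO-style box around it
frame = np.zeros((10, 10))
frame[:, 5:] = 1.0
patch = crop_box(frame, (2, 2, 6, 6))
edges = sobel_edges(patch)
```

In the installation the same pattern applies per detected person: the bounding box isolates the texture, and the effect only processes that region instead of the full camera feed.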



The long space allotted for this exhibition reminded me of a long corridor leading to the subway. To design a flow of movement that would not overwhelm visitors' attention and would ensure their participation in the interaction, I covered the entire wall with LED screens and installed three wide-angle cameras on top of the wall as machine learning feeds.


The base color of the imagery is black, so the LEDs emit light only when a viewer enters the room, drawing the audience's attention. The video resolution ( px x px) and the number of LED panels ( sheets) were specified in detail, and SDI transmission was used for the cameras to eliminate problems such as delay.

In addition, two small cabinets, each made by dismantling an Intel NUC and fusing it with a touch panel, were installed in front of two of the LED panels. A UI designed by Patricia Reiners in Adobe XD ran on these PCs, providing an interaction system that lets viewers change the density and drawing style of the trajectories as they wish. This transformed the viewer from a mere object treated as a one-sided machine learning source into a participant in the interactive installation. To be clear, we did not collect any information about participants during this installation.

@DesignUP India


Machine Learning

Live Interaction

The best way to learn about the nature of a medium is to actually use it.


In 2019, when machine learning was being talked about as a hot topic, we exhibited an installation implementing a machine learning system called YOLO Detection at DesignUP India, which bills itself as India's largest design conference. YOLO, built on the Darknet framework, is a machine learning algorithm that detects pre-labeled objects in video, outputting each object's label and bounding box.
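Conceptually, each YOLO detection arrives as a label, a confidence score, and a bounding box, and the installation only cares about the "person" class. A minimal sketch of that filtering step, with hypothetical detection tuples (not the actual VL data structures):

```python
# Hypothetical post-processing of YOLO-style detections:
# each detection is (label, confidence, (x, y, w, h)).
def filter_people(detections, min_conf=0.5):
    """Keep only confident 'person' detections."""
    return [d for d in detections if d[0] == "person" and d[1] >= min_conf]

detections = [
    ("person", 0.92, (120, 40, 60, 180)),
    ("chair", 0.80, (300, 200, 50, 50)),
    ("person", 0.30, (10, 10, 40, 90)),
]
people = filter_people(detections)
# only the first detection survives the label and confidence filters
```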

The theme of this work is "painting with crowds." Custom software made with vvvv beta & gamma continuously draws visitors, clipped out by YOLO Detection, on an LED wall, leaving their trajectories in the form of brushstrokes. The idea for this work came while walking underground in Tokyo, born from a sentimental motive to record people's footsteps within the rapidly changing landscape of the city.
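The "trajectory as brushstroke" effect can be sketched as a feedback buffer: each frame, the previous canvas is faded slightly and the current silhouette is stamped on top, so older positions linger as a dimming trail. This Python/NumPy sketch only illustrates the principle; the decay value and `accumulate_trail` are illustrative, not taken from the actual patch.

```python
import numpy as np

def accumulate_trail(trail, mask, decay=0.95):
    """Fade the previous trail and stamp the current silhouette mask on top."""
    return np.maximum(trail * decay, mask)

# simulate a silhouette (one bright column) moving right across a small canvas
canvas = np.zeros((4, 8))
for x in range(3):
    mask = np.zeros((4, 8))
    mask[:, x] = 1.0
    canvas = accumulate_trail(canvas, mask)
# older columns are dimmer, leaving a fading trajectory behind the figure
```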


Design & Development by Takuma Nakata

UX / UI Design: Patricia Reiners

Art Direction: Yuta Ichinose

Sound Design: Julien Miere

Other Credits

Video Edit: Daichi Ito

Supported by: Adobe Creative Residency