Room 1

Machine learning is a technique for realizing Artificial Intelligence (AI): a system learns rules from repeated examples of inputs and outputs, and then returns meaningful outputs, with a certain probability, for unknown inputs. Machine learning may be said to resemble the process by which a designer learns, because many designers build rules within themselves by receiving various constraints as input, outputting solutions that seem optimal, and repeatedly receiving feedback from others. What if a designer had an AI as their alter ego?

“Room 1” is a hands-on exhibition in which a participant interacts, in a Virtual Reality (VR) space, with an AI that has learned a designer’s furniture placement plans, made at a specific time and under certain constrained conditions, through deep learning (a technique of machine learning). When a participant wears the head-mounted display (HMD) and places two life-sized model pieces of furniture in the space (input), the placement of five other pieces of VR furniture is dynamically determined (output). Depending on the placement of the input furniture, the result may reproduce one of the designer’s proposed arrangements, or it may be an arrangement close to the designer’s intention even though it differs significantly from any original plan.
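
The exhibition text does not describe the model itself, but the interaction above can be read as a supervised mapping from the poses of the two input pieces to the poses of the five output pieces. Below is a minimal sketch using Keras and TensorFlow (both listed in the credits); the (x, y, sin θ, cos θ) pose encoding, the architecture, the hyperparameters, and the placeholder data are all assumptions for illustration, not the exhibit's documented implementation.

```python
# A minimal sketch, not the exhibit's actual model: a small dense network
# regressing the poses of the five AI-placed pieces from the poses of the
# two participant-placed pieces (stool and bed).
# The (x, y, sin(theta), cos(theta)) pose encoding and all hyperparameters
# are assumptions for illustration only.
import numpy as np
from tensorflow import keras

N_IN = 2 * 4    # stool + bed, four values per pose
N_OUT = 5 * 4   # the five remaining pieces

model = keras.Sequential([
    keras.layers.Input(shape=(N_IN,)),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(N_OUT),  # regressed poses of the five pieces
])
model.compile(optimizer="adam", loss="mse")

# Training pairs would come from the designer's placement plans:
# X holds the participant-controllable poses, Y the matching full layout.
X = np.random.rand(256, N_IN).astype("float32")  # placeholder data
Y = np.random.rand(256, N_OUT).astype("float32")
model.fit(X, Y, epochs=10, batch_size=32, verbose=0)

# At exhibition time, one confirmed stool/bed placement yields one layout.
layout = model.predict(X[:1])  # shape (1, N_OUT)
```

Encoding each rotation as a sine/cosine pair is a common way to avoid the wrap-around discontinuity when regressing angles; whether the exhibit does this is not stated.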

For the designer, the AI’s output, based on rules the designer never verbalized, serves as a reflection. From the AI’s responses to unexpected inputs by participants, the designer reflects: stimulated not only by the rare successes, but also by reading glimpses of their own rules in the failures that make up most of the trials.

How to experience

A participant wears the HMD and places the VR stool and bed in their desired positions. When satisfied, the participant tells the staff that the placement is complete. The AI, which has previously learned the designer’s plans, then rearranges the remaining furniture based on the participant’s arrangement. Changing the positions of the stool and the bed again rearranges the other furniture accordingly. Each participant has up to five minutes to enjoy the dialogue with the AI.

System

Each participant can walk around freely and experience the VR space. According to the position and direction of the HMD worn by the participant, the space containing the 3D models is arranged and rendered in real time. Two life-sized models of furniture act as the inputs; each is fitted with a sensor that tracks its position and direction within the 3D space in real time, allowing the participant to arrange the furniture freely in the VR space. When the participant confirms the placement, it is fed into the learned model, which outputs the positions and directions of the five pieces of VR furniture; these are then reflected in the rendering engine.
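
Read as pseudocode, the pipeline above is a tracking loop that continuously mirrors the two physical pieces, with model inference triggered only by an explicit confirmation. The helper objects and method names below are hypothetical stand-ins for the VIVE tracking and Unreal Engine 4 sides of the system, not documented APIs of this exhibit.

```python
# A hypothetical runtime loop for the pipeline described above. The
# tracker, model, and renderer objects and their methods are placeholder
# names, assumed for illustration.
def run_exhibit(tracker, model, renderer):
    while True:
        # Continuously mirror the two physical pieces in the VR space.
        stool_pose, bed_pose = tracker.read_furniture_poses()
        renderer.update_input_furniture(stool_pose, bed_pose)

        # Only an explicitly confirmed placement is fed to the model.
        if tracker.placement_confirmed():
            flat = model.predict([stool_pose + bed_pose])[0]
            # Split the flat output vector into five per-piece poses and
            # hand them to the rendering engine.
            pieces = [flat[i:i + 4] for i in range(0, len(flat), 4)]
            renderer.update_ai_furniture(pieces)
```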

Rhinoceros, Cinema 4D, Unreal Engine 4, Python, Keras, TensorFlow, HTC VIVE, VCarve Pro, ShopBot PRSalpha 96-48-8, coniferous plywood, lauan plywood, styrofoam, PLA (3D printed), screw bolt, red pine block

Exhibit