Agent exploring the aesthetics of architecture using computer vision
Team Jin Lee, Hwanjin Kim, Hobum Park (Mind Computation Lab)
Advisor Professor Seung Wan Hong, Inha University
This project was awarded the Grand Prize (Minister of Land, Infrastructure, and Transport Award) at the 2021 Digital Architecture Competition hosted by the Architectural Institute of Korea.
How can Virtual Users (VUsers) help discover the aesthetic experience of architecture? In modern architecture, the elements that create aesthetic experiences are becoming more diverse and complicated. Yet there is a clear limit to discovering unexpected experiences, beyond the architect’s intention and direction, from the varied and autonomous perspectives of users, and to reflecting those experiences in the design process.
To address this, our team has developed computer vision-embedded agents that compute autonomous behavior with the visual cognitive ability to judge and discover the aesthetic experiences of architecture. These agents can have authentic experiences in the dynamic situations of a virtually built environment. When populated in a space, they interpret the inherent meanings of design components and draw a consensual assessment of aesthetic validity. The optical recognition of the VUsers is implemented with a convolutional neural network (CNN) deep learning method, which preserves the spatial information of images and extracts their features using a recently developed computer vision model.
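The project text does not include the model code itself. As a rough illustration only, the following minimal Keras sketch shows the kind of convolutional classifier described above: stacked convolution and pooling layers preserve spatial structure while extracting features, and the output is a single "aesthetic validity" score. All layer sizes, names, and hyperparameters are assumptions, not the project's actual architecture.

```python
# Minimal sketch (not the project's actual model): a small convolutional
# classifier that keeps spatial structure through conv/pooling layers and
# outputs an "aesthetic validity" score between 0 and 1.
# Layer sizes and hyperparameters are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_aesthetic_classifier(input_shape=(224, 224, 3)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),   # low-level features (edges, lines)
        layers.MaxPooling2D(),                     # downsample while keeping layout
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),     # aesthetic validity score
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```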
The computer vision model trained on the image dataset is connected to the virtual environment in real time. The Virtual Users (VUsers) transmit real-time captures of the space to the computer vision model, which calculates aesthetic validity from the trained model. If the model judges that the captured spatial image is aesthetic enough, a signal is sent back to the VUser, who sits down in front of the relevant position and begins appreciating the scene.
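A hedged sketch of this real-time loop is shown below. It assumes a hypothetical VUser object exposing capture_view(), sit_and_appreciate(), and continue_exploring(), and an assumed score threshold; none of these names or values come from the project itself.

```python
# Sketch of the real-time loop between the virtual environment and the
# vision model. The vuser helpers and the 0.8 threshold are hypothetical.
import numpy as np

AESTHETIC_THRESHOLD = 0.8  # assumed cutoff for "aesthetic enough"

def step(vuser, model):
    frame = vuser.capture_view()                                   # real-time spatial capture
    score = float(model.predict(frame[np.newaxis], verbose=0)[0, 0])
    if score >= AESTHETIC_THRESHOLD:
        vuser.sit_and_appreciate()                                 # stop and appreciate the scene
    else:
        vuser.continue_exploring()                                 # keep wandering autonomously
    return score
```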
The learning model uses a large amount of image data to generalize subjective aesthetic judgment criteria into a shared standard. The dataset groups architects’ works into two classes, ‘straight line’ and ‘curve.’ The ‘straight line’ class includes images of well-known buildings by architects such as Peter Eisenman and Richard Meier, while the ‘curve’ class includes representative buildings by architects such as Zaha Hadid and Frank Gehry. To extract the form of the images in the dataset, our team applies image processing and a sequential neural network model that simplify each form and distinguish the images across multiple perspectives.
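The exact image-processing step is not detailed in the text. One plausible way to "simplify the form", sketched below with OpenCV, is to reduce each image to its dominant edges so the classifier sees the underlying straight-line or curved geometry rather than texture or color; the blur kernel and Canny thresholds are illustrative assumptions.

```python
# Possible preprocessing sketch (assumed, not the project's exact pipeline):
# reduce a building photo to its edge structure before training.
import cv2

def simplify_form(image_path, size=(224, 224)):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, size)
    img = cv2.GaussianBlur(img, (5, 5), 0)   # suppress fine texture and noise
    edges = cv2.Canny(img, 50, 150)          # keep the building's outline geometry
    return edges
```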
If the observed scene is consensually aesthetic, the agent calls the others to share the moment and they appreciate it together.
Epilogue | Here is the full video introducing the digital application of the computer vision-embedded agent exploring the aesthetics of architecture (KOR).
Contact
Mind Computation Lab
© Mind Computation Lab