Development of play behavior model of learning environment simulation
Principal Investigator Jin Lee (Master's Thesis Study, Inha University)
Advisor Professor Seung Wan Hong, Inha University
Related Publication Lee, J., & Hong, S. W. (2023). Developing the Reinforcement-Learning Child Agents for Measuring Play and Learning Performance in Kindergarten Design. In Proceedings of the 41st International Conference on eCAADe. https://doi.org/10.52842/conf.ecaade.2023.1.069
Funding details This study was supported by the National Research Foundation of Korea (NRF) grant funded by the Ministry of Education (Grant Number NRF-2021R1A2C1004608), and the Korea Agency for Infrastructure Technology Advancement (KAIA) grant funded by the Ministry of Land, Infrastructure and Transport (Grant Number RS-2021-KA163269).
This study was awarded the First Prize at the 2023 Best Thesis Award hosted by the Architectural Institute of Korea.
A patent application for this simulator has been filed (Korean Patent Application No. 10-2023-0086178, Korean Intellectual Property Office).
The Finite State Machine (FSM), the prevailing behavior-modeling technique in human behavior simulation, reproduces error-free, mutual interaction among agents. However, it is limited in generating emergent, varied behavioral responses. As a new approach to a more responsive behavior model, this project developed a reinforcement learning (RL) agent: an artificially intelligent creature that learns appropriate behavior by itself in a virtually constructed environment. This behavior model computes physics-based motor skills and generates unstructured behaviors through precise parameterization of human and environmental factors, enabling the agent to respond to subtle variations in the given environmental conditions. The project applies the RL-powered behavior model to children's play behaviors in order to assess children's physical and social development and their risk of fatal injury during play, thereby measuring learning-environment design performance. The developed behavior model was then applied in an agent-based simulation. Through experimentation in an empirical learning-environment design, this project confirms that the agent model successfully adapts to the given environmental settings and generates more localized and subtle behavioral responses.
Pre-made animation-data-based behavior transition (FSM)
RL agent's behavior transition
V(a) = V(a) + α × (r − V(a))
This update rule shows how the agent learns from its environment: the value estimate V(a) of action a moves toward the observed reward r at learning rate α. Compared with using pre-made animation data, this lets the RL agent choose actions flexibly and adaptively, responding to variations in the environment.
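The update rule can be illustrated with a minimal sketch; the reward value and the scenario in the comments are illustrative assumptions, not the simulator's actual play types or reward scheme.

```python
# Minimal sketch of the incremental value update V(a) = V(a) + α × (r − V(a)).
# The reward scenario below is a hypothetical illustration.

def update_value(value: float, reward: float, alpha: float) -> float:
    """Move the value estimate of an action toward the observed reward."""
    return value + alpha * (reward - value)

# The agent starts with no preference (V = 0), then repeatedly tries an action
# that earns reward r = 1 (e.g., its foot keeps touching the slope).
v = 0.0
alpha = 0.5
for _ in range(5):
    v = update_value(v, reward=1.0, alpha=alpha)
# v converges toward the reward: 0.5, 0.75, 0.875, 0.9375, 0.96875
```

With a small α the estimate changes slowly but smooths out noisy rewards; with α close to 1 it tracks the most recent reward almost exactly.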
Step 1 | Converting the character into an agent with 16 segmented joints, so that each physical parameter can be controlled for various play types.
Step 2 | Conducting test iterations that explore ergonomic and cognitive responses to implement physical and social play behaviors.
Step 3 | Training!
Algorithm 1: RewardFunctionForTraining
function OnActionReceived(actionBuffer):
    a = actionBuffer.ContinuousActions
    i = 0
    for each bodyPart in bodyParts:
        bodyPart.JointRotation(a[i++], a[i++], a[i++])

float CalculateRewardForPhysicalPlay():
    if bodyParts[foot].touchingSlopes:
        reward = 1
    else:
        reward = -1
    return reward

float CalculateRewardForSocialPlay():
    reward = 0
    if parameters.TeamId == Team.AgentA:
        if Collision.Ball.BodyParts[foot]:
            reward = 0.2
        if Collision.Ball.CompareTag("Goal"):
            reward = 1
    return reward
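The two reward terms above can be sketched as plain functions. The boolean inputs stand in for the simulator's collision checks (foot–slope contact, foot–ball contact, ball–goal contact); they are assumptions for illustration, not the simulator's actual API.

```python
# Runnable sketch of the two reward terms in Algorithm 1, with the simulator's
# collision checks replaced by hypothetical boolean inputs.

def physical_play_reward(foot_touching_slope: bool) -> float:
    """+1 while the agent keeps a foot on the slope, -1 otherwise."""
    return 1.0 if foot_touching_slope else -1.0

def social_play_reward(foot_hit_ball: bool, ball_hit_goal: bool) -> float:
    """Small reward for ball contact; full reward once the ball reaches the goal."""
    if ball_hit_goal:
        return 1.0
    if foot_hit_ball:
        return 0.2
    return 0.0
```

Penalizing the agent (−1) whenever its foot leaves the slope pushes it to stay on the climbing surface, while the graded social reward (0.2 for contact, 1 for a goal) shapes the longer kick-then-score behavior through intermediate feedback.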
Last step | Integrating the behaviors into the simulation with analytics for measuring learning (supportability for physical and social play) and safety, and conducting an implementation test in an authentic children's learning environment.
Contact
© Mind Computation Lab