
Steelcase Studies in Human/Environment Interactions

#consumer electronics #human-machine interaction #learning #interfaces #social science #machine learning #affective computing #ethics #industry #interactive #computer science #artificial intelligence #design #creativity #history #technology #motion capture

Steelcase Study, 2018-2020


A project using two classroom environments with 24 embedded web cameras and networked data-capture infrastructure, processed with OpenPose, an open-source pose-estimation algorithm, to study active learning and create a dataset of human/environment interactions.


As interest in reverse-classroom dynamics and collaborative learning environments continues to grow in education, learning spaces must adapt to support collaboration, creativity, and human wellbeing.


In collaboration with the BUILD research team, a faculty group sponsored by the education division of the furniture company Steelcase, I worked on a two-year research project to create a dataset of human/environment interactions.


Our team developed an experiment by converting an otherwise standard classroom into a cutting-edge, active learning laboratory, updating its technology to research the pedagogical function of active learning from an architectural and design perspective.


Using computer vision algorithms for pose estimation, we gathered data on how students engaged with different types of spaces, with the aim of measuring the effects of different environments on learning and social interaction.

Our team found that computer vision techniques were a useful means of evaluating the dynamics of learning, and that sensor-based environments could offer valuable insights into the nature of human/environment interactions.


Our modern active learning environment incorporated a multi-camera computer vision capture system, producing 3D reconstructions of skeletal data that represent human behavior as 3D animations.


I designed a custom 3D-printed multi-camera capture mechanism to facilitate the capture. We embedded a network of cameras in the ceilings of the classrooms, enabling us to capture interactions from multiple angles. We then triangulated the data using ArUco markers to establish a ground-truth correspondence between the cameras, producing “skeletal data” of human/environment interactions, reconstructed as spatially explorable 3D animations.
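As a rough illustration of this approach, the sketch below shows how ArUco markers can be detected with OpenCV and how matched 2D keypoints from two calibrated cameras can be triangulated into 3D. The OpenCV calls are real, but the two-camera pairing and the projection matrices are simplifying assumptions; the actual system networked many ceiling cameras.

```python
# Minimal sketch (not the project's actual pipeline): ArUco detection and
# two-view triangulation with OpenCV. Requires opencv-contrib-python.
import cv2
import numpy as np

# Marker dictionary for the printed ArUco fiducials (assumed 4x4, 50 ids).
aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

def detect_marker_centers(gray):
    """Return {marker_id: center_xy} for all ArUco markers in a grayscale frame."""
    corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict)
    if ids is None:
        return {}
    return {int(i): c.reshape(4, 2).mean(axis=0)
            for i, c in zip(ids.flatten(), corners)}

def triangulate(P1, P2, pts1, pts2):
    """Triangulate matched 2D points (N, 2 in each view) into 3D points (N, 3).

    P1, P2 are 3x4 projection matrices, assumed to come from a prior
    per-camera calibration step."""
    X_h = cv2.triangulatePoints(P1, P2,
                                pts1.T.astype(np.float64),
                                pts2.T.astype(np.float64))
    return (X_h[:3] / X_h[3]).T  # homogeneous -> Euclidean
```

Shared marker IDs visible to two cameras give matched 2D points, which anchor the ground-truth correspondence between views; the same triangulation then applies to matched body keypoints.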

Grant Proposal

Making of: Active Learning Lab with Ubiquitous Sensing

Making of: OpenPose Tests

Computer Vision for Spatial Interactions

Our team designed and ran a series of studies in two conditions. The control condition used a conventional learning environment.


The second condition used an active learning design.


Both environments were outfitted with a multi-camera capture system.

Multi-Camera Capture

We processed the data with OpenPose, a computer vision algorithm that detects body, foot, hand, and facial keypoints in single-camera images.
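For context, a minimal sketch of consuming OpenPose's output: when run with its --write_json flag, OpenPose emits one JSON file per frame, with each detected person's keypoints stored as flat [x, y, confidence] triples (25 body keypoints under the default BODY_25 model). The file name in the usage comment is hypothetical.

```python
# Sketch of parsing OpenPose per-frame JSON output into numpy arrays.
import json
import numpy as np

def load_openpose_frame(path):
    """Return a list of (25, 3) arrays, one per detected person:
    columns are x, y, and detection confidence."""
    with open(path) as f:
        frame = json.load(f)
    people = []
    for person in frame.get("people", []):
        keypoints = np.array(person["pose_keypoints_2d"], dtype=float)
        people.append(keypoints.reshape(-1, 3))
    return people

# Hypothetical file name following OpenPose's default naming pattern:
# poses = load_openpose_frame("cam03_000000000042_keypoints.json")
```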


Our dataset was derived from 32 classroom sessions, each captured and processed to reconstruct a 3D skeletal simulation of the session.
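The sketch below illustrates one way such per-frame skeletons might be assembled into session-level arrays; the directory layout, naming scheme, and .npz format are illustrative assumptions, not the dataset's published schema.

```python
# Sketch: stack per-frame OpenPose skeletons into one session-level array.
import glob
import numpy as np

def build_session_array(json_dir, load_frame):
    """Stack per-frame keypoints into a (frames, people, 25, 3) array,
    padding frames where fewer people were detected with NaNs."""
    paths = sorted(glob.glob(f"{json_dir}/*_keypoints.json"))
    frames = [load_frame(p) for p in paths]
    max_people = max((len(f) for f in frames), default=0)
    out = np.full((len(frames), max_people, 25, 3), np.nan)
    for t, people in enumerate(frames):
        for i, keypoints in enumerate(people):
            out[t, i] = keypoints
    return out

# Hypothetical usage with the loader sketched above:
# session = build_session_array("session01/cam03", load_openpose_frame)
# np.savez_compressed("session01_cam03.npz", keypoints=session)
```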

Human/Environment Datasets

Our primary interest was how architecture (i.e., active learning vs. conventional) is associated with student learning outcomes.
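As a hypothetical example of the kind of analysis such a dataset supports, the sketch below derives a simple movement proxy per session and compares the two conditions with Welch's t-test. Both the metric and the test are illustrative assumptions, not the study's reported methodology.

```python
# Sketch: compare a movement proxy across the two architectural conditions.
import numpy as np
from scipy import stats

def movement_score(session):
    """Mean frame-to-frame keypoint displacement for a
    (frames, people, 25, 3) session array; NaN padding is ignored."""
    xy = session[..., :2]                                # drop confidence column
    step = np.linalg.norm(np.diff(xy, axis=0), axis=-1)  # per-keypoint displacement
    return float(np.nanmean(step))

def compare_conditions(active_sessions, conventional_sessions):
    """Welch's t-test on per-session movement scores across conditions."""
    a = [movement_score(s) for s in active_sessions]
    c = [movement_score(s) for s in conventional_sessions]
    return stats.ttest_ind(a, c, equal_var=False)
```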


The project resulted in a rich dataset of human/environment behaviors, a useful resource for researchers interested in group dynamics and the effects of architectural design on experiential learning.

Steelcase & IMRSV Audio Transcription Project

Steelcase Project Presentation
