Changing the way we design: Kinected Conference

Kinected Conference is a project by MIT students that aims to make the videoconferencing screen a more useful tool by integrating it with Kinect motion detection. The students demo their system in the video below; video quality is rough, and some of the features are clearly still in development, but it is still a fascinating technology.

Kinected Conference from Lining (Lizzie) Yao on Vimeo.

The four applications they are developing, "Talking to Focus", "Freezing Former Frames", "Privacy Zone" and "Spatial Augmenting Reality", each could be beneficial to our industry.

"Talking to Focus" - This feature improves video conferencing, especially calls involving multiple locations, each with multiple participants. Only the people currently speaking are in focus; above their images are word balloons that can carry a variety of useful information, links to important documents, and so on. ("Click the link above my head to download the specs I just mentioned...") One major limitation of video conferencing is that attention tends to settle on individuals based on their screen position rather than their activity, whereas face to face the eye gravitates toward the most active person; this feature directly addresses that problem.

"Freezing Former Frames" - In addition to augmenting the impact of "Talking to Focus", this feature allows participants to "pause" just themselves while they pull up important documents, step out of the room momentarily, or even just get over a sneezing attack without disrupting the conversation. Imagine how useful that would be in a sales call!

"Privacy Zone" - In the video this feature was the least successful of the ones demonstrated, but the idea is there and it will only improve. Essentially this puts up a curtain behind the speakers. Making a call from home, or haven't had time to clean the office? No problem. On the trade show floor and don't want the traffic to be a distraction? Solved. But beyond that, this introduces greenscreen capabilities. Instead of a blank white wall, throw up the building plans; literally walk the team through the designs, and use Kinect's gesture recognition to navigate, zoom, and highlight key features.
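The core of a depth-based "curtain" like this is simple: anything the sensor reports as farther away than the speakers gets swapped for a substitute background. Here is a minimal sketch of that idea in Python with NumPy; the function names, the 1.5 m cutoff, and the toy frame are all illustrative assumptions, not details from the project.

```python
import numpy as np

def privacy_mask(depth_mm, color, background, cutoff_mm=1500):
    """Replace everything farther than cutoff_mm with a substitute
    background (a blank 'curtain', or a rendered set of building plans)."""
    mask = depth_mm < cutoff_mm      # True where something is near the camera
    mask3 = mask[..., None]          # broadcast the mask over the RGB channels
    return np.where(mask3, color, background)

# Toy 4x4 frame: the center pixels are a "person" 1 m away,
# everything else is a wall 3 m away.
depth = np.full((4, 4), 3000)
depth[1:3, 1:3] = 1000
color = np.zeros((4, 4, 3), dtype=np.uint8)        # live camera image (black)
curtain = np.full((4, 4, 3), 255, dtype=np.uint8)  # white curtain

out = privacy_mask(depth, color, curtain)
```

Swap `curtain` for a rendered drawing and you have the green-screen-without-a-green-screen trick described above.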

"Spatial Augmenting Reality" - This has the most direct implications for construction. First, notice that the 3-D spatial recognition is good enough that the system can measure length. This becomes an inspection tool: point the camera at the wall panel, and everyone sees how the actual dimensions compare to the specified dimensions. Heck, this could probably generate an as-built overlay for BIM, allowing direct visual comparison. Second, imagine each of those blocks they were moving around the table linked to a BIM element. The software will probably evolve quickly to the point that the blocks are not even needed: move your hand to the on-screen location of an object, make the appropriate gesture, and manipulate as needed.
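The length measurement falls out of standard pinhole-camera math: each depth pixel can be back-projected into a 3-D point, and the distance between two points is just the Euclidean norm. A minimal sketch, assuming depth readings in millimetres and approximate Kinect-class intrinsics (the focal length and principal point below are illustrative values, not calibrated numbers from the project):

```python
import numpy as np

# Approximate depth-camera intrinsics for a 640x480 sensor (illustrative)
FX = FY = 585.0          # focal length in pixels
CX, CY = 320.0, 240.0    # principal point (image center)

def to_3d(u, v, depth_mm):
    """Back-project a depth pixel (u, v) into camera-space millimetres."""
    z = float(depth_mm)
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.array([x, y, z])

def measure_mm(p1, p2):
    """Euclidean distance between two back-projected (u, v, depth) points."""
    return float(np.linalg.norm(to_3d(*p1) - to_3d(*p2)))

# Two pixels picked on the edges of a panel, both about 2 m from the camera
left = (120, 240, 2000)   # (u, v, depth in mm)
right = (520, 240, 2000)
width = measure_mm(left, right)   # roughly 1.37 m for these toy inputs
```

Comparing a few such measurements against the specified dimensions is exactly the as-built-versus-design check described above.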

What really blows me away about this, though, is that it's based on consumer-level, widely available video game technology. I could walk down the street, drop $200, and set this up in my office. We are on the verge of complete design tool transparency, allowing us to interact directly with our designs.

I'm excited to see what happens next!