Reading through gestures – OpenKinect+Processing+Text2Speech

I’ve been playing with a Microsoft Kinect for some time. I want to use it to provide useful interaction in environments that are traditionally difficult for the mouse; public exhibitions and presentations are examples of situations where the Kinect might do a good job. The above is a simple interaction test that exercises the integration of several pieces: the Kinect and its open source drivers (obviously), the Processing language and its Java wrappers, the Text2Speech capabilities of Mac OS X, and real-time reading of news (in this case from the NYT). This is just a first demo. In the future I’d like to see integration with other technologies like OpenCV for face recognition, OpenNI skeleton tracking, and multitouch using TISCH, but let’s take one step at a time.
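The news-to-speech half of the pipeline is easy to sketch. Mac OS X ships a command-line `say` tool, so from Java (and therefore from a Processing sketch) you can pull headline titles out of an RSS feed and hand each one to `say`. The snippet below is only a minimal illustration of that idea, not the code behind the demo: the class name, the regex-based title extraction, and the inline sample feed are all my own assumptions.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch: extract headline titles from an RSS document and
// speak each one with the Mac OS X `say` command. Names are illustrative.
public class HeadlineReader {

    // Pull the contents of every <title> element out of an RSS string.
    // A real sketch would use a proper XML parser; a regex keeps this short.
    static List<String> extractTitles(String rss) {
        List<String> titles = new ArrayList<String>();
        Matcher m = Pattern.compile("<title>(.*?)</title>", Pattern.DOTALL).matcher(rss);
        while (m.find()) {
            titles.add(m.group(1).trim());
        }
        return titles;
    }

    // Speak one headline using the built-in Mac OS X text-to-speech tool.
    static void speak(String text) throws Exception {
        new ProcessBuilder("say", text).start().waitFor();
    }

    public static void main(String[] args) throws Exception {
        // Inline sample standing in for a fetched NYT RSS feed.
        String rss = "<rss><channel><title>NYT</title>"
                   + "<item><title>First headline</title></item>"
                   + "<item><title>Second headline</title></item>"
                   + "</channel></rss>";
        for (String title : extractTitles(rss)) {
            System.out.println(title);
            // speak(title); // uncomment when running on Mac OS X
        }
    }
}
```

In the actual demo a gesture detected by the Kinect would trigger the fetch-and-speak step instead of `main` running it directly.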