What this means is that you can write Kinect-based gadgets that work from any web page. At the moment the software is pre-alpha and provides both a low-level interface to the Kinect and a high-level gesture recognition API.
The high-level API provides robust hand detection but needs work on more general gesture recognition. The API can recognize the following:
Presence of hand (registration)
Removal of hand (unregistration)
Large swipe up/down/left/right
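The project's actual interface isn't documented here, so the names below are assumptions, but an event-based gesture API of this kind might be wired up roughly like this (a minimal sketch with a stand-in dispatcher in place of the real sensor source; event names such as "handRegister" and "swipeLeft" are hypothetical):

```javascript
// Minimal event dispatcher standing in for the browser-side gesture
// source. Real handler and event names are assumptions, not the
// project's documented API.
class GestureSource {
  constructor() {
    this.handlers = {};
  }
  // Register a handler for a named gesture event.
  on(event, handler) {
    (this.handlers[event] = this.handlers[event] || []).push(handler);
  }
  // Fire a gesture event to all registered handlers.
  emit(event, detail) {
    (this.handlers[event] || []).forEach((h) => h(detail));
  }
}

const kinect = new GestureSource();
const log = [];

// Handlers for the gestures listed above.
kinect.on("handRegister", () => log.push("hand detected"));
kinect.on("handUnregister", () => log.push("hand removed"));
kinect.on("swipeLeft", () => log.push("swipe left"));

// Simulate the sensor firing events.
kinect.emit("handRegister");
kinect.emit("swipeLeft");
kinect.emit("handUnregister");

console.log(log.join(", "));
// → "hand detected, swipe left, hand removed"
```

A web page using such an API would simply attach handlers for the gestures it cares about and react to them as they arrive, much like ordinary DOM events.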
If you want to see the sort of thing it might be used for, take a look at the video below:
In my opinion it looks good, but I foresee lots of arm ache and perhaps even a new ailment to displace carpal tunnel syndrome as the number one computer-use hazard.
The code is open source and you can get it from GitHub.