While some games out there might give you headaches, Microsoft Research is working with a group of London super brains to enable touchless viewing and manipulation of images during vascular surgery, with the help of Kinect.
With this tech, complex aneurysm procedures become easier and safer: surgeons can more easily maintain a sterile environment, since they no longer need to handle the equipment themselves or rely on fallible human assistants to manipulate the visual-aid displays.
“Until recently I was shouting out across the operating theatre to tell someone to go up, down, left, right,” explains Doctor Tom Carrell. “But with the Kinect I’m able to get the position that I want quickly – and also without me having to handle non-sterile things like a keyboard or mouse during the procedure.”
The only downside to this innovation is that you’d have to be some sort of brain surgeon to use it.
This just in from the incredible whiz-kids over on the Kinect for Windows team: they’ve just released version 1.5 of the Kinect for Windows runtime and SDK.
They’ve added more capabilities to help you brilliant developers make cool things!
- Face tracking, with a real-time 3D mesh of facial features that follows head position, the location of the eyebrows, and more
- Seated skeletal tracking!
- Improved performance and data-quality enhancements that permit RGB and depth data to be mapped together (e.g. “green screen” effects)
- All new samples!
- Many more (for full details read the Team Blog post here)
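The real SDK exposes the RGB-to-depth mapping through its own C++/C# APIs, but the idea behind the “green screen” effect is easy to sketch. Below is a rough, hypothetical illustration in Python with NumPy: it assumes a depth frame already registered to the color frame (which the SDK’s mapping provides), keeps only pixels within a “player” depth band, and paints everything else green. The function name, thresholds, and synthetic frame are all made up for illustration.

```python
import numpy as np

def green_screen(color, depth, near_mm=500, far_mm=1500):
    """Keep color pixels whose depth falls inside [near_mm, far_mm];
    replace everything else with a solid green backdrop.
    Assumes depth is already registered pixel-for-pixel to color."""
    mask = (depth >= near_mm) & (depth <= far_mm)   # "player" depth band
    out = np.zeros_like(color)
    out[..., 1] = 255                               # green background
    out[mask] = color[mask]                         # composite the player
    return out

# Tiny synthetic frame: a 4x4 image whose center pixels are "close".
color = np.full((4, 4, 3), 200, dtype=np.uint8)
depth = np.full((4, 4), 3000, dtype=np.uint16)      # background at ~3 m
depth[1:3, 1:3] = 1000                              # "player" at ~1 m

composited = green_screen(color, depth)
```

On a real frame the depth band would typically come from the skeletal tracker rather than fixed thresholds, but the compositing step is the same masked copy.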
They’ve also made Kinect for Windows hardware available in Hong Kong, South Korea, Singapore, and Taiwan, with 15 more countries starting next month! To see the full list of 31 countries where Kinect for Windows will be available, check out the Kinect for Windows blog!
We’ve got a ton of space geniuses here at Microsoft. Our team isn’t the only one working to make Kinect smarter, faster, and stronger. Microsoft Research has some pretty incredible whiz kids who have recently used what they learned from Kinect to help innovate a new gesture-based motion “controller” that doesn’t actually need any sort of controller, including the Kinect, to work!
Project SoundWave utilizes the existing speakers and microphone built into a PC or notebook to detect your movement. The speakers emit inaudible tones in the 18 to 22 kHz range (your cats and dogs are safe). SoundWave then uses the same system’s microphone to pick up these tones as they are bounced back by your flailing hands and arms. The tones are passed through a detection algorithm, and any frequency shifts detected are processed to figure out what in the world you’re trying to do!
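The physics at work here is the Doppler effect: a hand moving toward the microphone compresses the reflected tone to a slightly higher frequency, and a hand moving away stretches it lower. As a toy sketch of that idea (not SoundWave’s actual pipeline, which is far more sophisticated), the snippet below finds the strongest frequency in a window of microphone samples and maps the shift from the emitted pilot tone to a coarse direction. The tone frequency, window size, threshold, and the synthetic “echo” signal are all assumptions for illustration.

```python
import numpy as np

RATE = 44_100    # speaker/mic sample rate (Hz)
TONE = 20_000    # emitted pilot tone, inaudible to humans (Hz)
N = 4_096        # analysis window size (samples)

def dominant_freq(samples, rate=RATE):
    """Return the strongest frequency component of a mono sample window."""
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    return np.fft.rfftfreq(len(samples), 1 / rate)[np.argmax(spectrum)]

def classify(shift_hz, threshold=15):
    """Map a Doppler shift (Hz) to a coarse gesture direction."""
    if shift_hz > threshold:
        return "toward"   # hand approaching the laptop
    if shift_hz < -threshold:
        return "away"     # hand retreating
    return "still"

# Simulate a reflection Doppler-shifted up by 60 Hz (approaching hand).
t = np.arange(N) / RATE
echo = np.sin(2 * np.pi * (TONE + 60) * t)
shift = dominant_freq(echo) - TONE
print(classify(shift))  # prints "toward"
```

In a live system you would read overlapping windows from the microphone and track the shifted energy around the pilot tone rather than a single FFT peak, but the detect-shift-then-classify loop is the core of the trick.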
Want to see it in action? Watch the video below!
Take a look at what the space geniuses with Microsoft Research have done now! Yesterday, we told you about some of the amazing Kinect-related research projects that are starting to be shared. Beamatron uses a projector, a Kinect, and a pan-tilt moving head to create an augmented-reality concept that lets the user blend the physical world with a digital one.
This doesn’t mean you’re going to have that Ryan Gosling hologram you’ve always dreamed of there to greet you when you get home, unfortunately. At least, not yet. In the demo provided by researchers Hrvoje Benko and Andy Wilson, they drove a virtual car over real-life ramps. The car even bumped into shoes and other objects while being driven around the room.
Another Gosling-free application provides a heads-up display for the user to notify them of events like a new Tweet or the latest post from their favorite blog, KinectShare.
Take a look at the video! How would you want to use this technology? Share your G-rated ideas in the comments!