kinect point cloud
the texture here just uses the saturation value (close range), manually aligned to the depth. that isn't really right: the color image needs to be projected onto the depth map.
note that this uses hector's default 'gamma' function, but what really needs to happen is taking the reciprocal 1/(1-x), since i'm pretty sure the 11-bit image returned by the depth sensor is a disparity image (depth is proportional to the inverse of disparity). edit: i just posted a function for converting to meters here github.com/OpenKinect/openkinect/wiki/Imaging-Information
if you want to see this in realtime, swap the glview.c in your copy of the github openkinect project for the glview.c posted on the OF board: www.openframeworks.cc/forum/viewtopic.php?f=14&t=4947...