Today’s post presents a very basic implementation of a point cloud, essentially equivalent to the code in this previous post, which used the Microsoft Kinect SDK to bring the Kinect color image live onto a Maya image plane. For the color image, it was pretty straightforward to take the Kinect data and read it with the Maya API MImage class. This time we are interested in getting the depth information out of the Kinect sensor.
The Kinect SDK provides a function, NuiTransformDepthImageToSkeleton(), which does everything for you: it transforms the X, Y pixel coordinates of the depth image into X, Y, Z space coordinates. Nothing terrible for us to implement, except that the documentation is wrong in telling you to left shift the depth parameter by 3 bits (see the comment in the code below).
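To make that concrete, here is a minimal sketch of the call, assuming an arbitrary pixel at (160, 120) of a 320x240 depth frame; the 1500 mm value comes from the forum answer quoted in the code comments below:

USHORT depthInMillimeters =1500 ; // raw 16-bit value, straight from the depth frame
// Pass the value as-is, without the << 3 shift the documentation asks for
Vector4 pt =NuiTransformDepthImageToSkeleton (160, 120, depthInMillimeters) ;
// pt.z is now ~1.5f, i.e. the same distance expressed in meters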
Now the real issue for us is to get it displayed in Maya properly. There are a few options, and as an exercise we will play with all of them during this Kinect journey:
- the first one is to create a Maya locator: it has the advantage of being easy to implement, but it does not render, nor is it very fast, since you will need to redraw the locator very frequently.
- use the Maya particle system (dynamics or nDynamics): this one sounds promising, but beyond a certain number of particles there may be limits which will again impact Maya performance. It has the advantage of rendering, as well as being standard Maya (see the sketch after this list).
- use a specialized point cloud plug-in, like the AliceLabs Maya point cloud plug-in.
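As a quick illustration of the particle option, here is a minimal sketch, not code from the plug-in, showing how a point array like the one built below could be pushed into a classic particle object via the Maya API MFnParticleSystem class (emitPointCloudAsParticles is a hypothetical helper name):

#include <maya/MStatus.h>
#include <maya/MPointArray.h>
#include <maya/MFnParticleSystem.h>

// Hypothetical helper, not part of the plug-in: create a particleShape node
// and emit one particle per point of the cloud
MStatus emitPointCloudAsParticles (const MPointArray &points) {
	MFnParticleSystem particleFn ;
	particleFn.create () ; // creates a new particleShape node
	MStatus status =particleFn.emit (points) ; // one particle per position
	if ( status == MS::kSuccess )
		particleFn.saveInitialState () ; // keep the particles across a timeline rewind
	return (status) ;
}

Calling saveInitialState () records the emitted particles as the initial state, so they do not vanish when the simulation is reset.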
Here is the point cloud acquisition code:
MPointArray &KinectLocator::getPointCloud () {
	MPlug depthImageData (thisMObject (), KinectLocator::aDepthImageData) ;
	MPlug sizeData (thisMObject (), KinectLocator::size) ;
	int depthAddr =depthImageData.asInt () ;
	float multiplier =sizeData.asFloat () ;
	if ( depthAddr == 0 )
		return (mVertices) ;
	mVertices.clear () ;
	IFTImage *pDepthImage =(IFTImage *)depthAddr ;
	unsigned int depthWidth =pDepthImage->GetWidth (), depthHeight =pDepthImage->GetHeight () ;
	// Loop through the depth information
	USHORT *depthData =(USHORT *)pDepthImage->GetBuffer () ;
	for ( unsigned int y =0 ; y < depthHeight ; y++ ) {
		for ( unsigned int x =0 ; x < depthWidth ; x++, depthData++ ) {
			// usDepthValue
			//   Type: USHORT
			//   [in] The depth value in millimeters of the depth image pixel, shifted left by three bits.
			//   The left shift enables you to pass the value from the depth image directly into this function.
			//
			// But see http://social.msdn.microsoft.com/Forums/br/kinectsdknuiapi/thread/a657e563-b240-46e2-827d-02712b14ebc1
			//   You should pass the 16-bit value exactly as it appears in the depth frame, *without* applying a shift.
			//   d will be a short integer representing the distance in millimeters; s_z will be a float representing the
			//   same distance in meters. So if d is 1500, then s_z should be 1.5f.
			Vector4 realPoints ;
			if ( depthWidth == 320 )
				realPoints =NuiTransformDepthImageToSkeleton (x, y, *depthData) ;
			else if ( depthWidth == 640 )
				realPoints =NuiTransformDepthImageToSkeleton (x, y, *depthData, NUI_IMAGE_RESOLUTION_640x480) ;
			MPoint pt (realPoints.x * multiplier, realPoints.y * multiplier, realPoints.z * multiplier) ;
			mVertices.append (pt) ;
		}
	}
	return (mVertices) ;
}
Once you get the point array, you can display these points in Maya. Based on the LocatorLib code from the Plug-in of the Month available on Autodesk Labs, you modify the MPxLocatorNode::draw (M3dView &view, const MDagPath &path, M3dView::DisplayStyle style, M3dView::DisplayStatus status) method (or the myWireFrameDraw () / myShadedDraw () methods in that sample) as shown below, and you are done.
void KinectLocator::myWireFrameDraw () {
	MPointArray vertices =getPointCloud () ;
	glPointSize (2) ;
	glBegin (GL_POINTS) ;
	for ( unsigned int i =0 ; i < vertices.length () ; i++ )
		glVertex3d (vertices [i].x, vertices [i].y, vertices [i].z) ;
	glEnd () ;
}
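A side note on performance: glBegin () / glEnd () issues one call per point, which adds up with a 640x480 depth frame (307200 points per refresh). A possible variation, sketched below with a hypothetical myWireFrameDrawFast () method (you would need to declare it in the class as well), packs the coordinates into a flat float buffer and draws the whole cloud with a single glDrawArrays () call:

#include <vector>

// Hypothetical variation, not part of the original sample: draw the whole
// cloud in one glDrawArrays call instead of one glVertex call per point
void KinectLocator::myWireFrameDrawFast () {
	MPointArray vertices =getPointCloud () ;
	std::vector<GLfloat> buffer ;
	buffer.reserve (vertices.length () * 3) ;
	for ( unsigned int i =0 ; i < vertices.length () ; i++ ) {
		buffer.push_back ((GLfloat)vertices [i].x) ;
		buffer.push_back ((GLfloat)vertices [i].y) ;
		buffer.push_back ((GLfloat)vertices [i].z) ;
	}
	glPointSize (2) ;
	glEnableClientState (GL_VERTEX_ARRAY) ;
	glVertexPointer (3, GL_FLOAT, 0, buffer.empty () ? NULL : &buffer [0]) ;
	glDrawArrays (GL_POINTS, 0, (GLsizei)(buffer.size () / 3)) ;
	glDisableClientState (GL_VERTEX_ARRAY) ;
}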
Here, finally, are the results:
and from a different camera angle:
The data is still coming live from the Kinect sensor via the KinectDeviceNode node implemented in the previous post.
Now I need to start looking at interpreting gestures, to see what can be done to supplement Maya’s existing user interface features. Fun fun fun! :-)