Sunday, March 16, 2014

Kinect v2 developer preview + OpenCV 2.4.8: depth data

This time, I'd like to share code that accesses depth data through the current Kinect v2 developer preview API using simple polling, and displays it with OpenCV. The procedure is almost the same as accessing a color frame.

In the current API, depth data is no longer packed together with the player index (called the body index in the Kinect v2 API).

Disclaimer:
This is based on preliminary software and/or hardware. Software, hardware, APIs are preliminary and subject to change.

#include <iostream>
#include <sstream>

#include <Windows.h>
#include <Kinect.h>

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/contrib/contrib.hpp>

inline void CHECKERROR(HRESULT n) {
    if (!SUCCEEDED(n)) {
        std::stringstream ss;
        ss << "ERROR " << std::hex << n << std::endl;
        std::cin.ignore();
        std::cin.get();
        throw std::runtime_error(ss.str().c_str());
    }
}

// Safe release for interfaces
template <class Interface>
inline void SAFERELEASE(Interface *& pInterfaceToRelease) {
    if (pInterfaceToRelease != nullptr) {
        pInterfaceToRelease->Release();
        pInterfaceToRelease = nullptr;
    }
}

IDepthFrameReader* depthFrameReader = nullptr; // depth reader

void processIncomingData() {
    IDepthFrame *data = nullptr;
    IFrameDescription *frameDesc = nullptr;
    HRESULT hr = -1;
    UINT16 *depthBuffer = nullptr;
    USHORT nDepthMinReliableDistance = 0;
    USHORT nDepthMaxReliableDistance = 0;
    int height = 424, width = 512;

    hr = depthFrameReader->AcquireLatestFrame(&data);
    if (SUCCEEDED(hr)) hr = data->get_FrameDescription(&frameDesc);
    if (SUCCEEDED(hr)) hr = data->get_DepthMinReliableDistance(
        &nDepthMinReliableDistance);
    if (SUCCEEDED(hr)) hr = data->get_DepthMaxReliableDistance(
        &nDepthMaxReliableDistance);

    if (SUCCEEDED(hr)) {
        if (SUCCEEDED(frameDesc->get_Height(&height)) &&
            SUCCEEDED(frameDesc->get_Width(&width))) {
            depthBuffer = new UINT16[height * width];
            hr = data->CopyFrameDataToArray(height * width, depthBuffer);
            if (SUCCEEDED(hr)) {
                cv::Mat depthMap = cv::Mat(height, width, CV_16U, depthBuffer);
                cv::Mat img0 = cv::Mat::zeros(height, width, CV_8UC1);
                cv::Mat img1;
                double scale = 255.0 / (nDepthMaxReliableDistance - 
                    nDepthMinReliableDistance);
                depthMap.convertTo(img0, CV_8UC1, scale);
                applyColorMap(img0, img1, cv::COLORMAP_JET);
                cv::imshow("Depth Only", img1);
            }
        }
    }
    if (depthBuffer != nullptr) {
        delete[] depthBuffer;
        depthBuffer = nullptr;
    }
    SAFERELEASE(frameDesc);
    SAFERELEASE(data);
}

int main(int argc, char** argv) {
    HRESULT hr;
    IKinectSensor* kinectSensor = nullptr;     // kinect sensor

    // initialize Kinect Sensor
    hr = GetDefaultKinectSensor(&kinectSensor);
    if (FAILED(hr) || !kinectSensor) {
        std::cout << "ERROR hr=0x" << std::hex << hr
            << "; sensor=" << kinectSensor << std::endl;
        return -1;
    }
    CHECKERROR(kinectSensor->Open());

    // initialize depth frame reader
    IDepthFrameSource* depthFrameSource = nullptr;
    CHECKERROR(kinectSensor->get_DepthFrameSource(&depthFrameSource));
    CHECKERROR(depthFrameSource->OpenReader(&depthFrameReader));
    SAFERELEASE(depthFrameSource);

    while (depthFrameReader) {
        processIncomingData();
        int key = cv::waitKey(10);
        if (key == 'q'){
            break;
        }
    }

    // de-initialize Kinect Sensor
    CHECKERROR(kinectSensor->Close());
    SAFERELEASE(kinectSensor);
    return 0;
}
Results in my messy room:

If we modify the scaling, for example to emphasize the 500–900 mm range:
                nDepthMaxReliableDistance = 900;
                nDepthMinReliableDistance = 500;
                for (int i = 0; i < height; i++) {
                    for (int j = 0; j < width; j++) {
                        // Work in a signed int: UINT16 would wrap around
                        // below zero, so shift first, then clamp the
                        // result to [0, max - min].
                        int val = depthMap.at<UINT16>(i, j)
                                - nDepthMinReliableDistance;
                        int band = nDepthMaxReliableDistance
                                 - nDepthMinReliableDistance;
                        val = (val > band ? band : val);
                        val = (val < 0 ? 0 : val);
                        depthMap.at<UINT16>(i, j) = (UINT16)val;
                    }
                }

                double scale = 255.0 / (nDepthMaxReliableDistance - 
                    nDepthMinReliableDistance);
                depthMap.convertTo(img0, CV_8UC1, scale);
                applyColorMap(img0, img1, cv::COLORMAP_WINTER);
                cv::imshow("Depth Only", img1);
It may look like this:
That's all :)

12 comments:

  1. Hey, quick question. Wondering if you have used the Kinect v2 with VM Ware successfully? I'm using a MacBook Pro Retina.

  2. Hi @Chocobot, unfortunately I have never used Kinect v2 in a virtual machine.

  3. Hi, thanks for that. With your modified scaling code above, I get "expected 1 argument, received 2 arguments" for depthMap.at.
    Any idea why?

    Replies
    1. Hi, thanks for the question. It was my mistake when posting the code: the < and > symbols were not displayed correctly. It should be written as: depthMap.at<UINT16>(i,j)

  4. Can you use the Kinect SDK out of the box with OpenCV? How about OpenNI? The SDK is a free download, right?

    Replies
    1. Sorry for the very late reply. I think you already got the answers. I have not tried OpenNI. Yes, the SDK from Microsoft is a free download and we can use it with OpenCV.

  5. Wow, great info here. Thanks for detailing the steps for Kinect for Windows!

  6. Great example. Thanks for the help. Just a heads up: for v2 of the Kinect sensor/SDK, the Interface class isn't there (or at least not within those files); you need to replace it with the IUnknown class. Just for future-proofing! Keep up the great work!

    Replies
    1. Also, I forgot to mention: contrib.hpp is deprecated, and I feel it's easier to just use opencv.hpp instead of adding the exact includes. The only problem will be an initial IntelliSense load (if you use VS), but other than that I don't think there should be any significant increase in build and run time.

  7. Hi, would you tell me how I can use your code, and which file did you modify? I am using Kinect v2 in Linux.

    Replies
    1. Hi, sorry for the late reply. Unfortunately, the code above uses the Microsoft Kinect SDK and thus only works on Windows (not Linux). To use Kinect v2 on Linux, you probably need this one: https://github.com/OpenKinect/libfreenect2
