
Friday, August 18, 2017

Realtime depth estimation using monocular camera

Just for fun, I made a small modification to monodepth_simple.py from the GitHub repository of the CVPR 2017 paper "Unsupervised Monocular Depth Estimation with Left-Right Consistency" (original source code: https://github.com/mrharicot/monodepth) so that it takes input images directly from a camera. To retrieve images from the camera without losing much performance, I referred to the articles from here and here.
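I won't reproduce util.py here, but the usual trick for grabbing camera frames without slowing down the main loop is to read them in a background thread, so that the latest frame is always ready when the network asks for one. A minimal sketch of that idea (the ThreadedCamera class and its methods are my own naming, not necessarily how util.py does it):

import threading
import cv2

class ThreadedCamera(object):
    # Continuously grab frames in a background thread and keep only the latest one.
    def __init__(self, device=0):
        self.cap = cv2.VideoCapture(device)
        self.frame = None
        self.running = True
        self.lock = threading.Lock()
        self.thread = threading.Thread(target=self._update)
        self.thread.daemon = True
        self.thread.start()

    def _update(self):
        while self.running:
            ok, frame = self.cap.read()
            if ok:
                with self.lock:
                    self.frame = frame

    def read(self):
        # return a copy of the most recent frame (or None if nothing has been grabbed yet)
        with self.lock:
            return None if self.frame is None else self.frame.copy()

    def stop(self):
        self.running = False
        self.thread.join()
        self.cap.release()

The main loop then just calls read(), resizes the frame to the network's input size, and runs the session on it.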

I used i7 6700K + Nvidia GTX 970 + Logicool webcam C910:


Although the model was trained on the KITTI dataset, whose scenes are quite different from the view from my apartment window, it still worked to a certain extent. If you are interested, you can download my modification, monodepth_opencv3.py, and a small utility for retrieving the camera image, util.py.

To run the modification, download the original code and the ready-to-use model by following the instructions from the original author (here). Then copy monodepth_opencv3.py and util.py into the same folder as monodepth_simple.py. Finally, execute:
$ python ./monodepth_opencv3.py --checkpoint_path ./model/model_kitti

Note that it requires Python 2.7, TensorFlow 1.x, OpenCV >= 2.4 (I used OpenCV 3.2), and of course, a camera.

That's all :)

Tuesday, August 15, 2017

TensorFlow: could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR

When running inference with TensorFlow's object-detection model on my own dataset, I got the following error (Nvidia GTX 970, CUDA 8, TensorFlow 1.2.1 installed through pip, Ubuntu 16.04):
2017-08-15 21:18:06.254989: E tensorflow/stream_executor/cuda/cuda_dnn.cc:359] could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR
2017-08-15 21:18:06.255027: E tensorflow/stream_executor/cuda/cuda_dnn.cc:326] could not destroy cudnn handle: CUDNN_STATUS_BAD_PARAM
2017-08-15 21:18:06.255036: F tensorflow/core/kernels/conv_ops.cc:671] Check failed: stream->parent()->GetConvolveAlgorithms(&algorithms)
The solution from https://github.com/tensorflow/tensorflow/issues/6698 fixed my problem: enable the allow_growth GPU option so that TensorFlow allocates GPU memory incrementally instead of reserving nearly all of it up front, which seems to be what prevents cuDNN from creating its handle:
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
That's all :)

Sunday, July 30, 2017

Installing Chrome on Ubuntu (16.04)

I recently installed Linux alongside Windows (dual boot). When I tried to install Chrome by downloading the latest installer from here, I got the following error:
$ sudo dpkg -i ./google-chrome-stable_current_amd64.deb
(Reading database ... 219479 files and directories currently installed.)
Preparing to unpack .../google-chrome-stable_current_amd64.deb ...
Unpacking google-chrome-stable (60.0.3112.78-1) over (60.0.3112.78-1) ...
dpkg: dependency problems prevent configuration of google-chrome-stable:
 google-chrome-stable depends on libappindicator1; however:
  Package libappindicator1 is not installed.

dpkg: error processing package google-chrome-stable (--install):
 dependency problems - leaving unconfigured
Processing triggers for desktop-file-utils (0.22-1ubuntu5.1) ...
Processing triggers for gnome-menus (3.13.3-6ubuntu3.1) ...
Processing triggers for bamfdaemon (0.5.3~bzr0+16.04.20160824-0ubuntu1) ...
Rebuilding /usr/share/applications/bamf-2.index...
Processing triggers for mime-support (3.59ubuntu1) ...
Processing triggers for man-db (2.7.5-1) ...
Errors were encountered while processing:
 google-chrome-stable
Thanks to the blog post from here, the solution is straightforward. To satisfy the missing dependencies, execute the following command and then re-run the dpkg command above.
$ sudo apt-get -f install
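Alternatively, on Ubuntu 16.04 apt itself can install a local .deb and pull in its dependencies in one step, so the two commands above collapse into:
$ sudo apt install ./google-chrome-stable_current_amd64.deb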
That's all :)

Monday, May 22, 2017

Tensorflow 1.1 (GPU) on Windows with cuDNN 6.0: No module named '_pywrap_tensorflow'

Short note: if you have trouble importing TensorFlow 1.1 (GPU) on Windows with cuDNN 6.0, try using cuDNN 5.1 instead and make sure the MSVC++ 2015 redistributable is installed.

Reference: https://github.com/tensorflow/tensorflow/issues/7705
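To check that the fix worked, a quick sanity test from the Python prompt is enough; if the import succeeds and a session can be created, the DLL loading problem is gone:
>>> import tensorflow as tf
>>> tf.__version__          # should report 1.1.0
>>> sess = tf.Session()     # should log the GPU being picked up, with no _pywrap_tensorflow error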

That's all :)

Saturday, May 13, 2017

Update Anaconda Navigator 1.5.0

I had trouble updating Anaconda Navigator from 1.5.0 to 1.5.2 on Windows 10. Although a message indicated that the update was successful, the version stayed at 1.5.0. I was glad to find the solution here: https://github.com/ContinuumIO/anaconda-issues/issues/1583 :
  1. Open command prompt
  2. Execute:
    C:\> conda update --all --prefix <path_to_the_installed_conda>
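For example, if Anaconda lives under C:\Anaconda3 (an example path; use your own installation directory), the command becomes:
    C:\> conda update --all --prefix C:\Anaconda3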
That's all :)

Friday, December 19, 2014

Android Studio was unable to find a valid JVM

I got the following message when opening Android Studio 1.0.1 on Mac OS X Yosemite:
Android Studio was unable to find a valid JVM
Among the workarounds for this problem that I found in this Stackoverflow thread, I think the simplest one is to create a small bash script that exports the environment variable pointing to the JVM and then launches Android Studio. For example, on my machine the JVM is located at /Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk, so the script looks like this:
#!/bin/bash
export STUDIO_JDK=/Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk
open /Applications/Android\ Studio.app
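Save it somewhere convenient, e.g. as start_studio.sh (the file name is arbitrary), make it executable, and use it to launch Android Studio:
$ chmod +x start_studio.sh
$ ./start_studio.sh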


That's all :)

Sunday, March 16, 2014

Kinect v2 developer preview + OpenCV 2.4.8: depth data

This time, I'd like to share code for accessing depth data through the current API of the Kinect v2 developer preview using simple polling, and displaying it with OpenCV. The procedure is basically the same as for accessing a color frame.

In the current API, depth data is no longer mixed with player index (called body index in Kinect v2 API).

Disclaimer:
This is based on preliminary software and/or hardware. Software, hardware, APIs are preliminary and subject to change.
//Disclaimer:
//This is based on preliminary software and/or hardware, subject to change.

#include <iostream>
#include <sstream>

#include <Windows.h>
#include <Kinect.h>

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/contrib/contrib.hpp>

inline void CHECKERROR(HRESULT n) {
    if (!SUCCEEDED(n)) {
        std::stringstream ss;
        ss << "ERROR " << std::hex << n << std::endl;
        std::cin.ignore();
        std::cin.get();
        throw std::runtime_error(ss.str().c_str());
    }
}

// Safe release for interfaces
template<class Interface>
inline void SAFERELEASE(Interface *& pInterfaceToRelease) {
    if (pInterfaceToRelease != nullptr) {
        pInterfaceToRelease->Release();
        pInterfaceToRelease = nullptr;
    }
}

IDepthFrameReader* depthFrameReader = nullptr; // depth reader

void processIncomingData() {
    IDepthFrame *data = nullptr;
    IFrameDescription *frameDesc = nullptr;
    HRESULT hr = -1;
    UINT16 *depthBuffer = nullptr;
    USHORT nDepthMinReliableDistance = 0;
    USHORT nDepthMaxReliableDistance = 0;
    int height = 424, width = 512;

    hr = depthFrameReader->AcquireLatestFrame(&data);
    if (SUCCEEDED(hr)) hr = data->get_FrameDescription(&frameDesc);
    if (SUCCEEDED(hr)) hr = data->get_DepthMinReliableDistance(
        &nDepthMinReliableDistance);
    if (SUCCEEDED(hr)) hr = data->get_DepthMaxReliableDistance(
        &nDepthMaxReliableDistance);

    if (SUCCEEDED(hr)) {
        if (SUCCEEDED(frameDesc->get_Height(&height)) &&
            SUCCEEDED(frameDesc->get_Width(&width))) {
            depthBuffer = new UINT16[height * width];
            hr = data->CopyFrameDataToArray(height * width, depthBuffer);
            if (SUCCEEDED(hr)) {
                cv::Mat depthMap = cv::Mat(height, width, CV_16U, depthBuffer);
                cv::Mat img0 = cv::Mat::zeros(height, width, CV_8UC1);
                cv::Mat img1;
                double scale = 255.0 / (nDepthMaxReliableDistance - 
                    nDepthMinReliableDistance);
                depthMap.convertTo(img0, CV_8UC1, scale);
                applyColorMap(img0, img1, cv::COLORMAP_JET);
                cv::imshow("Depth Only", img1);
            }
        }
    }
    if (depthBuffer != nullptr) {
        delete[] depthBuffer;
        depthBuffer = nullptr;
    }
    SAFERELEASE(data);
}

int main(int argc, char** argv) {
    HRESULT hr;
    IKinectSensor* kinectSensor = nullptr;     // kinect sensor

    // initialize Kinect Sensor
    hr = GetDefaultKinectSensor(&kinectSensor);
    if (FAILED(hr) || !kinectSensor) {
        std::cout << "ERROR hr=" << hr << "; sensor=" << kinectSensor << std::endl;
        return -1;
    }
    CHECKERROR(kinectSensor->Open());

    // initialize depth frame reader
    IDepthFrameSource* depthFrameSource = nullptr;
    CHECKERROR(kinectSensor->get_DepthFrameSource(&depthFrameSource));
    CHECKERROR(depthFrameSource->OpenReader(&depthFrameReader));
    SAFERELEASE(depthFrameSource);

    while (depthFrameReader) {
        processIncomingData();
        int key = cv::waitKey(10);
        if (key == 'q'){
            break;
        }
    }

    // de-initialize Kinect Sensor
    CHECKERROR(kinectSensor->Close());
    SAFERELEASE(kinectSensor);
    return 0;
}
Results in my messy room:

If we modify the scaling, for example:
                nDepthMaxReliableDistance = 900;
                nDepthMinReliableDistance = 500;
                // Clamp to [min, max] before subtracting: the depth values are
                // unsigned, so subtracting first would wrap around for pixels
                // closer than the minimum distance.
                for (int i = 0; i < height; i++) {
                    for (int j = 0; j < width; j++) {
                        UINT16 val = depthMap.at<UINT16>(i, j);
                        val = (val > nDepthMaxReliableDistance ?
                            nDepthMaxReliableDistance : val);
                        val = (val < nDepthMinReliableDistance ?
                            nDepthMinReliableDistance : val);
                        depthMap.at<UINT16>(i, j) = val - nDepthMinReliableDistance;
                    }
                }

                double scale = 255.0 / (nDepthMaxReliableDistance - 
                    nDepthMinReliableDistance);
                depthMap.convertTo(img0, CV_8UC1, scale);
                applyColorMap(img0, img1, cv::COLORMAP_WINTER);
                cv::imshow("Depth Only", img1);
It may look like this:
That's all :)