Sunday, March 20, 2016

Android + SurfaceTexture + Camera2 + OpenCV + NDK

Using Android Studio 1.5.1, I found that OpenCV's own camera class is too slow for video processing. I searched the web a lot but didn't find a good, complete solution, so I decided to program it myself. With Android's camera2 API it is fast -- about 30 fps -- and there are more camera controls, like auto focus, resolution, and quality.

Full source code can be found here:
https://github.com/webjb/myrobot

The code is based on the Android Studio sample Camera2Raw.
The whole idea is to use the Camera2 API to capture video, use an ImageReader to obtain each captured frame, send every frame to NDK C++ for processing -- recognition and drawing lines with OpenCV functions -- and then draw the processed frame to a Surface backed by the TextureView.
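
For reference, the snippets below assume an ImageReader has already been created; that setup isn't shown in this post. A minimal sketch of what it can look like (the real code wraps the reader, which is why you'll see mImageReader.get(); the method name and maxImages value here are my own):

    private ImageReader mImageReader;

    // Sketch: create a YUV_420_888 reader sized to the preview and deliver frames
    // to mOnImageAvailableListener (section 2) on the background handler.
    private void setUpImageReader() {
        mImageReader = ImageReader.newInstance(
                mPreviewSize.getWidth(), mPreviewSize.getHeight(),
                ImageFormat.YUV_420_888, /*maxImages*/ 2);
        mImageReader.setOnImageAvailableListener(mOnImageAvailableListener, mBackgroundHandler);
    }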

1) Camera2: control the camera and start capture


    private void createCameraPreviewSession() {
        CaptureRequest.Builder mPreviewRequestBuilder;
        try {
            SurfaceTexture texture = mSurfaceView.getSurfaceTexture();
            // We configure the size of default buffer to be the size of camera preview we want.
            texture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());

            // This Surface is not added to the capture session below; it is handed
            // to the native code later, which draws the processed frames into it.
            mSurface = new Surface(texture);

            // We set up a CaptureRequest.Builder with the output Surface.
            mPreviewRequestBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
            //mPreviewRequestBuilder.addTarget(mSurface);
            mPreviewRequestBuilder.addTarget(mImageReader.get().getSurface());

            BlockingSessionCallback sessionCallback = new BlockingSessionCallback();

            List<Surface> outputSurfaces = new ArrayList<>();
            outputSurfaces.add(mImageReader.get().getSurface());
            //outputSurfaces.add(mSurface);

            mCameraDevice.createCaptureSession(outputSurfaces, sessionCallback, mBackgroundHandler);

            try {
                Log.d(TAG, "waiting on session.");
                mCaptureSession = sessionCallback.waitAndGetSession(SESSION_WAIT_TIMEOUT_MS);
                try {
                    mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AF_MODE,CaptureRequest.CONTROL_AF_MODE_AUTO);
//                    mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AF_MODE,CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);

                    // Comment out the above and uncomment this to disable continuous autofocus and
                    // instead set it to a fixed value of 20 diopters. This should make the picture
                    // nice and blurry for denoised edge detection.
                    // mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AF_MODE,
                    //     CaptureRequest.CONTROL_AF_MODE_OFF);
                    // mPreviewRequestBuilder.set(CaptureRequest.LENS_FOCUS_DISTANCE, 20.0f);
                    // Finally, we start displaying the camera preview.

                    Log.d(TAG, "setting repeating request");

                    mCaptureSession.setRepeatingRequest(mPreviewRequestBuilder.build(),
                            mCaptureCallback, mBackgroundHandler);
                } catch (CameraAccessException e) {
                    e.printStackTrace();
                }
            } catch (TimeoutRuntimeException e) {
                showToast("Failed to configure capture session.");
            }
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
    }
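
The session callback and the repeating request above run on mBackgroundHandler, whose setup isn't shown here. A minimal sketch of a typical background-thread setup, following the pattern used in Google's Camera2 samples (method names are my own):

    private HandlerThread mBackgroundThread;
    private Handler mBackgroundHandler;

    // Run camera callbacks on a dedicated thread so they don't block the UI thread.
    private void startBackgroundThread() {
        mBackgroundThread = new HandlerThread("CameraBackground");
        mBackgroundThread.start();
        mBackgroundHandler = new Handler(mBackgroundThread.getLooper());
    }

    private void stopBackgroundThread() {
        mBackgroundThread.quitSafely();
        try {
            mBackgroundThread.join();
            mBackgroundThread = null;
            mBackgroundHandler = null;
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }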

2) ImageReader: obtain captured frames


    private final ImageReader.OnImageAvailableListener mOnImageAvailableListener
            = new ImageReader.OnImageAvailableListener() {

        @Override
        public void onImageAvailable(ImageReader reader) {

            Image image = null;
            String result;

            try {
                image = reader.acquireLatestImage();
                if (image == null) {
                    return;
                }
                int fmt = reader.getImageFormat();
                Log.d(TAG, "bob image fmt:" + fmt);

                // mTakePicture is forwarded as the savefile flag and reset after one frame.
                result = JNIUtils.detectLane(image, mSurface, mFileName, mTakePicture);
                if (mTakePicture == 1) {
                    mTakePicture = 0;
                }
                Log.d(TAG, "bob Lane Detect result: " + result);

                comm.send_lane(result);
            } catch (IllegalStateException e) {
                Log.e(TAG, "Too many images queued, dropping image");
            } finally {
                // Always close the Image so its buffer is returned to the ImageReader.
                if (image != null) {
                    image.close();
                }
            }
        }
    };
Whenever a frame is captured, this callback is called, and the frame is obtained with:

image = reader.acquireLatestImage();

The image is then passed through JNI to NDK C++ for processing:

result = JNIUtils.detectLane(image, mSurface, mFileName, mTakePicture);

When processing is done, the image is closed to release its buffer back to the ImageReader.

3) JNI interface


    public static String detectLane(Image src, Surface dst, String path, int savefile) {
        
        if (src.getFormat() != ImageFormat.YUV_420_888) {
            throw new IllegalArgumentException("src must have format YUV_420_888.");
        }
        Plane[] planes = src.getPlanes();
        // Spec guarantees that planes[0] is luma and has pixel stride of 1.
        // It also guarantees that planes[1] and planes[2] have the same row and
        // pixel stride.
        if (planes[1].getPixelStride() != 1 && planes[1].getPixelStride() != 2) {
            throw new IllegalArgumentException(
                    "src chroma plane must have a pixel stride of 1 or 2: got "
                    + planes[1].getPixelStride());
        }
        return detectLane(src.getWidth(), src.getHeight(), planes[0].getBuffer(),
                dst, path, savefile);
    }
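
The public wrapper above delegates to a private native method. Its declaration and the library loading are not shown in this post; here is a sketch of what they look like, inferred from the JNI symbol in section 4 (the library name "myrobot" is an assumption; use whatever your NDK module is actually called):

    static {
        // Assumed library name; it must match the NDK module that defines
        // Java_com_neza_myrobot_JNIUtils_detectLane below.
        System.loadLibrary("myrobot");
    }

    // Signature matching the call in the wrapper above:
    // detectLane(width, height, lumaBuffer, dstSurface, path, savefile) -> result string
    private static native String detectLane(int srcWidth, int srcHeight,
            ByteBuffer srcBuffer, Surface dst, String path, int savefile);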

4) OpenCV for image processing


JNIEXPORT jstring JNICALL Java_com_neza_myrobot_JNIUtils_detectLane(
            JNIEnv *env, jobject obj, jint srcWidth, jint srcHeight,
            jobject srcBuffer, jobject dstSurface, jstring path, jint saveFile) {
    char outStr[2000];

    const char *str = env->GetStringUTFChars(path, NULL);
    LOGE("bob path:%s saveFile=%d", str, saveFile);

    uint8_t *srcLumaPtr = reinterpret_cast<uint8_t *>(env->GetDirectBufferAddress(srcBuffer));

    if (srcLumaPtr == nullptr) {
        LOGE("blit NULL pointer ERROR");
        return NULL;
    }

    int dstWidth;
    int dstHeight;

    cv::Mat mYuv(srcHeight + srcHeight / 2, srcWidth, CV_8UC1, srcLumaPtr);

    ANativeWindow *win = ANativeWindow_fromSurface(env, dstSurface);
    ANativeWindow_acquire(win);

    ANativeWindow_Buffer buf;

    dstWidth = srcHeight;
    dstHeight = srcWidth;

    ANativeWindow_setBuffersGeometry(win, dstWidth, dstHeight, 0 /*format unchanged*/);

    if (int32_t err = ANativeWindow_lock(win, &buf, NULL)) {
        LOGE("ANativeWindow_lock failed with error code %d\n", err);
        ANativeWindow_release(win);
        return NULL;
    }

    uint8_t *dstLumaPtr = reinterpret_cast<uint8_t *>(buf.bits);
    Mat dstRgba(dstHeight, buf.stride, CV_8UC4,
                dstLumaPtr);        // TextureView buffer, use stride as width
    Mat srcRgba(srcHeight, srcWidth, CV_8UC4);
    Mat flipRgba(dstHeight, dstWidth, CV_8UC4);

    // convert YUV -> RGBA
    cv::cvtColor(mYuv, srcRgba, CV_YUV2RGBA_NV21);

    // Rotate 90 degree
    cv::transpose(srcRgba, flipRgba);
    cv::flip(flipRgba, flipRgba, 1);

#if 0
    int ball_x;
    int ball_y;
    int ball_r;

    ball_r = 0;

    BallDetect(flipRgba, ball_x, ball_y, ball_r);
    if( ball_r > 0)
        LOGE("ball x:%d y:%d r:%d", ball_x, ball_y, ball_r);
    else
        LOGE("ball not detected");
#endif

    LaneDetect(flipRgba, str, saveFile, outStr);

    // copy to TextureView surface
    uchar *dbuf;
    uchar *sbuf;
    dbuf = dstRgba.data;
    sbuf = flipRgba.data;
    int i;
    for (i = 0; i < flipRgba.rows; i++) {
        dbuf = dstRgba.data + i * buf.stride * 4;
        memcpy(dbuf, sbuf, flipRgba.cols * 4);
        sbuf += flipRgba.cols * 4;
    }

    // Draw reference lines: a vertical center line and a line along the bottom
    cv::line(dstRgba, Point(dstWidth/2, 0), Point(dstWidth/2, dstHeight-1),Scalar(255, 255, 255));
    cv::line(dstRgba, Point(0,dstHeight-1), Point(dstWidth-1, dstHeight-1),Scalar(255,255,255 ));

    LOGE("bob dstWidth=%d height=%d", dstWidth, dstHeight);
    ANativeWindow_unlockAndPost(win);
    ANativeWindow_release(win);

    // Release the UTF chars obtained from the Java path string.
    env->ReleaseStringUTFChars(path, str);

    return env->NewStringUTF(outStr);
}

The frame is YUV data; the address of its luma buffer is obtained with:
uint8_t *srcLumaPtr = reinterpret_cast<uint8_t *>(env->GetDirectBufferAddress(srcBuffer));

Create a YUV Mat that wraps the buffer (srcHeight + srcHeight/2 rows, so it also covers the chroma data that follows the luma plane):
cv::Mat mYuv(srcHeight + srcHeight / 2, srcWidth, CV_8UC1, srcLumaPtr);

Convert YUV to RGBA
// convert YUV -> RGBA
cv::cvtColor(mYuv, srcRgba, CV_YUV2RGBA_NV21);
For some reason, the frames from ImageReader come out rotated by 90 degrees, so rotate them back:

// Rotate 90 degrees
cv::transpose(srcRgba, flipRgba);
cv::flip(flipRgba, flipRgba, 1);

Then you can do whatever image processing you want with OpenCV's powerful functions.

After processing, you need to copy the buffer into the TextureView surface for display:

// copy to TextureView surface
uchar *dbuf;
uchar *sbuf;
dbuf = dstRgba.data;
sbuf = flipRgba.data;
int i;
for (i = 0; i < flipRgba.rows; i++) {
    dbuf = dstRgba.data + i * buf.stride * 4;
    memcpy(dbuf, sbuf, flipRgba.cols * 4);
    sbuf += flipRgba.cols * 4;
}

Note the use of buf.stride: rows in the ANativeWindow buffer can be padded, so each destination row is advanced by the stride rather than by the image width.

Finally, unlock and post the window buffer and release the window:

ANativeWindow_unlockAndPost(win);
ANativeWindow_release(win);


Again, source code here:
https://github.com/webjb/myrobot

Enjoy!

Monday, March 7, 2016

Moving to Android Studio 1.5.1

My previous environment was ADT, but Google has released Android Studio 1.5.1, which is better to use. The NDK is integrated into Android Studio, so it is easy to use: there is no need to go to the console to compile the C++ code and then run it -- just modify the C++ code and Android Studio compiles it automatically. Unfortunately, the NDK is not yet fully supported, but it is enough for my robot project.

Android Studio requires a more powerful PC with more RAM and doesn't support my 10+ year old Windows XP machine, so I upgraded to a new PC with Windows 10.

I struggled for a long time to make everything work together in Android Studio -- NDK + OpenCV + the Camera2 API -- but it was worth it, since Android Studio is now the main tool for Android development.