This tutorial explains how to use Intel® INDE Media Pack for Android* to add video capturing capability to Qt Quick applications.
Prerequisites:
This tutorial is designed for experienced Qt programmers. If you have never built a Qt application for Android, check the official manual first: Getting Started with Qt for Android. Here we cover only the key points.
Let's make a simple QML app. We need some moving content, an FPS counter, and a record button. Create a new Qt Quick 2 Application project. Qt Creator generates the QtQuick2ApplicationViewer class automatically:
[sourcecode language="csharp" collapse="true"]
/*
This file was generated by the Qt Quick 2 Application wizard of Qt Creator.
QtQuick2ApplicationViewer is a convenience class containing mobile device specific
code such as screen orientation handling. Also QML paths and debugging are
handled here.
It is recommended not to modify this file, since newer versions of Qt Creator
may offer an updated version of it.
*/
[/sourcecode]
We need to make significant changes to its behavior, but we won't modify the QtQuick2ApplicationViewer sources; instead, let's inherit from it:
[sourcecode language="csharp" collapse="true"]
#ifndef QTCAPTURINGVIEWER_H
#define QTCAPTURINGVIEWER_H
#include "qtquick2applicationviewer.h"
#include <jni.h>
class QOpenGLFramebufferObject;
class QOpenGLShaderProgram;
class QElapsedTimer;
class QtCapturingViewer : public QtQuick2ApplicationViewer
{
Q_OBJECT
Q_PROPERTY(int fps READ fps NOTIFY fpsChanged)
public:
explicit QtCapturingViewer(QWindow *parent = 0);
~QtCapturingViewer();
int fps() const { return m_fps; }
Q_INVOKABLE void startCapturing(int width, int height, int frameRate, int bitRate, QString fileName);
Q_INVOKABLE void stopCapturing();
private:
jobject m_qtCapturingObject;
QOpenGLFramebufferObject *m_fbo;
QOpenGLShaderProgram *m_program;
QElapsedTimer *m_timer;
bool m_inProgress;
int m_fps;
QString m_videoDir;
int m_videoFrameRate;
void drawQuad(int textureID);
void captureFrame(int textureID);
signals:
void fpsChanged();
private slots:
void onSceneGraphInitialized();
void onBeforeRendering();
void onAfterRendering();
};
#endif // QTCAPTURINGVIEWER_H
[/sourcecode]
As you can see, we now have an FPS counter exposed as a Q_PROPERTY and start/stop Q_INVOKABLE methods for video capturing.
Edit the file main.cpp:
[sourcecode language="csharp" collapse="true"]
#include <QtGui/QGuiApplication>
#include “qtcapturingviewer.h”
#include <QQmlContext>
#include <QDebug>
int main(int argc, char *argv[])
{
QGuiApplication app(argc, argv);
QtCapturingViewer viewer;
viewer.rootContext()->setContextProperty("viewer", &viewer);
viewer.setMainQmlFile(QStringLiteral("qml/QtQuickCapturing/main.qml"));
viewer.showExpanded();
return app.exec();
}
[/sourcecode]
Replace the QtQuick2ApplicationViewer include and instantiation with our new class, and don't forget to set the context property "viewer". Now we can use the fps property and the startCapturing()/stopCapturing() methods inside QML:
[sourcecode language="csharp" collapse="true"]
import QtQuick 2.0
Rectangle {
width: 360
height: 360
Rectangle {
radius: 50
width: (parent.width > parent.height ? parent.width : parent.height) / 3
height: width
anchors.centerIn: parent
gradient: Gradient {
GradientStop { position: 0.0; color: "red" }
GradientStop { position: 0.5; color: "yellow" }
GradientStop { position: 1.0; color: "green" }
}
PropertyAnimation on rotation {
running: true
loops: Animation.Infinite
easing.type: Easing.Linear
from: 0
to: 360
duration: 8000
}
}
Rectangle {
id: buttonRect
anchors {
right: parent.right
bottom: parent.bottom
margins: 50
}
color: "green"
Behavior on color {
ColorAnimation {}
}
width: 150
height: 150
radius: 75
property bool inProgress: false
MouseArea {
id: mouseArea
anchors.fill: parent
onClicked: {
buttonRect.inProgress = !buttonRect.inProgress
if (buttonRect.inProgress) {
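// 1104 rather than 1080: hardware H.264 encoders typically require frame dimensions that are multiples of 16, and 1080 is not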
viewer.startCapturing(1920, 1104, 30, 3000, "QtCapturing.mp4");
} else {
viewer.stopCapturing();
}
}
}
states: [
State {
when: buttonRect.inProgress
PropertyChanges {
target: buttonRect
color: "red"
}
}
]
}
Text {
anchors {
left: parent.left
top: parent.top
margins: 25
}
font.pixelSize: 50
text: "FPS: " + viewer.fps
}
}
[/sourcecode]
Download and install Intel INDE from http://intel.com/software/inde. After installing Intel INDE, choose to download and install the Media Pack for Android; for additional assistance, visit the Intel INDE forum. Go to the Media Pack for Android installation folder -> libs and copy the two jar files (android-<version>.jar and domain-<version>.jar) into your /android-sources/libs/ folder. Of course, you should create these folders first. The /android-sources/ folder can have any name, but you must specify it in the project file:
[sourcecode collapse="true" language="csharp"]
ANDROID_PACKAGE_SOURCE_DIR = $$PWD/android-sources
[/sourcecode]
See Deploying an Application on Android if you have additional questions.
Now let's move to the Java side. It isn't convenient to modify the source code of the main activity; it is much easier to create a separate class and instantiate it at application startup from the C++ side using JNI. Create the chain of folders /android-sources/src/org/qtproject/qt5/android/bindings/ and add a Java* file QtCapturing.java to the last folder with the following code in it:
[sourcecode language="java" collapse="true"]
package org.qtproject.qt5.android.bindings;
import com.intel.inde.mp.android.graphics.FullFrameTexture;
import com.intel.inde.mp.android.graphics.FrameBuffer;
import android.content.Context;
import android.os.Environment;
import java.io.IOException;
import java.io.File;
public class QtCapturing
{
private static FullFrameTexture texture;
// The Context parameter matches the "(Landroid/content/Context;)V" constructor signature looked up from the C++ side
public QtCapturing(Context context)
{
texture = new FullFrameTexture();
}
boolean release()
{
return true;
}
public static String getDirectoryDCIM()
{
return Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_DCIM) + File.separator;
}
public void initCapturing(int videoWidth, int videoHeight, int videoFrameRate, int videoBitRate)
{
VideoCapture.init(videoWidth, videoHeight, videoFrameRate, videoBitRate);
}
public void startCapturing(String videoPath)
{
VideoCapture capture = VideoCapture.getInstance();
synchronized (capture) {
try {
capture.start(videoPath);
} catch (IOException e) {
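// Capture failed to start; a production app should log or report this error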
}
}
}
public void captureFrame(int textureID)
{
VideoCapture capture = VideoCapture.getInstance();
synchronized (capture) {
capture.beginCaptureFrame();
texture.draw(textureID);
capture.endCaptureFrame();
}
}
public void stopCapturing()
{
VideoCapture capture = VideoCapture.getInstance();
synchronized (capture) {
if (capture.isStarted()) {
capture.stop();
}
}
}
}
[/sourcecode]
Then create another Java file in the same directory. Name it VideoCapture.java and put the following contents in it:
[sourcecode language="java" collapse="true"]
package org.qtproject.qt5.android.bindings;
import com.intel.inde.mp.*;
import com.intel.inde.mp.android.AndroidMediaObjectFactory;
import com.intel.inde.mp.android.AudioFormatAndroid;
import com.intel.inde.mp.android.VideoFormatAndroid;
import java.io.IOException;
public class VideoCapture
{
private static final String TAG = "VideoCapture";
private static final String Codec = "video/avc";
private static int IFrameInterval = 1;
private static final Object syncObject = new Object();
private static volatile VideoCapture videoCapture;
private static VideoFormat videoFormat;
private static int videoWidth;
private static int videoHeight;
private GLCapture capturer;
private boolean isConfigured;
private boolean isStarted;
private long framesCaptured;
private VideoCapture()
{
}
public static void init(int width, int height, int frameRate, int bitRate)
{
videoWidth = width;
videoHeight = height;
videoFormat = new VideoFormatAndroid(Codec, videoWidth, videoHeight);
videoFormat.setVideoFrameRate(frameRate);
videoFormat.setVideoBitRateInKBytes(bitRate);
videoFormat.setVideoIFrameInterval(IFrameInterval);
}
public static VideoCapture getInstance()
{
if (videoCapture == null) {
synchronized (syncObject) {
if (videoCapture == null)
videoCapture = new VideoCapture();
}
}
return videoCapture;
}
public void start(String videoPath) throws IOException
{
if (isStarted())
throw new IllegalStateException(TAG + " already started!");
capturer = new GLCapture(new AndroidMediaObjectFactory());
capturer.setTargetFile(videoPath);
capturer.setTargetVideoFormat(videoFormat);
AudioFormat audioFormat = new AudioFormatAndroid("audio/mp4a-latm", 44100, 2);
capturer.setTargetAudioFormat(audioFormat);
capturer.start();
isStarted = true;
isConfigured = false;
framesCaptured = 0;
}
public void stop()
{
if (!isStarted())
throw new IllegalStateException(TAG + " not started or already stopped!");
try {
capturer.stop();
isStarted = false;
} catch (Exception ex) {
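// Ignore errors while stopping; the capturer is being torn down anyway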
}
capturer = null;
isConfigured = false;
}
private void configure()
{
if (isConfigured())
return;
try {
capturer.setSurfaceSize(videoWidth, videoHeight);
isConfigured = true;
} catch (Exception ex) {
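// The surface size was rejected; we stay unconfigured and frames are skipped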
}
}
public void beginCaptureFrame()
{
if (!isStarted())
return;
configure();
if (!isConfigured())
return;
capturer.beginCaptureFrame();
}
public void endCaptureFrame()
{
if (!isStarted() || !isConfigured())
return;
capturer.endCaptureFrame();
framesCaptured++;
}
public boolean isStarted()
{
return isStarted;
}
public boolean isConfigured()
{
return isConfigured;
}
}
[/sourcecode]
Now, as for any other Android application, we need to set up a manifest XML file. The manifest declares which activity to launch and which permissions the application requires. Go to the Projects tab, switch to the Run settings of your Android kit, expand Deploy configurations, and press the Create AndroidManifest.xml button. Press Finish in the wizard, then adjust the manifest features and permissions:
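Writing into the public DCIM folder and capturing an audio track will likely require at least these permissions (a sketch; verify the exact set against your application and Media Pack version):
[sourcecode language="csharp" collapse="true"]
<!-- Assumed permissions: storage access for saving into DCIM, audio recording for the audio track -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
[/sourcecode]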
Switch to XML Source and hide the status bar by setting an activity theme:
[sourcecode language="csharp" collapse="true"]
<activity
…
android:theme="@android:style/Theme.NoTitleBar.Fullscreen" >
…
</activity>
[/sourcecode]
I prefer to add all necessary files to the project file, so they show up in Qt Creator's project tree (OTHER_FILES does not affect the build itself):
[sourcecode language="csharp" collapse="true"]
OTHER_FILES += \
android-sources/libs/android-1.0.903.jar \
android-sources/libs/domain-1.0.903.jar \
android-sources/src/org/qtproject/qt5/android/bindings/QtCapturing.java \
android-sources/src/org/qtproject/qt5/android/bindings/VideoCapture.java \
android-sources/AndroidManifest.xml
[/sourcecode]
So your project structure should look like this:
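The listing below is inferred from the paths used in this tutorial (the wizard may group the QtQuick2ApplicationViewer files into their own subfolder):
[sourcecode collapse="true"]
QtQuickCapturing.pro
main.cpp
qtcapturingviewer.h
qtcapturingviewer.cpp
qtquick2applicationviewer.h
qtquick2applicationviewer.cpp
qml/QtQuickCapturing/main.qml
android-sources/AndroidManifest.xml
android-sources/libs/android-1.0.903.jar
android-sources/libs/domain-1.0.903.jar
android-sources/src/org/qtproject/qt5/android/bindings/QtCapturing.java
android-sources/src/org/qtproject/qt5/android/bindings/VideoCapture.java
[/sourcecode]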
The core functionality is concentrated in the qtcapturingviewer.cpp file. First of all, we need to connect to the Java side. JNI_OnLoad is a convenient place to look up and cache class object references:
[sourcecode language="csharp" collapse="true"]
#include “qtcapturingviewer.h”
#include <QOpenGLFramebufferObject>
#include <QOpenGLShaderProgram>
#include <QElapsedTimer>
#include <QtAndroidExtras>
static JavaVM *s_javaVM = 0;
static jclass s_classID = 0;
static jmethodID s_constructorMethodID = 0;
static jmethodID s_initCapturingMethodID = 0;
static jmethodID s_startCapturingMethodID = 0;
static jmethodID s_captureFrameMethodID = 0;
static jmethodID s_stopCapturingMethodID = 0;
static jmethodID s_releaseMethodID = 0;
static jmethodID s_getDirectoryDCIMMethodID =0;
// This method is called immediately after the module is loaded
JNIEXPORT jint JNI_OnLoad(JavaVM *vm, void */*reserved*/)
{
JNIEnv *env;
if (vm->GetEnv(reinterpret_cast<void **>(&env), JNI_VERSION_1_6) != JNI_OK) {
qCritical() << "Can't get the environment";
return -1;
}
s_javaVM = vm;
// Search for our class
jclass clazz = env->FindClass("org/qtproject/qt5/android/bindings/QtCapturing");
if (!clazz) {
qCritical() << "Can't find QtCapturing class";
return -1;
}
// Keep a global reference to it
s_classID = (jclass)env->NewGlobalRef(clazz);
// Search for its constructor
s_constructorMethodID = env->GetMethodID(s_classID, "<init>", "(Landroid/content/Context;)V");
if (!s_constructorMethodID) {
qCritical() << "Can't find QtCapturing class constructor";
return -1;
}
s_initCapturingMethodID = env->GetMethodID(s_classID, "initCapturing", "(IIII)V");
if (!s_initCapturingMethodID) {
qCritical() << "Can't find initCapturing() method";
return -1;
}
s_startCapturingMethodID = env->GetMethodID(s_classID, "startCapturing", "(Ljava/lang/String;)V");
if (!s_startCapturingMethodID) {
qCritical() << "Can't find startCapturing() method";
return -1;
}
s_captureFrameMethodID = env->GetMethodID(s_classID, "captureFrame", "(I)V");
if (!s_captureFrameMethodID) {
qCritical() << "Can't find captureFrame() method";
return -1;
}
s_stopCapturingMethodID = env->GetMethodID(s_classID, "stopCapturing", "()V");
if (!s_stopCapturingMethodID) {
qCritical() << "Can't find stopCapturing() method";
return -1;
}
// Search for release method
s_releaseMethodID = env->GetMethodID(s_classID, "release", "()Z");
if (!s_releaseMethodID) {
qCritical() << "Can't find release() method";
return -1;
}
// Look up our static method (we will call it later to get the DCIM path)
s_getDirectoryDCIMMethodID = env->GetStaticMethodID(s_classID, "getDirectoryDCIM", "()Ljava/lang/String;");
if (!s_getDirectoryDCIMMethodID) {
qCritical() << "Can't find getDirectoryDCIM() static method";
return -1;
}
return JNI_VERSION_1_6;
}
[/sourcecode]
The QQuickWindow::sceneGraphInitialized() signal is emitted when a new OpenGL context is created for this window. The QQuickWindow::beforeRendering() signal is emitted before the scene starts rendering. The QQuickWindow::afterRendering() signal is emitted after the scene has completed rendering, before swapbuffers is called. Make a Qt::DirectConnection to these signals to be notified on the render thread.
[sourcecode language="csharp" collapse="true"]
QtCapturingViewer::QtCapturingViewer(QWindow *parent)
: QtQuick2ApplicationViewer(parent)
, m_qtCapturingObject(nullptr)
, m_fbo(nullptr)
, m_program(nullptr)
, m_inProgress(false)
, m_fps(0)
{
connect(this, SIGNAL(sceneGraphInitialized()), SLOT(onSceneGraphInitialized()), Qt::DirectConnection);
connect(this, SIGNAL(beforeRendering()), SLOT(onBeforeRendering()), Qt::DirectConnection);
connect(this, SIGNAL(afterRendering()), SLOT(onAfterRendering()), Qt::DirectConnection);
m_timer = new QElapsedTimer();
m_timer->start();
}
[/sourcecode]
Now we have an OpenGL context and we can instantiate our QtCapturing.java class. The QOpenGLFramebufferObject class encapsulates an OpenGL framebuffer object. Be sure to attach a depth buffer to the framebuffer.
[sourcecode language="csharp" collapse="true"]
void QtCapturingViewer::onSceneGraphInitialized()
{
// Create a new instance of QtCapturing
JNIEnv *env;
// Qt runs in a different thread than the Java UI, so the Java VM *MUST* always be attached to the current thread
if (s_javaVM->AttachCurrentThread(&env, NULL) < 0) {
qCritical() << "AttachCurrentThread failed";
return;
}
QAndroidJniObject activity = QtAndroid::androidActivity();
m_qtCapturingObject = env->NewGlobalRef(env->NewObject(s_classID, s_constructorMethodID, activity.object<jobject>()));
if (!m_qtCapturingObject) {
qCritical() << "Can't create the QtCapturing object";
return;
}
// Get DCIM dir
jstring value = (jstring)env->CallStaticObjectMethod(s_classID, s_getDirectoryDCIMMethodID);
const char *res = env->GetStringUTFChars(value, NULL);
m_videoDir = QString(res);
env->ReleaseStringUTFChars(value, res);
// Don't forget to detach from current thread
s_javaVM->DetachCurrentThread();
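// Note: the FBO is created once at the current window size; if the window can be resized or rotated, it should be re-created at the new size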
m_fbo = new QOpenGLFramebufferObject(size());
m_fbo->setAttachment(QOpenGLFramebufferObject::Depth);
}
[/sourcecode]
Don't forget to properly release all resources:
[sourcecode language="csharp" collapse="true"]
QtCapturingViewer::~QtCapturingViewer()
{
delete m_fbo;
delete m_program; // also release the shader program created lazily in drawQuad()
delete m_timer;
JNIEnv *env;
if (s_javaVM->AttachCurrentThread(&env, NULL) < 0) {
qCritical() << "AttachCurrentThread failed";
return;
}
if (!env->CallBooleanMethod(m_qtCapturingObject, s_releaseMethodID))
qCritical() << "Releasing failed";
env->DeleteGlobalRef(m_qtCapturingObject); // release the global reference created in onSceneGraphInitialized()
s_javaVM->DetachCurrentThread();
}
[/sourcecode]
Method QQuickWindow::setRenderTarget() sets the render target for this window. The default is to render to the surface of the window, in which case the render target is 0.
[sourcecode language="csharp" collapse="true"]
void QtCapturingViewer::onBeforeRendering()
{
if (m_inProgress) {
if (renderTarget() == 0)
setRenderTarget(m_fbo);
} else {
if (renderTarget() != 0) {
setRenderTarget(0);
}
}
}
[/sourcecode]
After that we have a texture with the QML scene rendered into it. Next we need to render this texture both to the display and to the video surface.
[sourcecode language="csharp" collapse="true"]
void QtCapturingViewer::onAfterRendering()
{
static qint64 frameCount = 0;
static qint64 fpsUpdate = 0;
static const int fpsUpdateRate = 4; // updates per sec
static qint64 m_nextCapture = 0;
if (m_inProgress) {
// Draw fullscreen quad
QOpenGLFramebufferObject::bindDefault();
drawQuad(m_fbo->texture());
// Pass color attachment to java side for actual capturing
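// Throttle capturing to the requested frame rate: at most one frame every 1000 / m_videoFrameRate ms (about 33 ms at 30 FPS)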
if (m_timer->elapsed() > m_nextCapture) {
captureFrame(m_fbo->texture());
m_nextCapture += 1000 / m_videoFrameRate;
}
}
// Update FPS
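// Frames are counted in 1000 / fpsUpdateRate = 250 ms windows, then scaled by fpsUpdateRate to get frames per second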
frameCount++;
if (m_timer->elapsed() > fpsUpdate) {
fpsUpdate += 1000 / fpsUpdateRate;
m_fps = frameCount * fpsUpdateRate;
frameCount = 0;
emit fpsChanged();
}
}
[/sourcecode]
This method renders a fullscreen textured quad:
[sourcecode language="csharp" collapse="true"]
void QtCapturingViewer::drawQuad(int textureID)
{
if (!m_program) {
m_program = new QOpenGLShaderProgram();
m_program->addShaderFromSourceCode(QOpenGLShader::Vertex,
"attribute highp vec4 vertices;"
"varying highp vec2 coords;"
"void main() {"
" gl_Position = vertices;"
" coords = (vertices.xy + 1.0) * 0.5;"
"}");
m_program->addShaderFromSourceCode(QOpenGLShader::Fragment,
"uniform sampler2D texture;"
"varying highp vec2 coords;"
"void main() {"
" gl_FragColor = texture2D(texture, coords);"
"}");
m_program->bindAttributeLocation("vertices", 0);
if (!m_program->link()) {
qDebug() << "Link wasn't successful: " << m_program->log();
}
}
m_program->bind();
m_program->enableAttributeArray(0);
float values[] = {
-1, -1,
1, -1,
-1, 1,
1, 1
};
m_program->setAttributeArray(0, GL_FLOAT, values, 2);
glBindTexture(GL_TEXTURE_2D, textureID);
glViewport(0, 0, size().width(), size().height());
glDisable(GL_DEPTH_TEST);
glClearColor(0, 0, 0, 1);
glClear(GL_COLOR_BUFFER_BIT);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
m_program->disableAttributeArray(0);
m_program->release();
}
[/sourcecode]
Every time we start capturing, we configure the output video format, so you can alter the parameters between recordings.
[sourcecode language="csharp" collapse="true"]
void QtCapturingViewer::startCapturing(int width, int height, int frameRate, int bitRate, QString videoName)
{
if (!m_qtCapturingObject)
return;
JNIEnv *env;
if (s_javaVM->AttachCurrentThread(&env, NULL) < 0) {
qCritical() << "AttachCurrentThread failed";
return;
}
// Setup format
m_videoFrameRate = frameRate;
env->CallVoidMethod(m_qtCapturingObject, s_initCapturingMethodID, width, height, frameRate, bitRate);
// Start capturing
QString videoPath = m_videoDir + videoName;
jstring string = env->NewString(reinterpret_cast<const jchar *>(videoPath.constData()), videoPath.length());
env->CallVoidMethod(m_qtCapturingObject, s_startCapturingMethodID, string);
env->DeleteLocalRef(string);
s_javaVM->DetachCurrentThread();
m_inProgress = true;
}
[/sourcecode]
This is how we pass the texture handle to the captureFrame() method of our QtCapturing.java class:
[sourcecode language="csharp" collapse="true"]
void QtCapturingViewer::captureFrame(int textureID)
{
if (!m_qtCapturingObject)
return;
JNIEnv *env;
if (s_javaVM->AttachCurrentThread(&env, NULL) < 0) {
qCritical() << "AttachCurrentThread failed";
return;
}
env->CallVoidMethod(m_qtCapturingObject, s_captureFrameMethodID, textureID);
s_javaVM->DetachCurrentThread();
}
[/sourcecode]
And the last thing you should know is how to stop capturing:
[sourcecode language="csharp" collapse="true"]
void QtCapturingViewer::stopCapturing()
{
m_inProgress = false;
if (!m_qtCapturingObject)
return;
JNIEnv *env;
if (s_javaVM->AttachCurrentThread(&env, NULL) < 0) {
qCritical() << "AttachCurrentThread failed";
return;
}
env->CallVoidMethod(m_qtCapturingObject, s_stopCapturingMethodID);
s_javaVM->DetachCurrentThread();
}
[/sourcecode]
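You may have noticed that every method above repeats the same AttachCurrentThread()/DetachCurrentThread() boilerplate. As an optional refinement (a sketch, not part of the original code), a small RAII guard built on the cached s_javaVM pointer keeps that pattern in one place:
[sourcecode language="csharp" collapse="true"]
// Minimal sketch of a JNI attach/detach guard; assumes the s_javaVM pointer cached in JNI_OnLoad above
class ScopedJniEnv
{
public:
ScopedJniEnv() : m_env(0), m_attached(false)
{
// If the thread is already attached, GetEnv succeeds and we must not detach it later
if (s_javaVM->GetEnv(reinterpret_cast<void **>(&m_env), JNI_VERSION_1_6) == JNI_OK)
return;
if (s_javaVM->AttachCurrentThread(&m_env, NULL) == 0)
m_attached = true;
else
m_env = 0;
}
~ScopedJniEnv()
{
if (m_attached)
s_javaVM->DetachCurrentThread();
}
JNIEnv *env() const { return m_env; }
private:
JNIEnv *m_env;
bool m_attached;
};
[/sourcecode]
With such a guard, each wrapper reduces to declaring the guard, checking env() for null, and making the CallVoidMethod() call itself.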
That's all you need to know to add video capturing capability to Qt Quick applications. Run your test application; you can find the recorded videos in the /mnt/sdcard/DCIM/ folder of your Android device. Enjoy!
Source: Intel Developer Zone (by Ilya Aleshkov, Auriga’s Engineer)