At the end of the previous tutorial we noticed an issue with the Unity* GUI layer. But imagine you already have a complicated Unity game that makes intensive use of the GUI. What can you do? Read this tutorial: it covers a more advanced use of the Intel® INDE Media Pack for Android*. Better still, this approach works with the free version of Unity. How? We will avoid fullscreen image postprocessing effects altogether.
Prerequisites:
- Unity 4.3.0
- Android SDK
- First video capturing for Unity Tutorial
First of all, integrate the Intel® INDE Media Pack into your game as described in the first tutorial. We won’t repeat that process here; we will focus only on the changes.
Open the Capturing.java file. The class should now look as follows:
[sourcecode language="java" collapse="true"]
package com.intel.inde.mp.samples.unity;

import com.intel.inde.mp.android.graphics.FullFrameTexture;
import com.intel.inde.mp.android.graphics.FrameBuffer;

import android.os.Environment;

import java.io.IOException;
import java.io.File;

public class Capturing
{
    private static FullFrameTexture texture;
    private FrameBuffer frameBuffer;

    public Capturing(int width, int height)
    {
        frameBuffer = new FrameBuffer();
        frameBuffer.create(width, height);
        texture = new FullFrameTexture();
    }

    public static String getDirectoryDCIM()
    {
        return Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_DCIM) + File.separator;
    }

    public void initCapturing(int width, int height, int frameRate, int bitRate)
    {
        VideoCapture.init(width, height, frameRate, bitRate);
    }

    public void startCapturing(String videoPath)
    {
        VideoCapture capture = VideoCapture.getInstance();

        synchronized (capture) {
            try {
                capture.start(videoPath);
            } catch (IOException e) {
                // If the output file can't be created, capturing simply doesn't start.
            }
        }
    }

    public void beginCaptureFrame()
    {
        frameBuffer.bind();
    }

    public void captureFrame(int textureID)
    {
        VideoCapture capture = VideoCapture.getInstance();

        synchronized (capture) {
            capture.beginCaptureFrame();
            texture.draw(textureID);
            capture.endCaptureFrame();
        }
    }

    public void endCaptureFrame()
    {
        frameBuffer.unbind();
        captureFrame(frameBuffer.getTexture());
    }

    public void stopCapturing()
    {
        VideoCapture capture = VideoCapture.getInstance();

        synchronized (capture) {
            if (capture.isStarted()) {
                capture.stop();
            }
        }
    }
}
[/sourcecode]
As you can see, there are some changes. The main one is the new frameBuffer member. The constructor now accepts width and height parameters to create a properly sized FrameBuffer. There are three new public methods: beginCaptureFrame(), captureFrame(), and endCaptureFrame(). Their purpose will become clear later, on the C# side.
Leave the VideoCapture.java file unchanged. Note the package name: keep it the same as the Bundle Identifier in Unity's Player Settings. Don't forget the manifest file; set all necessary permissions and features.
Now we have AndroidManifest.xml and our Java* files under /Plugins/Android. Create an Apache* Ant* script and build everything with it (see the previous tutorial for details). Notice the new Capturing.jar file in the directory.
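For reference, the manifest entries usually needed look something like the sketch below; the exact set depends on your game and on what the first tutorial prescribes, so treat these as assumptions rather than a definitive list (RECORD_AUDIO is only needed if you also capture audio):

```xml
<!-- Sketch of typical AndroidManifest.xml entries for capturing; verify
     against the first tutorial. Paths/values here are assumptions. -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-feature android:glEsVersion="0x00020000" />
```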
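The previous tutorial walks through the Ant script in full; as a reminder, a build.xml along these lines compiles the Java sources and packs them into Capturing.jar. The paths, property names, and Media Pack jar names below are assumptions — adjust them to your SDK, Unity installation, and Media Pack version:

```xml
<!-- Hypothetical build.xml sketch; all locations are placeholders. -->
<project name="UnityCapturing" default="compile-jar">
    <target name="compile-jar">
        <javac srcdir="." destdir="./build" includeantruntime="false">
            <classpath>
                <!-- android.jar from your SDK, Unity's classes.jar, and the
                     Media Pack jars integrated in the first tutorial -->
                <pathelement location="${sdk.dir}/platforms/android-19/android.jar"/>
                <pathelement location="classes.jar"/>
                <pathelement location="android-1.0.jar"/>
                <pathelement location="domain-1.0.jar"/>
            </classpath>
        </javac>
        <jar destfile="Capturing.jar" basedir="./build"/>
        <delete dir="./build"/>
    </target>
</project>
```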
Switch to Unity. Open Capture.cs and replace its contents with the following code:
[sourcecode language="csharp" collapse="true"]
using UnityEngine;
using System.Collections;
using System.IO;
using System;

[RequireComponent(typeof(Camera))]
public class Capture : MonoBehaviour
{
    public int videoWidth = 720;
    public int videoHeight = 1094;
    public int videoFrameRate = 30;
    public int videoBitRate = 3000;

    private string videoDir;
    public string fileName = "game_capturing-";

    private float nextCapture = 0.0f;
    public bool inProgress { get; private set; }
    private bool finalizeFrame = false;
    private Texture2D texture = null;

    private static IntPtr constructorMethodID = IntPtr.Zero;
    private static IntPtr initCapturingMethodID = IntPtr.Zero;
    private static IntPtr startCapturingMethodID = IntPtr.Zero;
    private static IntPtr beginCaptureFrameMethodID = IntPtr.Zero;
    private static IntPtr endCaptureFrameMethodID = IntPtr.Zero;
    private static IntPtr stopCapturingMethodID = IntPtr.Zero;
    private static IntPtr getDirectoryDCIMMethodID = IntPtr.Zero;

    private IntPtr capturingObject = IntPtr.Zero;

    void Start()
    {
        if (!Application.isEditor) {
            // Search for our class
            IntPtr classID = AndroidJNI.FindClass("com/intel/inde/mp/samples/unity/Capturing");
            // Search for its constructor
            constructorMethodID = AndroidJNI.GetMethodID(classID, "<init>", "(II)V");
            // Register our methods
            initCapturingMethodID = AndroidJNI.GetMethodID(classID, "initCapturing", "(IIII)V");
            startCapturingMethodID = AndroidJNI.GetMethodID(classID, "startCapturing", "(Ljava/lang/String;)V");
            beginCaptureFrameMethodID = AndroidJNI.GetMethodID(classID, "beginCaptureFrame", "()V");
            endCaptureFrameMethodID = AndroidJNI.GetMethodID(classID, "endCaptureFrame", "()V");
            stopCapturingMethodID = AndroidJNI.GetMethodID(classID, "stopCapturing", "()V");
            // Register and call our static method
            getDirectoryDCIMMethodID = AndroidJNI.GetStaticMethodID(classID, "getDirectoryDCIM", "()Ljava/lang/String;");
            jvalue[] args = new jvalue[0];
            videoDir = AndroidJNI.CallStaticStringMethod(classID, getDirectoryDCIMMethodID, args);
            // Create Capturing object
            jvalue[] constructorParameters = new jvalue[2];
            constructorParameters[0].i = Screen.width;
            constructorParameters[1].i = Screen.height;
            IntPtr local_capturingObject = AndroidJNI.NewObject(classID, constructorMethodID, constructorParameters);
            if (local_capturingObject == IntPtr.Zero) {
                Debug.LogError("Can't create Capturing object");
                return;
            }
            // Keep a global reference to it
            capturingObject = AndroidJNI.NewGlobalRef(local_capturingObject);
            AndroidJNI.DeleteLocalRef(local_capturingObject);
            AndroidJNI.DeleteLocalRef(classID);
        }
        inProgress = false;
        nextCapture = Time.time;
    }

    void OnPreRender()
    {
        if (inProgress && Time.time > nextCapture) {
            finalizeFrame = true;
            nextCapture += 1.0f / videoFrameRate;
            BeginCaptureFrame();
        }
    }

    public IEnumerator OnPostRender()
    {
        if (finalizeFrame) {
            finalizeFrame = false;
            yield return new WaitForEndOfFrame();
            EndCaptureFrame();
        } else {
            yield return null;
        }
    }

    public void StartCapturing()
    {
        if (capturingObject == IntPtr.Zero)
            return;

        jvalue[] videoParameters = new jvalue[4];
        videoParameters[0].i = videoWidth;
        videoParameters[1].i = videoHeight;
        videoParameters[2].i = videoFrameRate;
        videoParameters[3].i = videoBitRate;
        AndroidJNI.CallVoidMethod(capturingObject, initCapturingMethodID, videoParameters);

        DateTime date = DateTime.Now;
        string fullFileName = fileName + date.ToString("ddMMyy-hhmmss.fff") + ".mp4";
        jvalue[] args = new jvalue[1];
        args[0].l = AndroidJNI.NewStringUTF(videoDir + fullFileName);
        AndroidJNI.CallVoidMethod(capturingObject, startCapturingMethodID, args);

        inProgress = true;
    }

    private void BeginCaptureFrame()
    {
        if (capturingObject == IntPtr.Zero)
            return;

        jvalue[] args = new jvalue[0];
        AndroidJNI.CallVoidMethod(capturingObject, beginCaptureFrameMethodID, args);
    }

    private void EndCaptureFrame()
    {
        if (capturingObject == IntPtr.Zero)
            return;

        jvalue[] args = new jvalue[0];
        AndroidJNI.CallVoidMethod(capturingObject, endCaptureFrameMethodID, args);
    }

    public void StopCapturing()
    {
        inProgress = false;

        if (capturingObject == IntPtr.Zero)
            return;

        jvalue[] args = new jvalue[0];
        AndroidJNI.CallVoidMethod(capturingObject, stopCapturingMethodID, args);
    }
}
[/sourcecode]
This is where most of the changes happen, but the logic behind them is simple. We pass the screen dimensions to the Capturing.java constructor; notice its new signature, (II)V. On the Java side we create a FrameBuffer. OnPreRender() is called before the camera starts rendering the scene; we bind our FrameBuffer here, so all rendering of the scene happens off-screen. OnPostRender() is called after the camera has finished rendering the scene. We wait until the end of the frame, switch back to the default on-screen framebuffer, and copy the texture directly to the screen (see the endCaptureFrame() method in Capturing.java). We can't use Graphics.Blit() because it requires Unity Pro. We use the same texture to capture the frame.
It is useful to see how your game's performance is affected by the capturing algorithm, so let's create a simple FPSCounter class:
[sourcecode language="csharp" collapse="true"]
using UnityEngine;
using System.Collections;

public class FPSCounter : MonoBehaviour
{
    public float updateRate = 4.0f; // 4 updates per sec.

    private int frameCount = 0;
    private float nextUpdate = 0.0f;
    private float fps = 0.0f;
    private GUIStyle style = new GUIStyle();

    void Start()
    {
        style.fontSize = 48;
        style.normal.textColor = Color.white;
        nextUpdate = Time.time;
    }

    void Update()
    {
        frameCount++;
        if (Time.time > nextUpdate) {
            nextUpdate += 1.0f / updateRate;
            fps = frameCount * updateRate;
            frameCount = 0;
        }
    }

    void OnGUI()
    {
        GUI.Label(new Rect(10, 110, 300, 100), "FPS: " + fps, style);
    }
}
[/sourcecode]
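The accumulation logic above is easy to verify outside Unity: each update window counts the frames rendered in it, and frames-per-window times windows-per-second gives frames per second. Here is the same idea as a plain Java sketch (FpsMeter is a hypothetical name, not part of the sample):

```java
// Hypothetical plain-Java sketch of the FPSCounter.cs accumulation logic.
class FpsMeter {
    private final float updateRate; // update windows per second
    private int frameCount = 0;
    private float nextUpdate;
    private float fps = 0.0f;

    FpsMeter(float updateRate, float now) {
        this.updateRate = updateRate;
        this.nextUpdate = now;
    }

    // Call once per rendered frame, with the current time in seconds.
    void frame(float now) {
        frameCount++;
        if (now > nextUpdate) {
            nextUpdate += 1.0f / updateRate;
            fps = frameCount * updateRate; // frames per window * windows per second
            frameCount = 0;
        }
    }

    float fps() { return fps; }
}
```

Feeding it 60 evenly spaced frames over one simulated second should report roughly 60 FPS, just as the Unity version does on-screen.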
Add this script to any object in your scene.
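The sample leaves starting and stopping the recording up to you. One simple option — a hypothetical helper, not part of the original code — is a small MonoBehaviour on the same camera that toggles the Capture component from an on-screen button, using the public inProgress, StartCapturing(), and StopCapturing() members shown above:

```csharp
using UnityEngine;

// Hypothetical helper: toggles recording via the Capture component.
[RequireComponent(typeof(Capture))]
public class CaptureToggle : MonoBehaviour
{
    private Capture capture;

    void Start()
    {
        capture = GetComponent<Capture>();
    }

    void OnGUI()
    {
        string label = capture.inProgress ? "Stop capture" : "Start capture";
        if (GUI.Button(new Rect(10, 10, 250, 80), label)) {
            if (capture.inProgress)
                capture.StopCapturing();
            else
                capture.StartCapturing();
        }
    }
}
```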
That’s all. Now Build & Run your test application for the Android platform. You can find the recorded videos in the /mnt/sdcard/DCIM/ folder of your Android device.
Known issues:
- With this approach we can’t capture any off-screen rendering (drop shadows, deferred shading and fullscreen post-effects).
Source: Intel Developer Zone (by Ilya Aleshkov, Auriga’s Engineer)