
How to track facial movement



One of the most innovative inventions Apple has made in the last year is the True Depth camera. The True Depth camera is a testament to the hardware and software engineers who power the secure Face ID facial recognition system. As developers, the True Depth camera opens up a world of opportunities for us, especially in face-based interaction.

Before we begin this ARKit tutorial, let me quickly brief you on the different parts of the camera. Like most front-facing iPhone/iPad cameras, the True Depth camera comes with a microphone, a 7-megapixel camera, an ambient light sensor, a proximity sensor, and a speaker. What sets the True Depth camera apart is the addition of a dot projector, a flood illuminator, and an infrared camera.

The dot projector projects more than 30,000 invisible dots onto your face to create a depth map (you will see this later in the tutorial). The infrared camera reads the dot pattern, captures an infrared image, and then sends the data to the Secure Enclave in the A12 Bionic chip to confirm a match. Finally, the flood illuminator emits invisible infrared light to help identify your face, even in the dark.

These parts come together to create some magical experiences like Animoji and Memoji. Any special effect that requires a 3D model of the user's face and head can rely on the True Depth camera.

Introduction to the Demo Project

I think it's important for developers to learn how to use the True Depth camera so they can perform face tracking and create amazing face-based experiences for users. In this tutorial, I will show you how we can use those 30,000 dots to recognize different facial movements using ARFaceTrackingConfiguration, which comes with the ARKit framework.

The final result will look like this:

  How to detect and track the user's face using ARKit 1

Let's get started!

You must run this project on an iPhone X, XS, or XR, or an iPad Pro (3rd generation), because these are the only devices with a True Depth camera. We also use Swift 5 and Xcode 10.2.

Editor's Note: If you are new to ARKit, please refer to the ARKit guides.

Creating an ARKit Face Tracking Demo

First, open Xcode and create a new Xcode project. Under Templates, make sure to select Augmented Reality App under iOS.

Then enter the name of your project. I simply named mine True Depth. Make sure the language is set to Swift and the Content Technology to SceneKit.

  How to detect and track the user's face using ARKit 2

Go to Main.storyboard. It should be a single view with an ARSCNView already connected to an outlet in your code.

  How to detect and track the user's face using ARKit 3

What we need to do is very simple: add a UIView with a UILabel inside it on top of that view. The label will inform users of the facial expressions they are making.

Drag and drop a UIView into the ARSCNView. Now let's set the constraints. Set the width to 240 pt and the height to 120 pt. Set the left and bottom constraints to 20 pt.

  How to detect and track the user's face using ARKit 4

For design purposes, let's set the view's alpha to 0.8. Now drag a UILabel into the view you just added. Pin it with constraints of 8 points on all sides, as shown below.

  How to detect and track the user's face using ARKit 5

Finally, set the label's text alignment to centered. Your final storyboard should look like this.

  How to detect and track the user's face using ARKit 6

Now let's connect the IBOutlets to our ViewController.swift file. Switch to the assistant editor. Control-click the UIView and the UILabel and drag them into ViewController.swift to create the IBOutlets.

How to detect and track the user's face using ARKit 7

You should create two outlets: faceLabel and labelView .

  How to detect and track the user's face using ARKit 8
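For reference, a minimal sketch of how the top of ViewController.swift might look at this point (the sceneView outlet comes from the Augmented Reality App template):

```swift
import UIKit
import SceneKit
import ARKit

class ViewController: UIViewController, ARSCNViewDelegate {

    // Outlet created by the Augmented Reality App template.
    @IBOutlet var sceneView: ARSCNView!

    // Outlets we just connected from the storyboard.
    @IBOutlet weak var faceLabel: UILabel!
    @IBOutlet weak var labelView: UIView!
}
```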

Creating a Face Mesh

Let's clean up the code a bit. Since we chose the Augmented Reality App as our template, there is some code we don't need. Change the viewDidLoad function to this:
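Here is a minimal sketch of what that viewDidLoad might look like, assuming the labelView outlet from above and the template's sceneView outlet (the corner radius of 10 is just a design choice):

```swift
override func viewDidLoad() {
    super.viewDidLoad()

    // 1. Round the corners of the label's container view (design choice).
    labelView.layer.cornerRadius = 10

    sceneView.delegate = self
    sceneView.showsStatistics = true

    // 2. Face tracking requires the True Depth camera, so stop with an
    //    error if this device does not support ARFaceTrackingConfiguration.
    guard ARFaceTrackingConfiguration.isSupported else {
        fatalError("Face tracking is not supported on this device")
    }
}
```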

With the template, our code loads a 3D scene. However, we do not need this scene, so we delete it. At this point, you can delete the art.scnassets folder in the project navigator. Finally, we add two snippets of code to our viewDidLoad method.

  1. First, we round the corners of labelView . This is more of a design choice.
  2. Then we check whether the device supports ARFaceTrackingConfiguration. This is the configuration for the face-tracking AR session we use to create the face mesh. If we don't check whether the device supports it, our app will crash. If the device does not support the configuration, we present an error.

Then we change a line in the viewWillAppear function. Change the constant configuration to ARFaceTrackingConfiguration(). Your code should look like this now.
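A sketch of those session methods, assuming the template's viewWillDisappear is left unchanged:

```swift
override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)

    // Run a face tracking session instead of the template's world tracking one.
    let configuration = ARFaceTrackingConfiguration()
    sceneView.session.run(configuration)
}

override func viewWillDisappear(_ animated: Bool) {
    super.viewWillDisappear(animated)

    // Pause the session when the view goes away.
    sceneView.session.pause()
}
```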

  How to detect and track the user's face using ARKit 9

Next, we need to add the ARSCNViewDelegate methods. Add the following code under // MARK: - ARSCNViewDelegate .
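A sketch of that delegate method (the .lines fill mode matches the wireframe look described below):

```swift
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    // Create a face geometry backed by the scene view's Metal device.
    let faceMesh = ARSCNFaceGeometry(device: sceneView.device!)

    // Wrap the geometry in a node and render it as a wireframe of lines.
    let node = SCNNode(geometry: faceMesh)
    node.geometry?.firstMaterial?.fillMode = .lines
    return node
}
```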

This code executes when the ARSCNView renders a node for a new face anchor. First, we create a face geometry from sceneView and assign it to the constant faceMesh. Then we attach this geometry to an SCNNode. Finally, we set the material of the node. For most 3D objects, the material is usually the color or texture of the object.

For the face mesh, you can choose between two fill modes – a filled surface or a wireframe of lines. I prefer the lines, so I set fillMode = .lines, but you can use whichever you prefer. Your code should look like this now.

  How to detect and track the user's face using ARKit 10

If you run the app, you should see something like this.

  face-tracking archit

Updating the Face Mesh

You may notice that the mesh does not update when you change your facial features (blinking, smiling, yawning, etc.). This is because we have to add the renderer(_:didUpdate:) method under the renderer(_:nodeFor:) method.
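A sketch of that update method, assuming the node created in renderer(_:nodeFor:) above:

```swift
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    // Make sure we are dealing with a face anchor and our face geometry.
    guard let faceAnchor = anchor as? ARFaceAnchor,
          let faceGeometry = node.geometry as? ARSCNFaceGeometry else {
        return
    }

    // Refresh the mesh with the latest face topology.
    faceGeometry.update(from: faceAnchor.geometry)
}
```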

This code runs every time the sceneView updates. First, we define faceAnchor as the anchor for the face detected in the sceneView. This anchor contains information about the pose, topology, and expression of the face detected in the face-tracking AR session. We also define the constant faceGeometry, which is the topology of the detected face. Using these two constants, we update the faceGeometry every time.

Run the code again. Now you will see the mesh update every time you change your facial features, all running at 60 fps.

  How to detect and track the user's face using ARKit 11

Analyzing Facial Features

First, let's create a variable at the top of the file.
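Something as simple as an empty string works here; the delegate methods will write into it and the label will read from it:

```swift
// Holds the text describing the detected facial expression.
var analysis = ""
```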

Then type the following function at the end of the file.
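A sketch of such a function, using the four blend shape coefficients and the 0.9 / 0.1 thresholds discussed below; the message strings are just placeholders you can change:

```swift
func expression(anchor: ARFaceAnchor) {
    // Each blend shape coefficient ranges from 0.0 (neutral) to 1.0 (fully expressed).
    let smileLeft = anchor.blendShapes[.mouthSmileLeft]
    let smileRight = anchor.blendShapes[.mouthSmileRight]
    let cheekPuffValue = anchor.blendShapes[.cheekPuff]?.decimalValue ?? 0.0
    let tongueOutValue = anchor.blendShapes[.tongueOut]?.decimalValue ?? 0.0

    self.analysis = ""

    // A smile uses the combined probability of both sides of the mouth.
    if ((smileLeft?.decimalValue ?? 0.0) + (smileRight?.decimalValue ?? 0.0)) > 0.9 {
        self.analysis += "You are smiling. "
    }

    if cheekPuffValue > 0.1 {
        self.analysis += "Your cheeks are puffed. "
    }

    if tongueOutValue > 0.1 {
        self.analysis += "Don't stick your tongue out! "
    }
}
```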

The above function takes an ARFaceAnchor as a parameter.

  1. blendShapes is a dictionary of named coefficients representing the detected facial expression in terms of the movement of specific facial features. Apple offers over 50 coefficients that detect different facial features. For our purposes, we use only 4: mouthSmileLeft, mouthSmileRight, cheekPuff, and tongueOut.
  2. We take the coefficients and check the probability that the face is performing these expressions. To detect a smile, we add the probabilities of both the right and left sides of the mouth. I found that a threshold of 0.9 for the smile and 0.1 for the cheek and tongue works best.

We take the probability values and build up the analysis string accordingly.

Now that we have created our function, let's update our renderer(_:didUpdate:) method.
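A sketch of the updated method, calling expression(anchor:) and pushing the result to the label on the main thread:

```swift
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let faceAnchor = anchor as? ARFaceAnchor,
          let faceGeometry = node.geometry as? ARSCNFaceGeometry else {
        return
    }

    faceGeometry.update(from: faceAnchor.geometry)

    // Analyze the blend shapes for this frame.
    expression(anchor: faceAnchor)

    // UI updates must happen on the main thread.
    DispatchQueue.main.async {
        self.faceLabel.text = self.analysis
    }
}
```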

We run the expression method every time the sceneView updates. Since the function sets the analysis string, we can finally set the text of faceLabel to the analysis string.

Now we're all done coding! Run the code and you should get the same result we saw at the beginning.

  How to detect and track the user's face using ARKit 12

Conclusion

There is a lot of potential behind developing face-based experiences using ARKit. Games and apps can use the True Depth camera for a variety of purposes. One of my favorite apps is Hawkeye Access, a browser you can control with your eyes.

For more information on the True Depth camera, check out Apple's video Face Tracking with ARKit. You can download the final project here.

