
Custom Filters with Core Image Kernel Language



If you've ever played with the Core Image filter API, you might have wondered, "What would it take to make my own filter and start my own Snapchat?" There are ways to build and chain existing Core Image filters to create custom effects, but this can be expensive compared to writing your own. When I first went down this road, I found that much of the API documentation was Objective-C only and aimed at the desktop. Translating it to both iOS and Swift proved to be quite the undertaking.

Create our custom filter


Let's jump into the deep end and take a look at a Core Image kernel file. For this post, we create a CIColorKernel that implements a haze-removal filter, in a file called HazeRemove.cikernel.

kernel vec4 hazeRemovalKernel(sampler src, __color color, float distance, float slope) {
    vec4 t;
    float d;

    d = destCoord().y * slope + distance;
    t = unpremultiply(sample(src, samplerCoord(src)));
    t = (t - d * color) / (1.0 - d);

    return premultiply(t);
}

So what's going on here? We'll break it down line by line. First, it's important to note that this is a pixel-by-pixel operation: the code runs on a single pixel at a time and returns an altered pixel. As for syntax, the kernel source language is built on top of the OpenGL Shading Language, so it has different rules than Swift or Objective-C.
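To make the pixel-at-a-time model concrete, here is a minimal sketch in plain Swift (not Core Image): a toy "kernel" is just a function from one RGBA pixel to another, and the framework's job is to run it over every pixel. All names here are made up for illustration.

```swift
// Toy model (not Core Image): a "kernel" is a function that maps one
// RGBA pixel to another; the framework applies it to every pixel.
struct Pixel {
    var r, g, b, a: Double
}

// A hypothetical per-pixel function, standing in for the compiled kernel.
func invert(_ p: Pixel) -> Pixel {
    Pixel(r: 1 - p.r, g: 1 - p.g, b: 1 - p.b, a: p.a)
}

// The "framework" side: run the kernel over a buffer of pixels.
func apply(kernel: (Pixel) -> Pixel, to image: [Pixel]) -> [Pixel] {
    image.map(kernel)
}

let image = [Pixel(r: 1, g: 0, b: 0, a: 1), Pixel(r: 0.25, g: 0.5, b: 0.75, a: 1)]
let result = apply(kernel: invert, to: image)
// result[0] is (0, 1, 1, 1): red inverted to cyan, alpha untouched
```

The real kernel below follows the same shape, just in OpenGL Shading Language with vec4 instead of a Swift struct.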

kernel vec4 hazeRemovalKernel(
On the first line we specify the kernel keyword, so the system knows that this will be handed to the CIKernel class to execute. We declare a return type of vec4, as Core Image requires that we return this type to properly map the input pixel to the output pixel.

sampler src, __color color, float distance, float slope)
Our function hazeRemovalKernel takes a CISampler object that we treat as the source pixel. __color is a color matched to the CIContext's color space, so the color looks as expected even if the user has True Tone or Night Shift turned on. We also pass in the distance and slope floats for our filter. These are just parameters that control the strength of a typical haze-removal algorithm.

Next we define a few variables we will use and modify in our routine. The first, t, is our modified pixel; we make this a vec4. In OpenGL, a vec4 is a vector type with 4 single-precision floating-point components, so in our case it holds RGBA values. Then we define a float, d, used to hold the calculated value for our haze-removal algorithm.

d = destCoord().y * slope + distance;
To determine how much haze to remove, we use a simple formula that multiplies the slope by the pixel's position and adds the distance. To get a gradient that spans the entire image, we take the submitted slope value and multiply it by destCoord().y. destCoord returns the position of the current pixel in working-space coordinates, which makes it a good base for building a gradient.

t = unpremultiply(sample(src, samplerCoord(src)));
The next thing we need to do is account for the fact that there may be some transparency in the image. So before we do the color correction, we remove the alpha to get the pure color. To do this, we use a method called unpremultiply, which takes a vec4 color. To get that color we use a method called sample, which returns a vec4 containing the color of a given pixel. sample takes the variable src, which is of type sampler, plus a vec2 containing the pixel coordinate. We get the vec2 by calling samplerCoord(src); this method takes the sampler and finds the coordinate for us.

With all this done, we now have two variables set: d is our distortion, and t holds the pure color values of the pixel we are trying to change.

t = (t - d * color) / (1.0 - d);
Now let's remove the haze! Haze removal is a fairly simple calculation in the end. t and color are both vec4 types, so they subtract component-wise with no extra work.
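To see what this line actually does, here is the same arithmetic worked through on one pixel in plain Swift, using Doubles instead of GLSL vec4s. The specific values are made up for illustration.

```swift
// The haze-removal formula on a single pixel, component-wise.
let t = (r: 0.8, g: 0.7, b: 0.6, a: 1.0)    // unpremultiplied source color
let haze = (r: 1.0, g: 1.0, b: 1.0, a: 1.0) // inputColor: white haze
let d = 0.5                                  // distortion from slope/distance

// t = (t - d * color) / (1.0 - d)
let out = (r: (t.r - d * haze.r) / (1.0 - d),
           g: (t.g - d * haze.g) / (1.0 - d),
           b: (t.b - d * haze.b) / (1.0 - d),
           a: (t.a - d * haze.a) / (1.0 - d))
// out ≈ (0.6, 0.4, 0.2, 1.0): the white haze contribution is subtracted,
// and dividing by (1 - d) re-stretches what remains back to full range.
```

Subtracting d * color removes the haze cast; the division compensates for the contrast that subtraction took away.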

return premultiply(t);
Once we have our de-hazed pixel, we need to reapply the transparency if necessary and return the result. We do this with premultiply, returning the vec4 it produces.
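If premultiplied alpha is unfamiliar, this sketch shows what the two GLSL helpers do, modeled in plain Swift: in premultiplied storage, the RGB channels are already scaled by alpha, so unpremultiply divides it back out and premultiply puts it back.

```swift
// Plain-Swift model of the GLSL unpremultiply/premultiply helpers.
func unpremultiply(_ p: (r: Double, g: Double, b: Double, a: Double))
        -> (r: Double, g: Double, b: Double, a: Double) {
    guard p.a > 0 else { return p }          // fully transparent: nothing to recover
    return (p.r / p.a, p.g / p.a, p.b / p.a, p.a)
}

func premultiply(_ p: (r: Double, g: Double, b: Double, a: Double))
        -> (r: Double, g: Double, b: Double, a: Double) {
    (p.r * p.a, p.g * p.a, p.b * p.a, p.a)
}

// A 50%-transparent mid-gray, as stored in premultiplied form:
let stored = (r: 0.25, g: 0.25, b: 0.25, a: 0.5)
let pure = unpremultiply(stored)   // (0.5, 0.5, 0.5, 0.5): the true color
let back = premultiply(pure)       // round-trips to the stored form
```

Doing the color math on the unpremultiplied values is what keeps partially transparent pixels from being corrected too weakly.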

Build our Core Image Filter

Great, we have our CIKernel file, but to use it we have to wrap it in a CIFilter so it's available to call from our app. Now, a warning: we are playing with a C-level API, and because of that, some knowledge of Swift and Objective-C interoperability is required.

import Foundation
import CoreImage

class HazeRemoveFilter: CIFilter {
    @objc dynamic var inputImage: CIImage?
    @objc dynamic var inputColor: CIColor = CIColor.white
    @objc dynamic var inputDistance: NSNumber = 0.2
    @objc dynamic var inputSlope: NSNumber = 0

First, we set up the filter by importing the essentials, namely CoreImage, and creating a new class that inherits from CIFilter. We then define the variables we want to pass into our filter, giving everything other than inputImage a default value.

    override var attributes: [String: Any] {
        return [
            kCIAttributeFilterDisplayName: "Remove Haze",

            "inputImage": [kCIAttributeIdentity: 0,
                           kCIAttributeClass: "CIImage",
                           kCIAttributeDisplayName: "Image",
                           kCIAttributeType: kCIAttributeTypeImage],

            "inputDistance": [kCIAttributeIdentity: 0,
                              kCIAttributeClass: "NSNumber",
                              kCIAttributeDisplayName: "Distance Factor",
                              kCIAttributeDefault: 0.2,
                              kCIAttributeMin: 0,
                              kCIAttributeMax: 1,
                              kCIAttributeSliderMin: 0,
                              kCIAttributeSliderMax: 0.7,
                              kCIAttributeType: kCIAttributeTypeScalar],

            "inputSlope": [kCIAttributeIdentity: 0,
                           kCIAttributeClass: "NSNumber",
                           kCIAttributeDisplayName: "Slope Factor",
                           kCIAttributeDefault: 0.2,
                           kCIAttributeSliderMin: -0.01,
                           kCIAttributeSliderMax: 0.01,
                           kCIAttributeType: kCIAttributeTypeScalar],

            kCIInputColorKey: [kCIAttributeDefault: CIColor.white]
        ]
    }

The next part is a bit tricky. Because we define some custom inputs (distance and slope), we must override CIFilter's attributes property so the system knows about them. Alongside our custom attributes, we also redefine ones CIFilter already had: the display name, image, and color. Without going into every detail, the important thing to know is that we are setting up a map so the C-level API knows how to interpret our Objective-C objects, plus any min/max info, defaults, and other constraints we want them to follow. There is much more available in Apple's documentation on filter attribute keys for CIFilter.
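As a sketch of why this metadata is useful, here is how a host app might consume it to clamp a user-supplied value. The dictionary literal mirrors the "inputDistance" entry above, but the keys are plain strings here (not the real kCIAttribute* constants) so the example runs without CoreImage; the function name is hypothetical.

```swift
// A stand-in for the "inputDistance" attribute metadata above.
let inputDistanceAttributes: [String: Double] = [
    "min": 0,
    "max": 1,
    "default": 0.2
]

// Clamp a requested value to the range the filter advertised.
func clampedDistance(_ requested: Double, attributes: [String: Double]) -> Double {
    let lo = attributes["min"] ?? 0
    let hi = attributes["max"] ?? 1
    return min(max(requested, lo), hi)
}

let value = clampedDistance(1.8, attributes: inputDistanceAttributes)
// value is 1.0: the request exceeded kCIAttributeMax, so it was clamped
```

This is the same contract UI code relies on when it builds sliders from the kCIAttributeSliderMin/Max values.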

    private lazy var hazeRemovalKernel: CIColorKernel? = {
        guard let path = Bundle.main.path(forResource: "HazeRemove", ofType: "cikernel"),
              let code = try? String(contentsOfFile: path) else {
            fatalError("Could not load HazeRemove.cikernel from bundle")
        }
        return CIColorKernel(source: code)
    }()

Let's load that kernel! But let's be lazy about it: this way, we won't load the file until the first time we actually need it. Since we wrote a CIColorKernel, we make sure our property is of the same type. To create a CIColorKernel, you pass the kernel source code as a string to its initializer, so we locate our file via Bundle and load its contents as a single string. In a production application you probably wouldn't use fatalError, but for our purposes this works fine.
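The lazy-loading pattern is worth seeing in isolation: the closure runs only on first access, and the result is cached for every access after that. This sketch uses a made-up class and a counter in place of the real file load.

```swift
// Minimal model of `lazy var`: the initializer closure runs once, on first use.
final class KernelHolder {
    static var loadCount = 0

    lazy var kernelSource: String = {
        KernelHolder.loadCount += 1
        // Stand-in for Bundle.main.path(...) + String(contentsOfFile:)
        return "kernel vec4 hazeRemovalKernel(...) { ... }"
    }()
}

let holder = KernelHolder()
let before = KernelHolder.loadCount   // 0: nothing loaded yet
_ = holder.kernelSource               // first access triggers the "load"
_ = holder.kernelSource               // cached; the closure does not run again
let after = KernelHolder.loadCount    // 1
```

That is exactly the behavior we want for the .cikernel file: no disk I/O unless the filter is actually used, and never more than once.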

    override var outputImage: CIImage? {
        if let inputImage = self.inputImage {
            return hazeRemovalKernel?.apply(
                extent: inputImage.extent,
                arguments: [
                    inputImage as Any,
                    inputColor,
                    inputDistance,
                    inputSlope
                ])
        } else {
            return nil
        }
    }
}

Finally, we get down to business. To actually filter the image, we override a computed property called outputImage. Here we take our hazeRemovalKernel, call the CIColorKernel method apply with our input image's extent and our arguments, and get back our newly filtered output.

Registering our filter

To make our shiny new filter available to Core Image clients, we must create a vendor implementing the CIFilterConstructor protocol.

import CoreImage

class CustomFiltersVendor: NSObject, CIFilterConstructor {

We set up our vendor with the amazing name CustomFiltersVendor. Since CIFilterConstructor is an Objective-C protocol, we must inherit from NSObject. New to Swift interoperability? We cover it in our advanced iOS bootcamp!

    public static let HazeRemoveFilterName = "HazeRemoveFilter"

I'm not a big fan of stringly typed names, so let's define a public static variable with our filter name so we don't mistype it anywhere by accident.

    public static func registerFilters() {
        let classAttributes = [kCIAttributeFilterCategories: ["CustomFilters"]]
        HazeRemoveFilter.registerName(
            HazeRemoveFilterName,
            constructor: CustomFiltersVendor(),
            classAttributes: classAttributes)
    }

Then we register our filter in a fairly simple way. If you have built more filters, throw them in here too by calling registerName on each of your filter classes. This is the method that tells Core Image, when this filter name is requested, which vendor is responsible for it.
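Conceptually, the registration step populates a name-to-constructor lookup table inside Core Image. This toy model in plain Swift shows the shape of that dispatch; the protocol, types, and return values are stand-ins, not the real CIFilter API.

```swift
// Toy model of Core Image's constructor lookup: a registry maps filter
// names to the vendor responsible for building them.
protocol FilterConstructor {
    func filter(withName name: String) -> String?   // stand-in for CIFilter?
}

final class Vendor: FilterConstructor {
    static let hazeRemoveFilterName = "HazeRemoveFilter"

    func filter(withName name: String) -> String? {
        switch name {
        case Vendor.hazeRemoveFilterName:
            return "HazeRemoveFilter instance"      // would be HazeRemoveFilter()
        default:
            return nil                              // unknown name: decline
        }
    }
}

// registerName(_:constructor:classAttributes:) amounts to this insertion:
var registry: [String: FilterConstructor] = [:]
registry[Vendor.hazeRemoveFilterName] = Vendor()

// CIFilter(name:) then amounts to this lookup-and-construct:
let made = registry["HazeRemoveFilter"]?.filter(withName: "HazeRemoveFilter")
let missing = registry["NoSuchFilter"]?.filter(withName: "NoSuchFilter")
```

One vendor can serve many filter names, which is why the constructor method switches on the name rather than returning a single fixed type.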

    func filter(withName name: String) -> CIFilter? {
        switch name {
        case CustomFiltersVendor.HazeRemoveFilterName:
            return HazeRemoveFilter()
        default:
            return nil
        }
    }
}

Last, we satisfy the CIFilterConstructor requirement itself: filter(withName:) switches on the requested name and returns a new instance of the matching filter, or nil if we don't recognize it.



