From the previous section on how the filter chain works, we know that a Source is always the head of the chain: it must inherit from GPUImageOutput, and it drives the chain by passing its outputFramebuffer on to its targets.
The outputFramebuffer in GPUImageOutput is of type GPUImageFramebuffer, and its job is to manage the texture produced by a GPUImageOutput (whether that is a Source or a Filter). targets is an array holding objects that conform to GPUImageInput; the GPUImageOutput hands the outputFramebuffer it generated to each GPUImageInput target, so every GPUImageInput works on the texture that the upstream GPUImageOutput has already processed. This is the key to the filter chain.
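As a rough sketch of what that handoff looks like in code (condensed from GPUImageOutput's pattern, not the library's exact implementation; setInputFramebuffer:atIndex: and newFrameReadyAtTime:atIndex: are the GPUImageInput protocol methods, and targets/targetTextureIndices are GPUImageOutput's instance variables):

// Simplified sketch: push the freshly produced framebuffer down the chain.
- (void)notifyTargetsAboutNewFrameAtTime:(CMTime)frameTime
{
    for (id<GPUImageInput> currentTarget in targets)
    {
        NSInteger indexOfObject = [targets indexOfObject:currentTarget];
        NSInteger textureIndex = [[targetTextureIndices objectAtIndex:indexOfObject] integerValue];

        // Hand the texture we just produced to the downstream target...
        [currentTarget setInputFramebuffer:outputFramebuffer atIndex:textureIndex];
        // ...and tell it a new frame is ready so it can run its own processing.
        [currentTarget newFrameReadyAtTime:frameTime atIndex:textureIndex];
    }
}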
GPUImageRawDataInput generates the outputFramebuffer from raw binary image data. When initializing it, you must specify the pixel format of that data (GPUPixelFormat). Its core method is - (void)uploadBytes:(GLubyte *)bytesToUpload, which uploads the texture data with glTexImage2D:
- (void)uploadBytes:(GLubyte *)bytesToUpload;
{
    [GPUImageContext useImageProcessingContext];

    // TODO: This probably isn't right, and will need to be corrected
    outputFramebuffer = [[GPUImageContext sharedFramebufferCache] fetchFramebufferForSize:uploadedImageSize textureOptions:self.outputTextureOptions onlyTexture:YES];

    // Bind the framebuffer's texture and upload the raw bytes into it.
    glBindTexture(GL_TEXTURE_2D, [outputFramebuffer texture]);
    glTexImage2D(GL_TEXTURE_2D, 0, _pixelFormat, (int)uploadedImageSize.width, (int)uploadedImageSize.height, 0, (GLint)_pixelFormat, (GLenum)_pixelType, bytesToUpload);
}
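For reference, a minimal usage sketch, assuming you already have a BGRA byte buffer (rawBytes) and its size (imageSize); GPUImageRawDataInput, its initializer, processData, and GPUImageSepiaFilter are from the library:

// Feed a BGRA byte buffer into a filter chain.
GPUImageRawDataInput *rawInput =
    [[GPUImageRawDataInput alloc] initWithBytes:rawBytes
                                           size:imageSize
                                    pixelFormat:GPUPixelFormatBGRA
                                           type:GPUPixelTypeUByte];
GPUImageSepiaFilter *filter = [[GPUImageSepiaFilter alloc] init];
[rawInput addTarget:filter];
// processData pushes the uploaded texture down to the targets.
[rawInput processData];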
GPUImageUIElement captures the content of a view/layer by rendering it with renderInContext:. Its core method is - (void)updateWithTimestamp:(CMTime)frameTime, which draws the layer into a bitmap context and then uploads the resulting pixels as a texture with glTexImage2D:
- (void)updateWithTimestamp:(CMTime)frameTime;
{
    [GPUImageContext useImageProcessingContext];

    CGSize layerPixelSize = [self layerSizeInPixels];

    GLubyte *imageData = (GLubyte *) calloc(1, (int)layerPixelSize.width * (int)layerPixelSize.height * 4);

    CGColorSpaceRef genericRGBColorspace = CGColorSpaceCreateDeviceRGB();
    CGContextRef imageContext = CGBitmapContextCreate(imageData, (int)layerPixelSize.width, (int)layerPixelSize.height, 8, (int)layerPixelSize.width * 4, genericRGBColorspace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    // CGContextRotateCTM(imageContext, M_PI_2);
    CGContextTranslateCTM(imageContext, 0.0f, layerPixelSize.height);
    CGContextScaleCTM(imageContext, layer.contentsScale, -layer.contentsScale);
    // CGContextSetBlendMode(imageContext, kCGBlendModeCopy); // From Technical Q&A QA1708: http://developer.apple.com/library/ios/#qa/qa1708/_index.html

    [layer renderInContext:imageContext];

    CGContextRelease(imageContext);
    CGColorSpaceRelease(genericRGBColorspace);

    // TODO: This may not work
    outputFramebuffer = [[GPUImageContext sharedFramebufferCache] fetchFramebufferForSize:layerPixelSize textureOptions:self.outputTextureOptions onlyTexture:YES];

    glBindTexture(GL_TEXTURE_2D, [outputFramebuffer texture]);
    // no need to use self.outputTextureOptions here, we always need these texture options
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (int)layerPixelSize.width, (int)layerPixelSize.height, 0, GL_BGRA, GL_UNSIGNED_BYTE, imageData);
...
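A minimal usage sketch, assuming overlayView is an existing UIView (for example a label or watermark); initWithView:, update, and GPUImageGaussianBlurFilter are from the library:

// Render a UIView into the filter chain.
GPUImageUIElement *uiElement = [[GPUImageUIElement alloc] initWithView:overlayView];
GPUImageGaussianBlurFilter *blur = [[GPUImageGaussianBlurFilter alloc] init];
[uiElement addTarget:blur];
// update (or updateWithTimestamp:) re-renders the layer and re-uploads the texture,
// so call it whenever the view's contents change.
[uiElement update];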
GPUImageVideoCamera uses AVCaptureVideoDataOutput to obtain a continuous stream of video frames. In the delegate method captureOutput:didOutputSampleBuffer:fromConnection: it receives a CMSampleBufferRef, checks whether the captured frames are encoded as YUV, and then calls different code paths to upload the texture data. The core method is - (void)processVideoSampleBuffer:(CMSampleBufferRef)sampleBuffer.
For YUV frames it calls CVOpenGLESTextureCacheCreateTextureFromImage to create the Y and UV textures separately. Texture creation roughly follows this chain: CMSampleBufferRef -> CVImageBufferRef -> CVOpenGLESTextureRef -> Texture.
// This branch in processVideoSampleBuffer handles YUV-encoded frames;
// the actual texture-creation code lives inside it.
if ([GPUImageContext supportsFastTextureUpload] && captureAsYUV)
{
    // the Y and UV textures are created here
}
Otherwise it falls back to glTexImage2D to upload the texture.
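The YUV branch boils down to something like the following sketch (simplified from what GPUImage does; videoTextureCache is assumed to be a CVOpenGLESTextureCacheRef created earlier with CVOpenGLESTextureCacheCreate, and error handling is omitted):

// Create one texture per plane of the CVPixelBuffer using the Core Video texture cache.
CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(sampleBuffer);
int bufferWidth  = (int)CVPixelBufferGetWidth(cameraFrame);
int bufferHeight = (int)CVPixelBufferGetHeight(cameraFrame);

CVOpenGLESTextureRef luminanceTextureRef = NULL;
CVOpenGLESTextureRef chrominanceTextureRef = NULL;

// Plane 0: the Y (luminance) plane, one byte per pixel.
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, videoTextureCache,
    cameraFrame, NULL, GL_TEXTURE_2D, GL_LUMINANCE,
    bufferWidth, bufferHeight, GL_LUMINANCE, GL_UNSIGNED_BYTE, 0, &luminanceTextureRef);

// Plane 1: the interleaved UV (chrominance) plane, at half resolution.
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, videoTextureCache,
    cameraFrame, NULL, GL_TEXTURE_2D, GL_LUMINANCE_ALPHA,
    bufferWidth / 2, bufferHeight / 2, GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, 1, &chrominanceTextureRef);

GLuint luminanceTexture   = CVOpenGLESTextureGetName(luminanceTextureRef);
GLuint chrominanceTexture = CVOpenGLESTextureGetName(chrominanceTextureRef);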
GPUImageMovie's entry point is - (void)startProcessing. When it works from an asset, it goes through the processAsset method, which relies on AVAssetReaderOutput's copyNextSampleBuffer to obtain a CMSampleBufferRef; once it has the sample buffer, it handles it the same way GPUImageVideoCamera does. When initialized with a URL, it first builds an AVURLAsset and then goes through the same processAsset flow. When initialized with an AVPlayerItem, the processPlayerItem method reads frames one by one through AVPlayerItemVideoOutput, obtaining a CVPixelBufferRef via copyPixelBufferForItemTime, and then runs the same YUV-check flow as above to generate the texture data.
GPUImageMovie combined with AVFoundation can be used to build a video-editing feature: GPUImageMovie provides the live preview and the filters, while AVFoundation handles the editing itself. I put together a demo of this, which you can download at https://github.com/maple1994/MPVideoEditDemo
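The reading loop in processAsset is, in essence, something like the sketch below (heavily simplified; asset is assumed to be an AVAsset that is already loaded, and processMovieFrame: stands in for GPUImageMovie's sample-buffer handling):

// Pull sample buffers from an AVAssetReaderTrackOutput and hand each one to the
// same processing path the camera uses. Setup and error handling are abbreviated.
NSError *error = nil;
AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:&error];

NSDictionary *outputSettings = @{
    (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)
};
AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];
AVAssetReaderTrackOutput *readerOutput =
    [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack outputSettings:outputSettings];
[reader addOutput:readerOutput];
[reader startReading];

while (reader.status == AVAssetReaderStatusReading)
{
    CMSampleBufferRef sampleBuffer = [readerOutput copyNextSampleBuffer];
    if (sampleBuffer == NULL)
    {
        break;
    }
    // From here on it is the same as the camera path:
    // CMSampleBufferRef -> CVPixelBufferRef -> (YUV) textures.
    [self processMovieFrame:sampleBuffer];
    CMSampleBufferRelease(sampleBuffer);
}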
GPUImageTextureInput is initialized from a texture that already exists.
GPUImagePicture generates its texture from a loaded image; the core of that implementation is image -> CGImageRef -> texture.
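A rough sketch of that picture path, assuming image is a UIImage and outputFramebuffer has been fetched as in the earlier snippets (GPUImagePicture itself has extra logic, such as size limits and alpha handling, that is omitted here):

// Redraw the CGImage into a known BGRA layout, then upload the bytes with glTexImage2D.
CGImageRef newImageSource = [image CGImage];
size_t width  = CGImageGetWidth(newImageSource);
size_t height = CGImageGetHeight(newImageSource);

GLubyte *imageData = (GLubyte *)calloc(1, width * height * 4);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef imageContext = CGBitmapContextCreate(imageData, width, height, 8, width * 4,
    colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGContextDrawImage(imageContext, CGRectMake(0.0, 0.0, width, height), newImageSource);
CGContextRelease(imageContext);
CGColorSpaceRelease(colorSpace);

glBindTexture(GL_TEXTURE_2D, [outputFramebuffer texture]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (int)width, (int)height, 0, GL_BGRA, GL_UNSIGNED_BYTE, imageData);
free(imageData);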
To sum it up in one sentence, the job of GPUImageOutput is to produce texture data and bind that texture to a GPUImageFramebuffer. The common ways of generating a texture can be summarized as the following three:
1. CMSampleBuffer -> CVImageBuffer/CVPixelBuffer -> CVOpenGLESTextureCacheCreateTextureFromImage creates the texture (YUV)
2. UIView -> layer (rendered into a bitmap context) -> glTexImage2D
3. UIImage -> CGImageRef -> glTexImage2D