I'm trying to build an app that simulates long-exposure photography. The idea is to grab the current frame from the webcam and composite it onto a canvas. Over time, the photo "exposes," getting brighter and brighter. (See http://www.chromeexperiments.com/detail/light-paint-live-mercury/?f=)
I have a shader that works great. It's just like the "add" blend mode in Photoshop. The problem is that I can't get it to recycle the previous frame.
I thought it would be something simple like renderer.autoClear = false; but that option seems to do nothing in this context.
Here's the code that applies the shader using THREE.EffectComposer:
onWebcamInit: function () {
    var $stream = $("#user-stream"),
        width = $stream.width(),
        height = $stream.height(),
        near = .1,
        far = 10000;

    this.renderer = new THREE.WebGLRenderer();
    this.renderer.setSize(width, height);
    this.renderer.autoClear = false;

    this.scene = new THREE.Scene();

    this.camera = new THREE.OrthographicCamera(width / -2, width / 2, height / 2, height / -2, near, far);
    this.scene.add(this.camera);

    this.$el.append(this.renderer.domElement);

    this.frameTexture = new THREE.Texture(document.querySelector("#webcam"));
    this.compositeTexture = new THREE.Texture(this.renderer.domElement);

    this.composer = new THREE.EffectComposer(this.renderer);

    // same effect with or without this line
    // this.composer.addPass(new THREE.RenderPass(this.scene, this.camera));

    var addEffect = new THREE.ShaderPass(addShader);
    addEffect.uniforms['exposure'].value = .5;
    addEffect.uniforms['frameTexture'].value = this.frameTexture;
    addEffect.renderToScreen = true;
    this.composer.addPass(addEffect);

    this.plane = new THREE.Mesh(new THREE.PlaneGeometry(width, height, 1, 1), new THREE.MeshBasicMaterial({ map: this.compositeTexture }));
    this.scene.add(this.plane);

    this.frameTexture.needsUpdate = true;
    this.compositeTexture.needsUpdate = true;

    new FrameImpulse(this.renderFrame);
},

renderFrame: function () {
    this.frameTexture.needsUpdate = true;
    this.compositeTexture.needsUpdate = true;
    this.composer.render();
}
Here's the shader. Nothing fancy:
uniforms: {
    "tDiffuse": { type: "t", value: null },
    "frameTexture": { type: "t", value: null },
    "exposure": { type: "f", value: 1.0 }
},

vertexShader: [
    "varying vec2 vUv;",
    "void main() {",
    "    vUv = uv;",
    "    gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );",
    "}"
].join("\n"),

fragmentShader: [
    "uniform sampler2D frameTexture;",
    "uniform sampler2D tDiffuse;",
    "uniform float exposure;",
    "varying vec2 vUv;",
    "void main() {",
    "    vec4 n = texture2D(frameTexture, vUv);",
    "    vec4 o = texture2D(tDiffuse, vUv);",
    "    vec3 sum = n.rgb + o.rgb;",
    "    gl_FragColor = vec4(mix(o.rgb, sum.rgb, exposure), 1.0);",
    "}"
].join("\n")
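For reference, the per-pixel math the fragment shader performs can be sketched in plain JavaScript (channel values in [0, 1]; `mix` mirrors GLSL's linear interpolation, and the function name `addBlendChannel` is mine, not part of the shader):

```javascript
// GLSL mix(a, b, t) = a * (1 - t) + b * t, applied per channel.
function mix(a, b, t) {
    return a * (1 - t) + b * t;
}

// One channel of the shader: blend the accumulated value `o`
// toward the additive sum `o + n`, weighted by `exposure`.
// Note: not clamped here; GLSL clamps gl_FragColor on output.
function addBlendChannel(o, n, exposure) {
    var sum = o + n;
    return mix(o, sum, exposure);
}
```

So exposure = 0 leaves the accumulated frame unchanged, and exposure = 1 is a full Photoshop-style "add."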
Posted on 2013-11-13 12:58:38
To achieve this kind of feedback effect, you have to alternate between writing to two separate instances of WebGLRenderTarget. Otherwise, the framebuffer gets overwritten. I'm not entirely sure why this happens... but this is the solution.
init:
this.rt1 = new THREE.WebGLRenderTarget(512, 512, { minFilter: THREE.LinearFilter, magFilter: THREE.NearestFilter, format: THREE.RGBFormat });
this.rt2 = new THREE.WebGLRenderTarget(512, 512, { minFilter: THREE.LinearFilter, magFilter: THREE.NearestFilter, format: THREE.RGBFormat });
render:
this.renderer.render(this.scene, this.camera);
this.renderer.render(this.scene, this.camera, this.rt1, false);
// swap buffers
var a = this.rt2;
this.rt2 = this.rt1;
this.rt1 = a;
this.shaders.add.uniforms.tDiffuse.value = this.rt2;
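The swap above is plain double buffering: the shader reads last frame's target while this frame is written into the other one. A minimal framework-free sketch of that pattern (the `PingPong` helper name is mine, not a three.js API):

```javascript
// Minimal ping-pong buffer pair: render into `write` while the
// shader samples `read` (last frame's result), then swap roles.
function PingPong(a, b) {
    this.write = a; // target being rendered into this frame
    this.read = b;  // target bound as a texture for the shader
}

PingPong.prototype.swap = function () {
    var tmp = this.read;
    this.read = this.write;
    this.write = tmp;
};
```

Each frame you would render into `write`, call `swap()`, then bind `read` to the `tDiffuse` uniform, exactly as the rt1/rt2 swap does above.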
Posted on 2014-01-03 09:49:43
This is essentially equivalent to posit labs' answer, but I had success with a more streamlined solution: I create an EffectComposer with only the ShaderPass I want to recycle, then swap renderTargets with each render.
init:
THREE.EffectComposer.prototype.swapTargets = function () {
    var tmp = this.renderTarget2;
    this.renderTarget2 = this.renderTarget1;
    this.renderTarget1 = tmp;
};

...

composer = new THREE.EffectComposer(renderer,
    new THREE.WebGLRenderTarget(512, 512, { minFilter: THREE.LinearFilter, magFilter: THREE.NearestFilter, format: THREE.RGBFormat })
);

var addEffect = new THREE.ShaderPass(addShader, 'frameTexture');
addEffect.renderToScreen = true;
composer.addPass(addEffect);
render:
composer.render();
composer.swapTargets();
A secondary EffectComposer can then take one of the two renderTargets and push it to the screen, or transform it further.
Also note that I declare "frameTexture" as the textureID when initializing the ShaderPass. This lets the ShaderPass know to update the frameTexture uniform with the result of the previous pass.
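The second argument to the three.js ShaderPass constructor names the uniform that receives the previous pass's output (it defaults to "tDiffuse"). A simplified framework-free sketch of that behavior (`ShaderPassSketch` is my illustrative name, not the actual library source):

```javascript
// Simplified sketch of how a ShaderPass uses its textureID: before
// drawing, the uniform named by textureID is fed the read buffer.
function ShaderPassSketch(uniforms, textureID) {
    this.uniforms = uniforms;
    this.textureID = textureID || "tDiffuse";
}

ShaderPassSketch.prototype.render = function (readBufferTexture) {
    if (this.uniforms[this.textureID]) {
        this.uniforms[this.textureID].value = readBufferTexture;
    }
    // ...the full-screen quad would then be drawn with these uniforms.
};
```

Passing 'frameTexture' as the textureID therefore routes the composer's read buffer into the shader's frameTexture uniform instead of tDiffuse.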
Posted on 2013-11-16 06:01:55
Try using this:
this.renderer = new THREE.WebGLRenderer( { preserveDrawingBuffer: true } );
https://stackoverflow.com/questions/19872524