Introduction

Slacked off again: I promised weekly updates, and another month has slipped by. The previous two posts covered viewshed analysis and video projection, and both relied on ShadowMap (shadow mapping), so I figured a dedicated post on shadow maps was in order. There's another reason too: the video projection in that post was implemented with Cesium's built-in Entity approach, which involved no real technique. At the end of that article I said it could be done with a ShadowMap instead, on the same principle as viewshed analysis, so today I'll walk you through that implementation.

Expected Result

As usual, let's look at the expected result first. Since this post is about shadow maps, we shouldn't settle for just a video texture; I've included three screenshots, one per texture type: image, video, and color. Sharp-eyed readers will notice that the color texture looks exactly like viewshed analysis. Heh, it is, because the principle is the same.

Implementation Principle

As mentioned above, the implementation principle is the same as for viewshed analysis. For the concepts involved (ShadowMap, Frustum, Camera and so on), please refer to the earlier post Cesium深入浅出之可视域分析; I won't repeat them here. The one point worth stating is that a shadow map supports different textures, so all we need to do is create a ShadowMap and hand it whichever type of Texture we want.

Implementation

The workflow is also much like viewshed analysis: class → create Camera → create ShadowMap → create PostProcessStage → create Frustum, with just one extra step of setting the Texture. The real core, of course, lives in the shader.
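The wiring order above can be laid out as a runnable skeleton. Every method body here is a placeholder of my own; in the real class each one builds the Cesium object it is named after:

```javascript
// Skeleton of the setup order only; the real methods create Cesium's
// Camera, ShadowMap, PostProcessStage and Frustum, plus the texture.
class VideoShedSkeleton {
    constructor() {
        this.steps = [];
        this.createCamera();
        this.createShadowMap();
        this.createPostProcessStage();
        this.createFrustum();
        this.setTexture(); // the one extra step versus viewshed analysis
    }
    createCamera() { this.steps.push('camera'); }
    createShadowMap() { this.steps.push('shadowMap'); }
    createPostProcessStage() { this.steps.push('postProcessStage'); }
    createFrustum() { this.steps.push('frustum'); }
    setTexture() { this.steps.push('texture'); }
}
```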

Since the code overlaps heavily with the viewshed post, I'll only show the core parts rather than everything. If you have questions, feel free to leave a comment, message me directly, or ask in the group; I'll answer whenever I see them.

Constructor

// Field declarations
/** Texture type: VIDEO, IMAGE or COLOR */
#textureType;
/** Texture URL (used for video and image textures) */
#textureUrl;
/** Texture */
#texture;
/** Viewpoint position */
#viewPosition;
/** Far end of the view direction (optional if a view distance is set) */
#viewPositionEnd;
// ...

// Constructor
constructor(viewer, options) {
    super(viewer);

    // Texture type
    this.#textureType = options.textureType;
    // Texture URL (required when the texture is a video or an image)
    this.#textureUrl = options.textureUrl;
    // Viewpoint position
    this.#viewPosition = options.viewPosition;
    // Far end of the view direction (optional if a view distance is set)
    this.#viewPositionEnd = options.viewPositionEnd;

    // ...

    switch (this.#textureType) {
        default:
        case VideoShed.TEXTURE_TYPE.VIDEO:
            this.activeVideo();
            break;
        case VideoShed.TEXTURE_TYPE.IMAGE:
            this.activePicture();
            break;
    }
    this.#refresh();
    this.viewer.scene.primitives.add(this);
}

The texture type determines the setup: video and image projection need a texture file, whose URL is set at initialization, while color projection needs no extra work.
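The `TEXTURE_TYPE` constant itself isn't shown in the excerpts; here's a plausible sketch. The numeric values are my assumption, chosen so that video and image fall into the shader's `helsing_textureType < 2` branch seen later:

```javascript
// Hypothetical TEXTURE_TYPE enum: video and image share the
// texture-sampling branch in the shader (type < 2); color does not.
class VideoShed {
    static TEXTURE_TYPE = Object.freeze({
        VIDEO: 0,
        IMAGE: 1,
        COLOR: 2
    });
}
```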

Video Texture

/**
 * Project a video.
 *
 * @author Helsing
 * @date 2020/09/19
 * @param {String} textureUrl The video URL.
 */
activeVideo(textureUrl = undefined) {
    if (!textureUrl) {
        textureUrl = this.#textureUrl;
    } else {
        this.#textureUrl = textureUrl;
    }
    const video = this.#createVideoElement(textureUrl);
    const that = this;
    if (video /*&& !video.paused*/) {
        // Rebuild the texture from the current video frame on every clock tick.
        this.#activeVideoListener || (this.#activeVideoListener = function () {
            that.#texture && that.#texture.destroy();
            that.#texture = new Texture({
                context: that.viewer.scene.context,
                source: video,
                width: 1,
                height: 1,
                pixelFormat: PixelFormat.RGBA,
                pixelDatatype: PixelDatatype.UNSIGNED_BYTE
            });
        });
        that.viewer.clock.onTick.addEventListener(this.#activeVideoListener);
    }
}

The video texture comes from an HTML5 video element, which has to be created dynamically. One caveat: releasing a video element is tricky, and the usual approaches don't reclaim it completely, so it's best to reuse one element rather than creating a new one each time.
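`#createVideoElement` isn't shown above; here is a minimal sketch of the reuse idea just described. The element id, the attribute choices, and the injectable `doc` parameter are all my assumptions for illustration:

```javascript
// Hypothetical sketch of #createVideoElement: reuse one hidden <video>
// element instead of creating a new one per call, since released
// video elements are hard to fully reclaim.
function createVideoElement(url, doc = globalThis.document) {
    let video = doc.getElementById('helsing-video-shed');
    if (!video) {
        video = doc.createElement('video');
        video.id = 'helsing-video-shed';
        video.style.display = 'none';
        video.muted = true;              // most browsers require muted for autoplay
        video.autoplay = true;
        video.loop = true;
        video.crossOrigin = 'anonymous'; // needed to sample the frame as a WebGL texture
        doc.body.appendChild(video);
    }
    video.src = url; // reusing the element: just swap the source
    return video;
}
```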

Image Texture


The image texture is loaded with an Image object; note that the texture must be set inside the asynchronous onload callback.

/**
 * Project an image.
 *
 * @author Helsing
 * @date 2020/09/19
 * @param {String} textureUrl The image URL.
 */
activePicture(textureUrl = undefined) {
    this.deActiveVideo();
    if (!textureUrl) {
        textureUrl = this.#textureUrl;
    } else {
        this.#textureUrl = textureUrl;
    }
    const that = this;
    const img = new Image();
    // The texture must be created in the async callback, once the image data is available.
    img.onload = function () {
        that.#textureType = VideoShed.TEXTURE_TYPE.IMAGE;
        that.#texture = new Texture({
            context: that.viewer.scene.context,
            source: img
        });
    };
    img.onerror = function () {
        console.log('Failed to load image: ' + textureUrl);
    };
    img.src = textureUrl;
}

PostProcessStage


/**
 * Create the post-process stage.
 *
 * @author Helsing
 * @date 2020/09/19
 * @ignore
 */
#addPostProcessStage() {
    const that = this;
    const bias = that.#shadowMap._isPointLight ? that.#shadowMap._pointBias : that.#shadowMap._primitiveBias;
    const postStage = new PostProcessStage({
        fragmentShader: VideoShedFS,
        uniforms: {
            helsing_textureType: function () {
                return that.#textureType;
            },
            helsing_texture: function () {
                return that.#texture;
            },
            helsing_alpha: function () {
                return that.#alpha;
            },
            helsing_visibleAreaColor: function () {
                return that.#visibleAreaColor;
            },
            helsing_invisibleAreaColor: function () {
                return that.#invisibleAreaColor;
            },
            shadowMap_texture: function () {
                return that.#shadowMap._shadowMapTexture;
            },
            shadowMap_matrix: function () {
                return that.#shadowMap._shadowMapMatrix;
            },
            shadowMap_lightPositionEC: function () {
                return that.#shadowMap._lightPositionEC;
            },
            shadowMap_texelSizeDepthBiasAndNormalShadingSmooth: function () {
const t = new Cartesian2();
                t.x = 1 / that.#shadowMap._textureSize.x;
                t.y = 1 / that.#shadowMap._textureSize.y;
                return Cartesian4.fromElements(t.x, t.y, bias.depthBias, bias.normalShadingSmooth, that.#combinedUniforms1);
            },
            shadowMap_normalOffsetScaleDistanceMaxDistanceAndDarkness: function () {
                return Cartesian4.fromElements(bias.normalOffsetScale, that.#shadowMap._distance, that.#shadowMap.maximumDistance, that.#shadowMap._darkness, that.#combinedUniforms2);
            },
        }
    });
    this.#postProcessStage = this.viewer.scene.postProcessStages.add(postStage);
}
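The two `shadowMap_*` packed uniforms above each fold four scalars into one `Cartesian4`. As plain arithmetic (arrays standing in for `Cartesian4`; a sketch of the packing, not the Cesium API):

```javascript
// How the packed uniforms are assembled: the texel step size is the
// reciprocal of the shadow map's texture size; the remaining slots
// carry bias and shadow map state straight through to the shader.
function packTexelSizeDepthBiasAndNormalShadingSmooth(textureSize, bias) {
    return [1 / textureSize.x, 1 / textureSize.y, bias.depthBias, bias.normalShadingSmooth];
}
function packNormalOffsetScaleDistanceMaxDistanceAndDarkness(bias, shadowMap) {
    return [bias.normalOffsetScale, shadowMap.distance, shadowMap.maximumDistance, shadowMap.darkness];
}
```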

The key part of the post-process stage is passing the uniforms into the shader: the texture type, the visible-area color, the invisible-area color, and so on. And finally, the main event: the shader code.

export default `
varying vec2 v_textureCoordinates;
uniform sampler2D colorTexture;
uniform sampler2D depthTexture;
uniform sampler2D shadowMap_texture;
uniform mat4 shadowMap_matrix;
uniform vec4 shadowMap_lightPositionEC;
uniform vec4 shadowMap_normalOffsetScaleDistanceMaxDistanceAndDarkness;
uniform vec4 shadowMap_texelSizeDepthBiasAndNormalShadingSmooth;
uniform int helsing_textureType;
uniform sampler2D helsing_texture;
uniform float helsing_alpha;
uniform vec4 helsing_visibleAreaColor;
uniform vec4 helsing_invisibleAreaColor;

vec4 toEye(in vec2 uv, in float depth){
    vec2 xy = vec2((uv.x * 2.0 - 1.0), (uv.y * 2.0 - 1.0));
    vec4 posInCamera = czm_inverseProjection * vec4(xy, depth, 1.0);
    posInCamera = posInCamera / posInCamera.w;
    return posInCamera;
}
float getDepth(in vec4 depth){
    float z_window = czm_unpackDepth(depth);
    z_window = czm_reverseLogDepth(z_window);
    float n_range = czm_depthRange.near;
    float f_range = czm_depthRange.far;
    return (2.0 * z_window - n_range - f_range) / (f_range - n_range);
}
float _czm_sampleShadowMap(sampler2D shadowMap, vec2 uv){
    return texture2D(shadowMap, uv).r;
}
float _czm_shadowDepthCompare(sampler2D shadowMap, vec2 uv, float depth){
    return step(depth, _czm_sampleShadowMap(shadowMap, uv));
}
float _czm_shadowVisibility(sampler2D shadowMap, czm_shadowParameters shadowParameters){
    float depthBias = shadowParameters.depthBias;
    float depth = shadowParameters.depth;
    float nDotL = shadowParameters.nDotL;
    float normalShadingSmooth = shadowParameters.normalShadingSmooth;
    float darkness = shadowParameters.darkness;
    vec2 uv = shadowParameters.texCoords;
    depth -= depthBias;
    vec2 texelStepSize = shadowParameters.texelStepSize;
    float radius = 1.0;
    float dx0 = -texelStepSize.x * radius;
    float dy0 = -texelStepSize.y * radius;
    float dx1 = texelStepSize.x * radius;
    float dy1 = texelStepSize.y * radius;
    float visibility = (_czm_shadowDepthCompare(shadowMap, uv, depth)
        + _czm_shadowDepthCompare(shadowMap, uv + vec2(dx0, dy0), depth)
        + _czm_shadowDepthCompare(shadowMap, uv + vec2(0.0, dy0), depth)
        + _czm_shadowDepthCompare(shadowMap, uv + vec2(dx1, dy0), depth)
        + _czm_shadowDepthCompare(shadowMap, uv + vec2(dx0, 0.0), depth)
        + _czm_shadowDepthCompare(shadowMap, uv + vec2(dx1, 0.0), depth)
        + _czm_shadowDepthCompare(shadowMap, uv + vec2(dx0, dy1), depth)
        + _czm_shadowDepthCompare(shadowMap, uv + vec2(0.0, dy1), depth)
        + _czm_shadowDepthCompare(shadowMap, uv + vec2(dx1, dy1), depth)
    ) * (1.0 / 9.0);
    return visibility;
}
vec3 pointProjectOnPlane(in vec3 planeNormal, in vec3 planeOrigin, in vec3 point){
    vec3 v01 = point - planeOrigin;
    float d = dot(planeNormal, v01);
    return (point - planeNormal * d);
}

void main(){
    const float PI = 3.141592653589793;
    vec4 color = texture2D(colorTexture, v_textureCoordinates);
    vec4 currentDepth = texture2D(depthTexture, v_textureCoordinates);
    if(currentDepth.r >= 1.0){
        gl_FragColor = color;
        return;
    }
    float depth = getDepth(currentDepth);
    vec4 positionEC = toEye(v_textureCoordinates, depth);
    vec3 normalEC = vec3(1.0);
    czm_shadowParameters shadowParameters;
    shadowParameters.texelStepSize = shadowMap_texelSizeDepthBiasAndNormalShadingSmooth.xy;
    shadowParameters.depthBias = shadowMap_texelSizeDepthBiasAndNormalShadingSmooth.z;
    shadowParameters.normalShadingSmooth = shadowMap_texelSizeDepthBiasAndNormalShadingSmooth.w;
    shadowParameters.darkness = shadowMap_normalOffsetScaleDistanceMaxDistanceAndDarkness.w;
    shadowParameters.depthBias *= max(depth * 0.01, 1.0);
    vec3 directionEC = normalize(positionEC.xyz - shadowMap_lightPositionEC.xyz);
    float nDotL = clamp(dot(normalEC, -directionEC), 0.0, 1.0);
    vec4 shadowPosition = shadowMap_matrix * positionEC;
    shadowPosition /= shadowPosition.w;
    if (any(lessThan(shadowPosition.xyz, vec3(0.0))) || any(greaterThan(shadowPosition.xyz, vec3(1.0)))){
        gl_FragColor = color;
        return;
    }
    shadowParameters.texCoords = shadowPosition.xy;
    shadowParameters.depth = shadowPosition.z;
    shadowParameters.nDotL = nDotL;
    float visibility = _czm_shadowVisibility(shadowMap_texture, shadowParameters);

    if (helsing_textureType < 2){ // video or image mode
        vec4 videoColor = texture2D(helsing_texture, shadowPosition.xy);
        if (visibility == 1.0){
            gl_FragColor =  mix(color, vec4(videoColor.xyz, 1.0), helsing_alpha * videoColor.a);
        }
        else{
            if(abs(shadowPosition.z - 0.0) < 0.01){
                return;
            }
            gl_FragColor = color;
        }
    }
    else{ // viewshed mode
        if (visibility == 1.0){
            gl_FragColor = mix(color, helsing_visibleAreaColor, helsing_alpha);
        }
        else{
            if(abs(shadowPosition.z - 0.0) < 0.01){
                return;
            }
            gl_FragColor = mix(color, helsing_invisibleAreaColor, helsing_alpha);
        }
    }
}`;


As you can see, the shader code isn't complicated, and most of it is native Cesium code. Focus on the commented branches: in video and image mode the sampled texture is blended into the scene color, while in viewshed mode a color is blended instead.
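To make the blending concrete: GLSL's `mix(a, b, t)` computes `a * (1.0 - t) + b * t` per channel, so `helsing_alpha` (scaled by the texture's own alpha in video/image mode) controls how strongly the projection covers the scene color. A quick JS illustration:

```javascript
// GLSL mix() per channel, with RGBA colors as plain arrays: the
// projected color (video frame, image pixel, or visible-area color)
// is blended over the scene color with weight t.
function mix(a, b, t) {
    return a.map((v, i) => v * (1 - t) + b[i] * t);
}

// Scene color blended half-and-half with a pure red projection:
const blended = mix([0.2, 0.4, 0.6, 1.0], [1.0, 0.0, 0.0, 1.0], 0.5);
// ≈ [0.6, 0.2, 0.3, 1.0]
```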

Summary

In keeping with my usual style, this article is mostly hands-on material, ready to eat straight out of the bag, with a bit of theory sprinkled in. I believe learning has to start with something you can actually bite into; only what you can chew gets digested, and anything else is like being fed through an IV drip, which doesn't do the body much good. Admittedly the theory is a bit thin this time, mainly because... typing is exhausting, T_T. That's it for today. Coming up next: an info box, the kind that tracks the map, not just a plain popup. If you're interested in Cesium, come discuss in group 854943530; the good stuff is not to be missed.
