This article explains how to use the GPU in Three.js to pick objects and calculate the position of the intersection point.

Picking objects with the Raycaster built into Three.js is very simple; the code is as follows:

var raycaster = new THREE.Raycaster();
var mouse = new THREE.Vector2();

function onMouseMove(event) {
    // Calculate the normalized device coordinates of the mouse position.
    // Each coordinate component ranges from -1 to 1.
    mouse.x = event.clientX / window.innerWidth * 2 - 1;
    mouse.y = -(event.clientY / window.innerHeight) * 2 + 1;
}

function pick() {
    // Update the picking ray from the camera and mouse position.
    raycaster.setFromCamera(mouse, camera);
    // Calculate the objects intersecting the ray.
    var intersects = raycaster.intersectObjects(scene.children);
}

Internally, objects are first filtered by bounding volume, and the projected ray is then intersected with each triangular face.

However, when the model is very large, say more than 400,000 faces, picking objects and computing the collision point by traversal is very slow and the user experience is poor. Picking with the GPU does not have this problem: no matter how large the scene and the models are, the object under the mouse and the position of the intersection point can be obtained within one frame.

Using the GPU to pick objects

The implementation is simple:

1. Create a picking material and replace the material of each model in the scene with a different color.
2. Read the pixel color at the mouse position and determine which object is at the mouse position according to the color.

Specific implementation code:

1. Create a picking material, traverse the scene, and replace the material of each model in the scene with a different color.
let maxHexColor = 1;

// Replace the material of each mesh
scene.traverseVisible(n => {
    if (!(n instanceof THREE.Mesh)) {
        return;
    }
    n.oldMaterial = n.material;
    if (n.pickMaterial) { // The picking material has already been created
        n.material = n.pickMaterial;
        return;
    }
    let material = new THREE.ShaderMaterial({
        vertexShader: PickVertexShader,
        fragmentShader: PickFragmentShader,
        uniforms: {
            pickColor: {
                value: new THREE.Color(maxHexColor)
            }
        }
    });
    n.pickColor = maxHexColor;
    maxHexColor++;
    n.material = n.pickMaterial = material;
});
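The PickVertexShader and PickFragmentShader sources are not shown in the article. Below is a minimal sketch of what they could look like; only the pickColor uniform name comes from the code above, everything else is an assumption:

```javascript
// Hypothetical shader sources for the picking material above.
// Only the pickColor uniform name is taken from the article; the rest is a sketch.
const PickVertexShader = `
void main() {
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
`;

const PickFragmentShader = `
precision highp float;

uniform vec3 pickColor;

void main() {
    gl_FragColor = vec4(pickColor, 1.0); // Flat, unlit ID color for every fragment
}
`;
```

The key point is that the fragment shader must not apply any lighting: every pixel of a mesh must carry exactly the ID color, otherwise the read-back comparison fails.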

2. Render the scene onto a WebGLRenderTarget, read the color at the mouse position, and determine the selected object.
let renderTarget = new THREE.WebGLRenderTarget(width, height);
let pixel = new Uint8Array(4);

// Draw and read the pixel
renderer.setRenderTarget(renderTarget);
renderer.clear();
renderer.render(scene, camera);
renderer.readRenderTargetPixels(renderTarget, offsetX, height - offsetY, 1, 1, pixel); // Read the color at the mouse position

// Restore the original materials and find the selected object
const currentColor = pixel[0] * 0xffff + pixel[1] * 0xff + pixel[2];

let selected = null;
scene.traverseVisible(n => {
    if (!(n instanceof THREE.Mesh)) {
        return;
    }
    if (n.pickMaterial && n.pickColor === currentColor) { // The colors match
        selected = n; // The object under the mouse
    }
    if (n.oldMaterial) {
        n.material = n.oldMaterial;
        delete n.oldMaterial;
    }
});
offsetX and offsetY are the mouse position, and height is the height of the canvas. readRenderTargetPixels reads the color of a region 1 pixel wide and 1 pixel high at position (offsetX, height - offsetY), that is, a single color.

pixel is a Uint8Array(4) that stores the four RGBA channels of a color; each channel value ranges from 0 to 255.
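The mapping between mesh IDs and colors can be checked without a renderer. The sketch below (hypothetical helper names, not from the article) mirrors the decode formula pixel[0] * 0xffff + pixel[1] * 0xff + pixel[2] used above and shows that IDs survive the round trip:

```javascript
// Hypothetical helpers: idToPixel is the exact inverse of the decode formula
// used above (pixel[0] * 0xffff + pixel[1] * 0xff + pixel[2]).
function idToPixel(id) {
    const r = Math.floor(id / 0xffff);
    const g = Math.floor((id - r * 0xffff) / 0xff);
    const b = id - r * 0xffff - g * 0xff;
    return [r, g, b];
}

function pixelToId(pixel) {
    return pixel[0] * 0xffff + pixel[1] * 0xff + pixel[2];
}

// With this mixed base, IDs up to 65279 keep every channel within 0..255.
let ok = true;
for (let id = 1; id < 65000; id++) {
    if (pixelToId(idToPixel(id)) !== id) {
        ok = false;
    }
}
console.log(ok); // true
```

One thing worth double-checking against the full source: THREE.Color interprets an integer argument with a base-256 layout (hex >> 16, hex >> 8 & 0xff, hex & 0xff), which differs from the mixed 0xffff / 0xff base in the decode formula above for IDs of 256 and beyond, so the encode and decode sides must agree on one base.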

Complete implementation code: https://

Using the GPU to get the intersection position

The implementation is also very simple:

1. Create a depth shader material and render the scene depth to a WebGLRenderTarget.

2. Read the depth at the mouse position, and calculate the intersection position from the mouse position and the depth.

Specific implementation code:

1. Create a depth shader material, encode the depth information in a specific way, and render it to a WebGLRenderTarget.

Depth material:

const depthMaterial = new THREE.ShaderMaterial({
    vertexShader: DepthVertexShader,
    fragmentShader: DepthFragmentShader,
    uniforms: {
        far: {
            value: camera.far
        }
    }
});

DepthVertexShader:

precision highp float;

uniform float far;

varying float depth;

void main() {
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    depth = gl_Position.z / far;
}


DepthFragmentShader:

precision highp float;

varying float depth;

void main() {
    float hex = abs(depth) * 16777215.0; // 0xffffff
    float r = floor(hex / 65535.0);
    float g = floor((hex - r * 65535.0) / 255.0);
    float b = floor(hex - r * 65535.0 - g * 255.0);
    float a = sign(depth) >= 0.0 ? 1.0 : 0.0; // 1.0 when depth >= 0; 0.0 when depth < 0
    gl_FragColor = vec4(r / 255.0, g / 255.0, b / 255.0, a);
}

a. gl_Position.z is the depth in camera space: it is linear and ranges from cameraNear to cameraFar, so it can be interpolated directly through a shader varying variable.
b. gl_Position.z is divided by far to bring the value into the 0 to 1 range, which is convenient for outputting as a color.

c. The screen-space depth cannot be used: after perspective projection the depth ranges from -1 to 1, most of it very close to 1 (more than 0.9), so it is non-linear and almost constant, the output color barely changes, and the result is very inaccurate.
d. How to get the depth in the fragment shader: the screen-space depth is gl_FragCoord.z, and the camera-space depth is gl_FragCoord.z / gl_FragCoord.w.

e. The above applies to perspective projection. In orthographic projection gl_Position.w is 1, so the camera-space and screen-space depths are the same.
f. To output the depth as accurately as possible, all three RGB components are used: gl_Position.z / far is in the range 0 to 1; multiplied by 0xffffff it is converted to an RGB color value, where the r component represents multiples of 65535, the g component multiples of 255, and the b component the remainder.
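Note c can be verified numerically. The sketch below hand-builds the standard OpenGL/Three.js perspective depth mapping (so it runs without Three.js; the near/far values are arbitrary but typical) and shows how screen-space depth crowds toward 1:

```javascript
// Standard perspective depth mapping (OpenGL convention, as used by Three.js).
// zCam is negative in camera space.
const near = 0.1, far = 1000;

function ndcDepth(zCam) {
    const clipZ = -(far + near) / (far - near) * zCam - 2 * far * near / (far - near);
    const clipW = -zCam; // perspective divide
    return clipZ / clipW;
}

console.log(ndcDepth(-near).toFixed(3)); // -1.000
console.log(ndcDepth(-10).toFixed(3));   // 0.980
console.log(ndcDepth(-100).toFixed(3));  // 0.998
console.log(ndcDepth(-far).toFixed(3));  // 1.000
```

Everything from 10 units outward maps into the top two percent of the range, which is why the linear gl_Position.z / far encoding is used instead.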
Full implementation code: https://

2. Read the color at the mouse position and convert the color value read back into the camera-space depth value.

a. Draw the "encrypted" depth onto the WebGLRenderTarget. The method for reading the color:

let renderTarget = new THREE.WebGLRenderTarget(width, height);
let pixel = new Uint8Array(4);

scene.overrideMaterial = this.depthMaterial;

renderer.setRenderTarget(renderTarget);
renderer.clear();
renderer.render(scene, camera);
renderer.readRenderTargetPixels(renderTarget, offsetX, height - offsetY, 1, 1, pixel);

offsetX and offsetY are the mouse position, and height is the height of the canvas. readRenderTargetPixels reads the color of a region 1 pixel wide and 1 pixel high at position (offsetX, height - offsetY).

pixel is a Uint8Array(4) that stores the four RGBA channels of a color; each channel value ranges from 0 to 255.

b. "Decrypt" the "encrypted" camera-space depth to get the correct camera-space depth value.

if (pixel[0] !== 0 || pixel[1] !== 0 || pixel[2] !== 0) {
    let hex = (pixel[0] * 65535 + pixel[1] * 255 + pixel[2]) / 0xffffff;
    if (pixel[3] === 0) {
        hex = -hex;
    }
    cameraDepth = -hex * camera.far; // Depth in the camera coordinate system (note: depth values in the camera coordinate system are negative)
}
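The "encrypt"/"decrypt" pair can be checked in plain JavaScript. The sketch below (hypothetical helper names, not from the article) mirrors the RGB packing from DepthFragmentShader and the unpacking from the read-back code above; a value in the 0 to 1 range is spread across three byte channels, giving roughly 24 bits of precision:

```javascript
// Mirrors DepthFragmentShader: pack |depth| in [0, 1] into three byte channels,
// with the alpha channel carrying the sign.
function encodeDepth(depth) {
    const hex = Math.abs(depth) * 16777215; // 0xffffff
    const r = Math.floor(hex / 65535);
    const g = Math.floor((hex - r * 65535) / 255);
    const b = Math.floor(hex - r * 65535 - g * 255);
    const a = depth >= 0 ? 255 : 0;
    return [r, g, b, a];
}

// Mirrors the read-back code above (without the final * -camera.far step).
function decodeDepth(pixel) {
    let hex = (pixel[0] * 65535 + pixel[1] * 255 + pixel[2]) / 0xffffff;
    if (pixel[3] === 0) {
        hex = -hex;
    }
    return hex;
}

const depth = 0.123456;
const error = Math.abs(decodeDepth(encodeDepth(depth)) - depth);
console.log(error < 1e-6); // true: the rounding error is at most 1 / 0xffffff
```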

3. From the mouse position on the screen and the camera-space depth, calculate the coordinates of the intersection point in the world coordinate system by interpolation.
let nearPosition = new THREE.Vector3(); // Mouse position on the near clipping plane
let farPosition = new THREE.Vector3(); // Mouse position on the far clipping plane
let world = new THREE.Vector3(); // World coordinates calculated by interpolation

// Device coordinates
const deviceX = this.offsetX / width * 2 - 1;
const deviceY = -this.offsetY / height * 2 + 1;

// Near point
nearPosition.set(deviceX, deviceY, 1); // Screen coordinate system: (0, 0, 1)
nearPosition.applyMatrix4(camera.projectionMatrixInverse); // Camera coordinate system: (0, 0, -far)

// Far point
farPosition.set(deviceX, deviceY, -1); // Screen coordinate system: (0, 0, -1)
farPosition.applyMatrix4(camera.projectionMatrixInverse); // Camera coordinate system: (0, 0, -near)

// In camera space, interpolate the x and y values proportionally according to the depth.
const t = (cameraDepth - nearPosition.z) / (farPosition.z - nearPosition.z);

// Convert the point from camera space to world space.
world.set(
    nearPosition.x + (farPosition.x - nearPosition.x) * t,
    nearPosition.y + (farPosition.y - nearPosition.y) * t,
    cameraDepth
);
world.applyMatrix4(camera.matrixWorld);
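The interpolation step relies on the fact that, in camera space, the picking ray passes through the origin, so x and y are linear functions of z. A minimal standalone sketch with made-up numbers (no Three.js; all values are hypothetical):

```javascript
// Two points on the same camera-space ray through the origin:
// (x, y) = z * (0.1, -0.05) for every point on this ray.
const near = { x: -0.1, y: 0.05, z: -1 };    // ray point on the z = -1 plane
const far = { x: -100, y: 50, z: -1000 };    // same ray on the z = -1000 plane
const cameraDepth = -250;                    // depth read back from the GPU

// The same interpolation as in the code above.
const t = (cameraDepth - near.z) / (far.z - near.z);
const x = near.x + (far.x - near.x) * t;
const y = near.y + (far.y - near.y) * t;

console.log(x.toFixed(1), y.toFixed(1)); // -25.0 12.5
```

The recovered (x, y) equals cameraDepth times the ray direction (0.1, -0.05), confirming that the lerp lands on the ray at the measured depth.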

The full code can be found at: /master/shadoweditor.web/src/event/gpupickevent.js

Picking objects and computing intersection positions on the GPU is mostly useful in situations that demand very high performance, for example:

1. A hover effect when the mouse moves over a 3D model.
2. When adding a model, the model follows the mouse so that its placement in the scene can be previewed in real time.
3. Distance and area measurement tools, where lines and polygons follow the mouse across a plane, previewing the effect and computing length and area in real time.
4. Very large scenes and models, where raycasting-based picking is too slow and the user experience is poor.

Below is a picture of the mouse hover effect implemented with GPU picking. The red border indicates the selection effect and the translucent yellow indicates the mouse hover effect.

Still confused? Perhaps you are not familiar with the various projection operations in Three.js. The projection formulas used by Three.js are given below.

1. modelViewMatrix = camera.matrixWorldInverse * object.matrixWorld

2. viewMatrix = camera.matrixWorldInverse

3. modelMatrix = object.matrixWorld

4. project = applyMatrix4(camera.matrixWorldInverse).applyMatrix4(camera.projectionMatrix)

5. unproject = applyMatrix4(camera.projectionMatrixInverse).applyMatrix4(camera.matrixWorld)

6. gl_Position = projectionMatrix * modelViewMatrix * position
              = projectionMatrix * camera.matrixWorldInverse * object.matrixWorld * position
              = projectionMatrix * viewMatrix * modelMatrix * position
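Formulas 4 to 6 can be sanity-checked numerically. The sketch below hand-builds a perspective projection matrix and its analytic inverse (an assumption: it stands in for camera.projectionMatrix so the snippet runs without Three.js; with the camera at the origin, matrixWorld and matrixWorldInverse are identity and drop out) and shows that project followed by unproject returns the original point:

```javascript
// Hand-built perspective projection (OpenGL convention, aspect = 1),
// equivalent in shape to camera.projectionMatrix for these parameters.
const n = 1, far = 100, invTan = 1 / Math.tan(Math.PI / 8); // near, far, 1/tan(fov/2)
const c = -(far + n) / (far - n);
const d = -2 * far * n / (far - n);

function project(p) { // camera space -> NDC, including the perspective divide
    const w = -p.z;
    return { x: invTan * p.x / w, y: invTan * p.y / w, z: (c * p.z + d) / w };
}

function unproject(p) { // NDC -> camera space, via the analytic inverse matrix
    const w = p.z / d + c / d; // fourth row of projectionMatrixInverse
    return { x: p.x / invTan / w, y: p.y / invTan / w, z: -1 / w };
}

const point = { x: 2, y: -3, z: -50 };
const back = unproject(project(point));
console.log(back.x.toFixed(3), back.y.toFixed(3), back.z.toFixed(3)); // 2.000 -3.000 -50.000
```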

References:

1. Complete implementation code: https:// src/event/GPUPickEvent.js

2. Drawing depth values with a shader in OpenGL: https:// shaders

3. Getting the real fragment shader depth value in GLSL: https://

This is the full content of this article. I hope it offers some reference value for your study or work. Thank you for your support of Tumi Cloud.
