Hello there, I've got a vertex shader which generates a height map, but I need to calculate the correct normals and don't have a clue where to begin. Is anyone able to walk me through it?
This is something I've always been confused by, so I'd love any help, even if it's vague.
I currently have a plane defined using a ShaderMaterial that extends MeshStandardMaterial. I've chopped up this example to get there.
I'm using simplex noise to modify the vertices, but now I'm stuck figuring out how I should adjust normals accordingly. I could do some math to look at each adjacent vertex, similarly to what this post outlines, but I'm curious about a broader question - how do you do this in the general case?
Do people always reimplement normal calculation for displacement, or is there some way to use a computeVertexNormals-like function to do this? Is the only general solution to do this on the CPU and bake it into a texture? Is there some ergonomic way to run a calculation like this for 30 seconds of simplex noise and then swap out the normals every frame?
I'd love any help or discussion. I can try to clarify more if this is too vague.
https://preview.redd.it/7ry8ngtmqta81.png?width=2015&format=png&auto=webp&s=65afd3d583787e5da67b56f29a1c9fe96f55f36b
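Since the height comes from a function you can evaluate anywhere, the usual general-case answer is to skip adjacency entirely and rebuild the normal from the height function itself using central differences. That is also why a computeVertexNormals-style pass doesn't really fit GPU displacement: the displaced positions never exist on the CPU. A minimal sketch in C++ with GLM, where height() is a hypothetical stand-in for your simplex noise; the same math ports line-for-line into a vertex shader:

#include <glm/glm.hpp>
#include <cmath>

// Hypothetical stand-in for the simplex-noise height function.
float height(float x, float z)
{
    return std::sin(x) * std::cos(z);
}

// Sample the height field on either side of the point and take central
// differences; the normal of a y-up height field is (-dh/dx, 1, -dh/dz),
// which scaled by 2*eps is exactly the vector built below.
glm::vec3 displacedNormal(float x, float z, float eps)
{
    float hL = height(x - eps, z);
    float hR = height(x + eps, z);
    float hD = height(x, z - eps);
    float hU = height(x, z + eps);
    return glm::normalize(glm::vec3(hL - hR, 2.0f * eps, hD - hU));
}

eps trades accuracy against smoothing; something on the order of one grid cell is a reasonable starting point. For the precompute idea, this same function is also what you would evaluate per texel when baking a normal map ahead of time.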
I have a model consisting of a few meshes. Here are 2 of the meshes that have touching edges and vertices https://i.imgur.com/a2ksr4g.png.
You can see that where the meshes touch the vertex normals are not identical. https://i.imgur.com/gD7Xs3s.png https://i.imgur.com/BgPUToO.png.
The vertex normals should be identical but they are pointing away from each other. This means that there is a hard edge in the render. https://i.imgur.com/pr6pOp2.png.
How do I fix these vertex normals?
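If fixing this in code is an option (as opposed to merging vertices or transferring normals inside your modeling tool), one common repair is to average the normals of coincident vertices across the seam so both meshes shade identically along the shared edge. A minimal sketch in C++ with GLM; the Mesh layout here is an assumption:

#include <glm/glm.hpp>
#include <cstddef>
#include <vector>

// Assumed layout: parallel position/normal arrays per mesh.
struct Mesh
{
    std::vector<glm::vec3> positions;
    std::vector<glm::vec3> normals;
};

// For every pair of vertices that occupy the same position in both meshes,
// replace both normals with their normalized average so the seam is smooth.
void weldSeamNormals(Mesh& a, Mesh& b, float tol)
{
    for (std::size_t i = 0; i < a.positions.size(); ++i)
        for (std::size_t j = 0; j < b.positions.size(); ++j)
            if (glm::distance(a.positions[i], b.positions[j]) < tol)
            {
                glm::vec3 avg = glm::normalize(a.normals[i] + b.normals[j]);
                a.normals[i] = avg;
                b.normals[j] = avg;
            }
}

The O(n²) scan is fine for a seam's worth of vertices; use a spatial hash if the meshes are large.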
[SOLVED] How do I fix the triangle artifacts? The mesh doesn't need to be quadded as it is for a real-time environment project. I've tried setting vertex normals to the faces around the beveled shape and locking them, but it just yields even stranger artifacts elsewhere.
SOLUTION is to just add a loop, cutting all of the triangles in half. (Did this edit break the post? I can't see the images lol)
https://preview.redd.it/2j4t8xsv6xm71.png?width=622&format=png&auto=webp&s=857370b3f3f96819b3c788074d56b101f7e839e6
https://preview.redd.it/urpbu72u6xm71.png?width=655&format=png&auto=webp&s=3c2e34218eb179adb26f9d9cd4519699d24dace9
With that out of the way, this could be a matter of each individual dev's implementation. I know that you can convert an integer to a decimal number with fixed-point math, but I'm not super familiar with that whole process, and I don't know if this is how it is handled most of the time. Also bear in mind that this is my first time parsing normals in a 3D model format, so I'm not sure what is common practice. In the past I've parsed verts/faces and used double-sided normals. Some insight would be much appreciated. Thank you.
I have been reverse engineering an export plugin that only works in obsolete versions of Windows. The exporter takes in values from an OBJ file and rearranges that data to create an ASCII 3D model format.
I have run into an issue relating to normals: in the file to be created, normals are listed per vertex. E.g., for an 8-vert cube they are listed as follows (the normals are inaccurate in this attempt):
VERT(-20.00000, -20.00000, 20.00000),
NORM(-0.1443250, -0.1443250, 0.1443250),
VERT(-20.00000, 20.00000, 20.00000),
NORM(-0.1443250, 0.1443250, 0.1443250),
VERT(-20.00000, -20.00000, -20.00000),
NORM(-0.1443250, -0.1443250, -0.1443250),
VERT(-20.00000, 20.00000, -20.00000),
NORM(-0.1443250, 0.1443250, -0.1443250),
VERT(20.00000, -20.00000, 20.00000),
NORM(0.1443250, -0.1443250, 0.1443250),
VERT(20.00000, 20.00000, 20.00000),
NORM(0.1443250, 0.1443250, 0.1443250),
VERT(20.00000, -20.00000, -20.00000),
NORM(0.1443250, -0.1443250, -0.1443250),
VERT(20.00000, 20.00000, -20.00000),
NORM(0.1443250, 0.1443250, -0.1443250),
The issue with this is that in OBJ a vertex can have more than one normal associated with it, so taking that data directly produces unusable normal values in the resulting ASCII format.
Okay, so my actual question is, if I were to implement a formula for surface normals like this one: https://www.khronos.org/opengl/wiki/Calculating_a_Surface_Normal
Would it produce the same number of normals as there are vertices in the polygon, or is there another formula I should research for calculating them directly from the vertices? In theory the shading model should be smooth-shaded, based on official models of the format I am trying to recreate.
I also made a previous topic about this, which shows visually how the exporter assigns normals for a cube: https://www.reddit.com/r/computergraphics/comments/p9df8e/trying_to_reverse_engineer_an_ascii_model_format/
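For what it's worth, the formula on that Khronos page produces exactly one normal per face, not one per vertex. The standard way to get one normal per vertex for smooth shading (matching the per-vertex layout above) is to sum the normals of every face sharing the vertex, then normalize. A minimal sketch in C++ with GLM, assuming an indexed triangle list:

#include <glm/glm.hpp>
#include <cstddef>
#include <vector>

// One normal per vertex: accumulate the (unnormalized) face normals of all
// triangles touching each vertex, then normalize the sums. Leaving the
// cross products unnormalized weights the average by triangle area.
std::vector<glm::vec3> perVertexNormals(const std::vector<glm::vec3>& verts,
                                        const std::vector<unsigned>& tris)
{
    std::vector<glm::vec3> normals(verts.size(), glm::vec3(0.0f));
    for (std::size_t i = 0; i + 2 < tris.size(); i += 3)
    {
        glm::vec3 n = glm::cross(verts[tris[i + 1]] - verts[tris[i]],
                                 verts[tris[i + 2]] - verts[tris[i]]);
        normals[tris[i]] += n;
        normals[tris[i + 1]] += n;
        normals[tris[i + 2]] += n;
    }
    for (glm::vec3& n : normals) n = glm::normalize(n);
    return normals; // size == verts.size(): one normal per vertex
}

As a sanity check, the directions in the attempt above are already the corner diagonals a smooth-shaded cube should have; they are just not unit length (each component should come out to about ±0.57735). On a triangulated cube the diagonal split skews the area weighting slightly, so use angle-weighted accumulation instead if exact symmetry matters.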
Hi all!
Currently my vertex normals look like this: (top image)
But I want to make them point downwards, towards the vertex below (bottom image).
Is there a way to do this in Maya?
My UVs for these "hair strands" are all set up in a tile. (meaning I can apply a ramp to the hair strands).
Could I orient my normals to use the "v" direction of the UVs?
A bit stumped, any help appreciated!
Cheers!
https://preview.redd.it/eer2xrrzns171.png?width=873&format=png&auto=webp&s=9e6b2af7a96a519b44856946fe2670892a93988f
https://preview.redd.it/6ftpoey0os171.png?width=828&format=png&auto=webp&s=500ff33e88b47aba8ce89c5ea71616e57171ffa7
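I can't speak to a one-click option in Maya, but if scripting is on the table the computation itself is small: for each strand, aim each vertex normal at the next vertex down the strand, which with your UV layout is simply the next vertex along v. A hypothetical sketch of the math in C++ with GLM, where strand is assumed to hold one hair card's vertices ordered by increasing v:

#include <glm/glm.hpp>
#include <cstddef>
#include <vector>

// Point each normal at the next vertex down the strand; the last vertex
// reuses the direction of the segment above it. Assumes strand.size() >= 2.
void pointNormalsDownStrand(const std::vector<glm::vec3>& strand,
                            std::vector<glm::vec3>& normals)
{
    const std::size_t n = strand.size();
    normals.resize(n);
    for (std::size_t i = 0; i < n; ++i)
    {
        std::size_t a = (i + 1 < n) ? i : n - 2;
        normals[i] = glm::normalize(strand[a + 1] - strand[a]);
    }
}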
Shader included at bottom.
I have an input vertex which includes position, normal, forward, and even a bitangent. I trust the calculations of these values (except for handedness) because they are emitted from assimp.
I am getting a bit confused by the math involved... but the general idea I have currently is to create a look-at matrix in the shader based on the vertex-provided position and forward, then make a second look-at matrix with forward (0,0,-1) and up (0,1,0).
Then multiply the sampled normal by those matrices... It seems to *almost* work?
Barrel pos (0,0,0), Light (0,100,0)
#version 450
layout(location=0) out vec4 FragColor;
layout(location=0) in vec3 fpos;
layout(location=1) in vec3 fnorm;
layout(location=2) in vec3 ffwd;
layout(location=3) in vec3 ftan;
layout(location=4) in vec2 fuv;
layout(std140, set=0, binding=0) uniform CameraMat
{
mat4 CamMatrix;
};
layout(std140, set=0,binding=1) uniform ModelMat
{
mat4 ModelMatrix;
};
//Ambient lighting: https://learnopengl.com/ somewhere.
//SpotLights and Point Lights: https://learnopengl.com/Lighting/Light-casters
layout(std140, set=1, binding=0) uniform AmbientLightData
{
vec4[4] ALDR;
};
layout(std140, set=1, binding=1) uniform PointLightData
{
vec4 PLDR[32];
};
layout(set=2,binding=0) uniform texture2D diff_tex;
layout(set=2,binding=1) uniform sampler diff_sam;
layout(set=3,binding=0) uniform texture2D bump_tex;
layout(set=3,binding=1) uniform sampler bump_sam;
layout(set=4,binding=0) uniform texture2D spec_tex;
layout(set=4,binding=1) uniform sampler spec_sam;
layout(set=5,binding=0) uniform texture2D emissive_tex;
layout(set=5,binding=1) uniform sampler emissive_sam;
float calcAttenuation(vec3 a, float d)
{
//a = (constant, linear, quadratic), per the learnopengl light-caster model.
return 1.0/(a.x + a.y*d + a.z*d*d); //was a.x*a.y*d: the '+' was missing
}
mat4 calcLookAt(vec3 fwrd, vec3 up)
{
vec3 zA = normalize(fwrd); //was normalize(ffwd), which ignored the parameter
vec3 xA = normalize(cross(up,zA));
vec3 yA = cross(zA, xA);
//dot against vec3(0,0,0) is always 0; these are the translation terms for an eye at the origin.
float xDot = -dot(xA,vec3(0,0,0));
float yDot = -dot(yA,vec3(0,0,0));
float zDot = -dot(zA,vec3(0,0,0));
//It should matter which of these mats I use right?!
//They both look somewhat ok from initial light pos. (0,100,0)
...
https://preview.redd.it/ptld70ama8v61.png?width=706&format=png&auto=webp&s=9fee097ac2464625adcfc392e0cc948b61cae66f
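In case a cross-check helps: the more common construction for rotating a sampled bump/normal-map value is a TBN basis built from the interpolated normal and tangent, rather than a pair of look-at matrices. A minimal sketch of that math in C++ with GLM (it maps one-to-one onto GLSL); sampledN is assumed to be the bump texture value already remapped from [0,1] to [-1,1]:

#include <glm/glm.hpp>

// Re-orthogonalize the tangent against the normal (Gram-Schmidt), derive
// the bitangent, and rotate the tangent-space sample into world space.
glm::vec3 perturbNormal(glm::vec3 n, glm::vec3 t, glm::vec3 sampledN)
{
    n = glm::normalize(n);
    t = glm::normalize(t - n * glm::dot(n, t));
    glm::vec3 b = glm::cross(n, t); // flip if the handedness is mirrored
    return glm::normalize(glm::mat3(t, b, n) * sampledN);
}

Since you already trust the assimp tangents, this sidesteps the look-at construction entirely; the handedness worry shows up only as a possible sign flip on b.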
I'm trying this code that I found on Twitter, and in 3ds Max 2020 it doesn't do the alignment as you can see in the image; it does it in a different, wrong way. What could be happening?
faceArray = polyop.getFacesUsingVert $Box001 4
faceNormals = for f in faceArray collect polyop.getFaceNormal $Box001 f
theNormal = [0,0,0]
for n in faceNormals do theNormal += n
normalize theNormal -- bug: the return value is discarded; should be theNormal = normalize theNormal
theMatrix = matrixFromNormal(theNormal)
theMatrix.row4 = theNormal -- bug: this sets the position to the normal instead of the vertex position
$Box002.transform = theMatrix
Solution, thanks to Daniel Swahn Lindberg:
(
sourceObj = $Box001
targetObj = $Box002
theVert = 4
theVertPos = polyop.getVert sourceObj theVert
faceArray = polyop.getFacesUsingVert sourceObj theVert
faceNormals = for f in faceArray collect polyop.getFaceNormal sourceObj f
theNormal = [0,0,0]
for n in faceNormals do theNormal += n
theNormal = normalize theNormal
theMatrix = matrixFromNormal theNormal
theMatrix.row4 = theVertPos
targetObj.transform = theMatrix
)
https://preview.redd.it/2wdc7d1ssdo61.jpg?width=1095&format=pjpg&auto=webp&s=839181b115d0fea1c46b20cd96546de403e80166
Hi, here is the problem I am trying to solve. My terrain, generated with Perlin noise in the tessellation shader, looks like this: https://prnt.sc/vqmhwx.
I am currently calculating the normals for some basic lighting in the geometry shader by taking the cross product of two of the edges that make up the triangle. However, because I only have access to one triangle at a time in the geometry shader, this leads to a flat-shaded look.
What I want is to have per-vertex normals, so the terrain looks a lot smoother. I assume this would be calculated by taking the average of the face normals that use that vertex, but I can't for the life of me figure out how to do that.
Does anyone have any ideas?
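Your instinct is right, but averaging face normals needs visibility of the whole mesh at once, which the geometry shader never has, so it would have to happen on the CPU or in a separate pass. Since your height is procedural, there is also an adjacency-free option: sample the height field's neighbors and take central differences. A minimal sketch of that in C++ with GLM, assuming you can evaluate the heights into a heights[z * width + x] grid with cell spacing cell:

#include <algorithm>
#include <glm/glm.hpp>
#include <vector>

// Smooth per-vertex normals for a y-up height-field grid via central
// differences; clamping at the borders reuses the edge heights.
std::vector<glm::vec3> gridNormals(const std::vector<float>& heights,
                                   int width, int depth, float cell)
{
    auto h = [&](int x, int z) {
        x = std::clamp(x, 0, width - 1);
        z = std::clamp(z, 0, depth - 1);
        return heights[z * width + x];
    };
    std::vector<glm::vec3> normals(heights.size());
    for (int z = 0; z < depth; ++z)
        for (int x = 0; x < width; ++x)
            normals[z * width + x] = glm::normalize(glm::vec3(
                h(x - 1, z) - h(x + 1, z),
                2.0f * cell,
                h(x, z - 1) - h(x, z + 1)));
    return normals;
}

If the heights only ever exist on the GPU, the same four-sample trick works directly in the tessellation evaluation shader: call the noise function at the four neighboring points and build the normal there, which also removes the need for the geometry-shader cross product.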