If the model matrix performs a non-uniform scale, transforming the normal the same way as the vertices will cause it to no longer be perpendicular to the surface. Therefore, we cannot transform the normal vector with the model matrix itself. The following figure shows the effect of a non-uniformly scaling model matrix applied to the normal vector:

Whenever we apply a non-uniform scale (note: uniform scaling does not break the normals, because it changes only their length, not their direction, and the length is easily fixed by normalization), the normal vector is no longer perpendicular to the corresponding surface, and the lighting breaks.
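To make the breakage concrete, here is a small numeric sketch (plain Python, not part of the original article; the vectors and matrix are my own example): a tangent and a normal that start out perpendicular stop being perpendicular once both are pushed through a non-uniform scale.

```python
# Sketch: a non-uniform scale breaks the perpendicularity of a
# normal that is transformed the same way as the vertices.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def mat_vec(m, v):  # m is a 3x3 row-major matrix
    return [dot(row, v) for row in m]

# Surface tangent and its normal: perpendicular before the transform.
T = [1.0, 1.0, 0.0]   # lies in the surface
N = [-1.0, 1.0, 0.0]  # perpendicular to T
assert dot(T, N) == 0.0

# Non-uniform scale: x by 2, y and z unchanged.
S = [[2.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]

T2 = mat_vec(S, T)  # [2.0, 1.0, 0.0]
N2 = mat_vec(S, N)  # [-2.0, 1.0, 0.0]
print(dot(T2, N2))  # -3.0: no longer perpendicular
```

With a uniform scale (say 2, 2, 2) the same dot product stays zero, which is exactly the note above.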

The trick to fixing this behavior is to use a matrix tailored to the normal vector. This matrix is called the normal matrix, and it uses a bit of linear algebra to remove the effect of the wrong scaling on the normal vector.

The normal matrix is defined as “the transpose of the inverse of the upper-left 3x3 submatrix of the model matrix”.

The specific derivation process is as follows:

vertexEyeSpace = modelViewMatrix * vertex;

normalEyeSpace = modelViewMatrix * vec4(normal, 0);

Let T be a tangent line segment on the surface: T = P2 - P1, where P2 and P1 are the positions of two vertices.

After transforming the vertices, the tangent becomes:

T' = M * T = M * (P2 - P1)

= M * P2 - M * P1

= P2' - P1'

So the transformed tangent is simply the difference of the two transformed vertices.

Because the normal N is perpendicular to the surface, the dot product of the tangent and the normal is zero:

dot(T, N) = 0;

We want the transformed normal to stay perpendicular to the transformed tangent:

dot(T', N') = 0;

Let M be the upper-left 3x3 submatrix of the model matrix, and let G be the (unknown) correct matrix to transform the normal vector. The requirement becomes:

dot(G*N, M*T) = 0;

Writing the dot product in matrix form, dot(a, b) = transpose(a) * b, and using the rule transpose(A*B) = transpose(B) * transpose(A):

dot(T', N') = dot(G*N, M*T) = transpose(G*N) * M * T = transpose(N) * transpose(G) * M * T;

If we choose G such that

transpose(G) * M = I

then the middle factors cancel:

dot(T', N') = transpose(N) * I * T = dot(N, T) = 0;

which is exactly the condition we need. Solving transpose(G) * M = I gives

transpose(G) = inverse(M)

G = transpose(inverse(M))

That is the whole derivation of the transpose of the inverse of the upper-left submatrix of the model matrix.
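As a quick numeric check of the result (a plain-Python sketch; the matrices and vectors here are my own example, not from the article): with M a non-uniform scale, transforming the tangent by M and the normal by G = transpose(inverse(M)) keeps them perpendicular.

```python
# Verify: G = transpose(inverse(M)) preserves dot(T', N') = 0.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def mat_vec(m, v):  # 3x3 row-major matrix times vector
    return [dot(row, v) for row in m]

def transpose(m):
    return [list(col) for col in zip(*m)]

# M: scale x by 2 (diagonal, so its inverse is just the reciprocals).
M = [[2.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
M_inv = [[0.5, 0.0, 0.0],
         [0.0, 1.0, 0.0],
         [0.0, 0.0, 1.0]]
G = transpose(M_inv)

T = [1.0, 1.0, 0.0]   # tangent
N = [-1.0, 1.0, 0.0]  # normal, dot(T, N) == 0

T2 = mat_vec(M, T)  # tangent transformed by M
N2 = mat_vec(G, N)  # normal transformed by G

print(dot(T2, N2))  # 0.0: still perpendicular
```

Transforming N by M directly instead of G would give a dot product of -3 here, which is the broken case shown earlier.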

Also: the normal vector is only a direction vector and does not represent a specific position in space. In addition, the normal vector has no homogeneous coordinate (the w component of the vertex positions). That means translation should not affect the normal vector. Therefore, if we are going to multiply the normal vector by a model matrix, we must remove the translation part of the matrix and take only the upper-left 3x3 submatrix of the model matrix (note that we could also set the w component of the normal vector to 0 and multiply by the full 4x4 matrix; that also removes the translation). For the normal vector, we only want scaling and rotation applied.
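A small sketch of that last point (plain Python, with a made-up translation): with w = 0 the translation column of a 4x4 matrix contributes nothing, while a position with w = 1 is displaced.

```python
# Sketch: w = 0 makes a 4x4 translation have no effect on a
# direction vector, while w = 1 (a position) is displaced.

def mat_vec4(m, v):  # 4x4 row-major matrix times vec4
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

# 4x4 model matrix: identity rotation/scale plus a translation (5, 0, 0).
model = [[1.0, 0.0, 0.0, 5.0],
         [0.0, 1.0, 0.0, 0.0],
         [0.0, 0.0, 1.0, 0.0],
         [0.0, 0.0, 0.0, 1.0]]

normal = [0.0, 1.0, 0.0, 0.0]    # w = 0: direction
position = [0.0, 1.0, 0.0, 1.0]  # w = 1: position

print(mat_vec4(model, normal))    # [0.0, 1.0, 0.0, 0.0] -- unchanged
print(mat_vec4(model, position))  # [5.0, 1.0, 0.0, 1.0] -- translated
```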

The final approach is:

Normal = mat3(transpose(inverse(model))) * aNormal;
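For reference, here is a rough CPU-side equivalent of that GLSL expression, written as a plain-Python sketch (the 3x3 inverse uses the adjugate method, and the example matrix, a rotation combined with a non-uniform scale, is my own):

```python
# Sketch of what mat3(transpose(inverse(model))) computes: invert the
# upper-left 3x3 of the model matrix, then transpose the result.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def mat_vec(m, v):
    return [dot(row, v) for row in m]

def transpose(m):
    return [list(col) for col in zip(*m)]

def inverse3(m):
    # Inverse of a 3x3 matrix via the adjugate (cofactor) method.
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    A = e * i - f * h
    B = -(d * i - f * g)
    C = d * h - e * g
    det = a * A + b * B + c * C
    adj = [[A, -(b * i - c * h), b * f - c * e],
           [B, a * i - c * g, -(a * f - c * d)],
           [C, -(a * h - b * g), a * e - b * d]]
    return [[x / det for x in row] for row in adj]

def normal_matrix(model3):
    return transpose(inverse3(model3))

# Model transform: rotate 90 degrees about z, then scale x by 2.
M = [[0.0, -1.0, 0.0],
     [2.0,  0.0, 0.0],
     [0.0,  0.0, 1.0]]
G = normal_matrix(M)

T = [1.0, 1.0, 0.0]   # tangent
N = [-1.0, 1.0, 0.0]  # normal
print(dot(mat_vec(M, T), mat_vec(G, N)))  # 0.0: perpendicularity preserved
```

In practice this matrix is usually computed once on the CPU and passed to the shader as a uniform, since inverting a matrix per vertex in GLSL is wasteful.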
