Part 2: Lighting and texturing the 3D world
Lights of the world
In the course of transformation, usually in the coordinate space known as view space, we come across one of the most important calculations: lighting. It's one of those things that, when it works, you don't notice it, but when it doesn't, you notice nothing else. There are many different lighting methods, ranging from simply working out how a polygon faces a light and applying a percentage of the light's color according to that direction and the light's distance from the polygon, all the way up to generating smooth-edged light maps to overlay on the base textures. And some APIs actually provide prebuilt lighting. OpenGL, for example, offers lighting per polygon, per vertex, and per pixel.
With vertex lighting, you determine how many polygons share a given vertex, take the average of all the normal vectors of the polygons sharing it, and assign that averaged normal to the vertex. Each vertex of a given polygon will then have a slightly different normal, so you blend or interpolate the lighting colors computed at the vertices across the polygon in order to get smooth lighting. Viewed this way, you can't pick out the individual polygons. The advantage of this approach is that hardware transform and lighting (T&L) can often be used to do it quickly. The drawback is that it can't cast shadows. For example, even with the light on the right side of a model, the left arm should be in the shadow of the body, when in fact both of the model's arms end up lit the same way.
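A minimal sketch of that normal-averaging step, assuming simple illustrative Vec3 and Triangle types (this is the general idea, not any particular engine's code):

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

Vec3 add(Vec3 a, Vec3 b)   { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return len > 0.0f ? Vec3{v.x/len, v.y/len, v.z/len} : v;
}

struct Triangle { int i0, i1, i2; };  // indices into the vertex array

// For each vertex, accumulate the face normal of every triangle sharing
// it, then normalize: the averaged direction becomes the vertex normal.
std::vector<Vec3> buildVertexNormals(const std::vector<Vec3>& verts,
                                     const std::vector<Triangle>& tris) {
    std::vector<Vec3> normals(verts.size(), Vec3{0.0f, 0.0f, 0.0f});
    for (const Triangle& t : tris) {
        Vec3 n = cross(sub(verts[t.i1], verts[t.i0]),
                       sub(verts[t.i2], verts[t.i0]));  // face normal
        normals[t.i0] = add(normals[t.i0], n);
        normals[t.i1] = add(normals[t.i1], n);
        normals[t.i2] = add(normals[t.i2], n);
    }
    for (Vec3& n : normals) n = normalize(n);
    return normals;
}
```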
These simpler methods use color to achieve their goals. When you draw a polygon with flat lighting, you tell the rendering engine to render the whole polygon in one given color. This is called flat shading. (With this method, one lighting intensity corresponds to the whole polygon; every point on its surface is drawn with the same intensity value, the rendered surface looks flat, and the edges between adjacent polygons are plainly visible rather than smoothed.)
With vertex coloring (Gouraud shading), you have the rendering engine give each vertex a specific color. As the pixels covered by the polygon's projection are drawn, each pixel's color is interpolated from the vertex colors according to the pixel's distance from each vertex. This is actually how Quake III's models are lit, and it works amazingly well.
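Roughly what that per-pixel interpolation looks like, assuming the rasterizer hands us barycentric weights (w0 + w1 + w2 = 1) for each covered pixel; all names are illustrative:

```cpp
struct Color { float r, g, b; };

// Gouraud: lighting is computed once per vertex (c0, c1, c2); each covered
// pixel just blends those three colors by its barycentric weights.
Color gouraudPixel(Color c0, Color c1, Color c2,
                   float w0, float w1, float w2) {
    return { c0.r*w0 + c1.r*w1 + c2.r*w2,
             c0.g*w0 + c1.g*w1 + c2.g*w2,
             c0.b*w0 + c1.b*w1 + c2.b*w2 };
}
```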
Then there's Phong shading. Like Gouraud shading, it works across the texture, but rather than interpolating the vertex colors, it interpolates the vertex normals, doing the full lighting work for every pixel the polygon projects onto. With Gouraud shading, you only need to know which lights fall on each vertex. With Phong shading, you do that work for every pixel.
Not surprisingly, Phong shading achieves a much smoother effect, but the price is a lighting calculation for every pixel, which is very time-consuming. Flat shading is fast but crude. Phong shading is more expensive than Gouraud shading, but it gives the best results, allowing for specular ("highlight") effects. All of this is the kind of trade-off you make in game development.
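For contrast, a per-pixel Phong sketch, reusing the Vec3 and Color helpers from the sketches above; the light, view, and material parameters are illustrative assumptions, and a real rasterizer would supply the weights:

```cpp
#include <algorithm>
#include <cmath>

float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Phong: interpolate the *normal* (not the color), then run the full light
// equation at this pixel -- the per-pixel specular term is the highlight.
Color phongPixel(Vec3 n0, Vec3 n1, Vec3 n2,
                 float w0, float w1, float w2,
                 Vec3 lightDir, Vec3 viewDir,
                 Color diffuse, float shininess) {
    Vec3 n = normalize({ n0.x*w0 + n1.x*w1 + n2.x*w2,
                         n0.y*w0 + n1.y*w1 + n2.y*w2,
                         n0.z*w0 + n1.z*w1 + n2.z*w2 });
    float nl = std::max(0.0f, dot(n, lightDir));   // diffuse term
    Vec3 r = { 2*nl*n.x - lightDir.x,              // R = 2(N.L)N - L
               2*nl*n.y - lightDir.y,
               2*nl*n.z - lightDir.z };
    float spec = std::pow(std::max(0.0f, dot(r, viewDir)), shininess);
    return { diffuse.r*nl + spec, diffuse.g*nl + spec, diffuse.b*nl + spec };
}
```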
Different kinds of lights

Then there are light maps, where you use a second texture map holding the lighting and blend it with the existing texture. This works beautifully, but it's essentially a canned effect, precomputed before rendering. If you use dynamic lighting (that is, the light moves, or gets switched on and off at run time), you have to regenerate the light maps every frame, modifying them to follow the dynamic light's motion. Light maps are fast to render, but the memory needed to store all those light textures is expensive. You can apply compression tricks to make them take less memory, or shrink them, or even make them monochrome. If you do have several dynamic lights in a scene, regenerating the light maps will end up eating expensive CPU cycles.
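At its simplest, applying a precomputed light map is a per-texel multiply (a "modulate"); a sketch, assuming both maps share the same dimensions:

```cpp
#include <cstdint>
#include <vector>

struct Texel { uint8_t r, g, b; };

// Darken the base texture wherever the light map is dark; white light-map
// texels leave the base texture unchanged.
void applyLightMap(std::vector<Texel>& baseTex,
                   const std::vector<Texel>& lightMap) {
    for (std::size_t i = 0; i < baseTex.size(); ++i) {
        baseTex[i].r = uint8_t(baseTex[i].r * lightMap[i].r / 255);
        baseTex[i].g = uint8_t(baseTex[i].g * lightMap[i].g / 255);
        baseTex[i].b = uint8_t(baseTex[i].b * lightMap[i].b / 255);
    }
}
```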
Many games use some kind of hybrid lighting approach. Quake III, for example, uses light maps for the scene and vertex lighting for the animated models. Precomputed lighting doesn't produce the right effect on an animated model -- the whole polygonal model would receive the full light value of the light -- so dynamic lighting is used to get the effect right. Using a hybrid lighting approach is a compromise most people never notice, and it usually makes the effect look "right." That's what games are all about -- doing whatever is necessary to make the effect look "right," without necessarily being right.

Of course, all of that goes out the window with the new Doom engine, but then seeing all of its effects requires at least a 1 GHz CPU and a GeForce 2 card. It's progress, but everything comes at a price.
Once the scene has been transformed and lit, we move on to clipping. Without getting into the gory details, clipping works out which triangles are completely inside the scene (the volume known as the view frustum) and which are only partially inside it. Triangles completely inside the scene are said to be trivially accepted, and they can be processed as they are. For a triangle that is only partly in the scene, the portion outside the frustum is clipped off, and the remaining polygon inside the frustum needs to be re-closed so that it lies entirely within the visible scene. For more details, please refer to our 3D pipeline guide.
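A sketch of the partially-inside case: clipping a polygon against a single frustum plane, Sutherland-Hodgman style (a full clipper repeats this for all six planes). It reuses the Vec3 helpers from the lighting sketches; the plane is given so that "inside" means n·p + d >= 0:

```cpp
#include <vector>

std::vector<Vec3> clipAgainstPlane(const std::vector<Vec3>& poly,
                                   Vec3 n, float d) {
    std::vector<Vec3> out;
    for (std::size_t i = 0; i < poly.size(); ++i) {
        Vec3 a = poly[i];
        Vec3 b = poly[(i + 1) % poly.size()];
        float da = dot(n, a) + d;
        float db = dot(n, b) + d;
        if (da >= 0) out.push_back(a);      // keep vertices inside the plane
        if ((da >= 0) != (db >= 0)) {       // edge crosses the plane:
            float t = da / (da - db);       // emit the intersection point,
            out.push_back({ a.x + t*(b.x - a.x),   // which "re-closes" the
                            a.y + t*(b.y - a.y),   // clipped polygon
                            a.z + t*(b.z - a.z) });
        }
    }
    return out;
}
```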
Once the scene has been clipped, the next stage in the pipeline is the triangle generation phase (also known as scan-line conversion), where the scene is mapped to 2D screen coordinates. From here on, it's rendering.
Textures and MIP mapping
Textures are hugely important to making 3D scenes look real; they're the small images you apply, piece by piece, to the areas and objects of a scene. Lots of textures cost lots of memory, and a variety of techniques exist to help manage their size. Texture compression is one way to make the texture data smaller while keeping the picture information intact. Compressed textures take up less space on the CD and, more importantly, less space in memory and on the 3D card. Also, the first time you ask the card for a texture, the compressed (smaller) version travels across the AGP interface from the PC's main memory to the 3D card, which is that much faster. Texture compression is a good thing. We'll have more to say about texture compression below.
MIP mapping

Another technique used to reduce texture memory and bandwidth demands is MIP mapping. MIP mapping preprocesses a texture to produce multiple copies of it, each copy half the size of the one before. Why would you do that? To answer that, you need to know how a 3D card displays a texture. In the worst case, you take a texture, stick it on a polygon, and output it to the screen. Say there's a one-to-one relationship: one texel (texture element) of the original texture map corresponds to one pixel of the polygon being textured. If the polygon you're displaying shrinks to half its size, the texture is effectively showing every other texel. Usually this is fine -- but in some cases it leads to visual weirdness. Take a brick wall. Suppose the original texture is a brick wall with lots of bricks, where the mortar between the bricks is only one pixel wide. If you shrink the polygon to half size, so the texture gets applied at every other texel, all the mortar can suddenly vanish, because it's been scaled away. What you see is just a strange-looking image.
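A sketch of building that chain of half-size copies with a simple 2x2 box filter, assuming a square, power-of-two, single-channel texture for brevity:

```cpp
#include <cstdint>
#include <vector>

// Each level averages 2x2 blocks of the level above it, so one-texel-wide
// features (like the mortar) fade out gradually instead of vanishing.
std::vector<std::vector<uint8_t>> buildMipChain(std::vector<uint8_t> level0,
                                                int size) {
    std::vector<std::vector<uint8_t>> chain{ level0 };
    while (size > 1) {
        int half = size / 2;
        const std::vector<uint8_t>& prev = chain.back();
        std::vector<uint8_t> next(half * half);
        for (int y = 0; y < half; ++y)
            for (int x = 0; x < half; ++x) {
                int sum = prev[(2*y)*size + 2*x]   + prev[(2*y)*size + 2*x+1]
                        + prev[(2*y+1)*size + 2*x] + prev[(2*y+1)*size + 2*x+1];
                next[y*half + x] = uint8_t(sum / 4);
            }
        chain.push_back(next);
        size = half;
    }
    return chain;
}
```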
With MIP mapping, you scale the image yourself before the card ever applies the texture, and because you can preprocess it, you can do a better job, so the mortar doesn't get dropped. When the 3D card draws the polygon with the texture, it detects the scaling factor and says, "you know, instead of shrinking the biggest texture, I'll just use a smaller one; it'll look better." Here, it's MIP mapping for everything, and everything for MIP mapping.
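The level pick itself can be as simple as taking log2 of the minification factor; a sketch, where texelsPerPixel is assumed to come from the rasterizer's scale detection:

```cpp
#include <algorithm>
#include <cmath>

// texelsPerPixel ~ how many texels of the full-size texture would land on
// one screen pixel; 1.0 means one-to-one, 2.0 means use the next level down.
int selectMipLevel(float texelsPerPixel, int levelCount) {
    int level = int(std::floor(std::log2(std::max(1.0f, texelsPerPixel))));
    return std::min(level, levelCount - 1);
}
```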
Multiple textures and bump mapping
A single texture map makes a big difference to the overall realism of a 3D scene, but using multiple textures can achieve even more memorable effects. This used to require multiple rendering passes, which seriously ate into the pixel fill rate. But many 3D accelerators with multiple pixel pipelines, such as ATI's Radeon and NVIDIA's GeForce 2 and later cards, can apply multiple textures in a single rendering pass. To produce a multitexture effect, you draw the polygon once with one texture, then draw another texture over it with some transparency. That lets you make a texture appear to move, or pulse, or even show a shadow effect. Draw the first texture map, then draw a "transparent" all-black texture over it, and you get a blackened but partly see-through layer accumulated on top -- instant shadow. The technique is called light mapping (sometimes dark mapping), and until the new Doom it has been the traditional way levels are lit in id's engines.
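For a feel of the single-pass setup, here's a hedged sketch using the ARB_multitexture extension typical of that era of hardware; extension-pointer loading and error checking are omitted, and the two texture ids (baseTex, lightTex) are assumed to have been created already:

```cpp
#include <GL/gl.h>
#include <GL/glext.h>

void setupLightMappedPass(GLuint baseTex, GLuint lightTex) {
    // Unit 0: the ordinary wall/floor texture.
    glActiveTextureARB(GL_TEXTURE0_ARB);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, baseTex);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);

    // Unit 1: the light map, multiplied over the base texture. Dark light-map
    // texels darken the result -- the "instant shadow" described above.
    glActiveTextureARB(GL_TEXTURE1_ARB);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, lightTex);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
}
```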
Bump mapping is an old technique that has recently resurfaced. A few years ago, Matrox was the first to push various forms of bump mapping in popular 3D games. Bump mapping is a way of shaping a texture to show how light falls across a surface, bringing out its contours or creases. Bump maps don't move with the light -- they're meant to represent the small imperfections on a surface, not big bumps. In a flight simulator, for example, you can use bump mapping to produce random surface detail on the ground instead of repeating the same texture over and over, which doesn't look very interesting.

Bump mapping produces fairly obvious surface detail, and while it's a clever trick, strictly speaking the bumps don't change as your viewing angle does. Now that newer ATI and NVIDIA cards can do per-pixel operations, this viewing-angle caveat is no longer a hard and fast rule. Either way, so far no game developer has made much use of it; more games could and should use bump mapping.
Cache thrashing = bad thing
The speed of texture cache management is crucial to a game engine. As with any cache, hits are good and misses are bad. If textures are getting swapped in and out of the graphics card's memory frequently, that's texture cache thrashing. When it happens, the API will typically dump every texture, with the result that all of them have to be reloaded on the next frame, which is time-consuming and wasteful. To the gamer, it shows up as stuttering frame rates while the API reloads the texture cache.

There are different techniques in texture cache management to minimize texture cache thrashing -- a decisive factor in keeping any 3D game engine fast. Texture management is a good thing -- it means getting the card to use each texture only once, not over and over. It sounds a bit contradictory, but in effect it means saying to the card, "look, all of these polygons use this one texture; can we load it just once instead of many times?" That stops the API (or the graphics driver software) from uploading the texture to the card more than once. In practice, APIs like OpenGL usually handle texture cache management themselves, meaning that based on rules such as how frequently a texture is accessed, the API decides which textures live on the card and which stay in main memory. The real problems are: (a) you often don't know the exact rules the API is using, and (b) you often ask to draw more textures in a frame than the card's memory can hold.
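One common shape for this kind of management is a least-recently-used cache. A minimal sketch with illustrative names -- no claim that this is what any particular API does internally:

```cpp
#include <cstddef>
#include <list>
#include <unordered_map>
#include <utility>

class TextureCache {
    std::size_t capacityBytes, usedBytes = 0;
    std::list<int> lru;  // front = most recently used texture id
    std::unordered_map<int,
        std::pair<std::list<int>::iterator, std::size_t>> entries;

public:
    explicit TextureCache(std::size_t capacity) : capacityBytes(capacity) {}

    // Returns true if the texture was already resident (no upload needed).
    bool touch(int texId, std::size_t sizeBytes) {
        auto it = entries.find(texId);
        if (it != entries.end()) {
            lru.splice(lru.begin(), lru, it->second.first);  // mark as hot
            return true;
        }
        // Evict the coldest textures until the new one fits.
        while (usedBytes + sizeBytes > capacityBytes && !lru.empty()) {
            int victim = lru.back();
            usedBytes -= entries[victim].second;
            entries.erase(victim);
            lru.pop_back();
        }
        lru.push_front(texId);  // "upload" the texture and start tracking it
        entries[texId] = { lru.begin(), sizeBytes };
        usedBytes += sizeBytes;
        return false;
    }
};
```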
Another texture cache management technique is the texture compression we discussed earlier. Much as a sound waveform file gets compressed into an MP3 file, textures can be compressed, though not at anywhere near MP3's compression ratios. Sound compression from waveform to MP3 can reach 11:1, while most hardware-supported texture compression algorithms manage about 4:1 -- but even that makes a huge difference. On top of that, during rendering the hardware decompresses textures dynamically, only as they're needed. That's rather nice; we only touch the surfaces that are about to be used.
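The arithmetic, for concreteness, with sizes assumed purely for illustration:

```cpp
#include <cstddef>
#include <cstdio>

int main() {
    const std::size_t raw = 256u * 256u * 4u;  // 256x256, 32 bits/texel: 256 KB
    const std::size_t compressed = raw / 4;    // the 4:1 ratio cited above: 64 KB
    std::printf("raw: %zu KB, compressed: %zu KB\n",
                raw / 1024, compressed / 1024);
    return 0;
}
```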
As noted above, another technique is to ensure the renderer asks the card to draw each texture only once: make sure all the polygons that use the same texture get sent to the card together, rather than one model here, another model there, and then back to the first texture again. Draw it just once, and it crosses the AGP interface just once. That's what Quake III does with its shader system. As polygons are processed, they're appended to an internal shader list; once all the polygons have been processed, the renderer walks the texture list, and all the polygons using each texture are sent over at the same time.
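A sketch of that batching idea with stand-in engine calls; this shows the general technique, not Quake III's actual code:

```cpp
#include <unordered_map>
#include <utility>
#include <vector>

struct Poly { int firstVertex, vertexCount; };  // stand-in surface data

void bindTexture(int /*texId*/) {}      // stand-in: one texture upload/bind
void drawPolygon(const Poly& /*p*/) {}  // stand-in: submit one polygon

void renderByTexture(const std::vector<std::pair<int, Poly>>& surfaces) {
    // Pass 1: bucket every polygon under its texture id while walking the scene.
    std::unordered_map<int, std::vector<const Poly*>> byTexture;
    for (const auto& s : surfaces)
        byTexture[s.first].push_back(&s.second);

    // Pass 2: bind each texture exactly once, then submit everything using it.
    for (const auto& bucket : byTexture) {
        bindTexture(bucket.first);
        for (const Poly* p : bucket.second)
            drawPolygon(*p);
    }
}
```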
The above process can work against the card's hardware T&L (if it's supported). You end up with lots of small groups of polygons that share a texture but each use different transformation matrices, which means more time spent setting up the hardware T&L engine and more time wasted. It does work fine for the actual on-screen models, though, since they tend to use one uniform texture for the whole model. The world scene, on the other hand, often plays hell with it, because many polygons tend to use the same wall texture. Usually it's not that serious because, by and large, world textures aren't that big, so the API's texture caching will handle it for you and keep the texture on the card for reuse.

On consoles there's usually no texture caching system. On the PS2, you're better off staying away from this approach. On the Xbox it doesn't matter, since there is no dedicated graphics memory (it's a UMA architecture) and all the textures stay in main memory anyway.
In fact, in today's modern PC FPS games, trying to push lots of textures across the AGP interface is the second most common bottleneck. The biggest bottleneck is the geometry processing needed to make things appear where they're supposed to be. By far the most time-consuming work in a modern 3D FPS game is the math that computes the correct world position of every vertex in a model. If you don't keep the textures in the scene within budget, pushing masses of textures across the AGP interface comes next. You do have the power to affect this, though. By dropping the top MIP level (remember, the system that subdivides the textures for you?), you can cut the textures the game is trying to push to the card down to roughly a quarter of their size, since halving both dimensions quarters a texture's footprint. Your visual quality drops -- especially noticeable in cinematic close-ups -- but your frame rate goes up. This approach is particularly helpful for online games. In fact, two games, Soldier of Fortune II and Jedi Knight II: Jedi Outcast, were designed for cards that weren't yet mainstream in the market. To view their textures at maximum size, your 3D card needs a minimum of 128MB of memory. Both products were designed with the future in mind.
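To see why dropping the top level helps so much: the top level dominates the memory of the whole MIP chain. A quick illustration, assuming a 512x512, 32-bit texture:

```cpp
#include <cstddef>
#include <cstdio>

int main() {
    std::size_t full = 0, withoutTop = 0;
    for (int size = 512; size >= 1; size /= 2) {
        std::size_t bytes = std::size_t(size) * size * 4;  // 32 bits per texel
        full += bytes;
        if (size < 512) withoutTop += bytes;  // chain minus its top level
    }
    std::printf("full chain: %zu KB, top level dropped: %zu KB\n",
                full / 1024, withoutTop / 1024);  // ~1365 KB vs ~341 KB
    return 0;
}
```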
That's it for Part 2. In the next installment, we'll cover a host of topics, including memory management, fog effects, depth testing, anti-aliasing, vertex shading, APIs, and more.