GENERATING TERRAIN AND HI DETAILS USING TEXTURE MAPS

Z. Sabati, A. Bernik

1. Abstract

Texturing is one of the fundamental elements of 3D computer graphics. The application analyzed in this paper has two components: the first is the relationship between textures and 3D models, and the second is the generation of terrain relief from textures. The basic principles and rules necessary for a good-quality result are presented, including the idea that texture dimensions should be powers of two. The advantages and disadvantages of other ways of generating terrain, such as polygonal geometry and points (voxels), are discussed. The analysis covers two platforms, PC and iOS, and the practical work results in a table showing the comparative compatibility of 2D graphic elements. Three basic types of texture maps are described: Mip, Detail and Normal maps. The principle behind normal maps and the algorithm for creating them are presented, along with their uses, their advantages and the ways in which this technique supports the process of creating 3D terrain.

2. Basic techniques used for terrain creation

2.1 Heightmaps

With a heightmap, you store only the height component for each vertex (usually as a 2D texture) and provide position and resolution only once for the whole quad. The landscape geometry is generated each frame using the geometry shader or hardware tessellation. Heightmaps are the fastest way to store landscape data for collision detection: you only need to store one value per vertex and no indices. It's possible to improve this further by using detail maps or a noise filter to increase perceived detail.[3]

2.2 Voxels

Voxel terrain stores terrain data for each point in a 3D grid. This method always uses the most storage per meaningful surface detail, even if compression methods like sparse octrees are used. The term "voxel engine" was often used to describe a method of ray marching terrain heightmaps common in older 3D games; this section applies only to terrain stored as voxel data.[6]

2.3 Meshes

Polygon meshes are the most flexible and precise way of storing and rendering terrain. They are often used in games where precise control or advanced terrain features are needed. Only the usual projection calculation is done in the vertex shader; a geometry shader isn't needed. All coordinates are stored individually for each vertex, so it's possible to move them horizontally and to increase mesh density in places with finer details. This also means the mesh will usually need less memory than a heightmap, because vertices can be sparser in areas with fewer small details. The mesh is rendered as-is, so there won't be any glitches or strange-looking borders, and it's possible to leave holes and create overhangs and seamless tunnels. The drawbacks are that level-of-detail switching is only possible with precomputed meshes, which causes "jumps" when switching unless additional data maps old vertices to new ones, and that finding the vertices that correspond to an area that should be modified is slow: unlike with heightmaps and voxel data, the memory address for a given location usually can't be calculated directly.[8]

Table 1. Techniques for terrain generation
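Before moving on, a brief illustration of the direct-address lookup described in 2.1. The following Python sketch is our own illustration, not taken from the paper; the function name, grid layout and cell size are assumptions. It samples the terrain height at an arbitrary world position with bilinear interpolation, which needs no spatial search because the surrounding grid cells can be addressed directly:

    import numpy as np

    def sample_height(heightmap, x, z, cell_size=1.0):
        # Bilinearly interpolate the terrain height at world position (x, z).
        # Because a heightmap is a regular grid, the four surrounding samples
        # are found by direct address calculation; no spatial search is
        # needed, which is why heightmaps are fast for collision queries.
        u, v = x / cell_size, z / cell_size
        i, j = int(u), int(v)          # cell indices (point assumed in-grid)
        fu, fv = u - i, v - j          # fractional position inside the cell
        h00 = heightmap[j, i]
        h10 = heightmap[j, i + 1]
        h01 = heightmap[j + 1, i]
        h11 = heightmap[j + 1, i + 1]
        return (h00 * (1 - fu) * (1 - fv) + h10 * fu * (1 - fv)
                + h01 * (1 - fu) * fv + h11 * fu * fv)

    # Usage: a 4x4 grid of heights; query a point inside the first cell.
    hm = np.arange(16, dtype=float).reshape(4, 4)
    print(sample_height(hm, 0.5, 0.5))  # 2.5, the average of 0, 1, 4, 5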
Heightmaps are the best solution if you don't need overhangs or holes in the terrain surface and you use physics or dynamic terrain. They are scalable and work well for most games.

3. Limitations for the texture in terms of engine

There are a few fundamental "rules" for making content for any sort of interactive media that need particular attention. The following section discusses one of the core rules, that of texture size and dimensions, and how it relates to a form of texture optimization commonly called the "power of two" rule. The main questions were: is our project affected by this rule, and what types of media projects use it? [2]

Figure 1. Preview of good and bad texture mapping [2]

Figure 1 is a visual representation, made in Blender, of what a game engine does to a badly proportioned texture when it is applied to a model. A) shows what happens if an incorrectly sized texture is loaded "as is": the red areas indicate parts of the model that would have nothing applied. B) shows what happens when the game resizes a bad texture; note the mismatches between faces, a common result. C) shows a properly sized and proportioned texture applied to an object without any of these problems.

3.1 The power of two rule

This is a simple set of criteria, applicable to all game-related images, that makes sure they conform to a series of regular dimensions, typically obtained by doubling up or dividing down by two. Textures whose width and/or height is one of "8", "16", "32", "64", "128", "256", "512", "1024", "2048" (or higher for more modern games) are regarded as valid and properly optimized for quick loading into a game and processing into memory.
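The rule is easy to check programmatically. A minimal Python sketch (our own illustration; the function names are assumptions):

    def is_power_of_two(n):
        # A power of two has exactly one bit set, so n & (n - 1) clears it.
        return n > 0 and (n & (n - 1)) == 0

    def next_power_of_two(n):
        # Round a texture dimension up to the nearest valid size.
        p = 1
        while p < n:
            p *= 2
        return p

    for size in (512, 600, 2048):
        print(size, is_power_of_two(size), next_power_of_two(size))
    # 512 True 512 / 600 False 1024 / 2048 True 2048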
Figure 2. Unwrapped texture maps: power of two and random texture pixel size [2]

Ignoring the power of two rule has a number of knock-on effects on texture making, one of which relates directly to image quality. Because the game engine has to physically adjust the size and dimensions of an incorrectly proportioned image, it degrades the fidelity of the image itself: fine details, such as the freckles on a character model's skin or the pattern of fabric on a piece of furniture, become blurred, pixelated, or show other visual artifacts, because the resize process has to extrapolate the necessary data from what is available.

4. Texture types and support

Every texture image imported into the engine is converted into a basic format supported by the target graphics cards. The formats for the PC and iOS platforms are shown in Table 2.

Mip maps. Mip maps are a list of progressively smaller versions of an image, used to optimize performance in real-time 3D engines. Objects that are far away from the camera use the smaller texture versions. Using mip maps consumes 33% more memory, but not using them can cause a huge performance loss. You should always use mip maps for in-game textures; the only exceptions are textures that will never be minified (e.g. GUI textures).[7]

Detail maps. When making a terrain, a developer normally uses the main texture to show where the areas of grass, rocks, sand, etc. are located. On a terrain of any decent size, this texture ends up very blurry. Detail textures hide this fact by fading in small details as the main texture gets close to the camera. A detail texture is a small, fine pattern, for example wood grain, imperfections in stone, or earthy details on a terrain, which is faded in as you approach a surface. Detail textures must tile in all directions. Color values from 0 to 127 make the object they are applied to darker, 128 doesn't change anything, and lighter colors make the object lighter. It is very important that the image is centered around 128, otherwise the object will get lighter or darker as you approach it. Detail textures are used explicitly with the Diffuse Detail shader.

Table 2. Formats and compatible platforms
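Two of the numbers quoted above can be verified in a few lines. The 33% memory overhead of mip maps follows from the geometric series 1/4 + 1/16 + ... = 1/3, and the "centered around 128" rule corresponds to a modulation in which 128 is neutral. The following Python sketch is our own illustration; in particular, the formula result = base * detail / 128 is one common formulation, not taken from the paper:

    def mip_chain_pixels(width, height):
        # Total pixel count of a full mip chain, base level included.
        total = 0
        while True:
            total += width * height
            if width == 1 and height == 1:
                break
            width, height = max(1, width // 2), max(1, height // 2)
        return total

    base = 1024 * 1024
    print(f"mip overhead: {(mip_chain_pixels(1024, 1024) - base) / base:.1%}")
    # -> 33.3%, matching the figure quoted above

    def apply_detail(base_value, detail_value):
        # Modulate a base texel (0-255) by a detail texel centred on 128:
        # below 128 darkens, 128 is neutral, above 128 lightens.
        return min(255, base_value * detail_value // 128)

    print(apply_detail(200, 128), apply_detail(200, 64), apply_detail(200, 255))
    # -> 200 (unchanged), 100 (darker), 255 (clamped lighter)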
Normal maps. Normal maps are used by normal map shaders to make low-polygon models look as if they contain more detail. Some game engines use normal maps encoded as RGB images. The developer also has the option to generate a normal map from a grayscale height map image.

5. Normal mapping

Normal mapping is a technique used to light a 3D model with a low polygon count as if it were a more detailed model. It does not actually add any detail to the geometry, so the edges of the model will still look the same; the interior, however, will look much like the high-resolution model used to generate the normal map. The RGB values of each texel in the normal map represent the x, y, z components of the normalized mesh normal at that texel. Instead of using interpolated vertex normals to compute the lighting, the normals from the normal map texture are used.[4]
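A minimal Python sketch of the RGB encoding just described (our own illustration): each channel maps a normal component's [-1, 1] range to [0, 255], so a texel is decoded back to a unit normal like this:

    import numpy as np

    def decode_normal(rgb):
        # Map each channel from [0, 255] back to [-1, 1], then renormalize
        # to undo quantization error.
        n = np.asarray(rgb, dtype=float) / 255.0 * 2.0 - 1.0
        return n / np.linalg.norm(n)

    # (128, 128, 255) is the characteristic normal-map blue: it decodes to
    # approximately (0, 0, 1), a normal pointing straight out of the surface.
    print(decode_normal((128, 128, 255)))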
Figure 4. Low and high poly models and the usage of normal maps

The most basic information you need for shading a surface is the surface normal: the vector that points straight away from the surface at a particular point. For flat surfaces the normal is the same everywhere; for curved surfaces it varies continuously across the surface. Typical materials reflect the most light when the surface normal points straight at the light source, so by comparing the surface normal with the direction of the incoming light you get a good measure of how bright the surface should be under illumination (Figure 5).
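This brightness measure is the standard Lambertian diffuse term: the dot product of the unit surface normal and the unit direction toward the light. A minimal Python sketch (our own illustration):

    import numpy as np

    def lambert(normal, light_dir):
        # Diffuse brightness is the dot product of the unit surface normal
        # and the unit direction toward the light, clamped at zero so that
        # surfaces facing away from the light receive no diffuse light.
        return max(0.0, float(np.dot(normal, light_dir)))

    n = np.array([0.0, 0.0, 1.0])                 # surface facing +z
    print(lambert(n, np.array([0.0, 0.0, 1.0])))  # 1.0: light straight on
    print(lambert(n, np.array([1.0, 0.0, 0.0])))  # 0.0: light grazes surface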
Figure 5. Lighting a surface using its own and high-resolution normals [5]

To use normals for lighting, you have two options. The first is to work on a geometry basis, assigning a normal to every triangle in the planet mesh. This is straightforward, but ties the quality of the shading to the level of detail in the geometry. The second, better way is to use a normal map. You stretch an image over the surface, as you would for applying textures, but instead of a color, each pixel in the image represents a normal vector in 3D: the pixel's channels (red, green, blue) describe the vector's X, Y and Z components.

5.1 Object and tangent space normal maps

Whether object space or tangent space is used, normal mapping with skeletal animation works much the same. The main idea behind using tangent space is to go through extra steps to allow a normal map texture to be reused across multiple parts of the model: normals, tangents and binormals are stored at the vertices, the normals in the normal map are computed relative to them, and the conversion is reversed when rendering. Tangent space also makes it possible to skin a flat bump texture around a model.[5]

Figure 7. Low and high res Raptor [9]

A limitation of object space normal mapping, as opposed to tangent space, is that every point on the skin must have its own distinct UV coordinates, so parts of the texture cannot be reused for multiple parts of the model. Such reuse is otherwise a pretty common practice: both sides of the Raptor's face, for example, might share the exact same UV coordinates, effectively reusing one part of the texture for both sides of the face. With a straightforward implementation of an object space normal map, however, the normals on opposite sides of the Raptor's face obviously have to point in different directions, so they each need to be defined by separate areas of the texture. There is a way around this for parts of a model that are completely symmetric along an axis: generate the normal map for one side only, and then render each side separately.[9]

6. Algorithm for calculating the normals of a heightmap

First, we derive normals for a regular flat terrain heightmap. To start, we define the terrain surface as a 2D heightmap, i.e. a function f(u, v) of two coordinates that returns a height value, from which we can create a three-dimensional surface g(u, v) = (u, v, f(u, v)):

Figure 8. Creating a three-dimensional UV field [5]

We can use this formal description to find tangent and normal vectors. A vector is tangent when its direction matches the slope of the surface in a particular direction. Differential calculus tells us that slope is found by taking the derivative; for functions of multiple variables, this means we can find tangent vectors along curves of constant v or constant u (the thin grid lines in the diagram). To do this, we take partial derivatives with respect to u (with v constant) and with respect to v (with u constant). The matrix of all partial derivatives is called the Jacobian matrix J, whose rows form the tangent vectors t_u = ∂g/∂u = (1, 0, ∂f/∂u) and t_v = ∂g/∂v = (0, 1, ∂f/∂v), indicated in red and purple:

Figure 9. Creating the surface normals [5]

The cross product of t_u and t_v gives n, the surface normal.
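A minimal Python sketch of this algorithm (our own illustration; the example height function and the central-difference estimation of the partial derivatives are assumptions):

    import numpy as np

    def terrain_normal(f, u, v, eps=1e-4):
        # Tangents are the partial derivatives of g(u, v) = (u, v, f(u, v)),
        # i.e. t_u = (1, 0, df/du) and t_v = (0, 1, df/dv); the partials of
        # f are estimated here with central differences.
        dfdu = (f(u + eps, v) - f(u - eps, v)) / (2 * eps)
        dfdv = (f(u, v + eps) - f(u, v - eps)) / (2 * eps)
        t_u = np.array([1.0, 0.0, dfdu])
        t_v = np.array([0.0, 1.0, dfdv])
        n = np.cross(t_u, t_v)  # equals (-df/du, -df/dv, 1)
        return n / np.linalg.norm(n)

    # Usage: a paraboloid bump f = u^2 + v^2; at its lowest point the
    # surface is flat, so the normal points straight up.
    f = lambda u, v: u * u + v * v
    print(terrain_normal(f, 0.0, 0.0))  # ~ (0, 0, 1)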
Figure 10. Creation of the spherical mapping [5]

The principle behind the spherical mapping is this: first we take the vector (s, t, 1), which lies in the base plane of the flat terrain. We normalize this vector by dividing it by its length w, which has the effect of projecting it onto the unit sphere: (s/w, t/w, 1/w) is at unit distance from (0, 0, 0). Then we multiply the resulting vector by the terrain height h to create the terrain on the sphere's surface, relative to its center: k(s, t, h) = (h·s/w, h·t/w, h/w).[5]

Just as with the function g(u, v) and its Jacobian J(u, v), we can find the Jacobian matrix J(s, t, h) of k(s, t, h). Because the function k has three input values, there are three tangents, along curves of varying s (with t and h constant), varying t (with s and h constant) and varying h (with s and t constant). The three tangents are named t_s, t_t and t_h.[5]

Figure 11. Creation of the spherical mapping [5]

These three vectors describe a local frame of reference at each point in space; near the edges of the grid they become more skewed and angular. We use them to transform the flat frame of reference into the right shape, so that we can construct the new surface normal. That is, to find the partial derivatives (i.e. tangent vectors) of the final spherical terrain with respect to the original terrain coordinates u and v, we take the flat terrain's tangents t_u and t_v and multiply them by J(s, t, h). Once we have the two post-warp tangents, we take their cross product and obtain the normal of the spherical terrain. It is important to note that this is not the same as simply multiplying the flat terrain normal by J(s, t, h): the rows of J(s, t, h) do not form a set of perpendicular vectors, which means multiplication by it does not preserve angles between vectors. In other words, J(s, t, h)·n, with n the flat terrain normal, would not be perpendicular to the spherical terrain.[5]
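A minimal Python sketch of the whole spherical construction (our own illustration; the numerical Jacobian is an assumption, and we store the tangents t_s, t_t, t_h as the columns of J rather than as rows, which is the convention that makes the chain rule J·t work directly):

    import numpy as np

    def k(s, t, h):
        # Project (s, t, 1) onto the unit sphere by dividing by its length w,
        # then scale by the terrain height h, exactly as described above.
        w = np.linalg.norm([s, t, 1.0])
        return h / w * np.array([s, t, 1.0])

    def jacobian_k(s, t, h, eps=1e-5):
        # 3x3 Jacobian of k with the tangents t_s, t_t, t_h as its columns,
        # estimated with central differences.
        p = np.array([s, t, h])
        cols = [(k(*(p + d)) - k(*(p - d))) / (2 * eps)
                for d in np.eye(3) * eps]
        return np.column_stack(cols)

    def spherical_normal(t_u, t_v, s, t, h):
        # Warp the flat tangents by J and cross them. We transform the
        # tangents, not the flat normal, because J is not angle-preserving.
        J = jacobian_k(s, t, h)
        n = np.cross(J @ t_u, J @ t_v)
        return n / np.linalg.norm(n)

    # Usage: flat-terrain tangents at the patch centre (s = t = 0), h = 1.
    print(spherical_normal(np.array([1.0, 0.0, 0.001]),
                           np.array([0.0, 1.0, 0.002]), 0.0, 0.0, 1.0))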
7. Conclusion

Texturing is one of the basic elements of 3D visualization. Its use for generating 3D terrain is important for several reasons: heightmap textures are a compact and fast way to store and query terrain, power-of-two dimensions let the engine load and process textures without quality-degrading resizes, and mip, detail and normal maps add perceived detail at little rendering cost.

8. References