General Questions About Old Or Limited Graphics Cards

Member
Posts: 227
Joined: 2008.08
Post: #1
I am working on a game that will (hopefully) squeeze every ounce of potential out of less powerful graphics cards. This includes the ATY Radeon (I know it's ATI, but that's what System Profiler says), the GeForce4 MX, and the PowerVR MBX (iPhone). And I have some (a lot, I lied Rasp) questions.

1) Why do these cards report extensions that they don't seem to support (particularly the shader-related ones, like GL_EXT_geometry_shader4, GL_ARB_vertex_program, and GL_ARB_shading_language_100)? Here is what they report:
NVIDIA Corporation
NVIDIA GeForce4 MX OpenGL Engine
1.1 NVIDIA-1.5.36
GL_ARB_transpose_matrix GL_ARB_vertex_program GL_ARB_vertex_blend GL_ARB_window_pos GL_ARB_shader_objects GL_ARB_vertex_shader GL_ARB_shading_language_100 GL_EXT_multi_draw_arrays GL_EXT_clip_volume_hint GL_EXT_rescale_normal GL_EXT_draw_range_elements GL_EXT_fog_coord GL_EXT_gpu_program_parameters GL_EXT_geometry_shader4 GL_EXT_transform_feedback GL_APPLE_client_storage GL_APPLE_specular_vector GL_APPLE_transform_hint GL_APPLE_packed_pixels GL_APPLE_fence GL_APPLE_vertex_array_object GL_APPLE_vertex_program_evaluators GL_APPLE_element_array GL_APPLE_flush_render GL_APPLE_aux_depth_stencil GL_NV_texgen_reflection GL_NV_light_max_exponent GL_IBM_rasterpos_clip GL_SGIS_generate_mipmap GL_ARB_imaging GL_ARB_point_parameters GL_ARB_texture_env_crossbar GL_ARB_multitexture GL_ARB_texture_env_add GL_ARB_texture_cube_map GL_ARB_texture_env_dot3 GL_ARB_texture_env_combine GL_ARB_texture_compression GL_ARB_texture_mirrored_repeat GL_ARB_vertex_buffer_object GL_ARB_pixel_buffer_object GL_EXT_compiled_vertex_array GL_EXT_texture_rectangle GL_ARB_texture_rectangle GL_EXT_texture_env_add GL_EXT_blend_color GL_EXT_blend_minmax GL_EXT_blend_subtract GL_EXT_texture_lod_bias GL_EXT_abgr GL_EXT_bgra GL_EXT_stencil_wrap GL_EXT_texture_filter_anisotropic GL_EXT_separate_specular_color GL_EXT_secondary_color GL_EXT_texture_compression_s3tc GL_EXT_texture_compression_dxt1 GL_APPLE_flush_buffer_range GL_APPLE_ycbcr_422 GL_APPLE_vertex_array_range GL_APPLE_texture_range GL_APPLE_pixel_buffer GL_NV_register_combiners GL_NV_blend_square GL_NV_fog_distance GL_NV_multisample_filter_hint GL_ATI_texture_env_combine3 GL_SGIS_texture_edge_clamp GL_SGIS_texture_lod

ATI Technologies Inc.
ATI Radeon OpenGL Engine
1.3 ATI-1.5.36
GL_ARB_transpose_matrix GL_ARB_vertex_program GL_ARB_vertex_blend GL_ARB_window_pos GL_ARB_shader_objects GL_ARB_vertex_shader GL_ARB_shading_language_100 GL_EXT_multi_draw_arrays GL_EXT_clip_volume_hint GL_EXT_rescale_normal GL_EXT_draw_range_elements GL_EXT_fog_coord GL_EXT_gpu_program_parameters GL_EXT_geometry_shader4 GL_EXT_transform_feedback GL_APPLE_client_storage GL_APPLE_specular_vector GL_APPLE_transform_hint GL_APPLE_packed_pixels GL_APPLE_fence GL_APPLE_vertex_array_object GL_APPLE_vertex_program_evaluators GL_APPLE_element_array GL_APPLE_flush_render GL_APPLE_aux_depth_stencil GL_NV_texgen_reflection GL_NV_light_max_exponent GL_IBM_rasterpos_clip GL_SGIS_generate_mipmap GL_ARB_texture_env_crossbar GL_ARB_texture_border_clamp GL_ARB_multitexture GL_ARB_texture_env_add GL_ARB_texture_cube_map GL_ARB_texture_env_dot3 GL_ARB_multisample GL_ARB_texture_env_combine GL_ARB_texture_compression GL_ARB_texture_mirrored_repeat GL_ARB_occlusion_query GL_ARB_vertex_buffer_object GL_ARB_pixel_buffer_object GL_EXT_compiled_vertex_array GL_EXT_texture_rectangle GL_ARB_texture_rectangle GL_EXT_texture_env_add GL_EXT_texture_lod_bias GL_EXT_abgr GL_EXT_bgra GL_EXT_texture_filter_anisotropic GL_EXT_separate_specular_color GL_EXT_secondary_color GL_EXT_texture_compression_s3tc GL_EXT_texture_compression_dxt1 GL_APPLE_flush_buffer_range GL_APPLE_ycbcr_422 GL_APPLE_vertex_array_range GL_APPLE_texture_range GL_APPLE_pixel_buffer GL_NV_fog_distance GL_ATI_texture_mirror_once GL_ATI_texture_env_combine3 GL_ATI_array_rev_comps_in_4_bytes GL_SGIS_texture_edge_clamp GL_SGIS_texture_lod GL_SGI_color_matrix


2) Is there any good tutorial on NV_register_combiners? Everyone seems to have pulled their documentation on them.

3) Is there a good resource on doing shadow mapping manually, besides paulsprojects.net? (I don't like his code; it's in C++ and for Windows.)

4) How are Dim3's shadows done? I liked the look of them when I used it, but I can't figure out how from the source, since the source isn't exactly a tutorial or explanation.

5) How can I move shadow volume generation onto the graphics card, both without geometry shaders and with them?

6) How can I do self-shadowing only (on the graphics cards mentioned above)? I made a system for dynamic shadowing, but it doesn't shadow the object casting the shadow.
Member
Posts: 227
Joined: 2008.08
Post: #2
Still no answer...Bump.
Member
Posts: 45
Joined: 2008.04
Post: #3
Regarding (1), it's because there are software fallbacks. For shaders you can call:

GLint fragmentGPUProcessing, vertexGPUProcessing;
CGLGetParameter(CGLGetCurrentContext(), kCGLCPGPUFragmentProcessing, &fragmentGPUProcessing);
CGLGetParameter(CGLGetCurrentContext(), kCGLCPGPUVertexProcessing, &vertexGPUProcessing);

to determine whether it's running on the GPU or not.
Sage
Posts: 1,232
Joined: 2002.10
Post: #4
1) Apple tries to support the same transformation-stage functionality across all hardware. That means ARB_vertex_shader, EXT_geometry_shader4, and EXT_transform_feedback are supported everywhere. They will fall back to software TCL on older hardware.

If this seems confusing, consider that even newer hardware can't support arbitrarily complicated shaders; it can also fall back to software. There are CGL queries to determine whether a renderer can support any hardware acceleration of the vertex or fragment stages at all, as well as whether your current state is accelerated or not.
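For the renderer-capability side, something along these lines should work (a sketch, assuming I recall the property names right; the function name and error handling are just illustrative):

#include <OpenGL/OpenGL.h>
#include <stdio.h>

/* Ask each renderer whether it can hardware-accelerate the vertex and
   fragment stages at all, independent of the current context state. */
static void printRendererCapabilities(void)
{
    CGLRendererInfoObj info;
    GLint rendererCount = 0;

    if (CGLQueryRendererInfo(0xFFFFFFFF /* all displays */, &info, &rendererCount) != kCGLNoError)
        return;

    for (GLint i = 0; i < rendererCount; i++) {
        GLint vertCapable = 0, fragCapable = 0;
        CGLDescribeRenderer(info, i, kCGLRPGPUVertProcCapable, &vertCapable);
        CGLDescribeRenderer(info, i, kCGLRPGPUFragProcCapable, &fragCapable);
        printf("renderer %d: GPU vertex processing %s, GPU fragment processing %s\n",
               (int)i, vertCapable ? "yes" : "no", fragCapable ? "yes" : "no");
    }
    CGLDestroyRendererInfo(info);
}

The per-context query (kCGLCPGPUVertexProcessing / kCGLCPGPUFragmentProcessing) shown in the previous post then tells you whether your current state is actually accelerated.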

2) See Nvidia's register combiner paper. This functionality is deprecated, though.

3) The SIGGRAPH '96 samples include discussion and code for projected self shadowing.

4) You could try stepping through a frame with OpenGL Profiler to see exactly how Dim3 draws shadows.

5) With vertex shaders, you have to put dummy vertices in the input mesh. See Nvidia's paper. With geometry shaders, you can do the silhouette determination and volume generation all in the shader. GPU Gems 3 has an article on this, with example code.
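To make the vertex shader variant concrete, here is a generic sketch of the extrusion shader (not Nvidia's actual code; it assumes the mesh was preprocessed with degenerate quads along each edge and per-face normals stored on the vertices):

static const char *kShadowVolumeVS =
    "uniform vec3 lightPos;                                      \n"
    "void main() {                                               \n"
    "    // Push vertices whose face normal points away from the \n"
    "    // light out to infinity along the light direction      \n"
    "    // (w = 0). This stretches the degenerate edge quads    \n"
    "    // into the sides of the shadow volume.                 \n"
    "    vec3 toLight = lightPos - gl_Vertex.xyz;                \n"
    "    vec4 pos = gl_Vertex;                                   \n"
    "    if (dot(gl_Normal, toLight) < 0.0)                      \n"
    "        pos = vec4(-toLight, 0.0);                          \n"
    "    gl_Position = gl_ModelViewProjectionMatrix * pos;       \n"
    "}                                                           \n";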

6) You can get the SGI '96 projected self-shadowing sample to run on hardware as old as a Rage 128, using only the texture matrix, two texture units, and an 8 bit alpha texture. No depth textures or shadow comparison hardware required. Obviously, performance/quality are compromised.
Member
Posts: 227
Joined: 2008.08
Post: #5
arekkusu Wrote:6) You can get the SGI '96 projected self-shadowing sample to run on hardware as old as a Rage 128, using only the texture matrix, two texture units, and an 8 bit alpha texture. No depth textures or shadow comparison hardware required. Obviously, performance/quality are compromised.

How? I looked at it and it only uses the depth texture and shadow extensions.
Sage
Posts: 1,232
Joined: 2002.10
Post: #6
Oddity007 Wrote:How? I looked at it and it only uses the depth texture and shadow extensions.

Yes, it does. The course notes say:
Quote:Shadow maps can almost be done with the OpenGL 1.0 implementation. What's missing is the ability to compare the texture's r component against the corresponding texel value.

The ARB_shadow extension provides the comparison on a depth texture lookup. Or, in a shader, you can do the comparison yourself by manually comparing the texture value to a reference (usually the texture R coordinate.)
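As a generic illustration of that manual compare (not the sample's code; the sampler name and the assumption that light-view depth sits in the red channel are mine), a fragment shader could do:

static const char *kManualShadowCompareFS =
    "uniform sampler2D shadowMap;   // light-view depth in the red channel \n"
    "void main() {                                                         \n"
    "    vec4 tc = gl_TexCoord[0];  // projective shadow coordinates       \n"
    "    float maxZ = texture2DProj(shadowMap, tc).r;                      \n"
    "    float lit  = ((tc.p / tc.q) <= maxZ) ? 1.0 : 0.0;                 \n"
    "    gl_FragColor = vec4(vec3(lit), 1.0);                              \n"
    "}                                                                     \n";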

But, if you think about it, you don't need shaders or ARB_shadow to compare a texture lookup with a reference value. You can do it by simply subtracting the two values, with ARB_texture_env_combine. And you don't need a depth texture to capture light Z. You can write distance from the light to a regular 8 bit color channel.

So, you can make a couple of modifications to that shadowmap sample, and run on old hardware (with 8 bit precision):

1) render scene from light's point of view.
1a) use regular depth testing to capture maxZ from the light's view.
1b) to put Z into a color channel, you need to transform Z into a color component. Fog can do this. Or, more flexibly, use the texture matrix to transform vertex Z into texture S coordinates. Use S to look up into a Z ramp texture.
1c) when drawing the scene, mirror any modelview matrix transforms in the texture matrix.

2) render unshadowed scene from camera's point of view, normally.
2a) depth test captures scene depth normally.

3) render shadowed scene from camera's point of view.
3a) set up shadow projection in the usual way (light MVP * object linear scene positions.) TexGen can be replaced by feeding in the scene positions as texcoords directly, since object linear is a passthrough transform.
3b) sample maxZ on unit0. Sample Z ramp texture on unit1, using the same light matrix computed in 1b). Use texture combiners to subtract Z from maxZ. Resulting values <=0 are "in shadow".
3c) use alpha test to discard "in light" values > 0. Use depth test EQUAL to only write pixels exactly matching the scene from 2a).
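In code, steps 3b/3c might look roughly like this (a sketch that assumes maxZ from pass 1 ended up in the alpha channel of the texture on unit 0, and that the Z ramp is a 1D alpha texture on unit 1; the handles and function name are illustrative):

#include <OpenGL/gl.h>

/* Sketch of steps 3b/3c: subtract fragment Z from maxZ in the texture
   combiners, then keep only the clamped-to-zero ("in shadow") fragments. */
static void setupShadowCompare(GLuint maxZTex, GLuint zRampTex)
{
    /* unit 0: fetch maxZ captured from the light's view */
    glActiveTexture(GL_TEXTURE0);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, maxZTex);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);

    /* unit 1: alpha = previous.alpha - texture.alpha = maxZ - fragment Z */
    glActiveTexture(GL_TEXTURE1);
    glEnable(GL_TEXTURE_1D);
    glBindTexture(GL_TEXTURE_1D, zRampTex);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA, GL_SUBTRACT);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_ALPHA, GL_PREVIOUS);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_ALPHA, GL_TEXTURE);

    /* the subtraction clamps to 0 for "in shadow" fragments; discard the
       "in light" ones, and only touch pixels whose depth matches pass 2 */
    glEnable(GL_ALPHA_TEST);
    glAlphaFunc(GL_LEQUAL, 0.0f);
    glDepthFunc(GL_EQUAL);
}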

Various improvements to this are possible. Like attenuating the shadow falloff in the combiner, based on light Z. Or using multiple color channels for jittered light positions. Combine the jittered samples for cheesy shadow antialiasing.