It's all in your head, Part 8

The only place you'll ever hear the truth
chalz.r
level0
Posts: 7
Joined: Wed Dec 06, 2006 4:42 am
Location: USA

"Ghettos"

Postby chalz.r » Wed Feb 06, 2008 7:45 am

All this stuff about generating cities reminded me of something...

I'm a subscriber to the print edition (and, by extension, the online edition) of New Scientist, and I recalled an article or two about the formation of ghettos in cities. If I may include a couple of links:

http://www.newscientist.com/channel/fun ... rhood.html

I think the link name gives a quick summary. Unfortunately, the full article can only be viewed by paid subscribers. However, it links to a paper at arxiv.org:

http://arxiv.org/abs/0710.3021v1

I don't know if this will be of any interest or use to the folks at IV, but it occurred to me that they might be able to use it to modify the economic and social well-being of certain neighborhoods within their cities.
briceman2
level2
Posts: 123
Joined: Wed Dec 12, 2007 4:30 am

Postby briceman2 » Fri Feb 08, 2008 5:41 pm

Chris wrote: I’ve started work on a basic Level Of Detail system to try to render distant buildings at lower details, but it currently suffers from the same problem all LOD systems fail to hide : your eye is constantly drawn to the dividing line between high detail and low, and you see things popping between the two, which looks naff. The widely used alternative solution – to clamp the camera at about 10 metres off the ground pointing permanently down, is completely unacceptable for this game.


I'm not a graphics programmer, but here's a potentially workable scheme. It probably has a name in the literature.

If you are not 100% committed to totally transparent buildings (and your night scene suggests you are not), then you can dynamically render the nearfield as full interior + exterior poly models, but pre-render the facades of all the farfield buildings. There would be a cache of texture maps, one for each unique building facade panel. Without their interiors, your buildings are all composed of only a handful of facade quads. For more accurate window details, you could further subdivide the distance field into midfield1, midfield2, etc. In the nearest midfield you might want more accurate lighting, so you could pre-render a series of facade textures from different angles, and maybe even interpolate to cut down the number of textures. You might end up doing this for the farfield too if you use dramatic lighting like sunsets, where, at certain angles, the glint should show on every building near and far. With the newer shaders you might even be able to post-process a base texture for different lighting conditions on the fly.
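
To make those distance bands concrete, here's a toy sketch of how the switch might be driven. The thresholds and names are pure invention on my part, not anything IV have described:

[code]
// Hypothetical near/mid/far split for the scheme above.
enum RenderTier { FULL_MODEL, MIDFIELD_IMPOSTOR, FARFIELD_IMPOSTOR };

RenderTier PickTier(float distanceToCamera)
{
    const float nearLimit = 150.0f;  // full interior + exterior polys
    const float midLimit  = 600.0f;  // facade quads, angle-aware textures
    if (distanceToCamera < nearLimit) return FULL_MODEL;
    if (distanceToCamera < midLimit)  return MIDFIELD_IMPOSTOR;
    return FARFIELD_IMPOSTOR;        // single pre-rendered facade texture
}
[/code]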

My (limited) understanding of texture mapping says that you can pre-render large hi-res textures and the GPU automatically takes care of the scaling and sampling for smaller projections (on more distant buildings). So you can beat the detail issue by using textures at a resolution that would support close-up views at nearly the same quality as the full poly render.

The only real constraint to this idea seems to be the available memory for texture storage. Even this problem could be massaged by dynamically managing which high-res textures reside on-card and which are replaced by lower-res textures for the far far field. Like with nearfield, midfield, and farfield subdivisions of render space, you could use similar subdivisions of texture space to manage the memory requirements. Everything shuffles dynamically as the camera moves.
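
Something like this toy residency pass, run whenever the camera moves, is the shape of what I mean (all names and budgets invented):

[code]
#include <map>
#include <string>

// One entry per unique facade texture; resolution is demoted with distance.
struct FacadeTexture { int resolution; };  // e.g. 1024, 256, or 64

void RebalanceTextures(std::map<std::string, FacadeTexture>& cache,
                       const std::map<std::string, float>& distanceTo)
{
    for (std::map<std::string, float>::const_iterator it = distanceTo.begin();
         it != distanceTo.end(); ++it)
    {
        FacadeTexture& tex = cache[it->first];
        if      (it->second < 150.0f) tex.resolution = 1024; // nearfield
        else if (it->second < 600.0f) tex.resolution = 256;  // midfield
        else                          tex.resolution = 64;   // far far field
        // A real system would upload or evict the GPU copy here.
    }
}
[/code]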

The only way the full city can come into view is at a great distance. If you forbid sharp camera cuts from one perspective to a radically different one, then no translocations will ever require the entire scene to be rendered from scratch. I.e. there will never be a sudden cut to above the city, causing a sudden render lag as all the models get updated simultaneously. By forbidding jump cuts you can ease the performance requirements for your resolution management system.

Anyhow, carry on, I'm just rambling out loud...
NeoThermic
Introversion Staff
Posts: 6256
Joined: Sat Mar 02, 2002 10:55 am
Location: ::1
Contact:

Postby NeoThermic » Fri Feb 08, 2008 5:59 pm

briceman2 wrote:The only real constraint to this idea seems to be the available memory for texture storage. Even this problem could be massaged by dynamically managing which high-res textures reside on-card and which are replaced by lower-res textures for the far far field. Like with nearfield, midfield, and farfield subdivisions of render space, you could use similar subdivisions of texture space to manage the memory requirements. Everything shuffles dynamically as the camera moves.


Well, there's memory space, and the fact that continually passing textures to the card from RAM and back again would utterly kill performance... :P

NeoThermic
briceman2
level2
Posts: 123
Joined: Wed Dec 12, 2007 4:30 am

Postby briceman2 » Fri Feb 08, 2008 6:24 pm

...well, maybe there is no need to pass them back and forth. A newer programmable shader might be able to do the pre-rendering of building facades entirely on-card. So the question of memory management becomes: which textures to delete, and which to pre-render in their place? Pre-renders could be sprinkled into the pipeline so they never cause a big performance hit.
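
For instance, something like this (just a sketch, assuming a live GL context with the EXT_framebuffer_object extension available; DrawFacade() is a made-up stand-in for whatever draws one building face):

[code]
// Pre-render a facade into a texture entirely on-card: nothing
// crosses the bus except the draw commands.
GLuint fbo, facadeTex;
glGenTextures(1, &facadeTex);
glBindTexture(GL_TEXTURE_2D, facadeTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 512, 512, 0,
             GL_RGB, GL_UNSIGNED_BYTE, NULL);   // allocate only, no upload

glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, facadeTex, 0);
glViewport(0, 0, 512, 512);
DrawFacade();                                   // hypothetical
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);    // back to the screen
[/code]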

I guess what I meant about memory is that I have no feeling for how many textures could be crammed into current mid-range cards. Some back-of-the-envelope calculations could probably show whether this idea is feasible or not. The GPGPU capabilities of newer GPUs seem to argue that you can bypass the transfer bottleneck by doing more work on-card. And I think there is at least one higher-level GPGPU language that is GPU-independent, but don't quote me on that.

Also, since many (if not most) of the buildings are similar, it is quite possible that some or many textures could be shared. Or the algorithm which generates the facade poly layouts could be used to generate a "universal" texture containing representative regions, which the shaders could then recombine to paint the low-detail stand-in buildings. The algorithms may simply repeat a small set of features at a given granularity and perturb them with a small number of parameters. Instead of rendering a facade as a subdivision of polys, you could ask the shader to texture a single quad facade with a subdivision of rectangular texture primitives. Shift some of the burden from poly space to texture space.
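
As a toy example of what I mean by recombining: suppose (and this is all invented) a 4x4 grid of window/wall variants is packed into one shared atlas, so each rectangle of a stand-in facade just needs one cell index:

[code]
// Top-left UV of cell 'index' in a hypothetical 4x4 facade atlas.
struct UV { float u, v; };

UV AtlasCell(int index)
{
    const int   cellsPerRow = 4;
    const float cellSize    = 1.0f / cellsPerRow;
    UV uv;
    uv.u = (index % cellsPerRow) * cellSize;
    uv.v = (index / cellsPerRow) * cellSize;
    return uv;
}
[/code]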

I mean, if the buildings are all generated algorithmically, then it seems there may be a way to algorithmically generate corresponding textures directly from the seeds... maybe even on the fly. Sort of like "projecting" your 3D algorithm onto a 2D surface, if that makes sense. There must be some sneaky, non-standard ways to take advantage of having all your content numerically derived. So too there might be non-obvious but efficient and elegant ways to "project" your algorithms into different problem spaces, and thereby work around standard problems.

I guess I'm just being philosophical there... :)
xander
level5
Posts: 16869
Joined: Thu Oct 21, 2004 11:41 pm
Location: Highland, CA, USA
Contact:

Postby xander » Fri Feb 08, 2008 8:55 pm

Isn't that kind of what Shadow of the Colossus does? Things that are near are rendered in full 3D, while things in the distance are basically just textures. I know I've read about it somewhere... Let me see if I can find it.

Ah! Here it is! The pertinent part is about halfway down.
http://edusworld.org/ew/ficheros/2006/p ... _sotc.html

xander
NeoThermic
Introversion Staff
Posts: 6256
Joined: Sat Mar 02, 2002 10:55 am
Location: ::1
Contact:

Postby NeoThermic » Fri Feb 08, 2008 9:30 pm

briceman2 wrote:A newer programmable shader


Ouch. There goes anything without a programmable shader.



briceman2 wrote:I guess what I meant about memory is that I have no feeling for how many textures could be crammed into current mid-range cards.


If we define mid-range as 256MB, then my envelope gives you exactly 268,435,456 bytes to fill, which in RGB format (3 bytes per pixel) comes to roughly 89 million pixels. This assumes, however, that you don't wish to store anything else on the card. Which you do. Always. :)
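
Spelled out, in case anyone wants to check the envelope:

[code]
// 256MB of video memory, filled with nothing but RGB texels:
const long bytes   = 256L * 1024 * 1024;  // 268,435,456 bytes
const long pixels  = bytes / 3;           // ~89.5 million RGB pixels
const long per512  = 512L * 512;          // 262,144 pixels per 512x512 map
const long howMany = pixels / per512;     // ~341 textures, and that's with
                                          // no framebuffer, no geometry...
[/code]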

briceman2 wrote:The GPGPU capabilities of newer GPUs seem to argue that you can bypass the transfer bottleneck by doing more work on-card. And I think there is at least one higher-level GPGPU language that is GPU-independent, but don't quote me on that.


GPGPU programming is a bit on the wild side even on newer cards, and it doesn't exist at all on older ones. So there goes another bunch of cards :)



You've not got a bad idea, but I think what you're looking for is mipmaps. You store a texture as a chain of pre-filtered versions, each one half the width and half the height of the level before it, all the way down to 1x1; the card then picks whichever level best matches the size the texture appears on screen. The whole chain costs only about a third more memory than the base texture alone. Have a peek at what Wikipedia has on mipmapping for a better explanation and some examples.
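
In old-school OpenGL it's nearly a one-liner to build the whole chain (sketch only; assumes a live GL context and an RGB image of width x height sitting in 'pixels'):

[code]
// Build every mipmap level down from the base image, then ask for
// trilinear filtering so the card blends between adjacent levels.
gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, width, height,
                  GL_RGB, GL_UNSIGNED_BYTE, pixels);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                GL_LINEAR_MIPMAP_LINEAR);
[/code]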

The implementation of such a change, however, is down to the programmer ;)

NeoThermic
Zidz
level0
Posts: 2
Joined: Sat Feb 09, 2008 12:27 pm

Postby Zidz » Sat Feb 09, 2008 12:35 pm

Wow! This is simply amazing, just as everything you guys make! :)
Superpig
level4
Posts: 658
Joined: Sat May 04, 2002 10:06 pm
Location: Right behind you
Contact:

Postby Superpig » Sat Feb 09, 2008 5:35 pm

briceman2 wrote:I'm not a graphics programmer, but here's a potentially workable scheme. It probably has a name in the literature.
Yeah, it sounds like you're describing impostors. Link.

NeoThermic wrote:Ouch. There goes anything without a programmable shader.
Well, to be honest... the GeForce 3 was the first card to support programmable shading, and it was released in 2001. Even without a GF3-level card (Intel Integrated, I'm looking at you...), vertex shaders can be transparently run on the CPU, using whatever CPU extensions are available (e.g. SSE). Using vertex shaders on a card that doesn't support them but does support hardware TnL is not quite the best performance route, but when you're dealing with cards that are over 7 years old now, you should kind of expect performance issues.
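
(For the curious: in Direct3D 9 you get that fallback simply by asking for software vertex processing when you create the device. Illustrative fragment only, with d3d, hwnd, and params assumed to exist already.)

[code]
// The D3D9 runtime will then run vertex shaders on the CPU,
// using SSE and friends where available.
IDirect3DDevice9* device = NULL;
d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwnd,
                  D3DCREATE_SOFTWARE_VERTEXPROCESSING,
                  &params, &device);
[/code]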

NeoThermic wrote:If we define mid-range as 256MB, then my envelope gives you exactly 268,435,456 bytes to fill, which in RGB format (3 bytes per pixel) comes to roughly 89 million pixels. This assumes, however, that you don't wish to store anything else on the card. Which you do. Always. :)
Yeah, you might want to store the framebuffer :)
Superpig
- Saving pigs from untimely fates
NeoThermic
Introversion Staff
Posts: 6256
Joined: Sat Mar 02, 2002 10:55 am
Location: ::1
Contact:

Postby NeoThermic » Sat Feb 09, 2008 6:41 pm

Superpig wrote:
NeoThermic wrote:Ouch. There goes anything without a programmable shader.
Well, to be honest... the GeForce 3 was the first card to support programmable shading, and it was released in 2001. Even without a GF3-level card (Intel Integrated, I'm looking at you...), vertex shaders can be transparently run on the CPU, using whatever CPU extensions are available (e.g. SSE). Using vertex shaders on a card that doesn't support them but does support hardware TnL is not quite the best performance route, but when you're dealing with cards that are over 7 years old now, you should kind of expect performance issues.


Ahh, but should you? Why should a laptop with a 1.5GHz Core 2 Duo and an Intel card have to be listed as too low a spec? That's the goal of optimisation: getting something to run on an older spec without trading away major graphical quality. That being said, a BSP tree or something similar might be exactly the kind of optimisation needed in this case.

Superpig wrote:
NeoThermic wrote:If we define mid-range as 256MB, then my envelope gives you exactly 268,435,456 bytes to fill, which in RGB format (3 bytes per pixel) comes to roughly 89 million pixels. This assumes, however, that you don't wish to store anything else on the card. Which you do. Always. :)
Yeah, you might want to store the framebuffer :)


Plus display lists, vertex arrays, shaders, etc, etc. The pixel max there is obviously the upper limit; everything else you do on the card will reduce it. (Hence the 'Which you do. Always' bit ;) )

NeoThermic
briceman2
level2
Posts: 123
Joined: Wed Dec 12, 2007 4:30 am

Postby briceman2 » Sat Feb 09, 2008 10:58 pm

NeoThermic wrote:...mipmaps...


Thanks for the reference! I knew I wasn't having an original idea, but I couldn't think of an easy way to google it. Yeah, mipmaps are along the lines of what I was thinking, and of course they cost extra memory (about a third more for a full chain).

I've just got this nagging feeling that the procedural city can be optimised for display in novel ways simply because it is procedurally generated. It's like a fractal: self-similar everywhere, and at every level. There must be some granularity or level of abstraction at which you can extract repeating units of texture.

Looking at Chris' night render, the midfield and farfield stand-in box-buildings could have their faces subdivided into rectangular regions. If done properly, a small number of "universal" textures could be applied to these regions to simulate a fully pre-rendered building. So in the nearfield you use full interior/exterior 3D models, and in the mid and farfield you substitute these subdivided boxes, which have only flat exterior surfaces. The number of textures needed for the rectangular subdivisions could go as low as two in the night render. This all might break down if you want fancy lighting effects, unless pixel shaders (or something like them) can be used to modify the base textures on the fly based on their positions on the stand-in buildings.

One way to handle the transition between full 3D and midfield textured buildings would be to have a zone near the transition where textured buildings are double-textured. The outer texture is rendered with a transparency that varies with distance to the camera. Underneath is a universal "noise" texture which simulates the interior details of fully rendered buildings at roughly that distance. This would (possibly) reduce or eliminate the abrupt visual transition from full 3D transparency to flat textured stand-ins. The design of the "noise" texture would be critical, and probably fairly tough to get right.
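
Something like this for the fade, maybe (the band edges are invented numbers):

[code]
// Alpha for the outer facade texture: 0 where full 3D takes over,
// ramping to 1 (opaque impostor) at the far edge of the band.
float FacadeAlpha(float distance)
{
    const float bandStart = 150.0f;  // full 3D up to here
    const float bandEnd   = 250.0f;  // pure impostor past here
    float t = (distance - bandStart) / (bandEnd - bandStart);
    if (t < 0.0f) t = 0.0f;
    if (t > 1.0f) t = 1.0f;
    return t;  // the "noise" interior texture shows through 1 - t
}
[/code]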

Anyhow, you are right about the backwards-compatibility issues. Unless you wanted to get funding from the GPU makers and use Subversion as an advertising campaign to force people to upgrade... you know, like what MS did with Vista... Blech!
chalz.r
level0
Posts: 7
Joined: Wed Dec 06, 2006 4:42 am
Location: USA

Postby chalz.r » Sun Feb 10, 2008 10:18 am

NeoThermic wrote:
Superpig wrote:
NeoThermic wrote:Ouch. There goes anything without a programmable shader.
Well, to be honest... the GeForce 3 was the first card to support programmable shading, and it was released in 2001. Even without a GF3-level card (Intel Integrated, I'm looking at you...), vertex shaders can be transparently run on the CPU, using whatever CPU extensions are available (e.g. SSE). Using vertex shaders on a card that doesn't support them but does support hardware TnL is not quite the best performance route, but when you're dealing with cards that are over 7 years old now, you should kind of expect performance issues.


Ahh, but should you? Why should a laptop with a 1.5GHz Core 2 Duo and an Intel card have to be listed as too low a spec? That's the goal of optimisation: getting something to run on an older spec without trading away major graphical quality. That being said, a BSP tree or something similar might be exactly the kind of optimisation needed in this case.


*raises hand*

Dell Inspiron 8500. 1GB RAM, 2.4GHz P4, nVidia GeForce4 4200 Go. TnL, yes; but pixel shading or texture shading? No. Purchased August of 2003, I think it was. My desktop died and I wanted to replace it with a laptop, hence a beefy laptop (whose battery lasts, oh, about an hour without wireless or CD). I got hosed on the graphics, though: it was the only thing available on this model at the time, and who knew everyone would start using shaders within six months?
Superpig
level4
Posts: 658
Joined: Sat May 04, 2002 10:06 pm
Location: Right behind you
Contact:

Postby Superpig » Fri Feb 15, 2008 12:46 am

NeoThermic wrote:
Superpig wrote:
NeoThermic wrote:Ouch. There goes anything without a programmable shader.
Well, to be honest... the GeForce 3 was the first card to support programmable shading, and it was released in 2001. Even without a GF3-level card (Intel Integrated, I'm looking at you...), vertex shaders can be transparently run on the CPU, using whatever CPU extensions are available (e.g. SSE). Using vertex shaders on a card that doesn't support them but does support hardware TnL is not quite the best performance route, but when you're dealing with cards that are over 7 years old now, you should kind of expect performance issues.
Ahh, but should you? Why should a laptop with a 1.5GHz Core 2 Duo and an Intel card have to be listed as too low a spec? That's the goal of optimisation: getting something to run on an older spec without trading away major graphical quality. That being said, a BSP tree or something similar might be exactly the kind of optimisation needed in this case.
Eh, there's a thin line between compatibility (making it run at all on old hardware) and optimization (making it run fast), but consider that the reason Intel don't bother trying to provide hardware TnL is that they provide an extremely highly optimized software vertex processor instead. If you've got a 1.5GHz Core 2 Duo, you don't really need hardware TnL. (It doesn't hurt to have both, but just in the same way that it doesn't hurt to have a faster processor anyway.) It's all just processing power, whether it be on the CPU or the GPU.

Anyway. A classical BSP tree is unlikely to help things in this situation - it's just a spatial index on the scene database. It's useful for classifying points within a scene (e.g. 'is this point inside a wall?') but not quite as helpful for culling. A quadtree (or similar structure) is likely to be a better bet, as the world is fundamentally 2D.
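
e.g. something shaped like this (types invented purely for illustration; Building is a stand-in for whatever the scene actually stores):

[code]
#include <vector>

struct Building;  // stand-in for the real scene object

struct AABB { float minX, minZ, maxX, maxZ; };

struct QuadNode
{
    AABB                   bounds;
    QuadNode*              child[4];   // NW, NE, SW, SE; NULL at leaves
    std::vector<Building*> buildings;  // objects stored at this node
};

// Gather everything whose node overlaps the view rectangle; whole
// subtrees outside the view are skipped with a single test.
void Cull(const QuadNode* node, const AABB& view,
          std::vector<Building*>& visible)
{
    if (node->bounds.maxX < view.minX || node->bounds.minX > view.maxX ||
        node->bounds.maxZ < view.minZ || node->bounds.minZ > view.maxZ)
        return;
    visible.insert(visible.end(),
                   node->buildings.begin(), node->buildings.end());
    for (int i = 0; i < 4; ++i)
        if (node->child[i])
            Cull(node->child[i], view, visible);
}
[/code]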

NeoThermic wrote:
Superpig wrote:
NeoThermic wrote:If we define mid-range as 256MB, then my envelope gives you exactly 268,435,456 bytes to fill, which in RGB format (3 bytes per pixel) comes to roughly 89 million pixels. This assumes, however, that you don't wish to store anything else on the card. Which you do. Always. :)
Yeah, you might want to store the framebuffer :)


Plus display lists, vertex arrays, shaders, etc, etc. The pixel max there is obviously the upper limit; everything else you do on the card will reduce it. (Hence the 'Which you do. Always' bit ;) )
Certainly. The framebuffer is just the only thing you'll find it difficult to live without :)
Superpig

- Saving pigs from untimely fates
jasonh1234
level0
Posts: 3
Joined: Sun Jun 14, 2009 5:52 am

Update?

Postby jasonh1234 » Sun Jun 14, 2009 5:53 am

Any new news? Screenshots? Dying to see this in action.
KingAl
level5
Posts: 4138
Joined: Sun Sep 10, 2006 7:42 am

Postby KingAl » Sun Jun 14, 2009 6:05 am

Epic necromancy. You planning to edit in a spam URL soon? There's lots of further news on Subversion available in the "It's all in your head" posts with numbers higher than '8'.
Gentlemen, you can't fight in here: this is the War Room!
Ultimate Uplink Guide
Latest Patch
jasonh1234
level0
Posts: 3
Joined: Sun Jun 14, 2009 5:52 am

Postby jasonh1234 » Sun Jun 14, 2009 10:25 pm

KingAl wrote:There's been plenty of updates! They were posted in subsequent numbered threads. I guess you must've bookmarked this one a long time ago and missed them?


Awesome, thanks! Yeah, you're right, I did. Glad you weren't a troll about it.
