

Re: Graphics

#16
I am so relieved, Josh. One worry I have when I buy games is that I read the box and check the minimum requirements, and then wonder: do I have a bit more than that? What if the developer under-represented the requirements? What if I can load the game, but it lags, skips, and pauses so badly that I just want to exit?

And in this case it matters because I have, in principle, already paid. So if I cannot run the end product properly, I am the one left to salvage the mess, as it were.

Congrats again on a successful campaign, Josh!

Re: Graphics

#17
That sounds fantastic!

I'm sure that a comparison of graphics at different texture detail levels, quoting the associated FPS for each, would make a really great update. Or alternatively, a set of screenshots for your dev blog.

If you don't mind me re-asking, what about anti-aliasing? Does that scale too? Are you using stochastic supersampling?

Cheers!

Re: Graphics

#18
At the moment it would cost me a fair bit to upgrade or replace my desktop PC. My power supply is too rubbish to even run a decent energy-efficient card.

For now I am happy to run at lower res, as Josh says. I don't mind low res; good colour and light/dark composition are what matter to me.

One day I'll upgrade, but probably two or three years from now.

Re: Graphics

#19
terrordactyl wrote: That sounds fantastic!

I'm sure that a comparison of graphics at different texture detail levels, quoting the associated FPS for each, would make a really great update. Or alternatively, a set of screenshots for your dev blog.

If you don't mind me re-asking, what about anti-aliasing? Does that scale too? Are you using stochastic supersampling?

Cheers!
That's a great idea, I may do that!

FXAA is used, and it doesn't necessarily scale that well, but IMO it pretty much solves aliasing. Unless you are really, really picky, you will probably be happy with it. IMO higher resolution is the better answer, and of course the game can handle whatever resolution you want, so in the future AA will scale in the sense that res will be higher.

Re: Graphics

#20
JoshParnell wrote: That's a great idea, I may do that!

FXAA is used, and it doesn't necessarily scale that well, but IMO it pretty much solves aliasing. Unless you are really, really picky, you will probably be happy with it. IMO higher resolution is the better answer, and of course the game can handle whatever resolution you want, so in the future AA will scale in the sense that res will be higher.
Hi Josh, thanks for taking the time to reply!!!

I agree that higher res is always better. It's unfortunate that we're limited to ~1080p at present (unless you're really rich and can afford 1600p). We can only hope for higher pixel density monitors in future!!!

For now, we have to make do with anti-aliasing. I always tend to use the highest AA I can (at least 8x where possible), although I don't think that existing hardware-based AA is optimal. If you think that the FXAA is sufficient then that's great - it's your call, of course! However, if you wanted to improve on it, you should be able to do your own really nice, efficient N-sample stochastic supersampling in OpenGL, which is completely scalable and seems in keeping with your overall philosophy. A similar approach could certainly be applied to AF, tessellation and other graphical techniques. I'm glad you already describe something similar for raw texture detail - this will be great.
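To illustrate what I mean by N-sample stochastic supersampling, here's a tiny CPU-side sketch. The shadeScene function and its hard-edged test pattern are stand-ins I've made up for the example - obviously nothing to do with your actual renderer:

[code]
#include <random>

struct Color { float r, g, b; };

// Stand-in "scene": a hard diagonal edge, purely so the example is
// self-contained and actually has something to anti-alias.
Color shadeScene(float x, float y)
{
    return (x > y) ? Color{1.0f, 1.0f, 1.0f} : Color{0.0f, 0.0f, 0.0f};
}

// Core idea: average n uniformly random sub-pixel samples for one pixel.
Color stochasticSupersample(int px, int py, int n, std::mt19937& rng)
{
    std::uniform_real_distribution<float> offset(0.0f, 1.0f);
    Color sum{0.0f, 0.0f, 0.0f};
    for (int i = 0; i < n; ++i) {
        float sx = px + offset(rng);   // random position anywhere inside the pixel
        float sy = py + offset(rng);
        Color c = shadeScene(sx, sy);
        sum.r += c.r; sum.g += c.g; sum.b += c.b;
    }
    return { sum.r / n, sum.g / n, sum.b / n };
}
[/code]

The point is that n is a free parameter: nothing forces it to be a power of two, or even the same for every pixel, which is exactly what makes the approach scalable.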

I have noticed, though, that some of your screenshots do suffer from some really bad aliasing, e.g. this one, particularly around the planet's high-contrast edges:
http://2.bp.blogspot.com/-oIa2S-QIjh0/U ... uent38.png
I assume that this wasn't using any AA, right? Or is this with the FXAA?!

Can I make a separate suggestion? In your pitch video your settings UI consisted only of toggle buttons... could you have toggle buttons plus a numerical input box, so that the user has complete control over all scaling/detail parameters?

P.S. congrats on making your $50,000 target!!!

Re: Graphics

#21
Yes, I will give much more control than the little togglebox window that I showed in the video - that's just a quick thing for me to play with while testing. But when I build a full-out settings menu, it will be way more extensive!! In fact, I hope to include significantly more graphical control than most games afford. I want to let you tweak things like contrast, color correction, visibility in gas, etc...they are all easy parameters to expose.
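Conceptually the settings are just a bag of toggles and numeric parameters, something like this (names invented on the spot for illustration, not actual LT code):

[code]
// Purely illustrative -- a mix of on/off toggles and continuously
// adjustable values, which is what a slider or numeric box would drive.
struct GraphicsSettings {
    // simple toggles
    bool antiAliasing  = true;
    bool fog           = true;

    // continuous parameters
    float textureDetail = 1.0f;  // 0..1 detail scale
    float contrast      = 1.0f;
    float colorGrade    = 0.0f;  // strength of color correction
    float gasVisibility = 1.0f;  // how far you can see through gas
};
[/code]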

That shot you linked does not have AA enabled, as you guessed.

Can you point me to a paper on the type of AA to which you are referring? The trouble is that I use a semi-deferred rendering scheme, which generally does not play well with hardware AA approaches. It is much cleaner and more effective, IMO, to use a post-process AA (of which FXAA is an example).

Re: Graphics

#23
Patricifiko99 wrote: What about a max screen resolution? Is it possible to play on a triple monitor setup with a res of approx. 1000 x 6000?
Yes, I will support multi-monitor setups.

Re: Graphics

#24
Hi Josh,
Thanks for your reply! For sure, I can see the appeal of FXAA. Such post-processing is easy to implement, cheap and effective - what's not to like!!! I agree that this should *always* be an option in all games - keep it!!! However, in recent years I've become convinced that certain forms of stochastic SSAA are going to become more prominent in the near future, at least as an alternative option, due to their increased flexibility, scalability, realism and fidelity. Furthermore, the fact that existing widespread implementations of SSAA are incredibly expensive and inefficient has created an undeserved stigma around (intelligent versions of) stochastic SSAA.

I know that this is a bit of a long post… but I'd appreciate it if you read my discussion of an incredibly simple, appropriate and elegant implementation of AA, a technique that may be referred to as "FPS-Regulated Heterogeneous Single-Sample Stochastic AA"… I hope you like it!!! Whether or not you choose to implement it is, of course, a matter for you to decide (for sure, I don't know whether this would even be possible given your rendering scheme), but at the very least I thought you'd be interested in learning/discussing/thinking about this technique, given your clear interest in computer graphics!!!

I happen to be aware that some up-and-coming indie game developers are implementing their own stochastic SSAA in OpenGL as a superior, more efficient alternative to traditional/widespread approaches. I foresee this becoming quite a big deal in the coming years.

It's easy to find citations from recent years for "advanced" AA methods that include additional effects such as defocus, motion blur, depth of field, etc., e.g. "decoupled sampling". But I think that would be overkill. A simpler approach is all that's required - no need to make things too complicated!

I haven't yet seen an objective scientific article comparing all AA methods, including stochastic SSAA - such comparisons usually appear in journal articles promoting a particular type of AA, or are fueled by financial interests, and thus tend to be very biased. Indeed, there are so many permutations of each AA method that an objective comparison would be difficult. Also, in comparative analyses, the "result" is usually displayed as a static image. Whilst this is relevant when comparing deterministic methods, or when trying to produce a static image (such as your fantastic 3-hour rendered image), it immediately falls down once you consider that the result is perceived dynamically, frame after frame. The consequence of this often-overlooked temporal component is that stochastic SSAA is in fact much more efficient, with a higher level of fidelity in practice, than one would be led to believe from historical views of SSAA (both in the literature and in the media, as far as I'm aware). This point is key!!! Due to the effect of temporal averaging, the stochastic approach achieves a perceived level of fidelity substantially higher than its deterministic counterpart, for equal computational cost.

Thanks for reading so far! Here is an example of an appropriate implementation of efficient stochastic SSAA:

FPS-Regulated Heterogeneous Single-Sample Stochastic AA:
  1. Simple idea: Instead of partitioning the pixel into equal sub-pixels as in SSAA, a single (bivariate uniform) random position is sampled within the whole pixel. Path tracing is then used to get the color at this position. Repeat, amassing colors for each pixel, as fast as the hardware allows!!! Note - unlike SSAA, each pixel isn't re-sampled consecutively… Rather, the whole raster is swept once each cycle, resulting in multiple passes.
  2. FPS regulation: This method naturally adapts sampling in order to maintain the desired FPS (or higher), no matter the complexity of the frame. Given a target FPS (e.g. 60), the number of single samples contributing to each pixel can be chosen (i.e. the number of passes can be chosen dynamically) so that the target FPS, or higher, is maintained. This ensures that the approach is agnostic to hardware power, being conceptually equally applicable on low-end and high-end systems. Of course, higher fidelity will be achieved on more powerful GPUs. Equally, this approach has a natural lower limit of 1 sample (which is computationally equivalent to having no AA). This limit would be observed on systems that cannot even maintain the desired frame rate without AA... but interestingly this nevertheless results in increased AA fidelity due to the inherent effect of temporal averaging (albeit with worse results than on faster systems).
  3. Heterogeneous sampling: Performance can be improved further by sampling each pixel only until it converges, which removes redundancy and makes the process much more efficient. Pixels in homogeneous regions of the image are not re-rendered unnecessarily, leaving more samples for pixels on high-contrast edges and thus better overall AA. This contrasts with other methods, which usually do this spatially; the approach here is conceptually better because it avoids the blurring (and the resulting loss of image quality) that spatial methods inherently introduce. Basically, on each pass you only continue to re-sample those pixels deemed not yet to have converged, according to some criterion. Depending on the convergence threshold, the overall AA process would therefore be expected to converge quickly (depending on the complexity of the particular scene), because fewer and fewer pixels have to be re-rendered on each subsequent pass. This can even mean that the process finishes before the frame budget is spent, resulting in an FPS higher than the target! The overall detail/quality level can be altered via a parameter controlling the convergence threshold, which can be tightened on more powerful systems to make better use of the extra grunt. On more powerful systems you could also hold off for two or three cycles before letting the heterogeneous sampling kick in, if desired.
I'll reiterate that an important feature of this approach is the temporal component, which really helps to achieve a superior anti-aliasing effect. The result should be a picture with very high fidelity relative to the computational expense.

Of course, you can either fix the state of the system for each final rendered frame, or alternatively recalculate the state of the system at each sub-frame sampling pass. In the latter case, you would end up with automatic natural motion blur (which you may or may not want!).

Implementation would look something like this (a rough code sketch follows the list):
  1. For each pixel, get the color corresponding to a single randomly-sampled position.
  2. For each pixel, calculate average pixel color (which is trivial in the first iteration).
  3. Determine whether another cycle is to be performed, according to desired FPS criteria… i.e. predict whether FPS criteria would be violated after the next cycle. If another cycle is not to be performed, then stop.
  4. Otherwise, determine which pixels will need to be re-sampled in the next cycle, according to pixel color convergence criteria, by comparing average pixel colors from current and previous cycles. If no pixels need to be re-sampled, then stop (or re-calculate the list of converged pixels, using relaxed pixel color convergence criteria).
  5. Repeat! On to the next pass!
(steps 3 and 4 may be swapped if desired)
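And here's a rough single-threaded CPU sketch of that loop, just to pin down the control flow. The 60 FPS budget, the convergence tolerance and shadeScene are all placeholders of mine; a real GPU implementation would obviously look very different:

[code]
#include <chrono>
#include <cmath>
#include <random>
#include <vector>

struct Color { float r, g, b; };

// Stand-in scene so the sketch is self-contained: a hard-edged disc.
Color shadeScene(float x, float y)
{
    float dx = x - 64.0f, dy = y - 64.0f;
    return (dx * dx + dy * dy < 32.0f * 32.0f) ? Color{1, 1, 1} : Color{0, 0, 0};
}

void renderFrameAdaptiveAA(int width, int height, std::vector<Color>& frame)
{
    using clock = std::chrono::steady_clock;
    const double frameBudget    = 1.0 / 60.0;  // step 3: target-FPS criterion
    const float  convergenceEps = 0.01f;       // step 4: per-channel tolerance

    std::mt19937 rng{std::random_device{}()};
    std::uniform_real_distribution<float> offset(0.0f, 1.0f);

    frame.assign(width * height, Color{0, 0, 0});
    std::vector<Color> sum(width * height, Color{0, 0, 0});
    std::vector<int>   count(width * height, 0);
    std::vector<char>  active(width * height, 1);

    const auto frameStart = clock::now();
    for (int pass = 0; ; ++pass) {
        const auto passStart = clock::now();
        bool anyActive = false;

        for (int i = 0; i < width * height; ++i) {
            if (!active[i]) continue;
            anyActive = true;

            // Steps 1 + 2: one random sub-pixel sample, update the running average.
            Color prev = frame[i];
            Color c = shadeScene(i % width + offset(rng), i / width + offset(rng));
            sum[i].r += c.r; sum[i].g += c.g; sum[i].b += c.b;
            ++count[i];
            frame[i] = { sum[i].r / count[i], sum[i].g / count[i], sum[i].b / count[i] };

            // Step 4: convergence test (needs at least one previous average).
            if (pass > 0 &&
                std::fabs(frame[i].r - prev.r) < convergenceEps &&
                std::fabs(frame[i].g - prev.g) < convergenceEps &&
                std::fabs(frame[i].b - prev.b) < convergenceEps)
                active[i] = 0;
        }
        if (!anyActive) break;  // everything converged before the budget ran out

        // Step 3: FPS regulation -- stop if another pass of similar cost
        // would push us past the frame budget.
        const double passTime = std::chrono::duration<double>(clock::now() - passStart).count();
        const double elapsed  = std::chrono::duration<double>(clock::now() - frameStart).count();
        if (elapsed + passTime > frameBudget) break;
    }
}
[/code]

On a system that can only fit one pass into the budget, this degrades gracefully to 1 sample per pixel, exactly as described in point 2 above.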

Whilst this technique will certainly work on low-end systems, there you may or may not get better results from post-processing AA hacks such as FXAA. However, on more capable systems, where higher fidelity is possible and wanted, I'm sure you'll agree that the results from such a stochastic technique would be far superior.

Note that, in its most basic form, standard stochastic SSAA is path tracing based on subpixel positions, with positions determined by N uniform random samples per pixel. The difference between this and the method I describe above is that the standard approach uses a fixed value of N, whereas the method described above is more flexible, scalable and efficient, and is robust to both permanent and temporary frame-rate drops.

Finally, I should mention the traditional approach of jittering, which I think is probably not a great idea in this context. Basically, it uses small stochastic offsets from predefined subpixel positions, which homogenizes the sub-pixel sampling. However, this limits the sample count N to multiples of M^2 (the number of predefined subpixel positions), as in traditional deterministic SSAA. Whilst perhaps increasing fidelity, this would be expensive and would dramatically reduce the technique's ability to scale smoothly in practice. The human eye deals well with such noise anyway, and temporal averaging sorts the rest out. Consequently, I think jittering may not be a necessary or appropriate feature of future AA technologies, apart from on extremely powerful hardware (although it could be tried!).
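For clarity, the difference between the two sampling strategies boils down to this (again a throwaway sketch, with m and n free parameters of my own choosing):

[code]
#include <random>
#include <utility>
#include <vector>

using Sample = std::pair<float, float>;  // sub-pixel offsets in [0, 1)

// Pure random sampling: any number of samples n, no grid imposed.
std::vector<Sample> pureRandomSamples(int n, std::mt19937& rng)
{
    std::uniform_real_distribution<float> u(0.0f, 1.0f);
    std::vector<Sample> s;
    for (int i = 0; i < n; ++i)
        s.push_back({u(rng), u(rng)});
    return s;
}

// Jittered (stratified) sampling: one random offset inside each cell of a
// predefined m x m grid, so the count is locked to m*m samples per pass.
std::vector<Sample> jitteredSamples(int m, std::mt19937& rng)
{
    std::uniform_real_distribution<float> u(0.0f, 1.0f);
    std::vector<Sample> s;
    for (int i = 0; i < m; ++i)
        for (int j = 0; j < m; ++j)
            s.push_back({(i + u(rng)) / m, (j + u(rng)) / m});
    return s;
}
[/code]

The first version accepts any n, which is what lets the pass count float with the frame budget; the second is locked to m*m samples, which is exactly the scaling restriction I mean.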

So there we have it!!! My feeling is that FPS-Regulated Heterogeneous Single-Sample Stochastic AA sounds like quite an elegant and efficient approach: applicable on low-end and high-end machines, high-fidelity, efficient, self-regulating, with floating-point-adjustable detail levels and no FPS dips. Sounds great to me, at least!

How does that sound to you?
Cheers,
Rob

Re: Graphics

#25
Hey Rob, thanks for the epic post!

It sounds beautiful in theory, now that I know what you're talking about. It is no doubt the "correct" way that AA should be implemented, as sampling-based techniques are superior. This is the kind of stuff that I would do with my path tracer.

But there is a very large problem with it for real-time games: you cannot trace paths. In real-time games we use rasterization, not raytracing, to paint pixel colors onto the screen. What this means is that games are not built to answer queries like "what is the pixel color at x,y" and, in fact, cannot do so efficiently. The sad fact is that we cannot efficiently trace rays from arbitrary positions. Rasterization works in something of a reversed way - it looks at primitives (i.e., triangles) and projects them onto the screen, painting all pixels that are part of the primitive. In this sense it is very much a hack and not as elegant as raytracing, but it's way faster and it's what LT uses, like almost every other game.
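If it helps, here's a toy 2D version of the inversion I'm describing, using axis-aligned rectangles instead of triangles - made up purely for illustration, nothing to do with LT's actual code:

[code]
#include <algorithm>
#include <vector>

struct Rect { int x0, y0, x1, y1; float color; };

// "Raytracing-style": loop over pixels and ask the scene what covers each one.
// Arbitrary "what colour is at (x, y)?" queries are the natural operation here.
void tracePixels(const std::vector<Rect>& scene, std::vector<float>& img, int w, int h)
{
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            for (const Rect& r : scene)
                if (x >= r.x0 && x < r.x1 && y >= r.y0 && y < r.y1)
                    img[y * w + x] = r.color;
}

// "Rasterization-style": loop over primitives and paint the pixels they cover.
// At no point do you ever ask "what colour is at (x, y)?".
void rasterize(const std::vector<Rect>& scene, std::vector<float>& img, int w, int h)
{
    for (const Rect& r : scene)
        for (int y = std::max(r.y0, 0); y < r.y1 && y < h; ++y)
            for (int x = std::max(r.x0, 0); x < r.x1 && x < w; ++x)
                img[y * w + x] = r.color;
}
[/code]

The second form is fast precisely because it only ever touches pixels a primitive covers - but it also means a sampling scheme that wants to pick its own query positions has nothing to call.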

So, unless I've missed something and you can explain how to cheaply trace paths...that part is really a deal-breaker, as it just isn't possible with modern 3D APIs. You say that other people are implementing it? I would be interested to know how, unless they have a raytracing engine...

~Josh

Re: Graphics

#26
Hey Josh,

Thanks for the reply!

Admittedly, I can now see that my earlier, incorrect use of the phrase "path tracing" was misleading - that was not actually what I meant. I did intend for rasterization to be used, albeit on an irregular raster, and not ray-tracing!!! A couple of questions... I'm sure you'll put me right... :)

I see exactly what you're saying... I know that raytracing is computationally intractable for home-use real-time application right now, and likely will be for the foreseeable future. However, I thought the problem associated with raytracing was due to attempting to "bounce" the rays off (or through) objects in order to create more realistic colours for each sampled point within each pixel, allowing reflection, refraction, transparency, etc. I didn't think it was necessarily anything to do with the choice of sampling points... please correct me if I'm wrong, as I'm happy to concede to your greater knowledge! :)

Having a quick look at the indisputable source of all knowledge: ;)
http://en.wikipedia.org/wiki/Supersampl ... g_patterns
... the image shows that there are various ways in which to sample subpixel positions. This would suggest that there is no reason why you can't choose the subpixel sampling points, and thus choose a (pseudo-)random position as the single sample point in the "1 sample per pixel" case. If that's all true, then surely there's no problem with the AA method I described in my previous post? Or have I missed something?

To give another example, the illustrative explanation of 4xMSAA in this article here:
http://www.tomshardware.com/reviews/ant ... 868-2.html
... also features irregular sample points. So, surely stochastic SSAA similar to the method I describe above can be achieved by assuming an irregular grid, with custom sample points? Perhaps I've missed something crucial in terms of the practical approach?

Note that the wikipedia page (linked above) also mentions stochastic supersampling, although there aren't any details of specific implementations. There's also a short reference to it here:
http://books.google.co.uk/books?id=IGtI ... ng&f=false

I don't know exactly how the people that are implementing it are doing it (indeed, there may be differences to my previous description - proprietary code/ideas and all that...) although I'll ask and see if I get a response. I do know that they are producing very nice modern graphics, and claiming to do stochastic supersampling. They claim to be getting 8 renders (or passes) per frame, at 60FPS on a modest modern PC, and the results look great! Certainly superior to hardware AA, if more demanding than FXAA.

Many thanks again,
Rob

Re: Graphics

#27
Related to graphics: in the latest gameplay video we can see you turning fog on and off.

I like both effects though!

Wouldn't it be possible to instead have a slider to set the opacity of the fog effect between 0 and 100%?
That way we could have a few of those awesome background stars and nebulae "shine through" the fog, so to speak!
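I imagine under the hood it could boil down to something as simple as this (a made-up sketch, just to show what the slider would control):

[code]
struct Color { float r, g, b; };

// fogAmount:     how foggy the scene says this pixel is (0..1)
// opacitySlider: the user's 0..1 fog-opacity setting, scaling that down
Color applyFog(Color background, Color fogColor, float fogAmount, float opacitySlider)
{
    float t = fogAmount * opacitySlider;  // 0 = fully clear, 1 = fully fogged
    return { background.r + (fogColor.r - background.r) * t,
             background.g + (fogColor.g - background.g) * t,
             background.b + (fogColor.b - background.b) * t };
}
[/code]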

Re: Graphics

#28
terrordactyl wrote:
Ringo wrote:I cannot afford a high end gaming beast :cry:
I can imagine that no matter what sort of PC you've got, it's probably pretty beasty compared with the PCs of 5 or 10 years ago. In 5 or 10 years time, today's PC beasts will likely seem underpowered.

Unfortunately I AM using a 12-year-old PC XD. It's amazing what these old workhorses can do with proper maintenance and care. I probably need to get off my wallet and max the RAM out, but to date I've done nothing to the specs and am handling things quite well XD.
If I've rambled and gone off topic I'm sorry, but I tend to be long-winded, as you might notice if you stumble across my other post XD. Thanks for reading.

Re: Graphics

#29
terrordactyl wrote:....However, I thought the problem associated with raytracing was due to attempting to "bounce" the rays off (or through) objects in order to create more realistic colours for each sampled point within each pixel, allowing reflection, refraction, transparency, etc. I didn't think it was necessarily anything to do with the choice of sampling points...
Yes. That's the computationally expensive part. The more 'interactions' (bounces, bleed, etc.) per ray you calculate, the more lifelike your image appears and the higher the computational cost climbs. Each bounce, refraction, reflection or occlusion puts the current ray through a physics computation. From what you have described of your proposed AA method, it seems that in essence, you are attempting to assign multiple rays to each pixel based on a predetermined sample size and then finally collating the results for each pixel to make the scene appear as realistic as possible. It would result in very beautiful scenes, although I suspect it would also be very slow.

The space scenes that this game offers are pretty complex in nature, especially if the medium is dense. Ray tracing through that would result in either a huge drop in performance or a huge drop in visuals, depending on whether you want to render the scene fast or well. I do not believe there is a cheap way to trace paths. Given that space scenes are usually very complex and detailed, I would probably propose that jittering the scene would be the most efficient method for AA. While jittering does not exactly produce the most beautiful scenes, it would work well for highly detailed scenes because:

1. It's computationally cheap and easy to implement (the most important part: a laggy game, no matter how beautiful, would probably not go down well).
2. It would blur the scene, which would be detrimental for certain games (such as FPSes), but in the case of a space game with lots of gas clouds, high-contrast lighting and huge draw distances, it would make the scene more realistic, because the human eye would not capture that much detail anyway. Blurring it would actually make the image more view-friendly.
3. Jittering is easy to scale based on graphics and/or performance requirements.
4. Jittering operates on the raster image and is done as the final step before the scene is drawn. As such, it does not inherently require any particular or expensive preprocessing.

However, the choice is purely up to Josh. Jittering is a rather classic solution and may or may not work well. Also, given that Josh seems the type to try novel approaches, he might come up with his own form of AA that works well with his current graphics implementation.

Re: Graphics

#30
Sliverine wrote:From what you have described of your proposed AA method, it seems that in essence, you are attempting to assign multiple rays to each pixel based on a predetermined sample size and then finally collating the results for each pixel to make the scene appear as realistic as possible. It would result in very beautiful scenes, although I suspect it would also be very slow.
Just to clarify - I wasn't suggesting the use of raytracing. The proposed method involves the repeated use of irregular rasters - a known technique, although less well known than others because it isn't exposed through the interfaces provided by the major GPU manufacturers. For sure, this will not be as cheap as post-processing effects, but it will not be anywhere near as expensive as raytracing.
Sliverine wrote:I would probably propose that jittering the scene would be the most efficient method for AA.
I agree that post-processing effects such as jittering and FXAA are a good choice if computational power is highly limited and you're trying to get a reasonable image from underpowered hardware. However, if you have a half-decent graphics card (scaling up to the lucky people who have multiple high-end GPUs, of which I am certainly not one), then post-processing effects are not a good choice, at least for anyone who really knows/cares about graphical quality.

The idea is to achieve sub-pixel accuracy to improve the pixel colour, not to apply any sort of spatial averaging (or blurring), since that reduces the effective resolution of the image. The idea is to increase the effective resolution, not to reduce it!!! Whilst post-processing can trick Average Joe into thinking that edges are smoother, you do end up with a lower-quality image. If you have graphics processing power to spare, it would be much better to offer MSAA or SSAA as an option, if implementing better bespoke methods is not feasible.
