Aagh! These answers are very uninformed! Of course, it doesn't help that the question is imprecise.
OK, "rendering" is a really wide topic. One issue within rendering is camera visibility or "hidden surface algorithms" -- figuring out what objects are seen in each pixel. There are various categorizations of visibility algorithms.
That's probably what the poster was asking about (given that they thought of it as a dichotomy between "rasterization" and "ray tracing"). A classic categorization reference is Sutherland et al., "A Characterization of Ten Hidden-Surface Algorithms", ACM Computing Surveys, 1974. It's very outdated, but it's still excellent for providing a framework for thinking about how to categorize such algorithms.
One class of hidden surface algorithms involves "ray casting", which is computing the intersection of the line from the camera through each pixel with objects (which can have various representations, including triangles, algebraic surfaces, NURBS, etc.). Other classes of hidden surface algorithms include "z-buffer", "scanline techniques", "list priority algorithms", and so on. They were pretty darned creative with algorithms back in the days when there weren't many compute cycles and not enough memory to store a z-buffer.
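To make "ray casting" concrete, here's a toy Python sketch - the pinhole camera setup, the sphere-only scene, and all the names are my own illustrative choices, not any particular renderer's API:

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Return the nearest hit distance along the ray, or None on a miss."""
    # Solve |origin + t*direction - center|^2 = radius^2 for t.
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * e for d, e in zip(direction, oc))
    c = sum(e * e for e in oc) - radius * radius
    disc = b * b - 4.0 * c  # direction is assumed normalized, so a == 1
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def cast_primary_rays(width, height, spheres):
    """Image-order loop: one ray per pixel, keeping the nearest hit."""
    image = [[(0, 0, 0)] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            # Map the pixel through a simple pinhole camera at the origin.
            dx = (x + 0.5) / width - 0.5
            dy = (y + 0.5) / height - 0.5
            norm = math.sqrt(dx * dx + dy * dy + 1.0)
            direction = (dx / norm, dy / norm, 1.0 / norm)
            nearest = None
            for center, radius, color in spheres:
                t = intersect_sphere((0.0, 0.0, 0.0), direction, center, radius)
                if t is not None and (nearest is None or t < nearest[0]):
                    nearest = (t, color)
            if nearest is not None:
                image[y][x] = nearest[1]
    return image
```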
These days, both compute and memory are cheap, so three techniques have pretty much won out: (1) dicing everything into triangles and using a z-buffer; (2) ray casting; (3) Reyes-like algorithms that use an extended z-buffer to handle transparency and the like. Modern graphics cards do #1; high-end software rendering usually does #2 or #3, or a combination. Various ray tracing hardware has been proposed, and sometimes built, but it never caught on; meanwhile, modern GPUs are now programmable enough to actually ray trace, though at a severe speed disadvantage to their hard-coded rasterization techniques.
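For comparison, here's a minimal sketch of technique #1, depth-testing projected triangles into a z-buffer. The edge-function coverage test and the full-screen pixel loop (a real rasterizer would only scan the triangle's bounding box) are simplifications of my own:

```python
def edge(ax, ay, bx, by, px, py):
    """Signed area test: positive if (px, py) lies to the left of edge a->b."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize_triangle(tri, zbuffer, framebuffer):
    """Object-order loop: visit the pixels a projected triangle overlaps and
    keep the nearest depth seen so far; the z-buffer resolves hidden surfaces.
    Vertices are assumed counter-clockwise."""
    (x0, y0, z0), (x1, y1, z1), (x2, y2, z2), color = tri
    area = edge(x0, y0, x1, y1, x2, y2)
    if area <= 0:
        return  # degenerate or back-facing triangle
    for y in range(len(zbuffer)):
        for x in range(len(zbuffer[0])):
            # Barycentric weights from the three edge functions.
            w0 = edge(x1, y1, x2, y2, x, y) / area
            w1 = edge(x2, y2, x0, y0, x, y) / area
            w2 = edge(x0, y0, x1, y1, x, y) / area
            if w0 < 0 or w1 < 0 or w2 < 0:
                continue  # pixel is outside the triangle
            z = w0 * z0 + w1 * z1 + w2 * z2  # interpolate depth
            if z < zbuffer[y][x]:  # depth test: nearer wins
                zbuffer[y][x] = z
                framebuffer[y][x] = color
```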
Other, more exotic algorithms have mostly fallen by the wayside over the years. (Although various sorting/splatting algorithms can be used for volume rendering or other special purposes.) "Rasterizing" really just means "figuring out which pixels an object lies on." Convention dictates that it excludes ray tracing, but that convention is shaky.
I suppose you could justify the distinction by saying that rasterization answers "which pixels does this shape overlap?" whereas ray tracing answers "which object is behind this pixel?", if you see the difference. Now, hidden surface removal is not the only problem to be solved in the field of "rendering." Knowing what object is visible in each pixel is only a start; you also need to know what color it is, which means having some method of computing how light propagates around the scene.
There are a whole bunch of techniques, usually broken down into dealing with shadows, reflections, and "global illumination" (light that bounces between objects, as opposed to coming directly from lights). "Ray tracing" means applying the ray casting technique to also determine visibility for shadows, reflections, global illumination, etc. It's possible to use ray tracing for everything, or to use various rasterization methods for camera visibility and ray tracing for shadows, reflections, and GI. "Photon mapping" and "path tracing" are techniques for calculating certain kinds of light propagation (using ray tracing, so it's just wrong to say they are somehow fundamentally a different rendering technique). There are also global illumination techniques that don't use ray tracing, such as "radiosity" methods (a finite element approach to solving global light propagation, though it has fallen out of favor in most parts of the field lately).
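As a small illustration of reusing the ray cast for shadow visibility, here's a hedged sketch that builds on the intersect_sphere routine from the earlier sketch - the epsilon offset and the scene-as-sphere-list are assumptions for illustration:

```python
import math

def in_shadow(point, light_pos, spheres, eps=1e-4):
    """Cast a shadow ray from a shaded point toward the light; any hit
    closer than the light itself means the point is in shadow."""
    to_light = [l - p for l, p in zip(light_pos, point)]
    dist = math.sqrt(sum(v * v for v in to_light))
    direction = tuple(v / dist for v in to_light)
    # Nudge the origin off the surface to avoid self-intersection.
    origin = tuple(p + eps * d for p, d in zip(point, direction))
    for center, radius, _color in spheres:
        t = intersect_sphere(origin, direction, center, radius)
        if t is not None and t < dist:
            return True
    return False
```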
But using radiosity or photon mapping for light propagation STILL requires you to make a final picture somehow, generally with one of the standard techniques (ray casting, z-buffer/rasterization, etc.). People who mention specific shape representations (NURBS, volumes, triangles) are also a little confused. That is an orthogonal problem to ray tracing vs. rasterization.
For example, you can ray trace NURBS directly, or you can dice the NURBS into triangles and trace those. You can directly rasterize triangles into a z-buffer, but you can also directly rasterize higher-order parametric surfaces in scanline order (cf. Lane/Carpenter et al., CACM 1980).
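Here's a sketch of that orthogonality: the visibility algorithm only ever asks a shape for an intersection, so a pre-diced surface can answer the same query as an analytic one. The DicedSurface class and the helpers are hypothetical, though the triangle test is the standard Möller-Trumbore formulation:

```python
def _sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle test; returns hit distance t or None."""
    e1, e2 = _sub(v1, v0), _sub(v2, v0)
    pvec = _cross(direction, e2)
    det = _dot(e1, pvec)
    if abs(det) < eps:
        return None  # ray is parallel to the triangle's plane
    tvec = _sub(origin, v0)
    u = _dot(tvec, pvec) / det
    if u < 0 or u > 1:
        return None
    qvec = _cross(tvec, e1)
    v = _dot(direction, qvec) / det
    if v < 0 or u + v > 1:
        return None
    t = _dot(e2, qvec) / det
    return t if t > eps else None

class DicedSurface:
    """A curved surface pre-diced into triangles; the tracer sees only
    intersect(), exactly as it would for an analytic sphere."""
    def __init__(self, triangles):
        self.triangles = triangles  # list of (v0, v1, v2) vertex tuples

    def intersect(self, origin, direction):
        hits = (ray_triangle(origin, direction, *tri) for tri in self.triangles)
        hits = [t for t in hits if t is not None]
        return min(hits) if hits else None
```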
The Rendering article on Wikipedia covers various techniques. Intro paragraph: Many rendering algorithms have been researched, and software used for rendering may employ a number of different techniques to obtain a final image. Tracing every ray of light in a scene is impractical and would take an enormous amount of time.
Even tracing a portion large enough to produce an image takes an inordinate amount of time if the sampling is not intelligently restricted. Therefore, four loose families of more-efficient light transport modelling techniques have emerged. Rasterisation, including scanline rendering, geometrically projects objects in the scene to an image plane, without advanced optical effects. Ray casting considers the scene as observed from a specific point of view, calculating the observed image based only on geometry and very basic optical laws of reflection intensity, and perhaps using Monte Carlo techniques to reduce artifacts. Radiosity uses finite element mathematics to simulate diffuse spreading of light from surfaces. Ray tracing is similar to ray casting, but employs more advanced optical simulation, and usually uses Monte Carlo techniques to obtain more realistic results at a speed that is often orders of magnitude slower. Most advanced software combines two or more of the techniques to obtain good-enough results at reasonable cost.
Another distinction is between image order algorithms, which iterate over pixels of the image plane, and object order algorithms, which iterate over objects in the scene. Generally object order is more efficient, as there are usually fewer objects in a scene than pixels. From those descriptions, only radiosity seems different in concept to me.
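As a rough illustration of that distinction, here are two loop skeletons in Python - the depth_at and covered_pixels methods are assumed placeholders standing in for whatever the objects actually do:

```python
def render_image_order(width, height, objects):
    """Image-order: for each pixel, ask every object what's behind it
    (as a ray caster does) and keep the nearest answer."""
    image = {}
    for y in range(height):
        for x in range(width):
            depths = [obj.depth_at(x, y) for obj in objects]  # assumed method
            depths = [d for d in depths if d is not None]
            image[(x, y)] = min(depths) if depths else None
    return image

def render_object_order(width, height, objects):
    """Object-order: for each object, find which pixels it overlaps
    (as a rasterizer does) and depth-test them into a buffer."""
    zbuf = {(x, y): float("inf") for y in range(height) for x in range(width)}
    for obj in objects:
        for x, y, depth in obj.covered_pixels(width, height):  # assumed method
            if depth < zbuf[(x, y)]:
                zbuf[(x, y)] = depth
    return zbuf
```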
There's a technique called photon mapping that is actually quite similar to ray tracing, but provides various advantages in complex scenes. In fact, it's the only method (at least of which I know) that provides truly realistic rendering (i.e., all the laws of optics are obeyed) if done properly. It's a technique that's used sparingly as far as I know, since its performance is hugely worse than even ray tracing (given that it effectively does the opposite and simulates the paths taken by photons from the light sources to the camera) - yet this is its only disadvantage.
It's certainly an interesting algorithm, though you're not going to see it in wide-scale use until well after ray tracing (if ever).
Note that like ray tracing, this method doesn't require polygons. Spheres can be represented perfectly, for example. In reality, it's easiest to model most complex objects using polygons anyway.
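For the curious, a toy sketch of the two photon mapping passes described above - the scene.trace call and the fixed gather radius are assumptions of mine, and a real implementation would store the photons in a kd-tree rather than a flat list:

```python
import math
import random

def emit_photons(scene, light_pos, light_power, n_photons):
    """Pass 1: shoot photons out from the light and record where they land."""
    photon_map = []  # flat list of (hit_point, energy); a real one is a kd-tree
    for _ in range(n_photons):
        # Uniform random direction over the sphere.
        z = random.uniform(-1.0, 1.0)
        phi = random.uniform(0.0, 2.0 * math.pi)
        r = math.sqrt(1.0 - z * z)
        direction = (r * math.cos(phi), r * math.sin(phi), z)
        hit = scene.trace(light_pos, direction)  # assumed scene API
        if hit is not None:
            photon_map.append((hit, light_power / n_photons))
    return photon_map

def estimate_radiance(point, photon_map, radius=0.1):
    """Pass 2, at render time: gather the photons near a shaded point and
    turn their density into a radiance estimate."""
    gathered = 0.0
    for hit, energy in photon_map:
        d2 = sum((a - b) ** 2 for a, b in zip(hit, point))
        if d2 < radius * radius:
            gathered += energy
    return gathered / (math.pi * radius * radius)  # energy per unit area
```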
– Noldorin Apr 22 '09 at 23:09

Spheres can be represented perfectly in ray tracing as well. – Andrei Krotkov Apr 23 '09 at 5:12

The other method that gives you photorealistic rendering is path tracing, which is like ray tracing but with random light flow instead of deterministic shadowing checks. It's quite possibly even slower than photon mapping. :) – FeepingCreature Apr 23 '09 at 7:02

@Andrei: That's precisely what I was saying, in fact! @FeepingCreature: Yeah, it seems you are right. They're all in the same general family of renderers, though photon mapping/path tracing do have their own advantages/disadvantages. – Noldorin Apr 23 '09 at 11:18

@Larry: Actually, that's incorrect. They're in the same family of rendering algorithms, you might say, but ray tracing is substantially different - it's not capable of fully solving the rendering equation, for one thing. Did that really merit a downvote, anyway? – Noldorin Apr 23 '09 at 18:54
If you want to count it, there is always voxel-based rendering or direct volume rendering.
Voxel-based rendering isn't a rendering method. Rather, it's rasterisation using voxels to represent objects in 3D. – Ben S Apr 22 '09 at 22:59

Most definitions of rasterization that I've seen mention projection of vector graphics or polygons, which doesn't really include voxels. Furthermore, most voxel-based tech I've seen was closer to ray tracing than to rasterization. But yeah, direct volume rendering as a rendering method is debatable. – jfclavette Apr 22 '09 at 23:30
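For completeness, here's what a minimal direct volume rendering loop might look like - ray marching through a voxel grid with front-to-back compositing; the density-to-opacity transfer function is an arbitrary placeholder of mine:

```python
import math

def march_ray(volume, origin, direction, step=0.5, max_steps=256):
    """Accumulate opacity and intensity along one ray through a voxel grid,
    compositing front to back until the ray is effectively opaque."""
    intensity, alpha = 0.0, 0.0
    for i in range(max_steps):
        p = [o + (i * step) * d for o, d in zip(origin, direction)]
        ix, iy, iz = (int(math.floor(c)) for c in p)
        if not (0 <= ix < len(volume)
                and 0 <= iy < len(volume[0])
                and 0 <= iz < len(volume[0][0])):
            continue  # this sample falls outside the grid
        density = volume[ix][iy][iz]
        sample_alpha = min(1.0, density * step)  # placeholder transfer function
        # Front-to-back "over" compositing.
        intensity += (1.0 - alpha) * sample_alpha * density
        alpha += (1.0 - alpha) * sample_alpha
        if alpha > 0.99:
            break  # ray is effectively opaque; stop marching
    return intensity, alpha
```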