Apr 25, 2020

Real-Time 3D with Python Part VIII: Finishing Touches

In the introduction I explained the idea: Using Python to program old skool demo style real-time vector graphics. In Part I I set up a simple 3D wireframe object, rotated it, and drew it on the screen, and in Part II added surfaces and perspective. Part III was about calculating object surface visibility and shading. Part IV set up an XML reader to define the objects and other properties using a common file format (XML), and added the code needed to work with multiple objects. In Part V, I added two special features: the ground and roads. Part VI calculated and drew shadows, and Part VII added moving in the cityscape and moving objects. This final part finishes the project with a number of small finishing touches.

The program code and XML files for all parts can be downloaded from github. Note that the full code is not included below - to run the program, use the github code.


Add Some Finishing


To polish the surface and round some corners, I am adding a number of smaller pieces to finish the project nicely:
  • full screen mode
  • music
  • color blends for shadows and ground
  • an info display for program resource use
  • fade in and fade out
I will go through each of these below.

For a Fuller Look


So far we have been running the program in a window. A more demo-ish look would of course be full screen, which is easy to set up with Pygame. I have also made it possible to enter and exit full screen mode with the f key as follows:

key_to_function = {
    pygame.K_ESCAPE: (lambda x: x.terminate()),         # ESC key to quit
    pygame.K_SPACE:  (lambda x: x.pause()),             # SPACE to pause
    pygame.K_f:      (lambda x: x.toggleFullScreen()),  # f to switch between windowed and full screen display
    pygame.K_i:      (lambda x: x.toggleInfoDisplay())  # i to toggle info display on/off
    }

    def toggleFullScreen(self):
        
        # switch between a windowed display and full screen
        if self.fullScreen:
            self.fullScreen = False
            self.screen = pygame.display.set_mode((self.width, self.height))
        else:
            self.fullScreen = True
            self.screen = pygame.display.set_mode((self.width, self.height), pygame.FULLSCREEN | pygame.DOUBLEBUF | pygame.HWSURFACE)

        self.screen_blends = self.screen.copy()
        self.screen_shadows = self.screen.copy()
        self.screen_blends.fill((255,255,255))
        self.screen_shadows.fill((255,255,255))

When switching to full screen, the display is set up at the same resolution, but with flags that make it take up the whole display. For best results, pick a resolution that the display actually supports in full screen mode. The support screens for blends and shadows need to be initialised (copied) again as well (more on these below).
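
The dispatch itself is not shown above; here is a minimal sketch of how such a mapping is typically used in a pygame event loop (the actual loop in the github code may differ slightly):

# hedged sketch: dispatch key presses through the key_to_function mapping
for event in pygame.event.get():
    if event.type == pygame.QUIT:
        self.terminate()
    elif event.type == pygame.KEYDOWN and event.key in key_to_function:
        key_to_function[event.key](self)  # call the mapped lambda with the viewer instance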

Play It Again, Sam


A demo needs some music. I have no talent whatsoever on that front, but the original tunes of the 1992 Amiga demo (see the introduction) can be found in some Amiga archives like Janeway. I found it surprising that Amiga music modules, made with NoiseTracker or ProTracker etc., can be played directly with Pygame. Note that I am not including this module on github; use the link above to download it, and take note of its credits! The composer Jellybean is a Norwegian musician who made some great Amiga tunes for our demos. The code needed here is simple:

    music_file = "sinking2.mod"  # this mod by Jellybean is available at e.g. http://janeway.exotica.org.uk/release.php?id=45536
    vv = VectorViewer(disp_size[0], disp_size[1], music_file, font_size)

        # start music player
        pygame.mixer.music.load(self.music_file)
        pygame.mixer.music.play()

The part after the blank line is in vv.Run just before the main loop - I am just telling pygame to start playing.

Blending in


So far, the shadows have been one color irrespective of where they fall (the blue ground or the gray road). Similarly, although the ground blends to the horizon, the roads do not - they are an even gray. On the Amiga, with its bitplane graphics, this was easily solved by using the Copper to change the ground and road colors dynamically line by line - although that was strictly restricted to Y axis rotation only (see Part V). In pygame, I can use a blit operation with a blend flag to add, subtract, or multiply one screen (image) with another.

            # draw shadows always first for all visible objects
            self.screen_shadows.lock()
            shadow_rect = None
            for VectorObj in (vobj for vobj in self.VectorObjs if vobj.visible == 1 and vobj.prio == prio_nr and vobj.shadow == 1):
                node_list = self.cropEdges(VectorObj.shadowTransNodes)                
                rect = self.drawPolygon(self.screen_shadows, self.shadowColor, node_list, 0)
                if shadow_rect is None:
                    shadow_rect = rect
                else:
                    shadow_rect = shadow_rect.union(rect)
            while self.screen_shadows.get_locked():
                self.screen_shadows.unlock()
            if shadow_rect is not None:
                # release any locks on the main screen
                while self.screen.get_locked():
                    self.screen.unlock()
                # blit the "multiplied" screen copy to add shadows on the main screen
                self.screen.blit(self.screen_shadows, shadow_rect, shadow_rect, pygame.BLEND_MULT)
                # clear the surface copy after use back to "all white"
                self.screen_shadows.fill((255,255,255), shadow_rect)

When drawing the shadows, I am using a "drawing board" image, screen_shadows. It has been prefilled with all white, and the shadowColor is now a light gray. I have also modified drawPolygon to return the (rectangular) area it has modified, and am using union to build the minimum rectangular area holding all the shadows. This is then blitted (i.e. copied) onto the actual image using BLEND_MULT, which in effect multiplies the colors of the actual image with the colors of the shadow image. As the shadow image background is all white, the multiplier is 100% for each of red, green and blue, so there's no effect. The shadows are light gray, so the multiplier is smaller (I am using 140 of 255, ca. 55%) and those areas appear darker. If the actual image has a gray road, a shadow falling on it will be a darker shade of gray; if it has blue ground, a shadow falling on it will be a darker shade of blue. In the end, the area used for shadows is filled with all white again, ready for the next frame. All the shadows are processed in one blit; this is more efficient and also avoids overlapping shadows becoming darker still.
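
To see the multiply blend in isolation, here is a small self-contained sketch (the surfaces and values are illustrative, not from the project code; note that pygame's BLEND_MULT divides by 256, not 255):

import pygame

pygame.init()
screen = pygame.display.set_mode((100, 100))
screen.fill((100, 200, 60))                    # "ground" color
shadow = screen.copy()
shadow.fill((255, 255, 255))                   # white = no effect under BLEND_MULT
pygame.draw.rect(shadow, (140, 140, 140), (20, 20, 40, 40))  # light gray "shadow" area
screen.blit(shadow, (0, 0), special_flags=pygame.BLEND_MULT)
print(screen.get_at((50, 50)))                 # roughly (100, 200, 60) * 140 / 256
pygame.quit()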

For the ground, I am using the same technique, somewhat modified. In the first phase, I draw the ground (in blue) and the roads (in gray) in one solid color each. Then, I blit on top of these an image which has darker shades of gray at the horizon and lighter shades of gray close to the viewer, using the same color-multiplying blend method. This causes the far away parts of these surfaces to blend nicely towards the horizon (i.e. towards black). The blend image is actually drawn at the same time as the ground, but it "waits" until the roads have been processed, and is only then used in the blit.
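
The project draws its blend image from the actual ground polygon, but the idea can be sketched with a simple vertical gradient (horizon_blend is a hypothetical helper, not in the github code):

import pygame

def horizon_blend(width, height, horizon_y):
    # hypothetical helper: vertical gradient from dark gray at the horizon
    # to near-white (no change under BLEND_MULT) at the bottom of the screen
    surf = pygame.Surface((width, height))
    surf.fill((255, 255, 255))
    for y in range(horizon_y, height):
        shade = 64 + int((255 - 64) * (y - horizon_y) / (height - horizon_y))
        pygame.draw.line(surf, (shade, shade, shade), (0, y), (width - 1, y))
    return surf

# usage: screen.blit(horizon_blend(w, h, h // 2), (0, 0), special_flags=pygame.BLEND_MULT)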

Information Overload

Why is my program so slow? From a code optimization point of view, it is nice to know what takes time and what runs quickly in the program. I added an info display, which can be toggled on or off (see the first code box above). It plots on screen the relative processing time taken by selected parts and operations, and includes fps (frames per second) and some other data points. Behind it is data collected by calling measureTime below with a predefined timer_name:

    def measureTime(self, timer_name):
        # add time elapsed from previous call to selected timer
        i = self.timer_names.index(timer_name)
        new_time = pygame.time.get_ticks()
        self.timers[i, self.timer_frame] += (new_time - self.millisecs)
        self.millisecs = new_time
        
    def nextTimeFrame(self):
        # move to the next timer frame and clear its data
        self.timer_frame += 1
        if self.timer_frame >= self.timer_avg_frames:
            self.timer_frame = 0
        self.timers[:, self.timer_frame] = 0

measureTime stores the time elapsed since the previous call in an array, which is then used to calculate a moving average over a selected number of frames (I am using 180). The code below adds the figures to the display:

        # add measured times as percentage of total
        tot_time = np.sum(self.timers)
        if tot_time > 0:
            for i in range(len(self.timer_names)):
                info_msg = (self.timer_names[i] + ' '*16)[:16] + (' '*10 + str(round(np.sum(self.timers[i,:]) * 100 / tot_time, 1)))[-7:]
                self.screen.blit(self.font.render(info_msg, False, [255,255,255]), (10, 110 + i * 20))

Note that I am using blit again, but without any blend functionality, to add the text on top of the cityscape. (And yes, I know the text formatting used here is not very elegant.)
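
The fps figure shown in the display can be derived from the same timing data; a sketch, assuming the measureTime calls together cover each whole frame:

        # approximate fps: frames in the averaging window divided by total elapsed time
        tot_time = np.sum(self.timers)  # total milliseconds across all timers and frames
        if tot_time > 0:
            fps = 1000.0 * self.timer_avg_frames / tot_time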

Fading In, Fading Out


In the Amiga demo scene, decent productions always faded in nicely from black and ended the same way by fading out. I added a fade based on viewer movement as follows:

                # check for fade at start and end
                if loop_pos < 255 / self.fadeSpeed:
                    self.fade = (loop_pos * self.fadeSpeed) / 255
                elif loop_pos > VectorMovement.loopEnd - 255 / self.fadeSpeed:
                    self.fade = ((VectorMovement.loopEnd - loop_pos) * self.fadeSpeed) / 255
                else:
                    self.fade = 1.0

The fade is simply a multiplier between 0 and 1, used to scale the color components, which causes the desired effect.
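
Applied at drawing time, that could look like the following one-liner (illustrative only; in the actual code the multiplication happens wherever colors are prepared for drawing):

        # scale each RGB component by the current fade multiplier (0.0 to 1.0)
        use_color = [int(component * self.fade) for component in color]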

And finally: the end result. Feel free to compare it to the original (see the introduction).

Finally, Some Thinking


What could be improved? Certainly, a lot. This was my first Python project, and learning by doing is a sure way to miss the best solutions. I am sure there are a multitude of ways to make this program faster / clearer / more elegant / more pythonic / more versatile, etc. Some thoughts I have had during the project include the following.

Parallelism / multi-threading. While on the Amiga parallelism was restricted to the simultaneous use of the CPU and the co-processors, modern computers have multiple processor cores and could divide the CPU load across several threads processed in parallel. Maybe I will try this in some future project.

OpenGL. Would using it instead of pygame's standard drawing methods make a difference? Would it be difficult? Or some other way of using more hardware support (read: the graphics card) instead of the CPU - that would definitely speed things up.

Adding features. There's a lot one could add, of course, although in 1992 on the Amiga this was really something, and already quite a stretch for its capabilities (although, clever programmers certainly made improvements after that as well). But probably adding bitmaps / texture to the walls and shadows falling on buildings could be done with Python power. Of course one could have a larger city, and user controlled steering, and add some game elements to it...

Thanks for reading.

Apr 13, 2020

Real-Time 3D with Python Part VII: Movement

In the introduction I explained the idea: Using Python to program old skool demo style real-time vector graphics. In Part I I set up a simple 3D wireframe object, rotated it, and drew it on the screen, and in Part II added surfaces and perspective. Part III was about calculating object surface visibility and shading. Part IV set up an XML reader to define the objects and other properties using a common file format (XML), and added the code needed to work with multiple objects. In Part V, I added two special features: the ground and roads. Part VI calculated and drew shadows. This part is about moving in the cityscape and adding moving objects.

The program code and XML files for all parts can be downloaded from github. Note that the full code is not included below - to run the program, use the github code.

Moving Around


In a game, one would add user controls for moving around the city, adjusting for direction, speed, viewing angles etc. In a demo, it is customary not to give control to the viewer, but to set up everything for show (exceptions have, of course, been made). There are two similar but separate types of movement in the demo: moving objects, and a moving viewer. Both should have control of both the position and the angles; angles meaning rotation for objects, and viewing direction for the viewer, the latter then translated into rotation of all objects.

I have defined the world as coordinates in 3D space, so it is natural to define movement in a similar fashion. However, movement should be smooth, not angular point-to-point jumps, so I have defined it (in the XML file) so that at certain points in time I want to be at certain locations, and the intermediate locations (coordinates) are interpolated. I am using

from scipy.interpolate import interp1d

to get help in interpolating, and then

        # build movement timeseries array; [0] is time in seconds (if target_fps = movSpeed), [1:4] are positions X,Y,Z, [4:7] are angles X,Y,Z
        movSpeed = float(movements.findtext("speed", default="1"))
        for timedata in movements.iter('timeseries'):
            mov_times = np.zeros((7, 0))
            mov_point = np.zeros((7, 1))
            for stepdata in timedata.iter('movementstep'):
                mov_point[0, 0] = float(stepdata.get("time"))
                mov_point[1, 0] = float(stepdata.findtext("movepositionX", default=str(mov_point[1, 0])))
                mov_point[2, 0] = float(stepdata.findtext("movepositionY", default=str(mov_point[2, 0])))
                mov_point[3, 0] = float(stepdata.findtext("movepositionZ", default=str(mov_point[3, 0])))
                mov_point[4, 0] = float(stepdata.findtext("moveangleX", default="0"))
                mov_point[5, 0] = float(stepdata.findtext("moveangleY", default="0"))
                mov_point[6, 0] = float(stepdata.findtext("moveangleZ", default="0"))
                mov_times = np.hstack((mov_times, mov_point))
            # sort by time (first row), just in case ordering is wrong
            mov_times = mov_times[:,mov_times[0,:].argsort()]
            if viewer == "1":
                # for viewer, invert sign of Y coordinate
                mov_times[1,:] = -1 * mov_times[1,:]
            mov.addTimeSeries(mov_times)
            # out of the time series, build frame by frame movement using interpolation. 
            fx = interp1d(mov_times[0,:], mov_times[1:7,:], kind='quadratic')
            # data needed for the whole range of time series. Use speed = "time" per second.
            num_times = int((mov_times[0,-1] - mov_times[0,0]) * vv.target_fps / movSpeed)
            time_frames = np.linspace(mov_times[0,0], mov_times[0,-1], num=num_times, endpoint=True)
            mov_moves = fx(time_frames)

So I am building an array with 7 series: the first for the time, the next three for the coordinates, and the last three for the angles. Then, using interp1d, I set up the interpolation function fx (I am using quadratic, but you can experiment with e.g. a cubic spline) to get a spline interpolation between the points. The whole time series is calculated and stored in advance, so that it can simply be used later.
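
As a standalone illustration of what interp1d produces here (the time points and coordinates below are made up):

import numpy as np
from scipy.interpolate import interp1d

times = np.array([0.0, 2.0, 5.0, 8.0])                # "time" row of the series
coords = np.array([[0.0, 10.0, 10.0, 0.0],            # X positions at those times
                   [0.0,  0.0,  5.0, 5.0]])           # Z positions at those times
fx = interp1d(times, coords, kind='quadratic')        # interpolates along the last axis
frames = np.linspace(0.0, 8.0, num=9, endpoint=True)  # "frame" times
print(fx(frames))                                     # smooth 2 x 9 array of positions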

For some moving objects, I am happy rotating them separately from their movement, or not at all; but for some, like a moving car, I would like them to "point forwards", i.e. have their angles follow their movement. Also, for the viewer, it would be nice to be able to look forward when moving around. For this purpose, I added the following:

            mov_angleforward = movements.findtext("angleforward", default="off")
            if mov_angleforward != "off":
                # angleforward means that object angles should be such that it is rotated according to movement vector, always facing towards movement
                # NOTE: Tested for Y only! Not well defined for XYZ anyway.
                mov_diffs = mov_moves[0:3,1:] - mov_moves[0:3,:-1]
                mov_diffs = np.hstack((mov_diffs, mov_diffs[:,-1:])) # copy second last diff to last
                if mov_angleforward.find("X") >= 0:
                    mov_moves[3,:] += np.arctan2(mov_diffs[1,:], mov_diffs[2,:]) * 180 / np.pi
                if mov_angleforward.find("Y") >= 0:
                    mov_moves[4,:] += np.arctan2(mov_diffs[0,:], -mov_diffs[2,:]) * 180 / np.pi
                if mov_angleforward.find("Z") >= 0:
                    mov_moves[5,:] += np.arctan2(mov_diffs[0,:], mov_diffs[1,:]) * 180 / np.pi

Basically, if the movement item angleforward has value "Y", then the Y angle (the angle "around" the vertical axis, or "left and right") will be defined by the movement in the plane Y=0, i.e. by the change in X and Z coordinates. It is added on top of the spline-interpolated angle; by defining this for Y and also using a constant angle of 90 for Y in the movement, for instance, one can have the object pointing to its side, or the viewer "looking out of the side window."
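
As a quick check of the arctan2 convention used above for the Y angle (the steps here are made up):

import numpy as np

# heading around the vertical axis from one movement step, as in the code above
def y_heading(dx, dz):
    return np.arctan2(dx, -dz) * 180 / np.pi

print(y_heading(1.0, 0.0))   # 90.0 degrees: step along +X
print(y_heading(0.0, -1.0))  # 0.0 degrees: step along -Z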

I am also changing the shadow calculation (see the previous part) slightly to use an intermediate step in rotation, where the objects have been moved and rotated, but still live in a "constant coordinate system" where the plane Y = 0 is the ground. There it is easy to calculate the multiplier for the shadow vector: let's say the light source position (its Y coordinate) is twice as high as the top of the building: multiplier = 2. Then the shadow of the top of the building will fall at [light source position + 2 * (top of building position - light source position)]. The same multiplier also applies after the positions have been rotated to any angle, even if Y = 0 is then no longer the ground. Applying it to the already rotated positions saves me from rotating the resulting shadow coordinates.

This is what we have. A bigger city, a moving light source object with shading and shadows following its movement, a moving car - and a moving viewer:



Next: Part VIII: Finishing Touches

Apr 5, 2020

Real-Time 3D with Python Part VI: Shadows

In the introduction I explained the idea: Using Python to program old skool demo style real-time vector graphics. In Part I I set up a simple 3D wireframe object, rotated it, and drew it on the screen, and in Part II added surfaces and perspective. Part III was about calculating object surface visibility and shading. Part IV set up an XML reader to define the objects and other properties using a common file format (XML), and added the code needed to work with multiple objects. In Part V, I added two special features: the ground and roads. Now it is time to add shadows to the objects.

The program code and XML files for all parts can be downloaded from github. Note that the full code is not included below - to run the program, use the github code.

In the Shadows


With light source shaded objects, shadows add a dose of reality. Of course, in reality, shadows can be extremely complex, so I am going to implement a simplistic approach more suitable for a 1985 home computer (and 2020 Python processing). A shadow will be cast by an object between the ground and the light source, and only to the ground. It would be nice to add a shadow of, say, a building on another building, but that would make the calculations way too complex. In the end, we'll end up with this:

As I explained before, in the original demo rotation was limited to the vertical axis to keep the calculations manageable. With vertical axis rotation only, the ground will always be at Y coordinate zero, which makes calculating shadows from the rotated object coordinates simple. In the Python code, I have kept the ability to rotate around all three axes, so for the shadow calculations there are two alternatives:
  1. project shadows directly on a rotated ground surface using rotated light source coordinates and rotated object coordinates; or
  2. project shadows using unrotated coordinates, when ground is at vertical zero, and then rotate the shadow coordinates.
I chose the latter, as it is easier: solving the shadow coordinates in (1) is quite a bit more difficult, and (2) might also be faster. (See e.g. Programming with OpenGL.)

Projecting a shadow of a simple object. Source: York College CS 370

When the ground is at zero, projecting the X and Z coordinates is a matter of simple scaling: Xs = Xi + (Xo - Xi) * Yi / (Yi - Yo), and similarly for Z; Ys = 0. (s = shadow, o = object, i = light source.)
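
As a sketch with made-up numbers: a light source at (0, 100, 0) and an object node at (20, 50, 0) gives Xs = 0 + (20 - 0) * 100 / (100 - 50) = 40:

def project_to_ground(light, node):
    # shadow of node cast by light onto the plane Y = 0, per the formula above
    xi, yi, zi = light
    xo, yo, zo = node
    t = yi / (yi - yo)           # assumes 0 <= yo < yi (node below the light)
    return (xi + (xo - xi) * t, 0.0, zi + (zo - zi) * t)

print(project_to_ground((0.0, 100.0, 0.0), (20.0, 50.0, 0.0)))  # (40.0, 0.0, 0.0)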

But how to define the shadowed surfaces? A simplistic approach would be to calculate and draw a shadow for every surface of the object. This can easily be refined by drawing a shadow only for the surfaces which light falls on, i.e. the surfaces where angleToLightSource > 0 (remember this was defined as np.dot(self.crossProductVector, self.lightSourceVector) for each surface). Another step, assuming the object is "convex" and "whole", is to figure out the combined shadow area of these surfaces and draw it as a single shadow. The code below does just that:

    def updateShadow(self, viewerAngles, lightNode, vectorPosition, objMinZ, zScale, midScreen):
        """ 
        Update shadowNodes (a list of nodes which define the shadow of the object).
        This routine assumes objects are "whole", that they do not have any holes - such objects should be built as several parts.
        A more resilient way would be to calculate and draw a shadow for each surface separately, but leading to extra calculations and draws.
        Update nodes by calculating and adding the required shadow nodes to the array.
        Note for objects standing on the ground, the nodes where Y=0 are already there and need no calculation / adding.
        """
        obj_pos = -vectorPosition[0:3]  # object position
        shadow_edges = [] # list of object edges that belong to surfaces facing the lightsource, i.e. producing a shadow.
        for surface in (surf for surf in self.surfaces if surf.angleToLightSource > 0 or surf.showBack == 1):
            # add all edges using the smaller node number first in each edge node pair.
            shadow_edges.extend([((min(surface.nodes[i], surface.nodes[i-1]), max(surface.nodes[i], surface.nodes[i-1]))) for i in range(len(surface.nodes))])
        # get all edges which are in the list only once - these should define the outer perimeter. Inner edges are used twice.
        use_edges = [list(c[0]) for c in (d for d in list(Counter(shadow_edges).items()) if d[1] == 1)]
        # these edges should form a continuous line (the perimeter of the shadowed object). Prepare a list of nodes required:
        node_list = []
        node_list.append(use_edges[0][0]) # first node from first edge
        node_list.append(use_edges[0][1]) # second node from first edge
        prev_edge = use_edges[0]
        for i in range(len(use_edges)): 
            if node_list[-1] != node_list[0]:
                for edge in use_edges:
                    if edge != prev_edge: # do not check the edge itself
                        if edge[0] == node_list[-1]:
                            node_list.append(edge[1]) # this edge begins with previous node, add its end
                            prev_edge = edge
                            break
                        if edge[1] == node_list[-1]:
                            node_list.append(edge[0]) # this edge ends with previous node, add its beginning
                            prev_edge = edge
                            break
            else:
                break # full circle reached
        node_list = node_list[0:len(node_list) - 1] # full circle - drop the last node (is equal to first)

Note that this code is still a simplified version where only vertical axis rotation is assumed. This will be fixed later.

Above, I first go through all the surfaces that face the light source and add their edges to a list. Think of a simple object - a cube, for instance - where three surfaces are lit. The outer six edges each belong to only one surface, but the inner three edges each belong to two surfaces. To get the outer perimeter of the shadow-casting surfaces, I use Counter to find the edges that appear in the list only once. Then, the loop builds a continuous list of perimeter nodes.
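
The edge-counting idea can be illustrated with three lit faces of a cube (the node numbers are made up, but the min/max pairing is the same as in the code above):

from collections import Counter

# three visible faces of a cube, each as an ordered list of node numbers
faces = [[4, 5, 6, 7], [0, 1, 5, 4], [1, 2, 6, 5]]
edges = []
for face in faces:
    # store each edge as (smaller node, larger node) so shared edges compare equal
    edges.extend((min(face[i], face[i - 1]), max(face[i], face[i - 1])) for i in range(len(face)))
perimeter = [e for e, count in Counter(edges).items() if count == 1]
print(perimeter)  # the six outer edges; the three shared (inner) edges appear twice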

Having the nodes, it is time to project them on the ground:

        # then project these nodes on the ground (Y = 0), if necessary, and add the required shadowNodes.
        self.shadowNodes = np.zeros((0, 4))
        for node_num in range(len(node_list)):
            node = node_list[node_num]
            # check that node is not already on the ground level or too high compared to light source
            if obj_pos[1] + self.rotatedNodes[node, 1] > 3 and obj_pos[1] + self.rotatedNodes[node, 1] < (lightNode[1] - 3):
                # node must be projected. Add a shadow node and replace current node in node_list with it.
                node_list[node_num] = self.nodeNum + np.shape(self.shadowNodes)[0]
                diff_node = (obj_pos + self.rotatedNodes[node,:]) - lightNode # vector from lightNode to this node
                self.shadowNodes = np.vstack((self.shadowNodes, np.hstack((lightNode + diff_node * (lightNode[1] / -diff_node[1]) - obj_pos, 1))))
        
        self.shadowRotatedNodes = self.shadowNodes[:,0:3]

When starting, all the nodes used for the shadow are actually object nodes. Projection to ground will only be required if the node is higher than the ground (Y > 0) and lower than the light source. If it is on the ground already, we already have the correct position. If it is at light source height or higher, it's impossible to project it. The buildings stand on the ground, so many of the nodes need no projection at all.

If a node needs to be projected, I create a new shadowNode for it, and change the reference in the node_list so that it will be used later instead of the actual object node. As I am using rotated nodes for shadow calculations, shadowRotatedNodes will equal shadowNodes.

These new shadowRotatedNodes will need to be cropped to objMinZ to prevent trying to apply perspective to something behind the viewer, and then flattened to 2D. In the end I have the shadowTransNodes, a list of 2D nodes for drawing the shadow polygon:

        # flatten rotated shadow nodes and build a list of shadowTransNodes. shadowTransNodes has all shadow nodes.
        flat_nodes = np.zeros((0, 3))
        if node_list[-1] < self.nodeNum:
            prev_node = self.rotatedNodes[node_list[-1], 0:3] # previous node XYZ coordinates
        else:
            prev_node = self.shadowRotatedNodes[node_list[-1] - self.nodeNum, 0:3]
        for node_num in range(len(node_list)):
            if node_list[node_num] < self.nodeNum:
                node = self.rotatedNodes[node_list[node_num], 0:3] # current node XYZ coordinates
            else:
                node = self.shadowRotatedNodes[node_list[node_num] - self.nodeNum, 0:3]
            diff_node = node - prev_node # surface side vector
            # if both Z coordinates are behind the viewer, nothing is added for this pair of nodes
            if (node[2] < objMinZ and prev_node[2] >= objMinZ) or (node[2] >= objMinZ and prev_node[2] < objMinZ):
                # line crosses objMinZ, so add a "crop point": start from the previous node and add the difference up to objMinZ
                flat_nodes = np.vstack((flat_nodes, prev_node + diff_node * ((objMinZ - prev_node[2]) / diff_node[2])))
            if node[2] >= objMinZ:
                # add current node, if it is visible
                flat_nodes = np.vstack((flat_nodes, node))
            prev_node = node
        # apply perspective using Z coordinates and add midScreen to center on screen to get to transNodes
        self.shadowTransNodes = (-flat_nodes[:, 0:2] * zScale) / (flat_nodes[:, 2:3]) + midScreen

With the above, actually drawing the shadows is simple. This is done, for each object priority class, so that first all shadows are drawn, and then the objects - ensuring that shadows are behind the objects and not vice versa.

            # draw shadows always first for all visible objects
            for VectorObj in (vobj for vobj in self.VectorObjs if vobj.visible == 1 and vobj.prio == prio_nr and vobj.shadow == 1):
                node_list = self.cropEdges(VectorObj.shadowTransNodes)
                self.drawPolygon(self.shadowColor, node_list, 0)

Note that the VectorObject.shadow attribute is defined in the XML and should, obviously, be set to zero (no shadow) for roads, the light source object etc.

Buffering...


For those not from the ice age like me but used to video streaming, buffering means that you have to wait for something. For real-time graphics, it means one has to draw "in the buffer" and not directly on the screen being shown; otherwise the picture would show up half-finished. In Python pygame this is supported by a simple display.flip() operation.

On the Amiga, triple buffering was commonly used. It works by setting up three identical screens (collections of bitplanes). Let's say we draw the first image on #1. Immediately after we are finished, we start drawing on #2, then, when finished again, on #3, and then start over from #1. So the whole time is spent drawing, with no waiting for a specific moment to switch screens, unless we can draw screens faster than the display frame rate (on a PAL Amiga, 50Hz). What to show then? We always have two finished screens and one we are working on. That's because we cannot switch the screen shown in the middle of a display refresh - we would end up showing half of, say, #1 and half of #2! That's why we need two finished screens: we can wait for the display refresh to finish and then switch to the other finished screen (well, not actually wait - there's an interrupt that kicks in at the right moment). (Nowadays there are hardware solutions for the same problem.)

So are we all right with three screens on the Amiga? Unfortunately not - we need a fourth one as a temporary "drawing board", because objects cannot be drawn and filled directly on the visible screens; that would mess up the filling operation. Luckily, there's a massive 512kB of chip RAM to put these all in... how much memory do we need? A simple calculation: our screen is 352 pixels (bits) wide: 352 / 8 = 44 bytes. It is 288 pixels (lines) high, so one bitplane is 44 x 288 = 12.4kB. Let's say we work with the full 32 colors, and hence need five bitplanes for one screen = 62kB. Triple buffering and one drawing board, four times that = 248kB. Whoa - we have already used half of all the RAM available! The rest is left for program code, music, any bitmap graphics, etc.

The programmers of those days needed to be really good at using scarce resources. Then again, their predecessors were even better with the mighty Commodore 64. Not to mention the VIC-20 with 5kB of RAM... give that to some of today's programmers!