
Oct 3, 2021

Jelly Cubes

This is something I planned to program some 30 years ago on the Amiga in assembly, but never got around to doing. Now I have, with Python and Pygame. The idea is to have two semi-transparent, concentric cubes rotating so that one grows out of the other, with the parts protruding outside the bigger cube shown as being "cut". It is not realistic in any way, but a nice exercise in vector graphics. Unfortunately, the routine is not perfect, either.

The code can be found in my GitHub repository.



Rotating Back and Forth


It is quite possible to calculate if and how a 3D vector cuts through an arbitrary 3D plane. However, it is much easier when one of the plane's coordinates is constant. For example, picture a cube where all coordinates (X, Y, Z) are -1 or +1; each of the six faces of that cube has one coordinate (X, Y or Z) which is constant (either -1 or +1).

Then picture the other cube, which is only slightly smaller - so that, when rotated, one or more of its vertices (or nodes or corners) will be outside the bigger cube. It is simple to see when that happens: if and only if the vertex has a coordinate > 1 (or < -1). Calculating the point where the edge vector cuts through the other cube's surface is also simple - it is the point where that coordinate equals 1 (or -1).

There are two ways to achieve this. One is to first rotate the smaller cube while keeping the bigger cube stationary, calculate the cuts, and then rotate everything again so that the bigger cube moves as well. However, this causes difficulties when the cubes change size and the smaller cube at some point becomes the bigger one. The other option is to rotate both cubes, and then "rotate back" the smaller cube using the inverse of the big cube's rotation matrix. This results in a version of the smaller cube rotated as if the bigger cube had stayed stationary. The cuts can then be calculated using this version, but applied directly to the "fully rotated" version - avoiding having to rotate the cut points.

    def rotate(self):

        # swap the roles of the two cubes when their sizes cross, so that
        # cube_small always refers to the (currently) smaller cube
        if self.cube_sizes[0] < self.cube_sizes[1] and self.cube_small == 1:
            self.cube_small = 0
            self.cube_big = 1
        elif self.cube_sizes[0] > self.cube_sizes[1] and self.cube_big == 1:
            self.cube_small = 1
            self.cube_big = 0

        matrix_small = self.rotate_matrix_XYZ(self.angles[self.cube_small, :])
        matrix_big = self.rotate_matrix_XYZ(self.angles[self.cube_big, :])
        matrix_big_inv = np.linalg.inv(matrix_big)

        # rotate small cube
        self.rotated_nodes[0:8, :] = np.matmul(self.nodes * self.cube_sizes[self.cube_small], matrix_small)
        # rotate big cube
        self.rotated_nodes[8:16, :] = np.matmul(self.nodes * self.cube_sizes[self.cube_big], matrix_big)
        # make a special rotated copy of the small cube by rotating it with the inverse of big cube matrix.
        # the result is small cube rotated as if big cube was stationary (or not rotated at all).
        self.rotated_nodes_small = np.matmul(self.rotated_nodes[0:8, :] * self.cube_sizes[self.cube_small], matrix_big_inv)

    def rotate_matrix_XYZ(self, angles):

        # define rotation matrix for given angles
        (sx, sy, sz) = np.sin(angles)
        (cx, cy, cz) = np.cos(angles)

        # build a matrix for X, Y, Z rotation (in that order, see Wikipedia: Euler angles).
        return np.array([[cy * cz               , -cy * sz              , sy      ],
                         [cx * sz + cz * sx * sy, cx * cz - sx * sy * sz, -cy * sx],
                         [sx * sz - cx * cz * sy, cz * sx + cx * sy * sz, cx * cy ]])

What is needed for this "rotating back" is the inverse of the rotation matrix. Inverting a matrix can be a costly operation and requires the matrix to be invertible in the first place; fortunately, a rotation matrix is orthogonal, so its inverse is simply its transpose, and a 3 x 3 matrix is quick to invert in any case.
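
Because of that orthogonality, the np.linalg.inv call above could in principle be replaced with a transpose. A minimal sketch (the function name is mine; matrix_big and the rotated nodes refer to the code above):

    import numpy as np

    def rotate_back(rotated_nodes, rotation_matrix):
        # a rotation matrix is orthogonal: its inverse equals its transpose,
        # so no general-purpose matrix inversion is needed
        return np.matmul(rotated_nodes, rotation_matrix.T)

    # e.g. rotated_nodes_small = rotate_back(rotated_nodes[0:8, :], matrix_big)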

Cutting Edge


In the simple case, a vertex protrudes through the other cube's surface so that it forms a small pyramid, consisting of the three surfaces using that vertex, cut at the +1 (or -1) level mentioned above. (See the image above, the red vertex closest to the center of the image.) So we need to calculate the three cut points for the three edges using that vertex. These are simply the weighted averages of each edge's two vertices, using the relative lengths of the edge inside and outside the cube as weights. For example, if X = 1.1 for the vertex outside the cube and X = 0.7 at the other end of the edge, the cut point (where X = 1.0) would be (0.1 / 0.4) x IN + (0.3 / 0.4) x OUT, where IN and OUT are the inside and outside vertices, respectively. And, as said, the calculation can be applied directly to the fully rotated smaller cube's vertices, even though the weights are calculated using the "rotated back" version.
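
As a concrete illustration of that weighted average, here is a small sketch; the function and variable names are mine, and the actual code does this for all edges at once with NumPy:

    def edge_cut_point(node_in, node_out, coord, limit=1.0):
        # node_in is inside the cube, node_out is outside; the cut point is
        # where the given coordinate equals +limit or -limit
        over = abs(node_out[coord]) - limit        # e.g. 1.1 - 1.0 = 0.1
        under = limit - abs(node_in[coord])        # e.g. 1.0 - 0.7 = 0.3
        total = over + under                       # 0.4
        # weighted average: the closer end gets the bigger weight
        return [(over * c_in + under * c_out) / total
                for (c_in, c_out) in zip(node_in, node_out)]

    # e.g. edge_cut_point([0.2, 0.7, -0.3], [0.5, 1.1, 0.1], coord=1) cuts at Y = 1.0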

Of course, life is never simple. If the cube sizes are close to each other, there will be cases where a whole edge, i.e. both vertices defining it and all points between them, is outside the bigger cube. (See the image above, the red edge at the lower left.) Then there is no cut point at all, and the "cut" is a much more complex area than a pyramid. The code handles this by building such cut surfaces separately. Also, the code calculates all the cut points in one go, utilising NumPy operations, so it is not quite as simple as the example above - but it still has to build the cut surfaces in a loop, one by one.




Sep 4, 2021

RGB Sphere (Epcot Sphere / Geodesic Polyhedron)

The RGB Sphere is an advanced version of a routine I programmed for the Amiga, released by Reflect in early 1992 in the intro often called First. It still keeps with the idea of recreating "oldskool demo style" real-time 3D graphics with Python... see my bigger project Sound Vision for lots more.

All the code can be downloaded from GitHub.

The RGB Sphere is a morphing vector object for testing three-color light sourcing. It starts from a tetrahedron or, for better results, an icosahedron, the faces of which are equilateral triangles. All the corners (ends of edges) lie on the surface of a single sphere. This object can then be morphed closer to a full sphere by replacing each triangle with four triangles of equal shape and size, and projecting the new corners, which now lie inside the sphere, out to the sphere surface. Repeating this a few times results in a 3D vector object looking like the Epcot Center sphere at Disney World. All these objects are also geodesic polyhedra.
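
For illustration, here is a standalone sketch of that subdivision step (not taken from the actual code): each triangle is split into four by adding mid-points on its edges, and the new points are projected out to the sphere surface by normalizing them to the sphere radius.

    import numpy as np

    def subdivide(nodes, triangles, radius=1.0):
        # nodes: list of 3D points on the sphere; triangles: list of (i, j, k) node indices
        nodes = [np.asarray(n, dtype=float) for n in nodes]
        midpoint_cache = {}
        new_triangles = []

        def midpoint(i, j):
            key = (min(i, j), max(i, j))               # shared edges reuse the same mid-point
            if key not in midpoint_cache:
                mid = (nodes[i] + nodes[j]) / 2.0
                mid = mid / np.linalg.norm(mid) * radius   # push out to the sphere surface
                midpoint_cache[key] = len(nodes)
                nodes.append(mid)
            return midpoint_cache[key]

        for (i, j, k) in triangles:
            a, b, c = midpoint(i, j), midpoint(j, k), midpoint(k, i)
            new_triangles += [(i, a, c), (a, j, b), (c, b, k), (a, b, c)]
        return nodes, new_triangles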

Three's Company


There are three modes available, and moving between the modes is controlled by the left and right cursor keys. The initial mode is wireframe, where the back side of the object can be seen through the front.


The second mode is a light-sourced version of the same, where the color of each triangle depends on its angle to either one white light source or three separate light sources: red, green and blue (additive colors...). The light source mode can be switched with the l key. The light sources may also move around, and their speed can be selected with the m and n keys. Note that unless they move, they all start at the same spot, so three light sources look the same as one.


The third mode is the same as the second, but instead of each triangle starting out white, the triangles can be colored based on an image. I used a world map, but any image can be used - although projecting it onto a sphere with a limited number of triangles results in a less accurate representation of it... I have another project for that under way.



The changes between the modes and the addition of new triangles ("depth") are made smoothly. When new triangles are added, they are first added "in place", so that the new corners appear in the middle of the old ones, and only then are they slowly pushed out to the sphere surface. Increasing or decreasing the depth is controlled by the up and down cursor keys. You can see the different modes in this video capture:



Preparations Necessary


To ensure smooth transitions, all the object data are calculated and stored before any 3D objects are shown. Building new levels from the original tetrahedron or icosahedron is fully algorithmic, so even more depth could be added, but as it is, going to six levels results in 10,242 nodes (corners, or points to rotate), 35,514 edges connecting them, and 20,480 surfaces. Unfortunately, there is a persistent bug in the code when trying to figure out which edges are at the front at depth 6 in wireframe mode.

The original Amiga OCS assembly code had only 62 surfaces - 31 independent colors were the maximum one could have - and it ran at 208 x 208 pixels @ 50 Hz.


The original Amiga RGB Sphere.

May 30, 2021

Sound Vision Demo in Python

In July 1992, the Amiga demo group Reflect released Sound Vision at the first Assembly demo competition. Having programmed a large part of the demo, which was, at the time, quite awesome (if I may say so), I wondered if I could do the same on a PC with Python and pygame. The demo consists of a number of independent parts with real-time calculated graphics, so I chose just one to get to grips with the programming environment (I had no previous experience with Python). Of course, I picked the most complex part - the 3D Vector World with light sourcing. You can read about it in a previous series of blog posts. Next, I did the Fractal Landscapes part - again, read more here. Then I embarked on coding the rest of the parts, and indeed, now having some practice, they were finished quite a bit quicker than the ones already mentioned.



All the code and graphics and music files can be downloaded from github. The video captures below are from the Python version at 1280x720 @60Hz.


Programming vs. Programming


Programming demos on 1992 Amiga hardware (actually released in 1985) using assembly language was quite different from using today's rather hardware-agnostic Python & pygame setup. At the time, the first thing we demo programmers usually did was to take over from the operating system and use the memory normally dedicated to OS routines for our own purposes. This was possible because all hardware setups were the same (maybe someone had more RAM, but that's about it) and no drivers etc. were needed. Even if, back then, the Motorola MC68000 running at ~7.1 MHz was great, and all you had to fill was a screen with a resolution of 352 x 286 pixels and a maximum of 32 colors, there were quite a few obstacles to overcome before getting to real-time 3D graphics. Consider these:
  • There was no floating point support - integers only. And even with 32-bit registers, you could only perform multiplication on 16-bit integers (0 to 65,535, or -32,768 to 32,767 if signed). Any division result had to fit into that as well. 3D rotations use a lot of multiplication and division - how do you avoid overflow and losing precision?
  • There were no mathematical functions available at all, apart from instructions for addition, subtraction, multiplication and division. For 3D rotations, sine and cosine are essential and used a lot. The sine of any number is, by definition, between -1 and +1 - but all you have are integers! How do you get the sine and cosine, perform the operations, and, most importantly, do it fast enough? And how do you ever get a square root if you need one? I discuss this a bit more here; see also the sketch after this list.
  • And of course, things like CPU cache (today's computers have more of that than the Amiga had total RAM), multiprocessing and multithreading were only in our dreams. Basically, any number to be used in, say, a multiplication first had to be fetched from RAM into one of the eight data registers in the CPU.
  • The standard setup was only 512 kB of RAM, which is about 0.006 % of today's standard 8 GB.
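
For illustration, the usual answer to the sine and cosine problem was a precalculated table in fixed-point format, here sketched in Python rather than MC68000 assembly (the table size 1024 and the scaling factor 16384 are just example values):

    import math

    SCALE = 16384                        # fixed point: a sine value of 1.0 becomes 16384
    SINE_TABLE = [int(round(math.sin(2 * math.pi * i / 1024) * SCALE))
                  for i in range(1024)]  # one full circle in 1024 steps

    def fixed_sin(angle_1024):
        # angle given as 0..1023 instead of radians; the result is an integer
        return SINE_TABLE[angle_1024 & 1023]

    def rotate_2d_fixed(x, y, angle_1024):
        # integer-only 2D rotation: multiply by the scaled sine/cosine,
        # then shift the result back down to the normal scale
        s = fixed_sin(angle_1024)
        c = fixed_sin((angle_1024 + 256) & 1023)   # cosine = sine shifted by a quarter circle
        return ((x * c - y * s) // SCALE, (x * s + y * c) // SCALE)
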
For the Python project, I had a number of goals for how it should work. First, it should run both in a window and in full-screen mode - so using "Amiga style" bitplane graphics modes was probably not a good idea. Second, it should run in any selected resolution (you can set this at the end of SoundVision.py). Third, it should be "standard Python", not requiring Cython-type supersets or any other such additions.

Sound Vision


The demo consists of a number of independent parts, just like the original. The SoundVision.py program simply imports these parts as modules and runs them one after another. In addition, it handles setting up the screen and the music. The parts are briefly described below.

It must also be said that I only did part of the original Amiga programming; credits for the additional code, the music and the still pictures are given in The Stars and End Credits parts.

Stripy Plane Vectors (Title texts)




The stripy plane vectors are created in a very similar manner to how they were realized in the original Amiga demo. There are two planar (2D) vector objects - the letter and the stripes - and they are combined on top of each other to create the striped letter. On the Amiga, this was easy to do with two separate bitplanes; on a regular PC screen, the stripes are "subtracted" from the letter using pygame's blit command. Note the nice copper bars imitation in the background!
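
One way to do that subtraction is a subtractive blend flag on the blit; a minimal sketch with placeholder shapes (not the actual demo code):

    import pygame

    letter = pygame.Surface((200, 200))
    stripes = pygame.Surface((200, 200))
    pygame.draw.polygon(letter, (255, 255, 255), [(20, 180), (100, 20), (180, 180)])  # a letter-ish shape
    for y in range(0, 200, 16):
        pygame.draw.rect(stripes, (255, 255, 255), (0, y, 200, 8))                    # horizontal stripes

    # subtractive blit: only the parts of the letter not covered by a stripe stay white
    striped_letter = letter.copy()
    striped_letter.blit(stripes, (0, 0), special_flags=pygame.BLEND_RGB_SUB)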

Metamorphing Title Text




This was an invention - or rather an addition to an existing shadebobs routine - that was able to shift gradually from one 1-color image to another, using a 32-color palette. In essence, it stored the last 31 images, adding them on top of each other so that the highest color number was used where all images overlapped, and the other colors similarly according to the number of overlapping images. The invention was merely to also subtract the oldest image, so that instead of continuously adding images, only 31 were in use at any one time.

Moving and/or rotating the images slightly, combined with this, creates the effect. Changing from one text to another is simply a matter of defining the letters so that each letter coordinate can be interpolated to the corresponding coordinate of the new letter. I improved on the original by using a 256-color palette.

The Stars




The movable 3D stars were easy; the race was about how many of them you could have. I think I got up to something like 650 movable 3D stars with true perspective in two bitplanes on the Amiga. Movable here means they can go sideways and up or down, but not rotate around any axis. Adding some bitplanes for the credits meant spending some precious cycles on having more colors, so in the demo there are maybe 630 stars or so.

In Python, using NumPy here is quite efficient. The number of stars is 1 % of the window's pixel count, so for a Full HD screen you get 20,760 stars. In addition, the closest ones are plotted at 2, 3 or 4 pixel size - but still, if you watch the video above, you need to enlarge it to actually see them. For plotting the stars, it would be far too slow to use a loop and set each pixel separately. Instead, I am using pygame's surfarray, which can be operated on as if it were a standard NumPy array. This way all the stars can be plotted simultaneously! I have also improved on the original code so that moving the stars works better (albeit not perfectly) and they can also be rotated around the Z axis (the axis from the viewer into the distance).
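
A simplified sketch of the surfarray idea (the actual code also handles movement, perspective, star sizes and colors):

    import numpy as np
    import pygame

    pygame.init()
    screen = pygame.display.set_mode((1280, 720))
    rng = np.random.default_rng()
    num_stars = int(1280 * 720 * 0.01)
    x = rng.integers(0, 1280, num_stars)
    y = rng.integers(0, 720, num_stars)

    # pixels3d gives a referenced (width, height, 3) NumPy view of the screen surface,
    # so all stars can be written with one fancy-indexing assignment - no Python loop
    pixels = pygame.surfarray.pixels3d(screen)
    pixels[x, y] = (255, 255, 255)
    del pixels                          # release the surface lock before flipping
    pygame.display.flip()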

Side Effect Cube




On the Amiga, this part was programmed by my friend and colleague Overlander. The aim is to have a number of 3D objects - a star field, a rotating 2D star, and a 3D cube - and project these onto the sides of another, bigger 3D cube. The projection can be done in two ways: 1) rotate the "side objects", project them onto a stationary big cube's sides, and then rotate the big cube including these; or 2) rotate all objects separately, and use interpolation to add the smaller objects to the big cube. The latter is faster, if perhaps slightly less accurate. I am not sure which method Overlander used, but I used the latter here.
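
A sketch of the interpolation approach (option 2): a 2D point of a side object, with coordinates in [-1, 1], is mapped onto a rotated cube face using the face's four rotated corners (bilinear interpolation). The names are mine, not from the actual code.

    import numpy as np

    def map_to_face(point_2d, face_corners):
        # point_2d: (u, v) in [-1, 1] x [-1, 1] within the side object
        # face_corners: 4 x 3 array of the rotated face corners, in the order
        # (-1, -1), (+1, -1), (+1, +1), (-1, +1)
        u = (point_2d[0] + 1) / 2
        v = (point_2d[1] + 1) / 2
        c = np.asarray(face_corners, dtype=float)
        bottom = (1 - u) * c[0] + u * c[1]
        top = (1 - u) * c[3] + u * c[2]
        return (1 - v) * bottom + v * top          # a 3D point lying on the rotated face

    # e.g. the centre of the side object lands in the middle of the face:
    # map_to_face((0, 0), [[-1, -1, 1], [1, -1, 1], [1, 1, 1], [-1, 1, 1]]) -> [0, 0, 1]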

Shadow Bobs




This effect, probably better known as shadebobs, was something of a filler, as there was really nothing new to it. It was needed because the next part, the Globe, relied heavily on the processor to rotate a (back then) huge 3D object, and that could not be done in real time with a decent frame rate. However, shadebobs were very light on the processor and heavy on the Blitter coprocessor instead; so while the Blitter was busy creating shadebobs, the CPU was used for precalculating the Globe.

I improved on the original by moving to a 256-color palette (from 32 on the Amiga). You could actually easily use even more colors, but 256 is convenient when using a NumPy array with data type uint8 (unsigned 8-bit integer): when adding to color 255, it wraps back to 0 automatically and no additional code is needed to check for that limit. Again I am using pygame's surfarray for this and am able to process hundreds of bobs per frame.
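
A sketch of the wraparound idea with a uint8 array (the bob shape and the palette handling are left out):

    import numpy as np

    # a 256-color "screen" as an unsigned 8-bit array
    screen_colors = np.zeros((320, 200), dtype=np.uint8)

    # add 1 to every pixel under the bob; values at 255 wrap back to 0 automatically,
    # so no clipping or overflow check is needed
    bob_x, bob_y, bob_size = 100, 80, 32
    screen_colors[bob_x:bob_x + bob_size, bob_y:bob_y + bob_size] += 1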

The Globe




On the Amiga, the 300 frames (pictures) at 240 x 240 pixels in EHB (extra half-brite, six bitplanes) would have used 12 MB of RAM. Of course, four bitplanes (the blue ball) are stationary, but there was still nowhere near enough RAM to store the pictures, even with extra RAM, as the pictures needed to reside in so-called Chip RAM. Instead, during the Shadow Bobs part, the globe was rotated and the necessary Blitter data was stored. Even this required a 512 kB memory extension (so 1 MB of RAM in total). Then, using the precomputed Blitter data, the object was drawn fresh for each frame. Amiga's EHB mode allowed a sixth bitplane to be used, but it had no color registers - it simply halved the color brightness. This is how I was able to show Finland separately. The blue ball in the background was an image created in a ray tracing program.

With Python, I create the blue ball first. It is actually a one-color blue ball - but I also create a shaded gray ball, which is then used to create the shading or "light source" effect using pygame's multiplicative blit operation. No precalculation is needed; the 3D land masses are rotated and drawn in real time. There is also a small routine to convert a Mercator-projection world map to 3D globe Cartesian coordinates. The bit that needed quite a lot of work was how to define the edges of a continent when parts of it are on the back side. On the Amiga this was actually easier!
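
A sketch of the multiplicative shading, with flat-colored placeholder balls instead of the real graphics (names and values are mine):

    import pygame

    size = 400
    blue_ball = pygame.Surface((size, size))
    shade = pygame.Surface((size, size))
    pygame.draw.circle(blue_ball, (0, 80, 220), (size // 2, size // 2), size // 2)
    # the shading ball: dim at the edges, brighter towards the "light source" point
    for i, col in enumerate(range(64, 256, 8)):
        pygame.draw.circle(shade, (col, col, col), (size // 3, size // 3), size // 2 - i * 6)

    # multiplicative blit: result = blue * shade / 255, so the gray ball darkens the blue one
    shaded_ball = blue_ball.copy()
    shaded_ball.blit(shade, (0, 0), special_flags=pygame.BLEND_RGB_MULT)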

The World




This was really the part that took the most time and effort. There is a lot happening: a 3D city the viewer can fly through, moving objects like a car, and a moving light source, with the buildings shaded accordingly and casting shadows. Nothing is precalculated, at least in the sense that you could fly another route or give control of the flight to the viewer. On the Amiga, the rotation of the city is 2D only, for two reasons: 1) it would have been slower in 3D, and 2) the blue "ground" is created using the Copper coprocessor, and going 3D would have meant not using this effect at all. While all the other parts run at "full frame rate", i.e. 50 frames per second (PAL Amigas had a fixed 50 Hz refresh rate), this part is so complex that it at times runs at something like 13-15 frames per second - a bit jerky, that is. I still like it, though.

The Python implementation was my first Python project - read more about it here - and by now I would probably make some changes to it. In the Amiga assembly code it was important to avoid unnecessary calculations, but it seems that using Python to, for example, figure out which rotations are not required is often slower than rotating everything with a single NumPy operation. Anyway, I improved on the original by keeping the code fully 3D, so there is some Z-axis rotation as well. Also, the data on which the city, movement etc. are based is now in a separate XML file.

The Milky Way




This is a variation of the Metamorphing Title Text: there is a rather simple planar vector object which is rotated, and the last 31 (on the Amiga) or 127 (in Python) images are added on top of each other to form the final image. Here, instead of using a surfarray, I was able to use some clever additive and subtractive blits to generate the final true-color image, thanks to the limited palette.

Box in a Box in a Box...




There are eight boxes rotating inside each other, within a frame. On the Amiga, I needed the frame to make the updated area smaller than the full screen - otherwise it would not have worked at 50 fps in five bitplanes. Even though there are only eight boxes with three differently colored sides showing each, i.e. 24 colors, the opacity of each box must be taken into account as well. The solution, in bitplane graphics, was simple (if I recall correctly): the area covered by each box was drawn using three bitplanes (that is, using eight colors - one for each box), to give each box its base color. Then, each box had two sides drawn in bitplane number four and two in bitplane number five, so that they added bits 01, 10 and 11 to the total color, separating the sides. This was rather efficient, as drawing filled polygons on the Amiga was achieved by drawing the vertical edges (one pixel per horizontal line) and then invoking a Blitter fill operation. The fill operation worked in an XOR fashion, so that it always switched on/off when it encountered a new pixel on a horizontal line. Thus, I could draw all the edges of the boxes and use just one fill operation to achieve the result, where the color always changes when an outer box's edge crosses a box's surface.

With Python, I could have used a surfarray again, but instead there is simply an alpha value in use. The biggest box is drawn directly on the screen, but the smaller ones are blended in so that about one third of the existing image remains and about two thirds (the alpha value) come from the new, smaller box, creating the effect of seeing through the outer boxes. There are some limitations to how this works, and I had to somewhat "work backwards" from the colors I wanted to the colors actually used, as they blend with the previously drawn boxes' colors.
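
A sketch of the alpha idea: each smaller box is drawn on its own surface, given a per-surface alpha of roughly two thirds, and blitted on top of what is already there (shapes, names and values are mine):

    import pygame

    pygame.init()
    screen = pygame.display.set_mode((800, 600))

    # the biggest box is drawn directly, fully opaque
    pygame.draw.polygon(screen, (200, 60, 60), [(150, 100), (650, 120), (620, 500), (180, 480)])

    # a smaller box on its own surface, blended in with about 2/3 opacity,
    # so about 1/3 of the outer box still shows through it
    inner = pygame.Surface((800, 600))
    inner.set_colorkey((0, 0, 0))        # keep the area outside the box fully transparent
    pygame.draw.polygon(inner, (60, 60, 200), [(300, 200), (520, 210), (500, 400), (310, 390)])
    inner.set_alpha(170)                 # roughly two thirds of 255
    screen.blit(inner, (0, 0))
    pygame.display.flip()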

Fractal Landscapes




This is one of my favorite routines, and I have written more about it here. As a fractal it is very simple, but I was able to calculate it really fast considering the Amiga's resources, and I still like the zoomer (try it with the mouse) a lot. You can actually zoom into it infinitely. In the Python version, I improved the routine by adding controls for the resolution of the grid.

Raytracing




The ray tracing was done by Overlander using the Amiga ray tracing program Real3D. It consists of only ten images - remember, memory and disk space were scarce resources - and, if I remember correctly, those ten images took 37 hours to calculate on his Amiga 500. All I did here was run the original demo in an Amiga emulator (WinUAE) and snip the ten images from the screen... and add a simple scroll text to get the original feel. The Reflect logo was drawn by Nighthawk.

End Credits




Not much is happening here; there was some space left on the original Amiga 880 kB floppy disk, so this was added as a finishing touch. The disk was quite full, and we actually used our own track loader (coded by Overlander) - no file system, but actual commands to the disk drive: move the magnetic head one step further, read a sector, etc. You probably would not do that with modern-day computers any more.

The funny thing is that, on top of relying on a huge amount of operating system code, the files for the Python implementation take more space, about 1.7 MB, although only 268 kB of that is Python source code; 1.0 MB and 289 kB are taken by the still graphics and music, respectively.

Thank you for reading.

Nov 22, 2020

Fractal Landscapes

In the introduction to my previous project, I explained the idea: using Python to program old skool demo style real-time vector graphics. The finished project can be found here. This post is about a different part of the same demo - random Fractal Landscapes.

Again, the Python code can be found on GitHub.

Fractals They Are


Benoit Mandelbrot defined fractals as "a rough or fragmented geometric shape that can be split into parts, each of which is (at least approximately) a reduced-size copy of the whole". These landscapes use a simple mid-point replacement algorithm to generate random fractal terrain, which can be zoomed into indefinitely.

The original Amiga demo (1992) used a fixed 65 x 65 point grid (i.e. 4,225 random grid values and 8,192 triangle surfaces between them), but this Python version can switch between (2^n + 1) x (2^n + 1) point grids, with n between 3 and 10 (i.e. 81 to 1,050,625 grid points and 128 to 2,097,152 surfaces). With Amiga machine language and the Blitter, it took about two seconds to draw a new landscape; on my PC it takes about the same time to draw a 2^8 + 1 version - 16 times as many surfaces. Although the screen resolution on my PC is also a lot higher (about 25 times the number of pixels drawn), this mainly shows how slow (my) Python implementation is.


Fractal Routine


Generating a landscape is simple, especially as the grid size is always 2^n + 1. First, the corners are set. Then a simple loop goes through the mid-points, setting each as the average of its end-points plus a random change. The size of the random change is halved after each round, as the distance between the end-points is also halved from the previous round. Unfortunately, I could not figure out a nicer way of achieving this without looping through single points.

    def generateGrid(self):
        
        # full grid generation. Start from corners.
        rSize = self.randSize
        startSize = self.landSize
        # set corner values. Tilt: Use higher altitudes for back of grid.
        self.grid[0, 0] = (random.random() - 0.5 + self.tilt * 2) * rSize
        self.grid[0, self.gridSize] = (random.random() - 0.5 + self.tilt * 2) * rSize
        self.grid[self.gridSize, 0] = (random.random() - 0.5 - self.tilt) * rSize
        self.grid[self.gridSize, self.gridSize] = (random.random() - 0.5 - self.tilt) * rSize
            
        # go through grid by adding a mid point first on axis 0 (X), then on axis 1 (Z), as average of end points + a random shift
        # each round the rSize will be halved as the distance between end points (step) is halved as well
        for s in range(startSize, 0, -1):
            halfStep = 2 ** (s - 1)
            step = 2 * halfStep
            # generate mid point in x for each z
            for z in range(0, self.gridSize + 1, step):
                for x in range(step, self.gridSize + 1, step):
                    self.grid[x - halfStep, z] = (self.grid[x - step, z] + self.grid[x, z]) / 2 + (random.random() - 0.5) * rSize
            # generate mid point in z for each x (including the new x just created, so using halfStep)
            for x in range(0, self.gridSize + 1, halfStep):
                for z in range(step, self.gridSize + 1, step):
                    self.grid[x, z - halfStep] = (self.grid[x, z - step] + self.grid[x, z]) / 2 + (random.random() - 0.5) * rSize
            rSize = rSize / 2
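
For what it is worth, the innermost loop could probably be vectorized with NumPy slicing, since within one round the new mid-points do not depend on each other (the rounds themselves still have to run in order). An illustrative sketch of the X direction only, as a standalone function with the same naming as above and assuming the grid is a NumPy array; the Z direction would work the same way:

    import numpy as np

    def midpoints_x(grid, gridSize, step, halfStep, rSize):
        # fill grid[x - halfStep, z] for every z row and every x = step, 2 * step, ...
        # with slicing operations instead of Python loops
        count = gridSize // step
        left = grid[0:gridSize:step, 0:gridSize + 1:step]            # x = 0, step, ...
        right = grid[step:gridSize + 1:step, 0:gridSize + 1:step]    # x = step, 2 * step, ...
        noise = (np.random.random((count, count + 1)) - 0.5) * rSize
        grid[halfStep:gridSize:step, 0:gridSize + 1:step] = (left + right) / 2 + noise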


Mountains and the sea


The coloring follows the original demo: high altitudes are ice, below that there is brown soil, and close to sea level, green vegetation. The sea is a nice blue, and while it is generated in exactly the same way as the land, it is drawn flat and its (negative) height only defines its color. For land, the color is given by how steep the triangle is, giving a shaded look.

To the Drawing Board


Drawing the fractals on screen is simple: start from the back, and for each grid square, draw two triangles. As the viewer is above sea level, the sea-level coordinate of each new row is lower than that of the previous row, and for perspective, the horizontal distance between the grid points grows the closer to the viewer we get.

Zooming in


As the definition above implies, one can take a part of a fractal, look at it more closely, and it will have the same general properties as the whole. These landscapes can be zoomed into indefinitely. There is a mouse-controlled zoomer, which can be used to select any quarter of the land area and zoom into it. Of course, zooming into a mountain will soon lead to that mountain growing so high that it goes entirely out of sight, so zooming in on some island in the sea is a better idea.

Zooming is quite simple. Take the zoomed area's grid and spread it over the whole grid, filling in one grid point in four and doubling the heights at the same time. Then use the last phase of the random mid-point replacement algorithm to fill in the empty grid points.
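
A standalone sketch of that zoom step, with hypothetical names (the real code also handles sea level, colors and so on):

    import numpy as np
    import random

    def zoom_quarter(grid, x0, z0, rand_size):
        # grid is (n + 1) x (n + 1); (x0, z0) is the corner of the selected quarter
        n = grid.shape[0] - 1
        half = n // 2
        new_grid = np.zeros_like(grid)
        # spread the selected quarter over the whole grid (every second point) and double the heights
        new_grid[::2, ::2] = grid[x0:x0 + half + 1, z0:z0 + half + 1] * 2.0
        # fill the remaining points with one round of mid-point replacement
        for x in range(1, n, 2):
            for z in range(0, n + 1, 2):
                new_grid[x, z] = (new_grid[x - 1, z] + new_grid[x + 1, z]) / 2 \
                                 + (random.random() - 0.5) * rand_size
        for x in range(0, n + 1):
            for z in range(1, n, 2):
                new_grid[x, z] = (new_grid[x, z - 1] + new_grid[x, z + 1]) / 2 \
                                 + (random.random() - 0.5) * rand_size
        return new_grid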

The video clip shows zooming in action.



User Controls


There are some additional user controls in the Python version. They are listed in the information area above the landscape: cursor keys control the grid size and general steepness, and f toggles full screen on/off. As in the original, the left mouse button creates a new landscape, while the right mouse button zooms into the mouse-selected area.

Parallel Thoughts


Having a large number of grid points to calculate, or triangles to draw, sounds like a good candidate for using all processor cores. Alas, this is not the case here. For the grid points there would be some complications, as the number of points calculated in each step varies and the points depend on the previous step. Perhaps drawing the triangles would be easier, but I could not get even simple examples of Pool or Process to really work. Some say there are issues with the environment I use (namely, Spyder 4.0.1). Perhaps sometime later...

Feb 22, 2020

Real-Time 3D Vector Graphics with Python

I used to program real-time graphics demos on the Commodore Amiga at the end of the 1980's and the beginning of the 1990's. The programming language of choice at the time was assembly, a.k.a. machine language, to be able to squeeze everything possible (and then some) out of the hardware.

Now, some 30 years later, I wanted to learn Python, perhaps to read data from files and perform some math operations on it. In need of a fun project to give me a decent start, I decided to try to replicate what I did back then with assembly. Using a high-level, interpreted language like Python adds a lot of overhead compared to executing assembly, and so does Windows running in the background. On the other hand, my ~4-year-old laptop has a 2.6 GHz 64-bit Intel i7 with four cores, 16 GB of RAM and a graphics card, while the Amiga 1000 had a 7.09 MHz 16/32-bit Motorola 68000, 512 kB of RAM and some co-processors. The i7 has more L2 cache memory than the Amiga (which had no cache whatsoever) had total RAM. According to the famous Moore's law (loosely interpreted), I might have something like 30,000 times more computing power!

Sample of (my) MC68000 assembly source code (1991).

So, this is where I took my inspiration: the 3D vector world I programmed for the demo Sound Vision, released by our group Reflect at the very first Assembly demo party in Kauniainen, Finland, in July 1992. (We finished 1st in the demo competition. Assembly is still very active, but nowadays more about gaming.) Some vector world routines had of course been presented earlier, but this was the first one with a moving light source and real-time calculated shadows.

This blog is partly about how to do this with Python, and partly about how this was done 30 years ago ("old school" a.k.a. "oldskool") with assembly on the Amiga.


The whole demo, as a video, can be found at e.g. the Assembly Archive. The part above I actually captured by running an Amiga emulator (WinUAE) on my laptop with an original disk image of the demo, which is less than 880 kB including all the code, music, and graphics! The Amiga (original chip set) was capable of a 352 x 288 pixel resolution in 32 colors, and there was basically so much happening in this part (considering the resources) that the frame rate (pictures per second) is at times as low as 12-15 - hence it does look somewhat jerky.

So, how to go about it with Python? First, I installed Anaconda on my laptop and started Spyder. I quickly found out that I needed a real-time graphics package, so I installed Pygame. Pygame can be used to set up a screen or a window and to draw on it - something that was handled by the Copper and the Blitter co-processors on the Amiga, respectively. A number of other, more standard, packages like math and NumPy also came in handy.

Setting up an Amiga demo


The Amiga had 512 kB of "chip" RAM, i.e. memory available for program code and usable by the graphics and sound co-processors. The separate ROM (read-only memory) contained the BIOS; on my Amiga 1000 even this had to be loaded from a "kickstart" disk at startup. A track-loaded demo ("trackmo") would boot from a disk and then run the first 1 kB stored on that disk (the "bootblock"). Basically, you could have your own code there and then use the disk drive hardware directly to proceed - asking it to load "tracks" from the disk straight into memory. The Amiga OS was not needed for anything: no libraries, no drivers, just accessing the hardware registers directly.

So once you had loaded some music, perhaps graphics like a font for a "scroller", and your actual program code, then what? The first thing to do in the demo code was to take hold of the system interrupts (these "interrupt" the currently running code and temporarily give the CPU to some other code) and create a "copper list". The copper list tells the Copper co-processor where the bitplanes to be shown on screen are located, how the colors (up to 32) are defined, and some other graphics-related settings. Then, kick off the music and start drawing on the screen! For the drawing operations, the Blitter would be used. According to the manual, the Blitter is "the high speed line drawing and block movement component of the system", able to clear, copy and fill areas and to draw lines. So, to create a simple text scroller, first find the location of the first letter of your scroll text and copy the picture of that letter to the rightmost edge of your screen. Then, in the next frame, move the screen image a few pixels to the left. Once the whole letter has been moved far enough, there is room for the next letter, and so on. See this tutorial for an example!

More on demo programming:
    Crash Course to Amiga Assembly Programming
    Coder Aid
    Amiga Hardware Programming (video)