Apr 13, 2020

Real-Time 3D with Python Part VII: Movement

In the introduction I explained the idea: using Python to program old-skool, demo-style real-time vector graphics. In Part I, I set up a simple 3D wireframe object, rotated it, and drew it on the screen; Part II added surfaces and perspective. Part III was about calculating object surface visibility and shading. Part IV set up an XML reader to define the objects and other properties in a common file format, and added the code needed to work with multiple objects. In Part V, I added two special features: the ground and roads. Part VI calculated and drew shadows. This part is about moving in the cityscape and adding moving objects.

The program code and XML files for all parts can be downloaded from GitHub. Note that the snippets below are not the full program; to run it, use the code from GitHub.

Moving Around


In a game, one would add user controls for moving around the city: direction, speed, viewing angles, and so on. In a demo, it is customary not to give control to the viewer but to set everything up for show (exceptions have, of course, been made). There are two similar but separate types of movement in the demo: moving objects and a moving viewer. Both need control of position and angles; "angles" meaning rotation for objects and viewing direction for the viewer, the latter then translated into a rotation of all objects.

I have defined the world as coordinates in 3D space, so it is natural to define movement the same way. However, movement should be smooth, not jump from point to point, so I have defined it (in the XML file) so that at certain points in time the object should be at certain locations, and the intermediate locations (coordinates) are interpolated. I am using

from scipy.interpolate import interp1d

to get help in interpolating, and then

        # build movement timeseries array; [0] is time in seconds (if target_fps = movSpeed), [1:4] are positions X,Y,Z, [4:7] are angles X,Y,Z
        movSpeed = float(movements.findtext("speed", default="1"))
        for timedata in movements.iter('timeseries'):
            mov_times = np.zeros((7, 0))
            mov_point = np.zeros((7, 1))
            for stepdata in timedata.iter('movementstep'):
                #steptime = float(stepdata.get("time"))
                mov_point[0, 0] = float(stepdata.get("time"))
                mov_point[1, 0] = float(stepdata.findtext("movepositionX", default=str(mov_point[1, 0])))
                mov_point[2, 0] = float(stepdata.findtext("movepositionY", default=str(mov_point[2, 0])))
                mov_point[3, 0] = float(stepdata.findtext("movepositionZ", default=str(mov_point[3, 0])))
                mov_point[4, 0] = float(stepdata.findtext("moveangleX", default="0"))
                mov_point[5, 0] = float(stepdata.findtext("moveangleY", default="0"))
                mov_point[6, 0] = float(stepdata.findtext("moveangleZ", default="0"))
                mov_times = np.hstack((mov_times, mov_point))
            # sort by time (first row), just in case ordering is wrong
            mov_times = mov_times[:,mov_times[0,:].argsort()]
            if viewer == "1":
                # for viewer, invert sign of Y coordinate
                mov_times[1,:] = -1 * mov_times[1,:]
            mov.addTimeSeries(mov_times)
            # out of the time series, build frame by frame movement using interpolation. 
            fx = interp1d(mov_times[0,:], mov_times[1:7,:], kind='quadratic')
            # frame times for the whole time series; movSpeed defines how many time units pass per real second
            num_times = int((mov_times[0,-1] - mov_times[0,0]) * vv.target_fps / movSpeed)
            time_frames = np.linspace(mov_times[0,0], mov_times[0,-1], num=num_times, endpoint=True)
            mov_moves = fx(time_frames)

So I am building an array with seven series: the first for time, the next three for the coordinates, and the last three for the angles. Then, using interp1d, I set up the interpolation function fx (I am using kind='quadratic', but you can experiment with e.g. 'cubic' for a cubic spline) to interpolate smoothly between the waypoints. The whole time series is calculated and stored up front, so that later it can simply be looked up frame by frame.
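To illustrate the interpolation step on its own, here is a minimal, self-contained sketch with made-up waypoints; the numbers and the 25 fps frame rate are assumptions for illustration, not values from the demo's XML:

```python
import numpy as np
from scipy.interpolate import interp1d

# Made-up waypoints: at t=0 be at x=0, at t=2 at x=10, at t=4 stay at x=10.
times = np.array([0.0, 2.0, 4.0])
xs = np.array([0.0, 10.0, 10.0])

# Quadratic spline through the waypoints, as in the demo code above.
fx = interp1d(times, xs, kind='quadratic')

# At an assumed 25 fps and movSpeed = 1, 4 seconds of movement is 100 frames.
frames = np.linspace(times[0], times[-1], num=100, endpoint=True)
positions = fx(frames)

# The spline passes exactly through the waypoints at both endpoints.
print(positions[0], positions[-1])
```

The same call works with a 2-D array of values, as in the demo code, where all six position and angle series are interpolated in one go.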

For some moving objects, I am happy rotating them separately from their movement, or not at all; but for some, like a moving car, I would like them to "point forwards", i.e. have their angles follow their movement. Also, for the viewer, it is nice to be able to look forward when moving around. For this purpose, I added the following:

            mov_angleforward = movements.findtext("angleforward", default="off")
            if mov_angleforward != "off":
                # angleforward means that object angles should be such that it is rotated according to movement vector, always facing towards movement
                # NOTE: Tested for Y only! Not well defined for XYZ anyway.
                mov_diffs = mov_moves[0:3,1:] - mov_moves[0:3,:-1]
                mov_diffs = np.hstack((mov_diffs, mov_diffs[:,-1:])) # copy second last diff to last
                if mov_angleforward.find("X") >= 0:
                    mov_moves[3,:] += np.arctan2(mov_diffs[1,:], mov_diffs[2,:]) * 180 / np.pi
                if mov_angleforward.find("Y") >= 0:
                    mov_moves[4,:] += np.arctan2(mov_diffs[0,:], -mov_diffs[2,:]) * 180 / np.pi
                if mov_angleforward.find("Z") >= 0:
                    mov_moves[5,:] += np.arctan2(mov_diffs[0,:], mov_diffs[1,:]) * 180 / np.pi

Basically, if the movement item angleforward has the value "Y", then the Y angle (rotation "around" the vertical axis, i.e. "left and right") is defined by the movement in the plane Y = 0, i.e. by the change in the X and Z coordinates. This angle is added to the angle interpolated from the time series; by defining angleforward for Y and also using a constant Y angle of 90 in the movement, for instance, one can have an object pointing to its side, or the viewer "looking out of the side window."
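As a quick sanity check of the Y-angle formula, here is a toy example; the movement deltas are hand-picked assumptions for illustration:

```python
import numpy as np

# Delta between two consecutive frames: one unit in +X, nothing in Z.
dx, dz = 1.0, 0.0
# Y angle as in the code above: arctan2(dX, -dZ), converted to degrees.
angle_y = np.arctan2(dx, -dz) * 180 / np.pi   # approx. 90: facing along +X

# A step along -Z instead (dz = -1) gives approx. 0, i.e. straight ahead.
angle_straight = np.arctan2(0.0, -(-1.0)) * 180 / np.pi
```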

I am also changing the shadow calculation (see the previous part) slightly to use an intermediate step in the rotation, where the objects have been moved and rotated but still live in a "constant coordinate system" in which the plane Y = 0 is the ground. There it is easy to calculate the multiplier for the shadow vector. Say the light source's Y coordinate is twice as high as the top of a building: then the multiplier is 2, and the shadow of the building top falls at [light source position + 2 * (building top position - light source position)], which lies on the ground. Crucially, the same multiplier still applies after the positions have been rotated to any viewing angle, even though Y = 0 is then no longer the ground. Applying it to the already rotated positions saves me from rotating the resulting shadow coordinates separately.
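The multiplier arithmetic can be sketched as follows; the light and building positions here are assumptions for illustration, not values from the demo:

```python
import numpy as np

# Assumed positions in the "constant coordinate system" where Y = 0 is ground.
light = np.array([0.0, 20.0, 0.0])   # light source, Y = 20
top = np.array([5.0, 10.0, 5.0])     # building top corner, Y = 10

# Multiplier that stretches the light-to-point vector down to the ground:
t = light[1] / (light[1] - top[1])   # 20 / (20 - 10) = 2
shadow = light + t * (top - light)

print(t, shadow)   # 2.0 [10.  0. 10.] - the shadow point lies on Y = 0
```

Since t depends only on the Y coordinates in this intermediate system, the same t can be applied after the positions have been rotated for the viewer, which is exactly the shortcut described above.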

This is what we have. A bigger city, a moving light source object with shading and shadows following its movement, a moving car - and a moving viewer:



Next: Part VIII: Finishing Touches

3 comments:

  1. Hi !
    how to do this:
    https://youtu.be/Yc9o_ldmlVs?t=74
    https://youtu.be/87tvXIIMJes?t=135

  2. Great work.
    I remember writing optimised graphics machine code (hex - not assembly) on the C64. I used self-modifying code in the tight loops to save a few cycles. Those were the days. Now I have 48 core servers and petabytes in the data centre - not so much fun.

  3. I thought about a 2D map editor built with pyglet, a kind of Sim City, which builds a JSON file that is then converted with the lxml module to finally give a 3D city:
    https://stackoverflow.com/questions/3605680/creating-a-simple-xml-file-using-python
