Research into the film's techniques began three years ago and culminated in the creation of custom-built software for visualising the world, Points. Traditional 3D packages like Maya and 3ds Max might have been an option, but with the need for complete control over the point clouds, Ben Torkington set out to build his own software from scratch. Points allows many different point clouds to be composed together. Often the complexity of the models was too much for the "offline" process, so Ben wrote in "bucketing": a way of viewing only 10% or even 1% of the points so that moving around the scene would be less computationally intensive, then for the final renders this would be cranked up to 100%. Bucketing was also used to fade individual models on and off.
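The source doesn't describe how Points implements bucketing internally, but the idea of previewing a fixed fraction of a point cloud can be sketched as a simple random subsample (the function name and NumPy representation here are assumptions for illustration):

```python
import numpy as np

def bucket(points, fraction, seed=0):
    """Return a random subset of a point cloud.

    'points' is an (N, 3) array of positions; 'fraction' is e.g.
    0.01 for a 1% preview, 0.10 for 10%, or 1.0 for the full
    cloud used in the final render.
    """
    rng = np.random.default_rng(seed)
    n = max(1, int(len(points) * fraction))
    idx = rng.choice(len(points), size=n, replace=False)
    return points[idx]

cloud = np.random.rand(100_000, 3)   # stand-in for a dense model
preview = bucket(cloud, 0.01)        # 1% for interactive navigation
final = bucket(cloud, 1.0)           # 100% for the final render
```

Fading a model on or off then amounts to animating `fraction` between 0 and 1 over a run of frames.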
Mia was shot on a blue screen, on a green-screen treadmill (it's just unfortunate they couldn't be the same colour!). These scenes were camera tracked, and Points would take the translation path of the camera crane and add to it a second movement simulating the treadmill, so that Mia could appear to be walking across the point clouds.
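The compositing of the two movements isn't spelled out in the source, but summing the tracked crane translation with a constant treadmill drift can be sketched like this (the function, frame rate, and walking direction are illustrative assumptions):

```python
import numpy as np

def composite_path(crane_path, treadmill_speed, fps=25.0,
                   direction=np.array([0.0, 0.0, 1.0])):
    """Add a constant treadmill drift to a tracked crane path.

    'crane_path' is an (N, 3) per-frame translation from the camera
    track; 'treadmill_speed' is in scene units per second along
    'direction'. The sum makes a subject walking in place appear to
    move through the point clouds.
    """
    n = len(crane_path)
    t = np.arange(n) / fps                        # seconds per frame
    drift = np.outer(t * treadmill_speed, direction)
    return crane_path + drift

crane = np.zeros((100, 3))                        # stand-in tracked data
path = composite_path(crane, treadmill_speed=1.4) # ~walking pace
```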
A huge library of models was gathered, then whittled down to the 80 or so that feature in the film. The process was generally to find an interesting, characteristic place with lots of texture, ideally when the weather was cloudy and overcast (otherwise the sun's shadows would get burnt into the models), and then, using a Canon 7D with a reasonably fast shutter speed, take lots of photos of the particular area we wanted to cover. Often the results were terrible, yielding only a partial model with huge holes in it, while other shoots with very few photos produced the most striking results, such as the leaves-on-the-ground model. Reconstructions were also best made from stills, even in JPEG mode, rather than from frames of H.264 video.
Another technique, used to generate models such as the Colosseum and Stonehenge, was to draw on internet photo collections. For this we had an Amazon EC2 server gather thousands of photos using a bash script that asked Google Image Search for every picture link it would return on a particular subject, then removed corrupt images and duplicates and tarred them all up. We'd then launch a dual-GPU EC2 instance, which was much faster at generating the models than our base EC2 instance. On it we used Changchang Wu's VisualSFM from the command line to generate the SIFT features, their matches, and the sparse SfM point cloud. Once this was done, we'd launch another EC2 instance, this time without worrying about GPUs, and use Yasutaka Furukawa's PMVS2 to build the dense point clouds. The final model would be downloaded back to our studio and combined into the world with Points.
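The original bash script isn't reproduced in the source, but the cleanup pass it describes, dropping corrupt downloads and duplicates before tarring, can be sketched in Python (the function name, directory layout, and JPEG magic-byte check are assumptions for illustration; real duplicate photos from the web may differ byte-for-byte, so this only catches exact copies):

```python
import hashlib
import tarfile
from pathlib import Path

JPEG_MAGIC = b"\xff\xd8\xff"   # all JPEG files start with these bytes

def clean_and_pack(photo_dir, archive="photos.tar"):
    """Remove corrupt and exactly-duplicated JPEGs, then tar the rest."""
    seen = set()
    kept = []
    for path in sorted(Path(photo_dir).glob("*.jpg")):
        data = path.read_bytes()
        if not data.startswith(JPEG_MAGIC):     # truncated / not a JPEG
            path.unlink()
            continue
        digest = hashlib.sha256(data).hexdigest()
        if digest in seen:                      # byte-identical duplicate
            path.unlink()
            continue
        seen.add(digest)
        kept.append(path)
    with tarfile.open(archive, "w") as tar:
        for path in kept:
            tar.add(path, arcname=path.name)
    return kept
```

The resulting archive is what would be shipped to the GPU instance for the VisualSFM and PMVS2 stages.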
Funded by Creative New Zealand and The New Zealand Film Commission.