MOVING
STILL

2022
Short Film

<intro>



"Don't paint from nature too much. Art is an abstraction. Derive this abstraction from nature while dreaming before it, and think more of the creation that will result."

- Paul Gauguin

↓ Trailer for "Moving Still" (2022):

↓ Still Frames of "Moving Still" (2022):

Gallery: 42 still frames from "Moving Still" (frames 185 through 23140), created with Cinema 4D and NVIDIA GauGAN 2, blending / morphing CGI environments with AI-generated surreal landscapes. Experimental and abstract look, very eerie.

/ / What is "Moving Still"?



Moving Still is a 13-minute experimental short film and art installation.

It takes the viewer on an odyssey through constantly morphing and pulsating nature scenes with an eerie, dreamlike atmosphere. The visuals, both interconnected and disintegrating, evoke a haunting liminal space.

An ever-growing stem of memories, past or present? Clutching onto silhouettes and shadows in a world too fast to perceive, running into the unknown abyss.



//Project Info

{
    "type": "Personal Project",
    "contributor(s)": "Benno Schulze",
    "full-length": "13:12 min",

    "category": [
        "SHORT FILM",
        "AVANT GARDE",
        "GAUGAN2",
        "ART INSTALLATION"
    ]
}

<INFOS>

Moving Still was created as a passion project, stemming from lengthy experiments (more about that in the project insight) with GauGAN Beta and, later on, GauGAN 2. I found the basic concept of being able to produce artificial, photorealistic scenes of nature immensely intriguing.

What I found even more fascinating, however, were the technical aspects—the inner workings of the GAN. To understand how it works, dissect its processes, test its limits.

Supporting and enhancing the visual narrative with AI became my primary focus: using its weaknesses as a stylistic device rather than trying to create a perfect copy of reality.

From what I learned about GANs, I always drew parallels to the human brain: neurons firing, creating artificial imagery right before your very own eyes. You can imagine the shape of a house, the number of windows, the color of the door, and, drawing from images you've seen and environmental influences (essentially the training data), your brain fills in the shapes to produce a somewhat realistic image with ease.

Back to the GAN: the strong divergence between its individual video frames stems directly from the limited capabilities of GauGAN Beta (2019) / GauGAN 2 (2021), developed by Taesung Park et al. at NVIDIA Research. Although it is no longer available, it was (to my knowledge) the first image generator made available to the public.

The GAN (Generative Adversarial Network) was trained on 10 million—unconnected—reference images of landscapes and, as such, lacks frame consistency since video synthesis was never part of its training data.

Although I created the first version of the short film back in 2022, I have since made multiple additions to both the visual and auditory layers, and I still have things to work on and experiment with, out of pure joy for the base idea. Some of those changes found their way into the project insight.

↓ Further technical info on GauGAN2:

/ / Concept

A deliberate lack of frame-to-frame consistency creates a surreal, abstract pulsation of shapes and contours. Abrupt shifts in lighting, and even the complete replacement of objects, introduce a new layer of narrative. The image is held together only by the silhouettes and compositional balance of its visual elements. A sense of unease is intentionally evoked through the dissonance between components within a single frame: while the camera pans and elements like trees or objects move fluidly, others—such as the ground—remain unnaturally static.

Short scene from Moving Still: an example of the dissonance between elements; focus on the pebble beach.

Depending on the viewer’s subjective focus, the scenes—despite their linear progression—can evoke entirely different impacts and perceived levels of control. Beyond the segmentation maps that guide image generation (LINK), the visual outcome is left entirely to the GAN. The viewer witnesses a virtual, artificially constructed landscape that never existed—or perhaps did. On a parallel, immersive level, the auditory layer abstracts perception further. Initially, low-frequency textures—barely perceptible, like the distant rattling of memory or the mechanical hum of an old film projector—set the tone. At key moments, calibrated highs and lows allow the viewer to both submerge and resurface. This intra-diegetic soundscape is subtly enriched with experimental music elements composed by Azure Studios.

Comparing frame 8630: the left image is the segmentation map input, the right image is the output generated by GauGAN2.

Segmentation maps function as a type of masking process, using predefined hex color codes (as in HTML) to represent different surface elements—such as water, sand, rock, or grass. These coded maps were used within Cinema 4D to texture a custom-built, rough 3D environment, which was then rendered out frame by frame and processed through GauGAN2. Given the extensive volume of 23,785 individual frames, the processing workflow was automated via a custom-built script.
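To make the convention concrete, here is a minimal Python sketch of such a color-to-material mapping. The hex values are hypothetical placeholders, not the official GauGAN2 palette; the check itself is useful for catching stray colors (e.g., from anti-aliased edges) that fall outside the defined labels.

# Hypothetical label palette -- NOT the official GauGAN2 color codes.
from PIL import Image

PALETTE = {
    (56, 110, 165): "water",
    (210, 190, 150): "sand",
    (120, 120, 120): "rock",
    (60, 130, 50): "grass",
    (150, 200, 240): "sky",
}

def check_frame(path):
    # Verify a rendered segmentation frame uses only known label colors.
    img = Image.open(path).convert("RGB")
    colors = {rgb for _, rgb in img.getcolors(maxcolors=2 ** 24)}
    unknown = colors - set(PALETTE)
    if unknown:
        print(f"{path}: {len(unknown)} unknown colors, e.g. {sorted(unknown)[:3]}")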

White image flashes, reminiscent of firing synapses, occurred due to faulty repetitive frames. This happened whenever the custom script, which allocated about 8 seconds per frame, saved the image before it was fully processed. In such cases, the output from the previous frame (e.g., 701) was saved as the next frame in the pipeline (e.g., 702). These were replaced in a second pass, directly linking the film to the working process. This highlights that despite identical segmentation maps, the output image depends on many variables and coincidences. However, if frames are processed in a single pass, a hidden seed creates similarities that are hard to reproduce.

White flashes occurring throughout the movie
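The second pass could be automated with a small helper along these lines (a sketch with illustrative names, not the original pipeline): it flags frames whose file contents are byte-identical to the preceding frame, so they can be re-queued.

import hashlib
from glob import glob

def find_duplicate_frames(output_folder):
    # Flag frames that are byte-identical copies of their predecessor.
    duplicates, prev_hash = [], None
    for path in sorted(glob(output_folder + "/*.png")):
        with open(path, "rb") as f:
            digest = hashlib.md5(f.read()).hexdigest()
        if digest == prev_hash:
            duplicates.append(path)  # candidate for the second pass
        prev_hash = digest
    return duplicates

Note that sorted() compares file names lexicographically; zero-padded frame numbers (e.g., 00701.png) keep the order correct.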


/ / EXHIBITION

First day of exhibition setup on the "Kaffee HAG" grounds.

The 13-minute short film Moving Still premiered as part of the interdisciplinary exhibition „Licht_Raum“, organized by the Zentrum für Kollektivkultur (ZfK) in Bremen and supported by the Bremen Department of Culture. The exhibition took place in a partially abandoned industrial building on the historic Hag grounds – once home to the former coffee manufacturer Kaffee HAG.

The raw, unrestored nature of the venue resonated perfectly with the abstract aesthetic and atmosphere of the film. The installation was one of twelve artistic positions exploring the intersection of light, space, and perception.

Faded lettering left by an unknown intruder from before the exhibition, now blending into the raw aesthetic of the space. Reading: "It is a thing about to take shape, about to reveal its true edges, a presence almost real."
Concept render to capture the atmosphere of the exhibition space.
Illustrated concept for the exhibition space.
The almost finished exhibition space, showing the use of molton and black foil to improve acoustics and accentuate the video installation.

As darkness fell, the building came alive: projections, light sculptures, and sonic interventions transformed the rooms into a dynamic gallery of passing experiences. The event program featured artist talks and live performances, including ambient and electronic sound sets that opened and closed the exhibition.

Moving Still was exhibited on the upper floor of the former research wing – a secluded, quiet space that allowed visitors to fully surrender to the immersive visuals and sound.

Temporary construction columns set up to define a more intimate space for the installation. Later covered in black Molton fabric to improve acoustics and focus the atmosphere.
Main Hall of the exhibition space, also during the setup phase. This part was mainly used for talks and music.
↓ Further documentation on the exhibition:

<INSIGHT>

Excerpt from the work-in-progress material, showcasing the steady improvement of quality and animations.

    "enabledSoftware": [
        "Cinema 4D.exe", //main 3D Software (for input maps)
        "AfterEffects.exe", //post edit
        "Audacity.exe", //custom foley refinement
        "Premiere.exe", //sound design
        "Stable Diffusion", //AI-DLM [link]
           "Automatic1111", //webUi [link]
           "sd-webui-control-net", //AI-NNM [link]
           "depth-map-script", //depth map generator [link]
        "TopazGigapixelAI.exe" //upscaling

    ],
    "webpages": [
        "gaugan.org/gaugan2", //!no more available!
    ],
}  

/ / GauGAN (Beta) - Early Testing

The short film began as an exploration of various techniques using Cinema4D and "GauGAN2." The core idea and workflow centered around creating segmentation maps, where solid colors were used to define shapes and objects.

Each specific hex color code corresponded to a distinct object or material type—such as light blue for the sky, green for a meadow, or gray for a stone. Further direction can be given to GauGAN2 by uploading a style image as a reference for the rough color palette and mood.

Those segmentation maps were created in Cinema4D and rendered out as image sequences to be processed by GauGAN2.

  Segmentation sequence, rendered in Cinema4D

GauGAN2 output with two different style filters enabled. Notice that the silhouettes are not strictly followed; they rather guide the overall composition, which the GAN is free to adjust. In this example, the small patches of clouds connect with each other in the generated image, even though they are disconnected on the segmentation map.

The web interface of GauGAN (Beta) around 2020 (it was first released in 2019).
As you can tell, it looks rather rudimentary compared to today's GANs.

Though one must keep in mind: it was the first generative adversarial network (GAN) for artificial image generation, at least among those released to the public.

  Comment on Reddit about the newly released GauGAN back in 2019... 2025, here we are.

/ / Jumping to GauGAN 2

I had played around with GauGAN (Beta) a bit but had kind of forgotten about it. In 2022 I came back to it with GauGAN2, initially for an event by Luft & Laune, to be used as a social media story ad and as live visual content on stage.

  The web interface of GauGAN 2 – just as with the beta, the processing was outsourced to servers provided by NVIDIA.

While creating still images and short video sequences was enjoyable, I found the long, aggressively pulsating video scenes of nature to be the most fascinating. This effect is due to the GAN’s lack of frame consistency—unsurprising, given that it was only trained to generate single images.

As we’ve seen before, there’s always some variability in how the GAN processes input, even when using the exact same segmentation map.
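One rough way to quantify this variability (my own quick measure, not part of the original workflow) is the mean absolute pixel difference between two outputs generated from the same segmentation map:

import numpy as np
from PIL import Image

def frame_difference(path_a, path_b):
    # Mean absolute per-pixel difference; 0.0 means identical outputs.
    a = np.asarray(Image.open(path_a).convert("RGB"), dtype=np.float32)
    b = np.asarray(Image.open(path_b).convert("RGB"), dtype=np.float32)
    return float(np.abs(a - b).mean())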

/ / Utilizing a script for bulk processing

The main issue, though, was the web interface, which at the time was the only way to use the GAN. It allowed just one upload at a time—you had to click “process,” wait about seven seconds, and then manually download the generated output. Doing this hundreds or even thousands of times would have been absolutely dreadful and mind-numbing.

So with the help of Paul Schulze, I enhanced a Python script — originally created by gormlabenz — for bulk uploading and downloading of input segmentation maps. Modifications also made it possible to set a style image and execute multiple iterations simultaneously.

import base64
import os
import time
from glob import glob

import imageio
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait
from tqdm import tqdm


class Gaugan2Renderer:
    """Drives the GauGAN2 web interface to batch-process segmentation maps."""

    def __init__(self, waiting_time=5):
        self.waiting_time = waiting_time  # seconds to wait per frame
        self.output_images = []
        chrome_options = Options()
        # chrome_options.add_argument("--headless")
        # chrome_options.add_argument("--remote-debugging-port=9222")
        # chrome_options.binary_location = "/usr/bin/chromedriver"
        self.driver = webdriver.Firefox(
            # ChromeDriverManager().install(),
            # options=chrome_options
        )

    def open(self):
        self.driver.get("http://gaugan.org/gaugan2/")
        WebDriverWait(self.driver, 10).until(
            EC.presence_of_element_located((By.ID, "viewport"))
        )
        self.close_popups()

    def close_popups(self):
        # dismiss the header popup and accept the terms checkbox
        close_button = self.driver.find_element(
            By.XPATH, "/html/body/div[2]/div/header/button")
        if close_button:
            close_button.click()
        terms_and_conditions = self.driver.find_element(
            By.XPATH, '//*[@id="myCheck"]')
        if terms_and_conditions:
            terms_and_conditions.click()

    def download_image(self, file_path):
        # the result lives in a <canvas>; grab it as a base64-encoded PNG
        output_canvas = self.driver.find_element(By.ID, 'output')
        canvas_base64 = self.driver.execute_script(
            "return arguments[0].toDataURL('image/png').substring(21);",
            output_canvas)
        canvas_png = base64.b64decode(canvas_base64)
        with open(file_path, 'wb') as f:
            f.write(canvas_png)

    def create_output_dir(self):
        os.makedirs(self.output_path, exist_ok=True)

    def render_image(self, file_path, style_filter_path):
        # upload segmentation map
        self.driver.find_element(
            By.XPATH, '//*[@id="segmapfile"]').send_keys(file_path)
        self.driver.find_element(
            By.XPATH, '//*[@id="btnSegmapLoad"]').click()
        # upload custom style filter
        self.driver.find_element(
            By.XPATH, '//*[@id="imgfile"]').send_keys(style_filter_path)
        self.driver.find_element(
            By.XPATH, '//*[@id="btnLoad"]').click()
        self.driver.find_element(
            By.XPATH, '//*[@id="render"]').click()

    def run(self, input_folder, style_filter_path, output_path):
        self.image_paths = glob(input_folder + "/*.png")
        self.output_path = output_path
        self.open()
        self.create_output_dir()
        for file_path in tqdm(self.image_paths):
            file_path = os.path.abspath(file_path)
            basename = os.path.basename(file_path)
            output_image = os.path.join(self.output_path, basename)
            self.render_image(file_path, style_filter_path)
            time.sleep(self.waiting_time)  # give the server time to render
            self.download_image(output_image)
            self.output_images.append(output_image)
        self.driver.close()

    def create_video(self, output_video):
        images = [imageio.imread(image) for image in self.output_images]
        imageio.mimsave(output_video, images, fps=10)

from gaugan2_renderer import Gaugan2Renderer

renderer = Gaugan2Renderer(waiting_time=10)
renderer.run("./input_folder", "path/to/styleframe/styleframe.png", "./output_folder")
#renderer.create_video("./output.mp4")

/ / Experimenting

A quick test involved using shapes that didn’t align with their designated "colors" (object/material types, such as stone). I noticed that all objects and materials on the segmentation map seemed interconnected. For example, if a small patch of snow was placed in the foreground, trees in the background would also appear snow-covered, even if the segmentation map didn’t explicitly include snow in those areas. Same with fog in the examples below.

  Fog + building
  Fog + stone
↓  Fog + stone
↓  Fog + tree
Same test but with a shark I rigged and animated
  Building + clouds
  Fog + building
  Fog + clouds
Mountains + clouds

/ /  Starting the journey

Over time, I kind of figured out what works and what doesn’t, discovered a visual aesthetic, and developed a visual narrative and perception I was excited to explore more deeply.

However, a major issue persisted. As mentioned before, every element on the segmentation map (e.g., dirt) is connected to the other elements on it (e.g., snow). But when similar elements appear on two otherwise different segmentation maps, the maps seem to act like a masking process, even if the elements differ in size and location.

This means that if the bottom half is covered in light blue, representing straw, this part — in its output — will almost always have the same look [1]. One could even say it is the same picture. Even if the pattern is broken up by smaller dots, like stones or bushes, in the segmentation input [2], it still remains unchanged, as elements only seem to be included once they reach a certain size threshold.

And this isn’t an isolated issue with just this combination of elements — it happens with almost anything. This could be due to several factors: insufficient variation in training data, issues with the seed (which basically adds a randomness factor to the result), or something with the script utilized for bulk processing.

Regardless, when attempting to create a moving scenery, it becomes obviously distracting — perhaps even nauseating — when some elements appear to move along while others, like the ground, seem to remain still, at least with this degree of persistence.

  [1] Issue visualized: Similar output, different input
  [2] Issue visualized: output remains unchanged despite the broken-up input
  Issue visualized: camera movement barely recognizable

The learnings from this are that the ground / overall elements need to be:

A:
Small or distant enough that the difference between adjacent frames is large enough for the outputs to look clearly distinguishable from one another.

B:
Not too small, though: below a certain threshold (a minimum size of about 15x15 px on a 512x512 full-resolution input map), elements are no longer processed (see the sketch after this list).

C:
Large elements, such as the ground, need to be constantly broken up with various DIFFERENT elements (represented as colors in the segmentation map) in order for the camera movement to be recognized by the recipient.

  Issue visualized: camera movement barely recognizable, here even less visible due to the low contrast of the sand texture

In the example below, you can tell that fixing the problems mentioned above (regarding the segmentation maps) substantially improved the output given by GauGAN2.

More project insight coming soon...


