
RealFormer - Photorealism over Supersampling

EmelinLabs introduces RealFormer, a novel Image-to-Image Transformer-based architecture designed and trained to enhance photorealism in images, particularly focused on bringing a real lifelike style to synthetic artifacts in media.

Model Details

A detailed description of the model, its architecture, training data, and procedures.

Model Description

RealFormer is an innovative Vision Transformer (ViT) based architecture that combines elements of Linear Attention (approximation attention) with Swin Transformers and adaptive instance normalization (AdaIN) for style transfer. It is designed to transform images, specifically targeting the video game and animation industry, by enhancing their photorealism or applying style transfer.
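
For reference, AdaIN is a standard operation: it re-normalizes content features so their channel-wise mean and standard deviation match those of the style features. A minimal sketch (not the exact module used in RealFormer):

import torch

def adain(content: torch.Tensor, style: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    # Adaptive Instance Normalization over (N, C, H, W) feature maps:
    # normalize content per channel, then re-scale/shift with style statistics.
    c_mean = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style.mean(dim=(2, 3), keepdim=True)
    s_std = style.std(dim=(2, 3), keepdim=True) + eps
    return s_std * (content - c_mean) / c_std + s_mean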

  • Developed by: Alosh Denny
  • Funded by: EmelinLabs
  • Shared by: EmelinLabs
  • Model type: Image-to-Image Transformer
  • Language(s) (NLP): None (Pre-trained Generative Image Model)
  • License: Apache-2.0
  • Finetuned from model: Novel; pre-trained (not finetuned)

Uses

Direct Use

RealFormer is designed for image-to-image translation tasks. It can be used directly for:

  • Enhancing photorealism in synthetic images (e.g., transforming video game graphics to more realistic images)
  • Style transfer between rendered frames and post-processed frames
  • Incorporation into upscaling pipelines alongside DLSS

Downstream Use

Potential downstream uses could include:

  • Integration into game engines for real-time graphics enhancement - the AdaIN layers are finetunable for video-game-specific use cases. In this implementation, the models have been pretrained on a variety of video footage for super-sampling, photorealistic style transfer, and reverse photorealism.
  • Pre-processing step in computer vision pipelines to improve input image quality - decoder layers can be frozen for task-specific use cases (see the sketch after this list).
  • Photo editing software for synthesized image enhancement
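
A minimal sketch of the decoder-freezing recipe, assuming a model instance as created in the "How to Get Started" snippet below and the decoder_layers attribute shown in the architecture printouts:

import torch

# Freeze the decoder for task-specific finetuning; only the remaining
# parameters receive gradient updates.
for param in model.decoder_layers.parameters():
    param.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)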

Out-of-Scope Use

This model is not recommended for:

  • Generating or manipulating images in ways that could be deceptive or harmful
  • Tasks requiring perfect preservation of specific image details, as the transformation process may alter some artifacts of the image
  • Medical or forensic image analysis, where any alteration could lead to misinterpretation. Remember, this is a generative model, not a classification or detection model.

Bias, Risks, and Limitations

  • The model may introduce biases present in the training data, potentially altering images in ways that reflect these biases.
  • There's a risk of over-smoothing or losing fine details in the image transformation process.
  • The model's performance may vary significantly depending on the input image characteristics and how similar they are to the training data.
  • As with any image manipulation tool, there's a potential for misuse in creating deceptive or altered images.

How to Get Started with the Model

Use the code below to get started with the model.

import torch

# RealFormerAGA, load_image and visualize_tensor ship with this repository's code.

# Instantiate the model
model = RealFormerAGA(img_size=256, patch_size=8, emb_dim=768, num_heads=32, num_layers=16, hidden_dim=3072)

# Move model to GPU if available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

# Load an image
input_image = load_image('path_to_your_image.png')
input_image = input_image.to(device)

# Perform inference
with torch.no_grad():
    output = model(input_image, input_image)  # Using input as both content and style for this example

# Visualize or save the output
visualize_tensor(output, "Output Image")

Training Details

Training Data

  • Preliminary: The model was trained on the Pre-Training Dataset, and the decoder layers were then frozen to finetune it on the Calibration Dataset for Grand Theft Auto V. The former includes over 400,000 frames of footage from 9 video games (including WatchDogs 2, Grand Theft Auto V, and Cyberpunk 2077) as well as several Hollywood films and high-definition photos. The latter comprises ~25,000 high-definition semantic-segmentation-map/rendered-frame pairs captured in-game from Grand Theft Auto V with a UNet-based semantic segmentation model.

  • Latest: The latest model was trained purely on the Calibration Dataset for Grand Theft Auto V, a composition of over 1.24 billion real-world images and over 117 million in-game captured frames.

Training Procedure

  • Optimizer: Adam
  • Learning rate: 0.001
  • Batch size: 8
  • Steps per epoch: 3,125
  • Number of epochs: 100
  • Total number of steps: 312,500
  • Loss function: Combined L1 Loss, Perceptual Loss, Style Transfer Loss, and Total Variation Loss (sketched below)
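
The combined objective can be sketched as a weighted sum of the four terms. The weights below and the choice of feature extractor for the perceptual and style terms (e.g., a pretrained VGG) are illustrative assumptions, not the exact training configuration:

import torch
import torch.nn.functional as F

def gram(f: torch.Tensor) -> torch.Tensor:
    # Gram matrix of a (N, C, H, W) feature map, used for the style term.
    n, c, h, w = f.shape
    f = f.reshape(n, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def total_variation(img: torch.Tensor) -> torch.Tensor:
    # Penalizes high-frequency noise between neighboring pixels.
    return (img[:, :, 1:, :] - img[:, :, :-1, :]).abs().mean() + \
           (img[:, :, :, 1:] - img[:, :, :, :-1]).abs().mean()

def combined_loss(output, target, feats_out, feats_target,
                  w_l1=1.0, w_perc=0.1, w_style=10.0, w_tv=1e-4):
    # feats_out / feats_target: lists of feature maps from a pretrained encoder.
    l1 = F.l1_loss(output, target)
    perc = sum(F.l1_loss(fo, ft) for fo, ft in zip(feats_out, feats_target))
    style = sum(F.mse_loss(gram(fo), gram(ft)) for fo, ft in zip(feats_out, feats_target))
    return w_l1 * l1 + w_perc * perc + w_style * style + w_tv * total_variation(output)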

Preprocessing

Preprocessing of Large-Scale Image Data for Photorealism Enhancement
This section details our methodology for preprocessing a large-scale dataset of approximately 117 million game-rendered frames from 9 AAA video games and 1.24 billion real-world images from Mapillary Vistas and Cityscapes, all in 4K resolution. The goal is to pair game frames with real images that exhibit the highest cosine similarity based on structural and visual features, ensuring alignment of fine details like object positions, level of detail and motion blur.

Images and their corresponding style semantic maps were resized to 512 x 512 pixels and corrected to a 24-bit depth (3 channels) if they exceeded this depth. We employ a novel feature-mapped channel-split PSNR matching approach using EfficientNet feature extraction, channel splitting, and dual metric computation of PSNR and cosine similarity. Locality-Sensitive Hashing (LSH) aids in efficiently identifying the top-10 nearest neighbors for each frame. This resulted in a massive dataset of 1.17 billion frame-image pairs and 12.4 billion image-frame pairs. The final selection process involves assessing similarity consistency across channels to ensure accurate pairings. This scalable preprocessing pipeline enables efficient pairing while preserving critical visual details, laying the foundation for subsequent contrastive learning to enhance photorealism in game-rendered frames.
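
At this scale, the nearest-neighbor search cannot compare every frame against every photo directly; random-projection LSH narrows each frame to a small candidate set first. A minimal sketch of the bucketing step (the feature extractor is EfficientNet per the text above; the number of hyperplanes here is an assumption):

import torch

def lsh_bucket_ids(features: torch.Tensor, n_planes: int = 16, seed: int = 0) -> torch.Tensor:
    # Hash each (N, D) feature vector to a bucket id: vectors falling on the
    # same side of every random hyperplane collide, so near-duplicates share buckets.
    g = torch.Generator().manual_seed(seed)
    planes = torch.randn(features.shape[1], n_planes, generator=g)
    bits = (features @ planes > 0).long()        # (N, n_planes) sign bits
    powers = 2 ** torch.arange(n_planes)
    return (bits * powers).sum(dim=1)            # (N,) bucket ids

# Only frame/photo pairs that share a bucket are scored exactly with
# channel-split PSNR and cosine similarity to select the top-10 neighbors.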

Training Hyperparameters

v1

  • Precision: FP32
  • Embedded dimensions: 768
  • Hidden dimensions: 3072
  • Attention Type: Linear Attention
  • Number of attention heads: 16
  • Number of attention layers: 8
  • Number of transformer encoder layers (feed-forward): 8
  • Number of transformer decoder layers (feed-forward): 8
  • Activation function(s): ReLU, GELU
  • Patch Size: 8
  • Swin Window Size: 7
  • Swin Shift Size: 2
  • Style Transfer Module: AdaIN (Adaptive Instance Normalization)

v2

  • Precision: FP32
  • Embedded dimensions: 768
  • Hidden dimensions: 3072
  • Attention Type: Location-Based Multi-Head Attention (Linear Attention)
  • Number of attention heads: 16
  • Number of attention layers: 8
  • Number of transformer encoder layers (feed-forward): 8
  • Number of transformer decoder layers (feed-forward): 8
  • Activation function(s): ReLU, GELU
  • Patch Size: 16
  • Swin Window Size: 7
  • Swin Shift Size: 2
  • Style Transfer Module: AdaIN

v3

  • Precision: FP32, FP16, BF16, INT8
  • Embedding Dimensions: 768
  • Hidden Dimensions: 3072
  • Attention Type: Location-Based Multi-Head Attention (Linear Attention)
  • Number of Attention Heads: 42
  • Number of Attention Layers: 16
  • Number of Transformer Encoder Layers (Feed-Forward): 16
  • Number of Transformer Decoder Layers (Feed-Forward): 16
  • Activation Functions: ReLU, GELU
  • Patch Size: 8
  • Swin Window Size: 7
  • Swin Shift Size: 2
  • Style Transfer Module: Style Adaptive Layer Normalization (SALN)

v4

  • Precision: FP32, FP16, BF16, INT8
  • Embedding Dimensions: 768
  • Hidden Dimensions: 3072
  • Attention Type: Location-Based Multi-Head Attention (Linear Attention) and Cross-Attention (Pretrained Attention-Guided)
  • Number of Attention Heads: 32
  • Number of Attention Layers: 16
  • Number of Transformer Encoder Layers (Feed-Forward): 16
  • Number of Transformer Decoder Layers (Feed-Forward): 16
  • Activation Functions: ReLU, GELU
  • Patch Size: 8
  • Swin Window Size: 7
  • Swin Shift Size: 2
  • Style Transfer Module: Style Adaptive Layer Normalization (SALN)
  • Style Encoder: Custom MultiScale Style Encoder

Speeds, Sizes, Times

Model size: There are currently four major versions of the model, in six variants:

  • v1_1: 224M params
  • v1_2: 200M params
  • v1_3: 93M params
  • v2_1: 2.9M params
  • v3: 252.6M params
  • v4: 651.9M params

Training hardware: Each of the models was trained on 2 x T4 GPUs (multi-GPU training). For this reason, linear attention modules were implemented as ring (distributed) attention during training.

Total Training Compute Throughput: 4.13 TFLOPS

Total Logged Training Time: ~210 hours (total time split across four models including overhead)

Start Time: 09-13-2024

End Time: 09-21-2024

Checkpoint Size:

  • v1_1: 855 MB
  • v1_2: 764 MB
  • v1_3: 355 MB
  • v2_2: 11 MB
  • v3: 1.01 GB
  • v3_fp16: 505 MB
  • v3_bf16: 505 MB
  • v3_int8: 344 MB
  • v4: 2.42 GB
  • v4_fp16: 1.21 GB
  • v4_bf16: 1.21 GB
  • v4_int8: 766 MB
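
As a sanity check, these sizes track the parameter counts at roughly 4 bytes per parameter in FP32 (v3: 252.6M × 4 B ≈ 1.01 GB), halving in FP16/BF16 (252.6M × 2 B ≈ 505 MB). The INT8 checkpoints come in above 1 byte per parameter (v3: 344 MB), presumably because some layers are kept at higher precision.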

Evaluation Data, Metrics & Results

This section covers information on how the model was evaluated at each stage.

Evaluation Data

Evaluation was performed on real-time footage captured from Grand Theft Auto IV, Grand Theft Auto V, Cyberpunk 2077, WatchDogs, Marvel's Spider-Man, Far Cry 6, Red Dead Redemption 2, and Control.

Metrics

  • Peak Signal-to-Noise Ratio (PSNR)
  • Cosine Similarity Score (CSS)
  • L1 Loss
  • Contrastive Loss (CL)
  • Combined loss (L1 loss + PSNR + CSS + CL)
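
For reference, the two similarity metrics can be computed as in the minimal sketch below; max_val assumes images scaled to [0, 1]:

import torch
import torch.nn.functional as F

def psnr(a: torch.Tensor, b: torch.Tensor, max_val: float = 1.0) -> torch.Tensor:
    # Peak Signal-to-Noise Ratio in dB; higher means the images are closer.
    mse = F.mse_loss(a, b)
    return 10 * torch.log10(max_val ** 2 / mse)

def cosine_similarity_score(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # Cosine similarity between flattened (N, C, H, W) images or feature batches.
    return F.cosine_similarity(a.flatten(1), b.flatten(1), dim=1)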

Results

Side-by-side comparisons (images): in-game frames ("In-game") vs. RealFormer outputs ("Ours") for two car scenes and two road scenes.

Environmental Impact

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

  • Hardware Type: 2 x Nvidia T4 16GB GPUs
  • Hours used: 210 (per GPU); 420 (combined)
  • Cloud Provider: Kaggle
  • Compute Region: US
  • Carbon Emitted: 8.82 kg CO2
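
As a back-of-the-envelope check: 2 GPUs × 210 h at the T4's 70 W TDP is about 29.4 kWh, which at a US-average grid intensity of roughly 0.3 kg CO2/kWh comes to ≈ 8.8 kg CO2.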

Technical Specifications

Model Architecture and Objective

RealFormer is a Transformer-based, low-latency, generative style-transfer model that attempts to reconstruct each frame as a more photorealistic image. Its objective is to reach a level of real-world detail that even current video games with exhaustive graphics cannot attain.

Flagship Architecture v4: The v4 model builds on the previous version by introducing Attention-Guided Attention (AGA), which leverages learned attention weights from an optical-flow-field, motion-guided cross-attention preprocessing stage. These pre-learned weights, conditioned into the untrained attention mechanism, improve the model's ability to focus on dynamic regions within consecutive frames. Additionally, v4 incorporates a novel Multi-Scale Style Encoder to enhance feature extraction, while continuing to leverage SALN and LbMhA. This architecture significantly improves temporal coherence and photorealistic enhancement by transferring knowledge from motion-vector-based attention without retraining the learned weights, leading to more efficient training and better performance in capturing real-world dynamics.
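
Consistent with the StyleAdaptiveLayerNorm modules in the printouts below (a LayerNorm followed by a 768 → 1536 projection), SALN can be read as a layer norm whose affine parameters are predicted from a style vector. A sketch, not the exact implementation (the residual 1 + scale form is an assumption):

import torch
import torch.nn as nn

class StyleAdaptiveLayerNorm(nn.Module):
    def __init__(self, dim: int = 768):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.fc = nn.Linear(dim, 2 * dim)   # style vector -> (scale, shift)

    def forward(self, x: torch.Tensor, style: torch.Tensor) -> torch.Tensor:
        # x: (N, tokens, dim); style: (N, dim)
        scale, shift = self.fc(style).chunk(2, dim=-1)
        return self.norm(x) * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)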

RealFormerAGA(
  (patch_embed): DynamicPatchEmbedding(
    (proj): Conv2d(3, 768, kernel_size=(1, 1), stride=(1, 1))
  )
  (encoder_layers): ModuleList(
    (0-15): 16 x TransformerEncoderBlock(
      (attn): CrossAttentionLayer(
        (attn): MultiheadAttention(
          (out_proj): NonDynamicallyQuantizableLinear(in_features=768, out_features=768, bias=True)
        )
        (dropout): Dropout(p=0.1, inplace=False)
      )
      (ff): Sequential(
        (0): Linear(in_features=768, out_features=3072, bias=True)
        (1): ReLU()
        (2): Linear(in_features=3072, out_features=768, bias=True)
      )
      (norm1): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
      (norm2): StyleAdaptiveLayerNorm(
        (norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
        (fc): Linear(in_features=768, out_features=1536, bias=True)
      )
      (dropout): Dropout(p=0.1, inplace=False)
    )
  )
  (decoder_layers): ModuleList(
    (0-15): 16 x TransformerDecoderBlock(
      (attn1): CrossAttentionLayer(
        (attn): MultiheadAttention(
          (out_proj): NonDynamicallyQuantizableLinear(in_features=768, out_features=768, bias=True)
        )
        (dropout): Dropout(p=0.1, inplace=False)
      )
      (attn2): CrossAttentionLayer(
        (attn): MultiheadAttention(
          (out_proj): NonDynamicallyQuantizableLinear(in_features=768, out_features=768, bias=True)
        )
        (dropout): Dropout(p=0.1, inplace=False)
      )
      (ff): Sequential(
        (0): Linear(in_features=768, out_features=3072, bias=True)
        (1): ReLU()
        (2): Linear(in_features=3072, out_features=768, bias=True)
      )
      (norm1): StyleAdaptiveLayerNorm(
        (norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
        (fc): Linear(in_features=768, out_features=1536, bias=True)
      )
      (norm2): StyleAdaptiveLayerNorm(
        (norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
        (fc): Linear(in_features=768, out_features=1536, bias=True)
      )
      (norm3): StyleAdaptiveLayerNorm(
        (norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
        (fc): Linear(in_features=768, out_features=1536, bias=True)
      )
    )
  )
  (swin_layers): ModuleList(
    (0-15): 16 x SwinTransformerBlock(
      (attn): MultiheadAttention(
        (out_proj): NonDynamicallyQuantizableLinear(in_features=768, out_features=768, bias=True)
      )
      (mlp): Sequential(
        (0): Linear(in_features=768, out_features=3072, bias=True)
        (1): GELU(approximate='none')
        (2): Linear(in_features=3072, out_features=768, bias=True)
      )
      (norm1): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
      (norm2): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
    )
  )
  (refinement): RefinementBlock(
    (conv): Conv2d(768, 3, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (bn): BatchNorm2d(3, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU(inplace=True)
  )
  (final_layer): Conv2d(3, 3, kernel_size=(1, 1), stride=(1, 1))
  (style_encoder): MultiScaleStyleEncoder(
    (conv): Conv2d(3, 768, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
    (pool1): AdaptiveAvgPool2d(output_size=16)
    (pool2): AdaptiveAvgPool2d(output_size=8)
    (pool3): AdaptiveAvgPool2d(output_size=4)
    (fc): Linear(in_features=258048, out_features=768, bias=True)
  )
)

v3 Architecture: The v3 model introduces Style Adaptive Layer Normalization (SALN) and Location-Based Multi-Head Attention (LbMhA) to improve feature extraction at lower parameter counts. Its two predecessors attained a similar level of accuracy without the LbMhA layers; with SALN, v3 outperformed them by up to ~13%. The general architecture is as follows:

RealFormerv3(
  (patch_embed): DynamicPatchEmbedding(
    (proj): Conv2d(2048, 768, kernel_size=(1, 1), stride=(1, 1))
  )
  (encoder_layers): ModuleList(
    (0-7): 8 x TransformerEncoderBlock(
      (attn): CrossAttentionLayer(
        (attn): MultiheadAttention(
          (out_proj): NonDynamicallyQuantizableLinear(in_features=768, out_features=768, bias=True)
        )
        (dropout): Dropout(p=0.1, inplace=False)
      )
      (ff): Sequential(
        (0): Linear(in_features=768, out_features=3072, bias=True)
        (1): ReLU()
        (2): Linear(in_features=3072, out_features=768, bias=True)
      )
      (norm1): StyleAdaptiveLayerNorm(
        (norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
        (fc): Linear(in_features=768, out_features=1536, bias=True)
      )
      (norm2): StyleAdaptiveLayerNorm(
        (norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
        (fc): Linear(in_features=768, out_features=1536, bias=True)
      )
      (dropout): Dropout(p=0.1, inplace=False)
    )
  )
  (decoder_layers): ModuleList(
    (0-7): 8 x TransformerDecoderBlock(
      (attn1): CrossAttentionLayer(
        (attn): MultiheadAttention(
          (out_proj): NonDynamicallyQuantizableLinear(in_features=768, out_features=768, bias=True)
        )
        (dropout): Dropout(p=0.1, inplace=False)
      )
      (attn2): CrossAttentionLayer(
        (attn): MultiheadAttention(
          (out_proj): NonDynamicallyQuantizableLinear(in_features=768, out_features=768, bias=True)
        )
        (dropout): Dropout(p=0.1, inplace=False)
      )
      (ff): Sequential(
        (0): Linear(in_features=768, out_features=3072, bias=True)
        (1): ReLU()
        (2): Linear(in_features=3072, out_features=768, bias=True)
      )
      (norm1): StyleAdaptiveLayerNorm(
        (norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
        (fc): Linear(in_features=768, out_features=1536, bias=True)
      )
      (norm2): StyleAdaptiveLayerNorm(
        (norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
        (fc): Linear(in_features=768, out_features=1536, bias=True)
      )
      (norm3): StyleAdaptiveLayerNorm(
        (norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
        (fc): Linear(in_features=768, out_features=1536, bias=True)
      )
    )
  )
  (swin_layers): ModuleList(
    (0-7): 8 x SwinTransformerBlock(
      (attn): MultiheadAttention(
        (out_proj): NonDynamicallyQuantizableLinear(in_features=768, out_features=768, bias=True)
      )
      (mlp): Sequential(
        (0): Linear(in_features=768, out_features=3072, bias=True)
        (1): GELU(approximate='none')
        (2): Linear(in_features=3072, out_features=768, bias=True)
      )
      (norm1): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
      (norm2): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
    )
  )
  (refinement): RefinementBlock(
    (conv): Conv2d(768, 3, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (bn): BatchNorm2d(3, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU(inplace=True)
  )
  (final_layer): Conv2d(3, 2048, kernel_size=(1, 1), stride=(1, 1))
  (style_encoder): Sequential(
    (0): Conv2d(2048, 768, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
    (1): ReLU()
    (2): AdaptiveAvgPool2d(output_size=1)
    (3): Flatten(start_dim=1, end_dim=-1)
    (4): Linear(in_features=768, out_features=768, bias=True)
  )
)

Compute Infrastructure

Hardware

2 x Nvidia T4 16GB GPUs

Software

  • PyTorch
  • torchvision
  • einops
  • numpy
  • Pillow (PIL, Python Imaging Library)
  • matplotlib (for visualization)

Model Card Authors

Alosh Denny

Model Card Contact

[email protected]
