license: apache-2.0
datasets:
- aoxo/photorealism-style-adapter-gta-v
language:
- en
metrics:
- accuracy
pipeline_tag: image-to-image
tags:
- art
Introducing RealFormer - A new approach to Photorealism over Supersampling
RealFormer is a novel image-to-image transformer model designed to enhance photorealism in images, with a particular focus on transforming synthetic images into more realistic ones.
Model Details
Model Description
RealFormer is a Vision Transformer (ViT) based architecture that combines linear attention (an approximation of full self-attention) with Swin Transformer blocks and adaptive instance normalization (AdaIN) for style transfer (a minimal AdaIN sketch follows the list below). It is designed to transform images, specifically those from the video game and animation industry, by enhancing their photorealism or applying style transfer.
- Developed by: Alosh Denny
- Funded by [optional]: EmelinLabs
- Shared by [optional]: EmelinLabs
- Model type: Image-to-Image Transformer
- Language(s) (NLP): None (Pre-trained Generative Image Model)
- License: Apache-2.0
- Finetuned from model [optional]: None; novel architecture, pretrained from scratch (not finetuned)
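To make the AdaIN component concrete, the following is a minimal sketch of adaptive instance normalization in PyTorch, following the standard formulation (align the channel-wise mean and standard deviation of the content features to those of the style features); the function name and shapes are illustrative and are not taken from the RealFormer implementation.

import torch

def adain(content_feat: torch.Tensor, style_feat: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    # Per-channel statistics over the spatial dimensions of (B, C, H, W) feature maps
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True) + eps
    # Normalize the content features, then re-scale and shift them with the style statistics
    return s_std * (content_feat - c_mean) / c_std + s_mean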
Model Sources [optional]
- Dataset: aoxo/photorealism-style-adapter-gta-v (Grand Theft Auto V calibration dataset used for pre-training)
- Repository: Swin Transformer
- Paper: Ze Liu et al. (2021)
Uses
Direct Use
RealFormer is designed for image-to-image translation tasks. It can be used directly for:
- Enhancing photorealism in synthetic images (e.g., transforming video game graphics to more realistic images)
- Style transfer between rendered frames and post-processed frames
- Incorporation into a rendering pipeline alongside DLSS
Downstream Use
Potential downstream uses could include:
- Integration into game engines for real-time graphics enhancement: the AdaIN layers can be finetuned for video-game-specific use cases. In this implementation, the models have been pretrained on a variety of video footage for supersampling, photorealistic style transfer, and reverse photorealism.
- Pre-processing step in computer vision pipelines to improve input image quality: the decoder layers can be frozen for task-specific use cases (see the sketch after this list).
- Photo editing software for synthesized image enhancement
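As a rough illustration of the finetuning options mentioned above, the sketch below keeps only the AdaIN layers trainable and shows, as an alternative, how the decoder could be frozen. It assumes a `model` instantiated as in the "How to Get Started" section; the "adain" name filter and the `model.decoder` attribute are assumptions about how the repository names its modules, not guaranteed to match the actual code.

import torch

# Hypothetical sketch: finetune only the AdaIN layers and freeze everything else.
for name, param in model.named_parameters():
    param.requires_grad = "adain" in name.lower()

# Alternative for task-specific use cases: freeze only the decoder
# (assuming the decoder submodule is exposed as model.decoder).
# for param in model.decoder.parameters():
#     param.requires_grad = False

# Optimize only the parameters that remain trainable
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)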
Out-of-Scope Use
This model is not recommended for:
- Generating or manipulating images in ways that could be deceptive or harmful
- Tasks requiring perfect preservation of specific image details, as the transformation process may alter or introduce artifacts in the image
- Medical or forensic image analysis where any alteration could lead to misinterpretation. Remember, this is a generative image-to-image model, not a classification or detection model.
Bias, Risks, and Limitations
- The model may introduce biases present in the training data, potentially altering images in ways that reflect these biases.
- There's a risk of over-smoothing or losing fine details in the image transformation process.
- The model's performance may vary significantly depending on the input image characteristics and how similar they are to the training data.
- As with any image manipulation tool, there's a potential for misuse in creating deceptive or altered images.
How to Get Started with the Model
Use the code below to get started with the model.
import torch

# ViTImage2Image, load_image and visualize_tensor are assumed to be provided by the accompanying repository code.

# Instantiate the model
model = ViTImage2Image(img_size=512, patch_size=16, emb_dim=768, num_heads=16, num_layers=8, hidden_dim=3072)

# Move model to GPU if available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
model.eval()

# Load an image
input_image = load_image('path_to_your_image.png')
input_image = input_image.to(device)

# Perform inference
with torch.no_grad():
    output = model(input_image, input_image)  # Using input as both content and style for this example

# Visualize or save the output
visualize_tensor(output, "Output Image")
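The snippet above relies on `load_image` and `visualize_tensor` helpers. If they are not shipped with the model code, minimal stand-ins like the following can be used (these are assumptions, not the repository's actual helpers; the 512-pixel resize matches the img_size used above).

import torch
from PIL import Image
from torchvision import transforms
import matplotlib.pyplot as plt

def load_image(path, img_size=512):
    # Load an RGB image, resize it, and return a (1, 3, H, W) float tensor in [0, 1]
    preprocess = transforms.Compose([
        transforms.Resize((img_size, img_size)),
        transforms.ToTensor(),
    ])
    return preprocess(Image.open(path).convert("RGB")).unsqueeze(0)

def visualize_tensor(tensor, title=""):
    # Display a (1, 3, H, W) tensor as an image
    image = tensor.squeeze(0).clamp(0, 1).permute(1, 2, 0).cpu().numpy()
    plt.imshow(image)
    plt.title(title)
    plt.axis("off")
    plt.show()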
Training Details
Training Data
The model was trained on two datasets, including the aoxo/photorealism-style-adapter-gta-v calibration dataset listed above.
[More Information Needed]
Training Procedure
Preprocessing [optional]
[More Information Needed]
Training Hyperparameters
- Training regime: [More Information Needed]
Speeds, Sizes, Times [optional]
[More Information Needed]
Evaluation
Testing Data, Factors & Metrics
Testing Data
[More Information Needed]
Factors
[More Information Needed]
Metrics
[More Information Needed]
Results
[More Information Needed]
Summary
Model Examination [optional]
[More Information Needed]
Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019); a back-of-the-envelope sketch follows the list below.
- Hardware Type: [More Information Needed]
- Hours used: [More Information Needed]
- Cloud Provider: [More Information Needed]
- Compute Region: [More Information Needed]
- Carbon Emitted: [More Information Needed]
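Once the hardware, hours, and compute region above are filled in, a rough estimate follows the usual energy-times-carbon-intensity calculation; all numbers below are hypothetical placeholders, not measured values for this model.

# Placeholder values only; replace them with the actual training configuration.
gpu_power_kw = 0.3        # hypothetical average draw of one GPU, in kW
num_gpus = 1              # hypothetical
hours_used = 100.0        # hypothetical
carbon_intensity = 0.4    # hypothetical grid intensity, in kgCO2eq per kWh

energy_kwh = gpu_power_kw * num_gpus * hours_used
emissions_kg = energy_kwh * carbon_intensity
print(f"Estimated emissions: {emissions_kg:.1f} kgCO2eq")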
Technical Specifications [optional]
Model Architecture and Objective
[More Information Needed]
Compute Infrastructure
[More Information Needed]
Hardware
[More Information Needed]
Software
[More Information Needed]
Citation [optional]
BibTeX:
[More Information Needed]
APA:
[More Information Needed]
Glossary [optional]
[More Information Needed]
More Information [optional]
[More Information Needed]
Model Card Authors [optional]
[More Information Needed]
Model Card Contact
[More Information Needed]