---
license: cc-by-4.0
pipeline_tag: image-to-image
tags:
- pytorch
- super-resolution
---

[Link to Github Release](https://github.com/Phhofm/models/releases/tag/4xHFA2k_ludvae_realplksr_dysample)  

# 4xHFA2k_ludvae_realplksr_dysample  
Scale: 4  
Architecture: [RealPLKSR with Dysample](https://github.com/muslll/neosr/?tab=readme-ov-file#supported-archs)  
Architecture Option: [realplksr](https://github.com/muslll/neosr/blob/master/neosr/archs/realplksr_arch.py)  

Author: Philip Hofmann  
License: CC-BY-4.0  
Purpose: Restoration  
Subject: Anime  
Input Type: Images  
Release Date: 13.07.2024  

Dataset: HFA2k_LUDVAE  
Dataset Size: 10'272  
OTF (on the fly augmentations): No  
Pretrained Model: [4xNomos2_realplksr_dysample](https://github.com/Phhofm/models/releases/tag/4xNomos2_realplksr_dysample)  
Iterations: 165'000  
Batch Size: 12  
GT Size: 256  

Description:  
A DySample RealPLKSR 4x upscaling model for anime single-image super-resolution.  
The dataset was degraded using DM600_LUDVAE for more realistic noise and compression. The downscaling algorithms used were ImageMagick's box, triangle, catrom, lanczos, and mitchell filters. The blurs applied were gaussian, box, and lens blur (using chaiNNer). Some images were further compressed with `-quality 75-92`. Down-up scaling was applied to roughly 10% of the dataset (with a 5 to 15% variation in size). Degradation orders were shuffled to produce as many variations as possible.
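To illustrate the idea, the degradation recipe above could be sketched roughly as follows in Python with Pillow. This is a hypothetical sketch only; the actual dataset was prepared with ImageMagick, chaiNNer, and the LUDVAE model, not with this code, and the step functions here (`downscale_step`, `blur_step`, `jpeg_step`) are illustrative names, not part of any released script.

```python
import io
import random

from PIL import Image, ImageFilter

# A few of the mentioned resamplers that Pillow also offers (box, triangle
# a.k.a. bilinear, bicubic, lanczos); ImageMagick's catrom/mitchell have no
# direct Pillow equivalent.
RESAMPLERS = [Image.BOX, Image.BILINEAR, Image.BICUBIC, Image.LANCZOS]


def downscale_step(img: Image.Image) -> Image.Image:
    """4x downscale with a randomly chosen resampling filter."""
    w, h = img.size
    return img.resize((w // 4, h // 4), random.choice(RESAMPLERS))


def blur_step(img: Image.Image) -> Image.Image:
    """Mild gaussian blur (box/lens blur omitted for brevity)."""
    return img.filter(ImageFilter.GaussianBlur(radius=random.uniform(0.2, 1.0)))


def jpeg_step(img: Image.Image) -> Image.Image:
    """Re-encode as JPEG with quality in the 75-92 range."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=random.randint(75, 92))
    buf.seek(0)
    return Image.open(buf).convert("RGB")


def degrade(img: Image.Image) -> Image.Image:
    """Apply the degradation steps in a shuffled order, as described above."""
    steps = [downscale_step, blur_step, jpeg_step]
    random.shuffle(steps)  # randomize the degradation order
    for step in steps:
        img = step(img)
    return img
```

A real pipeline would additionally apply the down-up step to a random ~10% subset and mix in the LUDVAE noise model; this sketch only shows the shuffled downscale/blur/compression skeleton.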

Examples were inferenced with the [neosr](https://github.com/muslll/neosr) test script and the released .pth file. The test images and the model outputs are included as a zip file in this release, so others can run their own models against the same test images for comparison.

The ONNX conversions are static, since DySample does not allow dynamic conversion. I tested the conversions with [chaiNNer](https://github.com/chaiNNer-org/chaiNNer).

Showcase:
[Slowpics](https://slow.pics/c/FKDZAcyI)

(Click Image to enlarge)
![Example1](https://github.com/user-attachments/assets/0cbbbdc2-9c7e-4cf8-9d1f-692caa55f4d3)
![Example2](https://github.com/user-attachments/assets/0e8272b9-48cc-4a6f-8a0b-b8a0535e09d1)
![Example3](https://github.com/user-attachments/assets/f519f0f2-a3bd-430e-b07b-7c57ea8f5b63)
![Example4](https://github.com/user-attachments/assets/a5feb09a-81ee-4c18-bca9-db9b4bd8bcc1)
![Example5](https://github.com/user-attachments/assets/e71bd965-8d94-48ba-a02f-c4af016f9a9a)
![Example6](https://github.com/user-attachments/assets/3f0ce3bc-842d-40da-8eb7-880e0a0b8e92)
![Example7](https://github.com/user-attachments/assets/f0d55c0a-e1e9-461f-b189-3c3a831abf29)
![Example8](https://github.com/user-attachments/assets/0c38593e-53f5-4455-86a7-9f08ce0d6b78)
![Example9](https://github.com/user-attachments/assets/c5611080-2691-4950-9d96-960f2ce92756)
![Example10](https://github.com/user-attachments/assets/8b7bd232-c08a-4cd7-8a02-91ae35e16954)
![Example11](https://github.com/user-attachments/assets/33d44ddd-8b8c-4af6-971d-6f9942a331da)
![Example12](https://github.com/user-attachments/assets/438b76bc-7e5e-45ab-9be1-1f39d14b7428)
![Example13](https://github.com/user-attachments/assets/9ac1c550-8cc8-44d8-8808-265428aebb6c)
![Example14](https://github.com/user-attachments/assets/1fc30d57-9e55-4c65-a5c5-19d5738307d4)