---
license: cc-by-4.0
---

[Link to Github Release](https://github.com/Phhofm/models/releases/tag/4xHFA2k_ludvae_realplksr_dysample)

# 4xHFA2k_ludvae_realplksr_dysample
Scale: 4
Architecture: [RealPLKSR with Dysample](https://github.com/muslll/neosr/?tab=readme-ov-file#supported-archs)
Architecture Option: [realplksr](https://github.com/muslll/neosr/blob/master/neosr/archs/realplksr_arch.py)

Author: Philip Hofmann
License: CC-BY-4.0
Purpose: Restoration
Subject: Anime
Input Type: Images
Release Date: 13.07.2024

Dataset: HFA2k_LUDVAE
Dataset Size: 10'272
OTF (on the fly augmentations): No
Pretrained Model: [4xNomos2_realplksr_dysample](https://github.com/Phhofm/models/releases/tag/4xNomos2_realplksr_dysample)
Iterations: 165'000
Batch Size: 12
GT Size: 256

Description:
A DySample RealPLKSR 4x upscaling model for anime single-image super-resolution.
The dataset was degraded using DM600_LUDVAE for more realistic noise and compression. Downscaling was done with the ImageMagick box, triangle, catrom, lanczos, and mitchell algorithms. Gaussian, box, and lens blurs were applied (using chaiNNer). Some images were further compressed with `-quality 75-92`. Down-up scaling was applied to roughly 10% of the dataset (with a 5 to 15% variation in size). The order of degradations was shuffled to give as many variations as possible.
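
For illustration, here is a rough sketch of such a shuffled degradation pipeline, written with Pillow. The actual dataset was degraded with ImageMagick and chaiNNer, so the filters and probabilities below are stand-ins rather than the exact settings used.

```python
# Hypothetical sketch only: the released dataset was degraded with
# ImageMagick and chaiNNer; the Pillow filters and probabilities
# below are stand-ins, not the exact settings used.
import random
from PIL import Image, ImageFilter

# Pillow resampling filters standing in for the ImageMagick kernels
# (box, triangle, catrom, lanczos, mitchell).
KERNELS = [
    Image.Resampling.BOX,
    Image.Resampling.BILINEAR,
    Image.Resampling.BICUBIC,
    Image.Resampling.LANCZOS,
]

def degrade(img: Image.Image, scale: int = 4) -> Image.Image:
    """Apply blurs in shuffled order, optional down-up, then a final 1/scale downscale."""
    blurs = [
        lambda im: im.filter(ImageFilter.GaussianBlur(random.uniform(0.2, 1.0))),
        lambda im: im.filter(ImageFilter.BoxBlur(random.uniform(0.2, 1.0))),
    ]
    random.shuffle(blurs)  # shuffle the degradation order for variety
    for blur in blurs:
        if random.random() < 0.5:  # each blur hits only part of the dataset
            img = blur(img)
    if random.random() < 0.10:  # down-up on roughly 10% of images
        w, h = img.size
        f = 1.0 - random.uniform(0.05, 0.15)  # 5 to 15% size variation
        img = img.resize((int(w * f), int(h * f)), random.choice(KERNELS))
        img = img.resize((w, h), random.choice(KERNELS))
    w, h = img.size
    return img.resize((w // scale, h // scale), random.choice(KERNELS))

lr = degrade(Image.open("gt.png").convert("RGB"))
lr.save("lr.jpg", quality=random.randint(75, 92))  # JPEG at quality 75-92
```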

Examples were inferenced with the [neosr](https://github.com/muslll/neosr) test script and the released pth file. I also include the test images as a zip file in this release, together with the model outputs, so others can test their models against these images as well and compare.
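
Outside of neosr, a minimal way to run the released pth file is with [spandrel](https://github.com/chaiNNer-org/spandrel), the model-loading library behind chaiNNer. The sketch below assumes the installed spandrel version supports this architecture; the file names are placeholders.

```python
# Minimal inference sketch using spandrel; assumes your spandrel
# version supports the RealPLKSR + DySample architecture.
import numpy as np
import torch
from PIL import Image
from spandrel import ImageModelDescriptor, ModelLoader

model = ModelLoader().load_from_file("4xHFA2k_ludvae_realplksr_dysample.pth")
assert isinstance(model, ImageModelDescriptor)
model.cuda().eval()

# HWC uint8 -> NCHW float in [0, 1]
img = np.asarray(Image.open("input.png").convert("RGB"), dtype=np.float32) / 255.0
x = torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0).cuda()

with torch.no_grad():
    y = model(x).squeeze(0).permute(1, 2, 0).clamp(0, 1).cpu().numpy()

Image.fromarray((y * 255.0).round().astype(np.uint8)).save("output.png")
```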

The ONNX conversions are static, since DySample does not allow dynamic conversion. I tested the conversions with [chaiNNer](https://github.com/chaiNNer-org/chaiNNer).
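
A static export of this kind can be sketched with plain `torch.onnx.export`: a fixed-size dummy input and no `dynamic_axes` argument produce a static graph. The 256x256 input shape and opset version below are illustrative assumptions, not the settings used for the released files.

```python
# Static ONNX export sketch: omitting dynamic_axes fixes the input
# shape, which is what DySample requires. Shape and opset are assumed.
import torch
from spandrel import ModelLoader

net = ModelLoader().load_from_file("4xHFA2k_ludvae_realplksr_dysample.pth").model
net.cpu().eval()

dummy = torch.randn(1, 3, 256, 256)  # static input shape
torch.onnx.export(
    net,
    dummy,
    "4xHFA2k_ludvae_realplksr_dysample_256.onnx",
    opset_version=17,
    input_names=["input"],
    output_names=["output"],
    # no dynamic_axes: the exported graph only accepts 1x3x256x256 inputs
)
```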

Showcase:
[Slowpics](https://slow.pics/c/FKDZAcyI)

(Click Image to enlarge)
![Example1](https://github.com/user-attachments/assets/0cbbbdc2-9c7e-4cf8-9d1f-692caa55f4d3)
![Example2](https://github.com/user-attachments/assets/0e8272b9-48cc-4a6f-8a0b-b8a0535e09d1)
![Example3](https://github.com/user-attachments/assets/f519f0f2-a3bd-430e-b07b-7c57ea8f5b63)
![Example4](https://github.com/user-attachments/assets/a5feb09a-81ee-4c18-bca9-db9b4bd8bcc1)
![Example5](https://github.com/user-attachments/assets/e71bd965-8d94-48ba-a02f-c4af016f9a9a)
![Example6](https://github.com/user-attachments/assets/3f0ce3bc-842d-40da-8eb7-880e0a0b8e92)
![Example7](https://github.com/user-attachments/assets/f0d55c0a-e1e9-461f-b189-3c3a831abf29)
![Example8](https://github.com/user-attachments/assets/0c38593e-53f5-4455-86a7-9f08ce0d6b78)
![Example9](https://github.com/user-attachments/assets/c5611080-2691-4950-9d96-960f2ce92756)
![Example10](https://github.com/user-attachments/assets/8b7bd232-c08a-4cd7-8a02-91ae35e16954)
![Example11](https://github.com/user-attachments/assets/33d44ddd-8b8c-4af6-971d-6f9942a331da)
![Example12](https://github.com/user-attachments/assets/438b76bc-7e5e-45ab-9be1-1f39d14b7428)
![Example13](https://github.com/user-attachments/assets/9ac1c550-8cc8-44d8-8808-265428aebb6c)
![Example14](https://github.com/user-attachments/assets/1fc30d57-9e55-4c65-a5c5-19d5738307d4)