---
license: mit
---

# [FIDNetV3](https://github.com/CyberAgentAILab/layout-dm/blob/main/src/trainer/trainer/fid/model.py#L123-L180) from [LayoutDM](https://github.com/CyberAgentAILab/layout-dm)

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("shunk031/layoutdm-fidnet-v3-publaynet", trust_remote_code=True)
print(model)
# LayoutDmFIDNetV3(
#   (emb_label): Embedding(5, 256)
#   (fc_bbox): Linear(in_features=4, out_features=256, bias=True)
#   (enc_fc_in): Linear(in_features=512, out_features=256, bias=True)
#   (enc_transformer): TransformerWithToken(
#     (core): TransformerEncoder(
#       (layers): ModuleList(
#         (0-3): 4 x TransformerEncoderLayer(
#           (self_attn): MultiheadAttention(
#             (out_proj): NonDynamicallyQuantizableLinear(in_features=256, out_features=256, bias=True)
#           )
#           (linear1): Linear(in_features=256, out_features=128, bias=True)
#           (dropout): Dropout(p=0.1, inplace=False)
#           (linear2): Linear(in_features=128, out_features=256, bias=True)
#           (norm1): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
#           (norm2): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
#           (dropout1): Dropout(p=0.1, inplace=False)
#           (dropout2): Dropout(p=0.1, inplace=False)
#         )
#       )
#     )
#   )
#   (fc_out_disc): Linear(in_features=256, out_features=1, bias=True)
#   (dec_fc_in): Linear(in_features=512, out_features=256, bias=True)
#   (dec_transformer): TransformerEncoder(
#     (layers): ModuleList(
#       (0-3): 4 x TransformerEncoderLayer(
#         (self_attn): MultiheadAttention(
#           (out_proj): NonDynamicallyQuantizableLinear(in_features=256, out_features=256, bias=True)
#         )
#         (linear1): Linear(in_features=256, out_features=128, bias=True)
#         (dropout): Dropout(p=0.1, inplace=False)
#         (linear2): Linear(in_features=128, out_features=256, bias=True)
#         (norm1): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
#         (norm2): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
#         (dropout1): Dropout(p=0.1, inplace=False)
#         (dropout2): Dropout(p=0.1, inplace=False)
#       )
#     )
#   )
#   (fc_out_cls): Linear(in_features=256, out_features=5, bias=True)
#   (fc_out_bbox): Linear(in_features=256, out_features=4, bias=True)
# )
```
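In LayoutDM, FIDNetV3 serves as the feature extractor for computing a Fréchet Inception Distance (FID) between sets of layouts. As a minimal sketch of that final step (assuming you have already pulled per-layout feature vectors out of the model as NumPy arrays; the `frechet_distance` helper below is illustrative, not part of this repository):

```python
import numpy as np
from scipy import linalg


def frechet_distance(feat_real: np.ndarray, feat_fake: np.ndarray) -> float:
    """Fréchet distance between Gaussians fit to two feature sets.

    Each input is an (n_samples, feature_dim) array of features,
    e.g. extracted with FIDNetV3.
    """
    mu1, mu2 = feat_real.mean(axis=0), feat_fake.mean(axis=0)
    sigma1 = np.cov(feat_real, rowvar=False)
    sigma2 = np.cov(feat_fake, rowvar=False)

    # Matrix square root of the covariance product; small imaginary
    # parts from numerical error are discarded.
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real

    diff = mu1 - mu2
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```

The distance is zero when both feature sets share the same mean and covariance, and grows as the generated layouts' feature distribution drifts from the real one.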