phucdev committed on
Commit
7a18504
1 Parent(s): 7188b32

Remove conceptMention id and unify labels, update README.md

Files changed (2)
  1. README.md +486 -21
  2. mobie.py +23 -25
README.md CHANGED
@@ -196,16 +196,16 @@ dataset_info:
196
  '40': I-trigger
197
  splits:
198
  - name: train
199
- num_bytes: 2023843
200
  num_examples: 788
201
  - name: test
202
- num_bytes: 1232888
203
  num_examples: 484
204
  - name: validation
205
- num_bytes: 395053
206
  num_examples: 152
207
  download_size: 8190212
208
- dataset_size: 3651784
209
  - config_name: el
210
  features:
211
  - name: id
@@ -436,9 +436,9 @@ dataset_info:
436
  - **Repository:** [https://github.com/dfki-nlp/mobie](https://github.com/dfki-nlp/mobie)
437
  - **Paper:** [https://aclanthology.org/2021.konvens-1.22/](https://aclanthology.org/2021.konvens-1.22/)
438
  - **Point of Contact:** See [https://github.com/dfki-nlp/mobie](https://github.com/dfki-nlp/mobie)
439
- - **Size of downloaded dataset files:** 7.8 MB
440
- - **Size of the generated dataset:** 1.9 MB
441
- - **Total amount of disk used:** 9.7 MB
442
 
443
  ### Dataset Summary
444
 
@@ -446,13 +446,18 @@ This script is for loading the MobIE dataset from https://github.com/dfki-nlp/mo
446
 
447
  MobIE is a German-language dataset which is human-annotated with 20 coarse- and fine-grained entity types and entity linking information for geographically linkable entities. The dataset consists of 3,232 social media texts and traffic reports with 91K tokens, and contains 20.5K annotated entities, 13.1K of which are linked to a knowledge base. A subset of the dataset is human-annotated with seven mobility-related, n-ary relation types, while the remaining documents are annotated using a weakly-supervised labeling approach implemented with the Snorkel framework. The dataset combines annotations for NER, EL and RE, and thus can be used for joint and multi-task learning of these fundamental information extraction tasks.
448
 
449
- This version of the dataset loader provides NER tags only. NER tags use the `BIO` tagging scheme.
450
 
451
  For more details see https://github.com/dfki-nlp/mobie and https://aclanthology.org/2021.konvens-1.22/.
452
 
453
  ### Supported Tasks and Leaderboards
454
 
455
- - **Tasks:** Named Entity Recognition
456
  - **Leaderboards:**
457
 
458
  ### Languages
@@ -463,34 +468,494 @@ German
463
 
464
  ### Data Instances
465
 
466
- - **Size of downloaded dataset files:** 7.8 MB
467
- - **Size of the generated dataset:** 1.9 MB
468
- - **Total amount of disk used:** 9.7 MB
 
469
 
470
  An example of 'train' looks as follows.
471
 
472
  ```json
473
  {
474
- 'id': 'http://www.ndr.de/nachrichten/verkehr/index.html#2@2016-05-04T21:02:14.000+02:00',
475
- 'tokens': ['Vorsicht', 'bitte', 'auf', 'der', 'A28', 'Leer', 'Richtung', 'Oldenburg', 'zwischen', 'Zwischenahner', 'Meer', 'und', 'Neuenkruge', 'liegen', 'Gegenstände', '!'],
476
- 'ner_tags': [0, 0, 0, 0, 19, 13, 0, 13, 0, 11, 12, 0, 11, 0, 0, 0]
477
  }
478
  ```
479
 
480
 
481
  ### Data Fields
482
 
483
- The data fields are the same among all splits.
484
 
485
- - `id`: a `string` feature.
486
- - `tokens`: a `list` of `string` features.
487
  - `ner_tags`: a `list` of classification labels, with possible values including `O` (0), `B-date` (1), `I-date` (2), `B-disaster-type` (3), `I-disaster-type` (4), ...
488
 
489
  ### Data Splits
490
 
491
- | | Train | Dev | Test |
492
- | ----- | ------ | ----- | ---- |
493
- | MobIE | 4785 | 1082 | 1210 |
494
 
495
  ## Dataset Creation
496
 
 
196
  '40': I-trigger
197
  splits:
198
  - name: train
199
+ num_bytes: 1869427
200
  num_examples: 788
201
  - name: test
202
+ num_bytes: 1117030
203
  num_examples: 484
204
  - name: validation
205
+ num_bytes: 365928
206
  num_examples: 152
207
  download_size: 8190212
208
+ dataset_size: 3352385
209
  - config_name: el
210
  features:
211
  - name: id
 
436
  - **Repository:** [https://github.com/dfki-nlp/mobie](https://github.com/dfki-nlp/mobie)
437
  - **Paper:** [https://aclanthology.org/2021.konvens-1.22/](https://aclanthology.org/2021.konvens-1.22/)
438
  - **Point of Contact:** See [https://github.com/dfki-nlp/mobie](https://github.com/dfki-nlp/mobie)
439
+ - **Size of downloaded dataset files:** 8.2 MB
440
+ - **Size of the generated dataset:** 1.7 MB
441
+ - **Total amount of disk used:** 9.9 MB
442
 
443
  ### Dataset Summary
444
 
 
446
 
447
  MobIE is a German-language dataset which is human-annotated with 20 coarse- and fine-grained entity types and entity linking information for geographically linkable entities. The dataset consists of 3,232 social media texts and traffic reports with 91K tokens, and contains 20.5K annotated entities, 13.1K of which are linked to a knowledge base. A subset of the dataset is human-annotated with seven mobility-related, n-ary relation types, while the remaining documents are annotated using a weakly-supervised labeling approach implemented with the Snorkel framework. The dataset combines annotations for NER, EL and RE, and thus can be used for joint and multi-task learning of these fundamental information extraction tasks.
448
 
449
+ This version of the dataset loader provides configurations for:
450
+
451
+ - Named Entity Recognition (`ner`): NER tags use the `BIO` tagging scheme
452
+ - Entity Linking (`el`): Entity mentions are linked to an internal knowledge base and OpenStreetMap
453
+ - Relation Extraction (`re`): n-ary Relation Extraction
454
+ - Event Extraction (`ee`): formatted similarly to https://github.com/nlpcl-lab/ace2005-preprocessing?tab=readme-ov-file#format
455
 
456
  For more details see https://github.com/dfki-nlp/mobie and https://aclanthology.org/2021.konvens-1.22/.
457
 
458
  ### Supported Tasks and Leaderboards
459
 
460
+ - **Tasks:** Named Entity Recognition, Entity Linking, n-ary Relation Extraction, Event Extraction
461
  - **Leaderboards:**
462
 
463
  ### Languages
 
468
 
469
  ### Data Instances
470
 
471
+ #### ner
472
+ - **Size of downloaded dataset files:** 8.2 MB
473
+ - **Size of the generated dataset:** 1.7 MB
474
+ - **Total amount of disk used:** 9.9 MB
475
 
476
  An example of 'train' looks as follows.
477
 
478
  ```json
479
  {
480
+ "id": "http://www.ndr.de/nachrichten/verkehr/index.html#2@2016-05-04T21:02:14.000+02:00",
481
+ "tokens": ["Vorsicht", "bitte", "auf", "der", "A28", "Leer", "Richtung", "Oldenburg", "zwischen", "Zwischenahner", "Meer", "und", "Neuenkruge", "liegen", "Gegenstände", "!"],
482
+ "ner_tags": [0, 0, 0, 0, 19, 13, 0, 13, 0, 11, 12, 0, 11, 0, 0, 0]
483
+ }
484
+ ```
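
The integer `ner_tags` decode through the config's `ClassLabel` names. A minimal plain-Python sketch of that mapping, assuming the 20 entity types from the loader's label list (consistent with `'1': B-date` and `'40': I-trigger` in the YAML header):

```python
# Rebuild the BIO label list of the `ner` config: "O" plus a B-/I- pair
# for each of the 20 MobIE entity types (ordering assumed from the loader).
types = [
    "date", "disaster-type", "distance", "duration", "event-cause",
    "location", "location-city", "location-route", "location-stop",
    "location-street", "money", "number", "org-position", "organization",
    "organization-company", "percent", "person", "set", "time", "trigger",
]
bio_labels = ["O"] + [f"{prefix}-{t}" for t in types for prefix in ("B", "I")]

def decode(tags):
    """Map integer ner_tags back to their BIO label strings."""
    return [bio_labels[t] for t in tags]

print(decode([0, 0, 0, 0, 19, 13, 0, 13, 0, 11, 12, 0, 11, 0, 0, 0]))
```

With the `datasets` library loaded, the same mapping should also be available directly from the `ner_tags` feature's `ClassLabel` (`int2str`), so rebuilding it by hand is only needed offline.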
485
+
486
+ #### el
487
+ - **Size of downloaded dataset files:** 8.2 MB
488
+ - **Size of the generated dataset:** 2.1 MB
489
+ - **Total amount of disk used:** 10.3 MB
490
+
491
+ An example of 'train' looks as follows.
492
+
493
+ ```json
494
+ {
495
+ "id": "1108129826844672001",
496
+ "text": "#S4 #RegioNDS #Teilausfall #Mellendorf(23.03)> #Bennemühlen(23.07). Grund: technische Störung an der Strecke. Bitte nutzen Sie #RB38 nach Soltau über Bennemühlen Abfahrt: 23:08 Uhr vom Gleis 2",
497
+ "entity_mentions": [
498
+ {
499
+ "text": "#S4",
500
+ "start": 0,
501
+ "end": 1,
502
+ "char_start": 0,
503
+ "char_end": 3,
504
+ "type": 7,
505
+ "entity_id": "NIL",
506
+ "refids": [
507
+ {
508
+ "key": "spreeDBReferenceId",
509
+ "value": "24007"
510
+ }
511
+ ]
512
+ },
513
+ {
514
+ "text": "#RegioNDS",
515
+ "start": 1,
516
+ "end": 2,
517
+ "char_start": 4,
518
+ "char_end": 13,
519
+ "type": 13,
520
+ "entity_id": "NIL",
521
+ "refids": [
522
+ {
523
+ "key": "spreeDBReferenceId",
524
+ "value": "NIL"
525
+ }
526
+ ]
527
+ },
528
+ {
529
+ "text": "#Teilausfall",
530
+ "start": 2,
531
+ "end": 3,
532
+ "char_start": 14,
533
+ "char_end": 26,
534
+ "type": 19,
535
+ "entity_id": "NIL",
536
+ "refids": [
537
+ {
538
+ "key": "spreeDBReferenceId",
539
+ "value": "NIL"
540
+ }
541
+ ]
542
+ },
543
+ {
544
+ "text": "#Mellendorf",
545
+ "start": 3,
546
+ "end": 4,
547
+ "char_start": 27,
548
+ "char_end": 38,
549
+ "type": 8,
550
+ "entity_id": "NIL",
551
+ "refids": [
552
+ {
553
+ "key": "spreeDBReferenceId",
554
+ "value": "8003957"
555
+ }
556
+ ]
557
+ },
558
+ {
559
+ "text": "23.03",
560
+ "start": 5,
561
+ "end": 6,
562
+ "char_start": 39,
563
+ "char_end": 44,
564
+ "type": 0,
565
+ "entity_id": "NIL",
566
+ "refids": [
567
+ {
568
+ "key": "spreeDBReferenceId",
569
+ "value": "NIL"
570
+ }
571
+ ]
572
+ },
573
+ {
574
+ "text": "#Bennemühlen",
575
+ "start": 8,
576
+ "end": 9,
577
+ "char_start": 47,
578
+ "char_end": 59,
579
+ "type": 6,
580
+ "entity_id": "29589800",
581
+ "refids": [
582
+ {
583
+ "key": "spreeDBReferenceId",
584
+ "value": "29589800"
585
+ },
586
+ {
587
+ "key": "osm_id",
588
+ "value": "29589800"
589
+ }
590
+ ]
591
+ },
592
+ {
593
+ "text": "23.07",
594
+ "start": 10,
595
+ "end": 11,
596
+ "char_start": 60,
597
+ "char_end": 65,
598
+ "type": 0,
599
+ "entity_id": "NIL",
600
+ "refids": [
601
+ {
602
+ "key": "spreeDBReferenceId",
603
+ "value": "NIL"
604
+ }
605
+ ]
606
+ },
607
+ {
608
+ "text": "technische Störung",
609
+ "start": 15,
610
+ "end": 17,
611
+ "char_start": 76,
612
+ "char_end": 94,
613
+ "type": 4,
614
+ "entity_id": "NIL",
615
+ "refids": [
616
+ {
617
+ "key": "spreeDBReferenceId",
618
+ "value": "NIL"
619
+ }
620
+ ]
621
+ },
622
+ {
623
+ "text": "#RB38",
624
+ "start": 24,
625
+ "end": 25,
626
+ "char_start": 128,
627
+ "char_end": 133,
628
+ "type": 7,
629
+ "entity_id": "NIL",
630
+ "refids": [
631
+ {
632
+ "key": "spreeDBReferenceId",
633
+ "value": "23138"
634
+ }
635
+ ]
636
+ },
637
+ {
638
+ "text": "Soltau",
639
+ "start": 26,
640
+ "end": 27,
641
+ "char_start": 139,
642
+ "char_end": 145,
643
+ "type": 6,
644
+ "entity_id": "1809016",
645
+ "refids": [
646
+ {
647
+ "key": "spreeDBReferenceId",
648
+ "value": "-1809016"
649
+ },
650
+ {
651
+ "key": "osm_id",
652
+ "value": "1809016"
653
+ }
654
+ ]
655
+ },
656
+ {
657
+ "text": "Bennemühlen",
658
+ "start": 28,
659
+ "end": 29,
660
+ "char_start": 151,
661
+ "char_end": 162,
662
+ "type": 8,
663
+ "entity_id": "NIL",
664
+ "refids": [
665
+ {
666
+ "key": "spreeDBReferenceId",
667
+ "value": "8000871"
668
+ }
669
+ ]
670
+ },
671
+ {
672
+ "text": "23:08 Uhr",
673
+ "start": 31,
674
+ "end": 33,
675
+ "char_start": 172,
676
+ "char_end": 181,
677
+ "type": 18,
678
+ "entity_id": "NIL",
679
+ "refids": [
680
+ {
681
+ "key": "spreeDBReferenceId",
682
+ "value": "NIL"
683
+ }
684
+ ]
685
+ },
686
+ {
687
+ "text": "2",
688
+ "start": 35,
689
+ "end": 36,
690
+ "char_start": 192,
691
+ "char_end": 193,
692
+ "type": 11,
693
+ "entity_id": "NIL",
694
+ "refids": [
695
+ {
696
+ "key": "spreeDBReferenceId",
697
+ "value": "NIL"
698
+ }
699
+ ]
700
+ }
701
+ ]
702
+ }
703
+ ```
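
A downstream consumer of the `el` config will typically keep only the mentions that were actually linked (`entity_id != "NIL"`). A minimal sketch; the record below is a shortened, hypothetical variant of the instance above, and the assert cross-checks that the character offsets index into `text`:

```python
# Keep only the mentions linked to the knowledge base (entity_id != "NIL");
# the character offsets are cross-checked against the raw document text.
def linked_mentions(example):
    out = []
    for m in example["entity_mentions"]:
        # char_start/char_end index into example["text"]
        assert example["text"][m["char_start"]:m["char_end"]] == m["text"]
        if m["entity_id"] != "NIL":
            out.append((m["text"], m["entity_id"]))
    return out

# Shortened, hypothetical example record
example = {
    "text": "Bitte nutzen Sie #RB38 nach Soltau",
    "entity_mentions": [
        {"text": "#RB38", "char_start": 17, "char_end": 22, "entity_id": "NIL"},
        {"text": "Soltau", "char_start": 28, "char_end": 34, "entity_id": "1809016"},
    ],
}
print(linked_mentions(example))  # prints [('Soltau', '1809016')]
```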
704
+
705
+ #### re
706
+ - **Size of downloaded dataset files:** 8.2 MB
707
+ - **Size of the generated dataset:** 1.7 MB
708
+ - **Total amount of disk used:** 9.9 MB
709
+
710
+ An example of 'train' looks as follows.
711
+
712
+ ```json
713
+ {
714
+ "id": "1111185208647274501_1",
715
+ "text": "RT @SBahn_Stuttgart: 🚨Störung🚨 Derzeit steht eine #S2 Richtung Filderstadt mit einer Türstörung in Stg-Rohr. Es kommt auf den Linien #S1, #…",
716
+ "tokens": ["RT", "@SBahn_Stuttgart", ":", "🚨", "Störung", "🚨 ", "Derzeit", "steht", "eine", "#S2", "Richtung", "Filderstadt", "mit", "einer", "Türstörung", "in", "Stg", "-", "Rohr", ".", "Es", "kommt", "auf", "den", "Linien", "#S1", ",", "#", "…"],
717
+ "entities": [[1, 2], [4, 5], [9, 10], [11, 12], [14, 15], [16, 19], [25, 26]],
718
+ "entity_roles": [0, 1, 2, 0, 0, 0, 0],
719
+ "entity_types": [13, 4, 7, 6, 4, 8, 7],
720
+ "event_type": 5,
721
+ "entity_ids": ["NIL", "NIL", "NIL", "2796535", "NIL", "NIL", "NIL"]
722
+ }
723
+ ```
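
The parallel lists in a `re` example line up index-by-index: `entities[i]` is a `[start, end)` token span whose role and type are `entity_roles[i]` and `entity_types[i]`. A decoding sketch; the role and event-type name lists are the ones defined in `mobie.py`, and the record is a shortened, hypothetical variant of the instance above:

```python
# Decode a `re` example: entities / entity_roles / entity_types are
# parallel lists. Name lists below are taken from mobie.py.
event_roles = [
    "no_arg", "trigger", "location", "delay", "direction", "start_loc",
    "end_loc", "start_date", "end_date", "cause", "jam_length", "route",
]
event_types = [
    "O", "Accident", "CanceledRoute", "CanceledStop", "Delay",
    "Obstruction", "RailReplacementService", "TrafficJam",
]

def decode_relation(example):
    # entities[i] is a [start, end) token span; roles align by index
    spans = [" ".join(example["tokens"][s:e]) for s, e in example["entities"]]
    roles = [event_roles[r] for r in example["entity_roles"]]
    return event_types[example["event_type"]], list(zip(spans, roles))

# Shortened, hypothetical example record
example = {
    "tokens": ["Derzeit", "steht", "eine", "#S2", "mit", "einer", "Türstörung"],
    "entities": [[3, 4], [6, 7]],
    "entity_roles": [2, 1],
    "event_type": 5,
}
print(decode_relation(example))
```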
724
+
725
+ #### ee
726
+ - **Size of downloaded dataset files:** 8.2 MB
727
+ - **Size of the generated dataset:** 3.7 MB
728
+ - **Total amount of disk used:** 11.9 MB
729
+
730
+ An example of 'train' looks as follows.
731
+
732
+ ```json
733
+ {
734
+ "id": "1111185208647274501",
735
+ "text": "RT @SBahn_Stuttgart: 🚨Störung🚨 Derzeit steht eine #S2 Richtung Filderstadt mit einer Türstörung in Stg-Rohr. Es kommt auf den Linien #S1, #…",
736
+ "entity_mentions": [
737
+ {
738
+ "text": "@SBahn_Stuttgart",
739
+ "start": 1,
740
+ "end": 2,
741
+ "char_start": 3,
742
+ "char_end": 19,
743
+ "type": 13,
744
+ "entity_id": "NIL",
745
+ "refids": [
746
+ {
747
+ "key": "spreeDBReferenceId",
748
+ "value": "NIL"
749
+ }
750
+ ]
751
+ },
752
+ {
753
+ "text": "Störung",
754
+ "start": 4,
755
+ "end": 5,
756
+ "char_start": 22,
757
+ "char_end": 29,
758
+ "type": 4,
759
+ "entity_id": "NIL",
760
+ "refids": [
761
+ {
762
+ "key": "spreeDBReferenceId",
763
+ "value": "NIL"
764
+ }
765
+ ]
766
+ },
767
+ {
768
+ "text": "#S2",
769
+ "start": 9,
770
+ "end": 10,
771
+ "char_start": 50,
772
+ "char_end": 53,
773
+ "type": 7,
774
+ "entity_id": "NIL",
775
+ "refids": [
776
+ {
777
+ "key": "spreeDBReferenceId",
778
+ "value": "17171"
779
+ }
780
+ ]
781
+ },
782
+ {
783
+ "text": "Filderstadt",
784
+ "start": 11,
785
+ "end": 12,
786
+ "char_start": 63,
787
+ "char_end": 74,
788
+ "type": 6,
789
+ "entity_id": "2796535",
790
+ "refids": [
791
+ {
792
+ "key": "spreeDBReferenceId",
793
+ "value": "-2796535"
794
+ },
795
+ {
796
+ "key": "osm_id",
797
+ "value": "2796535"
798
+ }
799
+ ]
800
+ },
801
+ {
802
+ "text": "Türstörung",
803
+ "start": 14,
804
+ "end": 15,
805
+ "char_start": 85,
806
+ "char_end": 95,
807
+ "type": 4,
808
+ "entity_id": "NIL",
809
+ "refids": [
810
+ {
811
+ "key": "spreeDBReferenceId",
812
+ "value": "NIL"
813
+ }
814
+ ]
815
+ },
816
+ {
817
+ "text": "Stg-Rohr",
818
+ "start": 16,
819
+ "end": 19,
820
+ "char_start": 99,
821
+ "char_end": 107,
822
+ "type": 8,
823
+ "entity_id": "NIL",
824
+ "refids": [
825
+ {
826
+ "key": "spreeDBReferenceId",
827
+ "value": "NIL"
828
+ }
829
+ ]
830
+ },
831
+ {
832
+ "text": "#S1",
833
+ "start": 25,
834
+ "end": 26,
835
+ "char_start": 133,
836
+ "char_end": 136,
837
+ "type": 7,
838
+ "entity_id": "NIL",
839
+ "refids": [
840
+ {
841
+ "key": "spreeDBReferenceId",
842
+ "value": "16703"
843
+ }
844
+ ]
845
+ }
846
+ ],
847
+ "event_mentions": [
848
+ {
849
+ "id": "r/0f748b57-63ec-4cb9-ab54-e35d29ac44f8",
850
+ "trigger": {
851
+ "text": "Störung",
852
+ "start": 4,
853
+ "end": 5,
854
+ "char_start": 22,
855
+ "char_end": 29
856
+ },
857
+ "arguments": [
858
+ {
859
+ "text": "#S2",
860
+ "start": 9,
861
+ "end": 10,
862
+ "char_start": 50,
863
+ "char_end": 53,
864
+ "role": 1,
865
+ "type": 7
866
+ }
867
+ ],
868
+ "event_type": 5
869
+ }
870
+ ],
871
+ "tokens": ["RT", "@SBahn_Stuttgart", ":", "🚨", "Störung", "🚨 ", "Derzeit", "steht", "eine", "#S2", "Richtung", "Filderstadt", "mit", "einer", "Türstörung", "in", "Stg", "-", "Rohr", ".", "Es", "kommt", "auf", "den", "Linien", "#S1", ",", "#", "…"],
872
+ "pos_tags": ["NN", "NN", "$.", "CARD", "NN", "CARD", "ADV", "VVFIN", "ART", "NN", "NN", "NE", "APPR", "ART", "NN", "APPR", "NE", "$[", "NE", "$.", "PPER", "VVFIN", "APPR", "ART", "NN", "CARD", "$,", "CARD", "$["],
873
+ "lemma": ["rt", "@sbahn_stuttgart", ":", "🚨", "störung", "🚨", "derzeit", "steht", "eine", "#s2", "richtung", "filderstadt", "mit", "einer", "türstörung", "in", "stg", "-", "rohr", ".", "es", "kommt", "auf", "den", "linien", "#s1", ",", "#", "..."],
874
+ "ner_tags": [0, 14, 0, 0, 5, 0, 0, 0, 0, 8, 0, 7, 0, 0, 5, 0, 9, 29, 29, 0, 0, 0, 0, 0, 0, 8, 0, 0, 0]
875
  }
876
  ```
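
Each event mention in an `ee` example carries a trigger plus role-labelled arguments, all with both token and character offsets. A sketch that recovers the surface strings from the character offsets; the record is a shortened, hypothetical variant of the instance above:

```python
# Recover an `ee` event's surface strings from its character offsets.
def summarize_events(example):
    text = example["text"]
    out = []
    for ev in example["event_mentions"]:
        trg = ev["trigger"]
        # offsets index into the raw text
        assert text[trg["char_start"]:trg["char_end"]] == trg["text"]
        args = [text[a["char_start"]:a["char_end"]] for a in ev["arguments"]]
        out.append((trg["text"], args, ev["event_type"]))
    return out

# Shortened, hypothetical example record
example = {
    "text": "Störung: Derzeit steht eine #S2 in Stg-Rohr.",
    "event_mentions": [{
        "trigger": {"text": "Störung", "char_start": 0, "char_end": 7},
        "arguments": [{"char_start": 28, "char_end": 31}],
        "event_type": 5,
    }],
}
print(summarize_events(example))  # prints [('Störung', ['#S2'], 5)]
```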
877
 
878
 
879
  ### Data Fields
880
 
881
+ #### ner
882
+
883
+ - `id`: example identifier, a `string` feature.
884
+ - `tokens`: list of tokens, a `list` of `string` features.
885
+ - `ner_tags`: a `list` of classification labels, with possible values including `O` (0), `B-date` (1), `I-date` (2), `B-disaster-type` (3), `I-disaster-type` (4), ...
886
+
887
+ #### el
888
+
889
+ - `id`: example identifier, a `string` feature.
890
+ - `text`: example text, a `string` feature.
891
+ - `entity_mentions`: a `list` of `struct` features.
892
+ - `text`: a `string` feature.
893
+ - `start`: token offset start, an `int32` feature.
894
+ - `end`: token offset end, an `int32` feature.
895
+ - `char_start`: character offset start, an `int32` feature.
896
+ - `char_end`: character offset end, an `int32` feature.
897
+ - `type`: a classification label, with possible values including `O` (0), `date` (1), `disaster-type` (2), `distance` (3), `duration` (4), `event-cause` (5), ...
898
+ - `entity_id`: OpenStreetMap ID, a `string` feature.
899
+ - `refids`: knowledge base IDs, a `list` of `struct` features.
900
+ - `key`: name of the knowledge base, a `string` feature.
901
+ - `value`: identifier, a `string` feature.
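
The entity `type` ids in the `el`, `re` and `ee` configs share one `ClassLabel`: `O` at index 0 followed by the 20 entity types. A small sketch; the ordering past the documented first entries (`date`, `disaster-type`, `distance`, `duration`, `event-cause`) is assumed from the loader's label list:

```python
# Entity `type` ids: "O" plus the 20 MobIE entity types. The first six
# values (O ... event-cause) are documented above; the remaining order
# is an assumption based on the loader's label list.
entity_type_names = ["O"] + [
    "date", "disaster-type", "distance", "duration", "event-cause",
    "location", "location-city", "location-route", "location-stop",
    "location-street", "money", "number", "org-position", "organization",
    "organization-company", "percent", "person", "set", "time", "trigger",
]

print(entity_type_names[5])  # prints event-cause
```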
902
+
903
+
904
+ #### re
905
+
906
+ - `id`: example identifier, a `string` feature.
907
+ - `text`: example text, a `string` feature.
908
+ - `tokens`: list of tokens, a `list` of `string` features.
909
+ - `entities`: a list of token spans, each a pair of `int32` token offsets (start, end).
910
+ - `entity_roles`: a `list` of classification labels, with possible values including `no_arg` (0), `trigger` (1), `location` (2), `delay` (3), `direction` (4), ...
911
+ - `event_type`: a classification label, with possible values including `O` (0), `Accident` (1), `CanceledRoute` (2), `CanceledStop` (3), `Delay` (4), ...
912
+ - `entity_ids`: list of OpenStreetMap IDs, a `list` of `string` features.
913
+
914
+ #### ee
915
 
916
+ - `id`: example identifier, a `string` feature.
917
+ - `text`: example text, a `string` feature.
918
+ - `entity_mentions`: a `list` of `struct` features.
919
+ - `text`: a `string` feature.
920
+ - `start`: token offset start, an `int32` feature.
921
+ - `end`: token offset end, an `int32` feature.
922
+ - `char_start`: character offset start, an `int32` feature.
923
+ - `char_end`: character offset end, an `int32` feature.
924
+ - `type`: a classification label, with possible values including `O` (0), `date` (1), `disaster-type` (2), `distance` (3), `duration` (4), `event-cause` (5), ...
925
+ - `entity_id`: OpenStreetMap ID, a `string` feature.
926
+ - `refids`: knowledge base IDs, a `list` of `struct` features.
927
+ - `key`: name of the knowledge base, a `string` feature.
928
+ - `value`: identifier, a `string` feature.
929
+ - `event_mentions`: a `list` of `struct` features.
930
+ - `id`: event identifier, a `string` feature.
931
+ - `trigger`: a `struct` feature.
932
+ - `text`: a `string` feature.
933
+ - `start`: token offset start, an `int32` feature.
934
+ - `end`: token offset end, an `int32` feature.
935
+ - `char_start`: character offset start, an `int32` feature.
936
+ - `char_end`: character offset end, an `int32` feature.
937
+ - `arguments`: a `list` of `struct` features.
938
+ - `text`: a `string` feature.
939
+ - `start`: token offset start, an `int32` feature.
940
+ - `end`: token offset end, an `int32` feature.
941
+ - `char_start`: character offset start, an `int32` feature.
942
+ - `char_end`: character offset end, an `int32` feature.
943
+ - `role`: a classification label, with possible values including `no_arg` (0), `trigger` (1), `location` (2), `delay` (3), `direction` (4), ...
944
+ - `type`: a classification label, with possible values including `O` (0), `date` (1), `disaster-type` (2), `distance` (3), `duration` (4), `event-cause` (5), ...
945
+ - `event_type`: a classification label, with possible values including `O` (0), `Accident` (1), `CanceledRoute` (2), `CanceledStop` (3), `Delay` (4), ...
946
+ - `tokens`: list of tokens, a `list` of `string` features.
947
+ - `pos_tags`: list of part-of-speech tags, a `list` of `string` features.
948
+ - `lemma`: list of lemmatized tokens, a `list` of `string` features.
949
  - `ner_tags`: a `list` of classification labels, with possible values including `O` (0), `B-date` (1), `I-date` (2), `B-disaster-type` (3), `I-disaster-type` (4), ...
950
 
951
  ### Data Splits
952
 
953
+ | | Train | Dev | Test |
954
+ |-----|-------|-----|------|
955
+ | NER | 2115 | 494 | 623 |
956
+ | EL | 2115 | 494 | 623 |
957
+ | RE | 1199 | 228 | 609 |
958
+ | EE | 788 | 152 | 484 |
959
 
960
  ## Dataset Creation
961
 
mobie.py CHANGED
@@ -136,7 +136,7 @@ def fix_doc(doc):
136
  class Mobie(datasets.GeneratorBasedBuilder):
137
  """MobIE is a German-language dataset which is human-annotated with 20 coarse- and fine-grained entity types and entity linking information for geographically linkable entities"""
138
 
139
- VERSION = datasets.Version("1.0.0")
140
 
141
  # This is an example of a dataset with multiple configurations.
142
  # If you don't want/need to define several sub-sets in your dataset,
@@ -180,13 +180,23 @@ class Mobie(datasets.GeneratorBasedBuilder):
180
  "time",
181
  "trigger",
182
  ]
183
  entity_mentions = [
184
  {
185
- "id": datasets.Value("string"),
186
  "text": datasets.Value("string"),
187
  "start": datasets.Value("int32"),
188
  "end": datasets.Value("int32"),
189
- "type": datasets.features.ClassLabel(names=labels),
190
  "refids": [
191
  {
192
  "key": datasets.Value("string"),
@@ -224,19 +234,13 @@ class Mobie(datasets.GeneratorBasedBuilder):
224
  "tokens": datasets.Sequence(datasets.Value("string")),
225
  "entities": datasets.Sequence([datasets.Value("int32")]),
226
  "entity_roles": datasets.Sequence(datasets.features.ClassLabel(
227
- names=[
228
- "no_arg", "trigger", "location", "delay", "direction", "start_loc", "end_loc",
229
- "start_date", "end_date", "cause", "jam_length", "route"
230
- ]
231
  )),
232
  "entity_types": datasets.Sequence(datasets.features.ClassLabel(
233
- names=labels
234
  )),
235
  "event_type": datasets.features.ClassLabel(
236
- names=[
237
- "O", "Accident", "CanceledRoute", "CanceledStop", "Delay", "Obstruction",
238
- "RailReplacementService", "TrafficJam"
239
- ]
240
  ),
241
  "entity_ids": datasets.Sequence(datasets.Value("string"))
242
  }
@@ -252,31 +256,27 @@ class Mobie(datasets.GeneratorBasedBuilder):
252
  {
253
  "id": datasets.Value("string"),
254
  "trigger": {
255
- "id": datasets.Value("string"),
256
  "text": datasets.Value("string"),
257
  "start": datasets.Value("int32"),
258
  "end": datasets.Value("int32"),
259
  },
260
  "arguments": [{
261
- "id": datasets.Value("string"),
262
  "text": datasets.Value("string"),
263
  "start": datasets.Value("int32"),
264
  "end": datasets.Value("int32"),
265
  "role": datasets.features.ClassLabel(
266
- names=[
267
- "no_arg", "location", "delay", "direction", "start_loc", "end_loc",
268
- "start_date", "end_date", "cause", "jam_length", "route"
269
- ]
270
  ),
271
  "type": datasets.features.ClassLabel(
272
- names=labels
273
  )
274
  }],
275
  "event_type": datasets.features.ClassLabel(
276
- names=[
277
- "O", "Accident", "CanceledRoute", "CanceledStop", "Delay", "Obstruction",
278
- "RailReplacementService", "TrafficJam"
279
- ]
280
  ),
281
  }
282
  ],
@@ -497,7 +497,6 @@ class Mobie(datasets.GeneratorBasedBuilder):
497
  assert arg_start != -1 and arg_end != -1, f"Could not find token offsets for {arg['conceptMention']['id']}"
498
  arg_text = text[arg_char_start:arg_char_end]
499
  args.append({
500
- "id": arg["conceptMention"]["id"],
501
  "text": arg_text,
502
  "start": arg_start,
503
  "end": arg_end,
@@ -509,7 +508,6 @@ class Mobie(datasets.GeneratorBasedBuilder):
509
  event_mentions.append({
510
  "id": rm["id"],
511
  "trigger": {
512
- "id": trigger["conceptMention"]["id"],
513
  "text": trigger_text,
514
  "start": trigger_start,
515
  "end": trigger_end,
 
136
  class Mobie(datasets.GeneratorBasedBuilder):
137
  """MobIE is a German-language dataset which is human-annotated with 20 coarse- and fine-grained entity types and entity linking information for geographically linkable entities"""
138
 
139
+ VERSION = datasets.Version("1.1.0")
140
 
141
  # This is an example of a dataset with multiple configurations.
142
  # If you don't want/need to define several sub-sets in your dataset,
 
180
  "time",
181
  "trigger",
182
  ]
183
+ event_types = [
184
+ "O", "Accident", "CanceledRoute", "CanceledStop", "Delay", "Obstruction",
185
+ "RailReplacementService", "TrafficJam"
186
+ ]
187
+ event_roles = [
188
+ "no_arg", "trigger", "location", "delay", "direction", "start_loc", "end_loc",
189
+ "start_date", "end_date", "cause", "jam_length", "route"
190
+ ]
191
  entity_mentions = [
192
  {
 
193
  "text": datasets.Value("string"),
194
  "start": datasets.Value("int32"),
195
  "end": datasets.Value("int32"),
196
+ "char_start": datasets.Value("int32"),
197
+ "char_end": datasets.Value("int32"),
198
+ "type": datasets.features.ClassLabel(names=["O"]+labels),
199
+ "entity_id": datasets.Value("string"),
200
  "refids": [
201
  {
202
  "key": datasets.Value("string"),
 
234
  "tokens": datasets.Sequence(datasets.Value("string")),
235
  "entities": datasets.Sequence([datasets.Value("int32")]),
236
  "entity_roles": datasets.Sequence(datasets.features.ClassLabel(
237
+ names=event_roles
238
  )),
239
  "entity_types": datasets.Sequence(datasets.features.ClassLabel(
240
+ names=["O"] + labels
241
  )),
242
  "event_type": datasets.features.ClassLabel(
243
+ names=event_types
244
  ),
245
  "entity_ids": datasets.Sequence(datasets.Value("string"))
246
  }
 
256
  {
257
  "id": datasets.Value("string"),
258
  "trigger": {
 
259
  "text": datasets.Value("string"),
260
  "start": datasets.Value("int32"),
261
  "end": datasets.Value("int32"),
262
+ "char_start": datasets.Value("int32"),
263
+ "char_end": datasets.Value("int32")
264
  },
265
  "arguments": [{
 
266
  "text": datasets.Value("string"),
267
  "start": datasets.Value("int32"),
268
  "end": datasets.Value("int32"),
269
+ "char_start": datasets.Value("int32"),
270
+ "char_end": datasets.Value("int32"),
271
  "role": datasets.features.ClassLabel(
272
+ names=event_roles
273
  ),
274
  "type": datasets.features.ClassLabel(
275
+ names=["O"] + labels
276
  )
277
  }],
278
  "event_type": datasets.features.ClassLabel(
279
+ names=event_types
280
  ),
281
  }
282
  ],
 
497
  assert arg_start != -1 and arg_end != -1, f"Could not find token offsets for {arg['conceptMention']['id']}"
498
  arg_text = text[arg_char_start:arg_char_end]
499
  args.append({
 
500
  "text": arg_text,
501
  "start": arg_start,
502
  "end": arg_end,
 
508
  event_mentions.append({
509
  "id": rm["id"],
510
  "trigger": {
 
511
  "text": trigger_text,
512
  "start": trigger_start,
513
  "end": trigger_end,