PierreMesure committed on
Commit 76e82a6 · 1 Parent(s): 3a358dc

Add data structure information

Files changed (1)
  1. README.md +11 -1
README.md CHANGED
@@ -9,6 +9,11 @@ tags:
- internet-archive
size_categories:
- 1M<n<10M
+ configs:
+ - config_name: archive.org
+   data_files: "archive_org.parquet"
+ - config_name: commoncrawl.org
+   data_files: "common_crawl.parquet"
---

# Myndighetscrawl

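The two configs added above map one parquet file each to a named configuration, so the two sources can be loaded separately. A minimal sketch of what that could look like with the `datasets` library; the repository id used here is an assumption based on the committer and dataset name, not something stated in this diff:

```python
from datasets import load_dataset

# Hypothetical repo id; replace with the dataset's actual Hugging Face id.
REPO = "PierreMesure/myndighetscrawl"

# One config per parquet file, as declared in the YAML block above.
archive_org = load_dataset(REPO, "archive.org", split="train")
common_crawl = load_dataset(REPO, "commoncrawl.org", split="train")

print(archive_org)   # expected columns: timestamp, original, length, archive, filename
print(common_crawl)  # expected columns: Domain, Period, URL
```
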
@@ -23,6 +28,7 @@ Unfortunately, their decentralised nature and the lack of best open data practic
Some agencies consistently publish their reports on a specific "Publications" page, but many do not. Some add metadata to the PDF files, but many do not. In practice, automatically fetching documents from hundreds of websites is too much work, and most organisations do not keep their old reports online.

These publications are also archived by three organisations:
+
- the Swedish National Library (*Kungliga biblioteket*), still partly in paper form
- the Swedish National Archives (*Riksarkivet*), still partly in paper form and often with a delay of several years
- the Government Offices of Sweden (*Regeringskansliet*), in their digital registry (*diariesystem*), for those documents sent to the government

@@ -44,11 +50,13 @@ The data made available is the raw list of links provided by the services.
The data is in [archive_org.parquet](./archive_org.parquet).

The Internet Archive provides:
+
- *timestamp*, the time at which the document was archived
- *original*, the URL at which the document was found
- *length*, the size in bytes of the archived file

The file also contains two fields generated from the data:
+
- *archive*, the URL to access the file on the Wayback Machine, built from *timestamp* and *original* (see the sketch below)
- *filename*, the filename extracted from the URL, where possible

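A minimal sketch of how the *archive* field is presumably composed, assuming the standard Wayback Machine URL pattern `https://web.archive.org/web/<timestamp>/<original>` (the actual generation script is not part of this diff):

```python
import pandas as pd

df = pd.read_parquet("archive_org.parquet")

# Assumed Wayback Machine pattern: https://web.archive.org/web/<timestamp>/<original>
rebuilt = "https://web.archive.org/web/" + df["timestamp"].astype(str) + "/" + df["original"]

# Spot-check the assumption against the provided *archive* column.
print((rebuilt == df["archive"]).mean())
```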
 
@@ -57,6 +65,7 @@ The file also contains two fields generated from the data:
The data is in [common_crawl.parquet](./common_crawl.parquet).

The Common Crawl Index API provides:
+
- *Domain*, the base domain at which the file was fetched
- *Period*, a monthly or yearly reference to when it was fetched
- *URL*, the URL at which the document was found

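A quick way to get a feel for this listing, as a pandas sketch (column names as described above):

```python
import pandas as pd

cc = pd.read_parquet("common_crawl.parquet")

# Which agency domains dominate, and which crawl periods are covered.
print(cc["Domain"].value_counts().head(10))
print(sorted(cc["Period"].unique())[:5])
```
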
@@ -64,10 +73,11 @@ The Common Crawl Index API provides:
## What's next?

The two sources contain 2.16 million and 1.25 million rows respectively. The next steps are to:
+
- identify the documents that hold high value, such as reports
- filter out duplicates (see the sketch below for a first rough pass)
- determine important metadata about the documents, such as their title, register number and potentially other connections with important processes (government missions...)

This can't be done by looking only at the URL of each document; the documents need to be downloaded and analysed. A script to do this using ML and LLMs is in preparation.

- If you want to try and analyse the documents in this dataset, don't hesitate to reach out so we can join forces!
+ If you want to try and analyse the documents in this dataset, don't hesitate to reach out so we can join forces!

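A minimal sketch of what the first two steps could look like on the URLs alone, a rough heuristic rather than the ML/LLM pipeline mentioned above (column names follow the descriptions in this README):

```python
import pandas as pd

ia = pd.read_parquet("archive_org.parquet")
cc = pd.read_parquet("common_crawl.parquet")

# Harmonise both sources on a single URL column before deduplicating.
urls = pd.concat([
    ia[["original"]].rename(columns={"original": "url"}),
    cc[["URL"]].rename(columns={"URL": "url"}),
])

# Drop exact duplicates; many documents were captured by both services.
deduplicated = urls.drop_duplicates(subset="url")

# Very rough "high value" heuristic: PDF files whose name hints at a report.
mask = (deduplicated["url"].str.lower().str.endswith(".pdf", na=False)
        & deduplicated["url"].str.contains(r"rapport|report", case=False, na=False))
reports = deduplicated[mask]

print(len(urls), len(deduplicated), len(reports))
```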
 