GiliGold committed
Commit e285174
1 Parent(s): 4942a11

Update README.md

Files changed (1):
  1. README.md +26 -24
README.md CHANGED
@@ -61,39 +61,41 @@ For more information see: [ArXiv](http://arxiv.org/abs/1111.11111)
 
 
 #### Option 2: ElasticSearch
- IP address, username and password for the es server and [Kibana](http://34.0.64.248:5601/)
+ IP address, username and password for the es server and [Kibana](http://34.0.64.248:5601/):
 ```python
- elastic_ip = '34.0.64.248:9200'
- kibana_ip = '34.0.64.248:5601'
+ elastic_ip = '34.0.64.248:9200'
+ kibana_ip = '34.0.64.248:5601'
 ```
 ```python
- es_username = 'user'
- es_password = 'knesset'
+ es_username = 'user'
+ es_password = 'knesset'
 ```
+ Query dataset:
 ```python
- es = Elasticsearch(f'http://{elastic_ip}',http_auth=(es_username, es_password), timeout=100)
- resp = es.search(index="all_features_sentences", body={"query":{"match_all": {}}})
- print("Got %d Hits:" % resp['hits']['total']['value'])
- for hit in resp['hits']['hits']:
-     print("id: %(sentence_id)s: speaker_name: %(speaker_name)s: sentence_text: %(sentence_text)s" % hit["_source"])
+ from elasticsearch import Elasticsearch
+
+ es = Elasticsearch(f'http://{elastic_ip}',http_auth=(es_username, es_password), timeout=100)
+ resp = es.search(index="all_features_sentences", body={"query":{"match_all": {}}})
+ print("Got %d Hits:" % resp['hits']['total']['value'])
+ for hit in resp['hits']['hits']:
+     print("id: %(sentence_id)s: speaker_name: %(speaker_name)s: sentence_text: %(sentence_text)s" % hit["_source"])
 ```
 
 #### Option 3: Directly from files
 ```python
- import json
-
- path = <path to committee_full_sentences.jsonl> #or any other sentences jsonl file
- with open(path, encoding="utf-8") as file:
-     for line in file:
-         try:
-             sent = json.loads(line)
-         except Exception as e:
-             print(f'couldnt load json line. error:{e}.')
-         sent_id = sent["sentence_id"]
-         sent_text = sent["sentence_text"]
-         speaker_name = sent["speaker_name"]
-         print(f"ID: {sent_id}, speaker name: {speaker_name}, text: {sent_text}")
-
+ import json
+
+ path = <path to committee_full_sentences.jsonl> #or any other sentences jsonl file
+ with open(path, encoding="utf-8") as file:
+     for line in file:
+         try:
+             sent = json.loads(line)
+         except Exception as e:
+             print(f'couldnt load json line. error:{e}.')
+         sent_id = sent["sentence_id"]
+         sent_text = sent["sentence_text"]
+         speaker_name = sent["speaker_name"]
+         print(f"ID: {sent_id}, speaker name: {speaker_name}, text: {sent_text}")
 ```
 
 ## Subsets
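
The match-all query in the Option 2 snippet above can be narrowed with the same client API. A minimal sketch, reusing the connection values from the README snippet; the `match` clause on `speaker_name` and its placeholder value are illustrative assumptions, not part of the committed README:

```python
from elasticsearch import Elasticsearch

# Connection details as given in the README snippet above.
elastic_ip = '34.0.64.248:9200'
es_username = 'user'
es_password = 'knesset'
es = Elasticsearch(f'http://{elastic_ip}', http_auth=(es_username, es_password), timeout=100)

# Filter on one of the fields shown in the example output; the speaker
# name below is a placeholder, not a value taken from the dataset.
resp = es.search(
    index="all_features_sentences",
    body={"query": {"match": {"speaker_name": "<speaker name>"}}, "size": 10},
)
for hit in resp["hits"]["hits"]:
    src = hit["_source"]
    print(f'{src["sentence_id"]}: {src["speaker_name"]}: {src["sentence_text"]}')
```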
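
For Option 3, a sentences JSONL file can also be loaded in a single call with pandas rather than a manual loop. A minimal sketch, assuming a local copy of `committee_full_sentences.jsonl` and the field names used in the README example:

```python
import pandas as pd

# Hypothetical local path; point it at any sentences jsonl file, as in the README example.
path = "committee_full_sentences.jsonl"

# Each line is one JSON object, so lines=True reads the file row by row.
df = pd.read_json(path, lines=True, encoding="utf-8")

# Field names as used in the README snippet above.
print(df[["sentence_id", "speaker_name", "sentence_text"]].head())
```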