---
base_model:
- meta-llama/Llama-3.2-3B-Instruct
datasets:
- snap-stanford/stark
metrics:
- recall
pipeline_tag: question-answering
library_name: transformers
license: mit
---

# MoR
This is the model card for our paper [Mixture of Structural-and-Textual Retrieval over Text-rich Graph Knowledge Bases](https://arxiv.org/pdf/2502.20317).

Code: https://github.com/Yoega/MoR

# Running the Evaluation and Reranking Script

## Installation
To set up the environment, you can install dependencies using Conda or pip:

### Using Conda
```bash
conda env create -f mor_env.yml
conda activate your_env_name  # Replace with the environment name defined in mor_env.yml
```

### Using pip
```bash
pip install -r requirements.txt
```


## Inference
To run the inference script, execute the following command in the terminal:

```bash
bash eval_mor.sh
```

This script will automatically process three datasets using the pre-trained planning graph generator and the pre-trained reranker.
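The metric reported by the evaluation is recall (see the card metadata above). As a rough illustration of what recall@k measures over the reranked candidate lists, here is a minimal, self-contained sketch; `recall_at_k` is our illustrative helper, not a function from the MoR codebase:

```python
def recall_at_k(retrieved, relevant, k=20):
    """Fraction of the relevant items that appear in the top-k retrieved list."""
    if not relevant:
        return 0.0
    top_k = set(retrieved[:k])
    return len(top_k & set(relevant)) / len(relevant)

# Toy example: 2 of the 3 relevant candidates appear in the top-5
print(recall_at_k(["a", "b", "c", "d", "e"], ["a", "e", "z"], k=5))
```

Averaging this quantity over all test queries gives the dataset-level recall@k figure.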

## Training (Train MoR from Scratch)
### Step 1: Train the planning graph generator

```bash
bash train_planner.sh
```

### Step 2: Run mixed traversal to collect candidates (note: the reasoning stage itself requires no training)

```bash
bash run_reasoning.sh
```

### Step 3: Train the reranker

```bash
bash train_reranker.sh
```

## Generating training data for the Planner
We provide code to generate your own training data for fine-tuning the Planner with different LLMs.
#### If you are using the Azure API

```bash
python script.py --model "model_name" \
  --dataset_name "dataset_name" \
  --azure_api_key "your_azure_key" \
  --azure_endpoint "your_azure_endpoint" \
  --azure_api_version "your_azure_version"
```

#### If you are using the OpenAI API

```bash
python script.py --model "model_name" \
  --dataset_name "dataset_name" \
  --openai_api_key "your_openai_key" \
  --openai_endpoint "your_openai_endpoint"
```
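The two invocations above differ only in which credential flags they pass. As an illustrative sketch of that flag layout (this is not the actual `script.py` from the repo, and the actual argument handling may differ), the interface could be modeled with `argparse`, with the Azure and OpenAI credential groups both optional:

```python
import argparse

def build_parser():
    """Hypothetical sketch of the command-line flags shown above."""
    p = argparse.ArgumentParser(description="Generate Planner training data")
    p.add_argument("--model", required=True, help="LLM used to generate data")
    p.add_argument("--dataset_name", required=True, help="Target STaRK dataset")
    # Azure-style credentials (used together)
    p.add_argument("--azure_api_key")
    p.add_argument("--azure_endpoint")
    p.add_argument("--azure_api_version")
    # OpenAI-style credentials (used together)
    p.add_argument("--openai_api_key")
    p.add_argument("--openai_endpoint")
    return p

# Placeholder values, mirroring the OpenAI invocation above
args = build_parser().parse_args(
    ["--model", "model_name", "--dataset_name", "dataset_name",
     "--openai_api_key", "your_openai_key"]
)
print(args.model)
```

Keeping both credential groups optional lets one entry point serve either backend; the script can then dispatch on which group was supplied.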