---
base_model:
  - meta-llama/Llama-3.2-3B-Instruct
datasets:
  - snap-stanford/stark
metrics:
  - recall
pipeline_tag: question-answering
library_name: transformers
license: mit
---

# MoR

This is the model card for our paper *Mixture of Structural-and-Textual Retrieval over Text-rich Graph Knowledge Bases*.

Code: https://github.com/Yoega/MoR

## Running the Evaluation and Reranking Script

### Installation

To set up the environment, you can install dependencies using Conda or pip:

#### Using Conda

```bash
conda env create -f mor_env.yml
conda activate your_env_name  # replace with the environment name defined in mor_env.yml
```

#### Using pip

```bash
pip install -r requirements.txt
```

### Inference

To run the inference script, execute the following command in the terminal:

```bash
bash eval_mor.sh
```

This script automatically evaluates all three STaRK datasets using the pre-trained planning graph generator and the pre-trained reranker.
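If you prefer to load the checkpoint directly with the `transformers` library (listed in this card's metadata) rather than running the shell script, a minimal sketch is shown below. The repository id, prompt, and generation settings are placeholders, not part of the released evaluation pipeline:

```python
# Minimal sketch (assumed usage, not the official evaluation script).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "meta-llama/Llama-3.2-3B-Instruct"  # base model; swap in the fine-tuned MoR checkpoint id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

# Placeholder prompt: ask the planner to produce a planning graph for a query.
prompt = "Generate a planning graph for the query: ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```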

## Training (Train MoR from Scratch)

### Step 1: Train the planning graph generator

```bash
bash train_planner.sh
```

### Step 2: Run mixed traversal to collect candidates (the reasoning stage itself involves no training)

```bash
bash run_reasoning.sh
```

### Step 3: Train the reranker

```bash
bash train_reranker.sh
```
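The card lists recall as the evaluation metric. For reference, a minimal recall@k over a ranked candidate list could be computed as follows; the function name and data layout are illustrative and not taken from the MoR codebase:

```python
def recall_at_k(ranked_candidates, relevant_ids, k=20):
    """Fraction of the relevant items that appear among the top-k ranked candidates."""
    if not relevant_ids:
        return 0.0
    top_k = set(ranked_candidates[:k])
    return len(top_k & set(relevant_ids)) / len(relevant_ids)

# Example: 1 of 2 relevant nodes is retrieved in the top 5 -> recall@5 = 0.5
print(recall_at_k(["n3", "n7", "n1", "n9", "n4"], ["n1", "n8"], k=5))
```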

## Generating Training Data for the Planner

We provide code for generating your own training data to fine-tune the Planner with different LLMs.

### If you are using the Azure API

```bash
python script.py --model "model_name" \
  --dataset_name "dataset_name" \
  --azure_api_key "your_azure_key" \
  --azure_endpoint "your_azure_endpoint" \
  --azure_api_version "your_azure_version"
```

### If you are using the OpenAI API

```bash
python script.py --model "model_name" \
  --dataset_name "dataset_name" \
  --openai_api_key "your_openai_key" \
  --openai_endpoint "your_openai_endpoint"
```
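Internally, a data-generation script like this typically wraps a chat-completions call. Below is a minimal sketch with the official `openai` Python client covering both routes; the model name, prompt, and output handling are assumptions, not taken from the released script:

```python
# Minimal sketch (assumed behavior): ask an LLM to emit a planning graph for one query.
from openai import OpenAI  # for Azure, construct AzureOpenAI(api_key=..., azure_endpoint=..., api_version=...) instead

client = OpenAI(api_key="your_openai_key")

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "You convert natural-language questions into planning graphs."},
        {"role": "user", "content": "Question: Which papers by this author cite that dataset?"},
    ],
)
print(response.choices[0].message.content)  # the generated planning graph, ready to save as training data
```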