
Installation

  1. Install Package

    conda create -n llava python=3.10 -y
    conda activate llava
    pip install --upgrade pip  # enable PEP 660 support
    pip install -e .
    
  2. Install additional packages for training (a quick environment check follows these steps)

    pip install -e ".[train]"
    pip install flash-attn --no-build-isolation
    

Interface

from llava_llama3.serve.cli import chat_llava
from llava_llama3.model.builder import load_pretrained_model
import argparse
import os

# directory of this script, used below to locate the demo image
root_path = os.path.dirname(os.path.abspath(__file__))
print(f'\033[92m{root_path}\033[0m')

parser = argparse.ArgumentParser()
parser.add_argument("--model-path", type=str, default="TobyYang7/MFFM-8B-finma-v8")
parser.add_argument("--device", type=str, default="cuda")
parser.add_argument("--conv-mode", type=str, default="llama_3")
parser.add_argument("--temperature", type=float, default=0)
parser.add_argument("--max-new-tokens", type=int, default=512)
parser.add_argument("--load-8bit", action="store_true")
parser.add_argument("--load-4bit", action="store_true")
args = parser.parse_args()

# load model
tokenizer, llava_model, image_processor, context_len = load_pretrained_model(args.model_path, None, 'llava_llama3', args.load_8bit, args.load_4bit, device=args.device)

print('\033[92mRunning chat\033[0m')
output = chat_llava(args=args,
                    image_file=root_path+'/data/llava_logo.png',
                    text='What is this?',
                    tokenizer=tokenizer,
                    model=llava_model,
                    image_processor=image_processor,
                    context_len=context_len)
print('\033[94m', output, '\033[0m')
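
Because chat_llava takes a single image path per call, the same call can be looped over a directory. The sketch below is a hypothetical extension of the script above: the data/*.png glob, the fixed question, and the results.json filename are placeholders, and the tokenizer, model, and processor objects come from the load above.

# hypothetical batch run reusing the objects loaded above;
# the data/*.png pattern and results.json path are placeholders
import glob
import json
from tqdm import tqdm

results = {}
for image_file in tqdm(sorted(glob.glob(os.path.join(root_path, 'data', '*.png')))):
    results[image_file] = chat_llava(args=args,
                                     image_file=image_file,
                                     text='What is this?',
                                     tokenizer=tokenizer,
                                     model=llava_model,
                                     image_processor=image_processor,
                                     context_len=context_len)

with open('results.json', 'w') as f:
    json.dump(results, f, indent=2)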

If you encounter the error No module named 'llava_llama3', set the PYTHONPATH as follows:

export PYTHONPATH=$PYTHONPATH:${your_dir}/llava_llama3
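
Alternatively, the same fix can be applied inside the script itself, before the llava_llama3 imports. The path below is a placeholder for your checkout directory:

# in-script equivalent of the export above
import sys
sys.path.append('/path/to/llava_llama3')  # replace with your actual directory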