Japanese-LLaMA-3-8B

Japanese-LLaMA-3-8Bγ―εŸΊη›€γƒ’γƒ‡γƒ«γ€γƒ•γƒ«γƒ’γƒ‡γƒ«γ§γ™γ€‚

Meta-Llama-3-8Bγ‚’γƒ™γƒΌγ‚Ήγ«ι–‹η™Ίγ—γΎγ—γŸγ€‚

GMOγ‚€γƒ³γ‚ΏγƒΌγƒγƒƒγƒˆγ‚°γƒ«γƒΌγƒ—ζ ͺεΌδΌšη€ΎγŒι‹ε–Άγ™γ‚‹ConoHa VPS (with NVIDIA H100 GPU)δΈŠγ§ι–‹η™ΊεŠγ³γƒ†γ‚Ήγƒˆγ‚’θ‘Œγ„γΎγ—γŸγ€‚

Format: Safetensors
Model size: 8.03B params
Tensor types: BF16, FP16
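
A minimal inference sketch, assuming the standard Hugging Face transformers API; the repository id matches this model card, but the prompt and generation settings below are illustrative only, not an official usage recipe.

```python
# Minimal inference sketch (assumption: standard transformers API;
# prompt and generation parameters are illustrative).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "owner203/japanese-llama-3-8b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are published in BF16/FP16
    device_map="auto",
)

prompt = "日本の首都は"  # "The capital of Japan is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```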