url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | sub_issues_summary | active_lock_reason | draft | pull_request | body | closed_by | reactions | timeline_url | performed_via_github_app | state_reason | is_pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/ollama/ollama/issues/6604 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6604/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6604/comments | https://api.github.com/repos/ollama/ollama/issues/6604/events | https://github.com/ollama/ollama/issues/6604 | 2,502,317,787 | I_kwDOJ0Z1Ps6VJlbb | 6,604 | Report better error message on old drivers (show detected version and minimum requirement) | {
"login": "my106",
"id": 77132705,
"node_id": "MDQ6VXNlcjc3MTMyNzA1",
"avatar_url": "https://avatars.githubusercontent.com/u/77132705?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/my106",
"html_url": "https://github.com/my106",
"followers_url": "https://api.github.com/users/my106/followers",
"following_url": "https://api.github.com/users/my106/following{/other_user}",
"gists_url": "https://api.github.com/users/my106/gists{/gist_id}",
"starred_url": "https://api.github.com/users/my106/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/my106/subscriptions",
"organizations_url": "https://api.github.com/users/my106/orgs",
"repos_url": "https://api.github.com/users/my106/repos",
"events_url": "https://api.github.com/users/my106/events{/privacy}",
"received_events_url": "https://api.github.com/users/my106/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] | open | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4 | 2024-09-03T09:02:14 | 2024-09-05T15:45:01 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
[log.txt](https://github.com/user-attachments/files/16846086/log.txt)
This is a log
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.9 | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6604/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6604/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/2509 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2509/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2509/comments | https://api.github.com/repos/ollama/ollama/issues/2509/events | https://github.com/ollama/ollama/pull/2509 | 2,135,649,983 | PR_kwDOJ0Z1Ps5m7eiB | 2,509 | handle race condition while setting raw mode in windows | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-02-15T05:02:22 | 2024-02-15T05:28:35 | 2024-02-15T05:28:35 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2509",
"html_url": "https://github.com/ollama/ollama/pull/2509",
"diff_url": "https://github.com/ollama/ollama/pull/2509.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2509.patch",
"merged_at": "2024-02-15T05:28:35"
} | This change handles a race condition in the go routine which handles reading in runes. On Windows "raw mode" (i.e. turning off echo/line/processed input) gets turned off too late which would cause `ReadRune()` to wait until the buffer was full (when it got a new line). This change goes into raw mode faster, but it still needs to happen before any input since we turn it back off again once we start processing output.
| {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2509/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2509/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6601 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6601/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6601/comments | https://api.github.com/repos/ollama/ollama/issues/6601/events | https://github.com/ollama/ollama/issues/6601 | 2,502,017,756 | I_kwDOJ0Z1Ps6VIcLc | 6,601 | when i try to visit https://xxxxxxxx.com/api/chat,it is very slow | {
"login": "lessuit",
"id": 52142616,
"node_id": "MDQ6VXNlcjUyMTQyNjE2",
"avatar_url": "https://avatars.githubusercontent.com/u/52142616?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lessuit",
"html_url": "https://github.com/lessuit",
"followers_url": "https://api.github.com/users/lessuit/followers",
"following_url": "https://api.github.com/users/lessuit/following{/other_user}",
"gists_url": "https://api.github.com/users/lessuit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lessuit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lessuit/subscriptions",
"organizations_url": "https://api.github.com/users/lessuit/orgs",
"repos_url": "https://api.github.com/users/lessuit/repos",
"events_url": "https://api.github.com/users/lessuit/events{/privacy}",
"received_events_url": "https://api.github.com/users/lessuit/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3 | 2024-09-03T06:23:13 | 2024-10-01T07:45:27 | 2024-09-04T01:09:49 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
here are the docker logs:
time=2024-09-03T06:16:48.144Z level=INFO source=sched.go:715 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-f6ac28d6f58ae1522734d1df834e6166e0813bb1919e86aafb4c0551eb4ce2bb gpu=GPU-1a66ac9e-e1b3-db2b-4e52-f54d2f373819 parallel=4 available=84100120576 required="42.2 GiB"
time=2024-09-03T06:16:48.150Z level=INFO source=memory.go:309 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=81 layers.split="" memory.available="[78.3 GiB]" memory.required.full="42.2 GiB" memory.required.partial="42.2 GiB" memory.required.kv="2.5 GiB" memory.required.allocations="[42.2 GiB]" memory.weights.total="39.3 GiB" memory.weights.repeating="38.3 GiB" memory.weights.nonrepeating="974.6 MiB" memory.graph.full="1.0 GiB" memory.graph.partial="1.3 GiB"
time=2024-09-03T06:16:48.163Z level=INFO source=server.go:391 msg="starting llama server" cmd="/tmp/ollama3339020570/runners/cuda_v12/ollama_llama_server --model /root/.ollama/models/blobs/sha256-f6ac28d6f58ae1522734d1df834e6166e0813bb1919e86aafb4c0551eb4ce2bb --ctx-size 8192 --batch-size 512 --embedding --log-disable --n-gpu-layers 81 --parallel 4 --port 46231"
time=2024-09-03T06:16:48.164Z level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2024-09-03T06:16:48.164Z level=INFO source=server.go:591 msg="waiting for llama runner to start responding"
time=2024-09-03T06:16:48.164Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=1 commit="1e6f655" tid="139860331409408" timestamp=1725344208
INFO [main] system info | n_threads=48 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="139860331409408" timestamp=1725344208 total_threads=96
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="95" port="46231" tid="139860331409408" timestamp=1725344208
llama_model_loader: loaded meta data with 21 key-value pairs and 963 tensors from /root/.ollama/models/blobs/sha256-f6ac28d6f58ae1522734d1df834e6166e0813bb1919e86aafb4c0551eb4ce2bb (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.name str = Qwen2-72B-Instruct
llama_model_loader: - kv 2: qwen2.block_count u32 = 80
llama_model_loader: - kv 3: qwen2.context_length u32 = 32768
llama_model_loader: - kv 4: qwen2.embedding_length u32 = 8192
llama_model_loader: - kv 5: qwen2.feed_forward_length u32 = 29568
llama_model_loader: - kv 6: qwen2.attention.head_count u32 = 64
llama_model_loader: - kv 7: qwen2.attention.head_count_kv u32 = 8
llama_model_loader: - kv 8: qwen2.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 9: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 10: general.file_type u32 = 2
llama_model_loader: - kv 11: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 12: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,152064] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 15: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 16: tokenizer.ggml.eos_token_id u32 = 151645
llama_model_loader: - kv 17: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32 = 151643
llama_model_loader: - kv 19: tokenizer.chat_template str = {% for message in messages %}{% if lo...
llama_model_loader: - kv 20: general.quantization_version u32 = 2
llama_model_loader: - type f32: 401 tensors
llama_model_loader: - type q4_0: 561 tensors
llama_model_loader: - type q6_K: 1 tensors
time=2024-09-03T06:16:48.416Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 421
llm_load_vocab: token to piece cache size = 0.9352 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = qwen2
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 152064
llm_load_print_meta: n_merges = 151387
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_layer = 80
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-06
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 29568
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 70B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 72.71 B
llm_load_print_meta: model size = 38.39 GiB (4.54 BPW)
llm_load_print_meta: general.name = Qwen2-72B-Instruct
llm_load_print_meta: BOS token = 151643 '<|endoftext|>'
llm_load_print_meta: EOS token = 151645 '<|im_end|>'
llm_load_print_meta: PAD token = 151643 '<|endoftext|>'
llm_load_print_meta: LF token = 148848 'ÄĬ'
llm_load_print_meta: EOT token = 151645 '<|im_end|>'
llm_load_print_meta: max token length = 256
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA A100-SXM4-80GB, compute capability 8.0, VMM: yes
llm_load_tensors: ggml ctx size = 0.85 MiB
time=2024-09-03T06:16:49.873Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server not responding"
llm_load_tensors: offloading 80 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 81/81 layers to GPU
llm_load_tensors: CPU buffer size = 668.25 MiB
llm_load_tensors: CUDA0 buffer size = 38647.70 MiB
time=2024-09-03T06:16:51.930Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
time=2024-09-03T06:16:57.155Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server not responding"
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 2560.00 MiB
llama_new_context_with_model: KV self size = 2560.00 MiB, K (f16): 1280.00 MiB, V (f16): 1280.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 2.45 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 1104.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 32.01 MiB
llama_new_context_with_model: graph nodes = 2806
llama_new_context_with_model: graph splits = 2
time=2024-09-03T06:16:58.760Z level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server loading model"
INFO [main] model loaded | tid="139860331409408" timestamp=1725344218
time=2024-09-03T06:16:59.013Z level=INFO source=server.go:630 msg="llama runner started in 10.85 seconds"
[GIN] 2024/09/03 - 06:17:08 | 200 | 20.303390852s | 113.57.107.43 | POST "/api/chat"
### OS
Linux, Docker
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.8 and 0.3.9 | {
"login": "lessuit",
"id": 52142616,
"node_id": "MDQ6VXNlcjUyMTQyNjE2",
"avatar_url": "https://avatars.githubusercontent.com/u/52142616?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lessuit",
"html_url": "https://github.com/lessuit",
"followers_url": "https://api.github.com/users/lessuit/followers",
"following_url": "https://api.github.com/users/lessuit/following{/other_user}",
"gists_url": "https://api.github.com/users/lessuit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lessuit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lessuit/subscriptions",
"organizations_url": "https://api.github.com/users/lessuit/orgs",
"repos_url": "https://api.github.com/users/lessuit/repos",
"events_url": "https://api.github.com/users/lessuit/events{/privacy}",
"received_events_url": "https://api.github.com/users/lessuit/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6601/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6601/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/733 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/733/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/733/comments | https://api.github.com/repos/ollama/ollama/issues/733/events | https://github.com/ollama/ollama/issues/733 | 1,931,631,321 | I_kwDOJ0Z1Ps5zIlrZ | 733 | where is everything? | {
"login": "iplayfast",
"id": 751306,
"node_id": "MDQ6VXNlcjc1MTMwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/751306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iplayfast",
"html_url": "https://github.com/iplayfast",
"followers_url": "https://api.github.com/users/iplayfast/followers",
"following_url": "https://api.github.com/users/iplayfast/following{/other_user}",
"gists_url": "https://api.github.com/users/iplayfast/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iplayfast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iplayfast/subscriptions",
"organizations_url": "https://api.github.com/users/iplayfast/orgs",
"repos_url": "https://api.github.com/users/iplayfast/repos",
"events_url": "https://api.github.com/users/iplayfast/events{/privacy}",
"received_events_url": "https://api.github.com/users/iplayfast/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 17 | 2023-10-08T04:06:42 | 2024-12-07T17:22:09 | 2023-12-04T20:30:13 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I don't use Docker so maybe there are obvious answers that I don't know.
I've downloaded the install from the website and it put it in the /usr/local/bin directory. Not my first choice. For testing software I want to put it in a user directory.
It ran fine and pulled the mistral models. The only thing is, I've already got them downloaded. I could not tell it to use my downloads, and
I have no idea where it downloaded to. So now I've got wasted space on my limited hard drive.
I then cloned the repo and built it. It built fine, but I can't actually find what it built.
It says " 58%] Building CXX object common/CMakeFiles/common.dir/common.cpp.o
[ 75%] Built target common
[ 83%] Built target BUILD_INFO
[ 91%] Building CXX object examples/server/CMakeFiles/server.dir/server.cpp.o
[100%] Linking CXX executable ../../bin/server
[100%] Built target server
"
but there isn't any server file in my bin directory. I really hate not knowing where things are going. | {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/733/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/733/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/652 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/652/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/652/comments | https://api.github.com/repos/ollama/ollama/issues/652/events | https://github.com/ollama/ollama/issues/652 | 1,919,987,277 | I_kwDOJ0Z1Ps5ycK5N | 652 | Failed to build `Dockerfile`: `unknown flag -ldflags -w -s` | {
"login": "jamesbraza",
"id": 8990777,
"node_id": "MDQ6VXNlcjg5OTA3Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8990777?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jamesbraza",
"html_url": "https://github.com/jamesbraza",
"followers_url": "https://api.github.com/users/jamesbraza/followers",
"following_url": "https://api.github.com/users/jamesbraza/following{/other_user}",
"gists_url": "https://api.github.com/users/jamesbraza/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jamesbraza/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jamesbraza/subscriptions",
"organizations_url": "https://api.github.com/users/jamesbraza/orgs",
"repos_url": "https://api.github.com/users/jamesbraza/repos",
"events_url": "https://api.github.com/users/jamesbraza/events{/privacy}",
"received_events_url": "https://api.github.com/users/jamesbraza/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 6 | 2023-09-29T22:44:08 | 2023-11-10T05:07:53 | 2023-09-30T20:34:02 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | On an AWS EC2 `g4dn.2xlarge` instance (Ubuntu 22.04.2 LTS) with Ollama [a1b2d95](https://github.com/jmorganca/ollama/tree/a1b2d95f967df6b4f89a6b9ed67263711d59593c), from a fresh `git clone [email protected]:jmorganca/ollama.git`:
```none
> sudo docker buildx build . --file Dockerfile
=> => transferring context: 6.93MB 0.1s
=> [stage-1 2/4] RUN apt-get update && apt-get install -y ca-certificates 42.4s
=> [stage-1 3/4] RUN groupadd ollama && useradd -m -g ollama ollama 37.6s
=> [stage-0 2/7] WORKDIR /go/src/github.com/jmorganca/ollama 21.9s
=> [stage-0 3/7] RUN apt-get update && apt-get install -y git build-essential cmake 10.8s
=> [stage-0 4/7] ADD https://dl.google.com/go/go1.21.1.linux-amd64.tar.gz /tmp/go1.21.1.tar.gz 0.8s
=> [stage-0 5/7] RUN mkdir -p /usr/local && tar xz -C /usr/local </tmp/go1.21.1.tar.gz 3.2s
=> [stage-0 6/7] COPY . . 0.1s
=> ERROR [stage-0 7/7] RUN /usr/local/go/bin/go generate ./... && /usr/local/go/bin/go build . 0.4s
------
> [stage-0 7/7] RUN /usr/local/go/bin/go generate ./... && /usr/local/go/bin/go build .:
0.361 go: parsing $GOFLAGS: unknown flag -ldflags -w -s
------
Dockerfile:14
--------------------
13 | ENV GOARCH=$TARGETARCH
14 | >>> RUN /usr/local/go/bin/go generate ./... \
15 | >>> && /usr/local/go/bin/go build .
16 |
--------------------
ERROR: failed to solve: process "/bin/sh -c /usr/local/go/bin/go generate ./... && /usr/local/go/bin/go build ." did not complete successfully: exit code: 1
```
The error seems to be:
```none
go: parsing $GOFLAGS: unknown flag -ldflags -w -s
```
Do you know if I am missing an `apt` package? Any insight to this error? | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/652/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/652/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6686 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6686/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6686/comments | https://api.github.com/repos/ollama/ollama/issues/6686/events | https://github.com/ollama/ollama/issues/6686 | 2,511,576,566 | I_kwDOJ0Z1Ps6Vs532 | 6,686 | Model shows wrong date. | {
"login": "ghaisasadvait",
"id": 11556546,
"node_id": "MDQ6VXNlcjExNTU2NTQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/11556546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghaisasadvait",
"html_url": "https://github.com/ghaisasadvait",
"followers_url": "https://api.github.com/users/ghaisasadvait/followers",
"following_url": "https://api.github.com/users/ghaisasadvait/following{/other_user}",
"gists_url": "https://api.github.com/users/ghaisasadvait/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghaisasadvait/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghaisasadvait/subscriptions",
"organizations_url": "https://api.github.com/users/ghaisasadvait/orgs",
"repos_url": "https://api.github.com/users/ghaisasadvait/repos",
"events_url": "https://api.github.com/users/ghaisasadvait/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghaisasadvait/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-09-07T10:05:39 | 2024-09-07T22:10:39 | 2024-09-07T22:10:39 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
![image](https://github.com/user-attachments/assets/b1debc6f-31e7-4ceb-83bb-b0d32a97d274)
I also tried using Open WEb UI and turned on the Duck DUck GO search functionality but somehow the model still returns the wrong date:
![image](https://github.com/user-attachments/assets/cb4b4d6b-37bc-4616-ab7b-5b2de88d87a0)
### OS
Windows
### GPU
Nvidia
### CPU
_No response_
### Ollama version
_No response_ | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6686/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6686/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4628 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4628/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4628/comments | https://api.github.com/repos/ollama/ollama/issues/4628/events | https://github.com/ollama/ollama/issues/4628 | 2,316,694,931 | I_kwDOJ0Z1Ps6KFfWT | 4,628 | aya model : error when using the generate endpoint | {
"login": "saurabhkumar",
"id": 3962573,
"node_id": "MDQ6VXNlcjM5NjI1NzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3962573?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saurabhkumar",
"html_url": "https://github.com/saurabhkumar",
"followers_url": "https://api.github.com/users/saurabhkumar/followers",
"following_url": "https://api.github.com/users/saurabhkumar/following{/other_user}",
"gists_url": "https://api.github.com/users/saurabhkumar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saurabhkumar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saurabhkumar/subscriptions",
"organizations_url": "https://api.github.com/users/saurabhkumar/orgs",
"repos_url": "https://api.github.com/users/saurabhkumar/repos",
"events_url": "https://api.github.com/users/saurabhkumar/events{/privacy}",
"received_events_url": "https://api.github.com/users/saurabhkumar/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 2 | 2024-05-25T04:50:03 | 2024-05-27T06:21:57 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I am running the aya model locally. When I just start the model with `ollama run aya` and interact with it in the terminal, it works fine. But when I try using it via Postman on Windows 10 at (http://127.0.0.1:11434/api/generate) with the following data:
```
{
"model": "aya",
"prompt": "Are there multilingual open source large language models available? Respond using JSON",
"format": "json",
"stream": false
}
```
i get the following error:
`{
"error": "error reading llm response: read tcp 127.0.0.1:60589->127.0.0.1:60580: wsarecv: An existing connection was forcibly closed by the remote host."
}`
For other models (e.g. phi3), it works correctly with the same setup.
e.g:
```
{
"model": "phi3:mini",
"prompt": "Are there multilingual open source large language models available? Respond using JSON",
"format": "json",
"stream": false
}'
```
works.
I can see the following logging for aya (the last line shows only the error but not any other details.):
> llm_load_tensors: ggml ctx size = 0.27 MiB
> llm_load_tensors: offloading 32 repeating layers to GPU
> llm_load_tensors: offloading non-repeating layers to GPU
> llm_load_tensors: offloaded 33/33 layers to GPU
> llm_load_tensors: CPU buffer size = 820.31 MiB
> llm_load_tensors: CUDA0 buffer size = 4564.83 MiB
> ........................................................................
> llama_new_context_with_model: n_ctx = 2048
> llama_new_context_with_model: n_batch = 512
> llama_new_context_with_model: n_ubatch = 512
> llama_new_context_with_model: freq_base = 10000.0
> llama_new_context_with_model: freq_scale = 1
> llama_kv_cache_init: CUDA0 KV buffer size = 256.00 MiB
> llama_new_context_with_model: KV self size = 256.00 MiB, K (f16): 128.00 MiB, V (f16): 128.00 MiB
> llama_new_context_with_model: CUDA_Host output buffer size = 0.99 MiB
> llama_new_context_with_model: CUDA0 compute buffer size = 508.00 MiB
> llama_new_context_with_model: CUDA_Host compute buffer size = 12.01 MiB
> llama_new_context_with_model: graph nodes = 968
> llama_new_context_with_model: graph splits = 2
> INFO [wmain] model loaded | tid="9016" timestamp=1716611056
> time=2024-05-25T06:24:16.906+02:00 level=INFO source=server.go:545 msg="llama runner started in 8.06 seconds"
> [GIN] 2024/05/25 - 06:24:19 | 500 | 14.8679199s | 127.0.0.1 | POST "/api/generate"
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
_No response_ | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4628/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4628/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/6309 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6309/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6309/comments | https://api.github.com/repos/ollama/ollama/issues/6309/events | https://github.com/ollama/ollama/pull/6309 | 2,459,531,445 | PR_kwDOJ0Z1Ps54B6ha | 6,309 | Added a go example for mistral's native function calling | {
"login": "Binozo",
"id": 70137898,
"node_id": "MDQ6VXNlcjcwMTM3ODk4",
"avatar_url": "https://avatars.githubusercontent.com/u/70137898?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Binozo",
"html_url": "https://github.com/Binozo",
"followers_url": "https://api.github.com/users/Binozo/followers",
"following_url": "https://api.github.com/users/Binozo/following{/other_user}",
"gists_url": "https://api.github.com/users/Binozo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Binozo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Binozo/subscriptions",
"organizations_url": "https://api.github.com/users/Binozo/orgs",
"repos_url": "https://api.github.com/users/Binozo/repos",
"events_url": "https://api.github.com/users/Binozo/events{/privacy}",
"received_events_url": "https://api.github.com/users/Binozo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-08-11T10:34:22 | 2024-11-21T21:47:40 | 2024-11-21T21:47:40 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6309",
"html_url": "https://github.com/ollama/ollama/pull/6309",
"diff_url": "https://github.com/ollama/ollama/pull/6309.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6309.patch",
"merged_at": null
} | Hello there 🙋
I was playing around a bit with the awesome native function calling feature from the mistral model.
I saw that an example for that was missing so I took a little inspiration from #5284 and built it myself ✌️ | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6309/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6309/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4967 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4967/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4967/comments | https://api.github.com/repos/ollama/ollama/issues/4967/events | https://github.com/ollama/ollama/issues/4967 | 2,344,703,306 | I_kwDOJ0Z1Ps6LwVVK | 4,967 | API Silently Truncates Conversation | {
"login": "flu0r1ne",
"id": 76689481,
"node_id": "MDQ6VXNlcjc2Njg5NDgx",
"avatar_url": "https://avatars.githubusercontent.com/u/76689481?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flu0r1ne",
"html_url": "https://github.com/flu0r1ne",
"followers_url": "https://api.github.com/users/flu0r1ne/followers",
"following_url": "https://api.github.com/users/flu0r1ne/following{/other_user}",
"gists_url": "https://api.github.com/users/flu0r1ne/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flu0r1ne/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flu0r1ne/subscriptions",
"organizations_url": "https://api.github.com/users/flu0r1ne/orgs",
"repos_url": "https://api.github.com/users/flu0r1ne/repos",
"events_url": "https://api.github.com/users/flu0r1ne/events{/privacy}",
"received_events_url": "https://api.github.com/users/flu0r1ne/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 5 | 2024-06-10T19:45:35 | 2024-09-19T12:32:44 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
### Problem Description
The chat API currently truncates conversations without warning when the context limit is exceeded. This behavior can cause significant problems in downstream applications. For instance, if a document is provided for summarization, silently removing part of the document may lead to an incomplete or inaccurate summary. Similarly, for other tasks, critical instructions can be forgotten if they do not appear in the initial prompt.
### Desired Behavior
Ideally, the API should reject requests that do not fit within the context window with a clear error message. For example, the (official) OpenAI API provides the following error if the context limit is exceeded:
```
1 validation error for request body → content ensure this value has at most 32768 characters
```
This enables downstream applications to notify users about the issue, allowing them to decide whether to extend the context, truncate the document, or accept a response based on the truncated prompt.
### Current Behavior and Documentation
If this is the intended behavior for the API, it is currently undocumented and can be considered user-unfriendly. It appears this behavior might be inherited from the `llama.cpp`.
### Example
The issue can be demonstrated with the following example:
```bash
CONTENT=$(python -c 'print("In language modeling, the context window refers to a predefined number of tokens that the model takes into account while predicting the next token within a text sequence. " * 68)')
curl http://localhost:11434/api/chat -d "{ \"model\": \"gemma:2b\", \"messages\": [ { \"role\": \"user\", \"content\": \"$CONTENT\" } ]}"
```
With 68 repetitions of the sequence, the prompt contains `1041` tokens, as determined by `prompt_eval_count`. However, with 67 repetitions, the prompt contains `2027` tokens.
### Similar Issues
#299 - Adjusted the truncation behavior to spare the prompt formatting.
#2653 - Documents a similar issue with the CLI
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.41 | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4967/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4967/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/8206 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8206/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8206/comments | https://api.github.com/repos/ollama/ollama/issues/8206/events | https://github.com/ollama/ollama/issues/8206 | 2,754,349,650 | I_kwDOJ0Z1Ps6kLApS | 8,206 | MultiGPU ROCm | {
"login": "Schwenn2002",
"id": 56083040,
"node_id": "MDQ6VXNlcjU2MDgzMDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/56083040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Schwenn2002",
"html_url": "https://github.com/Schwenn2002",
"followers_url": "https://api.github.com/users/Schwenn2002/followers",
"following_url": "https://api.github.com/users/Schwenn2002/following{/other_user}",
"gists_url": "https://api.github.com/users/Schwenn2002/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Schwenn2002/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Schwenn2002/subscriptions",
"organizations_url": "https://api.github.com/users/Schwenn2002/orgs",
"repos_url": "https://api.github.com/users/Schwenn2002/repos",
"events_url": "https://api.github.com/users/Schwenn2002/events{/privacy}",
"received_events_url": "https://api.github.com/users/Schwenn2002/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
},
{
"id": 6677745918,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgZQ_g",
"url": "https://api.github.com/repos/ollama/ollama/labels/gpu",
"name": "gpu",
"color": "76C49E",
"default": false,
"description": ""
}
] | open | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 10 | 2024-12-21T20:25:20 | 2025-01-07T21:34:28 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
System:
CPU AMD Ryzen 9950X
RAM 128 GB DDR5
GPU0 AMD Radeon PRO W7900
GPU1 AMD Radeon RX7900XTX
ROCM: 6.3.1
Ubuntu 24.04 LTS (currently patched)
ERROR:
I start a large LLM (e.g. Llama-3.3-70B-Instruct-Q4_K_L) with open webui and a context window of 32678 and get the following error in ollama:
Dec 22 03:52:04 ollama ollama[6345]: time=2024-12-22T03:52:04.990Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
Dec 22 03:52:41 ollama ollama[6345]: ROCm error: out of memory
Dec 22 03:52:41 ollama ollama[6345]: llama/ggml-cuda/ggml-cuda.cu:96: ROCm error
=========================================== ROCm System Management Interface ==========================================
==================================================== Concise Info ====================================================
Device Node IDs Temp Power Partitions SCLK MCLK Fan Perf PwrCap VRAM% GPU%
(DID, GUID) (Edge) (Avg) (Mem, Compute, ID)
========================================================== ==========================================================
0 1 0x7448, 54057 59.0°C 56.0W N/A, N/A, 0 651Mhz 96Mhz 20.0% auto 241.0W 0% 82%
1 2 0x744c, 53541 40.0°C 75.0W N/A, N/A, 0 1301Mhz 456Mhz 0% auto 327.0W 0% 39%
========================================================== ==========================================================
================================================ End of ROCm SMI Log ================================================
The VRAM on both cards is never fully utilized and the normal RAM is almost completely free. SWAP is not used.
### OS
Linux
### GPU
AMD
### CPU
AMD
### Ollama version
0.5.4 | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8206/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8206/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/8504 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8504/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8504/comments | https://api.github.com/repos/ollama/ollama/issues/8504/events | https://github.com/ollama/ollama/pull/8504 | 2,799,630,108 | PR_kwDOJ0Z1Ps6IX_Po | 8,504 | add doc to describe setup of vm on proxmox for multiple P40 gpus | {
"login": "fred-vaneijk",
"id": 178751132,
"node_id": "U_kgDOCqeGnA",
"avatar_url": "https://avatars.githubusercontent.com/u/178751132?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fred-vaneijk",
"html_url": "https://github.com/fred-vaneijk",
"followers_url": "https://api.github.com/users/fred-vaneijk/followers",
"following_url": "https://api.github.com/users/fred-vaneijk/following{/other_user}",
"gists_url": "https://api.github.com/users/fred-vaneijk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fred-vaneijk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fred-vaneijk/subscriptions",
"organizations_url": "https://api.github.com/users/fred-vaneijk/orgs",
"repos_url": "https://api.github.com/users/fred-vaneijk/repos",
"events_url": "https://api.github.com/users/fred-vaneijk/events{/privacy}",
"received_events_url": "https://api.github.com/users/fred-vaneijk/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2025-01-20T15:44:15 | 2025-01-20T15:44:15 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8504",
"html_url": "https://github.com/ollama/ollama/pull/8504",
"diff_url": "https://github.com/ollama/ollama/pull/8504.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8504.patch",
"merged_at": null
} | Fixes an issue where one of the compute units would go to 100% CPU use and the system would appear locked up. | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8504/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8504/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7728 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7728/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7728/comments | https://api.github.com/repos/ollama/ollama/issues/7728/events | https://github.com/ollama/ollama/pull/7728 | 2,670,021,679 | PR_kwDOJ0Z1Ps6CTiWc | 7,728 | Improve crash reporting | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-11-18T21:43:38 | 2024-11-20T00:27:00 | 2024-11-20T00:26:58 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7728",
"html_url": "https://github.com/ollama/ollama/pull/7728",
"diff_url": "https://github.com/ollama/ollama/pull/7728.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7728.patch",
"merged_at": "2024-11-20T00:26:58"
} | Many model crashes are masked behind "An existing connection was forcibly closed by the remote host". This captures that common error message and wires in any detected errors from the log.
This also adds the deepseek context shift error to the known errors we capture. | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7728/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7728/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5798 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5798/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5798/comments | https://api.github.com/repos/ollama/ollama/issues/5798/events | https://github.com/ollama/ollama/issues/5798 | 2,419,572,294 | I_kwDOJ0Z1Ps6QN75G | 5,798 | ollama save model to file and ollama load model from file | {
"login": "cruzanstx",
"id": 2927083,
"node_id": "MDQ6VXNlcjI5MjcwODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/2927083?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cruzanstx",
"html_url": "https://github.com/cruzanstx",
"followers_url": "https://api.github.com/users/cruzanstx/followers",
"following_url": "https://api.github.com/users/cruzanstx/following{/other_user}",
"gists_url": "https://api.github.com/users/cruzanstx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cruzanstx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cruzanstx/subscriptions",
"organizations_url": "https://api.github.com/users/cruzanstx/orgs",
"repos_url": "https://api.github.com/users/cruzanstx/repos",
"events_url": "https://api.github.com/users/cruzanstx/events{/privacy}",
"received_events_url": "https://api.github.com/users/cruzanstx/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 1 | 2024-07-19T18:20:09 | 2024-07-26T21:14:40 | 2024-07-26T21:14:40 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | In docker you can save images and load them from tar.gz files. example:
```bash
docker pull ollama/ollama:0.2.5
docker save ollama/ollama:0.2.5 | gzip > ollama_0.2.5.tar.gz
docker load --input ollama_0.2.5.tar.gz
```
Could we have a similar workflow for managing models? For example:
```bash
ollama pull llama3:latest
ollama save llama3:latest | gzip > llama3.latest.tar.gz
ollama load --input llama3.latest.tar.gz
```
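A rough interim workaround is to archive the local model store directly; this is only a sketch, assuming the default store at `~/.ollama/models`, and it moves every model rather than a single one:
```bash
# Sketch: archive and restore the whole local model store (default location assumed)
tar -czf ollama-models.tar.gz -C ~/.ollama models
tar -xzf ollama-models.tar.gz -C ~/.ollama   # run this on the target machine
```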
| {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5798/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5798/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8022 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8022/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8022/comments | https://api.github.com/repos/ollama/ollama/issues/8022/events | https://github.com/ollama/ollama/issues/8022 | 2,729,116,225 | I_kwDOJ0Z1Ps6iqwJB | 8,022 | Error reported when importing a multimodal large model of type huggingface (llava-mistral-7b) | {
"login": "lyp-liu",
"id": 71242087,
"node_id": "MDQ6VXNlcjcxMjQyMDg3",
"avatar_url": "https://avatars.githubusercontent.com/u/71242087?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lyp-liu",
"html_url": "https://github.com/lyp-liu",
"followers_url": "https://api.github.com/users/lyp-liu/followers",
"following_url": "https://api.github.com/users/lyp-liu/following{/other_user}",
"gists_url": "https://api.github.com/users/lyp-liu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lyp-liu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lyp-liu/subscriptions",
"organizations_url": "https://api.github.com/users/lyp-liu/orgs",
"repos_url": "https://api.github.com/users/lyp-liu/repos",
"events_url": "https://api.github.com/users/lyp-liu/events{/privacy}",
"received_events_url": "https://api.github.com/users/lyp-liu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-12-10T06:05:53 | 2024-12-29T20:08:57 | 2024-12-29T20:08:57 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
ollama create mytestsafe -f ./mytest.Modelfile
transferring model data 100%
converting model
Error: unsupported architecture
At present, it seems impossible to convert the Hugging Face llava-mistral model to a GGUF model through llama.cpp.
I want to know what type llava-mistral-7b is in Ollama.
![QQ图片20241210140505](https://github.com/user-attachments/assets/fb435aa0-c266-4b59-9391-8d31dfb5bd56)
### OS
Windows
### GPU
Nvidia
### CPU
_No response_
### Ollama version
0.5.1 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8022/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8022/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3497 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3497/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3497/comments | https://api.github.com/repos/ollama/ollama/issues/3497/events | https://github.com/ollama/ollama/issues/3497 | 2,226,625,418 | I_kwDOJ0Z1Ps6Et5uK | 3,497 | Support AMD Firepro w7100 - gfx802 / gfx805 | {
"login": "ninp0",
"id": 1008583,
"node_id": "MDQ6VXNlcjEwMDg1ODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1008583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ninp0",
"html_url": "https://github.com/ninp0",
"followers_url": "https://api.github.com/users/ninp0/followers",
"following_url": "https://api.github.com/users/ninp0/following{/other_user}",
"gists_url": "https://api.github.com/users/ninp0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ninp0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ninp0/subscriptions",
"organizations_url": "https://api.github.com/users/ninp0/orgs",
"repos_url": "https://api.github.com/users/ninp0/repos",
"events_url": "https://api.github.com/users/ninp0/events{/privacy}",
"received_events_url": "https://api.github.com/users/ninp0/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-04-04T22:28:50 | 2024-04-12T23:30:38 | 2024-04-12T23:30:38 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What are you trying to do?
Start ollama in a manner that will leverage an AMD FirePro W7100 GPU.
### How should we solve this?
Current output when starting ollama via:
```
$ sudo systemctl status ollama
● ollama.service - Ollama Service
Loaded: loaded (/etc/systemd/system/ollama.service; enabled; preset: disabled)
Active: active (running) since Thu 2024-04-04 16:16:50 MDT; 6min ago
Main PID: 4404 (ollama)
Tasks: 21 (limit: 619005)
Memory: 471.9M (peak: 474.8M)
CPU: 9.896s
CGroup: /system.slice/ollama.service
└─4404 /usr/local/bin/ollama serve
Apr 04 16:16:55 ollama[4404]: time=2024-04-04T16:16:55.990-06:00 level=INFO source=gpu.go:265 msg="Searching for GPU management library libcudart.so*"
Apr 04 16:16:55 ollama[4404]: time=2024-04-04T16:16:55.996-06:00 level=INFO source=gpu.go:311 msg="Discovered GPU libraries: [/tmp/ollama2232459459/runners/cuda_v11/libcudart.so.11.0]"
Apr 04 16:16:55 ollama[4404]: time=2024-04-04T16:16:55.996-06:00 level=INFO source=gpu.go:340 msg="Unable to load cudart CUDA management library /tmp/ollama2232459459/runners/cuda_v11/libcudart.so.11.0: cudart init failure: 35"
...
Apr 04 16:16:56 ollama[4404]: time=2024-04-04T16:16:56.002-06:00 level=WARN source=amd_linux.go:53 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers: amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
Apr 04 16:16:56 ollama[4404]: time=2024-04-04T16:16:56.002-06:00 level=INFO source=amd_linux.go:88 msg="detected amdgpu versions [gfx000 gfx000 gfx000]"
Apr 04 16:16:56 ollama[4404]: time=2024-04-04T16:16:56.002-06:00 level=INFO source=amd_linux.go:92 msg="all detected amdgpus are skipped, falling back to CPU"
Apr 04 16:16:56 ollama[4404]: time=2024-04-04T16:16:56.002-06:00 level=INFO source=routes.go:1141 msg="no GPU detected"
```
amdgpu modules are loaded:
```
$ lsmod | grep amdgpu
amdgpu 11554816 0
amdxcp 12288 1 amdgpu
drm_exec 16384 1 amdgpu
drm_buddy 20480 1 amdgpu
gpu_sched 57344 1 amdgpu
video 77824 1 amdgpu
drm_suballoc_helper 12288 1 amdgpu
drm_display_helper 233472 1 amdgpu
drm_ttm_helper 12288 1 amdgpu
i2c_algo_bit 12288 2 mgag200,amdgpu
ttm 106496 2 amdgpu,drm_ttm_helper
drm_kms_helper 270336 5 drm_display_helper,mgag200,amdgpu
drm 802816 14 gpu_sched,drm_kms_helper,drm_exec,drm_suballoc_helper,drm_shmem_helper,drm_display_helper,mgag200,drm_buddy,amdgpu,drm_ttm_helper,ttm,amdxcp
```
```
$ sudo lspci -s 44:00.0 -vv
44:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Tonga PRO GL [FirePro W7100 / Barco MXRT-7600] (prog-if 00 [VGA controller])
Subsystem: BARCO Tonga PRO GL [FirePro W7100 / Barco MXRT-7600]
Control: I/O+ Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Interrupt: pin A routed to IRQ 80
NUMA node: 1
IOMMU group: 8
Region 0: Memory at 3bff0000000 (64-bit, prefetchable) [size=256M]
Region 2: Memory at 3bfefe00000 (64-bit, prefetchable) [size=2M]
Region 4: I/O ports at dc00 [size=256]
Region 5: Memory at d4fc0000 (32-bit, non-prefetchable) [size=256K]
Expansion ROM at d4000000 [disabled] [size=128K]
Capabilities: [48] Vendor Specific Information: Len=08 <?>
Capabilities: [50] Power Management version 3
Flags: PMEClk- DSI- D1+ D2+ AuxCurrent=0mA PME(D0-,D1+,D2+,D3hot+,D3cold+)
Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
Capabilities: [58] Express (v2) Legacy Endpoint, MSI 00
DevCap: MaxPayload 256 bytes, PhantFunc 0, Latency L0s <4us, L1 unlimited
ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset-
DevCtl: CorrErr+ NonFatalErr+ FatalErr+ UnsupReq+
RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
MaxPayload 256 bytes, MaxReadReq 512 bytes
DevSta: CorrErr+ NonFatalErr- FatalErr- UnsupReq+ AuxPwr- TransPend-
LnkCap: Port #0, Speed 8GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
LnkSta: Speed 8GT/s, Width x16
TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
DevCap2: Completion Timeout: Not Supported, TimeoutDis- NROPrPrP- LTR-
10BitTagComp- 10BitTagReq- OBFF Not Supported, ExtFmt+ EETLPPrefix+, MaxEETLPPrefixes 1
EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
FRS-
AtomicOpsCap: 32bit- 64bit- 128bitCAS-
DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- LTR- 10BitTagReq- OBFF Disabled,
AtomicOpsCtl: ReqEn-
LnkCap2: Supported Link Speeds: 2.5-8GT/s, Crosslink- Retimer- 2Retimers- DRS-
LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance- SpeedDis-
Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
Compliance Preset/De-emphasis: -6dB de-emphasis, 0dB preshoot
LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest-
Retimer- 2Retimers- CrosslinkRes: unsupported
Capabilities: [a0] MSI: Enable- Count=1/1 Maskable- 64bit+
Address: 0000000000000000 Data: 0000
Capabilities: [100 v1] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
Capabilities: [150 v2] Advanced Error Reporting
UESta: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
UEMsk: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt+ RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
UESvrt: DLP+ SDES+ TLP+ FCP+ CmpltTO+ CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC+ UnsupReq- ACSViol-
CESta: RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr-
CEMsk: RxErr- BadTLP+ BadDLLP+ Rollover+ Timeout+ AdvNonFatalErr+
AERCap: First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
HeaderLog: 00000000 00000000 00000000 00000000
Capabilities: [200 v1] Physical Resizable BAR
BAR 0: current size: 256MB, supported: 256MB 512MB 1GB 2GB 4GB 8GB
Capabilities: [270 v1] Secondary PCI Express
LnkCtl3: LnkEquIntrruptEn- PerformEqu-
LaneErrStat: 0
Capabilities: [2b0 v1] Address Translation Service (ATS)
ATSCap: Invalidate Queue Depth: 00
ATSCtl: Enable+, Smallest Translation Unit: 00
Capabilities: [2c0 v1] Page Request Interface (PRI)
PRICtl: Enable- Reset-
PRISta: RF- UPRGI- Stopped+
Page Request Capacity: 00000020, Page Request Allocation: 00000000
Capabilities: [2d0 v1] Process Address Space ID (PASID)
PASIDCap: Exec+ Priv+, Max PASID Width: 10
PASIDCtl: Enable- Exec- Priv-
Capabilities: [328 v1] Alternative Routing-ID Interpretation (ARI)
ARICap: MFVC- ACS-, Next Function: 0
ARICtl: MFVC- ACS-, Function Group: 0
Kernel modules: amdgpu
```
### What is the impact of not solving this?
Ollama won't use the GPU.
### Anything else?
I'm hoping this can be addressed by simply mapping to an appropriate version? The error in the `sudo systemctl status ollama` is leading me to that conclusion: `/sys/module/amdgpu/version: no such file or directory`
That said, I did try this which appeared to fail as well:
```
# systemctl stop ollama && HSA_OVERRIDE_GFX_VERSION=10.3.0 ollama serve
time=2024-04-04T16:48:07.577-06:00 level=INFO source=images.go:804 msg="total blobs: 0"
time=2024-04-04T16:48:07.578-06:00 level=INFO source=images.go:811 msg="total unused blobs removed: 0"
time=2024-04-04T16:48:07.578-06:00 level=INFO source=routes.go:1118 msg="Listening on 127.0.0.1:11434 (version 0.1.30)"
time=2024-04-04T16:48:07.579-06:00 level=INFO source=payload_common.go:113 msg="Extracting dynamic libraries to /tmp/ollama718362879/runners ..."
time=2024-04-04T16:48:14.009-06:00 level=INFO source=payload_common.go:140 msg="Dynamic LLM libraries [cpu_avx cuda_v11 cpu cpu_avx2 rocm_v60000]"
time=2024-04-04T16:48:14.009-06:00 level=INFO source=gpu.go:115 msg="Detecting GPU type"
time=2024-04-04T16:48:14.009-06:00 level=INFO source=gpu.go:265 msg="Searching for GPU management library libcudart.so*"
time=2024-04-04T16:48:14.015-06:00 level=INFO source=gpu.go:311 msg="Discovered GPU libraries: [/tmp/ollama718362879/runners/cuda_v11/libcudart.so.11.0]"
time=2024-04-04T16:48:14.016-06:00 level=INFO source=gpu.go:340 msg="Unable to load cudart CUDA management library /tmp/ollama718362879/runners/cuda_v11/libcudart.so.11.0: cudart init failure: 35"
time=2024-04-04T16:48:14.016-06:00 level=INFO source=gpu.go:265 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-04-04T16:48:14.022-06:00 level=INFO source=gpu.go:311 msg="Discovered GPU libraries: []"
time=2024-04-04T16:48:14.022-06:00 level=INFO source=cpu_common.go:15 msg="CPU has AVX"
time=2024-04-04T16:48:14.023-06:00 level=WARN source=amd_linux.go:53 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers: amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-04-04T16:48:14.023-06:00 level=INFO source=amd_linux.go:88 msg="detected amdgpu versions [gfx000 gfx000 gfx000]"
time=2024-04-04T16:48:14.023-06:00 level=INFO source=amd_linux.go:92 msg="all detected amdgpus are skipped, falling back to CPU"
time=2024-04-04T16:48:14.023-06:00 level=INFO source=routes.go:1141 msg="no GPU detected"
``` | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3497/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3497/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7358 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7358/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7358/comments | https://api.github.com/repos/ollama/ollama/issues/7358/events | https://github.com/ollama/ollama/pull/7358 | 2,614,544,504 | PR_kwDOJ0Z1Ps5_7SYk | 7,358 | Fix unicode output on windows with redirect to file | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-10-25T16:15:37 | 2024-10-25T20:43:19 | 2024-10-25T20:43:16 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7358",
"html_url": "https://github.com/ollama/ollama/pull/7358",
"diff_url": "https://github.com/ollama/ollama/pull/7358.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7358.patch",
"merged_at": "2024-10-25T20:43:16"
} | If we're not writing out to a terminal, avoid setting the console mode on Windows, which corrupts the output file.
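For reference, the scenario this targets is plain output redirection (the model name and prompt below are just illustrative):
```bash
# Sketch: redirected output containing non-ASCII text should no longer be corrupted
ollama run llama3 "Say hello in Japanese" > hello.txt
```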
Fixes #3826 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7358/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7358/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4167 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4167/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4167/comments | https://api.github.com/repos/ollama/ollama/issues/4167/events | https://github.com/ollama/ollama/issues/4167 | 2,279,497,276 | I_kwDOJ0Z1Ps6H3l48 | 4,167 | abnormal reply of Llama-3-ChatQA-1.5-8B-GGUF | {
"login": "taozhiyuai",
"id": 146583103,
"node_id": "U_kgDOCLyuPw",
"avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taozhiyuai",
"html_url": "https://github.com/taozhiyuai",
"followers_url": "https://api.github.com/users/taozhiyuai/followers",
"following_url": "https://api.github.com/users/taozhiyuai/following{/other_user}",
"gists_url": "https://api.github.com/users/taozhiyuai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/taozhiyuai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/taozhiyuai/subscriptions",
"organizations_url": "https://api.github.com/users/taozhiyuai/orgs",
"repos_url": "https://api.github.com/users/taozhiyuai/repos",
"events_url": "https://api.github.com/users/taozhiyuai/events{/privacy}",
"received_events_url": "https://api.github.com/users/taozhiyuai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 7 | 2024-05-05T12:05:37 | 2024-05-11T08:46:53 | 2024-05-11T08:46:53 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I imported Llama-3-ChatQA-1.5-8B-GGUF into Ollama, but the replies are abnormal. I have tried many GGUF versions of this model from different usernames on HF.
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.1.32 | {
"login": "taozhiyuai",
"id": 146583103,
"node_id": "U_kgDOCLyuPw",
"avatar_url": "https://avatars.githubusercontent.com/u/146583103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taozhiyuai",
"html_url": "https://github.com/taozhiyuai",
"followers_url": "https://api.github.com/users/taozhiyuai/followers",
"following_url": "https://api.github.com/users/taozhiyuai/following{/other_user}",
"gists_url": "https://api.github.com/users/taozhiyuai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/taozhiyuai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/taozhiyuai/subscriptions",
"organizations_url": "https://api.github.com/users/taozhiyuai/orgs",
"repos_url": "https://api.github.com/users/taozhiyuai/repos",
"events_url": "https://api.github.com/users/taozhiyuai/events{/privacy}",
"received_events_url": "https://api.github.com/users/taozhiyuai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4167/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/4167/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1886 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1886/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1886/comments | https://api.github.com/repos/ollama/ollama/issues/1886/events | https://github.com/ollama/ollama/pull/1886 | 2,073,708,262 | PR_kwDOJ0Z1Ps5jpjoL | 1,886 | feat: load ~/.ollama/.env using godotenv | {
"login": "sublimator",
"id": 525211,
"node_id": "MDQ6VXNlcjUyNTIxMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/525211?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sublimator",
"html_url": "https://github.com/sublimator",
"followers_url": "https://api.github.com/users/sublimator/followers",
"following_url": "https://api.github.com/users/sublimator/following{/other_user}",
"gists_url": "https://api.github.com/users/sublimator/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sublimator/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sublimator/subscriptions",
"organizations_url": "https://api.github.com/users/sublimator/orgs",
"repos_url": "https://api.github.com/users/sublimator/repos",
"events_url": "https://api.github.com/users/sublimator/events{/privacy}",
"received_events_url": "https://api.github.com/users/sublimator/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2024-01-10T06:46:40 | 2024-01-22T23:51:54 | 2024-01-22T21:52:24 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1886",
"html_url": "https://github.com/ollama/ollama/pull/1886",
"diff_url": "https://github.com/ollama/ollama/pull/1886.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1886.patch",
"merged_at": null
} | - More generic than https://github.com/jmorganca/ollama/pull/1846
- Slots in simply with the existing environment variable configuration
- Can be used to set environment variables on macOS, e.g. OLLAMA_ORIGINS, without needing to fiddle around with plist/SIP (a minimal example file is sketched below)
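For illustration, such a file would just hold plain KEY=VALUE lines that godotenv picks up at startup; the values here are assumptions, not part of this PR:
```bash
# Hypothetical ~/.ollama/.env (variable names are existing Ollama settings; values are illustrative)
OLLAMA_ORIGINS=*
OLLAMA_HOST=0.0.0.0:11434
```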
| {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1886/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/1886/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7274 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7274/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7274/comments | https://api.github.com/repos/ollama/ollama/issues/7274/events | https://github.com/ollama/ollama/pull/7274 | 2,599,914,986 | PR_kwDOJ0Z1Ps5_NFdD | 7,274 | Add Environment Variable For Row Split and No KV Offload | {
"login": "heislera763",
"id": 126129661,
"node_id": "U_kgDOB4SV_Q",
"avatar_url": "https://avatars.githubusercontent.com/u/126129661?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/heislera763",
"html_url": "https://github.com/heislera763",
"followers_url": "https://api.github.com/users/heislera763/followers",
"following_url": "https://api.github.com/users/heislera763/following{/other_user}",
"gists_url": "https://api.github.com/users/heislera763/gists{/gist_id}",
"starred_url": "https://api.github.com/users/heislera763/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/heislera763/subscriptions",
"organizations_url": "https://api.github.com/users/heislera763/orgs",
"repos_url": "https://api.github.com/users/heislera763/repos",
"events_url": "https://api.github.com/users/heislera763/events{/privacy}",
"received_events_url": "https://api.github.com/users/heislera763/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-10-20T03:38:41 | 2024-11-26T18:29:07 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7274",
"html_url": "https://github.com/ollama/ollama/pull/7274",
"diff_url": "https://github.com/ollama/ollama/pull/7274.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7274.patch",
"merged_at": null
} | This is https://github.com/ollama/ollama/pull/5527 (add "--split-mode row" parameter), but rebased and cleaned up. I've also added the "--no-kv-offload" parameter, which was discussed as a workaround for all of the KV cache being placed on the first GPU when using split rows. These parameters are activated with the new environment variables OLLAMA_SPLIT_MODE_ROW and OLLAMA_NO_KV_OFFLOAD.
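For reference, enabling the proposed toggles would look something like this (a sketch; the accepted values are whatever the PR wires up, and `1` is assumed here):
```bash
# Sketch: start the server with both new environment variables from this PR enabled
OLLAMA_SPLIT_MODE_ROW=1 OLLAMA_NO_KV_OFFLOAD=1 ollama serve
```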
One of the known caveats to --no-kv-offload is that it will reduce performance (seemingly proportional to the length of your prompt). However, something odd I noticed in my testing was that performance seemed to scale inversely with the "num_thread" parameter, which meant that performance was best with just 1 thread. It's not clear why this is, especially since a local llama.cpp instance I tested against didn't have this behavior. Another issue is that there seems to be some incorrect GPU allocation when you set high context lengths. I think what's happening is that memory.go is trying to determine the optimal GPU layer count, but the formula isn't accounting for the fact that the KV cache isn't going to use any GPU memory. I'm seemingly able to work around this issue by manually setting "num_gpu" to a higher value.
I'm not sure how to go about addressing either of these issues, but I think it's worth adding these environment variables in their current state since they can be very beneficial to multi-GPU users, especially row split. If the jankiness introduced by no-kv-offload is too much, maybe I can cut that from this PR and try to pursue this idea https://github.com/ollama/ollama/pull/5527#issuecomment-2330289372 which seems like it could probably work, although I can't say I'm familiar with how simple/complex memory.go is. | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7274/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7274/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8522 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8522/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8522/comments | https://api.github.com/repos/ollama/ollama/issues/8522/events | https://github.com/ollama/ollama/issues/8522 | 2,802,904,225 | I_kwDOJ0Z1Ps6nEOyh | 8,522 | Ollama throws 'does not support generate' error on running embedding models on windows | {
"login": "tanmaysharma2001",
"id": 78191188,
"node_id": "MDQ6VXNlcjc4MTkxMTg4",
"avatar_url": "https://avatars.githubusercontent.com/u/78191188?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tanmaysharma2001",
"html_url": "https://github.com/tanmaysharma2001",
"followers_url": "https://api.github.com/users/tanmaysharma2001/followers",
"following_url": "https://api.github.com/users/tanmaysharma2001/following{/other_user}",
"gists_url": "https://api.github.com/users/tanmaysharma2001/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tanmaysharma2001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tanmaysharma2001/subscriptions",
"organizations_url": "https://api.github.com/users/tanmaysharma2001/orgs",
"repos_url": "https://api.github.com/users/tanmaysharma2001/repos",
"events_url": "https://api.github.com/users/tanmaysharma2001/events{/privacy}",
"received_events_url": "https://api.github.com/users/tanmaysharma2001/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 1 | 2025-01-21T22:03:27 | 2025-01-21T22:38:30 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hi,
As the title says, when using the ollama CLI and trying to run any of the embedding models listed on the website (in this case nomic-embed-text), it throws the following error:
```
Error: "nomic-embed-text" does not support generate
```
To reproduce:
1. Install Ollama on Windows from the website.
2. Run:
```
ollama run nomic-embed-text
```
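As a point of reference, embedding models are normally exercised through the embeddings endpoint rather than `ollama run`; a minimal check (sketched here, assuming the default local server) would be:
```bash
# Sketch: query the embedding model via the API instead of `ollama run`
curl -s http://localhost:11434/api/embeddings -d '{
  "model": "nomic-embed-text",
  "prompt": "hello world"
}'
```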
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.5.7 | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8522/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8522/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/1568 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1568/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1568/comments | https://api.github.com/repos/ollama/ollama/issues/1568/events | https://github.com/ollama/ollama/issues/1568 | 2,044,977,596 | I_kwDOJ0Z1Ps554-G8 | 1,568 | ollama in Powershell using WSL2 | {
"login": "BananaAcid",
"id": 1894723,
"node_id": "MDQ6VXNlcjE4OTQ3MjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1894723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BananaAcid",
"html_url": "https://github.com/BananaAcid",
"followers_url": "https://api.github.com/users/BananaAcid/followers",
"following_url": "https://api.github.com/users/BananaAcid/following{/other_user}",
"gists_url": "https://api.github.com/users/BananaAcid/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BananaAcid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BananaAcid/subscriptions",
"organizations_url": "https://api.github.com/users/BananaAcid/orgs",
"repos_url": "https://api.github.com/users/BananaAcid/repos",
"events_url": "https://api.github.com/users/BananaAcid/events{/privacy}",
"received_events_url": "https://api.github.com/users/BananaAcid/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396191,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aXw",
"url": "https://api.github.com/repos/ollama/ollama/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | 1 | 2023-12-16T22:16:53 | 2023-12-19T17:42:00 | 2023-12-19T17:41:59 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Just an info for others trying to trigger ollama from powershell:
Either use `wsl ollama run llama2` (prefix with wsl)
- or -
enable an `ollama` command in PowerShell:
1. `notepad $PROFILE`
2. add as last line: `function ollama() { $cmd = @("ollama") + $args ; &wsl.exe $cmd }`
Note: setting `OLLAMA_MODELS=/mnt/...DRIVELETTER...` will kill the performance! | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1568/reactions",
"total_count": 4,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1568/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4660 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4660/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4660/comments | https://api.github.com/repos/ollama/ollama/issues/4660/events | https://github.com/ollama/ollama/issues/4660 | 2,318,779,533 | I_kwDOJ0Z1Ps6KNcSN | 4,660 | Changing seed does not change response | {
"login": "ccreutzi",
"id": 89011131,
"node_id": "MDQ6VXNlcjg5MDExMTMx",
"avatar_url": "https://avatars.githubusercontent.com/u/89011131?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ccreutzi",
"html_url": "https://github.com/ccreutzi",
"followers_url": "https://api.github.com/users/ccreutzi/followers",
"following_url": "https://api.github.com/users/ccreutzi/following{/other_user}",
"gists_url": "https://api.github.com/users/ccreutzi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ccreutzi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ccreutzi/subscriptions",
"organizations_url": "https://api.github.com/users/ccreutzi/orgs",
"repos_url": "https://api.github.com/users/ccreutzi/repos",
"events_url": "https://api.github.com/users/ccreutzi/events{/privacy}",
"received_events_url": "https://api.github.com/users/ccreutzi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1 | 2024-05-27T10:07:48 | 2024-06-11T21:24:42 | 2024-06-11T21:24:42 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
According to [the documentation](https://github.com/ollama/ollama/blob/main/docs/api.md#request-reproducible-outputs), getting reproducible outputs requires setting the seed and setting temperature to 0.
As far as I can tell, the part of these that works is setting the temperature to 0. But changing the seed does not change the response I get.
```
$ curl -s http://localhost:11434/api/generate -d '{
"model": "mistral",
"prompt": "Why is the sky blue?",
"options": {
"seed": 1234,
"temperature": 0
}, "stream": false
}' | jq .response
" The sky appears blue due to a process called Rayleigh scattering. As sunlight reaches Earth, it is made up of different wavelengths or colors. Short-wavelength light (blue and violet) is scattered more easily than longer-wavelength light (red, orange, yellow, etc.) because it interacts more with the molecules in the Earth's atmosphere.\n\nThe blue light gets scattered in all directions, making the sky appear blue to us. However, you might wonder why we don't see a violet sky since violet light is scattered even more than blue light. This is because our eyes are more sensitive to blue light and because sunlight reaches us with less violet light due to the ozone layer absorbing some of it.\n\nAt sunrise and sunset, you may observe different colors in the sky, such as red or orange. This happens because at those times, the sun is low on the horizon, and its light has to pass through more atmosphere. The shorter blue and violet wavelengths are scattered out of the line of sight, leaving longer wavelengths like red, orange, and yellow to reach our eyes."
$ curl -s http://localhost:11434/api/generate -d '{
"model": "mistral",
"prompt": "Why is the sky blue?",
"options": {
"seed": 123,
"temperature": 0
}, "stream": false
}' | jq .response
" The sky appears blue due to a process called Rayleigh scattering. As sunlight reaches Earth, it is made up of different wavelengths or colors. Short-wavelength light (blue and violet) is scattered more easily than longer-wavelength light (red, orange, yellow, etc.) because it interacts more with the molecules in the Earth's atmosphere.\n\nThe blue light gets scattered in all directions, making the sky appear blue to us. However, you might wonder why we don't see a violet sky since violet light is scattered even more than blue light. This is because our eyes are more sensitive to blue light and because sunlight reaches us with less violet light due to the ozone layer absorbing some of it.\n\nAt sunrise and sunset, you may observe different colors in the sky, such as red or orange. This happens because at those times, the sun is low on the horizon, and its light has to pass through more atmosphere. The shorter blue and violet wavelengths are scattered out of the line of sight, leaving longer wavelengths like red, orange, and yellow to reach our eyes."
```
(I did test other seeds as well, with no change.)
----
My expectation would have been that `seed` is used for all (pseudo-)random choices in the response generation, and setting a fixed seed would result in a deterministic response, including with a positive temperature.
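Under that expectation, the following request (a sketch; the seed and temperature values are arbitrary) should return the same non-greedy sample when repeated, and changing only the seed should change it:
```bash
# Sketch: fixed seed with a non-zero temperature; run twice and diff the responses
curl -s http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Why is the sky blue?",
  "options": { "seed": 42, "temperature": 0.8 },
  "stream": false
}' | jq .response
```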
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.1.38, installed with Homebrew | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4660/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/4660/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4820 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4820/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4820/comments | https://api.github.com/repos/ollama/ollama/issues/4820/events | https://github.com/ollama/ollama/issues/4820 | 2,334,271,800 | I_kwDOJ0Z1Ps6LIik4 | 4,820 | Issue with Llama3 Model on Multiple AMD GPU | {
"login": "rasodu",
"id": 13222196,
"node_id": "MDQ6VXNlcjEzMjIyMTk2",
"avatar_url": "https://avatars.githubusercontent.com/u/13222196?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rasodu",
"html_url": "https://github.com/rasodu",
"followers_url": "https://api.github.com/users/rasodu/followers",
"following_url": "https://api.github.com/users/rasodu/following{/other_user}",
"gists_url": "https://api.github.com/users/rasodu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rasodu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rasodu/subscriptions",
"organizations_url": "https://api.github.com/users/rasodu/orgs",
"repos_url": "https://api.github.com/users/rasodu/repos",
"events_url": "https://api.github.com/users/rasodu/events{/privacy}",
"received_events_url": "https://api.github.com/users/rasodu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
},
{
"id": 6677745918,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgZQ_g",
"url": "https://api.github.com/repos/ollama/ollama/labels/gpu",
"name": "gpu",
"color": "76C49E",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | 7 | 2024-06-04T19:58:35 | 2024-07-28T18:31:30 | 2024-06-23T22:21:52 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I'm experiencing an issue with running the llama3 model (specifically, version 70b-instruct-q6) on multiple AMD GPUs. While it works correctly on ollama/ollama:0.1.34-rocm, I've encountered a problem where it produces junk output when using ollama/ollama:0.1.35-rocm and ollama/ollama:0.1.41-rocm.
Interestingly, I've noticed that the junk output only occurs when the entire model fits within the GPU memory. If part of the model is stored in CPU memory, the output is generated correctly.
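A possible interim workaround, assuming the standard ROCm container invocation from the docs, is to pin the last known-good image:
```bash
# Sketch: stay on the last ROCm image that produced correct output for this model
docker run -d --device /dev/kfd --device /dev/dri -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama:0.1.34-rocm
```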
### OS
Linux
### GPU
AMD
### CPU
AMD
### Ollama version
0.1.41 | {
"login": "rasodu",
"id": 13222196,
"node_id": "MDQ6VXNlcjEzMjIyMTk2",
"avatar_url": "https://avatars.githubusercontent.com/u/13222196?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rasodu",
"html_url": "https://github.com/rasodu",
"followers_url": "https://api.github.com/users/rasodu/followers",
"following_url": "https://api.github.com/users/rasodu/following{/other_user}",
"gists_url": "https://api.github.com/users/rasodu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rasodu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rasodu/subscriptions",
"organizations_url": "https://api.github.com/users/rasodu/orgs",
"repos_url": "https://api.github.com/users/rasodu/repos",
"events_url": "https://api.github.com/users/rasodu/events{/privacy}",
"received_events_url": "https://api.github.com/users/rasodu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4820/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4820/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4499 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4499/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4499/comments | https://api.github.com/repos/ollama/ollama/issues/4499/events | https://github.com/ollama/ollama/issues/4499 | 2,302,621,905 | I_kwDOJ0Z1Ps6JPzjR | 4,499 | paligemma | {
"login": "wwjCMP",
"id": 32979859,
"node_id": "MDQ6VXNlcjMyOTc5ODU5",
"avatar_url": "https://avatars.githubusercontent.com/u/32979859?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wwjCMP",
"html_url": "https://github.com/wwjCMP",
"followers_url": "https://api.github.com/users/wwjCMP/followers",
"following_url": "https://api.github.com/users/wwjCMP/following{/other_user}",
"gists_url": "https://api.github.com/users/wwjCMP/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wwjCMP/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wwjCMP/subscriptions",
"organizations_url": "https://api.github.com/users/wwjCMP/orgs",
"repos_url": "https://api.github.com/users/wwjCMP/repos",
"events_url": "https://api.github.com/users/wwjCMP/events{/privacy}",
"received_events_url": "https://api.github.com/users/wwjCMP/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | open | false | null | [] | null | 8 | 2024-05-17T12:27:40 | 2024-12-19T10:19:58 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | https://huggingface.co./google/paligemma-3b-pt-224 | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4499/reactions",
"total_count": 48,
"+1": 45,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4499/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/1171 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1171/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1171/comments | https://api.github.com/repos/ollama/ollama/issues/1171/events | https://github.com/ollama/ollama/issues/1171 | 1,998,760,855 | I_kwDOJ0Z1Ps53IquX | 1,171 | Update installed models | {
"login": "Bodo-von-Greif",
"id": 6941672,
"node_id": "MDQ6VXNlcjY5NDE2NzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/6941672?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bodo-von-Greif",
"html_url": "https://github.com/Bodo-von-Greif",
"followers_url": "https://api.github.com/users/Bodo-von-Greif/followers",
"following_url": "https://api.github.com/users/Bodo-von-Greif/following{/other_user}",
"gists_url": "https://api.github.com/users/Bodo-von-Greif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bodo-von-Greif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bodo-von-Greif/subscriptions",
"organizations_url": "https://api.github.com/users/Bodo-von-Greif/orgs",
"repos_url": "https://api.github.com/users/Bodo-von-Greif/repos",
"events_url": "https://api.github.com/users/Bodo-von-Greif/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bodo-von-Greif/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-11-17T10:19:08 | 2023-11-17T19:11:01 | 2023-11-17T19:11:01 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi all,
I wrote a small bash script to update the installed models.
Maybe it's useful for some of you:
```
#!/bin/bash
# Based on: ollama run codellama 'show me how to send the first colum named "name" of the list which is produced with ollama list with xargs to "ollama pull"'
echo "Actual models"
ollama list
echo
ollama list | grep -v "NAME" | awk '{print $1}' | xargs -I{} ollama pull {}
echo
echo "Updated models"
ollama list
``` | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1171/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1171/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8101 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8101/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8101/comments | https://api.github.com/repos/ollama/ollama/issues/8101/events | https://github.com/ollama/ollama/pull/8101 | 2,740,135,352 | PR_kwDOJ0Z1Ps6FPOGv | 8,101 | llama: vendor commit ba1cb19c | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-12-14T20:40:05 | 2024-12-14T22:55:54 | 2024-12-14T22:55:51 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8101",
"html_url": "https://github.com/ollama/ollama/pull/8101",
"diff_url": "https://github.com/ollama/ollama/pull/8101.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8101.patch",
"merged_at": "2024-12-14T22:55:51"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8101/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8101/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8478 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8478/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8478/comments | https://api.github.com/repos/ollama/ollama/issues/8478/events | https://github.com/ollama/ollama/issues/8478 | 2,796,624,096 | I_kwDOJ0Z1Ps6msRjg | 8,478 | Display Minimum System Requirements | {
"login": "Siddhesh-Agarwal",
"id": 68057995,
"node_id": "MDQ6VXNlcjY4MDU3OTk1",
"avatar_url": "https://avatars.githubusercontent.com/u/68057995?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Siddhesh-Agarwal",
"html_url": "https://github.com/Siddhesh-Agarwal",
"followers_url": "https://api.github.com/users/Siddhesh-Agarwal/followers",
"following_url": "https://api.github.com/users/Siddhesh-Agarwal/following{/other_user}",
"gists_url": "https://api.github.com/users/Siddhesh-Agarwal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Siddhesh-Agarwal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Siddhesh-Agarwal/subscriptions",
"organizations_url": "https://api.github.com/users/Siddhesh-Agarwal/orgs",
"repos_url": "https://api.github.com/users/Siddhesh-Agarwal/repos",
"events_url": "https://api.github.com/users/Siddhesh-Agarwal/events{/privacy}",
"received_events_url": "https://api.github.com/users/Siddhesh-Agarwal/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 6 | 2025-01-18T03:38:17 | 2025-01-20T09:51:21 | 2025-01-20T09:51:21 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | It would be great to have the minimum system requirements like disk space, and RAM for each model. This could be done both on the website and in the CLI like a small notice.
The minimum RAM required by the model is:
$\text{minimum RAM} = \text{number of parameters} \times \text{bytes per parameter}$
I see that most models use the `Q4_K_M` or `Q4_0` quantization. So, bytes_per_parameter = $0.5$. I may be oversimplifying the complexity of this problem.
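As a rough, hedged illustration of that estimate (the helper name and the 8-billion-parameter example below are made up; the 0.5 bytes per parameter is the Q4 assumption above, and KV cache and runtime overhead are ignored):
```
package main

import "fmt"

// estimateMinRAMGiB returns a rough lower bound on the memory needed just to
// hold the weights: parameters * bytes-per-parameter, ignoring KV cache and
// runtime overhead.
func estimateMinRAMGiB(numParams, bytesPerParam float64) float64 {
	return numParams * bytesPerParam / float64(1<<30)
}

func main() {
	// Example: an 8-billion-parameter model at ~0.5 bytes per parameter (Q4).
	fmt.Printf("~%.1f GiB\n", estimateMinRAMGiB(8e9, 0.5))
}
```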
| {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8478/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8478/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4491 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4491/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4491/comments | https://api.github.com/repos/ollama/ollama/issues/4491/events | https://github.com/ollama/ollama/issues/4491 | 2,301,965,398 | I_kwDOJ0Z1Ps6JNTRW | 4,491 | Pulling using API - Session timeout (5 minutes) | {
"login": "pelletier197",
"id": 24528884,
"node_id": "MDQ6VXNlcjI0NTI4ODg0",
"avatar_url": "https://avatars.githubusercontent.com/u/24528884?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pelletier197",
"html_url": "https://github.com/pelletier197",
"followers_url": "https://api.github.com/users/pelletier197/followers",
"following_url": "https://api.github.com/users/pelletier197/following{/other_user}",
"gists_url": "https://api.github.com/users/pelletier197/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pelletier197/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pelletier197/subscriptions",
"organizations_url": "https://api.github.com/users/pelletier197/orgs",
"repos_url": "https://api.github.com/users/pelletier197/repos",
"events_url": "https://api.github.com/users/pelletier197/events{/privacy}",
"received_events_url": "https://api.github.com/users/pelletier197/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-05-17T06:55:50 | 2024-07-25T22:40:57 | 2024-07-25T22:40:57 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When using the REST API to pull models, the `PULL` request seems to time out for large models (llama3).
This is linked to [this issue](https://github.com/ollama/ollama-js/issues/72).
Is there any way to override the default session timeout when pulling models? I noticed that the `Generate` endpoint accepts a `KeepAlive` [parameter](https://github.com/ollama/ollama/blob/7e1e0086e7d18c943ff403a7ca5c2d9ce39f3f4b/server/routes.go#L136) to override this. Could this be possible for `pull` as well?
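For reference, a minimal sketch of passing `keep_alive` on a generate request (the model name and duration are placeholders; as far as I can tell this only controls how long the loaded model stays in memory after the request, and does not change the timeout of a pull):
```
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// keep_alive controls how long the loaded model stays in memory after the
	// request completes; it does not extend the timeout of /api/pull.
	payload := []byte(`{"model": "llama3", "prompt": "hello", "stream": false, "keep_alive": "10m"}`)

	resp, err := http.Post("http://localhost:11434/api/generate", "application/json", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```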
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.1.38
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4491/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4491/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7208 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7208/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7208/comments | https://api.github.com/repos/ollama/ollama/issues/7208/events | https://github.com/ollama/ollama/issues/7208 | 2,587,622,492 | I_kwDOJ0Z1Ps6aO_xc | 7,208 | insufficient VRAM to load any model layers | {
"login": "SDAIer",
"id": 174102361,
"node_id": "U_kgDOCmCXWQ",
"avatar_url": "https://avatars.githubusercontent.com/u/174102361?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SDAIer",
"html_url": "https://github.com/SDAIer",
"followers_url": "https://api.github.com/users/SDAIer/followers",
"following_url": "https://api.github.com/users/SDAIer/following{/other_user}",
"gists_url": "https://api.github.com/users/SDAIer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SDAIer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SDAIer/subscriptions",
"organizations_url": "https://api.github.com/users/SDAIer/orgs",
"repos_url": "https://api.github.com/users/SDAIer/repos",
"events_url": "https://api.github.com/users/SDAIer/events{/privacy}",
"received_events_url": "https://api.github.com/users/SDAIer/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-10-15T04:45:47 | 2024-10-16T04:37:13 | 2024-10-16T04:37:13 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
"I want to know why the model prompts 'GPU has too little memory to allocate any layers.' I have four GPU cards, with available memory of 23.3 GiB, 23.3 GiB, 16.8 GiB, and 9.7 GiB respectively."
```
10月 14 12:47:30 gpu ollama[24746]: time=2024-10-14T12:47:30.994+08:00 level=DEBUG source=sched.go:224 msg="loading first model" model=/usr/share/ollama/.ollama/models/blobs/sha256-b506a070d1152798d435ec4e7687336567ae653b3106f73b7b4ac7be1cbc4449
10月 14 12:47:30 gpu ollama[24746]: time=2024-10-14T12:47:30.994+08:00 level=DEBUG source=memory.go:103 msg=evaluating library=cuda gpu_count=1 available="[23.3 GiB]"
10月 14 12:47:30 gpu ollama[24746]: time=2024-10-14T12:47:30.995+08:00 level=DEBUG source=memory.go:170 msg="gpu has too little memory to allocate any layers" id=GPU-ac079011-c45b-de29-f2e2-71b2e5d2d7f4 library=cuda variant=v12 compute=8.0 driver=12.2 name="NVIDIA A30" total="23.5 GiB" available="23.3 GiB" minimum_memory=479199232 layer_size="484.5 MiB" gpu_zer_overhead="0 B" partial_offload="24.6 GiB" full_offload="24.2 GiB"
10月 14 12:47:30 gpu ollama[24746]: time=2024-10-14T12:47:30.995+08:00 level=DEBUG source=memory.go:312 msg="insufficient VRAM to load any model layers"
10月 14 12:47:30 gpu ollama[24746]: time=2024-10-14T12:47:30.995+08:00 level=DEBUG source=memory.go:103 msg=evaluating library=cuda gpu_count=1 available="[23.3 GiB]"
10月 14 12:47:30 gpu ollama[24746]: time=2024-10-14T12:47:30.995+08:00 level=DEBUG source=memory.go:170 msg="gpu has too little memory to allocate any layers" id=GPU-1a5993d8-1f60-3ecd-b80f-55ca9f1e95d2 library=cuda variant=v12 compute=8.0 driver=12.2 name="NVIDIA A30" total="23.5 GiB" available="23.3 GiB" minimum_memory=479199232 layer_size="484.5 MiB" gpu_zer_overhead="0 B" partial_offload="24.6 GiB" full_offload="24.2 GiB"
10月 14 12:47:30 gpu ollama[24746]: time=2024-10-14T12:47:30.995+08:00 level=DEBUG source=memory.go:312 msg="insufficient VRAM to load any model layers"
10月 14 12:47:30 gpu ollama[24746]: time=2024-10-14T12:47:30.995+08:00 level=DEBUG source=memory.go:103 msg=evaluating library=cuda gpu_count=1 available="[16.8 GiB]"
10月 14 12:47:30 gpu ollama[24746]: time=2024-10-14T12:47:30.996+08:00 level=DEBUG source=memory.go:170 msg="gpu has too little memory to allocate any layers" id=GPU-6b83f2f6-dc65-7feb-5e02-0cd0087995e8 library=cuda variant=v12 compute=8.0 driver=12.2 name="NVIDIA A30" total="23.5 GiB" available="16.8 GiB" minimum_memory=479199232 layer_size="484.5 MiB" gpu_zer_overhead="0 B" partial_offload="24.6 GiB" full_offload="24.2 GiB"
10月 14 12:47:30 gpu ollama[24746]: time=2024-10-14T12:47:30.996+08:00 level=DEBUG source=memory.go:312 msg="insufficient VRAM to load any model layers"
10月 14 12:47:30 gpu ollama[24746]: time=2024-10-14T12:47:30.996+08:00 level=DEBUG source=memory.go:103 msg=evaluating library=cuda gpu_count=1 available="[9.7 GiB]"
10月 14 12:47:30 gpu ollama[24746]: time=2024-10-14T12:47:30.996+08:00 level=DEBUG source=memory.go:170 msg="gpu has too little memory to allocate any layers" id=GPU-ad4cba93-ee35-2ea2-dba7-7b5772a098ce library=cuda variant=v12 compute=8.0 driver=12.2 name="NVIDIA A30" total="23.5 GiB" available="9.7 GiB" minimum_memory=479199232 layer_size="484.5 MiB" gpu_zer_overhead="0 B" partial_offload="24.6 GiB" full_offload="24.2 GiB"
10月 14 12:47:30 gpu ollama[24746]: time=2024-10-14T12:47:30.996+08:00 level=DEBUG source=memory.go:312 msg="insufficient VRAM to load any model layers"
10月 14 12:47:30 gpu ollama[24746]: time=2024-10-14T12:47:30.996+08:00 level=DEBUG source=memory.go:103 msg=evaluating library=cuda gpu_count=1 available="[23.3 GiB]"
10月 14 12:47:31 gpu ollama[24746]: time=2024-10-14T12:47:30.997+08:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/usr/share/ollama/.ollama/models/blobs/sha256-b506a070d1152798d435ec4e7687336567ae653b3106f73b7b4ac7be1cbc4449 gpu=GPU-ac079011-c45b-de29-f2e2-71b2e5d2d7f4 parallel=1 available=24986779648 required="15.1 GiB"
```
### OS
centos7.9
### GPU
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.183.06 Driver Version: 535.183.06 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA A30 Off | 00000000:17:00.0 Off | 0 |
| N/A 28C P0 30W / 165W | 13896MiB / 24576MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
| 1 NVIDIA A30 Off | 00000000:31:00.0 Off | 0 |
| N/A 27C P0 27W / 165W | 6642MiB / 24576MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
| 2 NVIDIA A30 Off | 00000000:B1:00.0 Off | 0 |
| N/A 26C P0 25W / 165W | 3MiB / 24576MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
| 3 NVIDIA A30 Off | 00000000:CA:00.0 Off | 0 |
| N/A 26C P0 24W / 165W | 3MiB / 24576MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| 0 N/A N/A 12273 C /usr/local/bin/python3.8 5062MiB |
| 0 N/A N/A 36459 C /usr/bin/python3.10 8820MiB |
| 1 N/A N/A 36258 C /usr/bin/python3.10 6634MiB |
+---------------------------------------------------------------------------------------+
### CPU
intel
### Ollama version
0.3.11 | {
"login": "SDAIer",
"id": 174102361,
"node_id": "U_kgDOCmCXWQ",
"avatar_url": "https://avatars.githubusercontent.com/u/174102361?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SDAIer",
"html_url": "https://github.com/SDAIer",
"followers_url": "https://api.github.com/users/SDAIer/followers",
"following_url": "https://api.github.com/users/SDAIer/following{/other_user}",
"gists_url": "https://api.github.com/users/SDAIer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SDAIer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SDAIer/subscriptions",
"organizations_url": "https://api.github.com/users/SDAIer/orgs",
"repos_url": "https://api.github.com/users/SDAIer/repos",
"events_url": "https://api.github.com/users/SDAIer/events{/privacy}",
"received_events_url": "https://api.github.com/users/SDAIer/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7208/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7208/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7688 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7688/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7688/comments | https://api.github.com/repos/ollama/ollama/issues/7688/events | https://github.com/ollama/ollama/issues/7688 | 2,662,591,083 | I_kwDOJ0Z1Ps6es-pr | 7,688 | Have start model downloading after internet disconnect | {
"login": "mosquet",
"id": 136934740,
"node_id": "U_kgDOCCl1VA",
"avatar_url": "https://avatars.githubusercontent.com/u/136934740?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mosquet",
"html_url": "https://github.com/mosquet",
"followers_url": "https://api.github.com/users/mosquet/followers",
"following_url": "https://api.github.com/users/mosquet/following{/other_user}",
"gists_url": "https://api.github.com/users/mosquet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mosquet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mosquet/subscriptions",
"organizations_url": "https://api.github.com/users/mosquet/orgs",
"repos_url": "https://api.github.com/users/mosquet/repos",
"events_url": "https://api.github.com/users/mosquet/events{/privacy}",
"received_events_url": "https://api.github.com/users/mosquet/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677370291,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw",
"url": "https://api.github.com/repos/ollama/ollama/labels/networking",
"name": "networking",
"color": "0B5368",
"default": false,
"description": "Issues relating to ollama pull and push"
}
] | closed | false | null | [] | null | 2 | 2024-11-15T17:01:02 | 2024-11-17T12:02:03 | 2024-11-17T12:02:03 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
## Description
When the internet connection is lost during a model download, the process starts again from 0% instead of resuming from the interrupted point.
## Current Behavior
- Download fails on connection loss
- When connection restored, download restarts from 0%
- All previous download progress is lost
## Expected Behavior
- Download should pause when connection lost
- Should resume from previous progress when reconnected
- Implement proper download state tracking
## Technical Considerations
- Need download progress persistence
- Connection state monitoring
- Resume capability implementation
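As a rough sketch of what the resume capability could look like (this is not Ollama's actual downloader; the URL and file names below are placeholders):
```
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

// resumeDownload appends to dst starting from however many bytes are already
// on disk, using an HTTP Range request.
func resumeDownload(url, dst string) error {
	f, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o644)
	if err != nil {
		return err
	}
	defer f.Close()

	info, err := f.Stat()
	if err != nil {
		return err
	}

	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return err
	}
	// Ask the server for only the bytes we are still missing.
	req.Header.Set("Range", fmt.Sprintf("bytes=%d-", info.Size()))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	// If the server ignored the Range header (200 instead of 206), appending
	// would duplicate data; a real implementation would restart instead.
	if info.Size() > 0 && resp.StatusCode != http.StatusPartialContent {
		return fmt.Errorf("server did not honor range request: %s", resp.Status)
	}

	_, err = io.Copy(f, resp.Body)
	return err
}

func main() {
	// Placeholder URL and file name; a real blob URL would come from the
	// registry manifest.
	if err := resumeDownload("https://example.com/model.gguf", "model.gguf.partial"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```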
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
Just updated now | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7688/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7688/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7776 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7776/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7776/comments | https://api.github.com/repos/ollama/ollama/issues/7776/events | https://github.com/ollama/ollama/issues/7776 | 2,678,294,755 | I_kwDOJ0Z1Ps6fo4jj | 7,776 | streaming for tools support | {
"login": "ZHOUxiaohe1987",
"id": 59469405,
"node_id": "MDQ6VXNlcjU5NDY5NDA1",
"avatar_url": "https://avatars.githubusercontent.com/u/59469405?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZHOUxiaohe1987",
"html_url": "https://github.com/ZHOUxiaohe1987",
"followers_url": "https://api.github.com/users/ZHOUxiaohe1987/followers",
"following_url": "https://api.github.com/users/ZHOUxiaohe1987/following{/other_user}",
"gists_url": "https://api.github.com/users/ZHOUxiaohe1987/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZHOUxiaohe1987/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZHOUxiaohe1987/subscriptions",
"organizations_url": "https://api.github.com/users/ZHOUxiaohe1987/orgs",
"repos_url": "https://api.github.com/users/ZHOUxiaohe1987/repos",
"events_url": "https://api.github.com/users/ZHOUxiaohe1987/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZHOUxiaohe1987/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2024-11-21T07:01:10 | 2024-11-21T09:55:09 | 2024-11-21T09:55:08 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
  } | null | null | null | I tried to use tools in a streaming way with LangChain, but it does not work. | {
"login": "ParthSareen",
"id": 29360864,
"node_id": "MDQ6VXNlcjI5MzYwODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParthSareen",
"html_url": "https://github.com/ParthSareen",
"followers_url": "https://api.github.com/users/ParthSareen/followers",
"following_url": "https://api.github.com/users/ParthSareen/following{/other_user}",
"gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions",
"organizations_url": "https://api.github.com/users/ParthSareen/orgs",
"repos_url": "https://api.github.com/users/ParthSareen/repos",
"events_url": "https://api.github.com/users/ParthSareen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParthSareen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7776/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7776/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6622 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6622/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6622/comments | https://api.github.com/repos/ollama/ollama/issues/6622/events | https://github.com/ollama/ollama/issues/6622 | 2,504,121,981 | I_kwDOJ0Z1Ps6VQd59 | 6,622 | [Bug] open-webui integration error when ui docker listen on 11434 | {
"login": "zydmtaichi",
"id": 93961601,
"node_id": "U_kgDOBZm9gQ",
"avatar_url": "https://avatars.githubusercontent.com/u/93961601?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zydmtaichi",
"html_url": "https://github.com/zydmtaichi",
"followers_url": "https://api.github.com/users/zydmtaichi/followers",
"following_url": "https://api.github.com/users/zydmtaichi/following{/other_user}",
"gists_url": "https://api.github.com/users/zydmtaichi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zydmtaichi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zydmtaichi/subscriptions",
"organizations_url": "https://api.github.com/users/zydmtaichi/orgs",
"repos_url": "https://api.github.com/users/zydmtaichi/repos",
"events_url": "https://api.github.com/users/zydmtaichi/events{/privacy}",
"received_events_url": "https://api.github.com/users/zydmtaichi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 3 | 2024-09-04T01:47:32 | 2024-09-05T00:34:41 | 2024-09-05T00:34:41 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I start open-webui with the command below first, and then the ollama service fails to start via `ollama serve`: the output says the port is already in use. I get the same error if I reverse the launch order (first ollama, then the open-webui docker container). Please check and improve the integration of ollama and open-webui.
My command to start open-webui:
`docker run -d -p 3000:8080 -p 127.0.0.1:11434:11434 -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always registry.cn-hangzhou.aliyuncs.com/kin-images/open-webui:git-18ae586`
My command to start ollama:
`ollama serve`
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.6 | {
"login": "zydmtaichi",
"id": 93961601,
"node_id": "U_kgDOBZm9gQ",
"avatar_url": "https://avatars.githubusercontent.com/u/93961601?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zydmtaichi",
"html_url": "https://github.com/zydmtaichi",
"followers_url": "https://api.github.com/users/zydmtaichi/followers",
"following_url": "https://api.github.com/users/zydmtaichi/following{/other_user}",
"gists_url": "https://api.github.com/users/zydmtaichi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zydmtaichi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zydmtaichi/subscriptions",
"organizations_url": "https://api.github.com/users/zydmtaichi/orgs",
"repos_url": "https://api.github.com/users/zydmtaichi/repos",
"events_url": "https://api.github.com/users/zydmtaichi/events{/privacy}",
"received_events_url": "https://api.github.com/users/zydmtaichi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6622/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6622/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2165 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2165/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2165/comments | https://api.github.com/repos/ollama/ollama/issues/2165/events | https://github.com/ollama/ollama/issues/2165 | 2,097,159,829 | I_kwDOJ0Z1Ps59AB6V | 2,165 | ROCm v5 crash - free(): invalid pointer | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 26 | 2024-01-23T23:39:26 | 2024-03-12T18:26:23 | 2024-03-12T18:26:23 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ```
loading library /tmp/ollama800487147/rocm_v5/libext_server.so
2024/01/23 19:26:51 dyn_ext_server.go:90: INFO Loading Dynamic llm server: /tmp/ollama800487147/rocm_v5/libext_server.so
2024/01/23 19:26:51 dyn_ext_server.go:145: INFO Initializing llama server
free(): invalid pointer
Aborted (core dumped)
```
Most likely there is some other problem/error, but it appears we're not handling that error case gracefully and are trying to free an invalid pointer. | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2165/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2165/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4756 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4756/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4756/comments | https://api.github.com/repos/ollama/ollama/issues/4756/events | https://github.com/ollama/ollama/pull/4756 | 2,328,405,482 | PR_kwDOJ0Z1Ps5xKEpa | 4,756 | refactor convert | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2024-05-31T18:46:52 | 2024-08-01T21:16:33 | 2024-08-01T21:16:31 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4756",
"html_url": "https://github.com/ollama/ollama/pull/4756",
"diff_url": "https://github.com/ollama/ollama/pull/4756.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4756.patch",
"merged_at": "2024-08-01T21:16:31"
  } | The goal is to build a single, well-defined interface to convert a model, as well as interfaces for input formats (e.g. safetensors, pytorch), model architectures (e.g. llama, gemma), and model tokenizers.
This PR makes some significant changes to the conversion process:
1. implement a single function call for conversion `convert.Convert(string, io.WriteSeeker)` which abstracts the many operations required for successful conversion
2. implement a new `convert.Converter` interface which each model conversion shall implement
3. decouple vocabulary parsing from model
4. add special vocabulary detection for both tokenizer.model and tokenizer.json based vocabularies
5. update the tensor writing interface for better compatibility with non-trivial conversions
as a test of this new interface, implement mixtral conversion as an extension to llama conversion
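For readers skimming this PR, here is a rough sketch of the shape such interfaces could take (the type and method names below are illustrative assumptions, not the exact definitions introduced here):
```
package convert

import "io"

// Tensor is a placeholder for a named weight blob (illustrative only).
type Tensor struct {
	Name  string
	Shape []uint64
}

// ModelFormat is an assumed abstraction over input formats such as
// safetensors or pytorch checkpoints.
type ModelFormat interface {
	// GetTensors enumerates the tensors found in the model directory.
	GetTensors(dir string) ([]Tensor, error)
	// GetParams reads the model hyperparameters (e.g. config.json).
	GetParams(dir string) (map[string]any, error)
}

// Converter is an assumed per-architecture conversion hook.
type Converter interface {
	// KV maps source hyperparameters to GGUF key/value metadata.
	KV(params map[string]any) map[string]any
	// WriteTensors streams converted tensors to the output file.
	WriteTensors(ws io.WriteSeeker) error
}
```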
TODO:
- [ ] write short tests for tokenizer, model KVs, minimal tensor
Resolves: https://github.com/ollama/ollama/issues/5255 | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4756/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4756/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3335 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3335/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3335/comments | https://api.github.com/repos/ollama/ollama/issues/3335/events | https://github.com/ollama/ollama/issues/3335 | 2,205,087,423 | I_kwDOJ0Z1Ps6Dbva_ | 3,335 | Error: pull model manifest: ollama.ai certificate is expired | {
"login": "hheydaroff",
"id": 29415152,
"node_id": "MDQ6VXNlcjI5NDE1MTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/29415152?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hheydaroff",
"html_url": "https://github.com/hheydaroff",
"followers_url": "https://api.github.com/users/hheydaroff/followers",
"following_url": "https://api.github.com/users/hheydaroff/following{/other_user}",
"gists_url": "https://api.github.com/users/hheydaroff/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hheydaroff/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hheydaroff/subscriptions",
"organizations_url": "https://api.github.com/users/hheydaroff/orgs",
"repos_url": "https://api.github.com/users/hheydaroff/repos",
"events_url": "https://api.github.com/users/hheydaroff/events{/privacy}",
"received_events_url": "https://api.github.com/users/hheydaroff/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 5 | 2024-03-25T07:32:51 | 2024-03-25T11:00:16 | 2024-03-25T08:31:09 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When I try to pull a model from the registry, it gives me the following error:
`Error: pull model manifest: Get "https://registry.ollama.ai/v2/library/llama2-uncensored/manifests/latest": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: “ollama.ai” certificate is expired`
### What did you expect to see?
Successful pull of the model, so that I can run it on my local machine.
### Steps to reproduce
1. Open the CLI
2. Run `ollama run dolphincoder` or any other model
### Are there any recent changes that introduced the issue?
_No response_
### OS
macOS
### Architecture
arm64
### Platform
_No response_
### Ollama version
0.1.27
### GPU
Apple
### GPU info
_No response_
### CPU
Apple
### Other software
_No response_ | {
"login": "hheydaroff",
"id": 29415152,
"node_id": "MDQ6VXNlcjI5NDE1MTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/29415152?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hheydaroff",
"html_url": "https://github.com/hheydaroff",
"followers_url": "https://api.github.com/users/hheydaroff/followers",
"following_url": "https://api.github.com/users/hheydaroff/following{/other_user}",
"gists_url": "https://api.github.com/users/hheydaroff/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hheydaroff/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hheydaroff/subscriptions",
"organizations_url": "https://api.github.com/users/hheydaroff/orgs",
"repos_url": "https://api.github.com/users/hheydaroff/repos",
"events_url": "https://api.github.com/users/hheydaroff/events{/privacy}",
"received_events_url": "https://api.github.com/users/hheydaroff/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3335/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3335/timeline | null | not_planned | false |
https://api.github.com/repos/ollama/ollama/issues/2525 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2525/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2525/comments | https://api.github.com/repos/ollama/ollama/issues/2525/events | https://github.com/ollama/ollama/issues/2525 | 2,137,485,748 | I_kwDOJ0Z1Ps5_Z3G0 | 2,525 | ollama version 1.25 problem emojis | {
"login": "iplayfast",
"id": 751306,
"node_id": "MDQ6VXNlcjc1MTMwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/751306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iplayfast",
"html_url": "https://github.com/iplayfast",
"followers_url": "https://api.github.com/users/iplayfast/followers",
"following_url": "https://api.github.com/users/iplayfast/following{/other_user}",
"gists_url": "https://api.github.com/users/iplayfast/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iplayfast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iplayfast/subscriptions",
"organizations_url": "https://api.github.com/users/iplayfast/orgs",
"repos_url": "https://api.github.com/users/iplayfast/repos",
"events_url": "https://api.github.com/users/iplayfast/events{/privacy}",
"received_events_url": "https://api.github.com/users/iplayfast/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-02-15T21:32:20 | 2024-02-21T15:51:22 | 2024-02-21T15:51:21 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
  } | null | null | null | Apparently adding "my friend" to the end of a prompt causes mistral to return emojis that never stop.
```
ollama run mistral
>>> hello my friend
Hello! How can I help you today? Is there a specific question or topic you'd like to discuss? I'm here to answer any questions you may have to the best
of my ability. Let me know if there's something on your mind, and we can explore it together. Have a great day! 😊🌞💻 #AI #HelpfulBot #ChatBot
#FriendlyInteraction #QuestionAnswering #AssistiveTechnology #TechnologicalAdvancements #DigitalAssistant #VirtualHelper #HumanComputerInteraction
#ArtificialIntelligenceChatbot #ConversationalInterface #NaturalLanguageProcessing #MachineLearning #DeepLearning #NeuralNetworks #BigDataAnalytics
#CloudComputing #InternetOfThings #Cybersecurity #Programming #Python #Java #Cplusplus #Swift #R #Matlab #SQL #DataScience #MachineLearningModels
#DeepLearningModels #NeuralNetworkModels #TensorFlow #Keras #Pytorch #OpenCV #ComputerVision #ImageProcessing #TextToSpeech #SpeechRecognition
#ChatbotDevelopment #NaturalLanguageUnderstanding #SentimentAnalysis #QuestionAnsweringSystems #DialogueManagement #ConversationalAI
#VirtualAssistantSolutions #CustomerServiceAutomation #BusinessIntelligence #DataAnalyticsTools #DataVisualizationTools #DataMiningTools
#DataPreprocessingTools #StatisticalAnalysisTools #PredictiveAnalysisTools #DataCleaningTools #DataIntegrationTools #DataExportTools
#DatabaseManagementSystems #DataSecurityTools #DataPrivacyTools #DataCompressionTools #DataEncryptionTools #CloudServices #SaaS #PaaS #IaaS
#ServerlessComputing #DevOps #SoftwareEngineering #WebDevelopment #AppDevelopment #MobileDevelopment #UIUXDesign #GraphicDesign #VideoEditing
#AudioEditing #Photography #3DModeling #VR #AR #Gaming #ESports #BlockchainTechnology #SmartContracts #DecentralizedApplications #Cryptocurrency #NFTs
#SupplyChainManagement #LogisticsManagement #ProjectManagementTools #ProductivityTools #TaskManagementTools #TimeTrackingTools #NoteTakingApps
#CollaborationTools #CommunicationTools #EmailClients #MessagingApps #SocialMediaPlatforms #ContentCreationTools #ContentManagementSystems
#WebHostingServices #DomainRegistrationServices #WebDesignServices #GraphicDesignServices #VideoEditingServices #AudioEditingServices #PhotographyServices
#3DModelingServices #VRServices #ARServices #GamingServices #ESportsServices #BlockchainServices #DecentralizedAppServices #CryptocurrencyServices
#NFTServices #SupplyChainServices #LogisticsServices #ProjectManagementServices #ProductivityServices #TaskManagementServices #TimeTrackingServices
#NoteTakingService #CollaborationService #CommunicationService #EmailClientService #MessagingService #SocialMediaPlatformService #ContentCreationService
#ContentManagementSystemService #WebHostingService #DomainRegistrationService #WebDesignService #GraphicDesignService #VideoEditingService
#AudioEditingService #PhotographyService #3DModelingService #VRService #ARService #GamingService #ESportsService #BlockchainService
#DecentralizedAppService #CryptocurrencyService #NFTService #SupplyChainService #LogisticsService #ProjectManagementService #ProductivityService
#TaskManagementService #TimeTrackingService #NoteTakingTool #CollaborationTool #CommunicationTool #EmailClient #MessagingApp #SocialMediaPlatform
#ContentCreationTool #ContentManagementSystem #WebHostingService #DomainRegistrationService #WebDesignService #GraphicDesignService #VideoEditingService
#AudioEditingService #PhotographyService #3DModelingService #VRService #ARService #GamingService #ESportsService #BlockchainService
#DecentralizedAppService #CryptocurrencyService #NFTService #SupplyChainService #LogisticsService #ProjectManagementService #ProductivityService
#TaskManagementService #TimeTrackingService #NoteTakingTool #CollaborationTool #CommunicationTool #EmailClientTool #MessagingAppTool
#SocialMediaPlatformTool #ContentCreationToolTool #ContentManagementSystemTool #WebHostingServiceTool #DomainRegistrationServiceTool #WebDesignServiceTool
#GraphicDesignServiceTool #VideoEditingServiceTool #AudioEditingServiceTool #PhotographyServiceTool #3DModelingServiceTool #VRServiceTool #ARServiceTool
#GamingServiceTool #ESportsServiceTool #BlockchainServiceTool #DecentralizedAppServiceTool #CryptocurrencyServiceTool #NFTServiceTool
#SupplyChainServiceTool #LogisticsServiceTool #ProjectManagementServiceTool #ProductivityServiceTool #TaskManagementServiceTool #TimeTrackingServiceTool
#NoteTakingServiceTool #CollaborationServiceTool #CommunicationServiceTool #EmailClientServiceTool #MessagingServiceTool #SocialMediaPlatformServiceTool
#ContentCreationServiceTool #ContentManagementSystemServiceTool #WebHostingServiceTool #DomainRegistrationServiceTool #WebDesignServiceTool
#GraphicDesignServiceTool #VideoEditingServiceTool #AudioEditingServiceTool #PhotographyServiceTool #3DModelingServiceTool #VRServiceTool #ARServiceTool
#GamingServiceTool #ESportsServiceTool #BlockchainServiceTool #DecentralizedAppServiceTool #CryptocurrencyServiceTool #NFTServiceTool
#SupplyChainServiceTool #LogisticsServiceTool #ProjectManagementServiceTool #ProductivityServiceTool #TaskManagementServiceTool #TimeTrackingServiceTool
#NoteTakingServiceTool #CollaborationServiceTool #CommunicationServiceTool #EmailClientServiceTool #MessagingServiceTool #SocialMediaPlatformServiceTool
#ContentCreationServiceTool #ContentManagementSystemServiceTool #WebHostingServiceTool #DomainRegistrationServiceTool #WebDesignServiceTool
#GraphicDesignServiceTool #VideoEditingServiceTool #AudioEditingServiceTool #PhotographyServiceTool #3DModelingServiceTool #VRServiceTool #ARServiceTool
#GamingServiceTool #ESportsServiceTool #BlockchainServiceTool #DecentralizedAppServiceTool #CryptocurrencyServiceTool #NFTServiceTool
#SupplyChainServiceTool #LogisticsServiceTool #ProjectManagementServiceTool #ProductivityServiceTool #TaskManagementServiceTool #TimeTrackingServiceTool
#NoteTakingServiceTool #CollaborationServiceTool #CommunicationServiceTool #EmailClientServiceTool #MessagingServiceTool #SocialMediaPlatformServiceTool
#ContentCreationServiceTool #ContentManagementSystemServiceTool #WebHostingServiceTool #DomainRegistrationServiceTool #WebDesignServiceTool
#GraphicDesignServiceTool #VideoEditingServiceTool^C
>>> Send a message (/? for help)
``` | {
"login": "iplayfast",
"id": 751306,
"node_id": "MDQ6VXNlcjc1MTMwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/751306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iplayfast",
"html_url": "https://github.com/iplayfast",
"followers_url": "https://api.github.com/users/iplayfast/followers",
"following_url": "https://api.github.com/users/iplayfast/following{/other_user}",
"gists_url": "https://api.github.com/users/iplayfast/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iplayfast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iplayfast/subscriptions",
"organizations_url": "https://api.github.com/users/iplayfast/orgs",
"repos_url": "https://api.github.com/users/iplayfast/repos",
"events_url": "https://api.github.com/users/iplayfast/events{/privacy}",
"received_events_url": "https://api.github.com/users/iplayfast/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2525/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 1,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2525/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7724 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7724/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7724/comments | https://api.github.com/repos/ollama/ollama/issues/7724/events | https://github.com/ollama/ollama/pull/7724 | 2,668,186,705 | PR_kwDOJ0Z1Ps6CO2-e | 7,724 | Update README.md | {
"login": "zeitlings",
"id": 25689591,
"node_id": "MDQ6VXNlcjI1Njg5NTkx",
"avatar_url": "https://avatars.githubusercontent.com/u/25689591?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zeitlings",
"html_url": "https://github.com/zeitlings",
"followers_url": "https://api.github.com/users/zeitlings/followers",
"following_url": "https://api.github.com/users/zeitlings/following{/other_user}",
"gists_url": "https://api.github.com/users/zeitlings/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zeitlings/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zeitlings/subscriptions",
"organizations_url": "https://api.github.com/users/zeitlings/orgs",
"repos_url": "https://api.github.com/users/zeitlings/repos",
"events_url": "https://api.github.com/users/zeitlings/events{/privacy}",
"received_events_url": "https://api.github.com/users/zeitlings/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-11-18T11:19:04 | 2024-11-19T03:33:23 | 2024-11-19T03:33:23 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7724",
"html_url": "https://github.com/ollama/ollama/pull/7724",
"diff_url": "https://github.com/ollama/ollama/pull/7724.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7724.patch",
"merged_at": "2024-11-19T03:33:23"
} | Add [Alfred Ollama](https://github.com/zeitlings/alfred-ollama) to Extensions & Plugins.
- Manage local models
- Perform local inference | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7724/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7724/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3452 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3452/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3452/comments | https://api.github.com/repos/ollama/ollama/issues/3452/events | https://github.com/ollama/ollama/issues/3452 | 2,220,042,787 | I_kwDOJ0Z1Ps6EUyoj | 3,452 | Pulling manifest fails with error "read: connection reset by peer" | {
"login": "chopeen",
"id": 183731,
"node_id": "MDQ6VXNlcjE4MzczMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/183731?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chopeen",
"html_url": "https://github.com/chopeen",
"followers_url": "https://api.github.com/users/chopeen/followers",
"following_url": "https://api.github.com/users/chopeen/following{/other_user}",
"gists_url": "https://api.github.com/users/chopeen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chopeen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chopeen/subscriptions",
"organizations_url": "https://api.github.com/users/chopeen/orgs",
"repos_url": "https://api.github.com/users/chopeen/repos",
"events_url": "https://api.github.com/users/chopeen/events{/privacy}",
"received_events_url": "https://api.github.com/users/chopeen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 5 | 2024-04-02T09:43:19 | 2024-07-01T19:54:50 | 2024-04-09T10:54:52 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
After upgrading from `0.1.27` to `0.1.30`, I can no longer pull models when connected to a corporate network:
```
$ ollama pull codellama
pulling manifest
Error: pull model manifest: Get "https://ollama.com/token?nonce=VV5GsyYSIqo_4gO3ILCHrA&scope=repository%!A(MISSING)library%!F(MISSING)codellama%!A(MISSING)pull&service=ollama.com&ts=1712050312": read tcp 10.144.68.189:56764->34.120.132.20:443: read: connection reset by peer
```
### What did you expect to see?
The model `codellama` should be pulled and saved locally.
### Steps to reproduce
1. Connect to corporate network
2. Run `ollama pull ...` or `ollama run ...` for any model not present locally
### Are there any recent changes that introduced the issue?
It used to work in December.
I downgraded and tested two older Ollama versions - both `0.1.24` and `0.1.27` can still pull models on the corporate network. The issue first appears in version `0.1.28`.
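For anyone repeating this bisection, one way to install a specific release is to pin the standard Linux install script to a version; the version string below is only an example:
```
# Install a pinned Ollama release for testing (version selected via OLLAMA_VERSION)
curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=0.1.27 sh
ollama --version
ollama pull codellama
```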
### OS
Linux
### Architecture
x86
### Platform
_No response_
### Ollama version
0.1.30
### GPU
_No response_
### GPU info
No GPU
### CPU
Intel
### Other software
_No response_ | {
"login": "chopeen",
"id": 183731,
"node_id": "MDQ6VXNlcjE4MzczMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/183731?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chopeen",
"html_url": "https://github.com/chopeen",
"followers_url": "https://api.github.com/users/chopeen/followers",
"following_url": "https://api.github.com/users/chopeen/following{/other_user}",
"gists_url": "https://api.github.com/users/chopeen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chopeen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chopeen/subscriptions",
"organizations_url": "https://api.github.com/users/chopeen/orgs",
"repos_url": "https://api.github.com/users/chopeen/repos",
"events_url": "https://api.github.com/users/chopeen/events{/privacy}",
"received_events_url": "https://api.github.com/users/chopeen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3452/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3452/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3946 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3946/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3946/comments | https://api.github.com/repos/ollama/ollama/issues/3946/events | https://github.com/ollama/ollama/issues/3946 | 2,265,926,090 | I_kwDOJ0Z1Ps6HD0nK | 3,946 | An existing connection was forcibly closed by the remote host. | {
"login": "icreatewithout",
"id": 34464412,
"node_id": "MDQ6VXNlcjM0NDY0NDEy",
"avatar_url": "https://avatars.githubusercontent.com/u/34464412?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/icreatewithout",
"html_url": "https://github.com/icreatewithout",
"followers_url": "https://api.github.com/users/icreatewithout/followers",
"following_url": "https://api.github.com/users/icreatewithout/following{/other_user}",
"gists_url": "https://api.github.com/users/icreatewithout/gists{/gist_id}",
"starred_url": "https://api.github.com/users/icreatewithout/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/icreatewithout/subscriptions",
"organizations_url": "https://api.github.com/users/icreatewithout/orgs",
"repos_url": "https://api.github.com/users/icreatewithout/repos",
"events_url": "https://api.github.com/users/icreatewithout/events{/privacy}",
"received_events_url": "https://api.github.com/users/icreatewithout/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-04-26T14:36:09 | 2024-05-01T21:07:51 | 2024-05-01T21:07:51 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
pulling manifest
Error: pull model manifest: Get "https://ollama.com/token?nonce=2AycvwJ7brPYMZDkoil6gg&scope=repository%!A(MISSING)library%!F(MISSING)llama3%!A(MISSING)pull&service=ollama.com&ts=1714142386": read tcp 192.168.3.57:4603->34.120.132.20:443: wsarecv: An existing connection was forcibly closed by the remote host.
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
_No response_ | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3946/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3946/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/171 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/171/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/171/comments | https://api.github.com/repos/ollama/ollama/issues/171/events | https://github.com/ollama/ollama/pull/171 | 1,816,538,312 | PR_kwDOJ0Z1Ps5WIw9f | 171 | fix extended tag names | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-07-22T02:08:46 | 2023-07-22T03:27:25 | 2023-07-22T03:27:25 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/171",
"html_url": "https://github.com/ollama/ollama/pull/171",
"diff_url": "https://github.com/ollama/ollama/pull/171.diff",
"patch_url": "https://github.com/ollama/ollama/pull/171.patch",
"merged_at": "2023-07-22T03:27:25"
} | null | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/171/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/171/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2841 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2841/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2841/comments | https://api.github.com/repos/ollama/ollama/issues/2841/events | https://github.com/ollama/ollama/issues/2841 | 2,161,897,987 | I_kwDOJ0Z1Ps6A2_ID | 2,841 | Add/Remove Model Repos + Self Host Your Own Model Repo + Pull Models From Other Repos | {
"login": "trymeouteh",
"id": 31172274,
"node_id": "MDQ6VXNlcjMxMTcyMjc0",
"avatar_url": "https://avatars.githubusercontent.com/u/31172274?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/trymeouteh",
"html_url": "https://github.com/trymeouteh",
"followers_url": "https://api.github.com/users/trymeouteh/followers",
"following_url": "https://api.github.com/users/trymeouteh/following{/other_user}",
"gists_url": "https://api.github.com/users/trymeouteh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/trymeouteh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/trymeouteh/subscriptions",
"organizations_url": "https://api.github.com/users/trymeouteh/orgs",
"repos_url": "https://api.github.com/users/trymeouteh/repos",
"events_url": "https://api.github.com/users/trymeouteh/events{/privacy}",
"received_events_url": "https://api.github.com/users/trymeouteh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2024-02-29T18:52:03 | 2024-04-25T08:18:36 | 2024-03-01T01:09:09 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | 1. The ability to manage model repos in Ollama, similar to how F-Droid (the Android app store) lets you add and remove repos so you can get apps from other sources (see the sketch after this list).
2. The ability to self-host your own repo, so anyone can run their own model repository.
   - Whether this means simply setting up a git repo (GitHub, GitLab, Gitea, Forgejo) or self-hosting a website and server for the models.
   - Models could be distributed as direct downloads or torrents. A torrent option would help reduce bandwidth for the repo provider and encourage users of a model to seed it (sharing is caring).
3. The ability to pull models from other sources. If the same model is available on the Ollama repo and on another repo, there should be a way to download either and to tell apart which one you are pulling.
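For illustration only, a rough sketch of what the requested repo management could look like from the command line. The `ollama repo` subcommands, the repo name, and the URL are all hypothetical - none of this exists in the current Ollama CLI:
```
# Hypothetical commands sketching the requested workflow (not part of today's Ollama CLI)
ollama repo add communityhub https://models.example.com   # register a third-party model repo
ollama repo list                                          # show all configured repos
ollama pull communityhub/llama3                           # pull a model from that repo instead of ollama.com
ollama repo remove communityhub                           # remove the repo again
```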
| {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2841/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2841/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8258 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8258/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8258/comments | https://api.github.com/repos/ollama/ollama/issues/8258/events | https://github.com/ollama/ollama/issues/8258 | 2,761,061,859 | I_kwDOJ0Z1Ps6kknXj | 8,258 | Error: an error was encountered while running the model: unexpected EOF | {
"login": "Hyccccccc",
"id": 60806532,
"node_id": "MDQ6VXNlcjYwODA2NTMy",
"avatar_url": "https://avatars.githubusercontent.com/u/60806532?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hyccccccc",
"html_url": "https://github.com/Hyccccccc",
"followers_url": "https://api.github.com/users/Hyccccccc/followers",
"following_url": "https://api.github.com/users/Hyccccccc/following{/other_user}",
"gists_url": "https://api.github.com/users/Hyccccccc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hyccccccc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hyccccccc/subscriptions",
"organizations_url": "https://api.github.com/users/Hyccccccc/orgs",
"repos_url": "https://api.github.com/users/Hyccccccc/repos",
"events_url": "https://api.github.com/users/Hyccccccc/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hyccccccc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 18 | 2024-12-27T16:22:40 | 2024-12-31T03:26:28 | 2024-12-31T03:26:28 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I used Ollama to pull llama3.1:8b. When I run the model, the following error occurs:
> ollama run llama3.1
> \>\>\> hello
> Hello! HowError: an error was encountered while running the model: unexpected EOF
(I'm using an Ubuntu 20.04 image, and since I don't have the required permissions, _systemctl_ is unavailable in my environment. I'm not sure whether this is related.)
With OLLAMA_DEBUG=1 set, the server log shows the following:
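Since systemctl is unavailable here, the server has to be run in the foreground; a minimal way to do that with debug logging enabled (the exact invocation used in this environment is assumed) is:
```
# Run the Ollama server in the foreground with verbose debug logging
OLLAMA_DEBUG=1 ollama serve
```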
2024/12/28 00:12:37 routes.go:1259: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:true OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2024-12-28T00:12:37.377+08:00 level=INFO source=images.go:757 msg="total blobs: 5"
time=2024-12-28T00:12:37.377+08:00 level=INFO source=images.go:764 msg="total unused blobs removed: 0"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
- using env: export GIN_MODE=release
- using code: gin.SetMode(gin.ReleaseMode)
[GIN-debug] POST /api/pull --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
[GIN-debug] POST /api/generate --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST /api/chat --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST /api/embed --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
[GIN-debug] POST /api/embeddings --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST /api/create --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
[GIN-debug] POST /api/push --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers)
[GIN-debug] POST /api/copy --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers)
[GIN-debug] DELETE /api/delete --> github.com/ollama/ollama/server.(*Server).DeleteHandler-fm (5 handlers)
[GIN-debug] POST /api/show --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers)
[GIN-debug] POST /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] GET /api/ps --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers)
[GIN-debug] POST /v1/chat/completions --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] POST /v1/completions --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
[GIN-debug] POST /v1/embeddings --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
[GIN-debug] GET /v1/models --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
[GIN-debug] GET /v1/models/:model --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
[GIN-debug] GET / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] GET /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] HEAD /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] HEAD /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
time=2024-12-28T00:12:37.378+08:00 level=INFO source=routes.go:1310 msg="Listening on 127.0.0.1:11434 (version 0.5.4)"
time=2024-12-28T00:12:37.378+08:00 level=DEBUG source=common.go:80 msg="runners located" dir=/usr/local/lib/ollama/runners
time=2024-12-28T00:12:37.379+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cpu_avx/ollama_llama_server
time=2024-12-28T00:12:37.379+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cpu_avx2/ollama_llama_server
time=2024-12-28T00:12:37.379+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cuda_v11_avx/ollama_llama_server
time=2024-12-28T00:12:37.379+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cuda_v12_avx/ollama_llama_server
time=2024-12-28T00:12:37.379+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/rocm_avx/ollama_llama_server
time=2024-12-28T00:12:37.379+08:00 level=INFO source=routes.go:1339 msg="Dynamic LLM libraries" runners="[cpu_avx cpu_avx2 cuda_v11_avx cuda_v12_avx rocm_avx cpu]"
time=2024-12-28T00:12:37.379+08:00 level=DEBUG source=routes.go:1340 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-12-28T00:12:37.379+08:00 level=DEBUG source=sched.go:105 msg="starting llm scheduler"
time=2024-12-28T00:12:37.379+08:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
time=2024-12-28T00:12:37.386+08:00 level=DEBUG source=gpu.go:99 msg="searching for GPU discovery libraries for NVIDIA"
time=2024-12-28T00:12:37.386+08:00 level=DEBUG source=gpu.go:517 msg="Searching for GPU library" name=libcuda.so*
time=2024-12-28T00:12:37.386+08:00 level=DEBUG source=gpu.go:543 msg="gpu library search" globs="[/usr/local/lib/ollama/libcuda.so* /usr/local/lib/ollama/libcuda.so* /opt/orion/orion_runtime/gpu/cuda/libcuda.so* /opt/orion/orion_runtime/gpu/cuda/libcuda.so* /opt/orion/orion_runtime/lib/libcuda.so* /usr/lib64/libcuda.so* /usr/lib/libcuda.so* /usr/local/nvidia/lib/libcuda.so* /usr/local/nvidia/lib64/libcuda.so* /usr/local/cuda*/targets/*/lib/libcuda.so* /usr/lib/*-linux-gnu/nvidia/current/libcuda.so* /usr/lib/*-linux-gnu/libcuda.so* /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/drivers/*/libcuda.so* /opt/cuda/lib*/libcuda.so* /usr/local/cuda/lib*/libcuda.so* /usr/lib*/libcuda.so* /usr/local/lib*/libcuda.so*]"
time=2024-12-28T00:12:37.398+08:00 level=DEBUG source=gpu.go:577 msg="discovered GPU libraries" paths=[/opt/orion/orion_runtime/gpu/cuda/libcuda_orion.so]
initializing /opt/orion/orion_runtime/gpu/cuda/libcuda_orion.so
dlsym: cuInit - 0x7efd7438fccb
dlsym: cuDriverGetVersion - 0x7efd7438fd6e
dlsym: cuDeviceGetCount - 0x7efd7438fec2
dlsym: cuDeviceGet - 0x7efd7438fe14
dlsym: cuDeviceGetAttribute - 0x7efd74390230
dlsym: cuDeviceGetUuid - 0x7efd74390020
dlsym: cuDeviceGetName - 0x7efd7438ff6e
dlsym: cuCtxCreate_v3 - 0x7efd743a8d85
dlsym: cuMemGetInfo_v2 - 0x7efd743a1670
dlsym: cuCtxDestroy - 0x7efd743908fa
calling cuInit
calling cuDriverGetVersion
raw version 0x2f08
CUDA driver version: 12.4
calling cuDeviceGetCount
device count 1
time=2024-12-28T00:12:37.555+08:00 level=DEBUG source=gpu.go:134 msg="detected GPUs" count=1 library=/opt/orion/orion_runtime/gpu/cuda/libcuda_orion.so
[GPU-00000000-0000-000a-02aa-6b26e8000000] CUDA totalMem 22889 mb
[GPU-00000000-0000-000a-02aa-6b26e8000000] CUDA freeMem 22889 mb
[GPU-00000000-0000-000a-02aa-6b26e8000000] Compute Capability 8.6
time=2024-12-28T00:12:38.012+08:00 level=DEBUG source=amd_linux.go:421 msg="amdgpu driver not detected /sys/module/amdgpu"
releasing cuda driver library
time=2024-12-28T00:12:38.013+08:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-00000000-0000-000a-02aa-6b26e8000000 library=cuda variant=v12 compute=8.6 driver=12.4 name="NVIDIA A40" total="22.4 GiB" available="22.4 GiB"
[GIN] 2024/12/28 - 00:12:41 | 200 | 157.861µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/12/28 - 00:12:41 | 200 | 28.317212ms | 127.0.0.1 | POST "/api/show"
time=2024-12-28T00:12:41.916+08:00 level=DEBUG source=gpu.go:406 msg="updating system memory data" before.total="881.5 GiB" before.free="832.5 GiB" before.free_swap="0 B" now.total="881.5 GiB" now.free="832.5 GiB" now.free_swap="0 B"
initializing /opt/orion/orion_runtime/gpu/cuda/libcuda_orion.so
dlsym: cuInit - 0x7efd6c3e8ccb
dlsym: cuDriverGetVersion - 0x7efd6c3e8d6e
dlsym: cuDeviceGetCount - 0x7efd6c3e8ec2
dlsym: cuDeviceGet - 0x7efd6c3e8e14
dlsym: cuDeviceGetAttribute - 0x7efd6c3e9230
dlsym: cuDeviceGetUuid - 0x7efd6c3e9020
dlsym: cuDeviceGetName - 0x7efd6c3e8f6e
dlsym: cuCtxCreate_v3 - 0x7efd6c401d85
dlsym: cuMemGetInfo_v2 - 0x7efd6c3fa670
dlsym: cuCtxDestroy - 0x7efd6c3e98fa
calling cuInit
calling cuDriverGetVersion
raw version 0x2f08
CUDA driver version: 12.4
calling cuDeviceGetCount
device count 1
time=2024-12-28T00:12:41.941+08:00 level=DEBUG source=gpu.go:456 msg="updating cuda memory data" gpu=GPU-00000000-0000-000a-02aa-6b26e8000000 name="NVIDIA A40" overhead="0 B" before.total="22.4 GiB" before.free="22.4 GiB" now.total="22.4 GiB" now.free="22.4 GiB" now.used="0 B"
releasing cuda driver library
time=2024-12-28T00:12:41.942+08:00 level=DEBUG source=sched.go:181 msg="updating default concurrency" OLLAMA_MAX_LOADED_MODELS=0x556684a41780 gpu_count=1
time=2024-12-28T00:12:41.986+08:00 level=DEBUG source=sched.go:224 msg="loading first model" model=/root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29
time=2024-12-28T00:12:41.986+08:00 level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=1 available="[22.4 GiB]"
time=2024-12-28T00:12:41.987+08:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29 gpu=GPU-00000000-0000-000a-02aa-6b26e8000000 parallel=4 available=24000856064 required="6.5 GiB"
time=2024-12-28T00:12:41.987+08:00 level=DEBUG source=gpu.go:406 msg="updating system memory data" before.total="881.5 GiB" before.free="832.5 GiB" before.free_swap="0 B" now.total="881.5 GiB" now.free="832.4 GiB" now.free_swap="0 B"
initializing /opt/orion/orion_runtime/gpu/cuda/libcuda_orion.so
dlsym: cuInit - 0x7efd6c3e8ccb
dlsym: cuDriverGetVersion - 0x7efd6c3e8d6e
dlsym: cuDeviceGetCount - 0x7efd6c3e8ec2
dlsym: cuDeviceGet - 0x7efd6c3e8e14
dlsym: cuDeviceGetAttribute - 0x7efd6c3e9230
dlsym: cuDeviceGetUuid - 0x7efd6c3e9020
dlsym: cuDeviceGetName - 0x7efd6c3e8f6e
dlsym: cuCtxCreate_v3 - 0x7efd6c401d85
dlsym: cuMemGetInfo_v2 - 0x7efd6c3fa670
dlsym: cuCtxDestroy - 0x7efd6c3e98fa
calling cuInit
calling cuDriverGetVersion
raw version 0x2f08
CUDA driver version: 12.4
calling cuDeviceGetCount
device count 1
time=2024-12-28T00:12:41.999+08:00 level=DEBUG source=gpu.go:456 msg="updating cuda memory data" gpu=GPU-00000000-0000-000a-02aa-6b26e8000000 name="NVIDIA A40" overhead="0 B" before.total="22.4 GiB" before.free="22.4 GiB" now.total="22.4 GiB" now.free="22.4 GiB" now.used="0 B"
releasing cuda driver library
time=2024-12-28T00:12:41.999+08:00 level=INFO source=server.go:104 msg="system memory" total="881.5 GiB" free="832.4 GiB" free_swap="0 B"
time=2024-12-28T00:12:41.999+08:00 level=DEBUG source=memory.go:107 msg=evaluating library=cuda gpu_count=1 available="[22.4 GiB]"
time=2024-12-28T00:12:42.000+08:00 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=33 layers.offload=33 layers.split="" memory.available="[22.4 GiB]" memory.gpu_overhead="0 B" memory.required.full="6.5 GiB" memory.required.partial="6.5 GiB" memory.required.kv="1.0 GiB" memory.required.allocations="[6.5 GiB]" memory.weights.total="4.9 GiB" memory.weights.repeating="4.5 GiB" memory.weights.nonrepeating="411.0 MiB" memory.graph.full="560.0 MiB" memory.graph.partial="677.5 MiB"
time=2024-12-28T00:12:42.000+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cpu_avx/ollama_llama_server
time=2024-12-28T00:12:42.000+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cpu_avx2/ollama_llama_server
time=2024-12-28T00:12:42.000+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cuda_v11_avx/ollama_llama_server
time=2024-12-28T00:12:42.000+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cuda_v12_avx/ollama_llama_server
time=2024-12-28T00:12:42.000+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/rocm_avx/ollama_llama_server
time=2024-12-28T00:12:42.000+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cpu_avx/ollama_llama_server
time=2024-12-28T00:12:42.000+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cpu_avx2/ollama_llama_server
time=2024-12-28T00:12:42.000+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cuda_v11_avx/ollama_llama_server
time=2024-12-28T00:12:42.000+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/cuda_v12_avx/ollama_llama_server
time=2024-12-28T00:12:42.000+08:00 level=DEBUG source=common.go:124 msg="availableServers : found" file=/usr/local/lib/ollama/runners/rocm_avx/ollama_llama_server
time=2024-12-28T00:12:42.001+08:00 level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/local/lib/ollama/runners/cuda_v12_avx/ollama_llama_server runner --model /root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29 --ctx-size 8192 --batch-size 512 --n-gpu-layers 33 --verbose --threads 48 --parallel 4 --port 34605"
time=2024-12-28T00:12:42.001+08:00 level=DEBUG source=server.go:393 msg=subprocess environment="[CUDA_VERSION=12.1.0 LD_LIBRARY_PATH=/usr/local/lib/ollama:/usr/local/lib/ollama:/usr/local/lib/ollama/runners/cuda_v12_avx:/opt/orion/orion_runtime/gpu/cuda:/opt/orion/orion_runtime/gpu/cuda:/opt/orion/orion_runtime/lib:/usr/lib64:/usr/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64 PATH=/root/miniconda3/bin:/root/miniconda3/condabin:/root/miniconda3/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin CUDA_VISIBLE_DEVICES=GPU-00000000-0000-000a-02aa-6b26e8000000]"
time=2024-12-28T00:12:42.003+08:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2024-12-28T00:12:42.005+08:00 level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2024-12-28T00:12:42.006+08:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
time=2024-12-28T00:12:45.320+08:00 level=INFO source=runner.go:945 msg="starting go runner"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA A40, compute capability 8.6, VMM: yes
time=2024-12-28T00:12:45.764+08:00 level=INFO source=runner.go:946 msg=system info="CUDA : ARCHS = 600,610,620,700,720,750,800,860,870,890,900 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=48
time=2024-12-28T00:12:45.764+08:00 level=INFO source=.:0 msg="Server listening on 127.0.0.1:34605"
llama_load_model_from_file: using device CUDA0 (NVIDIA A40) - 22889 MiB free
time=2024-12-28T00:12:45.780+08:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: loaded meta data with 29 key-value pairs and 292 tensors from /root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Meta Llama 3.1 8B Instruct
llama_model_loader: - kv 3: general.finetune str = Instruct
llama_model_loader: - kv 4: general.basename str = Meta-Llama-3.1
llama_model_loader: - kv 5: general.size_label str = 8B
llama_model_loader: - kv 6: general.license str = llama3.1
llama_model_loader: - kv 7: general.tags arr[str,6] = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv 8: general.languages arr[str,8] = ["en", "de", "fr", "it", "pt", "hi", ...
llama_model_loader: - kv 9: llama.block_count u32 = 32
llama_model_loader: - kv 10: llama.context_length u32 = 131072
llama_model_loader: - kv 11: llama.embedding_length u32 = 4096
llama_model_loader: - kv 12: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 13: llama.attention.head_count u32 = 32
llama_model_loader: - kv 14: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 15: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 16: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 17: general.file_type u32 = 15
llama_model_loader: - kv 18: llama.vocab_size u32 = 128256
llama_model_loader: - kv 19: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 20: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 21: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 22: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 23: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 24: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 25: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 26: tokenizer.ggml.eos_token_id u32 = 128009
llama_model_loader: - kv 27: tokenizer.chat_template str = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv 28: general.quantization_version u32 = 2
llama_model_loader: - type f32: 66 tensors
llama_model_loader: - type q4_K: 193 tensors
llama_model_loader: - type q6_K: 33 tensors
llm_load_vocab: control token: 128254 '<|reserved_special_token_246|>' is not marked as EOG
llm_load_vocab: control token: 128249 '<|reserved_special_token_241|>' is not marked as EOG
llm_load_vocab: control token: 128246 '<|reserved_special_token_238|>' is not marked as EOG
llm_load_vocab: control token: 128243 '<|reserved_special_token_235|>' is not marked as EOG
llm_load_vocab: control token: 128242 '<|reserved_special_token_234|>' is not marked as EOG
llm_load_vocab: control token: 128241 '<|reserved_special_token_233|>' is not marked as EOG
llm_load_vocab: control token: 128240 '<|reserved_special_token_232|>' is not marked as EOG
llm_load_vocab: control token: 128235 '<|reserved_special_token_227|>' is not marked as EOG
llm_load_vocab: control token: 128231 '<|reserved_special_token_223|>' is not marked as EOG
llm_load_vocab: control token: 128230 '<|reserved_special_token_222|>' is not marked as EOG
llm_load_vocab: control token: 128228 '<|reserved_special_token_220|>' is not marked as EOG
llm_load_vocab: control token: 128225 '<|reserved_special_token_217|>' is not marked as EOG
llm_load_vocab: control token: 128218 '<|reserved_special_token_210|>' is not marked as EOG
llm_load_vocab: control token: 128214 '<|reserved_special_token_206|>' is not marked as EOG
llm_load_vocab: control token: 128213 '<|reserved_special_token_205|>' is not marked as EOG
llm_load_vocab: control token: 128207 '<|reserved_special_token_199|>' is not marked as EOG
llm_load_vocab: control token: 128206 '<|reserved_special_token_198|>' is not marked as EOG
llm_load_vocab: control token: 128204 '<|reserved_special_token_196|>' is not marked as EOG
llm_load_vocab: control token: 128200 '<|reserved_special_token_192|>' is not marked as EOG
llm_load_vocab: control token: 128199 '<|reserved_special_token_191|>' is not marked as EOG
llm_load_vocab: control token: 128198 '<|reserved_special_token_190|>' is not marked as EOG
llm_load_vocab: control token: 128196 '<|reserved_special_token_188|>' is not marked as EOG
llm_load_vocab: control token: 128194 '<|reserved_special_token_186|>' is not marked as EOG
llm_load_vocab: control token: 128193 '<|reserved_special_token_185|>' is not marked as EOG
llm_load_vocab: control token: 128188 '<|reserved_special_token_180|>' is not marked as EOG
llm_load_vocab: control token: 128187 '<|reserved_special_token_179|>' is not marked as EOG
llm_load_vocab: control token: 128185 '<|reserved_special_token_177|>' is not marked as EOG
llm_load_vocab: control token: 128184 '<|reserved_special_token_176|>' is not marked as EOG
llm_load_vocab: control token: 128180 '<|reserved_special_token_172|>' is not marked as EOG
llm_load_vocab: control token: 128179 '<|reserved_special_token_171|>' is not marked as EOG
llm_load_vocab: control token: 128178 '<|reserved_special_token_170|>' is not marked as EOG
llm_load_vocab: control token: 128177 '<|reserved_special_token_169|>' is not marked as EOG
llm_load_vocab: control token: 128176 '<|reserved_special_token_168|>' is not marked as EOG
llm_load_vocab: control token: 128175 '<|reserved_special_token_167|>' is not marked as EOG
llm_load_vocab: control token: 128171 '<|reserved_special_token_163|>' is not marked as EOG
llm_load_vocab: control token: 128170 '<|reserved_special_token_162|>' is not marked as EOG
llm_load_vocab: control token: 128169 '<|reserved_special_token_161|>' is not marked as EOG
llm_load_vocab: control token: 128168 '<|reserved_special_token_160|>' is not marked as EOG
llm_load_vocab: control token: 128165 '<|reserved_special_token_157|>' is not marked as EOG
llm_load_vocab: control token: 128162 '<|reserved_special_token_154|>' is not marked as EOG
llm_load_vocab: control token: 128158 '<|reserved_special_token_150|>' is not marked as EOG
llm_load_vocab: control token: 128156 '<|reserved_special_token_148|>' is not marked as EOG
llm_load_vocab: control token: 128155 '<|reserved_special_token_147|>' is not marked as EOG
llm_load_vocab: control token: 128154 '<|reserved_special_token_146|>' is not marked as EOG
llm_load_vocab: control token: 128151 '<|reserved_special_token_143|>' is not marked as EOG
llm_load_vocab: control token: 128149 '<|reserved_special_token_141|>' is not marked as EOG
llm_load_vocab: control token: 128147 '<|reserved_special_token_139|>' is not marked as EOG
llm_load_vocab: control token: 128146 '<|reserved_special_token_138|>' is not marked as EOG
llm_load_vocab: control token: 128144 '<|reserved_special_token_136|>' is not marked as EOG
llm_load_vocab: control token: 128142 '<|reserved_special_token_134|>' is not marked as EOG
llm_load_vocab: control token: 128141 '<|reserved_special_token_133|>' is not marked as EOG
llm_load_vocab: control token: 128138 '<|reserved_special_token_130|>' is not marked as EOG
llm_load_vocab: control token: 128136 '<|reserved_special_token_128|>' is not marked as EOG
llm_load_vocab: control token: 128135 '<|reserved_special_token_127|>' is not marked as EOG
llm_load_vocab: control token: 128134 '<|reserved_special_token_126|>' is not marked as EOG
llm_load_vocab: control token: 128133 '<|reserved_special_token_125|>' is not marked as EOG
llm_load_vocab: control token: 128131 '<|reserved_special_token_123|>' is not marked as EOG
llm_load_vocab: control token: 128128 '<|reserved_special_token_120|>' is not marked as EOG
llm_load_vocab: control token: 128124 '<|reserved_special_token_116|>' is not marked as EOG
llm_load_vocab: control token: 128123 '<|reserved_special_token_115|>' is not marked as EOG
llm_load_vocab: control token: 128122 '<|reserved_special_token_114|>' is not marked as EOG
llm_load_vocab: control token: 128119 '<|reserved_special_token_111|>' is not marked as EOG
llm_load_vocab: control token: 128115 '<|reserved_special_token_107|>' is not marked as EOG
llm_load_vocab: control token: 128112 '<|reserved_special_token_104|>' is not marked as EOG
llm_load_vocab: control token: 128110 '<|reserved_special_token_102|>' is not marked as EOG
llm_load_vocab: control token: 128109 '<|reserved_special_token_101|>' is not marked as EOG
llm_load_vocab: control token: 128108 '<|reserved_special_token_100|>' is not marked as EOG
llm_load_vocab: control token: 128106 '<|reserved_special_token_98|>' is not marked as EOG
llm_load_vocab: control token: 128103 '<|reserved_special_token_95|>' is not marked as EOG
llm_load_vocab: control token: 128102 '<|reserved_special_token_94|>' is not marked as EOG
llm_load_vocab: control token: 128101 '<|reserved_special_token_93|>' is not marked as EOG
llm_load_vocab: control token: 128097 '<|reserved_special_token_89|>' is not marked as EOG
llm_load_vocab: control token: 128091 '<|reserved_special_token_83|>' is not marked as EOG
llm_load_vocab: control token: 128090 '<|reserved_special_token_82|>' is not marked as EOG
llm_load_vocab: control token: 128089 '<|reserved_special_token_81|>' is not marked as EOG
llm_load_vocab: control token: 128087 '<|reserved_special_token_79|>' is not marked as EOG
llm_load_vocab: control token: 128085 '<|reserved_special_token_77|>' is not marked as EOG
llm_load_vocab: control token: 128081 '<|reserved_special_token_73|>' is not marked as EOG
llm_load_vocab: control token: 128078 '<|reserved_special_token_70|>' is not marked as EOG
llm_load_vocab: control token: 128076 '<|reserved_special_token_68|>' is not marked as EOG
llm_load_vocab: control token: 128075 '<|reserved_special_token_67|>' is not marked as EOG
llm_load_vocab: control token: 128073 '<|reserved_special_token_65|>' is not marked as EOG
llm_load_vocab: control token: 128068 '<|reserved_special_token_60|>' is not marked as EOG
llm_load_vocab: control token: 128067 '<|reserved_special_token_59|>' is not marked as EOG
llm_load_vocab: control token: 128065 '<|reserved_special_token_57|>' is not marked as EOG
llm_load_vocab: control token: 128063 '<|reserved_special_token_55|>' is not marked as EOG
llm_load_vocab: control token: 128062 '<|reserved_special_token_54|>' is not marked as EOG
llm_load_vocab: control token: 128060 '<|reserved_special_token_52|>' is not marked as EOG
llm_load_vocab: control token: 128059 '<|reserved_special_token_51|>' is not marked as EOG
llm_load_vocab: control token: 128057 '<|reserved_special_token_49|>' is not marked as EOG
llm_load_vocab: control token: 128054 '<|reserved_special_token_46|>' is not marked as EOG
llm_load_vocab: control token: 128046 '<|reserved_special_token_38|>' is not marked as EOG
llm_load_vocab: control token: 128045 '<|reserved_special_token_37|>' is not marked as EOG
llm_load_vocab: control token: 128044 '<|reserved_special_token_36|>' is not marked as EOG
llm_load_vocab: control token: 128043 '<|reserved_special_token_35|>' is not marked as EOG
llm_load_vocab: control token: 128038 '<|reserved_special_token_30|>' is not marked as EOG
llm_load_vocab: control token: 128036 '<|reserved_special_token_28|>' is not marked as EOG
llm_load_vocab: control token: 128035 '<|reserved_special_token_27|>' is not marked as EOG
llm_load_vocab: control token: 128032 '<|reserved_special_token_24|>' is not marked as EOG
llm_load_vocab: control token: 128028 '<|reserved_special_token_20|>' is not marked as EOG
llm_load_vocab: control token: 128027 '<|reserved_special_token_19|>' is not marked as EOG
llm_load_vocab: control token: 128024 '<|reserved_special_token_16|>' is not marked as EOG
llm_load_vocab: control token: 128023 '<|reserved_special_token_15|>' is not marked as EOG
llm_load_vocab: control token: 128022 '<|reserved_special_token_14|>' is not marked as EOG
llm_load_vocab: control token: 128021 '<|reserved_special_token_13|>' is not marked as EOG
llm_load_vocab: control token: 128018 '<|reserved_special_token_10|>' is not marked as EOG
llm_load_vocab: control token: 128016 '<|reserved_special_token_8|>' is not marked as EOG
llm_load_vocab: control token: 128015 '<|reserved_special_token_7|>' is not marked as EOG
llm_load_vocab: control token: 128013 '<|reserved_special_token_5|>' is not marked as EOG
llm_load_vocab: control token: 128011 '<|reserved_special_token_3|>' is not marked as EOG
llm_load_vocab: control token: 128005 '<|reserved_special_token_2|>' is not marked as EOG
llm_load_vocab: control token: 128004 '<|finetune_right_pad_id|>' is not marked as EOG
llm_load_vocab: control token: 128002 '<|reserved_special_token_0|>' is not marked as EOG
llm_load_vocab: control token: 128252 '<|reserved_special_token_244|>' is not marked as EOG
llm_load_vocab: control token: 128190 '<|reserved_special_token_182|>' is not marked as EOG
llm_load_vocab: control token: 128183 '<|reserved_special_token_175|>' is not marked as EOG
llm_load_vocab: control token: 128137 '<|reserved_special_token_129|>' is not marked as EOG
llm_load_vocab: control token: 128182 '<|reserved_special_token_174|>' is not marked as EOG
llm_load_vocab: control token: 128040 '<|reserved_special_token_32|>' is not marked as EOG
llm_load_vocab: control token: 128048 '<|reserved_special_token_40|>' is not marked as EOG
llm_load_vocab: control token: 128092 '<|reserved_special_token_84|>' is not marked as EOG
llm_load_vocab: control token: 128215 '<|reserved_special_token_207|>' is not marked as EOG
llm_load_vocab: control token: 128107 '<|reserved_special_token_99|>' is not marked as EOG
llm_load_vocab: control token: 128208 '<|reserved_special_token_200|>' is not marked as EOG
llm_load_vocab: control token: 128145 '<|reserved_special_token_137|>' is not marked as EOG
llm_load_vocab: control token: 128031 '<|reserved_special_token_23|>' is not marked as EOG
llm_load_vocab: control token: 128129 '<|reserved_special_token_121|>' is not marked as EOG
llm_load_vocab: control token: 128201 '<|reserved_special_token_193|>' is not marked as EOG
llm_load_vocab: control token: 128074 '<|reserved_special_token_66|>' is not marked as EOG
llm_load_vocab: control token: 128095 '<|reserved_special_token_87|>' is not marked as EOG
llm_load_vocab: control token: 128186 '<|reserved_special_token_178|>' is not marked as EOG
llm_load_vocab: control token: 128143 '<|reserved_special_token_135|>' is not marked as EOG
llm_load_vocab: control token: 128229 '<|reserved_special_token_221|>' is not marked as EOG
llm_load_vocab: control token: 128007 '<|end_header_id|>' is not marked as EOG
llm_load_vocab: control token: 128055 '<|reserved_special_token_47|>' is not marked as EOG
llm_load_vocab: control token: 128056 '<|reserved_special_token_48|>' is not marked as EOG
llm_load_vocab: control token: 128061 '<|reserved_special_token_53|>' is not marked as EOG
llm_load_vocab: control token: 128153 '<|reserved_special_token_145|>' is not marked as EOG
llm_load_vocab: control token: 128152 '<|reserved_special_token_144|>' is not marked as EOG
llm_load_vocab: control token: 128212 '<|reserved_special_token_204|>' is not marked as EOG
llm_load_vocab: control token: 128172 '<|reserved_special_token_164|>' is not marked as EOG
llm_load_vocab: control token: 128160 '<|reserved_special_token_152|>' is not marked as EOG
llm_load_vocab: control token: 128041 '<|reserved_special_token_33|>' is not marked as EOG
llm_load_vocab: control token: 128181 '<|reserved_special_token_173|>' is not marked as EOG
llm_load_vocab: control token: 128094 '<|reserved_special_token_86|>' is not marked as EOG
llm_load_vocab: control token: 128118 '<|reserved_special_token_110|>' is not marked as EOG
llm_load_vocab: control token: 128236 '<|reserved_special_token_228|>' is not marked as EOG
llm_load_vocab: control token: 128148 '<|reserved_special_token_140|>' is not marked as EOG
llm_load_vocab: control token: 128042 '<|reserved_special_token_34|>' is not marked as EOG
llm_load_vocab: control token: 128139 '<|reserved_special_token_131|>' is not marked as EOG
llm_load_vocab: control token: 128173 '<|reserved_special_token_165|>' is not marked as EOG
llm_load_vocab: control token: 128239 '<|reserved_special_token_231|>' is not marked as EOG
llm_load_vocab: control token: 128157 '<|reserved_special_token_149|>' is not marked as EOG
llm_load_vocab: control token: 128052 '<|reserved_special_token_44|>' is not marked as EOG
llm_load_vocab: control token: 128026 '<|reserved_special_token_18|>' is not marked as EOG
llm_load_vocab: control token: 128003 '<|reserved_special_token_1|>' is not marked as EOG
llm_load_vocab: control token: 128019 '<|reserved_special_token_11|>' is not marked as EOG
llm_load_vocab: control token: 128116 '<|reserved_special_token_108|>' is not marked as EOG
llm_load_vocab: control token: 128161 '<|reserved_special_token_153|>' is not marked as EOG
llm_load_vocab: control token: 128226 '<|reserved_special_token_218|>' is not marked as EOG
llm_load_vocab: control token: 128159 '<|reserved_special_token_151|>' is not marked as EOG
llm_load_vocab: control token: 128012 '<|reserved_special_token_4|>' is not marked as EOG
llm_load_vocab: control token: 128088 '<|reserved_special_token_80|>' is not marked as EOG
llm_load_vocab: control token: 128163 '<|reserved_special_token_155|>' is not marked as EOG
llm_load_vocab: control token: 128001 '<|end_of_text|>' is not marked as EOG
llm_load_vocab: control token: 128113 '<|reserved_special_token_105|>' is not marked as EOG
llm_load_vocab: control token: 128250 '<|reserved_special_token_242|>' is not marked as EOG
llm_load_vocab: control token: 128125 '<|reserved_special_token_117|>' is not marked as EOG
llm_load_vocab: control token: 128053 '<|reserved_special_token_45|>' is not marked as EOG
llm_load_vocab: control token: 128224 '<|reserved_special_token_216|>' is not marked as EOG
llm_load_vocab: control token: 128247 '<|reserved_special_token_239|>' is not marked as EOG
llm_load_vocab: control token: 128251 '<|reserved_special_token_243|>' is not marked as EOG
llm_load_vocab: control token: 128216 '<|reserved_special_token_208|>' is not marked as EOG
llm_load_vocab: control token: 128006 '<|start_header_id|>' is not marked as EOG
llm_load_vocab: control token: 128211 '<|reserved_special_token_203|>' is not marked as EOG
llm_load_vocab: control token: 128077 '<|reserved_special_token_69|>' is not marked as EOG
llm_load_vocab: control token: 128237 '<|reserved_special_token_229|>' is not marked as EOG
llm_load_vocab: control token: 128086 '<|reserved_special_token_78|>' is not marked as EOG
llm_load_vocab: control token: 128227 '<|reserved_special_token_219|>' is not marked as EOG
llm_load_vocab: control token: 128058 '<|reserved_special_token_50|>' is not marked as EOG
llm_load_vocab: control token: 128100 '<|reserved_special_token_92|>' is not marked as EOG
llm_load_vocab: control token: 128209 '<|reserved_special_token_201|>' is not marked as EOG
llm_load_vocab: control token: 128084 '<|reserved_special_token_76|>' is not marked as EOG
llm_load_vocab: control token: 128071 '<|reserved_special_token_63|>' is not marked as EOG
llm_load_vocab: control token: 128070 '<|reserved_special_token_62|>' is not marked as EOG
llm_load_vocab: control token: 128049 '<|reserved_special_token_41|>' is not marked as EOG
llm_load_vocab: control token: 128197 '<|reserved_special_token_189|>' is not marked as EOG
llm_load_vocab: control token: 128072 '<|reserved_special_token_64|>' is not marked as EOG
llm_load_vocab: control token: 128000 '<|begin_of_text|>' is not marked as EOG
llm_load_vocab: control token: 128223 '<|reserved_special_token_215|>' is not marked as EOG
llm_load_vocab: control token: 128217 '<|reserved_special_token_209|>' is not marked as EOG
llm_load_vocab: control token: 128111 '<|reserved_special_token_103|>' is not marked as EOG
llm_load_vocab: control token: 128203 '<|reserved_special_token_195|>' is not marked as EOG
llm_load_vocab: control token: 128051 '<|reserved_special_token_43|>' is not marked as EOG
llm_load_vocab: control token: 128030 '<|reserved_special_token_22|>' is not marked as EOG
llm_load_vocab: control token: 128117 '<|reserved_special_token_109|>' is not marked as EOG
llm_load_vocab: control token: 128010 '<|python_tag|>' is not marked as EOG
llm_load_vocab: control token: 128238 '<|reserved_special_token_230|>' is not marked as EOG
llm_load_vocab: control token: 128255 '<|reserved_special_token_247|>' is not marked as EOG
llm_load_vocab: control token: 128202 '<|reserved_special_token_194|>' is not marked as EOG
llm_load_vocab: control token: 128132 '<|reserved_special_token_124|>' is not marked as EOG
llm_load_vocab: control token: 128248 '<|reserved_special_token_240|>' is not marked as EOG
llm_load_vocab: control token: 128167 '<|reserved_special_token_159|>' is not marked as EOG
llm_load_vocab: control token: 128127 '<|reserved_special_token_119|>' is not marked as EOG
llm_load_vocab: control token: 128105 '<|reserved_special_token_97|>' is not marked as EOG
llm_load_vocab: control token: 128039 '<|reserved_special_token_31|>' is not marked as EOG
llm_load_vocab: control token: 128232 '<|reserved_special_token_224|>' is not marked as EOG
llm_load_vocab: control token: 128166 '<|reserved_special_token_158|>' is not marked as EOG
llm_load_vocab: control token: 128130 '<|reserved_special_token_122|>' is not marked as EOG
llm_load_vocab: control token: 128114 '<|reserved_special_token_106|>' is not marked as EOG
llm_load_vocab: control token: 128234 '<|reserved_special_token_226|>' is not marked as EOG
llm_load_vocab: control token: 128191 '<|reserved_special_token_183|>' is not marked as EOG
llm_load_vocab: control token: 128064 '<|reserved_special_token_56|>' is not marked as EOG
llm_load_vocab: control token: 128140 '<|reserved_special_token_132|>' is not marked as EOG
llm_load_vocab: control token: 128096 '<|reserved_special_token_88|>' is not marked as EOG
llm_load_vocab: control token: 128098 '<|reserved_special_token_90|>' is not marked as EOG
llm_load_vocab: control token: 128192 '<|reserved_special_token_184|>' is not marked as EOG
llm_load_vocab: control token: 128093 '<|reserved_special_token_85|>' is not marked as EOG
llm_load_vocab: control token: 128150 '<|reserved_special_token_142|>' is not marked as EOG
llm_load_vocab: control token: 128222 '<|reserved_special_token_214|>' is not marked as EOG
llm_load_vocab: control token: 128233 '<|reserved_special_token_225|>' is not marked as EOG
llm_load_vocab: control token: 128220 '<|reserved_special_token_212|>' is not marked as EOG
llm_load_vocab: control token: 128034 '<|reserved_special_token_26|>' is not marked as EOG
llm_load_vocab: control token: 128033 '<|reserved_special_token_25|>' is not marked as EOG
llm_load_vocab: control token: 128253 '<|reserved_special_token_245|>' is not marked as EOG
llm_load_vocab: control token: 128195 '<|reserved_special_token_187|>' is not marked as EOG
llm_load_vocab: control token: 128099 '<|reserved_special_token_91|>' is not marked as EOG
llm_load_vocab: control token: 128189 '<|reserved_special_token_181|>' is not marked as EOG
llm_load_vocab: control token: 128210 '<|reserved_special_token_202|>' is not marked as EOG
llm_load_vocab: control token: 128174 '<|reserved_special_token_166|>' is not marked as EOG
llm_load_vocab: control token: 128083 '<|reserved_special_token_75|>' is not marked as EOG
llm_load_vocab: control token: 128080 '<|reserved_special_token_72|>' is not marked as EOG
llm_load_vocab: control token: 128104 '<|reserved_special_token_96|>' is not marked as EOG
llm_load_vocab: control token: 128082 '<|reserved_special_token_74|>' is not marked as EOG
llm_load_vocab: control token: 128219 '<|reserved_special_token_211|>' is not marked as EOG
llm_load_vocab: control token: 128017 '<|reserved_special_token_9|>' is not marked as EOG
llm_load_vocab: control token: 128050 '<|reserved_special_token_42|>' is not marked as EOG
llm_load_vocab: control token: 128205 '<|reserved_special_token_197|>' is not marked as EOG
llm_load_vocab: control token: 128047 '<|reserved_special_token_39|>' is not marked as EOG
llm_load_vocab: control token: 128164 '<|reserved_special_token_156|>' is not marked as EOG
llm_load_vocab: control token: 128020 '<|reserved_special_token_12|>' is not marked as EOG
llm_load_vocab: control token: 128069 '<|reserved_special_token_61|>' is not marked as EOG
llm_load_vocab: control token: 128245 '<|reserved_special_token_237|>' is not marked as EOG
llm_load_vocab: control token: 128121 '<|reserved_special_token_113|>' is not marked as EOG
llm_load_vocab: control token: 128079 '<|reserved_special_token_71|>' is not marked as EOG
llm_load_vocab: control token: 128037 '<|reserved_special_token_29|>' is not marked as EOG
llm_load_vocab: control token: 128244 '<|reserved_special_token_236|>' is not marked as EOG
llm_load_vocab: control token: 128029 '<|reserved_special_token_21|>' is not marked as EOG
llm_load_vocab: control token: 128221 '<|reserved_special_token_213|>' is not marked as EOG
llm_load_vocab: control token: 128066 '<|reserved_special_token_58|>' is not marked as EOG
llm_load_vocab: control token: 128120 '<|reserved_special_token_112|>' is not marked as EOG
llm_load_vocab: control token: 128014 '<|reserved_special_token_6|>' is not marked as EOG
llm_load_vocab: control token: 128025 '<|reserved_special_token_17|>' is not marked as EOG
llm_load_vocab: control token: 128126 '<|reserved_special_token_118|>' is not marked as EOG
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 8B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 8.03 B
llm_load_print_meta: model size = 4.58 GiB (4.89 BPW)
llm_load_print_meta: general.name = Meta Llama 3.1 8B Instruct
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128009 '<|eot_id|>'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: tensor 'token_embd.weight' (q4_K) (and 0 others) cannot be used with preferred buffer type CPU_AARCH64, using CPU instead
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading output layer to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors: CPU_Mapped model buffer size = 281.81 MiB
llm_load_tensors: CUDA0 model buffer size = 4403.49 MiB
time=2024-12-28T00:12:47.291+08:00 level=DEBUG source=server.go:600 msg="model load progress 0.06"
time=2024-12-28T00:12:47.794+08:00 level=DEBUG source=server.go:600 msg="model load progress 0.17"
time=2024-12-28T00:12:48.045+08:00 level=DEBUG source=server.go:600 msg="model load progress 0.21"
time=2024-12-28T00:12:48.298+08:00 level=DEBUG source=server.go:600 msg="model load progress 0.24"
time=2024-12-28T00:12:48.549+08:00 level=DEBUG source=server.go:600 msg="model load progress 0.29"
time=2024-12-28T00:12:48.801+08:00 level=DEBUG source=server.go:600 msg="model load progress 0.32"
time=2024-12-28T00:12:49.053+08:00 level=DEBUG source=server.go:600 msg="model load progress 0.36"
time=2024-12-28T00:12:49.304+08:00 level=DEBUG source=server.go:600 msg="model load progress 0.39"
time=2024-12-28T00:12:49.556+08:00 level=DEBUG source=server.go:600 msg="model load progress 0.44"
time=2024-12-28T00:12:49.808+08:00 level=DEBUG source=server.go:600 msg="model load progress 0.47"
time=2024-12-28T00:12:50.060+08:00 level=DEBUG source=server.go:600 msg="model load progress 0.50"
time=2024-12-28T00:12:50.312+08:00 level=DEBUG source=server.go:600 msg="model load progress 0.55"
time=2024-12-28T00:12:50.564+08:00 level=DEBUG source=server.go:600 msg="model load progress 0.58"
time=2024-12-28T00:12:50.816+08:00 level=DEBUG source=server.go:600 msg="model load progress 0.61"
time=2024-12-28T00:12:51.067+08:00 level=DEBUG source=server.go:600 msg="model load progress 0.65"
time=2024-12-28T00:12:51.319+08:00 level=DEBUG source=server.go:600 msg="model load progress 0.68"
time=2024-12-28T00:12:51.571+08:00 level=DEBUG source=server.go:600 msg="model load progress 0.71"
time=2024-12-28T00:12:51.822+08:00 level=DEBUG source=server.go:600 msg="model load progress 0.76"
time=2024-12-28T00:12:52.074+08:00 level=DEBUG source=server.go:600 msg="model load progress 0.79"
time=2024-12-28T00:12:52.326+08:00 level=DEBUG source=server.go:600 msg="model load progress 0.83"
time=2024-12-28T00:12:52.578+08:00 level=DEBUG source=server.go:600 msg="model load progress 0.87"
time=2024-12-28T00:12:52.830+08:00 level=DEBUG source=server.go:600 msg="model load progress 0.91"
time=2024-12-28T00:12:53.081+08:00 level=DEBUG source=server.go:600 msg="model load progress 0.94"
time=2024-12-28T00:12:53.333+08:00 level=DEBUG source=server.go:600 msg="model load progress 0.98"
time=2024-12-28T00:12:53.585+08:00 level=DEBUG source=server.go:600 msg="model load progress 1.00"
llama_new_context_with_model: n_seq_max = 4
llama_new_context_with_model: n_ctx = 8192
llama_new_context_with_model: n_ctx_per_seq = 2048
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llama_kv_cache_init: CUDA0 KV buffer size = 1024.00 MiB
llama_new_context_with_model: KV self size = 1024.00 MiB, K (f16): 512.00 MiB, V (f16): 512.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 2.02 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 560.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 24.01 MiB
llama_new_context_with_model: graph nodes = 1030
llama_new_context_with_model: graph splits = 2
time=2024-12-28T00:12:53.837+08:00 level=INFO source=server.go:594 msg="llama runner started in 11.83 seconds"
time=2024-12-28T00:12:53.837+08:00 level=DEBUG source=sched.go:462 msg="finished setting up runner" model=/root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29
[GIN] 2024/12/28 - 00:12:53 | 200 | 11.979894799s | 127.0.0.1 | POST "/api/generate"
time=2024-12-28T00:12:53.838+08:00 level=DEBUG source=sched.go:466 msg="context for request finished"
time=2024-12-28T00:12:53.838+08:00 level=DEBUG source=sched.go:339 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29 duration=5m0s
time=2024-12-28T00:12:53.838+08:00 level=DEBUG source=sched.go:357 msg="after processing request finished event" modelPath=/root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29 refCount=0
time=2024-12-28T00:12:57.763+08:00 level=DEBUG source=sched.go:575 msg="evaluating already loaded" model=/root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29
time=2024-12-28T00:12:57.764+08:00 level=DEBUG source=routes.go:1542 msg="chat request" images=0 prompt="<|start_header_id|>user<|end_header_id|>\n\nhello<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
time=2024-12-28T00:12:57.766+08:00 level=DEBUG source=cache.go:104 msg="loading cache slot" id=0 cache=0 prompt=11 used=0 remaining=11
SIGSEGV: segmentation violation
PC=0x7f399000ac23 m=4 sigcode=1 addr=0x1c
signal arrived during cgo execution
goroutine 8 gp=0xc0001fc1c0 m=4 mp=0xc0000cd508 [syscall]:
runtime.cgocall(0x558fdc4657d0, 0xc0000ddb90)
runtime/cgocall.go:167 +0x4b fp=0xc0000ddb68 sp=0xc0000ddb30 pc=0x558fdc219b2b
github.com/ollama/ollama/llama._Cfunc_llama_decode(0x7f391d961b80, {0x1, 0x7f393039c270, 0x0, 0x0, 0x7f3930410520, 0x7f3930412530, 0x7f39303a9ff0, 0x7f391d970b80})
_cgo_gotypes.go:556 +0x4f fp=0xc0000ddb90 sp=0xc0000ddb68 pc=0x558fdc2c3baf
github.com/ollama/ollama/llama.(*Context).Decode.func1(0x558fdc460f0b?, 0x7f391d961b80?)
github.com/ollama/ollama/llama/llama.go:207 +0xf5 fp=0xc0000ddc80 sp=0xc0000ddb90 pc=0x558fdc2c6475
github.com/ollama/ollama/llama.(*Context).Decode(0xc00011e170?, 0x0?)
github.com/ollama/ollama/llama/llama.go:207 +0x13 fp=0xc0000ddcc8 sp=0xc0000ddc80 pc=0x558fdc2c62f3
github.com/ollama/ollama/llama/runner.(*Server).processBatch(0xc00019e1b0, 0xc0001121e0, 0xc0000ddf20)
github.com/ollama/ollama/llama/runner/runner.go:434 +0x23f fp=0xc0000ddee0 sp=0xc0000ddcc8 pc=0x558fdc45fbdf
github.com/ollama/ollama/llama/runner.(*Server).run(0xc00019e1b0, {0x558fdc85ede0, 0xc0001fa050})
github.com/ollama/ollama/llama/runner/runner.go:342 +0x1d5 fp=0xc0000ddfb8 sp=0xc0000ddee0 pc=0x558fdc45f615
github.com/ollama/ollama/llama/runner.Execute.gowrap2()
github.com/ollama/ollama/llama/runner/runner.go:984 +0x28 fp=0xc0000ddfe0 sp=0xc0000ddfb8 pc=0x558fdc464628
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc0000ddfe8 sp=0xc0000ddfe0 pc=0x558fdc227561
created by github.com/ollama/ollama/llama/runner.Execute in goroutine 1
github.com/ollama/ollama/llama/runner/runner.go:984 +0xde5
goroutine 1 gp=0xc0000061c0 m=nil [IO wait]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:424 +0xce fp=0xc00004b7b0 sp=0xc00004b790 pc=0x558fdc21f92e
runtime.netpollblock(0xc000217f80?, 0xdc1b8186?, 0x8f?)
runtime/netpoll.go:575 +0xf7 fp=0xc00004b7e8 sp=0xc00004b7b0 pc=0x558fdc1e4697
internal/poll.runtime_pollWait(0x7f3948ebafd0, 0x72)
runtime/netpoll.go:351 +0x85 fp=0xc00004b808 sp=0xc00004b7e8 pc=0x558fdc21ec25
internal/poll.(*pollDesc).wait(0xc0001f6100?, 0x2c?, 0x0)
internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc00004b830 sp=0xc00004b808 pc=0x558fdc274a67
internal/poll.(*pollDesc).waitRead(...)
internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc0001f6100)
internal/poll/fd_unix.go:620 +0x295 fp=0xc00004b8d8 sp=0xc00004b830 pc=0x558fdc275fd5
net.(*netFD).accept(0xc0001f6100)
net/fd_unix.go:172 +0x29 fp=0xc00004b990 sp=0xc00004b8d8 pc=0x558fdc2ee969
net.(*TCPListener).accept(0xc0000f4700)
net/tcpsock_posix.go:159 +0x1e fp=0xc00004b9e0 sp=0xc00004b990 pc=0x558fdc2fefbe
net.(*TCPListener).Accept(0xc0000f4700)
net/tcpsock.go:372 +0x30 fp=0xc00004ba10 sp=0xc00004b9e0 pc=0x558fdc2fe2f0
net/http.(*onceCloseListener).Accept(0xc000212000?)
<autogenerated>:1 +0x24 fp=0xc00004ba28 sp=0xc00004ba10 pc=0x558fdc43cec4
net/http.(*Server).Serve(0xc0001f44b0, {0x558fdc85e7f8, 0xc0000f4700})
net/http/server.go:3330 +0x30c fp=0xc00004bb58 sp=0xc00004ba28 pc=0x558fdc42ec0c
github.com/ollama/ollama/llama/runner.Execute({0xc000016130?, 0x558fdc2271bc?, 0x0?})
github.com/ollama/ollama/llama/runner/runner.go:1005 +0x11a9 fp=0xc00004bef8 sp=0xc00004bb58 pc=0x558fdc464309
main.main()
github.com/ollama/ollama/cmd/runner/main.go:11 +0x54 fp=0xc00004bf50 sp=0xc00004bef8 pc=0x558fdc465294
runtime.main()
runtime/proc.go:272 +0x29d fp=0xc00004bfe0 sp=0xc00004bf50 pc=0x558fdc1ebc7d
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc00004bfe8 sp=0xc00004bfe0 pc=0x558fdc227561
goroutine 2 gp=0xc000006c40 m=nil [force gc (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:424 +0xce fp=0xc0000c6fa8 sp=0xc0000c6f88 pc=0x558fdc21f92e
runtime.goparkunlock(...)
runtime/proc.go:430
runtime.forcegchelper()
runtime/proc.go:337 +0xb8 fp=0xc0000c6fe0 sp=0xc0000c6fa8 pc=0x558fdc1ebfb8
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc0000c6fe8 sp=0xc0000c6fe0 pc=0x558fdc227561
created by runtime.init.7 in goroutine 1
runtime/proc.go:325 +0x1a
goroutine 3 gp=0xc000007180 m=nil [GC sweep wait]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:424 +0xce fp=0xc0000c7780 sp=0xc0000c7760 pc=0x558fdc21f92e
runtime.goparkunlock(...)
runtime/proc.go:430
runtime.bgsweep(0xc000030100)
runtime/mgcsweep.go:277 +0x94 fp=0xc0000c77c8 sp=0xc0000c7780 pc=0x558fdc1d67f4
runtime.gcenable.gowrap1()
runtime/mgc.go:204 +0x25 fp=0xc0000c77e0 sp=0xc0000c77c8 pc=0x558fdc1cb0a5
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc0000c77e8 sp=0xc0000c77e0 pc=0x558fdc227561
created by runtime.gcenable in goroutine 1
runtime/mgc.go:204 +0x66
goroutine 4 gp=0xc000007340 m=nil [GC scavenge wait]:
runtime.gopark(0xc000030100?, 0x558fdc73fe60?, 0x1?, 0x0?, 0xc000007340?)
runtime/proc.go:424 +0xce fp=0xc0000c7f78 sp=0xc0000c7f58 pc=0x558fdc21f92e
runtime.goparkunlock(...)
runtime/proc.go:430
runtime.(*scavengerState).park(0x558fdca4a060)
runtime/mgcscavenge.go:425 +0x49 fp=0xc0000c7fa8 sp=0xc0000c7f78 pc=0x558fdc1d4229
runtime.bgscavenge(0xc000030100)
runtime/mgcscavenge.go:653 +0x3c fp=0xc0000c7fc8 sp=0xc0000c7fa8 pc=0x558fdc1d479c
runtime.gcenable.gowrap2()
runtime/mgc.go:205 +0x25 fp=0xc0000c7fe0 sp=0xc0000c7fc8 pc=0x558fdc1cb045
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc0000c7fe8 sp=0xc0000c7fe0 pc=0x558fdc227561
created by runtime.gcenable in goroutine 1
runtime/mgc.go:205 +0xa5
goroutine 5 gp=0xc000007c00 m=nil [finalizer wait]:
runtime.gopark(0xc0000c6648?, 0x558fdc1c15a5?, 0xb0?, 0x1?, 0xc0000061c0?)
runtime/proc.go:424 +0xce fp=0xc0000c6620 sp=0xc0000c6600 pc=0x558fdc21f92e
runtime.runfinq()
runtime/mfinal.go:193 +0x107 fp=0xc0000c67e0 sp=0xc0000c6620 pc=0x558fdc1ca127
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc0000c67e8 sp=0xc0000c67e0 pc=0x558fdc227561
created by runtime.createfing in goroutine 1
runtime/mfinal.go:163 +0x3d
goroutine 6 gp=0xc000007dc0 m=nil [chan receive]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:424 +0xce fp=0xc0000c8718 sp=0xc0000c86f8 pc=0x558fdc21f92e
runtime.chanrecv(0xc0001800e0, 0x0, 0x1)
runtime/chan.go:639 +0x41c fp=0xc0000c8790 sp=0xc0000c8718 pc=0x558fdc1bad7c
runtime.chanrecv1(0x0?, 0x0?)
runtime/chan.go:489 +0x12 fp=0xc0000c87b8 sp=0xc0000c8790 pc=0x558fdc1ba952
runtime.unique_runtime_registerUniqueMapCleanup.func1(...)
runtime/mgc.go:1781
runtime.unique_runtime_registerUniqueMapCleanup.gowrap1()
runtime/mgc.go:1784 +0x2f fp=0xc0000c87e0 sp=0xc0000c87b8 pc=0x558fdc1cdf0f
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc0000c87e8 sp=0xc0000c87e0 pc=0x558fdc227561
created by unique.runtime_registerUniqueMapCleanup in goroutine 1
runtime/mgc.go:1779 +0x96
goroutine 18 gp=0xc000218000 m=nil [select]:
runtime.gopark(0xc00031da68?, 0x2?, 0xd?, 0xfe?, 0xc00031d834?)
runtime/proc.go:424 +0xce fp=0xc00031d6a0 sp=0xc00031d680 pc=0x558fdc21f92e
runtime.selectgo(0xc00031da68, 0xc00031d830, 0xc000146000?, 0x0, 0x1?, 0x1)
runtime/select.go:335 +0x7a5 fp=0xc00031d7c8 sp=0xc00031d6a0 pc=0x558fdc1fdb85
github.com/ollama/ollama/llama/runner.(*Server).completion(0xc00019e1b0, {0x558fdc85e978, 0xc0001d6b60}, 0xc0001def00)
github.com/ollama/ollama/llama/runner/runner.go:696 +0xa86 fp=0xc00031dac0 sp=0xc00031d7c8 pc=0x558fdc461a26
github.com/ollama/ollama/llama/runner.(*Server).completion-fm({0x558fdc85e978?, 0xc0001d6b60?}, 0x558fdc432f07?)
<autogenerated>:1 +0x36 fp=0xc00031daf0 sp=0xc00031dac0 pc=0x558fdc464ed6
net/http.HandlerFunc.ServeHTTP(0xc0001d60e0?, {0x558fdc85e978?, 0xc0001d6b60?}, 0x0?)
net/http/server.go:2220 +0x29 fp=0xc00031db18 sp=0xc00031daf0 pc=0x558fdc42bac9
net/http.(*ServeMux).ServeHTTP(0x558fdc1c15a5?, {0x558fdc85e978, 0xc0001d6b60}, 0xc0001def00)
net/http/server.go:2747 +0x1ca fp=0xc00031db68 sp=0xc00031db18 pc=0x558fdc42d96a
net/http.serverHandler.ServeHTTP({0x558fdc85da30?}, {0x558fdc85e978?, 0xc0001d6b60?}, 0x6?)
net/http/server.go:3210 +0x8e fp=0xc00031db98 sp=0xc00031db68 pc=0x558fdc43486e
net/http.(*conn).serve(0xc000212000, {0x558fdc85eda8, 0xc00018ef60})
net/http/server.go:2092 +0x5d0 fp=0xc00031dfb8 sp=0xc00031db98 pc=0x558fdc42a6f0
net/http.(*Server).Serve.gowrap3()
net/http/server.go:3360 +0x28 fp=0xc00031dfe0 sp=0xc00031dfb8 pc=0x558fdc42f008
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc00031dfe8 sp=0xc00031dfe0 pc=0x558fdc227561
created by net/http.(*Server).Serve in goroutine 1
net/http/server.go:3360 +0x485
goroutine 69 gp=0xc0002c8a80 m=nil [IO wait]:
runtime.gopark(0x558fdc1c5a85?, 0x0?, 0x0?, 0x0?, 0xb?)
runtime/proc.go:424 +0xce fp=0xc00012f5a8 sp=0xc00012f588 pc=0x558fdc21f92e
runtime.netpollblock(0x558fdc25b158?, 0xdc1b8186?, 0x8f?)
runtime/netpoll.go:575 +0xf7 fp=0xc00012f5e0 sp=0xc00012f5a8 pc=0x558fdc1e4697
internal/poll.runtime_pollWait(0x7f3948ebaeb8, 0x72)
runtime/netpoll.go:351 +0x85 fp=0xc00012f600 sp=0xc00012f5e0 pc=0x558fdc21ec25
internal/poll.(*pollDesc).wait(0xc000210000?, 0xc000204101?, 0x0)
internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc00012f628 sp=0xc00012f600 pc=0x558fdc274a67
internal/poll.(*pollDesc).waitRead(...)
internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000210000, {0xc000204101, 0x1, 0x1})
internal/poll/fd_unix.go:165 +0x27a fp=0xc00012f6c0 sp=0xc00012f628 pc=0x558fdc2755ba
net.(*netFD).Read(0xc000210000, {0xc000204101?, 0xc00012f748?, 0x558fdc220fd0?})
net/fd_posix.go:55 +0x25 fp=0xc00012f708 sp=0xc00012f6c0 pc=0x558fdc2ed885
net.(*conn).Read(0xc000206008, {0xc000204101?, 0x0?, 0xc0002040f8?})
net/net.go:189 +0x45 fp=0xc00012f750 sp=0xc00012f708 pc=0x558fdc2f7285
net.(*TCPConn).Read(0x558fdca0ad80?, {0xc000204101?, 0x0?, 0x0?})
<autogenerated>:1 +0x25 fp=0xc00012f780 sp=0xc00012f750 pc=0x558fdc304325
net/http.(*connReader).backgroundRead(0xc0002040f0)
net/http/server.go:690 +0x37 fp=0xc00012f7c8 sp=0xc00012f780 pc=0x558fdc425077
net/http.(*connReader).startBackgroundRead.gowrap2()
net/http/server.go:686 +0x25 fp=0xc00012f7e0 sp=0xc00012f7c8 pc=0x558fdc424fa5
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc00012f7e8 sp=0xc00012f7e0 pc=0x558fdc227561
created by net/http.(*connReader).startBackgroundRead in goroutine 18
net/http/server.go:686 +0xb6
rax 0x7f393c0fcad4
rbx 0x7f3947e78ed0
rcx 0x7f393c0fcad4
rdx 0x4
rdi 0x7f393c0fcad4
rsi 0x1c
rbp 0x7f3947e78e20
rsp 0x7f3947e78dc8
r8 0x7f393c0fcac0
r9 0x1
r10 0x0
r11 0x7f393a0fbbd0
r12 0x7f393827eca0
r13 0x7f3938288680
r14 0x7f393827ed70
r15 0x7f3938287fb0
rip 0x7f399000ac23
rflags 0x10246
cs 0x33
fs 0x0
gs 0x0
time=2024-12-28T00:13:01.403+08:00 level=DEBUG source=server.go:1080 msg="stopping llama server"
time=2024-12-28T00:13:01.404+08:00 level=DEBUG source=server.go:1086 msg="waiting for llama server to exit"
time=2024-12-28T00:13:01.404+08:00 level=DEBUG source=server.go:1090 msg="llama server stopped"
[GIN] 2024/12/28 - 00:13:01 | 200 | 3.682007471s | 127.0.0.1 | POST "/api/chat"
time=2024-12-28T00:13:01.404+08:00 level=DEBUG source=sched.go:407 msg="context for request finished"
time=2024-12-28T00:13:01.404+08:00 level=DEBUG source=sched.go:339 msg="runner with non-zero duration has gone idle, adding timer" modelPath=/root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29 duration=5m0s
time=2024-12-28T00:13:01.404+08:00 level=DEBUG source=sched.go:357 msg="after processing request finished event" modelPath=/root/.ollama/models/blobs/sha256-667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29 refCount=0
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.5.4 | {
"login": "Hyccccccc",
"id": 60806532,
"node_id": "MDQ6VXNlcjYwODA2NTMy",
"avatar_url": "https://avatars.githubusercontent.com/u/60806532?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hyccccccc",
"html_url": "https://github.com/Hyccccccc",
"followers_url": "https://api.github.com/users/Hyccccccc/followers",
"following_url": "https://api.github.com/users/Hyccccccc/following{/other_user}",
"gists_url": "https://api.github.com/users/Hyccccccc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hyccccccc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hyccccccc/subscriptions",
"organizations_url": "https://api.github.com/users/Hyccccccc/orgs",
"repos_url": "https://api.github.com/users/Hyccccccc/repos",
"events_url": "https://api.github.com/users/Hyccccccc/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hyccccccc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8258/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/8258/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6810 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6810/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6810/comments | https://api.github.com/repos/ollama/ollama/issues/6810/events | https://github.com/ollama/ollama/pull/6810 | 2,526,735,469 | PR_kwDOJ0Z1Ps57iMlK | 6,810 | Create docker-image.yml | {
"login": "liufriendd",
"id": 128777784,
"node_id": "U_kgDOB6z-OA",
"avatar_url": "https://avatars.githubusercontent.com/u/128777784?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liufriendd",
"html_url": "https://github.com/liufriendd",
"followers_url": "https://api.github.com/users/liufriendd/followers",
"following_url": "https://api.github.com/users/liufriendd/following{/other_user}",
"gists_url": "https://api.github.com/users/liufriendd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/liufriendd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liufriendd/subscriptions",
"organizations_url": "https://api.github.com/users/liufriendd/orgs",
"repos_url": "https://api.github.com/users/liufriendd/repos",
"events_url": "https://api.github.com/users/liufriendd/events{/privacy}",
"received_events_url": "https://api.github.com/users/liufriendd/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-09-15T04:39:02 | 2024-09-16T20:42:14 | 2024-09-16T20:42:14 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6810",
"html_url": "https://github.com/ollama/ollama/pull/6810",
"diff_url": "https://github.com/ollama/ollama/pull/6810.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6810.patch",
"merged_at": null
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6810/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6810/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2645 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2645/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2645/comments | https://api.github.com/repos/ollama/ollama/issues/2645/events | https://github.com/ollama/ollama/issues/2645 | 2,147,315,464 | I_kwDOJ0Z1Ps5__W8I | 2,645 | Biomistral support planned? | {
"login": "DimIsaev",
"id": 11172642,
"node_id": "MDQ6VXNlcjExMTcyNjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/11172642?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DimIsaev",
"html_url": "https://github.com/DimIsaev",
"followers_url": "https://api.github.com/users/DimIsaev/followers",
"following_url": "https://api.github.com/users/DimIsaev/following{/other_user}",
"gists_url": "https://api.github.com/users/DimIsaev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DimIsaev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DimIsaev/subscriptions",
"organizations_url": "https://api.github.com/users/DimIsaev/orgs",
"repos_url": "https://api.github.com/users/DimIsaev/repos",
"events_url": "https://api.github.com/users/DimIsaev/events{/privacy}",
"received_events_url": "https://api.github.com/users/DimIsaev/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2024-02-21T17:30:42 | 2024-02-22T05:19:07 | 2024-02-22T00:55:11 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Biomistral support planned?
| {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2645/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2645/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7104 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7104/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7104/comments | https://api.github.com/repos/ollama/ollama/issues/7104/events | https://github.com/ollama/ollama/issues/7104 | 2,567,600,954 | I_kwDOJ0Z1Ps6ZCns6 | 7,104 | Optimizing GPU Usage for AI Models: Splitting Workloads Across Multiple GPUs Even if the Model Fits in One GPU | {
"login": "varyagnord",
"id": 124573691,
"node_id": "U_kgDOB2zX-w",
"avatar_url": "https://avatars.githubusercontent.com/u/124573691?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/varyagnord",
"html_url": "https://github.com/varyagnord",
"followers_url": "https://api.github.com/users/varyagnord/followers",
"following_url": "https://api.github.com/users/varyagnord/following{/other_user}",
"gists_url": "https://api.github.com/users/varyagnord/gists{/gist_id}",
"starred_url": "https://api.github.com/users/varyagnord/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/varyagnord/subscriptions",
"organizations_url": "https://api.github.com/users/varyagnord/orgs",
"repos_url": "https://api.github.com/users/varyagnord/repos",
"events_url": "https://api.github.com/users/varyagnord/events{/privacy}",
"received_events_url": "https://api.github.com/users/varyagnord/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 9 | 2024-10-05T02:17:29 | 2024-10-05T13:45:17 | 2024-10-05T13:44:56 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I have a question about how Ollama works and its options for working with AI models. If there are 2 GPUs in a PC, for example two RTX 3090s, and we launch a model that takes 20GB of VRAM, it will be loaded onto one card, preferably the fastest one. This means that processing 20GB of data will be handled by approximately 10,500 CUDA cores. Is there an option to divide the model across both GPUs even if it fits on one? For example, if we split the model so that half (10GB) is processed by the 10,500 CUDA cores of the first GPU and the other half by the 10,500 CUDA cores of the second, then a total of 21,000 CUDA cores would process the model. Theoretically, this could improve performance. I understand that in this case the increased data exchange over the PCIe bus might become a bottleneck, but even then such an approach could be faster. If this option does not exist yet, it might be worth implementing and experimenting with. If it works, then in the future (when using multiple GPUs with different numbers of CUDA cores) models should be split proportionally to the number of CUDA cores to achieve maximum performance (a sketch of that proportional split follows this entry). | {
"login": "varyagnord",
"id": 124573691,
"node_id": "U_kgDOB2zX-w",
"avatar_url": "https://avatars.githubusercontent.com/u/124573691?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/varyagnord",
"html_url": "https://github.com/varyagnord",
"followers_url": "https://api.github.com/users/varyagnord/followers",
"following_url": "https://api.github.com/users/varyagnord/following{/other_user}",
"gists_url": "https://api.github.com/users/varyagnord/gists{/gist_id}",
"starred_url": "https://api.github.com/users/varyagnord/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/varyagnord/subscriptions",
"organizations_url": "https://api.github.com/users/varyagnord/orgs",
"repos_url": "https://api.github.com/users/varyagnord/repos",
"events_url": "https://api.github.com/users/varyagnord/events{/privacy}",
"received_events_url": "https://api.github.com/users/varyagnord/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7104/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7104/timeline | null | completed | false |
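The issue above reasons about splitting a model between two GPUs in proportion to their CUDA core counts. The following is a minimal sketch of that arithmetic only; it is not Ollama's scheduler code. The splitLayers helper, the ~10,496-core figure for an RTX 3090, and the 32-layer count (taken from the load log further up) are illustrative assumptions. Note also that, if your Ollama build supports the OLLAMA_SCHED_SPREAD environment variable, it is intended to spread a model across all GPUs even when it would fit on one.

```go
package main

import "fmt"

// splitLayers divides nLayers between two GPUs in proportion to their CUDA
// core counts, which is the scheme proposed in the issue above. Illustrative only.
func splitLayers(nLayers, cores0, cores1 int) (gpu0, gpu1 int) {
	gpu0 = nLayers * cores0 / (cores0 + cores1)
	gpu1 = nLayers - gpu0
	return gpu0, gpu1
}

func main() {
	// Two RTX 3090s (~10,496 CUDA cores each) and the 32 repeating layers
	// reported for the 8B model earlier in this document.
	g0, g1 := splitLayers(32, 10496, 10496)
	fmt.Printf("GPU0: %d layers, GPU1: %d layers\n", g0, g1) // 16 and 16
}
```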
https://api.github.com/repos/ollama/ollama/issues/6901 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6901/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6901/comments | https://api.github.com/repos/ollama/ollama/issues/6901/events | https://github.com/ollama/ollama/issues/6901 | 2,539,946,941 | I_kwDOJ0Z1Ps6XZIO9 | 6,901 | High CPU and slow token generation | {
"login": "maco6096",
"id": 8744820,
"node_id": "MDQ6VXNlcjg3NDQ4MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8744820?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maco6096",
"html_url": "https://github.com/maco6096",
"followers_url": "https://api.github.com/users/maco6096/followers",
"following_url": "https://api.github.com/users/maco6096/following{/other_user}",
"gists_url": "https://api.github.com/users/maco6096/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maco6096/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maco6096/subscriptions",
"organizations_url": "https://api.github.com/users/maco6096/orgs",
"repos_url": "https://api.github.com/users/maco6096/repos",
"events_url": "https://api.github.com/users/maco6096/events{/privacy}",
"received_events_url": "https://api.github.com/users/maco6096/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | closed | false | null | [] | null | 1 | 2024-09-21T03:30:13 | 2024-09-22T16:44:24 | 2024-09-22T16:44:23 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I have 32 cores and 64G of memory, and the CPU supports AVX2. When running qwen1.5-7B-chat.gguf, CPU load reaches 3000% and token generation is very slow.
This is my CPU config: (base) [app@T-LSM-1 ~]$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 32
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 63
Model name: Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz
Stepping: 2
CPU MHz: 2297.339
BogoMIPS: 4594.67
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 25600K
NUMA node0 CPU(s): 0-31
Top:
top - 11:29:21 up 21 days, 19:09, 2 users, load average: 28.07, 16.55, 7.01
Tasks: 528 total, 3 running, 525 sleeping, 0 stopped, 0 zombie
%Cpu(s): 88.3 us, 0.4 sy, 0.0 ni, 1.8 id, 0.0 wa, 9.2 hi, 0.2 si, 0.0 st
MiB Mem : 63857.0 total, 2803.0 free, 2671.4 used, 58382.7 buff/cache
MiB Swap: 8192.0 total, 8185.2 free, 6.8 used. 60476.7 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
163803 root 20 0 2488468 1.1g 7668 R 3025 1.8 1679:15 ollama_llama_se
1919 root 20 0 1121684 518260 21516 S 6.6 0.8 224:03.88 ds_agent
Is there some problem here?
Who can help me?
Thank you!
### OS
Linux
### GPU
_No response_
### CPU
Intel
### Ollama version
0.3.11 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6901/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6901/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/386 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/386/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/386/comments | https://api.github.com/repos/ollama/ollama/issues/386/events | https://github.com/ollama/ollama/issues/386 | 1,857,783,219 | I_kwDOJ0Z1Ps5uu4Wz | 386 | Is integration of llama2-chinese supported? | {
"login": "cypggs",
"id": 3694954,
"node_id": "MDQ6VXNlcjM2OTQ5NTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/3694954?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cypggs",
"html_url": "https://github.com/cypggs",
"followers_url": "https://api.github.com/users/cypggs/followers",
"following_url": "https://api.github.com/users/cypggs/following{/other_user}",
"gists_url": "https://api.github.com/users/cypggs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cypggs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cypggs/subscriptions",
"organizations_url": "https://api.github.com/users/cypggs/orgs",
"repos_url": "https://api.github.com/users/cypggs/repos",
"events_url": "https://api.github.com/users/cypggs/events{/privacy}",
"received_events_url": "https://api.github.com/users/cypggs/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
},
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 4 | 2023-08-19T15:57:07 | 2023-08-30T20:48:21 | 2023-08-30T20:48:21 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | https://chinese.llama.family/
https://github.com/FlagAlpha/Llama2-Chinese | {
"login": "mchiang0610",
"id": 3325447,
"node_id": "MDQ6VXNlcjMzMjU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchiang0610",
"html_url": "https://github.com/mchiang0610",
"followers_url": "https://api.github.com/users/mchiang0610/followers",
"following_url": "https://api.github.com/users/mchiang0610/following{/other_user}",
"gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions",
"organizations_url": "https://api.github.com/users/mchiang0610/orgs",
"repos_url": "https://api.github.com/users/mchiang0610/repos",
"events_url": "https://api.github.com/users/mchiang0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchiang0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/386/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/386/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6645 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6645/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6645/comments | https://api.github.com/repos/ollama/ollama/issues/6645/events | https://github.com/ollama/ollama/pull/6645 | 2,506,521,525 | PR_kwDOJ0Z1Ps56dW2q | 6,645 | Fix gemma2 2b conversion | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-09-05T00:20:31 | 2024-09-06T00:02:30 | 2024-09-06T00:02:28 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6645",
"html_url": "https://github.com/ollama/ollama/pull/6645",
"diff_url": "https://github.com/ollama/ollama/pull/6645.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6645.patch",
"merged_at": "2024-09-06T00:02:28"
} | Gemma2 added some tensors that were not being named correctly, which caused a collision for the `ffm_norm` tensors. This change fixes the tensor names and adds a new unit test for converting Gemma2 2B. | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6645/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6645/timeline | null | null | true |
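The PR description above says some Gemma2 tensors were being renamed to the same target, causing a collision. The snippet below is a generic sketch of how such a collision can be caught during source-to-target name mapping; it is not Ollama's converter code, and the renameTensor helper and the tensor names in it are assumptions made up for illustration.

```go
package main

import "fmt"

// renameTensor stands in for a converter's source->target name mapping.
// A mapping that is too loose can send two different source tensors to
// the same target name.
func renameTensor(src string) string {
	switch src {
	case "model.layers.0.post_attention_layernorm.weight",
		"model.layers.0.post_feedforward_layernorm.weight":
		return "blk.0.ffn_norm.weight" // both map here: a collision
	default:
		return src
	}
}

func main() {
	seen := map[string]string{}
	for _, src := range []string{
		"model.layers.0.post_attention_layernorm.weight",
		"model.layers.0.post_feedforward_layernorm.weight",
	} {
		dst := renameTensor(src)
		if prev, ok := seen[dst]; ok {
			fmt.Printf("collision: %q and %q both map to %q\n", prev, src, dst)
			continue
		}
		seen[dst] = src
	}
}
```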
https://api.github.com/repos/ollama/ollama/issues/4710 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4710/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4710/comments | https://api.github.com/repos/ollama/ollama/issues/4710/events | https://github.com/ollama/ollama/issues/4710 | 2,324,189,841 | I_kwDOJ0Z1Ps6KiFKR | 4,710 | s390x build ollama : running gcc failed | {
"login": "woale",
"id": 660094,
"node_id": "MDQ6VXNlcjY2MDA5NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/660094?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/woale",
"html_url": "https://github.com/woale",
"followers_url": "https://api.github.com/users/woale/followers",
"following_url": "https://api.github.com/users/woale/following{/other_user}",
"gists_url": "https://api.github.com/users/woale/gists{/gist_id}",
"starred_url": "https://api.github.com/users/woale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/woale/subscriptions",
"organizations_url": "https://api.github.com/users/woale/orgs",
"repos_url": "https://api.github.com/users/woale/repos",
"events_url": "https://api.github.com/users/woale/events{/privacy}",
"received_events_url": "https://api.github.com/users/woale/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 9 | 2024-05-29T20:26:12 | 2025-01-27T21:13:44 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
ollama build fails on undefined llama references
```
# github.com/ollama/ollama
/usr/local/go/pkg/tool/linux_s390x/link: running gcc failed: exit status 1
/usr/bin/ld: /tmp/go-link-778429479/000019.o: in function `_cgo_3eac69a87adc_Cfunc_llama_free_model':
/tmp/go-build/cgo-gcc-prolog:63: undefined reference to `llama_free_model'
/usr/bin/ld: /tmp/go-link-778429479/000019.o: in function `_cgo_3eac69a87adc_Cfunc_llama_load_model_from_file':
/tmp/go-build/cgo-gcc-prolog:79: undefined reference to `llama_load_model_from_file'
/usr/bin/ld: /tmp/go-link-778429479/000019.o: in function `_cgo_3eac69a87adc_Cfunc_llama_model_default_params':
/tmp/go-build/cgo-gcc-prolog:96: undefined reference to `llama_model_default_params'
/usr/bin/ld: /tmp/go-link-778429479/000019.o: in function `_cgo_3eac69a87adc_Cfunc_llama_model_quantize':
/tmp/go-build/cgo-gcc-prolog:117: undefined reference to `llama_model_quantize'
/usr/bin/ld: /tmp/go-link-778429479/000019.o: in function `_cgo_3eac69a87adc_Cfunc_llama_model_quantize_default_params':
/tmp/go-build/cgo-gcc-prolog:134: undefined reference to `llama_model_quantize_default_params'
/usr/bin/ld: /tmp/go-link-778429479/000019.o: in function `_cgo_3eac69a87adc_Cfunc_llama_print_system_info':
/tmp/go-build/cgo-gcc-prolog:151: undefined reference to `llama_print_system_info'
/usr/bin/ld: /tmp/go-link-778429479/000019.o: in function `_cgo_3eac69a87adc_Cfunc_llama_token_to_piece':
/tmp/go-build/cgo-gcc-prolog:176: undefined reference to `llama_token_to_piece'
/usr/bin/ld: /tmp/go-link-778429479/000019.o: in function `_cgo_3eac69a87adc_Cfunc_llama_tokenize':
/tmp/go-build/cgo-gcc-prolog:203: undefined reference to `llama_tokenize'
collect2: error: ld returned 1 exit status
```
**Tools version**
```
[linux1@r ollama]$ cmake --version
cmake version 3.26.5
CMake suite maintained and supported by Kitware (kitware.com/cmake).
[linux1@r ollama]$ go version
go version go1.22.3 linux/s390x
[linux1@r ollama]$ gcc --version
gcc (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3)
Copyright (C) 2021 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
```
### OS
`Linux r 5.14.0-362.8.1.el9_3.s390x #1 SMP Tue Oct 3 09:00:29 EDT 2023 s390x s390x s390x GNU/Linux`
### GPU
None
### CPU
s390x (IBM/S390 on IBM z15 8561 mainframe)
### Ollama version
git version v0.1.39+ | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4710/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4710/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/7936 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7936/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7936/comments | https://api.github.com/repos/ollama/ollama/issues/7936/events | https://github.com/ollama/ollama/pull/7936 | 2,719,052,661 | PR_kwDOJ0Z1Ps6EHFHQ | 7,936 | ci: adjust windows compilers for lint/test | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-12-05T00:26:25 | 2024-12-05T00:34:39 | 2024-12-05T00:33:51 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7936",
"html_url": "https://github.com/ollama/ollama/pull/7936",
"diff_url": "https://github.com/ollama/ollama/pull/7936.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7936.patch",
"merged_at": "2024-12-05T00:33:51"
} | null | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7936/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7936/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6578 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6578/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6578/comments | https://api.github.com/repos/ollama/ollama/issues/6578/events | https://github.com/ollama/ollama/issues/6578 | 2,498,880,963 | I_kwDOJ0Z1Ps6U8eXD | 6,578 | `/show info` panics on nil ModelInfo | {
"login": "vimalk78",
"id": 3284044,
"node_id": "MDQ6VXNlcjMyODQwNDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/3284044?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vimalk78",
"html_url": "https://github.com/vimalk78",
"followers_url": "https://api.github.com/users/vimalk78/followers",
"following_url": "https://api.github.com/users/vimalk78/following{/other_user}",
"gists_url": "https://api.github.com/users/vimalk78/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vimalk78/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vimalk78/subscriptions",
"organizations_url": "https://api.github.com/users/vimalk78/orgs",
"repos_url": "https://api.github.com/users/vimalk78/repos",
"events_url": "https://api.github.com/users/vimalk78/events{/privacy}",
"received_events_url": "https://api.github.com/users/vimalk78/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 0 | 2024-08-31T14:41:52 | 2024-09-01T04:12:18 | 2024-09-01T04:12:18 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
```
ollama on main via v1.22.5
❯ ./ollama run codellama:latest
>>> /show info
panic: interface conversion: interface {} is nil, not string
goroutine 1 [running]:
github.com/ollama/ollama/cmd.showInfo(0xc0001e0700)
/home/vimalkum/src/ollama/ollama/cmd/cmd.go:729 +0x1177
github.com/ollama/ollama/cmd.generateInteractive(0xc0007a4608, {{0x7fff4fb8bbff, 0x10}, {0x0, 0x0}, {0x0, 0x0}, {0x0, 0x0, 0x0}, ...})
/home/vimalkum/src/ollama/ollama/cmd/interactive.go:374 +0x114d
github.com/ollama/ollama/cmd.RunHandler(0xc0007a4608, {0xc0004a83c0, 0x1, 0x1})
/home/vimalkum/src/ollama/ollama/cmd/cmd.go:441 +0x7a5
github.com/spf13/cobra.(*Command).execute(0xc0007a4608, {0xc0004a8380, 0x1, 0x1})
/home/vimalkum/go/pkg/mod/github.com/spf13/[email protected]/command.go:940 +0x882
github.com/spf13/cobra.(*Command).ExecuteC(0xc00048d808)
/home/vimalkum/go/pkg/mod/github.com/spf13/[email protected]/command.go:1068 +0x3a5
github.com/spf13/cobra.(*Command).Execute(...)
/home/vimalkum/go/pkg/mod/github.com/spf13/[email protected]/command.go:992
github.com/spf13/cobra.(*Command).ExecuteContext(...)
/home/vimalkum/go/pkg/mod/github.com/spf13/[email protected]/command.go:985
main.main()
/home/vimalkum/src/ollama/ollama/main.go:12 +0x4d
```
### OS
Linux
### GPU
_No response_
### CPU
Intel
### Ollama version
0.1.28 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6578/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6578/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2143 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2143/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2143/comments | https://api.github.com/repos/ollama/ollama/issues/2143/events | https://github.com/ollama/ollama/pull/2143 | 2,094,695,037 | PR_kwDOJ0Z1Ps5kw2Ga | 2,143 | Refine debug logging for llm | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-22T20:28:38 | 2024-01-22T21:19:19 | 2024-01-22T21:19:16 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2143",
"html_url": "https://github.com/ollama/ollama/pull/2143",
"diff_url": "https://github.com/ollama/ollama/pull/2143.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2143.patch",
"merged_at": "2024-01-22T21:19:16"
} | This wires up logging in llama.cpp to always go to stderr, and also turns up logging if OLLAMA_DEBUG is set.
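A rough sketch of the idea, assuming a hypothetical helper that derives the log level from `OLLAMA_DEBUG` and pipes the runner's stderr straight through (binary name and flag are illustrative, not the actual diff):
```
package main

import (
	"log/slog"
	"os"
	"os/exec"
)

// logLevel returns Debug when OLLAMA_DEBUG is set, Info otherwise.
func logLevel() slog.Level {
	if os.Getenv("OLLAMA_DEBUG") != "" {
		return slog.LevelDebug
	}
	return slog.LevelInfo
}

func main() {
	handler := slog.NewTextHandler(os.Stderr, &slog.HandlerOptions{Level: logLevel()})
	slog.SetDefault(slog.New(handler))

	// Keep every bit of llama.cpp logging on stderr instead of a llama.log
	// file or stdout; the subprocess here is only illustrative.
	cmd := exec.Command("./llama-server", "--verbose")
	cmd.Stdout = os.Stdout // protocol / response traffic
	cmd.Stderr = os.Stderr // all llama.cpp log output
	if err := cmd.Run(); err != nil {
		slog.Error("runner exited", "error", err)
	}
}
```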
This solves a couple of problems: we used to emit one line to llama.log in verbose/debug mode before shifting to stdout. Now all the logging from llama.cpp goes to stderr, and the verbosity can be controlled at runtime. | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2143/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2143/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6023 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6023/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6023/comments | https://api.github.com/repos/ollama/ollama/issues/6023/events | https://github.com/ollama/ollama/issues/6023 | 2,433,814,241 | I_kwDOJ0Z1Ps6REQ7h | 6,023 | Expose unavailable Llama-CPP flags | {
"login": "doomgrave",
"id": 18002421,
"node_id": "MDQ6VXNlcjE4MDAyNDIx",
"avatar_url": "https://avatars.githubusercontent.com/u/18002421?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/doomgrave",
"html_url": "https://github.com/doomgrave",
"followers_url": "https://api.github.com/users/doomgrave/followers",
"following_url": "https://api.github.com/users/doomgrave/following{/other_user}",
"gists_url": "https://api.github.com/users/doomgrave/gists{/gist_id}",
"starred_url": "https://api.github.com/users/doomgrave/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/doomgrave/subscriptions",
"organizations_url": "https://api.github.com/users/doomgrave/orgs",
"repos_url": "https://api.github.com/users/doomgrave/repos",
"events_url": "https://api.github.com/users/doomgrave/events{/privacy}",
"received_events_url": "https://api.github.com/users/doomgrave/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 1 | 2024-07-28T08:25:04 | 2024-09-04T01:56:41 | 2024-09-04T01:56:41 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Please expose all the llama-cpp flags when we configure the modelcard.
For example, offload_kqv, flash_attn, and logits_all may be needed in specific use cases. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6023/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6023/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1971 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1971/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1971/comments | https://api.github.com/repos/ollama/ollama/issues/1971/events | https://github.com/ollama/ollama/pull/1971 | 2,079,836,002 | PR_kwDOJ0Z1Ps5j-oGg | 1,971 | add max context length check | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-12T22:55:03 | 2024-01-12T23:10:26 | 2024-01-12T23:10:25 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1971",
"html_url": "https://github.com/ollama/ollama/pull/1971",
"diff_url": "https://github.com/ollama/ollama/pull/1971.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1971.patch",
"merged_at": "2024-01-12T23:10:25"
} | Setting a context length greater than what the model is trained for has adverse effects. To prevent this, if the user requests a larger context length, log it and set it to the model's max. | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1971/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1971/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/698 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/698/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/698/comments | https://api.github.com/repos/ollama/ollama/issues/698/events | https://github.com/ollama/ollama/issues/698 | 1,926,731,522 | I_kwDOJ0Z1Ps5y15cC | 698 | How to uninstall ollama ai on Linux | {
"login": "scalstairo",
"id": 146988643,
"node_id": "U_kgDOCMLeYw",
"avatar_url": "https://avatars.githubusercontent.com/u/146988643?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/scalstairo",
"html_url": "https://github.com/scalstairo",
"followers_url": "https://api.github.com/users/scalstairo/followers",
"following_url": "https://api.github.com/users/scalstairo/following{/other_user}",
"gists_url": "https://api.github.com/users/scalstairo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/scalstairo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/scalstairo/subscriptions",
"organizations_url": "https://api.github.com/users/scalstairo/orgs",
"repos_url": "https://api.github.com/users/scalstairo/repos",
"events_url": "https://api.github.com/users/scalstairo/events{/privacy}",
"received_events_url": "https://api.github.com/users/scalstairo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-10-04T18:08:45 | 2023-10-08T11:54:42 | 2023-10-04T18:19:51 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | How can Ollama be uninstalled on Linux? I do not see an obvious entry in the package listings. | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/698/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/698/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6980 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6980/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6980/comments | https://api.github.com/repos/ollama/ollama/issues/6980/events | https://github.com/ollama/ollama/issues/6980 | 2,550,753,458 | I_kwDOJ0Z1Ps6YCWiy | 6,980 | Tools support is not working right | {
"login": "acastry",
"id": 33638575,
"node_id": "MDQ6VXNlcjMzNjM4NTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/33638575?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/acastry",
"html_url": "https://github.com/acastry",
"followers_url": "https://api.github.com/users/acastry/followers",
"following_url": "https://api.github.com/users/acastry/following{/other_user}",
"gists_url": "https://api.github.com/users/acastry/gists{/gist_id}",
"starred_url": "https://api.github.com/users/acastry/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/acastry/subscriptions",
"organizations_url": "https://api.github.com/users/acastry/orgs",
"repos_url": "https://api.github.com/users/acastry/repos",
"events_url": "https://api.github.com/users/acastry/events{/privacy}",
"received_events_url": "https://api.github.com/users/acastry/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 13 | 2024-09-26T14:22:14 | 2025-01-06T07:39:00 | 2025-01-06T07:39:00 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hi,
Tools support doesn't work as expected, I guess. When activated, it picks the right function to call, but at the same time it no longer returns a normal response for any prompt other than the tool-support example.
### **I am copying the latest documentation example for tools support.**
```
curl http://localhost:11434/api/chat -d '{
"model": "llama3.2",
"messages": [
{
"role": "user",
"content": "What is the weather today in Paris?"
}
],
"stream": false,
"tools": [
{
"type": "function",
"function": {
"name": "get_current_weather",
"description": "Get the current weather for a location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The location to get the weather for, e.g. San Francisco, CA"
},
"format": {
"type": "string",
"description": "The format to return the weather in, e.g. 'celsius' or 'fahrenheit'",
"enum": ["celsius", "fahrenheit"]
}
},
"required": ["location", "format"]
}
}
}
]
}'
```
Returns
```{"model":"llama3.2","created_at":"2024-09-26T13:56:50.934137Z","message":{"role":"assistant","content":"","tool_calls":[{"function":{"name":"get_current_weather","arguments":{"format":"celsius","location":"Paris"}}}]},"done_reason":"stop","done":true,"total_duration":1596672250,"load_duration":40646209,"prompt_eval_count":213,"prompt_eval_duration":1052689000,"eval_count":25,"eval_duration":500100000}```
**Let me just change the question:**
```
curl http://localhost:11434/api/chat -d '{
"model": "llama3.2",
"messages": [
{
"role": "user",
"content": "What is the value of pi ?"
}
],
"stream": false,
"tools": [
{
"type": "function",
"function": {
"name": "get_current_weather",
"description": "Get the current weather for a location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The location to get the weather for, e.g. San Francisco, CA"
},
"format": {
"type": "string",
"description": "The format to return the weather in, e.g. 'celsius' or 'fahrenheit'",
"enum": ["celsius", "fahrenheit"]
}
},
"required": ["location", "format"]
}
}
}
]
}'
```
### **I cannot get a normal response and it hallucinates:**
`{"model":"llama3.2","created_at":"2024-09-26T14:12:22.749298Z","message":{"role":"assistant","content":"","tool_calls":[{"function":{"name":"get_current_weather","arguments":{"format":"pi","location":""}}}]},"done_reason":"stop","done":true,"total_duration":5842347542,"load_duration":4899628417,"prompt_eval_count":212,"prompt_eval_duration":480869000,"eval_count":23,"eval_duration":456247000}`
**What if I deactivate tools?**
```
curl http://localhost:11434/api/chat -d '{
"model": "llama3.2",
"messages": [
{
"role": "user",
"content": "What is the value of pi ?"
}
],
"stream": false,
"tools": [
]
}'
```
It works:
`{"model":"llama3.2","created_at":"2024-09-26T14:16:03.899228Z","message":{"role":"assistant","content":"Pi (π) is an irrational number, which means it cannot be expressed as a finite decimal or fraction. The value of pi is approximately equal to 3.14159, but it goes on forever without repeating in a predictable pattern.\n\nIn fact, the first few digits of pi are:\n\n3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679\n\n...and so on!\n\nPi is a fundamental constant in mathematics and appears in many areas of geometry, trigonometry, calculus, and more. Its value is used to describe the circumference and area of circles, as well as other curved shapes.\n\nIf you need a more precise value, there are online calculators and mathematical libraries that can provide an infinite number of digits of pi!"},"done_reason":"stop","done":true,"total_duration":5318857625,"load_duration":41988708,"prompt_eval_count":32,"prompt_eval_duration":1744657000,"eval_count":173,"eval_duration":3530631000}`
### **Also tested with llama3.1 and it is not working**
```
curl http://localhost:11434/api/chat -d '{
"model": "llama3.1",
"messages": [
{
"role": "user",
"content": "What is the value of pi ?"
}
],
"stream": false,
"tools": [
{
"type": "function",
"function": {
"name": "get_current_weather",
"description": "Get the current weather for a location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The location to get the weather for, e.g. San Francisco, CA"
},
"format": {
"type": "string",
"description": "The format to return the weather in, e.g. 'celsius' or 'fahrenheit'",
"enum": ["celsius", "fahrenheit"]
}
},
"required": ["location", "format"]
}
}
}
]
}'
```
`{"model":"llama3.1","created_at":"2024-09-26T16:50:52.16426Z","message":{"role":"assistant","content":"","tool_calls":[{"function":{"name":"get_current_weather","arguments":{"format":"celsius","location":"San Francisco, CA"}}}]},"done_reason":"stop","done":true,"total_duration":7556086500,"load_duration":40462833,"prompt_eval_count":214,"prompt_eval_duration":5782271000,"eval_count":48,"eval_duration":1731411000}
`
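For anyone reproducing this from code rather than curl, here is a minimal Go client (standard library only) against the same `/api/chat` endpoint; the request body mirrors the curl examples above, and the final branch shows how a caller can tell a forced tool call apart from a normal answer:
```
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Same shape as the curl examples above: a question that has nothing to
	// do with the single weather tool advertised to the model.
	reqBody := map[string]any{
		"model":  "llama3.2",
		"stream": false,
		"messages": []map[string]string{
			{"role": "user", "content": "What is the value of pi?"},
		},
		"tools": []map[string]any{{
			"type": "function",
			"function": map[string]any{
				"name":        "get_current_weather",
				"description": "Get the current weather for a location",
				"parameters": map[string]any{
					"type": "object",
					"properties": map[string]any{
						"location": map[string]any{"type": "string"},
						"format":   map[string]any{"type": "string", "enum": []string{"celsius", "fahrenheit"}},
					},
					"required": []string{"location", "format"},
				},
			},
		}},
	}

	payload, _ := json.Marshal(reqBody)
	resp, err := http.Post("http://localhost:11434/api/chat", "application/json", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var out struct {
		Message struct {
			Content   string            `json:"content"`
			ToolCalls []json.RawMessage `json:"tool_calls"`
		} `json:"message"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}

	if len(out.Message.ToolCalls) > 0 {
		// The buggy branch described above: a tool call is forced even
		// though the question is unrelated to the weather tool.
		fmt.Println("model insisted on a tool call:", string(out.Message.ToolCalls[0]))
	} else {
		fmt.Println("normal answer:", out.Message.Content)
	}
}
```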
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.3.12 | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6980/reactions",
"total_count": 4,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
} | https://api.github.com/repos/ollama/ollama/issues/6980/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3418 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3418/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3418/comments | https://api.github.com/repos/ollama/ollama/issues/3418/events | https://github.com/ollama/ollama/pull/3418 | 2,216,581,518 | PR_kwDOJ0Z1Ps5rOxWa | 3,418 | Request and model concurrency | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 44 | 2024-03-30T16:56:41 | 2024-05-30T01:21:31 | 2024-04-23T15:31:38 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3418",
"html_url": "https://github.com/ollama/ollama/pull/3418",
"diff_url": "https://github.com/ollama/ollama/pull/3418.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3418.patch",
"merged_at": "2024-04-23T15:31:38"
} | This change adds support for multiple concurrent requests, as well as loading multiple models by spawning multiple runners. This change is designed to be "opt in" initially, so the default behavior mimics the current sequential implementation (1 request at a time, and only a single model), but can be changed by setting environment variables for the server. In the future we will adjust the default settings to enable concurrency by default.
By default, this change supports 1 concurrent request to a loaded model, which can be adjusted via `OLLAMA_NUM_PARALLEL`.
By default, this change supports 1 loaded model at a time, which can be adjusted via `OLLAMA_MAX_LOADED_MODELS`. Set it to zero for fully dynamic behavior based on VRAM capacity, or to a fixed number greater than 1 to limit the total number of loaded models regardless of VRAM capacity. In the >1 scenario, we'll still perform VRAM prediction calculations to see if the new model will completely fit into available VRAM, but will prevent loading more than the specified number of runners even if they would have fit.
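For illustration, the opt-in defaults could be read from the environment roughly like this (hypothetical helper; the PR's real parsing lives in its own configuration code):
```
package main

import (
	"fmt"
	"os"
	"strconv"
)

// envInt reads an integer environment variable, falling back to def when the
// variable is unset or malformed.
func envInt(key string, def int) int {
	if v := os.Getenv(key); v != "" {
		if n, err := strconv.Atoi(v); err == nil {
			return n
		}
	}
	return def
}

func main() {
	// Defaults of 1 keep today's sequential behavior; users opt in to
	// concurrency by exporting larger values before starting the server.
	numParallel := envInt("OLLAMA_NUM_PARALLEL", 1)
	maxLoaded := envInt("OLLAMA_MAX_LOADED_MODELS", 1) // 0 would mean: size dynamically from VRAM

	fmt.Printf("parallel requests per model: %d, max loaded models: %d\n", numParallel, maxLoaded)
}
```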
This change also adjusts our GPU selection algorithm in multi-GPU scenarios. If we can fit the model into a single GPU, we'll favor that over spreading it to multiple GPUs.
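A simplified sketch of that preference (hypothetical helper, not the actual scheduler): pick the smallest single GPU whose free VRAM fits the model, and only spread when none fits alone:
```
package main

import "fmt"

// pickGPUs favors the single GPU that can hold the whole model; only when no
// single GPU has enough free VRAM does it fall back to spreading across all.
func pickGPUs(freeVRAM []uint64, modelSize uint64) []int {
	best := -1
	for i, free := range freeVRAM {
		// choose the smallest GPU that still fits, keeping bigger ones free
		if free >= modelSize && (best == -1 || free < freeVRAM[best]) {
			best = i
		}
	}
	if best >= 0 {
		return []int{best}
	}
	all := make([]int, len(freeVRAM))
	for i := range all {
		all[i] = i
	}
	return all
}

func main() {
	// 24 GiB and 12 GiB cards, 10 GiB model: the 12 GiB card wins on its own.
	fmt.Println(pickGPUs([]uint64{24 << 30, 12 << 30}, 10<<30))
}
```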
Note: system memory capacity is not taken into consideration in this change, so for CPU mode, setting max runners to zero could result in memory exhaustion and paging/swapping.
Fixes #2109
Fixes #1656
Fixes #1514
Fixes #3304
Fixes #961
Fixes #3507 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3418/reactions",
"total_count": 95,
"+1": 37,
"-1": 0,
"laugh": 0,
"hooray": 19,
"confused": 0,
"heart": 12,
"rocket": 19,
"eyes": 8
} | https://api.github.com/repos/ollama/ollama/issues/3418/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7322 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7322/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7322/comments | https://api.github.com/repos/ollama/ollama/issues/7322/events | https://github.com/ollama/ollama/pull/7322 | 2,606,063,044 | PR_kwDOJ0Z1Ps5_fkpi | 7,322 | Refine default thread selection for NUMA systems | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 10 | 2024-10-22T17:31:27 | 2024-11-14T23:51:57 | 2024-10-30T22:05:46 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7322",
"html_url": "https://github.com/ollama/ollama/pull/7322",
"diff_url": "https://github.com/ollama/ollama/pull/7322.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7322.patch",
"merged_at": "2024-10-30T22:05:46"
} | Until we have full NUMA support, this adjusts the default thread selection algorithm to count up the number of performance cores across all sockets.
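The heuristic boils down to something like the following sketch (hypothetical types; the real platform-specific CPU discovery differs):
```
package main

import "fmt"

// socketInfo is a hypothetical summary of one CPU package, assumed to be
// populated by platform-specific discovery code.
type socketInfo struct {
	PerformanceCores int // physical "P" cores on this socket
	EfficiencyCores  int // "E" cores, not counted toward the default
}

// defaultThreads sums performance cores across every socket, so a dual-socket
// machine no longer defaults to the core count of just one package.
func defaultThreads(sockets []socketInfo) int {
	total := 0
	for _, s := range sockets {
		total += s.PerformanceCores
	}
	if total < 1 {
		total = 1 // always leave at least one worker thread
	}
	return total
}

func main() {
	twoSockets := []socketInfo{{PerformanceCores: 16}, {PerformanceCores: 16}}
	fmt.Println("default threads:", defaultThreads(twoSockets)) // 32
}
```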
Fixes #7287
Fixes #7359 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7322/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7322/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5096 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5096/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5096/comments | https://api.github.com/repos/ollama/ollama/issues/5096/events | https://github.com/ollama/ollama/pull/5096 | 2,356,954,459 | PR_kwDOJ0Z1Ps5yrEe_ | 5,096 | Fix a build warning again | {
"login": "coolljt0725",
"id": 8232360,
"node_id": "MDQ6VXNlcjgyMzIzNjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8232360?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/coolljt0725",
"html_url": "https://github.com/coolljt0725",
"followers_url": "https://api.github.com/users/coolljt0725/followers",
"following_url": "https://api.github.com/users/coolljt0725/following{/other_user}",
"gists_url": "https://api.github.com/users/coolljt0725/gists{/gist_id}",
"starred_url": "https://api.github.com/users/coolljt0725/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/coolljt0725/subscriptions",
"organizations_url": "https://api.github.com/users/coolljt0725/orgs",
"repos_url": "https://api.github.com/users/coolljt0725/repos",
"events_url": "https://api.github.com/users/coolljt0725/events{/privacy}",
"received_events_url": "https://api.github.com/users/coolljt0725/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-06-17T10:10:51 | 2024-06-18T00:54:16 | 2024-06-17T18:47:48 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5096",
"html_url": "https://github.com/ollama/ollama/pull/5096",
"diff_url": "https://github.com/ollama/ollama/pull/5096.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5096.patch",
"merged_at": "2024-06-17T18:47:48"
} | With the latest main branch, there is a build warning:
```
# github.com/ollama/ollama/gpu
In file included from gpu_info_oneapi.h:4,
from gpu_info_oneapi.c:3:
gpu_info_oneapi.c: In function ‘oneapi_init’:
gpu_info_oneapi.c:101:27: warning: format ‘%d’ expects argument of type ‘int’, but argument 3 has type ‘zes_driver_handle_t’ {aka ‘struct _zes_driver_handle_t *’} [-Wformat=]
101 | LOG(resp->oh.verbose, "calling zesDeviceGet %d\n", resp->oh.drivers[d]);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~
| |
| zes_driver_handle_t {aka struct _zes_driver_handle_t *}
gpu_info.h:33:23: note: in definition of macro ‘LOG’
33 | fprintf(stderr, __VA_ARGS__); \
| ^~~~~~~~~~~
gpu_info_oneapi.c:101:50: note: format string is defined here
101 | LOG(resp->oh.verbose, "calling zesDeviceGet %d\n", resp->oh.drivers[d]);
| ~^
| |
| int
``` | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5096/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5096/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2492 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2492/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2492/comments | https://api.github.com/repos/ollama/ollama/issues/2492/events | https://github.com/ollama/ollama/issues/2492 | 2,134,240,649 | I_kwDOJ0Z1Ps5_Ne2J | 2,492 | System Prompt not honored until re-run `ollama serve` | {
"login": "hyjwei",
"id": 76876891,
"node_id": "MDQ6VXNlcjc2ODc2ODkx",
"avatar_url": "https://avatars.githubusercontent.com/u/76876891?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hyjwei",
"html_url": "https://github.com/hyjwei",
"followers_url": "https://api.github.com/users/hyjwei/followers",
"following_url": "https://api.github.com/users/hyjwei/following{/other_user}",
"gists_url": "https://api.github.com/users/hyjwei/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hyjwei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hyjwei/subscriptions",
"organizations_url": "https://api.github.com/users/hyjwei/orgs",
"repos_url": "https://api.github.com/users/hyjwei/repos",
"events_url": "https://api.github.com/users/hyjwei/events{/privacy}",
"received_events_url": "https://api.github.com/users/hyjwei/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2 | 2024-02-14T12:23:47 | 2024-02-16T19:43:08 | 2024-02-16T19:42:44 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | There are actually two issues regarding System Prompt in the current main branch, and I believe them to be related.
# Issue 1: `SYSTEM` prompt in modelfile not honored
If I run a model, then create a new one based on the same model but with a new `SYSTEM` prompt, the new `SYSTEM` prompt is not honored. Killing the current ollama serve process and re-running it with `ollama serve` solves the problem.
### How to replicate
Start a new server by `ollama serve` with `OLLAMA_DEBUG=1`
Run client with any model, for example, `ollama run phi`
Input a user prompt; you will find the prompt debug info on the server side, like:
```
time=2024-02-14T06:55:05.081-05:00 level=DEBUG source=routes.go:1205 msg="chat handler" prompt="System: A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful answers to the user's questions.\nUser: hello\nAssistant:" images=0
```
Quit the client, create a custom modelfile like
```
FROM phi
SYSTEM """I want you to speak French only."""
```
Create/run a new model with the custom modelfile
Input a user prompt and check the prompt debug info on the server side again; you will find that it still shows the same system prompt as before. It is not updated to the custom system prompt specified in the modelfile.
If I restart the server and re-run the client with the same custom model, the prompt debug info on the server side is updated correctly.
# Issue 2: `/set system` command in CLI changes System Prompt incorrectly
If I load a model and then use `/set system` to change the system prompt, Ollama actually appends the new system prompt to the existing one instead of replacing it.
### How to replicate
Start a new server by `ollama serve` with `OLLAMA_DEBUG=1`
Run client with any model, for example, `ollama run phi`
Set a new system prompt in the CLI, like:
```
/set system I want you to speak French only.
```
You can confirm that the system prompt has indeed been changed with the `/show modelfile` or `/show system` command.
Input a user prompt; the prompt debug info on the server side now looks like:
```
time=2024-02-14T07:13:40.139-05:00 level=DEBUG source=routes.go:1205 msg="chat handler" prompt="System: A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful answers to the user's questions.\nUser: \nAssistant:System: I want you to speak French only.\nUser: hello\nAssistant:" images=0
```
You can see the original system prompt is still there and the new system prompt is appended, followed by user input.
Furthermore, making it worse, every time I set a new system prompt with `/set system`, the new prompt is appended to the old ones instead of replacing them.
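For clarity, the behavior I expect from `/set system` can be sketched like this (hypothetical types, not the actual CLI code): the existing system message is replaced rather than appended to:
```
package main

import "fmt"

// message mirrors the chat message shape used by the server (role + content).
type message struct {
	Role    string
	Content string
}

// setSystem replaces any existing system message instead of appending
// another one, which is what the `/set system` command is expected to do.
func setSystem(history []message, prompt string) []message {
	kept := make([]message, 0, len(history))
	for _, m := range history {
		if m.Role != "system" {
			kept = append(kept, m) // drop every previous system message
		}
	}
	// the new system prompt becomes the single, leading system message
	return append([]message{{Role: "system", Content: prompt}}, kept...)
}

func main() {
	history := []message{
		{Role: "system", Content: "A chat between a curious user and an artificial intelligence assistant."},
		{Role: "user", Content: "hello"},
	}
	history = setSystem(history, "I want you to speak French only.")
	fmt.Printf("%+v\n", history)
}
```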
| {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2492/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2492/timeline | null | completed | false |