url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | sub_issues_summary | active_lock_reason | draft | pull_request | body | closed_by | reactions | timeline_url | performed_via_github_app | state_reason | is_pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/ollama/ollama/issues/2979 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2979/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2979/comments | https://api.github.com/repos/ollama/ollama/issues/2979/events | https://github.com/ollama/ollama/issues/2979 | 2,173,683,721 | I_kwDOJ0Z1Ps6Bj8gJ | 2,979 | Starcoder2 crashing ollama docker version 0.1.28 | {
"login": "tilllt",
"id": 1854364,
"node_id": "MDQ6VXNlcjE4NTQzNjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1854364?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tilllt",
"html_url": "https://github.com/tilllt",
"followers_url": "https://api.github.com/users/tilllt/followers",
"following_url": "https://api.github.com/users/tilllt/following{/other_user}",
"gists_url": "https://api.github.com/users/tilllt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tilllt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tilllt/subscriptions",
"organizations_url": "https://api.github.com/users/tilllt/orgs",
"repos_url": "https://api.github.com/users/tilllt/repos",
"events_url": "https://api.github.com/users/tilllt/events{/privacy}",
"received_events_url": "https://api.github.com/users/tilllt/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2024-03-07T11:51:04 | 2024-03-07T12:49:55 | 2024-03-07T12:49:54 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I noticed that the ollama version shipped as a Docker container has been updated to 0.1.28 and thus should run the starcoder2 and gemma models - I am still not having luck running those; ollama just crashes... Am I missing something?
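As an editorial aside (not part of the original report), one quick sanity check is to ask the running container which version it is actually serving via the `/api/version` endpoint; a stale cached Docker image is a common cause of this symptom. A minimal sketch, assuming a default local server:
```python
import json
import urllib.request

# Hedged sketch (not from the original report): confirm which version the
# container is actually serving; a stale cached Docker image is a common
# reason an "updated" container still runs old code.
with urllib.request.urlopen("http://localhost:11434/api/version") as resp:
    print("server version:", json.load(resp)["version"])
# If this prints something older than 0.1.28, re-pull: docker pull ollama/ollama
```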
https://pastebin.com/ALJRfZZ5 | {
"login": "tilllt",
"id": 1854364,
"node_id": "MDQ6VXNlcjE4NTQzNjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1854364?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tilllt",
"html_url": "https://github.com/tilllt",
"followers_url": "https://api.github.com/users/tilllt/followers",
"following_url": "https://api.github.com/users/tilllt/following{/other_user}",
"gists_url": "https://api.github.com/users/tilllt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tilllt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tilllt/subscriptions",
"organizations_url": "https://api.github.com/users/tilllt/orgs",
"repos_url": "https://api.github.com/users/tilllt/repos",
"events_url": "https://api.github.com/users/tilllt/events{/privacy}",
"received_events_url": "https://api.github.com/users/tilllt/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2979/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2979/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8660 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8660/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8660/comments | https://api.github.com/repos/ollama/ollama/issues/8660/events | https://github.com/ollama/ollama/issues/8660 | 2,818,256,756 | I_kwDOJ0Z1Ps6n-y90 | 8,660 | GPU Memory Not Released After Exiting deepseek-r1:32b Model | {
"login": "Sebjac06",
"id": 172889704,
"node_id": "U_kgDOCk4WaA",
"avatar_url": "https://avatars.githubusercontent.com/u/172889704?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sebjac06",
"html_url": "https://github.com/Sebjac06",
"followers_url": "https://api.github.com/users/Sebjac06/followers",
"following_url": "https://api.github.com/users/Sebjac06/following{/other_user}",
"gists_url": "https://api.github.com/users/Sebjac06/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sebjac06/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sebjac06/subscriptions",
"organizations_url": "https://api.github.com/users/Sebjac06/orgs",
"repos_url": "https://api.github.com/users/Sebjac06/repos",
"events_url": "https://api.github.com/users/Sebjac06/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sebjac06/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2025-01-29T13:41:07 | 2025-01-29T13:51:19 | 2025-01-29T13:51:19 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
- Ollama Version: 0.5.7
- Model: deepseek-r1:32b
- GPU: NVIDIA RTX 3090 (24GB VRAM)
- OS: Windows 11
After running the `deepseek-r1:32b` model via `ollama run deepseek-r1:32b` and exiting with `/bye` in my terminal, the GPU's dedicated memory remains fully allocated at 24GB despite 0% GPU usage. This persists until I fully close the Ollama application, and recurs the next time I use the model.
Is this a bug? It seems strange that the dedicated memory remains at the maximum even after closing the terminal.
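A hedged aside, not from the original report: exiting the interactive prompt does not unload the model, because Ollama keeps it resident for a keep-alive window. The REST API's `keep_alive` field can request an immediate unload; a minimal sketch, assuming a default local server:
```python
import json
import urllib.request

# Sketch: keep_alive=0 asks the server to unload the model (and release VRAM)
# as soon as this request completes; a value like "5m" would keep it warm instead.
payload = {"model": "deepseek-r1:32b", "keep_alive": 0}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
urllib.request.urlopen(req).close()  # no prompt: the call only triggers the unload
```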
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.5.7 | {
"login": "Sebjac06",
"id": 172889704,
"node_id": "U_kgDOCk4WaA",
"avatar_url": "https://avatars.githubusercontent.com/u/172889704?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sebjac06",
"html_url": "https://github.com/Sebjac06",
"followers_url": "https://api.github.com/users/Sebjac06/followers",
"following_url": "https://api.github.com/users/Sebjac06/following{/other_user}",
"gists_url": "https://api.github.com/users/Sebjac06/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sebjac06/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sebjac06/subscriptions",
"organizations_url": "https://api.github.com/users/Sebjac06/orgs",
"repos_url": "https://api.github.com/users/Sebjac06/repos",
"events_url": "https://api.github.com/users/Sebjac06/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sebjac06/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8660/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8660/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3362 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3362/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3362/comments | https://api.github.com/repos/ollama/ollama/issues/3362/events | https://github.com/ollama/ollama/issues/3362 | 2,208,545,720 | I_kwDOJ0Z1Ps6Do7u4 | 3,362 | Report better error on windows on port conflict with winnat | {
"login": "Canman1963",
"id": 133131797,
"node_id": "U_kgDOB-9uFQ",
"avatar_url": "https://avatars.githubusercontent.com/u/133131797?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Canman1963",
"html_url": "https://github.com/Canman1963",
"followers_url": "https://api.github.com/users/Canman1963/followers",
"following_url": "https://api.github.com/users/Canman1963/following{/other_user}",
"gists_url": "https://api.github.com/users/Canman1963/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Canman1963/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Canman1963/subscriptions",
"organizations_url": "https://api.github.com/users/Canman1963/orgs",
"repos_url": "https://api.github.com/users/Canman1963/repos",
"events_url": "https://api.github.com/users/Canman1963/events{/privacy}",
"received_events_url": "https://api.github.com/users/Canman1963/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
}
] | open | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4 | 2024-03-26T15:15:16 | 2024-04-28T19:01:09 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi DevOps
My Ollama was working fine until I tried to use it today; I'm not sure what has happened. The app.log shows this repeated crash-and-respawn loop:
time=2024-03-25T12:09:31.329-05:00 level=INFO source=logging.go:45 msg="ollama app started"
time=2024-03-25T12:09:31.389-05:00 level=INFO source=server.go:135 msg="unable to connect to server"
time=2024-03-25T12:09:34.633-05:00 level=INFO source=server.go:91 msg="started ollama server with pid 33376"
time=2024-03-25T12:09:34.633-05:00 level=INFO source=server.go:93 msg="ollama server logs C:\\Users\\David\\AppData\\Local\\Ollama\\server.log"
time=2024-03-25T12:09:35.525-05:00 level=WARN source=server.go:113 msg="server crash 1 - exit code 1 - respawning"
time=2024-03-25T12:09:36.037-05:00 level=ERROR source=server.go:116 msg="failed to restart server exec: already started"
time=2024-03-25T12:09:37.038-05:00 level=WARN source=server.go:113 msg="server crash 2 - exit code 1 - respawning"
time=2024-03-25T12:09:37.550-05:00 level=ERROR source=server.go:116 msg="failed to restart server exec: already started"
[... the identical WARN/ERROR pair repeats with increasing backoff through "server crash 79" at 13:01:36 ...]
time=2024-03-25T13:02:24.837-05:00 level=INFO source=logging.go:45 msg="ollama app started"
time=2024-03-25T13:02:24.898-05:00 level=INFO source=server.go:135 msg="unable to connect to server"
The server log shows:
Error: listen tcp 127.0.0.1:11434: bind: An attempt was made to access a socket in a way forbidden by its access permissions.
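Editorial note: this bind error usually means the address is reserved or already owned — on Windows, often by winnat's excluded port ranges — rather than a crash inside Ollama itself. A small illustrative check (not from the original report) of whether the port can be bound at all:
```python
import socket

# Hedged diagnostic sketch: try to bind the address ollama wants. If this raises
# OSError/PermissionError while nothing is visibly listening, the port is likely
# inside a reserved range (on Windows, check winnat's excluded port ranges).
addr = ("127.0.0.1", 11434)
try:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(addr)
    print(f"{addr} is free - ollama should be able to bind it")
except OSError as e:
    print(f"cannot bind {addr}: {e}")
```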
| null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3362/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3362/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/5953 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5953/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5953/comments | https://api.github.com/repos/ollama/ollama/issues/5953/events | https://github.com/ollama/ollama/issues/5953 | 2,430,295,149 | I_kwDOJ0Z1Ps6Q21xt | 5,953 | Who are you? | {
"login": "t7aliang",
"id": 11693120,
"node_id": "MDQ6VXNlcjExNjkzMTIw",
"avatar_url": "https://avatars.githubusercontent.com/u/11693120?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/t7aliang",
"html_url": "https://github.com/t7aliang",
"followers_url": "https://api.github.com/users/t7aliang/followers",
"following_url": "https://api.github.com/users/t7aliang/following{/other_user}",
"gists_url": "https://api.github.com/users/t7aliang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/t7aliang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/t7aliang/subscriptions",
"organizations_url": "https://api.github.com/users/t7aliang/orgs",
"repos_url": "https://api.github.com/users/t7aliang/repos",
"events_url": "https://api.github.com/users/t7aliang/events{/privacy}",
"received_events_url": "https://api.github.com/users/t7aliang/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-07-25T15:23:15 | 2024-07-26T14:01:03 | 2024-07-26T14:01:03 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
![Screenshot 2024-07-25 at 11 18 10 PM](https://github.com/user-attachments/assets/5a06a6fd-3e8c-4db9-84f7-5f076acdd4be)
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.2.8 | {
"login": "t7aliang",
"id": 11693120,
"node_id": "MDQ6VXNlcjExNjkzMTIw",
"avatar_url": "https://avatars.githubusercontent.com/u/11693120?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/t7aliang",
"html_url": "https://github.com/t7aliang",
"followers_url": "https://api.github.com/users/t7aliang/followers",
"following_url": "https://api.github.com/users/t7aliang/following{/other_user}",
"gists_url": "https://api.github.com/users/t7aliang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/t7aliang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/t7aliang/subscriptions",
"organizations_url": "https://api.github.com/users/t7aliang/orgs",
"repos_url": "https://api.github.com/users/t7aliang/repos",
"events_url": "https://api.github.com/users/t7aliang/events{/privacy}",
"received_events_url": "https://api.github.com/users/t7aliang/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5953/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/5953/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5602 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5602/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5602/comments | https://api.github.com/repos/ollama/ollama/issues/5602/events | https://github.com/ollama/ollama/issues/5602 | 2,400,911,034 | I_kwDOJ0Z1Ps6PGv66 | 5,602 | Running latest version 0.2.1 running slowly and not returning output for long text input | {
"login": "jillvillany",
"id": 42828003,
"node_id": "MDQ6VXNlcjQyODI4MDAz",
"avatar_url": "https://avatars.githubusercontent.com/u/42828003?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jillvillany",
"html_url": "https://github.com/jillvillany",
"followers_url": "https://api.github.com/users/jillvillany/followers",
"following_url": "https://api.github.com/users/jillvillany/following{/other_user}",
"gists_url": "https://api.github.com/users/jillvillany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jillvillany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jillvillany/subscriptions",
"organizations_url": "https://api.github.com/users/jillvillany/orgs",
"repos_url": "https://api.github.com/users/jillvillany/repos",
"events_url": "https://api.github.com/users/jillvillany/events{/privacy}",
"received_events_url": "https://api.github.com/users/jillvillany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5808482718,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWjZpng",
"url": "https://api.github.com/repos/ollama/ollama/labels/performance",
"name": "performance",
"color": "A5B5C6",
"default": false,
"description": ""
}
] | open | false | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3 | 2024-07-10T14:19:02 | 2024-10-16T16:18:22 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I am running ollama on an AWS ml.p3.2xlarge SageMaker notebook instance.
When I install the latest version, 0.2.1, a LangChain chain running a name-extraction prompt on a page of text with llama3:latest takes about 8 seconds to respond and doesn't return any names.
However, when I install version 0.1.37, the response time goes down to under a second and I get an accurate response with people's names found in the text.
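One hedged way to tell whether the slowdown is in the server rather than the LangChain layer (an illustrative snippet with placeholder prompt text, not from the report) is to time a bare `/api/generate` call against each server version:
```python
import json
import time
import urllib.request

# Sketch: time one non-streaming generation against a local server. Running this
# against 0.1.37 and 0.2.1 with the same prompt separates server regressions
# from anything the LangChain wrapper adds.
payload = {
    "model": "llama3:latest",
    "prompt": "List every person's name in the following text: ...",  # page of text elided
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
start = time.perf_counter()
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)
print(f"{time.perf_counter() - start:.2f}s: {body['response'][:200]!r}")
```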
### OS
Linux
### GPU
Nvidia
### CPU
_No response_
### Ollama version
0.2.1 | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5602/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5602/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/1876 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1876/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1876/comments | https://api.github.com/repos/ollama/ollama/issues/1876/events | https://github.com/ollama/ollama/issues/1876 | 2,073,129,789 | I_kwDOJ0Z1Ps57kXM9 | 1,876 | ollama list flags help | {
"login": "iplayfast",
"id": 751306,
"node_id": "MDQ6VXNlcjc1MTMwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/751306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iplayfast",
"html_url": "https://github.com/iplayfast",
"followers_url": "https://api.github.com/users/iplayfast/followers",
"following_url": "https://api.github.com/users/iplayfast/following{/other_user}",
"gists_url": "https://api.github.com/users/iplayfast/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iplayfast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iplayfast/subscriptions",
"organizations_url": "https://api.github.com/users/iplayfast/orgs",
"repos_url": "https://api.github.com/users/iplayfast/repos",
"events_url": "https://api.github.com/users/iplayfast/events{/privacy}",
"received_events_url": "https://api.github.com/users/iplayfast/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6960960225,
"node_id": "LA_kwDOJ0Z1Ps8AAAABnufS4Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/cli",
"name": "cli",
"color": "5319e7",
"default": false,
"description": "Issues related to the Ollama CLI"
}
] | open | false | null | [] | null | 5 | 2024-01-09T20:35:48 | 2024-10-26T21:58:36 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | There is no obvious way of seeing what flags are available for `ollama list`:
```
ollama list --help
List models
Usage:
ollama list [flags]
Aliases:
list, ls
Flags:
-h, --help help for list
```
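In the meantime, a hedged workaround sketch: `ollama list` is effectively a view over `GET /api/tags`, so custom sorting or filtering can be scripted against that endpoint until the CLI grows such flags:
```python
import json
import urllib.request

# Workaround sketch (assumes a default local server): fetch the model list from
# the /api/tags endpoint and apply whatever sorting/filtering the CLI lacks.
with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
    models = json.load(resp)["models"]

for m in sorted(models, key=lambda m: m["size"], reverse=True):  # e.g. sort by size
    print(f"{m['name']:40} {m['size'] / 1e9:6.1f} GB  {m['modified_at']}")
```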
| null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1876/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1876/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/2787 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2787/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2787/comments | https://api.github.com/repos/ollama/ollama/issues/2787/events | https://github.com/ollama/ollama/issues/2787 | 2,157,567,148 | I_kwDOJ0Z1Ps6Amdys | 2,787 | bug? - session save does not save latest messages of the chat | {
"login": "FotisK",
"id": 7896645,
"node_id": "MDQ6VXNlcjc4OTY2NDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7896645?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FotisK",
"html_url": "https://github.com/FotisK",
"followers_url": "https://api.github.com/users/FotisK/followers",
"following_url": "https://api.github.com/users/FotisK/following{/other_user}",
"gists_url": "https://api.github.com/users/FotisK/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FotisK/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FotisK/subscriptions",
"organizations_url": "https://api.github.com/users/FotisK/orgs",
"repos_url": "https://api.github.com/users/FotisK/repos",
"events_url": "https://api.github.com/users/FotisK/events{/privacy}",
"received_events_url": "https://api.github.com/users/FotisK/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-02-27T20:43:59 | 2024-05-17T01:50:42 | 2024-05-17T01:50:42 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I was having a very long conversation with nollama/mythomax-l2-13b:Q5_K_S, saved the session, restored it, and found that the last 100-200 lines of the discussion were missing. I haven't tried to reproduce it (I don't have lengthy chats often), but I thought I'd report it. When I get another chance, I'll test it again. | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2787/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2787/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5307 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5307/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5307/comments | https://api.github.com/repos/ollama/ollama/issues/5307/events | https://github.com/ollama/ollama/pull/5307 | 2,376,001,387 | PR_kwDOJ0Z1Ps5zq6Yg | 5,307 | Ollama Show: Check for Projector Type | {
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-06-26T18:22:07 | 2024-06-28T18:30:19 | 2024-06-28T18:30:17 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5307",
"html_url": "https://github.com/ollama/ollama/pull/5307",
"diff_url": "https://github.com/ollama/ollama/pull/5307.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5307.patch",
"merged_at": "2024-06-28T18:30:17"
} | Fixes #5289
<img width="410" alt="Screenshot 2024-06-26 at 11 21 57 AM" src="https://github.com/ollama/ollama/assets/65097070/4ae18164-e5c2-453b-91d4-de54569b8e11">
| {
"login": "royjhan",
"id": 65097070,
"node_id": "MDQ6VXNlcjY1MDk3MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/royjhan",
"html_url": "https://github.com/royjhan",
"followers_url": "https://api.github.com/users/royjhan/followers",
"following_url": "https://api.github.com/users/royjhan/following{/other_user}",
"gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/royjhan/subscriptions",
"organizations_url": "https://api.github.com/users/royjhan/orgs",
"repos_url": "https://api.github.com/users/royjhan/repos",
"events_url": "https://api.github.com/users/royjhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/royjhan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5307/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5307/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7499 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7499/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7499/comments | https://api.github.com/repos/ollama/ollama/issues/7499/events | https://github.com/ollama/ollama/pull/7499 | 2,634,169,544 | PR_kwDOJ0Z1Ps6A3i20 | 7,499 | build: Make target improvements | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 59 | 2024-11-05T00:47:49 | 2025-01-18T02:06:48 | 2024-12-10T17:47:19 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7499",
"html_url": "https://github.com/ollama/ollama/pull/7499",
"diff_url": "https://github.com/ollama/ollama/pull/7499.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7499.patch",
"merged_at": "2024-12-10T17:47:19"
} | Add a few new targets and help text for building locally. This also adjusts the runner lookup to favor local builds first, then fall back to runners relative to the executable.
Fixes #7491
Fixes #7483
Fixes #7452
Fixes #2187
Fixes #2205
Fixes #2281
Fixes #7457
Fixes #7622
Fixes #7577
Fixes #1756
Fixes #7817
Fixes #6857
Carries #7199 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7499/reactions",
"total_count": 17,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 10,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7499/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3528 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3528/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3528/comments | https://api.github.com/repos/ollama/ollama/issues/3528/events | https://github.com/ollama/ollama/pull/3528 | 2,229,959,636 | PR_kwDOJ0Z1Ps5r8bOC | 3,528 | Update generate scripts with new `LLAMA_CUDA` variable, set `HIP_PLATFORM` on Windows to avoid compiler errors | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-04-07T21:05:46 | 2024-04-07T23:29:52 | 2024-04-07T23:29:51 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3528",
"html_url": "https://github.com/ollama/ollama/pull/3528",
"diff_url": "https://github.com/ollama/ollama/pull/3528.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3528.patch",
"merged_at": "2024-04-07T23:29:51"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3528/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3528/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6853 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6853/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6853/comments | https://api.github.com/repos/ollama/ollama/issues/6853/events | https://github.com/ollama/ollama/issues/6853 | 2,533,058,737 | I_kwDOJ0Z1Ps6W-2ix | 6,853 | Setting temperature on any llava model makes the Ollama server hangs on REST calls | {
"login": "jluisreymejias",
"id": 16193562,
"node_id": "MDQ6VXNlcjE2MTkzNTYy",
"avatar_url": "https://avatars.githubusercontent.com/u/16193562?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jluisreymejias",
"html_url": "https://github.com/jluisreymejias",
"followers_url": "https://api.github.com/users/jluisreymejias/followers",
"following_url": "https://api.github.com/users/jluisreymejias/following{/other_user}",
"gists_url": "https://api.github.com/users/jluisreymejias/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jluisreymejias/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jluisreymejias/subscriptions",
"organizations_url": "https://api.github.com/users/jluisreymejias/orgs",
"repos_url": "https://api.github.com/users/jluisreymejias/repos",
"events_url": "https://api.github.com/users/jluisreymejias/events{/privacy}",
"received_events_url": "https://api.github.com/users/jluisreymejias/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6677367769,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q",
"url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info",
"name": "needs more info",
"color": "BA8041",
"default": false,
"description": "More information is needed to assist"
}
] | closed | false | null | [] | null | 4 | 2024-09-18T08:23:25 | 2025-01-06T07:33:52 | 2025-01-06T07:33:52 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When calling llava models from a REST client, setting the temperature causes the Ollama server to hang until the process is killed.
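A minimal reproduction sketch, assuming the `/api/chat` endpoint (the model tag, prompt, and image payload are illustrative placeholders):

```shell
curl http://localhost:11434/api/chat -d '{
  "model": "llava",
  "messages": [
    { "role": "user", "content": "Describe this image", "images": ["<base64-encoded image>"] }
  ],
  "options": { "temperature": 0.5 },
  "stream": false
}'
```

Without the `options.temperature` field, the same request completes normally.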
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.3.10 | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6853/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6853/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5160 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5160/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5160/comments | https://api.github.com/repos/ollama/ollama/issues/5160/events | https://github.com/ollama/ollama/issues/5160 | 2,363,800,007 | I_kwDOJ0Z1Ps6M5LnH | 5,160 | Add HelpingAI-9B in it | {
"login": "OE-LUCIFER",
"id": 158988478,
"node_id": "U_kgDOCXn4vg",
"avatar_url": "https://avatars.githubusercontent.com/u/158988478?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OE-LUCIFER",
"html_url": "https://github.com/OE-LUCIFER",
"followers_url": "https://api.github.com/users/OE-LUCIFER/followers",
"following_url": "https://api.github.com/users/OE-LUCIFER/following{/other_user}",
"gists_url": "https://api.github.com/users/OE-LUCIFER/gists{/gist_id}",
"starred_url": "https://api.github.com/users/OE-LUCIFER/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OE-LUCIFER/subscriptions",
"organizations_url": "https://api.github.com/users/OE-LUCIFER/orgs",
"repos_url": "https://api.github.com/users/OE-LUCIFER/repos",
"events_url": "https://api.github.com/users/OE-LUCIFER/events{/privacy}",
"received_events_url": "https://api.github.com/users/OE-LUCIFER/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | open | false | null | [] | null | 1 | 2024-06-20T08:05:57 | 2024-06-20T21:14:20 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | HelpingAI-9B is an advanced language model designed for emotionally intelligent conversational interactions. This model excels in empathetic engagement, understanding user emotions, and providing supportive dialogue. | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5160/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5160/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/3438 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3438/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3438/comments | https://api.github.com/repos/ollama/ollama/issues/3438/events | https://github.com/ollama/ollama/issues/3438 | 2,218,222,188 | I_kwDOJ0Z1Ps6EN2Js | 3,438 | Bug in MODEL download directory and launching ollama service in Linux | {
"login": "ejgutierrez74",
"id": 11474846,
"node_id": "MDQ6VXNlcjExNDc0ODQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/11474846?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ejgutierrez74",
"html_url": "https://github.com/ejgutierrez74",
"followers_url": "https://api.github.com/users/ejgutierrez74/followers",
"following_url": "https://api.github.com/users/ejgutierrez74/following{/other_user}",
"gists_url": "https://api.github.com/users/ejgutierrez74/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ejgutierrez74/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ejgutierrez74/subscriptions",
"organizations_url": "https://api.github.com/users/ejgutierrez74/orgs",
"repos_url": "https://api.github.com/users/ejgutierrez74/repos",
"events_url": "https://api.github.com/users/ejgutierrez74/events{/privacy}",
"received_events_url": "https://api.github.com/users/ejgutierrez74/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg",
"url": "https://api.github.com/repos/ollama/ollama/labels/linux",
"name": "linux",
"color": "516E70",
"default": false,
"description": ""
}
] | open | false | null | [] | null | 15 | 2024-04-01T13:06:01 | 2024-07-18T09:58:57 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I write this post to add more information:
1 - As you mentioned, I edited the service with `sudo systemctl edit ollama.service`:
![imagen](https://github.com/ollama/ollama/assets/11474846/d82ca623-5b89-4e8c-8b25-81a82de0b7b3)
And /media/Samsung/ollama_models is empty:
![imagen](https://github.com/ollama/ollama/assets/11474846/63001767-af41-4f47-823a-5c6506f3599d)
So there seems to be a bug here (as said before, the docs say you have to change the ollama.service file).
2 - ollama serve vs systemd
I ran `systemctl start ollama` (today I booted my computer), and it fails:
![imagen](https://github.com/ollama/ollama/assets/11474846/9449fd23-8a4f-4a06-abd1-f3339778ce91)
But if I run `ollama serve` it seems to work (again, just to be sure, I started ollama, checked the status, and then executed `ollama serve`):
![imagen](https://github.com/ollama/ollama/assets/11474846/a4c14ca7-4994-4497-a634-1ebad8cd1e77)
And in another tab ollama seems to work:
![imagen](https://github.com/ollama/ollama/assets/11474846/352524e4-ce54-4b9d-8ec1-e719f4a16b1d)
3 - Where are the models downloaded?
As posted before, /media/Samsung/ollama_models -> as you can see, it is empty
/home/ollama -> doesn't exist
![imagen](https://github.com/ollama/ollama/assets/11474846/9dbb5c4e-27ce-4503-b756-eab30b9efd72)
and /usr/share/ollama ->
![imagen](https://github.com/ollama/ollama/assets/11474846/6b2e23b5-f245-4393-8b34-0ffde5705197)
I'm going mad ;)
Thanks for your help.
Edit for update: I finally found the ollama models at /home/eduardo/.ollama, but they shouldn't be there, as the default directory is /usr/share/ollama/.ollama, and I had set the environment variable OLLAMA_MODEL to point to /media/Samsung/ollama_models.
_Originally posted by @ejgutierrez74 in https://github.com/ollama/ollama/issues/3045#issuecomment-1991349181_
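For reference, a minimal systemd override that relocates the model directory would look like the sketch below. Note that the documented variable name is `OLLAMA_MODELS` (plural); the singular `OLLAMA_MODEL` mentioned above is not a variable the server reads, which would explain models landing in the default per-user location.

```ini
# /etc/systemd/system/ollama.service.d/override.conf (created via `systemctl edit ollama.service`)
[Service]
Environment="OLLAMA_MODELS=/media/Samsung/ollama_models"
```

This should be followed by `sudo systemctl daemon-reload && sudo systemctl restart ollama`, and the target directory must be writable by the `ollama` user.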
| null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3438/reactions",
"total_count": 6,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
} | https://api.github.com/repos/ollama/ollama/issues/3438/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/6948 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6948/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6948/comments | https://api.github.com/repos/ollama/ollama/issues/6948/events | https://github.com/ollama/ollama/pull/6948 | 2,547,117,379 | PR_kwDOJ0Z1Ps58nEbk | 6,948 | Fix Ollama silently failing on extra, unsupported openai parameters. | {
"login": "MadcowD",
"id": 719535,
"node_id": "MDQ6VXNlcjcxOTUzNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/719535?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MadcowD",
"html_url": "https://github.com/MadcowD",
"followers_url": "https://api.github.com/users/MadcowD/followers",
"following_url": "https://api.github.com/users/MadcowD/following{/other_user}",
"gists_url": "https://api.github.com/users/MadcowD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MadcowD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MadcowD/subscriptions",
"organizations_url": "https://api.github.com/users/MadcowD/orgs",
"repos_url": "https://api.github.com/users/MadcowD/repos",
"events_url": "https://api.github.com/users/MadcowD/events{/privacy}",
"received_events_url": "https://api.github.com/users/MadcowD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3 | 2024-09-25T07:00:07 | 2024-12-29T19:29:39 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6948",
"html_url": "https://github.com/ollama/ollama/pull/6948",
"diff_url": "https://github.com/ollama/ollama/pull/6948.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6948.patch",
"merged_at": null
} | Currently, Ollama lets you send unsupported API params to the OpenAI-compatible endpoint and silently ignores them. This wreaks havoc on downstream uses, causing unexpected behaviour.
```python
import os

from openai import OpenAI

# Retrieve the API key and ensure it's set
ollama_api_key = os.getenv("OLLAMA_API_KEY")
if not ollama_api_key:
    raise ValueError("API key not found. Please ensure that the OLLAMA_API_KEY is set in your .env.local file.")

# Initialize the OpenAI client against the local Ollama endpoint
ollama_client = OpenAI(
    base_url="http://localhost:11434/v1",
    api_key=ollama_api_key,
)

model = "llama3.1:latest"

# Construct the completion request with n=2
response = ollama_client.completions.create(
    model=model,
    prompt="Once upon a time",
    max_tokens=50,
    n=2,  # Requesting two completions
    temperature=0.7,
)

print("Number of completions requested: 2")
print(f"Number of completions received: {len(response.choices)}\n")
assert len(response.choices) == 2, "Shouldn't ever get here"

# Iterate and print each completion
for idx, choice in enumerate(response.choices, 1):
    print(f"Completion {idx}: {choice.text.strip()}")
```
This leads to
```
Number of completions requested: 2
Number of completions received: 1
Completion 1: Once upon a time, in a land where the sun always shone brightly, there lived a young adventurer eager to explore uncharted territories and uncover hidden treasures.
```
My change causes the API to return an error, because that parameter is plainly not supported by Ollama.
| null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6948/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6948/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3427 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3427/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3427/comments | https://api.github.com/repos/ollama/ollama/issues/3427/events | https://github.com/ollama/ollama/issues/3427 | 2,217,052,006 | I_kwDOJ0Z1Ps6EJYdm | 3,427 | prompt_eval_count in api is broken | {
"login": "drazdra",
"id": 133811709,
"node_id": "U_kgDOB_nN_Q",
"avatar_url": "https://avatars.githubusercontent.com/u/133811709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/drazdra",
"html_url": "https://github.com/drazdra",
"followers_url": "https://api.github.com/users/drazdra/followers",
"following_url": "https://api.github.com/users/drazdra/following{/other_user}",
"gists_url": "https://api.github.com/users/drazdra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/drazdra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/drazdra/subscriptions",
"organizations_url": "https://api.github.com/users/drazdra/orgs",
"repos_url": "https://api.github.com/users/drazdra/repos",
"events_url": "https://api.github.com/users/drazdra/events{/privacy}",
"received_events_url": "https://api.github.com/users/drazdra/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 5 | 2024-03-31T15:39:03 | 2024-06-04T06:58:20 | 2024-06-04T06:58:20 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
The prompt_eval_count parameter is absent on some calls; on other calls it returns wrong information.
1. I tried /api/chat with "stablelm2", no system prompt, prompt="hi".
In the result there is no "prompt_eval_count" field most of the time. Sometimes it's there, but only rarely and seemingly at random.
2. When you have a small num_ctx and the supplied prompt (the content of all messages in /api/chat) exceeds the num_ctx size, prompt_eval_count may either be absent or report wrong information.
I believe it returns the number of tokens that could fit in the context window, instead of the whole prompt that was sent.
Thanks :).
### What did you expect to see?
Expected behavior:
1. There should always be a prompt_eval_count.
2. It should report the count of submitted tokens, not the processed ones.
3. Optionally, one more parameter may be returned, showing the number of processed tokens.
### Steps to reproduce
Use /api/chat to send an array of messages, limit num_ctx, send more content than fits into the context window defined by num_ctx, and check the values returned in the final JSON reply with done==true; an example request follows.
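A reproduction sketch along those lines (the small num_ctx value is illustrative):

```shell
curl http://localhost:11434/api/chat -d '{
  "model": "stablelm2",
  "messages": [
    { "role": "user", "content": "hi" }
  ],
  "options": { "num_ctx": 128 },
  "stream": false
}'
```

The final object should contain `"done": true` together with `prompt_eval_count`; compare that value against the token count of everything submitted.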
### Are there any recent changes that introduced the issue?
_No response_
### OS
Linux
### Architecture
amd64
### Platform
_No response_
### Ollama version
0.1.30
### GPU
Other
### GPU info
cpu only
### CPU
AMD
### Other software
_No response_ | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3427/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3427/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8366 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8366/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8366/comments | https://api.github.com/repos/ollama/ollama/issues/8366/events | https://github.com/ollama/ollama/issues/8366 | 2,778,471,158 | I_kwDOJ0Z1Ps6lnBr2 | 8,366 | deepseek v3 | {
"login": "Morrigan-Ship",
"id": 138357319,
"node_id": "U_kgDOCD8qRw",
"avatar_url": "https://avatars.githubusercontent.com/u/138357319?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Morrigan-Ship",
"html_url": "https://github.com/Morrigan-Ship",
"followers_url": "https://api.github.com/users/Morrigan-Ship/followers",
"following_url": "https://api.github.com/users/Morrigan-Ship/following{/other_user}",
"gists_url": "https://api.github.com/users/Morrigan-Ship/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Morrigan-Ship/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Morrigan-Ship/subscriptions",
"organizations_url": "https://api.github.com/users/Morrigan-Ship/orgs",
"repos_url": "https://api.github.com/users/Morrigan-Ship/repos",
"events_url": "https://api.github.com/users/Morrigan-Ship/events{/privacy}",
"received_events_url": "https://api.github.com/users/Morrigan-Ship/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 3 | 2025-01-09T18:07:28 | 2025-01-10T22:28:46 | 2025-01-10T22:28:42 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8366/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8366/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4260 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4260/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4260/comments | https://api.github.com/repos/ollama/ollama/issues/4260/events | https://github.com/ollama/ollama/issues/4260 | 2,285,566,329 | I_kwDOJ0Z1Ps6IOvl5 | 4,260 | Error: could not connect to ollama app, is it running? | {
"login": "starMagic",
"id": 4728358,
"node_id": "MDQ6VXNlcjQ3MjgzNTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/4728358?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/starMagic",
"html_url": "https://github.com/starMagic",
"followers_url": "https://api.github.com/users/starMagic/followers",
"following_url": "https://api.github.com/users/starMagic/following{/other_user}",
"gists_url": "https://api.github.com/users/starMagic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/starMagic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/starMagic/subscriptions",
"organizations_url": "https://api.github.com/users/starMagic/orgs",
"repos_url": "https://api.github.com/users/starMagic/repos",
"events_url": "https://api.github.com/users/starMagic/events{/privacy}",
"received_events_url": "https://api.github.com/users/starMagic/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1 | 2024-05-08T13:11:05 | 2024-05-21T18:34:09 | 2024-05-21T18:34:09 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When trying to run the command `ollama list`, the following error occurs:
server.log
```
2024/05/08 20:50:26 routes.go:989: INFO server config env="map[OLLAMA_DEBUG:false OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:*] OLLAMA_RUNNERS_DIR:C:\\Users\\Qiang.Liu\\AppData\\Local\\Programs\\Ollama\\ollama_runners OLLAMA_TMPDIR:]"
time=2024-05-08T20:50:26.065+08:00 level=INFO source=images.go:897 msg="total blobs: 16"
time=2024-05-08T20:50:26.068+08:00 level=INFO source=images.go:904 msg="total unused blobs removed: 0"
time=2024-05-08T20:50:26.071+08:00 level=INFO source=routes.go:1034 msg="Listening on 127.0.0.1:11434 (version 0.1.34)"
time=2024-05-08T20:50:26.071+08:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11.3 rocm_v5.7]"
time=2024-05-08T20:50:26.071+08:00 level=INFO source=gpu.go:122 msg="Detecting GPUs"
time=2024-05-08T20:50:26.091+08:00 level=INFO source=cpu_common.go:15 msg="CPU has AVX"
Exception 0xc0000005 0x8 0x228ed7065e0 0x228ed7065e0
PC=0x228ed7065e0
signal arrived during external code execution
runtime.cgocall(0xf73c20, 0xc000680408)
runtime/cgocall.go:157 +0x3e fp=0xc0003850d0 sp=0xc000385098 pc=0xf092fe
syscall.SyscallN(0x7ffdc82ddcd0?, {0xc0000ff758?, 0x1?, 0x7ffdc7fd0000?})
runtime/syscall_windows.go:544 +0x107 fp=0xc000385148 sp=0xc0003850d0 pc=0xf6f147
github.com/ollama/ollama/gpu.(*HipLib).AMDDriverVersion(0xc0000be750)
github.com/ollama/ollama/gpu/amd_hip_windows.go:82 +0x69 fp=0xc0003851b8 sp=0xc000385148 pc=0x139fc09
github.com/ollama/ollama/gpu.AMDGetGPUInfo()
github.com/ollama/ollama/gpu/amd_windows.go:37 +0x91 fp=0xc000385840 sp=0xc0003851b8 pc=0x13a0331
github.com/ollama/ollama/gpu.GetGPUInfo()
github.com/ollama/ollama/gpu/gpu.go:214 +0x625 fp=0xc000385ae8 sp=0xc000385840 pc=0x13a4845
github.com/ollama/ollama/server.Serve({0x2009900, 0xc000276bc0})
github.com/ollama/ollama/server/routes.go:1059 +0x771 fp=0xc000385c70 sp=0xc000385ae8 pc=0x19ef231
github.com/ollama/ollama/cmd.RunServer(0xc0000a8b00?, {0x27b2ae0?, 0x4?, 0x1e6cc3e?})
github.com/ollama/ollama/cmd/cmd.go:901 +0x17c fp=0xc000385d58 sp=0xc000385c70 pc=0x1a092dc
github.com/spf13/cobra.(*Command).execute(0xc0004b4908, {0x27b2ae0, 0x0, 0x0})
github.com/spf13/[email protected]/command.go:940 +0x882 fp=0xc000385e78 sp=0xc000385d58 pc=0x12aaa22
github.com/spf13/cobra.(*Command).ExecuteC(0xc00002d808)
github.com/spf13/[email protected]/command.go:1068 +0x3a5 fp=0xc000385f30 sp=0xc000385e78 pc=0x12ab265
github.com/spf13/cobra.(*Command).Execute(...)
github.com/spf13/[email protected]/command.go:992
github.com/spf13/cobra.(*Command).ExecuteContext(...)
github.com/spf13/[email protected]/command.go:985
main.main()
github.com/ollama/ollama/main.go:11 +0x4d fp=0xc000385f50 sp=0xc000385f30 pc=0x1a1208d
runtime.main()
runtime/proc.go:271 +0x28b fp=0xc000385fe0 sp=0xc000385f50 pc=0xf4134b
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc000385fe8 sp=0xc000385fe0 pc=0xf72461
goroutine 2 gp=0xc000084700 m=nil [force gc (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:402 +0xce fp=0xc000087fa8 sp=0xc000087f88 pc=0xf4174e
runtime.goparkunlock(...)
runtime/proc.go:408
runtime.forcegchelper()
runtime/proc.go:326 +0xb8 fp=0xc000087fe0 sp=0xc000087fa8 pc=0xf415d8
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc000087fe8 sp=0xc000087fe0 pc=0xf72461
created by runtime.init.6 in goroutine 1
runtime/proc.go:314 +0x1a
goroutine 3 gp=0xc000084a80 m=nil [GC sweep wait]:
runtime.gopark(0x1?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:402 +0xce fp=0xc000089f80 sp=0xc000089f60 pc=0xf4174e
runtime.goparkunlock(...)
runtime/proc.go:408
runtime.bgsweep(0xc00003a070)
runtime/mgcsweep.go:318 +0xdf fp=0xc000089fc8 sp=0xc000089f80 pc=0xf2b7ff
runtime.gcenable.gowrap1()
runtime/mgc.go:203 +0x25 fp=0xc000089fe0 sp=0xc000089fc8 pc=0xf200a5
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc000089fe8 sp=0xc000089fe0 pc=0xf72461
created by runtime.gcenable in goroutine 1
runtime/mgc.go:203 +0x66
goroutine 4 gp=0xc000084c40 m=nil [GC scavenge wait]:
runtime.gopark(0x10000?, 0x1ffb9f0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:402 +0xce fp=0xc000099f78 sp=0xc000099f58 pc=0xf4174e
runtime.goparkunlock(...)
runtime/proc.go:408
runtime.(*scavengerState).park(0x2726fc0)
runtime/mgcscavenge.go:425 +0x49 fp=0xc000099fa8 sp=0xc000099f78 pc=0xf29189
runtime.bgscavenge(0xc00003a070)
runtime/mgcscavenge.go:658 +0x59 fp=0xc000099fc8 sp=0xc000099fa8 pc=0xf29739
runtime.gcenable.gowrap2()
runtime/mgc.go:204 +0x25 fp=0xc000099fe0 sp=0xc000099fc8 pc=0xf20045
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc000099fe8 sp=0xc000099fe0 pc=0xf72461
created by runtime.gcenable in goroutine 1
runtime/mgc.go:204 +0xa5
goroutine 18 gp=0xc000104380 m=nil [finalizer wait]:
runtime.gopark(0xc00008be48?, 0xf13445?, 0xa8?, 0x1?, 0xc000084000?)
runtime/proc.go:402 +0xce fp=0xc00008be20 sp=0xc00008be00 pc=0xf4174e
runtime.runfinq()
runtime/mfinal.go:194 +0x107 fp=0xc00008bfe0 sp=0xc00008be20 pc=0xf1f127
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc00008bfe8 sp=0xc00008bfe0 pc=0xf72461
created by runtime.createfing in goroutine 1
runtime/mfinal.go:164 +0x3d
goroutine 5 gp=0xc000085340 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:402 +0xce fp=0xc00009bf50 sp=0xc00009bf30 pc=0xf4174e
runtime.gcBgMarkWorker()
runtime/mgc.go:1310 +0xe5 fp=0xc00009bfe0 sp=0xc00009bf50 pc=0xf221e5
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc00009bfe8 sp=0xc00009bfe0 pc=0xf72461
created by runtime.gcBgMarkStartWorkers in goroutine 1
runtime/mgc.go:1234 +0x1c
goroutine 19 gp=0xc000104fc0 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:402 +0xce fp=0xc000095f50 sp=0xc000095f30 pc=0xf4174e
runtime.gcBgMarkWorker()
runtime/mgc.go:1310 +0xe5 fp=0xc000095fe0 sp=0xc000095f50 pc=0xf221e5
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc000095fe8 sp=0xc000095fe0 pc=0xf72461
created by runtime.gcBgMarkStartWorkers in goroutine 1
runtime/mgc.go:1234 +0x1c
goroutine 6 gp=0xc000085500 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:402 +0xce fp=0xc00041df50 sp=0xc00041df30 pc=0xf4174e
runtime.gcBgMarkWorker()
runtime/mgc.go:1310 +0xe5 fp=0xc00041dfe0 sp=0xc00041df50 pc=0xf221e5
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc00041dfe8 sp=0xc00041dfe0 pc=0xf72461
created by runtime.gcBgMarkStartWorkers in goroutine 1
runtime/mgc.go:1234 +0x1c
goroutine 20 gp=0xc000105180 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:402 +0xce fp=0xc000097f50 sp=0xc000097f30 pc=0xf4174e
runtime.gcBgMarkWorker()
runtime/mgc.go:1310 +0xe5 fp=0xc000097fe0 sp=0xc000097f50 pc=0xf221e5
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc000097fe8 sp=0xc000097fe0 pc=0xf72461
created by runtime.gcBgMarkStartWorkers in goroutine 1
runtime/mgc.go:1234 +0x1c
goroutine 7 gp=0xc0000856c0 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:402 +0xce fp=0xc00041ff50 sp=0xc00041ff30 pc=0xf4174e
runtime.gcBgMarkWorker()
runtime/mgc.go:1310 +0xe5 fp=0xc00041ffe0 sp=0xc00041ff50 pc=0xf221e5
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc00041ffe8 sp=0xc00041ffe0 pc=0xf72461
created by runtime.gcBgMarkStartWorkers in goroutine 1
runtime/mgc.go:1234 +0x1c
goroutine 21 gp=0xc000105340 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:402 +0xce fp=0xc000419f50 sp=0xc000419f30 pc=0xf4174e
runtime.gcBgMarkWorker()
runtime/mgc.go:1310 +0xe5 fp=0xc000419fe0 sp=0xc000419f50 pc=0xf221e5
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc000419fe8 sp=0xc000419fe0 pc=0xf72461
created by runtime.gcBgMarkStartWorkers in goroutine 1
runtime/mgc.go:1234 +0x1c
goroutine 8 gp=0xc000085880 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:402 +0xce fp=0xc000425f50 sp=0xc000425f30 pc=0xf4174e
runtime.gcBgMarkWorker()
runtime/mgc.go:1310 +0xe5 fp=0xc000425fe0 sp=0xc000425f50 pc=0xf221e5
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc000425fe8 sp=0xc000425fe0 pc=0xf72461
created by runtime.gcBgMarkStartWorkers in goroutine 1
runtime/mgc.go:1234 +0x1c
goroutine 22 gp=0xc000105500 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:402 +0xce fp=0xc00041bf50 sp=0xc00041bf30 pc=0xf4174e
runtime.gcBgMarkWorker()
runtime/mgc.go:1310 +0xe5 fp=0xc00041bfe0 sp=0xc00041bf50 pc=0xf221e5
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc00041bfe8 sp=0xc00041bfe0 pc=0xf72461
created by runtime.gcBgMarkStartWorkers in goroutine 1
runtime/mgc.go:1234 +0x1c
goroutine 9 gp=0xc000085a40 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:402 +0xce fp=0xc000427f50 sp=0xc000427f30 pc=0xf4174e
runtime.gcBgMarkWorker()
runtime/mgc.go:1310 +0xe5 fp=0xc000427fe0 sp=0xc000427f50 pc=0xf221e5
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc000427fe8 sp=0xc000427fe0 pc=0xf72461
created by runtime.gcBgMarkStartWorkers in goroutine 1
runtime/mgc.go:1234 +0x1c
goroutine 23 gp=0xc0001056c0 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:402 +0xce fp=0xc000421f50 sp=0xc000421f30 pc=0xf4174e
runtime.gcBgMarkWorker()
runtime/mgc.go:1310 +0xe5 fp=0xc000421fe0 sp=0xc000421f50 pc=0xf221e5
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc000421fe8 sp=0xc000421fe0 pc=0xf72461
created by runtime.gcBgMarkStartWorkers in goroutine 1
runtime/mgc.go:1234 +0x1c
goroutine 10 gp=0xc000085c00 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:402 +0xce fp=0xc00042df50 sp=0xc00042df30 pc=0xf4174e
runtime.gcBgMarkWorker()
runtime/mgc.go:1310 +0xe5 fp=0xc00042dfe0 sp=0xc00042df50 pc=0xf221e5
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc00042dfe8 sp=0xc00042dfe0 pc=0xf72461
created by runtime.gcBgMarkStartWorkers in goroutine 1
runtime/mgc.go:1234 +0x1c
goroutine 24 gp=0xc000105880 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:402 +0xce fp=0xc000423f50 sp=0xc000423f30 pc=0xf4174e
runtime.gcBgMarkWorker()
runtime/mgc.go:1310 +0xe5 fp=0xc000423fe0 sp=0xc000423f50 pc=0xf221e5
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc000423fe8 sp=0xc000423fe0 pc=0xf72461
created by runtime.gcBgMarkStartWorkers in goroutine 1
runtime/mgc.go:1234 +0x1c
goroutine 11 gp=0xc000085dc0 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:402 +0xce fp=0xc00042ff50 sp=0xc00042ff30 pc=0xf4174e
runtime.gcBgMarkWorker()
runtime/mgc.go:1310 +0xe5 fp=0xc00042ffe0 sp=0xc00042ff50 pc=0xf221e5
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc00042ffe8 sp=0xc00042ffe0 pc=0xf72461
created by runtime.gcBgMarkStartWorkers in goroutine 1
runtime/mgc.go:1234 +0x1c
goroutine 34 gp=0xc000482000 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:402 +0xce fp=0xc000429f50 sp=0xc000429f30 pc=0xf4174e
runtime.gcBgMarkWorker()
runtime/mgc.go:1310 +0xe5 fp=0xc000429fe0 sp=0xc000429f50 pc=0xf221e5
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc000429fe8 sp=0xc000429fe0 pc=0xf72461
created by runtime.gcBgMarkStartWorkers in goroutine 1
runtime/mgc.go:1234 +0x1c
goroutine 25 gp=0xc000105a40 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:402 +0xce fp=0xc0002f5f50 sp=0xc0002f5f30 pc=0xf4174e
runtime.gcBgMarkWorker()
runtime/mgc.go:1310 +0xe5 fp=0xc0002f5fe0 sp=0xc0002f5f50 pc=0xf221e5
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc0002f5fe8 sp=0xc0002f5fe0 pc=0xf72461
created by runtime.gcBgMarkStartWorkers in goroutine 1
runtime/mgc.go:1234 +0x1c
goroutine 35 gp=0xc0004821c0 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:402 +0xce fp=0xc00042bf50 sp=0xc00042bf30 pc=0xf4174e
runtime.gcBgMarkWorker()
runtime/mgc.go:1310 +0xe5 fp=0xc00042bfe0 sp=0xc00042bf50 pc=0xf221e5
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc00042bfe8 sp=0xc00042bfe0 pc=0xf72461
created by runtime.gcBgMarkStartWorkers in goroutine 1
runtime/mgc.go:1234 +0x1c
goroutine 26 gp=0xc000105c00 m=nil [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:402 +0xce fp=0xc0002f7f50 sp=0xc0002f7f30 pc=0xf4174e
runtime.gcBgMarkWorker()
runtime/mgc.go:1310 +0xe5 fp=0xc0002f7fe0 sp=0xc0002f7f50 pc=0xf221e5
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc0002f7fe8 sp=0xc0002f7fe0 pc=0xf72461
created by runtime.gcBgMarkStartWorkers in goroutine 1
runtime/mgc.go:1234 +0x1c
goroutine 36 gp=0xc000482380 m=nil [GC worker (idle)]:
runtime.gopark(0xa5c45b8f8c?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:402 +0xce fp=0xc0002f1f50 sp=0xc0002f1f30 pc=0xf4174e
runtime.gcBgMarkWorker()
runtime/mgc.go:1310 +0xe5 fp=0xc0002f1fe0 sp=0xc0002f1f50 pc=0xf221e5
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc0002f1fe8 sp=0xc0002f1fe0 pc=0xf72461
created by runtime.gcBgMarkStartWorkers in goroutine 1
runtime/mgc.go:1234 +0x1c
goroutine 27 gp=0xc000105dc0 m=nil [GC worker (idle)]:
runtime.gopark(0xa5c45b8f8c?, 0x3?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:402 +0xce fp=0xc0002fdf50 sp=0xc0002fdf30 pc=0xf4174e
runtime.gcBgMarkWorker()
runtime/mgc.go:1310 +0xe5 fp=0xc0002fdfe0 sp=0xc0002fdf50 pc=0xf221e5
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc0002fdfe8 sp=0xc0002fdfe0 pc=0xf72461
created by runtime.gcBgMarkStartWorkers in goroutine 1
runtime/mgc.go:1234 +0x1c
goroutine 37 gp=0xc000482540 m=nil [GC worker (idle)]:
runtime.gopark(0x27b4a60?, 0x1?, 0x4c?, 0xcc?, 0x0?)
runtime/proc.go:402 +0xce fp=0xc0002f3f50 sp=0xc0002f3f30 pc=0xf4174e
runtime.gcBgMarkWorker()
runtime/mgc.go:1310 +0xe5 fp=0xc0002f3fe0 sp=0xc0002f3f50 pc=0xf221e5
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc0002f3fe8 sp=0xc0002f3fe0 pc=0xf72461
created by runtime.gcBgMarkStartWorkers in goroutine 1
runtime/mgc.go:1234 +0x1c
goroutine 28 gp=0xc000500000 m=nil [GC worker (idle)]:
runtime.gopark(0x27b4a60?, 0x1?, 0x9c?, 0x61?, 0x0?)
runtime/proc.go:402 +0xce fp=0xc0002fff50 sp=0xc0002fff30 pc=0xf4174e
runtime.gcBgMarkWorker()
runtime/mgc.go:1310 +0xe5 fp=0xc0002fffe0 sp=0xc0002fff50 pc=0xf221e5
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc0002fffe8 sp=0xc0002fffe0 pc=0xf72461
created by runtime.gcBgMarkStartWorkers in goroutine 1
runtime/mgc.go:1234 +0x1c
goroutine 38 gp=0xc000482700 m=nil [GC worker (idle)]:
runtime.gopark(0x27b4a60?, 0x1?, 0x7c?, 0xd9?, 0x0?)
runtime/proc.go:402 +0xce fp=0xc0002f9f50 sp=0xc0002f9f30 pc=0xf4174e
runtime.gcBgMarkWorker()
runtime/mgc.go:1310 +0xe5 fp=0xc0002f9fe0 sp=0xc0002f9f50 pc=0xf221e5
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc0002f9fe8 sp=0xc0002f9fe0 pc=0xf72461
created by runtime.gcBgMarkStartWorkers in goroutine 1
runtime/mgc.go:1234 +0x1c
goroutine 29 gp=0xc0005001c0 m=nil [GC worker (idle)]:
runtime.gopark(0x27b4a60?, 0x1?, 0xd4?, 0xaf?, 0x0?)
runtime/proc.go:402 +0xce fp=0xc000507f50 sp=0xc000507f30 pc=0xf4174e
runtime.gcBgMarkWorker()
runtime/mgc.go:1310 +0xe5 fp=0xc000507fe0 sp=0xc000507f50 pc=0xf221e5
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc000507fe8 sp=0xc000507fe0 pc=0xf72461
created by runtime.gcBgMarkStartWorkers in goroutine 1
runtime/mgc.go:1234 +0x1c
goroutine 39 gp=0xc0004828c0 m=nil [GC worker (idle)]:
runtime.gopark(0xa5c45b8f8c?, 0x1?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:402 +0xce fp=0xc0002fbf50 sp=0xc0002fbf30 pc=0xf4174e
runtime.gcBgMarkWorker()
runtime/mgc.go:1310 +0xe5 fp=0xc0002fbfe0 sp=0xc0002fbf50 pc=0xf221e5
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc0002fbfe8 sp=0xc0002fbfe0 pc=0xf72461
created by runtime.gcBgMarkStartWorkers in goroutine 1
runtime/mgc.go:1234 +0x1c
goroutine 12 gp=0xc0005841c0 m=5 mp=0xc000100008 [syscall]:
runtime.notetsleepg(0x27b36a0, 0xffffffffffffffff)
runtime/lock_sema.go:296 +0x31 fp=0xc000505fa0 sp=0xc000505f68 pc=0xf11a11
os/signal.signal_recv()
runtime/sigqueue.go:152 +0x29 fp=0xc000505fc0 sp=0xc000505fa0 pc=0xf6e169
os/signal.loop()
os/signal/signal_unix.go:23 +0x13 fp=0xc000505fe0 sp=0xc000505fc0 pc=0x12335d3
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc000505fe8 sp=0xc000505fe0 pc=0xf72461
created by os/signal.Notify.func1.1 in goroutine 1
os/signal/signal.go:151 +0x1f
goroutine 13 gp=0xc000584380 m=nil [chan receive]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:402 +0xce fp=0xc000509f08 sp=0xc000509ee8 pc=0xf4174e
runtime.chanrecv(0xc0004ce4e0, 0x0, 0x1)
runtime/chan.go:583 +0x3cd fp=0xc000509f80 sp=0xc000509f08 pc=0xf0b98d
runtime.chanrecv1(0x0?, 0x0?)
runtime/chan.go:442 +0x12 fp=0xc000509fa8 sp=0xc000509f80 pc=0xf0b592
github.com/ollama/ollama/server.Serve.func2()
github.com/ollama/ollama/server/routes.go:1043 +0x34 fp=0xc000509fe0 sp=0xc000509fa8 pc=0x19ef2f4
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc000509fe8 sp=0xc000509fe0 pc=0xf72461
created by github.com/ollama/ollama/server.Serve in goroutine 1
github.com/ollama/ollama/server/routes.go:1042 +0x6f7
goroutine 14 gp=0xc000584540 m=nil [select]:
runtime.gopark(0xc00047df58?, 0x3?, 0x60?, 0x0?, 0xc00047de32?)
runtime/proc.go:402 +0xce fp=0xc00047dcb8 sp=0xc00047dc98 pc=0xf4174e
runtime.selectgo(0xc00047df58, 0xc00047de2c, 0x0?, 0x0, 0x0?, 0x1)
runtime/select.go:327 +0x725 fp=0xc00047ddd8 sp=0xc00047dcb8 pc=0xf51ba5
github.com/ollama/ollama/server.(*Scheduler).processPending(0xc000174af0, {0x200c0c0, 0xc000174aa0})
github.com/ollama/ollama/server/sched.go:97 +0xcf fp=0xc00047dfb8 sp=0xc00047ddd8 pc=0x19f27af
github.com/ollama/ollama/server.(*Scheduler).Run.func1()
github.com/ollama/ollama/server/sched.go:87 +0x1f fp=0xc00047dfe0 sp=0xc00047dfb8 pc=0x19f26bf
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc00047dfe8 sp=0xc00047dfe0 pc=0xf72461
created by github.com/ollama/ollama/server.(*Scheduler).Run in goroutine 1
github.com/ollama/ollama/server/sched.go:86 +0xb4
goroutine 15 gp=0xc000584700 m=nil [select]:
runtime.gopark(0xc000503f50?, 0x3?, 0x0?, 0x0?, 0xc000503d5a?)
runtime/proc.go:402 +0xce fp=0xc000503be8 sp=0xc000503bc8 pc=0xf4174e
runtime.selectgo(0xc000503f50, 0xc000503d54, 0x0?, 0x0, 0x0?, 0x1)
runtime/select.go:327 +0x725 fp=0xc000503d08 sp=0xc000503be8 pc=0xf51ba5
github.com/ollama/ollama/server.(*Scheduler).processCompleted(0xc000174af0, {0x200c0c0, 0xc000174aa0})
github.com/ollama/ollama/server/sched.go:209 +0xec fp=0xc000503fb8 sp=0xc000503d08 pc=0x19f332c
github.com/ollama/ollama/server.(*Scheduler).Run.func2()
github.com/ollama/ollama/server/sched.go:91 +0x1f fp=0xc000503fe0 sp=0xc000503fb8 pc=0x19f267f
runtime.goexit({})
runtime/asm_amd64.s:1695 +0x1 fp=0xc000503fe8 sp=0xc000503fe0 pc=0xf72461
created by github.com/ollama/ollama/server.(*Scheduler).Run in goroutine 1
github.com/ollama/ollama/server/sched.go:90 +0x110
rax 0x228f4cf83f0
rbx 0x228ed7065e0
rcx 0xd3d
rdx 0x6
rdi 0x228ed7117f0
rsi 0x228f4cf83f0
rbp 0xd3c
rsp 0x45e97fe108
r8 0x2
r9 0x40
r10 0x80
r11 0x22000000007f4100
r12 0x1
r13 0x0
r14 0x228f4b98340
r15 0x228ed7065e0
rip 0x228ed7065e0
rflags 0x10202
cs 0x33
fs 0x53
gs 0x2b
```
### OS
Windows
### GPU
AMD
### CPU
Intel
### Ollama version
0.1.34 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4260/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4260/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/551 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/551/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/551/comments | https://api.github.com/repos/ollama/ollama/issues/551/events | https://github.com/ollama/ollama/issues/551 | 1,901,647,151 | I_kwDOJ0Z1Ps5xWNUv | 551 | Dockerfile.cuda fails to build server | {
"login": "jamesbraza",
"id": 8990777,
"node_id": "MDQ6VXNlcjg5OTA3Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8990777?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jamesbraza",
"html_url": "https://github.com/jamesbraza",
"followers_url": "https://api.github.com/users/jamesbraza/followers",
"following_url": "https://api.github.com/users/jamesbraza/following{/other_user}",
"gists_url": "https://api.github.com/users/jamesbraza/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jamesbraza/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jamesbraza/subscriptions",
"organizations_url": "https://api.github.com/users/jamesbraza/orgs",
"repos_url": "https://api.github.com/users/jamesbraza/repos",
"events_url": "https://api.github.com/users/jamesbraza/events{/privacy}",
"received_events_url": "https://api.github.com/users/jamesbraza/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 7 | 2023-09-18T19:58:22 | 2023-09-26T22:29:49 | 2023-09-26T22:29:49 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | On an AWS EC2 `g4dn.2xlarge` instance with Ollama https://github.com/jmorganca/ollama/tree/c345053a8bf47d5ef8f1fe15d385108059209fba:
```none
> sudo docker buildx build . --file Dockerfile.cuda
[+] Building 57.2s (7/16) docker:default
=> => transferring context: 97B 0.0s
[+] Building 57.3s (7/16) docker:default
=> => transferring dockerfile: 939B 0.0s
[+] Building 113.7s (15/16) docker:default
=> [internal] load .dockerignore 0.0s
=> => transferring context: 97B 0.0s
=> [internal] load build definition from Dockerfile.cuda 0.0s
=> => transferring dockerfile: 939B 0.0s
=> [internal] load metadata for docker.io/nvidia/cuda:12.2.0-runtime-ubuntu22.04 1.0s
=> [internal] load metadata for docker.io/nvidia/cuda:12.2.0-devel-ubuntu22.04 0.9s
=> [stage-0 1/7] FROM docker.io/nvidia/cuda:12.2.0-devel-ubuntu22.04@sha256:0e2d7e252847c334b056937e533683556926f5343a472b6b92f858a7af8ab880 71.6s
...
=> [stage-1 2/3] RUN groupadd ollama && useradd -m -g ollama ollama 20.6s
=> [stage-0 2/7] WORKDIR /go/src/github.com/jmorganca/ollama 21.0s
=> [stage-0 3/7] RUN apt-get update && apt-get install -y git build-essential cmake 13.5s
=> [stage-0 4/7] ADD https://dl.google.com/go/go1.21.1.linux-amd64.tar.gz /tmp/go1.21.1.tar.gz 0.4s
=> [stage-0 5/7] RUN mkdir -p /usr/local && tar xz -C /usr/local </tmp/go1.21.1.tar.gz 3.2s
=> [stage-0 6/7] COPY . . 0.0s
=> ERROR [stage-0 7/7] RUN /usr/local/go/bin/go generate ./... && /usr/local/go/bin/go build -ldflags "-linkmode=external -extldflags='-static' -X=github.com/jmorganca/ollama/version.Version=0.0.0 -X= 2.9s
------
> [stage-0 7/7] RUN /usr/local/go/bin/go generate ./... && /usr/local/go/bin/go build -ldflags "-linkmode=external -extldflags='-static' -X=github.com/jmorganca/ollama/version.Version=0.0.0 -X=github.com/jmorganca/ollama/server.mode=release" .:
0.377 go: downloading github.com/mattn/go-runewidth v0.0.14
0.377 go: downloading github.com/mitchellh/colorstring v0.0.0-20190213212951-d06e56a500db
0.378 go: downloading golang.org/x/term v0.10.0
0.385 go: downloading gonum.org/v1/gonum v0.13.0
0.388 go: downloading github.com/chzyer/readline v1.5.1
0.388 go: downloading golang.org/x/crypto v0.10.0
0.391 go: downloading github.com/pbnjay/memory v0.0.0-20210728143218-7b4eea64cf58
0.391 go: downloading github.com/dustin/go-humanize v1.0.1
0.478 go: downloading github.com/gin-contrib/cors v1.4.0
0.480 go: downloading github.com/olekukonko/tablewriter v0.0.5
0.483 go: downloading github.com/gin-gonic/gin v1.9.1
0.486 go: downloading github.com/spf13/cobra v1.7.0
0.487 go: downloading golang.org/x/exp v0.0.0-20230817173708-d852ddb80c63
0.565 go: downloading golang.org/x/sys v0.11.0
0.566 go: downloading github.com/rivo/uniseg v0.2.0
0.573 go: downloading github.com/spf13/pflag v1.0.5
0.580 go: downloading github.com/gin-contrib/sse v0.1.0
0.580 go: downloading github.com/mattn/go-isatty v0.0.19
0.597 go: downloading golang.org/x/net v0.10.0
0.624 go: downloading github.com/go-playground/validator/v10 v10.14.0
0.624 go: downloading github.com/pelletier/go-toml/v2 v2.0.8
0.651 go: downloading github.com/ugorji/go/codec v1.2.11
0.700 go: downloading google.golang.org/protobuf v1.30.0
0.707 go: downloading gopkg.in/yaml.v3 v3.0.1
0.741 go: downloading github.com/gabriel-vasile/mimetype v1.4.2
0.742 go: downloading github.com/go-playground/universal-translator v0.18.1
0.745 go: downloading github.com/leodido/go-urn v1.2.4
0.778 go: downloading golang.org/x/text v0.10.0
0.841 go: downloading github.com/go-playground/locales v0.14.1
1.894 fatal: not a git repository: /go/src/github.com/jmorganca/ollama/../.git/modules/ollama
1.895 llm/llama.cpp/generate.go:6: running "git": exit status 128
------
Dockerfile.cuda:13
--------------------
12 | ENV GOARCH=$TARGETARCH
13 | >>> RUN /usr/local/go/bin/go generate ./... \
14 | >>> && /usr/local/go/bin/go build -ldflags "-linkmode=external -extldflags='-static' -X=github.com/jmorganca/ollama/version.Version=$VERSION -X=github.com/jmorganca/ollama/server.mode=release" .
15 |
--------------------
ERROR: failed to solve: process "/bin/sh -c /usr/local/go/bin/go generate ./... && /usr/local/go/bin/go build -ldflags \"-linkmode=external -extldflags='-static' -X=github.com/jmorganca/ollama/version.Version=$VERSION -X=github.com/jmorganca/ollama/server.mode=release\" ." did not complete successfully: exit code: 1
```
[Line 13](https://github.com/jmorganca/ollama/blob/c345053a8bf47d5ef8f1fe15d385108059209fba/Dockerfile.cuda#L13) fails with:
```
ERROR: failed to solve: process "/bin/sh -c /usr/local/go/bin/go generate ./... && /usr/local/go/bin/go build -ldflags \"-linkmode=external -extldflags='-static' -X=github.com/jmorganca/ollama/version.Version=$VERSION -X=github.com/jmorganca/ollama/server.mode=release\" ." did not complete successfully: exit code: 1
```
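(The path in that error — `../.git/modules/ollama` — makes me suspect my checkout's `.git` is a pointer file into a parent repository's modules directory, which wouldn't exist inside the Docker build context, since `go generate` shells out to `git` for the `llm/llama.cpp` submodule. A sketch of a retry from a fresh standalone clone, in case that's the culprit:)

```shell
git clone --recurse-submodules https://github.com/jmorganca/ollama.git
cd ollama
sudo docker buildx build . --file Dockerfile.cuda
```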
Any idea what's going on here? | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/551/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/551/timeline | null | not_planned | false |
https://api.github.com/repos/ollama/ollama/issues/5990 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5990/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5990/comments | https://api.github.com/repos/ollama/ollama/issues/5990/events | https://github.com/ollama/ollama/issues/5990 | 2,432,539,183 | I_kwDOJ0Z1Ps6Q_Zov | 5,990 | Tools and properties.type Not Supporting Arrays | {
"login": "xonlly",
"id": 4999786,
"node_id": "MDQ6VXNlcjQ5OTk3ODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4999786?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xonlly",
"html_url": "https://github.com/xonlly",
"followers_url": "https://api.github.com/users/xonlly/followers",
"following_url": "https://api.github.com/users/xonlly/following{/other_user}",
"gists_url": "https://api.github.com/users/xonlly/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xonlly/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xonlly/subscriptions",
"organizations_url": "https://api.github.com/users/xonlly/orgs",
"repos_url": "https://api.github.com/users/xonlly/repos",
"events_url": "https://api.github.com/users/xonlly/events{/privacy}",
"received_events_url": "https://api.github.com/users/xonlly/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 2 | 2024-07-26T16:06:52 | 2024-10-18T14:29:13 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
**Title:** Issue with `DynamicStructuredTool` and `properties.type` Not Supporting Arrays in LangchainJS
**Description:**
I am encountering an issue when using `DynamicStructuredTool` in LangchainJS. Specifically, the `type` property within `properties` does not currently support arrays. This results in an error when I try to use a nullable argument specified as `["string", "null"]`. The error message is as follows:
```
LLM run errored with error: "HTTP error! status: 400 Response:
{\"error\":{\"message\":\"json: cannot unmarshal array into Go struct field .tools.function.parameters.properties.type of type string\",\"type\":\"invalid_request_error\",\"param\":null,\"code\":null}}\n
\nMistralAPIError: HTTP error! status: 400 Response:
{\"error\":{\"message\":\"json: cannot unmarshal array into Go struct field .tools.function.parameters.properties.type of type string\",\"type\":\"invalid_request_error\",\"param\":null,\"code\":null}}\n
at MistralClient._request (file:///Users/xonlly/projects/devana.ai/serveur/node_modules/@mistralai/mistralai/src/client.js:162:17)\n
at processTicksAndRejections (node:internal/process/task_queues:95:5)\n
at MistralClient.chat (file:///Users/xonlly/projects/devana.ai/serveur/node_modules/@mistralai/mistralai/src/client.js:338:22)\n
at /Users/xonlly/projects/devana.ai/serveur/node_modules/@langchain/mistralai/dist/chat_models.cjs:366:27\n
at RetryOperation._fn (/Users/xonlly/projects/devana.ai/serveur/node_modules/p-retry/index.js:50:12)"
```
**JSON Configuration:**
```json
{
"model": "mistral-nemo",
"messages": [
{
"role": "user",
"content": "Hello"
}
],
"tools": [
{
"type": "function",
"function": {
"name": "search-files",
"description": "Search all files in the collection",
"parameters": {
"type": "object",
"properties": {
"query": {
"type": ["string", "null"]
}
},
"required": ["query"],
"additionalProperties": false,
"$schema": "http://json-schema.org/draft-07/schema#"
}
}
}
],
"temperature": 0.7,
"top_p": 1,
"stream": false,
"tool_choice": "required"
}
```
**Steps to Reproduce:**
1. Use LangchainJS with `DynamicStructuredTool`.
2. Define a nullable argument with `["string", "null"]` in the `type` property.
3. Observe the error upon execution.
**Expected Behavior:**
The `type` property should accept arrays to support nullable arguments without causing an error.
**Actual Behavior:**
An error occurs because the `type` property does not support arrays, resulting in a failure to unmarshal the array into a Go struct field.
**Proposed Solution:**
Update the `type` property within `properties` to support arrays, allowing for nullable arguments to be correctly processed.
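For illustration, a minimal sketch in Go (where the unmarshal error originates) of accepting either form; `PropertyType` is a hypothetical name, not Ollama's actual type:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// PropertyType is a hypothetical type (not Ollama's actual one) that
// accepts either a JSON string ("string") or an array of strings
// (["string", "null"]) for a JSON Schema "type" field.
type PropertyType []string

func (t *PropertyType) UnmarshalJSON(data []byte) error {
	var single string
	if err := json.Unmarshal(data, &single); err == nil {
		*t = PropertyType{single}
		return nil
	}
	var many []string
	if err := json.Unmarshal(data, &many); err != nil {
		return fmt.Errorf("type must be a string or an array of strings: %w", err)
	}
	*t = PropertyType(many)
	return nil
}

func main() {
	for _, raw := range []string{`"string"`, `["string", "null"]`} {
		var t PropertyType
		if err := json.Unmarshal([]byte(raw), &t); err != nil {
			fmt.Println("error:", err)
			continue
		}
		fmt.Printf("%s -> %v\n", raw, t)
	}
}
```

Normalizing the single-string case into a one-element slice would keep existing consumers of the field working unchanged.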
**Environment:**
- Ollama version: 0.3.0
- Operating System: Ubuntu 22.04
**Additional Context:**
Any guidance or updates to address this limitation would be greatly appreciated. Thank you!
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.0 | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5990/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5990/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/6953 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6953/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6953/comments | https://api.github.com/repos/ollama/ollama/issues/6953/events | https://github.com/ollama/ollama/issues/6953 | 2,547,721,251 | I_kwDOJ0Z1Ps6X2yQj | 6,953 | AMD ROCm Card can not use flash attention | {
"login": "superligen",
"id": 4199207,
"node_id": "MDQ6VXNlcjQxOTkyMDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4199207?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/superligen",
"html_url": "https://github.com/superligen",
"followers_url": "https://api.github.com/users/superligen/followers",
"following_url": "https://api.github.com/users/superligen/following{/other_user}",
"gists_url": "https://api.github.com/users/superligen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/superligen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/superligen/subscriptions",
"organizations_url": "https://api.github.com/users/superligen/orgs",
"repos_url": "https://api.github.com/users/superligen/repos",
"events_url": "https://api.github.com/users/superligen/events{/privacy}",
"received_events_url": "https://api.github.com/users/superligen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
}
] | open | false | null | [] | null | 4 | 2024-09-25T11:26:27 | 2024-12-19T19:36:01 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
My card is a W7900, with ROCm driver 6.3. I found that the llama.cpp server started by Ollama always runs without the -fa flag.
I checked the code and found:
```go
// only cuda (compute capability 7+) and metal support flash attention
if g.Library != "metal" && (g.Library != "cuda" || g.DriverMajor < 7) {
	flashAttnEnabled = false
}
```
This check seems wrong.
Ref: https://github.com/Dao-AILab/flash-attention/pull/1010 — support for ROCm has been added to Flash Attention 2.
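As a hedged sketch (not a patch), the gate quoted above could be relaxed so ROCm qualifies too; `gpuInfo` here is a stub mirroring only the two fields the gate uses, and whether every ROCm generation has flash-attention kernels would still need per-GPU checks:

```go
package main

import "fmt"

// gpuInfo is a stub with just the fields the gate above reads; the
// real struct lives in Ollama's gpu package.
type gpuInfo struct {
	Library     string
	DriverMajor int
}

// supportsFlashAttn is a relaxed version of the gate: Metal and ROCm
// qualify unconditionally, CUDA only with compute capability 7+.
func supportsFlashAttn(g gpuInfo) bool {
	switch g.Library {
	case "metal", "rocm":
		return true
	case "cuda":
		return g.DriverMajor >= 7
	default:
		return false
	}
}

func main() {
	for _, g := range []gpuInfo{
		{Library: "rocm", DriverMajor: 6},
		{Library: "cuda", DriverMajor: 8},
		{Library: "cuda", DriverMajor: 6},
	} {
		fmt.Printf("%s (major %d): flash attention = %v\n",
			g.Library, g.DriverMajor, supportsFlashAttn(g))
	}
}
```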
### OS
Linux
### GPU
AMD
### CPU
Intel
### Ollama version
_No response_ | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6953/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6953/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/6670 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6670/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6670/comments | https://api.github.com/repos/ollama/ollama/issues/6670/events | https://github.com/ollama/ollama/issues/6670 | 2,509,778,404 | I_kwDOJ0Z1Ps6VmC3k | 6,670 | expose slots data through API | {
"login": "aiseei",
"id": 30615541,
"node_id": "MDQ6VXNlcjMwNjE1NTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/30615541?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aiseei",
"html_url": "https://github.com/aiseei",
"followers_url": "https://api.github.com/users/aiseei/followers",
"following_url": "https://api.github.com/users/aiseei/following{/other_user}",
"gists_url": "https://api.github.com/users/aiseei/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aiseei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aiseei/subscriptions",
"organizations_url": "https://api.github.com/users/aiseei/orgs",
"repos_url": "https://api.github.com/users/aiseei/repos",
"events_url": "https://api.github.com/users/aiseei/events{/privacy}",
"received_events_url": "https://api.github.com/users/aiseei/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 1 | 2024-09-06T07:58:13 | 2024-09-06T15:38:13 | 2024-09-06T15:38:13 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | hI
Can the information that can be seen in the logs be exposed through /slots api per server/port ? We need this to manage queuing in our load balancer. This has been exposed by llama cpp already. https://github.com/ggerganov/llama.cpp/tree/master/examples/server#get-slots-returns-the-current-slots-processing-state
```
DEBUG [update_slots] all slots are idle and system prompt is empty, clear the KV cache | tid="140031137009664" timestamp=1725608401
DEBUG [process_single_task] slot data | n_idle_slots=3 n_processing_slots=0 task_id=0 tid="140031137009664" timestamp=1725608401
```
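For context, a minimal sketch of what a load balancer could do with such an endpoint, assuming Ollama proxied llama.cpp's /slots JSON (an array of slot objects; the `is_processing` field name is an assumption taken from llama.cpp's server docs, and the backend address is illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// slot mirrors a subset of llama.cpp's GET /slots response; the
// is_processing field name is an assumption from its server docs.
type slot struct {
	ID           int  `json:"id"`
	IsProcessing bool `json:"is_processing"`
}

// idleSlots polls one backend and counts free slots, which a load
// balancer could use for queuing decisions.
func idleSlots(baseURL string) (int, error) {
	resp, err := http.Get(baseURL + "/slots")
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()

	var slots []slot
	if err := json.NewDecoder(resp.Body).Decode(&slots); err != nil {
		return 0, err
	}
	idle := 0
	for _, s := range slots {
		if !s.IsProcessing {
			idle++
		}
	}
	return idle, nil
}

func main() {
	n, err := idleSlots("http://127.0.0.1:8080")
	if err != nil {
		fmt.Println("poll failed:", err)
		return
	}
	fmt.Println("idle slots:", n)
}
```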
Many thanks! | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6670/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6670/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4248 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4248/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4248/comments | https://api.github.com/repos/ollama/ollama/issues/4248/events | https://github.com/ollama/ollama/issues/4248 | 2,284,620,426 | I_kwDOJ0Z1Ps6ILIqK | 4,248 | error loading model architecture: unknown model architecture: 'qwen2moe' | {
"login": "li904775857",
"id": 43633294,
"node_id": "MDQ6VXNlcjQzNjMzMjk0",
"avatar_url": "https://avatars.githubusercontent.com/u/43633294?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/li904775857",
"html_url": "https://github.com/li904775857",
"followers_url": "https://api.github.com/users/li904775857/followers",
"following_url": "https://api.github.com/users/li904775857/following{/other_user}",
"gists_url": "https://api.github.com/users/li904775857/gists{/gist_id}",
"starred_url": "https://api.github.com/users/li904775857/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/li904775857/subscriptions",
"organizations_url": "https://api.github.com/users/li904775857/orgs",
"repos_url": "https://api.github.com/users/li904775857/repos",
"events_url": "https://api.github.com/users/li904775857/events{/privacy}",
"received_events_url": "https://api.github.com/users/li904775857/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | open | false | null | [] | null | 1 | 2024-05-08T03:50:18 | 2024-07-25T17:43:34 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Qwen1.5-MoE-A2.7B-Chat was converted with convert-hf-to-gguf.py following the documented process. After 4-bit quantization an Ollama Modelfile was created, but loading fails with `unknown model architecture: 'qwen2moe'`. What is the cause of this?
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.1.32 | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4248/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4248/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/5188 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5188/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5188/comments | https://api.github.com/repos/ollama/ollama/issues/5188/events | https://github.com/ollama/ollama/pull/5188 | 2,364,778,595 | PR_kwDOJ0Z1Ps5zF3gJ | 5,188 | fix: skip os.removeAll() if PID does not exist | {
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/joshyan1/followers",
"following_url": "https://api.github.com/users/joshyan1/following{/other_user}",
"gists_url": "https://api.github.com/users/joshyan1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joshyan1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joshyan1/subscriptions",
"organizations_url": "https://api.github.com/users/joshyan1/orgs",
"repos_url": "https://api.github.com/users/joshyan1/repos",
"events_url": "https://api.github.com/users/joshyan1/events{/privacy}",
"received_events_url": "https://api.github.com/users/joshyan1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-06-20T15:54:26 | 2024-06-20T17:40:59 | 2024-06-20T17:40:59 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5188",
"html_url": "https://github.com/ollama/ollama/pull/5188",
"diff_url": "https://github.com/ollama/ollama/pull/5188.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5188.patch",
"merged_at": "2024-06-20T17:40:59"
} | Previously, this deleted all directories in $TMPDIR starting with "ollama". This change adds a `continue` to skip the directory removal if a PID doesn't exist, to prevent accidentally deleting directories in tmpdir that share the ollama name but weren't created by us for our processes.
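For reference, the related liveness probe (whether a recorded PID still belongs to a running process) typically looks like this on Unix-like systems; this is an illustrative sketch, not necessarily the PR's exact check:

```go
package main

import (
	"fmt"
	"os"
	"syscall"
)

// pidExists reports whether a process with the given PID is alive on
// Unix-like systems: os.FindProcess always succeeds there, so we probe
// with signal 0, which delivers nothing but fails if the PID is gone.
// (An EPERM error would also mean the process exists; ignored here.)
func pidExists(pid int) bool {
	proc, err := os.FindProcess(pid)
	if err != nil {
		return false
	}
	return proc.Signal(syscall.Signal(0)) == nil
}

func main() {
	fmt.Println(pidExists(os.Getpid())) // true: we are running
	fmt.Println(pidExists(99999999))    // almost certainly false
}
```

In the cleanup loop, either a missing PID marker or a still-running process maps to `continue` rather than `os.RemoveAll`.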
resolves: https://github.com/ollama/ollama/issues/5129 | {
"login": "joshyan1",
"id": 76125168,
"node_id": "MDQ6VXNlcjc2MTI1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshyan1",
"html_url": "https://github.com/joshyan1",
"followers_url": "https://api.github.com/users/joshyan1/followers",
"following_url": "https://api.github.com/users/joshyan1/following{/other_user}",
"gists_url": "https://api.github.com/users/joshyan1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joshyan1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joshyan1/subscriptions",
"organizations_url": "https://api.github.com/users/joshyan1/orgs",
"repos_url": "https://api.github.com/users/joshyan1/repos",
"events_url": "https://api.github.com/users/joshyan1/events{/privacy}",
"received_events_url": "https://api.github.com/users/joshyan1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5188/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5188/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4204 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4204/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4204/comments | https://api.github.com/repos/ollama/ollama/issues/4204/events | https://github.com/ollama/ollama/issues/4204 | 2,281,198,575 | I_kwDOJ0Z1Ps6H-FPv | 4,204 | Support pull from Harbor registry in proxy mode and push to Harbor | {
"login": "ptempier",
"id": 6312537,
"node_id": "MDQ6VXNlcjYzMTI1Mzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/6312537?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ptempier",
"html_url": "https://github.com/ptempier",
"followers_url": "https://api.github.com/users/ptempier/followers",
"following_url": "https://api.github.com/users/ptempier/following{/other_user}",
"gists_url": "https://api.github.com/users/ptempier/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ptempier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ptempier/subscriptions",
"organizations_url": "https://api.github.com/users/ptempier/orgs",
"repos_url": "https://api.github.com/users/ptempier/repos",
"events_url": "https://api.github.com/users/ptempier/events{/privacy}",
"received_events_url": "https://api.github.com/users/ptempier/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 5 | 2024-05-06T15:49:57 | 2024-12-19T06:27:32 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Not sure why its not working, maybe i do something bad.
From other ticket i understand it supposed to work with OCI registry.
What i tried :
```
ollama pull habor-server//ollama.com/library/llama3:text
Error: pull model manifest: 400

ollama pull habor-server/ollama.com/llama3:text
Error: pull model manifest: 400

ollama cp llama2:7b habor-server/aistuff/llama2:7b
ollama push habor-server/aistuff/llama2:7b
retrieving manifest
Error: Get "harbor?nonce=zzzz&scope=&service=&ts=xxxx": unsupported protocol scheme ""
```
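Side note on the final error: `unsupported protocol scheme ""` is Go's HTTP client refusing to dial a URL without a scheme, which suggests the auth-challenge realm returned by the registry was a bare `harbor` rather than an absolute `https://...` URL. A minimal sketch reproducing the same error class (the URL is illustrative):

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// A realm like "harbor?nonce=..." parses as a relative URL with an
	// empty scheme, so the client cannot dial it.
	_, err := http.Get("harbor?nonce=zzzz&scope=&service=&ts=xxxx")
	fmt.Println(err)
	// Prints: Get "harbor?nonce=zzzz&scope=&service=&ts=xxxx": unsupported protocol scheme ""
}
```
| null | {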
"url": "https://api.github.com/repos/ollama/ollama/issues/4204/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4204/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/1223 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1223/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1223/comments | https://api.github.com/repos/ollama/ollama/issues/1223/events | https://github.com/ollama/ollama/pull/1223 | 2,004,804,614 | PR_kwDOJ0Z1Ps5gDJXg | 1,223 | Make alt+backspace delete word | {
"login": "kejcao",
"id": 106453563,
"node_id": "U_kgDOBlhaOw",
"avatar_url": "https://avatars.githubusercontent.com/u/106453563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kejcao",
"html_url": "https://github.com/kejcao",
"followers_url": "https://api.github.com/users/kejcao/followers",
"following_url": "https://api.github.com/users/kejcao/following{/other_user}",
"gists_url": "https://api.github.com/users/kejcao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kejcao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kejcao/subscriptions",
"organizations_url": "https://api.github.com/users/kejcao/orgs",
"repos_url": "https://api.github.com/users/kejcao/repos",
"events_url": "https://api.github.com/users/kejcao/events{/privacy}",
"received_events_url": "https://api.github.com/users/kejcao/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-11-21T17:29:44 | 2023-11-21T20:26:47 | 2023-11-21T20:26:47 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1223",
"html_url": "https://github.com/ollama/ollama/pull/1223",
"diff_url": "https://github.com/ollama/ollama/pull/1223.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1223.patch",
"merged_at": "2023-11-21T20:26:47"
} | In GNU Readline you can press alt+backspace to delete a word. I'm used to this behavior, so it's jarring not to be able to do it. This commit adds the feature.
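A minimal sketch of the intended behavior (skip any spaces left of the cursor, then delete back over the word itself); buffer handling is simplified relative to the real readline package:

```go
package main

import (
	"fmt"
	"unicode"
)

// deleteWord removes the word immediately left of the cursor, the way
// GNU Readline's alt+backspace does: skip trailing spaces first, then
// the run of non-space runes. Returns the new buffer and cursor.
func deleteWord(buf []rune, cursor int) ([]rune, int) {
	i := cursor
	for i > 0 && unicode.IsSpace(buf[i-1]) {
		i--
	}
	for i > 0 && !unicode.IsSpace(buf[i-1]) {
		i--
	}
	return append(buf[:i], buf[cursor:]...), i
}

func main() {
	buf := []rune("ollama run llama2")
	buf, cur := deleteWord(buf, len(buf))
	fmt.Printf("%q cursor=%d\n", string(buf), cur) // "ollama run " cursor=11
}
```
| {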
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1223/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1223/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3007 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3007/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3007/comments | https://api.github.com/repos/ollama/ollama/issues/3007/events | https://github.com/ollama/ollama/issues/3007 | 2,176,498,516 | I_kwDOJ0Z1Ps6BurtU | 3,007 | Search on ollama.com/library is missing lots of models | {
"login": "maxtheman",
"id": 2172753,
"node_id": "MDQ6VXNlcjIxNzI3NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/2172753?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maxtheman",
"html_url": "https://github.com/maxtheman",
"followers_url": "https://api.github.com/users/maxtheman/followers",
"following_url": "https://api.github.com/users/maxtheman/following{/other_user}",
"gists_url": "https://api.github.com/users/maxtheman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maxtheman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maxtheman/subscriptions",
"organizations_url": "https://api.github.com/users/maxtheman/orgs",
"repos_url": "https://api.github.com/users/maxtheman/repos",
"events_url": "https://api.github.com/users/maxtheman/events{/privacy}",
"received_events_url": "https://api.github.com/users/maxtheman/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6573197867,
"node_id": "LA_kwDOJ0Z1Ps8AAAABh8sKKw",
"url": "https://api.github.com/repos/ollama/ollama/labels/ollama.com",
"name": "ollama.com",
"color": "ffffff",
"default": false,
"description": ""
}
] | open | false | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 0 | 2024-03-08T17:46:15 | 2024-03-11T22:18:50 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Current behavior:
Using @ehartford as an example since he's a prolific ollama model contributor:
https://ollama.com/search?q=ehartford&p=1
Shows his models:
<img width="1276" alt="Screenshot 2024-03-08 at 9 45 33 AM" src="https://github.com/ollama/ollama/assets/2172753/08b9dc80-5d94-4b86-82dd-37c0dddac326">
https://ollama.com/library?q=ehartford
Does not:
<img width="1257" alt="Screenshot 2024-03-08 at 9 45 52 AM" src="https://github.com/ollama/ollama/assets/2172753/6f1c248b-a93c-468a-a78c-24811a857731">
Expected behavior is that they would be the same.
| null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3007/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3007/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/3227 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3227/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3227/comments | https://api.github.com/repos/ollama/ollama/issues/3227/events | https://github.com/ollama/ollama/issues/3227 | 2,192,660,065 | I_kwDOJ0Z1Ps6CsVZh | 3,227 | ollama/ollama Docker image: committed modifications aren't saved | {
"login": "nicolasduminil",
"id": 1037978,
"node_id": "MDQ6VXNlcjEwMzc5Nzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1037978?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nicolasduminil",
"html_url": "https://github.com/nicolasduminil",
"followers_url": "https://api.github.com/users/nicolasduminil/followers",
"following_url": "https://api.github.com/users/nicolasduminil/following{/other_user}",
"gists_url": "https://api.github.com/users/nicolasduminil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nicolasduminil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nicolasduminil/subscriptions",
"organizations_url": "https://api.github.com/users/nicolasduminil/orgs",
"repos_url": "https://api.github.com/users/nicolasduminil/repos",
"events_url": "https://api.github.com/users/nicolasduminil/events{/privacy}",
"received_events_url": "https://api.github.com/users/nicolasduminil/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-03-18T16:14:49 | 2024-03-19T13:46:15 | 2024-03-19T08:48:59 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I'm using the Docker image `ollama/ollama:latest`. I run the image and, in the newly created container, I pull `llama2`. Once the pull operation finishes, I check its success with the `ollama list` command.
Then I commit the modification, tag the new image, and push it to Docker Hub.
Pulling it again, running it, and checking for the presence of `llama2` fails, as the output of the `ollama list` command is empty this time.
### What did you expect to see?
I expect that the new augmented image contains `llama2`
### Steps to reproduce
```
$ docker pull ollama/ollama:latest
$ docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
$ docker exec -ti ollama ollama pull llama2
... waiting 10 minutes ...
$ docker exec -ti ollama ollama list
NAME            ID            SIZE    MODIFIED
llama2:latest   78e26419b446  3.8 GB  About a minute ago
$ docker commit ollama
$ docker tag <image_id> nicolasduminil/ollama:llama2
$ docker push nicolasduminil/ollama:llama2
$ docker pull nicolasduminil/ollama:llama2
$ docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama nicolasduminil/ollama:llama2
$ docker exec -ti ollama ollama list
NAME    ID    SIZE    MODIFIED
```
### Are there any recent changes that introduced the issue?
Negative
### OS
Linux
### Architecture
x86
### Platform
_No response_
### Ollama version
_No response_
### GPU
_No response_
### GPU info
_No response_
### CPU
Intel
### Other software
None | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3227/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3227/timeline | null | not_planned | false |
https://api.github.com/repos/ollama/ollama/issues/3293 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3293/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3293/comments | https://api.github.com/repos/ollama/ollama/issues/3293/events | https://github.com/ollama/ollama/issues/3293 | 2,202,706,972 | I_kwDOJ0Z1Ps6DSqQc | 3,293 | ollama run fails with national (non-ASCII) characters in the user name | {
"login": "hgabor47",
"id": 1212585,
"node_id": "MDQ6VXNlcjEyMTI1ODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1212585?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hgabor47",
"html_url": "https://github.com/hgabor47",
"followers_url": "https://api.github.com/users/hgabor47/followers",
"following_url": "https://api.github.com/users/hgabor47/following{/other_user}",
"gists_url": "https://api.github.com/users/hgabor47/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hgabor47/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hgabor47/subscriptions",
"organizations_url": "https://api.github.com/users/hgabor47/orgs",
"repos_url": "https://api.github.com/users/hgabor47/repos",
"events_url": "https://api.github.com/users/hgabor47/events{/privacy}",
"received_events_url": "https://api.github.com/users/hgabor47/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 4 | 2024-03-22T15:01:10 | 2024-05-04T22:03:44 | 2024-05-04T22:03:43 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
![image](https://github.com/ollama/ollama/assets/1212585/5a14f850-213b-471c-90c9-f1f0f8752c31)
My username contains international characters like á, and Ollama does not handle it.
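If it helps triage, a quick diagnostic sketch in Go that exercises a non-ASCII path like the failing profile directory (the path below is illustrative, not the actual install location): Go strings are UTF-8, so this should succeed, and a failure would point at a component re-encoding the path.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Build a path with a non-ASCII component, mirroring a user
	// profile such as C:\Users\Gábor, then create and write into it.
	dir := filepath.Join(os.TempDir(), "Gábor", ".ollama")
	if err := os.MkdirAll(dir, 0o755); err != nil {
		fmt.Println("mkdir failed:", err)
		return
	}
	if err := os.WriteFile(filepath.Join(dir, "probe.txt"), []byte("ok"), 0o644); err != nil {
		fmt.Println("write failed:", err)
		return
	}
	fmt.Println("non-ASCII path handled fine:", dir)
}
```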
### What did you expect to see?
RUN
### Steps to reproduce
1. Create a Windows user with international characters, like: Gábor
2. Start Ollama with: ollama run llama2
### Are there any recent changes that introduced the issue?
_No response_
### OS
Windows
### Architecture
amd64
### Platform
_No response_
### Ollama version
_No response_
### GPU
Nvidia
### GPU info
_No response_
### CPU
AMD
### Other software
_No response_ | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3293/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3293/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6946 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6946/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6946/comments | https://api.github.com/repos/ollama/ollama/issues/6946/events | https://github.com/ollama/ollama/issues/6946 | 2,546,759,749 | I_kwDOJ0Z1Ps6XzHhF | 6,946 | llama runner process has terminated: exit status 0xc0000005 | {
"login": "viosay",
"id": 16093380,
"node_id": "MDQ6VXNlcjE2MDkzMzgw",
"avatar_url": "https://avatars.githubusercontent.com/u/16093380?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/viosay",
"html_url": "https://github.com/viosay",
"followers_url": "https://api.github.com/users/viosay/followers",
"following_url": "https://api.github.com/users/viosay/following{/other_user}",
"gists_url": "https://api.github.com/users/viosay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/viosay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/viosay/subscriptions",
"organizations_url": "https://api.github.com/users/viosay/orgs",
"repos_url": "https://api.github.com/users/viosay/repos",
"events_url": "https://api.github.com/users/viosay/events{/privacy}",
"received_events_url": "https://api.github.com/users/viosay/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | open | false | null | [] | null | 5 | 2024-09-25T02:30:59 | 2024-11-02T17:12:45 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
This is the https://github.com/ollama/ollama/issues/6011 issue again.
**The issue occurs on an embedding call with a model converted using convert_hf_to_gguf.py.**
litellm.llms.ollama.OllamaError: {"error":"llama runner process has terminated: exit status 0xc0000005"}
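For reproduction, a minimal Go sketch hitting the same /api/embed endpoint seen in the log below; the model name is a placeholder for the locally converted Conan embedding model:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Placeholder model name for the locally converted GGUF model.
	body := []byte(`{"model": "conan-embedding-v1", "input": "hello world"}`)
	resp, err := http.Post("http://127.0.0.1:11434/api/embed",
		"application/json", bytes.NewReader(body))
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	// With the converted model this returns HTTP 500 and the
	// "llama runner process has terminated" error above.
	fmt.Println(resp.Status, string(out))
}
```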
```
INFO [wmain] system info | n_threads=6 n_threads_batch=6 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="18380" timestamp=1727231008 total_threads=12
INFO [wmain] HTTP server listening | hostname="127.0.0.1" n_threads_http="11" port="13505" tid="18380" timestamp=1727231008
llama_model_loader: loaded meta data with 26 key-value pairs and 389 tensors from C:\Users\Administrator\.ollama\models\blobs\sha256-aad91e93e9ec705a527cfa8701698055cf473223437acd029762bb77be6fc92d (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = bert
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Conan_Embedding_V1
llama_model_loader: - kv 3: general.size_label str = 324M
llama_model_loader: - kv 4: general.license str = cc-by-nc-4.0
llama_model_loader: - kv 5: general.tags arr[str,1] = ["mteb"]
llama_model_loader: - kv 6: bert.block_count u32 = 24
llama_model_loader: - kv 7: bert.context_length u32 = 512
llama_model_loader: - kv 8: bert.embedding_length u32 = 1024
llama_model_loader: - kv 9: bert.feed_forward_length u32 = 4096
llama_model_loader: - kv 10: bert.attention.head_count u32 = 16
llama_model_loader: - kv 11: bert.attention.layer_norm_epsilon f32 = 0.000000
llama_model_loader: - kv 12: general.file_type u32 = 1
llama_model_loader: - kv 13: bert.attention.causal bool = false
llama_model_loader: - kv 14: bert.pooling_type u32 = 1
llama_model_loader: - kv 15: tokenizer.ggml.token_type_count u32 = 2
llama_model_loader: - kv 16: tokenizer.ggml.model str = bert
llama_model_loader: - kv 17: tokenizer.ggml.pre str = Conan-embedding-v1
llama_model_loader: - kv 18: tokenizer.ggml.tokens arr[str,21128] = ["[PAD]", "[unused1]", "[unused2]", "...
llama_model_loader: - kv 19: tokenizer.ggml.token_type arr[i32,21128] = [3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 20: tokenizer.ggml.unknown_token_id u32 = 100
llama_model_loader: - kv 21: tokenizer.ggml.seperator_token_id u32 = 102
llama_model_loader: - kv 22: tokenizer.ggml.padding_token_id u32 = 0
llama_model_loader: - kv 23: tokenizer.ggml.cls_token_id u32 = 101
llama_model_loader: - kv 24: tokenizer.ggml.mask_token_id u32 = 103
llama_model_loader: - kv 25: general.quantization_version u32 = 2
llama_model_loader: - type f32: 244 tensors
llama_model_loader: - type f16: 145 tensors
llm_load_vocab: special tokens cache size = 5
llm_load_vocab: token to piece cache size = 0.0769 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = bert
llm_load_print_meta: vocab type = WPM
llm_load_print_meta: n_vocab = 21128
llm_load_print_meta: n_merges = 0
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 512
llm_load_print_meta: n_embd = 1024
llm_load_print_meta: n_layer = 24
llm_load_print_meta: n_head = 16
llm_load_print_meta: n_head_kv = 16
llm_load_print_meta: n_rot = 64
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 64
llm_load_print_meta: n_embd_head_v = 64
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 1.0e-12
llm_load_print_meta: f_norm_rms_eps = 0.0e+00
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 4096
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 0
llm_load_print_meta: pooling type = 1
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 512
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 335M
llm_load_print_meta: model ftype = F16
llm_load_print_meta: model params = 324.47 M
llm_load_print_meta: model size = 620.50 MiB (16.04 BPW)
llm_load_print_meta: general.name = Conan_Embedding_V1
llm_load_print_meta: UNK token = 100 '[UNK]'
llm_load_print_meta: SEP token = 102 '[SEP]'
llm_load_print_meta: PAD token = 0 '[PAD]'
llm_load_print_meta: CLS token = 101 '[CLS]'
llm_load_print_meta: MASK token = 103 '[MASK]'
llm_load_print_meta: LF token = 0 '[PAD]'
llm_load_print_meta: max token length = 48
llm_load_tensors: ggml ctx size = 0.16 MiB
llm_load_tensors: CPU buffer size = 620.50 MiB
time=2024-09-25T10:23:28.796+08:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server loading model"
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CPU KV buffer size = 192.00 MiB
llama_new_context_with_model: KV self size = 192.00 MiB, K (f16): 96.00 MiB, V (f16): 96.00 MiB
llama_new_context_with_model: CPU output buffer size = 0.00 MiB
llama_new_context_with_model: CPU compute buffer size = 26.00 MiB
llama_new_context_with_model: graph nodes = 851
llama_new_context_with_model: graph splits = 1
time=2024-09-25T10:23:30.338+08:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server not responding"
time=2024-09-25T10:23:31.963+08:00 level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server error"
time=2024-09-25T10:23:32.226+08:00 level=ERROR source=sched.go:455 msg="error loading llama server" error="llama runner process has terminated: exit status 0xc0000005"
[GIN] 2024/09/25 - 10:23:32 | 500 | 3.7323168s | 127.0.0.1 | POST "/api/embed"
```
### OS
Windows
### GPU
_No response_
### CPU
Intel
### Ollama version
0.3.11 0.3.12 | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6946/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6946/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/3955 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3955/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3955/comments | https://api.github.com/repos/ollama/ollama/issues/3955/events | https://github.com/ollama/ollama/pull/3955 | 2,266,430,367 | PR_kwDOJ0Z1Ps5t4K3P | 3,955 | return code `499` when user cancels request while a model is loading | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-04-26T20:03:32 | 2024-04-26T21:38:30 | 2024-04-26T21:38:29 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3955",
"html_url": "https://github.com/ollama/ollama/pull/3955",
"diff_url": "https://github.com/ollama/ollama/pull/3955.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3955.patch",
"merged_at": "2024-04-26T21:38:29"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3955/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3955/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5963 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5963/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5963/comments | https://api.github.com/repos/ollama/ollama/issues/5963/events | https://github.com/ollama/ollama/pull/5963 | 2,431,037,605 | PR_kwDOJ0Z1Ps52hIpP | 5,963 | Revert "llm(llama): pass rope factors" | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-07-25T21:53:31 | 2024-07-25T22:24:57 | 2024-07-25T22:24:55 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5963",
"html_url": "https://github.com/ollama/ollama/pull/5963",
"diff_url": "https://github.com/ollama/ollama/pull/5963.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5963.patch",
"merged_at": "2024-07-25T22:24:55"
} | Reverts ollama/ollama#5924 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5963/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5963/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6172 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6172/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6172/comments | https://api.github.com/repos/ollama/ollama/issues/6172/events | https://github.com/ollama/ollama/issues/6172 | 2,447,790,062 | I_kwDOJ0Z1Ps6R5k_u | 6,172 | .git file is missing | {
"login": "Haritha-Maturi",
"id": 100990846,
"node_id": "U_kgDOBgT_fg",
"avatar_url": "https://avatars.githubusercontent.com/u/100990846?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Haritha-Maturi",
"html_url": "https://github.com/Haritha-Maturi",
"followers_url": "https://api.github.com/users/Haritha-Maturi/followers",
"following_url": "https://api.github.com/users/Haritha-Maturi/following{/other_user}",
"gists_url": "https://api.github.com/users/Haritha-Maturi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Haritha-Maturi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Haritha-Maturi/subscriptions",
"organizations_url": "https://api.github.com/users/Haritha-Maturi/orgs",
"repos_url": "https://api.github.com/users/Haritha-Maturi/repos",
"events_url": "https://api.github.com/users/Haritha-Maturi/events{/privacy}",
"received_events_url": "https://api.github.com/users/Haritha-Maturi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 0 | 2024-08-05T07:12:11 | 2024-08-05T08:14:46 | 2024-08-05T08:14:45 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
As per the Dockerfile present in the repo, there needs to be a file named .git in the repo, but it is missing.
![image](https://github.com/user-attachments/assets/731a02ea-df73-4b4c-873f-f40a110ac7f3)
### OS
Linux
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_ | {
"login": "Haritha-Maturi",
"id": 100990846,
"node_id": "U_kgDOBgT_fg",
"avatar_url": "https://avatars.githubusercontent.com/u/100990846?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Haritha-Maturi",
"html_url": "https://github.com/Haritha-Maturi",
"followers_url": "https://api.github.com/users/Haritha-Maturi/followers",
"following_url": "https://api.github.com/users/Haritha-Maturi/following{/other_user}",
"gists_url": "https://api.github.com/users/Haritha-Maturi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Haritha-Maturi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Haritha-Maturi/subscriptions",
"organizations_url": "https://api.github.com/users/Haritha-Maturi/orgs",
"repos_url": "https://api.github.com/users/Haritha-Maturi/repos",
"events_url": "https://api.github.com/users/Haritha-Maturi/events{/privacy}",
"received_events_url": "https://api.github.com/users/Haritha-Maturi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6172/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6172/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3656 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3656/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3656/comments | https://api.github.com/repos/ollama/ollama/issues/3656/events | https://github.com/ollama/ollama/issues/3656 | 2,244,166,564 | I_kwDOJ0Z1Ps6Fw0Ok | 3,656 | error: inlining failed in call to ‘always_inline’ ‘_mm256_fmadd_ps’: target specific option mismatch | {
"login": "dpblnt",
"id": 13944122,
"node_id": "MDQ6VXNlcjEzOTQ0MTIy",
"avatar_url": "https://avatars.githubusercontent.com/u/13944122?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dpblnt",
"html_url": "https://github.com/dpblnt",
"followers_url": "https://api.github.com/users/dpblnt/followers",
"following_url": "https://api.github.com/users/dpblnt/following{/other_user}",
"gists_url": "https://api.github.com/users/dpblnt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dpblnt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dpblnt/subscriptions",
"organizations_url": "https://api.github.com/users/dpblnt/orgs",
"repos_url": "https://api.github.com/users/dpblnt/repos",
"events_url": "https://api.github.com/users/dpblnt/events{/privacy}",
"received_events_url": "https://api.github.com/users/dpblnt/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 4 | 2024-04-15T16:48:36 | 2024-04-16T06:41:45 | 2024-04-15T19:06:37 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
```
gmake[3]: Leaving directory '/mnt/storage/tmp/ollama/ollama/llm/build/linux/x86_64/cpu'
[ 8%] Built target build_info
In file included from /usr/lib/gcc/x86_64-pc-linux-gnu/13/include/immintrin.h:109,
from /mnt/storage/tmp/ollama/ollama/llm/llama.cpp/ggml-impl.h:93,
from /mnt/storage/tmp/ollama/ollama/llm/llama.cpp/ggml-quants.c:5:
/usr/lib/gcc/x86_64-pc-linux-gnu/13/include/fmaintrin.h: In function ‘ggml_vec_dot_q4_0_q8_0’:
/usr/lib/gcc/x86_64-pc-linux-gnu/13/include/fmaintrin.h:63:1: error: inlining failed in call to ‘always_inline’ ‘_mm256_fmadd_ps’: target specific option mismatch
63 | _mm256_fmadd_ps (__m256 __A, __m256 __B, __m256 __C)
| ^~~~~~~~~~~~~~~
/mnt/storage/tmp/ollama/ollama/llm/llama.cpp/ggml-quants.c:3835:15: note: called from here
3835 | acc = _mm256_fmadd_ps( d, q, acc );
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/lib/gcc/x86_64-pc-linux-gnu/13/include/fmaintrin.h:63:1: error: inlining failed in call to ‘always_inline’ ‘_mm256_fmadd_ps’: target specific option mismatch
63 | _mm256_fmadd_ps (__m256 __A, __m256 __B, __m256 __C)
| ^~~~~~~~~~~~~~~
/mnt/storage/tmp/ollama/ollama/llm/llama.cpp/ggml-quants.c:3835:15: note: called from here
3835 | acc = _mm256_fmadd_ps( d, q, acc );
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/lib/gcc/x86_64-pc-linux-gnu/13/include/fmaintrin.h:63:1: error: inlining failed in call to ‘always_inline’ ‘_mm256_fmadd_ps’: target specific option mismatch
63 | _mm256_fmadd_ps (__m256 __A, __m256 __B, __m256 __C)
| ^~~~~~~~~~~~~~~
/mnt/storage/tmp/ollama/ollama/llm/llama.cpp/ggml-quants.c:3835:15: note: called from here
3835 | acc = _mm256_fmadd_ps( d, q, acc );
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/lib/gcc/x86_64-pc-linux-gnu/13/include/fmaintrin.h:63:1: error: inlining failed in call to ‘always_inline’ ‘_mm256_fmadd_ps’: target specific option mismatch
63 | _mm256_fmadd_ps (__m256 __A, __m256 __B, __m256 __C)
| ^~~~~~~~~~~~~~~
/mnt/storage/tmp/ollama/ollama/llm/llama.cpp/ggml-quants.c:3835:15: note: called from here
3835 | acc = _mm256_fmadd_ps( d, q, acc );
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
gmake[3]: *** [CMakeFiles/ggml.dir/build.make:121: CMakeFiles/ggml.dir/ggml-quants.c.o] Error 1
```
### What did you expect to see?
build
### Steps to reproduce
```
go clean
git pull
export LAMA_NATIVE=on
export LLAMA_AVX=on
export LLAMA_AVX2=on
export LLAMA_F16C=on
export LLAMA_FMA=on
export LLAMA_VULKAN=1
export CMAKE_C_COMPILER=/usr/lib/llvm/17/bin/clang
export CMAKE_CXX_COMPILER=/usr/lib/llvm/17/bin/clang++
export CC=clang
export CXX=clang++
export CGO_CFLAGS="-g"
export CMAKE_C_COMPILER=clang
export CMAKE_CXX_COMPILER=clang++
go clean ./...
go generate ./...
go build .
```
### Are there any recent changes that introduced the issue?
_No response_
### OS
Linux
### Architecture
_No response_
### Platform
_No response_
### Ollama version
a0b8a32eb4a933f7ad3bf687892f05f85dd75fee
### GPU
AMD
### GPU info
_No response_
### CPU
AMD
### Other software
_No response_ | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3656/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3656/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5582 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5582/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5582/comments | https://api.github.com/repos/ollama/ollama/issues/5582/events | https://github.com/ollama/ollama/pull/5582 | 2,399,160,110 | PR_kwDOJ0Z1Ps504gzi | 5,582 | Remove nested runner payloads from linux | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2024-07-09T20:29:57 | 2024-07-11T15:43:00 | 2024-07-11T15:43:00 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | true | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/5582",
"html_url": "https://github.com/ollama/ollama/pull/5582",
"diff_url": "https://github.com/ollama/ollama/pull/5582.diff",
"patch_url": "https://github.com/ollama/ollama/pull/5582.patch",
"merged_at": null
} | This adjusts Linux to follow the same model we use for Windows: a discrete archive (zip/tgz) carrying the primary executable, subprocess runners, and dependent libraries.
Darwin retains the payload model, where the Go binary is fully self-contained.
Marking as draft since it still needs more testing and CI will need adjusting, but the initial happy path looks good.
For comparison, the current (v0.2.1) artifacts are:
- `ollama-linux-amd64` - 467 MB
- `ollama-linux-amd64-rocm.tgz` - 1.14 GB
With this PR:
```
% ls -lh ./dist/ollama-linux-amd64.tgz
-rw-r--r-- 1 daniel staff 1.1G Jul 9 13:19 ./dist/ollama-linux-amd64.tgz
```
After extracting on a system:
```
% ls -F
cuda/ ollama* ollama_runners/ rocm/
% du -sh .
4.6G .
```
Note: I opted to include rocm into the single artifact, since we do the same on Windows, and that simplifies the overall logic. That said, it's the brunt of the extracted size so this may be an area worth optimizing. One option might be to simply exclude the rocm directory during the extract in the install script if no AMD GPUs are detected.
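A minimal sketch of that option (the archive path, install prefix, and GPU-detection method here are assumptions, not the actual install script):
```bash
#!/bin/sh
# Hypothetical: skip the large rocm/ payload when no AMD GPU is present.
ARCHIVE=ollama-linux-amd64.tgz   # artifact name from this PR
DEST=/usr/local/ollama           # assumed install prefix

if lspci -d 1002: | grep -q .; then
  tar -C "$DEST" -xzf "$ARCHIVE"                     # AMD GPU found: extract everything
else
  tar -C "$DEST" -xzf "$ARCHIVE" --exclude='rocm*'   # no AMD GPU: drop rocm/
fi
```
For reference, the rocm directory dominates the extracted size: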
```
% du -sh rocm
3.6G rocm
``` | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5582/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5582/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6942 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6942/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6942/comments | https://api.github.com/repos/ollama/ollama/issues/6942/events | https://github.com/ollama/ollama/issues/6942 | 2,546,451,482 | I_kwDOJ0Z1Ps6Xx8Qa | 6,942 | Ollama bricks chromium based apps on mac | {
"login": "skakwy",
"id": 36933487,
"node_id": "MDQ6VXNlcjM2OTMzNDg3",
"avatar_url": "https://avatars.githubusercontent.com/u/36933487?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/skakwy",
"html_url": "https://github.com/skakwy",
"followers_url": "https://api.github.com/users/skakwy/followers",
"following_url": "https://api.github.com/users/skakwy/following{/other_user}",
"gists_url": "https://api.github.com/users/skakwy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/skakwy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/skakwy/subscriptions",
"organizations_url": "https://api.github.com/users/skakwy/orgs",
"repos_url": "https://api.github.com/users/skakwy/repos",
"events_url": "https://api.github.com/users/skakwy/events{/privacy}",
"received_events_url": "https://api.github.com/users/skakwy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 3 | 2024-09-24T21:46:30 | 2024-09-30T14:06:53 | 2024-09-25T09:25:08 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I've got a weird issue where Chromium-based apps stop working due to an ERR_ADDRESS_INVALID error. At first I thought it was some kind of problem with Chromium and tried out a few different things; however, nothing worked until I stopped Ollama. Since I installed Ollama, that weird error has started to pop up randomly. Without Ollama running in the background, I wasn't able to reproduce the error.
If I try to use Ollama I get either the error `Error: pull model manifest: file does not exist` or `Error: Head "http://127.0.0.1:11434/": dial tcp 127.0.0.1:11434: connect: can't assign requested address`, though I usually get the first one before the second.
Honestly, I have no idea what could cause this issue, and reinstalling might fix it, but since I seem to be the only one having this problem, it might be good to look further into it. I attached the log file; feel free to ask if I need to provide anything else.
What I tried to fix this issue (before knowing it was probably Ollama's fault): disabled proxies, disabled IPv6, flushed DNS records, turned off all Apple network safety features. Tried different macOS versions (now on 15.0) and different networks.
[server.log](https://github.com/user-attachments/files/17121474/server.log)
Edit:
It might be because of the ports that Ollama uses, but I don't really think Chromium depends on port 11434 for it to break. Or does Ollama use other ports as well?
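One way to check which ports the Ollama process actually listens on (macOS ships `lsof` by default) might be:
```bash
# Show TCP listeners belonging to ollama; any port besides 11434
# (e.g. ephemeral ports bound by subprocess runners) would show up here.
lsof -nP -iTCP -sTCP:LISTEN | grep -i ollama
```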
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.3.11 | {
"login": "skakwy",
"id": 36933487,
"node_id": "MDQ6VXNlcjM2OTMzNDg3",
"avatar_url": "https://avatars.githubusercontent.com/u/36933487?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/skakwy",
"html_url": "https://github.com/skakwy",
"followers_url": "https://api.github.com/users/skakwy/followers",
"following_url": "https://api.github.com/users/skakwy/following{/other_user}",
"gists_url": "https://api.github.com/users/skakwy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/skakwy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/skakwy/subscriptions",
"organizations_url": "https://api.github.com/users/skakwy/orgs",
"repos_url": "https://api.github.com/users/skakwy/repos",
"events_url": "https://api.github.com/users/skakwy/events{/privacy}",
"received_events_url": "https://api.github.com/users/skakwy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6942/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6942/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4003 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4003/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4003/comments | https://api.github.com/repos/ollama/ollama/issues/4003/events | https://github.com/ollama/ollama/issues/4003 | 2,267,566,677 | I_kwDOJ0Z1Ps6HKFJV | 4,003 | Ollama.com - Pull Statistics can be easily fooled | {
"login": "electricalgorithm",
"id": 27111270,
"node_id": "MDQ6VXNlcjI3MTExMjcw",
"avatar_url": "https://avatars.githubusercontent.com/u/27111270?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/electricalgorithm",
"html_url": "https://github.com/electricalgorithm",
"followers_url": "https://api.github.com/users/electricalgorithm/followers",
"following_url": "https://api.github.com/users/electricalgorithm/following{/other_user}",
"gists_url": "https://api.github.com/users/electricalgorithm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/electricalgorithm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/electricalgorithm/subscriptions",
"organizations_url": "https://api.github.com/users/electricalgorithm/orgs",
"repos_url": "https://api.github.com/users/electricalgorithm/repos",
"events_url": "https://api.github.com/users/electricalgorithm/events{/privacy}",
"received_events_url": "https://api.github.com/users/electricalgorithm/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"login": "bmizerany",
"id": 46,
"node_id": "MDQ6VXNlcjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/46?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmizerany",
"html_url": "https://github.com/bmizerany",
"followers_url": "https://api.github.com/users/bmizerany/followers",
"following_url": "https://api.github.com/users/bmizerany/following{/other_user}",
"gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions",
"organizations_url": "https://api.github.com/users/bmizerany/orgs",
"repos_url": "https://api.github.com/users/bmizerany/repos",
"events_url": "https://api.github.com/users/bmizerany/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmizerany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 3 | 2024-04-28T13:36:34 | 2024-05-09T21:20:50 | 2024-05-09T21:20:50 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Ollama.com model statistics include a pull count. The pull count gives users a sense of a model's "popularity", and it turns out to be easy to inflate.
**Reproduction**
By using the following Python code snippet, you can increase the pull count hundreds of times.
```python
import subprocess
import time
import signal
import os

# Start a pull, wait one second (enough for the pull to register),
# then kill the process before the download actually completes.
for i in range(1000):
    process = subprocess.Popen(["ollama", "pull", "USERNAME/MODELNAME"])
    time.sleep(1)
    try:
        os.kill(process.pid, signal.SIGTERM)
    except ProcessLookupError:
        pass
```
Note that 2-3 minutes are needed to see the changes in the UI.
**Suggestion**
The pull count should only be incremented after the whole model has been pulled onto a machine, not when a pull request is merely started.
I'm not a Go dev, so I don't think I can provide a fix in a short time.
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.1.32 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4003/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4003/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1791 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1791/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1791/comments | https://api.github.com/repos/ollama/ollama/issues/1791/events | https://github.com/ollama/ollama/pull/1791 | 2,066,388,566 | PR_kwDOJ0Z1Ps5jQyk1 | 1,791 | update Dockerfile.build | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-01-04T21:55:46 | 2024-01-05T03:13:45 | 2024-01-05T03:13:44 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1791",
"html_url": "https://github.com/ollama/ollama/pull/1791",
"diff_url": "https://github.com/ollama/ollama/pull/1791.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1791.patch",
"merged_at": "2024-01-05T03:13:44"
} | null | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1791/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1791/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4311 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4311/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4311/comments | https://api.github.com/repos/ollama/ollama/issues/4311/events | https://github.com/ollama/ollama/issues/4311 | 2,289,603,215 | I_kwDOJ0Z1Ps6IeJKP | 4,311 | Monetary Support / Donations | {
"login": "dylanbstorey",
"id": 6005970,
"node_id": "MDQ6VXNlcjYwMDU5NzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6005970?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dylanbstorey",
"html_url": "https://github.com/dylanbstorey",
"followers_url": "https://api.github.com/users/dylanbstorey/followers",
"following_url": "https://api.github.com/users/dylanbstorey/following{/other_user}",
"gists_url": "https://api.github.com/users/dylanbstorey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dylanbstorey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dylanbstorey/subscriptions",
"organizations_url": "https://api.github.com/users/dylanbstorey/orgs",
"repos_url": "https://api.github.com/users/dylanbstorey/repos",
"events_url": "https://api.github.com/users/dylanbstorey/events{/privacy}",
"received_events_url": "https://api.github.com/users/dylanbstorey/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 1 | 2024-05-10T12:03:43 | 2024-05-11T02:24:12 | 2024-05-11T02:24:12 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | How do I buy you a cup of coffee ? | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4311/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4311/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5255 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5255/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5255/comments | https://api.github.com/repos/ollama/ollama/issues/5255/events | https://github.com/ollama/ollama/issues/5255 | 2,370,118,290 | I_kwDOJ0Z1Ps6NRSKS | 5,255 | Cannot run model imported from safetensors: byte not found in vocab | {
"login": "peay",
"id": 7261177,
"node_id": "MDQ6VXNlcjcyNjExNzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7261177?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/peay",
"html_url": "https://github.com/peay",
"followers_url": "https://api.github.com/users/peay/followers",
"following_url": "https://api.github.com/users/peay/following{/other_user}",
"gists_url": "https://api.github.com/users/peay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/peay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/peay/subscriptions",
"organizations_url": "https://api.github.com/users/peay/orgs",
"repos_url": "https://api.github.com/users/peay/repos",
"events_url": "https://api.github.com/users/peay/events{/privacy}",
"received_events_url": "https://api.github.com/users/peay/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 3 | 2024-06-24T12:24:56 | 2024-08-01T21:16:32 | 2024-08-01T21:16:31 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I have been trying to import HuggingFace safetensors models but get the following error when trying to use the model with `run`. This happens both with and without quantization.
For example, trying to reproduce the `tinyllama` model from the library:
```sh
git clone https://huggingface.co./TinyLlama/TinyLlama-1.1B-Chat-v1.0
cd TinyLlama-1.1B-Chat-v1.0
ollama show tinyllama --modelfile > Modelfile # Copy reference model file
sed -ibak "s@FROM .*@FROM ${PWD}@" Modelfile # Update FROM
ollama create tinyllama-hf # OK
ollama run tinyllama-hf
# Error: llama runner process has terminated: signal: abort trap error:
error loading model vocabulary: ERROR: byte not found in vocab
```
Directly importing a similar GGUF seems to work fine:
```sh
git clone https://huggingface.co./TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF
cd TinyLlama-1.1B-Chat-v1.0-GGUF
ollama show tinyllama --modelfile > Modelfile
sed -ibak "s@FROM .*@FROM ${PWD}/tinyllama-1.1b-chat-v1.0.Q4_0.gguf@" Modelfile
ollama create tinyllama-hf-gguf # OK
ollama run tinyllama-hf-gguf # OK
```
Manually converting the model to GGUF with `convert-hf-to-gguf.py` also works.
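For reference, the manual route looks roughly like this (a sketch; the paths are placeholders, and the flags come from llama.cpp's `convert-hf-to-gguf.py`):
```bash
# Run from a llama.cpp checkout; produces an f16 GGUF that can then
# be imported with a Modelfile FROM line pointing at the .gguf file.
python convert-hf-to-gguf.py /path/to/TinyLlama-1.1B-Chat-v1.0 \
  --outfile tinyllama-1.1b-chat.gguf --outtype f16
```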
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.1.45 | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5255/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5255/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6238 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6238/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6238/comments | https://api.github.com/repos/ollama/ollama/issues/6238/events | https://github.com/ollama/ollama/issues/6238 | 2,453,965,054 | I_kwDOJ0Z1Ps6SRIj- | 6,238 | Ollama server running out of memory when it didn't in previous version | {
"login": "MxtAppz",
"id": 121626118,
"node_id": "U_kgDOBz_eBg",
"avatar_url": "https://avatars.githubusercontent.com/u/121626118?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MxtAppz",
"html_url": "https://github.com/MxtAppz",
"followers_url": "https://api.github.com/users/MxtAppz/followers",
"following_url": "https://api.github.com/users/MxtAppz/following{/other_user}",
"gists_url": "https://api.github.com/users/MxtAppz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MxtAppz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MxtAppz/subscriptions",
"organizations_url": "https://api.github.com/users/MxtAppz/orgs",
"repos_url": "https://api.github.com/users/MxtAppz/repos",
"events_url": "https://api.github.com/users/MxtAppz/events{/privacy}",
"received_events_url": "https://api.github.com/users/MxtAppz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 7 | 2024-08-07T17:17:58 | 2024-08-13T05:45:57 | 2024-08-13T05:45:57 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Hello,
I'm trying to run Llama 3.1 8B in Ollama 0.3.4 on a laptop (8 GB RAM and 4 CPU cores, running on CPU since my GPU is integrated and not compatible). Maybe this sounds crazy, but it worked fine on Ollama 0.3.3, and right after updating to 0.3.4 and running it (I also tried updating the model, but got the same result) I get:
Error: Post "http://127.0.0.1:11434/api/chat": EOF
and when I try again:
Error: llama runner process has terminated: signal: killed
So I think it's running out of memory. I like to try things on my laptop before deploying them to my server, so I would appreciate it if you could fix this regression, or show me a workaround. (Mistral 7B works fine with 0.3.4.)
Thanks in advance.
### OS
Linux
### GPU
Other
### CPU
Intel
### Ollama version
0.3.4 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6238/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6238/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7508 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7508/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7508/comments | https://api.github.com/repos/ollama/ollama/issues/7508/events | https://github.com/ollama/ollama/issues/7508 | 2,635,214,570 | I_kwDOJ0Z1Ps6dEi7q | 7,508 | Manual ollama update doesn't work | {
"login": "ExposedCat",
"id": 44642024,
"node_id": "MDQ6VXNlcjQ0NjQyMDI0",
"avatar_url": "https://avatars.githubusercontent.com/u/44642024?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ExposedCat",
"html_url": "https://github.com/ExposedCat",
"followers_url": "https://api.github.com/users/ExposedCat/followers",
"following_url": "https://api.github.com/users/ExposedCat/following{/other_user}",
"gists_url": "https://api.github.com/users/ExposedCat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ExposedCat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ExposedCat/subscriptions",
"organizations_url": "https://api.github.com/users/ExposedCat/orgs",
"repos_url": "https://api.github.com/users/ExposedCat/repos",
"events_url": "https://api.github.com/users/ExposedCat/events{/privacy}",
"received_events_url": "https://api.github.com/users/ExposedCat/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
},
{
"id": 5755339642,
"node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg",
"url": "https://api.github.com/repos/ollama/ollama/labels/linux",
"name": "linux",
"color": "516E70",
"default": false,
"description": ""
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
},
{
"id": 6678628138,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjhPHKg",
"url": "https://api.github.com/repos/ollama/ollama/labels/install",
"name": "install",
"color": "E0B88D",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | 2 | 2024-11-05T11:38:29 | 2024-11-05T16:31:53 | 2024-11-05T16:31:32 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
According to the docs, after downloading a release, this is the last step to update Ollama manually:
```bash
sudo tar -C /usr -xzf ollama-linux-amd64-rocm.tgz
```
However, the Ollama version remains unchanged even after restarting the `ollama` service.
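A quick check for one possible cause, an older binary earlier on the PATH shadowing the freshly extracted one, might be (both paths below are common install locations, not guaranteed):
```bash
# Which binary does the shell actually run, and what do the usual
# install locations report?
command -v ollama
/usr/bin/ollama -v 2>/dev/null
/usr/local/bin/ollama -v 2>/dev/null
sudo systemctl restart ollama && ollama -v
```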
### OS
Linux
### GPU
AMD
### CPU
AMD
### Ollama version
0.3.14 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7508/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7508/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/885 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/885/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/885/comments | https://api.github.com/repos/ollama/ollama/issues/885/events | https://github.com/ollama/ollama/issues/885 | 1,957,895,148 | I_kwDOJ0Z1Ps50sxvs | 885 | Add Parameter Environment="OLLAMA_HOST=127.0.0.1:11434" to the ollama.service file | {
"login": "byteconcepts",
"id": 33394779,
"node_id": "MDQ6VXNlcjMzMzk0Nzc5",
"avatar_url": "https://avatars.githubusercontent.com/u/33394779?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/byteconcepts",
"html_url": "https://github.com/byteconcepts",
"followers_url": "https://api.github.com/users/byteconcepts/followers",
"following_url": "https://api.github.com/users/byteconcepts/following{/other_user}",
"gists_url": "https://api.github.com/users/byteconcepts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/byteconcepts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/byteconcepts/subscriptions",
"organizations_url": "https://api.github.com/users/byteconcepts/orgs",
"repos_url": "https://api.github.com/users/byteconcepts/repos",
"events_url": "https://api.github.com/users/byteconcepts/events{/privacy}",
"received_events_url": "https://api.github.com/users/byteconcepts/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-10-23T19:37:55 | 2023-10-24T23:02:36 | 2023-10-24T23:02:36 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | For those who prefer to use Ollama primarily via its API, it would be nice if the ollama.service file already contained the line...
`Environment="OLLAMA_HOST=127.0.0.1:11434"`
Additionally, it would be nice if the Readme mentioned that the service's interface IP and port can be changed in this file.
For access on all interfaces on port 4711, change it to:
`Environment="OLLAMA_HOST=0.0.0.0:4711"`
But if you then want to use the console command, you need to invoke it like this:
`OLLAMA_HOST="[REAL-INTERFACE-IP-ADDRESS]:4711" ollama show llama2-uncensored --modelfile`
As shortcuts, you may then add aliases like these to your ~/.bash_aliases:
```
alias ollama-run='OLLAMA_HOST="[REAL-INTERFACE-IP-ADDRESS or localhost]:4711" ollama run'
alias ollama-list='OLLAMA_HOST="[REAL-INTERFACE-IP-ADDRESS or localhost]:4711" ollama list'
alias ollama-show='OLLAMA_HOST="[REAL-INTERFACE-IP-ADDRESS or localhost]:4711" ollama show'
```
After you have added the aliases, you need to log off and log on again, or run...
`source ~/.bash_aliases`
Additionally, if you want to run, for example, chatbot-ollama on the same machine as Ollama but make chatbot-ollama available to the whole network, you can start it with this command:
`OLLAMA_HOST="http://127.0.0.1:4711" npm run dev -- -H [REAL-INTERFACE-IP-ADDRESS-OF-CHATBOT-OLLAMA]`
If you do that on a machine accessible to the public, be sure to block public access to the chatbot-ollama service/website, or use e.g. Apache or nginx as a reverse proxy to protect it with username/password authentication.
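As a sketch of one way to set this without editing the packaged unit file itself, a systemd drop-in override survives reinstalls:
```bash
# Create an override that only sets the environment variable.
sudo mkdir -p /etc/systemd/system/ollama.service.d
sudo tee /etc/systemd/system/ollama.service.d/override.conf >/dev/null <<'EOF'
[Service]
Environment="OLLAMA_HOST=0.0.0.0:4711"
EOF
sudo systemctl daemon-reload
sudo systemctl restart ollama
```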
| {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/885/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/885/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/7086 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7086/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7086/comments | https://api.github.com/repos/ollama/ollama/issues/7086/events | https://github.com/ollama/ollama/pull/7086 | 2,563,131,754 | PR_kwDOJ0Z1Ps59c822 | 7,086 | doc: Adding docs on how to compile ollama to run on Intel discrete GPU platform | {
"login": "xiangyang-95",
"id": 18331729,
"node_id": "MDQ6VXNlcjE4MzMxNzI5",
"avatar_url": "https://avatars.githubusercontent.com/u/18331729?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xiangyang-95",
"html_url": "https://github.com/xiangyang-95",
"followers_url": "https://api.github.com/users/xiangyang-95/followers",
"following_url": "https://api.github.com/users/xiangyang-95/following{/other_user}",
"gists_url": "https://api.github.com/users/xiangyang-95/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xiangyang-95/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xiangyang-95/subscriptions",
"organizations_url": "https://api.github.com/users/xiangyang-95/orgs",
"repos_url": "https://api.github.com/users/xiangyang-95/repos",
"events_url": "https://api.github.com/users/xiangyang-95/events{/privacy}",
"received_events_url": "https://api.github.com/users/xiangyang-95/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 3 | 2024-10-03T05:04:52 | 2024-11-09T12:33:24 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7086",
"html_url": "https://github.com/ollama/ollama/pull/7086",
"diff_url": "https://github.com/ollama/ollama/pull/7086.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7086.patch",
"merged_at": null
} | - Adding the steps to compile Ollama to run on Intel(R) discrete GPU platforms
- Adding the discrete GPUs that have been verified to the GPU docs | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7086/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7086/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8648 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8648/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8648/comments | https://api.github.com/repos/ollama/ollama/issues/8648/events | https://github.com/ollama/ollama/issues/8648 | 2,817,169,473 | I_kwDOJ0Z1Ps6n6phB | 8,648 | Ollama installer should ask which drive the user wants to install to | {
"login": "VikramNagwal",
"id": 123088024,
"node_id": "U_kgDOB1YsmA",
"avatar_url": "https://avatars.githubusercontent.com/u/123088024?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VikramNagwal",
"html_url": "https://github.com/VikramNagwal",
"followers_url": "https://api.github.com/users/VikramNagwal/followers",
"following_url": "https://api.github.com/users/VikramNagwal/following{/other_user}",
"gists_url": "https://api.github.com/users/VikramNagwal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VikramNagwal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VikramNagwal/subscriptions",
"organizations_url": "https://api.github.com/users/VikramNagwal/orgs",
"repos_url": "https://api.github.com/users/VikramNagwal/repos",
"events_url": "https://api.github.com/users/VikramNagwal/events{/privacy}",
"received_events_url": "https://api.github.com/users/VikramNagwal/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 2 | 2025-01-29T03:40:28 | 2025-01-29T17:20:19 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Currently, Ollama Desktop installs to the C drive. If users prefer not to have it stored there, the installer should offer an option to choose a different installation location during setup.
**Feature Request:**
The Ollama Desktop installation wizard should prompt users to choose their preferred installation directory.
**Problem:**
The Ollama Desktop installation wizard installs the app to the root drive (C drive) without allowing users to control where it is stored. | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8648/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8648/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/8219 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8219/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8219/comments | https://api.github.com/repos/ollama/ollama/issues/8219/events | https://github.com/ollama/ollama/issues/8219 | 2,756,036,629 | I_kwDOJ0Z1Ps6kRcgV | 8,219 | I built an ollama app for Android | {
"login": "echoo-app",
"id": 192385499,
"node_id": "U_kgDOC3eR2w",
"avatar_url": "https://avatars.githubusercontent.com/u/192385499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/echoo-app",
"html_url": "https://github.com/echoo-app",
"followers_url": "https://api.github.com/users/echoo-app/followers",
"following_url": "https://api.github.com/users/echoo-app/following{/other_user}",
"gists_url": "https://api.github.com/users/echoo-app/gists{/gist_id}",
"starred_url": "https://api.github.com/users/echoo-app/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/echoo-app/subscriptions",
"organizations_url": "https://api.github.com/users/echoo-app/orgs",
"repos_url": "https://api.github.com/users/echoo-app/repos",
"events_url": "https://api.github.com/users/echoo-app/events{/privacy}",
"received_events_url": "https://api.github.com/users/echoo-app/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2024-12-23T13:04:38 | 2024-12-29T19:09:05 | 2024-12-29T19:09:05 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | The source code of the app is here
https://github.com/echoo-app/echoo
<img width="472" alt="settings" src="https://github.com/user-attachments/assets/2d420674-baa7-4644-b420-a5d44b3c3019" />
<img width="472" alt="home" src="https://github.com/user-attachments/assets/1e26dacf-6baa-4089-af82-c2cb86d4a8be" />
How do I add it to this list?
<img width="215" alt="Screenshot 2024-12-23 at 8 59 03 PM" src="https://github.com/user-attachments/assets/9f416dd0-fde2-41d6-989a-d2f4bd4e6fbe" />
| {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8219/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8219/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1241 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1241/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1241/comments | https://api.github.com/repos/ollama/ollama/issues/1241/events | https://github.com/ollama/ollama/issues/1241 | 2,006,389,410 | I_kwDOJ0Z1Ps53lxKi | 1,241 | Multi-line prompting from CLI issue - not waiting for closing """ | {
"login": "ken-vat",
"id": 40846144,
"node_id": "MDQ6VXNlcjQwODQ2MTQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/40846144?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ken-vat",
"html_url": "https://github.com/ken-vat",
"followers_url": "https://api.github.com/users/ken-vat/followers",
"following_url": "https://api.github.com/users/ken-vat/following{/other_user}",
"gists_url": "https://api.github.com/users/ken-vat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ken-vat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ken-vat/subscriptions",
"organizations_url": "https://api.github.com/users/ken-vat/orgs",
"repos_url": "https://api.github.com/users/ken-vat/repos",
"events_url": "https://api.github.com/users/ken-vat/events{/privacy}",
"received_events_url": "https://api.github.com/users/ken-vat/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2023-11-22T13:50:10 | 2024-01-18T00:09:27 | 2024-01-18T00:09:27 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | The engine started to spew out code before the multi-line input was ended with a closing """. As an example:
>>> """
... can you give python code to use this API using the python requests package/library
... curl -X POST \
... --url "localhost:5000/v1/..." \
... --header "Content-Type: application/json" \
... --data '
... {
... "chat": "string"
... }
... '
Here is an example of how you can use the `requests` package in Python to make a POST request to the `/v1/completions`
endpoint ...
It started responding right after this curl command, before the closing """ was given.
| {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1241/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1241/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1767 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1767/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1767/comments | https://api.github.com/repos/ollama/ollama/issues/1767/events | https://github.com/ollama/ollama/pull/1767 | 2,064,427,520 | PR_kwDOJ0Z1Ps5jKNmU | 1,767 | Update README.md - Terminal Integration - ShellOracle | {
"login": "djcopley",
"id": 4100965,
"node_id": "MDQ6VXNlcjQxMDA5NjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4100965?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/djcopley",
"html_url": "https://github.com/djcopley",
"followers_url": "https://api.github.com/users/djcopley/followers",
"following_url": "https://api.github.com/users/djcopley/following{/other_user}",
"gists_url": "https://api.github.com/users/djcopley/gists{/gist_id}",
"starred_url": "https://api.github.com/users/djcopley/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/djcopley/subscriptions",
"organizations_url": "https://api.github.com/users/djcopley/orgs",
"repos_url": "https://api.github.com/users/djcopley/repos",
"events_url": "https://api.github.com/users/djcopley/events{/privacy}",
"received_events_url": "https://api.github.com/users/djcopley/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2024-01-03T17:51:14 | 2024-02-20T03:18:06 | 2024-02-20T03:18:05 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1767",
"html_url": "https://github.com/ollama/ollama/pull/1767",
"diff_url": "https://github.com/ollama/ollama/pull/1767.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1767.patch",
"merged_at": "2024-02-20T03:18:05"
} | ShellOracle is a new ZSH Line Editor widget that uses Ollama for intelligent shell command generation! Ollama rocks!
![ShellOracle](https://i.imgur.com/QM2LkAf.gif) | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1767/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1767/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1960 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1960/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1960/comments | https://api.github.com/repos/ollama/ollama/issues/1960/events | https://github.com/ollama/ollama/issues/1960 | 2,079,514,896 | I_kwDOJ0Z1Ps578uEQ | 1,960 | Feature Request: new --benchmark flag | {
"login": "vincecate",
"id": 37512606,
"node_id": "MDQ6VXNlcjM3NTEyNjA2",
"avatar_url": "https://avatars.githubusercontent.com/u/37512606?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vincecate",
"html_url": "https://github.com/vincecate",
"followers_url": "https://api.github.com/users/vincecate/followers",
"following_url": "https://api.github.com/users/vincecate/following{/other_user}",
"gists_url": "https://api.github.com/users/vincecate/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vincecate/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vincecate/subscriptions",
"organizations_url": "https://api.github.com/users/vincecate/orgs",
"repos_url": "https://api.github.com/users/vincecate/repos",
"events_url": "https://api.github.com/users/vincecate/events{/privacy}",
"received_events_url": "https://api.github.com/users/vincecate/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2024-01-12T19:06:13 | 2024-04-01T09:29:30 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null |
We are building a website for performance results from different hardware running Ollama models at http://LLMPerformance.ai
It would be really helpful if ollama had a "--benchmark" flag that output everything "--verbose" does
but also added the following info:
CPU:
Memory:
GPU:
VRAM:
LLM Model Name:
LLM Model Layers on GPU:
Total Model Layers:
Ollama Version:
Operating System:
Date:
The exact names etc. don't matter as long as the output is clear.
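To make the ask concrete, here is a mock of a run with the proposed flag; the flag does not exist yet, and every value below is illustrative:
```
$ ollama run --benchmark llama2 "why is the sky blue?"
... usual --verbose timing output ...
CPU: AMD Ryzen 9 7950X
Memory: 64 GB
GPU: NVIDIA RTX 3090
VRAM: 24 GB
LLM Model Name: llama2:7b
LLM Model Layers on GPU: 33
Total Model Layers: 33
Ollama Version: 0.1.20
Operating System: Ubuntu 22.04
Date: 2024-01-12
```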
This will help ensure that reported results include enough info to be reproducible. | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1960/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1960/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/4022 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4022/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4022/comments | https://api.github.com/repos/ollama/ollama/issues/4022/events | https://github.com/ollama/ollama/issues/4022 | 2,268,527,278 | I_kwDOJ0Z1Ps6HNvqu | 4,022 | cannot run moondream in Ollama (ERROR source=routes.go:120 msg="error loading llama server" error="llama runner process no longer running: 3221225477 ") | {
"login": "prithvi151080",
"id": 157370999,
"node_id": "U_kgDOCWFKdw",
"avatar_url": "https://avatars.githubusercontent.com/u/157370999?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/prithvi151080",
"html_url": "https://github.com/prithvi151080",
"followers_url": "https://api.github.com/users/prithvi151080/followers",
"following_url": "https://api.github.com/users/prithvi151080/following{/other_user}",
"gists_url": "https://api.github.com/users/prithvi151080/gists{/gist_id}",
"starred_url": "https://api.github.com/users/prithvi151080/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prithvi151080/subscriptions",
"organizations_url": "https://api.github.com/users/prithvi151080/orgs",
"repos_url": "https://api.github.com/users/prithvi151080/repos",
"events_url": "https://api.github.com/users/prithvi151080/events{/privacy}",
"received_events_url": "https://api.github.com/users/prithvi151080/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 5 | 2024-04-29T09:20:50 | 2024-05-04T16:13:15 | 2024-04-29T11:10:57 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I have downloaded the moondream model from the official Ollama site (https://ollama.com/library/moondream), but when running the model in Ollama I get this error: ERROR source=routes.go:120 msg="error loading llama server" error="llama runner process no longer running: 3221225477 "
![image](https://github.com/ollama/ollama/assets/157370999/b5b20357-b825-4cc8-b908-ac0ea0768874)
Below is the entire ollama server log file:
```
time=2024-04-29T11:07:39.775+05:30 level=INFO source=images.go:817 msg="total blobs: 6"
time=2024-04-29T11:07:39.778+05:30 level=INFO source=images.go:824 msg="total unused blobs removed: 0"
time=2024-04-29T11:07:39.779+05:30 level=INFO source=routes.go:1143 msg="Listening on 127.0.0.1:11434 (version 0.1.32)"
time=2024-04-29T11:07:39.791+05:30 level=INFO source=payload.go:28 msg="extracting embedded files" dir=C:\Users\LENOVO\AppData\Local\Temp\ollama3526940464\runners
time=2024-04-29T11:07:40.021+05:30 level=INFO source=payload.go:41 msg="Dynamic LLM libraries [cpu_avx2 cuda_v11.3 rocm_v5.7 cpu cpu_avx]"
[GIN] 2024/04/29 - 11:07:40 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2024/04/29 - 11:07:40 | 200 | 2.1732ms | 127.0.0.1 | POST "/api/show"
[GIN] 2024/04/29 - 11:07:40 | 200 | 1.6291ms | 127.0.0.1 | POST "/api/show"
time=2024-04-29T11:07:40.857+05:30 level=INFO source=gpu.go:121 msg="Detecting GPU type"
time=2024-04-29T11:07:40.857+05:30 level=INFO source=gpu.go:268 msg="Searching for GPU management library cudart64_.dll"
time=2024-04-29T11:07:40.869+05:30 level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [C:\Users\LENOVO\AppData\Local\Programs\Ollama\cudart64_110.dll]"
time=2024-04-29T11:07:41.992+05:30 level=INFO source=gpu.go:126 msg="Nvidia GPU detected via cudart"
time=2024-04-29T11:07:41.993+05:30 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-04-29T11:07:42.065+05:30 level=INFO source=gpu.go:202 msg="[cudart] CUDART CUDA Compute Capability detected: 8.6"
time=2024-04-29T11:07:42.082+05:30 level=INFO source=gpu.go:121 msg="Detecting GPU type"
time=2024-04-29T11:07:42.082+05:30 level=INFO source=gpu.go:268 msg="Searching for GPU management library cudart64_.dll"
time=2024-04-29T11:07:42.092+05:30 level=INFO source=gpu.go:314 msg="Discovered GPU libraries: [C:\Users\LENOVO\AppData\Local\Programs\Ollama\cudart64_110.dll]"
time=2024-04-29T11:07:42.093+05:30 level=INFO source=gpu.go:126 msg="Nvidia GPU detected via cudart"
time=2024-04-29T11:07:42.093+05:30 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-04-29T11:07:42.142+05:30 level=INFO source=gpu.go:202 msg="[cudart] CUDART CUDA Compute Capability detected: 8.6"
time=2024-04-29T11:07:42.168+05:30 level=INFO source=server.go:127 msg="offload to gpu" reallayers=25 layers=25 required="2588.9 MiB" used="2588.9 MiB" available="3304.2 MiB" kv="384.0 MiB" fulloffload="148.0 MiB" partialoffload="190.0 MiB"
time=2024-04-29T11:07:42.168+05:30 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
time=2024-04-29T11:07:42.177+05:30 level=INFO source=server.go:264 msg="starting llama server" cmd="C:\Users\LENOVO\AppData\Local\Temp\ollama3526940464\runners\cuda_v11.3\ollama_llama_server.exe --model C:\Users\LENOVO\.ollama\models\blobs\sha256-e554c6b9de016673fd2c732e0342967727e9659ca5f853a4947cc96263fa602b --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 25 --mmproj C:\Users\LENOVO\.ollama\models\blobs\sha256-4cc1cb3660d87ff56432ebeb7884ad35d67c48c7b9f6b2856f305e39c38eed8f --port 53596"
time=2024-04-29T11:07:42.229+05:30 level=INFO source=server.go:389 msg="waiting for llama runner to start responding"
{"function":"server_params_parse","level":"INFO","line":2603,"msg":"logging to file is disabled.","tid":"21912","timestamp":1714369062}
{"build":2679,"commit":"7593639","function":"wmain","level":"INFO","line":2820,"msg":"build info","tid":"21912","timestamp":1714369062}
{"function":"wmain","level":"INFO","line":2827,"msg":"system info","n_threads":8,"n_threads_batch":-1,"system_info":"AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 | ","tid":"21912","timestamp":1714369062,"total_threads":16}
{"function":"load_model","level":"INFO","line":395,"msg":"Multi Modal Mode Enabled","tid":"21912","timestamp":1714369062}
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 3050 Laptop GPU, compute capability 8.6, VMM: yes
key clip.vision.image_grid_pinpoints not found in file
key clip.vision.mm_patch_merge_type not found in file
key clip.vision.image_crop_resolution not found in file
clip_model_load: failed to load vision model tensors
time=2024-04-29T11:07:43.382+05:30 level=ERROR source=routes.go:120 msg="error loading llama server" error="llama runner process no longer running: 3221225477 "
```
I am able to run other models in Ollama like Mixtral, mxbai, and tinyllama without any issue, but moondream fails to start.
OS details: Windows
GPU: Nvidia RTX 3050
Any help would be highly appreciated.
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.1.32 | {
"login": "prithvi151080",
"id": 157370999,
"node_id": "U_kgDOCWFKdw",
"avatar_url": "https://avatars.githubusercontent.com/u/157370999?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/prithvi151080",
"html_url": "https://github.com/prithvi151080",
"followers_url": "https://api.github.com/users/prithvi151080/followers",
"following_url": "https://api.github.com/users/prithvi151080/following{/other_user}",
"gists_url": "https://api.github.com/users/prithvi151080/gists{/gist_id}",
"starred_url": "https://api.github.com/users/prithvi151080/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prithvi151080/subscriptions",
"organizations_url": "https://api.github.com/users/prithvi151080/orgs",
"repos_url": "https://api.github.com/users/prithvi151080/repos",
"events_url": "https://api.github.com/users/prithvi151080/events{/privacy}",
"received_events_url": "https://api.github.com/users/prithvi151080/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4022/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4022/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6830 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6830/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6830/comments | https://api.github.com/repos/ollama/ollama/issues/6830/events | https://github.com/ollama/ollama/pull/6830 | 2,529,490,462 | PR_kwDOJ0Z1Ps57rPMs | 6,830 | llama: doc: explain golang objc linker warning | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-09-16T21:00:15 | 2024-09-16T21:21:38 | 2024-09-16T21:21:35 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6830",
"html_url": "https://github.com/ollama/ollama/pull/6830",
"diff_url": "https://github.com/ollama/ollama/pull/6830.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6830.patch",
"merged_at": "2024-09-16T21:21:35"
} | null | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6830/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6830/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/4796 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4796/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4796/comments | https://api.github.com/repos/ollama/ollama/issues/4796/events | https://github.com/ollama/ollama/issues/4796 | 2,330,632,914 | I_kwDOJ0Z1Ps6K6qLS | 4,796 | Can't answer Chinese starting from 0.1.39 | {
"login": "figuretom",
"id": 37067354,
"node_id": "MDQ6VXNlcjM3MDY3MzU0",
"avatar_url": "https://avatars.githubusercontent.com/u/37067354?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/figuretom",
"html_url": "https://github.com/figuretom",
"followers_url": "https://api.github.com/users/figuretom/followers",
"following_url": "https://api.github.com/users/figuretom/following{/other_user}",
"gists_url": "https://api.github.com/users/figuretom/gists{/gist_id}",
"starred_url": "https://api.github.com/users/figuretom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/figuretom/subscriptions",
"organizations_url": "https://api.github.com/users/figuretom/orgs",
"repos_url": "https://api.github.com/users/figuretom/repos",
"events_url": "https://api.github.com/users/figuretom/events{/privacy}",
"received_events_url": "https://api.github.com/users/figuretom/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 10 | 2024-06-03T09:25:51 | 2024-09-05T16:20:24 | 2024-06-03T16:53:28 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Starting from 0.1.39, if you input a Chinese question, Ollama is not able to output a Chinese answer.
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
0.1.39+ | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4796/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4796/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3723 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3723/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3723/comments | https://api.github.com/repos/ollama/ollama/issues/3723/events | https://github.com/ollama/ollama/issues/3723 | 2,249,704,813 | I_kwDOJ0Z1Ps6GF8Vt | 3,723 | Use NVIDIA + AMD GPUs simultaneously (CUDA OOM?) | {
"login": "erasmus74",
"id": 7828606,
"node_id": "MDQ6VXNlcjc4Mjg2MDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/7828606?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/erasmus74",
"html_url": "https://github.com/erasmus74",
"followers_url": "https://api.github.com/users/erasmus74/followers",
"following_url": "https://api.github.com/users/erasmus74/following{/other_user}",
"gists_url": "https://api.github.com/users/erasmus74/gists{/gist_id}",
"starred_url": "https://api.github.com/users/erasmus74/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/erasmus74/subscriptions",
"organizations_url": "https://api.github.com/users/erasmus74/orgs",
"repos_url": "https://api.github.com/users/erasmus74/repos",
"events_url": "https://api.github.com/users/erasmus74/events{/privacy}",
"received_events_url": "https://api.github.com/users/erasmus74/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 8 | 2024-04-18T04:21:26 | 2024-04-23T17:42:48 | 2024-04-23T17:42:47 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I'm trying to run my ollama:rocm Docker image (pulled 4/16/24), and it offloads across the NVIDIA M40 and the Ryzen 7900X CPU: I see full NVIDIA VRAM usage, and the remaining layers offload to CPU RAM.
However, I also have my AMD 7900 XTX in there, and when I'm not passing "--gpus all" in the docker CLI for the run, I can use the AMD GPU exclusively. When I add that parameter, the NVIDIA GPU works, but the AMD GPU sits idle.
I'd like to use models that take up to 48 GB of VRAM by splitting the usage across the two cards, even though they are not from the same vendor (I know the challenges, trust me; it took forever to get this far).
So, my setup:
I've got the ROCm side working, tested with just ollama:rocm and passing --device /dev/dri/renderD129 (my 7900 XTX).
I've got the CUDA side working, tested with just ollama:rocm and passing --device /dev/dri/renderD129 (my 7900 XTX) --gpus all.
In total I have 48 GB VRAM and 64 GB system RAM.
The steps to reproduce:
`docker run -d --gpus all --device /dev/kfd --device /dev/dri/renderD129 --device /dev/dri/renderD128 -v ollama:/root/.ollama -e OLLAMA_ORIGINS='*.github.io' -p 11434:11434 --name ollama ollama/ollama:rocm`
`docker exec -it ollama /bin/bash` (the container is named ollama in the run command above)
`ollama run --verbose llama2-uncensored:70b` (this is 38G)
and the output I get is:
```
[root@d8466466df3c /]# ollama run --verbose llama2-uncensored:70b
Error: llama runner process no longer running: -1 CUDA error: out of memory
current device: 0, in function alloc at /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml-cuda.cu:302
cuMemCreate(&handle, reserve_size, &prop, 0)
GGML_ASSERT: /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml-cuda.cu:60: !"CUDA error"
[root@d8466466df3c /]#
```
And screenshot for reference (mid-run of final command)
![Screenshot from 2024-04-18 00-20-01](https://github.com/ollama/ollama/assets/7828606/de4155fd-cbc9-4cde-b58f-8e7120d75105)
Happy to follow up and test.
### OS
Linux, Docker
### GPU
Nvidia, AMD
### CPU
AMD
### Ollama version
ollama version is 0.1.32:rocm | {
"login": "erasmus74",
"id": 7828606,
"node_id": "MDQ6VXNlcjc4Mjg2MDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/7828606?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/erasmus74",
"html_url": "https://github.com/erasmus74",
"followers_url": "https://api.github.com/users/erasmus74/followers",
"following_url": "https://api.github.com/users/erasmus74/following{/other_user}",
"gists_url": "https://api.github.com/users/erasmus74/gists{/gist_id}",
"starred_url": "https://api.github.com/users/erasmus74/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/erasmus74/subscriptions",
"organizations_url": "https://api.github.com/users/erasmus74/orgs",
"repos_url": "https://api.github.com/users/erasmus74/repos",
"events_url": "https://api.github.com/users/erasmus74/events{/privacy}",
"received_events_url": "https://api.github.com/users/erasmus74/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3723/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3723/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5651 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5651/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5651/comments | https://api.github.com/repos/ollama/ollama/issues/5651/events | https://github.com/ollama/ollama/issues/5651 | 2,405,465,712 | I_kwDOJ0Z1Ps6PYH5w | 5,651 | 2nd prompt never completes | {
"login": "Konuralpkilinc",
"id": 91570726,
"node_id": "U_kgDOBXVCJg",
"avatar_url": "https://avatars.githubusercontent.com/u/91570726?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Konuralpkilinc",
"html_url": "https://github.com/Konuralpkilinc",
"followers_url": "https://api.github.com/users/Konuralpkilinc/followers",
"following_url": "https://api.github.com/users/Konuralpkilinc/following{/other_user}",
"gists_url": "https://api.github.com/users/Konuralpkilinc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Konuralpkilinc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Konuralpkilinc/subscriptions",
"organizations_url": "https://api.github.com/users/Konuralpkilinc/orgs",
"repos_url": "https://api.github.com/users/Konuralpkilinc/repos",
"events_url": "https://api.github.com/users/Konuralpkilinc/events{/privacy}",
"received_events_url": "https://api.github.com/users/Konuralpkilinc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 4 | 2024-07-12T12:31:14 | 2024-07-16T13:04:54 | 2024-07-16T13:04:54 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Whenever I try to give a second prompt on any GGUF model, Ollama fails. Here are the logs:
```
time=2024-07-12T15:47:23.505Z level=INFO source=sched.go:738 msg="new model will fit in available VRAM in single GPU, loading" model=/home/udemirezen/.ollama/models/blobs/sha256-c0dd304d761e8e05d082cc2902d7624a7f87858fdfaa4ef098330ffe767ff0d3 gpu=GPU-eb3a6957-5b68-4958-c313-3b20f0815e4d parallel=4 available=33768079360 required="12.4 GiB"
time=2024-07-12T15:47:23.506Z level=INFO source=memory.go:309 msg="offload to cuda" layers.requested=-1 layers.model=33 layers.offload=33 layers.split="" memory.available="[31.4 GiB]" memory.required.full="12.4 GiB" memory.required.partial="12.4 GiB" memory.required.kv="8.0 GiB" memory.required.allocations="[12.4 GiB]" memory.weights.total="10.5 GiB" memory.weights.repeating="10.4 GiB" memory.weights.nonrepeating="102.6 MiB" memory.graph.full="1.1 GiB" memory.graph.partial="1.3 GiB"
time=2024-07-12T15:47:23.507Z level=INFO source=server.go:375 msg="starting llama server" cmd="/tmp/ollama3602127725/runners/cuda_v11/ollama_llama_server --model /home/udemirezen/.ollama/models/blobs/sha256-c0dd304d761e8e05d082cc2902d7624a7f87858fdfaa4ef098330ffe767ff0d3 --ctx-size 16384 --batch-size 512 --embedding --log-disable --n-gpu-layers 33 --parallel 4 --port 40663"
time=2024-07-12T15:47:23.508Z level=INFO source=sched.go:474 msg="loaded runners" count=1
time=2024-07-12T15:47:23.508Z level=INFO source=server.go:563 msg="waiting for llama runner to start responding"
time=2024-07-12T15:47:23.513Z level=INFO source=server.go:604 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=1 commit="a8db2a9" tid="139871727800320" timestamp=1720799243
INFO [main] system info | n_threads=24 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 0 | " tid="139871727800320" timestamp=1720799243 total_threads=48
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="47" port="40663" tid="139871727800320" timestamp=1720799243
llama_model_loader: loaded meta data with 19 key-value pairs and 291 tensors from /home/udemirezen/.ollama/models/blobs/sha256-c0dd304d761e8e05d082cc2902d7624a7f87858fdfaa4ef098330ffe767ff0d3 (version GGUF V2)
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = LLaMA v2
llama_model_loader: - kv 2: llama.context_length u32 = 4096
llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
llama_model_loader: - kv 4: llama.block_count u32 = 32
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 11008
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 32
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 10: general.file_type u32 = 10
llama_model_loader: - kv 11: tokenizer.ggml.model str = llama
llama_model_loader: - kv 12: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv 13: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 15: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 16: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 17: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 18: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q2_K: 65 tensors
llama_model_loader: - type q3_K: 160 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: special tokens cache size = 259
llm_load_vocab: token to piece cache size = 0.1684 MB
llm_load_print_meta: format = GGUF V2
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 4096
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 4096
llm_load_print_meta: n_embd_v_gqa = 4096
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-06
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 11008
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 4096
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = Q2_K - Medium
llm_load_print_meta: model params = 6.74 B
llm_load_print_meta: model size = 2.63 GiB (3.35 BPW)
llm_load_print_meta: general.name = LLaMA v2
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_print_meta: max token length = 48
time=2024-07-12T15:47:23.765Z level=INFO source=server.go:604 msg="waiting for server to become available" status="llm server loading model"
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: yes
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: Tesla V100-SXM2-32GB, compute capability 7.0, VMM: yes
llm_load_tensors: ggml ctx size = 0.27 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors: CPU buffer size = 41.02 MiB
llm_load_tensors: CUDA0 buffer size = 2653.31 MiB
llama_new_context_with_model: n_ctx = 16384
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 8192.00 MiB
llama_new_context_with_model: KV self size = 8192.00 MiB, K (f16): 4096.00 MiB, V (f16): 4096.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 0.55 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 1088.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 40.01 MiB
llama_new_context_with_model: graph nodes = 1030
llama_new_context_with_model: graph splits = 2
INFO [main] model loaded | tid="139871727800320" timestamp=1720799246
time=2024-07-12T15:47:26.280Z level=INFO source=server.go:609 msg="llama runner started in 2.77 seconds"
ggml_cuda_compute_forward: MUL failed
CUDA error: unspecified launch failure
current device: 0, in function ggml_cuda_compute_forward at /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml/src/ggml-cuda.cu:2283
err
GGML_ASSERT: /go/src/github.com/ollama/ollama/llm/llama.cpp/ggml/src/ggml-cuda.cu:100: !"CUDA error"
```
To visualize things I use Open WebUI; here are some screenshots:
![image](https://github.com/user-attachments/assets/f0b9cbd0-4f9f-4da4-83c4-a441e32f5186)
![image](https://github.com/user-attachments/assets/f03b16d9-3073-49f7-8b2c-6d80bf833675)
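In case it helps reproduce outside the UI, the same two-prompt sequence can presumably be triggered with consecutive API calls (a sketch; the model name and prompts are just examples):
```
# First request loads the model and completes fine.
curl http://localhost:11434/api/generate -d '{"model":"llama2","prompt":"hello","stream":false}'
# Second request against the already-loaded model is where the failure reportedly occurs.
curl http://localhost:11434/api/generate -d '{"model":"llama2","prompt":"hello again","stream":false}'
```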
### OS
Linux
### GPU
Nvidia
### CPU
Other
### Ollama version
0.2.1 | {
"login": "Konuralpkilinc",
"id": 91570726,
"node_id": "U_kgDOBXVCJg",
"avatar_url": "https://avatars.githubusercontent.com/u/91570726?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Konuralpkilinc",
"html_url": "https://github.com/Konuralpkilinc",
"followers_url": "https://api.github.com/users/Konuralpkilinc/followers",
"following_url": "https://api.github.com/users/Konuralpkilinc/following{/other_user}",
"gists_url": "https://api.github.com/users/Konuralpkilinc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Konuralpkilinc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Konuralpkilinc/subscriptions",
"organizations_url": "https://api.github.com/users/Konuralpkilinc/orgs",
"repos_url": "https://api.github.com/users/Konuralpkilinc/repos",
"events_url": "https://api.github.com/users/Konuralpkilinc/events{/privacy}",
"received_events_url": "https://api.github.com/users/Konuralpkilinc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5651/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5651/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3769 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3769/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3769/comments | https://api.github.com/repos/ollama/ollama/issues/3769/events | https://github.com/ollama/ollama/issues/3769 | 2,254,323,953 | I_kwDOJ0Z1Ps6GXkDx | 3,769 | An existing connection was forcibly closed by the remote host. Could you help me? | {
"login": "risingnew",
"id": 128674607,
"node_id": "U_kgDOB6trLw",
"avatar_url": "https://avatars.githubusercontent.com/u/128674607?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/risingnew",
"html_url": "https://github.com/risingnew",
"followers_url": "https://api.github.com/users/risingnew/followers",
"following_url": "https://api.github.com/users/risingnew/following{/other_user}",
"gists_url": "https://api.github.com/users/risingnew/gists{/gist_id}",
"starred_url": "https://api.github.com/users/risingnew/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/risingnew/subscriptions",
"organizations_url": "https://api.github.com/users/risingnew/orgs",
"repos_url": "https://api.github.com/users/risingnew/repos",
"events_url": "https://api.github.com/users/risingnew/events{/privacy}",
"received_events_url": "https://api.github.com/users/risingnew/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 22 | 2024-04-20T02:07:16 | 2024-07-29T08:39:35 | 2024-05-02T00:24:10 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
```
PS C:\Users\Administrator\AppData\Local\Ollama> ollama run llama3
pulling manifest
Error: pull model manifest: Get "https://ollama.com/token?nonce=1AKxIvoajv-NPGYukzWJcA&scope=repository%!A(MISSING)library%!F(MISSING)llama3%!A(MISSING)pull&service=ollama.com&ts=1713578711": read tcp 192.168.124.11:53463->34.120.132.20:443: wsarecv: An existing connection was forcibly closed by the remote host.
```
![100614](https://github.com/ollama/ollama/assets/128674607/c25a50e8-0f0d-4291-b9bb-a5c40e7a604d)
[server.log](https://github.com/ollama/ollama/files/15046570/server.log)
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
_No response_ | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3769/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3769/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1813 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1813/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1813/comments | https://api.github.com/repos/ollama/ollama/issues/1813/events | https://github.com/ollama/ollama/issues/1813 | 2,067,859,251 | I_kwDOJ0Z1Ps57QQcz | 1,813 | How to run Ollama only on a dedicated GPU? (Instead of all GPUs) | {
"login": "sthufnagl",
"id": 1492014,
"node_id": "MDQ6VXNlcjE0OTIwMTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1492014?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sthufnagl",
"html_url": "https://github.com/sthufnagl",
"followers_url": "https://api.github.com/users/sthufnagl/followers",
"following_url": "https://api.github.com/users/sthufnagl/following{/other_user}",
"gists_url": "https://api.github.com/users/sthufnagl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sthufnagl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sthufnagl/subscriptions",
"organizations_url": "https://api.github.com/users/sthufnagl/orgs",
"repos_url": "https://api.github.com/users/sthufnagl/repos",
"events_url": "https://api.github.com/users/sthufnagl/events{/privacy}",
"received_events_url": "https://api.github.com/users/sthufnagl/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 6677745918,
"node_id": "LA_kwDOJ0Z1Ps8AAAABjgZQ_g",
"url": "https://api.github.com/repos/ollama/ollama/labels/gpu",
"name": "gpu",
"color": "76C49E",
"default": false,
"description": ""
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 37 | 2024-01-05T18:35:28 | 2024-11-21T19:33:14 | 2024-03-24T18:15:04 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi,
I have 3x RTX 3090 and I want to run an Ollama instance only on a dedicated GPU. The reason for this: to have 3 Ollama instances (with different ports) for use with AutoGen.
I also tried the Docker Ollama image, without luck.
Or is there another solution?
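For reference, the kind of per-GPU setup I'm after would look roughly like this; a minimal sketch, assuming NVIDIA's CUDA_VISIBLE_DEVICES and Ollama's OLLAMA_HOST variable behave as documented:
```
# Pin each server process to one GPU and give each its own port.
CUDA_VISIBLE_DEVICES=0 OLLAMA_HOST=127.0.0.1:11434 ollama serve &
CUDA_VISIBLE_DEVICES=1 OLLAMA_HOST=127.0.0.1:11435 ollama serve &
CUDA_VISIBLE_DEVICES=2 OLLAMA_HOST=127.0.0.1:11436 ollama serve &
```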
Let me know...
Thanks in advance
Steve | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1813/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1813/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3519 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3519/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3519/comments | https://api.github.com/repos/ollama/ollama/issues/3519/events | https://github.com/ollama/ollama/pull/3519 | 2,229,541,406 | PR_kwDOJ0Z1Ps5r7F9r | 3,519 | fix: close files in the CreateHandler func | {
"login": "testwill",
"id": 8717479,
"node_id": "MDQ6VXNlcjg3MTc0Nzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/8717479?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/testwill",
"html_url": "https://github.com/testwill",
"followers_url": "https://api.github.com/users/testwill/followers",
"following_url": "https://api.github.com/users/testwill/following{/other_user}",
"gists_url": "https://api.github.com/users/testwill/gists{/gist_id}",
"starred_url": "https://api.github.com/users/testwill/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/testwill/subscriptions",
"organizations_url": "https://api.github.com/users/testwill/orgs",
"repos_url": "https://api.github.com/users/testwill/repos",
"events_url": "https://api.github.com/users/testwill/events{/privacy}",
"received_events_url": "https://api.github.com/users/testwill/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-04-07T03:27:39 | 2024-05-09T07:25:09 | 2024-05-09T07:25:09 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3519",
"html_url": "https://github.com/ollama/ollama/pull/3519",
"diff_url": "https://github.com/ollama/ollama/pull/3519.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3519.patch",
"merged_at": null
} | null | {
"login": "testwill",
"id": 8717479,
"node_id": "MDQ6VXNlcjg3MTc0Nzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/8717479?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/testwill",
"html_url": "https://github.com/testwill",
"followers_url": "https://api.github.com/users/testwill/followers",
"following_url": "https://api.github.com/users/testwill/following{/other_user}",
"gists_url": "https://api.github.com/users/testwill/gists{/gist_id}",
"starred_url": "https://api.github.com/users/testwill/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/testwill/subscriptions",
"organizations_url": "https://api.github.com/users/testwill/orgs",
"repos_url": "https://api.github.com/users/testwill/repos",
"events_url": "https://api.github.com/users/testwill/events{/privacy}",
"received_events_url": "https://api.github.com/users/testwill/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3519/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3519/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7266 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7266/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7266/comments | https://api.github.com/repos/ollama/ollama/issues/7266/events | https://github.com/ollama/ollama/issues/7266 | 2,598,721,607 | I_kwDOJ0Z1Ps6a5VhH | 7,266 | Windows ARM64 fails when loading model, error code 0xc000001d | {
"login": "mikechambers84",
"id": 904313,
"node_id": "MDQ6VXNlcjkwNDMxMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/904313?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mikechambers84",
"html_url": "https://github.com/mikechambers84",
"followers_url": "https://api.github.com/users/mikechambers84/followers",
"following_url": "https://api.github.com/users/mikechambers84/following{/other_user}",
"gists_url": "https://api.github.com/users/mikechambers84/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mikechambers84/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mikechambers84/subscriptions",
"organizations_url": "https://api.github.com/users/mikechambers84/orgs",
"repos_url": "https://api.github.com/users/mikechambers84/repos",
"events_url": "https://api.github.com/users/mikechambers84/events{/privacy}",
"received_events_url": "https://api.github.com/users/mikechambers84/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 5860134234,
"node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg",
"url": "https://api.github.com/repos/ollama/ollama/labels/windows",
"name": "windows",
"color": "0052CC",
"default": false,
"description": ""
}
] | open | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 5 | 2024-10-19T04:07:04 | 2024-10-25T03:59:36 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I installed the latest Ollama for Windows (ARM64 build) on my 2023 Windows Dev Kit, which has an 8-core ARM processor, a Snapdragon 8cx Gen 3. It's running Windows 11 Pro.
I can pull models, but when I go to run them, I get an error. It doesn't matter which model I run; I've tried several. Here's an example.
```
C:\Users\Mike Chambers>ollama pull gemma2:2b
pulling manifest
pulling 7462734796d6... 100% ▕████████████████████████████████████████████████████████▏ 1.6 GB
pulling e0a42594d802... 100% ▕████████████████████████████████████████████████████████▏ 358 B
pulling 097a36493f71... 100% ▕████████████████████████████████████████████████████████▏ 8.4 KB
pulling 2490e7468436... 100% ▕████████████████████████████████████████████████████████▏ 65 B
pulling e18ad7af7efb... 100% ▕████████████████████████████████████████████████████████▏ 487 B
verifying sha256 digest
writing manifest
success
C:\Users\Mike Chambers>ollama run gemma2:2b
Error: llama runner process has terminated: exit status 0xc000001d
C:\Users\Mike Chambers>
```
### OS
Windows
### GPU
_No response_
### CPU
Other
### Ollama version
0.3.13
[server.log](https://github.com/user-attachments/files/17448637/server.log) | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7266/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7266/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/8082 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8082/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8082/comments | https://api.github.com/repos/ollama/ollama/issues/8082/events | https://github.com/ollama/ollama/pull/8082 | 2,737,564,400 | PR_kwDOJ0Z1Ps6FGxNy | 8,082 | Docs: Add /api/version endpoint to API documentation | {
"login": "anxkhn",
"id": 83116240,
"node_id": "MDQ6VXNlcjgzMTE2MjQw",
"avatar_url": "https://avatars.githubusercontent.com/u/83116240?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anxkhn",
"html_url": "https://github.com/anxkhn",
"followers_url": "https://api.github.com/users/anxkhn/followers",
"following_url": "https://api.github.com/users/anxkhn/following{/other_user}",
"gists_url": "https://api.github.com/users/anxkhn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anxkhn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anxkhn/subscriptions",
"organizations_url": "https://api.github.com/users/anxkhn/orgs",
"repos_url": "https://api.github.com/users/anxkhn/repos",
"events_url": "https://api.github.com/users/anxkhn/events{/privacy}",
"received_events_url": "https://api.github.com/users/anxkhn/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 0 | 2024-12-13T06:44:22 | 2024-12-29T19:33:44 | 2024-12-29T19:33:44 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8082",
"html_url": "https://github.com/ollama/ollama/pull/8082",
"diff_url": "https://github.com/ollama/ollama/pull/8082.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8082.patch",
"merged_at": "2024-12-29T19:33:44"
} | This PR adds the `/api/version` endpoint to the Ollama API documentation. This endpoint allows clients to retrieve the Ollama server version, which is useful for ensuring compatibility with features that depend on specific server versions.
The `/api/version` endpoint was added in a previous commit but was not documented. This PR addresses that omission.
Resolves #8040
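For reference, a quick sketch of the documented endpoint in use (the version string shown is illustrative, not a real response from any particular install):
```
curl http://localhost:11434/api/version
# => {"version":"0.5.4"}   (example output; the actual value depends on the install)
```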
| {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8082/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8082/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/992 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/992/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/992/comments | https://api.github.com/repos/ollama/ollama/issues/992/events | https://github.com/ollama/ollama/pull/992 | 1,977,056,120 | PR_kwDOJ0Z1Ps5elQcL | 992 | update langchainjs doc | {
"login": "aashish2057",
"id": 42164334,
"node_id": "MDQ6VXNlcjQyMTY0MzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/42164334?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aashish2057",
"html_url": "https://github.com/aashish2057",
"followers_url": "https://api.github.com/users/aashish2057/followers",
"following_url": "https://api.github.com/users/aashish2057/following{/other_user}",
"gists_url": "https://api.github.com/users/aashish2057/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aashish2057/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aashish2057/subscriptions",
"organizations_url": "https://api.github.com/users/aashish2057/orgs",
"repos_url": "https://api.github.com/users/aashish2057/repos",
"events_url": "https://api.github.com/users/aashish2057/events{/privacy}",
"received_events_url": "https://api.github.com/users/aashish2057/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-11-03T23:47:11 | 2023-11-09T13:08:31 | 2023-11-09T13:08:31 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/992",
"html_url": "https://github.com/ollama/ollama/pull/992",
"diff_url": "https://github.com/ollama/ollama/pull/992.diff",
"patch_url": "https://github.com/ollama/ollama/pull/992.patch",
"merged_at": "2023-11-09T13:08:31"
} | Updates docs/tutorials/langchainjs.md from issue #539
Adds a missing `await` on line 36 and adds instructions to install cheerio.
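For reference, the added install step would be something like this (a sketch, assuming npm is the package manager used by the tutorial):
```
npm install cheerio
```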
| {
"login": "technovangelist",
"id": 633681,
"node_id": "MDQ6VXNlcjYzMzY4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/technovangelist",
"html_url": "https://github.com/technovangelist",
"followers_url": "https://api.github.com/users/technovangelist/followers",
"following_url": "https://api.github.com/users/technovangelist/following{/other_user}",
"gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions",
"organizations_url": "https://api.github.com/users/technovangelist/orgs",
"repos_url": "https://api.github.com/users/technovangelist/repos",
"events_url": "https://api.github.com/users/technovangelist/events{/privacy}",
"received_events_url": "https://api.github.com/users/technovangelist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/992/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/992/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1032 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1032/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1032/comments | https://api.github.com/repos/ollama/ollama/issues/1032/events | https://github.com/ollama/ollama/pull/1032 | 1,981,597,237 | PR_kwDOJ0Z1Ps5e0dAn | 1,032 | Update configuration instructions in README | {
"login": "joake",
"id": 11403993,
"node_id": "MDQ6VXNlcjExNDAzOTkz",
"avatar_url": "https://avatars.githubusercontent.com/u/11403993?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joake",
"html_url": "https://github.com/joake",
"followers_url": "https://api.github.com/users/joake/followers",
"following_url": "https://api.github.com/users/joake/following{/other_user}",
"gists_url": "https://api.github.com/users/joake/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joake/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joake/subscriptions",
"organizations_url": "https://api.github.com/users/joake/orgs",
"repos_url": "https://api.github.com/users/joake/repos",
"events_url": "https://api.github.com/users/joake/events{/privacy}",
"received_events_url": "https://api.github.com/users/joake/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-11-07T15:13:30 | 2023-11-15T16:33:39 | 2023-11-15T16:33:38 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/1032",
"html_url": "https://github.com/ollama/ollama/pull/1032",
"diff_url": "https://github.com/ollama/ollama/pull/1032.diff",
"patch_url": "https://github.com/ollama/ollama/pull/1032.patch",
"merged_at": null
} | Added instructions on how to specify the model file location, as it isn't mentioned in the docs. This should probably be handled by an `ollama config modelpath` function instead. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1032/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1032/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2730 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2730/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2730/comments | https://api.github.com/repos/ollama/ollama/issues/2730/events | https://github.com/ollama/ollama/issues/2730 | 2,152,306,215 | I_kwDOJ0Z1Ps6ASZYn | 2,730 | Langchain + Chainlit integration issue | {
"login": "Michelklingler",
"id": 63208430,
"node_id": "MDQ6VXNlcjYzMjA4NDMw",
"avatar_url": "https://avatars.githubusercontent.com/u/63208430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Michelklingler",
"html_url": "https://github.com/Michelklingler",
"followers_url": "https://api.github.com/users/Michelklingler/followers",
"following_url": "https://api.github.com/users/Michelklingler/following{/other_user}",
"gists_url": "https://api.github.com/users/Michelklingler/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Michelklingler/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Michelklingler/subscriptions",
"organizations_url": "https://api.github.com/users/Michelklingler/orgs",
"repos_url": "https://api.github.com/users/Michelklingler/repos",
"events_url": "https://api.github.com/users/Michelklingler/events{/privacy}",
"received_events_url": "https://api.github.com/users/Michelklingler/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2024-02-24T13:14:18 | 2024-03-12T04:48:11 | 2024-03-12T04:48:11 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi!
I'm using Ollama on a local server with an RTX A6000 Ada, running Mixtral 8x7B.
I run Ollama locally and expose an API endpoint for multiple users to connect and use the LLM in a chat powered by Chainlit + Langchain.
There are two issues I want to solve:
1 - Ollama is serving, but the model seems to unload from the GPU after a period of inactivity. This makes the first request take a little longer than usual. Is it possible to make sure the model is always running and loaded on the GPU? (See the sketch below.)
2 - When I make two requests at the same time, the output streaming for both users freezes for a few seconds before continuing more slowly.
It makes sense that it slows down when Ollama serves two users at once, but I wonder why there is this constant freeze; when I make 10 requests at the same time, some of them don't even go through.
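For question 1, a minimal sketch of one approach, assuming the server supports the `keep_alive` request parameter (a value of `-1` keeps the model loaded indefinitely; the model name here is an assumption):
```
curl http://localhost:11434/api/generate -d '{
  "model": "mixtral:8x7b",
  "keep_alive": -1
}'
```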
Could anyone help me on this?
Thanks!
Michel | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2730/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2730/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2391 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2391/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2391/comments | https://api.github.com/repos/ollama/ollama/issues/2391/events | https://github.com/ollama/ollama/issues/2391 | 2,123,479,904 | I_kwDOJ0Z1Ps5-kbtg | 2,391 | Ollama outputs endless stream of random words | {
"login": "ncarolan",
"id": 47750607,
"node_id": "MDQ6VXNlcjQ3NzUwNjA3",
"avatar_url": "https://avatars.githubusercontent.com/u/47750607?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ncarolan",
"html_url": "https://github.com/ncarolan",
"followers_url": "https://api.github.com/users/ncarolan/followers",
"following_url": "https://api.github.com/users/ncarolan/following{/other_user}",
"gists_url": "https://api.github.com/users/ncarolan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ncarolan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ncarolan/subscriptions",
"organizations_url": "https://api.github.com/users/ncarolan/orgs",
"repos_url": "https://api.github.com/users/ncarolan/repos",
"events_url": "https://api.github.com/users/ncarolan/events{/privacy}",
"received_events_url": "https://api.github.com/users/ncarolan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 2 | 2024-02-07T16:59:40 | 2024-02-23T06:33:30 | 2024-02-07T22:14:45 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | When running a model with any prompt, the output is a constant stream of random characters and words from various languages. The nonsensical output continues until ollama is terminated. An example prompt and output are included below:
```
$ ollama run llama2
>>> hi
alseularoured阳negneg вер VALUESalling阳 statementençneg LageTX subsequent VALUES門 посownerençneg African
calculate amerik calculate VALUES interrupted competed succeed subsequentcdot Lage VALUES VALUES
segmentsetra Архиownerးularдні right Pubenç пос門 subsequent АрхиWR African calculate ante Storm ante
calculateenç阳 Архи Mortремен concentrationularottedowneretraship succeed subsequent Архиeffect seis VALUE
верalse Lage stre VALUESular Lage calculateenç пос riv VALUES calculate nad Hannover облаouredoured
VALUES出 ante statement вер Betrieb calculatecdot VALUES阳TX Lage Lage subsequentishingcalled Stormalling
österreich阳 segments nad阳ovooured amerik ante верး succeed Pub Pub Архиownerishing calculate VALUES
competed interruptedishing Stormular門shiputer nad concentration seis門 Mort Pubishing rightремен African
MortInterceptor subsequent statement succeed Lage statementWRularençдні Lageenç African阳 Mortotted
VALUESeffect ante門 succeedTX stre Australneg阳enç обла nad ante Hannoverbo antecdot посcalledenç посalse
amerikowner segments阳 Lage Pub Mortularovoneg Storm Lage Архи Mortishing statement concentration門 ante Storm
Mort Betrieb riv вер Pub Africanးneg interrupted calculatenegenç wol阳 ante calculateular nadдні
statementallingenç stre ante Архиalseençnegetraownerремен stre VALUES вер African Storm African nad
calculate門 Africanownereffectouredneg Storm calculate верTX Africanotted ante VALUES antecdot Hannover Mort
seis subsequent amerik subsequentboowner門阳 Mort concentrationenç обла African African Mortownerး пос Mort阳
Stormship ante competed interruptedençetra subsequent Betrieb Lagecalled calculateular succeed anteдні riv阳
пос ante subsequentovo streuter segments succeed ante Pub succeed AustraleffectWR subsequent VALUES
...
```
The full output text is cut off to save space. This occurs with any prompt I tried for both llama2 and mistral. The same phenomenon occurs when using curl rather than the CLI.
Activity Monitor shows no GPU usage, so I suspect no model inference is actually occurring.
I am using a 16GB M2 Mac.
| {
"login": "ncarolan",
"id": 47750607,
"node_id": "MDQ6VXNlcjQ3NzUwNjA3",
"avatar_url": "https://avatars.githubusercontent.com/u/47750607?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ncarolan",
"html_url": "https://github.com/ncarolan",
"followers_url": "https://api.github.com/users/ncarolan/followers",
"following_url": "https://api.github.com/users/ncarolan/following{/other_user}",
"gists_url": "https://api.github.com/users/ncarolan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ncarolan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ncarolan/subscriptions",
"organizations_url": "https://api.github.com/users/ncarolan/orgs",
"repos_url": "https://api.github.com/users/ncarolan/repos",
"events_url": "https://api.github.com/users/ncarolan/events{/privacy}",
"received_events_url": "https://api.github.com/users/ncarolan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2391/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2391/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3834 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3834/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3834/comments | https://api.github.com/repos/ollama/ollama/issues/3834/events | https://github.com/ollama/ollama/pull/3834 | 2,257,621,848 | PR_kwDOJ0Z1Ps5taLzO | 3,834 | Report errors on server lookup instead of path lookup failure | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-04-22T23:23:35 | 2024-04-24T17:50:51 | 2024-04-24T17:50:48 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3834",
"html_url": "https://github.com/ollama/ollama/pull/3834",
"diff_url": "https://github.com/ollama/ollama/pull/3834.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3834.patch",
"merged_at": "2024-04-24T17:50:48"
} | This should help identify more details on the failure mode for #3738 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3834/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3834/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1139 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1139/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1139/comments | https://api.github.com/repos/ollama/ollama/issues/1139/events | https://github.com/ollama/ollama/issues/1139 | 1,995,042,241 | I_kwDOJ0Z1Ps526e3B | 1,139 | Error while loading Nous-Capybara-34B | {
"login": "eramax",
"id": 542413,
"node_id": "MDQ6VXNlcjU0MjQxMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/542413?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eramax",
"html_url": "https://github.com/eramax",
"followers_url": "https://api.github.com/users/eramax/followers",
"following_url": "https://api.github.com/users/eramax/following{/other_user}",
"gists_url": "https://api.github.com/users/eramax/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eramax/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eramax/subscriptions",
"organizations_url": "https://api.github.com/users/eramax/orgs",
"repos_url": "https://api.github.com/users/eramax/repos",
"events_url": "https://api.github.com/users/eramax/events{/privacy}",
"received_events_url": "https://api.github.com/users/eramax/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 2 | 2023-11-15T15:48:08 | 2024-03-11T18:36:03 | 2024-03-11T18:36:02 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I tried to run https://huggingface.co/TheBloke/Nous-Capybara-34B-GGUF
using this Modelfile:
```
FROM ./nous-capybara-34b.Q3_K_S.gguf
TEMPLATE """USER: {{ .Prompt }} ASSISTANT:"""
PARAMETER num_ctx 200000
PARAMETER stop "USER"
PARAMETER stop "ASSISTANT"
```
I got this error
```
Error: llama runner process has terminated
```
I have 32 GB of RAM and a 1050 Ti GPU with 4 GB of VRAM.
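One variable worth isolating is the 200K `num_ctx`, since the KV cache for a context that large can exceed the available RAM on its own. A minimal sketch of a reduced-context Modelfile to test with (the value 8192 is illustrative, not a recommendation from this thread):
```
FROM ./nous-capybara-34b.Q3_K_S.gguf
TEMPLATE """USER: {{ .Prompt }} ASSISTANT:"""
# Illustrative smaller context to rule out an out-of-memory failure
PARAMETER num_ctx 8192
PARAMETER stop "USER"
PARAMETER stop "ASSISTANT"
```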
| {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1139/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1139/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1600 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1600/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1600/comments | https://api.github.com/repos/ollama/ollama/issues/1600/events | https://github.com/ollama/ollama/issues/1600 | 2,048,028,242 | I_kwDOJ0Z1Ps56Em5S | 1,600 | Is there any option to unload a model from memory? | {
"login": "DanielMazurkiewicz",
"id": 2885673,
"node_id": "MDQ6VXNlcjI4ODU2NzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/2885673?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DanielMazurkiewicz",
"html_url": "https://github.com/DanielMazurkiewicz",
"followers_url": "https://api.github.com/users/DanielMazurkiewicz/followers",
"following_url": "https://api.github.com/users/DanielMazurkiewicz/following{/other_user}",
"gists_url": "https://api.github.com/users/DanielMazurkiewicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DanielMazurkiewicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DanielMazurkiewicz/subscriptions",
"organizations_url": "https://api.github.com/users/DanielMazurkiewicz/orgs",
"repos_url": "https://api.github.com/users/DanielMazurkiewicz/repos",
"events_url": "https://api.github.com/users/DanielMazurkiewicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/DanielMazurkiewicz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 22 | 2023-12-19T06:45:11 | 2024-11-18T02:52:12 | 2024-01-28T22:34:39 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | As in the title. I want to unload model, is there any option for it? | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1600/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1600/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6261 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6261/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6261/comments | https://api.github.com/repos/ollama/ollama/issues/6261/events | https://github.com/ollama/ollama/issues/6261 | 2,456,527,800 | I_kwDOJ0Z1Ps6Sa6O4 | 6,261 | Offload a model command | {
"login": "stavsap",
"id": 4201054,
"node_id": "MDQ6VXNlcjQyMDEwNTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/4201054?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stavsap",
"html_url": "https://github.com/stavsap",
"followers_url": "https://api.github.com/users/stavsap/followers",
"following_url": "https://api.github.com/users/stavsap/following{/other_user}",
"gists_url": "https://api.github.com/users/stavsap/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stavsap/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stavsap/subscriptions",
"organizations_url": "https://api.github.com/users/stavsap/orgs",
"repos_url": "https://api.github.com/users/stavsap/repos",
"events_url": "https://api.github.com/users/stavsap/events{/privacy}",
"received_events_url": "https://api.github.com/users/stavsap/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 5 | 2024-08-08T19:59:34 | 2024-09-09T22:23:08 | 2024-09-09T22:23:07 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Can we have an "offload a model" CLI/API command to remove a model from memory/VRAM? | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6261/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6261/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4860 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4860/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4860/comments | https://api.github.com/repos/ollama/ollama/issues/4860/events | https://github.com/ollama/ollama/pull/4860 | 2,338,467,407 | PR_kwDOJ0Z1Ps5xsV2E | 4,860 | Update requirements.txt | {
"login": "dcasota",
"id": 14890243,
"node_id": "MDQ6VXNlcjE0ODkwMjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/14890243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dcasota",
"html_url": "https://github.com/dcasota",
"followers_url": "https://api.github.com/users/dcasota/followers",
"following_url": "https://api.github.com/users/dcasota/following{/other_user}",
"gists_url": "https://api.github.com/users/dcasota/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dcasota/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dcasota/subscriptions",
"organizations_url": "https://api.github.com/users/dcasota/orgs",
"repos_url": "https://api.github.com/users/dcasota/repos",
"events_url": "https://api.github.com/users/dcasota/events{/privacy}",
"received_events_url": "https://api.github.com/users/dcasota/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-06-06T14:55:08 | 2024-06-09T20:41:04 | 2024-06-09T20:23:58 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4860",
"html_url": "https://github.com/ollama/ollama/pull/4860",
"diff_url": "https://github.com/ollama/ollama/pull/4860.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4860.patch",
"merged_at": null
} | - chromadb==0.5.0 fixes ingest.py issue "Cannot submit more than 5,461 embeddings at once"
- python-docx==1.1.2 fixes ingest.py issue "ModuleNotFoundError: No module named 'docx'"
- including langchain-huggingface, langchain-community, and langchain-core prepares for modules that are being deprecated. Also, be aware of issue https://github.com/langchain-ai/langchain/issues/22510 | {
"login": "dcasota",
"id": 14890243,
"node_id": "MDQ6VXNlcjE0ODkwMjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/14890243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dcasota",
"html_url": "https://github.com/dcasota",
"followers_url": "https://api.github.com/users/dcasota/followers",
"following_url": "https://api.github.com/users/dcasota/following{/other_user}",
"gists_url": "https://api.github.com/users/dcasota/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dcasota/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dcasota/subscriptions",
"organizations_url": "https://api.github.com/users/dcasota/orgs",
"repos_url": "https://api.github.com/users/dcasota/repos",
"events_url": "https://api.github.com/users/dcasota/events{/privacy}",
"received_events_url": "https://api.github.com/users/dcasota/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4860/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4860/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2199 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2199/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2199/comments | https://api.github.com/repos/ollama/ollama/issues/2199/events | https://github.com/ollama/ollama/issues/2199 | 2,101,584,144 | I_kwDOJ0Z1Ps59Q6EQ | 2,199 | Go API client: make base URL and http client public fields | {
"login": "emidoots",
"id": 3173176,
"node_id": "MDQ6VXNlcjMxNzMxNzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/3173176?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emidoots",
"html_url": "https://github.com/emidoots",
"followers_url": "https://api.github.com/users/emidoots/followers",
"following_url": "https://api.github.com/users/emidoots/following{/other_user}",
"gists_url": "https://api.github.com/users/emidoots/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emidoots/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emidoots/subscriptions",
"organizations_url": "https://api.github.com/users/emidoots/orgs",
"repos_url": "https://api.github.com/users/emidoots/repos",
"events_url": "https://api.github.com/users/emidoots/events{/privacy}",
"received_events_url": "https://api.github.com/users/emidoots/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2024-01-26T04:36:41 | 2024-05-17T01:16:56 | 2024-05-17T01:16:56 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | It would be nice if these were public fields:
https://github.com/ollama/ollama/blob/main/api/client.go#L23-L24
`ClientFromEnvironment` is good, but there are cases where supplying one's own http client, transport, etc. may be quite useful. It's strange not to be able to supply these in a client library.
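A sketch of the kind of usage this would enable, assuming a constructor such as `api.NewClient(base, httpClient)` is exported (the base URL and timeout values here are illustrative):
```go
package main

import (
	"net/http"
	"net/url"
	"time"

	"github.com/ollama/ollama/api"
)

func main() {
	// Point the client at a non-default server and supply a custom
	// http.Client (e.g. for timeouts or a custom transport).
	base, err := url.Parse("http://localhost:11434")
	if err != nil {
		panic(err)
	}
	httpClient := &http.Client{Timeout: 2 * time.Minute}

	client := api.NewClient(base, httpClient)
	_ = client // use client.Generate, client.Chat, etc.
}
```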
Just thought I'd leave this feedback here, feel free to close if not helpful :) | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2199/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2199/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1949 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1949/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1949/comments | https://api.github.com/repos/ollama/ollama/issues/1949/events | https://github.com/ollama/ollama/issues/1949 | 2,078,718,320 | I_kwDOJ0Z1Ps575rlw | 1,949 | bad generation on multi-GPU setup | {
"login": "jerzydziewierz",
"id": 1606347,
"node_id": "MDQ6VXNlcjE2MDYzNDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1606347?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jerzydziewierz",
"html_url": "https://github.com/jerzydziewierz",
"followers_url": "https://api.github.com/users/jerzydziewierz/followers",
"following_url": "https://api.github.com/users/jerzydziewierz/following{/other_user}",
"gists_url": "https://api.github.com/users/jerzydziewierz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jerzydziewierz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jerzydziewierz/subscriptions",
"organizations_url": "https://api.github.com/users/jerzydziewierz/orgs",
"repos_url": "https://api.github.com/users/jerzydziewierz/repos",
"events_url": "https://api.github.com/users/jerzydziewierz/events{/privacy}",
"received_events_url": "https://api.github.com/users/jerzydziewierz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6430601766,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg",
"url": "https://api.github.com/repos/ollama/ollama/labels/nvidia",
"name": "nvidia",
"color": "8CDB00",
"default": false,
"description": "Issues relating to Nvidia GPUs and CUDA"
}
] | closed | false | null | [] | null | 9 | 2024-01-12T12:14:39 | 2024-05-17T00:58:46 | 2024-05-17T00:57:45 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | When using `vast.ai` and the image `nvidia/cuda:12.3.1-devel-ubuntu22.04`
with 4x RTX 3090 on an AMD EPYC 7302P 16-Core Processor,
trying any "small model" (I have not tried large models yet)
I get either an outright crash or a bad generation like the following,
and I quote:
```
############################
```
Screenshot of my desktop, showing `btop` in the top-right, `nvtop` in the bottom-right, `ollama serve` in the top-left, and `ollama run` in the bottom-left:
![image](https://github.com/jmorganca/ollama/assets/1606347/3c8b888d-b4fa-4731-9c60-a39d6680c7e0)
Output of `nvidia-smi`:
```
[email protected]:~$ nvidia-smi
Fri Jan 12 12:09:43 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 545.23.08 Driver Version: 545.23.08 CUDA Version: 12.3 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA GeForce RTX 3090 On | 00000000:01:00.0 Off | N/A |
| 30% 26C P8 37W / 350W | 2005MiB / 24576MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
| 1 NVIDIA GeForce RTX 3090 On | 00000000:41:00.0 Off | N/A |
| 30% 24C P8 32W / 350W | 1591MiB / 24576MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
| 2 NVIDIA GeForce RTX 3090 On | 00000000:81:00.0 Off | N/A |
| 30% 25C P8 30W / 350W | 1591MiB / 24576MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
| 3 NVIDIA GeForce RTX 3090 On | 00000000:C1:00.0 Off | N/A |
| 30% 26C P8 40W / 350W | 1591MiB / 24576MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
```
Any ideas? Maybe I should try a different image (CUDA version)?
Please advise what else I can try or report with this.
My eventual target is to run the new model, megadolphin (`https://ollama.ai/library/megadolphin`), on a multi-GPU setup.
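One isolation step worth trying (my assumption, not something confirmed in this thread) is pinning the server to a single GPU to rule out the multi-GPU tensor split as the culprit:
```
# CUDA_VISIBLE_DEVICES is a standard CUDA environment variable;
# this restricts the server to GPU 0 only for the test.
CUDA_VISIBLE_DEVICES=0 ollama serve
```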
| {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1949/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/ollama/ollama/issues/1949/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4454 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4454/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4454/comments | https://api.github.com/repos/ollama/ollama/issues/4454/events | https://github.com/ollama/ollama/issues/4454 | 2,298,206,546 | I_kwDOJ0Z1Ps6I-9lS | 4,454 | `OLLAMA_MODELS` environment variable is not respected | {
"login": "JLCarveth",
"id": 23156861,
"node_id": "MDQ6VXNlcjIzMTU2ODYx",
"avatar_url": "https://avatars.githubusercontent.com/u/23156861?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JLCarveth",
"html_url": "https://github.com/JLCarveth",
"followers_url": "https://api.github.com/users/JLCarveth/followers",
"following_url": "https://api.github.com/users/JLCarveth/following{/other_user}",
"gists_url": "https://api.github.com/users/JLCarveth/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JLCarveth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JLCarveth/subscriptions",
"organizations_url": "https://api.github.com/users/JLCarveth/orgs",
"repos_url": "https://api.github.com/users/JLCarveth/repos",
"events_url": "https://api.github.com/users/JLCarveth/events{/privacy}",
"received_events_url": "https://api.github.com/users/JLCarveth/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 4 | 2024-05-15T15:15:45 | 2025-01-26T06:16:40 | 2024-05-15T15:23:45 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I have followed the steps [here](https://github.com/ollama/ollama/blob/main/docs/faq.md#where-are-models-stored) to change where Ollama stores the downloaded models.
I make sure to run `systemctl daemon-reload` and to restart the ollama service, and yet it is still storing the model blobs in `/usr/share/ollama/...` instead of the location specified in `OLLAMA_MODELS`.
I expect Ollama to download the models to the specified location. I have insufficient space left on my root partition, which is why I am trying to download the models to my home directory instead.
`sudo systemctl edit ollama.service`:
```
### Anything between here and the comment below will become the contents of the drop-in file
Environment="OLLAMA_MODELS=/home/jlcarveth/.ollama/models"
```
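Note that systemd `Environment=` lines must sit under a `[Service]` section header for a drop-in to take effect, so a complete drop-in would look like this (a sketch using the path from above):
```
[Service]
Environment="OLLAMA_MODELS=/home/jlcarveth/.ollama/models"
```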
### OS
Linux
### GPU
AMD
### CPU
Intel
### Ollama version
0.1.32 | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4454/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4454/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6905 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6905/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6905/comments | https://api.github.com/repos/ollama/ollama/issues/6905/events | https://github.com/ollama/ollama/pull/6905 | 2,540,549,031 | PR_kwDOJ0Z1Ps58QjCS | 6,905 | runner: Set windows above normal priority for consistent CPU inference performance | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-09-21T23:17:22 | 2024-09-21T23:54:52 | 2024-09-21T23:54:49 | COLLABORATOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6905",
"html_url": "https://github.com/ollama/ollama/pull/6905",
"diff_url": "https://github.com/ollama/ollama/pull/6905.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6905.patch",
"merged_at": "2024-09-21T23:54:49"
} | When running the subprocess as a background service, Windows may throttle it, which can lead to thrashing and very poor token rates.
Fixes #3511
I've now reproduced the performance problem and understand what's going on that leads to poor CPU inference performance for some users on Windows.
Windows treats GUI apps and background services differently. By default, priority is given to GUI apps ("Programs")
![image](https://github.com/user-attachments/assets/33f6d5f4-74d5-4d35-9e33-6c3d45865687)
While you can change this, it wouldn't typically be recommended. The result is that when the tray app is started automatically at user login, or by clicking "Ollama" in the Start menu, it runs as a "background service", which the subprocess inherits. In some situations this can lead to the subprocess runner being throttled, and since we try to create as many threads as there are cores, this results in thrashing behavior and very poor token rates.
![image](https://github.com/user-attachments/assets/55207d05-4db3-4180-8d8d-9775bb531049)
By setting the scheduler priority class to "above normal", this allows more CPU usage, and higher CPU load, leading to much better token rates.
![image](https://github.com/user-attachments/assets/70d958ab-3492-4419-9130-5895bbcd1294)
![image](https://github.com/user-attachments/assets/2f2c0ff8-69ad-4062-bdc1-aad28cd0235f)
I also tried setting the scheduler priority to "high" (not recommended in the API docs) and that did lead to the UI freezing, and after ~1 minute, a BSOD and reboot due to a watchdog detecting the system becoming unresponsive. The setting of "Above Normal" does not appear to have any noticeable impact on UI performance, and token rates match when the ollama server is running from a terminal (and inherits the "Program" priority). I do see the CPU utilization drop down slightly over time, so it seems Windows does penalize CPU saturating background processes so for very large context size requests there may still be room to improve performance further.
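A minimal sketch of the technique (not the PR's actual diff) using `golang.org/x/sys/windows`:
```go
//go:build windows

package main

import (
	"fmt"

	"golang.org/x/sys/windows"
)

// Raise the current process to the "above normal" scheduler priority class
// so a child started as a background service is not throttled.
func main() {
	// CurrentProcess returns a pseudo-handle for the calling process.
	err := windows.SetPriorityClass(windows.CurrentProcess(), windows.ABOVE_NORMAL_PRIORITY_CLASS)
	if err != nil {
		fmt.Println("failed to set priority class:", err)
	}
}
```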
Reference:
- https://learn.microsoft.com/en-us/windows/win32/procthread/scheduling-priorities#priority-class | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6905/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6905/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5693 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5693/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5693/comments | https://api.github.com/repos/ollama/ollama/issues/5693/events | https://github.com/ollama/ollama/issues/5693 | 2,407,657,391 | I_kwDOJ0Z1Ps6Pge-v | 5,693 | Per-Model Concurrency | {
"login": "ProjectMoon",
"id": 183856,
"node_id": "MDQ6VXNlcjE4Mzg1Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/183856?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ProjectMoon",
"html_url": "https://github.com/ProjectMoon",
"followers_url": "https://api.github.com/users/ProjectMoon/followers",
"following_url": "https://api.github.com/users/ProjectMoon/following{/other_user}",
"gists_url": "https://api.github.com/users/ProjectMoon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ProjectMoon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ProjectMoon/subscriptions",
"organizations_url": "https://api.github.com/users/ProjectMoon/orgs",
"repos_url": "https://api.github.com/users/ProjectMoon/repos",
"events_url": "https://api.github.com/users/ProjectMoon/events{/privacy}",
"received_events_url": "https://api.github.com/users/ProjectMoon/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 3 | 2024-07-15T00:14:39 | 2024-09-17T01:44:17 | 2024-09-17T01:44:16 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I like the new concurrency features. I'm wondering if it would be possible to add a new Modelfile parameter to control parallel requests on a per-model basis, overriding `OLLAMA_NUM_PARALLEL` when set. The primary use case is to let small models, like embedding models, serve many quick requests at once, while larger models that take longer serve a smaller number of requests at a time. That would allow larger models to be loaded more fully onto the GPU, while the embedding models work much faster. A sketch of the idea is below.
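A hypothetical Modelfile illustrating the request — the `num_parallel` parameter does not exist today; the name and value are made up purely for illustration:
```
FROM nomic-embed-text
# hypothetical per-model override of OLLAMA_NUM_PARALLEL
PARAMETER num_parallel 20
```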
Embedding creation from OpenWebUI is much faster when hitting a bunch of documents (~45) with high parallelism (20 in the test). However, that high parallelism forces the LLM that will generate text to be loaded mostly into CPU memory, because it is also expected to serve 20 parallel requests (when in reality it will serve one). | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5693/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5693/timeline | null | not_planned | false |
https://api.github.com/repos/ollama/ollama/issues/4298 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4298/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4298/comments | https://api.github.com/repos/ollama/ollama/issues/4298/events | https://github.com/ollama/ollama/pull/4298 | 2,288,521,622 | PR_kwDOJ0Z1Ps5vCOc- | 4,298 | log clean up | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-05-09T21:53:11 | 2024-05-09T23:20:58 | 2024-05-09T23:20:57 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/4298",
"html_url": "https://github.com/ollama/ollama/pull/4298",
"diff_url": "https://github.com/ollama/ollama/pull/4298.diff",
"patch_url": "https://github.com/ollama/ollama/pull/4298.patch",
"merged_at": "2024-05-09T23:20:57"
} | any debug logs are available with `--verbose` which was previously a noop since it's only set if also compiled with `SERVER_VERBOSE` | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4298/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4298/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7341 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7341/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7341/comments | https://api.github.com/repos/ollama/ollama/issues/7341/events | https://github.com/ollama/ollama/issues/7341 | 2,611,011,341 | I_kwDOJ0Z1Ps6boN8N | 7,341 | Ollama forgets previous information in conversation if a prompt sent by the user is very large | {
"login": "robotom",
"id": 45123215,
"node_id": "MDQ6VXNlcjQ1MTIzMjE1",
"avatar_url": "https://avatars.githubusercontent.com/u/45123215?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/robotom",
"html_url": "https://github.com/robotom",
"followers_url": "https://api.github.com/users/robotom/followers",
"following_url": "https://api.github.com/users/robotom/following{/other_user}",
"gists_url": "https://api.github.com/users/robotom/gists{/gist_id}",
"starred_url": "https://api.github.com/users/robotom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/robotom/subscriptions",
"organizations_url": "https://api.github.com/users/robotom/orgs",
"repos_url": "https://api.github.com/users/robotom/repos",
"events_url": "https://api.github.com/users/robotom/events{/privacy}",
"received_events_url": "https://api.github.com/users/robotom/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 4 | 2024-10-24T09:33:58 | 2024-10-24T23:20:15 | 2024-10-24T16:02:31 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I am trying to figure out how to pass large bodies of text (or documents) to Ollama. I thought this was just an issue with my front end implementation, but it seems present in the command line interface as well.
I could have a basic conversation with the model about the weather (llama 3.2 in this case), and then if I give it something relatively long (though not unusually so), like a chapter of a book, it completely loses track of everything else that came earlier. In fact, it doesn't even remember the prompt I just sent, only its own response to my latest prompt.
Is there a fix for this? Some setting? Or is it a RAM/Hardware issue or something? Seems like I won't be able to pass any document to it at all if this is a limitation.
Cheers.
### Specs:
CPU: Intel 24 core
RAM: 64GB
GPU: 4060Ti (16GB)
Ollama version: 0.3.12 | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7341/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7341/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/4132 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/4132/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/4132/comments | https://api.github.com/repos/ollama/ollama/issues/4132/events | https://github.com/ollama/ollama/issues/4132 | 2,278,196,909 | I_kwDOJ0Z1Ps6Hyoat | 4,132 | model run command not rendered on mobile | {
"login": "olumolu",
"id": 162728301,
"node_id": "U_kgDOCbMJbQ",
"avatar_url": "https://avatars.githubusercontent.com/u/162728301?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/olumolu",
"html_url": "https://github.com/olumolu",
"followers_url": "https://api.github.com/users/olumolu/followers",
"following_url": "https://api.github.com/users/olumolu/following{/other_user}",
"gists_url": "https://api.github.com/users/olumolu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/olumolu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/olumolu/subscriptions",
"organizations_url": "https://api.github.com/users/olumolu/orgs",
"repos_url": "https://api.github.com/users/olumolu/repos",
"events_url": "https://api.github.com/users/olumolu/events{/privacy}",
"received_events_url": "https://api.github.com/users/olumolu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
},
{
"id": 6573197867,
"node_id": "LA_kwDOJ0Z1Ps8AAAABh8sKKw",
"url": "https://api.github.com/repos/ollama/ollama/labels/ollama.com",
"name": "ollama.com",
"color": "ffffff",
"default": false,
"description": ""
}
] | open | false | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 5 | 2024-05-03T18:19:17 | 2024-05-04T00:14:25 | null | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | https://ollama.com/library/phi3:3.8b
On the page, the installation command is not shown like it is for llama and gemma. | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/4132/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/4132/timeline | null | reopened | false |
https://api.github.com/repos/ollama/ollama/issues/6919 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6919/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6919/comments | https://api.github.com/repos/ollama/ollama/issues/6919/events | https://github.com/ollama/ollama/pull/6919 | 2,542,710,356 | PR_kwDOJ0Z1Ps58X6Zo | 6,919 | Add LLMChat to community apps | {
"login": "deep93333",
"id": 100652109,
"node_id": "U_kgDOBf_UTQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100652109?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/deep93333",
"html_url": "https://github.com/deep93333",
"followers_url": "https://api.github.com/users/deep93333/followers",
"following_url": "https://api.github.com/users/deep93333/following{/other_user}",
"gists_url": "https://api.github.com/users/deep93333/gists{/gist_id}",
"starred_url": "https://api.github.com/users/deep93333/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/deep93333/subscriptions",
"organizations_url": "https://api.github.com/users/deep93333/orgs",
"repos_url": "https://api.github.com/users/deep93333/repos",
"events_url": "https://api.github.com/users/deep93333/events{/privacy}",
"received_events_url": "https://api.github.com/users/deep93333/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-09-23T13:45:39 | 2024-09-24T00:49:47 | 2024-09-24T00:49:47 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6919",
"html_url": "https://github.com/ollama/ollama/pull/6919",
"diff_url": "https://github.com/ollama/ollama/pull/6919.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6919.patch",
"merged_at": "2024-09-24T00:49:47"
} | null | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6919/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6919/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/3599 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3599/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3599/comments | https://api.github.com/repos/ollama/ollama/issues/3599/events | https://github.com/ollama/ollama/pull/3599 | 2,238,044,574 | PR_kwDOJ0Z1Ps5sYLK- | 3,599 | examples: add more Go examples using the API | {
"login": "eliben",
"id": 1130906,
"node_id": "MDQ6VXNlcjExMzA5MDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1130906?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eliben",
"html_url": "https://github.com/eliben",
"followers_url": "https://api.github.com/users/eliben/followers",
"following_url": "https://api.github.com/users/eliben/following{/other_user}",
"gists_url": "https://api.github.com/users/eliben/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eliben/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eliben/subscriptions",
"organizations_url": "https://api.github.com/users/eliben/orgs",
"repos_url": "https://api.github.com/users/eliben/repos",
"events_url": "https://api.github.com/users/eliben/events{/privacy}",
"received_events_url": "https://api.github.com/users/eliben/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-04-11T15:44:09 | 2024-04-15T22:34:54 | 2024-04-15T22:34:54 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3599",
"html_url": "https://github.com/ollama/ollama/pull/3599",
"diff_url": "https://github.com/ollama/ollama/pull/3599.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3599.patch",
"merged_at": "2024-04-15T22:34:54"
} | Updates #2840
Followup on #2879
| {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3599/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3599/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/520 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/520/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/520/comments | https://api.github.com/repos/ollama/ollama/issues/520/events | https://github.com/ollama/ollama/pull/520 | 1,893,242,840 | PR_kwDOJ0Z1Ps5aKy9v | 520 | Fix ggml arm64 linux cuda build | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-09-12T20:49:57 | 2023-09-12T21:06:49 | 2023-09-12T21:06:48 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/520",
"html_url": "https://github.com/ollama/ollama/pull/520",
"diff_url": "https://github.com/ollama/ollama/pull/520.diff",
"patch_url": "https://github.com/ollama/ollama/pull/520.patch",
"merged_at": "2023-09-12T21:06:48"
} | Apply patch to support CUDA's half type for aarch64 | {
"login": "BruceMacD",
"id": 5853428,
"node_id": "MDQ6VXNlcjU4NTM0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceMacD",
"html_url": "https://github.com/BruceMacD",
"followers_url": "https://api.github.com/users/BruceMacD/followers",
"following_url": "https://api.github.com/users/BruceMacD/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions",
"organizations_url": "https://api.github.com/users/BruceMacD/orgs",
"repos_url": "https://api.github.com/users/BruceMacD/repos",
"events_url": "https://api.github.com/users/BruceMacD/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceMacD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/520/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/520/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6219 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6219/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6219/comments | https://api.github.com/repos/ollama/ollama/issues/6219/events | https://github.com/ollama/ollama/pull/6219 | 2,452,132,353 | PR_kwDOJ0Z1Ps53o5Ky | 6,219 | llm: reserve required number of slots for embeddings | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-08-07T02:50:14 | 2024-08-07T03:20:51 | 2024-08-07T03:20:49 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6219",
"html_url": "https://github.com/ollama/ollama/pull/6219",
"diff_url": "https://github.com/ollama/ollama/pull/6219.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6219.patch",
"merged_at": "2024-08-07T03:20:49"
} | Fixes https://github.com/ollama/ollama/issues/6217 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6219/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6219/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/5722 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5722/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5722/comments | https://api.github.com/repos/ollama/ollama/issues/5722/events | https://github.com/ollama/ollama/issues/5722 | 2,411,211,731 | I_kwDOJ0Z1Ps6PuCvT | 5,722 | Environment variable OLLAMA_NUM_PARALLEL is ignored (Linux) | {
"login": "boshk0",
"id": 75314475,
"node_id": "MDQ6VXNlcjc1MzE0NDc1",
"avatar_url": "https://avatars.githubusercontent.com/u/75314475?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/boshk0",
"html_url": "https://github.com/boshk0",
"followers_url": "https://api.github.com/users/boshk0/followers",
"following_url": "https://api.github.com/users/boshk0/following{/other_user}",
"gists_url": "https://api.github.com/users/boshk0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/boshk0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/boshk0/subscriptions",
"organizations_url": "https://api.github.com/users/boshk0/orgs",
"repos_url": "https://api.github.com/users/boshk0/repos",
"events_url": "https://api.github.com/users/boshk0/events{/privacy}",
"received_events_url": "https://api.github.com/users/boshk0/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 1 | 2024-07-16T13:53:31 | 2024-07-22T08:12:52 | 2024-07-22T08:11:59 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
After updating Ollama to the latest version (from 0.1.48), I noticed that the environment variable OLLAMA_NUM_PARALLEL is now ignored. Instead, the default value of 4 is used.
My startup script:
```
export OLLAMA_MAX_LOADED_MODELS:2
export OLLAMA_KEEP_ALIVE:60m
export OLLAMA_NUM_PARALLEL:10
/bin/ollama serve
```
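(For reference, `export NAME:value` is not a valid POSIX assignment — the shell rejects `OLLAMA_NUM_PARALLEL:10` as an identifier, so the variables are never actually exported and the server falls back to its defaults. A corrected sketch of the same script:)
```
export OLLAMA_MAX_LOADED_MODELS=2
export OLLAMA_KEEP_ALIVE=60m
export OLLAMA_NUM_PARALLEL=10
/bin/ollama serve
```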
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.2.5 | {
"login": "boshk0",
"id": 75314475,
"node_id": "MDQ6VXNlcjc1MzE0NDc1",
"avatar_url": "https://avatars.githubusercontent.com/u/75314475?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/boshk0",
"html_url": "https://github.com/boshk0",
"followers_url": "https://api.github.com/users/boshk0/followers",
"following_url": "https://api.github.com/users/boshk0/following{/other_user}",
"gists_url": "https://api.github.com/users/boshk0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/boshk0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/boshk0/subscriptions",
"organizations_url": "https://api.github.com/users/boshk0/orgs",
"repos_url": "https://api.github.com/users/boshk0/repos",
"events_url": "https://api.github.com/users/boshk0/events{/privacy}",
"received_events_url": "https://api.github.com/users/boshk0/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5722/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5722/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/5143 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5143/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5143/comments | https://api.github.com/repos/ollama/ollama/issues/5143/events | https://github.com/ollama/ollama/issues/5143 | 2,362,575,347 | I_kwDOJ0Z1Ps6M0gnz | 5,143 | AMD iGPU works in docker with override but not on host | {
"login": "smellouk",
"id": 13059906,
"node_id": "MDQ6VXNlcjEzMDU5OTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/13059906?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/smellouk",
"html_url": "https://github.com/smellouk",
"followers_url": "https://api.github.com/users/smellouk/followers",
"following_url": "https://api.github.com/users/smellouk/following{/other_user}",
"gists_url": "https://api.github.com/users/smellouk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/smellouk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/smellouk/subscriptions",
"organizations_url": "https://api.github.com/users/smellouk/orgs",
"repos_url": "https://api.github.com/users/smellouk/repos",
"events_url": "https://api.github.com/users/smellouk/events{/privacy}",
"received_events_url": "https://api.github.com/users/smellouk/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 6433346500,
"node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA",
"url": "https://api.github.com/repos/ollama/ollama/labels/amd",
"name": "amd",
"color": "000000",
"default": false,
"description": "Issues relating to AMD GPUs and ROCm"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 21 | 2024-06-19T14:50:14 | 2024-10-26T21:04:15 | 2024-10-26T21:04:15 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Ollama is failing to run on the GPU; instead it uses the CPU. If I force it using `HSA_OVERRIDE_GFX_VERSION=9.0.0`, then I get `Error: llama runner process has terminated: signal: aborted error:Could not initialize Tensile host: No devices found`.
## ENV:
I'm using Proxmox LXC with Device Passthrough.
## journalctl:
```
Jun 19 14:38:07 ai-llm systemd[1]: Stopping Ollama Service...
Jun 19 14:38:07 ai-llm systemd[1]: ollama.service: Deactivated successfully.
Jun 19 14:38:07 ai-llm systemd[1]: Stopped Ollama Service.
Jun 19 14:38:07 ai-llm systemd[1]: ollama.service: Consumed 7.818s CPU time.
Jun 19 14:38:07 ai-llm systemd[1]: Started Ollama Service.
Jun 19 14:38:07 ai-llm ollama[15932]: 2024/06/19 14:38:07 routes.go:1011: INFO server config env="map[OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.>
Jun 19 14:38:07 ai-llm ollama[15932]: time=2024-06-19T14:38:07.789Z level=INFO source=images.go:725 msg="total blobs: 5"
Jun 19 14:38:07 ai-llm ollama[15932]: time=2024-06-19T14:38:07.789Z level=INFO source=images.go:732 msg="total unused blobs removed: 0"
Jun 19 14:38:07 ai-llm ollama[15932]: time=2024-06-19T14:38:07.789Z level=INFO source=routes.go:1057 msg="Listening on 127.0.0.1:11434 (version 0.1.44)"
Jun 19 14:38:07 ai-llm ollama[15932]: time=2024-06-19T14:38:07.789Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama3886990766/runners
Jun 19 14:38:10 ai-llm ollama[15932]: time=2024-06-19T14:38:10.123Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu_avx cpu_avx2 cuda_v11 rocm_v60002 cpu]"
Jun 19 14:38:10 ai-llm ollama[15932]: time=2024-06-19T14:38:10.125Z level=WARN source=amd_linux.go:48 msg="ollama recommends running the https://www.amd.com/en/support/linux-dri>
Jun 19 14:38:10 ai-llm ollama[15932]: time=2024-06-19T14:38:10.126Z level=INFO source=amd_linux.go:304 msg="skipping rocm gfx compatibility check" HSA_OVERRIDE_GFX_VERSION=9.0.0
Jun 19 14:38:10 ai-llm ollama[15932]: time=2024-06-19T14:38:10.126Z level=INFO source=types.go:71 msg="inference compute" id=0 library=rocm compute=gfx90c driver=0.0 name=1002:1>
Jun 19 14:38:12 ai-llm ollama[15932]: [GIN] 2024/06/19 - 14:38:12 | 200 | 39.044µs | 127.0.0.1 | HEAD "/"
Jun 19 14:38:12 ai-llm ollama[15932]: [GIN] 2024/06/19 - 14:38:12 | 200 | 436.745µs | 127.0.0.1 | POST "/api/show"
Jun 19 14:38:12 ai-llm ollama[15932]: [GIN] 2024/06/19 - 14:38:12 | 200 | 278.824µs | 127.0.0.1 | POST "/api/show"
Jun 19 14:38:12 ai-llm ollama[15932]: time=2024-06-19T14:38:12.873Z level=WARN source=amd_linux.go:48 msg="ollama recommends running the https://www.amd.com/en/support/linux-dri>
Jun 19 14:38:12 ai-llm ollama[15932]: time=2024-06-19T14:38:12.874Z level=INFO source=amd_linux.go:304 msg="skipping rocm gfx compatibility check" HSA_OVERRIDE_GFX_VERSION=9.0.0
Jun 19 14:38:13 ai-llm ollama[15932]: time=2024-06-19T14:38:13.183Z level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=23 memory.available="8.0>
Jun 19 14:38:13 ai-llm ollama[15932]: time=2024-06-19T14:38:13.183Z level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=23 memory.available="8.0>
Jun 19 14:38:13 ai-llm ollama[15932]: time=2024-06-19T14:38:13.184Z level=INFO source=server.go:341 msg="starting llama server" cmd="/tmp/ollama3886990766/runners/rocm_v60002/ol>
Jun 19 14:38:13 ai-llm ollama[15932]: time=2024-06-19T14:38:13.184Z level=INFO source=sched.go:338 msg="loaded runners" count=1
Jun 19 14:38:13 ai-llm ollama[15932]: time=2024-06-19T14:38:13.184Z level=INFO source=server.go:529 msg="waiting for llama runner to start responding"
Jun 19 14:38:13 ai-llm ollama[15932]: time=2024-06-19T14:38:13.185Z level=INFO source=server.go:567 msg="waiting for server to become available" status="llm server error"
Jun 19 14:38:13 ai-llm ollama[15955]: INFO [main] build info | build=1 commit="5921b8f" tid="125536033915712" timestamp=1718807893
Jun 19 14:38:13 ai-llm ollama[15955]: INFO [main] system info | n_threads=8 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AV>
Jun 19 14:38:13 ai-llm ollama[15955]: INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="7" port="44261" tid="125536033915712" timestamp=1718807893
Jun 19 14:38:13 ai-llm ollama[15932]: llama_model_loader: loaded meta data with 23 key-value pairs and 201 tensors from /usr/share/ollama/.ollama/models/blobs/sha256-2af3b81862c>
Jun 19 14:38:13 ai-llm ollama[15932]: llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
Jun 19 14:38:13 ai-llm ollama[15932]: llama_model_loader: - kv 0: general.architecture str = llama
Jun 19 14:38:13 ai-llm ollama[15932]: llama_model_loader: - kv 1: general.name str = TinyLlama
Jun 19 14:38:13 ai-llm ollama[15932]: llama_model_loader: - kv 2: llama.context_length u32 = 2048
Jun 19 14:38:13 ai-llm ollama[15932]: llama_model_loader: - kv 3: llama.embedding_length u32 = 2048
Jun 19 14:38:13 ai-llm ollama[15932]: llama_model_loader: - kv 4: llama.block_count u32 = 22
Jun 19 14:38:13 ai-llm ollama[15932]: llama_model_loader: - kv 5: llama.feed_forward_length u32 = 5632
Jun 19 14:38:13 ai-llm ollama[15932]: llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 64
Jun 19 14:38:13 ai-llm ollama[15932]: llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
Jun 19 14:38:13 ai-llm ollama[15932]: llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 4
Jun 19 14:38:13 ai-llm ollama[15932]: llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
Jun 19 14:38:13 ai-llm ollama[15932]: llama_model_loader: - kv 10: llama.rope.freq_base f32 = 10000.000000
Jun 19 14:38:13 ai-llm ollama[15932]: llama_model_loader: - kv 11: general.file_type u32 = 2
Jun 19 14:38:13 ai-llm ollama[15932]: llama_model_loader: - kv 12: tokenizer.ggml.model str = llama
Jun 19 14:38:13 ai-llm ollama[15932]: llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
Jun 19 14:38:13 ai-llm ollama[15932]: llama_model_loader: - kv 14: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
Jun 19 14:38:13 ai-llm ollama[15932]: llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
Jun 19 14:38:13 ai-llm ollama[15932]: llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,61249] = ["▁ t", "e r", "i n", "▁ >
Jun 19 14:38:13 ai-llm ollama[15932]: llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 1
Jun 19 14:38:13 ai-llm ollama[15932]: llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 2
Jun 19 14:38:13 ai-llm ollama[15932]: llama_model_loader: - kv 19: tokenizer.ggml.unknown_token_id u32 = 0
Jun 19 14:38:13 ai-llm ollama[15932]: llama_model_loader: - kv 20: tokenizer.ggml.padding_token_id u32 = 2
Jun 19 14:38:13 ai-llm ollama[15932]: llama_model_loader: - kv 21: tokenizer.chat_template str = {% for message in messages %}\n{% if m...
Jun 19 14:38:13 ai-llm ollama[15932]: llama_model_loader: - kv 22: general.quantization_version u32 = 2
Jun 19 14:38:13 ai-llm ollama[15932]: llama_model_loader: - type f32: 45 tensors
Jun 19 14:38:13 ai-llm ollama[15932]: llama_model_loader: - type q4_0: 155 tensors
Jun 19 14:38:13 ai-llm ollama[15932]: llama_model_loader: - type q6_K: 1 tensors
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_vocab: special tokens cache size = 259
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_vocab: token to piece cache size = 0.3368 MB
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: format = GGUF V3 (latest)
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: arch = llama
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: vocab type = SPM
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: n_vocab = 32000
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: n_merges = 0
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: n_ctx_train = 2048
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: n_embd = 2048
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: n_head = 32
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: n_head_kv = 4
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: n_layer = 22
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: n_rot = 64
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: n_embd_head_k = 64
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: n_embd_head_v = 64
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: n_gqa = 8
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: n_embd_k_gqa = 256
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: n_embd_v_gqa = 256
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: f_norm_eps = 0.0e+00
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: f_norm_rms_eps = 1.0e-05
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: f_clamp_kqv = 0.0e+00
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: f_max_alibi_bias = 0.0e+00
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: f_logit_scale = 0.0e+00
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: n_ff = 5632
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: n_expert = 0
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: n_expert_used = 0
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: causal attn = 1
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: pooling type = 0
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: pooling type = 0
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: rope type = 0
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: rope scaling = linear
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: freq_base_train = 10000.0
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: freq_scale_train = 1
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: n_yarn_orig_ctx = 2048
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: rope_finetuned = unknown
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: ssm_d_conv = 0
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: ssm_d_inner = 0
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: freq_base_train = 10000.0
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: freq_scale_train = 1
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: n_yarn_orig_ctx = 2048
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: rope_finetuned = unknown
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: ssm_d_conv = 0
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: ssm_d_inner = 0
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: ssm_d_state = 0
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: ssm_dt_rank = 0
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: model type = 1B
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: model ftype = Q4_0
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: model params = 1.10 B
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: model size = 606.53 MiB (4.63 BPW)
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: general.name = TinyLlama
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: BOS token = 1 '<s>'
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: EOS token = 2 '</s>'
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: UNK token = 0 '<unk>'
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: PAD token = 2 '</s>'
Jun 19 14:38:13 ai-llm ollama[15932]: llm_load_print_meta: LF token = 13 '<0x0A>'
Jun 19 14:38:13 ai-llm ollama[15932]: rocBLAS error: Could not initialize Tensile host: No devices found
Jun 19 14:38:13 ai-llm ollama[15932]: time=2024-06-19T14:38:13.436Z level=ERROR source=sched.go:344 msg="error loading llama server" error="llama runner process has terminated: >
```
## rocm-smi
```
============================================ ROCm System Management Interface ============================================
====================================================== Concise Info ======================================================
Device Node IDs Temp Power Partitions SCLK MCLK Fan Perf PwrCap VRAM% GPU%
(DID, GUID) (Edge) (Socket) (Mem, Compute, ID)
==========================================================================================================================
0 1 0x164c, 28495 46.0°C 9.0W N/A, N/A, 0 None 1200Mhz 0% auto Unsupported 1% 0%
==========================================================================================================================
================================================== End of ROCm SMI Log ===================================================
```
## rocminfo
```
ROCk module is loaded
=====================
HSA System Attributes
=====================
Runtime Version: 1.13
Runtime Ext Version: 1.4
System Timestamp Freq.: 1000.000000MHz
Sig. Max Wait Duration: 18446744073709551615 (0xFFFFFFFFFFFFFFFF) (timestamp count)
Machine Model: LARGE
System Endianness: LITTLE
Mwaitx: DISABLED
DMAbuf Support: YES
==========
HSA Agents
==========
*******
Agent 1
*******
Name: AMD Ryzen 7 5700U with Radeon Graphics
Uuid: CPU-XX
Marketing Name: AMD Ryzen 7 5700U with Radeon Graphics
Vendor Name: CPU
Feature: None specified
Profile: FULL_PROFILE
Float Round Mode: NEAR
Max Queue Number: 0(0x0)
Queue Min Size: 0(0x0)
Queue Max Size: 0(0x0)
Queue Type: MULTI
Node: 0
Device Type: CPU
Cache Info:
L1: 32768(0x8000) KB
Chip ID: 0(0x0)
ASIC Revision: 0(0x0)
Cacheline Size: 64(0x40)
Max Clock Freq. (MHz): 4372
BDFID: 0
Internal Node ID: 0
Compute Unit: 16
SIMDs per CU: 0
Shader Engines: 0
Shader Arrs. per Eng.: 0
WatchPts on Addr. Ranges:1
Features: None
Pool Info:
Pool 1
Segment: GLOBAL; FLAGS: FINE GRAINED
Size: 24508068(0x175f6a4) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Recommended Granule:4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
Pool 2
Segment: GLOBAL; FLAGS: KERNARG, FINE GRAINED
Size: 24508068(0x175f6a4) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Recommended Granule:4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
Pool 3
Segment: GLOBAL; FLAGS: COARSE GRAINED
Size: 24508068(0x175f6a4) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Recommended Granule:4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
ISA Info:
*******
Agent 2
*******
Name: gfx90c
Uuid: GPU-XX
Marketing Name: AMD Radeon Graphics
Vendor Name: AMD
Feature: KERNEL_DISPATCH
Profile: BASE_PROFILE
Float Round Mode: NEAR
Max Queue Number: 128(0x80)
Queue Min Size: 64(0x40)
Queue Max Size: 131072(0x20000)
Queue Type: MULTI
Node: 1
Device Type: GPU
Cache Info:
L1: 16(0x10) KB
L2: 1024(0x400) KB
Chip ID: 5708(0x164c)
ASIC Revision: 0(0x0)
Cacheline Size: 64(0x40)
Max Clock Freq. (MHz): 1900
BDFID: 1024
Internal Node ID: 1
Compute Unit: 8
SIMDs per CU: 4
Shader Engines: 1
Shader Arrs. per Eng.: 1
WatchPts on Addr. Ranges:4
Coherent Host Access: FALSE
Features: KERNEL_DISPATCH
Fast F16 Operation: TRUE
Wavefront Size: 64(0x40)
Workgroup Max Size: 1024(0x400)
Workgroup Max Size per Dimension:
x 1024(0x400)
y 1024(0x400)
z 1024(0x400)
Max Waves Per CU: 40(0x28)
Max Work-item Per CU: 2560(0xa00)
Grid Max Size: 4294967295(0xffffffff)
Grid Max Size per Dimension:
x 4294967295(0xffffffff)
y 4294967295(0xffffffff)
z 4294967295(0xffffffff)
Max fbarriers/Workgrp: 32
Packet Processor uCode:: 471
SDMA engine uCode:: 40
IOMMU Support:: None
Pool Info:
Pool 1
Segment: GLOBAL; FLAGS: COARSE GRAINED
Size: 8388608(0x800000) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Recommended Granule:2048KB
Alloc Alignment: 4KB
Accessible by all: FALSE
Pool 2
Segment: GLOBAL; FLAGS: EXTENDED FINE GRAINED
Size: 8388608(0x800000) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Recommended Granule:2048KB
Alloc Alignment: 4KB
Accessible by all: FALSE
Pool 3
Segment: GROUP
Size: 64(0x40) KB
Allocatable: FALSE
Alloc Granule: 0KB
Alloc Recommended Granule:0KB
Alloc Alignment: 0KB
Accessible by all: FALSE
ISA Info:
ISA 1
Name: amdgcn-amd-amdhsa--gfx90c:xnack-
Machine Models: HSA_MACHINE_MODEL_LARGE
Profiles: HSA_PROFILE_BASE
Default Rounding Mode: NEAR
Default Rounding Mode: NEAR
Fast f16: TRUE
Workgroup Max Size: 1024(0x400)
Workgroup Max Size per Dimension:
x 1024(0x400)
y 1024(0x400)
z 1024(0x400)
Grid Max Size: 4294967295(0xffffffff)
Grid Max Size per Dimension:
x 4294967295(0xffffffff)
y 4294967295(0xffffffff)
z 4294967295(0xffffffff)
FBarrier Max Size: 32
*** Done ***
```
## rocm version
```
Package: rocm-libs
Version: 6.1.1.60101-90~22.04
Priority: optional
Section: devel
Maintainer: ROCm Dev Support <[email protected]>
Installed-Size: 13.3 kB
Depends: hipblas (= 2.1.0.60101-90~22.04), hipblaslt (= 0.7.0.60101-90~22.04), hipfft (= 1.0.14.60101-90~22.04), hipsolver (= 2.1.1.60101-90~22.04), hipsparse (= 3.0.1.60101-90~22.04), hiptensor (= 1.2.0.60101-90~22.04), miopen-hip (= 3.1.0.60101-90~22.04), half (= 1.12.0.60101-90~22.04), rccl (= 2.18.6.60101-90~22.04), rocalution (= 3.1.1.60101-90~22.04), rocblas (= 4.1.0.60101-90~22.04), rocfft (= 1.0.27.60101-90~22.04), rocrand (= 3.0.1.60101-90~22.04), hiprand (= 2.10.16.60101-90~22.04), rocsolver (= 3.25.0.60101-90~22.04), rocsparse (= 3.1.2.60101-90~22.04), rocm-core (= 6.1.1.60101-90~22.04), hipsparselt (= 0.1.0.60101-90~22.04), composablekernel-dev (= 1.1.0.60101-90~22.04), hipblas-dev (= 2.1.0.60101-90~22.04), hipblaslt-dev (= 0.7.0.60101-90~22.04), hipcub-dev (= 3.1.0.60101-90~22.04), hipfft-dev (= 1.0.14.60101-90~22.04), hipsolver-dev (= 2.1.1.60101-90~22.04), hipsparse-dev (= 3.0.1.60101-90~22.04), hiptensor-dev (= 1.2.0.60101-90~22.04), miopen-hip-dev (= 3.1.0.60101-90~22.04), rccl-dev (= 2.18.6.60101-90~22.04), rocalution-dev (= 3.1.1.60101-90~22.04), rocblas-dev (= 4.1.0.60101-90~22.04), rocfft-dev (= 1.0.27.60101-90~22.04), rocprim-dev (= 3.1.0.60101-90~22.04), rocrand-dev (= 3.0.1.60101-90~22.04), hiprand-dev (= 2.10.16.60101-90~22.04), rocsolver-dev (= 3.25.0.60101-90~22.04), rocsparse-dev (= 3.1.2.60101-90~22.04), rocthrust-dev (= 3.0.1.60101-90~22.04), rocwmma-dev (= 1.4.0.60101-90~22.04), hipsparselt-dev (= 0.1.0.60101-90~22.04)
Homepage: https://github.com/RadeonOpenCompute/ROCm
Download-Size: 1060 B
APT-Sources: https://repo.radeon.com/rocm/apt/6.1.1 jammy/main amd64 Packages
Description: Radeon Open Compute (ROCm) Runtime software stack
```
## Troubleshooting?
1. I tried adding `HSA_OVERRIDE_GFX_VERSION=9.0.0` + `HIP_VISIBLE_DEVICES=0` to the service file, but it didn't change anything.
2. I tried running ollama with docker in the Proxmox LXC, with device passthrough, using this command:
```
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama --device=/dev/kfd --device=/dev/dri/renderD128 --env HSA_OVERRIDE_GFX_VERSION=9.0.0 --env HSA_ENABLE_SDMA=0 ollama/ollama:rocm
```
everything works as expected.
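Since the log shows the override being applied but rocBLAS still finds no devices on the host, it may just be a device-permission gap for the service user. A hedged sketch of the usual check (assuming the `ollama` user created by the install script, and the same device nodes passed to docker above):
```
# make sure the service user can open the GPU device nodes
sudo usermod -a -G render,video ollama
ls -l /dev/kfd /dev/dri/renderD128
sudo systemctl restart ollama
```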
### OS
Linux
### GPU
AMD
### CPU
AMD
### Ollama version
0.1.44 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5143/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5143/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1122 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1122/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1122/comments | https://api.github.com/repos/ollama/ollama/issues/1122/events | https://github.com/ollama/ollama/issues/1122 | 1,992,321,364 | I_kwDOJ0Z1Ps52wGlU | 1,122 | c | {
"login": "fbrad",
"id": 1182262,
"node_id": "MDQ6VXNlcjExODIyNjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1182262?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fbrad",
"html_url": "https://github.com/fbrad",
"followers_url": "https://api.github.com/users/fbrad/followers",
"following_url": "https://api.github.com/users/fbrad/following{/other_user}",
"gists_url": "https://api.github.com/users/fbrad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fbrad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fbrad/subscriptions",
"organizations_url": "https://api.github.com/users/fbrad/orgs",
"repos_url": "https://api.github.com/users/fbrad/repos",
"events_url": "https://api.github.com/users/fbrad/events{/privacy}",
"received_events_url": "https://api.github.com/users/fbrad/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-11-14T09:17:39 | 2023-11-14T09:19:43 | 2023-11-14T09:19:43 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | null | {
"login": "fbrad",
"id": 1182262,
"node_id": "MDQ6VXNlcjExODIyNjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1182262?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fbrad",
"html_url": "https://github.com/fbrad",
"followers_url": "https://api.github.com/users/fbrad/followers",
"following_url": "https://api.github.com/users/fbrad/following{/other_user}",
"gists_url": "https://api.github.com/users/fbrad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fbrad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fbrad/subscriptions",
"organizations_url": "https://api.github.com/users/fbrad/orgs",
"repos_url": "https://api.github.com/users/fbrad/repos",
"events_url": "https://api.github.com/users/fbrad/events{/privacy}",
"received_events_url": "https://api.github.com/users/fbrad/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1122/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1122/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3739 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3739/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3739/comments | https://api.github.com/repos/ollama/ollama/issues/3739/events | https://github.com/ollama/ollama/pull/3739 | 2,251,838,454 | PR_kwDOJ0Z1Ps5tHLJa | 3,739 | draft: push | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-04-19T00:31:36 | 2024-06-05T20:12:07 | 2024-05-16T01:08:21 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | true | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3739",
"html_url": "https://github.com/ollama/ollama/pull/3739",
"diff_url": "https://github.com/ollama/ollama/pull/3739.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3739.patch",
"merged_at": null
} | This (experimental) change updates the streaming mechanism for push to use a rangefunc instead of a callback.
There is no non-streaming path.
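A minimal sketch of the shape difference, with an assumed `ProgressUpdate` type standing in for the real progress payload (this is not the actual diff):
```go
package main

import "fmt"

// ProgressUpdate stands in for the real push progress payload (name assumed).
type ProgressUpdate struct{ Status string }

// Callback style: the producer invokes fn for every update it generates.
func pushWithCallback(fn func(ProgressUpdate)) {
	for _, s := range []string{"preparing", "uploading", "success"} {
		fn(ProgressUpdate{Status: s})
	}
}

// Rangefunc style (Go 1.23+): the producer is an iterator the caller
// ranges over; returning false from yield stops the stream early.
func pushUpdates() func(yield func(ProgressUpdate) bool) {
	return func(yield func(ProgressUpdate) bool) {
		for _, s := range []string{"preparing", "uploading", "success"} {
			if !yield(ProgressUpdate{Status: s}) {
				return
			}
		}
	}
}

func main() {
	pushWithCallback(func(p ProgressUpdate) { fmt.Println("cb:", p.Status) })
	for p := range pushUpdates() {
		fmt.Println("iter:", p.Status)
	}
}
```
The rangefunc form lets the consumer break out of the loop to stop the stream, which the callback form cannot express as directly.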
| {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3739/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3739/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1726 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1726/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1726/comments | https://api.github.com/repos/ollama/ollama/issues/1726/events | https://github.com/ollama/ollama/issues/1726 | 2,057,065,772 | I_kwDOJ0Z1Ps56nFUs | 1,726 | Add GPT-Pilot to ollama docs under libraries | {
"login": "oliverbob",
"id": 23272429,
"node_id": "MDQ6VXNlcjIzMjcyNDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/23272429?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oliverbob",
"html_url": "https://github.com/oliverbob",
"followers_url": "https://api.github.com/users/oliverbob/followers",
"following_url": "https://api.github.com/users/oliverbob/following{/other_user}",
"gists_url": "https://api.github.com/users/oliverbob/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oliverbob/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oliverbob/subscriptions",
"organizations_url": "https://api.github.com/users/oliverbob/orgs",
"repos_url": "https://api.github.com/users/oliverbob/repos",
"events_url": "https://api.github.com/users/oliverbob/events{/privacy}",
"received_events_url": "https://api.github.com/users/oliverbob/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396191,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aXw",
"url": "https://api.github.com/repos/ollama/ollama/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | 3 | 2023-12-27T08:21:41 | 2024-05-06T23:58:45 | 2024-05-06T23:58:45 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | The repo is available [here](https://github.com/Pythagora-io/gpt-pilot/), which supports Ollama according to this wiki:
[GPT Pilot](https://github.com/Pythagora-io/gpt-pilot/wiki/Using-GPT%E2%80%90Pilot-with-Local-LLMs).
This will allow Ollama models to do full stack development for us. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1726/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1726/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/8321 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8321/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8321/comments | https://api.github.com/repos/ollama/ollama/issues/8321/events | https://github.com/ollama/ollama/pull/8321 | 2,770,996,423 | PR_kwDOJ0Z1Ps6G1zVc | 8,321 | feat(openai): add optional authentication for OpenAI compatible API | {
"login": "myaniu",
"id": 1250454,
"node_id": "MDQ6VXNlcjEyNTA0NTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1250454?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/myaniu",
"html_url": "https://github.com/myaniu",
"followers_url": "https://api.github.com/users/myaniu/followers",
"following_url": "https://api.github.com/users/myaniu/following{/other_user}",
"gists_url": "https://api.github.com/users/myaniu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/myaniu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/myaniu/subscriptions",
"organizations_url": "https://api.github.com/users/myaniu/orgs",
"repos_url": "https://api.github.com/users/myaniu/repos",
"events_url": "https://api.github.com/users/myaniu/events{/privacy}",
"received_events_url": "https://api.github.com/users/myaniu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2025-01-06T16:21:54 | 2025-01-25T14:35:08 | 2025-01-12T21:31:40 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/8321",
"html_url": "https://github.com/ollama/ollama/pull/8321",
"diff_url": "https://github.com/ollama/ollama/pull/8321.diff",
"patch_url": "https://github.com/ollama/ollama/pull/8321.patch",
"merged_at": null
} | Add authentication middleware that checks the API key when the OPENAI_API_KEY environment variable is set. When OPENAI_API_KEY is not set, the current behavior of allowing all requests is maintained. When it is set, clients must provide a matching Bearer token in the Authorization header. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8321/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8321/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/8333 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8333/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8333/comments | https://api.github.com/repos/ollama/ollama/issues/8333/events | https://github.com/ollama/ollama/issues/8333 | 2,772,247,248 | I_kwDOJ0Z1Ps6lPSLQ | 8,333 | GOT-OCR and voice model support | {
"login": "Elton-Yang",
"id": 93847642,
"node_id": "U_kgDOBZgAWg",
"avatar_url": "https://avatars.githubusercontent.com/u/93847642?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Elton-Yang",
"html_url": "https://github.com/Elton-Yang",
"followers_url": "https://api.github.com/users/Elton-Yang/followers",
"following_url": "https://api.github.com/users/Elton-Yang/following{/other_user}",
"gists_url": "https://api.github.com/users/Elton-Yang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Elton-Yang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Elton-Yang/subscriptions",
"organizations_url": "https://api.github.com/users/Elton-Yang/orgs",
"repos_url": "https://api.github.com/users/Elton-Yang/repos",
"events_url": "https://api.github.com/users/Elton-Yang/events{/privacy}",
"received_events_url": "https://api.github.com/users/Elton-Yang/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5789807732,
"node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA",
"url": "https://api.github.com/repos/ollama/ollama/labels/model%20request",
"name": "model request",
"color": "1E5DE6",
"default": false,
"description": "Model requests"
}
] | closed | false | null | [] | null | 1 | 2025-01-07T08:50:19 | 2025-01-16T00:02:20 | 2025-01-16T00:02:20 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Could the GOT-OCR image model be supported by Ollama? I like the way ollama deploys and runs models very much, and I really hope ollama can support more types of models, including image and audio models, specifically two models I use a lot: GOT-OCR2.0 (image OCR) and SenseVoiceSmall (audio STT). Thanks to all the developers for your hard work! I really appreciate this project. | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8333/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8333/timeline | null | duplicate | false |
https://api.github.com/repos/ollama/ollama/issues/3534 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3534/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3534/comments | https://api.github.com/repos/ollama/ollama/issues/3534/events | https://github.com/ollama/ollama/issues/3534 | 2,230,469,158 | I_kwDOJ0Z1Ps6E8kIm | 3,534 | When I ask a longer question, the output is arbitrary | {
"login": "0sengseng0",
"id": 73268510,
"node_id": "MDQ6VXNlcjczMjY4NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/73268510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/0sengseng0",
"html_url": "https://github.com/0sengseng0",
"followers_url": "https://api.github.com/users/0sengseng0/followers",
"following_url": "https://api.github.com/users/0sengseng0/following{/other_user}",
"gists_url": "https://api.github.com/users/0sengseng0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/0sengseng0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/0sengseng0/subscriptions",
"organizations_url": "https://api.github.com/users/0sengseng0/orgs",
"repos_url": "https://api.github.com/users/0sengseng0/repos",
"events_url": "https://api.github.com/users/0sengseng0/events{/privacy}",
"received_events_url": "https://api.github.com/users/0sengseng0/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 2 | 2024-04-08T07:29:03 | 2024-04-19T15:41:10 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When I ask a longer question (about 15K or so), the model outputs a combination of numbers and symbols, even though I've set what I thought were enough tokens: /set parameter num_ctx 4096. Why?
![image](https://github.com/ollama/ollama/assets/73268510/ef6853ad-465d-4879-9bf0-c1045d603a5d)
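A note on reproducing this: an input of ~15K likely exceeds a 4096-token window, so part of the prompt would be truncated. Whether or not that explains the garbled output, a larger context can be baked into a model with a Modelfile (the base model tag here is an assumption):
```
FROM qwen:14b
PARAMETER num_ctx 16384
```
then `ollama create qwen14-16k -f Modelfile` and run the new tag.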
### What did you expect to see?
The model outputs normally
### Steps to reproduce
ollama run qwen14:latest
### Are there any recent changes that introduced the issue?
_No response_
### OS
Linux
### Architecture
x86
### Platform
_No response_
### Ollama version
0.1.27
### GPU
Nvidia
### GPU info
4090 * 2
### CPU
_No response_
### Other software
_No response_ | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3534/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3534/timeline | null | null | false |
https://api.github.com/repos/ollama/ollama/issues/5012 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/5012/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/5012/comments | https://api.github.com/repos/ollama/ollama/issues/5012/events | https://github.com/ollama/ollama/issues/5012 | 2,350,047,630 | I_kwDOJ0Z1Ps6MEuGO | 5,012 | Seeded API request is returning inconsistent results | {
"login": "ScreamingHawk",
"id": 1460552,
"node_id": "MDQ6VXNlcjE0NjA1NTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1460552?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ScreamingHawk",
"html_url": "https://github.com/ScreamingHawk",
"followers_url": "https://api.github.com/users/ScreamingHawk/followers",
"following_url": "https://api.github.com/users/ScreamingHawk/following{/other_user}",
"gists_url": "https://api.github.com/users/ScreamingHawk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ScreamingHawk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ScreamingHawk/subscriptions",
"organizations_url": "https://api.github.com/users/ScreamingHawk/orgs",
"repos_url": "https://api.github.com/users/ScreamingHawk/repos",
"events_url": "https://api.github.com/users/ScreamingHawk/events{/privacy}",
"received_events_url": "https://api.github.com/users/ScreamingHawk/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2024-06-13T03:10:50 | 2024-06-14T16:21:47 | 2024-06-14T16:21:47 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
When using `seed: 42069` and `temperature: 0.0`, I get consistent results only for the first ~126 characters. When using a seed, I would expect a deterministic result for the entire interaction.
```py
from ollama import Client, Options

# Client pointed at a locally running Ollama server
ollama_client = Client(host="http://localhost:11434")

# Fixed seed and zero temperature: output should be fully deterministic
opts = Options()
opts["seed"] = 42069
opts["temperature"] = 0.0

messages = [
    {"role": "user", "content": "Write a description of a cat"},
]

# Send the identical request twice with the same options
res1 = ollama_client.chat(
    model="orca-mini",
    stream=False,
    messages=messages,
    options=opts
)
res2 = ollama_client.chat(
    model="orca-mini",
    stream=False,
    messages=messages,
    options=opts
)

# Fails: the responses agree only for roughly the first 126 characters
assert res1["message"]["content"] == res2["message"]["content"]
```
### OS
Windows
### GPU
AMD
### CPU
Intel
### Ollama version
0.1.42 | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/5012/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/5012/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/1770 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1770/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1770/comments | https://api.github.com/repos/ollama/ollama/issues/1770/events | https://github.com/ollama/ollama/issues/1770 | 2,064,513,686 | I_kwDOJ0Z1Ps57DfqW | 1,770 | Pulling manifest error | {
"login": "buczekkruczek",
"id": 155578015,
"node_id": "U_kgDOCUXunw",
"avatar_url": "https://avatars.githubusercontent.com/u/155578015?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/buczekkruczek",
"html_url": "https://github.com/buczekkruczek",
"followers_url": "https://api.github.com/users/buczekkruczek/followers",
"following_url": "https://api.github.com/users/buczekkruczek/following{/other_user}",
"gists_url": "https://api.github.com/users/buczekkruczek/gists{/gist_id}",
"starred_url": "https://api.github.com/users/buczekkruczek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/buczekkruczek/subscriptions",
"organizations_url": "https://api.github.com/users/buczekkruczek/orgs",
"repos_url": "https://api.github.com/users/buczekkruczek/repos",
"events_url": "https://api.github.com/users/buczekkruczek/events{/privacy}",
"received_events_url": "https://api.github.com/users/buczekkruczek/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 9 | 2024-01-03T19:05:12 | 2025-01-01T10:29:12 | 2024-01-04T16:31:09 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hello Everyone,
I have a problem with pulling a manifest. While running "ollama run dolphin-mixtral:latest" for the first time I got "Error: max retries exceeded: unexpected EOF", and now I am unable to restart the download, getting "Error: pull model manifest: file does not exist".
I am grateful for any help or advice on what to do next or how to deal with this. | {
"login": "buczekkruczek",
"id": 155578015,
"node_id": "U_kgDOCUXunw",
"avatar_url": "https://avatars.githubusercontent.com/u/155578015?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/buczekkruczek",
"html_url": "https://github.com/buczekkruczek",
"followers_url": "https://api.github.com/users/buczekkruczek/followers",
"following_url": "https://api.github.com/users/buczekkruczek/following{/other_user}",
"gists_url": "https://api.github.com/users/buczekkruczek/gists{/gist_id}",
"starred_url": "https://api.github.com/users/buczekkruczek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/buczekkruczek/subscriptions",
"organizations_url": "https://api.github.com/users/buczekkruczek/orgs",
"repos_url": "https://api.github.com/users/buczekkruczek/repos",
"events_url": "https://api.github.com/users/buczekkruczek/events{/privacy}",
"received_events_url": "https://api.github.com/users/buczekkruczek/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1770/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1770/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/6145 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6145/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6145/comments | https://api.github.com/repos/ollama/ollama/issues/6145/events | https://github.com/ollama/ollama/pull/6145 | 2,445,778,816 | PR_kwDOJ0Z1Ps53TQpI | 6,145 | Fix crash on startup when trying to clean up unused files (#5840) | {
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-08-02T21:23:34 | 2024-08-07T18:24:17 | 2024-08-07T18:24:15 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/6145",
"html_url": "https://github.com/ollama/ollama/pull/6145",
"diff_url": "https://github.com/ollama/ollama/pull/6145.diff",
"patch_url": "https://github.com/ollama/ollama/pull/6145.patch",
"merged_at": "2024-08-07T18:24:15"
} | Improve validation and error handling of manifest files in the event of corruption. This prevents nil pointer errors and possible unintended deletion of data. | {
"login": "jessegross",
"id": 6468499,
"node_id": "MDQ6VXNlcjY0Njg0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jessegross",
"html_url": "https://github.com/jessegross",
"followers_url": "https://api.github.com/users/jessegross/followers",
"following_url": "https://api.github.com/users/jessegross/following{/other_user}",
"gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jessegross/subscriptions",
"organizations_url": "https://api.github.com/users/jessegross/orgs",
"repos_url": "https://api.github.com/users/jessegross/repos",
"events_url": "https://api.github.com/users/jessegross/events{/privacy}",
"received_events_url": "https://api.github.com/users/jessegross/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6145/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6145/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/7594 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/7594/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/7594/comments | https://api.github.com/repos/ollama/ollama/issues/7594/events | https://github.com/ollama/ollama/pull/7594 | 2,646,964,227 | PR_kwDOJ0Z1Ps6BbAdK | 7,594 | Invalid OLLAMA_LLM_LIBRARY Error | {
"login": "jk-vtp-one",
"id": 173852691,
"node_id": "U_kgDOClzIEw",
"avatar_url": "https://avatars.githubusercontent.com/u/173852691?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jk-vtp-one",
"html_url": "https://github.com/jk-vtp-one",
"followers_url": "https://api.github.com/users/jk-vtp-one/followers",
"following_url": "https://api.github.com/users/jk-vtp-one/following{/other_user}",
"gists_url": "https://api.github.com/users/jk-vtp-one/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jk-vtp-one/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jk-vtp-one/subscriptions",
"organizations_url": "https://api.github.com/users/jk-vtp-one/orgs",
"repos_url": "https://api.github.com/users/jk-vtp-one/repos",
"events_url": "https://api.github.com/users/jk-vtp-one/events{/privacy}",
"received_events_url": "https://api.github.com/users/jk-vtp-one/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2024-11-10T07:53:47 | 2024-11-10T08:18:30 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/7594",
"html_url": "https://github.com/ollama/ollama/pull/7594",
"diff_url": "https://github.com/ollama/ollama/pull/7594.diff",
"patch_url": "https://github.com/ollama/ollama/pull/7594.patch",
"merged_at": null
} | If the OLLAMA_LLM_LIBRARY environment variable names an invalid target, the server currently logs a message about the problem and continues to work using another available runner. This change causes it to raise an error instead. If you are using the OLLAMA_LLM_LIBRARY variable, you intend for that runner to be used, and if it isn't available the server should error, not fall back to something else.
The specific use case comes from testing the ollama builder with jetson-containers. Previous versions of the ollama build process included the env variable OLLAMA_SKIP_CPU_GENERATE, which allowed us to generate only the CUDA runner. If the binary build completed but the cuda runner failed to build, our test would fail because there were no available runners. Now, trying to use OLLAMA_LLM_LIBRARY with the desired but failed-to-build cuda_vXX runner, it falls back to the CPU runner and the test doesn't fail.
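Sketched out, the proposed behavior amounts to something like the following (function and variable names are illustrative, not the actual server code):
```go
package main

import (
	"fmt"
	"os"
)

// defaultRunner picks any available runner when no override is requested.
func defaultRunner(available map[string]bool) string {
	for name := range available {
		return name
	}
	return ""
}

// pickRunner hard-errors on an invalid OLLAMA_LLM_LIBRARY value instead
// of silently falling back to another runner.
func pickRunner(requested string, available map[string]bool) (string, error) {
	if requested == "" {
		return defaultRunner(available), nil
	}
	if !available[requested] {
		return "", fmt.Errorf("OLLAMA_LLM_LIBRARY %q is not an available runner", requested)
	}
	return requested, nil
}

func main() {
	available := map[string]bool{"cpu": true} // e.g. cuda_vXX failed to build
	runner, err := pickRunner(os.Getenv("OLLAMA_LLM_LIBRARY"), available)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("using runner:", runner)
}
```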
Re-implementing the build variables OLLAMA_SKIP_CPU_GENERATE and OLLAMA_SKIP_CUDA_GENERATE would work: https://github.com/ollama/ollama/pull/7499; but that doesn't change the fact that targeting an invalid OLLAMA_LLM_LIBRARY should error. | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/7594/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/7594/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/6914 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/6914/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/6914/comments | https://api.github.com/repos/ollama/ollama/issues/6914/events | https://github.com/ollama/ollama/issues/6914 | 2,542,199,583 | I_kwDOJ0Z1Ps6XhuMf | 6,914 | Work done by CPU instead of GPU | {
"login": "Iliceth",
"id": 68381834,
"node_id": "MDQ6VXNlcjY4MzgxODM0",
"avatar_url": "https://avatars.githubusercontent.com/u/68381834?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Iliceth",
"html_url": "https://github.com/Iliceth",
"followers_url": "https://api.github.com/users/Iliceth/followers",
"following_url": "https://api.github.com/users/Iliceth/following{/other_user}",
"gists_url": "https://api.github.com/users/Iliceth/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Iliceth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Iliceth/subscriptions",
"organizations_url": "https://api.github.com/users/Iliceth/orgs",
"repos_url": "https://api.github.com/users/Iliceth/repos",
"events_url": "https://api.github.com/users/Iliceth/events{/privacy}",
"received_events_url": "https://api.github.com/users/Iliceth/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396220,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA",
"url": "https://api.github.com/repos/ollama/ollama/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "General questions"
}
] | closed | false | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | 7 | 2024-09-23T10:23:42 | 2024-09-25T08:55:58 | 2024-09-25T00:41:46 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
I'm aware this might not be a bug, but I'm trying to understand whether I can and/or should change something. When I try Reflection 70b, I see:
CPU: 8 cores 100% utilization
RAM: 23 of 32 GB in use
GPU: on average 5% utilization
VRAM: 23 of 24 GB in use
I assume the model does not fit in VRAM and is therefore spread across VRAM and RAM; I'm fine with that. But the fact that the CPU seems to be doing almost all of the lifting seems odd to me, although it might be normal.
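For anyone checking a similar split: `ollama ps` reports how a loaded model is divided between CPU and GPU; output along these lines (values illustrative):
```
NAME              ID              SIZE     PROCESSOR          UNTIL
reflection:70b    0123456789ab    42 GB    45%/55% CPU/GPU    4 minutes from now
```
A 70B model at a typical 4-bit quantization is on the order of 40 GB, so with 24 GB of VRAM a large share of the layers running on the CPU is expected.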
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.11 | {
"login": "dhiltgen",
"id": 4033016,
"node_id": "MDQ6VXNlcjQwMzMwMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhiltgen",
"html_url": "https://github.com/dhiltgen",
"followers_url": "https://api.github.com/users/dhiltgen/followers",
"following_url": "https://api.github.com/users/dhiltgen/following{/other_user}",
"gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions",
"organizations_url": "https://api.github.com/users/dhiltgen/orgs",
"repos_url": "https://api.github.com/users/dhiltgen/repos",
"events_url": "https://api.github.com/users/dhiltgen/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhiltgen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/6914/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/6914/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/2739 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2739/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2739/comments | https://api.github.com/repos/ollama/ollama/issues/2739/events | https://github.com/ollama/ollama/pull/2739 | 2,152,603,520 | PR_kwDOJ0Z1Ps5n1WXz | 2,739 | Require no extra disk space for windows installation | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-02-25T05:08:40 | 2024-02-25T05:20:36 | 2024-02-25T05:20:35 | MEMBER | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/2739",
"html_url": "https://github.com/ollama/ollama/pull/2739",
"diff_url": "https://github.com/ollama/ollama/pull/2739.diff",
"patch_url": "https://github.com/ollama/ollama/pull/2739.patch",
"merged_at": "2024-02-25T05:20:35"
} | Fixes https://github.com/ollama/ollama/issues/2734
Since quite a few folks won't install a model on `C:\`, and will instead use `OLLAMA_MODELS` to change where models are stored, lower the check for minimum disk space. | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2739/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2739/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/2902 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/2902/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/2902/comments | https://api.github.com/repos/ollama/ollama/issues/2902/events | https://github.com/ollama/ollama/issues/2902 | 2,165,548,696 | I_kwDOJ0Z1Ps6BE6aY | 2,902 | feature request: support for conditional probability of next word (logprobs) | {
"login": "iurimatias",
"id": 176720,
"node_id": "MDQ6VXNlcjE3NjcyMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/176720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iurimatias",
"html_url": "https://github.com/iurimatias",
"followers_url": "https://api.github.com/users/iurimatias/followers",
"following_url": "https://api.github.com/users/iurimatias/following{/other_user}",
"gists_url": "https://api.github.com/users/iurimatias/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iurimatias/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iurimatias/subscriptions",
"organizations_url": "https://api.github.com/users/iurimatias/orgs",
"repos_url": "https://api.github.com/users/iurimatias/repos",
"events_url": "https://api.github.com/users/iurimatias/events{/privacy}",
"received_events_url": "https://api.github.com/users/iurimatias/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2024-03-03T20:44:12 | 2024-03-12T01:32:34 | 2024-03-12T01:32:34 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | This is required for libraries like LMQL, and useful for some other tooling as well (debugging, etc.).
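For context, on OpenAI's completions endpoint logprobs is an integer asking for the log-probabilities of up to that many most-likely tokens at each position; a request sketch against OpenAI's API (what is being requested here, not something Ollama exposes today):
```json
{
  "model": "gpt-3.5-turbo-instruct",
  "prompt": "The capital of France is",
  "max_tokens": 1,
  "logprobs": 5
}
```
The response's choices[0].logprobs then carries tokens, token_logprobs, and top_logprobs arrays.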
https://platform.openai.com/docs/api-reference/completions/create#completions-create-logprobs | {
"login": "jmorganca",
"id": 251292,
"node_id": "MDQ6VXNlcjI1MTI5Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmorganca",
"html_url": "https://github.com/jmorganca",
"followers_url": "https://api.github.com/users/jmorganca/followers",
"following_url": "https://api.github.com/users/jmorganca/following{/other_user}",
"gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions",
"organizations_url": "https://api.github.com/users/jmorganca/orgs",
"repos_url": "https://api.github.com/users/jmorganca/repos",
"events_url": "https://api.github.com/users/jmorganca/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmorganca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/2902/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/2902/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3463 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3463/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3463/comments | https://api.github.com/repos/ollama/ollama/issues/3463/events | https://github.com/ollama/ollama/pull/3463 | 2,221,548,791 | PR_kwDOJ0Z1Ps5rfku5 | 3,463 | update graph size estimate | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2024-04-02T22:12:34 | 2024-04-03T21:27:31 | 2024-04-03T21:27:30 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/3463",
"html_url": "https://github.com/ollama/ollama/pull/3463",
"diff_url": "https://github.com/ollama/ollama/pull/3463.diff",
"patch_url": "https://github.com/ollama/ollama/pull/3463.patch",
"merged_at": "2024-04-03T21:27:30"
} | Precise scratch space estimates for gemma and llama models, with other models falling back to the previous algorithm. | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3463/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3463/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/272 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/272/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/272/comments | https://api.github.com/repos/ollama/ollama/issues/272/events | https://github.com/ollama/ollama/pull/272 | 1,835,825,101 | PR_kwDOJ0Z1Ps5XJlAE | 272 | Decode ggml 2: Use decoded values | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-08-03T22:48:36 | 2023-08-11T00:22:49 | 2023-08-11T00:22:48 | CONTRIBUTOR | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/ollama/ollama/pulls/272",
"html_url": "https://github.com/ollama/ollama/pull/272",
"diff_url": "https://github.com/ollama/ollama/pull/272.diff",
"patch_url": "https://github.com/ollama/ollama/pull/272.patch",
"merged_at": "2023-08-11T00:22:48"
} | null | {
"login": "mxyng",
"id": 2372640,
"node_id": "MDQ6VXNlcjIzNzI2NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxyng",
"html_url": "https://github.com/mxyng",
"followers_url": "https://api.github.com/users/mxyng/followers",
"following_url": "https://api.github.com/users/mxyng/following{/other_user}",
"gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxyng/subscriptions",
"organizations_url": "https://api.github.com/users/mxyng/orgs",
"repos_url": "https://api.github.com/users/mxyng/repos",
"events_url": "https://api.github.com/users/mxyng/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxyng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/272/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/272/timeline | null | null | true |
https://api.github.com/repos/ollama/ollama/issues/1927 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/1927/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/1927/comments | https://api.github.com/repos/ollama/ollama/issues/1927/events | https://github.com/ollama/ollama/issues/1927 | 2,077,156,354 | I_kwDOJ0Z1Ps57zuQC | 1,927 | Handling High traffic | {
"login": "lauvindra",
"id": 82690315,
"node_id": "MDQ6VXNlcjgyNjkwMzE1",
"avatar_url": "https://avatars.githubusercontent.com/u/82690315?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lauvindra",
"html_url": "https://github.com/lauvindra",
"followers_url": "https://api.github.com/users/lauvindra/followers",
"following_url": "https://api.github.com/users/lauvindra/following{/other_user}",
"gists_url": "https://api.github.com/users/lauvindra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lauvindra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lauvindra/subscriptions",
"organizations_url": "https://api.github.com/users/lauvindra/orgs",
"repos_url": "https://api.github.com/users/lauvindra/repos",
"events_url": "https://api.github.com/users/lauvindra/events{/privacy}",
"received_events_url": "https://api.github.com/users/lauvindra/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2024-01-11T16:52:02 | 2024-01-26T23:52:33 | 2024-01-26T23:52:33 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Assume I have ollama on a server with a Tesla T4 GPU with 16GB VRAM and 120 RAM; how many requests can it handle in one second? | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/1927/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/1927/timeline | null | completed | false |
https://api.github.com/repos/ollama/ollama/issues/3802 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/3802/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/3802/comments | https://api.github.com/repos/ollama/ollama/issues/3802/events | https://github.com/ollama/ollama/issues/3802 | 2,255,184,367 | I_kwDOJ0Z1Ps6Ga2Hv | 3,802 | Advertise existence to local network using NSD or mDNS SD | {
"login": "majestrate",
"id": 499653,
"node_id": "MDQ6VXNlcjQ5OTY1Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/499653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/majestrate",
"html_url": "https://github.com/majestrate",
"followers_url": "https://api.github.com/users/majestrate/followers",
"following_url": "https://api.github.com/users/majestrate/following{/other_user}",
"gists_url": "https://api.github.com/users/majestrate/gists{/gist_id}",
"starred_url": "https://api.github.com/users/majestrate/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/majestrate/subscriptions",
"organizations_url": "https://api.github.com/users/majestrate/orgs",
"repos_url": "https://api.github.com/users/majestrate/repos",
"events_url": "https://api.github.com/users/majestrate/events{/privacy}",
"received_events_url": "https://api.github.com/users/majestrate/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396200,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA",
"url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request",
"name": "feature request",
"color": "a2eeef",
"default": false,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 2024-04-21T17:26:49 | 2024-04-21T17:26:49 | null | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | It would be very cool if Ollama could publish itself to the local network as a discoverable service, so that other devices on the network could use NSD to discover that a locally accessible instance exists. This would greatly help unify how companion applications work (e.g. a mobile app that lets you remotely query the Ollama instance on the local network). (See the mDNS sketch after this record.) | null | {
"url": "https://api.github.com/repos/ollama/ollama/issues/3802/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/3802/timeline | null | null | false |
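For a sense of how small the advertising side of this feature request is, here is a minimal sketch using the third-party `zeroconf` Python package. The `_ollama._tcp.local.` service type is a hypothetical name chosen for illustration; Ollama does not currently register any DNS-SD type:

```python
import socket
import time

from zeroconf import ServiceInfo, Zeroconf  # pip install zeroconf

SERVICE_TYPE = "_ollama._tcp.local."  # hypothetical: no official type exists yet

info = ServiceInfo(
    SERVICE_TYPE,
    "demo-ollama." + SERVICE_TYPE,                 # instance name + service type
    addresses=[socket.inet_aton("192.168.1.10")],  # placeholder LAN address
    port=11434,                                    # Ollama's default port
    properties={"path": "/api"},                   # arbitrary TXT metadata
)

zc = Zeroconf()
zc.register_service(info)  # begins answering mDNS queries for this service
try:
    time.sleep(3600)       # keep advertising; a real daemon would run indefinitely
finally:
    zc.unregister_service(info)
    zc.close()
```

Clients on the same network could then browse for `_ollama._tcp` with any standard DNS-SD stack (Android NSD, Avahi, Bonjour) instead of hard-coding a server address.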
https://api.github.com/repos/ollama/ollama/issues/8477 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8477/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8477/comments | https://api.github.com/repos/ollama/ollama/issues/8477/events | https://github.com/ollama/ollama/issues/8477 | 2,796,598,168 | I_kwDOJ0Z1Ps6msLOY | 8,477 | The v0.5.7 Linux x86_64 release seems to be built in debug mode, and the OpenAI endpoint seems broken | {
"login": "Dawn-Xu-helloworld",
"id": 48346940,
"node_id": "MDQ6VXNlcjQ4MzQ2OTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/48346940?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dawn-Xu-helloworld",
"html_url": "https://github.com/Dawn-Xu-helloworld",
"followers_url": "https://api.github.com/users/Dawn-Xu-helloworld/followers",
"following_url": "https://api.github.com/users/Dawn-Xu-helloworld/following{/other_user}",
"gists_url": "https://api.github.com/users/Dawn-Xu-helloworld/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Dawn-Xu-helloworld/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dawn-Xu-helloworld/subscriptions",
"organizations_url": "https://api.github.com/users/Dawn-Xu-helloworld/orgs",
"repos_url": "https://api.github.com/users/Dawn-Xu-helloworld/repos",
"events_url": "https://api.github.com/users/Dawn-Xu-helloworld/events{/privacy}",
"received_events_url": "https://api.github.com/users/Dawn-Xu-helloworld/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 1 | 2025-01-18T02:46:29 | 2025-01-24T09:30:42 | 2025-01-24T09:30:42 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
```
2025/01/18 10:21:45 routes.go:1187: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/dawn/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-01-18T10:21:45.413+08:00 level=INFO source=images.go:432 msg="total blobs: 28"
time=2025-01-18T10:21:45.413+08:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
- using env: export GIN_MODE=release
- using code: gin.SetMode(gin.ReleaseMode)
[GIN-debug] POST /api/pull --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
[GIN-debug] POST /api/generate --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST /api/chat --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST /api/embed --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
[GIN-debug] POST /api/embeddings --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST /api/create --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
[GIN-debug] POST /api/push --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers)
[GIN-debug] POST /api/copy --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers)
[GIN-debug] DELETE /api/delete --> github.com/ollama/ollama/server.(*Server).DeleteHandler-fm (5 handlers)
[GIN-debug] POST /api/show --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers)
[GIN-debug] POST /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] GET /api/ps --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers)
[GIN-debug] POST /v1/chat/completions --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] POST /v1/completions --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
[GIN-debug] POST /v1/embeddings --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
[GIN-debug] GET /v1/models --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
[GIN-debug] GET /v1/models/:model --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
[GIN-debug] GET / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] GET /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] HEAD /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] HEAD /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
time=2025-01-18T10:21:45.414+08:00 level=INFO source=routes.go:1238 msg="Listening on 127.0.0.1:11434 (version 0.5.7)"
time=2025-01-18T10:21:45.414+08:00 level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners="[cpu cpu_avx cpu_avx2 cuda_v11_avx cuda_v12_avx rocm_avx]"
time=2025-01-18T10:21:45.414+08:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
time=2025-01-18T10:21:45.421+08:00 level=INFO source=gpu.go:392 msg="no compatible GPUs were discovered"
time=2025-01-18T10:21:45.421+08:00 level=INFO source=types.go:131 msg="inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" total="31.0 GiB" available="27.0 GiB"
time=2025-01-18T10:21:45.695+08:00 level=INFO source=server.go:104 msg="system memory" total="31.0 GiB" free="27.0 GiB" free_swap="32.0 GiB"
time=2025-01-18T10:21:45.696+08:00 level=INFO source=memory.go:356 msg="offload to cpu" layers.requested=-1 layers.model=29 layers.offload=0 layers.split="" memory.available="[27.0 GiB]" memory.gpu_overhead="0 B" memory.required.full="1.5 GiB" memory.required.partial="0 B" memory.required.kv="224.0 MiB" memory.required.allocations="[1.5 GiB]" memory.weights.total="976.1 MiB" memory.weights.repeating="793.5 MiB" memory.weights.nonrepeating="182.6 MiB" memory.graph.full="299.8 MiB" memory.graph.partial="482.3 MiB"
time=2025-01-18T10:21:45.696+08:00 level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/local/lib/ollama/runners/cpu_avx2/ollama_llama_server runner --model /home/dawn/.ollama/models/blobs/sha256-183715c435899236895da3869489cc30ac241476b4971a20285b1a462818a5b4 --ctx-size 8192 --batch-size 512 --threads 6 --no-mmap --parallel 4 --port 37329"
time=2025-01-18T10:21:45.696+08:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2025-01-18T10:21:45.696+08:00 level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
time=2025-01-18T10:21:45.696+08:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
[GIN] 2025/01/18 - 10:27:15 | 404 | 19.126µs | 127.0.0.1 | POST "/v1"
```
### OS
Linux
### GPU
Other
### CPU
Intel
### Ollama version
0.5.7 | {
"login": "rick-github",
"id": 14946854,
"node_id": "MDQ6VXNlcjE0OTQ2ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rick-github",
"html_url": "https://github.com/rick-github",
"followers_url": "https://api.github.com/users/rick-github/followers",
"following_url": "https://api.github.com/users/rick-github/following{/other_user}",
"gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rick-github/subscriptions",
"organizations_url": "https://api.github.com/users/rick-github/orgs",
"repos_url": "https://api.github.com/users/rick-github/repos",
"events_url": "https://api.github.com/users/rick-github/events{/privacy}",
"received_events_url": "https://api.github.com/users/rick-github/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8477/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8477/timeline | null | completed | false |
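Two details in the log above are worth separating. The `[GIN-debug]` banner only reflects the gin HTTP framework's logger mode, which is switched at runtime via `GIN_MODE`, so it says nothing about how the binary itself was compiled. And the final 404 is for a POST to the bare path `/v1`, while the route table earlier in the same log shows the OpenAI-compatible chat endpoint registered at `/v1/chat/completions`. A minimal sketch of a request against the registered path, using only the Python standard library (the model name is a placeholder):

```python
import json
import urllib.request

# Full OpenAI-compatible path; POSTing to just "/v1" returns 404 as in the log.
URL = "http://127.0.0.1:11434/v1/chat/completions"

payload = json.dumps({
    "model": "llama3.2",  # placeholder: any model present in `ollama list`
    "messages": [{"role": "user", "content": "Hello!"}],
}).encode()

req = urllib.request.Request(
    URL, data=payload, headers={"Content-Type": "application/json"}
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

print(body["choices"][0]["message"]["content"])  # standard OpenAI response shape
```

If this succeeds while a client library fails, the client is most likely configured with the wrong base URL or path.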
https://api.github.com/repos/ollama/ollama/issues/8385 | https://api.github.com/repos/ollama/ollama | https://api.github.com/repos/ollama/ollama/issues/8385/labels{/name} | https://api.github.com/repos/ollama/ollama/issues/8385/comments | https://api.github.com/repos/ollama/ollama/issues/8385/events | https://github.com/ollama/ollama/issues/8385 | 2,781,798,559 | I_kwDOJ0Z1Ps6lzuCf | 8,385 | Cannot list or install models: Connection refused error on Windows 10 | {
"login": "inspector3535",
"id": 187630024,
"node_id": "U_kgDOCy8ByA",
"avatar_url": "https://avatars.githubusercontent.com/u/187630024?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/inspector3535",
"html_url": "https://github.com/inspector3535",
"followers_url": "https://api.github.com/users/inspector3535/followers",
"following_url": "https://api.github.com/users/inspector3535/following{/other_user}",
"gists_url": "https://api.github.com/users/inspector3535/gists{/gist_id}",
"starred_url": "https://api.github.com/users/inspector3535/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/inspector3535/subscriptions",
"organizations_url": "https://api.github.com/users/inspector3535/orgs",
"repos_url": "https://api.github.com/users/inspector3535/repos",
"events_url": "https://api.github.com/users/inspector3535/events{/privacy}",
"received_events_url": "https://api.github.com/users/inspector3535/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5667396184,
"node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA",
"url": "https://api.github.com/repos/ollama/ollama/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 15 | 2025-01-11T12:07:45 | 2025-01-13T20:02:33 | 2025-01-13T19:28:48 | NONE | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### What is the issue?
Greetings,
I'm unable to interact with Ollama on Windows 10. I cannot list models, nor can I install any of them.
Whatever command I try, I receive the error below; even when I type `ollama list`, the same error appears.
I initially suspected that my server configuration might be causing the issue, but I tried it on different Windows computers, and the problem persists.
I've even tried different installation methods (direct installation, installing via Scoop, Winget, etc.), but the result is always the same.
```
Error: Head "http://127.0.0.1:11434/": dial tcp 127.0.0.1:11434: connectex: The connection could not be established because the target machine actively rejected it.
```
Note: Windows Defender and the firewall are disabled, and the only error I can find in the log is:
"no compatible GPUs were discovered".
The hardware of my server should be sufficient to run at least small models like LLaMA 2-7B or GPT Small.
Additionally, I can successfully run the Ollama server using `ollama serve`; I can even see the server running in the browser, but I am unable to list, download, or install models. (See the probe sketch after this record.)
Thank you in advance for your time and assistance. I would appreciate any guidance on how to resolve this issue.
### OS
Windows
### GPU
Other
### CPU
Intel
### Ollama version
0.5.4 | {
"login": "pdevine",
"id": 75239,
"node_id": "MDQ6VXNlcjc1MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdevine",
"html_url": "https://github.com/pdevine",
"followers_url": "https://api.github.com/users/pdevine/followers",
"following_url": "https://api.github.com/users/pdevine/following{/other_user}",
"gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdevine/subscriptions",
"organizations_url": "https://api.github.com/users/pdevine/orgs",
"repos_url": "https://api.github.com/users/pdevine/repos",
"events_url": "https://api.github.com/users/pdevine/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdevine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/ollama/ollama/issues/8385/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/ollama/ollama/issues/8385/timeline | null | completed | false |
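A `connection refused` from `127.0.0.1:11434` means nothing was accepting connections on that port when the CLI tried; usually either the background server is not running, or the client and server disagree on the address because `OLLAMA_HOST` points elsewhere. A small probe sketch, assuming the default host and port, that separates the two cases:

```python
import socket

HOST, PORT = "127.0.0.1", 11434  # Ollama defaults; adjust if OLLAMA_HOST is set

try:
    # Attempt a plain TCP connect, independent of the ollama CLI.
    with socket.create_connection((HOST, PORT), timeout=2):
        print(f"Something is listening on {HOST}:{PORT}; "
              "check that the CLI's OLLAMA_HOST matches this address.")
except OSError as exc:
    print(f"No listener on {HOST}:{PORT} ({exc}); "
          "start the server (e.g. run `ollama serve`) and retry `ollama list`.")
```

Since the reporter can run `ollama serve` successfully, the likely culprit is an `OLLAMA_HOST` mismatch between the terminal serving and the terminal issuing `ollama list`; the "no compatible GPUs were discovered" log line is unrelated to the connection error.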