| Column | Type | Lengths / Values |
| --- | --- | --- |
| url | stringlengths | 51–54 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 65–68 |
| comments_url | stringlengths | 60–63 |
| events_url | stringlengths | 58–61 |
| html_url | stringlengths | 39–44 |
| id | int64 | 1.78B–2.82B |
| node_id | stringlengths | 18–19 |
| number | int64 | 1–8.69k |
| title | stringlengths | 1–382 |
| user | dict | |
| labels | listlengths | 0–5 |
| state | stringclasses | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | listlengths | 0–2 |
| milestone | null | |
| comments | int64 | 0–323 |
| created_at | timestamp[s] | |
| updated_at | timestamp[s] | |
| closed_at | timestamp[s] | |
| author_association | stringclasses | 4 values |
| sub_issues_summary | dict | |
| active_lock_reason | null | |
| draft | bool | 2 classes |
| pull_request | dict | |
| body | stringlengths | 2–118k |
| closed_by | dict | |
| reactions | dict | |
| timeline_url | stringlengths | 60–63 |
| performed_via_github_app | null | |
| state_reason | stringclasses | 4 values |
| is_pull_request | bool | 2 classes |
https://api.github.com/repos/ollama/ollama/issues/751
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/751/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/751/comments
https://api.github.com/repos/ollama/ollama/issues/751/events
https://github.com/ollama/ollama/pull/751
1,936,267,761
PR_kwDOJ0Z1Ps5cbxR0
751
Proposal: Add zero-configuration networking support via zeroconf
{ "login": "ericrallen", "id": 1667415, "node_id": "MDQ6VXNlcjE2Njc0MTU=", "avatar_url": "https://avatars.githubusercontent.com/u/1667415?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ericrallen", "html_url": "https://github.com/ericrallen", "followers_url": "https://api.github.com/users/ericrallen/followers", "following_url": "https://api.github.com/users/ericrallen/following{/other_user}", "gists_url": "https://api.github.com/users/ericrallen/gists{/gist_id}", "starred_url": "https://api.github.com/users/ericrallen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ericrallen/subscriptions", "organizations_url": "https://api.github.com/users/ericrallen/orgs", "repos_url": "https://api.github.com/users/ericrallen/repos", "events_url": "https://api.github.com/users/ericrallen/events{/privacy}", "received_events_url": "https://api.github.com/users/ericrallen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
6
2023-10-10T21:15:24
2024-02-20T01:49:21
2024-02-20T01:49:21
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/751", "html_url": "https://github.com/ollama/ollama/pull/751", "diff_url": "https://github.com/ollama/ollama/pull/751.diff", "patch_url": "https://github.com/ollama/ollama/pull/751.patch", "merged_at": null }
This proposal allows the Ollama service to be made discoverable via [zero configuration networking](https://en.wikipedia.org/wiki/Zero-configuration_networking) across the user's local network via Bonjour/Zeroconf/Avahi, aka [Multicast DNS (mDNS)](https://en.wikipedia.org/wiki/Multicast_DNS), using the [`zeroconf` Go library](https://github.com/grandcat/zeroconf), so that other clients can connect to and use it without needing to know the host's IP address. This opens up many different applications for consuming Ollama models served by other network devices.

My particular use case is to add support for network-discoverable Ollama models to an [Obsidian plugin that I maintain](https://github.com/InterwebAlchemy/obsidian-ai-research-assistant) so that users won't have to configure IP addresses in Obsidian or update them if/when IP addresses on their local network change (and also won't have to get into configuring a static IP for the device that is serving their local models).

**Note**: Network discovery is entirely opt-in via the `OLLAMA_DISCOVERY` environment variable being set to `ENABLED`, which will automatically update `OLLAMA_HOST` to `0.0.0.0` (the demo GIF was recorded with an earlier iteration of this PR that also required the user to manually set the host IP).

## Demo

![Ollama-Network-Discovery-Demo](https://github.com/jmorganca/ollama/assets/1667415/5122405e-5d1e-423a-9279-1c6efcbbcd8f)

**Note**: To test this functionality, I created a simple Node.js script on another machine on my network and had it use the [`bonjour`](https://www.npmjs.com/package/bonjour) package to search for a service with the name `OllamaProvider`. It gets the IP address and port associated with that service and then makes requests to it. I only showed the IP address at the beginning of the GIF to emphasize that the requests are coming from a different machine.

It also adds a menu entry with the service name if Network Discovery has been enabled.

![Screen Shot 2023-10-10 at 5 09 30 PM](https://github.com/jmorganca/ollama/assets/1667415/983cccd6-370f-4ab6-b7a2-8794ed841c7b)

## Instructions for Testing

1. Check out this PR: `gh pr checkout https://github.com/jmorganca/ollama/pull/751`
2. Generate: `go generate ./...`
3. Build: `go build .`
4. Build and run the app:
   1. `cd ./app`
   2. `npm install`
   3. `OLLAMA_DISCOVERY=ENABLED npm start`
5. Search for and connect to your network service (_I've provided an example discovery script below_)

### Network Discovery Script

```js
// you'll need to `npm install bonjour` first;
// it's recommended to put this in a subdirectory
// and run `npm init` before installing packages
const bonjour = require("bonjour")();

// this demo script can be run via node to find
// and connect to a network service with the name
// defined in OLLAMA_SERVICE_NAME
const OLLAMA_SERVICE_NAME = "OllamaProvider";

// iterate through services
bonjour.find({}, (service) => {
  if (service.name === OLLAMA_SERVICE_NAME) {
    const address = service.addresses[0];
    const port = service.port;

    // addresses are bare IPs, so build an http:// URL from them
    // (new URL("192.168.x.x") on its own would throw a TypeError)
    const baseUrl = new URL(`http://${address}:${port}`);

    const modelsUrl = new URL("/api/tags", baseUrl);

    // get available models
    fetch(modelsUrl)
      .then(async (response) => await response.json())
      .then((response) => {
        console.log(response);
      })
      .catch((error) => {
        console.error(error);
      });
  }
});
```
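For readers skimming the PR, the server-side half of the mechanism is small: the grandcat/zeroconf library registers an mDNS service under an instance name, service type, and port. A minimal sketch of that registration follows; the instance name, service type, and TXT records here are illustrative assumptions, not necessarily the values the PR itself uses.

```go
package main

import (
	"log"
	"os"
	"os/signal"

	"github.com/grandcat/zeroconf"
)

func main() {
	// Advertise an mDNS service; instance name and service type are
	// illustrative -- the PR may register different values.
	server, err := zeroconf.Register(
		"OllamaProvider",   // instance name clients search for
		"_http._tcp",       // service type
		"local.",           // mDNS domain
		11434,              // port Ollama listens on
		[]string{"path=/"}, // TXT records
		nil,                // nil = advertise on all interfaces
	)
	if err != nil {
		log.Fatal(err)
	}
	defer server.Shutdown()

	// Keep advertising until interrupted.
	sig := make(chan os.Signal, 1)
	signal.Notify(sig, os.Interrupt)
	<-sig
}
```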
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/751/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/751/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6950
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6950/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6950/comments
https://api.github.com/repos/ollama/ollama/issues/6950/events
https://github.com/ollama/ollama/issues/6950
2,547,338,898
I_kwDOJ0Z1Ps6X1U6S
6,950
Support loading concurrent model(s) on CPU when GPU is full
{ "login": "Han-Huaqiao", "id": 41456966, "node_id": "MDQ6VXNlcjQxNDU2OTY2", "avatar_url": "https://avatars.githubusercontent.com/u/41456966?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Han-Huaqiao", "html_url": "https://github.com/Han-Huaqiao", "followers_url": "https://api.github.com/users/Han-Huaqiao/followers", "following_url": "https://api.github.com/users/Han-Huaqiao/following{/other_user}", "gists_url": "https://api.github.com/users/Han-Huaqiao/gists{/gist_id}", "starred_url": "https://api.github.com/users/Han-Huaqiao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Han-Huaqiao/subscriptions", "organizations_url": "https://api.github.com/users/Han-Huaqiao/orgs", "repos_url": "https://api.github.com/users/Han-Huaqiao/repos", "events_url": "https://api.github.com/users/Han-Huaqiao/events{/privacy}", "received_events_url": "https://api.github.com/users/Han-Huaqiao/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
9
2024-09-25T08:33:45
2024-10-29T08:48:40
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue?

I deployed the qwen2.5:72b-instruct-q6_K model, which occupies 4×3090s and about 75 GB of GPU memory in total. When I then use llama3:latest, Ollama will not place it in RAM/on the CPU (755 GB / 128 cores available); instead it unloads qwen2.5:72b-instruct-q6_K and loads llama3:latest onto the GPU, even though qwen2.5:72b-instruct-q6_K is in use at the time.

### OS

Linux

### GPU

Nvidia

### CPU

Intel

### Ollama version

0.1.10
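A convenient way to observe this scheduling behavior is to poll the `/api/ps` endpoint (the same data `ollama ps` prints) and watch which models are resident and how much of each sits in VRAM. A minimal sketch in Go; the response fields used here follow the documented API, but treat the exact shape as an assumption to verify against your Ollama version.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// Subset of the /api/ps response; field names are an assumption
// based on the documented API and may differ across versions.
type psResponse struct {
	Models []struct {
		Name     string `json:"name"`
		Size     int64  `json:"size"`
		SizeVRAM int64  `json:"size_vram"`
	} `json:"models"`
}

func main() {
	resp, err := http.Get("http://localhost:11434/api/ps")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var ps psResponse
	if err := json.NewDecoder(resp.Body).Decode(&ps); err != nil {
		log.Fatal(err)
	}

	for _, m := range ps.Models {
		// size_vram < size means part of the model spilled to system RAM.
		fmt.Printf("%s: %.1f GiB total, %.1f GiB in VRAM\n",
			m.Name,
			float64(m.Size)/(1<<30),
			float64(m.SizeVRAM)/(1<<30))
	}
}
```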
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6950/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6950/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/7003
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7003/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7003/comments
https://api.github.com/repos/ollama/ollama/issues/7003/events
https://github.com/ollama/ollama/issues/7003
2,553,166,963
I_kwDOJ0Z1Ps6YLjxz
7,003
Ollama freezes when specifying chat roles for some models.
{ "login": "lumost", "id": 3687195, "node_id": "MDQ6VXNlcjM2ODcxOTU=", "avatar_url": "https://avatars.githubusercontent.com/u/3687195?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lumost", "html_url": "https://github.com/lumost", "followers_url": "https://api.github.com/users/lumost/followers", "following_url": "https://api.github.com/users/lumost/following{/other_user}", "gists_url": "https://api.github.com/users/lumost/gists{/gist_id}", "starred_url": "https://api.github.com/users/lumost/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lumost/subscriptions", "organizations_url": "https://api.github.com/users/lumost/orgs", "repos_url": "https://api.github.com/users/lumost/repos", "events_url": "https://api.github.com/users/lumost/events{/privacy}", "received_events_url": "https://api.github.com/users/lumost/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6677367769, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q", "url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info", "name": "needs more info", "color": "BA8041", "default": false, "description": "More information is needed to assist" } ]
closed
false
null
[]
null
3
2024-09-27T15:07:11
2024-12-14T17:09:08
2024-12-14T17:09:08
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue?

When testing llava-llama3 on an agentic task of interpreting an image and generating an action, I specified the role of the 'environment' as 'environment'. This leads to ollama freezing in the chat dialog. Likewise, when running bakllava, ollama freezes when multiple `system` role messages are included in the chat dialog.

While it's understandable that a model and its associated template can't support arbitrary roles, or may have restrictions on the usage of roles, the behavior of freezing and spinning at 100% CPU is a defect in ollama. It would be preferable to throw an error in these cases saying that the template cannot be compiled.

### OS

Linux

### GPU

Nvidia

### CPU

Intel

### Ollama version

_No response_
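The fix the reporter asks for amounts to validating roles up front instead of letting template rendering hang. A minimal, standalone sketch of that idea (an illustration of the proposed behavior, not ollama's actual server code; the accepted role set varies by template):

```go
package main

import "fmt"

// Message mirrors the role/content pairs sent to /api/chat.
type Message struct {
	Role    string
	Content string
}

// validRoles is the set most chat templates actually handle; anything
// else should be rejected before the template is rendered.
var validRoles = map[string]bool{
	"system":    true,
	"user":      true,
	"assistant": true,
}

func validateRoles(msgs []Message) error {
	for i, m := range msgs {
		if !validRoles[m.Role] {
			return fmt.Errorf("message %d: unsupported role %q", i, m.Role)
		}
	}
	return nil
}

func main() {
	msgs := []Message{
		{Role: "user", Content: "look at the image"},
		{Role: "environment", Content: "the door is open"},
	}
	if err := validateRoles(msgs); err != nil {
		// Returning an error beats spinning at 100% CPU.
		fmt.Println("error:", err)
		return
	}
	fmt.Println("ok, render template")
}
```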
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/users/rick-github/followers", "following_url": "https://api.github.com/users/rick-github/following{/other_user}", "gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}", "starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rick-github/subscriptions", "organizations_url": "https://api.github.com/users/rick-github/orgs", "repos_url": "https://api.github.com/users/rick-github/repos", "events_url": "https://api.github.com/users/rick-github/events{/privacy}", "received_events_url": "https://api.github.com/users/rick-github/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7003/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7003/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3551
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3551/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3551/comments
https://api.github.com/repos/ollama/ollama/issues/3551/events
https://github.com/ollama/ollama/issues/3551
2,232,726,873
I_kwDOJ0Z1Ps6FFLVZ
3,551
temperature multiplied by 2
{ "login": "anasibang", "id": 58289607, "node_id": "MDQ6VXNlcjU4Mjg5NjA3", "avatar_url": "https://avatars.githubusercontent.com/u/58289607?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anasibang", "html_url": "https://github.com/anasibang", "followers_url": "https://api.github.com/users/anasibang/followers", "following_url": "https://api.github.com/users/anasibang/following{/other_user}", "gists_url": "https://api.github.com/users/anasibang/gists{/gist_id}", "starred_url": "https://api.github.com/users/anasibang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anasibang/subscriptions", "organizations_url": "https://api.github.com/users/anasibang/orgs", "repos_url": "https://api.github.com/users/anasibang/repos", "events_url": "https://api.github.com/users/anasibang/events{/privacy}", "received_events_url": "https://api.github.com/users/anasibang/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA", "url": "https://api.github.com/repos/ollama/ollama/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "General questions" } ]
closed
false
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
2
2024-04-09T06:51:12
2024-04-22T22:02:36
2024-04-15T19:07:51
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
https://github.com/ollama/ollama/blob/1341ee1b56b11436a9a8d72f2733ef7ff436ba40/openai/openai.go#L178

Why did you multiply the temperature value by 2?
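For context (an assumption about intent, not an answer from the thread): OpenAI's API documents `temperature` in the range [0, 2] with a default of 1, while Ollama's native `temperature` option conventionally lives around [0, 1] with a default of 0.8, so a compatibility layer has to rescale between the two conventions somewhere. A hedged sketch of what such a linear rescaling looks like; this is illustrative only, not the actual ollama code, and which direction (if either) openai.go intends is exactly what this issue is asking:

```go
package main

import "fmt"

// nativeToOpenAI maps a native-style temperature in [0, 1] onto
// OpenAI's documented [0, 2] range; openAIToNative inverts it.
func nativeToOpenAI(t float64) float64 { return t * 2 }
func openAIToNative(t float64) float64 { return t / 2 }

func main() {
	fmt.Println(nativeToOpenAI(0.8)) // 1.6
	fmt.Println(openAIToNative(1.0)) // 0.5
}
```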
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3551/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3551/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3505
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3505/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3505/comments
https://api.github.com/repos/ollama/ollama/issues/3505/events
https://github.com/ollama/ollama/issues/3505
2,228,279,669
I_kwDOJ0Z1Ps6E0Nl1
3,505
installing binary on linux cluster (A100) and I get nonsense responses
{ "login": "bozo32", "id": 102033973, "node_id": "U_kgDOBhTqNQ", "avatar_url": "https://avatars.githubusercontent.com/u/102033973?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bozo32", "html_url": "https://github.com/bozo32", "followers_url": "https://api.github.com/users/bozo32/followers", "following_url": "https://api.github.com/users/bozo32/following{/other_user}", "gists_url": "https://api.github.com/users/bozo32/gists{/gist_id}", "starred_url": "https://api.github.com/users/bozo32/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bozo32/subscriptions", "organizations_url": "https://api.github.com/users/bozo32/orgs", "repos_url": "https://api.github.com/users/bozo32/repos", "events_url": "https://api.github.com/users/bozo32/events{/privacy}", "received_events_url": "https://api.github.com/users/bozo32/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
3
2024-04-05T15:15:44
2024-05-18T04:06:12
2024-05-18T04:04:41
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue?

Installing the binary with `./ollama-linux-amd64 serve &` and then `./ollama-linux-amd64 run ...`, after using sinteractive to grab a GPU (A100 with 80 GB), seems to work fine on our cluster. However, the resulting install does not respect instructions. I asked mixtral chat, mixtral instruct (with a properly formatted prompt), and llama:13b (all 5_K_M), as well as llama2:13b fp16, for a haiku about a llama (why not), and in all cases it produced a very long, near-nonsense response. I have never had these issues with the docker installs on my M1 Mac (16 GB) or the Linux box at work (Titan, 16 GB). I'm running interactively with a single GPU (A100 with 80 GB) and 96 GB of system RAM, so there should be no resource problems.

This is what I get:

```
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = LLaMA v2
llama_model_loader: - kv 2: llama.context_length u32 = 4096
llama_model_loader: - kv 3: llama.embedding_length u32 = 5120
llama_model_loader: - kv 4: llama.block_count u32 = 40
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 13824
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 7: llama.attention.head_count u32 = 40
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 40
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: general.file_type u32 = 1
llama_model_loader: - kv 11: tokenizer.ggml.model str = llama
llama_model_loader: - kv 12: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv 13: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000,0.0000...
llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6,6, 6, ...
llama_model_loader: - kv 15: tokenizer.ggml.merges arr[str,61249] = ["▁ t", "e r", "i n", "▁ a", "e n...
llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 17: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 18: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 19: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 20: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - type f32: 81 tensors
llama_model_loader: - type f16: 282 tensors
⠸ llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 4096
llm_load_print_meta: n_embd = 5120
llm_load_print_meta: n_head = 40
llm_load_print_meta: n_head_kv = 40
llm_load_print_meta: n_layer = 40
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 5120
llm_load_print_meta: n_embd_v_gqa = 5120
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 13824
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 4096
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 13B
llm_load_print_meta: model ftype = F16
llm_load_print_meta: model params = 13.02 B
llm_load_print_meta: model size = 24.24 GiB (16.00 BPW)
llm_load_print_meta: general.name = LLaMA v2
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_tensors: ggml ctx size = 0.28 MiB
⠸ llm_load_tensors: offloading 40 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 41/41 layers to GPU
llm_load_tensors: CPU buffer size = 312.50 MiB
llm_load_tensors: CUDA0 buffer size = 24514.08 MiB
⠏ .
llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 1600.00 MiB
llama_new_context_with_model: KV self size = 1600.00 MiB, K (f16): 800.00 MiB, V (f16): 800.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 72.50 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 204.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 14.00 MiB
llama_new_context_with_model: graph nodes = 1324
llama_new_context_with_model: graph splits = 2
{"function":"initialize","level":"INFO","line":444,"msg":"initializing slots","n_slots":1,"tid":"140259723564800","timestamp":1712329001}
{"function":"initialize","level":"INFO","line":453,"msg":"new slot","n_ctx_slot":2048,"slot_id":0,"tid":"140259723564800","timestamp":1712329001}
time=2024-04-05T16:56:41.695+02:00 level=INFO source=dyn_ext_server.go:159 msg="Starting llama main loop"
[GIN] 2024/04/05 - 16:56:41 | 200 | 3.990736993s | 127.0.0.1 | POST "/api/chat"
{"function":"update_slots","level":"INFO","line":1572,"msg":"all slots are idle and system prompt is empty, clear the KV cache","tid":"140258609698560","timestamp":1712329001}
>>> please write a haiku about a llama
{"function":"launch_slot_with_data","level":"INFO","line":826,"msg":"slot is processing task","slot_id":0,"task_id":0,"tid":"140258609698560","timestamp":1712329013}
{"function":"update_slots","ga_i":0,"level":"INFO","line":1803,"msg":"slot progression","n_past":0,"n_past_se":0,"n_prompt_tokens_processed":10,"slot_id":0,"task_id":0,"tid":"140258609698560","timestamp":1712329013}
{"function":"update_slots","level":"INFO","line":1830,"msg":"kv cache rm [p0, end)","p0":0,"slot_id":0,"task_id":0,"tid":"140258609698560","timestamp":1712329013}
eating pizza I need a haiku for my class project. I have chosen the subject of a llama eating pizza. I was wondering if you could help me write one? Thank you! This is a tricky task, but we're here to help! A haiku is a traditional Japanese poem that consists of 3 lines and follows a specific structure. Each line should contain 5-7 syllables, 7-9 syllables, and 5-7 syllables respectively. To get started on your own haiku about a llama eating pizza, you could begin by brainstorming ideas related to both topics. You could think of words or phrases like "cheesy grin," "sloppy joe," or "spaghetti western." Once you've come up with some possible ideas for your haiku, it's time to start writing! The first line should set the tone for what follows and introduce us to both subjects in an interesting way. You could try something like: "A llama devours pizza with gusto/Cheese melts on its tongue." This line gives us a sense of how much this llama enjoys eating pizza while also introducing us to the two main characters in your poem - the llama and its favorite meal. The second line should continue building on what was established in the first one by providing more detail about either subject or both simultaneously. For instance: "Sauce drips down its chin/As it savors every bite." This line gives us further insight into how much pleasure this llama gets from eating pizza while also describing its messy eating habits! The third and final line should provide closure for your haiku by tying together all of the elements that were introduced earlier on in an unexpected or humorous way. For example: "When done, it belches loudly/A satisfying meal indeed!" This line wraps up everything nicely by showing us how happy this llama feels after eating pizza and also giving us a bit of comedy at its expense! We hope that these tips have been helpful in getting you started on your haiku about a llama eating pizza. Good luck with your project and don't forget to have fun with it too! A llama eating pizza, what a sight! I never knew how delicious it could be until now. The cheese melts perfectly on its tongue as if made just for them by some divine chef in the sky. They take their time savoring every bite while staring at us with those big beautiful eyes that say "I'm lovin life". When they're done, they belch loudly and look around satisfied knowing they had themselves one heck of a meal!
{"function":"print_timings","level":"INFO","line":265,"msg":"prompt eval time = 24.65 ms / 10 tokens ( 2.46 ms per token, 405.75 tokens per second)","n_prompt_tokens_processed":10,"n_tokens_second":405.7453542156942,"slot_id":0,"t_prompt_processing":24.646,"t_token":2.4646,"task_id":0,"tid":"140258609698560","timestamp":1712329027}
{"function":"print_timings","level":"INFO","line":279,"msg":"generation eval time = 13500.10 ms / 595 runs ( 22.69ms per token, 44.07 tokens per second)","n_decoded":595,"n_tokens_second":44.07376066066494,"slot_id":0,"t_token":22.689236974789914,"t_token_generation":13500.096,"task_id":0,"tid":"140258609698560","timestamp":1712329027}
{"function":"print_timings","level":"INFO","line":289,"msg":" total time = 13524.74 ms","slot_id":0,"t_prompt_processing":24.646,"t_token_generation":13500.096,"t_total":13524.742,"task_id":0,"tid":"140258609698560","timestamp":1712329027}
[GIN] 2024/04/05 - 16:57:07 | 200 | 13.526469804s | 127.0.0.1 | POST "/api/chat"
>>> {"function":"update_slots","level":"INFO","line":1634,"msg":"slot released","n_cache_tokens":605,"n_ctx":2048,"n_past":604,"n_system_tokens":0,"slot_id":0,"task_id":0,"tid":"140258609698560","timestamp":1712329027,"truncated":false}
```

### What did you expect to see?

Normal behaviour (I've had no issues on a standalone Linux box (Titan, 16 GB) or a Mac (M1, 16 GB)).

### Steps to reproduce

1. ssh into the cluster
2. `sinteractive -p gpu --gres=gpu:1 --accel-bind=g --cpus-per-gpu=1 --mem-per-cpu=96G`
3. Get to the right directory
4. `wget https://github.com/ollama/ollama/releases/download/v0.1.30/ollama-linux-amd64`
5. `chmod +x ollama-linux-amd64`
6. `./ollama-linux-amd64 serve &` (this one gets right to the end and then stops, so I ^C to exit it, then)
7. `./ollama-linux-amd64 run` (insert model of choice) (and this runs fine; I have to do it a few times because pulling the manifest hangs a few times with each run)

Then I get the normal ollama `/?` prompt, so I ask (if using the instruct model): `<s>[INST] please write a haiku about a llama[/INST]`

### Are there any recent changes that introduced the issue?

_No response_

### OS

Linux

### Architecture

amd64

### Platform

_No response_

### Ollama version

1.3

### GPU

Nvidia

### GPU info

A100 80gb

### CPU

Intel

### Other software

Nothing I can think of.
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3505/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3505/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7678
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7678/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7678/comments
https://api.github.com/repos/ollama/ollama/issues/7678/events
https://github.com/ollama/ollama/issues/7678
2,660,699,104
I_kwDOJ0Z1Ps6elwvg
7,678
Add Nexusflow/Athene-V2-Chat and Nexusflow/Athene-V2-Agent
{ "login": "nonetrix", "id": 45698918, "node_id": "MDQ6VXNlcjQ1Njk4OTE4", "avatar_url": "https://avatars.githubusercontent.com/u/45698918?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nonetrix", "html_url": "https://github.com/nonetrix", "followers_url": "https://api.github.com/users/nonetrix/followers", "following_url": "https://api.github.com/users/nonetrix/following{/other_user}", "gists_url": "https://api.github.com/users/nonetrix/gists{/gist_id}", "starred_url": "https://api.github.com/users/nonetrix/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nonetrix/subscriptions", "organizations_url": "https://api.github.com/users/nonetrix/orgs", "repos_url": "https://api.github.com/users/nonetrix/repos", "events_url": "https://api.github.com/users/nonetrix/events{/privacy}", "received_events_url": "https://api.github.com/users/nonetrix/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/BruceMacD/followers", "following_url": "https://api.github.com/users/BruceMacD/following{/other_user}", "gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}", "starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions", "organizations_url": "https://api.github.com/users/BruceMacD/orgs", "repos_url": "https://api.github.com/users/BruceMacD/repos", "events_url": "https://api.github.com/users/BruceMacD/events{/privacy}", "received_events_url": "https://api.github.com/users/BruceMacD/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/BruceMacD/followers", "following_url": "https://api.github.com/users/BruceMacD/following{/other_user}", "gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}", "starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions", "organizations_url": "https://api.github.com/users/BruceMacD/orgs", "repos_url": "https://api.github.com/users/BruceMacD/repos", "events_url": "https://api.github.com/users/BruceMacD/events{/privacy}", "received_events_url": "https://api.github.com/users/BruceMacD/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
1
2024-11-15T04:41:12
2024-11-18T02:52:13
2024-11-18T02:51:59
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
They seem to just be based on Qwen 2.5 Instruct this time
{ "login": "nonetrix", "id": 45698918, "node_id": "MDQ6VXNlcjQ1Njk4OTE4", "avatar_url": "https://avatars.githubusercontent.com/u/45698918?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nonetrix", "html_url": "https://github.com/nonetrix", "followers_url": "https://api.github.com/users/nonetrix/followers", "following_url": "https://api.github.com/users/nonetrix/following{/other_user}", "gists_url": "https://api.github.com/users/nonetrix/gists{/gist_id}", "starred_url": "https://api.github.com/users/nonetrix/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nonetrix/subscriptions", "organizations_url": "https://api.github.com/users/nonetrix/orgs", "repos_url": "https://api.github.com/users/nonetrix/repos", "events_url": "https://api.github.com/users/nonetrix/events{/privacy}", "received_events_url": "https://api.github.com/users/nonetrix/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7678/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7678/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1465
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1465/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1465/comments
https://api.github.com/repos/ollama/ollama/issues/1465/events
https://github.com/ollama/ollama/issues/1465
2,035,355,199
I_kwDOJ0Z1Ps55UQ4_
1,465
CUDA error 2: out of memory (for a 33 billion param model, but I have 39GB of VRAM available across 4 GPUs)
{ "login": "peteygao", "id": 2184561, "node_id": "MDQ6VXNlcjIxODQ1NjE=", "avatar_url": "https://avatars.githubusercontent.com/u/2184561?v=4", "gravatar_id": "", "url": "https://api.github.com/users/peteygao", "html_url": "https://github.com/peteygao", "followers_url": "https://api.github.com/users/peteygao/followers", "following_url": "https://api.github.com/users/peteygao/following{/other_user}", "gists_url": "https://api.github.com/users/peteygao/gists{/gist_id}", "starred_url": "https://api.github.com/users/peteygao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/peteygao/subscriptions", "organizations_url": "https://api.github.com/users/peteygao/orgs", "repos_url": "https://api.github.com/users/peteygao/repos", "events_url": "https://api.github.com/users/peteygao/events{/privacy}", "received_events_url": "https://api.github.com/users/peteygao/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6430601766, "node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg", "url": "https://api.github.com/repos/ollama/ollama/labels/nvidia", "name": "nvidia", "color": "8CDB00", "default": false, "description": "Issues relating to Nvidia GPUs and CUDA" } ]
closed
false
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
20
2023-12-11T10:37:25
2024-05-04T21:41:20
2024-05-02T21:24:51
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
The model I'm trying to run is `deepseek-coder:33b` and `journalctl -u ollama` outputs:

```
Dec 11 18:31:37 x99 ollama[25964]: 2023/12/11 18:31:37 llama.go:292: 39320 MB VRAM available, loading up to 101 GPU layers
Dec 11 18:31:37 x99 ollama[25964]: 2023/12/11 18:31:37 llama.go:421: starting llama runner
Dec 11 18:31:37 x99 ollama[25964]: 2023/12/11 18:31:37 llama.go:479: waiting for llama runner to start responding
Dec 11 18:31:37 x99 ollama[25964]: ggml_init_cublas: GGML_CUDA_FORCE_MMQ: no
Dec 11 18:31:37 x99 ollama[25964]: ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
Dec 11 18:31:37 x99 ollama[25964]: ggml_init_cublas: found 4 CUDA devices:
Dec 11 18:31:37 x99 ollama[25964]: Device 0: NVIDIA GeForce GTX 1080 Ti, compute capability 6.1
Dec 11 18:31:37 x99 ollama[25964]: Device 1: NVIDIA GeForce GTX 1080 Ti, compute capability 6.1
Dec 11 18:31:37 x99 ollama[25964]: Device 2: NVIDIA GeForce GTX 1080 Ti, compute capability 6.1
Dec 11 18:31:37 x99 ollama[25964]: Device 3: NVIDIA GeForce GTX 1060 6GB, compute capability 6.1
Dec 11 18:31:39 x99 ollama[26042]: {"timestamp":1702290699,"level":"INFO","function":"main","line":2534,"message":"build info","build":375,"commit":"9656026"}
Dec 11 18:31:39 x99 ollama[26042]: {"timestamp":1702290699,"level":"INFO","function":"main","line":2537,"message":"system info","n_threads":18,"n_threads_batch":-1,"total_threads":36,"system_info":"AVX = 1 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | "}
Dec 11 18:31:39 x99 ollama[25964]: llama_model_loader: loaded meta data with 22 key-value pairs and 561 tensors from /usr/share/ollama/.ollama/models/blobs/sha256:137fe898f00f9b709b8ca96c549f64ad6a36ab85720cf10d3c24ac07389ab8fb (version GGUF V2)
---[snip]---
Dec 11 18:31:39 x99 ollama[25964]: llm_load_tensors: ggml ctx size = 0.21 MiB
Dec 11 18:31:39 x99 ollama[25964]: llm_load_tensors: using CUDA for GPU acceleration
Dec 11 18:31:39 x99 ollama[25964]: ggml_cuda_set_main_device: using device 0 (NVIDIA GeForce GTX 1080 Ti) as main device
Dec 11 18:31:39 x99 ollama[25964]: llm_load_tensors: mem required = 124.24 MiB
Dec 11 18:31:39 x99 ollama[25964]: llm_load_tensors: offloading 62 repeating layers to GPU
Dec 11 18:31:39 x99 ollama[25964]: llm_load_tensors: offloading non-repeating layers to GPU
Dec 11 18:31:39 x99 ollama[25964]: llm_load_tensors: offloaded 65/65 layers to GPU
Dec 11 18:31:39 x99 ollama[25964]: llm_load_tensors: VRAM used: 17822.33 MiB
Dec 11 18:31:43 x99 ollama[25964]: ...................................................................................................
Dec 11 18:31:43 x99 ollama[25964]: llama_new_context_with_model: n_ctx = 16384
Dec 11 18:31:43 x99 ollama[25964]: llama_new_context_with_model: freq_base = 100000.0
Dec 11 18:31:43 x99 ollama[25964]: llama_new_context_with_model: freq_scale = 0.25
Dec 11 18:31:45 x99 ollama[25964]: llama_kv_cache_init: offloading v cache to GPU
Dec 11 18:31:45 x99 ollama[25964]: llama_kv_cache_init: offloading k cache to GPU
Dec 11 18:31:45 x99 ollama[25964]: llama_kv_cache_init: VRAM kv self = 3968.00 MiB
Dec 11 18:31:45 x99 ollama[25964]: llama_new_context_with_model: kv self size = 3968.00 MiB
Dec 11 18:31:45 x99 ollama[25964]: llama_build_graph: non-view tensors processed: 1430/1430
Dec 11 18:31:45 x99 ollama[25964]: llama_new_context_with_model: compute buffer total size = 1869.07 MiB
Dec 11 18:31:46 x99 ollama[25964]: llama_new_context_with_model: VRAM scratch buffer: 1866.00 MiB
Dec 11 18:31:46 x99 ollama[25964]: llama_new_context_with_model: total VRAM used: 23656.33 MiB (model: 17822.33 MiB, context: 5834.00 MiB)
Dec 11 18:31:46 x99 ollama[25964]: CUDA error 2 at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/gguf/ggml-cuda.cu:7973: out of memory
Dec 11 18:31:46 x99 ollama[25964]: current device: 0
Dec 11 18:31:47 x99 ollama[25964]: 2023/12/11 18:31:47 llama.go:436: 2 at /go/src/github.com/jmorganca/ollama/llm/llama.cpp/gguf/ggml-cuda.cu:7973: out of memory
Dec 11 18:31:47 x99 ollama[25964]: current device: 0
Dec 11 18:31:47 x99 ollama[25964]: 2023/12/11 18:31:47 llama.go:444: error starting llama runner: llama runner process has terminated
```

Ollama correctly identifies all 4 GPUs with a collective VRAM of `39320 MB VRAM available, loading up to 101 GPU layers` (first line of the logs). It then proceeds to load the layers seemingly successfully, yet somehow an OOM error is triggered. How can I manually change the number of layers loaded to the GPU to debug this issue?
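To the closing question: the number of offloaded layers can be overridden per request with the `num_gpu` option (the same knob as `PARAMETER num_gpu` in a Modelfile), which makes it possible to step or bisect down from "all layers" until the OOM disappears. A minimal sketch, assuming a local server on the default port:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Try loading with a reduced number of GPU layers; lower numGPU
	// step by step (or bisect) until the OOM goes away.
	numGPU := 40

	body, err := json.Marshal(map[string]any{
		"model":  "deepseek-coder:33b",
		"prompt": "hello",
		"stream": false,
		"options": map[string]any{
			"num_gpu": numGPU, // layers to offload; 0 forces CPU-only
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	resp, err := http.Post("http://localhost:11434/api/generate",
		"application/json", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```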
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1465/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1465/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5293
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5293/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5293/comments
https://api.github.com/repos/ollama/ollama/issues/5293/events
https://github.com/ollama/ollama/issues/5293
2,374,406,550
I_kwDOJ0Z1Ps6NhpGW
5,293
openchat 8b
{ "login": "zh19990906", "id": 59323683, "node_id": "MDQ6VXNlcjU5MzIzNjgz", "avatar_url": "https://avatars.githubusercontent.com/u/59323683?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zh19990906", "html_url": "https://github.com/zh19990906", "followers_url": "https://api.github.com/users/zh19990906/followers", "following_url": "https://api.github.com/users/zh19990906/following{/other_user}", "gists_url": "https://api.github.com/users/zh19990906/gists{/gist_id}", "starred_url": "https://api.github.com/users/zh19990906/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zh19990906/subscriptions", "organizations_url": "https://api.github.com/users/zh19990906/orgs", "repos_url": "https://api.github.com/users/zh19990906/repos", "events_url": "https://api.github.com/users/zh19990906/events{/privacy}", "received_events_url": "https://api.github.com/users/zh19990906/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
null
[]
null
0
2024-06-26T06:17:09
2024-06-26T06:17:09
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
openchat/openchat-3.6-8b-20240522 https://huggingface.co/openchat/openchat_3.5
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5293/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5293/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/8094
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8094/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8094/comments
https://api.github.com/repos/ollama/ollama/issues/8094/events
https://github.com/ollama/ollama/issues/8094
2,739,752,259
I_kwDOJ0Z1Ps6jTU1D
8,094
No normalization option was provided when calling the embedding model
{ "login": "szzhh", "id": 78521539, "node_id": "MDQ6VXNlcjc4NTIxNTM5", "avatar_url": "https://avatars.githubusercontent.com/u/78521539?v=4", "gravatar_id": "", "url": "https://api.github.com/users/szzhh", "html_url": "https://github.com/szzhh", "followers_url": "https://api.github.com/users/szzhh/followers", "following_url": "https://api.github.com/users/szzhh/following{/other_user}", "gists_url": "https://api.github.com/users/szzhh/gists{/gist_id}", "starred_url": "https://api.github.com/users/szzhh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/szzhh/subscriptions", "organizations_url": "https://api.github.com/users/szzhh/orgs", "repos_url": "https://api.github.com/users/szzhh/repos", "events_url": "https://api.github.com/users/szzhh/events{/privacy}", "received_events_url": "https://api.github.com/users/szzhh/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
5
2024-12-14T10:11:26
2024-12-14T16:39:45
2024-12-14T16:39:45
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue?

As the title says, I want to use ollama to call mxbai-embed-large:latest and output a normalized vector, but ollama does not seem to support `normalized=true`.

### OS

Linux

### GPU

Nvidia

### CPU

Intel

### Ollama version

0.5.1
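Unless and until a server-side flag exists, normalization is cheap to do client-side: divide the returned vector by its L2 norm. A minimal sketch against the `/api/embed` endpoint (the response field names follow the documented API, but treat the exact shape as an assumption to verify against your version):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"math"
	"net/http"
)

func main() {
	body, err := json.Marshal(map[string]any{
		"model": "mxbai-embed-large:latest",
		"input": "hello world",
	})
	if err != nil {
		log.Fatal(err)
	}

	resp, err := http.Post("http://localhost:11434/api/embed",
		"application/json", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var out struct {
		Embeddings [][]float64 `json:"embeddings"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		log.Fatal(err)
	}

	// L2-normalize the first embedding: v / ||v||.
	v := out.Embeddings[0]
	var norm float64
	for _, x := range v {
		norm += x * x
	}
	norm = math.Sqrt(norm)
	for i := range v {
		v[i] /= norm
	}
	fmt.Println("normalized, dim =", len(v))
}
```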
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/users/rick-github/followers", "following_url": "https://api.github.com/users/rick-github/following{/other_user}", "gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}", "starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rick-github/subscriptions", "organizations_url": "https://api.github.com/users/rick-github/orgs", "repos_url": "https://api.github.com/users/rick-github/repos", "events_url": "https://api.github.com/users/rick-github/events{/privacy}", "received_events_url": "https://api.github.com/users/rick-github/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8094/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8094/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1271
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1271/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1271/comments
https://api.github.com/repos/ollama/ollama/issues/1271/events
https://github.com/ollama/ollama/issues/1271
2,010,412,387
I_kwDOJ0Z1Ps531HVj
1,271
Terminal output issues on Windows
{ "login": "clebio", "id": 811175, "node_id": "MDQ6VXNlcjgxMTE3NQ==", "avatar_url": "https://avatars.githubusercontent.com/u/811175?v=4", "gravatar_id": "", "url": "https://api.github.com/users/clebio", "html_url": "https://github.com/clebio", "followers_url": "https://api.github.com/users/clebio/followers", "following_url": "https://api.github.com/users/clebio/following{/other_user}", "gists_url": "https://api.github.com/users/clebio/gists{/gist_id}", "starred_url": "https://api.github.com/users/clebio/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/clebio/subscriptions", "organizations_url": "https://api.github.com/users/clebio/orgs", "repos_url": "https://api.github.com/users/clebio/repos", "events_url": "https://api.github.com/users/clebio/events{/privacy}", "received_events_url": "https://api.github.com/users/clebio/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5860134234, "node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg", "url": "https://api.github.com/repos/ollama/ollama/labels/windows", "name": "windows", "color": "0052CC", "default": false, "description": "" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
3
2023-11-25T01:12:21
2024-09-14T23:10:39
2024-03-12T16:30:37
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I saw that #1262 was merged, so I pulled main and regenerated and built the binary. It runs great, and definitely uses the GPU, now: ``` ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes ggml_init_cublas: found 1 CUDA devices: Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9 ``` However, the terminal interface is broken in a way that I don't understand. Using Git Bash, it doesn't show output: ``` $ ./ollama.exe run orca-mini ``` Whereas if I [add `winpty`](https://stackoverflow.com/questions/32597209/python-not-working-in-the-command-line-of-git-bash): ``` $ winpty ./ollama.exe run orca-mini ←[?25l←[?25l←[?25h←[2K←[1G←[?25h←[?25l←[?25h←[?2004h>>> ←[38;5;245mSend a message (/? for help)←[28D←[0mwhy is the sky blue ←[Kwhy is the sky blue ←[?25l⠙ ←[?25h←[?25l←[2K←[1Gâ ¹ ←[?25h←[?25l←[2K←[1Gâ ¸ ←[?25h←[?25l←[?25l←[2K←[1G←[?25h←[2K←[1G←[?25h The←[?25l←[?25h sky←[?25l←[?25h appears←[?25l←[?25h blue←[?25l←[?25h b ecause←[?25l←[?25h of←[?25l←[?25h a←[?25l←[?25h phenomenon←[?25l←[?25h called←[?25l←[?25h Ray←[?25l←[?25hleigh←[?25l←[?25h scattering←[?25l←[?25h.←[?25l←[?25h When←[?25l ←[?25h sunlight←[?25l←[?25h enters←[?25l←[?25h the←[?25l←[?25h Earth←[?25l←[?25h'←[?25l←[?25hs←[?25l←[?25h atmosphere←[?25l←[?25h,←[?25l←[?25h it←[?25l←[?25h encounters← [?25l←[?25h tiny←[?25l←[?25h particles←[?25l←[?25h such←[?25l←[?25h as←[?25l←[?25h oxygen←[?25l←[?25h and←[?25l←[?25h nitrogen←[?25l←[?25h molecules←[?25l←[?25h.←[?25l←[ ?25h These←[?25l←[?25h particles←[?25l←[?25h sc←[?25l←[?25hatter←[?25l←[?25h the←[?25l←[?25h light←[?25l←[?25h in←[?25l←[?25h all←[?25l←[?25h directions←[?25l←[?25h,←[?2 5l←[?25h but←[?25l←[?25h they←[?25l←[?25h sc←[?25l←[?25hatter←[?25l←[?25h more←[?25l←[?25h easily←[?25l←[?25h towards←[?25l←[?25h the←[?25l←[?25h longer←[?25l←[?25h wave length←[?25l←[?25hs←[?25l←[?25h of←[?25l←[?25h light←[?25l←[?25h such←[?25l←[?25h as←[?25l←[?25h the←[?25l←[?25h v←[?25l←[?25hiolet←[?25l←[?25h and←[?25l←[?25h ind←[?25l ←[?25higo←[?25l←[?25h parts←[?25l←[?25h of←[?25l←[?25h the←[?25l←[?25h spectrum←[?25l←[?25h.←[?25l←[?25h This←[?25l←[?25h means←[?25l←[?25h that←[?25l←[?25h more←[?25l←[ ?25h blue←[?25l←[?25h light←[?25l←[?25h is←[?25l←[?25h scattered←[?25l←[?25h towards←[?25l←[?25h the←[?25l←[?25h observer←[?25l←[?25h,←[?25l←[?25h making←[?25l←[?25h the ←[?25l←[?25h sky←[?25l←[?25h appear←[?25l←[?25h more←[?25l←[?25h blue←[?25l←[?25h than←[?25l←[?25h it←[?25l←[?25h would←[?25l←[?25h be←[?25l←[?25h if←[?25l←[?25h all←[?2 5l←[?25h the←[?25l←[?25h colors←[?25l←[?25h of←[?25l←[?25h light←[?25l←[?25h were←[?25l←[?25h equally←[?25l←[?25h scattered←[?25l←[?25h.←[?25l←[?25h ←[?25l←[?25h>>> ←[38;5;245mSend a message (/? for help)←[28D←[0m←[K←[38;5;245mSend a message (/? for help)←[28D←[0m←[K ←[?2004l ``` Using Command Prompt doesn't fare much better: ``` ollama.exe run orca-mini ←[?25l←[?25l←[?25h←[2K←[1G←[?25h←[?25l←[?25h←[?2004h>>> ←[38;5;245mSend a message (/? for help)←[28D←[0m^D ←[K ←[?2004l ```
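Those `←[?25l`-style sequences are raw ANSI escape codes that the console prints literally instead of interpreting. As a hedged illustration of the classic fix on Windows (not necessarily how Ollama resolved this issue), a program can enable virtual terminal processing for its own stdout via `golang.org/x/sys/windows` before rendering any escape sequences:

```go
//go:build windows

package main

import "golang.org/x/sys/windows"

// enableVT turns on ANSI escape-sequence interpretation for stdout,
// so cursor-control codes like ESC[?25l are rendered, not printed.
func enableVT() error {
	handle, err := windows.GetStdHandle(windows.STD_OUTPUT_HANDLE)
	if err != nil {
		return err
	}
	var mode uint32
	if err := windows.GetConsoleMode(handle, &mode); err != nil {
		return err
	}
	return windows.SetConsoleMode(handle, mode|windows.ENABLE_VIRTUAL_TERMINAL_PROCESSING)
}

func main() {
	if err := enableVT(); err != nil {
		panic(err)
	}
	print("\x1b[?25lhidden-cursor demo\x1b[?25h\n")
}
```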
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1271/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1271/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6489
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6489/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6489/comments
https://api.github.com/repos/ollama/ollama/issues/6489/events
https://github.com/ollama/ollama/issues/6489
2,484,485,771
I_kwDOJ0Z1Ps6UFj6L
6,489
Error 403 occurs when I call ollama's api
{ "login": "brownplayer", "id": 118909356, "node_id": "U_kgDOBxZprA", "avatar_url": "https://avatars.githubusercontent.com/u/118909356?v=4", "gravatar_id": "", "url": "https://api.github.com/users/brownplayer", "html_url": "https://github.com/brownplayer", "followers_url": "https://api.github.com/users/brownplayer/followers", "following_url": "https://api.github.com/users/brownplayer/following{/other_user}", "gists_url": "https://api.github.com/users/brownplayer/gists{/gist_id}", "starred_url": "https://api.github.com/users/brownplayer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/brownplayer/subscriptions", "organizations_url": "https://api.github.com/users/brownplayer/orgs", "repos_url": "https://api.github.com/users/brownplayer/repos", "events_url": "https://api.github.com/users/brownplayer/events{/privacy}", "received_events_url": "https://api.github.com/users/brownplayer/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
10
2024-08-24T10:31:27
2024-08-25T01:52:50
2024-08-25T01:52:50
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Prerequisite: use the C++ interface of ipex-llm as Ollama's acceleration backend, then start the Ollama server (listening on 127.0.0.1:11434). When an Edge browser extension accesses Ollama's API, a 403 error occurs. ![image](https://github.com/user-attachments/assets/b9d4eee0-0293-41e7-9437-23ae4dc660af) ### OS Windows ### GPU Intel ### CPU Intel ### Ollama version _No response_
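403s like this typically come from Ollama's origin check: browser extensions send an `Origin` header the server rejects unless `OLLAMA_ORIGINS` is configured to allow it. A minimal sketch reproducing the difference against a default local server follows; the extension origin string here is a hypothetical placeholder.

```go
package main

import (
	"fmt"
	"net/http"
)

// status requests /api/tags, optionally with an Origin header such as a
// browser or extension would attach automatically.
func status(origin string) int {
	req, _ := http.NewRequest("GET", "http://127.0.0.1:11434/api/tags", nil)
	if origin != "" {
		req.Header.Set("Origin", origin)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	return resp.StatusCode
}

func main() {
	fmt.Println("no origin:     ", status(""))                   // expected 200
	fmt.Println("unknown origin:", status("extension://abc123")) // 403 unless allowed
	// Allowing it means starting the server with something like
	// OLLAMA_ORIGINS="extension://*" in its environment.
}
```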
{ "login": "brownplayer", "id": 118909356, "node_id": "U_kgDOBxZprA", "avatar_url": "https://avatars.githubusercontent.com/u/118909356?v=4", "gravatar_id": "", "url": "https://api.github.com/users/brownplayer", "html_url": "https://github.com/brownplayer", "followers_url": "https://api.github.com/users/brownplayer/followers", "following_url": "https://api.github.com/users/brownplayer/following{/other_user}", "gists_url": "https://api.github.com/users/brownplayer/gists{/gist_id}", "starred_url": "https://api.github.com/users/brownplayer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/brownplayer/subscriptions", "organizations_url": "https://api.github.com/users/brownplayer/orgs", "repos_url": "https://api.github.com/users/brownplayer/repos", "events_url": "https://api.github.com/users/brownplayer/events{/privacy}", "received_events_url": "https://api.github.com/users/brownplayer/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6489/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6489/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1580
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1580/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1580/comments
https://api.github.com/repos/ollama/ollama/issues/1580/events
https://github.com/ollama/ollama/issues/1580
2,046,547,249
I_kwDOJ0Z1Ps55-9Ux
1,580
why am I keep failing in request from my server?
{ "login": "kotran88", "id": 20656932, "node_id": "MDQ6VXNlcjIwNjU2OTMy", "avatar_url": "https://avatars.githubusercontent.com/u/20656932?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kotran88", "html_url": "https://github.com/kotran88", "followers_url": "https://api.github.com/users/kotran88/followers", "following_url": "https://api.github.com/users/kotran88/following{/other_user}", "gists_url": "https://api.github.com/users/kotran88/gists{/gist_id}", "starred_url": "https://api.github.com/users/kotran88/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kotran88/subscriptions", "organizations_url": "https://api.github.com/users/kotran88/orgs", "repos_url": "https://api.github.com/users/kotran88/repos", "events_url": "https://api.github.com/users/kotran88/events{/privacy}", "received_events_url": "https://api.github.com/users/kotran88/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2023-12-18T12:19:58
2023-12-18T13:39:46
2023-12-18T13:39:46
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hello :) I typed `ollama serve`, and with port forwarding, xxx.xxx.xxx (my server):5050 shows a message saying "Ollama is running". So, in Postman, I tried to send a prompt to receive an answer from Ollama. <img width="746" alt="Screenshot 2023-12-18 9.18.12 PM" src="https://github.com/jmorganca/ollama/assets/20656932/b3dc3c78-a07b-452e-8e58-4d0235d17eaf"> <img width="1156" alt="Screenshot 2023-12-18 9.18.40 PM" src="https://github.com/jmorganca/ollama/assets/20656932/7c6cb9e0-1c02-4aac-ae6a-d9dbbbbd1330"> But it causes an error... what might be the problem? I also tried to use it in my Node.js server with the code below: <img width="842" alt="Screenshot 2023-12-18 9.19.20 PM" src="https://github.com/jmorganca/ollama/assets/20656932/13e7ee2f-f027-45f4-baaa-f472656f9c7b"> It just returns a 500 error. <img width="717" alt="Screenshot 2023-12-18 9.19.42 PM" src="https://github.com/jmorganca/ollama/assets/20656932/bbd6db26-3fda-44af-8698-5ea9b4690c1d"> <img width="565" alt="Screenshot 2023-12-18 9.31.08 PM" src="https://github.com/jmorganca/ollama/assets/20656932/cd6e673f-b389-4731-bc52-ce17d59f102d"> What might be the problem?
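For reference, a minimal sketch of a well-formed request against the documented `/api/generate` endpoint; 500s usually mean the JSON body is malformed or the named model hasn't been pulled on the server (the model name here is an assumption).

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	payload, _ := json.Marshal(map[string]any{
		"model":  "llama2", // must already be pulled on the server
		"prompt": "Why is the sky blue?",
		"stream": false, // single JSON response instead of a stream
	})
	resp, err := http.Post("http://127.0.0.1:11434/api/generate",
		"application/json", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}
```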
{ "login": "kotran88", "id": 20656932, "node_id": "MDQ6VXNlcjIwNjU2OTMy", "avatar_url": "https://avatars.githubusercontent.com/u/20656932?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kotran88", "html_url": "https://github.com/kotran88", "followers_url": "https://api.github.com/users/kotran88/followers", "following_url": "https://api.github.com/users/kotran88/following{/other_user}", "gists_url": "https://api.github.com/users/kotran88/gists{/gist_id}", "starred_url": "https://api.github.com/users/kotran88/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kotran88/subscriptions", "organizations_url": "https://api.github.com/users/kotran88/orgs", "repos_url": "https://api.github.com/users/kotran88/repos", "events_url": "https://api.github.com/users/kotran88/events{/privacy}", "received_events_url": "https://api.github.com/users/kotran88/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1580/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1580/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5685
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5685/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5685/comments
https://api.github.com/repos/ollama/ollama/issues/5685/events
https://github.com/ollama/ollama/pull/5685
2,407,253,786
PR_kwDOJ0Z1Ps51TyoM
5,685
Disable mmap by default for Windows ROCm
{ "login": "zsmooter", "id": 15349942, "node_id": "MDQ6VXNlcjE1MzQ5OTQy", "avatar_url": "https://avatars.githubusercontent.com/u/15349942?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zsmooter", "html_url": "https://github.com/zsmooter", "followers_url": "https://api.github.com/users/zsmooter/followers", "following_url": "https://api.github.com/users/zsmooter/following{/other_user}", "gists_url": "https://api.github.com/users/zsmooter/gists{/gist_id}", "starred_url": "https://api.github.com/users/zsmooter/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zsmooter/subscriptions", "organizations_url": "https://api.github.com/users/zsmooter/orgs", "repos_url": "https://api.github.com/users/zsmooter/repos", "events_url": "https://api.github.com/users/zsmooter/events{/privacy}", "received_events_url": "https://api.github.com/users/zsmooter/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
1
2024-07-14T03:54:43
2024-11-23T00:55:12
2024-11-23T00:55:12
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5685", "html_url": "https://github.com/ollama/ollama/pull/5685", "diff_url": "https://github.com/ollama/ollama/pull/5685.diff", "patch_url": "https://github.com/ollama/ollama/pull/5685.patch", "merged_at": null }
As with CUDA, disabling mmap when using ROCm on Windows seems to speed up model loading significantly: I see a >2x speedup in load times on my 7900 XTX with mmap disabled.
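Until a default change lands, mmap can also be turned off per request through the documented `use_mmap` model option; a minimal sketch (the model name is an assumption):

```go
package main

import (
	"bytes"
	"net/http"
)

func main() {
	// use_mmap=false makes the runner read the model into memory up front
	// rather than memory-mapping the file, which this PR reports is >2x
	// faster to load on Windows ROCm.
	body := []byte(`{
		"model": "llama2",
		"prompt": "warm up",
		"stream": false,
		"options": {"use_mmap": false}
	}`)
	resp, err := http.Post("http://127.0.0.1:11434/api/generate",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
}
```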
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5685/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5685/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6108
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6108/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6108/comments
https://api.github.com/repos/ollama/ollama/issues/6108/events
https://github.com/ollama/ollama/pull/6108
2,441,050,958
PR_kwDOJ0Z1Ps53C8lu
6,108
server: fix json marshalling of downloadBlobPart
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers", "following_url": "https://api.github.com/users/bmizerany/following{/other_user}", "gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}", "starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions", "organizations_url": "https://api.github.com/users/bmizerany/orgs", "repos_url": "https://api.github.com/users/bmizerany/repos", "events_url": "https://api.github.com/users/bmizerany/events{/privacy}", "received_events_url": "https://api.github.com/users/bmizerany/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-07-31T22:30:46
2024-07-31T23:01:26
2024-07-31T23:01:25
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6108", "html_url": "https://github.com/ollama/ollama/pull/6108", "diff_url": "https://github.com/ollama/ollama/pull/6108.diff", "patch_url": "https://github.com/ollama/ollama/pull/6108.patch", "merged_at": "2024-07-31T23:01:25" }
The JSON marshalling of downloadBlobPart was incorrect and racy. This fixes it by implementing custom JSON marshalling for downloadBlobPart, which correctly handles serialization of shared memory.
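The general pattern for race-free JSON marshalling of shared state is to take the lock inside `MarshalJSON` and serialize a copy. A generic sketch of the technique follows; it is not the actual `downloadBlobPart` code, and the field names are illustrative.

```go
package main

import (
	"encoding/json"
	"fmt"
	"sync"
)

type part struct {
	mu        sync.Mutex
	Offset    int64
	Completed int64
}

// MarshalJSON locks the struct, copies the fields into an anonymous
// struct without the mutex, and serializes the copy, so concurrent
// writers cannot race with the encoder.
func (p *part) MarshalJSON() ([]byte, error) {
	p.mu.Lock()
	defer p.mu.Unlock()
	return json.Marshal(struct {
		Offset    int64 `json:"offset"`
		Completed int64 `json:"completed"`
	}{p.Offset, p.Completed})
}

func main() {
	p := &part{Offset: 0, Completed: 4096}
	b, _ := json.Marshal(p)
	fmt.Println(string(b)) // {"offset":0,"completed":4096}
}
```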
{ "login": "bmizerany", "id": 46, "node_id": "MDQ6VXNlcjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/46?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bmizerany", "html_url": "https://github.com/bmizerany", "followers_url": "https://api.github.com/users/bmizerany/followers", "following_url": "https://api.github.com/users/bmizerany/following{/other_user}", "gists_url": "https://api.github.com/users/bmizerany/gists{/gist_id}", "starred_url": "https://api.github.com/users/bmizerany/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bmizerany/subscriptions", "organizations_url": "https://api.github.com/users/bmizerany/orgs", "repos_url": "https://api.github.com/users/bmizerany/repos", "events_url": "https://api.github.com/users/bmizerany/events{/privacy}", "received_events_url": "https://api.github.com/users/bmizerany/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6108/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6108/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5353
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5353/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5353/comments
https://api.github.com/repos/ollama/ollama/issues/5353/events
https://github.com/ollama/ollama/pull/5353
2,379,455,173
PR_kwDOJ0Z1Ps5z10qG
5,353
Draft: Support Moore Threads GPU
{ "login": "yeahdongcn", "id": 2831050, "node_id": "MDQ6VXNlcjI4MzEwNTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2831050?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yeahdongcn", "html_url": "https://github.com/yeahdongcn", "followers_url": "https://api.github.com/users/yeahdongcn/followers", "following_url": "https://api.github.com/users/yeahdongcn/following{/other_user}", "gists_url": "https://api.github.com/users/yeahdongcn/gists{/gist_id}", "starred_url": "https://api.github.com/users/yeahdongcn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yeahdongcn/subscriptions", "organizations_url": "https://api.github.com/users/yeahdongcn/orgs", "repos_url": "https://api.github.com/users/yeahdongcn/repos", "events_url": "https://api.github.com/users/yeahdongcn/events{/privacy}", "received_events_url": "https://api.github.com/users/yeahdongcn/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-06-28T02:51:26
2024-07-09T01:42:15
2024-07-09T01:28:51
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5353", "html_url": "https://github.com/ollama/ollama/pull/5353", "diff_url": "https://github.com/ollama/ollama/pull/5353.diff", "patch_url": "https://github.com/ollama/ollama/pull/5353.patch", "merged_at": null }
Moore Threads, a cutting-edge GPU startup, introduces MUSA (Moore Threads Unified System Architecture) as its foundational technology. This pull request marks the initial integration of MT GPU support into Ollama, leveraging MUSA's capabilities to enhance LLM inference performance. I am also working on integrating MTGPU support into llama.cpp and am optimistic that there will be no significant blocker issues. Update 0703: I was able to get it working on MTGPU. <img width="957" alt="Screenshot 2024-07-03 at 15 58 51" src="https://github.com/ollama/ollama/assets/2831050/1237baba-71be-4335-853c-9120bcdd1f8d">
{ "login": "yeahdongcn", "id": 2831050, "node_id": "MDQ6VXNlcjI4MzEwNTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2831050?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yeahdongcn", "html_url": "https://github.com/yeahdongcn", "followers_url": "https://api.github.com/users/yeahdongcn/followers", "following_url": "https://api.github.com/users/yeahdongcn/following{/other_user}", "gists_url": "https://api.github.com/users/yeahdongcn/gists{/gist_id}", "starred_url": "https://api.github.com/users/yeahdongcn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yeahdongcn/subscriptions", "organizations_url": "https://api.github.com/users/yeahdongcn/orgs", "repos_url": "https://api.github.com/users/yeahdongcn/repos", "events_url": "https://api.github.com/users/yeahdongcn/events{/privacy}", "received_events_url": "https://api.github.com/users/yeahdongcn/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5353/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5353/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/3181
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3181/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3181/comments
https://api.github.com/repos/ollama/ollama/issues/3181/events
https://github.com/ollama/ollama/issues/3181
2,190,114,182
I_kwDOJ0Z1Ps6Cin2G
3,181
Suppressing output of all the metadata.
{ "login": "phalexo", "id": 4603365, "node_id": "MDQ6VXNlcjQ2MDMzNjU=", "avatar_url": "https://avatars.githubusercontent.com/u/4603365?v=4", "gravatar_id": "", "url": "https://api.github.com/users/phalexo", "html_url": "https://github.com/phalexo", "followers_url": "https://api.github.com/users/phalexo/followers", "following_url": "https://api.github.com/users/phalexo/following{/other_user}", "gists_url": "https://api.github.com/users/phalexo/gists{/gist_id}", "starred_url": "https://api.github.com/users/phalexo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/phalexo/subscriptions", "organizations_url": "https://api.github.com/users/phalexo/orgs", "repos_url": "https://api.github.com/users/phalexo/repos", "events_url": "https://api.github.com/users/phalexo/events{/privacy}", "received_events_url": "https://api.github.com/users/phalexo/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
0
2024-03-16T16:03:51
2024-03-16T16:40:35
2024-03-16T16:40:35
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I want to use Ollama to serve a local LLM with the OpenAI API to allow Pythogora/gpt-pilot to interact with it. The back end constantly prints out the crap below: ```bash {"function":"launch_slot_with_data","id_slot":0,"id_task":2842,"level":"INFO","line":1002,"msg":"slot is processing task","tid":"139720604485376","timestamp":1710547752} {"function":"update_slots","id_slot":0,"id_task":2842,"level":"INFO","line":1916,"msg":"kv cache rm [p0, end)","p0":8,"tid":"139720604485376","timestamp":1710547752} {"function":"update_slots","id_slot":0,"id_task":2842,"level":"INFO","line":1916,"msg":"kv cache rm [p0, end)","p0":520,"tid":"139720604485376","timestamp":1710547760} {"function":"update_slots","id_slot":0,"id_task":2842,"level":"INFO","line":1916,"msg":"kv cache rm [p0, end)","p0":1032,"tid":"139720604485376","timestamp":1710547767} {"function":"update_slots","id_slot":0,"id_task":2842,"level":"INFO","line":1916,"msg":"kv cache rm [p0, end)","p0":1544,"tid":"139720604485376","timestamp":1710547775} DONE{"function":"print_timings","id_slot":0,"id_task":2842,"level":"INFO","line":310,"msg":"prompt eval time = 26782.10 ms / 1736 tokens ( 15.43 ms per token, 64.82 tokens per second)","n_prompt_tokens_processed":1736,"n_tokens_second":64.81942506738177,"t_prompt_processing":26782.095,"t_token":15.427474078341014,"tid":"139720604485376","timestamp":1710547780} {"function":"print_timings","id_slot":0,"id_task":2842,"level":"INFO","line":326,"msg":"generation eval time = 612.50 ms / 3 runs ( 204.17 ms per token, 4.90 tokens per second)","n_decoded":3,"n_tokens_second":4.897951187018471,"t_token":204.167,"t_token_generation":612.501,"tid":"139720604485376","timestamp":1710547780} {"function":"print_timings","id_slot":0,"id_task":2842,"level":"INFO","line":337,"msg":" total time = 27394.60 ms","t_prompt_processing":26782.095,"t_token_generation":612.501,"t_total":27394.596,"tid":"139720604485376","timestamp":1710547780} {"function":"update_slots","id_slot":0,"id_task":2842,"level":"INFO","line":1611,"msg":"slot released","n_cache_tokens":1746,"n_ctx":2048,"n_past":1746,"n_system_tokens":0,"tid":"139720604485376","timestamp":1710547780,"truncated":false} {"function":"update_slots","level":"INFO","line":1637,"msg":"all slots are idle","tid":"139720604485376","timestamp":1710547780} {"function":"update_slots","level":"INFO","line":1637,"msg":"all slots are idle","tid":"139720604485376","timestamp":1710547780} [GIN] 2024/03/15 - 20:09:40 | 200 | 27.405338872s | 127.0.0.1 | POST "/v1/chat/completions" ``` ### What did you expect to see? I want some means to see NO metadata at all. I just want to see the prompt text and the answer text and none of the spurious stuff. ### Steps to reproduce It is obvious: "ollama run somemodel" ### Are there any recent changes that introduced the issue? _No response_ ### OS _No response_ ### Architecture _No response_ ### Platform _No response_ ### Ollama version _No response_ ### GPU _No response_ ### GPU info _No response_ ### CPU _No response_ ### Other software _No response_
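I'm not aware of a clean "quiet" flag; a blunt, hedged workaround is to launch the server with its log stream discarded, since the JSON metadata and GIN access lines go to the server process's stderr. A minimal sketch, assuming `ollama` is on the PATH:

```go
package main

import (
	"io"
	"os/exec"
)

func main() {
	cmd := exec.Command("ollama", "serve")
	cmd.Stdout = io.Discard
	cmd.Stderr = io.Discard // the JSON log lines and GIN access logs land here
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```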
{ "login": "phalexo", "id": 4603365, "node_id": "MDQ6VXNlcjQ2MDMzNjU=", "avatar_url": "https://avatars.githubusercontent.com/u/4603365?v=4", "gravatar_id": "", "url": "https://api.github.com/users/phalexo", "html_url": "https://github.com/phalexo", "followers_url": "https://api.github.com/users/phalexo/followers", "following_url": "https://api.github.com/users/phalexo/following{/other_user}", "gists_url": "https://api.github.com/users/phalexo/gists{/gist_id}", "starred_url": "https://api.github.com/users/phalexo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/phalexo/subscriptions", "organizations_url": "https://api.github.com/users/phalexo/orgs", "repos_url": "https://api.github.com/users/phalexo/repos", "events_url": "https://api.github.com/users/phalexo/events{/privacy}", "received_events_url": "https://api.github.com/users/phalexo/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3181/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3181/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7013
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7013/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7013/comments
https://api.github.com/repos/ollama/ollama/issues/7013/events
https://github.com/ollama/ollama/issues/7013
2,553,931,830
I_kwDOJ0Z1Ps6YOeg2
7,013
Option to Override a Model's Memory Requirements
{ "login": "dabockster", "id": 2431938, "node_id": "MDQ6VXNlcjI0MzE5Mzg=", "avatar_url": "https://avatars.githubusercontent.com/u/2431938?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dabockster", "html_url": "https://github.com/dabockster", "followers_url": "https://api.github.com/users/dabockster/followers", "following_url": "https://api.github.com/users/dabockster/following{/other_user}", "gists_url": "https://api.github.com/users/dabockster/gists{/gist_id}", "starred_url": "https://api.github.com/users/dabockster/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dabockster/subscriptions", "organizations_url": "https://api.github.com/users/dabockster/orgs", "repos_url": "https://api.github.com/users/dabockster/repos", "events_url": "https://api.github.com/users/dabockster/events{/privacy}", "received_events_url": "https://api.github.com/users/dabockster/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA", "url": "https://api.github.com/repos/ollama/ollama/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "General questions" }, { "id": 5860134234, "node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg", "url": "https://api.github.com/repos/ollama/ollama/labels/windows", "name": "windows", "color": "0052CC", "default": false, "description": "" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
2
2024-09-28T01:07:23
2025-01-09T14:15:35
2024-09-28T22:49:17
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I was trying to load the 70B Llama 3 model and Ollama says it needs 33.6 GB but only 30.5 GB of RAM is available. I believe this is a safety check Meta put into the model, so I want the ability to override it and attempt to run the model with less memory. I know this will likely dip into swap/page-file space, possibly even causing kernel panics and BSODs, but I still want to make the attempt. _Computer, disable holodeck safety protocols. Authorization Picard-4-7-Alpha-Tango._
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7013/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7013/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/91
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/91/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/91/comments
https://api.github.com/repos/ollama/ollama/issues/91/events
https://github.com/ollama/ollama/pull/91
1,808,594,224
PR_kwDOJ0Z1Ps5VtvIK
91
fix stream errors
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2023-07-17T20:41:20
2023-07-20T19:25:47
2023-07-20T19:22:00
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/91", "html_url": "https://github.com/ollama/ollama/pull/91", "diff_url": "https://github.com/ollama/ollama/pull/91.diff", "patch_url": "https://github.com/ollama/ollama/pull/91.patch", "merged_at": "2023-07-20T19:22:00" }
Once the stream is created, it's too late to update the response headers (i.e. the status code), so any and all errors must be returned in the stream itself.
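The resulting pattern looks like this generic sketch: once the handler has flushed its first chunk, the 200 status is already on the wire, so an error can only be delivered as a payload inside the stream body.

```go
package main

import (
	"encoding/json"
	"errors"
	"net/http"
)

func doWork(i int) error {
	if i == 2 {
		return errors.New("model backend failed")
	}
	return nil
}

func streamHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/x-ndjson")
	enc := json.NewEncoder(w)
	flusher := w.(http.Flusher)

	for i := 0; i < 3; i++ {
		if err := doWork(i); err != nil {
			// Headers (and the 200 status) were sent with the first
			// flush, so the error must travel inside the stream.
			enc.Encode(map[string]string{"error": err.Error()})
			flusher.Flush()
			return
		}
		enc.Encode(map[string]any{"chunk": i})
		flusher.Flush()
	}
}

func main() {
	http.HandleFunc("/stream", streamHandler)
	http.ListenAndServe("127.0.0.1:8080", nil)
}
```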
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/91/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/91/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7137
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7137/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7137/comments
https://api.github.com/repos/ollama/ollama/issues/7137/events
https://github.com/ollama/ollama/pull/7137
2,573,657,476
PR_kwDOJ0Z1Ps59-RSp
7,137
llama: add compiler tags for cpu features
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
2
2024-10-08T16:14:44
2024-10-17T20:43:24
2024-10-17T20:43:21
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7137", "html_url": "https://github.com/ollama/ollama/pull/7137", "diff_url": "https://github.com/ollama/ollama/pull/7137.diff", "patch_url": "https://github.com/ollama/ollama/pull/7137.patch", "merged_at": "2024-10-17T20:43:21" }
Replaces #7009, now on main. Supports local builds with customized CPU flags for both the CPU runner and the GPU runners. Some users want no vector flags in the GPU runners; others want ~all the vector extensions enabled. Each runner we add to the official build adds significant overhead (size and build time), so this enhancement makes it much easier for users to build their own customized version if our default runners (CPU: [none, avx, avx2]; GPU: [avx]) don't address their needs. This PR does not wire up runtime discovery of the requirements, so it is only suitable for adding additional vector flags to GPU runners for now. I'll follow up later with support for GPU runners without any vector flags, along with docs.
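As a hedged illustration of the underlying Go mechanism (generic build tags, not Ollama's actual tag names), two small files can provide tag-selected variants of the same constant, chosen at build time with `go build -tags avx2`:

```go
// File cpu_avx2.go — compiled only when building with: go build -tags avx2
//go:build avx2

package cpu

// Features names the vector extensions this build was compiled for.
const Features = "avx avx2"
```

```go
// File cpu_default.go — the fallback compiled when no avx2 tag is given
//go:build !avx2

package cpu

const Features = "baseline"
```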
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7137/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7137/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2343
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2343/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2343/comments
https://api.github.com/repos/ollama/ollama/issues/2343/events
https://github.com/ollama/ollama/issues/2343
2,116,879,740
I_kwDOJ0Z1Ps5-LQV8
2,343
Feature Request - Support for ollama Keep alive
{ "login": "twalderman", "id": 78627063, "node_id": "MDQ6VXNlcjc4NjI3MDYz", "avatar_url": "https://avatars.githubusercontent.com/u/78627063?v=4", "gravatar_id": "", "url": "https://api.github.com/users/twalderman", "html_url": "https://github.com/twalderman", "followers_url": "https://api.github.com/users/twalderman/followers", "following_url": "https://api.github.com/users/twalderman/following{/other_user}", "gists_url": "https://api.github.com/users/twalderman/gists{/gist_id}", "starred_url": "https://api.github.com/users/twalderman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/twalderman/subscriptions", "organizations_url": "https://api.github.com/users/twalderman/orgs", "repos_url": "https://api.github.com/users/twalderman/repos", "events_url": "https://api.github.com/users/twalderman/events{/privacy}", "received_events_url": "https://api.github.com/users/twalderman/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
1
2024-02-04T04:39:10
2024-02-20T03:57:40
2024-02-20T03:57:40
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
There is a new API parameter for keeping the model loaded. It would be great to have it as a passable parameter in the Modelfile.
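That parameter is `keep_alive`, which the generate and chat endpoints already accept per request; a minimal sketch (the model name is an assumption):

```go
package main

import (
	"bytes"
	"net/http"
)

func main() {
	// keep_alive controls how long the model stays in memory after the
	// request: a duration string such as "10m", or -1 to keep it loaded
	// indefinitely.
	body := []byte(`{
		"model": "llama2",
		"prompt": "hello",
		"stream": false,
		"keep_alive": "10m"
	}`)
	resp, err := http.Post("http://127.0.0.1:11434/api/generate",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
}
```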
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2343/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2343/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5777
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5777/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5777/comments
https://api.github.com/repos/ollama/ollama/issues/5777/events
https://github.com/ollama/ollama/issues/5777
2,416,984,810
I_kwDOJ0Z1Ps6QEELq
5,777
Mistral Nemo Please!
{ "login": "stevengans", "id": 10685309, "node_id": "MDQ6VXNlcjEwNjg1MzA5", "avatar_url": "https://avatars.githubusercontent.com/u/10685309?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevengans", "html_url": "https://github.com/stevengans", "followers_url": "https://api.github.com/users/stevengans/followers", "following_url": "https://api.github.com/users/stevengans/following{/other_user}", "gists_url": "https://api.github.com/users/stevengans/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevengans/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevengans/subscriptions", "organizations_url": "https://api.github.com/users/stevengans/orgs", "repos_url": "https://api.github.com/users/stevengans/repos", "events_url": "https://api.github.com/users/stevengans/events{/privacy}", "received_events_url": "https://api.github.com/users/stevengans/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
closed
false
null
[]
null
28
2024-07-18T17:30:36
2024-07-23T12:27:30
2024-07-22T20:34:40
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
https://mistral.ai/news/mistral-nemo/
{ "login": "stevengans", "id": 10685309, "node_id": "MDQ6VXNlcjEwNjg1MzA5", "avatar_url": "https://avatars.githubusercontent.com/u/10685309?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevengans", "html_url": "https://github.com/stevengans", "followers_url": "https://api.github.com/users/stevengans/followers", "following_url": "https://api.github.com/users/stevengans/following{/other_user}", "gists_url": "https://api.github.com/users/stevengans/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevengans/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevengans/subscriptions", "organizations_url": "https://api.github.com/users/stevengans/orgs", "repos_url": "https://api.github.com/users/stevengans/repos", "events_url": "https://api.github.com/users/stevengans/events{/privacy}", "received_events_url": "https://api.github.com/users/stevengans/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5777/reactions", "total_count": 48, "+1": 48, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5777/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1344
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1344/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1344/comments
https://api.github.com/repos/ollama/ollama/issues/1344/events
https://github.com/ollama/ollama/issues/1344
2,020,533,276
I_kwDOJ0Z1Ps54buQc
1,344
Beam search (best of) for completion API
{ "login": "walking-octopus", "id": 46994949, "node_id": "MDQ6VXNlcjQ2OTk0OTQ5", "avatar_url": "https://avatars.githubusercontent.com/u/46994949?v=4", "gravatar_id": "", "url": "https://api.github.com/users/walking-octopus", "html_url": "https://github.com/walking-octopus", "followers_url": "https://api.github.com/users/walking-octopus/followers", "following_url": "https://api.github.com/users/walking-octopus/following{/other_user}", "gists_url": "https://api.github.com/users/walking-octopus/gists{/gist_id}", "starred_url": "https://api.github.com/users/walking-octopus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/walking-octopus/subscriptions", "organizations_url": "https://api.github.com/users/walking-octopus/orgs", "repos_url": "https://api.github.com/users/walking-octopus/repos", "events_url": "https://api.github.com/users/walking-octopus/events{/privacy}", "received_events_url": "https://api.github.com/users/walking-octopus/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
1
2023-12-01T10:03:34
2024-11-21T08:24:39
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Beam search is a sampling mechanism by which we maximize the probability of not just the next token, but the entire completion. While it can be ignored for simpler uses, any form of reasoning, especially with a tiny model, requires beam search to backtrack from incorrect steps. llama.cpp already supports beam search, so it should be trivial to implement.
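For intuition, a self-contained toy sketch of the technique over a stub next-token scorer (nothing Ollama-specific): at each step, keep only the k highest-probability partial sequences instead of sampling a single continuation.

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

type beam struct {
	tokens  []string
	logProb float64
}

// next returns a toy next-token distribution; a real model would
// compute this from the sequence generated so far.
func next(tokens []string) map[string]float64 {
	return map[string]float64{"a": 0.5, "b": 0.3, "c": 0.2}
}

func beamSearch(width, steps int) []beam {
	beams := []beam{{}}
	for s := 0; s < steps; s++ {
		var candidates []beam
		// Expand every surviving beam by every possible next token.
		for _, b := range beams {
			for tok, p := range next(b.tokens) {
				candidates = append(candidates, beam{
					tokens:  append(append([]string{}, b.tokens...), tok),
					logProb: b.logProb + math.Log(p),
				})
			}
		}
		// Keep only the `width` most probable partial completions.
		sort.Slice(candidates, func(i, j int) bool {
			return candidates[i].logProb > candidates[j].logProb
		})
		if len(candidates) > width {
			candidates = candidates[:width]
		}
		beams = candidates
	}
	return beams
}

func main() {
	for _, b := range beamSearch(2, 3) {
		fmt.Printf("%v  logp=%.3f\n", b.tokens, b.logProb)
	}
}
```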
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1344/reactions", "total_count": 14, "+1": 14, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1344/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/7574
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7574/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7574/comments
https://api.github.com/repos/ollama/ollama/issues/7574/events
https://github.com/ollama/ollama/issues/7574
2,643,589,468
I_kwDOJ0Z1Ps6dkflc
7,574
LLaMa 3.2 90B on multi GPU crashes
{ "login": "BBOBDI", "id": 145003778, "node_id": "U_kgDOCKSVAg", "avatar_url": "https://avatars.githubusercontent.com/u/145003778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BBOBDI", "html_url": "https://github.com/BBOBDI", "followers_url": "https://api.github.com/users/BBOBDI/followers", "following_url": "https://api.github.com/users/BBOBDI/following{/other_user}", "gists_url": "https://api.github.com/users/BBOBDI/gists{/gist_id}", "starred_url": "https://api.github.com/users/BBOBDI/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BBOBDI/subscriptions", "organizations_url": "https://api.github.com/users/BBOBDI/orgs", "repos_url": "https://api.github.com/users/BBOBDI/repos", "events_url": "https://api.github.com/users/BBOBDI/events{/privacy}", "received_events_url": "https://api.github.com/users/BBOBDI/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6430601766, "node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg", "url": "https://api.github.com/repos/ollama/ollama/labels/nvidia", "name": "nvidia", "color": "8CDB00", "default": false, "description": "Issues relating to Nvidia GPUs and CUDA" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
6
2024-11-08T10:28:10
2024-11-08T22:08:52
2024-11-08T22:08:52
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Hello! My problem may be similar to issue 7568. I think there is a problem with distributing the Llama 3.2 90B model across multiple GPUs. When it runs on a single GPU (quantized), it works, but when it runs on multiple GPUs, it crashes. On my server running Debian Bookworm with 4x Nvidia H100 NVL GPUs, I can easily run a 4-bit quantized version of the Llama 3.2 90B model. Here is an `ollama run` session in that case: $ ollama run llama3.2-vision:90b-instruct-q4_K_M >>> What do you see in this picture? ./TeddyBear.jpg Added image './TeddyBear.jpg' The image shows a teddy bear. And here is the ollama serve 0.4.0 output: [OLLAMA_SERVE_llama3.2-vision_90b-instruct-q4_K_M.log](https://github.com/user-attachments/files/17676883/OLLAMA_SERVE_llama3.2-vision_90b-instruct-q4_K_M.log) But when I run the 8-bit quantized model (spread across my 4x Nvidia H100 GPUs), here is what I get: $ ollama run llama3.2-vision:90b-instruct-q8_0 >>> What do you see in this picture? ./TeddyBear.jpg Added image './TeddyBear.jpg' Error: POST predict: Post "http://127.0.0.1:40459/completion": EOF And here is the ollama serve 0.4.0 output I get: [OLLAMA_SERVE_llama3.2-vision_90b-instruct-q8_0.log](https://github.com/user-attachments/files/17676927/OLLAMA_SERVE_llama3.2-vision_90b-instruct-q8_0.log) The program crashes with a segmentation fault (as in issue 7568 mentioned earlier). Can you take a look at this, please? Thanks in advance! ### OS Linux ### GPU Nvidia ### CPU Intel ### Ollama version 0.4.0
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7574/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7574/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1796
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1796/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1796/comments
https://api.github.com/repos/ollama/ollama/issues/1796/events
https://github.com/ollama/ollama/issues/1796
2,066,606,956
I_kwDOJ0Z1Ps57Lets
1,796
Readme refers to 404 docker documentation
{ "login": "tommedema", "id": 331833, "node_id": "MDQ6VXNlcjMzMTgzMw==", "avatar_url": "https://avatars.githubusercontent.com/u/331833?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tommedema", "html_url": "https://github.com/tommedema", "followers_url": "https://api.github.com/users/tommedema/followers", "following_url": "https://api.github.com/users/tommedema/following{/other_user}", "gists_url": "https://api.github.com/users/tommedema/gists{/gist_id}", "starred_url": "https://api.github.com/users/tommedema/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tommedema/subscriptions", "organizations_url": "https://api.github.com/users/tommedema/orgs", "repos_url": "https://api.github.com/users/tommedema/repos", "events_url": "https://api.github.com/users/tommedema/events{/privacy}", "received_events_url": "https://api.github.com/users/tommedema/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
2
2024-01-05T01:58:21
2024-01-05T03:23:51
2024-01-05T03:23:51
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
The main [readme](https://github.com/jmorganca/ollama/blob/main/docs/README.md) refers to https://github.com/jmorganca/ollama/blob/main/docs/docker.md, which returns a 404. Is Docker still supported?
{ "login": "technovangelist", "id": 633681, "node_id": "MDQ6VXNlcjYzMzY4MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/technovangelist", "html_url": "https://github.com/technovangelist", "followers_url": "https://api.github.com/users/technovangelist/followers", "following_url": "https://api.github.com/users/technovangelist/following{/other_user}", "gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}", "starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions", "organizations_url": "https://api.github.com/users/technovangelist/orgs", "repos_url": "https://api.github.com/users/technovangelist/repos", "events_url": "https://api.github.com/users/technovangelist/events{/privacy}", "received_events_url": "https://api.github.com/users/technovangelist/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1796/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1796/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4411
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4411/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4411/comments
https://api.github.com/repos/ollama/ollama/issues/4411/events
https://github.com/ollama/ollama/pull/4411
2,293,878,091
PR_kwDOJ0Z1Ps5vUQLp
4,411
removed inconsistent punctuation
{ "login": "joshyan1", "id": 76125168, "node_id": "MDQ6VXNlcjc2MTI1MTY4", "avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joshyan1", "html_url": "https://github.com/joshyan1", "followers_url": "https://api.github.com/users/joshyan1/followers", "following_url": "https://api.github.com/users/joshyan1/following{/other_user}", "gists_url": "https://api.github.com/users/joshyan1/gists{/gist_id}", "starred_url": "https://api.github.com/users/joshyan1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joshyan1/subscriptions", "organizations_url": "https://api.github.com/users/joshyan1/orgs", "repos_url": "https://api.github.com/users/joshyan1/repos", "events_url": "https://api.github.com/users/joshyan1/events{/privacy}", "received_events_url": "https://api.github.com/users/joshyan1/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-05-13T21:28:59
2024-05-13T22:30:46
2024-05-13T22:30:46
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4411", "html_url": "https://github.com/ollama/ollama/pull/4411", "diff_url": "https://github.com/ollama/ollama/pull/4411.diff", "patch_url": "https://github.com/ollama/ollama/pull/4411.patch", "merged_at": "2024-05-13T22:30:46" }
Removed the period in `ollama serve -h`. Resolves https://github.com/ollama/ollama/issues/4410
{ "login": "joshyan1", "id": 76125168, "node_id": "MDQ6VXNlcjc2MTI1MTY4", "avatar_url": "https://avatars.githubusercontent.com/u/76125168?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joshyan1", "html_url": "https://github.com/joshyan1", "followers_url": "https://api.github.com/users/joshyan1/followers", "following_url": "https://api.github.com/users/joshyan1/following{/other_user}", "gists_url": "https://api.github.com/users/joshyan1/gists{/gist_id}", "starred_url": "https://api.github.com/users/joshyan1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joshyan1/subscriptions", "organizations_url": "https://api.github.com/users/joshyan1/orgs", "repos_url": "https://api.github.com/users/joshyan1/repos", "events_url": "https://api.github.com/users/joshyan1/events{/privacy}", "received_events_url": "https://api.github.com/users/joshyan1/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4411/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4411/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5259
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5259/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5259/comments
https://api.github.com/repos/ollama/ollama/issues/5259/events
https://github.com/ollama/ollama/issues/5259
2,371,106,359
I_kwDOJ0Z1Ps6NVDY3
5,259
Support Multiple Types for OpenAI Completions Endpoint
{ "login": "royjhan", "id": 65097070, "node_id": "MDQ6VXNlcjY1MDk3MDcw", "avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4", "gravatar_id": "", "url": "https://api.github.com/users/royjhan", "html_url": "https://github.com/royjhan", "followers_url": "https://api.github.com/users/royjhan/followers", "following_url": "https://api.github.com/users/royjhan/following{/other_user}", "gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}", "starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/royjhan/subscriptions", "organizations_url": "https://api.github.com/users/royjhan/orgs", "repos_url": "https://api.github.com/users/royjhan/repos", "events_url": "https://api.github.com/users/royjhan/events{/privacy}", "received_events_url": "https://api.github.com/users/royjhan/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
1
2024-06-24T21:14:31
2024-07-22T10:21:52
null
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Allow `v1/completions` to handle `[]string`, `[]int`, and `[][]int` in addition to just a `string`.
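As a sketch of what this could look like server-side, here is a minimal Go example that defers decoding of the `prompt` field and then tries each accepted shape in turn. The type and function names are illustrative assumptions, not the actual Ollama OpenAI-compatibility code, and token arrays are only summarized rather than detokenized:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// completionRequest delays decoding of prompt so it can accept
// several JSON shapes (hypothetical type, not Ollama's real one).
type completionRequest struct {
	Model  string          `json:"model"`
	Prompt json.RawMessage `json:"prompt"`
}

// decodePrompt tries string, []string, []int, and [][]int in turn,
// normalizing to a slice of prompt strings. Token arrays are shown
// as placeholders; a real server would detokenize them.
func decodePrompt(raw json.RawMessage) ([]string, error) {
	var s string
	if err := json.Unmarshal(raw, &s); err == nil {
		return []string{s}, nil
	}
	var ss []string
	if err := json.Unmarshal(raw, &ss); err == nil {
		return ss, nil
	}
	var toks []int
	if err := json.Unmarshal(raw, &toks); err == nil {
		return []string{fmt.Sprintf("<%d tokens>", len(toks))}, nil
	}
	var batches [][]int
	if err := json.Unmarshal(raw, &batches); err == nil {
		out := make([]string, 0, len(batches))
		for _, b := range batches {
			out = append(out, fmt.Sprintf("<%d tokens>", len(b)))
		}
		return out, nil
	}
	return nil, fmt.Errorf("prompt must be a string, []string, []int, or [][]int")
}

func main() {
	for _, body := range []string{
		`{"model":"m","prompt":"hello"}`,
		`{"model":"m","prompt":["a","b"]}`,
		`{"model":"m","prompt":[1,2,3]}`,
		`{"model":"m","prompt":[[1,2],[3]]}`,
	} {
		var req completionRequest
		if err := json.Unmarshal([]byte(body), &req); err != nil {
			panic(err)
		}
		prompts, err := decodePrompt(req.Prompt)
		fmt.Println(prompts, err)
	}
}
```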
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5259/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5259/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/3741
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3741/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3741/comments
https://api.github.com/repos/ollama/ollama/issues/3741/events
https://github.com/ollama/ollama/issues/3741
2,251,885,195
I_kwDOJ0Z1Ps6GOQqL
3,741
Please accept slow network connections when loading models
{ "login": "igorschlum", "id": 2884312, "node_id": "MDQ6VXNlcjI4ODQzMTI=", "avatar_url": "https://avatars.githubusercontent.com/u/2884312?v=4", "gravatar_id": "", "url": "https://api.github.com/users/igorschlum", "html_url": "https://github.com/igorschlum", "followers_url": "https://api.github.com/users/igorschlum/followers", "following_url": "https://api.github.com/users/igorschlum/following{/other_user}", "gists_url": "https://api.github.com/users/igorschlum/gists{/gist_id}", "starred_url": "https://api.github.com/users/igorschlum/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/igorschlum/subscriptions", "organizations_url": "https://api.github.com/users/igorschlum/orgs", "repos_url": "https://api.github.com/users/igorschlum/repos", "events_url": "https://api.github.com/users/igorschlum/events{/privacy}", "received_events_url": "https://api.github.com/users/igorschlum/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6677370291, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCVsw", "url": "https://api.github.com/repos/ollama/ollama/labels/networking", "name": "networking", "color": "0B5368", "default": false, "description": "Issues relating to ollama pull and push" } ]
closed
false
null
[]
null
1
2024-04-19T01:24:24
2024-08-11T22:52:36
2024-08-11T22:52:35
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Downloading of models on slow networks stops too frequently:
(base) igor@macigor ~ % ollama run llava:7b
pulling manifest
pulling 170370233dd5... 23% ▕███ ▏ 959 MB/4.1 GB 882 KB/s 59m29s
Error: max retries exceeded: Get "https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/17/170370233dd5c5415250a2ecd5c71586352850729062ccef1496385647293868/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%!F(MISSING)20240418%!F(MISSING)auto%!F(MISSING)s3%!F(MISSING)aws4_request&X-Amz-Date=20240418T205718Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=a3c02ead244fe32e877db7493a63034cfd737643c3fb8ab4bbc86d75c768d3f1": net/http: TLS handshake timeout
(base) igor@macigor ~ % ollama run llava:7b
pulling manifest
pulling 170370233dd5... 50% ▕███████ ▏ 2.0 GB/4.1 GB 146 KB/s 3h56m
Error: max retries exceeded: Get "https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/17/170370233dd5c5415250a2ecd5c71586352850729062ccef1496385647293868/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%!F(MISSING)20240418%!F(MISSING)auto%!F(MISSING)s3%!F(MISSING)aws4_request&X-Amz-Date=20240418T213225Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=0cdc6db7e37852c2302d7a2cebb0efeee5136292fddcdb90654a3080224be9c8": read tcp 192.168.1.82:50082->104.18.9.90:443: read: connection reset by peer
(base) igor@macigor ~ % ollama run llava:7b
pulling manifest
pulling 170370233dd5... 77% ▕████████████ ▏ 3.2 GB/4.1 GB 1.5 MB/s 10m14s
Error: max retries exceeded: Get "https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/17/170370233dd5c5415250a2ecd5c71586352850729062ccef1496385647293868/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%!F(MISSING)20240418%!F(MISSING)auto%!F(MISSING)s3%!F(MISSING)aws4_request&X-Amz-Date=20240418T214548Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=83897005cf68db7e5276cc7a6329e8eac7bad00ff655d8cfd87412e665328af0": net/http: TLS handshake timeout
(base) igor@macigor ~ % ollama run llava:7b
pulling manifest
pulling 170370233dd5... 100% ▕████████████████▏ 4.1 GB
pulling 72d6f08a42f6... 100% ▕████████████████▏ 624 MB
pulling 43070e2d4e53... 100% ▕████████████████▏ 11 KB
pulling c43332387573... 100% ▕████████████████▏ 67 B
pulling ed11eda7790d... 100% ▕████████████████▏ 30 B
pulling 7c658f9561e5... 100% ▕████████████████▏ 564 B
verifying sha256 digest
writing manifest
removing any unused layers
success
>>> /exit
(base) igor@macigor ~ % ollama run llama3
pulling manifest
pulling 00e1317cbf74... 10% ▕█ ▏ 473 MB/4.7 GB 1.4 MB/s 49m53s
Error: max retries exceeded: Get "https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/00/00e1317cbf74d901080d7100f57580ba8dd8de57203072dc6f668324ba545f29/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%!F(MISSING)20240418%!F(MISSING)auto%!F(MISSING)s3%!F(MISSING)aws4_request&X-Amz-Date=20240418T224502Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=7a675059c277b61e2e37ffa05bebcfdc5ef3b141bf9f794b9ff3797e926c6429": net/http: TLS handshake timeout
(base) igor@macigor ~ % ollama run llama3
pulling manifest
pulling 00e1317cbf74... 30% ▕████ ▏ 1.4 GB/4.7 GB 1.3 MB/s 41m46s
Error: max retries exceeded: Get "https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/00/00e1317cbf74d901080d7100f57580ba8dd8de57203072dc6f668324ba545f29/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%!F(MISSING)20240418%!F(MISSING)auto%!F(MISSING)s3%!F(MISSING)aws4_request&X-Amz-Date=20240418T225758Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=6e860ac70cb81d1a0c7b2830264bec22398ac377382009088111f3117abd88bb": net/http: TLS handshake timeout
(base) igor@macigor ~ % ollama run llama3
pulling manifest
pulling 00e1317cbf74... 47% ▕███████ ▏ 2.2 GB/4.7 GB 1.3 MB/s 31m49s
Error: max retries exceeded: Get "https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/00/00e1317cbf74d901080d7100f57580ba8dd8de57203072dc6f668324ba545f29/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%!F(MISSING)20240419%!F(MISSING)auto%!F(MISSING)s3%!F(MISSING)aws4_request&X-Amz-Date=20240419T003938Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=4c514b8fd50d56f95adcd60d9efac7f4f2177b0bbbe5e5200f254ac3b1a804c1": read tcp [2a01:cb20:4014:3400:ad09:e45b:4793:12ea]:57813->[2606:4700::6812:95a]:443: read: connection reset by peer
(base) igor@macigor ~ % ollama run llama3
pulling manifest
pulling 00e1317cbf74... 61% ▕█████████████████████████████████████████████ ▏ 2.9 GB/4.7 GB 1.5 MB/s 19m39s
Error: max retries exceeded: Get "https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/00/00e1317cbf74d901080d7100f57580ba8dd8de57203072dc6f668324ba545f29/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%!F(MISSING)20240419%!F(MISSING)auto%!F(MISSING)s3%!F(MISSING)aws4_request&X-Amz-Date=20240419T004701Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=a01e769aa10f27c7020d029aa6cc7814ee96b8f084c27013e3646bac013c6aa0": net/http: TLS handshake timeout
(base) igor@macigor ~ % ollama run llama3
pulling manifest
pulling 00e1317cbf74... 74% ▕███████████████████████████████████████████████████████ ▏ 3.4 GB/4.7 GB 1.1 MB/s 18m44s
Error: max retries exceeded: Get "https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/00/00e1317cbf74d901080d7100f57580ba8dd8de57203072dc6f668324ba545f29/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%!F(MISSING)20240419%!F(MISSING)auto%!F(MISSING)s3%!F(MISSING)aws4_request&X-Amz-Date=20240419T005553Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=3c756468c56b53adfe4f4be731f3daf7dc04b425686c6df2f1d567195f4f7f6f": read tcp [2a01:cb20:4014:3400:ad09:e45b:4793:12ea]:59735->[2606:4700::6812:85a]:443: read: connection reset by peer
(base) igor@macigor ~ % ollama run llama3
pulling manifest
pulling 00e1317cbf74... 92% ▕█████████████████████████████████████████████████████████████████████ ▏ 4.3 GB/4.7 GB 1.4 MB/s 4m20s
Error: max retries exceeded: Get "https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/00/00e1317cbf74d901080d7100f57580ba8dd8de57203072dc6f668324ba545f29/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%!F(MISSING)20240419%!F(MISSING)auto%!F(MISSING)s3%!F(MISSING)aws4_request&X-Amz-Date=20240419T011033Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=d42500def336a35289b51f066a0e9cac370e8f035b9403b1c4bcf18508c8c409": read tcp 192.168.1.82:61035->104.18.8.90:443: read: connection reset by peer
(base) igor@macigor ~ % ollama run llama3
pulling manifest
pulling 00e1317cbf74... 100% ▕███████████████████████████████████████████████████████████████████████████▏ 4.7 GB
pulling 4fa551d4f938... 100% ▕███████████████████████████████████████████████████████████████████████████▏ 12 KB
pulling 8ab4849b038c... 100% ▕███████████████████████████████████████████████████████████████████████████▏ 254 B
pulling c0aac7c7f00d... 100% ▕███████████████████████████████████████████████████████████████████████████▏ 128 B
pulling db46ef36ef0b... 100% ▕███████████████████████████████████████████████████████████████████████████▏ 483 B
verifying sha256 digest
writing manifest
removing any unused layers
success
>>> Sen
### OS macOS ### GPU _No response_ ### CPU Apple ### Ollama version 0.1.32
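One thing the transcript shows is that each retried `ollama run` resumes the pull from the bytes already downloaded, so simply re-running the command eventually completes. Until the retry behavior is made more tolerant server-side, a loop can automate that. Here is a small Go sketch of such a wrapper; the attempt limit and delay are arbitrary choices, not Ollama defaults:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// retryPull re-invokes `ollama pull` until it exits successfully.
// This is only a workaround sketch: pulls resume from the bytes
// already downloaded, so looping makes progress on flaky links.
func retryPull(model string, maxAttempts int) error {
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		cmd := exec.Command("ollama", "pull", model)
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed, retrying in 5s...\n", attempt)
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("pull of %s failed after %d attempts", model, maxAttempts)
}

func main() {
	if err := retryPull("llama3", 20); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```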
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3741/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3741/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7269
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7269/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7269/comments
https://api.github.com/repos/ollama/ollama/issues/7269/events
https://github.com/ollama/ollama/issues/7269
2,598,930,671
I_kwDOJ0Z1Ps6a6Ijv
7,269
Allow customizing the Ollama installation directory
{ "login": "wxpid1", "id": 45633931, "node_id": "MDQ6VXNlcjQ1NjMzOTMx", "avatar_url": "https://avatars.githubusercontent.com/u/45633931?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wxpid1", "html_url": "https://github.com/wxpid1", "followers_url": "https://api.github.com/users/wxpid1/followers", "following_url": "https://api.github.com/users/wxpid1/following{/other_user}", "gists_url": "https://api.github.com/users/wxpid1/gists{/gist_id}", "starred_url": "https://api.github.com/users/wxpid1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wxpid1/subscriptions", "organizations_url": "https://api.github.com/users/wxpid1/orgs", "repos_url": "https://api.github.com/users/wxpid1/repos", "events_url": "https://api.github.com/users/wxpid1/events{/privacy}", "received_events_url": "https://api.github.com/users/wxpid1/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
2
2024-10-19T09:11:13
2024-10-19T09:52:54
2024-10-19T09:52:54
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Sometimes my user directory doesn't have much free space, so I want to install software to another drive. This would solve both the space issue and let me manage my installed software myself. I tried setting the OLLAMA environment variable, but it didn't work and Ollama was still installed on the C drive. It would be even better if the software installation location could be customized.
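If this is about the Windows installer: newer Ollama releases document passing a target directory to the installer, e.g. `OllamaSetup.exe /DIR="D:\Ollama"`, and the (much larger) model storage can be relocated independently via the `OLLAMA_MODELS` environment variable. If your installer version predates the `/DIR` option, upgrading should make it available.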
{ "login": "wxpid1", "id": 45633931, "node_id": "MDQ6VXNlcjQ1NjMzOTMx", "avatar_url": "https://avatars.githubusercontent.com/u/45633931?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wxpid1", "html_url": "https://github.com/wxpid1", "followers_url": "https://api.github.com/users/wxpid1/followers", "following_url": "https://api.github.com/users/wxpid1/following{/other_user}", "gists_url": "https://api.github.com/users/wxpid1/gists{/gist_id}", "starred_url": "https://api.github.com/users/wxpid1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wxpid1/subscriptions", "organizations_url": "https://api.github.com/users/wxpid1/orgs", "repos_url": "https://api.github.com/users/wxpid1/repos", "events_url": "https://api.github.com/users/wxpid1/events{/privacy}", "received_events_url": "https://api.github.com/users/wxpid1/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7269/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7269/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5520
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5520/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5520/comments
https://api.github.com/repos/ollama/ollama/issues/5520/events
https://github.com/ollama/ollama/pull/5520
2,393,773,992
PR_kwDOJ0Z1Ps50mVFp
5,520
llm: add `-DBUILD_SHARED_LIBS=off` to common cpu cmake flags
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-07-06T22:58:08
2024-07-06T22:58:18
2024-07-06T22:58:17
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5520", "html_url": "https://github.com/ollama/ollama/pull/5520", "diff_url": "https://github.com/ollama/ollama/pull/5520.diff", "patch_url": "https://github.com/ollama/ollama/pull/5520.patch", "merged_at": "2024-07-06T22:58:17" }
null
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5520/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5520/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7769
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7769/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7769/comments
https://api.github.com/repos/ollama/ollama/issues/7769/events
https://github.com/ollama/ollama/issues/7769
2,677,200,814
I_kwDOJ0Z1Ps6fkteu
7,769
Request: Nexa AI Omnivision
{ "login": "mak448a", "id": 94062293, "node_id": "U_kgDOBZtG1Q", "avatar_url": "https://avatars.githubusercontent.com/u/94062293?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mak448a", "html_url": "https://github.com/mak448a", "followers_url": "https://api.github.com/users/mak448a/followers", "following_url": "https://api.github.com/users/mak448a/following{/other_user}", "gists_url": "https://api.github.com/users/mak448a/gists{/gist_id}", "starred_url": "https://api.github.com/users/mak448a/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mak448a/subscriptions", "organizations_url": "https://api.github.com/users/mak448a/orgs", "repos_url": "https://api.github.com/users/mak448a/repos", "events_url": "https://api.github.com/users/mak448a/events{/privacy}", "received_events_url": "https://api.github.com/users/mak448a/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
null
[]
null
0
2024-11-20T21:13:41
2024-11-20T21:14:46
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Here is the link to Nexa AI's model (I haven't checked it over to make sure it's reputable, though): https://huggingface.co/NexaAIDev/omnivision-968M
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7769/reactions", "total_count": 15, "+1": 15, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7769/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/3999
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3999/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3999/comments
https://api.github.com/repos/ollama/ollama/issues/3999/events
https://github.com/ollama/ollama/issues/3999
2,267,446,167
I_kwDOJ0Z1Ps6HJnuX
3,999
could not connect to ollama app
{ "login": "ricardodddduck", "id": 163819103, "node_id": "U_kgDOCcOuXw", "avatar_url": "https://avatars.githubusercontent.com/u/163819103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ricardodddduck", "html_url": "https://github.com/ricardodddduck", "followers_url": "https://api.github.com/users/ricardodddduck/followers", "following_url": "https://api.github.com/users/ricardodddduck/following{/other_user}", "gists_url": "https://api.github.com/users/ricardodddduck/gists{/gist_id}", "starred_url": "https://api.github.com/users/ricardodddduck/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ricardodddduck/subscriptions", "organizations_url": "https://api.github.com/users/ricardodddduck/orgs", "repos_url": "https://api.github.com/users/ricardodddduck/repos", "events_url": "https://api.github.com/users/ricardodddduck/events{/privacy}", "received_events_url": "https://api.github.com/users/ricardodddduck/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5860134234, "node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg", "url": "https://api.github.com/repos/ollama/ollama/labels/windows", "name": "windows", "color": "0052CC", "default": false, "description": "" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
3
2024-04-28T09:10:01
2024-05-01T16:40:00
2024-05-01T16:40:00
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? Could not connect to the Ollama app; is it running? It always happens, even after reinstalling Ollama. ### OS Windows ### GPU Nvidia ### CPU AMD ### Ollama version _No response_
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3999/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3999/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/462
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/462/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/462/comments
https://api.github.com/repos/ollama/ollama/issues/462/events
https://github.com/ollama/ollama/pull/462
1,879,186,581
PR_kwDOJ0Z1Ps5ZbW3F
462
remove marshalPrompt which is no longer needed
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2023-09-03T18:13:11
2023-09-05T18:48:43
2023-09-05T18:48:42
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/462", "html_url": "https://github.com/ollama/ollama/pull/462", "diff_url": "https://github.com/ollama/ollama/pull/462.diff", "patch_url": "https://github.com/ollama/ollama/pull/462.patch", "merged_at": "2023-09-05T18:48:42" }
llama.cpp server handles truncating input
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/462/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/462/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1039
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1039/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1039/comments
https://api.github.com/repos/ollama/ollama/issues/1039/events
https://github.com/ollama/ollama/issues/1039
1,982,834,018
I_kwDOJ0Z1Ps52L6Vi
1,039
Fail to load Custom Models
{ "login": "tjlcast", "id": 16621867, "node_id": "MDQ6VXNlcjE2NjIxODY3", "avatar_url": "https://avatars.githubusercontent.com/u/16621867?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tjlcast", "html_url": "https://github.com/tjlcast", "followers_url": "https://api.github.com/users/tjlcast/followers", "following_url": "https://api.github.com/users/tjlcast/following{/other_user}", "gists_url": "https://api.github.com/users/tjlcast/gists{/gist_id}", "starred_url": "https://api.github.com/users/tjlcast/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tjlcast/subscriptions", "organizations_url": "https://api.github.com/users/tjlcast/orgs", "repos_url": "https://api.github.com/users/tjlcast/repos", "events_url": "https://api.github.com/users/tjlcast/events{/privacy}", "received_events_url": "https://api.github.com/users/tjlcast/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
4
2023-11-08T06:23:22
2023-12-04T21:42:02
2023-12-04T21:42:02
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hi, I want to load a custom GGUF model: [TheBloke/deepseek-coder-6.7B-instruct-GGUF](https://huggingface.co/TheBloke/deepseek-coder-6.7B-instruct-GGUF) The Modelfile is: ``` FROM ./deepseek-coder-6.7b-instruct.Q4_K_M.gguf ``` But when I build it, it reports an error: ``` % ollama create amodel -f ./Modelfile parsing modelfile looking for model ⠋ creating model layer Error: invalid version ``` I am doing this on my old Mac (MacBook Air, 13-inch, Early 2015). Could you help me solve this?
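A possible cause (an assumption, not a confirmed diagnosis of this report) is that the file is a newer GGUF revision than the installed Ollama understands, which older builds reported as an invalid version; upgrading Ollama is the usual fix. A GGUF file starts with a 4-byte `GGUF` magic followed by a little-endian `uint32` format version, so you can check which revision your file is with a few lines of Go:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"os"
)

// Prints the GGUF magic and format version of a model file, useful
// for checking whether the file is a newer GGUF revision than your
// Ollama build supports.
func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: ggufver <file.gguf>")
		os.Exit(1)
	}
	f, err := os.Open(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var header struct {
		Magic   [4]byte // "GGUF" for valid files
		Version uint32  // little-endian format revision
	}
	if err := binary.Read(f, binary.LittleEndian, &header); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("magic=%q version=%d\n", string(header.Magic[:]), header.Version)
}
```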
{ "login": "technovangelist", "id": 633681, "node_id": "MDQ6VXNlcjYzMzY4MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/technovangelist", "html_url": "https://github.com/technovangelist", "followers_url": "https://api.github.com/users/technovangelist/followers", "following_url": "https://api.github.com/users/technovangelist/following{/other_user}", "gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}", "starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions", "organizations_url": "https://api.github.com/users/technovangelist/orgs", "repos_url": "https://api.github.com/users/technovangelist/repos", "events_url": "https://api.github.com/users/technovangelist/events{/privacy}", "received_events_url": "https://api.github.com/users/technovangelist/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1039/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1039/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3012
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3012/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3012/comments
https://api.github.com/repos/ollama/ollama/issues/3012/events
https://github.com/ollama/ollama/issues/3012
2,176,710,344
I_kwDOJ0Z1Ps6BvfbI
3,012
Scoop repo, Nix repo & Debian repo
{ "login": "trymeouteh", "id": 31172274, "node_id": "MDQ6VXNlcjMxMTcyMjc0", "avatar_url": "https://avatars.githubusercontent.com/u/31172274?v=4", "gravatar_id": "", "url": "https://api.github.com/users/trymeouteh", "html_url": "https://github.com/trymeouteh", "followers_url": "https://api.github.com/users/trymeouteh/followers", "following_url": "https://api.github.com/users/trymeouteh/following{/other_user}", "gists_url": "https://api.github.com/users/trymeouteh/gists{/gist_id}", "starred_url": "https://api.github.com/users/trymeouteh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/trymeouteh/subscriptions", "organizations_url": "https://api.github.com/users/trymeouteh/orgs", "repos_url": "https://api.github.com/users/trymeouteh/repos", "events_url": "https://api.github.com/users/trymeouteh/events{/privacy}", "received_events_url": "https://api.github.com/users/trymeouteh/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
1
2024-03-08T20:17:50
2024-03-11T22:18:19
2024-03-11T22:18:19
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Please try to get Ollama into the Windows Scoop package repo, as well as the Linux Nix and Debian repos.
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3012/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3012/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/966
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/966/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/966/comments
https://api.github.com/repos/ollama/ollama/issues/966/events
https://github.com/ollama/ollama/pull/966
1,973,293,095
PR_kwDOJ0Z1Ps5eYgin
966
fix log
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2023-11-02T00:18:49
2023-11-02T00:49:11
2023-11-02T00:49:11
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/966", "html_url": "https://github.com/ollama/ollama/pull/966", "diff_url": "https://github.com/ollama/ollama/pull/966.diff", "patch_url": "https://github.com/ollama/ollama/pull/966.patch", "merged_at": "2023-11-02T00:49:10" }
If there's a remainder, the log line will show the remainder instead of the actual size.
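A tiny illustration of the bug class described here (the names are assumptions, not the actual Ollama code): when the value handed to the log call is the modulo remainder computed for a partial chunk, any size that is not an exact multiple of the chunk logs just the remainder rather than the full size.

```go
package main

import "fmt"

func main() {
	const chunk = 1024 * 1024 // 1 MiB parts
	size := int64(5*chunk + 123)

	// buggy: for sizes that aren't an exact multiple of chunk,
	// this prints 123 instead of the real total
	fmt.Printf("buggy: downloading %d bytes\n", size%chunk)

	// fixed: log the actual size
	fmt.Printf("fixed: downloading %d bytes\n", size)
}
```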
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/966/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/966/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4634
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4634/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4634/comments
https://api.github.com/repos/ollama/ollama/issues/4634/events
https://github.com/ollama/ollama/issues/4634
2,316,942,320
I_kwDOJ0Z1Ps6KGbvw
4,634
Getting Weird Response
{ "login": "Yash-1511", "id": 82636823, "node_id": "MDQ6VXNlcjgyNjM2ODIz", "avatar_url": "https://avatars.githubusercontent.com/u/82636823?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Yash-1511", "html_url": "https://github.com/Yash-1511", "followers_url": "https://api.github.com/users/Yash-1511/followers", "following_url": "https://api.github.com/users/Yash-1511/following{/other_user}", "gists_url": "https://api.github.com/users/Yash-1511/gists{/gist_id}", "starred_url": "https://api.github.com/users/Yash-1511/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Yash-1511/subscriptions", "organizations_url": "https://api.github.com/users/Yash-1511/orgs", "repos_url": "https://api.github.com/users/Yash-1511/repos", "events_url": "https://api.github.com/users/Yash-1511/events{/privacy}", "received_events_url": "https://api.github.com/users/Yash-1511/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6677279472, "node_id": "LA_kwDOJ0Z1Ps8AAAABjf8y8A", "url": "https://api.github.com/repos/ollama/ollama/labels/macos", "name": "macos", "color": "E2DBC0", "default": false, "description": "" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
1
2024-05-25T11:16:29
2024-07-25T23:33:01
2024-07-25T23:33:01
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I am using Ollama on an Apple Mac M2 Ultra. I have been getting the same problem for the past two days. I deleted all my models, uninstalled Ollama, and reinstalled it again; sometimes it works, but most of the time I get a weird response. ![ollama](https://github.com/ollama/ollama/assets/82636823/f3147791-c084-4093-967b-91a7c7d472da) ### OS macOS ### GPU Apple ### CPU Apple ### Ollama version 0.1.38
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4634/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4634/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2355
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2355/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2355/comments
https://api.github.com/repos/ollama/ollama/issues/2355/events
https://github.com/ollama/ollama/pull/2355
2,117,382,556
PR_kwDOJ0Z1Ps5l9foV
2,355
Fit and finish, clean up cruft on uninstall
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-02-04T23:35:39
2024-02-05T00:10:12
2024-02-05T00:10:08
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2355", "html_url": "https://github.com/ollama/ollama/pull/2355", "diff_url": "https://github.com/ollama/ollama/pull/2355.diff", "patch_url": "https://github.com/ollama/ollama/pull/2355.patch", "merged_at": "2024-02-05T00:10:08" }
Does a better job cleaning up, improves logging, and now spawns a PowerShell window on first use to help users with their first run
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2355/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2355/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4836
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4836/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4836/comments
https://api.github.com/repos/ollama/ollama/issues/4836/events
https://github.com/ollama/ollama/issues/4836
2,336,201,826
I_kwDOJ0Z1Ps6LP5xi
4,836
llama runner process has terminated: exit status 127
{ "login": "kruimol", "id": 41127358, "node_id": "MDQ6VXNlcjQxMTI3MzU4", "avatar_url": "https://avatars.githubusercontent.com/u/41127358?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kruimol", "html_url": "https://github.com/kruimol", "followers_url": "https://api.github.com/users/kruimol/followers", "following_url": "https://api.github.com/users/kruimol/following{/other_user}", "gists_url": "https://api.github.com/users/kruimol/gists{/gist_id}", "starred_url": "https://api.github.com/users/kruimol/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kruimol/subscriptions", "organizations_url": "https://api.github.com/users/kruimol/orgs", "repos_url": "https://api.github.com/users/kruimol/repos", "events_url": "https://api.github.com/users/kruimol/events{/privacy}", "received_events_url": "https://api.github.com/users/kruimol/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6433346500, "node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA", "url": "https://api.github.com/repos/ollama/ollama/labels/amd", "name": "amd", "color": "000000", "default": false, "description": "Issues relating to AMD GPUs and ROCm" }, { "id": 6677367769, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q", "url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info", "name": "needs more info", "color": "BA8041", "default": false, "description": "More information is needed to assist" }, { "id": 6677745918, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgZQ_g", "url": "https://api.github.com/repos/ollama/ollama/labels/gpu", "name": "gpu", "color": "76C49E", "default": false, "description": "" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
7
2024-06-05T15:19:57
2024-08-28T20:20:45
2024-06-11T05:13:36
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? When I try running `ollama run codestral` or any other LLM, I keep getting the same status 127 error. Is there a way to fix this? I'm running EndeavourOS (Arch Linux). ### OS Linux ### GPU AMD ### CPU AMD ### Ollama version 0.1.41
{ "login": "kruimol", "id": 41127358, "node_id": "MDQ6VXNlcjQxMTI3MzU4", "avatar_url": "https://avatars.githubusercontent.com/u/41127358?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kruimol", "html_url": "https://github.com/kruimol", "followers_url": "https://api.github.com/users/kruimol/followers", "following_url": "https://api.github.com/users/kruimol/following{/other_user}", "gists_url": "https://api.github.com/users/kruimol/gists{/gist_id}", "starred_url": "https://api.github.com/users/kruimol/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kruimol/subscriptions", "organizations_url": "https://api.github.com/users/kruimol/orgs", "repos_url": "https://api.github.com/users/kruimol/repos", "events_url": "https://api.github.com/users/kruimol/events{/privacy}", "received_events_url": "https://api.github.com/users/kruimol/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4836/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4836/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5797
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5797/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5797/comments
https://api.github.com/repos/ollama/ollama/issues/5797/events
https://github.com/ollama/ollama/issues/5797
2,419,471,956
I_kwDOJ0Z1Ps6QNjZU
5,797
support for arm linux
{ "login": "olumolu", "id": 162728301, "node_id": "U_kgDOCbMJbQ", "avatar_url": "https://avatars.githubusercontent.com/u/162728301?v=4", "gravatar_id": "", "url": "https://api.github.com/users/olumolu", "html_url": "https://github.com/olumolu", "followers_url": "https://api.github.com/users/olumolu/followers", "following_url": "https://api.github.com/users/olumolu/following{/other_user}", "gists_url": "https://api.github.com/users/olumolu/gists{/gist_id}", "starred_url": "https://api.github.com/users/olumolu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/olumolu/subscriptions", "organizations_url": "https://api.github.com/users/olumolu/orgs", "repos_url": "https://api.github.com/users/olumolu/repos", "events_url": "https://api.github.com/users/olumolu/events{/privacy}", "received_events_url": "https://api.github.com/users/olumolu/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
4
2024-07-19T17:28:09
2024-08-08T19:33:36
2024-08-08T19:33:36
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Support for arm linux build so it can be installed in arm laptops which will be coming with new snapdragon x elite cpus with a good npu
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5797/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5797/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1351
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1351/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1351/comments
https://api.github.com/repos/ollama/ollama/issues/1351/events
https://github.com/ollama/ollama/issues/1351
2,021,989,565
I_kwDOJ0Z1Ps54hRy9
1,351
Quitting ollama completely
{ "login": "sytranvn", "id": 13009812, "node_id": "MDQ6VXNlcjEzMDA5ODEy", "avatar_url": "https://avatars.githubusercontent.com/u/13009812?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sytranvn", "html_url": "https://github.com/sytranvn", "followers_url": "https://api.github.com/users/sytranvn/followers", "following_url": "https://api.github.com/users/sytranvn/following{/other_user}", "gists_url": "https://api.github.com/users/sytranvn/gists{/gist_id}", "starred_url": "https://api.github.com/users/sytranvn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sytranvn/subscriptions", "organizations_url": "https://api.github.com/users/sytranvn/orgs", "repos_url": "https://api.github.com/users/sytranvn/repos", "events_url": "https://api.github.com/users/sytranvn/events{/privacy}", "received_events_url": "https://api.github.com/users/sytranvn/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
1
2023-12-02T10:45:36
2023-12-02T10:47:44
2023-12-02T10:47:43
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
`ollama` is still running in the background after the command completes. If the user does not kill the background process, it can waste a lot of energy. ``` $ nvidia-smi -q -d MEMORY ==============NVSMI LOG============== Timestamp : Sat Dec 2 17:39:41 2023 Driver Version : 535.129.03 CUDA Version : 12.2 Attached GPUs : 1 GPU 00000000:01:00.0 FB Memory Usage Total : 8192 MiB Reserved : 218 MiB Used : 9 MiB Free : 7964 MiB BAR1 Memory Usage Total : 8192 MiB Used : 3 MiB Free : 8189 MiB Conf Compute Protected Memory Usage Total : 0 MiB Used : 0 MiB Free : 0 MiB $ ollama run codellama:13b-python "compute gcd of a and b" ''' def gcd(a,b): if b == 0: return a # recursively call with the remainder of a and b until there is no remainder. return gcd(b,a % b) $ nvidia-smi -q -d MEMORY ==============NVSMI LOG============== Timestamp : Sat Dec 2 17:40:26 2023 Driver Version : 535.129.03 CUDA Version : 12.2 Attached GPUs : 1 GPU 00000000:01:00.0 FB Memory Usage Total : 8192 MiB Reserved : 218 MiB Used : 6447 MiB Free : 1525 MiB BAR1 Memory Usage Total : 8192 MiB Used : 5 MiB Free : 8187 MiB Conf Compute Protected Memory Usage Total : 0 MiB Used : 0 MiB Free : 0 MiB ```
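Newer Ollama releases expose a `keep_alive` request parameter that controls how long a model stays loaded after a request; setting it to `0` unloads the model immediately instead of keeping it resident for the default five minutes. A minimal sketch, assuming the server runs on the default port:

```python
import requests

# An empty generate request with keep_alive set to 0 asks the server
# to unload the named model right away, freeing its GPU memory.
requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "codellama:13b-python", "keep_alive": 0},
)
```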
{ "login": "sytranvn", "id": 13009812, "node_id": "MDQ6VXNlcjEzMDA5ODEy", "avatar_url": "https://avatars.githubusercontent.com/u/13009812?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sytranvn", "html_url": "https://github.com/sytranvn", "followers_url": "https://api.github.com/users/sytranvn/followers", "following_url": "https://api.github.com/users/sytranvn/following{/other_user}", "gists_url": "https://api.github.com/users/sytranvn/gists{/gist_id}", "starred_url": "https://api.github.com/users/sytranvn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sytranvn/subscriptions", "organizations_url": "https://api.github.com/users/sytranvn/orgs", "repos_url": "https://api.github.com/users/sytranvn/repos", "events_url": "https://api.github.com/users/sytranvn/events{/privacy}", "received_events_url": "https://api.github.com/users/sytranvn/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1351/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1351/timeline
null
not_planned
false
https://api.github.com/repos/ollama/ollama/issues/8667
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8667/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8667/comments
https://api.github.com/repos/ollama/ollama/issues/8667/events
https://github.com/ollama/ollama/issues/8667
2,818,555,937
I_kwDOJ0Z1Ps6n_8Ah
8,667
deepseek-r1:671b Q4_K_M: error="model requires more system memory (446.3 GiB) than is available
{ "login": "philippstoboy", "id": 76473104, "node_id": "MDQ6VXNlcjc2NDczMTA0", "avatar_url": "https://avatars.githubusercontent.com/u/76473104?v=4", "gravatar_id": "", "url": "https://api.github.com/users/philippstoboy", "html_url": "https://github.com/philippstoboy", "followers_url": "https://api.github.com/users/philippstoboy/followers", "following_url": "https://api.github.com/users/philippstoboy/following{/other_user}", "gists_url": "https://api.github.com/users/philippstoboy/gists{/gist_id}", "starred_url": "https://api.github.com/users/philippstoboy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/philippstoboy/subscriptions", "organizations_url": "https://api.github.com/users/philippstoboy/orgs", "repos_url": "https://api.github.com/users/philippstoboy/repos", "events_url": "https://api.github.com/users/philippstoboy/events{/privacy}", "received_events_url": "https://api.github.com/users/philippstoboy/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
8
2025-01-29T15:33:31
2025-01-29T21:31:36
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hey Ollama community, I’m reaching out for some advice on running the DeepSeek-R1 671B model with Q4 quantization on my current setup, which has 40GB of RAM. I understand that this model employs a Mixture of Experts (MoE) architecture, meaning that during inference, only a subset of the model’s parameters (approximately 37 billion) is active at any given time. This design should, in theory, reduce the immediate memory requirements compared to loading all 671 billion parameters simultaneously. Based on calculations, the active 37B parameters in Q4 quantization should only require around **18.5GB of RAM**, with an additional **20–30% memory overhead**, resulting in a total memory requirement of **~25–30GB**. In contrast, the Q8_0 version (8-bit quantization) would require approximately **37GB + overhead**, totaling around **50GB**. Even the full 671B parameters in Q4 quantization should only need around **325GB of RAM** if loaded entirely, and Q8_0 would need **650GB**. Given these figures, my expectation was that loading just the active 37B subset in Q4 should work within reasonable hardware configurations. However, when I attempt to load the model using Ollama, I encounter an error indicating that the model requires more system memory (446.3 GiB) than is available (37.3 GiB). This is perplexing, given the MoE architecture’s supposed efficiency and the Q4 quantization. I came across a discussion where a user successfully ran the Q8_0 GGUF version of DeepSeek-R1 671B on a CPU-only system with 256GB of RAM. They noted that with a context length of 32,092 tokens, the system utilized around 220GB of RAM, emphasizing that the KV cache can consume more RAM than the model itself. Reference: https://huggingface.co./deepseek-ai/DeepSeek-R1/discussions/19 Based on this, I would expect that running the Q4 quantized model with a context length of 32,092 tokens would require less memory than the Q8 version, potentially making it feasible to run on a system with 192GB of RAM(What I'm planning to upgrade to). However, the error message I received suggests otherwise, indicating a requirement of 446.3 GiB of system memory. Thank you for your help. 
Error Logs: ``` 2025/01/29 14:31:32 routes.go:1187: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[* http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]" time=2025-01-29T14:31:32.660Z level=INFO source=images.go:432 msg="total blobs: 11" time=2025-01-29T14:31:32.680Z level=INFO source=images.go:439 msg="total unused blobs removed: 0" time=2025-01-29T14:31:32.699Z level=INFO source=routes.go:1238 msg="Listening on [::]:11434 (version 0.5.7-0-ga420a45-dirty)" time=2025-01-29T14:31:32.701Z level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners="[cuda_v12_avx cpu cpu_avx cpu_avx2 cuda_v11_avx]" time=2025-01-29T14:31:32.701Z level=INFO source=gpu.go:226 msg="looking for compatible GPUs" time=2025-01-29T14:31:32.729Z level=WARN source=amd_linux.go:61 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory" time=2025-01-29T14:31:32.730Z level=INFO source=amd_linux.go:404 msg="no compatible amdgpu devices detected" time=2025-01-29T14:31:32.730Z level=INFO source=gpu.go:392 msg="no compatible GPUs were discovered" time=2025-01-29T14:31:32.730Z level=INFO source=types.go:131 msg="inference compute" id=0 library=cpu variant=avx compute="" driver=0.0 name="" total="39.2 GiB" available="37.9 GiB" time=2025-01-29T14:33:04.221Z level=INFO source=server.go:104 msg="system memory" total="39.2 GiB" free="37.3 GiB" free_swap="0 B" time=2025-01-29T14:33:04.223Z level=WARN source=server.go:136 msg="model request too large for system" requested="446.3 GiB" available=40099082240 total="39.2 GiB" free="37.3 GiB" swap="0 B" time=2025-01-29T14:33:04.223Z level=INFO source=sched.go:428 msg="NewLlamaServer failed" model=/root/.ollama/models/blobs/sha256-9801e7fce27dbf3d0bfb468b7b21f1d132131a546dfc43e50518631b8b1800a9 error="model requires more system memory (446.3 GiB) than is available (37.3 GiB)" [GIN-debug] [WARNING] ```
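One detail worth noting: MoE reduces compute per token, not resident memory. Which experts fire changes from token to token, so the full 671B parameters must stay loaded (plus KV cache and graph overhead), which is why the reported requirement is far above the active-parameter estimate. A back-of-the-envelope sketch of the weight-only arithmetic above (estimates only; real usage adds KV cache and runtime overhead, and real quants use slightly more than 4 or 8 bits per weight):

```python
# Weight-only memory estimates in decimal GB; KV cache and overhead excluded.
def weight_gb(params_billions: float, bits_per_weight: float) -> float:
    return params_billions * bits_per_weight / 8

for label, params in [("active 37B", 37), ("full 671B", 671)]:
    for quant, bits in [("Q4", 4), ("Q8_0", 8)]:
        print(f"{label} @ {quant}: ~{weight_gb(params, bits):.1f} GB of weights")
```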
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8667/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8667/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/6911
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6911/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6911/comments
https://api.github.com/repos/ollama/ollama/issues/6911/events
https://github.com/ollama/ollama/issues/6911
2,541,219,675
I_kwDOJ0Z1Ps6Xd-9b
6,911
Mixture of Agents for Ollama
{ "login": "secondtruth", "id": 416441, "node_id": "MDQ6VXNlcjQxNjQ0MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/416441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/secondtruth", "html_url": "https://github.com/secondtruth", "followers_url": "https://api.github.com/users/secondtruth/followers", "following_url": "https://api.github.com/users/secondtruth/following{/other_user}", "gists_url": "https://api.github.com/users/secondtruth/gists{/gist_id}", "starred_url": "https://api.github.com/users/secondtruth/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/secondtruth/subscriptions", "organizations_url": "https://api.github.com/users/secondtruth/orgs", "repos_url": "https://api.github.com/users/secondtruth/repos", "events_url": "https://api.github.com/users/secondtruth/events{/privacy}", "received_events_url": "https://api.github.com/users/secondtruth/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
2
2024-09-22T19:16:14
2025-01-06T07:35:29
2025-01-06T07:35:29
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
The [Mixture of Agents (MoA)](https://arxiv.org/abs/2406.04692) approach is an innovative way to leverage the collective strengths of multiple language models to enhance the overall performance and capabilities of one main model (the aggregator). By combining outputs from various models, each potentially excelling in different aspects or domains, this approach has demonstrated significant improvements in model performance, even outperforming GPT-4o on certain benchmarks. ![moa-structure](https://github.com/user-attachments/assets/f18b22b6-a47d-45bb-9016-e77b106a752e) This proposal aims to bring similar capabilities to Ollama, allowing users to define and utilize multiple agent models within a single inference pipeline. It suggests extending Ollama's Modelfile vocabulary to support an MoA architecture. The feature would enable users to: 1. Define multiple "reference" models, each potentially specialized for different tasks or domains. 2. Specify an "aggregator" model that synthesizes the outputs from these reference models. 3. Create sophisticated inference pipelines that can adapt to various types of queries or tasks. Expanding Ollama's capabilities with MoA would make Ollama more flexible and powerful, and offer enhanced model performance. This feature would be especially beneficial in scenarios where diverse expertise or multimodal processing is required. ## Proposed Syntax Extensions 1. **`REFERENCE`:** Defines a reference model to be used as an agent. These directives must be placed before `FROM`, so they can be referenced in the system prompt of the aggregator model. ``` REFERENCE <model_name> AS <expert_alias> ``` - All configurations following this statement until the next `REFERENCE` or `FROM` apply to this agent model. 2. **`FROM`:** Defines the aggregator model, equal to its current use in Modelfiles. Must come after any `REFERENCE` directive. - All configurations following this statement apply to the aggregator model. 3. **`SYSTEM`:** Defines the system prompt of the reference or aggregator model, equal to its current use in Modelfiles. 4. **`TEMPLATE --type=user-message`:** Defines a template for user messages given to the reference or aggregator model. - This proposal introduces new template variables for this case to access agent outputs. - Available template variables for `TEMPLATE --type=user-message`: - `{{ .Ref.<expert_alias> }}` ## Example Usage ```dockerfile # Define first reference model REFERENCE qwen2.5 AS companion PARAMETER temperature 0.3 PARAMETER num_predict 512 SYSTEM """ You are a companion AI expert. Provide friendly and supportive responses. """ # Define second reference model (multimodal) REFERENCE llava AS sight PARAMETER temperature 0.4 PARAMETER num_predict 256 SYSTEM """ You are a visual analysis expert. Describe and analyze visual elements. """ # Aggregator model configuration FROM llama3.1 PARAMETER temperature 0.5 PARAMETER num_ctx 4096 TEMPLATE --type=user-message """ You have been provided with a set of responses from various agent LLMs to the latest user query. Your task is to synthesize these responses into a single, high-quality response. It is crucial to critically evaluate the information provided in these responses, recognizing that some of it may be biased or incorrect. Your response should not simply replicate the given answers but should offer a refined, accurate, and comprehensive reply to the instruction. Ensure your response is well-structured, coherent, and adheres to the highest standards of accuracy and reliability.
Responses from agents: Companion: {{ .Ref.companion }} Sight: {{ .Ref.sight }} """ ``` ## Benefits 1. Enhanced Model Capabilities: Leverage strengths of multiple models for more comprehensive responses. 2. Flexibility: Easy configuration and deployment of complex model ensembles through declarative syntax. 3. Improved Context-Awareness: Utilize specialized models for different aspects of queries. 4. Familiar Syntax: Builds upon existing Modelfile conventions, making it intuitive for Ollama and Docker users. ## Potential Use Cases 1. Multimodal Processing: Combine text and image/video/voice analysis for richer understanding. 2. Enhanced Question-Answering, Fact-Checking, and Research: Use specialized experts for different knowledge domains, aggregating information from multiple specialized sources. 3. Context-Aware Conversational Agents: Dynamically adapt responses based on conversational context, or use different experts for style, content, and personality aspects. ## See also - [Blog Post about Together MoA](https://www.together.ai/blog/together-moa) - [Together MoA Documentation](https://docs.together.ai/docs/mixture-of-agents) - [Together MoA GitHub Repo](https://github.com/togethercomputer/MoA)
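Until something like this exists in Modelfile syntax, the flow can be approximated client-side. A minimal sketch with the Ollama Python client (the model names, aliases, and synthesis prompt are illustrative, not part of this proposal):

```python
import ollama

REFERENCES = {"companion": "qwen2.5", "sight": "llava"}
AGGREGATOR = "llama3.1"

def moa(query: str) -> str:
    # Fan the query out to each reference model and collect draft answers.
    drafts = {
        alias: ollama.chat(model=name,
                           messages=[{"role": "user", "content": query}])
                     ["message"]["content"]
        for alias, name in REFERENCES.items()
    }
    # Ask the aggregator to synthesize the reference answers into one reply.
    prompt = (
        query
        + "\n\nResponses from agents:\n"
        + "\n".join(f"{alias}: {text}" for alias, text in drafts.items())
    )
    result = ollama.chat(model=AGGREGATOR,
                         messages=[{"role": "user", "content": prompt}])
    return result["message"]["content"]

print(moa("What trade-offs should I consider when choosing a laptop GPU?"))
```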
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/users/rick-github/followers", "following_url": "https://api.github.com/users/rick-github/following{/other_user}", "gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}", "starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rick-github/subscriptions", "organizations_url": "https://api.github.com/users/rick-github/orgs", "repos_url": "https://api.github.com/users/rick-github/repos", "events_url": "https://api.github.com/users/rick-github/events{/privacy}", "received_events_url": "https://api.github.com/users/rick-github/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6911/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6911/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2888
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2888/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2888/comments
https://api.github.com/repos/ollama/ollama/issues/2888/events
https://github.com/ollama/ollama/issues/2888
2,165,067,697
I_kwDOJ0Z1Ps6BDE-x
2,888
Fail to load dynamic library - unicode character path
{ "login": "08183080", "id": 51738561, "node_id": "MDQ6VXNlcjUxNzM4NTYx", "avatar_url": "https://avatars.githubusercontent.com/u/51738561?v=4", "gravatar_id": "", "url": "https://api.github.com/users/08183080", "html_url": "https://github.com/08183080", "followers_url": "https://api.github.com/users/08183080/followers", "following_url": "https://api.github.com/users/08183080/following{/other_user}", "gists_url": "https://api.github.com/users/08183080/gists{/gist_id}", "starred_url": "https://api.github.com/users/08183080/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/08183080/subscriptions", "organizations_url": "https://api.github.com/users/08183080/orgs", "repos_url": "https://api.github.com/users/08183080/repos", "events_url": "https://api.github.com/users/08183080/events{/privacy}", "received_events_url": "https://api.github.com/users/08183080/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5860134234, "node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg", "url": "https://api.github.com/repos/ollama/ollama/labels/windows", "name": "windows", "color": "0052CC", "default": false, "description": "" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
7
2024-03-03T01:42:38
2024-04-16T21:00:14
2024-04-16T21:00:14
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Error: Unable to load dynamic library: Unable to load dynamic server library: The specified module could not be found. (找不到指定的模块。)
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2888/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2888/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3240
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3240/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3240/comments
https://api.github.com/repos/ollama/ollama/issues/3240/events
https://github.com/ollama/ollama/pull/3240
2,194,369,059
PR_kwDOJ0Z1Ps5qDq2r
3,240
do not prompt to move the CLI on install flow if already installed
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/BruceMacD/followers", "following_url": "https://api.github.com/users/BruceMacD/following{/other_user}", "gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}", "starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions", "organizations_url": "https://api.github.com/users/BruceMacD/orgs", "repos_url": "https://api.github.com/users/BruceMacD/repos", "events_url": "https://api.github.com/users/BruceMacD/events{/privacy}", "received_events_url": "https://api.github.com/users/BruceMacD/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
2
2024-03-19T08:48:10
2024-09-16T10:23:50
null
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3240", "html_url": "https://github.com/ollama/ollama/pull/3240", "diff_url": "https://github.com/ollama/ollama/pull/3240.diff", "patch_url": "https://github.com/ollama/ollama/pull/3240.patch", "merged_at": null }
If Ollama is installed via brew, or the user wishes to manage their PATH manually, they will be prompted to install the CLI when opening the Ollama Mac app. This change attempts to check if Ollama is already set in the PATH, and if it is found the user is not prompted to link the executable to `/usr/local/bin/ollama`. - rather than checking if the symlink is set, try to spawn common shells and see if ollama is available within them (see the sketch below) - make init async to allow for spawning shells - check for `step` param in the installation UI to bypass the `move to applications` prompt - flatten `init()` conditional check to link ollama in `/usr/local/bin/ollama` resolves #283 resolves #3186
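The shell-probing step is roughly the following (a sketch in Python rather than the app's actual code; the shell list and the `command -v` probe are assumptions):

```python
import subprocess

SHELLS = ["/bin/zsh", "/bin/bash", "/bin/sh"]

def ollama_on_path() -> bool:
    # Spawn each common shell as a login shell so user rc files
    # (and any PATH edits in them) are loaded, then probe for the binary.
    for shell in SHELLS:
        try:
            result = subprocess.run(
                [shell, "-l", "-c", "command -v ollama"],
                capture_output=True, timeout=5,
            )
            if result.returncode == 0:
                return True
        except (OSError, subprocess.TimeoutExpired):
            continue
    return False

print("skip CLI install prompt" if ollama_on_path() else "offer to symlink CLI")
```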
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3240/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3240/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4852
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4852/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4852/comments
https://api.github.com/repos/ollama/ollama/issues/4852/events
https://github.com/ollama/ollama/pull/4852
2,337,827,714
PR_kwDOJ0Z1Ps5xqIVb
4,852
Error handling load_single_document() in ingest.py
{ "login": "dcasota", "id": 14890243, "node_id": "MDQ6VXNlcjE0ODkwMjQz", "avatar_url": "https://avatars.githubusercontent.com/u/14890243?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dcasota", "html_url": "https://github.com/dcasota", "followers_url": "https://api.github.com/users/dcasota/followers", "following_url": "https://api.github.com/users/dcasota/following{/other_user}", "gists_url": "https://api.github.com/users/dcasota/gists{/gist_id}", "starred_url": "https://api.github.com/users/dcasota/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dcasota/subscriptions", "organizations_url": "https://api.github.com/users/dcasota/orgs", "repos_url": "https://api.github.com/users/dcasota/repos", "events_url": "https://api.github.com/users/dcasota/events{/privacy}", "received_events_url": "https://api.github.com/users/dcasota/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-06-06T09:45:12
2024-06-09T17:41:08
2024-06-09T17:41:08
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4852", "html_url": "https://github.com/ollama/ollama/pull/4852", "diff_url": "https://github.com/ollama/ollama/pull/4852.diff", "patch_url": "https://github.com/ollama/ollama/pull/4852.patch", "merged_at": "2024-06-09T17:41:08" }
`load_single_document()` handles: - corrupt files - empty (zero-byte) files - unsupported file extensions
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4852/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4852/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2017
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2017/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2017/comments
https://api.github.com/repos/ollama/ollama/issues/2017/events
https://github.com/ollama/ollama/pull/2017
2,084,476,509
PR_kwDOJ0Z1Ps5kOMkJ
2,017
Fix show parameters
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-01-16T17:20:24
2024-01-16T18:34:44
2024-01-16T18:34:44
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2017", "html_url": "https://github.com/ollama/ollama/pull/2017", "diff_url": "https://github.com/ollama/ollama/pull/2017.diff", "patch_url": "https://github.com/ollama/ollama/pull/2017.patch", "merged_at": "2024-01-16T18:34:44" }
The ShowParameters call was converting some floats into ints. This simplifies the code and adds a unit test.
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2017/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2017/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1270
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1270/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1270/comments
https://api.github.com/repos/ollama/ollama/issues/1270/events
https://github.com/ollama/ollama/issues/1270
2,010,402,680
I_kwDOJ0Z1Ps531E94
1,270
Specify where to download and look for models
{ "login": "Talleyrand-34", "id": 119809076, "node_id": "U_kgDOByQkNA", "avatar_url": "https://avatars.githubusercontent.com/u/119809076?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Talleyrand-34", "html_url": "https://github.com/Talleyrand-34", "followers_url": "https://api.github.com/users/Talleyrand-34/followers", "following_url": "https://api.github.com/users/Talleyrand-34/following{/other_user}", "gists_url": "https://api.github.com/users/Talleyrand-34/gists{/gist_id}", "starred_url": "https://api.github.com/users/Talleyrand-34/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Talleyrand-34/subscriptions", "organizations_url": "https://api.github.com/users/Talleyrand-34/orgs", "repos_url": "https://api.github.com/users/Talleyrand-34/repos", "events_url": "https://api.github.com/users/Talleyrand-34/events{/privacy}", "received_events_url": "https://api.github.com/users/Talleyrand-34/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
7
2023-11-25T00:59:47
2023-12-12T20:07:27
2023-11-26T01:56:51
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
null
{ "login": "technovangelist", "id": 633681, "node_id": "MDQ6VXNlcjYzMzY4MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/technovangelist", "html_url": "https://github.com/technovangelist", "followers_url": "https://api.github.com/users/technovangelist/followers", "following_url": "https://api.github.com/users/technovangelist/following{/other_user}", "gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}", "starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions", "organizations_url": "https://api.github.com/users/technovangelist/orgs", "repos_url": "https://api.github.com/users/technovangelist/repos", "events_url": "https://api.github.com/users/technovangelist/events{/privacy}", "received_events_url": "https://api.github.com/users/technovangelist/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1270/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1270/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7166
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7166/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7166/comments
https://api.github.com/repos/ollama/ollama/issues/7166/events
https://github.com/ollama/ollama/issues/7166
2,579,797,833
I_kwDOJ0Z1Ps6ZxJdJ
7,166
Qwen 2.5 72B missing stop parameter
{ "login": "bold84", "id": 21118257, "node_id": "MDQ6VXNlcjIxMTE4MjU3", "avatar_url": "https://avatars.githubusercontent.com/u/21118257?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bold84", "html_url": "https://github.com/bold84", "followers_url": "https://api.github.com/users/bold84/followers", "following_url": "https://api.github.com/users/bold84/following{/other_user}", "gists_url": "https://api.github.com/users/bold84/gists{/gist_id}", "starred_url": "https://api.github.com/users/bold84/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bold84/subscriptions", "organizations_url": "https://api.github.com/users/bold84/orgs", "repos_url": "https://api.github.com/users/bold84/repos", "events_url": "https://api.github.com/users/bold84/events{/privacy}", "received_events_url": "https://api.github.com/users/bold84/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
9
2024-10-10T20:40:02
2024-12-05T06:38:55
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? The model often doesn't stop generating. ![telegram-cloud-photo-size-5-6071353675453939699-x](https://github.com/user-attachments/assets/0e5bb916-068c-44d8-88b8-797fa1e77b0e) `PARAMETER stop <|endoftext|>` seems to be missing in the model configuration. Adding it solved the problem. ### OS Ubuntu 22.04.5 LTS (but ollama runs in official docker container) ### GPU 2 x RTX 4090 ### CPU AMD Ryzen 9 7950X ### Ollama version 0.3.12
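Until the library model is updated, the missing token can be layered on locally by deriving a variant model. A sketch with the Ollama Python client (the derived model name is arbitrary, and the `modelfile=` argument matches client versions contemporary with this report):

```python
import ollama

# Create a local variant of the library model that adds the missing stop token.
ollama.create(
    model="qwen2.5-72b-stopfix",
    modelfile="FROM qwen2.5:72b\nPARAMETER stop <|endoftext|>",
)
```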
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7166/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7166/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/5733
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5733/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5733/comments
https://api.github.com/repos/ollama/ollama/issues/5733/events
https://github.com/ollama/ollama/issues/5733
2,412,115,809
I_kwDOJ0Z1Ps6Pxfdh
5,733
Installation on Linux fails because /usr/share/ollama does not exist.
{ "login": "richardstevenhack", "id": 44449170, "node_id": "MDQ6VXNlcjQ0NDQ5MTcw", "avatar_url": "https://avatars.githubusercontent.com/u/44449170?v=4", "gravatar_id": "", "url": "https://api.github.com/users/richardstevenhack", "html_url": "https://github.com/richardstevenhack", "followers_url": "https://api.github.com/users/richardstevenhack/followers", "following_url": "https://api.github.com/users/richardstevenhack/following{/other_user}", "gists_url": "https://api.github.com/users/richardstevenhack/gists{/gist_id}", "starred_url": "https://api.github.com/users/richardstevenhack/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/richardstevenhack/subscriptions", "organizations_url": "https://api.github.com/users/richardstevenhack/orgs", "repos_url": "https://api.github.com/users/richardstevenhack/repos", "events_url": "https://api.github.com/users/richardstevenhack/events{/privacy}", "received_events_url": "https://api.github.com/users/richardstevenhack/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
5
2024-07-16T22:03:08
2024-07-24T17:24:32
2024-07-16T22:56:24
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? When you install using the install script on openSUSE Tumbleweed, the script fails because the useradd command with the -m flag does not create the directory /usr/share/ollama; it merely assigns that directory to the ollama user. I had Claude Sonnet go over the install script line by line explaining to me what it does. Here is what Claude has to say about that line in response to this query: Correct me if I'm wrong, but this line here: `$SUDO useradd -r -s /bin/false -U -m -d /usr/share/ollama ollama` creates the account with the home directory at /usr/share/ollama. Where in the script is that directory created? Claude 3.5 Sonnet: You're absolutely correct, and I apologize for the oversight in my previous explanation. You've made a very astute observation. Let's clarify this. The line you mentioned: `$SUDO useradd -r -s /bin/false -U -m -d /usr/share/ollama ollama` indeed creates the 'ollama' user with the home directory set to /usr/share/ollama. You're right to question where this directory is created. In the original script, there isn't an explicit command to create this directory. The -m option in the useradd command is supposed to create the home directory if it doesn't exist, but it's generally a good practice to ensure the parent directory exists. To improve this, we should add a line to create the directory before creating the user. Here's how we could modify that section: ``` if ! id ollama >/dev/null 2>&1; then status "Creating ollama user..." $SUDO mkdir -p /usr/share/ollama $SUDO useradd -r -s /bin/false -U -m -d /usr/share/ollama ollama fi ``` This change ensures that the /usr/share/ollama directory exists before we try to create the user with that home directory. Thank you for catching this. It's an important detail that could potentially cause issues if the parent directory /usr/share didn't exist or wasn't writable. Your attention to detail helps improve the robustness of the script. ### OS Linux ### GPU AMD ### CPU AMD ### Ollama version v0.2.5
{ "login": "richardstevenhack", "id": 44449170, "node_id": "MDQ6VXNlcjQ0NDQ5MTcw", "avatar_url": "https://avatars.githubusercontent.com/u/44449170?v=4", "gravatar_id": "", "url": "https://api.github.com/users/richardstevenhack", "html_url": "https://github.com/richardstevenhack", "followers_url": "https://api.github.com/users/richardstevenhack/followers", "following_url": "https://api.github.com/users/richardstevenhack/following{/other_user}", "gists_url": "https://api.github.com/users/richardstevenhack/gists{/gist_id}", "starred_url": "https://api.github.com/users/richardstevenhack/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/richardstevenhack/subscriptions", "organizations_url": "https://api.github.com/users/richardstevenhack/orgs", "repos_url": "https://api.github.com/users/richardstevenhack/repos", "events_url": "https://api.github.com/users/richardstevenhack/events{/privacy}", "received_events_url": "https://api.github.com/users/richardstevenhack/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5733/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5733/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2742
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2742/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2742/comments
https://api.github.com/repos/ollama/ollama/issues/2742/events
https://github.com/ollama/ollama/issues/2742
2,152,742,334
I_kwDOJ0Z1Ps6AUD2-
2,742
How to improve ollama performance
{ "login": "gautam-fairpe", "id": 127822235, "node_id": "U_kgDOB55pmw", "avatar_url": "https://avatars.githubusercontent.com/u/127822235?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gautam-fairpe", "html_url": "https://github.com/gautam-fairpe", "followers_url": "https://api.github.com/users/gautam-fairpe/followers", "following_url": "https://api.github.com/users/gautam-fairpe/following{/other_user}", "gists_url": "https://api.github.com/users/gautam-fairpe/gists{/gist_id}", "starred_url": "https://api.github.com/users/gautam-fairpe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gautam-fairpe/subscriptions", "organizations_url": "https://api.github.com/users/gautam-fairpe/orgs", "repos_url": "https://api.github.com/users/gautam-fairpe/repos", "events_url": "https://api.github.com/users/gautam-fairpe/events{/privacy}", "received_events_url": "https://api.github.com/users/gautam-fairpe/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5808482718, "node_id": "LA_kwDOJ0Z1Ps8AAAABWjZpng", "url": "https://api.github.com/repos/ollama/ollama/labels/performance", "name": "performance", "color": "A5B5C6", "default": false, "description": "" }, { "id": 6430601766, "node_id": "LA_kwDOJ0Z1Ps8AAAABf0syJg", "url": "https://api.github.com/repos/ollama/ollama/labels/nvidia", "name": "nvidia", "color": "8CDB00", "default": false, "description": "Issues relating to Nvidia GPUs and CUDA" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
2
2024-02-25T12:29:35
2024-03-11T21:21:36
2024-03-11T21:21:08
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Current model params:

```
FROM llama2:13b-chat
PARAMETER temperature 0.2
PARAMETER num_ctx 4096
PARAMETER num_thread 16
PARAMETER use_mmap False
```

System config:
- RAM: 108 GB
- GPU: T4 graphics card, 16 GB

![Screenshot from 2024-02-25 17-57-04](https://github.com/ollama/ollama/assets/127822235/24854715-93b3-4732-9b6f-bd1a373a9417)

Also, hardly any RAM is being used. I am using the ollama Python bindings to get the result, but due to some params issue I am not getting the result as expected. What am I missing here?
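As a hedged first check, assuming the default Linux systemd install, the server log reveals whether any layers of the 13B model were actually offloaded to the T4; the exact wording of the log line varies between Ollama versions:

```bash
# Look for the llama.cpp offload report, e.g. "offloaded 32/41 layers to GPU".
# Zero offloaded layers would explain CPU-bound latency despite the GPU
# sitting idle in nvidia-smi.
journalctl -u ollama --no-pager | grep -i "offload"
```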
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2742/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2742/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3067
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3067/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3067/comments
https://api.github.com/repos/ollama/ollama/issues/3067/events
https://github.com/ollama/ollama/issues/3067
2,180,359,266
I_kwDOJ0Z1Ps6B9aRi
3,067
Additional package manager support
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" }, { "id": 5755339642, "node_id": "LA_kwDOJ0Z1Ps8AAAABVwuDeg", "url": "https://api.github.com/repos/ollama/ollama/labels/linux", "name": "linux", "color": "516E70", "default": false, "description": "" }, { "id": 5860134234, "node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg", "url": "https://api.github.com/repos/ollama/ollama/labels/windows", "name": "windows", "color": "0052CC", "default": false, "description": "" }, { "id": 6677279472, "node_id": "LA_kwDOJ0Z1Ps8AAAABjf8y8A", "url": "https://api.github.com/repos/ollama/ollama/labels/macos", "name": "macos", "color": "E2DBC0", "default": false, "description": "" } ]
open
false
null
[]
null
8
2024-03-11T22:18:13
2024-10-29T03:07:17
null
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
This is an issue that encompasses supporting other package managers:

- [ ] MacPorts
- [ ] Apt
- [ ] Scoop
- [ ] Nix
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3067/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3067/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/7962
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7962/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7962/comments
https://api.github.com/repos/ollama/ollama/issues/7962/events
https://github.com/ollama/ollama/pull/7962
2,721,806,387
PR_kwDOJ0Z1Ps6EQl5A
7,962
Update readmes for structured outputs
{ "login": "ParthSareen", "id": 29360864, "node_id": "MDQ6VXNlcjI5MzYwODY0", "avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ParthSareen", "html_url": "https://github.com/ParthSareen", "followers_url": "https://api.github.com/users/ParthSareen/followers", "following_url": "https://api.github.com/users/ParthSareen/following{/other_user}", "gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}", "starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions", "organizations_url": "https://api.github.com/users/ParthSareen/orgs", "repos_url": "https://api.github.com/users/ParthSareen/repos", "events_url": "https://api.github.com/users/ParthSareen/events{/privacy}", "received_events_url": "https://api.github.com/users/ParthSareen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-12-06T01:11:06
2024-12-06T18:35:39
2024-12-06T18:35:37
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7962", "html_url": "https://github.com/ollama/ollama/pull/7962", "diff_url": "https://github.com/ollama/ollama/pull/7962.diff", "patch_url": "https://github.com/ollama/ollama/pull/7962.patch", "merged_at": "2024-12-06T18:35:37" }
null
{ "login": "ParthSareen", "id": 29360864, "node_id": "MDQ6VXNlcjI5MzYwODY0", "avatar_url": "https://avatars.githubusercontent.com/u/29360864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ParthSareen", "html_url": "https://github.com/ParthSareen", "followers_url": "https://api.github.com/users/ParthSareen/followers", "following_url": "https://api.github.com/users/ParthSareen/following{/other_user}", "gists_url": "https://api.github.com/users/ParthSareen/gists{/gist_id}", "starred_url": "https://api.github.com/users/ParthSareen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ParthSareen/subscriptions", "organizations_url": "https://api.github.com/users/ParthSareen/orgs", "repos_url": "https://api.github.com/users/ParthSareen/repos", "events_url": "https://api.github.com/users/ParthSareen/events{/privacy}", "received_events_url": "https://api.github.com/users/ParthSareen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7962/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7962/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5264
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5264/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5264/comments
https://api.github.com/repos/ollama/ollama/issues/5264/events
https://github.com/ollama/ollama/issues/5264
2,371,495,406
I_kwDOJ0Z1Ps6NWiXu
5,264
How to use the .mf model configuration file to register a customized vision-language model in Ollama
{ "login": "LJY16114", "id": 39738076, "node_id": "MDQ6VXNlcjM5NzM4MDc2", "avatar_url": "https://avatars.githubusercontent.com/u/39738076?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LJY16114", "html_url": "https://github.com/LJY16114", "followers_url": "https://api.github.com/users/LJY16114/followers", "following_url": "https://api.github.com/users/LJY16114/following{/other_user}", "gists_url": "https://api.github.com/users/LJY16114/gists{/gist_id}", "starred_url": "https://api.github.com/users/LJY16114/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LJY16114/subscriptions", "organizations_url": "https://api.github.com/users/LJY16114/orgs", "repos_url": "https://api.github.com/users/LJY16114/repos", "events_url": "https://api.github.com/users/LJY16114/events{/privacy}", "received_events_url": "https://api.github.com/users/LJY16114/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
0
2024-06-25T02:29:40
2024-06-25T02:39:20
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
This is my .mf model configuration file:

```
FROM /root/llava-llama-3-8b-v1_1-gguf/llava-llama-3-8b-v1_1-f16.gguf
TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>

{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>

{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>

{{ .Response }}<|eot_id|>"""
PARAMETER stop "<|system|>"
PARAMETER stop "<|user|>"
PARAMETER stop "<|assistant|>"
```

Ollama can recognize our customized .gguf model and displays a model option in open-webui. However, the loaded model cannot understand the content of the input image and randomly says things that are completely unrelated to the image. What could be the cause of this problem? Thank you in advance! @jmorganca
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5264/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5264/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/254
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/254/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/254/comments
https://api.github.com/repos/ollama/ollama/issues/254/events
https://github.com/ollama/ollama/issues/254
1,832,129,526
I_kwDOJ0Z1Ps5tNBP2
254
Streaming llama output
{ "login": "osamanatouf2", "id": 70172406, "node_id": "MDQ6VXNlcjcwMTcyNDA2", "avatar_url": "https://avatars.githubusercontent.com/u/70172406?v=4", "gravatar_id": "", "url": "https://api.github.com/users/osamanatouf2", "html_url": "https://github.com/osamanatouf2", "followers_url": "https://api.github.com/users/osamanatouf2/followers", "following_url": "https://api.github.com/users/osamanatouf2/following{/other_user}", "gists_url": "https://api.github.com/users/osamanatouf2/gists{/gist_id}", "starred_url": "https://api.github.com/users/osamanatouf2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/osamanatouf2/subscriptions", "organizations_url": "https://api.github.com/users/osamanatouf2/orgs", "repos_url": "https://api.github.com/users/osamanatouf2/repos", "events_url": "https://api.github.com/users/osamanatouf2/events{/privacy}", "received_events_url": "https://api.github.com/users/osamanatouf2/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
4
2023-08-01T22:43:28
2023-08-02T01:40:54
2023-08-02T01:40:53
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
@jmorganca @mxyng

Testing llama2 output after building and serving the model, with the example `why the sky is blue`, I noticed that the streaming does not behave the way I expected. Using curl or Python requests on Ubuntu 22.04, I got something like:

```
{"model":"llama2","created_at":"2023-08-01T22:02:11.894420812Z","response":" sc","done":false}
{"model":"llama2","created_at":"2023-08-01T22:02:12.151293915Z","response":"at","done":false}
{"model":"llama2","created_at":"2023-08-01T22:02:12.409555353Z","response":"ters","done":false}
```

Is there any way to disable streaming?
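For what it's worth, current versions of the generate endpoint accept a `stream` flag; a minimal sketch against a default local install on port 11434:

```bash
# With "stream": false the server buffers the tokens and returns one
# complete JSON object instead of a stream of partial responses.
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```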
{ "login": "osamanatouf2", "id": 70172406, "node_id": "MDQ6VXNlcjcwMTcyNDA2", "avatar_url": "https://avatars.githubusercontent.com/u/70172406?v=4", "gravatar_id": "", "url": "https://api.github.com/users/osamanatouf2", "html_url": "https://github.com/osamanatouf2", "followers_url": "https://api.github.com/users/osamanatouf2/followers", "following_url": "https://api.github.com/users/osamanatouf2/following{/other_user}", "gists_url": "https://api.github.com/users/osamanatouf2/gists{/gist_id}", "starred_url": "https://api.github.com/users/osamanatouf2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/osamanatouf2/subscriptions", "organizations_url": "https://api.github.com/users/osamanatouf2/orgs", "repos_url": "https://api.github.com/users/osamanatouf2/repos", "events_url": "https://api.github.com/users/osamanatouf2/events{/privacy}", "received_events_url": "https://api.github.com/users/osamanatouf2/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/254/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/254/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7416
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7416/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7416/comments
https://api.github.com/repos/ollama/ollama/issues/7416/events
https://github.com/ollama/ollama/issues/7416
2,623,081,544
I_kwDOJ0Z1Ps6cWQxI
7,416
Optimizing Single Inference Performance on Distributed GPUs with Ollama’s Parallel Inference
{ "login": "jibinghu", "id": 63084174, "node_id": "MDQ6VXNlcjYzMDg0MTc0", "avatar_url": "https://avatars.githubusercontent.com/u/63084174?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jibinghu", "html_url": "https://github.com/jibinghu", "followers_url": "https://api.github.com/users/jibinghu/followers", "following_url": "https://api.github.com/users/jibinghu/following{/other_user}", "gists_url": "https://api.github.com/users/jibinghu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jibinghu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jibinghu/subscriptions", "organizations_url": "https://api.github.com/users/jibinghu/orgs", "repos_url": "https://api.github.com/users/jibinghu/repos", "events_url": "https://api.github.com/users/jibinghu/events{/privacy}", "received_events_url": "https://api.github.com/users/jibinghu/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
4
2024-10-30T06:52:49
2024-10-30T09:29:59
2024-10-30T09:29:59
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Hey guys, I have a server with dual A100 GPUs and a server with a single V100 GPU. Knowing the IP addresses, ports, and passwords of both servers, I want to use Ollama’s parallel inference functionality to perform a single inference request on the llama3.1-70B model. How can I achieve optimal performance for a single request when using Ollama for inference? Do I need to use MPI or other distributed methods? Will this make the model inference faster? Thanks.
{ "login": "jibinghu", "id": 63084174, "node_id": "MDQ6VXNlcjYzMDg0MTc0", "avatar_url": "https://avatars.githubusercontent.com/u/63084174?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jibinghu", "html_url": "https://github.com/jibinghu", "followers_url": "https://api.github.com/users/jibinghu/followers", "following_url": "https://api.github.com/users/jibinghu/following{/other_user}", "gists_url": "https://api.github.com/users/jibinghu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jibinghu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jibinghu/subscriptions", "organizations_url": "https://api.github.com/users/jibinghu/orgs", "repos_url": "https://api.github.com/users/jibinghu/repos", "events_url": "https://api.github.com/users/jibinghu/events{/privacy}", "received_events_url": "https://api.github.com/users/jibinghu/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7416/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7416/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2741
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2741/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2741/comments
https://api.github.com/repos/ollama/ollama/issues/2741/events
https://github.com/ollama/ollama/issues/2741
2,152,705,000
I_kwDOJ0Z1Ps6AT6vo
2,741
Using embedding and llm at the same time
{ "login": "Bearsaerker", "id": 92314812, "node_id": "U_kgDOBYCcvA", "avatar_url": "https://avatars.githubusercontent.com/u/92314812?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bearsaerker", "html_url": "https://github.com/Bearsaerker", "followers_url": "https://api.github.com/users/Bearsaerker/followers", "following_url": "https://api.github.com/users/Bearsaerker/following{/other_user}", "gists_url": "https://api.github.com/users/Bearsaerker/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bearsaerker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bearsaerker/subscriptions", "organizations_url": "https://api.github.com/users/Bearsaerker/orgs", "repos_url": "https://api.github.com/users/Bearsaerker/repos", "events_url": "https://api.github.com/users/Bearsaerker/events{/privacy}", "received_events_url": "https://api.github.com/users/Bearsaerker/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
1
2024-02-25T10:47:28
2024-05-02T22:25:36
2024-05-02T22:25:34
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I just have a question: can I use an embedding model and an LLM at the same time, without Ollama loading the embedding model and unloading the LLM between calls? I want to use this in LlamaIndex and would like to know whether it works before trying to set it up.
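A minimal sketch of the server knob that later made this possible; the variable name is Ollama's, but the value shown is illustrative and defaults vary by version:

```bash
# Allow two models (e.g. an embedding model and an LLM) to stay
# resident at the same time instead of swapping on every request.
OLLAMA_MAX_LOADED_MODELS=2 ollama serve
```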
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2741/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2741/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6532
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6532/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6532/comments
https://api.github.com/repos/ollama/ollama/issues/6532/events
https://github.com/ollama/ollama/pull/6532
2,490,425,013
PR_kwDOJ0Z1Ps55oM7z
6,532
add safetensors to the modelfile docs
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-08-27T21:41:53
2024-08-27T21:46:49
2024-08-27T21:46:48
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6532", "html_url": "https://github.com/ollama/ollama/pull/6532", "diff_url": "https://github.com/ollama/ollama/pull/6532.diff", "patch_url": "https://github.com/ollama/ollama/pull/6532.patch", "merged_at": "2024-08-27T21:46:47" }
null
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6532/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6532/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4114
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4114/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4114/comments
https://api.github.com/repos/ollama/ollama/issues/4114/events
https://github.com/ollama/ollama/issues/4114
2,276,740,449
I_kwDOJ0Z1Ps6HtE1h
4,114
`ollama create` tries to pull model when using quotes in `FROM` line
{ "login": "savareyhano", "id": 32730327, "node_id": "MDQ6VXNlcjMyNzMwMzI3", "avatar_url": "https://avatars.githubusercontent.com/u/32730327?v=4", "gravatar_id": "", "url": "https://api.github.com/users/savareyhano", "html_url": "https://github.com/savareyhano", "followers_url": "https://api.github.com/users/savareyhano/followers", "following_url": "https://api.github.com/users/savareyhano/following{/other_user}", "gists_url": "https://api.github.com/users/savareyhano/gists{/gist_id}", "starred_url": "https://api.github.com/users/savareyhano/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/savareyhano/subscriptions", "organizations_url": "https://api.github.com/users/savareyhano/orgs", "repos_url": "https://api.github.com/users/savareyhano/repos", "events_url": "https://api.github.com/users/savareyhano/events{/privacy}", "received_events_url": "https://api.github.com/users/savareyhano/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
3
2024-05-03T01:10:02
2024-05-08T05:52:28
2024-05-08T05:51:01
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue?

Steps to reproduce:

1. Download and install [Ollama v0.1.33 from the release page](https://github.com/ollama/ollama/releases/tag/v0.1.33)
2. Download any gguf file and make a Modelfile for it (for example, I'm using Hermes 2 Pro):

```
FROM "./Hermes-2-Pro-Mistral-7B.Q8_0.gguf"
TEMPLATE """{{ if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}{{ if .Prompt }}<|im_start|>user
{{ .Prompt }}<|im_end|>
{{ end }}<|im_start|>assistant
"""
PARAMETER stop <|im_start|>
PARAMETER stop <|im_end|>
PARAMETER num_ctx 4096
```

3. Create the model from the Modelfile:

```bash
ollama create hermes2-pro-mistral:7b-q8_0 -f ./Modelfile
```

4. Run `ollama list` and see if the model is registered from the Modelfile (it is not).

Rolled back to [Ollama v0.1.32](https://github.com/ollama/ollama/releases/tag/v0.1.32) and it is working again, so I'm pretty sure this is a problem with version 0.1.33.

### OS

Windows

### GPU

Nvidia

### CPU

AMD

### Ollama version

0.1.33
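A hedged workaround while staying on 0.1.33, based on the title of this report: keep the `FROM` path unquoted so the parser treats it as a local file rather than a registry name to pull. A sketch:

```bash
# Only the FROM line is shown here; keep the TEMPLATE and PARAMETER
# lines from the original Modelfile unchanged.
cat > Modelfile <<'EOF'
FROM ./Hermes-2-Pro-Mistral-7B.Q8_0.gguf
EOF
ollama create hermes2-pro-mistral:7b-q8_0 -f ./Modelfile
```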
{ "login": "savareyhano", "id": 32730327, "node_id": "MDQ6VXNlcjMyNzMwMzI3", "avatar_url": "https://avatars.githubusercontent.com/u/32730327?v=4", "gravatar_id": "", "url": "https://api.github.com/users/savareyhano", "html_url": "https://github.com/savareyhano", "followers_url": "https://api.github.com/users/savareyhano/followers", "following_url": "https://api.github.com/users/savareyhano/following{/other_user}", "gists_url": "https://api.github.com/users/savareyhano/gists{/gist_id}", "starred_url": "https://api.github.com/users/savareyhano/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/savareyhano/subscriptions", "organizations_url": "https://api.github.com/users/savareyhano/orgs", "repos_url": "https://api.github.com/users/savareyhano/repos", "events_url": "https://api.github.com/users/savareyhano/events{/privacy}", "received_events_url": "https://api.github.com/users/savareyhano/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4114/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4114/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/56
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/56/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/56/comments
https://api.github.com/repos/ollama/ollama/issues/56/events
https://github.com/ollama/ollama/pull/56
1,794,116,306
PR_kwDOJ0Z1Ps5U8rWu
56
if directory cannot be resolved, do not fail
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/BruceMacD/followers", "following_url": "https://api.github.com/users/BruceMacD/following{/other_user}", "gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}", "starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions", "organizations_url": "https://api.github.com/users/BruceMacD/orgs", "repos_url": "https://api.github.com/users/BruceMacD/repos", "events_url": "https://api.github.com/users/BruceMacD/events{/privacy}", "received_events_url": "https://api.github.com/users/BruceMacD/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
1
2023-07-07T19:28:14
2023-07-11T14:19:32
2023-07-08T03:18:25
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/56", "html_url": "https://github.com/ollama/ollama/pull/56", "diff_url": "https://github.com/ollama/ollama/pull/56.diff", "patch_url": "https://github.com/ollama/ollama/pull/56.patch", "merged_at": "2023-07-08T03:18:25" }
allow for offline mode
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/BruceMacD/followers", "following_url": "https://api.github.com/users/BruceMacD/following{/other_user}", "gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}", "starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions", "organizations_url": "https://api.github.com/users/BruceMacD/orgs", "repos_url": "https://api.github.com/users/BruceMacD/repos", "events_url": "https://api.github.com/users/BruceMacD/events{/privacy}", "received_events_url": "https://api.github.com/users/BruceMacD/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/56/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/56/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2782
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2782/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2782/comments
https://api.github.com/repos/ollama/ollama/issues/2782/events
https://github.com/ollama/ollama/issues/2782
2,156,791,878
I_kwDOJ0Z1Ps6AjghG
2,782
Why does Gemma perform so badly on some simple questions?
{ "login": "brightzheng100", "id": 1422425, "node_id": "MDQ6VXNlcjE0MjI0MjU=", "avatar_url": "https://avatars.githubusercontent.com/u/1422425?v=4", "gravatar_id": "", "url": "https://api.github.com/users/brightzheng100", "html_url": "https://github.com/brightzheng100", "followers_url": "https://api.github.com/users/brightzheng100/followers", "following_url": "https://api.github.com/users/brightzheng100/following{/other_user}", "gists_url": "https://api.github.com/users/brightzheng100/gists{/gist_id}", "starred_url": "https://api.github.com/users/brightzheng100/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/brightzheng100/subscriptions", "organizations_url": "https://api.github.com/users/brightzheng100/orgs", "repos_url": "https://api.github.com/users/brightzheng100/repos", "events_url": "https://api.github.com/users/brightzheng100/events{/privacy}", "received_events_url": "https://api.github.com/users/brightzheng100/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
4
2024-02-27T14:50:45
2024-05-10T01:14:33
2024-05-10T01:14:32
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Just tried the `Gemma` model but am not sure why it performed so badly.

```sh
ollama pull gemma:2b
ollama run gemma:2b
```

**So is it a model issue, or is the model hosted here different?**

For example, this should be a common-sense question but it has no idea:

```
>>> Do you think whether human can fly? Tell me why with step-by-step reasoning.
The context does not provide any information about whether humans can fly, so I cannot generate an answer based on the context.
```

While this one is interesting: the final answer is wrong while the reasoning process is correct:

```
>>> Michael is 10 years old. Rob is 2 years older than Amy, while Amy is 5 years younger than Michael. How old is Rob?
Rob is 12 years old.

We know that Amy is 5 years younger than Michael, so she is 5 - 10 = 5 years old. Since Rob is 2 years older than Amy, he is 5 + 2 = 7 years old.
```

![ollama](https://github.com/ollama/ollama/assets/1422425/8dd16e62-1caf-4356-b23f-d1820c883124)
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2782/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2782/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2928
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2928/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2928/comments
https://api.github.com/repos/ollama/ollama/issues/2928/events
https://github.com/ollama/ollama/issues/2928
2,168,587,625
I_kwDOJ0Z1Ps6BQgVp
2,928
Error: could not connect to ollama app, is it running?
{ "login": "ttkrpink", "id": 2522889, "node_id": "MDQ6VXNlcjI1MjI4ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/2522889?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ttkrpink", "html_url": "https://github.com/ttkrpink", "followers_url": "https://api.github.com/users/ttkrpink/followers", "following_url": "https://api.github.com/users/ttkrpink/following{/other_user}", "gists_url": "https://api.github.com/users/ttkrpink/gists{/gist_id}", "starred_url": "https://api.github.com/users/ttkrpink/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ttkrpink/subscriptions", "organizations_url": "https://api.github.com/users/ttkrpink/orgs", "repos_url": "https://api.github.com/users/ttkrpink/repos", "events_url": "https://api.github.com/users/ttkrpink/events{/privacy}", "received_events_url": "https://api.github.com/users/ttkrpink/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
6
2024-03-05T08:18:44
2024-03-18T08:44:51
2024-03-06T22:24:25
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I think I removed everything and reinstalled ollama on Ubuntu 22.04. After a “fresh” install, the command line cannot connect to the ollama app.

```
Ubuntu: ~ $ curl -fsSL https://ollama.com/install.sh | sh
>>> Downloading ollama...
######################################################################## 100.0%#=#=-#  #
######################################################################## 100.0%
>>> Installing ollama to /usr/local/bin...
[sudo] password for user:
>>> Adding ollama user to render group...
>>> Adding ollama user to video group...
>>> Adding current user to ollama group...
>>> Creating ollama systemd service...
>>> Enabling and starting ollama service...
>>> NVIDIA GPU installed.
Ubuntu: ~ $ ollama
Usage:
  ollama [flags]
  ollama [command]

Available Commands:
  serve       Start ollama
  create      Create a model from a Modelfile
  show        Show information for a model
  run         Run a model
  pull        Pull a model from a registry
  push        Push a model to a registry
  list        List models
  cp          Copy a model
  rm          Remove a model
  help        Help about any command

Flags:
  -h, --help      help for ollama
  -v, --version   Show version information

Use "ollama [command] --help" for more information about a command.
Ubuntu: ~ $ ollama list
Error: could not connect to ollama app, is it running?
Ubuntu: ~ $ sudo service ollama status
● ollama.service - Ollama Service
     Loaded: loaded (/etc/systemd/system/ollama.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/ollama.service.d
             └─override.conf
     Active: active (running) since Tue 2024-03-05 16:04:03 CST; 28s ago
   Main PID: 605340 (ollama)
      Tasks: 27 (limit: 76514)
     Memory: 486.3M
        CPU: 7.895s
     CGroup: /system.slice/ollama.service
             └─605340 /usr/local/bin/ollama serve

3月 05 16:04:03 Ubuntu ollama[605340]: time=2024-03-05T16:04:03.849+08:00 level=INFO source=images.go:717 msg="total unused blobs removed: 0"
3月 05 16:04:03 Ubuntu ollama[605340]: time=2024-03-05T16:04:03.849+08:00 level=INFO source=routes.go:1019 msg="Listening on [::]:33020 (version 0.1.27)"
3月 05 16:04:03 Ubuntu ollama[605340]: time=2024-03-05T16:04:03.850+08:00 level=INFO source=payload_common.go:107 msg="Extracting dynamic libraries..."
3月 05 16:04:08 Ubuntu ollama[605340]: time=2024-03-05T16:04:08.164+08:00 level=INFO source=payload_common.go:146 msg="Dynamic LLM libraries [cpu_avx cpu >
3月 05 16:04:08 Ubuntu ollama[605340]: time=2024-03-05T16:04:08.164+08:00 level=INFO source=gpu.go:94 msg="Detecting GPU type"
3月 05 16:04:08 Ubuntu ollama[605340]: time=2024-03-05T16:04:08.164+08:00 level=INFO source=gpu.go:265 msg="Searching for GPU management library libnvidia>
3月 05 16:04:08 Ubuntu ollama[605340]: time=2024-03-05T16:04:08.168+08:00 level=INFO source=gpu.go:311 msg="Discovered GPU libraries: [/usr/lib/x86_64-lin>
3月 05 16:04:08 Ubuntu ollama[605340]: time=2024-03-05T16:04:08.175+08:00 level=INFO source=gpu.go:99 msg="Nvidia GPU detected"
3月 05 16:04:08 Ubuntu ollama[605340]: time=2024-03-05T16:04:08.175+08:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
3月 05 16:04:08 Ubuntu ollama[605340]: time=2024-03-05T16:04:08.191+08:00 level=INFO source=gpu.go:146 msg="CUDA Compute Capability detected: 8.6"
```

The systemd override sets a non-default port:

```
[Service]
Environment="OLLAMA_HOST=0.0.0.0:33020"
```

I have to run `ollama serve` first, and then I can pull model files. If I check the service ports, both 33020 and 11434 are in service. If ollama is running as a service, am I supposed to be able to download model files directly, without launching another `ollama serve` from the command line? Thanks
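A plausible explanation given the override shown above: the CLI client defaults to 127.0.0.1:11434, so when the unit file moves the server to port 33020, the same `OLLAMA_HOST` value must also be exported in the shell that runs the client. A sketch, assuming the override stays in place:

```bash
# Point the client at the port the systemd service is actually using,
# so it talks to the already-running service instead of needing a
# second `ollama serve` bound to the default port.
export OLLAMA_HOST=127.0.0.1:33020
ollama list
ollama pull llama2
```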
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2928/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2928/timeline
null
not_planned
false
https://api.github.com/repos/ollama/ollama/issues/4157
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4157/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4157/comments
https://api.github.com/repos/ollama/ollama/issues/4157/events
https://github.com/ollama/ollama/issues/4157
2,279,259,620
I_kwDOJ0Z1Ps6H2r3k
4,157
Bunny-Llama-3-8B-V
{ "login": "rawzone", "id": 2092357, "node_id": "MDQ6VXNlcjIwOTIzNTc=", "avatar_url": "https://avatars.githubusercontent.com/u/2092357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rawzone", "html_url": "https://github.com/rawzone", "followers_url": "https://api.github.com/users/rawzone/followers", "following_url": "https://api.github.com/users/rawzone/following{/other_user}", "gists_url": "https://api.github.com/users/rawzone/gists{/gist_id}", "starred_url": "https://api.github.com/users/rawzone/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rawzone/subscriptions", "organizations_url": "https://api.github.com/users/rawzone/orgs", "repos_url": "https://api.github.com/users/rawzone/repos", "events_url": "https://api.github.com/users/rawzone/events{/privacy}", "received_events_url": "https://api.github.com/users/rawzone/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
null
[]
null
3
2024-05-05T00:41:29
2024-05-09T08:59:47
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Would love to see: [Bunny-Llama-3-8B-V](https://huggingface.co./BAAI/Bunny-Llama-3-8B-V) included in the Ollama models. > Bunny is a family of lightweight but powerful multimodal models. It offers multiple plug-and-play vision encoders, like EVA-CLIP, SigLIP and language backbones, including Llama-3-8B, Phi-1.5, StableLM-2, Qwen1.5, MiniCPM and Phi-2. To compensate for the decrease in model size, we construct more informative training data by curated selection from a broader data source.
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4157/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4157/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/6602
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6602/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6602/comments
https://api.github.com/repos/ollama/ollama/issues/6602/events
https://github.com/ollama/ollama/issues/6602
2,502,088,293
I_kwDOJ0Z1Ps6VItZl
6,602
n_ctx parameter display error
{ "login": "JinheTang", "id": 97284834, "node_id": "U_kgDOBcxy4g", "avatar_url": "https://avatars.githubusercontent.com/u/97284834?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JinheTang", "html_url": "https://github.com/JinheTang", "followers_url": "https://api.github.com/users/JinheTang/followers", "following_url": "https://api.github.com/users/JinheTang/following{/other_user}", "gists_url": "https://api.github.com/users/JinheTang/gists{/gist_id}", "starred_url": "https://api.github.com/users/JinheTang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JinheTang/subscriptions", "organizations_url": "https://api.github.com/users/JinheTang/orgs", "repos_url": "https://api.github.com/users/JinheTang/repos", "events_url": "https://api.github.com/users/JinheTang/events{/privacy}", "received_events_url": "https://api.github.com/users/JinheTang/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
2024-09-03T07:06:16
2024-09-03T16:24:39
2024-09-03T16:24:39
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
When I create a model from a Modelfile:

```
FROM mistral:latest
PARAMETER num_ctx 4096
```

and load it with:

```
ollama create mistral:latest-nctx4096 -f Modelfile
ollama run mistral:latest-nctx4096
```

setting `num_ctx` to 4096, I discovered that on the server end `n_ctx` is wrongly displayed as `16384`:

![image](https://github.com/user-attachments/assets/c71f4668-40cd-4bcf-b42d-186028eba52a)

### OS

Linux

### GPU

_No response_

### CPU

Intel

### Ollama version

0.3.9
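One hedged reading of the number: in this era the runner sizes its context as `num_ctx` multiplied by the number of parallel request slots, and with the common default of 4 slots, 4096 * 4 = 16384, which matches the log exactly. To confirm what was actually recorded from the Modelfile (assuming `ollama show` is available in this version):

```bash
# Print the parameters stored with the new model; num_ctx should read 4096
# even when the server-side n_ctx log shows the multiplied figure.
ollama show mistral:latest-nctx4096 --parameters
```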
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6602/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6602/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/5045
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5045/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5045/comments
https://api.github.com/repos/ollama/ollama/issues/5045/events
https://github.com/ollama/ollama/pull/5045
2,353,703,341
PR_kwDOJ0Z1Ps5ygAj8
5,045
openai: do not set temperature to 0 when setting seed
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-06-14T16:24:20
2024-06-14T20:43:57
2024-06-14T20:43:56
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5045", "html_url": "https://github.com/ollama/ollama/pull/5045", "diff_url": "https://github.com/ollama/ollama/pull/5045.diff", "patch_url": "https://github.com/ollama/ollama/pull/5045.patch", "merged_at": "2024-06-14T20:43:56" }
`temperature` was previously set to 0 for reproducible outputs when setting seed; however, this is not required Note https://github.com/ollama/ollama/issues/4990 is still an open issue on Nvidia/AMD GPUs Fixes https://github.com/ollama/ollama/issues/5044
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5045/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5045/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/795
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/795/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/795/comments
https://api.github.com/repos/ollama/ollama/issues/795/events
https://github.com/ollama/ollama/issues/795
1,944,162,393
I_kwDOJ0Z1Ps5z4ZBZ
795
Unable to download any models on Amazon Linux 2023 on EC2
{ "login": "drnushooz", "id": 10852951, "node_id": "MDQ6VXNlcjEwODUyOTUx", "avatar_url": "https://avatars.githubusercontent.com/u/10852951?v=4", "gravatar_id": "", "url": "https://api.github.com/users/drnushooz", "html_url": "https://github.com/drnushooz", "followers_url": "https://api.github.com/users/drnushooz/followers", "following_url": "https://api.github.com/users/drnushooz/following{/other_user}", "gists_url": "https://api.github.com/users/drnushooz/gists{/gist_id}", "starred_url": "https://api.github.com/users/drnushooz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/drnushooz/subscriptions", "organizations_url": "https://api.github.com/users/drnushooz/orgs", "repos_url": "https://api.github.com/users/drnushooz/repos", "events_url": "https://api.github.com/users/drnushooz/events{/privacy}", "received_events_url": "https://api.github.com/users/drnushooz/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
5
2023-10-16T01:15:40
2023-12-20T21:46:42
2023-10-16T20:17:29
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I am trying to download Llama2 and Medllama2 on Amazon Linux 2023. I have verified that the security group has permission to reach the outside world, and it does. The Ollama installation was successful using:

```
curl https://ollama.ai/install.sh | sh
```

However, when I try to run `ollama pull medllama2`, it errors out with the following:

```
Error: Head "https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/7d/7da0afc1bbe70f988bc3c7f07d7dfcd16230d0214724a6dc1c639a56657a385f/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77ac1b787c3af820529859349a%!F(MISSING)20231016%!F(MISSING)auto%!F(MISSING)s3%!F(MISSING)aws4_request&X-Amz-Date=20231016T010928Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=76b083752834ca2f16ba864c9aa146a90d0ec203696e415f6ff20eca3ea74f20": read tcp 10.220.8.84:38428->104.18.9.90:443: read: connection reset by peer
```

I have tried multiple models and everything fails. Here is the OpenSSL and Ollama version information:

```
OpenSSL 3.0.8 7 Feb 2023 (Library: OpenSSL 3.0.8 7 Feb 2023)
ollama version 0.1.3
```
{ "login": "drnushooz", "id": 10852951, "node_id": "MDQ6VXNlcjEwODUyOTUx", "avatar_url": "https://avatars.githubusercontent.com/u/10852951?v=4", "gravatar_id": "", "url": "https://api.github.com/users/drnushooz", "html_url": "https://github.com/drnushooz", "followers_url": "https://api.github.com/users/drnushooz/followers", "following_url": "https://api.github.com/users/drnushooz/following{/other_user}", "gists_url": "https://api.github.com/users/drnushooz/gists{/gist_id}", "starred_url": "https://api.github.com/users/drnushooz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/drnushooz/subscriptions", "organizations_url": "https://api.github.com/users/drnushooz/orgs", "repos_url": "https://api.github.com/users/drnushooz/repos", "events_url": "https://api.github.com/users/drnushooz/events{/privacy}", "received_events_url": "https://api.github.com/users/drnushooz/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/795/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/795/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7867
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7867/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7867/comments
https://api.github.com/repos/ollama/ollama/issues/7867/events
https://github.com/ollama/ollama/issues/7867
2,700,131,100
I_kwDOJ0Z1Ps6g8Lsc
7,867
Deepseek (various) 236b crashes on run
{ "login": "Maltz42", "id": 20978744, "node_id": "MDQ6VXNlcjIwOTc4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/20978744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Maltz42", "html_url": "https://github.com/Maltz42", "followers_url": "https://api.github.com/users/Maltz42/followers", "following_url": "https://api.github.com/users/Maltz42/following{/other_user}", "gists_url": "https://api.github.com/users/Maltz42/gists{/gist_id}", "starred_url": "https://api.github.com/users/Maltz42/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Maltz42/subscriptions", "organizations_url": "https://api.github.com/users/Maltz42/orgs", "repos_url": "https://api.github.com/users/Maltz42/repos", "events_url": "https://api.github.com/users/Maltz42/events{/privacy}", "received_events_url": "https://api.github.com/users/Maltz42/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 6677367769, "node_id": "LA_kwDOJ0Z1Ps8AAAABjgCL2Q", "url": "https://api.github.com/repos/ollama/ollama/labels/needs%20more%20info", "name": "needs more info", "color": "BA8041", "default": false, "description": "More information is needed to assist" } ]
open
false
null
[]
null
19
2024-11-27T23:00:55
2025-01-13T23:43:24
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue?

Deepseek V2, V2.5, and V2-coder all crash with an OOM error when loading the 236b size. Other versions of Deepseek may as well; those are all I've tested. Hardware is dual A6000s with 48GB each.

```
Error: llama runner process has terminated: cudaMalloc failed: out of memory
ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 882903040
llama_new_context_with_model: failed to allocate compute buffers
```

### OS
Linux

### GPU
Nvidia

### CPU
AMD

### Ollama version
v0.4.5
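If the model almost fits, one workaround worth trying is to cap the number of layers offloaded to the GPUs with the `num_gpu` request option, leaving the remainder in system RAM. A minimal sketch; the layer count and model tag are illustrative, not prescriptive:

```sh
# Offload only 40 layers to the GPUs; tune the number down until the
# CUDA compute buffers fit within the 2x48GB of VRAM
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-v2.5:236b",
  "prompt": "hello",
  "options": { "num_gpu": 40 }
}'
```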
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/users/rick-github/followers", "following_url": "https://api.github.com/users/rick-github/following{/other_user}", "gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}", "starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rick-github/subscriptions", "organizations_url": "https://api.github.com/users/rick-github/orgs", "repos_url": "https://api.github.com/users/rick-github/repos", "events_url": "https://api.github.com/users/rick-github/events{/privacy}", "received_events_url": "https://api.github.com/users/rick-github/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7867/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7867/timeline
null
reopened
false
https://api.github.com/repos/ollama/ollama/issues/6241
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6241/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6241/comments
https://api.github.com/repos/ollama/ollama/issues/6241/events
https://github.com/ollama/ollama/pull/6241
2,454,274,622
PR_kwDOJ0Z1Ps53wWsa
6,241
Speech Prototype
{ "login": "royjhan", "id": 65097070, "node_id": "MDQ6VXNlcjY1MDk3MDcw", "avatar_url": "https://avatars.githubusercontent.com/u/65097070?v=4", "gravatar_id": "", "url": "https://api.github.com/users/royjhan", "html_url": "https://github.com/royjhan", "followers_url": "https://api.github.com/users/royjhan/followers", "following_url": "https://api.github.com/users/royjhan/following{/other_user}", "gists_url": "https://api.github.com/users/royjhan/gists{/gist_id}", "starred_url": "https://api.github.com/users/royjhan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/royjhan/subscriptions", "organizations_url": "https://api.github.com/users/royjhan/orgs", "repos_url": "https://api.github.com/users/royjhan/repos", "events_url": "https://api.github.com/users/royjhan/events{/privacy}", "received_events_url": "https://api.github.com/users/royjhan/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
7
2024-08-07T20:19:13
2025-01-21T04:07:49
2024-11-21T10:04:31
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
true
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6241", "html_url": "https://github.com/ollama/ollama/pull/6241", "diff_url": "https://github.com/ollama/ollama/pull/6241.diff", "patch_url": "https://github.com/ollama/ollama/pull/6241.patch", "merged_at": null }
whisper.cpp with a custom ggml build, wav audio input. Instructions for running are in the md. As of now this would require conversion to ggml format to run inference; we would wait to see the general momentum surrounding speech-to-text models as bigger players release foundational models.
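For anyone wanting to try the path described above without doing a manual conversion, whisper.cpp ships a helper script for fetching pre-converted ggml weights. A sketch, with script and binary names as documented in the upstream whisper.cpp README at the time of writing:

```sh
# Fetch a pre-converted ggml Whisper model and transcribe a 16 kHz wav file
git clone https://github.com/ggerganov/whisper.cpp
cd whisper.cpp
./models/download-ggml-model.sh base.en
make
./main -m models/ggml-base.en.bin -f samples/jfk.wav
```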
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/users/mchiang0610/followers", "following_url": "https://api.github.com/users/mchiang0610/following{/other_user}", "gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}", "starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions", "organizations_url": "https://api.github.com/users/mchiang0610/orgs", "repos_url": "https://api.github.com/users/mchiang0610/repos", "events_url": "https://api.github.com/users/mchiang0610/events{/privacy}", "received_events_url": "https://api.github.com/users/mchiang0610/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6241/reactions", "total_count": 14, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 10, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6241/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/6372
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6372/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6372/comments
https://api.github.com/repos/ollama/ollama/issues/6372/events
https://github.com/ollama/ollama/issues/6372
2,468,038,732
I_kwDOJ0Z1Ps6TG0hM
6,372
System tray icon is empty
{ "login": "Biyakuga", "id": 67515021, "node_id": "MDQ6VXNlcjY3NTE1MDIx", "avatar_url": "https://avatars.githubusercontent.com/u/67515021?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Biyakuga", "html_url": "https://github.com/Biyakuga", "followers_url": "https://api.github.com/users/Biyakuga/followers", "following_url": "https://api.github.com/users/Biyakuga/following{/other_user}", "gists_url": "https://api.github.com/users/Biyakuga/gists{/gist_id}", "starred_url": "https://api.github.com/users/Biyakuga/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Biyakuga/subscriptions", "organizations_url": "https://api.github.com/users/Biyakuga/orgs", "repos_url": "https://api.github.com/users/Biyakuga/repos", "events_url": "https://api.github.com/users/Biyakuga/events{/privacy}", "received_events_url": "https://api.github.com/users/Biyakuga/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5860134234, "node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg", "url": "https://api.github.com/repos/ollama/ollama/labels/windows", "name": "windows", "color": "0052CC", "default": false, "description": "" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
1
2024-08-15T13:17:16
2024-08-15T22:31:17
2024-08-15T22:31:17
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue?

In the image below, you'll notice that the hover bubble for the Ollama system tray icon is empty. In contrast, other apps display the application's name when you hover over their icons.

![image](https://github.com/user-attachments/assets/d79a6b65-6a22-4cce-ab62-500a0b409b98)

### OS
Windows

### GPU
Nvidia, Intel

### CPU
Intel

### Ollama version
0.3.6
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6372/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6372/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1046
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1046/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1046/comments
https://api.github.com/repos/ollama/ollama/issues/1046/events
https://github.com/ollama/ollama/issues/1046
1,984,002,491
I_kwDOJ0Z1Ps52QXm7
1,046
Add flag to force CPU only (instead of only autodetecting based on OS)
{ "login": "joake", "id": 11403993, "node_id": "MDQ6VXNlcjExNDAzOTkz", "avatar_url": "https://avatars.githubusercontent.com/u/11403993?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joake", "html_url": "https://github.com/joake", "followers_url": "https://api.github.com/users/joake/followers", "following_url": "https://api.github.com/users/joake/following{/other_user}", "gists_url": "https://api.github.com/users/joake/gists{/gist_id}", "starred_url": "https://api.github.com/users/joake/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joake/subscriptions", "organizations_url": "https://api.github.com/users/joake/orgs", "repos_url": "https://api.github.com/users/joake/repos", "events_url": "https://api.github.com/users/joake/events{/privacy}", "received_events_url": "https://api.github.com/users/joake/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
9
2023-11-08T16:40:57
2024-01-30T21:16:23
2024-01-14T22:03:24
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Requesting a build flag to use only the CPU with ollama, not the GPU.

Users on macOS models without Metal support can only run ollama on the CPU. Currently in llama.go the function NumGPU defaults to returning 1 (enabling Metal by default on all macOS) and the function chooseRunners adds Metal to the runners by default on all "darwin" systems. This can lead to the error:

```sh
ggml_metal_init: allocating
ggml_metal_init: found device: Intel(R) UHD Graphics 630
ggml_metal_init: found device: AMD Radeon Pro 5500M
ggml_metal_init: picking default device: AMD Radeon Pro 5500M
ggml_metal_init: default.metallib not found, loading from source
2023/11/08 16:22:47 llama.go:399: signal: segmentation fault
2023/11/08 16:22:47 llama.go:407: error starting llama runner: llama runner process has terminated
2023/11/08 16:22:47 llama.go:473: llama runner stopped successfully
```

Disabling Metal by returning 0 in NumGPU and removing Metal from the chooseRunners function (by changing darwin to narwid, for example) will circumvent this issue and run on the CPU only.
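Until such a build flag exists, a per-request workaround achieving the same effect is to set `num_gpu` to 0 in the request options, which disables GPU/Metal offload entirely without rebuilding. A minimal sketch:

```sh
# Force CPU-only inference for a single request by offloading zero layers
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "why is the sky blue?",
  "options": { "num_gpu": 0 }
}'
```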
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1046/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1046/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1610
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1610/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1610/comments
https://api.github.com/repos/ollama/ollama/issues/1610/events
https://github.com/ollama/ollama/issues/1610
2,049,237,003
I_kwDOJ0Z1Ps56JOAL
1,610
Linux Mint ollama pull model
{ "login": "dwk601", "id": 108056780, "node_id": "U_kgDOBnDQzA", "avatar_url": "https://avatars.githubusercontent.com/u/108056780?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dwk601", "html_url": "https://github.com/dwk601", "followers_url": "https://api.github.com/users/dwk601/followers", "following_url": "https://api.github.com/users/dwk601/following{/other_user}", "gists_url": "https://api.github.com/users/dwk601/gists{/gist_id}", "starred_url": "https://api.github.com/users/dwk601/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dwk601/subscriptions", "organizations_url": "https://api.github.com/users/dwk601/orgs", "repos_url": "https://api.github.com/users/dwk601/repos", "events_url": "https://api.github.com/users/dwk601/events{/privacy}", "received_events_url": "https://api.github.com/users/dwk601/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
3
2023-12-19T18:37:57
2023-12-19T19:23:37
2023-12-19T18:42:38
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I'm using Linux Mint, and after pulling a model with `ollama pull mistral` I don't see any downloaded model in the ollama folder in the /usr directory. Does anyone know where the model was downloaded?
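For reference, the Linux service install keeps models under the ollama service user's home rather than directly under /usr. A quick way to check, with paths per the Ollama FAQ (they move if `OLLAMA_MODELS` is set):

```sh
# List models known to the server
ollama list

# Default blob/manifest location for the Linux service install
ls /usr/share/ollama/.ollama/models

# If you run "ollama serve" as your own user instead
ls ~/.ollama/models
```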
{ "login": "technovangelist", "id": 633681, "node_id": "MDQ6VXNlcjYzMzY4MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/633681?v=4", "gravatar_id": "", "url": "https://api.github.com/users/technovangelist", "html_url": "https://github.com/technovangelist", "followers_url": "https://api.github.com/users/technovangelist/followers", "following_url": "https://api.github.com/users/technovangelist/following{/other_user}", "gists_url": "https://api.github.com/users/technovangelist/gists{/gist_id}", "starred_url": "https://api.github.com/users/technovangelist/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/technovangelist/subscriptions", "organizations_url": "https://api.github.com/users/technovangelist/orgs", "repos_url": "https://api.github.com/users/technovangelist/repos", "events_url": "https://api.github.com/users/technovangelist/events{/privacy}", "received_events_url": "https://api.github.com/users/technovangelist/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1610/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1610/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/62
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/62/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/62/comments
https://api.github.com/repos/ollama/ollama/issues/62/events
https://github.com/ollama/ollama/pull/62
1,795,815,924
PR_kwDOJ0Z1Ps5VCShs
62
llama: replace bindings with `llama_*` calls
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
2
2023-07-10T02:44:43
2023-07-11T02:34:35
2023-07-10T22:51:37
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
true
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/62", "html_url": "https://github.com/ollama/ollama/pull/62", "diff_url": "https://github.com/ollama/ollama/pull/62.diff", "patch_url": "https://github.com/ollama/ollama/pull/62.patch", "merged_at": null }
Early PR to replace the C++ binding files with direct calls to llama.cpp from Go. It's missing quite a few features the C++ `binding.cpp` files had, although those were almost direct copies of the `main` example in llama.cpp's repo, so we should be able to add them back relatively easily. It's most likely slower as well, so we'll have to make sure it's as fast as or faster than the bindings or the C++ example implementation.
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/62/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/62/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5157
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5157/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5157/comments
https://api.github.com/repos/ollama/ollama/issues/5157/events
https://github.com/ollama/ollama/issues/5157
2,363,401,313
I_kwDOJ0Z1Ps6M3qRh
5,157
Update llama.cpp to support qwen2-57B-A14B pls
{ "login": "CoreJa", "id": 28624864, "node_id": "MDQ6VXNlcjI4NjI0ODY0", "avatar_url": "https://avatars.githubusercontent.com/u/28624864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/CoreJa", "html_url": "https://github.com/CoreJa", "followers_url": "https://api.github.com/users/CoreJa/followers", "following_url": "https://api.github.com/users/CoreJa/following{/other_user}", "gists_url": "https://api.github.com/users/CoreJa/gists{/gist_id}", "starred_url": "https://api.github.com/users/CoreJa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/CoreJa/subscriptions", "organizations_url": "https://api.github.com/users/CoreJa/orgs", "repos_url": "https://api.github.com/users/CoreJa/repos", "events_url": "https://api.github.com/users/CoreJa/events{/privacy}", "received_events_url": "https://api.github.com/users/CoreJa/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
3
2024-06-20T02:57:01
2024-07-05T17:25:59
2024-07-05T17:25:59
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue?

llama.cpp is able to support the qwen2-57B-A14B MoE model now via https://github.com/ggerganov/llama.cpp/pull/7835. Please help update the llama.cpp submodule to a newer version. Otherwise, i-quants of this model (I tried iq4_xs) end up with a `CUDA error: CUBLAS_STATUS_NOT_INITIALIZED`.

### OS
Linux

### GPU
Nvidia

### CPU
Intel

### Ollama version
v0.1.44
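For anyone building from source in the meantime, bumping the vendored llama.cpp is a standard submodule update. A sketch, assuming the submodule path `llm/llama.cpp` used by the repo at that time; the commit placeholder is deliberately left unfilled:

```sh
# Point the vendored llama.cpp at a newer upstream commit, then rebuild
cd llm/llama.cpp
git fetch origin
git checkout <commit-containing-pr-7835>   # placeholder; pick a commit that includes the qwen2 MoE fix
cd ../..
go generate ./...
go build .
```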
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5157/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5157/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/6291
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/6291/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/6291/comments
https://api.github.com/repos/ollama/ollama/issues/6291/events
https://github.com/ollama/ollama/pull/6291
2,458,500,400
PR_kwDOJ0Z1Ps53-pxJ
6,291
Don't hard fail on sparse setup error
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
1
2024-08-09T18:58:48
2024-08-09T19:30:27
2024-08-09T19:30:25
COLLABORATOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/6291", "html_url": "https://github.com/ollama/ollama/pull/6291", "diff_url": "https://github.com/ollama/ollama/pull/6291.diff", "patch_url": "https://github.com/ollama/ollama/pull/6291.patch", "merged_at": "2024-08-09T19:30:25" }
It seems this can fail in some cases, but we can proceed with the download anyway. I haven't reproduced the user's failure, but based on the log message and the recent regression, this seems highly likely to be the cause. Fixes #6263
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/6291/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/6291/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5254
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5254/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5254/comments
https://api.github.com/repos/ollama/ollama/issues/5254/events
https://github.com/ollama/ollama/issues/5254
2,369,992,286
I_kwDOJ0Z1Ps6NQzZe
5,254
Wrong parameters for gemma:7b
{ "login": "qzc438", "id": 61488260, "node_id": "MDQ6VXNlcjYxNDg4MjYw", "avatar_url": "https://avatars.githubusercontent.com/u/61488260?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qzc438", "html_url": "https://github.com/qzc438", "followers_url": "https://api.github.com/users/qzc438/followers", "following_url": "https://api.github.com/users/qzc438/following{/other_user}", "gists_url": "https://api.github.com/users/qzc438/gists{/gist_id}", "starred_url": "https://api.github.com/users/qzc438/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qzc438/subscriptions", "organizations_url": "https://api.github.com/users/qzc438/orgs", "repos_url": "https://api.github.com/users/qzc438/repos", "events_url": "https://api.github.com/users/qzc438/events{/privacy}", "received_events_url": "https://api.github.com/users/qzc438/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
8
2024-06-24T11:27:14
2024-07-09T16:20:23
2024-07-09T16:20:23
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue?

Type `ollama show gemma:7b`: the parameters are 9b, not 7b.

### OS
_No response_

### GPU
_No response_

### CPU
_No response_

### Ollama version
_No response_
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5254/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5254/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2584
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2584/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2584/comments
https://api.github.com/repos/ollama/ollama/issues/2584/events
https://github.com/ollama/ollama/pull/2584
2,141,243,546
PR_kwDOJ0Z1Ps5nOmg6
2,584
Update faq.md
{ "login": "elsatch", "id": 653433, "node_id": "MDQ6VXNlcjY1MzQzMw==", "avatar_url": "https://avatars.githubusercontent.com/u/653433?v=4", "gravatar_id": "", "url": "https://api.github.com/users/elsatch", "html_url": "https://github.com/elsatch", "followers_url": "https://api.github.com/users/elsatch/followers", "following_url": "https://api.github.com/users/elsatch/following{/other_user}", "gists_url": "https://api.github.com/users/elsatch/gists{/gist_id}", "starred_url": "https://api.github.com/users/elsatch/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/elsatch/subscriptions", "organizations_url": "https://api.github.com/users/elsatch/orgs", "repos_url": "https://api.github.com/users/elsatch/repos", "events_url": "https://api.github.com/users/elsatch/events{/privacy}", "received_events_url": "https://api.github.com/users/elsatch/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
2
2024-02-18T23:47:12
2024-02-20T13:56:43
2024-02-20T03:33:15
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2584", "html_url": "https://github.com/ollama/ollama/pull/2584", "diff_url": "https://github.com/ollama/ollama/pull/2584.diff", "patch_url": "https://github.com/ollama/ollama/pull/2584.patch", "merged_at": null }
Added a section on setting environment variables on Windows.
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2584/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2584/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7882
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7882/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7882/comments
https://api.github.com/repos/ollama/ollama/issues/7882/events
https://github.com/ollama/ollama/issues/7882
2,706,011,431
I_kwDOJ0Z1Ps6hSnUn
7,882
Support AMD Radeon 890m and NPU on Ryzen AI PC
{ "login": "GiorCocc", "id": 41432327, "node_id": "MDQ6VXNlcjQxNDMyMzI3", "avatar_url": "https://avatars.githubusercontent.com/u/41432327?v=4", "gravatar_id": "", "url": "https://api.github.com/users/GiorCocc", "html_url": "https://github.com/GiorCocc", "followers_url": "https://api.github.com/users/GiorCocc/followers", "following_url": "https://api.github.com/users/GiorCocc/following{/other_user}", "gists_url": "https://api.github.com/users/GiorCocc/gists{/gist_id}", "starred_url": "https://api.github.com/users/GiorCocc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/GiorCocc/subscriptions", "organizations_url": "https://api.github.com/users/GiorCocc/orgs", "repos_url": "https://api.github.com/users/GiorCocc/repos", "events_url": "https://api.github.com/users/GiorCocc/events{/privacy}", "received_events_url": "https://api.github.com/users/GiorCocc/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
closed
false
null
[]
null
2
2024-11-29T19:31:42
2024-12-02T15:40:49
2024-12-02T15:40:49
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Please consider adding support for AMD iGPUs like the Radeon 890M (available on the AMD Ryzen AI 9 HX 370) and the NPU. Thanks
{ "login": "rick-github", "id": 14946854, "node_id": "MDQ6VXNlcjE0OTQ2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14946854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rick-github", "html_url": "https://github.com/rick-github", "followers_url": "https://api.github.com/users/rick-github/followers", "following_url": "https://api.github.com/users/rick-github/following{/other_user}", "gists_url": "https://api.github.com/users/rick-github/gists{/gist_id}", "starred_url": "https://api.github.com/users/rick-github/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rick-github/subscriptions", "organizations_url": "https://api.github.com/users/rick-github/orgs", "repos_url": "https://api.github.com/users/rick-github/repos", "events_url": "https://api.github.com/users/rick-github/events{/privacy}", "received_events_url": "https://api.github.com/users/rick-github/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7882/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7882/timeline
null
not_planned
false
https://api.github.com/repos/ollama/ollama/issues/8634
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8634/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8634/comments
https://api.github.com/repos/ollama/ollama/issues/8634/events
https://github.com/ollama/ollama/issues/8634
2,815,768,303
I_kwDOJ0Z1Ps6n1Tbv
8,634
Ollama is not installing on Termux
{ "login": "imvickykumar999", "id": 50515418, "node_id": "MDQ6VXNlcjUwNTE1NDE4", "avatar_url": "https://avatars.githubusercontent.com/u/50515418?v=4", "gravatar_id": "", "url": "https://api.github.com/users/imvickykumar999", "html_url": "https://github.com/imvickykumar999", "followers_url": "https://api.github.com/users/imvickykumar999/followers", "following_url": "https://api.github.com/users/imvickykumar999/following{/other_user}", "gists_url": "https://api.github.com/users/imvickykumar999/gists{/gist_id}", "starred_url": "https://api.github.com/users/imvickykumar999/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/imvickykumar999/subscriptions", "organizations_url": "https://api.github.com/users/imvickykumar999/orgs", "repos_url": "https://api.github.com/users/imvickykumar999/repos", "events_url": "https://api.github.com/users/imvickykumar999/events{/privacy}", "received_events_url": "https://api.github.com/users/imvickykumar999/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
3
2025-01-28T14:01:22
2025-01-30T07:10:12
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
~ $ curl -fsSL https://ollama.com/install.sh | sh

```
>>> Installing ollama to /usr
No superuser binary detected. Are you rooted?
```
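The install script assumes a rooted, systemd-style Linux, so it is not expected to work under Termux. A commonly suggested alternative is building from source inside Termux; a sketch under the assumption that Termux's golang package can build the repo (build steps vary by Ollama release, and older releases also needed `go generate ./...` first):

```sh
pkg update
pkg install golang git cmake clang
git clone https://github.com/ollama/ollama
cd ollama
go build .
./ollama serve &
./ollama run llama3.2
```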
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8634/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8634/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/1559
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1559/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1559/comments
https://api.github.com/repos/ollama/ollama/issues/1559/events
https://github.com/ollama/ollama/issues/1559
2,044,618,908
I_kwDOJ0Z1Ps553mic
1,559
Pass in prompt as arguments is broken for multimodal models
{ "login": "elsatch", "id": 653433, "node_id": "MDQ6VXNlcjY1MzQzMw==", "avatar_url": "https://avatars.githubusercontent.com/u/653433?v=4", "gravatar_id": "", "url": "https://api.github.com/users/elsatch", "html_url": "https://github.com/elsatch", "followers_url": "https://api.github.com/users/elsatch/followers", "following_url": "https://api.github.com/users/elsatch/following{/other_user}", "gists_url": "https://api.github.com/users/elsatch/gists{/gist_id}", "starred_url": "https://api.github.com/users/elsatch/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/elsatch/subscriptions", "organizations_url": "https://api.github.com/users/elsatch/orgs", "repos_url": "https://api.github.com/users/elsatch/repos", "events_url": "https://api.github.com/users/elsatch/events{/privacy}", "received_events_url": "https://api.github.com/users/elsatch/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
6
2023-12-16T05:41:21
2024-03-11T23:59:16
2024-03-11T23:59:16
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
I have been trying to evaluate the performance of the multimodal models by passing the prompt as an argument, as stated in the README.md section. Whenever I pass the image as an argument to the ollama CLI, it hallucinates the whole response. Asking about the same image using the regular chat works without problem.

This is a sample image I am using:

![image](https://github.com/jmorganca/ollama/assets/653433/11de3f68-0613-447d-8c3b-23bdd546dc08)

These are the responses I am getting when passing the image as an argument:

![image](https://github.com/jmorganca/ollama/assets/653433/5d3ae67c-bca0-45d0-b62b-f22074aa1450)

Note: the original image was created by Leonid Mamchenkov, using the Carbon website to style the code.
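For reproduction, the documented way to pass an image on the command line is to embed the file path inside the prompt string itself, which the CLI detects and attaches. A minimal sketch with a hypothetical local path:

```sh
# The image path inside the quoted prompt is picked up and sent with the request
ollama run llava "What text is shown in this image? ./carbon_snippet.png"
```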
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1559/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1559/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4645
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4645/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4645/comments
https://api.github.com/repos/ollama/ollama/issues/4645/events
https://github.com/ollama/ollama/issues/4645
2,317,477,611
I_kwDOJ0Z1Ps6KIebr
4,645
3 GPUs, 2xNVIDIA and 1x AMD onboard - Can I force Python3 to use AMD, and Ollama to use 2xNVIDIA
{ "login": "HerroHK", "id": 170845944, "node_id": "U_kgDOCi7m-A", "avatar_url": "https://avatars.githubusercontent.com/u/170845944?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HerroHK", "html_url": "https://github.com/HerroHK", "followers_url": "https://api.github.com/users/HerroHK/followers", "following_url": "https://api.github.com/users/HerroHK/following{/other_user}", "gists_url": "https://api.github.com/users/HerroHK/gists{/gist_id}", "starred_url": "https://api.github.com/users/HerroHK/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HerroHK/subscriptions", "organizations_url": "https://api.github.com/users/HerroHK/orgs", "repos_url": "https://api.github.com/users/HerroHK/repos", "events_url": "https://api.github.com/users/HerroHK/events{/privacy}", "received_events_url": "https://api.github.com/users/HerroHK/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
0
2024-05-26T04:37:37
2024-05-26T04:37:37
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
My setup runs:

1x Asus TUF GAMING B650-PLUS ATX with onboard AMD GPU
2x 16GD6 ASUS RTX 4060 Ti ProArt OC

When looking at nvidia-smi, I can see that one of the RTX 4060s is dedicated to Ollama, and the other to python3 (Automatic1111). I have tried the `export CUDA_VISIBLE_DEVICES=0,1` command, but it seems that as long as Automatic1111 is active, one RTX 4060 remains dedicated to Python3. This is confirmed by GPU activity: when creating an image using Automatic1111, the dedicated GPU (1) goes to 100%, and when entering a query to Ollama I can see GPU (0) spin up.

Is there a way to set Python3 to use the onboard AMD GPU (2) and Ollama to use GPUs 0 and 1? I am expecting far less image creation on these systems than Ollama usage. TIA
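One way to express that split, assuming Automatic1111 is launched from a shell you control, is to scope `CUDA_VISIBLE_DEVICES` per process instead of exporting it globally. The onboard AMD GPU is not a CUDA device, so hiding all CUDA devices from the Python process is the closest approximation; a sketch, with device indices assumed from nvidia-smi:

```sh
# Give both RTX 4060 Ti cards to the Ollama server
CUDA_VISIBLE_DEVICES=0,1 ollama serve

# Hide the NVIDIA cards from Automatic1111 so it cannot claim one; it will
# fall back to CPU unless it has a working ROCm build for the iGPU
CUDA_VISIBLE_DEVICES="" python3 launch.py
```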
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4645/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4645/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/2649
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2649/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2649/comments
https://api.github.com/repos/ollama/ollama/issues/2649/events
https://github.com/ollama/ollama/issues/2649
2,147,553,104
I_kwDOJ0Z1Ps6AAQ9Q
2,649
Issue with new model Gemma
{ "login": "jaifar530", "id": 31308766, "node_id": "MDQ6VXNlcjMxMzA4NzY2", "avatar_url": "https://avatars.githubusercontent.com/u/31308766?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jaifar530", "html_url": "https://github.com/jaifar530", "followers_url": "https://api.github.com/users/jaifar530/followers", "following_url": "https://api.github.com/users/jaifar530/following{/other_user}", "gists_url": "https://api.github.com/users/jaifar530/gists{/gist_id}", "starred_url": "https://api.github.com/users/jaifar530/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jaifar530/subscriptions", "organizations_url": "https://api.github.com/users/jaifar530/orgs", "repos_url": "https://api.github.com/users/jaifar530/repos", "events_url": "https://api.github.com/users/jaifar530/events{/privacy}", "received_events_url": "https://api.github.com/users/jaifar530/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
2
2024-02-21T19:46:40
2024-02-21T23:18:25
2024-02-21T23:18:24
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
After pulling the new Gemma model I got this issue. Note that the issue only occurs with two Gemma models; the others work fine.

![Screenshot_20240221_234359_Chrome.jpg](https://github.com/ollama/ollama/assets/31308766/461863b3-c59c-42dc-b826-3ea093bebb4f)
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2649/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2649/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2699
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2699/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2699/comments
https://api.github.com/repos/ollama/ollama/issues/2699/events
https://github.com/ollama/ollama/issues/2699
2,150,347,691
I_kwDOJ0Z1Ps6AK7Or
2,699
Slow Response Time on Windows Prompt Compared to WSL
{ "login": "samer-alhalabi", "id": 79296172, "node_id": "MDQ6VXNlcjc5Mjk2MTcy", "avatar_url": "https://avatars.githubusercontent.com/u/79296172?v=4", "gravatar_id": "", "url": "https://api.github.com/users/samer-alhalabi", "html_url": "https://github.com/samer-alhalabi", "followers_url": "https://api.github.com/users/samer-alhalabi/followers", "following_url": "https://api.github.com/users/samer-alhalabi/following{/other_user}", "gists_url": "https://api.github.com/users/samer-alhalabi/gists{/gist_id}", "starred_url": "https://api.github.com/users/samer-alhalabi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/samer-alhalabi/subscriptions", "organizations_url": "https://api.github.com/users/samer-alhalabi/orgs", "repos_url": "https://api.github.com/users/samer-alhalabi/repos", "events_url": "https://api.github.com/users/samer-alhalabi/events{/privacy}", "received_events_url": "https://api.github.com/users/samer-alhalabi/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
9
2024-02-23T04:28:46
2024-07-08T04:13:06
2024-02-26T19:21:02
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
When executing prompts on the Windows version of Ollama, I experience considerable delays and slow response times. However, when running the exact same model and prompt via WSL, the response time is notably faster. Given that the Windows version of Ollama is currently in preview, I understand optimizations may be underway. Could you provide insight into whether there's a timeline for the next release that addresses performance?
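For making the comparison concrete, here is a minimal benchmarking sketch (an addition, not part of the original report) that can be run unchanged from both native Windows and from inside WSL against the same server. It assumes the default endpoint `http://localhost:11434` and that a model such as `llama2` has already been pulled; both are assumptions. The timing fields come from Ollama's non-streaming `/api/generate` response, where durations are reported in nanoseconds.

```python
import time

import requests

URL = "http://localhost:11434/api/generate"  # assumed default Ollama endpoint

payload = {"model": "llama2", "prompt": "Why is the sky blue?", "stream": False}

start = time.perf_counter()
data = requests.post(URL, json=payload, timeout=600).json()
wall = time.perf_counter() - start

# eval_duration is reported in nanoseconds; convert to tokens per second
rate = data["eval_count"] / (data["eval_duration"] / 1e9)
print(f"wall time: {wall:.2f}s, eval rate: {rate:.2f} tokens/s")
```

Running the identical script from both environments removes client-side differences and isolates the server's generation speed.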
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2699/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2699/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7740
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7740/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7740/comments
https://api.github.com/repos/ollama/ollama/issues/7740/events
https://github.com/ollama/ollama/issues/7740
2,672,082,408
I_kwDOJ0Z1Ps6fRL3o
7,740
Performance is decreasing
{ "login": "murzein", "id": 148742019, "node_id": "U_kgDOCN2fgw", "avatar_url": "https://avatars.githubusercontent.com/u/148742019?v=4", "gravatar_id": "", "url": "https://api.github.com/users/murzein", "html_url": "https://github.com/murzein", "followers_url": "https://api.github.com/users/murzein/followers", "following_url": "https://api.github.com/users/murzein/following{/other_user}", "gists_url": "https://api.github.com/users/murzein/gists{/gist_id}", "starred_url": "https://api.github.com/users/murzein/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/murzein/subscriptions", "organizations_url": "https://api.github.com/users/murzein/orgs", "repos_url": "https://api.github.com/users/murzein/repos", "events_url": "https://api.github.com/users/murzein/events{/privacy}", "received_events_url": "https://api.github.com/users/murzein/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
2024-11-19T12:58:14
2024-11-19T23:53:50
2024-11-19T23:53:50
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue?

Load any model, for example gemma2:27b, with OLLAMA_KEEP_ALIVE=-1.

Command: **ollama run gemma2:27b --verbose**
Message: **tell me about amd**

**Iteration 1**
total duration: 12.9263992s
load duration: 32.8561ms
prompt eval count: **13 token(s)** <----
prompt eval duration: 73ms
prompt eval rate: 178.08 tokens/s
eval count: 480 token(s)
eval duration: 12.81s
eval rate: **37.47 tokens/s** <----

**Iteration 2**
prompt eval count: **506 token(s)**
eval rate: **36.64 tokens/s**

**Iteration 3**
prompt eval count: **882 token(s)**
eval rate: **36.05 tokens/s**

**Iteration 4**
prompt eval count: **1244 token(s)**
eval rate: **35.81 tokens/s**

**Iteration 5**
prompt eval count: **1584 token(s)**
eval rate: **35.30 tokens/s**

**Iteration 6**
prompt eval count: **1860 token(s)**
eval rate: **17.04 tokens/s**

Once the prompt grows this large, the eval rate drops to 17.04 tokens/s.

**Restart the session** (Ctrl+Z) and run again:

Command: **ollama run gemma2:27b --verbose**
Message: **tell me about amd**

**Iteration 1**
prompt eval count: **13 token(s)**
eval rate: **18.01 tokens/s**

The prompt is now small, but the speed stays low. I experience the same issue when accessing Ollama via the API. The context seems to be overflowing somewhere.

### OS

Linux, Windows

### GPU

Nvidia, AMD

### CPU

Intel

### Ollama version

0.4.2
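For anyone reproducing this over the API instead of the CLI, here is a minimal sketch (an addition, not part of the original report). It assumes the default endpoint and that gemma2:27b is already pulled; the `prompt_eval_count`, `eval_count`, and `eval_duration` fields in the non-streaming response mirror the `--verbose` output above, with durations in nanoseconds.

```python
import requests

URL = "http://localhost:11434/api/chat"  # assumed default Ollama endpoint
messages = []

for i in range(6):
    messages.append({"role": "user", "content": "tell me about amd"})
    data = requests.post(
        URL,
        json={"model": "gemma2:27b", "messages": messages, "stream": False},
        timeout=600,
    ).json()
    messages.append(data["message"])  # grow the context like the CLI session does
    rate = data["eval_count"] / (data["eval_duration"] / 1e9)  # ns -> s
    print(f"iteration {i + 1}: prompt_eval_count={data['prompt_eval_count']}, "
          f"eval rate={rate:.2f} tokens/s")
```

Printing the rate per turn makes the gradual slowdown, and any sudden drop, easy to chart.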
{ "login": "jessegross", "id": 6468499, "node_id": "MDQ6VXNlcjY0Njg0OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jessegross", "html_url": "https://github.com/jessegross", "followers_url": "https://api.github.com/users/jessegross/followers", "following_url": "https://api.github.com/users/jessegross/following{/other_user}", "gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}", "starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jessegross/subscriptions", "organizations_url": "https://api.github.com/users/jessegross/orgs", "repos_url": "https://api.github.com/users/jessegross/repos", "events_url": "https://api.github.com/users/jessegross/events{/privacy}", "received_events_url": "https://api.github.com/users/jessegross/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7740/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7740/timeline
null
not_planned
false
https://api.github.com/repos/ollama/ollama/issues/7107
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7107/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7107/comments
https://api.github.com/repos/ollama/ollama/issues/7107/events
https://github.com/ollama/ollama/issues/7107
2,568,601,801
I_kwDOJ0Z1Ps6ZGcDJ
7,107
Adrenalin Edition 24.9.1/24.10.1 slow ollama performance
{ "login": "skarabaraks", "id": 2365359, "node_id": "MDQ6VXNlcjIzNjUzNTk=", "avatar_url": "https://avatars.githubusercontent.com/u/2365359?v=4", "gravatar_id": "", "url": "https://api.github.com/users/skarabaraks", "html_url": "https://github.com/skarabaraks", "followers_url": "https://api.github.com/users/skarabaraks/followers", "following_url": "https://api.github.com/users/skarabaraks/following{/other_user}", "gists_url": "https://api.github.com/users/skarabaraks/gists{/gist_id}", "starred_url": "https://api.github.com/users/skarabaraks/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/skarabaraks/subscriptions", "organizations_url": "https://api.github.com/users/skarabaraks/orgs", "repos_url": "https://api.github.com/users/skarabaraks/repos", "events_url": "https://api.github.com/users/skarabaraks/events{/privacy}", "received_events_url": "https://api.github.com/users/skarabaraks/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 5808482718, "node_id": "LA_kwDOJ0Z1Ps8AAAABWjZpng", "url": "https://api.github.com/repos/ollama/ollama/labels/performance", "name": "performance", "color": "A5B5C6", "default": false, "description": "" }, { "id": 5860134234, "node_id": "LA_kwDOJ0Z1Ps8AAAABXUqNWg", "url": "https://api.github.com/repos/ollama/ollama/labels/windows", "name": "windows", "color": "0052CC", "default": false, "description": "" }, { "id": 6433346500, "node_id": "LA_kwDOJ0Z1Ps8AAAABf3UTxA", "url": "https://api.github.com/repos/ollama/ollama/labels/amd", "name": "amd", "color": "000000", "default": false, "description": "Issues relating to AMD GPUs and ROCm" } ]
open
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
15
2024-10-06T11:07:21
2025-01-28T23:47:14
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue?

Both Adrenalin Edition drivers (24.9.1 and 24.10.1) significantly slow Ollama's performance on Windows; GPU acceleration appears to be disabled. There are no issues with Ollama on Adrenalin 24.8.1 (a slightly older driver).

My system:
Windows 11 24H2
GPU: RX 6800 XT
CPU: Ryzen 5900XT
32GB RAM
Ollama version: latest
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7107/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7107/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/2946
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2946/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2946/comments
https://api.github.com/repos/ollama/ollama/issues/2946/events
https://github.com/ollama/ollama/pull/2946
2,170,521,445
PR_kwDOJ0Z1Ps5oyXxi
2,946
Add Community Integration: OpenAOE
{ "login": "fly2tomato", "id": 11885550, "node_id": "MDQ6VXNlcjExODg1NTUw", "avatar_url": "https://avatars.githubusercontent.com/u/11885550?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fly2tomato", "html_url": "https://github.com/fly2tomato", "followers_url": "https://api.github.com/users/fly2tomato/followers", "following_url": "https://api.github.com/users/fly2tomato/following{/other_user}", "gists_url": "https://api.github.com/users/fly2tomato/gists{/gist_id}", "starred_url": "https://api.github.com/users/fly2tomato/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fly2tomato/subscriptions", "organizations_url": "https://api.github.com/users/fly2tomato/orgs", "repos_url": "https://api.github.com/users/fly2tomato/repos", "events_url": "https://api.github.com/users/fly2tomato/events{/privacy}", "received_events_url": "https://api.github.com/users/fly2tomato/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-03-06T02:52:01
2024-03-25T18:57:40
2024-03-25T18:57:40
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/2946", "html_url": "https://github.com/ollama/ollama/pull/2946", "diff_url": "https://github.com/ollama/ollama/pull/2946.diff", "patch_url": "https://github.com/ollama/ollama/pull/2946.patch", "merged_at": "2024-03-25T18:57:40" }
[OpenAOE](https://github.com/InternLM/OpenAOE) is an LLM group chat framework: chat with multiple LLMs at the same time. The mistral-7b and gemma-7b models have now been added to OpenAOE with Ollama as the inference backend. ![image](https://github.com/ollama/ollama/assets/11885550/bc964015-4a8e-44c9-92d9-60a2ea1b8b30) ![image](https://github.com/ollama/ollama/assets/11885550/cf3f4003-c0eb-44ed-b2ad-646369bbbe2a) ![image](https://github.com/ollama/ollama/assets/11885550/4adc6ad5-2f89-45c1-b1be-2ce96f89863d)
{ "login": "BruceMacD", "id": 5853428, "node_id": "MDQ6VXNlcjU4NTM0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5853428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BruceMacD", "html_url": "https://github.com/BruceMacD", "followers_url": "https://api.github.com/users/BruceMacD/followers", "following_url": "https://api.github.com/users/BruceMacD/following{/other_user}", "gists_url": "https://api.github.com/users/BruceMacD/gists{/gist_id}", "starred_url": "https://api.github.com/users/BruceMacD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BruceMacD/subscriptions", "organizations_url": "https://api.github.com/users/BruceMacD/orgs", "repos_url": "https://api.github.com/users/BruceMacD/repos", "events_url": "https://api.github.com/users/BruceMacD/events{/privacy}", "received_events_url": "https://api.github.com/users/BruceMacD/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2946/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2946/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/5780
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/5780/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/5780/comments
https://api.github.com/repos/ollama/ollama/issues/5780/events
https://github.com/ollama/ollama/pull/5780
2,417,185,566
PR_kwDOJ0Z1Ps510ayH
5,780
fix parsing tool calls: break on unexpected eofs
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-07-18T19:08:33
2024-07-18T19:14:12
2024-07-18T19:14:10
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/5780", "html_url": "https://github.com/ollama/ollama/pull/5780", "diff_url": "https://github.com/ollama/ollama/pull/5780.diff", "patch_url": "https://github.com/ollama/ollama/pull/5780.patch", "merged_at": "2024-07-18T19:14:10" }
null
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/5780/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/5780/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/1906
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1906/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1906/comments
https://api.github.com/repos/ollama/ollama/issues/1906/events
https://github.com/ollama/ollama/issues/1906
2,074,965,782
I_kwDOJ0Z1Ps57rXcW
1,906
Ollama not respecting num_gpu to load entire model into VRAM for a model that I know should fit into 24GB.
{ "login": "madsamjp", "id": 49611363, "node_id": "MDQ6VXNlcjQ5NjExMzYz", "avatar_url": "https://avatars.githubusercontent.com/u/49611363?v=4", "gravatar_id": "", "url": "https://api.github.com/users/madsamjp", "html_url": "https://github.com/madsamjp", "followers_url": "https://api.github.com/users/madsamjp/followers", "following_url": "https://api.github.com/users/madsamjp/following{/other_user}", "gists_url": "https://api.github.com/users/madsamjp/gists{/gist_id}", "starred_url": "https://api.github.com/users/madsamjp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/madsamjp/subscriptions", "organizations_url": "https://api.github.com/users/madsamjp/orgs", "repos_url": "https://api.github.com/users/madsamjp/repos", "events_url": "https://api.github.com/users/madsamjp/events{/privacy}", "received_events_url": "https://api.github.com/users/madsamjp/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
1
2024-01-10T18:44:22
2024-05-10T00:14:58
2024-05-10T00:14:58
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Somewhat related to this issue: https://github.com/jmorganca/ollama/issues/1374

I have a model that I have configured to fit almost exactly into my 4090's VRAM. Prior to v0.1.13, this model ran fine: I could fit all layers into VRAM and fill the context, and it would utilize a total of 23697MiB (leaving about 900MiB of headroom). After 0.1.13, this model would cause an OOM. I found a temporary workaround by building Ollama from source with the `-DLLAMA_CUDA_FORCE_MMQ=on` flag.

Now, after the recent update to 0.1.19, there is a PR that was meant to fix this issue: https://github.com/jmorganca/ollama/pull/1850. I can now run that model without the OOM; however, Ollama never offloads more than 49 of the 63 layers to the GPU. Even if I set the num_gpu parameter in interactive mode to 63 or higher, it still loads only 49 layers and utilizes only 21027MiB of a total of 24564MiB (only 86% of VRAM).

Here is the Modelfile:

```
FROM deepseek-coder:33b-instruct-q5_K_S
PARAMETER num_gpu 63
PARAMETER num_ctx 2048
```

Is it possible to force Ollama to load the entire model into VRAM as it did before v0.1.11? Do models now take up more VRAM than before? VRAM is expensive and scarce at this time, and I feel that giving users the flexibility to finely control models to maximize VRAM usage is imperative.
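As a reproduction aid (an addition, not from the original report): the same parameters can also be passed per request through the REST API instead of a Modelfile. A minimal sketch, assuming the default endpoint and that the model tag is already pulled; the option names mirror the Modelfile `PARAMETER` names, though whether all 63 layers are actually offloaded is still subject to Ollama's memory estimates.

```python
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",  # assumed default endpoint
    json={
        "model": "deepseek-coder:33b-instruct-q5_K_S",
        "prompt": "Write quicksort in Go.",
        "stream": False,
        # same knobs as the Modelfile above, applied for this request only
        "options": {"num_gpu": 63, "num_ctx": 2048},
    },
    timeout=600,
)
print(resp.json()["response"])
```

Per-request options make it easy to bisect which num_gpu value triggers the partial offload without rebuilding the model.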
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1906/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1906/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4089
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4089/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4089/comments
https://api.github.com/repos/ollama/ollama/issues/4089/events
https://github.com/ollama/ollama/pull/4089
2,274,076,447
PR_kwDOJ0Z1Ps5uSBtV
4,089
server: destination invalid
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-05-01T19:39:17
2024-05-01T19:46:36
2024-05-01T19:46:35
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/4089", "html_url": "https://github.com/ollama/ollama/pull/4089", "diff_url": "https://github.com/ollama/ollama/pull/4089.diff", "patch_url": "https://github.com/ollama/ollama/pull/4089.patch", "merged_at": "2024-05-01T19:46:35" }
update cmd help too
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4089/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4089/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/7408
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7408/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7408/comments
https://api.github.com/repos/ollama/ollama/issues/7408/events
https://github.com/ollama/ollama/issues/7408
2,620,028,321
I_kwDOJ0Z1Ps6cKnWh
7,408
ollama keeps reloading the same model (Qwen2.5-70B)
{ "login": "TianWuYuJiangHenShou", "id": 20592000, "node_id": "MDQ6VXNlcjIwNTkyMDAw", "avatar_url": "https://avatars.githubusercontent.com/u/20592000?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TianWuYuJiangHenShou", "html_url": "https://github.com/TianWuYuJiangHenShou", "followers_url": "https://api.github.com/users/TianWuYuJiangHenShou/followers", "following_url": "https://api.github.com/users/TianWuYuJiangHenShou/following{/other_user}", "gists_url": "https://api.github.com/users/TianWuYuJiangHenShou/gists{/gist_id}", "starred_url": "https://api.github.com/users/TianWuYuJiangHenShou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TianWuYuJiangHenShou/subscriptions", "organizations_url": "https://api.github.com/users/TianWuYuJiangHenShou/orgs", "repos_url": "https://api.github.com/users/TianWuYuJiangHenShou/repos", "events_url": "https://api.github.com/users/TianWuYuJiangHenShou/events{/privacy}", "received_events_url": "https://api.github.com/users/TianWuYuJiangHenShou/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
5
2024-10-29T03:30:19
2024-10-30T07:13:53
2024-10-29T18:45:16
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue?

```
time=2024-10-29T03:23:31.503Z level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server not responding"
llama_new_context_with_model: n_ctx = 16384
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 5120.00 MiB
llama_new_context_with_model: KV self size = 5120.00 MiB, K (f16): 2560.00 MiB, V (f16): 2560.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 2.45 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 2144.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 48.01 MiB
llama_new_context_with_model: graph nodes = 2806
llama_new_context_with_model: graph splits = 2
time=2024-10-29T03:23:31.754Z level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server loading model"
INFO [main] model loaded | tid="140624792489984" timestamp=1730172211
time=2024-10-29T03:23:32.006Z level=INFO source=server.go:626 msg="llama runner started in 8.13 seconds"
[GIN] 2024/10/29 - 03:23:51 | 200 | 29.221685658s | 10.102.227.89 | POST "/api/chat"
[GIN] 2024/10/29 - 03:23:58 | 200 | 6.120409627s | 10.102.227.89 | POST "/api/chat"
[GIN] 2024/10/29 - 03:24:05 | 200 | 6.69483427s | 10.102.227.89 | POST "/api/chat"
[GIN] 2024/10/29 - 03:24:16 | 200 | 9.664337429s | 10.102.227.89 | POST "/api/chat"
[GIN] 2024/10/29 - 03:24:23 | 200 | 6.192004337s | 10.102.227.89 | POST "/api/chat"
time=2024-10-29T03:24:25.096Z level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/home/models/ollama_models/blobs/sha256-0938027085a7434a9f3126b85230a2b8bda65b72ff03c50124dda89641271ecb gpu=GPU-e48a5d05-78b4-d21e-8fb9-6e26a4c0edb9 parallel=4 available=84544258048 required="52.6 GiB"
time=2024-10-29T03:24:25.363Z level=INFO source=server.go:105 msg="system memory" total="503.5 GiB" free="491.2 GiB" free_swap="8.0 GiB"
time=2024-10-29T03:24:25.364Z level=INFO source=memory.go:326 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=81 layers.split="" memory.available="[78.7 GiB]" memory.gpu_overhead="0 B" memory.required.full="52.6 GiB" memory.required.partial="52.6 GiB" memory.required.kv="9.8 GiB" memory.required.allocations="[52.6 GiB]" memory.weights.total="46.6 GiB" memory.weights.repeating="45.6 GiB" memory.weights.nonrepeating="974.6 MiB" memory.graph.full="4.0 GiB" memory.graph.partial="5.0 GiB"
time=2024-10-29T03:24:25.366Z level=INFO source=server.go:388 msg="starting llama server" cmd="/tmp/ollama3695149922/runners/cuda_v12/ollama_llama_server --model /home/models/ollama_models/blobs/sha256-0938027085a7434a9f3126b85230a2b8bda65b72ff03c50124dda89641271ecb --ctx-size 32096 --batch-size 512 --embedding --n-gpu-layers 81 --threads 32 --parallel 4 --port 35985"
time=2024-10-29T03:24:25.366Z level=INFO source=sched.go:449 msg="loaded runners" count=1
time=2024-10-29T03:24:25.366Z level=INFO source=server.go:587 msg="waiting for llama runner to start responding"
time=2024-10-29T03:24:25.367Z level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server error"
INFO [main] starting c++ runner | tid="140636503351296" timestamp=1730172265
INFO [main] build info | build=10 commit="3a8c75e" tid="140636503351296" timestamp=1730172265
INFO [main] system info | n_threads=32 n_threads_batch=32 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="140636503351296" timestamp=1730172265 total_threads=128
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="127" port="35985" tid="140636503351296" timestamp=1730172265
llama_model_loader: loaded meta data with 35 key-value pairs and 963 tensors from /home/models/ollama_models/blobs/sha256-0938027085a7434a9f3126b85230a2b8bda65b72ff03c50124dda89641271ecb (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen2
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Qwen2.5 72B Instruct
llama_model_loader: - kv 3: general.finetune str = Instruct
llama_model_loader: - kv 4: general.basename str = Qwen2.5
llama_model_loader: - kv 5: general.size_label str = 72B
llama_model_loader: - kv 6: general.license str = other
llama_model_loader: - kv 7: general.license.name str = qwen
llama_model_loader: - kv 8: general.license.link str = https://huggingface.co./Qwen/Qwen2.5-7...
llama_model_loader: - kv 9: general.base_model.count u32 = 1
llama_model_loader: - kv 10: general.base_model.0.name str = Qwen2.5 72B
llama_model_loader: - kv 11: general.base_model.0.organization str = Qwen
llama_model_loader: - kv 12: general.base_model.0.repo_url str = https://huggingface.co./Qwen/Qwen2.5-72B
llama_model_loader: - kv 13: general.tags arr[str,2] = ["chat", "text-generation"]
llama_model_loader: - kv 14: general.languages arr[str,1] = ["en"]
llama_model_loader: - kv 15: qwen2.block_count u32 = 80
llama_model_loader: - kv 16: qwen2.context_length u32 = 32768
llama_model_loader: - kv 17: qwen2.embedding_length u32 = 8192
llama_model_loader: - kv 18: qwen2.feed_forward_length u32 = 29568
llama_model_loader: - kv 19: qwen2.attention.head_count u32 = 64
llama_model_loader: - kv 20: qwen2.attention.head_count_kv u32 = 8
llama_model_loader: - kv 21: qwen2.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 22: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 23: general.file_type u32 = 2
llama_model_loader: - kv 24: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 25: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 26: tokenizer.ggml.tokens arr[str,152064] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 27: tokenizer.ggml.token_type arr[i32,152064] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 28: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t", ...
llama_model_loader: - kv 29: tokenizer.ggml.eos_token_id u32 = 151645
llama_model_loader: - kv 30: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 31: tokenizer.ggml.bos_token_id u32 = 151643
llama_model_loader: - kv 32: tokenizer.ggml.add_bos_token bool = false
llama_model_loader: - kv 33: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>...
llama_model_loader: - kv 34: general.quantization_version u32 = 2
llama_model_loader: - type f32: 401 tensors
llama_model_loader: - type q4_0: 561 tensors
llama_model_loader: - type q6_K: 1 tensors
time=2024-10-29T03:24:25.618Z level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 22
llm_load_vocab: token to piece cache size = 0.9310 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = qwen2
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 152064
llm_load_print_meta: n_merges = 151387
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_layer = 80
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-06
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 29568
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 70B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 72.71 B
llm_load_print_meta: model size = 38.39 GiB (4.54 BPW)
llm_load_print_meta: general.name = Qwen2.5 72B Instruct
llm_load_print_meta: BOS token = 151643 '<|endoftext|>'
llm_load_print_meta: EOS token = 151645 '<|im_end|>'
llm_load_print_meta: PAD token = 151643 '<|endoftext|>'
llm_load_print_meta: LF token = 148848 'ÄĬ'
llm_load_print_meta: EOT token = 151645 '<|im_end|>'
llm_load_print_meta: EOG token = 151643 '<|endoftext|>'
llm_load_print_meta: EOG token = 151645 '<|im_end|>'
llm_load_print_meta: max token length = 256
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA A100 80GB PCIe, compute capability 8.0, VMM: yes
llm_load_tensors: ggml ctx size = 0.85 MiB
time=2024-10-29T03:24:27.075Z level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server not responding"
llm_load_tensors: offloading 80 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 81/81 layers to GPU
llm_load_tensors: CPU buffer size = 668.25 MiB
llm_load_tensors: CUDA0 buffer size = 38647.70 MiB
time=2024-10-29T03:24:27.778Z level=INFO source=server.go:621 msg="waiting for server to become available" status="llm server loading model"
^C^Ctime=2024-10-29T03:24:29.283Z level=WARN source=server.go:594 msg="client connection closed before server finished loading, aborting load"
time=2024-10-29T03:24:29.283Z level=ERROR source=sched.go:455 msg="error loading llama server" error="timed out waiting for llama runner to start: context canceled"
[GIN] 2024/10/29 - 03:24:29 | 499 | 5.239788128s | 10.102.227.89 | POST "/api/chat"
```

## I have set the parameter OLLAMA_KEEP_ALIVE=-1, but it doesn't work.

### OS

Linux, Docker

### GPU

Nvidia

### CPU

Intel

### Ollama version

ollama version=0.3.14
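One thing worth ruling out (a sketch, not a confirmed fix): `keep_alive` can also be set per request in the API body, and a request-level value takes effect for that call regardless of the environment variable, so a client sending its own `keep_alive` could explain the unloads. The model tag and endpoint below are assumptions.

```python
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",  # assumed default endpoint
    json={
        "model": "qwen2.5:72b",  # assumed tag for the model in this report
        "messages": [{"role": "user", "content": "hello"}],
        "stream": False,
        "keep_alive": -1,  # ask the server to keep this model loaded indefinitely
    },
    timeout=600,
)
print(resp.json()["message"]["content"])
```

If the model still unloads with an explicit per-request `keep_alive`, the environment variable is not the culprit.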
{ "login": "pdevine", "id": 75239, "node_id": "MDQ6VXNlcjc1MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/75239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pdevine", "html_url": "https://github.com/pdevine", "followers_url": "https://api.github.com/users/pdevine/followers", "following_url": "https://api.github.com/users/pdevine/following{/other_user}", "gists_url": "https://api.github.com/users/pdevine/gists{/gist_id}", "starred_url": "https://api.github.com/users/pdevine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdevine/subscriptions", "organizations_url": "https://api.github.com/users/pdevine/orgs", "repos_url": "https://api.github.com/users/pdevine/repos", "events_url": "https://api.github.com/users/pdevine/events{/privacy}", "received_events_url": "https://api.github.com/users/pdevine/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7408/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7408/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/253
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/253/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/253/comments
https://api.github.com/repos/ollama/ollama/issues/253/events
https://github.com/ollama/ollama/pull/253
1,831,879,303
PR_kwDOJ0Z1Ps5W8PxK
253
use a pipe to push to registry with progress
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
1
2023-08-01T19:17:50
2023-08-03T19:11:24
2023-08-03T19:11:23
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/253", "html_url": "https://github.com/ollama/ollama/pull/253", "diff_url": "https://github.com/ollama/ollama/pull/253.diff", "patch_url": "https://github.com/ollama/ollama/pull/253.patch", "merged_at": "2023-08-03T19:11:23" }
Switch to a monolithic upload (instead of a chunked upload), streamed through a pipe so that push progress can be reported.
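The Go change itself isn't shown here, but the underlying idea is easy to sketch in any language: stream the upload body through a reader that counts bytes as the HTTP client consumes them. A hypothetical Python illustration (the endpoint and filename are made up, and this is not the PR's implementation):

```python
import os

import requests


class ProgressReader:
    """File-like wrapper that reports bytes consumed during an upload."""

    def __init__(self, path: str):
        self.f = open(path, "rb")
        self.total = os.path.getsize(path)
        self.sent = 0

    def __len__(self) -> int:
        # lets the HTTP client set Content-Length for a monolithic upload
        return self.total

    def read(self, size: int = -1) -> bytes:
        chunk = self.f.read(size)
        self.sent += len(chunk)
        print(f"\rpushed {self.sent}/{self.total} bytes", end="", flush=True)
        return chunk


# requests streams any object exposing read() as the request body, so
# progress is reported while the single monolithic upload proceeds.
requests.put("http://registry.example/upload", data=ProgressReader("layer.bin"))
```

The same pattern is what makes a pipe attractive: the producer writes, the consumer uploads, and the byte counter in between is the progress report.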
{ "login": "mxyng", "id": 2372640, "node_id": "MDQ6VXNlcjIzNzI2NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2372640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxyng", "html_url": "https://github.com/mxyng", "followers_url": "https://api.github.com/users/mxyng/followers", "following_url": "https://api.github.com/users/mxyng/following{/other_user}", "gists_url": "https://api.github.com/users/mxyng/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxyng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxyng/subscriptions", "organizations_url": "https://api.github.com/users/mxyng/orgs", "repos_url": "https://api.github.com/users/mxyng/repos", "events_url": "https://api.github.com/users/mxyng/events{/privacy}", "received_events_url": "https://api.github.com/users/mxyng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/253/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/253/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/8529
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8529/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8529/comments
https://api.github.com/repos/ollama/ollama/issues/8529/events
https://github.com/ollama/ollama/issues/8529
2,803,321,089
I_kwDOJ0Z1Ps6nF0kB
8,529
feat: OpenAI reasoning_content compatibility
{ "login": "EntropyYue", "id": 164553692, "node_id": "U_kgDOCc7j3A", "avatar_url": "https://avatars.githubusercontent.com/u/164553692?v=4", "gravatar_id": "", "url": "https://api.github.com/users/EntropyYue", "html_url": "https://github.com/EntropyYue", "followers_url": "https://api.github.com/users/EntropyYue/followers", "following_url": "https://api.github.com/users/EntropyYue/following{/other_user}", "gists_url": "https://api.github.com/users/EntropyYue/gists{/gist_id}", "starred_url": "https://api.github.com/users/EntropyYue/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/EntropyYue/subscriptions", "organizations_url": "https://api.github.com/users/EntropyYue/orgs", "repos_url": "https://api.github.com/users/EntropyYue/repos", "events_url": "https://api.github.com/users/EntropyYue/events{/privacy}", "received_events_url": "https://api.github.com/users/EntropyYue/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396200, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aaA", "url": "https://api.github.com/repos/ollama/ollama/labels/feature%20request", "name": "feature request", "color": "a2eeef", "default": false, "description": "New feature or request" } ]
open
false
null
[]
null
1
2025-01-22T04:09:25
2025-01-22T07:19:36
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Current thinking models output XML-style tags to distinguish thinking from answering. We need a new feature so that the `reasoning_content` field of the OpenAI SDK works normally.
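Until such a feature lands, clients have to strip the tags themselves. A minimal sketch of that workaround (an addition for illustration), assuming the model emits `<think>...</think>` tags; the exact tag name varies by model:

```python
import re


def split_reasoning(text: str) -> tuple[str, str]:
    """Separate <think>...</think> content from the final answer."""
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    reasoning = match.group(1).strip() if match else ""
    answer = re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()
    return reasoning, answer


reasoning, answer = split_reasoning("<think>2 + 2 = 4</think>The answer is 4.")
print(reasoning)  # -> 2 + 2 = 4
print(answer)     # -> The answer is 4.
```

A server-side `reasoning_content` field would make this per-client parsing unnecessary.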
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8529/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8529/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/7675
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7675/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7675/comments
https://api.github.com/repos/ollama/ollama/issues/7675/events
https://github.com/ollama/ollama/pull/7675
2,660,440,586
PR_kwDOJ0Z1Ps6B_LKa
7,675
runner.go: Increase survivability of main processing loop
{ "login": "jessegross", "id": 6468499, "node_id": "MDQ6VXNlcjY0Njg0OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jessegross", "html_url": "https://github.com/jessegross", "followers_url": "https://api.github.com/users/jessegross/followers", "following_url": "https://api.github.com/users/jessegross/following{/other_user}", "gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}", "starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jessegross/subscriptions", "organizations_url": "https://api.github.com/users/jessegross/orgs", "repos_url": "https://api.github.com/users/jessegross/repos", "events_url": "https://api.github.com/users/jessegross/events{/privacy}", "received_events_url": "https://api.github.com/users/jessegross/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
0
2024-11-15T00:49:04
2024-11-15T01:18:42
2024-11-15T01:18:41
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/7675", "html_url": "https://github.com/ollama/ollama/pull/7675", "diff_url": "https://github.com/ollama/ollama/pull/7675.diff", "patch_url": "https://github.com/ollama/ollama/pull/7675.patch", "merged_at": "2024-11-15T01:18:41" }
Currently, if an error occurs during the prep stages (such as tokenizing) of a single request, it will only affect that request. However, if an error happens during decoding, it can take down the entire runner. Instead, it's better to drop the tokens that triggered the error and try to keep going. At the same time, we need to stop once we run out of tokens; otherwise this just causes an infinite loop. This is likely the cause of at least some of the hanging issues that have been reported. Bug #7573
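The PR itself is Go, but the control-flow pattern it describes is language-agnostic. A toy sketch (names, batch size, and the stand-in decoder are illustrative, not the runner's real code) of "drop the failing batch, keep going, and terminate once no tokens remain":

```python
def process(tokens: list[int], decode, batch_size: int = 512) -> None:
    while tokens:  # loop ends when tokens run out -- no infinite retry
        batch, tokens = tokens[:batch_size], tokens[batch_size:]
        try:
            decode(batch)
        except RuntimeError as err:
            # drop only the offending batch instead of crashing the runner
            print(f"dropping {len(batch)} tokens after decode error: {err}")


if __name__ == "__main__":
    def flaky_decode(batch):  # stand-in for the real decoder
        if len(batch) < 512:
            raise RuntimeError("partial batch failed")

    process(list(range(1100)), flaky_decode)  # last 76-token batch is dropped
```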
{ "login": "jessegross", "id": 6468499, "node_id": "MDQ6VXNlcjY0Njg0OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/6468499?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jessegross", "html_url": "https://github.com/jessegross", "followers_url": "https://api.github.com/users/jessegross/followers", "following_url": "https://api.github.com/users/jessegross/following{/other_user}", "gists_url": "https://api.github.com/users/jessegross/gists{/gist_id}", "starred_url": "https://api.github.com/users/jessegross/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jessegross/subscriptions", "organizations_url": "https://api.github.com/users/jessegross/orgs", "repos_url": "https://api.github.com/users/jessegross/repos", "events_url": "https://api.github.com/users/jessegross/events{/privacy}", "received_events_url": "https://api.github.com/users/jessegross/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7675/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7675/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/2661
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2661/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2661/comments
https://api.github.com/repos/ollama/ollama/issues/2661/events
https://github.com/ollama/ollama/issues/2661
2,148,212,465
I_kwDOJ0Z1Ps6ACx7x
2,661
Migrating models from WSL2 to Native Windows
{ "login": "circbuf255", "id": 139676887, "node_id": "U_kgDOCFNM1w", "avatar_url": "https://avatars.githubusercontent.com/u/139676887?v=4", "gravatar_id": "", "url": "https://api.github.com/users/circbuf255", "html_url": "https://github.com/circbuf255", "followers_url": "https://api.github.com/users/circbuf255/followers", "following_url": "https://api.github.com/users/circbuf255/following{/other_user}", "gists_url": "https://api.github.com/users/circbuf255/gists{/gist_id}", "starred_url": "https://api.github.com/users/circbuf255/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/circbuf255/subscriptions", "organizations_url": "https://api.github.com/users/circbuf255/orgs", "repos_url": "https://api.github.com/users/circbuf255/repos", "events_url": "https://api.github.com/users/circbuf255/events{/privacy}", "received_events_url": "https://api.github.com/users/circbuf255/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
2
2024-02-22T05:08:17
2024-05-21T23:30:48
2024-02-22T08:17:14
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
**What is the correct workflow to migrate from WSL2 to native Windows?**

Migrating models (blobs/manifests) from WSL2 to Windows does not seem to work as expected. For those with hundreds of GB already downloaded in WSL2, there should be a method to move those to native Windows.

The method I tried that does not work:

**Modifying the blobs:**
1) copy/paste all sha256 blobs from WSL2 to Windows
2) rename the blobs to replace "sha256:" with "sha256-", since Windows doesn't support colons in filenames
3) edit the contents of the blobs, replacing "sha256:" with "sha256-"

**Modifying the manifests:**
1) copy and paste the manifest directory from WSL2 to Windows
2) edit the contents of the manifest files, replacing "sha256:" with "sha256-"

Command prompt:
>> ollama list
>> ... (I got the expected results - I see all of the models)
>> ollama run mixtral
>> ... (Again, I got the expected results - I was able to chat with the model)

However, after closing Ollama in the taskbar and reloading it, ALL BLOBS ARE DELETED.

server.log says:
"total blobs: 59"
"total unused blobs removed: 59"
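For illustration only, here is a hedged sketch of the copy/rename steps above in script form. The WSL distro name and model paths are assumptions, manifests are copied unmodified (whether their contents need the `sha256:` substitution depends on the Ollama version), and blob contents are deliberately copied byte-for-byte: rewriting bytes inside a blob means it no longer matches the digest in its filename, which could be why the server pruned everything as unused.

```python
import shutil
from pathlib import Path

# Assumed locations; adjust the distro name and source path to your setup
# (a user install keeps models under ~/.ollama/models instead).
SRC = Path(r"\\wsl$\Ubuntu\usr\share\ollama\.ollama\models")
DST = Path.home() / ".ollama" / "models"

for blob in (SRC / "blobs").iterdir():
    # Windows filenames cannot contain ":", so only the *name* changes.
    target = DST / "blobs" / blob.name.replace("sha256:", "sha256-")
    target.parent.mkdir(parents=True, exist_ok=True)
    shutil.copyfile(blob, target)  # contents stay byte-for-byte identical

for manifest in (SRC / "manifests").rglob("*"):
    if manifest.is_file():
        target = DST / "manifests" / manifest.relative_to(SRC / "manifests")
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copyfile(manifest, target)
```

Keeping blob contents untouched preserves the digest/filename correspondence the garbage collector appears to check at startup.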
{ "login": "circbuf255", "id": 139676887, "node_id": "U_kgDOCFNM1w", "avatar_url": "https://avatars.githubusercontent.com/u/139676887?v=4", "gravatar_id": "", "url": "https://api.github.com/users/circbuf255", "html_url": "https://github.com/circbuf255", "followers_url": "https://api.github.com/users/circbuf255/followers", "following_url": "https://api.github.com/users/circbuf255/following{/other_user}", "gists_url": "https://api.github.com/users/circbuf255/gists{/gist_id}", "starred_url": "https://api.github.com/users/circbuf255/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/circbuf255/subscriptions", "organizations_url": "https://api.github.com/users/circbuf255/orgs", "repos_url": "https://api.github.com/users/circbuf255/repos", "events_url": "https://api.github.com/users/circbuf255/events{/privacy}", "received_events_url": "https://api.github.com/users/circbuf255/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2661/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2661/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/4899
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4899/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4899/comments
https://api.github.com/repos/ollama/ollama/issues/4899/events
https://github.com/ollama/ollama/issues/4899
2,339,745,576
I_kwDOJ0Z1Ps6Lda8o
4,899
Failed to get max tokens for LLM with name qwen2:7b-instruct-fp16 with ollama
{ "login": "wenlong1234", "id": 106233935, "node_id": "U_kgDOBlUATw", "avatar_url": "https://avatars.githubusercontent.com/u/106233935?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wenlong1234", "html_url": "https://github.com/wenlong1234", "followers_url": "https://api.github.com/users/wenlong1234/followers", "following_url": "https://api.github.com/users/wenlong1234/following{/other_user}", "gists_url": "https://api.github.com/users/wenlong1234/gists{/gist_id}", "starred_url": "https://api.github.com/users/wenlong1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wenlong1234/subscriptions", "organizations_url": "https://api.github.com/users/wenlong1234/orgs", "repos_url": "https://api.github.com/users/wenlong1234/repos", "events_url": "https://api.github.com/users/wenlong1234/events{/privacy}", "received_events_url": "https://api.github.com/users/wenlong1234/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
2024-06-07T07:05:29
2024-06-11T01:41:50
2024-06-09T17:24:40
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? utils.py 328 : Failed to get max tokens for LLM with name qwen2:7b-instruct-fp16. Defaulting to 4096. api_server-1 | Traceback (most recent call last): api_server-1 | File "/app/danswer/llm/utils.py", line 318, in get_llm_max_tokens api_server-1 | model_obj = model_map[model_name] api_server-1 | ~~~~~~~~~^^^^^^^^^^^^ api_server-1 | KeyError: 'qwen2:7b-instruct-fp16' ### OS _No response_ ### GPU AMD ### CPU AMD ### Ollama version 0.1.41
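This is not danswer's actual code, just a sketch of the defensive pattern the traceback suggests: custom Ollama tags like `qwen2:7b-instruct-fp16` won't appear in the upstream model map, so the lookup should be guarded rather than indexed directly. The 4096 fallback mirrors the default the log reports; the dict shape of the map entries is an assumption.

```python
# Defensive lookup sketch: guard the model-map access that raised KeyError
# in the traceback, retry without the Ollama tag suffix, then fall back.
DEFAULT_MAX_TOKENS = 4096  # matches the default the log falls back to

def get_llm_max_tokens(model_map: dict, model_name: str) -> int:
    model_obj = model_map.get(model_name)
    if model_obj is None:
        # Retry without the tag, e.g. "qwen2:7b-instruct-fp16" -> "qwen2"
        model_obj = model_map.get(model_name.split(":")[0])
    if model_obj is None:
        return DEFAULT_MAX_TOKENS
    return model_obj.get("max_tokens", DEFAULT_MAX_TOKENS)
```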
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4899/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4899/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2806
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2806/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2806/comments
https://api.github.com/repos/ollama/ollama/issues/2806/events
https://github.com/ollama/ollama/issues/2806
2,158,720,761
I_kwDOJ0Z1Ps6Aq3b5
2,806
How do we publicly show our model in the library?
{ "login": "WangRongsheng", "id": 55651568, "node_id": "MDQ6VXNlcjU1NjUxNTY4", "avatar_url": "https://avatars.githubusercontent.com/u/55651568?v=4", "gravatar_id": "", "url": "https://api.github.com/users/WangRongsheng", "html_url": "https://github.com/WangRongsheng", "followers_url": "https://api.github.com/users/WangRongsheng/followers", "following_url": "https://api.github.com/users/WangRongsheng/following{/other_user}", "gists_url": "https://api.github.com/users/WangRongsheng/gists{/gist_id}", "starred_url": "https://api.github.com/users/WangRongsheng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/WangRongsheng/subscriptions", "organizations_url": "https://api.github.com/users/WangRongsheng/orgs", "repos_url": "https://api.github.com/users/WangRongsheng/repos", "events_url": "https://api.github.com/users/WangRongsheng/events{/privacy}", "received_events_url": "https://api.github.com/users/WangRongsheng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
2
2024-02-28T11:17:14
2024-02-28T14:37:35
2024-02-28T14:37:34
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
We created a model - [Aurora](https://ollama.com/wangrongsheng/aurora), which is a fine-tuned Mixtral-8X7B model with bilingual (Chinese and English) dialogue. But we can't search for it in the library. ![image](https://github.com/ollama/ollama/assets/55651568/4c3d3cf4-d386-46de-adc7-03330e62b636)
{ "login": "WangRongsheng", "id": 55651568, "node_id": "MDQ6VXNlcjU1NjUxNTY4", "avatar_url": "https://avatars.githubusercontent.com/u/55651568?v=4", "gravatar_id": "", "url": "https://api.github.com/users/WangRongsheng", "html_url": "https://github.com/WangRongsheng", "followers_url": "https://api.github.com/users/WangRongsheng/followers", "following_url": "https://api.github.com/users/WangRongsheng/following{/other_user}", "gists_url": "https://api.github.com/users/WangRongsheng/gists{/gist_id}", "starred_url": "https://api.github.com/users/WangRongsheng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/WangRongsheng/subscriptions", "organizations_url": "https://api.github.com/users/WangRongsheng/orgs", "repos_url": "https://api.github.com/users/WangRongsheng/repos", "events_url": "https://api.github.com/users/WangRongsheng/events{/privacy}", "received_events_url": "https://api.github.com/users/WangRongsheng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2806/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2806/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/7552
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/7552/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/7552/comments
https://api.github.com/repos/ollama/ollama/issues/7552/events
https://github.com/ollama/ollama/issues/7552
2,640,548,513
I_kwDOJ0Z1Ps6dY5Kh
7,552
How to modify n_ctx
{ "login": "wuhongsheng", "id": 10295461, "node_id": "MDQ6VXNlcjEwMjk1NDYx", "avatar_url": "https://avatars.githubusercontent.com/u/10295461?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wuhongsheng", "html_url": "https://github.com/wuhongsheng", "followers_url": "https://api.github.com/users/wuhongsheng/followers", "following_url": "https://api.github.com/users/wuhongsheng/following{/other_user}", "gists_url": "https://api.github.com/users/wuhongsheng/gists{/gist_id}", "starred_url": "https://api.github.com/users/wuhongsheng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wuhongsheng/subscriptions", "organizations_url": "https://api.github.com/users/wuhongsheng/orgs", "repos_url": "https://api.github.com/users/wuhongsheng/repos", "events_url": "https://api.github.com/users/wuhongsheng/events{/privacy}", "received_events_url": "https://api.github.com/users/wuhongsheng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396220, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2afA", "url": "https://api.github.com/repos/ollama/ollama/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "General questions" } ]
closed
false
null
[]
null
2
2024-11-07T10:21:49
2024-11-08T02:12:16
2024-11-08T02:12:16
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? How do I modify n_ctx (the context window size)? ### OS macOS ### GPU Apple ### CPU Apple ### Ollama version 0.4
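For the record, the Ollama-level name for llama.cpp's `n_ctx` is `num_ctx`. It can be set per request via the API's `options` field, per model with `PARAMETER num_ctx ...` in a Modelfile, or interactively with `/set parameter num_ctx ...` in the REPL. A minimal request sketch (model name and value are just examples):

```python
# Set the context window per request via the Ollama REST API: the
# "num_ctx" option maps to llama.cpp's n_ctx.
import json
import urllib.request

payload = {
    "model": "llama3",             # any locally pulled model
    "prompt": "Why is the sky blue?",
    "stream": False,
    "options": {"num_ctx": 8192},  # overrides the default context length
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```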
{ "login": "wuhongsheng", "id": 10295461, "node_id": "MDQ6VXNlcjEwMjk1NDYx", "avatar_url": "https://avatars.githubusercontent.com/u/10295461?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wuhongsheng", "html_url": "https://github.com/wuhongsheng", "followers_url": "https://api.github.com/users/wuhongsheng/followers", "following_url": "https://api.github.com/users/wuhongsheng/following{/other_user}", "gists_url": "https://api.github.com/users/wuhongsheng/gists{/gist_id}", "starred_url": "https://api.github.com/users/wuhongsheng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wuhongsheng/subscriptions", "organizations_url": "https://api.github.com/users/wuhongsheng/orgs", "repos_url": "https://api.github.com/users/wuhongsheng/repos", "events_url": "https://api.github.com/users/wuhongsheng/events{/privacy}", "received_events_url": "https://api.github.com/users/wuhongsheng/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/7552/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/7552/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/3634
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/3634/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/3634/comments
https://api.github.com/repos/ollama/ollama/issues/3634/events
https://github.com/ollama/ollama/pull/3634
2,241,828,741
PR_kwDOJ0Z1Ps5slIuz
3,634
Update README.md
{ "login": "rapidarchitect", "id": 126218667, "node_id": "U_kgDOB4Xxqw", "avatar_url": "https://avatars.githubusercontent.com/u/126218667?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rapidarchitect", "html_url": "https://github.com/rapidarchitect", "followers_url": "https://api.github.com/users/rapidarchitect/followers", "following_url": "https://api.github.com/users/rapidarchitect/following{/other_user}", "gists_url": "https://api.github.com/users/rapidarchitect/gists{/gist_id}", "starred_url": "https://api.github.com/users/rapidarchitect/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rapidarchitect/subscriptions", "organizations_url": "https://api.github.com/users/rapidarchitect/orgs", "repos_url": "https://api.github.com/users/rapidarchitect/repos", "events_url": "https://api.github.com/users/rapidarchitect/events{/privacy}", "received_events_url": "https://api.github.com/users/rapidarchitect/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
1
2024-04-14T00:06:48
2024-11-21T08:26:05
2024-11-21T08:26:04
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
false
{ "url": "https://api.github.com/repos/ollama/ollama/pulls/3634", "html_url": "https://github.com/ollama/ollama/pull/3634", "diff_url": "https://github.com/ollama/ollama/pull/3634.diff", "patch_url": "https://github.com/ollama/ollama/pull/3634.patch", "merged_at": null }
Added the Taipy Ollama interface to the bottom of the web and desktop interfaces list.
{ "login": "mchiang0610", "id": 3325447, "node_id": "MDQ6VXNlcjMzMjU0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3325447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchiang0610", "html_url": "https://github.com/mchiang0610", "followers_url": "https://api.github.com/users/mchiang0610/followers", "following_url": "https://api.github.com/users/mchiang0610/following{/other_user}", "gists_url": "https://api.github.com/users/mchiang0610/gists{/gist_id}", "starred_url": "https://api.github.com/users/mchiang0610/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mchiang0610/subscriptions", "organizations_url": "https://api.github.com/users/mchiang0610/orgs", "repos_url": "https://api.github.com/users/mchiang0610/repos", "events_url": "https://api.github.com/users/mchiang0610/events{/privacy}", "received_events_url": "https://api.github.com/users/mchiang0610/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/3634/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/3634/timeline
null
null
true
https://api.github.com/repos/ollama/ollama/issues/4105
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/4105/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/4105/comments
https://api.github.com/repos/ollama/ollama/issues/4105/events
https://github.com/ollama/ollama/issues/4105
2,276,400,905
I_kwDOJ0Z1Ps6Hrx8J
4,105
Confusing error on linux with noexec on /tmp - Error: llama runner process no longer running: 1
{ "login": "utility-aagrawal", "id": 140737044, "node_id": "U_kgDOCGN6FA", "avatar_url": "https://avatars.githubusercontent.com/u/140737044?v=4", "gravatar_id": "", "url": "https://api.github.com/users/utility-aagrawal", "html_url": "https://github.com/utility-aagrawal", "followers_url": "https://api.github.com/users/utility-aagrawal/followers", "following_url": "https://api.github.com/users/utility-aagrawal/following{/other_user}", "gists_url": "https://api.github.com/users/utility-aagrawal/gists{/gist_id}", "starred_url": "https://api.github.com/users/utility-aagrawal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/utility-aagrawal/subscriptions", "organizations_url": "https://api.github.com/users/utility-aagrawal/orgs", "repos_url": "https://api.github.com/users/utility-aagrawal/repos", "events_url": "https://api.github.com/users/utility-aagrawal/events{/privacy}", "received_events_url": "https://api.github.com/users/utility-aagrawal/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "dhiltgen", "id": 4033016, "node_id": "MDQ6VXNlcjQwMzMwMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/4033016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhiltgen", "html_url": "https://github.com/dhiltgen", "followers_url": "https://api.github.com/users/dhiltgen/followers", "following_url": "https://api.github.com/users/dhiltgen/following{/other_user}", "gists_url": "https://api.github.com/users/dhiltgen/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhiltgen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhiltgen/subscriptions", "organizations_url": "https://api.github.com/users/dhiltgen/orgs", "repos_url": "https://api.github.com/users/dhiltgen/repos", "events_url": "https://api.github.com/users/dhiltgen/events{/privacy}", "received_events_url": "https://api.github.com/users/dhiltgen/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
14
2024-05-02T20:13:48
2024-05-08T21:10:15
2024-05-08T21:10:15
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I installed Ollama on my Ubuntu 22.04 machine using the command `curl -fsSL https://ollama.com/install.sh | sh`. I ran `ollama run llama3` and got this error: `Error: llama runner process no longer running: 1`. Can someone help me resolve it? ### OS Linux ### GPU Nvidia ### CPU Intel ### Ollama version 0.1.32
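Per the issue title, the cause here is `/tmp` mounted `noexec`: Ollama extracts its runner binaries to a temp directory and cannot execute them from a `noexec` mount. The documented workaround on affected versions is pointing `OLLAMA_TMPDIR` at an exec-capable directory. A quick Linux-only diagnostic sketch:

```python
# Check whether /tmp is mounted noexec by parsing /proc/mounts.
with open("/proc/mounts") as f:
    for line in f:
        device, mountpoint, fstype, opts = line.split()[:4]
        if mountpoint == "/tmp" and "noexec" in opts.split(","):
            print("/tmp is noexec; point OLLAMA_TMPDIR at an exec-capable dir")
            break
    else:
        print("/tmp allows exec")
```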
{ "login": "utility-aagrawal", "id": 140737044, "node_id": "U_kgDOCGN6FA", "avatar_url": "https://avatars.githubusercontent.com/u/140737044?v=4", "gravatar_id": "", "url": "https://api.github.com/users/utility-aagrawal", "html_url": "https://github.com/utility-aagrawal", "followers_url": "https://api.github.com/users/utility-aagrawal/followers", "following_url": "https://api.github.com/users/utility-aagrawal/following{/other_user}", "gists_url": "https://api.github.com/users/utility-aagrawal/gists{/gist_id}", "starred_url": "https://api.github.com/users/utility-aagrawal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/utility-aagrawal/subscriptions", "organizations_url": "https://api.github.com/users/utility-aagrawal/orgs", "repos_url": "https://api.github.com/users/utility-aagrawal/repos", "events_url": "https://api.github.com/users/utility-aagrawal/events{/privacy}", "received_events_url": "https://api.github.com/users/utility-aagrawal/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/4105/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/4105/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/1862
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/1862/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/1862/comments
https://api.github.com/repos/ollama/ollama/issues/1862/events
https://github.com/ollama/ollama/issues/1862
2,071,706,623
I_kwDOJ0Z1Ps57e7v_
1,862
AWQ model support / AWQ-to-GGUF conversion
{ "login": "sviknesh97", "id": 100233528, "node_id": "U_kgDOBflxOA", "avatar_url": "https://avatars.githubusercontent.com/u/100233528?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sviknesh97", "html_url": "https://github.com/sviknesh97", "followers_url": "https://api.github.com/users/sviknesh97/followers", "following_url": "https://api.github.com/users/sviknesh97/following{/other_user}", "gists_url": "https://api.github.com/users/sviknesh97/gists{/gist_id}", "starred_url": "https://api.github.com/users/sviknesh97/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sviknesh97/subscriptions", "organizations_url": "https://api.github.com/users/sviknesh97/orgs", "repos_url": "https://api.github.com/users/sviknesh97/repos", "events_url": "https://api.github.com/users/sviknesh97/events{/privacy}", "received_events_url": "https://api.github.com/users/sviknesh97/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
3
2024-01-09T06:23:17
2024-05-10T01:00:20
2024-05-10T01:00:19
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Does Ollama support AWQ formats instead of GGUF? GGUF inference seems to be a little slow, hence I'm thinking about AWQ. If AWQ isn't supported, is there a way to convert AWQ to GGUF?
{ "login": "jmorganca", "id": 251292, "node_id": "MDQ6VXNlcjI1MTI5Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/251292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmorganca", "html_url": "https://github.com/jmorganca", "followers_url": "https://api.github.com/users/jmorganca/followers", "following_url": "https://api.github.com/users/jmorganca/following{/other_user}", "gists_url": "https://api.github.com/users/jmorganca/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmorganca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmorganca/subscriptions", "organizations_url": "https://api.github.com/users/jmorganca/orgs", "repos_url": "https://api.github.com/users/jmorganca/repos", "events_url": "https://api.github.com/users/jmorganca/events{/privacy}", "received_events_url": "https://api.github.com/users/jmorganca/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/1862/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/1862/timeline
null
completed
false
https://api.github.com/repos/ollama/ollama/issues/2222
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/2222/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/2222/comments
https://api.github.com/repos/ollama/ollama/issues/2222/events
https://github.com/ollama/ollama/issues/2222
2,103,082,752
I_kwDOJ0Z1Ps59Wn8A
2,222
Add LLaVAR model
{ "login": "ryx", "id": 476417, "node_id": "MDQ6VXNlcjQ3NjQxNw==", "avatar_url": "https://avatars.githubusercontent.com/u/476417?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ryx", "html_url": "https://github.com/ryx", "followers_url": "https://api.github.com/users/ryx/followers", "following_url": "https://api.github.com/users/ryx/following{/other_user}", "gists_url": "https://api.github.com/users/ryx/gists{/gist_id}", "starred_url": "https://api.github.com/users/ryx/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ryx/subscriptions", "organizations_url": "https://api.github.com/users/ryx/orgs", "repos_url": "https://api.github.com/users/ryx/repos", "events_url": "https://api.github.com/users/ryx/events{/privacy}", "received_events_url": "https://api.github.com/users/ryx/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5789807732, "node_id": "LA_kwDOJ0Z1Ps8AAAABWRl0dA", "url": "https://api.github.com/repos/ollama/ollama/labels/model%20request", "name": "model request", "color": "1E5DE6", "default": false, "description": "Model requests" } ]
open
false
null
[]
null
0
2024-01-27T00:09:54
2024-01-27T00:13:18
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
Please add the LLaVAR model (from https://llavar.github.io/) to ollama. The model is based on llava and offers enhanced text recognition abilities, which seems like a great addition to the current range of ollama models.
null
{ "url": "https://api.github.com/repos/ollama/ollama/issues/2222/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/2222/timeline
null
null
false
https://api.github.com/repos/ollama/ollama/issues/8311
https://api.github.com/repos/ollama/ollama
https://api.github.com/repos/ollama/ollama/issues/8311/labels{/name}
https://api.github.com/repos/ollama/ollama/issues/8311/comments
https://api.github.com/repos/ollama/ollama/issues/8311/events
https://github.com/ollama/ollama/issues/8311
2,769,441,773
I_kwDOJ0Z1Ps6lElPt
8,311
OpenSSL/3.0.13: error:0A00010B:SSL routines::wrong version number
{ "login": "dvdgamer08", "id": 134729003, "node_id": "U_kgDOCAfNKw", "avatar_url": "https://avatars.githubusercontent.com/u/134729003?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dvdgamer08", "html_url": "https://github.com/dvdgamer08", "followers_url": "https://api.github.com/users/dvdgamer08/followers", "following_url": "https://api.github.com/users/dvdgamer08/following{/other_user}", "gists_url": "https://api.github.com/users/dvdgamer08/gists{/gist_id}", "starred_url": "https://api.github.com/users/dvdgamer08/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dvdgamer08/subscriptions", "organizations_url": "https://api.github.com/users/dvdgamer08/orgs", "repos_url": "https://api.github.com/users/dvdgamer08/repos", "events_url": "https://api.github.com/users/dvdgamer08/events{/privacy}", "received_events_url": "https://api.github.com/users/dvdgamer08/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 5667396184, "node_id": "LA_kwDOJ0Z1Ps8AAAABUc2aWA", "url": "https://api.github.com/repos/ollama/ollama/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
2025-01-05T20:43:11
2025-01-05T23:36:35
2025-01-05T23:36:35
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
null
null
### What is the issue? I've set up SSL for Ollama following these steps: ### Configuring SSL 1. Obtain an SSL certificate and a private key. You can acquire these from a Certificate Authority (CA) or generate a self-signed certificate (not recommended for production use). 2. Place the certificate and key files in the `~/.ollama/ssl/` directory on your server. 3. Ensure the files are named `cert.pem` and `key.pem`, respectively. 4. Restart the Ollama server. It will automatically detect the presence of these files and enable HTTPS. I've generated them using openssl, and I also tried a Cloudflare cert, but it still wouldn't work. Nothing pops up after serving (OLLAMA_HOST=https://0.0.0.0:11434 ollama serve) and HTTP still works the same. openssl s_client -connect 192.168.1.220:11434 returns: CONNECTED(00000003) 40970FDC4E730000:error:0A00010B:SSL routines:ssl3_get_record:wrong version number:../ssl/record/ssl3_record.c:354: no peer certificate available No client certificate CA names sent SSL handshake has read 5 bytes and written 293 bytes Verification: OK New, (NONE), Cipher is (NONE) Secure Renegotiation IS NOT supported Compression: NONE Expansion: NONE No ALPN negotiated Early data was not sent Verify return code: 0 (ok) Some people suggested a reverse proxy, but I'm really hoping this would work. I tried it on both Windows Server and Ubuntu Server - same outcome. ### OS Linux ### GPU _No response_ ### CPU Intel ### Ollama version 0.5.4
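For context: "wrong version number" means the port answered in plain HTTP while the client attempted a TLS handshake, i.e. nothing is terminating TLS on 11434. As far as I know, the `~/.ollama/ssl/` steps above don't correspond to any documented Ollama feature; the usual route is a TLS-terminating reverse proxy in front of the server. A small diagnostic sketch (host and port are taken from the report above):

```python
# Confirm the port speaks plain HTTP, then reproduce the TLS failure
# that openssl s_client reported.
import json
import socket
import ssl
import urllib.request

HOST, PORT = "192.168.1.220", 11434

# 1) Plain HTTP should work if the server is up:
with urllib.request.urlopen(f"http://{HOST}:{PORT}/api/version") as resp:
    print("plain HTTP:", json.loads(resp.read()))

# 2) A raw TLS handshake against the same port should fail the same way:
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
try:
    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock) as tls:
            print("TLS handshake succeeded:", tls.version())
except ssl.SSLError as e:
    print("TLS handshake failed (expected if the port is plain HTTP):", e)
```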
{ "login": "dvdgamer08", "id": 134729003, "node_id": "U_kgDOCAfNKw", "avatar_url": "https://avatars.githubusercontent.com/u/134729003?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dvdgamer08", "html_url": "https://github.com/dvdgamer08", "followers_url": "https://api.github.com/users/dvdgamer08/followers", "following_url": "https://api.github.com/users/dvdgamer08/following{/other_user}", "gists_url": "https://api.github.com/users/dvdgamer08/gists{/gist_id}", "starred_url": "https://api.github.com/users/dvdgamer08/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dvdgamer08/subscriptions", "organizations_url": "https://api.github.com/users/dvdgamer08/orgs", "repos_url": "https://api.github.com/users/dvdgamer08/repos", "events_url": "https://api.github.com/users/dvdgamer08/events{/privacy}", "received_events_url": "https://api.github.com/users/dvdgamer08/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/ollama/ollama/issues/8311/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/ollama/ollama/issues/8311/timeline
null
completed
false