ollama-multirun: review_this_bash_script_____usr_bin_env_: deepcoder:1.5b: 20250712-210327

models: codellama:7b cogito:3b cogito:8b deepcoder:1.5b deepseek-r1:1.5b deepseek-r1:14b deepseek-r1:8b dolphin-mistral:7b dolphin3:8b gemma3:1b gemma3:4b gemma3n:e2b gemma3n:e4b gemma:2b granite3.3:2b granite3.3:8b hermes3:8b llama3.1:8b-instruct-q4_1 llama3.2:1b llama3.2:3b llava-llama3:8b llava-phi3:3.8b llava:7b minicpm-v:8b mistral:7b mistral:7b-instruct qwen2.5-coder:7b qwen2.5vl:3b qwen2.5vl:7b qwen3:0.6b qwen3:1.7b qwen3:14b qwen3:4b qwen3:8b smollm2:1.7b smollm2:135m smollm2:360m

Prompt: (raw) (yaml) words: 3130, bytes: 31962

Thinking: deepcoder:1.5b (raw)

Output: deepcoder:1.5b (raw)

Stats (raw)
  words: 251
  bytes: 1799
  total duration: 1m2.357411625s
  load duration: 28.717792ms
  prompt eval count: 9486 token(s)
  prompt eval duration: 19.503133792s
  prompt eval rate: 486.38 tokens/s
  eval count: 1058 token(s)
  eval duration: 42.824729958s
  eval rate: 24.71 tokens/s
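
The two rate rows are simply the token counts divided by their durations, in the style of the timing block Ollama prints for a verbose run. A quick sanity check of that arithmetic, with the values hand-copied from the table above (bc truncates rather than rounds, so expect roughly 486.38 and 24.70):

    # Recompute the reported rates as count / duration
    echo "scale=2; 9486 / 19.503133792" | bc   # prompt eval rate, tokens/s
    echo "scale=2; 1058 / 42.824729958" | bc   # eval rate, tokens/s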
Model (raw)
  name: deepcoder:1.5b
  architecture: qwen2
  size: 2.5 GB
  parameters: 1.8B
  context length: 131072
  embedding length: 1536
  quantization: Q4_K_M
  capabilities: completion
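
These model details look like what `ollama show` reports for a pulled model. A minimal sketch for reproducing them locally, assuming Ollama is running with its default port and the model has been pulled (the JSON field name accepted by /api/show may vary across Ollama versions):

    # Print architecture, parameters, context length, embedding length, quantization, capabilities
    ollama show deepcoder:1.5b

    # Same data via the local REST API
    curl -s http://localhost:11434/api/show -d '{"model": "deepcoder:1.5b"}'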
System
  Ollama proc: 100% GPU
  Ollama context: 16384
  Ollama version: 0.9.7-rc0
  Multirun timeout: 600 seconds
  Sys arch: arm64
  Sys processor: arm
  Sys memory: 12G + 834M
  Sys OS: Darwin 24.5.0
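
For context, the system rows resemble what standard macOS and Ollama commands report; a rough sketch of equivalent queries (illustrative only, not necessarily what ollama-multirun itself runs):

    ollama --version        # Ollama version (0.9.7-rc0 above)
    ollama ps               # loaded model size and PROCESSOR column, e.g. "100% GPU"
    uname -m                # Sys arch (arm64)
    uname -p                # Sys processor (arm)
    uname -sr               # Sys OS (Darwin 24.5.0)
    sysctl -n hw.memsize    # physical memory in bytes (macOS)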