ollama-multirun: swdub3jligfsbcbwcmv2aw91cybpbnn0cnvjdglvbnmucklnbm: llama3.2:1b: 20250703-153147

models: bakllava:7b codellama:7b deepcoder:1.5b deepseek-r1:1.5b deepseek-r1:8b dolphin-mistral:7b dolphin3:8b gemma3:1b gemma3:4b gemma:2b granite3.2-vision:2b granite3.3:2b huihui_ai/baronllm-abliterated:8b llama3-groq-tool-use:8b llama3.2:1b llava-llama3:8b llava-phi3:3.8b llava:7b minicpm-v:8b mistral:7b qwen2.5-coder:7b qwen2.5vl:3b qwen2.5vl:7b qwen3:1.7b qwen3:8b stable-code:3b starcoder:7b
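
A model list like the one above can be read back from a running Ollama server's /api/tags endpoint. The snippet below is a minimal sketch, assuming a default local server at http://localhost:11434; it is not part of ollama-multirun itself.

    # List locally installed Ollama models, similar to the "models" line above.
    # Assumes a default local Ollama server at http://localhost:11434.
    import json
    import urllib.request

    with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
        tags = json.load(resp)

    print(" ".join(sorted(m["name"] for m in tags["models"])))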

Prompt: (raw) (yaml) words: 1, bytes: 229

Output: llama3.2:1b (raw)

Stats (raw)
words: 109
bytes: 733
total duration: 3.503749042s
load duration: 30.238125ms
prompt eval count: 179 token(s)
prompt eval duration: 246.77ms
prompt eval rate: 725.37 tokens/s
eval count: 182 token(s)
eval duration: 3.226138958s
eval rate: 56.41 tokens/s
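
The two throughput rows are simple ratios: eval rate = eval count / eval duration (182 tokens / 3.226138958 s ≈ 56.41 tokens/s) and prompt eval rate = prompt eval count / prompt eval duration (179 tokens / 246.77 ms ≈ 725.37 tokens/s). The same counters come back from a non-streaming /api/generate call, with durations reported in nanoseconds. The sketch below is a hypothetical reproduction, assuming a default local server, llama3.2:1b installed, and a throwaway prompt; it is not the ollama-multirun code itself.

    # Recompute prompt eval rate and eval rate from an /api/generate response.
    # Assumes a default local Ollama server and that llama3.2:1b is installed.
    import json
    import urllib.request

    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({"model": "llama3.2:1b",
                         "prompt": "Hello",   # placeholder prompt, not the run's prompt
                         "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        r = json.load(resp)

    # Durations are reported in nanoseconds; rates are tokens per second.
    prompt_rate = r["prompt_eval_count"] / (r["prompt_eval_duration"] / 1e9)
    eval_rate = r["eval_count"] / (r["eval_duration"] / 1e9)
    print(f"prompt eval rate: {prompt_rate:.2f} tokens/s")
    print(f"eval rate: {eval_rate:.2f} tokens/s")
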
Model (raw)
name: llama3.2:1b
architecture: llama
size: 2.8 GB
parameters: 1.2B
context length: 131072
embedding length: 2048
quantization: Q8_0
capabilities: completion, tools
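
The same model details (architecture, parameters, context length, embedding length, quantization, capabilities) can be read back with "ollama show llama3.2:1b" or from the server's /api/show endpoint. A minimal sketch, assuming a default local server:

    # Read back the model details listed above via the /api/show endpoint.
    # Assumes a default local Ollama server and that llama3.2:1b is installed.
    import json
    import urllib.request

    req = urllib.request.Request(
        "http://localhost:11434/api/show",
        data=json.dumps({"model": "llama3.2:1b"}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        info = json.load(resp)

    print(info["details"])            # family, parameter_size, quantization_level, ...
    print(info.get("capabilities"))   # e.g. ["completion", "tools"] on recent servers
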
System
ollama proc: 100% GPU
ollama version: 0.9.3
sys arch: arm64
sys processor: arm
sys memory: 13G + 3385M
sys OS: Darwin 24.5.0
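
The "ollama proc" and "ollama version" rows mirror what "ollama ps" and "ollama --version" report; the same information is exposed over HTTP. A minimal sketch, again assuming a default local server:

    # Query the server version and the currently loaded model(s).
    # Assumes a default local Ollama server at http://localhost:11434.
    import json
    import urllib.request

    with urllib.request.urlopen("http://localhost:11434/api/version") as resp:
        print(json.load(resp)["version"])      # e.g. 0.9.3

    with urllib.request.urlopen("http://localhost:11434/api/ps") as resp:
        for m in json.load(resp)["models"]:
            # size is the model's total memory use, size_vram its GPU share;
            # a model fully resident in VRAM is what "ollama ps" shows as 100% GPU.
            print(m["name"], m["size"], m["size_vram"])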