ollama-multirun: base64__swdub3jligfsbcbwcmv2aw91cybpbnn0cnvjdglvbn: llama3.2:1b: 20250703-150012

models: bakllava:7b codellama:7b deepcoder:1.5b deepseek-r1:1.5b deepseek-r1:8b dolphin-mistral:7b dolphin3:8b gemma3:1b gemma3:4b gemma:2b granite3.2-vision:2b granite3.3:2b huihui_ai/baronllm-abliterated:8b llama3-groq-tool-use:8b llama3.2:1b llava-llama3:8b llava-phi3:3.8b llava:7b minicpm-v:8b mistral:7b qwen2.5-coder:7b qwen2.5vl:3b qwen2.5vl:7b qwen3:1.7b qwen3:8b stable-code:3b starcoder:7b
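
Note: the report itself contains no code, but a minimal sketch of how the same prompt could be replayed against each listed model through the Ollama HTTP API (/api/generate) follows; the host, prompt text, and model subset below are illustrative assumptions, not values taken from this run.

    import json
    import urllib.request

    OLLAMA_URL = "http://localhost:11434/api/generate"   # assumed default local endpoint
    MODELS = ["llama3.2:1b", "gemma3:1b", "qwen3:1.7b"]   # subset of the models listed above
    PROMPT = "..."                                        # placeholder; the real prompt lives in prompt.txt / prompt.yaml

    def generate(model: str, prompt: str) -> dict:
        """Send one non-streaming generate request and return the parsed JSON reply."""
        payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
        req = urllib.request.Request(OLLAMA_URL, data=payload,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    for model in MODELS:
        reply = generate(model, PROMPT)
        print(model, reply["response"][:80])  # first 80 characters of each model's answer
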

Prompt (raw, yaml): words: 2, bytes: 237

Output: llama3.2:1b (raw)

Stats (raw)
  words: 6
  bytes: 43
  total duration:
  load duration:
  prompt eval count:
  prompt eval duration:
  prompt eval rate:
  eval count:
  eval duration:
  eval rate:
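
The two rate rows are conventionally derived from the corresponding count and duration fields of the /api/generate response, where durations are reported in nanoseconds; a sketch under that assumption, with placeholder numbers rather than values from this run:

    NS_PER_SECOND = 1e9  # Ollama reports durations in nanoseconds

    def tokens_per_second(count: int, duration_ns: int) -> float:
        """Rate in tokens/s from a token count and a nanosecond duration."""
        return count / (duration_ns / NS_PER_SECOND)

    reply = {"prompt_eval_count": 32, "prompt_eval_duration": 120_000_000,
             "eval_count": 6, "eval_duration": 450_000_000}  # placeholder values only
    print("prompt eval rate:", tokens_per_second(reply["prompt_eval_count"],
                                                 reply["prompt_eval_duration"]), "tokens/s")
    print("eval rate:", tokens_per_second(reply["eval_count"],
                                          reply["eval_duration"]), "tokens/s")
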
Model (raw)
  name: llama3.2:1b
  architecture: llama
  size: 2.8 GB
  parameters: 1.2B
  context length: 131072
  embedding length: 2048
  quantization: Q8_0
  capabilities: completion, tools
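
Model details of this kind can be read back from Ollama's /api/show endpoint (or `ollama show llama3.2:1b` on the command line); the sketch below assumes the documented response fields `capabilities` and `details`:

    import json
    import urllib.request

    SHOW_URL = "http://localhost:11434/api/show"  # assumed default local endpoint

    def show(model: str) -> dict:
        """Fetch model metadata for one installed model."""
        payload = json.dumps({"model": model}).encode()
        req = urllib.request.Request(SHOW_URL, data=payload,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    info = show("llama3.2:1b")
    print(info.get("capabilities"))                            # e.g. ["completion", "tools"]
    details = info.get("details", {})
    print(details.get("parameter_size"), details.get("quantization_level"))
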
System
  ollama proc: 100% GPU
  ollama version: 0.9.3
  sys arch: arm64
  sys processor: arm
  sys memory: 14G + 689M
  sys OS: Darwin 24.5.0
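
The system rows resemble the output of `ollama ps`, `ollama --version`, and standard platform queries; a speculative sketch of how similar figures could be collected on macOS (the exact commands ollama-multirun uses are an assumption, not taken from this report):

    import platform
    import subprocess

    # Version and processor placement as reported by the Ollama CLI
    version = subprocess.run(["ollama", "--version"], capture_output=True, text=True).stdout.strip()
    proc = subprocess.run(["ollama", "ps"], capture_output=True, text=True).stdout
    print("ollama version:", version)
    print("ollama proc:", proc)               # PROCESSOR column shows e.g. "100% GPU"

    # Host architecture and OS, matching the "sys" rows above
    print("sys arch:", platform.machine())    # e.g. arm64
    print("sys processor:", platform.processor())  # e.g. arm
    print("sys OS:", platform.system(), platform.release())  # e.g. Darwin 24.5.0
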