Try other models, like DeepSeek Chat, Qwen 3, or even the new GPT-OSS.
And if you don’t want to self-host or don’t have strong hardware, I tried https://nano-gpt.com/ and it works pretty well and is relatively cheap.
I mostly just look at the cost per million tokens, and I assume that the higher it is, the more energy the model uses.
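Since cost per million tokens is just a linear rate, comparing rough spend across models is a one-liner. A minimal sketch, where the prices are made-up examples and not any provider's actual rates:

```python
def cost_usd(tokens: int, price_per_million_usd: float) -> float:
    """Spend for a given token count at a posted per-million-token rate."""
    return tokens / 1_000_000 * price_per_million_usd

# Hypothetical example: a session with 50k input tokens at $0.30/M
# and 10k output tokens at $1.20/M (illustrative numbers only).
spend = cost_usd(50_000, 0.30) + cost_usd(10_000, 1.20)
print(f"${spend:.4f}")  # → $0.0270
```

Whether price actually tracks energy use is a looser assumption, since pricing also reflects hardware, demand, and margins.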

