
Heya, I'm the author of the post! This was probably unintentional but I think you're making a really valuable observation that will be helpful to others.

The models Cursor provides in their product are intermediated versions of the models that companies like OpenAI and Anthropic offer. You are technically using Codex, but not in the way you would be inside a tool like Codex CLI or Claude Code.

If you ask Cursor to solve a tough problem, Cursor will restructure it into a different prompt before sending that request to OpenAI. They do this for two reasons: 1. To save money. By restructuring the prompt they can use fewer tokens, which saves Cursor money because they are the ones paying for the tokens out of your subscription cost. 2. [Based on things the Cursor team has said] They believe they can construct a better intermediate prompt that is more representative of the problem you want to solve.
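To make the intermediation concrete, here is a minimal, purely illustrative sketch of what a prompt-rewriting middleman could look like. This is not Cursor's actual code; the function name, the keyword-overlap heuristic, and the file-context format are all made up for illustration. The point is just that the intermediary trims what it forwards to the provider, so it pays for fewer input tokens than if it sent your full context verbatim.

```python
import re

def rewrite_prompt(user_prompt: str, file_context: list[str],
                   max_context_files: int = 2) -> str:
    """Hypothetical intermediate-prompt builder: keep only the context
    files that look most relevant to the request, scored here by a
    naive keyword-overlap heuristic (real products use far more
    sophisticated retrieval)."""
    prompt_tokens = set(re.findall(r"[a-z0-9_.]+", user_prompt.lower()))
    scored = sorted(
        file_context,
        key=lambda f: len(prompt_tokens & set(re.findall(r"[a-z0-9_.]+", f.lower()))),
        reverse=True,
    )
    kept = scored[:max_context_files]
    # Forward a smaller prompt to the model provider instead of everything.
    return "\n\n".join(["Task: " + user_prompt] + kept)

full_context = [
    "auth.py: def login(user): ...",
    "billing.py: def charge(card): ...",
    "README.md: project overview ...",
]
small = rewrite_prompt("fix the login bug in auth.py", full_context)
print(small)
```

Running this keeps the `auth.py` snippet (and drops the least relevant file), which is exactly the trade-off described above: fewer tokens sent, at the risk that the rewriter drops context the model actually needed.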

This extra level of abstraction means you are not getting the best results when you use a tool like Cursor. OpenAI and Anthropic are running their harnesses, Codex CLI and Claude Code, at a loss (because VC money), but providing better results. That's not the best way to make money, but it's a great way to build mindshare and hopefully win customers for life. (People are fickle and cheap, though, so I doubt this works as a customers-for-life strategy the way it does with deodorant, where people keep buying Dove once they start.)

Happy to answer any questions you may have, but mostly I would highly suggest trying out Codex CLI and Claude Code to get a better feel for what I'm saying, and also to get more out of your AI tools. :)
