
Completely agree. You really have to learn how to use it.

For example, I've heard many say that big refactorings cause problems. I found an approach that works for SwiftUI projects. I did a refactoring that moved files, restructured large files into smaller components, and standardized the component setup across different views.

The pattern that works for me: 1) ask it to document the architecture and coding standards, 2) ask it to create a refactoring plan, 3) ask it to do a low-risk refactoring first, 4) ask it to update the refactoring plan based on what it learned, and then 5) go through all the remaining refactorings.

The refactoring plan comes with timeline estimates in days, but those are complete rubbish with Claude Code. Instead I asked it to estimate in 1) number of chat messages, 2) number of tokens, 3) cost based on token count, and 4) number of files impacted.
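Turning those per-step estimates into a cost figure is simple arithmetic. A minimal sketch in Python; the per-million-token prices and the plan entries below are placeholders I made up, not real Claude pricing:

```python
# Convert a refactoring plan's token estimates into a dollar cost.
# Prices are hypothetical placeholders, not actual Claude pricing.

def estimate_cost(input_tokens: int, output_tokens: int,
                  price_in_per_mtok: float = 3.0,
                  price_out_per_mtok: float = 15.0) -> float:
    """Cost in dollars for one step, given token counts and $/million-token prices."""
    return (input_tokens * price_in_per_mtok +
            output_tokens * price_out_per_mtok) / 1_000_000

# Per-step estimates: (messages, input tokens, output tokens, files impacted)
plan = {
    "extract view components": (6, 120_000, 30_000, 8),
    "standardize view setup":  (4, 80_000, 20_000, 12),
}

total = sum(estimate_cost(inp, out) for _, inp, out, _ in plan.values())
print(f"estimated total: ${total:.2f}")  # → estimated total: $1.35
```

The useful part is less the dollar figure than that tokens and files impacted are quantities the model can actually reason about, unlike calendar days.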

Another approach that works well is to first generate a throwaway application. Then ask it to write documentation on how to do it right, incorporating everything it learned and where it got stuck. Finally, redo the application following these guidelines and rules.

Another tip: sometimes when it gets stuck, I open the project in Windsurf and ask another LLM (e.g., Gemini 2.5 Pro or Qwen Coder) to review the project and the problem, and then ask Windsurf to produce a prompt instructing Claude Code to fix it. Works well in some cases.

Also, biggest insight so far: don't expect it to be perfect the first time. It needs a feedback loop: generate code, test the code, inspect the results, and then improve the code.

This works well for SQL, especially if it can access real data: it will inspect the database, try some queries, work out the schema from your data, and then iterate toward a SQL query that works. Often, as a final step, it will simplify the working query.
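That loop (inspect the schema, probe the data, then converge on a query) can be sketched with Python's stdlib sqlite3; the table and data here are invented for illustration:

```python
import sqlite3

# Sketch of the feedback loop: inspect schema, probe data, then query.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, "paid", 10.0), (2, "open", 5.0), (3, "paid", 7.5)])

# Step 1: inspect the schema, as the model would.
schema = conn.execute("SELECT sql FROM sqlite_master").fetchone()[0]
print(schema)

# Step 2: probe the data to learn which values actually occur.
statuses = [row[0] for row in conn.execute("SELECT DISTINCT status FROM orders")]
print(statuses)

# Step 3: work toward the query that answers the question.
total_paid = conn.execute(
    "SELECT SUM(amount) FROM orders WHERE status = 'paid'").fetchone()[0]
print(total_paid)  # → 17.5
```

Each step's output feeds the next prompt, which is exactly the feedback loop described above.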

I use an MCP tool with full access to a test database, so you can tell it to run an explain plan and look at the statistics (pg_stat_statements). It will draw a mermaid diagram of your query with performance numbers included (number of records retrieved, cache hits, etc.) and come back with an optimized query and index suggestions.
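The setup above is Postgres (EXPLAIN plus pg_stat_statements through an MCP tool); as a stand-in that runs anywhere, here is the same before/after-index idea with stdlib sqlite3 and its EXPLAIN QUERY PLAN. The table and index names are invented:

```python
import sqlite3

# Compare query plans before and after adding a suggested index.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER, user_id INTEGER, ts TEXT)")

# Without an index, the planner falls back to a full table scan.
before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events WHERE user_id = 42").fetchall()
print(before)  # plan detail mentions 'SCAN events'

# After creating the index, the plan switches to an index search.
conn.execute("CREATE INDEX idx_events_user ON events(user_id)")
after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events WHERE user_id = 42").fetchall()
print(after)   # plan detail mentions 'USING INDEX idx_events_user'
```

Feeding this kind of before/after plan output back to the model is what lets it justify its index suggestions rather than guess.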

I also tried it on CSV and parquet files with DuckDB: it will run the explain plan, compare both queries, explain why parquet is faster, notice that the query is doing predicate pushdown, and so on.

Also, when it gets things wrong, instead of inspecting the code I ask it to create a design document with mermaid diagrams describing what it has built. Quite often that quickly reveals a design mistake you can ask it to fix.

Also, with multiple tools on the same project, you have the problem of each using its own way of keeping track of the plan. I asked Claude Code to come up with rules for itself and Windsurf to collaborate on a project. It came back with a set of rules for CLAUDE.md and .windsurfrules covering which files to have and how to use them (PLAN.md, TODO.md, ARCHITECTURE.md, DECISION.md, COLLABORATION.md).
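For a sense of what that shared rules section can look like, here is a hypothetical sketch; the file list comes from the rules above, but the wording of each rule is invented:

```
# Shared planning rules (identical section in CLAUDE.md and .windsurfrules)
- PLAN.md is the single source of truth for the current plan.
- TODO.md tracks open items; mark an item done in the same change that completes it.
- ARCHITECTURE.md describes the target structure; update it whenever the structure changes.
- DECISION.md records why a choice was made, one dated entry per decision.
- COLLABORATION.md holds these rules; either tool may propose edits, never silent ones.
```

The point is that both tools read the same files, so neither invents its own parallel plan.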


