Software development is shifting in a noticeable way. In the latest episode of the Lex Fridman Podcast, Peter Steinberger (PSPDFKit) describes a model that is more than “AI writes code”: Agentic Engineering.
The practical benefit is clear: teams can deliver software much faster because AI agents do not just suggest code; they work through tasks as long as goals, rules, and quality standards are clearly defined. The focus shifts from perfect syntax to solid constraints and control.
What is Agentic Engineering in one sentence?
Agentic Engineering means an AI agent works autonomously on a task within a defined framework. It analyses, implements, runs builds and tests, fixes errors, and repeats the loop until the task works.
That is the difference from ordinary prompting: not “one answer,” but a workflow.
Why this matters
Most teams do not lose time on big ideas. They lose it on routine:
- boilerplate and integration plumbing
- small changes that keep adding up
- testing, review, and rework
- constant context switching between tickets
This is exactly where Agentic Engineering helps: routine becomes automatable, while humans spend more time on architecture, product decisions, and quality.
The paradigm shift: from coder to architect
This is not a “humans get replaced” story. It is a role shift.
When agents take over iteration, two skills become more important:
- Systems thinking: What is secure, scalable, and maintainable?
- Technical taste: What is not just finished, but well solved?
Agents can provide speed. Whether that becomes a durable advantage depends on your standards.
OpenClaw and the “green loop”
Steinberger illustrates this with OpenClaw. The agent works inside a controlled environment and runs in a loop:
- Understand the task
- Change the code
- Run the tests
- If there is an error, read it, fix it, and test again
- Repeat until everything is green
The leverage is not the first draft. The leverage is automated iteration, with tests acting as the contract.
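The loop above can be sketched in a few lines of Python. This is a hedged illustration, not OpenClaw's actual implementation: `run_tests` and `apply_fix` are hypothetical placeholders for "run the suite" and "the agent patches the code."

```python
# Sketch of the "green loop": test -> read failure -> fix -> test again.
# Function names (run_tests, apply_fix) are illustrative, not a real API.

MAX_ITERATIONS = 5  # bounded loop: escalate to a human when exhausted


def green_loop(run_tests, apply_fix, max_iterations=MAX_ITERATIONS):
    """Iterate until the suite is green or the budget runs out.

    run_tests: () -> (passed: bool, report: str)
    apply_fix: (report: str) -> None  # the agent reads the error and patches
    """
    for attempt in range(1, max_iterations + 1):
        passed, report = run_tests()
        if passed:
            return True, attempt      # everything green: contract fulfilled
        apply_fix(report)             # agent iterates on the failure output
    return False, max_iterations      # budget exhausted: hand back to a human


# Toy run: a codebase with two "bugs" that each take one fix.
state = {"bugs": 2}
ok, attempts = green_loop(
    run_tests=lambda: (state["bugs"] == 0, f"{state['bugs']} failing tests"),
    apply_fix=lambda report: state.update(bugs=state["bugs"] - 1),
)
print(ok, attempts)  # True 3
```

The bounded iteration count matters: it is what keeps "repeat until green" from becoming an unsupervised infinite loop.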
Where teams fail and how to avoid it
Agentic Engineering sounds like speed without tradeoffs. In practice, teams usually fail for the same reasons:
- Too much freedom: the agent changes more than intended
- Weak tests: “green” does not mean “correct”
- Fragmented context: rules sit in chats, knowledge sits in heads, and drift follows
- Tasks that are too large: control gets lost
The answer is not less AI. The answer is stronger guardrails.
A pragmatic start: 5 rules for your team
- Define the frame. Start with one module, one feature branch, and explicit no-go areas.
- Turn tests into a contract. Without usable tests, you only accelerate mistakes.
- Bundle context in writing. One page is often enough: goal, rules, examples, style, and data sources.
- Keep loops short. Small tasks, quick checks, next step.
- Keep the human final. The agent iterates. You do the final technical and business review before anything goes live.
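"Turn tests into a contract" can be as concrete as writing the acceptance criteria as assertions before the agent starts. A minimal sketch, assuming a hypothetical `slugify` task handed to an agent; the reference implementation here only stands in for what the agent would produce:

```python
# Contract written *before* the agent starts work on a hypothetical
# slugify task. "Green" only means "correct" if these assertions
# actually encode the intent.

def slugify(title: str) -> str:
    """Stand-in implementation; in practice the agent writes this part."""
    return "-".join(title.lower().split())


# The contract the agent must satisfy to be "green":
assert slugify("Agentic Engineering") == "agentic-engineering"
assert slugify("  extra   spaces  ") == "extra-spaces"
assert slugify("Already-Slugged") == "already-slugged"
```

Weak contracts are where "green does not mean correct" bites: if the assertions miss an edge case, the agent will happily converge on the wrong behavior.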
Conclusion
Agentic Engineering is not a buzzword. It is a new operating model: AI agents execute, humans set direction and quality. Teams that learn this can ship faster without sacrificing maintainability and can build much more themselves, even with small teams.
If you want to learn how AI-assisted development and Agentic Engineering can work in a controlled way inside your company — with context, tests, guardrails, roles, and process — ask about our workshop.