AI can smooth out a lot of the processes in your work as a software developer. My experience is heavily frontend, so this post focuses on the parts of a frontend developer's workflow that AI can enhance.
The following are some use cases of AI in my daily workflow.
Research
There are a lot of memes about the daily work of a frontend engineer involving always googling how to center a div. That quick context switch to open a tab and search is the most common opportunity for AI. You're probably already doing this.
An example of something I'd ask the AI: "What packages can I use for a user walkthrough in this application? I currently know of only v-tour."
This will search the internet and give me a list of packages. Now, I could ask the AI to expand on the differences between the packages, but in this case I like to visit each package's docs myself to get a first-hand feel for what they're "selling". When researching new packages I still go to the docs to get a deeper understanding of what I'll be using.
Generating code
This is probably the 2nd most common use case. AI has become my medium for writing code. I know what I want to write, I describe it, and the AI writes it. Before I let it write anything, I ask how it would approach the problem first. If the approach is aligned with what I'm thinking, I let it go ahead. If not, I correct it and then let it write. I review and refactor what it gives me. It's basically typing small to get the AI to do the bulk of the typing.
This makes the most sense when you're working in a domain you're comfortable in. You know what the end result should look like; you just don't want to type it all out. Doing this with new packages, features you haven't worked on before, or patterns you've only seen in tutorials is treading the blurry border of vibe coding (and remember, this is not a vibe coding guide).
Worth noting that deadlines play a role in whether I go this route. As good as AI is, it can send your progress backwards real quick. When time is tight and I can't afford the back and forth, I get in there and write it myself.
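To make the ask-first step concrete, a prompt can be as simple as the one below. The feature and page named here are hypothetical; the important part is explicitly holding the AI back from writing code:

```
How would you approach adding debounced search to the products page?
Describe the approach only. Don't write any code yet.
```

Once the described approach matches what I had in mind, a follow-up "go ahead" is all it takes.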
Debugging
For debugging, you'll need a way to grant your AI agent access to a browser instance. Once you have that, your assistant is ready for prompts like "open the app in agent browser and let's test what we just worked on."
I was initially amazed when I started using Playwright for this. However, snapshots from Playwright MCP were too bloated; the context window would fill up after 2 or 3 interactions. Agent Browser solves that with very lightweight snapshots and a different approach to selecting elements in the browser.
My setup
Some common tools for giving AI browser access are:
- Browser MCP
- Claude for Chrome (Claude native)
- Playwright
- Agent Browser (what I currently use)
To use Agent Browser, go to the docs and install the CLI tool. Then install the skill for the agent you use. If you're like me, you'll want to see what the AI is seeing and doing, so instruct your agent to open Agent Browser in "headed" mode (the default is headless). You can add a global rule so you don't have to say this every time.
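The global rule can be short. Here's a sketch of what it might look like; the exact wording and rules-file location depend on your agent, so treat this as illustrative:

```
When using Agent Browser, always launch it in headed mode so I can
watch the session. Ask me before closing the browser.
```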
Iterative testing
For this, you need the browser access set up in the previous section, whether through Agent Browser or an alternative.
Let's imagine this scenario: you're working on a 4-step multi-step form and are currently wiring up the last step. After every code change, HMR refreshes the app in the browser and you lose the form state. Refilling the first 3 steps after every fix-and-test cycle gets maddening pretty quickly.
This presents an opportunity to let your agent refill up to the step you want to test yourself, or to have the agent test everything. When it encounters unexpected behaviour, it can jump straight into debugging. You can stop it at any point to give input, or let it finish what it's doing.
Any repetitive manual steps that sit between you and the thing you're actually working on can be handled this way.
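For the multi-step form scenario, the hand-off prompt could look something like this. The URL and step details are hypothetical:

```
Open http://localhost:5173/signup in agent browser. Fill steps 1-3
with valid dummy data, stop at step 4, and leave the browser open
so I can test the last step myself.
```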
Confirming functionality
When a stakeholder asks if you tested the feature yourself, your answer cannot be "I wrote tests to cover it." They want to know if you verified that the feature worked. The expectation of manually testing a feature you worked on is a given.
When you give the agent access to a browser instance and instruct it to test in headed mode, you see every step of the journey. You get close to the same confidence you'd get from testing it yourself, so you can answer yes to the question.
Now, watching an AI click through your UI isn't exactly the same as navigating it yourself. You might catch UX friction or visual bugs that the agent wouldn't flag. But for verifying that validations fire and that the feature does what it's supposed to, it gets you most of the way there.
Mapping out test cases
When it comes to covering an implementation with tests, a computer is better suited to produce an exhaustive list than the average human mind, and by proxy, AI is better suited for this task than you are. I don't mean writing the tests themselves; that falls under the code generation section. What I'm talking about is the higher-level work of mapping out test cases, including pitfalls hiding in edge cases.

Say you've built a file upload component. You'd think of the obvious cases: successful upload, wrong file type, file too large. But when you ask the AI to map out test cases, it'll come back with things like: what happens when the network drops mid-upload? What about a file with a very long name? What if the user drags and drops multiple files when only one is allowed? You get a more complete picture before you write a single test.
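To make the file upload example concrete, here's a minimal sketch of the kind of validation such a case map would exercise, covering a few of the synchronous cases. Every name and limit here (`validateUpload`, the MIME list, the 5 MB cap) is a hypothetical for illustration, not any real library's API:

```typescript
// Error codes mirroring the mapped test cases above.
type UploadError =
  | "no-file"
  | "too-many-files"
  | "wrong-type"
  | "too-large"
  | "name-too-long";

interface FileLike {
  name: string;
  size: number; // bytes
  type: string; // MIME type
}

// Hypothetical validator: returns every rule the selection breaks.
function validateUpload(
  files: FileLike[],
  opts = {
    maxFiles: 1,
    maxBytes: 5_000_000,
    allowed: ["image/png", "image/jpeg"],
    maxNameLength: 255,
  }
): UploadError[] {
  const errors: UploadError[] = [];
  if (files.length === 0) errors.push("no-file");
  if (files.length > opts.maxFiles) errors.push("too-many-files");
  for (const f of files) {
    if (!opts.allowed.includes(f.type)) errors.push("wrong-type");
    if (f.size > opts.maxBytes) errors.push("too-large");
    if (f.name.length > opts.maxNameLength) errors.push("name-too-long");
  }
  return errors;
}
```

Each error code corresponds to one mapped case, so the case list the AI produces translates almost directly into a test suite.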
A note on tooling
From experience, the quality of the output depends heavily on the medium you're using. I believe this comes down to the tuning each vendor applies to their models to get them to behave how they want. If you're getting inconsistent results from AI, the tool you're using might be the variable worth changing.
We're still in the early stages of the AI revolution so we're all in uncharted territory here. These are only a few use cases I came up with. Don't be afraid to push the boundaries. So long as you maintain complete oversight of the work you're pushing, you should be fine.
I'll be writing follow-up posts that go deeper into specific workflows, an extensive list of the tools I use, and how I have them set up. Stay tuned.
