John Henry and the Steam Hammer.
What does conventional FPGA development look like?
If you are working on an FPGA design and you are an expert, you already know roughly how to build it. You often just slap together some code and hit build, then look at the error messages and fix them. You find what broke and write up some tests to analyze it. You leave the test in place to make sure it can't happen again. You hit a blocker. You throw out the original code and try a different way, etc.
You work in a loop like that until you get something that is roughly working.
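The loop above can be sketched in a few lines. This is a toy model, not any real FPGA flow: "design quality" is just an integer, and every name here (build, run_tests, fix, develop) is a hypothetical placeholder.

```python
# Toy model of the conventional development loop described above.
# "Quality" is a stand-in for how close the design is to working;
# every hammer blow (fix) improves it by one.

def build(design):
    """Return a list of build errors (empty once quality is high enough)."""
    return ["syntax error"] if design["quality"] < 2 else []

def run_tests(design):
    """Return failing tests until the design is roughly working."""
    return ["timing test failed"] if design["quality"] < 4 else []

def fix(design, problems):
    """One hammer blow: address the reported problems."""
    return {"quality": design["quality"] + 1}

def develop(design):
    while True:
        errors = build(design)            # hit build, read the error messages
        if errors:
            design = fix(design, errors)  # fix what broke
            continue
        failures = run_tests(design)      # tests stay in place as regression guards
        if not failures:
            return design                 # roughly working: declare it done
        design = fix(design, failures)    # keep hammering

print(develop({"quality": 0}))
```

The point of the sketch is the structure, not the stubs: the loop only terminates because something (here, the fake quality counter; in real life, the expert) keeps driving the error signal toward zero.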
The development loop converges because there is an expert FPGA developer providing an error signal to hammer the loop into convergence.
Then you polish if you want. Or you don't. These are smaller, more focused polishing control loops: same story, lighter blows with the hammer to force it in the right direction. You have to determine the direction.

Then you declare it done and move on.
But the whole time you really are just hammering at it.
Humans suck at writing elegant, syntactically correct, functional code the first time through. We only get it right with the help of a feedback loop. (And then we rewrite git history to make it look elegant. It's always a lie.)
What changes when you add an LLM.
Guess how you can hammer faster? With an LLM.
But don't think it's anything more than a hammer!
If you get an LLM involved in the development loop you can hammer out things faster. Now you are not limited by your ability to figure out when to throw out the original code and try something else, but by when to stop throwing out the original code and trying something else.
You're also limited by how fast you are able to read the code the LLM produces and judge if it is better than the last iteration that was slopped together, or if it has hit a blocker.
And as you get into the polishing stage, you need to be extra careful that you understand how everything works so you can start to limit the scope of the LLM's changes, because it can easily touch things that are outside of the scope you intended. (Simulated annealing comes to mind.)
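The simulated-annealing analogy can be made concrete. In the sketch below (illustrative only; the `anneal` function and its parameters are my own, not from any library), large disruptive moves are tolerated early, but as the "temperature" cools, only small focused improvements survive. That is roughly the discipline you want from the LLM in the polishing stage.

```python
import math
import random

# Hedged sketch of the simulated-annealing analogy: early on, big
# scope-breaking changes are acceptable; as the temperature drops,
# only small, focused improvements should be accepted.

def anneal(cost, neighbor, state, t0=1.0, cooling=0.95, steps=200, seed=0):
    rng = random.Random(seed)
    temp = t0
    best = state
    for _ in range(steps):
        candidate = neighbor(state, rng)
        delta = cost(candidate) - cost(state)
        # Always accept improvements; accept regressions with a
        # probability that shrinks as the temperature cools.
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            state = candidate
        if cost(state) < cost(best):
            best = state
        temp *= cooling
    return best

# Toy usage: minimize x**2 starting far from the optimum.
result = anneal(lambda x: x * x,
                lambda x, rng: x + rng.uniform(-1, 1),
                state=10.0)
```

Without the cooling schedule, the search keeps making large out-of-scope moves forever and never settles, which is exactly the failure mode of an unconstrained LLM late in a project.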
(You almost never will be correcting syntax errors any more, but to focus on that 'efficiency gain' seems pretty silly.)
Does adding an LLM make development more efficient? It depends mostly on how often you lose the thread. A modern LLM can easily keep more context in its short-term memory than you can! If you lose the thread, the development loop has broken open and will diverge.
If your development loop diverges, the LLM cannot push it back toward convergence on its own, because it cannot reason (it can only simulate text that looks like reasoning). It will only hammer in the right direction if the training data has more examples of the right direction than the wrong one, or if you stop it from hammering the wrong way. If you're in novel development territory, it will have no useful training data. Think about this and draw your own conclusion.
So almost every time it diverges you'll have to notice first, then revert to conventional hammering. And if you let the LLM hammer too long, you might have a lot of mental catching up to do before you can see which direction to swing the hammer to drive the loop where you want it.
Let's Summarize
LLMs are just a more efficient hammer. This can go very well if you can keep the LLM hammering in the right direction.
Hard Data
Some research that actually measured it found that developers perceived about a 20% increase in productivity, while the hard metrics showed about a 20% decrease.
(Regular Reminder: Humans are subject to cognitive bias.)
I suspect the likely upper limit of productivity gains is shown in this paper, because the authors are highly motivated to predict a particular outcome.
Others predict similar productivity gains and note that half of the "gains" are expected to be due to layoffs.