It’s nothing to do with the initial development; it’s all about the long-term viability of the code. You can’t refactor or maintain something if you can’t prove your changes haven’t broken it. And I do believe the code ends up better: as long as each test comes directly from the specification, it shows you have actually understood it.
The comment above about doing your own coverage with the debugger is naive. Will you step through everything by hand every time you make a change, or only the tiny bit you just touched? Do that and you end up with something really brittle. Not immediately, but soon (really soon) you will start to feel fear every time you change something. Then you’re in trouble.
This research is measuring the wrong thing. I don’t know how you’d measure the longevity of the code, but the initial build is only about 10% of the effort in any large system. This isn’t taught at school, and it should be. Writing maintainable code with full tests is not a luxury; far too many people think it is.