

Honestly, I think this was a horrid read. It felt unfocused, shallow, and at times contradictory.
For example, at the top it talks about how software implementation has the highest AI adoption rate while code review/acceptance has the lowest, yet it never really explains why that is, apart from some shallow arguments (which I will come back to later), or how to integrate AI more in those stages.
And it never reaches any depth: every topic is only grazed briefly before it moves on to the next, to the point where the pitfalls of overusing AI (tech debt, security issues, etc.) are mentioned twice, the second time with no apparent acknowledgement of the first, and without ever explaining how these issues arise or showing any examples.
And what I think is the funniest contradiction: from the start, including the title, the article pushes for speed, yet near the end it discourages that very thinking, saying that pushing dev teams for faster development leads to corner-cutting, and that for better AI adoption one shouldn't focus on development speed. Make up your damn mind before writing the article!
But do you also sometimes leave the AI out of steps it often does for you, like conceptualisation or implementation? Would you still be able to do those steps as efficiently as before you used AI? Would you be able to spot the mistakes the AI makes in them, even months or years down the line?
The main issue I have with using AI for these tasks is that it deprives you of applying logic to real-life scenarios, the thing we excel at. It would be better to use AI in the opposite direction from how you currently use it: to develop methods for viewing your own work critically. After all, if there is one thing a lot of people are bad at, it's thorough critical thinking. We just suck at thinking of all the edge cases and how to test for them.
Let the AI come up with the unit tests; let it be the one that questions your work, so you get a better perspective on it.
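
To make that concrete, here's the kind of thing I mean. `parse_duration` is a made-up helper, not something from the article, but hand a small function like it to an AI and it will happily enumerate the boring edge cases you'd never bother writing yourself:

```python
# Hypothetical sketch: parse_duration is a made-up helper, and these are
# the kinds of edge-case tests an AI assistant could generate for it.
import pytest

def parse_duration(text: str) -> int:
    """Parse strings like '1h30m' or '45s' into total seconds."""
    units = {"h": 3600, "m": 60, "s": 1}
    if not text.strip():
        raise ValueError("empty duration")
    total, number = 0, ""
    for ch in text.strip().lower():
        if ch.isdigit():
            number += ch
        elif ch in units and number:
            total += int(number) * units[ch]
            number = ""
        else:
            raise ValueError(f"invalid duration: {text!r}")
    if number:  # trailing digits without a unit, e.g. '10'
        raise ValueError(f"missing unit in: {text!r}")
    return total

# The cases a human author forgets first: zeros, overflowing units,
# whitespace, casing, and malformed input.
@pytest.mark.parametrize("text,expected", [
    ("1h30m", 5400),  # mixed units
    ("90m", 5400),    # value larger than the next unit up
    ("0s", 0),        # zero
    ("  2H ", 7200),  # surrounding whitespace and uppercase
])
def test_valid_durations(text, expected):
    assert parse_duration(text) == expected

@pytest.mark.parametrize("text", ["", "h", "10", "1x", "-5s"])
def test_invalid_durations_raise(text):
    with pytest.raises(ValueError):
        parse_duration(text)
```

You still review the tests yourself, but this way the AI is the one challenging your code instead of writing it for you.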