I’m currently using Copilot, and 80% of the benefit I get from it comes from removing boilerplate code and refactoring. The other 20% comes from using it as smart autocomplete so I can quickly add a lot of properties or arguments. I do a lot of “greenfield” work with it, which is where it really shines, since much of that work is just setting up all the important parts before you have to do anything complicated. It has helped me a lot in getting from an idea to a working program, both by getting rid of a lot of boring typing and by keeping me going when I was tired or uninspired.
I can see why you might not like it if you use it for critical things that need to be carefully planned and tested before they can run. I don’t find it very good for detailed work, but that’s fine with me, because that’s exactly when I want to take my time and think about what I’m doing.
As a side note, using ChatGPT with GPT-4, or even just GPT-3.5 Turbo, is a great way to get projects moving again when you need to use packages, APIs, or languages you don’t know. You can just tell it what you want to do, and it will give you good examples and explanations. It won’t always be right, but it’s close enough to get you out of a jam, and a lot faster than digging through the documentation or Stack Overflow for a good answer. It also helps to be very specific about your problem; for example, you can say which version of a package you’re using or what constraints you’re working within. Those little prompt tricks remind me a lot of the Google-fu we had to learn to search well. I’m glad that Copilot is moving to GPT-4 with chat built in, which should make the whole back-and-forth smoother.
I use it for developing with TypeScript and Node.js. Copilot is most useful to me when I would have had to look something up on Google anyway, like how to format a date string or how to do X in Selenium. Eight times out of ten the answer is correct, and the other times it is at least interesting (it gives me an idea of what to look for).
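The date-string case is a good illustration of the kind of snippet I mean: you write a one-line comment, Copilot fills in the rest. A minimal sketch of what that typically looks like in TypeScript (the function name and the exact output format here are my own illustration, not a verbatim Copilot completion):

```typescript
// Format a Date as "YYYY-MM-DD" — the sort of helper Copilot
// will usually complete correctly from just the comment above it.
function formatDate(date: Date): string {
  const year = date.getFullYear();
  // getMonth() is zero-based, so add 1; pad to two digits.
  const month = String(date.getMonth() + 1).padStart(2, "0");
  const day = String(date.getDate()).padStart(2, "0");
  return `${year}-${month}-${day}`;
}

console.log(formatDate(new Date(2024, 0, 5))); // "2024-01-05"
```

It’s a trivial function, but it’s exactly the kind of thing (zero-based months, padding) I would otherwise have tabbed over to Google or MDN to double-check.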
This quick feedback is much faster than searching on Google and keeps me in the IDE. It also just makes it more fun to code when I have a “pair programming” partner I can talk to through code and comments and get ideas from, even if not all of them are perfect.
I think they’ve already done a lot of real-world work in this direction, which is part of why GPT-4 is better, but it’s still a problem. Maybe they’ll be able to get rid of hallucinations for good one day, but I can also see it being hard to do without making the models less creative. Making things up may be a big part of how LLMs work, and if you try to stop it, the magic could be gone. I don’t study AI, so I really don’t know; I’m just guessing.
I’m not trying to downplay the power or importance of LLMs, by the way, in case that’s why I’m getting downvoted… I use Copilot and GPT-4 every day, and they make me much more productive. But right now I see them as tools for producing rough drafts that need to be edited and checked. If hallucinations can’t be solved, LLMs will stay in this lane, which is still incredible, amazing, and useful, but it might not get us to the AI endgame that everyone is talking about.