Recently, I’ve been having extensive, in-depth conversations with GPT-4 about design problems I’ve encountered and how to solve them. In these discussions, GPT-4 has surprised me with the quality and brilliance of its ideas and suggestions due to its ability to delve deeply into the details and grok the problem space. I’ve put some of these ideas to the test, including some that were completely out of my wheelhouse, and found that they effectively addressed the problems I was having. To me, it’s like collaborating with a genius who, due to technological or memory constraints, can only communicate via text, can’t look up information online, and can’t remember more than a few pages of text at a time.
While I agree that ChatGPT can be useful for generating code, I think its true value lies in facilitating in-depth discussions about design issues.
Because of how well it does this, I recently put GPT-4 through the full design-skill interview question I use when interviewing engineers for positions at Microsoft. What GPT-4 accomplished astounded me; GPT-3 would have struggled to even begin. Its performance struck me as worthy of a principal engineer. The exercise was perfect for a raw assessment of design skills because GPT-4 didn't need to do the things regular candidates do. Had it anticipated the end result from the start, I would have been even more impressed, but it was marked down for making hasty assumptions, just as people are.
The point I want to make is not that I'm impressed a computer can do this sort of thing at all, but that it has done better than almost every human with whom I've discussed this problem. Beyond testing coding skills, I use this question to see whether I can understand the candidate's ideas, and whether the candidate can understand and apply mine when I ask them to take a different approach to a problem than the one they had in mind. Many of the best programmers can write code like the wind when it's their own idea but have trouble collaborating with others. Meetings and other forms of group work in the real world often require the ability to shift gears in one's thinking and to consider alternative solutions after one's own has been proposed. This potential team member has my highest recommendation.
My point is that you shouldn't waste its potential by using it only for repetitive tasks. For strategy discussions, I suggest GPT-4, which has a higher token limit. Describe your hardest problems in a level of detail that would bore a human and run through potential solutions.
You can also use it to talk through issues with other people. It's an extremely astute advisor, full of helpful counsel, and it works wonderfully for crafting precise, polished wording.
Don’t pass up the opportunity to work with such a patient and imaginative partner.
I've been a ChatGPT subscriber for about a week now; after the free version helped me through a few tricky work problems, I decided to pay for it.
Since then, I've posed questions to it on trivia, history, religion, geography, politics, and more, some in Portuguese and a smattering in Spanish. The quality in all three languages is superb.
However, most of my inquiries have pertained to everyday occurrences in the workplace. At work, we use a wide variety of systems and equipment, so I have to develop programs to accommodate a wide range of use cases. In terms of (a) broad understanding of tools and their functions, (b) surveys of tool categories and comparisons of competing offerings, (c) details on how to use, configure, program against, query data from, and otherwise alter various tools, and (d) questions about best practices and pitfalls, ChatGPT gets right to the point. This is discussed primarily in relation to various DevOps-related systems, such as macOS, Linux, AWS, Kubernetes, observability tools, and APIs. Python is my primary language for coding, and I frequently make spur-of-the-moment assessments of problems. (We have a fantastic DevOps team that manages the infrastructure with standard DevOps tools; my role is to develop what these tools do not address so well and to aid in the development of future data-engineering initiatives.)
I've cut back my use of Google Search for finding useful content by about 70%. This morning, for instance, I needed to extract some information from the output of the command "docker… --format json" using the query language "jq". Since I have no interest in mastering "jq", I simply described my problem and was given a solid starting point.
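The original doesn't show the actual command or filter, but the same kind of extraction can be sketched in Python's standard library instead of jq. This is a hypothetical illustration: it assumes the docker command emits one JSON object per line (as the '{{json .}}' format template does), and the sample data and field names below are made up for the example, not taken from my real output.

```python
import json

# Hypothetical sample of line-delimited JSON, as produced by something like
# `docker ps --format '{{json .}}'` (the real field set varies by Docker version).
sample_output = """\
{"ID": "a1b2c3", "Names": "web", "Status": "Up 2 hours"}
{"ID": "d4e5f6", "Names": "db", "Status": "Up 5 minutes"}
"""

def parse_docker_json(text):
    """Parse line-delimited JSON output into a list of dicts."""
    return [json.loads(line) for line in text.splitlines() if line.strip()]

containers = parse_docker_json(sample_output)
names = [c["Names"] for c in containers]
print(names)  # ['web', 'db']
```

The rough jq equivalent of that last extraction would be piping the docker output through `jq -r '.Names'`, which prints the Names field of each object in the stream.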
We now routinely consult ChatGPT during our Zoom/screen-share sessions whenever a question comes up in scrum or any other meeting. I like to think I'm better than most at framing questions in a way that leads to answers quickly.
Also, I’ve noticed that ChatGPT sometimes invents things… but they’re usually close enough.
I can take solace in the fact that I will be gainfully employed for the foreseeable future, because ChatGPT is currently incapable of steering the overall organization of code for the numerous circumstances I must address. It does, however, help fill in some of the blanks, and I don't have to spend as much time as before looking for and reading examples and documentation. Whenever I need them, ChatGPT's low-level examples and high-level descriptions of tradeoffs and best practices are exactly what I'm looking for.
For a long time now, I've been using JetBrains tools, and I can honestly say that I'm "committed" to them. My curiosity about Copilot led me to install the Copilot plugin for PyCharm today (it would also work for IDEA, DataGrip, etc.). Unfortunately, I was unable to get the plugin to authenticate with GitHub, and I discovered that others have had the same issue in the past. Perhaps after a week or two, I'll have something to use as a benchmark. (I have no desire to use Visual Studio Code.)