Coding with AI


AI is not leaving us anytime soon. Learning how to use it properly is a must-have skill for software developers.

I do not consider myself an expert, but I have experimented with these tools long enough to share some insights into how to make the best of them, and when it’s best to avoid them (for now).

When it works

On two occasions, I managed to get GitHub Copilot to write all the code and unit tests for me, and all I had to do was type some instructions. It was awesome.

The first time was when I needed to implement the Builder design pattern, and the second when I needed to extract some code into its own class.

For the Builder design pattern, I already had the class to be built, with its attribute names and types, and the name of a very well-known algorithm to follow. In the case of the code extraction, the code was already written, and copying and pasting it into a new class was no big deal.
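To give an idea of how mechanical this task was, here is a minimal sketch of the Builder pattern in Python. The `Report` class and its attributes are hypothetical stand-ins for the class I already had; the point is that once the target class exists, the builder is almost entirely determined by it.

```python
from dataclasses import dataclass

# Hypothetical class to be built -- stands in for the one I already had,
# with its attribute names and types fixed in advance.
@dataclass
class Report:
    title: str
    author: str
    pages: int

class ReportBuilder:
    """Fluent builder for Report; each setter returns self so calls can be chained."""

    def __init__(self):
        self._title = ""
        self._author = ""
        self._pages = 0

    def title(self, value: str) -> "ReportBuilder":
        self._title = value
        return self

    def author(self, value: str) -> "ReportBuilder":
        self._author = value
        return self

    def pages(self, value: int) -> "ReportBuilder":
        self._pages = value
        return self

    def build(self) -> Report:
        return Report(self._title, self._author, self._pages)

report = ReportBuilder().title("Q3 Summary").author("JM").pages(12).build()
```

With the attribute list already written down in code, every setter follows the same template, which is exactly the kind of repetitive, well-specified work the AI handled flawlessly.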

So, I believe that the AI was able to write all the code for me because I had very clear and unambiguous instructions written in code, not in English.

When it mostly works

Once I was working on a project where I had to invoke an API that I had never used, in a completely alien domain. I chose ChatGPT as my companion, and overall it was very useful, but not perfect.

The AI would usually know which endpoint to use, the arguments to pass, and the expected values. However, it would often give me answers that were valid for older versions of the API but not compatible with the version I was using.

My educated guess is that most blogs and tutorials on the internet referred to older versions of the API, so ChatGPT did not have enough content to generate a valid response.

I don’t think that this is a great issue as long as you’re aware of this limitation and take it into account when using an AI.

When it wants to please

I’ve also noticed that AIs will do everything they can to please you, even if that means lying or saying something incorrect.

For example, I once mistakenly told the AI that a command it was suggesting was wrong. Its answer was: you’re right, the command was wrong for X reason, as you said. The correct command is <same command as before>.

Another time, I was unknowingly trying to solve a problem that could not be solved. The AI kept proposing new alternatives and variations of the fix, until I found a blog post that explained that it was not possible, and why.

My takeaway from this is that AIs will rarely say that they can’t do something. They are so eager to please their human masters that they won’t say “you’re wrong” or “you can’t do that”.

When it remembers

I’ve noticed that ChatGPT has memory, as long as you stay in the same conversation.

I have been experimenting with solving different types of problems, from coding to writing, in both existing and new conversations. When the conversation is new, the answers are not always very accurate.

Instead, if you have a thread where it correctly solved a problem, and you need to solve a similar one, my advice is to do it on that thread. It will reuse the previous conversation, and the chances of getting an accurate result are higher.

When you can learn

On the same project where I had to invoke that new API, everything was new to me, and whenever I wanted to do something it was easier and faster to ask the AI how to do it.

Thanks to its help, I was able to learn about commands, arguments and options as I moved forward with my project. And when it couldn’t help me, I already knew enough to go to the documentation and find the answer by myself, which is another way of learning.

Using the AI as a helping tool is really useful, as long as you can validate what you’re learning, and are able to detect when you’re getting into an infinite loop.

When it doesn’t work

Another time, I needed to write some code to parse an XML file. I decided to build the final solution one requirement at a time, so the AI could build on its previous answers. That was a disaster.

The first three or four requirements were fine, until we came up with an interesting requirement that changed how an XML tag was processed depending on previously processed tags.

The code we had so far was unable to maintain any kind of state, so it needed to be refactored. The AI did its best, but it was unable to meet the new requirement while preserving the previous functionality.
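To illustrate the kind of state that was missing, here is a minimal, hypothetical sketch: the XML document, tag names and discount rule are all invented for this example, but they show how processing one tag can depend on tags seen earlier in the same parent element.

```python
import xml.etree.ElementTree as ET

# Hypothetical document: how an <item> is priced depends on whether a
# <discount> tag appeared earlier in the same <order>.
doc = """
<orders>
  <order>
    <discount rate="0.5"/>
    <item price="10"/>
  </order>
  <order>
    <item price="10"/>
  </order>
</orders>
"""

def order_totals(xml_text: str) -> list[float]:
    totals = []
    for order in ET.fromstring(xml_text).iter("order"):
        multiplier = 1.0  # state carried across the tags of one <order>
        subtotal = 0.0
        for child in order:
            if child.tag == "discount":
                multiplier = 1.0 - float(child.get("rate"))
            elif child.tag == "item":
                subtotal += float(child.get("price")) * multiplier
        totals.append(subtotal)
    return totals
```

Once a requirement like this appears, a stateless tag-by-tag parser has to be restructured around that carried state, which is the refactoring the AI could not pull off without breaking what already worked.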

After some time I realised it was a lost cause, and that it was best to write the code myself. I was able to code all the requirements from scratch using TDD in a few hours, which I am sure is less time than I would have spent had I kept trying with the AI.

From this and other similar experiences, my opinion is that any complex piece of code should be coded by humans from the beginning.


In this article I have shared my experience using AI technologies to be more productive writing code and technical documentation.

The most important thing to remember is that, the moment you send something for review, it becomes yours. You are ultimately responsible for the work that you submit.

Don’t ever submit anything generated by an AI without having your human brain review it first.

Cheers!
JM


Share if you find this content useful, and Follow me on LinkedIn to be notified of new articles.

