Well, we have all been there at one time or another: LLMs seem to have all the answers. Have you wondered how that is possible? I have, and I have tried to understand LLMs better; after all, they are our (software teams') frenemy (friend or enemy, depending on the situation).

For starters, LLMs are mere token generators, trained by companies like Meta, Anthropic, and OpenAI on all the internet data available so far, and they continue to be trained. The LLMs have ingested all of this and created associations between different tokens and chunks of tokens. Those who have worked with NLP (Natural Language Processing) will know about words, embeddings, and vectors; an LLM is an extension of this, but at a massive scale.
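To make the words-embeddings-vectors idea concrete, here is a minimal sketch. The three-dimensional vectors below are invented purely for illustration; real models learn vectors with thousands of dimensions from training data.

```python
import math

# Toy 3-dimensional "embeddings" -- hand-made for illustration only.
# A real LLM learns high-dimensional vectors during training.
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """Higher value = the model treats the two tokens as more related."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# In this toy space, "king" sits much closer to "queen" than to "apple".
print(cosine_similarity(embeddings["king"], embeddings["queen"]))
print(cosine_similarity(embeddings["king"], embeddings["apple"]))
```

The "associations between tokens" the post mentions are, at bottom, geometric closeness of this kind, learned at enormous scale.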

If LLMs were provided to us as-is, we would not be so impressed: they hallucinate at times, give wrong information, and so on. What companies have done is augment these LLMs with additional tools. For example, when you use GitHub Copilot and give it a piece of a log to analyze, it analyzes it by running different commands (with the user's permission). This combination of LLM + additional tools makes them agents, which are more useful for solving the problem at hand.

IMHO, LLMs or agents are not intelligent in themselves. An example from work: a Yocto build was failing, and GitHub Copilot identified the reason for the failure, but only on a human nudge did it write a script to download the failing packages. Copilot could not think of this on its own; it was the human in the loop who had to supply the idea. Once the idea was given, though, within the next 10 minutes it generated a shell script to download the failing packages. So a human in the loop is very much essential: someone who has her/his own ideas to try. The laborious work of writing code to try those ideas is what the agents solve.
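To give a flavor of the kind of helper involved, here is a minimal sketch of such a script in Python. The log-line format and the example URLs are assumptions for illustration, not the actual log or script from the incident.

```python
import re

# Assumed log format for a fetch failure; real Yocto/BitBake logs
# vary per recipe, so treat this pattern as illustrative only.
FAILURE_RE = re.compile(r"Fetcher failure for URL: '([^']+)'")

def failing_urls(log_text: str) -> list[str]:
    """Collect the source URLs that the build could not download."""
    return FAILURE_RE.findall(log_text)

# Made-up sample log for the sketch.
sample_log = """\
ERROR: Fetcher failure for URL: 'https://example.org/pkg-1.2.tar.gz'
NOTE: retrying...
ERROR: Fetcher failure for URL: 'https://example.org/lib-0.9.tar.xz'
"""

for url in failing_urls(sample_log):
    # A real script would now download each URL, e.g. via wget or curl.
    print(url)
```

Writing something like this is ten minutes of mechanical work for an agent, but deciding that this is the right thing to try was the human's contribution.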

I have also witnessed agents taking over a developer's mind and time. Used without stepping back every now and then, they send the developer down a rabbit hole, and the end result is a massive token-usage bill.

So agents must be used like agents: the human must know whether the output looks right or not. If the human has no clue what the expected end result should look like, then agent and human together are just burning tokens and time.

Most important: excessive reliance on agents leads to cognitive decline and a loss of self-confidence in solving the problem at hand.

Analogously, if a carpenter does not know his trade well enough, which tool to use and when, then giving him an agent will never make him a better carpenter.

So we all need to continue learning, and then rely on agents for the heavy lifting. The agent must be in our control, not the other way around.

Happy LLM'ing!