It seems the press has become obsessed (yet again) with the existential threat AI poses to our existence. At least that’s how it feels from my personal newsfeed, handed to me by AI recommendation engines based on my many searches for AI topics. Having recently spent a lot of time looking at AI, I feel I need to weigh in with a slightly more balanced viewpoint.
Artificial General Intelligence (AGI) may someday threaten us like it does in The Terminator or The Matrix, but we are a long way from AGI. It is also debatable whether AGI would see us as a threat at all. Far more likely, to my mind, is that many AGIs are developed in competition, on vastly different budgets. Some will see each other as a threat; many will simply be rubbish and fall apart at the first hurdle. Again, it brings me back to thoughts of Douglas Adams and the Sirius Cybernetics Corporation. Talking doors with genuine people personalities.
OK, so it doesn’t need to be an existential threat, right? The rapid increase in development and availability of Large Language Models (LLMs) and generative AI could create a productivity boost so enormous that many thousands of people find themselves out of a job. Let’s unpick that a little.
AI is a bunch of differential calculus performed at scale on matrices of data, with weighted inputs and loss functions. Doesn’t sound quite so threatening, or so interesting, now, does it? How does that equate to the generative models we have today, I hear you ask? Think of it this way…
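To make that "calculus on weighted inputs" claim concrete, here is a deliberately tiny sketch (not any real framework, just illustrative Python): it nudges two weights down the gradient of a squared-error loss until a weighted sum of inputs matches the training targets. Real models do exactly this, just with billions of weights and GPUs full of matrices.

```python
# Minimal gradient descent: fit weights w so that w . x approximates y.
# loss = (prediction - y)^2, so d(loss)/dw_i = 2 * (prediction - y) * x_i

def predict(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def train(data, steps=500, lr=0.05):
    w = [0.0, 0.0]                       # the "weighted inputs", initially zero
    for _ in range(steps):
        grad = [0.0, 0.0]
        for x, y in data:
            err = predict(w, x) - y      # how wrong the current weights are
            for i, xi in enumerate(x):
                grad[i] += 2 * err * xi  # derivative of the squared error
        # step downhill, averaging the gradient over the data
        w = [wi - lr * g / len(data) for wi, g in zip(w, grad)]
    return w

# Toy data generated by the rule y = 2*x0 + 3*x1;
# training should recover weights close to [2, 3].
data = [([1.0, 0.0], 2.0), ([0.0, 1.0], 3.0), ([1.0, 1.0], 5.0)]
weights = train(data)
```

That loop is the whole mystique: measure the error, differentiate, adjust, repeat.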
When you prompt a generative visual model to draw you a picture of a face in the style of Rembrandt, it has a catalogue of Rembrandts. It has pulled them apart again and again and built a model of all the similarities between them. It then uses trial and error, very, very quickly, to create something that fits that model.
AI-generated text works the same way. The model has been trained on massive amounts of text to build mathematical models of how words or syllables relate to each other, and has done the same with semantics, proper nouns, and the like to capture meaning. It’s amazing, but it’s not AGI. When it generates text, it draws on those learned patterns, using trial and error to produce something like the text it has seen before, but within the constraints you give it.
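The "how words relate to each other" idea can be shown at toy scale. This sketch counts which word follows which in a scrap of training text, then generates by repeatedly picking a likely next word. A real LLM learns vastly richer patterns with a neural network, but the framing of "predict the next token from what came before" is the same.

```python
from collections import Counter, defaultdict

# A scrap of "training data" (real models see trillions of tokens).
corpus = "the cat sat on the mat . the cat saw the mat .".split()

# Count, for each word, which words follow it and how often.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=4):
    """Greedily pick the most common continuation at each step."""
    out = [start]
    for _ in range(length):
        nxt = follows[out[-1]].most_common(1)[0][0]
        out.append(nxt)
    return " ".join(out)

text = generate("the")
```

The result is fluent-looking text stitched from learned statistics, with no understanding anywhere in the loop, which is the point being made above.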
I think of these models as very efficient curators with very, very efficient indexes, bringing back answers from the body of information they hold almost instantaneously. They are not original in the sense of the ‘spark of creativity’, but they are very good at pulling disparate information together from wide sources, if that information exists. If it doesn’t, the model may ‘hallucinate’. Whether that approximates original thought can be debated another day; it’s not usually useful in a work context.
I don’t think this is an AI question. That heading was more clickbait, sorry. If a person takes a request from someone, delegates it to someone else, checks the result, adds their name to it, and passes it back to the requestor, then my guess is they could use generative models in much the same way. This is a question of value.
If we use AI to do lots of low-value tasks with increased frequency, we invite automation to replace our function in the value chain. This is normal. As a species we have always sought to increase efficiencies in our work. Farming replaced hunting, mechanisation replaced human labour, computers replaced repetitive administration tasks. You may see this as good, bad, or both, but in context, generative AI is another step on a well-trodden road.
If I asked you, “Do you give 100% to every task?” would your honest answer be yes? When you are asked to produce a report do you extensively check every source, cite them all in references, and create a nuanced and balanced view of all the options available with their risks and assumptions made along the way? Almost certainly not every time.
We live in a time-poor era. Working days are long, pressures on our time are high, experienced help is hard to find.
Let’s ask a different question.
Remember that large language models are trained on massive sets of data. Remember also that they are trained on different sets of data. You can use that to your advantage. When you ask an LLM for an answer, be the value-add to the information you get back. You need to turn the information into knowledge. Your experience is a lens to apply to that information; you provide the context that gives it value.
Here are some tips for using LLMs to increase your productivity and the quality of the work you derive from them:
Instead of thinking about LLMs as a threat to your job, see them as a threat to your backlog.
If you want to discuss the safe implementation of AI, including generative models, into your workplace, get in touch with Methods. We can help with organisational change, technology design, implementation, and service management.