“The first thing to explain is that what ChatGPT is always fundamentally trying to do is to produce a ‘reasonable continuation’ of whatever text it’s got so far, where by ‘reasonable’ we mean ‘what one might expect someone to write after seeing what people have written on billions of webpages, etc.’”
– Stephen Wolfram
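To make the idea of a “reasonable continuation” concrete, here is a deliberately tiny sketch (not Wolfram’s code, and nothing like a real LLM’s scale): a bigram model that extends a prompt by repeatedly sampling a plausible next word, weighted by how often it followed the previous word in a toy corpus. The corpus and function names are illustrative assumptions.

```python
import random
from collections import Counter, defaultdict

# Toy corpus standing in for "billions of webpages".
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def continue_text(start, n_tokens, rng=None):
    """Extend `start` by sampling each next word in proportion to
    how often it followed the previous word in the corpus."""
    rng = rng or random.Random(0)
    out = [start]
    for _ in range(n_tokens):
        counts = following.get(out[-1])
        if not counts:  # no known continuation; stop early
            break
        words = list(counts)
        weights = [counts[w] for w in words]
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

print(continue_text("the", 5))
```

A real LLM differs enormously in scale and mechanism (a neural network conditioning on long contexts rather than a lookup table of word pairs), but the loop is the same shape: look at the text so far, pick a likely next token, repeat.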
This piece takes a stepping-stone approach to a complicated set of processes: author Stephen Wolfram details the computational thinking underlying large language models (LLMs) such as ChatGPT.
NOTE: The explanation gets much denser as you scroll deeper. However, the opening paragraphs, at a minimum, are an excellent primer on what’s going on inside LLMs.