
Prompt Engineering

When I started doing prompt engineering, I didn't know it was called that, so I just thought of it as AI psychology. The idea is that foundation models (large-scale machine learning models that can be fine-tuned for specific tasks) like GPT-x, Bard and Stable Diffusion (a text-to-image generator) are something of a black box: it is hard to represent the computation a model performs to turn an input into an output in a way that people can understand. So we need to reverse engineer the best ways to get good results from these models by understanding their psychology.

Some of the early tricks that proved effective included giving examples, asking for step-by-step outputs and getting the model to evaluate and improve its own output. All of these tricks are agnostic about how the model generates the output; we just know that giving a prompt of such-and-such a type will lead to better outputs in this or that context. Even prompts that seem to encourage models to reason in a certain way only change the output the model produces, and may not have any impact on the actual process the model goes through to produce it.
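To make those tricks concrete, here is a minimal sketch of what they can look like in practice. The `call_model` function is a hypothetical placeholder for whichever model API you happen to use, and the prompt text is purely illustrative; the point is only to show the three tricks (examples, step-by-step output, self-evaluation) side by side.

```python
# A minimal sketch of the prompting tricks described above, not tied to any
# particular model or library. `call_model` is a hypothetical placeholder for
# whichever model API you use; here it just returns a canned reply so the
# sketch runs on its own.

def call_model(prompt: str) -> str:
    # Placeholder: swap in a real model call here.
    return "Reasoning: ... Answer: negative"

# Trick 1: give examples (few-shot prompting).
# Trick 2: ask for a step-by-step answer.
prompt = """Classify the sentiment of each review as positive or negative.

Review: "The battery died after two days."
Reasoning: The reviewer describes a product failure, so the sentiment is negative.
Answer: negative

Review: "Setup took five minutes and it has worked flawlessly since."
Reasoning: The reviewer reports an easy setup and reliable use, so the sentiment is positive.
Answer: positive

Review: "The manual is confusing and support never replied."
Reasoning: Let's think step by step.
"""

first_pass = call_model(prompt)

# Trick 3: ask the model to evaluate and improve its own output.
critique_prompt = (
    "Here is a draft answer to a sentiment question:\n"
    f"{first_pass}\n\n"
    "Check the reasoning for mistakes, then give an improved final answer."
)
improved_answer = call_model(critique_prompt)
print(improved_answer)
```

Notice that nothing in the sketch depends on knowing how the model works internally; it only shapes the text we send in and read back out.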

All this leaves us in a position where we have a tool that we don't fully understand and now need to work out how to use it best. That process is called Prompt Engineering.

Iain O'Neill 2023

Solaris and AGI

While reading Solaris I couldn't help but observe that there were many parallels to be drawn between the characterisation of a planet-wide blob of conscious cells and the prophesied Artificial General Intelligence which OpenAI and other AI companies are working towards.

AGI, of course, is often contrasted with narrow AI, a kind of specialised system that can autonomously adapt to produce cognitive outputs that functionally match or outperform human abilities. For example, we are familiar with product or film recommendations that are generated by a narrow AI that is familiar with our past purchases. Other, less obvious, but still powerful influences of narrow AI include what kind of messages our political parties choose and where companies invest their money.

AGI would be able to perform functions like these, but also as many other functions as the human mind can, and possibly many more. In this sense, the definition of AGI, like that of AI, is a little circular. Artificial intelligence is defined as a system that can perform as well as human intelligence can, with little thought given to how to define intelligence without anthropomorphising it. So too, AGI is defined as an intelligence that has the generalisability of human intelligence as well as the performance. An AGI could pick up the rudiments of any game it was exposed to, as well as learn to crack a joke or design a product. In fact an AGI could adopt any role ad hoc in any human mental pursuit and make a decent fist of it.

Solaris is depicted in the story as at least occasionally attempting to make sense of the humans and the human activities it encounters through mimicry, metaphor and examination. It is like an AGI that has no common ground with humans, but is nonetheless aware that there is something intentional there to interact with. 

Many of the challenges that come from bridging the gap between the mind of Solaris and the extelligence of human culture stem from a lack of common needs, wants and desires. These all come, at least in part, from having a body that requires physical, emotional, social and mental activity to develop and maintain. Solaris is indicated in the book to be a natural, albeit god-like, mind that is evolving due to its unique astrobiological niche, but any needs, wants and desires it has are so far removed from those that humans have that they make Contact seem impossible.

AGI, in contrast, may emerge spontaneously from a complex network of AI agents, much as our consciousness emerges from an internal conference of mental functions, mediated by executive functions and some nifty autobiographical retconning, but it may not have the biopsychosocial context required to really make contact with us.