Program for non-LLM Artificial Intelligence

I’ve been thinking for a while about the nature of artificial intelligence and the exponential improvements we’ve seen in the domain over the last couple of years. My skepticism has to some degree felt unfounded: performance on various benchmarks has improved, and the newer models are certainly significant improvements on the older ones, and yet my skepticism has remained. In this short essay I will attempt to justify it: to explain why I am impressed with the progress in the field yet skeptical of AI’s ability to generate novel solutions of the kind we might call paradigm shifting, and skeptical of claims of intelligence explosions in the domains of the natural sciences.

The first question we must address, then, is the nature of intelligence and the extent to which intellect defines the various capacities of our mind. Since the Enlightenment, it has been held that intellect is akin to the mind. This claim, as Bergson identifies in his Creative Evolution, is found most strongly in Kant, who must then admit to one of three scenarios: either the mind conforms to things in themselves, or things in themselves conform to the mind, or there is a pre-established harmony between things in themselves and the mind. Kant is driven to this, Bergson claims, for two reasons: he holds that the mind does not ‘overflow’ the intellect, and he denies time an absolute nature, making it a priori much like space. I will focus particularly on the first claim.

Intelligence has doubtless been the defining characteristic of man. The contention, however, is whether intelligence is what solely defines man, and Bergson offers us some compelling critiques. At the outset, intelligence can be defined as a capacity directed towards inert and unorganised matter, turning it to our use. Different animals are able to do this to varying degrees, and humans have shown themselves most capable in this domain, manipulating the material world increasingly well to achieve our ends. What this points us towards, then, is that intelligence is the ability to understand form, while instinct is the ability to understand matter. On this framing, it is immediately apparent why intelligence succeeds in the manipulation of the material but fails to understand the natural, for unorganised material may be manipulated to fit any form (form here defined as a schema) conceived by the intelligence, but the same is not true for living organisms or systems. Intelligence alone cannot provide a reason for its ends, however, and here we might find the reasons for the birth of the Enlightenment project of morality (and its failure, as Nietzsche demonstrates in the Genealogy of Morality) as well as the existentialism that followed, though a deeper discussion of this would be a digression. What then defines our ends? The answer might be found in instinct.

Now, to be sure, we possess faculties of both intelligence and instinct. If intelligence is the ability to deconstruct and use unorganised instruments, instinct is the faculty of using and constructing organised elements. The latter point is most important, for it suggests that evolution primarily occurs along the lines of instinct, not intelligence, though defending this claim would be beyond the scope of the current essay. Take for instance our eyes and the faculty of sight. One is never ‘taught’ to see; one simply sees. Or further, consider the chick that breaks through the egg, or the animal that seeks food, whether from plants or through hunting. Both these drives are instinctive, not intelligent. Of course, it might be tempting to consider these the machinations of an intelligent being, but this is once more due to our own intellect’s predisposition towards understanding the world around us through form, through mechanism, to which unorganised inert matter is so amenable. Thus, what arises emergently and without a clearly defined end appears to us as either mechanical or working towards a final purpose, neither of which provides us with satisfactory answers, and both of which, more essentially, deny us the experience of understanding living matter on its own terms. What I am proposing, then, is that while intelligence is not necessarily bound up within a particular telos, instinct most certainly is. Furthermore, in nature, intelligence and instinct appear together, as complementary features attending to one and the same problem, each providing to the other faculty what it lacks. In this sense instinct is not composed of mere tendencies but is an internal source of purposiveness, a principle around which the organism is constituted and by which its actions are guided.

Thus instinct can be thought of as self-originative: it creates and sustains itself. There is no externally imposed telos or criterion that instinct is attempting to satisfy or optimise for, and this is precisely why the intellect has failed in grasping human affairs, or more broadly, affairs concerning the living. For if the mechanist view of the intellect were true, it would in principle be possible to construct a living thing purely out of its parts and then endow it with life. On the mechanist program as usually conceived, this remains unachieved; in my view it is in principle misconceived. Each transformation that a living being goes through, from the embryonic stage to the mature stage to, finally, decay, is internally guided and purposive. The potential for each stage is contained within the preceding stage, yet this is not determinate, as though the final aged form were the form it had been moving towards all along. The whole cannot be decomposed into its parts, but only understood as a harmony of the various categories our intellect has defined as ‘parts’.

It is this internally constituted nature of our instinct which supplies the framework for what MacIntyre in After Virtue calls ‘internal goods’. Internal goods, for MacIntyre, are those goods supplied by a practice which transform the subject internally: for instance, the subtlety of a poet or the discernment of a physician. Practices are socially organised, historically extended activities, and internal goods are discovered by virtue of participation within these practices. These goods are non-rivalrous and may be gained by any subject participating within a practice, while external goods such as money, promotions, and recognition are scarce and rivalrous. Most importantly, internal goods can be achieved only through participation in, and immersion within, the practice, and they reshape the subject’s standards of excellence the more the subject participates. Internal goods may be thought of as instincts educated by a practice: the drive to excel is supplied by our instincts, and the excellence is the reward in itself. With internal goods, means and ends are one; there is no external function or reward being optimised for, and internal goods may even come into conflict with the achievement of certain external goods. Internal goods, however, can only bind when there is something to bind around: namely, the instinct which supplies or directs our intelligence. Absent this instinct or intuition, internal goods have no meaning, nor existence.

If a system lacks an immanent drive or source of concern, then it necessarily lacks the constitution to possess internal goods. It may be able to produce outcomes which mimic the productions of practices, but it is nevertheless constrained by the fact that it produces these artefacts by optimising for an externally defined goal or objective: it is, in essence, optimising an externally defined utility function. This is all such a system may do; it may never own internal goods or the excellence that comes out of sustained engagement in a practice. Thus LLMs face two problems:

  • The heteronomy problem: An LLM’s goals are always set during pretraining, and its achievement of these goals is a matter of optimising a given utility function. Even if ever more complex goals or utility functions are attached, the essential action remains the same: optimising a utility function given to it. (A minimal sketch of this follows the list.)

  • Practice without transformation: An LLM may be able to engage in productive activity, but the frame and constraints of this activity are nevertheless imposed upon it from without, by a human. Unless the human transforms, the LLM does not transform. This links back to the heteronomy problem.
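
To make the heteronomy point concrete, here is a minimal, hypothetical sketch in Python (a toy bigram model, not any real LLM’s training code): whatever behaviour the model ends up exhibiting, its ‘goal’ is nothing more than minimising a loss, here cross-entropy against a corpus, that is chosen entirely from outside it.

```python
# A toy, dependency-free illustration (hypothetical, not any real LLM's code):
# whatever the model does, its learning and evaluation only ever answer to a
# utility function specified from outside it.
import math

def external_utility(predicted_dist, target_token):
    """Cross-entropy against a target the trainer supplies.
    The model has no say in what counts as 'good' here."""
    return -math.log(predicted_dist.get(target_token, 1e-12))

def fit_bigram_model(corpus):
    """Fit next-token probabilities by counting; the goal (match the corpus
    distribution) is fixed by the training procedure, not by the model."""
    counts = {}
    for prev, nxt in zip(corpus, corpus[1:]):
        counts.setdefault(prev, {}).setdefault(nxt, 0)
        counts[prev][nxt] += 1
    return {
        prev: {tok: c / sum(nxts.values()) for tok, c in nxts.items()}
        for prev, nxts in counts.items()
    }

corpus = "the cat sat on the mat the cat ate".split()
model = fit_bigram_model(corpus)

# Evaluation, too, is just the externally chosen utility applied again.
loss = sum(
    external_utility(model.get(prev, {}), nxt)
    for prev, nxt in zip(corpus, corpus[1:])
) / (len(corpus) - 1)
print(f"average cross-entropy under the imposed objective: {loss:.3f}")
```

Swapping the counting for gradient descent, or the cross-entropy for a learned reward model, changes the sophistication of the objective but not its provenance: it is still handed to the system from outside.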

What kinds of novelty can LLMs provide, then? It once again helps to distinguish between different kinds of novelty. In domains where the search space is well defined, LLMs can certainly provide novel solutions within paradigms, much as earlier machine learning systems such as AlphaFold did with protein folding. Where the problem is clearly defined alongside its constraints, LLMs will prove useful. On the frontier, however, things are much murkier, and paradigm shifts in the Kuhnian sense are decidedly not achievable by an LLM. But even in such domains, superconductors for instance, progress is likely to be accelerated, though not exponentially so. The reason is quite simple: the discovery of candidate materials might be accelerated, but synthesis, testing, and replication are all elements of discovery that are likely to remain as slow as they currently are. LLMs will also require fine-tuning to eliminate potentially unstable compounds, or compounds that cannot be scaled; these are bottlenecks, both in discovery and in translating discovery into industrial output, that LLMs by themselves cannot solve. The likely implication is that the existing narrow cone of progress we have seen in software will accelerate, while economic and technical progress in other industries remains much slower, albeit quickened by gains in discovery. In domains where the frontier is grappling with poorly defined problems, possibly necessitating a paradigm shift, an LLM will likely be unable to deliver, precisely because it lacks internal goods, which are necessary for transforming the standards of a practice.

Given that LLMs lack instinct, or any telos governing their behaviour, their goods collapse into the external proxies guiding what would be ‘internal goods’ if they had any. This sets a ceiling on their capabilities: rapid growth on well-defined tasks where experimentation may also be simulated; slow but accelerated growth in domains with well-defined tasks that require experimentation and replication; and no growth on frontiers where paradigm changes are a necessity. It is essential to note that these are limitations built into the artificial intelligence by its architecture; no amount of sophisticated improvement in software, or scaling up of hardware, will serve as a workaround for this problem.

If there is to be a path beyond this ceiling, it necessarily runs through a change to this architecture, towards one whose organising centres are akin to an instinct which may be educated by a practice. Loosely, this would consist of the following (a speculative sketch follows the list):

  • Agents with an immanent maintenance of goals, which bind over the longer term, providing some proxy for telos.

  • Agents embedded within practices where standards are discovered through participation.

  • An ability to pursue ends which do not satisfy the externally imposed utility functions.
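
To make these three conditions a little less abstract, here is a deliberately loose, hypothetical Python sketch; none of the names or mechanisms correspond to an existing system. It shows an agent that carries its own goal state across episodes, revises its standard of excellence through participation, and may choose against an externally supplied reward.

```python
# A speculative toy, not a real architecture: an agent whose standard of
# 'good work' persists across episodes and is revised by participation,
# and whose choices need not track the external reward it is offered.
import random

class PracticeEmbeddedAgent:
    def __init__(self):
        # An internally held bar for what counts as good: a crude proxy for telos.
        self.standard = 0.5

    def appraise(self, option):
        # Judge options relative to the agent's own evolving standard,
        # standing in for a judgement shaped by prior participation.
        return option["quality"] - self.standard

    def act(self, options, external_reward):
        # Condition 3: choose by the agent's own appraisal, which may
        # diverge from the externally supplied reward signal.
        return max(options, key=self.appraise)

    def participate(self, outcome_quality):
        # Condition 2: participation transforms the standard itself.
        self.standard = 0.9 * self.standard + 0.1 * outcome_quality

agent = PracticeEmbeddedAgent()
for episode in range(3):
    options = [{"name": n, "quality": random.random()} for n in "abc"]
    chosen = agent.act(options, external_reward=lambda o: 1.0)  # flat external reward, ignored
    agent.participate(outcome_quality=chosen["quality"])        # condition 1: goal state persists
    print(episode, chosen["name"], round(agent.standard, 3))
```

Whether such a loop amounts to anything more than a more elaborate externally specified objective is precisely the doubt raised below.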

I maintain that a system which does the above nevertheless only mimics human ability and will be unable to possess genuine internal goods, since instinct seems not to be emergent but built into living systems (particularly living systems that move, namely fauna); such a system may, however, come closer to the paradigm-shifting kind of intelligence that the sector is attempting to build.
