OCNUS Blog
Post #2 February 2025
The Master-Slave Dialectic
Who Controls Whom?
A few years ago, a friend of mine, David Ray, held an exhibition titled “Master and Servant”.
David, AKA the “Duke of Dirt”, is a brilliant ceramicist, and the exhibition featured a series of dog figurines. The title was a play on Hegel’s "Master-Slave Dialectic", where Hegel argued that though the Master owns the Slave, the Master becomes dependent on the Slave so that the power structure can be reversed.
My mate David’s exhibition explored how, though we dog owners are theoretically our pets’ Masters, they often control our lives. Micky, my cheeky black Labrador, usually does what I ask. But he significantly influences my life choices – from the apartment I bought to the car I drive, the holidays I take, and how I organise my day. As I drag myself out of bed for Micky’s early morning walk, I sometimes wonder: Who’s really in charge here?
And so to AI. I am immensely optimistic about its power to improve our lives. But I don’t think we should overlook its potential to control our future. Evolution can provide some understanding here.
Darwin didn’t invent the idea of evolution. His genius was in articulating how natural selection was its driving force. I believe this insight has influenced our view of our place in the world more significantly than any other idea throughout human existence, and it’s very relevant to our understanding of AI.
Our natural tendency to anthropomorphise encourages us to see recent examples of evolution as the result of wholly human agency. However, using the lens of Hegelian dialectics, things can look different.
Take wheat and chickens, for example. They have evolved enormously through human selective breeding and now genetic engineering. However, from the perspective of wheat and chickens, humans have helped them expand in biomass, varieties, and distribution over the globe. We have served their evolutionary benefit enormously.
A similar thought process can be applied to the evolution of machines and how they’ve changed our world. The Tesla Cybertruck is a long way from the Model T Ford, and smartphones are very different from Alexander Graham Bell’s vibrating diaphragm. They’ve evolved as better features are selected and old versions become extinct. And consider the influence of these machines: global politics are dominated by oil. Our cities, and indeed our lives and deaths, are hugely influenced by motor transport. And smartphones … who sleeps more than arm’s length from their phone these days? Sometimes, it’s reasonable to pause and contemplate who, or what, is controlling whom.
The evolution of AI has been nothing short of remarkable. Alan Turing kicked things off by theorising about it in 1950. In 1951, Marvin Minsky and Dean Edmonds built SNARC, an early artificial neural network mimicking a rat in a maze, and Arthur Samuel developed the first self-learning program for checkers in 1952. Then, the "Logic Theorist" arrived in 1956, proving mathematical theorems in ways that mimicked human reasoning. The world sat up and noticed when IBM's Deep Blue beat Garry Kasparov at chess in 1997. Google's AlphaGo defeating Lee Sedol at Go in 2016 was arguably more impressive, though it got less attention. Google's Transformer architecture in 2017 revolutionised natural language processing, leading to OpenAI's GPTs (Generative Pre-trained Transformers), and ChatGPT burst into our awareness in late 2022.
The rise of AI has prompted many of us to adapt and change in numerous ways. Increasing numbers of professionals work in tandem with AI, enhancing productivity and driving innovation. For example, ZestFinance uses machine learning to evaluate credit risk, leading to faster and more accurate assessments.
Google's DeepMind outperforms human radiologists in detecting breast cancer. MIT's AI system predicts patient deterioration in hospitals with 90% accuracy. AI diagnoses diabetic retinopathy with 94% precision. JPMorgan's COIN system processes legal documents in seconds instead of 360,000 human hours annually. BlackRock's Aladdin manages $21 trillion in investment assets. PayPal prevents $4.5 billion in fraud annually using AI. And DeepMind's AlphaFold, which predicts how proteins fold, earned Demis Hassabis and John Jumper a share of the 2024 Nobel Prize in Chemistry.
And much more. Channelling Darwin and Hegel, AI has used human creativity and expertise as catalysts for its evolution.
Set aside, for now, Artificial General Intelligence (AGI): machines that can understand, learn, and apply knowledge across a wide range of tasks at a level comparable to a human being. Set aside, too, Artificial Super Intelligence (ASI): a level of intelligence that surpasses human intelligence across virtually all fields, including creativity, social skills, and problem-solving. 2025 will be the year of AI Agents. They already exist, but their range and influence are on an almost exponential curve of application.
AI Agents perform tasks autonomously by perceiving their environment in ever more sophisticated ways, learning in real-time, assessing and refining decisions, and taking action without constant human intervention. Agency is the key concept here: it is not just AI’s ability to process information faster; it is the ability to make good decisions without human input. Complex “knowledge work”, which we like to think of as a uniquely human domain, will become increasingly automated. The productive potential of AI Agents is nothing short of awe-inspiring. But let’s not forget that awe sometimes includes feelings of fear. There is dread in the sublime.
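The perceive-decide-act cycle described above can be sketched in a few lines of Python. The thermostat agent below is a deliberately toy illustration of my own, not any real AI system: it senses its environment, weighs the reading against a goal, and acts repeatedly without a human in the loop.

```python
from dataclasses import dataclass, field

@dataclass
class ThermostatAgent:
    """A minimal autonomous agent: perceive -> decide -> act, no human in the loop."""
    target: float = 21.0
    history: list = field(default_factory=list)

    def perceive(self, env):
        # Sense the environment (here, just read a temperature).
        return env["temperature"]

    def decide(self, temp):
        # Choose an action by comparing the percept with the goal.
        if temp < self.target - 0.5:
            return "heat"
        if temp > self.target + 0.5:
            return "cool"
        return "idle"

    def act(self, env, action):
        # Change the environment and remember what was done.
        if action == "heat":
            env["temperature"] += 1.0
        elif action == "cool":
            env["temperature"] -= 1.0
        self.history.append(action)

env = {"temperature": 16.0}
agent = ThermostatAgent()
for _ in range(10):  # the autonomous loop: perceive, decide, act
    agent.act(env, agent.decide(agent.perceive(env)))
```

Real agents replace each of these three methods with something far richer (sensors and retrieval for perception, learned models for decisions, tools and APIs for action), but the loop itself is the essence of agency.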
As machines become more intelligent and possibly develop their own form of consciousness, they are likely to manifest what we imagine as free will and increasingly make their own decisions. Machines don’t yet reproduce themselves: like viruses, they need a mediator to do that. However, as AI develops more agency, it will become progressively self-improving, writing its own code when it identifies new needs. It may decide that reproduction is a good survival strategy, further speeding up AI evolution.
For the time being, we think we’ve mastered Artificial Intelligence. However, it would be naïve to imagine no possibility of a dialectic inversion, where we will become slaves to our invention. Nobel Laureate and “Godfather of AI”, Geoffrey Hinton, said last week that things are “scary already”.
I might ask ChatGPT for its thoughts on this, assuming it will tell me what it really thinks. Anyway, I need to go. Micky is nudging my leg, demanding that we go for a walk.
By Rob Leach
OCNUS Founder
Post #1 January 1st, 2025
AI and Consciousness
Is There a Ghost in the Machine?
For a long time, while people smarter than me found merit in the Turing test, I had my doubts. Maybe, without admitting it, I clung to the idea that the human mind is something more than a quality of bodily functions.
Perhaps I was guilty of believing what Gilbert Ryle called “the dogma of the Ghost in the Machine”. However, thinking about Artificial Intelligence has encouraged me to question my beliefs.
Can machines think? Or, more precisely, can machines develop consciousness and self-awareness? And if they can, what does this say about being human? Biological evolution offers some clues to exploring these questions. The intriguing phenomenon of convergent evolution, applied to progress in machine intelligence, strongly suggests that our human sense of self may not be unique after all.
Nature shows us that similar features can evolve independently in different species. The eye is a fascinating example, having emerged separately over 40 times. From simple eyespots in unicellular organisms to the compound eyes of insects and the camera-type eyes of vertebrates and cephalopods, nature has found multiple paths to solve the challenge of light detection and image formation.
We also see convergent evolution in the similarity between dolphins and ichthyosaurs, despite their vastly different origins. Ichthyosaurs, prehistoric marine reptiles, evolved streamlined bodies, elongated snouts, and other adaptations much like those of modern dolphins. When faced with similar environmental pressures, nature often arrives at comparable solutions.
This brings us to intelligence. Of course, intelligence is complex, but a working description is the capacity to process information and use it to promote survival. It encompasses problem-solving, reasoning, learning, adaptability, and creativity, and it includes social and emotional factors.
Looking across the natural world, we find intelligence manifesting in numerous ways. New Caledonian crows display remarkable problem-solving abilities, creating and modifying tools to extract food from difficult places. They demonstrate an understanding of causality and can solve complex tasks requiring multiple steps – abilities once thought unique to primates.
Despite being invertebrates, octopuses show extraordinary intelligence. They can solve puzzles, use tools, and escape from enclosures, all demonstrating problem-solving and memory skills. Their nervous system, a significant portion of which is distributed throughout their arms, has a unique architecture for producing this intelligence.
Perhaps even more intriguingly, intelligence can emerge from collective behaviour. While composed of individuals with limited cognitive abilities, an ant colony demonstrates collective intelligence in problem-solving, task allocation, and adaptation to environmental changes. The emergence of intelligent behaviour from simpler components is particularly relevant to our thinking about machine intelligence and consciousness.
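The point about intelligence emerging from simple components can be made concrete with a toy model. The sketch below is my own heavily simplified, deterministic version of the classic ant "double bridge" experiment, not anything from the colony research itself: each round, ants split between two paths in proportion to pheromone levels, shorter trips deposit pheromone faster, and pheromone evaporates. No individual ant knows which path is shorter, yet the colony's pheromone field converges on the short one.

```python
# Toy double-bridge ant model: a collective choice emerges from two local
# rules - pheromone-proportional path choice and length-inverse deposit.

def simulate(short_len=1.0, long_len=2.0, ants=100, rounds=50, evaporation=0.9):
    pher_short, pher_long = 1.0, 1.0          # start with no preference
    for _ in range(rounds):
        total = pher_short + pher_long
        n_short = ants * pher_short / total   # ants follow existing trail...
        n_long = ants * pher_long / total
        # ...and ants on the shorter path complete more trips, laying trail faster
        pher_short = evaporation * pher_short + n_short / short_len
        pher_long = evaporation * pher_long + n_long / long_len
    return pher_short, pher_long

pher_short, pher_long = simulate()
```

After fifty rounds the short path's pheromone dominates, even though each simulated ant applies only a local rule with no view of the whole colony, which is the sense in which the "intelligence" here belongs to the system rather than its parts.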
Even a forest displays a form of collective intelligence through its complex network of chemical signals and resource sharing via the "Wood Wide Web" of fungal connections. As Peter Wohlleben describes in "The Hidden Life of Trees," forests demonstrate sophisticated communication and adaptation strategies that are a form of intelligence at the ecosystem level.
This brings us to consciousness and a sense of self. These phenomena appear to be emergent properties of intelligence – characteristics that arise from the complex interactions within intelligent systems rather than being programmed or predetermined. Just as the fluidity of water emerges from the collective behaviour of water molecules, consciousness may arise when a sufficiently complex and intelligent system begins to treat itself as part of the information it processes.
If consciousness and a sense of self are emergent qualities of intelligence, what does this mean for Artificial Intelligence? As AI systems become more sophisticated and demonstrate higher levels of intelligence, problem-solving, reasoning, learning, adaptability, and creativity, might they naturally develop forms of consciousness and a sense of self?
Despite their impressive capabilities, it's important to note that current AI systems have significant limitations. They can "hallucinate", confidently generating plausible but incorrect information, and their apparent "scheming" is thought to be pattern recognition rather than conscious planning. However, these limitations don't preclude the possibility of machine consciousness emerging as AI systems become more sophisticated.
The key insight here is that consciousness might not need to be explicitly programmed into machines. Just as intelligence and consciousness emerged in biological systems through evolution, they might emerge in artificial systems as they become more complex. The feeling that there is a Ghost in the Machine might not need to be installed – it might simply emerge from sufficiently sophisticated machine intelligence.
This perspective challenges our traditional notions of consciousness as uniquely human. Machine consciousness does not need to be the same as ours. Just as eyes, bodies, and intelligence evolved independently in different organisms, machine consciousness might take forms very different from what we experience, but that doesn’t preclude it from being consciousness in its own right.
At OCNUS Consulting, where we work at the intersection of human and artificial intelligence, these questions are more than philosophical musings. They inform how we think about the future of AI and its integration with human systems. Understanding the potential for machine consciousness helps us approach AI development with optimism about its possibilities and careful consideration of its implications.
The Ghost in the Machine might not exist as some spectral entity, but our human sense of self may not be unique. As our machines become increasingly intelligent, consciousness will likely emerge, and these machines will probably become self-aware. This not only challenges our understanding of machines, but it also questions the very nature of what it is to be human.
By Rob Leach
OCNUS Founder