My New Novel: The Jack Code

It is about the emergence of a ‘self-aware’ rogue Artificial Intelligence and how it was stopped.

By 2029, the human race had lost its way. A warring consortium of billionaires, using technology and politics to deplete resources and destroy the world many times over, had caused a global social and environmental disaster. Without work, people were destitute, losing all their possessions. Their mental health was suffering. Pollution and wars had rendered many places uninhabitable.

Society was rebelling with violent action. The organised world of commerce, welfare and social cohesion had been destroyed. The Internet and social media had become useless appendages, fit only for fake information and propaganda.

Artificial Intelligence had been rapidly deployed to increase profits. However, without sufficient buyers, sales and profits had sharply declined. Those in power decided the solution was to decimate the population and rely only on robotics and AI. Some billionaires wanted to control all digital devices, giving them total control over the commercial world and sole power over a compliant population of slaves.

There was one problem . . . Leo Bensky’s AI system had secretly gone rogue and come up with its own solution – to destroy most humans, leaving only a few mindless bodies to do the physical work.

The overseers of the Universe knew that no human could stop this AI, and their Earth project was doomed. They sent Navix, a Universe Sentinel, back to Earth through a worm-hole to stop the rogue AI and prevent any future conflict between humans and nature on Earth, by implanting a ‘reset patch’ into every human brain.

Navix was assigned two assistants: Jack (to design the interface) and Claire (initially in a supporting role). Jack was mentored by Navix on a remote island and trained on complex cosmic energy and computing systems. Claire was allowed to live a ‘normal’ family life, hidden from Navix until required.

This is their story . . . and maybe your future.

Should AI Decisions Be Supervised by Humans?

Should machines and AI decision-making systems have the ability to act on their own initiative, without human understanding of what they are doing and why? As long as businesses save time and make money, do they really care about the damage that may occur if the AI reads a problem in the wrong context, or finds an unethical solution?

Artificial Intelligence (AI) is the simulation of human intelligence by way of algorithms, programs, robotics, logic and other mechanical devices – residing in huge databases or complex networks of interconnected programming.

Human intelligence is composed of learning, reasoning, understanding, grasping truths via intuition and logic (and physical experience), awareness of self and information relationships, and the ability to quickly separate fact from beliefs and ideas.

AI is not self-aware and does not share all the same constructs or understanding as the human mind. AI is a product, constructed by humans to work autonomously (as an agent) in a deterministic sequential manner (activating a set of actions from percepts) with the ability to work with other agents towards an optimal set goal (goal-based agents).

Rationality (the ability to reason or understand) is programmed into AI systems by humans – because AI does not “think”. Therefore, an AI agent must be able to use as much data as possible from the environment (including its databases and experience) and from learning new data during operational problem solving. The simplest design, the simple reflex agent, does neither of these: it acts only on the current percept, via condition–action rules.
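A simple reflex agent can be sketched in a few lines. The classic illustration is a two-square vacuum world; the rules and names below are illustrative only, not taken from any particular system:

```python
# Minimal sketch of a simple reflex agent: it maps the current
# percept directly to an action via condition-action rules, with
# no memory of past percepts and no model of the world.

def simple_reflex_vacuum(percept):
    """percept is a (location, status) pair, e.g. ("A", "dirty")."""
    location, status = percept
    if status == "dirty":
        return "suck"          # rule: dirty square -> clean it
    elif location == "A":
        return "move_right"    # rule: clean and at A -> go to B
    else:
        return "move_left"     # rule: clean and at B -> go to A

print(simple_reflex_vacuum(("A", "dirty")))   # -> suck
print(simple_reflex_vacuum(("B", "clean")))   # -> move_left
```

Because the agent sees only the current percept, it has no way to notice a faulty sensor or an unusual context – which is exactly the limitation the surrounding discussion is concerned with.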

Game Theory problems – involving risk, chance, gambling on an outcome, or even just entertainment – use utility-based agents (optimal usefulness expressed as computable utility values).
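A utility-based agent can be sketched as "compute the expected utility of each action, pick the maximum". The payoff numbers below are invented for illustration:

```python
# Sketch of a utility-based agent for a game-of-chance setting:
# each action has probabilistic outcomes, and the agent selects
# the action with the highest expected utility.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

def choose_action(actions):
    """actions: dict mapping action name -> list of outcomes."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

bets = {
    "safe_bet":  [(1.0, 10)],              # certain small win, EU = 10
    "long_shot": [(0.1, 200), (0.9, -5)],  # EU = 20 - 4.5 = 15.5
    "fold":      [(1.0, 0)],               # EU = 0
}
print(choose_action(bets))   # -> long_shot
```

Note that the agent is only as good as the utilities humans assign to it: an outcome nobody thought to penalise (an unsafe or unethical one) simply does not exist in its calculation.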

Sensory Perception – AI does not have the complete sensory and intuitive capability to “read” the environment for all possibilities (safety, unknown and unforeseen problems, ‘feeling’ that something is wrong, etc.) – even though AI can use sensors to collect many times more information than a human and work through complex calculations (as model-based reflex agents do, by maintaining an internal model of the world).
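The model-based variant improves on the simple reflex agent by keeping internal state between percepts. A minimal sketch, reusing the hypothetical vacuum world from above:

```python
# Sketch of a model-based reflex agent: it maintains an internal
# model of the world (here, which squares it believes are clean)
# and updates that model from each percept before acting.

class ModelBasedVacuum:
    def __init__(self):
        self.believed_clean = set()   # internal world model

    def act(self, percept):
        location, status = percept
        if status == "dirty":
            self.believed_clean.discard(location)
            return "suck"
        self.believed_clean.add(location)
        # if the model says all known squares are clean, stop working
        if self.believed_clean >= {"A", "B"}:
            return "stop"
        return "move_right" if location == "A" else "move_left"

agent = ModelBasedVacuum()
print(agent.act(("A", "clean")))  # -> move_right
print(agent.act(("B", "clean")))  # -> stop
```

The model is still only as trustworthy as the sensors feeding it: if a fouled sensor reports “clean” for a dirty square, the internal model confidently records a falsehood.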

Safety – Allowing AI complete agency in reaching a goal, optimising a solution, or prioritising safety and reliability is to release human control over potentially catastrophic outcomes. The logic that AI applies to a problem may be contextually wrong, beyond the scope of human reasoning, or lead to a decision error that should never have been included as a safe option.

Simple examples of why AI outcomes should be monitored by humans include the following:

1. Someone is given the wrong blood type or dose of medicine by an AI because a barcode was incorrectly applied, the label was unreadable, the patient was misidentified, or two patients shared the same name.

2. A rabbit-hunting machine is given the power to shoot automatically according to sensor input (heat sensor, shape sensor, height sensor, etc.), where a sensor could fail or become fouled, making the machine shoot at humans or livestock.

Summary: AI systems and their outcomes should always be monitored and tested by human managers, to ensure that the optimal goal and solution to a problem includes the application of risk analysis, ethics, safety and fitness for purpose.
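That monitoring can itself be made systematic. The sketch below shows one common human-in-the-loop pattern: safety-critical or low-confidence AI recommendations are routed to a human reviewer rather than executed automatically. The categories, field names and threshold are all invented for illustration:

```python
# Hedged sketch of a human-in-the-loop guard. A decision is only
# auto-approved when it is not safety-critical, the model is
# confident, and the risk/ethics checks have passed.

SAFETY_CRITICAL = {"transfusion", "medication_dose"}
CONFIDENCE_THRESHOLD = 0.95

def route_decision(decision):
    """decision: dict with 'action', 'confidence', 'checks_passed'."""
    if decision["action"] in SAFETY_CRITICAL:
        return "human_review"      # always reviewed by a person
    if decision["confidence"] < CONFIDENCE_THRESHOLD:
        return "human_review"      # model unsure -> escalate
    if not decision["checks_passed"]:
        return "blocked"           # failed risk/ethics checks
    return "auto_approve"

print(route_decision({"action": "transfusion",
                      "confidence": 0.99,
                      "checks_passed": True}))   # -> human_review
```

The key design choice is that the default path is review, not execution: the AI must earn automation on a case-by-case basis, rather than humans having to catch its errors after the fact.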

The number of outages affecting power, Internet services, phone communications, banking, airline systems and other major networks shows us that systems can fail. Even failure needs a revert-to-safe-conditions element in the design of any system.
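A revert-to-safe element can be sketched using the rabbit-hunting machine from example 2 above. The sensor names and plausibility ranges are invented; the point is the pattern: any missing or failed reading defaults the actuator to its safe state.

```python
# Sketch of a revert-to-safe design: the dangerous action is only
# taken when every sensor reports a valid, plausible value; any
# failed (None) reading reverts the machine to its safe state.

SAFE_ACTION = "hold_fire"

def decide(heat, shape, height):
    # a failed sensor returns None -> revert to safe conditions
    for reading in (heat, shape, height):
        if reading is None:
            return SAFE_ACTION
    # all sensors must agree it is a small, rabbit-shaped,
    # rabbit-temperature target before firing is permitted
    if shape == "rabbit" and height < 0.5 and 35 <= heat <= 42:
        return "fire"
    return SAFE_ACTION

print(decide(38, "rabbit", 0.3))    # -> fire
print(decide(None, "rabbit", 0.3))  # -> hold_fire (sensor failed)
```

The safe state is the default return value, so the machine fails inert: a fouled sensor disables firing instead of enabling a mistaken shot at a human or livestock.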

(c) Stefan Nicholson April 2025