Should AI Decisions Be Supervised by Humans?

Should machines and AI decision-making systems be able to act on their own initiative, without humans understanding what they are doing and why? As long as businesses save time and make money, do they really care about the damage that may occur if the AI reads a problem in the wrong context, or finds an unethical solution?

Artificial Intelligence (AI) is the simulation of human intelligence by way of algorithms, programs, robotics, logic and other mechanisms, residing in huge databases or complex networks of interconnected programs.

Human intelligence is composed of learning, reasoning, understanding, grasping truths via intuition and logic (and physical experience), awareness of self and information relationships, and the ability to quickly separate fact from beliefs and ideas.

AI is not self-aware and does not share all the same constructs or understanding as the human mind. AI is a product, constructed by humans, that works autonomously (as an agent) in a deterministic, sequential manner (activating a set of actions from percepts), with the ability to work with other agents towards a set goal (goal-based agents).
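
As a rough sketch of this percept-to-action idea and of a goal-based agent, the short Python example below shows a toy agent that perceives its position, compares it with a set goal, and deterministically picks the action that moves it closer. The names and the number-line world are hypothetical, chosen only for illustration.

    # A toy goal-based agent: it perceives its position on a number line and
    # deterministically chooses the action that moves it toward a set goal.
    def goal_based_agent(percept: int, goal: int) -> str:
        if percept < goal:
            return "move_right"
        if percept > goal:
            return "move_left"
        return "stop"  # goal reached

    position, goal = 2, 5
    while True:
        action = goal_based_agent(position, goal)  # percept -> action
        print(f"position={position}, action={action}")
        if action == "stop":
            break
        position += 1 if action == "move_right" else -1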

Rationality (the ability to reason or understand) is programmed into AI systems by humans, because AI does not “think”. Therefore, an AI agent must be able to use as much data as possible from the environment (including its databases and experience) and from new data learned during operational problem solving; the simplest such design is the simple reflex agent, which reacts directly to the current percept using pre-programmed condition-action rules.
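
A simple reflex agent can be pictured as nothing more than condition-action rules applied to the current percept; the rationality lives entirely in rules a human wrote. The tiny vacuum-cleaner world below is a hypothetical illustration, not part of the original text.

    # A simple reflex agent: condition-action rules applied to the current
    # percept only, with no memory of earlier percepts.
    def simple_reflex_agent(percept: dict) -> str:
        if percept["status"] == "dirty":   # rule 1: clean the square you are on
            return "suck"
        if percept["location"] == "A":     # rule 2: otherwise keep patrolling
            return "move_to_B"
        return "move_to_A"

    for percept in [{"location": "A", "status": "dirty"},
                    {"location": "A", "status": "clean"},
                    {"location": "B", "status": "clean"}]:
        print(percept, "->", simple_reflex_agent(percept))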

Game Theory problems involving risk, chance, gambling on an outcome, and even entertainment use utility-based agents (optimal usefulness expressed as computable utility values that the agent tries to maximise).
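
A utility-based agent of this kind can be sketched as code that scores each candidate action by its expected utility (probability times utility, summed over the chance outcomes) and picks the highest; the wager names and numbers below are purely illustrative assumptions.

    # A utility-based agent: each action has chance outcomes with a probability
    # and a computable utility value; the agent picks the highest expected utility.
    actions = {
        "safe_bet":   [(1.0, 10.0)],                  # certain small win
        "risky_bet":  [(0.3, 100.0), (0.7, -20.0)],   # gamble on an outcome
        "do_nothing": [(1.0, 0.0)],
    }

    def expected_utility(outcomes):
        return sum(p * u for p, u in outcomes)

    for name, outcomes in actions.items():
        print(f"{name}: expected utility = {expected_utility(outcomes):.1f}")
    print("chosen action:", max(actions, key=lambda a: expected_utility(actions[a])))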

Sensory Perception – AI does not have the complete sensory and intuitive capability to “read” the environment for all possibilities (safety, unknown and unforeseen problems, “feeling” that something is wrong, etc.), even though AI can use sensors to collect far more information than a human can and work through complex calculations (as model-based reflex agents do).
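
A model-based reflex agent keeps an internal model of the world, built up from a history of sensor readings, and reacts to that model rather than to a single raw percept. The thermostat below is a hypothetical sketch: the smoothing factor and threshold are assumptions, and the point is only that one faulty spike from a fouled sensor does not by itself trigger the action.

    # A model-based reflex agent: internal state (a smoothed temperature
    # estimate) is updated from each percept, and the rule fires on the
    # model, not on the latest raw reading.
    class ModelBasedThermostat:
        def __init__(self, threshold: float = 30.0):
            self.estimate = None
            self.threshold = threshold

        def act(self, reading: float) -> str:
            # Update the internal model by blending the new percept with history.
            if self.estimate is None:
                self.estimate = reading
            else:
                self.estimate = 0.8 * self.estimate + 0.2 * reading
            return "cooling_on" if self.estimate > self.threshold else "cooling_off"

    agent = ModelBasedThermostat()
    for reading in [24.0, 24.0, 50.0, 24.0, 24.0]:  # 50.0 is a single faulty spike
        action = agent.act(reading)
        print(f"reading={reading}, model_estimate={agent.estimate:.2f}, action={action}")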

Safety – Allowing AI complete agency over reaching a goal, optimising a solution, or deciding how to prioritise safety and reliability, is to release human control over potentially catastrophic outcomes. The logic that AI applies to a problem may be contextually wrong, may lie beyond the scope of human reasoning, or may lead to a decision that should never have been treated as a safe option.

Simple examples of why AI outcomes should be monitored by humans include the following:

1. Someone given the wrong blood type or dose of medicine by an AI, because a barcode was incorrectly applied, or the label was unreadable, or the wrong patient was identified, or two patients shared the same name.

2. A rabbit-hunting machine given the power to shoot automatically according to sensor input (heat sensor, shape sensor, height sensor, etc.), where a sensor could fail or become fouled, making the machine shoot at humans or livestock (see the human-in-the-loop sketch after this list).
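
Both examples could be blunted by a human-in-the-loop gate: before any irreversible action (dispensing a drug, firing a weapon), the system checks that its inputs are consistent and confident, and refers anything doubtful to a person. The sketch below is a hypothetical illustration of that pattern; the function names, confidence figures and threshold are assumptions, not a real medical or hunting system.

    # A human-in-the-loop gate: the AI acts autonomously only when its inputs
    # are consistent and its confidence is high; anything doubtful is escalated.
    def safe_to_act(confidence: float, inputs_consistent: bool) -> bool:
        return inputs_consistent and confidence >= 0.99   # hypothetical policy

    def decide(action: str, confidence: float, inputs_consistent: bool) -> str:
        if safe_to_act(confidence, inputs_consistent):
            return f"execute: {action}"
        return f"HOLD: refer '{action}' to a human supervisor"

    # Barcode on the medicine does not match the patient record.
    print(decide("dispense medication", confidence=0.97, inputs_consistent=False))
    # Heat and shape sensors disagree about whether the target is a rabbit.
    print(decide("fire at target", confidence=0.60, inputs_consistent=False))
    # Only a clean, confident reading is acted on without a human.
    print(decide("dispense medication", confidence=0.995, inputs_consistent=True))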

Summary: The use of AI systems and their outcomes should always be monitored and tested by human managers, to ensure that the optimal goal and solution to a problem includes risk analysis, ethics, safety and fitness for purpose.

The number of outages affecting power, Internet services, phone communications, banking, airline systems and other major networks shows us that systems can fail. Even failure needs a revert-to-safe-conditions element in the design of any system.
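
A revert-to-safe element can be sketched as a wrapper around the automated decision path: if anything in that path fails (a lost sensor, bad data, a software fault), the system falls back to a pre-defined safe state and alerts a person instead of acting on what it has. The names and threshold below are hypothetical.

    # Revert-to-safe: any failure in the automated decision path falls back to
    # a known safe state and a human alert, rather than acting on bad data.
    SAFE_STATE = "disarm_and_alert_operator"   # hypothetical pre-defined safe action

    def automated_decision(sensor_reading):
        if sensor_reading is None:             # e.g. sensor offline or fouled
            raise ValueError("sensor offline")
        return "proceed" if sensor_reading < 50 else "shut_down"

    def decide_with_failsafe(sensor_reading):
        try:
            return automated_decision(sensor_reading)
        except Exception as error:
            print(f"failure detected ({error}); reverting to safe state")
            return SAFE_STATE

    print(decide_with_failsafe(42))    # normal operation -> proceed
    print(decide_with_failsafe(None))  # sensor failure  -> safe state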

(c) Stefan Nicholson April 2025