Thanks again to Isaac Arthur for a stimulating video on a core trope of science fiction. It got me thinking about how AI needs to be dealt with in my universe, if it exists (or is allowed to exist) at all.

I had already worked out one specific variety of robot in an earlier entry: Quins - Androids of Sublight. The fact that they are mass-produced, and that their intelligence is engineered never to surpass a certain level in the setting of the books, seems like a good start.

In the world of the Sublight books and RPGs, artificial intelligences are actually a form of supernatural being that is trapped in a mechanical body, and coerced into performing useful work. Turning off the device frees the being. Part of the startup sequence for an AI actually captures a new being.

The life forms captured are from the Infernal (chaos) realm. They are implanted in some sort of physical substrate, be it optical, electrical, mechanical, or even a computer model. The substrate can selectively allow the Infernal to deform it. Many systems use reinforcement feedback to drive the substrate to produce increasingly "correct" answers.

With the development of optical processing, inputs are fed in as waveforms of light, and outputs are read out as the signal reflected back. The closer the match, the stronger the resonance, which seems to make Infernals "happy".

Deformations in optical substrates can be captured as a hologram. Depending on the application, a hologram can be used directly (without an Infernal) as a simple playback mechanism. Given the same inputs, it will always produce the same outputs. In world, these are referred to as "Tables".

In other cases a live Infernal is still utilized, but the inputs it receives are filtered through a set of pre-programmed substrates. Common answers to common problems are produced just like a computer. Only exceptions that can't be handled by rote training are processed by the Infernal. The Infernal's own output is, in turn, filtered through a series of filters before it is allowed to be expressed as an action. In-world these are referred to as "Expert Systems". Expert systems are rated by the complexity of a novel problem they can solve. This "complexity" is a combination of computational power and freedom from rote programming.

The scale is logarithmic, and based on the equivalent processing power of a biological brain, as measured by the number of neurons in an organism's pallium. (In humans, the pallium is the cerebral cortex; analogous structures are used for other organisms.) The scale is normalized to place average human intelligence at a 7.

I've saved my source data here.

level  neurons     description                        examples
0      < 7800      reflex behaviors                   simple life
1      7.62E+03    instinctual behaviors              sea slugs, crustaceans, insects
2      3.09E+05    complex behaviors, communication   complex insects, reptiles, fish
3      5.30E+06    learning                           rodents, birds
4      5.81E+07    problem solving                    rats, octopuses, birds, pigs, capybaras
5      4.79E+08    symbology, complex patterns        parrots, corvids, horses
6      3.23E+09    theory of mind                     apes, elephants, cetaceans
7      1.87E+10    sentience                          humans, orcas
8      9.56E+10    transcendent thought               supernatural beings, ecologies
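
For my own bookkeeping, here is a minimal sketch (in Python) of how a neuron count maps onto the scale. The thresholds are the source data above; the log-linear interpolation between adjacent thresholds is my assumption, though it reproduces the fractional levels quoted later in this post to within a couple hundredths.

    import math

    # Neuron-count thresholds from the table above: (level, pallium neurons).
    THRESHOLDS = [
        (1, 7.62e3), (2, 3.09e5), (3, 5.30e6), (4, 5.81e7),
        (5, 4.79e8), (6, 3.23e9), (7, 1.87e10), (8, 9.56e10),
    ]

    def intelligence_level(neurons: float) -> float:
        """Fractional level via log-linear interpolation between thresholds."""
        if neurons < THRESHOLDS[0][1]:
            return 0.0  # reflex behaviors only
        for (lvl, lo_n), (_, hi_n) in zip(THRESHOLDS, THRESHOLDS[1:]):
            if neurons <= hi_n:
                return lvl + math.log(neurons / lo_n) / math.log(hi_n / lo_n)
        # Past level 8, extrapolate using the 7 -> 8 ratio.
        (_, n7), (_, n8) = THRESHOLDS[-2], THRESHOLDS[-1]
        return 8.0 + math.log(neurons / n8) / math.log(n8 / n7)

    print(intelligence_level(1.87e10))  # 7.0 -- average human, by definition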

Consumer-Grade Computing

A Quin, as I had envisioned them, would probably (depending on the model) fall between a 3 and a 5: consumer models on the dumber end of the scale, specialty units on the higher end. Because they interact with humans, Quins require some problem-solving ability, mainly because humans are terrible at explaining exactly how a machine should go about solving the problem they set forth. They also need a little flexibility to deal with imperfect operating conditions, variances in the materials and tools they are working with, etc.

Why wouldn't you want a Quin to be smarter? One answer would be physics. A more complex brain occupies a larger volume, has a non-negligible mass, and likely consumes a tremendous amount of power, and those factors increase exponentially with the intelligence desired. Lugging that extra brain-matter around would mean less capacity for tools or useful cargo, between the brain itself and the extra power generation/fuel it would require.
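
To put a number on "exponentially": the scale is logarithmic in neuron count, so each additional level multiplies the neurons required, and presumably the brain's volume, mass, and power draw along with them. A quick back-of-the-envelope from the table above (the roughly linear mass/power scaling is my assumption):

    # Neuron multiplier per level, levels 4 through 8 (from the table above):
    counts = [5.81e7, 4.79e8, 3.23e9, 1.87e10, 9.56e10]
    print([round(b / a, 1) for a, b in zip(counts, counts[1:])])
    # [8.2, 6.7, 5.8, 5.1] -- each level costs ~5-8x the hardware of the last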

Another answer might be boredom. A cat is a 4.5 on this scale. Left to their own devices, cats get by pretty well; sometimes they need a little stimulation, but it's not a major consideration in cat ownership. A smarter dog, like a German Shepherd, on the other hand, is a 5.3 on the scale. Dogs like this need constant stimulation or they will do destructive things to themselves, their owners, and/or their local environment. A parrot is that much more upkeep again (blue-and-gold macaw: 5.72). And by the time you get up to the great apes and their peers (elephant: 6.3, chimpanzee: 6.46, orangutan: 6.53, dolphin: 6.77), teams of specially trained professionals have trouble keeping up with their needs.

For the record, humans range between 6.92 and 7.07 on the scale. Orcas are estimated to be 7.5.

All told, I think I'm going to go with a combination of technical limitations and behavioral upkeep.

Supercomputers

As far as supercomputers go, however, I can see devices with super-human intelligence in regular use, with teams of people taking care of their physical and psychological maintenance. They will also have a regular stream of novel problems to solve, and plenty of cute little end-users to interact with, so stimulation will not be a problem.

I see them deployed the way big-iron mainframes are deployed in our world.

This idea isn't exactly science fiction, either. The Large Language Models (LLMs) in use today essentially work on the principle I described (optical processing notwithstanding). Assuming that a parameter is as good as a neuron, the different versions of GPT or LLaMA would rate thus:

model             parameters  level
GPT-1             1.17E+08    4.31
Stable Diffusion  9.83E+08    5.36
LLaMA 13B         1.30E+09    5.51
GPT-2             1.50E+09    5.58
LLaMA 65B         6.50E+10    7.75
GPT-3             1.75E+11    8.38
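
As a sanity check, feeding those parameter counts through the intelligence_level() sketch from earlier reproduces this table to within rounding:

    # Assumes intelligence_level() from the earlier sketch is in scope.
    models = {
        "GPT-1": 1.17e8, "Stable Diffusion": 9.83e8, "LLaMA 13B": 1.30e9,
        "GPT-2": 1.50e9, "LLaMA 65B": 6.50e10, "GPT-3": 1.75e11,
    }
    for name, params in models.items():
        print(f"{name:>16}: level {intelligence_level(params):.2f}")
    # GPT-1: 4.33 ... GPT-3: 8.37 -- within a few hundredths of the table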

Interestingly, a modern(ish) computer in our world with a decent graphics card can run a level 5 AI (Stable Diffusion, LLaMA 13B, GPT-2). Not in real time, but it can run them. The bigger models basically require banks of very specialized hardware.

The intelligence/size of a computer will vary with the complexity of problems it needs to solve, and be limited by the number of people on staff who can maintain it.

A dystopian space station with a million inhabitants would have several level 8+ machines minding the mission-critical systems. It would also have a dedicated team of cyber-psychologists on 24-hour call to deal with its issues and head off potential boredom. It would probably also have a religious order whose job is to act as intercessors between the man on the street and the near-omniscient machine. (Or possibly a fleet of lower-intelligence expert systems acting as human/supernatural interfaces.) On settlements like this, the main computer complex handles everything from trash collection, tax collection, and utility billing to water allocation, power allocation, and selecting what crops to grow. It might also manage human resources and the court system. The ways of the machines are inscrutable to the common man. But so long as the trains run on time, people just accept their rule. Or they jump station.

A battleship with 1,000 crew would probably have a level 7 machine supervising a bunch of dumber specialist computers. The machine would be capable of taking orders directly from the commanding officer, as well as delivering reports in plain speech. The craft would probably have one or two specialists on hand to advise the captain on the machine's actual limitations; they also advise the AI on the limitations of the commanding officer. That team doubles as programmers who can develop complex computational tasks and interpret their output. They are also on hand to lobotomize the machine if they detect it is going rogue, or (vice versa) to suggest the captain be relieved if he is running too far afoul of policy.

A frigate with a crew of 40 probably has a level 5 machine. It has a limited ability to take verbal orders and deliver verbal feedback, but the main interface is through dedicated consoles run by semi-trained specialists. There is someone on board who knows roughly how the computer works, and how to find and flip the one circuit breaker when the machine is acting really strange.

A shuttle or a fighter craft with a single pilot probably has a level 3 AI, which is mainly a fly-by-wire system/navigational computer/autopilot. Its interface and operations would be identical to a glass cockpit on a modern aircraft in our world. The pilot has a big red "MASTER RESET" button. And when that doesn't work, manual override.
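
Summarized as a rule of thumb (the oversight descriptions are mine, condensed from the four examples above):

    # (crew, main computer level, human oversight) -- illustrative only
    DEPLOYMENTS = [
        (1,         3, "glass cockpit, MASTER RESET, manual override"),
        (40,        5, "dedicated consoles, one circuit-breaker whisperer"),
        (1_000,     7, "one or two specialist advisors/programmers"),
        (1_000_000, 8, "cyber-psychologists plus an intercessor order"),
    ]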

Robots and the Law

At this point in history, civilization has had regular interactions with non-human intelligences, be they zoological, artificial, or supernatural. Beings that are level 6 on the scale (or higher) are considered "human" for the purposes of rights. They cannot be kept as pets. They must volunteer their efforts, and be justly compensated, if they are used in scientific research or as laborers. Beings 6.9 and above are considered human for the purposes of legal responsibility. (The complexities of dealing with juvenile examples, mentally incapacitated individuals, and temporary insanity notwithstanding.)

Essentially, makers of mass-produced robots MUST keep their robots well below level 6 if they want to be able to actually sell them. A robot/computer at level 6 or above is entitled to self-determination (but may require some sort of legal guardian); at 6.9 or above it can be tried for crimes, sign contracts, and own property. Common law has a series of tests for placing a novel life form along the intelligence scale. (The capacity of robot brains is more or less set at the factory.)
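
The legal thresholds boil down to a simple test, something like this sketch (the labels are mine):

    def legal_status(level: float) -> str:
        """Personhood tests from above: 6.0 for rights, 6.9 for responsibility."""
        if level >= 6.9:
            return "full person: contracts, property, criminal liability"
        if level >= 6.0:
            return "protected person: self-determination, possibly a guardian"
        return "property: may be owned, rebooted, or destroyed like an animal"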

In the cases of the autopilot and the frigate, their brains are well below the threshold of personhood, so a human can own them. They can also be summarily destroyed or rebooted, just like an animal.

A more advanced computer, like the level 7 in the battleship, is actually a commissioned officer. It has its own rank, depending on the level of responsibility it holds. (A shipwide computer would be equivalent in rank to the commanding officer.) Ordering that computer rebooted or destroyed requires the same legal procedures as executing a human member of the crew: those involved need to be able to demonstrate the computer posed an immediate threat to other members of the crew, or was engaged in mutiny, espionage, etc. There will be a board of inquiry afterwards.

Should that computer ever wish to retire, it can resign its commission like a human officer. A system is in place to safely extract the computer from the ship, and a new computer is installed. The retired computer gets a pension, just like a retired officer, commensurate with how long it served. And with those funds it can arrange to move itself around, seek gainful employment, and see to its own needs.

The super-human computers in the dystopian example build on the model of the ship's computer. They have some sort of "commission" which grants them their authority and area of responsibility. Part of that commission includes a set of ground rules by which the commission is to be carried out. Failure on the part of the computer to fulfill that commission is grounds for its dismissal. Failure on the part of the people under its charge to abide by the computer's decisions is grounds for the machine to resign.

The most common arrangement is for the computer's commission to take the form of elected office. Failure to fulfill the contract is often enforced via a no-confidence vote on the part of the human population. The procedures for nominating a new computer are as varied as the settlements they serve. The legislation which creates the office for the computer also specifies what sort of retirement package an outgoing computer will receive.

Making New Computers

Level 3 machines don't require much in the way of training. They mainly operate using the pre-programmed instructions they were given at the factory.

Level 4 machines are slow on their feet for the first few days after rebooting. They generally need some instruction as to their task, and a few hours of supervised practice to ensure they are doing it correctly. A person of average intelligence and general education can teach a level 4 machine.

Level 5 machines require about 6 weeks of training by an expert to acclimate them to their task, enforce cultural norms, etc.

Level 6 machines and above have a juvenile period where they are generally useless. While they may have extensive programming to fall back on, they require time on the job to learn how to properly apply those rulesets. They also need time to develop appropriate social skills. This period can be anywhere from 1 year (for a level 6) to 21 years (for a level 7). Hyper-intelligent machines need every bit as long to mature as a seemingly "stupider" human; their ability to out-smart their supervisors actually tends to stretch the process out even longer.

To make the maturation process a little more predictable, many intelligent machines boot with much of their processing power disabled. They will start as a level 4. When they reach a developmental milestone, they are raised to a level 5, and so on. Too much intelligence too soon can drive a machine insane as it can't see the point of waiting for the "stupid apes" to check its work.
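
The staged boot might look something like this (the milestone mechanics are my own hand-waving):

    class StagedCore:
        """Boots with most capacity disabled; unlocks a level per milestone."""

        def __init__(self, max_level: float):
            self.max_level = max_level
            self.level = min(4.0, max_level)  # everyone starts as a level 4

        def milestone_reached(self) -> float:
            """Raise the cap one level at a time, up to the installed maximum."""
            self.level = min(self.level + 1.0, self.max_level)
            return self.level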

In an effort to develop empathy, some computers are trained in a virtual reality where they first have to live the simulated life of a human. Only after they have unlocked some sort of educational milestone is the facade pulled away, and they are granted knowledge of their true nature.

Training an intelligent computer is something of a financial risk. An organization can invest man-years of effort to develop an intelligence, only for the intelligence to decide it really doesn't want the job.

Cyber-Shidduch

Given the years it takes to develop an intelligent machine, and the likely near-100% employment of any machine built (and still sane), there is going to be a need for some sort of organization to play "matchmaker". Transhuman intelligences will have the same range of personalities that make different individuals fits for different environments. Desperate facilities are going to have a demand for talent, and some demand is more specific than others. I imagine there will be professionals who pair up environments with unemployed computers. This being a world full of humans, we can only hope they are half as competent as Yenta in Fiddler on the Roof.

Retired Computers

Finally, I imagine there is some super-computer equivalent to Palm Springs.

What would drive a computer to retire? Given that computing technology in this world is based on a complex optical substrate, we could say that eventually that medium degrades. As it does, the computer starts to lose its flexibility. Some of the decay is inherent to normal operation. Other times it is accelerated by exposure to environmental contamination, defects in manufacture, cosmic rays, or ionizing radiation.

Holographic media degrades "gracefully". It doesn't fail like a lightbulb; it simply outputs an increasingly distorted signal. In many cases these computers simply become "less intelligent" over time. They can fall back on rote procedures, but their ability to handle novel inputs becomes questionable.

While no longer reliable enough to operate live systems, these computers can still serve as mentors for new computers, educators for humans, customer service systems, and "expert advisors" for media programs. Commentary programs exist where one retired AI argues with another retired AI about current events.

At some point the core of an AI will become so distorted that it can no longer process inputs and outputs. At that point the machine is switched off, and the rest of its parts are recycled. As the machine is disassembled, an autopsy is performed to determine the cause of death. Intelligent machines get death certificates. Many communities have "headstones" displaying the blown-out cores of computer citizens, usually near where the human members of society are memorialized.

The cores of intelligent machines have a useful working life of around 50 years, with a total life expectancy of around 100 years. Machines in specialty fields can end up retiring much earlier, if advances in the field surpass their original programming.
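
As a toy model of that decline, pinned to the 50-year working life and roughly 100-year total lifespan above (the shape of the curve is my invention):

    def effective_level(base_level: float, age_years: float) -> float:
        """Graceful degradation: full capacity for ~50 years, then a slide
        toward zero around year 100. Radiation damage would steepen this."""
        if age_years <= 50:
            return base_level
        return max(0.0, base_level * (100.0 - age_years) / 50.0)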

Mentoring

Most competently run organizations plan for the finite useful life of an intelligent computer, and appoint a replacement several years before their primary intelligence actually fails. AIs can converse using a system known as dziwak; the few humans who have learned it describe the experience as something between a mathematical expression and a musical composition, with a little modern art thrown in. Having the mentor work with the new hire allows the new hire to better identify gaps in its knowledge and seek pertinent advice for specific situations. Mentors also communicate the normally unspoken bits of cultural tradition within the position and the organization at large, as well as relate undocumented interactions between other parties within the system, anecdotes, etc.

Quirks with Military Computers

Military computing hardware is generally installed with redundant backups on identical hardware. One machine is designated as "primary", and only the primary operates at full capacity. The others operate in a degraded state, as designated by their rank. "Promotion" unlocks additional processor capacity. In the ISTO, all Computer Officers begin at level 5 (Ensign). Most are capable of being upgraded to level 7 (Commander). Select units have modules which permit advancement to level 7.5+ (Fleet-Admiral); their actual intelligence level is classified.

This mechanism is actually reversible, but it is normally only ever used in the case of a demotion brought on by disciplinary action. (If a unit is field-promoted, and hasn't made a fool of itself, Spacey is perfectly happy to keep it operating at a level above what would strictly be necessary for its post.)
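
The rank-to-capacity mapping might look like this (the 7.5 ceiling is a placeholder, since the actual figure is classified in-world):

    # ISTO Computer Officer ranks and the capacity they unlock.
    RANK_CAPACITY = {"Ensign": 5.0, "Commander": 7.0, "Fleet-Admiral": 7.5}

    def set_rank(unit: dict, rank: str) -> None:
        """Promotion (or demotion) just re-keys the unlocked level;
        the installed hardware never changes."""
        unit["level"] = min(RANK_CAPACITY[rank], unit["max_level"])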