It’s a typically Phildickian depiction of humanity’s relationship with machines (i.e. bleak and paranoid), but it was extremely prescient in its identification of the problem we are facing now as we move into a new phase of technology. The autofacs are not malicious, or even, like Skynet, motivated by self-preservation.

It’s just that a mistake was made somewhere between programmer and machine and people can no longer get the machines to understand what they need them to do. The machines are operating on bad information and people are shut out of the decision-making.

As machines take over jobs previously carried out by people, the aim is often to take human error out of the equation. Actions can be endlessly repeated, to ideal specifications each time. Decisions can be made in fractions of a second, in response to constantly updated information.

But the potential for errors to be made is still there. It’s simply shifted to the interface between programmer and machine. That is to say, the machines of the future (and increasingly, of the present) are only as good as their code.

As the presence of autonomous machines in industry grows – led by healthcare, delivery, travel and driverless cars – coding decisions are having immense consequences.

In some cases, these are catastrophic. In October 2018, and again in March 2019, Boeing 737 Max airplanes crashed – both times fatally. The cause of both crashes was the same: an automatic anti-stall system that took its information from a single angle-of-attack sensor. The computer was operating on bad information and the pilots were shut out of the decision-making.

In a smaller, domestic setting, the autonomous machine we’re most likely to have seen is a robot vacuum cleaner. If you’ve ever watched one at work, you’ll have experienced the frustration of seeing it vacuum the same already clean area three times, then zoom past an area with very visible debris. You can’t nudge it or point out what it’s missed. You’re shut out of the process. 

Where autonomous machines are concerned, this is a key part of the plan. You don’t make something designed to replace human action and then allow a way for people to jump in and take over as they see fit. Once a machine has been programmed, human involvement in its functioning is very limited. That means that the original programming decisions become more and more important.

People who have self-cleaning ovens often experience a stressful moment the first time they start the self-cleaning programme. Typically, the oven will lock itself and heat up to an extremely high temperature to turn any food residue inside to ash. If you do an internet search for self-cleaning ovens, you’ll find huge numbers of requests for advice from owners who panic at this point and cut the power to the machine.

Domestic, autonomous machines mean that people will have to accept the anxiety that comes with ceding control over something in their own homes. In order to do so, people need to have confidence in the original design and let the machine do its job.

More home appliance manufacturers are launching autonomous tech. Samsung’s latest range of Jet vacuum cleaners will be accompanied by self-emptying dust containers. The user just sticks the vac on the Clean Station and the bin empties itself. This addresses a long-standing flaw of bagless vacuum cleaners: however good the HEPA filtration, people with allergies and dust sensitivity are still exposed to dust when emptying the bin by hand. (Although we do wonder how the Clean Station itself is then emptied.)

Meanwhile, LG is launching a self-sanitising water purifier. The PuriCare is equipped with a UV LED light that activates automatically to keep the tap clear of bacterial build-up.

So far, all autonomous machines for the home are focused on cleaning. Not only is it the simplest function to program but it's also the kind of job we are currently confident enough to automate in our homes. It's low-risk and low-importance.

But that will change as technology advances and consumer confidence improves with it. Then, the onus will be on consumers to research the tech that they allow into their homes and to understand it well enough to control what limited interactions they have with it. This will mean an attitudinal shift for the kind of consumer who finds it a struggle to get through a set-up guide.

But not everyone will be happy to accept a more passive role. Some people will still find ways to re-insert themselves into automated processes. The writer and artist James Bridle, author of New Dark Age, has discussed how people can disrupt autonomous technologies by understanding and taking advantage of their parameters. His famous example is encircling and trapping a self-driving car within a ring of salt. To the car’s sensors, the salt reads as a solid white line on a road – a barrier that its programming will not allow it to cross.

As far as Bridle is concerned, such an act would be a liberating instance of humans refusing to accept their reduced role in the world. But the same methodology is available to people with malicious, rather than mischievous, intentions.

In the end, our concerns about autonomous tech are never about the tech itself. They will always be concerns about the skills and intentions of other people. Human error will never be entirely eliminated and people will always find a way back in.