ISO 10218:2025 + HSE’s AI stance: the new compliance checklist for humanoid pilots in UK industry

Humanoid robots are getting closer to real pilots in warehouses, factories and industrial sites. For UK businesses, the key challenge is no longer just capability. It is governance: how to run a pilot that operations, EHS, procurement and insurers can all support. Two recent signals matter most. First, HSE has said that when AI affects workplace health and safety, employers must carry out a risk assessment and put controls in place, including controls for cyber security threats. Second, ISO updated its core industrial robot safety standards in 2025, with ISO 10218-1 covering the robot itself and ISO 10218-2 covering the application, integration, commissioning, operation, maintenance and decommissioning of robot systems.

That combination changes the conversation for humanoid pilots in the UK. It means a business cannot rely on “the OEM says it is safe” as the whole answer. The robot matters, but the deployment matters just as much: where it operates, how people interact with it, how updates are handled, how faults are recovered, and who has authority to stop it. ISO 10218-2:2025 is explicit that safety extends across design, integration, commissioning, operation, maintenance, decommissioning and disposal of industrial robot applications and cells. HSE’s position points the same way: AI-related workplace risk still sits inside normal health and safety duties and must be risk-assessed and controlled.

There is an important caveat here. ISO 10218 is not itself UK law, and humanoids will not always fit neatly into legacy industrial-robot categories. But in practice, updated ISO standards are highly relevant benchmarks for what “good” looks like in robot safety and system integration, especially when a business is trying to show it has taken a sensible, structured approach. ISO describes the 2025 editions as the comprehensive safety requirements for industrial robots and their applications, with Part 1 focused on the robot as partly completed machinery and Part 2 focused on the complete application and robot cell.

For UK businesses, the practical lesson is simple: a humanoid pilot now needs an evidence pack, not just a demo. The internal question is no longer “can the robot do the task?” It is “can we evidence that this deployment is controlled, supervised, maintainable and safe enough to justify running it on site?” HSE’s AI statement is especially important because it explicitly brings cyber into that answer. A humanoid is not just a physical machine. It is also a software-defined, sensor-heavy, updateable system.

What ISO 10218:2025 means in practice

ISO 10218-1:2025 deals with the robot itself. ISO says it addresses safety requirements specific to industrial robots as partly completed machinery, including inherently safe design, risk reduction measures and information for use. In plain English, this is the supplier-side layer: what the machine is, how it is designed, and what the OEM must tell you about safe use.

ISO 10218-2:2025 is where things become much more relevant for buyers and operators. ISO says Part 2 covers the integration of industrial robot applications and robot cells, including design, integration, commissioning, operation, maintenance, decommissioning and disposal, plus the information needed for those stages. That is the right mental model for humanoids in industry: not a gadget arriving on site, but a system being integrated into a live environment with people, workflows, routes, tools and change control.

That is why the compliance story for humanoids should not be framed as “is this robot compliant?” on its own. It should be framed as “is this application controlled?” A humanoid may be perfectly capable in one zone, on one floor, with one route and one set of tasks, and completely unsuitable in another. The application is the thing you must govern. ISO 10218-2:2025 is effectively telling the market to think in that full-system way.

Why HSE’s AI stance matters more than many operators realise

HSE’s January 2026 statement is short, but commercially significant. It says that where AI impacts workplace health and safety, a risk assessment is required and appropriate controls must be put in place, and that this should include addressing cyber security threats. HSE also says it wants AI risk to reach a point where it is managed like any other risk: sensibly, proportionately and pragmatically.

That is a useful signal for any UK business considering humanoids. It means the right governance model is not “let the robotics team handle it.” It is cross-functional from the start. EHS, operations, engineering and IT/security all need a view because the deployment risk is both physical and digital. A humanoid with remote access, telemetry, software updates and AI-enabled perception creates a combined safety + cyber problem, not two separate ones. HSE’s statement does not provide a detailed robot-specific rulebook, but it does make clear that employers cannot treat AI risk as exempt from ordinary workplace duties.

The new compliance checklist for humanoid pilots in UK industry

The most useful response is not to overcomplicate things. Build a pilot checklist that reflects what the standards and HSE are pushing you toward.

1. Define the application, not just the robot

Start by documenting the exact task or task pack. What is the robot doing? Where is it doing it? What objects is it handling? What routes, workstations or handoff points are in scope? ISO 10218-2:2025 is application-focused, so the pilot should be too.

2. Write down the operating envelope

What surfaces, gradients, lighting conditions, aisle widths, congestion levels and speed limits are acceptable? Where is the robot allowed to operate, and where is it not? This is one of the easiest ways to turn “interesting technology” into a controlled deployment. ISO’s integration and operation language makes this kind of definition central, not optional.
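One way to make the envelope enforceable rather than aspirational is to write it down as structured data. The sketch below is a minimal illustration; the field names, zone IDs and limit values are assumptions for the example, not values drawn from ISO 10218 or HSE guidance.

```python
from dataclasses import dataclass

@dataclass
class OperatingEnvelope:
    """Hypothetical operating-envelope record for a humanoid pilot zone."""
    max_speed_ms: float        # maximum permitted travel speed, m/s
    min_aisle_width_m: float   # narrowest aisle the robot may enter, metres
    min_lux: int               # minimum acceptable lighting level
    permitted_zones: set       # zone IDs where operation is allowed

    def permits(self, zone: str, speed_ms: float,
                aisle_width_m: float, lux: int) -> bool:
        """Return True only if every envelope condition is satisfied."""
        return (
            zone in self.permitted_zones
            and speed_ms <= self.max_speed_ms
            and aisle_width_m >= self.min_aisle_width_m
            and lux >= self.min_lux
        )

envelope = OperatingEnvelope(
    max_speed_ms=1.0,
    min_aisle_width_m=1.8,
    min_lux=200,
    permitted_zones={"GF-PICKING", "GF-DESPATCH"},
)

print(envelope.permits("GF-PICKING", 0.8, 2.0, 350))  # True: inside envelope
print(envelope.permits("MEZZANINE", 0.8, 2.0, 350))   # False: zone out of scope
```

The point is not the code itself but the discipline it forces: every "is the robot allowed here?" question gets a yes/no answer against written limits, which is exactly the kind of definition the integration-focused parts of the standard push toward.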

3. Separate supplier evidence from site evidence

From the supplier side, gather the robot’s safety and user information, operating constraints, emergency-stop behaviour and maintenance requirements. From the site side, produce your own readiness evidence: routes, charging location, signage, supervision model, staffing, and local risk controls. ISO 10218-1 is about the robot as supplied; ISO 10218-2 is about what happens when it is integrated into your real environment.

4. Make cyber part of the safety case

Who can remotely access the robot? How are credentials managed? How are updates approved and rolled back? What telemetry or video is recorded, where is it stored, and who can view it? HSE explicitly says cyber security threats should be addressed where AI affects workplace safety, so this should be in the pilot paperwork from day one.
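Those questions can be captured as a simple register so that "undecided" is visibly different from "decided". The structure and field names below are illustrative assumptions, not an HSE-mandated schema.

```python
# Hypothetical cyber-controls register for the pilot paperwork.
# A value of None means the question has not yet been answered.
cyber_controls = {
    "remote_access": {
        "permitted_roles": ["oem_support", "site_engineering"],
        "authentication": "vpn_with_mfa",   # how remote sessions are secured
        "session_logging": True,            # are remote sessions recorded?
    },
    "updates": {
        "approval_required_by": "engineering_manager",
        "rollback_tested": True,            # can an update be reversed on site?
    },
    "telemetry": {
        "video_recorded": True,
        "retention_days": 30,
        "viewers": ["ehs_lead", "site_manager"],
    },
}

def unanswered_questions(register: dict) -> list:
    """Flag any control field still set to None, i.e. still undecided."""
    return [
        f"{area}.{field}"
        for area, fields in register.items()
        for field, value in fields.items()
        if value is None
    ]

print(unanswered_questions(cyber_controls))  # [] -> every question answered
```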

5. Record training and competence

Who is trained to supervise, intervene, isolate, restart or stop the robot? Who is allowed to change settings or approve updated task parameters? ISO 10218-2 includes operation and maintenance in scope, which means competence and information for use are part of the safety picture.

6. Time-box the pilot and define acceptance criteria

A good humanoid pilot should have clear gates: commissioning, early safe operation, and performance/acceptance. What counts as a pass, partial pass or stop? Who signs off? What intervention rate, uptime or incident threshold is acceptable? The standards do not prescribe your commercial site acceptance test (SAT) process, but their emphasis on commissioning, operation and information for use supports a documented acceptance approach.
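A gate only works if its thresholds are written down before go-live. The sketch below shows one way to encode gates as explicit pass/stop checks; the gate names, metrics and limit values are assumptions for illustration, not figures from ISO 10218 or HSE.

```python
# Hypothetical pilot acceptance gates. "min_" metrics must meet or exceed
# the limit; "max_" metrics must not exceed it.
GATES = {
    "commissioning":  {"max_open_safety_actions": 0},
    "safe_operation": {"max_interventions_per_shift": 2, "max_incidents": 0},
    "acceptance":     {"min_uptime_pct": 95.0, "max_incidents": 0},
}

def evaluate_gate(gate: str, observed: dict) -> str:
    """Compare observed metrics against the gate's written thresholds."""
    for metric, limit in GATES[gate].items():
        value = observed[metric]
        if metric.startswith("min_") and value < limit:
            return "stop"
        if metric.startswith("max_") and value > limit:
            return "stop"
    return "pass"

print(evaluate_gate("acceptance",
                    {"min_uptime_pct": 97.2, "max_incidents": 0}))       # pass
print(evaluate_gate("safe_operation",
                    {"max_interventions_per_shift": 5, "max_incidents": 0}))  # stop
```

A named role then signs against the gate result, which turns "the pilot went well" into an auditable decision.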

7. Plan failure recovery before go-live

If the robot faults, freezes, loses connectivity or behaves unexpectedly, what happens next? Who isolates it? Who attends site? Is there a manual fallback? How is the asset removed or recommissioned? For humanoids, post-fault recovery is part of compliance maturity, not just a support issue. ISO 10218-2’s inclusion of maintenance and decommissioning reinforces that lifecycle view.

What a good evidence pack should contain

A sensible UK humanoid pilot pack now looks something like this:

A short application description explaining the task, area, people involved and operating assumptions.
A site readiness checklist covering routes, surfaces, access, charging and supervision.
A supplier evidence pack with robot information, constraints and maintenance needs.
A risk assessment and controls summary, including cyber-related controls where relevant.
A training and competence record.
A pilot acceptance sheet with measurable pass/fail criteria and named sign-off roles.
A fault and recovery process for intervention, incident handling and removal/restart.
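A pack like this is easiest to govern as a simple completeness check before the pilot is allowed to start. The document names below mirror the list above; the schema itself is an illustrative assumption, not a mandated ISO or HSE format.

```python
# Hypothetical evidence-pack completeness check.
REQUIRED_DOCS = {
    "application_description",
    "site_readiness_checklist",
    "supplier_evidence_pack",
    "risk_assessment_and_controls",
    "training_and_competence_record",
    "pilot_acceptance_sheet",
    "fault_and_recovery_process",
}

def missing_documents(pack: set) -> set:
    """Return the documents still missing before the pilot can proceed."""
    return REQUIRED_DOCS - pack

submitted = {
    "application_description",
    "supplier_evidence_pack",
    "risk_assessment_and_controls",
}
print(sorted(missing_documents(submitted)))
```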

None of that is especially glamorous. That is exactly the point. The market is moving away from demo-led thinking and toward proof-led deployment. HSE’s position and the 2025 ISO updates both reinforce that direction.

What UK businesses should do now

If humanoids are on your roadmap, the best next step is not to wait for perfect regulation. It is to upgrade the quality of your pilot process. Treat the deployment as a governed system, not a one-off technology trial. Use ISO 10218’s robot/application split as the right frame: what the supplier provides, and what your site must still control. Use HSE’s AI stance as the reminder that risk assessment still applies, and that cyber belongs inside the same conversation.

For UK industry, that is the real shift. The businesses that adopt humanoids well will not be the ones with the best demo. They will be the ones with the best evidence pack.

The Robot Group's View

At The Robot Group, we think the next wave of humanoid adoption in the UK will be won on readiness, not rhetoric. The winners will be the businesses that define the task properly, document the operating envelope, align EHS and IT early, and treat acceptance and recovery as seriously as the headline capability.

Because the real question is no longer whether a humanoid can do useful work.

It is whether your business can prove that it is ready to deploy one well.
