AI, Automation and the Human Premium: Why Crewing Still Begins With People

AI is reshaping maritime operations, but crewing is different: safety, fatigue, morale and accountability don’t fit neat models. Practitioners across the industry agree AI can help by automating routine work, but only if it strengthens, rather than replaces, human judgement at sea.
Artificial intelligence is steadily reshaping maritime operations. Predictive maintenance, voyage optimisation, compliance automation and digital reporting are no longer pilots—they’re becoming normal practice. But when it comes to crewing, the stakes are higher and the variables messier: safety, fatigue, morale, competence, culture, and accountability.
The industry’s direction of travel is clear. So is the warning from practitioners across the ecosystem: AI can improve crewing, but only if it strengthens—not replaces—human judgement.
Kris Vedat, CEO of SmartSea, is blunt about what changes first. “Over the next three to five years, I don’t think AI will fundamentally remove crew from vessels, but it will change how their time is spent,” he says—shifting routine monitoring and reporting into automation, “enabling the crew to focus more on… safety critical decisions, operational judgment.”
That emphasis on decision support, rather than decision substitution, is repeated by Steven Jones, founder of the Seafarers Happiness Index, and by Master Mariner Captain Pradeep Chawla—each from a different angle, but with the same core principle: the human remains the final safety barrier.
The Human Element Isn’t a Footnote — It’s the Main Plot
Maritime safety data keeps pointing back to people, not technology.
EMSA’s latest high-level analysis of EU-reported casualties and incidents finds that when human action and human behavioural contributing factors are combined, the human element accounts for 80.1% of investigated casualties and incidents (2014–2023).
That statistic matters in an AI-crewing discussion because it frames the real risk: not whether technology can “work,” but whether it changes behaviours and decisions at the sharp end. Captain Chawla describes the sharp end in practical terms. “Crewing decisions ashore directly affect the safety and operational efficiency of the ship,” he says. “Ultimately, the Master has to get things done… the people at the ‘sharp end’ are very critical to the success or failure of the task.”
Why “Perfect Matching” Is Harder Than Any Scheduling Algorithm
Shipping differs from most industries: the working team is constantly refreshed, often with limited overlap. “The ‘team’ on board practically changes every few weeks,” Captain Chawla notes. “Matching the right person to the right ship is critical… [and] it is also important… to judge… cultural compatibility… and even compatibility between individuals.”
Here’s the rub: those compatibility factors are precisely what AI struggles to quantify. AI is “dependent on the quality of the data… garbage in, garbage out,” Chawla says. But even with good data, “humans are not predictable… the mood of an individual is affected by so many factors… I do not think algorithms can achieve any reliability in understanding people in the near future.”
Steven Jones frames the same issue as a “context deficit.” AI may bring precision and can optimise logistics, but it cannot witness leadership, mentorship, or crisis behaviour in the way human decision-makers can.
Fatigue: The Statistic That Should Scare Everyone
If there is one “human factor” that feeds directly into casualty risk, it is fatigue. The International Transport Workers’ Federation (ITF) states: “Estimates suggest that 25% of marine casualties are caused by fatigue.” That matters because crewing, rotation planning, and administrative burden directly shape fatigue exposure.
Jones argues that measurable value exists only when automation returns time and predictability to seafarers—“a shorter to-do list or a more predictable schedule.” Otherwise, a tool is experienced onboard as “a cost, not a value.”
Captain Chawla gives a concrete example: “Take the task of filling up the crew list in different formats for each port. An AI tool could… cut down the time by freeing the person to take more rest.” But he also draws a sharp line between visible fatigue and hidden mental stress: “Signs of fatigue are easy to recognize… Mental stress is far more difficult… Very few companies invest… The reliance is on the Master reporting them to the company.”
The Officer Shortage Problem Makes “Automation Temptation” Worse
The industry is simultaneously trying to digitise and to fill berths. The BIMCO/ICS Seafarer Workforce Report warns of a shortfall of 26,240 STCW-certified officers (as of 2021) and projects a need for an additional 89,510 officers by 2026 if training and recruitment don’t scale with demand. In that context, automation becomes attractive as a pressure valve: reduce onboard workload, improve productivity, and support stretched crewing departments.
Vedat’s position is that AI should be used to elevate roles, not remove them: “AI is capable of removing the repetitive low value tasks… [and] elevat[ing] the crew’s role to more oversight and judgment.”
But both Jones and Chawla warn that if the industry tries to automate too quickly, it risks replacing human buffers with brittle systems—right when the workforce supply is already strained.
Accountability: You Can’t Put an Algorithm in Front of a Court
This is where the AI conversation stops being technical and becomes operationally existential.
Vedat is explicit: “The accountability always has to sit with the human… the master of the vessel owns the final call… most successful deployments… improv[e] situational awareness, but not removing responsibility.”
Steven Jones makes the legal reality unavoidable: “We cannot yet put an algorithm in front of a Court of Inquiry or a Port State Control officer… accountability must rest with the person who had the power to override the machine.”
Captain Chawla anchors this in the ISM Code: “Master does have the overriding authority for safety granted under ISM… Even if the situation is not as serious, Master must have the authority to change any crew member that is unsuitable.”
Data Is the Prerequisite
AI does not fix messy foundations; it accelerates their consequences. Vedat argues the industry needs to “walk before we can run,” because “AI is only as good as the data feeding into it,” and shipping still lacks aviation-like standards.
Steven Jones describes the onboard cost of fragmented systems as a “Data Debt tax”: the repeated re-entry of sea time, certificates, passports and forms because systems don’t sync. “An AI built on fragmented data is just a high-speed way to make the wrong decisions.”
Fairness and Morale: The Career-Decision Red Line
Of all the areas to automate, promotions and rotations are the most emotionally charged, and the most corrosive when perceived as unfair.
“Seafarers… do not like algorithms deciding their careers,” Captain Chawla says. “Algorithms [can support] decision making… but the final decision of promotions and rotations must be decided by humans.” And when decisions feel unfair: “The perception of unfairness… destroys the morale of the workforce… [and] affects the safety culture on board.”
Steven Jones draws the governance line clearly: if humans can’t explain why a career-impacting decision was made, the tool has shifted from decision support to decision-maker, and trust collapses.
The Human Premium
The most realistic future is not “AI versus humans” but a clearer division of labour.
In five years, AI can own more of the what and when: paperwork automation, certificate tracking, travel logistics optimisation, fatigue-risk flagging, and scheduling. The who and why must remain human: leadership chemistry, crisis behaviour, compassionate exceptions, and accountability.
As Steven Jones puts it, AI “doesn’t have judgement; it has probability.” And as Captain Chawla concludes: “I do not think that in the near future AI can become reliable enough to replace the human being completely. The most critical non-technical issue is… trusting technology to this extent.”
For shipping, that is the central trade-off: AI can automate tasks, but it cannot automate trust.
Source: seanews.co.uk. Shajahan Ahmed