Artificial intelligence programs used by the Department for Work and Pensions (DWP) are raising “serious concerns”, with some AI prototypes having been dropped by the department. Documents released to The Guardian show that one AI, called ‘white mail’, has been reading correspondence from benefit applicants and claimants and supposedly prioritising the most vulnerable cases, without claimants’ knowledge.

An internal data protection impact assessment said letter writers “do not need to know about their involvement in the initiative”, according to The Guardian’s findings. It also said consulting individuals about how their data is processed is “not necessary as … these solutions will increase the efficiency of the processing”.

Sensitive information, including national insurance numbers, dates of birth, addresses, telephone details, email addresses, details of benefit claims, health information, bank account details, racial and sexual characteristics, and details about children such as their dates of birth and any special needs, can be included in the correspondence, according to the assessment. White mail is one of many public sector algorithms yet to be logged on the transparency register for central government AIs.

The revelation has led charities and others who work with benefit claimants to voice “serious concerns” about how the program handles personal data. Meagan Levin, policy and public affairs manager at the charity Turn2us, said the system “raises concerns, particularly around the lack of transparency and its handling of highly sensitive personal data, including medical records and financial details. Processing such information without claimants’ knowledge and consent is deeply troubling.”

The documents show that data is encrypted before the originals are deleted, and is held by the DWP and its cloud computing provider. Officials have said the AI does not make decisions and processes no data itself; instead it is complementary to existing systems, flagging correspondence which is then reviewed by agents.

“Prioritising some cases inevitably deprioritises others, so it is vital to understand how these decisions are made and ensure they are fair,” said Levin. “The DWP should publish data on the system’s performance and introduce safeguards, including regular audits and accessible appeals processes, to protect vulnerable claimants.

“Transparency and accountability must be at the heart of any AI system to ensure it supports, rather than harms, those who rely on it.”

Shelved AI programs include Aigent, designed to speed up personal independence payment (PIP) decisions by summarising evidence for inclusion in decision letters, and A-cubed, which aimed to give work coaches quick access to advice to support people into employment.