It was a Thursday morning when the quarantine digest arrived. For an operations manager at a mid-size HR consulting firm, it looked like every other notification she received from the company's email infrastructure: a clean layout, a count of held messages, a blue "Manage quarantined email" button at the bottom. The subject line even referenced a specific distribution list she recognized. One message was waiting. All she had to do was click "Allow."
She almost did.
One Quarantined Message, Three Hidden Traps
The email presented itself as a routine quarantine report from the organization's email hosting provider. It claimed one message had been held, a document settlement notice from what appeared to be a Kenyan HR professional organization. The layout was precise: a header warning not to forward the email (a nice trust-building touch), a bold count of quarantined items, a 31-day auto-deletion timer, and three clear calls to action. "Allow." "View." "Manage quarantined email."
Every one of those buttons pointed to co[.]quarantine[.]serverdata[.]net, loaded with a different JSON Web Token in the query string. The JWTs were properly formatted, with key IDs, issuers, and expiration timestamps that would look right to anyone who bothered to inspect them. They mimicked the exact token structure a real quarantine system would generate.
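A well-formed JWT is easy to inspect and just as easy to forge: the header and payload are only base64url-encoded JSON, and nothing about plausible `kid`, `iss`, or `exp` claims proves who minted the token. The sketch below (hypothetical token values, standard library only) shows how to decode those claims without verifying the signature, which is exactly why their presence alone should never build trust.

```python
import base64
import json

def b64url_decode(segment: str) -> bytes:
    """Decode a base64url segment, restoring the stripped padding."""
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

def inspect_jwt(token: str) -> tuple[dict, dict]:
    """Decode a JWT's header and payload WITHOUT verifying the signature.

    A structurally valid token proves nothing about its origin: anyone
    can emit a JWT carrying a plausible kid, issuer, and expiry.
    """
    header_b64, payload_b64, _sig = token.split(".")
    header = json.loads(b64url_decode(header_b64))
    payload = json.loads(b64url_decode(payload_b64))
    return header, payload

def make_demo_token() -> str:
    """Build a hypothetical token mimicking the structure described above."""
    enc = lambda d: base64.urlsafe_b64encode(
        json.dumps(d).encode()).rstrip(b"=").decode()
    header = {"alg": "HS256", "kid": "portal-key-1"}
    payload = {"iss": "quarantine.example.net", "exp": 1774000000}
    return f"{enc(header)}.{enc(payload)}.fakesignature"

if __name__ == "__main__":
    hdr, claims = inspect_jwt(make_demo_token())
    print(hdr["kid"], claims["iss"])
```

The token here is fabricated for illustration; the point is that "properly formatted" and "legitimate" are entirely different properties.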
But one link gave the game away. The middle button displayed membership@ihrm[.]or[.]ke as its visible text, suggesting the quarantined message came from a legitimate sender at the Institute of Human Resource Management in Kenya. Click it, though, and the destination was the same attacker-controlled quarantine portal. The display text was a mask, designed to make the recipient think she was previewing a sender address when she was actually walking into a credential harvesting page.
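Display-text masking of this kind is mechanically detectable: if an anchor's visible text is an email address whose domain never appears in the link target, the label is lying. A minimal detector, using only the standard library and a hypothetical sample anchor, might look like this:

```python
import re
from html.parser import HTMLParser

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+")

class LinkMaskDetector(HTMLParser):
    """Flag anchors whose visible text shows an email address while
    the href points at an unrelated host."""

    def __init__(self):
        super().__init__()
        self.href = None          # href of the anchor being parsed
        self.text = []            # visible text fragments inside it
        self.suspicious = []      # (visible_text, href) mismatches

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.href = dict(attrs).get("href", "")
            self.text = []

    def handle_data(self, data):
        if self.href is not None:
            self.text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self.href is not None:
            visible = "".join(self.text).strip()
            if EMAIL_RE.fullmatch(visible):
                domain = visible.split("@")[1].lower()
                if domain not in self.href.lower():
                    self.suspicious.append((visible, self.href))
            self.href = None

# Hypothetical anchor mirroring the deception described above.
html = '<a href="https://co.quarantine.example.net/?tok=x">membership@ihrm.or.ke</a>'
detector = LinkMaskDetector()
detector.feed(html)
print(detector.suspicious)
```

The heuristic is deliberately coarse (substring match on the domain), but it catches exactly the pattern this campaign used: a trusted sender address painted over an attacker-controlled destination.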
A Portal Built to Be Believed
Here is where the attack gets clever. Clicking any of those JWT links did not land on a crude login form or a blank page with a password field. It loaded a fully functional quarantine management interface, complete with tabbed navigation ("Quarantined emails," "Safe and Blocked senders," "Settings," "Email aliases"), bulk action controls, and an advanced search dropdown.
The portal even displayed the original lure message inside itself: a line item showing "Certain Spam" as the category, the subject "Nnedv-Doc_Settlement Attached for Immediate Review and Approval," the sender membership@ihrm[.]or[.]ke, and a timestamp of 03/18/26 at 23:48. Five action buttons sat next to it. "View." "Allow." "Allow sender." "Block." "Delete."
This was not a phishing page pretending to be a login screen. It was a phishing page pretending to be the security tool itself. The attacker built an entire quarantine management experience (T1534) so the victim would feel like she was performing a routine administrative task when she was actually handing over credentials. According to the Verizon DBIR 2024, credential theft remains the primary objective in over 40% of breaches, and attacks that embed themselves within trusted workflows are among the hardest to detect.
The Recipient Was Wrong, and So Was Everything Else
The email was addressed to allstaff@nnedv[.]org in both the subject line and the To header. But it landed in the inbox of someone at an entirely different organization, a small HR consulting firm with no connection to the nonprofit. That mismatch, a quarantine digest for one organization delivered to a mailbox at another, was the first fracture in the facade.
The sending infrastructure told a contradictory story. SPF passed. DMARC passed. The domain serverdata[.]net had been registered with GoDaddy since 2002 and ran its own name servers. Nothing about the envelope screamed forgery. But the DKIM check on the final relay hop returned a body hash mismatch, meaning the message body had been altered after the original signature was applied. An earlier hop showed DKIM passing cleanly. Something between the two modified the content in transit.
That split authentication result, passing at the origin and failing at the destination, is consistent with gateway tampering or message rewriting. The relay chain included a hop through inkyphishfence[.]com, a known email security inspection service, which can legitimately alter messages. But combined with the recipient mismatch and the display-text deception, the DKIM failure became the third signal in a pattern that pointed to one conclusion.
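Each relay that evaluates DKIM prepends its own Authentication-Results header, so a message's headers preserve a hop-by-hop verdict history. A small sketch (hypothetical header strings, not the actual message) of how to surface the split result described above:

```python
import re

def dkim_verdicts(auth_headers: list[str]) -> list[str]:
    """Extract the dkim= verdict from each Authentication-Results
    header. Headers are prepended in transit, so index 0 is the
    final (delivering) hop and later indices are earlier hops."""
    verdicts = []
    for header in auth_headers:
        m = re.search(r"dkim=(\w+)", header)
        verdicts.append(m.group(1) if m else "none")
    return verdicts

def split_dkim(auth_headers: list[str]) -> bool:
    """True when DKIM passed at one hop but failed at another:
    consistent with the body being modified in transit."""
    v = dkim_verdicts(auth_headers)
    return "pass" in v and "fail" in v

# Hypothetical headers mirroring the pattern in this incident:
# body-hash failure at delivery, clean pass at an earlier relay.
hops = [
    "mx.final.example; dkim=fail (body hash did not verify)",
    "relay.mid.example; dkim=pass header.d=serverdata.net",
]
print(split_dkim(hops))
```

Real Authentication-Results parsing is messier (multiple dkim= clauses, comments, trust boundaries between hops), but the core signal is just this disagreement between hops.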
The IRONSCALES platform correlated all three anomalies. The community detection engine, drawing on resolutions of similar incidents across its global network, flagged the message with high confidence. Three affected mailboxes were quarantined within 45 seconds of the initial report (T1204.001). The "Allow" button never got clicked.
Indicators of Compromise
| Type | Indicator | Context |
|---|---|---|
| Domain | co[.]quarantine[.]serverdata[.]net | Attacker-controlled quarantine portal |
| Domain | serverdata[.]net | Sending domain (registered 2002, SPF/DMARC pass) |
| Email | reports@serverdata[.]net | Envelope sender |
| Email | membership@ihrm[.]or[.]ke | Display text used as link mask |
| IP | 64[.]78[.]48[.]58 | Outbound relay (out.exch030.serverdata.net) |
| IP | 64[.]78[.]32[.]85 | Internal relay (relayco22b.serverdata.net) |
When the Security Tool Is the Disguise
This attack did not try to look like a bank, a shipping company, or a SaaS vendor. It tried to look like the security infrastructure itself. That is a meaningful escalation. Organizations train employees to interact with quarantine systems, to review held messages, to click "Allow" when something is legitimate. This campaign (T1566.002) weaponized that exact training.
The FBI IC3 2024 report documented over $2.9 billion in losses from business email compromise, a category that increasingly includes credential harvesting as the initial access vector. The IBM Cost of a Data Breach 2024 report puts the average cost of a phishing-initiated breach at $4.88 million, with stolen credentials as the most common entry point. When the phishing page looks like a security tool, the traditional advice ("check the URL, look for typos, verify the sender") falls apart. The URL had a plausible subdomain. There were no typos. The sender domain was 24 years old.
What stopped this was not a URL blocklist or a static signature. It was behavioral correlation: a recipient that did not match the envelope, a link label that did not match its destination, and an authentication result that contradicted itself across relay hops. Each signal alone might have been explainable. Together, they were not.
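The correlation logic can be reduced to a toy model: treat each anomaly as a boolean signal and act only when they co-occur. This is a deliberately simplified sketch of the idea, not the vendor's actual detection engine, and the threshold is an assumption for illustration.

```python
from dataclasses import dataclass

@dataclass
class Signals:
    recipient_mismatch: bool  # digest addressed to another org's list
    link_mask: bool           # anchor text contradicts its href
    dkim_split: bool          # pass at origin, body-hash fail at delivery

def verdict(s: Signals) -> str:
    """Each signal alone may be explainable; two or more together
    cross the quarantine threshold in this sketch."""
    score = sum([s.recipient_mismatch, s.link_mask, s.dkim_split])
    if score >= 2:
        return "quarantine"
    if score == 1:
        return "review"
    return "deliver"

print(verdict(Signals(True, True, True)))
```

A single DKIM failure might be a legitimate inspection gateway rewriting the body; a single odd recipient might be a forwarding rule. It is the conjunction that stops being innocent.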
Security teams should audit their quarantine notification templates and ensure employees know exactly what their legitimate digest looks like, down to the sending domain and URL structure. If your quarantine system sends notifications from one domain and the management portal lives on another, document that publicly for your users. And if a quarantine digest arrives for a distribution list that does not belong to your organization, treat it as hostile, no matter how polished the portal behind it looks.
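One way to operationalize that documentation is a strict allowlist check: a quarantine-digest link is trusted only if its hostname exactly matches a host your organization has published as its real portal. The hostnames below are hypothetical placeholders.

```python
from urllib.parse import urlparse

# Hypothetical allowlist: the hosts your org documents as the ONLY
# legitimate locations of its quarantine management portal.
LEGIT_QUARANTINE_HOSTS = {"quarantine.example-mail.com"}

def digest_link_ok(url: str) -> bool:
    """Trust a digest link only on an exact hostname match.
    Substring or suffix checks would let lookalike subdomains
    (e.g. quarantine.<anything>.net) slip through."""
    host = urlparse(url).hostname or ""
    return host.lower() in LEGIT_QUARANTINE_HOSTS

print(digest_link_ok("https://co.quarantine.serverdata.example/allow?jwt=x"))
print(digest_link_ok("https://quarantine.example-mail.com/portal"))
```

Exact matching is the important design choice here: the campaign's portal lived on a plausible-sounding subdomain, which is precisely what looser matching rules fail to catch.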