What is Social Engineering?
Social engineering is the practice of manipulating people into bypassing security controls by exploiting psychology rather than software. Learn the six Cialdini principles attackers stack, the named groups running modern campaigns, and the eight-layer defense framework that stops them.
By Dmytro Koziatynskyi
Social engineering is the human-layer breach driver across every channel
Social engineering is the practice of manipulating a person into performing an action or disclosing information that compromises security, by exploiting human psychology rather than software flaws. It is the parent category that contains phishing, spear phishing, vishing, smishing, pretexting, business email compromise, deepfake video fraud, USB drop attacks, and physical tailgating. The delivery channel changes; the manipulation engine stays constant. Attackers route around firewalls, EDR, and secure email gateways because the target is human judgment under pressure, and human judgment is predictable when the right cognitive levers are pulled in the right sequence.
The Verizon 2024 Data Breach Investigations Report attributes 68% of breaches to a non-malicious human element, and the FBI Internet Crime Complaint Center reported $12.5 billion in cyber-enabled fraud losses across 2023, with business email compromise alone accounting for $2.9 billion of that figure. IBM Cost of a Data Breach 2024 placed the average global breach at $4.88 million, with social-engineering-rooted incidents costing more than the mean because they tend to involve insider-level access and longer dwell time. The volume is industrial, the cost curve is rising, and the per-target cost to the attacker keeps falling as AI tooling commoditizes the craft.
Modern social engineering is built on the six influence principles documented by Robert Cialdini: authority (a request from a senior figure or trusted institution), urgency (a deadline that punishes hesitation), reciprocity (a small favor that creates a return obligation), scarcity (a limited window or a one-time link), social proof (others have already complied), and liking (rapport built before the ask). Attackers rarely use one principle alone. They stack two or three in a single pretext to compress the target decision window below the threshold required for verification. The 2023 MGM Resorts breach, the 2022 Twilio campaign that hit more than 130 companies, and the FACC and Toyota Boshoku wire frauds each show the same stack: authority plus urgency plus social proof, delivered through whichever channel the target trusts.
If you are a buyer reading this page, you almost certainly already run an annual security awareness video, a phishing simulation tool, and a written policy. That stack catches the entry-level pretexts. The expensive pretexts (the ones that drive ransomware deployment, eight-figure wire transfers, and full domain compromise) walk through it. The rest of this page covers the modern attack chain, three named incidents with citable losses, the eight defensive layers that actually work, and the role-based exercise approach that builds the verification reflex you need at the human layer.
How social engineering attacks unfold
Reconnaissance and target selection
Attackers harvest target intelligence from LinkedIn, breach dumps, GitHub repositories, SEC filings, conference recordings, podcast transcripts, and corporate directories. LinkedIn Sales Navigator, Hunter.io, Apollo, and ZoomInfo turn a target organization into a roster with names, titles, reporting lines, tenure, and email patterns. Specialized groups like Scattered Spider, FIN7, and TA453 maintain active dossiers on finance staff, IT help-desk technicians, executive assistants, and vendor-management roles because those positions hold the levers attackers want. The 2023 MGM Resorts breach reportedly began here, with the attacker identifying a help-desk employee through LinkedIn before placing a vishing call to that desk.
Pretext crafting
The attacker builds a story that exploits authority, urgency, scarcity, reciprocity, social proof, or rapport, often layering two or three at once. AI tooling now drafts pretexts in fluent business English, customized to the target role, current calendar context, and recent corporate events. Templates rotate by season and trigger: vendor-invoice changes after a real merger announcement, mandatory MFA enrollment during a tooling migration, mandatory benefits update during open enrollment, charity matching during disaster news cycles. Cofense and Proofpoint annual reports both flag calendar invites, OAuth consent pages, and QR code lures as the fastest-growing pretext containers of the past 18 months.
Channel selection
The attacker picks the channel the target trusts most for the requested action. Email is the default for written approvals, SMS for password resets and shipping notices, voice for help-desk pretexts and finance authorization, in-person USB drops or tailgating for facilities and warehouse staff, and live video for executive impersonation now that deepfake tooling can drop a face into a Zoom or Teams call in real time. Multi-channel sequencing is the 2024 default. An email lands first, a vishing call follows within hours referencing the message, and a deepfake video call closes the trust loop on a high-value request. The 2024 Arup $25 million wire fraud ran exactly this sequence.
Trust-building or pressure cycle
Long-cycle attackers like TA453 build rapport over weeks of benign exchanges before the malicious ask, so the target is conditioned to comply. Short-cycle attackers like Scattered Spider compress the same effect into minutes by stacking authority plus urgency plus social proof in one call. Both patterns work because they remove the pause that produces verification. The pretext often references a real internal project, a real colleague by name, a real vendor relationship, or a real upcoming event the attacker pulled from an autoresponder, a calendar share, or a press release. By the time the target processes the ask, the cognitive cost of refusing feels higher than the cost of complying.
Payload delivery
The action requested falls into a small set of categories: credentials (entered into a lookalike portal or read aloud on a call), money (wire transfer, gift cards, payroll redirect, vendor invoice change), data (customer lists, source code, HR rosters, financial records), or access (MFA reset, VPN configuration, OAuth consent grant, badge handoff, USB plug-in). Scattered Spider specializes in MFA reset and VPN configuration handoffs that deliver immediate domain access. FIN7 specializes in finance-team credential harvest and follow-on ransomware staging. The 0ktapus / Scatter Swine cluster specialized in SMS-driven credential capture across 130 plus companies including Twilio, Cloudflare, MailChimp, and DoorDash.
Follow-on monetization
Once the payload is captured, the attacker pivots. Common plays: silent mail forwarding rules to monitor wire instructions, OAuth consent abuse to grant persistent Microsoft 365 or Google Workspace access, ransomware staging from the compromised endpoint, lateral SSO movement into payroll and treasury, harvested templates for clone phishing into the supply chain, or direct sale of access on initial-access broker forums. The MGM Resorts breach moved from help-desk vishing to ransomware deployment inside 36 hours, with reported losses near $100 million. The FBI IC3 average single-loss BEC figure crossed $130,000 in 2023, with several published cases above $50 million.
Real-world social engineering case studies
2023 MGM Resorts vishing-to-ransomware, ~$100M loss
In September 2023, the threat group tracked as Scattered Spider (also UNC3944, Octo Tempest) breached MGM Resorts International through a single vishing call to the IT help desk. The attackers identified a target employee through LinkedIn, called the help desk impersonating that employee, and convinced the technician to reset the account credentials and MFA enrollment. Inside ten minutes the attackers held privileged identity access, and inside 36 hours they had deployed ALPHV/BlackCat ransomware across the environment. MGM disclosed an estimated $100 million impact in its subsequent 8-K filing, including ten days of system outages across slot machines, hotel keys, point-of-sale, and digital reservations. The case redefined the help desk as a tier-one target for executive-level social engineering.
2022 Twilio / 0ktapus smishing campaign, 130+ companies hit
In August 2022, the threat cluster tracked as 0ktapus and Scatter Swine ran a coordinated SMS phishing campaign that compromised Twilio and at least 136 other organizations, including Cloudflare, MailChimp, DoorDash, Klaviyo, and Mailgun. Texts impersonated Okta password resets and routed targets to lookalike SSO portals running Modlishka-style adversary-in-the-middle proxies. The kit captured live MFA codes and replayed sessions inside minutes.
Twilio confirmed attacker access to internal tools and customer data affecting 209 customers, including downstream impact on Signal users. Cloudflare blocked the breach because hardware FIDO2 keys were mandatory for every employee — pretext quality was not the deciding factor; phishing-resistant MFA was. The campaign showed that SMS as a trust channel is fully exploitable at industrial scale.
2016-2019 FACC and Toyota Boshoku BEC wire fraud, $54M and $37M
In January 2016, Austrian aerospace parts manufacturer FACC AG disclosed that attackers had used a business email compromise pretext, impersonating the CEO to authorize a wire transfer of approximately 50 million euros (about $54 million) to attacker-controlled accounts. The board fired the CEO and CFO in the aftermath.
In September 2019, Toyota Boshoku Europe disclosed a similar BEC loss of approximately $37 million through a wire request that exploited authority and urgency to bypass internal payment controls.
Both cases share the BEC pattern: a single skipped callback verification, executive impersonation by lookalike domain or compromised mailbox, and a finance team trained to act fast on senior requests. The FBI IC3 reports BEC accounted for $2.9 billion in 2023 across 21,489 complaints.
How to defend against social engineering
One-click reporting button across every channel
Install a Report Phish button in Outlook, Gmail, and the mobile mail clients, plus a published phone-report number for vishing, an SMS-forward number for smishing, and a Slack or Teams shortcut for impersonation in chat. Publish the median triage time and the count of confirmed pretexts caught in the previous month, so reporting feels useful rather than ignored. Reporting rate is the leading indicator most strongly correlated with breach resilience; the gap between first contact and first report is the window an attacker uses to escalate. A program that climbs reporting rate from 18% to above 65% routinely cuts confirmed-incident dwell time by an order of magnitude.
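The two program metrics named here can be computed directly from help-desk ticket timestamps. A minimal sketch in Python, assuming each real or simulated pretext is logged with a delivery time and an optional report time (the field names are illustrative, not a specific ticketing API):

```python
from datetime import datetime, timedelta
from statistics import median

def reporting_metrics(incidents):
    """incidents: list of dicts with 'delivered' (datetime) and
    'reported' (datetime or None). Returns the reporting rate across
    all incidents and the median time-to-report for those reported."""
    reported = [i for i in incidents if i["reported"] is not None]
    rate = len(reported) / len(incidents) if incidents else 0.0
    deltas = [i["reported"] - i["delivered"] for i in reported]
    mttr = median(deltas) if deltas else None
    return rate, mttr

# Example: three pretexts delivered, two reported.
t0 = datetime(2024, 5, 1, 9, 0)
batch = [
    {"delivered": t0, "reported": t0 + timedelta(minutes=4)},
    {"delivered": t0, "reported": t0 + timedelta(minutes=30)},
    {"delivered": t0, "reported": None},  # never reported: the attacker's window
]
rate, mttr = reporting_metrics(batch)
print(rate)   # 2 of 3 reported
print(mttr)   # median gap between delivery and report
```

Publishing these two numbers monthly, rather than click rate alone, is what makes the reporting button feel useful to the workforce.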
Role-based scenario training, not annual videos
Generic awareness content plateaus inside two quarters. Role-based exercises that mirror current attacker tradecraft keep the verification reflex live. Finance and AP get BEC, vendor-invoice, deepfake-wire, and gift-card pretexts. IT and help-desk staff get vishing and MFA-reset scenarios drawn from the MGM playbook. Engineering gets OAuth consent, supply-chain code commit, and AI assistant prompt-injection patterns. Executives and their assistants get whaling, deepfake video calls, and personal-device hardening. Customer success and partnerships get vendor-impersonation pretexts. The result is a verification reflex that transfers across every channel the workforce uses.
Dual authorization with callback verification on a published number
Write a one-page policy that requires a callback to a published internal number before any wire change, banking detail update, MFA reset, payroll edit, vendor onboarding, or executive request that arrives outside normal channels, regardless of the inbound channel. Adopt a code-word system for high-value finance requests so deepfake voice or video cannot complete the chain alone. Require dual authorization (two named approvers, with the second approver picked from a published rotation, not the requester) for any payment above a defined threshold. Rehearse the policy with finance, AP, HR, and the help desk every quarter. The 2024 Arup $25 million loss, the FACC $54 million loss, and the Toyota Boshoku $37 million loss were each preventable by this single control.
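The policy above reduces to a handful of machine-checkable conditions, which is why it belongs in the payment workflow and not only in a PDF. A hedged sketch of the gate, with the threshold, roster, and field names as illustrative assumptions rather than a specific product's API:

```python
HIGH_VALUE_THRESHOLD = 25_000  # illustrative; set per your own policy
APPROVER_ROSTER = {"j.doe", "k.lee", "m.chan"}  # published rotation

def payment_allowed(request):
    """request: dict with 'amount', 'requester', 'approvers' (set),
    'callback_verified' (bool: callback completed to a published
    internal number), and 'origin'; voice/video requests also carry
    'code_word_ok' (bool) for the code-word challenge."""
    # Callback verification is mandatory for every wire or banking
    # change, regardless of the inbound channel.
    if not request["callback_verified"]:
        return False
    # Deepfake voice or video cannot complete the chain alone.
    if request.get("origin") in {"voice", "video"} and not request["code_word_ok"]:
        return False
    # Dual authorization above the threshold: two approvers, neither
    # of them the requester, at least one from the published roster.
    if request["amount"] >= HIGH_VALUE_THRESHOLD:
        approvers = set(request["approvers"]) - {request["requester"]}
        if len(approvers) < 2 or not (approvers & APPROVER_ROSTER):
            return False
    return True
```

Encoding the check this way also gives finance a rehearsal harness: feed it the Arup, FACC, and Toyota Boshoku request shapes and confirm each one is refused.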
No-blame coaching culture
Public shaming kills the reporting culture you need. Coach repeat clickers privately with a 60-second microlesson tied to the exact pattern they missed, pair with a manager check-in only if the pattern recurs, and never publish individual click rates. Most breaches that involve a social-engineering click also involve a delayed report, and most delayed reports are produced by fear of consequence rather than failure of detection. The cultural posture of the program decides the time-to-report curve, which decides the blast radius of every successful pretext.
Phishing-resistant MFA (FIDO2 and passkeys)
Hardware-bound keys (YubiKey, Titan) and platform passkeys on iOS and Android cannot be phished by adversary-in-the-middle proxies. The cryptographic challenge is bound to the legitimate domain, so a fake login page or a Modlishka-style proxy cannot complete the handshake even when the user enters real credentials. Move every employee from SMS, push, and TOTP to FIDO2 or passkeys, starting with finance, IT, executives, and developers. Cloudflare publicly credited mandatory hardware keys with stopping the 2022 0ktapus / Scatter Swine campaign at its perimeter while peer companies on push-based MFA were breached.
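The domain binding works because the authenticator signs over client data that includes the origin the browser actually connected to, and the relying party rejects any assertion whose origin is not its own. A simplified sketch of that server-side check (real WebAuthn verification also validates the signature and authenticator data; the names here are illustrative):

```python
import json

EXPECTED_ORIGIN = "https://sso.example.com"  # the relying party's real origin

def verify_client_data(client_data_json: bytes, expected_challenge: str) -> bool:
    """Reject assertions produced on any origin other than our own.
    A Modlishka-style proxy serves its page from a lookalike domain,
    so the browser stamps that domain into clientDataJSON and this
    check fails even when the user completes the ceremony."""
    data = json.loads(client_data_json)
    if data.get("origin") != EXPECTED_ORIGIN:
        return False  # phishing proxy: wrong origin inside the signed data
    if data.get("challenge") != expected_challenge:
        return False  # replayed or cross-session assertion
    return True

# A proxy-captured ceremony carries the proxy's origin and fails:
proxied = json.dumps({"origin": "https://sso.examp1e.com",
                      "challenge": "abc123"}).encode()
genuine = json.dumps({"origin": "https://sso.example.com",
                      "challenge": "abc123"}).encode()
print(verify_client_data(proxied, "abc123"))   # False
print(verify_client_data(genuine, "abc123"))   # True
```

This is why the user cannot be tricked into defeating FIDO2: the origin check happens in signed data the attacker never controls, not in the user's judgment.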
In-person tailgating and physical pretext policy
Brief reception, security, facilities, and warehouse staff on the same pretext patterns that hit the inbox. Common physical scripts: a delivery courier who needs the badge held open, a contractor with a laminated lanyard but no scheduled visit, a vendor field engineer arriving for an unscheduled audit, and the USB drop with a fake "annual safety audit" cover letter. Require photo-ID match and a directory callback for every unbadged entry, prohibit USB plug-in on facility computers, and run quarterly tailgating drills measured against a published baseline. The loading dock and the front desk are tier-one social-engineering targets that the SOC cannot see.
Deepfake-aware verification protocols
Voice cloning APIs need as little as 30 seconds of source audio to produce a convincing pretext, and live-face video deepfakes ran in production during the 2024 Arup wire fraud and have since appeared in private-equity diligence calls and CFO authorization workflows. Train executives, finance teams, and assistants that visual identity on a live call is no longer evidence. Require a code-word challenge response for any high-value request placed through video or voice, never give the code word out on the same channel that requested it, and rotate the code word quarterly. Pair with synthetic-media detection tools where deployed, but do not rely on them as the sole control.
Executive personal-device OPSEC and OSINT scrub
Whaling and CEO-fraud targets are researched through public profiles. Lock down executive LinkedIn (turn off followers, scrub speaking schedules, remove tenure and reporting lines), enroll personal phones in MDM if used for work mail, enforce passkeys on personal Apple and Google IDs that hold work data, and commission a quarterly OSINT pass to identify lookalike domain registrations, impersonating social accounts, and leaked credentials in breach dumps. The verification reflex matters most at the top of the org chart: one CFO who follows the callback policy prevents the eight-figure loss the rest of the program is designed to avoid.
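The quarterly lookalike-domain sweep can start from a simple candidate generator whose output feeds WHOIS or certificate-transparency monitoring. A minimal sketch, where the homoglyph table is a tiny illustrative subset of what commercial typosquat tooling checks:

```python
def lookalike_candidates(domain: str) -> set[str]:
    """Generate common typosquat/homoglyph variants of a domain for
    registration monitoring. Illustrative subset only."""
    name, _, tld = domain.partition(".")
    homoglyphs = {"l": "1", "o": "0", "i": "1", "e": "3"}  # tiny sample
    out = set()
    # Single-character homoglyph swaps (e.g. example -> examp1e).
    for idx, ch in enumerate(name):
        if ch in homoglyphs:
            out.add(name[:idx] + homoglyphs[ch] + name[idx + 1:] + "." + tld)
    # Single-character omissions (e.g. example -> exmple).
    for idx in range(len(name)):
        out.add(name[:idx] + name[idx + 1:] + "." + tld)
    # Adjacent transpositions (e.g. example -> exapmle).
    for idx in range(len(name) - 1):
        swapped = name[:idx] + name[idx + 1] + name[idx] + name[idx + 2:]
        out.add(swapped + "." + tld)
    out.discard(domain)
    return out

candidates = lookalike_candidates("example.com")
print("examp1e.com" in candidates)  # True
```

Alerting on new registrations or certificates for these candidates catches the lookalike domain while the BEC pretext is still being staged, before the first email lands.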
How RansomLeak trains employees to recognize social engineering
RansomLeak runs immersive, scenario-based exercises rather than recorded videos and static quizzes. The foundational drill is the social engineering exercise, which puts learners inside a layered pretext stacking authority, urgency, and rapport, so the influence pattern is recognized under live pressure rather than recited from a slide. Each scenario ends with immediate feedback that names the specific Cialdini principles in play, the cues missed, and the verification step that would have caught the real attack. Exercises ship as SCORM 1.2 and SCORM 2004 packages so they drop into Cornerstone, Workday Learning, Docebo, SAP SuccessFactors, or any standards-compliant LMS without integration work.
Channel-specific extensions cover the surface beyond email. The vishing exercise drills the help-desk pretext that drove the 2023 MGM breach. The smishing exercise drills the SMS pattern that hit Twilio and 130 plus companies in the 0ktapus campaign. The QR code phishing exercise covers the image-based bypass that defeats secure email gateway URL scanning. The business email compromise exercise walks finance teams through the wire-instruction-change pattern that drove $2.9 billion in 2023 IC3 losses. The whaling-with-a-deepfake exercise puts executives and their assistants inside the Arup case scenario with a spoofed email, a cloned-voice voicemail, and a deepfake video call requesting a wire.
Programs are scoped by role rather than blasted to all-staff, because the pretexts that hit finance, IT, and executives are not the pretexts that hit warehouse and customer success. KPIs track reporting rate and time-to-report as primary indicators, with click rate as a secondary measure, and refresh content monthly to track attacker tradecraft as it shifts. The result is measurable behavior change at scale: a verification reflex that holds across email, SMS, voice, QR, video, and physical channels, built from real campaigns rather than generic compliance content.
How does social engineering work, and why is it the dominant breach driver?
Social engineering is manipulating a person into an action or disclosure that compromises security, exploiting psychology rather than software flaws. It is the parent category that contains phishing, vishing, smishing, pretexting, business email compromise, deepfake video fraud, and tailgating. Modern attacks stack two or three Cialdini principles (authority, urgency, reciprocity, scarcity, social proof, liking) into a single pretext to compress the target decision window below the threshold for verification.
Verizon DBIR 2024 attributes 68% of breaches to a non-malicious human element, and the FBI IC3 reported $12.5 billion in cyber-enabled fraud losses in 2023. Named groups specialize in specific pretexts: Scattered Spider runs help-desk vishing (2023 MGM Resorts, $100M loss), 0ktapus runs SMS-driven adversary-in-the-middle (2022 Twilio, 130+ companies), FIN7 targets finance-team credentials, and TA453 builds long-cycle rapport before the ask.
Defense layers technical and human controls. Technical: phishing-resistant MFA, DMARC at p=reject, URL rewriting at the email gateway, threat-intel DNS filtering, and deepfake-aware verification. Human: role-based scenario exercises, a one-click report button across every channel, no-blame coaching, dual authorization with callback verification on a published number, and a tailgating policy. The reflex must be drilled, not announced.
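The DMARC piece of that technical stack is a single DNS TXT record on the sending domain; a minimal example, with the report address as an illustrative placeholder:

```text
_dmarc.example.com.  IN TXT  "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
```

p=reject instructs receiving servers to refuse mail that fails aligned SPF and DKIM, which blocks direct spoofing of the executive domain; lookalike domains are not covered and still need separate registration monitoring.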
Recommended exercises
Scenario-based simulations from a catalogue of 100+ exercises.
Social Engineering
The foundational drill that exposes the layered Cialdini pretext (authority plus urgency plus rapport) so the influence pattern is recognized under live pressure.
Try the exercise
Vishing
Drills the help-desk and finance-call pretext pattern that drove the 2023 MGM Resorts $100 million breach inside a 36-hour window.
Try the exercise
Smishing
Builds the verification reflex against the SMS-channel pattern used by the 0ktapus / Scatter Swine cluster to compromise Twilio and 130 plus other companies.
Try the exercise
Business Email Compromise
Walks finance and AP teams through the wire-instruction-change pretext that drove $2.9 billion in 2023 FBI IC3 reported losses, including the FACC $54M case.
Try the exercise
Whaling With a Deepfake
Puts executives and their assistants inside the 2024 Arup scenario with a spoofed email, cloned-voice voicemail, and a live deepfake video call requesting a $25M wire.
Try the exercise
Callback Phishing
Covers the reverse-vishing variant where a benign-looking email drives the target to call the attacker, who runs the social-engineering script over the phone.
Try the exercise
Further reading
Deeper guides on adjacent topics.
Related glossary terms
Quick definitions for the terms in this pillar.
Frequently Asked Questions
What security leaders ask about this threat.
What is social engineering?
Social engineering is the practice of manipulating a person into performing an action or sharing information that compromises security, by exploiting psychology rather than software flaws. It is the parent category that contains phishing, spear phishing, vishing, smishing, pretexting, business email compromise, deepfake fraud, USB drops, and physical tailgating.
The Verizon 2024 Data Breach Investigations Report attributes 68% of breaches to a non-malicious human element. Attackers route around firewalls, EDR, and email gateways because the target is human judgment under pressure, and human judgment is predictable when the right cognitive levers are pulled in the right sequence.
How is social engineering different from phishing?
Phishing is one delivery channel for social engineering, not a synonym for it. Email phishing, vishing (voice), smishing (SMS), quishing (QR codes), and clone phishing are all phishing variants, and all of them sit inside the broader social engineering category. The pretexting story behind the message is the social-engineering layer, and that layer is portable across every channel.
Other social-engineering vectors live outside phishing entirely: in-person tailgating, USB drop attacks, deepfake video calls during live conferencing, and shoulder-surfing in coworking spaces all qualify as social engineering without any phishing involved. A program that drills only email phishing leaves the phone, the front desk, and the loading dock unguarded.
What are the six Cialdini principles in social engineering?
Robert Cialdini documented six influence principles that drive compliance: authority (a request from a senior figure or trusted institution), urgency (a deadline that punishes hesitation), reciprocity (a small favor that creates a return obligation), scarcity (a limited window or one-time opportunity), social proof (others have already complied), and liking (rapport built before the ask).
Attackers rarely use one principle alone. The 2023 MGM Resorts vishing call stacked authority (impersonating an employee), urgency (locked out of a customer call), and social proof (the attacker dropped real internal references). The 2024 Arup deepfake wire fraud stacked authority (the CFO), urgency (a private acquisition), and social proof (other "executives" on the call).
How can I train my team against social engineering?
Replace annual compliance videos with monthly role-based scenario exercises that mirror current attacker tradecraft. Finance and AP need BEC, vendor-invoice, and deepfake-wire scenarios. IT help-desk staff need vishing and MFA-reset pretexts drawn from the MGM playbook. Engineering needs OAuth consent and AI assistant prompt-injection patterns. Executives and assistants need whaling and deepfake video calls.
Track reporting rate and time-to-report as primary KPIs, not just click rate. Pair the exercises with a one-click reporting button across every channel, no-blame private coaching for repeat clickers, and an out-of-band callback verification policy for any payment, access, or identity request. SANS research shows monthly cadence cuts click rates from above 30% to under 5% inside 12 months.
What does the Verizon DBIR say about human-element breaches?
The Verizon 2024 Data Breach Investigations Report attributes 68% of breaches to a non-malicious human element, including phishing clicks, social-engineering compliance, misdelivery, and configuration errors. The report frames human-element actions as the largest category of breach root cause across every industry vertical it tracks.
The FBI IC3 reported $12.5 billion in cyber-enabled fraud losses across 2023, with business email compromise alone accounting for $2.9 billion across 21,489 complaints. IBM Cost of a Data Breach 2024 placed the average global breach at $4.88 million, and social-engineering-rooted incidents tend to cost more than the mean because they involve insider-level access and longer dwell time.
How do attackers use AI to scale social engineering?
Large language models removed the broken-English signal that defenders relied on, drafting fluent business-English pretexts customized to a target role, current calendar, and recent corporate events in seconds. Voice cloning APIs need as little as 30 seconds of source audio to produce a convincing live-call pretext, and live-face video deepfake tooling can drop a target face into a Zoom or Teams call in real time.
The 2024 Arup case ran the full chain: an email pretext, a vishing follow-up, and a deepfake video call with cloned executives that authorized a $25 million wire. Detection now relies on verification reflex and policy controls (dual authorization, callback verification, code-word challenges), not on spotting linguistic or visual tells.
Sources & further reading
Primary sources cited above and adjacent guidance.
Train Your Team Against This Threat
Book a 30-minute walkthrough. We will scope the exercise sequence and rollout timeline.