Bug Bounty Automation Python 2026 — The Exact Stack Top Hunters Use to Earn $300K/Year

Bug bounty automation Python 2026 — The hunters consistently earning $150,000–$300,000 a year are not better hackers than everyone else. I’ve met enough of them to know that’s not what separates them. What separates them is that while the average hunter is manually running subfinder and reviewing output at their desk, the top earners have Python pipelines that discovered seventeen new subdomains at 3:47 this morning, probed them all for live services by 3:49, ran Nuclei against the interesting ones by 3:52, and sent a Telegram alert to their phone at 3:54 when it found an exposed admin panel. They were asleep. The window of opportunity on new assets is often measured in hours. Automation determines whether you’re in that window. This article gives you the exact pipeline architecture that makes that possible.

🎯 What You’ll Build After This Article

A continuous subdomain recon layer using subfinder wrapped in Python with SQLite deduplication
An HTTP asset fingerprinting layer that detects new endpoints and response changes using httpx
An automated vulnerability detection layer combining Nuclei with custom Python checks
A triage and notification layer that sends Telegram alerts only for actionable high/critical findings
A rate-limiting mechanism that keeps your automation programme-compliant and avoids bans
A deployment strategy for running the full pipeline 24/7 on a $5/month VPS

⏱️ 40 min read · 3 exercises · Authorised targets and programme-approved use only

Bug bounty automation builds directly on the manual reconnaissance skills covered throughout the 60-Day Bug Bounty Course. If you haven’t read the subdomain enumeration deep-dive yet, Day 6: Subdomain Enumeration covers the manual foundation that this automation layer wraps. The goal of automation is not to replace understanding — it is to apply your skills continuously at a scale that manual work cannot match.


Why Bug Bounty Automation Is Now Non-Negotiable

Programme scopes have grown dramatically. A typical enterprise VDP in 2026 covers hundreds of wildcard domains, multiple cloud infrastructure footprints, mobile applications, internal tools exposed through SSO, and APIs. The attack surface is enormous and constantly changing. New microservices deploy daily. Developer test environments appear and disappear. Feature branches get accidentally exposed to the internet. Every one of these events is a potential bug bounty finding, and the window of opportunity — the time between deployment and hardening — is often 12–48 hours.

Manual hunters process 10–20 targets per day with focused attention. Automated hunters process 500–2,000 targets per day continuously. This is not a skill differential — it is a throughput differential. The arithmetic is straightforward: a hunter checking new assets manually once a week is competing against automation that checks the same scope every six hours. On a programme with hundreds of wildcard domains, the manual hunter sees roughly 1% of the new asset activity that continuous automation captures. The findings follow the same distribution.
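That arithmetic can be sanity-checked in a few lines. The midpoints below are illustrative assumptions taken from the ranges in this section, not measurements:

```python
# Back-of-envelope comparison using the midpoints of the ranges above.
# These numbers are illustrative assumptions, not measurements.
manual_targets_per_day = 15        # midpoint of 10-20
automated_targets_per_day = 1250   # midpoint of 500-2,000

coverage_ratio = manual_targets_per_day / automated_targets_per_day
print(f"manual coverage ratio: {coverage_ratio:.1%}")   # roughly 1%

automated_checks_per_week = 7 * 24 // 6   # a 6-hour cron cadence
print(f"scope checks per week: {automated_checks_per_week} automated vs 1 manual")
```

The second figure is the one that matters for the opportunity window: a weekly manual check simply cannot land inside a 12–48 hour window, while a 6-hour cadence almost always does.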

The automation threshold for serious bug bounty is lower than most hunters expect. You do not need a sophisticated multi-server distributed system to gain most of the throughput benefit. A single Python script running on a $5/month VPS, executing a four-tool pipeline every six hours, provides the most valuable capability: continuous monitoring of new asset discovery and automatic alerting when something interesting appears. The code in this article implements exactly that pipeline.

securityelites.com
Manual vs Automated Bug Bounty — Daily Target Coverage
Manual Hunter
Targets/day: 10–20
New asset check: Weekly
Response time: Days after deploy
Coverage: ~1% of scope
Human attention = quality depth

Automated Pipeline
Targets/day: 500–2,000
New asset check: Every 6 hours
Response time: Hours after deploy
Coverage: Full scope width
Automation = breadth at scale

📸 The throughput reality of manual vs automated bug bounty. Manual hunting provides depth and creative exploitation — qualities automation cannot replicate. Automation provides continuous full-scope coverage that no manual process can match. Top earners use both: the pipeline handles discovery and initial triage 24/7, human attention handles deep exploitation of whatever the pipeline surfaces. Missing either layer leaves significant earning potential on the table.


Pipeline Architecture — The Four Layers

The bug bounty automation pipeline I use and recommend has four layers. Each layer has a single responsibility. Each feeds the next. Each can be improved or replaced independently without breaking the others. This separation of concerns is what makes the pipeline maintainable as you scale to more programmes and more targets over time.

Layer 1 is continuous recon — discovering new subdomains and assets as they appear. Layer 2 is change detection — tracking which assets are live HTTP services and flagging when their responses change. Layer 3 is vulnerability detection — running templated and custom checks against assets that warrant scanning. Layer 4 is triage and notification — filtering the findings to only actionable ones and pushing them to wherever you work. The critical design principle is that each layer acts only on genuinely new information — Layer 2 only processes new subdomains from Layer 1, Layer 3 only scans assets that are new or changed, Layer 4 only sends alerts for findings not already seen. This prevents the pipeline from endlessly reprocessing known data and flooding your notification channel with noise.
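The "only genuinely new information" principle can be sketched with plain Python sets; SQLite plays the role of the `known` set in the real pipeline, and the hostnames here are invented for illustration:

```python
# Each layer diffs new observations against stored state and passes only
# the difference downstream. Hostnames are invented for illustration.
known = {"www.target.com", "api.target.com"}                             # prior state
discovered = {"www.target.com", "api.target.com", "staging.target.com"}  # this run

new_assets = discovered - known   # the only work that reaches the next layer
known |= new_assets               # persist for the next run
print(sorted(new_assets))         # ['staging.target.com']
```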


Layer 1: Continuous Subdomain Recon with SQLite Deduplication

The recon layer wraps subfinder — a Go tool that queries dozens of passive DNS and certificate transparency sources simultaneously. Python’s subprocess module calls subfinder, parses its stdout output, and stores discovered subdomains in a local SQLite database. The database serves as the deduplication layer: a subdomain with a PRIMARY KEY constraint will raise an IntegrityError on insert if it already exists. Catching that error is how we identify genuinely new discoveries versus re-discovered known assets.

Running this layer every six hours means a new subdomain that appears at 2am is in your pipeline by 2am. The SQLite first_seen timestamp tells you exactly when each asset was discovered, which is useful for demonstrating to programmes that you found a vulnerability on a recently deployed asset — the timestamp is part of the bug bounty report timeline evidence. The layer only returns the list of new subdomains from each run, so subsequent pipeline layers never see duplicates.

LAYER 1 — CONTINUOUS RECON PYTHON MODULE
# recon_layer.py — subdomain discovery with deduplication
import subprocess, sqlite3, datetime
from pathlib import Path

DB_PATH = Path("~/.bb_pipeline/recon.db").expanduser()
DB_PATH.parent.mkdir(parents=True, exist_ok=True)

def init_db():
    conn = sqlite3.connect(DB_PATH)
    conn.execute("""CREATE TABLE IF NOT EXISTS subdomains (
        subdomain TEXT PRIMARY KEY,
        domain TEXT NOT NULL,
        first_seen TEXT NOT NULL
    )""")
    conn.commit()
    return conn

def run_subfinder(domain: str) -> list[str]:
    result = subprocess.run(
        ["subfinder", "-d", domain, "-silent", "-all"],
        capture_output=True, text=True, timeout=300
    )
    return [line.strip() for line in result.stdout.splitlines() if line.strip()]

def store_and_filter_new(domain: str, subdomains: list[str]) -> list[str]:
    """Returns only genuinely new subdomains — never seen before."""
    conn = init_db()
    now = datetime.datetime.utcnow().isoformat()
    new = []
    for sub in subdomains:
        try:
            conn.execute("INSERT INTO subdomains VALUES (?,?,?)", (sub, domain, now))
            new.append(sub)  # only appended if insert succeeded (no duplicate)
        except sqlite3.IntegrityError:
            pass  # already in DB — silently skip
    conn.commit()
    return new

def recon_domain(domain: str) -> list[str]:
    all_subs = run_subfinder(domain)
    new_subs = store_and_filter_new(domain, all_subs)
    print(f"[recon] {domain}: {len(all_subs)} total, {len(new_subs)} NEW")
    return new_subs

SQLite Deduplication is the Single Most Important Design Decision: Without it, every pipeline run re-processes every subdomain from scratch — flooding your notification queue with duplicates and making signal detection impossible. The INSERT OR IGNORE pattern (or IntegrityError catch) ensures only genuinely new assets bubble up to Layer 2 and beyond. This one small function is what separates a noisy script from a professional pipeline.
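A quick sketch of the INSERT OR IGNORE variant mentioned above: with Python's sqlite3, `cursor.rowcount` is 1 when the row was actually inserted and 0 when the insert was ignored, so no exception handling is needed. Table and column names follow the Layer 1 schema; the in-memory database is just for demonstration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE subdomains (subdomain TEXT PRIMARY KEY, domain TEXT, first_seen TEXT)")

def insert_if_new(conn: sqlite3.Connection, sub: str, domain: str, ts: str) -> bool:
    cur = conn.execute("INSERT OR IGNORE INTO subdomains VALUES (?,?,?)", (sub, domain, ts))
    return cur.rowcount == 1  # 1 = inserted (new), 0 = ignored (already known)

print(insert_if_new(conn, "api.target.com", "target.com", "2026-01-01T00:00:00"))  # True  (new)
print(insert_if_new(conn, "api.target.com", "target.com", "2026-01-01T06:00:00"))  # False (known)
```

Either style works; the IntegrityError catch in recon_layer.py keeps the INSERT explicit, while INSERT OR IGNORE pushes the dedup decision into SQLite itself.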
Layer 1 Pipeline Output — New vs Known Subdomain Detection
$ python3 recon_runner.py --domain target.com
[06:00:01] subfinder found 847 subdomains for target.com
[06:00:03] DB check: 844 already known, 3 NEW
[06:00:03] NEW: staging-api2.target.com
[06:00:03] NEW: internal-beta.target.com
[06:00:03] NEW: payments-v3-test.target.com
↑ 844 known subdomains silently skipped · 3 new pass to Layer 2
✅ “payments-v3-test” discovered 4 hours after deployment — before security review

📸 Layer 1 deduplication in action. Of 847 subdomains discovered, 844 were already in the SQLite database from previous runs and are silently skipped. Only the 3 genuinely new subdomains proceed to Layer 2 for HTTP probing. The subdomain “payments-v3-test.target.com” was discovered 4 hours after it was deployed — before the security team had reviewed it. This is the window of opportunity that automation captures and manual hunting misses entirely.

🛠️ EXERCISE 1 — BROWSER (15 MIN · NO INSTALL)
Explore the Bug Bounty Automation Tool Ecosystem

⏱️ 15 minutes · Browser only — no install required

Step 1: Explore ProjectDiscovery’s toolkit
Go to: https://github.com/projectdiscovery
Find and read the README for these 5 tools:
subfinder / httpx / nuclei / dnsx / naabu
For each, write down: (a) what it does, (b) its input format, (c) its output format.
Understanding I/O formats is the foundation of building pipelines.

Step 2: Understand the pipeline concept on GitHub
Search GitHub for: bug bounty automation python
Find one open-source automation pipeline (examples: six2dez/reconftw, pry0cc/axiom, or similar).
Examine its architecture: What triggers it? How does it store state?
How many tools does it chain? What notification mechanism does it use?

Step 3: Read a programme automation policy
Go to: https://hackerone.com/programs
Pick any public programme with a large scope.
Find the programme’s policy page and look for the automation section.
Common phrases: “automated scanning tools”, “rate limiting”, “prior written approval”.
Does this programme explicitly allow automation? Explicitly prohibit it? Say nothing?

Step 4: Calculate your opportunity window
Pick a programme scope with 5 wildcard domains.
Estimate: if each domain has 50–100 subdomains on average,
and each subdomain has 3–5 discoverable endpoints,
how many total assets does this programme have?
If you checked manually once per week vs automated every 6 hours,
how many cycles of new-asset discovery would you miss per week manually?

✅ What you just learned: Understanding I/O formats is what makes tool chaining possible — subfinder outputs one subdomain per line, httpx reads the same format, Nuclei reads URLs from a file. This is the Unix philosophy of composable tools and it is why ProjectDiscovery’s stack works so well together. The programme policy research is critical — automation without reading the policy is the fastest way to get banned from programmes you want to work on. The arithmetic exercise demonstrates why manual hunters cannot compete on scope coverage against automated pipelines.

📸 Screenshot the 5-tool I/O format table you created and the programme automation policy section you found. Post to #bb-automation on Discord.


Layer 2: HTTP Asset Discovery and Change Detection

Subdomain discovery gives you hostnames. Layer 2 converts those hostnames to live HTTP services and builds fingerprints of each one. A fingerprint captures the essential identity of a service’s response: status code, page title, server header, and a hash of the response body. The hash is the critical element for change detection. When the pipeline runs again in six hours, it fetches the same endpoints and computes new hashes. Any hash change signals something changed on that endpoint — a new deployment, modified configuration, added functionality, or removed authentication. Each change is worth investigating manually.

The Python httpx library is the right choice for this layer because it supports both synchronous and asynchronous request models, maintains efficient connection pools, and handles redirects cleanly. Running fifty concurrent workers with ThreadPoolExecutor lets you fingerprint 500 hosts in roughly a minute on a standard VPS — fast enough that the full pipeline completes well within the six-hour cron interval even for large scopes.

LAYER 2 — HTTPX FINGERPRINTING AND CHANGE DETECTION
# change_detector.py — httpx fingerprinting and change detection
import hashlib, re, sqlite3
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

import httpx

DB_PATH = Path("~/.bb_pipeline/recon.db").expanduser()

def fingerprint(host: str) -> dict:
    for scheme in ("https", "http"):
        try:
            r = httpx.get(f"{scheme}://{host}", timeout=10,
                          follow_redirects=True,
                          headers={"User-Agent": "Mozilla/5.0"})
            title_match = re.search(r"<title>([^<]{0,120})</title>", r.text, re.I)
            return {
                "host": host, "url": str(r.url),
                "status": r.status_code,
                "title": title_match.group(1).strip() if title_match else "",
                "server": r.headers.get("server", ""),
                "body_hash": hashlib.sha256(r.content).hexdigest()[:16],
                "length": len(r.content),
            }
        except Exception:
            continue
    return {"host": host, "error": "unreachable"}

def detect_changes(hosts: list[str]) -> list[dict]:
    """Returns fingerprints for new hosts and hosts whose body hash changed."""
    conn = sqlite3.connect(DB_PATH)
    conn.execute("""CREATE TABLE IF NOT EXISTS fingerprints (
        host TEXT PRIMARY KEY, url TEXT, status INT,
        title TEXT, server TEXT, body_hash TEXT, length INT
    )""")
    conn.commit()
    with ThreadPoolExecutor(max_workers=50) as pool:
        results = list(pool.map(fingerprint, hosts))
    actionable = []
    for fp in results:
        if "error" in fp:
            continue
        row = conn.execute("SELECT body_hash FROM fingerprints WHERE host=?",
                           (fp["host"],)).fetchone()
        if row is None or row[0] != fp["body_hash"]:
            actionable.append(fp)  # new host OR response changed
        conn.execute("INSERT OR REPLACE INTO fingerprints VALUES (?,?,?,?,?,?,?)",
                     (fp["host"], fp["url"], fp["status"], fp["title"],
                      fp["server"], fp["body_hash"], fp["length"]))
    conn.commit()
    return actionable
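A quick local illustration of why the truncated SHA-256 body hash is enough for change detection: any byte-level difference between two response bodies produces a different 16-character prefix, which detect_changes() treats as actionable.

```python
import hashlib

# Two response bodies differing by a single word produce unrelated hashes.
before = hashlib.sha256(b"<html><title>Login</title>disabled</html>").hexdigest()[:16]
after = hashlib.sha256(b"<html><title>Login</title>enabled</html>").hexdigest()[:16]
print(before != after)  # True: hash mismatch flags the asset as changed
```

Sixteen hex characters is 64 bits of hash, so accidental collisions are negligible at the scale of a few thousand tracked hosts, and the short prefix keeps the SQLite rows compact.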


Layer 3: Nuclei Integration and Custom Python Checks

Layer 3 is where the pipeline does actual vulnerability detection. Nuclei handles the commodity checks — its template library covers thousands of known vulnerability patterns including exposed admin panels, default credentials, known CVEs in popular software, misconfigured security headers, and exposed sensitive files. Running Nuclei against the list of new and changed assets from Layer 2 gives you comprehensive coverage of known patterns at scale, automatically, on every new asset that appears.

Custom Python checks are where your edge over other automation users comes from. Every programme-specific pattern you identify through manual testing can be encoded as a Python function that runs alongside Nuclei. Examples: a check that looks for hardcoded AWS keys in JavaScript files (a pattern Nuclei templates cover broadly, but a custom check can apply programme-specific naming conventions), a check for specific API versioning endpoints you’ve identified as frequently misconfigured in this programme’s stack, or a check for headers or response patterns unique to this company’s internal tooling that external Nuclei templates would never know about.

LAYER 3 — NUCLEI WRAPPER AND CUSTOM SECRET SCANNER
# vuln_layer.py — Nuclei wrapper and custom secret scanner
import json, os, re, subprocess, tempfile

import httpx

def run_nuclei(urls: list[str], severity: str = "medium,high,critical") -> list[dict]:
    """Pipe URL list to nuclei, return parsed JSON findings."""
    with tempfile.NamedTemporaryFile(mode="w", suffix=".txt", delete=False) as f:
        f.write("\n".join(urls))
        tmp = f.name
    result = subprocess.run(
        ["nuclei", "-l", tmp, "-severity", severity, "-j",
         "-silent", "-rate-limit", "50", "-timeout", "10"],
        capture_output=True, text=True
    )
    os.unlink(tmp)
    findings = []
    for line in result.stdout.splitlines():
        try:
            findings.append(json.loads(line))
        except json.JSONDecodeError:
            pass
    return findings

SECRET_PATTERNS = {
    "aws_access_key": r"AKIA[0-9A-Z]{16}",
    "github_pat": r"ghp_[A-Za-z0-9]{36}",
    "stripe_live": r"sk_live_[0-9a-zA-Z]{24}",
    "sendgrid_key": r"SG\.[A-Za-z0-9_-]{22}\.[A-Za-z0-9_-]{43}",
}

def scan_js_for_secrets(url: str) -> dict | None:
    """Fetch a URL, scan response text for hardcoded secrets."""
    try:
        r = httpx.get(url, timeout=10, follow_redirects=True)
        for key_type, pattern in SECRET_PATTERNS.items():
            match = re.search(pattern, r.text)
            if match:
                return {"url": url, "type": key_type,
                        "match_preview": match.group()[:8] + "…"}
    except Exception:
        pass
    return None
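The SECRET_PATTERNS regexes can be exercised offline before wiring them into the pipeline. The key below is AWS's long-published documentation example access key ID, not a live credential, and the JavaScript snippet is invented for illustration:

```python
import re

AWS_KEY_PATTERN = r"AKIA[0-9A-Z]{16}"   # same pattern as SECRET_PATTERNS above
sample_js = 'var cfg = { accessKeyId: "AKIAIOSFODNN7EXAMPLE" };'

match = re.search(AWS_KEY_PATTERN, sample_js)
if match:
    # Truncate the match the same way scan_js_for_secrets does, so the
    # alert never contains a full credential.
    print("aws_access_key", match.group()[:8] + "...")
```

Testing each new pattern against known-good and known-bad samples like this is the cheapest way to keep the custom-check false positive rate down.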


Layer 4: Triage Filtering and Telegram Notifications

The notification layer is where most automation pipelines fail. Generating findings is easy. Generating actionable, deduplicated, severity-filtered notifications that tell you exactly what to look at — without burying the critical alert in a flood of informational noise — requires deliberate design. A notification system that sends every Nuclei finding, including informational and low severity, will train you to ignore the channel. When the critical finding arrives, you won’t see it because you’ve learned to dismiss the noise.

The filtering rules are simple: only notify on high and critical severity findings. Only notify on a specific finding once — store sent findings in SQLite and check before every notification. Include enough context in the message to make the alert self-contained: target URL, template ID, severity, and a deep link to the Nuclei template documentation. The goal is that when a Telegram notification arrives at 3am, you can read it, immediately understand what it is and why it matters, and decide in thirty seconds whether it warrants waking up to investigate.

LAYER 4 — TELEGRAM NOTIFICATION WITH DEDUPLICATION
# notifier.py — Telegram notification with deduplication
import hashlib, sqlite3
from pathlib import Path

import httpx

DB_PATH = Path("~/.bb_pipeline/recon.db").expanduser()
ALERT_SEVERITIES = {"high", "critical"}

def _finding_hash(finding: dict) -> str:
    key = f"{finding.get('template-id', '')}-{finding.get('matched-at', '')}"
    return hashlib.md5(key.encode()).hexdigest()

def _already_sent(conn: sqlite3.Connection, fhash: str) -> bool:
    conn.execute("CREATE TABLE IF NOT EXISTS sent_alerts (hash TEXT PRIMARY KEY)")
    row = conn.execute("SELECT 1 FROM sent_alerts WHERE hash=?", (fhash,)).fetchone()
    return row is not None

def send_telegram(finding: dict, bot_token: str, chat_id: str) -> bool:
    severity = finding.get("info", {}).get("severity", "unknown").lower()
    if severity not in ALERT_SEVERITIES:
        return False  # skip info, low, medium
    conn = sqlite3.connect(DB_PATH)
    fhash = _finding_hash(finding)
    if _already_sent(conn, fhash):
        return False  # dedup — already notified
    emoji = "🔴" if severity == "critical" else "🟠"
    msg = (f"{emoji} <b>Bug Bounty Alert — {severity.upper()}</b>\n"
           f"Template: {finding.get('template-id', 'custom')}\n"
           f"Target: {finding.get('host', '')}\n"
           f"URL: {finding.get('matched-at', '')}")
    resp = httpx.post(
        f"https://api.telegram.org/bot{bot_token}/sendMessage",
        data={"chat_id": chat_id, "text": msg, "parse_mode": "HTML"},
    )
    if resp.status_code == 200:
        conn.execute("INSERT INTO sent_alerts VALUES (?)", (fhash,))
        conn.commit()
    return resp.status_code == 200


Rate Limiting, Scope Validation, and Responsible Automation

Unthrottled automation has ended bug bounty careers. Programmes have banned hunters permanently for running aggressive scanners that caused load issues on production systems. The rule is simple and non-negotiable: your automation must respect the programme’s systems as if they were your own. This means explicit rate limiting at the per-domain level, immediate backoff on 429 responses, and reading every programme’s automation policy before deployment.

Two requests per second per domain is a safe baseline that avoids triggering rate limiting on most web applications while still giving you reasonable throughput. The token bucket implementation below ensures this limit is maintained across concurrent workers — without it, fifty concurrent httpx requests can burst to fifty requests per second against a single domain, which will get you banned and may cause real service degradation.

RATE LIMITER — TOKEN BUCKET IMPLEMENTATION
# rate_limiter.py — per-domain token bucket
import threading, time
from collections import defaultdict

class DomainRateLimiter:
    def __init__(self, rps: float = 2.0):
        self.rps = rps
        self._tokens = defaultdict(lambda: 1.0)  # bucket capacity: 1 token
        self._last = defaultdict(float)
        self._lock = threading.Lock()

    def wait(self, domain: str) -> None:
        with self._lock:
            now = time.monotonic()
            elapsed = now - self._last[domain]
            self._tokens[domain] = min(1.0, self._tokens[domain] + elapsed * self.rps)
            if self._tokens[domain] < 1.0:
                # Not enough budget: sleep until one full token has accrued,
                # then consume it (leaving the bucket empty, never negative).
                # Note: sleeping inside the lock serialises all domains,
                # which is acceptable for a single-pipeline setup.
                time.sleep((1.0 - self._tokens[domain]) / self.rps)
                self._tokens[domain] = 0.0
            else:
                self._tokens[domain] -= 1.0
            self._last[domain] = time.monotonic()

# Usage: limiter = DomainRateLimiter(rps=2.0)
# Before every request: limiter.wait(tldextract.extract(url).registered_domain)

⚠️ Read Programme Policies First: Before deploying any automation against a bug bounty programme, read the programme’s policy page specifically for automation rules. Phrases like “automated scanning requires prior written approval,” “do not run automated scanners,” or “rate limit to X requests per minute” are legally binding conditions of your participation. Violating them can result in permanent programme bans and forfeiture of any pending payouts. When in doubt, email the programme team and ask permission before running.

VPS Deployment and Cron Scheduling

The full pipeline runs as a single orchestrator script that imports and calls each layer in sequence. A cron job on a $5/month VPS executes this script every six hours. The VPS needs: Python 3.10+, subfinder, httpx, Nuclei, and your pipeline code. The setup is straightforward and the monthly cost is offset by a single low-severity bug bounty payout.

PIPELINE ORCHESTRATOR AND CRON DEPLOYMENT
# pipeline.py — main orchestrator
import os

from recon_layer import recon_domain
from change_detector import detect_changes
from vuln_layer import run_nuclei
from notifier import send_telegram

BOT_TOKEN = os.environ["TG_BOT_TOKEN"]
CHAT_ID = os.environ["TG_CHAT_ID"]
TARGETS = ["example.com", "target.com"]  # programme-approved domains

def main():
    all_new_hosts = []
    for domain in TARGETS:
        all_new_hosts.extend(recon_domain(domain))
    print(f"[pipeline] Total new subdomains this run: {len(all_new_hosts)}")
    if not all_new_hosts:
        return  # nothing new — stop early, save resources
    changed_assets = detect_changes(all_new_hosts)
    print(f"[pipeline] Actionable assets (new/changed): {len(changed_assets)}")
    urls = [fp["url"] for fp in changed_assets if "url" in fp]
    nuclei_findings = run_nuclei(urls)
    for finding in nuclei_findings:
        send_telegram(finding, BOT_TOKEN, CHAT_ID)

if __name__ == "__main__":
    main()

# Cron entry — run every 6 hours:
#   0 */6 * * * cd /opt/bb_pipeline && python3 pipeline.py >> /var/log/bb_pipeline.log 2>&1
# Environment variables for secrets (set in the crontab or a sourced profile):
#   export TG_BOT_TOKEN="your_bot_token_here"
#   export TG_CHAT_ID="your_chat_id_here"

🧠 EXERCISE 2 — THINK LIKE AN AUTOMATION ARCHITECT (12 MIN)
Design Decisions — Solve Three Architecture Problems

⏱️ 12 minutes · Paper or text editor — no tools

Work through these three architecture decisions that every bug bounty
automation builder faces. For each, choose your answer and justify it
with specific technical reasoning.

PROBLEM 1 — Storage:
You need to store subdomain history, HTTP fingerprints, and sent alerts.
Options: SQLite (local file) vs PostgreSQL (server) vs Redis (in-memory)
Factors to consider: concurrent write safety, query patterns needed,
operational complexity, recovery after VPS restart.
Which do you choose for a single-VPS pipeline and why?

PROBLEM 2 — False Positive Rate:
Your Nuclei integration produces 50 findings per week.
After manual review, only 8 are real vulnerabilities.
The 42 false positives waste your time and train you to ignore alerts.
Describe three technical mechanisms (NOT template-level changes)
you would implement at the pipeline layer to reduce false positive rate.

PROBLEM 3 — What NOT to Automate:
Name three vulnerability classes you would NOT include in automated scanning
and handle only manually. For each, explain: why automated scanning creates
unacceptable risk (legal, technical, or policy-level), and how you would
prioritise manual testing for that class instead.

✅ ANSWER GUIDANCE — Problem 1: SQLite is the correct choice for a single-VPS pipeline. PostgreSQL adds operational overhead (service management, backups, connection pooling) that single-VPS pipelines don’t justify. Redis loses state on restart unless persisted. SQLite requires no separate service, survives VPS restarts automatically, and has sufficient concurrent write performance for this use case since writes happen sequentially within each pipeline run.
Problem 2: Three mechanisms — (1) require two independent findings from two separate runs before alerting (reduces scan-artifact noise), (2) implement a confidence scoring system that requires additional verification HTTP requests before classifying a finding as actionable, (3) maintain a personal false-positive register keyed on template-ID + host-pattern and suppress future alerts for combinations you’ve previously verified as false.
Problem 3: SQL injection (high risk of causing data damage in production; many programmes explicitly exclude automated SQLi scanners), authentication endpoints (repeated auth attempts trigger account lockouts and security alerts, possible DoS), and payment processing endpoints (any automated interaction with payment flows risks real financial impact).
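Mechanism (1) from the Problem 2 guidance, requiring a second independent sighting before alerting, can be sketched in a few lines. The `sightings` table name and function are illustrative assumptions, not part of the pipeline code in this article:

```python
import sqlite3

def alert_on_second_sighting(conn: sqlite3.Connection, fhash: str) -> bool:
    """True exactly once: the second time a finding hash is observed."""
    conn.execute("CREATE TABLE IF NOT EXISTS sightings (hash TEXT PRIMARY KEY, count INT)")
    row = conn.execute("SELECT count FROM sightings WHERE hash=?", (fhash,)).fetchone()
    if row is None:
        conn.execute("INSERT INTO sightings VALUES (?, 1)", (fhash,))
        conn.commit()
        return False   # first sighting: hold the alert for one run
    conn.execute("UPDATE sightings SET count = count + 1 WHERE hash=?", (fhash,))
    conn.commit()
    return row[0] == 1  # alert on the second sighting only

conn = sqlite3.connect(":memory:")
print([alert_on_second_sighting(conn, "abc123") for _ in range(3)])  # [False, True, False]
```

Because scan artifacts rarely reproduce identically across two runs six hours apart, this one gate removes a large share of one-off noise at the cost of a single-cycle delay on real findings.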

📸 Post your architecture decision answers to #bb-automation on Discord — especially your false positive reduction techniques.

🛠️ EXERCISE 3 — BROWSER ADVANCED (20 MIN)
Set Up a Telegram Bot and Validate the Notification Architecture

⏱️ 20 minutes · Browser + Telegram app — no server required

Step 1: Create a Telegram Bot (free, takes 3 minutes)
Open Telegram. Search for @BotFather.
Send: /newbot
Follow the prompts — give it a name and username.
Save the bot token that BotFather provides. Format: 123456789:ABCdef…

Step 2: Get your Chat ID
In Telegram, send a message to your new bot.
Open this URL in your browser (replace BOT_TOKEN):
https://api.telegram.org/bot{BOT_TOKEN}/getUpdates
Find “chat”: {“id”: XXXXXXXXX} in the response — that’s your Chat ID.

Step 3: Send a test message via the API
In your browser address bar, open this URL (replace both values):
https://api.telegram.org/bot{BOT_TOKEN}/sendMessage?chat_id={CHAT_ID}&text=Pipeline+Test+OK

If a message appears in your Telegram chat, your notification layer works.

Step 4: Simulate a critical alert
Open this URL (replace BOT_TOKEN and CHAT_ID):
https://api.telegram.org/bot{BOT_TOKEN}/sendMessage?chat_id={CHAT_ID}&text=🔴+CRITICAL+exposed-admin-panel+target.example.com&parse_mode=HTML

Does the red circle emoji render? This is exactly what your pipeline
will send you at 3am when it finds something real.

Step 5: Research Telegram bot rate limits
Search: “Telegram bot API rate limits”
What is the maximum messages per second a bot can send?
What happens if you exceed it?
How would you handle this in the notifier.py code?

✅ What you just learned: You now have a working Telegram notification endpoint. The URL-based API test proves the notification layer works before writing a single line of pipeline code. Telegram’s bot API rate limit (30 messages per second globally, 1 message per second per chat) is important for the deduplication design — if your pipeline ever sends more than 1 alert per second to the same chat, subsequent messages will be throttled. The deduplication in the notifier.py code ensures each unique finding only generates one alert, keeping you well under any rate limits. In production, your pipeline will typically generate 0–5 alerts per run, making Telegram perfectly suitable for this use case.

📸 Screenshot the test message appearing in your Telegram chat. Post to #bb-automation on Discord with tag #notification-layer.
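One way to answer Step 5's final question: pace consecutive sends to the same chat with a minimum interval. This is a sketch under the assumption of a single chat and a single pipeline process; `paced_send` and `MIN_INTERVAL` are illustrative names, not part of notifier.py:

```python
import time

MIN_INTERVAL = 1.1   # seconds; stays under Telegram's ~1 msg/sec per-chat limit
_last_sent = 0.0

def paced_send(send_fn, *args):
    """Call send_fn, sleeping first if the previous send was too recent."""
    global _last_sent
    gap = time.monotonic() - _last_sent
    if gap < MIN_INTERVAL:
        time.sleep(MIN_INTERVAL - gap)
    result = send_fn(*args)
    _last_sent = time.monotonic()
    return result

# Usage sketch: paced_send(send_telegram, finding, BOT_TOKEN, CHAT_ID)
```

In practice the finding-hash deduplication already keeps alert volume to a handful per run, so this pacing only matters on the rare run that surfaces several high-severity findings at once.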

Telegram Alert — Layer 4 Notification in Production
🔴 Bug Bounty Alert — CRITICAL
Template: exposed-panel
Target: payments-v3-test.target.com
URL: https://payments-v3-test.target.com/admin/
Received: 03:54:17 · Pipeline run #847

✅ New subdomain discovered 03:49 → Nuclei found exposed admin panel 03:52 → Alert sent 03:54
Manual hunter checks scope next morning — endpoint already patched by then

📸 A production Layer 4 Telegram alert. The pipeline discovered a new subdomain (payments-v3-test.target.com) at 03:49, ran Nuclei against it at 03:52, and sent a Critical alert at 03:54. The entire chain — discovery to notification — took 5 minutes at 3am with no human involvement. This is the window that separates top-earning hunters from manual ones: the new test payment endpoint was deployed that evening and patched the following morning, leaving a 6-hour discovery window that only continuous automation can reliably capture.

🧠 QUICK CHECK — Bug Bounty Automation Python 2026

Your Layer 1 recon script discovers 50 new subdomains this run. Your Layer 4 Telegram bot sends you 50 alerts within 10 minutes. You’re getting similar volumes every run. What is the most important architectural fix and why?



📋 Bug Bounty Automation Python — Key Architecture Reference

Layer 1: subfinder → Python SQLite dedup → new_subdomains[] · Continuous recon: only returns genuinely new assets each run
Layer 2: httpx fingerprint → SHA-256 body hash → changed_assets[] · Change detection: acts on new hosts and response changes only
Layer 3: nuclei -l urls.txt -j -severity medium,high,critical · Vulnerability scan: JSON output, parse and filter programmatically
Layer 4: Telegram API + SQLite dedup → high/critical only · Notification: deduplicated, severity-filtered, context-rich alerts
DomainRateLimiter(rps=2.0).wait(domain) · Rate limiting: token bucket, 2 req/sec per domain, programme-safe
0 */6 * * * python3 pipeline.py >> pipeline.log 2>&1 · Cron schedule: every 6 hours, continuous coverage while you sleep

❓ Frequently Asked Questions — Bug Bounty Automation Python 2026

Is bug bounty automation allowed on all programmes?
No. Many programmes explicitly prohibit automated scanning or require prior written approval. Read the programme policy before deploying any automation. Platform-level policies (HackerOne, Bugcrowd) supplement individual programme rules. Violating automation policies can result in permanent programme bans and forfeiture of payouts. When in doubt, email the programme team first.
How much does a continuous bug bounty automation VPS cost?
$5–20/month on Hetzner or DigitalOcean covers a basic single-VPS pipeline running subfinder, httpx, and Nuclei every 6 hours. A more complete setup (multiple workers, VPN exit, dedicated notification server) runs $30–80/month. A single medium-severity payout of $300–500 offsets a full year of infrastructure costs.
What Python libraries are essential for this pipeline?
httpx (async HTTP, better than requests for concurrent fingerprinting), sqlite3 (stdlib, state management and deduplication), subprocess (wrapping Go tools like subfinder and Nuclei), asyncio (concurrent I/O where needed), and the Telegram HTTP API via httpx for notifications. Keep dependencies minimal — fewer packages means fewer breakage points.
Can automation replace manual testing?
No. Automation handles coverage — scanning thousands of endpoints for known patterns continuously. Manual testing handles depth — complex logic flaws, vulnerability chains, and context-dependent findings. Top earners use both: automation for scale and 24/7 monitoring, manual work for creative exploitation of what the pipeline surfaces.
How do I avoid getting banned for automated scanning?
Rate limit to 2 req/sec per domain. Read each programme’s automation policy. Back off immediately on 429 responses. Never run destructive tools (sqlmap, XSS fuzzers) without explicit programme approval for automated testing. Stop immediately if you receive any message from a programme team about your scanning activity.
What is the typical opportunity window for new bug bounty assets?
New subdomains and endpoints are most vulnerable in the first 24–72 hours after deployment — before the security team reviews the new asset. Continuous recon every 6 hours puts you in that window. Manual hunters checking weekly are often 4–14 days behind on new asset discovery, missing the highest-value window entirely.
📚 Further Reading

  • Day 6: Subdomain Enumeration Bug Bounty — The manual foundation that this automation layer wraps — understanding how subfinder, amass, and crt.sh work before automating them makes your pipeline more effective and debuggable.
  • Subdomain Enumeration Tools 2026 — Comprehensive comparison of the best subdomain enumeration tools — subfinder, amass, dnsx, and others — with performance benchmarks and use case guidance.
  • 60-Day Bug Bounty Mastery Course — Full course hub — automation is the capstone of the 60-day methodology, built on the manual skills established in Days 1–25.
  • Nuclei — GitHub Repository — Official Nuclei repository with full template library, JSON output documentation, and integration examples for building pipelines like the one in this article.
  • Subfinder — GitHub Repository — Official subfinder repository with API key configuration guide, passive source list, and output format documentation for Python integration.
Mr Elite
Owner, SecurityElites.com
I built my first Python bug bounty automation script in 2021 after getting beaten to three critical findings in a single week on the same programme. Someone found a staging environment subdomain about twelve hours before I got there, and it had an exposed admin panel with default credentials. I had been checking that programme manually every few days. They were running continuous recon. That frustration turned into six months of building, testing, and iterating on the pipeline described here. My first full month running it produced more findings than my previous six months of manual hunting combined — not because the automation was smarter than me, but because it was awake when I wasn’t.




Why Bug Bounty Automation Is Now Non-Negotiable

Programme scopes have grown dramatically. A typical enterprise VDP in 2026 covers hundreds of wildcard domains, multiple cloud infrastructure footprints, mobile applications, internal tools exposed through SSO, and APIs. The attack surface is enormous and constantly changing. New microservices deploy daily. Developer test environments appear and disappear. Feature branches get accidentally exposed to the internet. Every one of these events is a potential bug bounty finding, and the window of opportunity — the time between deployment and hardening — is often 12–48 hours.

Manual hunters process 10–20 targets per day with focused attention. Automated hunters process 500–2,000 targets per day continuously. This is not a skill differential — it is a throughput differential. The arithmetic is straightforward: a hunter checking new assets manually once a week is competing against automation that checks the same scope every six hours. On a programme with hundreds of wildcard domains, the manual hunter sees roughly 1% of the new asset activity that continuous automation captures. The findings follow the same distribution.
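The coverage arithmetic above can be made concrete with rough, illustrative numbers. The asset count, the manual-target midpoint, and the 24-hour opportunity window used here are assumptions for the sake of the sketch, not measurements:

```python
# Illustrative back-of-envelope numbers for the coverage gap described above.
scope_assets = 1500            # assumed total assets on a large wildcard programme
manual_targets_per_day = 15    # midpoint of the 10-20 manual range

daily_coverage = manual_targets_per_day / scope_assets
print(f"manual daily coverage: {daily_coverage:.1%}")  # prints: manual daily coverage: 1.0%

# Timing: chance of catching an asset that is interesting for 24h after deploy.
window_h = 24
for label, gap_h in [("weekly manual check", 7 * 24), ("6-hour automation", 6)]:
    catch = min(1.0, window_h / gap_h)
    print(f"{label}: {catch:.0%} chance of landing inside the asset's window")
```

With these assumed numbers, a weekly manual check lands inside a 24-hour window about 14% of the time; a 6-hour cadence lands inside it every time.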

The automation threshold for serious bug bounty is lower than most hunters expect. You do not need a sophisticated multi-server distributed system to gain most of the throughput benefit. A single Python script running on a $5/month VPS, executing a four-tool pipeline every six hours, provides the most valuable capability: continuous monitoring of new asset discovery and automatic alerting when something interesting appears. The code in this article implements exactly that pipeline.

Manual vs Automated Bug Bounty — Daily Target Coverage
Manual Hunter
Targets/day: 10–20
New asset check: Weekly
Response time: Days after deploy
Coverage: ~1% of scope
Human attention = quality depth

Automated Pipeline
Targets/day: 500–2,000
New asset check: Every 6 hours
Response time: Hours after deploy
Coverage: Full scope width
Automation = breadth at scale

📸 The throughput reality of manual vs automated bug bounty. Manual hunting provides depth and creative exploitation — qualities automation cannot replicate. Automation provides continuous full-scope coverage that no manual process can match. Top earners use both: the pipeline handles discovery and initial triage 24/7, human attention handles deep exploitation of whatever the pipeline surfaces. Missing either layer leaves significant earning potential on the table.


Pipeline Architecture — The Four Layers

The bug bounty automation pipeline I use and recommend has four layers. Each layer has a single responsibility. Each feeds the next. Each can be improved or replaced independently without breaking the others. This separation of concerns is what makes the pipeline maintainable as you scale to more programmes and more targets over time.

Layer 1 is continuous recon — discovering new subdomains and assets as they appear. Layer 2 is change detection — tracking which assets are live HTTP services and flagging when their responses change. Layer 3 is vulnerability detection — running templated and custom checks against assets that warrant scanning. Layer 4 is triage and notification — filtering the findings to only actionable ones and pushing them to wherever you work. The critical design principle is that each layer acts only on genuinely new information — Layer 2 only processes new subdomains from Layer 1, Layer 3 only scans assets that are new or changed, Layer 4 only sends alerts for findings not already seen. This prevents the pipeline from endlessly reprocessing known data and flooding your notification channel with noise.
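The "act only on genuinely new information" principle can be sketched as a generic seen-set filter that every layer applies before doing work. This is a toy in-memory version for illustration; the real pipeline keeps its seen-set in SQLite so state survives restarts:

```python
def only_new(items: list[str], seen: set[str]) -> list[str]:
    """Return only items not in `seen`, recording them so the next run skips them."""
    fresh = [item for item in items if item not in seen]
    seen.update(fresh)
    return fresh

seen_subs: set[str] = set()
print(only_new(["a.example.com", "b.example.com"], seen_subs))  # both are new
print(only_new(["a.example.com", "c.example.com"], seen_subs))  # only c is new
```

Every layer applies the same pattern to its own kind of item: Layer 1 to subdomains, Layer 2 to body hashes, Layer 4 to finding hashes.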


Layer 1: Continuous Subdomain Recon with SQLite Deduplication

The recon layer wraps subfinder — a Go tool that queries dozens of passive DNS and certificate transparency sources simultaneously. Python’s subprocess module calls subfinder, parses its stdout output, and stores discovered subdomains in a local SQLite database. The database serves as the deduplication layer: the subdomain column carries a PRIMARY KEY constraint, so inserting an already-known subdomain raises sqlite3.IntegrityError. Catching that error is how we distinguish genuinely new discoveries from re-discovered known assets.

Running this layer every six hours means a new subdomain that appears at 2am is in your pipeline by 2am. The SQLite first_seen timestamp tells you exactly when each asset was discovered, which is useful for demonstrating to programmes that you found a vulnerability on a recently deployed asset — the timestamp is part of the bug bounty report timeline evidence. The layer only returns the list of new subdomains from each run, so subsequent pipeline layers never see duplicates.

LAYER 1 — CONTINUOUS RECON PYTHON MODULE
# recon_layer.py — subdomain discovery with deduplication
import subprocess, sqlite3, datetime
from pathlib import Path

DB_PATH = Path("~/.bb_pipeline/recon.db").expanduser()
DB_PATH.parent.mkdir(parents=True, exist_ok=True)

def init_db():
    conn = sqlite3.connect(DB_PATH)
    conn.execute("""CREATE TABLE IF NOT EXISTS subdomains (
        subdomain TEXT PRIMARY KEY,
        domain TEXT NOT NULL,
        first_seen TEXT NOT NULL
    )""")
    conn.commit()
    return conn

def run_subfinder(domain: str) -> list[str]:
    result = subprocess.run(
        ["subfinder", "-d", domain, "-silent", "-all"],
        capture_output=True, text=True, timeout=300
    )
    return [line.strip() for line in result.stdout.splitlines() if line.strip()]

def store_and_filter_new(domain: str, subdomains: list[str]) -> list[str]:
    """Returns only genuinely new subdomains — never seen before."""
    conn = init_db()
    now = datetime.datetime.now(datetime.timezone.utc).isoformat()
    new = []
    for sub in subdomains:
        try:
            conn.execute("INSERT INTO subdomains VALUES (?,?,?)", (sub, domain, now))
            new.append(sub)  # only appended if the insert succeeded (no duplicate)
        except sqlite3.IntegrityError:
            pass  # already in DB — silently skip
    conn.commit()
    return new

def recon_domain(domain: str) -> list[str]:
    all_subs = run_subfinder(domain)
    new_subs = store_and_filter_new(domain, all_subs)
    print(f"[recon] {domain}: {len(all_subs)} total, {len(new_subs)} NEW")
    return new_subs

SQLite Deduplication is the Single Most Important Design Decision: Without it, every pipeline run re-processes every subdomain from scratch — flooding your notification queue with duplicates and making signal detection impossible. The INSERT OR IGNORE pattern (or IntegrityError catch) ensures only genuinely new assets bubble up to Layer 2 and beyond. This single 5-line function is what separates a noisy script from a professional pipeline.
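The INSERT OR IGNORE variant mentioned above avoids the try/except entirely: sqlite3 reports how many rows a statement actually modified, so a rowcount of 1 identifies a genuinely new asset. A minimal sketch against an in-memory database:

```python
import sqlite3, datetime

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE subdomains (subdomain TEXT PRIMARY KEY, domain TEXT, first_seen TEXT)")

def insert_if_new(sub: str, domain: str) -> bool:
    """INSERT OR IGNORE variant: cursor.rowcount is 1 only when the row was new,
    0 when the PRIMARY KEY conflict caused the insert to be ignored."""
    now = datetime.datetime.now(datetime.timezone.utc).isoformat()
    cur = conn.execute("INSERT OR IGNORE INTO subdomains VALUES (?,?,?)", (sub, domain, now))
    return cur.rowcount == 1

assert insert_if_new("a.example.com", "example.com") is True   # new asset
assert insert_if_new("a.example.com", "example.com") is False  # silently ignored
```

Both approaches are equivalent for this pipeline; the rowcount form reads slightly cleaner when you batch inserts.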
Layer 1 Pipeline Output — New vs Known Subdomain Detection
$ python3 recon_runner.py --domain target.com
[06:00:01] subfinder found 847 subdomains for target.com
[06:00:03] DB check: 844 already known, 3 NEW
[06:00:03] NEW: staging-api2.target.com
[06:00:03] NEW: internal-beta.target.com
[06:00:03] NEW: payments-v3-test.target.com
↑ 844 known subdomains silently skipped · 3 new pass to Layer 2
✅ “payments-v3-test” discovered 4 hours after deployment — before security review

📸 Layer 1 deduplication in action. Of 847 subdomains discovered, 844 were already in the SQLite database from previous runs and are silently skipped. Only the 3 genuinely new subdomains proceed to Layer 2 for HTTP probing. The subdomain “payments-v3-test.target.com” was discovered 4 hours after it was deployed — before the security team had reviewed it. This is the window of opportunity that automation captures and manual hunting misses entirely.

🛠️ EXERCISE 1 — BROWSER (15 MIN · NO INSTALL)
Explore the Bug Bounty Automation Tool Ecosystem

⏱️ 15 minutes · Browser only — no install required

Step 1: Explore ProjectDiscovery’s toolkit
Go to: https://github.com/projectdiscovery
Find and read the README for these 5 tools:
subfinder / httpx / nuclei / dnsx / naabu
For each, write down: (a) what it does, (b) its input format, (c) its output format.
Understanding I/O formats is the foundation of building pipelines.

Step 2: Understand the pipeline concept on GitHub
Search GitHub for: bug bounty automation python
Find one open-source pipeline (examples: trickest/resolvers, tomnomnom/hacks, or similar).
Examine its architecture: What triggers it? How does it store state?
How many tools does it chain? What notification mechanism does it use?

Step 3: Read a programme automation policy
Go to: https://hackerone.com/programs
Pick any public programme with a large scope.
Find the programme’s policy page and look for the automation section.
Common phrases: “automated scanning tools”, “rate limiting”, “prior written approval”.
Does this programme explicitly allow automation? Explicitly prohibit it? Say nothing?

Step 4: Calculate your opportunity window
Pick a programme scope with 5 wildcard domains.
Estimate: if each domain has 50–100 subdomains on average,
and each subdomain has 3–5 discoverable endpoints,
how many total assets does this programme have?
If you checked manually once per week vs automated every 6 hours,
how many cycles of new-asset discovery would you miss per week manually?

✅ What you just learned: Understanding I/O formats is what makes tool chaining possible — subfinder outputs one subdomain per line, httpx reads the same format, Nuclei reads URLs from a file. This is the Unix philosophy of composable tools and it is why ProjectDiscovery’s stack works so well together. The programme policy research is critical — automation without reading the policy is the fastest way to get banned from programmes you want to work on. The arithmetic exercise demonstrates why manual hunters cannot compete on scope coverage against automated pipelines.

📸 Screenshot the 5-tool I/O format table you created and the programme automation policy section you found. Post to #bb-automation on Discord.


Layer 2: HTTP Asset Discovery and Change Detection

Subdomain discovery gives you hostnames. Layer 2 converts those hostnames to live HTTP services and builds fingerprints of each one. A fingerprint captures the essential identity of a service’s response: status code, page title, server header, and a hash of the response body. The hash is the critical element for change detection. When the pipeline runs again in six hours, it fetches the same endpoints and computes new hashes. Any hash change signals something changed on that endpoint — a new deployment, modified configuration, added functionality, or removed authentication. Each change is worth investigating manually.

The Python httpx library is the right choice for this layer because it supports both synchronous and asynchronous request models, maintains efficient connection pools, and handles redirects cleanly. Running fifty concurrent workers with ThreadPoolExecutor lets you fingerprint 500 hosts in roughly a minute on a standard VPS — fast enough that the full pipeline completes well within the six-hour cron interval even for large scopes.

LAYER 2 — HTTPX FINGERPRINTING AND CHANGE DETECTION
# change_detector.py
import httpx, hashlib, sqlite3, re
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

DB_PATH = Path("~/.bb_pipeline/recon.db").expanduser()

def fingerprint(host: str) -> dict:
    for scheme in ("https", "http"):
        try:
            r = httpx.get(f"{scheme}://{host}", timeout=10,
                          follow_redirects=True,
                          headers={"User-Agent": "Mozilla/5.0"})
            title_match = re.search(r"<title>([^<]{0,120})</title>", r.text, re.I)
            return {
                "host": host, "url": str(r.url),
                "status": r.status_code,
                "title": title_match.group(1).strip() if title_match else "",
                "server": r.headers.get("server", ""),
                "body_hash": hashlib.sha256(r.content).hexdigest()[:16],
                "length": len(r.content),
            }
        except Exception:
            continue
    return {"host": host, "error": "unreachable"}

def detect_changes(hosts: list[str]) -> list[dict]:
    """Returns fingerprints for new hosts and hosts whose body hash changed."""
    conn = sqlite3.connect(DB_PATH)
    conn.execute("""CREATE TABLE IF NOT EXISTS fingerprints (
        host TEXT PRIMARY KEY, url TEXT, status INT,
        title TEXT, server TEXT, body_hash TEXT, length INT
    )""")
    conn.commit()
    with ThreadPoolExecutor(max_workers=50) as pool:
        results = list(pool.map(fingerprint, hosts))
    actionable = []
    for fp in results:
        if "error" in fp:
            continue
        row = conn.execute("SELECT body_hash FROM fingerprints WHERE host=?",
                           (fp["host"],)).fetchone()
        if row is None or row[0] != fp["body_hash"]:
            actionable.append(fp)  # new host OR response changed
        conn.execute("INSERT OR REPLACE INTO fingerprints VALUES (?,?,?,?,?,?,?)",
                     (fp["host"], fp["url"], fp["status"], fp["title"],
                      fp["server"], fp["body_hash"], fp["length"]))
    conn.commit()
    return actionable


Layer 3: Nuclei Integration and Custom Python Checks

Layer 3 is where the pipeline does actual vulnerability detection. Nuclei handles the commodity checks — its template library covers thousands of known vulnerability patterns including exposed admin panels, default credentials, known CVEs in popular software, misconfigured security headers, and exposed sensitive files. Running Nuclei against the list of new and changed assets from Layer 2 gives you comprehensive coverage of known patterns at scale, automatically, on every new asset that appears.

Custom Python checks are where your edge over other automation users comes from. Every programme-specific pattern you identify through manual testing can be encoded as a Python function that runs alongside Nuclei. Examples: a check that looks for hardcoded AWS keys in JavaScript files (a pattern Nuclei templates cover broadly, but a custom check can apply programme-specific naming conventions), a check for specific API versioning endpoints you’ve identified as frequently misconfigured in this programme’s stack, or a check for headers or response patterns unique to this company’s internal tooling that external Nuclei templates would never know about.

LAYER 3 — NUCLEI WRAPPER AND CUSTOM SECRET SCANNER
# vuln_layer.py
import subprocess, json, tempfile, os, re
import httpx

def run_nuclei(urls: list[str], severity: str = "medium,high,critical") -> list[dict]:
    """Pipe URL list to nuclei, return parsed JSON findings."""
    with tempfile.NamedTemporaryFile(mode="w", suffix=".txt", delete=False) as f:
        f.write("\n".join(urls))
        tmp = f.name
    result = subprocess.run(
        ["nuclei", "-l", tmp, "-severity", severity, "-j",
         "-silent", "-rate-limit", "50", "-timeout", "10"],
        capture_output=True, text=True
    )
    os.unlink(tmp)
    findings = []
    for line in result.stdout.splitlines():
        try:
            findings.append(json.loads(line))
        except json.JSONDecodeError:
            pass
    return findings

SECRET_PATTERNS = {
    "aws_access_key": r"AKIA[0-9A-Z]{16}",
    "github_pat": r"ghp_[A-Za-z0-9]{36}",
    "stripe_live": r"sk_live_[0-9a-zA-Z]{24}",
    "sendgrid_key": r"SG\.[A-Za-z0-9_-]{22}\.[A-Za-z0-9_-]{43}",
}

def scan_js_for_secrets(url: str) -> dict | None:
    """Fetch a URL, scan response text for hardcoded secrets."""
    try:
        r = httpx.get(url, timeout=10, follow_redirects=True)
        for key_type, pattern in SECRET_PATTERNS.items():
            match = re.search(pattern, r.text)
            if match:
                return {"url": url, "type": key_type,
                        "match_preview": match.group()[:8] + "…"}
    except Exception:
        pass
    return None
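The orchestrator later in the article only feeds Nuclei output to Layer 4, so one way to wire in the custom check is to shape each scan_js_for_secrets() hit like a Nuclei JSON finding. The field names below mirror the ones the notifier reads; treating every secret hit as high severity is an assumption you may want to refine per key type:

```python
def secret_to_finding(hit: dict) -> dict:
    """Shape a scan_js_for_secrets() hit like a Nuclei -j finding so the
    Layer 4 notifier can treat both sources uniformly. Field names mirror
    those used in notifier.py; the severity choice is an assumption."""
    return {
        "template-id": f"custom-secret-{hit['type']}",
        "matched-at": hit["url"],
        "host": hit["url"],
        "info": {"severity": "high"},
    }

f = secret_to_finding({"url": "https://x.test/app.js", "type": "aws_access_key",
                       "match_preview": "AKIAABCD…"})
assert f["template-id"] == "custom-secret-aws_access_key"
assert f["info"]["severity"] == "high"
```

With this adapter, the main loop can append custom findings to the Nuclei list and send everything through the same deduplicated notification path.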


Layer 4: Triage Filtering and Telegram Notifications

The notification layer is where most automation pipelines fail. Generating findings is easy. Generating actionable, deduplicated, severity-filtered notifications that tell you exactly what to look at — without burying the critical alert in a flood of informational noise — requires deliberate design. A notification system that sends every Nuclei finding, including informational and low severity, will train you to ignore the channel. When the critical finding arrives, you won’t see it because you’ve learned to dismiss the noise.

The filtering rules are simple: only notify on high and critical severity findings. Only notify on a specific finding once — store sent findings in SQLite and check before every notification. Include enough context in the message to make the alert self-contained: target URL, template ID, severity, and a deep link to the Nuclei template documentation. The goal is that when a Telegram notification arrives at 3am, you can read it, immediately understand what it is and why it matters, and decide in thirty seconds whether it warrants waking up to investigate.

LAYER 4 — TELEGRAM NOTIFICATION WITH DEDUPLICATION
# notifier.py
import httpx, hashlib, sqlite3
from pathlib import Path

DB_PATH = Path("~/.bb_pipeline/recon.db").expanduser()
ALERT_SEVERITIES = {"high", "critical"}

def _finding_hash(finding: dict) -> str:
    key = f"{finding.get('template-id', '')}-{finding.get('matched-at', '')}"
    return hashlib.md5(key.encode()).hexdigest()

def _already_sent(conn: sqlite3.Connection, fhash: str) -> bool:
    conn.execute("CREATE TABLE IF NOT EXISTS sent_alerts (hash TEXT PRIMARY KEY)")
    row = conn.execute("SELECT 1 FROM sent_alerts WHERE hash=?", (fhash,)).fetchone()
    return row is not None

def send_telegram(finding: dict, bot_token: str, chat_id: str) -> bool:
    severity = finding.get("info", {}).get("severity", "unknown").lower()
    if severity not in ALERT_SEVERITIES:
        return False  # skip info, low, medium
    conn = sqlite3.connect(DB_PATH)
    fhash = _finding_hash(finding)
    if _already_sent(conn, fhash):
        return False  # dedup — already notified
    emoji = "🔴" if severity == "critical" else "🟠"
    msg = (f"{emoji} <b>Bug Bounty Alert — {severity.upper()}</b>\n"
           f"Template: {finding.get('template-id', 'custom')}\n"
           f"Target: {finding.get('host', '')}\n"
           f"URL: {finding.get('matched-at', '')}")
    resp = httpx.post(
        f"https://api.telegram.org/bot{bot_token}/sendMessage",
        data={"chat_id": chat_id, "text": msg, "parse_mode": "HTML"},
    )
    if resp.status_code == 200:
        conn.execute("INSERT INTO sent_alerts VALUES (?)", (fhash,))
        conn.commit()
    return resp.status_code == 200


Rate Limiting, Scope Validation, and Responsible Automation

Unthrottled automation has ended bug bounty careers. Programmes have banned hunters permanently for running aggressive scanners that caused load issues on production systems. The rule is simple and non-negotiable: your automation must respect the programme’s systems as if they were your own. This means explicit rate limiting at the per-domain level, immediate backoff on 429 responses, and reading every programme’s automation policy before deployment.

Two requests per second per domain is a safe baseline that avoids triggering rate limiting on most web applications while still giving you reasonable throughput. The token bucket implementation below ensures this limit is maintained across concurrent workers — without it, fifty concurrent httpx requests can burst to fifty requests per second against a single domain, which will get you banned and may cause real service degradation.

RATE LIMITER — TOKEN BUCKET IMPLEMENTATION
# rate_limiter.py — per-domain token bucket
import time, threading
from collections import defaultdict

class DomainRateLimiter:
    def __init__(self, rps: float = 2.0):
        self.rps = rps
        self._tokens = defaultdict(lambda: 1.0)
        self._last = defaultdict(float)
        self._lock = threading.Lock()

    def wait(self, domain: str) -> None:
        with self._lock:
            now = time.monotonic()
            elapsed = now - self._last[domain]
            self._tokens[domain] = min(1.0, self._tokens[domain] + elapsed * self.rps)
            wait_time = 0.0
            if self._tokens[domain] < 1.0:
                wait_time = (1.0 - self._tokens[domain]) / self.rps
            self._tokens[domain] -= 1.0
            self._last[domain] = time.monotonic()
        # Sleep outside the lock so a wait on one throttled domain never
        # blocks concurrent workers hitting other domains.
        if wait_time > 0:
            time.sleep(wait_time)

# Usage: limiter = DomainRateLimiter(rps=2.0)
# Before every request: limiter.wait(tldextract.extract(url).registered_domain)
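The back-off-on-429 rule from this section is easy to bolt onto the limiter. A sketch with the HTTP call injected as a callable so it works with httpx.get or any stand-in; honouring Retry-After assumes the target sends that header, and the exponential fallback delays are arbitrary choices:

```python
import time

def polite_get(get, url: str, limiter, domain: str, max_retries: int = 3):
    """Rate-limited GET that backs off on HTTP 429. `get` is any callable
    returning an object with .status_code and .headers (e.g. httpx.get);
    `limiter` is a DomainRateLimiter as defined above."""
    for attempt in range(max_retries):
        limiter.wait(domain)
        r = get(url)
        if r.status_code != 429:
            return r
        # Honour the server's Retry-After header if present,
        # otherwise back off exponentially: 1s, 2s, 4s...
        time.sleep(float(r.headers.get("retry-after", 2 ** attempt)))
    return None  # still throttled after retries; skip this URL for now
```

Returning None rather than raising keeps a single stubborn host from aborting a full pipeline run; the caller just logs it and moves on.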

⚠️ Read Programme Policies First: Before deploying any automation against a bug bounty programme, read the programme’s policy page specifically for automation rules. Phrases like “automated scanning requires prior written approval,” “do not run automated scanners,” or “rate limit to X requests per minute” are legally binding conditions of your participation. Violating them can result in permanent programme bans and forfeiture of any pending payouts. When in doubt, email the programme team and ask permission before running.

VPS Deployment and Cron Scheduling

The full pipeline runs as a single orchestrator script that imports and calls each layer in sequence. A cron job on a $5/month VPS executes this script every six hours. The VPS needs: Python 3.10+, subfinder, httpx, Nuclei, and your pipeline code. The setup is straightforward and the monthly cost is offset by a single low-severity bug bounty payout.

PIPELINE ORCHESTRATOR AND CRON DEPLOYMENT
# pipeline.py — main orchestrator
import os, sqlite3
from recon_layer import recon_domain, DB_PATH
from change_detector import detect_changes
from vuln_layer import run_nuclei
from notifier import send_telegram

BOT_TOKEN = os.environ["TG_BOT_TOKEN"]
CHAT_ID = os.environ["TG_CHAT_ID"]
TARGETS = ["example.com", "target.com"]  # programme-approved domains

def main():
    all_new_hosts = []
    for domain in TARGETS:
        all_new_hosts.extend(recon_domain(domain))
    print(f"[pipeline] Total new subdomains this run: {len(all_new_hosts)}")
    # Probe every known subdomain, not just this run's discoveries, so that
    # Layer 2 also flags response changes on existing hosts.
    conn = sqlite3.connect(DB_PATH)
    known_hosts = [row[0] for row in conn.execute("SELECT subdomain FROM subdomains")]
    changed_assets = detect_changes(known_hosts)
    print(f"[pipeline] Actionable assets (new/changed): {len(changed_assets)}")
    if not changed_assets:
        return  # nothing new or changed — stop early, save resources
    urls = [fp["url"] for fp in changed_assets if "url" in fp]
    nuclei_findings = run_nuclei(urls)
    for finding in nuclei_findings:
        send_telegram(finding, BOT_TOKEN, CHAT_ID)

if __name__ == "__main__":
    main()

# Cron entry (add via `crontab -e`) — run every 6 hours:
# 0 */6 * * * cd /opt/bb_pipeline && python3 pipeline.py >> /var/log/bb_pipeline.log 2>&1
# Environment variables for secrets (in the crontab or a sourced profile):
# export TG_BOT_TOKEN="your_bot_token_here"
# export TG_CHAT_ID="your_chat_id_here"
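For reference, a hedged provisioning sketch for a fresh Debian/Ubuntu VPS. The `go install` module paths are the ones documented in each ProjectDiscovery repository at the time of writing, and newer Debian releases may require `pip3 install --break-system-packages` or a virtualenv; verify both against the current READMEs before running:

```shell
# bootstrap.sh — one-time VPS setup sketch (Debian/Ubuntu assumed)
sudo apt update && sudo apt install -y python3 python3-pip golang-go
pip3 install httpx                     # the only third-party Python dependency

# Go tools land in ~/go/bin: ensure that is on PATH (including cron's PATH)
go install github.com/projectdiscovery/subfinder/v2/cmd/subfinder@latest
go install github.com/projectdiscovery/httpx/cmd/httpx@latest
go install github.com/projectdiscovery/nuclei/v3/cmd/nuclei@latest
export PATH="$PATH:$HOME/go/bin"

mkdir -p /opt/bb_pipeline ~/.bb_pipeline
# Copy recon_layer.py, change_detector.py, vuln_layer.py, notifier.py,
# rate_limiter.py and pipeline.py into /opt/bb_pipeline, then add the cron entry.
```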

🧠 EXERCISE 2 — THINK LIKE AN AUTOMATION ARCHITECT (12 MIN)
Design Decisions — Solve Three Architecture Problems

⏱️ 12 minutes · Paper or text editor — no tools

Work through these three architecture decisions that every bug bounty
automation builder faces. For each, choose your answer and justify it
with specific technical reasoning.

PROBLEM 1 — Storage:
You need to store subdomain history, HTTP fingerprints, and sent alerts.
Options: SQLite (local file) vs PostgreSQL (server) vs Redis (in-memory)
Factors to consider: concurrent write safety, query patterns needed,
operational complexity, recovery after VPS restart.
Which do you choose for a single-VPS pipeline and why?

PROBLEM 2 — False Positive Rate:
Your Nuclei integration produces 50 findings per week.
After manual review, only 8 are real vulnerabilities.
The 42 false positives waste your time and train you to ignore alerts.
Describe three technical mechanisms (NOT template-level changes)
you would implement at the pipeline layer to reduce false positive rate.

PROBLEM 3 — What NOT to Automate:
Name three vulnerability classes you would NOT include in automated scanning
and handle only manually. For each, explain: why automated scanning creates
unacceptable risk (legal, technical, or policy-level), and how you would
prioritise manual testing for that class instead.

✅ ANSWER GUIDANCE — Problem 1: SQLite is the correct choice for a single-VPS pipeline. PostgreSQL adds operational overhead (service management, backups, connection pooling) that single-VPS pipelines don’t justify. Redis loses state on restart unless persisted. SQLite requires no separate service, survives VPS restarts automatically, and has sufficient concurrent write performance for this use case since writes happen sequentially within each pipeline run. Problem 2: Three mechanisms — (1) Require two independent findings from two separate runs before alerting (reduces scan-artifact noise), (2) Implement a confidence scoring system that requires additional verification HTTP requests before classifying a finding as actionable, (3) Maintain a personal false-positive register keyed on template-ID + host-pattern; suppress future alerts for combinations you’ve previously verified as false. Problem 3: SQL injection (high risk of causing data damage in production, many programmes explicitly exclude automated SQLi scanners), authentication endpoints (repeated auth attempts trigger account lockouts and security alerts, possible DoS), payment processing endpoints (any automated interaction with payment flows risks real financial impact).

📸 Post your architecture decision answers to #bb-automation on Discord — especially your false positive reduction techniques.

🛠️ EXERCISE 3 — BROWSER ADVANCED (20 MIN)
Set Up a Telegram Bot and Validate the Notification Architecture

⏱️ 20 minutes · Browser + Telegram app — no server required

Step 1: Create a Telegram Bot (free, takes 3 minutes)
Open Telegram. Search for @BotFather.
Send: /newbot
Follow the prompts — give it a name and username.
Save the bot token that BotFather provides. Format: 123456789:ABCdef…

Step 2: Get your Chat ID
In Telegram, send a message to your new bot.
Open this URL in your browser (replace BOT_TOKEN):
https://api.telegram.org/bot{BOT_TOKEN}/getUpdates
Find “chat”: {“id”: XXXXXXXXX} in the response — that’s your Chat ID.

Step 3: Send a test message via the API
In your browser address bar, open this URL (replace both values):
https://api.telegram.org/bot{BOT_TOKEN}/sendMessage?chat_id={CHAT_ID}&text=Pipeline+Test+OK

If a message appears in your Telegram chat, your notification layer works.

Step 4: Simulate a critical alert
Open this URL (replace BOT_TOKEN and CHAT_ID):
https://api.telegram.org/bot{BOT_TOKEN}/sendMessage?chat_id={CHAT_ID}&text=🔴+CRITICAL+exposed-admin-panel+target.example.com&parse_mode=HTML

Does the red circle emoji render? This is exactly what your pipeline
will send you at 3am when it finds something real.

Step 5: Research Telegram bot rate limits
Search: “Telegram bot API rate limits”
What is the maximum messages per second a bot can send?
What happens if you exceed it?
How would you handle this in the notifier.py code?

✅ What you just learned: You now have a working Telegram notification endpoint. The URL-based API test proves the notification layer works before writing a single line of pipeline code. Telegram’s bot API rate limit (30 messages per second globally, 1 message per second per chat) is important for the deduplication design — if your pipeline ever sends more than 1 alert per second to the same chat, subsequent messages will be throttled. The deduplication in the notifier.py code ensures each unique finding only generates one alert, keeping you well under any rate limits. In production, your pipeline will typically generate 0–5 alerts per run, making Telegram perfectly suitable for this use case.

📸 Screenshot the test message appearing in your Telegram chat. Post to #bb-automation on Discord with tag #notification-layer.
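As a sketch answer to Step 5's last question: a minimal per-chat throttle that could sit in front of the httpx.post call in notifier.py. The 1-second default reflects Telegram's per-chat guidance discussed above; treat it as an assumption to verify against the current Bot API documentation:

```python
import time

class ChatThrottle:
    """Enforce a minimum interval between messages to the same chat.
    Call wait(chat_id) before each send; the first send per chat never sleeps."""
    def __init__(self, min_interval: float = 1.0):
        self.min_interval = min_interval
        self._last: dict[str, float] = {}

    def wait(self, chat_id: str) -> None:
        now = time.monotonic()
        gap = now - self._last.get(chat_id, 0.0)
        if gap < self.min_interval:
            time.sleep(self.min_interval - gap)
        self._last[chat_id] = time.monotonic()
```

In practice the pipeline's deduplication already keeps alert volume far below the limit, so this is belt-and-braces for the rare run that surfaces many findings at once.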

Telegram Alert — Layer 4 Notification in Production
🔴 Bug Bounty Alert — CRITICAL
Template: exposed-panel
Target: payments-v3-test.target.com
URL: https://payments-v3-test.target.com/admin/
Received: 03:54:17 · Pipeline run #847

✅ New subdomain discovered 03:49 → Nuclei found exposed admin panel 03:52 → Alert sent 03:54
Manual hunter checks scope next morning — endpoint already patched by then

📸 A production Layer 4 Telegram alert. The pipeline discovered a new subdomain (payments-v3-test.target.com) at 03:49, ran Nuclei against it at 03:52, and sent a Critical alert at 03:54. The entire chain — discovery to notification — took 5 minutes at 3am with no human involvement. This is the window that separates top-earning hunters from manual ones: the new test payment endpoint was deployed that evening and patched the following morning, leaving a 6-hour discovery window that only continuous automation can reliably capture.

🧠 QUICK CHECK — Bug Bounty Automation Python 2026

Your Layer 1 recon script discovers 50 new subdomains this run. Your Layer 4 Telegram bot sends you 50 alerts within 10 minutes. You’re getting similar volumes every run. What is the most important architectural fix and why?



📋 Bug Bounty Automation Python — Key Architecture Reference

Layer 1: subfinder → Python SQLite dedup → new_subdomains[]
  Continuous recon — only returns genuinely new assets each run
Layer 2: httpx fingerprint → SHA-256 body hash → changed_assets[]
  Change detection — acts on new hosts and response changes only
Layer 3: nuclei -l urls.txt -j -severity medium,high,critical
  Vulnerability scan — JSON output, parse and filter programmatically
Layer 4: Telegram API + SQLite dedup → high/critical only
  Notification — deduplicated, severity-filtered, context-rich alerts
DomainRateLimiter(rps=2.0).wait(domain)
  Rate limiting — token bucket, 2 req/sec per domain, programme-safe
0 */6 * * * python3 pipeline.py >> pipeline.log 2>&1
  Cron schedule — every 6 hours, continuous coverage while you sleep
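The `DomainRateLimiter(rps=2.0).wait(domain)` line in the reference above could be implemented as a per-domain token bucket. The class name and call signature come from the reference table; the internals below (a `burst` parameter, monotonic-clock refill) are one possible sketch, not a definitive implementation:

```python
import time
from collections import defaultdict


class DomainRateLimiter:
    """Per-domain token bucket: each domain accrues tokens at `rps`
    per second up to `burst`; wait() blocks until a token is available,
    so no single target ever sees more than `rps` requests per second."""

    def __init__(self, rps=2.0, burst=2.0):
        self.rps = rps
        self.burst = burst
        self.tokens = defaultdict(lambda: burst)    # domain -> available tokens
        self.updated = defaultdict(time.monotonic)  # domain -> last refill time

    def wait(self, domain):
        while True:
            now = time.monotonic()
            # Refill tokens for the elapsed interval, capped at the burst size.
            elapsed = now - self.updated[domain]
            self.updated[domain] = now
            self.tokens[domain] = min(
                self.burst, self.tokens[domain] + elapsed * self.rps
            )
            if self.tokens[domain] >= 1.0:
                self.tokens[domain] -= 1.0
                return
            # Sleep just long enough for the next token to accrue.
            time.sleep((1.0 - self.tokens[domain]) / self.rps)
```

Because the buckets are keyed by domain, scanning ten programmes in one run still holds each individual target to 2 req/sec. On a 429 response you would additionally back off and lower `rps` for that domain, per the programme-safety guidance in the FAQ below.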

❓ Frequently Asked Questions — Bug Bounty Automation Python 2026

Is bug bounty automation allowed on all programmes?
No. Many programmes explicitly prohibit automated scanning or require prior written approval. Read the programme policy before deploying any automation. Platform-level policies (HackerOne, Bugcrowd) supplement individual programme rules. Violating automation policies can result in permanent programme bans and forfeiture of payouts. When in doubt, email the programme team first.
How much does a continuous bug bounty automation VPS cost?
$5–20/month on Hetzner or DigitalOcean covers a basic single-VPS pipeline running subfinder, httpx, and Nuclei every 6 hours. A more complete setup (multiple workers, VPN exit, dedicated notification server) runs $30–80/month. A single medium-severity payout of $300–500 offsets a full year of infrastructure costs.
What Python libraries are essential for this pipeline?
httpx (async HTTP, better than requests for concurrent fingerprinting), sqlite3 (stdlib, state management and deduplication), subprocess (wrapping Go tools like subfinder and Nuclei), asyncio (concurrent I/O where needed), and the Telegram HTTP API via httpx for notifications. Keep dependencies minimal — fewer packages means fewer breakage points.
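To make the "minimal dependencies" point concrete, here is a sketch of a Layer 1 run using only `subprocess` and `sqlite3` from the standard library. It assumes `subfinder` is on PATH (its real `-silent` flag prints one subdomain per line); the `subdomains` table schema and function names are illustrative, not prescribed by the article:

```python
import sqlite3
import subprocess


def run_subfinder(domain):
    """Wrap the subfinder binary; -silent emits one subdomain per line."""
    proc = subprocess.run(
        ["subfinder", "-d", domain, "-silent"],
        capture_output=True, text=True, timeout=600,
    )
    return {line.strip() for line in proc.stdout.splitlines() if line.strip()}


def dedup_new(con, found):
    """Insert discovered hosts; return only those never seen before."""
    con.execute("CREATE TABLE IF NOT EXISTS subdomains (name TEXT PRIMARY KEY)")
    new = []
    for sub in sorted(found):
        try:
            con.execute("INSERT INTO subdomains (name) VALUES (?)", (sub,))
            new.append(sub)  # first time this host has ever appeared
        except sqlite3.IntegrityError:
            pass  # already known from an earlier run
    con.commit()
    return new


def discover_new_subdomains(domain, db_path="recon.db"):
    """Layer 1 entry point: run recon and return only genuinely new assets."""
    con = sqlite3.connect(db_path)
    return dedup_new(con, run_subfinder(domain))
```

Each cron run therefore hands Layers 2–4 only the delta, which is what keeps alert volume at the 0–5-per-run level mentioned earlier.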
Can automation replace manual testing?
No. Automation handles coverage — scanning thousands of endpoints for known patterns continuously. Manual testing handles depth — complex logic flaws, vulnerability chains, and context-dependent findings. Top earners use both: automation for scale and 24/7 monitoring, manual work for creative exploitation of what the pipeline surfaces.
How do I avoid getting banned for automated scanning?
Rate limit to 2 req/sec per domain. Read each programme’s automation policy. Back off immediately on 429 responses. Never run destructive tools (sqlmap, XSS fuzzers) without explicit programme approval for automated testing. Stop immediately if you receive any message from a programme team about your scanning activity.
What is the typical opportunity window for new bug bounty assets?
New subdomains and endpoints are most vulnerable in the first 24–72 hours after deployment — before the security team reviews the new asset. Continuous recon every 6 hours puts you in that window. Manual hunters checking weekly are often 4–14 days behind on new asset discovery, missing the highest-value window entirely.
📚 Further Reading

  • Day 6: Subdomain Enumeration Bug Bounty — The manual foundation that this automation layer wraps — understanding how subfinder, amass, and crt.sh work before automating them makes your pipeline more effective and debuggable.
  • Subdomain Enumeration Tools 2026 — Comprehensive comparison of the best subdomain enumeration tools — subfinder, amass, dnsx, and others — with performance benchmarks and use case guidance.
  • 60-Day Bug Bounty Mastery Course — Full course hub — automation is the capstone of the 60-day methodology, built on the manual skills established in Days 1–25.
  • Nuclei — GitHub Repository — Official Nuclei repository with full template library, JSON output documentation, and integration examples for building pipelines like the one in this article.
  • Subfinder — GitHub Repository — Official subfinder repository with API key configuration guide, passive source list, and output format documentation for Python integration.
Mr Elite
Owner, SecurityElites.com
I built my first Python bug bounty automation script in 2021 after getting beaten to three critical findings in a single week on the same programme. Someone found a staging environment subdomain about twelve hours before I got there, and it had an exposed admin panel with default credentials. I had been checking that programme manually every few days. They were running continuous recon. That frustration turned into six months of building, testing, and iterating on the pipeline described here. My first full month running it produced more findings than my previous six months of manual hunting combined — not because the automation was smarter than me, but because it was awake when I wasn’t.
