You start a new bug bounty target. The first move is the recon chain. Subdomain enumeration. Host discovery. HTTP probing. Tech fingerprinting. JavaScript scraping for hardcoded secrets. Wordlist content brute-forcing. Three hours later you have a spreadsheet. A few misconfigs. A handful of tech banners. Nothing the triager actually wants to read.
That used to be my workflow on every target. It is not anymore.
The bugs that pay live in the same place they always have. They live on the main application. The thing already open in your browser the moment you pick the program. Not on a forgotten subdomain. Not in a JavaScript file pulled by a scraper. On the app.
In 2026, my recon collapsed into three things. The application. My proxy. And a way of reading what I see. No tools running in the background. No checklist. No spreadsheet. The first hour goes into understanding the app well enough that the bugs surface on their own.
What follows is the exact process. What I dropped, what replaced it, and the patterns that pull broken access control bugs out on almost every target.
Real bugs come from complexity. Complexity lives on the main application. Feature flags ship every week. User roles interact with each other. Access control logic is evaluated in a hundred different places. Business logic gets revised faster than anyone can audit it. That density of decisions is what produces the bugs that pay.
A forgotten subdomain might leak data. An old API banner might reveal a misconfig. Those are real findings. But the density of high-impact business logic bugs is highest on the application the developers actually care about. Broken access control specifically shows up almost every time. IDORs. Privilege escalations. Role confusion. Horizontal access failures. Reportable. Consistently paid.
The application was open in my browser the entire time the recon chain ran. The fastest way to find a bug on it was always to look at it.
Most recon advice splits into two activities. Asset discovery and content discovery. Both have a place. Neither belongs at the start of a target.
Asset discovery finds new surfaces: subdomains, hosts, things the program forgot about. Skip it on the first pass. The host you picked is already in front of you. If the application talks to another host, your proxy shows it the moment a request fires.
Content discovery finds something specific in a known place: hidden endpoints, backup files, old routes, undocumented parameters. Skip it before you understand the application. Fuzzing for directories with no question to answer is guessing. If you cannot answer "why am I searching here" and "what am I searching for," there is no reason to search.
Content discovery does belong in the workflow later. You hit an authentication bug that only escalates to critical with an open redirect. You don't have one. Now you fuzz for redirect parameters on the host you already understand. Targeted. One reason. One place.
| Activity | What it does | Skip when | Run when |
|---|---|---|---|
| Asset discovery | Finds new subdomains, hosts, surfaces | You haven't tested the main application yet | The primary surface is exhausted and you want to expand |
| Content discovery | Finds endpoints, files, parameters at a known host | You don't yet know which bug you are chasing | You have a specific bug that needs a specific resource |
Content discovery at the start is guessing. Content discovery after understanding is hunting.
Before doing anything else, use the application like a normal user for ten minutes. Log in. Click everything. Hit every main feature you can find. Don't test. Don't change anything. Just use it. Your proxy records the traffic in the background.
After ten minutes you have two things. A working sense of what the application actually does. And a clean pile of authentic traffic in your HTTP history.
Now you split.
A standard application with one user role splits into two pieces. Everything the browser renders is one surface. Everything the browser talks to is another. Frontend. Backend API. Two.
If the platform ships separate products on separate stacks, the count goes up. A partner portal with its own frontend and its own backend is two more pieces. An admin panel built as its own application, not a page nested inside the main one, adds two more on top of that. Six surfaces in total.
The trap is counting roles. A platform with ten user roles still ships on one frontend and one API in most cases. That is two surfaces, not twenty. Same URL root, same backend, just different access decisions happening inside. Roles change what gets evaluated inside a piece. Surfaces change how many pieces exist.
Count separate applications. Not separate users.
The Split turns a target into a finite list. Each piece gets its own attention. Each piece gets its own session. None of them get mixed up. None of them get forgotten. That is the focus the rest of the work depends on.
Most hunters answer one question. The good ones answer three.
How does the feature work. The mechanics. The inputs, the actions, the outputs. Reading the documentation and clicking through the UI gets you here. That is the floor.
Why does it work the way it does. Why is this field present. Why does this endpoint return what it returns. Why does an admin see a button that other roles don't. Why is there an admin role at all, and what is it actually for.
When does the protection fire. When is the permission evaluated. When does the server trust the client, and when does it not. When does the role boundary matter, and when does the code skip the check entirely.
Answer all three honestly and the bugs surface as gaps. The places where the developer assumed one thing and the server trusted another. The places where the permission was enforced on the frontend and forgotten on the backend. Those gaps stop hiding once the application is clear in your head.
That is the whole game. Confused hunters find nothing. Hunters who can answer the three questions find bugs in the same hour other people spend on subdomain enumeration.
Start with the API surface. Open the proxy. Scroll the HTTP history captured during the first ten minutes. Is the backend REST. Is it GraphQL. Is it some internal RPC the company built themselves. The shape of the traffic dictates the shape of every test that comes after.
Then look at who the API serves. If it is GraphQL, is one endpoint serving regular users and admins, or is the admin role hitting a separate endpoint somewhere you haven't seen yet. Two endpoints that look identical but enforce different trust boundaries are exactly where access control fails.
Then move to features. Not all of them. Pick the interesting ones. Features that touch other users. Features that share data. Features that let you invite, publish, or upload.
Every interesting feature has options, and every option is its own test. Take a photo-sharing flow with three modes: public, link-only (unlisted), password-protected. Three trust models in one feature. Is the password check happening in the browser or on the server. Is the unlisted link actually unlisted, or is there a listing endpoint that returns it anyway. Does the public photo leak metadata it was never supposed to ship.
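One way to pin down the first of those questions: request the password-protected photo from a fresh session that never supplied the password, and read the status code. A minimal sketch, where the endpoint is hypothetical and the helper only interprets the response:

```python
def password_checked_server_side(status_code: int) -> bool:
    """Interpret the response to a password-protected resource requested
    WITHOUT the password, from a session that never entered it."""
    # 401/403 means the server enforces the password; 200 means the
    # check only ever ran in the browser.
    return status_code in (401, 403)

# Usage (replay the request your proxy recorded, minus the password):
#   r = requests.get("https://app.example.com/api/photos/123")
#   password_checked_server_side(r.status_code)  # False => client-side only
```

The same shape of test answers the unlisted-link question: request the listing endpoint as a stranger and see whether the "unlisted" photo comes back.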
While you are doing this, capture anomalies. Anything that breaks the rhythm of the application.
These are the candidates. Take a screenshot. Write a three-word note. Admin header present. User ID twice in body. Client-side permission check. Move on.
By the end of the pass you have twenty notes. Each one is a test waiting to be written. Then you test. The notes tell you what to do. The understanding tells you what to try.
Classic IDOR testing says swap your user ID with someone else's and look for their data. Every developer learns to block that exact case. Most testers stop when it fails.
The real bug is not always in the direct ID swap. It is in what happens when the application receives a value the developer never anticipated.
Your ID as null. Your ID as undefined in a JavaScript-heavy app. The victim's ID dropped into a field that was never supposed to hold any user ID at all.
You cannot change your user ID and read another user's data. Fine. But what if you publish a photo where the publisher field is the victim's ID. Not your ID swapped with theirs on your own profile. Their ID dropped into a field that was never supposed to accept your input.
Maybe the photo ends up on their profile. Maybe a report on that photo notifies the victim instead of you. Maybe the permission system treats the victim as the author and you as the third party. Maybe sharing the photo pulls the victim's contact list through a chain nobody designed for.
That is the move. Not bypassing one specific check. Feeding the application a request it does not know how to categorize.
Confused code defaults. Confused code falls back to the closest value. Confused code applies whichever rule fires first. Each of those resolutions is behavior the developer never planned for.
Stop trying to bypass one check. Start trying to make the application unsure which check to run. The bugs follow.
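The photo-publishing example above can be turned into a small payload generator. A sketch, assuming a JSON publish endpoint; the field names (`publisher`, `reporterId`) are hypothetical and should be swapped for whatever your proxy actually recorded:

```python
def ownership_confusion_payloads(photo_id: int, victim_id: int) -> list[dict]:
    """Request bodies for a publish endpoint that go beyond the direct
    ID swap: values the server may not know how to categorize."""
    return [
        {"photoId": photo_id, "publisher": None},         # your ID as null
        {"photoId": photo_id, "publisher": "undefined"},  # the literal string a JS client can leak
        {"photoId": photo_id, "publisher": victim_id},    # the victim presented as the author
        {"photoId": photo_id, "reporterId": victim_id},   # victim ID in a field never meant to hold one
    ]

# Replay each body through your proxy and watch whose profile,
# notifications, and permissions react.
```

The generator is the boring part; the test is reading what the application does with each body once it can no longer tell who the request is about.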
Do this enough times and something shifts. Understanding applications builds a library in your head. Not a list of payloads. A library of patterns.
Three quick examples.
A permissions panel in the interface fires the broken access control pattern. Who can invite, who can kick, who can change roles, what happens if you swap the role value on your own request before it leaves the browser.
Any field that takes a URL fires the SSRF pattern. A webhook destination. An avatar imported from a URL. A video imported from a link. What does the server do with the URL. Does it follow redirects. Does it accept localhost. What headers does it send. What happens at internal addresses.
User-controllable input reflected back into the page fires the injection pattern. Is it in an attribute. Is it in a script context. Is it escaped server-side or only client-side. Where does this value travel downstream from here.
Three patterns. Three completely different bug classes. None of them require a wordlist. All of them require understanding the feature well enough to recognize what kind of bug lives there. The library is the real skill. The tool chain never was.
Developers write obfuscation that looks scary and is not. Two patterns show up constantly. Both fool automated scanners. Neither survives reading the code.
### Base64-encoded user IDs
You hit an endpoint. The user ID in the request is not a number. It is a fifty-character base64 string. It looks cryptographic. It looks like something you should not touch.
Decode it. The result is your user ID. That is the whole mystery. Wrapped in one layer of encoding to slow down anyone who does not read the request carefully.
```
Original: eyJ1c2VySWQiOiAxMDIzfQ==
Decoded:  {"userId": 1023}
```

Change the value. Re-encode. Send. You just swapped to another user.
Automated secret scanners ignore this because it is not a secret. It is the real ID with one layer of base64 around it. The defense is appearance. The defense breaks the moment you spend three seconds reading.
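The whole decode-edit-re-encode loop is a few lines of Python. A sketch using the example value above; the replacement ID is a hypothetical victim:

```python
import base64
import json

token = "eyJ1c2VySWQiOiAxMDIzfQ=="

# One layer of base64 around plain JSON -- decode it to see the real ID.
payload = json.loads(base64.b64decode(token))    # {"userId": 1023}

# Swap the ID, re-encode, and the forged value is ready to send.
payload["userId"] = 2048                         # hypothetical victim ID
forged = base64.b64encode(json.dumps(payload).encode()).decode()
```

Anything that decodes to readable structure gets the same treatment: edit the structure, not the string.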
### Signatures computed in JavaScript
You try to hit an admin endpoint. You get back 403. You look at successful admin requests in your proxy and notice they include a custom header. Something like X-Admin-Signature. The value is a long hash. Without knowing the algorithm, you would assume there is a secret key on the server.
Open the JavaScript file the application is already loading. Search for the header name. The function is right there in the source.
```javascript
const signature = sha256(userId + userAgent + username);
request.headers["X-Admin-Signature"] = signature;
```

The signature is a SHA-256 of three values. Your user ID. Your User-Agent. Your username. All three are values you already have. Compute the hash yourself in three seconds, set the header, send the request, and the admin endpoints respond.
The server is not checking that you are an admin. It is checking that the signature math is valid. The auth logic shipped to the frontend in plaintext, hoping nobody would look at it.
A scanner will not find that. Scanners search for strings like AWS_SECRET or api_key=. They do not search for authentication logic encoded in a function. Reading finds it. Scanning never does.
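Forging the header takes one function. A sketch assuming the concatenation order shown in the JavaScript above; the input values are placeholders:

```python
import hashlib

def forge_admin_signature(user_id: str, user_agent: str, username: str) -> str:
    # Mirror the client-side logic exactly: SHA-256 over the three
    # concatenated values, all of which you already control.
    data = f"{user_id}{user_agent}{username}".encode()
    return hashlib.sha256(data).hexdigest()

# Set the result as the X-Admin-Signature header on the request that got a 403.
sig = forge_admin_signature("1023", "Mozilla/5.0", "alice")
```

If the hash does not match what the browser sends, the concatenation order or an extra delimiter is the usual culprit; the JavaScript source settles it either way.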
Old recon advice: collect every JavaScript file the target loads. Save them. Run scanners. Grep for secrets. Most hunters still do exactly this.
Collection is automatic now. If the target is example.com, then example.com loads its own JavaScript. Every file. Browsing the application with a proxy in the middle puts every JavaScript request in the HTTP history without any scraping. Use the app, open the proxy, and every JavaScript file the application actually uses is sitting there ready to read.
The harder problem is knowing what to look for.
Take PII leaks. Classic target for a JavaScript scanner. Run regex for email patterns, phone numbers, hardcoded credentials in strings. Hope something hits.
Replace the scanner with a walk through the signup flow. Slowly. Enter your email. Enter your phone number. Enter your name. Watch every request your proxy records. You see exactly where your email lands. You see exactly which endpoints return user data. You see which roles are supposed to access it and under what conditions.
Twenty minutes later you have a complete map of where the personal data lives, where it is exposed, and which endpoints serve it to which roles. Any gap in that map is a PII leak you can verify with a direct test.
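That map can be built mechanically from a proxy export. A sketch, assuming `history` is a list of `(url, response_body)` pairs pulled from your proxy and `markers` are the exact values you typed during the walkthrough:

```python
def pii_map(history: list[tuple[str, str]], markers: list[str]) -> dict:
    """Return {url: set of markers found in that response} -- every hit
    is a place your personal data actually travels."""
    hits: dict[str, set[str]] = {}
    for url, body in history:
        found = {m for m in markers if m in body}
        if found:
            hits.setdefault(url, set()).update(found)
    return hits

# Any URL in the map that a lower-privileged role can also reach is a
# PII leak candidate you can verify with a direct test.
```

Substring matching is deliberately crude; it works because you chose the marker values yourself when you signed up.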
The scraper looks in random places hoping for a hit. The walkthrough watches where the data actually moves. Understanding the flow beats scraping the files. Every target.
Recon in 2026 is a posture, not a checklist. Open the application. Use it. Read the traffic. Split it into surfaces. Walk every interesting feature. Capture every anomaly. Test with intent.
The chain of subdomain enumeration, JavaScript scraping, and wordlist brute-forcing still has uses. It belongs in the workflow when you have a specific reason to run it. Most of the time, on most targets, you don't.
Understanding the application is the recon. Everything else is a tool you reach for when understanding tells you to.
One area I deliberately left out is authentication. OAuth flows, session logic, token handling. Auth bugs are some of the highest-paying findings in any program, and they deserve their own walkthrough. That one is coming.
You can spend three hours finding subdomains. You can spend three hours finding bugs. The choice should be obvious.