A web application crawler for bug bounty hunting

A Python-based web crawler that enables you to execute your payload against all requests in scope

N.Y.A.W.C (Not Your Average Web Crawler) is a web application crawler aimed at vulnerability scanning. It crawls every GET and POST request in the specified scope and keeps track of the request and response data. I developed N.Y.A.W.C because I needed a good open-source Python crawler that let me modify requests on the fly for my AngularJS CSTI scanner.

Crawling Flow

The crawler is multi-threaded, but you don't have to worry about any of the threading yourself. To give you a better idea of the crawling flow, I added the diagram below.

  • #1  It adds your start request to the queue.
  • #2  It starts the first request in the queue (this repeats until the max threads option is reached).
  • #3  It adds all requests found in the response to the queue (except duplicates).
  • #4  It goes to step #2 again to spawn new requests.

Please note that the crawler stops once the queue is empty and all crawler threads have finished. If you prefer code over diagrams, a conceptual sketch of this loop follows the diagram below.

(Crawling flow diagram: nyawc-flow.svg)
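The snippet below expresses the same flow as plain, single-threaded Python. It is only a conceptual sketch, not N.Y.A.W.C's actual implementation; the helper names (extract_requests, in_scope, send) are hypothetical.

# Conceptual sketch of the crawling flow -- not N.Y.A.W.C's real code.
# The helpers send(), extract_requests() and in_scope() are hypothetical.
def crawl(start_request):
    queue = [start_request]                        # 1: queue the start request
    seen = {start_request}

    while queue:                                   # stops once the queue is empty
        request = queue.pop(0)                     # 2: start the next queued request
        response = send(request)                   #    (N.Y.A.W.C does this on worker threads)

        for found in extract_requests(response):   # 3: queue requests found in the response,
            if found in seen or not in_scope(found):
                continue                           #    skipping duplicates and out-of-scope URLs
            seen.add(found)
            queue.append(found)                    # 4: back to step 2 for each new request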

There are several hooks in the code that you can use to, for example, tamper with form data before it is posted (see the sketch below). Check the documentation for more information about these hooks.
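As a sketch of what such a form hook could look like, the callback below overwrites every autofilled value with a payload before the form is posted. It assumes that form_data is a mutable mapping of input names to values (check the documentation for its exact structure), and the payload is just a hypothetical CSTI probe.

# A hedged sketch: assumes form_data behaves like a dict of input name -> value.
PAYLOAD = "{{ 7 * 7 }}"  # hypothetical CSTI probe, substitute your own payload

def cb_form_after_autofill(queue_item, elements, form_data):
    # Replace every autofilled value with the payload so the crawler
    # posts the tampered form.
    for field in form_data:
        form_data[field] = PAYLOAD

Wire it up exactly like the other callbacks in the example further down (options.callbacks.form_after_autofill = cb_form_after_autofill).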

Installation

First make sure you're running Python 3.3 or higher. After that, install N.Y.A.W.C from PyPI using the command below.

$ pip install --upgrade nyawc

The code sample below can be used to get the crawler up and running within a few minutes. If you like it, check out the documentation to get started on implementing your own exploits.

# example.py
from nyawc.Options import Options
from nyawc.QueueItem import QueueItem
from nyawc.Crawler import Crawler
from nyawc.CrawlerActions import CrawlerActions
from nyawc.http.Request import Request

def cb_crawler_before_start():
    print("Crawler started.")

def cb_crawler_after_finish(queue):
    print("Crawler finished. Found " + str(queue.count_finished) + " requests.")

    for queue_item in queue.get_all(QueueItem.STATUS_FINISHED).values():
        print("[" + queue_item.request.method + "] " + queue_item.request.url + " (PostData: " + str(queue_item.request.data) + ")")

def cb_request_before_start(queue, queue_item):
    # return CrawlerActions.DO_SKIP_TO_NEXT
    # return CrawlerActions.DO_STOP_CRAWLING

    return CrawlerActions.DO_CONTINUE_CRAWLING

def cb_request_after_finish(queue, queue_item, new_queue_items):
    percentage = str(int(queue.get_progress()))
    total_requests = str(queue.count_total)

    print("At " + percentage + "% of " + total_requests + " requests ([" + str(queue_item.response.status_code) + "] " + queue_item.request.url + ").")

    # return CrawlerActions.DO_STOP_CRAWLING
    return CrawlerActions.DO_CONTINUE_CRAWLING

def cb_form_before_autofill(queue_item, elements, form_data):
    # return CrawlerActions.DO_NOT_AUTOFILL_FORM
    return CrawlerActions.DO_AUTOFILL_FORM

def cb_form_after_autofill(queue_item, elements, form_data):
    pass

# Declare the options
options = Options()

# Callback options
options.callbacks.crawler_before_start = cb_crawler_before_start # Called before the crawler starts crawling. Default is a null route.
options.callbacks.crawler_after_finish = cb_crawler_after_finish # Called after the crawler finished crawling. Default is a null route.
options.callbacks.request_before_start = cb_request_before_start # Called before the crawler starts a new request. Default is a null route.
options.callbacks.request_after_finish = cb_request_after_finish # Called after the crawler finishes a request. Default is a null route.
options.callbacks.form_before_autofill = cb_form_before_autofill # Called before the crawler autofills a form. Default is a null route.
options.callbacks.form_after_autofill = cb_form_after_autofill # Called after the crawler autofills a form. Default is a null route.

# Scope options
options.scope.protocol_must_match = False # Only crawl pages with the same protocol as the startpoint (e.g. only https). Default is False.
options.scope.subdomain_must_match = False # Only crawl pages with the same subdomain as the startpoint. If the startpoint is not a subdomain, no subdomains will be crawled. Default is True.
options.scope.domain_must_match = True # Only crawl pages with the same domain as the startpoint (e.g. only finnwea.com). Default is True.
options.scope.max_depth = None # The maximum search depth. 0 only crawls the start request. 1 will also crawl all the requests found on the start request. 2 goes one level deeper, and so on. Default is None (unlimited).

# Identity options
options.identity.cookies.set(name='tasty_cookie', value='yum', domain='finnwea.com', path='/cookies')
options.identity.cookies.set(name='gross_cookie', value='blech', domain='finnwea.com', path='/elsewhere')
options.identity.headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36" # The user agent to make requests with. Default is Chrome.    
}

# Performance options
options.performance.max_threads = 8 # The maximum amount of simultaneous threads to use for crawling. Default is 4.

crawler = Crawler(options)
crawler.start_with(Request("https://www.finnwea.com/"))
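To actually execute a payload against every request in scope, the request_before_start callback is the natural place to tamper with a queued request before it is sent. The sketch below could replace cb_request_before_start in example.py; it assumes that queue_item.request exposes mutable url and data attributes, as suggested by the attributes printed in cb_crawler_after_finish above, and both the payload and the extra parameter name are hypothetical.

from urllib.parse import quote

from nyawc.CrawlerActions import CrawlerActions

PAYLOAD = "{{ 7 * 7 }}"  # hypothetical probe, substitute your own payload

def cb_request_before_start(queue, queue_item):
    request = queue_item.request

    if request.data:
        # Assumption: request.data is a mutable mapping of POST field -> value.
        for key in request.data:
            request.data[key] = PAYLOAD
    else:
        # Append the payload as an extra (hypothetical) query parameter.
        separator = "&" if "?" in request.url else "?"
        request.url += separator + "payload=" + quote(PAYLOAD)

    return CrawlerActions.DO_CONTINUE_CRAWLING

From there, cb_request_after_finish is a reasonable place to inspect each response for signs that the payload was reflected or evaluated.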