Module: Arachni::RPC::Server::Framework::Slave

Included in:
MultiInstance
Defined in:
lib/arachni/rpc/server/framework/slave.rb

Overview

Holds methods for slave Instances, covering both remote management and internal utilities.

Author:

Instance Method Summary

Instance Method Details

#set_master(url, token) ⇒ Bool

Sets the URL and authentication token required to connect to this Instance’s master and makes this Instance a slave.

Parameters:

  • url (String)

    Master’s URL in `hostname:port` form.

  • token (String)

    Master’s authentication token.

Returns:

  • (Bool)

    `true` on success, `false` if the instance is already part of a multi-Instance operation.
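
For context, a master typically calls this method over RPC when distributing a scan. The sketch below is illustrative only; the RPC client class and its constructor arguments are assumptions and may differ between Arachni versions.

require 'arachni'
require 'arachni/rpc/client'

# Hypothetical example -- the client class name and constructor arguments
# are assumptions, not the canonical API.
slave = Arachni::RPC::Client::Instance.new(
    Arachni::Options.instance,   # local options
    'localhost:7332',            # slave Instance URL
    'slave-secret-token'         # slave authentication token
)

# Ask the Instance to become a slave of this master.
if slave.framework.set_master( 'localhost:7331', 'master-secret-token' )
    puts 'Instance enslaved.'
else
    puts 'Instance is already part of a multi-Instance operation.'
end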



# File 'lib/arachni/rpc/server/framework/slave.rb', line 40

def set_master( url, token )
    # If we're already a member of a multi-Instance operation bail out.
    return false if !solo?

    # Make sure the desired plugins are loaded before #prepare runs them.
    plugins.load @opts.plugins if @opts.plugins

    # Start the clock and run the plugins.
    prepare

    @master = connect_to_instance( url: url, token: token )

    # Multi-Instance scans need extra info when it comes to auditing,
    # like a whitelist of elements each slave is allowed to audit.
    #
    # Each slave needs to populate a list of element scope-IDs for each page
    # it finds and send it back to the master, which will determine their
    # distribution when it comes time for the audit.
    #
    # This is our buffer for that list.
    @element_ids_per_url = {}

    # Process each page as it is crawled.
    # (The crawl will start the first time any Instance pushes paths to us.)
    spider.on_each_page do |page|
        @status = :crawling

        if page.platforms.any?
            print_info "Identified as: #{page.platforms.to_a.join( ', ' )}"
        end

        # Build a list of deduplicated element scope IDs for this page.
        @element_ids_per_url[page.url] ||= []
        build_elem_list( page ).each do |id|
            @element_ids_per_url[page.url] << id
        end
    end

    # Setup a hook to be called every time we run out of paths.
    spider.after_each_run do
        data = {}

        if @element_ids_per_url.any?
            data[:element_ids_per_url] = @element_ids_per_url.dup
        end

        if spider.done?
            print_status 'Done crawling -- at least for now.'

            data[:platforms]  = Platform::Manager.light if Options.fingerprint?
            data[:crawl_done] = true
        end

        sitrep( data )
        @element_ids_per_url.clear
    end

    # Buffer for logged issues that are to be sent to the master.
    @issue_buffer = []

    # Don't store issues locally -- will still filter duplicate issues though.
    @modules.do_not_store

    # Buffer discovered issues...
    @modules.on_register_results do |issues|
        @issue_buffer |= issues
    end
    # ... and flush it on each page audit.
    on_audit_page do
        sitrep( issues: @issue_buffer.dup )
        @issue_buffer.clear
    end

    print_status "Enslaved by: #{url}"

    true
end
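
For reference, the sitrep payloads buffered by the hooks above take roughly the following shape; sitrep is an internal helper that reports back to the master, and the URLs and scope IDs below are illustrative placeholders.

# After a crawl run, assuming fingerprinting is enabled and the crawl finished:
sitrep(
    element_ids_per_url: {
        'http://test.com/'      => [ 'illustrative-element-id-1' ],
        'http://test.com/login' => [ 'illustrative-element-id-2' ]
    },
    platforms:  Platform::Manager.light,
    crawl_done: true
)

# After each audited page, flushing the issue buffer:
sitrep( issues: @issue_buffer.dup )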

#slave? ⇒ Bool

Returns `true` if this instance is a slave, `false` otherwise.

Returns:

  • (Bool)

    `true` if this instance is a slave, `false` otherwise.



# File 'lib/arachni/rpc/server/framework/slave.rb', line 119

def slave?
    # If we don't have a connection to the master then we're not a slave.
    !!@master
end
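
As a minimal sketch of how this predicate relates to #set_master above (the framework constructor argument is an assumption):

framework = Arachni::RPC::Server::Framework.new( Arachni::Options.instance )

framework.slave?  # => false -- no @master connection yet
framework.solo?   # => true

framework.set_master( 'localhost:7331', 'master-secret-token' )

framework.slave?  # => true  -- @master is now set
framework.solo?   # => false -- part of a multi-Instance operation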