hawler.rb

Path: lib/hawler.rb  (CVS)
Last Update: Sat Jan 24 12:28:17 -0800 2009

$Id: hawler.rb 26 2009-01-02 07:27:37Z warchild $

Copyright (c) 2008, Jon Hart All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

    * Redistributions of source code must retain the above copyright
      notice, this list of conditions and the following disclaimer.
    * Redistributions in binary form must reproduce the above copyright
      notice, this list of conditions and the following disclaimer in the
      documentation and/or other materials provided with the distribution.
    * Neither the name of the <organization> nor the
      names of its contributors may be used to endorse or promote products
      derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY Jon Hart ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL Jon Hart BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Hawler, the HTTP crawler. Written after years of reusing the same Perl code over and over in every tool that could make use of crawling functionality. Now it is truly reusable, and in Ruby.

Written to ease satisfying curiosities about the way the web is woven.

The original gem and tools that make use of Hawler can be found at:

  http://spoofed.org/files/hawler/

The basic idea is that a Hawler visits a given URI (get), pulls all of the links from the response body (harvest), and repeats this until every link up to the specified recurse depth has been visited. Every URI that is visited is passed to analyze, which is simply a block that takes the URI, referer, and response as arguments.

This is an unordered, breadth-first crawl. Enjoy.
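As a usage sketch: only the analyze block's arguments (URI, referer, response) are documented above, so the constructor signature, the recurse attribute, and the start method below are assumptions for illustration, not the confirmed API.

  require 'hawler'

  # Hypothetical crawl: print each visited URI, its referer, and the
  # HTTP status code. The block is the "analyze" step described above.
  crawler = Hawler.new('http://spoofed.org/') do |uri, referer, response|
    puts "#{uri} (from #{referer}): #{response.code}"
  end
  crawler.recurse = 2  # assumed attribute: follow links two levels deep
  crawler.start        # assumed method: kick off the breadth-first crawl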

Jon Hart <[email protected]>

Required files

  net/http
  net/https
  uri
  set
  hawlee
  hawlerhelper