zmbot: a simple Web harvesting robot for Z'mbol.
zmbot is a simple Web harvester written in Tcl. The following
summarizes its features:
o Simple administration. One script does the job, and no external
  database is required to operate.
o Interruptible. Harvesting may safely be stopped or interrupted at any
  time.
o Gentle harvesting. By default a site is visited once per minute.
o Concurrent harvesting (jobs) in one process and one thread.
o Inspects the Content-Type header to determine the structure of a page.
o Written in Tcl and quite portable. (Some may not think of this as a
  feature; a Perl version is welcome!)
o Creates simple XML output. One file per URL.
The robot is started from the command line and takes one or more URLs
as parameters. Options, prefixed with a minus, alter the behaviour of
the harvesting. The following options are supported:
-j jobs   The maximum number of concurrent HTTP sessions; default 5 jobs.
-i idle   Idle time in milliseconds between visits to the same site;
          default 60000 = 60 seconds.
-c count  Maximum distance from the original URL as given on the command
          line.
-d domain Only sites matching domain are visited. The domain given is
          a Tcl glob expression (e.g. *.somewhere.com). Remember to
          quote the domain when giving it on the command line so that
          your shell doesn't expand it. This option may be repeated,
          allowing you to specify many "allowed" domains.
-r rules  Specifies a file with rules. See the rules file for an
          example.
Example 1: Harvest three links away from www.somewhere.com world-wide:
./robot.tcl -c 3 http://www.somewhere.com/
Example 2: Harvest the site www.somewhere.com only:
./robot.tcl -d www.somewhere.com http://www.somewhere.com/
Example 3: Harvest up to two clicks from www.a.dk and www.b.dk in the dk domain:
./robot.tcl -d '*.dk' -c 2 http://www.a.dk/ http://www.b.dk/
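As the quoting in Example 3 suggests, the -d pattern is a glob, not a
regular expression. The sketch below (with hypothetical hostnames) shows
how such a pattern matches; it uses the shell's own glob matching, which
behaves like Tcl's [string match] for simple patterns like these:

```shell
# Illustrative only: shell 'case' globbing mirrors Tcl's [string match]
# for a pattern such as '*.somewhere.com'.
match() {
  case "$1" in
    *.somewhere.com) echo "visit $1" ;;
    *)               echo "skip $1" ;;
  esac
}
match www.somewhere.com      # matched by the glob
match sub.a.somewhere.com    # '*' also spans multiple labels
match www.elsewhere.org      # not matched
```

Note that an unquoted *.somewhere.com on the command line could be
expanded by the shell against filenames in the current directory, which
is why the README asks you to quote it.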
The zmbot robot creates three directories, visited, unvisited, and bad,
for visited pages, unvisited pages, and bad pages respectively. The
visited area holds keywords and metadata for all successfully retrieved
pages. The unvisited area serves as a "todo" list of pages to be visited
in the future. The bad area holds pages that for some reason cannot be
retrieved: non-existent, permission denied, robots.txt disallow, etc.
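A quick way to see how a harvest is progressing is to count the files in
each area. The snippet below is a sketch, assuming it runs in the
directory where zmbot created visited, unvisited, and bad:

```shell
# Count the output files in each of zmbot's three areas.
# (mkdir -p only makes the demo safe to run before a first harvest;
# zmbot normally creates these directories itself.)
for dir in visited unvisited bad; do
  mkdir -p "$dir"
  count=$(ls "$dir" | wc -l | tr -d ' ')
  echo "$dir: $count file(s)"
done
```

Since harvesting is interruptible, these counts are also a rough way to
check how much "todo" work remains in the unvisited area after a stop.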
The configure script looks for the Tcl shell, tclsh, to determine the
location of Tcl and its configuration file tclConfig.sh. To specify Tcl's
location manually, add --with-tclconfig and give the directory where
tclConfig.sh is installed. For example:
./configure --with-tclconfig=/usr/local/lib