A Brief History of this Project

Early 1998
The original goal of this project was to write an HTTP robot (aka Web
crawler) that measured how many of the Web's documents actually used the
HTTP and HTML META charset (text encoding) labels.

The robot had to parse HTML to find embedded hyperlinks. As an aid in the
development of the HTML parser, a "pretty printer" was written to display
the results of the parsing, using different colors for the different parts.

From here it was a relatively small step to create the view.cgi tool,
which fetches and pretty-prints the document associated with a URI entered
by the user.

The proxy tool also used the HTTP fetching and HTML pretty-printing code
to display the documents accessed by the user's browser.

The link checker "link" and the site downloader "grab" were built on the
same Web crawling code.

The other files in the project were small experiments to learn more about
various Internet protocols.

The view.cgi tool and the robot's results were shared with other members
of Netscape's internationalization group.

Feb 1, 2000
The code was dubbed "Web Sniffer" and checked into the mozilla.org
CVS repository. The view.cgi tool was made available on the Internet
at mozilla.org.

Sep 26, 2003
It was reported that the view.cgi tool no longer worked. The server had
been switched to Linux, but Web Sniffer, originally built on Solaris, had
never been compiled on Linux and did not compile there.

Jan 30, 2004
The server was configured to redirect to web-sniffer.net, a similar tool.

Jan 19, 2005
The code was ported to Linux and renamed SniffURI. A new Web site called
sniffuri.org was set up for the project.