over which the regression should be done. The regression will be calculated
and printed out for each group of n urls, followed by a regression over the
entire set of data.
cached up before the real memory testing starts. The linear
regression AWK script will be modified to take this into account as well.
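A minimal sketch of what such a per-group regression could look like in AWK,
assuming each input line is "<index> <value>" and that the group size N and
warm-up count are placeholders rather than the actual script's settings:

    # Least-squares regression per group of N samples, then over all data.
    # The first WARMUP samples only prime the cache and are skipped.
    BEGIN { N = 10; WARMUP = 3 }           # assumed group size and warm-up count

    function report(label, n, sx, sy, sxx, sxy,    slope, icept, d) {
        if (n < 2) return
        d = n * sxx - sx * sx
        if (d == 0) return                 # degenerate x values, nothing to fit
        slope = (n * sxy - sx * sy) / d
        icept = (sy - slope * sx) / n
        printf "%s: n=%d slope=%g intercept=%g\n", label, n, slope, icept
    }

    NR <= WARMUP { next }                  # ignore cache-priming samples

    {
        x = $1; y = $2
        bn++; bsx += x; bsy += y; bsxx += x * x; bsxy += x * y   # group sums
        tn++; tsx += x; tsy += y; tsxx += x * x; tsxy += x * y   # overall sums
        if (bn == N) {
            report("group", bn, bsx, bsy, bsxx, bsxy)
            bn = bsx = bsy = bsxx = bsxy = 0
        }
    }

    END {
        if (bn) report("partial group", bn, bsx, bsy, bsxx, bsxy)
        report("all urls", tn, tsx, tsy, tsxx, tsxy)
    }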
Added some comment lines to explain briefly what each list is for.
Uncommented some urls which have been causing trouble, on the assumption
that what is checked in should be complete. Whoever uses the list can
comment out whichever urls are troublesome in the particular test they are
running.
When the .dat files are created, all test lines are awk'ed out so that
text can be included in the OUTFILE without affecting the data that
gets graphed. The awk'ing assumes that blank lines in the OUTFILE
represent urls which failed to load, and substitutes zeroes for all
data values.
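A hedged sketch, in AWK, of how that extraction step could look (the number
of data columns and the numeric-first-field test are assumptions, not the
actual script):

    # Extract graphable data lines from OUTFILE into a .dat file.
    # Lines whose first field is not a number are treated as test/commentary
    # text and dropped; a blank line stands for a url that failed to load,
    # so a row of zeroes is emitted to keep the data aligned.
    BEGIN { NFIELDS = 4 }                  # assumed number of data columns
    NF == 0 {                              # blank line: url failed to load
        row = "0"
        for (i = 2; i <= NFIELDS; i++) row = row " 0"
        print row
        next
    }
    $1 !~ /^[0-9]/ { next }                # non-numeric first field: skip text
    { print }                              # real data line, passed through

Run roughly as "awk -f mkdat.awk OUTFILE > OUTFILE.dat" (file names here are
illustrative only).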