Subject:
From:
Reply To: PCSOFT - Personal Computer software discussion list <[log in to unmask]>
Date: Thu, 20 Dec 2001 04:36:25 -0800
Content-Type: text/plain
Parts/Attachments: text/plain (64 lines)
On 19 Dec 2001, at 23:32, A&C Thompson wrote:

> ----- Original Message -----
> From: "Rick Glazier"
> Subject: [PCSOFT] Easy Cleaner - find duplicates feature.
>
>
>
> > A question about the find duplicates feature.
> > On my computer (with 4 partitions on one "new" hard
> > drive) it found one set of dups (one pair - and listed them)
> > several hundred(?) times.
> > It seemed to be the only files it found.
> > It also said it would take 320 MORE hours to finish.
> > It had run about 6 hours up to that point.
> > While somewhat unresponsive, the program was not
> > frozen and exited correctly when asked...
> > I know this is not normal...
> > Anybody else having problems?
> > This feature alone would have been great for me since
> > I have a tendency to copy stuff to different drives and then
> > forget I have it "all over the place"...)
> >                   Rick Glazier
> -------------------------------------------
> Rick,
>
> I _think_ the secret here, Rick, is to use the 'Find' textbox (it
> comes up on the duplicates screen) and search for only one type of
> file at a time. I just did a duplicate search for *.exe files, and
> it took less than 5 minutes. Also, you might want to look at the
> options/duplicate files tab and eliminate some of the criteria. As
> for the "several hundred(?)" pair of the same file, do you recall
> what the file was? Other than desktop.ini, I can't think of a
> standard file that would have so many duplicates.
>
> Alan Thompson


  Sounds to me like Rick has a "combinatorial explosion" on his
hands.

  A nice straightforward approach to implementing a check for
duplications is to compare the second file to the first, the third
file to the first and the second, the fourth file to the first,
second and third, and so on.  If you have a thousand files, that
requires about half a million comparisons.  Suddenly that 320-hour
estimate doesn't sound quite so bad.
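
  To put rough numbers on that, here is a small Python sketch of the
naive pairwise method.  It is only an illustration of the idea, not
how Easy Cleaner actually works, and the byte-by-byte compare via
filecmp is just one way to do the "are these files the same?" test:

    import filecmp

    def find_duplicates_naive(paths):
        """Compare file i against files 0..i-1: about n*(n-1)/2 comparisons."""
        duplicates = []
        comparisons = 0
        for i in range(len(paths)):
            for j in range(i):
                comparisons += 1
                # shallow=False forces a byte-by-byte comparison of contents
                if filecmp.cmp(paths[i], paths[j], shallow=False):
                    duplicates.append((paths[j], paths[i]))
        return duplicates, comparisons

  For a thousand files the inner test runs 499,500 times, and each
test can mean reading both files off the disk, which is where a
multi-hundred-hour estimate can come from.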

  If the program sorts the files into a binary tree as it goes, a
thousand files will require -- on average -- about ten thousand
comparisons.  The tree, however, might take a megabyte per thousand
files.  If the tree gets too big, parts of it can be swapped out to
disk, adding potential delays, until either virtual memory runs out
or the program's heap can no longer grow.
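
  A sketch of that second approach, again just to illustrate the
idea: key each file by its size plus a digest of its contents, and
insert the keys into a binary search tree, so each new file costs
roughly log2(n) comparisons.  Keying on (size, MD5) is my own
assumption for the example; any total ordering of files would do:

    import hashlib
    import os

    def file_key(path):
        # Key = (size, MD5 of contents); reading in chunks keeps memory flat.
        h = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return (os.path.getsize(path), h.hexdigest())

    class Node:
        def __init__(self, key, path):
            self.key = key
            self.paths = [path]
            self.left = None
            self.right = None

    def insert(node, key, path, dups):
        # Walking down the tree costs about log2(n) key comparisons per file.
        if node is None:
            return Node(key, path)
        if key == node.key:
            dups.append((node.paths[0], path))  # same size and digest
            node.paths.append(path)
        elif key < node.key:
            node.left = insert(node.left, key, path, dups)
        else:
            node.right = insert(node.right, key, path, dups)
        return node

    def find_duplicates_tree(paths):
        root, dups = None, []
        for p in paths:
            root = insert(root, file_key(p), p, dups)
        return dups

  Each file gets read once to compute its digest, and the tree grows
with the number of files, which is where the memory cost above comes
from.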

  Rick's "320 hours" scenario suggests that the product currently
takes the first approach.

Dave Gillett

              The NOSPIN Group is now offering Free PC Tech
                     support at our newest website:
                          http://freepctech.com
