Maker Pro

Magnetic Force Microscopy?

Jeroen Vriesman

Someone who can build an MFM is probably able to make much more money with
normal design work instead of with criminal activity.

So that leaves only people who have access to MFM technology.

Anyway, you don't even need special software; on a Unix system, "dd
if=/dev/random of=/dev/somedisk" will do the job.
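
For example, something along these lines should do it (just a sketch: I'm
assuming GNU dd and Linux-style device names here; /dev/sdX is whatever your
target disk is, so double-check the name first, and /dev/urandom avoids
/dev/random blocking when the entropy pool runs dry):

  # overwrite the entire target disk with pseudorandom data
  dd if=/dev/urandom of=/dev/sdX bs=1M
  # flush any cached writes before pulling the disk
  sync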

cheers,
Jeroen.
 
Mike

With all the talk of HD recovery being difficult to impossible, how well does
MFM actually work?

Has anyone ever built a successful `garage` MFM unit?

Just wondering after reading:

http://www.usenix.org/publications/library/proceedings/sec96/full_papers/gutmann/

and using the freeware shredder:

http://www.tolvanen.com/eraser/

Thanks
Adam

I dunno about the MFM machine, but the Usenix paper is from 1996 and
references technology that was already several years old.

If we look at an average modern drive (a Maxtor 20.4GB DiamondMax), it has
a track density of 17,305 tracks per inch and a flux density of 236 to 306
thousand flux changes per inch. When that Usenix paper was written, both of
those numbers were lower by around an order of magnitude. That means that
today's data bit is stored in roughly 1/100 the area that it was ten years ago.
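
(Roughly: the area per bit scales as 1/(track density x linear flux density),
so if both densities went up by about 10x, the area per bit went down by
about 10 x 10 = 100x.)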

With the advent of communication channel techniques (PRML codes and Viterbi
detectors) applied to the disk channel, data bits no longer needed to be
isolated from adjacent bits in the same track (adjacent tracks are another
matter). In addition, the raw error rate coming from the read channel is
now around 10^-4, with the ECC improving the final error rate to less than
10^-12; prior to the communication-channel era the raw error rate itself had
to be around 10^-12. That relaxation of the raw error-rate requirement means
that the SNR on the disk is much, much lower than it was in the old days.
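
As a back-of-the-envelope illustration (the sector size and correction power
here are assumptions, not drive specs): at a raw bit error rate of 10^-4, a
4096-bit (512-byte) sector averages about 4096 x 10^-4 = 0.4 bad bits per
read. An ECC that can correct even a few dozen bit errors per sector makes an
uncorrectable sector astronomically unlikely, which is how the corrected rate
ends up below 10^-12.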

The net result is that there is far less signal there to recover even
before the previous data has been overwritten, much less after it's been
overwritten more than a couple times.

-- Mike --
 
Kevin McMurtrie

Mike said:
I dunno about the MFM machine, but the Usenix paper is from 1996 and
references technology that was already several years old.

If we look at an average modern drive (a Maxtor 20.4GB DiamondMax), it has
a track density of 17,305 tracks per inch and a flux density of 236 to 306
thousand flux changes per inch. When that Usenix paper was written, both of
those numbers were lower by around an order of magnitude. That means that
today's data bit is stored in roughly 1/100 the area that it was ten years ago.

With the advent of communication channel techniques (PRML codes and Viterbi
detectors) applied to the disk channel, data bits no longer needed to be
isolated from adjacent bits in the same track (adjacent tracks are another
matter). In addition, the raw error rate coming from the read channel is
now around 10^-4, with the ECC improving the final error rate to less than
10^-12; prior to the communication-channel era the raw error rate itself had
to be around 10^-12. That relaxation of the raw error-rate requirement means
that the SNR on the disk is much, much lower than it was in the old days.

The net result is that there is far less signal there to recover even
before the previous data has been overwritten, much less after it's been
overwritten more than a couple times.

-- Mike --

250GB drives are pretty cheap today. Not only would that be a seriously
bad S/N ratio for old data, but it could take a very long time to find
what you're looking for with a tiny scanner.
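
To put rough numbers on that (all of these are assumptions, not
measurements): a 3.5" platter surface is on the order of 50 cm^2, while a
single MFM scan frame of, say, 50 um x 50 um covers about 2.5 x 10^-5 cm^2.
That works out to around two million frames per surface, and at a few minutes
per frame you are looking at years of continuous scanning for just one side
of one platter.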
 