Maker Pro

Are SSDs always rubbish under winXP?

Don Y
Hi Tom,

On 3/3/2012 9:16 AM, Tom Del Rosso wrote:

[8<]
There are 2 possibilities.

A 16 or 32 GB disk is practical. It would cost more but it's only part of a
system, and it would have the speed and the reliability.

But 16-32G is small in the desktop/laptop world (I'm working on
designs of handheld/embedded devices with that much "on board")
A partial RAM buffer would solve the registry problem, and any similar
problem, without needing the drive to understand every filesystem. On a
typical day less than 1 GB is modified at all. They could easily have a
buffer big enough to avoid writing to any flash for at least a day. That
would cut writes to frequently changed files down to nothing, as such files
never add up to more than 1 GB.
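
For concreteness, the kind of drive-side coalescing buffer being
proposed might behave like the sketch below (Python, purely
illustrative; the block size, the ~1 GB capacity and the flush policy
are assumptions for the example, not anything a real controller
exposes):

# Illustrative sketch of a coalescing write buffer in front of flash.
# Repeated writes to the same logical block stay in RAM; only the final
# contents reach flash when the buffer is flushed (e.g. once a day, or
# when a power-fail warning fires). All sizes and policies are made up.

class CoalescingWriteBuffer:
    def __init__(self, flash_write_fn, capacity_blocks=262144):  # ~1 GB of 4 KB blocks
        self.flash_write = flash_write_fn   # callback: (block_no, data) -> None
        self.capacity = capacity_blocks
        self.dirty = {}                     # block_no -> latest data

    def write(self, block_no, data):
        # Overwrites of a hot block just replace the buffered copy;
        # no flash erase/program cycle is consumed.
        self.dirty[block_no] = data
        if len(self.dirty) >= self.capacity:
            self.flush()                    # buffer full: fall back to flash

    def read(self, block_no, flash_read_fn):
        # Serve the freshest copy: buffered data wins over flash.
        if block_no in self.dirty:
            return self.dirty[block_no]
        return flash_read_fn(block_no)

    def flush(self):
        # Called daily, at shutdown, or on an imminent-power-fail signal.
        for block_no, data in sorted(self.dirty.items()):
            self.flash_write(block_no, data)
        self.dirty.clear()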

The "right" way to solve the windows problem is to acknowledge
that:
- swap file writes don't need to be to persistent storage
- registry updates needn't be persistent UNTIL POWER FAIL
IS IMMINENT!

So, move the swap into a dedicated piece of RAM -- even if you
have to locate that RAM in a "disk drive" to get around "issues"
in windows configuration.

I don't think Windows has a predictable mechanism for relocating
*just* the registry to a different "filesystem" (which could
then be mounted on a different physical device!). E.g., true
symlinks could give you this capability without necessitating
a special mount point *just* for registry files (MS doesn't
think ahead in this respect... \WINDOWS\registry would have
made far more sense as it would have isolated those files
someplace "convenient" for future manipulation. Oh, I forgot.
Putting "MS" and "future" is a silly idea. "Microsoft, bringing
1970's technology to the 21st century!")

If you can move the registry files onto a "ram-disk" and then
schedule a process to flush them to the *real* disk at power
down, you've solved the problem (except for the inevitable
MS crashes that don't proceed through an orderly shutdown!)
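
As a rough illustration of the "flush at power down" half of that idea,
the shutdown task could be as dumb as copying the RAM-disk contents back
to the real disk. The drive letter and paths below are invented for the
sketch:

# Sketch of a "flush RAM-disk to real disk at shutdown" task.
# R:\registry and C:\persistent\registry are hypothetical paths --
# substitute whatever your RAM-disk driver and layout actually use.
import shutil
from pathlib import Path

RAMDISK_DIR = Path(r"R:\registry")             # volatile working copies
PERSIST_DIR = Path(r"C:\persistent\registry")  # survives power-off

def flush_ramdisk():
    PERSIST_DIR.mkdir(parents=True, exist_ok=True)
    for src in RAMDISK_DIR.iterdir():
        if src.is_file():
            # copy2 preserves timestamps so the next boot can tell
            # whether the persistent copy is current
            shutil.copy2(src, PERSIST_DIR / src.name)

if __name__ == "__main__":
    flush_ramdisk()   # hook this into the shutdown script / scheduled task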
 
Don Y
Hi Joseph,

Remember, the SSD implements a similar but *independent* "block
tracking scheme" *under* the filesystem's "block tracking scheme"
(block != block). So, getting them to be consistent with each other
is the trick.

Some OS's include support for the ATA "TRIM" command which allows
the OS to explicitly tell the drive which blocks are actually
"free" (i.e., the OS can interpret the allocation table(s) on
behalf of the SSD and tell the SSD which blocks are suitable for
"reuse"). Some SSD manufacturers provide utilities (running
*under* the OS as applications) to interrogate the allocation
table and convey this information to the SSD as a proxy for the
(deficient) OS.

In either case, the SSD needs to support this capability. And,
it doesn't get around the eventual wear-out issue.

[Or, as I mentioned elsewhere, let the drive snoop the partition
table and "understand" the filesystem(s) present on its media]

Actually the easy and better approach may be to watch for frequently
rewritten blocks and make sure to keep them moving around. In better
engineered systems use that remapping to move them through relatively
steady areas of storage preferentially. Simpler implementation and works
for all OSs and filesystems (except swap).

Wear leveling algorithms effectively do that. The erase count
for each flash "page" is tracked. If a page is rewritten, then
a count, *somewhere*, is incremented.

I.e., frequently written (filesystem-)blocks will get moved as
a consequence of this. The problem is identifying those filesystem
blocks that are "no longer being used" (and, from that, the
associated flash pages) and good candidates for reuse. The drive
needs to know *how* the medium is being used (i.e., the structure
and contents of the filesystem) in order to infer this on its
own (else, the OS needs to explicitly TELL it the information that
it needs).
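
A toy model of that bookkeeping, nothing like production FTL firmware
but enough to show the erase-count idea:

# Toy flash translation layer: logical blocks map to physical pages,
# every erase bumps a per-page counter, and rewrites are steered to the
# least-erased free page. Assumes there is always at least one free
# page; real FTLs are far more involved.
class ToyFTL:
    def __init__(self, num_pages):
        self.erase_count = [0] * num_pages       # wear per physical page
        self.free_pages = set(range(num_pages))  # pages holding no live data
        self.map = {}                            # logical block -> physical page
        self.data = {}                           # physical page -> contents

    def write(self, logical_block, contents):
        # Pick the least-worn free page for the new copy of the block.
        page = min(self.free_pages, key=lambda p: self.erase_count[p])
        self.free_pages.remove(page)
        self.data[page] = contents

        # The old copy becomes garbage; "erase" it and count the cycle.
        old = self.map.get(logical_block)
        if old is not None:
            self.erase_count[old] += 1
            self.data.pop(old, None)
            self.free_pages.add(old)

        self.map[logical_block] = page

    def read(self, logical_block):
        return self.data[self.map[logical_block]]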

Without full models of every file system there is no even hopeful way a
storage device can guess "no longer used". Not used before, not written
much, and not written recently can all be tracked independent of the OS
and file system. These are useful for remapping. Of course the space for
keeping this data is in addition to the space for the user-available
storage. This is why flash in non-power-of-two sizes is reasonable.

There really aren't that many different filesystems in use. When you
contrast the number of filesystems with the number of different versions
of Windows, Solaris, Linux, FreeBSD, NetBSD, OpenBSD, QNX, Minix, IRIX,
DOS, AmigaOS, MacOS, etc. you see how much *easier* it is to support
filesystems than rely on OS support!

E.g., accurately modeling a FAT32 filesystem would allow you to use
that SSD on most of the above (though perhaps not with all of the
bells and whistles that you would like). OTOH, if *only* Windows 7/8
supports TRIM, then how many of the above environments are unusable
(if the SSD *relies* on TRIM)?
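
To make the "model FAT32" point concrete: finding the free clusters
really is just reading the boot sector and walking the FAT. A
bare-bones, read-only sketch (FAT32 only, no error handling; the image
filename is an example):

# Bare-bones scan of a FAT32 volume image to find free clusters --
# exactly the information a TRIM-less drive would need to infer.
# Pass a path to a raw partition image (or device node) holding FAT32.
import struct

def fat32_free_clusters(image_path):
    with open(image_path, "rb") as f:
        boot = f.read(512)
        bytes_per_sector = struct.unpack_from("<H", boot, 11)[0]
        reserved_sectors = struct.unpack_from("<H", boot, 14)[0]
        fat_size_sectors = struct.unpack_from("<I", boot, 36)[0]  # BPB_FATSz32

        # The first FAT starts right after the reserved region.
        f.seek(reserved_sectors * bytes_per_sector)
        fat = f.read(fat_size_sectors * bytes_per_sector)

    free = []
    # Entries 0 and 1 are reserved; data clusters are numbered from 2.
    for cluster in range(2, len(fat) // 4):
        entry = struct.unpack_from("<I", fat, cluster * 4)[0] & 0x0FFFFFFF
        if entry == 0:                      # 0 marks a free cluster
            free.append(cluster)
    return free

if __name__ == "__main__":
    # "fat32.img" is just an example path to a raw FAT32 partition image.
    print(len(fat32_free_clusters("fat32.img")), "free clusters")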

For general data storage devices this is not the issue. More advanced
interfaces that let the OS tell the flash "this is no longer needed"
are an answer.

The problem there is you are (largely) left at the mercy of the OS
provider to implement those features. Or, "hacks" like Intel has
implemented (which require you to tune how often the hack is invoked
based on the sort of disk traffic you encounter).

And forget the ability to run "legacy" OS's using that sort of
hardware. Which OS vendor is going to back-port that sort of
utility to an older/obsolescent OS?
 
josephkk
Hi Tom,

On 3/3/2012 9:16 AM, Tom Del Rosso wrote:

[8<]
There are 2 possibilities.

A 16 or 32 GB disk is practical. It would cost more but it's only part of a
system, and it would have the speed and the reliability.

But 16-32G is small in the desktop/laptop world (I'm working on
designs of handheld/embedded devices with that much "on board")
A partial RAM buffer would solve the registry problem, and any similar
problem, without needing the drive to understand every filesystem. On a
typical day less than 1 GB is modified at all. They could easily have a
buffer big enough to avoid writing to any flash for at least a day. That
would cut writes to frequently changed files down to nothing, as such files
never add up to more than 1 GB.

That depends quite a bit on what you are doing. In a rather typical
desktop environment what you say seems true on the face of it. Edit HD
video for a few minutes to a few hours and it is obviously not the case.
See also large simulations.
The "right" way to solve the windows problem is to acknowledge
that:
- swap file writes don't need to be to persistent storage
- registry updates needn't be persistent UNTIL POWER FAIL
IS IMMINENT!

So, move the swap into a dedicated piece of RAM -- even if you
have to locate that RAM in a "disk drive" to get around "issues"
in windows configuration.
Doable!

I don't think Windows has a predictable mechanism for relocating
*just* the registry to a different "filesystem" (which could
then be mounted on a different physical device!). E.g., true
symlinks could give you this capability without necessitating
a special mount point *just* for registry files (MS doesn't
think ahead in this respect... \WINDOWS\registry would have
made far more sense as it would have isolated those files
someplace "convenient" for future manipulation. Oh, I forgot.
Putting "MS" and "future" is a silly idea. "Microsoft, bringing
1970's technology to the 21st century!")

Windows also writes a lot of log files, just like Linux. I think that the
registry writes are performance counter data (a damn stupid place to put
them). MS should provide some controls over the write frequency, from
several times per second down to no more than once a day.
If you can move the registry files onto a "ram-disk" and then
schedule a process to flush them to the *real* disk at power
down, you've solved the problem (except for the inevitable
MS crashes that don't proceed through an orderly shutdown!)

That may be possible, but how frequently do you want it to write to
non-volatile memory (disk-like)?

?-)
 
josephkk
Hi Joseph,


Still, it would be hard to determine this just by looking at the
information passed to the SSD. And, you are now at the mercy of
"policy changes" in the OS. E.g., what happens if service pack 28
changes how the registry is updated, frequency, etc. You surely
don't want to have to upgrade your SSD's firmware every time
MS sneezes!

Maybe; it depends a bit on how the FW policy is represented. Still,
pathological write policies from MS are an existing problem.
OTOH, the layout of the filesystem is relatively constant. MS
can't just arbitrarily decide that the filesystem is organized
differently (though they could change the policies they implement
when *selecting* a free block, etc.).


It's a lousy analogy -- but, it highlights the fact that the
SSD only works (*well*) with a particular OS.

My DLT's have different firmware images for different OS's.
I've never explored the nature of the differences. Rather,
I made a point of putting Sun-branded DLTs on Sun boxen, etc.
(I should try swapping a DLT on a Windows box with one from
a Sun and see if there are any differences in performance
or features)

DLT? Googling, Oh HiTC! Well, HiTC includes both serpentine and helical
physical tape layouts.
Yes, I wasn't commenting about actual numbers but, rather, the
fact that DVD-RW *can* be rewritten -- though much slower than
*read*... and, for a limited number of cycles.

OK, a matter of degree and matching ratios.
Yes. I am using large amounts of FLASH in lieu of a disk in a
couple of projects to take advantage of the faster access, smaller
size/power, etc. But, the same "write" issue Peter brings up
has to be addressed. People think of disks as infinitely
rewritable media *and* having reasonably uniform access times.
An illusion that FLASH can't maintain.

Interesting public misconception: rotating disk does NOT have uniform
access times; it is just so fast now that almost nobody notices.
Especially with serious OS buffering. Rotating disk does have effectively
transfinite write endurance. Flash does have nearly uniform access time
(both read and write), but it is so much faster than rotating disk that
only a very few care.

Cheers
?-)
 
josephkk
Hi Joseph,

Remember, the SSD implements a similar but *independent* "block
tracking scheme" *under* the filesystem's "block tracking scheme"
(block != block). So, getting them to be consistent with each other
is the trick.

Some OS's include support for the ATA "TRIM" command which allows
the OS to explicitly tell the drive which blocks are actually
"free" (i.e., the OS can interpret the allocation table(s) on
behalf of the SSD and tell the SSD which blocks are suitable for
"reuse"). Some SSD manufacturers provide utilities (running
*under* the OS as applications) to interrogate the allocation
table and convey this information to the SSD as a proxy for the
(deficient) OS.

In either case, the SSD needs to support this capability. And,
it doesn't get around the eventual wear-out issue.

[Or, as I mentioned elsewhere, let the drive snoop the partition
table and "understand" the filesystem(s) present on its media]

Actually the easy and better approach may be to watch for frequently
rewritten blocks and make sure to keep them moving around. In better
engineered systems use that remapping to move them through relatively
steady areas of storage preferentially. Simpler implementation and works
for all OSs and filesystems (except swap).

Wear leveling algorithms effectively do that. The erase count
for each flash "page" is tracked. If a page is rewritten, then
a count, *somewhere*, is incremented.

I.e., frequently written (filesystem-)blocks will get moved as
a consequence of this. The problem is identifying those filesystem
blocks that are "no longer being used" (and, from that, the
associated flash pages) and good candidates for reuse. The drive
needs to know *how* the medium is being used (i.e., the structure
and contents of the filesystem) in order to infer this on its
own (else, the OS needs to explicitly TELL it the information that
it needs).

Without full models of every file system there is no even hopeful way
a storage device can guess "no longer used".

There really aren't that many different filesystems in use. When you
contrast the number of filesystems with the number of different versions
of Windows, Solaris, Linux, FreeBSD, NetBSD, OpenBSD, QNX, Minix, IRIX,
DOS, AmigaOS, MacOS, etc. you see how much *easier* it is to support
filesystems than rely on OS support!

OK. Unixes and Linuxes have only about a dozen reasonable filesystems and
only about 5 of them are reasonably popular. Add the various FATs and
NTFS, plus Mac HFS(+). Don't know what Amiga did. Minix has its own
filesystem, as does QNX. Hmmm. Still about a dozen of them.
E.g., accurately modeling a FAT32 filesystem would allow you to use
that SSD on most of the above (though perhaps not with all of the
bells and whistles that you would like). OTOH, if *only* Windows 7/8
supports TRIM, then how many of the above environments are unusable
(if the SSD *relies* on TRIM)?

Well, at least it could support all the bells and whistles of FAT32, which
ain't that bad. But if you need real file ownership support, FAT doesn't
cut it, and NTFS ain't much better.
The problem there is you are (largely) left at the mercy of the OS
provider to implement those features. Or, "hacks" like Intel has
implemented (which require you to tune how often the hack is invoked
based on the sort of disk traffic you encounter).

And forget the ability to run "legacy" OS's using that sort of
hardware. Which OS vendor is going to back-port that sort of
utility to an older/obsolescent OS?

The same goes for the FS-dependent disk algorithms.
 
Don Y
Hi Joseph,



Interesting public misconception: rotating disk does NOT have uniform
access times; it is just so fast now that almost nobody notices.

Disk is a DASD (direct access storage device) while tape is a SASD
(sequential access storage device).

(Physical) disk access time consists of rotational delay as the
platter stack spins around to bring the magnetic domains (assuming
a magnetic disk) of interest under the head; plus seek time -- the
time to position the head array over the appropriate track.

Rotational delay obviously varies based on where the domains
happen to be "at the present time" wrt the head(s). It also
is a function of the speed at which the platters are rotating
(some media vary their speed based on where the data is located
on the medium).

Some disk assemblies eliminated the seek time by incorporating a
head per track (cylinder). The RS11 had 128 (?) fixed heads
and was *effectively* "word addressable" (256K 16+1b words).

In a DASD, access time is tightly bounded and "very similar"
regardless of which data are being accessed. By contrast,
in a SASD, access time varies greatly with where the data
resides on the medium.
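
Putting rough numbers on that (the 7200 RPM and 8 ms seek figures are
just typical examples, not anything from this thread):

# Back-of-the-envelope disk access time: average rotational delay is
# half a revolution, plus average seek time. Example figures only.
def avg_access_time_ms(rpm=7200, avg_seek_ms=8.0):
    ms_per_revolution = 60_000.0 / rpm        # 7200 RPM -> ~8.33 ms/rev
    rotational_delay = ms_per_revolution / 2  # on average, half a turn
    return rotational_delay + avg_seek_ms

print(f"{avg_access_time_ms():.1f} ms")       # ~12.2 ms for the example numbers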

(I wrote a driver for half inch 9T that could be mounted as
a block device. Slow as sin but amusing to watch the reels
start, stop and reverse on a dime as it chased down the
desired "block"!)
Especially with serious OS buffering. Rotating disk does have effectively
transfinite write endurance. Flash does have nearly uniform access time
(both read and write),

No. Flash write times are orders of magnitude slower than read
times. This is another reason why you really want to avoid reads.

By contrast, with the exception of the physical access delays
necessitated by the rotating medium, a disk's write time is
essentially the same as its read time.
 
FatBytestard
That depends quite a bit on what you are doing. In a rather typical
desktop environment what you say seems true on the face of it. Edit HD
video for a few minutes to a few hours and it is obviously not the case.
See also large simulations.

Which you do not do on an SSD, IDIOT!

You still need a spinning hard drive for that. ANY editor with half a
brain knows that.

A 30 second google search will tell you that they do fail (SSDs). Why
exercise it?

Oh and he was talking about system files.
 
josephkk
(I wrote a driver for half inch 9T that could be mounted as
a block device. Slow as sin but amusing to watch the reels
start, stop and reverse on a dime as it chased down the
desired "block"!)


No. Flash write times are orders of magnitude slower than read
times. This is another reason why you really want to avoid reads.
^^^^^ writes

Flash write times are still much faster than disk reads/writes. Low 100s
of µs (can be as fast as mid 10s of µs) versus 3 to 10 ms or so. It
depends a lot on how many seeks, at least two if the filesystem keeps
track of modified time (mtime). Keeping track of accessed time (atime)
really hurts rotating-disk filesystem performance by adding seeks and
writes like crazy.
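
Using those rough figures (flash writes in the low hundreds of µs, disk
operations at 3-10 ms, atime adding an extra metadata write per access),
the gap is easy to put numbers on; a quick sketch with example values:

# Rough comparison using the latency figures from the post: flash writes
# ~100-300 us each, a disk operation ~3-10 ms, and an atime-updating
# filesystem pays one extra metadata write per access. Example values.
FLASH_WRITE_US = 200          # low hundreds of microseconds
DISK_OP_MS = 6                # middle of the 3-10 ms range
ATIME_EXTRA_OPS = 1           # one additional metadata write per access

def small_writes_ms(count, atime=False):
    disk_ops = count * ((1 + ATIME_EXTRA_OPS) if atime else 1)
    return {
        "flash_ms": count * FLASH_WRITE_US / 1000.0,
        "disk_ms": disk_ops * DISK_OP_MS,
    }

print(small_writes_ms(1000))               # ~200 ms flash vs ~6000 ms disk
print(small_writes_ms(1000, atime=True))   # disk roughly doubles with atime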
 