Maker Pro

Are SSDs always rubbish under winXP?

Peter

George Neuner said:
You mean "expensive" rather than "difficult". The only requirement
for running Windows without a pagefile is a lot of RAM. No special
settings other than "no pagefile" are necessary.



All versions of Windows spend a great deal of effort to maintain
performance counters in the registry. Disabling performance
monitoring (if/when you don't need it) should put a stop to quite a
bit of unnecessary disk access.

You can do it all at once with a registry tweak:

- Go to: HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Perflib

- Add a new DWORD value "DisablePerformanceCounters". Set the value of
DisablePerformanceCounters to 1 and reboot your computer.
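
For anyone who would rather script that tweak, here is a minimal sketch using
Python's standard winreg module. It assumes admin rights and simply sets the
same value described above; adjust to taste.

import winreg

key_path = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Perflib"
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path, 0,
                    winreg.KEY_SET_VALUE) as key:
    # DWORD value of 1 disables the performance counters, per the steps above
    winreg.SetValueEx(key, "DisablePerformanceCounters", 0,
                      winreg.REG_DWORD, 1)
# A reboot is still required for the change to take effect.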


Or use Microsoft's resource kit tool to enable/disable individual
performance counters:
http://download.microsoft.com/downl...xctrlst/1.00.0.1/nt5/en-us/exctrlst_setup.exe


George

That's interesting and it may prolong the life of an SSD, but I don't
think there is any way to disable swapping in winXP onwards. IIRC, a
lot of apps stop running if you do that.

With the old win3.1 you could just do that, and all disk activity
stopped totally. I ran a multizone heating controller on such a
system for about 10 years. When NT came along, that was no longer
possible (on the retail version).
 
Don Y

Hi Joseph,
Probably difficult and expensive.

*easier* (technically) to do than that which follows (which is a
superset of this). From a *Marketing* perspective, however, it
may be a problem as it would effectively render a drive "useless"
(?) if deployed for a filesystem other than intended. (Also
a problem if different filesystems are employed on the same medium
concurrently)
Now that is some useful thinking.

Also remember that MSwin continually writes to the registry, which is
mirrored on disk.

This is why knowing the intended deployment can come in handy as
the drive could "notice" that behavior and opt to divert it to
a RAM-resident portion.

Clearly something has to be done as reliability will only get
worse as geometries shrink. "The *good* news? Capacities are
going up and costs are going DOWN! The *bad* news? So is
durability!"

ISTM that SSDs really only make sense as read-only devices.
Put the OS and applications on it and let everything else
reside on writable media...
 
Jasen Betts

Wanna bet that the wear leveling algorithms are _not_ designed for FAT
file systems only?

?-8

That's a claim I've heard before, I'm not convinced either way.


To change the subject yet again: the MS-DOS implementation of FAT would
not allocate recently freed blocks lying before the last block
allocated until the end of the disk was reached. If one never
rebooted (or removed the disk) this would give a primitive sort of
wear leveling.
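
To illustrate the effect, here is a toy next-fit allocator in Python. It is a
hypothetical sketch of the behaviour described above, not the actual MS-DOS
code: a roving pointer keeps moving forward, so freshly freed early clusters
are not reused until the pointer wraps around.

# Toy next-fit cluster allocator, illustrating the "primitive wear leveling"
# described above.  A hypothetical sketch, not the real MS-DOS FAT code.
FREE, USED = 0, 1

class NextFitAllocator:
    def __init__(self, n_clusters):
        self.fat = [FREE] * n_clusters
        self.cursor = 0                 # roving pointer; reset only on "reboot"

    def allocate(self):
        n = len(self.fat)
        for i in range(n):
            c = (self.cursor + i) % n   # wraps only after reaching the end
            if self.fat[c] == FREE:
                self.fat[c] = USED
                self.cursor = (c + 1) % n
                return c
        raise RuntimeError("disk full")

    def free(self, c):
        self.fat[c] = FREE              # freed early clusters stay untouched
                                        # until the cursor wraps around

a = NextFitAllocator(64)
for _ in range(10):                     # write clusters 0..9
    a.allocate()
for c in range(5):                      # delete the files in clusters 0..4
    a.free(c)
print(a.allocate())                     # -> 10, not 0: writes keep spreading out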
 
Don Y

Hi Jasen,

That's a claim I've heard before, I'm not convinced either way.

To change the subject yet again: the MS-DOS implementation of FAT would
not allocate recently freed blocks lying before the last block
allocated until the end of the disk was reached. If one never
rebooted (or removed the disk) this would give a primitive sort of
wear leveling.

The problem is that the disk needs to understand the filesystem(s)'
format in order for it to "snoop" the allocation table and deduce,
from that, which "physical blocks" (blech... I'm playing fast and loose
with terminology, here) *in* the FLASH are STILL IN USE (i.e., are
"referenced") vs. MARKED AS FREE.

As far as the SSD is concerned, *all* of the blocks are "in use"
as soon as each one has been "touched".

Remember, the SSD implements a similar but *independent* "block
tracking scheme" *under* the filesystem's "block tracking scheme"
(block != block). So, getting them to be consistent with each other
is the trick.

Some OS's include support for the ATA "TRIM" command which allows
the OS to explicitly tell the drive which blocks are actually
"free" (i.e., the OS can interpret the allocation table(s) on
behalf of the SSD and tell the SSD which blocks are suitable for
"reuse"). Some SSD manufacturers provide utilities (running
*under* the OS as applications) to interrogate the allocation
table and convey this information to the SSD as a proxy for the
(deficient) OS.

In either case, the SSD needs to support this capability. And,
it doesn't get around the eventual wear-out issue.

[Or, as I mentioned elsewhere, let the drive snoop the partition
table and "understand" the filesystem(s) present on its media]
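
As a rough sketch of those two independent "block tracking schemes" and what
TRIM adds, consider this simplified, assumed model; real FTLs also deal with
erase blocks, garbage collection, ECC, and so on.

# Simplified (assumed) model of the SSD's own logical-to-physical map sitting
# *under* the filesystem's allocation table.  Real firmware also deals with
# erase blocks, garbage collection, ECC, etc.; this only shows the bookkeeping.
class ToySSD:
    def __init__(self, n_pages):
        self.free_pages = set(range(n_pages))   # physical pages available for writes
        self.mapping = {}                        # logical block -> physical page

    def write(self, lba, data):
        # Flash can't be rewritten in place: take a fresh page, remap the
        # logical block, and reclaim the old page (garbage collection elided).
        if not self.free_pages:
            raise RuntimeError("no free pages left")
        page = self.free_pages.pop()
        old = self.mapping.get(lba)
        self.mapping[lba] = page
        if old is not None:
            self.free_pages.add(old)
        # (store `data` at `page` ...)

    def trim(self, lba):
        # TRIM: the OS tells the drive this logical block is no longer
        # referenced by the filesystem, so its physical page can be reclaimed.
        page = self.mapping.pop(lba, None)
        if page is not None:
            self.free_pages.add(page)

# Without trim(), every logical block the host has ever touched stays mapped,
# even though the filesystem's own table may have marked it free long ago.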
 
Tom Del Rosso

Don said:
This is why knowing the intended deployment can come in handy as
the drive could "notice" that behavior and opt to divert it to
a RAM-resident portion.

These things should have full RAM buffers with write-back at power-down;
I think a double-layer cap could supply the power for that, especially if
the block erases are done while power is up. Then you only have to erase
each block once.

Except that DRAM is more expensive than flash, which is itself an odd
development. When and why did that reversal occur?
 
George Neuner

What Joseph means is that Windows always swaps even when the RAM is
not full. Running Windows WITH a swapfile makes it a lot SLOWER
even if you have more than enough RAM. As usual MS didn't get the
mechanism right.

The paging mechanism itself is not at fault, but other things
Microsoft got wrong are working against it.

As someone else said, the paging statistics often are misinterpreted.
If you look with, e.g., Process Explorer, you'll often
find that there is very little in the pagefile and, at the same time,
loads of unused RAM ... and yet the disk is churning. At least on
NT/2K/XP ... Windows 7 and 2K3 and later server editions have
self-tuning management and do a much better job (though they all still
have the performance counter registry access issues).


The first issue is that Windows uses relocatable libraries as opposed
to position-independent libraries. Because dlls are not position
independent, when multiple instances are mapped at different
addresses, there must be multiple copies of the code in memory (one
for each base address). The most commonly used OS dlls have unique
base addresses so the odds of multiple mapping are very low (though
not zero), but language runtime and user written dlls all have the
same default base addresses unless the developer deliberately rebases
them. Non-OS shared dlls often place unnecessary memory pressure on
Windows. Code is paged directly from executables, so the pagefile is
backing only instance data, but having to page in code for different
instances increases disk accesses.


The second issue, which interacts with the first, is that Windows does
not have a unified file system cache, but rather it tries to be "fair"
by reserving cache address space for each running process. By
default, Windows will take up to 80% of RAM for file caching, so if
you have the normal situation where a lot of processes aren't using
their allotted space, a lot of your RAM may be going unused.

There is a free tool called "cacheset" which will change the per-process
file cache limits. Unfortunately cacheset does not change the
default settings in the registry, so you have to run it each time you
log in, but the tool can be command line driven so you can place it in a
startup batch file.

Cacheset, Process Explorer, and a bunch of other useful stuff are all
available at http://technet.microsoft.com/en-us/sysinternals

There are a number of registry tweaks available for adjusting
process/system RAM distribution and default file caching parameters.
You can find these with the search engine of your choice.


George
 
George Neuner

That's interesting and it may prolong the life of an SSD, but I don't
think there is any way to disable swapping in winXP onwards. IIRC, a
lot of apps stop running if you do that.

With the old win3.1 you could just do that, and all disk activity
stopped totally. I ran a multizone heating controller on such a
system for about 10 years. When NT came along, that was no longer
possible (on the retail version).

AFAIK, you can run any version of Windows without a pagefile - given
sufficient RAM. I haven't tried it with Win7 (or 8) yet, but I know
from personal experience that it works in all the previous versions
(including server editions).

I can only speculate as to why you couldn't make it work.

Windows doesn't handle over-allocation of address space in the same
way Unix and Linux do. Unix and Linux don't commit pages until you
touch them, so you can do idiotic things like malloc 1.5GB in a system
with 256MB of total VMM space. As long as you never touch the extra,
you'll never have a problem.

But unless an application is deliberately written using VirtualAlloc()
et al., Windows commits *all* pages of an allocation immediately. If
there is no pagefile, the total of all the committed space has to fit
into RAM, so if programs are grabbing more memory than they intend to
use, you can easily have a problem.
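
A sketch of that reserve-versus-commit distinction, using Python's ctypes
against the Win32 VirtualAlloc call (assumes a Windows Python; error handling
omitted):

# Sketch of the reserve-vs-commit distinction via ctypes.
import ctypes

MEM_COMMIT     = 0x00001000
MEM_RESERVE    = 0x00002000
PAGE_READWRITE = 0x04

kernel32 = ctypes.windll.kernel32
kernel32.VirtualAlloc.restype  = ctypes.c_void_p
kernel32.VirtualAlloc.argtypes = [ctypes.c_void_p, ctypes.c_size_t,
                                  ctypes.c_uint32, ctypes.c_uint32]

# Reserve 1 GiB of address space only: nothing is charged against RAM or the
# pagefile yet, so this succeeds even on a pagefile-less box with modest RAM.
base = kernel32.VirtualAlloc(None, 1 << 30, MEM_RESERVE, PAGE_READWRITE)

# Commit just 64 KiB of it on demand; only this much counts against the
# commit limit.  A plain malloc()/new of 1 GiB, per the point above, would
# commit the whole allocation up front.
chunk = kernel32.VirtualAlloc(base, 64 * 1024, MEM_COMMIT, PAGE_READWRITE)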

George
 
Don Y

Hi Tom,

These things should have full RAM buffers with write-back at power-down;
I think a double-layer cap could supply the power for that, especially if
the block erases are done while power is up. Then you only have to erase
each block once.

<grin> Think about that for a moment. *ASSUMING*, by "full RAM
buffers" you mean "a bit of RAM for each bit of FLASH(ROM)", you're
talking about ~1TB of RAM in addition to the ~1TB of FLASH!
Except that DRAM is more expensive than flash, which is itself an odd
development. When and why did that reversal occur?

Flash was a new technology (WAROM :> ). Every new technology rides the
manufacturing efficiency curve downward.

Of course, FLASH has many limitations that (any form of) RAM doesn't.
OTOH, it is a very *tiny* geometry! You need less "stuff" to store
the data.

SRAM requires several (6?) transistors to ACTIVELY store (latch) the
datum. Transistors take up space. SRAM is (universally?) highly
addressable ("byte at a time", so to speak). So, the decoding logic
also takes up space.

DRAM requires just *one* transistor -- and a *capacitor* (that "holds
charge" to remember the datum!). Capacitors are big.

FLASH requires just the *transistor*. And, can be stacked (3D) to
cram more than one "bit" in a "cell".

NAND flash sacrifices flexibility in addressability for even smaller
effective cell sizes.

In addition to size (which translates most directly into manufacturing
costs), FLASH uses less power to retain/retrieve/update the data it
contains (i.e., it will retain the data in the absence of power!).
SRAM uses *gobs* of power to hold the "latches" in their particular
states. DRAM uses less to keep those capacitors "topped off" with
the right amount of charge (this is what "refresh" is all about).

Power == heat. Imagine putting 60 16GB DIMMs in a case the size of
a 3.5" disk drive (1TB -- neglecting any ECC). How do you even
package something like that with any hope of getting all the heat
*out* of the case?

DRAM (or any "faster-than-FLASH" RAM) is overkill for this type
of application. It offers more bandwidth than is needed. You
end up paying for that -- with extra product cost, power
requirements, size, weight, complexity, etc.

FLASH tries to hit a sweet spot in that application domain.
It gives you higher bandwidth (pseudo-random access) than rotating
media without being EXORBITANTLY so (like DRAM). It avoids the
mechanical consequences of rotating media (*drop* your disk-based
product on a construction site and see how well it fares :> ).

Of course, the nature of Engineering is such that there are no
free lunches. So, you pay for these features with liabilities!
 
TheQuickBrownFox

Of course, the nature of Engineering is such that there are no
free lunches. So, you pay for these features with liabilities!


I get free lunches from the plates left after meetings all the time.

There are always about 15 or so that never get taken, and when the
email hits that the meeting is over and there are left overs, you'd
better get there fast! Always good to be friends with the exec secs.

Mmmmmmm... Turkey and Bacon Club Wraps... Or On Sourdough... Mmmmmmm.
 
josephkk

Hi Joseph,


*easier* (technically) to do than that which follows (which is a
superset of this). From a *Marketing* perspective, however, it
may be a problem as it would effectively render a drive "useless"
(?) if deployed for a filesystem other than intended. (Also
a problem if different filesystems are employed on the same medium
concurrently)

Yes and no. I was looking at determining the filesystem from usage patterns
rather than implementing different wear leveling algorithms for different
filesystems. Different code per file system is something that both variant
approaches have in common.
This would be vastly cheaper than trying to infer it from usage patterns,
let alone the engineering to figure out how to do that.
This is why knowing the intended deployment can come in handy as
the drive could "notice" that behavior and opt to divert it to
a RAM-resident portion.

Clearly something has to be done as reliability will only get
worse as geometries shrink. "The *good* news? Capacities are
going up and costs are going DOWN! The *bad* news? So is
durability!"

ISTM that SSDs really only make sense as read-only devices.
Put the OS and applications on it and let everything else
reside on writable media...

I must agree in part. SSD is fine for a write-infrequently, read-lots
kind of use; the OS and applications are a good cut. But it is not a write
once, read forever (DVD-R) situation. Not so good for log files and such
(write lots, read infrequently). Its major attraction is average access
time, which is >100 times faster than rotating disk. (And it is cheaper
than RAM, which is another 100 times faster.) Getting the most out of a
particular machine requires balancing these systems; Amdahl's law is
helpful here.
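
To put rough numbers on the Amdahl's law point (the 100x factors follow the
figures above; the 30% I/O fraction is just an assumed example):

# Illustrative Amdahl's law estimate: speeding up only the I/O-bound fraction.
def amdahl(fraction_sped_up, speedup_factor):
    return 1.0 / ((1.0 - fraction_sped_up) + fraction_sped_up / speedup_factor)

io_fraction = 0.30                       # assumed share of runtime spent on disk I/O
print(amdahl(io_fraction, 100))          # SSD, ~100x faster access: ~1.42x overall
print(amdahl(io_fraction, 100 * 100))    # RAM, another ~100x:       ~1.43x overall
# Past a point, faster storage buys almost nothing; hence the balancing act.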

?-)
 
josephkk

Hi Jasen,

That's a claim I've heard before, I'm not convinced either way.

To change the subject yet again: the MS-DOS implementation of FAT would
not allocate recently freed blocks lying before the last block
allocated until the end of the disk was reached. If one never
rebooted (or removed the disk) this would give a primitive sort of
wear leveling.

The problem is that the disk needs to understand the filesystem(s)'
format in order for it to "snoop" the allocation table and deduce,
from that, which "physical blocks" (blech... I'm playing fast and loose
with terminology, here) *in* the FLASH are STILL IN USE (i.e., are
"referenced") vs. MARKED AS FREE.

As far as the SSD is concerned, *all* of the blocks are "in use"
as soon as each one has been "touched".

Remember, the SSD implements a similar but *independent* "block
tracking scheme" *under* the filesystem's "block tracking scheme"
(block != block). So, getting them to be consistent with each other
is the trick.

Some OS's include support for the ATA "TRIM" command which allows
the OS to explicitly tell the drive which blocks are actually
"free" (i.e., the OS can interpret the allocation table(s) on
behalf of the SSD and tell the SSD which blocks are suitable for
"reuse"). Some SSD manufacturers provide utilities (running
*under* the OS as applications) to interrogate the allocation
table and convey this information to the SSD as a proxy for the
(deficient) OS.

In either case, the SSD needs to support this capability. And,
it doesn't get around the eventual wear-out issue.

[Or, as I mentioned elsewhere, let the drive snoop the partition
table and "understand" the filesystem(s) present on its media]

Actually, the easy and better approach may be to watch for frequently
rewritten blocks and make sure to keep them moving around. In
better-engineered systems, use that remapping to move them preferentially
through relatively steady areas of storage. It is a simpler implementation
and works for all OSs and filesystems (except swap).
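
Roughly what that heuristic might look like, as a hedged sketch (an assumed
toy model, not any particular controller's algorithm):

# Assumed toy model of the filesystem-agnostic heuristic above: count rewrites
# per logical block and steer "hot" data into the least-worn physical blocks.
class HotBlockLeveler:
    HOT_THRESHOLD = 100                      # rewrites before a block counts as hot (assumed)

    def __init__(self, n_physical):
        self.erase_count = [0] * n_physical  # wear accumulated per physical block
        self.write_count = {}                # rewrites seen per logical block
        self.mapping = {}                    # logical block -> physical block
        self.free = set(range(n_physical))

    def write(self, lba, data):
        self.write_count[lba] = self.write_count.get(lba, 0) + 1
        if not self.free:
            raise RuntimeError("no free blocks (garbage collection elided)")
        hot = self.write_count[lba] >= self.HOT_THRESHOLD
        # Hot data goes to the least-worn free block; cold data takes the
        # most-worn free block, keeping the well-rested areas in reserve.
        if hot:
            target = min(self.free, key=lambda p: self.erase_count[p])
        else:
            target = max(self.free, key=lambda p: self.erase_count[p])
        self.free.discard(target)
        old = self.mapping.get(lba)
        if old is not None:
            self.free.add(old)               # the stale copy becomes reclaimable
        self.mapping[lba] = target
        self.erase_count[target] += 1        # one more program/erase on that block
        # (store `data` at `target` ...)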

?-)
 
George Neuner

I did indeed mean difficult; deleting the swap file in MSwin is often a
hand edit to the registry.

Ok, I see your point. But deleting the file technically is different
from telling Windows not to use it. After turning off paging and
rebooting, the file - even if there - won't be used. You can confirm
this by monitoring.

2K and above do make it hard to remove the file permanently. Once
paging is disabled, the file isn't in use and you can delete it with
no problem, but without the registry edit you refer to the system will
recreate the missing file on every reboot.

The simplest thing for most people to do is to reduce the file to
minimum size (2MB). After reboot, the file will be truncated and
won't ever grow. Then just forget about it.

George
 
Don Y

Hi Joseph,

Hi Jasen,

One thing that I'd like to know is, how does it store the allocation table?
That must be in flash too, or maybe EEPROM where cells are not paged, but
they still have a write-cycle lifetime.

All blocks are subject to wear leveling.
That includes the FAT (if you use a filesystem that works that way).

The wear-leveling is hidden from the operating system.

Wanna bet that the wear leveling algorithms are _not_ designed for FAT
file systems only?

That's a claim I've heard before, I'm not convinced either way.

To change the subject yet again: the MS-DOS implementation of FAT would
not allocate recently freed blocks lying before the last block
allocated until the end of the disk was reached. If one never
rebooted (or removed the disk) this would give a primitive sort of
wear leveling.

The problem is that the disk needs to understand the filesystem(s)'
format in order for it to "snoop" the allocation table and deduce,
from that, which "physical blocks" (blech... I'm playing fast and loose
with terminology, here) *in* the FLASH are STILL IN USE (i.e., are
"referenced") vs. MARKED AS FREE.

As far as the SSD is concerned, *all* of the blocks are "in use"
as soon as each one has been "touched".

Remember, the SSD implements a similar but *independent* "block
tracking scheme" *under* the filesystem's "block tracking scheme"
(block != block). So, getting them to be consistent with each other
is the trick.

Some OS's include support for the ATA "TRIM" command which allows
the OS to explicitly tell the drive which blocks are actually
"free" (i.e., the OS can interpret the allocation table(s) on
behalf of the SSD and tell the SSD which blocks are suitable for
"reuse"). Some SSD manufacturers provide utilities (running
*under* the OS as applications) to interrogate the allocation
table and convey this information to the SSD as a proxy for the
(deficient) OS.

In either case, the SSD needs to support this capability. And,
it doesn't get around the eventual wear-out issue.

[Or, as I mentioned elsewhere, let the drive snoop the partition
table and "understand" the filesystem(s) present on its media]

Actually, the easy and better approach may be to watch for frequently
rewritten blocks and make sure to keep them moving around. In
better-engineered systems, use that remapping to move them preferentially
through relatively steady areas of storage. It is a simpler implementation
and works for all OSs and filesystems (except swap).

Wear leveling algorithms effectively do that. The erase count
for each flash "page" is tracked. If a page is rewritten, then
a count, *somewhere*, is incremented.

I.e., frequently written (filesystem-)blocks will get moved as
a consequence of this. The problem is identifying those filesystem
blocks that are "no longer being used" (and, from that, the
associated flash pages) and good candidates for reuse. The drive
needs to know *how* the medium is being used (i.e., the structure
and contents of the filesystem) in order to infer this on its
own (else, the OS needs to explicitly TELL it the information that
it needs).

Imagine I give you each of my telephone messages (those little
pink slips of paper) and ask you to hold onto them for me.
Months later, your pockets are bulging with all these slips of
paper. Soon, you'll have no place to store them! How do you
decide which slips you can discard? You have no knowledge of
which are still *pertinent* to me!

[This is a bad analogy but it illustrates how the information
that *you* have needs to know the information *I* have in order
to best make use of the space you have available for holding
those little slips of paper! Imagine if you could snoop on
all my phone calls and other contacts and DECIDE FOR YOURSELF
if I have "returned" a call for which you have been holding
a "slip". You could then better manage the slips as you would
know which ones you could discard. Failing this, you would have
to rely on ME -- playing the role of OS in this analogy -- to
tell you which to discard!]
 
Don Y

Hi Joseph,
Yes and no. I was looking at determining the filesystem from usage patterns
rather than implementing different wear leveling algorithms for different
filesystems. Different code per file system is something that both variant
approaches have in common.

"Past performance is not a predictor of future performance" :>
It is really hard to look at the SSD's upward facing interface
and decide on an EFFECTIVE strategy for managing the medium
within.

If <something> is hammering away at 2 particular blocks on the
disk, will that behavior continue? Or, will the NEXT two blocks
get hammered on just as soon as you (the SSD) have decided that
this is a behavior pattern that you can exploit?

With a desktop environment, you have no predictive power as to
what the user is likely to WANT to do next.
This would be vastly cheaper than trying to infer it from usage patterns,
let alone the engineering to figure out how to do that.

This then becomes a marketing problem. Now you have a device
that fits *some* markets but not others. OK, you can deal with
the 800 pound gorillas (MS & Apple). But, what about other
deployments? What about *new* filesystems that don't exist
at the time you release the product? ("Sorry, this disk can
only be used on machines running DOS version X or earlier")

Imagine if semiconductor devices were constrained to ONLY
operate in certain applications ("Sorry, this resistor can
only be used in personal media players")

And, of course, snooping the filesystem is more complicated
(in addition to all the other functions that the SSD must
*still* perform) than just acting like a block storage device!
(bugs)
I must agree in part. SSD is fine for a write-infrequently, read-lots
kind of use; the OS and applications are a good cut. But it is not a write
once, read forever (DVD-R) situation.

No. The DVD-RW analogy is more appropriate.
Not so good for log files and such
(write lots, read infrequently). Its major attraction is average access
time, which is >100 times faster than rotating disk. (And it is cheaper
than RAM, which is another 100 times faster.)

But don't forget the mechanical aspects of its appeal: you can
drop the thing WHILE OPERATING and not feel your sphincter
clench in the process! :>

(There are many other nice features -- like the fact that it
spins up and down *really* fast!)
 
josephkk

Hi Joseph,


"Past performance is not a predictor of future performance" :>
It is really hard to look at the SSD's upward facing interface
and decide on an EFFECTIVE strategy for managing the medium
within.

If <something> is hammering away at 2 particular blocks on the
disk, will that behavior continue? Or, will the NEXT two blocks
get hammered on just as soon as you (the SSD) have decided that
this is a behavior pattern that you can exploit?

With a desktop environment, you have no predictive power as to
what the user is likely to WANT to do next.
Perhaps not entirely; there is the registry issue in MS OSs. It is
memory-mapped and only on the order of 1/2 MiB to 1 MiB (call it 1000 to
2000 blocks), and I will bet that the frequent (every second) writes cover
only a few blocks.
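
Some back-of-the-envelope numbers on why those few blocks matter (the
endurance figures are rough assumptions, for illustration only):

# Back-of-the-envelope: one hot registry block rewritten every second, with no
# wear leveling.  Endurance figures are rough assumptions, not measurements.
writes_per_hour = 60 * 60                        # one write per second

for name, pe_cycles in [("SLC, ~100k cycles", 100_000),
                        ("MLC, ~3k cycles", 3_000)]:
    hours = pe_cycles / writes_per_hour
    print(f"{name}: hot block worn out after about {hours:.1f} hours")
# Roughly 28 hours for SLC and under an hour for MLC, which is exactly why the
# controller has to keep relocating those few hot blocks.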
This then becomes a marketing problem. Now you have a device
that fits *some* markets but not others. OK, you can deal with
the 800 pound gorillas (MS & Apple). But, what about other
deployments? What about *new* filesystems that don't exist
at the time you release the product? ("Sorry, this disk can
only be used on machines running DOS version X or earlier")

Imagine if semiconductor devices were constrained to ONLY
operate in certain applications ("Sorry, this resistor can
only be used in personal media players")

Now that is a little past over the top in straining the analogy.
And, of course, snooping the filesystem is more complicated
(in addition to all the other functions that the SSD must
*still* perform) than just acting like a block storage device!
(bugs)


No. The DVD-RW analogy is more appropriate.

Partially. The write and erase endurance of DVD-RW is not appropriate;
it is more like DVD-RAM write endurance (two orders of magnitude better, minimum).
But don't forget the mechanical aspects of its appeal: you can
drop the thing WHILE OPERATING and not feel your sphincter
clench in the process! :>

(There are many other nice features -- like the fact that it
spins up and down *really* fast!)

And it even seeks really fast.
 
josephkk

Remember, the SSD implements a similar but *independent* "block
tracking scheme" *under* the filesystem's "block tracking scheme"
(block != block). So, getting them to be consistent with each other
is the trick.

Some OS's include support for the ATA "TRIM" command which allows
the OS to explicitly tell the drive which blocks are actually
"free" (i.e., the OS can interpret the allocation table(s) on
behalf of the SSD and tell the SSD which blocks are suitable for
"reuse"). Some SSD manufacturers provide utilities (running
*under* the OS as applications) to interrogate the allocation
table and convey this information to the SSD as a proxy for the
(deficient) OS.

In either case, the SSD needs to support this capability. And,
it doesn't get around the eventual wear-out issue.

[Or, as I mentioned elsewhere, let the drive snoop the partition
table and "understand" the filesystem(s) present on its media]

Actually, the easy and better approach may be to watch for frequently
rewritten blocks and make sure to keep them moving around. In
better-engineered systems, use that remapping to move them preferentially
through relatively steady areas of storage. It is a simpler implementation
and works for all OSs and filesystems (except swap).

Wear leveling algorithms effectively do that. The erase count
for each flash "page" is tracked. If a page is rewritten, then
a count, *somewhere*, is incremented.

I.e., frequently written (filesystem-)blocks will get moved as
a consequence of this. The problem is identifying those filesystem
blocks that are "no longer being used" (and, from that, the
associated flash pages) and good candidates for reuse. The drive
needs to know *how* the medium is being used (i.e., the structure
and contents of the filesystem) in order to infer this on its
own (else, the OS needs to explicitly TELL it the information that
it needs).

Without full models of every file system, there is no hopeful way a
storage device can guess "no longer used". Not used before, not written
much, and not written recently can all be tracked independent of the OS
and file system. These are useful for remapping. Of course the space for
keeping this data is in addition to the space for the user-available
storage. This is why flash in non-power-of-two sizes is reasonable.
Imagine I give you each of my telephone messages (those little
pink slips of paper) and ask you to hold onto them for me.
Months later, your pockets are bulging with all these slips of
paper. Soon, you'll have no place to store them! How do you
decide which slips you can discard? You have no knowledge of
which are still *pertinent* to me!

For general data storage devices this is not the issue. More advanced
interfaces that let the host tell the flash "this is no longer needed" are
an answer. Other than that, the techniques already discussed will have to
suffice, in spite of more failures than predicted.
[This is a bad analogy but it illustrates how the information
that *you* have needs to know the information *I* have in order
to best make use of the space you have available for holding
those little slips of paper! Imagine if you could snoop on
all my phone calls and other contacts and DECIDE FOR YOURSELF
if I have "returned" a call for which you have been holding
a "slip". You could then better manage the slips as you would
know which ones you could discard. Failing this, you would have
to rely on ME -- playing the role of OS in this analogy -- to
tell you which to discard!]
 
Don Y

Hi Joseph,
Perhaps not entirely; there is the registry issue in MS OSs. It is
memory-mapped and only on the order of 1/2 MiB to 1 MiB (call it 1000 to
2000 blocks), and I will bet that the frequent (every second) writes cover
only a few blocks.

Still, it would be hard to determine this just by looking at the
information passed to the SSD. And, you are now at the mercy of
"policy changes" in the OS. E.g., what happens if service pack 28
changes how the registry is updated, frequency, etc. You surely
don't want to have to upgrade your SSD's firmware every time
MS sneezes!

OTOH, the layout of the filesystem is relatively constant. MS
can't just arbitrarily decide that the filesystem is organized
differently (though they could change the policies they implement
when *selecting* a free block, etc.).
Now that is a little past over the top in straining the analogy.

It's a lousy analogy -- but, it highlights the fact that the
SSD only works (*well*) with a particular OS.

My DLT's have different firmware images for different OS's.
I've never explored the nature of the differences. Rather,
I made a point of putting Sun-branded DLTs on Sun boxen, etc.
(I should try swapping a DLT on a Windows box with one from
a Sun and see if there are any differences in performance
or features)
Partially. The write and erase endurance of DVD-RW is not appropriate;
it is more like DVD-RAM write endurance (two orders of magnitude better, minimum).

Yes, I wasn't commenting about actual numbers but, rather, the
fact that DVD-RW *can* be rewritten -- though much slower than
*read*... and, for a limited number of cycles.
And it even seeks really fast.

Yes. I am using large amounts of FLASH in lieu of a disk in a
couple of projects to take advantage of the faster access, smaller
size/power, etc. But, the same "write" issue Peter brings up
has to be addressed. People think of disks as infinitely
rewritable media *and* having reasonably uniform access times.
An illusion that FLASH can't maintain.
 
MrTallyman

Perhaps not entirely; there is the registry issue in MS OSs. It is
memory-mapped and only on the order of 1/2 MiB to 1 MiB (call it 1000 to
2000 blocks), and I will bet that the frequent (every second) writes cover
only a few blocks.

You are such a fucking idiot!

The "registry" is a SET of files. That is plural, you fucking retard!

The per-second writes are updates to a simple log file, you absolute
ditz!

It don't get no dumber than a presumptuous idiot thinking he knows what
is going on, based on his already bent perceptions. You failed from the
get-go, boy. Achieving escape velocity from the stupid you have grown
yourself up to be is as hard as removing the fat from some lame lard ass
that spent decades putting it there.

They think they can get it all off in a matter of months.

You think your past convolutions of perception won't affect your
capacity to understand the operation of a system.

You are both wrong.
 
MrTallyman

Without full models of every file system, there is no hopeful way a
storage device can guess "no longer used".

Yeah, and YOUR pathetic guesses as to what is going on or where or when
are off 100% as well, idiot!
 
Tom Del Rosso

Don said:
Hi Tom,



<grin> Think about that for a moment. *ASSUMING*, by "full RAM
buffers" you mean "a bit of RAM for each bit of FLASH(ROM)", you're
talking about ~1TB of RAM in addition to the ~1TB of FLASH!

There are 2 possibilities.

A 16 or 32 GB disk is practical. It would cost more but it's only part of a
system, and it would have the speed and the reliability.

A partial RAM buffer would solve the registry problem, and any similar
problem, without needing the drive to understand every filesystem. On a
typical day less than 1 GB is modified at all. They could easily have a
buffer big enough to avoid writing to any flash for at least a day. That
would cut writes to frequently changed files down to nothing, as such files
never add up to more than 1 GB.
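
A sketch of that kind of write-back buffer (purely illustrative; the flush on
power-down assumes the capacitor-backed hardware discussed earlier in the
thread):

# Purely illustrative sketch of the partial RAM write-back buffer described
# above: absorb rewrites in RAM and push each dirty block to flash only at
# flush time (power-down, or once a day).
class WriteBackBuffer:
    def __init__(self, flash):
        self.flash = flash          # any object with .read(lba) / .write(lba, data)
        self.dirty = {}             # lba -> latest data, held only in RAM

    def write(self, lba, data):
        # A block rewritten 1000 times today costs 1000 RAM updates but,
        # after flush(), only a single flash program.
        self.dirty[lba] = data

    def read(self, lba):
        if lba in self.dirty:
            return self.dirty[lba]
        return self.flash.read(lba)

    def flush(self):
        # Called at power-down (cap-backed) or on a timer: each block that
        # changed since the last flush hits the flash exactly once.
        for lba, data in self.dirty.items():
            self.flash.write(lba, data)
        self.dirty.clear()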
 