Maker Pro

Are SSDs always rubbish under winXP?

Peter

I have installed a number of SSDs in desktops (24/7 operation) and all
failed within a year or so.

Example:
http://www.crucial.com/store/partspecs.aspx?IMODULE=CT256M4SSD2

They get replaced under warranty, but the result is still rubbish, not
to mention the hassle and loss of data (we have tape backups, but it's
still a hassle). It seems that specific files (specific locations in the
FLASH) become unreadable. The usual manifestation is that the disk
becomes unbootable (sometimes NTLDR is not found; those are fixed
using the Repair function on the install CD).

Just now I have fixed one PC which used to simply reboot (no BSOD) and
then report "no OS found" but if one power cycled it, it would start
up OK. Then it would run for maybe an hour before doing the same. That
was a duff Crucial 256GB SSD too - £400 original cost. I put a 500GB
WD hard drive in there (using the same motherboard SATA controller)
and it is fine.

Years ago, on a low-power PC project which shut down its hard drives,
I did some research on what types of disk access Windows does all the
time and how they can be stopped. It turns out that it accesses the
registry c. once per second, and it is a write, not just a read. On
top of that are loads of other accesses, but these tend to die out
after a long period of inactivity, and in an embedded app you can
strip out various processes anyway. But the registry write cannot be
disabled (in fact, on a desktop O/S most things can't be), and even at
~100k writes per day to the same spot, this is going to wear out a
specific FLASH area pretty quickly. They are good for OTOO 10M-100M
writes.
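
Just as a sanity check on those numbers, here is a back-of-envelope
Python sketch; the write rate and endurance range are simply the
figures quoted above, not datasheet values:

# Rough life of a single flash location with NO wear leveling, using the
# figures quoted above (assumptions, not datasheet values).
writes_per_day = 100_000            # ~1 registry write per second
endurance_range = (10e6, 100e6)     # "OTOO 10M-100M writes"

for limit in endurance_range:
    days = limit / writes_per_day
    print(f"{limit:.0e} write limit -> worn out in ~{days:.0f} days (~{days / 365:.1f} years)")

i.e. on the order of 100-1000 days if every one of those writes really
did land on the same cells.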

But don't these SSDs have a microcontroller which is continually
evening out the wear, by remapping the sectors?

Their performance is great, especially if you get one with a 6 Gbit/s
SATA interface and a quality fast controller (Adaptec) to match it.
I've seen 10x speedups in some functions.

I gather that under win7 things are done differently (it supports the
TRIM function, but that's unrelated to wear spreading AIUI) but for
app compatibility reasons, etc, we use XP.

OTOH I have installed 3 SSDs, much smaller at 32GB, in XP laptops, and
all have been 100% fine. Those were made by Samsung. But those don't
get run 24/7.

I have a couple of 256GB SSDs which have been replaced under warranty
but which are basically unusable for windoze (XP). Can they be used
under say Unix (we have a couple of FreeBSD email servers)? Or is
there some winXP driver which can continually remap the logical
sectors?
 
mike

Peter said:
I have installed a number of SSDs in desktops (24/7 operation) and all
failed within a year or so.

[much snip]

Or is there some winXP driver which can continually remap the logical
sectors?

If you just dropped in the drive, you got what you'd expect.
There are numerous webpages on tweaks for SSD drives.

EWF (the Enhanced Write Filter from XP Embedded) might be relevant.
 
Mel Wilson

Peter said:
I have a couple of 256GB SSDs which have been replaced under warranty
but which are basically unusable for windoze (XP). Can they be used
under say Unix (we have a couple of FreeBSD email servers)? Or is
there some winXP driver which can continually remap the logical
sectors?

I did a system that used Debian Linux with small SSDs and we eliminated the
swap partition and set the filesystem not to update inodes on access. We
lost no SSDs in normal operation (lost one to a power-supply accident), but
we'd only logged about a year of use in two prototypes. Good statistics
would want more samples than that.
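
For what it's worth, a minimal /etc/fstab sketch of that sort of setup;
the device name and filesystem type are placeholders, not necessarily
what Mel actually used:

# hypothetical Debian fstab for a small SSD: no swap entry at all,
# and noatime so reads don't turn into inode writes
/dev/sda1   /   ext3   defaults,noatime,errors=remount-ro   0  1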

Mel.
 
HectorZeroni

Just now I have fixed one PC which used to simply reboot (no BSOD) and
then report "no OS found" but if one power cycled it, it would start
up OK. Then it would run for maybe an hour before doing the same. That
was a duff Crucial 256GB SSD too - £400 original cost. I put a 500GB
WD hard drive in there (using the same motherboard SATA controller)
and it is fine.

There is your problem. You are so stupid that you would pay that much
for so little.

I'd bet that it comes down to something stupid, like your mobo being set
with SMART on, or some other ancient controller chip mode.

Oh, and "controller" is a misnomer. The controller is on the hard
drive. "IDE" means that the controller is on the drive. The mobo chip is
no more than an I/O chip, NOT a controller.

All that proves is that you were around back when there WAS a separate
controller, and you retained the moniker even though the facts changed.
Points to a casual attitude toward technical details.

SATA is slightly different, but not much. Still tertiary to the PCI
bus though.
 
HectorZeroni

and even at
~100k writes per day to the same spot, this is going to wear out a
specific FLASH area pretty quick. They are good for OTOO 10M-100M
writes.

It would not EVER be "to the same spot".

You need to figure out how files get written, how the volume gets
managed, how deleted files and the space they occupied get managed,
and finally, how file edits get written to a file.

What you should have done is buy a Seagate hybrid drive. They are
500GB (now 750), with 4GB of flash integrated into them.

They give flash-like performance with HD-like storage capacity and
reliability. The 500/4 is nice, but I will be getting the 750/8 soon.
 
HectorZeroni

OTOH I have installed 3 SSDs, much smaller at 32GB, in XP laptops, and
all have been 100% fine. Those were made by Samsung. But those don't
get run 24/7.

I have a couple of 256GB SSDs which have been replaced under warranty
but which are basically unusable for windoze (XP). Can they be used
under say Unix (we have a couple of FreeBSD email servers)? Or is
there some winXP driver which can continually remap the logical
sectors?

Get a true RAID array up, and fill it with nine of those fuckers, and
sector map them out; go RAID 6 and even when up to two fail, the data is
still recoverable. You can also schedule change-outs to keep things in
high rel.

A RAID 5 array with actual hard drives would probably perform faster,
and would certainly be years more reliable.
 
Don Y

Hi Peter,

I have installed a number of SSDs in desktops (24/7 operation) and all
failed within a year or so.

Note that it's not 24/7(/365) that kills the drive but, rather, the
amount of data *written* to the drive, in total. For a reasonably
high traffic, COTS (i.e., not designed with SSDs in mind) server
application, 2-3 years is probably a *high* number!

[much snip]

Years ago, on a low power PC project which shut down its hard drives,
I did some research on what types of disk access windows does all the
time and how they can be stopped. It turns out that it accesses the
registry c. once per second, and it is a write, not just a read. On
top of that are loads of other accesses, but these tend to die out
after a long period of inactivity, and in an embedded app you can
strip out various processes anyway. But the registry write cannot be
disabled (in fact on a desktop O/S most things can't be) and even at
~100k writes per day to the same spot, this is going to wear out a
specific FLASH area pretty quick. They are good for OTOO 10M-100M
writes.

Understanding what's going on under the hood of an SSD is usually
harder than for an equivalent (magnetic) "hard disk".

Writing to "the same spot" rarely happens inside the SSD. The internal
controller tries to shift the writes around to provide some degree of
wear leveling. Even writing the same "sector" (viewed from the disk
API) can result in totally different memory cells being accessed.

(to prove this, you could write a little loop that repeatedly rewrites
the same sector. If not for the wear leveling, you'd "burn a hole"
in the disk in < 1 hour!)
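
Something like this rough Python sketch of that loop; the filename is a
placeholder, and it writes to an ordinary file, so whether the *same*
physical cells actually get hit is precisely what the wear-leveling
controller hides from you:

import os
import time

TESTFILE = "rewrite_test.bin"        # hypothetical scratch file on the SSD under test

# Pre-allocate one 512-byte "sector".
with open(TESTFILE, "wb") as f:
    f.write(b"\x00" * 512)

count = 0
deadline = time.time() + 60          # hammer the same logical sector for a minute
with open(TESTFILE, "r+b") as f:
    while time.time() < deadline:
        f.seek(0)                              # always the same logical offset
        f.write(bytes([count & 0xFF]) * 512)   # change the data so it's a real write
        f.flush()
        os.fsync(f.fileno())                   # push it past the OS cache to the drive
        count += 1

print(f"{count} rewrites of the same 512-byte region in 60 s")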

Ideally, the total number of cell rewrites could be used to distribute
wear *evenly* around the medium. I.e., even parts of the disk that
are NOT being changed would deliberately be "relocated" to other parts
of the device so that the "low rewrite history" of those particular
cells could be made available to data that *is* being rewritten
often.

[Think about it: the portion of the medium that holds the executables
for applications are rarely erased or rewritten. Any memory cells
holding those bytes see their number of rewrites *frozen* once the
application is "written" to them. By contrast, "empty" parts of the
disk are rewritten more often as files are created and modified there.
If the "static" parts of the media holding the application executables
can be periodically "moved" to other parts of the medium that have
encountered lots of rewrites, then the relatively unused portion of
the medium "under" those static areas can be exploited... "fresh meat"!]

Typically, wear leveling is confined to those parts of the medium that
are "free"/available. So, one good predictor of SSD life is "amount
of UNUSED space" coupled with "frequency of writes". Note that vendors
can cheat and make their performance data look better by packaging a
larger drive in a deliberately "derated" specification. E.g., putting
300G of FLASH in a 250G drive and NOT LETTING YOU SEE the extra 50G!
(but the wear-leveling controller inside *does* see it -- and EXPLOITS
it!)

The big culprit in SSDs is the fact that writes have to modify an
entire flash page. So, while your application is dealing with 512B
sectors... and the OS is mapping those into *clusters*... the SSD
is mapping *those* into flash pages.

Imagine an application that does small, frequent updates. E.g., a
DBMS manipulating a set of tables. The application goes through and
naively updates some attribute (field) of some number of records
(rows) in those tables. The DBMS wants this flushed to the disk
so that the data is recoverable in an outage/crash.

So, the disk sees lots of little writes. Often, very close together
"physically" -- sector 1, sector3, sector7, sector2, etc. -- depending
on the size of the objects being updated, width of the rows, etc.

But, the controller in the SSD sees:

  sector 1: OK, that's in flash page 2884. Let's copy that page's
            contents, update it per the new incoming data to be written
            to sector 1, and store it in page 7733.
  sector 3: OK, that's in page 7733 (right next to sector 1!). Let's
            copy that page's contents, update it per the new incoming
            data to be written to sector 3, and store it in page 8224.
  sector 7: ....

I.e., the flash page size is so large that it allows data written to
a large number of individual sectors to cause a page update. So,
even if the SSD works to spread those physical writes around the
medium, it can end up doing one for each sector write (worst case)
that you initiate!
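
A toy Python model of that worst case, with the page size and write
pattern made up purely for illustration: each small, individually
flushed sector write costs a whole page copy, while coalescing writes
to the same page pays only once per page touched:

import random

SECTOR = 512
PAGE = 16 * 1024                      # assumed flash page size, for illustration only
SECTORS_PER_PAGE = PAGE // SECTOR

random.seed(1)
# 1000 small writes scattered over a few "adjacent" pages, DBMS-style.
writes = [random.randrange(0, 8 * SECTORS_PER_PAGE) for _ in range(1000)]

naive_copies = len(writes)                                       # one page copy per flushed sector write
coalesced_copies = len({s // SECTORS_PER_PAGE for s in writes})  # one copy per page actually touched

host_bytes = len(writes) * SECTOR
print(f"host wrote          {host_bytes // 1024} KiB")
print(f"naive page copies   {naive_copies} -> {naive_copies * PAGE // 1024} KiB to flash "
      f"(~{naive_copies * PAGE // host_bytes}x amplification)")
print(f"coalesced copies    {coalesced_copies} -> {coalesced_copies * PAGE // 1024} KiB to flash")
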
But don't these SSDs have a microcontroller which is continually
evening out the wear, by remapping the sectors?

Yes, sometimes even two levels of controller inside the SSD. But,
see above.
Their performance is great, especially if you get one with a 6gbit/sec
SATA interface and a quality fast controller (Adaptec) to match that.
I've seen 10x speedups in some functions.

I gather that under win7 things are done differently (it supports the
TRIM function, but that's unrelated to wear spreading AIUI) but for
app compatibility reasons, etc, we use XP.

OTOH I have installed 3 SSDs, much smaller at 32GB, in XP laptops, and
all have been 100% fine. Those were made by Samsung. But those don't
get run 24/7.

I have a couple of 256GB SSDs which have been replaced under warranty
but which are basically unusable for windoze (XP). Can they be used
under say Unix (we have a couple of FreeBSD email servers)? Or is
there some winXP driver which can continually remap the logical
sectors?

The problem with SSDs is that the models used by OS and application
designers haven't been created with them in mind. They treat "disk
space" as homogeneous, uniform-access, etc. The idea that access leads
to wear is not part of the programmer's model.

Eunices tend to be a bit more predictable (i.e., don't require a
Redmond mailing address to understand what's happening, in detail,
under the hood) in how they use disk space. E.g., you can easily get
the root partition to reside on Read-Only media (i.e., *no* writes!)
This could include most of the applications and libraries as well
(/usr, /stand, /lib, etc.). The /var filesystem tends to
accumulate "computer/OS generated files" -- logs, queues, etc. You
can configure "logging" to largely reduce or eliminate that sort
of activity on an application by application basis. In addition to
restraining the *rate* of logfile growth, you can limit the resources
set aside for those individual log files.

The /tmp filesystem is where well-behaved applications
should create temporary files -- files that don't need to survive
a power outage. Other apps might allow you to create files
*anywhere* (e.g., a text editor can TRY to create a file anywhere
in the namespace) but if you don't have "human" users, these can
usually be constrained to more suitable parts of the filesystem.

So, for example, to deploy an SSD on a UN*X box, I would put / (root)
on "ROM" (a CF card or some small PROTECTED corner of the SSD)
as it will "never" be changing. Configuration parameters in /etc
would reside in a BBRAM (they could also be on the SSD, but BBRAM lets
them be changed *and* retained if the SSD goes south; /etc can
be very small, and parts of it that *don't* change can be symlinked
to files on the "ROM" file system). Temporary files would sit
in RAM, ideally. Very fast, infinite rewrites, automatically
discarded when the system powers down/reboots, etc.

So, /var holds the things that you *want* to persist (/var/mail,
/var/spool, /var/log) and *only* those things. Hopefully
giving you more average free-space than you would otherwise
have if you tried cramming everything on it!

[I haven't run FBSD in many years so they may have tweaked where
things reside wrt specific filesystems. And Linux has its own
idea as to where things *should* reside. So, my comments are
only approximate -- they *should* apply to NetBSD...]
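
As a purely hypothetical illustration of that layout, an fstab along
these lines is the sort of thing being described (device names, sizes
and filesystem types are placeholders):

# sketch only -- adjust devices/types to the actual box
/dev/ad0s1a   /      ufs     ro,noatime      1  1   # OS + applications: never written
/dev/ad0s1d   /var   ufs     rw,noatime      2  2   # mail/spool/logs: the only flash writes
tmpfs         /tmp   tmpfs   rw,mode=1777    0  0   # scratch files live in RAM, not flash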
 
Don Y

Hi Peter,
OTOH I have installed 3 SSDs, much smaller at 32GB, in XP laptops, and
all have been 100% fine. Those were made by Samsung. But those don't
get run 24/7.

I have a couple of 256GB SSDs which have been replaced under warranty
but which are basically unusable for windoze (XP). Can they be used
under say Unix (we have a couple of FreeBSD email servers)? Or is
there some winXP driver which can continually remap the logical
sectors?

It just occurs to me... I have XPembedded running on a couple of SBC's
WITHOUT magnetic disks. I should dump their registries and see if
there are changes to <mumble> that affect the actions of daemon tasks
(like the registry update)!!
 
Vladimir Ivanov

~100k writes per day to the same spot, this is going to wear out a
specific FLASH area pretty quick. They are good for OTOO 10M-100M
writes.

A modern MLC NAND block is rated at about 1-3K program/erase cycles; SLC
should be good for about 50K. Assuming (fingers crossed) that the SSD
controller does full static wear leveling, you can do the simple math on
expected endurance, with an example block size of 512KB, to get a feel for it.
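
That simple math, sketched in Python with every number treated as an
example rather than a spec (the write-amplification factor in
particular is a pure guess):

capacity_gb = 256                   # example drive size
block_kb = 512                      # example erase-block size from above
pe_cycles = 3000                    # optimistic end of the 1-3K MLC range
write_amplification = 10            # pure guess; real values vary wildly

blocks = capacity_gb * 1024 * 1024 // block_kb
host_writes_tb = blocks * block_kb * pe_cycles / write_amplification / 1024**3

print(f"{blocks} blocks x {pe_cycles} cycles / WA {write_amplification}"
      f" -> roughly {host_writes_tb:.0f} TB of host writes over the drive's life")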

The SDRAM cache in the SSD offsets this a bit (by deferring the write to
the flash memory), but exactly how much depends on controller policy.
But don't these SSDs have a microcontroller which is continually
evening out the wear, by remapping the sectors?

They have. The controller does a lot, and due to its complexity there can
always be latent bugs. And the NAND reliability is just awful, especially
as the geometry shrinks.

Not that it matters to trendy consumer device manufacturers.
 
HectorZeroni

Now that you mention it, USB memory sticks don't last,


Where are your stats from? I have never had one fail, and I have like
50 of them, and I change file system types and everything. Lots of
thorough use.
but hard drives
seem to never fail.


They are one of the few things still made with mil specs in mind, if not
followed religiously.

I think the semi guys are pushing flash density to
the bleeding edge of reliability.

They are just now selling stacked 750 GB modules, etc. That really
ain't all that big, and the chips use densities which get tested.
You may as well buy a PC with a fast hard drive and mountains of DRAM,
so it has lots of disk cache and doesn't thrash virtual.

Or the OS guys will wise up and make a segmented system which puts logs
and other constantly modified files onto magnetic storage, whenever
available.

The Seagate hybrids already address these issues with their intelligent
management of what ends up on the flash half of the hybrid.
 
Don Y

Now that you mention it, USB memory sticks don't last, but hard drives
seem to never fail. I think the semi guys are pushing flash density to
the bleeding edge of reliability.

As with anything, as geometries get smaller, you have to worry more
about reliability and durability. E.g., SLC flash
is considerably more reliable than MLC, NOR more so than NAND,
etc.
You may as well buy a PC with a fast hard drive and mountains of DRAM,
so it has lots of disk cache and doesn't thrash virtual.

This doesn't help you if the application intentionally flushes
the disk cache (e.g., as a DBMS would do).

The problem is the application and OS think of memory as having
homogeneous characteristics -- access time, durability, persistence,
etc. New storage technologies violate many of those assumptions
in trying to mimic "older" technologies.

To *best* exploit a storage technology (e.g., you wouldn't use
a SASD someplace intended for a DASD!), you have to consider the
capabilities and characteristics of the device and match them
with the needs and expectations of the application.

In the MS world, disks are treated as *completely* homogenous.
The file system assumes EVERYTHING can be writable (actually,
there is a little lie in this) and "read only" is just an
artificial construct layered onto it by the OS.

Other OS's (and applications) take a more disciplined approach
to "what goes where" so that a knowledgeable implementor can
deploy a system *tuned* to the hardware being used. How finely
you tune it is a function of how much you want to *push* the
implementation (e.g., do you look at individual files and move
them to R/O vs. R/W filesystems based on expected usage? Or,
just deal with whole filesystems and suffer the inefficiencies
that come with it?)

E.g., while you could store your *code* in RAM (in a product),
chances are, you *wouldn't* -- due to cost, volatility, etc.
OTOH, you might *load* the code into RAM for execution (to
make better use of its performance aspects while relying on
a nonvolatile store for persistence)
 
Peter

John Larkin said:
Now that you mention it, USB memory sticks don't last, but hard drives
seem to never fail. I think the semi guys are pushing flash density to
the bleeding edge of reliability.

You may as well buy a PC with a fast hard drive and mountains of DRAM,
so it has lots of disk cache and doesn't thrash virtual.
I have done some googling on this topic and it is quite a nasty
surprise to learn how poor a life flash drives are *expected* to have.
For example (can't find the URL right now), the Intel X25 SSDs can have
only about 30TB written to the drive in their whole life. With perfect
wear spreading, they reckon this will push every part of the drive to
the flash write limit in something like 5 years of average desktop
computer usage.

30TB is not all that much, over years, especially with swapfile usage.

And if the wear spreading is working less than optimally (firmware
bugs) then all bets are off. On the SSD forums there is a ton of stuff
about different SSD firmware versions doing different things. I have
to wonder who actually has a LIFE after worrying about the firmware on
a "hard drive" :) You don't worry about firmware updates on a cooker,
do you?

So I am not surprised my SSDs are knackered in c. 1 year while hard
drives seem to go on forever, sometimes making a funny noise after ~5
years (on a 24/7 email/web server), at which point they can be changed.
 
Peter

HectorZeroni said:
Where are your stats from? I have never had one fail, and I have like
50 of them, and I change file system types, and everything.Lots of
thorough use.

You probably don't write terabytes to them though. Also, you are
extremely unlikely to ever go anywhere near even a very low write
cycle limit (1000+) with a removable drive. In most usage, one does
just ONE write to the device in each use.

Hence my other post re Intel SSD write limits. They are surprisingly
low.
They are one of the few things still made with mil specs in mind, if not
followed religiously.



They are just now selling stacked 750 GB modules, etc. That really
ain't all that big, and the chips use densities which get tested.


Or the OS guys will wise up and make a segmented system which puts logs
and other constantly modified files onto magnetic storage, whenever
available.

The Seagate hybrids already address these issues with their intelligent
management of what ends up on the flash half of the hybrid.

OK, but why does anybody use an SSD?

I used them to make a hopefully silent PC, or one drawing little
power. Or, in portable apps, to make a tablet computer work above
13000ft in an unpressurised aircraft
http://www.peter2000.co.uk/ls800/index.html

Combining a HD with an SSD defeats both those things.

In actual usage, I find, the SSD outperforms a HD very noticeably in
very narrow/specific apps only, which tend to be

- a complex app, comprising hundreds of files, loading up, perhaps
involving pulling thousands of records from a complicated database
into RAM

- any app doing masses of random database reads

Anything involving writing is usually slower, and anything involving
sequential reading is no quicker.
 
Don Y

Hi Peter,

I have done some googling on this topic and it is quite a nasty
suprise to learn how poor a life flash drives are *expected* to have.
For example (can't find the URL right now) the Intel X25 SSDs can have
only about 30TB written to the drive in its whole life. With perfect
wear spreading, this will push every part of the drive to the flash
write limit in something like 5 years (they reckon) of average desktop
computer usage (they reckon).

But, in reality, you have a *smaller* effective disk size (unless you
are only using the ENTIRE disk as "temporary storage") and a
correspondingly LOWER total write capacity. (i.e., that 30TB can
turn into 3TB if 90% of the drive is "already spoken for")
30TB is not all that much, over years, especially with swapfile usage.

And if the wear spreading is working less than optimally (firmware
bugs) then all bets are off. On the SSD forums there is a ton of stuff
about different SSD firmware versions doing different things. I have
to wonder who actually has a LIFE after worrying about the firmware on
a "hard drive" :) You don't worry about firmware updates on a cooker,
do you?

So I am not suprised my SSDs are knackered in c. 1 year while hard
drives seem to go on for ever, sometimes making a funny noise after ~5
years (on a 24/7 email/web server) at which point they can be changed.

An SSD really only makes sense as a big ROM, in practice. Put
the OS on the SSD and find something else for "writeable store".
Or, arrange for the SSD to be "considerably larger" (effectively)
than it actually would have been to improve its durability.
E.g., if you need 10GB of *writeable* space, use a 100GB drive,
instead of trying to use 90G for "readable" space, etc.
 
HectorZeroni

OK, but why does anybody use an SSD?

Because electronics are way faster than physical, spinning media with
hard latencies built in to every read and write.
 
HectorZeroni

Combining a HD with an SSD defeats both those things.

Combining the two gives you instantaneous access to the files in that
segment, and HUGE, RELIABLE storage capacity for the greater mass of your
data.
 
Don Y

Hi Peter,

You probably don't write terabytes to them though. Also you are
extremely unlikely to ever go anywhere near even a very low write
cycle limit (1000+) with a removable drive. In most usage, one does
just ONE write to the device, in each use.

So my other post re Intel SSD write limits. They are very suprisingly
low.

The SSD hopes to leverage that low rewrite limit (thousands of
cycles PER CELL) over a large amount of UNUSED CAPACITY -- along
with discouraging writes.
OK, but why does anybody use an SSD?

Why do you use FLASH in your designs, Peter? Why not ROM? Or RAM?
Why do you use RAM instead of FLASH (for some things)? And ROM
instead of RAM? etc. Look at the SSD in the same way that you look at
the capabilities and characteristics of other "storage media".

<grin>

SSDs can be very effective in an application that is designed with
the characteristics of the SSD in mind. Just like FLASH can be
better than BBRAM or EEPROM in *those* types of applications.
Imagine going back to the days of UV erasing an EPROM each time
you wanted to update firmware... :<

With those considerations in mind, imagine how you would design
a product with GB's of persistent storage that had to survive
being *dropped* on a construction site. Or, exposed to large
temperature/pressure differences.
I used them to make a hopefully silent PC, or one drawing little
power. Or, in portable apps, to make a tablet computer work above
13000ft in an unpressurised aircraft
http://www.peter2000.co.uk/ls800/index.html

Combining a HD with an SSD defeats both those things.
Exactly.

In actual usage, I find, the SSD outperforms a HD very noticeably in
very narrow/specific apps only, which tend to be

- a complex app, comprising of hundreds of files, loading up, perhaps
involving loading up thousands of records from a complicated database
into RAM

- any app doing masses of random database reads

Any time you are NOT looking for sequential data accesses, the
SSD should win. Access time for a SSD is a lot closer to constant
than for a disk drive (for unconstrained accesses). The disk can
artificially boost its performance by exploiting read and write
cache. But, that doesn't work for completely unconstrained
accesses (e.g., a DBMS's accesses could be all over the platter
based on how the DB is being accessed).
Anything involving writing is usually slower, and anything involving
sequential reading is no quicker.

This is driven by the sizes of the onboard cache, in the disk case
(assuming write caching is enabled) and the RAM-cache in the SSD.
In each case, the actual controllers embedded into those devices
can further constrain/enhance access. E.g., possibly allowing
multiple concurrent writes in the SSD case -- depending on the
circuit topology.
 
StickThatInYourPipeAndSmokeIt

Hi Peter,



The SSD hopes to leverage that low rewrite limit (thousands of
cycles PER CELL) over a large amount of UNUSED CAPACITY -- along
with discouraging writes.


Why do you use FLASH in your designs, Peter? Why not ROM? Or RAM?
Why do you use RAM instead of FLASH (for some things)? And ROM
instead of RAM? etc. Look at the SSD in the same way that you look at
the capabilities and characteristics of other "storage media".

<grin>

We use them because they can be very small, and can take mechanical
shock well. Here are a bunch smaller than laptop drives.

Put that in your design and smoke it.

http://tinyurl.com/74fvt48
 
UltimatePatriot

Hi Peter,

On 2/25/2012 3:39 PM, Peter wrote:

much snip

There is not one thing you state here that I disagree with.
Any time you are NOT looking for sequential data accesses, the
SSD should win. Access time for a SSD is a lot closer to constant
than for a disk drive (for unconstrained accesses). The disk can
artificially boost its performance by exploiting read and write
cache. But, that doesn't work for completely unconstrained
accesses (e.g., a DBMS's accesses could be all over the platter
based on how the DB is being accessed).


This is driven by the sizes of the onboard cache, in the disk case
(assuming write caching is enabled) and the RAM-cache in the SSD.
In each case, the actual controllers embedded into those devices
can further constrain/enhance access. E.g., possibly allowing
multiple concurrent writes in the SSD case -- depending on the
circuit topology.

This man actually knows exactly what he is talking about and what is a
fact of reality in this industry.

This guy is one to rely on for good info, folks.
 
Nico Coesel

A decent SSD should have wear leveling, but if you don't disable the
virtual memory, the SSD will wear out quickly.
Now that you mention it, USB memory sticks don't last, but hard drives
seem to never fail. I think the semi guys are pushing flash density to
the bleeding edge of reliability.

You may as well buy a PC with a fast hard drive and mountains of DRAM,
so it has lots of disk cache and doesn't thrash virtual.

Unless you disable the virtual memory, XP swaps everything it can to
the hard disk to have as much unused memory as possible. It's a real
nuisance. Just install 2GB of memory and disable swap to get maximum
performance. The performance gain is huge.
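
For anyone curious which knob that advice turns, XP keeps the pagefile
configuration in the registry; here is a read-only peek in Python
(changing it is normally done through System Properties -> Advanced ->
Performance -> Virtual Memory, not by hand):

import winreg  # Windows only

KEY = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
    paging_files, _ = winreg.QueryValueEx(key, "PagingFiles")
    # An empty value here means no pagefile at all, i.e. swap disabled.
    print("PagingFiles =", paging_files)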
 