Maker Pro

scatter gather DMA


prav

Jan 1, 1970
0
Hi all,

I wanted to know how scatter-gather DMA is different from normal DMA
operations. I am not finding any good resources on this; suggestions for
good links on scatter-gather DMA would be appreciated.

rgds,
prav
 

John Larkin

Jan 1, 1970
0
Hi all,

I wanted to know how scatter-gather DMA is different from normal DMA
operations. I am not finding any good resources on this; suggestions for
good links on scatter-gather DMA would be appreciated.

rgds,
prav

Normally, DMA transfers data from, say, a disk to/from memory at
sequential physical memory addresses. But if a program runs in virtual
memory, the program's logical, contiguous memory addresses are
scattered in physical memory, in sort of a random checkerboard of
memory pages. If the disk controller has gather/scatter hardware, it
can do big block transfers to/from these physically scattered chunks
of data. I guess the operating system has to tell it where the pages
are, or maybe the controller hardware can access the computer's page
tables directly... I'm not sure about that. Anyhow, it can hop-skip
around the address space during a single block transfer.
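
To make that concrete, here's a rough sketch in C of the kind of descriptor
chain a scatter-gather controller walks. The struct layout, field names, and
the build_chain() helper are made up for illustration, not any real device's
format:

#include <stdint.h>
#include <stddef.h>

/* Hypothetical scatter-gather descriptor: one entry per physically
   contiguous chunk (typically one memory page). The controller reads
   the chain and transfers each chunk in turn during one block
   operation. */
struct sg_desc {
    uint64_t phys_addr;    /* physical address of this chunk           */
    uint32_t length;       /* bytes to transfer to/from this chunk     */
    uint32_t flags;        /* e.g. a "last descriptor" marker          */
    uint64_t next_desc;    /* physical address of the next descriptor,
                              or 0 at the end of the chain             */
};

/* The driver, which knows where the pages landed, builds one descriptor
   per physical page backing the logically contiguous buffer and hands
   the address of the first one to the controller. (next_desc is filled
   with an ordinary pointer value just to keep the sketch short; a real
   driver would use the descriptor's physical address.) */
static void build_chain(struct sg_desc *d, const uint64_t *page_addr,
                        const uint32_t *page_len, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        d[i].phys_addr = page_addr[i];
        d[i].length    = page_len[i];
        d[i].flags     = (i == n - 1) ? 1u : 0u;   /* mark last entry */
        d[i].next_desc = (i == n - 1) ? 0 : (uint64_t)(uintptr_t)&d[i + 1];
    }
}

One start command then covers the whole logical buffer, even though it is
spread across many physical pages.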

A programmer remarked to me the other day that the invention of C was
the worst thing that ever happened to computing. I'd vote for virtual
memory as the second worst.

John
 

Rich Grise

Jan 1, 1970
0
John Larkin said:
A programmer remarked to me the other day that the invention of C was
the worst thing that ever happened to computing. I'd vote for virtual
memory as the second worst.

He must not have been much of a programmer. What's his basis for such
a wacko claim?

Thanks,
Rich

(PS - if I had to pick the worst thing, it'd be "C plus plus.")
 

Bob Masta

Jan 1, 1970
0
On Fri, 18 Jun 2004 19:26:16 -0700, John Larkin wrote:

A programmer remarked to me the other day that the invention of C was
the worst thing that ever happened to computing. I'd vote for virtual
memory as the second worst.

I can't imagine a modern multi-tasking OS without virtual
memory. In Windows, every application is written to be
loaded at the same fixed address. If you didn't have
virtual memory, everything would have to be relocated
when loading, in order to fit into whatever memory was
available. Windows programs are already ridiculously
huge (due to the C/C++ nonsense); imagine them with
gigantic relocation tables!



Bob Masta
dqatechATdaqartaDOTcom

D A Q A R T A
Data AcQuisition And Real-Time Analysis
www.daqarta.com
 

John Larkin

Jan 1, 1970
0
On Fri, 18 Jun 2004 19:26:16 -0700, John Larkin wrote:



I can't imagine a modern multi-tasking OS without virtual
memory. In Windows, every application is written to be
loaded at the same fixed address. If you didn't have
virtual memory, everything would have to be relocated
when loading, in order to fit into whatever memory was
available. Windows programs are already ridiculously
huge (due to the C/C++ nonsense); imagine them with
gigantic relocation tables!

Virtual memory isn't the cure for code bloat, it's the cause.

Classic hardware memory management can handle the relocation thing
just fine, with much less overhead than a jillion page table entries.
Each task needs at minimum four relocation variables: I-space offset
and size, and D-space offset and size; poke them into the MMU, and the
hardware does the rest. If Windows had classic memory management with
I/D space separation, buffer overrun vulnerabilities would be
impossible.
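
Roughly, the hardware only has to do this on every reference - a made-up
sketch of base-and-bounds translation, not any particular machine's MMU:

#include <stdint.h>
#include <stdbool.h>

/* Hypothetical classic MMU state: four relocation values per task. */
struct mmu_regs {
    uint32_t i_base, i_size;   /* instruction space: offset and size */
    uint32_t d_base, d_size;   /* data space: offset and size        */
};

/* What the hardware does on every data reference: add the base and
   trap if the logical address is outside the task's bound. Instruction
   fetches go through i_base/i_size the same way, so code can never be
   touched as data. */
static bool translate_data(const struct mmu_regs *m,
                           uint32_t logical, uint32_t *physical)
{
    if (logical >= m->d_size)
        return false;               /* bounds fault: trap to the OS */
    *physical = m->d_base + logical;
    return true;
}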

Since the logical address space of Windows is limited to 31 bits, and
2 gig of ram is cheap, all virtual memory adds is bloat.

When IBM announced S/360, they suggested users would be running v/r
memory ratios of 200. In practice, the user base averaged 1.2.


John
 

John Larkin

Jan 1, 1970
0
He must not have been much of a programmer. What's his basis for such
a wacko claim?

He's a consultant. Mostly what he does is untangle corporate IT
structures that are fatally dysfunctional. He suggests that a
Cobol-like structure is optimum for application programming, for
reasons I can't completely follow, but the essence is that Cobol
programmers can only solve the applications problem at hand (which, in
truth, is often deadly boring) and not get diverted by tricky features
that they simply can't access. I'll have to talk to him some more
about this, but I agree: most programmers really want to play with the
system, and not solve that customer's dull problem, and C is the ideal
tool for that approach.
(PS - if I had to pick the worst thing, it'd be "C plus plus.")

No argument there; an even trickier toy.

John
 

Tim Auton

Jan 1, 1970
0
John Larkin said:
Virtual memory isn't the cure for code bloat, it's the cause. [snip]
Since the logical address space of Windows is limited to 31 bits, and
2 gig of ram is cheap, all virtual memory adds is bloat.

When IBM announced S/360, they suggested users would be running v/r
memory ratios of 200. In practice, the user base averaged 1.2.

Now that is ridiculous. I like virtual memory. I don't like using it,
but I like knowing it's there. For example, say I fancy a quick game
of Enemy Territory in the middle of a Photoshop session. Photoshop and
all its crap gets dumped to disk while I shoot some people, and it's
all still there when I get back. I can handle a couple of seconds of
wait while it swaps in and out. As long as what you are currently
doing fits into RAM (ie you're not swapping all the time) it's a good
thing.

2GB of RAM costs £500. That's not cheap in my book. You can get an
entire PC for that.


Tim
 

John Larkin

Jan 1, 1970
0
John Larkin said:
Virtual memory isn't the cure for code bloat, it's the cause. [snip]
Since the logical address space of Windows is limited to 31 bits, and
2 gig of ram is cheap, all virtual memory adds is bloat.

When IBM announced S/360, they suggested users would be running v/r
memory ratios of 200. In practice, the user base averaged 1.2.

Now that is ridiculous. I like virtual memory. I don't like using it,
but I like knowing it's there. For example, say I fancy a quick game
of Enemy Territory in the middle of a Photoshop session. Photoshop and
all its crap gets dumped to disk while I shoot some people, and it's
all still there when I get back. I can handle a couple of seconds of
wait while it swaps in and out. As long as what you are currently
doing fits into RAM (ie you're not swapping all the time) it's a good
thing.

You don't need virtual memory to swap multiple tasks. With non-virtual
MMU hardware, the only limitation is that no single task exceed the
available physical memory. Of course, Windows is such a pig that the
OS alone probably couldn't fit into real memory.

John
 

Robert C Monsen

Jan 1, 1970
0
John Larkin said:
John Larkin said:
Virtual memory isn't the cure for code bloat, it's the cause. [snip]
Since the logical address space of Windows is limited to 31 bits, and
2 gig of ram is cheap, all virtual memory adds is bloat.

When IBM announced S/360, they suggested users would be running v/r
memory ratios of 200. In practice, the user base averaged 1.2.

Now that is ridiculous. I like virtual memory. I don't like using it,
but I like knowing it's there. For example, say I fancy a quick game
of Enemy Territory in the middle of a Photoshop session. Photoshop and
all its crap gets dumped to disk while I shoot some people, and it's
all still there when I get back. I can handle a couple of seconds of
wait while it swaps in and out. As long as what you are currently
doing fits into RAM (ie you're not swapping all the time) it's a good
thing.

You don't need virtual memory to swap multiple tasks. With non-virtual
MMU hardware, the only limitation is that no single task exceed the
available physical memory. Of course, Windows is such a pig that the
OS alone probably couldn't fit into real memory.

John

Old Unix systems used to do this. I worked with a minicomputer that
was manufactured by BBN in the early 80s, which ran a variant of UNIX
that didn't implement demand paging, but instead swapped entire
processes out to disk. Processes had to be contiguous in memory, as I
recall... It was slow, and had silly limitations like a 2M address
space (which was, sadly, the most its 20-bit words could address!)
With demand paging, your working set for all your processes is
generally smaller than the available physical memory, so swapping is
kept to a minimum. It also helps with processor caching strategies,
since most processors tie the MMU to the L2 cache.

As an aside, that old BBN machine was called a 'C Machine', because
the microcoded machine language was optimized for C, and did things
like hardware stack frames, etc. The hardware was designed to replace
the original Honeywell 316 processors the ARPANET switching code ran
on, and so was able to emulate the 316's instruction set in microcode.
Since the switching code wasn't written in C, it was cheaper to build
new hardware to emulate the system it formerly ran on than to rewrite
the algorithms.

VM is a pain for folks who want to diddle the metal, like you, but it's
great for most application programmers. No more stray pointers
whacking hardware registers by mistake...

Regards,
Bob Monsen
 

John Larkin

Jan 1, 1970
0
John Larkin said:
[snip]
Virtual memory isn't the cure for code bloat, it's the cause.
[snip]
Since the logical address space of Windows is limited to 31 bits, and
2 gig of ram is cheap, all virtual memory adds is bloat.

When IBM announced S/360, they suggested users would be running v/r
memory ratios of 200. In practice, the user base averaged 1.2.

Now that is ridiculous. I like virtual memory. I don't like using it,
but I like knowing it's there. For example, say I fancy a quick game
of Enemy Territory in the middle of a Photoshop session. Photoshop and
all its crap gets dumped to disk while I shoot some people, and it's
all still there when I get back. I can handle a couple of seconds of
wait while it swaps in and out. As long as what you are currently
doing fits into RAM (ie you're not swapping all the time) it's a good
thing.

You don't need virtual memory to swap multiple tasks. With non-virtual
MMU hardware, the only limitation is that no single task exceed the
available physical memory. Of course, Windows is such a pig that the
OS alone probably couldn't fit into real memory.

John

Old Unix systems used to do this. I worked with a minicomputer that
was manufactured by BBN in the early 80s, which ran a variant of UNIX
that didn't implement demand paging, but instead swapped entire
processes out to disk.


I used to run the RSTS multiuser timesharing system on a PDP-11 with
512K bytes of RAM. It supported ten simultaneous users and ran for months
at a time; only a power failure would take it down. It kept each task's
I-space and D-space separate, but each contiguous, except that all
jobs had a common read-only runtime system (i.e., the user interface)
mapped into their space. RSTS supported several optional RTSs, in effect
virtual operating systems. It swapped out only as much as it needed to
schedule tasks, so it didn't always swap out all of any given task.
Yes, it was slow, maybe a tenth as fast as Windows... with maybe
1/2000 the processing power and 1/1000 the memory of a typical PC.

Windows was born in ignorance and got kluged from there.

John
 

Rich Grise

Jan 1, 1970
0
Bob Masta said:
On Fri, 18 Jun 2004 19:26:16 -0700, John Larkin wrote:



I can't imagine a modern multi-tasking OS without virtual
memory. In Windows, every application is written to be
loaded at the same fixed address. If you didn't have
virtual memory, everything would have to be relocated
when loading, in order to fit into whatever memory was
available. Windows programs are already ridiculously
huge (due to the C/C++ nonsense); imagine them with
gigantic relocation tables!

You seem to be confusing "Virtual Memory" with "Segmented
Memory." A program doesn't care where its base address is
because the loader sets the segment registers. Virtual
memory has nothing to do with that. Virtual memory is
memory that's paged out to disk to make it look to the
app like there's more RAM than there really is. Where
in that RAM you're actually accessing (and who owns it) is
an entirely different layer of the operation.

Cheers!
Rich
 

Rich Grise

Jan 1, 1970
0
John Larkin said:
He's a consultant. Mostly what he does is untangle corporate IT
structures that are fatally dysfunctional. He suggests that a
Cobol-like structure is optimum for application programming, for
reasons I can't completely follow, but the essence is that Cobol
programmers can only solve the applications problem at hand (which, in
truth, is often deadly boring) and not get diverted by tricky features
that they simply can't access. I'll have to talk to him some more
about this, but I agree: most programmers really want to play with the
system, and not solve that customer's dull problem, and C is the ideal
tool for that approach.

Ah, Cobol! I have a couple of memorable Cobol-related experiences, so to
speak. :) I was a part-time programmer when desktop computers were
just starting out - I was working there when IBM announced the PC.
Anyway, the company's accounting system was in COBOL, and they'd brought
in a contract programmer. I think I hurt her feelings when, looking
over her shoulder, I remarked (pretty much to the whole office), "Why
do I get the feeling that I'm watching somebody build an accounting
system with stone axes and animal skins?" (It was shortly after that
Star Trek episode with Joan Collins and the Guardian of Forever.)

Another time, a gal I knew socially asked me if I'd tutor her in
programming. "Sure!" I says. In this case, the fact that Cobol was
used is mostly a MacGuffin - she just wanted to get in my knickers. ;-)

Cheers!
Rich
 

Robert C Monsen

Jan 1, 1970
0
John Larkin said:
I used to run the RSTS multiuser timesharing system on a PDP-11 with
512K bytes of RAM. It supported ten simultaneous users and ran for months
at a time; only a power failure would take it down. It kept each task's
I-space and D-space separate, but each contiguous, except that all
jobs had a common read-only runtime system (i.e., the user interface)
mapped into their space. RSTS supported several optional RTSs, in effect
virtual operating systems. It swapped out only as much as it needed to
schedule tasks, so it didn't always swap out all of any given task.
Yes, it was slow, maybe a tenth as fast as Windows... with maybe
1/2000 the processing power and 1/1000 the memory of a typical PC.

Windows was born in ignorance and got kluged from there.

RSTS was developed at DEC. Turns out that the main guys who developed
Windows NT were also from DEC, and had worked on VMS. Small world.
 

John Larkin

Jan 1, 1970
0
RSTS was developed at DEC. Turns out that the main guys who developed
Windows NT were also from DEC, and had worked on VMS. Small world.

Right. The guy who created the kernel of NT was one of the authors of
VMS. Nobody at Microsoft knew how to program, so they had to bring in
an expert.

John
 

Rich Grise

Jan 1, 1970
0
John Larkin said:
Right. The guy who created the kernel of NT was one of the authors of
VMS. Nobody at Microsoft knew how to program, so they had to bring in
an expert.

Well, you _could_ say that they knew how to program the 8080, or would
even that be an overstatement?

I sure wish iNtel had come up with something other than the 8086/8088
for 16 bits, though. Didn't Zilog have a 16-bit version of the Z80?
I suppose it makes sense that IBM went with iNtel - the upside down
bytes in Motorola probably frightened the IBM guys. ;-)

Yeah, I heard that that 8-bit data bus let them use existing
peripherals and memory and stuff, so there's an excuse for the 8088,
but why such an incredibly stupid segmentation scheme?

Nowadays, of course, it's moot, I guess.

Does a 64-bit processor have a 2^64 address space?

Is that a comprehensible number?
18,446,744,073,709,551,616 decimal.
lessee: 1 KByte = 1,024
MByte = 1,048,576
GByte = 1,073,741,824

The first hit on a Google search for "unit scale prefixes"
(http://www.google.com/search?hl=en&lr=&ie=UTF-8&q=unit+scale+prefixes)
was
http://www.ex.ac.uk/cimt/dictunit/dictunit.htm#prefixes
where I found this:
yotta [Y]  1 000 000 000 000 000 000 000 000 = 10^24
# 18,446,744,073,709,551,616 ... nope
zetta [Z]  1 000 000 000 000 000 000 000 = 10^21
# 18,446,744,073,709,551,616 ... nope
exa   [E]  1 000 000 000 000 000 000 = 10^18
# 18,446,744,073,709,551,616 That's the one!
peta  [P]  1 000 000 000 000 000 = 10^15
tera  [T]  1 000 000 000 000 = 10^12
giga  [G]  1 000 000 000 (a thousand millions = a billion)
mega  [M]  1 000 000 (a million)
kilo  [k]  1 000 (a thousand)
hecto [h]  100 (a hundred)
deca  [da] 10 (ten)
           1 (one)
deci  [d]  0.1 (a tenth)
centi [c]  0.01 (a hundredth)
milli [m]  0.001 (a thousandth)
micro [µ]  0.000 001 (a millionth)
nano  [n]  0.000 000 001 (a thousand millionth)
pico  [p]  0.000 000 000 001 = 10^-12
femto [f]  0.000 000 000 000 001 = 10^-15
atto  [a]  0.000 000 000 000 000 001 = 10^-18
zepto [z]  0.000 000 000 000 000 000 001 = 10^-21
yocto [y]  0.000 000 000 000 000 000 000 001 = 10^-24

18 ExaBytes - and each ExaByte is a billion GigaBytes. How many times over
could that hold all of Human Knowledge?
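
A quick sanity check of the arithmetic, as a throwaway C snippet:

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* 2^64, computed as a double since it overflows a 64-bit integer */
    double bytes = pow(2.0, 64.0);        /* 18,446,744,073,709,551,616 */
    printf("2^64 bytes = %.0f\n", bytes);
    printf("           = %.1f decimal exabytes (10^18)\n", bytes / 1e18);
    printf("           = %.0f binary exbibytes (2^60)\n", bytes / pow(2.0, 60.0));
    return 0;
}

That prints roughly 18.4 decimal exabytes, or exactly 16 binary exbibytes.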

Google is your friend.

BTW, if there were a big enough RAM to actually hold and address
all currently recorded human knowledge, how big would the index
be?

Daffynitions:
femtosecond: The time it takes to realize you just wandered into
the "wrong kind" of bar in West Hollywood. ;-)

Cheers!
Rich
 

John Larkin

Jan 1, 1970
0
Well, you _could_ say that they knew how to program the 8080, or would
even that be an overstatement?

Well, they bought DOS and, as I heard it, IBM did extensive debugging
and cleanup on that. All the Microsoft-coded Windows versions
(1-2-3-95-98-SE-ME) were dogs.
I sure wish iNtel had come up with something other than the 8086/8088
for 16 bits, though. Didn't Zilog have a 16-bit version of the Z80?
I suppose it makes sense that IBM went with iNtel - the upside down
bytes in Motorola probably frightened the IBM guys. ;-)

What a shame. The 68K architecture (patterned after the PDP-11 and
S/360, sort of) is beautiful.
Yeah, I heard that that 8-bit data bus let them use existing
peripherals and memory and stuff, so there's an excuse for the 8088,
but why such an incredibly stupid segmentation scheme?

Ironic: Intel, the home of Moore's Law, had so little faith in the
progression of RAM density that they slid the segment registers FOUR
bits to the left!
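
For anyone who hasn't suffered through it, the real-mode address formation is
just this (a toy sketch; the example segment:offset pairs are arbitrary):

#include <stdio.h>
#include <stdint.h>

/* 8086 real-mode addressing: the 16-bit segment register is shifted
   left four bits and added to the 16-bit offset, giving only a 20-bit
   (1 MB) physical address space. */
static uint32_t phys(uint16_t seg, uint16_t off)
{
    return ((uint32_t)seg << 4) + off;
}

int main(void)
{
    printf("0xB800:0x0000 -> 0x%05X\n", phys(0xB800, 0x0000)); /* 0xB8000 */
    printf("0x1234:0x5678 -> 0x%05X\n", phys(0x1234, 0x5678)); /* 0x179B8 */
    /* Many (segment, offset) pairs alias the same physical address: */
    printf("0x0000:0x7C00 -> 0x%05X\n", phys(0x0000, 0x7C00));
    printf("0x07C0:0x0000 -> 0x%05X\n", phys(0x07C0, 0x0000));
    return 0;
}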
Nowadays, of course, it's moot, I guess.

Except we're stuck with the hideous 8008 architecture. Dozens of
nuclear power-plant equivalents are running day and night to power
these hogs.
Does a 64-bit processor have a 2^64 address space?

Yes, unless they truncate some bus bits to save pins or something. We
have just begun to explore code bloat.
Daffynitions:
femtosecond: The time it takes to realize you just wandered into
the "wrong kind" of bar in West Hollywood. ;-)


Onosecond: the time between when you click the "send" box and when you
realize you've made a big mistake.

John
 

Bob Masta

Jan 1, 1970
0
You seem to be confusing "Virtual Memory" with "Segmented
Memory." A program doesn't care where its base address is
because the loader sets the segment registers. Virtual
memory has nothing to do with that. Virtual memory is
memory that's paged out to disk to make it look to the
app like there's more RAM than there really is. Where
in that RAM you're actually accessing (and who owns it) is
an entirely different layer of the operation.

Cheers!
Rich

Segments aren't used like this in Windows. Each
process has one huge flat address space, with all
segments pointing to the start. That's not only
true when you write the code, but also while running.
There is no code relocation in the usual sense, it's
all done by the OS with translation tables. You
never know the true physical address of anything;
it could be anyplace in memory or on disk, at the
OS's whim/discretion.
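
Conceptually, those translation tables amount to something like the
following - a simplified two-level sketch of my own, not Windows' actual
page-table format:

#include <stdint.h>
#include <stddef.h>

#define PAGE_SHIFT  12
#define PAGE_MASK   0xFFFu
#define PRESENT     0x1u

/* Simplified two-level translation: 4 KB pages in a 32-bit space.
   Each page-table entry holds a physical frame address plus flags. */
typedef struct { uint32_t entry[1024]; } page_table;
typedef struct { page_table *table[1024]; } page_directory;

/* Directory index (top 10 bits), table index (next 10 bits), page
   offset (low 12 bits). Returns 0 when the page isn't mapped in RAM -
   that's the page fault the OS services, possibly by reading the page
   back in from disk. */
static uint32_t virt_to_phys(const page_directory *dir, uint32_t va)
{
    const page_table *pt = dir->table[va >> 22];
    if (pt == NULL)
        return 0;
    uint32_t pte = pt->entry[(va >> PAGE_SHIFT) & 0x3FFu];
    if (!(pte & PRESENT))
        return 0;
    return (pte & ~PAGE_MASK) | (va & PAGE_MASK);
}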

My understanding is that this is what makes
DLLs (Dynamic Link Libraries) possible, but I must confess
I haven't had occasion to check into what is
possible with other approaches to memory management.

Best regards...





Bob Masta
dqatechATdaqartaDOTcom

D A Q A R T A
Data AcQuisition And Real-Time Analysis
www.daqarta.com
 

John Larkin

Jan 1, 1970
0
My understanding is that this is what makes
DLLs (Dynamic Link Libraries) possible, but I must confess
I haven't had occasion to check into what is
possible with other approaches to memory management.

Most versions of Windows allowed virtual memory to be turned off, and the DLL
Hell thing worked just as well/badly as ever. But that config wasn't
generally useful, as it typically wouldn't run much without a gig of
RAM or so. I guess they didn't swap tasks at all with virtual memory off.

John
 

Tim Smith

Jan 1, 1970
0
Normally, DMA transfers data from, say, a disk to/from memory at
sequential physical memory addresses. But if a program runs in virtual
memory, the program's logical, contiguous memory addresses are scattered
in physical memory, in sort of a random checkerboard of memory pages. If
the disk controller has gather/scatter hardware, it ... [snip]

Even without virtual memory, scatter/gather is useful. Consider networking,
where a packet consists of a very low level header followed by data, and
that data in turn consists of a header for a higher level protocol followed
by data, and THAT data consists of a header for a still higher level
protocol and data, and so on.

With scatter/gather, each layer of protocol software can add its header by
simply adding it to the scatter/gather list. Without it, the packet would
have to be assembled contiguously in memory before sending, which would be
slower.
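
The software-side version of the same idea is easy to show with POSIX gather
I/O; here's a toy writev() example where the "headers" are just placeholder
strings, not real protocol formats:

#include <stdio.h>
#include <string.h>
#include <sys/uio.h>
#include <unistd.h>

int main(void)
{
    /* Each protocol layer contributes its own piece; the kernel gathers
       the pieces in order, with no copying into one contiguous buffer. */
    char link_hdr[]  = "LINK-HDR|";
    char net_hdr[]   = "NET-HDR|";
    char trans_hdr[] = "TRANSPORT-HDR|";
    char payload[]   = "application data\n";

    struct iovec iov[4] = {
        { .iov_base = link_hdr,  .iov_len = strlen(link_hdr)  },
        { .iov_base = net_hdr,   .iov_len = strlen(net_hdr)   },
        { .iov_base = trans_hdr, .iov_len = strlen(trans_hdr) },
        { .iov_base = payload,   .iov_len = strlen(payload)   },
    };

    /* One gathered write; on a real network stack the same idea lets a
       NIC with scatter/gather DMA pull the pieces straight from memory. */
    if (writev(STDOUT_FILENO, iov, 4) < 0)
        perror("writev");
    return 0;
}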
 

Tim Smith

Jan 1, 1970
0
A programmer remarked to me the other day that the invention of C was the
worst thing that ever happened to computing. I'd vote for virtual memory
as the second worst.

So you think being able to run more programs faster in less RAM with less
disk I/O is the second worst thing that happened to computing?
 