Maker Pro

PCI Bus layout for "on board" design.

Sylvain Munaut

Hello,


I'd like to know what the considerations are for the PCB layout of an "internal" PCI bus.
By that, I mean I have the PCI Host bridge, arbiter and devices on a single embedded board.


Since they are all almost 'in line', I thought I would do something like this:



  PCI       PCI       PCI
 Dev 1     Dev 2     Dev 3     ....
  |||       |||       |||
 \================================\
 \================================\
 \================================\
  |||
  PCI
  Host


iow, "long" (not that long, 20 inch very max), horizontal traces for all the PCI signals,
on an inner signal layer, then all the traces to the chips (host or dev) would be on the
top layer and just 'via'ed direct to the bus. Of course, keep the trace withing the 60-100
ohm impedance range and with sufficient horizontal spacing (like 2-3 W). But I didn't
plan for any termination at either end (PCI don't need/want them I think).
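
A quick numeric sanity check of that 60-100 ohm target: a minimal sketch using the common IPC-2141 surface-microstrip approximation with made-up stackup numbers. An inner-layer stripline comes out lower in impedance for the same geometry, so this is only illustrative; use a stripline formula or a field solver for the real board.

import math

def microstrip_z0(er, h_mil, w_mil, t_mil):
    """Characteristic impedance of a surface microstrip, IPC-2141
    approximation: Z0 = 87/sqrt(er+1.41) * ln(5.98*h/(0.8*w+t)).
    Only good to a few ohms; inner-layer stripline needs its own formula."""
    return (87.0 / math.sqrt(er + 1.41)) * math.log(5.98 * h_mil / (0.8 * w_mil + t_mil))

# Hypothetical stackup numbers, not from the thread.
er = 4.3    # FR-4 relative permittivity
h = 10.0    # dielectric height to the reference plane, mils
w = 8.0     # trace width, mils
t = 1.4     # 1 oz copper thickness, mils

z0 = microstrip_z0(er, h, w, t)
print("Z0 ~ %.1f ohm, inside the 60-100 ohm window: %s" % (z0, 60 <= z0 <= 100))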

Does the order of the devices matter, and is it important to have the PCI host at one "end" of that bus?

Another concern is that the CPU I use shares some lines between PCI and other busses
(like IDE and the local bus ...). What should I be aware of?


Thanks for any insight,

Sylvain
 
Nico Coesel

Sylvain Munaut said:
Hello,


I'd like to know what the considerations are for the PCB layout of an "internal" PCI bus.
By that, I mean I have the PCI Host bridge, arbiter and devices on a single embedded board.


Since they are all almost 'in line', I thought I would do something like this:



  PCI       PCI       PCI
 Dev 1     Dev 2     Dev 3     ....
  |||       |||       |||
 \================================\
 \================================\
 \================================\
  |||
  PCI
  Host


iow, "long" (not that long, 20 inch very max), horizontal traces for all the PCI signals,
on an inner signal layer, then all the traces to the chips (host or dev) would be on the
top layer and just 'via'ed direct to the bus. Of course, keep the trace withing the 60-100
ohm impedance range and with sufficient horizontal spacing (like 2-3 W). But I didn't
plan for any termination at either end (PCI don't need/want them I think).

PCI should not be terminated! The devices shouldn't be too far away from the bus itself
(read the PCI spec). I guess that if you pretend you are routing each device as if it were
on a PCI card (in other words: maximum and minimum trace lengths with respect to the bus),
you'll be fine.
Does the order of the devices matter, and is it important to have the PCI host at one "end" of that bus?

Another concern is that the CPU I use shares some lines between PCI and other busses
(like IDE and the local bus ...). What should I be aware of?

Don't share any PCI signal with other busses.
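
A small sketch of that "route each device as if it were on a card" rule. The 1.5 inch and 2.5 inch limits are the add-in-card trace-length figures as recalled from the PCI 2.x spec (verify against the spec itself), and the stub lengths are hypothetical.

# Check on-board stub lengths against the PCI add-in-card trace rules
# (limits quoted from memory of the 2.x spec -- verify before layout).
MAX_SIGNAL_STUB_IN = 1.5   # 32-bit AD/control trace length on a card
MAX_CLOCK_STUB_IN = 2.5    # clock trace length on a card

# Hypothetical stub lengths from the bus spine to each device's pads.
signal_stubs_in = {"Dev1": 1.2, "Dev2": 1.6, "Dev3": 0.9, "Host": 1.4}

for name, length in signal_stubs_in.items():
    verdict = "OK" if length <= MAX_SIGNAL_STUB_IN else "too long, shorten the stub"
    print("%s: %.1f in  %s" % (name, length, verdict))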
 
Sylvain Munaut

Nico Coesel said:
PCI should not be terminated! The devices shouldn't be too far away from the bus itself
(read the PCI spec). I guess that if you pretend you are routing each device as if it were
on a PCI card (in other words: maximum and minimum trace lengths with respect to the bus),
you'll be fine.

Ok, thanks.

Don't share any PCI signal with other busses.

Well, I didn't design the CPU I'm using ...
Mostly it's AD[31:0] that is shared (the other lines are not).



Sylvain
 
Mac

Sylvain Munaut said:
Hello,


I'd like to know what the considerations are for the PCB layout of an
"internal" PCI bus. By that, I mean I have the PCI Host bridge, arbiter
and devices on a single embedded board.


Since they are all almost 'in line', I thought I would do something like this:



  PCI       PCI       PCI
 Dev 1     Dev 2     Dev 3     ....
  |||       |||       |||
 \================================\
 \================================\
 \================================\
  |||
  PCI
  Host


iow, "long" (not that long, 20 inch very max), horizontal traces for all
the PCI signals, on an inner signal layer, then all the traces to the
chips (host or dev) would be on the top layer and just 'via'ed direct to
the bus. Of course, keep the trace withing the 60-100 ohm impedance
range and with sufficient horizontal spacing (like 2-3 W). But I didn't
plan for any termination at either end (PCI don't need/want them I
think).

Does the order of the devices matter, and is it important to have the PCI host at one
"end" of that bus?

Another concern is that the CPU I use shares some lines between PCI and
other busses (like IDE and the local bus ...). What should I be aware of?


Thanks for any insight,

Sylvain

Could you elaborate on the shared pin thing? That doesn't sound right at
all. And 20 inches sounds a tad long, but I imagine it will work OK. Maybe
you should take (or "have," if you are British) a look at the PCI spec.

Your basic topology seems right.

What are you doing for a clock? You should probably route separate clocks
to each device, and match lengths on the clock traces.

I don't remember, but I think that PCI requires pullups on some or all
lines. Don't forget them if applicable. I am assuming this is 32-bit, 33
MHz PCI?

--Mac
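
To put rough numbers on the "20 inches sounds a tad long" remark, here is a back-of-envelope sketch of the commonly quoted 33 MHz PCI timing budget (30 ns split into roughly Tval 11 ns, Tprop 10 ns, Tsu 7 ns, Tskew 2 ns) against the round-trip flight time of a candidate bus length. The per-inch delay is a typical unloaded FR-4 assumption; a heavily loaded bus is noticeably slower.

# Back-of-envelope PCI 33 MHz timing budget (values quoted from memory,
# verify against the spec): 30 ns ~= Tval 11 + Tprop 10 + Tsu 7 + Tskew 2.
T_CYCLE_NS = 30.0
T_VAL_NS = 11.0
T_SU_NS = 7.0
T_SKEW_NS = 2.0
T_PROP_NS = T_CYCLE_NS - T_VAL_NS - T_SU_NS - T_SKEW_NS   # ~10 ns of flight time

# Assumed ~180 ps/in for unloaded inner-layer FR-4; loading adds more.
DELAY_PS_PER_IN = 180.0

spine_in = 15.0   # proposed pad-to-pad bus length
round_trip_ns = 2.0 * spine_in * DELAY_PS_PER_IN / 1000.0   # reflected-wave round trip
print("Tprop budget: %.0f ns, round trip on %.0f in of bus: %.1f ns"
      % (T_PROP_NS, spine_in, round_trip_ns))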
 
Sylvain Munaut

Hi Mac
I'd like to know what the considerations are for the PCB layout of an
"internal" PCI bus. By that, I mean I have the PCI Host bridge, arbiter
and devices on a single embedded board.
[snip]
Another concern is that the CPU I use shares some lines between PCI and
other busses (like IDE and the local bus ...). What should I be aware of?
Could you elaborate on the shared pin thing? That doesn't sound right at
all.

Well, yes, that doesn't sound right, but ...
The CPU I use shares the AD[31:0] lines between the PCI and other
busses (like its local bus, where the flash is). The other control
signals (for PCI: pci_clk, irdy, ... or for the local bus: CE, WE,
...) are separate.

In the datasheet they claim the drivers for AD[31:0] are of type
PCI_33, but when I look at the IBIS model, it seems to drive the line
a bit high compared to what a "true" PCI driver should do. But in the
simulation it looks more or less the same (it doesn't look pretty
IMHO: 800 mV of over/undershoot ??? for a simple point-to-point line
at 33 MHz).
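
Some over/undershoot is actually expected on PCI, which uses reflected-wave switching: the driver launches roughly half the swing and the unterminated far end reflects it back to complete the transition, so the far pad briefly sees up to twice the incident step before the clamps act. A minimal sketch, with an assumed driver resistance and trace impedance rather than values from the IBIS model:

# Reflected-wave switching on an unterminated line, illustrative numbers only.
V_DRIVE = 3.3   # volts, rail the driver pulls toward
R_OUT = 30.0    # ohms, assumed driver output resistance
Z0 = 65.0       # ohms, assumed trace impedance

# Incident wave launched into the line (resistive divider between R_OUT and Z0).
v_incident = V_DRIVE * Z0 / (R_OUT + Z0)

# A CMOS input is nearly an open circuit: reflection coefficient ~ +1,
# so the far end momentarily sees about twice the incident wave.
v_far_end = v_incident * 2.0

print("incident wave: %.2f V, far-end peak before clamping: %.2f V" % (v_incident, v_far_end))
print("apparent overshoot above the 3.3 V rail: %.2f V" % max(0.0, v_far_end - V_DRIVE))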

The other "devices" I have on that bus are : Some flash and a level
switcher 3.3v/5v for a ATA bus. When theses other bus are accessed,
the cpu keeps frame# deasserted so the other pci device don't care
what happens and refuses any req#.


And 20 inches sounds a tad long, but I imagine it will work OK. Maybe
you should take (or "have," if you are British) a look at the PCI spec.

Well, 20 inches was a little pessimistic ... It should be more like
10 to 15 inches (from pad to pad).

I have read the spec but it's not always very clear to me. I'll try
to reread it more closely.
Your basic topology seems right.

That's a start ;)

What are you doing for a clock? You should probably route separate clocks
to each device, and match lengths on the clock traces.

I was thinking of routing the clock like any other trace, just a little
further away from the others. My PCI host only has one PCI clock output.

I don't remember, but I think that PCI requires pullups on some or all
lines. Don't forget them if applicable.

Yes, the control lines require pullups; the stability of the other
lines is assured by bus parking.
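
A rough sketch, with hypothetical resistor and capacitance values, of how slowly a released control line recovers through its pullup; this is why the sustained tri-state PCI signals are driven high for a clock before being released, so the pullup only has to hold the level rather than restore it quickly.

# RC recovery of a released control line through its pullup.
# All values hypothetical, just to show the order of magnitude.
R_PULLUP_OHM = 8200.0   # assumed pullup value
C_BUS_PF = 60.0         # assumed total capacitance of the net

tau_ns = R_PULLUP_OHM * C_BUS_PF * 1e-12 * 1e9   # RC time constant in ns
t_rise_ns = 2.2 * tau_ns                         # ~10-90% rise if pulled up from low

print("tau ~ %.0f ns, 10-90%% recovery ~ %.0f ns (~%.0f PCI clocks at 33 MHz)"
      % (tau_ns, t_rise_ns, t_rise_ns / 30.0))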
I am assuming this is 32-bit, 33 MHz PCI?

Yes, I didn't mention it, but that's exactly what it is, with 3.3 V signalling ;)



Sylvain
 
Mac

Hi Mac

Mac wrote: [snip]
Could you elaborate on the shared pin thing? That doesn't sound right
at all.

Well, yes, that doesn't sound right, but ... The CPU I use shares the
AD[31:0] lines between the PCI and other busses (like its local bus,
where the flash is). The other control signals (for PCI: pci_clk,
irdy, ... or for the local bus: CE, WE, ...) are separate.

In the datasheet they claim the drivers for AD[31:0] are of type PCI_33,
but when I look at the IBIS model, it seems to drive the line a bit high
compared to what a "true" PCI driver should do. But in the simulation it
looks more or less the same (it doesn't look pretty IMHO: 800 mV of
over/undershoot ??? for a simple point-to-point line at 33 MHz).

The other "devices" I have on that bus are : Some flash and a level
switcher 3.3v/5v for a ATA bus. When theses other bus are accessed, the
cpu keeps frame# deasserted so the other pci device don't care what
happens and refuses any req#.

Ah, OK. I guess that is OK, then, as long as the arbiter won't grant the
bus to anybody. We just have to hope that the CPU doesn't starve the
devices out. But if this is a real CPU that other people are using
successfully, then we have to assume it works reasonably well.

[snip]
I was thinking of routing the clock like any other trace, just a little
further away from the others. My PCI host only has one PCI clock output.

Hmmm. How many devices do you have? In all the PCI designs I have
seen, each device gets a separate copy of the PCI clock. This
includes designs such as yours where all the devices are "on-board."

Also, the loading the devices put on the clock is constrained by the PCI
specification, so it seems the specification considers the clock to be
critical. So, if you have more than two devices on the clock, then I
think you need to buffer it to be safe. You have to use a buffer with a
PLL in it (or a separate PLL with a buffer in the loop). These are
sometimes called zero-delay buffers, or ZDBs.
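
A sketch of what the length matching buys, with made-up routed lengths and an assumed ~180 ps/in FR-4 delay: the skew from trace mismatch has to fit, together with the buffer's own skew, inside the roughly 2 ns total clock-skew budget at 33 MHz.

# Clock skew from trace-length mismatch, assuming ~180 ps/in on FR-4.
# Routed lengths below are hypothetical.
DELAY_PS_PER_IN = 180.0
SKEW_BUDGET_NS = 2.0   # total allowed clock skew at 33 MHz, from memory -- verify

clock_lengths_in = {"Host": 6.0, "Dev1": 6.3, "Dev2": 7.1, "Dev3": 5.8, "Dev4": 6.5}

mismatch_in = max(clock_lengths_in.values()) - min(clock_lengths_in.values())
skew_ns = mismatch_in * DELAY_PS_PER_IN / 1000.0
print("worst length mismatch: %.1f in -> %.2f ns of skew against a %.0f ns budget"
      % (mismatch_in, skew_ns, SKEW_BUDGET_NS))
print("(the zero-delay buffer's own skew and load differences eat the rest)")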

Here is one chip that might suit your application, depending on how many
devices you have:

The CY2305, a "zero delay buffer" from Cypress Semiconductor.

Digi-Key stocks it. The Digi-Key part number is 428-1347-ND.

[snip]

Good luck!

--Mac
 
Sylvain Munaut

Mac said:
Hmmm. How many devices do you have? In all the PCI designs I have
seen, each device gets a separate copy of the PCI clock. This
includes designs such as yours where all the devices are "on-board."

I have at least 4 PCI devices, so I'll use a zero-delay buffer as you
suggested, then.


Many Thanks,


Sylvain
 