MMU Caching for Fun and Profit

I’m in the middle of a very long, very drawn out project to try to emulate the AT&T 3B2/400 computer. I should probably have been sharing my progress more frequently than I have been, but it has for the most part been a painful and solitary endeavor.

Today, though, there is something in particular that is bothering me greatly, and I must yell into the void to get this frustration off my chest. And that is, how in the hell does the MMU cache work?

So first, a little background.

The MMU on the 3B2/400 divides virtual addresses into four parts. The first two bits (the SID) identify the section of virtual memory, of which there are four. The next 13 bits (the SSL) identify a physical memory offset used to locate either a contiguous segment of memory or a set of pages. The details of how contiguous segments and paged memory work are not that important for this discussion.
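
For the emulator, I find it easiest to think of that split in code. Here is a minimal sketch in C of pulling those fields out of a 32-bit virtual address, assuming the SID sits in the top two bits and the SSL in bits 17–29; the exact bit positions are my reading of the datasheet, so treat them as an assumption.

    #include <stdint.h>
    #include <stdio.h>

    /* Field positions as I currently understand them: SID in the top two
     * bits (30-31), SSL in bits 17-29. The paged fields (PSL and the page
     * offset) live below bit 17 and don't matter for this post. */
    #define VA_SID(va)  (((va) >> 30) & 0x3)     /* 2-bit section id        */
    #define VA_SSL(va)  (((va) >> 17) & 0x1fff)  /* 13-bit segment selector */

    int main(void)
    {
        uint32_t va = 0x40060800;    /* an arbitrary virtual address */
        printf("SID=%u SSL=0x%04x\n", VA_SID(va), VA_SSL(va));
        return 0;
    }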

The actual segments and pages are described in the 3B2’s memory using Segment Descriptors and Page Descriptors. These descriptors are directions to the MMU that tell it how to do a translation from virtual address to physical address.

For performance, the MMU doesn’t want to go talking to main memory every time it does a translation. So, it has an on-chip cache. It can store 8 Segment Descriptors per section (a total of 32), and it can store 16 Page Descriptors per section (a total of 64). The cache keeps things humming along.
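
In my emulator I model this cache as a set of small per-section arrays. The sketch below is only meant to show the shape of the thing; the entry fields are placeholders, not the real cache entry format from the datasheet.

    #include <stdint.h>

    #define NUM_SECTIONS  4
    #define SD_PER_SEC    8    /* Segment Descriptor cache entries per section */
    #define PD_PER_SEC    16   /* Page Descriptor cache entries per section    */

    /* Placeholder entry: some kind of tag plus the cached descriptor bits.
     * Exactly what belongs in the tag is the whole question below. */
    typedef struct {
        uint32_t tag;            /* identifies which descriptor is cached here */
        uint32_t descriptor[2];  /* placeholder for the cached descriptor      */
        int      valid;
    } cache_entry;

    typedef struct {
        cache_entry sd_cache[NUM_SECTIONS][SD_PER_SEC];  /* 32 SD entries total */
        cache_entry pd_cache[NUM_SECTIONS][PD_PER_SEC];  /* 64 PD entries total */
    } mmu_cache;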

So far, so good, right? Makes sense? Following along?

Here’s where things get crazy. Here’s how the MMU datasheet describes the internals of the cache entries.

Note the language on the “tag” field for each of the cache entry formats. These tags are supposed to uniquely identify the Segment Descriptor and Page Descriptor entries, right? And I presume they’re used for looking up a virtual address in the cache?

But wait! Bits 20–29 can’t possibly be enough to uniquely identify a segment descriptor. To look up a Segment Descriptor in main memory, you need all 13 bits of the SSL! There would surely be collisions in the cache, right?

And the problem extends to the Page Descriptor cache. Again, bits 13–16 and 18–29 of the virtual address can’t identify a Page Descriptor uniquely. You need all 13 bits of the SSL and all 6 bits of the PSL to locate a Page Descriptor in main memory.

This is my confusion. If those bits identify an SD and a PD uniquely in the cache, why can’t they identify them uniquely in main memory? And why aren’t there collisions in the cache?
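
To make the worry concrete, here is a toy example: two virtual addresses in the same section whose SSLs differ only in bits 17–19. If bits 20–29 were the whole story, the cache would have no way to tell them apart. This is just an illustration of the collision I’m worried about, using the bit positions from the tag description above.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Two virtual addresses in the same section whose SSLs differ
         * only in bits 17-19 of the address (SSL 0x00A vs SSL 0x00D);
         * bits 20-29 are identical in both. */
        uint32_t va_a = 0x00100000 | (0x2u << 17);
        uint32_t va_b = 0x00100000 | (0x5u << 17);

        uint32_t tag_a = (va_a >> 20) & 0x3ff;   /* bits 20-29 only */
        uint32_t tag_b = (va_b >> 20) & 0x3ff;

        printf("tag A = 0x%03x, tag B = 0x%03x -> %s\n", tag_a, tag_b,
               tag_a == tag_b ? "indistinguishable by tag alone" : "distinct");
        return 0;
    }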

This mysterious machine gets harder to understand all the time.

5 thoughts on “MMU Caching for Fun and Profit”

  1. Weird. Do you have a link to the documentation that this comes from?

    Possibilities that leap to mind: that SSLs are somehow required to be aligned to certain boundaries? Do you have actual hardware, and have you confirmed that there _aren’t_ collisions? I’d love to see the full tables above.

  2. Okay, so please correct me if I’m wrong, I just skimmed this:

    Bits 17-29 are a 13-bit SSL, not a 14-bit SSL. Bits 20-29 are in the tag, leaving 3 bits missing. There are 8 Segment Descriptors, so presumably those three bits are the index (that selects which descriptor)?

    Or are they indexed by something else?

  3. Hi Dougall, you’re quite right, the SSL is 13 bits, not 14. I had it right in my code, but not in the blog. I’ll correct the original post.

    I’m not so sure about the 8 segment descriptors, though. The SSL is an index into a table of segment descriptors. So, for a concrete example, let’s imagine a virtual address whose section’s segment descriptor table starts at physical address 0x00200000. If the SSL equals 3, that means we should look up the segment descriptor for that virtual address at physical memory location 0x00200018, since there are 8 bytes per SD and the 3B2 is a byte-addressed machine. So the bottom three bits of the SSL seem pretty important.
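
    For what it’s worth, here is a sketch of what Dougall’s suggestion would look like in code, with the low three bits of the SSL selecting one of the 8 entries for a section and bits 20–29 acting as the tag, next to the main-memory lookup from my example above. This is speculation on my part, not something the datasheet spells out.

        #include <stddef.h>
        #include <stdint.h>

        #define SD_PER_SEC 8

        typedef struct {
            uint32_t tag;            /* high 10 bits of the SSL (VA bits 20-29)? */
            uint32_t descriptor[2];  /* cached copy of the 8-byte SD             */
            int      valid;
        } sd_cache_entry;

        /* Main-memory lookup: the SSL indexes a table of 8-byte SDs.
         * e.g. table base 0x00200000, SSL 3 -> SD at 0x00200018. */
        static uint32_t sd_address(uint32_t sd_table_base, uint32_t ssl)
        {
            return sd_table_base + ssl * 8;
        }

        /* Hypothetical cache lookup: the low 3 bits of the SSL pick one of
         * the 8 entries for the section; the remaining 10 bits are the tag. */
        static sd_cache_entry *
        sd_cache_lookup(sd_cache_entry cache[][SD_PER_SEC],
                        uint32_t sid, uint32_t ssl)
        {
            uint32_t index = ssl & 0x7;   /* which of the 8 entries         */
            uint32_t tag   = ssl >> 3;    /* what the cached tag must match */
            sd_cache_entry *e = &cache[sid][index];

            return (e->valid && e->tag == tag) ? e : NULL;
        }

    If that is how the hardware works, then two SSLs that happen to share a tag would land in different slots, and the tag plus the slot index together would be enough to identify the descriptor. That would answer my own question, but I would love confirmation.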
