Message Passing Priority Scheduled Threads
To share CPU compute cycles among tasks, Silicon Graphics provides a simple CPU scheduler that helps the game manage multiple threads of control. The scheduling scheme has the following attributes:
- Non-preemptive execution: The currently running thread continues to run on the CPU until it explicitly yields, or implicitly blocks while waiting to receive a message. The one exception is interrupts: if an interrupt event awakens a higher-priority thread, that thread preempts the current one. An interrupt service thread must therefore not consume extensive CPU cycles.
- Priority scheduling: A simple numerical priority determines which thread runs when the currently executing thread yields or an interrupt causes rescheduling.
- Message passing: Threads communicate with each other through messages. One thread writes a message into a queue for another thread to retrieve.
- Interrupt messages: An application can associate an interrupt event with a message that is delivered to a particular thread's queue.
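On the Nintendo 64 these semantics are exposed through calls such as osCreateMesgQueue, osSendMesg, and osRecvMesg. The host-side sketch below models only the fixed-capacity queue behavior (the names and layout are illustrative, not the real OSMesgQueue structure, and a real receive on an empty queue would block the calling thread rather than return an error):

```c
#include <assert.h>
#include <stddef.h>

/* Host-side model of a fixed-capacity message queue.  Illustrative
 * only; the real N64 calls are osCreateMesgQueue/osSendMesg/osRecvMesg. */
#define QUEUE_CAP 8

typedef struct {
    void  *msg[QUEUE_CAP]; /* message slots */
    size_t head, count;
} MesgQueue;

/* Returns 0 on success, -1 if full (models the OS_MESG_NOBLOCK case;
 * a blocking send would instead suspend the sending thread). */
static int send_mesg(MesgQueue *q, void *m) {
    if (q->count == QUEUE_CAP) return -1;
    q->msg[(q->head + q->count) % QUEUE_CAP] = m;
    q->count++;
    return 0;
}

/* Returns 0 on success, -1 if empty (a blocking receive would suspend
 * the receiving thread until a message arrives). */
static int recv_mesg(MesgQueue *q, void **m) {
    if (q->count == 0) return -1;
    *m = q->msg[q->head];
    q->head = (q->head + 1) % QUEUE_CAP;
    q->count--;
    return 0;
}
```

Messages are delivered in FIFO order within a queue; priority affects which *thread* runs next, not the ordering of messages already enqueued.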
CPU Data Cache
The R4300 has a write-back data cache to improve CPU performance. When the CPU reads data, the cache may satisfy the read request, eliminating the extra cycles needed to access main memory. When the CPU writes data, the data is written to the cache first and flushed to main memory at some later point. Therefore, when the CPU produces data in memory for the RCP or an I/O DMA engine to consume, software must explicitly flush the cache; otherwise the RCP or DMA engine may read stale data from main memory. The application can choose to flush the entire cache or a particular memory range.
Conversely, before the RCP or an I/O DMA engine produces data in memory for the CPU to process, the corresponding CPU cache lines must be explicitly invalidated. This prevents the CPU from reading stale data out of its cache. The invalidation must occur before the RCP or DMA engine places the data in main memory; otherwise, a write-back of dirty cache lines could overwrite the new data in main memory.
Since software is responsible for cache coherency, aligning data regions on cache line boundaries is a good idea. A single cache line that holds data produced by multiple processors is difficult to keep coherent.
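The alignment rule above can be made concrete with a little address arithmetic. The sketch below rounds a buffer range outward to whole cache lines, as one would do before invalidating a DMA target region (the 16-byte line size matches the R4300 data cache; the helper names are my own, and the actual flush and invalidate operations are performed by libultra calls such as osWritebackDCache and osInvalDCache):

```c
#include <assert.h>
#include <stdint.h>

/* R4300 data cache line size in bytes. */
#define DCACHE_LINE 16u

/* Round an address down to the start of its cache line. */
static uintptr_t line_floor(uintptr_t addr) {
    return addr & ~(uintptr_t)(DCACHE_LINE - 1);
}

/* Round an address up to the next cache line boundary.  Any buffer that
 * does not start and end on these boundaries shares its first or last
 * line with neighboring data, which is exactly the coherency hazard the
 * text warns about. */
static uintptr_t line_ceil(uintptr_t addr) {
    return (addr + DCACHE_LINE - 1) & ~(uintptr_t)(DCACHE_LINE - 1);
}
```

In practice, DMA buffers are simply declared with 16-byte alignment and padded to a multiple of 16 bytes so that no line is ever shared.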
No Default Memory Management
As described above, the Nintendo 64 operating system provides multi-threaded, message-passing execution control. The operating system does not impose a default memory management model, but it does provide generic Translation Lookaside Buffer (TLB) access. The application can use the TLB for a variety of purposes, such as virtually contiguous memory or memory protection. For example, an application can use TLB entries to protect against stack overflows.
Timers
Simple timer facilities are provided, useful for performance profiling, real-time scheduling, or game timing. See the osGetTime(3P) man page for more information.
Variable TLB Page Sizes
The R4300 also has variable translation lookaside buffer (TLB) page size capability. This can provide additional, useful functionality such as the "poor man's two-way set-associative cache": because the data cache is 8 KB of direct-mapped memory and the TLB page size can be set to 4 KB, the application can roll a 4 KB cache window through a contiguous chunk of memory without wiping out the other 4 KB in the cache.
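The trick works because a direct-mapped cache indexes purely by address: an address maps to cache offset (addr mod 8192), so two 4 KB-aligned buffers placed an odd multiple of 4 KB apart occupy disjoint halves of the cache and never evict each other. A small sketch of that arithmetic (constants match the R4300 data cache; function names are my own):

```c
#include <assert.h>
#include <stdint.h>

/* R4300 data cache: 8 KB, direct-mapped. */
#define DCACHE_SIZE 8192u
#define WINDOW_SIZE 4096u  /* one 4 KB TLB page */

/* Cache offset an address maps to in a direct-mapped cache. */
static unsigned cache_offset(uintptr_t addr) {
    return (unsigned)(addr % DCACHE_SIZE);
}

/* 1 if two 4 KB-aligned windows land in different halves of the cache
 * (so streaming through one never evicts the other), 0 otherwise. */
static int windows_disjoint(uintptr_t a, uintptr_t b) {
    return (cache_offset(a) / WINDOW_SIZE)
        != (cache_offset(b) / WINDOW_SIZE);
}
```

So a 4 KB window rolled through memory in 8 KB strides keeps reusing one half of the cache, leaving the other half untouched for hot data.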
MIPS Coprocessor 0 Access
A set of application programming interfaces (APIs) is also provided for coprocessor 0 register access, including the CPU cycle-accurate timer, the cause of the last exception, and the status register.
I/O Access and Management
The I/O subsystem provides functional access to the individual I/O hardware subcomponents. Most functions translate logical requests into raw physical accesses to the I/O device.
Figure 4.2.1 I/O Access and Management Software Components
PI Manager
Nintendo 64 also provides a peripheral interface (PI) device manager so that multiple threads can share access to the peripheral device. For example, the audio thread may want to page in the next set of audio samples while the graphics thread needs to page in a future database. The PI manager is a thread that waits for commands to be placed in a message queue. At the completion of a command, a message is sent to the thread that requested the DMA. Also refer to Section 27, "EPI Manager and Extension Devices."
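The request/reply round trip above can be sketched as follows. In real libultra code the request is an OSIoMesg carrying a pointer to the client's reply queue; the model below is deliberately simplified (names are hypothetical, the "DMA" is a memcpy, and the reply queue is a plain flag), but it shows the shape of the protocol: the client embeds its own reply channel in the request, and the manager posts completion back through it:

```c
#include <assert.h>
#include <string.h>

/* Illustrative model of the PI manager request/reply pattern.  The
 * real structures are OSIoMesg and OSMesgQueue; everything here is a
 * simplified stand-in. */
typedef struct {
    const char *src;   /* stands in for a cartridge ROM address */
    char       *dst;   /* destination RAM buffer */
    size_t      len;   /* transfer length in bytes */
    int        *reply; /* stands in for the client's reply queue */
} IoRequest;

/* The "PI manager" services one request: perform the transfer, then
 * post a completion message back to the requesting thread's queue. */
static void pi_manager_service(IoRequest *req) {
    memcpy(req->dst, req->src, req->len); /* stands in for the DMA */
    *req->reply = 1;                      /* completion message */
}
```

Because completion arrives as an ordinary message, the client is free to do other work and only block on its reply queue when it actually needs the data.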
VI Manager
A simple video interface (VI) device manager keeps track of when vertical retrace and graphics rendering is complete. It also updates the proper video modes for the new video field. The VI manager can send a message to the game application on a vertical retrace. The game can use this to synchronize rendering the next frame.
Copyright © 1999
Nintendo of America Inc. All Rights Reserved
Nintendo and N64 are registered trademarks of Nintendo
Last Updated January, 1999