Tuesday, May 5, 2020

Made of Semiconductors

Question: Discuss the cache memory made of semiconductors.

Answer:

Introduction

Normal processors can generally perform operations on operands much faster than the main memories located within computers. This does not mean that large-capacity memories cannot operate at comparable speeds: memories made of semiconductors can in fact run at high speeds compared to normal processors, but the cost implications of such performance are much too high, hence the choice of fast processors paired with slower memory. Moreover, a main memory fast enough for operand operations would have to be fitted throughout with high-speed semiconductor components, which raises the cost even further. Therefore, to bridge the gap between high-speed processors and main memories, one or more memory blocks known as caches are fitted between the processor and the main memory (UMD, 2001).

Cache memory is, therefore, a high-speed memory that bridges the gap arising from the variation between processor and main-memory speeds. The same facility is also used to store data from addresses that are frequently referenced in the main memory. In general, the idea behind cache memory is similar to that of virtual memory: some portions of the (slow) main memory are stored in duplicate in a high-speed location, the cache. The result is an efficient system that always passes memory requests to the cache before escalating them to the main memory (UMD, 2001). Consider an application requesting a certain arithmetic operation: the request is first passed to the cache memory, and if the necessary operands and addresses are available there, the cache serves the operation. If it fails to respond, the operation is forwarded to the main memory (Dandamudi, 2003). Nevertheless, cache memory should never be confused with virtual memory, as it has far higher speed requirements. Both, however, depend on the correlation of address references, which means their differences stem from their implementation.

Cache Memory's Major Components

Memory Components

Cache memory makes frequently used data available to the processor. This functionality is achieved through building blocks that form a small memory section known as the primary cache (Level 1 cache). The primary cache is built into the processor itself, i.e. the central processing unit (CPU), and has a small size, typically in the range of 2 KB to 64 KB. In addition to the primary cache, the secondary cache (Level 2 cache) occupies a memory location in close proximity to the CPU. On most occasions the Level 2 cache is hosted on a separate memory module but has a direct connection to the processor; its use is regulated by the L2 controller, a specific circuit found on the computer's motherboard, and its size ranges from 256 KB to 2 MB depending on the processor (Nicolast, 2011). In addition to these levels is the Level 3 cache, typically used to improve the overall performance of the Level 1 and Level 2 caches. Level 3 caches are specialised memory locations that are significantly slower than the Level 1 and 2 caches, yet significantly faster than other memory locations; they are commonly quoted at roughly double the speed of random access memory (RAM). Moreover, most modern multi-processor computers have L1 and L2 caches within each processor core but share a common L3 cache among the cores. Because the L3 cache is meant to improve the operation of the levels above it, any data found in it is automatically promoted to a higher tier, i.e. the L1 or L2 cache (Rouse, 2017).
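This layered hierarchy can even be observed from ordinary software. Below is a minimal sketch in C, not a rigorous benchmark: it repeatedly walks working sets of increasing size and reports the average time per access, which typically steps upward each time the working set outgrows a cache level. All sizes, strides and iteration counts are illustrative assumptions rather than figures taken from the sources cited here.

/*
 * A minimal sketch, not a rigorous benchmark: it walks working sets of
 * increasing size with a cache-line-sized stride and reports the average
 * time per access.  The cost per access typically steps upward whenever
 * the working set outgrows a cache level (L1, then L2, then L3).
 * All sizes and counts below are illustrative assumptions.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define STRIDE 16  /* step in ints = 64 bytes, roughly one cache line */

int main(void) {
    /* Working sets from 4 KB up to 64 MB. */
    for (size_t bytes = 4096; bytes <= 64u * 1024 * 1024; bytes *= 2) {
        size_t n = bytes / sizeof(int);
        int *a = malloc(bytes);
        if (!a) return 1;
        for (size_t k = 0; k < n; k++) a[k] = 1;    /* touch every element */

        volatile int sink = 0;                      /* keep reads from being optimised away */
        size_t accesses = 10u * 1024 * 1024;        /* same amount of work per size */
        clock_t start = clock();
        for (size_t i = 0, j = 0; i < accesses; i++) {
            sink += a[j];
            j += STRIDE;
            if (j >= n) j = 0;                      /* wrap around the working set */
        }
        double secs = (double)(clock() - start) / CLOCKS_PER_SEC;
        printf("%8zu KB: %.2f ns/access\n",
               bytes / 1024, secs * 1e9 / (double)accesses);
        free(a);
    }
    return 0;
}

On a typical desktop machine the printed cost per access tends to rise noticeably around the L1, L2 and L3 capacity boundaries.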
Fig: Three-level cache configuration

Specialised Caches

Although the Level 3 cache can be described as a specialised cache, other caches exist that perform duties other than plain data and instruction caching and also bear this name. The translation lookaside buffer (TLB), for instance, records recent virtual-to-physical address mappings, allowing address translation to take place quickly. Other caches, such as the disk cache, use RAM or flash memory to offer the same caching service at a different point in the system, in front of the disk rather than the main memory.

Cache Organisation and Configurations

Several methods can be used to store data within the cache memory, and these methods govern the internal structure of the memory locations. The processor uses these structures to reference the main-memory address of the data it wants to access, so a proper organisation is needed to find the appropriate cache location, a concept known as mapping. Mapping must be implemented in the hardware components for the operating system to benefit from it. Three methods are commonly used (a sketch of the first follows this list):

Direct mapping. Comparable to a table with many rows but a fixed number of columns, direct mapping assigns each block of main memory to one specific cache location.

Fully associative mapping. Unlike direct mapping, where a block's placement is based on a pre-defined cache location, this method allows a block of data to be mapped to any cache location.

Set-associative mapping. Commonly seen as a compromise between the previous two, this method allows a block to be mapped anywhere within one particular subset (a set) of cache locations.
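The following is a minimal sketch of the direct-mapping address breakdown just described. The cache geometry (64-byte lines and 256 lines, i.e. an assumed 16 KB direct-mapped cache) and the sample addresses are illustrative, not figures from the cited sources. Each address splits into an offset within a line, an index that selects exactly one cache line, and a tag identifying which memory block currently occupies that line.

/*
 * A minimal sketch of the direct-mapping address breakdown.  The geometry
 * (64-byte lines, 256 lines, i.e. an assumed 16 KB direct-mapped cache)
 * and the sample addresses are illustrative.  Each address splits into an
 * offset within a line, an index selecting exactly one line, and a tag
 * identifying which memory block currently occupies that line.
 */
#include <stdint.h>
#include <stdio.h>

#define LINE_BYTES 64u   /* bytes per cache line -> 6 offset bits */
#define NUM_LINES  256u  /* lines in the cache   -> 8 index bits  */

int main(void) {
    uint32_t addrs[] = { 0x0000ABCDu, 0x0001ABCDu, 0x12345678u };

    for (int i = 0; i < 3; i++) {
        uint32_t a      = addrs[i];
        uint32_t offset = a % LINE_BYTES;                /* byte within the line */
        uint32_t index  = (a / LINE_BYTES) % NUM_LINES;  /* which cache line     */
        uint32_t tag    = a / (LINE_BYTES * NUM_LINES);  /* block identification */
        printf("addr 0x%08X -> tag 0x%05X  index %3u  offset %2u\n",
               a, tag, index, offset);
    }
    return 0;
}

Note that the first two sample addresses share the same index but carry different tags; in a direct-mapped cache they would therefore keep evicting one another, which is exactly the kind of conflict that set-associative mapping relieves.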
Discussion

Cache Performance

To outline a cache's performance, several terms must be defined, as their interplay determines the behaviour of the system. First, a cache hit occurs when the requested data is found in the cache itself. Second, a cache miss occurs when the requested data is not in the cache. When the processor must then load the data from a slower memory into the cache, the extra delay exhibited is the miss penalty. A general approach to quantifying a cache's performance is to calculate its average access time:

Access time = Hit cost + (Miss rate × Miss penalty)

Furthermore, since the initial definition of cache memory saw it as a component bridging the speed gap between fast and slow memory locations, this estimate can be restated as:

Access time = Fast-memory access time + (Miss rate × Slow-memory access time) (UMD, 2001)

As an illustrative calculation, a 2 ns hit cost, a 5% miss rate and an 80 ns miss penalty would give an average access time of 2 + 0.05 × 80 = 6 ns. During the design of cache memory, a lot of emphasis is placed on fast hit handling as well as on reducing the miss rate, because the gap between processor and main-memory speeds keeps widening, so each miss becomes relatively more expensive over time. The miss rate, an important component of cache design, can be classified into three major categories: compulsory misses, capacity misses and conflict misses. Compulsory misses are an unavoidable aspect of cache memory, as they occur when a program or its data is loaded into the cache for the first time. Capacity misses occur when the cache is smaller than the working set of the data, regardless of the organisation or mapping used. Finally, conflict misses occur when several blocks map to the same cache location and evict one another even while other locations stand free. From these definitions it is fairly obvious which misses can be eradicated to improve the system's performance: the conflict misses. This objective is achieved using proper mapping or hashing functions, and greater associativity, matched to the system design and catering for both the fast and slow memory locations (Silvano, 2014).

Application of Cache Memory to Achieve Its Overall Goal

In general, the overall objective of the cache memory is to improve a computer's performance, and having established how its performance is gauged, it is important to highlight how it actually achieves this goal. To improve a computer's performance, the cache has two general objectives. The first is to provide the user with the illusion of a very large memory that is simultaneously very fast: a user appears to work with, say, a 1-terabyte disk at very high speed, even though that storage cannot achieve such speeds on its own. The second, which serves the first, is to provide data to the processor at very high speed, facilitating a faster frequency of operations. These objectives are achieved through the principle of locality of reference, whereby the cache continually holds recently used memory locations and data. Two variations of the principle exist: temporal locality, where recently used data and instructions, such as those inside loops, are likely to be referenced again soon; and spatial locality, where addresses close to a recently used memory element are likely to be referenced next (Silvano, 2014).
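The practical effect of spatial locality is easy to demonstrate. The sketch below, with an assumed (illustrative) 4096 × 4096 matrix, sums the same data twice: once walking memory sequentially, so that most accesses after the first in each cache line are hits, and once jumping a whole row length between accesses, so that nearly every access touches a new cache line.

/*
 * A minimal sketch of spatial locality, with an assumed 4096 x 4096 int
 * matrix (64 MB, larger than typical caches).  Both loops sum the same
 * data.  The row-major loop walks memory sequentially, so most accesses
 * in each cache line after the first are hits; the column-major loop
 * jumps a whole row length per access, touching a new line nearly every
 * time, and is typically several times slower.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 4096

static int *m;

static double time_sum(int by_rows) {
    volatile long total = 0;                 /* keep the sum from being optimised away */
    clock_t start = clock();
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            total += by_rows ? m[i * N + j]  /* consecutive addresses   */
                             : m[j * N + i]; /* stride of N ints apart  */
    return (double)(clock() - start) / CLOCKS_PER_SEC;
}

int main(void) {
    m = malloc((size_t)N * N * sizeof(int));
    if (!m) return 1;
    for (size_t k = 0; k < (size_t)N * N; k++) m[k] = 1;

    printf("row-major    (cache-friendly): %.3f s\n", time_sum(1));
    printf("column-major (cache-hostile) : %.3f s\n", time_sum(0));
    free(m);
    return 0;
}

Both loops perform exactly the same amount of arithmetic; only the order of memory references differs, which is why the cache-hostile traversal is typically several times slower once the matrix exceeds the cache sizes.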
Future Trends

Several cache-memory variations have existed in the past; in some inexpensive computers, for instance, the Level 3 cache is simply omitted. Recent trends, however, have seen a new solution being adopted that looks set to be the future of cache memory: all three levels of cache are implemented in the processor itself, which improves performance. In the future, therefore, users may stop choosing computers based on motherboard and bus architecture and instead focus on processors having the right cache integration (Rouse, 2017).

Conclusion

Cache memory is an integral part of a computer; without it, the speed gap between the processor and the main memory would translate into poor performance. A processor is very fast, transferring and loading data at high speed, whereas the main memory is slow at the same functions but offers, in return, a large storage capacity. The cache acts as a liaison between the two, enabling the end user to capitalise on the functionalities and benefits of both components.

References

Dandamudi, S. (2003). Fundamentals of computer organization and design. Springer. Retrieved 28 March, 2017, from: https://service.scs.carleton.ca/sivarama/org_book/org_book_web/slides/chap_1_versions/ch17_1.pdf

Nicolast. (2011). Main memory. Introduction to computer science course. Retrieved 28 March, 2017, from: https://www2.cs.ucy.ac.cy/~nicolast/courses/lectures/MainMemory.pdf

Rouse, M. (2017). Cache memory. TechTarget. Retrieved 28 March, 2017, from: https://searchstorage.techtarget.com/definition/cache-memory

Silvano, C. (2014). Introduction to cache memories. Advanced computer architecture. Retrieved 28 March, 2017, from: https://home.deib.polimi.it/silvano/FilePDF/ARC-MULTIMEDIA/Lesson_8A_New_Cache_ENGLISH_V4_COMO.pdf

UMD. (2001). Introduction to cache memory. Retrieved 28 March, 2017, from: https://www.cs.umd.edu/class/fall2001/cmsc411/proj01/cache/cache.pdf
