Exascale Computer Case Study

Main Challenges in Developing Exascale Computers.
Firstly, consider power management: the power budget constrains many aspects of performance, including how the processor itself operates, and it is the main barrier for multicore processors. Reliability and resiliency will be critical at the scale of billion-way concurrency: "silent errors," caused by component failures and manufacturing variability, will affect the results of computations on Exascale computers far more drastically than on today's Petascale computers. In the case of threading, the more servers that participate in a query, the greater the variability in response time; the query is only as fast as its slowest server, and this effect grows worse with bigger machines and larger node counts.
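
To see why the slowest server matters more as the machine grows, consider a minimal Python sketch (not part of the original case study) built on a hypothetical latency model: the query_latency helper, the exponential reply times, and the one-percent straggler probability are illustrative assumptions, not measurements.

import random

random.seed(42)

def query_latency(num_servers, mean_ms=10.0, slow_prob=0.01, slow_ms=200.0):
    # Hypothetical model: most servers reply quickly, but a small fraction
    # straggle; the query completes only when the slowest server has replied.
    latencies = []
    for _ in range(num_servers):
        reply = random.expovariate(1.0 / mean_ms)
        if random.random() < slow_prob:
            reply += slow_ms
        latencies.append(reply)
    return max(latencies)

for n in (10, 100, 1000, 10000):
    runs = [query_latency(n) for _ in range(100)]
    print(f"{n:>6} servers: mean completion time ~{sum(runs) / len(runs):6.1f} ms")

Even though the typical server never changes, the mean completion time climbs with the number of participating servers, because the chance that at least one of them straggles approaches certainty.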

In the case of resilience, getting the correct answer on Exascale systems, in spite of frequent faults, a lack of reproducibility in collective communication, and new mathematical algorithms with limited verification, will be a critical area of investment: getting the wrong answer really fast is of little value to the scientist. Many new memory technologies are emerging, including stacked memory, non-volatile memory, and processor-in-memory, and all of them need to be evaluated for use in an Exascale system. Minimizing data movement to this memory and making it more energy efficient are critical to developing a viable Exascale system, and science requirements for the amount of memory will be a significant driver of overall system cost. Interconnect performance is key to extracting the full computational capability of the system; without a high-performance, energy-efficient interconnect, an Exascale system would be more like the millions of individual computers in a data center than a supercomputer. Programming tools, compilers, debuggers, and performance-enhancement tools will all play a big part in how productive a scientist is when working with an Exascale system, so increasing programming productivity is itself a key challenge.
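
The lack of reproducibility in collective communication mentioned above stems largely from the fact that floating-point addition is not associative: the order in which partial sums are combined varies with process count and network topology, so the same global reduction can return slightly different answers on different runs. The short Python sketch below (not from the source; the value range and the three orderings are chosen only for illustration) shows the effect with an ordinary sum.

import random

random.seed(0)
# 100,000 values with a wide dynamic range, so rounding error is visible.
values = [random.uniform(-1.0, 1.0) * 10.0 ** random.randint(-8, 8)
          for _ in range(100_000)]

forward = sum(values)                 # one combination order
backward = sum(reversed(values))      # another order
shuffled = list(values)
random.shuffle(shuffled)
permuted = sum(shuffled)              # yet another order

print(f"forward : {forward!r}")
print(f"backward: {backward!r}")
print(f"permuted: {permuted!r}")
print("spread  :", max(forward, backward, permuted) - min(forward, backward, permuted))

The three totals typically agree only to a limited number of digits; that run-to-run drift is exactly what complicates verification when reductions span millions of processes.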

Exascale computers will run millions of processors that generate data at a rate of terabytes per second, and it is impossible to store raw data produced at that rate. Methods such as dynamic reduction of data by summarization, subset selection, and more sophisticated dynamic pattern-identification techniques will be necessary to reduce the volume of data. The reduced volume must still be stored at the same rate at which it is generated so that the computation can proceed without interruption, and this requirement presents new challenges for moving data from the supercomputer to local and remote storage systems; data distribution has to be integrated into the data-generation phase. The problem of large-scale data movement becomes even more acute as very large datasets and subsets are shared by large scientific communities, since large amounts of data must then be replicated or moved from the production machines to analysis machines that are sometimes in the wide area. While network technology has greatly improved with the introduction of optical connectivity, the transmission of large volumes of data will still encounter transient failures, and automatic recovery tools will be necessary. Another fundamental requirement is the automatic allocation, use, and release of storage space: replicated data cannot be left in place indefinitely, so the space it occupies must be reclaimed automatically once it is no longer needed.
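
As a rough illustration of summarization and subset selection (again not from the source), the sketch below reduces each chunk of a hypothetical simulation stream to a handful of statistics plus a small random sample before anything would be written to storage; the simulation_chunks generator, the chunk size, and the sample_size parameter are all invented for the example.

import random
import statistics

random.seed(1)

def simulation_chunks(num_steps=5, chunk_size=200_000):
    # Hypothetical stand-in for the raw output stream of a simulation.
    for step in range(num_steps):
        yield step, [random.gauss(0.0, 1.0) for _ in range(chunk_size)]

def reduce_chunk(step, data, sample_size=100):
    # Summarization plus subset selection: keep a few statistics and a small
    # random sample instead of every raw value.
    return {
        "step": step,
        "count": len(data),
        "min": min(data),
        "max": max(data),
        "mean": statistics.fmean(data),
        "stdev": statistics.stdev(data),
        "sample": random.sample(data, sample_size),
    }

for step, chunk in simulation_chunks():
    reduced = reduce_chunk(step, chunk)
    kept = 6 + len(reduced["sample"])  # six scalar fields plus the sample
    print(f"step {step}: kept ~{kept} values out of {reduced['count']} "
          f"(~{reduced['count'] // kept}x reduction)")

In a real workflow it is records like these, rather than the raw stream, that would be replicated to local and remote storage systems, which is what keeps the required storage and transfer rates manageable.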
