A Brief Introduction
You can define distributed computing in many different ways. Various vendors have created and marketed distributed computing systems for years, developing numerous initiatives and architectures to permit distributed processing of data and objects across a network of connected systems. The result is a virtual environment in which the idle CPU cycles and storage space of tens, hundreds, or thousands of networked systems can be harnessed to work together on a particularly processing-intensive problem. The growth of such processing models has been limited, however, by a lack of compelling applications, by bandwidth bottlenecks, and by significant security, management, and standardization challenges. A number of new vendors have appeared to take advantage of the promising market, and the entry of major players such as Intel, Microsoft, Sun, and Compaq has validated the importance of the concept.1 Meanwhile, an innovative worldwide distributed computing project whose goal is to find intelligent life in the universe, SETI@home, has captured the imaginations, and the desktop processing cycles, of millions of users.
How It Works
The most powerful computer in the world, according to a recent ranking, is a machine called Janus, which has 9,216 Pentium Pro processors.2 That's a lot of Pentiums, but it's a pretty small number in comparison with the 20 million or more processors attached to the global Internet. If you have a big problem to solve, recruiting a few percent of the CPUs on the Net would gain you more raw power than any supercomputer on earth. The rise of cooperative-computing projects on the Internet is both a technical and a social phenomenon. On the technical side, the key requirement is to slice a problem into thousands of tiny pieces that can be solved independently, and then to reassemble the answers. The social or logistical challenge is to find all those widely dispersed computers and persuade their owners to make them available.
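The slice-and-reassemble pattern can be made concrete with a toy stand-in for a real workload. In this sketch the "problem" is counting the primes below a limit; the function names are illustrative only, and a real project would ship each work unit to a different machine rather than loop over them locally:

```python
def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def make_work_units(limit, unit_size):
    """Slice the range [2, limit) into independent work units."""
    return [(lo, min(lo + unit_size, limit)) for lo in range(2, limit, unit_size)]

def solve_unit(unit):
    """Each unit can be solved on any machine, in any order."""
    lo, hi = unit
    return sum(1 for n in range(lo, hi) if is_prime(n))

def reassemble(partial_results):
    return sum(partial_results)

units = make_work_units(1000, 100)
answer = reassemble(solve_unit(u) for u in units)
print(answer)  # 168 primes below 1000
```

The essential property is that `solve_unit` needs no communication with any other unit, which is exactly what makes a problem suitable for this model.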
In most cases today, a distributed computing architecture consists of very lightweight software agents installed on a number of client systems, and one or more dedicated distributed computing management servers. There may also be requesting clients with software that allows them to submit jobs along with lists of their required resources. An agent running on a processing client detects when the system is idle, notifies the management server that the system is available for processing, and usually requests an application package.
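The agent/server interaction described above can be sketched as an in-process simulation. All class and method names here are illustrative, not taken from any real product:

```python
class ManagementServer:
    def __init__(self):
        self.available_agents = []
        self.pending_jobs = []   # (application_package, data_block) pairs

    def submit_job(self, package, data):
        """A requesting client submits a job along with its input data."""
        self.pending_jobs.append((package, data))

    def register_idle(self, agent):
        """An agent reports that its host system is idle."""
        self.available_agents.append(agent)

    def dispatch(self):
        """Hand out application packages to whichever agents are idle."""
        outcomes = []
        while self.pending_jobs and self.available_agents:
            package, data = self.pending_jobs.pop(0)
            agent = self.available_agents.pop(0)
            outcomes.append(agent.run(package, data))
        return outcomes

class Agent:
    def run(self, package, data):
        return package(data)   # execute the downloaded application package

server = ManagementServer()
server.submit_job(lambda block: sum(block), [1, 2, 3, 4])
server.register_idle(Agent())       # agent detects idle CPU, notifies server
results = server.dispatch()
print(results)                      # [10]
```

In a real deployment the agent and server would of course live on different machines and talk over the network, and the "application package" would be downloaded code rather than a local function.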
"Distributed Computing: What Is It and Can It Be Useful." 123HelpMe.com. 18 Jul 2019
What Are Some Applications?
The following scenarios are examples of some types of application tasks that can be set up to take advantage of distributed computing.
Prime Numbers. For two decades, the weapon of choice in this elite sport was a supercomputer, preferably the latest model from Cray Research. Beginning in 1979, the prime-number pursuit was dominated by David Slowinski and his colleagues at Cray (now a division of Silicon Graphics).4 The Cray team had to keep topping their own records, because they had so little competition elsewhere. In 1996, however, a new largest prime was found with an unpretentious desktop PC. The discoverer was a member of an Internet consortium that attacked the problem collectively with a few thousand computers. In August 1997 another member of the same group found a still larger prime, which stands as the current record. Slowinski, being a good sport, offered one of his supercomputers to verify the discoveries.5
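The record primes in question are Mersenne primes, numbers of the form 2^p - 1, and the standard way to test them is the Lucas-Lehmer test. A minimal sketch follows; the production searches use FFT-based multiplication to make the squaring step fast for million-digit numbers, which this toy version omits:

```python
def lucas_lehmer(p):
    """Return True if the Mersenne number 2**p - 1 is prime (p an odd prime)."""
    m = (1 << p) - 1          # the Mersenne number under test
    s = 4
    for _ in range(p - 2):    # iterate s -> s^2 - 2 (mod 2^p - 1)
        s = (s * s - 2) % m
    return s == 0

# Small exponents suffice to demonstrate the test:
mersenne = [p for p in (3, 5, 7, 11, 13, 17, 19) if lucas_lehmer(p)]
print(mersenne)  # [3, 5, 7, 13, 17, 19] -- 2^11 - 1 = 2047 = 23 * 89 is composite
```

Each candidate exponent is independent of every other, so a coordinator can hand a different p to each volunteer machine, which is what makes the search such a natural fit for distributed computing.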
Golomb rulers. Imagine a six-inch ruler with marks inscribed not at the usual equal intervals but at 0, 1, 4 and 6 inches. Taking all possible pairs of marks, you can measure six distinct distances: 1, 2, 3, 4, 5 and 6 inches. A ruler on which no two pairs of marks measure the same distance is called a Golomb ruler, after Solomon W. Golomb of the University of Southern California, who described the concept 25 years ago.6 The 0-1-4-6 example is a perfect Golomb ruler, in that all integer intervals from 1 to the length of the ruler are represented. On rulers with more than four marks, perfection is not possible; the best you can do is an optimal Golomb ruler, which for a given number of marks is the shortest ruler on which no intervals are duplicated.7
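Checking the Golomb property is simple; it is finding short rulers that is expensive. This sketch verifies the 0-1-4-6 example from the text and shows how adding a mark breaks the property:

```python
from itertools import combinations

def is_golomb(marks):
    """True if no two pairs of marks measure the same distance."""
    distances = [b - a for a, b in combinations(sorted(marks), 2)]
    return len(distances) == len(set(distances))

def is_perfect(marks):
    """Perfect: every integer distance from 1 to the ruler's length occurs."""
    length = max(marks) - min(marks)
    distances = {b - a for a, b in combinations(sorted(marks), 2)}
    return is_golomb(marks) and distances == set(range(1, length + 1))

print(is_golomb([0, 1, 4, 6]), is_perfect([0, 1, 4, 6]))   # True True
print(is_golomb([0, 1, 2, 4, 6]))   # False: 2-0 and 6-4 both measure 2
```

The distributed searches for optimal rulers work by carving the space of candidate mark placements into independent chunks, each cheap to verify with a check like the one above.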
Aliens. If the prospect of finding a bigger prime number or a smaller Golomb ruler won't induce you to pledge your CPU, perhaps the chance to discover an extragalactic civilization might. That's where SETI@home comes in. SETI@home is the project of Dan Werthimer of the University of California at Berkeley and several colleagues.8 Their plans are ambitious; they seek a mass audience. The client software they envision would run as a screen saver, starting up automatically whenever a machine is left idle. As the program churned away on the data analysis, it would also display a series of images related to the project, such as a representation of the data currently being examined or a map showing the progress of the sky survey recorded with the radio telescope of the Arecibo Observatory in Puerto Rico.9
A search for extraterrestrial intelligence has been going on at Arecibo for almost two decades. The telescope slowly scans the sky, detecting emissions over a broad range of radio wavelengths. A special-purpose signal-processing computer applies a Fourier transform to the data to pick out narrow-bandwidth signals, which could be the alien equivalent of "I Love Lucy." The astronomers would like to put the data through a more thorough analysis, but computing capacity is a bottleneck. With enough computers on the job, the Arecibo data could be sliced into finer divisions of bandwidth and frequency. Moreover, the analysis software could check for other kinds of regularities, such as signals that pulsate or that "chirp" through a sequence of frequencies. The task is well suited to Internet computing in that only small blocks of data need to be passed back and forth over the network, but a great deal of computing is needed on each block.
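The narrow-bandwidth search described above comes down to a Fourier transform: a steady carrier that is invisible in the noisy time series stands out as a sharp spectral peak. The numbers here are synthetic stand-ins; real SETI@home work units carry digitized Arecibo data:

```python
import numpy as np

rng = np.random.default_rng(0)
sample_rate = 1024                            # samples per second (synthetic)
t = np.arange(sample_rate) / sample_rate
carrier = 0.5 * np.sin(2 * np.pi * 200 * t)   # weak 200 Hz "transmission"
noise = rng.normal(0, 1, sample_rate)         # buries the carrier in the time domain

spectrum = np.abs(np.fft.rfft(carrier + noise))
freqs = np.fft.rfftfreq(sample_rate, d=1 / sample_rate)
peak_bin = int(np.argmax(spectrum[1:])) + 1   # skip the DC bin
print(freqs[peak_bin])                        # 200.0
```

Because each block of recorded sky data can be transformed and scanned independently, only small work units need to cross the network while the heavy computation stays on the volunteer's machine, exactly the trade-off the text identifies.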
Health Sciences and Genomes. Many of today's distributed system vendors are aiming squarely at the life sciences market, which has a sudden need for massive computing power. As a result of sequencing the human genome, the number of identifiable biological targets for today's drugs is expected to increase from about 500 to about 10,000. Pharmaceutical firms have repositories of millions of different molecules and compounds, some of which may have characteristics that make them appropriate for inhibiting newly found proteins. The process of matching all these "ligands" to their appropriate targets is an ideal task for distributed computing, and the quicker it's done, the quicker and greater the benefits will be. Another related application is the recent trend of generating new types of drugs solely on computers.10
Complex financial modeling, weather forecasting, and geophysical exploration are also on these vendors' radar screens, as are car-crash and other complex simulations. The possibilities seem limited only by the minds that can conceive of the applications.
Now the big question: how do we pay for this? One proposal is a commodity market in CPU cycles. If you have 100 computers with nothing to do nights and weekends, you offer the spare capacity at an asking price expressed in millicents per trillion instructions, or dollars per Pentium-year, or some such fabricated unit. Meanwhile, someone with a big batch of numbers to crunch enters a bid for a stated quantity of computation, measured in the same units. An automated clearinghouse matches up buyers and sellers.
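The clearinghouse idea can be sketched as a tiny order-matching engine. The price unit is the invented one from the text, and the matching rule here, trading at the bid price whenever the best bid meets the best ask, is just one simple choice among many:

```python
import heapq

class Clearinghouse:
    def __init__(self):
        self.asks = []   # min-heap of (price, quantity) from sellers
        self.bids = []   # max-heap of (-price, quantity) from buyers

    def offer(self, price, quantity):
        heapq.heappush(self.asks, (price, quantity))

    def bid(self, price, quantity):
        heapq.heappush(self.bids, (-price, quantity))

    def match(self):
        """Pair the highest bid with the lowest ask until prices no longer cross."""
        trades = []
        while self.bids and self.asks and -self.bids[0][0] >= self.asks[0][0]:
            neg_bid, bid_qty = heapq.heappop(self.bids)
            ask_price, ask_qty = heapq.heappop(self.asks)
            qty = min(bid_qty, ask_qty)
            trades.append((-neg_bid, qty))          # trade at the bid price
            if bid_qty > qty:                        # push back any remainder
                heapq.heappush(self.bids, (neg_bid, bid_qty - qty))
            if ask_qty > qty:
                heapq.heappush(self.asks, (ask_price, ask_qty - qty))
        return trades

market = Clearinghouse()
market.offer(price=5, quantity=100)   # idle lab machines, nights only
market.offer(price=9, quantity=50)
market.bid(price=7, quantity=120)     # a render farm's overnight batch
trades = market.match()
print(trades)                         # [(7, 100)] -- the 9-unit ask is priced out
```

Real exchanges refine this in many ways (price-time priority, partial fills, settlement), but the core loop of crossing the best bid against the best ask is the same.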
It could be fun to watch the fluctuations of such a market. When some Hollywood studio needs half the processors in the galaxy to render scenes for the next Star Wars saga, prices shoot up. Over academic holidays, excess capacity brings on a seasonal slump. The market is likely to be volatile, because CPU cycles are like electricity or fresh asparagus: you can't stockpile them for later use. Trading in CPU cycles is not a new idea. As early as 1968, Ivan Sutherland wrote of a "futures market in computer time," although his market consisted of only a whiteboard and colored pens.11
In the end, given a free flow of computations through all those 100 million nodes, I hope that we as a civilization can find a better use for this machine than Janus's primary function: simulating the explosion of nuclear weapons. I can foresee pharmaceutical companies developing anti-cancer drugs; medical science developing diagnostic procedures for debilitating diseases, then finding cures; meteorological and seismological agencies predicting and planning for natural disasters; and global economic analysis that helps prevent food, grain, and medicine shortages in less-developed countries, which could in turn alleviate the need for the current deadly civil wars throughout the world. Granted, these are very altruistic goals, and perhaps not events that will all happen in my lifetime, but I have to believe that my idle CPU cycles can help in some small way. I have been running the SETI@home software for a while with no system problems or corruptions. Have I made any world-changing discoveries or found the cure for any ailment? No. But at least I can say "I'm trying."
1. Distributed.net website. http://www.distributed.net/
2. Distributed Computing. Springer-Verlag Heidelberg. ISSN 0178-2770 (paper), 1432-0452 (online). http://springerlink.metapress.com/
3. Grid.org website. http://www.grid.org/home.htm
4. Shoch, John F., and Jon A. Hupp. 1982. The "Worm" programs: early experience with a distributed computation. Communications of the ACM 25:172-180. http://doi.acm.org/10.1145/358453.358455
6. Golomb, Solomon W. 1972. How to number a graph. In Graph Theory and Computing, ed. Ronald C. Read. New York: Academic Press. http://commsci.usc.edu/faculty/golomb.html
7. Dollas, Apostolos, William T. Rankin, and David McCracken. 1998. A new algorithm for Golomb ruler derivation and proof of the 19 mark ruler. IEEE Transactions on Information Theory 44:379-382. http://www.ieee.org/portal/index.jsp
8. SETI@home website. http://setiathome.ssl.berkeley.edu
10. Parabon Computation, Inc. website. http://www.parabon.com/
11. Sutherland, Ivan E. 1968. A futures market in computer time. Communications of the ACM 11:449-451. http://doi.acm.org/10.1145/363347.363396