Michael Lesk adopts Shakespeare’s conceit of the seven ages of man, running from infancy to senility, to predict the evolution of Information Retrieval from 1945 to 2010. In this paper, Lesk compares two approaches to information retrieval. The first approach is intellectual analysis by humans and machines — artificial intelligence — introduced by Vannevar Bush. The second approach is simple exhaustive processing of statistical detail, introduced by Warren Weaver. The paper was written in 1995, when the Internet and World Wide Web were still in their infancy. I have identified three important elements of the evolution of information retrieval to elaborate on in this essay.
Important Element 1: The statistical detail vs the artificial intelligence approaches
The first IR systems were built using indexes and concordances. When the first large-scale information systems were developed, computers could search indexes much better than humans could, which encouraged more detailed indexing. However, indexing could also become too expensive and time-consuming. The idea of free-text searching was therefore proposed, which eliminates the need for manual indexing. Objectors pointed out that the words an author happens to use might not be the correct label for a given subject; one solution was official (controlled) vocabularies. Recall and precision also emerged as measures for evaluating information retrieval systems, and experiments showed that free-text indexing was as effective as manual indexing and much cheaper. New information retrieval techniques such as relevance feedback and multilingual retrieval were invented. The 1960s also saw the start of research into natural-language question answering, and researchers began building systems ...
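The recall and precision measures mentioned above are simple set calculations over retrieved and relevant documents. The following is a minimal sketch; the document IDs are invented for illustration:

```python
def precision_recall(retrieved, relevant):
    """Compute precision and recall for a single query.

    precision = |retrieved AND relevant| / |retrieved|
    recall    = |retrieved AND relevant| / |relevant|
    """
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical example: 3 of 4 retrieved documents are relevant,
# out of 6 relevant documents overall.
p, r = precision_recall({"d1", "d2", "d3", "d4"},
                        {"d1", "d2", "d3", "d5", "d6", "d7"})
print(p, r)  # 0.75 0.5
```

A system can trade one measure against the other: retrieving everything maximizes recall but destroys precision, which is why both are reported together.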
... middle of paper ...
...his.
Lesk further pointed out some potential problems such as illegal copying, copyright law itself, the difficulty people face in uploading, legal liability, and public policy debates restricting technological development and availability. These remain challenges for information systems today and will probably take some time to resolve.
In conclusion, the paper made a good contribution to the field by describing the history of information retrieval systems from 1945 to 1996, with abundant information on the various technologies developed, the information retrieval systems built, and how they affected research in information retrieval. I think artificial intelligence will start to play a leading role in information retrieval in the coming years, and one day we will have true question-answering information retrieval at the fingertips of every Internet user.
Nicholas Carr, who writes regularly on issues such as technology and culture, wrote the article “Is Google Making Us Stupid?” (743). In it, he discusses the way that not only Google but the advancement of technology generally, especially computers and search engines, is slowly altering our brains, along with the way we take in information. The process started back in the 1970s and 1980s, when technology gained a foothold in society; for example, “television was our medium of choice,” says Carr (747). From then on, the way we process information has been in slow decline. Throughout this essay, Carr backs up the reasons he feels this way by using different types of figurative language, deductive reasoning, and logical fallacies that can strengthen or even weaken his argument.
An ontology contains a set of concepts and the relationships between those concepts, and it can be applied to information retrieval to interpret user queries.
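One common way to apply an ontology to user queries is query expansion: each query term is augmented with its synonyms and narrower concepts from the ontology. The sketch below uses a tiny invented concept graph (the concepts and relations are made up for illustration, not drawn from any real ontology):

```python
# Toy ontology: each concept maps to its synonyms and narrower concepts.
# All entries here are invented for illustration.
ONTOLOGY = {
    "vehicle": {"synonyms": ["conveyance"], "narrower": ["car", "bicycle"]},
    "car": {"synonyms": ["automobile"], "narrower": []},
    "bicycle": {"synonyms": ["bike"], "narrower": []},
}

def expand_query(terms):
    """Expand each query term with its synonyms and narrower concepts."""
    expanded = []
    for term in terms:
        expanded.append(term)
        entry = ONTOLOGY.get(term)
        if entry:
            expanded.extend(entry["synonyms"])
            expanded.extend(entry["narrower"])
    return expanded

print(expand_query(["vehicle"]))  # ['vehicle', 'conveyance', 'car', 'bicycle']
```

Searching with the expanded term list lets a query for “vehicle” match documents that only mention “automobile” or “bike,” at some cost in precision.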
In 1998, when Google created its search engine, very little data was available about search engines. One of the first search engines was the World Wide Web Worm, which was not released until 1994. In order to research and create a more dynamic search engine, Google’s creators had very little information to go on and encountered many challenges. It is a challenge to create a high-quality search engine, as search engines need to crawl and then index millions of pages of information on the web. An additional feature of Google’s large-scale search engine is that it uses hypertext information to refine search results. The challenge is to scale to the vast amount of data available on the web. The main goal is to improve the quality of search results. The second goal is to make all the data on the web available for academic research. One of the key features of Google that sets it apart from other web search engines is that it is built to scale well to large data sets. It plans to leverage the increase in technological advances and the decrease in hardware and storage costs to create its robust system.
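The hypertext-based refinement described above is commonly associated with PageRank, which scores a page by the scores of the pages linking to it. The following is a sketch of the textbook power-iteration form over a tiny invented link graph, not Google’s production algorithm:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Power-iteration PageRank over a dict {page: [outgoing links]}."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Every page receives a baseline "teleport" share...
        new_rank = {p: (1.0 - damping) / n for p in pages}
        # ...plus a share of the rank of each page that links to it.
        for page, outgoing in links.items():
            if not outgoing:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Invented three-page web: A and B both link to C, C links back to A.
ranks = pagerank({"A": ["C"], "B": ["C"], "C": ["A"]})
```

Because C receives links from two pages while B receives none, C ends up with the highest score, illustrating how link structure refines ranking independently of page text.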
One of the wonderful things about the internet is how it makes life much easier when information can be found from the convenience of home instead of going to a library and making a day of it. This is especially true because the internet offers updated information as soon as it happens, whereas a library may only update a few things every week or month. It is truly remarkable how much information can be found, and because of this it is not surprising that more and more people are using the internet instead of going to a library or using another service. However, without organization and direction, information is useless. Search engines offer this stepping stone by storing all the data in a manner that is searchable. Two of the major search engines are Google.com and Msn.com. Both offer great search engines and services, but they have different styles and appeal to different audiences looking for different things.
To digest natural language implies understanding, a function that is uniquely human. To understand something implies having senses that interpret the world, such as emotions and awareness of our own physical experiences. When someone tells a story, we rely upon previous experience for interpretation. We form a reaction: our heart rate may change, we may start sweating, we may relax or tense, and we feel certain emotions such as fear. Upon getting new information, a person’s attitude may change, or the way they think may change. A computer made of metal simply does not have the faculties to experience the world as we do. It can be programmed to respond in a way that mimics a human response, but it cannot be considered to really understand what it is doing. Recall the story of Helen Keller, who finally began to learn a language when she was given immediate experiential feedback: the teacher would pour water on her and then make the sign language for the word in her hand. The founder of the Toastmasters organization started it on the premise that people learn in moments of pleasure, and structured the organization to provide just that. A computer does not have the senses to make such understanding of these words and experiences possible.
According to Lynch (2008), creating a web-based search engine from scratch was an ambitious objective, given the software required and the scale of the web to be indexed. The process of developing the system was costly, but Doug Cutting and Mike Cafarella believed it was worth the cost. The success of this project ultimately helped democratize search-engine algorithms. Nutch was started in 2002 as a working crawler, and it gave rise to the emergence of various search engines.
In the year 2000 there were an estimated 2.5 billion web pages on the internet, with a growth rate of 7.3 million per day. Linear algebra is used in the organization and sorting of these web pages when storing them in an internet search database. The vector space model is used to enhance search results by representing a search as two vectors, the document vector and the query vector. Each dimension in a vector corresponds to a different term; if the term occurs in the document, its value in the vector is non-zero. Several different ways of computing the term weights have been developed, one of which is term frequency–inverse document frequency (tf-idf). The term frequency portion of tf-idf refers to the frequency of the term within the document. The inverse document frequency is the logarithm of the total number of documents divided by the number of documents in which the term appears. The tf-idf weight simply multiplies these two values. Using the cosine similarity between the document and query vectors allows the computer to group data together or output data that is similar. The major advantages of this model over the standard Boolean model are that it allows ranking of documents according to their relevance and that it allows partial matching. There are a large number of variations of ...
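The tf-idf weighting and cosine ranking described above can be sketched in a few lines. This is a minimal illustration over an invented three-document corpus, not a production scoring function (real systems add smoothing, normalization variants, and sparse index structures):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build tf-idf vectors (sparse dicts) for tokenized documents."""
    n = len(docs)
    df = Counter()                      # document frequency per term
    for doc in docs:
        df.update(set(doc))
    idf = {t: math.log(n / df[t]) for t in df}
    vectors = []
    for doc in docs:
        tf = Counter(doc)               # raw term frequency
        vectors.append({t: tf[t] * idf[t] for t in tf})
    return vectors, idf

def cosine(u, v):
    """Cosine similarity between two sparse vectors (dicts)."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Invented corpus and query, pre-tokenized for simplicity.
docs = [["web", "search", "engine"], ["linear", "algebra"], ["web", "pages"]]
vectors, idf = tfidf_vectors(docs)
query = ["web", "search"]
qvec = {t: idf.get(t, 0.0) for t in set(query)}
scores = [cosine(qvec, v) for v in vectors]
```

Ranking the documents by `scores` puts the first document on top (it shares both query terms) and gives the second document a score of zero (no shared terms) — exactly the graded, partial matching the Boolean model cannot provide.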
In today’s fast-paced technological world, search engines have become a vastly popular part of people’s daily routines. A search engine is an information retrieval system that allows someone to search the...
Before the 1990s, search engines were nonexistent. The first tool used for searching the Internet was Archie, which was created in 1990 by a student named Alan Emtage, who attended McGill University in Montreal. Searches were achieved by downloading directory listings of files on public FTP (File Transfer Protocol) sites. This created a database of files for searching, but it could not search by file contents.
When you are in a hurry, which search engine do you choose in order to get the best result? Maybe you just use the one that is familiar to you. Google, Yahoo, and Msn are the three most common search engines that we use in daily life. Although Yahoo and Msn are not in the top five of search engines (based on Searchengineswatch.com, Feb 2003), we still use them because we are used to them. In my personal experience, I never realized why I use those search engines either; I use them because the first time I searched the web, a friend told me to use “google.com,” and it became my habit. In this paper, I will discuss how search engines work, and the similarities and differences among those three search engines. I hope that, based on the information I give, you will choose the right search engine to maximize your results and minimize your time.
Information Retrieval (IR) is the representation, storage, retrieval, and organization of information. The information should be easily accessible, since users are more interested in information that is easy to access. Information retrieval is the activity of searching for documents, for information within documents, and for metadata about documents, as well as searching relational databases and the World Wide Web. According to Shing Ping Tucker (2008), e-commerce is a rapidly growing segment of the internet.
Since the introduction of the World Wide Web in 1991, people have been using search engines to obtain their information (Berners-Lee). These sources of information have greatly evolved over the past two decades and are continuing to become more efficient. Even though almost anyone with a computer uses a search engine, many do not know how one works. For starters, there are two main types of web search: crawler-based and human-powered.
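A crawler-based engine follows links from page to page and indexes the text it finds. The sketch below crawls an invented in-memory “web” (a dict of page → (text, links)) rather than real URLs, and builds a simple inverted index — the core data structure behind crawler-based search:

```python
from collections import deque

# An invented in-memory "web": page -> (text, outgoing links).
PAGES = {
    "home": ("welcome to the search demo", ["about", "docs"]),
    "about": ("about this demo site", ["home"]),
    "docs": ("search engine documentation", []),
}

def crawl(start):
    """Breadth-first crawl from `start`, building an inverted index."""
    index = {}              # word -> set of pages containing it
    seen = {start}
    queue = deque([start])
    while queue:
        page = queue.popleft()
        text, links = PAGES[page]
        for word in text.split():
            index.setdefault(word, set()).add(page)
        for link in links:   # follow links not yet visited
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return index

index = crawl("home")
print(sorted(index["search"]))  # pages mentioning "search"
```

Answering a query then reduces to looking up each query word in the index and intersecting the resulting page sets, which is why crawler-based engines can respond in milliseconds despite indexing billions of pages.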
Internet commerce is one of the fastest-growing industries today. With its wide range of capabilities, the web makes it easier and more cost-efficient for businesses to make transactions with other businesses. One factor that allows businesses to find each other is search engines; search engines are part of the reason the web is growing so rapidly.
Much like fast food or entertainment, our modern world has access to tools for obtaining information nearly instantly: search engines. But as with any service in today’s free market, there must be competition between two or more companies offering similar assets; in this case, Google’s search engine and Microsoft’s Bing. How do they compare to each other? Which delivers better results? Are there any distinguishing factors for one not common to the other? These questions are among many in the comparison between the two search engines. By analyzing and weighing each option for Internet searches, one will be able to determine which medium has greater value to the online community.
The Internet has made access to information easier. Information is stored efficiently and organized on the Internet. For example, instead of going to our local library, we can use Internet search engines. Simply by doing a search, we get thousands of results. The search engines use a ranking system to help us retrieve the most pertinent results in top order. Just a simple click and we have our information. Therefore, we can learn about anything, immediately. In a matter of moments, we can become an expert.