Natural Language and Computer Programs

Anyone who has tried to explain the workings of a computer, or even a VCR, to an older relative has a very good idea of why natural language operation is a goal of computer science researchers. Simply put, most people have no desire to learn a computer language in order to use their electronic devices. In order to allow people to effectively use computer-based systems, those systems must be programmed to understand natural language – the language a regular person speaks – and respond in kind.

Most natural language processing systems break that task into two parts: comprehension and production. Some systems are programmed to take commands in English and so have comprehension as their goal; the search engine ask.com, for example, lets the user type in a whole interrogative sentence instead of a few search terms. Others, particularly those designed to pass the test proposed by Alan Turing, in which a computer must pass as a human in conversation with an interrogator, aim simply to produce realistic responses, sometimes without breaking down the input at all.

For simplicity, most natural language programs operate through typed input and printed or on-screen output; speech recognition and production are just complications at this point and can always be integrated later by having the program convert speech to text and vice versa. Working only with typed input avoids a whole host of obstacles to understanding. People, when speaking, have accents, slur words, change sentence structure mid-thought, stick in “like” anywhere they want, and do many other things that make everyday speech much less straightforward than the slightly more formal process of typing. Even typed, however, an English sentence is not an easy thing to parse.

An example of this difficulty can be seen in the sentence “I left a job for my wife”. Out of context, it is impossible to determine which of two possible meanings is the correct one. Did the speaker leave a job (i.e. quit) because of his wife, or did he leave a job (i.e. let one remain) for his wife? A computer must be able to refer to the context around such a sentence in order to extract the meaning from it.

How a program would analyze that sentence takes its cues from linguistics, the study of how language is acquired and how it functions. Linguistics is believed to hold one of the keys to natural language processing, because if one can understand how language works, one can more easily duplicate it in a computer. Similarly, artificial intelligence researchers explore neuroscience to better understand what they are trying to duplicate. Linguistic theory, however, only relatively recently caught up with computing, in the work of Noam Chomsky. Chomsky’s theories are promising to natural language programmers because they suggest that at least some fundamental part of language is internal and universal.

Further, according to Chomsky, every speaker of a language has a “computational language procedure” and a lexicon, or personal dictionary, which stores information about the meanings of words and how they are affected by the operations of the computational procedure (Broderick). This is very promising because handling a database like the lexicon is something computers are very good at, especially when it is governed by a set of rules like the computational operations.

While many linguists are now lashing out against Chomsky and his view of language, suggesting that language may not have innate rules and thus may require a human brain to process, natural language programmers are still hard at work on creating a program that can accurately analyze sentences and produce well-formed sentences of its own in response. So far, they have achieved a basic level of sentence parsing.

A basic natural language processor might first try to understand the syntax of a sentence given to it. The program can be outfitted with a set of grammatical forms to look for as it reads through a sentence. In the sample “I left a job for my wife,” the parsing program would recognize “I” as a possible subject. It would then expect one of the grammatical forms that could follow directly after a subject, such as an adjectival clause (which would have to be set off by commas, as in “I, hungry and tired, ate much”), an adverb, or the verb itself, among others. Finding the verb “left,” the program would next expect one of a smaller number of forms, since the sentence’s structure is becoming stricter and more developed. A noun would be likely, to act as the object of the verb, or an adverb, to modify the verb. “A job” clearly fits as the object of “left,” and then the word “for” introduces a prepositional phrase. This is where the meaning becomes ambiguous and a simple sentence parser breaks down: because “for” has multiple meanings, the parser cannot tell whether the prepositional phrase is causal in relation to the verb or expresses an indirect object.
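
To make this walk-through concrete, here is a minimal sketch of such a left-to-right parser in Python. The lexicon and the table of expected forms are invented for this one sample sentence; a real parser would need a far richer grammar:

    # Toy lexicon for "I left a job for my wife".
    LEXICON = {
        "i": "PRONOUN", "left": "VERB", "a": "DETERMINER", "job": "NOUN",
        "for": "PREPOSITION", "my": "POSSESSIVE", "wife": "NOUN",
    }

    # After seeing a form on the left, which forms may come next?
    EXPECTATIONS = {
        "START":       {"PRONOUN", "DETERMINER"},
        "PRONOUN":     {"VERB", "ADVERB"},
        "VERB":        {"DETERMINER", "NOUN", "ADVERB"},
        "DETERMINER":  {"NOUN"},
        "NOUN":        {"PREPOSITION"},
        "PREPOSITION": {"DETERMINER", "POSSESSIVE"},
        "POSSESSIVE":  {"NOUN"},
    }

    def parse(sentence):
        state = "START"
        for word in sentence.lower().split():
            category = LEXICON[word]
            if category not in EXPECTATIONS[state]:
                raise ValueError(f"unexpected {category} ({word!r}) after {state}")
            print(f"{word!r} -> {category}")
            state = category
        # The syntax checks out, but the parse alone cannot say whether
        # "for my wife" is causal or marks an indirect object.

    parse("I left a job for my wife")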

Another area in which syntactic parsers run into trouble is participles that look exactly like active verbs in the same tense. An example from McTear is "The teachers taught by the Berlioz method passed the test", in which "the teachers taught" and "the teachers taught by the Berlioz method" could both be sentences on their own with completely different meanings. That type of sentence, in which a language processor assumes a structure and then discovers it to be false, is called a "garden-path" sentence (McTear).
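
A toy illustration of that commit-and-backtrack behavior, under the (much simplified) assumption that the parser first treats the earliest finite verb as the main verb:

    FINITE_VERBS = {"taught", "passed"}

    def find_main_verb(tokens):
        positions = [i for i, w in enumerate(tokens) if w in FINITE_VERBS]
        guess = positions[0]              # eager commitment: "taught"
        if guess != positions[-1]:
            # A later finite verb shows the commitment was wrong, so
            # "taught by the Berlioz method" must be a reduced relative
            # clause modifying the subject; reanalyze.
            guess = positions[-1]         # the real main verb: "passed"
        return guess

    tokens = "the teachers taught by the Berlioz method passed the test".split()
    main = find_main_verb(tokens)
    print("subject:", " ".join(tokens[:main]))     # the teachers taught by the Berlioz method
    print("main verb:", tokens[main])              # passed
    print("object:", " ".join(tokens[main + 1:]))  # the test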

Natural language processors with very fixed goals can easily surmount problems such as these. For example, an interface for a program that queries a database will probably not be exposed to complicated sentences with particularly ambiguous meanings, and in fact will most likely only face queries in a particular form. Such an interface would not achieve the goals of the Turing test, and its programmers would have no intention of making it do so. It would have only slightly more capability than a template input, with spaces for "who," "what," and "where" (or whatever the query would be for), since it would simply strip those out of the sentence anyway. When given an input it did not understand, such a program could believably get away with stating that it did not understand and perhaps explaining what input template it expects.
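
A hedged sketch of such a fixed-goal interface; the template, field names, and fallback message here are invented for illustration:

    import re

    # Accept only one query template and strip out its slots.
    TEMPLATE = re.compile(
        r"^who (?:is|was) the (?P<role>\w+) of (?P<subject>[\w ]+)\?$",
        re.IGNORECASE)

    def handle(query):
        match = TEMPLATE.match(query.strip())
        if match is None:
            # Believable fallback: admit the limits, restate the template.
            return "I did not understand. Try: Who is the <role> of <thing>?"
        role, subject = match.group("role"), match.group("subject")
        return f"Looking up the {role} of {subject}..."  # database query here

    print(handle("Who is the director of Casablanca?"))
    print(handle("Tell me about Casablanca"))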

At the moment, programs that attempt the Turing test mostly aim at the second goal mentioned earlier, that of responding believably. A whole host of such programs exist. Early ones, such as ELIZA, were very simple. ELIZA was supposed to be a non-directive therapist, and as such, "her" replies consisted only of questions and neutral statements. Given a sentence containing the word "mother" or "father", ELIZA would prompt, "Tell me more about your family". Another tool ELIZA possessed was simply to swap pronouns and respond with a question, so that asking it "are you listening to me?" would return "are I listening to you? Good question." The holes in this system are obvious: every time ELIZA was prompted with "mother" or "father", except in a sentence with a larger structure that prompted a different reply, she would say "tell me more about your family", which would become repetitive quickly. And as the sample sentence shows, simply reversing pronouns does not make for good English, and the trick can be glaringly obvious.
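
ELIZA's two tricks are simple enough to sketch in a few lines of Python (a toy reconstruction of the behavior described above, not ELIZA's actual code):

    # Keyword-triggered stock replies, then naive pronoun reversal.
    KEYWORD_REPLIES = {
        "mother": "Tell me more about your family.",
        "father": "Tell me more about your family.",
    }

    PRONOUN_SWAPS = {"you": "I", "i": "you", "me": "you",
                     "my": "your", "your": "my", "am": "are"}

    def eliza(utterance):
        words = utterance.lower().rstrip("?.!").split()
        for word in words:
            if word in KEYWORD_REPLIES:
                return KEYWORD_REPLIES[word]
        # The pronoun-swap trick, which yields the bad English noted above.
        swapped = " ".join(PRONOUN_SWAPS.get(w, w) for w in words)
        return swapped[0].upper() + swapped[1:] + "? Good question."

    print(eliza("My mother is angry with me"))  # Tell me more about your family.
    print(eliza("Are you listening to me?"))    # Are I listening to you? Good question.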

Later chatbots, as programs designed simply to simulate conversation are called, were more sophisticated. ALICE is one of the best known, having twice in a row won the third-place Loebner Prize (the highest any program has ever achieved), awarded to the most human-like of a group of programs attempting to pass a Turing test. ALICE’s programming is based on a markup language called AIML, into which dialogues can be converted, giving ALICE a set of responses to various phrases. Since AIML is a markup language, it has a set of tags that are written interspersed with regular text to define attributes of particular pieces of that text. Some of AIML’s tags let it do interesting and quite complex things with sentences and replies.

The <that> tag serves as a way of referencing the last thing the chatbot said, which helps it determine context. If one were to ask ALICE “Do you love me?” and, after receiving a response, ask “are you sure?”, ALICE could reference what it just said to determine what it is sure about. <that> is one way of setting a topic for the bot to converse in. The other is the <topic> tag, which allows a group of categories, or set inputs and responses, to be grouped together. Recursion can be introduced through the <srai> tag, which has a large number of uses. It can be used to define synonyms, so that instead of programming a separate response for each of “hello”, “howdy”, “hi there”, “hi”, and “what’s up”, the programmer can set one response for “hello” and program all the others to return the “hello” response. This suggests further uses for <srai>, such as linking common misspellings of a word to the response for the correct version, or setting a keyword that, no matter where it appears in the sentence, triggers a specific response (Wallace).
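
For illustration, here is roughly what two of these mechanisms look like in AIML itself. This is a sketch in the style of AIML 1.x; the particular patterns and replies are invented:

    <category>
      <pattern>HOWDY</pattern>
      <!-- <srai> reduces "howdy" to the canonical "hello" input -->
      <template><srai>HELLO</srai></template>
    </category>

    <category>
      <pattern>HELLO</pattern>
      <template>Hi there!</template>
    </category>

    <category>
      <pattern>ARE YOU SURE</pattern>
      <!-- <that> matches the bot's previous reply, supplying context -->
      <that>I LOVE YOU</that>
      <template>Quite sure.</template>
    </category>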

ALICE also has a set of variables that can easily be customized, such as the default name for someone talking to it, or ALICE’s “preferences” in music, food, books, and movies. ALICE's programming can become quite complex, to the extent of mapping out entire common conversations. This may seem like a never-ending task, given the sheer number of possible things that could be said in a conversation, but ALICE’s programmers realized that exhaustive coverage is not necessary. The reason the task does not become completely unwieldy, even though the number of possible sentences and conversations grows exponentially at every step, is a phenomenon that generalizes the Zipf distribution. In a Zipf distribution, given a ranked list of statistically related things, such as city populations, there is a correlation between an item's place on the list and its value as a fraction of the top one. Applied to conversations, after the huge variety of opening conversational gambits there are fewer follow-ups that make sense, and after those, even fewer third steps. There are fewer branches coming out of each successive node, so the choices eventually limit themselves.
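
Stated as a formula (the standard Zipf form, added here for illustration rather than taken from the discussion above): if the possible inputs at a given point in a conversation are ranked by how often they occur, the k-th most common one occurs with frequency roughly

    f(k) ≈ f(1) / k

so the second most common follow-up appears about half as often as the first, the third about a third as often, and the plausible branches thin out quickly.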

By exploiting this phenomenon, ALICE manages to be believable most of the time. It runs into trouble occasionally, as when someone makes a comment referring to something they themselves just said, rather than something ALICE said, and ALICE’s <that> tag refers back to the wrong thing.

JFRED is a competitor to ALICE that is programmed with Java rulesets instead of a markup language. In some ways this simplifies matters, and in others it allows JFRED-based programs greater functionality than AIML ones. One thing unique to JFRED is fuzzy logic. While ALICE can be instructed to select a response randomly from a list, JFRED can have those responses listed and linked to other topics with probabilities, so that, for example, the set of responses to “hello” can be linked to the set of responses to “what’s up?”. JFRED also handles topic links slightly differently from AIML, by defining a set of regular expressions linked to one general expression, instead of giving each individual expression its own link to the general form (Garner).
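
The idea of probability-weighted responses with links between topics can be sketched in Python (an illustration of the concept only; this is not JFRED's actual ruleset syntax):

    import random

    # Each topic maps to weighted responses; a "__topic__" entry is a
    # low-probability link into another topic's response set.
    RESPONSES = {
        "hello": [("Hi there!", 0.6), ("Hello!", 0.3), ("__whats_up__", 0.1)],
        "whats_up": [("Not much. You?", 0.7), ("The usual.", 0.3)],
    }

    def respond(topic):
        replies, weights = zip(*RESPONSES[topic])
        choice = random.choices(replies, weights=weights, k=1)[0]
        if choice.startswith("__"):
            return respond(choice.strip("_"))  # follow the topic link
        return choice

    print(respond("hello"))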

With all of this complicated framework being built up to allow computers to respond believably to any statement, one may wonder if the computer is, in fact, getting any closer to actually understanding what is being said to it. Here is a sample excerpt from a conversation with ALICE about understanding English:

Human: Do you understand English?

ALICE: Of course I understand it. Do you understand it?

Human: I think I do.

ALICE: Why? When?



Clearly, this program leaves something to be desired. However, I feel that the issue of ALICE understanding English is not completely out of the question. Understanding, for humans, comes from being able to associate the words being heard with concepts and symbols, and then to reverse the process. In this way we are able to draw analogies, rephrase things, and analyze them accurately.

When a human hears a word, the brain automatically conjures up every association the person has with it, and then narrows down the possible meanings based on context. For example, the word “apple” brings to mind (below the conscious level) the concepts of red, Adam and Eve, New York, Snow White, the idiom “apple of one’s eye”, and anything else a person may have associated with apples. Computer parsers currently do something similar, checking each part of a sentence against all the possible parts of speech and associations it could have.

Computer programs like ALICE are coming closer to being able to parse sentences the same way humans do, so all they would need is that same database of symbols and experiences. While providing a computer with such a database, or the means to acquire one, is a daunting task, it is not impossible. A large number of malleable links between that database and a lexicon of words would allow the computer to draw accurate analogies, group new experiences with old ones in ways it could describe, and, in short, refer to its “knowledge” in the same way that humans do. That, I believe, is the heart of “understanding”, and thus of creating a computer that can truly operate in natural language.

From the mouth of ALICE herself:

Human: What is the future of natural language processing?

ALICE: You are talking to her.



Works Cited

Broderick, Paul Brohan. Chomsky For Philosophers. <http://www.personal.kent.edu/~pbohanbr/Webpage/New/newintro.html>, vis. Dec. 2002.

Conversation with A.L.I.C.E. <http://www.pandorabots.com/pandora/talk?botid=f5d922d97e345aa1>, vis. Dec. 2002.

Conversations with ELIZA. <http://ecceliza.cjb.net/>.

Garner, Robby Glen. JRL Programming QuickStart. 2001. <http://www.cybermecha.com/jrldocs.txt>, vis. Nov. 2002.

King, Margaret, ed. Parsing Natural Language. New York: Academic Press, 1983.

McTear, Michael. The Articulate Computer. Oxford: Basil Blackwell Ltd., 1987.

"Natural Language Processing Tutorials". <http://www.scism.sbu.ac.uk/inmandw/tutorials/nlp/index.html>, vis. Nov. 2002.

Wallace, Richard S. The Anatomy of A.L.I.C.E. <http://www.alicebot.org/anatomy.doc>.