With the widespread use of the internet, networked applications have expanded to provide many services that only a few years ago would have seemed impractical or futuristic. Among them are applications that let you find your perfect date, file your taxes online, rent movies, or even send away gifts you don't like. With the proliferation of the internet, demand has risen for programs that use information in more complicated and advanced ways. Commercial entities have come forward to fulfill this demand, and the internet has become the center of many information-driven applications. As information use and sharing among applications has become more desirable, we have also seen the downside: sensitive information becoming accessible to entities for which it was never intended.
When we look at the development goals of the internet, and of computer networks in general, we can easily see how protecting privacy contradicts them. The internet was developed by people who saw great potential in sharing scientific and military information quickly and easily between computers. Concerns about the privacy of information handled by the new applications mentioned above give us a different goal: ensuring that information is accessible only to the entities it is intended for. By definition this means making information sharing more difficult, since we do not want a legitimate user of information to be able to pass it on to someone without a legitimate right to it. For example, if I submit my personal information to an insurance company, I do not want the insurance company to share it with others who might use it to send me advertisements, or for more sinister purposes. Current computer systems and networks were built with the first goal, ubiquitous access and information sharing, in mind. Protecting sensitive information therefore requires us to completely rethink the way computer systems are designed. There are two routes we could take. One is to let computer systems and the internet keep the free architecture they have at present, but to prosecute violators under strict information-security laws. The other is to completely redesign computer systems with the additional goal that information should be accessible only to parties that the owner of the information trusts.
"Trusted Systems: Protecting Sensitive Information." 123HelpMe.com. 26 Jun 2019
The first alternative has thus far not provided adequate results, for several reasons. The internet is global and far-reaching, so policing it would require a global authority, yet no such authority exists. Individual nations have introduced local laws, but because these laws are diverse and vary from one country to another, violators have been able to continue operating simply by shifting their base to a different country. A great deal of research has therefore gone into technological solutions that would make computers trusted, allowing users to confidently send information to another computer knowing it can be used only for one particular purpose.
The present architecture of computers was not designed with privacy in mind. As a result, once information is represented in electronic form within a system, anyone with access to that system can make further copies of it or redistribute it. For example, if jetBlue's computer system contains a database of customer information records, anyone with access to that database could copy it or transmit it in various forms to unauthorized users. Even with passwords restricting usage to a few individuals, the present architecture makes it easy for programs such as screen-capture utilities to run in the background, capture sensitive information, and transmit it over the internet. Such background programs may have been deliberately installed by a malicious user of the system, or planted by a virus or Trojan of some sort. A user who entrusts his or her personally identifiable information to a corporate computer system of this type is therefore faced with the following questions.
1. Can I trust the organization not to make copies or to give my information to someone I don’t want it to be given to?
2. Even if I trust the organization (and all the individuals that are part of it, a significant level of trust), can I trust the computer system not to be infected by spyware or viruses that could be sharing my personal information over the internet?
With present-day computer systems and technology, none of these questions can be answered without raising considerable doubt. The second question, for example, is impossible to answer in the affirmative. Under the current architecture, programs operate in a shared memory model: even if you are using a very secure program that shares its information with no other program, another program running in the background could still be reading the first program's memory, monitoring the screen, recording keyboard input, and so on. Everything the first program does can be scrutinized by the second. However well you secure the first program with encryption or passwords, this architectural flaw cannot be avoided. Spyware such as the GAIN Network's Gator exploits this flaw for commercial interests such as identifying an individual's shopping habits.
Organizations that rely on these computer systems while attempting to implement privacy policies also face several issues of their own.
Before delving into the architecture of a trusted system, it is necessary to define the sense in which we use "trust." In the context of trusted systems, the term means that the legitimate owner of a piece of information can be confident that the information is being used appropriately. For example, if I require that jetBlue be able to use my personal information but that no other entity have access to it, a trusted system would ensure this belief is accurate by restricting all uses other than those I specify. As another example, if a company asks for my delivery address in order to deliver goods during the next week, I could specify that my address be destroyed after a week, and in a "trusted" system this would happen.
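As a sketch of what such a policy-bound piece of information might look like, the following Python toy (not any actual TCG mechanism; the class and field names here are hypothetical) enforces both a permitted purpose and a retention deadline on a stored record:

```python
import time

# Toy illustration: a record that carries its own usage policy, enforced by
# the "trusted" store holding it. All names here are hypothetical.
class PolicyBoundRecord:
    def __init__(self, value, allowed_use, ttl_seconds):
        self._value = value
        self.allowed_use = allowed_use            # e.g. "delivery"
        self._expires_at = time.time() + ttl_seconds

    def read(self, purpose):
        """Release the value only for the stated purpose, before expiry."""
        if time.time() >= self._expires_at:
            self._value = None                    # "destroyed" after the TTL
        if self._value is None:
            raise PermissionError("record expired and destroyed")
        if purpose != self.allowed_use:
            raise PermissionError(f"use '{purpose}' not authorized")
        return self._value

# The owner grants delivery use for one week; any other use is refused.
address = PolicyBoundRecord("221B Baker St", "delivery", ttl_seconds=7 * 24 * 3600)
print(address.read("delivery"))        # allowed: correct purpose, not expired
try:
    address.read("marketing")          # denied: purpose not authorized
except PermissionError as err:
    print(err)
```

In a real trusted system this enforcement would live in hardware and the operating system rather than in an ordinary object, which is precisely the point of the architecture described below.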
If all the computer systems connected together could be trusted, enforcing privacy policies would become very straightforward. In a global network such as the internet, consider the case in which a few countries decided not to accept the policies. By definition, none of the trusted systems would trust the remaining systems, so the systems outside the trusted platform would have to adopt it in order to remain part of the network. The alternative would be to stay on the network but be unable to access any sensitive information, a state of isolation from everyone else. A global police force enforcing laws is therefore not absolutely necessary: market forces can dictate adoption without a global authority intervening. A trusted system, if accepted by enough network nodes, would force the remaining nodes to embrace the "trust" technology as well.

A big problem with the current architecture is that once we give a piece of information to a different entity, we no longer control what happens to it. With email addresses, for example, once an address is used for an online purchase, the corporate entity has the address and full control of that data. If I wanted to revoke rights to my email address because the entity suddenly became my enemy, this would not be possible; once information leaves your computer, you have no control over it. A trusted system would change this drastically. The ownership of the information does not change just because it is in someone else's hands, and the trusted system must continue to enforce the policies to which it was bound at the time of the transfer.
A valuable application of trusted systems would be enforcing P3P policies. The Platform for Privacy Preferences (P3P) project, developed by the World Wide Web Consortium, is a simple, automated way for websites to specify "intent": a site can describe exactly what it does with the data it collects, and a user can decide not to visit the site if its P3P policy is unacceptable. The problem at present is that there is no real enforcement compelling a site to behave exactly as its policy specifies. By linking P3P with a trusted system, the user could have complete "trust" in how the site will use the sensitive information.
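To make the idea concrete, here is a minimal Python sketch of the user-agent side of this check. The policy below uses element names loosely modeled on P3P's vocabulary (STATEMENT, PURPOSE, RETENTION); it is a simplified stand-in, not the full W3C P3P schema, and the purpose values are invented for illustration:

```python
import xml.etree.ElementTree as ET

# A tiny site policy, loosely modeled on P3P's vocabulary (simplified,
# not the real W3C schema). The purpose names are illustrative.
SITE_POLICY = """
<POLICY>
  <STATEMENT>
    <PURPOSE>order-fulfillment</PURPOSE>
    <RETENTION>stated-purpose</RETENTION>
  </STATEMENT>
  <STATEMENT>
    <PURPOSE>telemarketing</PURPOSE>
    <RETENTION>indefinitely</RETENTION>
  </STATEMENT>
</POLICY>
"""

def acceptable(policy_xml, forbidden_purposes):
    """Return False if the site declares any purpose the user forbids."""
    root = ET.fromstring(policy_xml)
    declared = {s.findtext("PURPOSE") for s in root.findall("STATEMENT")}
    return declared.isdisjoint(forbidden_purposes)

# A user agent could refuse to submit data when the declared purposes
# conflict with the user's preferences:
print(acceptable(SITE_POLICY, {"telemarketing"}))   # False: conflict found
```

Today this check only tells the user what the site *claims*; a trusted system is what would make the claim binding.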
There are certain limitations to this approach, however: no technological solution can stop someone from writing down the information displayed on the screen, or from simply remembering it and telling someone else. That is beyond the scope of any technological solution, but these limitations should not deter us from pursuing trusted systems. An individual writing down information does not scale, and it is not really a fault of the computer system, since it could happen even if no computers were involved.
Before I describe the architecture developed by the Trusted Computing Group, it is important to note the following goals and how they help establish the privacy of sensitive information.
1. The computer system handling the sensitive information must be in a known state (i.e., it must be able to identify each program running on the system, or it must be possible to completely isolate the program handling the sensitive data from other programs). Without this, spyware or viruses running in the background could have access to the sensitive information.
2. It must be possible to attest to this known state. Without this feature, a corporation could pretend to be in a known state without actually running a trusted platform, in which case the owner of the information should not transmit it. Note that this is not a general form of attestation: only the corporate database needs to attest to what system it is running. The user need not attest to anything, because it is the user who trusts the corporate database by submitting the information, not the other way around.
3. The information should be accessible only through programs that have been specifically identified by the owner of the data. For example, if I am sending my personal information to a trusted system at jetBlue, I would want only the trusted database application to be able to access it; the mass-mailer application should not.
The “Trusted Computing Architecture” was proposed by the Trusted Computing Group (TCG) as a solution to the need for a trusted computing platform. It is important to keep the three goals mentioned above in mind as we go through the specifications of the architecture.
The Trusted Computing Group is a consortium of computer manufacturers and operating system manufacturers that have come together to build a trusted platform. The key companies in the initiative are Microsoft, Intel, IBM, HP, and AMD.
When a trusted system built to this architecture starts, it goes through a series of steps. The first is to verify the authenticity of a unit known as the core root of trust (CRT). The CRT is critical because everything else is built on the assumption that the core is valid. If the CRT is authentic, the boot process moves to the next stage and executes the instructions in the CRT; the core's first step is, in turn, to validate the next stage before executing it. A sequence of such executions takes place, and at every stage the system is in a known state, running software or firmware that has been verified. The Trusted Computing Group's specification thus satisfies the requirement that the system start up in a known state. If any validation check fails, the system has two options. The first, which has lost favor with manufacturers of late, is simply to shut down and refuse to start. The other is to start up in an unverified state, in which the system cannot be trusted and none of the sensitive information stored on it is accessible.
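The verify-then-execute chain described above can be sketched as a hash chain. This toy Python example (the stage names are hypothetical, and real TCG platforms record measurements in TPM platform configuration registers rather than comparing hashes in ordinary software like this) shows the basic idea:

```python
import hashlib

# Toy sketch of a measured-boot chain: each stage is measured (hashed) and
# checked against an expected value before it is "executed". Stage names
# are invented for illustration.
def measure(blob: bytes) -> str:
    return hashlib.sha256(blob).hexdigest()

# Hypothetical boot stages, from the core root of trust onward, and the
# expected measurement of each one.
stages = [b"core-root-of-trust", b"bootloader-v1", b"os-kernel-v1"]
expected = [measure(s) for s in stages]

def boot(loaded_stages, expected_hashes):
    """Verify each stage against its expected hash before proceeding."""
    for blob, want in zip(loaded_stages, expected_hashes):
        if measure(blob) != want:
            return "unverified"      # fall back to the untrusted state
    return "trusted"

print(boot(stages, expected))                            # trusted
tampered = [stages[0], b"evil-bootloader", stages[2]]
print(boot(tampered, expected))                          # unverified
```

The point of the chain is that trust in the final running system reduces to trust in the CRT plus a verified hash at every step; replacing any one stage breaks the chain.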
Information should be accessible to an application only if that application is specifically named by the owner of the information; this is requirement (3) above. To accomplish this, the Trusted Computing Group specifies several requirements that a trusted platform must meet. A trusted platform must provide strong encryption, hashing, and random number generation algorithms, which are used to store information in encrypted form so that only a program with the appropriate permissions can access it. The Trusted Platform Module (TPM) defined by the specification can also hold a practically unlimited number of keys for its applications. This avoids the potentially insecure "password file" stored alongside the very data it protects: the keys are kept separately in the TPM. The Trusted Computing Group specification therefore satisfies our third goal in creating trusted systems for sensitive information.
The software portion of the specification is being developed by Microsoft as the Next Generation Secure Computing Base (NGSCB). Previously known as Palladium, NGSCB is expected by Microsoft to be integrated into the next Windows release by 2005. NGSCB addresses our second goal: attesting that a trusted platform is running and proving the authenticity of the software running on it. There is a key difference, though: NGSCB attempts to provide attestation for all software applications. It is not limited to the software handling sensitive information in a corporate database but extends to all software, so that individual users can be made to attest to the authenticity of the software running on their systems. If we consider the programs an individual runs on his or her computer to be personal information, this attestation is itself an attack on the user's privacy.
NGSCB provides several features that are important for establishing the privacy of user data. One of the most important is memory curtaining: strong, hardware-enforced memory isolation in which each program runs in its own space and cannot read or affect another program's memory. Sensitive information handled by one program is thus safe from the prying eyes of another program running at the same time, which directly addresses the spyware issue. If Gator were run on a trusted platform, for example, it could not access data being used by tax-return preparation software running simultaneously. Not even the operating system itself can access the curtained memory of running programs. Even if such a system were compromised by a virus, the virus would be largely harmless, unable to affect the functioning of other programs or access sensitive information; serious privacy abuses in which viruses take control of email software and mail everyone in an individual's address book would no longer be possible. A key advantage is that most of the change needed to implement memory curtaining happens at the hardware level, so NGSCB is backward compatible and able to run programs designed for previous Windows versions; only programs that relied on unsafe methods of sharing data will fail to function.
A second feature NGSCB provides is secure input/output, another key improvement for data privacy. It stops key loggers and screen-capture programs from accessing sensitive information as it is typed or displayed, by encrypting the I/O stream all the way to the point where the output device emits data, or from the point where the input device receives it. Built into this is the ability for a program to determine whether input actually came from an input device and whether output was actually displayed to the user, which prevents a malicious program from attempting to hijack, for instance, an antivirus program.
A third feature of NGSCB is secure storage, which addresses the inability of the current PC architecture to store keys securely. To ensure that keys are accessible only to legitimate users, NGSCB uses the ingenious method of regenerating the key each time it is needed, from a combination of the software that is running and the configuration of the platform at that moment. The key therefore need not be stored at all, since it can be recreated on demand. Deriving the key from the platform configuration and the running software is rather controversial, however, because it means the information can be accessed only on the system on which it was created. If privacy of the data is the chief concern, this is the wrong way to create the key: the owner should be able to specify from which platforms or computers the information can be accessed. Secure storage does allow passwords to be stored separately from the actual data, which greatly increases the security of the data.
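A toy illustration of this key-derivation scheme and its consequences (the inputs and the SHA-256 derivation below are illustrative stand-ins, not NGSCB's actual key-derivation function):

```python
import hashlib

# Illustrative sketch: the sealing key is never stored; it is re-derived
# from measurements of the platform configuration and the running program.
# The derivation and input names here are invented for illustration.
def derive_seal_key(platform_config: bytes, program_image: bytes) -> bytes:
    return hashlib.sha256(platform_config + b"|" + program_image).digest()

k1 = derive_seal_key(b"platform-A", b"app-excel-v1")
k2 = derive_seal_key(b"platform-A", b"app-excel-v1")  # same inputs, same key
k3 = derive_seal_key(b"platform-B", b"app-excel-v1")  # new machine, new key
k4 = derive_seal_key(b"platform-A", b"app-other-v1")  # new app, new key

print(k1 == k2)   # True: the key can be recreated whenever it is needed
print(k1 == k3)   # False: data sealed on platform A is unreadable on B
print(k1 == k4)   # False: a different application cannot unseal it either
```

The second and third comparisons show exactly the controversy raised above: binding the key to the platform and the application locks the data to one machine and one program, whether or not the owner wants that.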
The Trusted Computing Group specification, together with the Next Generation Secure Computing Base, provides several important features that can be used to implement a trusted system according to our definition of "trust." One key difference is that the scope of the TCG specification and NGSCB is not limited to organizations or entities that handle sensitive information, but extends to all computers. Another is that the TCG specification attempts to protect data even on the computer from which it originated, essentially protecting the data from its legitimate owner.
A key issue the Trusted Computing Group must face before the trusted computing platform can become reality is the scope of the system. The TCG specification as described above attempts to solve a large number of issues unrelated to privacy. Digital rights management, for example, is a serious issue for the movie and music industries, but it takes the focus away from the privacy problems the platform can solve, and it has led several groups to oppose the "trusted computing" platform outright. Microsoft's role as the only operating system developer in the consortium has also been treated with some suspicion: however innovative many of the ideas are, it is hard to convince computer users that Microsoft is the ideal corporation to lead a campaign for "trust." The scope should be revised to cover only corporations and other entities that deal directly with sensitive information originating outside them. The system should not be used to protect information from its legitimate owners, who should retain full control of it.
As mentioned above, attestation is only required from a company that is asking a user for sensitive information; the user, in most circumstances, does not need to perform an attestation at all. General-purpose user attestation should therefore be removed if privacy and security are really the focus of the trusted computing platform.
Another key issue is that the platform configuration and the software running on it are used as the key for sealing data. This does little to protect the privacy of the data and instead has the effect of forcing the user to always use the same application programs. A user who seals information with Microsoft Excel, for example, can unseal it only with Excel; if Excel dropped its export functionality, the user could not move to a different application, a very anticompetitive outcome. This, too, needs to be revised.
There are many advantages to a trusted computing platform, some of which were named above. If the issues with the TCG specification are fixed, it could provide an effective technical solution that greatly reduces the opportunity for sensitive information to be stolen or accessed by parties without legitimate access. Memory curtaining in particular is an excellent idea that could stop spyware, as well as improperly coded programs (those with memory leaks and the like), from leaking sensitive information.
The Trusted Computing Group's effort to bring forward a trusted platform is commendable. Although the system specifications currently have several deficiencies, it seems that with a relatively small amount of change the trusted computing platform could serve as a starting point for giving users of computer networks more control over their sensitive data. With sensitive information so widely used on computer networks, implementing such a system is a requirement that needs to be fulfilled as soon as possible. With regard to scope, however, the TCG has strayed somewhat from what is best for end users. This needs to be recognized and corrected soon, so that the good ideas in the platform can benefit users in the near future.
The ideal route for tackling privacy issues is to combine strong technological solutions with strong legal measures. Even if trusted systems of reduced scope, as defined in this paper, cannot handle every case of computer-related privacy, adopting them could greatly reduce the number of cases, and with fewer privacy-related crimes, law enforcement becomes easier. The involvement of an independent, non-profit group such as TRUSTe in the development of trusted systems would also help: if one or more independent organizations played a significant role, the public would find it easier to accept the role of large corporations such as Microsoft within the Trusted Computing Group. With the broad range of applications that rely on sensitive information continually growing, implementing effective trusted systems is an inevitable next step.
1) The Trusted Computing Group https://www.trustedcomputinggroup.org/home
2) Trusted Computing Platform Alliance http://www.trustedcomputing.org/home
3) Next Generation Secure Computing Base http://www.microsoft.com/resources/ngscb/default.mspx
4) Seth Schoen – Electronic Frontier Foundation (Trusted Computing Promise and Risk) http://www.eff.org/Infra/trusted_computing/20031001_tc.php
5) Ross Anderson – Trusted Computing FAQ