Normalization of the Lowe's Inventory Information System Database
As a database grows in size and complexity, it is essential that order and organization be maintained to control that complexity and to minimize errors and redundancy in the associated data. This goal is achieved through a process referred to as normalization.
Normalization permits us to design our relational database tables so that they "(1) contain all the data necessary for the purposes that the database is to serve, (2) have as little redundancy as possible, (3) accommodate multiple values for types of data that require them, (4) permit efficient updates of the data in the database, and (5) avoid the danger of losing data unknowingly" (Wyllys, 2002).
As a prelude to normalization, the database modeler researches the company and current database uses to determine the requirements for the new database. Table elements and relationships are determined, and candidate keys reviewed and established for the tables. The process of database normalization then begins.
Databases can attain varying degrees of normalization, classified as 1NF, 2NF, 3NF, 4NF, 5NF, and BCNF; however, for practicality and to stay consistent with the layout of our Lowe's inventory database, only the first through third normal forms (1NF through 3NF) will be addressed.
First, a balance must be struck between data accessibility, in terms of performance and maintenance, and concerns about data redundancy. To accomplish this and normalize the Lowe's database, the supply and retail sides of the database were combined and the tables set in first normal form. In first normal form, the tables were formatted to ensure that the data within them was atomic, i.e., in its simplest form with no repeating groups. Tables in 1NF are often characterized by a concatenated primary key and may still contain partial and transitive dependencies. Decomposition from this point helps to eliminate redundancy as the modeler works toward a defined goal based on business rules and individual needs.
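The move to 1NF described above can be sketched in a few lines. This is a hypothetical example, not the actual Lowe's schema; the table and item names are illustrative assumptions:

```python
# Hypothetical sketch of 1NF conversion (column and item names are
# illustrative, not taken from the real Lowe's database): a row holding a
# repeating group of item numbers is split into atomic rows, each keyed by
# the concatenated key (order_id, item_no).

raw_row = {"order_id": 1001, "items": "KOB-7, HMR-2, SAW-9"}  # repeating group

def to_1nf(row):
    """Split the comma-separated repeating group into one atomic row per item."""
    return [
        {"order_id": row["order_id"], "item_no": item.strip()}
        for item in row["items"].split(",")
    ]

for atomic_row in to_1nf(raw_row):
    print(atomic_row)
```

Each resulting row now holds exactly one value per field, which is what "atomic" means in practice.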
The tables were next moved to second normal form, again undergoing a review in which redundant data was reduced by extracting it and placing it in new tables. Here, each key component is written on a separate line, with the original key written on the last line; all dependent attributes then follow their respective keys. This process eliminates partial dependencies, which are not allowed in 2NF.
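A minimal sketch of such a 2NF decomposition, under assumed column names (none of these are from the actual Lowe's tables): `item_desc` depends only on `item_no`, a partial dependency on the concatenated key `(order_id, item_no)`, so it is extracted into its own table.

```python
# Hypothetical 2NF decomposition: item_desc depends on only part of the
# concatenated key (order_id, item_no), so it is moved to a separate Item
# table keyed by item_no alone.

rows_1nf = [
    {"order_id": 1001, "item_no": "KOB-7", "qty": 2, "item_desc": "Kobalt wrench"},
    {"order_id": 1002, "item_no": "KOB-7", "qty": 1, "item_desc": "Kobalt wrench"},
]

def to_2nf(rows):
    """Split 1NF rows into OrderItem (full key) and Item (partial key) tables."""
    order_items = [{k: r[k] for k in ("order_id", "item_no", "qty")} for r in rows]
    items = {r["item_no"]: {"item_no": r["item_no"], "item_desc": r["item_desc"]}
             for r in rows}  # dict keyed by item_no removes the duplicates
    return order_items, list(items.values())

order_items, items = to_2nf(rows_1nf)
```

Note that the duplicated description, stored once per order line in 1NF, now appears exactly once.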
Finally, the tables were set into third normal form by ensuring that no non-identifying attributes were dependent on any other non-identifying attributes.
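The 3NF step can be sketched the same way. Again the names are hypothetical: `dept_name` depends on `dept_id`, which in turn depends on the key `item_no`, a transitive dependency that 3NF removes by extracting a department table.

```python
# Hypothetical 3NF decomposition: dept_name depends on dept_id (a
# non-identifying attribute), not directly on the key item_no, so it is
# extracted into a Department table to remove the transitive dependency.

rows_2nf = [
    {"item_no": "KOB-7", "dept_id": 10, "dept_name": "Tools"},
    {"item_no": "SAW-9", "dept_id": 10, "dept_name": "Tools"},
]

def to_3nf(rows):
    """Split rows into Item (key + foreign key) and Department tables."""
    item_rows = [{k: r[k] for k in ("item_no", "dept_id")} for r in rows]
    depts = {r["dept_id"]: {"dept_id": r["dept_id"], "dept_name": r["dept_name"]}
             for r in rows}
    return item_rows, list(depts.values())

item_rows, dept_rows = to_3nf(rows_2nf)
```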
The next project deliverable is a robust, modernized database and data warehouse design. The company collects large amounts of website data and analyzes it on behalf of the company's customers. This document provides an overview of the new data warehouse along with the type of database design that has been selected for it. Included in the appendix of this document is a graphical depiction of the logical design of the
For the task assigned to me, I visited one of my favorite food outlets, Taco Bell. Every organization has unique data requirements, and this particular outlet has its own, which can be seen at different levels. The first level at Taco Bell is the hiring manager, who requires data on the employees who work there. At the next level is a manager who looks after the inventory; inventory management requires a database that works efficiently, so at this level a database is a must. At the next level, the data requirements of the cashier differ from those of the others, and the final level of data requirements belongs to the drive-through customers.
Data warehousing is a complex undertaking, and a warehouse must be capable of delivering quality data. An operational database, by contrast, is one that an organization uses to run its day-to-day database activities. Operational databases are designed to handle rapid transaction processing with systematic updates, so velocity is important to them. They are most commonly operated by office staff and hold on the order of megabytes to gigabytes of data. Database consistency checks and constraints are rigidly enforced, and they employ the latest technology necessary to support organizational functions.
With the advancement of database systems and software, Eric Brewer writes in his new article that:
Dr. Edgar F. Codd was best known for creating the "relational" model for representing data, which led to today's database industry ("Edgar F. Codd"). He received many awards for his contributions, and he is one of the key reasons we have many of the technologies we use today. As we dig deeper into his life in this research paper, we will find that Dr. Edgar F. Codd was, in fact, a self-motivated genius.
The Revolution in Database Architecture, by Jim Gray, describes the path Gray thought the evolution of database architecture would take after 2004. He argued that databases had stagnated for several years and that, beginning in 2004, the development of several technologies would pave the way to a revolution in the database world.
When developing a relational database, understanding the logical flow of information and planning properly will improve the probability of the database functioning as intended and producing the desired results. In determining the proper structure of a relational database for a video rental store, one must consider what information is stored, the process for renting videos, and the information maintained on videos in inventory. Customer, Video, and Video Type are the entity classes that will be discussed, and Customer Order is the intersection relation needed to explain the complete process, as seen in the Entity Relationship Diagram below.
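One possible way to realize those entity classes as tables is sketched below using SQLite from Python. The column names are assumptions for illustration, not the columns of the original diagram; the point is that CustomerOrder, as the intersection relation, carries foreign keys to both Customer and Video.

```python
import sqlite3

# Hypothetical schema sketch for the video rental entities (column names are
# illustrative). CustomerOrder resolves the many-to-many relationship between
# Customer and Video into two one-to-many relationships.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE VideoType (
    type_id     INTEGER PRIMARY KEY,
    description TEXT NOT NULL
);
CREATE TABLE Video (
    video_id INTEGER PRIMARY KEY,
    title    TEXT NOT NULL,
    type_id  INTEGER NOT NULL REFERENCES VideoType(type_id)
);
CREATE TABLE Customer (
    customer_id INTEGER PRIMARY KEY,
    name        TEXT NOT NULL
);
CREATE TABLE CustomerOrder (              -- intersection relation
    customer_id INTEGER NOT NULL REFERENCES Customer(customer_id),
    video_id    INTEGER NOT NULL REFERENCES Video(video_id),
    rented_on   TEXT NOT NULL,
    PRIMARY KEY (customer_id, video_id, rented_on)
);
""")

# A sample rental, then a join back through the intersection relation.
conn.execute("INSERT INTO VideoType VALUES (1, 'New Release')")
conn.execute("INSERT INTO Video VALUES (1, 'Example Title', 1)")
conn.execute("INSERT INTO Customer VALUES (1, 'A. Customer')")
conn.execute("INSERT INTO CustomerOrder VALUES (1, 1, '2004-01-01')")
row = conn.execute("""
    SELECT c.name, v.title
    FROM CustomerOrder o
    JOIN Customer c ON c.customer_id = o.customer_id
    JOIN Video    v ON v.video_id    = o.video_id
""").fetchone()
```

Because the rental event lives in its own table, one customer can rent many videos and one video can be rented by many customers without duplicating either entity's attributes.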
Inconsistent storage of organizational data creates many issues; a poor database design can cause security, integrity, and normalization problems. The majority of these issues stem from redundancy, weak data integrity, and irregular storage. This is an ongoing challenge for every organization, so it is important for the organization and the DBA to build a logical, conceptual, and efficient database design. In today's complex database systems, normalization, data integrity, and security play key roles. Normalization, as a design approach, helps minimize data redundancy and optimizes data structure by systematically placing data into appropriate groupings; a successfully normalized design follows first, second, and third normal form. Data integrity increases the accuracy and consistency of data over its entire life cycle; it also helps keep track of database objects and ensures that each object is created, formatted, and maintained properly. It is a critical aspect of database design, involving both database structure integrity and semantic data integrity. Database security is another high-priority, critical issue for every organization: data breaches continue to dominate business and IT concerns, and building a secure system is as important as normalization and data integrity. A secure system protects data from unauthorized users; data masking and data encryption are technologies DBAs prefer for protecting data.
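As a minimal illustration of the data masking mentioned above (the function and its defaults are a sketch, not any particular product's API), a masking routine typically hides all but a trailing fragment of a sensitive value:

```python
# Minimal data-masking sketch: hide all but the last few characters of a
# sensitive value (e.g., a card or account number) before displaying it
# to users who are not authorized to see the full value.

def mask(value, keep=4, pad="*"):
    """Replace every character except the last `keep` with `pad`."""
    return pad * max(len(value) - keep, 0) + value[-keep:]

masked = mask("4111111111111111")  # a 16-digit test card number
```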
This data model organizes data in a tree-like structure similar to an organizational chart. The structure allows information to be related through parent/child relationships: a one-to-many relationship is created, since one parent can have many children but a child belongs to only one parent. This model was the first database model, created by IBM in 1966 and used in IBM's Information Management System (IMS); later, Microsoft employed it in the Windows Registry. It replaced the flat-file database system because it was faster and simpler, but it was inflexible because relationships were restricted to one-to-many.
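The one-to-many constraint can be made concrete in a few lines. This is a generic sketch with invented segment names, not IMS or Registry code: because each child records exactly one parent, every record has a single, unambiguous path back to the root.

```python
# Sketch of the hierarchical model's one-to-many constraint: each child
# stores exactly one parent (segment names are invented), so a lookup walks
# a single path from the record to the root.

records = {
    "STORE":  None,      # root segment has no parent
    "DEPT-A": "STORE",   # each child names exactly one parent
    "DEPT-B": "STORE",
    "ITEM-1": "DEPT-A",
}

def path_to_root(key):
    """Follow parent pointers from a record up to the root."""
    path = [key]
    while records[key] is not None:
        key = records[key]
        path.append(key)
    return path
```

A record that needed two parents (for example, an item shared by two departments) simply cannot be expressed here, which is exactly the inflexibility the paragraph describes.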
One of the most basic measures that must be examined and planned involves the smallest units within the database: the fields. The fields are derived from the simple attributes that were defined in the logical data model, and a few decisions need to be made about each of them. First, what type of data is going to be stored in them? The data type assigned to each field should be able to accurately represent every possible valid value while excluding invalid values as much as possible. Special consideration should be given to any manipulations that will be performed on the data, as some data types make these manipulations much easier than others. When considering data manipulations, it is important to keep in mind even simple operations like addition: when finding the sum of a field's values, the data type that worked for the individual fields may not be large enough to hold the resulting summation.
Database Systems has a practical, hands-on approach that makes it uniquely suited to providing a strong foundation in good database design practice. Database design is more art than science: while it is true that a properly designed database should follow the normal forms and the relational model, you still have to come up with a design that reflects the business you are trying to model. This paper describes the design process of a database project.
The database application design can be improved in a number of ways as described below:
Prior to the start of the Information Age in the late 20th century, businesses had to collect data from non-automated sources. Businesses then lacked the computing resources necessary to properly analyze the data, and as a result, companies often made business d...