
Graduation Thesis Foreign-Literature Translation: Introduction to Physical Database Design

School code: 10128
Student number: 200920205048
English title: Software Database: An Object-Oriented Perspective
Chinese title: 软件数据库的面向对象的视角
Student name:
College: School of Information Engineering
Department: Department of Software Engineering
Major: Software Engineering
Class:
Advisor:
June 2013

A HISTORICAL PERSPECTIVE

From the earliest days of computers, storing and manipulating data have been a major application focus. The first general-purpose DBMS was designed by Charles Bachman at General Electric in the early 1960s and was called the Integrated Data Store. It formed the basis for the network data model, which was standardized by the Conference on Data Systems Languages (CODASYL) and strongly influenced database systems through the 1960s. Bachman was the first recipient of ACM's Turing Award (the computer science equivalent of a Nobel prize) for work in the database area; he received the award in 1973.

In the late 1960s, IBM developed the Information Management System (IMS) DBMS, used even today in many major installations. IMS formed the basis for an alternative data representation framework called the hierarchical data model. The SABRE system for making airline reservations was jointly developed by American Airlines and IBM around the same time, and it allowed several people to access the same data through a computer network. Interestingly, today the same SABRE system is used to power popular Web-based travel services such as Travelocity!

In 1970, Edgar Codd, at IBM's San Jose Research Laboratory, proposed a new data representation framework called the relational data model. This proved to be a watershed in the development of database systems: it sparked rapid development of several DBMSs based on the relational model, along with a rich body of theoretical results that placed the field on a firm foundation. Codd won the 1981 Turing Award for his seminal work. Database systems matured as an academic discipline, and the popularity of relational DBMSs changed the commercial landscape. Their benefits were widely recognized, and the use of DBMSs for managing corporate data became standard practice.

In the 1980s, the relational model consolidated its position as the dominant DBMS paradigm, and database systems continued to gain widespread use. The SQL query language for relational databases, developed as part of IBM's System R project, is now the standard query language. SQL was standardized in the late 1980s, and the current standard, SQL-92, was adopted by the American National Standards Institute (ANSI) and the International Standards Organization (ISO). Arguably, the most widely used form of concurrent programming is the concurrent execution of database programs (called transactions). Users write programs as if they are to be run by themselves, and the responsibility for running them concurrently is given to the DBMS. James Gray won the 1999 Turing Award for his contributions to the field of transaction management in a DBMS.

In the late 1980s and the 1990s, advances have been made in many areas of database systems. Considerable research has been carried out into more powerful query languages and richer data models, and there has been a big emphasis on supporting complex analysis of data from all parts of an enterprise. Several vendors (e.g., IBM's DB2, Oracle 8, Informix UDS) have extended their systems with the ability to store new data types such as images and text, and with the ability to ask more complex queries.
Specialized systems have been developed by numerous vendors for creating data warehouses, consolidating data from several databases, and for carrying out specialized analysis.

An interesting phenomenon is the emergence of several enterprise resource planning (ERP) and management resource planning (MRP) packages, which add a substantial layer of application-oriented features on top of a DBMS. Widely used packages include systems from Baan, Oracle, PeopleSoft, SAP, and Siebel. These packages identify a set of common tasks (e.g., inventory management, human resources planning, financial analysis) encountered by a large number of organizations and provide a general application layer to carry out these tasks. The data is stored in a relational DBMS, and the application layer can be customized to different companies, leading to lower overall costs for the companies, compared to the cost of building the application layer from scratch.

Most significantly, perhaps, DBMSs have entered the Internet Age. While the first generation of Web sites stored their data exclusively in operating system files, the use of a DBMS to store data that is accessed through a Web browser is becoming widespread. Queries are generated through Web-accessible forms and answers are formatted using a markup language such as HTML, in order to be easily displayed in a browser. All the database vendors are adding features to their DBMSs aimed at making them more suitable for deployment over the Internet.

Database management continues to gain importance as more and more data is brought on-line and made ever more accessible through computer networking. Today the field is being driven by exciting visions such as multimedia databases, interactive video, digital libraries, a host of scientific projects such as the human genome mapping effort and NASA's Earth Observation System project, and the desire of companies to consolidate their decision-making processes and mine their data repositories for useful information about their businesses. Commercially, database management systems represent one of the largest and most vigorous market segments. Thus the study of database systems could prove to be richly rewarding in more ways than one!

INTRODUCTION TO PHYSICAL DATABASE DESIGN

Like all other aspects of database design, physical design must be guided by the nature of the data and its intended use. In particular, it is important to understand the typical workload that the database must support; the workload consists of a mix of queries and updates. Users also have certain requirements about how fast certain queries or updates must run or how many transactions must be processed per second. The workload description and users' performance requirements are the basis on which a number of decisions have to be made during physical database design.

To create a good physical database design and to tune the system for performance in response to evolving user requirements, the designer needs to understand the workings of a DBMS, especially the indexing and query processing techniques supported by the DBMS. If the database is expected to be accessed concurrently by many users, or is a distributed database, the task becomes more complicated, and other features of a DBMS come into play.

DATABASE WORKLOADS

The key to good physical design is arriving at an accurate description of the expected workload. A workload description includes the following elements:

1. A list of queries and their frequencies, as a fraction of all queries and updates.
2. A list of updates and their frequencies.
3. Performance goals for each type of query and update.

For each query in the workload, we must identify:
- Which relations are accessed.
- Which attributes are retained (in the SELECT clause).
- Which attributes have selection or join conditions expressed on them (in the WHERE clause) and how selective these conditions are likely to be.

Similarly, for each update in the workload, we must identify:
- Which attributes have selection or join conditions expressed on them (in the WHERE clause) and how selective these conditions are likely to be.
- The type of update (INSERT, DELETE, or UPDATE) and the updated relation.
- For UPDATE commands, the fields that are modified by the update.

Remember that queries and updates typically have parameters; for example, a debit or credit operation involves a particular account number. The values of these parameters determine the selectivity of selection and join conditions.

Updates have a query component that is used to find the target tuples. This component can benefit from a good physical design and the presence of indexes. On the other hand, updates typically require additional work to maintain indexes on the attributes that they modify. Thus, while queries can only benefit from the presence of an index, an index may either speed up or slow down a given update. Designers should keep this trade-off in mind when creating indexes.
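To make the elements above concrete, here is a small hedged sketch of one workload entry together with the trade-off just described. The Accounts relation, its columns, the literal values, and the stated frequencies are all invented for this illustration; they are not part of the original text.

    -- Q1, say 70% of the workload: list the accounts held at a given branch.
    SELECT A.accno, A.balance
    FROM Accounts A
    WHERE A.branch = 'Downtown';

    -- U1, say 25% of the workload: credit a particular account.
    UPDATE Accounts
    SET balance = balance + 100
    WHERE accno = 3456;

    -- An index on branch can speed up Q1 considerably ...
    CREATE INDEX acct_branch_idx ON Accounts (branch);

    -- ... but every INSERT and DELETE, and any UPDATE that changes branch, must
    -- now also maintain the index. U1 is unaffected because it never modifies
    -- branch, and its own WHERE clause would benefit from an index on accno.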
NEED FOR DATABASE TUNING

Accurate, detailed workload information may be hard to come by while doing the initial design of the system. Consequently, tuning a database after it has been designed and deployed is important; we must refine the initial design in the light of actual usage patterns to obtain the best possible performance.

The distinction between database design and database tuning is somewhat arbitrary. We could consider the design process to be over once an initial conceptual schema is designed and a set of indexing and clustering decisions is made; any subsequent changes to the conceptual schema or the indexes, say, would then be regarded as a tuning activity. Alternatively, we could consider some refinement of the conceptual schema (and physical design decisions affected by this refinement) to be part of the physical design process. Where we draw the line between design and tuning is not very important.

OVERVIEW OF DATABASE TUNING

After the initial phase of database design, actual use of the database provides a valuable source of detailed information that can be used to refine the initial design. Many of the original assumptions about the expected workload can be replaced by observed usage patterns; in general, some of the initial workload specification will be validated, and some of it will turn out to be wrong. Initial guesses about the size of data can be replaced with actual statistics from the system catalogs (although this information will keep changing as the system evolves). Careful monitoring of queries can reveal unexpected problems; for example, the optimizer may not be using some indexes as intended to produce good plans. Continued database tuning is important to get the best possible performance.
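A minimal sketch of this kind of monitoring, assuming a PostgreSQL-style system (the text names no particular DBMS; EXPLAIN and ANALYZE are the PostgreSQL spellings, and other systems expose equivalent facilities). The Accounts relation and the index are the hypothetical ones from the earlier sketch.

    -- Ask the optimizer which plan it would choose for a frequent query.
    EXPLAIN
    SELECT A.accno, A.balance
    FROM Accounts A
    WHERE A.branch = 'Downtown';

    -- If the plan ignores an index we expected it to use, stale catalog
    -- statistics are one common cause; recollecting them may change the plan.
    ANALYZE Accounts;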
TUNING THE CONCEPTUAL SCHEMA

In the course of database design, we may realize that our current choice of relation schemas does not enable us to meet our performance objectives for the given workload with any (feasible) set of physical design choices. If so, we may have to redesign our conceptual schema (and re-examine physical design decisions that are affected by the changes that we make). We may realize that a redesign is necessary during the initial design process or later, after the system has been in use for a while. Once a database has been designed and populated with data, changing the conceptual schema requires a significant effort in terms of mapping the contents of relations that are affected. Nonetheless, it may sometimes be necessary to revise the conceptual schema in light of experience with the system. We now consider the issues involved in conceptual schema (re)design from the point of view of performance.

Several options must be considered while tuning the conceptual schema:
- We may decide to settle for a 3NF design instead of a BCNF design.
- If there are two ways to decompose a given schema into 3NF or BCNF, our choice should be guided by the workload.
- Sometimes we might decide to further decompose a relation that is already in BCNF.
- In other situations we might denormalize; that is, we might choose to replace a collection of relations obtained by a decomposition from a larger relation with the original (larger) relation, even though it suffers from some redundancy problems. Alternatively, we might choose to add some fields to certain relations to speed up some important queries, even if this leads to redundant storage of some information (and consequently, a schema that is in neither 3NF nor BCNF).

This discussion of normalization has concentrated on the technique of decomposition, which amounts to vertical partitioning of a relation. Another technique to consider is horizontal partitioning of a relation, which would lead to our having two relations with identical schemas. Note that we are not talking about physically partitioning the tuples of a single relation; rather, we want to create two distinct relations (possibly with different constraints and indexes on each).

Incidentally, when we redesign the conceptual schema, especially if we are tuning an existing database schema, it is worth considering whether we should create views to mask these changes from users for whom the original schema is more natural.
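The horizontal-partitioning and view ideas can be sketched in SQL. The Claims relation below is hypothetical, invented purely for illustration: it is split into the claims that are still open and those already settled, and a view with the original name hides the change from users for whom the single relation is more natural.

    -- Two relations with identical schemas; each may carry different
    -- constraints and indexes.
    CREATE TABLE OpenClaims (
        claimid INTEGER PRIMARY KEY,
        amount  DECIMAL(10, 2),
        filed   DATE
    );
    CREATE TABLE SettledClaims (
        claimid INTEGER PRIMARY KEY,
        amount  DECIMAL(10, 2),
        filed   DATE
    );

    -- A view masks the redesign from applications written against the
    -- original schema.
    CREATE VIEW Claims AS
        SELECT claimid, amount, filed FROM OpenClaims
        UNION ALL
        SELECT claimid, amount, filed FROM SettledClaims;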
TUNING QUERIES AND VIEWS

If we notice that a query is running much slower than we expected, we have to examine the query carefully to find the problem. Some rewriting of the query, perhaps in conjunction with some index tuning, can often fix the problem. Similar tuning may be called for if queries on some view run slower than expected.

When tuning a query, the first thing to verify is that the system is using the plan that you expect it to use. It may be that the system is not finding the best plan for a variety of reasons. Some common situations that are not handled efficiently by many optimizers follow:
- A selection condition involving null values.
- Selection conditions involving arithmetic or string expressions, or conditions using the OR connective. For example, if we have a condition E.age = 2*D.age in the WHERE clause, the optimizer may correctly utilize an available index on E.age but fail to utilize an available index on D.age. Replacing the condition by E.age/2 = D.age would reverse the situation (a sketch appears at the end of this section).
- Inability to recognize a sophisticated plan such as an index-only scan for an aggregation query involving a GROUP BY clause.

If the optimizer is not smart enough to find the best plan (using access methods and evaluation strategies supported by the DBMS), some systems allow users to guide the choice of a plan by providing hints to the optimizer; for example, users might be able to force the use of a particular index or choose the join order and join method. A user who wishes to guide optimization in this manner should have a thorough understanding of both optimization and the capabilities of the given DBMS.
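A hedged sketch of the rewriting idea from the list above. Employees and Departments are hypothetical relations invented for this example; which form runs faster depends on the indexes that actually exist.

    -- Original form: many optimizers can use an index on E.age here, but not
    -- an index on D.age, because D.age appears inside an expression.
    SELECT E.ename, D.dname
    FROM Employees E, Departments D
    WHERE E.did = D.did
      AND E.age = 2 * D.age;

    -- Rewritten form: now an index on D.age becomes usable, while an index
    -- on E.age may no longer be.
    SELECT E.ename, D.dname
    FROM Employees E, Departments D
    WHERE E.did = D.did
      AND E.age / 2 = D.age;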
OTHER TOPICS

MOBILE DATABASES

The availability of portable computers and wireless communications has created a new breed of nomadic database users. At one level these users are simply accessing a database through a network, which is similar to distributed DBMSs. At another level the network as well as data and user characteristics now have several novel properties, which affect basic assumptions in many components of a DBMS, including the query engine, transaction manager, and recovery manager.

Users are connected through a wireless link whose bandwidth is ten times less than Ethernet and 100 times less than ATM networks. Communication costs are therefore significantly higher in proportion to I/O and CPU costs.

Users' locations are constantly changing, and mobile computers have a limited battery life. Therefore, the true communication cost includes connection time and battery usage in addition to bytes transferred, and it changes constantly depending on location. Data is frequently replicated to minimize the cost of accessing it from different locations.

As a user moves around, data could be accessed from multiple database servers within a single transaction. The likelihood of losing connections is also much greater than in a traditional network. Centralized transaction management may therefore be impractical, especially if some data is resident at the mobile computers. We may in fact have to give up on ACID transactions and develop alternative notions of consistency for user programs.

MAIN MEMORY DATABASES

The price of main memory is now low enough that we can buy enough main memory to hold the entire database for many applications; with 64-bit addressing, modern CPUs also have very large address spaces. Some commercial systems now have several gigabytes of main memory. This shift prompts a reexamination of some basic DBMS design decisions, since disk accesses no longer dominate processing time for a memory-resident database:
- Main memory does not survive system crashes, and so we still have to implement logging and recovery to ensure transaction atomicity and durability. Log records must be written to stable storage at commit time, and this process could become a bottleneck. To minimize this problem, rather than commit each transaction as it completes, we can collect completed transactions and commit them in batches; this is called group commit (a small illustration follows this list). Recovery algorithms can also be optimized, since pages rarely have to be written out to make room for other pages.
- The implementation of in-memory operations has to be optimized carefully, since disk accesses are no longer the limiting factor for performance.
- A new criterion must be considered while optimizing queries, namely the amount of space required to execute a plan. It is important to minimize the space overhead, because exceeding available physical memory would lead to swapping pages to disk (through the operating system's virtual memory mechanisms), greatly slowing down execution.
- Page-oriented data structures become less important (since pages are no longer the unit of data retrieval), and clustering is not important (since the cost of accessing any region of main memory is uniform).
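Group commit is not limited to main-memory systems; the log flush at commit time is the same kind of bottleneck in disk-based DBMSs. As one hedged illustration (the text names no particular system), PostgreSQL exposes settings that delay the flush slightly so that several transactions committing at about the same time can share a single write to stable storage:

    -- Illustrative values only; these are superuser settings in PostgreSQL and
    -- take effect after a configuration reload.
    ALTER SYSTEM SET commit_delay = 1000;   -- microseconds to wait before flushing the log
    ALTER SYSTEM SET commit_siblings = 5;   -- wait only if at least 5 other transactions are active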
Finish the 5,000-word foreign-literature translation for your thesis in five minutes: all the tools you need are here!

Reading and translating foreign-language literature is an important part of research; in many fields the best papers are written in foreign languages, so it is worth borrowing from other people's translation experience. For particular reasons I have had many opportunities to translate foreign literature, and over time I discovered three great helpers for the job: Google Translate, Kingsoft PowerWord (金山词霸, the full version), and CNKI's "Translation Assistant" (翻译助手).

The basic procedure is as follows:
1. Turn on PowerWord's automatic word-capture feature, then read the paper.
2. When you hit a long sentence you cannot understand, hand it to Google. The raw output looks terrible at first glance, but after your own brain reprocesses it, the meaning of the sentence is usually clear.
3. If Google still does not help and something feels off, you have probably misunderstood one of the "ordinary" words: some words look simple but carry a special meaning in the literature. Look the word up in CNKI's Translation Assistant; because its word senses are drawn from a large body of published papers, its hit rate is very high.

In addition, it is best to translate in units of paragraphs or long sentences rather than word by word, so that you do not lose the forest for the trees.

The four main tools:
1. Google Translate. As everyone knows, the English literature and material indexed by Google are fairly comprehensive. I use it in two ways. One is to search for English papers; there are plenty of posts about that, so I will not repeat them here. Back to translation. Here is an example of how to use it: suppose you do not know how to translate a term such as "electromagnetically induced transparency". First you can look it up in CNKI and work from the Chinese-English keyword pairs of indexed papers, which is usually fairly accurate. As for Google itself: most people would look the words up one by one in a dictionary, in the usual way, and type them into Google; that kind of word-by-word rendering is generally not very accurate, so you need to verify it. Search Google for your rough, fragmentary translation and you will see many related papers and documents; their authors are no fools, and by reading a few you can find the most precise, genuinely Western phrasing. That is how I use it.
2. CNKI Translation Assistant. This site needs little introduction. Its main strength is that the terms it returns are specialized vocabulary, and each translation is shown with the articles it was extracted from (it is backed by CNKI's retrieval service, and the translations are pulled from the literature), which makes it a very practical site. The people who wrote those articles were presumably not fools either, so we can reuse their wording directly and save ourselves the work. Have a look if you are interested; it is well worth using.
3. The network edition of Kingsoft PowerWord (under 1 MB).
4. Youdao online translation.

Translation speed: here I am comparing translating from an electronic copy with translating from a printout. In my experience the printout is faster: reading on screen tires the eyes, and when we sit at the computer we tend to drift into games or something else every now and then, so our speed drops in the end; besides, the dictionaries on the computer (PowerWord and the like) are not especially good at specialized translation, so the results are mediocre. I recommend buying the English-Chinese dictionary of science and technology compiled at Tsinghua University (published, I believe, by National Defense Industry Press), which works quite well. Combined with sites such as Google and the CNKI Translation Assistant, your translation speed will improve considerably.

Some concrete tips (mainly for writing and reading papers):
You probably already know who the leading domestic researchers in your field are. I strongly recommend reading their Chinese and English papers carefully from start to finish; it helps a great deal with translating between English and Chinese in your own specialty.
What we are really worst at is writing English papers, not reading them, but in the end improvement still has to start with putting serious effort into reading English papers. Reading well has its tricks; my own summary is as follows:
1. File papers on different topics in separate folders. When you read, make sure you fully understand the papers you care about and know what the relevant parts of the papers you merely consult are saying. While doing so, copy into a notebook any expressions you find particularly well turned, or that you might be able to use in your own paper. That notebook will be an asset later: you will no longer agonize over phrasing that does not match Western usage, and your paper will be less likely to be rejected by SCI journals or top venues. Try it if you do not believe me.
2. Turn your notes into your own index. This is a second pass over the articles and an important stage for organizing the best passages you have copied. Once you have done it, writing an English paper will feel effortless; much of the wording no longer has to be translated from scratch, provided your index is fine-grained and the Chinese-English correspondence is recorded in detail.
3. The last point is the stage of real mastery: nothing comes from talk, it comes from doing. Writing English papers is like learning to write compositions in primary school; without practice you will never produce anything good. So I encourage you to force yourself, from time to time, to write your paper in English; if the first draft is not good enough, revise it again. At the very least you will be satisfied in the end. At least that is how I see it.