Wavelets

Definition: Wavelets are mathematical functions that let us divide data into different frequency components and then study each component with a resolution appropriate for its overall scale. Wavelets are used in computer imaging, animation, noise reduction and data compression.

In many fields of study, from science and engineering to economics and psychology, we need to analyze data so that we can discover underlying patterns and information. A common way of doing this is to transform the data by applying mathematical functions.

One of the best-known processing techniques is Fourier analysis, in which you can approximate a real-world data stream by adding together a series of sine and cosine curves at different frequencies; the more curves you include in your approximation, the more closely you can replicate the original data. Since we know how to work with these well-defined trigonometric curves, we can often deduce patterns in the data that would otherwise remain hidden.
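
The idea of approximating a signal with ever more trigonometric terms can be sketched in a few lines of Python. The classic example is a square wave, whose Fourier series uses only odd sine harmonics; measured away from the jumps, the error shrinks as terms are added. The interval, term counts and tolerance below are illustrative choices, not from the article:

```python
import numpy as np

def square_wave_partial_sum(t, n_terms):
    """Approximate a +/-1 square wave by the first n_terms odd sine
    harmonics of its Fourier series: (4/pi) * sum of sin(k*t)/k."""
    total = np.zeros_like(t)
    for i in range(n_terms):
        k = 2 * i + 1                        # only odd harmonics appear
        total += (4 / np.pi) * np.sin(k * t) / k
    return total

# Sample one half-period, staying away from the jump points; here the
# square wave equals +1 and the partial sums converge uniformly.
t = np.linspace(0.5, np.pi - 0.5, 400)
target = np.ones_like(t)

errs = {}
for n in (1, 10, 100):
    errs[n] = float(np.max(np.abs(square_wave_partial_sum(t, n) - target)))
    print(f"{n:3d} terms -> max error {errs[n]:.4f}")
```

Near the discontinuities themselves the partial sums overshoot persistently (the Gibbs phenomenon), which is one symptom of the trouble Fourier analysis has with abrupt changes.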

But Fourier analysis has limitations. It works best when the original data has features that repeat periodically, and it has trouble with transient signals or data that shows abrupt changes, such as the spoken word. Often, we need to be able to change our analytical representation depending on the actual data, so that we can resolve more detail in specific parts of the data stream. In essence, we need a way to change scale at various points, and scale is at the heart of wavelets.

The following explanation is adapted from Dana Mackenzie's highly recommended article "Wavelets: Seeing the Forest and the Trees."

Consider how we view a landscape. If you're looking down from a jet airliner in summer, a forest appears as a solid canopy of green. If you're in a car driving by, however, you see individual trees. If you stop and move closer, you can make out individual branches and leaves. Up close, you may spot a dewdrop or an insect sitting on a leaf. With a magnifying glass, you can see structural details of the leaf and its veins.

As we get ever closer to an object, our view becomes narrower and we see finer and finer detail. In other words, as our scope becomes smaller, our resolution becomes greater.

Our eyes and mind adapt quickly to these changes in perspective, moving from the macro scale to the micro. Unfortunately, we can't apply this technique to a photograph or computerized digital image.

If you enlarged a picture of a forest (as if you were trying to get "closer" to a tree), all you'd see is a fuzzier image; you still wouldn't be able to make out the branch, the leaf or the dewdrop. Regardless of what you might see in the movies, no amount of "sharpening" or processing can help you see detail that hasn't already been encoded into the image. We can't see anything smaller than a pixel, and the camera can show us only one resolution at a time.

Wavelet algorithms allow us to record or process different areas of a scene at different levels of detail (resolution) and using greater amounts of compression (scale). In essence, they let us take new photos at closer range. If you look at a collection of data (also called a signal) from a broad perspective, you'll notice large-scale features; using a smaller, closer perspective, you can observe much smaller features.

Unlike the sinusoidal, endlessly repeating waves used in Fourier analysis, wavelets are often irregular and asymmetric, with values that die out to zero as they move farther from a central point. By decomposing a data stream into wavelets, it's often possible to preserve and even enhance specific local features of the signal and information about its timing.

Wavelets can take almost any shape, and much of the work being done in wavelet applications is based on finding appropriate wavelet functions that work for the type of data being processed.

The first wavelet function was a simple square waveform, developed by mathematician Alfred Haar in the early 1900s. Real advancement in the field, however, began in the mid-1980s, when Jean Morlet, an engineer at a French oil company, developed wavelet-transform analysis to interpret seismic data. He then teamed with physicist Alex Grossmann to formalize the mathematics.
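
Haar's square waveform leads to the simplest wavelet transform: one pass replaces each pair of samples with its average (the coarse, "zoomed-out" view) and half its difference (the fine detail). A minimal sketch in Python, with an illustrative four-sample signal:

```python
import numpy as np

def haar_step(signal):
    """One level of the Haar wavelet transform: replace each pair of
    samples with its average (coarse view) and half its difference
    (local detail)."""
    pairs = np.asarray(signal, dtype=float).reshape(-1, 2)
    averages = pairs.mean(axis=1)
    details = (pairs[:, 0] - pairs[:, 1]) / 2
    return averages, details

def haar_inverse(averages, details):
    """Rebuild the original signal exactly from averages and details."""
    out = np.empty(2 * len(averages))
    out[0::2] = averages + details
    out[1::2] = averages - details
    return out

x = [9.0, 7.0, 3.0, 5.0]                # illustrative four-sample signal
avg, det = haar_step(x)
print("averages:", avg)                 # coarse, zoomed-out view
print("details: ", det)                 # fine, local corrections
print("rebuilt: ", haar_inverse(avg, det))
```

Because the averages and details together reconstruct the signal exactly, compression schemes can discard or coarsely quantize the small detail coefficients while keeping most of the picture, which is the intuition behind the wavelet compression discussed below.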

Moving well beyond their geophysical roots, wavelets today are used for a variety of purposes, especially in the areas of digital imaging and compression.

Depending on your needs, for example, you can use different types of compression to reduce the size of a digital image according to how much detail or accuracy you are willing to give up. Wavelet-based compression can be much more efficient than older types. Wavelets also make possible incredibly fine detail and texture mapping, such as the lifelike rendering of hair in the animated film Monsters, Inc., while still keeping file sizes and processing times manageable.

Wavelets are central to a number of image-related compression standards, including the JPEG-2000 standard for color images and WSQ, the wavelet scalar quantization gray-scale fingerprint image compression algorithm that the FBI has used since 1993 for storing its fingerprint database.

The wavelet compression in the MPEG-4 digital video standard offers better-quality Web-based video than JPEG, yet it produces files that are a fraction of the size. MPEG-4 also has several quality layers, allowing servers to adjust their output dynamically according to needed bandwidth.

Wavelets are also being used for noise reduction and image-searching techniques. Scientists are now exploring the use of wavelets for various types of medical diagnostics and for weather forecasting as well.

Podcasting

Podcasting is the preparation and distribution of audio (and possibly other media) files for download to digital music or multimedia players, such as the iPod. A podcast can be easily created from a digital audio file. The podcaster first saves the file as an MP3 and then uploads it to the Web site of a service provider. The MP3 file gets its own URL, which is inserted into an RSS XML document as an enclosure within an XML tag.
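
The enclosure mechanism described above can be sketched with Python's standard library. Every name, URL and file size below is hypothetical:

```python
import xml.etree.ElementTree as ET

# Hypothetical episode URL; in a real feed this points at the uploaded MP3.
mp3_url = "http://example.com/podcasts/episode1.mp3"

rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "Example Podcast"
item = ET.SubElement(channel, "item")
ET.SubElement(item, "title").text = "Episode 1"
# The <enclosure> element inside the <item> is what carries the MP3's URL:
ET.SubElement(item, "enclosure", url=mp3_url,
              length="4834203", type="audio/mpeg")

feed_xml = ET.tostring(rss, encoding="unicode")
print(feed_xml)
```

A podcast client reads this feed, finds the enclosure's URL, and downloads the referenced file to the listener's player.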

Once a podcast has been created, it can be registered with content aggregators such as podcasting.net or ipodder.org. (A content aggregator is an individual or organization that gathers Web content from different online sources for reuse or resale.) People can browse through the categories or subscribe to specific podcast RSS feeds, which will download to their audio players automatically when they next connect. Although podcasts are generally audio files created for digital music players, the same technology can be used to prepare and transmit images, text and video. Various XML formats are making content easier to aggregate and redistribute.

Podcasting is similar in nature to RSS, which lets users subscribe to a set of feeds to view syndicated Web site content. With podcasting, however, you have a set of subscriptions that are checked regularly for updates; instead of reading the feeds on your computer screen, you listen to the new content on your iPod (or similar device).

mickey

A mickey is a unit of measurement for the speed and movement direction of a computer mouse. The speed of the mouse is the ratio between how many pixels the cursor moves on the screen and how many centimeters you move the mouse on the mouse pad. The directional movement is called the horizontal mickey count and the vertical mickey count. One mickey is approximately 1/200th of an inch.
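
The two definitions in this paragraph translate directly into a couple of helper functions. The 1/200-inch figure is the approximation given above; real hardware varies:

```python
def mickeys_to_inches(mickey_count, mickeys_per_inch=200):
    """One mickey is roughly 1/200 inch, so 200 mickeys ~ 1 inch.
    The exact ratio depends on the mouse hardware."""
    return mickey_count / mickeys_per_inch

def mouse_speed(pixels_on_screen, cm_on_pad):
    """Mouse speed as defined above: screen pixels traversed per
    centimeter of physical mouse travel."""
    return pixels_on_screen / cm_on_pad

print(mickeys_to_inches(400))   # 400 mickeys ~ 2 inches
print(mouse_speed(300, 2))      # 150 pixels per centimeter
```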

greenfield

In networking, a greenfield deployment is the installation and configuration of a network where none existed before, for example in a new office. A brownfield deployment, in contrast, is an upgrade or addition to an existing network and uses some legacy components. The terms come from the building industry, where undeveloped land (and especially unpolluted land) is described as greenfield and previously developed (often polluted and abandoned) land is described as brownfield.

DISCO

DISCO is a Microsoft technology for publishing and discovering Web Services. DISCO can define a document format along with an interrogation algorithm, making it possible to discover the Web Services exposed on a given server. DISCO makes it possible to discover the capabilities of each Web Service (via documentation) and how to interact with it. To publish a deployed Web Service using DISCO, you simply need to create a .disco file and place it in the vroot along with the other service-related configuration.
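
As a sketch of what such a file looks like, here is the general shape of a Visual Studio-style .disco document. The endpoint URL is hypothetical, and the exact schema should be checked against Microsoft's documentation:

```xml
<?xml version="1.0"?>
<discovery xmlns="http://schemas.xmlsoap.org/disco/">
  <!-- Hypothetical endpoint; the WSDL contract is what clients discover. -->
  <contractRef ref="http://example.com/MathService.asmx?WSDL"
               docRef="http://example.com/MathService.asmx"
               xmlns="http://schemas.xmlsoap.org/disco/scl/" />
</discovery>
```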

EDGE

EDGE (Enhanced Data GSM Environment), a faster version of the Global System for Mobile (GSM) wireless service, is designed to deliver data at rates up to 384 Kbps and enable the delivery of multimedia and other broadband applications to mobile phone and computer users. The EDGE standard is built on the existing GSM standard, using the same time-division multiple access (TDMA) frame structure and existing cell arrangements. The base stations can be updated with software. EDGE is regarded as an evolutionary standard on the way to Universal Mobile Telecommunications Service (UMTS).

PCI Express pumps up performance

In the past decade, PCI has served as the dominant I/O architecture for PCs and servers, carrying data generated by microprocessors, network adapters, graphics cards and other subsystems to which it is connected. However, as the speed and capabilities of computing components increase, PCI's bandwidth limitations and the inefficiencies of its parallel architecture increasingly have become bottlenecks to system performance.

PCI is a unidirectional parallel bus architecture in which multiple adapters must contend for available bus bandwidth. Although performance of the PCI interface has been improved over the years, problems with signal skew (when bits of data arrive at their destination too late), signal routing and the inability to lower the voltage or increase the frequency, strongly indicate that the architecture is running out of steam. Additional attempts to improve its performance would be costly and impractical. In response, a group of vendors, including some of the largest and most successful system developers in the industry, unveiled an I/O architecture dubbed PCI Express (initially called Third Generation I/O, or 3GIO).

PCI Express is a point-to-point switching architecture that creates high-speed, bidirectional links between a CPU and system I/O (the switch is connected to the CPU by a host bridge). Each of these links can encompass one or more "lanes" comprising four wires--two for transmitting data and two for receiving data. The design of these lanes enables the use of lower voltages (resulting in lower power usage), reduces electromagnetic emissions, eliminates signal skew, lowers costs through simpler design and generally improves performance.

In its initial implementation, PCI Express can yield transfer speeds of 2.5G bit/sec in each direction, on each lane. By contrast, the version of the PCI architecture that is most common today, PCI-X 1.0, offers 1G bit/sec in throughput. PCI Express cards are available in four- or eight-lane configurations (called x4 and x8). An x4 PCI Express card can provide as much as 20G bit/sec in throughput, while an x8 PCI Express card can offer up to 40G bit/sec in throughput.
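
The x4 and x8 figures follow directly from the per-lane rate once both directions are counted. A quick sanity check in Python, using the raw signaling rate only (line-encoding overhead is ignored here):

```python
# Sanity check of the throughput figures quoted above, taking the
# stated 2.5G bit/sec per lane, per direction, at face value.
PER_LANE_GBPS = 2.5
DIRECTIONS = 2          # each link carries traffic both ways at once

def aggregate_gbps(lanes):
    """Total raw bandwidth of an xN PCI Express link, both directions."""
    return lanes * PER_LANE_GBPS * DIRECTIONS

print(aggregate_gbps(4))   # x4 card: 20.0 Gbit/s
print(aggregate_gbps(8))   # x8 card: 40.0 Gbit/s
```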

Earlier attempts to create a new PCI architecture failed in part because they required so many changes to the system and application software. Drivers, utilities and management applications all would have to be rewritten. PCI Express developers removed the dependency on new operating system support, letting PCI-compatible drivers and applications run unchanged on PCI Express hardware.

A bus for the future

Developers are working on increasing the scalability of PCI Express. While current server and desktop systems support PCI Express adapters and graphics cards with up to eight lanes (x8), the architecture will support as many as 32 lanes (x32) in the future.

The first Fibre Channel host bus adapters were designed to support four lanes instead of eight lanes, in part because server developers had designed their systems with four-lane slots. As even more bandwidth is required, implementing an eight-lane design potentially could double the performance, provided there were no other bottlenecks in the system.

This scalability, along with the expected doubling of the speed of each lane to 5G bit/sec, should keep PCI Express a viable solution for designers for the foreseeable future.

PCI Express is a significant improvement over PCI and is well on its way to becoming the new standard for PCs, servers and more. Not only can it lower costs and improve reliability, but it can also significantly improve performance. Applications such as music and video streaming, video on demand, VoIP and data storage will benefit from these improvements.

Virtual Servers (2)

  Virtualized servers do all the good and bad things regular servers do. They boot up, power down, suspend, hang, and even crash. If a guest OS or a device driver it uses is buggy, the virtual PC will crater. But not the physical computer, and that's key.

  If your OS crashes or an application hangs, or even if you install a software fix that requires a reboot, nothing happens to the hardware. One virtual machine can fail over to another in a purely virtual sense or in a way that's closer to the real thing. Even if certain hardware devices have malfunctioned, so long as the fail-over target is configured to use a secondary network adapter and an alternate path to storage, the fail-over will work exactly as it would if the virtual PCs were physical PCs.

  In most cases, an enterprise management system will monitor and react to a virtual fail-over as if it were the real thing. Solutions such as HP OpenView see and interact with virtual servers the same way they do with physical ones. The reported configurations of the servers will change after they're virtualized, but it's entirely likely that the day-to-day management of your shop will experience little change.

  In addition, most virtualization systems bundle solution-specific management software, allowing an administrator to sit at a central console and manipulate all the virtual servers in an enterprise. It's quite an eye-opener to swap out a virtual Ethernet card without ever touching the hardware.

  A virtualization solution's management console gives you a degree of control over your virtual PCs that surpasses what administrators can do with traditional tools. From a central location, you can boot and shut down virtual PCs as needed. It's also possible to pause them, which harmlessly freezes them in their current state, or hibernate them, putting them in a deep freeze by saving their state to a file on disk. By overwriting the disk file, you can restore PCs from a backed-up state and roll back changes that rendered the guest inoperable, all from a terminal session.

  In environments with a mix of operating systems--a common condition that turns even simple consolidation into a messy affair--one solution would be to host each OS in its own VM. For example, on a PC server running one of VMware's virtualization solutions, you can run any combination of Windows 2003 Server, Windows 2000, Windows NT 4.0, various flavors of Linux, and FreeBSD. You can even use VMs to host different versions of the same OS. Linux software is infamous for dependence on specific versions and vendor distributions of Linux. Virtualization is the only way to run applications designed for Red Hat 7.2 and Suse 9.0 simultaneously on a single server.

  Virtualization is magnificent stuff, but it doesn't cure all ills. You can never create a virtual PC that outperforms the physical system underneath. You will learn much about your applications' system requirements from moving them to a virtual environment. They'll likely surprise you, either with how little of the original server they used--that's the typical case--or how piggish they are. If necessary, you can throttle the nasty ones down.

  And while one of the great benefits of virtualization is security--it's hard to accomplish much by cracking a system that doesn't exist--a virtualized PC can still be compromised. Fortunately, the cure is to overwrite the virtual PC's disk image with one that's known to be clean, but managing virtual servers still requires vigilance.

  Ultimately, hardware consolidation is only one reason to opt for server virtualization, and it has wide appeal. Still, depending on each department's unique needs, IT managers are sure to find innumerable ways that virtualization can benefit your enterprise. (The End)

Extensible Stylesheet Language (XSL) (1)

  Markup languages have been around since 1969, when IBM's Generalized Markup Language (GML) first appeared. GML was the grandfather of Hypertext Markup Language (HTML), which makes the Web work, and of Extensible Markup Language (XML), which has become the primary means of defining, storing and formatting data in a multitude of areas, including documents, forms and databases.

  At the heart of these languages is a system called tagging, where text or data is marked by indicators enclosed in angle brackets, always at the beginning (<tag>) and often at the end (</tag>).

  HTML pages use standardized, predefined tags. For example, <p> means a paragraph, <h1> means a header, and <b> followed by </b> means the enclosed text is to be bold. Web browsers interpret these tags and format the text accordingly when they display the pages on-screen.

  With XML, however, programmers can make up tags, and browsers have no built-in way of knowing what the tags mean or what to do about them. Further complicating matters, we can use tags to describe data itself (content) or to give formatting instructions (how to display or arrange an element).

  For instance, <table> could refer to a matrix-like arrangement of items on an HTML page, or it could signify a piece of furniture. This flexibility makes XML powerful, but it blurs the distinction between content and format.
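
The ambiguity is easy to see side by side. Both fragments below are well-formed, and the furniture example is purely illustrative:

```xml
<!-- In HTML, <table> is a layout instruction the browser understands: -->
<table><tr><td>cell</td></tr></table>

<!-- In XML, <table> can just as well describe a piece of furniture;
     nothing tells a browser how to display it: -->
<table material="oak" legs="4">dining table</table>
```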

  In order to display XML documents usefully, we need a mechanism that identifies and describes the meaning of formatting tags and shows how they affect other parts of the document. Past mechanisms have included the Document Style Semantics and Specification Language, and Cascading Style Sheets. Both have now been extended and superseded by Extensible Stylesheet Language, a standard recommended by the World Wide Web Consortium (W3C) in 2001.

  Extensible Stylesheet Language (XSL) is a family of languages and specifications designed for laying out and presenting XML documents and data in specified formats appropriate for the final output medium or device.

